Weak convergence (Hilbert space)
In mathematics, weak convergence in a Hilbert space is convergence of a sequence of points in the weak topology.
Definition
A sequence of points $(x_{n})$ in a Hilbert space H is said to converge weakly to a point x in H if
$\langle x_{n},y\rangle \to \langle x,y\rangle $
for all y in H. Here, $\langle \cdot ,\cdot \rangle $ is understood to be the inner product on the Hilbert space. The notation
$x_{n}\rightharpoonup x$
is sometimes used to denote this kind of convergence.
Properties
• If a sequence converges strongly (that is, if it converges in norm), then it converges weakly as well.
• Since every closed and bounded set is weakly relatively compact (its closure in the weak topology is compact), every bounded sequence $x_{n}$ in a Hilbert space H contains a weakly convergent subsequence. Note that closed and bounded sets are not in general weakly compact in Hilbert spaces: the set consisting of an orthonormal basis of an infinite-dimensional Hilbert space is closed and bounded but not weakly compact, since it does not contain 0, the weak limit of the basis vectors. However, bounded and weakly closed sets are weakly compact, so in particular every convex bounded closed set is weakly compact.
• As a consequence of the principle of uniform boundedness, every weakly convergent sequence is bounded.
• The norm is (sequentially) weakly lower-semicontinuous: if $x_{n}$ converges weakly to x, then
$\Vert x\Vert \leq \liminf _{n\to \infty }\Vert x_{n}\Vert ,$
and this inequality is strict whenever the convergence is not strong. For example, infinite orthonormal sequences converge weakly to zero, as demonstrated below.
• If $x_{n}\rightharpoonup x$ and $\lVert x_{n}\rVert \to \lVert x\rVert $, then $x_{n}\to x$ strongly:
$\langle x-x_{n},x-x_{n}\rangle =\langle x,x\rangle +\langle x_{n},x_{n}\rangle -\langle x_{n},x\rangle -\langle x,x_{n}\rangle \rightarrow 0.$
• If the Hilbert space is finite-dimensional, i.e. a Euclidean space, then weak and strong convergence are equivalent.
Example
The Hilbert space $L^{2}[0,2\pi ]$ is the space of the square-integrable functions on the interval $[0,2\pi ]$ equipped with the inner product defined by
$\langle f,g\rangle =\int _{0}^{2\pi }f(x)\cdot g(x)\,dx,$
(see Lp space). The sequence of functions $f_{1},f_{2},\ldots $ defined by
$f_{n}(x)=\sin(nx)$
converges weakly to the zero function in $L^{2}[0,2\pi ]$, as the integral
$\int _{0}^{2\pi }\sin(nx)\cdot g(x)\,dx$
tends to zero for every square-integrable function $g$ on $[0,2\pi ]$ as $n$ goes to infinity, by the Riemann–Lebesgue lemma; that is,
$\langle f_{n},g\rangle \to \langle 0,g\rangle =0.$
Although $f_{n}$ has an increasing number of zeros in $[0,2\pi ]$ as $n$ goes to infinity, it is of course not equal to the zero function for any $n$. Note that $f_{n}$ does not converge to 0 in the $L^{\infty }$ or $L^{2}$ norms. This dissimilarity is one of the reasons why this type of convergence is considered to be "weak."
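This behaviour can be seen numerically. The sketch below is an illustration, not part of the argument; the grid size and the test function $g(x)=x$ are arbitrary choices. It approximates the inner products $\langle f_{n},g\rangle $ by Riemann sums and shows them shrinking while the norms $\Vert f_{n}\Vert ={\sqrt {\pi }}$ stay constant:

```python
import numpy as np

# Discretize [0, 2π); a left Riemann sum approximates the L² inner product.
N = 200000
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
h = 2.0 * np.pi / N

def inner(u, v):
    # approximate ∫_0^{2π} u(t) v(t) dt
    return h * np.dot(u, v)

g = x  # an arbitrary square-integrable test function, g(x) = x
ns = [1, 10, 100]
# <f_n, g> = ∫ sin(nx)·x dx = -2π/n, which tends to 0 as n grows
ips = [inner(np.sin(n * x), g) for n in ns]
# ||f_n||² = ∫ sin²(nx) dx = π for every n, so there is no norm convergence
norms = [np.sqrt(inner(np.sin(n * x), np.sin(n * x))) for n in ns]
```

The inner products come out near $-2\pi $, $-2\pi /10$, and $-2\pi /100$, while each norm stays at ${\sqrt {\pi }}\approx 1.77$.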
Weak convergence of orthonormal sequences
Consider an orthonormal sequence $e_{n}$, that is,
$\langle e_{n},e_{m}\rangle =\delta _{mn}$
where $\delta _{mn}$ equals one if m = n and zero otherwise. We claim that if the sequence is infinite, then it converges weakly to zero. A simple proof is as follows. For x ∈ H, we have
$\sum _{n}|\langle e_{n},x\rangle |^{2}\leq \|x\|^{2}$ (Bessel's inequality)
where equality holds when {en} is a Hilbert space basis. Therefore
$|\langle e_{n},x\rangle |^{2}\rightarrow 0$ (since the series above converges, its terms must go to zero)
i.e.
$\langle e_{n},x\rangle \rightarrow 0.$
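In coordinates this is easy to see. The numerical sketch below is illustrative only; the vector $x=(1,1/2,1/3,\ldots )$ is an arbitrary choice, truncated to finitely many coordinates. Against the standard orthonormal basis $e_{n}$ of $\ell ^{2}$, the coefficient $\langle e_{n},x\rangle $ is just the $n$-th coordinate $1/n$, and Bessel's inequality holds with equality since $\{e_{n}\}$ is a basis:

```python
import numpy as np

# Truncate x = (1, 1/2, 1/3, ...) ∈ ℓ² to its first K coordinates.
K = 100000
xcoords = 1.0 / np.arange(1, K + 1)

# Against the standard basis, <e_n, x> is the n-th coordinate of x,
# so the coefficients 1/n tend to 0.
coeffs = xcoords
# Bessel's inequality: Σ |<e_n, x>|² ≤ ||x||², with equality for a basis;
# here both sides approach π²/6 as K → ∞.
bessel_sum = np.sum(coeffs ** 2)
norm_sq = np.dot(xcoords, xcoords)
```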
Banach–Saks theorem
The Banach–Saks theorem states that every bounded sequence $x_{n}$ in a Hilbert space contains a subsequence $x_{n_{k}}$ and a point x such that
${\frac {1}{N}}\sum _{k=1}^{N}x_{n_{k}}$
converges strongly to x as N goes to infinity.
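For the orthonormal-type sequence $f_{n}(x)=\sin(nx)$ from the example above, no subsequence extraction is even needed: by orthogonality, $\Vert {\frac {1}{N}}\sum _{k=1}^{N}\sin(kx)\Vert ^{2}=\pi /N\to 0$, so the Cesàro means already converge strongly to 0. A small numerical sketch (the grid size is an arbitrary choice):

```python
import numpy as np

# Discretize [0, 2π) and approximate the L² norm by a Riemann sum.
M = 20000
x = np.linspace(0.0, 2.0 * np.pi, M, endpoint=False)
h = 2.0 * np.pi / M

def l2_norm(u):
    return np.sqrt(h * np.dot(u, u))

# Cesàro means (1/N) Σ_{k=1}^N sin(kx); their norms equal sqrt(π/N)
# by orthogonality, so they shrink to 0 even though ||sin(nx)|| ≡ sqrt(π).
means = {N: np.mean([np.sin(k * x) for k in range(1, N + 1)], axis=0)
         for N in (10, 100)}
norms = {N: l2_norm(means[N]) for N in (10, 100)}
```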
Generalizations
See also: Weak topology and Weak topology (polar topology)
The definition of weak convergence can be extended to Banach spaces. A sequence of points $(x_{n})$ in a Banach space B is said to converge weakly to a point x in B if
$f(x_{n})\to f(x)$
for any bounded linear functional $f$ defined on $B$, that is, for any $f$ in the dual space $B'$. If $B$ is an Lp space on $\Omega $ and $p<+\infty $, then any such $f$ has the form $f(x)=\int _{\Omega }x\,y\,d\mu $ for some $y\in L^{q}(\Omega )$, where $\mu $ is the measure on $\Omega $ and ${\frac {1}{p}}+{\frac {1}{q}}=1$ are conjugate indices.
In the case where $B$ is a Hilbert space, then, by the Riesz representation theorem,
$f(\cdot )=\langle \cdot ,y\rangle $
for some $y$ in $B$, so one obtains the Hilbert space definition of weak convergence.
See also
• Dual topology
• Operator topologies – Topologies on the set of operators on a Hilbert space
Direct sum of groups
In mathematics, a group G is called the direct sum[1][2] of two normal subgroups with trivial intersection if it is generated by the subgroups. In abstract algebra, this method of construction of groups can be generalized to direct sums of vector spaces, modules, and other structures; see the article direct sum of modules for more information. A group which can be expressed as a direct sum of non-trivial subgroups is called decomposable, and if a group cannot be expressed as such a direct sum then it is called indecomposable.
Definition
A group G is called the direct sum[1][2] of two subgroups H1 and H2 if
• H1 and H2 are both normal subgroups of G,
• the subgroups H1 and H2 have trivial intersection (i.e., having only the identity element $e$ of G in common),
• G = ⟨H1, H2⟩; in other words, G is generated by the subgroups H1 and H2.
More generally, G is called the direct sum of a finite set of subgroups {Hi} if
• each Hi is a normal subgroup of G,
• each Hi has trivial intersection with the subgroup ⟨{Hj : j ≠ i}⟩,
• G = ⟨{Hi}⟩; in other words, G is generated by the subgroups {Hi}.
If G is the direct sum of subgroups H and K then we write G = H + K, and if G is the direct sum of a set of subgroups {Hi} then we often write G = ΣHi. Loosely speaking, a direct sum is isomorphic to a weak direct product of subgroups.
Properties
If G = H + K, then it can be proven that:
• for all h in H, k in K, we have that h ∗ k = k ∗ h
• for all g in G, there exist unique h in H and k in K such that g = h ∗ k
• There is a cancellation of the sum in a quotient; so that (H + K)/K is isomorphic to H
The above assertions can be generalized to the case of G = ΣHi, where {Hi} is a finite set of subgroups:
• if i ≠ j, then for all hi in Hi, hj in Hj, we have that hi ∗ hj = hj ∗ hi
• for each g in G, there exists a unique set of elements hi in Hi such that
g = h1 ∗ h2 ∗ ... ∗ hi ∗ ... ∗ hn
• There is a cancellation of the sum in a quotient; so that ((ΣHi) + K)/K is isomorphic to ΣHi.
Note the similarity with the direct product, where each g can be expressed uniquely as
g = (h1,h2, ..., hi, ..., hn).
Since hi ∗ hj = hj ∗ hi for all i ≠ j, it follows that the product of elements of a direct sum corresponds to the product of the corresponding elements of the direct product; thus, for finite sets of subgroups, ΣHi is isomorphic to the direct product ×{Hi}.
Direct summand
Given a group $G$, we say that a subgroup $H$ is a direct summand of $G$ if there exists another subgroup $K$ of $G$ such that $G=H+K$.
In abelian groups, if $H$ is a divisible subgroup of $G$, then $H$ is a direct summand of $G$.
Examples
• If we take $ G=\prod _{i\in I}H_{i}$ with $I$ finite, then $G$ is the direct sum of $H_{i_{0}}$ and $\prod _{i\not =i_{0}}H_{i}$ for any choice of $i_{0}\in I$.
• If $H$ is a divisible subgroup of an abelian group $G$ then there exists another subgroup $K$ of $G$ such that $G=K+H$.
• If $G$ also has a vector space structure and $H$ is a subspace, then $G$ can be written as a direct sum of $H$ and another subspace $K$ that will be isomorphic to the quotient $G/H$.
Equivalence of decompositions into direct sums
In the decomposition of a finite group into a direct sum of indecomposable subgroups the embedding of the subgroups is not unique. For example, in the Klein group $V_{4}\cong C_{2}\times C_{2}$ we have that
$V_{4}=\langle (0,1)\rangle +\langle (1,0)\rangle ,$ and
$V_{4}=\langle (1,1)\rangle +\langle (1,0)\rangle .$
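Both decompositions can be checked mechanically. The sketch below is an illustrative brute-force check; normality is automatic since $V_{4}$ is abelian, so it only verifies trivial intersection and generation for each pair of subgroups:

```python
from itertools import product

# The Klein group V4 = C2 × C2 with componentwise addition mod 2.
V4 = set(product((0, 1), repeat=2))
add = lambda a, b: ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def generated(h1, h2):
    # subgroup generated by h1 ∪ h2: close the union under the operation
    s = set(h1) | set(h2)
    while True:
        new = {add(a, b) for a in s for b in s}
        if new <= s:
            return s
        s |= new

H = {(0, 0), (1, 0)}    # <(1,0)>
K1 = {(0, 0), (0, 1)}   # <(0,1)>
K2 = {(0, 0), (1, 1)}   # <(1,1)>

# V4 = <(0,1)> + <(1,0)> and V4 = <(1,1)> + <(1,0)>:
ok = all(generated(H, K) == V4 and H & K == {(0, 0)} for K in (K1, K2))
```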
However, the Remak–Krull–Schmidt theorem states that given a finite group G = ΣAi = ΣBj, where each Ai and each Bj is non-trivial and indecomposable, the two sums have equal terms up to reordering and isomorphism.
The Remak–Krull–Schmidt theorem fails for infinite groups; so in the case of infinite G = H + K = L + M, even when all subgroups are non-trivial and indecomposable, we cannot conclude that H is isomorphic to either L or M.
Generalization to sums over infinite sets
To describe the above properties in the case where G is the direct sum of an infinite (perhaps uncountable) set of subgroups, more care is needed.
If g is an element of the cartesian product Π{Hi} of a set of groups, let gi be the ith element of g in the product. The external direct sum of a set of groups {Hi} (written as ΣE{Hi}) is the subset of Π{Hi}, where, for each element g of ΣE{Hi}, gi is the identity $e_{H_{i}}$ for all but a finite number of gi (equivalently, only a finite number of gi are not the identity). The group operation in the external direct sum is pointwise multiplication, as in the usual direct product.
This subset does indeed form a group, and for a finite set of groups {Hi} the external direct sum is equal to the direct product.
If G = ΣHi, then G is isomorphic to ΣE{Hi}. Thus, in a sense, the direct sum is an "internal" external direct sum. For each element g in G, there is a unique finite set S and a unique set {hi ∈ Hi : i ∈ S} such that g = Π {hi : i in S}.
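A computational sketch of the finite-support condition (illustrative only; it takes each $H_{i}$ to be the integers under addition, and represents an element of the external direct sum as a dict that omits identity coordinates):

```python
# An element of Σ_E {H_i} with H_i = (Z, +): a dict i ↦ h_i listing only the
# finitely many coordinates that differ from the identity 0.
def ds_op(g1, g2):
    # pointwise group operation; drop coordinates that return to the identity
    out = {}
    for i in set(g1) | set(g2):
        v = g1.get(i, 0) + g2.get(i, 0)
        if v != 0:
            out[i] = v
    return out

a = {0: 1, 5: 2}        # nonzero only in coordinates 0 and 5
b = {5: -2, 7: 3}
c = ds_op(a, b)         # coordinate 5 cancels; the support stays finite
```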
See also
• Direct sum
• Coproduct
• Free product
• Direct sum of topological groups
References
1. Saunders MacLane, Homology. Springer, Berlin; Academic Press, New York, 1963.
2. László Fuchs, Infinite Abelian Groups.
Weak duality
In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0. This means that any feasible value of the dual (minimization) problem is greater than or equal to any feasible value of the associated primal (maximization) problem. This is in contrast to strong duality, which holds only in certain cases.[1]
Uses
Many primal-dual approximation algorithms are based on the principle of weak duality.[2]
Weak duality theorem
The primal problem:
Maximize cTx subject to A x ≤ b, x ≥ 0;
The dual problem:
Minimize bTy subject to ATy ≥ c, y ≥ 0.
The weak duality theorem states that cTx ≤ bTy.
Namely, if $(x_{1},x_{2},\ldots ,x_{n})$ is a feasible solution for the primal maximization linear program and $(y_{1},y_{2},\ldots ,y_{m})$ is a feasible solution for the dual minimization linear program, then the weak duality theorem can be stated as $\sum _{j=1}^{n}c_{j}x_{j}\leq \sum _{i=1}^{m}b_{i}y_{i}$, where $c_{j}$ and $b_{i}$ are the coefficients of the respective objective functions.
Proof: cTx = xTc ≤ xTATy ≤ bTy. The first inequality holds because ATy ≥ c and x ≥ 0, and the second because Ax ≤ b and y ≥ 0.
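The inequality chain can be checked on concrete data. The sketch below uses hand-picked numbers for illustration, not a solver run; it verifies feasibility of a primal point x and a dual point y and compares the objective values:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])
b = np.array([4.0, 6.0])
c = np.array([2.0, 3.0])

x = np.array([1.0, 1.0])   # primal-feasible: Ax = [3, 4] ≤ b and x ≥ 0
y = np.array([1.0, 1.0])   # dual-feasible:  Aᵀy = [4, 3] ≥ c and y ≥ 0
assert np.all(A @ x <= b) and np.all(x >= 0)
assert np.all(A.T @ y >= c) and np.all(y >= 0)

primal_val = c @ x         # cᵀx = 5
middle = x @ (A.T @ y)     # xᵀAᵀy = 7, sandwiched between the two
dual_val = b @ y           # bᵀy = 10
```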
Generalizations
More generally, if $x$ is a feasible solution for the primal maximization problem and $y$ is a feasible solution for the dual minimization problem, then weak duality implies $f(x)\leq g(y)$ where $f$ and $g$ are the objective functions for the primal and dual problems respectively.
See also
• Convex optimization
• Max–min inequality
References
1. Boţ, Radu Ioan; Grad, Sorin-Mihai; Wanka, Gert (2009), Duality in Vector Optimization, Berlin: Springer-Verlag, p. 1, doi:10.1007/978-3-642-02886-1, ISBN 978-3-642-02885-4, MR 2542013.
2. Gonzalez, Teofilo F. (2007), Handbook of Approximation Algorithms and Metaheuristics, CRC Press, p. 2-12, ISBN 9781420010749.
Weak equivalence (homotopy theory)
In mathematics, a weak equivalence is a notion from homotopy theory that in some sense identifies objects that have the same "shape". This notion is formalized in the axiomatic definition of a model category.
A model category is a category with classes of morphisms called weak equivalences, fibrations, and cofibrations, satisfying several axioms. The associated homotopy category of a model category has the same objects, but the morphisms are changed in order to make the weak equivalences into isomorphisms. It is a useful observation that the associated homotopy category depends only on the weak equivalences, not on the fibrations and cofibrations.
Topological spaces
Model categories were defined by Quillen as an axiomatization of homotopy theory that applies to topological spaces, but also to many other categories in algebra and geometry. The example that started the subject is the category of topological spaces with Serre fibrations as fibrations and weak homotopy equivalences as weak equivalences (the cofibrations for this model structure can be described as the retracts of relative cell complexes X ⊆ Y[1]). By definition, a continuous mapping f: X → Y of spaces is called a weak homotopy equivalence if the induced function on sets of path components
$f_{*}\colon \pi _{0}(X)\to \pi _{0}(Y)$
is bijective, and for every point x in X and every n ≥ 1, the induced homomorphism
$f_{*}\colon \pi _{n}(X,x)\to \pi _{n}(Y,f(x))$
on homotopy groups is bijective. (For X and Y path-connected, the first condition is automatic, and it suffices to state the second condition for a single point x in X.)
For simply connected topological spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the induced homomorphism $f_{*}\colon H_{n}(X,\mathbb {Z} )\to H_{n}(Y,\mathbb {Z} )$ on singular homology groups is bijective for all n.[2] Likewise, for simply connected spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the pullback homomorphism $f^{*}\colon H^{n}(Y,\mathbb {Z} )\to H^{n}(X,\mathbb {Z} )$ on singular cohomology is bijective for all n.[3]
Example: Let X be the set of natural numbers {0, 1, 2, ...} and let Y be the set {0} ∪ {1, 1/2, 1/3, ...}, both with the subspace topology from the real line. Define f: X → Y by mapping 0 to 0 and n to 1/n for positive integers n. Then f is continuous, and in fact a weak homotopy equivalence, but it is not a homotopy equivalence.
The homotopy category of topological spaces (obtained by inverting the weak homotopy equivalences) greatly simplifies the category of topological spaces. Indeed, this homotopy category is equivalent to the category of CW complexes with morphisms being homotopy classes of continuous maps.
Many other model structures on the category of topological spaces have also been considered. For example, in the Strøm model structure on topological spaces, the fibrations are the Hurewicz fibrations and the weak equivalences are the homotopy equivalences.[4]
Chain complexes
Some other important model categories involve chain complexes. Let A be a Grothendieck abelian category, for example the category of modules over a ring or the category of sheaves of abelian groups on a topological space. Define a category C(A) with objects the complexes X of objects in A,
$\cdots \to X_{1}\to X_{0}\to X_{-1}\to \cdots ,$
and morphisms the chain maps. (It is equivalent to consider "cochain complexes" of objects of A, where the numbering is written as
$\cdots \to X^{-1}\to X^{0}\to X^{1}\to \cdots ,$
simply by defining Xi = X−i.)
The category C(A) has a model structure in which the cofibrations are the monomorphisms and the weak equivalences are the quasi-isomorphisms.[5] By definition, a chain map f: X → Y is a quasi-isomorphism if the induced homomorphism
$f_{*}\colon H_{n}(X)\to H_{n}(Y)$
on homology is an isomorphism for all integers n. (Here $H_{n}(X)$ is the object of A defined as the kernel of $X_{n}\to X_{n-1}$ modulo the image of $X_{n+1}\to X_{n}$.) The resulting homotopy category is called the derived category D(A).
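Over a field the homology objects reduce to a rank computation: $\dim H_{n}=\dim \ker(X_{n}\to X_{n-1})-\operatorname {rank} (X_{n+1}\to X_{n})$. A small sketch (the simplicial circle, i.e. the boundary of a triangle, is an arbitrary illustrative choice):

```python
import numpy as np

# Chain complex of the triangle boundary over the rationals:
# C_1 = Q³ (edges) → C_0 = Q³ (vertices), all other terms zero.
d1 = np.array([[-1.0,  0.0,  1.0],
               [ 1.0, -1.0,  0.0],
               [ 0.0,  1.0, -1.0]])
rank_d1 = np.linalg.matrix_rank(d1)

betti0 = 3 - rank_d1                 # dim H_0 = dim C_0 − rank d_1
betti1 = (3 - rank_d1) - 0           # dim ker d_1 − rank d_2 (d_2 = 0)
# betti0 = betti1 = 1: the homology of a circle, as expected.
```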
Trivial fibrations and trivial cofibrations
In any model category, a fibration that is also a weak equivalence is called a trivial (or acyclic) fibration. A cofibration that is also a weak equivalence is called a trivial (or acyclic) cofibration.
Notes
1. Hovey (1999), Definition 2.4.3.
2. Hatcher (2002), Theorem 4.32.
3. Is there the Whitehead theorem for cohomology theory?
4. Strøm (1972).
5. Beke (2000), Proposition 3.13.
References
• Beke, Tibor (2000), "Sheafifiable homotopy model categories", Mathematical Proceedings of the Cambridge Philosophical Society, 129: 447–473, arXiv:math/0102087, Bibcode:2000MPCPS.129..447B, doi:10.1017/S0305004100004722, MR 1780498
• Hatcher, Allen (2002), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0, MR 1867354
• Hovey, Mark (1999), Model Categories (PDF), American Mathematical Society, ISBN 0-8218-1359-5, MR 1650134
• Strøm, Arne (1972), "The homotopy category is a homotopy category", Archiv der Mathematik, 23: 435–441, doi:10.1007/BF01304912, MR 0321082
Four exponentials conjecture
In mathematics, specifically the field of transcendental number theory, the four exponentials conjecture is a conjecture which, given the right conditions on the exponents, would guarantee the transcendence of at least one of four exponentials. The conjecture, along with two related, stronger conjectures, is at the top of a hierarchy of conjectures and theorems concerning the arithmetic nature of a certain number of values of the exponential function.
Statement
If x1, x2 and y1, y2 are two pairs of complex numbers, with each pair being linearly independent over the rational numbers, then at least one of the following four numbers is transcendental:
$e^{x_{1}y_{1}},e^{x_{1}y_{2}},e^{x_{2}y_{1}},e^{x_{2}y_{2}}.$
An alternative way of stating the conjecture is in terms of logarithms. For 1 ≤ i, j ≤ 2 let λij be complex numbers such that exp(λij) are all algebraic. Suppose λ11 and λ12 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers. Then
$\lambda _{11}\lambda _{22}\neq \lambda _{12}\lambda _{21}.\,$
An equivalent formulation in terms of linear algebra is the following. Let M be the 2×2 matrix
$M={\begin{pmatrix}\lambda _{11}&\lambda _{12}\\\lambda _{21}&\lambda _{22}\end{pmatrix}},$
where exp(λij) is algebraic for 1 ≤ i, j ≤ 2. Suppose the two rows of M are linearly independent over the rational numbers, and the two columns of M are linearly independent over the rational numbers. Then the rank of M is 2.
While a 2×2 matrix having linearly independent rows and columns usually means it has rank 2, in this case we require linear independence over a smaller field so the rank isn't forced to be 2. For example, the matrix
${\begin{pmatrix}1&\pi \\\pi &\pi ^{2}\end{pmatrix}}$
has rows and columns that are linearly independent over the rational numbers, since π is irrational. But the rank of the matrix is 1. So in this case the conjecture would imply that at least one of $e$, $e^{\pi }$, and $e^{\pi ^{2}}$ is transcendental (which in this case is already known, since e is transcendental).
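The rank claim is easy to confirm numerically (an illustration only, using floating-point rank via SVD): the second row is $\pi $ times the first, so over the reals the matrix has rank 1 despite its rows and columns being linearly independent over the rationals.

```python
import numpy as np

# The matrix from the example: row 2 = π · row 1, so the real rank is 1.
M = np.array([[1.0,      np.pi],
              [np.pi, np.pi ** 2]])
rank = np.linalg.matrix_rank(M)
```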
History
The conjecture was considered in the early 1940s by Atle Selberg who never formally stated the conjecture.[1] A special case of the conjecture is mentioned in a 1944 paper of Leonidas Alaoglu and Paul Erdős who suggest that it had been considered by Carl Ludwig Siegel.[2] An equivalent statement was first mentioned in print by Theodor Schneider who set it as the first of eight important, open problems in transcendental number theory in 1957.[3]
The related six exponentials theorem was first explicitly mentioned in the 1960s by Serge Lang[4] and Kanakanahalli Ramachandra,[5] and both also explicitly conjectured the above result.[6] Indeed, after proving the six exponentials theorem Lang mentions the difficulty in dropping the number of exponents from six to four — the proof used for six exponentials "just misses" when one tries to apply it to four.
Corollaries
Using Euler's identity this conjecture implies the transcendence of many numbers involving e and π. For example, taking x1 = 1, x2 = √2, y1 = iπ, and y2 = iπ√2, the conjecture—if true—implies that one of the following four numbers is transcendental:
$e^{i\pi },e^{i\pi {\sqrt {2}}},e^{i\pi {\sqrt {2}}},e^{2i\pi }.$
The first of these is just −1, and the fourth is 1, so the conjecture implies that $e^{i\pi {\sqrt {2}}}$ is transcendental (which is already known, as a consequence of the Gelfond–Schneider theorem).
An open problem in number theory settled by the conjecture is the question of whether there exists a non-integer real number t such that both $2^{t}$ and $3^{t}$ are integers, or indeed such that $a^{t}$ and $b^{t}$ are both integers for some pair of integers a and b that are multiplicatively independent over the integers. Values of t such that $2^{t}$ is an integer are all of the form $t=\log _{2}m$ for some integer m, while for $3^{t}$ to be an integer, t must be of the form $t=\log _{3}n$ for some integer n. By setting x1 = 1, x2 = t, y1 = log(2), and y2 = log(3), the four exponentials conjecture implies that if t is irrational then one of the following four numbers is transcendental:
$2,3,2^{t},3^{t}.\,$
So if $2^{t}$ and $3^{t}$ are both integers then the conjecture implies that t must be a rational number. Since the only rational numbers t for which $2^{t}$ is also rational are the integers, this implies that there are no non-integer real numbers t such that both $2^{t}$ and $3^{t}$ are integers. It is this consequence, for any two primes (not just 2 and 3), that Alaoglu and Erdős desired in their paper, as it would imply the conjecture that the quotient of two consecutive colossally abundant numbers is prime, extending Ramanujan's results on the quotients of consecutive superior highly composite numbers.[7]
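A quick float sanity check of this corollary (numerics only, of course not a proof; the sample values of m are arbitrary): for non-integer $t=\log _{2}m$, the value $3^{t}$ stays visibly far from every integer.

```python
import math

for m in (3, 5, 6, 7, 9, 10):
    t = math.log2(m)          # non-integer t with 2^t = m an integer
    val = 3.0 ** t
    # 3^t misses the nearest integer by a clearly nonzero margin
    assert abs(val - round(val)) > 1e-6, (m, val)
```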
Sharp four exponentials conjecture
The four exponentials conjecture reduces the pair and triplet of complex numbers in the hypotheses of the six exponentials theorem to two pairs. It is conjectured that this is also possible with the sharp six exponentials theorem, and this is the sharp four exponentials conjecture.[8] Specifically, this conjecture claims that if x1, x2, and y1, y2 are two pairs of complex numbers with each pair being linearly independent over the rational numbers, and if βij are four algebraic numbers for 1 ≤ i, j ≤ 2 such that the following four numbers are algebraic:
$e^{x_{1}y_{1}-\beta _{11}},e^{x_{1}y_{2}-\beta _{12}},e^{x_{2}y_{1}-\beta _{21}},e^{x_{2}y_{2}-\beta _{22}},$
then $x_{i}y_{j}=\beta _{ij}$ for 1 ≤ i, j ≤ 2. So all four exponentials are in fact 1.
This conjecture implies both the sharp six exponentials theorem, which requires a third x value, and the as yet unproven sharp five exponentials conjecture that requires a further exponential to be algebraic in its hypotheses.
Strong four exponentials conjecture
The strongest result that has been conjectured in this circle of problems is the strong four exponentials conjecture.[9] This result would imply both aforementioned conjectures concerning four exponentials as well as all the five and six exponentials conjectures and theorems, and all the three exponentials conjectures detailed below. The statement of this conjecture deals with the vector space over the algebraic numbers generated by 1 and all logarithms of non-zero algebraic numbers, denoted here as L∗. So L∗ is the set of all complex numbers of the form
$\beta _{0}+\sum _{i=1}^{n}\beta _{i}\log \alpha _{i},$
for some n ≥ 0, where all the βi and αi are algebraic and every branch of the logarithm is considered. The statement of the strong four exponentials conjecture is then as follows. Let x1, x2, and y1, y2 be two pairs of complex numbers with each pair being linearly independent over the algebraic numbers, then at least one of the four numbers xi yj for 1 ≤ i, j ≤ 2 is not in L∗.
Three exponentials conjecture
The four exponentials conjecture rules out a special case of non-trivial, homogeneous, quadratic relations between logarithms of algebraic numbers. But a conjectural extension of Baker's theorem implies that there should be no non-trivial algebraic relations between logarithms of algebraic numbers at all, homogeneous or not. One case of non-homogeneous quadratic relations is covered by the still open three exponentials conjecture.[10] In its logarithmic form it is the following conjecture. Let λ1, λ2, and λ3 be any three logarithms of algebraic numbers and γ be a non-zero algebraic number, and suppose that λ1λ2 = γλ3. Then λ1λ2 = γλ3 = 0.
The exponential form of this conjecture is the following. Let x1, x2, and y be non-zero complex numbers and let γ be a non-zero algebraic number. Then at least one of the following three numbers is transcendental:
$e^{x_{1}y},e^{x_{2}y},e^{\gamma x_{1}/x_{2}}.$
There is also a sharp three exponentials conjecture which claims that if x1, x2, and y are non-zero complex numbers and α, β1, β2, and γ are algebraic numbers such that the following three numbers are algebraic
$e^{x_{1}y-\beta _{1}},e^{x_{2}y-\beta _{2}},e^{(\gamma x_{1}/x_{2})-\alpha },$
then either x2y = β2 or γx1 = αx2.
The strong three exponentials conjecture meanwhile states that if x1, x2, and y are non-zero complex numbers with x1y, x2y, and x1/x2 all transcendental, then at least one of the three numbers x1y, x2y, x1/x2 is not in L∗.
As with the other results in this family, the strong three exponentials conjecture implies the sharp three exponentials conjecture which implies the three exponentials conjecture. However, the strong and sharp three exponentials conjectures are implied by their four exponentials counterparts, bucking the usual trend. And the three exponentials conjecture is neither implied by nor implies the four exponentials conjecture.
The three exponentials conjecture, like the sharp five exponentials conjecture, would imply the transcendence of $e^{\pi ^{2}}$ by letting (in the logarithmic version) λ1 = iπ, λ2 = −iπ, and γ = 1.
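The deduction can be spelled out in a few lines; the following is a sketch of the logarithmic-form argument, assuming the conjecture holds:

```latex
% With \lambda_1 = i\pi and \lambda_2 = -i\pi (both logarithms of the
% algebraic number -1) and \gamma = 1:
\lambda_1 \lambda_2 = (i\pi)(-i\pi) = \pi^2 .
% If e^{\pi^2} were algebraic, then \lambda_3 = \pi^2 would be a
% logarithm of an algebraic number satisfying
% \lambda_1\lambda_2 = \gamma\lambda_3, so the conjecture would force
% \lambda_1\lambda_2 = \gamma\lambda_3 = 0, contradicting \pi^2 \neq 0.
% Hence e^{\pi^2} would be transcendental.
```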
Bertrand's conjecture
Many of the theorems and results in transcendental number theory concerning the exponential function have analogues involving the modular function j. Writing q = e2πiτ for the nome and j(τ) = J(q), Daniel Bertrand conjectured that if q1 and q2 are non-zero algebraic numbers in the complex unit disc that are multiplicatively independent, then J(q1) and J(q2) are algebraically independent over the rational numbers.[11] Although not obviously related to the four exponentials conjecture, Bertrand's conjecture in fact implies a special case known as the weak four exponentials conjecture.[12] This conjecture states that if x1 and x2 are two positive real algebraic numbers, neither of them equal to 1, then π2 and the product log(x1)log(x2) are linearly independent over the rational numbers. This corresponds to the special case of the four exponentials conjecture whereby y1 = iπ, y2 = −iπ, and x1 and x2 are real. Perhaps surprisingly, though, it is also a corollary of Bertrand's conjecture, suggesting there may be an approach to the full four exponentials conjecture via the modular function j.
Notes
1. Waldschmidt, (2006).
2. Alaoglu and Erdős, (1944), p.455: "It is very likely that q x and p x cannot be rational at the same time except if x is an integer. ... At present we can not show this. Professor Siegel has communicated to us the result that q x, r x and s x can not be simultaneously rational except if x is an integer."
3. Schneider, (1957).
4. Lang, (1966), chapter 2 section 1.
5. Ramachandra, (1967/8).
6. Waldschmidt, (2000), p.15.
7. Ramanujan, (1915), section IV.
8. Waldschmidt, "Hopf algebras..." (2005), p.200.
9. Waldschmidt, (2000), conjecture 11.17.
10. Waldschmidt, "Variations..." (2005), consequence 1.9.
11. Bertrand, (1997), conjecture 2 in section 5.
12. Diaz, (2001), section 4.
References
• Alaoglu, Leonidas; Erdős, Paul (1944). "On highly composite and similar numbers". Trans. Amer. Math. Soc. 56 (3): 448–469. doi:10.2307/1990319. JSTOR 1990319. MR 0011087.
• Bertrand, Daniel (1997). "Theta functions and transcendence". The Ramanujan Journal. 1 (4): 339–350. doi:10.1023/A:1009749608672. MR 1608721. S2CID 118628723.
• Diaz, Guy (2001). "Mahler's conjecture and other transcendence results". In Nesterenko, Yuri V.; Philippon, Patrice (eds.). Introduction to algebraic independence theory. Lecture Notes in Math. Vol. 1752. Springer. pp. 13–26. ISBN 3-540-41496-7. MR 1837824.
• Lang, Serge (1966). Introduction to transcendental numbers. Reading, Mass.: Addison-Wesley Publishing Co. MR 0214547.
• Ramachandra, Kanakanahalli (1967–1968). "Contributions to the theory of transcendental numbers. I, II". Acta Arith. 14: 65–72, 73–88. doi:10.4064/aa-14-1-65-72. MR 0224566.
• Ramanujan, Srinivasa (1915). "Highly Composite Numbers". Proc. London Math. Soc. 14 (2): 347–407. doi:10.1112/plms/s2_14.1.347. MR 2280858.
• Schneider, Theodor (1957). Einführung in die transzendenten Zahlen (in German). Berlin-Göttingen-Heidelberg: Springer. MR 0086842.
• Waldschmidt, Michel (2000). Diophantine approximation on linear algebraic groups. Grundlehren der Mathematischen Wissenschaften. Vol. 326. Berlin: Springer. ISBN 3-540-66785-7. MR 1756786.
• Waldschmidt, Michel (2005). "Hopf algebras and transcendental numbers". In Aoki, Takashi; Kanemitsu, Shigeru; Nakahara, Mikio; et al. (eds.). Zeta functions, topology, and quantum physics: Papers from the symposium held at Kinki University, Osaka, March 3–6, 2003. Developments in mathematics. Vol. 14. Springer. pp. 197–219. CiteSeerX 10.1.1.170.5648. MR 2179279.
• Waldschmidt, Michel (2005). "Variations on the six exponentials theorem". In Tandon, Rajat (ed.). Algebra and number theory. Delhi: Hindustan Book Agency. pp. 338–355. MR 2193363.
• Waldschmidt, Michel (2006). "On Ramachandra's contributions to transcendental number theory". In Balasubramanian, B.; Srinivas, K. (eds.). The Riemann zeta function and related themes: papers in honour of Professor K. Ramachandra. Ramanujan Math. Soc. Lect. Notes Ser. Vol. 2. Mysore: Ramanujan Math. Soc. pp. 155–179. MR 2335194.
External links
• "Four exponentials conjecture". PlanetMath.
• Weisstein, Eric W. "Four Exponentials Conjecture". MathWorld.
Weak dimension
In abstract algebra, the weak dimension of a nonzero right module M over a ring R is the largest number n such that the Tor group $\operatorname {Tor} _{n}^{R}(M,N)$ is nonzero for some left R-module N (or infinity if no largest such n exists), and the weak dimension of a left R-module is defined similarly. The weak dimension was introduced by Henri Cartan and Samuel Eilenberg (1956, p.122). The weak dimension is sometimes called the flat dimension as it is the shortest length of the resolution of the module by flat modules. The weak dimension of a module is, at most, equal to its projective dimension.
The weak global dimension of a ring is the largest number n such that $\operatorname {Tor} _{n}^{R}(M,N)$ is nonzero for some right R-module M and left R-module N. If there is no such largest number n, the weak global dimension is defined to be infinite. It is at most equal to the left or right global dimension of the ring R.
Examples
• The module $\mathbb {Q} $ of rational numbers over the ring $\mathbb {Z} $ of integers has weak dimension 0, but projective dimension 1.
• The module $\mathbb {Q} /\mathbb {Z} $ over the ring $\mathbb {Z} $ has weak dimension 1, but injective dimension 0.
• The module $\mathbb {Z} $ over the ring $\mathbb {Z} $ has weak dimension 0, but injective dimension 1.
• A Prüfer domain has weak global dimension at most 1.
• A Von Neumann regular ring has weak global dimension 0.
• A product of infinitely many fields has weak global dimension 0 but its global dimension is nonzero.
• If a ring is right Noetherian, then the right global dimension is the same as the weak global dimension, and is at most the left global dimension. In particular if a ring is right and left Noetherian then the left and right global dimensions and the weak global dimension are all the same.
• The triangular matrix ring ${\begin{bmatrix}\mathbb {Z} &\mathbb {Q} \\0&\mathbb {Q} \end{bmatrix}}$ has right global dimension 1, weak global dimension 1, but left global dimension 2. It is right Noetherian, but not left Noetherian.
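The second example in the list above can be verified directly; the following is a sketch using standard facts about Tor over $\mathbb {Z} $:

```latex
% Over Z, the short exact sequence  0 -> Z -> Q -> Q/Z -> 0  is a flat
% resolution of Q/Z of length 1 (both Z and Q are flat Z-modules), so
% the weak dimension of Q/Z is at most 1. It is exactly 1 because Tor_1
% does not vanish, e.g. against Z/2:
\operatorname{Tor}_1^{\mathbb{Z}}\bigl(\mathbb{Q}/\mathbb{Z},\ \mathbb{Z}/2\bigr)
  \;\cong\; \ker\!\Bigl(\mathbb{Q}/\mathbb{Z}
    \xrightarrow{\ \cdot 2\ } \mathbb{Q}/\mathbb{Z}\Bigr)
  \;\cong\; \mathbb{Z}/2 \;\neq\; 0 .
```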
References
• Cartan, Henri; Eilenberg, Samuel (1956), Homological algebra, Princeton Mathematical Series, vol. 19, Princeton University Press, ISBN 978-0-691-04991-5, MR 0077480
• Năstăsescu, Constantin; Van Oystaeyen, Freddy (1987), Dimensions of ring theory, Mathematics and its Applications, vol. 36, D. Reidel Publishing Co., doi:10.1007/978-94-009-3835-9, ISBN 9789027724618, MR 0894033
Goldbach's weak conjecture
In number theory, Goldbach's weak conjecture, also known as the odd Goldbach conjecture, the ternary Goldbach problem, or the 3-primes problem, states that
Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.)
Goldbach's weak conjecture
Letter from Goldbach to Euler dated 7 June 1742 (Latin-German)[1]
• Field: Number theory
• Conjectured by: Christian Goldbach
• Conjectured in: 1742
• First proof by: Harald Helfgott
• First proof in: 2013
• Implied by: Goldbach's conjecture
This conjecture is called "weak" because if Goldbach's strong conjecture (concerning sums of two primes) is proven, then this would also be true. For if every even number greater than 4 is the sum of two odd primes, adding 3 to each even number greater than 4 will produce the odd numbers greater than 7 (and 7 itself is equal to 2+2+3).
In 2013, Harald Helfgott released a proof of Goldbach's weak conjecture.[2] As of 2018, the proof is widely accepted in the mathematics community,[3] but it has not yet been published in a peer-reviewed journal. The proof was accepted for publication in the Annals of Mathematics Studies series[4] in 2015, and has been undergoing further review and revision since; fully-refereed chapters in close to final form are being made public in the process.[5]
Some state the conjecture as
Every odd number greater than 7 can be expressed as the sum of three odd primes.[6]
This version excludes 7 = 2+2+3 because this requires the even prime 2. On odd numbers larger than 7 it is slightly stronger as it also excludes sums like 17 = 2+2+13, which are allowed in the other formulation. Helfgott's proof covers both versions of the conjecture. Like the other formulation, this one also immediately follows from Goldbach's strong conjecture.
Origins
Main article: Goldbach's conjecture
The conjecture originated in correspondence between Christian Goldbach and Leonhard Euler. One formulation of the strong Goldbach conjecture, equivalent to the more common one in terms of sums of two primes, is
Every integer greater than 5 can be written as the sum of three primes.
The weak conjecture is simply this statement restricted to the case where the integer is odd (and possibly with the added requirement that the three primes in the sum be odd).
Timeline of results
In 1923, Hardy and Littlewood showed that, assuming the generalized Riemann hypothesis, the weak Goldbach conjecture is true for all sufficiently large odd numbers. In 1937, Ivan Matveevich Vinogradov eliminated the dependency on the generalised Riemann hypothesis and proved directly (see Vinogradov's theorem) that all sufficiently large odd numbers can be expressed as the sum of three primes. Vinogradov's original proof, as it used the ineffective Siegel–Walfisz theorem, did not give a bound for "sufficiently large"; his student K. Borozdkin (1956) derived that $e^{e^{16.038}}\approx 3^{3^{15}}$ is large enough.[7] The integer part of this number has 4,008,660 decimal digits, so checking every number under this figure would be completely infeasible.
In 1997, Deshouillers, Effinger, te Riele and Zinoviev published a result showing[8] that the generalized Riemann hypothesis implies Goldbach's weak conjecture for all numbers. This result combines a general statement valid for numbers greater than $10^{20}$ with an extensive computer search of the small cases. Saouter also conducted a computer search covering the same cases at approximately the same time.[9]
Olivier Ramaré in 1995 showed that every even number n ≥ 4 is in fact the sum of at most six primes, from which it follows that every odd number n ≥ 5 is the sum of at most seven primes. Leszek Kaniecki showed every odd integer is a sum of at most five primes, under the Riemann Hypothesis.[10] In 2012, Terence Tao proved this without the Riemann Hypothesis; this improves both results.[11]
In 2002, Liu Ming-Chit (University of Hong Kong) and Wang Tian-Ze lowered Borozdkin's threshold to approximately $n>e^{3100}\approx 2\times 10^{1346}$. The exponent is still much too large to admit checking all smaller numbers by computer. (Computer searches have only reached as far as $10^{18}$ for the strong Goldbach conjecture, and not much further than that for the weak Goldbach conjecture.)
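A small-case search of the kind mentioned above can be sketched in a few lines; this is only an illustration, nowhere near the scale of the published verifications:

```python
# Empirical check of the weak Goldbach conjecture for small odd numbers:
# every odd n > 5 should be a sum of three primes.

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def three_prime_sum(n, primes, prime_set):
    """Return a triple of primes summing to n, or None if none exists."""
    for p in primes:
        if p > n:
            break
        for q in primes:
            if p + q > n:
                break
            if (n - p - q) in prime_set:
                return (p, q, n - p - q)
    return None

LIMIT = 10_000
ps = primes_up_to(LIMIT)
pset = set(ps)
# Every odd number greater than 5 up to the limit decomposes.
assert all(three_prime_sum(n, ps, pset) for n in range(7, LIMIT, 2))
print(three_prime_sum(27, ps, pset))   # e.g. (2, 2, 23)
```

Note that this sketch allows the even prime 2, matching the first formulation of the conjecture; restricting the loops to odd primes would check the stronger formulation for odd n > 7.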
In 2012 and 2013, Peruvian mathematician Harald Helfgott released a pair of papers improving major and minor arc estimates sufficiently to unconditionally prove the weak Goldbach conjecture.[12][13][2][14] Here, the set of major arcs ${\mathfrak {M}}$ is the union of the intervals $\left(a/q-cr_{0}/qx,a/q+cr_{0}/qx\right)$ around the rationals $a/q$ with $q<r_{0}$, where $c$ is a constant. The minor arcs ${\mathfrak {m}}$ are defined by ${\mathfrak {m}}=(\mathbb {R} /\mathbb {Z} )\setminus {\mathfrak {M}}$.
References
1. Correspondance mathématique et physique de quelques célèbres géomètres du XVIIIème siècle (Band 1), St.-Pétersbourg 1843, pp. 125–129.
2. Helfgott, Harald A. (2013). "The ternary Goldbach conjecture is true". arXiv:1312.7748 [math.NT].
3. "Harald Andrés Helfgott - Alexander von Humboldt-Foundation". www.humboldt-foundation.de. Archived from the original on 2022-08-24. Retrieved 2022-08-24.
4. "Annals of Mathematics Studies". Princeton University Press. 1996-12-14. Retrieved 2023-02-05.
5. "Harald Andrés Helfgott". webusers.imj-prg.fr. Retrieved 2021-04-06.
6. Weisstein, Eric W. "Goldbach Conjecture". MathWorld.
7. Helfgott, Harald Andrés (2015). "The ternary Goldbach problem". arXiv:1501.05438 [math.NT].
8. Deshouillers, Jean-Marc; Effinger, Gove W.; Te Riele, Herman J. J.; Zinoviev, Dmitrii (1997). "A complete Vinogradov 3-primes theorem under the Riemann hypothesis". Electronic Research Announcements of the American Mathematical Society. 3 (15): 99–104. doi:10.1090/S1079-6762-97-00031-0. MR 1469323.
9. Yannick Saouter (1998). "Checking the odd Goldbach Conjecture up to $10^{20}$" (PDF). Math. Comp. 67 (222): 863–866. doi:10.1090/S0025-5718-98-00928-4. MR 1451327.
10. Kaniecki, Leszek (1995). "On Šnirelman's constant under the Riemann hypothesis" (PDF). Acta Arithmetica. 72 (4): 361–374. doi:10.4064/aa-72-4-361-374. MR 1348203.
11. Tao, Terence (2014). "Every odd number greater than 1 is the sum of at most five primes". Math. Comp. 83 (286): 997–1038. arXiv:1201.6656. doi:10.1090/S0025-5718-2013-02733-0. MR 3143702. S2CID 2618958.
12. Helfgott, Harald A. (2013). "Major arcs for Goldbach's theorem". arXiv:1305.2897 [math.NT].
13. Helfgott, Harald A. (2012). "Minor arcs for Goldbach's problem". arXiv:1205.5252 [math.NT].
14. Helfgott, Harald A. (2015). "The ternary Goldbach problem". arXiv:1501.05438 [math.NT].
Lambda calculus definition
Lambda calculus is a formal mathematical system based on lambda abstraction and function application. Two definitions of the language are given here: a standard definition, and a definition using mathematical formulas.
For a general introduction, see Lambda calculus.
Standard definition
This formal definition was given by Alonzo Church.
Definition
Lambda expressions are composed of
• variables $v_{1}$, $v_{2}$, ..., $v_{n}$, ...
• the abstraction symbols lambda '$\lambda $' and dot '.'
• parentheses ( )
The set of lambda expressions, $\Lambda $, can be defined inductively:
1. If $x$ is a variable, then $x\in \Lambda $
2. If $x$ is a variable and $M\in \Lambda $, then $(\lambda x.M)\in \Lambda $
3. If $M,N\in \Lambda $, then $(M\ N)\in \Lambda $
Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications.[1]
Notation
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied.
• Outermost parentheses are dropped: $M\ N$ instead of $(M\ N)$
• Applications are assumed to be left-associative: $M\ N\ P$ may be written instead of $((M\ N)\ P)$[2]
• The body of an abstraction extends as far right as possible: $\lambda x.M\ N$ means $\lambda x.(M\ N)$ and not $(\lambda x.M)\ N$
• A sequence of abstractions is contracted: $\lambda x.\lambda y.\lambda z.N$ is abbreviated as $\lambda xyz.N$[3][4]
Free and bound variables
The abstraction operator, $\lambda $, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. All other variables are called free. For example, in the following expression $y$ is a bound variable and $x$ is free: $\lambda y.x\ x\ y$. Also note that a variable is bound by its "nearest" abstraction. In the following example the single occurrence of $x$ in the expression is bound by the second lambda: $\lambda x.y\ (\lambda x.z\ x)$
The set of free variables of a lambda expression, $M$, is denoted as $\operatorname {FV} (M)$ and is defined by recursion on the structure of the terms, as follows:
1. $\operatorname {FV} (x)=\{x\}$, where $x$ is a variable
2. $\operatorname {FV} (\lambda x.M)=\operatorname {FV} (M)\backslash \{x\}$
3. $\operatorname {FV} (M\ N)=\operatorname {FV} (M)\cup \operatorname {FV} (N)$[5]
An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic.
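The three clauses defining $\operatorname {FV} $ can be transcribed directly; the following sketch uses an ad-hoc tuple encoding of terms rather than any standard representation:

```python
# The three FV clauses, translated directly. Terms are encoded as nested
# tuples: ('var', x), ('abs', x, body), ('app', fun, arg).

def free_vars(term):
    tag = term[0]
    if tag == 'var':                       # FV(x) = {x}
        return {term[1]}
    if tag == 'abs':                       # FV(λx.M) = FV(M) \ {x}
        return free_vars(term[2]) - {term[1]}
    if tag == 'app':                       # FV(M N) = FV(M) ∪ FV(N)
        return free_vars(term[1]) | free_vars(term[2])
    raise ValueError(f"not a term: {term!r}")

# λy.x x y : x is free, y is bound
example = ('abs', 'y', ('app', ('app', ('var', 'x'), ('var', 'x')), ('var', 'y')))
print(free_vars(example))    # {'x'}
```

A term is closed (a combinator) exactly when `free_vars` returns the empty set.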
Reduction
The meaning of lambda expressions is defined by how expressions can be reduced.[6]
There are three kinds of reduction:
• α-conversion: changing bound variables (alpha);
• β-reduction: applying functions to their arguments (beta);
• η-reduction: which captures a notion of extensionality (eta).
We also speak of the resulting equivalences: two expressions are β-equivalent, if they can be β-converted into the same expression, and α/η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, $(\lambda x.M)\ N$ is a β-redex, expressing the substitution of $N$ for $x$ in $M$; if $x$ is not free in $M$, $\lambda x.M\ x$ is an η-redex. The expression to which a redex reduces is called its reduct; using the previous example, the reducts of these expressions are respectively $M[x:=N]$ and $M$.
α-conversion
Alpha-conversion, sometimes known as alpha-renaming,[7] allows bound variable names to be changed. For example, alpha-conversion of $\lambda x.x$ might yield $\lambda y.y$. Terms that differ only by alpha-conversion are called α-equivalent. Frequently in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for alpha-conversion are not completely trivial. First, when alpha-converting an abstraction, the only variable occurrences that are renamed are those that are bound by the same abstraction. For example, an alpha-conversion of $\lambda x.\lambda x.x$ could result in $\lambda y.\lambda x.x$, but it could not result in $\lambda y.\lambda x.y$. The latter has a different meaning from the original.
Second, alpha-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace $x$ with $y$ in $\lambda x.\lambda y.x$, we get $\lambda y.\lambda y.y$, which is not at all the same.
In programming languages with static scope, alpha-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see alpha renaming to make name resolution trivial).
Substitution
Substitution, written $E[V:=R]$, is the process of replacing all free occurrences of the variable $V$ in the expression $E$ with expression $R$. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any λ expression).
${\begin{aligned}x[x:=N]&\equiv N\\y[x:=N]&\equiv y{\text{, if }}x\neq y\end{aligned}}$
${\begin{aligned}(M_{1}\ M_{2})[x:=N]&\equiv (M_{1}[x:=N])\ (M_{2}[x:=N])\\(\lambda x.M)[x:=N]&\equiv \lambda x.M\\(\lambda y.M)[x:=N]&\equiv \lambda y.(M[x:=N]){\text{, if }}x\neq y{\text{, provided }}y\notin FV(N)\end{aligned}}$
To substitute into a lambda abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for $(\lambda x.y)[y:=x]$ to result in $(\lambda x.x)$, because the substituted $x$ was supposed to be free but ended up being bound. The correct substitution in this case is $(\lambda z.x)$, up to α-equivalence. Notice that substitution is defined uniquely up to α-equivalence.
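The substitution rules, including the α-conversion step for the abstraction case, might be sketched as follows; the tuple encoding ('var', x), ('abs', x, body), ('app', f, a) and the fresh-name scheme v0, v1, ... are illustrative choices:

```python
# Capture-avoiding substitution E[V:=R] over tuple-encoded terms.
import itertools

def free_vars(t):
    if t[0] == 'var': return {t[1]}
    if t[0] == 'abs': return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def fresh(avoid):
    """A variable name not occurring in `avoid`."""
    for i in itertools.count():
        name = f"v{i}"
        if name not in avoid:
            return name

def subst(t, v, r):
    if t[0] == 'var':
        return r if t[1] == v else t
    if t[0] == 'app':
        return ('app', subst(t[1], v, r), subst(t[2], v, r))
    x, body = t[1], t[2]
    if x == v:                        # (λx.M)[x:=N] ≡ λx.M
        return t
    if x in free_vars(r):             # α-convert first to avoid capture
        y = fresh(free_vars(r) | free_vars(body) | {v})
        body = subst(body, x, ('var', y))
        x = y
    return ('abs', x, subst(body, v, r))

# (λx.y)[y:=x] must α-convert: the result is λv0.x, not λx.x
print(subst(('abs', 'x', ('var', 'y')), 'y', ('var', 'x')))
```

The printed result, ('abs', 'v0', ('var', 'x')), is the article's $(\lambda z.x)$ up to α-equivalence.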
β-reduction
β-reduction captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of $((\lambda V.E)\ E')$ is $E[V:=E']$.
For example, assuming some encoding of $2,7,\times $, we have the following β-reduction: $((\lambda n.\ n\times 2)\ 7)\rightarrow 7\times 2$.
η-reduction
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments. η-reduction converts between $\lambda x.(fx)$ and $f$ whenever $x$ does not appear free in $f$.
Normalization
Main article: Beta normal form
The purpose of β-reduction is to calculate a value. A value in lambda calculus is a function. So β-reduction continues until the expression looks like a function abstraction.
A lambda expression that contains no β-redex or η-redex is in normal form. Note that α-conversion is not counted as a reduction: all normal forms that can be converted into each other by α-conversion are defined to be equal. See the main article on Beta normal form for details.
• Normal form: no β- or η-reductions are possible.
• Head normal form: in the form of a lambda abstraction whose body is not reducible.
• Weak head normal form: in the form of a lambda abstraction.
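As an illustration, reduction to β-normal form under the leftmost-outermost (normal-order) strategy might be sketched as follows; the tuple encoding ('var', x), ('abs', x, body), ('app', f, a) is an ad-hoc choice, and η-reduction is omitted for brevity:

```python
# Normal-order β-reduction to normal form over tuple-encoded terms.
import itertools

def free_vars(t):
    if t[0] == 'var': return {t[1]}
    if t[0] == 'abs': return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, v, r):
    """Capture-avoiding substitution t[v:=r]."""
    if t[0] == 'var':
        return r if t[1] == v else t
    if t[0] == 'app':
        return ('app', subst(t[1], v, r), subst(t[2], v, r))
    x, body = t[1], t[2]
    if x == v:
        return t
    if x in free_vars(r):             # rename to avoid capture
        y = next(f"v{i}" for i in itertools.count()
                 if f"v{i}" not in free_vars(r) | free_vars(body) | {v})
        body, x = subst(body, x, ('var', y)), y
    return ('abs', x, subst(body, v, r))

def step(t):
    """One leftmost-outermost β-step, or None if t is in β-normal form."""
    if t[0] == 'app' and t[1][0] == 'abs':       # a β-redex
        return subst(t[1][2], t[1][1], t[2])
    if t[0] == 'abs':
        s = step(t[2])
        return None if s is None else ('abs', t[1], s)
    if t[0] == 'app':
        s = step(t[1])
        if s is not None:
            return ('app', s, t[2])
        s = step(t[2])
        return None if s is None else ('app', t[1], s)
    return None                                   # a variable

def normalize(t):
    while (s := step(t)) is not None:
        t = s
    return t

# (λx.λy.x) a b  reduces to  a
k = ('abs', 'x', ('abs', 'y', ('var', 'x')))
print(normalize(('app', ('app', k, ('var', 'a')), ('var', 'b'))))
```

Note that `normalize` need not terminate on terms without a normal form, such as (λx.x x)(λx.x x).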
Syntax definition in BNF
Lambda Calculus has a simple syntax. A lambda calculus program has the syntax of an expression where,
• Abstraction (anonymous function definition):
<expression> ::= λ <variable-list> . <expression>
• Application term:
<expression> ::= <application-term>
• Application (a function call):
<application-term> ::= <application-term> <item>
• Item:
<application-term> ::= <item>
• Variable (e.g. x, y, fact, sum, ...):
<item> ::= <variable>
• Grouping (a bracketed expression):
<item> ::= ( <expression> )
The variable list is defined as,
<variable-list> := <variable> | <variable>, <variable-list>
A variable as used by computer scientists has the syntax,
<variable> ::= <alpha> <extension>
<extension> ::=
<extension> ::= <extension-char> <extension>
<extension-char> ::= <alpha> | <digit> | _
Mathematicians will sometimes restrict a variable to be a single alphabetic character. When using this convention the comma is omitted from the variable list.
A lambda abstraction has a lower precedence than an application, so;
$\lambda x.y\ z=\lambda x.(y\ z)$
Applications are left associative;
$x\ y\ z=(x\ y)\ z$
An abstraction with multiple parameters is equivalent to multiple abstractions of one parameter.
$\lambda x.y.z=\lambda x.\lambda y.z$
where,
• x is a variable
• y is a variable list
• z is an expression
Definition as mathematical formulas
The problem of how variables may be renamed is difficult. This definition avoids the problem by substituting all names with canonical names, which are constructed based on the position of the definition of the name in the expression. The approach is analogous to what a compiler does, but has been adapted to work within the constraints of mathematics.
Semantics
The execution of a lambda expression proceeds using the following reductions and transformations,
1. α-conversion - $\operatorname {alpha-conv} (a)\to \operatorname {canonym} [A,P]=\operatorname {canonym} [a[A],P]$
2. β-reduction - $\operatorname {beta-redex} [\lambda p.b\ v]=b[p:=v]$
3. η-reduction - $x\not \in \operatorname {FV} (f)\to \operatorname {eta-redex} [\lambda x.(f\ x)]=f$
where,
• canonym is a renaming of a lambda expression to give the expression standard names, based on the position of the name in the expression.
• Substitution Operator, $b[p:=v]$ is the substitution of the name $p$ by the lambda expression $v$ in lambda expression $b$.
• Free Variable Set $\operatorname {FV} (f)$ is the set of variables that do not belong to a lambda abstraction in $f$.
Execution is performing β-reductions and η-reductions on subexpressions in the canonym of a lambda expression until the result is a lambda function (abstraction) in the normal form.
All α-conversions of a λ-expression are considered to be equivalent.
Canonym - Canonical Names
Canonym is a function that takes a lambda expression and renames all names canonically, based on their positions in the expression. This might be implemented as,
${\begin{aligned}\operatorname {canonym} [L,Q]&=\operatorname {canonym} [L,O,Q]\\\operatorname {canonym} [\lambda p.b,M,Q]&=\lambda \operatorname {name} (Q).\operatorname {canonym} [b,M[p:=Q],Q+N]\\\operatorname {canonym} [X\ Y,x,Q]&=\operatorname {canonym} [X,x,Q+F]\ \operatorname {canonym} [Y,x,Q+S]\\\operatorname {canonym} [x,M,Q]&=\operatorname {name} (M[x])\end{aligned}}$
where N is the string "N", F is the string "F", S is the string "S", + is concatenation, and "name" converts a string into a name.
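Under the same ad-hoc tuple encoding used earlier (('var', x), ('abs', x, body), ('app', f, a)), the renaming might be sketched as follows; the root prefix "Q" is an arbitrary illustrative choice:

```python
# A sketch of canonym over tuple-encoded terms. The environment dict
# plays the role of the map M; free variables keep their own names,
# matching canonym[x, M, Q] = name(M[x]) with O[x] = x.

def canonym(term, env=None, pos="Q"):
    env = env or {}                        # O, the empty map
    if term[0] == 'var':
        return ('var', env.get(term[1], term[1]))
    if term[0] == 'abs':                   # λp.b is named by its position
        new_env = dict(env, **{term[1]: pos})
        return ('abs', pos, canonym(term[2], new_env, pos + "N"))
    return ('app', canonym(term[1], env, pos + "F"),    # function side
                   canonym(term[2], env, pos + "S"))    # argument side

# λz.λy.(z y)  renames to  λQ.λQN.(Q QN)
print(canonym(('abs', 'z', ('abs', 'y', ('app', ('var', 'z'), ('var', 'y'))))))
```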
Map operators
Map from one value to another if the value is in the map. O is the empty map.
1. $O[x]=x$
2. $M[x:=y][x]=y$
3. $x\neq z\to M[x:=y][z]=M[z]$
Substitution operator
If L is a lambda expression, x is a name, and y is a lambda expression; $L[x:=y]$ means substitute x by y in L. The rules are,
1. $(\lambda p.b)[x:=y]=\lambda p.b[x:=y]$
2. $(X\,Y)[x:=y]=X[x:=y]\,Y[x:=y]$
3. $z=x\to (z)[x:=y]=y$
4. $z\neq x\to (z)[x:=y]=z$
Note that rule 1 must be modified if it is to be used on non canonically renamed lambda expressions. See Changes to the substitution operator.
Free and bound variable sets
The set of free variables of a lambda expression, M, is denoted as FV(M). This is the set of variable names that have instances not bound (used) in a lambda abstraction, within the lambda expression. They are the variable names that may be bound to formal parameter variables from outside the lambda expression.
The set of bound variables of a lambda expression, M, is denoted as BV(M). This is the set of variable names that have instances bound (used) in a lambda abstraction, within the lambda expression.
The rules for the two sets are given below.[5]
The free variable set $\mathrm {FV} (M)$:
1. $\mathrm {FV} (x)=\{x\}$, where $x$ is a variable
2. $\mathrm {FV} (\lambda x.M)=\mathrm {FV} (M)\setminus \{x\}$ (the free variables of $M$, excluding $x$)
3. $\mathrm {FV} (M\ N)=\mathrm {FV} (M)\cup \mathrm {FV} (N)$ (the free variables of the function and the parameter combined)
The bound variable set $\mathrm {BV} (M)$:
1. $\mathrm {BV} (x)=\emptyset $, where $x$ is a variable
2. $\mathrm {BV} (\lambda x.M)=\mathrm {BV} (M)\cup \{x\}$ (the bound variables of $M$, plus $x$)
3. $\mathrm {BV} (M\ N)=\mathrm {BV} (M)\cup \mathrm {BV} (N)$ (the bound variables of the function and the parameter combined)
Usage;
• The Free Variable Set, FV is used above in the definition of the η-reduction.
• The Bound Variable Set, BV, is used in the rule for β-redex of non canonical lambda expression.
Evaluation strategy
This mathematical definition is structured so that it represents the result, and not the way it gets calculated. However the result may be different between lazy and eager evaluation. This difference is described in the evaluation formulas.
The definitions given here assume that the first definition that matches the lambda expression will be used. This convention is used to make the definition more readable. Otherwise some if conditions would be required to make the definition precise.
Running or evaluating a lambda expression L is,
$\operatorname {eval} [\operatorname {canonym} [L],Q]$
where Q is a name prefix possibly an empty string and eval is defined by,
${\begin{aligned}\operatorname {eval} [x\ y]&=\operatorname {eval} [\operatorname {apply} [\operatorname {eval} [x]\ \operatorname {strategy} [y]]]\\\operatorname {apply} [(\lambda x.y)\ z]&=\operatorname {canonym} [\operatorname {beta-redex} [(\lambda x.y)\ z],x]\\\operatorname {apply} [x]&=x{\text{ if x does match the above.}}\\\operatorname {eval} [\lambda x.(f\ x)]&=\operatorname {eval} [\operatorname {eta-redex} [\lambda x.(f\ x)]]\\\operatorname {eval} [L]&=L\\\operatorname {lazy} [X]&=X\\\operatorname {eager} [X]&=\operatorname {eval} [X]\end{aligned}}$
Then the evaluation strategy may be chosen as either,
${\begin{aligned}\operatorname {strategy} &=\operatorname {lazy} \\\operatorname {strategy} &=\operatorname {eager} \end{aligned}}$
The result may be different depending on the strategy used. Eager evaluation will apply all reductions possible, leaving the result in normal form, while lazy evaluation will omit some reductions in parameters, leaving the result in "weak head normal form".
Normal form
All reductions that can be applied have been applied. This is the result obtained from applying eager evaluation.
${\begin{aligned}\operatorname {normal} [(\lambda x.y)\ z]&=\operatorname {false} \\\operatorname {normal} [\lambda x.(f\ x)]&=\operatorname {false} \\\operatorname {normal} [x\ y]&=\operatorname {normal} [x]\land \operatorname {normal} [y]\end{aligned}}$
In all other cases,
$\operatorname {normal} [x]=\operatorname {true} $
Weak head normal form
Reductions to the function (the head) have been applied, but not all reductions to the parameter have been applied. This is the result obtained from applying lazy evaluation.
${\begin{aligned}\operatorname {whnf} [(\lambda x.y)\ z]&=\operatorname {false} \\\operatorname {whnf} [\lambda x.(f\ x)]&=\operatorname {false} \\\operatorname {whnf} [x\ y]&=\operatorname {whnf} [x]\end{aligned}}$
In all other cases,
$\operatorname {whnf} [x]=\operatorname {true} $
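The normal-form and weak-head-normal-form predicates can be transcribed directly; the following sketch uses the same ad-hoc tuple encoding (('var', x), ('abs', x, body), ('app', f, a)) as earlier examples and ignores η-redexes for brevity:

```python
# normal[t] and whnf[t] over tuple-encoded terms, β-redexes only.

def is_beta_redex(t):
    return t[0] == 'app' and t[1][0] == 'abs'

def normal(t):
    """True if t contains no β-redex anywhere."""
    if is_beta_redex(t):
        return False
    if t[0] == 'abs':
        return normal(t[2])
    if t[0] == 'app':
        return normal(t[1]) and normal(t[2])
    return True                     # a variable

def whnf(t):
    """True if no β-redex remains in head position."""
    if is_beta_redex(t):
        return False
    if t[0] == 'app':
        return whnf(t[1])
    return True                     # a variable or an abstraction

# f ((λx.x) y) : in weak head normal form, but not in normal form,
# because the redex sits inside the argument.
t = ('app', ('var', 'f'), ('app', ('abs', 'x', ('var', 'x')), ('var', 'y')))
print(normal(t), whnf(t))           # False True
```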
Derivation of standard from the math definition
The standard definition of lambda calculus uses some definitions which may be considered as theorems, which can be proved based on the definition as mathematical formulas.
The canonical naming definition deals with the problem of variable identity by constructing a unique name for each variable based on the position of the lambda abstraction for the variable name in the expression.
This definition introduces the rules used in the standard definition and explains them in terms of the canonical renaming definition.
Free and bound variables
The lambda abstraction operator, λ, takes a formal parameter variable and a body expression. When evaluated the formal parameter variable is identified with the value of the actual parameter.
Variables in a lambda expression may either be "bound" or "free". Bound variables are variable names that are already attached to formal parameter variables in the expression.
The formal parameter variable is said to bind the variable name wherever it occurs free in the body. Variable (names) that have already been matched to formal parameter variable are said to be bound. All other variables in the expression are called free.
For example, in the following expression y is a bound variable and x is free: $\lambda y.x\ x\ y$. Also note that a variable is bound by its "nearest" lambda abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: $\lambda x.y\ (\lambda x.z\ x)$
Changes to the substitution operator
In the definition of the Substitution Operator the rule,
• $(\lambda p.b)[x:=y]=\lambda p.b[x:=y]$
must be replaced with,
1. $(\lambda x.b)[x:=y]=\lambda x.b$
2. $z\neq x\ \to (\lambda z.b)[x:=y]=\lambda z.b[x:=y]$
This is to stop bound variables with the same name being substituted. This would not have occurred in a canonically renamed lambda expression.
For example, the previous rule would have wrongly translated,
$(\lambda x.x\ z)[x:=y]=(\lambda x.y\ z)$
The new rules block this substitution so that it remains as,
$(\lambda x.x\ z)[x:=y]=(\lambda x.x\ z)$
Transformation
The meaning of lambda expressions is defined by how expressions can be transformed or reduced.[6]
There are three kinds of transformation:
• α-conversion: changing bound variables (alpha);
• β-reduction: applying functions to their arguments (beta), calling functions;
• η-reduction: which captures a notion of extensionality (eta).
We also speak of the resulting equivalences: two expressions are β-equivalent, if they can be β-converted into the same expression, and α/η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules.
α-conversion
Alpha-conversion, sometimes known as alpha-renaming,[7] allows bound variable names to be changed. For example, alpha-conversion of $\lambda x.x$ might give $\lambda y.y$. Terms that differ only by alpha-conversion are called α-equivalent.
In an α-conversion, a bound name may be replaced by a new name provided the new name is not free in the body; otherwise the renaming would capture free variables.
$y\not \in FV(b)\to \operatorname {alpha-con} (\lambda x.b)=\lambda y.b[x:=y]$
Note that the substitution will not recurse into the body of lambda expressions with formal parameter $x$ because of the change to the substitution operator described above.
See the following example:
α-conversion λ-expression Canonically named Comment
$\lambda z.\lambda y.(z\ y)$ $\lambda \operatorname {P} .\lambda \operatorname {PN} .(\operatorname {P} \operatorname {PN} )$ Original expressions.
correctly rename y to k, (because k is not used in the body) $\lambda z.\lambda k.(z\ k)$ $\lambda \operatorname {P} .\lambda \operatorname {PN} .(\operatorname {P} \operatorname {PN} )$ No change to canonical renamed expression.
naively rename y to z, (wrong because z free in $\lambda y.(z\ y)$) $\lambda z.\lambda z.(z\ z)$ $\lambda \operatorname {P} .\lambda \operatorname {PN} .({\color {Red}\operatorname {PN} }\operatorname {PN} )$ $z$ is captured.
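The side condition can be checked mechanically. The sketch below is our own illustration (the tuple encoding `('var', x)`, `('app', f, a)`, `('lam', x, body)` and helper names are assumptions, not from the text); renaming is refused whenever the new name is free in the body.

```python
def fv(t):
    """Free variables of a term."""
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}   # 'lam': the binder removes its own variable

def subst(t, x, y):
    """t[x := y] for a name y; does not recurse under a rebinding of x."""
    if t[0] == 'var':
        return ('var', y) if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, y), subst(t[2], x, y))
    if t[1] == x:
        return t
    return ('lam', t[1], subst(t[2], x, y))

def alpha_convert(t, new):
    """λx.b -> λnew.b[x := new], allowed only when new is not free in b."""
    x, b = t[1], t[2]
    if new in fv(b):
        raise ValueError('renaming would capture a free variable')
    return ('lam', new, subst(b, x, new))

inner = ('lam', 'y', ('app', ('var', 'z'), ('var', 'y')))   # λy.(z y)
# renaming y to k is fine, since k is not used in the body:
assert alpha_convert(inner, 'k') == ('lam', 'k', ('app', ('var', 'z'), ('var', 'k')))
# renaming y to z is rejected, because z is free in (z y):
try:
    alpha_convert(inner, 'z')
    assert False, 'capture should have been refused'
except ValueError:
    pass
```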
β-reduction (capture avoiding)
β-reduction captures the idea of function application (also called a function call), and implements the substitution of the actual parameter expression for the formal parameter variable. β-reduction is defined in terms of substitution.
If no variable name is both free in the actual parameter and bound in the body, β-reduction may be performed on the lambda abstraction without canonical renaming.
$(\forall z:z\not \in FV(y)\lor z\not \in BV(b))\to \operatorname {beta-redex} [\lambda x.b\ y]=b[x:=y]$
Alpha renaming may be used on $b$ to rename names that are free in $y$ but bound in $b$, to meet the pre-condition for this transformation.
See the following example:
β-reduction λ-expression Canonically named Comment
$(\lambda x.\lambda y.(\lambda z.(\lambda x.z\ x)(\lambda y.z\ y))(x\ y))$ $(\lambda \operatorname {P} .\lambda \operatorname {PN} .(\lambda \operatorname {PNF} .(\lambda \operatorname {PNFNF} .\operatorname {PNF} \operatorname {PNFNF} )(\lambda \operatorname {PNFNS} .\operatorname {PNF} \operatorname {PNFNS} ))(\operatorname {P} \operatorname {PN} ))$ Original expressions.
Naive beta 1, $(\lambda x.\lambda y.((\lambda x.(x\ y)x)(\lambda y.(x\ y)y)))$
Canonical $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .({\color {Blue}\operatorname {P} }\operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} {\color {Blue}\operatorname {PN} })\operatorname {PNS} )))$
Natural $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .({\color {Red}\operatorname {PNF} }\operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} {\color {Red}\operatorname {PNS} )}\operatorname {PNS} )))$
x (P) and y (PN) have been captured in the substitution.
Alpha rename inner, x → a, y → b
$(\lambda x.\lambda y.(\lambda z.(\lambda a.z\ a)(\lambda b.z\ b))(x\ y))$ $(\lambda \operatorname {P} .\lambda \operatorname {PN} .(\lambda \operatorname {PNF} .(\lambda \operatorname {PNFNF} .\operatorname {PNF} \operatorname {PNFNF} )(\lambda \operatorname {PNFNS} .\operatorname {PNF} \operatorname {PNFNS} ))(\operatorname {P} \operatorname {PN} ))$
Beta 2, $(\lambda x.\lambda y.((\lambda a.(x\ y)a)(\lambda b.(x\ y)b)))$
Canonical $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .(\operatorname {P} \operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} \operatorname {PN} )\operatorname {PNS} )))$
Natural $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .(\operatorname {P} \operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} \operatorname {PN} )\operatorname {PNS} )))$
x and y not captured.
${\begin{array}{r}((\lambda x.z\ x)(\lambda y.z\ y))[z:=(x\ y)]\\((\lambda a.z\ a)(\lambda b.z\ b))[z:=(x\ y)]\end{array}}$
In this example,
1. In the β-redex,
1. The free variables are, $\operatorname {FV} (x\ y)=\{x,y\}$
2. The bound variables are, $\operatorname {BV} ((\lambda x.z\ x)(\lambda y.z\ y))=\{x,y\}$
2. The naive β-redex changed the meaning of the expression because x and y from the actual parameter became captured when the expressions were substituted in the inner abstractions.
3. The alpha renaming removed the problem by changing the names of x and y in the inner abstraction so that they are distinct from the names of x and y in the actual parameter.
1. The free variables are, $\operatorname {FV} (x\ y)=\{x,y\}$
2. The bound variables are, $\operatorname {BV} ((\lambda a.z\ a)(\lambda b.z\ b))=\{a,b\}$
4. The β-redex then proceeded with the intended meaning.
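The whole procedure, precondition check plus alpha renaming plus substitution, can be sketched as follows. This is our own illustration: the tuple encoding and the fresh-name scheme (`x_0`, `y_0`, …) are arbitrary choices, not part of the text.

```python
from itertools import count

def fv(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}

def bv(t):
    """All binder names occurring in t."""
    if t[0] == 'var':
        return set()
    if t[0] == 'app':
        return bv(t[1]) | bv(t[2])
    return {t[1]} | bv(t[2])

def subst(t, x, arg):
    """t[x := arg] for a whole term arg; stops at a rebinding of x."""
    if t[0] == 'var':
        return arg if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, arg), subst(t[2], x, arg))
    if t[1] == x:
        return t
    return ('lam', t[1], subst(t[2], x, arg))

def freshen(t, avoid):
    """Alpha-rename every binder in t whose name lies in `avoid`."""
    if t[0] == 'var':
        return t
    if t[0] == 'app':
        return ('app', freshen(t[1], avoid), freshen(t[2], avoid))
    x, b = t[1], t[2]
    if x in avoid:
        new = next(f'{x}_{i}' for i in count()
                   if f'{x}_{i}' not in avoid and f'{x}_{i}' not in fv(b))
        return ('lam', new, freshen(subst(b, x, ('var', new)), avoid))
    return ('lam', x, freshen(b, avoid))

def beta(redex):
    """(λx.b) arg -> b[x := arg], alpha-renaming b first when the
    precondition FV(arg) ∩ BV(b) = ∅ fails."""
    _, (_, x, b), arg = redex
    if fv(arg) & bv(b):
        b = freshen(b, fv(arg) | fv(b))
    return subst(b, x, arg)

# The example from the text: (λz.(λx.z x)(λy.z y)) applied to (x y)
arg = ('app', ('var', 'x'), ('var', 'y'))
body = ('app', ('lam', 'x', ('app', ('var', 'z'), ('var', 'x'))),
               ('lam', 'y', ('app', ('var', 'z'), ('var', 'y'))))
out = beta(('app', ('lam', 'z', body), arg))
# x and y from the actual parameter remain free -- nothing was captured:
assert fv(out) == {'x', 'y'}
assert not (fv(arg) & bv(out))
```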
η-reduction
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments.
η-reduction may be used without change on lambda expressions that are not canonically renamed.
$x\not \in \mathrm {FV} (f)\to {\text{η-redex}}[\lambda x.(fx)]=f$
The problem with using an η-redex when $x$ is free in $f$ is shown in this example,
Reduction λ-expression β-reduction
$(\lambda x.(\lambda y.y\,x)\,x)\,a$ $a\,a$
Naive η-reduction $(\lambda y.y\,x)\,a$ $a\,x$
This improper use of η-reduction changes the meaning by leaving x in $\lambda y.y\,x$ unsubstituted.
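Both the valid and the improper case above can be checked with a small sketch (our own illustration; the tuple encoding and names are assumptions, not from the text). The reduction is applied only when the side condition $x\not \in FV(f)$ holds.

```python
def fv(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return fv(t[1]) | fv(t[2])
    return fv(t[2]) - {t[1]}

def eta(t):
    """λx.(f x) -> f, applied only when x is not free in f."""
    if (t[0] == 'lam' and t[2][0] == 'app'
            and t[2][2] == ('var', t[1])
            and t[1] not in fv(t[2][1])):
        return t[2][1]
    return t

f = ('var', 'f')
assert eta(('lam', 'x', ('app', f, ('var', 'x')))) == f   # valid η-redex

# the improper case from the text: x is free in (λy.y x),
# so λx.((λy.y x) x) must NOT be η-reduced
g = ('lam', 'y', ('app', ('var', 'y'), ('var', 'x')))
bad = ('lam', 'x', ('app', g, ('var', 'x')))
assert eta(bad) == bad
```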
References
1. Barendregt, Hendrik Pieter (1984), The Lambda Calculus: Its Syntax and Semantics, Studies in Logic and the Foundations of Mathematics, vol. 103 (Revised ed.), North Holland, Amsterdam, ISBN 978-0-444-87508-2, archived from the original on 2004-08-23 — Corrections
2. "Example for Rules of Associativity". Lambda-bound.com. Retrieved 2012-06-18.
3. Selinger, Peter (2008), Lecture Notes on the Lambda Calculus (PDF), vol. 0804, Department of Mathematics and Statistics, University of Ottawa, p. 9, arXiv:0804.3434, Bibcode:2008arXiv0804.3434S
4. "Example for Rule of Associativity". Lambda-bound.com. Retrieved 2012-06-18.
5. Barendregt, Henk; Barendsen, Erik (March 2000), Introduction to Lambda Calculus (PDF)
6. de Queiroz, Ruy J.G.B. "A Proof-Theoretic Account of Programming and the Role of Reduction Rules." Dialectica 42(4), pages 265-282, 1988.
7. Turbak, Franklyn; Gifford, David (2008), Design concepts in programming languages, MIT press, p. 251, ISBN 978-0-262-20175-9
Pettis integral
In mathematics, the Pettis integral or Gelfand–Pettis integral, named after Israel M. Gelfand and Billy James Pettis, extends the definition of the Lebesgue integral to vector-valued functions on a measure space, by exploiting duality. The integral was introduced by Gelfand for the case when the measure space is an interval with Lebesgue measure. The integral is also called the weak integral in contrast to the Bochner integral, which is the strong integral.
Definition
Let $f:X\to V$ where $(X,\Sigma ,\mu )$ is a measure space and $V$ is a topological vector space (TVS) with a continuous dual space $V'$ that separates points (that is, if $x\in V$ is nonzero then there is some $l\in V'$ such that $l(x)\neq 0$); for example, $V$ may be a normed space or, more generally, a Hausdorff locally convex TVS. Evaluation of a functional may be written as a duality pairing:
$\langle \varphi ,x\rangle =\varphi [x].$
The map $f:X\to V$ is called weakly measurable if for all $\varphi \in V',$ the scalar-valued map $\varphi \circ f$ is a measurable map. A weakly measurable map $f:X\to V$ is said to be weakly integrable on $X$ if there exists some $e\in V$ such that for all $\varphi \in V',$ the scalar-valued map $\varphi \circ f$ is Lebesgue integrable (that is, $\varphi \circ f\in L^{1}\left(X,\Sigma ,\mu \right)$) and
$\varphi (e)=\int _{X}\varphi (f(x))\,\mathrm {d} \mu (x).$
The map $f:X\to V$ is said to be Pettis integrable if $\varphi \circ f\in L^{1}\left(X,\Sigma ,\mu \right)$ for all $\varphi \in V^{\prime }$ and also for every $A\in \Sigma $ there exists a vector $e_{A}\in V$ such that
$\langle \varphi ,e_{A}\rangle =\int _{A}\langle \varphi ,f(x)\rangle \,\mathrm {d} \mu (x)\quad {\text{ for all }}\varphi \in V'.$
In this case, $e_{A}$ is called the Pettis integral of $f$ on $A.$ Common notations for the Pettis integral $e_{A}$ include
$\int _{A}f\,\mathrm {d} \mu ,\qquad \int _{A}f(x)\,\mathrm {d} \mu (x),\quad {\text{and, in case that}}~A=X~{\text{is understood,}}\quad \mu [f].$
To understand the motivation behind the definition of "weakly integrable", consider the special case where $V$ is the underlying scalar field; that is, where $V=\mathbb {R} $ or $V=\mathbb {C} .$ In this case, every linear functional $\varphi $ on $V$ is of the form $\varphi (y)=sy$ for some scalar $s\in V$ (that is, $\varphi $ is just scalar multiplication by a constant), so the condition
$\varphi (e)=\int _{A}\varphi (f(x))\,\mathrm {d} \mu (x)\quad {\text{for all}}~\varphi \in V',$
simplifies to
$se=\int _{A}sf(x)\,\mathrm {d} \mu (x)\quad {\text{for all scalars}}~s.$
In particular, in this special case, $f$ is weakly integrable on $X$ if and only if $f$ is Lebesgue integrable.
Relation to Dunford integral
The map $f:X\to V$ is said to be Dunford integrable if $\varphi \circ f\in L^{1}\left(X,\Sigma ,\mu \right)$ for all $\varphi \in V^{\prime }$ and also for every $A\in \Sigma $ there exists a vector $d_{A}\in V'',$ called the Dunford integral of $f$ on $A,$ such that
$\langle d_{A},\varphi \rangle =\int _{A}\langle \varphi ,f(x)\rangle \,\mathrm {d} \mu (x)\quad {\text{ for all }}\varphi \in V'$
where $\langle d_{A},\varphi \rangle =d_{A}(\varphi ).$
Identify every vector $x\in V$ with the scalar-valued functional on $V'$ defined by $\varphi \in V'\mapsto \varphi (x).$ This assignment induces a map called the canonical evaluation map and through it, $V$ is identified as a vector subspace of the double dual $V''.$ The space $V$ is a semi-reflexive space if and only if this map is surjective. The map $f:X\to V$ is Pettis integrable if and only if $d_{A}\in V$ for every $A\in \Sigma .$
Properties
An immediate consequence of the definition is that Pettis integrals are compatible with continuous linear operators: if $\Phi :V_{1}\to V_{2}$ is linear and continuous and $f:X\to V_{1}$ is Pettis integrable, then $\Phi \circ f$ is Pettis integrable as well and:
$\int _{X}\Phi (f(x))\,d\mu (x)=\Phi \left(\int _{X}f(x)\,d\mu (x)\right).$
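A finite-dimensional illustration of this compatibility (our own sketch, not from the references): in $\mathbb {R} ^{2}$ the Pettis integral reduces to the coordinatewise integral, approximated here by a midpoint Riemann sum, and a fixed matrix map, being linear and continuous, commutes with it. The particular $f$ and $\Phi $ are arbitrary choices.

```python
import math

def f(t):
    """A map [0, 1] -> R^2."""
    return (math.cos(t), t * t)

def phi(v):
    """An arbitrary linear (hence continuous) map R^2 -> R^2."""
    a, b = v
    return (2 * a + b, a - 3 * b)

def integral(g, n=100000):
    """Coordinatewise midpoint Riemann sum on [0, 1]."""
    h = 1.0 / n
    sx = sy = 0.0
    for k in range(n):
        x, y = g((k + 0.5) * h)
        sx += x * h
        sy += y * h
    return (sx, sy)

lhs = integral(lambda t: phi(f(t)))     # ∫ Φ(f(x)) dμ(x)
rhs = phi(integral(f))                  # Φ( ∫ f(x) dμ(x) )
assert all(abs(p - q) < 1e-9 for p, q in zip(lhs, rhs))
```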
The standard estimate
$\left|\int _{X}f(x)\,d\mu (x)\right|\leq \int _{X}|f(x)|\,d\mu (x)$
for real- and complex-valued functions generalises to Pettis integrals in the following sense: For all continuous seminorms $p:V\to \mathbb {R} $ and all Pettis integrable $f:X\to V,$
$p\left(\int _{X}f(x)\,d\mu (x)\right)\leq {\underline {\int _{X}}}p(f(x))\,d\mu (x)$
holds. The right hand side is the lower Lebesgue integral of a $[0,\infty ]$-valued function, that is,
${\underline {\int _{X}}}g\,d\mu :=\sup \left\{\left.\int _{X}h\,d\mu \;\right|\;h:X\to [0,\infty ]{\text{ is measurable and }}0\leq h\leq g\right\}.$
Taking a lower Lebesgue integral is necessary because the integrand $p\circ f$ may not be measurable. The estimate follows from the Hahn-Banach theorem, because for every vector $v\in V$ there must be a continuous functional $\varphi \in V^{\ast }$ such that $\varphi (v)=p(v)$ and, for all $w\in V,$ $|\varphi (w)|\leq p(w).$ Applying this to $v:=\int _{X}f\,d\mu $ gives the result.
Mean value theorem
An important property is that the Pettis integral with respect to a finite measure is contained in the closure of the convex hull of the values scaled by the measure of the integration domain:
$\mu (A)<\infty {\text{ implies }}\int _{A}f\,d\mu \in \mu (A)\cdot {\overline {co(f(A))}}$
This is a consequence of the Hahn-Banach theorem and generalizes the mean value theorem for integrals of real-valued functions: If $V=\mathbb {R} ,$ then closed convex sets are simply intervals and for $f:X\to [a,b],$ the following inequalities hold:
$\mu (A)a~\leq ~\int _{A}f\,d\mu ~\leq ~\mu (A)b.$
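In the scalar case these inequalities are easy to check numerically. A minimal sketch (our own example; the interval and integrand are arbitrary choices, and a midpoint Riemann sum stands in for the Lebesgue integral):

```python
import math

def f(t):
    """Takes values in [a, b] = [0, 1]."""
    return 0.5 * (1 + math.sin(t))

n = 100000
h = 2.0 / n                              # A = [0, 2], so μ(A) = 2
integral = sum(f((k + 0.5) * h) for k in range(n)) * h
mu_A, a, b = 2.0, 0.0, 1.0
assert mu_A * a <= integral <= mu_A * b  # μ(A)·a <= ∫_A f dμ <= μ(A)·b
```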
Existence
If $V=\mathbb {R} ^{n}$ is finite-dimensional then $f$ is Pettis integrable if and only if each of $f$'s coordinates is Lebesgue integrable.
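In the finite-dimensional case the defining identity can be checked directly. The sketch below is an illustration with an arbitrarily chosen $f$ (midpoint Riemann sums stand in for Lebesgue integration): it computes the coordinatewise integral $e$ and verifies $\langle \varphi ,e\rangle =\int \langle \varphi ,f(x)\rangle \,d\mu (x)$ for a few functionals $\varphi =(c_{1},c_{2}).$

```python
import math

def f(t):
    """A map [0, 1] -> R^2, integrated coordinatewise."""
    return (math.exp(-t), math.sin(t))

def integral(g, n=100000):
    """Midpoint Riemann sum, standing in for the Lebesgue integral."""
    h = 1.0 / n
    return sum(g((k + 0.5) * h) for k in range(n)) * h

e = (integral(lambda t: f(t)[0]), integral(lambda t: f(t)[1]))

# The defining identity <φ, e> = ∫ <φ, f> dμ for several functionals φ:
for c1, c2 in [(1.0, 0.0), (0.0, 1.0), (2.5, -1.0)]:
    lhs = c1 * e[0] + c2 * e[1]
    rhs = integral(lambda t: c1 * f(t)[0] + c2 * f(t)[1])
    assert abs(lhs - rhs) < 1e-9
```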
If $f$ is Pettis integrable and $A\in \Sigma $ is a measurable subset of $X,$ then by definition $f_{|A}:A\to V$ and $f\cdot 1_{A}:X\to V$ are also Pettis integrable and
$\int _{A}f_{|A}\,d\mu =\int _{X}f\cdot 1_{A}\,d\mu .$
If $X$ is a topological space, $\Sigma ={\mathfrak {B}}_{X}$ its Borel-$\sigma $-algebra, $\mu $ a Borel measure that assigns finite values to compact subsets, $V$ is quasi-complete (that is, every bounded Cauchy net converges) and if $f$ is continuous with compact support, then $f$ is Pettis integrable. More generally: If $f$ is weakly measurable and there exists a compact, convex $C\subseteq V$ and a null set $N\subseteq X$ such that $f(X\setminus N)\subseteq C,$ then $f$ is Pettis-integrable.
Law of large numbers for Pettis-integrable random variables
Let $(\Omega ,{\mathcal {F}},\operatorname {P} )$ be a probability space, and let $V$ be a topological vector space with a dual space that separates points. Let $v_{n}:\Omega \to V$ be a sequence of Pettis-integrable random variables, and write $\operatorname {E} [v_{n}]$ for the Pettis integral of $v_{n}$ (over $\Omega $). Note that $\operatorname {E} [v_{n}]$ is a (non-random) vector in $V,$ and is not a scalar value.
Let
${\bar {v}}_{N}:={\frac {1}{N}}\sum _{n=1}^{N}v_{n}$
denote the sample average. By linearity, ${\bar {v}}_{N}$ is Pettis integrable, and
$\operatorname {E} [{\bar {v}}_{N}]={\frac {1}{N}}\sum _{n=1}^{N}\operatorname {E} [v_{n}]\in V.$
Suppose that the partial sums
${\frac {1}{N}}\sum _{n=1}^{N}\operatorname {E} [v_{n}]$
converge absolutely in the topology of $V,$ in the sense that all rearrangements of the sum converge to a single vector $\lambda \in V.$ The weak law of large numbers implies that $\langle \varphi ,\operatorname {E} [{\bar {v}}_{N}]-\lambda \rangle \to 0$ for every functional $\varphi \in V^{*}.$ Consequently, $\operatorname {E} [{\bar {v}}_{N}]\to \lambda $ in the weak topology on $V.$
Without further assumptions, it is possible that $\operatorname {E} [{\bar {v}}_{N}]$ does not converge to $\lambda .$ To get strong convergence, more assumptions are necessary.
See also
• Bochner measurable function
• Bochner integral
• Bochner space – Mathematical concept
• Vector measure
• Weakly measurable function
References
• James K. Brooks, Representations of weak and strong integrals in Banach spaces, Proceedings of the National Academy of Sciences of the United States of America 63, 1969, 266–270. Fulltext MR0274697
• Israel M. Gel'fand, Sur un lemme de la théorie des espaces linéaires, Commun. Inst. Sci. Math. et Mecan., Univ. Kharkoff et Soc. Math. Kharkoff, IV. Ser. 13, 1936, 35–40 Zbl 0014.16202
• Michel Talagrand, Pettis Integral and Measure Theory, Memoirs of the AMS no. 307 (1984) MR0756174
• Sobolev, V. I. (2001) [1994], "Pettis integral", Encyclopedia of Mathematics, EMS Press
Integrals
Types of integrals
• Riemann integral
• Lebesgue integral
• Burkill integral
• Bochner integral
• Daniell integral
• Darboux integral
• Henstock–Kurzweil integral
• Haar integral
• Hellinger integral
• Khinchin integral
• Kolmogorov integral
• Lebesgue–Stieltjes integral
• Pettis integral
• Pfeffer integral
• Riemann–Stieltjes integral
• Regulated integral
Integration techniques
• Substitution
• Trigonometric
• Euler
• Weierstrass
• By parts
• Partial fractions
• Euler's formula
• Inverse functions
• Changing order
• Reduction formulas
• Parametric derivatives
• Differentiation under the integral sign
• Laplace transform
• Contour integration
• Laplace's method
• Numerical integration
• Simpson's rule
• Trapezoidal rule
• Risch algorithm
Improper integrals
• Gaussian integral
• Dirichlet integral
• Fermi–Dirac integral
• complete
• incomplete
• Bose–Einstein integral
• Frullani integral
• Common integrals in quantum field theory
Stochastic integrals
• Itô integral
• Russo–Vallois integral
• Stratonovich integral
• Skorokhod integral
Miscellaneous
• Basel problem
• Euler–Maclaurin formula
• Gabriel's horn
• Integration Bee
• Proof that 22/7 exceeds π
• Volumes
• Washers
• Shells
Analysis in topological vector spaces
Basic concepts
• Abstract Wiener space
• Classical Wiener space
• Bochner space
• Convex series
• Cylinder set measure
• Infinite-dimensional vector function
• Matrix calculus
• Vector calculus
Derivatives
• Differentiable vector–valued functions from Euclidean space
• Differentiation in Fréchet spaces
• Fréchet derivative
• Total
• Functional derivative
• Gateaux derivative
• Directional
• Generalizations of the derivative
• Hadamard derivative
• Holomorphic
• Quasi-derivative
Measurability
• Besov measure
• Cylinder set measure
• Canonical Gaussian
• Classical Wiener measure
• Measure like set functions
• infinite-dimensional Gaussian measure
• Projection-valued
• Vector
• Bochner / Weakly / Strongly measurable function
• Radonifying function
Integrals
• Bochner
• Direct integral
• Dunford
• Gelfand–Pettis/Weak
• Regulated
• Paley–Wiener
Results
• Cameron–Martin theorem
• Inverse function theorem
• Nash–Moser theorem
• Feldman–Hájek theorem
• No infinite-dimensional Lebesgue measure
• Sazonov's theorem
• Structure theorem for Gaussian measures
Related
• Crinkled arc
• Covariance operator
Functional calculus
• Borel functional calculus
• Continuous functional calculus
• Holomorphic functional calculus
Applications
• Banach manifold (bundle)
• Convenient vector space
• Choquet theory
• Fréchet manifold
• Hilbert manifold
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Weak interpretability
In mathematical logic, weak interpretability is a notion of translation of logical theories, introduced together with interpretability by Alfred Tarski in 1953.
Let T and S be formal theories. Slightly simplified, T is said to be weakly interpretable in S if, and only if, the language of T can be translated into the language of S in such a way that the translation of every theorem of T is consistent with S. Of course, there are some natural conditions on admissible translations here, such as the necessity for a translation to preserve the logical structure of formulas.
A generalization of weak interpretability, tolerance, was introduced by Giorgi Japaridze in 1992.
See also
• Interpretability logic
References
• Tarski, Alfred (1953), Undecidable theories, Studies in Logic and the Foundations of Mathematics, Amsterdam: North-Holland Publishing Company, MR 0058532. Written in collaboration with Andrzej Mostowski and Raphael M. Robinson.
• Dzhaparidze, Giorgie (1993), "A generalized notion of weak interpretability and the corresponding modal logic", Annals of Pure and Applied Logic, 61 (1–2): 113–160, doi:10.1016/0168-0072(93)90201-N, MR 1218658.
• Dzhaparidze, Giorgie (1992), "The logic of linear tolerance", Studia Logica, 51 (2): 249–277, doi:10.1007/BF00370116, MR 1185914
• Japaridze, Giorgi; de Jongh, Dick (1998), "The logic of provability", in Buss, Samuel R. (ed.), Handbook of Proof Theory, Stud. Logic Found. Math., vol. 137, Amsterdam: North-Holland, pp. 475–546, doi:10.1016/S0049-237X(98)80022-0, MR 1640331
Weak inverse
In mathematics, the term weak inverse is used with several meanings.
Theory of semigroups
In the theory of semigroups, a weak inverse of an element x in a semigroup (S, •) is an element y such that y • x • y = y. If every element has a weak inverse, the semigroup is called an E-inversive or E-dense semigroup. An E-inversive semigroup may equivalently be defined by requiring that for every element x ∈ S, there exists y ∈ S such that x • y and y • x are idempotents.[1]
An element x of S for which there is an element y of S such that x • y • x = x is called regular. A regular semigroup is a semigroup in which every element is regular. This is a stronger notion than weak inverse. Every regular semigroup is E-inversive, but not vice versa.[1]
If every element x in S has a unique inverse y in S in the sense that x • y • x = x and y • x • y = y then S is called an inverse semigroup.
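As an illustration (our own example, not taken from the reference), the monoid of all maps from {0, 1} to itself under composition can be searched by brute force. It turns out to be E-inversive: every element has a weak inverse, and composing an element with a weak inverse yields idempotents, matching the equivalent definition above.

```python
from itertools import product

# All four functions {0,1} -> {0,1}, encoded as tuples (f(0), f(1)):
elems = list(product((0, 1), repeat=2))

def mul(f, g):
    """Semigroup operation: composition, (f • g)(z) = f(g(z))."""
    return (f[g[0]], f[g[1]])

for x in elems:
    weak_inverses = [y for y in elems if mul(mul(y, x), y) == y]
    assert weak_inverses                 # E-inversive: every x has a weak inverse
    y = weak_inverses[0]
    e1, e2 = mul(x, y), mul(y, x)        # both products are idempotent
    assert mul(e1, e1) == e1 and mul(e2, e2) == e2
```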
Category theory
In category theory, a weak inverse of an object A in a monoidal category C with monoidal product ⊗ and unit object I is an object B such that both A ⊗ B and B ⊗ A are isomorphic to the unit object I of C. A monoidal category in which every morphism is invertible and every object has a weak inverse is called a 2-group.
See also
• Generalized inverse
• Von Neumann regular ring
References
1. John Fountain (2002). "An introduction to covers for semigroups". In Gracinda M. S. Gomes (ed.). Semigroups, Algorithms, Automata and Languages. World Scientific. pp. 167–168. ISBN 978-981-277-688-4. preprint
Limit cardinal
In mathematics, limit cardinals are certain cardinal numbers. A cardinal number λ is a weak limit cardinal if λ is neither a successor cardinal nor zero. This means that one cannot "reach" λ from another cardinal by repeated successor operations. These cardinals are sometimes called simply "limit cardinals" when the context is clear.
A cardinal λ is a strong limit cardinal if λ cannot be reached by repeated powerset operations. This means that λ is nonzero and, for all κ < λ, 2κ < λ. Every strong limit cardinal is also a weak limit cardinal, because κ+ ≤ 2κ for every cardinal κ, where κ+ denotes the successor cardinal of κ.
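The inequality $\kappa ^{+}\leq 2^{\kappa }$ concerns infinite cardinals, but it has a transparent finite analogue that can be checked directly: a set with $n$ elements has $2^{n}$ subsets, and $n+1\leq 2^{n}$ for every $n\geq 1$. The sketch below is that finite analogy only, not a claim about infinite cardinal arithmetic.

```python
from itertools import combinations

def powerset_size(n):
    """Count all subsets of an n-element set explicitly."""
    s = range(n)
    return sum(1 for k in range(n + 1) for _ in combinations(s, k))

for n in range(1, 12):
    assert powerset_size(n) == 2 ** n    # |P(S)| = 2^n
    assert n + 1 <= 2 ** n               # finite analogue of κ+ <= 2^κ
```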
The first infinite cardinal, $\aleph _{0}$ (aleph-naught), is a strong limit cardinal, and hence also a weak limit cardinal.
Constructions
One way to construct limit cardinals is via the union operation: $\aleph _{\omega }$ is a weak limit cardinal, defined as the union of all the alephs before it; and in general $\aleph _{\lambda }$ for any limit ordinal λ is a weak limit cardinal.
The ב operation can be used to obtain strong limit cardinals. This operation is a map from ordinals to cardinals defined as
$\beth _{0}=\aleph _{0},$
$\beth _{\alpha +1}=2^{\beth _{\alpha }},$ (the smallest ordinal equinumerous with the powerset)
If λ is a limit ordinal, $\beth _{\lambda }=\bigcup \{\beth _{\alpha }:\alpha <\lambda \}.$
The cardinal
$\beth _{\omega }=\bigcup \{\beth _{0},\beth _{1},\beth _{2},\ldots \}=\bigcup _{n<\omega }\beth _{n}$
is a strong limit cardinal of cofinality ω. More generally, given any ordinal α, the cardinal
$\beth _{\alpha +\omega }=\bigcup _{n<\omega }\beth _{\alpha +n}$
is a strong limit cardinal. Thus there are arbitrarily large strong limit cardinals.
Relationship with ordinal subscripts
If the axiom of choice holds, every cardinal number has an initial ordinal. If that initial ordinal is $\omega _{\lambda }\,,$ then the cardinal number is of the form $\aleph _{\lambda }$ for the same ordinal subscript λ. The ordinal λ determines whether $\aleph _{\lambda }$ is a weak limit cardinal. Because $\aleph _{\alpha ^{+}}=(\aleph _{\alpha })^{+}\,,$ if λ is a successor ordinal then $\aleph _{\lambda }$ is not a weak limit. Conversely, if a cardinal κ is a successor cardinal, say $\kappa =(\aleph _{\alpha })^{+}\,,$ then $\kappa =\aleph _{\alpha ^{+}}\,.$ Thus, in general, $\aleph _{\lambda }$ is a weak limit cardinal if and only if λ is zero or a limit ordinal.
Although the ordinal subscript tells us whether a cardinal is a weak limit, it does not tell us whether a cardinal is a strong limit. For example, ZFC proves that $\aleph _{\omega }$ is a weak limit cardinal, but neither proves nor disproves that $\aleph _{\omega }$ is a strong limit cardinal (Hrbacek and Jech 1999:168). The generalized continuum hypothesis states that $\kappa ^{+}=2^{\kappa }\,$ for every infinite cardinal κ. Under this hypothesis, the notions of weak and strong limit cardinals coincide.
The notion of inaccessibility and large cardinals
The preceding defines a notion of "inaccessibility": we are dealing with cases where it is no longer enough to do finitely many iterations of the successor and powerset operations; hence the phrase "cannot be reached" in both of the intuitive definitions above. But the "union operation" always provides another way of "accessing" these cardinals (and indeed, such is the case of limit ordinals as well). Stronger notions of inaccessibility can be defined using cofinality. For a weak (respectively strong) limit cardinal κ the requirement is that cf(κ) = κ (i.e. κ be regular) so that κ cannot be expressed as a sum (union) of fewer than κ smaller cardinals. Such a cardinal is called a weakly (respectively strongly) inaccessible cardinal. The preceding examples both are singular cardinals of cofinality ω and hence they are not inaccessible.
$\aleph _{0}$ would be an inaccessible cardinal of both "strengths" except that the definition of inaccessible requires that they be uncountable. Standard Zermelo–Fraenkel set theory with the axiom of choice (ZFC) cannot prove the existence of an inaccessible cardinal of either kind above $\aleph _{0}$; indeed, by Gödel's incompleteness theorem, ZFC cannot even prove that the existence of such a cardinal is consistent with ZFC. More specifically, if $\kappa $ is weakly inaccessible then $L_{\kappa }\models \mathrm {ZFC} $. These form the first in a hierarchy of large cardinals.
See also
• Cardinal number
References
• Hrbacek, Karel; Jech, Thomas (1999), Introduction to Set Theory (3 ed.), ISBN 0-8247-7915-0
• Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (third millennium ed.), Berlin, New York: Springer-Verlag, doi:10.1007/3-540-44761-X, ISBN 978-3-540-44085-7
• Kunen, Kenneth (1980), Set theory: An introduction to independence proofs, Elsevier, ISBN 978-0-444-86839-8
External links
• http://www.ii.com/math/cardinals/ Infinite ink on cardinals
Maximum principle
In the mathematical fields of partial differential equations and geometric analysis, the maximum principle is any of a collection of results and techniques of fundamental importance in the study of elliptic and parabolic differential equations.
This article describes the maximum principle in the theory of partial differential equations. For the maximum principle in optimal control theory, see Pontryagin's maximum principle. For the theorem in complex analysis, see Maximum modulus principle.
In the simplest case, consider a function of two variables u(x,y) such that
${\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.$
The weak maximum principle, in this setting, says that for any open precompact subset M of the domain of u, the maximum of u on the closure of M is achieved on the boundary of M. The strong maximum principle says that, unless u is a constant function, the maximum cannot also be achieved anywhere on M itself.
Such statements give a striking qualitative picture of solutions of the given differential equation. Such a qualitative picture can be extended to many kinds of differential equations. In many situations, one can also use such maximum principles to draw precise quantitative conclusions about solutions of differential equations, such as control over the size of their gradient. There is no single or most general maximum principle which applies to all situations at once.
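The weak maximum principle has an exact discrete analogue that is easy to test numerically. The sketch below is a minimal illustration (grid size, boundary data, and iteration count are arbitrary choices): it relaxes the 5-point discrete Laplace equation on the unit square by Jacobi iteration, so each interior value becomes an average of its neighbours, and the interior maximum can never exceed the boundary maximum.

```python
def g(x, y):
    """Boundary data on the unit square (any continuous data would do)."""
    return x * x - y * y

N = 20
u = [[0.0] * (N + 1) for _ in range(N + 1)]
for i in range(N + 1):
    for j in range(N + 1):
        if i in (0, N) or j in (0, N):
            u[i][j] = g(i / N, j / N)

for _ in range(1000):                    # Jacobi sweeps for the 5-point Laplacian
    v = [row[:] for row in u]
    for i in range(1, N):
        for j in range(1, N):
            v[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] + u[i][j - 1] + u[i][j + 1])
    u = v

boundary_max = max(u[i][j] for i in range(N + 1) for j in range(N + 1)
                   if i in (0, N) or j in (0, N))
interior_max = max(u[i][j] for i in range(1, N) for j in range(1, N))
assert interior_max <= boundary_max + 1e-9   # weak maximum principle, discretely
```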
In the field of convex optimization, there is an analogous statement which asserts that the maximum of a convex function on a compact convex set is attained on the boundary.[1]
Intuition
A partial formulation of the strong maximum principle
Here we consider the simplest case, although the same thinking can be extended to more general scenarios. Let M be an open subset of Euclidean space and let u be a C2 function on M such that
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}=0$
where for each i and j between 1 and n, aij is a function on M with aij = aji.
Fix some choice of x in M. According to the spectral theorem of linear algebra, all eigenvalues of the matrix [aij(x)] are real, and there is an orthonormal basis of ℝn consisting of eigenvectors. Denote the eigenvalues by λi and the corresponding eigenvectors by vi, for i from 1 to n. Then the differential equation, at the point x, can be rephrased as
$\sum _{i=1}^{n}\lambda _{i}\left.{\frac {d^{2}}{dt^{2}}}\right|_{t=0}{\big (}u(x+tv_{i}){\big )}=0.$
The essence of the maximum principle is the simple observation that if each eigenvalue is positive (which amounts to a certain formulation of "ellipticity" of the differential equation) then the above equation imposes a certain balancing of the directional second derivatives of the solution. In particular, if one of the directional second derivatives is negative, then another must be positive. At a hypothetical point where u is maximized, all directional second derivatives are automatically nonpositive, and the "balancing" represented by the above equation then requires all directional second derivatives to be identically zero.
This elementary reasoning could be argued to represent an infinitesimal formulation of the strong maximum principle, which states, under some extra assumptions (such as the continuity of a), that u must be constant if there is a point of M where u is maximized.
Note that the above reasoning is unaffected if one considers the more general partial differential equation
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}=0,$
since the added term is automatically zero at any hypothetical maximum point. The reasoning is also unaffected if one considers the more general condition
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0,$
in which one can even note the extra phenomena of having an outright contradiction if there is a strict inequality (> rather than ≥) in this condition at the hypothetical maximum point. This phenomenon is important in the formal proof of the classical weak maximum principle.
Non-applicability of the strong maximum principle
However, the above reasoning no longer applies if one considers the condition
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\leq 0,$
since now the "balancing" condition, as evaluated at a hypothetical maximum point of u, only says that a weighted average of manifestly nonpositive quantities is nonpositive. This is trivially true, and so one cannot draw any nontrivial conclusion from it. This is reflected by any number of concrete examples, such as the fact that
${\frac {\partial ^{2}}{\partial x^{2}}}{\big (}-x^{2}-y^{2}{\big )}+{\frac {\partial ^{2}}{\partial y^{2}}}{\big (}-x^{2}-y^{2}{\big )}\leq 0,$
and on any open region containing the origin, the function $-x^{2}-y^{2}$ certainly has a maximum.
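This counterexample is easy to check numerically. The sketch below (pure Python with a central-difference Laplacian; illustrative only, not part of the original argument) confirms that the Laplacian of $-x^{2}-y^{2}$ is $-4\leq 0$ everywhere while the function is maximized at the origin:

```python
# Illustrative check: u(x, y) = -x^2 - y^2 satisfies u_xx + u_yy = -4 <= 0
# everywhere, yet u attains its maximum (namely 0) at the origin.

def u(x, y):
    return -x**2 - y**2

def laplacian(f, x, y, h=1e-4):
    """Second-order central-difference approximation of f_xx + f_yy."""
    return (f(x + h, y) + f(x - h, y)
            + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

# The Laplacian is -4 at several sample points:
for (x, y) in [(0.0, 0.0), (0.5, -0.3), (1.0, 2.0)]:
    assert abs(laplacian(u, x, y) + 4.0) < 1e-3

# u(0, 0) dominates u on a grid around the origin:
grid = [(i / 10, j / 10) for i in range(-10, 11) for j in range(-10, 11)]
assert max(u(x, y) for (x, y) in grid) == u(0.0, 0.0)
```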
The classical weak maximum principle for linear elliptic PDE
The essential idea
Let M denote an open subset of Euclidean space. If a smooth function $u:M\to \mathbb {R} $ is maximized at a point p, then one automatically has:
• $(du)(p)=0$
• $(\nabla ^{2}u)(p)\leq 0,$ as a matrix inequality.
One can view a partial differential equation as the imposition of an algebraic relation between the various derivatives of a function. So, if u is the solution of a partial differential equation, then it is possible that the above conditions on the first and second derivatives of u form a contradiction to this algebraic relation. This is the essence of the maximum principle. Clearly, the applicability of this idea depends strongly on the particular partial differential equation in question.
For instance, if u solves the differential equation
$\Delta u=|du|^{2}+2,$
then it is clearly impossible to have $\Delta u\leq 0$ and $du=0$ at any point of the domain. So, following the above observation, it is impossible for u to take on a maximum value. If, instead, u solved the differential equation $\Delta u=|du|^{2}$ then one would not have such a contradiction, and the analysis given so far does not imply anything interesting. If u solved the differential equation $\Delta u=|du|^{2}-2,$ then the same analysis would show that u cannot take on a minimum value.
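The contradiction in the first example can be written out in one line:

```latex
% At a hypothetical interior maximum point p of u one has
%   du(p) = 0  \quad\text{and}\quad  (\Delta u)(p) \le 0,
% while the equation forces
(\Delta u)(p) \;=\; |du(p)|^{2} + 2 \;=\; 0 + 2 \;=\; 2 \;>\; 0,
% contradicting $(\Delta u)(p) \le 0$; hence u attains no interior maximum.
```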
The possibility of such analysis is not even limited to partial differential equations. For instance, if $u:M\to \mathbb {R} $ is a function such that
$\Delta u-|du|^{4}=\int _{M}e^{u(x)}\,dx,$
which is a sort of "non-local" differential equation, then the automatic strict positivity of the right-hand side shows, by the same analysis as above, that u cannot attain a maximum value.
There are many methods to extend the applicability of this kind of analysis in various ways. For instance, if u is a harmonic function, then the above sort of contradiction does not directly occur, since the existence of a point p where $\Delta u(p)\leq 0$ is not in contradiction to the requirement $\Delta u=0$ everywhere. However, one could consider, for an arbitrary real number s, the function us defined by
$u_{s}(x)=u(x)+se^{x_{1}}.$
It is straightforward to see that
$\Delta u_{s}=se^{x_{1}}.$
By the above analysis, if $s>0$ then us cannot attain a maximum value. One might wish to consider the limit as s tends to 0 in order to conclude that u also cannot attain a maximum value. However, it is possible for the pointwise limit of a sequence of functions without maxima to have a maximum. Nonetheless, if M has a boundary such that M together with its boundary is compact, then supposing that u can be continuously extended to the boundary, it follows immediately that both u and us attain a maximum value on $M\cup \partial M.$ Since we have shown that us, as a function on M, does not have a maximum, it follows that the maximum point of us, for any s, is on $\partial M.$ By the sequential compactness of $\partial M,$ it follows that the maximum of u is attained on $\partial M.$ This is the weak maximum principle for harmonic functions. This does not, by itself, rule out the possibility that the maximum of u is also attained somewhere on M. That is the content of the "strong maximum principle," which requires further analysis.
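The key computation $\Delta u_{s}=se^{x_{1}}$ can be checked numerically. The sketch below is an illustration only: it uses the assumed concrete harmonic function $u(x,y)=x^{2}-y^{2}$ (not taken from the text) and verifies that adding $se^{x}$ makes the Laplacian strictly positive:

```python
import math

# Illustrative check with an assumed harmonic function u(x, y) = x^2 - y^2:
# u_s(x, y) = u(x, y) + s * exp(x) has Laplacian s * exp(x) > 0 for s > 0.

s = 0.25

def u(x, y):
    return x**2 - y**2

def u_s(x, y):
    return u(x, y) + s * math.exp(x)

def laplacian(f, x, y, h=1e-4):
    """Second-order central-difference approximation of f_xx + f_yy."""
    return (f(x + h, y) + f(x - h, y)
            + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / h**2

for (x, y) in [(0.0, 0.0), (0.7, -0.4), (-1.2, 0.9)]:
    assert abs(laplacian(u, x, y)) < 1e-4                    # u is harmonic
    assert abs(laplacian(u_s, x, y) - s * math.exp(x)) < 1e-4
    assert laplacian(u_s, x, y) > 0    # so u_s cannot have an interior maximum
```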
The use of the specific function $e^{x_{1}}$ above was inessential. All that mattered was to have a function which extends continuously to the boundary and whose Laplacian is strictly positive. So we could have used, for instance,
$u_{s}(x)=u(x)+s|x|^{2}$
with the same effect.
The classical strong maximum principle for linear elliptic PDE
Summary of proof
Let M be an open subset of Euclidean space. Let $u:M\to \mathbb {R} $ be a twice-differentiable function which attains its maximum value C. Suppose that
$a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0.$
Suppose that one can find (or prove the existence of):
• a compact subset Ω of M, with nonempty interior, such that u(x) < C for all x in the interior of Ω, and such that there exists x0 on the boundary of Ω with u(x0) = C.
• a continuous function $h:\Omega \to \mathbb {R} $ which is twice-differentiable on the interior of Ω and with
$a_{ij}{\frac {\partial ^{2}h}{\partial x^{i}\,\partial x^{j}}}+b_{i}{\frac {\partial h}{\partial x^{i}}}\geq 0,$
and such that one has u + h ≤ C on the boundary of Ω with h(x0) = 0
Then L(u + h − C) ≥ 0 on Ω, where L denotes the differential operator appearing above, and u + h − C ≤ 0 on the boundary of Ω; according to the weak maximum principle, one has u + h − C ≤ 0 on Ω. This can be reorganized to say
$-{\frac {u(x)-u(x_{0})}{|x-x_{0}|}}\geq {\frac {h(x)-h(x_{0})}{|x-x_{0}|}}$
for all x in Ω. If one can make the choice of h so that the right-hand side is manifestly positive, then this provides a contradiction to the fact that x0 is a maximum point of u on M, at which the gradient of u must vanish.
Proof
The above "program" can be carried out. Choose Ω to be a spherical annulus; one selects its center xc to be a point closer to the closed set $u^{-1}(C)$ than to the closed set ∂M, and the outer radius R is selected to be the distance from this center to $u^{-1}(C)$; let x0 be a point on this latter set which realizes the distance. The inner radius ρ is arbitrary. Define
$h(x)=\varepsilon {\Big (}e^{-\alpha |x-x_{\text{c}}|^{2}}-e^{-\alpha R^{2}}{\Big )}.$
Now the boundary of Ω consists of two spheres; on the outer sphere, one has h = 0; due to the selection of R, one has u ≤ C on this sphere, and so u + h − C ≤ 0 holds on this part of the boundary, together with the requirement h(x0) = 0. On the inner sphere, one has u < C. Due to the continuity of u and the compactness of the inner sphere, one can select δ > 0 such that u + δ < C. Since h is constant on this inner sphere, one can select ε > 0 such that u + h ≤ C on the inner sphere, and hence on the entire boundary of Ω.
Direct calculation shows
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}h}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial h}{\partial x^{i}}}=\varepsilon \alpha e^{-\alpha |x-x_{\text{c}}|^{2}}\left(4\alpha \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}(x){\big (}x^{i}-x_{\text{c}}^{i}{\big )}{\big (}x^{j}-x_{\text{c}}^{j}{\big )}-2\sum _{i=1}^{n}a_{ii}-2\sum _{i=1}^{n}b_{i}{\big (}x^{i}-x_{\text{c}}^{i}{\big )}\right).$
There are various conditions under which the right-hand side can be guaranteed to be nonnegative; see the statement of the theorem below.
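As a sanity check of the displayed identity, the sketch below evaluates the special case $a_{ij}=\delta_{ij}$, $b_{i}=0$, $n=2$ (chosen purely for illustration; the theorem covers far more general coefficients), in which the right-hand side reduces to $\varepsilon\alpha e^{-\alpha r^{2}}(4\alpha r^{2}-4)$ with $r=|x-x_{\text{c}}|$:

```python
import math

# Illustrative special case a_ij = delta_ij, b_i = 0, n = 2: then the left-hand
# side is just the Laplacian of h, and the displayed formula predicts
#   Lh = eps * alpha * exp(-alpha r^2) * (4 * alpha * r^2 - 4).

eps, alpha, R = 1.0, 1.5, 2.0
xc = (0.3, -0.2)   # assumed concrete center, for the sketch

def h(x, y):
    r2 = (x - xc[0])**2 + (y - xc[1])**2
    return eps * (math.exp(-alpha * r2) - math.exp(-alpha * R**2))

def laplacian(f, x, y, step=1e-4):
    """Second-order central-difference approximation of f_xx + f_yy."""
    return (f(x + step, y) + f(x - step, y)
            + f(x, y + step) + f(x, y - step) - 4 * f(x, y)) / step**2

def formula(x, y):
    r2 = (x - xc[0])**2 + (y - xc[1])**2
    return eps * alpha * math.exp(-alpha * r2) * (4 * alpha * r2 - 4)

for (x, y) in [(1.0, 0.5), (0.8, -1.1), (1.9, 0.0)]:
    assert abs(laplacian(h, x, y) - formula(x, y)) < 1e-4
```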
Lastly, note that the directional derivative of h at x0 along the inward-pointing radial line of the annulus is strictly positive. As described in the above summary, this will ensure that a directional derivative of u at x0 is nonzero, in contradiction to x0 being a maximum point of u on the open set M.
Statement of the theorem
The following is the statement of the theorem in the books of Morrey and Smoller, following the original statement of Hopf (1927):
Let M be an open subset of Euclidean space ℝn. For each i and j between 1 and n, let aij and bi be continuous functions on M with aij = aji. Suppose that for all x in M, the symmetric matrix [aij] is positive-definite. If u is a nonconstant C2 function on M such that
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0$
on M, then u does not attain a maximum value on M.
The point of the continuity assumption is that continuous functions are bounded on compact sets, the relevant compact set here being the spherical annulus appearing in the proof. Furthermore, by the same principle, there is a number λ such that for all x in the annulus, the matrix [aij(x)] has all eigenvalues greater than or equal to λ. One then takes α, as appearing in the proof, to be large relative to these bounds. Evans's book has a slightly weaker formulation, in which there is assumed to be a positive number λ which is a lower bound of the eigenvalues of [aij] for all x in M.
These continuity assumptions are clearly not the most general possible in order for the proof to work. For instance, the following is Gilbarg and Trudinger's statement of the theorem, following the same proof:
Let M be an open subset of Euclidean space ℝn. For each i and j between 1 and n, let aij and bi be functions on M with aij = aji. Suppose that for all x in M, the symmetric matrix [aij] is positive-definite, and let λ(x) denote its smallest eigenvalue. Suppose that $\textstyle {\frac {a_{ii}}{\lambda }}$ and $\textstyle {\frac {|b_{i}|}{\lambda }}$ are bounded functions on M for each i between 1 and n. If u is a nonconstant C2 function on M such that
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0$
on M, then u does not attain a maximum value on M.
One cannot naively extend these statements to the general second-order linear elliptic equation, as already seen in the one-dimensional case. For instance, the ordinary differential equation y″ + 2y = 0 has sinusoidal solutions, which certainly have interior maxima. This extends to the higher-dimensional case, where one often has solutions to "eigenfunction" equations Δu + cu = 0 which have interior maxima. The sign of c is relevant, as also seen in the one-dimensional case; for instance the solutions to y″ − 2y = 0 are exponentials, and the character of the maxima of such functions is quite different from that of sinusoidal functions.
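The one-dimensional examples can be checked directly. The sketch below (illustrative, not from the text) verifies by finite differences that $\sin(\sqrt{2}x)$ solves y″ + 2y = 0 and has an interior maximum, while $e^{\sqrt{2}x}$ solves y″ − 2y = 0:

```python
import math

# y(x) = sin(sqrt(2) x) solves y'' + 2y = 0 and is maximized inside (0, pi);
# y(x) = exp(sqrt(2) x) solves y'' - 2y = 0 and is monotone increasing.

s = math.sqrt(2.0)

def sin_sol(x):
    return math.sin(s * x)

def exp_sol(x):
    return math.exp(s * x)

def second_derivative(f, x, h=1e-4):
    """Central-difference approximation of f''(x)."""
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

for x in [0.1, 0.7, 1.3]:
    assert abs(second_derivative(sin_sol, x) + 2 * sin_sol(x)) < 1e-4
    assert abs(second_derivative(exp_sol, x) - 2 * exp_sol(x)) < 1e-4

# sin(sqrt(2) x) attains its maximum at x = pi / (2 sqrt(2)), interior to (0, pi):
x_max = math.pi / (2 * s)
assert abs(sin_sol(x_max) - 1.0) < 1e-12
```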
See also
• Maximum modulus principle
• Hopf maximum principle
Notes
1. Chapter 32 of Rockafellar (1970).
References
Research articles
• Calabi, E. An extension of E. Hopf's maximum principle with an application to Riemannian geometry. Duke Math. J. 25 (1958), 45–56.
• Cheng, S.Y.; Yau, S.T. Differential equations on Riemannian manifolds and their geometric applications. Comm. Pure Appl. Math. 28 (1975), no. 3, 333–354.
• Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry and related properties via the maximum principle. Comm. Math. Phys. 68 (1979), no. 3, 209–243.
• Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry of positive solutions of nonlinear elliptic equations in Rn. Mathematical analysis and applications, Part A, pp. 369–402, Adv. in Math. Suppl. Stud., 7a, Academic Press, New York-London, 1981.
• Hamilton, Richard S. Four-manifolds with positive curvature operator. J. Differential Geom. 24 (1986), no. 2, 153–179.
• Hopf, E. Elementare Bemerkungen über die Lösungen partieller Differentialgleichungen zweiter Ordnung vom elliptischen Typus. Sitber. Preuss. Akad. Wiss. Berlin 19 (1927), 147–152.
• Hopf, Eberhard. A remark on linear elliptic differential equations of second order. Proc. Amer. Math. Soc. 3 (1952), 791–793.
• Nirenberg, Louis. A strong maximum principle for parabolic equations. Comm. Pure Appl. Math. 6 (1953), 167–177.
• Omori, Hideki. Isometric immersions of Riemannian manifolds. J. Math. Soc. Jpn. 19 (1967), 205–214.
• Yau, Shing Tung. Harmonic functions on complete Riemannian manifolds. Comm. Pure Appl. Math. 28 (1975), 201–228.
• Kreyberg, H. J. A. On the maximum principle of optimal control in economic processes. Trondheim: NTH, Sosialøkonomisk institutt, 1969. OCLC 23714026.
Textbooks
• Caffarelli, Luis A.; Xavier Cabre (1995). Fully Nonlinear Elliptic Equations. Providence, Rhode Island: American Mathematical Society. pp. 31–41. ISBN 0-8218-0437-5.
• Evans, Lawrence C. Partial differential equations. Second edition. Graduate Studies in Mathematics, 19. American Mathematical Society, Providence, RI, 2010. xxii+749 pp. ISBN 978-0-8218-4974-3
• Friedman, Avner. Partial differential equations of parabolic type. Prentice-Hall, Inc., Englewood Cliffs, N.J. 1964 xiv+347 pp.
• Gilbarg, David; Trudinger, Neil S. Elliptic partial differential equations of second order. Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. xiv+517 pp. ISBN 3-540-41160-7
• Ladyženskaja, O. A.; Solonnikov, V. A.; Uralʹceva, N. N. Linear and quasilinear equations of parabolic type. Translated from the Russian by S. Smith. Translations of Mathematical Monographs, Vol. 23 American Mathematical Society, Providence, R.I. 1968 xi+648 pp.
• Ladyzhenskaya, Olga A.; Ural'tseva, Nina N. Linear and quasilinear elliptic equations. Translated from the Russian by Scripta Technica, Inc. Translation editor: Leon Ehrenpreis. Academic Press, New York-London 1968 xviii+495 pp.
• Lieberman, Gary M. Second order parabolic differential equations. World Scientific Publishing Co., Inc., River Edge, NJ, 1996. xii+439 pp. ISBN 981-02-2883-X
• Morrey, Charles B., Jr. Multiple integrals in the calculus of variations. Reprint of the 1966 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2008. x+506 pp. ISBN 978-3-540-69915-6
• Protter, Murray H.; Weinberger, Hans F. Maximum principles in differential equations. Corrected reprint of the 1967 original. Springer-Verlag, New York, 1984. x+261 pp. ISBN 0-387-96068-6
• Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
• Smoller, Joel. Shock waves and reaction-diffusion equations. Second edition. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 258. Springer-Verlag, New York, 1994. xxiv+632 pp. ISBN 0-387-94259-9
Weak n-category
In category theory, a weak n-category is a generalization of the notion of strict n-category where composition and identities are not strictly associative and unital, but only associative and unital up to coherent equivalence. This generalization only becomes noticeable at dimensions two and above, where weak 2-, 3- and 4-categories are typically referred to as bicategories, tricategories, and tetracategories. The subject of weak n-categories is an area of ongoing research.
History
There is currently much work to determine what the coherence laws for weak n-categories should be. Weak n-categories have become the main object of study in higher category theory. There are basically two classes of theories: those in which the higher cells and higher compositions are realized algebraically (most notably Michael Batanin's theory of weak higher categories) and those in which more topological models are used (e.g. a higher category as a simplicial set satisfying some universality properties).
In a terminology due to John Baez and James Dolan, an (n, k)-category is a weak n-category such that all h-cells for h > k are invertible. Some of the formalisms for (n, k)-categories are much simpler than those for general n-categories. In particular, several technically accessible formalisms of (infinity, 1)-categories are now known. The most popular such formalism centers on the notion of a quasi-category; other approaches include a properly understood theory of simplicially enriched categories and the approach via Segal categories. A class of examples of stable (infinity, 1)-categories can be modeled (in the case of characteristic zero) via the pretriangulated A-infinity categories of Maxim Kontsevich. Quillen model categories are viewed as a presentation of an (infinity, 1)-category; however, not all (infinity, 1)-categories can be presented via model categories.
See also
• Bicategory
• Tricategory
• Tetracategory
• Infinity category
• Opetope
• Stabilization hypothesis
External links
• n-Categories – Sketch of a Definition by John Baez
• Lectures on n-Categories and Cohomology by John Baez
• Tom Leinster, Higher operads, higher categories, math.CT/0305049
• Simpson, Carlos (2012). Homotopy theory of higher categories. New Mathematical Monographs. Vol. 19. Cambridge: Cambridge University Press. arXiv:1001.4071. Bibcode:2010arXiv1001.4071S. ISBN 9781139502191. MR 2883823.
• Jacob Lurie, Higher topos theory, math.CT/0608040, published version: pdf
Weak operator topology
In functional analysis, the weak operator topology, often abbreviated WOT, is the weakest topology on the set of bounded operators on a Hilbert space $H$, such that the functional sending an operator $T$ to the complex number $\langle Tx,y\rangle $ is continuous for any vectors $x$ and $y$ in the Hilbert space.
Explicitly, for an operator $T$ there is a base of neighborhoods of the following type: choose a finite number of vectors $x_{i}$, continuous functionals $y_{i}$, and positive real constants $\varepsilon _{i}$ indexed by the same finite set $I$. An operator $S$ lies in the neighborhood if and only if $|y_{i}(T(x_{i})-S(x_{i}))|<\varepsilon _{i}$ for all $i\in I$.
Equivalently, a net $\{T_{i}\}\subseteq B(H)$ of bounded operators converges to $T\in B(H)$ in WOT if for all $y\in H^{*}$ and $x\in H$, the net $y(T_{i}x)$ converges to $y(Tx)$.
Relationship with other topologies on B(H)
The WOT is the weakest among all common topologies on $B(H)$, the bounded operators on a Hilbert space $H$.
Strong operator topology
The strong operator topology, or SOT, on $B(H)$ is the topology of pointwise convergence. Because the inner product is a continuous function, the SOT is stronger than the WOT. The following example shows that this inclusion is strict. Let $H=\ell ^{2}(\mathbb {N} )$ and consider the sequence $\{T^{n}\}$ of powers of the unilateral shift $T$. An application of the Cauchy–Schwarz inequality shows that $T^{n}\to 0$ in WOT. But clearly $T^{n}$ does not converge to $0$ in SOT.
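A finite, purely illustrative model of this example (an assumption of the sketch: vectors are represented as finitely supported sequences, i.e. Python lists, since $\ell^{2}$ itself is infinite-dimensional):

```python
# Powers of the unilateral shift T converge to 0 in WOT but not in SOT:
# for finitely supported x, y one has <T^n x, y> = 0 eventually, while
# ||T^n x|| = ||x|| for every n.

def shift_n(x, n):
    """Apply the unilateral shift n times: prepend n zeros."""
    return [0.0] * n + list(x)

def inner(x, y):
    # zip truncates to the shorter list; valid here because both sequences
    # are finitely supported.
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return inner(x, x) ** 0.5

x = [1.0, -2.0, 0.5]        # a fixed vector
y = [3.0, 1.0, 0.0, -1.0]   # another fixed vector

# <T^n x, y> vanishes once the support of T^n x has passed that of y...
assert inner(shift_n(x, 10), y) == 0.0
# ...but ||T^n x|| = ||x||, so T^n x does not converge to 0 in norm.
assert abs(norm(shift_n(x, 10)) - norm(x)) < 1e-12
```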
The linear functionals on the set of bounded operators on a Hilbert space that are continuous in the strong operator topology are precisely those that are continuous in the WOT (actually, the WOT is the weakest operator topology that leaves continuous all strongly continuous linear functionals on the set $B(H)$ of bounded operators on the Hilbert space H). Because of this fact, the closure of a convex set of operators in the WOT is the same as the closure of that set in the SOT.
It follows from the polarization identity that a net $\{T_{\alpha }\}$ converges to $0$ in SOT if and only if $T_{\alpha }^{*}T_{\alpha }\to 0$ in WOT.
Weak-star operator topology
The predual of B(H) is the trace class operators C1(H), and it generates the w*-topology on B(H), called the weak-star operator topology or σ-weak topology. The weak-operator and σ-weak topologies agree on norm-bounded sets in B(H).
A net {Tα} ⊂ B(H) converges to T in WOT if and only if Tr(TαF) converges to Tr(TF) for all finite-rank operators F. Since every finite-rank operator is trace-class, this implies that the WOT is weaker than the σ-weak topology. To see why the claim is true, recall that every finite-rank operator F is a finite sum
$F=\sum _{i=1}^{n}\lambda _{i}u_{i}v_{i}^{*}.$
So {Tα} converges to T in WOT means
${\text{Tr}}\left(T_{\alpha }F\right)=\sum _{i=1}^{n}\lambda _{i}v_{i}^{*}\left(T_{\alpha }u_{i}\right)\longrightarrow \sum _{i=1}^{n}\lambda _{i}v_{i}^{*}\left(Tu_{i}\right)={\text{Tr}}(TF).$
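In finite dimensions the trace identity above is a direct matrix computation. The sketch below (with assumed concrete choices of $T$, $u$, $v$, purely for illustration, in the real case where $v^{*}$ is the functional $x\mapsto \langle x,v\rangle$) checks $\operatorname{Tr}(TF)=\lambda \,v^{*}(Tu)$ for a rank-one $F=\lambda uv^{*}$:

```python
# Rank-one check: for F = lam * u v^T (real case), Tr(T F) = lam * <T u, v>.

def outer(u, v):
    """The matrix u v^T with entries u[i] * v[j] (real scalars)."""
    return [[ui * vj for vj in v] for ui in u]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

T = [[2.0, 1.0],
     [0.0, 3.0]]
u, v, lam = [1.0, 2.0], [0.5, -1.0], 4.0

F = [[lam * e for e in row] for row in outer(u, v)]
Tu = [sum(T[i][j] * u[j] for j in range(2)) for i in range(2)]

# Tr(T F) agrees with lam * <T u, v>:
assert abs(trace(matmul(T, F)) - lam * sum(a * b for a, b in zip(Tu, v))) < 1e-12
```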
Extending slightly, one can say that the weak-operator and σ-weak topologies agree on norm-bounded sets in B(H): Every trace-class operator is of the form
$S=\sum _{i}\lambda _{i}u_{i}v_{i}^{*},$
where the series $\sum \nolimits _{i}\lambda _{i}$ converges. Suppose $\sup \nolimits _{\alpha }\|T_{\alpha }\|=k<\infty ,$ and $T_{\alpha }\to T$ in WOT. For every trace-class S,
${\text{Tr}}\left(T_{\alpha }S\right)=\sum _{i}\lambda _{i}v_{i}^{*}\left(T_{\alpha }u_{i}\right)\longrightarrow \sum _{i}\lambda _{i}v_{i}^{*}\left(Tu_{i}\right)={\text{Tr}}(TS),$
by invoking, for instance, the dominated convergence theorem.
Therefore every norm-bounded set is relatively compact in WOT, by the Banach–Alaoglu theorem.
Other properties
The adjoint operation T → T*, as an immediate consequence of its definition, is continuous in WOT.
Multiplication is not jointly continuous in WOT: again let $T$ be the unilateral shift. Appealing to Cauchy–Schwarz, one has that both Tn and T*n converge to 0 in WOT. But T*nTn is the identity operator for all $n$. (Because WOT coincides with the σ-weak topology on bounded sets, multiplication is not jointly continuous in the σ-weak topology.)
However, a weaker claim can be made: multiplication is separately continuous in WOT. If a net Ti → T in WOT, then STi → ST and TiS → TS in WOT.
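The identity $T^{*n}T^{n}=I$ behind the failure of joint continuity can be seen concretely. In the sketch below (an illustration with finitely supported sequences modeled as Python lists), $T$ prepends a zero and $T^{*}$ drops the first entry:

```python
# With T the unilateral shift, T^n -> 0 and T*^n -> 0 in WOT, yet
# T*^n T^n = I for every n: applying T n times and then T* n times
# recovers the original vector exactly.

def forward(x):   # T: prepend a zero
    return [0.0] + list(x)

def backward(x):  # T*: drop the first entry
    return list(x[1:])

def apply_n(op, x, n):
    for _ in range(n):
        x = op(x)
    return x

x = [1.0, 2.0, 3.0]
n = 7
assert apply_n(backward, apply_n(forward, x, n), n) == x
```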
SOT and WOT on B(X,Y) when X and Y are normed spaces
We can extend the definitions of SOT and WOT to the more general setting where X and Y are normed spaces and $B(X,Y)$ is the space of bounded linear operators of the form $T:X\to Y$. In this case, each pair $x\in X$ and $y^{*}\in Y^{*}$ defines a seminorm $\|\cdot \|_{x,y^{*}}$ on $B(X,Y)$ via the rule $\|T\|_{x,y^{*}}=|y^{*}(Tx)|$. The resulting family of seminorms generates the weak operator topology on $B(X,Y)$. Equivalently, the WOT on $B(X,Y)$ is formed by taking for basic open neighborhoods those sets of the form
$N(T,F,\Lambda ,\epsilon ):=\left\{S\in B(X,Y):\left|y^{*}((S-T)x)\right|<\epsilon ,x\in F,y^{*}\in \Lambda \right\},$
where $T\in B(X,Y),F\subseteq X$ is a finite set, $\Lambda \subseteq Y^{*}$ is also a finite set, and $\epsilon >0$. The space $B(X,Y)$ is a locally convex topological vector space when endowed with the WOT.
The strong operator topology on $B(X,Y)$ is generated by the family of seminorms $\|\cdot \|_{x},x\in X,$ via the rule $\|T\|_{x}=\|Tx\|$. Thus, a topological base for the SOT is given by open neighborhoods of the form
$N(T,F,\epsilon ):=\{S\in B(X,Y):\|(S-T)x\|<\epsilon ,x\in F\},$
where as before $T\in B(X,Y),F\subseteq X$ is a finite set, and $\epsilon >0.$
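In finite dimensions the seminorms defining the two topologies are easy to compute explicitly. The sketch below (with assumed concrete matrices and vectors, purely for illustration) evaluates $\|T\|_{x,y^{*}}=|y^{*}(Tx)|$ and $\|T\|_{x}=\|Tx\|$ for $X=Y=\mathbb{R}^{3}$, with functionals represented as row vectors:

```python
# WOT seminorm ||T||_{x, y*} = |y*(T x)| and SOT seminorm ||T||_x = ||T x||
# for operators on R^3 represented as matrices.

def matvec(T, x):
    return [sum(T[i][j] * x[j] for j in range(len(x))) for i in range(len(T))]

def wot_seminorm(T, x, ystar):
    return abs(sum(a * b for a, b in zip(ystar, matvec(T, x))))

def sot_seminorm(T, x):
    return sum(v * v for v in matvec(T, x)) ** 0.5

T = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 3.0]]
x = [1.0, 1.0, 1.0]
ystar = [1.0, -1.0, 0.0]

# T x = (1, 2, 3), so y*(T x) = 1 - 2 + 0 = -1 and ||T x|| = sqrt(14):
assert wot_seminorm(T, x, ystar) == 1.0
assert abs(sot_seminorm(T, x) - 14 ** 0.5) < 1e-12
```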
Relationships between different topologies on B(X,Y)
The different terminology for the various topologies on $B(X,Y)$ can sometimes be confusing. For instance, "strong convergence" for vectors in a normed space sometimes refers to norm-convergence, which is very often distinct from (and stronger than) SOT-convergence when the normed space in question is $B(X,Y)$. The weak topology on a normed space $X$ is the coarsest topology that makes the linear functionals in $X^{*}$ continuous; when we take $B(X,Y)$ in place of $X$, the weak topology can be very different from the weak operator topology. And while the WOT is formally weaker than the SOT, the SOT is weaker than the operator norm topology.
In general, the following inclusions hold:
$\{{\text{WOT-open sets in }}B(X,Y)\}\subseteq \{{\text{SOT-open sets in }}B(X,Y)\}\subseteq \{{\text{operator-norm-open sets in }}B(X,Y)\},$
and these inclusions may or may not be strict depending on the choices of $X$ and $Y$.
The WOT on $B(X,Y)$ is a formally weaker topology than the SOT, but they nevertheless share some important properties. For example,
$(B(X,Y),{\text{SOT}})^{*}=(B(X,Y),{\text{WOT}})^{*}.$
Consequently, if $S\subseteq B(X,Y)$ is convex then
${\overline {S}}^{\text{SOT}}={\overline {S}}^{\text{WOT}},$
in other words, SOT-closure and WOT-closure coincide for convex sets.
See also
• Topologies on the set of operators on a Hilbert space
• Weak topology – Mathematical term
• Weak-star operator topology
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Banach space topics
Types of Banach spaces
• Asplund
• Banach
• list
• Banach lattice
• Grothendieck
• Hilbert
• Inner product space
• Polarization identity
• (Polynomially) Reflexive
• Riesz
• L-semi-inner product
• (B
• Strictly
• Uniformly) convex
• Uniformly smooth
• (Injective
• Projective) Tensor product (of Hilbert spaces)
Banach spaces are:
• Barrelled
• Complete
• F-space
• Fréchet
• tame
• Locally convex
• Seminorms/Minkowski functionals
• Mackey
• Metrizable
• Normed
• norm
• Quasinormed
• Stereotype
Function space Topologies
• Banach–Mazur compactum
• Dual
• Dual space
• Dual norm
• Operator
• Ultraweak
• Weak
• polar
• operator
• Strong
• polar
• operator
• Ultrastrong
• Uniform convergence
Linear operators
• Adjoint
• Bilinear
• form
• operator
• sesquilinear
• (Un)Bounded
• Closed
• Compact
• on Hilbert spaces
• (Dis)Continuous
• Densely defined
• Fredholm
• kernel
• operator
• Hilbert–Schmidt
• Functionals
• positive
• Pseudo-monotone
• Normal
• Nuclear
• Self-adjoint
• Strictly singular
• Trace class
• Transpose
• Unitary
Operator theory
• Banach algebras
• C*-algebras
• Operator space
• Spectrum
• C*-algebra
• radius
• Spectral theory
• of ODEs
• Spectral theorem
• Polar decomposition
• Singular value decomposition
Theorems
• Anderson–Kadec
• Banach–Alaoglu
• Banach–Mazur
• Banach–Saks
• Banach–Schauder (open mapping)
• Banach–Steinhaus (Uniform boundedness)
• Bessel's inequality
• Cauchy–Schwarz inequality
• Closed graph
• Closed range
• Eberlein–Šmulian
• Freudenthal spectral
• Gelfand–Mazur
• Gelfand–Naimark
• Goldstine
• Hahn–Banach
• hyperplane separation
• Kakutani fixed-point
• Krein–Milman
• Lomonosov's invariant subspace
• Mackey–Arens
• Mazur's lemma
• M. Riesz extension
• Parseval's identity
• Riesz's lemma
• Riesz representation
• Robinson-Ursescu
• Schauder fixed-point
Analysis
• Abstract Wiener space
• Banach manifold
• bundle
• Bochner space
• Convex series
• Differentiation in Fréchet spaces
• Derivatives
• Fréchet
• Gateaux
• functional
• holomorphic
• quasi
• Integrals
• Bochner
• Dunford
• Gelfand–Pettis
• regulated
• Paley–Wiener
• weak
• Functional calculus
• Borel
• continuous
• holomorphic
• Measures
• Lebesgue
• Projection-valued
• Vector
• Weakly / Strongly measurable function
Types of sets
• Absolutely convex
• Absorbing
• Affine
• Balanced/Circled
• Bounded
• Convex
• Convex cone (subset)
• Convex series related ((cs, lcs)-closed, (cs, bcs)-complete, (lower) ideally convex, (Hx), and (Hwx))
• Linear cone (subset)
• Radial
• Radially convex/Star-shaped
• Symmetric
• Zonotope
Subsets / set operations
• Affine hull
• (Relative) Algebraic interior (core)
• Bounding points
• Convex hull
• Extreme point
• Interior
• Linear span
• Minkowski addition
• Polar
• (Quasi) Relative interior
Examples
• Absolute continuity AC
• $ba(\Sigma )$
• c space
• Banach coordinate BK
• Besov $B_{p,q}^{s}(\mathbb {R} )$
• Birnbaum–Orlicz
• Bounded variation BV
• Bs space
• Continuous C(K) with K compact Hausdorff
• Hardy Hp
• Hilbert H
• Morrey–Campanato $L^{\lambda ,p}(\Omega )$
• ℓp
• $\ell ^{\infty }$
• Lp
• $L^{\infty }$
• weighted
• Schwartz $S\left(\mathbb {R} ^{n}\right)$
• Segal–Bargmann F
• Sequence space
• Sobolev Wk,p
• Sobolev inequality
• Triebel–Lizorkin
• Wiener amalgam $W(X,L^{p})$
Applications
• Differential operator
• Finite element method
• Mathematical formulation of quantum mechanics
• Ordinary Differential Equations (ODEs)
• Validated numerics
Hilbert spaces
Basic concepts
• Adjoint
• Inner product and L-semi-inner product
• Hilbert space and Prehilbert space
• Orthogonal complement
• Orthonormal basis
Main results
• Bessel's inequality
• Cauchy–Schwarz inequality
• Riesz representation
Other results
• Hilbert projection theorem
• Parseval's identity
• Polarization identity (Parallelogram law)
Maps
• Compact operator on Hilbert space
• Densely defined
• Hermitian form
• Hilbert–Schmidt
• Normal
• Self-adjoint
• Sesquilinear form
• Trace class
• Unitary
Examples
• Cn(K) with K compact & n<∞
• Segal–Bargmann F
Duality and spaces of linear maps
Basic concepts
• Dual space
• Dual system
• Dual topology
• Duality
• Operator topologies
• Polar set
• Polar topology
• Topologies on spaces of linear maps
Topologies
• Norm topology
• Dual norm
• Ultraweak/Weak-*
• Weak
• polar
• operator
• in Hilbert spaces
• Mackey
• Strong dual
• polar topology
• operator
• Ultrastrong
Main results
• Banach–Alaoglu
• Mackey–Arens
Maps
• Transpose of a linear map
Subsets
• Saturated family
• Total set
Other concepts
• Biorthogonal system
Weak order unit
In mathematics, specifically in order theory and functional analysis, an element $x$ of a vector lattice $X$ is called a weak order unit in $X$ if $x\geq 0$ and also for all $y\in X,$ $\inf\{x,|y|\}=0{\text{ implies }}y=0.$[1]
Examples
• If $X$ is a separable Fréchet topological vector lattice then the set of weak order units is dense in the positive cone of $X.$[2]
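A finite-dimensional illustration (not from the cited references): in $\mathbb{R}^{n}$ with the componentwise order, $\inf\{x,|y|\}$ is the componentwise minimum of $x$ and $|y|$, and a positive vector is a weak order unit exactly when all of its coordinates are strictly positive:

```python
# In R^n with the componentwise order, y witnesses that x >= 0 fails to be a
# weak order unit when y != 0 yet inf{x, |y|} = 0.

def witnesses_failure(x, y):
    """True if y != 0 but inf{x, |y|} = 0 componentwise."""
    inf = [min(a, abs(b)) for a, b in zip(x, y)]
    return any(b != 0 for b in y) and all(v == 0 for v in inf)

x_good = [1.0, 2.0, 0.5]   # strictly positive: no nonzero y is disjoint from it
x_bad = [1.0, 0.0, 1.0]    # a vanishing coordinate: e_2 is disjoint from it

e2 = [0.0, 1.0, 0.0]
assert witnesses_failure(x_bad, e2)        # x_bad is not a weak order unit
assert not witnesses_failure(x_good, e2)   # e_2 does not disqualify x_good
```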
See also
• Quasi-interior point
• Vector lattice – Partially ordered vector space, ordered as a latticePages displaying short descriptions of redirect targets
Citations
1. Schaefer & Wolff 1999, pp. 234–242.
2. Schaefer & Wolff 1999, pp. 204–214.
References
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Ordered topological vector spaces
Basic concepts
• Ordered vector space
• Partially ordered space
• Riesz space
• Order topology
• Order unit
• Positive linear operator
• Topological vector lattice
• Vector lattice
Types of orders/spaces
• AL-space
• AM-space
• Archimedean
• Banach lattice
• Fréchet lattice
• Locally convex vector lattice
• Normed lattice
• Order bound dual
• Order dual
• Order complete
• Regularly ordered
Types of elements/subsets
• Band
• Cone-saturated
• Lattice disjoint
• Dual/Polar cone
• Normal cone
• Order complete
• Order summable
• Order unit
• Quasi-interior point
• Solid set
• Weak order unit
Topologies/Convergence
• Order convergence
• Order topology
Operators
• Positive
• State
Main results
• Freudenthal spectral
Buchsbaum ring
In mathematics, Buchsbaum rings are Noetherian local rings such that every system of parameters is a weak sequence. A sequence $(a_{1},\cdots ,a_{r})$ of elements of the maximal ideal $m$ is called a weak sequence if $m\cdot ((a_{1},\cdots ,a_{i-1})\colon a_{i})\subset (a_{1},\cdots ,a_{i-1})$ for all $i$.
They were introduced by Jürgen Stückrad and Wolfgang Vogel (1973) and are named after David Buchsbaum.
Every Cohen–Macaulay local ring is a Buchsbaum ring. Every Buchsbaum ring is a generalized Cohen–Macaulay ring.
References
• Buchsbaum, D. (1966), "Complexes in local ring theory", in Herstein, I. N. (ed.), Some aspects of ring theory, Centro Internazionale Matematico Estivo (C.I.M.E.). II Ciclo, Varenna (Como), 23-31 agosto, vol. 1965, Rome: Edizioni cremonese, pp. 223–228, ISBN 978-3-642-11035-1, MR 0223393
• Goto, Shiro (2001) [1994], "Buchsbaum ring", Encyclopedia of Mathematics, EMS Press
• Stückrad, Jürgen; Vogel, Wolfgang (1973), "Eine Verallgemeinerung der Cohen-Macaulay Ringe und Anwendungen auf ein Problem der Multiplizitätstheorie", Journal of Mathematics of Kyoto University, 13: 513–528, ISSN 0023-608X, MR 0335504
• Stückrad, Jürgen; Vogel, Wolfgang (1986), Buchsbaum rings and applications, Berlin, New York: Springer-Verlag, ISBN 978-3-540-16844-7, MR 0881220
Weak solution
In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions.
Avoiding the language of distributions, one starts with a differential equation and rewrites it in such a way that no derivatives of the solution of the equation show up (the new form is called the weak formulation, and the solutions to it are called weak solutions). Somewhat surprisingly, a differential equation may have solutions which are not differentiable; and the weak formulation allows one to find such solutions.
Weak solutions are important because many differential equations encountered in modelling real-world phenomena do not admit of sufficiently smooth solutions, and the only way of solving such equations is using the weak formulation. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that those solutions are in fact smooth enough.
A concrete example
As an illustration of the concept, consider the first-order wave equation:
${\frac {\partial u}{\partial t}}+{\frac {\partial u}{\partial x}}=0$
(1)
where u = u(t, x) is a function of two real variables. To indirectly probe the properties of a possible solution u, one integrates it against an arbitrary smooth function $\varphi \,\!$ of compact support, known as a test function, taking
$\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x)\,\varphi (t,x)\,dx\,dt$
For example, if $\varphi $ is a smooth probability distribution concentrated near a point $(t,x)=(t_{\circ },x_{\circ })$, the integral is approximately $u(t_{\circ },x_{\circ })$. Notice that while the integrals go from $-\infty $ to $\infty $, they are essentially over a finite box where $\varphi $ is non-zero.
Thus, assume a solution u is continuously differentiable on the Euclidean space R2, multiply the equation (1) by a test function $\varphi $ (smooth of compact support), and integrate:
$\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {\partial u(t,x)}{\partial t}}\varphi (t,x)\,\mathrm {d} t\,\mathrm {d} x+\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {\partial u(t,x)}{\partial x}}\varphi (t,x)\,\mathrm {d} t\,\mathrm {d} x=0.$
Using Fubini's theorem, which allows one to interchange the order of integration, as well as integration by parts (in t for the first term and in x for the second term), this equation becomes:
$-\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x){\frac {\partial \varphi (t,x)}{\partial t}}\,\mathrm {d} t\,\mathrm {d} x-\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x){\frac {\partial \varphi (t,x)}{\partial x}}\,\mathrm {d} t\,\mathrm {d} x=0.$
(2)
(Boundary terms vanish since $\varphi $ is zero outside a finite box.) We have shown that equation (1) implies equation (2) as long as u is continuously differentiable.
The key to the concept of weak solution is that there exist functions u which satisfy equation (2) for any $\varphi $, but such u may not be differentiable and so cannot satisfy equation (1). An example is u(t, x) = |t − x|, as one may check by splitting the integrals over regions x ≥ t and x ≤ t where u is smooth, and reversing the above computation using integration by parts. A weak solution of equation (1) means any solution u of equation (2) over all test functions $\varphi $.
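This computation can be checked numerically. The sketch below is a minimal illustration assuming NumPy; a narrow Gaussian stands in for a compactly supported test function, since it is negligibly small at the boundary of the integration box. It evaluates the left-hand side of equation (2) for u(t, x) = |t − x|:

```python
import numpy as np

# Grid over a box large enough that the Gaussian "test function" is
# negligible at the boundary (a true test function has compact support;
# a narrow Gaussian is a convenient stand-in).
n = 801
t = np.linspace(-5.0, 5.0, n)
x = np.linspace(-5.0, 5.0, n)
dt, dx = t[1] - t[0], x[1] - x[0]
T, X = np.meshgrid(t, x, indexing="ij")

u = np.abs(T - X)             # candidate weak solution of u_t + u_x = 0
phi = np.exp(-(T**2 + X**2))  # smooth, rapidly decaying test function
phi_t = -2 * T * phi          # analytic partial derivatives of phi
phi_x = -2 * X * phi

# Left-hand side of the weak form (2); it should vanish.
integral = -np.sum(u * (phi_t + phi_x)) * dt * dx
print(abs(integral))  # ~ 0 (machine precision)
```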
General case
The general idea which follows from this example is that, when solving a differential equation in u, one can rewrite it using a test function $\varphi $, such that whatever derivatives in u show up in the equation, they are "transferred" via integration by parts to $\varphi $, resulting in an equation without derivatives of u. This new equation generalizes the original equation to include solutions which are not necessarily differentiable.
The approach illustrated above works in great generality. Indeed, consider a linear differential operator in an open set W in Rn:
$P(x,\partial )u(x)=\sum a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}(x)\,\partial ^{\alpha _{1}}\partial ^{\alpha _{2}}\cdots \partial ^{\alpha _{n}}u(x),$
where the multi-index (α1, α2, …, αn) varies over some finite set in Nn and the coefficients $a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}$ are smooth enough functions of x in Rn.
The differential equation P(x, ∂)u(x) = 0 can, after being multiplied by a smooth test function $\varphi $ with compact support in W and integrated by parts, be written as
$\int _{W}u(x)Q(x,\partial )\varphi (x)\,\mathrm {d} x=0$
where the differential operator Q(x, ∂) is given by the formula
$Q(x,\partial )\varphi (x)=\sum (-1)^{|\alpha |}\partial ^{\alpha _{1}}\partial ^{\alpha _{2}}\cdots \partial ^{\alpha _{n}}\left[a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}(x)\varphi (x)\right].$
The number
$(-1)^{|\alpha |}=(-1)^{\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}$
shows up because one needs α1 + α2 + ⋯ + αn integrations by parts to transfer all the partial derivatives from u to $\varphi $ in each term of the differential equation, and each integration by parts entails a multiplication by −1.
The differential operator Q(x, ∂) is the formal adjoint of P(x, ∂) (cf adjoint of an operator).
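For a concrete second-order instance, the identity $\int _{W}(Pu)\,\varphi \,\mathrm {d} x=\int _{W}u\,(Q\varphi )\,\mathrm {d} x$ can be verified symbolically. The particular choices of coefficient, function, and test function below are hypothetical illustrations (a sketch assuming SymPy):

```python
import sympy as sp

x = sp.symbols('x')

# Hypothetical concrete instances (not from the article):
a = x                   # smooth coefficient a(x)
u = sp.exp(x)           # an arbitrary smooth function
phi = (1 - x**2)**2     # test function: it and its first derivative
                        # vanish at x = ±1, and it is zero outside [-1, 1]

P_u = a * sp.diff(u, x, 2)      # P(x, ∂)u = a(x) u''(x)
Q_phi = sp.diff(a * phi, x, 2)  # formal adjoint: Q(x, ∂)φ = (a φ)''
                                # the sign is (-1)^2 = +1 for this term

lhs = sp.integrate(P_u * phi, (x, -1, 1))
rhs = sp.integrate(u * Q_phi, (x, -1, 1))
print(sp.simplify(lhs - rhs))  # 0
```

Two integrations by parts move both derivatives from u onto φ, and the boundary terms vanish because φ and φ′ are zero at x = ±1.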
In summary, if the original (strong) problem was to find a |α|-times differentiable function u defined on the open set W such that
$P(x,\partial )u(x)=0{\text{ for all }}x\in W$
(a so-called strong solution), then an integrable function u would be said to be a weak solution if
$\int _{W}u(x)\,Q(x,\partial )\varphi (x)\,\mathrm {d} x=0$
for every smooth function $\varphi $ with compact support in W.
Other kinds of weak solution
The notion of weak solution based on distributions is sometimes inadequate. In the case of hyperbolic systems, the notion of weak solution based on distributions does not guarantee uniqueness, and it is necessary to supplement it with entropy conditions or some other selection criterion. In fully nonlinear PDE such as the Hamilton–Jacobi equation, there is a very different definition of weak solution called viscosity solution.
References
• Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 0-8218-0772-2.
Substructure (mathematics)
In mathematical logic, an (induced) substructure or (induced) subalgebra is a structure whose domain is a subset of that of a bigger structure, and whose functions and relations are restricted to the substructure's domain. Some examples of subalgebras are subgroups, submonoids, subrings, subfields, subalgebras of algebras over a field, or induced subgraphs. Shifting the point of view, the larger structure is called an extension or a superstructure of its substructure.
In model theory, the term "submodel" is often used as a synonym for substructure, especially when the context suggests a theory of which both structures are models.
In the presence of relations (i.e. for structures such as ordered groups or graphs, whose signature is not functional) it may make sense to relax the conditions on a subalgebra so that the relations on a weak substructure (or weak subalgebra) are at most those induced from the bigger structure. Subgraphs are an example where the distinction matters, and the term "subgraph" does indeed refer to weak substructures. Ordered groups, on the other hand, have the special property that every substructure of an ordered group which is itself an ordered group, is an induced substructure.
Definition
Given two structures A and B of the same signature σ, A is said to be a weak substructure of B, or a weak subalgebra of B, if
• the domain of A is a subset of the domain of B,
• $f^{A}=f^{B}|_{A^{n}}$ for every n-ary function symbol f in σ, and
• $R^{A}\subseteq R^{B}\cap A^{n}$ for every n-ary relation symbol R in σ.
A is said to be a substructure of B, or a subalgebra of B, if A is a weak subalgebra of B and, moreover,
• $R^{A}=R^{B}\cap A^{n}$ for every n-ary relation symbol R in σ.
If A is a substructure of B, then B is called a superstructure of A or, especially if A is an induced substructure, an extension of A.
Example
In the language consisting of the binary functions + and ×, binary relation <, and constants 0 and 1, the structure (Q, +, ×, <, 0, 1) is a substructure of (R, +, ×, <, 0, 1). More generally, the substructures of an ordered field (or just a field) are precisely its subfields. Similarly, in the language (×, ⁻¹, 1) of groups, the substructures of a group are its subgroups. In the language (×, 1) of monoids, however, the substructures of a group are its submonoids. They need not be groups; and even if they are groups, they need not be subgroups.
In the case of graphs (in the signature consisting of one binary relation), the substructures of a graph are its induced subgraphs, and its weak substructures are precisely its subgraphs.
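The distinction between weak and induced substructures of graphs can be made concrete in a short sketch (the graph encoding and helper functions below are hypothetical, not a standard API):

```python
# A graph is encoded as a pair (vertices, edges), with edges a set of
# frozensets of endpoints.

def is_weak_substructure(sub, sup):
    """Subgraph in the ordinary sense: the edges are at most those
    of the larger graph, restricted to the smaller vertex set."""
    v_sub, e_sub = sub
    v_sup, e_sup = sup
    return v_sub <= v_sup and all(e in e_sup for e in e_sub)

def is_induced_substructure(sub, sup):
    """Induced subgraph: the edges are exactly those of the larger
    graph with both endpoints in the smaller vertex set."""
    v_sub, e_sub = sub
    v_sup, e_sup = sup
    induced = {e for e in e_sup if e <= v_sub}
    return v_sub <= v_sup and e_sub == induced

triangle = ({1, 2, 3}, {frozenset({1, 2}), frozenset({2, 3}), frozenset({1, 3})})
path = ({1, 2, 3}, {frozenset({1, 2}), frozenset({2, 3})})  # drops edge {1, 3}

print(is_weak_substructure(path, triangle))     # True: a subgraph
print(is_induced_substructure(path, triangle))  # False: edge {1, 3} missing
```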
As subobjects
For every signature σ, induced substructures of σ-structures are the subobjects in the concrete category of σ-structures and strong homomorphisms (and also in the concrete category of σ-structures and σ-embeddings). Weak substructures of σ-structures are the subobjects in the concrete category of σ-structures and homomorphisms in the ordinary sense.
Submodel
In model theory, given a structure M which is a model of a theory T, a submodel of M in a narrower sense is a substructure of M which is also a model of T. For example, if T is the theory of abelian groups in the signature (+, 0), then the submodels of the group of integers (Z, +, 0) are the substructures which are also abelian groups. Thus the natural numbers (N, +, 0) form a substructure of (Z, +, 0) which is not a submodel, while the even numbers (2Z, +, 0) form a submodel.
Other examples:
1. The algebraic numbers form a submodel of the complex numbers in the theory of algebraically closed fields.
2. The rational numbers form a submodel of the real numbers in the theory of fields.
3. Every elementary substructure of a model of a theory T also satisfies T; hence it is a submodel.
In the category of models of a theory and embeddings between them, the submodels of a model are its subobjects.
See also
• Elementary substructure
• End extension
• Löwenheim–Skolem theorem
• Prime model
References
• Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag
• Diestel, Reinhard (2005) [1997], Graph Theory, Graduate Texts in Mathematics, vol. 173 (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4
• Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6
Weak trace-class operator
In mathematics, a weak trace class operator is a compact operator on a separable Hilbert space H whose singular values are of the same order as the harmonic sequence. When the dimension of H is infinite, the ideal of weak trace-class operators is strictly larger than the ideal of trace class operators, and has fundamentally different properties. The usual operator trace on the trace-class operators does not extend to the weak trace class. Instead, the ideal of weak trace-class operators admits an infinite number of linearly independent quasi-continuous traces, and it is the smallest two-sided ideal for which all traces on it are singular traces.
Weak trace-class operators feature in the noncommutative geometry of French mathematician Alain Connes.
Definition
A compact operator A on an infinite-dimensional separable Hilbert space H is weak trace class if $\mu (n,A)=O(n^{-1})$, where $\mu (A)$ is the sequence of singular values. In mathematical notation, the two-sided ideal of all weak trace-class operators is denoted
$L_{1,\infty }=\{A\in K(H):\mu (n,A)=O(n^{-1})\},$
where $K(H)$ are the compact operators. The term weak trace-class, or weak-L1, is used because the operator ideal corresponds, in J. W. Calkin's correspondence between two-sided ideals of bounded linear operators and rearrangement invariant sequence spaces, to the weak-l1 sequence space.
Properties
• the weak trace-class operators admit a quasi-norm defined by
$\|A\|_{w}=\sup _{n\geq 0}(1+n)\mu (n,A),$
making L1,∞ a quasi-Banach operator ideal, that is an ideal that is also a quasi-Banach space.
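For a diagonal operator whose singular values are exactly the harmonic sequence, the quasi-norm can be evaluated on a finite truncation. The sketch below is a hypothetical numerical illustration assuming NumPy (the truncation length is an arbitrary choice); it also shows the partial sums of the singular values diverging logarithmically, so the operator is weak trace class but not trace class:

```python
import numpy as np

# Singular values mu(n) = 1/(n + 1): the harmonic sequence.
# Finite truncation for illustration only (hypothetical choice).
N = 100_000
n = np.arange(N)
mu = 1.0 / (n + 1)                 # non-increasing singular values

quasi_norm = np.max((1 + n) * mu)  # sup_n (1 + n) mu(n) over the truncation
trace_sum = mu.sum()               # partial sums grow like log N

print(quasi_norm)  # ≈ 1.0 (the sup is exactly 1 in exact arithmetic)
print(trace_sum)   # ≈ 12.09 for this N: diverges, so not trace class
```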
See also
• Lp space
• Spectral triple
• Singular trace
• Dixmier trace
References
• B. Simon (2005). Trace ideals and their applications. Providence, RI: Amer. Math. Soc. ISBN 978-0-82-183581-4.
• A. Pietsch (1987). Eigenvalues and s-numbers. Cambridge, UK: Cambridge University Press. ISBN 978-0-52-132532-5.
• A. Connes (1994). Noncommutative geometry. Boston, MA: Academic Press. ISBN 978-0-12-185860-5.
• S. Lord, F. A. Sukochev. D. Zanin (2012). Singular traces: theory and applications. Berlin: De Gruyter. ISBN 978-3-11-026255-1.
Truth-table reduction
In computability theory, a truth-table reduction is a reduction from one set of natural numbers to another. As a "tool", it is weaker than Turing reduction, since not every Turing reduction between sets can be performed by a truth-table reduction, but every truth-table reduction can be performed by a Turing reduction. For the same reason it is said to be a stronger reducibility than Turing reducibility, because it implies Turing reducibility. A weak truth-table reduction is a related type of reduction which is so named because it weakens the constraints placed on a truth-table reduction, and provides a weaker equivalence classification; as such, a "weak truth-table reduction" can actually be more powerful than a truth-table reduction as a "tool", and perform a reduction which is not performable by truth table.
A Turing reduction from a set B to a set A computes the membership of a single element in B by asking questions about the membership of various elements in A during the computation; it may adaptively determine which questions it asks based upon answers to previous questions. In contrast, a truth-table reduction or a weak truth-table reduction must present all of its (finitely many) oracle queries at the same time. In a truth-table reduction, the reduction also gives a boolean function (a truth table) which, when given the answers to the queries, will produce the final answer of the reduction. In a weak truth-table reduction, the reduction uses the oracle answers as a basis for further computation which may depend on the given answers but may not ask further questions of the oracle.
Equivalently, a weak truth-table reduction is a Turing reduction for which the use of the reduction is bounded by a computable function. For this reason, they are sometimes referred to as bounded Turing (bT) reductions rather than as weak truth-table (wtt) reductions.
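The non-adaptive character of a truth-table reduction can be illustrated with a toy example (the sets and helper names below are hypothetical, chosen only for illustration):

```python
# Membership in B = { n : n in A or n + 1 in A } reduces to A by a
# truth-table reduction: both oracle queries are presented up front
# (non-adaptively), and a fixed Boolean function (here, OR) combines
# the answers into the final result.

def tt_reduce_B_to_A(n, oracle_A):
    queries = [n, n + 1]                      # all queries fixed in advance
    answers = [oracle_A(q) for q in queries]  # asked simultaneously
    return answers[0] or answers[1]           # the "truth table" is OR

A = {2, 5, 9}
oracle = lambda q: q in A

print(tt_reduce_B_to_A(4, oracle))  # True, since 5 is in A
print(tt_reduce_B_to_A(6, oracle))  # False: neither 6 nor 7 is in A
```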
Properties
As every truth-table reduction is a Turing reduction, if A is truth-table reducible to B (A ≤tt B), then A is also Turing reducible to B (A ≤T B). Considering also one-one reducibility, many-one reducibility and weak truth-table reducibility,
$A\leq _{1}B\Rightarrow A\leq _{m}B\Rightarrow A\leq _{tt}B\Rightarrow A\leq _{wtt}B\Rightarrow A\leq _{T}B$,
or in other words, one-one reducibility implies many-one reducibility, which implies truth-table reducibility, which in turn implies weak truth-table reducibility, which in turn implies Turing reducibility.
Furthermore, A is truth-table reducible to B iff A is Turing reducible to B via a total functional on $2^{\omega }$. The forward direction is trivial; for the reverse direction, suppose $\Gamma $ is a total computable functional. To build the truth table for computing A(n), simply search for a number m such that $\Gamma ^{\sigma }(n)$ converges for all binary strings $\sigma $ of length m. Such an m must exist by Kőnig's lemma, since $\Gamma $ must be total on all paths through $2^{<\omega }$. Given such an m, it is a simple matter to find the unique truth table which gives $\Gamma ^{\sigma }(n)$ when applied to $\sigma $. The forward direction fails for weak truth-table reducibility.
References
• H. Rogers, Jr., 1967. The Theory of Recursive Functions and Effective Computability, second edition 1987, MIT Press. ISBN 0-262-68052-1 (paperback), ISBN 0-07-053522-1
Thin set (Serre)
In mathematics, a thin set in the sense of Serre, named after Jean-Pierre Serre, is a certain kind of subset constructed in algebraic geometry over a given field K, by allowed operations that are in a definite sense 'unlikely'. The two fundamental ones are: requiring that a polynomial equation have a solution, which may or may not be the case; and requiring that a polynomial factorise within K, which it need not do. One is also allowed to take finite unions.
Formulation
More precisely, let V be an algebraic variety over K (assumptions here are: V is an irreducible set, a quasi-projective variety, and K has characteristic zero). A type I thin set is a subset of V(K) that is not Zariski-dense. That means it lies in an algebraic set that is a finite union of algebraic varieties of dimension lower than d, the dimension of V. A type II thin set is an image of an algebraic morphism (essentially a polynomial mapping) φ, applied to the K-points of some other d-dimensional algebraic variety V′, that maps essentially onto V as a ramified covering with degree e > 1. Saying this more technically, a thin set of type II is any subset of
φ(V′(K))
where V′ satisfies the same assumptions as V and φ is generically surjective from the geometer's point of view. At the level of function fields we therefore have
[K(V): K(V′)] = e > 1.
While a typical point v of V is φ(u) with u in V′, from v lying in V(K) we can conclude typically only that the coordinates of u come from solving a degree e equation over K. The whole object of the theory of thin sets is then to understand that the solubility in question is a rare event. This reformulates in more geometric terms the classical Hilbert irreducibility theorem.
A thin set, in general, is a subset of a finite union of thin sets of types I and II.
The terminology thin may be justified by the fact that if A is a thin subset of the line over Q, then the number of points of A of height at most H is ≪ H, and the number of integral points of A of height at most H is $O\left({H^{1/2}}\right)$; this result is best possible.[1]
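The set of perfect squares, a thin set of type II in the line (the image of $x\mapsto x^{2}$), illustrates the $O(H^{1/2})$ count of integral points. A quick numerical check (a hypothetical illustration, not from the source):

```python
import math

# The perfect squares form a thin set of type II in the line.  Counting
# squares of height (absolute value) at most H exhibits the O(H^(1/2))
# bound on integral points, versus roughly 2H + 1 integers in total.
for H in (100, 10_000, 1_000_000):
    squares = sum(1 for n in range(H + 1) if math.isqrt(n) ** 2 == n)
    print(H, squares)  # squares == floor(sqrt(H)) + 1
```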
A result of S. D. Cohen, based on the large sieve method, extends this result, counting points by height function and showing, in a strong sense, that a thin set contains a low proportion of them (this is discussed at length in Serre's Lectures on the Mordell-Weil theorem). Let A be a thin set in affine n-space over Q and let N(H) denote the number of integral points of naive height at most H. Then[2]
$N(H)=O\left({H^{n-1/2}\log H}\right).$
Hilbertian fields
A Hilbertian variety V over K is one for which V(K) is not thin: this is a birational invariant of V.[3] A Hilbertian field K is one for which there exists a Hilbertian variety of positive dimension over K:[3] the term was introduced by Lang in 1962.[4] If K is Hilbertian then the projective line over K is Hilbertian, so this may be taken as the definition.[5][6]
The rational number field Q is Hilbertian, because Hilbert's irreducibility theorem has as a corollary that the projective line over Q is Hilbertian: indeed, any algebraic number field is Hilbertian, again by the Hilbert irreducibility theorem.[5][7] More generally a finite degree extension of a Hilbertian field is Hilbertian[8] and any finitely generated infinite field is Hilbertian.[6]
There are several results on the permanence criteria of Hilbertian fields. Notably Hilbertianity is preserved under finite separable extensions[9] and abelian extensions. If N is a Galois extension of a Hilbertian field, then although N need not be Hilbertian itself, Weissauer's result asserts that any proper finite extension of N is Hilbertian. The most general result in this direction is Haran's diamond theorem. A discussion of these results and more appears in Fried and Jarden's Field Arithmetic.
Being Hilbertian is at the other end of the scale from being algebraically closed: over the complex numbers all sets are thin, for example. They, together with the other local fields (the real numbers and the p-adic numbers), are not Hilbertian.[5]
WWA property
The WWA property (weak 'weak approximation', sic) for a variety V over a number field is weak approximation (cf. approximation in algebraic groups), for finite sets of places of K avoiding some given finite set. For example take K = Q: it is required that V(Q) be dense in
Π V(Qp)
for all products over finite sets of prime numbers p, not including any of some set {p1, ..., pM} given once and for all. Ekedahl has proved that WWA for V implies V is Hilbertian.[10] In fact Colliot-Thélène conjectures WWA holds for any unirational variety, which is therefore a stronger statement. This conjecture would imply a positive answer to the inverse Galois problem.[10]
References
1. Serre (1992) p.26
2. Serre (1992) p.27
3. Serre (1992) p.19
4. Schinzel (2000) p.312
5. Serre (1992) p.20
6. Schinzel (2000) p.298
7. Lang (1997) p.41
8. Serre (1992) p.21
9. Fried & Jarden (2008) p.224
10. Serre (1992) p.29
• Fried, Michael D.; Jarden, Moshe (2008). Field arithmetic. Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Vol. 11 (3rd revised ed.). Springer-Verlag. ISBN 978-3-540-77269-9. Zbl 1145.12001.
• Lang, Serge (1997). Survey of Diophantine Geometry. Springer-Verlag. ISBN 3-540-61223-8. Zbl 0869.11051.
• Serre, Jean-Pierre (1989). Lectures on the Mordell-Weil Theorem. Aspects of Mathematics. Vol. E15. Translated and edited by Martin Brown from notes by Michel Waldschmidt. Braunschweig etc.: Friedr. Vieweg & Sohn. Zbl 0676.14005.
• Serre, Jean-Pierre (1992). Topics in Galois Theory. Research Notes in Mathematics. Vol. 1. Jones and Bartlett. ISBN 0-86720-210-6. Zbl 0746.12001.
• Schinzel, Andrzej (2000). Polynomials with special regard to reducibility. Encyclopedia of Mathematics and Its Applications. Vol. 77. Cambridge: Cambridge University Press. ISBN 0-521-66225-7. Zbl 0956.12001.
Quasi-category
In mathematics, more specifically category theory, a quasi-category (also called quasicategory, weak Kan complex, inner Kan complex, infinity category, ∞-category, Boardman complex, quategory) is a generalization of the notion of a category. The study of such generalizations is known as higher category theory.
Quasi-categories were introduced by Boardman & Vogt (1973). André Joyal has greatly advanced the study of quasi-categories, showing that most of the usual basic category theory, and some of the more advanced notions and theorems, have analogues for quasi-categories. An elaborate treatise of the theory of quasi-categories has been given by Jacob Lurie (2009).
Quasi-categories are certain simplicial sets. Like ordinary categories, they contain objects (the 0-simplices of the simplicial set) and morphisms between these objects (1-simplices). But unlike categories, the composition of two morphisms need not be uniquely defined. All the morphisms that can serve as composition of two given morphisms are related to each other by higher order invertible morphisms (2-simplices thought of as "homotopies"). These higher order morphisms can also be composed, but again the composition is well-defined only up to still higher order invertible morphisms, etc.
The idea of higher category theory (at least, higher category theory when higher morphisms are invertible) is that, as opposed to the standard notion of a category, there should be a mapping space (rather than a mapping set) between two objects. This suggests that a higher category should simply be a topologically enriched category. The model of quasi-categories is, however, better suited to applications than that of topologically enriched categories, though it has been proved by Lurie that the two have natural model structures that are Quillen equivalent.
Definition
By definition, a quasi-category C is a simplicial set satisfying the inner Kan conditions (also called weak Kan condition): every inner horn in C, namely a map of simplicial sets $\Lambda ^{k}[n]\to C$ where $0<k<n$, has a filler, that is, an extension to a map $\Delta [n]\to C$. (See Kan fibration#Definitions for a definition of the simplicial sets $\Delta [n]$ and $\Lambda ^{k}[n]$.)
The idea is that 2-simplices $\Delta [2]\to C$ are supposed to represent commutative triangles (at least up to homotopy). A map $\Lambda ^{1}[2]\to C$ represents a composable pair. Thus, in a quasi-category, one cannot define a composition law on morphisms, since one can choose many ways to compose maps.
One consequence of the definition is that $C^{\Delta [2]}\to C^{\Lambda ^{1}[2]}$ is a trivial Kan fibration. In other words, while the composition law is not uniquely defined, it is unique up to a contractible choice.
The homotopy category
Given a quasi-category C, one can associate to it an ordinary category hC, called the homotopy category of C. The homotopy category has as objects the vertices of C. The morphisms are given by homotopy classes of edges between vertices. Composition is given using the horn filler condition for n = 2.
For a general simplicial set there is a functor $\tau _{1}$ from sSet to Cat, known as the fundamental category functor, and for a quasi-category C the fundamental category is the same as the homotopy category, i.e. $\tau _{1}(C)=hC$.
Examples
• The nerve of a category is a quasi-category with the extra property that the filling of any inner horn is unique. Conversely a quasi-category such that any inner horn has a unique filling is isomorphic to the nerve of some category. The homotopy category of the nerve of C is isomorphic to C.
• Given a topological space X, one can define its singular set S(X), also known as the fundamental ∞-groupoid of X. S(X) is a quasi-category in which every morphism is invertible. The homotopy category of S(X) is the fundamental groupoid of X.
• More generally than the previous example, every Kan complex is an example of a quasi-category. In a Kan complex all maps from all horns, not just inner ones, can be filled, which again has the consequence that all morphisms in a Kan complex are invertible. Kan complexes are thus analogues of groupoids: the nerve of a category is a Kan complex iff the category is a groupoid.
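The first example above, uniqueness of inner-horn fillers in the nerve of a category, can be checked directly for a small one-object category (the encoding below is a hypothetical sketch, not from the source):

```python
# In the nerve of a category, an inner horn Λ^1[2] is a composable pair
# (f, g), and a filler is a 2-simplex exhibiting a composite h = g ∘ f.
# For the nerve of an ordinary category the filler exists and is unique.

# One-object category (a monoid): Z/3 under addition.
morphisms = [0, 1, 2]
compose = lambda g, f: (g + f) % 3

for f in morphisms:              # every composable pair is an inner horn
    for g in morphisms:
        fillers = [h for h in morphisms if h == compose(g, f)]
        assert len(fillers) == 1  # unique filler, as in any nerve

print("every inner horn Λ^1[2] in this nerve has a unique filler")
```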
Variants
• An (∞, 1)-category is an ∞-category, not necessarily a quasi-category, in which all n-morphisms for n > 1 are equivalences. There are several models of (∞, 1)-categories, including Segal categories, simplicially enriched categories, topological categories, and complete Segal spaces. A quasi-category is also an (∞, 1)-category.
• Model structure: there is a model structure on sSet-categories that presents the (∞,1)-category (∞,1)Cat.
• Homotopy Kan extension: the notion of homotopy Kan extension, and hence in particular those of homotopy limit and homotopy colimit, has a direct formulation in terms of Kan-complex-enriched categories. See homotopy Kan extension for more.
• Presentation of (∞,1)-topos theory: all of (∞,1)-topos theory can be modeled in terms of sSet-categories (Toën & Vezzosi). There is a notion of an sSet-site C that models the notion of an (∞,1)-site, and a model structure on sSet-enriched presheaves on sSet-sites that is a presentation for the (∞,1)-toposes of ∞-stacks on C.
See also
• Model category
• Stable infinity category
• ∞-groupoid
• Higher category theory
• Globular set
References
• Boardman, J. M.; Vogt, R. M. (1973), Homotopy invariant algebraic structures on topological spaces, Lecture Notes in Mathematics, vol. 347, Berlin, New York: Springer-Verlag, doi:10.1007/BFb0068547, ISBN 978-3-540-06479-4, MR 0420609
• Groth, Moritz, A short course on infinity-categories (PDF)
• Joyal, André (2002), "Quasi-categories and Kan complexes", Journal of Pure and Applied Algebra, 175 (1): 207–222, doi:10.1016/S0022-4049(02)00135-4, MR 1935979
• Joyal, André; Tierney, Myles (2007), "Quasi-categories vs Segal spaces", Categories in algebra, geometry and mathematical physics, Contemp. Math., vol. 431, Providence, R.I.: Amer. Math. Soc., pp. 277–326, arXiv:math.AT/0607820, MR 2342834
• Joyal, A. (2008), The theory of quasi-categories and its applications, lectures at CRM Barcelona (PDF), archived from the original (PDF) on July 6, 2011
• Joyal, A., Notes on quasicategories (PDF)
• Lurie, Jacob (2009), Higher topos theory, Annals of Mathematics Studies, vol. 170, Princeton University Press, arXiv:math.CT/0608040, ISBN 978-0-691-14049-0, MR 2522659
• Joyal's Catlab entry: The theory of quasi-categories
• quasi-category at the nLab
• infinity-category at the nLab
• fundamental+category at the nLab
• Bergner, Julia E (2011). "Workshop on the homotopy theory of homotopy theories". arXiv:1108.2001 [math.AT].
• (∞, 1)-category at the nLab
• Hinich, Vladimir (2017-09-19). "Lectures on infinity categories". arXiv:1709.06271 [math.CT].
Category theory
Key concepts
• Category
• Adjoint functors
• CCC
• Commutative diagram
• Concrete category
• End
• Exponential
• Functor
• Kan extension
• Morphism
• Natural transformation
• Universal property
Universal constructions
Limits
• Terminal objects
• Products
• Equalizers
• Kernels
• Pullbacks
• Inverse limit
Colimits
• Initial objects
• Coproducts
• Coequalizers
• Cokernels and quotients
• Pushout
• Direct limit
Algebraic categories
• Sets
• Relations
• Magmas
• Groups
• Abelian groups
• Rings (Fields)
• Modules (Vector spaces)
Constructions on categories
• Free category
• Functor category
• Kleisli category
• Opposite category
• Quotient category
• Product category
• Comma category
• Subcategory
Higher category theory
Key concepts
• Categorification
• Enriched category
• Higher-dimensional algebra
• Homotopy hypothesis
• Model category
• Simplex category
• String diagram
• Topos
n-categories
Weak n-categories
• Bicategory (pseudofunctor)
• Tricategory
• Tetracategory
• Kan complex
• ∞-groupoid
• ∞-topos
Strict n-categories
• 2-category (2-functor)
• 3-category
Categorified concepts
• 2-group
• 2-ring
• En-ring
• (Traced)(Symmetric) monoidal category
• n-group
• n-monoid
• Category
• Outline
• Glossary
∞-groupoid
In category theory, a branch of mathematics, an ∞-groupoid is an abstract homotopical model for topological spaces. One model uses Kan complexes, which are fibrant objects in the category of simplicial sets (with the standard model structure).[1] It is an ∞-category generalization of a groupoid, a category in which every morphism is an isomorphism.
The homotopy hypothesis states that ∞-groupoids are equivalent to spaces up to homotopy.[2]: 2–3 [3]
Globular Groupoids
Alexander Grothendieck suggested in Pursuing Stacks[2]: 3–4, 201 that there should be an extraordinarily simple model of ∞-groupoids using globular sets, originally called hemispherical complexes. These sets are constructed as presheaves on the globular category $\mathbb {G} $. This is defined as the category whose objects are finite ordinals $[n]$ and morphisms are given by
${\begin{aligned}\sigma _{n}:[n]\to [n+1]\\\tau _{n}:[n]\to [n+1]\end{aligned}}$
such that the globular relations hold
${\begin{aligned}\sigma _{n+1}\circ \sigma _{n}&=\tau _{n+1}\circ \sigma _{n}\\\sigma _{n+1}\circ \tau _{n}&=\tau _{n+1}\circ \tau _{n}\end{aligned}}$
These encode the fact that $n$-morphisms should not be able to see $(n+1)$-morphisms. When writing these down as a globular set $X_{\bullet }:\mathbb {G} ^{op}\to {\text{Sets}}$, the source and target maps are then written as
${\begin{aligned}s_{n}=X_{\bullet }(\sigma _{n})\\t_{n}=X_{\bullet }(\tau _{n})\end{aligned}}$
We can also consider globular objects in a category ${\mathcal {C}}$ as functors
$X_{\bullet }\colon \mathbb {G} ^{op}\to {\mathcal {C}}.$
There was hope originally that such a strict model would be sufficient for homotopy theory, but there is evidence suggesting otherwise. It turns out that for $S^{2}$ the associated homotopy $n$-type $\pi _{\leq n}(S^{2})$ can never be modeled as a strict globular groupoid for $n\geq 3$.[2]: 445 [4] This is because strict ∞-groupoids only model spaces with a trivial Whitehead product.[5]
Examples
Fundamental ∞-groupoid
Given a topological space $X$ there should be an associated fundamental ∞-groupoid $\Pi _{\infty }(X)$ where the objects are points $x\in X$, 1-morphisms $f:x\to y$ are represented as paths, 2-morphisms are homotopies of paths, 3-morphisms are homotopies of homotopies, and so on. From this infinity groupoid we can find an $n$-groupoid called the fundamental $n$-groupoid $\Pi _{n}(X)$ whose homotopy type is that of $\pi _{\leq n}(X)$.
Note that taking the fundamental ∞-groupoid of a space $Y$ such that $\pi _{>n}(Y)=0$ is equivalent to the fundamental n-groupoid $\Pi _{n}(Y)$. Such a space can be found using the Whitehead tower.
Abelian globular groupoids
One useful case of globular groupoids comes from chain complexes that are bounded above, so consider a chain complex $C_{\bullet }\in {\text{Ch}}_{\leq 0}({\text{Ab}})$.[6] There is an associated globular groupoid. Intuitively, the objects are the elements of $C_{0}$, morphisms come from $C_{1}$ through the chain complex map $d_{1}:C_{1}\to C_{0}$, and higher $n$-morphisms can be found from the higher chain complex maps $d_{n}:C_{n}\to C_{n-1}$. We can form a globular set $\mathbb {C} _{\bullet }$ with
${\begin{matrix}\mathbb {C} _{0}=&C_{0}\\\mathbb {C} _{1}=&C_{0}\oplus C_{1}\\&\cdots \\\mathbb {C} _{n}=&\bigoplus _{k=0}^{n}C_{k}\end{matrix}}$
and the source morphism $s_{n}:\mathbb {C} _{n}\to \mathbb {C} _{n-1}$ is the projection map
$pr:\bigoplus _{k=0}^{n}C_{k}\to \bigoplus _{k=0}^{n-1}C_{k}$
and the target morphism $t_{n}:\mathbb {C} _{n}\to \mathbb {C} _{n-1}$ is the projection map together with the chain complex map $d_{n}:C_{n}\to C_{n-1}$ applied to the top summand. This forms a globular groupoid, giving a wide class of examples of strict globular groupoids. Moreover, because strict groupoids embed into weak groupoids, they serve as examples of weak groupoids as well.
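This construction can be made concrete in a short sketch (illustrative only; the particular complex and the helper names are invented for the example): take a small chain complex of free abelian groups with integer differential matrices, define the source map as the projection and the target map as the projection plus the differential on the top summand, and check the globular identities, whose verification uses $d\circ d=0$.

```python
def matvec(M, v):
    """Multiply an integer matrix (list of rows) by a vector (tuple)."""
    return tuple(sum(r * x for r, x in zip(row, v)) for row in M)

def vadd(u, v):
    return tuple(a + b for a, b in zip(u, v))

# Toy chain complex C_2 -> C_1 -> C_0 of free abelian groups,
# with differentials satisfying d1 ∘ d2 = 0.
d1 = [[1, -1], [0, 0]]   # d1 : Z^2 -> Z^2
d2 = [[1], [1]]          # d2 : Z   -> Z^2
assert matvec(d1, matvec(d2, (1,))) == (0, 0)   # chain condition d∘d = 0

d = {1: d1, 2: d2}

def s(n, c):
    """Source s_n : C_0 ⊕ … ⊕ C_n -> C_0 ⊕ … ⊕ C_{n-1}: drop the top summand."""
    return c[:-1]

def t(n, c):
    """Target t_n: drop the top summand, but add d_n of it into degree n-1."""
    return c[:-2] + (vadd(c[-2], matvec(d[n], c[-1])),)

# An element of C_0 ⊕ C_1 ⊕ C_2, one tuple per degree.
c = ((3, 5), (2, 7), (4,))

# Globular identities: s∘s = s∘t and t∘s = t∘t (the latter uses d∘d = 0).
assert s(1, s(2, c)) == s(1, t(2, c))
assert t(1, s(2, c)) == t(1, t(2, c))
```

The second identity is exactly where the chain-complex condition enters: the extra term $d_{n-1}(d_{n}(c_{n}))$ produced by applying two target maps in a row vanishes.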
Applications
Higher local systems
One of the basic theorems about local systems is that they can be equivalently described as a functor from the fundamental groupoid $\Pi (X)=\Pi _{\leq 1}(X)$ to the category of Abelian groups, the category of $R$-modules, or some other abelian category. That is, a local system is equivalent to giving a functor
${\mathcal {L}}:\Pi (X)\to {\text{Ab}}$
Generalizing such a definition requires us to consider not only an abelian category, but also its derived category. A higher local system is then an ∞-functor
${\mathcal {L}}_{\bullet }:\Pi _{\infty }(X)\to D({\text{Ab}})$
with values in some derived category. This has the advantage of allowing the higher homotopy groups $\pi _{n}(X)$ to act on the higher local system, via a series of truncations. A toy example to study comes from the Eilenberg–MacLane spaces $K(A,n)$, or from looking at the terms of the Whitehead tower of a space. Ideally, there should be some way to recover the categories of functors ${\mathcal {L}}_{\bullet }:\Pi _{\infty }(X)\to D({\text{Ab}})$ from their truncations $\Pi _{n}(X)$ and the maps $\tau _{\leq n-1}:\Pi _{n}(X)\to \Pi _{n-1}(X)$, whose fibers should be the categories of $n$-functors
$\Pi _{n}(K(\pi _{n}(X),n))\to D(Ab)$
Another advantage of this formalism is that it allows for constructing higher forms of $\ell $-adic representations: using the étale homotopy type ${\hat {\pi }}(X)$ of a scheme $X$, one can construct higher representations of this space, since they are given by functors
${\mathcal {L}}:{\hat {\pi }}(X)\to D({\overline {\mathbb {Q} }}_{\ell })$
Higher gerbes
Another application of ∞-groupoids is giving constructions of n-gerbes and ∞-gerbes. Over a space $X$ an n-gerbe should be an object ${\mathcal {G}}\to X$ such that when restricted to a small enough subset $U\subset X$, ${\mathcal {G}}|_{U}\to U$ is represented by an n-groupoid, and on overlaps there is an agreement up to some weak equivalence. Assuming the homotopy hypothesis is correct, this is equivalent to constructing an object ${\mathcal {G}}\to X$ such that over any open subset
${\mathcal {G}}|_{U}\to U$
is an n-group, or a homotopy n-type. Because the nerve of a category can be used to construct an arbitrary homotopy type, a functor over a site ${\mathcal {X}}$, e.g.
$p:{\mathcal {C}}\to {\mathcal {X}}$
will give an example of a higher gerbe if the category ${\mathcal {C}}_{U}$ lying over any object $U\in {\text{Ob}}({\mathcal {X}})$ is non-empty. In addition, this category would be expected to satisfy some sort of descent condition.
See also
• Pursuing Stacks
• N-group (category theory)
• Groupoid
• Homotopy type theory
References
1. "Kan complex in nLab".
2. Grothendieck. "Pursuing Stacks". thescrivener.github.io. Archived (PDF) from the original on 30 Jul 2020. Retrieved 2020-09-17.
3. Maltsiniotis, Georges (2010), Grothendieck infinity groupoids and still another definition of infinity categories, arXiv:1009.2331, CiteSeerX 10.1.1.397.2664
4. Simpson, Carlos (1998-10-09). "Homotopy types of strict 3-groupoids". arXiv:math/9810059.
5. Brown, Ronald; Higgins, Philip J. (1981). "The equivalence of $\infty $-groupoids and crossed complexes". Cahiers de Topologie et Géométrie Différentielle Catégoriques. 22 (4): 371–386.
6. Ara, Dimitri (2010). Sur les ∞-groupoïdes de Grothendieck et une variante ∞-catégorique (PDF) (PhD). Université Paris Diderot. Section 1.4.3. Archived (PDF) from the original on 19 Aug 2020.
Research articles
• Henry, Simon; Lanari, Edoardo (2019). "On the homotopy hypothesis in dimension 3". arXiv:1905.05625 [math.CT].
• Bourke, John (2016). "Note on the construction of globular weak omega-groupoids from types, topological spaces etc". arXiv:1602.07962 [math.CT].
• Polesello, Pietro; Waschkies, Ingo (2004). "Higher Monodromy". arXiv:math/0407507.
• Hoyois, Marc (2015). "Higher Galois theory". arXiv:1506.07155 [math.CT].
Applications in algebraic geometry
• Toën, Bertrand. "Homotopy types of algebraic varieties" (PDF). CiteSeerX 10.1.1.607.9789.
External links
• infinity-groupoid at the nLab
• Maltsiniotis, Georges (2010), "Grothendieck ∞-groupoids, and still another definition of ∞-categories", arXiv:1009.2331 [math.CT]
• Zawadowski, Marek, Introduction to Test Categories (PDF), archived from the original (PDF) on 2015-03-26
• Lovering, Tom (2012), Etale cohomology and Galois Representations, CiteSeerX 10.1.1.394.9850
Major topics in Foundations of Mathematics
Mathematical logic
• Peano axioms
• Mathematical induction
• Formal system
• Axiomatic system
• Hilbert system
• Natural deduction
• Mathematical proof
• Model theory
• Mathematical constructivism
• Modal logic
• List of mathematical logic topics
Set theory
• Set
• Naive set theory
• Axiomatic set theory
• Zermelo set theory
• Zermelo–Fraenkel set theory
• Constructive set theory
• Descriptive set theory
• Determinacy
• Russell's paradox
• List of set theory topics
Type theory
• Axiom of reducibility
• Simple type theory
• Dependent type theory
• Intuitionistic type theory
• Homotopy type theory
• Univalent foundations
• Girard's paradox
Category theory
• Category
• Topos theory
• Category of sets
• Higher category theory
• ∞-groupoid
• ∞-topos theory
• Mathematical structuralism
• Glossary of category theory
• List of category theory topics
Weakened weak form
Weakened weak form (or W2 form)[1] is used in the formulation of general numerical methods based on meshfree methods and/or finite element method settings. These numerical methods are applicable to solid mechanics as well as fluid dynamics problems.
Description
For simplicity we choose elasticity problems (2nd order PDE) for our discussion.[2] The discussion is also most convenient in reference to the well-known strong and weak forms. In a strong formulation for an approximate solution, we need to assume displacement functions that are 2nd order differentiable. In a weak formulation, we create linear and bilinear forms and then search for a particular function (an approximate solution) that satisfies the weak statement. The bilinear form uses the gradients of the functions, which involve only 1st order differentiation. Therefore, the continuity requirement on the assumed displacement functions is weaker than in the strong formulation. In a discrete form (such as the finite element method, or FEM), a sufficient requirement is that the assumed displacement function is piecewise continuous over the entire problem domain. This allows us to construct the function using elements (while making sure it is continuous along all element interfaces), leading to the powerful FEM.
Now, in a weakened weak (W2) formulation, the requirement is reduced further: a bilinear form is constructed using only the assumed function itself (not even its gradient). This is done using the so-called generalized gradient smoothing technique,[3] with which one can approximate the gradient of displacement functions for a certain class of discontinuous functions, as long as they are in a proper G space.[4] Since even the 1st derivative of the assumed displacement functions need not be taken explicitly, the consistency requirement on the functions is further reduced; hence the name weakened weak, or W2, formulation.
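The gradient smoothing idea can be illustrated with a minimal one-dimensional sketch (an illustration, not any particular published scheme): averaging the gradient over a smoothing cell and applying the divergence theorem reduces the cell-averaged gradient to boundary values of the function alone, so even a function with a kink inside the cell has a well-defined smoothed gradient.

```python
def smoothed_gradient(u, a, b):
    """Cell-averaged gradient over the smoothing domain [a, b].

    By the divergence theorem in 1D,
        (1/(b-a)) * integral of u'(x) over [a, b] = (u(b) - u(a)) / (b - a),
    so only boundary *values* of u are needed: u need not be
    differentiable inside the cell (it may have kinks there).
    """
    return (u(b) - u(a)) / (b - a)

# A piecewise-linear "hat" function with a kink at x = 0.5:
# continuous, but with no classical derivative at the kink.
hat = lambda x: x if x <= 0.5 else 1.0 - x

# The smoothed gradient is still well defined on a cell containing the kink.
g = smoothed_gradient(hat, 0.25, 0.75)
print(g)  # average of slopes +1 and -1 over equal halves -> 0.0
```

In two or three dimensions the same trick turns an area or volume integral of the gradient over a smoothing domain into a boundary integral of the function times the outward normal, which is the basis of the smoothed bilinear forms used in W2 methods.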
History
The development of a systematic theory of the weakened weak form started from work on meshfree methods.[2] The theory is relatively new, but has developed rapidly in the past few years.
Features of W2 formulations
1. The W2 formulation offers possibilities for formulating various (uniformly) "soft" models that work well with triangular meshes. Because triangular meshes can be generated automatically, re-meshing, and hence automation in modeling and simulation, becomes much easier. This is important for the long-term goal of developing fully automated computational methods.
2. In addition, W2 models can be made soft enough (in a uniform fashion) to produce upper-bound solutions (for force-driven problems). Together with stiff models (such as fully compatible FEM models), one can conveniently bound the solution from both sides. This allows easy error estimation for generally complicated problems, as long as a triangular mesh can be generated, and is important for producing so-called certified solutions.
3. W2 models can be built free from volumetric locking, and possibly free from other types of locking phenomena.
4. W2 models provide the freedom to assume the displacement functions and their gradients separately, offering opportunities for ultra-accurate and super-convergent models. It may be possible to construct linear models with an energy convergence rate of 2.
5. W2 models are often found to be less sensitive to mesh distortion.
6. W2 models are found to be effective for low-order methods.
Existing W2 models
Typical W2 models are the smoothed point interpolation methods (S-PIM).[5] The S-PIM can be node-based (known as NS-PIM or LC-PIM),[6] edge-based (ES-PIM),[7] or cell-based (CS-PIM).[8] The NS-PIM was developed using the so-called SCNI technique.[9] It was then discovered that the NS-PIM is capable of producing upper-bound solutions and is free of volumetric locking.[10] The ES-PIM is found superior in accuracy, and the CS-PIM behaves in between the NS-PIM and the ES-PIM. Moreover, W2 formulations allow the use of polynomial and radial basis functions in the creation of shape functions (they accommodate discontinuous displacement functions, as long as these are in a G1 space), which opens further room for future developments. The S-FEM is largely the linear version of the S-PIM, with most of the properties of the S-PIM but much simpler. It also has NS-FEM, ES-FEM and CS-FEM variants. The major properties of the S-PIM are also found in the S-FEM.[11] The S-FEM models are:
• Node-based Smoothed FEM (NS-FEM)[12]
• Edge-based Smoothed FEM (ES-FEM)[13]
• Face-based Smoothed FEM (FS-FEM)[14]
• Cell-based Smoothed FEM (CS-FEM)[15][16][17]
• Edge/node-based Smoothed FEM (NS/ES-FEM)[18]
• Alpha FEM method (Alpha FEM)[19][20]
• Beta FEM method (Beta FEM)[21]
Applications
Some of the applications of W2 models are:
1. Mechanics for solids, structures and piezoelectrics;[22][23]
2. Fracture mechanics and crack propagation;[24][25][26][27]
3. Heat transfer;[28][29]
4. Structural acoustics;[30][31][32]
5. Nonlinear and contact problems;[33][34]
6. Stochastic analysis;[35]
7. Adaptive Analysis;[36][18]
8. Phase change problem;[37]
9. Crystal plasticity modeling.[38]
10. Limit analysis.[39]
See also
• Finite element method
• Meshfree methods
• Smoothed finite element method
References
1. G.R. Liu. "A G space theory and a weakened weak (W2) form for a unified formulation of compatible and incompatible methods: Part I theory and Part II applications to solid mechanics problems". International Journal for Numerical Methods in Engineering, 81: 1093–1126, 2010
2. Liu, G.R. 2nd edn: 2009 Mesh Free Methods, CRC Press. 978-1-4200-8209-9
3. Liu GR, "A Generalized Gradient Smoothing Technique and the Smoothed Bilinear Form for Galerkin Formulation of a Wide Class of Computational Methods", International Journal of Computational Methods Vol.5 Issue: 2, 199–236, 2008
4. Liu GR, "On G Space Theory", International Journal of Computational Methods, Vol. 6 Issue: 2, 257–289, 2009
5. Liu, G.R. 2nd edn: 2009 Mesh Free Methods, CRC Press. 978-1-4200-8209-9
6. Liu GR, Zhang GY, Dai KY, Wang YY, Zhong ZH, Li GY and Han X, "A linearly conforming point interpolation method (LC-PIM) for 2D solid mechanics problems", International Journal of Computational Methods, 2(4): 645–665, 2005.
7. G.R. Liu, G.R. Zhang. "Edge-based Smoothed Point Interpolation Methods". International Journal of Computational Methods, 5(4): 621–646, 2008
8. G.R. Liu, G.R. Zhang. "A normed G space and weakened weak (W2) formulation of a cell-based Smoothed Point Interpolation Method". International Journal of Computational Methods, 6(1): 147–179, 2009
9. Chen, J. S., Wu, C. T., Yoon, S. and You, Y. (2001). "A stabilized conforming nodal integration for Galerkin mesh-free methods". International Journal for Numerical Methods in Engineering. 50: 435–466.
10. G. R. Liu and G. Y. Zhang. Upper bound solution to elasticity problems: A unique property of the linearly conforming point interpolation method (LC-PIM). International Journal for Numerical Methods in Engineering, 74: 1128–1161, 2008.
11. Zhang ZQ, Liu GR, "Upper and lower bounds for natural frequencies: A property of the smoothed finite element methods", International Journal for Numerical Methods in Engineering Vol. 84 Issue: 2, 149–178, 2010
12. Liu GR, Nguyen-Thoi T, Nguyen-Xuan H, Lam KY (2009) "A node-based smoothed finite element method (NS-FEM) for upper bound solutions to solid mechanics problems". Computers and Structures; 87: 14–26.
13. Liu GR, Nguyen-Thoi T, Lam KY (2009) "An edge-based smoothed finite element method (ES-FEM) for static, free and forced vibration analyses in solids". Journal of Sound and Vibration; 320: 1100–1130.
14. Nguyen-Thoi T, Liu GR, Lam KY, GY Zhang (2009) "A Face-based Smoothed Finite Element Method (FS-FEM) for 3D linear and nonlinear solid mechanics problems using 4-node tetrahedral elements". International Journal for Numerical Methods in Engineering; 78: 324–353
15. Liu GR, Dai KY, Nguyen-Thoi T (2007) "A smoothed finite element method for mechanics problems". Computational Mechanics; 39: 859–877
16. Dai KY, Liu GR (2007) "Free and forced vibration analysis using the smoothed finite element method (SFEM)". Journal of Sound and Vibration; 301: 803–820.
17. Dai KY, Liu GR, Nguyen-Thoi T (2007) "An n-sided polygonal smoothed finite element method (nSFEM) for solid mechanics". Finite Elements in Analysis and Design; 43: 847-860.
18. Li Y, Liu GR, Zhang GY, "An adaptive NS/ES-FEM approach for 2D contact problems using triangular elements", Finite Elements in Analysis and Design Vol.47 Issue: 3, 256–275, 2011
19. Liu GR, Nguyen-Thoi T, Lam KY (2009) "A novel FEM by scaling the gradient of strains with factor α (αFEM)". Computational Mechanics; 43: 369–391
20. Liu GR, Nguyen-Xuan H, Nguyen-Thoi T, Xu X (2009) "A novel weak form and a superconvergent alpha finite element method (SαFEM) for mechanics problems using triangular meshes". Journal of Computational Physics; 228: 4055–4087
21. Zeng W, Liu GR, Li D, Dong XW (2016) A smoothing technique based beta finite element method (βFEM) for crystal plasticity modeling. Computers and Structures; 162: 48-67
22. Cui XY, Liu GR, Li GY, et al. A thin plate formulation without rotation DOFs based on the radial point interpolation method and triangular cells, International Journal for Numerical Methods in Engineering Vol.85 Issue: 8 , 958–986, 2011
23. Liu GR, Nguyen-Xuan H, Nguyen-Thoi T, A theoretical study on the smoothed FEM (S-FEM) models: Properties, accuracy and convergence rates, International Journal for Numerical Methods in Engineering Vol. 84 Issue: 10, 1222–1256, 2010
24. Liu GR, Nourbakhshnia N, Zhang YW, A novel singular ES-FEM method for simulating singular stress fields near the crack tips for linear fracture problems, Engineering Fracture Mechanics Vol.78 Issue: 6 Pages: 863–876, 2011
25. Liu GR, Chen L, Nguyen-Thoi T, et al. A novel singular node-based smoothed finite element method (NS-FEM) for upper bound solutions of fracture problems, International Journal for Numerical Methods in Engineering Vol.83 Issue: 11, 1466–1497, 2010
26. Liu GR, Nourbakhshnia N, Chen L, et al. "A Novel General Formulation for Singular Stress Field Using the Es-Fem Method for the Analysis of Mixed-Mode Cracks", International Journal of Computational Methods Vol. 7 Issue: 1, 191–214, 2010
27. Zeng W, Liu GR, Jiang C, Dong XW, Chen HD, Bao Y, Jiang Y. "An effective fracture analysis method based on the virtual crack closure-integral technique implemented in CS-FEM", Applied Mathematical Modelling Vol. 40, Issue: 5-6, 3783-3800, 2016
28. Zhang ZB, Wu SC, Liu GR, et al. "Nonlinear Transient Heat Transfer Problems using the Meshfree ES-PIM", International Journal of Nonlinear Sciences and Numerical Simulation Vol.11 Issue: 12, 1077–1091, 2010
29. Wu SC, Liu GR, Cui XY, et al. "An edge-based smoothed point interpolation method (ES-PIM) for heat transfer analysis of rapid manufacturing system", International Journal of Heat and Mass Transfer Vol.53 Issue: 9-10, 1938–1950, 2010
30. He ZC, Cheng AG, Zhang GY, et al. "Dispersion error reduction for acoustic problems using the edge-based smoothed finite element method (ES-FEM)", International Journal for Numerical Methods in Engineering Vol. 86 Issue: 11 Pages: 1322–1338, 2011
31. He ZC, Liu GR, Zhong ZH, et al. "A coupled ES-FEM/BEM method for fluid-structure interaction problems", Engineering Analysis With Boundary Elements Vol. 35 Issue: 1, 140–147, 2011
32. Zhang ZQ, Liu GR, "Upper and lower bounds for natural frequencies: A property of the smoothed finite element methods", International Journal for Numerical Methods in Engineering Vol.84 Issue: 2, 149–178, 2010
33. Zhang ZQ, Liu GR, "An edge-based smoothed finite element method (ES-FEM) using 3-node triangular elements for 3D non-linear analysis of spatial membrane structures", International Journal for Numerical Methods in Engineering, Vol. 86 Issue: 2 135–154, 2011
34. Jiang C, Liu GR, Han X, Zhang ZQ, Zeng W, A smoothed finite element method for analysis of anisotropic large deformation of passive rabbit ventricles in diastole, International Journal for Numerical Methods in Biomedical Engineering, Vol. 31 Issue: 1,1-25, 2015
35. Liu GR, Zeng W, Nguyen-Xuan H. Generalized stochastic cell-based smoothed finite element method (GS_CS-FEM) for solid mechanics, Finite Elements in Analysis and Design Vol.63, 51-61, 2013
36. Nguyen-Thoi T, Liu GR, Nguyen-Xuan H, et al. "Adaptive analysis using the node-based smoothed finite element method (NS-FEM)", International Journal for Numerical Methods in Biomedical Engineering Vol. 27 Issue: 2, 198–218, 2011
37. Li E, Liu GR, Tan V, et al. "An efficient algorithm for phase change problem in tumor treatment using alpha FEM", International Journal of Thermal Sciences Vol.49 Issue: 10, 1954–1967, 2010
38. Zeng W, Larsen JM, Liu GR. Smoothing technique based crystal plasticity finite element modeling of crystalline materials, International Journal of Plasticity Vol.65, 250-268, 2015
39. Tran TN, Liu GR, Nguyen-Xuan H, et al. "An edge-based smoothed finite element method for primal-dual shakedown analysis of structures", International Journal for Numerical Methods in Engineering Vol.82 Issue: 7, 917–938, 2010
External links
Numerical methods for partial differential equations
Finite difference
Parabolic
• Forward-time central-space (FTCS)
• Crank–Nicolson
Hyperbolic
• Lax–Friedrichs
• Lax–Wendroff
• MacCormack
• Upwind
• Method of characteristics
Others
• Alternating direction-implicit (ADI)
• Finite-difference time-domain (FDTD)
Finite volume
• Godunov
• High-resolution
• Monotonic upstream-centered (MUSCL)
• Advection upstream-splitting (AUSM)
• Riemann solver
• Essentially non-oscillatory (ENO)
• Weighted essentially non-oscillatory (WENO)
Finite element
• hp-FEM
• Extended (XFEM)
• Discontinuous Galerkin (DG)
• Spectral element (SEM)
• Mortar
• Gradient discretisation (GDM)
• Loubignac iteration
• Smoothed (S-FEM)
Meshless/Meshfree
• Smoothed-particle hydrodynamics (SPH)
• Peridynamics (PD)
• Moving particle semi-implicit method (MPS)
• Material point method (MPM)
• Particle-in-cell (PIC)
Domain decomposition
• Schur complement
• Fictitious domain
• Schwarz alternating
• additive
• abstract additive
• Neumann–Dirichlet
• Neumann–Neumann
• Poincaré–Steklov operator
• Balancing (BDD)
• Balancing by constraints (BDDC)
• Tearing and interconnect (FETI)
• FETI-DP
Others
• Spectral
• Pseudospectral (DVR)
• Method of lines
• Multigrid
• Collocation
• Level-set
• Boundary element
• Method of moments
• Immersed boundary
• Analytic element
• Isogeometric analysis
• Infinite difference method
• Infinite element method
• Galerkin method
• Petrov–Galerkin method
• Validated numerics
• Computer-assisted proof
• Integrable algorithm
• Method of fundamental solutions
Weak NP-completeness
In computational complexity, an NP-complete (or NP-hard) problem is weakly NP-complete (or weakly NP-hard) if there is an algorithm for the problem whose running time is polynomial in the dimension of the problem and the magnitudes of the data involved (provided these are given as integers), rather than the base-two logarithms of their magnitudes. Such algorithms are technically exponential functions of their input size and are therefore not considered polynomial.[1]
For example, the NP-hard knapsack problem can be solved by a dynamic programming algorithm requiring a number of steps polynomial in the size of the knapsack and the number of items (assuming that all data are scaled to be integers); the runtime of this algorithm is nevertheless exponential in the input size, since the input encodings of the objects and the knapsack are logarithmic in their magnitudes. However, as Garey and Johnson (1979) observed, “A pseudo-polynomial-time algorithm … will display 'exponential behavior' only when confronted with instances containing 'exponentially large' numbers, [which] might be rare for the application we are interested in. If so, this type of algorithm might serve our purposes almost as well as a polynomial time algorithm.” Another example of a weakly NP-complete problem is the subset sum problem.
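The pseudo-polynomial behaviour can be made concrete with the standard dynamic program for subset sum (a textbook sketch): the set of reachable sums has at most target + 1 elements, so the running time is O(n · target), polynomial in the magnitude of the target but exponential in the number of bits used to encode it.

```python
def subset_sum(nums, target):
    """Pseudo-polynomial DP for subset sum.

    Runs in O(len(nums) * target) time: polynomial in the *magnitude*
    of target, but exponential in its bit-length, which is why the
    problem is only weakly NP-complete.
    """
    reachable = {0}  # sums achievable with a subset of the items seen so far
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # False
```

Doubling the number of bits in the target squares its magnitude, and with it the table size, which is exactly the "exponential behavior on exponentially large numbers" Garey and Johnson describe.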
The related term strongly NP-complete (or unary NP-complete) refers to those problems that remain NP-complete even if the data are encoded in unary, that is, if the data are "small" relative to the overall input size.[2]
Strong and weak NP-hardness vs. strong and weak polynomial-time algorithms
Assuming P ≠ NP, the following are true for computational problems on integers:[3]
• If a problem is weakly NP-hard, then it does not have a weakly polynomial time algorithm (polynomial in the number of integers and the number of bits in the largest integer), but it may have a pseudopolynomial time algorithm (polynomial in the number of integers and the magnitude of the largest integer). An example is the partition problem. Both weak NP-hardness and weak polynomial time correspond to encoding the input numbers in binary.
• If a problem is strongly NP-hard, then it does not even have a pseudo-polynomial time algorithm. It also does not have a fully polynomial-time approximation scheme. An example is the 3-partition problem. Both strong NP-hardness and pseudo-polynomial time correspond to encoding the input numbers in unary.
References
1. M. R. Garey and D. S. Johnson. Computers and Intractability: a Guide to the Theory of NP-Completeness. W.H. Freeman, New York, 1979.
2. L. Hall. Computational Complexity. The Johns Hopkins University.
3. Demaine, Erik. "Algorithmic Lower Bounds: Fun with Hardness Proofs, Lecture 2".
Weakly additive
In fair division, a topic in economics, a preference relation is weakly additive if the following condition is met:[1]
If A is preferred to B, and C is preferred to D (and the contents of A and C do not overlap) then A together with C is preferable to B together with D.
Every additive utility function is weakly additive. However, additivity is applicable only to cardinal utility functions, while weak additivity is applicable to ordinal utility functions.
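That additive utilities induce weakly additive preferences can be checked exhaustively on a toy instance (the items and their values below are hypothetical, chosen only for illustration): when A and C are disjoint, u(A ∪ C) = u(A) + u(C), while u(B ∪ D) ≤ u(B) + u(D) for nonnegative item values.

```python
from itertools import chain, combinations

# Hypothetical item values defining an additive utility u(S) = sum of values.
value = {"car": 10, "boat": 7, "piano": 4, "lamp": 1}
u = lambda bundle: sum(value[i] for i in bundle)

def weakly_additive_holds(A, B, C, D):
    """Check the weak-additivity condition for one instance:
    if A > B and C > D with A, C disjoint, then A ∪ C > B ∪ D."""
    if u(A) > u(B) and u(C) > u(D) and not (A & C):
        return u(A | C) > u(B | D)
    return True  # condition is vacuous when the premise fails

items = set(value)
bundles = [set(s) for s in chain.from_iterable(
    combinations(items, r) for r in range(len(items) + 1))]

# Exhaustively verify the condition over all quadruples of bundles.
assert all(weakly_additive_holds(A, B, C, D)
           for A in bundles for B in bundles
           for C in bundles for D in bundles)
```

The union on the right-hand side can only merge overlapping items of B and D (never double-count them), which is why the additive inequality carries over.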
Weak additivity is often a realistic assumption when dividing up goods between claimants, and simplifies the mathematics of certain fair division problems considerably. Some procedures in fair division do not need the value of goods to be additive and only require weak additivity. In particular the adjusted winner procedure only requires weak additivity.
Cases where weak additivity fails
Cases where the assumption might fail include:
• The value of A and C together is the less than the sum of their values. For instance two versions of the same CD may not be as valuable to a person as the sum of the values of the individual CDs on their own. I.e, A and C are substitute goods.
• The values of B and D together may be more than their individual values added. For instance two matching bookends may be much more valuable than twice the value of an individual bookend. I.e, B and D are complementary goods.
The use of money as compensation can often turn real cases like these into situations where the weak additivity condition is satisfied even if the values are not exactly additive.
The value of an additional unit of a type of goods (e.g. chairs), given the goods one already holds, is called the marginal utility.
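For finitely many goods the weak-additivity condition can be checked by brute force. The sketch below is purely illustrative (the function names and the example valuations are ours): an additive valuation passes, while a valuation in which the two CDs are substitutes violates the condition.

```python
from itertools import combinations

def powerset(items):
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def weakly_additive(items, value):
    """Brute-force check of weak additivity for a set valuation `value`:
    whenever A is preferred to B, C is preferred to D, and A and C are
    disjoint, A together with C must be preferred to B together with D."""
    subsets = powerset(items)
    for A in subsets:
        for C in subsets:
            if A & C:
                continue  # the condition only applies when A, C are disjoint
            for B in subsets:
                for D in subsets:
                    if value(A) > value(B) and value(C) > value(D) \
                            and not value(A | C) > value(B | D):
                        return False
    return True

# An additive valuation satisfies the condition ...
prices = {'cd1': 10, 'cd2': 10, 'book': 6, 'pen': 6}
additive = lambda s: sum(prices[i] for i in s)

# ... but substitutes (two near-identical CDs) can violate it:
def substitutes(s):
    cd_value = [0, 10, 11][len({'cd1', 'cd2'} & s)]  # a second CD adds little
    return cd_value + 6 * len({'book', 'pen'} & s)

print(weakly_additive(prices, additive))      # True
print(weakly_additive(prices, substitutes))   # False
```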
See also
• Responsive set extension § Responsiveness
References
1. Brams, Steven J.; Taylor, Alan D. (1996). Fair division: from cake-cutting to dispute resolution. Cambridge University Press. ISBN 0-521-55644-9.
Weakly chained diagonally dominant matrix
In mathematics, the weakly chained diagonally dominant matrices are a family of nonsingular matrices that include the strictly diagonally dominant matrices.
Definition
Preliminaries
We say row $i$ of a complex matrix $A=(a_{ij})$ is strictly diagonally dominant (SDD) if $|a_{ii}|>\textstyle {\sum _{j\neq i}}|a_{ij}|$. We say $A$ is SDD if all of its rows are SDD. Weakly diagonally dominant (WDD) is defined with $\geq $ instead.
The directed graph associated with an $m\times m$ complex matrix $A=(a_{ij})$ is given by the vertices $\{1,\ldots ,m\}$ and edges defined as follows: there exists an edge from $i\rightarrow j$ if and only if $a_{ij}\neq 0$.
Definition
A complex square matrix $A$ is said to be weakly chained diagonally dominant (WCDD) if
• $A$ is WDD and
• for each row $i_{1}$ that is not SDD, there exists a walk $i_{1}\rightarrow i_{2}\rightarrow \cdots \rightarrow i_{k}$ in the directed graph of $A$ ending at an SDD row $i_{k}$.
Example
The $m\times m$ matrix
${\begin{pmatrix}1\\-1&1\\&-1&1\\&&\ddots &\ddots \\&&&-1&1\end{pmatrix}}$
is WCDD.
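The definition translates directly into a small checker. The following Python sketch (ours; `is_wcdd` is a hypothetical helper, not a standard routine) tests weak diagonal dominance and then searches the directed graph backwards from the SDD rows; it confirms the example above and rejects a WDD matrix with no SDD row.

```python
import numpy as np
from collections import deque

def is_wcdd(A, tol=0.0):
    """Check weak chained diagonal dominance from the definition:
    A must be WDD, and every non-SDD row must reach an SDD row by a walk
    in the directed graph of A (edge i -> j iff A[i, j] != 0)."""
    A = np.asarray(A, dtype=float)
    m = A.shape[0]
    diag = np.abs(np.diag(A))
    off = np.abs(A).sum(axis=1) - diag
    if np.any(diag < off - tol):
        return False  # not even WDD
    sdd = diag > off + tol
    # BFS backwards from SDD rows: mark every row with a walk to an SDD row
    reach = sdd.copy()
    queue = deque(np.flatnonzero(sdd))
    while queue:
        j = queue.popleft()
        for i in range(m):
            if not reach[i] and i != j and A[i, j] != 0:  # edge i -> j
                reach[i] = True
                queue.append(i)
    return bool(reach.all())

print(is_wcdd(np.eye(5) - np.eye(5, k=-1)))        # True: the matrix above
print(is_wcdd(np.array([[1., -1.], [-1., 1.]])))   # False: WDD, but no SDD row
```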
Properties
Nonsingularity
A WCDD matrix is nonsingular.[1]
Proof:[2] Let $A=(a_{ij})$ be a WCDD matrix. Suppose there exists a nonzero $x$ in the null space of $A$. Without loss of generality, let $i_{1}$ be such that $|x_{i_{1}}|=1\geq |x_{j}|$ for all $j$. Since $A$ is WCDD, we may pick a walk $i_{1}\rightarrow i_{2}\rightarrow \cdots \rightarrow i_{k}$ ending at an SDD row $i_{k}$.
Taking moduli on both sides of
$-a_{i_{1}i_{1}}x_{i_{1}}=\sum _{j\neq i_{1}}a_{i_{1}j}x_{j}$
and applying the triangle inequality yields
$\left|a_{i_{1}i_{1}}\right|\leq \sum _{j\neq i_{1}}\left|a_{i_{1}j}\right|\left|x_{j}\right|\leq \sum _{j\neq i_{1}}\left|a_{i_{1}j}\right|,$
and hence row $i_{1}$ is not SDD. Moreover, since $A$ is WDD, the above chain of inequalities holds with equality so that $|x_{j}|=1$ whenever $a_{i_{1}j}\neq 0$. Therefore, $|x_{i_{2}}|=1$. Repeating this argument with $i_{2}$, $i_{3}$, etc., we find that $i_{k}$ is not SDD, a contradiction. $\square $
Recalling that an irreducible matrix is one whose associated directed graph is strongly connected, a trivial corollary of the above is that an irreducibly diagonally dominant matrix (i.e., an irreducible WDD matrix with at least one SDD row) is nonsingular.[3]
Relationship with nonsingular M-matrices
The following are equivalent:[4]
• $A$ is a nonsingular WDD M-matrix;
• $A$ is a nonsingular WDD L-matrix;
• $A$ is a WCDD L-matrix.
In fact, WCDD L-matrices were studied (by James H. Bramble and B. E. Hubbard) as early as 1964 in a journal article[5] in which they appear under the alternate name of matrices of positive type.
Moreover, if $A$ is an $n\times n$ WCDD L-matrix, we can bound its inverse as follows:[6]
$\left\Vert A^{-1}\right\Vert _{\infty }\leq \sum _{i}\left[a_{ii}\prod _{j=1}^{i}(1-u_{j})\right]^{-1}$ where $u_{i}={\frac {1}{\left|a_{ii}\right|}}\sum _{j=i+1}^{n}\left|a_{ij}\right|.$
Note that $u_{n}$ is always zero and that the right-hand side of the bound above is $\infty $ whenever one or more of the constants $u_{i}$ is one.
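For the bidiagonal example above, every constant $u_{i}$ is zero, so the bound reduces to the matrix dimension, and it is attained: the inverse is the lower-triangular matrix of ones. A minimal numerical check (ours; `inverse_bound` is our own helper name):

```python
import numpy as np

def inverse_bound(A):
    """The bound quoted above for an n x n WCDD L-matrix:
    sum_i [ a_ii * prod_{j<=i} (1 - u_j) ]^(-1),  with
    u_i = (1/|a_ii|) * sum_{j>i} |a_ij|."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    u = np.array([np.abs(A[i, i + 1:]).sum() / abs(A[i, i]) for i in range(n)])
    prods = np.cumprod(1.0 - u)
    return float(np.sum(1.0 / (np.diag(A) * prods)))

m = 5
A = np.eye(m) - np.eye(m, k=-1)            # the WCDD L-matrix example above
bound = inverse_bound(A)                   # all u_i = 0, so the bound is m
actual = np.linalg.norm(np.linalg.inv(A), np.inf)
print(np.isclose(bound, m) and np.isclose(actual, m))  # True: bound attained
```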
Tighter bounds for the inverse of a WCDD L-matrix are known.[7][8][9][10]
Applications
Due to their relationship with M-matrices (see above), WCDD matrices appear often in practical applications. An example is given below.
Monotone numerical schemes
WCDD L-matrices arise naturally from monotone approximation schemes for partial differential equations.
For example, consider the one-dimensional Poisson problem
$u^{\prime \prime }(x)+g(x)=0$ for $x\in (0,1)$
with Dirichlet boundary conditions $u(0)=u(1)=0$. Letting $\{0,h,2h,\ldots ,1\}$ be a numerical grid (for some positive $h$ that divides unity), a monotone finite difference scheme for the Poisson problem takes the form of
$-{\frac {1}{h^{2}}}A{\vec {u}}+{\vec {g}}=0$ where $[{\vec {g}}]_{j}=g(jh)$
and
$A={\begin{pmatrix}2&-1\\-1&2&-1\\&-1&2&-1\\&&\ddots &\ddots &\ddots \\&&&-1&2&-1\\&&&&-1&2\end{pmatrix}}.$
Note that $A$ is a WCDD L-matrix.
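As a sanity check (using a manufactured solution of our own choosing, not taken from the cited sources), the scheme can be assembled and solved against the exact solution u(x) = sin(πx), for which g(x) = π² sin(πx):

```python
import numpy as np

# Monotone finite-difference scheme for u'' + g = 0 on (0, 1), u(0) = u(1) = 0.
# Manufactured solution (our choice): u(x) = sin(pi x), so g(x) = pi^2 sin(pi x).
N = 100                                   # number of subintervals, h = 1/N
x = np.linspace(0.0, 1.0, N + 1)[1:-1]    # interior grid points jh
h = 1.0 / N
g = np.pi ** 2 * np.sin(np.pi * x)

n = len(x)
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # the WCDD L-matrix above
u = np.linalg.solve(A, h ** 2 * g)        # -(1/h^2) A u + g = 0  =>  A u = h^2 g

err = np.max(np.abs(u - np.sin(np.pi * x)))
print(err < 1e-3)                         # True: second-order accurate
```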
References
1. Shivakumar, P. N.; Chew, Kim Ho (1974). "A Sufficient Condition for Nonvanishing of Determinants" (PDF). Proceedings of the American Mathematical Society. 43 (1): 63. doi:10.1090/S0002-9939-1974-0332820-0. ISSN 0002-9939.
2. Azimzadeh, Parsiad; Forsyth, Peter A. (2016). "Weakly Chained Matrices, Policy Iteration, and Impulse Control". SIAM Journal on Numerical Analysis. 54 (3): 1341–1364. arXiv:1510.03928. doi:10.1137/15M1043431. ISSN 0036-1429. S2CID 29143430.
3. Horn, Roger A.; Johnson, Charles R. (1990). "Matrix analysis". Cambridge University Press, Cambridge. {{cite journal}}: Cite journal requires |journal= (help)
4. Azimzadeh, Parsiad (2019). "A fast and stable test to check if a weakly diagonally dominant matrix is a nonsingular M-Matrix". Mathematics of Computation. 88 (316): 783–800. arXiv:1701.06951. Bibcode:2017arXiv170106951A. doi:10.1090/mcom/3347. S2CID 3356041.
5. Bramble, James H.; Hubbard, B. E. (1964). "On a finite difference analogue of an elliptic problem which is neither diagonally dominant nor of non-negative type". Journal of Mathematical Physics. 43: 117–132. doi:10.1002/sapm1964431117.
6. Shivakumar, P. N.; Williams, Joseph J.; Ye, Qiang; Marinov, Corneliu A. (1996). "On Two-Sided Bounds Related to Weakly Diagonally Dominant M-Matrices with Application to Digital Circuit Dynamics". SIAM Journal on Matrix Analysis and Applications. 17 (2): 298–312. doi:10.1137/S0895479894276370. ISSN 0895-4798.
7. Cheng, Guang-Hui; Huang, Ting-Zhu (2007). "An upper bound for $\Vert A^{-1}\Vert _{\infty }$ of strictly diagonally dominant M-matrices". Linear Algebra and Its Applications. 426 (2–3): 667–673. doi:10.1016/j.laa.2007.06.001. ISSN 0024-3795.
8. Li, Wen (2008). "The infinity norm bound for the inverse of nonsingular diagonal dominant matrices". Applied Mathematics Letters. 21 (3): 258–263. doi:10.1016/j.aml.2007.03.018. ISSN 0893-9659.
9. Wang, Ping (2009). "An upper bound for $\Vert A^{-1}\Vert _{\infty }$ of strictly diagonally dominant M-matrices". Linear Algebra and Its Applications. 431 (5–7): 511–517. doi:10.1016/j.laa.2009.02.037. ISSN 0024-3795.
10. Huang, Ting-Zhu; Zhu, Yan (2010). "Estimation of $\Vert A^{-1}\Vert _{\infty }$ for weakly chained diagonally dominant M-matrices". Linear Algebra and Its Applications. 432 (2–3): 670–677. doi:10.1016/j.laa.2009.09.012. ISSN 0024-3795.
Weak topology
This article is about the weak topology on a normed vector space. For the weak topology induced by a general family of maps, see initial topology. For the weak topology generated by a cover of a space, see coherent topology.
In mathematics, weak topology is an alternative term for certain initial topologies, often on topological vector spaces or spaces of linear operators, for instance on a Hilbert space. The term is most commonly used for the initial topology of a topological vector space (such as a normed vector space) with respect to its continuous dual. The remainder of this article will deal with this case, which is one of the concepts of functional analysis.
One may call subsets of a topological vector space weakly closed (respectively, weakly compact, etc.) if they are closed (respectively, compact, etc.) with respect to the weak topology. Likewise, functions are sometimes called weakly continuous (respectively, weakly differentiable, weakly analytic, etc.) if they are continuous (respectively, differentiable, analytic, etc.) with respect to the weak topology.
History
Starting in the early 1900s, David Hilbert and Marcel Riesz made extensive use of weak convergence. The early pioneers of functional analysis did not elevate norm convergence above weak convergence and oftentimes viewed weak convergence as preferable.[1] In 1929, Banach introduced weak convergence for normed spaces and also introduced the analogous weak-* convergence.[1] The weak topology is also called topologie faible (French) and schwache Topologie (German).
The weak and strong topologies
Main article: Topologies on spaces of linear maps
Let $\mathbb {K} $ be a topological field, namely a field with a topology such that addition, multiplication, and division are continuous. In most applications $\mathbb {K} $ will be either the field of complex numbers or the field of real numbers with the familiar topologies.
Weak topology with respect to a pairing
Main article: Dual system § Weak topology
Both the weak topology and the weak* topology are special cases of a more general construction for pairings, which we now describe. The benefit of this more general construction is that any definition or result proved for it applies to both the weak topology and the weak* topology, removing the need to state many definitions, theorems, and proofs twice. This is also why the weak* topology is frequently referred to simply as the "weak topology": it is just an instance of the weak topology in the setting of this more general construction.
Suppose (X, Y, b) is a pairing of vector spaces over a topological field $\mathbb {K} $ (i.e. X and Y are vector spaces over $\mathbb {K} $ and b : X × Y → $\mathbb {K} $ is a bilinear map).
Notation. For all x ∈ X, let b(x, •) : Y → $\mathbb {K} $ denote the linear functional on Y defined by y ↦ b(x, y). Similarly, for all y ∈ Y, let b(•, y) : X → $\mathbb {K} $ be defined by x ↦ b(x, y).
Definition. The weak topology on X induced by Y (and b) is the weakest topology on X, denoted by 𝜎(X, Y, b) or simply 𝜎(X, Y), making all maps b(•, y) : X → $\mathbb {K} $ continuous, as y ranges over Y.[1]
The weak topology on Y is now automatically defined as described in the article Dual system. However, for clarity, we now repeat it.
Definition. The weak topology on Y induced by X (and b) is the weakest topology on Y, denoted by 𝜎(Y, X, b) or simply 𝜎(Y, X), making all maps b(x, •) : Y → $\mathbb {K} $ continuous, as x ranges over X.[1]
If the field $\mathbb {K} $ has an absolute value |⋅|, then the weak topology 𝜎(X, Y, b) on X is induced by the family of seminorms, py : X → $\mathbb {R} $, defined by
py(x) := |b(x, y)|
for all y ∈ Y and x ∈ X. This shows that weak topologies are locally convex.
Assumption. We will henceforth assume that $\mathbb {K} $ is either the real numbers $\mathbb {R} $ or the complex numbers $\mathbb {C} $.
Canonical duality
We now consider the special case where Y is a vector subspace of the algebraic dual space of X (i.e. a vector space of linear functionals on X).
There is a pairing, denoted by $(X,Y,\langle \cdot ,\cdot \rangle )$ or $(X,Y)$, called the canonical pairing whose bilinear map $\langle \cdot ,\cdot \rangle $ is the canonical evaluation map, defined by $\langle x,x'\rangle =x'(x)$ for all $x\in X$ and $x'\in Y$. Note in particular that $\langle \cdot ,x'\rangle $ is just another way of denoting $x'$ i.e. $\langle \cdot ,x'\rangle =x'(\cdot )$.
Assumption. If Y is a vector subspace of the algebraic dual space of X then we will assume that they are associated with the canonical pairing ⟨X, Y⟩.
In this case, the weak topology on X (resp. the weak topology on Y), denoted by 𝜎(X,Y) (resp. by 𝜎(Y,X)) is the weak topology on X (resp. on Y) with respect to the canonical pairing ⟨X, Y⟩.
The topology σ(X,Y) is the initial topology of X with respect to Y.
If Y is a vector space of linear functionals on X, then the continuous dual of X with respect to the topology σ(X,Y) is precisely equal to Y (Rudin 1991, Theorem 3.10).[1]
The weak and weak* topologies
Let X be a topological vector space (TVS) over $\mathbb {K} $, that is, X is a $\mathbb {K} $ vector space equipped with a topology so that vector addition and scalar multiplication are continuous. We call the topology that X starts with the original, starting, or given topology (the reader is cautioned against using the terms "initial topology" and "strong topology" to refer to the original topology since these already have well-known meanings, so using them may cause confusion). We may define a possibly different topology on X using the topological or continuous dual space $X^{*}$, which consists of all linear functionals from X into the base field $\mathbb {K} $ that are continuous with respect to the given topology.
Recall that $\langle \cdot ,\cdot \rangle $ is the canonical evaluation map defined by $\langle x,x'\rangle =x'(x)$ for all $x\in X$ and $x'\in X^{*}$, where in particular, $\langle \cdot ,x'\rangle =x'(\cdot )=x'$.
Definition. The weak topology on X is the weak topology on X with respect to the canonical pairing $\langle X,X^{*}\rangle $. That is, it is the weakest topology on X making all maps $x'=\langle \cdot ,x'\rangle :X\to \mathbb {K} $ continuous, as $x'$ ranges over $X^{*}$.[1]
Definition: The weak topology on $X^{*}$ is the weak topology on $X^{*}$ with respect to the canonical pairing $\langle X,X^{*}\rangle $. That is, it is the weakest topology on $X^{*}$ making all maps $\langle x,\cdot \rangle :X^{*}\to \mathbb {K} $ continuous, as x ranges over X.[1] This topology is also called the weak* topology.
We give alternative definitions below.
Weak topology induced by the continuous dual space
Alternatively, the weak topology on a TVS X is the initial topology with respect to the family $X^{*}$. In other words, it is the coarsest topology on X such that each element of $X^{*}$ remains a continuous function.
A subbase for the weak topology is the collection of sets of the form $\phi ^{-1}(U)$ where $\phi \in X^{*}$ and U is an open subset of the base field $\mathbb {K} $. In other words, a subset of X is open in the weak topology if and only if it can be written as a union of (possibly infinitely many) sets, each of which is an intersection of finitely many sets of the form $\phi ^{-1}(U)$.
From this point of view, the weak topology is the coarsest polar topology.
Weak convergence
Further information: Weak convergence (Hilbert space)
The weak topology is characterized by the following condition: a net $(x_{\lambda })$ in X converges in the weak topology to the element x of X if and only if $\phi (x_{\lambda })$ converges to $\phi (x)$ in $\mathbb {R} $ or $\mathbb {C} $ for all $\phi \in X^{*}$.
In particular, if $x_{n}$ is a sequence in X, then $x_{n}$ converges weakly to x if
$\varphi (x_{n})\to \varphi (x)$
as n → ∞ for all $\varphi \in X^{*}$. In this case, it is customary to write
$x_{n}{\overset {\mathrm {w} }{\longrightarrow }}x$
or, sometimes,
$x_{n}\rightharpoonup x.$
Other properties
If X is equipped with the weak topology, then addition and scalar multiplication remain continuous operations, and X is a locally convex topological vector space.
If X is a normed space, then the dual space $X^{*}$ is itself a normed vector space by using the norm
$\|\phi \|=\sup _{\|x\|\leq 1}|\phi (x)|.$
This norm gives rise to a topology, called the strong topology, on $X^{*}$. This is the topology of uniform convergence. The uniform and strong topologies are generally different for other spaces of linear maps; see below.
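In finite dimensions the dual norm can be computed explicitly. A small numerical illustration (the example is our own): on $\mathbb {R} ^{3}$ with the ℓ¹ norm, the dual norm of φ(x) = ⟨c, x⟩ is the ℓ∞ norm of c, attained at a coordinate vector, and no point of the unit ℓ¹ sphere exceeds it.

```python
import numpy as np

rng = np.random.default_rng(0)
c = np.array([3.0, -1.0, 2.0])
phi = lambda v: float(c @ v)             # a continuous linear functional on R^3

# On (R^3, ||.||_1), the dual norm of phi is ||c||_inf.
dual_norm = float(np.max(np.abs(c)))     # = 3.0, attained at the vector e_1
samples = rng.normal(size=(10_000, 3))
samples /= np.abs(samples).sum(axis=1, keepdims=True)   # unit l1 sphere
assert all(abs(phi(s)) <= dual_norm + 1e-12 for s in samples)
print(dual_norm, phi(np.array([1.0, 0.0, 0.0])))        # 3.0 3.0
```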
Weak-* topology
The weak* topology is an important example of a polar topology.
A space X can be embedded into its double dual X** by
$x\mapsto {\begin{cases}T_{x}:X^{*}\to \mathbb {K} \\T_{x}(\phi )=\phi (x)\end{cases}}$
Thus $T:X\to X^{**}$ is an injective linear mapping, though not necessarily surjective (spaces for which this canonical embedding is surjective are called reflexive). The weak-* topology on $X^{*}$ is the weak topology induced by the image of $T:T(X)\subset X^{**}$. In other words, it is the coarsest topology such that the maps Tx, defined by $T_{x}(\phi )=\phi (x)$ from $X^{*}$ to the base field $\mathbb {R} $ or $\mathbb {C} $ remain continuous.
Weak-* convergence
A net $\phi _{\lambda }$ in $X^{*}$ is convergent to $\phi $ in the weak-* topology if it converges pointwise:
$\phi _{\lambda }(x)\to \phi (x)$
for all $x\in X$. In particular, a sequence $\phi _{n}\in X^{*}$ converges to $\phi $ provided that
$\phi _{n}(x)\to \phi (x)$
for all x ∈ X. In this case, one writes
$\phi _{n}{\overset {w^{*}}{\to }}\phi $
as n → ∞.
Weak-* convergence is sometimes called the simple convergence or the pointwise convergence. Indeed, it coincides with the pointwise convergence of linear functionals.
Properties
If X is a separable (i.e. has a countable dense subset) locally convex space and H is a norm-bounded subset of its continuous dual space, then H endowed with the weak* (subspace) topology is a metrizable topological space.[1] However, for infinite-dimensional spaces, the metric cannot be translation-invariant.[2] If X is a separable metrizable locally convex space then the weak* topology on the continuous dual space of X is separable.[1]
Properties on normed spaces
By definition, the weak* topology is weaker than the weak topology on $X^{*}$. An important fact about the weak* topology is the Banach–Alaoglu theorem: if X is normed, then the closed unit ball in $X^{*}$ is weak*-compact (more generally, the polar in $X^{*}$ of a neighborhood of 0 in X is weak*-compact). Moreover, the closed unit ball in a normed space X is compact in the weak topology if and only if X is reflexive.
In more generality, let F be a locally compact valued field (e.g., the reals, the complex numbers, or any of the p-adic number systems). Let X be a normed topological vector space over F, compatible with the absolute value in F. Then in $X^{*}$, the topological dual space of continuous F-valued linear functionals on X, all norm-closed balls are compact in the weak-* topology.
If X is a normed space, a version of the Heine-Borel theorem holds. In particular, a subset of the continuous dual is weak* compact if and only if it is weak* closed and norm-bounded.[1] This implies, in particular, that when X is an infinite-dimensional normed space then the closed unit ball at the origin in the dual space of X does not contain any weak* neighborhood of 0 (since any such neighborhood is norm-unbounded).[1] Thus, even though norm-closed balls are compact, X* is not weak* locally compact.
If X is a normed space, then X is separable if and only if the weak-* topology on the closed unit ball of $X^{*}$ is metrizable,[1] in which case the weak* topology is metrizable on norm-bounded subsets of $X^{*}$. If a normed space X has a dual space that is separable (with respect to the dual-norm topology) then X is necessarily separable.[1] If X is a Banach space, the weak-* topology is not metrizable on all of $X^{*}$ unless X is finite-dimensional.[3]
Examples
Hilbert spaces
Consider, for example, the difference between strong and weak convergence of functions in the Hilbert space L2($\mathbb {R} ^{n}$). Strong convergence of a sequence $\psi _{k}\in L^{2}(\mathbb {R} ^{n})$ to an element ψ means that
$\int _{\mathbb {R} ^{n}}|\psi _{k}-\psi |^{2}\,{\rm {d}}\mu \,\to 0$
as k → ∞. Here the notion of convergence corresponds to the norm on L2.
In contrast weak convergence only demands that
$\int _{\mathbb {R} ^{n}}{\bar {\psi }}_{k}f\,\mathrm {d} \mu \to \int _{\mathbb {R} ^{n}}{\bar {\psi }}f\,\mathrm {d} \mu $
for all functions f ∈ L2 (or, more typically, all f in a dense subset of L2 such as a space of test functions, if the sequence {ψk} is bounded). For given test functions, the relevant notion of convergence only corresponds to the topology used in $\mathbb {C} $.
For example, in the Hilbert space L2(0,π), the sequence of functions
$\psi _{k}(x)={\sqrt {2/\pi }}\sin(kx)$
forms an orthonormal basis. In particular, the (strong) limit of $\psi _{k}$ as k → ∞ does not exist. On the other hand, by the Riemann–Lebesgue lemma, the weak limit exists and is zero.
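This behaviour is easy to observe numerically. In the sketch below (the test function f is our own choice), the norms of the ψk stay equal to 1 while the inner products against f shrink toward zero, exhibiting weak but not strong convergence:

```python
import numpy as np

# psi_k(x) = sqrt(2/pi) sin(kx) on (0, pi): ||psi_k|| = 1 for every k,
# yet <psi_k, f> -> 0 for the fixed test function f(x) = x (pi - x).
x = np.linspace(0.0, np.pi, 20_001)
dx = x[1] - x[0]
f = x * (np.pi - x)                         # a smooth test function in L^2

results = {}
for k in (1, 9, 99):
    psi = np.sqrt(2.0 / np.pi) * np.sin(k * x)
    norm = np.sqrt(np.sum(psi ** 2) * dx)   # stays ~1 for every k
    inner = np.sum(psi * f) * dx            # shrinks toward 0 as k grows
    results[k] = (norm, inner)
    print(k, round(norm, 4), round(inner, 6))
```

(Analytically, the inner product is $\sqrt{2/\pi}\cdot 4/k^{3}$ for odd k, so the printed values decay like $k^{-3}$ while the norms remain 1.)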
Distributions
Main article: distribution (mathematics)
One normally obtains spaces of distributions by forming the strong dual of a space of test functions (such as the compactly supported smooth functions on $\mathbb {R} ^{n}$). In an alternative construction of such spaces, one can take the weak dual of a space of test functions inside a Hilbert space such as L2. Thus one is led to consider the idea of a rigged Hilbert space.
Weak topology induced by the algebraic dual
Suppose that X is a vector space and X# is the algebraic dual space of X (i.e. the vector space of all linear functionals on X). If X is endowed with the weak topology induced by X#, then the continuous dual space of X is X#, every bounded subset of X is contained in a finite-dimensional vector subspace of X, and every vector subspace of X is closed and has a topological complement.[4]
Operator topologies
If X and Y are topological vector spaces, the space L(X,Y) of continuous linear operators f : X → Y may carry a variety of different possible topologies. The naming of such topologies depends on the kind of topology one is using on the target space Y to define operator convergence (Yosida 1980, IV.7 Topologies of linear maps). There are, in general, a vast array of possible operator topologies on L(X,Y), whose naming is not entirely intuitive.
For example, the strong operator topology on L(X,Y) is the topology of pointwise convergence. For instance, if Y is a normed space, then this topology is defined by the seminorms indexed by x ∈ X:
$f\mapsto \|f(x)\|_{Y}.$
More generally, if a family of seminorms Q defines the topology on Y, then the seminorms pq, x on L(X,Y) defining the strong topology are given by
$p_{q,x}:f\mapsto q(f(x)),$
indexed by q ∈ Q and x ∈ X.
In particular, see the weak operator topology and weak* operator topology.
See also
• Eberlein compactum, a compact set in the weak topology
• Weak convergence (Hilbert space)
• Weak-star operator topology
• Weak convergence of measures
• Topologies on spaces of linear maps
• Topologies on the set of operators on a Hilbert space
• Vague topology
References
1. Narici & Beckenstein 2011, pp. 225–273.
2. Folland 1999, p. 170.
3. Proposition 2.6.12, p. 226 in Megginson, Robert E. (1998), An introduction to Banach space theory, Graduate Texts in Mathematics, vol. 183, New York: Springer-Verlag, pp. xx+596, ISBN 0-387-98431-3.
4. Trèves 2006, pp. 36, 201.
Bibliography
• Conway, John B. (1994), A Course in Functional Analysis (2nd ed.), Springer-Verlag, ISBN 0-387-97245-5
• Folland, G.B. (1999). Real Analysis: Modern Techniques and Their Applications (Second ed.). John Wiley & Sons, Inc. ISBN 978-0-471-31716-6.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Pedersen, Gert (1989), Analysis Now, Springer, ISBN 0-387-96788-5
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
• Willard, Stephen (February 2004). General Topology. Courier Dover Publications. ISBN 9780486434797.
• Yosida, Kosaku (1980), Functional analysis (6th ed.), Springer, ISBN 978-3-540-58654-8
Weakly compact cardinal
In mathematics, a weakly compact cardinal is a certain kind of cardinal number introduced by Erdős & Tarski (1961); weakly compact cardinals are large cardinals, meaning that their existence cannot be proven from the standard axioms of set theory. (Tarski originally called them "not strongly incompact" cardinals.)
Formally, a cardinal κ is defined to be weakly compact if it is uncountable and for every function f: $[\kappa ]^{2}$ → {0, 1} there is a set of cardinality κ that is homogeneous for f. In this context, $[\kappa ]^{2}$ means the set of 2-element subsets of κ, and a subset S of κ is homogeneous for f if and only if either all of $[S]^{2}$ maps to 0 or all of it maps to 1.
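The partition property in this definition is an uncountable analogue of Ramsey's theorem. At the finite level the same homogeneity phenomenon can be verified exhaustively; the Python sketch below (ours, purely illustrative) checks that every 2-coloring of the pairs from a 6-element set admits a homogeneous 3-element subset, i.e. R(3,3) ≤ 6:

```python
from itertools import combinations, product

def has_homogeneous(color, vertices, size):
    """Is there a subset of the given size all of whose pairs get one color?"""
    return any(
        len({color[p] for p in combinations(sub, 2)}) == 1
        for sub in combinations(vertices, size)
    )

vertices = tuple(range(6))
pairs = list(combinations(vertices, 2))        # 15 pairs, so 2^15 colorings
assert all(
    has_homogeneous(dict(zip(pairs, bits)), vertices, 3)
    for bits in product((0, 1), repeat=len(pairs))
)
print("every 2-coloring of the pairs from a 6-element set has a homogeneous 3-set")
```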
The name "weakly compact" refers to the fact that if a cardinal is weakly compact then a certain related infinitary language satisfies a version of the compactness theorem; see below.
Equivalent formulations
The following are equivalent for any uncountable cardinal κ:
1. κ is weakly compact.
2. for every λ<κ, natural number n ≥ 2, and function f: $[\kappa ]^{n}$ → λ, there is a set of cardinality κ that is homogeneous for f. (Drake 1974, chapter 7 theorem 3.5)
3. κ is inaccessible and has the tree property, that is, every tree of height κ has either a level of size κ or a branch of size κ.
4. Every linear order of cardinality κ has an ascending or a descending sequence of order type κ.
5. κ is $\Pi _{1}^{1}$-indescribable.
6. κ has the extension property. In other words, for all U ⊂ Vκ there exists a transitive set X with κ ∈ X, and a subset S ⊂ X, such that (Vκ, ∈, U) is an elementary substructure of (X, ∈, S). Here, U and S are regarded as unary predicates.
7. For every set S of cardinality κ of subsets of κ, there is a non-trivial κ-complete filter that decides S.
8. κ is κ-unfoldable.
9. κ is inaccessible and the infinitary language Lκ,κ satisfies the weak compactness theorem.
10. κ is inaccessible and the infinitary language Lκ,ω satisfies the weak compactness theorem.
11. κ is inaccessible and for every transitive set $M$ of cardinality κ with κ $\in M$, ${}^{<\kappa }M\subset M$, and satisfying a sufficiently large fragment of ZFC, there is an elementary embedding $j$ from $M$ to a transitive set $N$ of cardinality κ such that ${}^{<\kappa }N\subset N$, with critical point $\operatorname {crit} (j)=\kappa $. (Hauser 1991, Theorem 1.3)
A language Lκ,κ is said to satisfy the weak compactness theorem if whenever Σ is a set of sentences of cardinality at most κ and every subset with less than κ elements has a model, then Σ has a model. Strongly compact cardinals are defined in a similar way without the restriction on the cardinality of the set of sentences.
Properties
Every weakly compact cardinal is a reflecting cardinal, and is also a limit of reflecting cardinals. This means also that weakly compact cardinals are Mahlo cardinals, and the set of Mahlo cardinals less than a given weakly compact cardinal is stationary.
If $\kappa $ is weakly compact, then there are chains of well-founded elementary end-extensions of $(V_{\kappa },\in )$ of arbitrary length $<\kappa ^{+}$.[1]p.6
Weakly compact cardinals remain weakly compact in $L$.[2] Assuming V = L, a cardinal is weakly compact iff it is 2-stationary.[3]
See also
• List of large cardinal properties
References
• Drake, F. R. (1974), Set Theory: An Introduction to Large Cardinals, Studies in Logic and the Foundations of Mathematics, vol. 76, Elsevier Science Ltd, ISBN 0-444-10535-2
• Erdős, Paul; Tarski, Alfred (1961), "On some problems involving inaccessible cardinals", Essays on the foundations of mathematics, Jerusalem: Magnes Press, Hebrew Univ., pp. 50–82, MR 0167422
• Hauser, Kai (1991), "Indescribable Cardinals and Elementary Embeddings", Journal of Symbolic Logic, Association for Symbolic Logic, 56 (2): 439–457, doi:10.2307/2274692, JSTOR 2274692, S2CID 288779
• Kanamori, Akihiro (2003), The Higher Infinite : Large Cardinals in Set Theory from Their Beginnings (2nd ed.), Springer, ISBN 3-540-00384-3
Citations
1. Villaveces, Andres (1996). "Chains of End Elementary Extensions of Models of Set Theory". arXiv:math/9611209.
2. Jech, Thomas (2003). Set Theory: The Third Millennium Edition. Springer.
3. Bagaria; Magidor; Mancilla (2019). On the Consistency Strength of Hyperstationarity, p. 3.
Dual system
This article is about dual pairs of vector spaces. For dual pairs in representation theory, see Reductive dual pair.
In mathematics, a dual system, dual pair, or duality over a field $\mathbb {K} $ is a triple $(X,Y,b)$ consisting of two vector spaces $X$ and $Y$ over $\mathbb {K} $ and a non-degenerate bilinear map $b:X\times Y\to \mathbb {K} $.
Duality theory, the study of dual systems, is part of functional analysis. It is separate and distinct from Dual-system Theory in psychology.
Definition, notation, and conventions
Pairings
A pairing or pair over a field $\mathbb {K} $ is a triple $(X,Y,b),$ which may also be denoted by $b(X,Y),$ consisting of two vector spaces $X$ and $Y$ over $\mathbb {K} $ (which this article assumes is either the field of real numbers $\mathbb {R} $ or the field of complex numbers $\mathbb {C} $) and a bilinear map $b:X\times Y\to \mathbb {K} ,$ which is called the bilinear map associated with the pairing[1] or simply the pairing's map/bilinear form.
For every $x\in X$, define
${\begin{alignedat}{4}b(x,\,\cdot \,):\,&Y&&\to &&\,\mathbb {K} \\&y&&\mapsto &&\,b(x,y)\end{alignedat}}$
and for every $y\in Y,$ define
${\begin{alignedat}{4}b(\,\cdot \,,y):\,&X&&\to &&\,\mathbb {K} \\&x&&\mapsto &&\,b(x,y).\end{alignedat}}$
Every $b(x,\,\cdot \,)$ is a linear functional on $Y$ and every $b(\,\cdot \,,y)$ is a linear functional on $X$. Let
$b(X,\,\cdot \,):=\{b(x,\,\cdot \,):x\in X\}\qquad {\text{ and }}\qquad b(\,\cdot \,,Y):=\{b(\,\cdot \,,y):y\in Y\}$
where each of these sets forms a vector space of linear functionals.
It is common practice to write $\langle x,y\rangle $ instead of $b(x,y)$, in which case the pair is often denoted by $\left\langle X,Y\right\rangle $ rather than $(X,Y,\langle \cdot ,\cdot \rangle ).$
However, this article will reserve the use of $\langle \cdot ,\cdot \rangle $ for the canonical evaluation map (defined below) so as to avoid confusion for readers not familiar with this subject.
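Concretely, a pairing and the associated families of linear functionals $b(x,\,\cdot \,)$ and $b(\,\cdot \,,y)$ can be sketched in a few lines of Python; here $b$ is the dot product on $\mathbb {R} ^{2}$, and the helper names are ours, purely illustrative:

```python
def b(x, y):
    """A bilinear pairing of X = R^2 with Y = R^2: the dot product."""
    return x[0] * y[0] + x[1] * y[1]

def b_left(x):
    """Currying the first slot gives b(x, .) : Y -> K, a linear functional on Y."""
    return lambda y: b(x, y)

def b_right(y):
    """Currying the second slot gives b(., y) : X -> K, a linear functional on X."""
    return lambda x: b(x, y)

f = b_left((2, -1))
print(f((3, 4)))  # b((2, -1), (3, 4)) = 2*3 + (-1)*4 = 2
```

Bilinearity of $b$ is exactly what makes each curried map a linear functional.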
Dual pairings
A pairing $(X,Y,b)$ is called a dual system, a dual pair,[2] or a duality over $\mathbb {K} $ if the bilinear form $b$ is non-degenerate, which means that it satisfies the following two separation axioms:
1. $Y$ separates/distinguishes points of $X$: if $x\in X$ is such that $b(x,\,\cdot \,)=0$ then $x=0$; or equivalently, for all non-zero $x\in X$, the map $b(x,\,\cdot \,):Y\to \mathbb {K} $ is not identically $0$ (i.e. there exists a $y\in Y$ such that $b(x,y)\neq 0$);
2. $X$ separates/distinguishes points of $Y$: if $y\in Y$ is such that $b(\,\cdot \,,y)=0$ then $y=0$; or equivalently, for all non-zero $y\in Y,$ the map $b(\,\cdot \,,y):X\to \mathbb {K} $ is not identically $0$ (i.e. there exists an $x\in X$ such that $b(x,y)\neq 0$).
In this case one says that $b$ is non-degenerate, that $b$ places $X$ and $Y$ in duality (or in separated duality), and calls $b$ the duality pairing of $(X,Y,b)$.[1][2]
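In finite dimensions every bilinear map on $\mathbb {K} ^{m}\times \mathbb {K} ^{n}$ has the form $b(x,y)=x^{\mathrm {T} }Ay$ for a matrix $A$, and the two separation axioms become rank conditions: $Y$ separates points of $X$ iff $\operatorname {rank} A=m$, and $X$ separates points of $Y$ iff $\operatorname {rank} A=n$ (so a duality forces $m=n$ with $A$ invertible). A sketch in Python using exact rational row reduction; the function names are ours:

```python
from fractions import Fraction

def rank(A):
    """Rank of a matrix (list of rows) via Gaussian elimination over Q."""
    M = [[Fraction(v) for v in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * p for a, p in zip(M[i], M[r])]
        r += 1
    return r

def separation(A):
    """For b(x, y) = x^T A y on K^m x K^n, return the pair
    (Y separates points of X, X separates points of Y)."""
    m, n, r = len(A), len(A[0]), rank(A)
    return r == m, r == n

print(separation([[1, 2], [3, 4]]))        # (True, True): a duality
print(separation([[1, 0, 0], [0, 1, 0]]))  # (True, False): degenerate pairing
```

The second matrix induces the $\mathbb {R} ^{2}$/$\mathbb {R} ^{3}$ pairing appearing among the examples below.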
Total subsets
A subset $S$ of $Y$ is called total if for every $x\in X$,
$b(x,s)=0\quad {\text{ for all }}s\in S$
implies $x=0.$
A total subset of $X$ is defined analogously (see footnote).[note 1] Thus $X$ separates points of $Y$ if and only if $X$ is a total subset of $X$, and similarly, $Y$ separates points of $X$ if and only if $Y$ is a total subset of $Y$.
Orthogonality
The vectors $x$ and $y$ are called orthogonal, written $x\perp y$, if $b(x,y)=0$. Two subsets $R\subseteq X$ and $S\subseteq Y$ are orthogonal, written $R\perp S$, if $b(R,S)=\{0\}$; that is, if $b(r,s)=0$ for all $r\in R$ and $s\in S$. The definition of a subset being orthogonal to a vector is defined analogously.
The orthogonal complement or annihilator of a subset $R\subseteq X$ is
$R^{\perp }:=\{y\in Y:R\perp y\}=\{y\in Y:b(R,y)=\{0\}\}.$
Thus $R$ is a total subset of $X$ if and only if $R^{\perp }$ equals $\{0\}$.
Polar sets
Main article: Polar set
Throughout, $(X,Y,b)$ will be a pairing over $\mathbb {K} .$ The absolute polar or polar of a subset $A$ of $X$ is the set:[3]
$A^{\circ }:=\left\{y\in Y:\sup _{x\in A}|b(x,y)|\leq 1\right\}.$
Dually, the absolute polar or polar of a subset $B$ of $Y$ is denoted by $B^{\circ }$ and defined by
$B^{\circ }:=\left\{x\in X:\sup _{y\in B}|b(x,y)|\leq 1\right\}$
In this case, the absolute polar of a subset $B$ of $Y$ is also called the absolute prepolar or prepolar of $B$ and may be denoted by ${}^{\circ }B.$
The polar $B^{\circ }$ is necessarily a convex set containing $0\in Y$ where if $B$ is balanced then so is $B^{\circ }$ and if $B$ is a vector subspace of $X$ then so too is $B^{\circ }$ a vector subspace of $Y.$[4]
If $A\subseteq X$ then the bipolar of $A,$ denoted by $A^{\circ \circ },$ is the set ${}^{\circ }\left(A^{\circ }\right).$ Similarly, if $B\subseteq Y$ then the bipolar of $B$ is $B^{\circ \circ }:=\left({}^{\circ }B\right)^{\circ }.$
If $A$ is a vector subspace of $X,$ then $A^{\circ }=A^{\perp }$ and this is also equal to the real polar of $A.$
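For a finite set $A$ the defining supremum is a maximum, so polar membership can be tested directly. A small Python illustration under the dot pairing on $\mathbb {R} ^{2}$ (names ours): for $A=\{(1,0)\}$ the polar is the slab $\{y:|y_{1}|\leq 1\}$.

```python
def b(x, y):
    """Dot pairing on R^2."""
    return sum(u * v for u, v in zip(x, y))

def in_polar(y, A):
    """y lies in the absolute polar of the finite set A
    iff sup_{x in A} |b(x, y)| <= 1 (here the sup is a max)."""
    return max(abs(b(x, y)) for x in A) <= 1

A = [(1, 0)]
print(in_polar((0.5, 7.0), A))  # True:  |b((1, 0), (0.5, 7))| = 0.5 <= 1
print(in_polar((2.0, 0.0), A))  # False: the sup is 2
```

Note the second coordinate of $y$ is unconstrained, since no element of $A$ pairs with it.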
Dual definitions and results
Given a pairing $(X,Y,b),$ define a new pairing $(Y,X,d)$ where $d(y,x):=b(x,y)$ for all $x\in X\quad {\text{ and }}y\in Y.$[1]
There is a repeating theme in duality theory, which is that any definition for a pairing $(X,Y,b)$ has a corresponding dual definition for the pairing $(Y,X,d).$
Convention and Definition: Given any definition for a pairing $(X,Y,b),$ one obtains a dual definition by applying it to the pairing $(Y,X,d).$ This convention also applies to theorems.
Convention: Adhering to common practice, unless clarity is needed, whenever a definition (or result) for a pairing $(X,Y,b)$ is given then this article will omit mention of the corresponding dual definition (or result) but nevertheless use it.
For instance, if "$X$ distinguishes points of $Y$" (resp, "$S$ is a total subset of $Y$") is defined as above, then this convention immediately produces the dual definition of "$Y$ distinguishes points of $X$" (resp, "$S$ is a total subset of $X$").
The following notation is almost ubiquitous and allows us to avoid assigning a symbol to $d.$
Convention and Notation: If a definition and its notation for a pairing $(X,Y,b)$ depends on the order of $X$ and $Y$ (e.g. the definition of the Mackey topology $\tau (X,Y,b)$ on $X$), then by switching the order of $X$ and $Y$ it is meant that the definition is applied to $(Y,X,d)$ (e.g. $\tau (Y,X,b)$ actually denotes the topology $\tau (Y,X,d)$).
For instance, once the weak topology on $X$ is defined, which is denoted by $\sigma (X,Y,b),$ then this definition will automatically be applied to the pairing $(Y,X,d)$ so as to obtain the definition of the weak topology on $Y,$ where this topology will be denoted by $\sigma (Y,X,b)$ rather than $\sigma (Y,X,d).$
Identification of $(X,Y)$ with $(Y,X)$
Although it is technically incorrect and an abuse of notation, this article will also adhere to the following nearly ubiquitous convention of treating a pairing $(X,Y,b)$ interchangeably with $(Y,X,d)$ and also of denoting $(Y,X,d)$ by $(Y,X,b).$
Examples
Restriction of a pairing
Suppose that $(X,Y,b)$ is a pairing, $M$ is a vector subspace of $X,$ and $N$ is a vector subspace of $Y.$ Then the restriction of $(X,Y,b)$ to $M\times N$ is the pairing $\left(M,N,b{\big \vert }_{M\times N}\right).$ If $(X,Y,b)$ is a duality, then it's possible for a restriction to fail to be a duality (e.g. if $Y\neq \{0\}$ and $N=\{0\}$).
This article will use the common practice of denoting the restriction $\left(M,N,b{\big \vert }_{M\times N}\right)$ by $(M,N,b).$
Canonical duality on a vector space
Suppose that $X$ is a vector space and let $X^{\#}$ denote the algebraic dual space of $X$ (that is, the space of all linear functionals on $X$). There is a canonical duality $\left(X,X^{\#},c\right)$ where $c\left(x,x^{\prime }\right)=\left\langle x,x^{\prime }\right\rangle =x^{\prime }(x),$ which is called the evaluation map or the natural or canonical bilinear functional on $X\times X^{\#}.$ Note in particular that for any $x^{\prime }\in X^{\#},$ $c\left(\,\cdot \,,x^{\prime }\right)$ is just another way of denoting $x^{\prime }$; i.e. $c\left(\,\cdot \,,x^{\prime }\right)=x^{\prime }(\,\cdot \,)=x^{\prime }.$
If $N$ is a vector subspace of $X^{\#}$, then the restriction of $\left(X,X^{\#},c\right)$ to $X\times N$ is called the canonical pairing where if this pairing is a duality then it is instead called the canonical duality. Clearly, $X$ always distinguishes points of $N$, so the canonical pairing is a dual system if and only if $N$ separates points of $X.$ The following notation is now nearly ubiquitous in duality theory.
The evaluation map will be denoted by $\left\langle x,x^{\prime }\right\rangle =x^{\prime }(x)$ (rather than by $c$) and $\langle X,N\rangle $ will be written rather than $(X,N,c).$
Assumption: As is common practice, if $X$ is a vector space and $N$ is a vector space of linear functionals on $X,$ then unless stated otherwise, it will be assumed that they are associated with the canonical pairing $\langle X,N\rangle .$
If $N$ is a vector subspace of $X^{\#}$ then $X$ distinguishes points of $N$ (or equivalently, $(X,N,c)$ is a duality) if and only if $N$ distinguishes points of $X,$ or equivalently if $N$ is total (that is, $n(x)=0$ for all $n\in N$ implies $x=0$).[1]
Canonical duality on a topological vector space
Suppose $X$ is a topological vector space (TVS) with continuous dual space $X^{\prime }.$ Then the restriction of the canonical duality $\left(X,X^{\#},c\right)$ to $X\times X^{\prime }$ defines a pairing $\left(X,X^{\prime },c{\big \vert }_{X\times X^{\prime }}\right)$ for which $X$ separates points of $X^{\prime }.$ If $X^{\prime }$ separates points of $X$ (which is true if, for instance, $X$ is a Hausdorff locally convex space) then this pairing forms a duality.[2]
Assumption: As is commonly done, whenever $X$ is a TVS, then unless indicated otherwise, it will be assumed without comment that it's associated with the canonical pairing $\left\langle X,X^{\prime }\right\rangle .$
Polars and duals of TVSs
The following result shows that the continuous linear functionals on a TVS are exactly those linear functionals that are bounded on a neighborhood of the origin.
Theorem[1] — Let $X$ be a TVS with algebraic dual $X^{\#}$ and let ${\mathcal {N}}$ be a basis of neighborhoods of $X$ at the origin. Under the canonical duality $\left\langle X,X^{\#}\right\rangle ,$ the continuous dual space of $X$ is the union of all $N^{\circ }$ as $N$ ranges over ${\mathcal {N}}$ (where the polars are taken in $X^{\#}$).
Inner product spaces and complex conjugate spaces
A pre-Hilbert space $(H,\langle \cdot ,\cdot \rangle )$ is a dual pairing if and only if $H$ is a vector space over $\mathbb {R} $ or $H$ has dimension $0.$ Here it is assumed that the sesquilinear form $\langle \cdot ,\cdot \rangle $ is conjugate homogeneous in its second coordinate and homogeneous in its first coordinate.
• If $(H,\langle \cdot ,\cdot \rangle )$ is a real Hilbert space then $(H,H,\langle \cdot ,\cdot \rangle )$ forms a dual system.
• If $(H,\langle \cdot ,\cdot \rangle )$ is a complex Hilbert space then $(H,H,\langle \cdot ,\cdot \rangle )$ forms a dual system if and only if $\operatorname {dim} H=0.$ If $H$ is non-trivial then $(H,H,\langle \cdot ,\cdot \rangle )$ does not even form a pairing since the inner product is sesquilinear rather than bilinear.[1]
Suppose that $(H,\langle \cdot ,\cdot \rangle )$ is a complex pre-Hilbert space with scalar multiplication denoted as usual by juxtaposition or by a dot $\cdot .$ Define the map
$\,\cdot \,\perp \,\cdot \,:\mathbb {C} \times H\to H\quad {\text{ by }}\quad c\perp x:={\overline {c}}x,$
where the right-hand side uses the scalar multiplication of $H.$ Let ${\overline {H}}$ denote the complex conjugate vector space of $H,$ where ${\overline {H}}$ denotes the additive group of $(H,+)$ (so vector addition in ${\overline {H}}$ is identical to vector addition in $H$) but with scalar multiplication in ${\overline {H}}$ being the map $\,\cdot \,\perp \,\cdot \,$ (instead of the scalar multiplication that $H$ is endowed with).
The map $b:H\times {\overline {H}}\to \mathbb {C} $ defined by $b(x,y):=\langle x,y\rangle $ is linear in both coordinates[note 2] and so $\left(H,{\overline {H}},\langle \cdot ,\cdot \rangle \right)$ forms a dual pairing.
Other examples
• Suppose $X=\mathbb {R} ^{2},$ $Y=\mathbb {R} ^{3},$ and for all $\left(x_{1},y_{1}\right)\in X{\text{ and }}\left(x_{2},y_{2},z_{2}\right)\in Y,$ let
$b\left(\left(x_{1},y_{1}\right),\left(x_{2},y_{2},z_{2}\right)\right):=x_{1}x_{2}+y_{1}y_{2}.$
Then $(X,Y,b)$ is a pairing such that $Y$ distinguishes points of $X,$ but $X$ does not distinguish points of $Y.$ Furthermore, $X^{\perp }:=\{y\in Y:X\perp y\}=\{(0,0,z):z\in \mathbb {R} \}.$
• Let $1<p<\infty ,$ $X:=L^{p}(\mu ),$ $Y:=L^{q}(\mu )$ (where $q$ is such that ${\tfrac {1}{p}}+{\tfrac {1}{q}}=1$), and $b(f,g):=\int fg\,\mathrm {d} \mu .$ Then $(X,Y,b)$ is a dual system.
• Let $X$ and $Y$ be vector spaces over the same field $\mathbb {K} .$ Then the bilinear form $b\left(x\otimes y,x^{\prime }\otimes y^{\prime }\right)=\left\langle x^{\prime },x\right\rangle \left\langle y^{\prime },y\right\rangle $ places $X\otimes Y$ and $X^{\#}\otimes Y^{\#}$ in duality.[2]
• A sequence space $X$ and its beta dual $Y:=X^{\beta }$ with the bilinear map defined as $\langle x,y\rangle :=\sum _{i=1}^{\infty }x_{i}y_{i}$ for $x\in X,$ $y\in X^{\beta }$ forms a dual system.
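The first example above can be verified mechanically: since $b$ is linear in each slot, it suffices to test the annihilation conditions on a basis. A Python check (names ours) that the $z$-axis is exactly $X^{\perp }$:

```python
def b(x, y):
    """b((x1, y1), (x2, y2, z2)) = x1*x2 + y1*y2: pairs R^2 with R^3."""
    return x[0] * y[0] + x[1] * y[1]

X_basis = [(1, 0), (0, 1)]

def in_X_perp(y):
    """y is in X^perp iff b(x, y) = 0 for all x in X;
    by linearity in x it is enough to test a basis of X."""
    return all(b(x, y) == 0 for x in X_basis)

print(in_X_perp((0, 0, 5)))  # True:  the z-axis annihilates all of X
print(in_X_perp((1, 0, 0)))  # False: b((1, 0), (1, 0, 0)) = 1
```

Since $X^{\perp }\neq \{0\}$, the set $X$ is not total, i.e. $X$ does not distinguish points of $Y$.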
Weak topology
Main articles: Weak topology and Weak-* topology
Suppose that $(X,Y,b)$ is a pairing of vector spaces over $\mathbb {K} .$ If $S\subseteq Y$ then the weak topology on $X$ induced by $S$ (and $b$) is the weakest TVS topology on $X,$ denoted by $\sigma (X,S,b)$ or simply $\sigma (X,S),$ making all maps $b(\,\cdot \,,y):X\to \mathbb {K} $ continuous as $y$ ranges over $S.$[1] If $S$ is not clear from context then it should be assumed to be all of $Y,$ in which case it is called the weak topology on $X$ (induced by $Y$). The notation $X_{\sigma (X,S,b)},$ $X_{\sigma (X,S)},$ or (if no confusion could arise) simply $X_{\sigma }$ is used to denote $X$ endowed with the weak topology $\sigma (X,S,b).$ Importantly, the weak topology depends entirely on the function $b,$ the usual topology on $\mathbb {K} ,$ and $X$'s vector space structure, but not on the algebraic structures of $Y.$
Similarly, if $R\subseteq X$ then the dual definition yields the weak topology on $Y$ induced by $R$ (and $b$), which is denoted by $\sigma (Y,R,b)$ or simply $\sigma (Y,R)$ (see footnote for details).[note 3]
Definition and Notation: If "$\sigma (X,Y,b)$" is attached to a topological definition (e.g. $\sigma (X,Y,b)$-converges, $\sigma (X,Y,b)$-bounded, $\operatorname {cl} _{\sigma (X,Y,b)}(S),$ etc.) then it means that definition when the first space (i.e. $X$) carries the $\sigma (X,Y,b)$ topology. Mention of $b$ or even $X$ and $Y$ may be omitted if no confusion arises. So, for instance, if a sequence $\left(a_{i}\right)_{i=1}^{\infty }$ in $Y$ "$\sigma $-converges" or "weakly converges" then this means that it converges in $(Y,\sigma (Y,X,b)),$ whereas if it were a sequence in $X$ then this would mean that it converges in $(X,\sigma (X,Y,b)).$
The topology $\sigma (X,Y,b)$ is locally convex since it is determined by the family of seminorms $p_{y}:X\to \mathbb {R} $ defined by $p_{y}(x):=|b(x,y)|,$ as $y$ ranges over $Y.$[1] If $x\in X$ and $\left(x_{i}\right)_{i\in I}$ is a net in $X,$ then $\left(x_{i}\right)_{i\in I}$ $\sigma (X,Y,b)$-converges to $x$ if $\left(x_{i}\right)_{i\in I}$ converges to $x$ in $(X,\sigma (X,Y,b)).$[1] A net $\left(x_{i}\right)_{i\in I}$ $\sigma (X,Y,b)$-converges to $x$ if and only if for all $y\in Y,$ $b\left(x_{i},y\right)$ converges to $b(x,y).$ If $\left(x_{i}\right)_{i=1}^{\infty }$ is a sequence of orthonormal vectors in a Hilbert space, then $\left(x_{i}\right)_{i=1}^{\infty }$ converges weakly to 0 but does not norm-converge to 0 (or any other vector).[1]
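The orthonormal-sequence example can be made concrete in $\ell ^{2}$: pairing the standard basis vector $e_{n}$ against a fixed square-summable $y$ just reads off the $n$-th coordinate of $y$, which tends to $0$, while $\lVert e_{n}\rVert =1$ for every $n.$ A Python sketch with a truncated inner product (the truncation bound and the names are ours):

```python
import math

def e(n):
    """n-th standard basis vector of l^2, as a coordinate function."""
    return lambda i: 1.0 if i == n else 0.0

def inner(x, y, N=10**4):
    """Truncated l^2 inner product; exact here since e(n) has one nonzero entry."""
    return sum(x(i) * y(i) for i in range(1, N + 1))

y = lambda i: 1.0 / i  # (1, 1/2, 1/3, ...) is square-summable, so y is in l^2

print([inner(e(n), y) for n in (1, 10, 1000)])  # [1.0, 0.1, 0.001] -> 0
print(math.sqrt(inner(e(7), e(7))))             # 1.0: no norm convergence to 0
```

The pairings $\langle e_{n},y\rangle =y_{n}\to 0$ witness weak convergence to $0$, while the constant norm rules out strong convergence.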
If $(X,Y,b)$ is a pairing and $N$ is a proper vector subspace of $Y$ such that $(X,N,b)$ is a dual pair, then $\sigma (X,N,b)$ is strictly coarser than $\sigma (X,Y,b).$[1]
Bounded subsets
A subset $S$ of $X$ is $\sigma (X,Y,b)$-bounded if and only if
$\sup _{s\in S}|b(s,y)|<\infty \quad {\text{ for all }}y\in Y.$
Hausdorffness
If $(X,Y,b)$ is a pairing then the following are equivalent:
1. $X$ distinguishes points of $Y$;
2. The map $y\mapsto b(\,\cdot \,,y)$ defines an injection from $Y$ into the algebraic dual space of $X$;[1]
3. $\sigma (Y,X,b)$ is Hausdorff.[1]
Weak representation theorem
The following theorem is of fundamental importance to duality theory because it completely characterizes the continuous dual space of $(X,\sigma (X,Y,b)).$
Weak representation theorem[1] — Let $(X,Y,b)$ be a pairing over the field $\mathbb {K} .$ Then the continuous dual space of $(X,\sigma (X,Y,b))$ is
$b(\,\cdot \,,Y):=\{b(\,\cdot \,,y):y\in Y\}.$
Furthermore,
1. If $f$ is a continuous linear functional on $(X,\sigma (X,Y,b))$ then there exists some $y\in Y$ such that $f=b(\,\cdot \,,y)$; if such a $y$ exists then it is unique if and only if $X$ distinguishes points of $Y.$
• Note that whether or not $X$ distinguishes points of $Y$ is not dependent on the particular choice of $y.$
2. The continuous dual space of $(X,\sigma (X,Y,b))$ may be identified with the quotient space $Y/X^{\perp },$ where $X^{\perp }:=\{y\in Y:b(x,y)=0{\text{ for all }}x\in X\}.$
• This is true regardless of whether or not $X$ distinguishes points of $Y$ or $Y$ distinguishes points of $X.$
Consequently, the continuous dual space of $(X,\sigma (X,Y,b))$ is
$(X,\sigma (X,Y,b))^{\prime }=b(\,\cdot \,,Y):=\left\{b(\,\cdot \,,y):y\in Y\right\}.$
With respect to the canonical pairing, if $X$ is a TVS whose continuous dual space $X^{\prime }$ separates points on $X$ (i.e. such that $\left(X,\sigma \left(X,X^{\prime }\right)\right)$ is Hausdorff, which implies that $X$ is also necessarily Hausdorff) then the continuous dual space of $\left(X^{\prime },\sigma \left(X^{\prime },X\right)\right)$ is equal to the set of all "evaluation at a point $x$" maps as $x$ ranges over $X$ (i.e. the map that send $x^{\prime }\in X^{\prime }$ to $x^{\prime }(x)$). This is commonly written as
$\left(X^{\prime },\sigma \left(X^{\prime },X\right)\right)^{\prime }=X\qquad {\text{ or }}\qquad \left(X_{\sigma }^{\prime }\right)^{\prime }=X.$
This very important fact is why results for polar topologies on continuous dual spaces, such as the strong dual topology $\beta \left(X^{\prime },X\right)$ on $X^{\prime }$ for example, can also often be applied to the original TVS $X$; for instance, $X$ being identified with $\left(X_{\sigma }^{\prime }\right)^{\prime }$ means that the topology $\beta \left(\left(X_{\sigma }^{\prime }\right)^{\prime },X_{\sigma }^{\prime }\right)$ on $\left(X_{\sigma }^{\prime }\right)^{\prime }$ can instead be thought of as a topology on $X.$ Moreover, if $X^{\prime }$ is endowed with a topology that is finer than $\sigma \left(X^{\prime },X\right)$ then the continuous dual space of $X^{\prime }$ will necessarily contain $\left(X_{\sigma }^{\prime }\right)^{\prime }$ as a subset. So for instance, when $X^{\prime }$ is endowed with the strong dual topology (and so is denoted by $X_{\beta }^{\prime }$) then
$\left(X_{\beta }^{\prime }\right)^{\prime }~\supseteq ~\left(X_{\sigma }^{\prime }\right)^{\prime }~=~X$
which (among other things) allows for $X$ to be endowed with the subspace topology induced on it by, say, the strong dual topology $\beta \left(\left(X_{\beta }^{\prime }\right)^{\prime },X_{\beta }^{\prime }\right)$ (this topology is also called the strong bidual topology and it appears in the theory of reflexive spaces: the Hausdorff locally convex TVS $X$ is said to be semi-reflexive if $\left(X_{\beta }^{\prime }\right)^{\prime }=X$ and it will be called reflexive if in addition the strong bidual topology $\beta \left(\left(X_{\beta }^{\prime }\right)^{\prime },X_{\beta }^{\prime }\right)$ on $X$ is equal to $X$'s original/starting topology).
Orthogonals, quotients, and subspaces
If $(X,Y,b)$ is a pairing then for any subset $S$ of $X$:
• $S^{\perp }=(\operatorname {span} S)^{\perp }=\left(\operatorname {cl} _{\sigma (Y,X,b)}\operatorname {span} S\right)^{\perp }=S^{\perp \perp \perp }$ and this set is $\sigma (Y,X,b)$-closed;[1]
• $S\subseteq S^{\perp \perp }=\left(\operatorname {cl} _{\sigma (X,Y,b)}\operatorname {span} S\right)$;[1]
• Thus if $S$ is a $\sigma (X,Y,b)$-closed vector subspace of $X$ then $S=S^{\perp \perp }.$
• If $\left(S_{i}\right)_{i\in I}$ is a family of $\sigma (X,Y,b)$-closed vector subspaces of $X$ then
$\left(\bigcap _{i\in I}S_{i}\right)^{\perp }=\operatorname {cl} _{\sigma (Y,X,b)}\left(\operatorname {span} \left(\bigcup _{i\in I}S_{i}^{\perp }\right)\right).$
[1]
• If $\left(S_{i}\right)_{i\in I}$ is a family of subsets of $X$ then $\left(\bigcup _{i\in I}S_{i}\right)^{\perp }=\bigcap _{i\in I}S_{i}^{\perp }.$[1]
If $X$ is a normed space then under the canonical duality, $S^{\perp }$ is norm closed in $X^{\prime }$ and $S^{\perp \perp }$ is norm closed in $X.$[1]
Subspaces
Suppose that $M$ is a vector subspace of $X$ and let $(M,Y,b)$ denote the restriction of $(X,Y,b)$ to $M\times Y.$ The weak topology $\sigma (M,Y,b)$ on $M$ is identical to the subspace topology that $M$ inherits from $(X,\sigma (X,Y,b)).$
Also, $\left(M,Y/M^{\perp },b{\big \vert }_{M}\right)$ is a paired space (where $Y/M^{\perp }$ means $Y/\left(M^{\perp }\right)$) where $b{\big \vert }_{M}:M\times Y/M^{\perp }\to \mathbb {K} $ is defined by
$\left(m,y+M^{\perp }\right)\mapsto b(m,y).$
The topology $\sigma \left(M,Y/M^{\perp },b{\big \vert }_{M}\right)$ is equal to the subspace topology that $M$ inherits from $(X,\sigma (X,Y,b)).$[5] Furthermore, if $(X,Y,b)$ is a dual system then so is $\left(M,Y/M^{\perp },b{\big \vert }_{M}\right).$[5]
Quotients
Suppose that $M$ is a vector subspace of $X.$ Then $\left(X/M,M^{\perp },b/M\right)$ is a paired space where $b/M:X/M\times M^{\perp }\to \mathbb {K} $ is defined by
$(x+M,y)\mapsto b(x,y).$
The topology $\sigma \left(X/M,M^{\perp }\right)$ is identical to the usual quotient topology induced by $(X,\sigma (X,Y,b))$ on $X/M.$[5]
Polars and the weak topology
If $X$ is a locally convex space and if $H$ is a subset of the continuous dual space $X^{\prime },$ then $H$ is $\sigma \left(X^{\prime },X\right)$-bounded if and only if $H\subseteq B^{\circ }$ for some barrel $B$ in $X.$[1]
The following results are important for defining polar topologies.
If $(X,Y,b)$ is a pairing and $A\subseteq X,$ then:[1]
1. The polar $A^{\circ }$ of $A$ is a closed subset of $(Y,\sigma (Y,X,b)).$
2. The polars of the following sets are identical: (a) $A$; (b) the convex hull of $A$; (c) the balanced hull of $A$; (d) the $\sigma (X,Y,b)$-closure of $A$; (e) the $\sigma (X,Y,b)$-closure of the convex balanced hull of $A.$
3. The bipolar theorem: The bipolar of $A,$ denoted by $A^{\circ \circ },$ is equal to the $\sigma (X,Y,b)$-closure of the convex balanced hull of $A.$
• The bipolar theorem in particular "is an indispensable tool in working with dualities."[4]
4. $A$ is $\sigma (X,Y,b)$-bounded if and only if $A^{\circ }$ is absorbing in $Y.$
5. If in addition $Y$ distinguishes points of $X$ then $A$ is $\sigma (X,Y,b)$-bounded if and only if it is $\sigma (X,Y,b)$-totally bounded.
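As a worked instance of the bipolar theorem under the dot pairing on $\mathbb {R} ^{2}$: for $A=\{(1,0)\}$ the polar is the slab $A^{\circ }=\{y:|y_{1}|\leq 1\},$ and $\sup _{y\in A^{\circ }}|b(x,y)|$ equals $|x_{1}|$ when $x_{2}=0$ and is infinite otherwise, so $A^{\circ \circ }=[-1,1]\times \{0\},$ exactly the closed convex balanced hull of $A.$ A Python membership test encoding this hand computation (names ours):

```python
def in_bipolar(x):
    """Membership in A°° for A = {(1, 0)} under the dot pairing on R^2.
    A° is the slab |y1| <= 1; the sup over it of |x1*y1 + x2*y2|
    is |x1| if x2 == 0 and +infinity otherwise."""
    return x[1] == 0 and abs(x[0]) <= 1

# A°° is the segment [-1, 1] x {0}: the closed convex balanced hull of A.
print(in_bipolar((0.5, 0)))  # True
print(in_bipolar((0.5, 2)))  # False: the sup is infinite when x2 != 0
print(in_bipolar((1.5, 0)))  # False: 1.5 > 1
```

The segment is strictly larger than $A$ itself, illustrating that taking the bipolar adds exactly the convex balanced closure.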
If $(X,Y,b)$ is a pairing and $\tau $ is a locally convex topology on $X$ that is consistent with duality, then a subset $B$ of $X$ is a barrel in $(X,\tau )$ if and only if $B$ is the polar of some $\sigma (Y,X,b)$-bounded subset of $Y.$[6]
Transpose of a linear map with respect to pairings
See also: Transpose of a linear map, Transpose, and Transpose § Transposes of linear maps and bilinear forms
Let $(X,Y,b)$ and $(W,Z,c)$ be pairings over $\mathbb {K} $ and let $F:X\to W$ be a linear map.
For all $z\in Z,$ let $c(F(\,\cdot \,),z):X\to \mathbb {K} $ be the map defined by $x\mapsto c(F(x),z).$ It is said that $F$'s transpose or adjoint is well-defined if the following conditions are satisfied:
1. $X$ distinguishes points of $Y$ (or equivalently, the map $y\mapsto b(\,\cdot \,,y)$ from $Y$ into the algebraic dual $X^{\#}$ is injective), and
2. $c(F(\,\cdot \,),Z)\subseteq b(\,\cdot \,,Y),$ where $c(F(\,\cdot \,),Z):=\{c(F(\,\cdot \,),z):z\in Z\}$ and $b(\,\cdot \,,Y):=\{b(\,\cdot \,,y):y\in Y\}$.
In this case, for any $z\in Z$ there exists (by condition 2) a unique (by condition 1) $y\in Y$ such that $c(F(\,\cdot \,),z)=b(\,\cdot \,,y),$ where this element of $Y$ will be denoted by ${}^{t}F(z).$ This defines a linear map
${}^{t}F:Z\to Y$
called the transpose or adjoint of $F$ with respect to $(X,Y,b)$ and $(W,Z,c)$ (this should not be confused with the Hermitian adjoint). It is easy to see that the two conditions mentioned above (i.e. for "the transpose is well-defined") are also necessary for ${}^{t}F$ to be well-defined. For every $z\in Z,$ the defining condition for ${}^{t}F(z)$ is
$c(F(\,\cdot \,),z)=b\left(\,\cdot \,,{}^{t}F(z)\right),$
that is,
$c(F(x),z)=b\left(x,{}^{t}F(z)\right)$
for all $x\in X.$
By the conventions mentioned at the beginning of this article, this also defines the transpose of linear maps of the form $Z\to Y,$[note 4] $X\to Z,$[note 5] $W\to Y,$[note 6] $Y\to W,$[note 7] etc. (see footnote for details).
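When both pairings are the dot pairings on $\mathbb {K} ^{n}$ and $\mathbb {K} ^{m},$ the transpose of a linear map represented by a matrix $M$ is represented by the usual matrix transpose, and the defining identity $c(F(x),z)=b\left(x,{}^{t}F(z)\right)$ can be checked numerically. A pure-Python sketch (names ours):

```python
def dot(u, v):
    return sum(a * c for a, c in zip(u, v))

def apply(M, v):
    """Matrix-vector product, M given as a tuple of rows."""
    return tuple(dot(row, v) for row in M)

def transpose(M):
    return tuple(zip(*M))

# F : R^3 -> R^2 with matrix M; b and c are both dot pairings.
M = ((1, 2, 0),
     (0, 1, 3))
tM = transpose(M)  # represents the transpose ^t F : R^2 -> R^3

x, z = (1, -1, 2), (4, 5)
print(dot(apply(M, x), z))   # c(F(x), z)    = 21
print(dot(x, apply(tM, z)))  # b(x, ^t F(z)) = 21: the defining identity
```

Since the dot pairings are non-degenerate and every functional on $\mathbb {K} ^{n}$ has the form $b(\,\cdot \,,y),$ both well-definedness conditions hold automatically here.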
Properties of the transpose
Throughout, $(X,Y,b)$ and $(W,Z,c)$ will be pairings over $\mathbb {K} $ and $F:X\to W$ will be a linear map whose transpose ${}^{t}F:Z\to Y$ is well-defined.
• ${}^{t}F:Z\to Y$ is injective (i.e. $\operatorname {ker} {}^{t}F=\{0\}$) if and only if the range of $F$ is dense in $\left(W,\sigma \left(W,Z,c\right)\right).$[1]
• If in addition to ${}^{t}F$ being well-defined, the transpose of ${}^{t}F$ is also well-defined then ${}^{tt}F=F.$
• Suppose $(U,V,a)$ is a pairing over $\mathbb {K} $ and $E:U\to X$ is a linear map whose transpose ${}^{t}E:Y\to V$ is well-defined. Then the transpose of $F\circ E:U\to W,$ which is ${}^{t}(F\circ E):Z\to V,$ is well-defined and ${}^{t}(F\circ E)={}^{t}E\circ {}^{t}F.$
• If $F:X\to W$ is a vector space isomorphism then ${}^{t}F:Z\to Y$ is bijective, the transpose of $F^{-1}:W\to X,$ which is ${}^{t}\left(F^{-1}\right):Y\to Z,$ is well-defined, and ${}^{t}\left(F^{-1}\right)=\left({}^{t}F\right)^{-1}$[1]
• Let $S\subseteq X$ and let $S^{\circ }$ denote the absolute polar of $S.$ Then:[1]
1. $[F(S)]^{\circ }=\left({}^{t}F\right)^{-1}\left(S^{\circ }\right)$;
2. if $F(S)\subseteq T$ for some $T\subseteq W,$ then ${}^{t}F\left(T^{\circ }\right)\subseteq S^{\circ }$;
3. if $T\subseteq W$ is such that ${}^{t}F\left(T^{\circ }\right)\subseteq S^{\circ },$ then $F(S)\subseteq T^{\circ \circ }$;
4. if $T\subseteq W$ and $S\subseteq X$ are weakly closed disks then ${}^{t}F\left(T^{\circ }\right)\subseteq S^{\circ }$ if and only if $F(S)\subseteq T$;
5. $\operatorname {ker} {}^{t}F=[F(X)]^{\perp }.$
These results hold when the real polar is used in place of the absolute polar.
If $X$ and $Y$ are normed spaces under their canonical dualities and if $F:X\to Y$ is a continuous linear map, then $\|F\|=\left\|{}^{t}F\right\|.$[1]
Weak continuity
A linear map $F:X\to W$ is weakly continuous (with respect to $(X,Y,b)$ and $(W,Z,c)$) if $F:(X,\sigma (X,Y,b))\to (W,\sigma (W,Z,c))$ is continuous.
The following result shows that the existence of the transpose map is intimately tied to the weak topology.
Proposition — Assume that $X$ distinguishes points of $Y$ and $F:X\to W$ is a linear map. Then the following are equivalent:
1. $F$ is weakly continuous (that is, $F:(X,\sigma (X,Y,b))\to (W,\sigma (W,Z,c))$ is continuous);
2. $c(F(\,\cdot \,),Z)\subseteq b(\,\cdot \,,Y)$;
3. the transpose of $F$ is well-defined.
If $F$ is weakly continuous then
• ${}^{t}F:Z\to Y$ is weakly continuous, meaning that ${}^{t}F:(Z,\sigma (Z,W,c))\to (Y,\sigma (Y,X,b))$ is continuous;
• the transpose of ${}^{t}F$ is well-defined if and only if $Z$ distinguishes points of $W,$ in which case ${}^{tt}F=F.$
Weak topology and the canonical duality
Suppose that $X$ is a vector space and that $X^{\#}$ is its algebraic dual. Then every $\sigma \left(X,X^{\#}\right)$-bounded subset of $X$ is contained in a finite dimensional vector subspace and every vector subspace of $X$ is $\sigma \left(X,X^{\#}\right)$-closed.[1]
Weak completeness
If $(X,\sigma (X,Y,b))$ is a complete topological vector space say that $X$ is $\sigma (X,Y,b)$-complete or (if no ambiguity can arise) weakly-complete. There exist Banach spaces that are not weakly-complete (despite being complete in their norm topology).[1]
If $X$ is a vector space then under the canonical duality, $\left(X^{\#},\sigma \left(X^{\#},X\right)\right)$ is complete.[1] Conversely, if $Z$ is a Hausdorff locally convex TVS with continuous dual space $Z^{\prime },$ then $\left(Z,\sigma \left(Z,Z^{\prime }\right)\right)$ is complete if and only if $Z=\left(Z^{\prime }\right)^{\#}$; that is, if and only if the map $Z\to \left(Z^{\prime }\right)^{\#}$ defined by sending $z\in Z$ to the evaluation map at $z$ (i.e. $z^{\prime }\mapsto z^{\prime }(z)$) is a bijection.[1]
In particular, with respect to the canonical duality, if $Y$ is a vector subspace of $X^{\#}$ such that $Y$ separates points of $X,$ then $(Y,\sigma (Y,X))$ is complete if and only if $Y=X^{\#}.$ Said differently, there does not exist a proper vector subspace $Y\neq X^{\#}$ of $X^{\#}$ such that $(X,\sigma (X,Y))$ is Hausdorff and $Y$ is complete in the weak-* topology (i.e. the topology of pointwise convergence). Consequently, when the continuous dual space $X^{\prime }$ of a Hausdorff locally convex TVS $X$ is endowed with the weak-* topology, then $X_{\sigma }^{\prime }$ is complete if and only if $X^{\prime }=X^{\#}$ (that is, if and only if every linear functional on $X$ is continuous).
Identification of Y with a subspace of the algebraic dual
If $X$ distinguishes points of $Y$ and if $Z$ denotes the range of the injection $y\mapsto b(\,\cdot \,,y)$ then $Z$ is a vector subspace of the algebraic dual space of $X$ and the pairing $(X,Y,b)$ becomes canonically identified with the canonical pairing $\langle X,Z\rangle $ (where $\left\langle x,x^{\prime }\right\rangle :=x^{\prime }(x)$ is the natural evaluation map). In particular, in this situation it will be assumed without loss of generality that $Y$ is a vector subspace of $X$'s algebraic dual and $b$ is the evaluation map.
Convention: Often, whenever $y\mapsto b(\,\cdot \,,y)$ is injective (especially when $(X,Y,b)$ forms a dual pair) then it is common practice to assume without loss of generality that $Y$ is a vector subspace of the algebraic dual space of $X,$ that $b$ is the natural evaluation map, and also denote $Y$ by $X^{\prime }.$
In a completely analogous manner, if $Y$ distinguishes points of $X$ then it is possible for $X$ to be identified as a vector subspace of $Y$'s algebraic dual space.[2]
Algebraic adjoint
In the special case where the dualities are the canonical dualities $\left\langle X,X^{\#}\right\rangle $ and $\left\langle W,W^{\#}\right\rangle ,$ the transpose of a linear map $F:X\to W$ is always well-defined. This transpose is called the algebraic adjoint of $F$ and it will be denoted by $F^{\#}$; that is, $F^{\#}={}^{t}F:W^{\#}\to X^{\#}.$ In this case, for all $w^{\prime }\in W^{\#},$ $F^{\#}\left(w^{\prime }\right)=w^{\prime }\circ F$[1][7] where the defining condition for $F^{\#}\left(w^{\prime }\right)$ is:
$\left\langle x,F^{\#}\left(w^{\prime }\right)\right\rangle =\left\langle F(x),w^{\prime }\right\rangle \quad {\text{ for all }}x\in X,$
or equivalently, $F^{\#}\left(w^{\prime }\right)(x)=w^{\prime }(F(x))\quad {\text{ for all }}x\in X.$
Examples
If $X=Y=\mathbb {K} ^{n}$ for some integer $n,$ ${\mathcal {E}}=\left\{e_{1},\ldots ,e_{n}\right\}$ is a basis for $X$ with dual basis ${\mathcal {E}}^{\prime }=\left\{e_{1}^{\prime },\ldots ,e_{n}^{\prime }\right\},$ $F:\mathbb {K} ^{n}\to \mathbb {K} ^{n}$ is a linear operator, and the matrix representation of $F$ with respect to ${\mathcal {E}}$ is $M:=\left(f_{i,j}\right),$ then the transpose of $M$ is the matrix representation with respect to ${\mathcal {E}}^{\prime }$ of $F^{\#}.$
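This matrix statement can be checked by composing functionals. For instance, take $F$ on $\mathbb {K} ^{2}$ with matrix rows $(1,2)$ and $(3,4)$: computing $F^{\#}\left(e_{1}^{\prime }\right)=e_{1}^{\prime }\circ F$ and reading off its coordinates in the dual basis recovers the first column of the transpose of $M.$ A Python sketch (names ours):

```python
def F(x):
    """Linear operator on K^2 with matrix rows (1, 2) and (3, 4)."""
    return (x[0] + 2 * x[1], 3 * x[0] + 4 * x[1])

def algebraic_adjoint(w):
    """F#(w') = w' o F: precomposition with F."""
    return lambda x: w(F(x))

e1p = lambda x: x[0]        # dual-basis functional reading the 1st coordinate
w = algebraic_adjoint(e1p)  # F#(e1')

# Coordinates of F#(e1') in the dual basis: evaluate on the basis of K^2.
print(w((1, 0)), w((0, 1)))  # 1 2: the first column of M^T (= first row of M)
```

Evaluating the adjoint functional on the basis vectors is exactly how one reads its coordinates against the dual basis.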
Weak continuity and openness
Suppose that $\left\langle X,Y\right\rangle $ and $\langle W,Z\rangle $ are canonical pairings (so $Y\subseteq X^{\#}$ and $Z\subseteq W^{\#}$) that are dual systems and let $F:X\to W$ be a linear map. Then $F:X\to W$ is weakly continuous if and only if it satisfies any of the following equivalent conditions:[1]
1. $F:(X,\sigma (X,Y))\to (W,\sigma (W,Z))$ is continuous;
2. $F^{\#}(Z)\subseteq Y$
3. the transpose of F, ${}^{t}F:Z\to Y,$ with respect to $\left\langle X,Y\right\rangle $ and $\langle W,Z\rangle $ is well-defined.
If $F$ is weakly continuous then ${}^{t}F:(Z,\sigma (Z,W))\to (Y,\sigma (Y,X))$ will be continuous and furthermore, ${}^{tt}F=F.$[7]
A map $g:A\to B$ between topological spaces is relatively open if $g:A\to \operatorname {Im} g$ is an open mapping, where $\operatorname {Im} g$ is the range of $g.$[1]
Suppose that $\langle X,Y\rangle $ and $\langle W,Z\rangle $ are dual systems and $F:X\to W$ is a weakly continuous linear map. Then the following are equivalent:[1]
1. $F:(X,\sigma (X,Y))\to (W,\sigma (W,Z))$ is relatively open;
2. The range of ${}^{t}F$ is $\sigma (Y,X)$-closed in $Y$;
3. $\operatorname {Im} {}^{t}F=(\operatorname {ker} F)^{\perp }$
Furthermore,
• $F:X\to W$ is injective (resp. bijective) if and only if ${}^{t}F$ is surjective (resp. bijective);
• $F:X\to W$ is surjective if and only if ${}^{t}F:(Z,\sigma (Z,W))\to (Y,\sigma (Y,X))$ is relatively open and injective.
Transpose of a map between TVSs
The transpose of a map between two TVSs is defined if and only if $F$ is weakly continuous.
If $F:X\to Y$ is a linear map between two Hausdorff locally convex topological vector spaces then:[1]
• If $F$ is continuous then it is weakly continuous and ${}^{t}F$ is both Mackey continuous and strongly continuous.
• If $F$ is weakly continuous then it is both Mackey continuous and strongly continuous (defined below).
• If $F$ is weakly continuous then it is continuous if and only if ${}^{t}F:Y^{\prime }\to X^{\prime }$ maps equicontinuous subsets of $Y^{\prime }$ to equicontinuous subsets of $X^{\prime }.$
• If $X$ and $Y$ are normed spaces then $F$ is continuous if and only if it is weakly continuous, in which case $\|F\|=\left\|{}^{t}F\right\|.$
• If $F$ is continuous then $F:X\to Y$ is relatively open if and only if $F$ is weakly relatively open (i.e. $F:\left(X,\sigma \left(X,X^{\prime }\right)\right)\to \left(Y,\sigma \left(Y,Y^{\prime }\right)\right)$ is relatively open) and every equicontinuous subset of $\operatorname {Im} {}^{t}F={}^{t}F\left(Y^{\prime }\right)$ is the image of some equicontinuous subset of $Y^{\prime }.$
• If $F$ is a continuous injection then $F:X\to Y$ is a TVS-embedding (or equivalently, a topological embedding) if and only if every equicontinuous subset of $X^{\prime }$ is the image of some equicontinuous subset of $Y^{\prime }.$
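For the normed-space statement $\|F\|=\left\|{}^{t}F\right\|,$ here is a quick numerical sanity check in finite dimensions (a sketch assuming Python with NumPy; identifying $F$ with a matrix and using Euclidean norms, for which the operator norm is the largest singular value, shared by a matrix and its transpose):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((3, 5))            # a linear map between finite-dimensional normed spaces

op_norm = np.linalg.norm(F, 2)             # operator norm of F for Euclidean norms
op_norm_transpose = np.linalg.norm(F.T, 2) # operator norm of the transpose
assert np.isclose(op_norm, op_norm_transpose)
```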
Metrizability and separability
Let $X$ be a locally convex space with continuous dual space $X^{\prime }$ and let $K\subseteq X^{\prime }.$[1]
1. If $K$ is equicontinuous or $\sigma \left(X^{\prime },X\right)$-compact, and if $D\subseteq X^{\prime }$ is such that $\operatorname {span} D$is dense in $X,$ then the subspace topology that $K$ inherits from $\left(X^{\prime },\sigma \left(X^{\prime },D\right)\right)$ is identical to the subspace topology that $K$ inherits from $\left(X^{\prime },\sigma \left(X^{\prime },X\right)\right).$
2. If $X$ is separable and $K$ is equicontinuous then $K,$ when endowed with the subspace topology induced by $\left(X^{\prime },\sigma \left(X^{\prime },X\right)\right),$ is metrizable.
3. If $X$ is separable and metrizable, then $\left(X^{\prime },\sigma \left(X^{\prime },X\right)\right)$ is separable.
4. If $X$ is a normed space then $X$ is separable if and only if the closed unit ball of the continuous dual space of $X$ is metrizable when given the subspace topology induced by $\left(X^{\prime },\sigma \left(X^{\prime },X\right)\right).$
5. If $X$ is a normed space whose continuous dual space is separable (when given the usual norm topology), then $X$ is separable.
Polar topologies and topologies compatible with pairing
Starting with only the weak topology, the use of polar sets produces a range of locally convex topologies. Such topologies are called polar topologies. The weak topology is the weakest topology of this range.
Throughout, $(X,Y,b)$ will be a pairing over $\mathbb {K} $ and ${\mathcal {G}}$ will be a non-empty collection of $\sigma (X,Y,b)$-bounded subsets of $X.$
Polar topologies
Given a collection ${\mathcal {G}}$ of subsets of $X$, the polar topology on $Y$ determined by ${\mathcal {G}}$ (and $b$) or the ${\mathcal {G}}$-topology on $Y$ is the unique topological vector space (TVS) topology on $Y$ for which
$\left\{rG^{\circ }:G\in {\mathcal {G}},r>0\right\}$
forms a subbasis of neighborhoods at the origin.[1] When $Y$ is endowed with this ${\mathcal {G}}$-topology then it is denoted by $Y_{\mathcal {G}}.$ Every polar topology is necessarily locally convex.[1] When ${\mathcal {G}}$ is a directed set with respect to subset inclusion (i.e. if for all $G,H\in {\mathcal {G}}$ there exists some $K\in {\mathcal {G}}$ such that $G\cup H\subseteq K$) then this neighborhood subbasis at 0 actually forms a neighborhood basis at 0.[1]
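Concretely, for the canonical pairing $b(x,y)=x\cdot y$ on $\mathbb {R} ^{n},$ membership in a basic neighborhood $rG^{\circ }$ can be tested directly (a minimal sketch assuming Python with NumPy; the set $G$ and the test vectors are illustrative choices):

```python
import numpy as np

def in_polar(y, G, r=1.0):
    """Check whether y lies in r * G-polar for the pairing b(x, y) = x . y on R^n."""
    return all(abs(np.dot(x, y)) <= r for x in G)

G = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]   # a finite (hence weakly bounded) subset of R^2
assert in_polar(np.array([0.5, 0.25]), G)          # |0.5| <= 1 and |0.5| <= 1
assert not in_polar(np.array([2.0, 0.0]), G)       # |2.0| > 1
```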
The following lists some of the more important polar topologies; each is the topology on $Y$ of uniform convergence on the indicated collection ${\mathcal {G}}\subseteq {\mathcal {P}}(X)$ of subsets of $X.$
Notation: If $\Delta (Y,X,b)$ denotes a polar topology on $Y$ then $Y$ endowed with this topology will be denoted by $Y_{\Delta (Y,X,b)},$ $Y_{\Delta (Y,X)}$ or simply $Y_{\Delta }$ (e.g. for $\sigma (Y,X,b)$ we'd have $\Delta =\sigma ,$ so that $Y_{\sigma (Y,X,b)},$ $Y_{\sigma (Y,X)}$ and $Y_{\sigma }$ all denote $Y$ endowed with $\sigma (Y,X,b)$).
• Uniform convergence on finite subsets of $X$ (or on $\sigma (X,Y,b)$-closed disked hulls of finite subsets of $X$): notation $\sigma (Y,X,b)$ or $s(Y,X,b)$; the topology of pointwise/simple convergence, also called the weak/weak-* topology.
• Uniform convergence on $\sigma (X,Y,b)$-compact disks: notation $\tau (Y,X,b)$; the Mackey topology.
• Uniform convergence on $\sigma (X,Y,b)$-compact convex subsets: notation $\gamma (Y,X,b)$; the topology of compact convex convergence.
• Uniform convergence on $\sigma (X,Y,b)$-compact subsets (or on balanced $\sigma (X,Y,b)$-compact subsets): notation $c(Y,X,b)$; the topology of compact convergence.
• Uniform convergence on $\sigma (X,Y,b)$-bounded subsets: notation $b(Y,X,b)$ or $\beta (Y,X,b)$; the topology of bounded convergence, also called the strong topology; this is the strongest polar topology.
Definitions involving polar topologies
Continuity
A linear map $F:X\to W$ is Mackey continuous (with respect to $(X,Y,b)$ and $(W,Z,c)$) if $F:(X,\tau (X,Y,b))\to (W,\tau (W,Z,c))$ is continuous.[1]
A linear map $F:X\to W$ is strongly continuous (with respect to $(X,Y,b)$ and $(W,Z,c)$) if $F:(X,\beta (X,Y,b))\to (W,\beta (W,Z,c))$ is continuous.[1]
Bounded subsets
A subset of $X$ is weakly bounded (resp. Mackey bounded, strongly bounded) if it is bounded in $(X,\sigma (X,Y,b))$ (resp. bounded in $(X,\tau (X,Y,b)),$ bounded in $(X,\beta (X,Y,b))$).
Topologies compatible with a pair
If $(X,Y,b)$ is a pairing over $\mathbb {K} $ and ${\mathcal {T}}$ is a vector topology on $X$ then ${\mathcal {T}}$ is said to be a topology of the pairing and to be compatible (or consistent) with the pairing $(X,Y,b)$ if it is locally convex and if the continuous dual space of $\left(X,{\mathcal {T}}\right)$ is $b(\,\cdot \,,Y).$[note 8] If $X$ distinguishes points of $Y$ then by identifying $Y$ as a vector subspace of $X$'s algebraic dual, the defining condition becomes: $\left(X,{\mathcal {T}}\right)^{\prime }=Y.$[1] Some authors (e.g. [Trèves 2006] and [Schaefer 1999]) require that a topology of a pair also be Hausdorff,[2][8] which it would have to be if $Y$ distinguishes the points of $X$ (which these authors assume).
The weak topology $\sigma (X,Y,b)$ is compatible with the pairing $(X,Y,b)$ (as was shown in the Weak representation theorem) and it is in fact the weakest such topology. There is a strongest topology compatible with this pairing and that is the Mackey topology. If $N$ is a normed space that is not reflexive then the usual norm topology on its continuous dual space is not compatible with the duality $\left(N^{\prime },N\right).$[1]
Mackey–Arens theorem
Main articles: Mackey–Arens theorem, Mackey topology, and Mackey space
The following is one of the most important theorems in duality theory.
Mackey–Arens theorem I[1] — Let $(X,Y,b)$ be a pairing such that $X$ distinguishes the points of $Y$ and let ${\mathcal {T}}$ be a locally convex topology on $X$ (not necessarily Hausdorff). Then ${\mathcal {T}}$ is compatible with the pairing $(X,Y,b)$ if and only if ${\mathcal {T}}$ is a polar topology determined by some collection ${\mathcal {G}}$ of $\sigma (Y,X,b)$-compact disks that cover[note 9] $Y.$
It follows that the Mackey topology $\tau (X,Y,b),$ which recall is the polar topology generated by all $\sigma (Y,X,b)$-compact disks in $Y,$ is the strongest locally convex topology on $X$ that is compatible with the pairing $(X,Y,b).$ A locally convex space whose given topology is identical to the Mackey topology is called a Mackey space. The following consequence of the above Mackey–Arens theorem is also called the Mackey–Arens theorem.
Mackey–Arens theorem II[1] — Let $(X,Y,b)$ be a pairing such that $X$ distinguishes the points of $Y$ and let ${\mathcal {T}}$ be a locally convex topology on $X.$ Then ${\mathcal {T}}$ is compatible with the pairing if and only if $\sigma (X,Y,b)\subseteq {\mathcal {T}}\subseteq \tau (X,Y,b).$
Mackey's theorem, barrels, and closed convex sets
If $X$ is a TVS (over $\mathbb {R} $ or $\mathbb {C} $) then a half-space is a set of the form $\{x\in X:f(x)\leq r\}$ for some real $r$ and some continuous real linear functional $f$ on $X.$
Theorem — If $X$ is a locally convex space (over $\mathbb {R} $ or $\mathbb {C} $) and if $C$ is a non-empty closed and convex subset of $X,$ then $C$ is equal to the intersection of all closed half spaces containing it.[9]
The above theorem implies that the closed and convex subsets of a locally convex space depend entirely on the continuous dual space. Consequently, the closed and convex subsets are the same in any topology compatible with duality; that is, if ${\mathcal {T}}$ and ${\mathcal {L}}$ are any locally convex topologies on $X$ with the same continuous dual spaces, then a convex subset of $X$ is closed in the ${\mathcal {T}}$ topology if and only if it is closed in the ${\mathcal {L}}$ topology. This implies that the ${\mathcal {T}}$-closure of any convex subset of $X$ is equal to its ${\mathcal {L}}$-closure and that for any ${\mathcal {T}}$-closed disk $A$ in $X,$ $A=A^{\circ \circ }.$[1] In particular, if $B$ is a subset of $X$ then $B$ is a barrel in $(X,{\mathcal {T}})$ if and only if it is a barrel in $(X,{\mathcal {L}}).$[1]
The following theorem shows that barrels (i.e. closed absorbing disks) are exactly the polars of weakly bounded subsets.
Theorem[1] — Let $(X,Y,b)$ be a pairing such that $X$ distinguishes the points of $Y$ and let ${\mathcal {T}}$ be a topology of the pair. Then a subset of $X$ is a barrel in $X$ if and only if it is equal to the polar of some $\sigma (Y,X,b)$-bounded subset of $Y.$
If $X$ is a topological vector space then:[1][10]
1. A closed absorbing and balanced subset $B$ of $X$ absorbs each convex compact subset of $X$ (i.e. there exists a real $r>0$ such that $rB$ contains that set).
2. If $X$ is Hausdorff and locally convex then every barrel in $X$ absorbs every convex bounded complete subset of $X.$
All of this leads to Mackey's theorem, which is one of the central theorems in the theory of dual systems. In short, it states that the bounded subsets are the same for any two Hausdorff locally convex topologies that are compatible with the same duality.
Mackey's theorem[10][1] — Suppose that $(X,{\mathcal {T}})$ is a Hausdorff locally convex space with continuous dual space $X^{\prime }$ and consider the canonical duality $\left\langle X,X^{\prime }\right\rangle .$ If ${\mathcal {L}}$ is any topology on $X$ that is compatible with the duality $\left\langle X,X^{\prime }\right\rangle $ on $X$ then the bounded subsets of $(X,{\mathcal {T}})$ are the same as the bounded subsets of $(X,{\mathcal {L}}).$
Examples
Space of finite sequences
Let $X$ denote the space of all sequences of scalars $r_{\bullet }=\left(r_{i}\right)_{i=1}^{\infty }$ such that $r_{i}=0$ for all sufficiently large $i.$ Let $Y=X$ and define a bilinear map $b:X\times X\to \mathbb {K} $ by
$b\left(r_{\bullet },s_{\bullet }\right):=\sum _{i=1}^{\infty }r_{i}s_{i}.$
Then $\sigma (X,X,b)=\tau (X,X,b).$[1] Moreover, a subset $T\subseteq X$ is $\sigma (X,X,b)$-bounded (resp. $\beta (X,X,b)$-bounded) if and only if there exists a sequence $m_{\bullet }=\left(m_{i}\right)_{i=1}^{\infty }$ of positive real numbers such that $\left|t_{i}\right|\leq m_{i}$ for all $t_{\bullet }=\left(t_{i}\right)_{i=1}^{\infty }\in T$ and all indices $i$ (resp. and $m_{\bullet }\in X$).[1]
It follows that there are weakly bounded (that is, $\sigma (X,X,b)$-bounded) subsets of $X$ that are not strongly bounded (that is, not $\beta (X,X,b)$-bounded).
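The boundedness criterion above can be made concrete with a small sketch (assuming Python; finitely supported sequences are modeled as `{index: value}` dicts, and the set $T=\{n\,e_{n}:n\geq 1\}$ is the standard example of a set that is weakly bounded, being dominated by $m_{i}=i,$ but not strongly bounded, since $\left(m_{i}\right)\notin X$):

```python
def b(r, s):
    """Pairing b(r, s) = sum_i r_i s_i on finitely supported sequences given as dicts."""
    return sum(v * s.get(i, 0.0) for i, v in r.items())

# T = { n * e_n : n >= 1 } is weakly bounded: for each fixed s in X,
# sup over T of |b(t, s)| is finite because s has finite support.
s = {1: 2.0, 3: -5.0, 7: 0.5}                       # a fixed finitely supported sequence
values = [abs(b({n: float(n)}, s)) for n in range(1, 1000)]
assert max(values) == 15.0                          # attained at n = 3: |3 * (-5)| = 15
# T is not strongly bounded: the dominating sequence m_i = i does not lie in X.
```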
See also
• Biorthogonal system
• Dual space – In mathematics, vector space of linear forms
• Dual topology
• Duality (mathematics) – General concept and operation in mathematics
• Inner product – Generalization of the dot product; used to define Hilbert spaces
• L-semi-inner product – Generalization of inner products that applies to all normed spaces
• Pairing – Bilinear map of modules over a ring
• Polar set – Subset of all points that is bounded by some given point of a dual (in a dual pairing)
• Polar topology – Dual space topology of uniform convergence on some sub-collection of bounded subsets
• Reductive dual pair
• Strong dual space – Continuous dual space endowed with the topology of uniform convergence on bounded sets
• Strong topology (polar topology) – Continuous dual space endowed with the topology of uniform convergence on bounded sets
• Topologies on spaces of linear maps
• Weak topology – Mathematical term
Notes
1. A subset $S$ of $X$ is total if for all $y\in Y$,
$b(s,y)=0\quad {\text{ for all }}s\in S$
implies $y=0$.
2. That $b$ is linear in its first coordinate is obvious. Suppose $c$ is a scalar. Then $b(x,c\perp y)=b\left(x,{\overline {c}}y\right)=\langle x,{\overline {c}}y\rangle =c\langle x,y\rangle =cb(x,y),$ which shows that $b$ is linear in its second coordinate.
3. The weak topology on $Y$ is the weakest TVS topology on $Y$ making all maps $b(x,\,\cdot \,):Y\to \mathbb {K} $ continuous, as $x$ ranges over $R.$ The dual notation of $(Y,\sigma (Y,R,b)),$ $(Y,\sigma (Y,R)),$ or simply $(Y,\sigma )$ may also be used to denote $Y$ endowed with the weak topology $\sigma (Y,R,b).$ If $R$ is not clear from context then it should be assumed to be all of $X,$ in which case it is simply called the weak topology on $Y$ (induced by $X$).
4. If $G:Z\to Y$ is a linear map then $G$'s transpose, ${}^{t}G:X\to W,$ is well-defined if and only if $Z$ distinguishes points of $W$ and $b(X,G(\,\cdot \,))\subseteq c(W,\,\cdot \,).$ In this case, for each $x\in X,$ the defining condition for ${}^{t}G(x)$ is: $b(x,G(\,\cdot \,))=c\left({}^{t}G(x),\,\cdot \,\right).$
5. If $H:X\to Z$ is a linear map then $H$'s transpose, ${}^{t}H:W\to Y,$ is well-defined if and only if $X$ distinguishes points of $Y$ and $c(W,H(\,\cdot \,))\subseteq b(\,\cdot \,,Y).$ In this case, for each $w\in W,$ the defining condition for ${}^{t}H(w)$ is: $c(w,H(\,\cdot \,))=b\left(\,\cdot \,,{}^{t}H(w)\right).$
6. If $H:W\to Y$ is a linear map then $H$'s transpose, ${}^{t}H:X\to Z,$ is well-defined if and only if $W$ distinguishes points of $Z$ and $b(X,H(\,\cdot \,))\subseteq c(\,\cdot \,,Z).$ In this case, for each $x\in X,$ the defining condition for ${}^{t}H(x)$ is: $b(x,H(\,\cdot \,))=c\left(\,\cdot \,,{}^{t}H(x)\right).$
7. If $H:Y\to W$ is a linear map then $H$'s transpose, ${}^{t}H:Z\to X,$ is well-defined if and only if $Y$ distinguishes points of $X$ and $c(H(\,\cdot \,),Z)\subseteq b(X,\,\cdot \,).$ In this case, for each $z\in Z,$ the defining condition for ${}^{t}H(z)$ is: $c(H(\,\cdot \,),z)=b\left({}^{t}H(z),\,\cdot \,\right).$
8. Of course, there is an analogous definition for topologies on $Y$ to be "compatible with a pairing" but this article will only deal with topologies on $X.$
9. Recall that a collection of subsets of a set $S$ is said to cover $S$ if every point of $S$ is contained in some set belonging to the collection.
References
1. Narici & Beckenstein 2011, pp. 225–273.
2. Schaefer & Wolff 1999, pp. 122–128.
3. Trèves 2006, p. 195.
4. Schaefer & Wolff 1999, pp. 123–128.
5. Narici & Beckenstein 2011, pp. 260–264.
6. Narici & Beckenstein 2011, pp. 251–253.
7. Schaefer & Wolff 1999, pp. 128–130.
8. Trèves 2006, pp. 368–377.
9. Narici & Beckenstein 2011, p. 200.
10. Trèves 2006, pp. 371–372.
Bibliography
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Michael Reed and Barry Simon, Methods of Modern Mathematical Physics, Vol. 1, Functional Analysis, Section III.3. Academic Press, San Diego, 1980. ISBN 0-12-585050-6.
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Schmitt, Lothar M (1992). "An Equivariant Version of the Hahn–Banach Theorem". Houston J. Of Math. 18: 429–447.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
External links
• Duality Theory
Weakly contractible
In mathematics, a topological space is said to be weakly contractible if all of its homotopy groups are trivial.
Property
It follows from Whitehead's theorem that if a CW-complex is weakly contractible then it is contractible.
Example
Define $S^{\infty }$ to be the inductive limit of the spheres $S^{n},n\geq 1$. Then this space is weakly contractible. Since $S^{\infty }$ is moreover a CW-complex, it is also contractible. See Contractibility of unit sphere in Hilbert space for more.
The long line is an example of a space that is weakly contractible but not contractible. This does not contradict Whitehead's theorem, since the long line does not have the homotopy type of a CW-complex. Another prominent example of this phenomenon is the Warsaw circle.
References
• "Homotopy type", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Limit point compact
In mathematics, a topological space $X$ is said to be limit point compact[1][2] or weakly countably compact[3] if every infinite subset of $X$ has a limit point in $X.$ This property generalizes a property of compact spaces. In a metric space, limit point compactness, compactness, and sequential compactness are all equivalent. For general topological spaces, however, these three notions of compactness are not equivalent.
Properties and examples
• In a topological space, subsets without limit point are exactly those that are closed and discrete in the subspace topology. So a space is limit point compact if and only if all its closed discrete subsets are finite.
• A space $X$ is not limit point compact if and only if it has an infinite closed discrete subspace. Since any subset of a closed discrete subset of $X$ is itself closed in $X$ and discrete, this is equivalent to requiring that $X$ have a countably infinite closed discrete subspace.
• Some examples of spaces that are not limit point compact: (1) The set $\mathbb {R} $ of all real numbers with its usual topology, since the integers are an infinite set but do not have a limit point in $\mathbb {R} $; (2) an infinite set with the discrete topology; (3) the countable complement topology on an uncountable set.
• Every countably compact space (and hence every compact space) is limit point compact.
• For T1 spaces, limit point compactness is equivalent to countable compactness.
• An example of limit point compact space that is not countably compact is obtained by "doubling the integers", namely, taking the product $X=\mathbb {Z} \times Y$ where $\mathbb {Z} $ is the set of all integers with the discrete topology and $Y=\{0,1\}$ has the indiscrete topology. The space $X$ is homeomorphic to the odd-even topology.[4] This space is not T0. It is limit point compact because every nonempty subset has a limit point.
• An example of a T0 space that is limit point compact and not countably compact is $X=\mathbb {R} ,$ the set of all real numbers, with the right order topology, i.e., the topology generated by all intervals $(x,\infty ).$[5] The space is limit point compact because given any point $a\in X,$ every $x<a$ is a limit point of $\{a\}.$
• For metrizable spaces, compactness, countable compactness, limit point compactness, and sequential compactness are all equivalent.
• Closed subspaces of a limit point compact space are limit point compact.
• The continuous image of a limit point compact space need not be limit point compact. For example, if $X=\mathbb {Z} \times Y$ with $\mathbb {Z} $ discrete and $Y$ indiscrete as in the example above, the map $f=\pi _{\mathbb {Z} }$ given by projection onto the first coordinate is continuous, but $f(X)=\mathbb {Z} $ is not limit point compact.
• A limit point compact space need not be pseudocompact. An example is given by the same $X=\mathbb {Z} \times Y$ with $Y$ indiscrete two-point space and the map $f=\pi _{\mathbb {Z} },$ whose image is not bounded in $\mathbb {R} .$
• A pseudocompact space need not be limit point compact. An example is given by an uncountable set with the cocountable topology.
• Every normal pseudocompact space is limit point compact.[6]
Proof: Suppose $X$ is a normal space that is not limit point compact. There exists a countably infinite closed discrete subset $A=\{x_{1},x_{2},x_{3},\ldots \}$ of $X.$ By the Tietze extension theorem the continuous function $f$ on $A$ defined by $f(x_{n})=n$ can be extended to an (unbounded) real-valued continuous function on all of $X.$ So $X$ is not pseudocompact.
• Limit point compact spaces have countable extent.
• If $(X,\tau )$ and $(X,\sigma )$ are topological spaces with $\sigma $ finer than $\tau $ and $(X,\sigma )$ is limit point compact, then so is $(X,\tau ).$
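The "doubled integers" example above can be probed with a finite computational model (a sketch assuming Python; the truncation of $\mathbb {Z} $ to $\{0,\ldots ,4\}$ and the helper names are illustrative): in the product of a discrete factor with the indiscrete two-point space, every nonempty subset has a limit point, namely the "twin" of any of its points.

```python
from itertools import product

# Finite model of the "doubled integers": K x {0, 1} with K discrete, {0, 1} indiscrete.
K = range(5)
points = set(product(K, (0, 1)))

def basic_open(k):
    """The minimal open set containing (k, 0) and (k, 1)."""
    return {(k, 0), (k, 1)}

def is_limit_point(p, A):
    # Every open set containing p contains the minimal basic open set around p,
    # so it suffices to test that one set.
    return bool((basic_open(p[0]) - {p}) & A)

# Every nonempty subset has a limit point: the "twin" of any of its points.
for A in [{(2, 1)}, {(0, 0), (3, 1)}, points]:
    assert any(is_limit_point(p, A) for p in points)
```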
See also
• Compact space – Type of mathematical space
• Countably compact space – Topological space in which from every countable open cover of the space, a finite cover can be extracted
• Sequentially compact space – Topological space where every sequence has a convergent subsequence.
Notes
1. The terminology "limit point compact" appears in a topology textbook by James Munkres where he says that historically such spaces had been called just "compact" and what we now call compact spaces were called "bicompact". There was then a shift in terminology with bicompact spaces being called just "compact" and no generally accepted name for the first concept, some calling it "Fréchet compactness", others the "Bolzano-Weierstrass property". He says he invented the term "limit point compact" to have something at least descriptive of the property. Munkres, p. 178-179.
2. Steen & Seebach, p. 19
3. Steen & Seebach, p. 19
4. Steen & Seebach, Example 6
5. Steen & Seebach, Example 50
6. Steen & Seebach, p. 20. What they call "normal" is T4 in wikipedia's terminology, but it's essentially the same proof as here.
References
• Munkres, James R. (2000). Topology (Second ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260.
• Steen, Lynn Arthur; Seebach, J. Arthur (1995) [First published 1978 by Springer-Verlag, New York]. Counterexamples in topology. New York: Dover Publications. ISBN 0-486-68735-X. OCLC 32311847.
• This article incorporates material from Weakly countably compact on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Weak derivative
In mathematics, a weak derivative is a generalization of the concept of the derivative of a function (strong derivative) for functions not assumed differentiable, but only integrable, i.e., lying in the Lebesgue space $L^{1}([a,b])$.
The method of integration by parts holds that for differentiable functions $u$ and $\varphi $ we have
$\int _{a}^{b}u(x)\varphi '(x)\,dx={\Big [}u(x)\varphi (x){\Big ]}_{a}^{b}-\int _{a}^{b}u'(x)\varphi (x)\,dx.$
A function u' being the weak derivative of u is essentially defined by the requirement that this equation must hold for all infinitely differentiable functions φ vanishing at the boundary points ($\varphi (a)=\varphi (b)=0$).
Definition
Let $u$ be a function in the Lebesgue space $L^{1}([a,b])$. We say that $v$ in $L^{1}([a,b])$ is a weak derivative of $u$ if
$\int _{a}^{b}u(t)\varphi '(t)\,dt=-\int _{a}^{b}v(t)\varphi (t)\,dt$
for all infinitely differentiable functions $\varphi $ with $\varphi (a)=\varphi (b)=0$.
Generalizing to $n$ dimensions, if $u$ and $v$ are in the space $L_{\text{loc}}^{1}(U)$ of locally integrable functions for some open set $U\subset \mathbb {R} ^{n}$, and if $\alpha $ is a multi-index, we say that $v$ is the $\alpha ^{\text{th}}$-weak derivative of $u$ if
$\int _{U}uD^{\alpha }\varphi =(-1)^{|\alpha |}\int _{U}v\varphi ,$
for all $\varphi \in C_{c}^{\infty }(U)$, that is, for all infinitely differentiable functions $\varphi $ with compact support in $U$. Here $D^{\alpha }\varphi $ is defined as
$D^{\alpha }\varphi ={\frac {\partial ^{|\alpha |}\varphi }{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}}.$
If $u$ has a weak derivative, it is often written $D^{\alpha }u$ since weak derivatives are unique (at least, up to a set of measure zero, see below).
Examples
• The absolute value function $u:\mathbb {R} \rightarrow \mathbb {R} _{+},u(t)=|t|$, which is not differentiable at $t=0$ has a weak derivative $v:\mathbb {R} \rightarrow \mathbb {R} $ known as the sign function, and given by
$v(t)={\begin{cases}1&{\text{if }}t>0;\\[6pt]0&{\text{if }}t=0;\\[6pt]-1&{\text{if }}t<0.\end{cases}}$
This is not the only weak derivative for u: any w that is equal to v almost everywhere is also a weak derivative for u. (In particular, the definition of v(0) above is superfluous and can be replaced with any desired real number r.) Usually, this is not a problem, since in the theory of Lp spaces and Sobolev spaces, functions that are equal almost everywhere are identified.
• The characteristic function of the rational numbers $1_{\mathbb {Q} }$ is nowhere differentiable yet has a weak derivative. Since the Lebesgue measure of the rational numbers is zero,
$\int 1_{\mathbb {Q} }(t)\varphi (t)\,dt=0.$
Thus $v(t)=0$ is a weak derivative of $1_{\mathbb {Q} }$. Note that this does agree with our intuition since when considered as a member of an Lp space, $1_{\mathbb {Q} }$ is identified with the zero function.
• The Cantor function c does not have a weak derivative, despite being differentiable almost everywhere. This is because any weak derivative of c would have to be equal almost everywhere to the classical derivative of c, which is zero almost everywhere. But the zero function is not a weak derivative of c, as can be seen by comparing against an appropriate test function $\varphi $. More theoretically, c does not have a weak derivative because its distributional derivative, namely the Cantor distribution, is a singular measure and therefore cannot be represented by a function.
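The defining identity for the absolute value example can be verified symbolically (a sketch assuming Python with SymPy; the test function $\varphi (t)=\sin(\pi t)$ on $[-1,1]$ is an arbitrary choice satisfying $\varphi (\pm 1)=0$):

```python
import sympy as sp

t = sp.symbols('t', real=True)
u = sp.Piecewise((-t, t < 0), (t, True))        # u(t) = |t|
v = sp.Piecewise((-1, t < 0), (1, True))        # candidate weak derivative (sign function)
phi = sp.sin(sp.pi * t)                         # test function with phi(-1) = phi(1) = 0

# Both sides of the defining identity over [-1, 1]:
lhs = sp.integrate(u * sp.diff(phi, t), (t, -1, 1))
rhs = -sp.integrate(v * phi, (t, -1, 1))
assert sp.simplify(lhs - rhs) == 0              # both sides equal -4/pi
```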
Properties
If two functions are weak derivatives of the same function, they are equal except on a set with Lebesgue measure zero, i.e., they are equal almost everywhere. If we consider equivalence classes of functions such that two functions are equivalent if they are equal almost everywhere, then the weak derivative is unique.
Also, if u is differentiable in the conventional sense then its weak derivative is identical (in the sense given above) to its conventional (strong) derivative. Thus the weak derivative is a generalization of the strong one. Furthermore, the classical rules for derivatives of sums and products of functions also hold for the weak derivative.
Extensions
This concept gives rise to the definition of weak solutions in Sobolev spaces, which are useful for problems of differential equations and in functional analysis.
See also
• Subderivative
• Weyl's lemma (Laplace equation)
References
• Gilbarg, D.; Trudinger, N. (2001). Elliptic partial differential equations of second order. Berlin: Springer. p. 149. ISBN 3-540-41160-7.
• Evans, Lawrence C. (1998). Partial differential equations. Providence, R.I.: American Mathematical Society. p. 242. ISBN 0-8218-0772-2.
• Knabner, Peter; Angermann, Lutz (2003). Numerical methods for elliptic and parabolic partial differential equations. New York: Springer. p. 53. ISBN 0-387-95449-X.
Weakly harmonic function
In mathematics, a function $f$ is weakly harmonic in a domain $D$ if
$\int _{D}f\,\Delta g=0$
for all $g$ with compact support in $D$ and continuous second derivatives, where Δ is the Laplacian.[1] This is the same notion as a weak derivative; however, a function can have a weak derivative without being differentiable. In this case, we have the somewhat surprising result that a function is weakly harmonic if and only if it is harmonic (this is Weyl's lemma). Thus weakly harmonic is actually equivalent to the seemingly stronger harmonic condition.
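The defining condition can be probed numerically. In the sketch below (illustrative only; the choice of f, of the bump g, and of the grid are ours), f(x, y) = x² − y² is harmonic, g is a smooth bump supported inside the unit square, and the discretized integral ∫ f Δg, computed with a five-point Laplacian, vanishes up to rounding, as the definition requires.

```python
import numpy as np

# Discretized check that a harmonic f satisfies ∫ f Δg = 0 for a smooth
# compactly supported test function g.
n = 201
xs = np.linspace(-1.0, 1.0, n)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing="ij")

f = X**2 - Y**2                        # harmonic: f_xx + f_yy = 0

# Smooth bump supported in the disk r < 0.8, so g vanishes near the boundary.
r2 = X**2 + Y**2
g = np.zeros_like(X)
m = r2 < 0.64
g[m] = np.exp(-1.0 / (0.64 - r2[m]))

# Five-point finite-difference Laplacian of g on interior points.
lap = np.zeros_like(g)
lap[1:-1, 1:-1] = (
    g[2:, 1:-1] + g[:-2, 1:-1] + g[1:-1, 2:] + g[1:-1, :-2]
    - 4.0 * g[1:-1, 1:-1]
) / h**2

integral = float((f * lap).sum() * h * h)
print(integral)                        # ≈ 0 up to rounding
```

The result is essentially exact here because discrete summation by parts moves the five-point Laplacian onto f, and the second differences of a quadratic polynomial reproduce its (vanishing) Laplacian exactly.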
See also
• Weak solution
• Weyl's lemma
References
1. Gilbarg, David; Trudinger, Neil S. (12 January 2001). Elliptic partial differential equations of second order. Springer Berlin Heidelberg. p. 29. ISBN 9783540411604. Retrieved 26 April 2023.
Weak Hausdorff space
In mathematics, a weak Hausdorff space or weakly Hausdorff space is a topological space where the image of every continuous map from a compact Hausdorff space into the space is closed.[1] In particular, every Hausdorff space is weak Hausdorff. As a separation property, it is stronger than T1, which is equivalent to the statement that points are closed. Specifically, every weak Hausdorff space is a T1 space.[2][3]
History
The notion was introduced by M. C. McCord[4] to remedy an inconvenience of working with the category of Hausdorff spaces. It is often used in tandem with compactly generated spaces in algebraic topology. For that, see the category of compactly generated weak Hausdorff spaces.
k-Hausdorff spaces
A k-Hausdorff space[5] is a topological space which satisfies any of the following equivalent conditions:
1. Each compact subspace is Hausdorff.
2. The diagonal $\{(x,x):x\in X\}$ is k-closed in $X\times X.$
• A subset $A\subseteq Y$ is k-closed, if $A\cap C$ is closed in $C$ for each compact $C\subseteq Y.$
3. Each compact subspace is closed and strongly locally compact.
• A space is strongly locally compact if for each $x\in X$ and each (not necessarily open) neighborhood $U\subseteq X$ of $x,$ there exists a compact neighborhood $V\subseteq X$ of $x$ such that $V\subseteq U.$
Properties
• A k-Hausdorff space is weak Hausdorff. For if $X$ is k-Hausdorff and $f:C\to X$ is a continuous map from a compact space $C,$ then $f(C)$ is compact, hence Hausdorff, hence closed.
• A Hausdorff space is k-Hausdorff. For a space is Hausdorff if and only if the diagonal $\{(x,x):x\in X\}$ is closed in $X\times X,$ and each closed subset is a k-closed set.
• A k-Hausdorff space is KC. A KC space is a topological space in which every compact subspace is closed.
• A space is Hausdorff-compactly generated weak Hausdorff if and only if it is Hausdorff-compactly generated k-Hausdorff.
• To show that the coherent topology induced by compact Hausdorff subspaces preserves the compact Hausdorff subspaces and their subspace topology requires that the space be k-Hausdorff; weak Hausdorff is not enough. Hence k-Hausdorff can be seen as the more fundamental definition.
Δ-Hausdorff spaces
A Δ-Hausdorff space is a topological space where the image of every path is closed; that is, whenever $f:[0,1]\to X$ is continuous, $f([0,1])$ is closed in $X.$ Every weak Hausdorff space is $\Delta $-Hausdorff, and every $\Delta $-Hausdorff space is a T1 space. A space is Δ-generated if its topology is the finest topology such that each map $f:\Delta ^{n}\to X$ from a topological $n$-simplex $\Delta ^{n}$ to $X$ is continuous. $\Delta $-Hausdorff spaces are to $\Delta $-generated spaces as weak Hausdorff spaces are to compactly generated spaces.
See also
• Fixed-point space – a Hausdorff space where every continuous function from the space into itself has a fixed point
• Hausdorff space – Type of topological space
• Locally Hausdorff space
• Particular point topology
• Quasitopological space – a set X equipped with a function that associates to every compact Hausdorff space K a collection of maps K→C satisfying certain natural conditions
• Separation axiom – Axioms in topology defining notions of "separation"
References
1. Hoffmann, Rudolf-E. (1979), "On weak Hausdorff spaces", Archiv der Mathematik, 32 (5): 487–504, doi:10.1007/BF01238530, MR 0547371.
2. J.P. May, A Concise Course in Algebraic Topology. (1999) University of Chicago Press ISBN 0-226-51183-9 (See chapter 5)
3. Strickland, Neil P. (2009). "The category of CGWH spaces" (PDF).
4. McCord, M. C. (1969), "Classifying spaces and infinite symmetric products", Transactions of the American Mathematical Society, 146: 273–298, doi:10.2307/1995173, JSTOR 1995173, MR 0251719.
5. Lawson, J; Madison, B (1974). "Quotients of k-semigroups". Semigroup Forum. 9: 1–18. doi:10.1007/BF02194829.
Weakly holomorphic modular form
In mathematics, a weakly holomorphic modular form is similar to a holomorphic modular form, except that it is allowed to have poles at cusps. Examples include modular functions and modular forms.
Not to be confused with almost holomorphic modular form.
Definition
To simplify notation this section does the level 1 case; the extension to higher levels is straightforward.
A level 1 weakly holomorphic modular form is a function f on the upper half plane with the properties:
• f transforms like a modular form: $f((a\tau +b)/(c\tau +d))=(c\tau +d)^{k}f(\tau )$ for some integer k called the weight, for every element $\left({\begin{smallmatrix}a&b\\c&d\end{smallmatrix}}\right)$ of SL2(Z).
• As a function of q=e2πiτ, f is given by a Laurent series whose radius of convergence is 1 (so f is holomorphic on the upper half plane and meromorphic at the cusp).
Examples
The ring of level 1 weakly holomorphic modular forms is generated by the Eisenstein series E4 and E6 (which generate the ring of holomorphic modular forms) together with the inverse 1/Δ of the modular discriminant.
Any weakly holomorphic modular form of any level can be written as a quotient of two holomorphic modular forms. However, not every quotient of two holomorphic modular forms is a weakly holomorphic modular form, as it may have poles in the upper half plane.
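As a concrete illustration (a standard computation, sketched here in Python), the q-expansion of the basic example 1/Δ can be obtained by expanding Δ/q = ∏(1 − qⁿ)²⁴ as a power series and inverting it; the pole at the cusp shows up as the leading q⁻¹ term.

```python
# q-expansion of 1/Δ, where Δ = q · ∏_{n≥1} (1 - q^n)^24 is the modular
# discriminant.  Series are stored as coefficient lists truncated at order N.
N = 8

def mul(a, b):
    """Multiply two truncated power series."""
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

# D = Δ/q = ∏ (1 - q^n)^24  (factors with n >= N cannot affect order < N).
D = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        D = mul(D, factor)

# Invert D term by term (its constant term is 1): D * inv = 1.
inv = [0] * N
inv[0] = 1
for k in range(1, N):
    inv[k] = -sum(D[j] * inv[k - j] for j in range(1, k + 1))

# q/Δ = inv, so 1/Δ = q^{-1} + 24 + 324 q + 3200 q^2 + ...
print(inv[:4])   # [1, 24, 324, 3200]
```

The coefficients of D are the Ramanujan tau values τ(n + 1), and the inverted series begins q⁻¹ + 24 + 324q + 3200q², exhibiting the single pole at the cusp that a holomorphic modular form is forbidden to have.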
References
• Duke, W.; Jenkins, Paul (2008), "On the zeros and coefficients of certain weakly holomorphic modular forms", Pure Appl. Math. Q., Special Issue: In honor of Jean-Pierre Serre. Part 1, 4 (4): 1327–1340, doi:10.4310/PAMQ.2008.v4.n4.a15, MR 2441704, Zbl 1200.11027
|
Weakly normal subgroup
In mathematics, in the field of group theory, a subgroup $H$ of a group $G$ is said to be weakly normal if whenever $H^{g}\leq N_{G}(H)$, we have $g\in N_{G}(H)$.
Every pronormal subgroup is weakly normal.
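For small finite groups the condition can be verified by brute force. The sketch below is illustrative (the choice of S₃, of the subgroup, and of the conjugation convention h^g = ghg⁻¹ are ours): it represents S₃ by permutation tuples and checks weak normality of the subgroup generated by a transposition, which is self-normalizing.

```python
from itertools import permutations

# Brute-force check of weak normality in S3.
# A permutation p is a tuple with p[i] the image of i.
G = list(permutations(range(3)))

def compose(p, q):                 # (p ∘ q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(3))

def inverse(p):
    inv = [0] * 3
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugate(g, h):               # here h^g means g h g^{-1}
    return compose(compose(g, h), inverse(g))

def conj_subgroup(H, g):
    return {conjugate(g, h) for h in H}

def normalizer(H):
    return {g for g in G if conj_subgroup(H, g) == set(H)}

def is_weakly_normal(H):
    N = normalizer(H)
    # weakly normal: H^g ≤ N_G(H) forces g ∈ N_G(H)
    return all(g in N for g in G if conj_subgroup(H, g) <= N)

H = [(0, 1, 2), (1, 0, 2)]         # subgroup generated by the transposition (0 1)
print(normalizer(H) == set(H))     # True: this subgroup is self-normalizing
print(is_weakly_normal(H))         # True
```

Here weak normality holds for a simple reason: since H is finite, H^g ≤ N(H) = H forces H^g = H, i.e. g ∈ N(H).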
References
• Ballester-Bolinches, Adolfo; Esteban-Romero, R. (2003), "On finite T-groups", Journal of the Australian Mathematical Society, 75 (2): 181–191, doi:10.1017/S1446788700003712, ISSN 1446-7887, MR 2000428
• Müller, Karl Hans (1966), "Schwachnormale Untergruppen: Eine gemeinsame Verallgemeinerung der normalen und normalisatorgleichen Untergruppen", Rendiconti del Seminario Matematico della Università di Padova. The Mathematical Journal of the University of Padova, 36: 129–157, ISSN 0041-8994, MR 0204528
|
Weakly o-minimal structure
In model theory, a weakly o-minimal structure is a model-theoretic structure whose definable sets in the domain are just finite unions of convex sets.
Definition
A linearly ordered structure, M, with language L including an ordering relation <, is called weakly o-minimal if every parametrically definable subset of M is a finite union of convex (definable) subsets. A theory is weakly o-minimal if all its models are weakly o-minimal.
Note that, in contrast to o-minimality, it is possible for a theory to have models that are weakly o-minimal and to have other models that are not weakly o-minimal.[1]
Difference from o-minimality
In an o-minimal structure $(M,<,...)$ the definable sets in $M$ are finite unions of points and intervals, where interval stands for sets of the form $I=\{r\in M\,:\,a<r<b\}$, for some a and b in $M\cup \{\pm \infty \}$. For weakly o-minimal structures $(M,<,...)$ this is relaxed so that the definable sets in M are finite unions of convex definable sets. A set $C$ is convex if, whenever a and b are in $C$ with a < b, every c ∈ $M$ with a < c < b is also in C. Points and intervals are of course convex sets, but there are convex sets that are neither points nor intervals, as explained below.
If we have a weakly o-minimal structure expanding the real ordered field (R, <, +, ·), then the structure will be o-minimal. The two notions are different in other settings though. For example, let R be the ordered field of real algebraic numbers with the usual ordering < inherited from R. Take a transcendental number, say π, and add a unary relation S to the structure given by the subset (−π,π) ∩ R. Now consider the subset A of R defined by the formula
$0<a\,\wedge \,S(a)$
so that the set consists of all strictly positive real algebraic numbers that are less than π. The set is clearly convex, but cannot be written as a finite union of points and intervals whose endpoints are in R. To write it as an interval one would either have to include the endpoint π, which isn't in R, or one would require infinitely many intervals, such as the union
$\bigcup _{\alpha <\pi }(0,\alpha ).$
Since we have a definable set that isn't a finite union of points and intervals, this structure is not o-minimal. However, it is known that the structure is weakly o-minimal, and in fact the theory of this structure is weakly o-minimal.[2]
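The "finite union of convex sets" condition can be illustrated on a finite sample of the order (a discretized sketch only; it is no substitute for the model-theoretic definition). Restricted to a sorted finite chain, a subset decomposes into maximal convex runs of consecutive sample points, and the set A from the example forms a single run even though no right endpoint for it exists in the structure.

```python
import math

def convex_runs(sample, member):
    """Count maximal runs of consecutive members of a sorted sample."""
    runs, inside = 0, False
    for x in sample:
        if member(x) and not inside:
            runs += 1
        inside = member(x)
    return runs

sample = [k / 100 for k in range(401)]   # sample points of [0, 4]

# The set from the example, 0 < x < π: one convex piece.
one_piece = convex_runs(sample, lambda x: 0 < x < math.pi)

# A set requiring two convex pieces.
two_pieces = convex_runs(sample, lambda x: x < 1 or x > 3)

print(one_piece, two_pieces)             # 1 2
```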
Notes
1. M. A. Dickmann, "Elimination of Quantifiers for Ordered Valuation Rings", The Journal of Symbolic Logic, Vol. 52, No. 1 (Mar. 1987), pp. 116–128.
2. D. Macpherson, D. Marker, C. Steinhorn, "Weakly o-minimal structures and real closed fields", Trans. Amer. Math. Soc. 352 (2000), no. 12, pp. 5435–5483, MR1781273.
Weakly simple polygon
In geometry, a weakly simple polygon is a generalization of a simple polygon, allowing the polygon sides to touch each other in limited ways. Different authors have defined weakly simple polygons in different ways:
• One definition is that, when a simply connected open set in the plane is bounded by finitely many line segments, then its boundary forms a weakly simple polygon.[1] In the image, ABCDEFGHJKLM is a weakly simple polygon according to this definition, with the color blue marking the region for which it is the boundary. This type of weakly simple polygon can arise in computer graphics and CAD as a computer representation of polygonal regions with holes: for each hole a "cut" is created to connect it to an external boundary. Referring to the image above, ABCM is an external boundary of a planar region with a hole FGHJ. The cut ED connects the hole with the exterior and is traversed twice in the resulting weakly simple polygonal representation.
• In an alternative and more general definition of weakly simple polygons, they are the limits of sequences of simple polygons. The polygons in the sequence should all have the same combinatorial type as each other, with convergence under the Fréchet distance.[2] This formalizes the notion that such a polygon allows segments to touch but not to cross. This generalizes the notion of the polygonal boundary of a topological disk: this boundary is the limit of a sequence of polygons, offset from it within the disk. However, this type of weakly simple polygon does not need to form the boundary of a region, as its "interior" can be empty. For example, referring to the same image, the polygonal chain ABCBA is a weakly simple polygon according to this definition: it may be viewed as the limit of "squeezing" of the polygon ABCFGHA.
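The "cut" construction in the first definition can be made concrete with a small computation (the coordinates below are illustrative, not taken from the figure). Because the cut is traversed twice in opposite directions, its two traversals cancel in the shoelace formula, so the signed area of the weakly simple polygon equals the area of the outer region minus the hole.

```python
def signed_area(poly):
    """Shoelace formula for a closed polygon given as a vertex list."""
    a = 0.0
    n = len(poly)
    for i in range(n):
        x0, y0 = poly[i]
        x1, y1 = poly[(i + 1) % n]
        a += x0 * y1 - x1 * y0
    return a / 2.0

# Outer square traversed counterclockwise, a cut from (0,0) to the hole,
# the hole traversed clockwise, then back along the cut.  The two cut
# edges are traversed in opposite directions and cancel.
weakly_simple = [
    (0, 0), (10, 0), (10, 10), (0, 10),   # outer boundary, CCW
    (0, 0), (4, 4),                       # cut, outward leg
    (4, 6), (6, 6), (6, 4),               # hole, CW
    (4, 4),                               # cut, return leg (closes to (0, 0))
]
print(signed_area(weakly_simple))   # 100 (outer) - 4 (hole) = 96
```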
References
1. Dumitrescu, Adrian; Tóth, Csaba D. (2007). "Light orthogonal networks with constant geometric dilation". In Thomas, Wolfgang; Weil, Pascal (eds.). STACS 2007: 24th Annual Symposium on Theoretical Aspects of Computer Science, Aachen, Germany, February 22-24, 2007, Proceedings (illustrated ed.). Springer. p. 177. ISBN 978-3540709176.
2. Chang, Hsien-Chih; Erickson, Jeff; Xu, Chao (2015). "Detecting weakly simple polygons". In Indyk, Piotr (ed.). Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015. SIAM. pp. 1655–1670. arXiv:1407.3340. doi:10.1137/1.9781611973730.110.
Weakly symmetric space
In mathematics, a weakly symmetric space is a notion introduced by the Norwegian mathematician Atle Selberg in the 1950s as a generalisation of symmetric space, due to Élie Cartan. Geometrically the spaces are defined as complete Riemannian manifolds such that any two points can be exchanged by an isometry, the symmetric case being when the isometry is required to have period two. The classification of weakly symmetric spaces relies on that of periodic automorphisms of complex semisimple Lie algebras. They provide examples of Gelfand pairs, although the corresponding theory of spherical functions in harmonic analysis, known for symmetric spaces, has not yet been developed.
References
• Akhiezer, D. N.; Vinberg, E. B. (1999), "Weakly symmetric spaces and spherical varieties", Transf. Groups, 4: 3–24, doi:10.1007/BF01236659, S2CID 124032062
• Helgason, Sigurdur (1978), Differential geometry, Lie groups and symmetric spaces, Academic Press, ISBN 0-12-338460-5
• Kac, V. G. (1990), Infinite dimensional Lie algebras (3rd ed.), Cambridge University Press, ISBN 0-521-46693-8
• Kobayashi, Toshiyuki (2002). "Branching problems of unitary representations". Proceedings of the International Congress of Mathematicians, Vol. II. Beijing: Higher Ed. Press. pp. 615–627.
• Kobayashi, Toshiyuki (2004), "Geometry of multiplicity-free representations of GL(n), visible actions on flag varieties, and triunity", Acta Appl. Math., 81: 129–146, doi:10.1023/B:ACAP.0000024198.46928.0c, S2CID 14530010
• Kobayashi, Toshiyuki (2007), "A generalized Cartan decomposition for the double coset space (U(n1)×U(n2)×U(n3))\U(n)/(U(p)×U(q))", J. Math. Soc. Jpn., 59: 669–691
• Krämer, Manfred (1979), "Sphärische Untergruppen in kompakten zusammenhängenden Liegruppen", Compositio Mathematica (in German), 38: 129–153
• Matsuki, Toshihiko (1991), "Orbits on flag manifolds", Proceedings of the International Congress of Mathematicians, Vol. II, 1990 Kyoto, Math. Soc. Japan, pp. 807–813
• Matsuki, Toshihiko (2013), "An example of orthogonal triple flag variety of finite type", J. Algebra, 375: 148–187, doi:10.1016/j.jalgebra.2012.11.012, S2CID 119132477
• Mikityuk, I. V. (1987), "On the integrability of invariant Hamiltonian systems with homogeneous configuration spaces", Math. USSR Sbornik, 57 (2): 527–546, Bibcode:1987SbMat..57..527M, doi:10.1070/SM1987v057n02ABEH003084
• Selberg, A. (1956), "Harmonic analysis and discontinuous groups in weakly symmetric riemannian spaces, with applications to Dirichlet series", J. Indian Math. Society, 20: 47–87
• Stembridge, J. R. (2001), "Multiplicity-free products of Schur functions", Annals of Combinatorics, 5 (2): 113–121, doi:10.1007/s00026-001-8008-6, hdl:2027.42/41839, S2CID 18105235
• Stembridge, J. R. (2003), "Multiplicity-free products and restrictions of Weyl characters", Representation Theory, 7 (18): 404–439, doi:10.1090/S1088-4165-03-00150-X
• Vinberg, É. B. (2001), "Commutative homogeneous spaces and co-isotropic symplectic actions", Russian Math. Surveys, 56 (1): 1–60, Bibcode:2001RuMaS..56....1V, doi:10.1070/RM2001v056n01ABEH000356, S2CID 250919435
• Wolf, J. A.; Gray, A. (1968), "Homogeneous spaces defined by Lie group automorphisms. I, II", Journal of Differential Geometry, 2: 77–114, 115–159
• Wolf, J. A. (2007), Harmonic Analysis on Commutative Spaces, American Mathematical Society, ISBN 978-0-8218-4289-8
• Ziller, Wolfgang (1996), "Weakly symmetric spaces", Topics in geometry, Progr. Nonlinear Differential Equations Appl., vol. 20, Boston: Birkhäuser, pp. 355–368
|
Wealthy Babcock
Wealthy Consuelo Babcock (November 11, 1895 – April 10, 1990) was an American mathematician. She was awarded a Ph.D. from the University of Kansas and had a long teaching career at that institution.[1]
Wealthy Consuelo Babcock
Yearbook photo of Wealthy Babcock, University of Kansas Class of 1909
Born: November 18, 1895
Washington County, Kansas
Died: April 10, 1990 (aged 94)
Lawrence, Kansas
Nationality: American
Alma mater: University of Kansas
Scientific career
Fields: Mathematics
Institutions: University of Kansas
Thesis: On the Geometry Associated with Certain Determinants with Linear Elements (1926)
Doctoral advisor: Ellis Bagley Stouffer
Early life and education
Wealthy Consuelo Babcock was born in Washington County, Kansas, the second child of Ella Babcock (née Kerr) and Cassius Lincoln Babcock. She graduated in 1913 from Washington County High School and taught for two years in one-room country schools in Washington County. The following year, she matriculated at the University of Kansas where she was a member of the women's basketball team. After receiving her Bachelor of Arts in 1919, she taught for a year at Neodesha High School in southeastern Kansas. She then returned to the University of Kansas in 1920 as an instructor.
Career at the University of Kansas
In addition to teaching at the University, Wealthy pursued her graduate studies, earning a master's in 1922 and a doctorate with a minor in physics in 1926. She was promoted to assistant professor in 1926 and to associate professor in 1940. She retired in 1966. During her tenure on the Kansas faculty, she regularly attended meetings of the Kansas Section of the Mathematical Association of America.
She was an outstanding teacher and for thirty years she was the mathematics department's librarian.[2]
Last years
After her retirement, Wealthy Babcock was honored by the dedication of the Wealthy Babcock Mathematics Library. She served on many committees on scholarships and awards and was particularly active in the KU Alumni Association's activities, for which she received the Fred Ellsworth Medallion, the highest award for service, in 1977.
Wealthy Babcock died in 1990 at ninety-four at Presbyterian Manor in Lawrence, Kansas. She was cremated and interred in the Pioneer Cemetery on the campus of the university.[3]
References
1. Green, Judy; LaDuke, Jeanne (January 2009). Pioneering Women in American Mathematics: The Pre-1940 PhD's. American Mathematical Soc. p. 128. ISBN 978-0-8218-4376-5.
2. Judy Green and Jeanne LaDuke, “Supplementary Material for Pioneering Women in American Mathematics: The Pre-1940 PhD’s,” 33: http://www.ams.org/publications/authors/books/postpub/hmath-34-PioneeringWomen.pdf
3. “Wealthy C. Babcock,” Lawrence Journal-World, 11 April 1990
External links
• Women’s Hall of Fame, KU Emily Taylor Center for Women and Gender Equality
• Mathematics Genealogy Project
• Bill Mayer, “Rabid KU Fans Prove Basketballs Mass Appeal,” Lawrence Journal-World, 23 Jan 2005.
• Professor Tom Levin, “Interview with Wealthy Babcock,” Oral History of the Retirees Club, The University of Kansas, Summer of 1985.
• Wealthy Babcock at Find a Grave
Weapons of Math Destruction
Weapons of Math Destruction is a 2016 American book about the societal impact of algorithms, written by Cathy O'Neil. It explores how some big data algorithms are increasingly used in ways that reinforce preexisting inequality. It was longlisted for the 2016 National Book Award for Nonfiction but did not advance to the shortlist.[1][2][3] The book has been widely reviewed,[4] and won the Euler Book Prize.
Weapons of Math Destruction
First edition
Author: Cathy O'Neil
Country: United States
Language: English
Subject: Mathematics, race, ethnicity
Genre: Non-fiction
Publisher: Crown Books
Publication date: 2016
Awards: Euler Book Prize
ISBN: 0553418815
Overview
O'Neil, a mathematician, analyses how the use of big data and algorithms in a variety of fields, including insurance, advertising, education, and policing, can lead to decisions that harm the poor, reinforce racism, and amplify inequality. According to National Book Foundation:[1]
Most troubling, they reinforce discrimination: If a poor student can't get a loan because a lending model deems him too risky (by virtue of his zip code), he's then cut off from the kind of education that could pull him out of poverty, and a vicious spiral ensues. Models are propping up the lucky and punishing the downtrodden, creating a "toxic cocktail for democracy."
She posits that these problematic mathematical tools share three key features: they are opaque, unregulated, and difficult to contest. They are also scalable, thereby amplifying any inherent biases to affect increasingly larger populations. WMDs, or Weapons of Math Destruction, are mathematical algorithms that supposedly take human traits and quantify them, resulting in damaging effects and the perpetuation of bias against certain groups of people.
Reception
The book received widespread praise for elucidating the consequences of reliance on big data models for structuring socioeconomic resources. Clay Shirky from The New York Times Book Review said "O'Neil does a masterly job explaining the pervasiveness and risks of the algorithms that regulate our lives," while pointing out that "the section on solutions is weaker than the illustration of the problem".[5] Kirkus Reviews praised the book for being "an unusually lucid and readable" discussion of a technical subject.[6]
In 2019, the book won the Euler Book Prize of the Mathematical Association of America.[7]
References
1. 2016 National Book Award Longlist, Nonfiction, National Book Foundation
2. "The National Book Awards Longlist: Nonfiction", The New Yorker, September 14, 2016
3. Rawlins, Aimee (September 6, 2016), Math is racist: How data is driving inequality, CNN
4. See:
• Lozano, Guadalupe I., "Review of Weapons of Math Destruction", MathSciNet, MR 3561130
• Braga, Filipe Meirelles Ferreira (June 2016), "Review of Weapons of Math Destruction", Mural Internacional, 7 (1), doi:10.12957/rmi.2016.25939
• Lamb, Evelyn (August 2016), "Review of Weapons of Math Destruction", Roots of Unity, Scientific American
• Shankar, Kalpana (September 2016), "A data scientist reveals how invisible algorithms perpetuate inequality (review of Weapons of Math Destruction)", Science
• Doctorow, Cory (September 2016), "Weapons of Math Destruction: invisible, ubiquitous algorithms are ruining millions of lives", BoingBoing
• McEvers, Kelly (September 2016), "'Weapons Of Math Destruction' outlines dangers of relying on data analytics", All Things Considered, NPR
• Hayden, Robert W. (January 2017), "Review of Weapons of Math Destruction", MAA Reviews
• Varis, Piia (January 2017), "Review of Weapons of Math Destruction", Diggit
• Omitola, Tope (January 2017), "Review of Weapons of Math Destruction", ACM Computing Reviews
• Jain, Apurv (March 2017), "Review of Weapons of Math Destruction", Business Economics, 52 (2): 123–125, doi:10.1057/s11369-017-0027-3, S2CID 157914174
• Schrag, Francis (March 2017), "Review of Weapons of Math Destruction", Education Review, 24, doi:10.14507/er.v24.2197
• Bradley, James (March 2017), "Review of Weapons of Math Destruction", Perspectives on Science and Christian Faith, 69 (1): 54
• Maloney, Cory (Spring 2017), "Review of Weapons of Math Destruction", Journal of Markets & Morality, 20 (1): 194
• Roy, Michael (April 2017), "Review of Weapons of Math Destruction", College & Research Libraries, 78 (3): 403, doi:10.5860/crl.78.3.403
• Case, James (May 2017), "When big data algorithms discriminate (review of Weapons of Math Destruction)", SIAM News, 50 (4)
• Arslan, Faruk (July 2017), "Review of Weapons of Math Destruction", Journal of Information Privacy and Security, 13 (3): 157–159, doi:10.1080/15536548.2017.1357388, S2CID 188383106
• Poovey, Mary (September 2017), "Review of Weapons of Math Destruction", Notices of the American Mathematical Society, 64 (8): 933–935, doi:10.1090/noti1561
• Doyle, Tony (October 2017), "Review of Weapons of Math Destruction", The Information Society, 33 (5): 301–302, doi:10.1080/01972243.2017.1354593, S2CID 22283226
• Mateen, Harris (2018), "Review of Weapons of Math Destruction", Berkeley Journal of Employment and Labor Law, 39 (1): 285–292
• Tunstall, Samuel (January 2018), "Models as weapons (review of Weapons of Math Destruction)", Numeracy, 11 (1), doi:10.5038/1936-4660.11.1.10
• Woodson, Thomas (August 2018), "Review of Weapons of Math Destruction", Journal of Responsible Innovation, 5 (3): 361–363, doi:10.1080/23299460.2018.1495027
• Bansal, Gaurav (January 2019), "Review of Weapons of Math Destruction", Journal of Information Technology Case and Application Research, 21 (1): 60–63, doi:10.1080/15228053.2019.1587571, S2CID 189618193
• Verma, Shikha (June 2019), "Review of Weapons of Math Destruction", Vikalpa: The Journal for Decision Makers, 44 (2): 97–98, doi:10.1177/0256090919853933
• Eusufzai, Zaki (September 2019), "Review of Weapons of Math Destruction", The Social Science Journal, 56 (3): 425–426, doi:10.1016/j.soscij.2019.04.002, S2CID 203099077
5. Shirky, Clay (October 3, 2016), "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy", The New York Times Book Review
6. "Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy", Kirkus Reviews, July 19, 2016
7. "Euler Book Prize" (PDF), Prizes and Awards, Joint Mathematics Meetings, pp. 3–4, January 2019, retrieved 2019-07-20 – via American Mathematical Society
External links
• Presentation by O'Neil on Weapons of Math Destruction, September 20, 2016, C-SPAN
Numerical weather prediction
Numerical weather prediction (NWP) uses mathematical models of the atmosphere and oceans to predict the weather based on current weather conditions. Though first attempted in the 1920s, it was not until the advent of computer simulation in the 1950s that numerical weather predictions produced realistic results. A number of global and regional forecast models are run in different countries worldwide, using current weather observations relayed from radiosondes, weather satellites and other observing systems as inputs.
Mathematical models based on the same physical principles can be used to generate either short-term weather forecasts or longer-term climate predictions; the latter are widely applied for understanding and projecting climate change. The improvements made to regional models have allowed significant improvements in tropical cyclone track and air quality forecasts; however, atmospheric models perform poorly at handling processes that occur in a relatively constricted area, such as wildfires.
Manipulating the vast datasets and performing the complex calculations necessary to modern numerical weather prediction requires some of the most powerful supercomputers in the world. Even with the increasing power of supercomputers, the forecast skill of numerical weather models extends to only about six days. Factors affecting the accuracy of numerical predictions include the density and quality of observations used as input to the forecasts, along with deficiencies in the numerical models themselves. Post-processing techniques such as model output statistics (MOS) have been developed to improve the handling of errors in numerical predictions.
A more fundamental problem lies in the chaotic nature of the partial differential equations that describe the atmosphere. It is impossible to solve these equations exactly, and small errors grow with time (doubling about every five days). Present understanding is that this chaotic behavior limits accurate forecasts to about 14 days even with accurate input data and a flawless model. In addition, the partial differential equations used in the model need to be supplemented with parameterizations for solar radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, and the effects of terrain. In an effort to quantify the large amount of inherent uncertainty remaining in numerical predictions, ensemble forecasts have been used since the 1990s to help gauge the confidence in the forecast, and to obtain useful results farther into the future than otherwise possible. This approach analyzes multiple forecasts created with an individual forecast model or multiple models.
History
The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who used procedures originally developed by Vilhelm Bjerknes[1] to produce by hand a six-hour forecast for the state of the atmosphere over two points in central Europe, taking at least six weeks to do so.[2][1][3] It was not until the advent of the computer and computer simulations that computation time was reduced to less than the forecast period itself. The ENIAC was used to create the first weather forecasts via computer in 1950, based on a highly simplified approximation to the atmospheric governing equations.[4][5] In 1954, Carl-Gustav Rossby's group at the Swedish Meteorological and Hydrological Institute used the same model to produce the first operational forecast (i.e., a routine prediction for practical use).[6] Operational numerical weather prediction in the United States began in 1955 under the Joint Numerical Weather Prediction Unit (JNWPU), a joint project by the U.S. Air Force, Navy and Weather Bureau.[7] In 1956, Norman Phillips developed a mathematical model which could realistically depict monthly and seasonal patterns in the troposphere; this became the first successful climate model.[8][9] Following Phillips' work, several groups began working to create general circulation models.[10] The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.[11]
As computers have become more powerful, the size of the initial data sets has increased and newer atmospheric models have been developed to take advantage of the added available computing power. These newer models include more physical processes in the simplifications of the equations of motion in numerical simulations of the atmosphere.[6] In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977.[1][12] The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s.[13][14] By the early 1980s models began to include the interactions of soil and vegetation with the atmosphere, which led to more realistic forecasts.[15]
The output of forecast models based on atmospheric dynamics is unable to resolve some details of the weather near the Earth's surface. As such, a statistical relationship between the output of a numerical weather model and the ensuing conditions at the ground was developed in the 1970s and 1980s, known as model output statistics (MOS).[16][17] Starting in the 1990s, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible.[18][19][20]
Initialization
The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to 1 kilometer (0.6 mi) globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation.[21] The main inputs from country-based weather services are observations from devices (called radiosondes) in weather balloons that measure various atmospheric parameters and transmit them to a fixed receiver, as well as from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports,[22] or every six hours in SYNOP reports.[23] These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms.[24] The data are then used in the model as the starting point for a forecast.[25]
A variety of methods are used to gather observational data for use in numerical models. Sites launch radiosondes in weather balloons which rise through the troposphere and well into the stratosphere.[26] Information from weather satellites is used where traditional data sources are not available. Commerce provides pilot reports along aircraft routes[27] and ship reports along shipping routes.[28] Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones.[29][30] Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent.[31] Sea ice began to be initialized in forecast models in 1971.[32] Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.[33]
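The objective-analysis step, which maps irregularly spaced observations onto the model grid, can be illustrated with a toy one-dimensional scheme. The sketch below is a minimal Cressman-style successive-correction pass, not any operational assimilation system; the grid, influence radius, and single correction pass are illustrative assumptions.

```python
def cressman_weight(dist, radius):
    """Cressman-style weight: 1 at the station, falling to 0 at the influence radius."""
    if dist >= radius:
        return 0.0
    return (radius**2 - dist**2) / (radius**2 + dist**2)

def analyze(grid_x, background, obs, radius=2.0):
    """One correction pass blending irregular observations into a background field.

    grid_x     -- 1-D grid coordinates
    background -- first-guess value at each grid point (e.g. a prior forecast)
    obs        -- list of (x_position, observed_value) pairs
    """
    analysis = []
    for xg, bg in zip(grid_x, background):
        num = den = 0.0
        for xo, vo in obs:
            w = cressman_weight(abs(xg - xo), radius)
            num += w * (vo - bg)   # weighted observation increments
            den += w
        # grid points outside every station's influence keep the background value
        analysis.append(bg + num / den if den > 0 else bg)
    return analysis
```

Grid points within the influence radius are nudged toward the observations, while distant points retain the first-guess field, mirroring how real analyses fall back on the previous forecast where data are sparse.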
Computation
An atmospheric model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any modern model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere.[34] These equations—along with the ideal gas law—are used to evolve the density, pressure, and potential temperature scalar fields and the air velocity (wind) vector field of the atmosphere through time. Additional transport equations for pollutants and other aerosols are included in some primitive-equation high-resolution models as well.[35] The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods,[36] with the exception of a few idealized cases.[37] Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models and almost all regional models use finite difference methods for all three spatial dimensions, while other global models and a few regional models use spectral methods for the horizontal dimensions and finite-difference methods in the vertical.[36]
These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future; the time increment for this prediction is called a time step. This future atmospheric state is then used as the starting point for another application of the predictive equations to find new rates of change, and these new rates of change predict the atmosphere at a yet further time step into the future. This time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability.[38] Time steps for global models are on the order of tens of minutes,[39] while time steps for regional models are between one and four minutes.[40] The global models are run at varying times into the future. The UKMET Unified Model is run six days into the future,[41] while the European Centre for Medium-Range Weather Forecasts' Integrated Forecast System and Environment Canada's Global Environmental Multiscale Model both run out to ten days into the future,[42] and the Global Forecast System model run by the Environmental Modeling Center is run sixteen days into the future.[43] The visual output produced by a model solution is known as a prognostic chart, or prog.[44]
Parameterization
Some meteorological processes are too small-scale or too complex to be explicitly included in numerical weather prediction models. Parameterization is a procedure for representing these processes by relating them to variables on the scales that the model resolves. For example, the gridboxes in weather and climate models have sides that are between 5 kilometers (3 mi) and 300 kilometers (200 mi) in length. A typical cumulus cloud has a scale of less than 1 kilometer (0.6 mi), and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized by schemes of varying sophistication. In the earliest models, if a column of air within a model gridbox was conditionally unstable (essentially, the bottom was warmer and moister than the top) and the water vapor content at any point within the column became saturated then it would be overturned (the warm, moist air would begin rising), and the air in that vertical column mixed. More sophisticated schemes recognize that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sizes between 5 and 25 kilometers (3 and 16 mi) can explicitly represent convective clouds, although they need to parameterize cloud microphysics which occur at a smaller scale.[45] The formation of large-scale (stratus-type) clouds is more physically based; they form when the relative humidity reaches some prescribed value. The cloud fraction can be related to this critical value of relative humidity.[46]
The amount of solar radiation reaching the ground, as well as the formation of cloud droplets, occurs on the molecular scale, and so these processes must be parameterized before they can be included in the model. Atmospheric drag produced by mountains must also be parameterized, as the limitations in the resolution of elevation contours produce significant underestimates of the drag.[47] This method of parameterization is also done for the surface flux of energy between the ocean and the atmosphere, in order to determine realistic sea surface temperatures and type of sea ice found near the ocean's surface.[48] Sun angle as well as the impact of multiple cloud layers is taken into account.[49] Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere, and thus it is important to parameterize their contribution to these processes.[50] Within air quality models, parameterizations take into account atmospheric emissions from multiple relatively tiny sources (e.g. roads, fields, factories) within specific grid boxes.[51]
Domains
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models (also known as limited-area models, or LAMs) allow for the use of finer grid spacing than global models because the available computational resources are focused on a specific area instead of being spread over the globe. This allows regional models to resolve explicitly smaller-scale meteorological phenomena that cannot be represented on the coarser grid of a global model. Regional models use a global model to specify conditions at the edge of their domain (boundary conditions) in order to allow systems from outside the regional model domain to move into its area. Uncertainty and errors within regional models are introduced by the global model used for the boundary conditions of the edge of the regional model, as well as errors attributable to the regional model itself.[52]
The vertical coordinate is handled in various ways. Lewis Fry Richardson's 1922 model used geometric height ($z$) as the vertical coordinate. Later models substituted the geometric $z$ coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations.[53] This correlation between coordinate systems can be made since pressure decreases with height through the Earth's atmosphere.[54] The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (about 5,500 m (18,000 ft)) level,[4] and thus was essentially two-dimensional. High-resolution models—also called mesoscale models—such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates.[55] This coordinate system receives its name from the independent variable $\sigma $ used to scale atmospheric pressures with respect to the pressure at the surface, and in some cases also with the pressure at the top of the domain.[56]
Model output statistics
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions, statistical methods have been developed to attempt to correct the forecasts. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS),[57] and were developed by the National Weather Service for their suite of weather forecasting models in the late 1960s.[16][58]
Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect.[59] MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Because MOS is run after its respective global or regional model, its production is known as post-processing. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds.[60]
Ensembles
Main article: Ensemble forecasting
In 1963, Edward Lorenz discovered the chaotic nature of the fluid dynamics equations involved in weather forecasting.[61] Extremely small errors in temperature, winds, or other initial inputs given to numerical models will amplify and double every five days,[61] making it impossible for long-range forecasts—those made more than two weeks in advance—to predict the state of the atmosphere with any degree of forecast skill. Furthermore, existing observation networks have poor coverage in some regions (for example, over large bodies of water such as the Pacific Ocean), which introduces uncertainty into the true initial state of the atmosphere. While a set of equations, known as the Liouville equations, exists to determine the initial uncertainty in the model initialization, the equations are too complex to run in real-time, even with the use of supercomputers.[62] These uncertainties limit forecast model accuracy to about five or six days into the future.[63][64]
Edward Epstein recognized in 1969 that the atmosphere could not be completely described with a single forecast run due to inherent uncertainty, and proposed using an ensemble of stochastic Monte Carlo simulations to produce means and variances for the state of the atmosphere.[65] Although this early example of an ensemble showed skill, in 1974 Cecil Leith showed that they produced adequate forecasts only when the ensemble probability distribution was a representative sample of the probability distribution in the atmosphere.[66]
Since the 1990s, ensemble forecasts have been used operationally (as routine forecasts) to account for the stochastic nature of weather processes – that is, to resolve their inherent uncertainty. This method involves analyzing multiple forecasts created with an individual forecast model by using different physical parametrizations or varying initial conditions.[62] Starting in 1992 with ensemble forecasts prepared by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction, model ensemble forecasts have been used to help define the forecast uncertainty and to extend the window in which numerical weather forecasting is viable farther into the future than otherwise possible.[18][19][20] The ECMWF model, the Ensemble Prediction System,[19] uses singular vectors to simulate the initial probability density, while the NCEP ensemble, the Global Ensemble Forecasting System, uses a technique known as vector breeding.[18][20] The UK Met Office runs global and regional ensemble forecasts where perturbations to initial conditions are used by 24 ensemble members in the Met Office Global and Regional Ensemble Prediction System (MOGREPS) to produce 24 different forecasts.[67]
In a single model-based approach, the ensemble forecast is usually evaluated in terms of an average of the individual forecasts concerning one forecast variable, as well as the degree of agreement between various forecasts within the ensemble system, as represented by their overall spread. Ensemble spread is diagnosed through tools such as spaghetti diagrams, which show the dispersion of one quantity on prognostic charts for specific time steps in the future. Another tool where ensemble spread is used is a meteogram, which shows the dispersion in the forecast of one quantity for one specific location. It is common for the ensemble spread to be too small to include the weather that actually occurs, which can lead to forecasters misdiagnosing model uncertainty;[68] this problem becomes particularly severe for forecasts of the weather about ten days in advance.[69] When ensemble spread is small and the forecast solutions are consistent within multiple model runs, forecasters perceive more confidence in the ensemble mean, and the forecast in general.[68] Despite this perception, a spread-skill relationship is often weak or not found, as spread-error correlations are normally less than 0.6, and only under special circumstances reach 0.6–0.7.[70] The relationship between ensemble spread and forecast skill varies substantially depending on such factors as the forecast model and the region for which the forecast is made.
In the same way that many forecasts from a single model can be used to form an ensemble, multiple models may also be combined to produce an ensemble forecast. This approach is called multi-model ensemble forecasting, and it has been shown to improve forecasts when compared to a single model-based approach.[71] Models within a multi-model ensemble can be adjusted for their various biases, which is a process known as superensemble forecasting. This type of forecast significantly reduces errors in model output.[72]
Applications
Air quality modeling
Air quality forecasting attempts to predict when the concentrations of pollutants will attain levels that are hazardous to public health. The concentration of pollutants in the atmosphere is determined by their transport, or mean velocity of movement through the atmosphere, their diffusion, chemical transformation, and ground deposition.[73] In addition to pollutant source and terrain information, these models require data about the state of the fluid flow in the atmosphere to determine its transport and diffusion.[74] Meteorological conditions such as thermal inversions can prevent surface air from rising, trapping pollutants near the surface,[75] which makes accurate forecasts of such events crucial for air quality modeling. Urban air quality models require a very fine computational mesh, requiring the use of high-resolution mesoscale weather models; in spite of this, the quality of numerical weather guidance is the main uncertainty in air quality forecasts.[74]
Climate modeling
A General Circulation Model (GCM) is a mathematical model that can be used in computer simulations of the global circulation of a planetary atmosphere or ocean. An atmospheric general circulation model (AGCM) is essentially the same as a global numerical weather prediction model, and some (such as the one used in the UK Unified Model) can be configured for both short-term weather forecasts and longer-term climate predictions. Along with sea ice and land-surface components, AGCMs and oceanic GCMs (OGCM) are key components of global climate models, and are widely applied for understanding the climate and projecting climate change. For aspects of climate change, a range of man-made chemical emission scenarios can be fed into the climate models to see how an enhanced greenhouse effect would modify the Earth's climate.[76] Versions designed for climate applications with time scales of decades to centuries were originally created in 1969 by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory in Princeton, New Jersey.[77] When run for multiple decades, computational limitations mean that the models must use a coarse grid that leaves smaller-scale interactions unresolved.[78]
Ocean surface modeling
The transfer of energy between the wind blowing over the surface of an ocean and the ocean's upper layer is an important element in wave dynamics.[79] The spectral wave transport equation is used to describe the change in wave spectrum over changing topography. It simulates wave generation, wave movement (propagation within a fluid), wave shoaling, refraction, energy transfer between waves, and wave dissipation.[80] Since surface winds are the primary forcing mechanism in the spectral wave transport equation, ocean wave models use information produced by numerical weather prediction models as inputs to determine how much energy is transferred from the atmosphere into the layer at the surface of the ocean. Along with dissipation of energy through whitecaps and resonance between waves, surface winds from numerical weather models allow for more accurate predictions of the state of the sea surface.[81]
Tropical cyclone forecasting
Tropical cyclone forecasting also relies on data provided by numerical weather models. Three main classes of tropical cyclone guidance models exist: Statistical models are based on an analysis of storm behavior using climatology, and correlate a storm's position and date to produce a forecast that is not based on the physics of the atmosphere at the time. Dynamical models are numerical models that solve the governing equations of fluid flow in the atmosphere; they are based on the same principles as other limited-area numerical weather prediction models but may include special computational techniques such as refined spatial domains that move along with the cyclone. Models that use elements of both approaches are called statistical-dynamical models.[82]
In 1978, the first hurricane-tracking model based on atmospheric dynamics—the movable fine-mesh (MFM) model—began operating.[13] Within the field of tropical cyclone track forecasting, despite the ever-improving dynamical model guidance which occurred with increased computational power, it was not until the 1980s when numerical weather prediction showed skill, and until the 1990s when it consistently outperformed statistical or simple dynamical models.[83] Predictions of the intensity of a tropical cyclone based on numerical weather prediction continue to be a challenge, since statistical methods continue to show higher skill over dynamical guidance.[84]
Wildfire modeling
On a molecular scale, there are two main competing reaction processes involved in the degradation of cellulose, or wood fuels, in wildfires. When there is a low amount of moisture in a cellulose fiber, volatilization of the fuel occurs; this process will generate intermediate gaseous products that will ultimately be the source of combustion. When moisture is present, or when enough heat is being carried away from the fiber, charring occurs. The chemical kinetics of both reactions indicate that there is a point at which the level of moisture is low enough, and/or heating rates high enough, for combustion processes to become self-sufficient. Consequently, changes in wind speed, direction, moisture, temperature, or lapse rate at different levels of the atmosphere can have a significant impact on the behavior and growth of a wildfire. Since the wildfire acts as a heat source to the atmospheric flow, the wildfire can modify local advection patterns, introducing a feedback loop between the fire and the atmosphere.[85]
A simplified two-dimensional model for the spread of wildfires that used convection to represent the effects of wind and terrain, as well as radiative heat transfer as the dominant method of heat transport led to reaction–diffusion systems of partial differential equations.[86][87] More complex models join numerical weather models or computational fluid dynamics models with a wildfire component which allow the feedback effects between the fire and the atmosphere to be estimated.[85] The additional complexity in the latter class of models translates to a corresponding increase in their computer power requirements. In fact, a full three-dimensional treatment of combustion via direct numerical simulation at scales relevant for atmospheric modeling is not currently practical because of the excessive computational cost such a simulation would require. Numerical weather models have limited forecast skill at spatial resolutions under 1 kilometer (0.6 mi), forcing complex wildfire models to parameterize the fire in order to calculate how the winds will be modified locally by the wildfire, and to use those modified winds to determine the rate at which the fire will spread locally.[88][89][90] Although models such as Los Alamos' FIRETEC solve for the concentrations of fuel and oxygen, the computational grid cannot be fine enough to resolve the combustion reaction, so approximations must be made for the temperature distribution within each grid cell, as well as for the combustion reaction rates themselves.
See also
• Atmospheric physics
• Atmospheric thermodynamics
• Tropical cyclone forecast model
• Types of atmospheric models
References
1. Lynch, Peter (March 2008). "The origins of computer weather prediction and climate modeling" (PDF). Journal of Computational Physics. 227 (7): 3431–44. Bibcode:2008JCoPh.227.3431L. doi:10.1016/j.jcp.2007.02.034. Archived from the original (PDF) on 2010-07-08. Retrieved 2010-12-23.
2. Simmons, A. J.; Hollingsworth, A. (2002). "Some aspects of the improvement in skill of numerical weather prediction". Quarterly Journal of the Royal Meteorological Society. 128 (580): 647–677. Bibcode:2002QJRMS.128..647S. doi:10.1256/003590002321042135. S2CID 121625425.
Web (differential geometry)
In mathematics, a web permits an intrinsic characterization in terms of Riemannian geometry of the additive separation of variables in the Hamilton–Jacobi equation.[1][2]
Formal definition
An orthogonal web on a Riemannian manifold (M,g) is a set ${\mathcal {S}}=({\mathcal {S}}^{1},\dots ,{\mathcal {S}}^{n})$ of n pairwise transversal and orthogonal foliations of connected submanifolds of codimension 1, where n denotes the dimension of M.
Note that two submanifolds of codimension 1 are orthogonal if their normal vectors are orthogonal; in an indefinite metric, orthogonality does not imply transversality.
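A basic example, using the flat metric on $\mathbb {R} ^{3}$ (an illustration, not taken from the cited sources): the three families of coordinate planes
$\mathcal {S}^{1}=\{x=c\},\qquad \mathcal {S}^{2}=\{y=c\},\qquad \mathcal {S}^{3}=\{z=c\},\qquad c\in \mathbb {R} ,$
are pairwise transversal foliations by connected submanifolds of codimension 1 whose unit normals $e_{1},e_{2},e_{3}$ are mutually orthogonal, so they form an orthogonal web. Likewise, the spheres $r=c$, the cones $\theta =c$ and the half-planes $\varphi =c$ of spherical coordinates form an orthogonal web away from the coordinate axis, reflecting the additive separation of the Hamilton–Jacobi equation in these coordinates.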
Alternative definition
Given a smooth manifold of dimension n, an orthogonal web (also called orthogonal grid or Ricci’s grid) on a Riemannian manifold (M,g) is a set[3] ${\mathcal {C}}=({\mathcal {C}}^{1},\dots ,{\mathcal {C}}^{n})$ of n pairwise transversal and orthogonal foliations of connected submanifolds of dimension 1.
Remark
Since vector fields can be visualized as stream-lines of a stationary flow or as Faraday’s lines of force, a non-vanishing vector field in space generates a space-filling system of lines through each point, known to mathematicians as a congruence (i.e., a local foliation). Ricci’s vision filled Riemann’s n-dimensional manifold with n congruences orthogonal to each other, i.e., a local orthogonal grid.
Differential geometry of webs
A systematic study of webs was started by Blaschke in the 1930s, who brought a group-theoretic approach to web geometry.
Classical definition
Let $M=X^{nr}$ be a differentiable manifold of dimension nr. A d-web W(d,n,r) of codimension r in an open set $D\subset X^{nr}$ is a set of d foliations of codimension r which are in general position.
In the notation W(d,n,r) the number d is the number of foliations forming a web, r is the web codimension, and n is the ratio of the dimension nr of the manifold M and the web codimension. Of course, one may define a d-web of codimension r without having r as a divisor of the dimension of the ambient manifold.
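The smallest interesting case is a planar 3-web W(3,2,1) (a standard example, not from the references here): with n = 2, r = 1, d = 3, take three foliations of an open subset of $\mathbb {R} ^{2}$ by curves in general position, such as
$\{x=\mathrm {const} \},\qquad \{y=\mathrm {const} \},\qquad \{x+y=\mathrm {const} \}.$
Any two of the three foliations can be mapped locally onto families of parallel lines by a diffeomorphism; the classical local invariant of a 3-web, its curvature, measures the obstruction to doing this for all three foliations simultaneously.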
See also
• Foliation
• Parallelization (mathematics)
Notes
1. S. Benenti (1997). "Intrinsic characterization of the variable separation in the Hamilton-Jacobi equation". J. Math. Phys. 38 (12): 6578–6602. Bibcode:1997JMP....38.6578B. doi:10.1063/1.532226.
2. Chanu, Claudia; Rastelli, Giovanni (2007). "Eigenvalues of Killing Tensors and Separable Webs on Riemannian and Pseudo-Riemannian Manifolds". SIGMA. 3: 021, 21 pages. arXiv:nlin/0612042. Bibcode:2007SIGMA...3..021C. doi:10.3842/sigma.2007.021. S2CID 3100911.
3. G. Ricci-Curbastro (1896). "Dei sistemi di congruenze ortogonali in una varietà qualunque". Mem. Acc. Lincei. 2 (5): 276–322.
References
• Sharpe, R. W. (1997). Differential Geometry: Cartan's Generalization of Klein's Erlangen Program. New York: Springer. ISBN 0-387-94732-9.
• Dillen, F.J.E.; Verstraelen, L.C.A. (2000). Handbook of Differential Geometry. Vol. 1. Amsterdam: North-Holland. ISBN 0-444-82240-2.
Webbed space
In mathematics, particularly in functional analysis, a webbed space is a topological vector space designed with the goal of allowing the results of the open mapping theorem and the closed graph theorem to hold for a wider class of linear maps whose codomains are webbed spaces. A space is called webbed if there exists a collection of sets, called a web that satisfies certain properties. Webs were first investigated by de Wilde.
Web
Let $X$ be a Hausdorff locally convex topological vector space. A web is a stratified collection of disks satisfying the following absorbency and convergence requirements.[1]
1. Stratum 1: The first stratum must consist of a sequence $D_{1},D_{2},D_{3},\ldots $ of disks in $X$ such that their union $\bigcup _{i\in \mathbb {N} }D_{i}$ absorbs $X.$
2. Stratum 2: For each disk $D_{i}$ in the first stratum, there must exist a sequence $D_{i1},D_{i2},D_{i3},\ldots $ of disks in $X$ such that:
$D_{ij}\subseteq \left({\tfrac {1}{2}}\right)D_{i}\quad {\text{ for every }}j$
and $\bigcup _{j\in \mathbb {N} }D_{ij}$ absorbs $D_{i}.$ The sets $\left(D_{ij}\right)_{i,j\in \mathbb {N} }$ form the second stratum.
3. Stratum 3: To each disk $D_{ij}$ in the second stratum, assign another sequence $D_{ij1},D_{ij2},D_{ij3},\ldots $ of disks in $X$ satisfying analogously defined properties; explicitly, this means that:
$D_{ijk}\subseteq \left({\tfrac {1}{2}}\right)D_{ij}\quad {\text{ for every }}k$
and $\bigcup _{k\in \mathbb {N} }D_{ijk}$ absorbs $D_{ij}.$ The sets $\left(D_{ijk}\right)_{i,j,k\in \mathbb {N} }$ form the third stratum.
Continue this process to define strata $4,5,\ldots .$ That is, use induction to define stratum $n+1$ in terms of stratum $n.$
A strand is a sequence of disks, with the first disk being selected from the first stratum, say $D_{i},$ and the second being selected from the sequence that was associated with $D_{i},$ and so on. We also require that if a sequence of vectors $(x_{n})$ is selected from a strand (with $x_{1}$ belonging to the first disk in the strand, $x_{2}$ belonging to the second, and so on) then the series $\sum _{n=1}^{\infty }x_{n}$ converges.
A Hausdorff locally convex topological vector space on which a web can be defined is called a webbed space.
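For instance, every Banach space $X$ is webbed. A minimal sketch of a web, with $B$ the closed unit ball of $X$ (an illustrative construction, not tied to a particular reference):
$D_{i}=iB,\qquad D_{ij}={\tfrac {1}{2}}D_{i},\qquad D_{ijk}={\tfrac {1}{2}}D_{ij},\qquad \ldots $
Every set in every stratum is a disk; the first stratum absorbs $X$ because $\bigcup _{i\in \mathbb {N} }iB=X$, and each union $\bigcup _{j}D_{ij}={\tfrac {1}{2}}D_{i}$ absorbs $D_{i}$ since $D_{i}\subseteq 2\left({\tfrac {1}{2}}D_{i}\right)$. Along any strand beginning at $D_{i}$, the $n$-th disk is contained in $2^{-(n-1)}iB$, so vectors $x_{n}$ chosen from it satisfy $\lVert x_{n}\rVert \leq i\,2^{-(n-1)}$; the series $\sum _{n}x_{n}$ therefore converges absolutely, hence converges by completeness, verifying the convergence requirement.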
Examples and sufficient conditions
Theorem[2] (de Wilde 1978) — A topological vector space $X$ is a Fréchet space if and only if it is both a webbed space and a Baire space.
All of the following spaces are webbed:
• Fréchet spaces.[2]
• Projective limits and inductive limits of sequences of webbed spaces.
• A sequentially closed vector subspace of a webbed space.[3]
• Countable products of webbed spaces.[3]
• A Hausdorff quotient of a webbed space.[3]
• The image of a webbed space under a sequentially continuous linear map if that image is Hausdorff.[3]
• The bornologification of a webbed space.
• The continuous dual space of a metrizable locally convex space endowed with the strong dual topology is webbed.[2]
• If $X$ is the strict inductive limit of a denumerable family of locally convex metrizable spaces, then the continuous dual space of $X$ with the strong topology is webbed.[4]
• So in particular, the strong duals of locally convex metrizable spaces are webbed.[5]
• If $X$ is a webbed space, then any Hausdorff locally convex topology weaker than this (webbed) topology is also webbed.[3]
Theorems
Closed Graph Theorem[6] — Let $A:X\to Y$ be a linear map between TVSs that is sequentially closed (meaning that its graph is a sequentially closed subset of $X\times Y$). If $Y$ is a webbed space and $X$ is an ultrabornological space (such as a Fréchet space or an inductive limit of Fréchet spaces), then $A$ is continuous.
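A sample consequence, assuming only the facts listed in the examples above: let $X$ be a Fréchet space and let $Y=M_{b}'$ denote the strong dual of a metrizable locally convex space $M$. Then any linear map
$A:X\to M_{b}'\quad {\text{ with sequentially closed graph}}$
is continuous, since $X$ is ultrabornological (being Fréchet) and $M_{b}'$ is webbed. The same reasoning applies to maps from a Fréchet space into the space of distributions $\mathcal {D}'$, which is webbed by the strict-inductive-limit example above.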
Closed Graph Theorem — Any closed linear map from the inductive limit of Baire locally convex spaces into a webbed locally convex space is continuous.
Open Mapping Theorem — Any continuous surjective linear map from a webbed locally convex space onto an inductive limit of Baire locally convex spaces is open.
Open Mapping Theorem[6] — Any continuous surjective linear map from a webbed locally convex space onto an ultrabornological space is open.
Open Mapping Theorem[6] — If the image of a closed linear operator $A:X\to Y$ from a locally convex webbed space $X$ into a Hausdorff locally convex space $Y$ is nonmeager in $Y$, then $A:X\to Y$ is a surjective open map.
If the spaces are not locally convex, then there is a notion of web where the requirement of being a disk is replaced by the requirement of being balanced. For such a notion of web we have the following results:
Closed Graph Theorem — Any closed linear map from the inductive limit of Baire topological vector spaces into a webbed topological vector space is continuous.
See also
• Almost open linear map – Map that satisfies a condition similar to that of being an open map.
• Barrelled space – Type of topological vector space
• Closed graph – Graph of a map closed in the product space
• Closed graph theorem (functional analysis) – Theorems connecting continuity to closure of graphs
• Closed linear operator – Graph of a map closed in the product space
• Discontinuous linear map
• F-space – Topological vector space with a complete translation-invariant metric
• Fréchet space – A locally convex topological vector space that is also a complete metric space
• Kakutani fixed-point theorem – On when a function f: S→Pow(S) on a compact nonempty convex subset S⊂ℝⁿ has a fixed point
• Metrizable topological vector space – A topological vector space whose topology can be defined by a metric
• Open mapping theorem (functional analysis) – Condition for a linear operator to be open
• Ursescu theorem – Generalization of closed graph, open mapping, and uniform boundedness theorem
Citations
1. Narici & Beckenstein 2011, pp. 470–471.
2. Narici & Beckenstein 2011, p. 472.
3. Narici & Beckenstein 2011, p. 481.
4. Narici & Beckenstein 2011, p. 473.
5. Narici & Beckenstein 2011, pp. 459–483.
6. Narici & Beckenstein 2011, pp. 474–476.
References
• De Wilde, Marc (1978). Closed graph theorems and webbed spaces. London: Pitman.
• Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370.
• Kriegl, Andreas; Michor, Peter W. (1997). The Convenient Setting of Global Analysis (PDF). Mathematical Surveys and Monographs. Vol. 53. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-0780-4. OCLC 37141279.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Weber's theorem (Algebraic curves)
In mathematics, Weber's theorem, named after Heinrich Martin Weber, is a result on algebraic curves. It states the following.
Consider two non-singular curves C and C′ having the same genus g > 1. If there is a rational correspondence φ between C and C′, then φ is a birational transformation.
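The hypothesis g > 1 cannot be dropped. A standard counterexample in genus 1 (an illustration, not drawn from the references below): over $\mathbb {C} $, the multiplication-by-2 morphism on an elliptic curve E,
$[2]\colon E\to E,\qquad P\mapsto 2P,$
is a rational correspondence of E with itself of degree 4, with kernel the 2-torsion subgroup $E[2]\cong (\mathbb {Z} /2\mathbb {Z} )^{2}$, so it is 4-to-1 and in particular not birational. The Riemann–Hurwitz formula explains the dichotomy: a non-constant morphism of degree d between smooth curves of the same genus g satisfies $2g-2=d\,(2g-2)+\deg R$ with $\deg R\geq 0$, which forces d = 1 when g > 1 but imposes no restriction on d when g = 1.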
References
• Coolidge, J. L. (1959). A Treatise on Algebraic Plane Curves. New York: Dover. p. 135. ISBN 0-486-60543-4.
• Weber, H. (1873). "Zur Theorie der Transformation algebraischer Functionen". Journal für die reine und angewandte Mathematik (in German). 76: 345–348. doi:10.1515/crll.1873.76.345.
Further reading
• Tsuji, Masatsugu (1941). "Theory of conformal mapping of a multiply connected domain". Japanese Journal of Mathematics :Transactions and Abstracts. 18: 759–775. doi:10.4099/jjm1924.18.0_759.
External links
• Weisstein, Eric W. "Weber's Theorem". MathWorld.
Parabolic cylinder function
In mathematics, the parabolic cylinder functions are special functions defined as solutions to the differential equation
${\frac {d^{2}f}{dz^{2}}}+\left({\tilde {a}}z^{2}+{\tilde {b}}z+{\tilde {c}}\right)f=0.$
(1)
This equation is found when the technique of separation of variables is used on Laplace's equation when expressed in parabolic cylindrical coordinates.
The above equation may be brought into two distinct forms (A) and (B), known as H. F. Weber's equations, by completing the square and rescaling z:[1]
${\frac {d^{2}f}{dz^{2}}}-\left({\tfrac {1}{4}}z^{2}+a\right)f=0$
(A)
and
${\frac {d^{2}f}{dz^{2}}}+\left({\tfrac {1}{4}}z^{2}-a\right)f=0.$
(B)
If
$f(a,z)$
is a solution, then so are
$f(a,-z),f(-a,iz){\text{ and }}f(-a,-iz).$
If
$f(a,z)\,$
is a solution of equation (A), then
$f(-ia,ze^{(1/4)\pi i})$
is a solution of (B), and, by symmetry,
$f(-ia,-ze^{(1/4)\pi i}),f(ia,-ze^{-(1/4)\pi i}){\text{ and }}f(ia,ze^{-(1/4)\pi i})$
are also solutions of (B).
Solutions
There are independent even and odd solutions of equation (A). These are given by (following the notation of Abramowitz and Stegun (1965)):[2]
$y_{1}(a;z)=\exp(-z^{2}/4)\;_{1}F_{1}\left({\tfrac {1}{2}}a+{\tfrac {1}{4}};\;{\tfrac {1}{2}}\;;\;{\frac {z^{2}}{2}}\right)\,\,\,\,\,\,(\mathrm {even} )$
and
$y_{2}(a;z)=z\exp(-z^{2}/4)\;_{1}F_{1}\left({\tfrac {1}{2}}a+{\tfrac {3}{4}};\;{\tfrac {3}{2}}\;;\;{\frac {z^{2}}{2}}\right)\,\,\,\,\,\,(\mathrm {odd} )$
where $\;_{1}F_{1}(a;b;z)=M(a;b;z)$ is the confluent hypergeometric function.
Other pairs of independent solutions may be formed from linear combinations of the above solutions.[2] One such pair is based upon their behavior at infinity:
$U(a,z)={\frac {1}{2^{\xi }{\sqrt {\pi }}}}\left[\cos(\xi \pi )\Gamma (1/2-\xi )\,y_{1}(a,z)-{\sqrt {2}}\sin(\xi \pi )\Gamma (1-\xi )\,y_{2}(a,z)\right]$
$V(a,z)={\frac {1}{2^{\xi }{\sqrt {\pi }}\Gamma [1/2-a]}}\left[\sin(\xi \pi )\Gamma (1/2-\xi )\,y_{1}(a,z)+{\sqrt {2}}\cos(\xi \pi )\Gamma (1-\xi )\,y_{2}(a,z)\right]$
where
$\xi ={\frac {1}{2}}a+{\frac {1}{4}}.$
The function U(a, z) approaches zero for large values of z when |arg(z)| < π/2, while V(a, z) diverges for large positive real z:
$\lim _{z\to \infty }U(a,z)/e^{-z^{2}/4}z^{-a-1/2}=1\,\,\,\,({\text{for}}\,\left|\arg(z)\right|<\pi /2)$
and
$\lim _{z\to \infty }V(a,z)/{\sqrt {\frac {2}{\pi }}}e^{z^{2}/4}z^{a-1/2}=1\,\,\,\,({\text{for}}\,\arg(z)=0).$
For half-integer values of a, these (that is, U and V) can be re-expressed in terms of Hermite polynomials; alternatively, they can also be expressed in terms of Bessel functions.
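For instance, for non-negative integers n one has the standard reduction $D_{n}(z)=2^{-n/2}e^{-z^{2}/4}H_{n}(z/{\sqrt {2}})$ with the physicists' Hermite polynomials (equivalently $U(-n-{\tfrac {1}{2}},z)=D_{n}(z)$). A quick numerical cross-check against SciPy's parabolic cylinder routine (a sketch; the helper name is ours):

```python
import numpy as np
from scipy.special import pbdv, eval_hermite

def D_hermite(n, z):
    """D_n(z) for integer n >= 0 via the Hermite-polynomial reduction
    D_n(z) = 2^(-n/2) * exp(-z^2/4) * H_n(z / sqrt(2))."""
    return 2.0 ** (-n / 2) * np.exp(-z**2 / 4) * eval_hermite(n, z / np.sqrt(2))

# Cross-check against SciPy's D_v(z); pbdv returns (D_v(z), D_v'(z)).
for n in range(5):
    for z in (0.3, 1.0, 2.5):
        d_scipy, _ = pbdv(n, z)
        assert abs(d_scipy - D_hermite(n, z)) < 1e-10
```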
The functions U and V can also be related to the functions Dp(x) (a notation dating back to Whittaker (1902))[3] that are themselves sometimes called parabolic cylinder functions:[2]
${\begin{aligned}U(a,x)&=D_{-a-{\tfrac {1}{2}}}(x),\\V(a,x)&={\frac {\Gamma ({\tfrac {1}{2}}+a)}{\pi }}[\sin(\pi a)D_{-a-{\tfrac {1}{2}}}(x)+D_{-a-{\tfrac {1}{2}}}(-x)].\end{aligned}}$
The function Da(z) was introduced by Whittaker and Watson as a solution of eq. (1) with $ {\tilde {a}}=-{\frac {1}{4}},{\tilde {b}}=0,{\tilde {c}}=a+{\frac {1}{2}}$ bounded at $+\infty $.[4] It can be expressed in terms of confluent hypergeometric functions as
$D_{a}(z)={\frac {1}{\sqrt {\pi }}}{2^{a/2}e^{-{\frac {z^{2}}{4}}}\left(\cos \left({\frac {\pi a}{2}}\right)\Gamma \left({\frac {a+1}{2}}\right)\,_{1}F_{1}\left(-{\frac {a}{2}};{\frac {1}{2}};{\frac {z^{2}}{2}}\right)+{\sqrt {2}}z\sin \left({\frac {\pi a}{2}}\right)\Gamma \left({\frac {a}{2}}+1\right)\,_{1}F_{1}\left({\frac {1}{2}}-{\frac {a}{2}};{\frac {3}{2}};{\frac {z^{2}}{2}}\right)\right)}.$
Power series for this function have been obtained by Abadir (1993).[5]
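The hypergeometric representation above can be checked numerically against SciPy's `pbdv`, which computes $D_{v}(z)$ and its derivative (a sketch; the function name `D` is ours):

```python
import numpy as np
from scipy.special import pbdv, hyp1f1, gamma

def D(a, z):
    """Whittaker's parabolic cylinder function D_a(z) via the
    confluent hypergeometric representation quoted above."""
    even = np.cos(np.pi * a / 2) * gamma((a + 1) / 2) * hyp1f1(-a / 2, 0.5, z**2 / 2)
    odd = (np.sqrt(2) * z * np.sin(np.pi * a / 2) * gamma(a / 2 + 1)
           * hyp1f1(0.5 - a / 2, 1.5, z**2 / 2))
    return 2.0 ** (a / 2) * np.exp(-z**2 / 4) * (even + odd) / np.sqrt(np.pi)

# Compare with SciPy's implementation at a few orders and arguments.
for a in (0.0, 0.5, 1.0, 2.0):
    for z in (0.4, 1.3):
        assert abs(D(a, z) - pbdv(a, z)[0]) < 1e-8
```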
References
1. Weber, H.F. (1869), "Ueber die Integration der partiellen Differentialgleichung $\partial ^{2}u/\partial x^{2}+\partial ^{2}u/\partial y^{2}+k^{2}u=0$", Math. Ann., vol. 1, pp. 1–36
2. Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 19". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 686. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
3. Whittaker, E.T. (1902) "On the functions associated with the parabolic cylinder in harmonic analysis" Proc. London Math. Soc., 35, 417–427.
4. Whittaker, E. T. and Watson, G. N. (1990) "The Parabolic Cylinder Function." §16.5 in A Course in Modern Analysis, 4th ed. Cambridge, England: Cambridge University Press, pp. 347-348.
5. Abadir, K. M. (1993) "Expansions for some confluent hypergeometric functions." Journal of Physics A, 26, 4059-4066.
Weber modular function
In mathematics, the Weber modular functions are a family of three functions f, f1, and f2,[note 1] studied by Heinrich Martin Weber.
Definition
Let $q=e^{2\pi i\tau }$ where τ is an element of the upper half-plane. Then the Weber functions are
${\begin{aligned}{\mathfrak {f}}(\tau )&=q^{-{\frac {1}{48}}}\prod _{n>0}(1+q^{n-1/2})={\frac {\eta ^{2}(\tau )}{\eta {\big (}{\tfrac {\tau }{2}}{\big )}\eta (2\tau )}}=e^{-{\frac {\pi i}{24}}}{\frac {\eta {\big (}{\frac {\tau +1}{2}}{\big )}}{\eta (\tau )}},\\{\mathfrak {f}}_{1}(\tau )&=q^{-{\frac {1}{48}}}\prod _{n>0}(1-q^{n-1/2})={\frac {\eta {\big (}{\tfrac {\tau }{2}}{\big )}}{\eta (\tau )}},\\{\mathfrak {f}}_{2}(\tau )&={\sqrt {2}}\,q^{\frac {1}{24}}\prod _{n>0}(1+q^{n})={\frac {{\sqrt {2}}\,\eta (2\tau )}{\eta (\tau )}}.\end{aligned}}$
These are also the definitions in Duke's paper "Continued Fractions and Modular Functions".[note 2] The function $\eta (\tau )$ is the Dedekind eta function and $(e^{2\pi i\tau })^{\alpha }$ should be interpreted as $e^{2\pi i\tau \alpha }$. The descriptions as $\eta $ quotients immediately imply
${\mathfrak {f}}(\tau )\,{\mathfrak {f}}_{1}(\tau )\,{\mathfrak {f}}_{2}(\tau )={\sqrt {2}}.$
The transformation τ → –1/τ fixes f and exchanges f1 and f2. So the 3-dimensional complex vector space with basis f, f1 and f2 is acted on by the group SL2(Z).
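The eta-quotient definitions are easy to verify numerically. A minimal sketch using a truncated q-product for η (the truncation length 60 is an assumption that is ample for τ well inside the upper half-plane):

```python
import cmath
import math

def eta(tau, terms=60):
    """Dedekind eta via its q-product: eta(tau) = e^(pi*i*tau/12) * prod_{n>=1} (1 - q^n),
    with q = e^(2*pi*i*tau). Writing the prefactor as an exponential avoids branch issues."""
    q = cmath.exp(2j * math.pi * tau)
    prod = 1.0 + 0j
    for n in range(1, terms + 1):
        prod *= 1 - q**n
    return cmath.exp(1j * math.pi * tau / 12) * prod

def weber(tau):
    """The three Weber functions as the eta quotients given above."""
    e = eta(tau)
    f = cmath.exp(-1j * math.pi / 24) * eta((tau + 1) / 2) / e
    f1 = eta(tau / 2) / e
    f2 = math.sqrt(2) * eta(2 * tau) / e
    return f, f1, f2

f, f1, f2 = weber(2j)                              # tau = 2i
assert abs(f * f1 * f2 - math.sqrt(2)) < 1e-12     # product identity above
```

At τ = i, which is fixed by τ → –1/τ, one finds f(i) = 2^(1/4) and f1(i) = f2(i) = 2^(1/8).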
Alternative infinite product
Alternatively, let $q=e^{\pi i\tau }$ be the nome,
${\begin{aligned}{\mathfrak {f}}(q)&=q^{-{\frac {1}{24}}}\prod _{n>0}(1+q^{2n-1})={\frac {\eta ^{2}(\tau )}{\eta {\big (}{\tfrac {\tau }{2}}{\big )}\eta (2\tau )}},\\{\mathfrak {f}}_{1}(q)&=q^{-{\frac {1}{24}}}\prod _{n>0}(1-q^{2n-1})={\frac {\eta {\big (}{\tfrac {\tau }{2}}{\big )}}{\eta (\tau )}},\\{\mathfrak {f}}_{2}(q)&={\sqrt {2}}\,q^{\frac {1}{12}}\prod _{n>0}(1+q^{2n})={\frac {{\sqrt {2}}\,\eta (2\tau )}{\eta (\tau )}}.\end{aligned}}$
The form of the infinite product has changed slightly. But since the eta quotients remain the same, ${\mathfrak {f}}_{i}(\tau )={\mathfrak {f}}_{i}(q)$ as long as the second uses the nome $q=e^{\pi i\tau }$. The utility of the second form is to show connections, and keep notation consistent, with the Ramanujan G- and g-functions and the Jacobi theta functions, both of which conventionally use the nome.
Relation to the Ramanujan G and g functions
Still employing the nome $q=e^{\pi i\tau }$, define the Ramanujan G- and g-functions as
${\begin{aligned}2^{1/4}G_{n}&=q^{-{\frac {1}{24}}}\prod _{n>0}(1+q^{2n-1})={\frac {\eta ^{2}(\tau )}{\eta {\big (}{\tfrac {\tau }{2}}{\big )}\eta (2\tau )}},\\2^{1/4}g_{n}&=q^{-{\frac {1}{24}}}\prod _{n>0}(1-q^{2n-1})={\frac {\eta {\big (}{\tfrac {\tau }{2}}{\big )}}{\eta (\tau )}}.\end{aligned}}$
The eta quotients make their connection to the first two Weber functions immediately apparent. In the nome, assume $\tau ={\sqrt {-n}}.$ Then,
${\begin{aligned}2^{1/4}G_{n}&={\mathfrak {f}}(q)={\mathfrak {f}}(\tau ),\\2^{1/4}g_{n}&={\mathfrak {f}}_{1}(q)={\mathfrak {f}}_{1}(\tau ).\end{aligned}}$
Ramanujan found many relations between $G_{n}$ and $g_{n}$, which imply similar relations between ${\mathfrak {f}}(q)$ and ${\mathfrak {f}}_{1}(q)$. For example, his identity,
$(G_{n}^{8}-g_{n}^{8})(G_{n}\,g_{n})^{8}={\tfrac {1}{4}},$
leads to
${\big [}{\mathfrak {f}}^{8}(q)-{\mathfrak {f}}_{1}^{8}(q){\big ]}{\big [}{\mathfrak {f}}(q)\,{\mathfrak {f}}_{1}(q){\big ]}^{8}={\big [}{\sqrt {2}}{\big ]}^{8}.$
Ramanujan tabulated $G_{n}$ for many odd n, and $g_{n}$ for many even n. This automatically gives many explicit evaluations of ${\mathfrak {f}}(q)$ and ${\mathfrak {f}}_{1}(q)$. For example, using $\tau ={\sqrt {-5}},\,{\sqrt {-13}},\,{\sqrt {-37}}$, which are some of the square-free discriminants with class number 2,
${\begin{aligned}G_{5}&=\left({\frac {1+{\sqrt {5}}}{2}}\right)^{1/4},\\G_{13}&=\left({\frac {3+{\sqrt {13}}}{2}}\right)^{1/4},\\G_{37}&=\left(6+{\sqrt {37}}\right)^{1/4},\end{aligned}}$
and one can easily get ${\mathfrak {f}}(\tau )=2^{1/4}G_{n}$ from these, as well as the more complicated examples found in Ramanujan's Notebooks.
Relation to Jacobi theta functions
The argument of the classical Jacobi theta functions is traditionally the nome $q=e^{\pi i\tau },$
${\begin{aligned}\vartheta _{10}(0;\tau )&=\theta _{2}(q)=\sum _{n=-\infty }^{\infty }q^{(n+1/2)^{2}}={\frac {2\eta ^{2}(2\tau )}{\eta (\tau )}},\\[2pt]\vartheta _{00}(0;\tau )&=\theta _{3}(q)=\sum _{n=-\infty }^{\infty }q^{n^{2}}\;=\;{\frac {\eta ^{5}(\tau )}{\eta ^{2}\left({\frac {\tau }{2}}\right)\eta ^{2}(2\tau )}}={\frac {\eta ^{2}\left({\frac {\tau +1}{2}}\right)}{\eta (\tau +1)}},\\[3pt]\vartheta _{01}(0;\tau )&=\theta _{4}(q)=\sum _{n=-\infty }^{\infty }(-1)^{n}q^{n^{2}}={\frac {\eta ^{2}\left({\frac {\tau }{2}}\right)}{\eta (\tau )}}.\end{aligned}}$
Dividing them by $\eta (\tau )$, and also noting that $\eta (\tau )=e^{\frac {-\pi i}{\,12}}\eta (\tau +1)$, then they are just squares of the Weber functions ${\mathfrak {f}}_{i}(q)$
${\begin{aligned}{\frac {\theta _{2}(q)}{\eta (\tau )}}&={\mathfrak {f}}_{2}(q)^{2},\\[4pt]{\frac {\theta _{4}(q)}{\eta (\tau )}}&={\mathfrak {f}}_{1}(q)^{2},\\[4pt]{\frac {\theta _{3}(q)}{\eta (\tau )}}&={\mathfrak {f}}(q)^{2},\end{aligned}}$
with even-subscript theta functions purposely listed first. Using the well-known Jacobi identity with even subscripts on the LHS,
$\theta _{2}(q)^{4}+\theta _{4}(q)^{4}=\theta _{3}(q)^{4};$
therefore,
${\mathfrak {f}}_{2}(q)^{8}+{\mathfrak {f}}_{1}(q)^{8}={\mathfrak {f}}(q)^{8}.$
Relation to j-function
The three roots of the cubic equation
$j(\tau )={\frac {(x-16)^{3}}{x}}$
where j(τ) is the j-function are given by $x_{i}={\mathfrak {f}}(\tau )^{24},-{\mathfrak {f}}_{1}(\tau )^{24},-{\mathfrak {f}}_{2}(\tau )^{24}$. Also, since,
$j(\tau )=32{\frac {{\Big (}\theta _{2}(q)^{8}+\theta _{3}(q)^{8}+\theta _{4}(q)^{8}{\Big )}^{3}}{{\Big (}\theta _{2}(q)\,\theta _{3}(q)\,\theta _{4}(q){\Big )}^{8}}}$
and using the definitions of the Weber functions in terms of the Jacobi theta functions, plus the fact that ${\mathfrak {f}}_{2}(q)^{2}\,{\mathfrak {f}}_{1}(q)^{2}\,{\mathfrak {f}}(q)^{2}={\frac {\theta _{2}(q)}{\eta (\tau )}}{\frac {\theta _{4}(q)}{\eta (\tau )}}{\frac {\theta _{3}(q)}{\eta (\tau )}}=2$, then
$j(\tau )=\left({\frac {{\mathfrak {f}}(\tau )^{16}+{\mathfrak {f}}_{1}(\tau )^{16}+{\mathfrak {f}}_{2}(\tau )^{16}}{2}}\right)^{3}=\left({\frac {{\mathfrak {f}}(q)^{16}+{\mathfrak {f}}_{1}(q)^{16}+{\mathfrak {f}}_{2}(q)^{16}}{2}}\right)^{3}$
since ${\mathfrak {f}}_{i}(\tau )={\mathfrak {f}}_{i}(q)$ and have the same formulas in terms of the Dedekind eta function $\eta (\tau )$.
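This relation can be sanity-checked at τ = i, where j(i) = 1728, using the alternative infinite products in the nome (a sketch; the truncation length 200 is an arbitrary generous choice):

```python
import cmath
import math

def weber_from_products(tau, terms=200):
    """Weber functions from their infinite products in the nome q = e^(pi*i*tau).
    The fractional powers of q use the principal branch, which is fine for
    purely imaginary tau (q is then a positive real)."""
    q = cmath.exp(1j * math.pi * tau)
    f = f1 = f2 = 1.0 + 0j
    for n in range(1, terms + 1):
        f *= 1 + q ** (2 * n - 1)
        f1 *= 1 - q ** (2 * n - 1)
        f2 *= 1 + q ** (2 * n)
    f *= q ** (-1 / 24)
    f1 *= q ** (-1 / 24)
    f2 *= math.sqrt(2) * q ** (1 / 12)
    return f, f1, f2

f, f1, f2 = weber_from_products(1j)                # tau = i
j = ((f**16 + f1**16 + f2**16) / 2) ** 3
assert abs(j - 1728) < 1e-6                        # j(i) = 1728
```

The same values also satisfy the product identity f·f1·f2 = √2 and the eighth-power identity f1⁸ + f2⁸ = f⁸ from the theta-function section.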
See also
• Ramanujan–Sato series, level 4
References
• Duke, William (2005), Continued Fractions and Modular Functions (PDF), Bull. Amer. Math. Soc. 42
• Weber, Heinrich Martin (1981) [1898], Lehrbuch der Algebra (in German), vol. 3 (3rd ed.), New York: AMS Chelsea Publishing, ISBN 978-0-8218-2971-4
• Yui, Noriko; Zagier, Don (1997), "On the singular values of Weber modular functions", Mathematics of Computation, 66 (220): 1645–1662, doi:10.1090/S0025-5718-97-00854-5, MR 1415803
Notes
1. f, f1 and f2 are not themselves modular functions in the strict sense, but every modular function is a rational function in f, f1 and f2. Some authors use a non-equivalent definition of "modular functions".
2. https://www.math.ucla.edu/~wdduke/preprints/bams4.pdf Continued Fractions and Modular Functions, W. Duke, pp 22-23
Webgraph
The webgraph describes the directed links between pages of the World Wide Web. A graph, in general, consists of a set of vertices, some pairs of which are connected by edges; in a directed graph, the edges are directed lines or arcs. The webgraph is a directed graph whose vertices correspond to the pages of the WWW, and a directed edge connects page X to page Y if page X contains a hyperlink referring to page Y.
Properties
• The degree distribution of the webgraph strongly differs from the degree distribution of the classical random graph model, the Erdős–Rényi model:[1] in the Erdős–Rényi model, there are very few large-degree nodes relative to the webgraph's degree distribution. The precise distribution is unclear,[2] however: it is relatively well described by a lognormal distribution, and also by the power law predicted by the Barabási–Albert model.[3][4]
• The webgraph is an example of a scale-free network.
Applications
The webgraph is used for:
• computing the PageRank[5] of the WWW-pages;
• computing the personalized PageRank;[6]
• detecting webpages of similar topics, through graph-theoretical properties only, like co-citation;[7]
• and identifying hubs and authorities in the web for HITS algorithm.
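The first application in the list can be sketched with a toy power-iteration PageRank on a four-page webgraph (a minimal illustration, not the production algorithm; the damping factor 0.85 is the conventional choice):

```python
def pagerank(links, d=0.85, iters=100):
    """Power-iteration PageRank. `links` maps each page to the pages it links to."""
    pages = sorted(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / n for p in pages}
        for p in pages:
            out = links[p]
            if out:                          # distribute rank along out-edges
                share = d * rank[p] / len(out)
                for q in out:
                    new[q] += share
            else:                            # dangling page: spread uniformly
                for q in pages:
                    new[q] += d * rank[p] / n
        rank = new
    return rank

# Toy webgraph: edge X -> Y means page X hyperlinks to page Y.
graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(graph)
assert abs(sum(ranks.values()) - 1.0) < 1e-9   # ranks form a probability distribution
```

Page C, which collects in-links from A, B and D, ends up with the highest rank, while D, with no in-links, gets the lowest.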
References
1. P. Erdős, A. Rényi, Publ. Math. Inst. Hung. Acad. Sci. 5 (1960)
2. Meusel, R.; Vigna, S.; Lehmberg, O.; Bizer, C. (2015). "The Graph Structure in the Web - Analyzed on Different Aggregation Levels" (PDF). Journal of Web Science. 1 (1): 33–47. doi:10.1561/106.00000003. hdl:2434/372411.
3. Clauset, A.; Shalizi, C. R.; Newman, M. E. J. (2009). "Power-law distributions in empirical data". SIAM Rev. 51 (4): 661–703. arXiv:0706.1062. Bibcode:2009SIAMR..51..661C. doi:10.1137/070710111. S2CID 9155618.
4. Barabási, Albert-László; Albert, Réka (October 1999). "Emergence of scaling in random networks" (PDF). Science. 286 (5439): 509–512. arXiv:cond-mat/9910332. Bibcode:1999Sci...286..509B. doi:10.1126/science.286.5439.509. PMID 10521342. S2CID 524106.
5. S. Brin, L. Page, Computer Networks and ISDN Systems 30, 107 (1998)
6. Glen Jeh and Jennifer Widom. 2003. Scaling personalized web search. In Proceedings of the 12th international conference on World Wide Web (WWW '03). ACM, New York, NY, USA, 271–279. doi:10.1145/775152.775191
7. Kumar, Ravi; Raghavan, Prabhakar; Rajagopalan, Sridhar; Tomkins, Andrew (1999). "Trawling the Web for emerging cyber-communities". Computer Networks. 31 (11–16): 1481–1493. CiteSeerX 10.1.1.89.4025. doi:10.1016/S1389-1286(99)00040-7. S2CID 7069190.
External links
• Webgraphs in Yahoo Sandbox
• Webgraphs at University of Milano – Laboratory for Web Algorithmics
• Webgraphs at Stanford – SNAP
• Webgraph at the Erdős Webgraph Server
• Web Data Commons - Hyperlink Graph
Webster Wells
Webster Wells (1851–1916) was an American mathematician known primarily for his authorship of mathematical textbooks.
Born: September 4, 1851, Boston, Massachusetts, U.S.
Died: May 23, 1916 (aged 64), Arlington, Massachusetts, U.S.
Nationality: American
Alma mater: Massachusetts Institute of Technology
Fields: Mathematics
Institutions: Massachusetts Institute of Technology
Early life and career
Webster Wells was born at Roxbury, Massachusetts on September 4, 1851.[1] His parents, Thomas Foster Wells (1822–1903) and Sarah Morrill Wells (1828–1897), initially named him Thomas Wells, but presumably after the death of the statesman Daniel Webster in 1852, renamed him Daniel Webster Wells,[2] and from at least 1860, he was known as Webster Wells.[3] Samuel Adams, the Boston brewer and patriot, was a great-great-grandfather, and the poets Thomas Wells (1790–1861) and Anna Maria (Foster) Wells (1795–1868) were grandparents. The architect Joseph Morrill Wells was his brother.
Beginning in 1863, Webster Wells studied at the West Newton English and Classical School (aka The Allen School), West Newton, Massachusetts, and then attended the Massachusetts Institute of Technology from which he graduated in 1873 with a Bachelor of Science degree. Wells taught mathematics at MIT, where he was successively instructor (1873–1880), assistant professor (1883), associate professor (1885), and full professor (1893–1911).[4]
Personal life
Webster Wells married Emily Walker Langdon at Boston on June 21, 1876.[5]
Wells died at Arlington, Massachusetts, on May 23, 1916, from the complications of Huntington's Chorea.[6] He was buried in Oak Grove Cemetery, Medford, Massachusetts.
Textbooks
Wells' textbooks were used in many schools and colleges in the United States. Among the many titles were:
• Webster Wells. Elementary Treatise on Logarithms (Boston, MA: Robert S. Davies Co., 1878).
• Webster Wells. University Algebra (Boston MA: Leach, Shewell & Sanborn, 1880), one of "Greenleaf's Mathematical Series."
• Webster Wells. Practical Textbook on Plane and Spherical Trigonometry (Boston, MA: Leach, Shewell & Sanborn, 1883).
• Webster Wells. A Complete Course in Algebra for Academies and High Schools (Boston, MA: Leach, Shewell & Sanborn, 1885).
• Webster Wells. The Elements of Geometry (Boston, MA: Leach, Shewell & Sanborn, 1886).
• Webster Wells. Plane and Solid Geometry (1887).
• Webster Wells. The Essentials of Plane and Spherical Trigonometry (Boston, MA: Leach, Shewell & Sanborn, 1887).
• Webster Wells. Plane Trigonometry (Boston, MA: Leach, Shewell & Sanborn, 1887).
• Webster Wells. Four-place Logarithmic Tables (1888).
• Webster Wells. A Short Course in Higher Algebra (Boston, MA: Leach, Shewell & Sanborn, 1889), one of "Wells' Mathematical Series."
• Webster Wells. College Algebra (Boston, MA: D.C. Heath & Co., 1890).
• Webster Wells. An Academic Arithmetic for Academies, High and Commercial Schools (1893, 1899).
• Webster Wells. Plane Trigonometry (Boston, MA: Leach, Shewell & Sanborn, 1893), one of "Wells' Mathematical Series."
• Webster Wells. Revised Plane and Solid Geometry (1894).
• Webster Wells. New Plane and Spherical Trigonometry (1896).
• Webster Wells. Essentials of Algebra for Secondary Schools (Norwood MA: Norwood Press, 1897).
• Webster Wells. New Higher Algebra (Boston, MA: D.C. Heath & Co., 1897, 1899).
• Webster Wells. The Essentials of Geometry (Solid) (Boston, MA: D.C. Heath & Co., 1899).
• Webster Wells. Complete Trigonometry (Boston, MA: D.C. Heath & Co., 1901, copyright 1900).
• Webster Wells. Factoring (Boston, MA: D.C. Heath & Co., 1902), one of “Heath’s Mathematical Monographs.”
• Claribel Gerrish and Webster Wells. The Beginner's Algebra (Boston, MA: D.C. Heath & Co., 1902).
• Webster Wells. Advanced Course in Algebra (Boston, MA: D.C. Heath & Co., 1904).
• Webster Wells. Algebra for Secondary Schools (Boston, MA: D.C. Heath & Co., 1906, 1909).
• Webster Wells. New Plane Geometry (1908).
• Webster Wells. New Plane and Solid Geometry (Boston, MA: D.C. Heath & Co., 1909).
• Webster Wells and Walter W. Hart. First Year Algebra (Boston, MA: D.C. Heath & Co. 1912).
• Webster Wells. New High School Algebra (1912).
• Webster Wells. Second Course in Algebra (1913).
• Webster Wells and Walter W. Hart. Plane Geometry (Boston, MA: D.C. Heath & Co., 1915), one of "Wells and Hart's Mathematics Series."
• Webster Wells. Plane and Solid Geometry (1916).
Posthumous editions
• Webster Wells. Modern Algebra: Second Course (1920).
• Webster Wells. Modern First Year Algebra (1923).
• Webster Wells. Modern Algebra: Second Course (1925).
• Webster Wells and Walter W. Hart. Modern First Year Algebra, Revised (Boston, MA: D.C. Heath & Co., copyright 1928).
See also
• Joseph Morrill Wells, architect; Webster Wells’ brother
• Anna Maria Wells, poet; Webster Wells’ grandmother
• Frederick A. Wells, politician, Webster Wells' second cousin
• John Witt Randall, art collector; Webster Wells’ first cousin once removed
References
1. "Massachusetts, Town Clerk, Vital and Town Records, 1626-2001," database with images, FamilySearch (https://www.familysearch.org/ark:/61903/1:1:Q29L-812M : 10 November 2020), Thomas Wells, 4 Sep 1851; citing Birth, Roxbury, Boston, Suffolk, Massachusetts, United States, Massachusetts Secretary of the Commonwealth, Boston; FHL microfilm 004240782.
2. "Massachusetts State Census, 1855," database with images, FamilySearch (https://familysearch.org/ark:/61903/1:1:MQ4W-6PH : 11 March 2018), Daniel W Wells in household of Thomas F Wells, Ward 04, Roxbury, Norfolk, Massachusetts, United States; State Archives, Boston; FHL microfilm 953,955.
3. "United States Census, 1860", database with images, FamilySearch (https://familysearch.org/ark:/61903/1:1:MZHJ-WXR : 18 February 2021), Webster Wells in entry for T F Wells, 1860.
4. The National Cyclopaedia of American Biography. Vol. XVII. James T. White & Company. 1920. p. 124. Retrieved January 1, 2021 – via Google Books.
5. "Massachusetts, Town Clerk, Vital and Town Records, 1626-2001," database with images, FamilySearch (https://www.familysearch.org/ark:/61903/1:1:FH8C-CYS : 10 November 2020), Webster Wells, 21 Jun 1876; citing Marriage, Boston, Suffolk, Massachusetts, United States, Massachusetts Secretary of the Commonwealth, Boston; FHL microfilm 004276926.
6. "Death of Prof Wells". The Boston Globe. May 24, 1916. p. 10. Retrieved January 1, 2021 – via Newspapers.com.
Bibliography
• Gilman, D. C.; Peck, H. T.; Colby, F. M., eds. (1905). "Wells, Webster" . New International Encyclopedia (1st ed.). New York: Dodd, Mead.
• Webster Wells (2005) [1909]. "New plane geometry". University of Michigan Library. Retrieved March 4, 2011.
External links
• Works by or about Webster Wells at Internet Archive
Authority control
International
• FAST
• ISNI
• VIAF
• WorldCat
National
• Israel
• United States
• Czech Republic
• Netherlands
Academics
• zbMATH
Other
• IdRef
Webster equation
The Webster equation, proposed by Thomas J. Webster, is a mathematical model used to predict the adsorption of proteins onto biomaterial surfaces.[1]
It takes into account the chemical and physical properties of the biomaterial surface, as well as the characteristics of the protein being adsorbed.[2]
Equation
The equation is
$E_{s}(r_{eff})=E_{0,s}+\rho r_{eff}$
where
$r_{eff}={\frac {S_{unit}}{S_{measured}}}\cdot {\sqrt {\frac {\sum \limits _{i=1}^{N}(Z_{i,{\text{filtered}}}-Z_{\text{ave,filtered}})^{2}}{N}}}$
Es = implant surface energy needed to adsorb the desired proteins
reff = effective implant surface roughness
E0,s = starting implant surface energy before nanoscale surface modification
ρ = empirical factor
S = surface area
x, y, z = directions
N = number of measurements[3]
Application
The Webster equation predicts the optimal nanofeatures an implant should have to promote tissue growth, reduce infection, limit inflammation, or control other biological functions. It can be used to design implants that are customized to meet specific biological requirements.[4][5]
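As a purely illustrative sketch of how the quantities combine: the RMS deviation of filtered height measurements is scaled by the area ratio to give reff, which then enters the linear surface-energy model. All numbers, units, and helper names below are hypothetical, not taken from the cited papers:

```python
import math

def effective_roughness(heights, s_unit, s_measured):
    """r_eff = (S_unit / S_measured) * RMS deviation of the filtered height data."""
    n = len(heights)
    mean = sum(heights) / n
    rms = math.sqrt(sum((z - mean) ** 2 for z in heights) / n)
    return (s_unit / s_measured) * rms

def surface_energy(e0, rho, r_eff):
    """Webster equation: E_s(r_eff) = E_0,s + rho * r_eff."""
    return e0 + rho * r_eff

# Hypothetical filtered height samples (nm) and surface areas (nm^2).
heights = [2.0, 3.5, 1.0, 4.0, 2.5]
r_eff = effective_roughness(heights, s_unit=1.0, s_measured=1.2)
e_s = surface_energy(e0=40.0, rho=2.5, r_eff=r_eff)   # energies illustrative only
```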
References
1. "Webster named AIMBE fellow". news.brown.edu.
2. Khang, Dongwoo; Kim, Sung Yeol; Liu-Snyder, Peishan; Palmore, G. Tayhas R.; Durbin, Stephen M.; Webster, Thomas J. (1 November 2007). "Enhanced fibronectin adsorption on carbon nanotube/poly(carbonate) urethane: Independent role of surface nano-roughness and associated surface energy". Biomaterials. 28 (32): 4756–4768. doi:10.1016/j.biomaterials.2007.07.018. PMID 17706277.
3. "Webster's Equation and the Vocal Tract". ccrma.stanford.edu. Retrieved 2023-05-09.
4. van den Doel, K.; Ascher, U. M. (2008). "Real-Time Numerical Solution of Webster's Equation on A Nonuniform Grid". IEEE Transactions on Audio, Speech, and Language Processing. 16 (6): 1163–1172. doi:10.1109/TASL.2008.2001107. ISSN 1558-7924. S2CID 12430644.
5. Bednarik, M.; Cervenka, M. (2020-03-17). "A wide class of analytical solutions of the Webster equation". Journal of Sound and Vibration. 469: 115169. doi:10.1016/j.jsv.2019.115169. ISSN 0022-460X. S2CID 213681457.
Joseph Wedderburn
Joseph Henry Maclagan Wedderburn FRSE FRS (2 February 1882 – 9 October 1948) was a Scottish mathematician, who taught at Princeton University for most of his career. A significant algebraist, he proved that a finite division algebra is a field, and part of the Artin–Wedderburn theorem on simple algebras. He also worked on group theory and matrix algebra.[2][3]
Joseph Henry Maclagan Wedderburn (1882–1948)
Born: 2 February 1882, Forfar, Angus, Scotland
Died: 9 October 1948 (aged 66), Princeton, New Jersey, US
Nationality: British
Citizenship: American
Alma mater: University of Edinburgh
Known for: Wedderburn–Etherington number, Artin–Wedderburn theorem
Awards: MacDougall-Brisbane Gold Medal, Fellow of the Royal Society[1]
Fields: Mathematics
Institutions: Princeton University
Doctoral advisor: George Chrystal
Doctoral students: Merrill Flood, Nathan Jacobson, Ernst Snapper
His younger brother was the lawyer Ernest Wedderburn.
Life
Joseph Wedderburn was the tenth of fourteen children of Alexander Wedderburn of Pearsie, a physician, and Anne Ogilvie. He was educated at Forfar Academy then in 1895 his parents sent Joseph and his younger brother Ernest to live in Edinburgh with their paternal uncle, J R Maclagan Wedderburn, allowing them to attend George Watson's College. This house was at 3 Glencairn Crescent in the West End of the city.[4]
In 1898 Joseph entered the University of Edinburgh. In 1903, he published his first three papers, worked as an assistant in the Physical Laboratory of the University, obtained an MA degree with First Class Honours in mathematics, and was elected a Fellow of the Royal Society of Edinburgh, upon the proposal of George Chrystal, James Gordon MacGregor, Cargill Gilston Knott and William Peddie. Aged only 21, he remains one of the youngest Fellows ever.[5]
He then studied briefly at the University of Leipzig and the University of Berlin, where he met the algebraists Frobenius and Schur. A Carnegie Scholarship allowed him to spend the 1904–1905 academic year at the University of Chicago where he worked with Oswald Veblen, E. H. Moore, and most importantly, Leonard Dickson, who was to become the most important American algebraist of his day.
Returning to Scotland in 1905, Wedderburn worked for four years at the University of Edinburgh as an assistant to George Chrystal, who supervised his D.Sc., awarded in 1908 for a thesis titled On Hypercomplex Numbers.[6] From 1906 to 1908, Wedderburn edited the Proceedings of the Edinburgh Mathematical Society. In 1909, he returned to the United States to become a Preceptor in Mathematics at Princeton University; his colleagues included Luther P. Eisenhart, Oswald Veblen, Gilbert Ames Bliss, and George Birkhoff.
Upon the outbreak of the First World War, Wedderburn enlisted in the British Army as a private. He was the first person at Princeton to volunteer for that war, and had the longest war service of anyone on the staff. He served with the Seaforth Highlanders in France, as Lieutenant (1914), then as Captain of the 10th Battalion (1915–18). While a Captain in the Fourth Field Survey Battalion of the Royal Engineers in France, he devised sound-ranging equipment to locate enemy artillery.
He returned to Princeton after the war, becoming Associate Professor in 1921 and editing the Annals of Mathematics until 1928. While at Princeton, he supervised only three PhDs, one of them being Nathan Jacobson. In his later years, Wedderburn became an increasingly solitary figure and may even have suffered from depression. His isolation after his 1945 early retirement was such that his death from a heart attack was not noticed for several days. His Nachlass was destroyed, as per his instructions.
Wedderburn received the MacDougall-Brisbane Gold Medal and Prize from the Royal Society of Edinburgh in 1921, and was elected to the Royal Society of London in 1933.[1]
Work
In all, Wedderburn published about 40 books and papers, making important advances in the theory of rings, algebras and matrix theory.
In 1905, Wedderburn published a paper that included three claimed proofs of a theorem stating that a noncommutative finite division ring could not exist. The proofs all made clever use of the interplay between the additive group of a finite division algebra A, and the multiplicative group A* = A-{0}. Parshall (1983) notes that the first of these three proofs had a gap not noticed at the time. Meanwhile, Wedderburn's Chicago colleague Dickson also found a proof of this result but, believing Wedderburn's first proof to be correct, Dickson acknowledged Wedderburn's priority. But Dickson also noted that Wedderburn constructed his second and third proofs only after having seen Dickson's proof. Parshall concludes that Dickson should be credited with the first correct proof.
This theorem yields insights into the structure of finite projective geometries. In their paper on "Non-Desarguesian and non-Pascalian geometries" in the 1907 Transactions of the American Mathematical Society, Wedderburn and Veblen showed that in these geometries, Pascal's theorem is a consequence of Desargues' theorem. They also constructed finite projective geometries which are neither "Desarguesian" nor "Pascalian" (the terminology is Hilbert's).
Wedderburn's best-known paper was his sole-authored "On hypercomplex numbers," published in the 1907 Proceedings of the London Mathematical Society, and for which he was awarded the D.Sc. the following year. This paper gives a complete classification of simple and semisimple algebras. He then showed that every finite-dimensional semisimple algebra can be constructed as a direct sum of simple algebras and that every simple algebra is isomorphic to a matrix algebra for some division ring. The Artin–Wedderburn theorem generalises these results to algebras with the descending chain condition.
His best-known book is his Lectures on Matrices (1934),[7] which Jacobson praised as follows:
That this was the result of a number of years of painstaking labour is evidenced by the bibliography of 661 items (in the revised printing) covering the period 1853 to 1936. The work is, however, not a compilation of the literature, but a synthesis that is Wedderburn's own. It contains a number of original contributions to the subject.
— Nathan Jacobson, quoted in Taylor 1949
About Wedderburn's teaching:
He was apparently a very shy man and much preferred looking at the blackboard to looking at the students. He had the galley proofs from his book "Lectures on Matrices" pasted to cardboard for durability, and his "lecturing" consisted of reading this out loud while simultaneously copying it onto the blackboard.
— Hooke, 1984
See also
• Hypercomplex numbers
• Wedderburn–Etherington number
• Wedderburn's little theorem
• Wedderburn's theorem (division ring)
• Wedderburn's theorem (simple ring)
• Artin–Wedderburn theorem
References
1. Taylor, H. S. (1949). "Joseph Henry Maclagen Wedderburn. 1882-1948". Obituary Notices of Fellows of the Royal Society. 6 (18): 618–626. doi:10.1098/rsbm.1949.0016. JSTOR 768943. S2CID 179012329.
2. O'Connor, John J.; Robertson, Edmund F., "Joseph Wedderburn", MacTutor History of Mathematics Archive, University of St Andrews
3. Joseph Wedderburn at the Mathematics Genealogy Project
4. Edinburgh Post Office Directory 1895
5. Biographical Index of Former Fellows of the Royal Society of Edinburgh 1783–2002 (PDF). The Royal Society of Edinburgh. July 2006. ISBN 978-0-902198-84-5. Archived from the original (PDF) on 4 March 2016. Retrieved 5 April 2019.
6. Maclagan-Wedderburn, J. H. (1908). "Theory of linear associative algebras". hdl:1842/19081.
7. MacDuffee, C. C. (1935). "Wedderburn on Matrices". Bull. Amer. Math. Soc. 41 (7): 471–472. doi:10.1090/s0002-9904-1935-06117-8.
Further reading
• Artin, Emil (1950). "The influence of J. H. M. Wedderburn on the development of modern algebra". Bull. Amer. Math. Soc. 56 (1, Part 1): 65–72. doi:10.1090/s0002-9904-1950-09346-x.
• Robert Hooke (1984) Recollections of Princeton, 1939–1941
• Karen Parshall (1983) "In pursuit of the finite division algebra theorem and beyond: Joseph H M Wedderburn, Leonard Dickson, and Oswald Veblen," Archives of International History of Science 33: 274–99.
• Karen Parshall (1985) "Joseph H. M. Wedderburn and the structure theory of algebras," Archive for History of Exact Sciences 32: 223–349.
• Karen Parshall (1992) "New Light on the Life and Work of Joseph Henry Maclagan Wedderburn (1882–1948)," in Menso Folkerts et al. (eds.): Amphora: Festschrift für Hans Wußing zu seinem 65. Geburtstag, Birkhäuser Verlag, 523–537.
Wedderburn–Artin theorem
In algebra, the Wedderburn–Artin theorem is a classification theorem for semisimple rings and semisimple algebras. The theorem states that an (Artinian)[1] semisimple ring R is isomorphic to a product of finitely many ni-by-ni matrix rings over division rings Di, for some integers ni, both of which are uniquely determined up to permutation of the index i. In particular, any simple left or right Artinian ring is isomorphic to an n-by-n matrix ring over a division ring D, where both n and D are uniquely determined.[2]
Theorem
Let R be a (Artinian) semisimple ring. Then the Wedderburn–Artin theorem states that R is isomorphic to a product of finitely many ni-by-ni matrix rings $M_{n_{i}}(D_{i})$ over division rings Di, for some integers ni, both of which are uniquely determined up to permutation of the index i.
There is also a version of the Wedderburn–Artin theorem for algebras over a field k. If R is a finite-dimensional semisimple k-algebra, then each Di in the above statement is a finite-dimensional division algebra over k. The center of each Di need not be k; it could be a finite extension of k.
Note that if R is a finite-dimensional simple algebra over a field E, the division ring D need not be contained in E. For example, matrix rings over the complex numbers are finite-dimensional simple algebras over the real numbers.
Proof
There are various proofs of the Wedderburn–Artin theorem.[3][4] A common modern one[5] takes the following approach.
Suppose the ring $R$ is semisimple. Then the right $R$-module $R_{R}$ is isomorphic to a finite direct sum of simple modules (which are the same as minimal right ideals of $R$). Write this direct sum as
$R_{R}\;\cong \;\bigoplus _{i=1}^{m}I_{i}^{\oplus n_{i}}$
where the $I_{i}$ are mutually nonisomorphic simple right $R$-modules, the ith one appearing with multiplicity $n_{i}$. This gives an isomorphism of endomorphism rings
$\mathrm {End} (R_{R})\;\cong \;\bigoplus _{i=1}^{m}\mathrm {End} {\big (}I_{i}^{\oplus n_{i}}{\big )}$
and we can identify $\mathrm {End} {\big (}I_{i}^{\oplus n_{i}}{\big )}$ with a ring of matrices
$\mathrm {End} {\big (}I_{i}^{\oplus n_{i}}{\big )}\;\cong \;M_{n_{i}}{\big (}\mathrm {End} (I_{i}){\big )}$
where the endomorphism ring $\mathrm {End} (I_{i})$ of $I_{i}$ is a division ring by Schur's lemma, because $I_{i}$ is simple. Since $R\cong \mathrm {End} (R_{R})$ we conclude
$R\;\cong \;\bigoplus _{i=1}^{m}M_{n_{i}}{\big (}\mathrm {End} (I_{i}){\big )}\,.$
Here we used right modules because $R\cong \mathrm {End} (R_{R})$; if we used left modules $R$ would be isomorphic to the opposite algebra of $\mathrm {End} ({}_{R}R)$, but the proof would still go through. To see this proof in a larger context, see Decomposition of a module. For the proof of an important special case, see Simple Artinian ring.
Consequences
Since a finite-dimensional algebra over a field is Artinian, the Wedderburn–Artin theorem implies that every finite-dimensional simple algebra over a field $k$ is isomorphic to an n-by-n matrix ring over some finite-dimensional division algebra D over $k$, where both n and D are uniquely determined.[2] This was shown by Joseph Wedderburn. Emil Artin later generalized this result to the case of simple left or right Artinian rings.
Since the only finite-dimensional division algebra over an algebraically closed field is the field itself, the Wedderburn–Artin theorem has strong consequences in this case. Let R be a semisimple ring that is a finite-dimensional algebra over an algebraically closed field $k$. Then R is a finite product $\textstyle \prod _{i=1}^{r}M_{n_{i}}(k)$ where the $n_{i}$ are positive integers and $M_{n_{i}}(k)$ is the algebra of $n_{i}\times n_{i}$ matrices over $k$.
Furthermore, the Wedderburn–Artin theorem reduces the problem of classifying finite-dimensional central simple algebras over a field $k$ to the problem of classifying finite-dimensional central division algebras over $k$: that is, division algebras over $k$ whose center is $k$. It implies that any finite-dimensional central simple algebra over $k$ is isomorphic to a matrix algebra $\textstyle M_{n}(D)$ where $D$ is a finite-dimensional central division algebra over $k$.
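As a small numerical illustration of the algebraically closed case, the commutative group algebra of the cyclic group of order 3 over the complex numbers can be realized as the algebra of 3×3 circulant matrices, and the theorem predicts a splitting into three 1×1 blocks, i.e. $\mathbb{C}\times \mathbb{C}\times \mathbb{C}$. The discrete Fourier matrix exhibits this splitting by simultaneously diagonalizing every circulant matrix. The following sketch is illustrative only; the choice of group and of the sample element is ours, not from the source:

```python
import numpy as np

# C[Z/3] realized as 3x3 circulant matrices a*I + b*P + c*P^2,
# where P is a cyclic permutation matrix with P^3 = I.
P = np.roll(np.eye(3), 1, axis=1)
A = 2.0 * np.eye(3) + 1.5 * P + 0.5 * (P @ P)  # a generic element

# The unitary DFT matrix diagonalizes every circulant matrix at once,
# exhibiting the Wedderburn-Artin decomposition into three 1x1 blocks
# (all blocks are 1x1 because the algebra is commutative).
w = np.exp(-2j * np.pi / 3)
F = np.array([[w ** (j * k) for k in range(3)] for j in range(3)]) / np.sqrt(3)
D = F.conj().T @ A @ F

off_diag = D - np.diag(np.diag(D))
print(np.max(np.abs(off_diag)))  # ~0 up to floating-point rounding
```

The diagonal entries of D are the values of the representing polynomial at the cube roots of unity, one for each simple factor.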
See also
• Maschke's theorem
• Brauer group
• Jacobson density theorem
• Hypercomplex number
• Emil Artin
• Joseph Wedderburn
References
1. By the definition used here, semisimple rings are automatically Artinian rings. However, some authors use "semisimple" in a different way, to mean the ring has a trivial Jacobson radical. For Artinian rings, the two notions are equivalent, so "Artinian" is included here to eliminate that ambiguity.
2. John A. Beachy (1999). Introductory Lectures on Rings and Modules. Cambridge University Press. p. 156. ISBN 978-0-521-64407-5.
3. Henderson, D.W. (1965). "A short proof of Wedderburn's theorem". Amer. Math. Monthly. 72 (4): 385–386. doi:10.2307/2313499. JSTOR 2313499.
4. Nicholson, William K. (1993). "A short proof of the Wedderburn-Artin theorem" (PDF). New Zealand J. Math. 22: 83–86.
5. Cohn, P. M. (2003). Basic Algebra: Groups, Rings, and Fields. pp. 137–139.
• J.H.M. Wedderburn (1908). "On Hypercomplex Numbers". Proceedings of the London Mathematical Society. 6: 77–118. doi:10.1112/plms/s2-6.1.77.
• Artin, E. (1927). "Zur Theorie der hyperkomplexen Zahlen". Abhandlungen aus dem Mathematischen Seminar der Universität Hamburg. 5: 251–260.
Krull–Schmidt theorem
In mathematics, the Krull–Schmidt theorem states that a group subjected to certain finiteness conditions on chains of subgroups can be written uniquely as a finite direct product of indecomposable subgroups.
Definitions
We say that a group G satisfies the ascending chain condition (ACC) on subgroups if every sequence of subgroups of G:
$1=G_{0}\leq G_{1}\leq G_{2}\leq \cdots \,$
is eventually constant, i.e., there exists N such that GN = GN+1 = GN+2 = ... . We say that G satisfies the ACC on normal subgroups if every such sequence of normal subgroups of G eventually becomes constant.
Likewise, one can define the descending chain condition on (normal) subgroups, by looking at all decreasing sequences of (normal) subgroups:
$G=G_{0}\geq G_{1}\geq G_{2}\geq \cdots .\,$
Clearly, all finite groups satisfy both ACC and DCC on subgroups. The infinite cyclic group $\mathbf {Z} $ satisfies ACC but not DCC, since $(2)>(2)^{2}>(2)^{3}>\cdots $ is an infinite decreasing sequence of subgroups. On the other hand, the $p^{\infty }$-torsion part of $\mathbf {Q} /\mathbf {Z} $ (the quasicyclic p-group) satisfies DCC but not ACC.
We say a group G is indecomposable if it cannot be written as a direct product of non-trivial subgroups G = H × K.
Statement
If $G$ is a group that satisfies either ACC or DCC on normal subgroups, then there is exactly one way of writing $G$ as a direct product $G_{1}\times G_{2}\times \cdots \times G_{k}\,$ of finitely many indecomposable subgroups of $G$. Here, uniqueness means direct decompositions into indecomposable subgroups have the exchange property. That is: suppose $G=H_{1}\times H_{2}\times \cdots \times H_{l}\,$ is another expression of $G$ as a product of indecomposable subgroups. Then $k=l$ and there is a reindexing of the $H_{i}$'s satisfying
• $G_{i}$ and $H_{i}$ are isomorphic for each $i$;
• $G=G_{1}\times \cdots \times G_{r}\times H_{r+1}\times \cdots \times H_{l}\,$ for each $r$.
Proof
Proving existence is relatively straightforward: let S be the set of all normal subgroups that cannot be written as a finite direct product of indecomposable subgroups. (Any indecomposable subgroup is, trivially, the one-term direct product of itself, and so does not belong to S.) If the Krull–Schmidt decomposition fails, then S contains G; so we may iteratively construct a strictly descending series of direct factors, which contradicts the DCC. One can then invert the construction to show that all direct factors of G appear in this way.[1]
The proof of uniqueness, on the other hand, is quite long and requires a sequence of technical lemmas; for a complete exposition, see Hungerford.[2]
Remark
The theorem does not assert the existence of a non-trivial decomposition, but merely that any such two decompositions (if they exist) are the same.
Remak decomposition
A Remak decomposition, introduced by Robert Remak,[3] is a decomposition of an abelian group or similar object into a finite direct sum of indecomposable objects. The Krull–Schmidt theorem gives conditions for a Remak decomposition to exist and for its factors to be unique.
Krull–Schmidt theorem for modules
If $E\neq 0$ is a module that satisfies the ACC and DCC on submodules (that is, it is both Noetherian and Artinian or – equivalently – of finite length), then $E$ is a direct sum of indecomposable modules. Up to a permutation, the indecomposable components in such a direct sum are uniquely determined up to isomorphism.[4]
In general, the theorem fails if one only assumes that the module is Noetherian or Artinian.[5]
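For finite abelian groups, the theorem specializes to the uniqueness of the primary decomposition: the indecomposable direct factors of the cyclic group Z/n are the cyclic groups of prime-power order obtained from the prime factorization of n. A small sketch of this special case (the function name and test value are our own choices):

```python
from sympy import factorint

def indecomposable_factors(n):
    """Orders of the indecomposable (prime-power cyclic) direct factors
    of the cyclic group Z/n, read off from the prime factorization of n."""
    return sorted(p ** e for p, e in factorint(n).items())

# By the Chinese remainder theorem, Z/360 = Z/8 x Z/9 x Z/5, and
# Krull-Schmidt says this list of indecomposables is unique up to order.
print(indecomposable_factors(360))  # [5, 8, 9]
```

Each factor Z/p^e is indecomposable because its subgroups form a single chain, so it cannot split as a direct product of proper subgroups.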
History
The present-day Krull–Schmidt theorem was first proved by Joseph Wedderburn (Ann. of Math (1909)) for finite groups, though he notes that some credit is due to an earlier study by G. A. Miller in which direct products of abelian groups were considered. Wedderburn's theorem is stated as an exchange property between direct decompositions of maximum length. However, Wedderburn's proof makes no use of automorphisms.
The thesis of Robert Remak (1911) derived the same uniqueness result as Wedderburn but also proved (in modern terminology) that the group of central automorphisms acts transitively on the set of direct decompositions of maximum length of a finite group. From that stronger theorem Remak also proved various corollaries including that groups with a trivial center and perfect groups have a unique Remak decomposition.
Otto Schmidt (Sur les produits directs, S. M. F. Bull. 41 (1913), 161–164), simplified the main theorems of Remak to the 3 page predecessor to today's textbook proofs. His method improves Remak's use of idempotents to create the appropriate central automorphisms. Both Remak and Schmidt published subsequent proofs and corollaries to their theorems.
Wolfgang Krull (Über verallgemeinerte endliche Abelsche Gruppen, M. Z. 23 (1925) 161–196), returned to G.A. Miller's original problem of direct products of abelian groups by extending to abelian operator groups with ascending and descending chain conditions. This is most often stated in the language of modules. His proof observes that the idempotents used in the proofs of Remak and Schmidt can be restricted to module homomorphisms; the remaining details of the proof are largely unchanged.
O. Ore unified the proofs from various categories, including finite groups, abelian operator groups, rings and algebras, by proving that the exchange theorem of Wedderburn holds for modular lattices with descending and ascending chain conditions. This proof makes no use of idempotents and does not reprove the transitivity of Remak's theorems.
Kurosh's The Theory of Groups and Zassenhaus' The Theory of Groups include the proofs of Schmidt and Ore under the name of Remak–Schmidt but acknowledge Wedderburn and Ore. Later texts use the title Krull–Schmidt (Hungerford's Algebra) and Krull–Schmidt–Azumaya (Curtis–Reiner). The name Krull–Schmidt is now popularly substituted for any theorem concerning uniqueness of direct products of maximum size. Some authors choose to call maximum-size direct decompositions Remak decompositions to honor his contributions.
See also
• Krull–Schmidt category
References
1. Thomas W. Hungerford (6 December 2012). Algebra. Springer Science & Business Media. p. 83. ISBN 978-1-4612-6101-8.
2. Hungerford 2012, pp. 86–88.
3. Remak, Robert (1911), "Über die Zerlegung der endlichen Gruppen in direkte unzerlegbare Faktoren", Journal für die reine und angewandte Mathematik (in German), 139: 293–308, doi:10.1515/crll.1911.139.293, ISSN 0075-4102, JFM 42.0156.01
4. Jacobson, Nathan (2009). Basic algebra. Vol. 2 (2nd ed.). Dover. p. 115. ISBN 978-0-486-47187-7.
5. Facchini, Alberto; Herbera, Dolors; Levy, Lawrence S.; Vámos, Peter (1 December 1995). "Krull-Schmidt fails for Artinian modules". Proceedings of the American Mathematical Society. 123 (12): 3587. doi:10.1090/S0002-9939-1995-1277109-4.
Further reading
• A. Facchini: Module theory. Endomorphism rings and direct sum decompositions in some classes of modules. Progress in Mathematics, 167. Birkhäuser Verlag, Basel, 1998. ISBN 3-7643-5908-0
• C.M. Ringel: Krull–Remak–Schmidt fails for Artinian modules over local rings. Algebr. Represent. Theory 4 (2001), no. 1, 77–86.
External links
• Page at PlanetMath
Wedderburn's little theorem
In mathematics, Wedderburn's little theorem states that every finite division ring is a field. In other words, for finite rings, there is no distinction between domains, division rings and fields.
The Artin–Zorn theorem generalizes the theorem to alternative rings: every finite alternative division ring is a field.[1]
History
The original proof was given by Joseph Wedderburn in 1905,[2] who went on to prove it two other ways. Another proof was given by Leonard Eugene Dickson shortly after Wedderburn's original proof, and Dickson acknowledged Wedderburn's priority. However, as noted in (Parshall 1983), Wedderburn's first proof was incorrect – it had a gap – and his subsequent proofs appeared only after he had read Dickson's correct proof. On this basis, Parshall argues that Dickson should be credited with the first correct proof.
A simplified version of the proof was later given by Ernst Witt.[2] Witt's proof is sketched below. Alternatively, the theorem is a consequence of the Skolem–Noether theorem by the following argument.[3] Let $D$ be a finite division algebra with center $k$. Let $[D:k]=n^{2}$ and $q$ denote the cardinality of $k$. Every maximal subfield of $D$ has $q^{n}$ elements; so they are isomorphic and thus are conjugate by Skolem–Noether. But a finite group (the multiplicative group of $D$ in our case) cannot be a union of conjugates of a proper subgroup; hence, $n=1$.
A later "group-theoretic" proof was given by Ted Kaczynski in 1964.[4] This proof, Kaczynski's first published piece of mathematical writing, was a short, two-page note which also acknowledged the earlier historical proofs.
Relationship to the Brauer group of a finite field
The theorem is essentially equivalent to saying that the Brauer group of a finite field is trivial. In fact, this characterization immediately yields a proof of the theorem as follows: let k be a finite field. Since the Herbrand quotient vanishes by finiteness, $\operatorname {Br} (k)=H^{2}(k^{\text{al}}/k)$ coincides with $H^{1}(k^{\text{al}}/k)$, which in turn vanishes by Hilbert 90.
Proof
Let A be a finite domain. For each nonzero x in A, the two maps
$a\mapsto ax,a\mapsto xa:A\to A$
are injective by the cancellation property, and thus surjective by counting. It follows from elementary group theory[5] that the nonzero elements of $A$ form a group under multiplication. Thus, $A$ is a skew-field.
To prove that every finite skew-field is a field, we use strong induction on the size of the skew-field. Thus, let $A$ be a skew-field, and assume that all skew-fields that are proper subsets of $A$ are fields. Since the center $Z(A)$ of $A$ is a field, $A$ is a vector space over $Z(A)$ with finite dimension $n$. Our objective is then to show $n=1$. If $q$ is the order of $Z(A)$, then $A$ has order ${q}^{n}$. Note that because $Z(A)$ contains the distinct elements $0$ and $1$, $q>1$. For each $x$ in $A$ that is not in the center, the centralizer ${Z}_{x}$ of $x$ is clearly a skew-field and thus a field, by the induction hypothesis, and because ${Z}_{x}$ can be viewed as a vector space over $Z(A)$ and $A$ can be viewed as a vector space over ${Z}_{x}$, we have that ${Z}_{x}$ has order ${q}^{d}$ where $d$ divides $n$ and is less than $n$. Viewing ${Z(A)}^{*}$, $A^{*}$, and the ${Z}_{x}^{*}$ as groups under multiplication, we can write the class equation
$q^{n}-1=q-1+\sum {q^{n}-1 \over q^{d}-1}$
where the sum is taken over the conjugacy classes not contained within ${Z(A)}^{*}$, and the $d$ are defined so that for each conjugacy class, the order of ${Z}_{x}^{*}$ for any $x$ in the class is ${q}^{d}-1$. ${q}^{n}-1$ and $q^{d}-1$ both admit polynomial factorization in terms of cyclotomic polynomials
$\Phi _{f}(q).$
In the polynomial identities
$x^{n}-1=\prod _{m|n}\Phi _{m}(x)$ and $x^{d}-1=\prod _{m|d}\Phi _{m}(x)$,
we set $x=q$. Because each $d$ is a proper divisor of $n$,
$\Phi _{n}(q)$ divides both ${q}^{n}-1$ and each ${q^{n}-1 \over q^{d}-1}$,
so by the above class equation $\Phi _{n}(q)$ must divide $q-1$, and therefore
$|\Phi _{n}(q)|\leq q-1.$
To see that this forces $n$ to be $1$, we will show
$|\Phi _{n}(q)|>q-1$
for $n>1$ using factorization over the complex numbers. In the polynomial identity
$\Phi _{n}(x)=\prod (x-\zeta ),$
where $\zeta $ runs over the primitive $n$-th roots of unity, set $x$ to be $q$ and then take absolute values
$|\Phi _{n}(q)|=\prod |q-\zeta |.$
For $n>1$, we see that for each primitive $n$-th root of unity $\zeta $,
$|q-\zeta |>|q-1|$
because of the location of $q$, $1$, and $\zeta $ in the complex plane. Thus
$|\Phi _{n}(q)|>q-1.$
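The divisibility and size claims used in this proof can be spot-checked numerically with SymPy's cyclotomic polynomials; the particular values of q and n below are illustrative choices, not from the source:

```python
from sympy import cyclotomic_poly, divisors

def check_class_equation_facts(q, n):
    """For n > 1, verify that Phi_n(q) divides q^n - 1 and each
    (q^n - 1)/(q^d - 1) for proper divisors d of n, then report whether
    Phi_n(q) > q - 1 -- the contradiction driving the little theorem."""
    phi = int(cyclotomic_poly(n, q))
    assert (q**n - 1) % phi == 0
    for d in divisors(n):
        if d < n:
            # Phi_n(x) divides (x^n - 1)/(x^d - 1) as polynomials,
            # hence also after substituting the integer q.
            quotient = (q**n - 1) // (q**d - 1)
            assert quotient % phi == 0
    return phi > q - 1

print(all(check_class_equation_facts(q, n)
          for q in (2, 3, 4, 5) for n in (2, 3, 4, 6)))  # True
```

For instance, with q = 2 and n = 6 one gets Φ₆(2) = 3, which divides (2⁶ − 1)/(2² − 1) = 21 yet exceeds q − 1 = 1, exactly as the proof requires.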
Notes
1. Shult, Ernest E. (2011). Points and lines. Characterizing the classical geometries. Universitext. Berlin: Springer-Verlag. p. 123. ISBN 978-3-642-15626-7. Zbl 1213.51001.
2. Lam (2001), p. 204
3. Theorem 4.1 in Ch. IV of Milne, class field theory, http://www.jmilne.org/math/CourseNotes/cft.html
4. Kaczynski, T.J. (June–July 1964). "Another Proof of Wedderburn's Theorem". American Mathematical Monthly. 71 (6): 652–653. doi:10.2307/2312328. JSTOR 2312328. (Jstor link, requires login)
5. e.g., Exercise 1.9 in Milne, group theory, http://www.jmilne.org/math/CourseNotes/GT.pdf
References
• Parshall, K. H. (1983). "In pursuit of the finite division algebra theorem and beyond: Joseph H M Wedderburn, Leonard Dickson, and Oswald Veblen". Archives of International History of Science. 33: 274–99.
• Lam, Tsit-Yuen (2001). A first course in noncommutative rings. Graduate Texts in Mathematics. Vol. 131 (2 ed.). Springer. ISBN 0-387-95183-0.
External links
• Proof of Wedderburn's Theorem at Planet Math
• Mizar system proof: http://mizar.org/version/current/html/weddwitt.html#T38
Weddle surface
In algebraic geometry, a Weddle surface, introduced by Thomas Weddle (1850, footnote on page 69), is a quartic surface in 3-dimensional projective space, given by the locus of vertices of the family of cones passing through 6 points in general position.
Weddle surfaces have 6 nodes and are birational to Kummer surfaces.
References
• Bolognesi, Michele (2010), Surfaces de Weddle et leurs espaces de Modules: Fonctions thêta, fibrés vectoriels et géométrie des variétés Jacobiennes des courbes, Editions universitaires europeennes, ISBN 978-6131546761
• Hudson, R. W. H. T. (1990), Kummer's quartic surface, Cambridge Mathematical Library, Cambridge University Press, ISBN 978-0-521-39790-2, MR 1097176
• Moore, Walter L. (1928), "On the Geometry of the Weddle Surface", Annals of Mathematics, Second Series, Annals of Mathematics, 30 (1): 492–498, doi:10.2307/1968298, ISSN 0003-486X, JSTOR 1968298, MR 1502899
• Weddle, Thomas (1850), "On the theorems in space analogous to those of Pascal and Brianchon in a plane.– Part II", Cambridge and Dublin Mathematical Journal, 5: 58–69
Wedge (geometry)
In solid geometry, a wedge is a polyhedron defined by two triangles and three trapezoid faces. A wedge has five faces, nine edges, and six vertices.
Wedge
• Faces: 2 triangles, 3 quadrilaterals
• Edges: 9
• Vertices: 6
• Dual polyhedron: Notch
• Properties: convex
A wedge is a subclass of the prismatoids with the base and opposite ridge in two parallel planes.
A wedge can also be classified as a digonal cupola.
Comparisons:
• A wedge is a parallelepiped where a face has collapsed into a line.
• A quadrilaterally-based pyramid is a wedge in which one of the edges between two trapezoid faces has collapsed into a point.
Volume
For a rectangle-based wedge, the volume is
$V=bh\left({\frac {a}{3}}+{\frac {c}{6}}\right),$
where the base rectangle is a by b, c is the apex edge length parallel to a, and h the height from the base rectangle to the apex edge.
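The formula can be sanity-checked against its two degenerate cases: taking c = a recovers a triangular prism of volume abh/2, while c = 0 recovers a rectangle-based pyramid of volume abh/3. A short sketch (the function name is ours):

```python
def wedge_volume(a, b, c, h):
    """Volume of a rectangle-based wedge: base rectangle a x b,
    apex edge of length c parallel to a, height h above the base."""
    return b * h * (a / 3 + c / 6)

# c = a: the wedge degenerates to a triangular prism, V = a*b*h/2.
print(wedge_volume(2, 3, 2, 4))   # ~12.0, equal to 2*3*4/2
# c = 0: the apex edge collapses to a point, giving a pyramid, V = a*b*h/3.
print(wedge_volume(2, 3, 0, 4))   # ~8.0, equal to 2*3*4/3
```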
Examples
Wedges can be created from decomposition of other polyhedra. For instance, the dodecahedron can be divided into a central cube with 6 wedges covering the cube faces. The orientations of the wedges are such that the triangle and trapezoid faces can connect and form a regular pentagon.
A triangular prism is a special case wedge with the two triangle faces being translationally congruent.
Two obtuse wedges can be formed by bisecting a regular tetrahedron on a plane parallel to two opposite edges.
Special cases
Triangular prism
(Parallel triangle wedge)
Obtuse wedge as a bisected regular tetrahedron
A wedge constructed from 8 triangular faces and 2 squares. It can be seen as a tetrahedron augmented by two square pyramids.
The regular dodecahedron can be decomposed into a central cube and 6 wedges over the 6 square faces.
References
• Harris, J. W., & Stocker, H. "Wedge". §4.5.2 in Handbook of Mathematics and Computational Science. New York: Springer, p. 102, 1998. ISBN 978-0-387-94746-4
External links
• Weisstein, Eric W. "Wedge". MathWorld.
Convex polyhedra
Platonic solids (regular)
• tetrahedron
• cube
• octahedron
• dodecahedron
• icosahedron
Archimedean solids
(semiregular or uniform)
• truncated tetrahedron
• cuboctahedron
• truncated cube
• truncated octahedron
• rhombicuboctahedron
• truncated cuboctahedron
• snub cube
• icosidodecahedron
• truncated dodecahedron
• truncated icosahedron
• rhombicosidodecahedron
• truncated icosidodecahedron
• snub dodecahedron
Catalan solids
(duals of Archimedean)
• triakis tetrahedron
• rhombic dodecahedron
• triakis octahedron
• tetrakis hexahedron
• deltoidal icositetrahedron
• disdyakis dodecahedron
• pentagonal icositetrahedron
• rhombic triacontahedron
• triakis icosahedron
• pentakis dodecahedron
• deltoidal hexecontahedron
• disdyakis triacontahedron
• pentagonal hexecontahedron
Dihedral regular
• dihedron
• hosohedron
Dihedral uniform
• prisms
• antiprisms
duals:
• bipyramids
• trapezohedra
Dihedral others
• pyramids
• truncated trapezohedra
• gyroelongated bipyramid
• cupola
• bicupola
• frustum
• bifrustum
• rotunda
• birotunda
• prismatoid
• scutoid
Degenerate polyhedra are in italics.
Wedge sum
In topology, the wedge sum is a "one-point union" of a family of topological spaces. Specifically, if X and Y are pointed spaces (i.e. topological spaces with distinguished basepoints $x_{0}$ and $y_{0}$) the wedge sum of X and Y is the quotient space of the disjoint union of X and Y by the identification $x_{0}\sim y_{0}:$
$X\vee Y=(X\amalg Y)\;/{\sim },$
where $\,\sim \,$ is the equivalence closure of the relation $\left\{\left(x_{0},y_{0}\right)\right\}.$ More generally, suppose $\left(X_{i}\right)_{i\in I}$ is an indexed family of pointed spaces with basepoints $\left(p_{i}\right)_{i\in I}.$ The wedge sum of the family is given by:
$\bigvee _{i\in I}X_{i}=\coprod _{i\in I}X_{i}\;/{\sim },$
where $\,\sim \,$ is the equivalence closure of the relation $\left\{\left(p_{i},p_{j}\right):i,j\in I\right\}.$ In other words, the wedge sum is the joining of several spaces at a single point. This definition is sensitive to the choice of the basepoints $\left(p_{i}\right)_{i\in I},$ unless the spaces $\left(X_{i}\right)_{i\in I}$ are homogeneous.
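The point-set identification underlying this quotient can be sketched directly: tag each point with the index of its space, then collapse all basepoints to a single class. The following models only the underlying pointed sets, not the quotient topology, and the representation is our own:

```python
def wedge_sum(spaces):
    """Underlying pointed set of a wedge sum. Each space is a pair
    (set_of_points, basepoint); points are tagged with their index
    and all basepoints are identified to a single class 'base'."""
    points = set()
    for i, (X, x0) in enumerate(spaces):
        for x in X:
            points.add("base" if x == x0 else (i, x))
    return points, "base"

# Two circles modeled as disjoint 4-point sets with chosen basepoints:
C1 = ({"a", "b", "c", "d"}, "a")
C2 = ({"p", "q", "r", "s"}, "p")
W, base = wedge_sum([C1, C2])
print(len(W))  # 7 = 4 + 4 - 1: the two basepoints become one
```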
The wedge sum is again a pointed space, and the binary operation is associative and commutative (up to homeomorphism).
Sometimes the wedge sum is called the wedge product, but this is not the same concept as the exterior product, which is also often called the wedge product.
Examples
The wedge sum of two circles is homeomorphic to a figure-eight space. The wedge sum of $n$ circles is often called a bouquet of circles, while a wedge sum of arbitrary spheres is often called a bouquet of spheres.
A common construction in homotopy is to identify all of the points along the equator of an $n$-sphere $S^{n}$. Doing so results in two copies of the sphere, joined at the point that was the equator:
$S^{n}/{\sim }=S^{n}\vee S^{n}.$
Let $\Psi :S^{n}\to S^{n}\vee S^{n}$ be the quotient map that identifies the equator to a single point. Then the addition of two elements $f,g\in \pi _{n}(X,x_{0})$ of the $n$-dimensional homotopy group $\pi _{n}(X,x_{0})$ of a space $X$ at the distinguished point $x_{0}\in X$ can be understood as the composition of the wedge sum $f\vee g$ with $\Psi $:
$f+g=(f\vee g)\circ \Psi .$
Here, $f,g:S^{n}\to X$ are maps which take a distinguished point $s_{0}\in S^{n}$ to the point $x_{0}\in X.$ Note that the above uses the wedge sum of two functions, which is possible precisely because they agree at $s_{0},$ the point common to the wedge sum of the underlying spaces.
Categorical description
The wedge sum can be understood as the coproduct in the category of pointed spaces. Alternatively, the wedge sum can be seen as the pushout of the diagram $X\leftarrow \{\bullet \}\to Y$ in the category of topological spaces (where $\{\bullet \}$ is any one-point space).
Properties
Van Kampen's theorem gives certain conditions (which are usually fulfilled for well-behaved spaces, such as CW complexes) under which the fundamental group of the wedge sum of two spaces $X$ and $Y$ is the free product of the fundamental groups of $X$ and $Y.$
See also
• Smash product
• Hawaiian earring, a topological space resembling, but not the same as, a wedge sum of countably many circles
References
• Rotman, Joseph. An Introduction to Algebraic Topology, Springer, 2004, p. 153. ISBN 0-387-96678-1
Weeks manifold
In mathematics, the Weeks manifold, sometimes called the Fomenko–Matveev–Weeks manifold, is a closed hyperbolic 3-manifold obtained by (5, 2) and (5, 1) Dehn surgeries on the Whitehead link. It has volume approximately equal to 0.942707… (OEIS: A126774) and David Gabai, Robert Meyerhoff, and Peter Milley (2009) showed that it has the smallest volume of any closed orientable hyperbolic 3-manifold. The manifold was independently discovered by Jeffrey Weeks (1985) as well as Sergei V. Matveev and Anatoly T. Fomenko (1988).
Volume
Since the Weeks manifold is an arithmetic hyperbolic 3-manifold, its volume can be computed using its arithmetic data and a formula due to Armand Borel:
$V_{w}={\frac {3\cdot 23^{3/2}\zeta _{k}(2)}{4\pi ^{4}}}=0.942707\dots $
where $k$ is the number field generated by $\theta $ satisfying $\theta ^{3}-\theta +1=0$ and $\zeta _{k}$ is the Dedekind zeta function of $k$. [1] Alternatively,
$V_{w}=\operatorname {Im} \left(\operatorname {Li} _{2}(\theta )+\ln |\theta |\,\ln(1-\theta )\right)=0.942707\dots $
where $\operatorname {Li} _{n}$ is the polylogarithm and $\theta $ is the complex root (with positive imaginary part) of the cubic above, with $|\theta |$ its absolute value.
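The polylogarithm expression can be evaluated numerically with mpmath; the root selection and working precision below are our own choices:

```python
from mpmath import mp, polyroots, polylog, log, fabs, im

mp.dps = 30  # 30 decimal digits of working precision

# Complex root of theta^3 - theta + 1 = 0 with positive imaginary part.
roots = polyroots([1, 0, -1, 1])
theta = next(r for r in roots if im(r) > 0)

# V_w = Im( Li_2(theta) + ln|theta| * ln(1 - theta) )
V = im(polylog(2, theta) + log(fabs(theta)) * log(1 - theta))
print(V)  # ~0.942707, matching the volume quoted above
```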
Related manifolds
The cusped hyperbolic 3-manifold obtained by (5, 1) Dehn surgery on the Whitehead link is the so-called sibling manifold, or sister, of the figure-eight knot complement. The figure eight knot's complement and its sibling have the smallest volume of any orientable, cusped hyperbolic 3-manifold. Thus the Weeks manifold can be obtained by hyperbolic Dehn surgery on one of the two smallest orientable cusped hyperbolic 3-manifolds.
See also
• Meyerhoff manifold – the closed orientable hyperbolic 3-manifold of second-smallest volume
References
1. Chinburg, Friedman, Jones & Reid (2001)
• Agol, Ian; Storm, Peter A.; Thurston, William P. (2007), "Lower bounds on volumes of hyperbolic Haken 3-manifolds (with an appendix by Nathan Dunfield)", Journal of the American Mathematical Society, 20 (4): 1053–1077, arXiv:math.DG/0506338, Bibcode:2007JAMS...20.1053A, doi:10.1090/S0894-0347-07-00564-4, MR 2328715.
• Chinburg, Ted; Friedman, Eduardo; Jones, Kerry N.; Reid, Alan W. (2001), "The arithmetic hyperbolic 3-manifold of smallest volume", Annali della Scuola Normale Superiore di Pisa. Classe di Scienze. Serie IV, 30 (1): 1–40, MR 1882023
• Gabai, David; Meyerhoff, Robert; Milley, Peter (2009), "Minimum volume cusped hyperbolic three-manifolds", Journal of the American Mathematical Society, 22 (4): 1157–1215, arXiv:0705.4325, Bibcode:2009JAMS...22.1157G, doi:10.1090/S0894-0347-09-00639-0, MR 2525782
• Matveev, Sergei V.; Fomenko, Anatoly T. (1988), "Isoenergetic surfaces of Hamiltonian systems, the enumeration of three-dimensional manifolds in order of growth of their complexity, and the calculation of the volumes of closed hyperbolic manifolds", Akademiya Nauk SSSR i Moskovskoe Matematicheskoe Obshchestvo. Uspekhi Matematicheskikh Nauk, 43 (1): 5–22, Bibcode:1988RuMaS..43....3M, doi:10.1070/RM1988v043n01ABEH001554, MR 0937017
• Weeks, Jeffrey (1985), Hyperbolic structures on 3-manifolds, Ph.D. thesis, Princeton University
Wei Ho
Wei Ho is an American mathematician specializing in number theory, algebraic geometry, arithmetic geometry, and representation theory. She is an associate professor of mathematics at the University of Michigan in Ann Arbor, Michigan.
Wei Ho
Wei Ho at the Mathematical Research Institute at Oberwolfach, 2015
Citizenship: United States
Alma mater:
• Harvard University
• Princeton University
Known for: Number theory
Awards:
• Sloan Research Fellowship
Scientific career
Fields: Mathematics
Institutions:
• University of Michigan
• Columbia University
Thesis: Orbit parametrizations of curves (2009)
Doctoral advisor: Manjul Bhargava
Website: www-personal.umich.edu/~weiho
Education and career
Wei Ho grew up in Wisconsin, where she attended New Berlin West High School in New Berlin, Wisconsin. She was raised with a Chinese upbringing.[1] During her middle and high school years, she participated in the Wisconsin Math League, the MATHCOUNTS competition, the American Invitational Mathematics Examination (AIME), and the USA Mathematical Talent Search (USAMTS). After her freshman year at New Berlin, Ho attended the Young Scholar Summer Program (YSSP) at the Rose-Hulman Institute of Technology in Terre Haute, Indiana. Her performance on the US National Chemistry Olympiad qualified her for a two-week study camp at the US Air Force Academy in Colorado Springs, Colorado. Ho played on the varsity tennis team and played violin in various orchestras while at New Berlin.[2]
In 2003, Ho received both master's and bachelor's degrees from Harvard University in Cambridge, Massachusetts. While at Harvard, she completed a senior honors thesis entitled The Main Conjecture of Iwasawa Theory under the supervision of Noam Elkies.[3] After college, Ho won a Harvard Herchel Smith Fellowship in Science, which enabled her to spend a year abroad at the University of Cambridge in Cambridge, England, where she completed Part III of the Mathematical Tripos with distinction.[3] In 2009, she completed her PhD in mathematics at Princeton University in Princeton, New Jersey, under the supervision of Manjul Bhargava.[4] She was awarded a National Science Foundation (NSF) Postdoctoral Fellowship, which enabled her to conduct research at Harvard and Princeton.[5]
In 2010, Ho became a Joseph Fels Ritt Assistant Professor at Columbia University in New York. Ho joined the faculty at the University of Michigan as an assistant professor in 2014 and was promoted to associate professor in 2019.[3]
Recognition
In 1994, at the age of 10, Ho got an 800 on the math portion of the SAT subject test, becoming perhaps the youngest girl to achieve that feat at the time.[6] In 1999, Ho won a Gold Medal while representing the US at the International Chemistry Olympiad.[3][7] In 2003, the Association for Women in Mathematics selected Ho as a runner-up for the Alice T. Schafer Prize for Excellence in Mathematics by an Undergraduate Woman.[8] In that same year, Ho received the Herman Peries Prize for excellent performance in the Mathematical Tripos at Emmanuel College, University of Cambridge.[3] In 2017, Ho was awarded a Sloan Research Fellowship by the Alfred P. Sloan Foundation.[1][9][10]
In 2022 Ho became the director of the Women and Mathematics program at the Institute for Advanced Study, Princeton. She was named to the 2023 class of Fellows of the American Mathematical Society, "for contributions to number theory and algebraic geometry, and for service to the mathematical community".[11]
References
1. Diaz-Lopez, Alexander (December 2018). "Wei Ho Interview" (PDF). Notices of the American Mathematical Society. 65 (11): 1417–1419. doi:10.1090/noti1762. Retrieved 10 January 2021.
2. Ho, Wei (1 December 2002). "Math Outside the Classroom". Imagine. 5 (4): 14–15. doi:10.1353/imag.2003.0122. ISSN 1086-3230. Retrieved 11 January 2021.
3. "Wei Ho" (PDF). University of Michigan. Retrieved 10 January 2021.
4. Wei Ho at the Mathematics Genealogy Project
5. "NSF Award Search: Award#0902853". PostDoctoral Research Fellowship. Retrieved 10 January 2021.
6. "Girl, 10, gets perfect score on math SAT". Chicago Tribune. 7 March 1994.
7. "U.S. Chemistry Olympiad Teams". American Chemical Society. Retrieved 31 January 2021.
8. "Alice T. Shafer Mathematics Prize". Association for Women in Mathematics. Retrieved 10 January 2021.
9. "Wei Ho Receives Sloan Fellowship". U-M LSA Mathematics. 22 March 1966. Retrieved 10 January 2021.
10. "Past Fellows". Alfred P. Sloan Foundation. Retrieved 10 January 2021.
11. "2023 Class of Fellows". American Mathematical Society. Retrieved 2022-11-09.
External links
• Official website
• Wei Ho's Author profile at MathSciNet
Authority control: Academics
• MathSciNet
• Mathematics Genealogy Project
|
Wei-Ming Ni
Wei-Ming Ni (Chinese: 倪維明; born 23 December 1950)[1] is a Taiwanese mathematician at the University of Minnesota and the Chinese University of Hong Kong, and the former director of the Center for PDE at the East China Normal University. He works in the field of elliptic and parabolic partial differential equations.[2][3] He did undergraduate work at National Taiwan University and obtained his Ph.D. at New York University, in 1979, under the supervision of Louis Nirenberg.[4] He is an editor-in-chief of the Journal of Differential Equations, and was an ISI Highly Cited Researcher in 2002.[5][6] In the words of the journal Discrete and Continuous Dynamical Systems:
[Ni] first became a household name in the PDE community when he published with Gidas and Nirenberg the seminal paper in 1979, “On the symmetry of positive solutions of nonlinear elliptic equations” [...] The research and expository work of Professor Ni has influenced the research directions and activities of a large number of mathematicians, many of whom are playing important roles in the field of partial differential equations today.[6]
Major publications
• Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry and related properties via the maximum principle. Comm. Math. Phys. 68 (1979), no. 3, 209–243.
• Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry of positive solutions of nonlinear elliptic equations in ℝn. Mathematical analysis and applications, Part A, pp. 369–402, Adv. in Math. Suppl. Stud., 7a, Academic Press, New York-London, 1981.
• Lin, C.-S.; Ni, W.-M.; Takagi, I. Large amplitude stationary solutions to a chemotaxis system. J. Differential Equations 72 (1988), no. 1, 1–27.
• Ni, Wei-Ming; Takagi, Izumi. On the shape of least-energy solutions to a semilinear Neumann problem. Comm. Pure Appl. Math. 44 (1991), no. 7, 819–851.
• Lou, Yuan; Ni, Wei-Ming. Diffusion, self-diffusion and cross-diffusion. J. Differential Equations 131 (1996), no. 1, 79–131.
• Ni, Wei-Ming. Diffusion, cross-diffusion, and their spike-layer steady states. Notices Amer. Math. Soc. 45 (1998), no. 1, 9–18.
References
1. "NI, Wei-Ming". Chinese University of Hong Kong. Retrieved 22 October 2020. (in Chinese)
2. "Wei-Ming Ni's Home Page". University of Minnesota. Archived from the original on 11 August 2020. Retrieved 22 October 2020.
3. "Welcome to Center for PDE". cpde.ecnu.edu.cn. Archived from the original on 14 July 2020. Retrieved 22 October 2020.
4. "Wei-Ming Ni - The Mathematics Genealogy Project". Mathematics Genealogy Project. Department of Mathematics - North Dakota State University. Archived from the original on 7 July 2020. Retrieved 11 July 2020.
5. "Journal of Differential Equations Editorial Board". Archived from the original on 19 August 2020. Retrieved 22 October 2020 – via www.journals.elsevier.com.
6. Chen, Chiun-Chuan; Lou, Yuan; Ninomiya, Hirokazu; Polacik, Peter; Wang, Xuefeng. Preface: DCDS-a special issue to honor Wei-Ming Ni’s 70th birthday. Discrete Contin. Dyn. Syst. 40 (2020), no. 6, i-ii.
External links
• Wei-Ming Ni publications indexed by Google Scholar
Authority control
International
• ISNI
• VIAF
National
• Norway
• Germany
• Israel
• United States
Academics
• CiNii
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
Other
• IdRef
|
Weibel's conjecture
In mathematics, Weibel's conjecture gives a criterion for vanishing of negative algebraic K-theory groups. The conjecture was proposed by Charles Weibel (1980) and proven in full generality by Kerz, Strunk & Tamme (2018) using methods from derived algebraic geometry. Previously partial cases had been proven by Morrow (2016), Kelly (2014), Cisinski (2013), Geisser & Hesselholt (2010), and Cortiñas et al. (2008).
Statement of the conjecture
Weibel's conjecture asserts that for a Noetherian scheme X of finite Krull dimension d, the K-groups vanish in degrees < −d:
$K_{i}(X)=0{\text{ for }}i<-d$
and asserts moreover a homotopy invariance property for negative K-groups
$K_{i}(X)=K_{i}(X\times \mathbb {A} ^{r}){\text{ for }}i\leq -d{\text{ and arbitrary }}r.$
References
• Weibel, Charles (1980), "K-theory and analytic isomorphisms", Inventiones Mathematicae, 61 (2): 177–197, doi:10.1007/bf01390120
• Kerz, Moritz; Strunk, Florian; Tamme, Georg (2018), "Algebraic K-theory and descent for blow-ups", Inventiones Mathematicae, 211 (2): 523–577, arXiv:1611.08466, doi:10.1007/s00222-017-0752-2, MR 3748313
|
Charles Weibel
Charles Alexander Weibel (born October 28, 1950 in Terre Haute, Indiana) is an American mathematician working on algebraic K-theory, algebraic geometry and homological algebra.
Weibel studied physics and mathematics at the University of Michigan, earning bachelor's degrees in both subjects in 1972. He was awarded a master's degree by the University of Chicago in 1973 and achieved his doctorate in 1977 under the supervision of Richard Swan (Homotopy in Algebraic K-Theory). From 1970 to 1976 he was an "Operations Research Analyst" at Standard Oil of Indiana, and from 1977 to 1978 was at the Institute for Advanced Study. In 1978 he became an assistant professor at the University of Pennsylvania. In 1980 he became an assistant professor at Rutgers University, where he was promoted to professor in 1989.
He joined Vladimir Voevodsky and Markus Rost in proving the (motivic) Bloch–Kato conjecture (2009).[1] It is a generalization of the Milnor conjecture of algebraic K-theory, which was proved by Voevodsky in the 1990s. He was a visiting professor in 1992 at the University of Paris and 1993 at the University of Strasbourg. Since 1983 he has been an editor of the Journal of Pure and Applied Algebra. He helped found the K-theory Foundation in 2010, and has been a managing editor of the Annals of K-theory since 2014. In 2014, he became a Fellow of the American Mathematical Society.[2]
Writings
• With Eric Friedlander, An overview over algebraic K-theory, in Algebraic K-theory and its applications, World Scientific 1999, pp. 1–119 (1997 Trieste Lecture Notes)
• Weibel, Charles A. (2013), The K-book, Graduate Studies in Mathematics, vol. 145, American Mathematical Society, Providence, RI, ISBN 978-0-8218-9132-2, MR 3076731
• With Carlo Mazza, Vladimir Voevodsky Lectures on Motivic Cohomology, Clay Monographs in Mathematics, American Mathematical Society 2006
• An introduction to homological algebra, Cambridge University Press 1994[3]
• The proof of the Bloch-Kato conjecture, Trieste Lectures 2007, ICTP Lecture Notes Series 23 (2008), 277–305
• "2007 Trieste Lectures on The Proof of the Bloch-Kato Conjecture by Charles Weibel" (PDF). ICTP SMR/1840-9.
Notes
1. The norm residue isomorphism theorem, Journal of Topology, Volume 2, 2009, pp. 346–372
2. List of Fellows of the American Mathematical Society, retrieved 2014-12-17
3. Rotman, Joseph (1996). "Book Review: An introduction to homological algebra". Bulletin of the American Mathematical Society. 33 (4): 473–477. doi:10.1090/S0273-0979-96-00684-2. ISSN 0273-0979.
References
• The original article was a Google translation of the corresponding article in German Wikipedia.
External links
• Charles Weibel at the Mathematics Genealogy Project
• Homepage
Authority control
International
• ISNI
• VIAF
National
• France
• BnF data
• Catalonia
• Germany
• Israel
• United States
• Czech Republic
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
|
Weierstrass elliptic function
In mathematics, the Weierstrass elliptic functions are elliptic functions that take a particularly simple form. They are named for Karl Weierstrass. This class of functions is also referred to as ℘-functions, and they are usually denoted by the symbol ℘, a uniquely fancy script p. They play an important role in the theory of elliptic functions. A ℘-function together with its derivative can be used to parameterize elliptic curves, and they generate the field of elliptic functions with respect to a given period lattice.
Symbol for Weierstrass $\wp $-function
"℘" redirects here; the symbol can also be used to denote a power set.
Definition
Let $\omega _{1},\omega _{2}\in \mathbb {C} $ be two complex numbers that are linearly independent over $\mathbb {R} $ and let $\Lambda :=\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}:=\{m\omega _{1}+n\omega _{2}:m,n\in \mathbb {Z} \}$ be the lattice generated by those numbers. Then the $\wp $-function is defined as follows:
$\wp (z,\omega _{1},\omega _{2}):=\wp (z,\Lambda ):={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right).$
This series converges absolutely and locally uniformly in $\mathbb {C} \setminus \Lambda $. Often $\wp (z)$ is written instead of $\wp (z,\omega _{1},\omega _{2})$ when the lattice is clear from context.
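As an illustrative numerical sketch (not part of the standard treatment; the period pair, sample point, and truncation level below are arbitrary choices), the defining series can be approximated by summing over a finite, symmetric portion of the lattice. The evenness and double periodicity discussed later are then visible directly:

```python
def wp(z, w1, w2, N=60):
    """Weierstrass p-function, approximated by truncating the lattice
    sum to indices |m|, |n| <= N (symmetric truncation keeps the
    truncation error small)."""
    total = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) == (0, 0):
                continue
            lam = m * w1 + n * w2
            total += 1 / (z - lam)**2 - 1 / lam**2
    return total

w1, w2 = 1.0, 1.3j        # a sample period pair, independent over R
z = 0.31 + 0.22j
print(abs(wp(z, w1, w2) - wp(-z, w1, w2)))       # evenness: ~0
print(abs(wp(z, w1, w2) - wp(z + w1, w1, w2)))   # periodicity: small
```

Because the truncation is symmetric, the evenness check holds up to rounding error, while the periodicity check carries a small truncation error.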
The Weierstrass $\wp $-function is constructed exactly in such a way that it has a pole of order two at each lattice point.
Because the sum $ \sum _{\lambda \in \Lambda }{\frac {1}{(z-\lambda )^{2}}}$ alone would not converge, it is necessary to add the term $ -{\frac {1}{\lambda ^{2}}}$.[1]
It is common to use $1$ and $\tau $ in the upper half-plane $\mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}$ as generators of the lattice. Dividing by $ \omega _{1}$ maps the lattice $\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}$ isomorphically onto the lattice $\mathbb {Z} +\mathbb {Z} \tau $ with $ \tau ={\tfrac {\omega _{2}}{\omega _{1}}}$. Because $-\tau $ can be substituted for $\tau $, without loss of generality we can assume $\tau \in \mathbb {H} $, and then define $\wp (z,\tau ):=\wp (z,1,\tau )$.
Motivation
A cubic of the form $C_{g_{2},g_{3}}^{\mathbb {C} }=\{(x,y)\in \mathbb {C} ^{2}:y^{2}=4x^{3}-g_{2}x-g_{3}\}$, where $g_{2},g_{3}\in \mathbb {C} $ are complex numbers with $g_{2}^{3}-27g_{3}^{2}\neq 0$, cannot be rationally parameterized.[2] Yet one still wants to find a way to parameterize it.
For the quadric $K=\left\{(x,y)\in \mathbb {R} ^{2}:x^{2}+y^{2}=1\right\}$, the unit circle, there exists a (non-rational) parameterization using the sine function and its derivative the cosine function:
$\psi :\mathbb {R} /2\pi \mathbb {Z} \to K,\quad t\mapsto (\sin t,\cos t).$
Because of the periodicity of the sine and cosine, $\mathbb {R} /2\pi \mathbb {Z} $ is chosen to be the domain, so that the function is bijective.
In a similar way one can get a parameterization of $C_{g_{2},g_{3}}^{\mathbb {C} }$ by means of the doubly periodic $\wp $-function (see in the section "Relation to elliptic curves"). This parameterization has the domain $\mathbb {C} /\Lambda $, which is topologically equivalent to a torus.[3]
There is another analogy to the trigonometric functions. Consider the integral function
$a(x)=\int _{0}^{x}{\frac {dy}{\sqrt {1-y^{2}}}}.$
It can be simplified by substituting $y=\sin t$ and $s=\arcsin x$:
$a(x)=\int _{0}^{s}dt=s=\arcsin x.$
That means $a^{-1}(x)=\sin x$. So the sine function is an inverse function of an integral function.[4]
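This relation between the integral and the arcsine can be checked numerically with a simple midpoint rule (a sketch; the step count is an arbitrary choice):

```python
import math

def a(x, steps=20000):
    """Midpoint-rule approximation of the integral of 1/sqrt(1 - y^2)
    over [0, x]."""
    h = x / steps
    return sum(h / math.sqrt(1 - ((k + 0.5) * h)**2) for k in range(steps))

print(a(0.5))           # ~0.5235987...
print(math.asin(0.5))   # pi/6 = 0.5235987...
```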
Elliptic functions are also inverse functions of integral functions, namely of elliptic integrals. In particular the $\wp $-function is obtained in the following way:
Let
$u(z)=-\int _{z}^{\infty }{\frac {ds}{\sqrt {4s^{3}-g_{2}s-g_{3}}}}.$
Then $u^{-1}$ can be extended to the complex plane and this extension equals the $\wp $-function.[5]
Properties
• $\wp $ is an even function. That means $\wp (z)=\wp (-z)$ for all $z\in \mathbb {C} \setminus \Lambda $, which can be seen in the following way:
${\begin{aligned}\wp (-z)&={\frac {1}{(-z)^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(-z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right)\\[4pt]&={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z+\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right)\\[4pt]&={\frac {1}{z^{2}}}+\sum _{\lambda \in \Lambda \setminus \{0\}}\left({\frac {1}{(z-\lambda )^{2}}}-{\frac {1}{\lambda ^{2}}}\right)=\wp (z).\end{aligned}}$
The second-to-last equality holds because $\{-\lambda :\lambda \in \Lambda \}=\Lambda $. Since the sum converges absolutely, this rearrangement does not change the limit.
• $\wp $ is meromorphic and its derivative is[6]
$\wp '(z)=-2\sum _{\lambda \in \Lambda }{\frac {1}{(z-\lambda )^{3}}}.$
• $\wp $ and $\wp '$ are doubly periodic with the periods $\omega _{1}$ and $\omega _{2}$.[6] This means:
${\begin{aligned}\wp (z+\omega _{1})&=\wp (z)=\wp (z+\omega _{2}),\ {\textrm {and}}\\[3mu]\wp '(z+\omega _{1})&=\wp '(z)=\wp '(z+\omega _{2}).\end{aligned}}$
It follows that $\wp (z+\lambda )=\wp (z)$ and $\wp '(z+\lambda )=\wp '(z)$ for all $\lambda \in \Lambda $. Functions which are meromorphic and doubly periodic are also called elliptic functions.
Laurent expansion
Let $r:=\min\{|\lambda |:0\neq \lambda \in \Lambda \}$. Then for $0<|z|<r$ the $\wp $-function has the following Laurent expansion:
$\wp (z)={\frac {1}{z^{2}}}+\sum _{n=1}^{\infty }(2n+1)G_{2n+2}z^{2n}$
where
$G_{n}=\sum _{0\neq \lambda \in \Lambda }\lambda ^{-n}$
for $n\geq 3$ are the so-called Eisenstein series.[6]
Differential equation
Set $g_{2}=60G_{4}$ and $g_{3}=140G_{6}$. Then the $\wp $-function satisfies the differential equation[6]
$\wp '^{2}(z)=4\wp ^{3}(z)-g_{2}\wp (z)-g_{3}.$
This relation can be verified by forming a linear combination of powers of $\wp $ and $\wp '$ to eliminate the pole at $z=0$. This yields an entire elliptic function that has to be constant by Liouville's theorem.[6]
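The differential equation can also be checked numerically by approximating $\wp $, $\wp '$ and the Eisenstein series with truncated lattice sums (an illustrative sketch; the lattice, sample point, and truncation level are arbitrary choices, and the small residual is truncation error):

```python
w1, w2, N = 1.0, 1.3j, 60
z = 0.31 + 0.22j

# Finite, symmetric portion of the lattice (excluding 0)
L = [m * w1 + n * w2
     for m in range(-N, N + 1) for n in range(-N, N + 1) if (m, n) != (0, 0)]

wp     = 1 / z**2 + sum(1 / (z - l)**2 - 1 / l**2 for l in L)   # p(z)
wp_der = -2 / z**3 + sum(-2 / (z - l)**3 for l in L)            # p'(z)
g2     = 60  * sum(l**-4 for l in L)                            # 60 * G_4
g3     = 140 * sum(l**-6 for l in L)                            # 140 * G_6

lhs = wp_der**2
rhs = 4 * wp**3 - g2 * wp - g3
print(abs(lhs - rhs) / abs(lhs))   # small relative residual
```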
Invariants
The coefficients of the above differential equation g2 and g3 are known as the invariants. Because they depend on the lattice $\Lambda $, they can be viewed as functions of $\omega _{1}$ and $\omega _{2}$.
The series expansion suggests that g2 and g3 are homogeneous functions of degree −4 and −6. That is[7]
$g_{2}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-4}g_{2}(\omega _{1},\omega _{2})$
$g_{3}(\lambda \omega _{1},\lambda \omega _{2})=\lambda ^{-6}g_{3}(\omega _{1},\omega _{2})$
for $\lambda \neq 0$.
If $\omega _{1}$ and $\omega _{2}$ are chosen in such a way that $\operatorname {Im} \left({\tfrac {\omega _{2}}{\omega _{1}}}\right)>0$, g2 and g3 can be interpreted as functions on the upper half-plane $\mathbb {H} :=\{z\in \mathbb {C} :\operatorname {Im} (z)>0\}$.
Let $\tau ={\tfrac {\omega _{2}}{\omega _{1}}}$. One has:[8]
$g_{2}(1,\tau )=\omega _{1}^{4}g_{2}(\omega _{1},\omega _{2}),$
$g_{3}(1,\tau )=\omega _{1}^{6}g_{3}(\omega _{1},\omega _{2}).$
That means g2 and g3 are only scaled by doing this. Set
$g_{2}(\tau ):=g_{2}(1,\tau )$
and
$g_{3}(\tau ):=g_{3}(1,\tau ).$
As functions of $\tau \in \mathbb {H} $, $g_{2}$ and $g_{3}$ are so-called modular forms.
The Fourier series for $g_{2}$ and $g_{3}$ are given as follows:[9]
$g_{2}(\tau )={\frac {4}{3}}\pi ^{4}\left[1+240\sum _{k=1}^{\infty }\sigma _{3}(k)q^{2k}\right]$
$g_{3}(\tau )={\frac {8}{27}}\pi ^{6}\left[1-504\sum _{k=1}^{\infty }\sigma _{5}(k)q^{2k}\right]$
where
$\sigma _{a}(k):=\sum _{d\mid {k}}d^{a}$
is the divisor function and $q=e^{\pi i\tau }$ is the nome.
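For a concrete period ratio, the Fourier series can be compared against the lattice-sum definitions $g_{2}=60G_{4}$ and $g_{3}=140G_{6}$ (an illustrative sketch; the value of $\tau $ and the truncation levels are arbitrary choices):

```python
import cmath, math

tau = 1.2j                                   # sample period ratio in H
q = cmath.exp(cmath.pi * 1j * tau)           # nome, as defined above

def sigma(a, k):
    """Divisor function: sum of d**a over the divisors d of k."""
    return sum(d**a for d in range(1, k + 1) if k % d == 0)

g2_series = (4 / 3) * math.pi**4 * (
    1 + 240 * sum(sigma(3, k) * q**(2 * k) for k in range(1, 60)))
g3_series = (8 / 27) * math.pi**6 * (
    1 - 504 * sum(sigma(5, k) * q**(2 * k) for k in range(1, 60)))

# The same invariants computed directly from the lattice Z + Z*tau
N = 80
L = [m + n * tau for m in range(-N, N + 1) for n in range(-N, N + 1)
     if (m, n) != (0, 0)]
g2_lattice = 60 * sum(l**-4 for l in L)
g3_lattice = 140 * sum(l**-6 for l in L)

print(abs(g2_series - g2_lattice) / abs(g2_series))  # small
print(abs(g3_series - g3_lattice) / abs(g3_series))  # small
```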
Modular discriminant
The modular discriminant Δ is defined as the discriminant of the polynomial on the right-hand side of the above differential equation:
$\Delta =g_{2}^{3}-27g_{3}^{2}.$
The discriminant is a modular form of weight 12. That is, under the action of the modular group, it transforms as
$\Delta \left({\frac {a\tau +b}{c\tau +d}}\right)=\left(c\tau +d\right)^{12}\Delta (\tau )$
where $a,b,c,d\in \mathbb {Z} $ with ad − bc = 1.[10]
Note that $\Delta =(2\pi )^{12}\eta ^{24}$ where $\eta $ is the Dedekind eta function.[11]
For the Fourier coefficients of $\Delta $, see Ramanujan tau function.
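The identity $\Delta =(2\pi )^{12}\eta ^{24}$ can be checked numerically for a sample $\tau $, computing $g_{2}$ and $g_{3}$ from their Fourier series and $\eta $ from its product formula (a sketch; the choice of $\tau $ and the truncation levels are arbitrary):

```python
import cmath, math

tau = 1.2j
q = cmath.exp(cmath.pi * 1j * tau)          # nome e^{pi i tau}

def sigma(a, k):
    """Divisor function: sum of d**a over the divisors d of k."""
    return sum(d**a for d in range(1, k + 1) if k % d == 0)

g2 = (4 / 3) * math.pi**4 * (
    1 + 240 * sum(sigma(3, k) * q**(2 * k) for k in range(1, 60)))
g3 = (8 / 27) * math.pi**6 * (
    1 - 504 * sum(sigma(5, k) * q**(2 * k) for k in range(1, 60)))
delta = g2**3 - 27 * g3**2

# Dedekind eta: eta(tau) = e^{pi i tau/12} * prod_n (1 - e^{2 pi i n tau})
eta = cmath.exp(cmath.pi * 1j * tau / 12)
for n in range(1, 200):
    eta *= 1 - cmath.exp(2 * cmath.pi * 1j * n * tau)

print(abs(delta - (2 * math.pi)**12 * eta**24) / abs(delta))  # ~0
```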
The constants e1, e2 and e3
$e_{1}$, $e_{2}$ and $e_{3}$ are usually used to denote the values of the $\wp $-function at the half-periods.
$e_{1}\equiv \wp \left({\frac {\omega _{1}}{2}}\right)$
$e_{2}\equiv \wp \left({\frac {\omega _{2}}{2}}\right)$
$e_{3}\equiv \wp \left({\frac {\omega _{1}+\omega _{2}}{2}}\right)$
They are pairwise distinct and only depend on the lattice $\Lambda $ and not on its generators.[12]
$e_{1}$, $e_{2}$ and $e_{3}$ are the roots of the cubic polynomial $4\wp (z)^{3}-g_{2}\wp (z)-g_{3}$ and are related by the equation:
$e_{1}+e_{2}+e_{3}=0.$
Because those roots are distinct, the discriminant $\Delta $ does not vanish on the upper half-plane.[13] Now we can rewrite the differential equation:
$\wp '^{2}(z)=4(\wp (z)-e_{1})(\wp (z)-e_{2})(\wp (z)-e_{3}).$
That means the half-periods are zeros of $\wp '$.
The invariants $g_{2}$ and $g_{3}$ can be expressed in terms of these constants in the following way:[14]
$g_{2}=-4(e_{1}e_{2}+e_{1}e_{3}+e_{2}e_{3})$
$g_{3}=4e_{1}e_{2}e_{3}$
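These relations can be illustrated numerically by evaluating a truncated lattice sum for $\wp $ at the half-periods (a sketch; the lattice and truncation level are arbitrary choices, and the small residuals are truncation error):

```python
def wp(z, w1, w2, N=60):
    """Truncated lattice sum for the Weierstrass p-function."""
    s = 1 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) != (0, 0):
                lam = m * w1 + n * w2
                s += 1 / (z - lam)**2 - 1 / lam**2
    return s

w1, w2, N = 1.0, 1.3j, 60
e1 = wp(w1 / 2, w1, w2)
e2 = wp(w2 / 2, w1, w2)
e3 = wp((w1 + w2) / 2, w1, w2)

L = [m * w1 + n * w2 for m in range(-N, N + 1) for n in range(-N, N + 1)
     if (m, n) != (0, 0)]
g2 = 60 * sum(l**-4 for l in L)
g3 = 140 * sum(l**-6 for l in L)

print(abs(e1 + e2 + e3))                            # ~0
print(abs(g2 + 4 * (e1 * e2 + e1 * e3 + e2 * e3)))  # ~0
print(abs(g3 - 4 * e1 * e2 * e3))                   # ~0
```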
$e_{1}$, $e_{2}$ and $e_{3}$ are related to the modular lambda function:
$\lambda (\tau )={\frac {e_{3}-e_{2}}{e_{1}-e_{2}}},\quad \tau ={\frac {\omega _{2}}{\omega _{1}}}.$
Relation to Jacobi's elliptic functions
For numerical work, it is often convenient to calculate the Weierstrass elliptic function in terms of Jacobi's elliptic functions.
The basic relations are:[15]
$\wp (z)=e_{3}+{\frac {e_{1}-e_{3}}{\operatorname {sn} ^{2}w}}=e_{2}+(e_{1}-e_{3}){\frac {\operatorname {dn} ^{2}w}{\operatorname {sn} ^{2}w}}=e_{1}+(e_{1}-e_{3}){\frac {\operatorname {cn} ^{2}w}{\operatorname {sn} ^{2}w}}$
where $e_{1}$, $e_{2}$ and $e_{3}$ are the three roots described above and where the modulus k of the Jacobi functions equals
$k={\sqrt {\frac {e_{2}-e_{3}}{e_{1}-e_{3}}}}$
and their argument w equals
$w=z{\sqrt {e_{1}-e_{3}}}.$
Relation to Jacobi's theta functions
The function $\wp (z,\tau )=\wp (z,1,\tau )$ can be represented by Jacobi's theta functions:
$\wp (z,\tau )=\left(\pi \theta _{2}(0,q)\theta _{3}(0,q){\frac {\theta _{4}(\pi z,q)}{\theta _{1}(\pi z,q)}}\right)^{2}-{\frac {\pi ^{2}}{3}}\left(\theta _{2}^{4}(0,q)+\theta _{3}^{4}(0,q)\right)$
where $q=e^{\pi i\tau }$ is the nome and $\tau $ is the period ratio $(\tau \in \mathbb {H} )$.[16] This also provides a very rapid algorithm for computing $\wp (z,\tau )$.
Relation to elliptic curves
Consider the projective cubic curve
${\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }=\{(x,y)\in \mathbb {C} ^{2}:y^{2}=4x^{3}-g_{2}x-g_{3}\}\cup \{\infty \}\subset \mathbb {P} _{\mathbb {C} }^{2}.$
For this cubic, also called a Weierstrass cubic, there exists no rational parameterization if $\Delta \neq 0$.[2] In this case it is also called an elliptic curve. Nevertheless, there is a parameterization that uses the $\wp $-function and its derivative $\wp '$:[17]
$\varphi :\mathbb {C} /\Lambda \to {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} },\quad {\bar {z}}\mapsto {\begin{cases}(\wp (z),\wp '(z),1)&{\bar {z}}\neq 0\\\infty \quad &{\bar {z}}=0\end{cases}}$
Now the map $\varphi $ is bijective and parameterizes the elliptic curve ${\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }$.
$\mathbb {C} /\Lambda $ is an abelian group and a topological space, equipped with the quotient topology.
It can be shown that every Weierstrass cubic is given in such a way. That is to say that for every pair $g_{2},g_{3}\in \mathbb {C} $ with $\Delta =g_{2}^{3}-27g_{3}^{2}\neq 0$ there exists a lattice $\mathbb {Z} \omega _{1}+\mathbb {Z} \omega _{2}$, such that
$g_{2}=g_{2}(\omega _{1},\omega _{2})$ and $g_{3}=g_{3}(\omega _{1},\omega _{2})$.[18]
The statement that every elliptic curve over $\mathbb {Q} $ can be parameterized by modular functions is known as the modularity theorem. This is an important theorem in number theory. It was part of Andrew Wiles's proof (1995) of Fermat's Last Theorem.
Addition theorems
Let $z,w\in \mathbb {C} $, so that $z,w,z+w,z-w\notin \Lambda $. Then one has:[19]
$\wp (z+w)={\frac {1}{4}}\left[{\frac {\wp '(z)-\wp '(w)}{\wp (z)-\wp (w)}}\right]^{2}-\wp (z)-\wp (w).$
As well as the duplication formula:[19]
$\wp (2z)={\frac {1}{4}}\left[{\frac {\wp ''(z)}{\wp '(z)}}\right]^{2}-2\wp (z).$
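Both formulas can be verified numerically with truncated lattice sums for $\wp $ and its first two derivatives (an illustrative sketch; the lattice, sample points, and truncation level are arbitrary choices, and the small residuals are truncation error):

```python
def wp(z, w1, w2, N=60, order=0):
    """Truncated lattice sums for p (order=0), p' (order=1), p'' (order=2)."""
    pts = [m * w1 + n * w2 for m in range(-N, N + 1) for n in range(-N, N + 1)]
    if order == 0:
        return 1 / z**2 + sum(1 / (z - l)**2 - 1 / l**2 for l in pts if l != 0)
    if order == 1:
        return sum(-2 / (z - l)**3 for l in pts)   # lambda = 0 included
    return sum(6 / (z - l)**4 for l in pts)        # second derivative

w1, w2 = 1.0, 1.3j
z, w = 0.31 + 0.22j, -0.12 + 0.43j   # z, w, z+w, z-w all avoid the lattice

# Addition theorem
add_lhs = wp(z + w, w1, w2)
add_rhs = (0.25 * ((wp(z, w1, w2, order=1) - wp(w, w1, w2, order=1))
                   / (wp(z, w1, w2) - wp(w, w1, w2)))**2
           - wp(z, w1, w2) - wp(w, w1, w2))

# Duplication formula
dup_lhs = wp(2 * z, w1, w2)
dup_rhs = (0.25 * (wp(z, w1, w2, order=2) / wp(z, w1, w2, order=1))**2
           - 2 * wp(z, w1, w2))

print(abs(add_lhs - add_rhs), abs(dup_lhs - dup_rhs))  # both small
```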
These formulas also have a geometric interpretation, if one looks at the elliptic curve ${\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }$ together with the mapping ${\varphi }:\mathbb {C} /\Lambda \to {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }$ as in the previous section.
The group structure of $(\mathbb {C} /\Lambda ,+)$ translates to the curve ${\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }$ and can be geometrically interpreted there:
The sum of three pairwise different points $a,b,c\in {\bar {C}}_{g_{2},g_{3}}^{\mathbb {C} }$ is zero if and only if they lie on the same line in $\mathbb {P} _{\mathbb {C} }^{2}$.[20]
This is equivalent to:
$\det \left({\begin{array}{rrr}1&\wp (u+v)&-\wp '(u+v)\\1&\wp (v)&\wp '(v)\\1&\wp (u)&\wp '(u)\\\end{array}}\right)=0,$
where $\wp (u)=a$, $\wp (v)=b$ and $u,v\notin \Lambda $.[21]
Typography
Weierstrass's elliptic function is usually written with a rather special, lower case script letter ℘.[footnote 1]
In computing, the letter ℘ is available as \wp in TeX. In Unicode the code point is U+2118 ℘ SCRIPT CAPITAL P (&weierp;, &wp;), with the more correct alias weierstrass elliptic function.[footnote 2] In HTML, it can be escaped as &weierp;.
Character information
• Preview: ℘
• Unicode name: SCRIPT CAPITAL P / WEIERSTRASS ELLIPTIC FUNCTION
• Code point: U+2118 (decimal 8472)
• UTF-8 encoding: E2 84 98 (hex), 226 132 152 (decimal)
• Numeric character references: &#8472; (decimal), &#x2118; (hex)
• Named character references: &weierp;, &wp;
See also
• Weierstrass functions
• Jacobi elliptic functions
• Lemniscate elliptic functions
Footnotes
1. This symbol was in use at least as early as 1890. The first edition of A Course of Modern Analysis by E. T. Whittaker in 1902 also used it.[22]
2. The Unicode Consortium has acknowledged two problems with the letter's name: the letter is in fact lowercase, and it is not a "script" class letter, like U+1D4C5 𝓅 MATHEMATICAL SCRIPT SMALL P, but the letter for Weierstrass's elliptic function. Unicode added the alias as a correction.[23][24]
References
1. Apostol, Tom M. (1976). Modular functions and Dirichlet series in number theory. New York: Springer-Verlag. p. 9. ISBN 0-387-90185-X. OCLC 2121639.
2. Hulek, Klaus. (2012), Elementare Algebraische Geometrie : Grundlegende Begriffe und Techniken mit zahlreichen Beispielen und Anwendungen (in German) (2., überarb. u. erw. Aufl. 2012 ed.), Wiesbaden: Vieweg+Teubner Verlag, p. 8, ISBN 978-3-8348-2348-9
3. Rolf Busam (2006), Funktionentheorie 1 (in German) (4., korr. und erw. Aufl ed.), Berlin: Springer, p. 259, ISBN 978-3-540-32058-6
4. Jeremy Gray (2015), Real and the complex: a history of analysis in the 19th century (in German), Cham, p. 71, ISBN 978-3-319-23715-2{{citation}}: CS1 maint: location missing publisher (link)
5. Rolf Busam (2006), Funktionentheorie 1 (in German) (4., korr. und erw. Aufl ed.), Berlin: Springer, p. 294, ISBN 978-3-540-32058-6
6. Apostol, Tom M. (1976), Modular functions and Dirichlet series in number theory (in German), New York: Springer-Verlag, p. 11, ISBN 0-387-90185-X
7. Apostol, Tom M. (1976). Modular functions and Dirichlet series in number theory. New York: Springer-Verlag. p. 14. ISBN 0-387-90185-X. OCLC 2121639.
8. Apostol, Tom M. (1976), Modular functions and Dirichlet series in number theory (in German), New York: Springer-Verlag, p. 14, ISBN 0-387-90185-X
9. Apostol, Tom M. (1990). Modular functions and Dirichlet series in number theory (2nd ed.). New York: Springer-Verlag. p. 20. ISBN 0-387-97127-0. OCLC 20262861.
10. Apostol, Tom M. (1976). Modular functions and Dirichlet series in number theory. New York: Springer-Verlag. p. 50. ISBN 0-387-90185-X. OCLC 2121639.
11. Chandrasekharan, K. (Komaravolu), 1920- (1985). Elliptic functions. Berlin: Springer-Verlag. p. 122. ISBN 0-387-15295-4. OCLC 12053023.{{cite book}}: CS1 maint: multiple names: authors list (link)
12. Busam, Rolf (2006), Funktionentheorie 1 (in German) (4., korr. und erw. Aufl ed.), Berlin: Springer, p. 270, ISBN 978-3-540-32058-6
13. Apostol, Tom M. (1976), Modular functions and Dirichlet series in number theory (in German), New York: Springer-Verlag, p. 13, ISBN 0-387-90185-X
14. K. Chandrasekharan (1985), Elliptic functions (in German), Berlin: Springer-Verlag, p. 33, ISBN 0-387-15295-4
15. Korn GA, Korn TM (1961). Mathematical Handbook for Scientists and Engineers. New York: McGraw–Hill. p. 721. LCCN 59014456.
16. Reinhardt, W. P.; Walker, P. L. (2010), "Weierstrass Elliptic and Modular Functions", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
17. Hulek, Klaus. (2012), Elementare Algebraische Geometrie : Grundlegende Begriffe und Techniken mit zahlreichen Beispielen und Anwendungen (in German) (2., überarb. u. erw. Aufl. 2012 ed.), Wiesbaden: Vieweg+Teubner Verlag, p. 12, ISBN 978-3-8348-2348-9
18. Hulek, Klaus. (2012), Elementare Algebraische Geometrie : Grundlegende Begriffe und Techniken mit zahlreichen Beispielen und Anwendungen (in German) (2., überarb. u. erw. Aufl. 2012 ed.), Wiesbaden: Vieweg+Teubner Verlag, p. 111, ISBN 978-3-8348-2348-9
Weierstrass M-test
In mathematics, the Weierstrass M-test is a test for determining whether an infinite series of functions converges uniformly and absolutely. It applies to series whose terms are bounded functions with real or complex values, and is analogous to the comparison test for determining the convergence of series of real or complex numbers. It is named after the German mathematician Karl Weierstrass (1815–1897).
Statement
Weierstrass M-test. Suppose that (fn) is a sequence of real- or complex-valued functions defined on a set A, and that there is a sequence of non-negative numbers (Mn) satisfying the conditions
• $|f_{n}(x)|\leq M_{n}$ for all $n\geq 1$ and all $x\in A$, and
• $\sum _{n=1}^{\infty }M_{n}$ converges.
Then the series
$\sum _{n=1}^{\infty }f_{n}(x)$
converges absolutely and uniformly on A.
The result is often used in combination with the uniform limit theorem. Together they say that if, in addition to the above conditions, the set A is a topological space and the functions fn are continuous on A, then the series converges to a continuous function.
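As a numerical illustration, the sketch below checks the uniform Cauchy bound for the series with terms $f_{n}(x)=\sin(nx)/n^{2}$, taking $M_{n}=1/n^{2}$ (both the series and the sampling grid are arbitrary choices for this example): the gap between two partial sums never exceeds the corresponding tail of $\sum M_{n}$, uniformly in x.

```python
import math

# Illustrative choice: f_n(x) = sin(n*x)/n**2, so |f_n(x)| <= M_n = 1/n**2.
def partial_sum(x, n):
    return sum(math.sin(k * x) / k**2 for k in range(1, n + 1))

grid = [i * 0.01 for i in range(629)]            # sample points covering [0, 2*pi]
tail = sum(1 / k**2 for k in range(101, 201))    # sum of M_k for k = 101..200

# Uniform Cauchy estimate: |S_200(x) - S_100(x)| <= sum of the M_k, for every x.
gap = max(abs(partial_sum(x, 200) - partial_sum(x, 100)) for x in grid)
```

Since the bound `tail` is independent of x, the partial sums form a uniformly Cauchy sequence, which is exactly how the proof below proceeds.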
Proof
Consider the sequence of functions
$S_{n}(x)=\sum _{k=1}^{n}f_{k}(x).$
Since the series $\sum _{n=1}^{\infty }M_{n}$ converges and Mn ≥ 0 for every n, the Cauchy criterion gives
$\forall \varepsilon >0:\exists N:\forall m>n>N:\sum _{k=n+1}^{m}M_{k}<\varepsilon .$
For the chosen N,
$\forall x\in A:\forall m>n>N$
$\left|S_{m}(x)-S_{n}(x)\right|=\left|\sum _{k=n+1}^{m}f_{k}(x)\right|{\overset {(1)}{\leq }}\sum _{k=n+1}^{m}|f_{k}(x)|\leq \sum _{k=n+1}^{m}M_{k}<\varepsilon .$
(Inequality (1) follows from the triangle inequality.)
The sequence Sn(x) is thus a Cauchy sequence in R or C, and by completeness, it converges to some number S(x) that depends on x. For n > N we can write
$\left|S(x)-S_{n}(x)\right|=\left|\lim _{m\to \infty }S_{m}(x)-S_{n}(x)\right|=\lim _{m\to \infty }\left|S_{m}(x)-S_{n}(x)\right|\leq \varepsilon .$
Since N does not depend on x, this means that the sequence Sn of partial sums converges uniformly to the function S. Hence, by definition, the series $\sum _{k=1}^{\infty }f_{k}(x)$ converges uniformly.
Analogously, one can prove that $\sum _{k=1}^{\infty }|f_{k}(x)|$ converges uniformly.
Generalization
A more general version of the Weierstrass M-test holds if the common codomain of the functions (fn) is a Banach space, in which case the premise
$|f_{n}(x)|\leq M_{n}$
is to be replaced by
$\|f_{n}(x)\|\leq M_{n}$,
where $\|\cdot \|$ is the norm on the Banach space. For an example of the use of this test on a Banach space, see the article Fréchet derivative.
See also
• Example of Weierstrass M-test
References
• Rudin, Walter (1991). Functional Analysis. International Series in Pure and Applied Mathematics. Vol. 8 (Second ed.). New York, NY: McGraw-Hill Science/Engineering/Math. ISBN 978-0-07-054236-5. OCLC 21163277.
• Rudin, Walter (May 1986). Real and Complex Analysis. McGraw-Hill Science/Engineering/Math. ISBN 0-07-054234-1.
• Rudin, Walter (1976). Principles of Mathematical Analysis. McGraw-Hill Science/Engineering/Math.
• Whittaker, E.T.; Watson, G.N. (1927). A Course of Modern Analysis (Fourth ed.). Cambridge University Press. p. 49.
Stone–Weierstrass theorem
In mathematical analysis, the Weierstrass approximation theorem states that every continuous function defined on a closed interval [a, b] can be uniformly approximated as closely as desired by a polynomial function. Because polynomials are among the simplest functions, and because computers can directly evaluate polynomials, this theorem has both practical and theoretical relevance, especially in polynomial interpolation. The original version of this result was established by Karl Weierstrass in 1885 using the Weierstrass transform.
Marshall H. Stone considerably generalized the theorem[1] and simplified the proof.[2] His result is known as the Stone–Weierstrass theorem. The Stone–Weierstrass theorem generalizes the Weierstrass approximation theorem in two directions: instead of the real interval [a, b], an arbitrary compact Hausdorff space X is considered, and instead of the algebra of polynomial functions, a variety of other families of continuous functions on $X$ are shown to suffice, as is detailed below. The Stone–Weierstrass theorem is a vital result in the study of the algebra of continuous functions on a compact Hausdorff space.
Further, there is a generalization of the Stone–Weierstrass theorem to noncompact Tychonoff spaces, namely, any continuous function on a Tychonoff space is approximated uniformly on compact sets by algebras of the type appearing in the Stone–Weierstrass theorem and described below.
A different generalization of Weierstrass' original theorem is Mergelyan's theorem, which generalizes it to functions defined on certain subsets of the complex plane.
Weierstrass approximation theorem
The statement of the approximation theorem as originally discovered by Weierstrass is as follows:
Weierstrass Approximation Theorem — Suppose f is a continuous real-valued function defined on the real interval [a, b]. For every ε > 0, there exists a polynomial p such that for all x in [a, b], we have | f (x) − p(x)| < ε, or equivalently, the supremum norm || f − p|| < ε.
A constructive proof of this theorem using Bernstein polynomials is outlined on that page.
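That constructive approach can be sketched numerically. The code below evaluates Bernstein polynomials of a sample function on [0, 1]; the choice f(x) = x² is arbitrary, but convenient because for it the approximation error is known in closed form to be x(1 − x)/n, hence at most 1/(4n).

```python
from math import comb

def bernstein(f, n, x):
    """Evaluate the n-th Bernstein polynomial of f at x in [0, 1]."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: x * x                        # sample continuous function on [0, 1]
grid = [i / 200 for i in range(201)]       # uniform sample of [0, 1]
err = lambda n: max(abs(bernstein(f, n, x) - f(x)) for x in grid)
```

For f(x) = x² one has B_n(f)(x) = x² + x(1 − x)/n, so the maximal error on this grid is exactly 1/(4n), attained at x = 1/2; the error shrinks like 1/n, consistent with uniform convergence.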
Applications
As a consequence of the Weierstrass approximation theorem, one can show that the space C[a, b] is separable: the polynomial functions are dense, and each polynomial function can be uniformly approximated by one with rational coefficients; there are only countably many polynomials with rational coefficients. Since C[a, b] is metrizable and separable it follows that C[a, b] has cardinality at most $2^{\aleph _{0}}$. (Remark: This cardinality result also follows from the fact that a continuous function on the reals is uniquely determined by its restriction to the rationals.)
Stone–Weierstrass theorem, real version
The set C[a, b] of continuous real-valued functions on [a, b], together with the supremum norm || f || = supa ≤ x ≤ b | f (x)|, is a Banach algebra (that is, an associative algebra and a Banach space such that || fg|| ≤ || f ||·||g|| for all f, g). The set of all polynomial functions forms a subalgebra of C[a, b] (that is, a vector subspace of C[a, b] that is closed under multiplication of functions), and the content of the Weierstrass approximation theorem is that this subalgebra is dense in C[a, b].
Stone starts with an arbitrary compact Hausdorff space X and considers the algebra C(X, R) of real-valued continuous functions on X, with the topology of uniform convergence. He wants to find subalgebras of C(X, R) which are dense. It turns out that the crucial property that a subalgebra must satisfy is that it separates points: a set A of functions defined on X is said to separate points if, for every two different points x and y in X there exists a function p in A with p(x) ≠ p(y). Now we may state:
Stone–Weierstrass Theorem (real numbers) — Suppose X is a compact Hausdorff space and A is a subalgebra of C(X, R) which contains a non-zero constant function. Then A is dense in C(X, R) if and only if it separates points.
This implies Weierstrass' original statement since the polynomials on [a, b] form a subalgebra of C[a, b] which contains the constants and separates points.
Locally compact version
A version of the Stone–Weierstrass theorem is also true when X is only locally compact. Let C0(X, R) be the space of real-valued continuous functions on X that vanish at infinity; that is, a continuous function f is in C0(X, R) if, for every ε > 0, there exists a compact set K ⊂ X such that | f | < ε on X \ K. Again, C0(X, R) is a Banach algebra with the supremum norm. A subalgebra A of C0(X, R) is said to vanish nowhere if not all of the elements of A simultaneously vanish at a point; that is, for every x in X, there is some f in A such that f (x) ≠ 0. The theorem generalizes as follows:
Stone–Weierstrass Theorem (locally compact spaces) — Suppose X is a locally compact Hausdorff space and A is a subalgebra of C0(X, R). Then A is dense in C0(X, R) (given the topology of uniform convergence) if and only if it separates points and vanishes nowhere.
This version clearly implies the previous version in the case when X is compact, since in that case C0(X, R) = C(X, R). There are also more general versions of the Stone–Weierstrass theorem that weaken the assumption of local compactness.[3]
Applications
The Stone–Weierstrass theorem can be used to prove the following two statements, which go beyond Weierstrass's result.
• If f is a continuous real-valued function defined on the set [a, b] × [c, d] and ε > 0, then there exists a polynomial function p in two variables such that | f (x, y) − p(x, y) | < ε for all x in [a, b] and y in [c, d].
• If X and Y are two compact Hausdorff spaces and f : X × Y → R is a continuous function, then for every ε > 0 there exist n > 0 and continuous functions f1, ..., fn on X and continuous functions g1, ..., gn on Y such that || f − Σ fi gi || < ε.
The theorem has many other applications to analysis, including:
• Fourier series: The set of linear combinations of functions en(x) = e2πinx, n ∈ Z is dense in C([0, 1]/{0, 1}), where we identify the endpoints of the interval [0, 1] to obtain a circle. An important consequence of this is that the en are an orthonormal basis of the space L2([0, 1]) of square-integrable functions on [0, 1].
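A numerical sketch of this density statement: the Fejér (Cesàro) means of the Fourier series are trigonometric polynomials in the functions en, and for a continuous periodic function they converge uniformly. The test function |sin 2πx| and the grid size below are arbitrary choices, and the Fourier coefficients are approximated by Riemann sums.

```python
import math, cmath

M = 512                                            # sample points on the circle
xs = [j / M for j in range(M)]
f = [abs(math.sin(2 * math.pi * x)) for x in xs]   # continuous, 1-periodic

def coeff(n):
    """Fourier coefficient c_n of f, approximated by a Riemann sum."""
    return sum(f[j] * cmath.exp(-2j * math.pi * n * xs[j]) for j in range(M)) / M

def fejer_mean(N):
    """Values at xs of the N-th Fejér mean, a trigonometric polynomial."""
    cs = {n: (1 - abs(n) / (N + 1)) * coeff(n) for n in range(-N, N + 1)}
    return [sum(cs[n] * cmath.exp(2j * math.pi * n * x) for n in cs).real
            for x in xs]

def err(N):
    return max(abs(s - v) for s, v in zip(fejer_mean(N), f))
```

The sup-norm error `err(N)` decreases as N grows, illustrating that trigonometric polynomials come uniformly close to any continuous function on the circle.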
Stone–Weierstrass theorem, complex version
Slightly more general is the following theorem, where we consider the algebra $C(X,\mathbb {C} )$ of complex-valued continuous functions on the compact space $X$, again with the topology of uniform convergence. This is a C*-algebra with the *-operation given by pointwise complex conjugation.
Stone–Weierstrass Theorem (complex numbers) — Let $X$ be a compact Hausdorff space and let $S$ be a separating subset of $C(X,\mathbb {C} )$. Then the complex unital *-algebra generated by $S$ is dense in $C(X,\mathbb {C} )$.
The complex unital *-algebra generated by $S$ consists of all those functions that can be obtained from the elements of $S$ by throwing in the constant function 1 and adding them, multiplying them, conjugating them, or multiplying them with complex scalars, and repeating finitely many times.
This theorem implies the real version, because if a net of complex-valued functions uniformly approximates a given function, $f_{n}\to f$, then the real parts of those functions uniformly approximate the real part of that function, $\operatorname {Re} f_{n}\to \operatorname {Re} f$; and because for real subsets, $S\subset C(X,\mathbb {R} )\subset C(X,\mathbb {C} ),$ taking the real parts of the generated complex unital (self-adjoint) algebra yields the generated real unital algebra.
As in the real case, an analog of this theorem is true for locally compact Hausdorff spaces.
Stone–Weierstrass theorem, quaternion version
Following Holladay (1957), consider the algebra C(X, H) of quaternion-valued continuous functions on the compact space X, again with the topology of uniform convergence.
If a quaternion q is written in the form $q=a+ib+jc+kd$, then
• its scalar part a is the real number ${\frac {q-iqi-jqj-kqk}{4}}$.
Likewise
• the scalar part of −qi is b which is the real number ${\frac {-qi-iq+jqk-kqj}{4}}$.
• the scalar part of −qj is c which is the real number ${\frac {-qj-iqk-jq+kqi}{4}}$.
• the scalar part of −qk is d which is the real number ${\frac {-qk+iqj-jqi-kq}{4}}$.
Then we may state:
Stone–Weierstrass Theorem (quaternion numbers) — Suppose X is a compact Hausdorff space and A is a subalgebra of C(X, H) which contains a non-zero constant function. Then A is dense in C(X, H) if and only if it separates points.
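The four scalar-extraction identities above can be verified directly with a small Hamilton-product implementation (the sample quaternion below is an arbitrary choice):

```python
# Quaternions as 4-tuples (a, b, c, d) = a + b*i + c*j + d*k.
def qmul(p, q):
    """Hamilton product of two quaternions."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def comb4(*terms):
    """Signed sum of quaternions, divided by 4: comb4((sign, quat), ...)."""
    return tuple(sum(s * p[t] for s, p in terms) / 4 for t in range(4))

I, J, K = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
q = (2.0, -3.0, 5.0, 7.0)                       # a = 2, b = -3, c = 5, d = 7

part_a = comb4((1, q), (-1, qmul(qmul(I, q), I)),            # (q-iqi-jqj-kqk)/4
               (-1, qmul(qmul(J, q), J)), (-1, qmul(qmul(K, q), K)))
part_b = comb4((-1, qmul(q, I)), (-1, qmul(I, q)),           # (-qi-iq+jqk-kqj)/4
               (1, qmul(qmul(J, q), K)), (-1, qmul(qmul(K, q), J)))
part_c = comb4((-1, qmul(q, J)), (-1, qmul(qmul(I, q), K)),  # (-qj-iqk-jq+kqi)/4
               (-1, qmul(J, q)), (1, qmul(qmul(K, q), I)))
part_d = comb4((-1, qmul(q, K)), (1, qmul(qmul(I, q), J)),   # (-qk+iqj-jqi-kq)/4
               (-1, qmul(qmul(J, q), I)), (-1, qmul(K, q)))
```

Each `part_*` comes out as a purely real quaternion whose scalar entry recovers the corresponding coefficient a, b, c, d, confirming that the imaginary parts cancel in the four combinations.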
Stone–Weierstrass theorem, C*-algebra version
The space of complex-valued continuous functions on a compact Hausdorff space $X$, i.e. $C(X,\mathbb {C} )$, is the canonical example of a unital commutative C*-algebra ${\mathfrak {A}}$. The space X may be viewed as the space of pure states on ${\mathfrak {A}}$, with the weak-* topology. Following the above cue, a non-commutative extension of the Stone–Weierstrass theorem, which remains unsolved, is as follows:
Conjecture — If a unital C*-algebra ${\mathfrak {A}}$ has a C*-subalgebra ${\mathfrak {B}}$ which separates the pure states of ${\mathfrak {A}}$, then ${\mathfrak {A}}={\mathfrak {B}}$.
In 1960, Jim Glimm proved a weaker version of the above conjecture.
Stone–Weierstrass theorem (C*-algebras)[4] — If a unital C*-algebra ${\mathfrak {A}}$ has a C*-subalgebra ${\mathfrak {B}}$ which separates the pure state space (i.e. the weak-* closure of the pure states) of ${\mathfrak {A}}$, then ${\mathfrak {A}}={\mathfrak {B}}$.
Lattice versions
Let X be a compact Hausdorff space. Stone's original proof of the theorem used the idea of lattices in C(X, R). A subset L of C(X, R) is called a lattice if for any two elements f, g ∈ L, the functions max{ f, g}, min{ f, g} also belong to L. The lattice version of the Stone–Weierstrass theorem states:
Stone–Weierstrass Theorem (lattices) — Suppose X is a compact Hausdorff space with at least two points and L is a lattice in C(X, R) with the property that for any two distinct elements x and y of X and any two real numbers a and b there exists an element f ∈ L with f (x) = a and f (y) = b. Then L is dense in C(X, R).
The above versions of Stone–Weierstrass can be proven from this version once one realizes that the lattice property can also be formulated using the absolute value | f | which in turn can be approximated by polynomials in f . A variant of the theorem applies to linear subspaces of C(X, R) closed under max:[5]
Stone–Weierstrass Theorem (max-closed) — Suppose X is a compact Hausdorff space and B is a family of functions in C(X, R) such that
1. B separates points.
2. B contains the constant function 1.
3. If f ∈ B then af ∈ B for all constants a ∈ R.
4. If f, g ∈ B, then f + g, max{ f, g} ∈ B.
Then B is dense in C(X, R).
More precise information is available:
Suppose X is a compact Hausdorff space with at least two points and L is a lattice in C(X, R). The function φ ∈ C(X, R) belongs to the closure of L if and only if for each pair of distinct points x and y in X and for each ε > 0 there exists some f ∈ L for which | f (x) − φ(x)| < ε and | f (y) − φ(y)| < ε.
Bishop's theorem
Another generalization of the Stone–Weierstrass theorem is due to Errett Bishop. Bishop's theorem is as follows:[6]
Bishop's theorem — Let A be a closed subalgebra of the complex Banach algebra C(X, C) of continuous complex-valued functions on a compact Hausdorff space X, using the supremum norm. For S ⊂ X we write AS = {g|S : g ∈ A}. Suppose that f ∈ C(X, C) has the following property:
f |S ∈ AS for every maximal set S ⊂ X such that all real functions of AS are constant.
Then f ∈ A.
Glicksberg (1962) gives a short proof of Bishop's theorem using the Krein–Milman theorem in an essential way, as well as the Hahn–Banach theorem, following the method of Louis de Branges (1959). See also Rudin (1973, §5.7).
Nachbin's theorem
Nachbin's theorem gives an analog of the Stone–Weierstrass theorem for algebras of complex-valued smooth functions on a smooth manifold.[7] Nachbin's theorem is as follows:[8]
Nachbin's theorem — Let A be a subalgebra of the algebra C∞(M) of smooth functions on a finite-dimensional smooth manifold M. Suppose that A separates the points of M and also separates the tangent vectors of M: for each point m ∈ M and tangent vector v in the tangent space at m, there is an f ∈ A such that df(m)(v) ≠ 0. Then A is dense in C∞(M).
Editorial history
An English version of the paper, whose title was On the possibility of giving an analytic representation to an arbitrary function of real variable, was also published in 1885.[9][10][11][12][13] According to the mathematician Yamilet Quintana, Weierstrass "suspected that any analytic functions could be represented by power series".[13][12]
See also
• Müntz–Szász theorem
• Bernstein polynomial
• Runge's phenomenon shows that finding a polynomial P such that f (x) = P(x) for some finely spaced x = xn is a bad way to attempt to find a polynomial approximating f uniformly. A better approach, explained e.g. in Rudin (1976), p. 160, eq. (51) ff., is to construct polynomials P uniformly approximating f by taking the convolution of f with a family of suitably chosen polynomial kernels.
• Mergelyan's theorem, concerning polynomial approximations of complex functions.
Notes
1. Stone, M. H. (1937), "Applications of the Theory of Boolean Rings to General Topology", Transactions of the American Mathematical Society, 41 (3): 375–481, doi:10.2307/1989788, JSTOR 1989788
2. Stone, M. H. (1948), "The Generalized Weierstrass Approximation Theorem", Mathematics Magazine, 21 (4): 167–184, doi:10.2307/3029750, JSTOR 3029750, MR 0027121; 21 (5), 237–254.
3. Willard, Stephen (1970). General Topology. Addison-Wesley. p. 293. ISBN 0-486-43479-6.
4. Glimm, James (1960). "A Stone–Weierstrass Theorem for C*-algebras". Annals of Mathematics. Second Series. 72 (2): 216–244 [Theorem 1]. doi:10.2307/1970133. JSTOR 1970133.
5. Hewitt, E; Stromberg, K (1965), Real and abstract analysis, Springer-Verlag, Theorem 7.29
6. Bishop, Errett (1961), "A generalization of the Stone–Weierstrass theorem", Pacific Journal of Mathematics, 11 (3): 777–783, doi:10.2140/pjm.1961.11.777
7. Nachbin, L. (1949), "Sur les algèbres denses de fonctions diffèrentiables sur une variété", C. R. Acad. Sci. Paris, 228: 1549–1551
8. Llavona, José G. (1986), Approximation of continuously differentiable functions, Amsterdam: North-Holland, ISBN 9780080872414
9. Pinkus, Allan. "Weierstrass and Approximation Theory" (PDF). Journal of Approximation Theory. 107 (1): 8. ISSN 0021-9045. OCLC 4638498762. Archived (PDF) from the original on October 19, 2013. Retrieved July 3, 2021.
10. Pinkus, Allan (2004). "Density methods and results in approximation theory". Orlicz Centenary Volume. Banach Center publications. Institute of Mathematics, Polish Academy of Sciences. 64: 3. CiteSeerX 10.1.1.62.520. ISSN 0137-6934. OCLC 200133324. Archived from the original on July 3, 2021.
11. Ciesielski, Zbigniew; Pełczyński, Aleksander; Skrzypczak, Leszek (2004). Orlicz centenary volume : proceedings of the conferences "The Wladyslaw Orlicz Centenary Conference" and Function Spaces VII : Poznan, 20-25 July 2003. Vol. I, Plenary lectures. Banach Center publications. Vol. 64. Institute of Mathematics. Polish Academy of Sciences. p. 175. OCLC 912348549.
12. Quintana, Yamilet; Perez D. (2008). "A survey on the Weierstrass approximation theorem". Divulgaciones Matematicas. 16 (1): 232. OCLC 810468303. Retrieved July 3, 2021. Weierstrass' perception on analytic functions was of functions that could be represented by power series (arXiv 0611038v2).
13. Quintana, Yamilet (2010). "On Hilbert extensions of Weierstrass' theorem with weights". Journal of Function Spaces. Scientific Horizon. 8 (2): 202. doi:10.1155/2010/645369. ISSN 0972-6802. OCLC 7180746563. (arXiv 0611034v3). Citing: D. S. Lubinsky, Weierstrass' Theorem in the twentieth century: a selection, in Quaestiones Mathematicae 18 (1995), 91–130.
References
• Holladay, John C. (1957), "The Stone–Weierstrass theorem for quaternions" (PDF), Proc. Amer. Math. Soc., 8: 656, doi:10.1090/S0002-9939-1957-0087047-7.
• Louis de Branges (1959), "The Stone–Weierstrass theorem", Proc. Amer. Math. Soc., 10 (5): 822–824, doi:10.1090/s0002-9939-1959-0113131-7.
• Jan Brinkhuis & Vladimir Tikhomirov (2005) Optimization: Insights and Applications, Princeton University Press ISBN 978-0-691-10287-0 MR2168305.
• Glimm, James (1960), "A Stone–Weierstrass Theorem for C*-algebras", Annals of Mathematics, Second Series, 72 (2): 216–244, doi:10.2307/1970133, JSTOR 1970133
• Glicksberg, Irving (1962), "Measures Orthogonal to Algebras and Sets of Antisymmetry", Transactions of the American Mathematical Society, 105 (3): 415–435, doi:10.2307/1993729, JSTOR 1993729.
• Rudin, Walter (1976), Principles of mathematical analysis (3rd ed.), McGraw-Hill, ISBN 978-0-07-054235-8
• Rudin, Walter (1973), Functional analysis, McGraw-Hill, ISBN 0-07-054236-8.
• JG Burkill, Lectures On Approximation By Polynomials (PDF).
Historical works
The historical publication of Weierstrass (in German language) is freely available from the digital online archive of the Berlin Brandenburgische Akademie der Wissenschaften:
• K. Weierstrass (1885). Über die analytische Darstellbarkeit sogenannter willkürlicher Functionen einer reellen Veränderlichen. Sitzungsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin, 1885 (II).
Erste Mitteilung (part 1) pp. 633–639, Zweite Mitteilung (part 2) pp. 789–805.
External links
• "Stone–Weierstrass theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Weierstrass point
In mathematics, a Weierstrass point $P$ on a nonsingular algebraic curve $C$ defined over the complex numbers is a point such that there are more functions on $C$, with their poles restricted to $P$ only, than would be predicted by the Riemann–Roch theorem.
The concept is named after Karl Weierstrass.
Consider the vector spaces
$L(0),L(P),L(2P),L(3P),\dots $
where $L(kP)$ is the space of meromorphic functions on $C$ whose order at $P$ is at least $-k$ and with no other poles. We know three things: the dimension is at least 1, because of the constant functions on $C$; it is non-decreasing; and from the Riemann–Roch theorem the dimension eventually increments by exactly 1 as we move to the right. In fact if $g$ is the genus of $C$, the dimension from the $k$-th term is known to be
$l(kP)=k-g+1,$ for $k\geq 2g-1.$
Our knowledge of the sequence is therefore
$1,?,?,\dots ,?,g,g+1,g+2,\dots .$
What we know about the ? entries is that they can increment by at most 1 each time (this is a simple argument: $L(nP)/L((n-1)P)$ has dimension at most 1 because if $f$ and $g$ have the same order of pole at $P$, then $f+cg$ will have a pole of lower order if the constant $c$ is chosen to cancel the leading term). There are $2g-2$ question marks here, so the cases $g=0$ or $1$ need no further discussion and do not give rise to Weierstrass points.
Assume therefore $g\geq 2$. There will be $g-1$ steps up, and $g-1$ steps where there is no increment. A non-Weierstrass point of $C$ occurs whenever the increments are all as far to the right as possible: i.e. the sequence looks like
$1,1,\dots ,1,2,3,4,\dots ,g-1,g,g+1,\dots .$
Any other case is a Weierstrass point. A Weierstrass gap for $P$ is a value of $k$ such that no function on $C$ has exactly a $k$-fold pole at $P$ only. The gap sequence is
$1,2,\dots ,g$
for a non-Weierstrass point. For a Weierstrass point it contains at least one higher number. (The Weierstrass gap theorem or Lückensatz is the statement that there must be $g$ gaps.)
For hyperelliptic curves, for example, we may have a function $F$ with a double pole at $P$ only. Its powers have poles of order $4$, $6$, and so on. Therefore, such a $P$ has the gap sequence
$1,3,5,\dots ,2g-1.$
In general if the gap sequence is
$a,b,c,\dots $
the weight of the Weierstrass point is
$(a-1)+(b-2)+(c-3)+\dots .$
This is introduced because of a counting theorem: on a Riemann surface the sum of the weights of the Weierstrass points is $g(g^{2}-1).$
For example, a hyperelliptic Weierstrass point, as above, has weight $g(g-1)/2.$ Therefore, there are (at most) $2(g+1)$ of them. The $2g+2$ ramification points of the ramified covering of degree two from a hyperelliptic curve to the projective line are all hyperelliptic Weierstrass points, and these exhaust all the Weierstrass points on a hyperelliptic curve of genus $g$.
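These counts can be checked with a short computation. The sketch below builds the hyperelliptic gap sequence from the non-gap semigroup generated by the pole orders 2 and 2g + 1, then verifies the weight $g(g-1)/2$ against the global total $g(g^{2}-1)$:

```python
def hyperelliptic_gaps(g):
    """Gaps at a hyperelliptic branch point: values k in 1..2g such that no
    function has exactly a k-fold pole there.  The non-gaps are the pole
    orders realized by the semigroup generated by 2 and 2g + 1."""
    nongaps = {2 * m + (2 * g + 1) * n for m in range(2 * g) for n in range(2)}
    return [k for k in range(1, 2 * g + 1) if k not in nongaps]

def weight(gaps):
    """Weight (a-1) + (b-2) + (c-3) + ... of a point with gap sequence a, b, c, ..."""
    return sum(a - i for i, a in enumerate(gaps, start=1))
```

For every genus g ≥ 2 this yields the gaps 1, 3, …, 2g − 1 of weight g(g − 1)/2, and 2(g + 1) points of that weight account for the full sum g(g² − 1), matching the counting theorem.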
Further information on the gaps comes from applying Clifford's theorem. Multiplication of functions gives the non-gaps a numerical semigroup structure, and an old question of Adolf Hurwitz asked for a characterization of the semigroups occurring. A new necessary condition was found by R.-O. Buchweitz in 1980 and he gave an example of a subsemigroup of the nonnegative integers with 16 gaps that does not occur as the semigroup of non-gaps at a point on a curve of genus 16 (see [1]). A definition of Weierstrass point for a nonsingular curve over a field of positive characteristic was given by F. K. Schmidt in 1939.
Positive characteristic
More generally, for a nonsingular algebraic curve $C$ defined over an algebraically closed field $k$ of characteristic $p\geq 0$, the gap numbers for all but finitely many points form a fixed sequence $\epsilon _{1},...,\epsilon _{g}.$ These points are called non-Weierstrass points. All points of $C$ whose gap sequence is different are called Weierstrass points.
If $\epsilon _{1},...,\epsilon _{g}=1,...,g$ then the curve is called a classical curve. Otherwise, it is called non-classical. In characteristic zero, all curves are classical.
Hermitian curves are an example of non-classical curves. These are projective curves defined over the finite field $GF(q^{2})$ by the equation $y^{q}+y=x^{q+1}$, where $q$ is a prime power.
Notes
1. Eisenbud & Harris 1987, page 499.
References
• P. Griffiths; J. Harris (1994). Principles of Algebraic Geometry. Wiley Classics Library. Wiley Interscience. pp. 273–277. ISBN 0-471-05059-8.
• Farkas; Kra (1980). Riemann Surfaces. Graduate Texts in Mathematics. Springer-Verlag. pp. 76–86. ISBN 0-387-90465-4.
• Eisenbud, David; Harris, Joe (1987). "Existence, decomposition, and limits of certain Weierstrass points". Invent. Math. 87 (3): 495–515. doi:10.1007/bf01389240. S2CID 122385166.
• Garcia, Arnaldo; Viana, Paulo (1986). "Weierstrass points on certain non-classical curves". Archiv der Mathematik. 46 (4): 315–322. doi:10.1007/BF01200462. S2CID 120983683.
• Voskresenskii, V.E. (2001) [1994], "Weierstrass point", Encyclopedia of Mathematics, EMS Press
Casorati–Weierstrass theorem
In complex analysis, a branch of mathematics, the Casorati–Weierstrass theorem describes the behaviour of holomorphic functions near their essential singularities. It is named for Karl Theodor Wilhelm Weierstrass and Felice Casorati. In Russian literature it is called Sokhotski's theorem.
Formal statement of the theorem
Start with some open subset $U$ in the complex plane containing the number $z_{0}$, and a function $f$ that is holomorphic on $U\setminus \{z_{0}\}$, but has an essential singularity at $z_{0}$ . The Casorati–Weierstrass theorem then states that
if $V$ is any neighbourhood of $z_{0}$ contained in $U$, then $f(V\setminus \{z_{0}\})$ is dense in $\mathbb {C} $.
This can also be stated as follows:
for any $\varepsilon >0,\delta >0$, and any complex number $w$, there exists a complex number $z$ in $U$ with $0<|z-z_{0}|<\delta $ and $|f(z)-w|<\varepsilon $.
Or in still more descriptive terms:
$f$ comes arbitrarily close to any complex value in every neighbourhood of $z_{0}$.
The theorem is considerably strengthened by Picard's great theorem, which states, in the notation above, that $f$ assumes every complex value, with one possible exception, infinitely often on $V$.
In the case that $f$ is an entire function and $z_{0}=\infty $, the theorem says that the values $f(z)$ approach every complex number and $\infty $, as $z$ tends to infinity. It is remarkable that this does not hold for holomorphic maps in higher dimensions, as the famous example of Pierre Fatou shows.[1]
Examples
The function f(z) = exp(1/z) has an essential singularity at 0, but the function $g(z)=1/z^{3}$ does not (it has a pole at 0).
Consider the function
$f(z)=e^{1/z}.$
This function has the following Laurent series about the essential singular point at 0:
$f(z)=\sum _{n=0}^{\infty }{\frac {1}{n!}}z^{-n}.$
Because $f'(z)=-{\frac {e^{{1}/{z}}}{z^{2}}}$ exists at every point z ≠ 0, we know that f(z) is analytic in a punctured neighborhood of z = 0. Hence z = 0 is an isolated singularity, and in fact an essential singularity.
Using a change of variable to polar coordinates $z=re^{i\theta }$ our function, f(z) = e1/z becomes:
$f(z)=e^{{\frac {1}{r}}e^{-i\theta }}=e^{{\frac {1}{r}}\cos(\theta )}e^{-{\frac {1}{r}}i\sin(\theta )}.$
Taking the absolute value of both sides:
$\left|f(z)\right|=\left|e^{{\frac {1}{r}}\cos \theta }\right|\left|e^{-{\frac {1}{r}}i\sin(\theta )}\right|=e^{{\frac {1}{r}}\cos \theta }.$
Thus, for values of θ such that cos θ > 0, we have $f(z)\to \infty $ as $r\to 0$, and for $\cos \theta <0$, $f(z)\to 0$ as $r\to 0$.
Consider what happens, for example when z takes values on a circle of diameter 1/R tangent to the imaginary axis. This circle is given by r = (1/R) cos θ. Then,
$f(z)=e^{R}\left[\cos \left(R\tan \theta \right)-i\sin \left(R\tan \theta \right)\right]$
and
$\left|f(z)\right|=e^{R}.$
Thus $\left|f(z)\right|$ may take any positive value by an appropriate choice of R. As $z\to 0$ on the circle, $ \theta \to {\frac {\pi }{2}}$ with R fixed. So this part of the equation:
$\left[\cos \left(R\tan \theta \right)-i\sin \left(R\tan \theta \right)\right]$
takes on all values on the unit circle infinitely often. Hence f(z) takes on the value of every number in the complex plane except for zero infinitely often.
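The claim can be made concrete: solving exp(1/z) = w gives $z_{k}=1/(\log w+2\pi ik)$, and these solutions accumulate at 0. The sketch below, with an arbitrary nonzero target w, exhibits preimages of w inside ever smaller punctured neighbourhoods of 0:

```python
import cmath

w = 2 - 3j                                   # arbitrary nonzero target value
# Solutions of exp(1/z) = w: take 1/z = log(w) + 2*pi*i*k for integer k.
zs = [1 / (cmath.log(w) + 2j * cmath.pi * k) for k in range(1, 6)]
values = [cmath.exp(1 / z) for z in zs]      # each should reproduce w
radii = [abs(z) for z in zs]                 # the solutions shrink toward 0
```

So f attains the exact value w, not merely approximations to it, in every punctured neighbourhood of 0: the stronger behaviour promised by Picard's great theorem.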
Proof of the theorem
A short proof of the theorem is as follows:
Take as given that the function f is meromorphic on some punctured neighborhood V \ {z0}, and that z0 is an essential singularity. Assume by way of contradiction that some value b exists that the function can never get close to; that is: assume that there is some complex value b and some ε > 0 such that |f(z) − b| ≥ ε for all z in V at which f is defined.
Then the new function:
$g(z)={\frac {1}{f(z)-b}}$
must be holomorphic on V \ {z0}, with zeroes at the poles of f, and bounded by 1/ε. It can therefore be analytically continued (or continuously extended, or holomorphically extended) to all of V by Riemann's analytic continuation theorem. So the original function can be expressed in terms of g:
$f(z)={\frac {1}{g(z)}}+b$
for all arguments z in V \ {z0}. Consider the two possible cases for
$\lim _{z\to z_{0}}g(z).$
If the limit is 0, then f has a pole at z0 . If the limit is not 0, then z0 is a removable singularity of f . Both possibilities contradict the assumption that the point z0 is an essential singularity of the function f . Hence the assumption is false and the theorem holds.
History
The history of this important theorem is described by Collingwood and Lohwater.[2] It was published by Weierstrass in 1876 (in German) and by Sokhotski in 1868 in his Master thesis (in Russian). So it was called Sokhotski's theorem in the Russian literature and Weierstrass's theorem in the Western literature. The same theorem was published by Casorati in 1868, and by Briot and Bouquet in the first edition of their book (1859).[3] However, Briot and Bouquet removed this theorem from the second edition (1875).
References
1. Fatou, P. (1922). "Sur les fonctions méromorphes de deux variables". Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences de Paris. 175: 862–865. JFM 48.0391.02; Fatou, P. (1922). "Sur certaines fonctions uniformes de deux variables". Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences de Paris. 175: 1030–1033. JFM 48.0391.03.
2. Collingwood, E; Lohwater, A (1966). The theory of cluster sets. Cambridge University Press.
3. Briot, Ch.; Bouquet, C. (1859). Théorie des fonctions doublement périodiques, et en particulier, des fonctions elliptiques. Paris.
• Section 31, Theorem 2 (pp. 124–125) of Knopp, Konrad (1996), Theory of Functions, Dover Publications, ISBN 978-0-486-69219-7
Coordinate systems for the hyperbolic plane
In the hyperbolic plane, as in the Euclidean plane, each point can be uniquely identified by two real numbers. Several qualitatively different ways of coordinatizing the plane in hyperbolic geometry are used.
This article tries to give an overview of several coordinate systems in use for the two-dimensional hyperbolic plane.
In the descriptions below the constant Gaussian curvature of the plane is −1. Sinh, cosh and tanh are hyperbolic functions.
Polar coordinate system
The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a reference point and an angle from a reference direction.
The reference point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the reference direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is called the angular coordinate, or polar angle.
From the hyperbolic law of cosines, we get that the distance between two points given in polar coordinates is
$\operatorname {dist} (\langle r_{1},\theta _{1}\rangle ,\langle r_{2},\theta _{2}\rangle )=\operatorname {arcosh} \,\left(\cosh r_{1}\cosh r_{2}-\sinh r_{1}\sinh r_{2}\cos(\theta _{2}-\theta _{1})\right)\,.$
The corresponding metric tensor field is: $(\mathrm {d} s)^{2}=(\mathrm {d} r)^{2}+\sinh ^{2}r\,(\mathrm {d} \theta )^{2}\,.$
The straight lines are described by equations of the form
$\theta =\theta _{0}\pm {\frac {\pi }{2}}\quad {\text{ or }}\quad \tanh r=\tanh r_{0}\sec(\theta -\theta _{0})$
where r0 and θ0 are the coordinates of the nearest point on the line to the pole.
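The polar distance formula above is easy to exercise numerically. A small Python sketch (the function name is ours) checks it against two facts that follow directly from the geometry: points on the same ray are |r1 − r2| apart, and points on opposite rays are r1 + r2 apart:

```python
import math

def dist_polar(r1, t1, r2, t2):
    """Hyperbolic distance (curvature -1) between two points given in
    polar coordinates, via the hyperbolic law of cosines."""
    return math.acosh(math.cosh(r1) * math.cosh(r2)
                      - math.sinh(r1) * math.sinh(r2) * math.cos(t2 - t1))

# Same ray: the argument reduces to cosh(r1 - r2), so the distance is 1.5.
assert abs(dist_polar(1.0, 0.3, 2.5, 0.3) - 1.5) < 1e-12
# Opposite rays: the argument reduces to cosh(r1 + r2), distance 3.0.
assert abs(dist_polar(1.0, 0.0, 2.0, math.pi) - 3.0) < 1e-12
```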
Quadrant model system
The Poincaré half-plane model is closely related to a model of the hyperbolic plane in the quadrant Q = {(x,y): x > 0, y > 0}. For such a point the geometric mean $v={\sqrt {xy}}$ and the hyperbolic angle $u=\ln {\sqrt {x/y}}$ produce a point (u,v) in the upper half-plane. The hyperbolic metric in the quadrant depends on the Poincaré half-plane metric. The motions of the Poincaré model carry over to the quadrant; in particular the left or right shifts of the real axis correspond to hyperbolic rotations of the quadrant. Due to the study of ratios in physics and economics where the quadrant is the universe of discourse, its points are said to be located by hyperbolic coordinates.
Cartesian-style coordinate systems
In hyperbolic geometry rectangles do not exist. The sum of the angles of a quadrilateral in hyperbolic geometry is always less than 4 right angles (see Lambert quadrilateral). Also, in hyperbolic geometry no two distinct straight lines are equidistant from each other (see hypercycles). All of this affects the possible coordinate systems.
There are, however, several coordinate systems for hyperbolic plane geometry. All are based on choosing a real (non-ideal) point (the origin) on a chosen directed line (the x-axis); after that, many choices exist.
Axial coordinates
Axial coordinates xa and ya are found by constructing a y-axis perpendicular to the x-axis through the origin.[1]
As in the Cartesian coordinate system, the coordinates are found by dropping perpendiculars from the point onto the x- and y-axes. xa is the distance from the foot of the perpendicular on the x-axis to the origin (regarded as positive on one side and negative on the other); ya is the distance from the foot of the perpendicular on the y-axis to the origin.
Every point and most ideal points have axial coordinates, but not every pair of real numbers corresponds to a point.
If $\tanh ^{2}(x_{a})+\tanh ^{2}(y_{a})=1$ then $P(x_{a},y_{a})$ is an ideal point.
If $\tanh ^{2}(x_{a})+\tanh ^{2}(y_{a})>1$ then $P(x_{a},y_{a})$ is not a point at all.
The distance of a point $P(x_{a},y_{a})$ to the x-axis is $\operatorname {artanh} \left(\tanh(y_{a})\cosh(x_{a})\right)$. To the y-axis it is $\operatorname {artanh} \left(\tanh(x_{a})\cosh(y_{a})\right)$.
The relationship of axial coordinates to polar coordinates (assuming the origin is the pole and that the positive x-axis is the polar axis) is
$x=\operatorname {artanh} \,(\tanh r\cos \theta )$
$y=\operatorname {artanh} \,(\tanh r\sin \theta )$
$r=\operatorname {artanh} \,({\sqrt {\tanh ^{2}x+\tanh ^{2}y}}\,)$
$\theta =2\operatorname {arctan} \,\left({\frac {\tanh y}{\tanh x+{\sqrt {\tanh ^{2}x+\tanh ^{2}y}}}}\right)\,.$
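These conversion formulas are mutually inverse, which can be verified with a round trip. A Python sketch (function names are ours; `math.atanh` plays the role of artanh):

```python
import math

def polar_to_axial(r, theta):
    # x = artanh(tanh r cos theta), y = artanh(tanh r sin theta)
    xa = math.atanh(math.tanh(r) * math.cos(theta))
    ya = math.atanh(math.tanh(r) * math.sin(theta))
    return xa, ya

def axial_to_polar(xa, ya):
    tx, ty = math.tanh(xa), math.tanh(ya)
    r = math.atanh(math.hypot(tx, ty))
    # half-angle form: theta = 2 arctan(ty / (tx + sqrt(tx^2 + ty^2)))
    theta = 2 * math.atan2(ty, tx + math.hypot(tx, ty))
    return r, theta

r, theta = 1.2, 0.7
r2, theta2 = axial_to_polar(*polar_to_axial(r, theta))
assert abs(r - r2) < 1e-10 and abs(theta - theta2) < 1e-10
```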
Lobachevsky coordinates
The Lobachevsky coordinates xℓ and yℓ are found by dropping a perpendicular onto the x-axis. xℓ is the distance from the foot of the perpendicular to the x-axis to the origin (positive on one side and negative on the other, the same as in axial coordinates).[1]
yℓ is the distance along the perpendicular of the given point to its foot (positive on one side and negative on the other).
$x_{l}=x_{a}\ ,\ \tanh(y_{l})=\tanh(y_{a})\cosh(x_{a})\ ,\ \tanh(y_{a})={\frac {\tanh(y_{l})}{\cosh(x_{l})}}$.
The Lobachevsky coordinates are useful for integration for length of curves[2] and area between lines and curves.
Lobachevsky coordinates are named after Nikolai Lobachevsky one of the discoverers of hyperbolic geometry.
Construct a Cartesian-like coordinate system as follows. Choose a line (the x-axis) in the hyperbolic plane (with a standardized curvature of -1) and label the points on it by their distance from an origin (x=0) point on the x-axis (positive on one side and negative on the other). For any point in the plane, one can define coordinates x and y by dropping a perpendicular onto the x-axis. x will be the label of the foot of the perpendicular. y will be the distance along the perpendicular of the given point from its foot (positive on one side and negative on the other). Then the distance between two such points will be
$\operatorname {dist} (\langle x_{1},y_{1}\rangle ,\langle x_{2},y_{2}\rangle )=\operatorname {arcosh} \left(\cosh y_{1}\cosh(x_{2}-x_{1})\cosh y_{2}-\sinh y_{1}\sinh y_{2}\right)\,.$
This formula can be derived from the formulas about hyperbolic triangles.
The corresponding metric tensor is: $(\mathrm {d} s)^{2}=\cosh ^{2}y\,(\mathrm {d} x)^{2}+(\mathrm {d} y)^{2}$.
In this coordinate system, straight lines are either perpendicular to the x-axis (with equation x = a constant) or described by equations of the form
$\tanh y=A\cosh x+B\sinh x\quad {\text{ when }}\quad A^{2}<1+B^{2}$
where A and B are real parameters which characterize the straight line.
The relationship of Lobachevsky coordinates to polar coordinates (assuming the origin is the pole and that the positive x-axis is the polar axis) is
$x=\operatorname {artanh} \,(\tanh r\cos \theta )$
$y=\operatorname {arsinh} \,(\sinh r\sin \theta )$
$r=\operatorname {arcosh} \,(\cosh x\cosh y)$
$\theta =2\operatorname {arctan} \,\left({\frac {\sinh y}{\sinh x\cosh y+{\sqrt {\cosh ^{2}x\cosh ^{2}y-1}}}}\right)\,.$
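As a consistency check, one can convert two points from polar to Lobachevsky coordinates with the relations above and confirm that the polar and Lobachevsky distance formulas agree. A Python sketch (function names are ours):

```python
import math

def dist_polar(r1, t1, r2, t2):
    # Hyperbolic law of cosines in polar coordinates.
    return math.acosh(math.cosh(r1) * math.cosh(r2)
                      - math.sinh(r1) * math.sinh(r2) * math.cos(t2 - t1))

def polar_to_lobachevsky(r, theta):
    # x = artanh(tanh r cos theta), y = arsinh(sinh r sin theta)
    return (math.atanh(math.tanh(r) * math.cos(theta)),
            math.asinh(math.sinh(r) * math.sin(theta)))

def dist_lobachevsky(x1, y1, x2, y2):
    return math.acosh(math.cosh(y1) * math.cosh(x2 - x1) * math.cosh(y2)
                      - math.sinh(y1) * math.sinh(y2))

p, q = (0.9, 0.4), (1.7, 2.1)
d_polar = dist_polar(p[0], p[1], q[0], q[1])
d_lob = dist_lobachevsky(*polar_to_lobachevsky(*p), *polar_to_lobachevsky(*q))
assert abs(d_polar - d_lob) < 1e-10   # the two formulas give the same distance
```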
Horocycle-based coordinate system
Another coordinate system uses the distance from the point to the horocycle through the origin centered around $\Omega =(0,+\infty )$ and the arclength along this horocycle.[3]
Draw the horocycle hO through the origin centered at the ideal point $\Omega $ at the end of the x-axis.
From point P draw the line p asymptotic to the x-axis to the right ideal point $\Omega $. Ph is the intersection of line p and horocycle hO.
The coordinate xh is the distance from P to Ph – positive if P is between Ph and $\Omega $, negative if Ph is between P and $\Omega $.
The coordinate yh is the arclength along horocycle hO from the origin to Ph.
The distance between two points given in these coordinates is
$\operatorname {dist} (\langle x_{1},y_{1}\rangle ,\langle x_{2},y_{2}\rangle )=\operatorname {arcosh} (\cosh(x_{2}-x_{1})+{\tfrac {1}{2}}(y_{2}-y_{1})^{2}\exp(-x_{1}-x_{2}))\,.$
The corresponding metric tensor is: $(\mathrm {d} s)^{2}=(\mathrm {d} x)^{2}+\exp(-2x)\,(\mathrm {d} y)^{2}\,.$
The straight lines are described by equations of the form y = a constant or
$x={\tfrac {1}{2}}\ln(\exp(2x_{0})-(y-y_{0})^{2})$
where x0 and y0 are the coordinates of the point on the line nearest to the ideal point $\Omega $ (i.e. having the largest value of x on the line).
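The horocycle-based distance formula can also be sanity-checked: along a single line asymptotic to $\Omega$ (constant y) the distance reduces to |x2 − x1|, and the formula is symmetric in the two points despite its asymmetric appearance. A Python sketch (function name is ours):

```python
import math

def dist_horocyclic(x1, y1, x2, y2):
    """Distance in horocycle-based coordinates (curvature -1)."""
    return math.acosh(math.cosh(x2 - x1)
                      + 0.5 * (y2 - y1) ** 2 * math.exp(-x1 - x2))

# Constant y: the second term vanishes and the distance is |dx| = 1.5.
assert abs(dist_horocyclic(0.2, 1.0, 1.7, 1.0) - 1.5) < 1e-12
# Symmetry: swapping the points does not change the distance.
assert abs(dist_horocyclic(0.2, 1.0, 1.7, 3.0)
           - dist_horocyclic(1.7, 3.0, 0.2, 1.0)) < 1e-12
```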
Model-based coordinate systems
Model-based coordinate systems use one of the models of hyperbolic geometry and take the Euclidean coordinates inside the model as the hyperbolic coordinates.
Beltrami coordinates
The Beltrami coordinates of a point are the Cartesian coordinates of the point when the point is mapped in the Beltrami–Klein model of the hyperbolic plane, the x-axis is mapped to the segment (−1,0) − (1,0) and the origin is mapped to the centre of the boundary circle.[1]
The following equations hold:
$x_{b}=\tanh(x_{a}),\ y_{b}=\tanh(y_{a})$
Poincaré coordinates
The Poincaré coordinates of a point are the Cartesian coordinates of the point when the point is mapped in the Poincaré disk model of the hyperbolic plane,[1] the x-axis is mapped to the segment (−1,0) − (1,0) and the origin is mapped to the centre of the boundary circle.
The Poincaré coordinates, in terms of the Beltrami coordinates, are:
$x_{p}={\frac {x_{b}}{1+{\sqrt {1-x_{b}^{2}-y_{b}^{2}}}}},\ \ y_{p}={\frac {y_{b}}{1+{\sqrt {1-x_{b}^{2}-y_{b}^{2}}}}}$
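The Beltrami-to-Poincaré map above has a well-known inverse, $b=2p/(1+|p|^{2})$ (the standard Klein/Poincaré relation, not stated explicitly in the text). A Python sketch checking the round trip, and that the Poincaré image lies strictly closer to the centre:

```python
import math

def beltrami_to_poincare(xb, yb):
    s = 1 + math.sqrt(1 - xb * xb - yb * yb)
    return xb / s, yb / s

def poincare_to_beltrami(xp, yp):
    # Inverse map: b = 2p / (1 + |p|^2).
    s = 1 + xp * xp + yp * yp
    return 2 * xp / s, 2 * yp / s

xb, yb = 0.6, -0.5                     # any point inside the unit disk
xp, yp = beltrami_to_poincare(xb, yb)
assert xp * xp + yp * yp < xb * xb + yb * yb   # image is closer to the centre
xb2, yb2 = poincare_to_beltrami(xp, yp)
assert abs(xb - xb2) < 1e-12 and abs(yb - yb2) < 1e-12
```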
Weierstrass coordinates
The Weierstrass coordinates of a point are the Cartesian coordinates of the point when the point is mapped in the hyperboloid model of the hyperbolic plane, the x-axis is mapped to the (half) hyperbola $(t\ ,\ 0\ ,\ {\sqrt {t^{2}+1}})$ and the origin is mapped to the point (0,0,1).[1]
The point P with axial coordinates (xa, ya) is mapped to
$\left({\frac {\tanh x_{a}}{\sqrt {1-\tanh ^{2}x_{a}-\tanh ^{2}y_{a}}}}\ ,\ {\frac {\tanh y_{a}}{\sqrt {1-\tanh ^{2}x_{a}-\tanh ^{2}y_{a}}}}\ ,\ {\frac {1}{\sqrt {1-\tanh ^{2}x_{a}-\tanh ^{2}y_{a}}}}\right)$
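A quick numerical check confirms that this map really lands on the upper sheet of the hyperboloid $z^{2}-x^{2}-y^{2}=1$. A Python sketch (function name is ours):

```python
import math

def axial_to_weierstrass(xa, ya):
    tx, ty = math.tanh(xa), math.tanh(ya)
    s = math.sqrt(1 - tx * tx - ty * ty)  # requires tanh^2 xa + tanh^2 ya < 1
    return tx / s, ty / s, 1 / s

X, Y, Z = axial_to_weierstrass(0.8, -0.3)
# The image satisfies z^2 - x^2 - y^2 = 1 with z >= 1 (upper sheet).
assert abs(Z * Z - X * X - Y * Y - 1) < 1e-12
assert Z >= 1
```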
Others
Gyrovector coordinates
Main article: gyrovector
Gyrovector space
Hyperbolic barycentric coordinates
The study of triangle centers traditionally concerns Euclidean geometry, but triangle centers can also be studied in hyperbolic geometry. Using gyrotrigonometry, expressions for trigonometric barycentric coordinates can be calculated that have the same form for both Euclidean and hyperbolic geometry. In order for the expressions to coincide, the expressions must not encapsulate the specification of the angle sum being 180 degrees.[4][5][6]
References
1. Martin, George E. (1998). The foundations of geometry and the non-Euclidean plane (Corrected 4. print. ed.). New York, NY: Springer. pp. 447–450. ISBN 0387906940.
2. Smorgorzhevsky, A.S. (1982). Lobachevskian geometry. Moscow: Mir. pp. 64–68.
3. Ramsay, Arlan; Richtmyer, Robert D. (1995). Introduction to hyperbolic geometry. New York: Springer-Verlag. pp. 97–103. ISBN 0387943390.
4. Hyperbolic Barycentric Coordinates, Abraham A. Ungar, The Australian Journal of Mathematical Analysis and Applications, AJMAA, Volume 6, Issue 1, Article 18, pp. 1–35, 2009
5. Hyperbolic Triangle Centers: The Special Relativistic Approach, Abraham Ungar, Springer, 2010
6. Barycentric Calculus In Euclidean And Hyperbolic Geometry: A Comparative Introduction Archived 2012-05-19 at the Wayback Machine, Abraham Ungar, World Scientific, 2010
Weierstrass function
In mathematics, the Weierstrass function is an example of a real-valued function that is continuous everywhere but differentiable nowhere. It is an example of a fractal curve. It is named after its discoverer Karl Weierstrass.
Not to be confused with the Weierstrass elliptic function ($\wp $) or the Weierstrass sigma, zeta, or eta functions.
The Weierstrass function has historically served the role of a pathological function, being the first published example (1872) specifically concocted to challenge the notion that every continuous function is differentiable except on a set of isolated points.[1] Weierstrass's demonstration that continuity did not imply almost-everywhere differentiability upended mathematics, overturning several proofs that relied on geometric intuition and vague definitions of smoothness. These types of functions were denounced by contemporaries: Henri Poincaré famously described them as "monsters" and called Weierstrass' work "an outrage against common sense", while Charles Hermite wrote that they were a "lamentable scourge". The functions were difficult to visualize until the arrival of computers in the next century, and the results did not gain wide acceptance until practical applications such as models of Brownian motion necessitated infinitely jagged functions (nowadays known as fractal curves).[2]
Construction
In Weierstrass's original paper, the function was defined as a Fourier series:
$f(x)=\sum _{n=0}^{\infty }a^{n}\cos(b^{n}\pi x),$
where $0<a<1$, $b$ is a positive odd integer, and
$ab>1+{\frac {3}{2}}\pi .$
The minimum value of $b$ for which there exists $0<a<1$ such that these constraints are satisfied is $b=7$. This construction, along with the proof that the function is not differentiable over any interval, was first delivered by Weierstrass in a paper presented to the Königliche Akademie der Wissenschaften on 18 July 1872.[3][4][5]
Despite never being differentiable, the function is continuous: Since the terms of the infinite series which defines it are bounded by ±an and this has finite sum for 0 < a < 1, convergence of the sum of the terms is uniform by the Weierstrass M-test with Mn = an. Since each partial sum is continuous, by the uniform limit theorem, it follows that f is continuous. Additionally, since each partial sum is uniformly continuous, it follows that f is also uniformly continuous.
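The uniform-convergence argument can be illustrated numerically: since each term is bounded by $a^{n}$, the difference between any two partial sums beyond index N is at most the geometric tail $a^{N}/(1-a)$, uniformly in x. A Python sketch with a = 0.82, b = 7 (so that ab = 5.74 > 1 + 3π/2; function name is ours):

```python
import math

def weierstrass_partial(x, a, b, n_terms):
    """Partial sum of the Weierstrass series f(x) = sum a^n cos(b^n pi x)."""
    return sum(a ** n * math.cos(b ** n * math.pi * x) for n in range(n_terms))

a, b = 0.82, 7                 # 0 < a < 1, b odd, ab > 1 + 3*pi/2
N, M = 20, 40
tail_bound = a ** N / (1 - a)  # geometric tail: sum of a^n for n >= N
for x in [k / 10 for k in range(-10, 11)]:
    gap = abs(weierstrass_partial(x, a, b, M) - weierstrass_partial(x, a, b, N))
    assert gap <= tail_bound   # uniform in x, exactly as the M-test guarantees
```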
It might be expected that a continuous function must have a derivative, or that the set of points where it is not differentiable should be countably infinite or finite. According to Weierstrass in his paper, earlier mathematicians including Gauss had often assumed that this was true. This might be because it is difficult to draw or visualise a continuous function whose set of nondifferentiable points is something other than a countable set of points. Analogous results for better behaved classes of continuous functions do exist, for example the Lipschitz functions, whose set of non-differentiability points must be a Lebesgue null set (Rademacher's theorem). When we try to draw a general continuous function, we usually draw the graph of a function which is Lipschitz or otherwise well-behaved.
The Weierstrass function was one of the first fractals studied, although this term was not used until much later. The function has detail at every level, so zooming in on a piece of the curve does not show it getting progressively closer to a straight line. Rather, between any two points, no matter how close, the function is not monotone.
The computation of the Hausdorff dimension D of the graph of the classical Weierstrass function remained an open problem until 2018; it was long conjectured that D = $2+\log _{b}(a)<2$.[6][7] That D is strictly less than 2 follows from the conditions on $a$ and $b$ above. Only after more than 30 years was the conjectured value proved rigorously.[8]
The term Weierstrass function is often used in real analysis to refer to any function with similar properties and construction to Weierstrass's original example. For example, the cosine function can be replaced in the infinite series by a piecewise linear "zigzag" function. G. H. Hardy showed that the function of the above construction is nowhere differentiable with the assumptions 0 < a < 1, ab ≥ 1.[9]
Riemann function
The Weierstrass function is based on the earlier Riemann function, which was claimed to be nowhere differentiable. Occasionally, this function has also been called the Weierstrass function.[10]
$f(x)=\sum _{n=1}^{\infty }{\frac {\sin(n^{2}x)}{n^{2}}}$
While Bernhard Riemann strongly claimed that the function is nowhere differentiable, Riemann published no proof, and Weierstrass noted that he did not find any evidence of one surviving either in Riemann's papers or in the recollections of Riemann's students.
In 1916, G. H. Hardy confirmed that the function does not have a finite derivative at any point $\pi x$ where x is irrational, or where x is rational of the form either ${\frac {2A}{4B+1}}$ or ${\frac {2A+1}{2B}}$, with A and B integers.[9] In 1969, Joseph Gerver found that the Riemann function does have a derivative at every point of the form ${\frac {2A+1}{2B+1}}\pi $ with A and B integers, that is, at rational multiples of π with odd numerator and denominator; at these points the derivative equals $-{\frac {1}{2}}$.[11] In 1971, Gerver showed that the function has no finite derivative at the points of the form ${\frac {2A}{2B+1}}\pi $, completing the problem of the differentiability of the Riemann function.[12]
As the Riemann function is differentiable only on a null set of points, it is differentiable almost nowhere.
Hölder continuity
It is convenient to write the Weierstrass function equivalently as
$W_{\alpha }(x)=\sum _{n=0}^{\infty }b^{-n\alpha }\cos(b^{n}\pi x)$
for $\alpha =-{\frac {\ln(a)}{\ln(b)}}$. Then Wα(x) is Hölder continuous of exponent α, which is to say that there is a constant C such that
$|W_{\alpha }(x)-W_{\alpha }(y)|\leq C|x-y|^{\alpha }$
for all x and y.[13] Moreover, W1 is Hölder continuous of all orders α < 1 but not Lipschitz continuous.
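The Hölder bound can be probed empirically on a truncated series. With a = 0.82 and b = 7 the exponent is α = −ln(0.82)/ln(7) ≈ 0.102, and the standard estimates give a Hölder constant well under 20, so the sampled quotients stay below that bound. A Python sketch (the constant C = 20 is a generous choice of ours, not from the text):

```python
import math

A, B = 0.82, 7                      # a*b = 5.74 > 1 + 3*pi/2
ALPHA = -math.log(A) / math.log(B)  # Holder exponent, about 0.102

def W(x, n_terms=200):
    # Truncated series; the omitted tail is below 1e-17 for A = 0.82.
    return sum(A ** n * math.cos(B ** n * math.pi * x) for n in range(n_terms))

C = 20.0   # generous constant consistent with the standard estimates
pairs = [(0.0, 0.3), (0.1, 0.1001), (-0.7, 0.25), (0.333, 0.334)]
for x, y in pairs:
    # The Holder quotient |W(x) - W(y)| / |x - y|^alpha stays bounded.
    assert abs(W(x) - W(y)) <= C * abs(x - y) ** ALPHA
```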
Density of nowhere-differentiable functions
It turns out that the Weierstrass function is far from being an isolated example: although it is "pathological", it is also "typical" of continuous functions:
• In a topological sense: the set of nowhere-differentiable real-valued functions on [0, 1] is comeager in the vector space C([0, 1]; R) of all continuous real-valued functions on [0, 1] with the topology of uniform convergence.[14][15]
• In a measure-theoretic sense: when the space C([0, 1]; R) is equipped with classical Wiener measure γ, the collection of functions that are differentiable at even a single point of [0, 1] has γ-measure zero. The same is true even if one takes finite-dimensional "slices" of C([0, 1]; R), in the sense that the nowhere-differentiable functions form a prevalent subset of C([0, 1]; R).
See also
• Blancmange curve
• Koch snowflake
• Nowhere continuous function
Notes
1. At least two researchers formulated continuous, nowhere differentiable functions before Weierstrass, but their findings were not published in their lifetimes. Around 1831, Bernard Bolzano (1781–1848), a Czech mathematician, philosopher, and Catholic priest, constructed such a function; however, it was not published until 1922. See:
• Martin Jašek (1922) "Funkce Bolzanova" (Bolzano's function), Časopis pro Pěstování Matematiky a Fyziky (Journal for the Cultivation of Mathematics and Physics), vol. 51, no. 2, pages 69–76 (in Czech and German).
• Vojtěch Jarník (1922) "O funkci Bolzanově" (On Bolzano's function), Časopis pro Pěstování Matematiky a Fyziky (Journal for the Cultivation of Mathematics and Physics), vol. 51, no. 4, pages 248 - 264 (in Czech). Available on-line in Czech at: http://dml.cz/bitstream/handle/10338.dmlcz/109021/CasPestMatFys_051-1922-4_5.pdf . Available on-line in English at: http://dml.cz/bitstream/handle/10338.dmlcz/400073/Bolzano_15-1981-1_6.pdf .
• Karel Rychlík (1923) "Über eine Funktion aus Bolzanos handschriftlichem Nachlasse" (On a function from Bolzano's literary remains in manuscript), Sitzungsberichte der königlichen Böhmischen Gesellschaft der Wissenschaften (Prag) (Proceedings of the Royal Bohemian Society of Philosophy in Prague) (for the years 1921-1922), Class II, no. 4, pages 1-20. (Sitzungsberichte was continued as: Věstník Královské české společnosti nauk, třída matematicko-přírodovědecká (Journal of the Royal Czech Society of Science, Mathematics and Natural Sciences Class).)
Around 1860, Charles Cellérier (1818 - 1889), a professor of mathematics, mechanics, astronomy, and physical geography at the University of Geneva, Switzerland, independently formulated a continuous, nowhere differentiable function that closely resembles Weierstrass's function. Cellérier's discovery was, however, published posthumously:
• Cellérier, C. (1890) "Note sur les principes fondamentaux de l'analyse" (Note on the fundamental principles of analysis), Bulletin des sciences mathématiques, second series, vol. 14, pages 142 - 160.
2. Kucharski, Adam (26 October 2017). "Math's Beautiful Monsters: How a destructive idea paved the way for modern math". Retrieved 4 March 2020.
3. On page 560 of the 1872 Monatsberichte der Königlich Preussischen Akademie der Wissenschaften zu Berlin (Monthly Reports of the Royal Prussian Academy of Science in Berlin), there is a brief mention that on 18 July, "Hr. Weierstrass las über stetige Funktionen ohne bestimmte Differentialquotienten" (Mr. Weierstrass read [a paper] about continuous functions without definite [i.e., well-defined] derivatives [to members of the Academy]). However, Weierstrass's paper was not published in the Monatsberichte.
4. Karl Weierstrass, "Über continuirliche Functionen eines reellen Arguments, die für keinen Werth des letzeren einen bestimmten Differentialquotienten besitzen," (On continuous functions of a real argument which possess a definite derivative for no value of the argument) in: Königlich Preussichen Akademie der Wissenschaften, Mathematische Werke von Karl Weierstrass (Berlin, Germany: Mayer & Mueller, 1895), vol. 2, pages 71–74.;
5. See also: Karl Weierstrass, Abhandlungen aus der Functionenlehre [Treatises from the Theory of Functions] (Berlin, Germany: Julius Springer, 1886), page 97.
6. Kenneth Falconer, The Geometry of Fractal Sets (Cambridge, England: Cambridge University Press, 1985), pages 114, 149.
7. See also: Brian R. Hunt (1998) "The Hausdorff dimension of graphs of Weierstrass functions," Proceedings of the American Mathematical Society, vol. 126, no. 3, pages 791-800.
8. Shen Weixiao (2018). "Hausdorff dimension of the graphs of the classical Weierstrass functions". Mathematische Zeitschrift. 289 (1–2): 223–266. arXiv:1505.03986. doi:10.1007/s00209-017-1949-1. ISSN 0025-5874. S2CID 118844077.
9. Hardy G. H. (1916) "Weierstrass's nondifferentiable function," Transactions of the American Mathematical Society, vol. 17, pages 301–325.
10. Weisstein, Eric W. "Weierstrass Function". MathWorld.
11. Gerver, Joseph. "The Differentiability of the Riemann Function at Certain Rational Multiples of π". Proceedings of the National Academy of Sciences of the United States of America. 62 (3): 668–670. doi:10.1073/pnas.62.3.668. PMC 223649.
12. Gerver, Joseph. "More on the Differentiability of the Riemann Function". American Journal of Mathematics. doi:10.2307/2373445. S2CID 124562827.
13. Zygmund, A. (2002) [1935], Trigonometric Series. Vol. I, II, Cambridge Mathematical Library (3rd ed.), Cambridge University Press, ISBN 978-0-521-89053-3, MR 1963498, p. 47.
14. Mazurkiewicz, S. (1931). "Sur les fonctions non-dérivables". Studia Math. 3 (3): 92–94. doi:10.4064/sm-3-1-92-94.
15. Banach, S. (1931). "Über die Baire'sche Kategorie gewisser Funktionenmengen". Studia Math. 3 (3): 174–179. doi:10.4064/sm-3-1-174-179.
References
• David, Claire (2018), "Bypassing dynamical systems : A simple way to get the box-counting dimension of the graph of the Weierstrass function", Proceedings of the International Geometry Center, Academy of Sciences of Ukraine, 11 (2): 53–68, doi:10.15673/tmgc.v11i2.1028
• Falconer, K. (1984), The Geometry of Fractal Sets, Cambridge Tracts in Mathematics, vol. Book 85, Cambridge: Cambridge University Press, ISBN 978-0-521-33705-2
• Gelbaum, Bernard R.; Olmstead, John M. H. (2003) [1964], Counterexamples in Analysis, Dover Books on Mathematics, Dover Publications, ISBN 978-0-486-42875-8
• Hardy, G. H. (1916), "Weierstrass's nondifferentiable function" (PDF), Transactions of the American Mathematical Society, American Mathematical Society, 17 (3): 301–325, doi:10.2307/1989005, JSTOR 1989005
• Weierstrass, Karl (18 July 1872), Über continuirliche Functionen eines reellen Arguments, die für keinen Werth des letzeren einen bestimmten Differentialquotienten besitzen, Königlich Preussische Akademie der Wissenschaften
• Weierstrass, Karl (1895), "Über continuirliche Functionen eines reellen Arguments, die für keinen Werth des letzeren einen bestimmten Differentialquotienten besitzen", Mathematische Werke von Karl Weierstrass, vol. 2, Berlin, Germany: Mayer & Müller, pp. 71–74
• English translation: Edgar, Gerald A. (1993), "On continuous functions of a real argument that do not possess a well-defined derivative for any value of their argument", Classics on Fractals, Studies in Nonlinearity, Addison-Wesley Publishing Company, pp. 3–9, ISBN 978-0-201-58701-2
External links
• Weisstein, Eric W. "Weierstrass function". MathWorld. (a different Weierstrass Function which is also continuous and nowhere differentiable)
• Nowhere differentiable continuous function proof of existence using Banach's contraction principle.
• Nowhere monotonic continuous function proof of existence using the Baire category theorem.
• Johan Thim. "Continuous Nowhere Differentiable Functions". Master's thesis, Luleå University of Technology, 2003. Retrieved 28 July 2006.
• Weierstrass function in the complex plane Beautiful fractal.
• "Simple Proofs of Nowhere-Differentiability for Weierstrass's Function and Cases of Slow Growth", Journal of Fourier Analysis and Applications, Volume 16, Number 1 (SpringerLink)
• Weierstrass functions: continuous but not differentiable anywhere
• The Weierstrass Function by Brent Nelson at Berkeley, showing nondifferentiability
Weierstrass preparation theorem
In mathematics, the Weierstrass preparation theorem is a tool for dealing with analytic functions of several complex variables, at a given point P. It states that such a function is, up to multiplication by a function not zero at P, a polynomial in one fixed variable z, which is monic, and whose coefficients of lower degree terms are analytic functions in the remaining variables and zero at P.
There are also a number of variants of the theorem, that extend the idea of factorization in some ring R as u·w, where u is a unit and w is some sort of distinguished Weierstrass polynomial. Carl Siegel has disputed the attribution of the theorem to Weierstrass, saying that it occurred under the current name in some late nineteenth-century Traités d'analyse without justification.
Complex analytic functions
For one variable, the local form of an analytic function f(z) near 0 is zkh(z) where h(0) is not 0, and k is the order of the zero of f at 0. This is the result that the preparation theorem generalises. We pick out one variable z, which we may assume is first, and write our complex variables as (z, z2, ..., zn). A Weierstrass polynomial W(z) is
$z^{k}+g_{k-1}z^{k-1}+\cdots +g_{0}$
where gi(z2, ..., zn) is analytic and gi(0, ..., 0) = 0.
Then the theorem states that for analytic functions f, if
f(0, ...,0) = 0,
and
f(z, z2, ..., zn)
as a power series has some term only involving z, we can write (locally near (0, ..., 0))
f(z, z2, ..., zn) = W(z)h(z, z2, ..., zn)
with h analytic and h(0, ..., 0) not 0, and W a Weierstrass polynomial.
This has the immediate consequence that the set of zeros of f, near (0, ..., 0), can be found by fixing any small values of z2, ..., zn and then solving the equation W(z)=0. The corresponding values of z form a number of continuously-varying branches, in number equal to the degree of W in z. In particular f cannot have an isolated zero.
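The simplest nontrivial illustration is f(z, w) = z² − w near (0, 0): here f is already a Weierstrass polynomial in z (k = 2, g₁ = 0, g₀ = −w, unit factor h ≡ 1), and for each small fixed w the equation W(z) = 0 has exactly k = 2 continuously varying branches, so the zero of f at the origin is not isolated. A Python sketch (example of ours):

```python
import cmath

def f(z, w):
    """f(z, w) = z^2 - w: a Weierstrass polynomial in z of degree k = 2,
    matching the order of the zero of f(z, 0) = z^2 at z = 0."""
    return z * z - w

for w in (1e-2, 1e-2j, -1e-3 + 1e-3j):
    roots = (cmath.sqrt(w), -cmath.sqrt(w))   # the two branches z = +/- sqrt(w)
    for z in roots:
        assert abs(f(z, w)) < 1e-15
    # The zeros move continuously with w and stay near the origin.
    assert all(abs(z) < 0.2 for z in roots)
```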
Division theorem
A related result is the Weierstrass division theorem, which states that if f and g are analytic functions, and g is a Weierstrass polynomial of degree N, then there exists a unique pair h and j such that f = gh + j, where j is a polynomial of degree less than N. In fact, many authors prove the Weierstrass preparation as a corollary of the division theorem. It is also possible to prove the division theorem from the preparation theorem so that the two theorems are actually equivalent.[1]
Applications
The Weierstrass preparation theorem can be used to show that the ring of germs of analytic functions in n variables is a Noetherian ring, which is also referred to as the Rückert basis theorem.[2]
Smooth functions
There is a deeper preparation theorem for smooth functions, due to Bernard Malgrange, called the Malgrange preparation theorem. It also has an associated division theorem, named after John Mather.
Formal power series in complete local rings
There is an analogous result, also referred to as the Weierstrass preparation theorem, for the ring of formal power series over complete local rings A:[3] for any power series $f=\sum _{n=0}^{\infty }a_{n}t^{n}\in A[[t]]$ such that not all $a_{n}$ are in the maximal ideal ${\mathfrak {m}}$ of A, there is a unique unit u in $A[[t]]$ and a polynomial F of the form $F=t^{s}+b_{s-1}t^{s-1}+\dots +b_{0}$ with $b_{i}\in {\mathfrak {m}}$ (a so-called distinguished polynomial) such that
$f=uF.$
Since $A[[t]]$ is again a complete local ring, the result can be iterated and therefore gives similar factorization results for formal power series in several variables.
For example, this applies to the ring of integers in a p-adic field. In this case the theorem says that a power series f(z) can always be uniquely factored as πn·u(z)·p(z), where u(z) is a unit in the ring of power series, p(z) is a distinguished polynomial (monic, with the coefficients of the non-leading terms each in the maximal ideal), and π is a fixed uniformizer.
An application of the Weierstrass preparation and division theorem for the ring $\mathbf {Z} _{p}[[t]]$ (also called Iwasawa algebra) occurs in Iwasawa theory in the description of finitely generated modules over this ring.[4]
There exists a non-commutative version of Weierstrass division and preparation, with A being a not necessarily commutative ring, and with formal skew power series in place of formal power series.[5]
Tate algebras
There is also a Weierstrass preparation theorem for Tate algebras
$T_{n}(k)=\left\{\sum _{\nu _{1},\dots ,\nu _{n}\geq 0}a_{\nu _{1},\dots ,\nu _{n}}X_{1}^{\nu _{1}}\cdots X_{n}^{\nu _{n}},|a_{\nu _{1},\dots ,\nu _{n}}|\to 0{\text{ for }}\nu _{1}+\dots +\nu _{n}\to \infty \right\}$
over a complete non-archimedean field k.[6] These algebras are the basic building blocks of rigid geometry. One application of this form of the Weierstrass preparation theorem is the fact that the rings $T_{n}(k)$ are Noetherian.
See also
• Oka coherence theorem
References
1. Grauert, Hans; Remmert, Reinhold (1971), Analytische Stellenalgebren (in German), Springer, p. 43, doi:10.1007/978-3-642-65033-8, ISBN 978-3-642-65034-5
2. Ebeling, Wolfgang (2007), Functions of Several Complex Variables and Their Singularities, American Mathematical Society, Proposition 2.19
3. Nicolas Bourbaki (1972), Commutative algebra, Hermann, chapter VII, §3, no. 9, Proposition 6
4. Lawrence Washington (1982), Introduction to cyclotomic fields, Springer, Theorem 13.12
5. Otmar Venjakob (2003). "A noncommutative Weierstrass preparation theorem and applications to Iwasawa theory". J. Reine Angew. Math. 2003 (559): 153–191. doi:10.1515/crll.2003.047. S2CID 14265629. Retrieved 2022-01-27. Theorem 3.1, Corollary 3.2
6. Bosch, Siegfried; Güntzer, Ulrich; Remmert, Reinhold (1984), Non-archimedean analysis, Springer, Chapters 5.2.1, 5.2.2
• Lewis, Andrew, Notes on Global Analysis
• Siegel, C. L. (1969), "Zu den Beweisen des Vorbereitungssatzes von Weierstrass", Number Theory and Analysis (Papers in Honor of Edmund Landau), New York: Plenum, pp. 297–306, MR 0268402, reprinted in Siegel, Carl Ludwig (1979), Chandrasekharan, K.; Maass., H. (eds.), Gesammelte Abhandlungen. Band IV, Berlin-New York: Springer-Verlag, pp. 1–8, ISBN 0-387-09374-5, MR 0543842
• Solomentsev, E.D. (2001) [1994], "Weierstrass theorem", Encyclopedia of Mathematics, EMS Press
• Stickelberger, L. (1887), "Ueber einen Satz des Herrn Noether", Mathematische Annalen, 30 (3): 401–409, doi:10.1007/BF01443952, S2CID 121360367
• Weierstrass, K. (1895), Mathematische Werke. II. Abhandlungen 2, Berlin: Mayer & Müller, pp. 135–142 reprinted by Johnson, New York, 1967.
External links
• Lebl, Jiří. "Weierstrass Preparation and Division Theorems. (2021, September 5)". LibreTexts.
Extreme value theorem
In calculus, the extreme value theorem states that if a real-valued function $f$ is continuous on the closed interval $[a,b]$, then $f$ must attain a maximum and a minimum, each at least once. That is, there exist numbers $c$ and $d$ in $[a,b]$ such that:
$f(c)\geq f(x)\geq f(d)\quad \forall x\in [a,b]$
This article is about the calculus concept. For the statistical concept, see Fisher–Tippett–Gnedenko theorem.
The extreme value theorem is more specific than the related boundedness theorem, which states merely that a continuous function $f$ on the closed interval $[a,b]$ is bounded on that interval; that is, there exist real numbers $m$ and $M$ such that:
$m\leq f(x)\leq M\quad \forall x\in [a,b].$
This does not say that $M$ and $m$ are necessarily the maximum and minimum values of $f$ on the interval $[a,b],$ which is what the extreme value theorem stipulates must also be the case.
The extreme value theorem is used to prove Rolle's theorem. In a formulation due to Karl Weierstrass, this theorem states that a continuous function from a non-empty compact space to a subset of the real numbers attains a maximum and a minimum.
History
The extreme value theorem was originally proven by Bernard Bolzano in the 1830s in a work, Function Theory, but the work remained unpublished until 1930. Bolzano's proof consisted of showing that a continuous function on a closed interval was bounded, and then showing that the function attained a maximum and a minimum value. Both proofs involved what is known today as the Bolzano–Weierstrass theorem.[1]
Functions to which the theorem does not apply
The following examples show why the function domain must be closed and bounded in order for the theorem to apply. Each fails to attain a maximum on the given interval.
1. $f(x)=x$ defined over $[0,\infty )$ is not bounded from above.
2. $f(x)={\frac {x}{1+x}}$ defined over $[0,\infty )$ is bounded but does not attain its least upper bound $1$.
3. $f(x)={\frac {1}{x}}$ defined over $(0,1]$ is not bounded from above.
4. $f(x)=1-x$ defined over $(0,1]$ is bounded but never attains its least upper bound $1$.
Defining $f(0)=0$ in the last two examples shows that both theorems require continuity on $[a,b]$.
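The failure modes above can be observed numerically. This sketch (an illustration only, not a proof; the sampling scheme is our own choice) evaluates examples 2 and 4: the sampled maxima creep toward the supremum 1, but every individual value stays strictly below it, so no maximum is attained.

```python
def sup_on_samples(f, xs):
    """Largest value of f over a finite sample of the domain."""
    return max(f(x) for x in xs)

f2 = lambda x: x / (1 + x)   # example 2, on [0, inf): sup = 1, never attained
f4 = lambda x: 1 - x         # example 4, on (0, 1]:  sup = 1, never attained

# pushing the sample farther out (resp. closer to 0) raises the max toward 1,
# yet every sampled value remains strictly below 1
m2 = sup_on_samples(f2, [10.0 ** k for k in range(1, 7)])
m4 = sup_on_samples(f4, [10.0 ** (-k) for k in range(1, 7)])
assert m2 < 1 and m4 < 1
```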
Generalization to metric and topological spaces
When moving from the real line $\mathbb {R} $ to metric spaces and general topological spaces, the appropriate generalization of a closed bounded interval is a compact set. A set $K$ is said to be compact if it has the following property: from every collection of open sets $U_{\alpha }$ such that $ \bigcup U_{\alpha }\supset K$, a finite subcollection $U_{\alpha _{1}},\ldots ,U_{\alpha _{n}}$can be chosen such that $ \bigcup _{i=1}^{n}U_{\alpha _{i}}\supset K$. This is usually stated in short as "every open cover of $K$ has a finite subcover". The Heine–Borel theorem asserts that a subset of the real line is compact if and only if it is both closed and bounded. Correspondingly, a metric space has the Heine–Borel property if every closed and bounded set is also compact.
The concept of a continuous function can likewise be generalized. Given topological spaces $V,\ W$, a function $f:V\to W$ is said to be continuous if for every open set $U\subset W$, $f^{-1}(U)\subset V$ is also open. Given these definitions, continuous functions can be shown to preserve compactness:[2]
Theorem. If $V,\ W$ are topological spaces, $f:V\to W$ is a continuous function, and $K\subset V$ is compact, then $f(K)\subset W$ is also compact.
In particular, if $W=\mathbb {R} $, then this theorem implies that $f(K)$ is closed and bounded for any compact set $K$, which in turn implies that $f$ attains its supremum and infimum on any (nonempty) compact set $K$. Thus, we have the following generalization of the extreme value theorem:[2]
Theorem. If $K$ is a compact set and $f:K\to \mathbb {R} $ is a continuous function, then $f$ is bounded and there exist $p,q\in K$ such that $ f(p)=\sup _{x\in K}f(x)$ and $ f(q)=\inf _{x\in K}f(x)$.
Slightly more generally, this is also true for an upper semicontinuous function (see Compact space § Functions and compact spaces).
Proving the theorems
We look at the proof for the upper bound and the maximum of $f$. By applying these results to the function $-f$, the existence of the lower bound and the result for the minimum of $f$ follows. Also note that everything in the proof is done within the context of the real numbers.
We first prove the boundedness theorem, which is a step in the proof of the extreme value theorem. The basic steps involved in the proof of the extreme value theorem are:
1. Prove the boundedness theorem.
2. Find a sequence so that its image converges to the supremum of $f$.
3. Show that there exists a subsequence that converges to a point in the domain.
4. Use continuity to show that the image of the subsequence converges to the supremum.
Proof of the boundedness theorem
Statement. If $f(x)$ is continuous on $[a,b]$, then it is bounded on $[a,b]$.
Suppose the function $f$ is not bounded above on the interval $[a,b]$. Then, for every natural number $n$, there exists an $x_{n}\in [a,b]$ such that $f(x_{n})>n$. This defines a sequence $(x_{n})_{n\in \mathbb {N} }$. Because $[a,b]$ is bounded, the Bolzano–Weierstrass theorem implies that there exists a convergent subsequence $(x_{n_{k}})_{k\in \mathbb {N} }$ of $({x_{n}})$. Denote its limit by $x$. As $[a,b]$ is closed, it contains $x$. Because $f$ is continuous at $x$, we know that $f(x_{{n}_{k}})$ converges to the real number $f(x)$ (as $f$ is sequentially continuous at $x$). But $f(x_{{n}_{k}})>n_{k}\geq k$ for every $k$, which implies that $f(x_{{n}_{k}})$ diverges to $+\infty $, a contradiction. Therefore, $f$ is bounded above on $[a,b]$. $\Box $
Alternative proof
Statement. If $f(x)$ is continuous on $[a,b]$, then it is bounded on $[a,b]$.
Proof. Consider the set $B$ of points $p$ in $[a,b]$ such that $f(x)$ is bounded on $[a,p]$. We note that $a$ is one such point, for $f(x)$ is bounded on $[a,a]$ by the value $f(a)$. If $e>a$ is another point of $B$, then all points between $a$ and $e$ also belong to $B$. In other words, $B$ is an interval closed at its left end by $a$.
Now $f$ is continuous on the right at $a$, hence there exists $\delta >0$ such that $|f(x)-f(a)|<1$ for all $x$ in $[a,a+\delta ]$. Thus $f$ is bounded by $f(a)-1$ and $f(a)+1$ on the interval $[a,a+\delta ]$ so that all these points belong to $B$.
So far, we know that $B$ is an interval of non-zero length, closed at its left end by $a$.
Next, $B$ is bounded above by $b$. Hence the set $B$ has a supremum in $[a,b]$; let us call it $s$. From the non-zero length of $B$ we can deduce that $s>a$.
Suppose $s<b$. Now $f$ is continuous at $s$, hence there exists $\delta >0$ such that $|f(x)-f(s)|<1$ for all $x$ in $[s-\delta ,s+\delta ]$, so that $f$ is bounded on this interval. But since $s$ is the supremum of $B$, there exists a point belonging to $B$, say $e$, which is greater than $s-\delta /2$. Thus $f$ is bounded on $[a,e]$, which overlaps $[s-\delta ,s+\delta ]$, so that $f$ is bounded on $[a,s+\delta ]$. This, however, contradicts the fact that $s$ is an upper bound of $B$.
We must therefore have $s=b$. Now $f$ is continuous on the left at $s$, hence there exists $\delta >0$ such that $|f(x)-f(s)|<1$ for all $x$ in $[s-\delta ,s]$, so that $f$ is bounded on this interval. But since $s$ is the supremum of $B$, there exists a point belonging to $B$, say $e$, which is greater than $s-\delta /2$. Thus $f$ is bounded on $[a,e]$, which overlaps $[s-\delta ,s]$, so that $f$ is bounded on $[a,s]=[a,b]$. ∎
Proof of the extreme value theorem
By the boundedness theorem, f is bounded from above, hence, by the Dedekind-completeness of the real numbers, the least upper bound (supremum) M of f exists. It is necessary to find a point d in [a, b] such that M = f(d). Let n be a natural number. As M is the least upper bound, M – 1/n is not an upper bound for f. Therefore, there exists dn in [a, b] so that M – 1/n < f(dn). This defines a sequence {dn}. Since M is an upper bound for f, we have M – 1/n < f(dn) ≤ M for all n. Therefore, the sequence {f(dn)} converges to M.
The Bolzano–Weierstrass theorem tells us that there exists a subsequence $\{d_{n_{k}}\}$, which converges to some d and, as [a, b] is closed, d is in [a, b]. Since f is continuous at d, the sequence $\{f(d_{n_{k}})\}$ converges to f(d). But $\{f(d_{n_{k}})\}$ is a subsequence of $\{f(d_{n})\}$ that converges to M, so M = f(d). Therefore, f attains its supremum M at d. ∎
Alternative proof of the extreme value theorem
The set {y ∈ R : y = f(x) for some x ∈ [a,b]} is a bounded set. Hence, its least upper bound exists by the least upper bound property of the real numbers. Let M = sup(f(x)) on [a, b]. If there is no point x on [a, b] such that f(x) = M, then f(x) < M on [a, b], and therefore 1/(M − f(x)) is continuous on [a, b].
However, for every positive number ε, there is always some x in [a, b] such that M − f(x) < ε, because M is the least upper bound. Hence, 1/(M − f(x)) > 1/ε, which means that 1/(M − f(x)) is not bounded. Since every continuous function on [a, b] is bounded, this contradicts the conclusion that 1/(M − f(x)) is continuous on [a, b]. Therefore, there must be a point x in [a, b] such that f(x) = M. ∎
Proof using the hyperreals
In the setting of non-standard calculus, let N be an infinite hyperinteger. The interval [0, 1] has a natural hyperreal extension. Consider its partition into N subintervals of equal infinitesimal length 1/N, with partition points xi = i /N as i "runs" from 0 to N. The function ƒ is also naturally extended to a function ƒ* defined on the hyperreals between 0 and 1. Note that in the standard setting (when N is finite), a point with the maximal value of ƒ can always be chosen among the N+1 points xi, by induction. Hence, by the transfer principle, there is a hyperinteger i0 such that 0 ≤ i0 ≤ N and $f^{*}(x_{i_{0}})\geq f^{*}(x_{i})$ for all i = 0, ..., N. Consider the real point
$c=\mathbf {st} (x_{i_{0}})$
where st is the standard part function. An arbitrary real point x lies in a suitable sub-interval of the partition, namely $x\in [x_{i},x_{i+1}]$, so that $\mathbf {st} (x_{i})=x$. Applying st to the inequality $f^{*}(x_{i_{0}})\geq f^{*}(x_{i})$, we obtain $\mathbf {st} (f^{*}(x_{i_{0}}))\geq \mathbf {st} (f^{*}(x_{i}))$. By continuity of ƒ we have
$\mathbf {st} (f^{*}(x_{i_{0}}))=f(\mathbf {st} (x_{i_{0}}))=f(c)$.
Hence ƒ(c) ≥ ƒ(x), for all real x, proving c to be a maximum of ƒ.[3]
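The finite statement that the transfer principle extends can be sketched directly: for finite N, a maximum over the N+1 partition points exists by induction, and refining the partition drives it to the true maximum. The sample function f(x) = x(1 − x), with maximum 1/4 at x = 1/2, is our own choice for illustration.

```python
def partition_max(f, N):
    """Maximizing point and value of f over the partition points i/N."""
    best_i = max(range(N + 1), key=lambda i: f(i / N))
    return best_i / N, f(best_i / N)

f = lambda x: x * (1 - x)            # true maximum f(1/2) = 1/4
for N in (10, 100, 1000):
    x_star, fx = partition_max(f, N)
    assert abs(fx - 0.25) <= 1 / N   # the error shrinks with the mesh
```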
Proof from first principles
Statement. If $f(x)$ is continuous on $[a,b]$, then it attains its supremum on $[a,b]$.
Proof. By the boundedness theorem, $f(x)$ is bounded above on $[a,b]$ and, by the completeness property of the real numbers, has a supremum in $[a,b]$. Let us call it $M$, or $M[a,b]$. It is clear that the restriction of $f$ to the subinterval $[a,x]$, where $x\leq b$, has a supremum $M[a,x]$ which is less than or equal to $M$, and that $M[a,x]$ increases from $f(a)$ to $M$ as $x$ increases from $a$ to $b$.
If $f(a)=M$ then we are done. Suppose therefore that $f(a)<M$ and let $d=M-f(a)$. Consider the set $L$ of points $x$ in $[a,b]$ such that $M[a,x]<M$.
Clearly $a\in L$; moreover, if $e>a$ is another point in $L$, then all points between $a$ and $e$ also belong to $L$, because $M[a,x]$ is monotonically increasing. Hence $L$ is a non-empty interval, closed at its left end by $a$.
Now $f$ is continuous on the right at $a$, hence there exists $\delta >0$ such that $|f(x)-f(a)|<d/2$ for all $x$ in $[a,a+\delta ]$. Thus $f$ is less than $M-d/2$ on the interval $[a,a+\delta ]$ so that all these points belong to $L$.
Next, $L$ is bounded above by $b$ and has therefore a supremum in $[a,b]$: let us call it $s$. We see from the above that $s>a$. We will show that $s$ is the point we are seeking i.e. the point where $f$ attains its supremum, or in other words $f(s)=M$.
Suppose, on the contrary, that $f(s)<M$. Let $d=M-f(s)$ and consider the following two cases:
1. $s<b$. As $f$ is continuous at $s$, there exists $\delta >0$ such that $|f(x)-f(s)|<d/2$ for all $x$ in $[s-\delta ,s+\delta ]$. This means that $f$ is less than $M-d/2$ on the interval $[s-\delta ,s+\delta ]$. But since $s$ is the supremum of $L$, there exists a point, say $e$, belonging to $L$ which is greater than $s-\delta $. By the definition of $L$, $M[a,e]<M$. Let $d_{1}=M-M[a,e]$; then for all $x$ in $[a,e]$, $f(x)\leq M-d_{1}$. Taking $d_{2}$ to be the minimum of $d/2$ and $d_{1}$, we have $f(x)\leq M-d_{2}$ for all $x$ in $[a,s+\delta ]$.
Hence $M[a,s+\delta ]<M$, so that $s+\delta \in L$. This, however, contradicts the fact that $s$ is an upper bound of $L$, and completes the proof in this case.
2. $s=b$. As $f$ is continuous on the left at $s$, there exists $\delta >0$ such that $|f(x)-f(s)|<d/2$ for all $x$ in $[s-\delta ,s]$. This means that $f$ is less than $M-d/2$ on the interval $[s-\delta ,s]$. But since $s$ is the supremum of $L$, there exists a point, say $e$, belonging to $L$ which is greater than $s-\delta $. By the definition of $L$, $M[a,e]<M$. Let $d_{1}=M-M[a,e]$; then for all $x$ in $[a,e]$, $f(x)\leq M-d_{1}$. Taking $d_{2}$ to be the minimum of $d/2$ and $d_{1}$, we have $f(x)\leq M-d_{2}$ for all $x$ in $[a,b]$. This contradicts the fact that $M$ is the least upper bound of $f$ on $[a,b]$, and completes the proof.
Extension to semi-continuous functions
If the continuity of the function f is weakened to semi-continuity, then the corresponding half of the boundedness theorem and the extreme value theorem hold and the values –∞ or +∞, respectively, from the extended real number line can be allowed as possible values. More precisely:
Theorem: If a function f : [a, b] → [–∞, ∞) is upper semi-continuous, meaning that
$\limsup _{y\to x}f(y)\leq f(x)$
for all x in [a,b], then f is bounded above and attains its supremum.
Proof: If f(x) = –∞ for all x in [a,b], then the supremum is also –∞ and the theorem is true. In all other cases, the proof is a slight modification of the proofs given above. In the proof of the boundedness theorem, the upper semi-continuity of f at x only implies that the limit superior of the subsequence $\{f(x_{n_{k}})\}$ is bounded above by f(x) < ∞, but that is enough to obtain the contradiction. In the proof of the extreme value theorem, upper semi-continuity of f at d implies that the limit superior of the subsequence $\{f(d_{n_{k}})\}$ is bounded above by f(d), but this suffices to conclude that f(d) = M. ∎
Applying this result to −f proves:
Theorem: If a function f : [a, b] → (–∞, ∞] is lower semi-continuous, meaning that
$\liminf _{y\to x}f(y)\geq f(x)$
for all x in [a,b], then f is bounded below and attains its infimum.
A real-valued function is upper as well as lower semi-continuous, if and only if it is continuous in the usual sense. Hence these two theorems imply the boundedness theorem and the extreme value theorem.
References
1. Rusnock, Paul; Kerr-Lawson, Angus (2005). "Bolzano and Uniform Continuity". Historia Mathematica. 32 (3): 303–311. doi:10.1016/j.hm.2004.11.003.
2. Rudin, Walter (1976). Principles of Mathematical Analysis. New York: McGraw Hill. pp. 89–90. ISBN 0-07-054235-X.
3. Keisler, H. Jerome (1986). Elementary Calculus : An Infinitesimal Approach (PDF). Boston: Prindle, Weber & Schmidt. p. 164. ISBN 0-87150-911-3.
Further reading
• Adams, Robert A. (1995). Calculus : A Complete Course. Reading: Addison-Wesley. pp. 706–707. ISBN 0-201-82823-5.
• Protter, M. H.; Morrey, C. B. (1977). "The Boundedness and Extreme–Value Theorems". A First Course in Real Analysis. New York: Springer. pp. 71–73. ISBN 0-387-90215-5.
External links
• A Proof for extreme value theorem at cut-the-knot
• Extreme Value Theorem by Jacqueline Wandzura with additional contributions by Stephen Wandzura, the Wolfram Demonstrations Project.
• Weisstein, Eric W. "Extreme Value Theorem". MathWorld.
• Mizar system proof: http://mizar.org/version/current/html/weierstr.html#T15
Durand–Kerner method
In numerical analysis, the Weierstrass method or Durand–Kerner method, discovered by Karl Weierstrass in 1891 and rediscovered independently by Durand in 1960 and Kerner in 1966, is a root-finding algorithm for solving polynomial equations.[1] In other words, the method can be used to solve numerically the equation
f(x) = 0,
where f is a given polynomial, which can be taken to be scaled so that the leading coefficient is 1.
Explanation
This explanation considers equations of degree four. It is easily generalized to other degrees.
Let the polynomial f be defined by
$f(x)=x^{4}+ax^{3}+bx^{2}+cx+d$
for all x.
The known numbers a, b, c, d are the coefficients.
Let the (potentially complex) numbers P, Q, R, S be the roots of this polynomial f.
Then
$f(x)=(x-P)(x-Q)(x-R)(x-S)$
for all x. One can isolate the value P from this equation:
$P=x-{\frac {f(x)}{(x-Q)(x-R)(x-S)}}.$
So if used as a fixed-point iteration
$x_{1}:=x_{0}-{\frac {f(x_{0})}{(x_{0}-Q)(x_{0}-R)(x_{0}-S)}},$
it is strongly stable in that every initial point x0 ≠ Q, R, S delivers after one iteration the root P = x1.
Furthermore, if one replaces the zeros Q, R and S by approximations q ≈ Q, r ≈ R, s ≈ S, such that q, r, s are not equal to P, then P is still a fixed point of the perturbed fixed-point iteration
$x_{k+1}:=x_{k}-{\frac {f(x_{k})}{(x_{k}-q)(x_{k}-r)(x_{k}-s)}},$
since
$P-{\frac {f(P)}{(P-q)(P-r)(P-s)}}=P-0=P.$
Note that the denominator is still different from zero. This fixed-point iteration is a contraction mapping for x around P.
The clue to the method now is to combine the fixed-point iteration for P with similar iterations for Q, R, S into a simultaneous iteration for all roots.
Initialize p, q, r, s:
$p_{0}:=(0.4+0.9i)^{0},$
$q_{0}:=(0.4+0.9i)^{1},$
$r_{0}:=(0.4+0.9i)^{2},$
$s_{0}:=(0.4+0.9i)^{3}.$
There is nothing special about choosing 0.4 + 0.9i except that it is neither a real number nor a root of unity.
Make the substitutions for n = 1, 2, 3, ...:
$p_{n}=p_{n-1}-{\frac {f(p_{n-1})}{(p_{n-1}-q_{n-1})(p_{n-1}-r_{n-1})(p_{n-1}-s_{n-1})}},$
$q_{n}=q_{n-1}-{\frac {f(q_{n-1})}{(q_{n-1}-p_{n})(q_{n-1}-r_{n-1})(q_{n-1}-s_{n-1})}},$
$r_{n}=r_{n-1}-{\frac {f(r_{n-1})}{(r_{n-1}-p_{n})(r_{n-1}-q_{n})(r_{n-1}-s_{n-1})}},$
$s_{n}=s_{n-1}-{\frac {f(s_{n-1})}{(s_{n-1}-p_{n})(s_{n-1}-q_{n})(s_{n-1}-r_{n})}}.$
Re-iterate until the numbers p, q, r, s essentially stop changing relative to the desired precision. They then have the values P, Q, R, S in some order and in the chosen precision. So the problem is solved.
Note that complex number arithmetic must be used, and that the roots are found simultaneously rather than one at a time.
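The iteration above can be sketched as a short program (a minimal implementation for a monic polynomial given by its coefficients, highest degree first; the function name, stopping rule, and iteration cap are our own). It uses the same update order as the formulas above: each new root approximation is used immediately in the remaining updates of the same sweep.

```python
def durand_kerner(coeffs, tol=1e-12, max_iter=100):
    """Approximate all roots of the monic polynomial with the given
    coefficients (highest degree first) by the Durand-Kerner iteration."""
    n = len(coeffs) - 1                       # degree = number of roots

    def f(x):                                 # Horner evaluation
        y = 0
        for c in coeffs:
            y = y * x + c
        return y

    # initial guesses: powers of 0.4 + 0.9i (neither real nor a root of unity)
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        largest_step = 0.0
        for k in range(n):
            denom = 1
            for j in range(n):
                if j != k:
                    denom *= z[k] - z[j]
            w = f(z[k]) / denom               # Weierstrass correction
            z[k] -= w                         # use the new value immediately
            largest_step = max(largest_step, abs(w))
        if largest_step < tol:                # essentially stopped changing
            break
    return z

roots = durand_kerner([1, -3, 3, -5])         # x^3 - 3x^2 + 3x - 5
```

Complex arithmetic is used throughout even when the coefficients are real, since the roots may come in conjugate pairs.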
Variations
This iteration procedure, like the Gauss–Seidel method for linear equations, computes one number at a time based on the already computed numbers. A variant of this procedure, like the Jacobi method, computes a whole vector of root approximations at once. Both variants are effective root-finding algorithms.
One could also choose the initial values for p, q, r, s by some other procedure, even randomly, but in a way that
• they are inside some not-too-large circle containing also the roots of f(x), e.g. the circle around the origin with radius $1+\max {\big (}|a|,|b|,|c|,|d|{\big )}$, (where 1, a, b, c, d are the coefficients of f(x))
and that
• they are not too close to each other,
which may increasingly become a concern as the degree of the polynomial increases.
If the coefficients are real and the polynomial has odd degree, then it must have at least one real root. To find this, use a real value of p0 as the initial guess and make q0 and r0, etc., complex conjugate pairs. Then the iteration will preserve these properties; that is, pn will always be real, and qn and rn, etc., will always be conjugate. In this way, the pn will converge to a real root P. Alternatively, make all of the initial guesses real; they will remain so.
Example
This example is from a 1992 reference. The equation solved is $x^{3}-3x^{2}+3x-5=0$. The first 4 iterations move p, q, r seemingly chaotically, but then the roots are located to 1 decimal. After iteration number 5 we have 4 correct decimals, and the subsequent iteration number 6 confirms that the computed roots are fixed. This general behaviour is characteristic of the method. Also notice that, in this example, the roots are used as soon as they are computed in each iteration; that is, the computation of each column uses the values already computed earlier in the same row.
it.-no. p q r
0 +1.0000 + 0.0000i +0.4000 + 0.9000i −0.6500 + 0.7200i
1 +1.3608 + 2.0222i −0.3658 + 2.4838i −2.3858 − 0.0284i
2 +2.6597 + 2.7137i +0.5977 + 0.8225i −0.6320 − 1.6716i
3 +2.2704 + 0.3880i +0.1312 + 1.3128i +0.2821 − 1.5015i
4 +2.5428 − 0.0153i +0.2044 + 1.3716i +0.2056 − 1.3721i
5 +2.5874 + 0.0000i +0.2063 + 1.3747i +0.2063 − 1.3747i
6 +2.5874 + 0.0000i +0.2063 + 1.3747i +0.2063 − 1.3747i
Note that the equation has one real root and one pair of complex conjugate roots, and that the sum of the roots is 3.
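The tabulated limits can be checked independently: $x^{3}-3x^{2}+3x-5=(x-1)^{3}-4$, so the roots are exactly $1+4^{1/3}\omega$ for the three cube roots of unity $\omega$. A short computation (our own verification, not part of the cited example) recovers the values in the table:

```python
import cmath

cube_root_4 = 4 ** (1 / 3)
omegas = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
roots = [1 + cube_root_4 * w for w in omegas]

# real root and conjugate pair, matching the table to 4 decimals
p = roots[0]   # 2.5874 + 0.0000i
q = roots[1]   # 0.2063 + 1.3747i
r = roots[2]   # 0.2063 - 1.3747i
```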
Derivation of the method via Newton's method
For every n-tuple of complex numbers, there is exactly one monic polynomial of degree n that has them as its zeros (keeping multiplicities). This polynomial is given by multiplying all the corresponding linear factors, that is
$g_{\vec {z}}(X)=(X-z_{1})\cdots (X-z_{n}).$
This polynomial has coefficients that depend on the prescribed zeros,
$g_{\vec {z}}(X)=X^{n}+g_{n-1}({\vec {z}})X^{n-1}+\cdots +g_{0}({\vec {z}}).$
Those coefficients are, up to a sign, the elementary symmetric polynomials $\alpha _{1}({\vec {z}}),\dots ,\alpha _{n}({\vec {z}})$ of degrees 1,...,n.
To find all the roots of a given polynomial $f(X)=X^{n}+c_{n-1}X^{n-1}+\cdots +c_{0}$ with coefficient vector $(c_{n-1},\dots ,c_{0})$ simultaneously is now the same as to find a solution vector to the Vieta's system
${\begin{matrix}c_{0}&=&g_{0}({\vec {z}})&=&(-1)^{n}\alpha _{n}({\vec {z}})&=&(-1)^{n}z_{1}\cdots z_{n}\\c_{1}&=&g_{1}({\vec {z}})&=&(-1)^{n-1}\alpha _{n-1}({\vec {z}})\\&\vdots &\\c_{n-1}&=&g_{n-1}({\vec {z}})&=&-\alpha _{1}({\vec {z}})&=&-(z_{1}+z_{2}+\cdots +z_{n}).\end{matrix}}$
The Durand–Kerner method is obtained as the multidimensional Newton's method applied to this system. It is algebraically more comfortable to treat those identities of coefficients as the identity of the corresponding polynomials, $g_{\vec {z}}(X)=f(X)$. In Newton's method one looks, given some initial vector ${\vec {z}}$, for an increment vector ${\vec {w}}$ such that $g_{{\vec {z}}+{\vec {w}}}(X)=f(X)$ is satisfied up to second and higher order terms in the increment. For this one solves the identity
$f(X)-g_{\vec {z}}(X)=\sum _{k=1}^{n}{\frac {\partial g_{\vec {z}}(X)}{\partial z_{k}}}w_{k}=-\sum _{k=1}^{n}w_{k}\prod _{j\neq k}(X-z_{j}).$
If the numbers $z_{1},\dots ,z_{n}$ are pairwise different, then the polynomials in the terms of the right hand side form a basis of the n-dimensional space $\mathbb {C} [X]_{n-1}$ of polynomials with maximal degree n − 1. Thus a solution ${\vec {w}}$ to the increment equation exists in this case. The coordinates of the increment ${\vec {w}}$ are simply obtained by evaluating the increment equation
$-\sum _{k=1}^{n}w_{k}\prod _{j\neq k}(X-z_{j})=f(X)-\prod _{j=1}^{n}(X-z_{j})$
at the points $X=z_{k}$, which results in
$-w_{k}\prod _{j\neq k}(z_{k}-z_{j})=-w_{k}g_{\vec {z}}'(z_{k})=f(z_{k})$, that is $w_{k}=-{\frac {f(z_{k})}{\prod _{j\neq k}(z_{k}-z_{j})}}.$
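The coordinate formula can be exercised directly. This sketch (the polynomial and the crude starting guesses are our own example) computes the whole increment vector at once, i.e. the Jacobi-style simultaneous step, for $f(x)=x^{2}-1$ and checks that one step moves both approximations toward the roots $\pm 1$:

```python
def newton_increment(f, zs):
    """Newton step for Vieta's system: w_k = -f(z_k) / prod_{j!=k}(z_k - z_j)."""
    ws = []
    for k, zk in enumerate(zs):
        denom = 1.0
        for j, zj in enumerate(zs):
            if j != k:
                denom *= zk - zj
        ws.append(-f(zk) / denom)
    return ws

f = lambda x: x * x - 1        # roots are +1 and -1
zs = [0.5, -0.6]               # crude initial guesses
ws = newton_increment(f, zs)
new_zs = [z + w for z, w in zip(zs, ws)]   # one simultaneous update
```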
Root inclusion via Gerschgorin's circles
In the quotient ring of residue classes modulo ƒ(X), the multiplication by X defines an endomorphism that has the zeros of ƒ(X) as eigenvalues with the corresponding multiplicities. Choosing a basis, the multiplication operator is represented by its coefficient matrix A, the companion matrix of ƒ(X) for this basis.
Since every polynomial can be reduced modulo ƒ(X) to a polynomial of degree n − 1 or lower, the space of residue classes can be identified with the space of polynomials of degree bounded by n − 1. A problem specific basis can be taken from Lagrange interpolation as the set of n polynomials
$b_{k}(X)=\prod _{1\leq j\leq n,\;j\neq k}(X-z_{j}),\quad k=1,\dots ,n,$
where $z_{1},\dots ,z_{n}\in \mathbb {C} $ are pairwise different complex numbers. Note that the kernel functions for the Lagrange interpolation are $L_{k}(X)={\frac {b_{k}(X)}{b_{k}(z_{k})}}$.
For the multiplication operator applied to the basis polynomials one obtains from the Lagrange interpolation
$X\cdot b_{k}(X)\mod f(X)=X\cdot b_{k}(X)-f(X)$ $=\sum _{j=1}^{n}{\Big (}z_{j}\cdot b_{k}(z_{j})-f(z_{j}){\Big )}\cdot {\frac {b_{j}(X)}{b_{j}(z_{j})}}$
$=z_{k}\cdot b_{k}(X)+\sum _{j=1}^{n}w_{j}\cdot b_{j}(X)$,
where $w_{j}=-{\frac {f(z_{j})}{b_{j}(z_{j})}}$ are again the Weierstrass updates.
The companion matrix of ƒ(X) is therefore
$A=\mathrm {diag} (z_{1},\dots ,z_{n})+{\begin{pmatrix}1\\\vdots \\1\end{pmatrix}}\cdot \left(w_{1},\dots ,w_{n}\right).$
From the transposed-matrix case of the Gershgorin circle theorem it follows that all eigenvalues of A, that is, all roots of ƒ(X), are contained in the union of the disks $D(a_{k,k},r_{k})$ with radii $r_{k}=\sum _{j\neq k}{\big |}a_{j,k}{\big |}$.
Here one has $a_{k,k}=z_{k}+w_{k}$, so the centers are the next iterates of the Weierstrass iteration, and radii $r_{k}=(n-1)\left|w_{k}\right|$ that are multiples of the Weierstrass updates. If the roots of ƒ(X) are all well isolated (relative to the computational precision) and the points $z_{1},\dots ,z_{n}\in \mathbb {C} $ are sufficiently close approximations to these roots, then all the disks will become disjoint, so each one contains exactly one zero. The midpoints of the circles will be better approximations of the zeros.
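As a numerical illustration, one can compute the disks $D(z_k+w_k,(n-1)|w_k|)$ for a concrete polynomial and check that each contains exactly one root; the cubic and the perturbed approximations below are illustrative choices:

```python
import math

def inclusion_disks(f, zs):
    """Gerschgorin disks D(z_k + w_k, (n-1)|w_k|) from the Weierstrass updates."""
    n = len(zs)
    disks = []
    for k, zk in enumerate(zs):
        wk = -f(zk) / math.prod(zk - zj for j, zj in enumerate(zs) if j != k)
        disks.append((zk + wk, (n - 1) * abs(wk)))
    return disks

# f(X) = X^3 - 1, with slightly perturbed root approximations
f = lambda z: z**3 - 1
true_roots = [1 + 0j, complex(-0.5, math.sqrt(3) / 2), complex(-0.5, -math.sqrt(3) / 2)]
disks = inclusion_disks(f, [r + 0.01 for r in true_roots])
```

With well-separated roots and approximations this close, the disks are disjoint and each isolates one zero.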
Every conjugate matrix $TAT^{-1}$ of A is also a companion matrix of ƒ(X). Choosing T as a diagonal matrix leaves the structure of A invariant. The root close to $z_{k}$ is contained in any isolated circle with center $z_{k}$, regardless of T. Choosing the optimal diagonal matrix T for every index results in better estimates (see ref. Petković et al. 1995).
Convergence results
The connection between the Taylor series expansion and Newton's method suggests that the distance from $z_{k}+w_{k}$ to the corresponding root is of the order $O{\big (}|w_{k}|^{2}{\big )}$, if the root is well isolated from nearby roots and the approximation is sufficiently close to the root. So after the approximation is close, Newton's method converges quadratically; that is, the error is squared with every step (which will greatly reduce the error once it is less than 1). In the case of the Durand–Kerner method, convergence is quadratic if the vector ${\vec {z}}=(z_{1},\dots ,z_{n})$ is close to some permutation of the vector of the roots of f.
For the conclusion of linear convergence there is a more specific result (see ref. Petković et al. 1995). If the initial vector ${\vec {z}}$ and its vector of Weierstrass updates ${\vec {w}}=(w_{1},\dots ,w_{n})$ satisfy the inequality
$\max _{1\leq k\leq n}|w_{k}|\leq {\frac {1}{5n}}\min _{1\leq j<k\leq n}|z_{k}-z_{j}|,$
then this inequality also holds for all iterates, all inclusion disks $D{\big (}z_{k}+w_{k},(n-1)|w_{k}|{\big )}$ are disjoint, and linear convergence with a contraction factor of 1/2 holds. Further, the inclusion disks can in this case be chosen as
$D\left(z_{k}+w_{k},{\tfrac {1}{4}}|w_{k}|\right),\quad k=1,\dots ,n,$
each containing exactly one zero of f.
Failing general convergence
The Weierstrass / Durand–Kerner method is not generally convergent: in other words, it is not true that for every polynomial the set of initial vectors that eventually converge to roots is open and dense. In fact, there are open sets of polynomials that have open sets of initial vectors converging to periodic cycles other than roots (see Reinke et al.).
References
1. Petković, Miodrag (1989). Iterative methods for simultaneous inclusion of polynomial zeros. Berlin [u.a.]: Springer. pp. 31–32. ISBN 978-3-540-51485-5.
• Weierstraß, Karl (1891). "Neuer Beweis des Satzes, dass jede ganze rationale Function einer Veränderlichen dargestellt werden kann als ein Product aus linearen Functionen derselben Veränderlichen". Sitzungsberichte der königlich preussischen Akademie der Wissenschaften zu Berlin.
• Durand, E. (1960). "Equations du type F(x) = 0: Racines d'un polynome". In Masson; et al. (eds.). Solutions Numériques des Equations Algébriques. Vol. 1.
• Kerner, Immo O. (1966). "Ein Gesamtschrittverfahren zur Berechnung der Nullstellen von Polynomen". Numerische Mathematik. 8 (3): 290–294. doi:10.1007/BF02162564. S2CID 115307022.
• Prešić, Marica (1980). "A convergence theorem for a method for simultaneous determination of all zeros of a polynomial" (PDF). Publications de l'Institut Mathématique. Nouvelle Série. 28 (42): 158–168.
• Petković, M. S.; Carstensen, C.; Trajković, M. (1995). "Weierstrass formula and zero-finding methods". Numerische Mathematik. 69 (3): 353–372. CiteSeerX 10.1.1.53.7516. doi:10.1007/s002110050097. S2CID 18594004.
• Bo Jacoby, Nulpunkter for polynomier, CAE-nyt (a periodical for Dansk CAE Gruppe [Danish CAE Group]), 1988.
• Agnethe Knudsen, Numeriske Metoder (lecture notes), Københavns Teknikum.
• Bo Jacoby, Numerisk løsning af ligninger, Bygningsstatiske meddelelser (Published by Danish Society for Structural Science and Engineering) volume 63 no. 3–4, 1992, pp. 83–105.
• Gourdon, Xavier (1996). Combinatoire, Algorithmique et Geometrie des Polynomes. Paris: École Polytechnique. Archived from the original on 2006-10-28. Retrieved 2006-08-22.
• Victor Pan (May 2002): Univariate Polynomial Root-Finding with Lower Computational Precision and Higher Convergence Rates. Tech-Report, City University of New York
• Neumaier, Arnold (2003). "Enclosing clusters of zeros of polynomials". Journal of Computational and Applied Mathematics. 156 (2): 389–401. Bibcode:2003JCoAM.156..389N. doi:10.1016/S0377-0427(03)00380-7.
• Jan Verschelde, The method of Weierstrass (also known as the Durand–Kerner method), 2003.
• Bernhard Reinke, Dierk Schleicher, and Michael Stoll, "The Weierstrass root finder is not generally convergent", 2020
External links
• Ada Generic_Roots using the Durand–Kerner Method (archive) — an open-source implementation in Ada
• Polynomial Roots — an open-source implementation in Java
• Roots Extraction from Polynomials : The Durand–Kerner Method — contains a Java applet demonstration
Weierstrass factorization theorem
In mathematics, and particularly in the field of complex analysis, the Weierstrass factorization theorem asserts that every entire function can be represented as a (possibly infinite) product involving its zeroes. The theorem may be viewed as an extension of the fundamental theorem of algebra, which asserts that every polynomial may be factored into linear factors, one for each root.
The theorem, which is named for Karl Weierstrass, is closely related to a second result that every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence.
A generalization of the theorem extends it to meromorphic functions and allows one to consider a given meromorphic function as a product of three factors: terms depending on the function's zeros and poles, and an associated non-zero holomorphic function.
Motivation
The consequences of the fundamental theorem of algebra are twofold.[1] Firstly, any finite sequence $\{c_{n}\}$ in the complex plane has an associated polynomial $p(z)$ that has zeroes precisely at the points of that sequence, $ p(z)=\prod _{n}(z-c_{n}).$
Secondly, any polynomial function $p(z)$ in the complex plane has a factorization $ p(z)=a\prod _{n}(z-c_{n}),$ where a is a non-zero constant and cn are the zeroes of p.
The two forms of the Weierstrass factorization theorem can be thought of as extensions of the above to entire functions. The necessity of additional terms in the product is demonstrated when one considers $ \prod _{n}(z-c_{n})$ where the sequence $\{c_{n}\}$ is not finite. It can never define an entire function, because the infinite product does not converge. Thus one cannot, in general, define an entire function from a sequence of prescribed zeroes or represent an entire function by its zeroes using the expressions yielded by the fundamental theorem of algebra.
A necessary condition for convergence of the infinite product in question is that for each z, the factors $(z-c_{n})$ must approach 1 as $n\to \infty $. So it stands to reason that one should seek a function that could be 0 at a prescribed point, yet remain near 1 when not at that point and furthermore introduce no more zeroes than those prescribed. Weierstrass' elementary factors have these properties and serve the same purpose as the factors $(z-c_{n})$ above.
The elementary factors
Consider the functions of the form $ \exp \left(-{\tfrac {z^{n+1}}{n+1}}\right)$ for $n\in \mathbb {N} $. At $z=0$ they evaluate to $1$ and have zero slope up to order $n$; just past $z=1$ they fall sharply to small positive values. In contrast, the function $1-z$ has no flat slope but, at $z=1$, evaluates to exactly zero. Also note that for |z| < 1,
$(1-z)=\exp(\ln(1-z))=\exp \left(-{\tfrac {z^{1}}{1}}-{\tfrac {z^{2}}{2}}-{\tfrac {z^{3}}{3}}-\cdots \right).$
The elementary factors,[2] also referred to as primary factors,[3] are functions that combine the properties of zero slope and zero value (see graphic):
$E_{n}(z)={\begin{cases}(1-z)&{\text{if }}n=0,\\(1-z)\exp \left({\frac {z^{1}}{1}}+{\frac {z^{2}}{2}}+\cdots +{\frac {z^{n}}{n}}\right)&{\text{otherwise}}.\end{cases}}$
For |z| < 1 and $n>0$, one may express it as $ E_{n}(z)=\exp \left(-{\tfrac {z^{n+1}}{n+1}}\sum _{k=0}^{\infty }{\tfrac {z^{k}}{1+k/(n+1)}}\right)$ and one can read off how those properties are enforced.
The utility of the elementary factors En(z) lies in the following lemma:[2]
Lemma (15.8, Rudin). For |z| ≤ 1 and $n\in \mathbb {N} $,
$\vert 1-E_{n}(z)\vert \leq \vert z\vert ^{n+1}.$
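The lemma is easy to spot-check numerically; the sample points below are arbitrary choices in the closed unit disk:

```python
import cmath

def E(n, z):
    """Weierstrass elementary factor E_n(z)."""
    if n == 0:
        return 1 - z
    return (1 - z) * cmath.exp(sum(z**k / k for k in range(1, n + 1)))

# arbitrary sample points with |z| <= 1
samples = [0.3, -0.7, 0.5 + 0.5j, -0.9j, cmath.exp(2j)]
```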
The two forms of the theorem
Existence of entire function with specified zeroes
Let $\{a_{n}\}$ be a sequence of non-zero complex numbers such that $|a_{n}|\to \infty $. If $\{p_{n}\}$ is any sequence of nonnegative integers such that for all $r>0$,
$\sum _{n=1}^{\infty }\left(r/|a_{n}|\right)^{1+p_{n}}<\infty ,$
then the function
$f(z)=\prod _{n=1}^{\infty }E_{p_{n}}(z/a_{n})$
is entire with zeros only at points $a_{n}$. If a number $z_{0}$ occurs in the sequence $\{a_{n}\}$ exactly m times, then function f has a zero at $z=z_{0}$ of multiplicity m.
• The sequence $\{p_{n}\}$ in the statement of the theorem always exists. For example, we could always take $p_{n}=n$ and obtain convergence. Such a sequence is not unique: changing it at a finite number of positions, or taking another sequence p′n ≥ pn, will not break the convergence.
• The theorem generalizes to the following: sequences in open subsets (and hence regions) of the Riemann sphere have associated functions that are holomorphic in those subsets and have zeroes at the points of the sequence.[2]
• Also the case given by the fundamental theorem of algebra is incorporated here. If the sequence $\{a_{n}\}$ is finite then we can take $p_{n}=0$ and obtain: $f(z)=c\,\prod _{n}(z-a_{n})$.
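A minimal sketch of the construction, using the illustrative choice $a_{n}=n^{2}$ (here $\sum 1/|a_{n}|$ already converges, so $p_{n}=0$ suffices and $E_{0}(z/a_{n})=1-z/n^{2}$) with a finite truncation of the product:

```python
# Truncated Weierstrass product with prescribed zeros at a_n = n^2.
def f(z, terms=500):
    prod = 1.0
    for n in range(1, terms + 1):
        prod *= 1 - z / n**2
    return prod

# f vanishes at 1, 4, 9, ... and is non-zero elsewhere on the truncated range
```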
The Weierstrass factorization theorem
Let ƒ be an entire function, and let $\{a_{n}\}$ be the non-zero zeros of ƒ repeated according to multiplicity; suppose also that ƒ has a zero at z = 0 of order m ≥ 0.[lower-alpha 1] Then there exists an entire function g and a sequence of integers $\{p_{n}\}$ such that
$f(z)=z^{m}e^{g(z)}\prod _{n=1}^{\infty }E_{p_{n}}\!\!\left({\frac {z}{a_{n}}}\right).$[4]
Examples of factorization
The trigonometric functions sine and cosine have the factorizations
$\sin \pi z=\pi z\prod _{n\neq 0}\left(1-{\frac {z}{n}}\right)e^{z/n}=\pi z\prod _{n=1}^{\infty }\left(1-\left({\frac {z}{n}}\right)^{2}\right)$
$\cos \pi z=\prod _{q\in \mathbb {Z} ,\,q\;{\text{odd}}}\left(1-{\frac {2z}{q}}\right)e^{2z/q}=\prod _{n=0}^{\infty }\left(1-\left({\frac {z}{n+{\tfrac {1}{2}}}}\right)^{2}\right)$
while the gamma function $\Gamma $ has factorization
${\frac {1}{\Gamma (z)}}=e^{\gamma z}z\prod _{n=1}^{\infty }\left(1+{\frac {z}{n}}\right)e^{-z/n},$
where $\gamma $ is the Euler–Mascheroni constant. The cosine identity can be seen as special case of
${\frac {1}{\Gamma (s-z)\Gamma (s+z)}}={\frac {1}{\Gamma (s)^{2}}}\prod _{n=0}^{\infty }\left(1-\left({\frac {z}{n+s}}\right)^{2}\right)$
for $s={\tfrac {1}{2}}$.
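The sine product can be checked numerically with a finite truncation; the truncation level and sample points are illustrative, and the truncation error decays only like $1/N$:

```python
import math

def sin_product(z, terms=20000):
    """Truncation of sin(pi z) = pi z * prod_{n>=1} (1 - (z/n)^2)."""
    prod = math.pi * z
    for n in range(1, terms + 1):
        prod *= 1 - (z / n)**2
    return prod
```

At $z = \tfrac12$ the same truncated product reproduces the Wallis product for $2/\pi$.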
Hadamard factorization theorem
Main article: Hadamard factorization theorem
Define the Hadamard canonical factors
$E_{n}(z):=(1-z)\prod _{k=1}^{n}e^{z^{k}/k}$
Entire functions of finite order have Hadamard's canonical representation[4]:
$f(z)=z^{m}e^{P(z)}\prod _{k=1}^{\infty }E_{p}(z/a_{k})$
where $a_{k}$ are those roots of $f$ that are not zero ($a_{k}\neq 0$), $m$ is the order of the zero of $f$ at $z=0$ (the case $m=0$ being taken to mean $f(0)\neq 0$), $P$ a polynomial (whose degree we shall call $q$), and $p$ is the smallest non-negative integer such that the series
$\sum _{n=1}^{\infty }{\frac {1}{|a_{n}|^{p+1}}}$
converges. The non-negative integer $g=\max\{p,q\}$ is called the genus of the entire function $f$. If $\rho $ denotes the order of $f$, then in this notation,
$g\leq \rho \leq g+1$
In other words: If the order $\rho $ is not an integer, then $g=[\rho ]$ is the integer part of $\rho $. If the order is a positive integer, then there are two possibilities: $g=\rho -1$ or $g=\rho $.
For example, $\sin $, $\cos $ and $\exp $ are entire functions of genus $g=\rho =1$.
See also
• Mittag-Leffler's theorem
• Wallis product, which can be derived from this theorem applied to the sine function
• Blaschke product
Notes
1. A zero of order m = 0 at z = 0 is taken to mean ƒ(0) ≠ 0 — that is, $f$ does not have a zero at $0$.
1. Knopp, K. (1996), "Weierstrass's Factor-Theorem", Theory of Functions, Part II, New York: Dover, pp. 1–7.
2. Rudin, W. (1987), Real and Complex Analysis (3rd ed.), Boston: McGraw Hill, pp. 301–304, ISBN 0-07-054234-1, OCLC 13093736
3. Boas, R. P. (1954), Entire Functions, New York: Academic Press Inc., ISBN 0-8218-4505-5, OCLC 6487790, chapter 2.
4. Conway, J. B. (1995), Functions of One Complex Variable I (2nd ed.), Springer, ISBN 0-387-90328-3
External links
• "Weierstrass theorem", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Visualization of the Weierstrass factorization of the sine function due to Euler at the Wayback Machine (archived 30 November 2018)
Weierstrass product inequality
In mathematics, the Weierstrass product inequality states that for any real numbers 0 ≤ x1, ..., xn ≤ 1 we have
$(1-x_{1})(1-x_{2})(1-x_{3})\cdots (1-x_{n})\geq 1-S_{n},$
$(1+x_{1})(1+x_{2})(1+x_{3})\cdots (1+x_{n})\geq 1+S_{n},$
where $S_{n}=x_{1}+x_{2}+x_{3}+\cdots +x_{n}.$
The inequality is named after the German mathematician Karl Weierstrass.
Proof
The inequality with the subtractions can be proven easily via mathematical induction. The one with the additions is proven identically. We can choose $n=1$ as the base case and see that for this value of $n$ we get
$1-x_{1}\geq 1-x_{1}$
which is indeed true. Assuming now that the inequality holds for all natural numbers up to $n>1$, for $n+1$ we have:
$\prod _{i=1}^{n+1}(1-x_{i})\,\,=(1-x_{n+1})\prod _{i=1}^{n}(1-x_{i})$
$\geq (1-x_{n+1})\left(1-\sum _{i=1}^{n}x_{i}\right)$
$=1-\sum _{i=1}^{n}x_{i}-x_{n+1}+x_{n+1}\sum _{i=1}^{n}x_{i}$
$=1-\sum _{i=1}^{n+1}x_{i}+x_{n+1}\sum _{i=1}^{n}x_{i}$
$\geq 1-\sum _{i=1}^{n+1}x_{i}$
which concludes the proof.
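Both inequalities are easy to spot-check on random inputs; the sample sizes and seed below are arbitrary choices:

```python
import random

def weierstrass_bounds(xs):
    """Return prod(1-x_i), prod(1+x_i), and S_n for x_i in [0, 1]."""
    S = sum(xs)
    pm = 1.0
    pp = 1.0
    for x in xs:
        pm *= 1 - x
        pp *= 1 + x
    return pm, pp, S

random.seed(0)
cases = [[random.random() for _ in range(random.randint(1, 8))] for _ in range(200)]
```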
References
• Honsberger, Ross (1991). More mathematical morsels. [Washington, D.C.]: Mathematical Association of America. ISBN 978-1-4704-5838-6.
Weierstrass–Enneper parameterization
In mathematics, the Weierstrass–Enneper parameterization of minimal surfaces is a classical piece of differential geometry.
Alfred Enneper and Karl Weierstrass studied minimal surfaces as far back as 1863.
Let $f$ and $g$ be functions on either the entire complex plane or the unit disk, where $g$ is meromorphic and $f$ is analytic, such that wherever $g$ has a pole of order $m$, $f$ has a zero of order $2m$ (or equivalently, such that the product $fg^{2}$ is holomorphic), and let $c_{1},c_{2},c_{3}$ be constants. Then the surface with coordinates $(x_{1},x_{2},x_{3})$ is minimal, where the $x_{k}$ are defined using the real part of a complex integral, as follows:
${\begin{aligned}x_{k}(\zeta )&{}=\Re \left\{\int _{0}^{\zeta }\varphi _{k}(z)\,dz\right\}+c_{k},\qquad k=1,2,3\\\varphi _{1}&{}=f(1-g^{2})/2\\\varphi _{2}&{}=\mathbf {i} f(1+g^{2})/2\\\varphi _{3}&{}=fg\end{aligned}}$
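As a sketch, the coordinate integrals can be evaluated numerically for Enneper's surface ($f(z)=1$, $g(z)=z$, $c_{k}=0$); the straight integration path and step count are illustrative choices, and the closed-form antiderivatives provide a cross-check:

```python
def enneper_point(zeta, steps=2000):
    """Integrate phi_k from 0 to zeta (midpoint rule along a straight path)
    for f(z) = 1, g(z) = z, and return the real parts (x1, x2, x3)."""
    phis = (lambda z: (1 - z * z) / 2,       # phi_1 = f (1 - g^2) / 2
            lambda z: 1j * (1 + z * z) / 2,  # phi_2 = i f (1 + g^2) / 2
            lambda z: z)                     # phi_3 = f g
    dz = zeta / steps
    coords = []
    for phi in phis:
        total = sum(phi((i + 0.5) * dz) for i in range(steps)) * dz
        coords.append(total.real)
    return tuple(coords)
```

For this choice the antiderivatives are elementary: $x_1=\operatorname{Re}(\zeta-\zeta^3/3)/2$, $x_2=\operatorname{Re}(i(\zeta+\zeta^3/3))/2$, $x_3=\operatorname{Re}(\zeta^2)/2$.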
The converse is also true: every nonplanar minimal surface defined over a simply connected domain can be given a parametrization of this type.[1]
For example, Enneper's surface has f(z) = 1, g(z) = z^m.
Parametric surface of complex variables
The Weierstrass–Enneper model defines a minimal surface $X$ in $\mathbb {R} ^{3}$ parametrized over the complex plane $\mathbb {C} $. Let $\omega =u+vi$ (so the complex plane serves as the $uv$ parameter space); then the Jacobian matrix of the surface can be written as a column of complex entries:
$\mathbf {J} ={\begin{bmatrix}\left(1-g^{2}(\omega )\right)f(\omega )\\i\left(1+g^{2}(\omega )\right)f(\omega )\\2g(\omega )f(\omega )\end{bmatrix}}$
where $f(\omega )$ and $g(\omega )$ are holomorphic functions of $\omega $.
The Jacobian $\mathbf {J} $ represents the two orthogonal tangent vectors of the surface:[2]
$\mathbf {X_{u}} ={\begin{bmatrix}\operatorname {Re} \mathbf {J} _{1}\\\operatorname {Re} \mathbf {J} _{2}\\\operatorname {Re} \mathbf {J} _{3}\end{bmatrix}}\;\;\;\;\mathbf {X_{v}} ={\begin{bmatrix}-\operatorname {Im} \mathbf {J} _{1}\\-\operatorname {Im} \mathbf {J} _{2}\\-\operatorname {Im} \mathbf {J} _{3}\end{bmatrix}}$
The surface normal is given by
$\mathbf {\hat {n}} ={\frac {\mathbf {X_{u}} \times \mathbf {X_{v}} }{|\mathbf {X_{u}} \times \mathbf {X_{v}} |}}={\frac {1}{|g|^{2}+1}}{\begin{bmatrix}2\operatorname {Re} g\\2\operatorname {Im} g\\|g|^{2}-1\end{bmatrix}}$
The Jacobian $\mathbf {J} $ leads to a number of important properties: $\mathbf {X_{u}} \cdot \mathbf {X_{v}} =0$, $\mathbf {X_{u}} ^{2}=\operatorname {Re} (\mathbf {J} ^{2})$, $\mathbf {X_{v}} ^{2}=\operatorname {Im} (\mathbf {J} ^{2})$, $\mathbf {X_{uu}} +\mathbf {X_{vv}} =0$. The proofs can be found in Sharma's essay: The Weierstrass representation always gives a minimal surface.[3] The derivatives can be used to construct the first fundamental form matrix:
${\begin{bmatrix}\mathbf {X_{u}} \cdot \mathbf {X_{u}} &\;\;\mathbf {X_{u}} \cdot \mathbf {X_{v}} \\\mathbf {X_{v}} \cdot \mathbf {X_{u}} &\;\;\mathbf {X_{v}} \cdot \mathbf {X_{v}} \end{bmatrix}}={\begin{bmatrix}1&0\\0&1\end{bmatrix}}$
and the second fundamental form matrix
${\begin{bmatrix}\mathbf {X_{uu}} \cdot \mathbf {\hat {n}} &\;\;\mathbf {X_{uv}} \cdot \mathbf {\hat {n}} \\\mathbf {X_{vu}} \cdot \mathbf {\hat {n}} &\;\;\mathbf {X_{vv}} \cdot \mathbf {\hat {n}} \end{bmatrix}}$
Finally, a point $\omega _{t}$ on the complex plane maps to a point $\mathbf {X} $ on the minimal surface in $\mathbb {R} ^{3}$ by
$\mathbf {X} ={\begin{bmatrix}\operatorname {Re} \int _{\omega _{0}}^{\omega _{t}}\mathbf {J} _{1}d\omega \\\operatorname {Re} \int _{\omega _{0}}^{\omega _{t}}\mathbf {J} _{2}d\omega \\\operatorname {Re} \int _{\omega _{0}}^{\omega _{t}}\mathbf {J} _{3}d\omega \end{bmatrix}}$
where $\omega _{0}=0$ for all the minimal surfaces discussed here except Costa's minimal surface, where $\omega _{0}=(1+i)/2$.
Embedded minimal surfaces and examples
The classical examples of embedded complete minimal surfaces in $\mathbb {R} ^{3}$ with finite topology include the plane, the catenoid, the helicoid, and the Costa's minimal surface. Costa's surface involves Weierstrass's elliptic function $\wp $:[4]
$g(\omega )={\frac {A}{\wp '(\omega )}}$
$f(\omega )=\wp (\omega )$
where $A$ is a constant.[5]
Helicatenoid
Choosing the functions $f(\omega )=e^{-i\alpha }e^{\omega /A}$ and $g(\omega )=e^{-\omega /A}$, a one parameter family of minimal surfaces is obtained.
$\varphi _{1}=e^{-i\alpha }\sinh \left({\frac {\omega }{A}}\right)$
$\varphi _{2}=ie^{-i\alpha }\cosh \left({\frac {\omega }{A}}\right)$
$\varphi _{3}=e^{-i\alpha }$
$\mathbf {X} (\omega )=\operatorname {Re} {\begin{bmatrix}e^{-i\alpha }A\cosh \left({\frac {\omega }{A}}\right)\\ie^{-i\alpha }A\sinh \left({\frac {\omega }{A}}\right)\\e^{-i\alpha }\omega \\\end{bmatrix}}=\cos(\alpha ){\begin{bmatrix}A\cosh \left({\frac {\operatorname {Re} (\omega )}{A}}\right)\cos \left({\frac {\operatorname {Im} (\omega )}{A}}\right)\\-A\cosh \left({\frac {\operatorname {Re} (\omega )}{A}}\right)\sin \left({\frac {\operatorname {Im} (\omega )}{A}}\right)\\\operatorname {Re} (\omega )\\\end{bmatrix}}+\sin(\alpha ){\begin{bmatrix}A\sinh \left({\frac {\operatorname {Re} (\omega )}{A}}\right)\sin \left({\frac {\operatorname {Im} (\omega )}{A}}\right)\\A\sinh \left({\frac {\operatorname {Re} (\omega )}{A}}\right)\cos \left({\frac {\operatorname {Im} (\omega )}{A}}\right)\\\operatorname {Im} (\omega )\\\end{bmatrix}}$
Choosing the parameters of the surface as $\omega =s+i(A\phi )$:
$\mathbf {X} (s,\phi )=\cos(\alpha ){\begin{bmatrix}A\cosh \left({\frac {s}{A}}\right)\cos \left(\phi \right)\\-A\cosh \left({\frac {s}{A}}\right)\sin \left(\phi \right)\\s\\\end{bmatrix}}+\sin(\alpha ){\begin{bmatrix}A\sinh \left({\frac {s}{A}}\right)\sin \left(\phi \right)\\A\sinh \left({\frac {s}{A}}\right)\cos \left(\phi \right)\\A\phi \\\end{bmatrix}}$
At the extremes, the surface is a catenoid $(\alpha =0)$ or a helicoid $(\alpha =\pi /2)$. Otherwise, $\alpha $ represents a mixing angle. The resulting surface, with domain chosen to prevent self-intersection, is a catenary rotated around the $\mathbf {X} _{3}$ axis in a helical fashion.
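A quick numerical sanity check (with $A=1$, an illustrative choice) that the $\alpha =0$ member of the family is the catenoid $x_{1}^{2}+x_{2}^{2}=\cosh ^{2}x_{3}$:

```python
import math

def X(s, phi, alpha, A=1.0):
    """Point on the helicatenoid family, following the parametrization above."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    ch, sh = math.cosh(s / A), math.sinh(s / A)
    x1 = ca * A * ch * math.cos(phi) + sa * A * sh * math.sin(phi)
    x2 = -ca * A * ch * math.sin(phi) + sa * A * sh * math.cos(phi)
    x3 = ca * s + sa * A * phi
    return x1, x2, x3
```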
Lines of curvature
One can rewrite each element of second fundamental matrix as a function of $f$ and $g$, for example
$\mathbf {X_{uu}} \cdot \mathbf {\hat {n}} ={\frac {1}{|g|^{2}+1}}{\begin{bmatrix}\operatorname {Re} \left((1-g^{2})f'-2gfg'\right)\\\operatorname {Re} \left((1+g^{2})f'i+2gfg'i\right)\\\operatorname {Re} \left(2gf'+2fg'\right)\\\end{bmatrix}}\cdot {\begin{bmatrix}\operatorname {Re} \left(2g\right)\\\operatorname {Re} \left(-2gi\right)\\\operatorname {Re} \left(|g|^{2}-1\right)\\\end{bmatrix}}=-2\operatorname {Re} (fg')$
And consequently the second fundamental form matrix can be simplified as
${\begin{bmatrix}-\operatorname {Re} fg'&\;\;\operatorname {Im} fg'\\\operatorname {Im} fg'&\;\;\operatorname {Re} fg'\end{bmatrix}}$
One of its eigenvectors is
${\overline {\sqrt {fg'}}}$
which represents the principal direction in the complex domain.[6] Therefore, the two principal directions in the $uv$ space turn out to be
$\phi =-{\frac {1}{2}}\operatorname {Arg} (fg')\pm k\pi /2$
See also
• Associate family
• Bryant surface, found by an analogous parameterization in hyperbolic space
References
1. Dierkes, U.; Hildebrandt, S.; Küster, A.; Wohlrab, O. (1992). Minimal surfaces. Vol. I. Springer. p. 108. ISBN 3-540-53169-6.
2. Andersson, S.; Hyde, S. T.; Larsson, K.; Lidin, S. (1988). "Minimal Surfaces and Structures: From Inorganic and Metal Crystals to Cell Membranes and Biopolymers". Chem. Rev. 88 (1): 221–242. doi:10.1021/cr00083a011.
3. Sharma, R. (2012). "The Weierstrass Representation always gives a minimal surface". arXiv:1208.5689 [math.DG].
4. Lawden, D. F. (2011). Elliptic Functions and Applications. Applied Mathematical Sciences. Vol. 80. Berlin: Springer. ISBN 978-1-4419-3090-3.
5. Abbena, E.; Salamon, S.; Gray, A. (2006). "Minimal Surfaces via Complex Variables". Modern Differential Geometry of Curves and Surfaces with Mathematica. Boca Raton: CRC Press. pp. 719–766. ISBN 1-58488-448-7.
6. Hua, H.; Jia, T. (2018). "Wire cut of double-sided minimal surfaces". The Visual Computer. 34 (6–8): 985–995. doi:10.1007/s00371-018-1548-0. S2CID 13681681.
Weierstrass ring
In mathematics, a Weierstrass ring, named by Nagata[1] after Karl Weierstrass, is a commutative local ring that is Henselian, pseudo-geometric, and such that any quotient ring by a prime ideal is a finite extension of a regular local ring.
Examples
• The Weierstrass preparation theorem can be used to show that the ring of convergent power series over the complex numbers in a finite number of variables is a Weierstrass ring. The same is true if the complex numbers are replaced by a perfect field with a valuation.
• Every ring that is a finitely-generated module over a Weierstrass ring is also a Weierstrass ring.
References
1. Nagata (1975, section 45)
Bibliography
• Danilov, V. I. (2001) [1994], "Weierstrass ring", Encyclopedia of Mathematics, EMS Press
• Nagata, Masayoshi (1975) [1962], Local rings, Interscience Tracts in Pure and Applied Mathematics, vol. 13, Interscience Publishers, pp. xiii+234, ISBN 978-0-88275-228-0, MR 0155856
Tangent half-angle substitution
In integral calculus, the tangent half-angle substitution is a change of variables used for evaluating integrals, which converts a rational function of trigonometric functions of $ x$ into an ordinary rational function of $ t$ by setting $ t=\tan {\tfrac {x}{2}}$. This is the one-dimensional stereographic projection of the unit circle parametrized by angle measure onto the real line. The general[1] transformation formula is:
$\int f(\sin x,\cos x)\,dx=\int f{\left({\frac {2t}{1+t^{2}}},{\frac {1-t^{2}}{1+t^{2}}}\right)}{\frac {2\,dt}{1+t^{2}}}.$
The tangent of half an angle is important in spherical trigonometry and was sometimes known in the 17th century as the half tangent or semi-tangent.[2] Leonhard Euler used it to evaluate the integral $ \int dx/(a+b\cos x)$ in his 1768 integral calculus textbook,[3] and Adrien-Marie Legendre described the general method in 1817.[4]
The substitution is described in most integral calculus textbooks since the late 19th century, usually without any special name.[5] It is known in Russia as the universal trigonometric substitution,[6] and also known by variant names such as half-tangent substitution or half-angle substitution. It is sometimes misattributed as the Weierstrass substitution.[7] Michael Spivak called it the "world's sneakiest substitution".[8]
The substitution
Introducing a new variable $ t=\tan {\tfrac {x}{2}},$ sines and cosines can be expressed as rational functions of $t,$ and $dx$ can be expressed as the product of $dt$ and a rational function of $t,$ as follows:
$\sin x={\frac {2t}{1+t^{2}}},\qquad \cos x={\frac {1-t^{2}}{1+t^{2}}},\qquad {\text{and}}\qquad dx={\frac {2}{1+t^{2}}}\,dt.$
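These three identities can be verified numerically; the sample angles are arbitrary:

```python
import math

def sin_cos_dx(t):
    """sin x, cos x, and dx/dt expressed in t = tan(x/2)."""
    denom = 1 + t * t
    return 2 * t / denom, (1 - t * t) / denom, 2 / denom
```

The third component is the derivative of $x = 2\arctan t$, i.e. the factor by which $dt$ is scaled to recover $dx$.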
Derivation
Using the double-angle formulas, introducing denominators equal to one thanks to the Pythagorean theorem, and then dividing numerators and denominators by $ \cos ^{2}{\tfrac {x}{2}},$ one gets
${\begin{aligned}\sin x&={\frac {2\sin {\tfrac {x}{2}}\,\cos {\tfrac {x}{2}}}{\cos ^{2}{\tfrac {x}{2}}+\sin ^{2}{\tfrac {x}{2}}}}={\frac {2\tan {\tfrac {x}{2}}}{1+\tan ^{2}{\tfrac {x}{2}}}}={\frac {2t}{1+t^{2}}},\\[18mu]\cos x&={\frac {\cos ^{2}{\tfrac {x}{2}}-\sin ^{2}{\tfrac {x}{2}}}{\cos ^{2}{\tfrac {x}{2}}+\sin ^{2}{\tfrac {x}{2}}}}={\frac {1-\tan ^{2}{\tfrac {x}{2}}}{1+\tan ^{2}{\tfrac {x}{2}}}}={\frac {1-t^{2}}{1+t^{2}}}.\end{aligned}}$
Finally, since $ t=\tan {\tfrac {x}{2}}$, differentiation rules imply
$dt={\tfrac {1}{2}}\left(1+\tan ^{2}{\tfrac {x}{2}}\right)dx={\frac {1+t^{2}}{2}}dx,$
and thus
$dx={\frac {2}{1+t^{2}}}dt.$
Examples
Antiderivative of cosecant
${\begin{aligned}\int \csc x\,dx&=\int {\frac {dx}{\sin x}}\\[6pt]&=\int \left({\frac {1+t^{2}}{2t}}\right)\left({\frac {2}{1+t^{2}}}\right)dt&&t=\tan {\tfrac {x}{2}}\\[6pt]&=\int {\frac {dt}{t}}\\[6pt]&=\ln |t|+C\\[6pt]&=\ln \left|\tan {\tfrac {x}{2}}\right|+C.\end{aligned}}$
We can confirm the above result using a standard method of evaluating the cosecant integral by multiplying the numerator and denominator by $ \csc x-\cot x$ and performing the substitution $ u=\csc x-\cot x,$ $ du=\left(-\csc x\cot x+\csc ^{2}x\right)\,dx$.
${\begin{aligned}\int \csc x\,dx&=\int {\frac {\csc x(\csc x-\cot x)}{\csc x-\cot x}}\,dx\\[6pt]&=\int {\frac {\left(\csc ^{2}x-\csc x\cot x\right)\,dx}{\csc x-\cot x}}\qquad u=\csc x-\cot x\\[6pt]&=\int {\frac {du}{u}}\\&=\ln |u|+C\\[6pt]&=\ln |\csc x-\cot x|+C.\end{aligned}}$
These two answers are the same because $ \csc x-\cot x=\tan {\tfrac {x}{2}}\colon $
${\begin{aligned}\csc x-\cot x&={\frac {1}{\sin x}}-{\frac {\cos x}{\sin x}}\\[6pt]&={\frac {1+t^{2}}{2t}}-{\frac {1-t^{2}}{1+t^{2}}}{\frac {1+t^{2}}{2t}}\qquad \qquad t=\tan {\tfrac {x}{2}}\\[6pt]&={\frac {2t^{2}}{2t}}=t\\[6pt]&=\tan {\tfrac {x}{2}}\end{aligned}}$
The secant integral may be evaluated in a similar manner.
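The antiderivative found above can be checked numerically: a central difference of $\ln \left|\tan {\tfrac {x}{2}}\right|$ should match $\csc x$ (sample points and step size are illustrative):

```python
import math

def F(x):
    """Candidate antiderivative ln|tan(x/2)| of csc x."""
    return math.log(abs(math.tan(x / 2)))

def csc(x):
    return 1 / math.sin(x)
```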
A definite integral
${\begin{aligned}\int _{0}^{2\pi }{\frac {dx}{2+\cos x}}&=\int _{0}^{\pi }{\frac {dx}{2+\cos x}}+\int _{\pi }^{2\pi }{\frac {dx}{2+\cos x}}\\[6pt]&=\int _{0}^{\infty }{\frac {2\,dt}{3+t^{2}}}+\int _{-\infty }^{0}{\frac {2\,dt}{3+t^{2}}}&t&=\tan {\tfrac {x}{2}}\\[6pt]&=\int _{-\infty }^{\infty }{\frac {2\,dt}{3+t^{2}}}\\[6pt]&={\frac {2}{\sqrt {3}}}\int _{-\infty }^{\infty }{\frac {du}{1+u^{2}}}&t&=u{\sqrt {3}}\\[6pt]&={\frac {2\pi }{\sqrt {3}}}.\end{aligned}}$
In the first line, one cannot simply substitute $ t=0$ for both limits of integration. The singularity (in this case, a vertical asymptote) of $ t=\tan {\tfrac {x}{2}}$ at $ x=\pi $ must be taken into account. Alternatively, first evaluate the indefinite integral, then apply the boundary values.
${\begin{aligned}\int {\frac {dx}{2+\cos x}}&=\int {\frac {1}{2+{\frac {1-t^{2}}{1+t^{2}}}}}{\frac {2\,dt}{t^{2}+1}}&&t=\tan {\tfrac {x}{2}}\\[6pt]&=\int {\frac {2\,dt}{2(t^{2}+1)+(1-t^{2})}}=\int {\frac {2\,dt}{t^{2}+3}}\\[6pt]&={\frac {2}{3}}\int {\frac {dt}{{\bigl (}t{\big /}{\sqrt {3}}{\bigr )}^{2}+1}}&&u=t{\big /}{\sqrt {3}}\\[6pt]&={\frac {2}{\sqrt {3}}}\int {\frac {du}{u^{2}+1}}&&\tan \theta =u\\[6pt]&={\frac {2}{\sqrt {3}}}\int \cos ^{2}\theta \sec ^{2}\theta \,d\theta ={\frac {2}{\sqrt {3}}}\int d\theta \\[6pt]&={\frac {2}{\sqrt {3}}}\theta +C={\frac {2}{\sqrt {3}}}\arctan \left({\frac {t}{\sqrt {3}}}\right)+C\\[6pt]&={\frac {2}{\sqrt {3}}}\arctan \left({\frac {\tan {\tfrac {x}{2}}}{\sqrt {3}}}\right)+C.\end{aligned}}$
By symmetry,
${\begin{aligned}\int _{0}^{2\pi }{\frac {dx}{2+\cos x}}&=2\int _{0}^{\pi }{\frac {dx}{2+\cos x}}=\lim _{b\rightarrow \pi }{\frac {4}{\sqrt {3}}}\arctan \left({\frac {\tan {\tfrac {x}{2}}}{\sqrt {3}}}\right){\Biggl |}_{0}^{b}\\[6pt]&={\frac {4}{\sqrt {3}}}{\Biggl [}\lim _{b\rightarrow \pi }\arctan \left({\frac {\tan {\tfrac {b}{2}}}{\sqrt {3}}}\right)-\arctan(0){\Biggl ]}={\frac {4}{\sqrt {3}}}\left({\frac {\pi }{2}}-0\right)={\frac {2\pi }{\sqrt {3}}},\end{aligned}}$
which is the same as the previous answer.
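The value $2\pi /{\sqrt {3}}$ can also be confirmed by direct numerical integration. The following sketch uses only the standard library with a composite Simpson's rule (the helper names are ours, not part of any standard API):

```python
import math

def simpson(f, a, b, n=10_000):
    # Composite Simpson's rule on [a, b]; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return s * h / 3

numeric = simpson(lambda x: 1 / (2 + math.cos(x)), 0, 2 * math.pi)
exact = 2 * math.pi / math.sqrt(3)
print(numeric, exact)  # both ≈ 3.6275987...
```

The integrand is smooth and periodic, so Simpson's rule converges very quickly here.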
Third example: both sine and cosine
${\begin{aligned}\int {\frac {dx}{a\cos x+b\sin x+c}}&=\int {\frac {2dt}{a(1-t^{2})+2bt+c(t^{2}+1)}}\\[6pt]&=\int {\frac {2dt}{(c-a)t^{2}+2bt+a+c}}\\[6pt]&={\frac {2}{\sqrt {c^{2}-(a^{2}+b^{2})}}}\arctan \left({\frac {(c-a)\tan {\tfrac {x}{2}}+b}{\sqrt {c^{2}-(a^{2}+b^{2})}}}\right)+C\end{aligned}}$
if $ c^{2}-(a^{2}+b^{2})>0.$
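The antiderivative can be checked numerically on an interval where $\tan {\tfrac {x}{2}}$ is continuous. This is a sketch; the parameter values a = 1, b = 1, c = 3 are arbitrary test values satisfying $c^{2}-(a^{2}+b^{2})>0$:

```python
import math

a, b, c = 1.0, 1.0, 3.0                 # arbitrary values with c^2 > a^2 + b^2
disc = math.sqrt(c**2 - (a**2 + b**2))  # here sqrt(7)

def F(x):
    # Antiderivative given by the formula above
    return 2 / disc * math.atan(((c - a) * math.tan(x / 2) + b) / disc)

def simpson(f, lo, hi, n=10_000):
    # Composite Simpson's rule; n must be even.
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(lo + i * h) for i in range(2, n, 2))
    return s * h / 3

lhs = simpson(lambda x: 1 / (a * math.cos(x) + b * math.sin(x) + c), 0, math.pi / 2)
rhs = F(math.pi / 2) - F(0)
assert abs(lhs - rhs) < 1e-9
```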
Geometry
As x varies, the point (cos x, sin x) winds repeatedly around the unit circle centered at (0, 0). The point
$\left({\frac {1-t^{2}}{1+t^{2}}},{\frac {2t}{1+t^{2}}}\right)$
goes only once around the circle as t goes from −∞ to +∞, and never reaches the point (−1, 0), which is approached as a limit as t approaches ±∞. As t goes from −∞ to −1, the point determined by t goes through the part of the circle in the third quadrant, from (−1, 0) to (0, −1). As t goes from −1 to 0, the point follows the part of the circle in the fourth quadrant from (0, −1) to (1, 0). As t goes from 0 to 1, the point follows the part of the circle in the first quadrant from (1, 0) to (0, 1). Finally, as t goes from 1 to +∞, the point follows the part of the circle in the second quadrant from (0, 1) to (−1, 0).
Here is another geometric point of view. Draw the unit circle, and let P be the point (−1, 0). A line through P (except the vertical line) is determined by its slope. Furthermore, each of the lines (except the vertical line) intersects the unit circle in exactly two points, one of which is P. This determines a function from points on the unit circle to slopes. The trigonometric functions determine a function from angles to points on the unit circle, and by combining these two functions we have a function from angles to slopes.
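The winding behaviour described above is easy to verify numerically. In this sketch (the `point` helper is ours) each sampled parameter value lands on the unit circle, and the quadrants match the description:

```python
import math

def point(t):
    # The rational parametrization ((1 - t^2)/(1 + t^2), 2t/(1 + t^2))
    d = 1 + t * t
    return ((1 - t * t) / d, 2 * t / d)

# Every parameter value lands on the unit circle ...
for t in (-10, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 10):
    x, y = point(t)
    assert abs(x * x + y * y - 1) < 1e-12

# ... and the quadrants match the description: t = -1.5 -> third quadrant,
# t = -0.5 -> fourth, t = 0.5 -> first, t = 1.5 -> second.
assert point(-1.5)[0] < 0 and point(-1.5)[1] < 0
assert point(-0.5)[0] > 0 and point(-0.5)[1] < 0
assert point(0.5)[0] > 0 and point(0.5)[1] > 0
assert point(1.5)[0] < 0 and point(1.5)[1] > 0
```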
Gallery
• (1/2) The tangent half-angle substitution relates an angle to the slope of a line.
• (2/2) The tangent half-angle substitution illustrated as stereographic projection of the circle.
Hyperbolic functions
As with other properties shared between the trigonometric functions and the hyperbolic functions, it is possible to use hyperbolic identities to construct a similar form of the substitution, $ t=\tanh {\tfrac {x}{2}}$:
${\begin{aligned}&\sinh x={\frac {2t}{1-t^{2}}},\qquad \cosh x={\frac {1+t^{2}}{1-t^{2}}},\qquad \tanh x={\frac {2t}{1+t^{2}}},\\[6pt]&\coth x={\frac {1+t^{2}}{2t}},\qquad \operatorname {sech} x={\frac {1-t^{2}}{1+t^{2}}},\qquad \operatorname {csch} x={\frac {1-t^{2}}{2t}},\\[6pt]&{\text{and}}\qquad dx={\frac {2}{1-t^{2}}}\,dt.\end{aligned}}$
Geometrically, this change of variables is a one-dimensional analog of the Poincaré disk projection.
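These hyperbolic identities can be verified numerically for sample values (a standard-library sketch; the finite-difference check of $dx/dt$ is our own addition):

```python
import math

for x in (-2.0, -0.3, 0.7, 1.5):
    t = math.tanh(x / 2)
    assert abs(math.sinh(x) - 2 * t / (1 - t * t)) < 1e-10
    assert abs(math.cosh(x) - (1 + t * t) / (1 - t * t)) < 1e-10
    assert abs(math.tanh(x) - 2 * t / (1 + t * t)) < 1e-10
    # and dx/dt = 2/(1 - t^2): check via a small central difference of t(x)
    h = 1e-6
    dt = (math.tanh((x + h) / 2) - math.tanh((x - h) / 2)) / (2 * h)
    assert abs(1 / dt - 2 / (1 - t * t)) < 1e-4
```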
See also
• Rational curve
• Stereographic projection
• Tangent half-angle formula
• Trigonometric substitution
• Euler substitution
Further reading
• Courant, Richard (1937) [1934]. "1.4.6. Integration of Some Other Classes of Functions §1–3". Differential and Integral Calculus. Vol. 1. Blackie & Son. pp. 234–237.
• Edwards, Joseph (1921). "§1.6.193". A Treatise on the Integral Calculus. Vol. 1. Macmillan. pp. 187–188.
• Hardy, Godfrey Harold (1905). "VI. Transcendental functions". The integration of functions of a single variable. Cambridge. pp. 42–51. Second edition 1916, pp. 52–62
• Hermite, Charles (1873). "Intégration des fonctions transcendentes" [Integration of transcendental functions]. Cours d'analyse de l'école polytechnique (in French). Vol. 1. Gauthier-Villars. pp. 320–380.
Notes and references
1. Other trigonometric functions can be written in terms of sine and cosine.
2. Gunter, Edmund (1673) [1624]. The Works of Edmund Gunter. Francis Eglesfield. p. 73
3. Euler, Leonhard (1768). "§1.1.5.261 Problema 29" (PDF). Institutiones calculi integralis [Foundations of Integral Calculus] (in Latin). Vol. I. Impensis Academiae Imperialis Scientiarum. pp. 148–150. E342, Translation by Ian Bruce.
Also see Lobatto, Rehuel (1832). "19. Note sur l'intégration de la fonction ∂z / (a + b cos z)". Crelle's Journal (in French). 9: 259–260.
4. Legendre, Adrien-Marie (1817). Exercices de calcul intégral [Exercises in integral calculus] (in French). Vol. 2. Courcier. p. 245–246.
5. For example, in chronological order,
• Hermite (1873) https://archive.org/details/coursdanalysedel01hermuoft/page/320/
• Johnson (1883) https://archive.org/details/anelementarytre00johngoog/page/n66
• Picard (1891) https://archive.org/details/traitdanalyse03picagoog/page/77
• Goursat (1904) [1902] https://archive.org/details/courseinmathemat01gouruoft/page/236
• Wilson (1911) https://archive.org/details/advancedcalculus00wils/page/21/
• Edwards (1921) https://archive.org/details/treatiseonintegr01edwauoft/page/188
• Courant (1961) [1934] https://archive.org/details/ost-math-courant-differentialintegralcalculusvoli/page/n250
• Peterson (1950) https://archive.org/details/elementsofcalcul00pete/page/201/
• Apostol (1967) https://archive.org/details/calculus0000apos/page/264/
• Swokowski (1979) https://archive.org/details/calculuswithanal02edswok/page/482
• Larson, Hostetler, & Edwards (1998) https://archive.org/details/calculusofsingle00lars/page/520
• Rogawski (2011) https://books.google.com/books?id=rn4paEb8izYC&pg=PA435
• Salas, Etgen, & Hille (2021) https://books.google.com/books?id=R-1ZEAAAQBAJ&pg=PA409
6. Piskunov, Nikolai (1969). Differential and Integral Calculus. Mir. p. 379
7. James Stewart mentioned Karl Weierstrass when discussing the substitution in his popular calculus textbook, first published in 1987:
Stewart, James (1987). "§7.5 Rationalizing substitutions". Calculus. Brooks/Cole. p. 431. The German mathematician Karl Weierstrauss (1815–1897) noticed that the substitution t = tan(x/2) will convert any rational function of sin x and cos x into an ordinary rational function.
Later authors, citing Stewart, have sometimes referred to this as the Weierstrass substitution, for instance:
Jeffrey, David J.; Rich, Albert D. (1994). "The evaluation of trigonometric integrals avoiding spurious discontinuities". Transactions on Mathematical Software. 20 (1): 124–135. doi:10.1145/174603.174409. S2CID 13891212.
Merlet, Jean-Pierre (2004). "A Note on the History of Trigonometric Functions" (PDF). In Ceccarelli, Marco (ed.). International Symposium on History of Machines and Mechanisms. Kluwer. pp. 195–200. doi:10.1007/1-4020-2204-2_16. ISBN 978-1-4020-2203-6.
Weisstein, Eric W. (2011). "Weierstrass Substitution". MathWorld. Retrieved 2020-04-01.
Stewart provided no evidence for the attribution to Weierstrass. A related substitution appears in Weierstrass’s Mathematical Works, from an 1875 lecture wherein Weierstrass credits Carl Gauss (1818) with the idea of solving an integral of the form $ \int d\psi \,H(\sin \psi ,\cos \psi ){\big /}{\sqrt {G(\sin \psi ,\cos \psi )}}$ by the substitution $ t=-\cot(\psi /2).$
Weierstrass, Karl (1915) [1875]. "8. Bestimmung des Integrals ...". Mathematische Werke von Karl Weierstrass (in German). Vol. 6. Mayer & Müller. pp. 89–99.
8. Spivak, Michael (1967). "Ch. 9, problems 9–10". Calculus. Benjamin. pp. 325–326.
External links
• Weierstrass substitution formulas at PlanetMath
Weierstrass transform
In mathematics, the Weierstrass transform[1] of a function f : R → R, named after Karl Weierstrass, is a "smoothed" version of f(x) obtained by averaging the values of f, weighted with a Gaussian centered at x.
Specifically, it is the function F defined by
$F(x)={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(y)\,e^{-{\frac {(x-y)^{2}}{4}}}\,dy={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(x-y)\,e^{-{\frac {y^{2}}{4}}}\,dy~,$
the convolution of f with the Gaussian function
${\frac {1}{\sqrt {4\pi }}}e^{-x^{2}/4}~.$
The factor 1/√(4π) is chosen so that the Gaussian will have a total integral of 1, with the consequence that constant functions are not changed by the Weierstrass transform.
Instead of F(x) one also writes W[f](x). Note that F(x) need not exist for every real number x, namely when the defining integral fails to converge.
The Weierstrass transform is intimately related to the heat equation (or, equivalently, the diffusion equation with constant diffusion coefficient). If the function f describes the initial temperature at each point of an infinitely long rod that has constant thermal conductivity equal to 1, then the temperature distribution of the rod t = 1 time units later will be given by the function F. By using values of t different from 1, we can define the generalized Weierstrass transform of f.
The generalized Weierstrass transform provides a means to approximate a given integrable function f arbitrarily well with analytic functions.
Names
Weierstrass used this transform in his original proof of the Weierstrass approximation theorem. It is also known as the Gauss transform or Gauss–Weierstrass transform after Carl Friedrich Gauss and as the Hille transform after Einar Carl Hille who studied it extensively. The generalization Wt mentioned below is known in signal analysis as a Gaussian filter and in image processing (when implemented on R2) as a Gaussian blur.
Transforms of some important functions
As mentioned above, every constant function is its own Weierstrass transform. The Weierstrass transform of any polynomial is a polynomial of the same degree with the same leading coefficient (the asymptotic growth is unchanged). Indeed, if $H_{n}$ denotes the (physicists') Hermite polynomial of degree n, then the Weierstrass transform of $H_{n}(x/2)$ is simply $x^{n}$. This can be shown by exploiting the fact that the generating function for the Hermite polynomials is closely related to the Gaussian kernel used in the definition of the Weierstrass transform.
The Weierstrass transform of the function $e^{ax}$ (where a is an arbitrary constant) is $e^{a^{2}}e^{ax}$. The function $e^{ax}$ is thus an eigenfunction of the Weierstrass transform, with eigenvalue $e^{a^{2}}$. (This is, in fact, more generally true for all convolution transforms.)
Setting $a=bi$, where i is the imaginary unit, and applying Euler's identity, one sees that the Weierstrass transform of the function $\cos(bx)$ is $e^{-b^{2}}\cos(bx)$ and the Weierstrass transform of the function $\sin(bx)$ is $e^{-b^{2}}\sin(bx)$.
The Weierstrass transform of the function $e^{ax^{2}}$ is
${\frac {1}{\sqrt {1-4a}}}e^{\frac {ax^{2}}{1-4a}}$ if a < 1/4 and undefined if a ≥ 1/4.
In particular, by choosing $a$ negative, it is evident that the Weierstrass transform of a Gaussian function is again a Gaussian function, but a "wider" one.
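These closed forms can be checked by approximating the defining convolution integral directly. The sketch below (the `weierstrass` helper is ours, stdlib only) truncates the integral and applies Simpson's rule:

```python
import math

def weierstrass(f, x, half_width=30.0, n=40_000):
    # Simpson approximation of (1/sqrt(4*pi)) * integral of f(x - y) e^{-y^2/4} dy,
    # truncated to |y| <= half_width (the Gaussian tail is negligible there).
    h = 2 * half_width / n
    total = 0.0
    for i in range(n + 1):
        y = -half_width + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * f(x - y) * math.exp(-y * y / 4)
    return total * h / (3 * math.sqrt(4 * math.pi))

b, x = 1.3, 0.7
# W[cos(bx)] = e^{-b^2} cos(bx)
assert abs(weierstrass(lambda u: math.cos(b * u), x)
           - math.exp(-b * b) * math.cos(b * x)) < 1e-6

a = -0.5  # a < 1/4, so the Gaussian closed form applies
expected = math.exp(a * x * x / (1 - 4 * a)) / math.sqrt(1 - 4 * a)
assert abs(weierstrass(lambda u: math.exp(a * u * u), x) - expected) < 1e-6
```

The same helper also maps constant functions to themselves, reflecting the normalization of the kernel.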
General properties
The Weierstrass transform assigns to each function f a new function F; this assignment is linear. It is also translation-invariant, meaning that the transform of the function f(x + a) is F(x + a). Both of these facts are more generally true for any integral transform defined via convolution.
If the transform F(x) exists for the real numbers x = a and x = b, then it also exists for all real values in between and forms an analytic function there; moreover, F(x) will exist for all complex values of x with a ≤ Re(x) ≤ b and forms a holomorphic function on that strip of the complex plane. This is the formal statement of the "smoothness" of F mentioned above.
If f is integrable over the whole real axis (i.e. f ∈ L1(R)), then so is its Weierstrass transform F, and if furthermore f(x) ≥ 0 for all x, then also F(x) ≥ 0 for all x and the integrals of f and F are equal. This expresses the physical fact that the total thermal energy or heat is conserved by the heat equation, or that the total amount of diffusing material is conserved by the diffusion equation.
Using the above, one can show that for 1 ≤ p ≤ ∞ and f ∈ Lp(R), we have F ∈ Lp(R) and ||F||p ≤ ||f||p. The Weierstrass transform consequently yields a bounded operator W : Lp(R) → Lp(R).
If f is sufficiently smooth, then the Weierstrass transform of the k-th derivative of f is equal to the k-th derivative of the Weierstrass transform of f.
There is a formula relating the Weierstrass transform W and the two-sided Laplace transform L. If we define
$g(x)=e^{-{\frac {x^{2}}{4}}}f(x)$
then
$W[f](x)={\frac {1}{\sqrt {4\pi }}}e^{-x^{2}/4}L[g]\left(-{\frac {x}{2}}\right).$
Low-pass filter
We have seen above that the Weierstrass transform of cos(bx) is $e^{-b^{2}}\cos(bx)$, and analogously for sin(bx). In terms of signal analysis, this suggests that if the signal f contains the frequency b (i.e. contains a summand which is a combination of sin(bx) and cos(bx)), then the transformed signal F will contain the same frequency, but with an amplitude multiplied by the factor $e^{-b^{2}}$. This has the consequence that higher frequencies are reduced more than lower ones, and the Weierstrass transform thus acts as a low-pass filter. This can also be shown with the continuous Fourier transform, as follows. The Fourier transform analyzes a signal in terms of its frequencies, transforms convolutions into products, and transforms Gaussians into Gaussians. The Weierstrass transform is convolution with a Gaussian and is therefore multiplication of the Fourier transformed signal with a Gaussian, followed by application of the inverse Fourier transform. This multiplication with a Gaussian in frequency space blends out high frequencies, which is another way of describing the "smoothing" property of the Weierstrass transform.
The inverse transform
The following formula, closely related to the Laplace transform of a Gaussian function, and a real analogue to the Hubbard–Stratonovich transformation, is relatively easy to establish:
$e^{u^{2}}={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }e^{-uy}e^{-y^{2}/4}\;dy.$
Now replace u with the formal differentiation operator D = d/dx and utilize the Lagrange shift operator
$e^{-yD}f(x)=f(x-y)$,
(a consequence of the Taylor series formula and the definition of the exponential function), to obtain
${\begin{aligned}e^{D^{2}}f(x)&={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }e^{-yD}f(x)e^{-y^{2}/4}\;dy\\&={\frac {1}{\sqrt {4\pi }}}\int _{-\infty }^{\infty }f(x-y)e^{-y^{2}/4}\;dy=W[f](x)\end{aligned}}$
to thus obtain the following formal expression for the Weierstrass transform W,
$W=e^{D^{2}}~,$
where the operator on the right is to be understood as acting on the function f(x) as
$e^{D^{2}}f(x)=\sum _{k=0}^{\infty }{\frac {D^{2k}f(x)}{k!}}~.$
The above formal derivation glosses over details of convergence, and the formula $W=e^{D^{2}}$ is thus not universally valid; there are several functions f which have a well-defined Weierstrass transform, but for which $e^{D^{2}}f(x)$ cannot be meaningfully defined.
Nevertheless, the rule is still quite useful and can, for example, be used to derive the Weierstrass transforms of polynomials, exponential and trigonometric functions mentioned above.
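For polynomials the series terminates, so the rule $W=e^{D^{2}}$ can be applied literally. In this sketch (helper names are ours) a polynomial is represented by its coefficient list, and the transforms stated earlier are recovered:

```python
from math import factorial

def deriv(coeffs):
    # d/dx of a polynomial given as [c0, c1, c2, ...] (c_k multiplies x^k)
    return [k * c for k, c in enumerate(coeffs)][1:] or [0.0]

def weierstrass_poly(coeffs):
    # W = e^{D^2}: sum over k of D^{2k} p / k!  (terminates for polynomials)
    result = [0.0] * len(coeffs)
    term, k = list(coeffs), 0
    while any(term):
        for i, c in enumerate(term):
            result[i] += c / factorial(k)
        term = deriv(deriv(term))
        k += 1
    return result

# W[x^2] = x^2 + 2
assert weierstrass_poly([0, 0, 1]) == [2.0, 0.0, 1.0]
# W[H_2(x/2)] = W[x^2 - 2] = x^2, with the physicists' H_2(u) = 4u^2 - 2
assert weierstrass_poly([-2, 0, 1]) == [0.0, 0.0, 1.0]
```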
The formal inverse of the Weierstrass transform is thus given by
$W^{-1}=e^{-D^{2}}~.$
Again, this formula is not universally valid but can serve as a guide. It can be shown to be correct for certain classes of functions if the right-hand side operator is properly defined.[2]
One may, alternatively, attempt to invert the Weierstrass transform in a slightly different way: given the analytic function
$F(x)=\sum _{n=0}^{\infty }a_{n}x^{n}~,$
apply W−1 to obtain
$f(x)=W^{-1}[F(x)]=\sum _{n=0}^{\infty }a_{n}W^{-1}[x^{n}]=\sum _{n=0}^{\infty }a_{n}H_{n}(x/2)$
once more using a fundamental property of the (physicists') Hermite polynomials Hn.
Again, this formula for f(x) is at best formal, since one has not checked whether the final series converges. But if, for instance, f ∈ L2(R), then knowledge of all the derivatives of F at x = 0 suffices to yield the coefficients $a_{n}$, and thus to reconstruct f as a series of Hermite polynomials.
A third method of inverting the Weierstrass transform exploits its connection to the Laplace transform mentioned above, and the well-known inversion formula for the Laplace transform. The result is stated below for distributions.
Generalizations
We can use convolution with the Gaussian kernel $ {\frac {1}{\sqrt {4\pi t}}}e^{-{\frac {x^{2}}{4t}}}$ (with some t > 0) instead of $ {\frac {1}{\sqrt {4\pi }}}e^{-{\frac {x^{2}}{4}}}$, thus defining an operator Wt, the generalized Weierstrass transform.
For small values of t, Wt[f] is very close to f, but smooth. The larger t, the more this operator averages out and changes f. Physically, Wt corresponds to following the heat (or diffusion) equation for t time units, and this is additive,
$W_{s}\circ W_{t}=W_{s+t},$
corresponding to "diffusing for t time units, then s time units, is equivalent to diffusing for s + t time units". One can extend this to t = 0 by setting W0 to be the identity operator (i.e. convolution with the Dirac delta function), and these then form a one-parameter semigroup of operators.
The kernel $ {\frac {1}{\sqrt {4\pi t}}}e^{-{\frac {x^{2}}{4t}}}$ used for the generalized Weierstrass transform is sometimes called the Gauss–Weierstrass kernel, and is Green's function for the diffusion equation $(\partial _{t}-D^{2})(e^{tD^{2}}f(x))=0$ on R.
See also: Heat equation § Fundamental solutions, and Heat kernel
$W_{t}$ can be computed from W: given a function f(x), define a new function $f_{t}(x)=f(x{\sqrt {t}})$; then $W_{t}[f](x)=W[f_{t}](x/{\sqrt {t}})$, a consequence of the substitution rule.
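Both the scaling rule and the frequency damping $e^{-tb^{2}}$ (the t = 1 case was shown earlier) can be confirmed numerically. This is a sketch; the `w_t` helper is ours:

```python
import math

def w_t(f, x, t, half_width=40.0, n=40_000):
    # Simpson approximation of convolution with the kernel
    # e^{-y^2/(4t)} / sqrt(4*pi*t), truncated to |y| <= half_width.
    h = 2 * half_width / n
    total = 0.0
    for i in range(n + 1):
        y = -half_width + i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        total += w * f(x - y) * math.exp(-y * y / (4 * t))
    return total * h / (3 * math.sqrt(4 * math.pi * t))

b, x, t = 1.1, 0.4, 2.0
f = lambda u: math.cos(b * u)
f_t = lambda u: f(u * math.sqrt(t))

# W_t damps the frequency b by e^{-t b^2}
assert abs(w_t(f, x, t) - math.exp(-t * b * b) * math.cos(b * x)) < 1e-6
# Scaling rule: W_t[f](x) = W[f_t](x / sqrt(t))
assert abs(w_t(f, x, t) - w_t(f_t, x / math.sqrt(t), 1.0)) < 1e-6
```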
The Weierstrass transform can also be defined for certain classes of distributions or "generalized functions".[3] For example, the Weierstrass transform of the Dirac delta is the Gaussian $ {\frac {1}{\sqrt {4\pi }}}e^{-x^{2}/4}$.
In this context, rigorous inversion formulas can be proved, e.g.,
$f(x)=\lim _{r\to \infty }{\frac {1}{i{\sqrt {4\pi }}}}\int _{x_{0}-ir}^{x_{0}+ir}F(z)e^{\frac {(x-z)^{2}}{4}}\;dz$
where x0 is any fixed real number for which F(x0) exists, the integral extends over the vertical line in the complex plane with real part x0, and the limit is to be taken in the sense of distributions.
Furthermore, the Weierstrass transform can be defined for real- (or complex-) valued functions (or distributions) defined on Rn. We use the same convolution formula as above but interpret the integral as extending over all of Rn and the expression (x − y)2 as the square of the Euclidean length of the vector x − y; the factor in front of the integral has to be adjusted so that the Gaussian will have a total integral of 1.
More generally, the Weierstrass transform can be defined on any Riemannian manifold: the heat equation can be formulated there (using the manifold's Laplace–Beltrami operator), and the Weierstrass transform W[f] is then given by following the solution of the heat equation for one time unit, starting with the initial "temperature distribution" f.
Related transforms
If one considers convolution with the kernel 1/(π(1 + x2)) instead of with a Gaussian, one obtains the Poisson transform which smoothes and averages a given function in a manner similar to the Weierstrass transform.
See also
• Gaussian blur
• Gaussian filter
• Husimi Q representation
• Heat equation#Fundamental solutions
References
1. Ahmed I. Zayed, Handbook of Function and Generalized Function Transformations, Chapter 18. CRC Press, 1996.
2. G. G. Bilodeau, "The Weierstrass Transform and Hermite Polynomials". Duke Mathematical Journal 29 (1962), p. 293-308
3. Yu A. Brychkov, A. P. Prudnikov. Integral Transforms of Generalized Functions, Chapter 5. CRC Press, 1989
Weierstrass functions
In mathematics, the Weierstrass functions are special functions of a complex variable that are auxiliary to the Weierstrass elliptic function. They are named for Karl Weierstrass. The relation between the sigma, zeta, and $\wp $ functions is analogous to that between the sine, cotangent, and squared cosecant functions: the logarithmic derivative of the sine is the cotangent, whose derivative in turn is the negative of the squared cosecant.
For the fractal continuous function without a defined derivative, see Weierstrass function.
Weierstrass sigma function
The Weierstrass sigma function associated to a two-dimensional lattice $\Lambda \subset \mathbb {C} $ is defined to be the product
${\begin{aligned}\operatorname {\sigma } {(z;\Lambda )}&=z\prod _{w\in \Lambda ^{*}}\left(1-{\frac {z}{w}}\right)\exp \left({\frac {z}{w}}+{\frac {1}{2}}\left({\frac {z}{w}}\right)^{2}\right)\\[5mu]&=z\prod _{\begin{smallmatrix}m,n=-\infty \\(m,n)\neq (0,0)\end{smallmatrix}}^{\infty }\left(1-{\frac {z}{m\omega _{1}+n\omega _{2}}}\right)\exp {\left({\frac {z}{m\omega _{1}+n\omega _{2}}}+{\frac {1}{2}}\left({\frac {z}{m\omega _{1}+n\omega _{2}}}\right)^{2}\right)}\end{aligned}}$
where $\Lambda ^{*}$ denotes $\Lambda \setminus \{0\}$ and $\{\omega _{1},\omega _{2}\}$ is a fundamental pair of periods for $\Lambda $.
Through careful manipulation of the Weierstrass factorization theorem as it relates also to the sine function, another potentially more manageable infinite product definition is
$\operatorname {\sigma } {(z;\Lambda )}={\frac {\omega _{i}}{\pi }}\exp {\left({\frac {\eta _{i}z^{2}}{\omega _{i}}}\right)}\sin {\left({\frac {\pi z}{\omega _{i}}}\right)}\prod _{n=1}^{\infty }\left(1-{\frac {\sin ^{2}{\left(\pi z/\omega _{i}\right)}}{\sin ^{2}{\left(n\pi \omega _{j}/\omega _{i}\right)}}}\right)$
for any $i,j\in \{1,2,3\}$ with $i\neq j$, where we have used the notation $\eta _{i}=\zeta (\omega _{i}/2;\Lambda )$ (see zeta function below).
Weierstrass zeta function
The Weierstrass zeta function is defined by the sum
$\operatorname {\zeta } {(z;\Lambda )}={\frac {\sigma '(z;\Lambda )}{\sigma (z;\Lambda )}}={\frac {1}{z}}+\sum _{w\in \Lambda ^{*}}\left({\frac {1}{z-w}}+{\frac {1}{w}}+{\frac {z}{w^{2}}}\right).$
The Weierstrass zeta function is the logarithmic derivative of the sigma-function. The zeta function can be rewritten as:
$\operatorname {\zeta } {(z;\Lambda )}={\frac {1}{z}}-\sum _{k=1}^{\infty }{\mathcal {G}}_{2k+2}(\Lambda )z^{2k+1}$
where ${\mathcal {G}}_{2k+2}$ is the Eisenstein series of weight 2k + 2.
The derivative of the zeta function is $-\wp (z)$, where $\wp (z)$ is the Weierstrass elliptic function.
The Weierstrass zeta function should not be confused with the Riemann zeta function in number theory.
Weierstrass eta function
The Weierstrass eta function is defined to be
$\eta (w;\Lambda )=\zeta (z+w;\Lambda )-\zeta (z;\Lambda ),{\mbox{ for any }}z\in \mathbb {C} $ and any w in the lattice $\Lambda $
This is well-defined, i.e. $\zeta (z+w;\Lambda )-\zeta (z;\Lambda )$ only depends on the lattice vector w. The Weierstrass eta function should not be confused with either the Dedekind eta function or the Dirichlet eta function.
Weierstrass ℘-function
The Weierstrass p-function is related to the zeta function by
$\operatorname {\wp } {(z;\Lambda )}=-\operatorname {\zeta '} {(z;\Lambda )},{\mbox{ for any }}z\in \mathbb {C} $
The Weierstrass ℘-function is an even elliptic function of order N=2 with a double pole at each lattice point and no other poles.
Degenerate case
Consider the situation where one period is real, which we can scale to be $\omega _{1}=2\pi $, and the other is taken to the limit $\omega _{2}\rightarrow i\infty $, so that the functions become singly periodic. The corresponding invariants are $\{g_{2},g_{3}\}=\left\{{\tfrac {1}{12}},{\tfrac {1}{216}}\right\}$, with discriminant $\Delta =0$. Then we have $\eta _{1}={\tfrac {\pi }{12}}$, and thus from the above infinite product definition the following equality:
$\operatorname {\sigma } {(z;\Lambda )}=2e^{z^{2}/24}\sin {\left({\tfrac {z}{2}}\right)}$
A generalization for other sine-like functions on other doubly-periodic lattices is
$f(z)={\frac {\pi }{\omega _{1}}}e^{-(4\eta _{1}/\omega _{1})z^{2}}\operatorname {\sigma } {(2z;\Lambda )}$
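The degenerate formula can be checked against the general relations $\zeta =\sigma '/\sigma $ and $\wp =-\zeta '$ by numerical differentiation. This is a sketch using central differences (helper names are ours); for $\sigma (z)=2e^{z^{2}/24}\sin(z/2)$ one expects $\zeta (z)=z/12+{\tfrac {1}{2}}\cot(z/2)$ and $\wp (z)={\tfrac {1}{4}}\csc ^{2}(z/2)-{\tfrac {1}{12}}$:

```python
import math

def sigma(z):
    # Degenerate sigma function: 2 e^{z^2/24} sin(z/2)
    return 2 * math.exp(z * z / 24) * math.sin(z / 2)

def num_deriv(f, z, h=1e-5):
    # Central difference approximation of f'(z)
    return (f(z + h) - f(z - h)) / (2 * h)

for z in (0.4, 1.1, 2.3):
    # zeta = sigma'/sigma should reduce to z/12 + (1/2) cot(z/2)
    zeta = num_deriv(sigma, z) / sigma(z)
    assert abs(zeta - (z / 12 + 0.5 / math.tan(z / 2))) < 1e-8
    # wp = -zeta' should reduce to (1/4) csc^2(z/2) - 1/12
    wp = -num_deriv(lambda u: num_deriv(sigma, u) / sigma(u), z, h=1e-4)
    assert abs(wp - (0.25 / math.sin(z / 2) ** 2 - 1 / 12)) < 1e-4
```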
This article incorporates material from Weierstrass sigma function on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Weierstrass–Erdmann condition
The Weierstrass–Erdmann condition is a mathematical result from the calculus of variations, which gives necessary conditions that a broken extremal (that is, an extremal which is smooth except at a finite number of corners) must satisfy at each corner.[1]
Conditions
The Weierstrass-Erdmann corner conditions stipulate that a broken extremal $y(x)$ of a functional $J=\int \limits _{a}^{b}f(x,y,y')\,dx$ satisfies the following two continuity relations at each corner $c\in [a,b]$:
1. $\left.{\frac {\partial f}{\partial y'}}\right|_{x=c-0}=\left.{\frac {\partial f}{\partial y'}}\right|_{x=c+0}$
2. $\left.\left(f-y'{\frac {\partial f}{\partial y'}}\right)\right|_{x=c-0}=\left.\left(f-y'{\frac {\partial f}{\partial y'}}\right)\right|_{x=c+0}$.
Applications
The conditions make it possible to determine whether, and where, corners can occur along a given extremal; this has applications in differential geometry. In calculations involving the Weierstrass E-function, it is often helpful to locate the corners along the curves. Similarly, the conditions allow one to find a minimizing curve for a given integral.
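As an illustration, consider the classical example $f(x,y,y')=y'^{2}(1-y')^{2}$, whose broken extremals alternate between the slopes $y'=0$ and $y'=1$. The sketch below (our own code, with the partial derivative computed by hand) confirms that a corner joining these two slopes satisfies both continuity relations:

```python
def f(yp):
    # Integrand f(x, y, y') = y'^2 (1 - y')^2 (depends only on y')
    return yp ** 2 * (1 - yp) ** 2

def f_yp(yp):
    # Partial derivative with respect to y': 2 y' (1 - y')(1 - 2 y')
    return 2 * yp * (1 - yp) * (1 - 2 * yp)

left, right = 0.0, 1.0  # slopes on either side of a candidate corner

# Condition 1: the value of df/dy' agrees from both sides of the corner
assert f_yp(left) == f_yp(right) == 0
# Condition 2: the value of f - y' df/dy' agrees from both sides of the corner
assert f(left) - left * f_yp(left) == f(right) - right * f_yp(right) == 0
```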
References
1. Gelfand, I. M.; Fomin, S. V. (1963). Calculus of Variations. Englewood Cliffs, NJ: Prentice-Hall. pp. 61–63. ISBN 9780486135014.
Weighing matrix
In mathematics, a weighing matrix of order $n$ and weight $w$ is a matrix $W$ with entries from the set $\{0,1,-1\}$ such that:
$WW^{\mathsf {T}}=wI_{n}$
where $W^{\mathsf {T}}$ is the transpose of $W$ and $I_{n}$ is the identity matrix of order $n$. The weight $w$ is also called the degree of the matrix. For convenience, a weighing matrix of order $n$ and weight $w$ is often denoted by $W(n,w)$.[3]
Weighing matrices are so called because of their use in optimally measuring the individual weights of multiple objects. When the weighing device is a balance scale, the statistical variance of the measurement can be minimized by weighing multiple objects at once, including some objects in the opposite pan of the scale where they subtract from the measurement.[1][2]
Properties
Some properties are immediate from the definition. If $W$ is a $W(n,w)$, then:
• The rows of $W$ are pairwise orthogonal (that is, every pair of rows you pick from $W$ will be orthogonal). Similarly, the columns are pairwise orthogonal.
• Each row and each column of $W$ has exactly $w$ non-zero elements.
• $W^{\mathsf {T}}W=wI$, since the definition means that $W^{-1}=w^{-1}W^{\mathsf {T}}$, where $W^{-1}$ is the inverse of $W$.
• $\det(W)=\pm w^{n/2}$ where $\det(W)$ is the determinant of $W$.
A weighing matrix is a generalization of Hadamard matrix, which does not allow zero entries.[3] As two special cases, a $W(n,n)$ is a Hadamard matrix[3] and a $W(n,n-1)$ is equivalent to a conference matrix.
Applications
Experiment design
See also: Design of experiments § Example
Weighing matrices take their name from the problem of measuring the weight of multiple objects. If a measuring device has a statistical variance of $\sigma ^{2}$, then measuring the weights of $N$ objects and subtracting the (equally imprecise) tare weight will result in a final measurement with a variance of $2\sigma ^{2}$.[4] It is possible to increase the accuracy of the estimated weights by measuring different subsets of the objects, especially when using a balance scale where objects can be put on the opposite measuring pan where they subtract their weight from the measurement.
An order $n$ matrix $W$ can be used to represent the placement of $n$ objects—including the tare weight—in $n$ trials. Suppose the left pan of the balance scale adds to the measurement and the right pan subtracts from the measurement. Each element of this matrix $w_{ij}$ will have:
$w_{ij}={\begin{cases}0&{\text{if on the }}i{\text{th trial the }}j{\text{th object was not measured}}\\1&{\text{if on the }}i{\text{th trial the }}j{\text{th object was placed in the left pan}}\\-1&{\text{if on the }}i{\text{th trial the }}j{\text{th object was placed in the right pan }}\\\end{cases}}$
Let $\mathbf {x} $ be a column vector of the measurements of each of the $n$ trials, let $\mathbf {e} $ be the errors to these measurements each independent and identically distributed with variance $\sigma ^{2}$, and let $\mathbf {y} $ be a column vector of the true weights of each of the $n$ objects. Then we have:
$\mathbf {x} =W\mathbf {y} +\mathbf {e} $
Assuming that $W$ is non-singular, we can use the method of least squares to calculate an estimate ${\hat {\mathbf {y} }}$ of the true weights:
${\hat {\mathbf {y} }}=(W^{\mathsf {T}}W)^{-1}W^{\mathsf {T}}\mathbf {x} $
The variance of the estimated ${\hat {\mathbf {y} }}$ vector cannot be lower than ${\tfrac {\sigma ^{2}}{n}}$, and is minimized if and only if $W$ is a weighing matrix.[4][5]
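To make the procedure concrete, here is a small simulation (a sketch with made-up true weights; it uses the $W(4,3)$ matrix shown in the Examples section below):

```python
import random

# The W(4,3) weighing matrix from the Examples section
W = [[1,  1,  1,  0],
     [1, -1,  0,  1],
     [1,  0, -1, -1],
     [0,  1, -1,  1]]

true_y = [3.0, 1.5, 2.0, 0.5]   # hypothetical true weights (made up)
sigma = 0.1                     # measurement standard deviation
random.seed(0)

# Simulated measurements x = W y + e
x = [sum(W[i][j] * true_y[j] for j in range(4)) + random.gauss(0, sigma)
     for i in range(4)]

# Least squares: (W^T W)^{-1} W^T x; here W^T W = 3 I, so this is W^T x / 3
est = [sum(W[i][j] * x[i] for i in range(4)) / 3 for j in range(4)]
for e, t in zip(est, true_y):
    assert abs(e - t) < 3 * sigma  # each estimate lands near the true weight
```

Because each column of $W$ has three nonzero entries, each estimate averages three measurements and its standard deviation drops to $\sigma /{\sqrt {3}}$.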
Optical measurement
Weighing matrices appear in the engineering of spectrometers, image scanners,[6] and optical multiplexing systems.[5] The design of these instruments involve an optical mask and two detectors that measure the intensity of light. The mask can either transmit light to the first detector, absorb it, or reflect it toward the second detector. The measurement of the second detector is subtracted from the first, and so these three cases correspond to weighing matrix elements of 1, 0, and -1 respectively. As this is essentially the same measurement problem as in the previous section, the usefulness of weighing matrices also applies.[6]
Examples
Note that when weighing matrices are displayed, the symbol $-$ is used to represent −1. Here are some examples:
This is a $W(2,2)$:
${\begin{pmatrix}1&1\\1&-\end{pmatrix}}$
This is a $W(4,3)$:
${\begin{pmatrix}1&1&1&0\\1&-&0&1\\1&0&-&-\\0&1&-&1\end{pmatrix}}$
This is a $W(7,4)$:
${\begin{pmatrix}1&1&1&1&0&0&0\\1&-&0&0&1&1&0\\1&0&-&0&-&0&1\\1&0&0&-&0&-&-\\0&1&-&0&0&1&-\\0&1&0&-&1&0&1\\0&0&1&-&-&1&0\end{pmatrix}}$
Another $W(7,4)$:
${\begin{pmatrix}-&1&1&0&1&0&0\\0&-&1&1&0&1&0\\0&0&-&1&1&0&1\\1&0&0&-&1&1&0\\0&1&0&0&-&1&1\\1&0&1&0&0&-&1\\1&1&0&1&0&0&-\end{pmatrix}}$
This matrix is cyclic: each row is a cyclic shift of the previous row. Such a matrix is called a $CW(n,k)$ and is determined by its first row. Circulant weighing matrices are of special interest because their algebraic structure makes them easier to classify. Indeed, it is known that the weight of a circulant weighing matrix must be a perfect square, so only weights $k=1,4,9,16,\ldots $ can occur, and all weights $k\leq 25$ have been completely classified.[7] Two special (and in fact extreme) cases of circulant weighing matrices are: (A) circulant Hadamard matrices, which are conjectured not to exist unless their order is less than 5; this circulant Hadamard conjecture, first raised by Ryser, is known to be true for many orders but remains open. (B) $CW(n,k)$ of weight $k=s^{2}$ and minimal order $n$, which exist if $s$ is a prime power; such a circulant weighing matrix can be obtained by signing the complement of a finite projective plane. Since all $CW(n,k)$ for $k\leq 25$ have been classified, the first open case is $CW(105,36)$. The first open case for a general weighing matrix (certainly not a circulant one) is $W(35,25)$.
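The defining property $WW^{\mathsf {T}}=wI_{n}$ and the circulant structure are easy to verify programmatically. This sketch (helper names are ours) checks two of the examples above:

```python
def is_weighing(W, w):
    # Check WW^T = w I entrywise for a list-of-rows matrix W
    n = len(W)
    for i in range(n):
        for j in range(n):
            dot = sum(W[i][k] * W[j][k] for k in range(n))
            if dot != (w if i == j else 0):
                return False
    return True

W43 = [[1, 1, 1, 0], [1, -1, 0, 1], [1, 0, -1, -1], [0, 1, -1, 1]]
assert is_weighing(W43, 3)

# The cyclic W(7,4): row i is the first row shifted right by i places
# (note first[-0:] is the whole list and first[:-0] is empty, so i = 0
# reproduces the first row itself)
first = [-1, 1, 1, 0, 1, 0, 0]
CW74 = [first[-i:] + first[:-i] for i in range(7)]
assert is_weighing(CW74, 4)
```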
Equivalence
Two weighing matrices are considered equivalent if one can be obtained from the other by a series of permutations and negations of the rows and columns of the matrix. The classification of weighing matrices is complete for all cases where $w$ ≤ 5, as well as for all cases where $n$ ≤ 15.[8] However, very little has been done beyond this, with the exception of the classification of circulant weighing matrices.[9][10]
Open questions
There are many open questions about weighing matrices. The main question about weighing matrices is their existence: for which values of $n$ and $w$ does there exist a $W(n,w)$? A great deal about this is unknown. An equally important but often overlooked question about weighing matrices is their enumeration: for a given $n$ and $w$, how many $W(n,w)$'s are there?
This question has two different meanings: enumerating $W(n,w)$'s up to equivalence, and enumerating distinct matrices with the same $n,w$ parameters. Some papers have been published on the first question, but none on the second.
References
1. Raghavarao, Damaraju (1960). "Some Aspects of Weighing Designs". The Annals of Mathematical Statistics. Institute of Mathematical Statistics. 31 (4): 878–884. doi:10.1214/aoms/1177705664. ISSN 0003-4851.
2. Seberry, Jennifer (2017). "Some Algebraic and Combinatorial Non-existence Results". Orthogonal Designs. Cham: Springer International Publishing. pp. 7–17. doi:10.1007/978-3-319-59032-5_2. ISBN 978-3-319-59031-8.
3. Geramita, Anthony V.; Pullman, Norman J.; Wallis, Jennifer S. (1974). "Families of weighing matrices". Bulletin of the Australian Mathematical Society. Cambridge University Press (CUP). 10 (1): 119–122. doi:10.1017/s0004972700040703. ISSN 0004-9727. S2CID 122560830.
4. Raghavarao, Damaraju (1971). "Weighing Designs". Constructions and combinatorial problems in design of experiments. New York: Wiley. pp. 305–308. ISBN 978-0471704850.
5. Koukouvinos, Christos; Seberry, Jennifer (1997). "Weighing matrices and their applications". Journal of Statistical Planning and Inference. Elsevier BV. 62 (1): 91–101. doi:10.1016/s0378-3758(96)00172-3. ISSN 0378-3758. S2CID 122205953.
6. Sloane, Neil J. A.; Harwit, Martin (1976-01-01). "Masks for Hadamard transform optics, and weighing designs". Applied Optics. The Optical Society. 15 (1): 107–114. Bibcode:1976ApOpt..15..107S. doi:10.1364/ao.15.000107. ISSN 0003-6935. PMID 20155192.
7. Arasu, K.T.; Gordon, Daniel M.; Zhang, Yiran (2019). "New Nonexistence Results on Circulant Weighing Matrices". arXiv:1908.08447v3. {{cite journal}}: Cite journal requires |journal= (help)
8. Harada, Masaaki; Munemasa, Akihiro (2012). "On the classification of weighing matrices and self-orthogonal codes". J. Combin. Designs. 20: 40–57. arXiv:1011.5382. doi:10.1002/jcd.20295. S2CID 1004492.
9. Ang, Miin Huey; Arasu, K.T.; Lun Ma, Siu; Strassler, Yoseph (2008). "Study of proper circulant weighing matrices with weight 9". Discrete Mathematics. 308 (13): 2802–2809. doi:10.1016/j.disc.2004.12.029.
10. Arasu, K.T.; Hin Leung, Ka; Lun Ma, Siu; Nabavi, Ali; Ray-Chaudhuri, D.K. (2006). "Determination of all possible orders of weight 16 circulant weighing matrices". Finite Fields and Their Applications. 12 (4): 498–538. doi:10.1016/j.ffa.2005.06.009.
|
Weight (strings)
The $a$-weight of a string, for a letter $a$, is the number of times that letter occurs in the string. More precisely, let $A$ be a finite set (called the alphabet), $a\in A$ a letter of $A$, and $c\in A^{*}$ a string (where $A^{*}$ is the free monoid generated by the elements of $A$, equivalently the set of strings, including the empty string, whose letters are from $A$). Then the $a$-weight of $c$, denoted by $\mathrm {wt} _{a}(c)$, is the number of times the generator $a$ occurs in the unique expression for $c$ as a product (concatenation) of letters in $A$.
If $A$ is an abelian group, the Hamming weight $\mathrm {wt} (c)$ of $c$, often simply referred to as "weight", is the number of nonzero letters in $c$.
Examples
• Let $A=\{x,y,z\}$. In the string $c=yxxzyyzxyzzyx$, $y$ occurs 5 times, so the $y$-weight of $c$ is $\mathrm {wt} _{y}(c)=5$.
• Let $A=\mathbf {Z} _{3}=\{0,1,2\}$ (an abelian group) and $c=002001200$. Then $\mathrm {wt} _{0}(c)=6$, $\mathrm {wt} _{1}(c)=1$, $\mathrm {wt} _{2}(c)=2$ and $\mathrm {wt} (c)=\mathrm {wt} _{1}(c)+\mathrm {wt} _{2}(c)=3$.
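A minimal sketch of these definitions in code (the function names are ours, not standard):

```python
# Sketch: letter weight and Hamming weight of a string.
def letter_weight(c, a):
    """The a-weight of c: how many times the letter a occurs in c."""
    return sum(1 for letter in c if letter == a)

def hamming_weight(c, zero="0"):
    """Number of letters of c different from the zero element."""
    return sum(1 for letter in c if letter != zero)

assert letter_weight("yxxzyyzxyzzyx", "y") == 5   # first example above
assert hamming_weight("002001200") == 3           # second example above
```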
This article incorporates material from Weight (strings) on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
|
Weight function
A weight function is a mathematical device used when performing a sum, integral, or average to give some elements more "weight" or influence on the result than other elements in the same set. The result of this application of a weight function is a weighted sum or weighted average. Weight functions occur frequently in statistics and analysis, and are closely related to the concept of a measure. Weight functions can be employed in both discrete and continuous settings. They can be used to construct systems of calculus called "weighted calculus"[1] and "meta-calculus".[2]
Discrete weights
General definition
In the discrete setting, a weight function $w\colon A\to \mathbb {R} ^{+}$ is a positive function defined on a discrete set $A$, which is typically finite or countable. The weight function $w(a):=1$ corresponds to the unweighted situation in which all elements have equal weight. One can then apply this weight to various concepts.
If the function $f\colon A\to \mathbb {R} $ is a real-valued function, then the unweighted sum of $f$ on $A$ is defined as
$\sum _{a\in A}f(a);$
but given a weight function $w\colon A\to \mathbb {R} ^{+}$, the weighted sum or conical combination is defined as
$\sum _{a\in A}f(a)w(a).$
One common application of weighted sums arises in numerical integration.
If B is a finite subset of A, one can replace the unweighted cardinality |B| of B by the weighted cardinality
$\sum _{a\in B}w(a).$
If A is a finite non-empty set, one can replace the unweighted mean or average
${\frac {1}{|A|}}\sum _{a\in A}f(a)$
by the weighted mean or weighted average
${\frac {\sum _{a\in A}f(a)w(a)}{\sum _{a\in A}w(a)}}.$
In this case only the relative weights are relevant.
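As a quick illustration (with made-up values of $f$ and $w$), weighting pulls the mean toward the heavily weighted elements, and scaling all weights by a common factor leaves the weighted mean unchanged:

```python
# Sketch: unweighted vs. weighted mean over a finite set A (values made up).
A = [1, 2, 3, 4]
f = {1: 10.0, 2: 20.0, 3: 30.0, 4: 40.0}
w = {1: 1.0, 2: 1.0, 3: 1.0, 4: 5.0}    # element 4 counts five times as much

unweighted = sum(f[a] for a in A) / len(A)
weighted = sum(f[a] * w[a] for a in A) / sum(w[a] for a in A)
doubled = sum(f[a] * 2 * w[a] for a in A) / sum(2 * w[a] for a in A)

assert unweighted == 25.0
assert weighted == 32.5          # pulled toward f(4) = 40
assert doubled == weighted       # only the relative weights matter
```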
Statistics
Weighted means are commonly used in statistics to compensate for the presence of bias. For a quantity $f$ measured multiple independent times $f_{i}$ with variance $\sigma _{i}^{2}$, the best estimate of the signal is obtained by averaging all the measurements with weight $ w_{i}=1/{\sigma _{i}^{2}}$, and the resulting variance, $ \sigma ^{2}=1/\sum _{i}w_{i}$, is smaller than that of each individual measurement. The maximum likelihood method weights the difference between fit and data using the same weights $w_{i}$.
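A sketch of inverse-variance weighting with made-up measurement values and standard deviations:

```python
# Sketch: inverse-variance weighted average of repeated measurements.
# The measurement values and standard deviations are illustrative.
measurements = [(10.2, 0.5), (9.8, 0.2), (10.5, 1.0)]   # (f_i, sigma_i)

weights = [1 / s ** 2 for _, s in measurements]
best = sum(f * wi for (f, _), wi in zip(measurements, weights)) / sum(weights)
var = 1 / sum(weights)

# The combined variance is smaller than that of the best single measurement.
assert var < min(s ** 2 for _, s in measurements)
print(f"best estimate = {best:.3f}, variance = {var:.4f}")
```

Note that the most precise measurement (the one with $\sigma_i = 0.2$) dominates the average, as expected.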
The expected value of a random variable is the weighted average of the possible values it might take on, with the weights being the respective probabilities. More generally, the expected value of a function of a random variable is the probability-weighted average of the values the function takes on for each possible value of the random variable.
In regressions in which the dependent variable is assumed to be affected by both current and lagged (past) values of the independent variable, a distributed lag function is estimated, this function being a weighted average of the current and various lagged independent variable values. Similarly, a moving average model specifies an evolving variable as a weighted average of current and various lagged values of a random variable.
Mechanics
The terminology weight function arises from mechanics: if one has a collection of $n$ objects on a lever, with weights $w_{1},\ldots ,w_{n}$ (where weight is now interpreted in the physical sense) and locations ${\boldsymbol {x}}_{1},\dotsc ,{\boldsymbol {x}}_{n}$, then the lever will be in balance if the fulcrum of the lever is at the center of mass
${\frac {\sum _{i=1}^{n}w_{i}{\boldsymbol {x}}_{i}}{\sum _{i=1}^{n}w_{i}}},$
which is also the weighted average of the positions ${\boldsymbol {x}}_{i}$.
Continuous weights
In the continuous setting, a weight is a positive measure such as $w(x)\,dx$ on some domain $\Omega $, which is typically a subset of a Euclidean space $\mathbb {R} ^{n}$, for instance $\Omega $ could be an interval $[a,b]$. Here $dx$ is Lebesgue measure and $w\colon \Omega \to \mathbb {R} ^{+}$ is a non-negative measurable function. In this context, the weight function $w(x)$ is sometimes referred to as a density.
General definition
If $f\colon \Omega \to \mathbb {R} $ is a real-valued function, then the unweighted integral
$\int _{\Omega }f(x)\ dx$
can be generalized to the weighted integral
$\int _{\Omega }f(x)w(x)\,dx$
Note that one may need to require $f$ to be absolutely integrable with respect to the weight $w(x)\,dx$ in order for this integral to be finite.
Weighted volume
If E is a subset of $\Omega $, then the volume vol(E) of E can be generalized to the weighted volume
$\int _{E}w(x)\ dx.$
Weighted average
If $\Omega $ has finite non-zero weighted volume, then we can replace the unweighted average
${\frac {1}{\mathrm {vol} (\Omega )}}\int _{\Omega }f(x)\ dx$
by the weighted average
${\frac {\int _{\Omega }f(x)\,w(x)\,dx}{\int _{\Omega }w(x)\,dx}}$
Bilinear form
If $f\colon \Omega \to {\mathbb {R} }$ and $g\colon \Omega \to {\mathbb {R} }$ are two functions, one can generalize the unweighted bilinear form
$\langle f,g\rangle :=\int _{\Omega }f(x)g(x)\ dx$
to a weighted bilinear form
$\langle f,g\rangle :=\int _{\Omega }f(x)g(x)\ w(x)\ dx.$
See the entry on orthogonal polynomials for examples of weighted orthogonal functions.
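As a numerical illustration (our own sketch), the Chebyshev polynomials $T_{1}(x)=x$ and $T_{2}(x)=2x^{2}-1$ are orthogonal under this bilinear form with the weight $w(x)=1/{\sqrt {1-x^{2}}}$ on $(-1,1)$; the substitution $x=\cos \theta $ removes the endpoint singularity:

```python
import math

# Sketch: weighted inner product with w(x) = 1/sqrt(1 - x^2) on (-1, 1).
# Substituting x = cos(theta) turns it into the integral of
# f(cos t) * g(cos t) over [0, pi], approximated by the midpoint rule.
def weighted_inner(f, g, steps=10_000):
    h = math.pi / steps
    return h * sum(f(math.cos((i + 0.5) * h)) * g(math.cos((i + 0.5) * h))
                   for i in range(steps))

T1 = lambda x: x
T2 = lambda x: 2 * x * x - 1

assert abs(weighted_inner(T1, T2)) < 1e-7               # orthogonal
assert abs(weighted_inner(T1, T1) - math.pi / 2) < 1e-6  # <T1, T1> = pi/2
```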
See also
• Center of mass
• Numerical integration
• Orthogonality
• Weighted mean
• Linear combination
• Kernel (statistics)
• Measure (mathematics)
• Riemann–Stieltjes integral
• Weighting
• Window function
References
1. Jane Grossman, Michael Grossman, Robert Katz. The First Systems of Weighted Differential and Integral Calculus, ISBN 0-9771170-1-4, 1980.
2. Jane Grossman.Meta-Calculus: Differential and Integral, ISBN 0-9771170-2-2, 1981.
|
Activity selection problem
The activity selection problem is a combinatorial optimization problem concerning the selection of non-conflicting activities to perform within a given time frame, given a set of activities each marked by a start time (si) and finish time (fi). The problem is to select the maximum number of activities that can be performed by a single person or machine, assuming that a person can only work on a single activity at a time. The activity selection problem is also known as the Interval scheduling maximization problem (ISMP), which is a special type of the more general Interval Scheduling problem.
A classic application of this problem is in scheduling a room for multiple competing events, each having its own time requirements (start and end time), and many more arise within the framework of operations research.
Formal definition
Assume there exist n activities, each represented by a start time si and finish time fi. Two activities i and j are said to be non-conflicting if si ≥ fj or sj ≥ fi. The activity selection problem consists of finding a maximum-size set S of non-conflicting activities; more precisely, there must exist no set S′ of non-conflicting activities with |S′| > |S|. Several maximum solutions may exist, but all have the same size.
Optimal solution
The activity selection problem is notable in that using a greedy algorithm to find a solution will always result in an optimal solution. A pseudocode sketch of the iterative version of the algorithm and a proof of the optimality of its result are included below.
Algorithm
Greedy-Iterative-Activity-Selector(A, s, f):
Sort A by finish times stored in f
S = {A[1]}
k = 1
n = A.length
for i = 2 to n:
if s[i] ≥ f[k]:
S = S U {A[i]}
k = i
return S
Explanation
Line 1: This algorithm is called Greedy-Iterative-Activity-Selector, because it is first of all a greedy algorithm, and then it is iterative. There's also a recursive version of this greedy algorithm.
• $A$ is an array containing the activities.
• $s$ is an array containing the start times of the activities in $A$.
• $f$ is an array containing the finish times of the activities in $A$.
Note that these arrays are indexed starting from 1 up to the length of the corresponding array.
Line 2: Sorts the array of activities $A$ in increasing order of finish times, using the finish times stored in the array $f$. This operation can be done in $O(n\cdot \log n)$ time, using for example the merge sort, heap sort, or quick sort algorithms.
Line 3: Creates a set $S$ to store the selected activities, and initialises it with the activity $A[1]$, which has the earliest finish time.
Line 4: Creates a variable $k$ that keeps track of the index of the last selected activity.
Line 6: Starts iterating from the second element of the array $A$ up to its last element.
Lines 7,8: If the start time $s[i]$ of the $i$th activity ($A[i]$) is greater than or equal to the finish time $f[k]$ of the last selected activity ($A[k]$), then $A[i]$ is compatible with the selected activities in the set $S$, and thus it is added to $S$.
Line 9: The index of the last selected activity is updated to the just-added activity $A[i]$.
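The pseudocode translates directly into a runnable sketch (0-indexed, with illustrative activity data):

```python
# Sketch: greedy activity selection over (start, finish) pairs.
def select_activities(activities):
    if not activities:
        return []
    ordered = sorted(activities, key=lambda a: a[1])   # sort by finish time
    selected = [ordered[0]]                            # earliest-finishing activity
    for start, finish in ordered[1:]:
        if start >= selected[-1][1]:                   # compatible with last chosen
            selected.append((start, finish))
    return selected

acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(acts))   # [(1, 4), (5, 7), (8, 11)]
```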
Proof of optimality
Let $S=\{1,2,\ldots ,n\}$ be the set of activities ordered by finish time. Assume that $A\subseteq S$ is an optimal solution, also ordered by finish time; and that the index of the first activity in A is $k\neq 1$, i.e., this optimal solution does not start with the greedy choice. We will show that $B=(A\setminus \{k\})\cup \{1\}$, which begins with the greedy choice (activity 1), is another optimal solution. Since $f_{1}\leq f_{k}$, and the activities in A are disjoint by definition, the activities in B are also disjoint. Since B has the same number of activities as A, that is, $|A|=|B|$, B is also optimal.
Once the greedy choice is made, the problem reduces to finding an optimal solution for the subproblem. If A is an optimal solution to the original problem S containing the greedy choice, then $A^{\prime }=A\setminus \{1\}$ is an optimal solution to the activity-selection problem $S'=\{i\in S:s_{i}\geq f_{1}\}$.
Why? If this were not the case, pick a solution B′ to S′ with more activities than A′ containing the greedy choice for S′. Then, adding 1 to B′ would yield a feasible solution B to S with more activities than A, contradicting the optimality.
Weighted activity selection problem
The generalized version of the activity selection problem involves selecting an optimal set of non-overlapping activities such that the total weight is maximized. Unlike the unweighted version, there is no greedy solution to the weighted activity selection problem. However, a dynamic programming solution can readily be formed using the following approach:[1]
Consider an optimal solution containing activity k. We now have non-overlapping activities on the left and right of k. We can recursively find solutions for these two sets because of optimal sub-structure. As we don't know k, we can try each of the activities. This approach leads to an $O(n^{3})$ solution. This can be optimized further considering that for each set of activities in $(i,j)$, we can find the optimal solution if we had known the solution for $(i,t)$, where t is the last non-overlapping interval with j in $(i,j)$. This yields an $O(n^{2})$ solution. This can be further optimized considering the fact that we do not need to consider all ranges $(i,j)$ but instead just $(1,j)$. The following algorithm thus yields an $O(n\log n)$ solution:
Weighted-Activity-Selection(S): // S = list of activities
sort S by finish time
opt[0] = 0 // opt[j] represents optimal solution (sum of weights of selected activities) for S[1,2..,j]
for i = 1 to n:
t = binary search to find activity with finish time <= start time for i
// if there are more than one such activities, choose the one with last finish time
opt[i] = MAX(opt[i-1], opt[t] + w(i))
return opt[n]
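The same algorithm as a runnable sketch, using binary search over the sorted finish times (the activity data is illustrative):

```python
import bisect

# Sketch: O(n log n) weighted activity selection.
# Each activity is a (start, finish, weight) triple.
def weighted_activity_selection(activities):
    acts = sorted(activities, key=lambda a: a[1])      # sort by finish time
    finishes = [f for _, f, _ in acts]
    n = len(acts)
    opt = [0] * (n + 1)    # opt[j]: best total weight using the first j activities
    for i in range(1, n + 1):
        start, _, weight = acts[i - 1]
        # number of earlier activities whose finish time is <= this start time
        t = bisect.bisect_right(finishes, start, 0, i - 1)
        opt[i] = max(opt[i - 1], opt[t] + weight)
    return opt[n]

acts = [(1, 4, 2), (3, 5, 4), (0, 6, 4), (4, 7, 7), (3, 9, 2), (5, 10, 6)]
print(weighted_activity_selection(acts))   # 10, from (3,5,4) and (5,10,6)
```

Here `opt[t]` is the best solution over the `t` activities that finish no later than the current activity starts, which is exactly the recurrence described above.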
References
1. Dynamic Programming with introduction to Weighted Activity Selection
External links
• Activity Selection Problem
|
Behrend function
In algebraic geometry, the Behrend function of a scheme X, introduced by Kai Behrend, is a constructible function
$\nu _{X}:X\to \mathbb {Z} $
such that if X is a quasi-projective proper moduli scheme carrying a symmetric obstruction theory, then the weighted Euler characteristic
$\chi (X,\nu _{X})=\sum _{n\in \mathbb {Z} }n\,\chi (\{\nu _{X}=n\})$
is the degree of the virtual fundamental class
$[X]^{\text{vir}}$
of X, which is an element of the zeroth Chow group of X. Modulo some solvable technical difficulties (e.g., what is the Chow group of a stack?), the definition extends to moduli stacks such as the moduli stack of stable sheaves (the Donaldson–Thomas theory) or that of stable maps (the Gromov–Witten theory).
References
• Behrend, Kai (2009), "Donaldson–Thomas type invariants via microlocal geometry", Annals of Mathematics, 2nd Ser., 170 (3): 1307–1338, arXiv:math/0507523, doi:10.4007/annals.2009.170.1307, MR 2600874.
|
Weighted Voronoi diagram
In mathematics, a weighted Voronoi diagram in n dimensions is a generalization of a Voronoi diagram. The Voronoi cells in a weighted Voronoi diagram are defined in terms of a distance function. The distance function may specify the usual Euclidean distance, or may be some other, special distance function. In weighted Voronoi diagrams, each site has a weight that influences the distance computation. The idea is that larger weights indicate more important sites, and such sites will get bigger Voronoi cells.
In a multiplicatively weighted Voronoi diagram, the distance between a point and a site is divided by the (positive) weight of the site.[1] In the plane under the ordinary Euclidean distance, the multiplicatively weighted Voronoi diagram is also called circular Dirichlet tessellation[2][3] and its edges are circular arcs and straight line segments. A Voronoi cell may be non-convex, disconnected and may have holes. This diagram arises, e.g., as a model of crystal growth, where crystals from different points may grow with different speed. Since crystals may grow in empty space only and are continuous objects, a natural variation is the crystal Voronoi diagram, in which the cells are defined somewhat differently.
In an additively weighted Voronoi diagram, weights are subtracted from the distances. In the plane under the ordinary Euclidean distance this diagram is also known as the hyperbolic Dirichlet tessellation and its edges are arcs of hyperbolas and straight line segments.[1]
The power diagram is defined when weights are subtracted from the squared Euclidean distance. It can also be defined using the power distance defined from a set of circles.[4]
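A sketch contrasting the three distance functions for a query point (the sites and weights are made-up):

```python
import math

# Sketch: which site "owns" a point under different weighted distances.
sites = [((0.0, 0.0), 1.0), ((4.0, 0.0), 2.0)]   # ((x, y), weight)

def nearest(p, scheme):
    def dist(site):
        (sx, sy), w = site
        d = math.hypot(p[0] - sx, p[1] - sy)
        if scheme == "multiplicative":
            return d / w          # distance divided by the weight
        if scheme == "additive":
            return d - w          # weight subtracted from the distance
        if scheme == "power":
            return d * d - w      # weight subtracted from the squared distance
        raise ValueError(scheme)
    return min(range(len(sites)), key=lambda i: dist(sites[i]))

# (1.8, 0) is nearer the first site in plain distance, but the heavier
# second site captures it under multiplicative weighting.
print(nearest((1.8, 0.0), "multiplicative"))   # 1
```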
References
1. "Dictionary of distances", by Elena Deza and Michel Deza pp. 255, 256
2. Peter F. Ash and Ethan D. Bolker, "Generalized Dirichlet tessellations", Geometriae Dedicata, Volume 20, Number 2, pp. 209–243. doi:10.1007/BF00164401
3. Note: "Dirichlet tessellation" is a synonym for "Voronoi diagram".
4. Edelsbrunner, Herbert (1987), "13.6 Power Diagrams", Algorithms in Combinatorial Geometry, EATCS Monographs on Theoretical Computer Science, vol. 10, Springer-Verlag, pp. 327–328.
External links
• Adam Dobrin: A review of properties and variations of Voronoi diagrams
|
Weighted average return on assets
The weighted average return on assets, or WARA, is the collective rates of return on the various types of tangible and intangible assets of a company.
The presumption of a WARA is that each class of a company's asset base (such as manufacturing equipment, contracts, software, brand names, etc.) carries its own rate of return, each unique to the asset's underlying operational risk as well as its ability to attain debt and equity.[1]
Tangible assets, generally speaking, carry a lower rate of return due to two factors:
• Debt financing—tangible assets can be provided as collateral in attracting debt capital, which typically requires a lower rate of return than equity capital
• Stability of earnings—tangible assets tend to provide more certainty in expected earnings, which reduces risk to the financier of the asset
Intangible assets, in contrast, carry a higher rate of return because the same two factors work in reverse: they are harder to pledge as collateral, and their expected earnings are less certain.
Averaging these rates of returns, as a percentage of the total asset base, produces a WARA. In theory, the WARA should generate the same cost of capital as the Weighted average cost of capital, or WACC. The theory holds true because the operating entity is considered fundamentally equivalent to the combined assets of the company. Therefore, the measure of risks across each are equivalent. In the case of the operating entity, risk is measured against the WACC, while in the case of the combined assets, risk is measured by the WARA. Reconciliations between the two are typically required as a component of a Purchase price allocation in accordance with the Financial Accounting Standards Board's ("FASB") Statement of Financial Accounting Standards No. 141 “Business Combinations” (“SFAS 141”).[2]
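A sketch of the computation with made-up asset values and required rates of return (in practice these would come from a purchase price allocation):

```python
# Sketch: weighted average return on assets (illustrative numbers).
assets = [
    ("working capital",    30.0, 0.05),
    ("fixed assets",       40.0, 0.08),
    ("customer contracts", 20.0, 0.12),
    ("brand name",         10.0, 0.15),
]   # (asset class, fair value, required rate of return)

total_value = sum(value for _, value, _ in assets)
wara = sum(value * rate for _, value, rate in assets) / total_value

print(f"WARA = {wara:.2%}")   # should reconcile with the company's WACC
```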
References
1. Pratt, Shannon and Grabowski, Roger. "Cost of Capital: Applications and Examples." 3rd Edition. 2008: pp. 637-38.
2. "Summary of Statement No. 141: Business Combinations (Issued 6/01)". Archived from the original on 2009-01-29. Retrieved 2009-02-12.
|
Weighted catenary
A weighted catenary is a catenary curve, but of a special form. A "regular" catenary has the equation
$y=a\,\cosh \left({\frac {x}{a}}\right)={\frac {a\left(e^{\frac {x}{a}}+e^{-{\frac {x}{a}}}\right)}{2}}$
for a given value of a. A weighted catenary has the equation
$y=b\,\cosh \left({\frac {x}{a}}\right)={\frac {b\left(e^{\frac {x}{a}}+e^{-{\frac {x}{a}}}\right)}{2}}$
and now two constants enter: a and b.
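A minimal numerical sketch (with arbitrary values of a and b) showing that the weighted catenary reduces to the regular one when b = a:

```python
import math

# Sketch: evaluate a weighted catenary y = b * cosh(x / a).
def weighted_catenary(x, a, b):
    return b * math.cosh(x / a)

a, b = 2.0, 0.5
assert weighted_catenary(0.0, a, b) == b            # vertex height is b
assert weighted_catenary(1.0, a, b) == weighted_catenary(-1.0, a, b)  # symmetric

# With b == a the curve is the regular catenary y = a * cosh(x / a).
assert weighted_catenary(1.5, a, a) == a * math.cosh(1.5 / a)
```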
Significance
A catenary arch has a uniform thickness. However, if
1. the arch is not of uniform thickness,[1]
2. the arch supports more than its own weight,[2]
3. or if gravity varies,[3]
it becomes more complex. A weighted catenary is needed.
The aspect ratio of a weighted catenary (or any other curve) is the ratio of width to height of a rectangular frame enclosing the selected fragment of the curve, which in theory continues to infinity.[4][5]
Examples
The Gateway Arch in the American city of St. Louis (Missouri) is the most famous example of a weighted catenary.
Simple suspension bridges use weighted catenaries.[5]
References
1. Robert Osserman (February 2010). "Mathematics of the Gateway Arch" (PDF). Notices of the AMS.
2. "Re-review: Catenary and Parabola", accessed April 13, 2017.
3. MathOverflow: "Catenary curve under non-uniform gravitational field", accessed April 13, 2017.
4. WhatIs.com: "What is aspect ratio?", accessed April 13, 2017.
5. Robert Osserman (2010). "How the Gateway Arch Got its Shape" (PDF). Nexus Network Journal. Retrieved 13 April 2017.
External links and references
General links
• One general-interest link
On the Gateway arch
• Mathematics of the Gateway Arch
• On the Gateway Arch
• A weighted catenary graphed
|
Positional notation
Positional notation (or place-value notation, or positional numeral system) usually denotes the extension to any base of the Hindu–Arabic numeral system (or decimal system). More generally, a positional system is a numeral system in which the contribution of a digit to the value of a number is the value of the digit multiplied by a factor determined by the position of the digit. In early numeral systems, such as Roman numerals, a digit has only one value: I means one, X means ten and C a hundred (however, the value may be negated if placed before another digit). In modern positional systems, such as the decimal system, the position of the digit means that its value must be multiplied by some value: in 555, the three identical symbols represent five hundreds, five tens, and five units, respectively, due to their different positions in the digit string.
The Babylonian numeral system, base 60, was the first positional system to be developed, and its influence is present today in the way time and angles are counted in tallies related to 60, such as 60 minutes in an hour and 360 degrees in a circle. Today, the Hindu–Arabic numeral system (base ten) is the most commonly used system globally. However, the binary numeral system (base two) is used in almost all computers and electronic devices because it is easier to implement efficiently in electronic circuits.
Systems with negative base, complex base or negative digits have been described. Most of them do not require a minus sign for designating negative numbers.
The use of a radix point (decimal point in base ten), extends to include fractions and allows representing any real number with arbitrary accuracy. With positional notation, arithmetical computations are much simpler than with any older numeral system; this led to the rapid spread of the notation when it was introduced in western Europe.
History
Today, the base-10 (decimal) system, which is presumably motivated by counting with the ten fingers, is ubiquitous. Other bases have been used in the past, and some continue to be used today. For example, the Babylonian numeral system, credited as the first positional numeral system, was base-60. However, it lacked a real zero: initially zero was inferred only from context, and later, by about 700 BC, it came to be indicated by a "space" or a "punctuation symbol" (such as two slanted wedges) between numerals.[1] It was a placeholder rather than a true zero because it was not used alone or at the end of a number. Numbers like 2 and 120 (2×60) looked the same because the larger number lacked a final placeholder; only context could differentiate them.
The polymath Archimedes (ca. 287–212 BC) invented a decimal positional system in his Sand Reckoner which was based on 108[2] and later led the German mathematician Carl Friedrich Gauss to lament what heights science would have already reached in his days if Archimedes had fully realized the potential of his ingenious discovery.[3]
Before positional notation became standard, simple additive systems (sign-value notation) such as Roman numerals were used, and accountants in ancient Rome and during the Middle Ages used the abacus or stone counters to do arithmetic.[4]
Counting rods and most abacuses have been used to represent numbers in a positional numeral system. With counting rods or abacus to perform arithmetic operations, the writing of the starting, intermediate and final values of a calculation could easily be done with a simple additive system in each position or column. This approach required no memorization of tables (as does positional notation) and could produce practical results quickly.
The oldest extant positional notation system is either that of Chinese rod numerals, used from at least the early 8th century, or perhaps Khmer numerals, showing possible usages of positional-numbers in the 7th century. Khmer numerals and other Indian numerals originate with the Brahmi numerals of about the 3rd century BC, which symbols were, at the time, not used positionally. Medieval Indian numerals are positional, as are the derived Arabic numerals, recorded from the 10th century.
After the French Revolution (1789–1799), the new French government promoted the extension of the decimal system.[5] Some of those pro-decimal efforts—such as decimal time and the decimal calendar—were unsuccessful. Other French pro-decimal efforts—currency decimalisation and the metrication of weights and measures—spread widely out of France to almost the whole world.
History of positional fractions
Main article: Decimal
J. Lennart Berggren notes that positional decimal fractions were used for the first time by Arab mathematician Abu'l-Hasan al-Uqlidisi as early as the 10th century.[6] The Jewish mathematician Immanuel Bonfils used decimal fractions around 1350, but did not develop any notation to represent them.[7] The Persian mathematician Jamshīd al-Kāshī made the same discovery of decimal fractions in the 15th century.[6] Al Khwarizmi introduced fractions to Islamic countries in the early 9th century; his fraction presentation was similar to the traditional Chinese mathematical fractions from Sunzi Suanjing.[8] This form of fraction with numerator on top and denominator at bottom without a horizontal bar was also used by 10th century Abu'l-Hasan al-Uqlidisi and 15th century Jamshīd al-Kāshī's work "Arithmetic Key".[8][9]
The adoption of the decimal representation of numbers less than one, a fraction, is often credited to Simon Stevin through his textbook De Thiende;[10] but both Stevin and E. J. Dijksterhuis indicate that Regiomontanus contributed to the European adoption of general decimals:[11]
European mathematicians, when taking over from the Hindus, via the Arabs, the idea of positional value for integers, neglected to extend this idea to fractions. For some centuries they confined themselves to using common and sexagesimal fractions... This half-heartedness has never been completely overcome, and sexagesimal fractions still form the basis of our trigonometry, astronomy and measurement of time. ¶ ... Mathematicians sought to avoid fractions by taking the radius R equal to a number of units of length of the form 10n and then assuming for n so great an integral value that all occurring quantities could be expressed with sufficient accuracy by integers. ¶ The first to apply this method was the German astronomer Regiomontanus. To the extent that he expressed goniometrical line-segments in a unit R/10n, Regiomontanus may be called an anticipator of the doctrine of decimal positional fractions.[11]: 17, 18
In the estimation of Dijksterhuis, "after the publication of De Thiende only a small advance was required to establish the complete system of decimal positional fractions, and this step was taken promptly by a number of writers ... next to Stevin the most important figure in this development was Regiomontanus." Dijksterhuis noted that [Stevin] "gives full credit to Regiomontanus for his prior contribution, saying that the trigonometric tables of the German astronomer actually contain the whole theory of 'numbers of the tenth progress'."[11]: 19
Mathematics
Base of the numeral system
In mathematical numeral systems the radix r is usually the number of unique digits, including zero, that a positional numeral system uses to represent numbers. In some cases, such as with a negative base, the radix is the absolute value $r=|b|$ of the base b. For example, for the decimal system the radix (and base) is ten, because it uses the ten digits from 0 through 9. When a number "hits" 9, the next number will not be another different symbol, but a "1" followed by a "0". In binary, the radix is two, since after it hits "1", instead of "2" or another written symbol, it jumps straight to "10", followed by "11" and "100".
The highest symbol of a positional numeral system usually has the value one less than the value of the radix of that numeral system. The standard positional numeral systems differ from one another only in the base they use.
The radix is an integer that is greater than 1, since a radix of zero would not have any digits, and a radix of 1 would only have the zero digit. Negative bases are rarely used. In a system with more than $|b|$ unique digits, numbers may have many different possible representations.
The radix must be finite, from which it follows that the number of available digits is finite. Otherwise, the length of a numeral would not necessarily be logarithmic in its size.
(In certain non-standard positional numeral systems, including bijective numeration, the definition of the base or the allowed digits deviates from the above.)
In standard base-ten (decimal) positional notation, there are ten decimal digits and the number
$5305_{\mathrm {dec} }=(5\times 10^{3})+(3\times 10^{2})+(0\times 10^{1})+(5\times 10^{0})$.
In standard base-sixteen (hexadecimal), there are the sixteen hexadecimal digits (0–9 and A–F) and the number
$14\mathrm {B} 9_{\mathrm {hex} }=(1\times 16^{3})+(4\times 16^{2})+(\mathrm {B} \times 16^{1})+(9\times 16^{0})\qquad (=5305_{\mathrm {dec} }),$
where B represents the number eleven as a single symbol.
In general, in base-b, there are b digits $\{d_{1},d_{2},\dotsb ,d_{b}\}=:D$ and the number
$(a_{3}a_{2}a_{1}a_{0})_{b}=(a_{3}\times b^{3})+(a_{2}\times b^{2})+(a_{1}\times b^{1})+(a_{0}\times b^{0})$
has $\forall k\colon a_{k}\in D.$ Note that $a_{3}a_{2}a_{1}a_{0}$ represents a sequence of digits, not multiplication.
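The expansion above can be evaluated mechanically. The following is a minimal Python sketch (the helper name from_digits is ours, not part of the text): it folds the digit values, most significant first, into a single integer, checking each digit against the base.

```python
def from_digits(digits, base):
    """Evaluate a list of digit values (most significant first) in the given base."""
    value = 0
    for d in digits:
        if not 0 <= d < base:
            raise ValueError(f"digit {d} out of range for base {base}")
        value = value * base + d
    return value

print(from_digits([5, 3, 0, 5], 10))   # 5305
print(from_digits([1, 4, 11, 9], 16))  # 5305, i.e. the hexadecimal numeral 14B9
```

Note that the loop computes $(((a_{3}b+a_{2})b+a_{1})b+a_{0})$, which is the same sum of powers written above.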
Notation
When describing base in mathematical notation, the letter b is generally used as a symbol for this concept, so, for a binary system, b equals 2. Another common way of expressing the base is writing it as a decimal subscript after the number that is being represented (this notation is used in this article). 1111011₂ implies that the number 1111011 is a base-2 number, equal to 123₁₀ (a decimal notation representation), 173₈ (octal) and 7B₁₆ (hexadecimal). In books and articles, when the base is initially written out or abbreviated, it is often not printed with subsequent numbers: it is assumed that binary 1111011 is the same as 1111011₂.
The base b may also be indicated by the phrase "base-b". So binary numbers are "base-2"; octal numbers are "base-8"; decimal numbers are "base-10"; and so on.
To a given radix b the set of digits {0, 1, ..., b−2, b−1} is called the standard set of digits. Thus, binary numbers have digits {0, 1}; decimal numbers have digits {0, 1, 2, ..., 8, 9}; and so on. Therefore, the following are notational errors: 52₂, 2₂, 1A₉. (In all cases, one or more digits is not in the set of allowed digits for the given base.)
Exponentiation
Positional numeral systems work using exponentiation of the base. A digit's value is the digit multiplied by the value of its place. Place values are the number of the base raised to the nth power, where n is the number of other digits between a given digit and the radix point. If a given digit is on the left hand side of the radix point (i.e. its value is an integer) then n is positive or zero; if the digit is on the right hand side of the radix point (i.e., its value is fractional) then n is negative.
As an example of usage, the number 465 in its respective base b (which must be at least base 7 because the highest digit in it is 6) is equal to:
$4\times b^{2}+6\times b^{1}+5\times b^{0}$
If the number 465 was in base-10, then it would equal:
$4\times 10^{2}+6\times 10^{1}+5\times 10^{0}=4\times 100+6\times 10+5\times 1=465$
(465₁₀ = 465₁₀)
If however, the number were in base 7, then it would equal:
$4\times 7^{2}+6\times 7^{1}+5\times 7^{0}=4\times 49+6\times 7+5\times 1=243$
(465₇ = 243₁₀)
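The two evaluations above can be checked with a short Python sketch (the helper value_in_base is our own name). It interprets the digit string "465" in an arbitrary base, and also computes the smallest base in which the numeral is valid (one more than its highest digit):

```python
def value_in_base(numeral, base):
    """Interpret a string of decimal digit characters in the given base."""
    result = 0
    for ch in numeral:
        d = int(ch)
        if d >= base:
            raise ValueError(f"digit {d} is not allowed in base {base}")
        result = result * base + d
    return result

min_base = max(int(ch) for ch in "465") + 1
print(min_base)                   # 7: the highest digit is 6
print(value_in_base("465", 10))   # 465
print(value_in_base("465", 7))    # 243
```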
$10_{b}=b$ for any base b, since $10_{b}=1\times b^{1}+0\times b^{0}$. For example, $10_{2}=2$; $10_{3}=3$; $10_{16}=16_{10}$. Note that the last "16" is indicated to be in base 10. The base makes no difference for one-digit numerals.
This concept can be demonstrated using a diagram. One object represents one unit. When the number of objects is equal to or greater than the base b, then a group of objects is created with b objects. When the number of these groups exceeds b, then a group of these groups of objects is created with b groups of b objects; and so on. Thus the same number in different bases will have different values:
241 in base 5:
2 groups of 52 (25) 4 groups of 5 1 group of 1
ooooo ooooo
ooooo ooooo ooooo ooooo
ooooo ooooo + + o
ooooo ooooo ooooo ooooo
ooooo ooooo
241 in base 8:
2 groups of 82 (64) 4 groups of 8 1 group of 1
oooooooo oooooooo
oooooooo oooooooo
oooooooo oooooooo oooooooo oooooooo
oooooooo oooooooo + + o
oooooooo oooooooo
oooooooo oooooooo oooooooo oooooooo
oooooooo oooooooo
oooooooo oooooooo
The notation can be further augmented by allowing a leading minus sign. This allows the representation of negative numbers. For a given base, every representation corresponds to exactly one real number and every real number has at least one representation. The representations of rational numbers are those representations that are finite, use the bar notation, or end with an infinitely repeating cycle of digits.
Digits and numerals
A digit is a symbol that is used for positional notation, and a numeral consists of one or more digits used for representing a number with positional notation. Today's most common digits are the decimal digits "0", "1", "2", "3", "4", "5", "6", "7", "8", and "9". The distinction between a digit and a numeral is most pronounced in the context of a number base.
A non-zero numeral with more than one digit position will mean a different number in a different number base, but in general, the digits will mean the same.[12] For example, the base-8 numeral 23₈ contains two digits, "2" and "3", together with the base number "8" written as a subscript. When converted to base-10, the 23₈ is equivalent to 19₁₀, i.e. 23₈ = 19₁₀. In our notation here, the subscript "8" of the numeral 23₈ is part of the numeral, but this may not always be the case.
Imagine the numeral "23" as having an ambiguous base number. Then "23" could likely be any base, from base-4 up. In base-4, the "23" means 11₁₀, i.e. 23₄ = 11₁₀. In base-60, the "23" means the number 123₁₀, i.e. 23₆₀ = 123₁₀. The numeral "23" then, in this case, corresponds to the set of base-10 numbers {11, 13, 15, 17, 19, 21, 23, ..., 121, 123} while its digits "2" and "3" always retain their original meaning: the "2" means "two of", and the "3" means "three of".
In certain applications when a numeral with a fixed number of positions needs to represent a greater number, a higher number-base with more digits per position can be used. A three-digit, decimal numeral can represent only up to 999. But if the number-base is increased to 11, say, by adding the digit "A", then the same three positions, maximized to "AAA", can represent a number as great as 1330. We could increase the number base again and assign "B" to 11, and so on (but there is also a possible encryption between number and digit in the number-digit-numeral hierarchy). A three-digit numeral "ZZZ" in base-60 could mean 215999. If we use the entire collection of our alphanumerics we could ultimately serve a base-62 numeral system, but we remove two digits, uppercase "I" and uppercase "O", to reduce confusion with digits "1" and "0".[13] We are left with a base-60, or sexagesimal numeral system utilizing 60 of the 62 standard alphanumerics. (But see Sexagesimal system below.) In general, the number of possible values that can be represented by a $d$ digit number in base $r$ is $r^{d}$.
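The alphanumeric scheme described above can be made concrete with a Python sketch. The 60-symbol alphabet below (the ten digits, then the upper- and lowercase letters with uppercase "I" and "O" removed) is one plausible reading of the text, not a standard; the helper name encode is ours.

```python
import string

# One possible base-60 alphabet: 0-9, then A-Z and a-z minus uppercase "I" and "O".
ALPHABET = string.digits + "".join(
    c for c in string.ascii_uppercase + string.ascii_lowercase if c not in "IO")

def encode(n, alphabet=ALPHABET):
    """Render a non-negative integer in the base given by the alphabet's length."""
    base = len(alphabet)
    if n == 0:
        return alphabet[0]
    out = []
    while n:
        n, r = divmod(n, base)
        out.append(alphabet[r])
    return "".join(reversed(out))

print(len(ALPHABET))   # 60
print(encode(215999))  # "zzz": the largest three-symbol numeral, 60**3 - 1
```

Note that under this particular alphabet the digit with value 59 is lowercase "z", so the largest three-symbol numeral renders as "zzz" rather than "ZZZ".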
The common numeral systems in computer science are binary (radix 2), octal (radix 8), and hexadecimal (radix 16). In binary only digits "0" and "1" are in the numerals. The octal numerals are the eight digits 0–7. Hex is 0–9 A–F, where the ten numerics retain their usual meaning, and the alphabetics correspond to values 10–15, for a total of sixteen digits. The numeral "10" denotes the number two in binary, the number eight in octal, and the number sixteen in hexadecimal.
Radix point
Main article: Radix point
The notation can be extended into the negative exponents of the base b. The so-called radix point, mostly ».«, is used as the separator between the positions with non-negative exponents and those with negative exponents.
Numbers that are not integers use places beyond the radix point. For every position behind this point (and thus after the units digit), the exponent n of the power bn decreases by 1 and the power approaches 0. For example, the number 2.35 is equal to:
$2\times 10^{0}+3\times 10^{-1}+5\times 10^{-2}$
Sign
Main article: Sign (mathematics)
If the base and all the digits in the set of digits are non-negative, negative numbers cannot be expressed. To overcome this, a minus sign, here »-«, is added to the numeral system. In the usual notation it is prepended to the string of digits representing the otherwise non-negative number.
Base conversion
The conversion to a base $b_{2}$ of an integer n represented in base $b_{1}$ can be done by a succession of Euclidean divisions by $b_{2}:$ the right-most digit in base $b_{2}$ is the remainder of the division of n by $b_{2};$ the second right-most digit is the remainder of the division of the quotient by $b_{2},$ and so on. The left-most digit is the last quotient. In general, the kth digit from the right is the remainder of the division by $b_{2}$ of the (k−1)th quotient.
For example: converting A10BHex to decimal (41227):
0xA10B/10 = 0x101A R: 7 (ones place)
0x101A/10 = 0x19C R: 2 (tens place)
0x19C/10 = 0x29 R: 2 (hundreds place)
0x29/10 = 0x4 R: 1 (thousands place)
0x4 (ten-thousands place; the final quotient is the left-most digit)
When converting to a larger base (such as from binary to decimal), each remainder is a value less than $b_{2}$ written with the digits of $b_{1}$, and is read off as a single digit of base $b_{2}$. For example: converting 0b11111001 (binary) to 249 (decimal):
0b11111001/10 = 0b11000 R: 0b1001 (0b1001 = "9" for ones place)
0b11000/10 = 0b10 R: 0b100 (0b100 = "4" for tens)
0b10/10 = 0b0 R: 0b10 (0b10 = "2" for hundreds)
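The repeated-division procedure of the two examples above can be sketched in a few lines of Python (the function name convert_integer is ours; Python's built-in integers stand in for arithmetic in the source base):

```python
def convert_integer(n, base, digits="0123456789ABCDEF"):
    """Convert a non-negative integer to a numeral string in the given base
    by repeated Euclidean division; the remainders are the digits, right to left."""
    if n == 0:
        return digits[0]
    out = []
    while n:
        n, r = divmod(n, base)   # r is the next digit, right to left
        out.append(digits[r])
    return "".join(reversed(out))

print(convert_integer(0xA10B, 10))      # "41227"
print(convert_integer(0b11111001, 10))  # "249"
print(convert_integer(41227, 16))       # "A10B"
```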
For the fractional part, conversion can be done by taking the digits after the radix point (the numerator) and dividing them by the implied denominator in the target radix. Approximation may be needed, since the expansion does not terminate when the reduced fraction's denominator has a prime factor that is not among the prime factors of the target base. For example, 0.1 in decimal (1/10) is 0b1/0b1010 in binary; dividing this in that radix gives 0b0.00011 (truncated from the repeating expansion 0b0.000110011...), because one of the prime factors of 10 is 5, which is not a factor of 2. For more general fractions and bases see the algorithm for positive bases.
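The fractional procedure can be sketched as well. The helper below (convert_fraction is our name) repeatedly multiplies by the target base and takes integer parts, truncating after a fixed number of places since the expansion may not terminate:

```python
def convert_fraction(num, den, base, places):
    """First `places` digits after the radix point of num/den in the given base."""
    digits = []
    for _ in range(places):
        num *= base                # bring the next digit in front of the radix point
        d, num = divmod(num, den)  # the integer part is that digit
        digits.append(d)
    return digits

# 0.1 in decimal, to 8 binary places: the repeating expansion 0.000110011...
print(convert_fraction(1, 10, 2, 8))  # [0, 0, 0, 1, 1, 0, 0, 1]
print(convert_fraction(1, 2, 10, 3))  # [5, 0, 0], i.e. 0.500
```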
In practice, Horner's method is more efficient than the repeated division required above.[14] A number in positional notation can be thought of as a polynomial, where each digit is a coefficient. Coefficients can be larger than one digit, so an efficient way to convert bases is to convert each digit, then evaluate the polynomial via Horner's method within the target base. Converting each digit is a simple lookup table, removing the need for expensive division or modulus operations; and when the base is a power of two, the multiplication by x reduces to bit-shifting. However, other polynomial evaluation algorithms would work as well, like repeated squaring for single or sparse digits. Example:
Convert 0xA10B to 41227
A10B = (10*16^3) + (1*16^2) + (0*16^1) + (11*16^0)
Lookup table:
0x0 = 0
0x1 = 1
...
0x9 = 9
0xA = 10
0xB = 11
0xC = 12
0xD = 13
0xE = 14
0xF = 15
Therefore 0xA10B's decimal digits are 10, 1, 0, and 11.
Lay out the digits like this. The most significant digit (10) is "dropped":
10 1 0 11 <- Digits of 0xA10B
---------------
10
Then we multiply the bottom number by the source base (16), place the product under the next digit of the source value, and add:
10 1 0 11
160
---------------
10 161
Repeat until the final addition is performed:
10 1 0 11
160 2576 41216
---------------
10 161 2576 41227
and that is 41227 in decimal.
Convert 0b11111001 to 249
Lookup table:
0b0 = 0
0b1 = 1
Result:
1 1 1 1 1 0 0 1 <- Digits of 0b11111001
2 6 14 30 62 124 248
-------------------------
1 3 7 15 31 62 124 249
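The Horner evaluations laid out above can be sketched in Python (the names HEX_DIGITS and horner_convert are ours; Python's native integers play the role of "arithmetic in the target base"):

```python
HEX_DIGITS = {c: i for i, c in enumerate("0123456789ABCDEF")}  # the lookup table

def horner_convert(numeral, source_base, lookup=HEX_DIGITS):
    """Evaluate the numeral as a polynomial in source_base via Horner's method:
    multiply the accumulator by the base, then add the next digit."""
    acc = 0
    for ch in numeral:
        acc = acc * source_base + lookup[ch]
    return acc

print(horner_convert("A10B", 16))     # 41227
print(horner_convert("11111001", 2))  # 249
```

The running accumulator takes exactly the values shown on the bottom rows of the worked examples (10, 161, 2576, 41227 for the hexadecimal case).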
Terminating fractions
The numbers which have a finite representation form the semiring
${\frac {\mathbb {N} _{0}}{b^{\mathbb {N} _{0}}}}:=\left\{mb^{-\nu }\mid m\in \mathbb {N} _{0}\wedge \nu \in \mathbb {N} _{0}\right\}.$
More explicitly, if $p_{1}^{\nu _{1}}\cdot \ldots \cdot p_{n}^{\nu _{n}}:=b$ is a factorization of $b$ into the primes $p_{1},\ldots ,p_{n}\in \mathbb {P} $ with exponents $\nu _{1},\ldots ,\nu _{n}\in \mathbb {N} $,[15] then with the non-empty set of denominators $S:=\{p_{1},\ldots ,p_{n}\}$ we have
$\mathbb {Z} _{S}:=\left\{x\in \mathbb {Q} \left|\,\exists \mu _{i}\in \mathbb {Z} :x\prod _{i=1}^{n}{p_{i}}^{\mu _{i}}\in \mathbb {Z} \right.\right\}=b^{\mathbb {Z} }\,\mathbb {Z} ={\langle S\rangle }^{-1}\mathbb {Z} $
where $\langle S\rangle $ is the group generated by the $p\in S$ and ${\langle S\rangle }^{-1}\mathbb {Z} $ is the so-called localization of $\mathbb {Z} $ with respect to $S$.
When reduced to lowest terms, the denominator of an element of $\mathbb {Z} _{S}$ contains only prime factors from $S$. This ring of all terminating fractions to base $b$ is dense in the field of rational numbers $\mathbb {Q} $. Its completion for the usual (Archimedean) metric is the same as for $\mathbb {Q} $, namely the real numbers $\mathbb {R} $. So, if $S=\{p\}$ then $\mathbb {Z} _{\{p\}}$ is not to be confused with $\mathbb {Z} _{(p)}$, the discrete valuation ring for the prime $p$, which is equal to $\mathbb {Z} _{T}$ with $T=\mathbb {P} \setminus \{p\}$.
If $b$ divides $c$, we have $b^{\mathbb {Z} }\,\mathbb {Z} \subseteq c^{\mathbb {Z} }\,\mathbb {Z} .$
Rational numbers
The representation of non-integers can be extended to allow an infinite string of digits beyond the point. For example, 1.12112111211112 ... base-3 represents the sum of the infinite series:
${\begin{array}{l}1\times 3^{0\,\,\,}+{}\\1\times 3^{-1\,\,}+2\times 3^{-2\,\,\,}+{}\\1\times 3^{-3\,\,}+1\times 3^{-4\,\,\,}+2\times 3^{-5\,\,\,}+{}\\1\times 3^{-6\,\,}+1\times 3^{-7\,\,\,}+1\times 3^{-8\,\,\,}+2\times 3^{-9\,\,\,}+{}\\1\times 3^{-10}+1\times 3^{-11}+1\times 3^{-12}+1\times 3^{-13}+2\times 3^{-14}+\cdots \end{array}}$
Since a complete infinite string of digits cannot be explicitly written, the trailing ellipsis (...) designates the omitted digits, which may or may not follow a pattern of some kind. One common pattern is when a finite sequence of digits repeats infinitely. This is designated by drawing a vinculum across the repeating block:
$2.42{\overline {314}}_{5}=2.42314314314314314\dots _{5}$
This is the repeating decimal notation (for which no single universally accepted notation or phrasing exists). For base 10 it is called a repeating decimal or recurring decimal.
An irrational number has an infinite non-repeating representation in all integer bases. Whether a rational number has a finite representation or requires an infinite repeating representation depends on the base. For example, one third can be represented by:
$0.1_{3}$
$0.{\overline {3}}_{10}=0.3333333\dots _{10}$
or, with the base implied:
$0.{\overline {3}}=0.3333333\dots $ (see also 0.999...)
$0.{\overline {01}}_{2}=0.010101\dots _{2}$
$0.2_{6}$
For integers p and q with gcd (p, q) = 1, the fraction p/q has a finite representation in base b if and only if each prime factor of q is also a prime factor of b.
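This criterion is easy to test mechanically: reduce the fraction, then strip from q every prime factor it shares with b, and check whether anything remains. A Python sketch (the function name terminates is ours):

```python
from math import gcd

def terminates(p, q, b):
    """True iff p/q has a finite representation in base b, i.e. every prime
    factor of the reduced denominator is also a prime factor of b."""
    q //= gcd(p, q)      # reduce the fraction to lowest terms
    g = gcd(q, b)
    while g > 1:         # repeatedly remove prime factors shared with the base
        q //= g
        g = gcd(q, b)
    return q == 1

print(terminates(1, 3, 10))  # False: 0.333... repeats in decimal
print(terminates(1, 3, 3))   # True:  0.1 in base 3
print(terminates(1, 10, 2))  # False: 0.1 has no finite binary representation
print(terminates(3, 8, 10))  # True:  3/8 = 0.375
```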
For a given base, any number that can be represented by a finite number of digits (without using the bar notation) will have multiple representations, including one or two infinite representations:
1. A finite or infinite number of zeroes can be appended:
$3.46_{7}=3.460_{7}=3.460000_{7}=3.46{\overline {0}}_{7}$
2. The last non-zero digit can be reduced by one, and an infinite string of digits, each corresponding to one less than the base, is appended (or replaces any following zero digits):
$3.46_{7}=3.45{\overline {6}}_{7}$
$1_{10}=0.{\overline {9}}_{10}\qquad $ (see also 0.999...)
$220_{5}=214.{\overline {4}}_{5}$
Irrational numbers
Main article: irrational number
A (real) irrational number has an infinite non-repeating representation in all integer bases.
Examples are the irrational nth roots
$y={\sqrt[{n}]{x}}$
with $y^{n}=x$ and y ∉ Q, numbers which are called algebraic, or numbers like
$\pi ,e$
which are transcendental. The transcendental numbers are uncountable, and the only way to write one down with a finite number of symbols is to give it a symbol or a finite sequence of symbols.
Applications
Decimal system
Main article: Decimal representation
In the decimal (base-10) Hindu–Arabic numeral system, each position starting from the right is a higher power of 10. The first position represents 100 (1), the second position 101 (10), the third position 102 (10 × 10 or 100), the fourth position 103 (10 × 10 × 10 or 1000), and so on.
Fractional values are indicated by a separator, which can vary in different locations. Usually this separator is a period or full stop, or a comma. Digits to the right of it are multiplied by 10 raised to a negative power or exponent. The first position to the right of the separator indicates 10−1 (0.1), the second position 10−2 (0.01), and so on for each successive position.
As an example, the number 2674 in a base-10 numeral system is:
(2 × 103) + (6 × 102) + (7 × 101) + (4 × 100)
or
(2 × 1000) + (6 × 100) + (7 × 10) + (4 × 1).
Sexagesimal system
The sexagesimal or base-60 system was used for the integral and fractional portions of Babylonian numerals and other Mesopotamian systems, by Hellenistic astronomers using Greek numerals for the fractional portion only, and is still used for modern time and angles, but only for minutes and seconds. However, not all of these uses were positional.
Modern time separates each position by a colon or a prime symbol. For example, the time might be 10:25:59 (10 hours 25 minutes 59 seconds). Angles use similar notation. For example, an angle might be 10°25′59″ (10 degrees 25 minutes 59 seconds). In both cases, only minutes and seconds use sexagesimal notation—angular degrees can be larger than 59 (one rotation around a circle is 360°, two rotations are 720°, etc.), and both time and angles use decimal fractions of a second. This contrasts with the numbers used by Hellenistic and Renaissance astronomers, who used thirds, fourths, etc. for finer increments. Where we might write 10°25′59.392″, they would have written 10°25′59″23‴31⁗12′′′′′ or 10°25i59ii23iii31iv12v.
Using a digit set with upper- and lowercase letters allows short notation for sexagesimal numbers, e.g. 10:25:59 becomes 'ARz' (by omitting I and O, but not i and o), which is useful for use in URLs, etc., but it is not very intelligible to humans.
In the 1930s, Otto Neugebauer introduced a modern notational system for Babylonian and Hellenistic numbers that substitutes modern decimal notation from 0 to 59 in each position, while using a semicolon (;) to separate the integral and fractional portions of the number and using a comma (,) to separate the positions within each portion.[16] For example, the mean synodic month used by both Babylonian and Hellenistic astronomers and still used in the Hebrew calendar is 29;31,50,8,20 days, and the angle used in the example above would be written 10;25,59,23,31,12 degrees.
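Neugebauer's notation is straightforward to evaluate mechanically. The following Python sketch (the parser sexagesimal is our own helper, using exact rational arithmetic) splits on the semicolon and commas as described above:

```python
from fractions import Fraction

def sexagesimal(text):
    """Value of a numeral in Neugebauer's notation: ';' separates the integral
    and fractional portions, ',' separates the sexagesimal positions."""
    whole, _, frac = text.partition(";")
    value = Fraction(0)
    for place in whole.split(","):
        value = value * 60 + int(place)
    scale = Fraction(1)
    for place in frac.split(",") if frac else []:
        scale /= 60
        value += int(place) * scale
    return value

print(float(sexagesimal("29;31,50,8,20")))      # ~29.530594, the mean synodic month in days
print(float(sexagesimal("10;25,59,23,31,12")))  # ~10.433164 degrees
```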
Computing
In computing, the binary (base-2), octal (base-8) and hexadecimal (base-16) bases are most commonly used. Computers, at the most basic level, deal only with sequences of conventional zeroes and ones, so it is easier in this sense to deal with powers of two. The hexadecimal system is used as "shorthand" for binary—every 4 binary digits (bits) relate to one and only one hexadecimal digit. In hexadecimal, the six digits after 9 are denoted by A, B, C, D, E, and F (and sometimes a, b, c, d, e, and f).
The octal numbering system is also used as another way to represent binary numbers. In this case the base is 8 and therefore only digits 0, 1, 2, 3, 4, 5, 6, and 7 are used. When converting from binary to octal every 3 bits relate to one and only one octal digit.
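The digit-grouping correspondence described above works because 8 and 16 are powers of 2. A small Python sketch of it (the helper bits_to_base is ours; it left-pads the bit string so it splits evenly):

```python
def bits_to_base(bits, group):
    """Read a bit string in groups of `group` bits from the right:
    3 bits per octal digit, 4 bits per hexadecimal digit."""
    digit_chars = "0123456789ABCDEF"
    pad = (-len(bits)) % group          # left-pad with zeros to a full group
    bits = "0" * pad + bits
    return "".join(digit_chars[int(bits[i:i + group], 2)]
                   for i in range(0, len(bits), group))

print(bits_to_base("11111001", 4))  # "F9"  (hexadecimal)
print(bits_to_base("11111001", 3))  # "371" (octal)
```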
Hexadecimal, decimal, octal, and a wide variety of other bases have been used for binary-to-text encoding, implementations of arbitrary-precision arithmetic, and other applications.
For a list of bases and their applications, see list of numeral systems.
Other bases in human language
Base-12 systems (duodecimal or dozenal) have been popular because multiplication and division are easier than in base-10, with addition and subtraction being just as easy. Twelve is a useful base because it has many factors. It is the smallest common multiple of one, two, three, four and six. There is still a special word for "dozen" in English, and by analogy with the word for 102, hundred, commerce developed a word for 122, gross. The standard 12-hour clock and common use of 12 in English units emphasize the utility of the base. In addition, prior to its conversion to decimal, the old British currency Pound Sterling (GBP) partially used base-12; there were 12 pence (d) in a shilling (s), 20 shillings in a pound (£), and therefore 240 pence in a pound. Hence the term LSD or, more properly, £sd.
The Maya civilization and other civilizations of pre-Columbian Mesoamerica used base-20 (vigesimal), as did several North American tribes (two being in southern California). Evidence of base-20 counting systems is also found in the languages of central and western Africa.
Remnants of a Gaulish base-20 system also exist in French, as seen today in the names of the numbers from 60 through 99. For example, sixty-five is soixante-cinq (literally, "sixty [and] five"), while seventy-five is soixante-quinze (literally, "sixty [and] fifteen"). Furthermore, for any number between 80 and 99, the "tens-column" number is expressed as a multiple of twenty. For example, eighty-two is quatre-vingt-deux (literally, four twenty[s] [and] two), while ninety-two is quatre-vingt-douze (literally, four twenty[s] [and] twelve). In Old French, forty was expressed as two twenties and sixty was three twenties, so that fifty-three was expressed as two twenties [and] thirteen, and so on.
In English the same base-20 counting appears in the use of "scores". Although mostly historical, it is occasionally used colloquially. Verse 10 of Psalm 90 in the King James Version of the Bible starts: "The days of our years are threescore years and ten; and if by reason of strength they be fourscore years, yet is their strength labour and sorrow". The Gettysburg Address starts: "Four score and seven years ago".
The Irish language also used base-20 in the past, twenty being fichid, forty dhá fhichid, sixty trí fhichid and eighty ceithre fhichid. A remnant of this system may be seen in the modern word for 40, daichead.
The Welsh language continues to use a base-20 counting system, particularly for the age of people, dates and in common phrases. 15 is also important, with 16–19 being "one on 15", "two on 15" etc. 18 is normally "two nines". A decimal system is commonly used.
The Inuit languages use a base-20 counting system. Students from Kaktovik, Alaska invented a base-20 numeral system in 1994.[17]
Danish numerals display a similar base-20 structure.
The Māori language of New Zealand also has evidence of an underlying base-20 system as seen in the terms Te Hokowhitu a Tu referring to a war party (literally "the seven 20s of Tu") and Tama-hokotahi, referring to a great warrior ("the one man equal to 20").
The binary system was used in the Egyptian Old Kingdom, 3000 BC to 2050 BC. It was cursive by rounding off rational numbers smaller than 1 to 1/2 + 1/4 + 1/8 + 1/16 + 1/32 + 1/64, with a 1/64 term thrown away (the system was called the Eye of Horus).
A number of Australian Aboriginal languages employ binary or binary-like counting systems. For example, in Kala Lagaw Ya, the numbers one through six are urapon, ukasar, ukasar-urapon, ukasar-ukasar, ukasar-ukasar-urapon, ukasar-ukasar-ukasar.
North and Central American natives used base-4 (quaternary) to represent the four cardinal directions. Mesoamericans tended to add a second base-5 system to create a modified base-20 system.
A base-5 system (quinary) has been used in many cultures for counting. Plainly it is based on the number of digits on a human hand. It may also be regarded as a sub-base of other bases, such as base-10, base-20, and base-60.
A base-8 system (octal) was devised by the Yuki tribe of Northern California, who used the spaces between the fingers to count, corresponding to the digits one through eight.[18] There is also linguistic evidence which suggests that the Bronze Age Proto-Indo Europeans (from whom most European and Indic languages descend) might have replaced a base-8 system (or a system which could only count up to 8) with a base-10 system. The evidence is that the word for 9, newm, is suggested by some to derive from the word for "new", newo-, suggesting that the number 9 had been recently invented and called the "new number".[19]
Many ancient counting systems use five as a primary base, almost surely coming from the number of fingers on a person's hand. Often these systems are supplemented with a secondary base, sometimes ten, sometimes twenty. In some African languages the word for five is the same as "hand" or "fist" (Dyola language of Guinea-Bissau, Banda language of Central Africa). Counting continues by adding 1, 2, 3, or 4 to combinations of 5, until the secondary base is reached. In the case of twenty, this word often means "man complete". This system is referred to as quinquavigesimal. It is found in many languages of the Sudan region.
The Telefol language, spoken in Papua New Guinea, is notable for possessing a base-27 numeral system.
Non-standard positional numeral systems
Main article: Non-standard positional numeral systems
Interesting properties exist when the base is not fixed or positive and when the digit symbol sets denote negative values. There are many more variations. These systems are of practical and theoretical value to computer scientists.
Balanced ternary[20] uses a base of 3 but the digit set is {1̄, 0, 1} instead of {0, 1, 2}. The "1̄" has an equivalent value of −1. The negation of a number is easily formed by switching the bars on the 1s. This system can be used to solve the balance problem, which requires finding a minimal set of known counter-weights to determine an unknown weight. Weights of 1, 3, 9, ..., 3ⁿ known units can be used to determine any unknown weight up to 1 + 3 + ... + 3ⁿ units. A weight can be used on either side of the balance or not at all. Weights used on the balance pan with the unknown weight are designated with 1̄, with 1 if used on the empty pan, and with 0 if not used. If an unknown weight W is balanced with 3 (3¹) on its pan and 1 and 27 (3⁰ and 3³) on the other, then its weight in decimal is 25, or 101̄1 in balanced base-3.
101̄1₃ = 1 × 3³ + 0 × 3² − 1 × 3¹ + 1 × 3⁰ = 25.
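Conversion to balanced ternary can be sketched with ordinary division by 3, turning any remainder of 2 into −1 with a carry into the next place. In the Python sketch below (the function name is ours), −1 stands for the barred digit:

```python
def to_balanced_ternary(n):
    """Balanced-ternary digits of an integer, most significant first;
    -1 represents the barred digit."""
    if n == 0:
        return [0]
    digits = []
    while n:
        n, r = divmod(n, 3)
        if r == 2:       # a remainder of 2 becomes -1, carrying 1 upward
            r = -1
            n += 1
        digits.append(r)
    return digits[::-1]

print(to_balanced_ternary(25))  # [1, 0, -1, 1], i.e. 27 - 3 + 1
```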
The factorial number system uses a varying radix, giving factorials as place values; they are related to Chinese remainder theorem and residue number system enumerations. This system effectively enumerates permutations. A derivative of this uses the Towers of Hanoi puzzle configuration as a counting system. The configuration of the towers can be put into 1-to-1 correspondence with the decimal count of the step at which the configuration occurs and vice versa.
Decimal equivalents   −3     −2    −1    0    1    2     3     4     5      6       7       8
Balanced base 3       1̄0     1̄1    1̄     0    1    11̄    10    11    11̄1̄    11̄0     11̄1     101̄
Base −2               1101   10    11    0    1    110   111   100   101    11010   11011   11000
Factoroid                                0    10   100   110   200   210    1000    1010    1100
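The factorial-base ("Factoroid") row of the table above can be generated with a short sketch (the function name is ours). It divides by 2, 3, 4, ... in turn, and keeps the always-zero 0! place so the output matches the table's form:

```python
def to_factorial_base(n):
    """Digits of n in the factorial number system, most significant first.
    Place values are ..., 3!, 2!, 1!, plus a trailing 0! digit that is always 0."""
    digits = [0]                 # the trailing 0! place
    radix = 2
    while n:
        n, r = divmod(n, radix)  # remainder mod 2 is the 1! digit, mod 3 the 2! digit, ...
        digits.append(r)
        radix += 1
    return digits[::-1]

print(to_factorial_base(5))  # [2, 1, 0]: 2x2! + 1x1!, written "210" in the table
print(to_factorial_base(8))  # [1, 1, 0, 0]: 1x3! + 1x2!, written "1100"
```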
Non-positional positions
Each position does not need to be positional itself. Babylonian sexagesimal numerals were positional, but in each position were groups of two kinds of wedges representing ones and tens (a narrow vertical wedge | for the one and an open left pointing wedge ⟨ for the ten) — up to 5+9=14 symbols per position (i.e. 5 tens ⟨⟨⟨⟨⟨ and 9 ones ||||||||| grouped into one or two near squares containing up to three tiers of symbols, or a place holder (\\) for the lack of a position).[21] Hellenistic astronomers used one or two alphabetic Greek numerals for each position (one chosen from 5 letters representing 10–50 and/or one chosen from 9 letters representing 1–9, or a zero symbol).[22]
See also
Examples:
• List of numeral systems
• Category: Positional numeral systems
Related topics:
• Algorism
• Hindu–Arabic numeral system
• Mixed radix
• Non-standard positional numeral systems
• Numeral system
• Scientific notation
Other:
• Significant figures
Notes
1. Kaplan, Robert (2000). The Nothing That Is: A Natural History of Zero. Oxford: Oxford University Press. pp. 11–12 – via archive.org.
2. "Greek numerals". Archived from the original on 26 November 2016. Retrieved 31 May 2016.
3. Menninger, Karl: Zahlwort und Ziffer. Eine Kulturgeschichte der Zahl, Vandenhoeck und Ruprecht, 3rd. ed., 1979, ISBN 3-525-40725-4, pp. 150–153
4. Ifrah, page 187
5. L. F. Menabrea. Translated by Ada Augusta, Countess of Lovelace. "Sketch of The Analytical Engine Invented by Charles Babbage" Archived 15 September 2008 at the Wayback Machine. 1842.
6. Berggren, J. Lennart (2007). "Mathematics in Medieval Islam". The Mathematics of Egypt, Mesopotamia, China, India, and Islam: A Sourcebook. Princeton University Press. p. 518. ISBN 978-0-691-11485-9.
7. Gandz, S.: The invention of the decimal fractions and the application of the exponential calculus by Immanuel Bonfils of Tarascon (c. 1350), Isis 25 (1936), 16–45.
8. Lam Lay Yong, "The Development of Hindu-Arabic and Traditional Chinese Arithmetic", Chinese Science, 1996 p38, Kurt Vogel notation
9. Lay Yong, Lam. "A Chinese Genesis, Rewriting the history of our numeral system". Archive for History of Exact Sciences. 38: 101–108.
10. B. L. van der Waerden (1985). A History of Algebra. From Khwarizmi to Emmy Noether. Berlin: Springer-Verlag.
11. E. J. Dijksterhuis (1970) Simon Stevin: Science in the Netherlands around 1600, Martinus Nijhoff Publishers, Dutch original 1943
12. The digit will retain its meaning in other number bases, in general, because a higher number base would normally be a notational extension of the lower number base in any systematic organization. In the mathematical sciences there is virtually only one positional-notation numeral system for each base below 10, and this extends with only insignificant variations in the choice of alphabetic digits for bases above 10.
13. We do not usually remove the lowercase digits "l" and lowercase "o", for in most fonts they are discernible from the digits "1" and "0".
14. User 'Gone'. "number systems - How to change from base $n$ to $m$". Mathematics Stack Exchange. Retrieved 6 August 2020.
15. The exact size of the $\nu _{1},\ldots ,\nu _{n}$ does not matter. They only have to be ≥ 1.
16. Neugebauer, Otto; Sachs, Abraham Joseph; Götze, Albrecht (1945), Mathematical Cuneiform Texts, American Oriental Series, vol. 29, New Haven: American Oriental Society and the American Schools of Oriental Research, p. 2, ISBN 9780940490291, archived from the original on 1 October 2016, retrieved 18 September 2019
17. Bartley, Wm. Clark (January–February 1997). "Making the Old Way Count" (PDF). Sharing Our Pathways. 2 (1): 12–13. Archived (PDF) from the original on 25 June 2013. Retrieved 27 February 2017.
18. Barrow, John D. (1992), Pi in the sky: counting, thinking, and being, Clarendon Press, p. 38, ISBN 9780198539568.
19. (Mallory & Adams 1997) Encyclopedia of Indo-European Culture
20. Knuth, pages 195–213
21. Ifrah, pages 326, 379
22. Ifrah, pages 261–264
References
• O'Connor, John; Robertson, Edmund (December 2000). "Babylonian Numerals". Archived from the original on 11 September 2014. Retrieved 21 August 2010.
• Kadvany, John (December 2007). "Positional Value and Linguistic Recursion". Journal of Indian Philosophy. 35 (5–6): 487–520. doi:10.1007/s10781-007-9025-5. S2CID 52885600.
• Knuth, Donald (1997). The art of Computer Programming. Vol. 2. Addison-Wesley. pp. 195–213. ISBN 0-201-89684-2.
• Ifrah, George (2000). The Universal History of Numbers: From Prehistory to the Invention of the Computer. Wiley. ISBN 0-471-37568-3.
• Kroeber, Alfred (1976) [1925]. Handbook of the Indians of California. Courier Dover Publications. p. 176. ISBN 9780486233680.
External links
Wikimedia Commons has media related to Positional numeral systems.
• Accurate Base Conversion
• The Development of Hindu Arabic and Traditional Chinese Arithmetics
• Implementation of Base Conversion at cut-the-knot
• Learn to count other bases on your fingers
• Online Arbitrary Precision Base Converter
Weighted correlation network analysis
Weighted correlation network analysis, also known as weighted gene co-expression network analysis (WGCNA), is a widely used data mining method, especially for studying biological networks based on pairwise correlations between variables. While it can be applied to most high-dimensional data sets, it has been most widely used in genomic applications. It allows one to define modules (clusters), intramodular hubs, and network nodes with regard to module membership, to study the relationships between co-expression modules, and to compare the network topology of different networks (differential network analysis). WGCNA can be used as a data reduction technique (related to oblique factor analysis), as a clustering method (fuzzy clustering), as a feature selection method (e.g. as a gene screening method), as a framework for integrating complementary (genomic) data (based on weighted correlations between quantitative variables), and as an exploratory data analysis technique.[1] Although WGCNA incorporates traditional exploratory techniques, its intuitive network language and analysis framework go beyond any single standard analysis technique. Since it uses network methodology and is well suited for integrating complementary genomic data sets, it can be interpreted as a systems biology or systems genetics data analysis method. By selecting intramodular hubs in consensus modules, WGCNA also gives rise to network-based meta-analysis techniques.[2]
History
The WGCNA method was developed by Steve Horvath, a professor of human genetics at the David Geffen School of Medicine at UCLA and of biostatistics at the UCLA Fielding School of Public Health, together with his colleagues at UCLA and (former) lab members (in particular Peter Langfelder, Bin Zhang, and Jun Dong). Much of the work arose from collaborations with applied researchers. In particular, weighted correlation networks were developed in joint discussions with cancer researchers Paul Mischel and Stanley F. Nelson, and neuroscientists Daniel H. Geschwind and Michael C. Oldham (according to the acknowledgement section in [1]). There is a vast literature on dependency networks, scale-free networks and co-expression networks.
Comparison between weighted and unweighted correlation networks
A weighted correlation network can be interpreted as special case of a weighted network, dependency network or correlation network. Weighted correlation network analysis can be attractive for the following reasons:
• The network construction (based on soft thresholding the correlation coefficient) preserves the continuous nature of the underlying correlation information. For example, weighted correlation networks that are constructed on the basis of correlations between numeric variables do not require the choice of a hard threshold. Dichotomizing information and (hard)-thresholding may lead to information loss.[3]
• The network construction gives highly robust results with respect to different choices of the soft threshold.[3] In contrast, results based on unweighted networks, constructed by thresholding a pairwise association measure, often strongly depend on the threshold.
• Weighted correlation networks facilitate a geometric interpretation based on the angular interpretation of the correlation; see chapter 6 of [4].
• Resulting network statistics can be used to enhance standard data-mining methods such as cluster analysis, since (dis-)similarity measures can often be transformed into weighted networks;[5] see chapter 6 of [4].
• WGCNA provides powerful module preservation statistics, which can be used to quantify whether a module found in one data set is preserved in another. Module preservation statistics also allow one to study differences between the modular structure of networks.[6]
• Weighted networks and correlation networks can often be approximated by "factorizable" networks.[4][7] Such approximations are often difficult to achieve for sparse, unweighted networks. Therefore, weighted (correlation) networks allow for a parsimonious parametrization in terms of modules and module membership (see chapters 2 and 6 of [1], and [8]).
Method
First, one defines a gene co-expression similarity measure which is used to define the network. We denote the gene co-expression similarity measure of a pair of genes i and j by $s_{ij}$. Many co-expression studies use the absolute value of the correlation as an unsigned co-expression similarity measure,
$s_{ij}^{unsigned}=|cor(x_{i},x_{j})|$
where gene expression profiles $x_{i}$ and $x_{j}$ consist of the expression of genes i and j across multiple samples. However, using the absolute value of the correlation may obfuscate biologically relevant information, since no distinction is made between gene repression and activation. In contrast, in signed networks the similarity between genes reflects the sign of the correlation of their expression profiles. To define a signed co-expression measure between gene expression profiles $x_{i}$ and $x_{j}$ , one can use a simple transformation of the correlation:
$s_{ij}^{signed}=0.5+0.5cor(x_{i},x_{j})$
Like the unsigned measure $s_{ij}^{unsigned}$, the signed similarity $s_{ij}^{signed}$ takes on a value between 0 and 1. Note that the unsigned similarity between two oppositely expressed genes ($cor(x_{i},x_{j})=-1$) equals 1, while the signed similarity equals 0. Similarly, while the unsigned co-expression measure of two genes with zero correlation is zero, the signed similarity equals 0.5.
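Both similarity measures can be computed directly from an expression matrix. The following is a minimal Python sketch (assuming NumPy; the function name is my own), with rows as genes and columns as samples:

```python
import numpy as np

def coexpression_similarity(X, signed=False):
    """Pairwise co-expression similarity for an expression matrix X
    (rows = genes, columns = samples): |cor| for the unsigned measure,
    0.5 + 0.5*cor for the signed measure."""
    C = np.corrcoef(X)                    # cor(x_i, x_j) for all gene pairs
    return 0.5 + 0.5 * C if signed else np.abs(C)
```

For two perfectly anti-correlated genes, the unsigned similarity is 1 while the signed similarity is 0, exactly as described above.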
Next, an adjacency matrix (network), $A=[a_{ij}]$, is used to quantify how strongly genes are connected to one another. $A$ is defined by thresholding the co-expression similarity matrix $S=[s_{ij}]$. 'Hard' thresholding (dichotomizing) the similarity measure $S$ results in an unweighted gene co-expression network: the unweighted network adjacency is defined to be 1 if $s_{ij}>\tau $ and 0 otherwise. Because hard thresholding encodes gene connections in a binary fashion, it can be sensitive to the choice of the threshold and result in the loss of co-expression information.[3] The continuous nature of the co-expression information can be preserved by employing soft thresholding, which results in a weighted network. Specifically, WGCNA uses the following power function to assess connection strength:
$ a_{ij}=(s_{ij})^{\beta }$,
where the power $\beta $ is the soft thresholding parameter. The default values $\beta =6$ and $\beta =12$ are used for unsigned and signed networks, respectively. Alternatively, $\beta $ can be chosen using the scale-free topology criterion which amounts to choosing the smallest value of $\beta $ such that approximate scale free topology is reached.[3]
Since $\log(a_{ij})=\beta \log(s_{ij})$, the weighted network adjacency is linearly related to the co-expression similarity on a logarithmic scale. Note that a high power $\beta $ transforms high similarities into high adjacencies, while pushing low similarities towards 0. Since this soft-thresholding procedure applied to a pairwise correlation matrix leads to a weighted adjacency matrix, the ensuing analysis is referred to as weighted gene co-expression network analysis.
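A minimal sketch of the soft-thresholding step in Python (assuming NumPy; the function names and the zeroed-diagonal convention are my own choices, not the package's API):

```python
import numpy as np

def adjacency(S, beta=6):
    """Soft-threshold a similarity matrix: a_ij = s_ij ** beta.
    A high beta pushes weak similarities towards 0 while preserving
    strong ones on a continuous scale (no hard cutoff)."""
    A = np.asarray(S, dtype=float) ** beta
    np.fill_diagonal(A, 0.0)   # convention used here: no self-connections
    return A

def connectivity(A):
    # Weighted degree k_i = sum_j a_ij of each gene.
    return A.sum(axis=1)
```

With beta = 6, a similarity of 0.9 yields an adjacency of about 0.53, while a similarity of 0.3 collapses to about 0.0007, illustrating the suppression of weak correlations.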
A major step in the module-centric analysis is to cluster genes into network modules using a network proximity measure. Roughly speaking, a pair of genes has a high proximity if it is closely interconnected. By convention, the maximal proximity between two genes is 1 and the minimum proximity is 0. Typically, WGCNA uses the topological overlap measure (TOM) as proximity,[9][10] which can also be defined for weighted networks.[3] The TOM combines the adjacency of two genes with the connection strengths these two genes share with other "third party" genes, and is a highly robust measure of network interconnectedness (proximity). This proximity is used as input of average linkage hierarchical clustering. Modules are defined as branches of the resulting cluster tree using the dynamic branch cutting approach.[11] Next, the genes inside a given module are summarized with the module eigengene, which can be considered the best summary of the standardized module expression data.[4] The module eigengene of a given module is defined as the first principal component of the standardized expression profiles. Eigengenes define robust biomarkers,[12] and can be used as features in complex machine learning models such as decision trees and Bayesian networks.[12][13] To find modules that relate to a clinical trait of interest, module eigengenes are correlated with the clinical trait, which gives rise to an eigengene significance measure. One can also construct co-expression networks between module eigengenes (eigengene networks), i.e. networks whose nodes are modules.[14] To identify intramodular hub genes inside a given module, one can use two types of connectivity measures. The first, referred to as $kME_{i}=cor(x_{i},ME)$, is defined by correlating each gene with the respective module eigengene.
The second, referred to as kIN, is defined as the sum of adjacencies between a gene and the other genes of its module. In practice, these two measures are equivalent.[4] To test whether a module is preserved in another data set, one can use various network statistics, e.g. $Z_{summary}$.[6]
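The quantities used in this module-centric analysis can be sketched in Python as follows (assuming NumPy). The TOM expression follows the weighted-network definition in [3]; the treatment of the diagonal, the eigengene sign, and the function names are implementation choices of this sketch:

```python
import numpy as np

def topological_overlap(A):
    """Topological overlap matrix for a weighted adjacency A with
    zero diagonal: TOM_ij = (l_ij + a_ij) / (min(k_i, k_j) + 1 - a_ij),
    where l_ij = sum_u a_iu * a_uj and k_i = sum_j a_ij."""
    k = A.sum(axis=1)
    L = A @ A                                 # shared "third party" strength
    tom = (L + A) / (np.minimum.outer(k, k) + 1.0 - A)
    np.fill_diagonal(tom, 1.0)                # maximal proximity with itself
    return tom

def module_eigengene(X_module):
    """First principal component of the standardized expression
    profiles (rows = genes, columns = samples) of one module."""
    Z = (X_module - X_module.mean(axis=1, keepdims=True)) \
        / X_module.std(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(Z, full_matrices=False)
    return vt[0]                              # a per-sample summary profile

def kME(X_module, eigengene):
    # Module membership: correlation of each gene with the eigengene.
    return np.array([np.corrcoef(x, eigengene)[0, 1] for x in X_module])
```

The TOM can then be converted to a dissimilarity (1 − TOM) and passed to average linkage hierarchical clustering, as described above.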
Applications
WGCNA has been widely used for analyzing gene expression data (i.e. transcriptional data), e.g. to find intramodular hub genes.[2][15] For example, a WGCNA study revealed novel transcription factors associated with the Bisphenol A (BPA) dose-response.[16]
It is often used as a data reduction step in systems genetic applications where modules are represented by "module eigengenes", e.g.[17][18] Module eigengenes can be used to correlate modules with clinical traits. Eigengene networks are co-expression networks between module eigengenes (i.e. networks whose nodes are modules). WGCNA is widely used in neuroscientific applications, e.g.[19][20] and for analyzing genomic data including microarray data,[21] single-cell RNA-Seq data,[22][23] DNA methylation data,[24] miRNA data, peptide counts[25] and microbiota data (16S rRNA gene sequencing).[26] Other applications include brain imaging data, e.g. functional MRI data.[27]
R software package
The WGCNA R software package[28] provides functions for carrying out all aspects of weighted network analysis (module construction, hub gene selection, module preservation statistics, differential network analysis, network statistics). The WGCNA package is available from the Comprehensive R Archive Network (CRAN), the standard repository for R add-on packages.
References
1. Horvath S (2011). Weighted Network Analysis: Application in Genomics and Systems Biology. New York, NY: Springer. ISBN 978-1-4419-8818-8.
2. Langfelder P, Mischel PS, Horvath S, Ravasi T (17 April 2013). "When Is Hub Gene Selection Better than Standard Meta-Analysis?". PLOS ONE. 8 (4): e61505. Bibcode:2013PLoSO...861505L. doi:10.1371/journal.pone.0061505. PMC 3629234. PMID 23613865.
3. Zhang B, Horvath S (2005). "A general framework for weighted gene co-expression network analysis" (PDF). Statistical Applications in Genetics and Molecular Biology. 4: 17. CiteSeerX 10.1.1.471.9599. doi:10.2202/1544-6115.1128. PMID 16646834. S2CID 7756201.
4. Horvath S, Dong J (2008). "Geometric Interpretation of Gene Coexpression Network Analysis". PLOS Computational Biology. 4 (8): e1000117. Bibcode:2008PLSCB...4E0117H. doi:10.1371/journal.pcbi.1000117. PMC 2446438. PMID 18704157.
5. Oldham MC, Langfelder P, Horvath S (12 June 2012). "Network methods for describing sample relationships in genomic datasets: application to Huntington's disease". BMC Systems Biology. 6: 63. doi:10.1186/1752-0509-6-63. PMC 3441531. PMID 22691535.
6. Langfelder P, Luo R, Oldham MC, Horvath S (20 January 2011). "Is my network module preserved and reproducible?". PLOS Computational Biology. 7 (1): e1001057. Bibcode:2011PLSCB...7E1057L. doi:10.1371/journal.pcbi.1001057. PMC 3024255. PMID 21283776.
7. Dong J, Horvath S (4 June 2007). "Understanding network concepts in modules". BMC Systems Biology. 1: 24. doi:10.1186/1752-0509-1-24. PMC 3238286. PMID 17547772.
8. Ranola JM, Langfelder P, Lange K, Horvath S (14 March 2013). "Cluster and propensity based approximation of a network". BMC Systems Biology. 7: 21. doi:10.1186/1752-0509-7-21. PMC 3663730. PMID 23497424.
9. Ravasz E, Somera AL, Mongru DA, Oltvai ZN, Barabasi AL (2002). "Hierarchical organization of modularity in metabolic networks". Science. 297 (5586): 1551–1555. arXiv:cond-mat/0209244. Bibcode:2002Sci...297.1551R. doi:10.1126/science.1073374. PMID 12202830. S2CID 14452443.
10. Yip AM, Horvath S (24 January 2007). "Gene network interconnectedness and the generalized topological overlap measure". BMC Bioinformatics. 8: 22. doi:10.1186/1471-2105-8-22. PMC 1797055. PMID 17250769.
11. Langfelder P, Zhang B, Horvath S (2007). "Defining clusters from a hierarchical cluster tree: the Dynamic Tree Cut library for R". Bioinformatics. 24 (5): 719–20. doi:10.1093/bioinformatics/btm563. PMID 18024473. S2CID 1095190.
12. Foroushani A, Agrahari R, Docking R, Chang L, Duns G, Hudoba M, Karsan A, Zare H (16 March 2017). "Large-scale gene network analysis reveals the significance of extracellular matrix pathway and homeobox genes in acute myeloid leukemia: an introduction to the Pigengene package and its applications". BMC Medical Genomics. 10 (1): 16. doi:10.1186/s12920-017-0253-6. PMC 5353782. PMID 28298217.
13. Agrahari, Rupesh; Foroushani, Amir; Docking, T. Roderick; Chang, Linda; Duns, Gerben; Hudoba, Monika; Karsan, Aly; Zare, Habil (3 May 2018). "Applications of Bayesian network models in predicting types of hematological malignancies". Scientific Reports. 8 (1): 6951. Bibcode:2018NatSR...8.6951A. doi:10.1038/s41598-018-24758-5. ISSN 2045-2322. PMC 5934387. PMID 29725024.
14. Langfelder P, Horvath S (2007). "Eigengene networks for studying the relationships between co-expression modules". BMC Systems Biology. 2007 (1): 54. doi:10.1186/1752-0509-1-54. PMC 2267703. PMID 18031580.
15. Horvath S, Zhang B, Carlson M, Lu KV, Zhu S, Felciano RM, Laurance MF, Zhao W, Shu Q, Lee Y, Scheck AC, Liau LM, Wu H, Geschwind DH, Febbo PG, Kornblum HI, Cloughesy TF, Nelson SF, Mischel PS (2006). "Analysis of Oncogenic Signaling Networks in Glioblastoma Identifies ASPM as a Novel Molecular Target". PNAS. 103 (46): 17402–17407. Bibcode:2006PNAS..10317402H. doi:10.1073/pnas.0608396103. PMC 1635024. PMID 17090670.
16. Hartung, Thomas; Kleensang, Andre; Tran, Vy; Maertens, Alexandra (2018). "Weighted Gene Correlation Network Analysis (WGCNA) Reveals Novel Transcription Factors Associated With Bisphenol A Dose-Response". Frontiers in Genetics. 9: 508. doi:10.3389/fgene.2018.00508. ISSN 1664-8021. PMC 6240694. PMID 30483308.
17. Chen Y, Zhu J, Lum PY, Yang X, Pinto S, MacNeil DJ, Zhang C, Lamb J, Edwards S, Sieberts SK, Leonardson A, Castellini LW, Wang S, Champy MF, Zhang B, Emilsson V, Doss S, Ghazalpour A, Horvath S, Drake TA, Lusis AJ, Schadt EE (27 March 2008). "Variations in DNA elucidate molecular networks that cause disease". Nature. 452 (7186): 429–35. Bibcode:2008Natur.452..429C. doi:10.1038/nature06757. PMC 2841398. PMID 18344982.
18. Plaisier CL, Horvath S, Huertas-Vazquez A, Cruz-Bautista I, Herrera MF, Tusie-Luna T, Aguilar-Salinas C, Pajukanta P, Storey JD (11 September 2009). "A Systems Genetics Approach Implicates USF1, FADS3, and Other Causal Candidate Genes for Familial Combined Hyperlipidemia". PLOS Genetics. 5 (9): e1000642. doi:10.1371/journal.pgen.1000642. PMC 2730565. PMID 19750004.
19. Voineagu I, Wang X, Johnston P, Lowe JK, Tian Y, Horvath S, Mill J, Cantor RM, Blencowe BJ, Geschwind DH (25 May 2011). "Transcriptomic analysis of autistic brain reveals convergent molecular pathology". Nature. 474 (7351): 380–4. doi:10.1038/nature10110. PMC 3607626. PMID 21614001.
20. Hawrylycz MJ, Lein ES, Guillozet-Bongaarts AL, Shen EH, Ng L, Miller JA, van de Lagemaat LN, Smith KA, Ebbert A, Riley ZL, Abajian C, Beckmann CF, Bernard A, Bertagnolli D, Boe AF, Cartagena PM, Chakravarty MM, Chapin M, Chong J, Dalley RA, David Daly B, Dang C, Datta S, Dee N, Dolbeare TA, Faber V, Feng D, Fowler DR, Goldy J, Gregor BW, Haradon Z, Haynor DR, Hohmann JG, Horvath S, Howard RE, Jeromin A, Jochim JM, Kinnunen M, Lau C, Lazarz ET, Lee C, Lemon TA, Li L, Li Y, Morris JA, Overly CC, Parker PD, Parry SE, Reding M, Royall JJ, Schulkin J, Sequeira PA, Slaughterbeck CR, Smith SC, Sodt AJ, Sunkin SM, Swanson BE, Vawter MP, Williams D, Wohnoutka P, Zielke HR, Geschwind DH, Hof PR, Smith SM, Koch C, Grant S, Jones AR (20 September 2012). "An anatomically comprehensive atlas of the adult human brain transcriptome". Nature. 489 (7416): 391–399. Bibcode:2012Natur.489..391H. doi:10.1038/nature11405. PMC 4243026. PMID 22996553.
21. Kadarmideen HN, Watson-Haigh NS, Andronicos NM (2011). "Systems biology of ovine intestinal parasite resistance: disease gene modules and biomarkers". Molecular BioSystems. 7 (1): 235–246. doi:10.1039/C0MB00190B. PMID 21072409.
22. Kogelman LJ, Cirera S, Zhernakova DV, Fredholm M, Franke L, Kadarmideen HN (30 September 2014). "Identification of co-expression gene networks, regulatory genes and pathways for obesity based on adipose tissue RNA Sequencing in a porcine model". BMC Medical Genomics. 7 (1): 57. doi:10.1186/1755-8794-7-57. PMC 4183073. PMID 25270054.
23. Xue Z, Huang K, Cai C, Cai L, Jiang CY, Feng Y, Liu Z, Zeng Q, Cheng L, Sun YE, Liu JY, Horvath S, Fan G (29 August 2013). "Genetic programs in human and mouse early embryos revealed by single-cell RNA sequencing". Nature. 500 (7464): 593–7. Bibcode:2013Natur.500..593X. doi:10.1038/nature12364. PMC 4950944. PMID 23892778.
24. Horvath S, Zhang Y, Langfelder P, Kahn RS, Boks MP, van Eijk K, van den Berg LH, Ophoff RA (3 October 2012). "Aging effects on DNA methylation modules in human brain and blood tissue". Genome Biology. 13 (10): R97. doi:10.1186/gb-2012-13-10-r97. PMC 4053733. PMID 23034122.
25. Shirasaki DI, Greiner ER, Al-Ramahi I, Gray M, Boontheung P, Geschwind DH, Botas J, Coppola G, Horvath S, Loo JA, Yang XW (12 July 2012). "Network organization of the huntingtin proteomic interactome in mammalian brain". Neuron. 75 (1): 41–57. doi:10.1016/j.neuron.2012.05.024. PMC 3432264. PMID 22794259.
26. Maomeng Tong; Xiaoxiao Li; Laura Wegener Parfrey; et al. (2013). "A modular organization of the human intestinal mucosal microbiota and its association with inflammatory bowel disease". PLOS One. 8 (11): e80702. Bibcode:2013PLoSO...880702T. doi:10.1371/JOURNAL.PONE.0080702. ISSN 1932-6203. PMC 3834335. PMID 24260458. Wikidata Q21559533.
27. Mumford JA, Horvath S, Oldham MC, Langfelder P, Geschwind DH, Poldrack RA (1 October 2010). "Detecting network modules in fMRI time series: a weighted network analysis approach". NeuroImage. 52 (4): 1465–76. doi:10.1016/j.neuroimage.2010.05.047. PMC 3632300. PMID 20553896.
28. Langfelder P, Horvath S (29 December 2008). "WGCNA: an R package for weighted correlation network analysis". BMC Bioinformatics. 9: 559. doi:10.1186/1471-2105-9-559. PMC 2631488. PMID 19114008.
Directed graph
In mathematics, and more specifically in graph theory, a directed graph (or digraph) is a graph that is made up of a set of vertices connected by directed edges, often called arcs.
Definition
In formal terms, a directed graph is an ordered pair G = (V, A) where[1]
• V is a set whose elements are called vertices, nodes, or points;
• A is a set of ordered pairs of vertices, called arcs, directed edges (sometimes simply edges with the corresponding set named E instead of A), arrows, or directed lines.
It differs from an ordinary or undirected graph, in that the latter is defined in terms of unordered pairs of vertices, which are usually called edges, links or lines.
The aforementioned definition does not allow a directed graph to have multiple arrows with the same source and target nodes, but some authors consider a broader definition that allows directed graphs to have such multiple arcs (namely, they allow the arc set to be a multiset). Sometimes these entities are called directed multigraphs (or multidigraphs).
On the other hand, the aforementioned definition allows a directed graph to have loops (that is, arcs that directly connect nodes with themselves), but some authors consider a narrower definition that does not allow directed graphs to have loops.[2] Directed graphs without loops may be called simple directed graphs, while directed graphs with loops may be called loop-digraphs (see section Types of directed graph).
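The ordered-pair definition translates directly into code. The following minimal Python sketch (the class and method names are my own) stores V and A as sets, which automatically forbids multiple arcs with the same source and target; a multiset of arcs would be needed for a multidigraph:

```python
from dataclasses import dataclass, field

@dataclass
class Digraph:
    """G = (V, A): a vertex set and a set of ordered pairs (arcs)."""
    V: set = field(default_factory=set)
    A: set = field(default_factory=set)

    def add_arc(self, x, y):
        # Adding an arc implicitly adds its endpoints to V.
        self.V |= {x, y}
        self.A.add((x, y))

    def has_loop(self):
        # A loop is an arc from a vertex to itself.
        return any(x == y for (x, y) in self.A)

    def is_simple(self):
        # Simple directed graph: no loops (arc multiplicity is already
        # excluded by the set representation).
        return not self.has_loop()
```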
Types of directed graphs
See also: Graph (discrete mathematics) § Types of graphs
Subclasses
• Symmetric directed graphs are directed graphs where all edges appear twice, one in each direction (that is, for every arrow that belongs to the digraph, the corresponding inverse arrow also belongs to it). (Such an edge is sometimes called "bidirected" and such graphs are sometimes called "bidirected", but this conflicts with the meaning for bidirected graphs.)
• Simple directed graphs are directed graphs that have no loops (arrows that directly connect vertices to themselves) and no multiple arrows with same source and target nodes. As already introduced, in case of multiple arrows the entity is usually addressed as directed multigraph. Some authors describe digraphs with loops as loop-digraphs.[2]
• Complete directed graphs are simple directed graphs where each pair of vertices is joined by a symmetric pair of directed arcs (it is equivalent to an undirected complete graph with the edges replaced by pairs of inverse arcs). It follows that a complete digraph is symmetric.
• Semicomplete multipartite digraphs are simple digraphs in which the vertex set is partitioned into sets such that for every pair of vertices x and y in different sets, there is an arc between x and y. There can be one arc between x and y or two arcs in opposite directions. [3]
• Semicomplete digraphs are simple digraphs where there is an arc between each pair of vertices. Every semicomplete digraph is a semicomplete multipartite digraph in a trivial way, with each vertex constituting a set of the partition. [4]
• Quasi-transitive digraphs are simple digraphs where for every triple x, y, z of distinct vertices with arcs from x to y and from y to z, there is an arc between x and z. There can be just one arc between x and z or two arcs in opposite directions. A semicomplete digraph is a quasi-transitive digraph. There are extensions of quasi-transitive digraphs called k-quasi-transitive digraphs. [5]
• Oriented graphs are directed graphs having no opposite pairs of directed edges (i.e. at most one of (x, y) and (y, x) may be arrows of the graph). It follows that a directed graph is an oriented graph if and only if it has no 2-cycle.[6] (This is not the only meaning of "oriented graph"; see Orientation (graph theory).)
• Tournaments are oriented graphs obtained by choosing a direction for each edge in undirected complete graphs. A tournament is a semicomplete digraph. [7]
• A directed graph is acyclic if it has no directed cycles. The usual name for such a digraph is directed acyclic graph (DAG).[8]
• Multitrees are DAGs in which there are no two distinct directed paths from the same starting vertex to the same ending vertex.
• Oriented trees or polytrees are DAGs formed by orienting the edges of trees (connected, acyclic undirected graphs).
• Rooted trees are oriented trees in which all edges of the underlying undirected tree are directed either away from the root (such trees are called arborescences or out-trees) or towards the root (in-trees).
Digraphs with supplementary properties
• Weighted directed graphs (also known as directed networks) are (simple) directed graphs with weights assigned to their arrows, similarly to weighted graphs (which are also known as undirected networks or weighted networks).[2]
• Flow networks are weighted directed graphs where two nodes are distinguished, a source and a sink.
• Rooted directed graphs (also known as flow graphs) are digraphs in which a vertex has been distinguished as the root.
• Control-flow graphs are rooted digraphs used in computer science as a representation of the paths that might be traversed through a program during its execution.
• Signal-flow graphs are directed graphs in which nodes represent system variables and branches (edges, arcs, or arrows) represent functional connections between pairs of nodes.
• Flow graphs are digraphs associated with a set of linear algebraic or differential equations.
• State diagrams are directed multigraphs that represent finite state machines.
• Commutative diagrams are digraphs used in category theory, where the vertices represent (mathematical) objects and the arrows represent morphisms, with the property that all directed paths with the same start and endpoints lead to the same result by composition.
• In the theory of Lie groups, a quiver Q is a directed graph serving as the domain of, and thus characterizing the shape of, a representation V defined as a functor, specifically an object of the functor category FinVctKF(Q) where F(Q) is the free category on Q consisting of paths in Q and FinVctK is the category of finite-dimensional vector spaces over a field K. Representations of a quiver label its vertices with vector spaces and its edges (and hence paths) compatibly with linear transformations between them, and transform via natural transformations.
Basic terminology
An arc (x, y) is considered to be directed from x to y; y is called the head and x is called the tail of the arc; y is said to be a direct successor of x and x is said to be a direct predecessor of y. If a path leads from x to y, then y is said to be a successor of x and reachable from x, and x is said to be a predecessor of y. The arc (y, x) is called the reversed arc of (x, y).
The adjacency matrix of a multidigraph with loops is the integer-valued matrix with rows and columns corresponding to the vertices, where a nondiagonal entry aij is the number of arcs from vertex i to vertex j, and the diagonal entry aii is the number of loops at vertex i. The adjacency matrix of a directed graph is a logical matrix, and is unique up to permutation of rows and columns.
Another matrix representation for a directed graph is its incidence matrix.
See direction for more definitions.
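The adjacency matrix just described can be built with a short Python sketch (the function name is my own); passing the arcs as a list allows repeated pairs, so multiple arcs and loops are counted correctly:

```python
def adjacency_matrix(vertices, arcs):
    """Integer adjacency matrix of a multidigraph with loops:
    entry [i][j] counts the arcs from vertex i to vertex j, and the
    diagonal entry [i][i] counts the loops at vertex i."""
    index = {v: i for i, v in enumerate(vertices)}
    n = len(vertices)
    M = [[0] * n for _ in range(n)]
    for (x, y) in arcs:
        M[index[x]][index[y]] += 1
    return M
```

For a simple directed graph every entry is 0 or 1, i.e. the matrix is logical, as stated above.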
Indegree and outdegree
For a vertex, the number of head ends adjacent to it is called the indegree of the vertex, and the number of tail ends adjacent to it is its outdegree (called branching factor in trees).
Let G = (V, A) and v ∈ V. The indegree of v is denoted deg−(v) and its outdegree is denoted deg+(v).
A vertex with deg−(v) = 0 is called a source, as it is the origin of each of its outgoing arcs. Similarly, a vertex with deg+(v) = 0 is called a sink, since it is the end of each of its incoming arcs.
The degree sum formula states that, for a directed graph,
$\sum _{v\in V}\deg ^{-}(v)=\sum _{v\in V}\deg ^{+}(v)=|A|.$
If for every vertex v ∈ V, deg+(v) = deg−(v), the graph is called a balanced directed graph.[9]
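These definitions and the degree sum formula can be checked with a short Python sketch (the function name is my own):

```python
def degrees(vertices, arcs):
    """Indegree and outdegree of every vertex: each arc (x, y) has
    tail x (contributing to outdegree) and head y (to indegree)."""
    indeg = {v: 0 for v in vertices}
    outdeg = {v: 0 for v in vertices}
    for (x, y) in arcs:
        outdeg[x] += 1   # x is the tail of the arc
        indeg[y] += 1    # y is the head of the arc
    return indeg, outdeg
```

Since every arc contributes exactly one head and one tail, both degree sums equal |A|; a balanced digraph, such as a directed cycle, additionally has deg+(v) = deg−(v) at every vertex.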
Degree sequence
The degree sequence of a directed graph is the list of its indegree and outdegree pairs; for example, a digraph on four vertices may have degree sequence ((2, 0), (2, 2), (0, 2), (1, 1)). The degree sequence is a directed graph invariant, so isomorphic directed graphs have the same degree sequence. However, the degree sequence does not, in general, uniquely identify a directed graph; in some cases, non-isomorphic digraphs have the same degree sequence.
The directed graph realization problem is the problem of finding a directed graph whose degree sequence is a given sequence of positive integer pairs. (Trailing pairs of zeros may be ignored, since they are trivially realized by adding an appropriate number of isolated vertices to the directed graph.) A sequence which is the degree sequence of some directed graph, i.e. for which the directed graph realization problem has a solution, is called a directed graphic or directed graphical sequence. This problem can be solved either by the Kleitman–Wang algorithm or by the Fulkerson–Chen–Anstee theorem.
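A Python sketch of the Fulkerson–Chen–Anstee test as it is usually stated (pairs sorted into nonincreasing lexicographic order; the function name is my own):

```python
def is_digraphic(pairs):
    """Fulkerson-Chen-Anstee test for whether a sequence of
    (outdegree, indegree) pairs is realizable by a simple directed
    graph.  After sorting into nonincreasing lexicographic order, the
    sequence is digraphic iff the outdegree and indegree sums agree
    and, for every k, sum_{i<=k} a_i <= sum_{i<=k} min(b_i, k-1)
    + sum_{i>k} min(b_i, k)."""
    p = sorted(pairs, reverse=True)
    n = len(p)
    a = [x for x, _ in p]   # outdegrees
    b = [y for _, y in p]   # indegrees
    if sum(a) != sum(b):
        return False
    for k in range(1, n + 1):
        lhs = sum(a[:k])
        rhs = (sum(min(b[i], k - 1) for i in range(k)) +
               sum(min(b[i], k) for i in range(k, n)))
        if lhs > rhs:
            return False
    return True
```

For instance, the four-pair sequence ((2, 0), (2, 2), (0, 2), (1, 1)) is digraphic, whereas ((2, 0), (0, 2)) is not, since a two-vertex simple digraph admits at most one arc from one vertex to the other.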
Directed graph connectivity
Main article: Connectivity (graph theory)
A directed graph is weakly connected (or just connected[10]) if the undirected underlying graph obtained by replacing all directed edges of the graph with undirected edges is a connected graph.
A directed graph is strongly connected or strong if it contains a directed path from x to y (and from y to x) for every pair of vertices (x, y). The strong components are the maximal strongly connected subgraphs.
A connected rooted graph (or flow graph) is one where there exists a directed path to every vertex from a distinguished root vertex.
See also
• Binary relation
• Coates graph
• DRAKON flowchart
• Flow chart
• Globular set
• Glossary of graph theory
• Graph Style Sheets
• Graph theory
• Graph (abstract data type)
• Network theory
• Orientation
• Preorder
• Topological sorting
• Transpose graph
• Vertical constraint graph
Notes
1. Bang-Jensen & Gutin (2000). Bang-Jensen & Gutin (2018), Chapter 1. Diestel (2005), Section 1.10. Bondy & Murty (1976), Section 10.
2. Chartrand, Gary (1977). Introductory Graph Theory. Courier Corporation. ISBN 9780486247755. Archived from the original on 2023-02-04. Retrieved 2020-10-02.
3. Bang-Jensen & Gutin (2018), Chapter 7 by Yeo.
4. Bang-Jensen & Gutin (2018), Chapter 2 by Bang-Jensen and Havet.
5. Bang-Jensen & Gutin (2018), Chapter 8 by Galeana-Sanchez and Hernandez-Cruz.
6. Diestel (2005), Section 1.10.
7. Bang-Jensen & Gutin (2018), Chapter 2 by Bang-Jensen and Havet.
8. Bang-Jensen & Gutin (2018), Chapter 3 by Gutin.
9. Satyanarayana, Bhavanari; Prasad, Kuncham Syam, Discrete Mathematics and Graph Theory, PHI Learning Pvt. Ltd., p. 460, ISBN 978-81-203-3842-5; Brualdi, Richard A. (2006), Combinatorial Matrix Classes, Encyclopedia of Mathematics and Its Applications, vol. 108, Cambridge University Press, p. 51, ISBN 978-0-521-86565-4.
10. Bang-Jensen & Gutin (2000) p. 19 in the 2007 edition; p. 20 in the 2nd edition (2009).
References
• Bang-Jensen, Jørgen; Gutin, Gregory (2000), Digraphs: Theory, Algorithms and Applications, Springer, ISBN 1-85233-268-9
(the corrected 1st edition of 2007 is now freely available on the authors' site; the 2nd edition appeared in 2009 ISBN 1-84800-997-6).
• Bang-Jensen, Jørgen; Gutin, Gregory (2018), Classes of Directed Graphs, Springer, ISBN 978-3319718408.
• Bondy, John Adrian; Murty, U. S. R. (1976), Graph Theory with Applications, North-Holland, ISBN 0-444-19451-7.
• Diestel, Reinhard (2005), Graph Theory (3rd ed.), Springer, ISBN 3-540-26182-6 (the electronic 3rd edition is freely available on author's site).
• Harary, Frank; Norman, Robert Z.; Cartwright, Dorwin (1965), Structural Models: An Introduction to the Theory of Directed Graphs, New York: Wiley.
• Number of directed graphs (or digraphs) with n nodes from On-Line Encyclopedia of Integer Sequences
External links
Wikimedia Commons has media related to Directed graphs.
|
Weighted geometric mean
In statistics, the weighted geometric mean is a generalization of the geometric mean using the weighted arithmetic mean.
Given a sample $x=(x_{1},x_{2},\dots ,x_{n})$ and weights $w=(w_{1},w_{2},\dots ,w_{n})$, it is calculated as:
${\bar {x}}=\left(\prod _{i=1}^{n}x_{i}^{w_{i}}\right)^{1/\sum _{i=1}^{n}w_{i}}=\exp \left({\frac {\sum _{i=1}^{n}w_{i}\ln x_{i}}{\sum _{i=1}^{n}w_{i}}}\right)$
The second form above illustrates that the logarithm of the geometric mean is the weighted arithmetic mean of the logarithms of the individual values. If all the weights are equal, the weighted geometric mean simplifies to the ordinary unweighted geometric mean.
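Both forms of the definition can be sketched directly (the sample values below are illustrative):

```python
import math

def weighted_geometric_mean(x, w):
    """Weighted geometric mean via the product form."""
    s = sum(w)
    prod = 1.0
    for xi, wi in zip(x, w):
        prod *= xi ** wi
    return prod ** (1.0 / s)

def weighted_geometric_mean_log(x, w):
    """Equivalent log form: exp of the weighted arithmetic mean of ln(x_i)."""
    s = sum(w)
    return math.exp(sum(wi * math.log(xi) for xi, wi in zip(x, w)) / s)

x = [2.0, 8.0, 4.0]
w = [1.0, 2.0, 1.0]
# The two forms agree.
assert abs(weighted_geometric_mean(x, w) - weighted_geometric_mean_log(x, w)) < 1e-9
# With equal weights it reduces to the ordinary geometric mean: gm(2, 8) = 4.
assert abs(weighted_geometric_mean([2.0, 8.0], [1.0, 1.0]) - 4.0) < 1e-9
```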
See also
• Average
• Central tendency
• Summary statistics
• Weighted arithmetic mean
• Weighted harmonic mean
External links
• Non-Newtonian calculus website
|
Weighted matroid
In combinatorics, a branch of mathematics, a weighted matroid is a matroid endowed with a weight function, with respect to which one can perform a greedy algorithm.
A weight function $w:E\rightarrow \mathbb {R} ^{+}$ for a matroid $M=(E,I)$ assigns a strictly positive weight to each element of $E$. We extend the function to subsets of $E$ by summation; $w(A)$ is the sum of $w(x)$ over $x$ in $A$. A matroid with an associated weight function is called a weighted matroid.
Spanning forest algorithms
As a simple example, say we wish to find the maximum spanning forest of a graph. That is, given a graph and a weight for each edge, find a forest containing every vertex and maximizing the total weight of the edges in the forest. This problem arises in some clustering applications. If we look at the definition of the forest matroid above, we see that the maximum spanning forest is simply the independent set with largest total weight — such a set must span the graph, for otherwise we can add edges without creating cycles. But how do we find it?
Finding a basis
There is a simple algorithm for finding a basis:
• Initially let $A$ be the empty set.
• For each $x$ in $E$
• if $A\cup \{x\}$ is independent, then set $A$ to $A\cup \{x\}$.
The result is clearly an independent set. It is a maximal independent set because if $B\cup \{x\}$ is not independent for some subset $B$ of $A$, then $A\cup \{x\}$ is not independent either (the contrapositive follows from the hereditary property). Thus if we pass up an element, we'll never have an opportunity to use it later. We will generalize this algorithm to solve a harder problem.
Extension to optimal
An independent set of largest total weight is called an optimal set. Optimal sets are always bases, because if an edge can be added, it should be; this only increases the total weight. As it turns out, there is a trivial greedy algorithm for computing an optimal set of a weighted matroid. It works as follows:
• Initially let $A$ be the empty set.
• For each $x$ in $E$, taken in (monotonically) decreasing order by weight
• if $A\cup \{x\}$ is independent, then set $A$ to $A\cup \{x\}$.
This algorithm finds a basis, since it is a special case of the above algorithm. It always chooses the element of largest weight that it can while preserving independence (hence the term "greedy"). This always produces an optimal set: suppose that it produces $A=\{e_{1},e_{2},\ldots ,e_{r}\}$ and let $B=\{f_{1},f_{2},\ldots ,f_{r}\}$ be any other basis, with both listed in decreasing order of weight. Now for any $k$ with $1\leq k\leq r$, consider the sets $O_{1}=\{e_{1},\ldots ,e_{k-1}\}$ and $O_{2}=\{f_{1},\ldots ,f_{k}\}$. Since $O_{1}$ is smaller than $O_{2}$, by the exchange property there is some element of $O_{2}$ which can be put into $O_{1}$ with the result still being independent. However, $e_{k}$ is an element of maximal weight that can be added to $O_{1}$ while maintaining independence. Thus $e_{k}$ is of no smaller weight than some element of $O_{2}$, and hence $e_{k}$ is of at least as large a weight as $f_{k}$. As this is true for all $k$, $A$ is at least as weighty as $B$.
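The greedy procedure can be sketched for the graphic matroid in particular, where the independence test is a cycle check implemented with a union-find structure (the example graph and weights below are invented for illustration):

```python
def max_spanning_forest(num_vertices, edges):
    """Matroid greedy on the graphic matroid: take edges in decreasing
    weight order, keeping each one iff adding it stays independent,
    i.e. creates no cycle. `edges` is a list of (weight, u, v) tuples
    with positive weights."""
    parent = list(range(num_vertices))

    def find(v):  # union-find root with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    forest = []
    for w, u, v in sorted(edges, reverse=True):  # decreasing weight
        ru, rv = find(u), find(v)
        if ru != rv:  # independent: endpoints in different components
            parent[ru] = rv
            forest.append((w, u, v))
    return forest

# Hypothetical 4-vertex graph.
edges = [(4, 0, 1), (3, 1, 2), (2, 0, 2), (10, 2, 3)]
forest = max_spanning_forest(4, edges)
total = sum(w for w, _, _ in forest)
print(total)  # 17: edge (2, 0, 2) would close a cycle and is skipped
```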
Complexity analysis
The easiest way to traverse the members of $E$ in the desired order is to sort them. This requires $O(|E|\log |E|)$ time using a comparison sorting algorithm. We also need to test for each $x$ whether $A\cup \{x\}$ is independent; assuming independence tests require $O(f(|E|))$ time, the total time for the algorithm is $O(|E|\log |E|+|E|f(|E|))$.
If we want to find a minimum spanning tree instead, we simply "invert" the weight function by subtracting it from a large constant. More specifically, let $w_{\text{min}}(x)=W-w(x)$, where $W$ exceeds the total weight over all graph edges. Many more optimization problems about all sorts of matroids and weight functions can be solved in this trivial way, although in many cases more efficient algorithms can be found that exploit more specialized properties.
Matroid requirement
Note also that if we take a set $I$ of "independent" sets which is a down-set but not a matroid, then the greedy algorithm will not always work. For then there are independent sets $I_{1}$ and $I_{2}$ with $|I_{1}|<|I_{2}|$, but such that for no $e\in I_{2}\setminus I_{1}$ is $I_{1}\cup e$ independent.
Pick an $\epsilon >0$ and $\tau >0$ such that $(1+2\epsilon )|I_{1}|+\tau |E|<|I_{2}|$. Weight the elements of $I_{1}\cap I_{2}$ in the range $2$ to $2+2\epsilon $, the elements of $I_{1}\setminus I_{2}$ in the range $1+\epsilon $ to $1+2\epsilon $, the elements of $I_{2}\setminus I_{1}$ in the range $1$ to $1+\epsilon $, and the rest in the range $0$ to $\tau $. The greedy algorithm will select the elements of $I_{1}$, and then cannot pick any elements of $I_{2}\setminus I_{1}$. Therefore, the independent set it constructs will be of weight at most $(1+2\epsilon )|I_{1}|+\tau |E|+|I_{1}\cap I_{2}|$, which is smaller than the weight of $I_{2}$.
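A concrete miniature of this failure (the ground set and weights below are invented for illustration): take the down-set whose maximal "independent" sets are {a} and {b, c}. The exchange property fails, and greedy grabs the single heavy element instead of the heavier pair.

```python
# "Independent" sets form a down-set but not a matroid: the maximal
# sets {a} and {b, c} have different sizes, yet no element of {b, c}
# can be added to {a} while staying independent.
independent = [frozenset(s) for s in
               [(), ("a",), ("b",), ("c",), ("b", "c")]]
weights = {"a": 3, "b": 2, "c": 2}

def greedy():
    chosen = set()
    for x in sorted(weights, key=weights.get, reverse=True):
        if frozenset(chosen | {x}) in independent:
            chosen.add(x)
    return chosen

picked = greedy()
greedy_weight = sum(weights[x] for x in picked)
best_weight = max(sum(weights[x] for x in s) for s in independent)
print(picked, greedy_weight, best_weight)  # greedy gets weight 3, optimum is 4
```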
History
It was not until 1971 that Jack Edmonds connected weighted matroids to greedy algorithms in his paper "Matroids and the greedy algorithm". Korte and Lovász would generalize these ideas to objects called greedoids, which allow even larger classes of problems to be solved by greedy algorithms.
References
• Jack Edmonds. Matroids and the Greedy Algorithm. Mathematical Programming, volume 1, p. 125–136. 1971.
|
Weighted product model
The weighted product model (WPM) is a popular multi-criteria decision analysis (MCDA) / multi-criteria decision making (MCDM) method. It is similar to the weighted sum model (WSM); the main difference is that its main mathematical operation is multiplication instead of addition.
Description
As with all MCDA / MCDM methods, given is a finite set of decision alternatives described in terms of a number of decision criteria. Each decision alternative is compared with the others by multiplying a number of ratios, one for each decision criterion. Each ratio is raised to the power equivalent to the relative weight of the corresponding criterion.
Suppose that a given MCDA problem is defined on m alternatives and n decision criteria. Furthermore, let us assume that all the criteria are benefit criteria. That is, the higher the values are, the better it is. Next suppose that wj denotes the relative weight of importance of the criterion Cj and aij is the performance value of alternative Ai when it is evaluated in terms of criterion Cj. Then, if one wishes to compare the two alternatives AK and AL (where m ≥ K, L ≥ 1) then, the following product has to be calculated:[1]
$P(A_{K}/A_{L})=\prod _{j=1}^{n}(a_{Kj}/a_{Lj})^{w_{j}},{\text{ for }}K,L=1,2,3,\dots ,m.$
If the ratio P(AK/AL) is greater than or equal to the value 1, then it indicates that alternative AK is more desirable than alternative AL (in the maximization case). If we are interested in determining the best alternative, then the best alternative is the one that is better than or at least equal to all other alternatives.
The WPM is often called dimensionless analysis because its mathematical structure eliminates any units of measure.[1][2]
Therefore, the WPM can be used in single- and multi-dimensional MCDA / MCDM problems. That is, on decision problems where the alternatives are described in terms that use different units of measurement. An advantage of this method is that instead of the actual values it can use relative ones.
The following is a simple numerical example which illustrates how the calculations for this method can be carried out. As data we use the same numerical values as in the numerical example described for the weighted sum model. These numerical data are repeated next for easier reference.
Example
This simple decision problem is based on three alternatives denoted as A1, A2, and A3 each described in terms of four criteria C1, C2, C3 and C4. Next, let the numerical data for this problem be as in the following decision matrix:
            C1     C2     C3     C4
Weights     0.20   0.15   0.40   0.25
A1          25     20     15     30
A2          10     30     20     30
A3          30     10     30     10
The above table specifies that the relative weight of the first criterion is 0.20, the relative weight for the second criterion is 0.15 and so on. Similarly, the value of the first alternative (i.e., A1) in terms of the first criterion is equal to 25, the value of the same alternative in terms of the second criterion is equal to 20 and so on. However, now the restriction to express all criteria in terms of the same measurement unit is not needed. That is, the numbers under each criterion may be expressed in different units.
When the WPM is applied on the previous data, then the following values are derived:
$P(A_{1}/A_{2})=(25/10)^{0.20}\times (20/30)^{0.15}\times (15/20)^{0.40}\times (30/30)^{0.25}=1.007>1.$
Similarly, we also get:
$P(A_{1}/A_{3})=1.067>1,{\text{ and }}P(A_{2}/A_{3})=1.059>1.\,$
Therefore, the best alternative is A1, since it is superior to all the other alternatives. Furthermore, the following ranking of all three alternatives is as follows: A1 > A2 > A3 (where the symbol ">" stands for "better than").
An alternative approach with the WPM method is for the decision maker to use only products without the previous ratios.[1][2] That is, to use the following variant of main formula given earlier:
$P(A_{K})=\prod _{j=1}^{n}(a_{Kj})^{w_{j}},{\text{ for }}K=1,2,3,\dots ,m.$
In the previous expression the term P(AK) denotes the total performance value (i.e., not a relative one) of alternative AK when all the criteria are considered simultaneously under the WPM model. Then, when the previous data are used, exactly the same ranking is derived. Some interesting properties of this method are discussed in the 2000 book by Triantaphyllou on MCDA / MCDM.[1]
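Both the pairwise-ratio formula and the ratio-free variant can be sketched with the example data (variable names here are illustrative):

```python
def wpm_ratio(ak, al, w):
    """Pairwise comparison P(A_K / A_L) = prod_j (a_Kj / a_Lj) ** w_j."""
    r = 1.0
    for akj, alj, wj in zip(ak, al, w):
        r *= (akj / alj) ** wj
    return r

def wpm_score(a, w):
    """Ratio-free total performance P(A_K) = prod_j a_Kj ** w_j."""
    p = 1.0
    for aj, wj in zip(a, w):
        p *= aj ** wj
    return p

w = [0.20, 0.15, 0.40, 0.25]
alts = {"A1": [25, 20, 15, 30],
        "A2": [10, 30, 20, 30],
        "A3": [30, 10, 30, 10]}

# Pairwise ratios reproduce the values in the text (to three decimals).
assert round(wpm_ratio(alts["A1"], alts["A2"], w), 3) == 1.007
assert round(wpm_ratio(alts["A1"], alts["A3"], w), 3) == 1.067

# The ratio-free scores yield the same ranking A1 > A2 > A3.
ranking = sorted(alts, key=lambda k: wpm_score(alts[k], w), reverse=True)
print(ranking)
```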
History
Some of the first references to this method are due to Bridgman[3] and Miller and Starr.[4] The tutorial article by Tofallis describes its advantages over the weighted sum approach.[5]
See also
• Ordinal Priority Approach
• Weighted sum model
More details on this method are given in the MCDM book by Triantaphyllou.[1]
References
1. Triantaphyllou, E. (2000). Multi-Criteria Decision Making: A Comparative Study. Dordrecht, The Netherlands: Kluwer Academic Publishers (now Springer). p. 320. ISBN 0-7923-6607-7.
2. Triantaphyllou, E.; S.H. Mann (1989). "An Examination of the Effectiveness of Multi-Dimensional Decision-Making Methods: A Decision-Making Paradox". International Journal of Decision Support Systems. 5 (3): 303–312. doi:10.1016/0167-9236(89)90037-7. Retrieved 2010-06-25.
3. Bridgman, P.W. (1922). Dimensional Analysis. New Haven, CT, U.S.A.: Yale University Press.
4. Miller, D.W.; M.K. Starr (1969). Executive Decisions and Operations Research. Englewood Cliffs, NJ, U.S.A.: Prentice-Hall, Inc.
5. Tofallis, C. (2014). Add or multiply? A tutorial on ranking and choosing with multiple criteria. INFORMS Transactions on Education, 14(3), 109-119.
|
Weighted projective space
In algebraic geometry, a weighted projective space P(a0,...,an) is the projective variety Proj(k[x0,...,xn]) associated to the graded ring k[x0,...,xn] where the variable xk has degree ak.
Properties
• If d is a positive integer then P(a0,a1,...,an) is isomorphic to P(da0,da1,...,dan). This is a property of the Proj construction; geometrically it corresponds to the d-tuple Veronese embedding. So without loss of generality one may assume that the degrees ai have no common factor.
• Suppose that a0,a1,...,an have no common factor, and that d is a common factor of all the ai with i≠j, then P(a0,a1,...,an) is isomorphic to P(a0/d,...,aj-1/d,aj,aj+1/d,...,an/d) (note that d is coprime to aj; otherwise the isomorphism does not hold). So one may further assume that any set of n variables ai have no common factor. In this case the weighted projective space is called well-formed.
• The only singularities of weighted projective space are cyclic quotient singularities.
• A weighted projective space is a Q-Fano variety[1] and a toric variety.
• The weighted projective space P(a0,a1,...,an) is isomorphic to the quotient of projective space by the group that is the product of the groups of roots of unity of orders a0,a1,...,an acting diagonally.[2]
References
1. M. Rossi and L. Terracini, Linear algebra and toric data of weighted projective spaces. Rend. Semin. Mat. Univ. Politec. Torino 70 (2012), no. 4, 469--495, proposition 8
2. This should be understood as a GIT quotient. In a more general setting, one can speak of a weighted projective stack. See https://mathoverflow.net/questions/136888/.
• Dolgachev, Igor (1982), "Weighted projective varieties", Group actions and vector fields (Vancouver, B.C., 1981), Lecture Notes in Math., vol. 956, Berlin: Springer, pp. 34–71, CiteSeerX 10.1.1.169.5185, doi:10.1007/BFb0101508, ISBN 978-3-540-11946-3, MR 0704986
• Hosgood, Timothy (2016), An introduction to varieties in weighted projective space, arXiv:1604.02441, Bibcode:2016arXiv160402441H
• Reid, Miles (2002), Graded rings and varieties in weighted projective space (PDF)
|
Weighted space
In functional analysis, a weighted space is a space of functions under a weighted norm, which is a finite norm (or semi-norm) that involves multiplication by a particular function referred to as the weight.
Weights can be used to expand or reduce a space of considered functions. For example, in the space of functions from a set $U\subset \mathbb {R} $ to $\mathbb {R} $ under the norm $\|\cdot \|_{U}$ defined by $\|f\|_{U}=\sup _{x\in U}{|f(x)|}$, unbounded functions are excluded, since their sup-norm is infinite. However, the weighted norm $\|f\|=\sup _{x\in U}{\left|f(x){\tfrac {1}{1+x^{2}}}\right|}$ is finite for many more functions, so the associated space contains more functions. Conversely, the weighted norm $\|f\|=\sup _{x\in U}{\left|f(x)(1+x^{4})\right|}$ is finite for far fewer functions.
When the weight is of the form ${\tfrac {1}{1+x^{m}}}$, the weighted space is called polynomial-weighted.[1]
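A rough numerical illustration (grid-based, so it only approximates the supremum over an interval): under the plain sup-norm the function f(x) = x is unbounded on the real line, but its polynomial-weighted norm with weight 1/(1 + x²) is finite.

```python
# Approximate sup-norms on a finite grid over [-100, 100].
grid = [i / 10 for i in range(-1000, 1001)]

f = lambda x: x                        # unbounded in the plain sup-norm
weight = lambda x: 1 / (1 + x ** 2)    # polynomial weight 1/(1 + x^m), m = 2

plain = max(abs(f(x)) for x in grid)             # grows with the grid size
weighted = max(abs(f(x) * weight(x)) for x in grid)

# |x| / (1 + x^2) attains its maximum 1/2 at x = +-1.
print(plain, weighted)
```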
References
1. Walczak, Zbigniew (2005). "On the rate of convergence for some linear operators" (PDF). Hiroshima Mathematical Journal. 35: 115–124. doi:10.32917/hmj/1150922488.
• Kudryavtsev, L.D. (2001) [1994], "Weighted space", in Michiel Hazewinkel (ed.), Encyclopedia of Mathematics, EMS Press
|
Weighted sum model
In decision theory, the weighted sum model (WSM),[1][2] also called weighted linear combination (WLC)[3] or simple additive weighting (SAW),[4] is the best known and simplest multi-criteria decision analysis (MCDA) / multi-criteria decision making method for evaluating a number of alternatives in terms of a number of decision criteria.
Description
In general, suppose that a given MCDA problem is defined on m alternatives and n decision criteria. Furthermore, let us assume that all the criteria are benefit criteria, that is, the higher the values are, the better it is. Next suppose that wj denotes the relative weight of importance of the criterion Cj and aij is the performance value of alternative Ai when it is evaluated in terms of criterion Cj. Then, the total (i.e., when all the criteria are considered simultaneously) importance of alternative Ai, denoted as AiWSM-score, is defined as follows:
$A_{i}^{\text{WSM-score}}=\sum _{j=1}^{n}w_{j}a_{ij},{\text{ for }}i=1,2,3,\dots ,m.$
For the maximization case, the best alternative is the one that yields the maximum total performance value.[2]
The WSM is applicable only when all the data are expressed in exactly the same unit. If this is not the case, then the final result is equivalent to "adding apples and oranges."
Example
For a simple numerical example suppose that a decision problem of this type is defined on three alternative choices A1, A2, A3 each described in terms of four criteria C1, C2, C3 and C4. Furthermore, let the numerical data for this problem be as in the following decision matrix:
Criteria WSM
Score
C1 C2 C3 C4
Weighting 0.200.150.400.25 –
Choice A1 2520153021.50
Choice A2 1030203022.00
Choice A3 3010301022.00
For instance, the relative weight of the first criterion is equal to 0.20, the relative weight for the second criterion is 0.15 and so on. Similarly, the value of the first alternative (i.e., A1) in terms of the first criterion is equal to 25, the value of the same alternative in terms of the second criterion is equal to 20 and so on.
When the previous formula is applied on these numerical data the WSM scores for the three alternatives are:
$A_{1}^{\text{WSM-score}}=25\times 0.20+20\times 0.15+15\times 0.40+30\times 0.25=21.50.$
Similarly, one gets:
$A_{2}^{\text{WSM-score}}=22.00,{\text{ and }}A_{3}^{\text{WSM-score}}=22.00.$
Thus, the best choice (in the maximization case) is either alternative A2 or A3 (because they both have the maximum WSM score which is equal to 22.00). These numerical results imply the following ranking of these three alternatives: A2 = A3 > A1 (where the symbol ">" stands for "greater than").
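The calculation above can be sketched directly (variable names are illustrative):

```python
def wsm_score(a, w):
    """A_i^WSM-score = sum_j w_j * a_ij."""
    return sum(wj * aj for wj, aj in zip(w, a))

w = [0.20, 0.15, 0.40, 0.25]
alts = {"A1": [25, 20, 15, 30],
        "A2": [10, 30, 20, 30],
        "A3": [30, 10, 30, 10]}

scores = {k: wsm_score(a, w) for k, a in alts.items()}
# A1 -> 21.50, A2 -> 22.00, A3 -> 22.00 (up to float rounding),
# so A2 and A3 tie for best in the maximization case.
print(scores)
```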
See also
• Decision-making software
• Weighted product model
References
1. Fishburn, P.C. (1967). "Additive Utilities with Incomplete Product Set: Applications to Priorities and Assignments". Journal of the Operations Research Society of America. doi:10.1287/opre.15.3.537.
2. Triantaphyllou, E. (2000). Multi-Criteria Decision Making: A Comparative Study. Dordrecht, The Netherlands: Kluwer Academic Publishers (now Springer). p. 320. ISBN 0-7923-6607-7.
3. Malczewski, Jacek; Rinner, Claus (2015). Multicriteria Decision Analysis in Geographic Information Science. New York (USA), Heidelberg (Germany), Dordrecht (The Netherlands), London (UK): Springer. doi:10.1007/978-3-540-74757-4. ISBN 978-3-540-86875-0. S2CID 126734355. Retrieved May 31, 2020.
4. Churchman, Charles W.; Ackoff, Russell L.; Smith, Nicolas M. (1954). "An approximate measure of value". Journal of the Operations Research Society of America. 2 (2): 172–187. doi:10.1287/opre.2.2.172. Retrieved May 31, 2020.
|
Weighting pattern
A weighting pattern for a linear dynamical system describes the relationship between an input $u$ and output $y$. Given the time-variant system described by
${\dot {x}}(t)=A(t)x(t)+B(t)u(t)$
$y(t)=C(t)x(t)$,
then the output can be written as
$y(t)=y(t_{0})+\int _{t_{0}}^{t}T(t,\sigma )u(\sigma )d\sigma $,
where $T(\cdot ,\cdot )$ is the weighting pattern for the system. For such a system, the weighting pattern is $T(t,\sigma )=C(t)\phi (t,\sigma )B(\sigma )$ such that $\phi $ is the state transition matrix.
A weighting pattern determines the input–output behavior of a system, and if one state-space realization of a given weighting pattern exists, then infinitely many realizations exist.[1]
Linear time invariant system
For an LTI system, the weighting pattern is:
Continuous
$T(t,\sigma )=Ce^{A(t-\sigma )}B$
where $e^{A(t-\sigma )}$ is the matrix exponential.
Discrete
$T(k,l)=CA^{k-l-1}B$.
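For a scalar LTI system (the values of A, B, C below are hypothetical), both formulas reduce to elementary expressions:

```python
import math

# Scalar LTI system x' = a x + b u, y = c x (illustrative values).
a, b, c = -0.5, 2.0, 3.0

def T_continuous(t, sigma):
    """Continuous-time weighting pattern C e^{A(t - sigma)} B."""
    return c * math.exp(a * (t - sigma)) * b

def T_discrete(k, l):
    """Discrete-time weighting pattern C A^{k - l - 1} B."""
    return c * a ** (k - l - 1) * b

# T depends only on the difference t - sigma, as expected for an LTI system.
assert abs(T_continuous(3.0, 1.0) - T_continuous(5.0, 3.0)) < 1e-12
print(T_continuous(1.0, 0.0), T_discrete(2, 1))
```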
References
1. Brockett, Roger W. (1970). Finite Dimensional Linear Systems. John Wiley & Sons. ISBN 978-0-471-10585-5.
|