Multiple kernel learning
Multiple kernel learning refers to a set of machine learning methods that use a predefined set of kernels and learn an optimal linear or non-linear combination of kernels as part of the algorithm. Reasons to use multiple kernel learning include a) the ability to select an optimal kernel and parameters from a larger set of kernels, reducing bias due to kernel selection while allowing for more automated machine learning methods, and b) combining data from different sources (e.g. sound and images from a video) that have different notions of similarity and thus require different kernels. Instead of creating a new kernel, multiple kernel algorithms can be used to combine kernels already established for each individual data source.
Multiple kernel learning approaches have been used in many applications, such as event recognition in video,[1] object recognition in images,[2] and biomedical data fusion.[3]
Algorithms
Multiple kernel learning algorithms have been developed for supervised, semi-supervised, and unsupervised learning. Most work has been done on the supervised case with linear combinations of kernels; however, many algorithms have been developed. The basic idea behind multiple kernel learning algorithms is to add an extra parameter to the minimization problem of the learning algorithm. As an example, consider the case of supervised learning of a linear combination of a set of $n$ kernels $K_{1},\ldots ,K_{n}$. We introduce a new kernel $K'=\sum _{i=1}^{n}\beta _{i}K_{i}$, where $\beta $ is a vector of coefficients, one for each kernel. Because a nonnegative linear combination of kernels is again a kernel (a consequence of the theory of reproducing kernel Hilbert spaces), this new function is still a kernel. For a set of data $X$ with labels $Y$, the minimization problem can then be written as
$\min _{\beta ,c}\mathrm {E} (Y,K'c)+R(K,c)$
where $\mathrm {E} $ is an error function and $R$ is a regularization term. $\mathrm {E} $ is typically the square loss function (Tikhonov regularization) or the hinge loss function (for SVM algorithms), and $R$ is usually an $\ell _{n}$ norm or some combination of norms (e.g. elastic net regularization). This optimization problem can then be solved by standard optimization methods. Adaptations of existing techniques such as sequential minimal optimization have also been developed for multiple kernel SVM-based methods.[4]
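When $\mathrm {E} $ is the square loss and $R$ is the RKHS norm, the inner problem has a closed form (kernel ridge regression). The NumPy sketch below fixes the weights $\beta $ by hand rather than optimizing over them, just to illustrate how the combined Gram matrix $K'$ is built and used; the kernels, bandwidths, weights, and data are illustrative assumptions, not part of the source.

```python
import numpy as np

def rbf_gram(X, gamma):
    """RBF Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def combine_kernels(kernels, beta):
    """Linear combination K' = sum_i beta_i K_i (nonnegative beta keeps K' PSD)."""
    return sum(b * K for b, K in zip(beta, kernels))

def kernel_ridge_fit(K, y, lam):
    """Square loss + RKHS-norm regularizer: closed form c = (K + lam I)^{-1} y."""
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0] + X[:, 1])                   # toy binary labels in {-1, +1}

kernels = [rbf_gram(X, 0.5), rbf_gram(X, 2.0)]   # two "sources"
beta = np.array([0.7, 0.3])                      # fixed illustrative weights
K = combine_kernels(kernels, beta)
c = kernel_ridge_fit(K, y, lam=0.01)
train_acc = np.mean(np.sign(K @ c) == y)
```

A full MKL method would optimize over $\beta $ as well, e.g. by alternating between solving for $c$ with $\beta $ fixed and updating $\beta $.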
Supervised learning
For supervised learning, there are many other algorithms that use different methods to learn the form of the kernel. The following categorization has been proposed by Gönen and Alpaydın (2011).[5]
Fixed rules approaches
Fixed rules approaches, such as the linear combination algorithm described above, use parameter-free rules such as summation and multiplication to combine the kernels, so no kernel weighting needs to be learned. Other examples of fixed rules include pairwise kernels, which are of the form
$k((x_{1i},x_{1j}),(x_{2i},x_{2j}))=k(x_{1i},x_{2i})k(x_{1j},x_{2j})+k(x_{1i},x_{2j})k(x_{1j},x_{2i})$.
These pairwise approaches have been used in predicting protein-protein interactions.[6]
Heuristic approaches
These algorithms use a combination function that is parameterized. The parameters are generally defined for each individual kernel based on single-kernel performance or some computation from the kernel matrix. Examples of these include the kernel from Tanabe et al. (2008).[7] Letting $\pi _{m}$ be the accuracy obtained using only $K_{m}$, and letting $\delta $ be a threshold less than the minimum of the single-kernel accuracies, we can define
$\beta _{m}={\frac {\pi _{m}-\delta }{\sum _{h=1}^{n}(\pi _{h}-\delta )}}$
Other approaches use a definition of kernel similarity, such as
$A(K_{1},K_{2})={\frac {\langle K_{1},K_{2}\rangle }{\sqrt {\langle K_{1},K_{1}\rangle \langle K_{2},K_{2}\rangle }}}$
Using this measure, Qiu and Lane (2009)[8] used the following heuristic to define
$\beta _{m}={\frac {A(K_{m},YY^{T})}{\sum _{h=1}^{n}A(K_{h},YY^{T})}}$
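Both heuristics reduce to a few lines of linear algebra over precomputed Gram matrices. The sketch below computes accuracy-based weights in the style of Tanabe et al. and alignment-based weights in the style of Qiu and Lane; the accuracies, labels, and kernels are made-up illustrations.

```python
import numpy as np

def accuracy_weights(accs, delta):
    """Tanabe-style heuristic: beta_m proportional to (pi_m - delta), where
    delta lies below the minimum single-kernel accuracy."""
    w = np.asarray(accs, dtype=float) - delta
    return w / w.sum()

def alignment(K1, K2):
    """Kernel alignment A(K1, K2) = <K1,K2> / sqrt(<K1,K1><K2,K2>) under the
    Frobenius inner product."""
    return np.sum(K1 * K2) / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

def alignment_weights(kernels, y):
    """Qiu-and-Lane-style heuristic: beta_m proportional to A(K_m, y y^T)."""
    yyT = np.outer(y, y)
    a = np.array([alignment(K, yyT) for K in kernels])
    return a / a.sum()

beta_acc = accuracy_weights([0.80, 0.60, 0.70], delta=0.5)      # [1/2, 1/6, 1/3]
y = np.array([1.0, -1.0, 1.0, -1.0])
beta_align = alignment_weights([np.outer(y, y), np.eye(4)], y)  # [2/3, 1/3]
```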
Optimization approaches
These approaches solve an optimization problem to determine parameters for the kernel combination function. This has been done with similarity measures and structural risk minimization approaches. For similarity measures such as the one defined above, the problem can be formulated as follows:[9]
$\max _{\beta ,\operatorname {tr} (K'_{tra})=1,K'\geq 0}A(K'_{tra},YY^{T}).$
where $K'_{tra}$ is the kernel of the training set.
Structural risk minimization approaches that have been used include linear approaches, such as that used by Lanckriet et al. (2002).[10] We can define the implausibility of a kernel $\omega (K)$ to be the value of the objective function after solving a canonical SVM problem. We can then solve the following minimization problem:
$\min _{\operatorname {tr} (K'_{tra})=c}\omega (K'_{tra})$
where $c$ is a positive constant. Many other variations exist on the same idea, with different methods of refining and solving the problem, e.g. with nonnegative weights for individual kernels and using non-linear combinations of kernels.
Bayesian approaches
Bayesian approaches put priors on the kernel parameters and learn the parameter values from the priors and the base algorithm. For example, the decision function can be written as
$f(x)=\sum _{i=0}^{n}\alpha _{i}\sum _{m=1}^{p}\eta _{m}K_{m}(x_{i}^{m},x^{m})$
$\eta $ can be modeled with a Dirichlet prior and $\alpha $ can be modeled with a zero-mean Gaussian and an inverse gamma variance prior. This model is then optimized using a customized multinomial probit approach with a Gibbs sampler.[11] These methods have been used successfully in applications such as protein fold recognition and protein homology problems.[12][13]
Boosting approaches
Boosting approaches add new kernels iteratively until some stopping criterion that is a function of performance is reached. An example of this is the MARK model developed by Bennett et al. (2002):[14]
$f(x)=\sum _{i=1}^{N}\sum _{m=1}^{P}\alpha _{i}^{m}K_{m}(x_{i}^{m},x^{m})+b$
The parameters $\alpha _{i}^{m}$ and $b$ are learned by gradient descent on a coordinate basis. In this way, each iteration of the descent algorithm identifies the best kernel column to choose at each particular iteration and adds that to the combined kernel. The model is then rerun to generate the optimal weights $\alpha _{i}$ and $b$.
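A minimal sketch of the idea follows, assuming a matching-pursuit-style simplification rather than Bennett et al.'s exact MARK procedure: at each iteration the single kernel column best correlated with the current residual receives a one-dimensional least-squares coefficient, and the fit is updated. The data and kernels are toy illustrations.

```python
import numpy as np

def greedy_kernel_boost(kernels, y, n_iter):
    """Greedily build f = sum of chosen columns alpha * K_m[:, j].  Each
    iteration selects the (kernel, column) pair whose column reduces the
    squared error of the residual the most, fits its coefficient by
    one-dimensional least squares, and updates the fit."""
    f = np.zeros(len(y))
    chosen = []                                  # (kernel index, column, alpha)
    for _ in range(n_iter):
        r = y - f
        best = None
        for m, K in enumerate(kernels):
            for j in range(K.shape[1]):
                col = K[:, j]
                denom = col @ col
                if denom == 0.0:
                    continue
                alpha = (col @ r) / denom        # 1-D least-squares step
                gain = (col @ r) ** 2 / denom    # squared-error reduction
                if best is None or gain > best[0]:
                    best = (gain, m, j, alpha, col)
        _, m, j, alpha, col = best
        f = f + alpha * col
        chosen.append((m, j, alpha))
    return chosen, f

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -1.0, 0.5])
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
kernels = [X @ X.T, np.exp(-0.5 * sq)]           # linear + RBF Gram matrices
chosen, f = greedy_kernel_boost(kernels, y, n_iter=10)
```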
Semisupervised learning
Semisupervised learning approaches to multiple kernel learning are similar to other extensions of supervised learning approaches. An inductive procedure has been developed that uses a log-likelihood empirical loss and group LASSO regularization with conditional expectation consensus on unlabeled data for image categorization. We can define the problem as follows. Let $L=\{(x_{i},y_{i})\}$ be the labeled data, and let $U=\{x_{i}\}$ be the set of unlabeled data. Then, we can write the decision function as follows.
$f(x)=\alpha _{0}+\sum _{i=1}^{|L|}\alpha _{i}K_{i}(x)$
The problem can be written as
$\min _{f}L(f)+\lambda R(f)+\gamma \Theta (f)$
where $L$ is the loss function (a weighted negative log-likelihood in this case), $R$ is the regularization term (group LASSO in this case), and $\Theta $ is the conditional expectation consensus (CEC) penalty on unlabeled data. The CEC penalty is defined as follows. Let the marginal kernel density for all the data be
$g_{m}^{\pi }(x)=\langle \phi _{m}^{\pi },\psi _{m}(x)\rangle $
where $\psi _{m}(x)=[K_{m}(x_{1},x),\ldots ,K_{m}(x_{L},x)]^{T}$ (the kernel distance between the labeled data and all of the labeled and unlabeled data) and $\phi _{m}^{\pi }$ is a non-negative random vector with a 2-norm of 1. The value of $\Pi $ is the number of times each kernel is projected. Expectation regularization is then performed on the MKD, resulting in a reference expectation $q_{m}^{\pi }(y|g_{m}^{\pi }(x))$ and model expectation $p_{m}^{\pi }(f(x)|g_{m}^{\pi }(x))$. Then, we define
$\Theta ={\frac {1}{\Pi }}\sum _{\pi =1}^{\Pi }\sum _{m=1}^{M}D(q_{m}^{\pi }(y|g_{m}^{\pi }(x))||p_{m}^{\pi }(f(x)|g_{m}^{\pi }(x)))$
where $D(Q||P)=\sum _{i}Q(i)\ln {\frac {Q(i)}{P(i)}}$ is the Kullback-Leibler divergence. The combined minimization problem is optimized using a modified block gradient descent algorithm. For more information, see Wang et al.[15]
Unsupervised learning
Unsupervised multiple kernel learning algorithms have also been proposed by Zhuang et al. The problem is defined as follows. Let $U=\{x_{i}\}$ be a set of unlabeled data. The kernel definition is the linear combined kernel $K'=\sum _{m=1}^{M}\beta _{m}K_{m}$. In this problem, the data needs to be "clustered" into groups based on the kernel distances. Let $B_{i}$ be a group or cluster of which $x_{i}$ is a member. We define the loss function as $\sum _{i=1}^{n}\left\Vert x_{i}-\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})x_{j}\right\Vert ^{2}$. Furthermore, we minimize the distortion by minimizing $\sum _{i=1}^{n}\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})\left\Vert x_{i}-x_{j}\right\Vert ^{2}$. Finally, we add a regularization term to avoid overfitting. Combining these terms, we can write the minimization problem as follows.
$\min _{\beta ,B}\sum _{i=1}^{n}\left\Vert x_{i}-\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})x_{j}\right\Vert ^{2}+\gamma _{1}\sum _{i=1}^{n}\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})\left\Vert x_{i}-x_{j}\right\Vert ^{2}+\gamma _{2}\sum _{i}|B_{i}|$
One formulation of the groups is as follows. Let $D\in \{0,1\}^{n\times n}$ be a matrix such that $D_{ij}=1$ means that $x_{i}$ and $x_{j}$ are neighbors. Then, $B_{i}=\{x_{j}:D_{ij}=1\}$. Note that these groups must be learned as well. Zhuang et al. solve this problem by an alternating minimization method over $K$ and the groups $B_{i}$. For more information, see Zhuang et al.[16]
Libraries
Available MKL libraries include
• SPG-GMKL: A scalable C++ MKL SVM library that can handle a million kernels.[17]
• GMKL: Generalized Multiple Kernel Learning code in MATLAB, does $\ell _{1}$ and $\ell _{2}$ regularization for supervised learning.[18]
• (Another) GMKL: A different MATLAB MKL code that can also perform elastic net regularization[19]
• SMO-MKL: C++ source code for a Sequential Minimal Optimization MKL algorithm. Does $p$-norm regularization.[20]
• SimpleMKL: A MATLAB code based on the SimpleMKL algorithm for MKL SVM.[21]
• MKLPy: A scikit-compliant Python framework for MKL and kernel machines, with different algorithms, e.g. EasyMKL[22] and others.
References
1. Lin Chen, Lixin Duan, and Dong Xu, "Event Recognition in Videos by Learning From Heterogeneous Web Sources," in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2666-2673
2. Serhat S. Bucak, Rong Jin, and Anil K. Jain, Multiple Kernel Learning for Visual Object Recognition: A Review. T-PAMI, 2013.
3. Yu et al. L2-norm multiple kernel learning and its application to biomedical data fusion. BMC Bioinformatics 2010, 11:309
4. Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. 2004. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the twenty-first international conference on Machine learning (ICML '04). ACM, New York, NY, USA
5. Mehmet Gönen, Ethem Alpaydın. Multiple Kernel Learning Algorithms Jour. Mach. Learn. Res. 12(Jul):2211−2268, 2011
6. Ben-Hur, A. and Noble W.S. Kernel methods for predicting protein-protein interactions. Bioinformatics. 2005 Jun;21 Suppl 1:i38-46.
7. Hiroaki Tanabe, Tu Bao Ho, Canh Hao Nguyen, and Saori Kawasaki. Simple but effective methods for combining kernels in computational biology. In Proceedings of IEEE International Conference on Research, Innovation and Vision for the Future, 2008.
8. Shibin Qiu and Terran Lane. A framework for multiple kernel support vector regression and its applications to siRNA efficacy prediction. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 6(2):190–199, 2009
9. Gert R. G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004a
10. Gert R. G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. In Proceedings of the 19th International Conference on Machine Learning, 2002
11. Mark Girolami and Simon Rogers. Hierarchic Bayesian models for kernel learning. In Proceedings of the 22nd International Conference on Machine Learning, 2005
12. Theodoros Damoulas and Mark A. Girolami. Combining feature spaces for classification. Pattern Recognition, 42(11):2671–2683, 2009
13. Theodoros Damoulas and Mark A. Girolami. Probabilistic multi-class multi-kernel learning: On protein fold recognition and remote homology detection. Bioinformatics, 24(10):1264–1270, 2008
14. Kristin P. Bennett, Michinari Momma, and Mark J. Embrechts. MARK: A boosting algorithm for heterogeneous kernel models. In Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002
15. Wang, Shuhui et al. S3MKL: Scalable Semi-Supervised Multiple Kernel Learning for Real-World Image Applications. IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 14, NO. 4, AUGUST 2012
16. J. Zhuang, J. Wang, S.C.H. Hoi & X. Lan. Unsupervised Multiple Kernel Learning. Jour. Mach. Learn. Res. 20:129–144, 2011
17. Ashesh Jain, S. V. N. Vishwanathan and Manik Varma. SPG-GMKL: Generalized multiple kernel learning with a million kernels. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Beijing, China, August 2012
18. M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In Proceedings of the International Conference on Machine Learning, Montreal, Canada, June 2009
19. Yang, H., Xu, Z., Ye, J., King, I., & Lyu, M. R. (2011). Efficient Sparse Generalized Multiple Kernel Learning. IEEE Transactions on Neural Networks, 22(3), 433-446
20. S. V. N. Vishwanathan, Z. Sun, N. Theera-Ampornpunt and M. Varma. Multiple kernel learning and the SMO algorithm. In Advances in Neural Information Processing Systems, Vancouver, B. C., Canada, December 2010.
21. Alain Rakotomamonjy, Francis Bach, Stephane Canu, Yves Grandvalet. SimpleMKL. Journal of Machine Learning Research, Microtome Publishing, 2008, 9, pp.2491-2521.
22. Fabio Aiolli, Michele Donini. EasyMKL: a scalable multiple kernel learning algorithm. Neurocomputing, 169, pp.215-224.
Untouchable number
An untouchable number is a positive integer that cannot be expressed as the sum of all the proper divisors of any positive integer (including the untouchable number itself). That is, these numbers are not in the image of the aliquot sum function. Their study goes back at least to Abu Mansur al-Baghdadi (circa 1000 AD), who observed that both 2 and 5 are untouchable.[1]
Unsolved problem in mathematics:
Are there any odd untouchable numbers other than 5?
Examples
The number 4 is not untouchable as it is equal to the sum of the proper divisors of 9: 1 + 3 = 4. The number 5 is untouchable as it is not the sum of the proper divisors of any positive integer: 5 = 1 + 4 is the only way to write 5 as the sum of distinct positive integers including 1, but if 4 divides a number, 2 does also, so 1 + 4 cannot be the sum of all of any number's proper divisors (since the list of factors would have to contain both 4 and 2).
The first few untouchable numbers are
2, 5, 52, 88, 96, 120, 124, 146, 162, 188, 206, 210, 216, 238, 246, 248, 262, 268, 276, 288, 290, 292, 304, 306, 322, 324, 326, 336, 342, 372, 406, 408, 426, 430, 448, 472, 474, 498, ... (sequence A005114 in the OEIS).
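The list above can be reproduced by brute force. Because a composite $m$ has a proper divisor of at least ${\sqrt {m}}$, its aliquot sum is at least $1+{\sqrt {m}}$, so every number $k\leq K$ that is touched at all is touched by some $m\leq (K-1)^{2}$ (primes contribute only the aliquot sum 1). A short Python sieve:

```python
def untouchable_up_to(K):
    """Untouchable numbers <= K, by sieving aliquot sums s(m).

    A composite m has a proper divisor >= sqrt(m), so s(m) >= 1 + sqrt(m);
    hence if s(m) = k <= K then m <= (K - 1)**2.  Primes only ever
    contribute s(p) = 1, so the sieve bound (K - 1)**2 suffices."""
    M = (K - 1) ** 2
    s = [0] * (M + 1)
    for d in range(1, M // 2 + 1):
        for m in range(2 * d, M + 1, d):
            s[m] += d                 # d is a proper divisor of m
    reached = {s[m] for m in range(2, M + 1)}
    return [k for k in range(2, K + 1) if k not in reached]

print(untouchable_up_to(130))         # [2, 5, 52, 88, 96, 120, 124]
```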
Properties
The number 5 is believed to be the only odd untouchable number, but this has not been proven. It would follow from a slightly stronger version of the Goldbach conjecture, since the sum of the proper divisors of pq (with p, q distinct primes) is 1 + p + q. Thus, if a number n can be written as a sum of two distinct primes, then n + 1 is not an untouchable number. It is expected that every even number larger than 6 is a sum of two distinct primes, so probably no odd number larger than 7 is an untouchable number; since $1=\sigma (2)-2$, $3=\sigma (4)-4$, and $7=\sigma (8)-8$, only 5 can be an odd untouchable number.[2] Thus it appears that, besides 2 and 5, all untouchable numbers are composite (since except for 2, all even numbers are composite). No perfect number is untouchable, since, at the very least, it can be expressed as the sum of its own proper divisors. Similarly, none of the amicable numbers or sociable numbers are untouchable. Also, none of the Mersenne numbers are untouchable, since $M_{n}=2^{n}-1$ is equal to the sum of the proper divisors of $2^{n}$.
No untouchable number is one more than a prime number, since if p is prime, then the sum of the proper divisors of $p^{2}$ is p + 1. Also, no untouchable number is three more than a prime number, except 5, since if p is an odd prime then the sum of the proper divisors of 2p is p + 3.
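Both identities in this paragraph are easy to check numerically:

```python
def aliquot(n):
    """Sum of the proper divisors of n."""
    return sum(d for d in range(1, n) if n % d == 0)

for p in (3, 5, 7, 11, 13):           # odd primes
    assert aliquot(p * p) == p + 1    # rules out untouchables of form p + 1
    assert aliquot(2 * p) == p + 3    # rules out untouchables of form p + 3
assert aliquot(2 * 2) == 2 + 1        # p = 2 also satisfies s(p^2) = p + 1
```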
Infinitude
There are infinitely many untouchable numbers, a fact that was proven by Paul Erdős.[3] According to Chen & Zhao, their lower natural density is greater than 0.06.[4]
See also
• Aliquot sequence
• Nontotient
• Noncototient
• Weird number
References
1. Sesiano, J. (1991), "Two problems of number theory in Islamic times", Archive for History of Exact Sciences, 41 (3): 235–238, doi:10.1007/BF00348408, JSTOR 41133889, MR 1107382, S2CID 115235810
2. The stronger version is obtained by adding to the Goldbach conjecture the further requirement that the two primes be distinct—see Adams-Watters, Frank & Weisstein, Eric W. "Untouchable Number". MathWorld.
3. P. Erdős, Über die Zahlen der Form $\sigma (n)-n$ und $n-\phi (n)$. Elemente der Math. 28 (1973), 83–86
4. Yong-Gao Chen and Qing-Qing Zhao, Nonaliquot numbers, Publ. Math. Debrecen 78:2 (2011), pp. 439-442.
• Richard K. Guy, Unsolved Problems in Number Theory (3rd ed), Springer Verlag, 2004 ISBN 0-387-20860-7; section B10.
External links
• OEIS sequence A070015 (Least m such that sum of aliquot parts of m equals n or 0 if no such number exists)
Unusual number
In number theory, an unusual number is a natural number n whose largest prime factor is strictly greater than ${\sqrt {n}}$.
A k-smooth number has all its prime factors less than or equal to k; therefore, an unusual number is not ${\sqrt {n}}$-smooth.
Relation to prime numbers
All prime numbers are unusual. For any prime $p$, its multiples less than $p^{2}$ are unusual: these are $p,2p,\ldots ,(p-1)p$, and they have density $1/p$ in the interval $(p,p^{2})$.
Examples
The first few unusual numbers are
2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29, 31, 33, 34, 35, 37, 38, 39, 41, 42, 43, 44, 46, 47, 51, 52, 53, 55, 57, 58, 59, 61, 62, 65, 66, 67, ... (sequence A064052 in the OEIS)
The first few non-prime (composite) unusual numbers are
6, 10, 14, 15, 20, 21, 22, 26, 28, 33, 34, 35, 38, 39, 42, 44, 46, 51, 52, 55, 57, 58, 62, 65, 66, 68, 69, 74, 76, 77, 78, 82, 85, 86, 87, 88, 91, 92, 93, 94, 95, 99, 102, ... (sequence A063763 in the OEIS)
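A short Python check reproduces both lists, using trial division to find the largest prime factor:

```python
def largest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division."""
    p, last = 2, 1
    while p * p <= n:
        while n % p == 0:
            last, n = p, n // p
        p += 1
    return n if n > 1 else last

def is_unusual(n):
    """n is unusual iff its largest prime factor exceeds sqrt(n)."""
    return largest_prime_factor(n) ** 2 > n

unusual = [n for n in range(2, 70) if is_unusual(n)]
composite_unusual = [n for n in unusual if largest_prime_factor(n) != n]
```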
Distribution
If we denote the number of unusual numbers less than or equal to n by u(n), then u(n) behaves as follows:

n           u(n)        u(n) / n
10          6           0.6
100         67          0.67
1000        715         0.72
10000       7319        0.73
100000      73322       0.73
1000000     731660      0.73
10000000    7280266     0.73
100000000   72467077    0.72
1000000000  721578596   0.72
Richard Schroeppel stated in 1972 that the asymptotic probability that a randomly chosen number is unusual is ln(2). In other words:
$\lim _{n\rightarrow \infty }{\frac {u(n)}{n}}=\ln(2)=0.693147\dots \,.$
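The counting function $u(n)$ from the table above can be recomputed directly; the convergence of $u(n)/n$ toward $\ln(2)\approx 0.6931$ is quite slow:

```python
def u(n):
    """Count the unusual numbers k <= n (largest prime factor > sqrt(k))."""
    count = 0
    for k in range(2, n + 1):
        m, p, lpf = k, 2, 1
        while p * p <= m:
            while m % p == 0:
                lpf, m = p, m // p
            p += 1
        if m > 1:
            lpf = m                   # leftover factor is the largest prime
        if lpf * lpf > k:
            count += 1
    return count

for n in (10, 100, 1000):
    print(n, u(n), u(n) / n)          # ratios 0.6, 0.67, 0.715, well above ln 2
```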
External links
• Weisstein, Eric W. "Rough Number". MathWorld.
Proof mining
In proof theory, a branch of mathematical logic, proof mining (or proof unwinding) is a research program that studies or analyzes formalized proofs, especially in analysis, to obtain explicit bounds, ranges or rates of convergence from proofs that, when expressed in natural language, appear to be nonconstructive.[1] This research has led to improved results in analysis obtained from the analysis of classical proofs.
References
1. Ulrich Kohlenbach (2008). Applied Proof Theory: Proof Interpretations and Their Use in Mathematics. Springer Verlag, Berlin. pp. 1–536.
Further reading
• Ulrich Kohlenbach and Paulo Oliva, "Proof Mining: A systematic way of analysing proofs in mathematics", Proc. Steklov Inst. Math, 242:136–164, 2003
• Paulo Oliva, "Proof Mining in Subsystems of Analysis", BRICS PhD thesis citeseer
Up-and-Down Designs
Up-and-down designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have binary responses: each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the "dose") that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. An example where dose-finding is used is in an experiment to estimate the LD50 of some toxic chemical with respect to mice.
Dose-finding designs are sequential and response-adaptive: the dose at a given point in the experiment depends upon previous outcomes, rather than being fixed a priori. Dose-finding designs are generally more efficient for this task than fixed designs, but their properties are harder to analyze, and some require specialized design software. UDDs use a discrete set of doses rather than vary the dose continuously. They are relatively simple to implement, and are also among the best understood dose-finding designs. Despite this simplicity, UDDs generate random walks with intricate properties.[1] The original UDD aimed to find the median threshold by increasing the dose one level after a "0" response, and decreasing it one level after a "1" response. Hence the name "up-and-down". Other UDDs break this symmetry in order to estimate percentiles other than the median, or are able to treat groups of subjects rather than one at a time.
UDDs were developed in the 1940s by several research groups independently.[2][3][4] The 1950s and 1960s saw rapid diversification with UDDs targeting percentiles other than the median, and expanding into numerous applied fields. The 1970s to early 1990s saw little UDD methods research, even as the design continued to be used extensively. A revival of UDD research since the 1990s has provided deeper understanding of UDDs and their properties,[5] and new and better estimation methods.[6][7]
UDDs are still used extensively in the two applications for which they were originally developed: psychophysics, where they are used to estimate sensory thresholds and are often known as fixed forced-choice staircase procedures,[8] and explosive sensitivity testing, where the median-targeting UDD is often known as the Bruceton test. UDDs are also very popular in toxicity and anesthesiology research.[9] They are also considered a viable choice for Phase I clinical trials.[10]
Mathematical description
Definition
Let $n$ be the sample size of a UDD experiment, and assume for now that subjects are treated one at a time. Then the doses these subjects receive, denoted as random variables $X_{1},\ldots ,X_{n}$, are chosen from a discrete, finite set of $M$ increasing dose levels ${\mathcal {X}}=\left\{d_{1},\ldots ,d_{M}:\ d_{1}<\cdots <d_{M}\right\}.$ Furthermore, if $X_{i}=d_{m}$, then $X_{i+1}\in \{d_{m-1},d_{m},d_{m+1}\},$ according to simple constant rules based on recent responses. The next subject must be treated one level up, one level down, or at the same level as the current subject. The responses themselves are denoted $Y_{1},\ldots ,Y_{n}\in \left\{0,1\right\};$ hereafter the "1" responses are positive and the "0" responses negative. The repeated application of the same rules (known as dose-transition rules) over a finite set of dose levels turns $X_{1},\ldots ,X_{n}$ into a random walk over ${\mathcal {X}}$. Different dose-transition rules produce different UDD "flavors", such as the three shown in the figure above.
Despite the experiment using only a discrete set of dose levels, the dose-magnitude variable itself, $x$, is assumed to be continuous, and the probability of positive response is assumed to increase continuously with increasing $x$. The goal of dose-finding experiments is to estimate the dose $x$ (on a continuous scale) that would trigger positive responses at a pre-specified target rate $\Gamma =P\left\{Y=1\mid X=x\right\},\ \ \Gamma \in (0,1)$; often known as the "target dose". This problem can be also expressed as estimation of the quantile $F^{-1}(\Gamma )$ of a cumulative distribution function describing the dose-toxicity curve $F(x)$. The density function $f(x)$ associated with $F(x)$ is interpretable as the distribution of response thresholds of the population under study.
Transition probability matrix
Given that a subject receives dose $d_{m}$, denote the probability that the next subject receives dose $d_{m-1},d_{m}$, or $d_{m+1}$, as $p_{m,m-1},p_{mm}$ or $p_{m,m+1}$, respectively. These transition probabilities obey the constraints $p_{m,m-1}+p_{mm}+p_{m,m+1}=1$ and the boundary conditions $p_{1,0}=p_{M,M+1}=0$.
Each specific set of UDD rules enables the symbolic calculation of these probabilities, usually as a function of $F(x)$. The transition probabilities are assumed fixed in time, depending only upon the current allocation and its outcome, i.e., upon $\left(X_{i},Y_{i}\right)$ and through them upon $F(x)$ (and possibly on a set of fixed design parameters). The probabilities are then best represented via a tri-diagonal transition probability matrix (TPM) $\mathbf {P} $:
${\bf {{P}=\left({\begin{array}{cccccc}p_{11}&p_{12}&0&\cdots &\cdots &0\\p_{21}&p_{22}&p_{23}&0&\ddots &\vdots \\0&\ddots &\ddots &\ddots &\ddots &\vdots \\\vdots &\ddots &\ddots &\ddots &\ddots &0\\\vdots &\ddots &0&p_{M-1,M-2}&p_{M-1,M-1}&p_{M-1,M}\\0&\cdots &\cdots &0&p_{M,M-1}&p_{MM}\\\end{array}}\right).}}$
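As a concrete sketch, $\mathbf {P}$ can be assembled numerically from per-level 'down' and 'up' probabilities, folding any blocked boundary move into the 'stay' probability as the boundary conditions require. The dose grid and toxicity rates below are illustrative assumptions, using the original rules (down with probability $F(d_{m})$, up with probability $1-F(d_{m})$) described in the "Common UDDs" section:

```python
import numpy as np

def build_tpm(p_down, p_up):
    """Tri-diagonal TPM from per-level down/up probabilities.

    The boundary conditions p_{1,0} = p_{M,M+1} = 0 are enforced by
    folding blocked moves into the 'stay' probability.
    """
    M = len(p_down)
    P = np.zeros((M, M))
    for m in range(M):
        if m > 0:
            P[m, m - 1] = p_down[m]
        if m < M - 1:
            P[m, m + 1] = p_up[m]
        P[m, m] = 1.0 - P[m].sum()   # each row sums to one
    return P

# Illustrative toxicity rates F(d_m) on M = 5 levels, classical rules:
F = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
P = build_tpm(p_down=F, p_up=1.0 - F)
assert np.allclose(P.sum(axis=1), 1.0)
```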
Balance point
Usually, UDD dose-transition rules bring the dose down (or at least bar it from escalating) after positive responses, and vice versa. Therefore, UDD random walks have a central tendency: dose assignments tend to meander back and forth around some dose $x^{*}$ that can be calculated from the transition rules, when those are expressed as a function of $F(x)$.[1] This dose has often been confused with the experiment's formal target $F^{-1}(\Gamma )$; the two are often identical, but they need not be. The target is the dose that the experiment is tasked with estimating, while $x^{*}$, known as the "balance point", is the dose around which the UDD's random walk approximately revolves.[11]
Stationary distribution of dose allocations
Since UDD random walks are regular Markov chains, they generate a stationary distribution of dose allocations, $\pi $, once the effect of the manually-chosen starting dose wears off. This means that long-term visit frequencies to the various doses will approximate a steady state described by $\pi $. According to Markov chain theory the starting-dose effect wears off rather quickly, at a geometric rate.[12] Numerical studies suggest that it typically takes between $2M$ and $4M$ subjects for the effect to wear off nearly completely.[11] $\pi $ is also the asymptotic distribution of cumulative dose allocations.
UDDs' central tendencies ensure that, in the long run, the most frequently visited dose (i.e., the mode of $\pi $) will be one of the two doses closest to the balance point $x^{*}$.[1] If $x^{*}$ is outside the range of allowed doses, then the mode will be at the boundary dose closest to it. Under the original median-finding UDD, the mode will be at the dose closest to $x^{*}$ in any case. Away from the mode, asymptotic visit frequencies decrease sharply, at a faster-than-geometric rate. Thus, even though a UDD experiment is still a random walk, long excursions away from the region of interest are very unlikely.
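These asymptotic statements can be checked numerically: $\pi $ is the left eigenvector of the TPM associated with eigenvalue 1. The sketch below assumes a logistic dose-toxicity curve and the classical rules, both purely illustrative choices:

```python
import numpy as np

# Illustrative 7-level grid with a logistic dose-toxicity curve (an
# assumption for this sketch); classical rules: down w.p. F, up w.p. 1-F.
doses = np.linspace(1.0, 4.0, 7)
F = 1.0 / (1.0 + np.exp(-2.0 * (doses - 2.5)))

M = len(doses)
P = np.zeros((M, M))
for m in range(M):
    if m > 0:
        P[m, m - 1] = F[m]
    if m < M - 1:
        P[m, m + 1] = 1.0 - F[m]
    P[m, m] = 1.0 - P[m].sum()       # blocked boundary moves -> 'stay'

# pi solves pi P = pi: the left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

# The mode of pi sits at the dose closest to the balance point x*,
# which for the classical UDD is where F crosses 0.5 (dose 2.5 here).
mode_dose = doses[np.argmax(pi)]
```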
Common UDDs
Original ("simple" or "classical") UDD
The original "simple" or "classical" UDD moves the dose up one level upon a negative response, and vice versa. Therefore, the transition probabilities are
${\begin{array}{rl}p_{m,m+1}&=P\{Y_{i}=0|X_{i}=d_{m}\}=1-F(d_{m});\\p_{m,m-1}&=P\{Y_{i}=1|X_{i}=d_{m}\}=F(d_{m}).\end{array}}$
We use the original UDD as an example for calculating the balance point $x^{*}$. The design's 'up' and 'down' probability functions are $p(x)=1-F(x)$ and $q(x)=F(x)$. Equating them yields $F^{*}$:
$1-F^{*}=F^{*}\ \longrightarrow \ F^{*}=0.5.$
The "classical" UDD is designed to find the median threshold. This is a case where $F^{*}=\Gamma .$
The "classical" UDD can be seen as a special case of each of the more versatile designs described below.
Durham and Flournoy's biased coin design
This UDD shifts the balance point by adding the option of treating the next subject at the same dose, rather than moving only up or down. Whether to stay is determined by a random toss of a metaphoric "coin" with probability $b=P\{{\textrm {heads}}\}.$ This biased-coin design (BCD) has two "flavors", one for $F^{*}>0.5$ and one for $F^{*}<0.5;$ the rules of the latter are shown below:
$X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'heads'}};\\d_{m-1}&{\textrm {if}}\ \ Y_{i}=1;\\d_{m}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'tails'}}.\end{cases}}$
The heads probability $b$ can take any value in $[0,1]$. The balance point is
${\begin{array}{rcl}b\left(1-F^{*}\right)&=&F^{*}\\F^{*}&=&{\frac {b}{1+b}}\in [0,0.5].\end{array}}$
The BCD balance point can be made identical to the target rate $\Gamma $ by setting the heads probability to $b=\Gamma /(1-\Gamma )$. For example, for $\Gamma =0.3$ set $b=3/7$. Setting $b=1$ makes this design identical to the classical UDD, and inverting the rules by imposing the coin toss upon positive rather than negative outcomes produces above-median balance points. Versions with two coins, one for each outcome, have also been published, but they do not seem to offer an advantage over the simpler single-coin BCD.
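A sketch of one BCD transition ($F^{*}<0.5$ flavor); the function name and toy parameters are illustrative:

```python
import random

def bcd_next_level(m, y, b, M, rng):
    """One biased-coin transition (flavor for F* < 0.5): down after a
    positive response; after a negative response, up on 'heads'
    (probability b) and stay on 'tails'.  Moves are truncated at the
    boundary dose levels (m is a 0-based level index)."""
    if y == 1:
        return max(m - 1, 0)
    if rng.random() < b:
        return min(m + 1, M - 1)
    return m

# Targeting Gamma = 0.3: b = Gamma / (1 - Gamma) = 3/7, and the balance
# point b / (1 + b) indeed recovers Gamma.
gamma = 0.3
b = gamma / (1.0 - gamma)
assert abs(b / (1.0 + b) - gamma) < 1e-12
```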
Group (cohort) UDDs
Some dose-finding experiments, such as phase I trials, require a waiting period of weeks before each individual outcome is determined. It may then be preferable to treat several subjects at once or in rapid succession. With group UDDs, the transition rules apply to cohorts of fixed size $s$ rather than to individuals. $X_{i}$ becomes the dose given to cohort $i$, and $Y_{i}$ is the number of positive responses in the $i$-th cohort, rather than a binary outcome. Given that the $i$-th cohort is treated at $X_{i}=d_{m}$ on the interior of ${\mathcal {X}}$, the $(i+1)$-th cohort is assigned to
$X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i}\leq l;\\d_{m-1}&{\textrm {if}}\ \ Y_{i}\geq u;\\d_{m}&{\textrm {if}}\ \ l<Y_{i}<u.\end{cases}}$
$Y_{i}$ follows a binomial distribution conditional on $X_{i}$, with parameters $s$ and $F(X_{i})$. The up and down probabilities are the binomial distribution's tails, and the stay probability its center (the latter is zero if $u=l+1$). A specific choice of parameters can be abbreviated as GUD$_{(s,l,u)}.$
Nominally, group UDDs generate $s$-order random walks, since the $s$ most recent observations are needed to determine the next allocation. However, with cohorts viewed as single mathematical entities, these designs generate a first-order random walk having a tri-diagonal TPM as above. Some relevant group UDD subfamilies:
• Symmetric designs with $l+u=s$ (e.g., GUD$_{(2,0,2)}$) target the median.
• The family GUD$_{(s,0,1)},$ encountered in toxicity studies, allows escalation only upon zero positive responses, and de-escalates upon any positive response. The escalation probability at $x$ is $\left(1-F(x)\right)^{s},$ and since this design does not allow remaining at the same dose, at the balance point this probability is exactly $1/2$. Therefore,
$F^{*}=1-\left({\frac {1}{2}}\right)^{1/s}.$
The choices $s=2,3,4$ are associated with $F^{*}\approx 0.293,0.206,$ and $0.159$, respectively. The mirror-image family GUD$_{(s,s-1,s)}$ has its balance points at one minus these probabilities.
For general group UDDs, the balance point can be calculated only numerically, by finding the dose $x^{*}$ with toxicity rate $F^{*}$ such that
$\sum _{r=u}^{s}\left({\begin{array}{c}s\\r\\\end{array}}\right)\left(F^{*}\right)^{r}(1-F^{*})^{s-r}=\sum _{t=0}^{l}\left({\begin{array}{c}s\\t\\\end{array}}\right)\left(F^{*}\right)^{t}(1-F^{*})^{s-t}.$
Any numerical root-finding algorithm, e.g., Newton–Raphson, can be used to solve for $F^{*}$.[13]
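The balance equation is increasing in $F^{*}$ (the de-escalation tail grows and the escalation tail shrinks as $F^{*}$ rises), so any bracketing root-finder works. The sketch below uses stdlib bisection in place of the Newton–Raphson mentioned above, and checks the result against the closed form for GUD$_{(s,0,1)}$:

```python
from math import comb

def gud_balance_point(s, l, u, tol=1e-12):
    """Solve the group-UDD balance equation P(Y >= u) = P(Y <= l),
    with Y ~ Binomial(s, F), for F* by bisection (used here instead
    of the Newton-Raphson named in the text)."""
    def g(F):  # de-escalation minus escalation probability
        down = sum(comb(s, r) * F**r * (1 - F)**(s - r) for r in range(u, s + 1))
        up = sum(comb(s, t) * F**t * (1 - F)**(s - t) for t in range(l + 1))
        return down - up
    lo, hi = 1e-9, 1.0 - 1e-9        # g(lo) < 0 < g(hi); g is increasing
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Closed form for GUD_(s,0,1): F* = 1 - (1/2)**(1/s)
for s in (2, 3, 4):
    assert abs(gud_balance_point(s, 0, 1) - (1 - 0.5 ** (1 / s))) < 1e-9
# Symmetric designs (l + u = s) target the median:
assert abs(gud_balance_point(2, 0, 2) - 0.5) < 1e-9
```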
$k$-in-a-row (or "transformed" or "geometric") UDD
This is the most commonly used non-median UDD. It was introduced by Wetherill in 1963,[14] and disseminated shortly thereafter by him and colleagues to psychophysics,[15] where it remains one of the standard methods for finding sensory thresholds.[8] Wetherill called it "transformed" UDD; Gezmu, who was the first to analyze its random-walk properties, called it "geometric" UDD in the 1990s;[16] and in the 2000s the more straightforward name "$k$-in-a-row" UDD was adopted.[11] The design's rules are deceptively simple:
$X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i-k+1}=\cdots =Y_{i}=0,\ \ {\textrm {all}}\ {\textrm {observed}}\ {\textrm {at}}\ \ d_{m};\\d_{m-1}&{\textrm {if}}\ \ Y_{i}=1;\\d_{m}&{\textrm {otherwise}},\end{cases}}$
Every dose escalation requires $k$ non-toxicities observed on consecutive data points, all at the current dose, while de-escalation only requires a single toxicity. It closely resembles GUD$_{(s,0,1)}$ described above, and indeed shares the same balance point. The difference is that $k$-in-a-row can bail out of a dose level upon the first toxicity, whereas its group UDD sibling might treat the entire cohort at once, and therefore might see more than one toxicity before descending.
The method used in sensory studies is actually the mirror-image of the one defined above, with $k$ successive responses required for a de-escalation and only one non-response for escalation, yielding $F^{*}\approx 0.707,0.794,0.841,\ldots $ for $k=2,3,4,\ldots $.[17]
$k$-in-a-row generates a $k$-th order random walk, because knowledge of the last $k$ responses might be needed. It can be represented as a first-order chain with $Mk$ states, or as a Markov chain with $M$ levels, each having $k$ internal states labeled $0$ to $k-1$. The internal state serves as a counter of the number of immediately recent consecutive non-toxicities observed at the current dose. This description is closer to the physical dose-allocation process, because subjects at different internal states of level $m$ are all assigned the same dose $d_{m}$. Either way, the TPM is $Mk\times Mk$ (or more precisely, $\left[(M-1)k+1\right]\times \left[(M-1)k+1\right]$, because the internal counter is meaningless at the highest dose), and it is not tridiagonal.
Here is the expanded $k$-in-a-row TPM with $k=2$ and $M=5$, using the abbreviation $F_{m}\equiv F\left(d_{m}\right).$ Each level's internal states are adjacent to each other.
${\begin{bmatrix}F_{1}&1-F_{1}&0&0&0&0&0&0&0\\F_{1}&0&1-F_{1}&0&0&0&0&0&0\\F_{2}&0&0&1-F_{2}&0&0&0&0&0\\F_{2}&0&0&0&1-F_{2}&0&0&0&0\\0&0&F_{3}&0&0&1-F_{3}&0&0&0\\0&0&F_{3}&0&0&0&1-F_{3}&0&0\\0&0&0&0&F_{4}&0&0&1-F_{4}&0\\0&0&0&0&F_{4}&0&0&0&1-F_{4}\\0&0&0&0&0&0&F_{5}&0&1-F_{5}\\\end{bmatrix}}.$
$k$-in-a-row is often considered for clinical trials targeting a low-toxicity dose. In this case, the balance point and the target are not identical; rather, $k$ is chosen to aim close to the target rate, e.g., $k=2$ for studies targeting the 30th percentile, and $k=3$ for studies targeting the 20th percentile.
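A minimal simulation of $k$-in-a-row, tracking the internal counter of consecutive non-toxicities as in the internal-state description above; the toxicity rates are hypothetical:

```python
import random

def simulate_k_in_a_row(F, k, n, start=0, seed=0):
    """Simulate the k-in-a-row UDD: escalate only after k consecutive
    non-toxicities at the current dose; de-escalate on any toxicity.
    F holds hypothetical toxicity probabilities per dose level."""
    rng = random.Random(seed)
    m, streak, path = start, 0, []
    for _ in range(n):
        path.append(m)
        if rng.random() < F[m]:              # toxicity: de-escalate
            m, streak = max(m - 1, 0), 0
        else:
            streak += 1
            if streak == k:                  # k in a row: escalate
                m, streak = min(m + 1, len(F) - 1), 0
    return path

# k = 2 aims near the ~29.3% balance point; level 2 (rate 0.30) is the
# closest dose on this hypothetical curve, so visits concentrate there.
F = [0.05, 0.15, 0.30, 0.50, 0.70]
path = simulate_k_in_a_row(F, k=2, n=5000)
```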
Estimating the target dose
Unlike other design approaches, UDDs do not have a specific estimation method "bundled in" with the design as a default choice. Historically, the more common choice has been some weighted average of the doses administered, usually excluding the first few doses to mitigate the starting-point bias. This approach antedates deeper understanding of UDDs' Markov properties, but its success in numerical evaluations relies upon the eventual sampling from $\pi $, since the latter is centered roughly around $x^{*}.$[5]
The single most popular among these averaging estimators was introduced by Wetherill et al. in 1966, and includes only reversal points (points where the outcome switches from 0 to 1 or vice versa) in the average.[18] In recent years, the limitations of averaging estimators have come to light, in particular the many sources of bias that are very difficult to mitigate. Reversal estimators suffer both from multiple biases (although there is some inadvertent cancelling-out of biases) and from increased variance due to using a subsample of doses. However, the knowledge about averaging-estimator limitations has yet to disseminate outside the methodological literature and affect actual practice.[5]
By contrast, regression estimators attempt to approximate the curve $y=F(x)$ describing the dose-response relationship, in particular around the target percentile. The raw data for the regression are the doses $d_{m}$ on the horizontal axis, and the observed toxicity frequencies,
${\hat {F}}_{m}={\frac {\sum _{i=1}^{n}Y_{i}I\left[X_{i}=d_{m}\right]}{\sum _{i=1}^{n}I\left[X_{i}=d_{m}\right]}},\ m=1,\ldots ,M,$
on the vertical axis. The target estimate is the abscissa of the point where the fitted curve crosses $y=\Gamma .$
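A minimal sketch of this raw-data step and of reading off a target estimate. The toy responses are hypothetical, and plain linear interpolation stands in for the probit or isotonic fits discussed next:

```python
import numpy as np

def pooled_frequencies(X, Y, levels):
    """Observed toxicity frequency F_hat_m at each dose level: positive
    responses at d_m divided by allocations at d_m (the formula above)."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    return np.array([Y[X == d].mean() if np.any(X == d) else np.nan
                     for d in levels])

# Hypothetical toy experiment on 4 dose levels:
levels = np.array([1.0, 2.0, 3.0, 4.0])
X = [1, 2, 3, 2, 3, 4, 3, 2, 3, 4]
Y = [0, 0, 1, 0, 0, 1, 1, 1, 0, 1]
Fhat = pooled_frequencies(X, Y, levels)      # [0, 1/3, 1/2, 1]

# Target estimate: the abscissa where the fitted curve crosses y = Gamma.
# Here Fhat happens to be monotone, so simple linear interpolation works;
# isotonic / centered isotonic regression would first enforce monotonicity.
gamma = 0.5
target_est = float(np.interp(gamma, Fhat, levels))
```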
Probit regression has been used for many decades to estimate UDD targets, although far less commonly than the reversal-averaging estimator. In 2002, Stylianou and Flournoy introduced an interpolated version of isotonic regression (IR) to estimate UDD targets and other dose-response data.[6] More recently, a modification called "centered isotonic regression" (CIR) was developed by Oron and Flournoy, promising substantially better estimation performance than ordinary isotonic regression in most cases, and also offering the first viable interval estimator for isotonic regression in general.[7] Isotonic regression estimators appear to be the most compatible with UDDs, because both approaches are nonparametric and relatively robust.[5] The publicly available R package "cir" implements both CIR and IR for dose-finding and other applications.[19]
References
1. Durham, SD; Flournoy, N. "Up-and-down designs. I. Stationary treatment distributions". In Flournoy, N; Rosenberger, WF (eds.). IMS Lecture Notes Monograph Series. Vol. 25: Adaptive Designs. pp. 139–157.
2. Dixon, WJ; Mood, AM (1948). "A method for obtaining and analyzing sensitivity data". Journal of the American Statistical Association. 43 (241): 109–126. doi:10.1080/01621459.1948.10483254.
3. von Békésy, G (1947). "A new audiometer". Acta Oto-Laryngologica. 35 (5–6): 411–422. doi:10.3109/00016484709123756.
4. Anderson, TW; McCarthy, PJ; Tukey, JW (1946). 'Staircase' method of sensitivity testing (Technical report). Naval Ordnance Report. 65-46.
5. Flournoy, N; Oron, AP. "Up-and-Down Designs for Dose-Finding". In Dean, A (ed.). Handbook of Design and Analysis of Experiments. CRC Press. pp. 858–894.
6. Stylianou, MP; Flournoy, N (2002). "Dose finding using the biased coin up-and-down design and isotonic regression". Biometrics. 58 (1): 171–177. doi:10.1111/j.0006-341x.2002.00171.x. PMID 11890313. S2CID 8743090.
7. Oron, AP; Flournoy, N (2017). "Centered Isotonic Regression: Point and Interval Estimation for Dose-Response Studies". Statistics in Biopharmaceutical Research. 9 (3): 258–267. arXiv:1701.05964. doi:10.1080/19466315.2017.1286256. S2CID 88521189.
8. Leek, MR (2001). "Adaptive procedures in psychophysical research". Perception and Psychophysics. 63 (8): 1279–1292. doi:10.3758/bf03194543. PMID 11800457.
9. Pace, NL; Stylianou, MP (2007). "Advances in and Limitations of Up-and-down Methodology: A Precis of Clinical Use, Study Design, and Dose Estimation in Anesthesia Research". Anesthesiology. 107 (1): 144–152. doi:10.1097/01.anes.0000267514.42592.2a. PMID 17585226.
10. Oron, AP; Hoff, PD (2013). "Small-Sample Behavior of Novel Phase I Cancer Trial Designs". Clinical Trials. 10 (1): 63–80. arXiv:1202.4962. doi:10.1177/1740774512469311. PMID 23345304. S2CID 5667047.
11. Oron, AP; Hoff, PD (2009). "The k-in-a-row up-and-down design, revisited". Statistics in Medicine. 28 (13): 1805–1820. doi:10.1002/sim.3590. PMID 19378270. S2CID 25904900.
12. Diaconis, P; Stroock, D (1991). "Geometric bounds for eigenvalues of Markov chain". The Annals of Applied Probability. 1: 36–61. doi:10.1214/aoap/1177005980.
13. Gezmu, M; Flournoy, N (2006). "Group up-and-down designs for dose-finding". Journal of Statistical Planning and Inference. 136 (6): 1749–1764. doi:10.1016/j.jspi.2005.08.002.
14. Wetherill, GB; Levitt, H (1963). "Sequential estimation of quantal response curves". Journal of the Royal Statistical Society, Series B. 25: 1–48. doi:10.1111/j.2517-6161.1963.tb00481.x.
15. Wetherill, GB (1965). "Sequential estimation of points on a Psychometric Function". British Journal of Mathematical and Statistical Psychology. 18: 1–10. doi:10.1111/j.2044-8317.1965.tb00689.x. PMID 14324842.
16. Gezmu, Misrak (1996). The Geometric Up-and-Down Design for Allocating Dosage Levels (PhD). American University.
17. Garcia-Perez, MA (1998). "Forced-choice staircases with fixed step sizes: asymptotic and small-sample properties". Vision Research. 38 (12): 1861–81. doi:10.1016/s0042-6989(97)00340-4. PMID 9797963.
18. Wetherill, GB; Chen, H; Vasudeva, RB (1966). "Sequential estimation of quantal response curves: a new method of estimation". Biometrika. 53 (3–4): 439–454. doi:10.1093/biomet/53.3-4.439.
19. Oron, Assaf. "Package 'cir'". CRAN. R Foundation for Statistical Computing. Retrieved 26 December 2020.
Knuth's up-arrow notation
In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976.[1]
In his 1947 paper,[2] R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc. Various notations have been used to represent hyperoperations. One such notation is $H_{n}(a,b)$. Knuth's up-arrow notation $\uparrow $ is another. For example:
• the single arrow $\uparrow $ represents exponentiation (iterated multiplication)
$2\uparrow 4=H_{3}(2,4)=2\times (2\times (2\times 2))=2^{4}=16$
• the double arrow $\uparrow \uparrow $ represents tetration (iterated exponentiation)
$2\uparrow \uparrow 4=H_{4}(2,4)=2\uparrow (2\uparrow (2\uparrow 2))=2^{2^{2^{2}}}=2^{16}=65,536$
• the triple arrow $\uparrow \uparrow \uparrow $ represents pentation (iterated tetration)
${\begin{aligned}2\uparrow \uparrow \uparrow 4=H_{5}(2,4)&=2\uparrow \uparrow (2\uparrow \uparrow (2\uparrow \uparrow 2))\\&=2\uparrow \uparrow (2\uparrow \uparrow 4)\\&=\underbrace {2\uparrow (2\uparrow (\dots \uparrow 2))} _{2\uparrow \uparrow 4{\text{ copies of }}2}=\underbrace {2^{2^{\cdots ^{2}}}} _{65{,}536{\text{ copies of }}2}\end{aligned}}$
The general definition of the up-arrow notation is as follows (for $a\geq 0,n\geq 1,b\geq 0$):
$a\uparrow ^{n}b=H_{n+2}(a,b)=a[n+2]b.$
Here, $\uparrow ^{n}$ stands for n arrows, so for example
$2\uparrow \uparrow \uparrow \uparrow 3=2\uparrow ^{4}3.$
The square brackets are another notation for hyperoperations.
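The definition translates directly into a recursion. Python's arbitrary-precision integers make small cases computable (larger arguments exhaust time and memory almost immediately); a minimal sketch:

```python
def up(a, n, b):
    """Knuth's a ^(n arrows) b for n >= 1, via the right-associative
    recursion a ↑^n b = a ↑^(n-1) (a ↑^n (b-1)), with the base cases
    a ↑^1 b = a**b and a ↑^n 0 = 1."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(2, 2, 4))   # 65536 = 2↑↑4
print(up(3, 2, 3))   # 7625597484987 = 3↑↑3
print(up(2, 3, 3))   # 65536 = 2↑↑↑3 = 2↑↑(2↑↑2) = 2↑↑4
```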
Introduction
The hyperoperations naturally extend the arithmetical operations of addition and multiplication as follows. Addition by a natural number is defined as iterated incrementation:
${\begin{matrix}H_{1}(a,b)=a+b=&a+\underbrace {1+1+\dots +1} \\&b{\mbox{ copies of }}1\end{matrix}}$
Multiplication by a natural number is defined as iterated addition:
${\begin{matrix}H_{2}(a,b)=a\times b=&\underbrace {a+a+\dots +a} \\&b{\mbox{ copies of }}a\end{matrix}}$
For example,
${\begin{matrix}4\times 3&=&\underbrace {4+4+4} &=&12\\&&3{\mbox{ copies of }}4\end{matrix}}$
Exponentiation for a natural power $b$ is defined as iterated multiplication, which Knuth denoted by a single up-arrow:
${\begin{matrix}a\uparrow b=H_{3}(a,b)=a^{b}=&\underbrace {a\times a\times \dots \times a} \\&b{\mbox{ copies of }}a\end{matrix}}$
For example,
${\begin{matrix}4\uparrow 3=4^{3}=&\underbrace {4\times 4\times 4} &=&64\\&3{\mbox{ copies of }}4\end{matrix}}$
Tetration is defined as iterated exponentiation, which Knuth denoted by a “double arrow”:
${\begin{matrix}a\uparrow \uparrow b=H_{4}(a,b)=&\underbrace {a^{a^{{}^{.\,^{.\,^{.\,^{a}}}}}}} &=&\underbrace {a\uparrow (a\uparrow (\dots \uparrow a))} \\&b{\mbox{ copies of }}a&&b{\mbox{ copies of }}a\end{matrix}}$
For example,
${\begin{matrix}4\uparrow \uparrow 3=&\underbrace {4^{4^{4}}} &=&\underbrace {4\uparrow (4\uparrow 4)} &=&4^{256}&\approx &1.34078079\times 10^{154}&\\&3{\mbox{ copies of }}4&&3{\mbox{ copies of }}4\end{matrix}}$
Expressions are evaluated from right to left, as the operators are defined to be right-associative.
According to this definition,
$3\uparrow \uparrow 2=3^{3}=27$
$3\uparrow \uparrow 3=3^{3^{3}}=3^{27}=7,625,597,484,987$
$3\uparrow \uparrow 4=3^{3^{3^{3}}}=3^{3^{27}}=3^{7625597484987}\approx 1.2580143\times 10^{3638334640024}$
$3\uparrow \uparrow 5=3^{3^{3^{3^{3}}}}=3^{3^{3^{27}}}=3^{3^{7625597484987}}\approx 3^{1.2580143\times 10^{3638334640024}}$
etc.
This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here.
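Even without computing $3\uparrow \uparrow 4$ itself, its decimal length follows from its logarithm; a quick sketch:

```python
from math import floor, log10

# 3↑↑3 is small enough to compute exactly:
t3 = 3 ** (3 ** 3)
assert t3 == 7625597484987

# 3↑↑4 = 3**t3 is far too large to hold in memory, but its number of
# decimal digits is floor(t3 * log10(3)) + 1.
digits = floor(t3 * log10(3)) + 1
print(digits)   # 3638334640025, matching 3↑↑4 ≈ 1.258 * 10**3638334640024
```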
Pentation, defined as iterated tetration, is represented by the “triple arrow”:
${\begin{matrix}a\uparrow \uparrow \uparrow b=H_{5}(a,b)=&\underbrace {a_{}\uparrow \uparrow (a\uparrow \uparrow (\dots \uparrow \uparrow a))} \\&b{\mbox{ copies of }}a\end{matrix}}$
Hexation, defined as iterated pentation, is represented by the “quadruple arrow”:
${\begin{matrix}a\uparrow \uparrow \uparrow \uparrow b=H_{6}(a,b)=&\underbrace {a_{}\uparrow \uparrow \uparrow (a\uparrow \uparrow \uparrow (\dots \uparrow \uparrow \uparrow a))} \\&b{\mbox{ copies of }}a\end{matrix}}$
and so on. The general rule is that an $n$-arrow operator expands into a right-associative series of ($n-1$)-arrow operators. Symbolically,
${\begin{matrix}a\ \underbrace {\uparrow _{}\uparrow \!\!\dots \!\!\uparrow } _{n}\ b=\underbrace {a\ \underbrace {\uparrow \!\!\dots \!\!\uparrow } _{n-1}\ (a\ \underbrace {\uparrow _{}\!\!\dots \!\!\uparrow } _{n-1}\ (\dots \ \underbrace {\uparrow _{}\!\!\dots \!\!\uparrow } _{n-1}\ a))} _{b{\text{ copies of }}a}\end{matrix}}$
Examples:
$3\uparrow \uparrow \uparrow 2=3\uparrow \uparrow 3=3^{3^{3}}=3^{27}=7,625,597,484,987$
${\begin{matrix}3\uparrow \uparrow \uparrow 3=3\uparrow \uparrow (3\uparrow \uparrow 3)=3\uparrow \uparrow (3\uparrow 3\uparrow 3)=&\underbrace {3_{}\uparrow 3\uparrow \dots \uparrow 3} \\&3\uparrow 3\uparrow 3{\mbox{ copies of }}3\end{matrix}}{\begin{matrix}=&\underbrace {3_{}\uparrow 3\uparrow \dots \uparrow 3} \\&{\mbox{7,625,597,484,987 copies of 3}}\end{matrix}}{\begin{matrix}=&\underbrace {3^{3^{3^{3^{\cdot ^{\cdot ^{\cdot ^{\cdot ^{3}}}}}}}}} \\&{\mbox{7,625,597,484,987 copies of 3}}\end{matrix}}$
Notation
In expressions such as $a^{b}$, the notation for exponentiation is usually to write the exponent $b$ as a superscript to the base number $a$. But many environments — such as programming languages and plain-text e-mail — do not support superscript typesetting. People have adopted the linear notation $a\uparrow b$ for such environments; the up-arrow suggests 'raising to the power of'. If the character set does not contain an up arrow, the caret (^) is used instead.
The superscript notation $a^{b}$ doesn't lend itself well to generalization, which explains why Knuth chose to work from the inline notation $a\uparrow b$ instead.
$a\uparrow ^{n}b$ is a shorter alternative notation for $n$ up-arrows. Thus $a\uparrow ^{4}b=a\uparrow \uparrow \uparrow \uparrow b$.
Writing out up-arrow notation in terms of powers
Attempting to write $a\uparrow \uparrow b$ using the familiar superscript notation gives a power tower.
For example: $a\uparrow \uparrow 4=a\uparrow (a\uparrow (a\uparrow a))=a^{a^{a^{a}}}$
If b is a variable (or is too large), the power tower might be written using dots and a note indicating the height of the tower.
$a\uparrow \uparrow b=\underbrace {a^{a^{.^{.^{.{a}}}}}} _{b}$
Continuing with this notation, $a\uparrow \uparrow \uparrow b$ could be written with a stack of such power towers, each describing the size of the one above it.
$a\uparrow \uparrow \uparrow 4=a\uparrow \uparrow (a\uparrow \uparrow (a\uparrow \uparrow a))=\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{a}}}$
Again, if b is a variable or is too large, the stack might be written using dots and a note indicating its height.
$a\uparrow \uparrow \uparrow b=\left.\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}b$
Furthermore, $a\uparrow \uparrow \uparrow \uparrow b$ might be written using several columns of such stacks of power towers, each column describing the number of power towers in the stack to its left:
$a\uparrow \uparrow \uparrow \uparrow 4=a\uparrow \uparrow \uparrow (a\uparrow \uparrow \uparrow (a\uparrow \uparrow \uparrow a))=\left.\left.\left.\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}a$
And more generally:
$a\uparrow \uparrow \uparrow \uparrow b=\underbrace {\left.\left.\left.\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\cdots \right\}a} _{b}$
This might be carried out indefinitely to represent $a\uparrow ^{n}b$ as iterated exponentiation of iterated exponentiation for any a, n and b (although it clearly becomes rather cumbersome).
Using tetration
The Rudy Rucker notation $^{b}a$ for tetration allows us to make these diagrams slightly simpler while still employing a geometric representation (we could call these tetration towers).
$a\uparrow \uparrow b={}^{b}a$
$a\uparrow \uparrow \uparrow b=\underbrace {^{^{^{^{^{a}.}.}.}a}a} _{b}$
$a\uparrow \uparrow \uparrow \uparrow b=\left.\underbrace {^{^{^{^{^{a}.}.}.}a}a} _{\underbrace {^{^{^{^{^{a}.}.}.}a}a} _{\underbrace {\vdots } _{a}}}\right\}b$
Finally, as an example, the fourth Ackermann number $4\uparrow ^{4}4$ could be represented as:
$\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{4}}}=\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{^{^{^{4}4}4}4}}$
Generalizations
Some numbers are so large that multiple arrows of Knuth's up-arrow notation become too cumbersome; then an n-arrow operator $\uparrow ^{n}$ is useful (and also for descriptions with a variable number of arrows), or equivalently, hyper operators.
Some numbers are so large that even that notation is not sufficient. The Conway chained arrow notation can then be used: a chain of three elements is equivalent to the other notations, but a chain of four or more is even more powerful.
${\begin{matrix}a\uparrow ^{n}b&=&a[n+2]b&=&a\to b\to n\\{\mbox{(Knuth)}}&&{\mbox{(hyperoperation)}}&&{\mbox{(Conway)}}\end{matrix}}$
For example: $6\uparrow \uparrow 4=\underbrace {6^{6^{.^{.^{.^{6}}}}}} _{4}=6^{6^{6^{6}}}=6^{6^{46{,}656}}$, a power tower of four sixes.
$10\uparrow (3\times 10\uparrow (3\times 10\uparrow 15)+3)$ = $\underbrace {100000...000} _{\underbrace {300000...003} _{\underbrace {300000...000} _{15}}}$ or $10^{3\times 10^{3\times 10^{15}}+3}$ (Petillion)
Even faster-growing functions can be categorized using an ordinal analysis called the fast-growing hierarchy. The fast-growing hierarchy uses successive function iteration and diagonalization to systematically create faster-growing functions from some base function $f(x)$. For the standard fast-growing hierarchy using $f_{0}(x)=x+1$, $f_{2}(x)$ already exhibits exponential growth, and $f_{3}(x)$ is comparable to tetrational growth and is upper-bounded by a function involving the first four hyperoperators. Then, $f_{\omega }(x)$ is comparable to the Ackermann function, $f_{\omega +1}(x)$ is already beyond the reach of indexed arrows but can be used to approximate Graham's number, and $f_{\omega ^{2}}(x)$ is comparable to arbitrarily long Conway chained arrow notation.
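The finite levels of this hierarchy translate directly into code. The following is a minimal sketch (the function name is illustrative; only very small arguments terminate in practice, since the values grow hyper-exponentially):

```python
def fgh(k, x):
    # Fast-growing hierarchy at finite index k, with f_0(x) = x + 1
    # and f_{k+1}(x) defined as f_k applied x times starting from x.
    if k == 0:
        return x + 1
    r = x
    for _ in range(x):
        r = fgh(k - 1, r)
    return r
```

Unfolding the definition gives $f_{1}(x)=2x$ and $f_{2}(x)=x\cdot 2^{x}$, which is where exponential growth first appears.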
These functions are all computable. Even faster-growing computable functions, such as those bounding the Goodstein sequence and the TREE sequence, require the use of large ordinals and arise in certain combinatorial and proof-theoretic contexts. There exist functions that grow uncomputably fast, such as the Busy Beaver function, whose very nature is completely out of reach of any up-arrow, or indeed any ordinal-based, analysis.
Definition
Without reference to hyperoperation the up-arrow operators can be formally defined by
$a\uparrow ^{n}b={\begin{cases}a^{b},&{\text{if }}n=1;\\1,&{\text{if }}n>1{\text{ and }}b=0;\\a\uparrow ^{n-1}(a\uparrow ^{n}(b-1)),&{\text{otherwise }}\end{cases}}$
for all integers $a,b,n$ with $a\geq 0,n\geq 1,b\geq 0$[nb 1].
This definition uses exponentiation $(a\uparrow ^{1}b=a\uparrow b=a^{b})$ as the base case, and tetration $(a\uparrow ^{2}b=a\uparrow \uparrow b)$ as repeated exponentiation. This is equivalent to the hyperoperation sequence except it omits the three more basic operations of succession, addition and multiplication.
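The recursive definition above carries over directly into code. A minimal sketch (the function name is illustrative; only small arguments are feasible, since the values grow hyper-exponentially):

```python
def up_arrow(a, n, b):
    # a ↑^n b, with exponentiation (n = 1) as the base case:
    #   a ↑^1 b = a**b
    #   a ↑^n 0 = 1              for n > 1
    #   a ↑^n b = a ↑^(n-1) (a ↑^n (b-1))   otherwise
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))
```

For instance, `up_arrow(2, 2, 4)` evaluates $2\uparrow \uparrow 4=65{,}536$ and `up_arrow(3, 2, 3)` evaluates $3\uparrow \uparrow 3=3^{27}=7{,}625{,}597{,}484{,}987$.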
One can alternatively choose multiplication $(a\uparrow ^{0}b=a\times b)$ as the base case and iterate from there. Then exponentiation becomes repeated multiplication. The formal definition would be
$a\uparrow ^{n}b={\begin{cases}a\times b,&{\text{if }}n=0;\\1,&{\text{if }}n>0{\text{ and }}b=0;\\a\uparrow ^{n-1}(a\uparrow ^{n}(b-1)),&{\text{otherwise }}\end{cases}}$
for all integers $a,b,n$ with $a\geq 0,n\geq 0,b\geq 0$.
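The multiplication-based variant differs only in its base case; a sketch under the same caveat about argument size (the function name is illustrative):

```python
def up_arrow_mul(a, n, b):
    # Variant with multiplication (n = 0) as the base case;
    # exponentiation then arises as repeated multiplication (n = 1).
    if n == 0:
        return a * b
    if b == 0:
        return 1
    return up_arrow_mul(a, n - 1, up_arrow_mul(a, n, b - 1))
```

Here `up_arrow_mul(2, 1, 5)` recovers $2^{5}=32$ purely from repeated multiplication, and the higher indices agree with the exponentiation-based definition.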
Note, however, that Knuth did not define the "nil-arrow" ($\uparrow ^{0}$). One could extend the notation to indices as low as n = −2 in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:
$H_{n}(a,b)=a[n]b=a\uparrow ^{n-2}b{\text{ for }}n\geq 0.$
The up-arrow operation is right-associative: $a\uparrow b\uparrow c$ is understood to be $a\uparrow (b\uparrow c)$, not $(a\uparrow b)\uparrow c$. If ambiguity is not an issue, parentheses are sometimes dropped.
Tables of values
Computing 0↑n b
Computing $0\uparrow ^{n}b=H_{n+2}(0,b)=0[n+2]b$ results in
0, when n = 0 [nb 2]
1, when n = 1 and b = 0 [nb 1][nb 3]
0, when n = 1 and b > 0 [nb 1][nb 3]
1, when n > 1 and b is even (including 0)
0, when n > 1 and b is odd
Computing 2↑n b
Computing $2\uparrow ^{n}b$ can be restated in terms of an infinite table. We place the numbers $2^{b}$ in the top row, and fill the left column with values 2. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $2\uparrow ^{n}b$ = $H_{n+2}(2,b)$ = $2[n+2]b$ = 2 → b → n
n \ b | 1 | 2 | 3 | 4 | 5 | 6 | formula
1 | 2 | 4 | 8 | 16 | 32 | 64 | $2^{b}$
2 | 2 | 4 | 16 | 65,536 | $2^{65{,}536}\approx 2.0\times 10^{19{,}728}$ | $2^{2^{65{,}536}}\approx 10^{6.0\times 10^{19{,}727}}$ | $2\uparrow \uparrow b$
3 | 2 | 4 | 65,536 | ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ | ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ | ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ | $2\uparrow \uparrow \uparrow b$
4 | 2 | 4 | ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ | $2\uparrow \uparrow \uparrow \uparrow b$
The table is the same as that of the Ackermann function, except for a shift in $n$ and $b$, and an addition of 3 to all values.
Computing 3↑n b
We place the numbers $3^{b}$ in the top row, and fill the left column with values 3. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $3\uparrow ^{n}b$ = $H_{n+2}(3,b)$ = $3[n+2]b$ = 3 → b → n
n \ b | 1 | 2 | 3 | 4 | 5 | formula
1 | 3 | 9 | 27 | 81 | 243 | $3^{b}$
2 | 3 | 27 | 7,625,597,484,987 | $3^{7{,}625{,}597{,}484{,}987}\approx 1.3\times 10^{3{,}638{,}334{,}640{,}024}$ | $3^{3^{7{,}625{,}597{,}484{,}987}}$ | $3\uparrow \uparrow b$
3 | 3 | 7,625,597,484,987 | ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ | ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ | ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ | $3\uparrow \uparrow \uparrow b$
4 | 3 | ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ | $3\uparrow \uparrow \uparrow \uparrow b$
Computing 4↑n b
We place the numbers $4^{b}$ in the top row, and fill the left column with values 4. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $4\uparrow ^{n}b$ = $H_{n+2}(4,b)$ = $4[n+2]b$ = 4 → b → n
n \ b | 1 | 2 | 3 | 4 | 5 | formula
1 | 4 | 16 | 64 | 256 | 1,024 | $4^{b}$
2 | 4 | 256 | $4^{256}\approx 1.34\times 10^{154}$ | $4^{4^{256}}\approx 10^{8.0\times 10^{153}}$ | $4^{4^{4^{256}}}$ | $4\uparrow \uparrow b$
3 | 4 | $4^{4^{256}}\approx 10^{8.0\times 10^{153}}$ | ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ | ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ | ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ | $4\uparrow \uparrow \uparrow b$
4 | 4 | ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ | $4\uparrow \uparrow \uparrow \uparrow b$
Computing 10↑n b
We place the numbers $10^{b}$ in the top row, and fill the left column with values 10. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $10\uparrow ^{n}b$ = $H_{n+2}(10,b)$ = $10[n+2]b$ = 10 → b → n
n \ b | 1 | 2 | 3 | 4 | 5 | formula
1 | 10 | 100 | 1,000 | 10,000 | 100,000 | $10^{b}$
2 | 10 | 10,000,000,000 | $10^{10,000,000,000}$ | $10^{10^{10,000,000,000}}$ | $10^{10^{10^{10,000,000,000}}}$ | $10\uparrow \uparrow b$
3 | 10 | ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$ | ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$ | ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$ | ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$ | $10\uparrow \uparrow \uparrow b$
4 | 10 | ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$ | ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$ | $10\uparrow \uparrow \uparrow \uparrow b$
For 2 ≤ b ≤ 9 the numerical order of the numbers $10\uparrow ^{n}b$ is the lexicographical order with n as the most significant number, so for the numbers of these 8 columns the numerical order is simply line-by-line. The same applies for the numbers in the 97 columns with 3 ≤ b ≤ 99, and if we start from n = 1 even for 3 ≤ b ≤ 9,999,999,999.
See also
• Primitive recursion
• Hyperoperation
• Busy beaver
• Cutler's bar notation
• Tetration
• Pentation
• Ackermann function
• Graham's number
• Steinhaus–Moser notation
Notes
1. For more details, see Powers of zero.
2. Keep in mind that Knuth did not define the operator $\uparrow ^{0}$.
3. For more details, see Zero to the power of zero.
References
1. Knuth, Donald E. (1976). "Mathematics and Computer Science: Coping with Finiteness". Science. 194 (4271): 1235–1242. Bibcode:1976Sci...194.1235K. doi:10.1126/science.194.4271.1235. PMID 17797067. S2CID 1690489.
2. R. L. Goodstein (Dec 1947). "Transfinite Ordinals in Recursive Number Theory". Journal of Symbolic Logic. 12 (4): 123–129. doi:10.2307/2266486. JSTOR 2266486. S2CID 1318943.
External links
• Weisstein, Eric W. "Knuth Up-Arrow Notation". MathWorld.
• Robert Munafo, Large Numbers: Higher hyper operators
Hyperoperations
Primary
• Successor (0)
• Addition (1)
• Multiplication (2)
• Exponentiation (3)
• Tetration (4)
• Pentation (5)
Inverse for left argument
• Predecessor (0)
• Subtraction (1)
• Division (2)
• Root extraction (3)
• Super-root (4)
Inverse for right argument
• Predecessor (0)
• Subtraction (1)
• Division (2)
• Logarithm (3)
• Super-logarithm (4)
Related articles
• Ackermann function
• Conway chained arrow notation
• Grzegorczyk hierarchy
• Knuth's up-arrow notation
• Steinhaus–Moser notation
Large numbers
Examples
in
numerical
order
• Thousand
• Ten thousand
• Hundred thousand
• Million
• Ten million
• Hundred million
• Billion
• Trillion
• Quadrillion
• Quintillion
• Sextillion
• Septillion
• Octillion
• Nonillion
• Decillion
• Eddington number
• Googol
• Shannon number
• Googolplex
• Skewes's number
• Moser's number
• Graham's number
• TREE(3)
• SSCG(3)
• BH(3)
• Rayo's number
• Transfinite numbers
Expression
methods
Notations
• Scientific notation
• Knuth's up-arrow notation
• Conway chained arrow notation
• Steinhaus–Moser notation
Operators
• Hyperoperation
• Tetration
• Pentation
• Ackermann function
• Grzegorczyk hierarchy
• Fast-growing hierarchy
Related
articles
(alphabetical
order)
• Busy beaver
• Extended real number line
• Indefinite and fictitious numbers
• Infinitesimal
• Largest known prime number
• List of numbers
• Long and short scales
• Number systems
• Number names
• Orders of magnitude
• Power of two
• Power of three
• Power of 10
• Sagan Unit
• Names
• History
Donald Knuth
Publications
• The Art of Computer Programming
• "The Complexity of Songs"
• Computers and Typesetting
• Concrete Mathematics
• Surreal Numbers
• Things a Computer Scientist Rarely Talks About
• Selected papers series
Software
• TeX
• Metafont
• MIX / MIXAL
• MMIX
Fonts
• AMS Euler
• Computer Modern
• Concrete Roman
Literate programming
• WEB
• CWEB
Algorithms
• Knuth's Algorithm X
• Knuth–Bendix completion algorithm
• Knuth–Morris–Pratt algorithm
• Knuth shuffle
• Robinson–Schensted–Knuth correspondence
• Trabb Pardo–Knuth algorithm
• Generalization of Dijkstra's algorithm
• Knuth's Simpath algorithm
Other
• Dancing Links
• Knuth reward check
• Knuth Prize
• Knuth's up-arrow notation
• Man or boy test
• Quater-imaginary base
• -yllion
• Potrzebie system of weights and measures
Combinatorial game theory
Combinatorial game theory is a branch of mathematics and theoretical computer science that typically studies sequential games with perfect information. Study has been largely confined to two-player games that have a position that the players take turns changing in defined ways or moves to achieve a defined winning condition. Combinatorial game theory has not traditionally studied games of chance or those that use imperfect or incomplete information, favoring games that offer perfect information in which the state of the game and the set of available moves is always known by both players.[1] However, as mathematical techniques advance, the types of games that can be mathematically analyzed expand, and thus the boundaries of the field are ever-changing.[2] Scholars will generally define what they mean by a "game" at the beginning of a paper, and these definitions often vary as they are specific to the game being analyzed and are not meant to represent the entire scope of the field.
This article is about the theory of combinatorial games. For the theory that includes games of chance and games of imperfect knowledge, see Game theory.
Combinatorial games include well-known games such as chess, checkers, and Go, which are regarded as non-trivial, and tic-tac-toe, which is considered trivial, in the sense of being "easy to solve". Some combinatorial games may also have an unbounded playing area, such as infinite chess. In combinatorial game theory, the moves in these and other games are represented as a game tree.
Combinatorial games also include one-player combinatorial puzzles such as Sudoku, and no-player automata, such as Conway's Game of Life (although in the strictest definition, "games" can be said to require more than one participant, thus the designations of "puzzle" and "automaton").[3]
Game theory in general includes games of chance, games of imperfect knowledge, and games in which players can move simultaneously, and they tend to represent real-life decision making situations.
Combinatorial game theory has a different emphasis than "traditional" or "economic" game theory, which was initially developed to study games with simple combinatorial structure, but with elements of chance (although it also considers sequential moves, see extensive-form game). Essentially, combinatorial game theory has contributed new methods for analyzing game trees, for example using surreal numbers, which are a subclass of all two-player perfect-information games.[3] The type of games studied by combinatorial game theory is also of interest in artificial intelligence, particularly for automated planning and scheduling. In combinatorial game theory there has been less emphasis on refining practical search algorithms (such as the alpha–beta pruning heuristic included in most artificial intelligence textbooks), but more emphasis on descriptive theoretical results (such as measures of game complexity or proofs of optimal solution existence without necessarily specifying an algorithm, such as the strategy-stealing argument).
An important notion in combinatorial game theory is that of the solved game. For example, tic-tac-toe is considered a solved game, as it can be proven that any game will result in a draw if both players play optimally. Deriving similar results for games with rich combinatorial structures is difficult. For instance, in 2007 it was announced that checkers has been weakly solved—optimal play by both sides also leads to a draw—but this result was a computer-assisted proof.[4] Other real world games are mostly too complicated to allow complete analysis today, although the theory has had some recent successes in analyzing Go endgames. Applying combinatorial game theory to a position attempts to determine the optimum sequence of moves for both players until the game ends, and by doing so discover the optimum move in any position. In practice, this process is torturously difficult unless the game is very simple.
It can be helpful to distinguish between combinatorial "mathgames" of interest primarily to mathematicians and scientists to ponder and solve, and combinatorial "playgames" of interest to the general population as a form of entertainment and competition.[5] However, a number of games fall into both categories. Nim, for instance, is a playgame instrumental in the foundation of combinatorial game theory, and one of the first computerized games.[6] Tic-tac-toe is still used to teach basic principles of game AI design to computer science students.[7]
History
Combinatorial game theory arose in relation to the theory of impartial games, in which any play available to one player must be available to the other as well. One such game is Nim, which can be solved completely. Nim is an impartial game for two players, and subject to the normal play condition, which means that a player who cannot move loses. In the 1930s, the Sprague–Grundy theorem showed that all impartial games are equivalent to heaps in Nim, thus showing that major unifications are possible in games considered at a combinatorial level, in which detailed strategies matter, not just pay-offs.
In the 1960s, Elwyn R. Berlekamp, John H. Conway and Richard K. Guy jointly introduced the theory of a partisan game, in which the requirement that a play available to one player be available to both is relaxed. Their results were published in their book Winning Ways for your Mathematical Plays in 1982. However, the first work published on the subject was Conway's 1976 book On Numbers and Games, also known as ONAG, which introduced the concept of surreal numbers and the generalization to games. On Numbers and Games was also a fruit of the collaboration between Berlekamp, Conway, and Guy.
Combinatorial games are generally, by convention, put into a form where one player wins when the other has no moves remaining. It is easy to convert any finite game with only two possible results into an equivalent one where this convention applies. One of the most important concepts in the theory of combinatorial games is that of the sum of two games, which is a game where each player may choose to move either in one game or the other at any point in the game, and a player wins when his opponent has no move in either game. This way of combining games leads to a rich and powerful mathematical structure.
Conway stated in On Numbers and Games that the inspiration for the theory of partisan games was based on his observation of the play in Go endgames, which can often be decomposed into sums of simpler endgames isolated from each other in different parts of the board.
Examples
The introductory text Winning Ways introduced a large number of games, but the following were used as motivating examples for the introductory theory:
• Blue–Red Hackenbush - At the finite level, this partisan combinatorial game allows constructions of games whose values are dyadic rational numbers. At the infinite level, it allows one to construct all real values, as well as many infinite ones that fall within the class of surreal numbers.
• Blue–Red–Green Hackenbush - Allows for additional game values that are not numbers in the traditional sense, for example, star.
• Toads and Frogs - Allows various game values. Unlike most other games, a position is easily represented by a short string of characters.
• Domineering - Various interesting games, such as hot games, appear as positions in Domineering, because there is sometimes an incentive to move, and sometimes not. This allows discussion of a game's temperature.
• Nim - An impartial game. This allows for the construction of the nimbers. (It can also be seen as a green-only special case of Blue-Red-Green Hackenbush.)
The classic game Go was influential on the early combinatorial game theory, and Berlekamp and Wolfe subsequently developed an endgame and temperature theory for it (see references). Armed with this they were able to construct plausible Go endgame positions from which they could give expert Go players a choice of sides and then defeat them either way.
Another game studied in the context of combinatorial game theory is chess. In 1953 Alan Turing wrote of the game, "If one can explain quite unambiguously in English, with the aid of mathematical symbols if required, how a calculation is to be done, then it is always possible to programme any digital computer to do that calculation, provided the storage capacity is adequate."[8] In a 1950 paper, Claude Shannon estimated the lower bound of the game-tree complexity of chess to be 10120, and today this is referred to as the Shannon number.[9] Chess remains unsolved, although extensive study, including work involving the use of supercomputers, has created chess endgame tablebases, which show the result of perfect play for all endgames with seven pieces or fewer. Infinite chess has an even greater combinatorial complexity than chess (unless only limited endgames, or composed positions with a small number of pieces, are being studied).
Overview
A game, in its simplest terms, is a list of possible "moves" that two players, called left and right, can make. The game position resulting from any move can be considered to be another game. This idea of viewing games in terms of their possible moves to other games leads to a recursive mathematical definition of games that is standard in combinatorial game theory. In this definition, each game has the notation {L|R}. L is the set of game positions that the left player can move to, and R is the set of game positions that the right player can move to; each position in L and R is defined as a game using the same notation.
Using Domineering as an example, label each of the sixteen boxes of the four-by-four board by A1 for the upper leftmost square, C2 for the third box from the left on the second row from the top, and so on. We use e.g. (D3, D4) to stand for the game position in which a vertical domino has been placed in the bottom right corner. Then, the initial position can be described in combinatorial game theory notation as
$\{(\mathrm {A} 1,\mathrm {A} 2),(\mathrm {B} 1,\mathrm {B} 2),\dots |(\mathrm {A} 1,\mathrm {B} 1),(\mathrm {A} 2,\mathrm {B} 2),\dots \}.$
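The option sets in this expression can be enumerated mechanically. A minimal sketch (the coordinate scheme and function name are illustrative; Left playing vertical dominoes follows the option sets listed above):

```python
from itertools import product

def placements(free, vertical):
    # Enumerate legal domino placements on the free squares of a grid.
    # Each placement is the pair of squares the domino covers; in the
    # position above, Left's options are vertical and Right's horizontal.
    out = []
    for (row, col) in free:
        other = (row + 1, col) if vertical else (row, col + 1)
        if other in free:
            out.append(frozenset({(row, col), other}))
    return out

board = frozenset(product(range(4), range(4)))  # the empty 4-by-4 board
left_options = placements(board, vertical=True)
right_options = placements(board, vertical=False)
```

On the empty 4-by-4 board each player has 12 opening moves, one per entry of the corresponding option set in the {L|R} expression.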
In standard Cross-Cram play, the players alternate turns, but this alternation is handled implicitly by the definitions of combinatorial game theory rather than being encoded within the game states.
$\{(\mathrm {A} 1,\mathrm {A} 2)|(\mathrm {A} 1,\mathrm {B} 1)\}=\{\{|\}|\{|\}\}.$
The above game describes a scenario in which there is only one move left for either player, and if either player makes that move, that player wins. (An irrelevant open square at C3 has been omitted from the diagram.) The {|} in each player's move list (corresponding to the single leftover square after the move) is called the zero game, and can actually be abbreviated 0. In the zero game, neither player has any valid moves; thus, the player whose turn it is when the zero game comes up automatically loses.
The type of game in the diagram above also has a simple name; it is called the star game, which can also be abbreviated ∗. In the star game, the only valid move leads to the zero game, which means that whoever's turn comes up during the star game automatically wins.
An additional type of game, not found in Domineering, is a loopy game, in which a valid move of either left or right is a game that can then lead back to the first game. Checkers, for example, becomes loopy when one of the pieces promotes, as then it can cycle endlessly between two or more squares. A game that does not possess such moves is called loopfree.
Game abbreviations
Numbers
Numbers represent the number of free moves, or the move advantage of a particular player. By convention positive numbers represent an advantage for Left, while negative numbers represent an advantage for Right. They are defined recursively with 0 being the base case.
0 = {|}
1 = {0|}, 2 = {1|}, 3 = {2|}
−1 = {|0}, −2 = {|−1}, −3 = {|−2}
The zero game is a loss for the first player.
The sum of number games behaves like the integers, for example 3 + −2 = 1.
Star
Star, written as ∗ or {0|0}, is a first-player win since either player must (if first to move in the game) move to a zero game, and therefore win.
∗ + ∗ = 0, because the first player must turn one copy of ∗ to a 0, and then the other player will have to turn the other copy of ∗ to a 0 as well; at this point, the first player would lose, since 0 + 0 admits no moves.
The game ∗ is neither positive nor negative; it and all other games in which the first player wins (regardless of which side the player is on) are said to be fuzzy with or confused with 0; symbolically, we write ∗ || 0.
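These outcome claims about 0, ∗, and ∗ + ∗ can be checked mechanically. The sketch below represents a game directly by its {L|R} option sets and decides who wins under optimal play (the class and function names are illustrative, not standard):

```python
class Game:
    """A combinatorial game {L|R}: each option is itself a Game."""
    def __init__(self, left=(), right=()):
        self.left, self.right = tuple(left), tuple(right)

def mover_wins(g, mover):
    # The player to move ('L' or 'R') wins iff some option leaves the
    # opponent, now to move, in a losing position; no options means a loss.
    opts = g.left if mover == 'L' else g.right
    opp = 'R' if mover == 'L' else 'L'
    return any(not mover_wins(o, opp) for o in opts)

def add(g, h):
    # Disjunctive sum: a move is a move in exactly one component.
    return Game(
        [add(gl, h) for gl in g.left] + [add(g, hl) for hl in h.left],
        [add(gr, h) for gr in g.right] + [add(g, hr) for hr in h.right],
    )

zero = Game()                  # {|}: neither player has a move
star = Game([zero], [zero])    # {0|0}
```

Running `mover_wins` confirms that whoever moves first in 0 loses, whoever moves first in ∗ wins, and whoever moves first in ∗ + ∗ loses, i.e. ∗ + ∗ behaves like 0.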
Up
Up, written as ↑, is a position in combinatorial game theory.[10] In standard notation, ↑ = {0|∗}.
−↑ = ↓ (down)
Up is strictly positive (↑ > 0), but is infinitesimal. Up is defined in Winning Ways for your Mathematical Plays.
Down
Down, written as ↓, is a position in combinatorial game theory.[10] In standard notation, ↓ = {∗|0}.
−↓ = ↑ (up)
Down is strictly negative (↓ < 0), but is infinitesimal. Down is defined in Winning Ways for your Mathematical Plays.
"Hot" games
Consider the game {1|−1}. Both moves in this game are an advantage for the player who makes them; so the game is said to be "hot;" it is greater than any number less than −1, less than any number greater than 1, and fuzzy with any number in between. It is written as ±1. It can be added to numbers, or multiplied by positive ones, in the expected fashion; for example, 4 ± 1 = {5|3}.
Nimbers
An impartial game is one where, at every position of the game, the same moves are available to both players. For instance, Nim is impartial, as any set of objects that can be removed by one player can be removed by the other. However, Domineering is not impartial, because one player places horizontal dominoes and the other places vertical ones. Likewise, Checkers is not impartial, since the players own different-colored pieces. For any ordinal number, one can define an impartial game generalizing Nim in which, on each move, either player may replace the number with any smaller ordinal number; the games defined in this way are known as nimbers. The Sprague–Grundy theorem states that every impartial game is equivalent to a nimber.
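The Sprague–Grundy correspondence can be illustrated concretely for Nim: computing the Grundy value of a position from first principles, via the minimum excluded value ("mex") over all reachable positions, always yields the bitwise XOR of the heap sizes. A sketch (the function name is illustrative):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def grundy(heaps):
    # Grundy value of a Nim position, given as a sorted tuple of heap
    # sizes: the mex of the Grundy values of all positions reachable by
    # removing one or more objects from a single heap.
    reachable = set()
    for i, h in enumerate(heaps):
        for take in range(1, h + 1):
            nxt = tuple(sorted(heaps[:i] + (h - take,) + heaps[i + 1:]))
            reachable.add(grundy(nxt))
    g = 0
    while g in reachable:  # mex: least non-negative value not reachable
        g += 1
    return g
```

A single heap of size n has Grundy value n, the nimber ∗n, and a position with Grundy value 0 (such as heaps 1, 2, 3) is a loss for the player to move.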
The "smallest" nimbers – the simplest and least under the usual ordering of the ordinals – are 0 and ∗.
See also
• Alpha–beta pruning, an optimised algorithm for searching the game tree
• Backward induction, reasoning backwards from a final situation
• Cooling and heating (combinatorial game theory), various transformations of games making them more amenable to the theory
• Connection game, a type of game where players attempt to establish connections
• Endgame tablebase, a database saying how to play endgames
• Expectiminimax tree, an adaptation of a minimax game tree to games with an element of chance
• Extensive-form game, a game tree enriched with payoffs and information available to players
• Game classification, an article discussing ways of classifying games
• Game complexity, an article describing ways of measuring the complexity of games
• Grundy's game, a mathematical game in which heaps of objects are split
• Multi-agent system, a type of computer system for tackling complex problems
• Positional game, a type of game where players claim previously-unclaimed positions
• Solving chess
• Sylver coinage, a mathematical game of choosing positive integers that are not the sum of non-negative multiples of previously chosen integers
• Wythoff's game, a mathematical game of taking objects from one or two piles
• Topological game, a type of mathematical game played in a topological space
• Zugzwang, being obliged to play when this is disadvantageous
Notes
1. Lessons in Play, p. 3
2. Thomas S. Ferguson's analysis of poker is an example of combinatorial game theory expanding into games that include elements of chance. Research into Three Player Nim is an example of study expanding beyond two player games. Conway, Guy and Berlekamp's analysis of partisan games is perhaps the most famous expansion of the scope of combinatorial game theory, taking the field beyond the study of impartial games.
3. Demaine, Erik D.; Hearn, Robert A. (2009). "Playing games with algorithms: algorithmic combinatorial game theory". In Albert, Michael H.; Nowakowski, Richard J. (eds.). Games of No Chance 3. Mathematical Sciences Research Institute Publications. Vol. 56. Cambridge University Press. pp. 3–56. arXiv:cs.CC/0106019.
4. Schaeffer, J.; Burch, N.; Bjornsson, Y.; Kishimoto, A.; Muller, M.; Lake, R.; Lu, P.; Sutphen, S. (2007). "Checkers is solved". Science. 317 (5844): 1518–1522. Bibcode:2007Sci...317.1518S. CiteSeerX 10.1.1.95.5393. doi:10.1126/science.1144079. PMID 17641166. S2CID 10274228.
5. Fraenkel, Aviezri (2009). "Combinatorial Games: selected bibliography with a succinct gourmet introduction". Games of No Chance 3. 56: 492.
6. Grant, Eugene F.; Lardner, Rex (2 August 1952). "The Talk of the Town - It". The New Yorker.
7. Russell, Stuart; Norvig, Peter (2021). "Chapter 5: Adversarial search and games". Artificial Intelligence: A Modern Approach. Pearson series in artificial intelligence (4th ed.). Pearson Education, Inc. pp. 146–179. ISBN 978-0-13-461099-3.
8. Alan Turing. "Digital computers applied to games". University of Southampton and King's College Cambridge. p. 2.
9. Claude Shannon (1950). "Programming a Computer for Playing Chess" (PDF). Philosophical Magazine. 41 (314): 4. Archived from the original (PDF) on 2010-07-06.
10. E. Berlekamp; J. H. Conway; R. Guy (1982). Winning Ways for your Mathematical Plays. Vol. I. Academic Press. ISBN 0-12-091101-9.
E. Berlekamp; J. H. Conway; R. Guy (1982). Winning Ways for your Mathematical Plays. Vol. II. Academic Press. ISBN 0-12-091102-7.
References
• Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007). Lessons in Play: An Introduction to Combinatorial Game Theory. A K Peters Ltd. ISBN 978-1-56881-277-9.
• Beck, József (2008). Combinatorial games: tic-tac-toe theory. Cambridge University Press. ISBN 978-0-521-46100-9.
• Berlekamp, E.; Conway, J. H.; Guy, R. (1982). Winning Ways for your Mathematical Plays: Games in general. Academic Press. ISBN 0-12-091101-9. 2nd ed., A K Peters Ltd (2001–2004), ISBN 1-56881-130-6, ISBN 1-56881-142-X
• Berlekamp, E.; Conway, J. H.; Guy, R. (1982). Winning Ways for your Mathematical Plays: Games in particular. Academic Press. ISBN 0-12-091102-7. 2nd ed., A K Peters Ltd (2001–2004), ISBN 1-56881-143-8, ISBN 1-56881-144-6.
• Berlekamp, Elwyn; Wolfe, David (1997). Mathematical Go: Chilling Gets the Last Point. A K Peters Ltd. ISBN 1-56881-032-6.
• Bewersdorff, Jörg (2004). Luck, Logic and White Lies: The Mathematics of Games. A K Peters Ltd. ISBN 1-56881-210-8. See especially sections 21–26.
• Conway, John Horton (1976). On Numbers and Games. Academic Press. ISBN 0-12-186350-6. 2nd ed., A K Peters Ltd (2001), ISBN 1-56881-127-6.
• Robert A. Hearn; Erik D. Demaine (2009). Games, Puzzles, and Computation. A K Peters, Ltd. ISBN 978-1-56881-322-6.
External links
• List of combinatorial game theory links at the homepage of David Eppstein
• An Introduction to Conway's games and numbers by Dierk Schleicher and Michael Stoll
• Combinatorial Game Theory terms summary by Bill Spight
• Combinatorial Game Theory Workshop, Banff International Research Station, June 2005
Topics in game theory
Definitions
• Congestion game
• Cooperative game
• Determinacy
• Escalation of commitment
• Extensive-form game
• First-player and second-player win
• Game complexity
• Graphical game
• Hierarchy of beliefs
• Information set
• Normal-form game
• Preference
• Sequential game
• Simultaneous game
• Simultaneous action selection
• Solved game
• Succinct game
Equilibrium
concepts
• Bayesian Nash equilibrium
• Berge equilibrium
• Core
• Correlated equilibrium
• Epsilon-equilibrium
• Evolutionarily stable strategy
• Gibbs equilibrium
• Mertens-stable equilibrium
• Markov perfect equilibrium
• Nash equilibrium
• Pareto efficiency
• Perfect Bayesian equilibrium
• Proper equilibrium
• Quantal response equilibrium
• Quasi-perfect equilibrium
• Risk dominance
• Satisfaction equilibrium
• Self-confirming equilibrium
• Sequential equilibrium
• Shapley value
• Strong Nash equilibrium
• Subgame perfection
• Trembling hand
Strategies
• Backward induction
• Bid shading
• Collusion
• Forward induction
• Grim trigger
• Markov strategy
• Dominant strategies
• Pure strategy
• Mixed strategy
• Strategy-stealing argument
• Tit for tat
Classes
of games
• Bargaining problem
• Cheap talk
• Global game
• Intransitive game
• Mean-field game
• Mechanism design
• n-player game
• Perfect information
• Large Poisson game
• Potential game
• Repeated game
• Screening game
• Signaling game
• Stackelberg competition
• Strictly determined game
• Stochastic game
• Symmetric game
• Zero-sum game
Games
• Go
• Chess
• Infinite chess
• Checkers
• Tic-tac-toe
• Prisoner's dilemma
• Gift-exchange game
• Optional prisoner's dilemma
• Traveler's dilemma
• Coordination game
• Chicken
• Centipede game
• Lewis signaling game
• Volunteer's dilemma
• Dollar auction
• Battle of the sexes
• Stag hunt
• Matching pennies
• Ultimatum game
• Rock paper scissors
• Pirate game
• Dictator game
• Public goods game
• Blotto game
• War of attrition
• El Farol Bar problem
• Fair division
• Fair cake-cutting
• Cournot game
• Deadlock
• Diner's dilemma
• Guess 2/3 of the average
• Kuhn poker
• Nash bargaining game
• Induction puzzles
• Trust game
• Princess and monster game
• Rendezvous problem
Theorems
• Arrow's impossibility theorem
• Aumann's agreement theorem
• Folk theorem
• Minimax theorem
• Nash's theorem
• Negamax theorem
• Purification theorem
• Revelation principle
• Sprague–Grundy theorem
• Zermelo's theorem
Key
figures
• Albert W. Tucker
• Amos Tversky
• Antoine Augustin Cournot
• Ariel Rubinstein
• Claude Shannon
• Daniel Kahneman
• David K. Levine
• David M. Kreps
• Donald B. Gillies
• Drew Fudenberg
• Eric Maskin
• Harold W. Kuhn
• Herbert Simon
• Hervé Moulin
• John Conway
• Jean Tirole
• Jean-François Mertens
• Jennifer Tour Chayes
• John Harsanyi
• John Maynard Smith
• John Nash
• John von Neumann
• Kenneth Arrow
• Kenneth Binmore
• Leonid Hurwicz
• Lloyd Shapley
• Melvin Dresher
• Merrill M. Flood
• Olga Bondareva
• Oskar Morgenstern
• Paul Milgrom
• Peyton Young
• Reinhard Selten
• Robert Axelrod
• Robert Aumann
• Robert B. Wilson
• Roger Myerson
• Samuel Bowles
• Suzanne Scotchmer
• Thomas Schelling
• William Vickrey
Miscellaneous
• All-pay auction
• Alpha–beta pruning
• Bertrand paradox
• Bounded rationality
• Combinatorial game theory
• Confrontation analysis
• Coopetition
• Evolutionary game theory
• First-move advantage in chess
• Game Description Language
• Game mechanics
• Glossary of game theory
• List of game theorists
• List of games in game theory
• No-win situation
• Solving chess
• Topological game
• Tragedy of the commons
• Tyranny of small decisions
Up to
Two mathematical objects a and b are called equal up to an equivalence relation R
• if a and b are related by R, that is,
• if aRb holds, that is,
• if the equivalence classes of a and b with respect to R are equal.
This figure of speech is mostly used in connection with expressions derived from equality, such as uniqueness or count. For example, x is unique up to R means that all objects x under consideration are in the same equivalence class with respect to the relation R.
Moreover, the equivalence relation R is often designated rather implicitly by a generating condition or transformation. For example, the statement "an integer's prime factorization is unique up to ordering" is a concise way to say that any two lists of prime factors of a given integer are equivalent with respect to the relation R that relates two lists if one can be obtained by reordering (permutation) from the other.[1] As another example, the statement "the solution to an indefinite integral is sin(x), up to addition of a constant" tacitly employs the equivalence relation R between functions, defined by fRg if the difference f−g is a constant function, and means that the solution and the function sin(x) are equal up to this R. In the picture, "there are 4 partitions up to rotation" means that the set P has 4 equivalence classes with respect to R defined by aRb if b can be obtained from a by rotation; one representative from each class is shown in the bottom left picture part.
Equivalence relations are often used to disregard possible differences of objects, so "up to R" can be understood informally as "ignoring the same subtleties as R ignores". In the factorization example, "up to ordering" means "ignoring the particular ordering".
Further examples include "up to isomorphism", "up to permutations", and "up to rotations", which are described in the Examples section.
In informal contexts, mathematicians often use the word modulo (or simply "mod") for similar purposes, as in "modulo isomorphism".
Examples
Tetris
A simple example is "there are seven tetrominoes, up to rotations", which makes reference to the seven possible contiguous arrangements of tetrominoes (collections of four unit squares arranged to connect on at least one side) and which are frequently thought of as the seven Tetris pieces (O, I, L, J, T, S, Z). One could also say "there are five tetrominoes, up to reflections and rotations", which would then take into account the perspective that L and J (as well as S and Z) can be thought of as the same piece when reflected. The Tetris game does not allow reflections, so the former statement is likely to seem more relevant than the latter.
There is no formal notation for such an exhaustive count, but it is common to write that "there are seven tetrominoes (= 19 total[2]) up to rotations". Here, Tetris provides a good example: one might naively count 7 pieces × 4 rotations = 28, but some pieces (such as the 2×2 O piece) have fewer than four distinct rotation states, and the total number of fixed tetrominoes is in fact 19.
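These counts can be checked by brute force. The following sketch (illustrative code; the names are ad hoc) enumerates all connected four-square shapes up to translation, then collapses them into equivalence classes under rotations, and under rotations together with reflections:

```python
def normalize(cells):
    """Translate a set of (x, y) cells so that it touches both axes."""
    mx = min(x for x, y in cells)
    my = min(y for x, y in cells)
    return frozenset((x - mx, y - my) for x, y in cells)

def grow(shapes):
    """All shapes obtainable by adding one adjacent cell, up to translation."""
    out = set()
    for s in shapes:
        for x, y in s:
            for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                if (x + dx, y + dy) not in s:
                    out.add(normalize(s | {(x + dx, y + dy)}))
    return out

shapes = {normalize({(0, 0)})}
for _ in range(3):
    shapes = grow(shapes)            # the 19 "fixed" tetrominoes

ROTS = [lambda x, y: (x, y), lambda x, y: (-y, x),
        lambda x, y: (-x, -y), lambda x, y: (y, -x)]
ALL8 = ROTS + [lambda x, y: (-x, y), lambda x, y: (y, x),
               lambda x, y: (x, -y), lambda x, y: (-y, -x)]

def canon(s, transforms):
    """Least representative of s under the given symmetry transforms."""
    return min(tuple(sorted(normalize(frozenset(t(x, y) for x, y in s))))
               for t in transforms)

up_to_rotation = {canon(s, ROTS) for s in shapes}    # the 7 Tetris pieces
up_to_all = {canon(s, ALL8) for s in shapes}         # the 5 free tetrominoes
print(len(shapes), len(up_to_rotation), len(up_to_all))   # 19 7 5
```

Picking the least representative under a group of transforms is the standard way to count "up to" a symmetry: two shapes are equivalent exactly when they canonicalize to the same object.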
Eight queens
In the eight queens puzzle, if the eight queens are considered to be distinct, then there are 3,709,440 distinct solutions. Normally, however, the queens are considered to be equal, and one usually says "there are 3,709,440 / 8! = 92 unique solutions up to permutations of the queens", or that "there are 92 solutions modulo the names of the queens", signifying that two different arrangements of the queens are considered equivalent if the queens have been permuted, but the same squares on the chessboard are occupied by them.
If, in addition to treating the queens as identical, rotations and reflections of the board were allowed, we would have only 12 distinct solutions up to symmetry and the naming of the queens, signifying that two arrangements that are symmetrical to each other are considered equivalent (for more, see Eight queens puzzle § Solutions).
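Both counts are small enough to verify directly. The sketch below (illustrative code, not from the article) places one queen per row, so solutions are already counted up to permutations of the queens, and then canonicalizes each solution under the eight symmetries of the board:

```python
from itertools import permutations

N = 8
# one queen per row and per column: a solution is a permutation p,
# with a queen on square (row, p[row]); queen identity is already ignored
solutions = [p for p in permutations(range(N))
             if all(abs(p[i] - p[j]) != i - j
                    for i in range(N) for j in range(i))]

def canonical(p):
    """Least representative of a solution under the 8 board symmetries."""
    cells = {(r, p[r]) for r in range(N)}
    syms = [lambda r, c: (r, c),                 lambda r, c: (c, N - 1 - r),
            lambda r, c: (N - 1 - r, N - 1 - c), lambda r, c: (N - 1 - c, r),
            lambda r, c: (r, N - 1 - c),         lambda r, c: (c, r),
            lambda r, c: (N - 1 - r, c),         lambda r, c: (N - 1 - c, N - 1 - r)]
    return min(tuple(sorted(s(r, c) for (r, c) in cells)) for s in syms)

fundamental = {canonical(p) for p in solutions}
print(len(solutions), len(fundamental))   # 92 12
```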
Polygons
The regular n-gon, for a fixed n, is unique up to similarity. In other words, by scaling, translation, and rotation, as necessary, any n-gon can be transformed to any other n-gon (with the same n).
Group theory
In group theory, one may have a group G acting on a set X, in which case, one might say that two elements of X are equivalent "up to the group action"—if they lie in the same orbit.
Another typical example is the statement that "there are two different groups of order 4 up to isomorphism", or "modulo isomorphism, there are two groups of order 4". This means that there are two equivalence classes of groups of order 4—assuming that one considers groups to be equivalent if they are isomorphic.
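This classification can be verified by exhaustive search. The hypothetical sketch below enumerates all Cayley tables on {0, 1, 2, 3} with 0 as the identity, keeps those that are associative Latin squares (i.e. groups), and counts them up to relabelling of the non-identity elements, which is exactly "up to isomorphism" here:

```python
from itertools import product, permutations

n = 4
ids = list(range(n))

def is_group(T):
    """T is an n x n Cayley table with identity 0: check Latin square + associativity."""
    if any(sorted(row) != ids for row in T):
        return False
    if any(sorted(T[r][c] for r in range(n)) != ids for c in range(n)):
        return False
    return all(T[T[a][b]][c] == T[a][T[b][c]]
               for a in range(n) for b in range(n) for c in range(n))

tables = []
for fill in product(range(n), repeat=(n - 1) * (n - 1)):
    # first row and column are forced by the identity element 0
    T = [ids[:]] + [[r] + list(fill[(r - 1) * (n - 1):r * (n - 1)])
                    for r in range(1, n)]
    if is_group(T):
        tables.append(T)

def canon(T):
    """Least relabelling of T among permutations fixing the identity 0."""
    best = None
    for p in permutations(range(1, n)):
        relabel = [0] + list(p)
        inv = [0] * n
        for i, v in enumerate(relabel):
            inv[v] = i
        S = tuple(tuple(relabel[T[inv[a]][inv[b]]] for b in range(n))
                  for a in range(n))
        if best is None or S < best:
            best = S
    return best

classes = {canon(T) for T in tables}
print(len(tables), len(classes))   # 4 labelled tables, 2 classes: Z/4Z and Klein four
```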
Nonstandard analysis
A hyperreal x and its standard part st(x) are equal up to an infinitesimal difference.
Computer science
In computer science, the term up-to techniques is a precisely defined notion that refers to certain proof techniques for (weak) bisimulation, and to relate processes that only behave similarly up to unobservable steps.[3]
See also
Look up up to in Wiktionary, the free dictionary.
• Abuse of notation
• Adequality
• All other things being equal
• Essentially unique
• List of mathematical jargon
• Modulo
• Quotient group
• Quotient set
• Synecdoche
References
1. Nekovář, Jan (2011). "Mathematical English (a brief summary)" (PDF). Institut de mathématiques de Jussieu – Paris Rive Gauche. Retrieved 2019-11-21.
2. Weisstein, Eric W. "Tetromino". mathworld.wolfram.com. Retrieved 2019-11-21.
3. Damien Pous, Up-to techniques for weak bisimulation, Proc. 32nd ICALP, Lecture Notes in Computer Science, vol. 3580, Springer Verlag (2005), pp. 730–741
Further reading
• Up-to Techniques for Weak Bisimulation
Bound graph
In graph theory, a bound graph expresses which pairs of elements of some partially ordered set have an upper bound. Rigorously, any graph G is a bound graph if there exists a partial order ≤ on the vertices of G with the property that for any vertices u and v of G, uv is an edge of G if and only if u ≠ v and there is a vertex w such that u ≤ w and v ≤ w.
Bound graphs are sometimes referred to as upper bound graphs, but the analogously defined lower bound graphs comprise exactly the same class—any lower bound for ≤ is easily seen to be an upper bound for the dual partial order ≥.
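As a small illustration (the poset here is hypothetical, chosen only for the example), the bound graph of a finite partial order can be computed directly from the definition:

```python
from itertools import combinations

# a hypothetical partial order on {a, ..., e}; pairs (u, v) mean u <= v,
# with reflexive pairs handled in leq below (c and d are the maximal elements)
le = {("a", "c"), ("b", "c"), ("b", "d"), ("e", "d")}
elements = ["a", "b", "c", "d", "e"]

def leq(u, v):
    return u == v or (u, v) in le

def bound_graph():
    """uv is an edge iff u != v and some w satisfies u <= w and v <= w."""
    return {frozenset((u, v))
            for u, v in combinations(elements, 2)
            if any(leq(u, w) and leq(v, w) for w in elements)}

edges = bound_graph()
```

Here, for instance, b and e are adjacent because both lie below d, while a and d are not adjacent because they have no common upper bound.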
References
• McMorris, F.R.; Zaslavsky, T. (1982). "Bound graphs of a partially ordered set". Journal of Combinatorics, Information & System Sciences. 7: 134–138.
• Lundgren, J.R.; Maybee, J.S. (1983). "A characterization of upper bound graphs". Congressus Numerantium. 40: 189–193.
• Bergstrand, D.J.; Jones, K.F. (1988). "On upper bound graphs of partially ordered sets". Congressus Numerantium. 66: 185–193.
• Tanenbaum, P.J. (2000). "Bound graph polysemy" (PDF). Electronic Journal of Combinatorics. 7: #R43. doi:10.37236/1521.
Limits of integration
In calculus and mathematical analysis, the limits of integration (or bounds of integration) of the integral
$\int _{a}^{b}f(x)\,dx$
of a Riemann integrable function $f$ defined on a closed and bounded interval are the real numbers $a$ and $b$, in which $a$ is called the lower limit and $b$ the upper limit. The bounded region can be seen as the area between $a$ and $b$.
For example, the function $f(x)=x^{3}$ is defined on the interval $[2,4]$
$\int _{2}^{4}x^{3}\,dx$
with the limits of integration being $2$ and $4$.[1]
Integration by Substitution (U-Substitution)
In integration by substitution, the limits of integration change along with the integrand: when the substitution $u=g(x)$ is made, the limits $a$ and $b$ must be replaced by the corresponding values of $u$. In general,
$\int _{a}^{b}f(g(x))g'(x)\ dx=\int _{g(a)}^{g(b)}f(u)\ du$
where $u=g(x)$ and $du=g'(x)\ dx$; the new lower limit of integration is $g(a)$ and the new upper limit is $g(b)$.
For example,
$\int _{0}^{2}2x\cos(x^{2})dx=\int _{0}^{4}\cos(u)\,du$
where $u=x^{2}$ and $du=2x\,dx$. Since $g(0)=0^{2}=0$ and $g(2)=2^{2}=4$, the new limits of integration are $0$ and $4$.[2]
The same applies for other substitutions.
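As a numerical sanity check (illustrative, using a simple quadrature rule), both sides of the example above evaluate to sin(4):

```python
import math

def simpson(f, a, b, n=10_000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

lhs = simpson(lambda x: 2 * x * math.cos(x ** 2), 0.0, 2.0)  # original limits 0, 2
rhs = simpson(math.cos, 0.0, 4.0)                            # new limits g(0)=0, g(2)=4
assert abs(lhs - rhs) < 1e-9
assert abs(lhs - math.sin(4)) < 1e-9   # both equal sin(u) evaluated from 0 to 4
```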
Improper integrals
Limits of integration can also be defined for improper integrals, with the limits of integration of both
$\lim _{z\to a^{+}}\int _{z}^{b}f(x)\,dx$
and
$\lim _{z\to b^{-}}\int _{a}^{z}f(x)\,dx$
again being a and b. For an improper integral
$\int _{a}^{\infty }f(x)\,dx$
or
$\int _{-\infty }^{b}f(x)\,dx$
the limits of integration are a and ∞, or −∞ and b, respectively.[3]
Definite Integrals
If $c\in (a,b)$, then[4]
$\int _{a}^{b}f(x)\ dx=\int _{a}^{c}f(x)\ dx\ +\int _{c}^{b}f(x)\ dx.$
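For instance, with the article's running example $f(x)=x^{3}$ on $[2,4]$ and the (arbitrarily chosen) split point $c=3$, the identity can be checked numerically; Simpson's rule happens to be exact for cubics, so both sides agree with the exact value up to rounding:

```python
def simpson(f, a, b, n=10):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

f = lambda x: x ** 3
a, c, b = 2.0, 3.0, 4.0
whole = simpson(f, a, b)                      # integral over [2, 4]
split = simpson(f, a, c) + simpson(f, c, b)   # split at c = 3
print(whole, split)   # both equal (4**4 - 2**4) / 4 = 60 up to rounding
```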
See also
• Integral
• Riemann integration
• Definite integral
References
1. "31.5 Setting up Correct Limits of Integration". math.mit.edu. Retrieved 2019-12-02.
2. "𝘶-substitution". Khan Academy. Retrieved 2019-12-02.
3. "Calculus II - Improper Integrals". tutorial.math.lamar.edu. Retrieved 2019-12-02.
4. Weisstein, Eric W. "Definite Integral". mathworld.wolfram.com. Retrieved 2019-12-02.
Minkowski–Bouligand dimension
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set $S$ in a Euclidean space $\mathbb {R} ^{n}$, or more generally in a metric space $(X,d)$. It is named after the German mathematician Hermann Minkowski and the French mathematician Georges Bouligand.
To calculate this dimension for a fractal $S$, imagine this fractal lying on an evenly spaced grid and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid finer by applying a box-counting algorithm.
Suppose that $N(\varepsilon )$ is the number of boxes of side length $\varepsilon $ required to cover the set. Then the box-counting dimension is defined as
$\dim _{\text{box}}(S):=\lim _{\varepsilon \to 0}{\frac {\log N(\varepsilon )}{\log(1/\varepsilon )}}.$
Roughly speaking, this means that the dimension is the exponent $d$ such that $N(1/n)\approx Cn^{d}$, which is what one would expect in the trivial case where $S$ is a smooth space (a manifold) of integer dimension $d$.
If the above limit does not exist, one may still take the limit superior and limit inferior, which respectively define the upper box dimension and lower box dimension. The upper box dimension is sometimes called the entropy dimension, Kolmogorov dimension, Kolmogorov capacity, limit capacity or upper Minkowski dimension, while the lower box dimension is also called the lower Minkowski dimension.
The upper and lower box dimensions are strongly related to the more popular Hausdorff dimension. Only in very special applications is it important to distinguish between the three (see below). Yet another measure of fractal dimension is the correlation dimension.
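The definition can be illustrated with the middle-thirds Cantor set, whose box-counting dimension is log 2/log 3 ≈ 0.6309. The sketch below (illustrative code) works in exact integer units of the triadic grid, so that $N(3^{-k})=2^{k}$ comes out exactly:

```python
import math

def cantor_starts(depth):
    """Integer left endpoints (in units of 3**-depth) of the stage-`depth`
    intervals of the middle-thirds Cantor set construction."""
    length = 3 ** depth
    starts = [0]
    for _ in range(depth):
        length //= 3
        # each interval [L, L + 3*length] splits into its left and right thirds
        starts = [s for L in starts for s in (L, L + 2 * length)]
    return starts

depth = 10
starts = cantor_starts(depth)        # 2**10 intervals of side 3**-10
for k in (4, 6, 8):                  # grids of box side eps = 3**-k
    unit = 3 ** (depth - k)          # box side in the same integer units
    n_boxes = len({L // unit for L in starts})
    estimate = math.log(n_boxes) / math.log(3 ** k)
    print(f"eps = 3^-{k}: N(eps) = {n_boxes}, log N / log(1/eps) = {estimate:.4f}")
```

Because the construction intervals align with the triadic grid, every estimate equals log 2/log 3 exactly; for sets without this alignment one instead fits the slope of log N(ε) against log(1/ε) over a range of scales.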
Alternative definitions
It is possible to define the box dimensions using balls, with either the covering number or the packing number. The covering number $N_{\text{covering}}(\varepsilon )$ is the minimal number of open balls of radius ε required to cover the fractal, or in other words, such that their union contains the fractal. We can also consider the intrinsic covering number $N'_{\text{covering}}(\varepsilon )$, which is defined the same way but with the additional requirement that the centers of the open balls lie inside the set S. The packing number $N_{\text{packing}}(\varepsilon )$ is the maximal number of disjoint open balls of radius ε one can situate such that their centers would be inside the fractal. While N, Ncovering, N'covering and Npacking are not exactly identical, they are closely related and give rise to identical definitions of the upper and lower box dimensions. This is easy to prove once the following inequalities are proven:
$N_{\text{packing}}(\varepsilon )\leq N'_{\text{covering}}(\varepsilon )\leq N_{\text{covering}}(\varepsilon /2).$
These, in turn, follow with a little effort from the triangle inequality.
The advantage of using balls rather than squares is that this definition generalizes to any metric space. In other words, the box definition is extrinsic — one assumes the fractal space S is contained in a Euclidean space, and defines boxes according to the external geometry of the containing space. However, the dimension of S should be intrinsic, independent of the environment into which S is placed, and the ball definition can be formulated intrinsically. One defines an internal ball as all points of S within a certain distance of a chosen center, and one counts such balls to get the dimension. (More precisely, the Ncovering definition is extrinsic, but the other two are intrinsic.)
The advantage of using boxes is that in many cases N(ε) may be easily calculated explicitly, and that for boxes the covering and packing numbers (defined in an equivalent way) are equal.
The logarithm of the packing and covering numbers are sometimes referred to as entropy numbers and are somewhat analogous to the concepts of thermodynamic entropy and information-theoretic entropy, in that they measure the amount of "disorder" in the metric space or fractal at scale ε and also measure how many bits or digits one would need to specify a point of the space to accuracy ε.
Another equivalent (extrinsic) definition for the box-counting dimension is given by the formula
$\dim _{\text{box}}(S)=n-\lim _{r\to 0}{\frac {\log {\text{vol}}(S_{r})}{\log r}},$
where for each r > 0, the set $S_{r}$ is defined to be the r-neighborhood of S, i.e. the set of all points in $R^{n}$ that are at distance less than r from S (or equivalently, $S_{r}$ is the union of all the open balls of radius r centered at a point in S).
Properties
Both box dimensions are finitely additive, i.e. if {A1, ..., An} is a finite collection of sets, then
$\dim(A_{1}\cup \dotsb \cup A_{n})=\max\{\dim A_{1},\dots ,\dim A_{n}\}.$
However, they are not countably additive, i.e. this equality does not hold for an infinite sequence of sets. For example, the box dimension of a single point is 0, but the set of rational numbers in the interval [0, 1], a countable union of points, has box dimension 1. The Hausdorff dimension, by comparison, is countably stable: the dimension of a countable union of sets is the supremum of their dimensions.
An interesting property of the upper box dimension not shared with either the lower box dimension or the Hausdorff dimension is the connection to set addition. If A and B are two sets in a Euclidean space, then A + B is formed by taking all the pairs of points a, b where a is from A and b is from B and adding a + b. One has
$\dim _{\text{upper box}}(A+B)\leq \dim _{\text{upper box}}(A)+\dim _{\text{upper box}}(B).$
Relations to the Hausdorff dimension
The box-counting dimension is one of a number of definitions for dimension that can be applied to fractals. For many well behaved fractals all these dimensions are equal; in particular, these dimensions coincide whenever the fractal satisfies the open set condition (OSC).[1] For example, the Hausdorff dimension, lower box dimension, and upper box dimension of the Cantor set are all equal to log(2)/log(3). However, the definitions are not equivalent.
The box dimensions and the Hausdorff dimension are related by the inequality
$\dim _{\text{Haus}}\leq \dim _{\text{lower box}}\leq \dim _{\text{upper box}}.$
In general, both inequalities may be strict. The upper box dimension may be bigger than the lower box dimension if the fractal has different behaviour in different scales. For example, examine the set of numbers in the interval [0, 1] satisfying the condition
for any $n$, all the digits between the $2^{2n}$-th digit and the $(2^{2n+1}-1)$-th digit are zero.
The digits in the "odd place-intervals", i.e. between digits $2^{2n+1}$ and $2^{2n+2}-1$, are not restricted and may take any value. This fractal has upper box dimension 2/3 and lower box dimension 1/3, a fact which may be easily verified by calculating $N(\varepsilon )$ for $\varepsilon =10^{-2^{n}}$ and noting that their values behave differently for $n$ even and odd.
Another example: the set of rational numbers $\mathbb {Q} $, a countable set with $\dim _{\text{Haus}}=0$, has $\dim _{\text{box}}=1$ because its closure, $\mathbb {R} $, has dimension 1. In fact,
$\dim _{\text{box}}\left\{0,1,{\frac {1}{2}},{\frac {1}{3}},{\frac {1}{4}},\ldots \right\}={\frac {1}{2}}.$
These examples show that adding a countable set can change box dimension, demonstrating a kind of instability of this dimension.
See also
• Correlation dimension
• Packing dimension
• Uncertainty exponent
• Weyl–Berry conjecture
• Lacunarity
References
1. Wagon, Stan (2010). Mathematica in Action: Problem Solving Through Visualization and Computation. Springer-Verlag. p. 214. ISBN 0-387-75477-6.
• Falconer, Kenneth (1990). Fractal geometry: mathematical foundations and applications. Chichester: John Wiley. pp. 38–47. ISBN 0-471-92287-0. Zbl 0689.28003.
• Weisstein, Eric W. "Minkowski-Bouligand Dimension". MathWorld.
External links
• FrakOut!: an OSS application for calculating the fractal dimension of a shape using the box counting method (Does not automatically place the boxes for you).
• FracLac: online user guide and software ImageJ and FracLac box counting plugin; free user-friendly open source software for digital image analysis in biology
Fractals
Characteristics
• Fractal dimensions
• Assouad
• Box-counting
• Higuchi
• Correlation
• Hausdorff
• Packing
• Topological
• Recursion
• Self-similarity
Iterated function
system
• Barnsley fern
• Cantor set
• Koch snowflake
• Menger sponge
• Sierpinski carpet
• Sierpinski triangle
• Apollonian gasket
• Fibonacci word
• Space-filling curve
• Blancmange curve
• De Rham curve
• Minkowski
• Dragon curve
• Hilbert curve
• Koch curve
• Lévy C curve
• Moore curve
• Peano curve
• Sierpiński curve
• Z-order curve
• String
• T-square
• n-flake
• Vicsek fractal
• Hexaflake
• Gosper curve
• Pythagoras tree
• Weierstrass function
Strange attractor
• Multifractal system
L-system
• Fractal canopy
• Space-filling curve
• H tree
Escape-time
fractals
• Burning Ship fractal
• Julia set
• Filled
• Newton fractal
• Douady rabbit
• Lyapunov fractal
• Mandelbrot set
• Misiurewicz point
• Multibrot set
• Newton fractal
• Tricorn
• Mandelbox
• Mandelbulb
Rendering techniques
• Buddhabrot
• Orbit trap
• Pickover stalk
Random fractals
• Brownian motion
• Brownian tree
• Brownian motor
• Fractal landscape
• Lévy flight
• Percolation theory
• Self-avoiding walk
People
• Michael Barnsley
• Georg Cantor
• Bill Gosper
• Felix Hausdorff
• Desmond Paul Henry
• Gaston Julia
• Helge von Koch
• Paul Lévy
• Aleksandr Lyapunov
• Benoit Mandelbrot
• Hamid Naderi Yeganeh
• Lewis Fry Richardson
• Wacław Sierpiński
Other
• "How Long Is the Coast of Britain?"
• Coastline paradox
• Fractal art
• List of fractals by Hausdorff dimension
• The Fractal Geometry of Nature (1982 book)
• The Beauty of Fractals (1986 book)
• Chaos: Making a New Science (1987 book)
• Kaleidoscope
• Chaos theory
Lower envelope
In mathematics, the lower envelope or pointwise minimum of a finite set of functions is the function whose value at every point is the minimum of the values of the functions in the given set. The concept of a lower envelope can also be extended to partial functions by taking the minimum only among functions that have values at the point. The upper envelope or pointwise maximum is defined symmetrically. For an infinite set of functions, the same notions may be defined using the infimum in place of the minimum, and the supremum in place of the maximum.[1]
For continuous functions from a given class, the lower or upper envelope is a piecewise function whose pieces are from the same class. For functions of a single real variable whose graphs have a bounded number of intersection points, the complexity of the lower or upper envelope can be bounded using Davenport–Schinzel sequences, and these envelopes can be computed efficiently by a divide-and-conquer algorithm that computes and then merges the envelopes of subsets of the functions.[2]
For convex functions or quasiconvex functions, the upper envelope is again convex or quasiconvex. The lower envelope is not, but can be replaced by the lower convex envelope to obtain an operation analogous to the lower envelope that maintains convexity. The upper and lower envelopes of Lipschitz functions preserve the property of being Lipschitz. However, the lower and upper envelope operations do not necessarily preserve the property of being a continuous function.[3]
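In the simplest computational setting, upper and lower envelopes can be approximated by sampling the functions on a grid and taking pointwise extrema; the grid positions where the minimizing function changes approximate the breakpoints of the piecewise structure. The three functions below are arbitrary illustrative choices:

```python
fs = [lambda x: (x - 1) ** 2,      # a convex parabola
      lambda x: 0.5 * x + 0.25,    # a line
      lambda x: abs(x)]            # a piecewise-linear function

xs = [i / 200 - 2.0 for i in range(1001)]      # grid on [-2, 3]
lower = [min(f(x) for f in fs) for x in xs]    # lower envelope, sampled
upper = [max(f(x) for f in fs) for x in xs]    # upper envelope, sampled
active = [min(range(len(fs)), key=lambda i: fs[i](x)) for x in xs]
# approximate breakpoints: grid positions where the minimizing function changes
breaks = [xs[i] for i in range(1, len(xs)) if active[i] != active[i - 1]]
```

The divide-and-conquer algorithm mentioned above refines this idea: it computes envelopes of subsets exactly (as sequences of breakpoints) and merges them, rather than sampling.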
References
1. Choquet, Gustave (1966), "3. Upper and lower envelopes of a family of functions", Topology, Academic Press, pp. 129–131, ISBN 9780080873312
2. Boissonnat, Jean-Daniel; Yvinec, Mariette (1998), "15.3.2 Computing the lower envelope", Algorithmic Geometry, Cambridge University Press, p. 358, ISBN 9780521565295
3. Choquet (1966), p. 136.
Upper and lower probabilities
Upper and lower probabilities are representations of imprecise probability. Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, this method uses two numbers: the upper probability of the event and the lower probability of the event.
Because frequentist statistics disallows metaprobabilities, frequentists have had to propose new solutions. Cedric Smith and Arthur Dempster each developed a theory of upper and lower probabilities. Glenn Shafer developed Dempster's theory further, and it is now known as Dempster–Shafer theory (see also Choquet 1953). More precisely, in the work of these authors one considers, on a power set $P(S)\,\!$, a mass function $m:P(S)\rightarrow \mathbb {R}$ satisfying the conditions
$m(\varnothing )=0\,\,\,\,\,\,\!;\,\,\,\,\,\,m(A)\geq 0\,\,\,\,\,\,\!;\,\,\,\,\,\,\sum _{A\in P(S)}m(A)=1.\,\!$
In turn, a mass is associated with two non-additive continuous measures called belief and plausibility defined as follows:
$\operatorname {bel} (A)=\sum _{B\mid B\subseteq A}m(B)\,\,\,\,;\,\,\,\,\operatorname {pl} (A)=\sum _{B\mid B\cap A\neq \varnothing }m(B)$
In the case where $S$ is infinite there can be belief functions $\operatorname {bel} $ with no associated mass function; see p. 36 of Halpern (2003). Probability measures are a special case of belief functions in which the mass function assigns positive mass to singletons of the event space only.
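Concretely, belief and plausibility can be computed directly from a mass function on a finite frame. The mass function below is a hypothetical example, chosen only to illustrate the definitions and the duality $\operatorname {pl} (A)=1-\operatorname {bel} (A^{c})$:

```python
frame = frozenset({"a", "b", "c"})
# hypothetical mass function on subsets of the frame (masses sum to 1)
m = {frozenset({"a"}): 0.4, frozenset({"a", "b"}): 0.3, frame: 0.3}

def bel(A):
    """Belief of A: total mass of focal sets contained in A."""
    return sum(v for B, v in m.items() if B <= A)

def pl(A):
    """Plausibility of A: total mass of focal sets meeting A."""
    return sum(v for B, v in m.items() if B & A)

A = frozenset({"a", "c"})
print(bel(A), pl(A))   # lower and upper probability of A
```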
A different notion of upper and lower probabilities is obtained by the lower and upper envelopes obtained from a class C of probability distributions by setting
$\operatorname {env_{1}} (A)=\inf _{p\in C}p(A)\,\,\,\,;\,\,\,\,\operatorname {env_{2}} (A)=\sup _{p\in C}p(A)$
The upper and lower probabilities are also related with probabilistic logic: see Gerla (1994).
Observe also that a necessity measure can be seen as a lower probability and a possibility measure can be seen as an upper probability.
See also
• Possibility theory
• Fuzzy measure theory
• Interval finite element
• Probability bounds analysis
References
• Choquet, G. (1953). "Theory of Capacities". Annales de l'Institut Fourier. 5: 131–295. doi:10.5802/aif.53.
• Gerla, G. (1994). "Inferences in Probability Logic". Artificial Intelligence. 70 (1–2): 33–52. doi:10.1016/0004-3702(94)90102-3.
• Halpern, J. Y. (2003). Reasoning about Uncertainty. MIT Press. ISBN 978-0-262-08320-1.
• Halpern, J. Y.; Fagin, R. (1992). "Two views of belief: Belief as generalized probability and belief as evidence". Artificial Intelligence. 54 (3): 275–317. CiteSeerX 10.1.1.70.6130. doi:10.1016/0004-3702(92)90048-3.
• Huber, P. J. (1980). Robust Statistics. New York: Wiley. ISBN 978-0-471-41805-4.
• Saffiotti, A. (1992). "A Belief-Function Logic". Procs of the 10h AAAI Conference. San Jose, CA. pp. 642–647. ISBN 978-0-262-51063-9.{{cite book}}: CS1 maint: location missing publisher (link)
• Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton: Princeton University Press. ISBN 978-0-691-08175-5.
• Walley, P.; Fine, T. L. (1982). "Towards a frequentist theory of upper and lower probability". Annals of Statistics. 10 (3): 741–761. doi:10.1214/aos/1176345868. JSTOR 2240901.
Dini derivative
In mathematics and, specifically, real analysis, the Dini derivatives (or Dini derivates) are a class of generalizations of the derivative. They were introduced by Ulisse Dini, who studied continuous but nondifferentiable functions.
The upper Dini derivative, which is also called an upper right-hand derivative,[1] of a continuous function
$f:{\mathbb {R} }\rightarrow {\mathbb {R} },$
is denoted by f′+ and defined by
$f'_{+}(t)=\limsup _{h\to {0+}}{\frac {f(t+h)-f(t)}{h}},$
where lim sup is the supremum limit and the limit is a one-sided limit. The lower Dini derivative, f′−, is defined by
$f'_{-}(t)=\liminf _{h\to {0+}}{\frac {f(t)-f(t-h)}{h}},$
where lim inf is the infimum limit.
If f is defined on a vector space, then the upper Dini derivative at t in the direction d is defined by
$f'_{+}(t,d)=\limsup _{h\to {0+}}{\frac {f(t+hd)-f(t)}{h}}.$
If f is locally Lipschitz, then f′+ is finite. If f is differentiable at t, then the Dini derivative at t is the usual derivative at t.
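As a concrete illustration, for $f(x)=|x|$ at $t=0$ the one-sided difference quotients are constant, so the Dini derivatives there can be read off directly. Below is a small numerical sketch; the max/min over the smallest step sizes is only a crude stand-in for the lim sup / lim inf, and the function names are illustrative:

```python
def upper_dini(f, t, ks=range(1, 40)):
    # forward difference quotients (f(t+h) - f(t)) / h for h = 2**-k;
    # their limiting upper value approximates f'_+(t) = limsup as h -> 0+
    qs = [(f(t + 2.0**-k) - f(t)) / 2.0**-k for k in ks]
    return max(qs[-10:])  # maximum over the smallest step sizes

def lower_dini(f, t, ks=range(1, 40)):
    # backward difference quotients (f(t) - f(t-h)) / h for h = 2**-k;
    # their limiting lower value approximates f'_-(t) = liminf as h -> 0+
    qs = [(f(t) - f(t - 2.0**-k)) / 2.0**-k for k in ks]
    return min(qs[-10:])

# f(x) = |x| is not differentiable at 0, yet both Dini derivatives exist:
assert upper_dini(abs, 0.0) == 1.0
assert lower_dini(abs, 0.0) == -1.0
```

Since the two values disagree, the sketch also confirms that $|x|$ is not differentiable at 0 in the usual sense.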
Remarks
• The functions are defined in terms of the infimum and supremum in order to make the Dini derivatives as "bullet proof" as possible, so that they are well-defined for almost all functions, even for functions that are not conventionally differentiable. The upshot of Dini's analysis is that a function is differentiable at a point t on the real line (ℝ) if and only if all the Dini derivatives exist and have the same finite value.
• Sometimes the notation D+ f(t) is used instead of f′+(t) and D− f(t) is used instead of f′−(t).[1]
• Also,
$D^{+}f(t)=\limsup _{h\to {0+}}{\frac {f(t+h)-f(t)}{h}}$
and
$D_{-}f(t)=\liminf _{h\to {0+}}{\frac {f(t)-f(t-h)}{h}}$.
• So when using the D notation of the Dini derivatives, the plus or minus sign indicates the right- or left-hand limit respectively, and the placement of the sign (superscript or subscript) indicates the supremum or infimum limit respectively.
• There are two further Dini derivatives, defined to be
$D_{+}f(t)=\liminf _{h\to {0+}}{\frac {f(t+h)-f(t)}{h}}$
and
$D^{-}f(t)=\limsup _{h\to {0+}}{\frac {f(t)-f(t-h)}{h}}$.
which are the same as the first pair, but with the supremum and the infimum exchanged. For only moderately ill-behaved functions, the two extra Dini derivatives are not needed. If all four Dini derivatives have the same finite value ($D^{+}f(t)=D_{+}f(t)=D^{-}f(t)=D_{-}f(t)$), then the function f is differentiable in the usual sense at the point t .
• On the extended reals, each of the Dini derivatives always exists; however, they may take on the values +∞ or −∞ at times (i.e., the Dini derivatives always exist in the extended sense).
See also
• Denjoy–Young–Saks theorem – Mathematical theorem about Dini derivatives
• Derivative (generalizations) – Fundamental construction of differential calculusPages displaying short descriptions of redirect targets
• Semi-differentiability
References
1. Khalil, Hassan K. (2002). Nonlinear Systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-067389-7.
• Lukashenko, T.P. (2001) [1994], "Dini derivative", Encyclopedia of Mathematics, EMS Press.
• Royden, H. L. (1968). Real Analysis (2nd ed.). MacMillan. ISBN 978-0-02-404150-0.
• Thomson, Brian S.; Bruckner, Judith B.; Bruckner, Andrew M. (2008). Elementary Real Analysis. ClassicalRealAnalysis.com [first edition published by Prentice Hall in 2001]. pp. 301–302. ISBN 978-1-4348-4161-2.
This article incorporates material from Dini derivative on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Semi-continuity
In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function $f$ is upper (respectively, lower) semicontinuous at a point $x_{0}$ if, roughly speaking, the function values for arguments near $x_{0}$ are not much higher (respectively, lower) than $f\left(x_{0}\right).$
For the notion of upper or lower semi-continuous set-valued function, see Hemicontinuity.
A function is continuous if and only if it is both upper and lower semicontinuous. If we take a continuous function and increase its value at a certain point $x_{0}$ to $f\left(x_{0}\right)+c$ for some $c>0$, then the result is upper semicontinuous; if we decrease its value to $f\left(x_{0}\right)-c$ then the result is lower semicontinuous.
The notion of upper and lower semicontinuous function was first introduced and studied by René Baire in his thesis in 1899.[1]
Definitions
Assume throughout that $X$ is a topological space and $f:X\to {\overline {\mathbb {R} }}$ is a function with values in the extended real numbers ${\overline {\mathbb {R} }}=\mathbb {R} \cup \{-\infty ,\infty \}=[-\infty ,\infty ]$.
Upper semicontinuity
A function $f:X\to {\overline {\mathbb {R} }}$ is called upper semicontinuous at a point $x_{0}\in X$ if for every real $y>f\left(x_{0}\right)$ there exists a neighborhood $U$ of $x_{0}$ such that $f(x)<y$ for all $x\in U$.[2] Equivalently, $f$ is upper semicontinuous at $x_{0}$ if and only if
$\limsup _{x\to x_{0}}f(x)\leq f(x_{0})$
where lim sup is the limit superior of the function $f$ at the point $x_{0}$.
A function $f:X\to {\overline {\mathbb {R} }}$ is called upper semicontinuous if it satisfies any of the following equivalent conditions:[2]
(1) The function is upper semicontinuous at every point of its domain.
(2) All sets $f^{-1}([-\infty ,y))=\{x\in X:f(x)<y\}$ with $y\in \mathbb {R} $ are open in $X$, where $[-\infty ,y)=\{t\in {\overline {\mathbb {R} }}:t<y\}$.
(3) All superlevel sets $\{x\in X:f(x)\geq y\}$ with $y\in \mathbb {R} $ are closed in $X$.
(4) The hypograph $\{(x,t)\in X\times \mathbb {R} :t\leq f(x)\}$ is closed in $X\times \mathbb {R} $.
(5) The function is continuous when the codomain ${\overline {\mathbb {R} }}$ is given the left order topology. This is just a restatement of condition (2) since the left order topology is generated by all the intervals $[-\infty ,y)$.
Lower semicontinuity
A function $f:X\to {\overline {\mathbb {R} }}$ is called lower semicontinuous at a point $x_{0}\in X$ if for every real $y<f\left(x_{0}\right)$ there exists a neighborhood $U$ of $x_{0}$ such that $f(x)>y$ for all $x\in U$. Equivalently, $f$ is lower semicontinuous at $x_{0}$ if and only if
$\liminf _{x\to x_{0}}f(x)\geq f(x_{0})$
where $\liminf $ is the limit inferior of the function $f$ at point $x_{0}$.
A function $f:X\to {\overline {\mathbb {R} }}$ is called lower semicontinuous if it satisfies any of the following equivalent conditions:
(1) The function is lower semicontinuous at every point of its domain.
(2) All sets $f^{-1}((y,\infty ])=\{x\in X:f(x)>y\}$ with $y\in \mathbb {R} $ are open in $X$, where $(y,\infty ]=\{t\in {\overline {\mathbb {R} }}:t>y\}$.
(3) All sublevel sets $\{x\in X:f(x)\leq y\}$ with $y\in \mathbb {R} $ are closed in $X$.
(4) The epigraph $\{(x,t)\in X\times \mathbb {R} :t\geq f(x)\}$ is closed in $X\times \mathbb {R} $.
(5) The function is continuous when the codomain ${\overline {\mathbb {R} }}$ is given the right order topology. This is just a restatement of condition (2) since the right order topology is generated by all the intervals $(y,\infty ]$.
Examples
Consider the function $f,$ piecewise defined by:
$f(x)={\begin{cases}-1&{\mbox{if }}x<0,\\1&{\mbox{if }}x\geq 0\end{cases}}$
This function is upper semicontinuous at $x_{0}=0,$ but not lower semicontinuous.
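Semicontinuity of this step function at $x_{0}=0$ can be checked numerically by comparing $f(0)$ with the values at nearby nonzero points. This is only a sketch: the max/min over sample points stand in for the lim sup / lim inf at 0.

```python
def f(x):
    # the piecewise example: -1 for x < 0, 1 for x >= 0
    return -1.0 if x < 0 else 1.0

# values of f at nonzero points approaching 0 from both sides
near = [s * 2.0**-k for k in range(1, 30) for s in (-1.0, 1.0)]
limsup = max(f(x) for x in near)   # stand-in for limsup_{x -> 0} f(x)
liminf = min(f(x) for x in near)   # stand-in for liminf_{x -> 0} f(x)

assert limsup <= f(0.0)        # upper semicontinuous at 0
assert not (liminf >= f(0.0))  # not lower semicontinuous at 0
```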
The floor function $f(x)=\lfloor x\rfloor ,$ which returns the greatest integer less than or equal to a given real number $x,$ is everywhere upper semicontinuous. Similarly, the ceiling function $f(x)=\lceil x\rceil $ is lower semicontinuous.
Upper and lower semicontinuity bear no relation to continuity from the left or from the right for functions of a real variable. Semicontinuity is defined in terms of an ordering in the range of the functions, not in the domain.[3] For example, the function
$f(x)={\begin{cases}\sin(1/x)&{\mbox{if }}x\neq 0,\\1&{\mbox{if }}x=0,\end{cases}}$
is upper semicontinuous at $x=0$ although its one-sided limits at zero do not even exist.
If $X=\mathbb {R} ^{n}$ is a Euclidean space (or more generally, a metric space) and $\Gamma =C([0,1],X)$ is the space of curves in $X$ (with the supremum distance $d_{\Gamma }(\alpha ,\beta )=\sup\{d_{X}(\alpha (t),\beta (t)):t\in [0,1]\}$), then the length functional $L:\Gamma \to [0,+\infty ],$ which assigns to each curve $\alpha $ its length $L(\alpha ),$ is lower semicontinuous.[4] As an example, consider approximating the unit square diagonal by a staircase from below. The staircase always has length 2, while the diagonal line has only length ${\sqrt {2}}$.
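The staircase example can be verified by direct computation. In the sketch below, `staircase_length` (an illustrative name) measures the $n$-step polyline from $(0,0)$ to $(1,1)$; powers of two are used so the arithmetic is exact in floating point.

```python
import math

def staircase_length(n):
    # each of the n steps moves right 1/n and then up 1/n,
    # so the polyline has length n * (2/n) = 2 for every n
    return n * (1.0 / n + 1.0 / n)

# every approximating staircase has length 2 ...
assert staircase_length(4) == 2.0
assert staircase_length(2**20) == 2.0
# ... while the limit curve (the diagonal) is strictly shorter, consistent
# with lower semicontinuity: L(limit) <= liminf L(approximating curves)
assert math.sqrt(2.0) < 2.0
```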
Let $(X,\mu )$ be a measure space and let $L^{+}(X,\mu )$ denote the set of positive measurable functions endowed with the topology of convergence in measure with respect to $\mu .$ Then by Fatou's lemma the integral, seen as an operator from $L^{+}(X,\mu )$ to $[-\infty ,+\infty ],$ is lower semicontinuous.
Properties
Unless specified otherwise, all functions below are from a topological space $X$ to the extended real numbers ${\overline {\mathbb {R} }}=[-\infty ,\infty ].$ Several of the results hold for semicontinuity at a specific point, but for brevity they are only stated for semicontinuity over the whole domain.
• A function $f:X\to {\overline {\mathbb {R} }}$ is continuous if and only if it is both upper and lower semicontinuous.
• The indicator function of a set $A\subset X$ (defined by $\mathbf {1} _{A}(x)=1$ if $x\in A$ and $0$ if $x\notin A$) is upper semicontinuous if and only if $A$ is a closed set. It is lower semicontinuous if and only if $A$ is an open set.[note 1]
• The sum $f+g$ of two lower semicontinuous functions is lower semicontinuous[5] (provided the sum is well-defined, i.e., $f(x)+g(x)$ is not the indeterminate form $-\infty +\infty $). The same holds for upper semicontinuous functions.
• If both functions are non-negative, the product function $fg$ of two lower semicontinuous functions is lower semicontinuous. The corresponding result holds for upper semicontinuous functions.
• A function $f:X\to {\overline {\mathbb {R} }}$ is lower semicontinuous if and only if $-f$ is upper semicontinuous.
• The composition $f\circ g$ of upper semicontinuous functions is not necessarily upper semicontinuous, but if $f$ is also non-decreasing, then $f\circ g$ is upper semicontinuous.[6]
• The minimum and the maximum of two lower semicontinuous functions are lower semicontinuous. In other words, the set of all lower semicontinuous functions from $X$ to ${\overline {\mathbb {R} }}$ (or to $\mathbb {R} $) forms a lattice. The same holds for upper semicontinuous functions.
• The (pointwise) supremum of an arbitrary family $(f_{i})_{i\in I}$ of lower semicontinuous functions $f_{i}:X\to {\overline {\mathbb {R} }}$ (defined by $f(x)=\sup\{f_{i}(x):i\in I\}$) is lower semicontinuous.[7]
In particular, the limit of a monotone increasing sequence $f_{1}\leq f_{2}\leq f_{3}\leq \cdots $ of continuous functions is lower semicontinuous. (The Theorem of Baire below provides a partial converse.) The limit function will only be lower semicontinuous in general, not continuous. An example is given by the functions $f_{n}(x)=1-(1-x)^{n}$ defined for $x\in [0,1]$ for $n=1,2,\ldots .$
Likewise, the infimum of an arbitrary family of upper semicontinuous functions is upper semicontinuous. And the limit of a monotone decreasing sequence of continuous functions is upper semicontinuous.
• (Theorem of Baire)[note 2] Assume $X$ is a metric space. Every lower semicontinuous function $f:X\to {\overline {\mathbb {R} }}$ is the limit of a monotone increasing sequence of extended real-valued continuous functions on $X$; if $f$ does not take the value $-\infty $, the continuous functions can be taken to be real-valued.[8][9]
And every upper semicontinuous function $f:X\to {\overline {\mathbb {R} }}$ is the limit of a monotone decreasing sequence of extended real-valued continuous functions on $X$; if $f$ does not take the value $\infty ,$ the continuous functions can be taken to be real-valued.
• If $C$ is a compact space (for instance a closed bounded interval $[a,b]$) and $f:C\to {\overline {\mathbb {R} }}$ is upper semicontinuous, then $f$ has a maximum on $C.$ If $f$ is lower semicontinuous on $C,$ it has a minimum on $C.$
(Proof for the upper semicontinuous case: By condition (5) in the definition, $f$ is continuous when ${\overline {\mathbb {R} }}$ is given the left order topology. So its image $f(C)$ is compact in that topology. And the compact sets in that topology are exactly the sets with a maximum. For an alternative proof, see the article on the extreme value theorem.)
• Any upper semicontinuous function $f:X\to \mathbb {N} $ on an arbitrary topological space $X$ is locally constant on some dense open subset of $X.$
• Tonelli's theorem in functional analysis characterizes the weak lower semicontinuity of nonlinear functionals on Lp spaces in terms of the convexity of another function.
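The monotone-sequence example above, $f_{n}(x)=1-(1-x)^{n}$ on $[0,1],$ can be checked numerically: the sequence increases with $n,$ and its pointwise limit (0 at $x=0,$ 1 on $(0,1]$) is lower semicontinuous but not continuous. A small sketch:

```python
def f_n(n, x):
    # the n-th function of the monotone increasing sequence on [0, 1]
    return 1.0 - (1.0 - x)**n

# the sequence is monotone increasing in n at each point of [0, 1] ...
assert f_n(1, 0.5) <= f_n(2, 0.5) <= f_n(3, 0.5)
# ... and the pointwise limit jumps: 0 at x = 0, (approximately) 1 for x > 0
assert f_n(10**6, 0.0) == 0.0
assert abs(f_n(10**6, 0.5) - 1.0) < 1e-12
```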
See also
• Directional continuity – Mathematical function with no sudden changesPages displaying short descriptions of redirect targets
• Katětov–Tong insertion theorem – On existence of a continuous function between semicontinuous upper and lower bounds
• Semicontinuous set-valued function
Notes
1. In the context of convex analysis, the characteristic function of a set $A$ is defined differently, as $\chi _{A}(x)=0$ if $x\in A$ and $\chi _{A}(x)=\infty $ if $x\notin A$. With that definition, the characteristic function of any closed set is lower semicontinuous, and the characteristic function of any open set is upper semicontinuous.
2. The result was proved by René Baire in 1904 for real-valued functions defined on $\mathbb {R} $. It was extended to metric spaces by Hans Hahn in 1917, and Hing Tong showed in 1952 that the most general class of spaces where the theorem holds is the class of perfectly normal spaces. (See Engelking, Exercise 1.7.15(c), p. 62 for details and specific references.)
References
1. Verry, Matthieu. "Histoire des mathématiques - René Baire".
2. Stromberg, p. 132, Exercise 4
3. Willard, p. 49, problem 7K
4. Giaquinta, Mariano (2007). Mathematical analysis : linear and metric structures and continuity. Giuseppe Modica (1 ed.). Boston: Birkhäuser. Theorem 11.3, p.396. ISBN 978-0-8176-4514-4. OCLC 213079540.
5. Puterman, Martin L. (2005). Markov Decision Processes Discrete Stochastic Dynamic Programming. Wiley-Interscience. pp. 602. ISBN 978-0-471-72782-8.
6. Moore, James C. (1999). Mathematical methods for economic theory. Berlin: Springer. p. 143. ISBN 9783540662358.
7. "To show that the supremum of any collection of lower semicontinuous functions is lower semicontinuous".
8. Stromberg, p. 132, Exercise 4(g)
9. "Show that lower semicontinuous function is the supremum of an increasing sequence of continuous functions".
Bibliography
• Benesova, B.; Kruzik, M. (2017). "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review. 59 (4): 703–766. arXiv:1601.00390. doi:10.1137/16M1060947. S2CID 119668631.
• Bourbaki, Nicolas (1998). Elements of Mathematics: General Topology, 1–4. Springer. ISBN 0-201-00636-7.
• Bourbaki, Nicolas (1998). Elements of Mathematics: General Topology, 5–10. Springer. ISBN 3-540-64563-2.
• Engelking, Ryszard (1989). General Topology. Heldermann Verlag, Berlin. ISBN 3-88538-006-4.
• Gelbaum, Bernard R.; Olmsted, John M.H. (2003). Counterexamples in analysis. Dover Publications. ISBN 0-486-42875-3.
• Hyers, Donald H.; Isac, George; Rassias, Themistocles M. (1997). Topics in nonlinear analysis & applications. World Scientific. ISBN 981-02-2534-2.
• Stromberg, Karl (1981). Introduction to Classical Real Analysis. Wadsworth. ISBN 978-0-534-98012-2.
• Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
• Zălinescu, Constantin (30 July 2002). Convex Analysis in General Vector Spaces. River Edge, N.J. London: World Scientific Publishing. ISBN 978-981-4488-15-0. MR 1921556. OCLC 285163112 – via Internet Archive.
Convex analysis and variational analysis
Basic concepts
• Convex combination
• Convex function
• Convex set
Topics (list)
• Choquet theory
• Convex geometry
• Convex metric space
• Convex optimization
• Duality
• Lagrange multiplier
• Legendre transformation
• Locally convex topological vector space
• Simplex
Maps
• Convex conjugate
• Concave
• (Closed
• K-
• Logarithmically
• Proper
• Pseudo-
• Quasi-) Convex function
• Invex function
• Legendre transformation
• Semi-continuity
• Subderivative
Main results (list)
• Carathéodory's theorem
• Ekeland's variational principle
• Fenchel–Moreau theorem
• Fenchel-Young inequality
• Jensen's inequality
• Hermite–Hadamard inequality
• Krein–Milman theorem
• Mazur's lemma
• Shapley–Folkman lemma
• Robinson-Ursescu
• Simons
• Ursescu
Sets
• Convex hull
• (Orthogonally, Pseudo-) Convex set
• Effective domain
• Epigraph
• Hypograph
• John ellipsoid
• Lens
• Radial set/Algebraic interior
• Zonotope
Series
• Convex series related ((cs, lcs)-closed, (cs, bcs)-complete, (lower) ideally convex, (Hx), and (Hwx))
Duality
• Dual system
• Duality gap
• Strong duality
• Weak duality
Applications and related
• Convexity in economics
Upper topology
In mathematics, the upper topology on a partially ordered set X is the coarsest topology in which the closure of a singleton $\{a\}$ is the order section $a]=\{x\leq a\}$ for each $a\in X.$ If $\leq $ is a partial order, the upper topology is the least order-consistent topology in which all open sets are up-sets; however, not every up-set need be open. The lower topology induced by the preorder is defined similarly in terms of the down-sets. The preorder inducing the upper topology is its specialization preorder, but the specialization preorder of the lower topology is opposite to the inducing preorder.
The real upper topology is most naturally defined on the upper-extended real line $(-\infty ,+\infty ]=\mathbb {R} \cup \{+\infty \}$ by the system $\{(a,+\infty ]:a\in \mathbb {R} \cup \{\pm \infty \}\}$ of open sets. Similarly, the real lower topology $\{[-\infty ,a):a\in \mathbb {R} \cup \{\pm \infty \}\}$ is naturally defined on the lower real line $[-\infty ,+\infty )=\mathbb {R} \cup \{-\infty \}.$ A real function on a topological space is upper semi-continuous if and only if it is lower-continuous, i.e. is continuous with respect to the lower topology on the lower-extended line ${[-\infty ,+\infty )}.$ Similarly, a function into the upper real line is lower semi-continuous if and only if it is upper-continuous, i.e. is continuous with respect to the upper topology on ${(-\infty ,+\infty ]}.$
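On a finite chain the definition can be verified by brute force: taking the rays $(a,+\infty ]$ as the (sub)basic open sets, the closure of a singleton $\{a\}$ comes out as exactly the down-set $a].$ The sketch below uses an illustrative four-point chain; since the rays are nested, every open set is itself a ray (or empty, or the whole space).

```python
points = [0, 1, 2, 3]                                   # a finite chain, usual order
rays = [{x for x in points if x > a} for a in points]   # subbasic opens (a, +inf]

def closure_of_singleton(a):
    # the closure of {a} is the complement of the largest open set avoiding a;
    # opens are unions of rays, so union all rays that miss a
    avoid = set().union(*(r for r in rays if a not in r))
    return set(points) - avoid

assert closure_of_singleton(2) == {0, 1, 2}        # the down-set 2]
assert closure_of_singleton(0) == {0}              # the bottom element is closed
assert closure_of_singleton(3) == {0, 1, 2, 3}     # the top element is dense
```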
See also
• List of topologies – List of concrete topologies and topological spaces
References
• Gerhard Gierz; K.H. Hofmann; K. Keimel; J. D. Lawson; M. Mislove; D. S. Scott (2003). Continuous Lattices and Domains. Cambridge University Press. p. 510. ISBN 0-521-80338-1.
• Kelley, John L. (1955). General Topology. Van Nostrand Reinhold. p. 101.
• Knapp, Anthony W. (2005). Basic Real Analysis. Birkhhauser. p. 481. ISBN 0-8176-3250-6.
Upward planar drawing
In graph drawing, an upward planar drawing of a directed acyclic graph is an embedding of the graph into the Euclidean plane, in which the edges are represented as non-crossing monotonic upwards curves. That is, the curve representing each edge should have the property that every horizontal line intersects it in at most one point, and no two edges may intersect except at a shared endpoint.[1] In this sense, it is the ideal case for layered graph drawing, a style of graph drawing in which edges are monotonic curves that may cross, but in which crossings are to be minimized.
Characterizations
A directed acyclic graph must be planar in order to have an upward planar drawing, but not every planar acyclic graph has such a drawing. Among the planar directed acyclic graphs with a single source (vertex with no incoming edges) and sink (vertex with no outgoing edges), the graphs with upward planar drawings are the st-planar graphs, planar graphs in which the source and sink both belong to the same face of at least one of the planar embeddings of the graph. More generally, a graph G has an upward planar drawing if and only if it is directed and acyclic, and is a subgraph of an st-planar graph on the same vertex set.[2]
In an upward embedding, the sets of incoming and outgoing edges incident to each vertex are contiguous in the cyclic ordering of the edges at the vertex. A planar embedding of a given directed acyclic graph is said to be bimodal when it has this property. Additionally, the angle between two consecutive edges with the same orientation at a given vertex may be labeled as small if it is less than π, or large if it is greater than π. Each source or sink must have exactly one large angle, and each vertex that is neither a source nor a sink must have none. Additionally, each internal face of the drawing must have two more small angles than large ones, and the external face must have two more large angles than small ones. A consistent assignment is a labeling of the angles that satisfies these properties; every upward embedding has a consistent assignment. Conversely, every directed acyclic graph that has a bimodal planar embedding with a consistent assignment has an upward planar drawing, that can be constructed from it in linear time.[3]
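The bimodality condition on the cyclic ordering at a vertex is straightforward to test: the incoming and outgoing edges are each contiguous exactly when the direction label changes at most twice in one full pass around the vertex. A sketch (the list encodes the rotation order of edge directions at a single vertex; names are illustrative):

```python
def is_bimodal(rotation):
    """rotation: labels 'in' or 'out' of the edges at a vertex, in cyclic order.
    The incoming and outgoing edges each form a contiguous block iff the
    label changes at most twice along one full cyclic pass."""
    n = len(rotation)
    changes = sum(rotation[i] != rotation[(i + 1) % n] for i in range(n))
    return changes <= 2

assert is_bimodal(['in', 'in', 'out', 'out'])      # two contiguous blocks
assert is_bimodal(['in', 'in', 'in'])              # all incoming (a sink-like vertex)
assert not is_bimodal(['in', 'out', 'in', 'out'])  # alternating: not bimodal
```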
Another characterization is possible for graphs with a single source. In this case an upward planar embedding must have the source on the outer face, and every undirected cycle of the graph must have at least one vertex at which both cycle edges are incoming (for instance, the vertex with the highest placement in the drawing). Conversely, if an embedding has both of these properties, then it is equivalent to an upward embedding.[4]
Computational complexity
Several special cases of upward planarity testing are known to be possible in polynomial time:
• Testing whether a graph is st-planar may be accomplished in linear time by adding an edge from s to t and testing whether the remaining graph is planar. Along the same lines, it is possible to construct an upward planar drawing (when it exists) of a directed acyclic graph with a single source and sink, in linear time.[5]
• Testing whether a directed graph with a fixed planar embedding can be drawn upward planar, with an embedding consistent with the given one, can be accomplished by checking that the embedding is bimodal and modeling the consistent assignment problem as a network flow problem. The running time is linear in the size of the input graph, and polynomial in its number of sources and sinks.[6]
• Because oriented polyhedral graphs have a unique planar embedding, the existence of an upward planar drawing for these graphs may be tested in polynomial time.[7]
• Testing whether an outerplanar directed acyclic graph has an upward planar drawing is also polynomial.[8]
• Every series–parallel graph, oriented consistently with the series–parallel structure, is upward planar. An upward planar drawing can be constructed directly from the series–parallel decomposition of the graph.[9] More generally, arbitrary orientations of undirected series–parallel graphs may be tested for upward planarity in polynomial time.[10]
• Every oriented tree is upward planar.[9]
• Every bipartite planar graph, with its edges oriented consistently from one side of the bipartition to the other, is upward planar.[9][11]
• A more complicated polynomial time algorithm is known for testing upward planarity of graphs that have a single source, but multiple sinks, or vice versa.[12]
• Testing upward planarity can be performed in polynomial time when there are a constant number of triconnected components and cut vertices, and is fixed-parameter tractable in these two numbers.[13] It is also fixed-parameter tractable in the cyclomatic number of the input graph,[14] and in the number of sources (i.e., vertices with no in-edges).[15]
• If the y-coordinates of all vertices are fixed, then a choice of x-coordinates that makes the drawing upward planar can be found in polynomial time.[16]
However, it is NP-complete to determine whether a planar directed acyclic graph with multiple sources and sinks has an upward planar drawing.[17]
Straight-line drawing and area requirements
Fáry's theorem states that every planar graph has a drawing in which its edges are represented by straight line segments, and the same is true of upward planar drawing: every upward planar graph has a straight-line upward planar drawing.[18] A straight-line upward drawing of a transitively reduced st-planar graph may be obtained by the technique of dominance drawing, with all vertices having integer coordinates within an n × n grid.[19] However, certain other upward planar graphs may require exponential area in all of their straight-line upward planar drawings.[18] If a choice of embedding is fixed, even oriented series–parallel graphs and oriented trees may require exponential area.[20]
Hasse diagrams
Upward planar drawings are particularly important for Hasse diagrams of partially ordered sets, as these diagrams are typically required to be drawn upwardly. In graph-theoretic terms, these correspond to the transitively reduced directed acyclic graphs; such a graph can be formed from the covering relation of a partial order, and the partial order itself forms the reachability relation in the graph. If a partially ordered set has one minimal element, has one maximal element, and has an upward planar drawing, then it must necessarily form a lattice, a set in which every pair of elements has a unique greatest lower bound and a unique least upper bound.[21] The Hasse diagram of a lattice is planar if and only if its order dimension is at most two.[22] However, some partial orders of dimension two and with one minimal and maximal element do not have an upward planar drawing (take the order defined by the transitive closure of $a<b,a<c,b<d,b<e,c<d,c<e,d<f,e<f$).
References
Footnotes
1. Garg & Tamassia (1995); Di Battista et al. (1998).
2. Garg & Tamassia (1995), pp. 111–112; Di Battista et al. (1998), 6.1 "Inclusion in a Planar st-Graph", pp. 172–179; Di Battista & Tamassia (1988); Kelly (1987).
3. Garg & Tamassia (1995), pp. 112–115; Di Battista et al. (1998), 6.2 "Angles in Upward Drawings", pp. 180–188; Bertolazzi & Di Battista (1991); Bertolazzi et al. (1994).
4. Garg & Tamassia (1995), p. 115; Di Battista et al. (1998), 6.7.2 "Forbidden Cycles for Single-Source Digraphs", pp. 209–210; Thomassen (1989).
5. Garg & Tamassia (1995), p. 119; Di Battista et al. (1998), p. 179.
6. Garg & Tamassia (1995), pp. 119–121; Di Battista et al. (1998), 6.3 "Upward Planarity Testing of Embedded Digraphs", pp. 188–192; Bertolazzi & Di Battista (1991); Bertolazzi et al. (1994); Abbasi, Healy & Rextin (2010).
7. Di Battista et al. (1998), pp. 191–192; Bertolazzi & Di Battista (1991); Bertolazzi et al. (1994).
8. Garg & Tamassia (1995), pp. 125–126; Di Battista et al. (1998), 6.7.1 "Outerplanar Digraph", p. 209; Papakostas (1995).
9. Di Battista et al. (1998), 6.7.4 "Some Classes of Upward Planar Digraphs", p. 212.
10. Didimo, Giordano & Liotta (2009).
11. Di Battista, Liu & Rival (1990).
12. Garg & Tamassia (1995), pp. 122–125; Di Battista et al. (1998), 6.5 "Optimal Upward Planarity Testing of Single-Source Digraphs", pp. 195–200; Hutton & Lubiw (1996); Bertolazzi et al. (1998).
13. Chan (2004); Healy & Lynch (2006).
14. Healy & Lynch (2006).
15. Chaplick et al. (2022)
16. Jünger & Leipert (1999).
17. Garg & Tamassia (1995), pp. 126–132; Di Battista et al. (1998), 6.6 "Upward Planarity Testing is NP-complete", pp. 201–209; Garg & Tamassia (2001).
18. Di Battista & Frati (2012); Di Battista, Tamassia & Tollis (1992).
19. Di Battista et al. (1998), 4.7 "Dominance Drawings", pp. 112–127; Di Battista, Tamassia & Tollis (1992).
20. Di Battista & Frati (2012); Bertolazzi et al. (1994); Frati (2008).
21. Di Battista et al. (1998), 6.7.3 "Forbidden Structures for Lattices", pp. 210–212; Platt (1976).
22. Garg & Tamassia (1995), pp. 118; Baker, Fishburn & Roberts (1972).
Surveys and textbooks
• Di Battista, Giuseppe; Eades, Peter; Tamassia, Roberto; Tollis, Ioannis G. (1998), "Flow and Upward Planarity", Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall, pp. 171–213, ISBN 978-0-13-301615-4.
• Di Battista, Giuseppe; Frati, Fabrizio (2012), "Drawing trees, outerplanar graphs, series–parallel graphs, and planar graphs in small area", Thirty Essays on Geometric Graph Theory, Algorithms and combinatorics, vol. 29, Springer, pp. 121–165, doi:10.1007/978-1-4614-0110-0_9, ISBN 9781461401100. Section 5, "Upward Drawings", pp. 149–151.
• Garg, Ashim; Tamassia, Roberto (1995), "Upward planarity testing", Order, 12 (2): 109–133, doi:10.1007/BF01108622, MR 1354797, S2CID 14183717.
Research articles
• Abbasi, Sarmad; Healy, Patrick; Rextin, Aimal (2010), "Improving the running time of embedded upward planarity testing", Information Processing Letters, 110 (7): 274–278, doi:10.1016/j.ipl.2010.02.004, MR 2642837.
• Baker, K. A.; Fishburn, P. C.; Roberts, F. S. (1972), "Partial orders of dimension 2", Networks, 2 (1): 11–28, doi:10.1002/net.3230020103.
• Bertolazzi, Paola; Cohen, Robert F.; Di Battista, Giuseppe; Tamassia, Roberto; Tollis, Ioannis G. (1994), "How to draw a series–parallel digraph", International Journal of Computational Geometry & Applications, 4 (4): 385–402, doi:10.1142/S0218195994000215, MR 1310911.
• Bertolazzi, Paola; Di Battista, Giuseppe (1991), "On upward drawing testing of triconnected digraphs", Proceedings of the Seventh Annual Symposium on Computational Geometry (SCG '91, North Conway, New Hampshire, USA), New York, NY, USA: ACM, pp. 272–280, doi:10.1145/109648.109679, ISBN 0-89791-426-0, S2CID 18306721.
• Bertolazzi, P.; Di Battista, G.; Liotta, G.; Mannino, C. (1994), "Upward drawings of triconnected digraphs", Algorithmica, 12 (6): 476–497, doi:10.1007/BF01188716, MR 1297810, S2CID 33167313.
• Bertolazzi, Paola; Di Battista, Giuseppe; Mannino, Carlo; Tamassia, Roberto (1998), "Optimal upward planarity testing of single-source digraphs", SIAM Journal on Computing, 27 (1): 132–169, doi:10.1137/S0097539794279626, MR 1614821.
• Chan, Hubert (2004), "A parameterized algorithm for upward planarity testing", Proc. 12th European Symposium on Algorithms (ESA '04), Lecture Notes in Computer Science, vol. 3221, Springer-Verlag, pp. 157–168, doi:10.1007/978-3-540-30140-0_16.
• Di Battista, Giuseppe; Liu, Wei-Ping; Rival, Ivan (1990), "Bipartite graphs, upward drawings, and planarity", Information Processing Letters, 36 (6): 317–322, doi:10.1016/0020-0190(90)90045-Y, MR 1084490.
• Di Battista, Giuseppe; Tamassia, Roberto (1988), "Algorithms for plane representations of acyclic digraphs", Theoretical Computer Science, 61 (2–3): 175–198, doi:10.1016/0304-3975(88)90123-5, MR 0980241.
• Di Battista, Giuseppe; Tamassia, Roberto; Tollis, Ioannis G. (1992), "Area requirement and symmetry display of planar upward drawings", Discrete and Computational Geometry, 7 (4): 381–401, doi:10.1007/BF02187850, MR 1148953.
• Didimo, Walter; Giordano, Francesco; Liotta, Giuseppe (2009), "Upward spirality and upward planarity testing", SIAM Journal on Discrete Mathematics, 23 (4): 1842–1899, doi:10.1137/070696854, MR 2594962, S2CID 26154284.
• Frati, Fabrizio (2008), "On minimum area planar upward drawings of directed trees and other families of directed acyclic graphs", International Journal of Computational Geometry & Applications, 18 (3): 251–271, doi:10.1142/S021819590800260X, MR 2424444.
• Garg, Ashim; Tamassia, Roberto (2001), "On the computational complexity of upward and rectilinear planarity testing", SIAM Journal on Computing, 31 (2): 601–625, doi:10.1137/S0097539794277123, MR 1861292, S2CID 15691098.
• Healy, Patrick; Lynch, Karol (2006), "Two fixed-parameter tractable algorithms for testing upward planarity", International Journal of Foundations of Computer Science, 17 (5): 1095–1114, doi:10.1142/S0129054106004285.
• Hutton, Michael D.; Lubiw, Anna (1996), "Upward planar drawing of single-source acyclic digraphs", SIAM Journal on Computing, 25 (2): 291–311, doi:10.1137/S0097539792235906, MR 1379303. First presented at the 2nd ACM-SIAM Symposium on Discrete Algorithms, 1991.
• Jünger, Michael; Leipert, Sebastian (1999), "Level planar embedding in linear time", Graph Drawing (Proc. GD '99), Lecture Notes in Computer Science, vol. 1731, pp. 72–81, doi:10.1007/3-540-46648-7_7, ISBN 978-3-540-66904-3.
• Kelly, David (1987), "Fundamentals of planar ordered sets", Discrete Mathematics, 63 (2–3): 197–216, doi:10.1016/0012-365X(87)90008-2, MR 0885497.
• Papakostas, Achilleas (1995), "Upward planarity testing of outerplanar dags (extended abstract)", Graph Drawing: DIMACS International Workshop, GD '94, Princeton, New Jersey, USA, October 10–12, 1994, Proceedings, Lecture Notes in Computer Science, vol. 894, Berlin: Springer, pp. 298–306, doi:10.1007/3-540-58950-3_385, MR 1337518.
• Platt, C. R. (1976), "Planar lattices and planar graphs", Journal of Combinatorial Theory, Ser. B, 21 (1): 30–39, doi:10.1016/0095-8956(76)90024-1.
• Thomassen, Carsten (1989), "Planar acyclic oriented graphs", Order, 5 (4): 349–361, doi:10.1007/BF00353654, MR 1010384, S2CID 121445872.
• Chaplick, Steven; Di Giacomo, Emilio; Frati, Fabrizio; Ganian, Robert; Raftopoulou, Chrysanthi N.; Simonov, Kirill (2022), "Parameterized Algorithms for Upward Planarity", 38th International Symposium on Computational Geometry, SoCG, Leibniz International Proceedings in Informatics (LIPIcs), vol. 224, pp. 26:1–26:16, doi:10.4230/LIPIcs.SoCG.2022.26, ISBN 9783959772273
Upwind scheme
In computational physics, the term upwind scheme (sometimes advection scheme) typically refers to a class of numerical discretization methods for solving hyperbolic partial differential equations, in which so-called upstream variables are used to calculate the derivatives in a flow field. That is, derivatives are estimated using a set of data points biased to be more "upwind" of the query point, with respect to the direction of the flow. Historically, the origin of upwind methods can be traced back to the work of Courant, Isaacson, and Rees, who proposed the CIR method.[1]
Model equation
To illustrate the method, consider the following one-dimensional linear advection equation
${\frac {\partial u}{\partial t}}+a{\frac {\partial u}{\partial x}}=0$
which describes a wave propagating along the $x$-axis with velocity $a$. Consider a typical grid point $i$ in the domain. In a one-dimensional domain there are only two directions associated with point $i$: left (towards negative infinity) and right (towards positive infinity). If $a$ is positive, the traveling wave solution of the equation above propagates towards the right; the left side of $i$ is then called the upwind side and the right side the downwind side. Similarly, if $a$ is negative, the traveling wave solution propagates towards the left; the left side is then the downwind side and the right side the upwind side. If the finite difference scheme for the spatial derivative $\partial u/\partial x$ contains more points on the upwind side, the scheme is called upwind-biased or simply an upwind scheme.
First-order upwind scheme
The simplest upwind scheme possible is the first-order upwind scheme. It is given by[2]
${\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}+a{\frac {u_{i}^{n}-u_{i-1}^{n}}{\Delta x}}=0\quad {\text{for}}\quad a>0$
(1)
${\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}+a{\frac {u_{i+1}^{n}-u_{i}^{n}}{\Delta x}}=0\quad {\text{for}}\quad a<0$
(2)
where the superscript $n$ indexes the time dimension $t$ and the subscript $i$ indexes the spatial dimension $x$. (By comparison, a central difference scheme in this scenario would look like
${\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}+a{\frac {u_{i+1}^{n}-u_{i-1}^{n}}{2\Delta x}}=0,$
regardless of the sign of $a$.)
Compact form
Defining
$a^{+}=\max(a,0)\,,\qquad a^{-}=\min(a,0)$
and
$u_{x}^{-}={\frac {u_{i}^{n}-u_{i-1}^{n}}{\Delta x}}\,,\qquad u_{x}^{+}={\frac {u_{i+1}^{n}-u_{i}^{n}}{\Delta x}}$
the two conditional equations (1) and (2) can be combined and written in a compact form as
$u_{i}^{n+1}=u_{i}^{n}-\Delta t\left[a^{+}u_{x}^{-}+a^{-}u_{x}^{+}\right]$
(3)
Equation (3) is a general way of writing any upwind-type schemes.
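The compact update (3) translates directly into code. The following Python sketch assumes a uniform grid with periodic boundary conditions; the function name and parameters are illustrative choices, not from the source.

```python
# Compact first-order upwind update, equation (3):
#   u_i^{n+1} = u_i^n - dt * (a+ * u_x^- + a- * u_x^+)
# with a+ = max(a, 0), a- = min(a, 0). Periodic boundaries via np.roll.
import numpy as np

def upwind_step(u, a, dt, dx):
    """Advance u_t + a u_x = 0 by one time step with first-order upwind."""
    a_plus, a_minus = max(a, 0.0), min(a, 0.0)
    ux_minus = (u - np.roll(u, 1)) / dx      # backward difference u_x^-
    ux_plus = (np.roll(u, -1) - u) / dx      # forward difference  u_x^+
    return u - dt * (a_plus * ux_minus + a_minus * ux_plus)
```

Because the update automatically selects the backward difference for $a>0$ and the forward difference for $a<0$, the same function handles both signs; at Courant number $|a|\Delta t/\Delta x=1$ it shifts the profile exactly one cell per step.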
Stability
The upwind scheme is stable if the following Courant–Friedrichs–Lewy condition (CFL) is satisfied:[3]
$c=\left|{\frac {a\Delta t}{\Delta x}}\right|\leq 1.$
A Taylor series analysis of the upwind scheme discussed above shows that it is first-order accurate in space and time. Modified wavenumber analysis shows that the first-order upwind scheme introduces severe numerical diffusion (dissipation) in regions where the solution has large gradients, because sharp gradients require high wavenumbers to represent, and those are the components the scheme damps most strongly.
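The CFL bound can also be read off from the update itself: for $a>0$, equation (1) rearranges to $u_i^{n+1}=(1-c)\,u_i^n+c\,u_{i-1}^n$, a convex combination of old values exactly when $0\le c\le 1$. A small Python sketch (the grid size and step count are arbitrary choices) shows the sawtooth mode staying bounded for $c\le 1$ and blowing up for $c>1$:

```python
# For a > 0 the first-order upwind update is
#   u_i^{n+1} = (1 - c) u_i^n + c u_{i-1}^n,   c = a*dt/dx.
# The sawtooth mode u_i = (-1)^i is multiplied by (1 - 2c) each step
# (with periodic boundaries and an even cell count), so it decays for
# 0 <= c <= 1 and grows without bound for c > 1.
import numpy as np

def run_upwind(c, n_cells=64, n_steps=60):
    """Max |u| after n_steps at Courant number c, sawtooth initial data."""
    u = (-1.0) ** np.arange(n_cells)
    for _ in range(n_steps):
        u = (1.0 - c) * u + c * np.roll(u, 1)
    return np.max(np.abs(u))
```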
Second-order upwind scheme
The spatial accuracy of the first-order upwind scheme can be improved by using three data points instead of two, which gives a more accurate finite difference stencil for approximating the spatial derivative. For the second-order upwind scheme, $u_{x}^{-}$ in equation (3) becomes the 3-point backward difference, defined as
$u_{x}^{-}={\frac {3u_{i}^{n}-4u_{i-1}^{n}+u_{i-2}^{n}}{2\Delta x}}$
and $u_{x}^{+}$ is the 3-point forward difference, defined as
$u_{x}^{+}={\frac {-u_{i+2}^{n}+4u_{i+1}^{n}-3u_{i}^{n}}{2\Delta x}}$
This scheme is less diffusive than the first-order accurate scheme and is known as the linear upwind differencing (LUD) scheme.
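Both stencils above are exact for polynomials of degree up to two, which gives a quick numerical check of their second-order accuracy. A sketch (the function names are my own):

```python
# Second-order upwind (LUD) spatial differences:
#   u_x^- = ( 3 u_i - 4 u_{i-1} + u_{i-2}) / (2 dx)   3-point backward
#   u_x^+ = (-u_{i+2} + 4 u_{i+1} - 3 u_i) / (2 dx)   3-point forward
# Both stencils differentiate quadratics exactly.

def backward_diff2(f, x, dx):
    return (3 * f(x) - 4 * f(x - dx) + f(x - 2 * dx)) / (2 * dx)

def forward_diff2(f, x, dx):
    return (-f(x + 2 * dx) + 4 * f(x + dx) - 3 * f(x)) / (2 * dx)
```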
See also
• Finite difference method
• Upwind differencing scheme for convection
• Godunov's scheme
References
1. Courant, Richard; Isaacson, E.; Rees, M. (1952). "On the Solution of Nonlinear Hyperbolic Differential Equations by Finite Differences". Comm. Pure Appl. Math. 5 (3): 243–255. doi:10.1002/cpa.3160050303.
2. Patankar, S. V. (1980). Numerical Heat Transfer and Fluid Flow. Taylor & Francis. ISBN 978-0-89116-522-4.
3. Hirsch, C. (1990). Numerical Computation of Internal and External Flows. John Wiley & Sons. ISBN 978-0-471-92452-4.
Numerical methods for partial differential equations
Finite difference
Parabolic
• Forward-time central-space (FTCS)
• Crank–Nicolson
Hyperbolic
• Lax–Friedrichs
• Lax–Wendroff
• MacCormack
• Upwind
• Method of characteristics
Others
• Alternating direction-implicit (ADI)
• Finite-difference time-domain (FDTD)
Finite volume
• Godunov
• High-resolution
• Monotonic upstream-centered (MUSCL)
• Advection upstream-splitting (AUSM)
• Riemann solver
• Essentially non-oscillatory (ENO)
• Weighted essentially non-oscillatory (WENO)
Finite element
• hp-FEM
• Extended (XFEM)
• Discontinuous Galerkin (DG)
• Spectral element (SEM)
• Mortar
• Gradient discretisation (GDM)
• Loubignac iteration
• Smoothed (S-FEM)
Meshless/Meshfree
• Smoothed-particle hydrodynamics (SPH)
• Peridynamics (PD)
• Moving particle semi-implicit method (MPS)
• Material point method (MPM)
• Particle-in-cell (PIC)
Domain decomposition
• Schur complement
• Fictitious domain
• Schwarz alternating
• additive
• abstract additive
• Neumann–Dirichlet
• Neumann–Neumann
• Poincaré–Steklov operator
• Balancing (BDD)
• Balancing by constraints (BDDC)
• Tearing and interconnect (FETI)
• FETI-DP
Others
• Spectral
• Pseudospectral (DVR)
• Method of lines
• Multigrid
• Collocation
• Level-set
• Boundary element
• Method of moments
• Immersed boundary
• Analytic element
• Isogeometric analysis
• Infinite difference method
• Infinite element method
• Galerkin method
• Petrov–Galerkin method
• Validated numerics
• Computer-assisted proof
• Integrable algorithm
• Method of fundamental solutions
Urelement
In set theory, a branch of mathematics, an urelement or ur-element (from the German prefix ur-, 'primordial') is an object that is not a set, but that may be an element of a set. It is also referred to as an atom or individual.
Theory
There are several different but essentially equivalent ways to treat urelements in a first-order theory.
One way is to work in a first-order theory with two sorts, sets and urelements, with a ∈ b only defined when b is a set. In this case, if U is an urelement, it makes no sense to say $X\in U$, although $U\in X$ is perfectly legitimate.
Another way is to work in a one-sorted theory with a unary relation used to distinguish sets and urelements. As non-empty sets contain members while urelements do not, the unary relation is only needed to distinguish the empty set from urelements. Note that in this case, the axiom of extensionality must be formulated to apply only to objects that are not urelements.
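As a toy illustration of the one-sorted treatment (the class names and helpers below are inventions for this sketch, not standard notation): urelements are modeled as atoms, sets as `frozenset`s, and a unary predicate tells them apart, with membership queries only meaningful for sets.

```python
# Toy one-sorted model: urelements are atoms with no members; sets are
# frozensets. The unary predicate is_set plays the role of the relation
# distinguishing sets from urelements.
from dataclasses import dataclass

@dataclass(frozen=True)
class Urelement:
    name: str  # an atom: it may belong to sets but has no members

def is_set(x):
    return isinstance(x, frozenset)

def members(x):
    """Membership is only defined when x is a set."""
    if not is_set(x):
        raise TypeError("an urelement has no members")
    return x

def extensionally_equal(x, y):
    """Extensionality restricted to sets: distinct urelements are never
    identified, even though both are memberless."""
    return is_set(x) and is_set(y) and x == y
```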
This situation is analogous to the treatments of theories of sets and classes. Indeed, urelements are in some sense dual to proper classes: urelements cannot have members whereas proper classes cannot be members. Put differently, urelements are minimal objects while proper classes are maximal objects by the membership relation (which, of course, is not an order relation, so this analogy is not to be taken literally).
Urelements in set theory
The Zermelo set theory of 1908 included urelements, and hence is a version now called ZFA or ZFCA (i.e. ZFA with axiom of choice).[1] It was soon realized that in the context of this and closely related axiomatic set theories, the urelements were not needed because they can easily be modeled in a set theory without urelements.[2] Thus, standard expositions of the canonical axiomatic set theories ZF and ZFC do not mention urelements (for an exception, see Suppes[3]). Axiomatizations of set theory that do invoke urelements include Kripke–Platek set theory with urelements and the variant of Von Neumann–Bernays–Gödel set theory described by Mendelson.[4] In type theory, an object of type 0 can be called an urelement; hence the name "atom".
Adding urelements to the system New Foundations (NF) to produce NFU has surprising consequences. In particular, Jensen proved[5] the consistency of NFU relative to Peano arithmetic, whereas the consistency of NF relative to anything remains an open problem, pending verification of Holmes's proof of its consistency relative to ZF. Moreover, NFU remains relatively consistent when augmented with an axiom of infinity and the axiom of choice. By contrast, the negation of the axiom of choice is, curiously, a theorem of NF. Holmes (1998) takes these facts as evidence that NFU is a more successful foundation for mathematics than NF. Holmes further argues that set theory is more natural with than without urelements, since we may take as urelements the objects of any theory or of the physical universe.[6] In finitist set theory, urelements are mapped to the lowest-level components of the target phenomenon, such as atomic constituents of a physical object or members of an organisation.
Quine atoms
An alternative approach to urelements is to consider them, instead of as a type of object other than sets, as a particular type of set. Quine atoms (named after Willard Van Orman Quine) are sets that only contain themselves, that is, sets that satisfy the formula x = {x}.[7]
Quine atoms cannot exist in systems of set theory that include the axiom of regularity, but they can exist in non-well-founded set theory. ZF set theory with the axiom of regularity removed cannot prove that any non-well-founded sets exist (unless it is inconsistent, in which case it will prove any arbitrary statement), but it is compatible with the existence of Quine atoms. Aczel's anti-foundation axiom implies that there is a unique Quine atom. Other non-well-founded theories may admit many distinct Quine atoms; at the opposite end of the spectrum lies Boffa's axiom of superuniversality, which implies that the distinct Quine atoms form a proper class.[8]
Quine atoms also appear in Quine's New Foundations, which allows more than one such set to exist.[9]
Quine atoms are the only sets called reflexive sets by Peter Aczel,[8] although other authors, e.g. Jon Barwise and Lawrence Moss, use the latter term to denote the larger class of sets with the property x ∈ x.[10]
References
1. Dexter Chua et al.: ZFA: Zermelo–Fraenkel set theory with atoms, on: ncatlab.org: nLab, revised on July 16, 2016.
2. Jech, Thomas J. (1973). The Axiom of Choice. Mineola, New York: Dover Publ. p. 45. ISBN 0486466248.
3. Suppes, Patrick (1972). Axiomatic Set Theory ([Éd. corr. et augm. du texte paru en 1960] ed.). New York: Dover Publ. ISBN 0486616304. Retrieved 17 September 2012.
4. Mendelson, Elliott (1997). Introduction to Mathematical Logic (4th ed.). London: Chapman & Hall. pp. 297–304. ISBN 978-0412808302. Retrieved 17 September 2012.
5. Jensen, Ronald Björn (December 1968). "On the Consistency of a Slight (?) Modification of Quine's 'New Foundations'". Synthese. Springer. 19 (1/2): 250–264. doi:10.1007/bf00568059. ISSN 0039-7857. JSTOR 20114640. S2CID 46960777.
6. Holmes, Randall, 1998. Elementary Set Theory with a Universal Set. Academia-Bruylant.
7. Thomas Forster (2003). Logic, Induction and Sets. Cambridge University Press. p. 199. ISBN 978-0-521-53361-4.
8. Aczel, Peter (1988), Non-well-founded sets, CSLI Lecture Notes, vol. 14, Stanford University, Center for the Study of Language and Information, p. 57, ISBN 0-937073-22-9, MR 0940014, retrieved 2016-10-17.
9. Barwise, Jon; Moss, Lawrence S. (1996), Vicious circles. On the mathematics of non-wellfounded phenomena, CSLI Lecture Notes, vol. 60, CSLI Publications, p. 306, ISBN 1575860090.
10. Barwise, Jon; Moss, Lawrence S. (1996), Vicious circles. On the mathematics of non-wellfounded phenomena, CSLI Lecture Notes, vol. 60, CSLI Publications, p. 57, ISBN 1575860090.
External links
• Weisstein, Eric W. "Urelement". MathWorld.
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Urbach tail
The Urbach tail is an exponential part in the energy spectrum of the absorption coefficient. This tail appears near the optical band edge in amorphous, disordered and crystalline materials.
History
Researchers began investigating the nature of "tail states" in disordered semiconductors in the 1950s. It was found that such tails arise from local strains strong enough to push localized states past the band edges.
In 1953, the Austrian-American physicist Franz Urbach (1902–1969)[1] found that such tails decay exponentially into the gap.[2] Later, photoemission experiments yielded absorption models that revealed the temperature dependence of the tail.[3]
A variety of amorphous and crystalline solids exhibit exponential band edges in their optical absorption spectra. The universality of this feature suggests a common cause. Several attempts have been made to explain the phenomenon, but none has connected specific topological units to the electronic structure.[4][5]
See also
• Tauc plot
References
1. Franz Urbach. Austrian Academy of Sciences
2. Urbach, Franz (1953). "The Long-Wavelength Edge of Photographic Sensitivity and of the Electronic Absorption of Solids". Physical Review. 92 (5): 1324. Bibcode:1953PhRv...92.1324U. doi:10.1103/physrev.92.1324.
3. Aljishi, Samer; Cohen, J. David; Jin, Shu; Ley, Lothar (1990-06-04). "Band tails in hydrogenated amorphous silicon and silicon-germanium alloys". Physical Review Letters. 64 (23): 2811–2814. Bibcode:1990PhRvL..64.2811A. doi:10.1103/physrevlett.64.2811. PMID 10041817.
4. Bacalis, N.; Economou, E. N.; Cohen, M. H. (1988). "Simple derivation of exponential tails in the density of states". Physical Review B. 37 (5): 2714–2717. Bibcode:1988PhRvB..37.2714B. doi:10.1103/physrevb.37.2714. PMID 9944833.
5. Cohen, M. H.; Chou, M.-Y.; Economou, E. N.; John, S.; Soukoulis, C. M. (1988). "Band tails, path integrals, instantons, polarons, and all that". IBM Journal of Research and Development. 32 (1): 82–92. doi:10.1147/rd.321.0082.
Uriel Feige
Uriel Feige (Hebrew: אוריאל פייגה) is an Israeli computer scientist who was a doctoral student of Adi Shamir.
Uriel Feige
Alma mater: Ph.D. Weizmann Institute of Science, 1992[1]
Known for: Feige–Fiat–Shamir identification scheme
Scientific career
Institutions: Weizmann Institute
Doctoral advisor: Adi Shamir
Life
Uriel Feige currently holds the post of Professor at the Department of Computer Science and Applied Mathematics, the Weizmann Institute of Science, Rehovot in Israel.[2]
Work
He is notable for co-inventing the Feige–Fiat–Shamir identification scheme along with Amos Fiat and Adi Shamir.
Honors and awards
He won the Gödel Prize in 2001 "for the PCP theorem and its applications to hardness of approximation".
References
1. Uriel Feige at the Mathematics Genealogy Project.
2. Uriel Feige's profile at the Weizmann Institute
Gödel Prize laureates
1990s
• Babai / Goldwasser / Micali / Moran / Rackoff (1993)
• Håstad (1994)
• Immerman / Szelepcsényi (1995)
• Jerrum / Sinclair (1996)
• Halpern / Moses (1997)
• Toda (1998)
• Shor (1999)
2000s
• Vardi / Wolper (2000)
• Arora / Feige / Goldwasser / Lund / Lovász / Motwani / Safra / Sudan / Szegedy (2001)
• Sénizergues (2002)
• Freund / Schapire (2003)
• Herlihy / Saks / Shavit / Zaharoglou (2004)
• Alon / Matias / Szegedy (2005)
• Agrawal / Kayal / Saxena (2006)
• Razborov / Rudich (2007)
• Teng / Spielman (2008)
• Reingold / Vadhan / Wigderson (2009)
2010s
• Arora / Mitchell (2010)
• Håstad (2011)
• Koutsoupias / Papadimitriou / Roughgarden / É. Tardos / Nisan / Ronen (2012)
• Boneh / Franklin / Joux (2013)
• Fagin / Lotem / Naor (2014)
• Spielman / Teng (2015)
• Brookes / O'Hearn (2016)
• Dwork / McSherry / Nissim / Smith (2017)
• Regev (2018)
• Dinur (2019)
2020s
• Moser / G. Tardos (2020)
• Bulatov / Cai / Chen / Dyer / Richerby (2021)
• Brakerski / Gentry / Vaikuntanathan (2022)
Authority control: Academics
• Association for Computing Machinery
• DBLP
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Uriel Rothblum
Uriel George "Uri" Rothblum (Tel Aviv, March 16, 1947 – Haifa, March 26, 2012) was an Israeli mathematician and operations researcher. From 1984 until 2012 he held the Alexander Goldberg Chair in Management Science at the Technion – Israel Institute of Technology in Haifa, Israel.[1][2]
Uriel Rothblum
Born: March 16, 1947, Tel Aviv, Israel
Died: March 26, 2012 (aged 65), Haifa, Israel
Citizenship: Israel, United States
Alma mater
• Tel Aviv University, B.S. and M.S.
• Stanford, Ph.D.
Scientific career
Fields
• Mathematics
• operations research
• systems analysis
Institutions
• Technion
• New York University
• Yale
Rothblum was born in Tel Aviv to a family of Jewish immigrants from Austria.[3] He went to Tel Aviv University, where Robert Aumann became his mentor; he earned a bachelor's degree there in 1969 and a master's in 1971. He completed his doctorate in 1974 from Stanford University, in operations research, under the supervision of Arthur F. Veinott. After postdoctoral research at New York University, he joined the Yale University faculty in 1975, and moved to the Technion in 1984.[2]
Rothblum became president of the Israeli Operational Research Society (ORSIS) for 2006–2008, and editor-in-chief of Mathematics of Operations Research from 2010 until his death.[2] He was elected to the 2003 class of Fellows of the Institute for Operations Research and the Management Sciences.[4]
References
1. Loewy, Raphael (2012), "Uriel G. Rothblum (1947–2012)", Linear Algebra and Its Applications, 437 (12): 2997–3009, doi:10.1016/j.laa.2012.07.010, MR 2966614.
2. Golany, Boaz (2012), "Uriel G. Rothblum, March 16, 1947 – March 26, 2012", OR/MS Today.
3. "Uriel Rothblum - Biography".
4. Fellows: Alphabetical List, Institute for Operations Research and the Management Sciences, retrieved 2019-10-09
Authority control
International
• ISNI
• VIAF
National
• Germany
• Israel
• United States
Academics
• DBLP
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
Urmila Mahadev
Urmila Mahadev is an American mathematician and theoretical computer scientist known for her work in quantum computing and quantum cryptography.
Education and career
Mahadev is originally from Los Angeles, where her parents are physicians. She became interested in quantum computing through a course with Leonard Adleman at the University of Southern California,[1] where she graduated in 2010.[2]
She went to the University of California, Berkeley for graduate study, supported by a National Science Foundation Graduate Research Fellowship.[2] As a student of Umesh Vazirani at Berkeley, Mahadev discovered interactive proof systems that could demonstrate with high certainty, to an observer using only classical computation, that a quantum computer has correctly performed a desired quantum-computing task.[1]
She completed her Ph.D. in 2018,[3] and after continued postdoctoral research at Berkeley,[1] she became an assistant professor of computing and mathematical sciences at the California Institute of Technology.[4]
Recognition
For her work on quantum verification, Mahadev won the Machtey Award at the Symposium on Foundations of Computer Science in 2018, and in 2021 one of the three inaugural Maryam Mirzakhani New Frontiers Prizes for early-career achievements by women mathematicians.[5][6]
References
1. Klarreich, Erica (October 8, 2018), "Graduate Student Solves Quantum Verification Problem: Urmila Mahadev spent eight years in graduate school solving one of the most basic questions in quantum computation: How do you know whether a quantum computer has done anything quantum at all?", Quanta
2. Wall of Scholars, University of Southern California, retrieved 2020-09-19
3. Urmila Mahadev at the Mathematics Genealogy Project
4. Urmila Mahadev, California Institute of Technology, retrieved 2020-09-19
5. "Prizes", FOCS 2018, retrieved 2020-09-19
6. "Winners of the 2021 Breakthrough Prizes in life sciences, fundamental physics and mathematics announced", Breakthrough Prizes, September 10, 2020, retrieved 2020-09-19
Authority control: Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Urs Schreiber
Urs Schreiber (born 1974) is a mathematician specializing in the connection between mathematics and theoretical physics (especially string theory) and currently working as a researcher at New York University Abu Dhabi.[1] He was previously a researcher at the Czech Academy of Sciences, Institute of Mathematics, Department for Algebra, Geometry and Mathematical Physics.[2]
Education
Schreiber obtained his doctorate from the University of Duisburg-Essen in 2005 with a thesis supervised by Robert Graham and titled From Loop Space Mechanics to Nonabelian Strings.[3]
Work
Schreiber's research fields include the mathematical foundation of quantum field theory.
Schreiber is a co-creator of the nLab, a wiki for research mathematicians and physicists working in higher category theory.
Selected writings
• With Hisham Sati, Mathematical Foundations of Quantum Field and Perturbative String Theory, Proceedings of Symposia in Pure Mathematics, volume 83 AMS (2011)
• Schreiber, Urs (2013). "Differential cohomology in a cohesive ∞-topos". arXiv:1310.7930v1 [math-ph].
Notes
1. "Center for Quantum and Topological Systems". Retrieved 2022-07-21.
2. Researchers, Czech Academy of Sciences, retrieved 2015-07-31.
3. DuEPublico
References
• Interview of John Baez and Urs Schreiber
External links
• Home page in nLab
Authority control
International
• ISNI
• VIAF
National
• Norway
• Catalonia
• Germany
• Israel
• United States
• Netherlands
Academics
• Mathematics Genealogy Project
• ORCID
• zbMATH
Other
• IdRef
Ursell function
In statistical mechanics, an Ursell function, or connected correlation function, is a cumulant of a random variable. It can often be obtained by summing over connected Feynman diagrams (the sum over all Feynman diagrams gives the correlation functions).
The Ursell function was named after Harold Ursell, who introduced it in 1927.
Definition
If X is a random variable, the moments sn and cumulants (same as the Ursell functions) un are functions of X related by the exponential formula:
$\operatorname {E} (\exp(zX))=\sum _{n}s_{n}{\frac {z^{n}}{n!}}=\exp \left(\sum _{n}u_{n}{\frac {z^{n}}{n!}}\right)$
(where $\operatorname {E} $ is the expectation).
The Ursell functions for multivariate random variables are defined analogously to the above, and in the same way as multivariate cumulants.[1]
$u_{n}\left(X_{1},\ldots ,X_{n}\right)=\left.{\frac {\partial }{\partial z_{1}}}\cdots {\frac {\partial }{\partial z_{n}}}\log \operatorname {E} \left(\exp \sum z_{i}X_{i}\right)\right|_{z_{i}=0}$
The Ursell functions of a single random variable X are obtained from these by setting X = X1 = … = Xn.
The first few are given by
${\begin{aligned}u_{1}(X_{1})={}&\operatorname {E} (X_{1})\\u_{2}(X_{1},X_{2})={}&\operatorname {E} (X_{1}X_{2})-\operatorname {E} (X_{1})\operatorname {E} (X_{2})\\u_{3}(X_{1},X_{2},X_{3})={}&\operatorname {E} (X_{1}X_{2}X_{3})-\operatorname {E} (X_{1})\operatorname {E} (X_{2}X_{3})-\operatorname {E} (X_{2})\operatorname {E} (X_{3}X_{1})-\operatorname {E} (X_{3})\operatorname {E} (X_{1}X_{2})+2\operatorname {E} (X_{1})\operatorname {E} (X_{2})\operatorname {E} (X_{3})\\u_{4}\left(X_{1},X_{2},X_{3},X_{4}\right)={}&\operatorname {E} (X_{1}X_{2}X_{3}X_{4})-\operatorname {E} (X_{1})\operatorname {E} (X_{2}X_{3}X_{4})-\operatorname {E} (X_{2})\operatorname {E} (X_{1}X_{3}X_{4})-\operatorname {E} (X_{3})\operatorname {E} (X_{1}X_{2}X_{4})-\operatorname {E} (X_{4})\operatorname {E} (X_{1}X_{2}X_{3})\\&-\operatorname {E} (X_{1}X_{2})\operatorname {E} (X_{3}X_{4})-\operatorname {E} (X_{1}X_{3})\operatorname {E} (X_{2}X_{4})-\operatorname {E} (X_{1}X_{4})\operatorname {E} (X_{2}X_{3})\\&+2\operatorname {E} (X_{1}X_{2})\operatorname {E} (X_{3})\operatorname {E} (X_{4})+2\operatorname {E} (X_{1}X_{3})\operatorname {E} (X_{2})\operatorname {E} (X_{4})+2\operatorname {E} (X_{1}X_{4})\operatorname {E} (X_{2})\operatorname {E} (X_{3})+2\operatorname {E} (X_{2}X_{3})\operatorname {E} (X_{1})\operatorname {E} (X_{4})\\&+2\operatorname {E} (X_{2}X_{4})\operatorname {E} (X_{1})\operatorname {E} (X_{3})+2\operatorname {E} (X_{3}X_{4})\operatorname {E} (X_{1})\operatorname {E} (X_{2})-6\operatorname {E} (X_{1})\operatorname {E} (X_{2})\operatorname {E} (X_{3})\operatorname {E} (X_{4})\end{aligned}}$
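The formulas above can be evaluated exactly over a finite joint distribution. In the Python sketch below, the expectation functional `make_E` and the fair-coin example are illustrative choices, not from the source:

```python
# Ursell functions u_2 and u_3 coded directly from the moment formulas
# above, with E an expectation functional over an explicit finite
# distribution, so all results are exact.

def make_E(outcomes):
    """outcomes: list of (probability, outcome) pairs.
    Returns the expectation functional E(f) = sum_w p(w) f(w)."""
    return lambda f: sum(p * f(w) for p, w in outcomes)

def u2(E, X, Y):
    return E(lambda w: X(w) * Y(w)) - E(X) * E(Y)

def u3(E, X, Y, Z):
    return (E(lambda w: X(w) * Y(w) * Z(w))
            - E(X) * E(lambda w: Y(w) * Z(w))
            - E(Y) * E(lambda w: Z(w) * X(w))
            - E(Z) * E(lambda w: X(w) * Y(w))
            + 2 * E(X) * E(Y) * E(Z))
```

For two independent fair coins, $u_2(X_1,X_2)=0$, consistent with the vanishing property on independent sets noted in the Characterization section; for a single fair coin, $u_2(X,X)=1/4$ is the variance and $u_3(X,X,X)=0$ is the third cumulant.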
Characterization
Percus (1975) showed that the Ursell functions, considered as multilinear functions of several random variables, are uniquely determined up to a constant by the fact that they vanish whenever the variables Xi can be divided into two nonempty independent sets.
See also
• Cumulant
References
1. Shlosman, S. B. (1986). "Signs of the Ising model Ursell functions". Communications in Mathematical Physics. 102 (4): 679–686. Bibcode:1985CMaPh.102..679S. doi:10.1007/BF01221652. S2CID 122963530.
• Glimm, James; Jaffe, Arthur (1987), Quantum physics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-96476-8, MR 0887102
• Percus, J. K. (1975), "Correlation inequalities for Ising spin lattices" (PDF), Comm. Math. Phys., 40 (3): 283–308, Bibcode:1975CMaPh..40..283P, doi:10.1007/bf01610004, MR 0378683, S2CID 120940116
• Ursell, H. D. (1927), "The evaluation of Gibbs phase-integral for imperfect gases", Proc. Cambridge Philos. Soc., 23 (6): 685–697, Bibcode:1927PCPS...23..685U, doi:10.1017/S0305004100011191, S2CID 123023251
Ursula Hamenstädt
Ursula Hamenstädt (born 15 January 1961) is a German mathematician who works as a professor at the University of Bonn.[1] Her primary research subject is differential geometry.
Education and career
Hamenstädt earned her PhD from the University of Bonn in 1986, under the supervision of Wilhelm Klingenberg. Her dissertation, Zur Theorie der Carnot-Caratheodory Metriken und ihren Anwendungen [The theory of Carnot–Caratheodory metrics and their applications], concerned the theory of sub-Riemannian manifolds.[2]
After completing her doctorate, she became a Miller Research Fellow at the University of California, Berkeley and then an assistant professor at the California Institute of Technology before returning to Bonn as a faculty member in 1990.[1]
Honors
Hamenstädt was an invited speaker at the International Congress of Mathematicians in 2010.[3] In 2012 she was elected to the German Academy of Sciences Leopoldina,[4] and in the same year she became one of the inaugural fellows of the American Mathematical Society.[5] She was the Emmy Noether Lecturer of the German Mathematical Society in 2017.[6]
Selected publications
• Hamenstädt, Ursula (2008). "Geometry of the mapping class groups I: Boundary amenability". Inventiones Mathematicae. 175 (3): 545–609. arXiv:math/0510116. Bibcode:2009InMat.175..545H. doi:10.1007/s00222-008-0158-2. ISSN 0020-9910. S2CID 2640202.
• Hamenstädt, Ursula (1989). "A new description of the Bowen–Margulis measure". Ergodic Theory and Dynamical Systems. 9 (3): 455–464. doi:10.1017/S0143385700005095. ISSN 1469-4417.
• Hamenstädt, Ursula (1990). "Some regularity theorems for Carnot–Carathéodory metrics". Journal of Differential Geometry. 32 (3): 819–850. doi:10.4310/jdg/1214445536. ISSN 0022-040X.
References
1. Faculty profile, University of Bonn, retrieved 18 December 2014.
2. Ursula Hamenstädt at the Mathematics Genealogy Project
3. Hamenstädt, Ursula (2010), "Actions of the mapping class group", Proceedings of the International Congress of Mathematicians. Volume II (PDF), New Delhi: Hindustan Book Agency, pp. 1002–1021, MR 2827829.
4. List of members: Prof. Dr. Ursula Hamenstädt, Leopoldina, retrieved 18 December 2014.
5. List of Fellows of the American Mathematical Society, retrieved 18 December 2014.
6. Preise und Auszeichnungen (in German), German Mathematical Society, retrieved 5 November 2018
External links
• Home page
Urysohn's lemma
In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function.[1]
Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalised by (and usually used in the proof of) the Tietze extension theorem.
The lemma is named after the mathematician Pavel Samuilovich Urysohn.
Discussion
Two subsets $A$ and $B$ of a topological space $X$ are said to be separated by neighbourhoods if there are neighbourhoods $U$ of $A$ and $V$ of $B$ that are disjoint. In particular $A$ and $B$ are necessarily disjoint.
Two plain subsets $A$ and $B$ are said to be separated by a continuous function if there exists a continuous function $f:X\to [0,1]$ from $X$ into the unit interval $[0,1]$ such that $f(a)=0$ for all $a\in A$ and $f(b)=1$ for all $b\in B.$ Any such function is called a Urysohn function for $A$ and $B.$ In particular $A$ and $B$ are necessarily disjoint.
It follows that if two subsets $A$ and $B$ are separated by a function then so are their closures. Also it follows that if two subsets $A$ and $B$ are separated by a function then $A$ and $B$ are separated by neighbourhoods.
A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function.
The sets $A$ and $B$ need not be precisely separated by $f$; that is, $f$ is not required to satisfy $f(x)\neq 0$ and $f(x)\neq 1$ for $x$ outside $A$ and $B.$ A topological space $X$ in which every two disjoint closed subsets $A$ and $B$ are precisely separated by a continuous function is called perfectly normal.
Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff.
Formal statement
A topological space $X$ is normal if and only if, for any two non-empty closed disjoint subsets $A$ and $B$ of $X,$ there exists a continuous map $f:X\to [0,1]$ such that $f(A)=\{0\}$ and $f(B)=\{1\}.$
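In a metric space, a Urysohn function can be written down in closed form: for disjoint closed sets $A$ and $B,$ the function $f(x)=d(x,A)/(d(x,A)+d(x,B))$ is continuous, $0$ on $A$ and $1$ on $B.$ A minimal sketch (an assumption for illustration: finite point sets in the plane standing in for the closed sets):

```python
import math

def dist_to_set(x, S):
    """Distance from point x to a finite set S of points in the plane."""
    return min(math.dist(x, s) for s in S)

def urysohn(x, A, B):
    """Explicit Urysohn function for disjoint closed sets in a metric space:
    f(x) = d(x, A) / (d(x, A) + d(x, B)); equals 0 on A and 1 on B."""
    dA, dB = dist_to_set(x, A), dist_to_set(x, B)
    return dA / (dA + dB)

A = [(0.0, 0.0), (1.0, 0.0)]   # a finite closed set
B = [(5.0, 0.0), (6.0, 1.0)]   # a disjoint closed set

print(urysohn((0.0, 0.0), A, B))  # 0.0 (point lies in A)
print(urysohn((5.0, 0.0), A, B))  # 1.0 (point lies in B)
```

This closed-form function is why metric spaces are normal; the content of Urysohn's lemma is that the same separation is possible without a metric.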
Proof sketch
The proof proceeds by repeatedly applying the following alternate characterization of normality. If $X$ is a normal space, $Z$ is an open subset of $X$, and $Y\subseteq Z$ is closed, then there exist an open set $U$ and a closed set $V$ such that $Y\subseteq U\subseteq V\subseteq Z$.
Let $A$ and $B$ be disjoint closed subsets of $X$. The main idea of the proof is to repeatedly apply this characterization of normality to $A$ and $B^{\complement }$, continuing with the new sets built on every step.
The sets we build are indexed by dyadic fractions. For every dyadic fraction $r\in (0,1)$, we construct an open subset $U(r)$ and a closed subset $V(r)$ of $X$ such that:
• $A\subseteq U(r)$ and $V(r)\subseteq B^{\complement }$ for all $r$,
• $U(r)\subseteq V(r)$ for all $r$,
• For $r<s$, $V(r)\subseteq U(s)$.
Intuitively, the sets $U(r)$ and $V(r)$ expand outwards in layers from $A$:
${\begin{array}{ccccccccccccccc}A&&&&&&&\subseteq &&&&&&&B^{\complement }\\A&&&\subseteq &&&\ U(1/2)&\subseteq &V(1/2)&&&\subseteq &&&B^{\complement }\\A&\subseteq &U(1/4)&\subseteq &V(1/4)&\subseteq &U(1/2)&\subseteq &V(1/2)&\subseteq &U(3/4)&\subseteq &V(3/4)&\subseteq &B^{\complement }\end{array}}$
This construction proceeds by mathematical induction. For the base step, we define two extra sets $U(1)=B^{\complement }$ and $V(0)=A$.
Now assume that $n\geq 0$ and that the sets $U\left(k/2^{n}\right)$ and $V\left(k/2^{n}\right)$ have already been constructed for $k\in \{1,\ldots ,2^{n}-1\}$. Note that this is vacuously satisfied for $n=0$. Since $X$ is normal, for any $a\in \left\{0,1,\ldots ,2^{n}-1\right\}$, we can find an open set and a closed set such that
$V\left({\frac {a}{2^{n}}}\right)\subseteq U\left({\frac {2a+1}{2^{n+1}}}\right)\subseteq V\left({\frac {2a+1}{2^{n+1}}}\right)\subseteq U\left({\frac {a+1}{2^{n}}}\right)$
One then checks that the three conditions above continue to hold.
Once we have these sets, we define $f(x)=1$ if $x\not \in U(r)$ for every $r$; otherwise $f(x)=\inf\{r:x\in U(r)\}$ for every $x\in X$, where $\inf $ denotes the infimum. Using the fact that the dyadic rationals are dense in $[0,1]$, it is then not too hard to show that $f$ is continuous and satisfies $f(A)\subseteq \{0\}$ and $f(B)\subseteq \{1\}.$ This step requires the $V(r)$ sets in order to work.
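The infimum over dyadic levels can be checked numerically. A sketch (an assumption for illustration: on the real line with $A=\{0\},$ $B=\{1\},$ take $U(r)=\{x:g(x)<r\}$ for the explicit metric-space function $g(x)=d(x,A)/(d(x,A)+d(x,B))$, so the nesting conditions hold automatically); the dyadic infimum at level $n$ recovers $g$ to within $2^{-n}$:

```python
def g(x):
    """Explicit separating function on the real line for A = {0}, B = {1}."""
    dA, dB = abs(x - 0.0), abs(x - 1.0)
    return dA / (dA + dB)

def f_dyadic(x, n):
    """Approximate f(x) = inf{ r : x in U(r) } over dyadic rationals k/2^n,
    where U(r) = {x : g(x) < r}.  Returns 1 if x lies in no U(r)."""
    rs = [k / 2**n for k in range(1, 2**n)]
    in_some = [r for r in rs if g(x) < r]
    return min(in_some) if in_some else 1.0

# As n grows, the dyadic infimum converges to g (within 2^-n):
for x in [0.0, 0.3, 0.7, 1.0]:
    assert abs(f_dyadic(x, 12) - g(x)) <= 2**-12
```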
The Mizar project has completely formalised and automatically checked a proof of Urysohn's lemma in the URYSOHN3 file.
See also
• Mollifier
Notes
1. Willard 1970 Section 15.
References
• Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
• Willard, Stephen (1970). General Topology. Dover Publications. ISBN 0-486-43479-6.
External links
• "Urysohn lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Mizar system proof: http://mizar.org/version/current/html/urysohn3.html#T20
Tietze extension theorem
In topology, the Tietze extension theorem (also known as the Tietze–Urysohn–Brouwer extension theorem or Urysohn-Brouwer lemma[1]) states that continuous functions on a closed subset of a normal topological space can be extended to the entire space, preserving boundedness if necessary.
Formal statement
If $X$ is a normal space and
$f:A\to \mathbb {R} $
is a continuous map from a closed subset $A$ of $X$ into the real numbers $\mathbb {R} $ carrying the standard topology, then there exists a continuous extension of $f$ to $X;$ that is, there exists a map
$F:X\to \mathbb {R} $
continuous on all of $X$ with $F(a)=f(a)$ for all $a\in A.$ Moreover, $F$ may be chosen such that
$\sup\{|f(a)|:a\in A\}~=~\sup\{|F(x)|:x\in X\},$
that is, if $f$ is bounded then $F$ may be chosen to be bounded (with the same bound as $f$).
History
L. E. J. Brouwer and Henri Lebesgue proved a special case of the theorem, when $X$ is a finite-dimensional real vector space. Heinrich Tietze extended it to all metric spaces, and Pavel Urysohn proved the theorem as stated here, for normal topological spaces.[2][3]
Equivalent statements
This theorem is equivalent to Urysohn's lemma (which is also equivalent to the normality of the space) and is widely applicable, since all metric spaces and all compact Hausdorff spaces are normal. It can be generalized by replacing $\mathbb {R} $ with $\mathbb {R} ^{J}$ for some indexing set $J,$ any retract of $\mathbb {R} ^{J},$ or any normal absolute retract whatsoever.
Variations
If $X$ is a metric space, $A$ a non-empty subset of $X$ and $f:A\to \mathbb {R} $ is a Lipschitz continuous function with Lipschitz constant $K,$ then $f$ can be extended to a Lipschitz continuous function $F:X\to \mathbb {R} $ with the same constant $K.$ This theorem is also valid for Hölder continuous functions: if $f:A\to \mathbb {R} $ is a Hölder continuous function with exponent less than or equal to $1,$ then $f$ can be extended to a Hölder continuous function $F:X\to \mathbb {R} $ with the same constant.[4]
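For the Lipschitz case the extension can be written down explicitly; McShane's paper cited above gives $F(x)=\inf _{a\in A}\left(f(a)+K\,d(x,a)\right).$ A sketch (an assumption for illustration: a finite set of reals standing in for the closed subset $A$):

```python
def mcshane_extend(f_vals, A, K):
    """McShane's explicit Lipschitz extension: given f on a finite set A of
    reals with Lipschitz constant K, return F(x) = min_a ( f(a) + K*|x - a| ).
    F agrees with f on A and is K-Lipschitz on all of R."""
    def F(x):
        return min(fa + K * abs(x - a) for a, fa in zip(A, f_vals))
    return F

A = [0.0, 1.0, 3.0]
f_vals = [0.0, 2.0, 1.0]          # Lipschitz with constant K = 2 on A
F = mcshane_extend(f_vals, A, 2.0)

print([F(a) for a in A])          # agrees with f on A: [0.0, 2.0, 1.0]
print(F(2.0))                     # value at a new point: 3.0
```

The infimum of K-Lipschitz functions lying above the data is again K-Lipschitz, which is why this single formula suffices; it produces the largest K-Lipschitz extension.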
Another variant (in fact, generalization) of Tietze's theorem is due to H. Tong and Z. Ercan:[5] Let $A$ be a closed subset of a normal topological space $X.$ If $f:X\to \mathbb {R} $ is an upper semicontinuous function, $g:X\to \mathbb {R} $ a lower semicontinuous function, and $h:A\to \mathbb {R} $ a continuous function such that $f(x)\leq g(x)$ for each $x\in X$ and $f(a)\leq h(a)\leq g(a)$ for each $a\in A$, then there is a continuous extension $H:X\to \mathbb {R} $ of $h$ such that $f(x)\leq H(x)\leq g(x)$ for each $x\in X.$ This theorem is also valid with some additional hypothesis if $\mathbb {R} $ is replaced by a general locally solid Riesz space.[5]
See also
• Blumberg theorem – Any real function on R admits a continuous restriction on a dense subset of R
• Hahn–Banach theorem – Theorem on extension of bounded linear functionals
• Whitney extension theorem – Partial converse of Taylor's theorem
References
1. "Urysohn-Brouwer lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
2. "Urysohn-Brouwer lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
3. Urysohn, Paul (1925), "Über die Mächtigkeit der zusammenhängenden Mengen", Mathematische Annalen, 94 (1): 262–295, doi:10.1007/BF01208659, hdl:10338.dmlcz/101038.
4. McShane, E. J. (1 December 1934). "Extension of range of functions". Bulletin of the American Mathematical Society. 40 (12): 837–843. doi:10.1090/S0002-9904-1934-05978-0.
5. Zafer, Ercan (1997). "Extension and Separation of Vector Valued Functions" (PDF). Turkish Journal of Mathematics. 21 (4): 423–430.
• Munkres, James R. (2000). Topology (Second ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260.
External links
• Weisstein, Eric W. "Tietze's Extension Theorem." From MathWorld
• Mizar system proof: http://mizar.org/version/current/html/tietze.html#T23
• Bonan, Edmond (1971), "Relèvements-Prolongements à valeurs dans les espaces de Fréchet", Comptes Rendus de l'Académie des Sciences, Série I, 272: 714–717.
Urysohn universal space
The Urysohn universal space is a certain metric space that contains all separable metric spaces in a particularly nice manner. The concept is due to Pavel Urysohn.
Not to be confused with Urysohn space.
Definition
A metric space (U,d) is called Urysohn universal[1] if it is separable and complete and has the following property:
given any finite metric space X, any point x in X, and any isometric embedding f : X\{x} → U, there exists an isometric embedding F : X → U that extends f, i.e. such that F(y) = f(y) for all y in X\{x}.
Properties
If U is Urysohn universal and X is any separable metric space, then there exists an isometric embedding f:X → U. (Other spaces share this property: for instance, the space l∞ of all bounded real sequences with the supremum norm admits isometric embeddings of all separable metric spaces ("Fréchet embedding"), as does the space C[0,1] of all continuous functions [0,1]→R, again with the supremum norm, a result due to Stefan Banach.)
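The Fréchet-type embedding mentioned above is explicit: fix a dense sequence $(x_n)$ and map $x$ to the sequence $\left(d(x,x_n)\right)_n$ (suitably recentered in the unbounded case). For a finite metric space the map $x\mapsto (d(x,x_1),\ldots ,d(x,x_N))$ with the sup norm is already isometric, since the triangle inequality gives $\leq$ and taking the coordinate indexed by $x$ or $y$ gives equality. A sketch on a small example:

```python
def kuratowski_embed(points, d):
    """Embed a finite metric space into R^N with the sup norm via
    x -> (d(x, p1), ..., d(x, pN)).  The sup distance between images
    equals the original distance, by the triangle inequality."""
    return {p: [d(p, q) for q in points] for p in points}

def sup_dist(u, v):
    return max(abs(a - b) for a, b in zip(u, v))

# A small metric space: points on a line with the usual distance.
points = [0.0, 1.0, 4.0, 6.0]
d = lambda p, q: abs(p - q)
emb = kuratowski_embed(points, d)

for p in points:
    for q in points:
        assert sup_dist(emb[p], emb[q]) == d(p, q)  # isometric
```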
Furthermore, every isometry between finite subsets of U extends to an isometry of U onto itself. This kind of "homogeneity" actually characterizes Urysohn universal spaces: A separable complete metric space that contains an isometric image of every separable metric space is Urysohn universal if and only if it is homogeneous in this sense.
Existence and uniqueness
Urysohn proved that a Urysohn universal space exists, and that any two Urysohn universal spaces are isometric. This can be seen as follows. Take $(X,d),(X',d')$, two Urysohn universal spaces. These are separable, so fix in the respective spaces countable dense subsets $(x_{n})_{n},(x'_{n})_{n}$. These are necessarily infinite, so by a back-and-forth argument one can construct, step by step, partial isometries $\phi _{n}:X\to X'$ whose domain (resp. range) contains $\{x_{k}:k<n\}$ (resp. $\{x'_{k}:k<n\}$). The union of these maps defines a partial isometry $\phi :X\to X'$ whose domain and range are dense in the respective spaces. Such a map extends (uniquely) to an isometry, since a Urysohn universal space is required to be complete.
References
1. Juha Heinonen (January 2003), Geometric embeddings of metric spaces, retrieved 6 January 2009
Ushadevi Bhosle
Ushadevi Narendra Bhosle is an Indian mathematician, educator and researcher who specialises in algebraic geometry.[1] She has worked on moduli spaces of bundles.[1]
Early life and education
She earned a B.Sc. from the University of Pune in 1969 and an M.Sc. from Shivaji University in 1971.[1] She began graduate studies at the Tata Institute of Fundamental Research in 1971 and received her doctorate there in 1980 under the supervision of S. Ramanan.[1]
Career
She began her career as a research assistant at the Tata Institute of Fundamental Research from 1971 to 1974, and remained at the institute as Research Associate II (1974–1977), Research Fellow (1977–1982), Fellow (1982–1990), Reader (1991–1995), Associate Professor (1995–1998), Professor (1998–2011) and Senior Professor (2012–2014).
She was a Raja Ramanna Fellow at the Indian Institute of Science, Bangalore, from 2014 to 2017, and has been an INSA Senior Scientist at the Indian Statistical Institute, Bangalore, since January 2019.
Membership
She holds the fellowships FASc, FNASc and FNASI, and is a member of the VBAC international committee.[1] She was also a senior associate of the International Centre for Theoretical Physics, Italy.[1] She is a fellow of the Indian National Science Academy, Delhi, the Indian Academy of Sciences, Bangalore, and the National Academy of Sciences, Allahabad, India.[2][3][1]
Works
She has 66 publications, including:
• Desale, U. V.; Ramanan, S. (1975). "Poincaré polynomials of the variety of stable bundles". Math. Ann. 216 (3): 233–244.
• Desale, U. V.; Ramanan, S. (1976). "Classification of vector bundles of rank two on hyperelliptic curves". Invent. Math. 38: 161–185.
• Bhosle (Desale), Usha N. (1984). "Moduli of orthogonal and spin bundles over hyperelliptic curves". Compositio Math. 51: 15–40.
• Bhosle, U. N. (1989). "Parabolic vector bundles on curves". Arkiv för Matematik. 27 (1–2): 15–22. Bibcode:1989ArM....27...15B. doi:10.1007/BF02386356. ISSN 0004-2080.
• Bhosle, Usha N. (1999). "Picard groups of the moduli spaces of vector bundles". Mathematische Annalen. 314 (2): 245–263. doi:10.1007/s002080050293. ISSN 0025-5831.
• Bhosle, Usha N. (1996). "Generalized parabolic bundles and applications— II". Proceedings Mathematical Sciences. 106 (4): 403–420. doi:10.1007/BF02837696. ISSN 0253-4142.
• Bhosle, Usha N. (1992). "Parabolic sheaves on higher dimensional varieties". Math. Ann. 293: 177–192.[4]
• Bhosle, U. N. (1986). "Nets of quadrics and vector bundles on a double plane". Math. Zeit. 192: 29–43.[5]
• Bhosle, Usha N. (1992). "Generalised parabolic bundles and applications to torsion-free sheaves on nodal curves". Ark. Mat. 30: 187–215.[6]
• Bhosle, U. N.; Ramanathan, A. (1989). "Moduli of parabolic G-bundles on curves". Math. Z. 202: 161–180.[7]
• Bhosle, U. (1999). "Vector bundles on curves with many components". Proceedings of the London Mathematical Society. 79 (1): 81–106.
• Bhosle, Usha N. (1995). "Representations of the fundamental group and vector bundles". Math. Ann. 302: 601–608.[8]
Awards and honours
She received the Stree Shakti Science Samman in 2010 and the Ramaswamy Aiyer Memorial Award in 2000.[1]
Personal life
Apart from mathematics, her interests include drawing, painting, reading and music. She lives in Mumbai.[1]
References
1. "INSA :: Indian Fellow Detail". insaindia.res.in. Retrieved 16 February 2019.
2. "The National Academy of Sciences, India - Founder Members". Nasi.org.in. Retrieved 14 October 2018.
3. "INSA". Archived from the original on 12 August 2016. Retrieved 13 May 2016.
4. Bhosle, Usha (1 December 1992). "Parabolic sheaves on higher dimensional varieties". Mathematische Annalen. 293 (1): 177–192. doi:10.1007/BF01444711. ISSN 1432-1807.
5. Bhosle, Usha N. (1 March 1986). "Nets of quadrics and vector bundles on a double plane". Mathematische Zeitschrift. 192 (1): 29–43. doi:10.1007/BF01162017. ISSN 1432-1823.
6. Bhosle, Usha (1 December 1992). "Generalised parabolic bundles and applications to torsionfree sheaves on nodal curves". Arkiv för Matematik. 30 (1): 187–215. Bibcode:1992ArM....30..187B. doi:10.1007/BF02384869. ISSN 1871-2487.
7. Bhosle, Usha; Ramanathan, A. (1 June 1989). "Moduli of parabolicG-bundles on curves". Mathematische Zeitschrift. 202 (2): 161–180. doi:10.1007/BF01215252. ISSN 1432-1823.
8. Bhosle, Usha N. (1 May 1995). "Representations of the fundamental group and vector bundles". Mathematische Annalen. 302 (1): 601–608. doi:10.1007/BF01444510. ISSN 1432-1807.
External links
• Suirauqa (1 July 2013). "Oh, the humanity of it all!: Noted Women Scientists of India - an attempt at enumeration". Ohthehumanityofitall.blogspot.com. Retrieved 14 October 2018.
Ensemble (mathematical physics)
In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a set of systems of particles used in statistical mechanics to describe a single system.[1] The concept of an ensemble was introduced by J. Willard Gibbs in 1902.[2]
A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics.[3][4]
Physical considerations
The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes.
The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function.
The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can be said to be in statistical equilibrium.[2]
Terminology
• The word "ensemble" is also used for a smaller set of possibilities sampled from the full set of possible states. For example, a collection of walkers in a Markov chain Monte Carlo iteration is called an ensemble in some of the literature.
• The term "ensemble" is often used in physics and the physics-influenced literature. In probability theory, the term probability space is more prevalent.
Main types
The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium. Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics.
"We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing in not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs (1903)[5]
Three important thermodynamic ensembles were defined by Gibbs:[2]
• Microcanonical ensemble (or NVE ensemble) —a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each member of the ensemble is required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium.[2]
• Canonical ensemble (or NVT ensemble)—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of the energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium, the system must remain totally closed (unable to exchange particles with its environment) and may come into weak thermal contact with other systems that are described by ensembles with the same temperature.[2]
• Grand canonical ensemble (or μVT ensemble)—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.[2]
The calculations that can be made using each of these ensembles are explored further in their respective articles. Other thermodynamic ensembles can be also defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived. For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions which are present in the system.[6]
Representations
The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables. In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily.
Requirements for representations
Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles A, B of the same system:
• Test whether A, B are statistically equivalent.
• If p is a real number such that 0 < p < 1, then produce a new ensemble by probabilistic sampling from A with probability p and from B with probability 1 – p.
Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set.
Quantum mechanical
Main article: Density matrix
A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by ${\hat {\rho }}$. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable X in quantum mechanics can be written as an operator, X̂. The expectation value of this operator on the statistical ensemble $\rho $ is given by the following trace:
$\langle X\rangle =\operatorname {Tr} ({\hat {X}}\rho ).$
This can be used to evaluate averages (operator X̂), variances (using operator X̂ 2), covariances (using operator X̂Ŷ), etc. The density matrix must always have a trace of 1: $\operatorname {Tr} {\hat {\rho }}=1$ (this essentially is the condition that the probabilities must add up to one).
In general, the ensemble evolves over time according to the von Neumann equation.
Equilibrium ensembles (those that do not evolve over time, $d{\hat {\rho }}/dt=0$) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator Ĥ (Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator N̂. Such equilibrium ensembles are a diagonal matrix in the orthogonal basis of states that simultaneously diagonalize each conserved variable. In bra–ket notation, the density matrix is
${\hat {\rho }}=\sum _{i}P_{i}|\psi _{i}\rangle \langle \psi _{i}|$
where the |ψi⟩, indexed by i, are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.)
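A numerical sketch (NumPy; the specific qubit states and weights are assumptions for illustration): build a mixed state ${\hat {\rho }}=\sum _{i}P_{i}|\psi _{i}\rangle \langle \psi _{i}|$ from two orthonormal states, check $\operatorname {Tr} {\hat {\rho }}=1$, and evaluate $\langle X\rangle =\operatorname {Tr} ({\hat {X}}{\hat {\rho }})$ for a diagonal observable.

```python
import numpy as np

# Two orthonormal basis states of a qubit.
psi0 = np.array([1.0, 0.0])
psi1 = np.array([0.0, 1.0])

# Mixed state: 70% |0><0| + 30% |1><1|.
P = [0.7, 0.3]
rho = sum(p * np.outer(psi, psi.conj()) for p, psi in zip(P, [psi0, psi1]))

assert np.isclose(np.trace(rho), 1.0)   # probabilities sum to one

# Observable with eigenvalue +1 on |0> and -1 on |1> (Pauli-z).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
expect = np.trace(Z @ rho).real
print(expect)   # 0.7*(+1) + 0.3*(-1) = 0.4
```

Because the observable and the state are diagonal in the same basis, the trace reduces to the classical weighted average of eigenvalues, as the text's diagonal form predicts.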
Classical mechanical
In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space.[2] While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation.
In a mechanical system with a defined number of parts, the phase space has n generalized coordinates called q1, ... qn, and n associated canonical momenta called p1, ... pn. The ensemble is then represented by a joint probability density function ρ(p1, ... pn, q1, ... qn).
If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers N1 (first kind of particle), N2 (second kind of particle), and so on up to Ns (the last kind of particle; s is how many different kinds of particles there are). The ensemble is then represented by a joint probability density function ρ(N1, ... Ns, p1, ... pn, q1, ... qn). The number of coordinates n varies with the numbers of particles.
Any mechanical quantity X can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by ρ:
$\langle X\rangle =\sum _{N_{1}=0}^{\infty }\ldots \sum _{N_{s}=0}^{\infty }\int \ldots \int \rho X\,dp_{1}\ldots dq_{n}.$
The condition of probability normalization applies, requiring
$\sum _{N_{1}=0}^{\infty }\ldots \sum _{N_{s}=0}^{\infty }\int \ldots \int \rho \,dp_{1}\ldots dq_{n}=1.$
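The weighted phase-space integral can be evaluated numerically for a concrete case. A minimal sketch (assumptions for illustration: a single classical harmonic oscillator with $m=k=1$ in natural units, and the canonical weight $\rho \propto e^{-H/kT}$), which recovers the equipartition value $\langle H\rangle =kT$ ($kT/2$ per quadratic term):

```python
import math

def canonical_average(H, observable, kT, lim=20.0, n=400):
    """Phase-space average <X> = ∫∫ X e^{-H/kT} dp dq / ∫∫ e^{-H/kT} dp dq,
    approximated by a midpoint rule on an n x n grid over [-lim, lim]^2
    (one coordinate q, one momentum p)."""
    h = 2 * lim / n
    num = den = 0.0
    for i in range(n):
        q = -lim + (i + 0.5) * h
        for j in range(n):
            p = -lim + (j + 0.5) * h
            w = math.exp(-H(p, q) / kT)
            num += observable(p, q) * w
            den += w
    return num / den

H = lambda p, q: 0.5 * p**2 + 0.5 * q**2   # harmonic oscillator, m = k = 1
kT = 1.5
mean_E = canonical_average(H, H, kT)
print(mean_E)   # ≈ kT = 1.5 (kT/2 from each quadratic degree of freedom)
```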
Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability density in phase space to a probability distribution over microstates, it is necessary to somehow partition the phase space into blocks that represent the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume.[note 1] In particular, the probability density function in phase space, ρ, is related to the probability distribution over microstates, P by a factor
$\rho ={\frac {1}{h^{n}C}}P,$
where
• h is an arbitrary but predetermined constant with the units of energy×time, setting the extent of the microstate and providing correct dimensions to ρ.[note 2]
• C is an overcounting correction factor (see below), generally dependent on the number of particles and similar concerns.
Since h can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of h influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of h when comparing different systems.
Correcting overcounting in phase space
Typically, the phase space contains duplicates of the same physical state in multiple distinct locations. This is a consequence of the way that a physical state is encoded into mathematical coordinates; the simplest choice of coordinate system often allows a state to be encoded in multiple ways. An example of this is a gas of identical particles whose state is written in terms of the particles' individual positions and momenta: when two particles are exchanged, the resulting point in phase space is different, and yet it corresponds to an identical physical state of the system. It is important in statistical mechanics (a theory about physical states) to recognize that the phase space is just a mathematical construction, and to not naively overcount actual physical states when integrating over phase space. Overcounting can cause serious problems:
• Dependence of derived quantities (such as entropy and chemical potential) on the choice of coordinate system, since one coordinate system might show more or less overcounting than another.[note 3]
• Erroneous conclusions that are inconsistent with physical experience, as in the mixing paradox.[2]
• Foundational issues in defining the chemical potential and the grand canonical ensemble.[2]
It is in general difficult to find a coordinate system that uniquely encodes each physical state. As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting.
A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' x coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor C introduced above would be set to C = 1, and the integral would be restricted to the selected subregion of phase space.)
A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor C introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates,[note 4] so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, C does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers.
As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using[2]
$C=N_{1}!N_{2}!\ldots N_{s}!.$
This is known as "correct Boltzmann counting".
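As a numerical illustration (not part of the standard presentation), the effect of the Boltzmann counting factor can be checked for a classical ideal gas: only after dividing the phase-space volume by h³ᴺ N! does the resulting entropy become extensive. The formula for the phase-space volume below is the standard ideal-gas result; the particular parameter values are arbitrary choices for the demonstration (with m = h = 1).

```python
import math

def log_num_microstates(N, V, E, m=1.0, h=1.0):
    # ln[ Omega / (h^(3N) N!) ] for an ideal gas of N particles in a
    # volume V with total energy E; Stirling handled via math.lgamma.
    # Omega = V^N * (2*pi*m*E)^(3N/2) / Gamma(3N/2 + 1) is the
    # phase-space volume enclosed by the energy surface.
    ln_omega = (N * math.log(V)
                + 1.5 * N * math.log(2 * math.pi * m * E)
                - math.lgamma(1.5 * N + 1))
    return ln_omega - 3 * N * math.log(h) - math.lgamma(N + 1)

# Entropy (in units of k) is extensive only with the N! correction:
S1 = log_num_microstates(N=1000, V=1000.0, E=1000.0)
S2 = log_num_microstates(N=2000, V=2000.0, E=2000.0)
# Doubling N, V and E roughly doubles the entropy: S2 ≈ 2*S1.
```

Repeating the computation without the `math.lgamma(N + 1)` term reproduces the non-extensive entropy at the heart of the mixing paradox.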
Ensembles in statistics
The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like.
In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks.
Ensemble average
In statistical mechanics, the ensemble average is defined as the mean of a quantity that is a function of the microstate of a system, weighted according to the distribution of the system over its microstates in that ensemble.
Since the ensemble average is dependent on the ensemble chosen, its mathematical expression varies from ensemble to ensemble. However, the mean obtained for a given physical quantity does not depend on the ensemble chosen in the thermodynamic limit. The grand canonical ensemble is an example of an open system.[7]
Classical statistical mechanics
For a classical system in thermal equilibrium with its environment, the ensemble average takes the form of an integral over the phase space of the system:
${\bar {A}}={\frac {\int {Ae^{-\beta H(q_{1},q_{2},\ldots ,q_{N},p_{1},p_{2},\ldots ,p_{N})}d\tau }}{\int {e^{-\beta H(q_{1},q_{2},\ldots ,q_{N},p_{1},p_{2},\ldots ,p_{N})}d\tau }}}$
where:
${\bar {A}}$ is the ensemble average of the system property A,
$\beta $ is ${\frac {1}{kT}}$, known as thermodynamic beta,
H is the Hamiltonian of the classical system in terms of the set of coordinates $q_{i}$ and their conjugate generalized momenta $p_{i}$, and
$d\tau $ is the volume element of the classical phase space of interest.
The denominator in this expression is known as the partition function, and is denoted by the letter Z.
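The phase-space integral above can be evaluated numerically for simple systems. The sketch below (an illustration, not a standard reference implementation) uses a brute-force grid quadrature for a one-dimensional harmonic oscillator with m = k = 1, where equipartition predicts an average energy of exactly 1/β.

```python
import math

def classical_average(A, H, beta, L=10.0, n=401):
    # <A> = ∫ A e^{-beta H} dq dp / ∫ e^{-beta H} dq dp,
    # approximated on an n x n grid over the square [-L, L]^2
    # of a one-dimensional phase space (q, p).
    step = 2 * L / (n - 1)
    num = den = 0.0
    for i in range(n):
        q = -L + i * step
        for j in range(n):
            p = -L + j * step
            w = math.exp(-beta * H(q, p))  # Boltzmann weight
            num += A(q, p) * w
            den += w                        # accumulates Z (up to d_tau)
    return num / den

# Harmonic oscillator H = p^2/2 + q^2/2; equipartition gives
# <H> = 1/beta, i.e. kT/2 for each of the two quadratic terms.
H = lambda q, p: 0.5 * (p * p + q * q)
E_avg = classical_average(H, H, beta=1.0)
# E_avg ≈ 1.0
```

Note that the grid constant cancels between numerator and denominator, just as the factor 1/(hⁿC) cancels out of any ensemble average.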
Quantum statistical mechanics
In quantum statistical mechanics, for a quantum system in thermal equilibrium with its environment, the weighted average takes the form of a sum over quantum energy states, rather than a continuous integral:
${\bar {A}}={\frac {\sum _{i}{A_{i}e^{-\beta E_{i}}}}{\sum _{i}{e^{-\beta E_{i}}}}}$
Canonical ensemble average
The generalized version of the partition function provides the complete framework for working with ensemble averages in thermodynamics, information theory, statistical mechanics and quantum mechanics.
The microcanonical ensemble represents an isolated system in which energy (E), volume (V) and the number of particles (N) are all constant. The canonical ensemble represents a closed system which can exchange energy (E) with its surroundings (usually a heat bath), but whose volume (V) and number of particles (N) are constant. The grand canonical ensemble represents an open system which can exchange both energy (E) and particles with its surroundings, but whose volume (V) is kept constant.
Operational interpretation
The discussion so far, while rigorous, has taken for granted that the notion of an ensemble is valid a priori, as is commonly done in physical contexts. What has not been shown is that the ensemble itself (not the consequent results) is a precisely defined mathematical object. For instance,
• It is not clear where this very large set of systems exists (for example, is it a gas of particles inside a container?)
• It is not clear how to physically generate an ensemble.
In this section, we attempt to partially answer this question.
Suppose we have a preparation procedure for a system in a physics lab: For example, the procedure might involve a physical apparatus and some protocols for manipulating the apparatus. As a result of this preparation procedure, some system is produced and maintained in isolation for some small period of time. By repeating this laboratory preparation procedure we obtain a sequence of systems X1, X2, ..., Xk, which in our mathematical idealization we assume to be an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble.
In a laboratory setting, each one of these prepared systems might be used as input for one subsequent testing procedure. Again, the testing procedure involves a physical apparatus and some protocols; as a result of the testing procedure we obtain a yes-or-no answer. Given a testing procedure E applied to each prepared system, we obtain a sequence of values Meas(E, X1), Meas(E, X2), ..., Meas(E, Xk). Each of these values is a 0 (no) or a 1 (yes).
Assume the following time average exists:
$\sigma (E)=\lim _{N\rightarrow \infty }{\frac {1}{N}}\sum _{k=1}^{N}\operatorname {Meas} (E,X_{k})$
For quantum mechanical systems, an important assumption made in the quantum logic approach to quantum mechanics is the identification of yes-no questions to the lattice of closed subspaces of a Hilbert space. With some additional technical assumptions one can then infer that states are given by density operators S so that:
$\sigma (E)=\operatorname {Tr} (ES).$
We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values.
See also
• Density matrix
• Ensemble (fluid mechanics)
• Phase space
• Liouville's theorem (Hamiltonian)
• Maxwell–Boltzmann statistics
• Replication (statistics)
Notes
1. This equal-volume partitioning is a consequence of Liouville's theorem, i.e., the principle of conservation of extension in canonical phase space for Hamiltonian mechanics. This can also be demonstrated starting with the conception of an ensemble as a multitude of systems. See Gibbs' Elementary Principles, Chapter I.
2. (Historical note) Gibbs' original ensemble effectively set h = 1 [energy unit]×[time unit], leading to unit-dependence in the values of some thermodynamic quantities like entropy and chemical potential. Since the advent of quantum mechanics, h is often taken to be equal to Planck's constant in order to obtain a semiclassical correspondence with quantum mechanics.
3. In some cases the overcounting error is benign. An example is the choice of coordinate system used for representing orientations of three-dimensional objects. A simple encoding is the 3-sphere (e.g., unit quaternions) which is a double cover—each physical orientation can be encoded in two ways. If this encoding is used without correcting the overcounting, then the entropy will be higher by k log 2 per rotatable object and the chemical potential lower by kT log 2. This does not actually lead to any observable error since it only causes unobservable offsets.
4. Technically, there are some phases where the permutation of particles does not even yield a distinct phase: for example, two similar particles can share the exact same trajectory, internal state, etc. However, in classical mechanics these phases only make up an infinitesimal fraction of the phase space (they have measure zero) and so they do not contribute to any volume integral in phase space.
References
1. Rennie, Richard; Jonathan Law (2019). Oxford Dictionary of Physics. pp. 458 ff. ISBN 978-0198821472.
2. Gibbs, Josiah Willard (1902). Elementary Principles in Statistical Mechanics. New York: Charles Scribner's Sons.
3. Kittel, Charles; Herbert Kroemer (1980). Thermal Physics, Second Edition. San Francisco: W.H. Freeman and Company. pp. 31 ff. ISBN 0-7167-1088-9.
4. Landau, L.D.; Lifshitz, E.M. (1980). Statistical Physics. Pergamon Press. pp. 9 ff. ISBN 0-08-023038-5.
5. Gibbs, J.W. (1928). The Collected Works, Vol. 2. Green & Co, London, New York: Longmans.
6. "Simulation of chemical reaction equilibria by the reaction ensemble Monte Carlo method: a review". doi:10.1080/08927020801986564.
7. http://physics.gmu.edu/~pnikolic/PHYS307/lectures/ensembles.pdf
External links
• Monte Carlo applet applied in statistical physics problems.
Statistical mechanics
Theory
• Principle of maximum entropy
• ergodic theory
Statistical thermodynamics
• Ensembles
• partition functions
• equations of state
• thermodynamic potential:
• U
• H
• F
• G
• Maxwell relations
Models
• Ferromagnetism models
• Ising
• Potts
• Heisenberg
• percolation
• Particles with force field
• depletion force
• Lennard-Jones potential
Mathematical approaches
• Boltzmann equation
• H-theorem
• Vlasov equation
• BBGKY hierarchy
• stochastic process
• mean-field theory and conformal field theory
Critical phenomena
• Phase transition
• Critical exponents
• correlation length
• size scaling
Entropy
• Boltzmann
• Shannon
• Tsallis
• Rényi
• von Neumann
Applications
• Statistical field theory
• elementary particle
• superfluidity
• Condensed matter physics
• Complex system
• chaos
• information theory
• Boltzmann machine
Using the Borsuk–Ulam Theorem
Using the Borsuk–Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry is a graduate-level mathematics textbook in topological combinatorics. It describes the use of results in topology, and in particular the Borsuk–Ulam theorem, to prove theorems in combinatorics and discrete geometry. It was written by Czech mathematician Jiří Matoušek, and published in 2003 by Springer-Verlag in their Universitext series (ISBN 978-3-540-00362-5).[1][2]
Topics
The topic of the book is part of a relatively new field of mathematics crossing between topology and combinatorics, now called topological combinatorics.[2][3] The starting point of the field,[3] and one of the central inspirations for the book, was a proof that László Lovász published in 1978 of a 1955 conjecture by Martin Kneser, according to which the Kneser graphs $KG_{2n+k,n}$ have no graph coloring with $k+1$ colors. Lovász used the Borsuk–Ulam theorem in his proof, and Matoušek gathers many related results, published subsequently, to show that this connection between topology and combinatorics is not just a proof trick but an area.[4]
The book has six chapters. After two chapters reviewing the basic notions of algebraic topology, and proving the Borsuk–Ulam theorem, the applications to combinatorics and geometry begin in the third chapter, with topics including the ham sandwich theorem, the necklace splitting problem, Gale's lemma on points in hemispheres, and several results on colorings of Kneser graphs.[1][2] After another chapter on more advanced topics in equivariant topology, two more chapters of applications follow, separated according to whether the equivariance is modulo two or using a more complicated group action.[5] Topics in these chapters include the van Kampen–Flores theorem on embeddability of skeletons of simplices into lower-dimensional Euclidean spaces, and topological and multicolored variants of Radon's theorem and Tverberg's theorem on partitions into subsets with intersecting convex hulls.[1][2]
Audience and reception
The book is written at a graduate level, and has exercises making it suitable as a graduate textbook. Some knowledge of topology would be helpful for readers but is not necessary. Reviewer Mihaela Poplicher writes that it is not easy to read, but is "very well written, very interesting, and very informative".[2] Reviewer Imre Bárány adds that "The book is well written, and the style is lucid and pleasant, with plenty of illustrative examples."[5]
Matoušek intended this material to become part of a broader textbook on topological combinatorics, to be written jointly with Anders Björner and Günter M. Ziegler.[2][5] However, this was not completed before Matoušek's untimely death in 2015.[6]
References
1. Dzedzej, Zdzisław (2004), "Review of Using the Borsuk-Ulam Theorem", Mathematical Reviews, MR 1988723
2. Poplicher, Mihaela (January 2005), "Review of Using the Borsuk-Ulam Theorem", MAA Reviews, Mathematical Association of America
3. de Longueville, Mark, "25 years proof of the Kneser conjecture: The advent of topological combinatorics" (PDF), EMS Newsletter, European Mathematical Society: 16–19
4. Ziegler, Günter M., "Review of Using the Borsuk-Ulam Theorem", zbMATH, Zbl 1016.05001
5. Bárány, Imre (March 2004), "Review of Using the Borsuk-Ulam Theorem", Combinatorics, Probability and Computing, 13 (2): 281–282, doi:10.1017/s096354830400608x
6. Kratochvíl, Jan; Loebl, Martin; Nešetřil, Jarik; Valtr, Pavel, Prof. Jiří Matoušek
Uta Merzbach
Uta Caecilia Merzbach (February 9, 1933 – June 27, 2017) was a German-American historian of mathematics who became the first curator of mathematical instruments at the Smithsonian Institution.[1]
Uta Merzbach
Born: Berlin
Died: Georgetown, Texas
Alma mater:
• University of Texas at Austin
• Harvard University
Occupation: Curator
Employer:
• National Museum of American History
Positions held: Associate Curator, Curator
Early life
Merzbach was born in Berlin, where her mother was a philologist and her father was an economist who worked for the Reich Association of Jews in Germany during World War II. The Nazi government closed the association in June 1943; they arrested the family, along with other leading members of the association, and sent them to the Theresienstadt concentration camp on August 4, 1943.[1][2] The Merzbachs survived the war and the camp, and after living for a year in a refugee camp in Deggendorf they moved to Georgetown, Texas in 1946, where her father found a faculty position at Southwestern University.
Education
After high school in Brownwood, Texas, Merzbach entered Southwestern, but transferred after two years to the University of Texas at Austin, where she graduated in 1952 with a bachelor's degree in mathematics. In 1954, she earned a master's degree there, also in mathematics.[1] Merzbach became a school teacher, but soon returned to graduate study at Harvard University.[1]
She completed her Ph.D. at Harvard in 1965. Her dissertation, Quantity of Structure: Development of Modern Algebraic Concepts from Leibniz to Dedekind, combined mathematics and the history of science; it was jointly supervised by mathematician Garrett Birkhoff and historian of science I. Bernard Cohen.[1][3][4]
Career
Merzbach joined the Smithsonian as an associate curator in 1964 (later curator), and served there until 1988 in the National Museum of American History. As well as collecting mathematical objects at the Smithsonian, she also collected interviews with many of the pioneers of computing.[1] In 1991, she co-authored the second edition of A History of Mathematics, originally published in 1968 by Carl Benjamin Boyer.[1][5]
After her retirement she returned to Georgetown, Texas, where she died in 2017.[1]
References
1. "In Memoriam: Uta C. Merzbach", Smithsonian Torch, July 2017
2. Spicer, Kevin; Cucchiara, Martina, eds. (2017), The Evil That Surrounds Us: The WWII Memoir of Erna Becker-Kohen, Indiana University Press, pp. 13, 27–28, 53, 133, 140, ISBN 9780253029904
3. Uta Merzbach at the Mathematics Genealogy Project
4. "Uta C. Merzbach Papers, 1948-2017". Texas Archival Resources Online. Retrieved 2022-05-08.
5. Acker, Kathleen (July 2007), "Review of History of Mathematics (2nd ed.)", Convergence, Mathematical Association of America
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Germany
• Italy
• Israel
• Belgium
• United States
• Latvia
• Japan
• Czech Republic
• Greece
• Croatia
• Netherlands
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
Other
• IdRef
Utilitarian cake-cutting
Utilitarian cake-cutting (also called maxsum cake-cutting) is a rule for dividing a heterogeneous resource, such as a cake or a land-estate, among several partners with different cardinal utility functions, such that the sum of the utilities of the partners is as large as possible. It is a special case of the utilitarian social choice rule. Utilitarian cake-cutting is often not "fair"; hence, utilitarianism is often in conflict with fair cake-cutting.
Part of a series on
Utilitarianism
Predecessors
• Mozi
• Śāntideva
• David Hume
• Claude Adrien Helvétius
• Cesare Beccaria
• William Godwin
• Francis Hutcheson
• William Paley
Key proponents
• Jeremy Bentham
• John Stuart Mill
• Henry Sidgwick
• R. M. Hare
• Peter Singer
Types of utilitarianism
• Negative
• Rule
• Act
• Two-level
• Total
• Average
• Preference
• Classical
Key concepts
• Pain
• Suffering
• Pleasure
• Utility
• Happiness
• Eudaimonia
• Consequentialism
• Equal consideration
• Felicific calculus
• Utilitarian social choice rule
Problems
• Demandingness objection
• Mere addition paradox
• Paradox of hedonism
• Replaceability argument
• Utility monster
Related topics
• Rational choice theory
• Game theory
• Neoclassical economics
• Population ethics
• Effective altruism
Philosophy portal
Example
Consider a cake with two parts: chocolate and vanilla, and two partners: Alice and George, with the following valuations:
Partner | Chocolate | Vanilla
Alice | 9 | 1
George | 6 | 4
The utilitarian rule gives each part to the partner with the highest utility. In this case, the utilitarian rule gives the entire chocolate to Alice and the entire vanilla to George. The maxsum is 9 + 4 = 13.
The utilitarian division is not fair: it is not proportional since George receives less than half the total cake value, and it is not envy-free since George envies Alice.
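For a cake made of finitely many homogeneous regions, as in the example above, the utilitarian rule reduces to giving each region to an agent who values it most. A minimal sketch (the function name and data layout are illustrative choices):

```python
def utilitarian_division(values):
    # values[i][j] = value that agent i assigns to region j.
    # Give each region to an agent who values it most; return the
    # allocation (region -> agent) and the resulting maxsum.
    allocation = {}
    maxsum = 0
    for j in range(len(values[0])):
        winner = max(range(len(values)), key=lambda i: values[i][j])
        allocation[j] = winner
        maxsum += values[winner][j]
    return allocation, maxsum

# Alice (agent 0) and George (agent 1) from the example above:
alloc, total = utilitarian_division([[9, 1], [6, 4]])
# → alloc == {0: 0, 1: 1} (chocolate to Alice, vanilla to George), total == 13
```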
Notation
The cake is called $C$. It is usually assumed to be either a finite 1-dimensional segment, a 2-dimensional polygon, or a finite subset of the multidimensional Euclidean space $\mathbb {R} ^{d}$.
There are $n$ partners. Each partner $i$ has a personal value function $V_{i}$ which maps subsets of $C$ ("pieces") to numbers.
$C$ has to be divided into $n$ disjoint pieces, one piece per partner. The piece allocated to partner $i$ is called $X_{i}$, and $C=X_{1}\sqcup ...\sqcup X_{n}$.
A division $X$ is called utilitarian or utilitarian-maximal or maxsum if it maximizes the following expression:
$\sum _{i=1}^{n}{V_{i}(X_{i})}$
The concept is often generalized by assigning a different weight to each partner. A division $X$ is called weighted-utilitarian-maximal (WUM) if it maximizes the following expression:
$\sum _{i=1}^{n}{\frac {V_{i}(X_{i})}{w_{i}}}$
where the $w_{i}$ are given positive constants.
Maxsum and Pareto-efficiency
Every WUM division with positive weights is obviously Pareto-efficient. This is because, if a division $Y$ Pareto-dominates a division $X$, then the weighted sum-of-utilities in $Y$ is strictly larger than in $X$, so $X$ cannot be a WUM division.
What's more surprising is that every Pareto-efficient division is WUM for some selection of weights.[1]
Characterization of the utilitarian rule
Christopher P. Chambers suggests a characterization to the WUM rule.[2] The characterization is based on the following properties of a division rule R:
• Pareto-efficiency (PE): the rule R returns only divisions which are Pareto-efficient.
• Division independence (DI): whenever a cake is partitioned into several sub-cakes and each sub-cake is divided according to rule R, the result is the same as if the original cake had been divided according to R.
• Independence of infeasible land (IIL): whenever a sub-cake is divided according to R, the result does not depend on the utilities of the partners in the other sub-cakes.
• Positive treatment of equals (PTE): whenever all partners have the same utility function, R recommends at least one division that gives a positive utility to each partner.
• Scale-invariance (SI): whenever the utility functions of the partners are multiplied by constants (a possibly different constant to each partner), the recommendations given by R do not change.
• Continuity (CO): for a fixed piece of cake, the set of utility profiles which map to a specific allocation is a closed set under pointwise convergence.
The following is proved for partners that assign positive utility to every piece of cake with positive size:
• If R is PE, DI, and IIL, then there exists a sequence of weights $w_{1},\dots ,w_{n}$ such that all divisions recommended by R are WUM with these weights (it is already known that every PE division is WUM with some weights; the new part is that all divisions recommended by R are WUM with the same weights. This follows from the DI property).
• If R is PE, DI, IIL, and PTE, then all divisions recommended by R are utilitarian-maximal (in other words, all divisions must be WUM and all agents must have equal weights. This follows from the PTE property).
• If R is PE, DI, IIL, and SI, then R is a dictatorial rule: it gives the entire cake to a single partner.
• If R is PE, DI, IIL, and CO, then there exists a sequence of weights $w_{1},\dots ,w_{n}$ such that R is a WUM rule with these weights (i.e., R recommends all and only WUM divisions with these weights).
Finding utilitarian divisions
Disconnected pieces
When the value functions are additive, maxsum divisions always exist. Intuitively, we can give each fraction of the cake to the partner that values it the most, as in the example above. Similarly, WUM divisions can be found by giving each fraction of the cake to the partner for whom the ratio $V_{i}/w_{i}$ is largest.
This process is easy to carry out when the cake is piecewise-homogeneous, i.e., when the cake can be divided into a finite number of regions such that the value-density of each region is constant for all partners.
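The weighted variant of this greedy rule is a one-line change to the unweighted one: each region goes to an agent maximizing the ratio $V_{i}/w_{i}$ there. A sketch on the example cake, with illustrative weights (lowering an agent's weight raises that agent's priority in the WUM objective):

```python
def wum_division(values, weights):
    # Weighted-utilitarian division of a piecewise-homogeneous cake:
    # values[i][j] is agent i's value for region j; each region goes
    # to an agent maximizing values[i][j] / weights[i].
    allocation = {}
    for j in range(len(values[0])):
        allocation[j] = max(range(len(values)),
                            key=lambda i: values[i][j] / weights[i])
    return allocation

# With weights (1, 1) this reduces to the plain utilitarian rule;
# giving George (agent 1) weight 0.25 hands him both regions:
alloc = wum_division([[9, 1], [6, 4]], [1.0, 0.25])
# → alloc == {0: 1, 1: 1}
```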
When the cake is not piecewise-homogeneous, the above algorithm does not work since there is an infinite number of different "pieces" to consider.
Maxsum divisions still exist. This is a corollary of the Dubins–Spanier compactness theorem and it can also be proved using the Radon–Nikodym set.
However, no finite algorithm can find a maxsum division. Proof:[3][4]:Cor.2 A finite algorithm has value-data only about a finite number of pieces; i.e., there is only a finite number of subsets of the cake for which the algorithm knows the partners' valuations. Suppose the algorithm has stopped after having value-data about $k$ subsets. Now, it may be the case that all partners answered all the queries as if they had the same value measure. In this case, the largest possible utilitarian value that the algorithm can achieve is 1. However, it is possible that deep inside one of the $k$ pieces, there is a subset which two partners value differently. In this case, there exists a super-proportional division, in which each partner receives a value of more than $1/n$, so the sum of utilities is strictly more than 1. Hence, the division returned by the finite algorithm is not maxsum.
Connected pieces
When the cake is 1-dimensional and the pieces must be connected, the simple algorithm of assigning each piece to the agent that values it the most no longer works, even with piecewise-constant valuations. In this case, the problem of finding a UM division is NP-hard, and furthermore no FPTAS is possible unless P=NP.
There is an 8-factor approximation algorithm, and a fixed-parameter tractable algorithm which is exponential in the number of players.[5]
For every set of positive weights, a WUM division exists and can be found in a similar way.
Maxsum and fairness
A maxsum division is not always fair; see the example above. Similarly, a fair division is not always maxsum.
One approach to this conflict is to bound the "price of fairness": calculate upper and lower bounds on the decrease in the sum of utilities that is required for fairness. For more details, see price of fairness.
Another approach to combining efficiency and fairness is to find, among all possible fair divisions, a fair division with a highest sum-of-utilities:
Finding utilitarian-fair allocations
The following algorithms can be used to find an envy-free cake-cutting with maximum sum-of-utilities, for a cake which is a 1-dimensional interval, when each person may receive disconnected pieces and the value functions are additive:[6]
1. For $n$ partners with piecewise-constant valuations: divide the cake into m totally-constant regions. Solve a linear program with nm variables: each (agent, region) pair has a variable that determines the fraction of the region given to the agent. For each region, there is a constraint saying that the sum of all fractions from this region is 1; for each (agent, agent) pair, there is a constraint saying that the first agent does not envy the second one. Note that the allocation produced by this procedure might be highly fractioned.
2. For $2$ partners with piecewise-linear valuations: for each point in the cake, calculate the ratio between the utilities: $r=u_{1}/u_{2}$. Give partner 1 the points with $r\geq r^{*}$ and partner 2 the points with $r<r^{*}$, where $r^{*}$ is a threshold calculated so that the division is envy-free. In general $r^{*}$ cannot be calculated exactly because it might be irrational, but in practice, when the valuations are piecewise-linear, $r^{*}$ can be approximated by an "irrational search" approximation algorithm. For any $\epsilon >0$, the algorithm finds an allocation that is $\epsilon $-EF (the value of each agent is at least the value of each other agent minus $\epsilon $), and attains a sum that is at least the maximum sum of an EF allocation. Its run-time is polynomial in the input and in $\log(1/\epsilon )$.
3. For $n$ partners with general valuations: additive approximation to envy and efficiency, based on the piecewise-constant-valuations algorithm.
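The linear program in step 1 can be sketched with a generic LP solver. The code below assumes `scipy` is available and reuses the two-agent cake from the Example section (each "totally-constant region" is one of the two flavors); maximization is expressed as minimizing the negated objective. Variable `x[i*m + j]` is the fraction of region j given to agent i.

```python
import numpy as np
from scipy.optimize import linprog

v = np.array([[9.0, 1.0], [6.0, 4.0]])  # v[i][j]: agent i's value of region j
n, m = v.shape

c = -v.flatten()                         # maximize total utility

# Each region must be fully allocated: sum_i x[i, j] = 1.
A_eq = np.zeros((m, n * m))
for j in range(m):
    for i in range(n):
        A_eq[j, i * m + j] = 1.0
b_eq = np.ones(m)

# Envy-freeness: agent i values own share at least as much as agent k's,
# i.e. v_i . x_k - v_i . x_i <= 0 for every ordered pair (i, k).
A_ub, b_ub = [], []
for i in range(n):
    for k in range(n):
        if i == k:
            continue
        row = np.zeros(n * m)
        row[k * m:(k + 1) * m] += v[i]   # agent i's value of k's share
        row[i * m:(i + 1) * m] -= v[i]   # minus value of own share
        A_ub.append(row)
        b_ub.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=A_eq, b_eq=b_eq, bounds=(0, 1), method="highs")
ef_maxsum = -res.fun
# Envy-freeness costs something here: ef_maxsum == 12.5 < 13 (the
# unconstrained maxsum), since George would envy Alice otherwise.
```

As the note in step 1 warns, the optimal LP solution may split regions into fractions; here it gives George all of the vanilla plus 1/6 of the chocolate.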
Properties of utilitarian-fair allocations
Brams, Feldman, Lai, Morgenstern and Procaccia[7] study both envy-free (EF) and equitable (EQ) cake divisions, and relate them to maxsum and Pareto-optimality (PO). As explained above, maxsum allocations are always PO. However, when maxsum is constrained by fairness, this is not necessarily true. They prove the following:
• When there are two agents, maxsum-EF, maximum-EQ and maximum-EF-EQ allocations are always PO.
• When there are three or more agents with piecewise-uniform valuations, maxsum-EF allocations are always PO (since EF is equivalent to proportionality, which is preserved under Pareto improvements). However, there may be no maxsum-EQ and maxsum-EQ-EF allocations that are PO.
• When there are three or more agents with piecewise-constant valuations, there may even be no maxsum-EF allocations that are PO. For example, consider a cake with three regions and three agents with values: Alice: 51/101, 50/101, 0; Bob: 50/101, 51/101, 0; Carl: 51/111, 10/111, 50/111. The maxsum rule gives region i to agent i, but it is not EF since Carl envies Alice. Using a linear program, it is possible to find the unique maxsum-EF allocation, and show that it must share both region 1 and region 2 between Alice and Bob. However, such an allocation cannot be PO, since Alice and Bob could both gain by swapping their shares in these regions.
• When all agents have piecewise-linear valuations, the utility-sum of a maxsum-EF allocation is at least as large as a maxsum-EQ allocation. This result extends to general valuations up to an additive approximation (i.e., $\epsilon $-EF allocations have a utility-sum of at least EQ allocations minus $\epsilon $).
Monotonicity properties of utilitarian cake-cutting
When the pieces may be disconnected, the absolute-utilitarian rule (maximizing the sum of non-normalized utilities) is resource-monotonic and population-monotonic. The relative-utilitarian rule (maximizing the sum of normalized utilities) is population-monotonic but not resource-monotonic.[8]
This no longer holds when the pieces are connected.[9]
See also
• Efficient cake-cutting
• Fair cake-cutting
• Weller's theorem
• Pareto-efficient envy-free division
• Rank-maximal allocation
• Utilitarian voting - the utilitarian principle in a different context.
References
1. Barbanel, Julius B.; Zwicker, William S. (1997). "Two applications of a theorem of Dvoretsky, Wald, and Wolfovitz to cake division". Theory and Decision. 43 (2): 203. doi:10.1023/a:1004966624893. S2CID 118505359.. See also Weller's theorem. For a similar result related to the problem of homogeneous resource allocation, see Varian's theorems.
2. Chambers, Christopher P. (2005). "Allocation rules for land division". Journal of Economic Theory. 121 (2): 236–258. doi:10.1016/j.jet.2004.04.008.
3. Brams, Steven J.; Taylor, Alan D. (1996). Fair Division [From cake-cutting to dispute resolution]. p. 48. ISBN 978-0521556446.
4. Ianovski, Egor (2012-03-01). "Cake Cutting Mechanisms". arXiv:1203.0100 [cs.GT].
5. Aumann, Yonatan; Dombb, Yair; Hassidim, Avinatan (2013). Computing Socially-Efficient Cake Divisions. AAMAS.
6. Cohler, Yuga Julian; Lai, John Kwang; Parkes, David C; Procaccia, Ariel (2011). Optimal Envy-Free Cake Cutting. AAAI.
7. Steven J. Brams; Michal Feldman; John K. Lai; Jamie Morgenstern; Ariel D. Procaccia (2012). On Maxsum Fair Cake Divisions. Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI-12). pp. 1285–1291. Retrieved 6 December 2015.
8. Segal-Halevi, Erel; Sziklai, Balázs R. (2019-09-01). "Monotonicity and competitive equilibrium in cake-cutting". Economic Theory. 68 (2): 363–401. arXiv:1510.05229. doi:10.1007/s00199-018-1128-6. ISSN 1432-0479. S2CID 179618.
9. Segal-Halevi, Erel; Sziklai, Balázs R. (2018-09-01). "Resource-monotonicity and population-monotonicity in connected cake-cutting". Mathematical Social Sciences. 95: 19–30. arXiv:1703.08928. doi:10.1016/j.mathsocsci.2018.07.001. ISSN 0165-4896. S2CID 16282641.
Utility functions on indivisible goods
Some branches of economics and game theory deal with indivisible goods, discrete items that can be traded only as a whole. For example, in combinatorial auctions there is a finite set of items, and every agent can buy a subset of the items, but an item cannot be divided among two or more agents.
It is usually assumed that every agent assigns subjective utility to every subset of the items. This can be represented in one of two ways:
• An ordinal utility preference relation, usually marked by $\succ $. The fact that an agent prefers a set $A$ to a set $B$ is written $A\succ B$. If the agent only weakly prefers $A$ (i.e. either prefers $A$ or is indifferent between $A$ and $B$) then this is written $A\succeq B$.
• A cardinal utility function, usually denoted by $u$. The utility an agent gets from a set $A$ is written $u(A)$. Cardinal utility functions are often normalized such that $u(\emptyset )=0$, where $\emptyset $ is the empty set.
A cardinal utility function implies a preference relation: $u(A)>u(B)$ implies $A\succ B$ and $u(A)\geq u(B)$ implies $A\succeq B$. Utility functions can have several properties.[1]
Monotonicity
Monotonicity means that an agent always (weakly) prefers to have extra items. Formally:
• For a preference relation: $A\supseteq B$ implies $A\succeq B$.
• For a utility function: $A\supseteq B$ implies $u(A)\geq u(B)$ (i.e. u is a monotone function).
Monotonicity is equivalent to the free disposal assumption: if an agent may always discard unwanted items, then extra items can never decrease the utility.
Additivity
Additive utility
$A$ | $u(A)$
$\emptyset $ | 0
apple | 5
hat | 7
apple and hat | 12
Additivity (also called linearity or modularity) means that "the whole is equal to the sum of its parts." That is, the utility of a set of items is the sum of the utilities of each item separately. This property is relevant only for cardinal utility functions. It says that for every set $A$ of items,
$u(A)=\sum _{x\in A}u(\{x\})$
assuming that $u(\emptyset )=0$. In other words, $u$ is an additive function. An equivalent definition is: for any sets of items $A$ and $B$,
$u(A)+u(B)=u(A\cup B)+u(A\cap B).$
An additive utility function is characteristic of independent goods. For example, an apple and a hat are considered independent: the utility a person receives from having an apple is the same whether or not he has a hat, and vice versa. A typical utility function for this case is given at the right.
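As a quick sketch (Python; the item values are the ones from the table above), an additive utility can be stored per item and summed over bundles:

```python
# Additive utility: the value of a bundle is the sum of its item values.
item_values = {"apple": 5, "hat": 7}

def u(bundle):
    return sum(item_values[x] for x in bundle)

print(u(set()))             # 0
print(u({"apple", "hat"}))  # 12, equal to u({apple}) + u({hat})
```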
Submodularity and supermodularity
Submodular utility
$A$ | $u(A)$
$\emptyset $ | 0
apple | 5
bread | 7
apple and bread | 9
Submodularity means that "the whole is not more than the sum of its parts (and may be less)." Formally, for all sets $A$ and $B$,
$u(A)+u(B)\geq u(A\cup B)+u(A\cap B)$
In other words, $u$ is a submodular set function.
An equivalent property is diminishing marginal utility, which means that for any sets $A$ and $B$ with $A\subseteq B$, and every $x\notin B$:[2]
$u(A\cup \{x\})-u(A)\geq u(B\cup \{x\})-u(B)$.
A submodular utility function is characteristic of substitute goods. For example, an apple and a bread loaf can be considered substitutes: the utility a person receives from eating an apple is smaller if he has already eaten bread (and vice versa), since he is less hungry in that case. A typical utility function for this case is given at the right.
Supermodular utility
$A$ | $u(A)$
$\emptyset $ | 0
apple | 5
knife | 7
apple and knife | 15
Supermodularity is the opposite of submodularity: it means that "the whole is not less than the sum of its parts (and may be more)". Formally, for all sets $A$ and $B$,
$u(A)+u(B)\leq u(A\cup B)+u(A\cap B)$
In other words, $u$ is a supermodular set function.
An equivalent property is increasing marginal utility, which means that for all sets $A$ and $B$ with $A\subseteq B$, and every $x\notin B$:
$u(B\cup \{x\})-u(B)\geq u(A\cup \{x\})-u(A)$.
A supermodular utility function is characteristic of complementary goods. For example, an apple and a knife can be considered complementary: the utility a person receives from an apple is larger if he already has a knife (and vice versa), since it is easier to eat an apple after cutting it with a knife. A possible utility function for this case is given at the right.
A utility function is additive if and only if it is both submodular and supermodular.
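The defining inequalities can be checked by brute force over all pairs of subsets. The sketch below (Python; function and variable names are illustrative) verifies that the apple/hat table above is additive (both submodular and supermodular), the apple/bread table is submodular but not supermodular, and the apple/knife table is the reverse:

```python
from itertools import combinations

def powerset(items):
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_submodular(u, items):
    # u(A) + u(B) >= u(A ∪ B) + u(A ∩ B) for all pairs of subsets
    sets = powerset(items)
    return all(u[A] + u[B] >= u[A | B] + u[A & B] for A in sets for B in sets)

def is_supermodular(u, items):
    sets = powerset(items)
    return all(u[A] + u[B] <= u[A | B] + u[A & B] for A in sets for B in sets)

# The three example tables above (apple/hat, apple/bread, apple/knife):
additive    = {frozenset(): 0, frozenset({"apple"}): 5,
               frozenset({"hat"}): 7, frozenset({"apple", "hat"}): 12}
substitutes = {frozenset(): 0, frozenset({"apple"}): 5,
               frozenset({"bread"}): 7, frozenset({"apple", "bread"}): 9}
complements = {frozenset(): 0, frozenset({"apple"}): 5,
               frozenset({"knife"}): 7, frozenset({"apple", "knife"}): 15}

print(is_submodular(additive, ["apple", "hat"]),
      is_supermodular(additive, ["apple", "hat"]))        # True True
print(is_submodular(substitutes, ["apple", "bread"]),
      is_supermodular(substitutes, ["apple", "bread"]))   # True False
print(is_submodular(complements, ["apple", "knife"]),
      is_supermodular(complements, ["apple", "knife"]))   # False True
```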
Subadditivity and superadditivity
Subadditive but not submodular
$A$ | $u(A)$
$\emptyset $ | 0
X or Y or Z | 2
X,Y or Y,Z or Z,X | 3
X,Y,Z | 5
Subadditivity means that for every pair of disjoint sets $A,B$
$u(A\cup B)\leq u(A)+u(B)$
In other words, $u$ is a subadditive set function.
Assuming $u(\emptyset )$ is non-negative, every submodular function is subadditive. However, there are non-negative subadditive functions that are not submodular. For example, assume that there are 3 identical items, $X$, $Y$, and $Z$, and the utility depends only on their quantity. The table on the right describes a utility function that is subadditive but not submodular, since
$u(\{X,Y\})+u(\{Y,Z\})<u(\{X,Y\}\cup \{Y,Z\})+u(\{X,Y\}\cap \{Y,Z\}).$
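A short sketch (Python) confirms the counterexample: with utility depending only on the number of items held, subadditivity holds for disjoint sets, yet the submodularity inequality fails for $A=\{X,Y\}$ and $B=\{Y,Z\}$:

```python
# Utility of a set of the three identical items depends only on its size:
# 0 items -> 0, 1 -> 2, 2 -> 3, 3 -> 5 (the subadditive table above).
value_by_count = [0, 2, 3, 5]

def u(s):
    return value_by_count[len(s)]

A, B = {"X", "Y"}, {"Y", "Z"}
assert u({"X", "Y", "Z"}) <= u({"X"}) + u({"Y", "Z"})  # subadditive: 5 <= 2 + 3
print(u(A) + u(B), u(A | B) + u(A & B))  # 6 7 (submodularity would require 6 >= 7)
```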
Superadditive but not supermodular
$A$ | $u(A)$
$\emptyset $ | 0
X or Y or Z | 1
X,Y or Y,Z or Z,X | 3
X,Y,Z | 4
Superadditivity means that for every pair of disjoint sets $A,B$
$u(A\cup B)\geq u(A)+u(B)$
In other words, $u$ is a superadditive set function.
Assuming $u(\emptyset )$ is non-positive, every supermodular function is superadditive. However, there are non-negative superadditive functions that are not supermodular. For example, assume that there are 3 identical items, $X$, $Y$, and $Z$, and the utility depends only on their quantity. The table on the right describes a utility function that is non-negative and superadditive but not supermodular, since
$u(\{X,Y\})+u(\{Y,Z\})<u(\{X,Y\}\cup \{Y,Z\})+u(\{X,Y\}\cap \{Y,Z\}).$
A utility function with $u(\emptyset )=0$ is said to be additive if and only if it is both superadditive and subadditive.
With the typical assumption that $u(\emptyset )=0$, every submodular function is subadditive and every supermodular function is superadditive. Without any assumption on the utility from the empty set, these relations do not hold.
In particular, if a submodular function is not subadditive, then $u(\emptyset )$ must be negative. For example, suppose there are two items, $X,Y$, with $u(\emptyset )=-1$, $u(\{X\})=u(\{Y\})=1$ and $u(\{X,Y\})=3$. This utility function is submodular and supermodular and non-negative except on the empty set, but is not subadditive, since
$u(\{X,Y\})>u(\{X\})+u(\{Y\}).$
Also, if a supermodular function is not superadditive, then $u(\emptyset )$ must be positive. Suppose instead that $u(\emptyset )=u(\{X\})=u(\{Y\})=u(\{X,Y\})=1$. This utility function is non-negative, supermodular, and submodular, but is not superadditive, since
$u(\{X,Y\})<u(\{X\})+u(\{Y\}).$
Unit demand
Unit demand utility
$A$ | $u(A)$
$\emptyset $ | 0
apple | 5
pear | 7
apple and pear | 7
Unit demand (UD) means that the agent only wants a single good. If the agent gets two or more goods, he uses the one that gives him the highest utility, and discards the rest. Formally:
• For a preference relation: for every set $B$ there is a subset $A\subseteq B$ with cardinality $|A|=1$, such that $A\succeq B$.
• For a utility function: For every set $A$:[3]
$u(A)=\max _{x\in A}u(\{x\})$
A unit-demand function is an extreme case of a submodular function. It is characteristic of goods that are pure substitutes. For example, if there are an apple and a pear, and an agent wants to eat a single fruit, then his utility function is unit-demand, as exemplified in the table at the right.
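A unit-demand utility is just a maximum over per-item values; a minimal sketch (Python, with the apple/pear values from the table above):

```python
def unit_demand(item_values):
    """Unit-demand utility: the agent keeps only the single best item
    in the bundle and discards the rest."""
    def u(bundle):
        return max((item_values[x] for x in bundle), default=0)
    return u

u = unit_demand({"apple": 5, "pear": 7})
print(u(set()), u({"apple"}), u({"apple", "pear"}))  # 0 5 7
```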
Gross substitutes
Gross substitutes (GS) means that the agent regards the items as substitute goods or independent goods, but not complementary goods. There are many formal definitions of this property, all of which are equivalent.
• Every UD valuation is GS, but the opposite is not true.
• Every GS valuation is submodular, but the opposite is not true.
See Gross substitutes (indivisible items) for more details.
Hence the following relations hold between the classes:
$UD\subsetneq GS\subsetneq Submodular\subsetneq Subadditive$
See diagram on the right.
Aggregates of utility functions
A utility function describes the happiness of an individual. Often, we need a function that describes the happiness of an entire society. Such a function is called a social welfare function, and it is usually an aggregate function of two or more utility functions. If the individual utility functions are additive, then the following is true for the aggregate functions:
Aggregate function | Property | f | g | h | aggregate(f,g,h)
(each function is written as its values on {a}, {b}; {a,b})
Sum | Additive | 1,3; 4 | 3,1; 4 | | 4,4; 8
Average | Additive | 1,3; 4 | 3,1; 4 | | 2,2; 4
Minimum | Super-additive | 1,3; 4 | 3,1; 4 | | 1,1; 4
Maximum | Sub-additive | 1,3; 4 | 3,1; 4 | | 3,3; 4
Median | Neither | 1,3; 4 | 3,1; 4 | 1,1; 2 | 1,1; 4
Median | Neither | 1,3; 4 | 3,1; 4 | 3,3; 6 | 3,3; 4
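The table rows can be reproduced by aggregating the example functions subset by subset. A sketch (Python; the encoding of f, g, h as per-subset dictionaries is illustrative):

```python
from statistics import median

# The additive example functions from the table, stored per subset of
# {a, b}; a row "1,3; 4" means u({a})=1, u({b})=3, u({a,b})=4.
SETS = [frozenset(), frozenset("a"), frozenset("b"), frozenset("ab")]
f = dict(zip(SETS, [0, 1, 3, 4]))   # f = 1,3; 4
g = dict(zip(SETS, [0, 3, 1, 4]))   # g = 3,1; 4
h = dict(zip(SETS, [0, 1, 1, 2]))   # h = 1,1; 2

def aggregate(op, *utils):
    """Apply an aggregate function subset-by-subset."""
    return {s: op([w[s] for w in utils]) for s in SETS}

ab = frozenset("ab")
print(aggregate(sum, f, g)[ab])        # 8: sum is additive (4,4; 8)
print(aggregate(min, f, g)[ab])        # 4: minimum is superadditive (1,1; 4)
print(aggregate(max, f, g)[ab])        # 4: maximum is subadditive (3,3; 4)
print(aggregate(median, f, g, h)[ab])  # 4: the first median row (1,1; 4)
```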
See also
• Utility functions on divisible goods
• Single-minded agent
References
1. Gul, F.; Stacchetti, E. (1999). "Walrasian Equilibrium with Gross Substitutes". Journal of Economic Theory. 87: 95–124. doi:10.1006/jeth.1999.2531.
2. Moulin, Hervé (1991). Axioms of cooperative decision making. Cambridge England New York: Cambridge University Press. ISBN 9780521424585.
3. Koopmans, T. C.; Beckmann, M. (1957). "Assignment Problems and the Location of Economic Activities" (PDF). Econometrica. 25 (1): 53–76. doi:10.2307/1907742. JSTOR 1907742.
UTM theorem
In computability theory, the UTM theorem, or universal Turing machine theorem, is a basic result about Gödel numberings of the set of computable functions. It affirms the existence of a computable universal function, which is capable of calculating any other computable function.[1] The universal function is an abstract version of the universal Turing machine, thus the name of the theorem.
Roger's equivalence theorem provides a characterization of the Gödel numbering of the computable functions in terms of the smn theorem and the UTM theorem.
Theorem
The theorem states that a partial computable function u of two variables exists such that, for every computable function f of one variable, an e exists such that $f(x)\simeq u(e,x)$ for all x. This means that, for each x, either f(x) and u(e,x) are both defined and are equal, or are both undefined.[2]
The theorem thus shows that, defining φe(x) as u(e, x), the sequence φ1, φ2, … is an enumeration of the partial computable functions. The function $u$ in the statement of the theorem is called a universal function.
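The enumeration idea can be illustrated with a toy "universal function" in Python, where the index e is simply the program text of a one-argument function. This is only an analogy for the theorem's Gödel-numbered setting, not the theorem itself:

```python
def u(e, x):
    """Toy universal function: e is the source code of a one-argument
    Python function named f, and u(e, x) evaluates f on x, mirroring
    f(x) = u(e, x) with the program text e playing the role of the index."""
    env = {}
    exec(e, env)        # define f from its source code
    return env["f"](x)

square = "def f(x):\n    return x * x"
successor = "def f(x):\n    return x + 1"
print(u(square, 5))     # 25
print(u(successor, 5))  # 6
```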
References
1. Rogers 1987, p. 22.
2. Soare 1987, p. 15.
• Rogers, H. (1987) [1967]. The Theory of Recursive Functions and Effective Computability. First MIT press paperback edition. ISBN 0-262-68052-1.
• Soare, R. (1987). Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag. ISBN 3-540-15299-7.
Uwe Jannsen
Uwe Jannsen (born 11 March 1954)[1] is a German mathematician, specializing in algebra, algebraic number theory, and algebraic geometry.
Education and career
Born in Meddewade, Jannsen studied mathematics and physics at the University of Hamburg, earning his Diplom in mathematics in 1978 and his Promotion (PhD) in 1980 under Helmut Brückner and Jürgen Neukirch with the thesis Über Galoisgruppen lokaler Körper (On Galois groups of local fields).[2] In the academic year 1983–1984 he was a postdoc at Harvard University. From 1980 to 1989 he was an assistant and then docent at the University of Regensburg, where he received his habilitation in 1988. From 1989 to 1991 he held a research professorship at the Max-Planck-Institut für Mathematik in Bonn. In 1991 he became a full professor at the University of Cologne, and since 1999 he has been a professor at the University of Regensburg.
Jannsen's research deals with, among other topics, the Galois theory of algebraic number fields, the theory of motives in algebraic geometry, the Hasse principle (local–global principle), and resolution of singularities. In particular, he has done research on a cohomology theory for algebraic varieties, involving their extension in mixed motives as a development of research by Pierre Deligne, and a motivic cohomology as a development of research by Vladimir Voevodsky. In the 1980s with Kay Wingberg he completely described the absolute Galois group of p-adic number fields, i.e. in the local case.[3]
In 1994 he was an Invited Speaker with talk Mixed motives, motivic cohomology and Ext-groups at the International Congress of Mathematicians in Zürich.[4]
He was elected in 2009 a full member of the Bayerische Akademie der Wissenschaften and in 2011 a full member of the Academia Europaea.
His doctoral students include Moritz Kerz.[5]
Selected publications
• Continuous étale cohomology, Mathematische Annalen vol. 280, no. 2 1988, pp. 207–245 doi:10.1007/BF01456052
• "On the ℓ-adic cohomology of varieties over number fields and its Galois cohomology." In Galois Groups over $\mathbb {Q} $, pp. 315–360. Springer, New York, NY, 1989.
• Mixed motives and algebraic K-theory, Lecture Notes in Mathematics vol. 1400, Springer Verlag 1990 (with appendices by C. Schoen and Spencer Bloch).
• with Steven Kleiman and Jean-Pierre Serre (eds.): Motives, Proc. Symposium Pure Mathematics vol. 55, 2 vols., American Mathematical Society 1994 (Conference University of Washington, Seattle, 1991) vol. 2
• Motives, numerical equivalence and semi-simplicity, Inventiones Mathematicae, vol. 107 1992, pp. 447–452 doi:10.1007/BF01231898
References
1. biography, pdf, University of Regensburg
2. Jannsen, Uwe (1982). "Über Galoisgruppen lokaler Körper" (PDF). Inventiones Mathematicae. 70: 53–69. doi:10.1007/BF01393198. S2CID 120934623.
3. Jannsen, U.; Wingberg, K. (1982). "Die Struktur der absoluten Galoisgruppe p-adischer Zahlkörper" (PDF). Inventiones Mathematicae. 70: 71–98. doi:10.1007/BF01393199. S2CID 119378923.
4. Jannsen, Uwe. "Mixed motives, motivic cohomology, and Ext-groups." In Proceedings of the International Congress of Mathematicians, vol. 1, p. 2. 1994.
5. Uwe Jannsen at the Mathematics Genealogy Project
External links
• Homepage in Regensburg
• Bericht seiner Forschungsgruppe in Regensburg
Authority control
International
• ISNI
• VIAF
National
• Germany
• Israel
• Belgium
• United States
• Czech Republic
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
Other
• IdRef
Uzawa iteration
In numerical mathematics, the Uzawa iteration is an algorithm for solving saddle point problems. It is named after Hirofumi Uzawa and was originally introduced in the context of concave programming.[1]
Basic idea
We consider a saddle point problem of the form
${\begin{pmatrix}A&B\\B^{*}&\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}={\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}},$
where $A$ is a symmetric positive-definite matrix. Multiplying the first row by $B^{*}A^{-1}$ and subtracting from the second row yields the upper-triangular system
${\begin{pmatrix}A&B\\&-S\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}={\begin{pmatrix}b_{1}\\b_{2}-B^{*}A^{-1}b_{1}\end{pmatrix}},$
where $S:=B^{*}A^{-1}B$ denotes the Schur complement. Since $S$ is symmetric positive-definite, we can apply standard iterative methods like the gradient descent method or the conjugate gradient method to
$Sx_{2}=B^{*}A^{-1}b_{1}-b_{2}$
in order to compute $x_{2}$. The vector $x_{1}$ can be reconstructed by solving
$Ax_{1}=b_{1}-Bx_{2}.\,$
It is possible to update $x_{1}$ alongside $x_{2}$ during the iteration for the Schur complement system and thus obtain an efficient algorithm.
Implementation
We start the conjugate gradient iteration by computing the residual
$r_{2}:=B^{*}A^{-1}b_{1}-b_{2}-Sx_{2}=B^{*}A^{-1}(b_{1}-Bx_{2})-b_{2}=B^{*}x_{1}-b_{2},$
of the Schur complement system, where
$x_{1}:=A^{-1}(b_{1}-Bx_{2})$
denotes the upper half of the solution vector matching the initial guess $x_{2}$ for its lower half. We complete the initialization by choosing the first search direction
$p_{2}:=r_{2}.\,$
In each step, we compute
$a_{2}:=Sp_{2}=B^{*}A^{-1}Bp_{2}=B^{*}p_{1}$
and keep the intermediate result
$p_{1}:=A^{-1}Bp_{2}$
for later. The scaling factor is given by
$\alpha :=p_{2}^{*}r_{2}/p_{2}^{*}a_{2}$
and leads to the updates
$x_{2}:=x_{2}+\alpha p_{2},\quad r_{2}:=r_{2}-\alpha a_{2}.$
Using the intermediate result $p_{1}$ saved earlier, we can also update the upper part of the solution vector
$x_{1}:=x_{1}-\alpha p_{1}.\,$
Now we only have to construct the new search direction by the Gram–Schmidt process, i.e.,
$\beta :=r_{2}^{*}a_{2}/p_{2}^{*}a_{2},\quad p_{2}:=r_{2}-\beta p_{2}.$
The iteration terminates if the residual $r_{2}$ has become sufficiently small or if the norm of $p_{2}$ is significantly smaller than $r_{2}$ indicating that the Krylov subspace has been almost exhausted.
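The steps above can be sketched in pure Python for a small dense problem. This is an illustrative implementation, not a reference one: the action of $A^{-1}$ is supplied as a callback solve_A, and all helper names are made up:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(M, v):
    return [dot(row, v) for row in M]

def transpose(M):
    return [list(col) for col in zip(*M)]

def axpy(a, x, y):
    # elementwise a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def uzawa_cg(solve_A, B, b1, b2, x2, tol=1e-12, maxit=100):
    """Uzawa iteration with conjugate gradients on the Schur complement,
    following the update formulas in the text."""
    Bt = transpose(B)
    x1 = solve_A(axpy(-1.0, matvec(B, x2), b1))   # x1 = A^{-1}(b1 - B x2)
    r2 = axpy(-1.0, b2, matvec(Bt, x1))           # r2 = B* x1 - b2
    p2 = list(r2)
    for _ in range(maxit):
        if dot(r2, r2) <= tol * tol:
            break
        p1 = solve_A(matvec(B, p2))               # p1 = A^{-1} B p2
        a2 = matvec(Bt, p1)                       # a2 = S p2 = B* p1
        pa = dot(p2, a2)
        alpha = dot(p2, r2) / pa
        x2 = axpy(alpha, p2, x2)                  # x2 := x2 + alpha p2
        r2 = axpy(-alpha, a2, r2)                 # r2 := r2 - alpha a2
        x1 = axpy(-alpha, p1, x1)                 # x1 := x1 - alpha p1
        beta = dot(r2, a2) / pa
        p2 = axpy(-beta, p2, r2)                  # p2 := r2 - beta p2
    return x1, x2

# Example: A = 2*I (so applying A^{-1} is just halving), B = (1, 1)^T.
solve_A = lambda v: [vi / 2.0 for vi in v]
B = [[1.0], [1.0]]
x1, x2 = uzawa_cg(solve_A, B, b1=[3.0, 1.0], b2=[1.0], x2=[0.0])
print(x1, x2)   # [1.0, 0.0] [1.0]
```

The exact solution of this saddle point system is $x_{1}=(1,0)$, $x_{2}=1$, which the iteration reaches in a single conjugate gradient step since the Schur complement here is one-dimensional.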
Modifications and extensions
If solving the linear system $Ax=b$ exactly is not feasible, inexact solvers can be applied.[2][3][4]
If the Schur complement system is ill-conditioned, preconditioners can be employed to improve the speed of convergence of the underlying gradient method.[2][5]
Inequality constraints can be incorporated, e.g., in order to handle obstacle problems.[5]
References
1. Uzawa, H. (1958). "Iterative methods for concave programming". In Arrow, K. J.; Hurwicz, L.; Uzawa, H. (eds.). Studies in linear and nonlinear programming. Stanford University Press.
2. Elman, H. C.; Golub, G. H. (1994). "Inexact and preconditioned Uzawa algorithms for saddle point problems". SIAM J. Numer. Anal. 31 (6): 1645–1661. CiteSeerX 10.1.1.307.8178. doi:10.1137/0731085.
3. Bramble, J. H.; Pasciak, J. E.; Vassilev, A. T. (1997). "Analysis of the inexact Uzawa algorithm for saddle point problems". SIAM J. Numer. Anal. 34 (3): 1072–1982. CiteSeerX 10.1.1.52.9559. doi:10.1137/S0036142994273343.
4. Zulehner, W. (1998). "Analysis of iterative methods for saddle point problems. A unified approach". Math. Comp. 71 (238): 479–505. doi:10.1090/S0025-5718-01-01324-2.
5. Gräser, C.; Kornhuber, R. (2007). "On Preconditioned Uzawa-type Iterations for a Saddle Point Problem with Inequality Constraints". Domain Decomposition Methods in Science and Engineering XVI. Lec. Not. Comp. Sci. Eng. Vol. 55. pp. 91–102. CiteSeerX 10.1.1.72.9238. doi:10.1007/978-3-540-34469-8_8. ISBN 978-3-540-34468-1.
Further reading
• Chen, Zhangxin (2006). "Linear System Solution Techniques". Finite Element Methods and Their Applications. Berlin: Springer. pp. 145–154. ISBN 978-3-540-28078-1.
Convex polytope
A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the $n$-dimensional Euclidean space $\mathbb {R} ^{n}$. Most texts[1][2] use the term "polytope" for a bounded convex polytope, and the word "polyhedron" for the more general, possibly unbounded object. Others[3] (including this article) allow polytopes to be unbounded. The terms "bounded/unbounded convex polytope" will be used below whenever the boundedness is critical to the discussed issue. Yet other texts identify a convex polytope with its boundary.
Convex polytopes play an important role both in various branches of mathematics and in applied areas, most notably in linear programming.
In the influential textbooks of Grünbaum[1] and Ziegler[2] on the subject, as well as in many other texts in discrete geometry, convex polytopes are often simply called "polytopes". Grünbaum points out that this is solely to avoid the endless repetition of the word "convex", and that the discussion should throughout be understood as applying only to the convex variety (p. 51).
A polytope is called full-dimensional if it is an $n$-dimensional object in $\mathbb {R} ^{n}$.
Examples
• Many examples of bounded convex polytopes can be found in the article "polyhedron".
• In the 2-dimensional case the full-dimensional examples are a half-plane, a strip between two parallel lines, an angle shape (the intersection of two non-parallel half-planes), a shape defined by a convex polygonal chain with two rays attached to its ends, and a convex polygon.
• Special cases of an unbounded convex polytope are a slab between two parallel hyperplanes, a wedge defined by two non-parallel half-spaces, a polyhedral cylinder (infinite prism), and a polyhedral cone (infinite cone) defined by three or more half-spaces passing through a common point.
Definitions
A convex polytope may be defined in a number of ways, depending on what is more suitable for the problem at hand. Grünbaum's definition is in terms of a convex set of points in space. Other important definitions are: as the intersection of half-spaces (half-space representation) and as the convex hull of a set of points (vertex representation).
Vertex representation (convex hull)
In his book Convex Polytopes, Grünbaum defines a convex polytope as a compact convex set with a finite number of extreme points:
A set $K$ of $\mathbb {R} ^{n}$ is convex if, for each pair of distinct points $a$, $b$ in $K$, the closed segment with endpoints $a$ and $b$ is contained within $K$.
This is equivalent to defining a bounded convex polytope as the convex hull of a finite set of points, where the finite set must contain the set of extreme points of the polytope. Such a definition is called a vertex representation (V-representation or V-description).[1] For a compact convex polytope, the minimal V-description is unique and it is given by the set of the vertices of the polytope.[1] A convex polytope is called an integral polytope if all of its vertices have integer coordinates.
Intersection of half-spaces
A convex polytope may be defined as an intersection of a finite number of half-spaces. Such definition is called a half-space representation (H-representation or H-description).[1] There exist infinitely many H-descriptions of a convex polytope. However, for a full-dimensional convex polytope, the minimal H-description is in fact unique and is given by the set of the facet-defining halfspaces.[1]
A closed half-space can be written as a linear inequality:[1]
$a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}\leq b$
where $n$ is the dimension of the space containing the polytope under consideration. Hence, a closed convex polytope may be regarded as the set of solutions to the system of linear inequalities:
${\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;\leq \;&&&b_{1}\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;\leq \;&&&b_{2}\\\vdots \;\;\;&&&&\vdots \;\;\;&&&&\vdots \;\;\;&&&&&\;\vdots \\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;\leq \;&&&b_{m}\\\end{alignedat}}$
where $m$ is the number of half-spaces defining the polytope. This can be concisely written as the matrix inequality:
$Ax\leq b$
where $A$ is an $m\times n$ matrix, $x$ is an $n\times 1$ column vector whose coordinates are the variables $x_{1}$ to $x_{n}$, and $b$ is an $m\times 1$ column vector whose coordinates are the right-hand sides $b_{1}$ to $b_{m}$ of the scalar inequalities.
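Membership in an H-described polytope therefore reduces to checking each scalar inequality; a minimal sketch (Python, with a unit square as the example polytope):

```python
def in_polytope(A, b, x, tol=1e-9):
    """Check whether x satisfies the H-description Ax <= b
    (one row of A and one entry of b per half-space)."""
    return all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i + tol
               for row, b_i in zip(A, b))

# The unit square in R^2: -x1 <= 0, -x2 <= 0, x1 <= 1, x2 <= 1
A = [[-1, 0], [0, -1], [1, 0], [0, 1]]
b = [0, 0, 1, 1]
print(in_polytope(A, b, [0.5, 0.5]))  # True
print(in_polytope(A, b, [1.5, 0.5]))  # False: violates row x1 <= 1
```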
An open convex polytope is defined in the same way, with strict inequalities used in the formulas instead of the non-strict ones.
The coefficients of each row of $A$ and $b$ correspond with the coefficients of the linear inequality defining the respective half-space. Hence, each row in the matrix corresponds with a supporting hyperplane of the polytope, a hyperplane bounding a half-space that contains the polytope. If a supporting hyperplane also intersects the polytope, it is called a bounding hyperplane (since it is a supporting hyperplane, it can only intersect the polytope at the polytope's boundary).
The foregoing definition assumes that the polytope is full-dimensional. In this case, there is a unique minimal set of defining inequalities (up to multiplication by a positive number). Inequalities belonging to this unique minimal system are called essential. The set of points of a polytope which satisfy an essential inequality with equality is called a facet.
If the polytope is not full-dimensional, then the solutions of $Ax\leq b$ lie in a proper affine subspace of $\mathbb {R} ^{n}$ and the polytope can be studied as an object in this subspace. In this case, there exist linear equations which are satisfied by all points of the polytope. Adding one of these equations to any of the defining inequalities does not change the polytope. Therefore, in general there is no unique minimal set of inequalities defining the polytope.
In general the intersection of arbitrary half-spaces need not be bounded. However if one wishes to have a definition equivalent to that as a convex hull, then bounding must be explicitly required.
Using the different representations
The two representations together provide an efficient way to decide whether a given vector is included in a given convex polytope: to show that it is in the polytope, it is sufficient to present it as a convex combination of the polytope vertices (the V-description is used); to show that it is not in the polytope, it is sufficient to present a single defining inequality that it violates.[4]: 256
A subtle point in the representation by vectors is that the number of vectors may be exponential in the dimension, so the proof that a vector is in the polytope might be exponentially long. Fortunately, Carathéodory's theorem guarantees that every vector in the polytope can be represented by at most d+1 defining vectors, where d is the dimension of the space.
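In the plane, Carathéodory's bound d + 1 = 3 means a point of the polytope can always be written over just three affinely independent vertices; barycentric coordinates make this concrete (a sketch, Python):

```python
def barycentric(p, tri):
    """Coefficients expressing the 2-D point p over three affinely
    independent vertices; the coefficients sum to 1, and p lies in the
    convex hull of the three vertices iff all are non-negative."""
    (x1, y1), (x2, y2), (x3, y3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    l1 = ((y2 - y3) * (p[0] - x3) + (x3 - x2) * (p[1] - y3)) / det
    l2 = ((y3 - y1) * (p[0] - x3) + (x1 - x3) * (p[1] - y3)) / det
    return l1, l2, 1.0 - l1 - l2

tri = [(0, 0), (2, 0), (0, 2)]
coeffs = barycentric((0.5, 0.5), tri)
print(coeffs)                            # (0.5, 0.25, 0.25): inside the hull
print(min(barycentric((3, 3), tri)) < 0)  # True: a negative coefficient, outside
```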
Representation of unbounded polytopes
For an unbounded polytope (sometimes called: polyhedron), the H-description is still valid, but the V-description should be extended. Theodore Motzkin (1936) proved that any unbounded polytope can be represented as a sum of a bounded polytope and a convex polyhedral cone.[5] In other words, every vector in an unbounded polytope is a convex sum of its vertices (its "defining points"), plus a conical sum of the Euclidean vectors of its infinite edges (its "defining rays"). This is called the finite basis theorem.[3]
Properties
Every (bounded) convex polytope is the image of a simplex, as every point is a convex combination of the (finitely many) vertices. However, polytopes are not in general isomorphic to simplices. This is in contrast to the case of vector spaces and linear combinations, every finite-dimensional vector space being not only an image of, but in fact isomorphic to, Euclidean space of some dimension (or analog over other fields).
The face lattice
Main article: abstract polytope
A face of a convex polytope is any intersection of the polytope with a halfspace such that none of the interior points of the polytope lie on the boundary of the halfspace. Equivalently, a face is the set of points giving equality in some valid inequality of the polytope.[4]: 258
If a polytope is d-dimensional, its facets are its (d − 1)-dimensional faces, its vertices are its 0-dimensional faces, its edges are its 1-dimensional faces, and its ridges are its (d − 2)-dimensional faces.
Given a convex polytope P defined by the matrix inequality $Ax\leq b$, if each row in A corresponds with a bounding hyperplane and is linearly independent of the other rows, then each facet of P corresponds with exactly one row of A, and vice versa. Each point on a given facet will satisfy the linear equality of the corresponding row in the matrix. (It may or may not also satisfy equality in other rows). Similarly, each point on a ridge will satisfy equality in two of the rows of A.
In general, an (n − j)-dimensional face satisfies equality in j specific rows of A. These rows form a basis of the face. Geometrically speaking, this means that the face is the set of points on the polytope that lie in the intersection of j of the polytope's bounding hyperplanes.
The faces of a convex polytope thus form an Eulerian lattice called its face lattice, where the partial ordering is by set containment of faces. The definition of a face given above allows both the polytope itself and the empty set to be considered as faces, ensuring that every pair of faces has a join and a meet in the face lattice. The whole polytope is the unique maximum element of the lattice, and the empty set, considered to be a (−1)-dimensional face (a null polytope) of every polytope, is the unique minimum element of the lattice.
Two polytopes are called combinatorially isomorphic if their face lattices are isomorphic.
The polytope graph (polytopal graph, graph of the polytope, 1-skeleton) is the set of vertices and edges of the polytope only, ignoring higher-dimensional faces. For instance, a polyhedral graph is the polytope graph of a three-dimensional polytope. By a result of Whitney[6] the face lattice of a three-dimensional polytope is determined by its graph. The same is true for simple polytopes of arbitrary dimension (Blind & Mani-Levitska 1987, proving a conjecture of Micha Perles).[7] Kalai (1988)[8] gives a simple proof based on unique sink orientations. Because these polytopes' face lattices are determined by their graphs, the problem of deciding whether two three-dimensional or simple convex polytopes are combinatorially isomorphic can be formulated equivalently as a special case of the graph isomorphism problem. However, it is also possible to translate these problems in the opposite direction, showing that polytope isomorphism testing is graph-isomorphism complete.[9]
Topological properties
A convex polytope, like any compact convex subset of Rn, is homeomorphic to a closed ball.[10] Let m denote the dimension of the polytope. If the polytope is full-dimensional, then m = n. The convex polytope therefore is an m-dimensional manifold with boundary, its Euler characteristic is 1, and its fundamental group is trivial. The boundary of the convex polytope is homeomorphic to an (m − 1)-sphere. The boundary's Euler characteristic is 0 for even m and 2 for odd m. The boundary may also be regarded as a tessellation of (m − 1)-dimensional spherical space — i.e. as a spherical tiling.
Simplicial decomposition
A convex polytope can be decomposed into a simplicial complex, or union of simplices, satisfying certain properties.
Given a convex r-dimensional polytope P, a subset of its vertices containing (r+1) affinely independent points defines an r-simplex. It is possible to form a collection of subsets such that the union of the corresponding simplices is equal to P, and the intersection of any two simplices is either empty or a lower-dimensional simplex. This simplicial decomposition is the basis of many methods for computing the volume of a convex polytope, since the volume of a simplex is easily given by a formula.[11]
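For a convex polygon given by its ordered vertices, the simplest such decomposition is a fan of triangles from one vertex; summing the simplex areas gives the volume (a 2-D sketch, Python):

```python
def volume_by_fan(vertices):
    """Simplicial decomposition of a convex polygon (vertices listed in
    order around the boundary): triangles fan out from the first vertex,
    and each simplex area is |det(q - p, r - p)| / 2."""
    p = vertices[0]
    total = 0.0
    for q, r in zip(vertices[1:], vertices[2:]):
        total += abs((q[0] - p[0]) * (r[1] - p[1])
                     - (q[1] - p[1]) * (r[0] - p[0])) / 2.0
    return total

print(volume_by_fan([(0, 0), (1, 0), (1, 1), (0, 1)]))  # 1.0 (unit square)
print(volume_by_fan([(0, 0), (4, 0), (0, 3)]))          # 6.0 (single simplex)
```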
Algorithmic problems for a convex polytope
Construction of representations
Different representations of a convex polytope have different utility, therefore the construction of one representation given another one is an important problem. The problem of the construction of a V-representation is known as the vertex enumeration problem and the problem of the construction of a H-representation is known as the facet enumeration problem. While the vertex set of a bounded convex polytope uniquely defines it, in various applications it is important to know more about the combinatorial structure of the polytope, i.e., about its face lattice. Various convex hull algorithms deal both with the facet enumeration and face lattice construction.
In the planar case, i.e., for a convex polygon, both facet and vertex enumeration problems amount to ordering the vertices (resp. edges) around the convex hull. It is a trivial task when the convex polygon is specified in a traditional way for polygons, i.e., by the ordered sequence of its vertices $v_{1},\dots ,v_{m}$. When the input list of vertices (or edges) is unordered, the time complexity of the problems becomes O(m log m).[12] A matching lower bound is known in the algebraic decision tree model of computation.[13]
Volume computation
The task of computing the volume of a convex polytope has been studied in the field of computational geometry. The volume can be computed approximately, for instance, using the convex volume approximation technique, when having access to a membership oracle. As for exact computation, one obstacle is that, when given a representation of the convex polytope as an equation system of linear inequalities, the volume of the polytope may have a bit-length which is not polynomial in this representation.[14]
See also
• Oriented matroid
• Nef polyhedron
• Steinitz's theorem for convex polyhedra
References
1. Branko Grünbaum, Convex Polytopes, 2nd edition, prepared by Volker Kaibel, Victor Klee, and Günter M. Ziegler, 2003, ISBN 0-387-40409-0, ISBN 978-0-387-40409-7, 466pp.
2. Ziegler, Günter M. (1995), Lectures on Polytopes, Graduate Texts in Mathematics, vol. 152, Berlin, New York: Springer-Verlag.
3. Mathematical Programming, by Melvyn W. Jeter (1986) ISBN 0-8247-7478-7, p. 68
4. Lovász, László; Plummer, M. D. (1986), Matching Theory, Annals of Discrete Mathematics, vol. 29, North-Holland, ISBN 0-444-87916-1, MR 0859549
5. Motzkin, Theodore (1936). Beitrage zur Theorie der linearen Ungleichungen (Ph.D. dissertation). Jerusalem.{{cite book}}: CS1 maint: location missing publisher (link)
6. Whitney, Hassler (1932). "Congruent graphs and the connectivity of graphs". Amer. J. Math. 54 (1): 150–168. doi:10.2307/2371086. hdl:10338.dmlcz/101067. JSTOR 2371086.
7. Blind, Roswitha; Mani-Levitska, Peter (1987), "Puzzles and polytope isomorphisms", Aequationes Mathematicae, 34 (2–3): 287–297, doi:10.1007/BF01830678, MR 0921106.
8. Kalai, Gil (1988), "A simple way to tell a simple polytope from its graph", Journal of Combinatorial Theory, Ser. A, 49 (2): 381–383, doi:10.1016/0097-3165(88)90064-7, MR 0964396.
9. Kaibel, Volker; Schwartz, Alexander (2003). "On the Complexity of Polytope Isomorphism Problems". Graphs and Combinatorics. 19 (2): 215–230. arXiv:math/0106093. doi:10.1007/s00373-002-0503-y. Archived from the original on 2015-07-21.
10. Glen E. Bredon, Topology and Geometry, 1993, ISBN 0-387-97926-3, p. 56.
11. Büeler, B.; Enge, A.; Fukuda, K. (2000). "Exact Volume Computation for Polytopes: A Practical Study". Polytopes — Combinatorics and Computation. p. 131. doi:10.1007/978-3-0348-8438-9_6. ISBN 978-3-7643-6351-2.
12. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "33.3 Finding the convex hull". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 947–957. ISBN 0-262-03293-7.
13. Yao, Andrew Chi Chih (1981), "A lower bound to finding convex hulls", Journal of the ACM, 28 (4): 780–787, doi:10.1145/322276.322289, MR 0677089; Ben-Or, Michael (1983), "Lower Bounds for Algebraic Computation Trees", Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing (STOC '83), pp. 80–86, doi:10.1145/800061.808735.
14. Lawrence, Jim (1991). "Polytope volume computation". Mathematics of Computation. 57 (195): 259–271. doi:10.1090/S0025-5718-1991-1079024-2. ISSN 0025-5718.
External links
• Weisstein, Eric W. "Convex polygon". MathWorld.
• Weisstein, Eric W. "Convex polyhedron". MathWorld.
• Komei Fukuda, Polyhedral computation FAQ.
|
V-ring (ring theory)
In mathematics, a V-ring is a ring R such that every simple R-module is injective. The following three conditions are equivalent:[1]
1. Every simple left (resp. right) R-module is injective
2. The radical of every left (resp. right) R-module is zero
3. Every left (resp. right) ideal of R is an intersection of maximal left (resp. right) ideals of R
A commutative ring is a V-ring if and only if it is von Neumann regular.[2]
References
1. Faith, Carl (1973). Algebra: Rings, modules, and categories. Springer-Verlag. ISBN 978-0387055510. Retrieved 24 October 2015.
2. Michler, G.O.; Villamayor, O.E. (April 1973). "On rings whose simple modules are injective". Journal of Algebra. 25 (1): 185–201. doi:10.1016/0021-8693(73)90088-4.
|
Victor-Amédée Lebesgue
Victor-Amédée Lebesgue, sometimes written Le Besgue, (2 October 1791, Grandvilliers (Oise) – 10 June 1875, Bordeaux (Gironde)) was a mathematician working on number theory. He was elected a member of the Académie des sciences in 1847.
For the analyst, see Henri Lebesgue.
Victor-Amédée Lebesgue
Born: 2 October 1791
Grandvilliers, France
Died: 10 June 1875 (aged 83)
Bordeaux, France
Scientific career
Fields: Mathematics
See also
• Catalan's conjecture
• Proof of Fermat's Last Theorem for specific exponents
• Lebesgue–Nagell type equations
Publications
• Lebesgue, Victor-Amédée (1837), Thèses de mécanique et d'astronomie
• Lebesgue, Victor-Amédée (1859), Exercices d'analyse numérique
• Lebesgue, Victor-Amédée (1862), Introduction à la théorie des nombres, Paris{{citation}}: CS1 maint: location missing publisher (link)
• Lebesgue, Victor Amédée (1864), Tables diverses pour la décomposition des nombres en leurs facteurs premiers
References
• Abria, O.; Hoüel, J. (1876), "Notice sur la vie et les travaux de Victor Amédée Le Besgue", Bullettino di Bibliografia e di Storia delle Scienze Matematiche e Fisiche, IX: 554–594
• LEBESGUE, Victor Amédée
|
V. J. Havel
Václav Jaromír Havel is a Czech mathematician. He is known for characterizing the degree sequences of undirected graphs and for the Havel–Hakimi algorithm, an important contribution to graph theory.[1]
Selected publications
• Havel, Václav (1955), "A remark on the existence of finite graphs", Časopis pro pěstování matematiky (in Czech), 80 (4): 477–480, doi:10.21136/CPM.1955.108220
References
1. Allenby, R.B.J.T.; Slomson, Alan (2011), "Theorem 9.3: the Havel–Hakimi theorem", How to Count: An Introduction to Combinatorics, Discrete Mathematics and Its Applications (2nd ed.), CRC Press, p. 159, ISBN 9781420082616, A proof of this theorem was first published by Václav Havel ... in 1963 another proof was published independently by S. L. Hakimi.
|
Ordinal definable set
In mathematical set theory, a set S is said to be ordinal definable if, informally, it can be defined in terms of a finite number of ordinals by a first-order formula. Ordinal definable sets were introduced by Gödel (1965).
A drawback to this informal definition is that it requires quantification over all first-order formulas, which cannot be formalized in the language of set theory. However there is a different way of stating the definition that can be so formalized. In this approach, a set S is formally defined to be ordinal definable if there is some collection of ordinals α1, ..., αn such that $S\in V_{\alpha _{1}}$ and $S$ can be defined as an element of $V_{\alpha _{1}}$ by a first-order formula φ taking α2, ..., αn as parameters. Here $V_{{\alpha }_{1}}$ denotes the set indexed by the ordinal α1 in the von Neumann hierarchy. In other words, S is the unique object such that φ(S, α2...αn) holds with its quantifiers ranging over $V_{\alpha _{1}}$.
The class of all ordinal definable sets is denoted OD; it is not necessarily transitive, and need not be a model of ZFC because it might not satisfy the axiom of extensionality. A set is hereditarily ordinal definable if it is ordinal definable and all elements of its transitive closure are ordinal definable. The class of hereditarily ordinal definable sets is denoted by HOD, and is a transitive model of ZFC, with a definable well ordering. It is consistent with the axioms of set theory that all sets are ordinal definable, and so hereditarily ordinal definable. The assertion that this situation holds is referred to as V = OD or V = HOD. It follows from V = L, and is equivalent to the existence of a (definable) well-ordering of the universe. Note however that the formula expressing V = HOD need not hold true within HOD, as it is not absolute for models of set theory: within HOD, the interpretation of the formula for HOD may yield an even smaller inner model.
HOD has been found to be useful in that it is an inner model that can accommodate essentially all known large cardinals. This is in contrast with the situation for core models, as core models have not yet been constructed that can accommodate supercompact cardinals, for example.
References
• Gödel, Kurt (1965) [1946], "Remarks before the Princeton Bicentennial Conference on Problems in Mathematics", in Davis, Martin (ed.), The undecidable. Basic papers on undecidable propositions, unsolvable problems and computable functions, Raven Press, Hewlett, N.Y., pp. 84–88, ISBN 978-0-486-43228-1, MR 0189996
• Kunen, Kenneth (1980), Set theory: An introduction to independence proofs, Elsevier, ISBN 978-0-444-86839-8
|
VEGAS algorithm
The VEGAS algorithm, due to G. Peter Lepage,[1][2][3] is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the integrand that make the greatest contribution to the final integral.
The VEGAS algorithm is based on importance sampling. It samples points from the probability distribution described by the function $|f|,$ so that the points are concentrated in the regions that make the largest contribution to the integral. The GNU Scientific Library (GSL) provides a VEGAS routine.
Sampling method
In general, if the Monte Carlo integral of $f$ over a volume $\Omega $ is sampled with points distributed according to a probability distribution described by the function $g,$ we obtain an estimate $\mathrm {E} _{g}(f;N),$
$\mathrm {E} _{g}(f;N)={1 \over N}\sum _{i}^{N}{f(x_{i})}/g(x_{i}).$
The variance of the new estimate is then
$\mathrm {Var} _{g}(f;N)=\mathrm {Var} (f/g;N)$
where $\mathrm {Var} (f;N)$ is the variance of the original estimate, $\mathrm {Var} (f;N)=\mathrm {E} (f^{2};N)-(\mathrm {E} (f;N))^{2}.$
If the probability distribution is chosen as $g=|f|/\textstyle \int _{\Omega }|f(x)|dx$ then it can be shown that the variance $\mathrm {Var} _{g}(f;N)$ vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution.
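The variance-reduction effect can be seen in a small sketch: for $f(x)=2x$ on $[0,1]$, sampling from the density $g=|f|/\textstyle \int |f|$ (here simply $g(x)=2x$, drawn by inverse-CDF sampling) makes $f/g$ constant, so the importance-sampling estimate has zero variance. The toy integrand and function names are illustrative, not part of VEGAS itself:

```python
import random

def importance_estimate(f, g, sample_g, n):
    """Monte Carlo estimate E_g(f; N) = (1/N) * sum f(x_i)/g(x_i),
    where the points x_i are drawn from the density g."""
    return sum(f(x) / g(x) for x in (sample_g() for _ in range(n))) / n

random.seed(0)
f = lambda x: 2 * x                         # integral of f over [0, 1] is 1
g = lambda x: 2 * x                         # density proportional to |f|
sample_g = lambda: random.random() ** 0.5   # inverse-CDF sampling from g

est = importance_estimate(f, g, sample_g, 1000)
# With g proportional to |f|, each term f(x)/g(x) equals 1, so est is
# exactly the true integral regardless of the sample: zero variance.
```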
Approximation of probability distribution
The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region while histogramming the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like $K^{d}$ with dimension d, the probability distribution is approximated by a separable function: $g(x_{1},x_{2},\ldots )=g_{1}(x_{1})g_{2}(x_{2})\cdots $ so that the number of bins required is only Kd (K bins along each of the d axes). This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS.
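The grid-refinement idea can be sketched in one dimension: histogram $|f|$ under the current sampling density, normalize, and use the result as the piecewise-constant density for the next pass. This is a deliberately simplified illustration (fixed-width bins with variable probabilities rather than VEGAS's variable-width bins, and no damping), not Lepage's actual algorithm:

```python
import random

def vegas_like_density(f, n_bins=10, n_probe=10000, n_passes=3):
    """Toy 1-D grid refinement on [0, 1]: in each pass, sample bins
    from the current per-bin probabilities, accumulate importance
    weights |f(x)| / g(x) per bin, then normalize the weights into the
    per-bin probabilities for the next pass."""
    probs = [1.0 / n_bins] * n_bins          # start from the uniform grid
    for _ in range(n_passes):
        weights = [0.0] * n_bins
        for _ in range(n_probe):
            b = random.choices(range(n_bins), probs)[0]
            x = (b + random.random()) / n_bins
            # Density in bin b is probs[b] / (bin width) = probs[b] * n_bins.
            weights[b] += abs(f(x)) / (probs[b] * n_bins)
        total = sum(weights)
        probs = [w / total for w in weights]
    return probs

random.seed(1)
probs = vegas_like_density(lambda x: x * x)
# For f(x) = x^2 the mass piles up near x = 1, so the last bin ends up
# with the largest probability.
```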
See also
• Las Vegas algorithm
• Monte Carlo integration
• Importance sampling
References
1. Lepage, G.P. (May 1978). "A New Algorithm for Adaptive Multidimensional Integration". Journal of Computational Physics. 27 (2): 192–203. Bibcode:1978JCoPh..27..192L. doi:10.1016/0021-9991(78)90004-9.
2. Lepage, G.P. (March 1980). "VEGAS: An Adaptive Multi-dimensional Integration Program". Cornell Preprint. CLNS 80-447.
3. Ohl, T. (July 1999). "Vegas revisited: Adaptive Monte Carlo integration beyond factorization". Computer Physics Communications. 120 (1): 13–19. arXiv:hep-ph/9806432. Bibcode:1999CoPhC.120...13O. doi:10.1016/S0010-4655(99)00209-X. S2CID 18194240.
|
VIKOR method
The VIKOR method is a multi-criteria decision making (MCDM) or multi-criteria decision analysis method. It was originally developed by Serafim Opricovic to solve decision problems with conflicting and noncommensurable (different units) criteria, assuming that compromise is acceptable for conflict resolution, that the decision maker wants a solution that is the closest to the ideal, and that the alternatives are evaluated according to all established criteria. VIKOR ranks alternatives and determines the compromise solution, that is, the one closest to the ideal.
The idea of compromise solution was introduced in MCDM by Po-Lung Yu in 1973,[1] and by Milan Zeleny.[2]
S. Opricovic developed the basic ideas of VIKOR in his Ph.D. dissertation in 1979, and an application was published in 1980.[3] The name VIKOR appeared in 1990,[4] an acronym of the Serbian VIseKriterijumska Optimizacija I Kompromisno Resenje (Multicriteria Optimization and Compromise Solution), pronounced "vikor". Real applications were presented in 1998.[5] A 2004 paper contributed to the international recognition of the VIKOR method[6] (the most cited paper in the field of Economics according to Science Watch, April 2009).
The MCDM problem is stated as follows: Determine the best (compromise) solution in multicriteria sense from the set of J feasible alternatives A1, A2, ...AJ, evaluated according to the set of n criterion functions. The input data are the elements fij of the performance (decision) matrix, where fij is the value of the i-th criterion function for the alternative Aj.
VIKOR method steps
The VIKOR procedure has the following steps:
Step 1. Determine the best fi* and the worst fi^ values of all criterion functions, i = 1,2,...,n; fi* = max (fij,j=1,...,J), fi^ = min (fij,j=1,...,J), if the i-th function is benefit; fi* = min (fij,j=1,...,J), fi^ = max (fij,j=1,...,J), if the i-th function is cost.
Step 2. Compute the values Sj and Rj, j=1,2,...,J, by the relations: Sj=sum[wi(fi* - fij)/(fi*-fi^),i=1,...,n], weighted and normalized Manhattan distance; Rj=max[wi(fi* - fij)/(fi*-fi^),i=1,...,n], weighted and normalized Chebyshev distance; where wi are the weights of criteria, expressing the DM's preference as the relative importance of the criteria.
Step 3. Compute the values Qj, j=1,2,...,J, by the relation Qj = v(Sj - S*)/(S^ - S*) + (1-v)(Rj - R*)/(R^ - R*), where S* = min (Sj, j=1,...,J), S^ = max (Sj, j=1,...,J), R* = min (Rj, j=1,...,J), R^ = max (Rj, j=1,...,J); v is introduced as a weight for the strategy of maximum group utility, whereas 1-v is the weight of the individual regret. These strategies can be balanced by v = 0.5, and here v is modified as v = (n + 1)/2n (from v + 0.5(n-1)/n = 1), since the criterion (1 of n) related to R is included in S, too.
Step 4. Rank the alternatives, sorting by the values S, R and Q, from the minimum value. The results are three ranking lists.
Step 5. Propose as a compromise solution the alternative A(1) which is best ranked by the measure Q (minimum) if the following two conditions are satisfied: C1. “Acceptable advantage”: Q(A(2)) - Q(A(1)) >= DQ, where A(2) is the alternative in second position in the ranking list by Q and DQ = 1/(J-1). C2. “Acceptable stability in decision making”: the alternative A(1) must also be best ranked by S and/or R. This compromise solution is stable within a decision-making process, which could be the strategy of maximum group utility (when v > 0.5 is needed), “by consensus” (v about 0.5), or “with veto” (v < 0.5). If one of the conditions is not satisfied, then a set of compromise solutions is proposed, consisting of: alternatives A(1) and A(2) if only condition C2 is not satisfied, or alternatives A(1), A(2), ..., A(M) if condition C1 is not satisfied, where A(M) is determined by the relation Q(A(M)) - Q(A(1)) < DQ for maximum M (the positions of these alternatives are “in closeness”).
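Steps 1–3 can be sketched as follows for benefit criteria (cost criteria would swap the best and worst values). The performance matrix and weights below are made up purely for illustration:

```python
def vikor(F, weights, v=0.5):
    """VIKOR Steps 1-3 for a J x n performance matrix F (rows =
    alternatives, columns = benefit criteria). Returns S, R, Q."""
    J, n = len(F), len(F[0])
    f_best = [max(F[j][i] for j in range(J)) for i in range(n)]   # fi*
    f_worst = [min(F[j][i] for j in range(J)) for i in range(n)]  # fi^
    S, R = [], []
    for j in range(J):
        d = [weights[i] * (f_best[i] - F[j][i]) / (f_best[i] - f_worst[i])
             for i in range(n)]
        S.append(sum(d))  # weighted, normalized Manhattan distance
        R.append(max(d))  # weighted, normalized Chebyshev distance
    s_min, s_max, r_min, r_max = min(S), max(S), min(R), max(R)
    Q = [v * (S[j] - s_min) / (s_max - s_min)
         + (1 - v) * (R[j] - r_min) / (r_max - r_min) for j in range(J)]
    return S, R, Q

# Three alternatives, two benefit criteria, equal weights.
F = [[9, 8], [7, 9], [6, 5]]
S, R, Q = vikor(F, [0.5, 0.5])
best = min(range(len(Q)), key=Q.__getitem__)  # alternative A(1)
```

Step 4 is then just sorting the alternatives by S, R and Q, and Step 5 checks the advantage and stability conditions on the Q-ranking.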
The obtained compromise solution could be accepted by the decision makers because it provides a maximum utility of the majority (represented by min S), and a minimum individual regret of the opponent (represented by min R). The measures S and R are integrated into Q for compromise solution, the base for an agreement established by mutual concessions.
Comparative analysis
A comparative analysis of the MCDM methods VIKOR, TOPSIS, ELECTRE and PROMETHEE was presented in a 2007 paper, through a discussion of their distinctive features and their application results.[7] Sayadi et al. extended the VIKOR method for decision making with interval data.[8] Heydari et al. extended this method for solving Multiple Objective Large-Scale Nonlinear Programming problems.[9]
Fuzzy VIKOR method
The Fuzzy VIKOR method has been developed to solve problems in a fuzzy environment where both criteria and weights may be fuzzy sets. Triangular fuzzy numbers are used to handle imprecise numerical quantities. Fuzzy VIKOR is based on an aggregating fuzzy merit that represents the distance of an alternative to the ideal solution. Fuzzy operations and procedures for ranking fuzzy numbers are used in developing the fuzzy VIKOR algorithm.[10]
See also
• Rank reversals in decision-making
• Multi-criteria decision analysis
• Ordinal Priority Approach
• Pairwise comparison
References
1. Po Lung Yu (1973) "A Class of Solutions for Group Decision Problems", Management Science, 19(8), 936–946.
2. Milan Zelrny (1973) "Compromise Programming", in Cochrane J.L. and M.Zeleny (Eds.), Multiple Criteria Decision Making, University of South Carolina Press, Columbia.
3. Lucien Duckstein and Serafim Opricovic (1980) "Multiobjective Optimization in River Basin Development", Water Resources Research, 16(1), 14–20.
4. Serafim Opricović., (1990) "Programski paket VIKOR za visekriterijumsko kompromisno rangiranje", SYM-OP-IS
5. Serafim Opricovic (1998) “Multicriteria Optimization in Civil Engineering" (in Serbian), Faculty of Civil Engineering, Belgrade, 302 p. ISBN 86-80049-82-4.
6. Serafim Opricovic and Gwo-Hshiung Tzeng (2004) "The Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS", European Journal of Operational Research, 156(2), 445–455.
7. Serafim Opricovic and Gwo-Hshiung Tzeng (2007) "Extended VIKOR Method in Comparison with Outranking Methods", European Journal of Operational Research, Vol. 178, No 2, pp. 514–529.
8. Sayadi, Mohammad Kazem; Heydari, Majeed; Shahanaghi, Kamran (2009). "Extension of VIKOR method for decision making problem with interval numbers". Applied Mathematical Modelling. 33 (5): 2257–2262. doi:10.1016/j.apm.2008.06.002.
9. Heydari, Majeed; Kazem Sayadi, Mohammad; Shahanaghi, Kamran (2010). "Extended VIKOR as a new method for solving Multiple Objective Large-Scale Nonlinear Programming problems". Rairo - Operations Research. 44 (2): 139–152. doi:10.1051/ro/2010011.
10. Serafim Opricovic (2011) "Fuzzy VIKOR with an application to water resources planning", Expert Systems with Applications 38, pp. 12983–12990.
|
Vanishing scalar invariant spacetime
In mathematical physics, vanishing scalar invariant (VSI) spacetimes are Lorentzian manifolds with all polynomial curvature invariants of all orders vanishing. Although the only Riemannian manifold with VSI property is flat space, the Lorentzian case admits nontrivial spacetimes with this property. Distinguishing these VSI spacetimes from Minkowski spacetime requires comparing non-polynomial invariants[1] or carrying out the full Cartan–Karlhede algorithm on non-scalar quantities.[2][3]
All VSI spacetimes are Kundt spacetimes.[4] An example with this property in four dimensions is a pp-wave. VSI spacetimes however also contain some other four-dimensional Kundt spacetimes of Petrov type N and III. VSI spacetimes in higher dimensions have similar properties as in the four-dimensional case.[5][6]
References
1. Page, Don N. (2009), "Nonvanishing Local Scalar Invariants even in VSI Spacetimes with all Polynomial Curvature Scalar Invariants Vanishing", Classical and Quantum Gravity, 26 (5): 055016, arXiv:0806.2144, Bibcode:2009CQGra..26e5016P, doi:10.1088/0264-9381/26/5/055016, S2CID 118331266
2. Koutras, A. (1992), "A spacetime for which the Karlhede invariant classification requires the fourth covariant derivative of the Riemann tensor", Classical and Quantum Gravity, 9 (10): L143, Bibcode:1992CQGra...9L.143K, doi:10.1088/0264-9381/9/10/003
3. Koutras, A.; McIntosh, C. (1996), "A metric with no symmetries or invariants", Classical and Quantum Gravity, 13 (5): L47, Bibcode:1996CQGra..13L..47K, doi:10.1088/0264-9381/13/5/002
4. Pravda, V.; Pravdova, A.; Coley, A.; Milson, R. (2002), "All spacetimes with vanishing curvature invariants", Classical and Quantum Gravity, 19 (23): 6213–6236, arXiv:gr-qc/0209024, Bibcode:2002CQGra..19.6213P, doi:10.1088/0264-9381/19/23/318, S2CID 11958495
5. Coley, A.; Milson, R.; Pravda, V.; Pravdova, A. (2004), "Vanishing Scalar Invariant Spacetimes in Higher Dimensions", Classical and Quantum Gravity, 21 (23): 5519–5542, arXiv:gr-qc/0410070, Bibcode:2004CQGra..21.5519C, doi:10.1088/0264-9381/21/23/014, S2CID 17036677.
6. Coley, A.; Fuster, A.; Hervik, S.; Pelavas, N. (2006), "Higher dimensional VSI spacetimes", Classical and Quantum Gravity, 23 (24): 7431–7444, arXiv:gr-qc/0611019, Bibcode:2006CQGra..23.7431C, doi:10.1088/0264-9381/23/24/014, S2CID 85442360
|
Vacuous truth
In mathematics and logic, a vacuous truth is a conditional or universal statement (a universal statement that can be converted to a conditional statement) that is true because the antecedent cannot be satisfied.[1] It is sometimes said that a statement is vacuously true because it does not really say anything.[2] For example, the statement "all cell phones in the room are turned off" will be true when no cell phones are in the room. In this case, the statement "all cell phones in the room are turned on" would also be vacuously true, as would the conjunction of the two: "all cell phones in the room are turned on and turned off", which would otherwise be incoherent and false.
More formally, a relatively well-defined usage refers to a conditional statement (or a universal conditional statement) with a false antecedent.[1][3][2][4] One example of such a statement is "if Tokyo is in France, then the Eiffel Tower is in Bolivia".
Such statements are considered vacuous truths, because the fact that the antecedent is false prevents using the statement to infer anything about the truth value of the consequent. In essence, a conditional statement, that is based on the material conditional, is true when the antecedent ("Tokyo is in France" in the example) is false regardless of whether the conclusion or consequent ("the Eiffel Tower is in Bolivia" in the example) is true or false because the material conditional is defined in that way.
Examples common to everyday speech include conditional phrases used as idioms of improbability like "when hell freezes over..." and "when pigs can fly...", indicating that not before the given (impossible) condition is met will the speaker accept some respective (typically false or absurd) proposition.
In pure mathematics, vacuously true statements are not generally of interest by themselves, but they frequently arise as the base case of proofs by mathematical induction.[5] This notion has relevance in pure mathematics, as well as in any other field that uses classical logic.
Outside of mathematics, statements which can be characterized informally as vacuously true can be misleading. Such statements make reasonable assertions about qualified objects which do not actually exist. For example, a child might truthfully tell their parent "I ate every vegetable on my plate", when there were no vegetables on the child's plate to begin with. In this case, the parent can believe that the child has actually eaten some vegetables, even though that is not true. In addition, a vacuous truth is often used colloquially with absurd statements, either to confidently assert something (e.g. "the dog was red, or I'm a monkey's uncle" to strongly claim that the dog was red), or to express doubt, sarcasm, disbelief, incredulity or indignation (e.g. "yes, and I'm the King of England" to disagree with a previously made statement).
Scope of the concept
A statement $S$ is "vacuously true" if it resembles a material conditional statement $P\Rightarrow Q$, where the antecedent $P$ is known to be false.[1][3][2]
Vacuously true statements that can be reduced (with suitable transformations) to this basic form (material conditional) include the following universally quantified statements:
• $\forall x:P(x)\Rightarrow Q(x)$, where it is the case that $\forall x:\neg P(x)$.[4]
• $\forall x\in A:Q(x)$, where the set $A$ is empty.
• This logical form $\forall x\in A:Q(x)$ can be converted to the material conditional form in order to easily identify the antecedent. For the above example $S$ "all cell phones in the room are turned off", it can be formally written as $\forall x\in A:Q(x)$ where $A$ is the set of all cell phones in the room and $Q(x)$ is "$x$ is turned off". This can be written to a material conditional statement $\forall x\in B:P(x)\Rightarrow Q(x)$ where $B$ is the set of all things in the room (including cell phones if they exist in the room), the antecedent $P(x)$ is "$x$ is a cell phone", and the consequent $Q(x)$ is "$x$ is turned off".
• $\forall \xi :Q(\xi )$, where the symbol $\xi $ is restricted to a type that has no representatives.
Vacuous truths most commonly appear in classical logic with two truth values. However, vacuous truths can also appear in, for example, intuitionistic logic, in the same situations as given above. Indeed, if $P$ is false, then $P\Rightarrow Q$ will yield a vacuous truth in any logic that uses the material conditional; if $P$ is a necessary falsehood, then it will also yield a vacuous truth under the strict conditional.
Other non-classical logics, such as relevance logic, may attempt to avoid vacuous truths by using alternative conditionals (such as the case of the counterfactual conditional).
In computer programming
Many programming environments have a mechanism for querying if every item in a collection of items satisfies some predicate. It is common for such a query to always evaluate as true for an empty collection. For example:
• In JavaScript, the array method every executes a provided callback function once for each element present in the array, only stopping (if and when) it finds an element where the callback function returns false. Notably, calling the every method on an empty array will return true for any condition.[6]
• In Python, the all function returns True if all of the elements of the given iterable are True. The function also returns True when given an iterable of zero length.[7]
• In Rust, the Iterator::all function accepts an iterator and a predicate and returns true only when the predicate returns true for all items produced by the iterator, or if the iterator produces no items.[8]
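The behaviour is easy to check directly. In Python, for instance, a universally quantified claim over an empty collection is vacuously true, while the corresponding existential claim is false:

```python
# No cell phones in the room: the collection is empty.
phones = []

all_off = all(p == "off" for p in phones)  # vacuously True
all_on = all(p == "on" for p in phones)    # also vacuously True
any_on = any(p == "on" for p in phones)    # False: no witness exists
```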
Examples
These examples, one from mathematics and one from natural language, illustrate the concept of vacuous truths:
• "For any integer x, if x > 5 then x > 3."[9] – This statement is true non-vacuously (since some integers are indeed greater than 5), but some of its implications are only vacuously true: for example, when x is the integer 2, the statement implies the vacuous truth that "if 2 > 5 then 2 > 3".
• "All my children are goats" is a vacuous truth, when spoken by someone without children. Similarly, "None of my children is a goat" would also be a vacuous truth, when spoken by the same person.
See also
• De Morgan's laws – specifically the law that a universal statement is true just in case no counterexample exists: $\forall x\,P(x)\equiv \neg \exists x\,\neg P(x)$
• Empty sum and empty product
• Empty function
• Paradoxes of material implication, especially the principle of explosion
• Presupposition, double question
• State of affairs (philosophy)
• Tautology (logic) – another type of true statement that also fails to convey any substantive information
• Triviality (mathematics) and degeneracy (mathematics)
References
1. "Vacuously true". web.cse.ohio-state.edu. Retrieved 2019-12-15.
2. "Vacuously true - CS2800 wiki". courses.cs.cornell.edu. Retrieved 2019-12-15.
3. "Definition:Vacuous Truth - ProofWiki". proofwiki.org. Retrieved 2019-12-15.
4. Edwards, C. H. (January 18, 1998). "Vacuously True" (PDF). swarthmore.edu. Retrieved 2019-12-14.
5. Baldwin, Douglas L.; Scragg, Greg W. (2011), Algorithms and Data Structures: The Science of Computing, Cengage Learning, p. 261, ISBN 978-1-285-22512-8
6. "Array.prototype.every() - JavaScript | MDN". developer.mozilla.org.
7. "Built-in Functions — Python 3.10.2 documentation". docs.python.org.
8. "Iterator in std::iter - Rust". doc.rust-lang.org.
9. "logic - What precisely is a vacuous truth?". Mathematics Stack Exchange.
Bibliography
• Blackburn, Simon (1994). "vacuous," The Oxford Dictionary of Philosophy. Oxford: Oxford University Press, p. 388.
• David H. Sanford (1999). "implication." The Cambridge Dictionary of Philosophy, 2nd. ed., p. 420.
• Beer, Ilan; Ben-David, Shoham; Eisner, Cindy; Rodeh, Yoav (1997). "Efficient Detection of Vacuity in ACTL Formulas". Computer Aided Verification: 9th International Conference, CAV'97 Haifa, Israel, June 22–25, 1997, Proceedings. Lecture Notes in Computer Science. Vol. 1254. pp. 279–290. doi:10.1007/3-540-63166-6_28. ISBN 978-3-540-63166-8.
External links
• Conditional Assertions: Vacuous truth
|
Vagif Rza Ibrahimov
Vagif Rza Ibrahimov (born May 9, 1947, in the village of Jahri) is an Azerbaijani mathematician and professor. He is a corresponding member of ANAS and has organized and participated in numerous conferences. He has published more than 102 articles abroad. He is a professor at Baku State University.
Vagif Rza Ibrahimov
Born: May 9, 1947
Jahri, Nakhchivan Autonomous Republic, Azerbaijan
Education: Doctor of Physical and Mathematical Sciences[1]
Occupation(s): Professor at Baku State University;[2] member of the American Mathematical Society and the European Mathematical Society
References
1. "Vagif Ibrahimov".
2. "Bakı Dövlət Universitetinin əməkdaşlarına fəxri adların verilməsi haqqında" Azərbaycan Respublikası Prezidentinin 30 oktyabr 2009-cu il tarixli, 538 nömrəli Sərəncamı. e-qanun.az (in Azerbaijani)
External links
• Biography at the Official website of the Baku State University
• Biography at the Official website of Institute of Control Systems
|
Inverse semigroup
In group theory, an inverse semigroup (occasionally called an inversion semigroup[1]) S is a semigroup in which every element x in S has a unique inverse y in S in the sense that x = xyx and y = yxy, i.e. a regular semigroup in which every element has a unique inverse. Inverse semigroups appear in a range of contexts; for example, they can be employed in the study of partial symmetries.[2]
(The convention followed in this article will be that of writing a function on the right of its argument, e.g. x f rather than f(x), and composing functions from left to right—a convention often observed in semigroup theory.)
Origins
Inverse semigroups were introduced independently by Viktor Vladimirovich Wagner[3] in the Soviet Union in 1952,[4] and by Gordon Preston in the United Kingdom in 1954.[5] Both authors arrived at inverse semigroups via the study of partial bijections of a set: a partial transformation α of a set X is a function from A to B, where A and B are subsets of X. Let α and β be partial transformations of a set X; α and β can be composed (from left to right) on the largest domain upon which it "makes sense" to compose them:
$\operatorname {dom} \alpha \beta =[\operatorname {im} \alpha \cap \operatorname {dom} \beta ]\alpha ^{-1}\,$
where α−1 denotes the preimage under α. Partial transformations had already been studied in the context of pseudogroups.[6] It was Wagner, however, who was the first to observe that the composition of partial transformations is a special case of the composition of binary relations.[7] He recognised also that the domain of composition of two partial transformations may be the empty set, so he introduced an empty transformation to take account of this. With the addition of this empty transformation, the composition of partial transformations of a set becomes an everywhere-defined associative binary operation. Under this composition, the collection ${\mathcal {I}}_{X}$ of all partial one-one transformations of a set X forms an inverse semigroup, called the symmetric inverse semigroup (or monoid) on X, with inverse the functional inverse defined from image to domain (equivalently, the converse relation).[8] This is the "archetypal" inverse semigroup, in the same way that a symmetric group is the archetypal group. For example, just as every group can be embedded in a symmetric group, every inverse semigroup can be embedded in a symmetric inverse semigroup (see § Homomorphisms and representations of inverse semigroups below).
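The composition rule for the symmetric inverse semigroup can be made concrete by modelling finite partial bijections as Python dicts. The following sketch is our own illustration (the function names are not from the literature); it implements the left-to-right composition and the domain formula above:

```python
def compose(alpha, beta):
    """Left-to-right composition: x(alpha beta) = (x alpha) beta,
    defined exactly on dom(alpha beta) = (im alpha ∩ dom beta) alpha^{-1}."""
    return {x: beta[alpha[x]] for x in alpha if alpha[x] in beta}

def inverse(alpha):
    """Functional inverse, defined from the image back to the domain."""
    return {y: x for x, y in alpha.items()}

alpha = {1: 2, 2: 3}   # a partial bijection of X = {1, 2, 3, 4}
beta  = {3: 4, 4: 1}
# im alpha ∩ dom beta = {3}, whose preimage under alpha is {2}:
print(compose(alpha, beta))   # {2: 4}
```

Note that `compose(compose(alpha, inverse(alpha)), alpha)` returns `alpha` itself, the regularity identity αα⁻¹α = α.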
The basics
The inverse of an element x of an inverse semigroup S is usually written x−1. Inverses in an inverse semigroup have many of the same properties as inverses in a group, for example, (ab)−1 = b−1a−1. In an inverse monoid, xx−1 and x−1x are not necessarily equal to the identity, but they are both idempotent.[9] An inverse monoid S in which xx−1 = 1 = x−1x, for all x in S (a unipotent inverse monoid), is, of course, a group.
There are a number of equivalent characterisations of an inverse semigroup S:[10]
• Every element of S has a unique inverse, in the above sense.
• Every element of S has at least one inverse (S is a regular semigroup) and idempotents commute (that is, the idempotents of S form a semilattice).
• Every ${\mathcal {L}}$-class and every ${\mathcal {R}}$-class contains precisely one idempotent, where ${\mathcal {L}}$ and ${\mathcal {R}}$ are two of Green's relations.
The idempotent in the ${\mathcal {L}}$-class of s is s−1s, whilst the idempotent in the ${\mathcal {R}}$-class of s is ss−1. There is therefore a simple characterisation of Green's relations in an inverse semigroup:[11]
$a\,{\mathcal {L}}\,b\Longleftrightarrow a^{-1}a=b^{-1}b,\quad a\,{\mathcal {R}}\,b\Longleftrightarrow aa^{-1}=bb^{-1}$
Unless stated otherwise, E(S) will denote the semilattice of idempotents of an inverse semigroup S.
Examples of inverse semigroups
• Partial bijections on a set X form an inverse semigroup under composition.
• Every group is an inverse semigroup.
• The bicyclic semigroup is inverse, with (a, b)−1 = (b, a).
• Every semilattice is inverse.
• The Brandt semigroup is inverse.
• The Munn semigroup is inverse.
Multiplication table example. The operation is associative, and every element x has a unique inverse y in the sense that xyx = x and yxy = y. The semigroup has no identity element and is not commutative.
Inverse semigroup
  | a b c d e
--+----------
a | a a a a a
b | a b c a a
c | a a a b c
d | a d e a a
e | a a a d e
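The multiplication table above can be checked mechanically. The following brute-force sketch (our own illustration) verifies associativity and the uniqueness of inverses:

```python
from itertools import product

S = 'abcde'
rows = {'a': 'aaaaa', 'b': 'abcaa', 'c': 'aaabc', 'd': 'adeaa', 'e': 'aaade'}

def mul(x, y):
    """Product x·y read off the table: row x, column y."""
    return rows[x][S.index(y)]

# associativity: (xy)z = x(yz) for all 125 triples
assert all(mul(mul(x, y), z) == mul(x, mul(y, z)) for x, y, z in product(S, repeat=3))

# each x has exactly one inverse y with xyx = x and yxy = y
inv = {x: [y for y in S if mul(mul(x, y), x) == x and mul(mul(y, x), y) == y] for x in S}
print(inv)   # c and d are mutually inverse; a, b and e are their own inverses
```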
The natural partial order
An inverse semigroup S possesses a natural partial order relation ≤ (sometimes denoted by ω), which is defined by the following:[12]
$a\leq b\Longleftrightarrow a=eb,$
for some idempotent e in S. Equivalently,
$a\leq b\Longleftrightarrow a=bf,$
for some (in general, different) idempotent f in S. In fact, e can be taken to be aa−1 and f to be a−1a.[13]
The natural partial order is compatible with both multiplication and inversion, that is,[14]
$a\leq b,c\leq d\Longrightarrow ac\leq bd$
and
$a\leq b\Longrightarrow a^{-1}\leq b^{-1}.$
In a group, this partial order simply reduces to equality, since the identity is the only idempotent. In a symmetric inverse semigroup, the partial order reduces to restriction of mappings, i.e., α ≤ β if, and only if, the domain of α is contained in the domain of β and xα = xβ, for all x in the domain of α.[15]
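Modelling partial bijections as Python dicts (an illustrative sketch; the names are ours), the natural order α ≤ β is exactly the statement that α is a restriction of β:

```python
def compose(alpha, beta):
    """Left-to-right composition of partial bijections."""
    return {x: beta[alpha[x]] for x in alpha if alpha[x] in beta}

def inverse(alpha):
    return {y: x for x, y in alpha.items()}

def natural_leq(a, b):
    """a <= b iff a = eb for the idempotent e = a a^{-1}."""
    e = compose(a, inverse(a))      # identity map on dom(a)
    return a == compose(e, b)

alpha = {1: 2}
beta = {1: 2, 3: 4}
print(natural_leq(alpha, beta), natural_leq(beta, alpha))   # True False
```

Here `natural_leq(alpha, beta)` agrees with the containment of graphs, `set(alpha.items()) <= set(beta.items())`.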
The natural partial order on an inverse semigroup interacts with Green's relations as follows: if s ≤ t and s$\,{\mathcal {L}}\,$t, then s = t; similarly, if s ≤ t and s$\,{\mathcal {R}}\,$t, then s = t.[16]
On E(S), the natural partial order becomes:
$e\leq f\Longleftrightarrow e=ef,$
so, since the idempotents form a semilattice under the product operation, the product ef of two idempotents e and f in E(S) is their greatest lower bound with respect to ≤.
If E(S) is finite and forms a chain (i.e., E(S) is totally ordered by ≤), then S is a union of groups.[17] If E(S) is an infinite chain it is possible to obtain an analogous result under additional hypotheses on S and E(S).[18]
Homomorphisms and representations of inverse semigroups
A homomorphism (or morphism) of inverse semigroups is defined in exactly the same way as for any other semigroup: for inverse semigroups S and T, a function θ from S to T is a morphism if (sθ)(tθ) = (st)θ, for all s,t in S. The definition of a morphism of inverse semigroups could be augmented by including the condition (sθ)−1 = s−1θ, however, there is no need to do so, since this property follows from the above definition, via the following theorem:
Theorem. The homomorphic image of an inverse semigroup is an inverse semigroup; the inverse of an element is always mapped to the inverse of the image of that element.[19]
One of the earliest results proved about inverse semigroups was the Wagner–Preston Theorem, which is an analogue of Cayley's theorem for groups:
Wagner–Preston Theorem. If S is an inverse semigroup, then the function φ from S to ${\mathcal {I}}_{S}$, given by
dom (aφ) = Sa−1 and x(aφ) = xa
is a faithful representation of S.[20]
Thus, any inverse semigroup can be embedded in a symmetric inverse semigroup, and with image closed under the inverse operation on partial bijections. Conversely, any subsemigroup of the symmetric inverse semigroup closed under the inverse operation is an inverse semigroup. Hence a semigroup S is isomorphic to a subsemigroup of the symmetric inverse semigroup closed under inverses if and only if S is an inverse semigroup.
Congruences on inverse semigroups
Congruences are defined on inverse semigroups in exactly the same way as for any other semigroup: a congruence ρ is an equivalence relation that is compatible with semigroup multiplication, i.e.,
$a\,\rho \,b,\quad c\,\rho \,d\Longrightarrow ac\,\rho \,bd.$[21]
Of particular interest is the relation $\sigma $, defined on an inverse semigroup S by
$a\,\sigma \,b\Longleftrightarrow $ there exists a $c\in S$ with $c\leq a,b.$[22]
It can be shown that σ is a congruence and, in fact, it is a group congruence, meaning that the factor semigroup S/σ is a group. In the set of all group congruences on a semigroup S, the minimal element (for the partial order defined by inclusion of sets) need not be the smallest element. In the specific case in which S is an inverse semigroup σ is the smallest congruence on S such that S/σ is a group, that is, if τ is any other congruence on S with S/τ a group, then σ is contained in τ. The congruence σ is called the minimum group congruence on S.[23] The minimum group congruence can be used to give a characterisation of E-unitary inverse semigroups (see below).
A congruence ρ on an inverse semigroup S is called idempotent pure if
$a\in S,e\in E(S),a\,\rho \,e\Longrightarrow a\in E(S).$[24]
E-unitary inverse semigroups
One class of inverse semigroups that has been studied extensively over the years is the class of E-unitary inverse semigroups: an inverse semigroup S (with semilattice E of idempotents) is E-unitary if, for all e in E and all s in S,
$es\in E\Longrightarrow s\in E.$
Equivalently,
$se\in E\Rightarrow s\in E.$[25]
One further characterisation of an E-unitary inverse semigroup S is the following: if e is in E and e ≤ s, for some s in S, then s is in E.[26]
Theorem. Let S be an inverse semigroup with semilattice E of idempotents, and minimum group congruence σ. Then the following are equivalent:[27]
• S is E-unitary;
• σ is idempotent pure;
• $\sim $ = σ,
where $\sim $ is the compatibility relation on S, defined by
$a\sim b\Longleftrightarrow ab^{-1},a^{-1}b$ are idempotent.
McAlister's Covering Theorem. Every inverse semigroup S has an E-unitary cover; that is, there exists an idempotent-separating surjective homomorphism from some E-unitary inverse semigroup T onto S.[28]
Central to the study of E-unitary inverse semigroups is the following construction.[29] Let ${\mathcal {X}}$ be a partially ordered set, with ordering ≤, and let ${\mathcal {Y}}$ be a subset of ${\mathcal {X}}$ with the properties that
• ${\mathcal {Y}}$ is a lower semilattice, that is, every pair of elements A, B in ${\mathcal {Y}}$ has a greatest lower bound A $\wedge $ B in ${\mathcal {Y}}$ (with respect to ≤);
• ${\mathcal {Y}}$ is an order ideal of ${\mathcal {X}}$, that is, for A, B in ${\mathcal {X}}$, if A is in ${\mathcal {Y}}$ and B ≤ A, then B is in ${\mathcal {Y}}$.
Now let G be a group that acts on ${\mathcal {X}}$ (on the left), such that
• for all g in G and all A, B in ${\mathcal {X}}$, gA = gB if, and only if, A = B;
• for each g in G and each B in ${\mathcal {X}}$, there exists an A in ${\mathcal {X}}$ such that gA = B;
• for all A, B in ${\mathcal {X}}$, A ≤ B if, and only if, gA ≤ gB;
• for all g, h in G and all A in ${\mathcal {X}}$, g(hA) = (gh)A.
The triple $(G,{\mathcal {X}},{\mathcal {Y}})$ is also assumed to have the following properties:
• for every X in ${\mathcal {X}}$, there exists a g in G and an A in ${\mathcal {Y}}$ such that gA = X;
• for all g in G, g${\mathcal {Y}}$ and ${\mathcal {Y}}$ have nonempty intersection.
Such a triple $(G,{\mathcal {X}},{\mathcal {Y}})$ is called a McAlister triple. A McAlister triple is used to define the following:
$P(G,{\mathcal {X}},{\mathcal {Y}})=\{(A,g)\in {\mathcal {Y}}\times G:g^{-1}A\in {\mathcal {Y}}\}$
together with multiplication
$(A,g)(B,h)=(A\wedge gB,gh)$.
Then $P(G,{\mathcal {X}},{\mathcal {Y}})$ is an inverse semigroup under this multiplication, with (A, g)−1 = (g−1A, g−1). One of the main results in the study of E-unitary inverse semigroups is McAlister's P-Theorem:
McAlister's P-Theorem. Let $(G,{\mathcal {X}},{\mathcal {Y}})$ be a McAlister triple. Then $P(G,{\mathcal {X}},{\mathcal {Y}})$ is an E-unitary inverse semigroup. Conversely, every E-unitary inverse semigroup is isomorphic to one of this type.[30]
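As a concrete illustration of the construction (our own toy example, not from the article): take ${\mathcal {X}}$ = {a, b, c} with c below both a and b, the order ideal ${\mathcal {Y}}$ = {a, c}, and G = Z₂ acting by swapping a and b and fixing c. The resulting $P(G,{\mathcal {X}},{\mathcal {Y}})$ has three elements:

```python
from itertools import product

# Poset X = {a, b, c} with c <= a and c <= b:  Z <= W  iff  Z in below[W]
below = {'a': {'a', 'c'}, 'b': {'b', 'c'}, 'c': {'c'}}

def meet(A, B):
    """Greatest lower bound in X (exists for the pairs arising below)."""
    common = below[A] & below[B]
    return max(common, key=lambda Z: len(below[Z]))

# G = Z_2 = {e, g}, g swapping a and b (an order automorphism)
act = {('e', 'a'): 'a', ('e', 'b'): 'b', ('e', 'c'): 'c',
       ('g', 'a'): 'b', ('g', 'b'): 'a', ('g', 'c'): 'c'}
mulG = {('e', 'e'): 'e', ('e', 'g'): 'g', ('g', 'e'): 'g', ('g', 'g'): 'e'}

Y = ('a', 'c')
# P(G, X, Y) = {(A, g) in Y x G : g^{-1} A in Y}; here every g is self-inverse
P = [(A, g) for A, g in product(Y, 'eg') if act[(g, A)] in Y]

def mul(p, q):
    (A, g), (B, h) = p, q
    return (meet(A, act[(g, B)]), mulG[(g, h)])

print(P)   # three elements: ('a', 'e'), ('c', 'e'), ('c', 'g')
```

Brute-force checks (closure, associativity, uniqueness of inverses, E-unitarity) confirm the P-theorem's claim for this triple.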
F-inverse semigroups
An inverse semigroup is said to be F-inverse if every element has a unique maximal element above it in the natural partial order, i.e. every σ-class has a maximal element. Every F-inverse semigroup is an E-unitary monoid. McAlister's covering theorem has been refined by M.V. Lawson to:
Theorem. Every inverse semigroup has an F-inverse cover.[31]
McAlister's P-theorem has been used to characterize F-inverse semigroups as well. The inverse semigroup $P(G,{\mathcal {X}},{\mathcal {Y}})$ arising from a McAlister triple is F-inverse if and only if ${\mathcal {Y}}$ is a principal ideal of ${\mathcal {X}}$ and ${\mathcal {X}}$ is a semilattice.
Free inverse semigroups
A construction similar to a free group is possible for inverse semigroups. A presentation of the free inverse semigroup on a set X may be obtained by considering the free semigroup with involution, where involution is the taking of the inverse, and then taking the quotient by the Vagner congruence
$\{(xx^{-1}x,x),\;(xx^{-1}yy^{-1},yy^{-1}xx^{-1})\;|\;x,y\in (X\cup X^{-1})^{+}\}.$
The word problem for free inverse semigroups is much more intricate than that for free groups. A celebrated result in this area is due to W. D. Munn, who showed that elements of the free inverse semigroup can naturally be regarded as trees, known as Munn trees. Multiplication in the free inverse semigroup has a counterpart on Munn trees, which essentially consists of overlapping common portions of the trees (see Lawson 1998 for further details).
Any free inverse semigroup is F-inverse.[31]
Connections with category theory
The above composition of partial transformations of a set gives rise to a symmetric inverse semigroup. There is another way of composing partial transformations, which is more restrictive than that used above: two partial transformations α and β are composed if, and only if, the image of α is equal to the domain of β; otherwise, the composition αβ is undefined. Under this alternative composition, the collection of all partial one-one transformations of a set forms not an inverse semigroup but an inductive groupoid, in the sense of category theory. This close connection between inverse semigroups and inductive groupoids is embodied in the Ehresmann–Schein–Nambooripad Theorem, which states that an inductive groupoid can always be constructed from an inverse semigroup, and conversely.[32] More precisely, an inverse semigroup is precisely a groupoid in the category of posets that is an étale groupoid with respect to its (dual) Alexandrov topology and whose poset of objects is a meet-semilattice.
Generalisations of inverse semigroups
As noted above, an inverse semigroup S can be defined by the conditions (1) S is a regular semigroup, and (2) the idempotents in S commute; this has led to two distinct classes of generalisations of an inverse semigroup: semigroups in which (1) holds, but (2) does not, and vice versa.
Examples of regular generalisations of an inverse semigroup are:[33]
• Regular semigroups: a semigroup S is regular if every element has at least one inverse; equivalently, for each a in S, there is an x in S such that axa = a.
• Locally inverse semigroups: a regular semigroup S is locally inverse if eSe is an inverse semigroup, for each idempotent e.
• Orthodox semigroups: a regular semigroup S is orthodox if its subset of idempotents forms a subsemigroup.
• Generalised inverse semigroups: a regular semigroup S is called a generalised inverse semigroup if its idempotents form a normal band, i.e., xyzx = xzyx, for all idempotents x, y, z.
The class of generalised inverse semigroups is the intersection of the class of locally inverse semigroups and the class of orthodox semigroups.[34]
Amongst the non-regular generalisations of an inverse semigroup are:[35]
• (Left, right, two-sided) adequate semigroups.
• (Left, right, two-sided) ample semigroups.
• (Left, right, two-sided) semiadequate semigroups.
• Weakly (left, right, two-sided) ample semigroups.
Inverse category
This notion of inverse also readily generalizes to categories. An inverse category is simply a category in which every morphism f : X → Y has a generalized inverse g : Y → X such that fgf = f and gfg = g. An inverse category is self-dual. The category of sets and partial bijections is the prime example.[36]
Inverse categories have found various applications in theoretical computer science.[37]
See also
• Orthodox semigroup
• Biordered set
• Pseudogroup
• Partial symmetries
• Regular semigroup
• Semilattice
• Green's relations
• Category theory
• Special classes of semigroups
• Weak inverse
• Nambooripad order
Notes
1. Weisstein, Eric W. (2002). CRC Concise Encyclopedia of Mathematics (2nd ed.). CRC Press. p. 1528. ISBN 978-1-4200-3522-3.
2. Lawson 1998
3. Since his father was German, Wagner preferred the German transliteration of his name (with a "W", rather than a "V") from Cyrillic – see Schein 1981.
4. First a short announcement in Wagner 1952, then a much more comprehensive exposition in Wagner 1953.
5. Preston 1954a,b,c.
6. See, for example, Gołab 1939.
7. Schein 2002, p. 152
8. Howie 1995, p. 149
9. Howie 1995, Proposition 5.1.2(1)
10. Howie 1995, Theorem 5.1.1
11. Howie 1995, Proposition 5.1.2(1)
12. Wagner 1952
13. Howie 1995, Proposition 5.2.1
14. Howie 1995, pp. 152–3
15. Howie 1995, p. 153
16. Lawson 1998, Proposition 3.2.3
17. Clifford & Preston 1967, Theorem 7.5
18. Gonçalves, D; Sobottka, M; Starling, C (2017). "Inverse semigroup shifts over countable alphabets". Semigroup Forum. 96 (2): 203–240. arXiv:1510.04117. doi:10.1007/s00233-017-9858-5, Corollary 4.9.
19. Clifford & Preston 1967, Theorem 7.36
20. Howie 1995, Theorem 5.1.7 Originally, Wagner 1952 and, independently, Preston 1954c.
21. Howie 1995, p. 22
22. Lawson 1998, p. 62
23. Lawson 1998, Theorem 2.4.1
24. Lawson 1998, p. 65
25. Howie 1995, p. 192
26. Lawson 1998, Proposition 2.4.3
27. Lawson 1998, Theorem 2.4.6
28. Grillet, P. A. (1995). Semigroups: An Introduction to the Structure Theory. CRC Press. p. 248. ISBN 978-0-8247-9662-4.
29. Howie 1995, pp. 193–4
30. Howie 1995, Theorem 5.9.2. Originally, McAlister 1974a,b.
31. Lawson 1998, p. 230
32. Lawson 1998, 4.1.8
33. Howie 1995, Section 2.4 & Chapter 6
34. Howie 1995, p. 222
35. Fountain 1979, Gould
36. Grandis, Marco (2012). Homological Algebra: The Interplay of Homology with Distributive Lattices and Orthodox Semigroups. World Scientific. p. 55. ISBN 978-981-4407-06-9.
37. Hines, Peter; Braunstein, Samuel L. (2010). "The Structure of Partial Isometries". In Gay and, Simon; Mackie, Ian (eds.). Semantic Techniques in Quantum Computation. Cambridge University Press. p. 369. ISBN 978-0-521-51374-6.
References
• Clifford, A. H.; Preston, G. B. (1967). The Algebraic Theory of Semigroups. Mathematical Surveys of the American Mathematical Society. Vol. 7. ISBN 978-0-8218-0272-4.
• Fountain, J. B. (1979). "Adequate semigroups". Proceedings of the Edinburgh Mathematical Society. 22 (2): 113–125. doi:10.1017/S0013091500016230.
• Gołab, St. (1939). "Über den Begriff der "Pseudogruppe von Transformationen"". Mathematische Annalen (in German). 116: 768–780. doi:10.1007/BF01597390.
• Exel, R. (1998). "Partial actions of groups and actions of inverse semigroups". Proceedings of the American Mathematical Society. 126 (12): 3481–4. arXiv:funct-an/9511003. doi:10.1090/S0002-9939-98-04575-4.
• Gould, V. "(Weakly) left E-ample semigroups". Archived from the original (Postscript) on 2005-08-26. Retrieved 2006-08-28.
• Howie, J. M. (1995). Fundamentals of Semigroup Theory. Oxford: Clarendon Press. ISBN 0198511949.
• Lawson, M. V. (1998). Inverse Semigroups: The Theory of Partial Symmetries. World Scientific. ISBN 9810233167.
• McAlister, D. B. (1974a). "Groups, semilattices and inverse semigroups". Transactions of the American Mathematical Society. 192: 227–244. doi:10.2307/1996831. JSTOR 1996831.
• McAlister, D. B. (1974b). "Groups, semilattices and inverse semigroups II". Transactions of the American Mathematical Society. 196: 351–370. doi:10.2307/1997032. JSTOR 1997032.
• Petrich, M. (1984). Inverse semigroups. Wiley. ISBN 0471875457.
• Preston, G. B. (1954a). "Inverse semi-groups". Journal of the London Mathematical Society. 29 (4): 396–403. doi:10.1112/jlms/s1-29.4.396.
• Preston, G. B. (1954b). "Inverse semi-groups with minimal right ideals". Journal of the London Mathematical Society. 29 (4): 404–411. doi:10.1112/jlms/s1-29.4.404.
• Preston, G. B. (1954c). "Representations of inverse semi-groups". Journal of the London Mathematical Society. 29 (4): 411–9. doi:10.1112/jlms/s1-29.4.411.
• Schein, B. M. (1981). "Obituary: Viktor Vladimirovich Vagner (1908–1981)". Semigroup Forum. 28: 189–200. doi:10.1007/BF02676643.
• Schein, B. M. (2002). "Book Review: "Inverse Semigroups: The Theory of Partial Symmetries" by Mark V. Lawson". Semigroup Forum. 65: 149–158. doi:10.1007/s002330010132.
• Wagner, V. V. (1952). "Generalised groups". Proceedings of the USSR Academy of Sciences (in Russian). 84: 1119–1122. English translation (PDF).
• Wagner, V. V. (1953). "The theory of generalised heaps and generalised groups". Matematicheskii Sbornik. Novaya Seriya (in Russian). 32 (74): 545–632.
Further reading
• For a brief introduction to inverse semigroups, see either Clifford & Preston 1967, Chapter 7 or Howie 1995, Chapter 5.
• More comprehensive introductions can be found in Petrich 1984 and Lawson 1998.
• Linckelmann, M. (2012). "On inverse categories and transfer in cohomology" (PDF). Proceedings of the Edinburgh Mathematical Society. 56: 187. doi:10.1017/S0013091512000211. Open access preprint
Vague topology
In mathematics, particularly in the area of functional analysis and topological vector spaces, the vague topology is an example of the weak-* topology which arises in the study of measures on locally compact Hausdorff spaces.
Let $X$ be a locally compact Hausdorff space. Let $M(X)$ be the space of complex Radon measures on $X,$ and $C_{0}(X)^{*}$ denote the dual of $C_{0}(X),$ the Banach space of complex continuous functions on $X$ vanishing at infinity equipped with the uniform norm. By the Riesz representation theorem $M(X)$ is isometric to $C_{0}(X)^{*}.$ The isometry maps a measure $\mu $ to a linear functional $I_{\mu }(f):=\int _{X}f\,d\mu .$
The vague topology is the weak-* topology on $C_{0}(X)^{*}.$ The corresponding topology on $M(X)$ induced by the isometry from $C_{0}(X)^{*}$ is also called the vague topology on $M(X).$ Thus in particular, a sequence of measures $\left(\mu _{n}\right)_{n\in \mathbb {N} }$ converges vaguely to a measure $\mu $ whenever for all test functions $f\in C_{0}(X),$
$\int _{X}fd\mu _{n}\to \int _{X}fd\mu .$
It is also not uncommon to define the vague topology by duality with continuous functions having compact support $C_{c}(X),$ that is, a sequence of measures $\left(\mu _{n}\right)_{n\in \mathbb {N} }$ converges vaguely to a measure $\mu $ whenever the above convergence holds for all test functions $f\in C_{c}(X).$ In general, this construction gives rise to a different topology. In particular, the topology defined by duality with $C_{c}(X)$ can be metrizable whereas the topology defined by duality with $C_{0}(X)$ is not.
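For instance (an illustrative numeric sketch): the point masses $\mu _{n}=\delta _{n}$ converge vaguely to the zero measure, since $\int f\,d\mu _{n}=f(n)\to 0$ for every $f\in C_{0}(\mathbb {R} )$, even though each $\mu _{n}$ has total mass 1:

```python
import math

f = lambda x: math.exp(-abs(x))           # a test function in C_0(R)
pairings = [f(n) for n in range(1, 8)]    # ∫ f dμ_n = f(n) for μ_n = δ_n
# the pairings decrease to 0: the vague limit is the zero measure,
# so total mass need not be preserved under vague convergence
print(pairings)
```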
One application of this is to probability theory: for example, the central limit theorem is essentially a statement that if $\mu _{n}$ are the probability measures for certain sums of independent random variables, then $\mu _{n}$ converge weakly (and hence vaguely) to a normal distribution, that is, the measure $\mu _{n}$ is "approximately normal" for large $n.$
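This weak convergence can be probed numerically. The sketch below (our own illustration) compares $\int f\,d\mu _{n}$ for the standardized Binomial(n, 1/2) law against the Gaussian integral, both computed deterministically:

```python
import math

def Ef_binomial(n, f):
    """E[f((X - n/2)/sqrt(n/4))] for X ~ Binomial(n, 1/2), computed exactly."""
    s = math.sqrt(n / 4)
    return sum(math.comb(n, k) * 0.5**n * f((k - n/2) / s) for k in range(n + 1))

def Ef_normal(f, lim=10.0, steps=20001):
    """∫ f dN(0,1) by a simple Riemann sum over [-lim, lim]."""
    h = 2 * lim / (steps - 1)
    total = sum(f(-lim + i*h) * math.exp(-(-lim + i*h)**2 / 2) for i in range(steps))
    return total * h / math.sqrt(2 * math.pi)

gap = abs(Ef_binomial(200, math.cos) - Ef_normal(math.cos))
print(gap)   # small: the binomial pairing is close to the Gaussian one
```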
See also
• List of topologies – List of concrete topologies and topological spaces
References
• Dieudonné, Jean (1970), "§13.4. The vague topology", Treatise on analysis, vol. II, Academic Press.
• G. B. Folland, Real Analysis: Modern Techniques and Their Applications, 2nd ed, John Wiley & Sons, Inc., 1999.
This article incorporates material from Weak-* topology of the space of Radon measures on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Vaidya metric
In general relativity, the Vaidya metric describes the non-empty external spacetime of a spherically symmetric and nonrotating star which is either emitting or absorbing null dust. It is named after the Indian physicist Prahalad Chunnilal Vaidya and constitutes the simplest non-static generalization of the non-radiative Schwarzschild solution to Einstein's field equation; it is therefore also called the "radiating (shining) Schwarzschild metric".
From Schwarzschild to Vaidya metrics
The Schwarzschild metric as the static and spherically symmetric solution to Einstein's equation reads
$ds^{2}=-\left(1-{\frac {2M}{r}}\right)dt^{2}+\left(1-{\frac {2M}{r}}\right)^{-1}dr^{2}+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right).$
(1)
To remove the coordinate singularity of this metric at $r=2M$, one could switch to the Eddington–Finkelstein coordinates. Thus, introduce the "retarded(/outgoing)" null coordinate $u$ by
$t=u+r+2M\ln \left({\frac {r}{2M}}-1\right)\qquad \Rightarrow \quad dt=du+\left(1-{\frac {2M}{r}}\right)^{-1}dr\;,$
(2)
and Eq(1) could be transformed into the "retarded(/outgoing) Schwarzschild metric"
$ds^{2}=-\left(1-{\frac {2M}{r}}\right)du^{2}-2dudr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right);$
(3)
or, we could instead employ the "advanced(/ingoing)" null coordinate $v$ by
$t=v-r-2M\ln \left({\frac {r}{2M}}-1\right)\qquad \Rightarrow \quad dt=dv-\left(1-{\frac {2M}{r}}\right)^{-1}dr\;,$
(4)
so Eq(1) becomes the "advanced(/ingoing) Schwarzschild metric"
$ds^{2}=-\left(1-{\frac {2M}{r}}\right)dv^{2}+2dvdr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right).$
(5)
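Both coordinate changes hinge on the identity $dr^{*}/dr=(1-2M/r)^{-1}$ for the tortoise-type function $r^{*}=r+2M\ln(r/2M-1)$ appearing in Eq(2) and Eq(4). A quick symbolic check (a sketch using sympy):

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
tortoise = r + 2*M*sp.log(r/(2*M) - 1)   # r* with t = u + r* (retarded) or t = v - r* (advanced)

# dt = du + (1 - 2M/r)^(-1) dr  is equivalent to  d(r*)/dr = (1 - 2M/r)^(-1)
check = sp.simplify(sp.diff(tortoise, r) - 1/(1 - 2*M/r))
print(check)   # 0
```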
Eq(3) and Eq(5), as static and spherically symmetric solutions, are valid for both ordinary celestial objects with finite radii and singular objects such as black holes. It turns out that it is still physically reasonable to extend the mass parameter $M$ in Eq(3) and Eq(5) from a constant to functions of the corresponding null coordinate, $M(u)$ and $M(v)$ respectively; thus
$ds^{2}=-\left(1-{\frac {2M(u)}{r}}\right)du^{2}-2dudr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right),$
(6)
$ds^{2}=-\left(1-{\frac {2M(v)}{r}}\right)dv^{2}+2dvdr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right).$
(7)
The extended metrics Eq(6) and Eq(7) are respectively the "retarded(/outgoing)" and "advanced(/ingoing)" Vaidya metrics.[1][2] It is also sometimes useful to recast the Vaidya metrics Eqs(6)(7) into the form
$ds^{2}={\frac {2M(u)}{r}}du^{2}+ds^{2}({\text{flat}})={\frac {2M(v)}{r}}dv^{2}+ds^{2}({\text{flat}})\,,$
(8)
where
${\begin{aligned}ds^{2}({\text{flat}})&=-du^{2}-2dudr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\\&=-dv^{2}+2dvdr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\\&=-dt^{2}+dr^{2}+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\end{aligned}}$
represents the metric of flat spacetime.
Outgoing Vaidya with pure Emitting field
As for the "retarded(/outgoing)" Vaidya metric Eq(6),[1][2][3][4][5] the Ricci tensor has only one nonzero component
$R_{uu}=-2{\frac {M(u)_{,\,u}}{r^{2}}}\,,$
(9)
while the Ricci curvature scalar vanishes, $R=g^{ab}R_{ab}=0$ because $g^{uu}=0$. Thus, according to the trace-free Einstein equation $G_{ab}=R_{ab}=8\pi T_{ab}$, the stress–energy tensor $T_{ab}$ satisfies
$T_{ab}=-{\frac {M(u)_{,\,u}}{4\pi r^{2}}}l_{a}l_{b}\;,\qquad l_{a}dx^{a}=-du\;,$
(10)
where $l_{a}=-\partial _{a}u$ and $l^{a}=g^{ab}l_{b}$ are null (co)vectors (cf. Box A below). Thus, $T_{ab}$ describes a "pure radiation field",[1][2] with energy density $ -{\frac {M(u)_{,\,u}}{4\pi r^{2}}}$. According to the null energy condition
$T_{ab}k^{a}k^{b}\geq 0\;,$
(11)
we have $M(u)_{,\,u}<0$ and thus the central body is emitting radiation.
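Eq(9) can be verified by direct computation. The following sketch (Python with sympy, using the standard Christoffel and Ricci formulas in this section's sign conventions, coordinates ordered $(u,r,\theta,\phi)$) recovers both $R_{uu}$ and the vanishing Ricci scalar from the metric Eq(6):

```python
import sympy as sp

# Direct check of Eq (9): for the outgoing Vaidya metric Eq (6), the only
# nonzero Ricci component is R_uu = -2 M'(u)/r^2 and the Ricci scalar vanishes.
u, r, th, ph = sp.symbols('u r theta phi', positive=True)
M = sp.Function('M')(u)
x = [u, r, th, ph]

g = sp.Matrix([
    [-(1 - 2*M/r), -1, 0, 0],
    [-1,            0, 0, 0],
    [0, 0, r**2, 0],
    [0, 0, 0, r**2*sp.sin(th)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d})
Gam = [[[sum(ginv[a, d]*(sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b])
             - sp.diff(g[b, c], x[d])) for d in range(4))/2
         for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = Gamma^a_{bc,a} - Gamma^a_{ba,c}
#                      + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(
        sp.diff(Gam[a][b][c], x[a]) - sp.diff(Gam[a][b][a], x[c])
        + sum(Gam[a][a][d]*Gam[d][b][c] - Gam[a][c][d]*Gam[d][b][a]
              for d in range(4))
        for a in range(4)))

Ric = sp.Matrix(4, 4, lambda b, c: ricci(b, c))
print(Ric[0, 0])   # the single nonzero component, equal to -2 M'(u)/r^2
R_scalar = sp.simplify(sum(ginv[a, b]*Ric[a, b]
                           for a in range(4) for b in range(4)))
print(R_scalar)    # 0
```

The same code with the ingoing metric Eq(7) in coordinates $(v,r,\theta,\phi)$ (flip the sign of the off-diagonal $dv\,dr$ entry and take $M(v)$) reproduces Eq(14).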
Following the calculations using Newman–Penrose (NP) formalism in Box A, the outgoing Vaidya spacetime Eq(6) is of Petrov-type D, and the nonzero components of the Weyl-NP and Ricci-NP scalars are
$\Psi _{2}=-{\frac {M(u)}{r^{3}}}\qquad \Phi _{22}=-{\frac {M(u)_{\,,\,u}}{r^{2}}}\;.$
(12)
Notably, the Vaidya field is a pure radiation field rather than an electromagnetic field. The emitted particles or energy–matter flows have zero rest mass and are therefore generically called "null dust", typically photons and neutrinos; they cannot be electromagnetic waves, however, because the Maxwell–NP equations are not satisfied. Incidentally, the outgoing and ingoing null expansion rates for the line element Eq(6) are respectively
$\theta _{(\ell )}=-(\rho +{\bar {\rho }})={\frac {2}{r}}\,,\quad \theta _{(n)}=\mu +{\bar {\mu }}={\frac {-r+2M(u)}{r^{2}}}\;.$
(13)
Suppose $ F:=1-{\frac {2M(u)}{r}}$; then the Lagrangian for null radial geodesics $(L=0,{\dot {\theta }}=0,{\dot {\phi }}=0)$ of the "retarded(/outgoing)" Vaidya spacetime Eq(6) is
$L=0=-F{\dot {u}}^{2}-2{\dot {u}}{\dot {r}}\,,$
where dot means the derivative with respect to some parameter $\lambda $. This Lagrangian has two solutions,
${\dot {u}}=0\quad {\text{and}}\quad {\dot {r}}=-{\frac {F}{2}}{\dot {u}}\;.$
According to the definition of $u$ in Eq(2), when $t$ increases the areal radius $r$ increases as well for the solution ${\dot {u}}=0$, while $r$ decreases for the solution $ {\dot {r}}=-{\frac {F}{2}}{\dot {u}}$. Thus, ${\dot {u}}=0$ should be recognized as the outgoing solution while $ {\dot {r}}=-{\frac {F}{2}}{\dot {u}}$ serves as the ingoing solution. Now, we can construct a complex null tetrad adapted to the outgoing null radial geodesics and employ the Newman–Penrose formalism to perform a full analysis of the outgoing Vaidya spacetime. Such an outgoing adapted tetrad can be set up as
$l^{a}=(0,1,0,0)\,,\quad n^{a}=\left(1,-{\frac {F}{2}},0,0\right)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$
and the dual basis covectors are therefore
$l_{a}=(-1,0,0,0)\,,\quad n_{a}=\left(-{\frac {F}{2}},-1,0,0\right)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,i\,\sin \theta )\,.$
In this null tetrad, the spin coefficients are
$\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \varepsilon =0$
$\rho =-{\frac {1}{r}}\,,\quad \mu ={\frac {-r+2M(u)}{2r^{2}}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \gamma ={\frac {M(u)}{2r^{2}}}\,.$
The Weyl-NP and Ricci-NP scalars are given by
$\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M(u)}{r^{3}}}\,,$
$\Phi _{00}=\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Lambda =0\,,\quad \Phi _{22}=-{\frac {M(u)_{\,,\,u}}{r^{2}}}\,.$
Since the only nonvanishing Weyl-NP scalar is $\Psi _{2}$, the "retarded(/outgoing)" Vaidya spacetime is of Petrov-type D. Also, there exists a radiation field as $\Phi _{22}\neq 0$.
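As a numerical sanity check of the tetrad in Box A, one can verify at a sample point that it is null, correctly cross-normalized ($l^{a}n_{a}=-1$, $m^{a}{\bar {m}}_{a}=1$), and reconstructs the metric via $g_{ab}=-l_{a}n_{b}-n_{a}l_{b}+m_{a}{\bar {m}}_{b}+{\bar {m}}_{a}m_{b}$. The values of $r$, $\theta$ and $M$ below are arbitrary test inputs:

```python
import numpy as np

# Numerical check of the outgoing adapted tetrad of Box A at one sample point.
r, th, M0 = 3.0, 0.7, 1.0        # arbitrary r, theta, M(u) values
F = 1 - 2*M0/r

# Metric of Eq (6) in coordinates (u, r, theta, phi)
g = np.zeros((4, 4), dtype=complex)
g[0, 0] = -F
g[0, 1] = g[1, 0] = -1.0
g[2, 2] = r**2
g[3, 3] = (r*np.sin(th))**2

l = np.array([0, 1, 0, 0], dtype=complex)
n = np.array([1, -F/2, 0, 0], dtype=complex)
m = np.array([0, 0, 1, 1j/np.sin(th)], dtype=complex)/(np.sqrt(2)*r)

dot = lambda a, b: a @ g @ b
print(dot(l, l), dot(n, n), dot(m, m))   # all vanish: the vectors are null
print(dot(l, n))                         # -1: cross-normalization l.n
print(dot(m, m.conj()))                  # +1: cross-normalization m.mbar

# The tetrad reconstructs the metric: g_ab = -l_a n_b - n_a l_b + m_a mbar_b + mbar_a m_b
lo_, no_, mo_ = g @ l, g @ n, g @ m
g_rec = (-np.outer(lo_, no_) - np.outer(no_, lo_)
         + np.outer(mo_, mo_.conj()) + np.outer(mo_.conj(), mo_))
print(np.allclose(g_rec, g))             # True
```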
For the "retarded(/outgoing)" Schwarzschild metric Eq(3), let $ G:=1-{\frac {2M}{r}}$, and then the Lagrangian for null radial geodesics will have an outgoing solution ${\dot {u}}=0$ and an ingoing solution $ {\dot {r}}=-{\frac {G}{2}}{\dot {u}}$. Similar to Box A, now set up the adapted outgoing tetrad by
$l^{a}=(0,1,0,0)\,,\quad n^{a}=\left(1,-{\frac {G}{2}},0,0\right)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$
$l_{a}=(-1,0,0,0)\,,\quad n_{a}=\left(-{\frac {G}{2}},-1,0,0\right)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,i\,\sin \theta )\,.$
so the spin coefficients are
$\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \varepsilon =0$
$\rho =-{\frac {1}{r}}\,,\quad \mu ={\frac {-r+2M}{2r^{2}}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \gamma ={\frac {M}{2r^{2}}}\,,$
and the Weyl-NP and Ricci-NP scalars are given by
$\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M}{r^{3}}}\,,$
$\Phi _{00}=\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Phi _{22}=\Lambda =0\,.$
The "retarded(/outgoing)" Schwarzschild spacetime is of Petrov-type D with $\Psi _{2}$ being the only nonvanishing Weyl-NP scalar.
Ingoing Vaidya with pure absorbing field
As for the "advanced(/ingoing)" Vaidya metric Eq(7),[1][2][6] the Ricci tensor again has only one nonzero component
$R_{vv}=2{\frac {M(v)_{,\,v}}{r^{2}}}\,,$
(14)
and therefore $R=0$ and the stress–energy tensor is
$T_{ab}={\frac {M(v)_{,\,v}}{4\pi r^{2}}}\,n_{a}n_{b}\;,\qquad n_{a}dx^{a}=-dv\;.$
(15)
This is a pure radiation field with energy density $ {\frac {M(v)_{,\,v}}{4\pi r^{2}}}$; once again, the null energy condition Eq(11) implies $M(v)_{,\,v}>0$, so the central object is absorbing null dust. As calculated in Box C, the nonzero Weyl-NP and Ricci-NP components of the "advanced(/ingoing)" Vaidya metric Eq(7) are
$\Psi _{2}=-{\frac {M(v)}{r^{3}}}\qquad \Phi _{00}={\frac {M(v)_{\,,\,v}}{r^{2}}}\;.$
(16)
Also, the outgoing and ingoing null expansion rates for the line element Eq(7) are respectively
$\theta _{(\ell )}=-(\rho +{\bar {\rho }})={\frac {r-2M(v)}{r^{2}}}\,,\quad \theta _{(n)}=\mu +{\bar {\mu }}=-{\frac {2}{r}}\;.$
(17)
The advanced(/ingoing) Vaidya solution Eq(7) is especially useful in black-hole physics, as it is one of the few known exact dynamical solutions. It is often employed to investigate the differences between definitions of dynamical black-hole boundaries, such as the classical event horizon and the quasilocal trapping horizon. As shown by Eq(17), the evolving hypersurface $r=2M(v)$ is always a marginally outer trapped horizon ($\theta _{(\ell )}=0\;,\theta _{(n)}<0$).
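For instance, with an accreting mass function $M(v)=1+0.1v$ (an arbitrary illustrative choice satisfying $M(v)_{,\,v}>0$), the expansions of Eq(17) behave exactly as stated on the hypersurface $r=2M(v)$:

```python
import numpy as np

# Expansions of Eq (17) for a sample accreting mass function M(v) = 1 + 0.1 v.
M = lambda v: 1.0 + 0.1*v
theta_l = lambda r, v: (r - 2*M(v))/r**2    # outgoing expansion theta_(l)
theta_n = lambda r, v: -2.0/r               # ingoing expansion theta_(n)

v = 3.0
r_h = 2*M(v)                                # horizon radius r = 2M(v) at this v
print(theta_l(r_h, v))                      # 0.0: marginally outer trapped
print(theta_n(r_h, v) < 0)                  # True
print(theta_l(0.9*r_h, v) < 0)              # True: trapped just inside
```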
Suppose ${\tilde {F}}:=1-{\frac {2M(v)}{r}}$, then the Lagrangian for null radial geodesics of the "advanced(/ingoing)" Vaidya spacetime Eq(7) is
$L=-{\tilde {F}}{\dot {v}}^{2}+2{\dot {v}}{\dot {r}}\,,$
which has an ingoing solution ${\dot {v}}=0$ and an outgoing solution $ {\dot {r}}={\frac {\tilde {F}}{2}}{\dot {v}}$, in accordance with the definition of $v$ in Eq(4). Now, we can construct a complex null tetrad adapted to the ingoing null radial geodesics and employ the Newman–Penrose formalism to perform a full analysis of the ingoing Vaidya spacetime. Such an ingoing adapted tetrad can be set up as
$l^{a}=\left(1,{\frac {\tilde {F}}{2}},0,0\right)\,,\quad n^{a}=(0,-1,0,0)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$
and the dual basis covectors are therefore
$l_{a}=\left(-{\frac {\tilde {F}}{2}},1,0,0\right)\,,\quad n_{a}=(-1,0,0,0)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,i\,\sin \theta )\,.$
In this null tetrad, the spin coefficients are
$\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \gamma =0$
$\rho ={\frac {-r+2M(v)}{2r^{2}}}\,,\quad \mu =-{\frac {1}{r}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \varepsilon ={\frac {M(v)}{2r^{2}}}\,.$
The Weyl-NP and Ricci-NP scalars are given by
$\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M(v)}{r^{3}}}\,,$
$\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Phi _{22}=\Lambda =0\,,\quad \Phi _{00}={\frac {M(v)_{\,,\,v}}{r^{2}}}\;.$
Since the only nonvanishing Weyl-NP scalar is $\Psi _{2}$, the "advanced(/ingoing)" Vaidya spacetime is of Petrov-type D, and there exists a radiation field encoded in $\Phi _{00}$.
For the "advanced(/ingoing)" Schwarzschild metric Eq(5), still let $ G:=1-{\frac {2M}{r}}$, and then the Lagrangian for the null radial geodesics will have an ingoing solution ${\dot {v}}=0$ and an outgoing solution $ {\dot {r}}={\frac {G}{2}}{\dot {v}}$. Similar to Box C, now set up the adapted ingoing tetrad by
$l^{a}=\left(1,{\frac {G}{2}},0,0\right)\,,\quad n^{a}=(0,-1,0,0)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$
$l_{a}=\left(-{\frac {G}{2}},1,0,0\right)\,,\quad n_{a}=(-1,0,0,0)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,i\,\sin \theta )\,.$
so the spin coefficients are
$\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \gamma =0$
$\rho ={\frac {-r+2M}{2r^{2}}}\,,\quad \mu =-{\frac {1}{r}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \varepsilon ={\frac {M}{2r^{2}}}\,,$
and the Weyl-NP and Ricci-NP scalars are given by
$\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M}{r^{3}}}\,,$
$\Phi _{00}=\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Phi _{22}=\Lambda =0\,.$
The "advanced(/ingoing)" Schwarzschild spacetime is of Petrov-type D with $\Psi _{2}$ being the only nonvanishing Weyl-NP scalar.
Comparison with the Schwarzschild metric
As the most natural and simplest extension of the Schwarzschild metric, the Vaidya metric still has much in common with it:
• Both metrics are of Petrov-type D with $\Psi _{2}$ being the only nonvanishing Weyl-NP scalar (as calculated in Boxes A and B).
However, there are three clear differences between the Schwarzschild and Vaidya metrics:
• First of all, the mass parameter $M$ for Schwarzschild is a constant, while for Vaidya $M(u)$ is a u-dependent function.
• Schwarzschild is a solution to the vacuum Einstein equation $R_{ab}=0$, while Vaidya is a solution to the trace-free Einstein equation $R_{ab}=8\pi T_{ab}$ with a nontrivial pure radiation field. As a result, all Ricci-NP scalars vanish for Schwarzschild, while we have $\Phi _{22}=-{\frac {M(u)_{\,,\,u}}{r^{2}}}$ for the outgoing Vaidya metric.
• Schwarzschild has 4 independent Killing vector fields, including a timelike one, and is thus a static metric, while Vaidya has only the 3 independent Killing vector fields of spherical symmetry and is consequently nonstatic. As a result, the Schwarzschild metric belongs to Weyl's class of solutions while the Vaidya metric does not.
Extension of the Vaidya metric
Kinnersley metric
While the Vaidya metric is an extension of the Schwarzschild metric to include a pure radiation field, the Kinnersley metric[7] constitutes a further extension of the Vaidya metric; it describes a massive object that accelerates in recoil as it emits massless radiation anisotropically. The Kinnersley metric is a special case of the Kerr–Schild metric, and in Cartesian spacetime coordinates $x^{\mu }$ it takes the following form:
$g_{\mu \nu }=\eta _{\mu \nu }-{\frac {2m{\bigl (}u(x){\bigr )}}{r(x)^{3}}}\sigma _{\mu }(x)\sigma _{\nu }(x)$
(18)
$r(x)=\sigma _{\mu }(x)\,\,\lambda ^{\mu }(u(x))$
(19)
$\sigma ^{\mu }(x)=X^{\mu }(u(x))-x^{\mu },\quad \eta _{\mu \nu }\sigma ^{\mu }(x)\sigma ^{\nu }(x)=0$
(20)
where, for the duration of this section, all indices are raised and lowered using the "flat space" metric $\eta _{\mu \nu }$. The "mass" $m(u)$ is an arbitrary function of the proper time $u$ along the mass's world line, measured using the "flat" metric, $du^{2}=\eta _{\mu \nu }\,dX^{\mu }dX^{\nu }$; $X^{\mu }(u)$ describes the arbitrary world line of the mass, and $\lambda ^{\mu }(u)=dX^{\mu }(u)/du$ is then its four-velocity. $\sigma _{\mu }(x)$ is a "flat metric" null vector field implicitly defined by Eq(20), and $u(x)$ implicitly extends the proper-time parameter to a scalar field throughout spacetime by taking it constant on the outgoing light cone of the "flat" metric that emerges from the event $X^{\mu }(u)$; it satisfies the identity $\lambda ^{\mu }(u(x))\,\partial _{\mu }u(x)=1$. Working out the Einstein tensor for the metric $g_{\mu \nu }$ and integrating the outgoing energy–momentum flux "at infinity," one finds that $g_{\mu \nu }$ describes a mass with proper-time-dependent four-momentum $P^{\mu }=m(u)\,\lambda ^{\mu }(u)$ that emits net four-momentum at a proper rate of $-dP^{\mu }/du$. As viewed from the mass's instantaneous rest frame, the radiation flux has an angular distribution $A(u)+B(u)\,\cos(\theta (u))$, where $A(u)$ and $B(u)$ are complicated scalar functions of $m(u),\lambda ^{\mu }(u),\sigma _{\mu }(u)$ and their derivatives, and $\theta (u)$ is the instantaneous rest-frame angle between the 3-acceleration and the outgoing null vector. The Kinnersley metric may therefore be viewed as describing the gravitational field of an accelerating photon rocket with a very badly collimated exhaust.
In the special case where $\lambda ^{\mu }$ is independent of proper-time, the Kinnersley metric reduces to the Vaidya metric.
Vaidya–Bonner metric
Since the radiated or absorbed matter might be electrically non-neutral, the outgoing and ingoing Vaidya metrics Eqs(6)(7) can be naturally extended to include varying electric charges,
$ds^{2}=-\left(1-{\frac {2M(u)}{r}}+{\frac {Q(u)^{2}}{r^{2}}}\right)du^{2}-2dudr+r^{2}(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2})\;,$
(21)
$ds^{2}=-\left(1-{\frac {2M(v)}{r}}+{\frac {Q(v)^{2}}{r^{2}}}\right)dv^{2}+2dvdr+r^{2}(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2})\;.$
(22)
Eqs(21)(22) are called the Vaidya–Bonner metrics; they can also be regarded as extensions of the Reissner–Nordström metric, in the same way that the Vaidya metric extends the Schwarzschild metric.
See also
• Schwarzschild metric
• Null dust solution
References
1. Eric Poisson. A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics. Cambridge: Cambridge University Press, 2004. Section 4.3.5 and Section 5.1.8.
2. Jeremy Bransom Griffiths, Jiri Podolsky. Exact Space-Times in Einstein's General Relativity. Cambridge: Cambridge University Press, 2009. Section 9.5.
3. Thanu Padmanabhan. Gravitation: Foundations and Frontiers. Cambridge: Cambridge University Press, 2010. Section 7.3.
4. Pankaj S Joshi. Global Aspects in Gravitation and Cosmology. Oxford: Oxford University Press, 1996. Section 3.5.
5. Pankaj S Joshi. Gravitational Collapse and Spacetime Singularities. Cambridge: Cambridge University Press, 2007. Section 2.7.6.
6. Valeri Pavlovich Frolov, Igor Dmitrievich Novikov. Black Hole Physics: Basic Concepts and New Developments. Berlin: Springer, 1998. Section 5.7.
7. Kinnersley, W. (October 1969). "Field of an arbitrarily accelerating point mass". Phys. Rev. 186 (5): 1335. Bibcode:1969PhRv..186.1335K. doi:10.1103/PhysRev.186.1335.
Vakhitov–Kolokolov stability criterion
The Vakhitov–Kolokolov stability criterion is a condition for linear stability (sometimes called spectral stability) of solitary wave solutions to a wide class of U(1)-invariant Hamiltonian systems, named after Soviet scientists Aleksandr Kolokolov (Александр Александрович Колоколов) and Nazib Vakhitov (Назиб Галиевич Вахитов). The condition for linear stability of a solitary wave $u(x,t)=\phi _{\omega }(x)e^{-i\omega t}$ with frequency $\omega $ has the form
${\frac {d}{d\omega }}Q(\omega )<0,$
where $Q(\omega )\,$ is the charge (or momentum) of the solitary wave $\phi _{\omega }(x)e^{-i\omega t}$, conserved by Noether's theorem due to U(1)-invariance of the system.
Original formulation
Originally, this criterion was obtained for the nonlinear Schrödinger equation,
$i{\frac {\partial }{\partial t}}u(x,t)=-{\frac {\partial ^{2}}{\partial x^{2}}}u(x,t)+g(|u(x,t)|^{2})u(x,t),$
where $x\in \mathbb {R} $, $t\in \mathbb {R} $, and $g\in C^{\infty }(\mathbb {R} )$ is a smooth real-valued function. The solution $u(x,t)$ is assumed to be complex-valued. Since the equation is U(1)-invariant, by Noether's theorem it has an integral of motion, $ Q(u)={\frac {1}{2}}\int _{\mathbb {R} }|u(x,t)|^{2}\,dx$, which is called charge or momentum, depending on the model under consideration. For a wide class of functions $g$, the nonlinear Schrödinger equation admits solitary wave solutions of the form $u(x,t)=\phi _{\omega }(x)e^{-i\omega t}$, where $\omega \in \mathbb {R} $ and $\phi _{\omega }(x)$ decays for large $x$ (one often requires that $\phi _{\omega }(x)$ belongs to the Sobolev space $H^{1}(\mathbb {R} ^{n})$). Usually such solutions exist for $\omega $ from an interval or collection of intervals of the real line. The Vakhitov–Kolokolov stability criterion,[1][2][3][4]
${\frac {d}{d\omega }}Q(\phi _{\omega })<0,$
is a condition of spectral stability of a solitary wave solution. Namely, if this condition is satisfied at a particular value of $\omega $, then the linearization at the solitary wave with this $\omega $ has no spectrum in the right half-plane.
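As a concrete example, for the focusing cubic equation ($g(s)=-s$) the solitary-wave profile is $\phi _{\omega }(x)={\sqrt {-2\omega }}\,\operatorname {sech} ({\sqrt {-\omega }}\,x)$ for $\omega <0$ (the standard closed form for the cubic case, quoted here without derivation), giving $Q(\phi _{\omega })=2{\sqrt {-\omega }}$ and hence $dQ/d\omega =-1/{\sqrt {-\omega }}<0$: the Vakhitov–Kolokolov criterion predicts spectral stability of these solitons. A quick numerical check:

```python
import numpy as np

# Charge Q(w) of the cubic-NLS soliton phi_w(x) = sqrt(-2w) sech(sqrt(-w) x),
# evaluated on a uniform grid; analytically Q(w) = 2*sqrt(-w).
x = np.linspace(-40.0, 40.0, 200001)

def Q(w):
    phi = np.sqrt(-2.0*w)/np.cosh(np.sqrt(-w)*x)
    return 0.5*np.sum(phi**2)*(x[1] - x[0])    # 0.5 * integral of |phi|^2

w, h = -1.0, 1e-4
dQdw = (Q(w + h) - Q(w - h))/(2*h)             # central-difference derivative
print(Q(w))                                     # ~ 2.0  (= 2*sqrt(-w))
print(dQdw)                                     # ~ -1.0 (< 0: VK-stable)
```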
This result is based on an earlier work[5] by Vladimir Zakharov.
Generalizations
This result has been generalized to abstract Hamiltonian systems with U(1)-invariance.[6] It was shown that under rather general conditions the Vakhitov–Kolokolov stability criterion guarantees not only spectral stability but also orbital stability of solitary waves.
The stability condition has been generalized[7] to traveling wave solutions to the generalized Korteweg–de Vries equation of the form
$\partial _{t}u+\partial _{x}^{3}u+\partial _{x}f(u)=0\,$.
The stability condition has also been generalized to Hamiltonian systems with a more general symmetry group.[8]
See also
• Derrick's theorem
• Linear stability
• Lyapunov stability
• Nonlinear Schrödinger equation
• Orbital stability
References
1. Kolokolov, A. A. (1973). "Stability of the dominant mode of the nonlinear wave equation in a cubic medium" (in Russian). Prikladnaya Mekhanika i Tekhnicheskaya Fizika (3): 152–155.
2. A.A. Kolokolov (1973). "Stability of the dominant mode of the nonlinear wave equation in a cubic medium". Journal of Applied Mechanics and Technical Physics. 14 (3): 426–428. Bibcode:1973JAMTP..14..426K. doi:10.1007/BF00850963. S2CID 123792737.
3. Vakhitov, N. G. & Kolokolov, A. A. (1973). "Stationary solutions of the wave equation in a medium with nonlinearity saturation" (in Russian). Izvestiya Vysshikh Uchebnykh Zavedenii. Radiofizika. 16: 1020–1028.
4. N.G. Vakhitov & A.A. Kolokolov (1973). "Stationary solutions of the wave equation in the medium with nonlinearity saturation". Radiophys. Quantum Electron. 16 (7): 783–789. Bibcode:1973R&QE...16..783V. doi:10.1007/BF01031343. S2CID 123386885.
5. Vladimir E. Zakharov (1967). "Instability of Self-focusing of Light" (PDF). Zh. Eksp. Teor. Fiz. 53: 1735–1743. Bibcode:1968JETP...26..994Z.
6. Manoussos Grillakis; Jalal Shatah & Walter Strauss (1987). "Stability theory of solitary waves in the presence of symmetry. I". J. Funct. Anal. 74: 160–197. doi:10.1016/0022-1236(87)90044-9.
7. Jerry Bona; Panagiotis Souganidis & Walter Strauss (1987). "Stability and instability of solitary waves of Korteweg-de Vries type". Proceedings of the Royal Society A. 411 (1841): 395–412. Bibcode:1987RSPSA.411..395B. doi:10.1098/rspa.1987.0073. S2CID 120894859.
8. Manoussos Grillakis; Jalal Shatah & Walter Strauss (1990). "Stability theory of solitary waves in the presence of symmetry". J. Funct. Anal. 94 (2): 308–348. doi:10.1016/0022-1236(90)90016-E.
Val
Val may refer to:
Military equipment
• Aichi D3A, a Japanese World War II dive bomber codenamed "Val" by the Allies
• AS Val, a Soviet assault rifle
Music
• Val, album by Val Doonican
• VAL (band), Belarusian pop duo
People
• Val (given name), a unisex given name
• Rafael Merry del Val (1865–1930), Spanish Catholic cardinal
• Val (sculptor) (1967–2016), French sculptor
• Val (footballer, born 1983), Lucivaldo Lázaro de Abreu, Brazilian football midfielder
• Val (footballer, born 1997), Valdemir de Oliveira Soares, Brazilian football defensive midfielder
Places
• Val (Rychnov nad Kněžnou District), a village and municipality in the Czech Republic
• Val (Tábor District), a village and municipality in the Czech Republic
• Vál, a village in Hungary
• Val, Iran, a village in Kurdistan Province, Iran
• Val, Italy, a frazione in Cortina d'Ampezzo, Veneto, Italy
• Val, Bhiwandi, a village in Maharashtra, India
Other uses
• Val (film), an American documentary about Val Kilmer, directed by Leo Scott and Ting Poo
• Valley girl or Val, an American stereotype
• Abbreviation of the amino acid valine
• A weapon used in the Indian martial art of Kalarippayattu
• Vieques Air Link, a Puerto Rican airline company
See also
• VAL (disambiguation)
• Wal (disambiguation)
• Vala (disambiguation)
• Vale (disambiguation)
• Vali (disambiguation)
• Valo (disambiguation)
• Vals (disambiguation)
• Valy (disambiguation)
• Valk (surname)
• Vall (surname)
• All pages with titles beginning with Val
Valentin Belousov
Valentin Danilovich Belousov (Russian: Валенти́н Дани́лович Белоу́сов; 20 February 1925 – 23 July 1988) was a Soviet and Moldovan mathematician and a corresponding member of the Academy of Pedagogical Sciences of the USSR (1968).[1][2]
Valentin Belousov
Born: Valentin Danilovich Belousov, 20 February 1925, Bălți, Kingdom of Romania
Died: 23 July 1988 (aged 63), Kishinev, Moldavian SSR, Soviet Union
Education: Doctor of Physical and Mathematical Sciences (1966)
Alma mater: Kishinev Pedagogical Institute
Scientific career
Fields: Mathematics
He graduated from the Kishinev Pedagogical Institute (1947), became a Doctor of Physical and Mathematical Sciences (1966) and Professor (1967), and was an Honored Worker of Science and Technology of the Moldavian SSR.
From 1962 he worked at the Institute of Mathematics, Academy of Sciences of the Moldavian SSR. His major works concern algebra, especially the theory of quasigroups and its applications. He is known for his book "Fundamentals of the theory of quasigroups and loops" (1967) and for school textbooks. He was a laureate of the State Prize in Science and Technology of the Moldavian SSR.
Honored Worker of Science and Technology of the MSSR (1970). Laureate of the State Prize for Science and Technology of the MSSR (1982). He is the founder of the school of quasigroup theory in the former USSR.
Milestones in the scientific life
• 1944–1947 – student at the Pedagogical Institute in Kishinev,
• 1947–1948 – teacher-training courses at the Kishinev Pedagogical Institute,
• 1948–1950 – teacher and head teacher of the high school in the village of Sofia, Bălți district,
• 1950–1954 – Lecturer, Department of Mathematics, Bălți Pedagogical Institute,
• 1954–1955 – student in the postgraduate courses at Lomonosov Moscow State University,
• 1955–1956 – postgraduate student at Lomonosov Moscow State University,
• 1957–1960 – Lecturer, Department of Mathematics, Bălți Pedagogical Institute,
• 1960–1961 – intern at the University of Wisconsin in Madison, USA (USSR–USA exchange),
• 1961–1962 – Head of the Department of Mathematics, Bălți Pedagogical Institute,
• 1962–1987 – Head of Department at the Institute of Mathematics of the Academy of Sciences of the MSSR,
• 1964–1966 – Associate Professor, Department of Mathematics, Technical University (part-time),
• 1966–1988 – Professor at Kishinev State University and Head of its Department of Higher Algebra until 1977 (part-time).
Scientific heritage
Theses of V. D. Belousov
• Studies in the theory of quasigroups and loops (1958) – PhD thesis.
• Systems of quasigroups with identities (1966) – doctoral thesis.
(Both of the above were defended at Lomonosov Moscow State University.)
• Field of study – theory of quasigroups and related areas
Research areas
• The general theory of quasigroups (derived operations; nuclei; regular permutations; groups associated with quasigroups; autotopies; antiautotopies; etc.)
• Classes of binary quasigroups and loops (distributive quasigroups, left distributive quasigroups, IP-quasigroups, F-quasigroups, CI-quasigroups, I-quasigroups, Bol loops, totally symmetric quasigroups, Stein quasigroups, etc.)
• Quasigroups with balanced identities; systems of quasigroups with identities (associativity, mediality, transitivity, distributivity, Stein identities, etc.)
• Functional equations on quasigroups (general associativity with identical and with distinct variables, general distributivity, mediality, etc.)
• Positional algebras (Belousov algebras), an apparatus for solving functional equations
• n-Ary and infinitary quasigroups (he laid the foundations of the theory of n-ary and infinitary quasigroups)
• Algebraic networks and quasigroups (general theory, closure conditions of configurations)
• Combinatorial questions of quasigroup theory (prolongations of quasigroups, orthogonal systems of binary and n-ary operations and quasigroups, parastrophic-orthogonal quasigroups)
Books
• Fundamentals of the theory of quasigroups and loops. Moscow: Nauka, 1967.
• Algebraic networks and quasigroups. Kishinev: Ştiinţa, 1971.
• n-Ary quasigroups. Kishinev: Ştiinţa, 1972.
• Configurations in algebraic networks. Kishinev: Ştiinţa, 1979.
• Elements of the theory of quasigroups (textbook for a special course). Kishinev: Kishinev State University, 1981.
• Latin squares and their applications. Kishinev: Ştiinţa, 1989 (with G. B. Belyavskaya).
• Mathematics in schools of Moldova (1812–1972). Kishinev: Ştiinţa, 1973 (with I. I. Lupu and Y. I. Neagu).
• Russian–Moldovan mathematics dictionary. Kishinev: Moldavian Soviet Encyclopedia, 1980 (with Y. I. Neagu).
• I. K. Man: pages of life and creativity. Kishinev: Ştiinţa, 1983 (with Y. I. Neagu).
Educational activity
Valentin Danilovich Belousov was not only a scientist but also an excellent teacher. He made important contributions to the education system of Moldova and to the training of Moldovan mathematicians. As a member of the Academy of Pedagogical Sciences (Mathematics section), he carried out extensive scientific and organizational work in the field of mathematics education. About 30 of his students defended their theses and work in many countries. With Y. I. Neagu, Belousov wrote the Moldovan–Russian mathematics dictionary, which has long been used by mathematicians in Moldova. Together with I. I. Lupu and Y. I. Neagu, he published the book Mathematics in schools of Moldova (1812–1972). For many years he chaired the jury of the school mathematical Olympiads of Moldova.
Trainees and graduates
Under the direction of Valentin Danilovich, 22 mathematicians from different republics of the former Soviet Union and from abroad defended their theses; four of them also defended doctoral dissertations.
Social work
In parallel with his scientific and pedagogical activity, Valentin Danilovich carried out extensive public work as a deputy of the Bălți City Council (1960–1962), member of the District Committee of the CPM (1967–1973), member of the Supreme Council of Moldova (1975–1980), member of the Presidium of the "Knowledge" Society, member of the editorial boards of various domestic and foreign publications, and organizing-committee member of many international conferences.
Family
His father, Daniel Afinogenovich Belousov (1897–1956), was an officer in the army of Tsarist Russia (he graduated from a military school in Tbilisi) and took part in World War I. In Moldova he worked at the post office in the city of Bălți. Belousov's mother, Elena K. Belousova (Garbu) (1897–1982), also worked at the post office.
His wife, Elizabeth Feodorovna Belousova (née Bondareva) (5 May 1925 – 26 November 1991), was a philologist who taught at the State University of Moldova.
Children: Alexander Valentinovich (3 October 1948 – 3 September 1998), PhD in physics, senior research fellow of the Academy of Sciences of Moldova; Tatiana Valentinovna Kravchenko (born 16 February 1952), a neurologist.
His brother, Victor Danilovich Belousov (b. 1927), MD, is an orthopedic traumatologist and the author of the monographs "Road traffic accidents: first aid to victims" (1984) and "Conservative treatment of false joints of long bones" (1990).
Awards and titles
For his merits in science, education and social activities, he was awarded the Order of the Red Banner of Labour (1961) and the Honorary Diploma of the Presidium of the Supreme Soviet of the Moldavian SSR (1967), and he was elected a corresponding member of the Academy of Pedagogical Sciences of the USSR (1968). He was an Honored Worker of Science of Moldova (1970), a laureate of the State Prize of Moldova in Science and Technology (1972), and an Outstanding Worker of Education of Moldova (1980).
Foreign tours
Despite the strict government controls of those years, Valentin Danilovich received permission for research trips abroad and traveled many times:
• 1960–1961 – United States
• 1964, 1976 – Hungary
• 1967, 1972, 1977 – Bulgaria
• 1968, 1974, 1981 – Yugoslavia
• 1967, 1969–1970 – Canada
• 1968 – Sierra Leone
• 1975 – East Germany
School of Belousov
The lifework of Valentin Danilovich Belousov is continued by his numerous disciples and followers in various countries, whose number is constantly increasing. In 1994, at the initiative of V. D. Belousov's students, the Institute of Mathematics and Computer Science of the ASM established the scientific journal Quasigroups and Related Systems (http://www.quasigroups.eu/), now known all over the world. For twenty years this journal has published many articles by both Moldovan and foreign specialists in the theory of quasigroups and related areas. Since 1995, the Institute of Mathematics and Computer Science of the ASM has held an annual advanced algebraic seminar in memory of Valentin Danilovich Belousov on the anniversary of his birth (20 February). At this seminar, his disciples and followers from Moldova and other countries take stock of the past year and report new results in the theory of quasigroups and related areas.
Thanks to V. D. Belousov's productive scientific and pedagogical activity, Belousov's School on the Theory of Quasigroups was founded in the Republic of Moldova; it consists of his 22 disciples (http://belousov.scerb.com/), and more than 40 followers continue his work in Moldova and abroad.
References
1. Persons: Valentin Danilovich Belousov
2. G. B. Belyavskaya, W. A. Dudek and V. A. Shcherbacov (2005). "Valentin Danilovich Belousov – his life and work". Quasigroups and Related Systems: 1–7. ISSN 1561-2848.
Valentina Harizanov
Valentina Harizanov is a Serbian-American mathematician and professor of mathematics at The George Washington University. Her main research contributions are in computable structure theory (roughly at the intersection of computability theory and model theory), where she introduced the notion of degree spectra of relations on computable structures and obtained the first significant results concerning uncountable, countable, and finite Turing degree spectra.[1] Her recent interests include algorithmic learning theory and spaces of orders on groups.
Valentina Harizanov
Nationality: Serbian-American
Alma mater: University of Wisconsin–Madison, University of Belgrade
Known for: Research in computability theory
Awards: Oscar and Shoshana Trachtenberg Prize for Faculty Scholarship (2016)
Scientific career
Fields: Mathematics, computability theory
Institutions: The George Washington University
Thesis: Degree Spectrum of a Recursive Relation on a Recursive Structure (1987)
Doctoral advisor: Terry Millar
Education
She obtained her Bachelor of Science in mathematics in 1978 at the University of Belgrade and her Ph.D. in mathematics in 1987 at the University of Wisconsin–Madison under the direction of Terry Millar.[2][3]
Career
At The George Washington University, Harizanov was an assistant professor of mathematics from 1987 to 1993, an associate professor of mathematics from 1994 to 2002, and a professor of mathematics from 2003 to the present. She has held two visiting professor positions, one in 1994 at the University of Maryland, College Park and one in 2014 at the Kurt Gödel Research Center at the University of Vienna.[3]
Harizanov has co-directed the Center for Quantum Computing, Information, Logic, and Topology at The George Washington University since 2011.[3]
Research
In 2009, Harizanov received a grant from the National Science Foundation to research how algebraic, topological, and algorithmic properties of mathematical structures relate.[4]
Awards and honors
Harizanov won the Oscar and Shoshana Trachtenberg Prize for Faculty Scholarship from The George Washington University (GWU) in 2016.[5] This award is presented each year to a tenured GWU faculty member to recognize outstanding research accomplishments.[6] She was named MSRI Eisenbud Professor for Fall 2020.[7]
Publications
Harizanov has over 40 publications in peer-reviewed journals, including
• V.S. Harizanov, "Some effects of Ash–Nerode and other decidability conditions on degree spectra", Annals of Pure and Applied Logic 55 (1), pp. 51–65 (1991); cited 21 times according to Web of Science
In addition, she has published the following book-length survey paper and a co-edited, co-authored book:
• V.S. Harizanov, “Pure computable model theory,” in the volume: Handbook of Recursive Mathematics, vol. 1, Yu.L. Ershov, S.S. Goncharov, A. Nerode, and J.B. Remmel, editors (North-Holland, Amsterdam, 1998), pp. 3–114.
• M. Friend, N.B. Goethe, and V.S. Harizanov, Induction, Algorithmic Learning Theory, and Philosophy, Series: Logic, Epistemology, and the Unity of Science, vol. 9, Springer, Dordrecht, 304 pp., 2007.
Degree spectra of relations were introduced and first studied in Harizanov's dissertation, Degree Spectrum of a Recursive Relation on a Recursive Structure (1987).[1]
References
1. Harizanov, V.S. (1987). "Degree Spectrum of a Recursive Relation on a Recursive Structure". Ph.D. Dissertation, University of Wisconsin–Madison.
2. Valentina Harizanov at the Mathematics Genealogy Project
3. "Curriculum Vitae of Valentina Harizanov" (PDF). The George Washington University. Retrieved 15 January 2018.
4. "Award Abstract #0904101: Topics in Computable Mathematics". National Science Foundation. Retrieved 15 January 2018.
5. "Trachtenberg Research Award Winners". The George Washington University. Retrieved 15 January 2018.
6. "Oscar and Shoshana Trachtenberg Prize for Faculty Scholarship (Research)". The George Washington University. Retrieved 15 January 2018.
7. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 2021-06-07.
External links
• Valentina Harizanov's home page
Valentina Gorbachuk
Valentina Ivanivna Gorbachuk (born 1937) is a Soviet and Ukrainian mathematician, specializing in operator theory and partial differential equations.
Education and career
Gorbachuk was born in Mogilev on 25 June 1937; then part of the Soviet Union, it has since become part of Belarus. Her parents worked as an accountant and a telegraphist; in search of better work, they moved to Lutsk in what is now Ukraine when Gorbachuk was a child, and that is where she was schooled.[1]
She applied to study mathematics and mechanics at Taras Shevchenko National University of Kyiv, but was denied because of a "stay in the occupation". Instead, she went to the Lutsk Pedagogical Institute, graduating in 1959. On the advice of one of her faculty mentors there, S.I. Zuhovitsky,[1] she entered graduate study at the NASU Institute of Mathematics, as a student of Yury Berezansky, earning a candidate degree (the Soviet equivalent of a Ph.D.) in the early 1960s.[1][2]
She continued as a researcher at the Institute of Mathematics for the rest of her career, defending a Doctor of Science (equivalent of a habilitation under the former Soviet system) in 1992.[1]
Books
Gorbachuk is the coauthor, with M. L. Gorbachuk, of two books on operator theory, translated from Russian into English:
• Boundary value problems for operator differential equations (Naukova Dumka, 1984; trans., Mathematics and its Applications 48, Kluwer, 1991)[3]
• M. G. Krein’s lectures on entire operators (Operator Theory: Advances and Applications, 97, Birkhäuser, 1997)[4]
Recognition
In 1998, Gorbachuk won the State Prize of Ukraine in Science and Technology.[1]
Personal life
Gorbachuk worked closely with her husband, Miroslav L'vovich Gorbachuk (1938–2017), a mathematician with whom she shared her research interests.[1] Their son, Volodymyr Myroslavovich Gorbachuk, is an associate professor of mathematical physics at the Igor Sikorsky Kyiv Polytechnic Institute (National Technical University of Ukraine).[1][5]
References
1. "Valentina Ivanivna Gorbachuk (to 80th birthday anniversary)", Methods of Functional Analysis and Topology, 23 (3): 207–208, 2017, Zbl 1399.01003
2. Valentina Gorbachuk at the Mathematics Genealogy Project. Note that this source states her Ph.D. year as 1964; Methods of Functional Analysis and Topology states it as 1962.
3. Reviews of Boundary value problems for operator differential equations: J. Wloka, Zbl 0567.47041; J. W. Macki, MR0776604; J. Mawhin, Zbl 0751.47025
4. Reviews of M. G. Krein’s lectures on entire operators: A. Pankov, Zbl 0883.47008; Damir Z. Arov and Harry Dym, MR1466698
5. Gorbachuk Volodymyr Myroslavovich, Igor Sikorsky Kyiv Polytechnic Institute, retrieved 2023-03-03
Valeria Simoncini
Valeria Simoncini (born 1966)[1] is an Italian researcher in numerical analysis who works as a professor in the mathematics department at the University of Bologna.[2] Her research involves the computational solution of equations involving large matrices, and their applications in scientific computing.[3] She is the chair of the SIAM Activity Group on Linear Algebra.[4]
Education and career
Simoncini earned a degree from the University of Bologna in 1989, became a visiting scholar at the University of Illinois at Urbana–Champaign from 1991 to 1993, and completed her PhD at the University of Padua in 1994. After working at CNR from 1995 to 2000, she returned to Bologna as an associate professor in 2000, and was promoted to full professor in 2010.[2]
Book
With Antonio Navarra, she is the author of the book A Guide to Empirical Orthogonal Functions for Climate Data Analysis (Springer, 2010).
Recognition
Simoncini was a second-place winner of the Leslie Fox Prize for Numerical Analysis in 1997.[5] In 2014 she was elected as a fellow of the Society for Industrial and Applied Mathematics "for contributions to numerical linear algebra".[6] She was named to the 2021 class of fellows of the American Mathematical Society "for contributions to computational mathematics, in particular to numerical linear algebra".[7] In 2023, she was elected to serve on the SIAM Council.[8]
References
1. Birth year from German National Library catalog entry, retrieved 2018-12-02.
2. Curriculum vitae (PDF), January 14, 2015, retrieved 2017-08-14
3. "Research Interests and Problem Solving", Valeria Simoncini, University of Bologna, retrieved 2017-08-14
4. "SIAM Activity Groups Election Results", SIAM News, 6 December 2018
5. IMA Leslie Fox Prize for Numerical Analysis, Institute of Mathematics & its Applications, retrieved 2022-02-06
6. SIAM Fellows: Class of 2014, Society for Industrial and Applied Mathematics, retrieved 2017-08-14
7. 2021 Class of Fellows of the AMS, American Mathematical Society, retrieved 2020-11-02
8. "Welcoming the Newest Electees to the SIAM Board of Trustees and Council". SIAM News. Retrieved 2023-03-15.
External links
• Home page
• Valeria Simoncini publications indexed by Google Scholar
Valerie Thomas
Valerie L. Thomas (born February 8, 1943) is an American data scientist and inventor. She invented the illusion transmitter, for which she received a patent in 1980.[2] She was responsible for developing the digital media formats that the image processing systems used in the early years of NASA's Landsat program.[3]
Valerie Thomas
NASA photograph of Thomas next to a stack of early Landsat Computer Compatible Tapes, 1979[1]
Born: February 8, 1943, Maryland, United States
Alma mater
• Morgan State University
• George Washington University
• University of Delaware
• Simmons College Graduate School of Management
Known for: Inventor of the illusion transmitter
Scientific career
Institutions
• NASA Goddard
• UMBC
Early life and education
Thomas was born in Baltimore, Maryland.[4] She graduated from high school in 1961, during the era of integration.[5] She attended Morgan State University, where she was one of two women majoring in physics.[6] Thomas excelled in her mathematics and science courses at Morgan State University, graduating with a degree in physics with highest honors in 1964.[5]
Career
Thomas began working for NASA as a data analyst in 1964.[7][8] She developed real-time computer data systems to support satellite operations control centers (1964–1970). She oversaw the creation of the Landsat program (1970–1981), becoming an international expert in Landsat data products. Her work on this program built on that of other NASA scientists pursuing the ability to visualize Earth from space.[9]
In 1974, Thomas headed a team of approximately 50 people for the Large Area Crop Inventory Experiment (LACIE), a joint effort with the NASA Johnson Space Center, the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Department of Agriculture. An unprecedented scientific project, LACIE demonstrated the feasibility of using space technology to automate the process of predicting wheat yield on a worldwide basis.[8]
She attended an exhibition in 1976 that included an illusion of a light bulb that appeared to be lit even though it had been removed from its socket. The illusion, which involved a second light bulb and concave mirrors, inspired Thomas. Curious about how light and concave mirrors could be used in her work at NASA, she began her research in 1977, creating an experiment in which she observed how the position of a concave mirror affects the real object it reflects. Using this technology, she invented an optical device called the illusion transmitter.[6] On October 21, 1980,[7] she obtained the patent for the illusion transmitter, a device NASA continues to use today and one that is being adapted for use in surgery, as well as for televisions and video screens.[10][11] Thomas became associate chief of the Space Science Data Operations Office at NASA.[12] Thomas's invention has been depicted in a children's fictional book, on television, and in video games.[5]
In 1985, as the NSSDC Computer Facility manager, Thomas was responsible for a major consolidation and reconfiguration of two previously independent computer facilities, and infused them with new technology. She then served as the Space Physics Analysis Network (SPAN)[13] project manager from 1986 to 1990 during a period when SPAN underwent a major reconfiguration and grew from a scientific network with approximately 100 computer nodes to one directly connecting approximately 2,700 computer nodes worldwide. Thomas' team was credited with developing a computer network that connected research stations of scientists from around the world to improve scientific collaboration.[5]
In 1990, SPAN became a major part of NASA's science networking and today's Internet.[8] She also participated in projects related to Halley's Comet, ozone research, satellite technology, and the Voyager spacecraft.
She mentored countless students in the Mathematics Aerospace Research and Technology Inc. program.[14] Because of her unique career and commitment to giving back to the community, Thomas often spoke to groups of students from elementary school through university, as well as to adult groups. As a role model for aspiring young black engineers and scientists, she made hundreds of visits to schools and national meetings over the years. She has mentored many students working in summer programs at Goddard Space Flight Center. She also judged at science fairs, working with organizations such as the National Technical Association (NTA) and Women in Science and Engineering (WISE). These programs encourage students from underrepresented groups to pursue science and technology careers.[15]
At the end of August 1995, she retired from NASA and her positions of associate chief of the NASA Space Science Data Operations Office, manager of the NASA Automated Systems Incident Response Capability, and as chair of the Space Science Data Operations Office Education Committee.[8]
Retirement
After retiring, Thomas served as an associate at the UMBC Center for Multicore Hybrid Productivity Research.[16] She also continued to mentor youth through the Science Mathematics Aerospace Research and Technology, Inc. and the National Technical Association.[6]
Notable achievements
Throughout her career, Thomas held high-level positions at NASA including heading the Large Area Crop Inventory Experiment (LACIE) collaboration between NASA, NOAA, and USDA in 1974, serving as assistant program manager for Landsat/Nimbus (1975–1976), managing the NSSDC Computer Facility (1985), managing the Space Physics Analysis Network project (1986–1990), and serving as associate chief of the Space Science Data Operations Office. She authored many scientific papers and holds a patent for the illusion transmitter. For her achievements, Thomas has received numerous awards including the Goddard Space Flight Center Award of Merit and the NASA Equal Opportunity Medal.[14]
See also
• Timeline of women in science
• Mary Jackson (engineer)
• Dorothy Vaughan
• Katherine Johnson
• Claudia Alexander
• Doris Cohen
• Lynnae Quick
References
1. Smith, Yvette (January 28, 2020). "Dr. Valerie L. Thomas: The Face Behind Landsat Images". NASA.
2. US patent 4229761, Valerie L. Thomas, "Illusion Transmitter", issued October 21, 1980
3. "A Face Behind Landsat Images: Meet Dr. Valerie L. Thomas « Landsat Science". February 28, 2019. Retrieved June 10, 2020.
4. "VALERIE THOMAS (1943- )". Blackpast. April 21, 2021. Retrieved February 1, 2022.
5. "Life and Work of Valerie L. Thomas". Robin Lindeen-Blakeley. Retrieved February 21, 2021.
6. "Illusion Transmitter". Inventor of the Week. MIT. 2003. Retrieved January 7, 2020.
7. "Valerie Thomas". Inventors. The Black Inventor On-Line Museum. 2011. Retrieved November 13, 2011.
8. James L. Green (September 1995). "Valerie L. Thomas Retires". Goddard Space Flight Center. Archived from the original on December 19, 1996. Retrieved March 10, 2017.
9. Smith, Yvette (January 28, 2020). "Dr. Valerie L. Thomas: The Face Behind Landsat Images". NASA. Retrieved February 10, 2021.
10. "Valerie Thomas - Inventions, NASA, and Facts - Biography". Biography.com. A&E Television Networks. April 12, 2021 [2 April 2014]. Retrieved February 2, 2022. This technology was subsequently adopted by NASA and has since been adapted for use in surgery as well as the production of television and video screens.
11. "Valerie Thomas | Lemelson". LEMELSON-MIT. MASSACHUSETTS INSTITUTE OF TECHNOLOGY. n.d. Retrieved February 2, 2022. NASA uses the technology today, and scientists are currently working on ways to incorporate it into tools for surgeons to look inside the human body, and possibly for television sets and video screens one day.
12. "Life and Work of Valerie L. Thomas". Robin Lindeen-Blakeley. Retrieved April 28, 2020.
13. Thomas, Koblinsky, Webster, Zlotnicki, Green (1987). "NSSDC: National Space Science Data Center" (PDF).
14. Connolly, Danielle (May 15, 2019). "Make them Mainstream". Make Them Mainstream. Archived from the original on February 1, 2022. Retrieved February 1, 2022.
15. "Valerie L. Thomas Retires". nssdc.gsfc.nasa.gov. Retrieved February 25, 2021.
16. "Little Known Black History Fact: Valerie Thomas". Black America Web. October 27, 2014. Retrieved March 10, 2017.
Valery Alexeev (mathematician)
Valery Alexeev (born 1964)[1] is an American mathematician who is currently the David C. Barrow Professor at the University of Georgia and an elected Fellow of the American Mathematical Society.[2][3] He received his Ph.D. from Lomonosov Moscow State University in 1990.[4]
References
1. "Alexeev, Valery, 1964-". id.loc.gov. Retrieved January 6, 2021.
2. "Fellows". ams.org. Retrieved April 25, 2017.
3. "Valery Alexeev". uga.edu. Retrieved April 25, 2017.
4. "Valery Alexeev". genealogy.math.ndsu. Retrieved January 6, 2021.
Valery Goppa
Valery Denisovich Goppa (Russian: Вале́рий Дени́сович Го́ппа; born 1939[1]) is a Soviet and Russian mathematician.
He discovered a relation between algebraic geometry and codes using the Riemann–Roch theorem; today these codes are called algebraic geometry codes.[2] In 1981 he presented his discovery at the algebra seminar of Moscow State University.
He also constructed other classes of codes in his career, and in 1972 he won the best paper award of the IEEE Information Theory Society for his paper "A new class of linear correcting codes".[3] It is this class of codes that bears the name “Goppa code”.
Selected publications
• V. D. Goppa (1988). Geometry and Codes (Mathematics and its Applications). Berlin: Springer. ISBN 90-277-2776-7.
• E. N. Gozodnichev; V. D. Goppa (1995). Algebraic Information Theory (Series on Soviet and East European Mathematics, Vol 11). World Scientific Pub Co Inc. ISBN 981-02-0943-6.
• VD Goppa (1970). "A New Class of Linear Error Correcting Codes". Problemy Peredachi Informatsii.
• VD Goppa (1971). "Rational Representation of Codes and (L,g)-Codes". Problemy Peredachi Informatsii.
• VD Goppa (1972). "Codes Constructed on the Base of (L,g)-Codes". Probl. Peredachi Inf. 8 (2): 107–109.
• VD Goppa (1974). "Binary Symmetric Channel Capacity Is Attained with Irreducible Codes". Probl. Peredachi Inf. 10 (1): 111–112.
• VD Goppa (1974). "Correction of Arbitrary Noise by Irreducible Codes". Probl. Peredachi Inf. 10 (3): 118–119.
• VD Goppa (1977). "Codes Associated with Divisors". Probl. Peredachi Inf. 13 (1): 33–39.
• VD Goppa (1983). "Algebraico-Geometric Codes". Math. USSR Izv. 21 (1): 75–91. Bibcode:1983IzMat..21...75G. doi:10.1070/IM1983v021n01ABEH001641.
• VD Goppa (1984). "Codes and information". Russ. Math. Surv. 39 (1): 87–141. Bibcode:1984RuMaS..39...87G. doi:10.1070/RM1984v039n01ABEH003062. S2CID 250898540.
• VD Goppa (1995). "Group representations and algebraic information theory". Izv. Math. 59 (6): 1123–1147. Bibcode:1995IzMat..59.1123G. doi:10.1070/IM1995v059n06ABEH000051. S2CID 250882696.
References
1. Stepan G. Korneev, Soviet scientists - honorary members of foreign scientific societies.(in Russian) Nauka, Moscow, 1981; p. 41
2. Huffman, William Cary; Pless, Vera S. (2003). Fundamentals of Error-Correcting Codes. Cambridge University Press. p. 521. ISBN 978-0-521-78280-7. Retrieved 2016-02-02.
3. "Information Theory Society Paper Award". IEEE Information Theory Society. Retrieved March 6, 2013.
• David Joyner (23 August 2002). "A brief guide to Goppa codes".
Valery Senderov
Valery Senderov (Russian: Валерий Сендеров; 17 March 1945 – 12 November 2014) was a Soviet dissident, mathematician, teacher, and advocate of human rights known for his struggle against state-sponsored antisemitism.
Valery Senderov
Born: 17 March 1945, Moscow, Soviet Union
Died: 12 November 2014 (aged 69), Moscow, Russia
Nationality: Russian
Alma mater: Moscow Institute of Physics and Technology
Scientific career
Fields: Mathematics, politics
Biography
Senderov was born on 17 March 1945 in Moscow. In 1962, he was accepted at the prestigious Moscow Institute of Physics and Technology, where he studied mathematics. In 1968, just before completing his doctoral dissertation, Senderov was expelled for the dissemination of "philosophical literature", which was a euphemism for anything that was viewed by the censors as being anti-Soviet. He was given the opportunity to complete his degree in 1970.[1]
In the 1970s, Senderov taught mathematics at the Second Mathematical School in Moscow. Toward the end of the decade, he joined the National Alliance of Russian Solidarists, an anticommunist organization headed by Russian emigres, and also the International Society for Human Rights. In the 1980s, Senderov became one of the leaders of the International Society for Human Rights and one of the founders of the Free Interprofessional Association of Workers, the first labor union in the Soviet Union that sought to be free of government control.[1][2]
In 1982, Senderov was arrested by the KGB for publishing anticommunist articles in Russian-language newspapers printed abroad, in particular the magazine Posev (Sowing) and the newspaper Russkaya Mysl. After his arrest, Senderov openly admitted to the KGB that he was a member of the National Alliance of Russian Solidarists, becoming one of just two openly avowed members of this anticommunist group in the Soviet Union. At his trial, Senderov stated that he was a member of anticommunist groups and expressed that he would continue to fight against the Soviet regime even after he was freed from incarceration. He was sentenced to 7 years of hard labor and a subsequent probationary exile of an additional 5 years.[1][2][3]
He was sent to a prison camp for political prisoners near Perm, where he spent much of his time in solitary confinement in a cold cell on rationed food for his refusal to comply with the rules of the prison camp. He refused to comply to protest the confiscation of his Bible and the prohibition against studying mathematics. In 1987, Senderov was released and, in 1988, became the leader of the National Alliance of Russian Solidarists in the Soviet Union, holding the first official press conference in this new role in 1988. During the period of perestroika, the National Alliance took an active part in supporting opposition parties.[1][2] Over the course of his life, Senderov authored dozens of political articles in magazines, newspapers, and anthologies, as well as a number of mathematical works dealing with functional analysis. He also wrote three books.[4][5]
Death
On 12 November 2014, he died at the age of 69 in Moscow.[1]
Struggle against antisemitism
In 1980, Senderov self-published with Boris Kanevsky a work titled "Intellectual Genocide" about the discrimination by Soviet universities against Jewish applicants. In particular, the work singled out the mechanical and mathematical departments at the prestigious Moscow State University.
Senderov shed light on the various methods used by the university administration to dissuade and reject Jewish applicants. One method was to hand-pick the most difficult problems from the International Mathematical Olympiad and give them to Jewish applicants as part of the entrance examinations, a practice specifically prohibited by the Soviet Ministry of Education. Another method was to select problems that could be solved with the standard high school curriculum, but whose solution required far more time than was allotted for the entrance exams. In addition, admission committees would ask Jewish applicants questions far outside the standard high school curriculum, or separate them into special groups and then find reasons to fail those groups during the more subjective oral exams.
In addition to describing the various methods used to reject Jewish applicants, Senderov also provided practical advice on preparing for the types of questions often asked of such applicants and using the appeals process to fight against unfair admission decisions.
In conjunction with publishing this work, Senderov became one of the founders of a set of informal courses of study under the moniker of "Jewish National University", where well-known mathematicians gave lectures to applicants who had been denied admission to Moscow State University for being Jewish.
References
1. "Умер публицист Валерий Сендеров". Grani.ru. 12 November 2014.
2. "Памяти Валерия Сендерова".
3. Łabędź, Leopold (1989). The Use and Abuse of Sovietology. Transaction Publishers. p. 170. ISBN 9781412840873.
4. "Math papers by V. A. Senderov, according to MathSciNet" (PDF).
5. "Publications of Valery Senderov". mathnet.ru.
External links
• Profile, ucsj.org, 1 November 2012; accessed 14 November 2014.
• Reference in The Day, news.google.com; accessed 14 November 2014.
• Reference in A Mathematical Medley, books.google.com; accessed 14 November 2014.
• Senderov, Valery (1989). How a Program Became a Pogrom.
Valiant–Vazirani theorem
The Valiant–Vazirani theorem is a theorem in computational complexity theory stating that if there is a polynomial time algorithm for Unambiguous-SAT, then NP = RP. It was proven by Leslie Valiant and Vijay Vazirani in their paper titled NP is as easy as detecting unique solutions published in 1986.[1] The proof is based on the Mulmuley–Vazirani–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science.
The Valiant–Vazirani theorem implies that the Boolean satisfiability problem, which is NP-complete, remains a computationally hard problem even if the input instances are promised to have at most one satisfying assignment.
Proof outline
Unambiguous-SAT is the promise problem of deciding whether a given Boolean formula that has at most one satisfying assignment is unsatisfiable or has exactly one satisfying assignment. In the first case, an algorithm for Unambiguous-SAT should reject, and in the second it should accept the formula. If the formula has more than one satisfying assignment, then there is no condition on the behavior of the algorithm. The promise problem Unambiguous-SAT can be decided by a nondeterministic Turing machine that has at most one accepting computation path. In this sense, this promise problem belongs to the complexity class UP (which is usually only defined for languages).
The proof of the Valiant–Vazirani theorem consists of a probabilistic reduction from SAT to SAT such that, with probability at least $\Omega (1/n)$, the output formula has at most one satisfying assignment, and thus satisfies the promise of the Unambiguous-SAT problem. More precisely, the reduction is a randomized polynomial-time algorithm that maps a Boolean formula $F(x_{1},\dots ,x_{n})$ with $n$ variables $x_{1},\dots ,x_{n}$ to a Boolean formula $F'(x_{1},\dots ,x_{n})$ such that
• every satisfying assignment of $F'$ also satisfies $F$, and
• if $F$ is satisfiable, then, with probability at least $\Omega (1/n)$, $F'$ has a unique satisfying assignment $(a_{1},\dots ,a_{n})$.
By running the reduction a polynomial number $t$ of times, each time with fresh independent random bits, we get formulas $F'_{1},\dots ,F'_{t}$. Choosing $t=O(n)$, we get that the probability that at least one formula $F'_{i}$ is uniquely satisfiable is at least $1/2$ if $F$ is satisfiable. This gives a Turing reduction from SAT to Unambiguous-SAT since an assumed algorithm for Unambiguous-SAT can be invoked on the $F'_{i}$. Then the self-reducibility of SAT can be used to compute a satisfying assignment, should it exist. Overall, this proves that NP = RP if Unambiguous-SAT can be solved in RP.
The idea of the reduction is to intersect the solution space of the formula $F$ with $k$ random affine hyperplanes over ${\text{GF}}(2)^{n}$, where $k\in \{1,\dots ,n\}$ is chosen uniformly at random. An alternative proof is based on the isolation lemma by Mulmuley, Vazirani, and Vazirani. They consider a more general setting, and applied to the setting here this gives an isolation probability of only $\Omega (1/n^{8})$.
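The effect of intersecting a solution space with random affine hyperplanes over GF(2) can be simulated directly. The sketch below is illustrative only: it operates on an explicit set of satisfying assignments rather than producing a CNF formula (a real reduction would encode each parity constraint as clauses), and the toy solution set is invented. It estimates how often a uniformly random choice of k hyperplanes leaves exactly one survivor:

```python
import random

def isolate(solutions, n, rng):
    """One run of the hyperplane intersection step: pick k uniformly
    from {1, ..., n}, intersect the solution set with k random affine
    hyperplanes a.x = b over GF(2), and report whether exactly one
    assignment survives."""
    k = rng.randint(1, n)
    survivors = set(solutions)
    for _ in range(k):
        a = [rng.randint(0, 1) for _ in range(n)]   # random normal vector
        b = rng.randint(0, 1)                       # random right-hand side
        survivors = {x for x in survivors
                     if sum(ai * xi for ai, xi in zip(a, x)) % 2 == b}
    return len(survivors) == 1

rng = random.Random(0)
n = 4
solutions = {(0, 0, 1, 1), (1, 0, 1, 0), (1, 1, 1, 1)}  # toy solution set
trials = 2000
hits = sum(isolate(solutions, n, rng) for _ in range(trials))
print(f"unique-survivor rate: {hits / trials:.2f}")
```

Repeating such runs a polynomial number of times, as in the theorem, makes isolation succeed at least once with constant probability.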
References
1. Valiant, L.; Vazirani, V. (1986). "NP is as easy as detecting unique solutions" (PDF). Theoretical Computer Science. 47: 85–93. doi:10.1016/0304-3975(86)90135-0.
Validated numerics
Validated numerics, also called rigorous computation, verified computation, reliable computation, or numerical verification (German: Zuverlässiges Rechnen), is a field of numerical analysis in which computations carry mathematically rigorous error bounds, accounting for rounding, truncation, and discretization errors. Computations use interval arithmetic, and all results are represented by intervals. Validated numerics were used by Warwick Tucker to solve the 14th of Smale's problems,[1] and today the approach is recognized as a powerful tool for the study of dynamical systems.[2]
See also: Numerical analysis and Interval arithmetic
Importance
Computation without verification may cause unfortunate results. Below are some examples.
Rump's example
In the 1980s, Rump constructed an instructive example.[3][4] He evaluated a complicated expression in single, double, and extended precision; the results appeared consistent with one another and therefore correct, yet even their sign differed from the true value.
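Rump's expression is f(x, y) = 333.75y⁶ + x²(11x²y² − y⁶ − 121y⁴ − 2) + 5.5y⁸ + x/(2y) at x = 77617, y = 33096. The sketch below compares naive double-precision evaluation against exact rational arithmetic (the constants 333.75 and 5.5 are exactly representable in binary, so `Fraction` of them is exact):

```python
from fractions import Fraction

def f(x, y, num=float):
    """Rump's expression, evaluated with the number type `num`."""
    c1, c2 = num(333.75), num(5.5)  # exact binary fractions
    return (c1 * y**6
            + x**2 * (11 * x**2 * y**2 - y**6 - 121 * y**4 - 2)
            + c2 * y**8
            + x / (2 * y))

naive = f(77617.0, 33096.0)                           # IEEE double precision
exact = f(Fraction(77617), Fraction(33096), Fraction)  # exact rationals
print(naive)          # catastrophic cancellation: wildly wrong
print(float(exact))   # ≈ -0.8273960599, the true value
```

The massive intermediate terms (on the order of 10³⁶) cancel almost completely, so floating-point evaluation loses every significant digit; exact rational arithmetic recovers the true value of about −0.827396.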
Phantom solution
Breuer–Plum–McKenna used a spectral method to solve the boundary value problem of the Emden equation and reported that an asymmetric solution was obtained.[5] This conflicted with the theoretical study by Gidas–Ni–Nirenberg, which claimed that no asymmetric solution exists.[6] The solution obtained by Breuer–Plum–McKenna was a phantom solution caused by discretization error. Such cases are rare, but they show that numerical solutions of differential equations must be verified before rigorous conclusions are drawn from them.
Accidents caused by numerical errors
The following examples are known as accidents caused by numerical errors:
• Failure of intercepting missiles in the Gulf War (1991)[7]
• Failure of the Ariane 5 rocket (1996)[8]
• Mistakes in election result totalization[9]
Main topics
The study of validated numerics is divided into the following fields:
• Verification in numerical linear algebra
• Validating numerical solutions of a given system of linear equations[10][11]
• Validating numerically obtained eigenvalues[12][13][14]
• Rigorously computing determinants[15]
• Validating numerical solutions of matrix equations[16][17][18][19][20][21][22]
• Verification of special functions:
• Gamma function[23][24]
• Elliptic functions[25]
• Hypergeometric functions[26]
• Hurwitz zeta function[27]
• Bessel function
• Matrix function[28][29][30]
• Verification of numerical quadrature[31][32][33]
• Verification of nonlinear equations (The Kantorovich theorem,[34] Krawczyk method, interval Newton method, and the Durand–Kerner–Aberth method are studied.)
• Verification for solutions of ODEs and PDEs[35] (for PDEs, tools from functional analysis are used[34])
• Verification of linear programming[36]
• Verification of computational geometry
• Verification at high-performance computing environment
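As a taste of the nonlinear-equation topic above, here is a sketch of the interval Newton method for $f(x)=x^{2}-2$ on $[1,2]$. It uses plain floating point without outward rounding, so it illustrates the contraction argument rather than a fully rigorous enclosure; the interval extension of $f'$ is hard-coded for this example.

```python
def idiv(a, b):
    """Interval division [a] / [b], assuming 0 is not contained in [b]."""
    q = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(q), max(q))

def newton_step(f, Fprime, X):
    """One interval Newton step: N(X) = m - f(m)/F'(X), intersected with X.
    If N(X) lies strictly inside X, then X contains a unique zero of f."""
    m = 0.5 * (X[0] + X[1])
    lo, hi = idiv((f(m), f(m)), Fprime(X))
    N = (m - hi, m - lo)
    return (max(N[0], X[0]), min(N[1], X[1]))

f = lambda x: x * x - 2.0                      # zero at sqrt(2)
Fprime = lambda X: (2.0 * X[0], 2.0 * X[1])    # f'(x) = 2x, increasing for X > 0

X = (1.0, 2.0)
for _ in range(3):
    X = newton_step(f, Fprime, X)
print(X)   # a narrow interval enclosing sqrt(2)
```

When the Newton image lands strictly inside the current interval, the method simultaneously proves existence and uniqueness of a zero there; a production implementation (as in INTLAB or kv) would additionally round all interval operations outward.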
See also: numerical methods for ordinary differential equations, numerical linear algebra, numerical quadrature, and computational geometry
Tools
• INTLAB: a library written in MATLAB/GNU Octave.
• kv: a library written in C++. It can produce multiple-precision results using GNU MPFR.
• kv on GitHub
• Arb: a library written in C. It is capable of rigorously computing various special functions.
• arb on GitHub
• CAPD: a collection of flexible C++ modules, mainly designed for computing homology of sets and maps and for validated numerics for dynamical systems.
• JuliaIntervals on GitHub (a library written in Julia)
• Boost Safe Numerics: a header-only C++ library of validated replacements for all built-in integer types.
• Safe numerics on GitHub
See also
• Computer-assisted proof
• Interval arithmetic
• Affine arithmetic
• INTLAB (Interval Laboratory)
• Automatic differentiation
• wikibooks:Numerical calculations and rigorous mathematics
• Kantorovich theorem
• Gershgorin circle theorem
• Ulrich W. Kulisch
References
1. Tucker, Warwick. (1999). "The Lorenz attractor exists." Comptes Rendus de l'Académie des Sciences-Series I-Mathematics, 328(12), 1197–1202.
2. Zin Arai, Hiroshi Kokubu, Paweł Pilarczyk. Recent Development in Rigorous Computational Methods in Dynamical Systems.
3. Rump, Siegfried M. (1988). "Algorithms for verified inclusions: Theory and practice." In Reliability in computing (pp. 109–126). Academic Press.
4. Loh, Eugene; Walster, G. William (2002). Rump's example revisited. Reliable Computing, 8(3), 245-248.
5. Breuer, B.; Plum, Michael; McKenna, Patrick J. (2001). "Inclusions and existence proofs for solutions of a nonlinear boundary value problem by spectral numerical methods." In Topics in Numerical Analysis (pp. 61–77). Springer, Vienna.
6. Gidas, B.; Ni, Wei-Ming; Nirenberg, Louis (1979). "Symmetry and related properties via the maximum principle." Communications in Mathematical Physics, 68(3), 209–243.
7. "The Patriot Missile Failure".
8. ARIANE 5 Flight 501 Failure, http://sunnyday.mit.edu/nasa-class/Ariane5-report.html
9. Rounding error changes Parliament makeup
10. Yamamoto, T. (1984). Error bounds for approximate solutions of systems of equations. Japan Journal of Applied Mathematics, 1(1), 157.
11. Oishi, S., & Rump, S. M. (2002). Fast verification of solutions of matrix equations. Numerische Mathematik, 90(4), 755-773.
12. Yamamoto, T. (1980). Error bounds for computed eigenvalues and eigenvectors. Numerische Mathematik, 34(2), 189-199.
13. Yamamoto, T. (1982). Error bounds for computed eigenvalues and eigenvectors. II. Numerische Mathematik, 40(2), 201-206.
14. Mayer, G. (1994). Result verification for eigenvectors and eigenvalues. Topics in Validated Computations, Elsevier, Amsterdam, 209-276.
15. Ogita, T. (2008). Verified Numerical Computation of Matrix Determinant. SCAN’2008 El Paso, Texas September 29–October 3, 2008, 86.
16. Shinya Miyajima, Verified computation for the Hermitian positive definite solution of the conjugate discrete-time algebraic Riccati equation, Journal of Computational and Applied Mathematics, Volume 350, Pages 80-86, April 2019.
17. Shinya Miyajima, Fast verified computation for the minimal nonnegative solution of the nonsymmetric algebraic Riccati equation, Computational and Applied Mathematics, Volume 37, Issue 4, Pages 4599-4610, September 2018.
18. Shinya Miyajima, Fast verified computation for the solution of the T-congruence Sylvester equation, Japan Journal of Industrial and Applied Mathematics, Volume 35, Issue 2, Pages 541-551, July 2018.
19. Shinya Miyajima, Fast verified computation for the solvent of the quadratic matrix equation, The Electronic Journal of Linear Algebra, Volume 34, Pages 137-151, March 2018
20. Shinya Miyajima, Fast verified computation for solutions of algebraic Riccati equations arising in transport theory, Numerical Linear Algebra with Applications, Volume 24, Issue 5, Pages 1-12, October 2017.
21. Shinya Miyajima, Fast verified computation for stabilizing solutions of discrete-time algebraic Riccati equations, Journal of Computational and Applied Mathematics, Volume 319, Pages 352-364, August 2017.
22. Shinya Miyajima, Fast verified computation for solutions of continuous-time algebraic Riccati equations, Japan Journal of Industrial and Applied Mathematics, Volume 32, Issue 2, Pages 529-544, July 2015.
23. Rump, Siegfried M. (2014). Verified sharp bounds for the real gamma function over the entire floating-point range. Nonlinear Theory and Its Applications, IEICE, 5(3), 339-348.
24. Yamanaka, Naoya; Okayama, Tomoaki; Oishi, Shin’ichi (2015, November). Verified Error Bounds for the Real Gamma Function Using Double Exponential Formula over Semi-infinite Interval. In International Conference on Mathematical Aspects of Computer and Information Sciences (pp. 224-228). Springer.
25. Johansson, Fredrik (2019). Numerical Evaluation of Elliptic Functions, Elliptic Integrals and Modular Forms. In Elliptic Integrals, Elliptic Functions and Modular Forms in Quantum Field Theory (pp. 269-293). Springer, Cham.
26. Johansson, Fredrik (2019). Computing Hypergeometric Functions Rigorously. ACM Transactions on Mathematical Software (TOMS), 45(3), 30.
27. Johansson, Fredrik (2015). Rigorous high-precision computation of the Hurwitz zeta function and its derivatives. Numerical Algorithms, 69(2), 253-270.
28. Miyajima, S. (2018). Fast verified computation for the matrix principal pth root. Journal of Computational and Applied Mathematics, 330, 276-288.
29. Miyajima, S. (2019). Verified computation for the matrix principal logarithm. Linear Algebra and its Applications, 569, 38-61.
30. Miyajima, S. (2019). Verified computation of the matrix exponential. Advances in Computational Mathematics, 45(1), 137-152.
31. Johansson, Fredrik (2017). Arb: efficient arbitrary-precision midpoint-radius interval arithmetic. IEEE Transactions on Computers, 66(8), 1281-1292.
32. Johansson, Fredrik (2018, July). Numerical integration in arbitrary-precision ball arithmetic. In International Congress on Mathematical Software (pp. 255-263). Springer, Cham.
33. Johansson, Fredrik; Mezzarobba, Marc (2018). Fast and Rigorous Arbitrary-Precision Computation of Gauss--Legendre Quadrature Nodes and Weights. SIAM Journal on Scientific Computing, 40(6), C726-C747.
34. Eberhard Zeidler, Nonlinear Functional Analysis and Its Applications I-V. Springer Science & Business Media.
35. Mitsuhiro T. Nakao, Michael Plum, Yoshitaka Watanabe (2019) Numerical Verification Methods and Computer-Assisted Proofs for Partial Differential Equations (Springer Series in Computational Mathematics).
36. Oishi, Shin’ichi; Tanabe, Kunio (2009). Numerical Inclusion of Optimum Point for Linear Programming. JSIAM Letters, 1, 5-8.
Further reading
• Tucker, Warwick (2011). Validated Numerics: A Short Introduction to Rigorous Computations. Princeton University Press.
• Moore, Ramon Edgar, Kearfott, R. Baker., Cloud, Michael J. (2009). Introduction to Interval Analysis. Society for Industrial and Applied Mathematics.
• Rump, Siegfried M. (2010). Verification methods: Rigorous results using floating-point arithmetic. Acta Numerica, 19, 287–449.
External links
• Validated Numerics for Pedestrians
• Reliable Computing, An open electronic journal devoted to numerical computations with guaranteed accuracy, bounding of ranges, mathematical proofs based on floating-point arithmetic, and other theory and applications of interval arithmetic and directed rounding.
Valuation (geometry)
In geometry, a valuation is a finitely additive function from a collection of subsets of a set $X$ to an abelian semigroup. For example, Lebesgue measure is a valuation on finite unions of convex bodies of $\mathbb {R} ^{n}.$ Other examples of valuations on finite unions of convex bodies of $\mathbb {R} ^{n}$ are surface area, mean width, and Euler characteristic.
In geometry, continuity (or smoothness) conditions are often imposed on valuations, but there are also purely discrete facets of the theory. In fact, the concept of valuation has its origin in the dissection theory of polytopes and in particular Hilbert's third problem, which has grown into a rich theory reliant on tools from abstract algebra.
Definition
Let $X$ be a set, and let ${\mathcal {S}}$ be a collection of subsets of $X.$ A function $\phi $ on ${\mathcal {S}}$ with values in an abelian semigroup $R$ is called a valuation if it satisfies
$\phi (A\cup B)+\phi (A\cap B)=\phi (A)+\phi (B)$
whenever $A,$ $B,$ $A\cup B,$ and $A\cap B$ are elements of ${\mathcal {S}}.$ If $\emptyset \in {\mathcal {S}},$ then one always assumes $\phi (\emptyset )=0.$
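The defining identity is easy to check for the simplest example, cardinality on finite sets (one of the examples given below). A quick randomized sanity check in Python:

```python
import random

random.seed(0)

def is_valuation_pair(phi, A, B):
    # the defining identity: phi(A ∪ B) + phi(A ∩ B) = phi(A) + phi(B)
    return phi(A | B) + phi(A & B) == phi(A) + phi(B)

universe = range(100)
for _ in range(1000):
    A = set(random.sample(universe, 30))
    B = set(random.sample(universe, 30))
    assert is_valuation_pair(len, A, B)
print("cardinality satisfies the valuation identity on all sampled pairs")
```

The identity holds here by inclusion–exclusion: each element of $A\cup B$ is counted once on each side, and each element of $A\cap B$ twice.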
Examples
Some common examples of ${\mathcal {S}}$ are
• the convex bodies in $\mathbb {R} ^{n}$
• compact convex polytopes in $\mathbb {R} ^{n}$
• convex cones
• smooth compact polyhedra in a smooth manifold $X$
Let ${\mathcal {K}}(\mathbb {R} ^{n})$ be the set of convex bodies in $\mathbb {R} ^{n}.$ Then some valuations on ${\mathcal {K}}(\mathbb {R} ^{n})$ are
• the Euler characteristic $\chi :{\mathcal {K}}(\mathbb {R} ^{n})\to \mathbb {Z} $
• Lebesgue measure restricted to ${\mathcal {K}}(\mathbb {R} ^{n})$
• intrinsic volume (and, more generally, mixed volume)
• the map $A\mapsto h_{A},$ where $h_{A}$ is the support function of $A$
Some other valuations are
• the lattice point enumerator $P\mapsto |\mathbb {Z} ^{n}\cap P|$, where $P$ is a lattice polytope
• cardinality, on the family of finite sets
Valuations on convex bodies
From here on, let $V=\mathbb {R} ^{n}$, let ${\mathcal {K}}(V)$ be the set of convex bodies in $V$, and let $\phi $ be a valuation on ${\mathcal {K}}(V)$.
We say $\phi $ is translation invariant if, for all $K\in {\mathcal {K}}(V)$ and $x\in V$, we have $\phi (K+x)=\phi (K)$.
Let $(K,L)\in {\mathcal {K}}(V)^{2}$. The Hausdorff distance $d_{H}(K,L)$ is defined as
$d_{H}(K,L)=\inf\{\varepsilon >0:K\subset L_{\varepsilon }{\text{ and }}L\subset K_{\varepsilon }\},$
where $K_{\varepsilon }$ is the $\varepsilon $-neighborhood of $K$ under some Euclidean inner product. Equipped with this metric, ${\mathcal {K}}(V)$ is a locally compact space.
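For intervals on the real line, i.e. the convex bodies of $\mathbb {R} ^{1},$ the Hausdorff distance has a simple closed form, which makes the definition concrete. A small sketch (the closed form below is a standard specialization of the definition, stated here without proof):

```python
def hausdorff_1d(K, L):
    """Hausdorff distance between closed intervals K = [a, b] and L = [c, d]:
    the smallest eps such that K lies in the eps-neighborhood of L and
    vice versa, which for intervals reduces to max(|a - c|, |b - d|)."""
    return max(abs(K[0] - L[0]), abs(K[1] - L[1]))

print(hausdorff_1d((0.0, 1.0), (0.5, 2.0)))   # 1.0
```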
The space of continuous, translation-invariant valuations from ${\mathcal {K}}(V)$ to $\mathbb {C} $ is denoted by $\operatorname {Val} (V).$
The topology on $\operatorname {Val} (V)$ is the topology of uniform convergence on compact subsets of ${\mathcal {K}}(V).$ Equipped with the norm
$\|\phi \|=\max\{|\phi (K)|:K\subset B\},$
where $B\subset V$ is a bounded subset with nonempty interior, $\operatorname {Val} (V)$ is a Banach space.
Homogeneous valuations
A translation-invariant continuous valuation $\phi \in \operatorname {Val} (V)$ is said to be $i$-homogeneous if
$\phi (\lambda K)=\lambda ^{i}\phi (K)$
for all $\lambda >0$ and $K\in {\mathcal {K}}(V).$ The subset $\operatorname {Val} _{i}(V)$ of $i$-homogeneous valuations is a vector subspace of $\operatorname {Val} (V).$ McMullen's decomposition theorem[1] states that
$\operatorname {Val} (V)=\bigoplus _{i=0}^{n}\operatorname {Val} _{i}(V),\qquad n=\dim V.$
In particular, the degree of a homogeneous valuation is always an integer between $0$ and $n=\operatorname {dim} V.$
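A classical illustration of this decomposition is the Steiner formula: expanding the volume of a parallel body in the dilation parameter exhibits the homogeneous components directly.

```latex
% Steiner formula: for a convex body K in R^n and the unit ball B^n,
% where \kappa_j denotes the volume of the unit j-ball and V_i is the
% i-th intrinsic volume, an i-homogeneous valuation:
\[
  \operatorname{vol}_n(K + \lambda B^n)
    = \sum_{i=0}^{n} \lambda^{\,n-i}\, \kappa_{n-i}\, V_i(K).
\]
% For n = 2 this reads
\[
  \operatorname{area}(K + \lambda B^2)
    = \operatorname{area}(K) + \lambda \cdot \operatorname{perimeter}(K) + \pi \lambda^2,
\]
% exhibiting the 2-, 1-, and 0-homogeneous pieces of the valuation
% K \mapsto \operatorname{vol}_2(K + \lambda B^2).
```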
Valuations are not only graded by the degree of homogeneity, but also by the parity with respect to the reflection through the origin, namely
$\operatorname {Val} _{i}=\operatorname {Val} _{i}^{+}\oplus \operatorname {Val} _{i}^{-},$
where $\phi \in \operatorname {Val} _{i}^{\epsilon }$ with $\epsilon \in \{+,-\}$ if and only if $\phi (-K)=\epsilon \phi (K)$ for all convex bodies $K.$ The elements of $\operatorname {Val} _{i}^{+}$ and $\operatorname {Val} _{i}^{-}$ are said to be even and odd, respectively.
It is a simple fact that $\operatorname {Val} _{0}(V)$ is $1$-dimensional and spanned by the Euler characteristic $\chi ,$ that is, consists of the constant valuations on ${\mathcal {K}}(V).$
In 1957 Hadwiger[2] proved that $\operatorname {Val} _{n}(V)$ (where $n=\dim V$) coincides with the $1$-dimensional space of Lebesgue measures on $V.$
A valuation $\phi \in \operatorname {Val} (\mathbb {R} ^{n})$ is simple if $\phi (K)=0$ for all convex bodies with $\dim K<n.$ Schneider[3] in 1996 described all simple valuations on $\mathbb {R} ^{n}$: they are given by
$\phi (K)=c\operatorname {vol} (K)+\int _{S^{n-1}}f(\theta )d\sigma _{K}(\theta ),$
where $c\in \mathbb {C} ,$ $f\in C(S^{n-1})$ is an arbitrary odd function on the unit sphere $S^{n-1}\subset \mathbb {R} ^{n},$ and $\sigma _{K}$ is the surface area measure of $K.$ In particular, any simple valuation is the sum of an $n$- and an $(n-1)$-homogeneous valuation. This in turn implies that an $i$-homogeneous valuation is uniquely determined by its restrictions to all $(i+1)$-dimensional subspaces.
Embedding theorems
The Klain embedding is a linear injection of $\operatorname {Val} _{i}^{+}(V),$ the space of even $i$-homogeneous valuations, into the space of continuous sections of a canonical complex line bundle over the Grassmannian $\operatorname {Gr} _{i}(V)$ of $i$-dimensional linear subspaces of $V.$ Its construction is based on Hadwiger's characterization[2] of $n$-homogeneous valuations. If $\phi \in \operatorname {Val} _{i}(V)$ and $E\in \operatorname {Gr} _{i}(V),$ then the restriction $\phi |_{E}$ is an element of $\operatorname {Val} _{i}(E),$ and by Hadwiger's theorem it is a Lebesgue measure. Hence
$\operatorname {Kl} _{\phi }(E)=\phi |_{E}$
defines a continuous section of the line bundle $Dens$ over $\operatorname {Gr} _{i}(V)$ with fiber over $E$ equal to the $1$-dimensional space $\operatorname {Dens} (E)$ of densities (Lebesgue measures) on $E.$
Theorem (Klain[4]). The linear map $\operatorname {Kl} :\operatorname {Val} _{i}^{+}(V)\to C(\operatorname {Gr} _{i}(V),\operatorname {Dens} )$ is injective.
A different injection, known as the Schneider embedding, exists for odd valuations. It is based on Schneider's description of simple valuations.[3] It is a linear injection of $\operatorname {Val} _{i}^{-}(V),$ the space of odd $i$-homogeneous valuations, into a certain quotient of the space of continuous sections of a line bundle over the partial flag manifold of cooriented pairs $(F^{i}\subset E^{i+1}).$ Its definition is reminiscent of the Klain embedding, but more involved. Details can be found in [5].
The Goodey-Weil embedding is a linear injection of $\operatorname {Val} _{k},$ the space of $k$-homogeneous valuations, into the space of distributions on the $k$-fold product of the $(n-1)$-dimensional sphere. It is nothing but the Schwartz kernel of a natural polarization that any $\phi \in \operatorname {Val} _{k}(V)$ admits, namely as a functional on the $k$-fold product of $C^{2}(S^{n-1}),$ the latter space of functions having the geometric meaning of differences of support functions of smooth convex bodies. For details, see [5].
Irreducibility Theorem
The classical theorems of Hadwiger, Schneider and McMullen give fairly explicit descriptions of valuations that are homogeneous of degree $1,$ $n-1,$ and $n=\operatorname {dim} V.$ But for degrees $1<i<n-1$ very little was known before the turn of the 21st century. McMullen's conjecture is the statement that the valuations
$\phi _{A}(K)=\operatorname {vol} _{n}(K+A),\qquad A\in {\mathcal {K}}(V),$
span a dense subspace of $\operatorname {Val} (V).$ McMullen's conjecture was confirmed by Alesker in a much stronger form, which became known as the Irreducibility Theorem:
Theorem (Alesker[6]). For every $0\leq i\leq n,$ the natural action of $GL(V)$ on the spaces $\operatorname {Val} _{i}^{+}(V)$ and $\operatorname {Val} _{i}^{-}(V)$ is irreducible.
Here the action of the general linear group $GL(V)$ on $\operatorname {Val} (V)$ is given by
$(g\cdot \phi )(K)=\phi (g^{-1}K).$
The proof of the Irreducibility Theorem is based on the embedding theorems of the previous section and Beilinson-Bernstein localization.
Smooth valuations
A valuation $\phi \in \operatorname {Val} (V)$ is called smooth if the map $g\mapsto g\cdot \phi $ from $GL(V)$ to $\operatorname {Val} (V)$ is smooth. In other words, $\phi $ is smooth if and only if $\phi $ is a smooth vector of the natural representation of $GL(V)$ on $\operatorname {Val} (V).$ The space of smooth valuations $\operatorname {Val} ^{\infty }(V)$ is dense in $\operatorname {Val} (V)$; it comes equipped with a natural Fréchet-space topology, which is finer than the one induced from $\operatorname {Val} (V).$
For every (complex-valued) smooth function $f$ on $\operatorname {Gr} _{i}(\mathbb {R} ^{n}),$
$\phi (K)=\int _{\operatorname {Gr} _{i}(\mathbb {R} ^{n})}\operatorname {vol} _{i}(P_{E}K)f(E)dE,$
where $P_{E}:\mathbb {R} ^{n}\to E$ denotes the orthogonal projection and $dE$ is the Haar measure, defines a smooth even valuation of degree $i.$ It follows from the Irreducibility Theorem, in combination with the Casselman-Wallach theorem, that any smooth even valuation can be represented in this way. Such a representation is sometimes called a Crofton formula.
For any (complex-valued) smooth differential form $\omega \in \Omega ^{n-1}(\mathbb {R} ^{n}\times S^{n-1})$ that is invariant under all the translations $(x,u)\mapsto (x+t,u)$ and every number $c\in \mathbb {C} ,$ integration over the normal cycle defines a smooth valuation:
$\phi (K)=c\operatorname {vol} _{n}(K)+\int _{N(K)}\omega ,\qquad K\in {\mathcal {K}}(\mathbb {R} ^{n}).$
(1)
As a set, the normal cycle $N(K)$ consists of the outward unit normals to $K.$ The Irreducibility Theorem implies that every smooth valuation is of this form.
Operations on translation-invariant valuations
There are several natural operations defined on the subspace of smooth valuations $\operatorname {Val} ^{\infty }(V)\subset \operatorname {Val} (V).$ The most important one is the product of two smooth valuations. Together with pullback and pushforward, this operation extends to valuations on manifolds.
Exterior product
Let $V,W$ be finite-dimensional real vector spaces. There exists a bilinear map, called the exterior product,
$\boxtimes :\operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(W)\to \operatorname {Val} (V\times W)$
which is uniquely characterized by the following two properties:
• it is continuous with respect to the usual topologies on $\operatorname {Val} $ and $\operatorname {Val} ^{\infty }.$
• if $\phi =\operatorname {vol} _{V}(\bullet +A)$ and $\psi =\operatorname {vol} _{W}(\bullet +B)$ where $A\in {\mathcal {K}}(V)$ and $B\in {\mathcal {K}}(W)$ are convex bodies with smooth boundary and strictly positive Gauss curvature, and $\operatorname {vol} _{V}$ and $\operatorname {vol} _{W}$ are densities on $V$ and $W,$ then
$\phi \boxtimes \psi =(\operatorname {vol} _{V}\boxtimes \operatorname {vol} _{W})(\bullet +A\times B).$
Product
The product of two smooth valuations $\phi ,\psi \in \operatorname {Val} ^{\infty }(V)$ is defined by
$(\phi \cdot \psi )(K)=(\phi \boxtimes \psi )(\Delta (K)),$
where $\Delta :V\to V\times V$ is the diagonal embedding. The product is a continuous map
$\operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V).$
Equipped with this product, $\operatorname {Val} ^{\infty }(V)$ becomes a commutative associative graded algebra with the Euler characteristic as the multiplicative identity.
Alesker-Poincaré duality
By a theorem of Alesker, the restriction of the product
$\operatorname {Val} _{k}^{\infty }(V)\times \operatorname {Val} _{n-k}^{\infty }(V)\to \operatorname {Val} _{n}^{\infty }(V)=\operatorname {Dens} (V)$
is a non-degenerate pairing. This motivates the definition of the $k$-homogeneous generalized valuation, denoted $\operatorname {Val} _{k}^{-\infty }(V),$ as $\operatorname {Val} _{n-k}^{\infty }(V)^{*}\otimes \operatorname {Dens} (V),$ topologized with the weak topology. By the Alesker-Poincaré duality, there is a natural dense inclusion $\operatorname {Val} _{k}^{\infty }(V)\hookrightarrow \operatorname {Val} _{k}^{-\infty }(V).$
Convolution
Convolution is a natural product on $\operatorname {Val} ^{\infty }(V)\otimes \operatorname {Dens} (V^{*}).$ For simplicity, we fix a density $\operatorname {vol} $ on $V$ to trivialize the second factor. Define for fixed $A,B\in {\mathcal {K}}(V)$ with smooth boundary and strictly positive Gauss curvature
$\operatorname {vol} (\bullet +A)\ast \operatorname {vol} (\bullet +B)=\operatorname {vol} (\bullet +A+B).$
There is then a unique extension by continuity to a map
$\operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V),$
called the convolution.
Unlike the product, convolution respects the co-grading, namely if $\phi \in \operatorname {Val} _{n-i}^{\infty }(V),$ $\psi \in \operatorname {Val} _{n-j}^{\infty }(V),$ then $\phi \ast \psi \in \operatorname {Val} _{n-i-j}^{\infty }(V).$
For instance, let $V(K_{1},\ldots ,K_{n})$ denote the mixed volume of the convex bodies $K_{1},\ldots ,K_{n}\subset \mathbb {R} ^{n}.$ If convex bodies $A_{1},\dots ,A_{n-i}$ in $\mathbb {R} ^{n}$ with smooth boundaries and strictly positive Gauss curvature are fixed, then $\phi (K)=V(K[i],A_{1},\dots ,A_{n-i})$ defines a smooth valuation of degree $i.$ The convolution of two such valuations is
$V(\bullet [i],A_{1},\dots ,A_{n-i})\ast V(\bullet [j],B_{1},\dots ,B_{n-j})=c_{i,j}V(\bullet [n-j-i],A_{1},\dots ,A_{n-i},B_{1},\dots ,B_{n-j}),$
where $c_{i,j}$ is a constant depending only on $i,j,n.$
Fourier transform
The Alesker-Fourier transform is a natural, $GL(V)$-equivariant isomorphism of complex-valued valuations
$\mathbb {F} :\operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V^{*})\otimes \operatorname {Dens} (V),$
discovered by Alesker and enjoying many properties resembling the classical Fourier transform, which explains its name. It reverses the grading, namely $\mathbb {F} :\operatorname {Val} _{k}^{\infty }(V)\to \operatorname {Val} _{n-k}^{\infty }(V^{*})\otimes \operatorname {Dens} (V),$ and intertwines the product and the convolution:
$\mathbb {F} (\phi \cdot \psi )=\mathbb {F} \phi \ast \mathbb {F} \psi .$
Fixing for simplicity a Euclidean structure to identify $V=V^{*},$ $\operatorname {Dens} (V)=\mathbb {C} ,$ we have the identity
$\mathbb {F} ^{2}\phi (K)=\phi (-K).$
On even valuations, there is a simple description of the Fourier transform in terms of the Klain embedding: $\operatorname {Kl} _{\mathbb {F} \phi }(E)=\operatorname {Kl} _{\phi }(E^{\perp }).$ In particular, even real-valued valuations remain real-valued after the Fourier transform.
For odd valuations, the description of the Fourier transform is substantially more involved. Unlike the even case, it is no longer of purely geometric nature. For instance, the space of real-valued odd valuations is not preserved.
Pullback and pushforward
Given a linear map $f:U\to V,$ there are induced operations of pullback $f^{*}:\operatorname {Val} (V)\to \operatorname {Val} (U)$ and pushforward $f_{*}:\operatorname {Val} (U)\otimes \operatorname {Dens} (U)^{*}\to \operatorname {Val} (V)\otimes \operatorname {Dens} (V)^{*}.$ The pullback is the simpler of the two, given by $f^{*}\phi (K)=\phi (f(K)).$ It evidently preserves the parity and degree of homogeneity of a valuation. Note that the pullback does not preserve smoothness when $f$ is not injective.
The pushforward is harder to define formally. For simplicity, fix Lebesgue measures on $U$ and $V.$ The pushforward can be uniquely characterized by describing its action on valuations of the form $\operatorname {vol} (\bullet +A),$ for all $A\in {\mathcal {K}}(U),$ and then extended by continuity to all valuations using the Irreducibility Theorem. For a surjective map $f,$
$f_{*}\operatorname {vol} (\bullet +A)=\operatorname {vol} (\bullet +f(A)).$
For an inclusion $f:U\hookrightarrow V,$ choose a splitting $V=U\oplus W.$ Then
$f_{*}\operatorname {vol} (\bullet +A)(K)=\int _{W}\operatorname {vol} (K\cap (U+w)+A)dw.$
Informally, the pushforward is dual to the pullback with respect to the Alesker-Poincaré pairing: for $\phi \in \operatorname {Val} (V)$ and $\psi \in \operatorname {Val} (U)\otimes \operatorname {Dens} (U)^{*},$
$\langle f^{*}\phi ,\psi \rangle =\langle \phi ,f_{*}\psi \rangle .$
However, this identity has to be carefully interpreted since the pairing is only well-defined for smooth valuations. For further details, see [7].
Valuations on manifolds
In a series of papers beginning in 2006, Alesker laid down the foundations for a theory of valuations on manifolds that extends the theory of valuations on convex bodies. The key observation leading to this extension is that via integration over the normal cycle (1), a smooth translation-invariant valuation may be evaluated on sets much more general than convex ones. Also (1) suggests to define smooth valuations in general by dropping the requirement that the form $\omega $ be translation-invariant and by replacing the translation-invariant Lebesgue measure with an arbitrary smooth measure.
Let $X$ be an n-dimensional smooth manifold and let $\mathbb {P} _{X}=\mathbb {P} _{+}(T^{*}X)$ be the co-sphere bundle of $X,$ that is, the oriented projectivization of the cotangent bundle. Let ${\mathcal {P}}(X)$ denote the collection of compact differentiable polyhedra in $X.$ The normal cycle $N(A)\subset \mathbb {P} _{X}$ of $A\in {\mathcal {P}}(X),$ which consists of the outward co-normals to $A,$ is naturally a Lipschitz submanifold of dimension $n-1.$
For ease of presentation we henceforth assume that $X$ is oriented, even though the concept of smooth valuations in fact does not depend on orientability. The space of smooth valuations ${\mathcal {V}}^{\infty }(X)$ on $X$ consists of functions $\phi :{\mathcal {P}}(X)\to \mathbb {C} $ of the form
$\phi (A)=\int _{A}\mu +\int _{N(A)}\omega ,\qquad A\in {\mathcal {P}}(X),$
where $\mu \in \Omega ^{n}(X)$ and $\omega \in \Omega ^{n-1}(\mathbb {P} _{X})$ can be arbitrary. It was shown by Alesker that the smooth valuations on open subsets of $X$ form a soft sheaf over $X.$
Examples
The following are examples of smooth valuations on a smooth manifold $X$:
• Smooth measures on $X.$
• The Euler characteristic; this follows from the work of Chern[8] on the Gauss-Bonnet theorem, where such $\mu $ and $\omega $ were constructed to represent the Euler characteristic. In particular, $\mu $ is then the Chern-Gauss-Bonnet integrand, which is the Pfaffian of the Riemannian curvature tensor.
• If $X$ is Riemannian, then the Lipschitz-Killing valuations or intrinsic volumes $V_{0}^{X}=\chi ,V_{1}^{X},\ldots ,V_{n}^{X}=\mathrm {vol} _{X}$ are smooth valuations. If $f:X\to \mathbb {R} ^{m}$ is any isometric immersion into a Euclidean space, then $V_{i}^{X}=f^{*}V_{i}^{\mathbb {R} ^{m}},$ where $V_{i}^{\mathbb {R} ^{m}}$ denotes the usual intrinsic volumes on $\mathbb {R} ^{m}$ (see below for the definition of the pullback). The existence of these valuations is the essence of Weyl's tube formula.[9]
• Let $\mathbb {C} P^{n}$ be the complex projective space, and let $\mathrm {Gr} _{k}^{\mathbb {C} }$ denote the Grassmannian of all complex projective subspaces of fixed dimension $k.$ The function
$\phi (A)=\int _{\mathrm {Gr} _{k}^{\mathbb {C} }}\chi (A\cap E)dE,\qquad A\in {\mathcal {P}}(\mathbb {C} P^{n}),$
where the integration is with respect to the Haar probability measure on $\mathrm {Gr} _{k}^{\mathbb {C} },$ is a smooth valuation. This follows from the work of Fu.[10]
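The valuation property underlying these examples can be checked by hand in a simple case: for an axis-parallel box, the intrinsic volumes are the elementary symmetric polynomials of its side lengths, and the inclusion-exclusion relation $V_{k}(A\cup B)+V_{k}(A\cap B)=V_{k}(A)+V_{k}(B)$ holds whenever $A,$ $B,$ $A\cup B,$ and $A\cap B$ are all boxes. A minimal Python sketch (the function name is ours, not from any library):

```python
from itertools import combinations
from math import prod

def intrinsic_volumes(sides):
    """Intrinsic volumes V_0, ..., V_n of an axis-parallel box:
    V_k is the k-th elementary symmetric polynomial of the side lengths."""
    n = len(sides)
    return [sum(prod(c) for c in combinations(sides, k)) for k in range(n + 1)]

# Two overlapping rectangles: A = [0,2]x[0,1], B = [1,3]x[0,1].
# Union [0,3]x[0,1] and intersection [1,2]x[0,1] are again boxes.
VA = intrinsic_volumes([2, 1])       # [1, 3, 2]: chi, semiperimeter, area
VB = intrinsic_volumes([2, 1])
V_union = intrinsic_volumes([3, 1])
V_inter = intrinsic_volumes([1, 1])

# The valuation (inclusion-exclusion) property holds in each degree k.
for k in range(3):
    assert V_union[k] + V_inter[k] == VA[k] + VB[k]
```

Here $V_{0}$ is the Euler characteristic, $V_{1}$ the semiperimeter, and $V_{2}$ the area, matching the normalization of the intrinsic volumes on $\mathbb {R} ^{2}$.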
Filtration
The space ${\mathcal {V}}^{\infty }(X)$ admits no natural grading in general; it does, however, carry a canonical filtration
${\mathcal {V}}^{\infty }(X)=W_{0}\supset W_{1}\supset \cdots \supset W_{n}.$
Here $W_{n}$ consists of the smooth measures on $X,$ and $W_{j}$ is given by forms $\omega $ in the ideal generated by $\pi ^{*}\Omega ^{j}(X),$ where $\pi :\mathbb {P} _{X}\to X$ is the canonical projection. The associated graded vector space $\bigoplus _{i=0}^{n}W_{i}/W_{i+1}$ is canonically isomorphic to the space of smooth sections
$\bigoplus _{i=0}^{n}C^{\infty }(X,\operatorname {Val} _{i}^{\infty }(TX)),$
where $\operatorname {Val} _{i}^{\infty }(TX)$ denotes the vector bundle over $X$ such that the fiber over a point $x\in X$ is $\operatorname {Val} _{i}^{\infty }(T_{x}X),$ the space of $i$-homogeneous smooth translation-invariant valuations on the tangent space $T_{x}X.$
Product
The space ${\mathcal {V}}^{\infty }(X)$ admits a natural product. This product is continuous, commutative, associative, and compatible with the filtration:
$W_{i}\cdot W_{j}\subset W_{i+j},$
and has the Euler characteristic as the identity element. It also commutes with the restriction to embedded submanifolds, and the diffeomorphism group of $X$ acts on ${\mathcal {V}}^{\infty }(X)$ by algebra automorphisms.
For example, if $X$ is Riemannian, the Lipschitz-Killing valuations satisfy
$V_{i}^{X}\cdot V_{j}^{X}=V_{i+j}^{X}.$
The Alesker-Poincaré duality still holds. For compact $X$ it says that the pairing ${\mathcal {V}}^{\infty }(X)\times {\mathcal {V}}^{\infty }(X)\to \mathbb {C} ,$ $(\phi ,\psi )\mapsto (\phi \cdot \psi )(X)$ is non-degenerate. As in the translation-invariant case, this duality can be used to define generalized valuations. Unlike the translation-invariant case, no good definition of continuous valuations exists for valuations on manifolds.
The product of valuations closely reflects the geometric operation of intersection of subsets. Informally, consider the generalized valuation $\chi _{A}=\chi (A\cap \bullet ).$ The product is given by $\chi _{A}\cdot \chi _{B}=\chi _{A\cap B}.$ Now one can obtain smooth valuations by averaging generalized valuations of the form $\chi _{A}$; more precisely, $\phi =\int _{S}\chi _{s(A)}\,ds$ is a smooth valuation if $S$ is a sufficiently large measured family of diffeomorphisms. Then one has
$\int _{S}\chi _{s(A)}ds\cdot \int _{S'}\chi _{s'(B)}ds'=\int _{S\times S'}\chi _{s(A)\cap s'(B)}dsds',$
see Fu.[11]
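The informal identity $\chi _{A}\cdot \chi _{B}=\chi _{A\cap B}$ rests on the fact that the Euler characteristic is itself a valuation: $\chi (A\cup B)=\chi (A)+\chi (B)-\chi (A\cap B).$ For finite simplicial complexes this can be checked directly, since $\chi $ is an alternating count of simplices. A small self-contained Python check (the helper names are our own):

```python
from itertools import combinations

def closure(maximal_simplices):
    """All nonempty faces of the given maximal simplices."""
    faces = set()
    for s in maximal_simplices:
        for k in range(1, len(s) + 1):
            faces.update(frozenset(f) for f in combinations(sorted(s), k))
    return faces

def euler_char(cplx):
    """chi = sum over simplices of (-1)^dim, dim = #vertices - 1."""
    return sum((-1) ** (len(s) - 1) for s in cplx)

# A circle as the union of two arcs on vertices 0, 1, 2, 3.
A = closure([{0, 1}, {1, 2}])   # arc 0-1-2,  chi = 3 - 2 = 1
B = closure([{2, 3}, {3, 0}])   # arc 2-3-0,  chi = 1
assert euler_char(A) == euler_char(B) == 1
assert euler_char(A & B) == 2   # the two endpoints {0} and {2}
assert euler_char(A | B) == 0   # the whole circle
# Inclusion-exclusion: chi(A u B) + chi(A n B) = chi(A) + chi(B)
assert euler_char(A | B) + euler_char(A & B) == euler_char(A) + euler_char(B)
```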
Pullback and pushforward
Every smooth immersion $f:X\to Y$ of smooth manifolds induces a pullback map $f^{*}:{\mathcal {V}}^{\infty }(Y)\to {\mathcal {V}}^{\infty }(X).$ If $f$ is an embedding, then
$(f^{*}\phi )(A)=\phi (f(A)),\qquad A\in {\mathcal {P}}(X).$
The pullback is a morphism of filtered algebras. Every smooth proper submersion $f:X\to Y$ defines a pushforward map $f^{*}:{\mathcal {V}}^{\infty }(X)\to {\mathcal {V}}^{\infty }(Y)$ by
$(f_{*}\phi )(A)=\phi (f^{-1}(A)),\qquad A\in {\mathcal {P}}(Y).$
The pushforward is compatible with the filtration as well: $f_{*}:W_{i}(X)\to W_{i-(\dim X-\dim Y)}(Y).$
For general smooth maps, one can define pullback and pushforward for generalized valuations under some restrictions.
Applications in integral geometry
Let $M$ be a Riemannian manifold and let $G$ be a Lie group of isometries of $M$ acting transitively on the sphere bundle $SM.$ Under these assumptions the space ${\mathcal {V}}^{\infty }(M)^{G}$ of $G$-invariant smooth valuations on $M$ is finite-dimensional; let $\phi _{1},\ldots ,\phi _{m}$ be a basis. Let $A,B\in {\mathcal {P}}(M)$ be differentiable polyhedra in $M.$ Then integrals of the form $\int _{G}\phi _{i}(A\cap gB)dg$ are expressible as linear combinations of $\phi _{k}(A)\phi _{l}(B)$ with coefficients $c_{i}^{kl}$ independent of $A$ and $B$:
$\int _{G}\phi _{i}(A\cap gB)dg=\sum _{k,l=1}^{m}c_{i}^{kl}\phi _{k}(A)\phi _{l}(B),\qquad A,B\in {\mathcal {P}}(M).$
(2)
Formulas of this type are called kinematic formulas. Their existence in this generality was proved by Fu.[10] For the three simply connected real space forms, that is, the sphere, Euclidean space, and hyperbolic space, they go back to Blaschke, Santaló, Chern, and Federer.
Describing the kinematic formulas explicitly is typically a difficult problem. In fact already in the step from real to complex space forms, considerable difficulties arise and these have only recently been resolved by Bernig, Fu, and Solanes.[12] [13] The key insight responsible for this progress is that the kinematic formulas contain the same information as the algebra of invariant valuations ${\mathcal {V}}^{\infty }(M)^{G}.$
For a precise statement, let
$k_{G}:{\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G}\otimes {\mathcal {V}}^{\infty }(M)^{G}$
be the kinematic operator, that is, the map determined by the kinematic formulas (2). Let
$\operatorname {pd} :{\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G*}$
denote the Alesker-Poincaré duality, which is a linear isomorphism. Finally let $m_{G}^{*}$ be the adjoint of the product map
$m_{G}:{\mathcal {V}}^{\infty }(M)^{G}\otimes {\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G}.$
The fundamental theorem of algebraic integral geometry, relating operations on valuations to integral geometry, states that if the Poincaré duality is used to identify ${\mathcal {V}}^{\infty }(M)^{G}$ with ${\mathcal {V}}^{\infty }(M)^{G*},$ then $k_{G}=m_{G}^{*}.$
See also
• Hadwiger's theorem
• Integral geometry – theory of measures on a geometrical space invariant under the symmetry group of that space
• Mixed volume
• Modular set function
• Set function – Function from sets to numbers
• Valuation (measure theory) – map in measure or domain theory
References
1. McMullen, Peter (1980), "Continuous translation-invariant valuations on the space of compact convex sets", Archiv der Mathematik, 34 (4): 377–384, doi:10.1007/BF01224974
2. Hadwiger, Hugo (1957), Vorlesungen über Inhalt, Oberfläche und Isoperimetrie, Die Grundlehren der Mathematischen Wissenschaften, vol. 93, Berlin-Göttingen-Heidelberg: Springer-Verlag, doi:10.1007/978-3-642-94702-5, ISBN 978-3-642-94703-2
3. Schneider, Rolf (1996), "Simple valuations on convex bodies", Mathematika, 43 (1): 32–39, doi:10.1112/S0025579300011578
4. Klain, Daniel A. (1995), "A short proof of Hadwiger's characterization theorem", Mathematika, 42 (2): 329–339, doi:10.1112/S0025579300014625
5. Alesker, Semyon (2018), Introduction to the theory of valuations, CBMS Regional Conference Series in Mathematics, vol. 126, Providence, RI: American Mathematical Society
6. Alesker, Semyon (2001), "Description of translation invariant valuations on convex sets with solution of P. McMullen's conjecture", Geometric and Functional Analysis, 11 (2): 244–272, doi:10.1007/PL00001675
7. Alesker, Semyon (2011), "A Fourier-type transform on translation-invariant valuations on convex sets", Israel Journal of Mathematics, 181: 189–294, doi:10.1007/s11856-011-0008-6
8. Chern, Shiing-Shen (1945), "On the curvatura integra in a Riemannian manifold", Annals of Mathematics, Second Series, 46 (4): 674–684, doi:10.2307/1969203, JSTOR 1969203
9. Weyl, Hermann (1939), "On the Volume of Tubes", American Journal of Mathematics, 61 (2): 461–472, doi:10.2307/2371513, JSTOR 2371513
10. Fu, Joseph H. G. (1990), "Kinematic formulas in integral geometry", Indiana University Mathematics Journal, 39 (4): 1115–1154, doi:10.1512/iumj.1990.39.39052
11. Fu, Joseph H. G. (2016), "Intersection theory and the Alesker product", Indiana University Mathematics Journal, 65 (4): 1347–1371, arXiv:1408.4106, doi:10.1512/iumj.2016.65.5846, S2CID 119736489
12. Bernig, Andreas; Fu, Joseph H. G.; Solanes, Gil (2014), "Integral geometry of complex space forms", Geometric and Functional Analysis, 24 (2): 403–492, arXiv:1204.0604, doi:10.1007/s00039-014-0251-z
13. Bernig, Andreas; Fu, Joseph H. G. (2011), "Hermitian integral geometry", Annals of Mathematics, Second Series, 173 (2): 907–945, doi:10.4007/annals.2011.173.2.7
Bibliography
• S. Alesker (2018). Introduction to the theory of valuations. CBMS Regional Conference Series in Mathematics, 126. American Mathematical Society, Providence, RI. ISBN 978-1-4704-4359-7.
• S. Alesker; J. H. G. Fu (2014). Integral geometry and valuations. Advanced Courses in Mathematics. CRM Barcelona. Birkhäuser/Springer, Basel. ISBN 978-1-4704-4359-7.
• D. A. Klain; G.-C. Rota (1997). Introduction to geometric probability. Lezioni Lincee. [Lincei Lectures]. Cambridge University Press. ISBN 0-521-59362-X.
• R. Schneider (2014). Convex bodies: the Brunn-Minkowski theory. Encyclopedia of Mathematics and its Applications, 151. Cambridge University Press, Cambridge. ISBN 978-1-107-60101-7.
Valuation (measure theory)
In measure theory, or at least in the approach to it via domain theory, a valuation is a map from the class of open sets of a topological space to the set of positive real numbers including infinity, with certain properties. It is a concept closely related to that of a measure, and as such it finds applications in measure theory, probability theory, and theoretical computer science.
Domain/Measure theory definition
Let $\scriptstyle (X,{\mathcal {T}})$ be a topological space: a valuation is any set function
$v:{\mathcal {T}}\to \mathbb {R} ^{+}\cup \{+\infty \}$
satisfying the following three properties
${\begin{array}{lll}v(\varnothing )=0&&\scriptstyle {\text{Strictness property}}\\v(U)\leq v(V)&{\mbox{if}}~U\subseteq V\quad U,V\in {\mathcal {T}}&\scriptstyle {\text{Monotonicity property}}\\v(U\cup V)+v(U\cap V)=v(U)+v(V)&\forall U,V\in {\mathcal {T}}&\scriptstyle {\text{Modularity property}}\,\end{array}}$
The definition immediately shows the relationship between a valuation and a measure: the properties of the two mathematical objects are often very similar, if not identical, the only difference being that the domain of a measure is the Borel algebra of the given topological space, while the domain of a valuation is the class of open sets. Further details and references can be found in Alvarez-Manilla, Edalat & Saheb-Djahromi 2000 and Goubault-Larrecq 2005.
Continuous valuation
A valuation (as defined in domain theory/measure theory) is said to be continuous if for every directed family $\scriptstyle \{U_{i}\}_{i\in I}$ of open sets (i.e. an indexed family of open sets which is also directed in the sense that for each pair of indexes $i$ and $j$ belonging to the index set $I$, there exists an index $k$ such that $\scriptstyle U_{i}\subseteq U_{k}$ and $\scriptstyle U_{j}\subseteq U_{k}$) the following equality holds:
$v\left(\bigcup _{i\in I}U_{i}\right)=\sup _{i\in I}v(U_{i}).$
This property is analogous to the τ-additivity of measures.
Simple valuation
A valuation (as defined in domain theory/measure theory) is said to be simple if it is a finite linear combination with non-negative coefficients of Dirac valuations, that is,
$v(U)=\sum _{i=1}^{n}a_{i}\delta _{x_{i}}(U)\quad \forall U\in {\mathcal {T}}$
where the coefficient $a_{i}$ is non-negative for every index $i$. Simple valuations are obviously continuous in the above sense. The supremum of a directed family of simple valuations (i.e. an indexed family of simple valuations which is also directed in the sense that for each pair of indexes $i$ and $j$ belonging to the index set $I$, there exists an index $k$ such that $\scriptstyle v_{i}(U)\leq v_{k}(U)\!$ and $\scriptstyle v_{j}(U)\leq v_{k}(U)\!$) is called a quasi-simple valuation:
${\bar {v}}(U)=\sup _{i\in I}v_{i}(U)\quad \forall U\in {\mathcal {T}}.\,$
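The defining properties are easy to verify computationally for a simple valuation. The following Python sketch (a finite toy topology; function names are ours) builds a simple valuation from Dirac valuations and checks strictness, monotonicity, and modularity:

```python
# A finite topology on X = {1, 2, 3}: closed under unions and intersections.
T = [frozenset(), frozenset({1}), frozenset({2}),
     frozenset({1, 2}), frozenset({1, 2, 3})]

def dirac(x):
    """Dirac valuation concentrated at the point x."""
    return lambda U: 1 if x in U else 0

def simple(weights):
    """Simple valuation: a non-negative combination of Dirac valuations."""
    return lambda U: sum(a * dirac(x)(U) for x, a in weights.items())

v = simple({1: 2, 2: 3})            # v = 2*delta_1 + 3*delta_2

assert v(frozenset()) == 0                                   # strictness
assert all(v(U) <= v(V) for U in T for V in T if U <= V)     # monotonicity
assert all(v(U | V) + v(U & V) == v(U) + v(V)                # modularity
           for U in T for V in T)
```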
See also
• The extension problem for a given valuation (in the sense of domain theory/measure theory) consists in finding under what type of conditions it can be extended to a measure on a proper topological space, which may or may not be the same space where it is defined: the papers Alvarez-Manilla, Edalat & Saheb-Djahromi 2000 and Goubault-Larrecq 2005 in the reference section are devoted to this aim and give also several historical details.
• The concepts of valuation on convex sets and valuation on manifolds are a generalization of valuation in the sense of domain/measure theory. A valuation on convex sets is allowed to assume complex values, and the underlying topological space is the set of non-empty convex compact subsets of a finite-dimensional vector space: a valuation on manifolds is a complex-valued finitely additive measure defined on a proper subset of the class of all compact submanifolds of the given manifold.[lower-alpha 1]
Examples
Dirac valuation
Let $\scriptstyle (X,{\mathcal {T}})$ be a topological space, and let $x$ be a point of $X$: the map
$\delta _{x}(U)={\begin{cases}0&{\mbox{if}}~x\notin U\\1&{\mbox{if}}~x\in U\end{cases}}\quad {\text{ for all }}U\in {\mathcal {T}}$
is a valuation in the domain theory/measure theory sense, called the Dirac valuation. This concept bears its origin from distribution theory, as it is an obvious transposition to valuation theory of the Dirac distribution: as seen above, Dirac valuations are the "bricks" simple valuations are made of.
See also
• Valuation (geometry)
Notes
1. Details can be found in several arXiv papers of prof. Semyon Alesker.
Works cited
• Alvarez-Manilla, Maurizio; Edalat, Abbas; Saheb-Djahromi, Nasser (2000), "An extension result for continuous valuations", Journal of the London Mathematical Society, 61 (2): 629–640, CiteSeerX 10.1.1.23.9676, doi:10.1112/S0024610700008681.
• Goubault-Larrecq, Jean (2005), "Extensions of valuations", Mathematical Structures in Computer Science, 15 (2): 271–297, doi:10.1017/S096012950400461X
External links
• Alesker, Semyon, "various preprints on valuations", arXiv preprint server, primary site at Cornell University. Several papers dealing with valuations on convex sets, valuations on manifolds and related topics.
• The nLab page on valuations
Valuation ring
In abstract algebra, a valuation ring is an integral domain D such that for every element x of its field of fractions F, at least one of x or x−1 belongs to D.
Given a field F, if D is a subring of F such that either x or x−1 belongs to D for every nonzero x in F, then D is said to be a valuation ring for the field F or a place of F. Since F in this case is indeed the field of fractions of D, a valuation ring for a field is a valuation ring. Another way to characterize the valuation rings of a field F is that valuation rings D of F have F as their field of fractions, and their ideals are totally ordered by inclusion; or equivalently their principal ideals are totally ordered by inclusion. In particular, every valuation ring is a local ring.
The valuation rings of a field are the maximal elements of the set of the local subrings in the field partially ordered by dominance or refinement,[1] where
$(A,{\mathfrak {m}}_{A})$ dominates $(B,{\mathfrak {m}}_{B})$ if $A\supseteq B$ and ${\mathfrak {m}}_{A}\cap B={\mathfrak {m}}_{B}$.[2]
Every local ring in a field K is dominated by some valuation ring of K.
An integral domain whose localization at any prime ideal is a valuation ring is called a Prüfer domain.
Definitions
There are several equivalent definitions of valuation ring (see below for the characterization in terms of dominance). For an integral domain D and its field of fractions K, the following are equivalent:
1. For every nonzero x in K, either x is in D or x−1 is in D.
2. The ideals of D are totally ordered by inclusion.
3. The principal ideals of D are totally ordered by inclusion (i.e. the elements in D are, up to units, totally ordered by divisibility.)
4. There is a totally ordered abelian group Γ (called the value group) and a valuation ν: K → Γ ∪ {∞} with D = { x ∈ K | ν(x) ≥ 0 }.
The equivalence of the first three definitions follows easily. A theorem of (Krull 1939) states that any ring satisfying the first three conditions satisfies the fourth: take Γ to be the quotient K×/D× of the unit group of K by the unit group of D, and take ν to be the natural projection. We can turn Γ into a totally ordered group by declaring the residue classes of elements of D as "positive".[lower-alpha 1]
Even further, given any totally ordered abelian group Γ, there is a valuation ring D with value group Γ (see Hahn series).
From the fact that the ideals of a valuation ring are totally ordered, one can conclude that a valuation ring is a local domain, and that every finitely generated ideal of a valuation ring is principal (i.e., a valuation ring is a Bézout domain). In fact, it is a theorem of Krull that an integral domain is a valuation ring if and only if it is a local Bézout domain.[3] It also follows from this that a valuation ring is Noetherian if and only if it is a principal ideal domain. In this case, it is either a field or it has exactly one non-zero prime ideal; in the latter case it is called a discrete valuation ring. (By convention, a field is not a discrete valuation ring.)
A value group is called discrete if it is isomorphic to the additive group of the integers, and a valuation ring has a discrete valuation group if and only if it is a discrete valuation ring.[4]
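Condition 4 can be made concrete with the p-adic valuation on $\mathbb {Q} $, whose valuation ring is the localization $\mathbb {Z} _{(p)}$ appearing in the examples below. A Python sketch (helper name is ours) verifying the valuation axioms and the dichotomy of condition 1:

```python
from fractions import Fraction

def vp(x, p):
    """p-adic valuation of a nonzero rational: the exponent of p in x."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

p = 5
xs = [Fraction(a, b) for a in range(-6, 7) for b in range(1, 7) if a != 0]
for x in xs:
    # Condition 1: x or 1/x lies in D = {v >= 0}, since vp(1/x) = -vp(x).
    assert vp(x, p) >= 0 or vp(1 / x, p) >= 0
    for y in xs:
        assert vp(x * y, p) == vp(x, p) + vp(y, p)          # multiplicativity
        if x + y != 0:
            assert vp(x + y, p) >= min(vp(x, p), vp(y, p))  # ultrametric

assert vp(Fraction(50, 3), 5) == 2 and vp(Fraction(3, 25), 5) == -2
```

The value group here is $\mathbb {Z} ,$ so $\mathbb {Z} _{(p)}$ is a discrete valuation ring, in agreement with the discussion above.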
Very rarely, valuation ring may refer to a ring that satisfies the second or third condition but is not necessarily a domain. A more common term for this type of ring is uniserial ring.
Examples
• Any field $\mathbb {F} $ is a valuation ring. For example, the field of rational functions $\mathbb {F} (X)$ of an algebraic variety $X$.[5][6]
• A simple non-example is the integral domain $\mathbb {C} [X]$ since the inverse of a generic $f/g\in \mathbb {C} (X)$ is $g/f\not \in \mathbb {C} [X]$.
• The field of formal Laurent series:
$\mathbb {F} ((X))=\left\{f(X)=\!\sum _{i>-\infty }^{\infty }a_{i}X^{i}\,:\ a_{i}\in \mathbb {F} \right\}$
has the valuation $v(f)=\min\{i:a_{i}\neq 0\},$ the least exponent with nonzero coefficient. The subring $\mathbb {F} [[X]]$ of formal power series is a valuation ring as well.
• $\mathbb {Z} _{(p)},$ the localization of the integers $\mathbb {Z} $ at the prime ideal (p), consisting of ratios where the numerator is any integer and the denominator is not divisible by p. The field of fractions is the field of rational numbers $\mathbb {Q} .$
• The ring of meromorphic functions on the entire complex plane which have a Maclaurin series (Taylor series expansion at zero) is a valuation ring. The field of fractions are the functions meromorphic on the whole plane. If f does not have a Maclaurin series then 1/f does.
• Any ring of p-adic integers $\mathbb {Z} _{p}$ for a given prime p is a local ring, with field of fractions the p-adic numbers $\mathbb {Q} _{p}$. The integral closure $\mathbb {Z} _{p}^{\text{cl}}$ of the p-adic integers in the algebraic closure $\mathbb {Q} _{p}^{\text{cl}}$ of the p-adic numbers is also a local ring, with field of fractions $\mathbb {Q} _{p}^{\text{cl}}$. Both $\mathbb {Z} _{p}$ and $\mathbb {Z} _{p}^{\text{cl}}$ are valuation rings.
• Let k be an ordered field. An element of k is called finite if it lies between two integers n < x < m; otherwise it is called infinite. The set D of finite elements of k is a valuation ring. The set of elements x such that x ∈ D and x−1 ∉ D is the set of infinitesimal elements; and an element x such that x ∉ D and x−1 ∈ D is called infinite.
• The ring F of finite elements of a hyperreal field *R (an ordered field containing the real numbers) is a valuation ring of *R. F consists of all hyperreal numbers differing from a standard real by an infinitesimal amount, which is equivalent to saying a hyperreal number x such that −n < x < n for some standard integer n. The residue field, finite hyperreal numbers modulo the ideal of infinitesimal hyperreal numbers, is isomorphic to the real numbers.
• A common geometric example comes from algebraic plane curves. Consider the polynomial ring $\mathbb {C} [x,y]$ and an irreducible polynomial $f$ in that ring. Then the ring $\mathbb {C} [x,y]/(f)$ is the ring of polynomial functions on the curve $\{(x,y):f(x,y)=0\}$. Choose a point $P=(P_{x},P_{y})\in \mathbb {C} ^{2}$ such that $f(P)=0$ and $P$ is a regular point on the curve; then the local ring $R$ at the point is a regular local ring of Krull dimension one, that is, a discrete valuation ring.
• For example, consider the inclusion $(\mathbb {C} [[X^{2}]],(X^{2}))\hookrightarrow (\mathbb {C} [[X]],(X))$. These are all subrings in the field of formal Laurent series $\mathbb {C} ((X))$.
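The order-of-vanishing valuation on formal Laurent series can be checked on truncations. Below, Laurent polynomials are stored as exponent-to-coefficient dictionaries (a representation chosen for this sketch); the valuation is the least exponent with nonzero coefficient, and $v(fg)=v(f)+v(g)$ holds because the coefficients lie in a field:

```python
from fractions import Fraction
from collections import defaultdict

def order(f):
    """v(f): least exponent with a nonzero coefficient (f must be nonzero)."""
    return min(e for e, c in f.items() if c != 0)

def mul(f, g):
    """Cauchy product of two Laurent polynomials."""
    h = defaultdict(Fraction)
    for e1, c1 in f.items():
        for e2, c2 in g.items():
            h[e1 + e2] += c1 * c2
    return dict(h)

f = {-1: Fraction(1), 2: Fraction(3)}   # x^{-1} + 3x^2,  v(f) = -1
g = {1: Fraction(2), 4: Fraction(5)}    # 2x + 5x^4,      v(g) = 1
assert order(f) == -1 and order(g) == 1
assert order(mul(f, g)) == order(f) + order(g)   # v(fg) = v(f) + v(g)
# v(f) < 0, so f is not in F[[X]]; but v(1/f) = -v(f) > 0, illustrating
# the dichotomy "f or 1/f lies in the valuation ring".
```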
Dominance and integral closure
The units, or invertible elements, of a valuation ring are the elements x in D such that x −1 is also a member of D. The other elements of D – called nonunits – do not have an inverse in D, and they form an ideal M. This ideal is maximal among the (totally ordered) ideals of D. Since M is a maximal ideal, the quotient ring D/M is a field, called the residue field of D.
In general, we say a local ring $(S,{\mathfrak {m}}_{S})$ dominates a local ring $(R,{\mathfrak {m}}_{R})$ if $S\supseteq R$ and ${\mathfrak {m}}_{S}\cap R={\mathfrak {m}}_{R}$; in other words, the inclusion $R\subseteq S$ is a local ring homomorphism. Every local ring $(A,{\mathfrak {p}})$ in a field K is dominated by some valuation ring of K. Indeed, the set consisting of all subrings R of K containing A with $1\not \in {\mathfrak {p}}R$ is nonempty and inductive; thus, it has a maximal element $R$ by Zorn's lemma. We claim R is a valuation ring. R is a local ring with maximal ideal containing ${\mathfrak {p}}R$ by maximality. Again by maximality it is also integrally closed. Now, if $x\not \in R$, then, by maximality, ${\mathfrak {p}}R[x]=R[x]$ and thus we can write:
$1=r_{0}+r_{1}x+\cdots +r_{n}x^{n},\quad r_{i}\in {\mathfrak {p}}R$.
Since $1-r_{0}$ is a unit element, this implies that $x^{-1}$ is integral over R; thus is in R. This proves R is a valuation ring. (R dominates A since its maximal ideal contains ${\mathfrak {p}}$ by construction.)
A local ring R in a field K is a valuation ring if and only if it is a maximal element of the set of all local rings contained in K partially ordered by dominance. This easily follows from the above.[lower-alpha 2]
Let A be a subring of a field K and $f:A\to k$ a ring homomorphism into an algebraically closed field k. Then f extends to a ring homomorphism $g:D\to k$, D some valuation ring of K containing A. (Proof: Let $g:R\to k$ be a maximal extension, which clearly exists by Zorn's lemma. By maximality, R is a local ring with maximal ideal containing the kernel of f. If S is a local ring dominating R, then S is algebraic over R; if not, $S$ contains a polynomial ring $R[x]$ to which g extends, a contradiction to maximality. It follows $S/{\mathfrak {m}}_{S}$ is an algebraic field extension of $R/{\mathfrak {m}}_{R}$. Thus, $S\to S/{\mathfrak {m}}_{S}\hookrightarrow k$ extends g; hence, S = R.)
If a subring R of a field K contains a valuation ring D of K, then, by checking Definition 1, R is also a valuation ring of K. In particular, R is local and its maximal ideal contracts to some prime ideal of D, say, ${\mathfrak {p}}$. Then $R=D_{\mathfrak {p}}$ since $R$ dominates $D_{\mathfrak {p}}$, which is a valuation ring since the ideals are totally ordered. This observation is subsumed in the following:[7] there is a bijective correspondence ${\mathfrak {p}}\mapsto D_{\mathfrak {p}}$ from $\operatorname {Spec} (D)$ to the set of all subrings of K containing D. In particular, D is integrally closed,[8][lower-alpha 3] and the Krull dimension of D is the number of proper subrings of K containing D.
In fact, the integral closure of an integral domain A in the field of fractions K of A is the intersection of all valuation rings of K containing A.[9] Indeed, the integral closure is contained in the intersection since the valuation rings are integrally closed. Conversely, let x be in K but not integral over A. Since the ideal $x^{-1}A[x^{-1}]$ is not $A[x^{-1}]$,[lower-alpha 4] it is contained in a maximal ideal ${\mathfrak {p}}$. Then there is a valuation ring R that dominates the localization of $A[x^{-1}]$ at ${\mathfrak {p}}$. Since $x^{-1}\in {\mathfrak {m}}_{R}$, $x\not \in R$.
The dominance is used in algebraic geometry. Let X be an algebraic variety over a field k. Then we say a valuation ring R in $k(X)$ has "center x on X " if $R$ dominates the local ring ${\mathcal {O}}_{x,X}$ of the structure sheaf at x.[10]
Ideals in valuation rings
We may describe the ideals in the valuation ring by means of its value group.
Let Γ be a totally ordered abelian group. A subset Δ of Γ is called a segment if it is nonempty and, for any α in Δ, any element between −α and α is also in Δ (end points included). A subgroup of Γ is called an isolated subgroup if it is a segment and is a proper subgroup.
Let D be a valuation ring with valuation v and value group Γ. For any subset A of D, we let $\Gamma _{A}$ be the complement of the union of $v(A-0)$ and $-v(A-0)$ in $\Gamma $. If I is a proper ideal, then $\Gamma _{I}$ is a segment of $\Gamma $. In fact, the mapping $I\mapsto \Gamma _{I}$ defines an inclusion-reversing bijection between the set of proper ideals of D and the set of segments of $\Gamma $.[11] Under this correspondence, the nonzero prime ideals of D correspond bijectively to the isolated subgroups of Γ.
Example: The ring of p-adic integers $\mathbb {Z} _{p}$ is a valuation ring with value group $\mathbb {Z} $. The zero subgroup of $\mathbb {Z} $ corresponds to the unique maximal ideal $(p)\subseteq \mathbb {Z} _{p}$ and the whole group to the zero ideal. The zero subgroup is also the only isolated subgroup of $\mathbb {Z} $.
The set of isolated subgroups is totally ordered by inclusion. The height or rank r(Γ) of Γ is defined to be the cardinality of the set of isolated subgroups of Γ. Since the nonzero prime ideals are totally ordered and they correspond to isolated subgroups of Γ, the height of Γ is equal to the Krull dimension of the valuation ring D associated with Γ.
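For instance, take $\Gamma =\mathbb {Z} ^{2}$ with the lexicographic order. Its isolated subgroups are $\{0\}$ and $0\times \mathbb {Z} ,$ so a valuation ring with this value group has Krull dimension two. The segment condition can be probed on a finite window in Python (an illustrative check, not a proof):

```python
# The lexicographic order on Z^2 coincides with Python's tuple comparison.
def between(g, a):
    """Is g between -a and a in the lexicographic order?"""
    neg_a = (-a[0], -a[1])
    return neg_a <= g <= a

window = [(i, j) for i in range(-3, 4) for j in range(-3, 4)]

in_H = lambda g: g[0] == 0      # the subgroup 0 x Z
in_K = lambda g: g[1] == 0      # the subgroup Z x 0

# H is a segment: anything between -a and a for a in H stays in H.
assert all(in_H(g)
           for a in window if in_H(a)
           for g in window if between(g, a))

# K is not a segment: (0, 1) lies between -(1, 0) and (1, 0) yet is not in K.
assert in_K((1, 0)) and between((0, 1), (1, 0)) and not in_K((0, 1))
```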
The most important special case is height one, which is equivalent to Γ being a subgroup of the real numbers ℝ under addition (or equivalently, of the positive real numbers ℝ+ under multiplication.) A valuation ring with a valuation of height one has a corresponding absolute value defining an ultrametric place. A special case of this are the discrete valuation rings mentioned earlier.
The rational rank rr(Γ) is defined as the rank of the value group as an abelian group,
$\mathrm {dim} _{\mathbb {Q} }(\Gamma \otimes _{\mathbb {Z} }\mathbb {Q} ).$
Places
This section is based on Zariski & Samuel 1975.
General definition
A place of a field K is a ring homomorphism p from a valuation ring D of K to some field such that, for any $x\not \in D$, $p(1/x)=0$. The image of a place is a field called the residue field of p. For example, the canonical map $D\to D/{\mathfrak {m}}_{D}$ is a place.
Example
Let A be a Dedekind domain and ${\mathfrak {p}}$ a prime ideal. Then the canonical map $A_{\mathfrak {p}}\to k({\mathfrak {p}})$ is a place.
Specialization of places
We say a place p specializes to a place p′, denoted by $p\rightsquigarrow p'$, if the valuation ring of p contains the valuation ring of p'. In algebraic geometry, we say a prime ideal ${\mathfrak {p}}$ specializes to ${\mathfrak {p}}'$ if ${\mathfrak {p}}\subseteq {\mathfrak {p}}'$. The two notions coincide: $p\rightsquigarrow p'$ if and only if a prime ideal corresponding to p specializes to a prime ideal corresponding to p′ in some valuation ring (recall that if $D\supseteq D'$ are valuation rings of the same field, then D corresponds to a prime ideal of $D'$.)
Example
For example, in the function field $\mathbb {F} (X)$ of some algebraic variety $X$ every prime ideal ${\mathfrak {p}}\in {\text{Spec}}(R)$ contained in a maximal ideal ${\mathfrak {m}}$ gives a specialization ${\mathfrak {p}}\rightsquigarrow {\mathfrak {m}}$.
Remarks
It can be shown: if $p\rightsquigarrow p'$, then $p'=q\circ p|_{D'}$ for some place q of the residue field $k(p)$ of p. (Observe $p(D')$ is a valuation ring of $k(p)$ and let q be the corresponding place; the rest is mechanical.) If D is the valuation ring of p, then its Krull dimension is the cardinality of the set of places $q\neq p$ that specialize to p. Thus, for any place p with valuation ring D of a field K over a field k, we have:
$\operatorname {tr.deg} _{k}k(p)+\dim D\leq \operatorname {tr.deg} _{k}K$.
If p is a place and A is a subring of the valuation ring of p, then $\operatorname {ker} (p)\cap A$ is called the center of p in A.
Places at infinity
For the function field on an affine variety $X$ there are valuations which are not associated to any of the primes of $X$. These valuations are called the places at infinity. For example, the affine line $\mathbb {A} _{k}^{1}$ has function field $k(x)$. The place associated to the localization of
$k\left[{\frac {1}{x}}\right]$
at the maximal ideal
${\mathfrak {m}}=\left({\frac {1}{x}}\right)$
is a place at infinity.
Notes
1. More precisely, Γ is totally ordered by defining $[x]\geq [y]$ if and only if $xy^{-1}\in D$ where [x] and [y] are equivalence classes in Γ. cf. Efrat (2006), p. 39
2. Proof: if R is a maximal element, then it is dominated by a valuation ring; thus, it itself must be a valuation ring. Conversely, let R be a valuation ring and S a local ring that properly dominates R. There is some x in S but not in R. Since R is a valuation ring, $x^{-1}$ is then in R, and in fact in the maximal ideal ${\mathfrak {m}}_{R}$ (otherwise $x^{-1}$ would be a unit of R and x would lie in R). Since S dominates R, ${\mathfrak {m}}_{R}\subseteq {\mathfrak {m}}_{S}$, so $x^{-1}\in {\mathfrak {m}}_{S}$; but x lies in S, so $1=x\cdot x^{-1}\in {\mathfrak {m}}_{S}$, which is absurd. Hence, there cannot be such S.
3. To see more directly that valuation rings are integrally closed, suppose that $x^{n}+a_{1}x^{n-1}+\cdots +a_{0}=0$ with the $a_{i}$ in D. Then dividing by $x^{n-1}$ gives us $x=-a_{1}-a_{2}x^{-1}-\cdots -a_{0}x^{-n+1}$. If x were not in D, then $x^{-1}$ would be in D, and this would express x as a finite sum of elements of D, so that x would be in D, a contradiction.
4. In general, $x^{-1}$ is integral over A if and only if $xA[x]=A[x].$
Citations
1. Hartshorne 1977, Theorem I.6.1A.
2. Efrat 2006, p. 55.
3. Cohn 1968, Proposition 1.5.
4. Efrat 2006, p. 43.
5. The role of valuation rings in algebraic geometry
6. Does there exist a Riemann surface corresponding to every field extension? Any other hypothesis needed?
7. Zariski & Samuel 1975, Ch. VI, Theorem 3.
8. Efrat 2006, p. 38.
9. Matsumura 1989, Theorem 10.4.
10. Hartshorne 1977, Ch II. Exercise 4.5.
11. Zariski & Samuel 1975, Ch. VI, Theorem 15.
Sources
• Bourbaki, Nicolas (1972). Commutative Algebra. Elements of Mathematics (First ed.). Addison-Wesley. ISBN 978-020100644-5.
• Cohn, P. M. (1968), "Bezout rings and their subrings" (PDF), Proc. Cambridge Philos. Soc., 64 (2): 251–264, Bibcode:1968PCPS...64..251C, doi:10.1017/s0305004100042791, ISSN 0008-1981, MR 0222065, S2CID 123667384, Zbl 0157.08401
• Efrat, Ido (2006), Valuations, orderings, and Milnor K-theory, Mathematical Surveys and Monographs, vol. 124, Providence, RI: American Mathematical Society, ISBN 0-8218-4041-X, Zbl 1103.12002
• Fuchs, László; Salce, Luigi (2001), Modules over non-Noetherian domains, Mathematical Surveys and Monographs, vol. 84, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1963-0, MR 1794715, Zbl 0973.13001
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
• Krull, Wolfgang (1939), "Beiträge zur Arithmetik kommutativer Integritätsbereiche. VI. Der allgemeine Diskriminantensatz. Unverzweigte Ringerweiterungen", Mathematische Zeitschrift, 45 (1): 1–19, doi:10.1007/BF01580269, ISSN 0025-5874, MR 1545800, S2CID 121374449, Zbl 0020.34003
• Matsumura, Hideyuki (1989), Commutative ring theory, Cambridge Studies in Advanced Mathematics, vol. 8, Translated from the Japanese by Miles Reid (Second ed.), ISBN 0-521-36764-6, Zbl 0666.13002
• Zariski, Oscar; Samuel, Pierre (1975), Commutative algebra. Vol. II, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90171-8, MR 0389876
Valuative criterion
In mathematics, specifically algebraic geometry, the valuative criteria are a collection of results that make it possible to decide whether a morphism of algebraic varieties, or more generally schemes, is universally closed, separated, or proper.
Statement of the valuative criteria
Recall that a valuation ring A is a domain, so if K is the field of fractions of A, then Spec K is the generic point of Spec A.
Let X and Y be schemes, and let f : X → Y be a morphism of schemes. Then the following are equivalent:[1][2]
1. f is separated (resp. universally closed, resp. proper)
2. f is quasi-separated (resp. quasi-compact, resp. of finite type and quasi-separated) and for every valuation ring A, if Y' = Spec A and X' denotes the generic point of Y' , then for every morphism Y' → Y and every morphism X' → X which lifts the generic point, there exists at most one (resp. at least one, resp. exactly one) lift Y' → X.
The lifting condition is equivalent to specifying that the natural morphism
${\text{Hom}}_{Y}(Y',X)\to {\text{Hom}}_{Y}({\text{Spec}}K,X)$
is injective (resp. surjective, resp. bijective).
Furthermore, in the special case when Y is (locally) noetherian, it suffices to check the case that A is a discrete valuation ring.
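As a standard worked illustration (not part of the statement above): the valuative criterion detects that the affine line over a field $k$ is separated but not proper.

```latex
% Take Y = \operatorname{Spec} k and X = \mathbb{A}^1_k = \operatorname{Spec} k[t].
% Let A = k[x]_{(x)}, a discrete valuation ring with fraction field K = k(x),
% and consider the K-point
%     \operatorname{Spec} K \longrightarrow X, \qquad t \longmapsto 1/x.
% A lift \operatorname{Spec} A \to X would be a k-algebra map k[t] \to A with
% t \mapsto 1/x, but 1/x \notin k[x]_{(x)}: no lift exists, so existence fails
% and X \to Y is not universally closed (hence not proper). At the same time,
% at most one lift can ever exist, consistent with X \to Y being separated.
```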
References
1. EGA II, proposition 7.2.3 and théorème 7.3.8.
2. Stacks Project, tags 01KA, 01KY, and 0BX4.
• Grothendieck, Alexandre; Jean Dieudonné (1961). "Éléments de géométrie algébrique (rédigés avec la collaboration de Jean Dieudonné) : II. Étude globale élémentaire de quelques classes de morphismes". Publications Mathématiques de l'IHÉS. 8: 5–222. doi:10.1007/bf02699291.
Value (mathematics)
In mathematics, value may refer to several strongly related notions.
In general, a mathematical value may be any definite mathematical object. In elementary mathematics, this is most often a number – for example, a real number such as π or an integer such as 42.
• The value of a variable or a constant is any number or other mathematical object assigned to it.
• The value of a mathematical expression is the result of the computation described by this expression when the variables and constants in it are assigned values.
• The value of a function, given the value(s) assigned to its argument(s), is the quantity assumed by the function for these argument values.[1][2]
For example, if the function f is defined by f(x) = 2x2 – 3x + 1, then assigning the value 3 to its argument x yields the function value 10, since f(3) = 2·32 – 3·3 + 1 = 10.
If the variable, expression or function only assumes real values, it is called real-valued. Likewise, a complex-valued variable, expression or function only assumes complex values.
See also
• Value function
• Value (computer science)
• Absolute value
• Truth value
References
1. "Value".
2. Meschkowski, Herbert (1968). Introduction to Modern Mathematics. George G. Harrap & Co. Ltd. p. 32. ISBN 0245591095.
Valuation (logic)
In logic and model theory, a valuation can be:
• In propositional logic, an assignment of truth values to propositional variables, with a corresponding assignment of truth values to all propositional formulas with those variables.
• In first-order logic and higher-order logics, a structure (the interpretation) together with the corresponding assignment of a truth value to each sentence in the language for that structure (the valuation proper). The interpretation must be a homomorphism, while the valuation is simply a function.
Mathematical logic
In mathematical logic (especially model theory), a valuation is an assignment of truth values to formal sentences that follows a truth schema. Valuations are also called truth assignments.
In propositional logic, there are no quantifiers, and formulas are built from propositional variables using logical connectives. In this context, a valuation begins with an assignment of a truth value to each propositional variable. This assignment can be uniquely extended to an assignment of truth values to all propositional formulas.
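The unique extension from variables to all formulas proceeds by structural recursion on the formula. A minimal sketch, assuming a hypothetical nested-tuple encoding of formulas (the encoding and function name are illustrative, not standard notation):

```python
def evaluate(formula, v):
    """Extend the truth assignment v (a dict mapping variable names to bools)
    to an arbitrary formula, encoded as a variable name or a tuple
    ('not', f), ('and', f, g), ('or', f, g), ('implies', f, g)."""
    if isinstance(formula, str):              # base case: propositional variable
        return v[formula]
    op, *args = formula
    if op == 'not':
        return not evaluate(args[0], v)
    if op == 'and':
        return evaluate(args[0], v) and evaluate(args[1], v)
    if op == 'or':
        return evaluate(args[0], v) or evaluate(args[1], v)
    if op == 'implies':
        return (not evaluate(args[0], v)) or evaluate(args[1], v)
    raise ValueError(f"unknown connective: {op!r}")

# (p and not q) -> q, evaluated under different assignments to p and q
f = ('implies', ('and', 'p', ('not', 'q')), 'q')
```

Because the recursion mirrors the inductive construction of formulas, the extension is automatically unique: each formula's value is determined by the values of its immediate subformulas.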
In first-order logic, a language consists of a collection of constant symbols, a collection of function symbols, and a collection of relation symbols. Formulas are built out of atomic formulas using logical connectives and quantifiers. A structure consists of a set (domain of discourse) that determines the range of the quantifiers, along with interpretations of the constant, function, and relation symbols in the language. Corresponding to each structure is a unique truth assignment for all sentences (formulas with no free variables) in the language.
Notation
If $v$ is a valuation, that is, a mapping from the atoms to the set $\{t,f\}$, then the double-bracket notation is commonly used for it: $[\![\phi ]\!]_{v}=v(\phi )$ for a proposition $\phi $.[1]
See also
• Algebraic semantics
References
1. Dirk van Dalen, (2004) Logic and Structure, Springer Universitext, (see section 1.2) ISBN 978-3-540-20879-2
• Rasiowa, Helena; Sikorski, Roman (1970), The Mathematics of Metamathematics (3rd ed.), Warsaw: PWN, chapter 6 Algebra of formalized languages.
• J. Michael Dunn; Gary M. Hardegree (2001). Algebraic methods in philosophical logic. Oxford University Press. p. 155. ISBN 978-0-19-853192-0.
Value distribution theory of holomorphic functions
In mathematics, the value distribution theory of holomorphic functions is a division of mathematical analysis. It tries to get quantitative measures of the number of times a function f(z) assumes a value a, as z grows in size, refining the Picard theorem on behaviour close to an essential singularity. The theory exists for analytic functions (and meromorphic functions) of one complex variable z, or of several complex variables.
In the case of one variable the term Nevanlinna theory, after Rolf Nevanlinna, is also common. The now-classical theory received renewed interest, when Paul Vojta suggested some analogies with the problem of integral solutions to Diophantine equations. These turned out to involve some close parallels, and to lead to fresh points of view on the Mordell conjecture and related questions.
Value function
The value function of an optimization problem gives the value attained by the objective function at a solution, while only depending on the parameters of the problem.[1][2] In a controlled dynamical system, the value function represents the optimal payoff of the system over the interval [t, t1] when started at the time-t state variable x(t)=x.[3] If the objective function represents some cost that is to be minimized, the value function can be interpreted as the cost to finish the optimal program, and is thus referred to as the "cost-to-go function".[4][5] In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function.[6][7]
In a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. Given $(t_{0},x_{0})\in [0,t_{1}]\times \mathbb {R} ^{d}$, a typical optimal control problem is to
${\text{maximize}}\quad J(t_{0},x_{0};u)=\int _{t_{0}}^{t_{1}}I(t,x(t),u(t))\,\mathrm {d} t+\phi (x(t_{1}))$
subject to
${\frac {\mathrm {d} x(t)}{\mathrm {d} t}}=f(t,x(t),u(t))$
with initial state variable $x(t_{0})=x_{0}$.[8] The objective function $J(t_{0},x_{0};u)$ is to be maximized over all admissible controls $u\in U[t_{0},t_{1}]$, where $u$ is a Lebesgue measurable function from $[t_{0},t_{1}]$ to some prescribed arbitrary set in $\mathbb {R} ^{m}$. The value function is then defined as
$V(t,x(t))=\max _{u\in U}\int _{t}^{t_{1}}I(\tau ,x(\tau ),u(\tau ))\,\mathrm {d} \tau +\phi (x(t_{1}))$
with $V(t_{1},x(t_{1}))=\phi (x(t_{1}))$, where $\phi (x(t_{1}))$ is the "scrap value". If the optimal pair of control and state trajectories is $(x^{\ast },u^{\ast })$, then $V(t_{0},x_{0})=J(t_{0},x_{0};u^{\ast })$. The function $h$ that gives the optimal control $u^{\ast }$ based on the current state $x$ is called a feedback control policy,[4] or simply a policy function.[9]
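For intuition, a discrete-time, finite-state analogue of this definition can be computed by backward induction on the Bellman recursion $V(t,x)=\max _{u}\{I(t,x,u)+V(t+1,f(t,x,u))\}$ with terminal condition $V(T,x)=\phi (x)$. The sketch below is illustrative only (all names are hypothetical; the continuous-time problem above requires heavier machinery):

```python
def value_function(states, controls, I, f, phi, T):
    """Backward induction for a discrete-time analogue of the control problem:
    V[T][x] = phi(x) and V[t][x] = max_u ( I(t, x, u) + V[t+1][f(t, x, u)] ).
    Returns the value tables V and the feedback policy h(t, x)."""
    V = [{x: None for x in states} for _ in range(T + 1)]
    h = [{} for _ in range(T)]
    for x in states:
        V[T][x] = phi(x)                     # terminal "scrap value"
    for t in range(T - 1, -1, -1):           # Bellman backward recursion
        for x in states:
            best_u, best_val = None, float('-inf')
            for u in controls:
                val = I(t, x, u) + V[t + 1][f(t, x, u)]
                if val > best_val:
                    best_u, best_val = u, val
            V[t][x], h[t][x] = best_val, best_u
    return V, h
```

The returned policy table h is exactly the feedback control policy mentioned below: it gives the maximizing control as a function of the current time and state.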
Bellman's principle of optimality roughly states that any optimal policy at time $t$, $t_{0}\leq t\leq t_{1}$ taking the current state $x(t)$ as "new" initial condition must be optimal for the remaining problem. If the value function happens to be continuously differentiable,[10] this gives rise to an important partial differential equation known as Hamilton–Jacobi–Bellman equation,
$-{\frac {\partial V(t,x)}{\partial t}}=\max _{u}\left\{I(t,x,u)+{\frac {\partial V(t,x)}{\partial x}}f(t,x,u)\right\}$
where the maximand on the right-hand side can also be re-written as the Hamiltonian, $H\left(t,x,u,\lambda \right)=I(t,x,u)+\lambda (t)f(t,x,u)$, as
$-{\frac {\partial V(t,x)}{\partial t}}=\max _{u}H(t,x,u,\lambda )$
with $\partial V(t,x)/\partial x=\lambda (t)$ playing the role of the costate variables.[11] Given this definition, we further have $\mathrm {d} \lambda (t)/\mathrm {d} t=\partial ^{2}V(t,x)/\partial x\partial t+\partial ^{2}V(t,x)/\partial x^{2}\cdot f(x)$, and after differentiating both sides of the HJB equation with respect to $x$,
$-{\frac {\partial ^{2}V(t,x)}{\partial t\partial x}}={\frac {\partial I}{\partial x}}+{\frac {\partial ^{2}V(t,x)}{\partial x^{2}}}f(x)+{\frac {\partial V(t,x)}{\partial x}}{\frac {\partial f(x)}{\partial x}}$
which after replacing the appropriate terms recovers the costate equation
$-{\dot {\lambda }}(t)=\underbrace {{\frac {\partial I}{\partial x}}+\lambda (t){\frac {\partial f(x)}{\partial x}}} _{={\frac {\partial H}{\partial x}}}$
where ${\dot {\lambda }}(t)$ is Newton notation for the derivative with respect to time.[12]
The value function is the unique viscosity solution to the Hamilton–Jacobi–Bellman equation.[13] In an online closed-loop approximate optimal control, the value function is also a Lyapunov function that establishes global asymptotic stability of the closed-loop system.[14]
References
1. Fleming, Wendell H.; Rishel, Raymond W. (1975). Deterministic and Stochastic Optimal Control. New York: Springer. pp. 81–83. ISBN 0-387-90155-8.
2. Caputo, Michael R. (2005). Foundations of Dynamic Economic Analysis : Optimal Control Theory and Applications. New York: Cambridge University Press. p. 185. ISBN 0-521-60368-4.
3. Weber, Thomas A. (2011). Optimal Control Theory : with Applications in Economics. Cambridge: The MIT Press. p. 82. ISBN 978-0-262-01573-8.
4. Bertsekas, Dimitri P.; Tsitsiklis, John N. (1996). Neuro-Dynamic Programming. Belmont: Athena Scientific. p. 2. ISBN 1-886529-10-8.
5. "EE365: Dynamic Programming" (PDF).
6. Mas-Colell, Andreu; Whinston, Michael D.; Green, Jerry R. (1995). Microeconomic Theory. New York: Oxford University Press. p. 964. ISBN 0-19-507340-1.
7. Corbae, Dean; Stinchcombe, Maxwell B.; Zeman, Juraj (2009). An Introduction to Mathematical Analysis for Economic Theory and Econometrics. Princeton University Press. p. 145. ISBN 978-0-691-11867-3.
8. Kamien, Morton I.; Schwartz, Nancy L. (1991). Dynamic Optimization : The Calculus of Variations and Optimal Control in Economics and Management (2nd ed.). Amsterdam: North-Holland. p. 259. ISBN 0-444-01609-0.
9. Ljungqvist, Lars; Sargent, Thomas J. (2018). Recursive Macroeconomic Theory (Fourth ed.). Cambridge: MIT Press. p. 106. ISBN 978-0-262-03866-9.
10. Benveniste and Scheinkman established sufficient conditions for the differentiability of the value function, which in turn allows an application of the envelope theorem, see Benveniste, L. M.; Scheinkman, J. A. (1979). "On the Differentiability of the Value Function in Dynamic Models of Economics". Econometrica. 47 (3): 727–732. doi:10.2307/1910417. JSTOR 1910417. Also see Seierstad, Atle (1982). "Differentiability Properties of the Optimal Value Function in Control Theory". Journal of Economic Dynamics and Control. 4: 303–310. doi:10.1016/0165-1889(82)90019-7.
11. Kirk, Donald E. (1970). Optimal Control Theory. Englewood Cliffs, NJ: Prentice-Hall. p. 88. ISBN 0-13-638098-0.
12. Zhou, X. Y. (1990). "Maximum Principle, Dynamic Programming, and their Connection in Deterministic Control". Journal of Optimization Theory and Applications. 65 (2): 363–373. doi:10.1007/BF01102352. S2CID 122333807.
13. Theorem 10.1 in Bressan, Alberto (2019). "Viscosity Solutions of Hamilton-Jacobi Equations and Optimal Control Problems" (PDF). Lecture Notes.
14. Kamalapurkar, Rushikesh; Walters, Patrick; Rosenfeld, Joel; Dixon, Warren (2018). "Optimal Control and Lyapunov Stability". Reinforcement Learning for Optimal Feedback Control: A Lyapunov-Based Approach. Berlin: Springer. pp. 26–27. ISBN 978-3-319-78383-3.
Further reading
• Caputo, Michael R. (2005). "Necessary and Sufficient Conditions for Isoperimetric Problems". Foundations of Dynamic Economic Analysis : Optimal Control Theory and Applications. New York: Cambridge University Press. pp. 174–210. ISBN 0-521-60368-4.
• Clarke, Frank H.; Loewen, Philip D. (1986). "The Value Function in Optimal Control: Sensitivity, Controllability, and Time-Optimality". SIAM Journal on Control and Optimization. 24 (2): 243–263. doi:10.1137/0324014.
• LaFrance, Jeffrey T.; Barney, L. Dwayne (1991). "The Envelope Theorem in Dynamic Optimization" (PDF). Journal of Economic Dynamics and Control. 15 (2): 355–385. doi:10.1016/0165-1889(91)90018-V.
• Stengel, Robert F. (1994). "Conditions for Optimality". Optimal Control and Estimation. New York: Dover. pp. 201–222. ISBN 0-486-68200-5.
Value of structural health information
The value of structural health information is the expected utility gain of a built environment system by information provided by structural health monitoring (SHM). The quantification of the value of structural health information is based on decision analysis adapted to built environment engineering. The value of structural health information can be significant for the risk and integrity management of built environment systems.
Background
The value of structural health information takes basis in the framework of the decision analysis and the value of information analysis as introduced by Raiffa and Schlaifer[1] and adapted to civil engineering by Benjamin and Cornell.[2] Decision theory itself is based upon the expected utility hypothesis by Von Neumann and Morgenstern.[3] The concepts for the value of structural health information in built environment engineering were first formulated by Pozzi and Der Kiureghian[4] and Faber and Thöns.[5]
Formulation
The value of structural health information is quantified with a normative decision analysis. The value of structural health monitoring $V$ is calculated as the difference between the optimized expected utilities of performing and not performing structural health monitoring (SHM), $U_{1}$ and $U_{0}$, respectively:
$V=U_{1}-U_{0}$
The expected utilities are calculated with a decision scenario involving (1) interrelated built environment system state, utility and consequence models, (2) structural health information type, precision and cost models and (3) structural health action type and implementation models. The value of structural health information quantification facilitates an optimization of structural health information system parameters and information dependent actions.[6][7]
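As an illustrative sketch of this quantification (a toy preposterior analysis, not the formulation of the cited references): for a system with two states, a binary monitoring outcome, and two actions, $U_{0}$ is the best expected utility under the prior alone, and $U_{1}$ weights the best posterior decision by each monitoring outcome's probability. All names and numbers below are hypothetical.

```python
def voi(p_damage, likelihood, utility, actions=('repair', 'do_nothing')):
    """Value of monitoring information V = U1 - U0 for a two-state system.

    likelihood[outcome][state] = P(outcome | state)
    utility[action][state]     = utility of taking `action` in `state`
    """
    states = {'damaged': p_damage, 'intact': 1.0 - p_damage}
    # U0: best expected utility using the prior alone (no monitoring)
    U0 = max(sum(p * utility[a][s] for s, p in states.items()) for a in actions)
    # U1: preposterior analysis -- for each outcome, act optimally on the
    # unnormalized posterior; the weighting by P(outcome) is implicit because
    # the joint probabilities P(outcome, state) are used directly.
    U1 = 0.0
    for outcome in likelihood:
        joint = {s: likelihood[outcome][s] * p for s, p in states.items()}
        U1 += max(sum(joint[s] * utility[a][s] for s in states) for a in actions)
    return U1 - U0
```

For example, with a 10% prior damage probability, a 90%-reliable alarm, a repair cost of 1 and a failure cost of 10, the monitoring information has positive value; with a prior of 0 (the state is already known), it is worthless, reflecting that SHM information must be relevant to the decision to have value.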
Application
The value of structural health information provides a quantitative decision basis for (1) implementing SHM or not, (2) the identification of the optimal SHM strategy and (3) for planning optimal structural health actions, such as e.g., repair and replacement. The value of structural health information presupposes relevance of SHM information for the built environment system performance. A significant value of structural health information has been found for the risk and integrity management of engineering structures.[6][8][7]
References
1. Raiffa, Howard; Schlaifer, Robert (2000). Applied Statistical Decision Theory (Wiley Classics Library ed.). New York: Wiley. ISBN 047138349X. OCLC 43662059.
2. Benjamin, J. R.; Cornell, C. A. (1970). Probability, Statistics, and Decision for Civil Engineers. McGraw-Hill. OCLC 473420360.
3. von Neumann, John; Morgenstern, Oskar (2007-12-31). Theory of Games and Economic Behavior (60th Anniversary Commemorative ed.). Princeton: Princeton University Press. doi:10.1515/9781400829460. ISBN 9781400829460.
4. Pozzi, Matteo; Der Kiureghian, Armen (2011-03-24). Kundu, Tribikram (ed.). "Assessing the value of information for long-term structural health monitoring". Health Monitoring of Structural and Biological Systems 2011. SPIE. 7984: 79842W. Bibcode:2011SPIE.7984E..2WP. doi:10.1117/12.881918. S2CID 3057973.
5. Faber, M; Thöns, S (2013-09-18), "On the value of structural health monitoring", Safety, Reliability and Risk Analysis, CRC Press, pp. 2535–2544, doi:10.1201/b15938-380, ISBN 9781138001237
6. "TU1402 Guidelines - Quantifying the Value of Structural Health Monitoring - COST Action TU 1402". www.cost-tu1402.eu. Retrieved 2019-10-21.
7. Thöns, Sebastian. "Background documentation of the Joint Committee of Structural Safety (JCSS): Quantifying the value of structural health information for decision support" (PDF).
8. Sohn, H.; Farrar, C. R.; Hemez, F. M.; Shunk, D. D.; Stinemates, D. W.; Nadler, B. R.; Czarnecki, J. J. (2001). A Review of Structural Health Monitoring Literature: 1996–2001. Los Alamos: Los Alamos National Laboratory report LA-13070-MS.
Valérie Berthé
Valérie Berthé (born 16 December 1968)[1] is a French mathematician who works as a director of research for the Centre national de la recherche scientifique (CNRS) at the Institut de Recherche en Informatique Fondamentale (IRIF), a joint project between CNRS and Paris Diderot University. Her research involves symbolic dynamics, combinatorics on words, discrete geometry, numeral systems, tessellations, and fractals.[2]
Education
Berthé completed her baccalauréat at age 16,[3] and studied at the École Normale Supérieure from 1988 to 1993. She earned a licentiate and master's degree in pure mathematics from Pierre and Marie Curie University in 1989, a Diplôme d'études approfondies from University of Paris-Sud in 1991, completed her agrégation in 1992, and was recruited by CNRS in 1993.[1] Continuing her graduate studies, she defended a doctoral thesis in 1994 at the University of Bordeaux 1. Her dissertation, Fonctions de Carlitz et automates: Entropies conditionnelles was supervised by Jean-Paul Allouche.[1][4] She completed a habilitation in 1999, again under the supervision of Allouche, at the University of the Mediterranean Aix-Marseille II; her habilitation thesis was Étude arithmétique et dynamique de suites algorithmiques.[1]
Research
Berthé's research spans symbolic dynamics, combinatorics on words, numeration systems and discrete geometry. She has recently made significant progress in the study of S-adic dynamical systems and of continued fractions in higher dimensions.[5][6][7][8]
Associations
Berthé is a vice-president of the Société mathématique de France (SMF), and director of publications for the SMF.[9] She has played an active role in L'association femmes et mathématiques.[10] Berthé has also been associated with the M. Lothaire pseudonymous mathematical collaboration on combinatorics on words[11] and the Pythias Fogg pseudonymous collaboration on substitution systems.[12]
Recognition
In 2013, Berthé was elevated to the Legion of Honour.[3][10]
References
1. Curriculum vitae (PDF), April 2012, retrieved 2018-02-10
2. Berthé Valérie, Institut de Recherche en Informatique Fondamentale (IRIF), retrieved 2018-02-10
3. "Valérie Berthé, brillante tête chercheuse au CNRS", Ouest-France (in French), February 7, 2014
4. Valérie Berthé at the Mathematics Genealogy Project
5. Berthé, Valérie; Steiner, Wolfgang; Thuswaldner, Jörg M. (2019). "Geometry, dynamics, and arithmetic of $S$-adic shifts". Annales de l'Institut Fourier. 69 (3): 1347–1409. arXiv:1410.0331. doi:10.5802/aif.3273.
6. Berthé, Valérie; Steiner, Wolfgang; Thuswaldner, Jörg M.; Yassawi, Reem (November 2019). "Recognizability for sequences of morphisms". Ergodic Theory and Dynamical Systems. 39 (11): 2896–2931. arXiv:1705.00167. doi:10.1017/etds.2017.144. ISSN 0143-3857. S2CID 31678325.
7. Berthé, Valérie; Kim, Dong Han (2018). "Some constructions for the higher-dimensional three-distance theorem". Acta Arithmetica. 184 (4): 385–411. arXiv:1806.02721. doi:10.4064/aa171021-30-5. ISSN 0065-1036. S2CID 51808154.
8. Berthé, Valérie; Lhote, Loïck; Vallée, Brigitte (March 2018). "The Brun gcd algorithm in high dimensions is almost always subtractive". Journal of Symbolic Computation. 85: 72–107. doi:10.1016/j.jsc.2017.07.004.
9. Bureau, Société mathématique de France, retrieved 2018-02-10
10. "Valérie Berthé", Women and Men Inspiring Europe Resource-Pool, European Institute for Gender Equality, retrieved 2018-02-10
11. Lothaire, M. (2005), Applied combinatorics on words, Encyclopedia of Mathematics and Its Applications, vol. 105, A collective work by Jean Berstel, Dominique Perrin, Maxime Crochemore, Eric Laporte, Mehryar Mohri, Nadia Pisanti, Marie-France Sagot, Gesine Reinert, Sophie Schbath, Michael Waterman, Philippe Jacquet, Wojciech Szpankowski, Dominique Poulalhon, Gilles Schaeffer, Roman Kolpakov, Gregory Koucherov, Jean-Paul Allouche and Valérie Berthé, Cambridge: Cambridge University Press, ISBN 0-521-84802-4, Zbl 1133.68067
12. Pytheas Fogg, N. (2002), Berthé, Valérie; Ferenczi, Sébastien; Mauduit, Christian; Siegel, A. (eds.), Substitutions in dynamics, arithmetics and combinatorics, Lecture Notes in Mathematics, vol. 1794, Berlin: Springer-Verlag, ISBN 3-540-44141-7, Zbl 1014.11015
External links
• Valérie Berthé publications indexed by Google Scholar
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Catalonia
• Germany
• Israel
• Belgium
• United States
• Japan
• Czech Republic
• Croatia
• Netherlands
• Poland
Academics
• CiNii
• DBLP
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• ResearcherID
• zbMATH
Other
• IdRef
Vampire number
In number theory, a vampire number (or true vampire number) is a composite natural number with an even number of digits that can be factored into two natural numbers, each with half as many digits as the original number, such that the two factors together contain precisely all the digits of the original number, in any order, counting multiplicity. The two factors cannot both have trailing zeroes. The first vampire number is 1260 = 21 × 60.[1][2]
Definition
Let $N$ be a natural number with $2k$ digits:
$N={n_{2k}}{n_{2k-1}}...{n_{1}}$
Then $N$ is a vampire number if and only if there exist two natural numbers $A$ and $B$, each with $k$ digits:
$A={a_{k}}{a_{k-1}}...{a_{1}}$
$B={b_{k}}{b_{k-1}}...{b_{1}}$
such that $A\times B=N$, $a_{1}$ and $b_{1}$ are not both zero, and the $2k$ digits of the concatenation of $A$ and $B$ $({a_{k}}{a_{k-1}}...{a_{2}}{a_{1}}{b_{k}}{b_{k-1}}...{b_{2}}{b_{1}})$ are a permutation of the $2k$ digits of $N$. The two numbers $A$ and $B$ are called the fangs of $N$.
Vampire numbers were first described in a 1994 post by Clifford A. Pickover to the Usenet group sci.math,[3] and the article he later wrote was published in chapter 30 of his book Keys to Infinity.[4]
Examples
n | Count of vampire numbers of length n
4 | 7
6 | 148
8 | 3228
10 | 108454
12 | 4390670
14 | 208423682
16 | 11039126154
1260 is a vampire number, with 21 and 60 as fangs, since 21 × 60 = 1260 and the digits of the concatenation of the two factors (2160) are a permutation of the digits of the original number (1260).
However, 126000 (which can be expressed as 21 × 6000 or 210 × 600) is not a vampire number, since although 126000 = 21 × 6000 and the digits (216000) are a permutation of the original number, the two factors 21 and 6000 do not have the correct number of digits. Furthermore, although 126000 = 210 × 600, both factors 210 and 600 have trailing zeroes.
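A brute-force search implementing the definition above makes these checks concrete. This is an illustrative sketch (the function name and structure are not from any cited source): it enumerates half-length divisors up to the square root of n and applies the digit-permutation and trailing-zero rules.

```python
def fangs(n):
    """Return all fang pairs (a, b), a <= b, witnessing that n is a vampire number."""
    s = str(n)
    if len(s) % 2 != 0:
        return []                     # vampire numbers have an even digit count
    half = len(s) // 2
    digits = sorted(s)
    pairs = []
    a = 10 ** (half - 1)              # smallest number with `half` digits
    while a * a <= n:                 # requiring a <= b forces a <= sqrt(n)
        if n % a == 0:
            b = n // a
            if (len(str(b)) == half                        # b also has half the digits
                    and not (a % 10 == 0 and b % 10 == 0)  # not both trailing zeros
                    and sorted(str(a) + str(b)) == digits):
                pairs.append((a, b))
        a += 1
    return pairs
```

For instance, fangs(1260) returns [(21, 60)], while fangs(126000) returns an empty list: every candidate factorization fails either the digit-count rule or the trailing-zero rule, matching the discussion above.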
The first few vampire numbers are:
1260 = 21 × 60
1395 = 15 × 93
1435 = 35 × 41
1530 = 30 × 51
1827 = 21 × 87
2187 = 27 × 81
6880 = 80 × 86
102510 = 201 × 510
104260 = 260 × 401
105210 = 210 × 501
The sequence of vampire numbers is:
1260, 1395, 1435, 1530, 1827, 2187, 6880, 102510, 104260, 105210, 105264, 105750, 108135, 110758, 115672, 116725, 117067, 118440, 120600, 123354, 124483, 125248, 125433, 125460, 125500, ... (sequence A014575 in the OEIS)
There are many known sequences of infinitely many vampire numbers following a pattern, such as:
1530 = 30 × 51, 150300 = 300 × 501, 15003000 = 3000 × 5001, ...
Al Sweigart calculated all the vampire numbers that have at most 10 digits.[5]
Multiple fang pairs
A vampire number can have multiple distinct pairs of fangs. The first of infinitely many vampire numbers with 2 pairs of fangs:
125460 = 204 × 615 = 246 × 510
The first with 3 pairs of fangs:
13078260 = 1620 × 8073 = 1863 × 7020 = 2070 × 6318
The first with 4 pairs of fangs:
16758243290880 = 1982736 × 8452080 = 2123856 × 7890480 = 2751840 × 6089832 = 2817360 × 5948208
The first with 5 pairs of fangs:
24959017348650 = 2947050 × 8469153 = 2949705 × 8461530 = 4125870 × 6049395 = 4129587 × 6043950 = 4230765 × 5899410
Variants
Pseudovampire numbers (disfigurate vampire numbers) are similar to vampire numbers, except that the fangs of an n-digit pseudovampire number need not be of length n/2 digits. Pseudovampire numbers can have an odd number of digits, for example 126 = 6 × 21.
More generally, more than two fangs are allowed. In this case, vampire numbers are numbers n which can be factorized using the digits of n. For example, 1395 = 5 × 9 × 31. This sequence starts (sequence A020342 in the OEIS):
126, 153, 688, 1206, 1255, 1260, 1395, ...
A vampire prime or prime vampire number, as defined by Carlos Rivera in 2002,[6] is a true vampire number whose fangs are its prime factors. The first few vampire primes are:
117067, 124483, 146137, 371893, 536539
As of 2007, the largest known vampire prime is the square $(94892254795\times 10^{103294}+1)^{2}$, found by Jens K. Andersen in September 2007.[2]
A double vampire number is a vampire number whose fangs are themselves vampire numbers. The smallest example is 1047527295416280 = 25198740 × 41570622 = (2940 × 8571) × (5601 × 7422).
A Roman numeral vampire number is a vampire number that uses Roman numerals instead of base 10. An example is II × IV = VIII.
Other bases
Vampire numbers also exist for bases other than base 10.
For example, a vampire number in base 12 is 10392BA45768 = 105628 × BA3974, where A means ten and B means eleven. Another example in the same base is a vampire number with 3 fangs, 572164B9A830 = 8752 × 9346 × A0B1. One example with 4 fangs is 3715A6B89420 = 763 × 824 × 905 × B1A. In these examples, all 12 digits are used exactly once.
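The same checks carry over to an arbitrary base by working with digit lists instead of digit strings. The sketch below is illustrative (it generalizes the two-fang trailing-zero rule to "not all fangs end in a zero digit", which reduces to the usual rule for two fangs) and verifies a claimed factorization rather than searching for one:

```python
from math import prod

def digits(n, b):
    """Digit list of n in base b, most significant digit first."""
    out = []
    while n:
        out.append(n % b)
        n //= b
    return out[::-1] or [0]

def is_vampire(n, fangs, b=10):
    """Check that the tuple `fangs` witnesses n as a vampire number in base b."""
    d = digits(n, b)
    if prod(fangs) != n:
        return False
    # each fang must have an equal share of n's digits
    if any(len(digits(f, b)) * len(fangs) != len(d) for f in fangs):
        return False
    if all(f % b == 0 for f in fangs):      # not all fangs may end in zero
        return False
    combined = [x for f in fangs for x in digits(f, b)]
    return sorted(combined) == sorted(d)
```

Since Python's int() parses string literals in bases up to 36, the base-12 examples above can be verified directly, e.g. with int('BA3974', 12).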
See also
• Friedman number
References
1. Weisstein, Eric W. "Vampire Numbers". MathWorld.
2. Andersen, Jens K. "Vampire numbers".
3. Pickover's original post describing vampire numbers
4. Pickover, Clifford A. (1995). Keys to Infinity. Wiley. ISBN 0-471-19334-8.
5. Sweigart, Al. "Vampire Numbers Visualized".
6. Rivera, Carlos. "The Prime-Vampire numbers".
External links
• Sweigart, Al. Vampire Numbers Visualized
• Grime, James; Copeland, Ed. "Vampire numbers". Numberphile. Brady Haran. Archived from the original on 2017-10-14. Retrieved 2013-04-08.
Classes of natural numbers
Powers and related numbers
• Achilles
• Power of 2
• Power of 3
• Power of 10
• Square
• Cube
• Fourth power
• Fifth power
• Sixth power
• Seventh power
• Eighth power
• Perfect power
• Powerful
• Prime power
Of the form a × 2b ± 1
• Cullen
• Double Mersenne
• Fermat
• Mersenne
• Proth
• Thabit
• Woodall
Other polynomial numbers
• Hilbert
• Idoneal
• Leyland
• Loeschian
• Lucky numbers of Euler
Recursively defined numbers
• Fibonacci
• Jacobsthal
• Leonardo
• Lucas
• Padovan
• Pell
• Perrin
Possessing a specific set of other numbers
• Amenable
• Congruent
• Knödel
• Riesel
• Sierpiński
Expressible via specific sums
• Nonhypotenuse
• Polite
• Practical
• Primary pseudoperfect
• Ulam
• Wolstenholme
Figurate numbers
2-dimensional
centered
• Centered triangular
• Centered square
• Centered pentagonal
• Centered hexagonal
• Centered heptagonal
• Centered octagonal
• Centered nonagonal
• Centered decagonal
• Star
non-centered
• Triangular
• Square
• Square triangular
• Pentagonal
• Hexagonal
• Heptagonal
• Octagonal
• Nonagonal
• Decagonal
• Dodecagonal
3-dimensional
centered
• Centered tetrahedral
• Centered cube
• Centered octahedral
• Centered dodecahedral
• Centered icosahedral
non-centered
• Tetrahedral
• Cubic
• Octahedral
• Dodecahedral
• Icosahedral
• Stella octangula
pyramidal
• Square pyramidal
4-dimensional
non-centered
• Pentatope
• Squared triangular
• Tesseractic
Combinatorial numbers
• Bell
• Cake
• Catalan
• Dedekind
• Delannoy
• Euler
• Eulerian
• Fuss–Catalan
• Lah
• Lazy caterer's sequence
• Lobb
• Motzkin
• Narayana
• Ordered Bell
• Schröder
• Schröder–Hipparchus
• Stirling first
• Stirling second
• Telephone number
• Wedderburn–Etherington
Primes
• Wieferich
• Wall–Sun–Sun
• Wolstenholme prime
• Wilson
Pseudoprimes
• Carmichael number
• Catalan pseudoprime
• Elliptic pseudoprime
• Euler pseudoprime
• Euler–Jacobi pseudoprime
• Fermat pseudoprime
• Frobenius pseudoprime
• Lucas pseudoprime
• Lucas–Carmichael number
• Somer–Lucas pseudoprime
• Strong pseudoprime
Arithmetic functions and dynamics
Divisor functions
• Abundant
• Almost perfect
• Arithmetic
• Betrothed
• Colossally abundant
• Deficient
• Descartes
• Hemiperfect
• Highly abundant
• Highly composite
• Hyperperfect
• Multiply perfect
• Perfect
• Practical
• Primitive abundant
• Quasiperfect
• Refactorable
• Semiperfect
• Sublime
• Superabundant
• Superior highly composite
• Superperfect
Prime omega functions
• Almost prime
• Semiprime
Euler's totient function
• Highly cototient
• Highly totient
• Noncototient
• Nontotient
• Perfect totient
• Sparsely totient
Aliquot sequences
• Amicable
• Perfect
• Sociable
• Untouchable
Primorial
• Euclid
• Fortunate
Other prime factor or divisor related numbers
• Blum
• Cyclic
• Erdős–Nicolas
• Erdős–Woods
• Friendly
• Giuga
• Harmonic divisor
• Jordan–Pólya
• Lucas–Carmichael
• Pronic
• Regular
• Rough
• Smooth
• Sphenic
• Størmer
• Super-Poulet
• Zeisel
Numeral system-dependent numbers
Arithmetic functions
and dynamics
• Persistence
• Additive
• Multiplicative
Digit sum
• Digit sum
• Digital root
• Self
• Sum-product
Digit product
• Multiplicative digital root
• Sum-product
Coding-related
• Meertens
Other
• Dudeney
• Factorion
• Kaprekar
• Kaprekar's constant
• Keith
• Lychrel
• Narcissistic
• Perfect digit-to-digit invariant
• Perfect digital invariant
• Happy
P-adic numbers-related
• Automorphic
• Trimorphic
Digit-composition related
• Palindromic
• Pandigital
• Repdigit
• Repunit
• Self-descriptive
• Smarandache–Wellin
• Undulating
Digit-permutation related
• Cyclic
• Digit-reassembly
• Parasitic
• Primeval
• Transposable
Divisor-related
• Equidigital
• Extravagant
• Frugal
• Harshad
• Polydivisible
• Smith
• Vampire
Other
• Friedman
Binary numbers
• Evil
• Odious
• Pernicious
Generated via a sieve
• Lucky
• Prime
Sorting related
• Pancake number
• Sorting number
Natural language related
• Aronson's sequence
• Ban
Graphemics related
• Strobogrammatic
• Mathematics portal
|
Van Amringe Mathematical Prize
The Department of Mathematics at Columbia University has presented the Professor Van Amringe Mathematical Prize each year since 1910. The prize was established in 1910 by George G. Dewitt, Class of 1867, and named after John Howard Van Amringe, who taught mathematics at Columbia (holding a professorship from 1865 to 1910), was the first Dean of Columbia College, and served as the first president of the American Mathematical Society (1888–1890).
For many years, the prize was awarded to the freshman or sophomore mathematics student at Columbia College deemed most proficient in the mathematical subjects designated during the year of the award. More recently (since 2003), the prize has been awarded to three Columbia College students majoring in math (a freshman, a sophomore, and a junior) who are deemed proficient in their class in the mathematical subjects designated during the year of the award.
Recipients
Year Recipients
2023 Rafay Abbas Ashary ('24), Noah Bergam ('25), Hao Cui ('26), Zheheng Xiao ('25) [1]
2022 Kevin Zhang ('25), Carter Teplica ('23), Zheheng Xiao ('25), David Chen ('23) [1]
2021 Elena Gribelyuk ('22), Jacob Weinstein ('22), David Chen ('23), Aiden Sagerman ('24) [1]
2020 Christian Serio ('21), Anda Tenie ('22), Gregory Pershing ('22), Rafay Ashary ('23) [2]
2019 Quang Dao ('20), Myeonhu Kim ('20), Anda Tenie ('22) [3]
2018 Quang Dao ('20), Myeonhu Kim ('20), Matthew Lerner-Brecher ('20) [4]
2017 Quang Dao ('20), Vu-Anh Phung ('19), Noah Miller ('18) [5]
2016 Nguyen Dung ('18), Srikar Varadaraj ('17) [6]
2015 Nguyen Dung ('18), Hardik Shah ('17), Samuel Nicoll ('16) [7]
2014 Hardik Shah ('17), Samuel Nicoll ('16), Yifei Zhao ('15) [8]
2013 Ha-Young Shin ('16), Yifei Zhao ('15), Sicong Zhang ('14) [9]
2012 Yifei Zhao ('15), Sicong Zhang ('14), Sung Chul Park ('13) [10]
2011 Sicong Zhang ('14), Sung Chul Park ('13), Shenjun Xu ('12) [11]
2010 Sung Park ('13), Shenjun Xu ('12), Samuel Beck ('11) [12]
2009 Shenjun Xu ('12), Jiayang Jiang ('11), Atanas Atanasov ('10) [13]
2008 Andra Liana Mihali ('11), Atanas Atanasov ('10), So Eun Park ('09) [14]
2007 Atanas Atanasov ('10), So Eun Park ('09), Dmytro Karabash ('08) [15]
2006 Vedant Misra ('09), Dmytro Karabash ('08) and Mikhail Shklyar ('08), Ilya Vinogradov ('07) [16]
2005 Mikhail Shklyar ('08), Ilya Vinogradov ('07), Florian Sprung ('06) [17]
2004 Ilya Vinogradov ('07)
2003 Mark Xue ('06), Kiril Datchev ('05), Jay Heumann ('05) [18]
2002 Kiril Datchev ('05) [19]
2001 Vladislav Shchogolev ('04) and Eric Patterson ('03) [20]
2000 David Anderson ('02) and Ari Stern ('01)
1990 Ali Yegulalp ('90)
1988 Ali Yegulalp ('90)
1987 Ali Yegulalp ('90)
1979 Sahotra Sarkar ('81)
1976 Chris Tong ('78)
1967 Louis Halle Rowen ('69)
1964 Sylvain Cappell ('66) [21]
1937 Jerome Kurshan ('39) [22]
1922 Melvin David Baller ('24) and Benedict Kurshan ('24) [23]
1921 Wilfred Francis Skeats ('23) [24]
1917 Israel Koral ('20) [25]
External links
• Columbia College Prizes
• Columbia College Prizes and Fellowships
• Past Prize Exams
Notes
1. "Awards and Honors".
2. Columbia College Today, Congrats, Class of 2020!: Academic Prizes
3. Columbia College Today, Summer 2019: Academic Awards and Prizes
4. Columbia College Today, Graduation 2018: Academic Awards and Prizes
5. Columbia College Today, Graduation 2017: Academic Awards and Prizes Winners
6. Columbia College Today, 2016 Academic Awards and Prizes
7. Columbia College Today, 2015 Academic Awards and Prizes
8. Columbia College Today, 2014 Academic Awards and Prizes
9. Columbia College Today, 2013 Academic Awards and Prizes
10. Columbia College Today, 2012 Academic Awards and Prizes
11. Columbia College Today, 2011 Academic Awards and Prizes
12. Columbia College Today, 2010 Academic Awards and Prize
13. Columbia College Today, 2009 Academic Awards and Prizes
14. Columbia College Today, 2008 Academic Awards and Prizes
15. Columbia College Today, 2007 Academic Awards and Prizes
16. Columbia College Today, College Students Honored at Awards and Prizes Ceremony
17. Columbia College Today, College Honors 78 Students at Awards and Prizes Ceremony
18. Columbia College Today, College Honors 78 Students at Awards and Prizes Ceremony
19. Columbia College Today, College Honors 65 Students at Awards and Prizes Ceremony
20. Columbia College Today, Second Annual Awards & Prizes Ceremony Held in Low Rotunda
21. New York Times, Columbia Will Award Degrees to 6,278 Today
22. Obituaries & Guestbooks from The Times
23. New York Times, COLUMBIA AWARDS 1922 PRIZE HONORS
24. New York Times, SIMS'S BOOK WINS COLUMBIA PRIZE
25. New York Times, COLUMBIA ANNOUNCES LIST OF PRIZE WINNERS
|
Glen Van Brummelen
Glen Robert Van Brummelen (born May 20, 1965) is a Canadian historian of mathematics specializing in historical applications of mathematics to astronomy. In his words, he is the “best trigonometry historian, and the worst trigonometry historian” (as he is the only one).
Glen Van Brummelen
Photo of Glen showing off a gift from one of his students.
He is president of the Canadian Society for History and Philosophy of Mathematics,[1] and was a co-editor of Mathematics and the Historian's Craft: The Kenneth O. May Lectures (Springer, 2005).
Life
Van Brummelen earned his PhD degree from Simon Fraser University in 1993,[2] and served as a professor of mathematics at Bennington College from 1999 to 2006. He then transferred to Quest University Canada as a founding faculty member. In 2020, he became the dean of the Faculty of Natural and Applied Sciences at Trinity Western University in Langley, BC.[3]
Glen Van Brummelen has published the first major history in English of the origins and early development of trigonometry, The Mathematics of the Heavens and the Earth: The Early History of Trigonometry.[4] His second book, Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry, concerns spherical trigonometry.[5][6]
He teaches courses on the history of mathematics and trigonometry at MathPath, specifically Heavenly Mathematics and Spherical Trigonometry. He is also well known for the "glenneagon", a variant on the enneagon, and for the two-dimensional animals he coined at MathPath: the glensheep and, to a lesser extent, the glenelephant and the glenturtle.
Works
• The Mathematics of the Heavens and the Earth: The Early History of Trigonometry Princeton; Oxford: Princeton University Press, 2009. ISBN 9780691129730, OCLC 750691811
• Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry Princeton; Oxford: Princeton University Press, 2013. ISBN 9780691175997, OCLC 988234342
• Trigonometry: A Very Short Introduction; Oxford: Princeton University Press, 2020 ISBN 9780198814313, OCLC 1101269106
• The Doctrine of Triangles: The History of Modern Trigonometry Princeton; Oxford: Princeton University Press, 2021 ISBN 978-0691179414, OCLC 1201300540
References
1. CSHPM Council, retrieved 2013-12-26.
2. Glen Van Brummelen at the Mathematics Genealogy Project
3. "Trinity Western University Welcomes New Dean of the Faculty of Natural and Applied Sciences". Trinity Western University. 29 May 2020. Retrieved 8 June 2020.
4. McRae, Alan S. (2009), Review of The Mathematics of the Heavens and the Earth, MR2473955.
5. Steele, John M. (July 2013), "A forgotten discipline (review of Heavenly Mathematics)", Metascience, doi:10.1007/s11016-013-9836-9, S2CID 254793113
6. Funk, Martin (2013), Review of Heavenly Mathematics, MR3012466.
External links
• Bio at Quest's Website
• Homepage at Bennington College
• Publication list
• Trigonometry Book page
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Catalonia
• Germany
• Israel
• Belgium
• United States
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• zbMATH
Other
• IdRef
|
Hendrik van Heuraet
Hendrik van Heuraet (1633, Haarlem – 1660?, Leiden) was a Dutch mathematician, also known as Henrici van Heuraet. He is noted as one of the founders of the integral calculus, and as the author of Epistola de Transmutatione Curvarum Linearum in Rectas [On the Transformation of Curves into Straight Lines] (1659).[1] From 1653 he studied at Leiden University, where he interacted with Frans van Schooten, Johannes Hudde, and Christiaan Huygens. In 1658 he and Hudde left for Saumur in France. He returned to Leiden the next year as a physician. After this his trail is lost.
Bibliography
• van Maanen, Jan A. (1984). "Hendrick van Heuraet (1634-1660?): His Life and Mathematical Work". Centaurus. 27 (3): 218–279. Bibcode:1984Cent...27..218V. doi:10.1111/j.1600-0498.1984.tb00781.x. ISSN 0008-8994.
References
1. Mathematical Treasures - Van Heuraet's Rectification of Curves, Frank J. Swetz, Victor J. Katz, Mathematical Association of America (maa.org) Accessed: 10-13-2016
External links
• Geometria, à Renato Des Cartes Anno 1637 (1683) with Epistola de Transmutatione Curvarum Linearum in Rectas, p. 517, @GoogleBooks.
• Hendrik van Heuraet Archived 2014-08-08 at the Wayback Machine at Turnbull WWW server
• Text with slightly more info on his life (Dutch)
Authority control
International
• ISNI
• VIAF
National
• France
• BnF data
• Germany
• Italy
• United States
• Czech Republic
• Netherlands
Academics
• Mathematics Genealogy Project
People
• Netherlands
Other
• IdRef
|
Van Kampen diagram
In the mathematical area of geometric group theory, a Van Kampen diagram (sometimes also called a Lyndon–Van Kampen diagram[1][2][3] ) is a planar diagram used to represent the fact that a particular word in the generators of a group given by a group presentation represents the identity element in that group.
History
The notion of a Van Kampen diagram was introduced by Egbert van Kampen in 1933.[4] This paper appeared in the same issue of the American Journal of Mathematics as another paper of Van Kampen's, in which he proved what is now known as the Seifert–Van Kampen theorem.[5] The main result of the paper on Van Kampen diagrams, now known as the van Kampen lemma, can be deduced from the Seifert–Van Kampen theorem by applying the latter to the presentation complex of a group.[6] However, Van Kampen did not notice this at the time, and the fact was only made explicit much later (see, e.g.,[7]). Van Kampen diagrams remained an underutilized tool in group theory for about thirty years, until the advent of small cancellation theory in the 1960s, in which they play a central role.[8] Currently Van Kampen diagrams are a standard tool in geometric group theory. They are used, in particular, for the study of isoperimetric functions in groups and of their various generalizations, such as isodiametric functions, filling length functions, and so on.
Formal definition
The definitions and notations below largely follow Lyndon and Schupp.[9]
Let
$G=\langle A|R\,\rangle $ (†)
be a group presentation where all r∈R are cyclically reduced words in the free group F(A). The alphabet A and the set of defining relations R are often assumed to be finite, which corresponds to a finite group presentation, but this assumption is not necessary for the general definition of a Van Kampen diagram. Let R∗ be the symmetrized closure of R, that is, let R∗ be obtained from R by adding all cyclic permutations of elements of R and of their inverses.
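The symmetrized closure is straightforward to compute explicitly if a word over A ∪ A−1 is encoded as a string in which an uppercase letter stands for the inverse of the corresponding lowercase generator. This encoding, and the function names, are an illustrative convention, not standard notation:

```python
def inverse(w: str) -> str:
    # the inverse of a word reverses the letters and inverts each one;
    # since case encodes inversion, swapcase() inverts a single letter
    return w[::-1].swapcase()

def symmetrized_closure(relators):
    """All cyclic permutations of the relators and of their inverses."""
    closure = set()
    for r in relators:
        for w in (r, inverse(r)):
            closure.update(w[i:] + w[:i] for i in range(len(w)))
    return closure

# the commutator relator a b a^-1 b^-1 of the free abelian group of rank two
# yields eight words: four rotations of it and four of its inverse
print(len(symmetrized_closure({"abAB"})))  # → 8
```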
A Van Kampen diagram over the presentation (†) is a planar finite cell complex ${\mathcal {D}}\,$, given with a specific embedding ${\mathcal {D}}\subseteq \mathbb {R} ^{2}\,$ with the following additional data and satisfying the following additional properties:
1. The complex ${\mathcal {D}}\,$ is connected and simply connected.
2. Each edge (one-cell) of ${\mathcal {D}}\,$ is labelled by an arrow and a letter a∈A.
3. Some vertex (zero-cell) which belongs to the topological boundary of ${\mathcal {D}}\subseteq \mathbb {R} ^{2}\,$ is specified as a base-vertex.
4. For each region (two-cell) of ${\mathcal {D}}$, for every vertex on the boundary cycle of that region, and for each of the two choices of direction (clockwise or counter-clockwise), the label of the boundary cycle of the region read from that vertex and in that direction is a freely reduced word in F(A) that belongs to R∗.
Thus the 1-skeleton of ${\mathcal {D}}\,$ is a finite connected planar graph Γ embedded in $\mathbb {R} ^{2}\,$ and the two-cells of ${\mathcal {D}}\,$ are precisely the bounded complementary regions for this graph.
By the choice of R∗ Condition 4 is equivalent to requiring that for each region of ${\mathcal {D}}\,$ there is some boundary vertex of that region and some choice of direction (clockwise or counter-clockwise) such that the boundary label of the region read from that vertex and in that direction is freely reduced and belongs to R.
A Van Kampen diagram ${\mathcal {D}}\,$ also has the boundary cycle, denoted $\partial {\mathcal {D}}\,$, which is an edge-path in the graph Γ corresponding to going around ${\mathcal {D}}\,$ once in the clockwise direction along the boundary of the unbounded complementary region of Γ, starting and ending at the base-vertex of ${\mathcal {D}}\,$. The label of that boundary cycle is a word w in the alphabet A ∪ A−1 (which is not necessarily freely reduced) that is called the boundary label of ${\mathcal {D}}\,$.
Further terminology
• A Van Kampen diagram ${\mathcal {D}}\,$ is called a disk diagram if ${\mathcal {D}}\,$ is a topological disk, that is, when every edge of ${\mathcal {D}}\,$ is a boundary edge of some region of ${\mathcal {D}}\,$ and when ${\mathcal {D}}\,$ has no cut-vertices.
• A Van Kampen diagram ${\mathcal {D}}\,$ is called non-reduced if there exists a reduction pair in ${\mathcal {D}}\,$, that is, a pair of distinct regions of ${\mathcal {D}}\,$ such that their boundary cycles share a common edge and such that their boundary cycles, read starting from that edge, clockwise for one of the regions and counter-clockwise for the other, are equal as words in A ∪ A−1. If no such pair of regions exists, ${\mathcal {D}}\,$ is called reduced.
• The number of regions (two-cells) of ${\mathcal {D}}\,$ is called the area of ${\mathcal {D}}\,$ denoted ${\rm {Area}}({\mathcal {D}})\,$.
In general, a Van Kampen diagram has a "cactus-like" structure, in which one or more disk components are joined by (possibly degenerate) arcs.
Example
The following figure shows an example of a Van Kampen diagram for the free abelian group of rank two
$G=\langle a,b|aba^{-1}b^{-1}\rangle .$
The boundary label of this diagram is the word
$w=b^{-1}b^{3}a^{-1}b^{-2}ab^{-1}ba^{-1}ab^{-1}ba^{-1}a.$
The area of this diagram is equal to 8.
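Since the group in this example is free abelian of rank two, a word equals the identity in G exactly when the exponent sums of a and b both vanish. A quick check of the boundary label above, encoding a−1 and b−1 as uppercase letters (an illustrative convention):

```python
from collections import Counter

# w = b^-1 b^3 a^-1 b^-2 a b^-1 b a^-1 a b^-1 b a^-1 a,
# with uppercase letters standing for inverse generators
w = "B" + "bbb" + "A" + "BB" + "a" + "Bb" + "Aa" + "Bb" + "Aa"

counts = Counter(w)
exp_a = counts["a"] - counts["A"]  # exponent sum of a in w
exp_b = counts["b"] - counts["B"]  # exponent sum of b in w
print(exp_a, exp_b)  # → 0 0, so w = 1 in the free abelian group
```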
Van Kampen lemma
A key basic result in the theory is the so-called Van Kampen lemma[9] which states the following:
1. Let ${\mathcal {D}}\,$ be a Van Kampen diagram over the presentation (†) with boundary label w which is a word (not necessarily freely reduced) in the alphabet A ∪ A−1. Then w=1 in G.
2. Let w be a freely reduced word in the alphabet A ∪ A−1 such that w=1 in G. Then there exists a reduced Van Kampen diagram ${\mathcal {D}}\,$ over the presentation (†) whose boundary label is freely reduced and is equal to w.
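Free reduction, the repeated cancellation of adjacent inverse pairs, is the elementary operation underlying both parts of the lemma. With the same uppercase-for-inverse string encoding (an illustrative convention) it is a one-pass stack algorithm:

```python
def free_reduce(w: str) -> str:
    """Cancel adjacent inverse pairs (x followed by x^-1) until none remain."""
    stack = []
    for x in w:
        if stack and stack[-1] == x.swapcase():
            stack.pop()  # x cancels the preceding letter
        else:
            stack.append(x)
    return "".join(stack)

print(free_reduce("aBba"))  # → "aa"
print(free_reduce("abBA"))  # → "" (the empty word)
```

The stack makes each cancellation expose the next candidate pair, so a single left-to-right pass suffices.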
Sketch of the proof
First observe that for an element w ∈ F(A) we have w = 1 in G if and only if w belongs to the normal closure of R in F(A) that is, if and only if w can be represented as
$w=u_{1}s_{1}u_{1}^{-1}\cdots u_{n}s_{n}u_{n}^{-1}{\text{ in }}F(A),$ (♠)
where n ≥ 0 and where si ∈ R∗ for i = 1, ..., n.
Part 1 of Van Kampen's lemma is proved by induction on the area of ${\mathcal {D}}\,$. The inductive step consists in "peeling" off one of the boundary regions of ${\mathcal {D}}\,$ to get a Van Kampen diagram ${\mathcal {D}}'\,$ with boundary label w' and observing that in F(A) we have
$w=usu^{-1}w',\,$
where s∈R∗ is the boundary cycle of the region that was removed to get ${\mathcal {D}}'\,$ from ${\mathcal {D}}\,$.
The proof of part two of Van Kampen's lemma is more involved. First, it is easy to see that if w is freely reduced and w = 1 in G then there exists some Van Kampen diagram ${\mathcal {D}}_{0}\,$ with boundary label w0 such that w = w0 in F(A) (after possibly freely reducing w0). Namely, consider a representation of w of the form (♠) above. Then take ${\mathcal {D}}_{0}\,$ to be a wedge of n "lollipops", with "stems" labelled by the ui and with the "candies" (2-cells) labelled by the si. Then the boundary label of ${\mathcal {D}}_{0}\,$ is a word w0 such that w = w0 in F(A). However, it is possible that the word w0 is not freely reduced. One then performs "folding" moves to get a sequence of Van Kampen diagrams ${\mathcal {D}}_{0},{\mathcal {D}}_{1},{\mathcal {D}}_{2},\dots \,$ whose boundary labels become more and more freely reduced, making sure that at each step the boundary label of each diagram in the sequence is equal to w in F(A). The sequence terminates in a finite number of steps with a Van Kampen diagram ${\mathcal {D}}_{k}\,$ whose boundary label is freely reduced and thus equal to w as a word. The diagram ${\mathcal {D}}_{k}\,$ may not be reduced. If that happens, we can remove the reduction pairs from this diagram by a simple surgery operation without affecting the boundary label. Eventually this produces a reduced Van Kampen diagram ${\mathcal {D}}\,$ whose boundary cycle is freely reduced and equal to w.
Strengthened version of Van Kampen's lemma
Moreover, the above proof shows that the conclusion of Van Kampen's lemma can be strengthened as follows.[9] Part 1 can be strengthened to say that if ${\mathcal {D}}\,$ is a Van Kampen diagram of area n with boundary label w then there exists a representation (♠) for w as a product in F(A) of exactly n conjugates of elements of R∗. Part 2 can be strengthened to say that if w is freely reduced and admits a representation (♠) as a product in F(A) of n conjugates of elements of R∗ then there exists a reduced Van Kampen diagram with boundary label w and of area at most n.
Dehn functions and isoperimetric functions
Main article: Dehn function
Area of a word representing the identity
Let w ∈ F(A) be such that w = 1 in G. Then the area of w, denoted Area(w), is defined as the minimum of the areas of all Van Kampen diagrams with boundary labels w (Van Kampen's lemma says that at least one such diagram exists).
One can show that the area of w can be equivalently defined as the smallest n≥0 such that there exists a representation (♠) expressing w as a product in F(A) of n conjugates of the defining relators.
Isoperimetric functions and Dehn functions
A nonnegative monotone nondecreasing function f(n) is said to be an isoperimetric function for presentation (†) if for every freely reduced word w such that w = 1 in G we have
${\rm {Area}}(w)\leq f(|w|),$
where |w| is the length of the word w.
Suppose now that the alphabet A in (†) is finite. Then the Dehn function of (†) is defined as
${\rm {Dehn}}(n)=\max\{{\rm {Area}}(w):w=1{\text{ in }}G,\ |w|\leq n,\ w{\text{ freely reduced}}\}.$
It is easy to see that Dehn(n) is an isoperimetric function for (†) and, moreover, if f(n) is any other isoperimetric function for (†) then Dehn(n) ≤ f(n) for every n ≥ 0.
Let w ∈ F(A) be a freely reduced word such that w = 1 in G. A Van Kampen diagram ${\mathcal {D}}\,$ with boundary label w is called minimal if ${\rm {Area}}({\mathcal {D}})={\rm {Area}}(w).$ Minimal Van Kampen diagrams are discrete analogues of minimal surfaces in Riemannian geometry.
Generalizations and other applications
• There are several generalizations of Van Kampen diagrams where, instead of being planar, connected and simply connected (that is, homotopy equivalent to a disk), the diagram is drawn on, or is homotopy equivalent to, some other surface. It turns out that there is a close connection between the geometry of the surface and certain group-theoretic notions. A particularly important one is the notion of an annular Van Kampen diagram, which is homotopy equivalent to an annulus. Annular diagrams, also known as conjugacy diagrams, can be used to represent conjugacy in groups given by group presentations.[9] Spherical Van Kampen diagrams are related to several versions of group-theoretic asphericity and to Whitehead's asphericity conjecture,[10] Van Kampen diagrams on the torus are related to commuting elements, diagrams on the real projective plane are related to involutions in the group, and diagrams on the Klein bottle are related to elements that are conjugate to their own inverses.
• Van Kampen diagrams are central objects in the small cancellation theory developed by Greendlinger, Lyndon and Schupp in the 1960s–1970s.[9][11] Small cancellation theory deals with group presentations where the defining relations have "small overlaps" with each other. This condition is reflected in the geometry of reduced Van Kampen diagrams over small cancellation presentations, forcing certain kinds of non-positively curved or negatively curved behavior. This behavior yields useful information about algebraic and algorithmic properties of small cancellation groups, in particular regarding the word and conjugacy problems. Small cancellation theory was one of the key precursors of geometric group theory, which emerged as a distinct mathematical area in the late 1980s, and it remains an important part of geometric group theory.
• Van Kampen diagrams play a key role in the theory of word-hyperbolic groups introduced by Gromov in 1987.[12] In particular, it turns out that a finitely presented group is word-hyperbolic if and only if it satisfies a linear isoperimetric inequality. Moreover, there is an isoperimetric gap in the possible spectrum of isoperimetric functions for finitely presented groups: for any finitely presented group either it is hyperbolic and satisfies a linear isoperimetric inequality or else the Dehn function is at least quadratic.[13][14]
• The study of isoperimetric functions for finitely presented groups has become an important general theme in geometric group theory where substantial progress has occurred. Much work has gone into constructing groups with "fractional" Dehn functions (that is, with Dehn functions being polynomials of non-integer degree).[15] The work of Rips, Ol'shanskii, Birget and Sapir[16][17] explored the connections between Dehn functions and time complexity functions of Turing machines and showed that an arbitrary "reasonable" time function can be realized (up to appropriate equivalence) as the Dehn function of some finitely presented group.
• Various stratified and relativized versions of Van Kampen diagrams have been explored in the subject as well. In particular, a stratified version of small cancellation theory, developed by Ol'shanskii, resulted in constructions of various group-theoretic "monsters", such as the Tarski Monster,[18] and in geometric solutions of the Burnside problem for periodic groups of large exponent.[19][20] Relative versions of Van Kampen diagrams (with respect to a collection of subgroups) were used by Osin to develop an isoperimetric function approach to the theory of relatively hyperbolic groups.[21]
See also
• Geometric group theory
• Presentation of a group
• Seifert–Van Kampen theorem
Basic references
• Alexander Yu. Ol'shanskii. Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991. ISBN 0-7923-1394-1
• Roger C. Lyndon and Paul E. Schupp. Combinatorial Group Theory. Springer-Verlag, New York, 2001. "Classics in Mathematics" series, reprint of the 1977 edition. ISBN 978-3-540-41158-1; Ch. V. Small Cancellation Theory. pp. 235–294.
Footnotes
1. B. Fine and G. Rosenberger, The Freiheitssatz and its extensions. The mathematical legacy of Wilhelm Magnus: groups, geometry and special functions (Brooklyn, NY, 1992), 213–252, Contemp. Math., 169, Amer. Math. Soc., Providence, RI, 1994
2. I.G. Lysenok, and A.G. Myasnikov, A polynomial bound for solutions of quadratic equations in free groups. Tr. Mat. Inst. Steklova 274 (2011), Algoritmicheskie Voprosy Algebry i Logiki, 148-190; translation in Proc. Steklov Inst. Math. 274 (2011), no. 1, 136–173
3. B. Fine, A. Gaglione, A. Myasnikov, G. Rosenberger, and D. Spellman, The elementary theory of groups. A guide through the proofs of the Tarski conjectures. De Gruyter Expositions in Mathematics, 60. De Gruyter, Berlin, 2014. ISBN 978-3-11-034199-7
4. E. van Kampen. On some lemmas in the theory of groups. American Journal of Mathematics. vol. 55, (1933), pp. 268–273.
5. E. R. van Kampen. On the connection between the fundamental groups of some related spaces. American Journal of Mathematics, vol. 55 (1933), pp. 261–267.
6. Invitations to Geometry and Topology. Oxford Graduate Texts in Mathematics. Oxford, New York: Oxford University Press. 2003. ISBN 9780198507727.
7. Aleksandr Yur'evich Ol'shanskii. Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991. ISBN 0-7923-1394-1.
8. Bruce Chandler, and Wilhelm Magnus. The history of combinatorial group theory. A case study in the history of ideas. Studies in the History of Mathematics and Physical Sciences, 9. Springer-Verlag, New York, 1982. ISBN 0-387-90749-1.
9. Roger C. Lyndon and Paul E. Schupp. Combinatorial Group Theory. Springer-Verlag, New York, 2001. "Classics in Mathematics" series, reprint of the 1977 edition. ISBN 978-3-540-41158-1; Ch. V. Small Cancellation Theory. pp. 235–294.
10. Ian M. Chiswell, Donald J. Collins, and Johannes Huebschmann. Aspherical group presentations. Mathematische Zeitschrift, vol. 178 (1981), no. 1, pp. 1–36.
11. Martin Greendlinger. Dehn's algorithm for the word problem. Communications on Pure and Applied Mathematics, vol. 13 (1960), pp. 67–83.
12. M. Gromov. Hyperbolic Groups. Essays in Group Theory (G. M. Gersten, ed.), MSRI Publ. 8, 1987, pp. 75–263; ISBN 0-387-96618-8.
13. Michel Coornaert, Thomas Delzant, Athanase Papadopoulos, Géométrie et théorie des groupes: les groupes hyperboliques de Gromov. Lecture Notes in Mathematics, vol. 1441, Springer-Verlag, Berlin, 1990. ISBN 3-540-52977-2.
14. B. H. Bowditch. A short proof that a subquadratic isoperimetric inequality implies a linear one. Michigan Mathematical Journal, vol. 42 (1995), no. 1, pp. 103–107.
15. M. R. Bridson, Fractional isoperimetric inequalities and subgroup distortion. Journal of the American Mathematical Society, vol. 12 (1999), no. 4, pp. 1103–1118.
16. M. Sapir, J.-C. Birget, E. Rips, Isoperimetric and isodiametric functions of groups. Annals of Mathematics (2), vol. 156 (2002), no. 2, pp. 345–466.
17. J.-C. Birget, Aleksandr Yur'evich Ol'shanskii, E. Rips, M. Sapir, Isoperimetric functions of groups and computational complexity of the word problem. Annals of Mathematics (2), vol. 156 (2002), no. 2, pp. 467–518.
18. Ol'sanskii, A. Yu. (1979). Бесконечные группы с циклическими подгруппами [Infinite groups with cyclic subgroups]. Doklady Akademii Nauk SSSR (in Russian). 245 (4): 785–787.
19. A. Yu. Ol'shanskii. On a geometric method in the combinatorial group theory. Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Warsaw, 1983), pp. 415–424, PWN, Warsaw, 1984.
20. S. V. Ivanov. The free Burnside groups of sufficiently large exponents. International Journal of Algebra and Computation, vol. 4 (1994), no. 1-2.
21. Denis V. Osin. Relatively hyperbolic groups: intrinsic geometry, algebraic properties, and algorithmic problems. Memoirs of the American Mathematical Society 179 (2006), no. 843.
External links
• Van Kampen diagrams from the files of David A. Jackson
|
Frans van Schooten
Frans van Schooten Jr., also rendered as Franciscus van Schooten (15 May 1615, Leiden – 29 May 1660, Leiden), was a Dutch mathematician best known for popularizing the analytic geometry of René Descartes.
Frans van Schooten
Born1615
Leiden, Dutch Republic
Died29 May 1660
Leiden, Dutch Republic
Known forVan Schooten's theorem
Scientific career
FieldsMathematics
Influences
• Viète
• Descartes
• Beaune
• Fermat
• Hudde
• Witt
• Heuraet
InfluencedChristiaan Huygens
Life
Van Schooten's father, Frans van Schooten Senior was a professor of mathematics at the University of Leiden, having Christiaan Huygens, Johann van Waveren Hudde, and René de Sluze as students.
Van Schooten met Descartes in 1632 and read his Géométrie (an appendix to his Discours de la méthode) while it was still unpublished. Finding it hard to understand, he went to France to study the works of other important mathematicians of his time, such as François Viète and Pierre de Fermat. When Frans van Schooten returned to his home in Leiden in 1646, he inherited his father's position and one of his most important pupils, Huygens.
The pendant marriage portraits of him and his wife Margrieta Wijnants were painted by Rembrandt and are kept in the National Gallery of Art:[1]
• Portrait of a Gentleman with a Tall Hat and Gloves
• Portrait of a Lady with an Ostrich-Feather Fan
Work
Van Schooten's 1649 Latin translation of and commentary on Descartes' Géométrie was valuable in that it made the work comprehensible to the broader mathematical community, and thus was responsible for the spread of analytic geometry to the world.
Over the next decade he enlisted the aid of other mathematicians of the time (de Beaune, Hudde, Heuraet, and de Witt) and expanded the commentaries to two volumes, published in 1659 and 1661. This edition and its extensive commentaries were far more influential than the 1649 edition. It was this edition that Gottfried Leibniz and Isaac Newton knew.
Van Schooten was one of the first to suggest, in exercises published in 1657, that these ideas be extended to three-dimensional space. Van Schooten's efforts also made Leiden the centre of the mathematical community for a short period in the middle of the seventeenth century.
In elementary geometry Van Schooten's theorem is named after him.
References
1. Discovery of portraits of Leiden professor and his wife in NRC, 6 November 2018
• Some Contemporaries of Descartes, Fermat, Pascal and Huygens: Van Schooten, based on W. W. Rouse Ball's A Short Account of the History of Mathematics (4th edition, 1908)
External links
• Mathematische Oeffeningen van Frans van Schooten (in Dutch)
• Biografisch Woordenboek van Nederlandse Wiskundigen: Frans van Schooten (in Dutch)
• Frans van Schooten, and his Ruler Constructions at Convergence
• O'Connor, John J.; Robertson, Edmund F., "Frans van Schooten", MacTutor History of Mathematics Archive, University of St Andrews
• An e-textbook developed from Frans van Schooten 1646 by dbook
|
Heine–Stieltjes polynomials
In mathematics, the Heine–Stieltjes polynomials or Stieltjes polynomials, introduced by T. J. Stieltjes (1885), are polynomial solutions of a second-order Fuchsian equation, a differential equation all of whose singularities are regular. The Fuchsian equation has the form
${\frac {d^{2}S}{dz^{2}}}+\left(\sum _{j=1}^{N}{\frac {\gamma _{j}}{z-a_{j}}}\right){\frac {dS}{dz}}+{\frac {V(z)}{\prod _{j=1}^{N}(z-a_{j})}}S=0$
for some polynomial V(z) of degree at most N − 2; if this equation has a polynomial solution S, then V is called a Van Vleck polynomial (after Edward Burr Van Vleck) and S is called a Heine–Stieltjes polynomial.
For the orthogonal polynomials, see Stieltjes–Wigert polynomials. For the polynomials associated to a family of orthogonal polynomials, see Stieltjes polynomials.
Heun polynomials are the special cases of Stieltjes polynomials when the differential equation has four singular points.
References
• Marden, Morris (1931), "On Stieltjes Polynomials", Transactions of the American Mathematical Society, Providence, R.I.: American Mathematical Society, 33 (4): 934–944, doi:10.2307/1989516, ISSN 0002-9947, JSTOR 1989516
• Sleeman, B. D.; Kuznetzov, V. B. (2010), "Stieltjes Polynomials", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
• Stieltjes, T. J. (1885), "Sur certains polynômes qui vérifient une équation différentielle linéaire du second ordre et sur la theorie des fonctions de Lamé", Acta Mathematica, 6 (1): 321–326, doi:10.1007/BF02400421
|
Adriaan van Wijngaarden
Adriaan "Aad" van Wijngaarden (2 November 1916 – 7 February 1987) was a Dutch mathematician and computer scientist. Trained as a mechanical engineer, Van Wijngaarden emphasized and promoted the mathematical aspects of computing, first in numerical analysis, then in programming languages, and finally in the design principles of such languages.
Adriaan van Wijngaarden
Born(1916-11-02)2 November 1916
Rotterdam, Netherlands
Died7 February 1987(1987-02-07) (aged 70)
Amstelveen, Netherlands
CitizenshipNetherlands
Alma materDelft University of Technology (1939)
Known forALGOL
CWI
IFIP
Van Wijngaarden grammar
AwardsIEEE Computer Pioneer Award (1986)
Scientific career
FieldsNumerical mathematics
Computer science
InstitutionsUniversity of Amsterdam
Mathematisch Centrum in Amsterdam
Doctoral advisorCornelis Benjamin Biezeno
Doctoral studentsEdsger W. Dijkstra
Peter van Emde Boas
Jaco de Bakker
Reinder van de Riet
Guus Zoutendijk
Maarten van Emden
Signature
Biography
Van Wijngaarden's university education was in mechanical engineering, for which he received a degree from Delft University of Technology[1] in 1939. He then studied for a doctorate in hydrodynamics, but abandoned the field. He joined the Nationaal Luchtvaartlaboratorium in 1945 and went with a group to England the next year to learn about new technologies that had been developed there during World War II.
Van Wijngaarden was intrigued by the new idea of automatic computing. On 1 January 1947, he became the head of the Computing Department of the brand-new Centrum Wiskunde & Informatica (CWI), which was at the time known as the Mathematisch Centrum (MC), in Amsterdam.[1] He then made further visits to England and the United States, gathering ideas for the construction of the first Dutch computer, the ARRA, an electromechanical device first demonstrated in 1952. In that same year, van Wijngaarden hired Edsger W. Dijkstra, and they worked on software for the ARRA.
In 1958, while visiting Edinburgh, Scotland, Van Wijngaarden was seriously injured in an automobile accident in which his wife was killed. After he recovered, he focused more on programming language research. The following year, he became a member of the Royal Netherlands Academy of Arts and Sciences.[2]
In 1961, he became the director of the Mathematisch Centrum in Amsterdam and remained in that post for the next twenty years.
He was one of the designers of the original ALGOL language, and later ALGOL 68,[3] for which he developed a two-level type of formal grammar that became known as a Van Wijngaarden grammar.
In 1962, he became involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi,[4] which specifies, maintains, and supports the programming languages ALGOL 60 and ALGOL 68.[5]
Van Wijngaarden Awards
The Van Wijngaarden Awards are named in his honor and have been awarded every five years since the 60th anniversary of the Centrum Wiskunde & Informatica in 2006. The physical award consists of a bronze sculpture.
• 2006: Computer scientist Nancy Lynch and mathematician-magician Persi Diaconis.[6]
• 2011: Computer scientist Éva Tardos and numerical mathematician John C. Butcher.[7]
• 2016: Computer scientist Xavier Leroy and statistician Sara van de Geer.[8]
• 2021: Computer scientist Marta Kwiatkowska and statistician Susan Murphy.[9]
See also
• List of pioneers in computer science
• List of computer science awards
References
1. Verrijn-Stuart, Alex (1995). "IFIP 36 years Obituaries: Prof. Adriaan van WIJNGAARDEN (1916–1987)". Retrieved 2020-10-11.
2. "Adriaan van Wijngaarden (1916 - 1987)". Royal Netherlands Academy of Arts and Sciences. Retrieved 2015-07-20.
3. van Wijngaarden, Adriaan; Mailloux, Barry James; Peck, John Edward Lancelot; Koster, Cornelis Hermanus Antonius; Sintzoff, Michel [in French]; Lindsey, Charles Hodgson; Meertens, Lambert Guillaume Louis Théodore; Fisker, Richard G., eds. (1976). Revised Report on the Algorithmic Language ALGOL 68 (PDF). Springer-Verlag. ISBN 978-0-387-07592-1. OCLC 1991170. Archived (PDF) from the original on 2019-04-19. Retrieved 2019-05-11.
4. Jeuring, Johan; Meertens, Lambert; Guttmann, Walter (2016-08-17). "Profile of IFIP Working Group 2.1". Foswiki. Retrieved 2020-09-11.
5. Swierstra, Doaitse; Gibbons, Jeremy; Meertens, Lambert (2011-03-02). "ScopeEtc: IFIP21: Foswiki". Foswiki. Retrieved 2020-09-11.
6. "First Van Wijngaarden Awards for Lynch and Diaconis" (Press release). Centrum Wiskunde & Informatica. 2006-02-10. Archived from the original on 2012-07-28. Retrieved 2009-10-11.
7. "Van Wijngaarden Award 2011 for Éva Tardos and John Butcher" (Press release). Centrum Wiskunde & Informatica. 2011-02-10. Archived from the original on 2011-02-21. Retrieved 2011-02-12.
8. CWI soiree & Van Wijngaarden Award Ceremony, Centrum Wiskunde & Informatica, 2016-09-01, archived from the original on 2016-09-25, retrieved 2016-09-01
9. Marta Kwiatkowska and Susan Murphy win Van Wijngaarden Awards 2021 for preventing software faults and for improving decision making in health, Centrum Wiskunde & Informatica, 2021-09-20, retrieved 2021-12-07
External links
• Adriaan van Wijngaarden at DBLP Bibliography Server
• Rekenmeisjes en rekentuig by Gerard Alberts. Pythagoras. (in Dutch)
• Adriaan van Wijngaarden (1916-1987). Biografisch Woordenboek van Nederlandse Wiskundigen.
• Aad van Wijngaarden’s 100th Birthday
ALGOL programming
Implementations
Technical
standards
• ALGOL 58
• ALGOL 60
• ALGOL 68
Dialects
• ABC ALGOL
• ALCOR
• ALGO
• ALGOL 68C
• ALGOL 68-R
• ALGOL 68RS (ELLA)
• ALGOL 68S
• ALGOL N
• ALGOL W
• ALGOL X
• Atlas Autocode (Edinburgh IMP)
• Burroughs ALGOL
• CORAL 66
• Dartmouth ALGOL 30
• DASK ALGOL
• DG/L
• Elliott ALGOL
• Executive Systems Problem Oriented Language (ESPOL) → New Executive Programming Language (NEWP)
• FLACC
• IMP
• JOVIAL
• Kidsgrove Algol
• MAD
• Mary
• NELIAC
• RTL/2
• S-algol, PS-algol, Napier88
• Simula
• Small Machine ALGOL Like Language (SMALL)
• SMIL ALGOL
Formalisms
• Jensen's device
• Van Wijngaarden grammar
Community
Organizations
Professional
associations
• ALCOR Group
• Association for Computing Machinery (ACM)
• BSI Group
• Euro-Asian Council for Standardization, Metrology and Certification (EASC)
• International Federation for Information Processing (IFIP) IFIP Working Group 2.1
• Society of Applied Mathematics and Mechanics (GAMM)
Business
• Burroughs Corporation
• Elliott Brothers
• Regnecentralen
Education
• Case Institute of Technology
• University of Edinburgh
• University of St Andrews
• Manchester University
• Massachusetts Institute of Technology (MIT)
Government
• Royal Radar Establishment (RRE)
People
ALGOL 58
• John Backus
• Friedrich L. Bauer
• Hermann Bottenbruch
• Charles Katz
• Alan Perlis
• Heinz Rutishauser
• Klaus Samelson
• Joseph Henry Wegstein
MAD
• Bruce Arden
• Bernard Galler
• Robert M. Graham
ALGOL 60
• Backus^
• Roland Carl Backhouse
• Bauer^
• Richard Bird
• Stephen R. Bourne
• Edsger W. Dijkstra
• Andrey Ershov
• Robert W. Floyd
• Jeremy Gibbons
• Julien Green
• David Gries
• Eric Hehner
• Tony Hoare
• Jørn Jensen
• Katz^
• Peter Landin
• Tom Maibaum
• Conor McBride
• John McCarthy
• Carroll Morgan
• Peter Naur
• Maurice Nivat
• John E. L. Peck
• Perlis^
• Brian Randell
• Rutishauser^
• Samelson^
• Jacob T. Schwartz
• Micha Sharir
• David Turner
• Bernard Vauquois
• Eiiti Wada
• Wegstein^
• Adriaan van Wijngaarden
• Mike Woodger
Simula
• Ole-Johan Dahl
• Kristen Nygaard
ALGOL 68
• Bauer^
• Susan G. Bond
• Bourne^
• Robert Dewar
• Dijkstra^
• Gerhard Goos
• Michael Guy
• Hoare^
• Cornelis H. A. Koster
• Peter Landin
• Charles H. Lindsey
• Barry J. Mailloux
• McCarthy^
• Lambert Meertens
• Naur^
• Peck^
• Willem van der Poel
• Randell^
• Douglas T. Ross
• Samelson^
• Michel Sintzoff
• van Wijngaarden^
• Niklaus Wirth
• Woodger^
• Philip Woodward
• Nobuo Yoneda
• Hal Abelson
• John Barnes
• Tony Brooker
• Ron Morrison
• Peter O'Hearn
• John C. Reynolds
• ALGOL Bulletin
Comparison
• ALGOL 58 influence on ALGOL 60
• ALGOL 68 to other languages
• ALGOL 68 to C++
• ^ = full name and link in prior ALGOL version above
|
Van der Corput lemma (harmonic analysis)
In mathematics, in the field of harmonic analysis, the van der Corput lemma is an estimate for oscillatory integrals named after the Dutch mathematician J. G. van der Corput.
The following result is stated by E. Stein:[1]
Suppose that a real-valued function $\phi (x)$ is smooth in an open interval $(a,b)$, and that $|\phi ^{(k)}(x)|\geq 1$ for all $x\in (a,b)$. Assume that either $k\geq 2$, or that $k=1$ and $\phi '(x)$ is monotone on $(a,b)$. Then there is a constant $c_{k}$, which does not depend on $\phi $ or $\lambda $, such that
${\bigg |}\int _{a}^{b}e^{i\lambda \phi (x)}\,dx\,{\bigg |}\leq c_{k}|\lambda |^{-1/k}$
for any $\lambda \in \mathbb {R} $ with $\lambda \neq 0$.
Sublevel set estimates
The van der Corput lemma is closely related to sublevel set estimates,[2] which give an upper bound on the measure of the set on which a function takes values of absolute value at most $\epsilon $.
Suppose that a real-valued function $\phi (x)$ is smooth on a finite or infinite interval $I\subset \mathbb {R} $, and that $|\phi ^{(k)}(x)|\geq 1$ for all $x\in I$. There is a constant $c_{k}$, which does not depend on $\phi $, such that for any $\epsilon \geq 0$ the measure of the sublevel set $\{x\in I:|\phi (x)|\leq \epsilon \}$ is bounded by $c_{k}\epsilon ^{1/k}$.
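A grid-based numerical illustration (the particular phase $\phi (x)=x^{2}/2$ and the interval are arbitrary choices, not from the source): here $|\phi ''|=1$, so $k=2$ and the sublevel measure should scale like $\epsilon ^{1/2}$.

```python
import math

# phi(x) = x^2/2 on I = [-1, 1]; |phi''(x)| = 1 everywhere, so k = 2 applies.
# Exact sublevel measure: |{x in I : |phi(x)| <= eps}| = 2*sqrt(2*eps)
# once sqrt(2*eps) <= 1, matching the predicted c_2 * eps^(1/2) scaling.
N = 400_000
dx = 2.0 / N
for eps in (1e-1, 1e-2, 1e-3, 1e-4):
    measure = dx * sum(1 for i in range(N + 1)
                       if abs((-1.0 + i * dx) ** 2 / 2) <= eps)
    assert abs(measure - 2 * math.sqrt(2 * eps)) < 1e-3
```

The constant $2{\sqrt {2}}$ observed here is one admissible value of $c_{2}$ for this particular $\phi $; the lemma asserts only that some constant works uniformly over all such phases.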
References
1. Elias Stein, Harmonic Analysis: Real-variable Methods, Orthogonality and Oscillatory Integrals. Princeton University Press, 1993. ISBN 0-691-03216-5
2. M. Christ, Hilbert transforms along curves, Ann. of Math. 122 (1985), 575–596
|
Vandermonde matrix
In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an $(m+1)\times (n+1)$ matrix
$V=V(x_{0},x_{1},\cdots ,x_{m})={\begin{bmatrix}1&x_{0}&x_{0}^{2}&\dots &x_{0}^{n}\\1&x_{1}&x_{1}^{2}&\dots &x_{1}^{n}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{m}&x_{m}^{2}&\dots &x_{m}^{n}\end{bmatrix}}$
with entries $V_{i,j}=x_{i}^{j}$, the jth power of the number $x_{i}$, for all zero-based indices $i$ and $j$.[1] Most authors define the Vandermonde matrix as the transpose of the above matrix.[2][3]
The determinant of a square Vandermonde matrix (when $n=m$) is called a Vandermonde determinant or Vandermonde polynomial. Its value is:
$\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$
This is non-zero if and only if all $x_{i}$ are distinct (no two are equal), making the Vandermonde matrix invertible.
Applications
The polynomial interpolation problem is to find a polynomial $p(x)=a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{n}x^{n}$ which satisfies $p(x_{0})=y_{0},\ldots ,p(x_{m})=y_{m}$ for given data points $(x_{0},y_{0}),\ldots ,(x_{m},y_{m})$. This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix, as follows. $V$ computes the values of $p(x)$ at the points $x=x_{0},\ x_{1},\dots ,\ x_{m}$ via a matrix multiplication $Va=y$, where $a=(a_{0},\ldots ,a_{n})$ is the vector of coefficients and $y=(y_{0},\ldots ,y_{m})=(p(x_{0}),\ldots ,p(x_{m}))$ is the vector of values (both written as column vectors):
${\begin{bmatrix}1&x_{0}&x_{0}^{2}&\dots &x_{0}^{n}\\1&x_{1}&x_{1}^{2}&\dots &x_{1}^{n}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{m}&x_{m}^{2}&\dots &x_{m}^{n}\end{bmatrix}}\cdot {\begin{bmatrix}a_{0}\\a_{1}\\\vdots \\a_{n}\end{bmatrix}}={\begin{bmatrix}p(x_{0})\\p(x_{1})\\\vdots \\p(x_{m})\end{bmatrix}}.$
If $n=m$ and $x_{0},\dots ,\ x_{n}$ are distinct, then V is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given V and y, one can find the required $p(x)$ by solving for its coefficients $a$ in the equation $Va=y$:[4]
$a=V^{-1}y$.
That is, the map from coefficients to values of polynomials is a bijective linear mapping with matrix V, and the interpolation problem has a unique solution. This result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials.
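As an illustrative sketch (the sample points and values below are arbitrary choices, not from the source), the system $Va=y$ can be assembled and solved in exact rational arithmetic:

```python
from fractions import Fraction

def vandermonde(xs):
    """Rows [1, x, x^2, ..., x^n], one row per sample point x."""
    n = len(xs) - 1
    return [[Fraction(x) ** j for j in range(n + 1)] for x in xs]

def solve(A, b):
    """Gauss-Jordan elimination in exact rational arithmetic (A assumed invertible)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Interpolate through (0, 1), (1, 3), (2, 9).
xs, ys = [0, 1, 2], [Fraction(1), Fraction(3), Fraction(9)]
a = solve(vandermonde(xs), ys)               # coefficients a_0, a_1, a_2
p = lambda x, a=a: sum(c * x**j for j, c in enumerate(a))
assert all(p(x) == y for x, y in zip(xs, ys))   # p(x) = 1 + 2x^2
```

With distinct sample points the coefficients are uniquely determined, as the unisolvence theorem states; here the interpolant is $p(x)=1+2x^{2}$.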
In statistics, the equation $Va=y$ means that the Vandermonde matrix is the design matrix of polynomial regression.
In numerical analysis, solving the equation $Va=y$ naïvely by Gaussian elimination results in an algorithm with time complexity O(n³). Exploiting the structure of the Vandermonde matrix, one can use Newton's divided differences method[5] (or the Lagrange interpolation formula[6][7]) to solve the equation in O(n²) time, which also gives the UL factorization of $V^{-1}$. The resulting algorithm produces extremely accurate solutions, even if $V$ is ill-conditioned.[2] (See polynomial interpolation.)
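A minimal sketch of the O(n²) divided-difference approach (the nodes are arbitrary; this computes the interpolant in Newton form rather than its monomial coefficients):

```python
def newton_coeffs(xs, ys):
    """Divided-difference coefficients, computed in place: O(n^2) time, O(n) space."""
    c = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (xs[i] - xs[i - j])
    return c   # c[k] is the divided difference [y_0, ..., y_k]

def newton_eval(c, xs, x):
    """Horner-style evaluation of the Newton-form interpolant: O(n) per point."""
    acc = c[-1]
    for k in range(len(c) - 2, -1, -1):
        acc = acc * (x - xs[k]) + c[k]
    return acc

xs = [0.0, 1.0, 2.0, 3.0]                     # arbitrary distinct nodes
ys = [x**3 - 2 * x + 1 for x in xs]           # samples of a cubic
c = newton_coeffs(xs, ys)
# Four samples of a cubic determine it exactly, so the interpolant reproduces it.
assert abs(newton_eval(c, xs, 1.5) - (1.5**3 - 2 * 1.5 + 1)) < 1e-9
```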
The Vandermonde determinant is used in the representation theory of the symmetric group.[8]
When the values $x_{i}$ belong to a finite field, the Vandermonde determinant is also called the Moore determinant, and has properties which are important in the theory of BCH codes and Reed–Solomon error correction codes.
The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the $x_{i}$ are chosen to be nth roots of unity. The fast Fourier transform computes the product of this matrix with a vector in O(n log₂ n) time.[9]
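A small illustration (arbitrary coefficient vector) of the DFT matrix as a Vandermonde matrix on the roots of unity, using a naive O(n²) matrix product rather than an FFT:

```python
import cmath

def dft_matrix(n):
    """Vandermonde matrix on the n-th roots of unity x_j = exp(-2*pi*i*j/n):
    entry (j, k) is x_j ** k, which is exactly the DFT matrix."""
    w = [cmath.exp(-2j * cmath.pi * j / n) for j in range(n)]
    return [[w[j] ** k for k in range(n)] for j in range(n)]

def dft(a):
    """Naive O(n^2) DFT: multiply the Vandermonde/DFT matrix by a vector."""
    n = len(a)
    V = dft_matrix(n)
    return [sum(V[j][k] * a[k] for k in range(n)) for j in range(n)]

# The DFT of the coefficients of p(x) = 1 + 2x + 3x^2 + 4x^3 evaluates p
# at the 4th roots of unity (here ordered 1, -i, -1, i).
a = [1, 2, 3, 4]
X = dft(a)
p = lambda z: sum(c * z**k for k, c in enumerate(a))
roots = [cmath.exp(-2j * cmath.pi * j / 4) for j in range(4)]
assert all(abs(X[j] - p(roots[j])) < 1e-9 for j in range(4))
```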
In the physical theory of the quantum Hall effect, the Vandermonde determinant shows that the Laughlin wavefunction with filling factor 1 is equal to a Slater determinant. This is no longer true for filling factors different from 1 in the fractional quantum Hall effect.
In the geometry of polyhedra, the Vandermonde matrix gives the normalized volume of arbitrary $k$-faces of cyclic polytopes. Specifically, if $F=C_{d}(t_{i_{1}},\dots ,t_{i_{k+1}})$ is a $k$-face of the cyclic polytope $C_{d}(T)\subset \mathbb {R} ^{d}$ corresponding to $T=\{t_{1}<\cdots <t_{N}\}\subset \mathbb {R} $, then
$\mathrm {nvol} (F)={\frac {1}{k!}}\prod _{1\leq m<n\leq k+1}{(t_{i_{n}}-t_{i_{m}})}.$
Determinant
The determinant of a square Vandermonde matrix is called a Vandermonde polynomial or Vandermonde determinant. Its value is the polynomial
$\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i})$
which is non-zero if and only if all $x_{i}$ are distinct.
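The product formula can be checked numerically for one arbitrary choice of nodes (exact arithmetic, direct Leibniz-formula expansion):

```python
from fractions import Fraction
from itertools import permutations

def det_leibniz(A):
    """Leibniz-formula determinant; fine for small matrices."""
    n = len(A)
    total = Fraction(0)
    for sigma in permutations(range(n)):
        # sign of sigma via its inversion count
        inv = sum(1 for i in range(n) for j in range(i + 1, n)
                  if sigma[i] > sigma[j])
        term = Fraction(1)
        for i in range(n):
            term *= A[i][sigma[i]]
        total += -term if inv % 2 else term
    return total

xs = [Fraction(v) for v in (2, 5, 7, 11)]     # distinct nodes, arbitrary choice
V = [[x ** j for j in range(len(xs))] for x in xs]
product = Fraction(1)
for j in range(len(xs)):
    for i in range(j):
        product *= xs[j] - xs[i]
assert det_leibniz(V) == product              # here both equal 6480
```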
The Vandermonde determinant was formerly sometimes called the discriminant, but in current terminology the discriminant of a polynomial $p(x)=(x-x_{0})\cdots (x-x_{n})$ is the square of the Vandermonde determinant of the roots $x_{i}$. The Vandermonde determinant is an alternating form in the $x_{i}$, meaning that exchanging two $x_{i}$ changes the sign; $\det(V)$ thus depends on the order of the $x_{i}$. By contrast, the discriminant $\det(V)^{2}$ does not depend on any order, so that Galois theory implies that the discriminant is a polynomial function of the coefficients of $p(x)$.
The determinant formula is proved below in three ways. The first uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof is based on the linear algebra concepts of change of basis in a vector space and the determinant of a linear map. In the process, it computes the LU decomposition of the Vandermonde matrix. The third proof is more elementary but more complicated, using only elementary row and column operations.
First proof: polynomial properties
By the Leibniz formula, $\det(V)$ is a polynomial in the $x_{i}$, with integer coefficients. All entries of the $i$th column (zero-based) have total degree $i$. Thus, again by the Leibniz formula, all terms of the determinant have total degree
$0+1+2+\cdots +n={\frac {n(n+1)}{2}};$
(that is, the determinant is a homogeneous polynomial of this degree).
If, for $i\neq j$, one substitutes $x_{i}$ for $x_{j}$, one gets a matrix with two equal rows, which has thus a zero determinant. Thus, by the factor theorem, $x_{j}-x_{i}$ is a divisor of $\det(V)$. By the unique factorization property of multivariate polynomials, the product of all $x_{j}-x_{i}$ divides $\det(V)$, that is
$\det(V)=Q\prod _{0\leq i<j\leq n}(x_{j}-x_{i}),$
where $Q$ is a polynomial. As the product of all $x_{j}-x_{i}$ and $\det(V)$ have the same degree $n(n+1)/2$, the polynomial $Q$ is, in fact, a constant. This constant is one, because the product of the diagonal entries of $V$ is $x_{1}x_{2}^{2}\cdots x_{n}^{n}$, which is also the monomial that is obtained by taking the first term of all factors in $\textstyle \prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$ This proves that
$\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$
Second proof: linear maps
Let F be a field containing all $x_{i},$ and $P_{n}$ the F vector space of the polynomials of degree less than or equal to n with coefficients in F. Let
$\varphi :P_{n}\to F^{n+1}$
be the linear map defined by
$p(x)\mapsto (p(x_{0}),p(x_{1}),\ldots ,p(x_{n}))$.
The Vandermonde matrix is the matrix of $\varphi $ with respect to the canonical bases of $P_{n}$ and $F^{n+1}.$
Changing the basis of $P_{n}$ amounts to multiplying the Vandermonde matrix by a change-of-basis matrix M (from the right). This does not change the determinant, if the determinant of M is 1.
The polynomials $1$, $x-x_{0}$, $(x-x_{0})(x-x_{1})$, …, $(x-x_{0})(x-x_{1})\cdots (x-x_{n-1})$ are monic of respective degrees 0, 1, …, n. Their matrix on the monomial basis is an upper-triangular matrix U (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of $\varphi $ on this new basis is
${\begin{bmatrix}1&0&0&\ldots &0\\1&x_{1}-x_{0}&0&\ldots &0\\1&x_{2}-x_{0}&(x_{2}-x_{0})(x_{2}-x_{1})&\ldots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}-x_{0}&(x_{n}-x_{0})(x_{n}-x_{1})&\ldots &(x_{n}-x_{0})(x_{n}-x_{1})\cdots (x_{n}-x_{n-1})\end{bmatrix}}$.
Thus the Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries.
This proves the desired equality. Moreover, one gets the LU decomposition of V as $V=LU^{-1}$.
Third proof: row and column operations
This third proof is based on the fact that if one adds to a column of a matrix the product by a scalar of another column then the determinant remains unchanged.
So, subtracting from each column (except the first) the preceding column multiplied by $x_{0}$ leaves the determinant unchanged. (These subtractions must be performed starting from the last column, so that each subtraction involves a column that has not yet been changed.) This gives the matrix
${\begin{bmatrix}1&0&0&0&\cdots &0\\1&x_{1}-x_{0}&x_{1}(x_{1}-x_{0})&x_{1}^{2}(x_{1}-x_{0})&\cdots &x_{1}^{n-1}(x_{1}-x_{0})\\1&x_{2}-x_{0}&x_{2}(x_{2}-x_{0})&x_{2}^{2}(x_{2}-x_{0})&\cdots &x_{2}^{n-1}(x_{2}-x_{0})\\\vdots &\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}-x_{0}&x_{n}(x_{n}-x_{0})&x_{n}^{2}(x_{n}-x_{0})&\cdots &x_{n}^{n-1}(x_{n}-x_{0})\\\end{bmatrix}}$
Applying the Laplace expansion formula along the first row, we obtain $\det(V)=\det(B)$, with
$B={\begin{bmatrix}x_{1}-x_{0}&x_{1}(x_{1}-x_{0})&x_{1}^{2}(x_{1}-x_{0})&\cdots &x_{1}^{n-1}(x_{1}-x_{0})\\x_{2}-x_{0}&x_{2}(x_{2}-x_{0})&x_{2}^{2}(x_{2}-x_{0})&\cdots &x_{2}^{n-1}(x_{2}-x_{0})\\\vdots &\vdots &\vdots &\ddots &\vdots \\x_{n}-x_{0}&x_{n}(x_{n}-x_{0})&x_{n}^{2}(x_{n}-x_{0})&\cdots &x_{n}^{n-1}(x_{n}-x_{0})\\\end{bmatrix}}$
As all the entries in the $i$-th row of $B$ have a factor of $x_{i+1}-x_{0}$, one can take these factors out and obtain
$\det(V)=(x_{1}-x_{0})(x_{2}-x_{0})\cdots (x_{n}-x_{0}){\begin{vmatrix}1&x_{1}&x_{1}^{2}&\cdots &x_{1}^{n-1}\\1&x_{2}&x_{2}^{2}&\cdots &x_{2}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}&x_{n}^{2}&\cdots &x_{n}^{n-1}\\\end{vmatrix}}=\prod _{1<i\leq n}(x_{i}-x_{0})\det(V')$,
where $V'$ is a Vandermonde matrix in $x_{1},\ldots ,x_{n}$. Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of $\det(V)$ as the product of all $x_{j}-x_{i}$ such that $i<j$.
Rank of the Vandermonde matrix
• An m × n rectangular Vandermonde matrix such that m ≤ n has rank m if and only if all xi are distinct.
• An m × n rectangular Vandermonde matrix such that m ≥ n has rank n if and only if there are n of the xi that are distinct.
• A square Vandermonde matrix is invertible if and only if the xi are distinct. An explicit formula for the inverse is known (see below).[10][3][11]
Inverse Vandermonde matrix
As explained above in Applications, the polynomial interpolation problem for $p(x)=a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{n}x^{n}$ satisfying $p(x_{0})=y_{0},\ldots ,p(x_{n})=y_{n}$ is equivalent to the matrix equation $Va=y$, which has the unique solution $a=V^{-1}y$. There are other known formulas which solve the interpolation problem, which must be equivalent to the unique $a=V^{-1}y$, so they must give explicit formulas for the inverse matrix $V^{-1}$. In particular, Lagrange interpolation shows that the columns of the inverse matrix
$V^{-1}={\begin{bmatrix}1&x_{0}&\dots &x_{0}^{n}\\\vdots &\vdots &&\vdots \\[.5em]1&x_{n}&\dots &x_{n}^{n}\end{bmatrix}}^{-1}=L={\begin{bmatrix}L_{00}&\!\!\!\!\cdots \!\!\!\!&L_{0n}\\\vdots &&\vdots \\L_{n0}&\!\!\!\!\cdots \!\!\!\!&L_{nn}\end{bmatrix}}$
are the coefficients of the Lagrange polynomials
$L_{j}(x)=L_{0j}+L_{1j}x+\cdots +L_{nj}x^{n}=\prod _{0\leq i\leq n \atop i\neq j}{\frac {x-x_{i}}{x_{j}-x_{i}}}={\frac {f(x)}{(x-x_{j})\,f'(x_{j})}}\,,$
where $f(x)=(x-x_{0})\cdots (x-x_{n})$. This is easily demonstrated: the polynomials clearly satisfy $L_{j}(x_{i})=0$ for $i\neq j$ while $L_{j}(x_{j})=1$, so we may compute the product $VL=[L_{j}(x_{i})]_{i,j=0}^{n}=I$, the identity matrix.
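A sketch verifying $VL=I$ for one arbitrary choice of nodes, with the Lagrange coefficients obtained by expanding the product form of $L_{j}(x)$:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply coefficient lists (lowest degree first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def lagrange_coeffs(xs, j):
    """Monomial coefficients of L_j(x) = prod_{i != j} (x - x_i)/(x_j - x_i)."""
    coeffs = [Fraction(1)]
    for i, xi in enumerate(xs):
        if i != j:
            coeffs = [c / (xs[j] - xi) for c in poly_mul(coeffs, [-xi, Fraction(1)])]
    return coeffs

xs = [Fraction(v) for v in (0, 1, 3, 4)]      # arbitrary distinct nodes
n = len(xs)
V = [[x ** k for k in range(n)] for x in xs]
# L[k][j] is coefficient k of L_j, so column j of L holds the coefficients of L_j.
L = [[lagrange_coeffs(xs, j)[k] for j in range(n)] for k in range(n)]
VL = [[sum(V[i][k] * L[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]
assert VL == [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
```

Entry $(i,j)$ of the product is $\sum _{k}x_{i}^{k}L_{kj}=L_{j}(x_{i})$, which is $1$ when $i=j$ and $0$ otherwise, as the text argues.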
Confluent Vandermonde matrices
As described before, a Vandermonde matrix describes the linear algebra interpolation problem of finding the coefficients of a polynomial $p(x)$ of degree $n-1$ based on the values $p(x_{1}),\,...,\,p(x_{n})$, where $x_{1},\,...,\,x_{n}$ are distinct points. If $x_{i}$ are not distinct, then this problem does not have a unique solution (and the corresponding Vandermonde matrix is singular). However, if we specify the values of the derivatives at the repeated points, then the problem can have a unique solution. For example, the problem
${\begin{cases}p(0)=y_{1}\\p'(0)=y_{2}\\p(1)=y_{3}\end{cases}}$
where $p(x)=ax^{2}+bx+c$, has a unique solution for all $y_{1},y_{2},y_{3}$ with $y_{1}\neq y_{3}$. In general, suppose that $x_{1},x_{2},...,x_{n}$ are (not necessarily distinct) numbers, and suppose for simplicity that equal values are adjacent:
$x_{1}=\cdots =x_{m_{1}},\ x_{m_{1}+1}=\cdots =x_{m_{2}},\ \ldots ,\ x_{m_{k-1}+1}=\cdots =x_{m_{k}}$
where $m_{1}<m_{2}<\cdots <m_{k}=n,$ and $x_{m_{1}},\ldots ,x_{m_{k}}$ are distinct. Then the corresponding interpolation problem is
${\begin{cases}p(x_{m_{1}})=y_{1},&p'(x_{m_{1}})=y_{2},&\ldots ,&p^{(m_{1}-1)}(x_{m_{1}})=y_{m_{1}},\\p(x_{m_{2}})=y_{m_{1}+1},&p'(x_{m_{2}})=y_{m_{1}+2},&\ldots ,&p^{(m_{2}-m_{1}-1)}(x_{m_{2}})=y_{m_{2}},\\\qquad \vdots &&&\qquad \vdots \\p(x_{m_{k}})=y_{m_{k-1}+1},&p'(x_{m_{k}})=y_{m_{k-1}+2},&\ldots ,&p^{(m_{k}-m_{k-1}-1)}(x_{m_{k}})=y_{m_{k}}.\end{cases}}$
The corresponding matrix for this problem is called a confluent Vandermonde matrix, given as follows. If $1\leq i,j\leq n$, then $m_{\ell }<i\leq m_{\ell +1}$ for a unique $0\leq \ell \leq k-1$ (denoting $m_{0}=0$). We let
$V_{i,j}={\begin{cases}0&{\text{if }}j<i-m_{\ell },\\[6pt]{\dfrac {(j-1)!}{(j-(i-m_{\ell }))!}}x_{i}^{j-(i-m_{\ell })}&{\text{if }}j\geq i-m_{\ell }.\end{cases}}$
This generalization of the Vandermonde matrix makes it non-singular, so that there exists a unique solution to the system of equations, and it possesses most of the other properties of the Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows.
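A sketch of the worked example above, $p(0)=y_{1}$, $p'(0)=y_{2}$, $p(1)=y_{3}$, with the confluent matrix assembled from value and derivative rows (the plain Gauss-Jordan solver is an implementation choice, not part of the source):

```python
from fractions import Fraction

def value_row(x, n):
    """Row of the ordinary Vandermonde matrix: (1, x, ..., x^(n-1))."""
    return [Fraction(x) ** j for j in range(n)]

def deriv_row(x, n):
    """Derivative of the value row: (0, 1, 2x, ..., (n-1)x^(n-2))."""
    return [Fraction(0)] + [j * Fraction(x) ** (j - 1) for j in range(1, n)]

def solve(A, b):
    """Gauss-Jordan elimination in exact rational arithmetic."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [v / M[col][col] for v in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

# Conditions p(0) = 1, p'(0) = 0, p(1) = 2 for p(x) = a0 + a1 x + a2 x^2.
V = [value_row(0, 3), deriv_row(0, 3), value_row(1, 3)]
a = solve(V, [Fraction(1), Fraction(0), Fraction(2)])
assert a == [1, 0, 1]     # p(x) = 1 + x^2 meets all three conditions
```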
Another way to derive this formula is by taking a limit of the Vandermonde matrix as the $x_{i}$'s approach each other. For example, to get the case of $x_{1}=x_{2}$, subtract the first row from the second row in the original Vandermonde matrix, divide that row by $x_{2}-x_{1}$, and let $x_{2}\to x_{1}$: this yields the corresponding row in the confluent Vandermonde matrix. This derives the generalized interpolation problem with given values and derivatives as a limit of the original case with distinct points: giving $p(x_{i}),p'(x_{i})$ is similar to giving $p(x_{i}),p(x_{i}+\varepsilon )$ for small $\varepsilon $. Geometers have studied the problem of tracking confluent points along their tangent lines, known as compactification of configuration space.
See also
• Companion matrix § Diagonalizability
• Schur polynomial – a generalization
• Alternant matrix
• Lagrange polynomial
• Wronskian
• List of matrices
• Moore determinant over a finite field
• Vieta's formulas
References
1. Roger A. Horn and Charles R. Johnson (1991), Topics in matrix analysis, Cambridge University Press. See Section 6.1.
2. Golub, Gene H.; Van Loan, Charles F. (2013). Matrix Computations (4th ed.). The Johns Hopkins University Press. pp. 203–207. ISBN 978-1-4214-0859-0.
3. Macon, N.; A. Spitzbart (February 1958). "Inverses of Vandermonde Matrices". The American Mathematical Monthly. 65 (2): 95–100. doi:10.2307/2308881. JSTOR 2308881.
4. François Viète (1540-1603), Vieta's formulas, https://en.wikipedia.org/wiki/Vieta%27s_formulas
5. Björck, Å.; Pereyra, V. (1970). "Solution of Vandermonde Systems of Equations". American Mathematical Society. 24 (112): 893–903. doi:10.1090/S0025-5718-1970-0290541-1. S2CID 122006253.
6. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 2.8.1. Vandermonde Matrices". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
7. Inverse of Vandermonde Matrix (2018), https://proofwiki.org/wiki/Inverse_of_Vandermonde_Matrix
8. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Lecture 4 reviews the representation theory of symmetric groups, including the role of the Vandermonde determinant.
9. Gauthier, J. "Fast Multipoint Evaluation On n Arbitrary Points." Simon Fraser University, Tech. Rep (2017).
10. Turner, L. Richard (August 1966). Inverse of the Vandermonde matrix with applications (PDF).
11. "Inverse of Vandermonde Matrix". 2018.
Further reading
• Ycart, Bernard (2013), "A case of mathematical eponymy: the Vandermonde determinant", Revue d'Histoire des Mathématiques, 13, arXiv:1204.4716, Bibcode:2012arXiv1204.4716Y.
External links
• Vandermonde matrix at ProofWiki
Permanent (mathematics)
In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix.[1] Both are special cases of a more general function of a matrix called the immanant.
Definition
The permanent of an n×n matrix A = (ai,j) is defined as
$\operatorname {perm} (A)=\sum _{\sigma \in S_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}.$
The sum here extends over all elements σ of the symmetric group Sn; i.e. over all permutations of the numbers 1, 2, ..., n.
For example,
$\operatorname {perm} {\begin{pmatrix}a&b\\c&d\end{pmatrix}}=ad+bc,$
and
$\operatorname {perm} {\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}}=aei+bfg+cdh+ceg+bdi+afh.$
The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account.
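Evaluated directly, the defining sum gives a simple (if exponentially slow) algorithm; a minimal Python sketch, where the helper name `perm` is illustrative:

```python
from itertools import permutations
from math import prod

def perm(A):
    """Permanent of a square matrix A (list of rows): the sum over all
    permutations sigma of prod_i A[i][sigma(i)], with no sign factor."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

# The 2x2 example from the text: perm([[a, b], [c, d]]) = ad + bc
print(perm([[1, 2], [3, 4]]))  # 1*4 + 2*3 = 10
```

This runs in O(n·n!) time, which is why faster methods such as Ryser's formula (below) matter in practice.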
The permanent of a matrix A is denoted per A, perm A, or Per A, sometimes with parentheses around the argument. Minc uses Per(A) for the permanent of rectangular matrices, and per(A) when A is a square matrix.[2] Muir and Metzler use the notation ${\overset {+}{|}}\quad {\overset {+}{|}}$.[3]
The word permanent originated with Cauchy in 1812 as “fonctions symétriques permanentes” for a related type of function,[4] and was used by Muir and Metzler[5] in the modern, more specific, sense.[6]
Properties
If one views the permanent as a map that takes n vectors as arguments, then it is a multilinear map and it is symmetric (meaning that any order of the vectors results in the same permanent). Furthermore, given a square matrix $A=\left(a_{ij}\right)$ of order n:[7]
• perm(A) is invariant under arbitrary permutations of the rows and/or columns of A. This property may be written symbolically as perm(A) = perm(PAQ) for any appropriately sized permutation matrices P and Q,
• multiplying any single row or column of A by a scalar s changes perm(A) to s⋅perm(A),
• perm(A) is invariant under transposition, that is, perm(A) = perm(AT).
• If $A=\left(a_{ij}\right)$ and $B=\left(b_{ij}\right)$ are square matrices of order n then,[8]
$\operatorname {perm} \left(A+B\right)=\sum _{s,t}\operatorname {perm} \left(a_{ij}\right)_{i\in s,j\in t}\operatorname {perm} \left(b_{ij}\right)_{i\in {\bar {s}},j\in {\bar {t}}},$
where s and t are subsets of the same size of {1,2,...,n} and ${\bar {s}},{\bar {t}}$ are their respective complements in that set.
• If $A$ is a triangular matrix, i.e. $a_{ij}=0$, whenever $i>j$ or, alternatively, whenever $i<j$, then its permanent (and determinant as well) equals the product of the diagonal entries:
$\operatorname {perm} \left(A\right)=a_{11}a_{22}\cdots a_{nn}=\prod _{i=1}^{n}a_{ii}.$
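These properties are easy to confirm numerically; a Python sketch using a brute-force permanent (the helper `perm` and the random test matrix are illustrative, not part of the article):

```python
from itertools import permutations
from math import prod
import random

def perm(A):
    """Brute-force permanent: sum over all permutations, no sign factor."""
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

random.seed(0)
A = [[random.randint(0, 5) for _ in range(4)] for _ in range(4)]

# Invariance under a row permutation (column permutations behave the same way)
order = (2, 0, 3, 1)
assert perm([A[i] for i in order]) == perm(A)

# Scaling a single row scales the permanent by the same factor
C = [row[:] for row in A]
C[2] = [3 * x for x in C[2]]
assert perm(C) == 3 * perm(A)

# Invariance under transposition
At = [list(col) for col in zip(*A)]
assert perm(At) == perm(A)

# For a triangular matrix the permanent is the product of the diagonal entries
T = [[A[i][j] if j >= i else 0 for j in range(4)] for i in range(4)]
assert perm(T) == prod(A[i][i] for i in range(4))
print("properties verified")
```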
Relation to determinants
Laplace's expansion by minors for computing the determinant along a row, column or diagonal extends to the permanent by ignoring all signs.[9]
For every $ i$,
$\operatorname {perm} (B)=\sum _{j=1}^{n}B_{i,j}M_{i,j},$
where $B_{i,j}$ is the entry of the ith row and the jth column of B, and $ M_{i,j}$ is the permanent of the submatrix obtained by removing the ith row and the jth column of B.
For example, expanding along the first column,
${\begin{aligned}\operatorname {perm} \left({\begin{matrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{matrix}}\right)={}&1\cdot \operatorname {perm} \left({\begin{matrix}1&0&0\\0&1&0\\0&0&1\end{matrix}}\right)+2\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\0&1&0\\0&0&1\end{matrix}}\right)\\&{}+\ 3\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&0&1\end{matrix}}\right)+4\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}}\right)\\={}&1(1)+2(1)+3(1)+4(1)=10,\end{aligned}}$
while expanding along the last row gives,
${\begin{aligned}\operatorname {perm} \left({\begin{matrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{matrix}}\right)={}&4\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}}\right)+0\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&0&0\\3&1&0\end{matrix}}\right)\\&{}+\ 0\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&1&0\\3&0&0\end{matrix}}\right)+1\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&1&0\\3&0&1\end{matrix}}\right)\\={}&4(1)+0+0+1(6)=10.\end{aligned}}$
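The sign-free expansion gives a short recursive routine; a Python sketch expanding along the first row rather than the first column (the helper name is illustrative):

```python
def perm_expand(B):
    """Permanent via the unsigned Laplace-style expansion described above,
    recursing along the first row."""
    n = len(B)
    if n == 1:
        return B[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j, with no alternating sign
        minor = [row[:j] + row[j + 1:] for row in B[1:]]
        total += B[0][j] * perm_expand(minor)
    return total

M = [[1, 1, 1, 1],
     [2, 1, 0, 0],
     [3, 0, 1, 0],
     [4, 0, 0, 1]]
print(perm_expand(M))  # 10, matching both expansions in the text
```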
On the other hand, the basic multiplicative property of determinants is not valid for permanents.[10] A simple example shows that this is so.
${\begin{aligned}4&=\operatorname {perm} \left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\operatorname {perm} \left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\\&\neq \operatorname {perm} \left(\left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\right)=\operatorname {perm} \left({\begin{matrix}2&2\\2&2\end{matrix}}\right)=8.\end{aligned}}$
Unlike the determinant, the permanent has no easy geometrical interpretation; it is mainly used in combinatorics, in treating boson Green's functions in quantum field theory, and in determining state probabilities of boson sampling systems.[11] However, it has two graph-theoretic interpretations: as the sum of weights of cycle covers of a directed graph, and as the sum of weights of perfect matchings in a bipartite graph.
Applications
Symmetric tensors
The permanent arises naturally in the study of the symmetric tensor power of Hilbert spaces.[12] In particular, for a Hilbert space $H$, let $\vee ^{k}H$ denote the $k$th symmetric tensor power of $H$, which is the space of symmetric tensors. Note in particular that $\vee ^{k}H$ is spanned by the symmetric products of elements in $H$. For $x_{1},x_{2},\dots ,x_{k}\in H$, we define the symmetric product of these elements by
$x_{1}\vee x_{2}\vee \cdots \vee x_{k}=(k!)^{-1/2}\sum _{\sigma \in S_{k}}x_{\sigma (1)}\otimes x_{\sigma (2)}\otimes \cdots \otimes x_{\sigma (k)}$
If we consider $\vee ^{k}H$ (as a subspace of $\otimes ^{k}H$, the kth tensor power of $H$) and define the inner product on $\vee ^{k}H$ accordingly, we find that for $x_{j},y_{j}\in H$
$\langle x_{1}\vee x_{2}\vee \cdots \vee x_{k},y_{1}\vee y_{2}\vee \cdots \vee y_{k}\rangle =\operatorname {perm} \left[\langle x_{i},y_{j}\rangle \right]_{i,j=1}^{k}$
Applying the Cauchy–Schwarz inequality, we find that $\operatorname {perm} \left[\langle x_{i},x_{j}\rangle \right]_{i,j=1}^{k}\geq 0$, and that
$\left|\operatorname {perm} \left[\langle x_{i},y_{j}\rangle \right]_{i,j=1}^{k}\right|^{2}\leq \operatorname {perm} \left[\langle x_{i},x_{j}\rangle \right]_{i,j=1}^{k}\cdot \operatorname {perm} \left[\langle y_{i},y_{j}\rangle \right]_{i,j=1}^{k}$
Cycle covers
Any square matrix $A=(a_{ij})_{i,j=1}^{n}$ can be viewed as the adjacency matrix of a weighted directed graph on vertex set $V=\{1,2,\dots ,n\}$, with $a_{ij}$ representing the weight of the arc from vertex i to vertex j. A cycle cover of a weighted directed graph is a collection of vertex-disjoint directed cycles in the digraph that covers all vertices in the graph. Thus, each vertex i in the digraph has a unique "successor" $\sigma (i)$ in the cycle cover, and so $\sigma $ represents a permutation on V. Conversely, any permutation $\sigma $ on V corresponds to a cycle cover with arcs from each vertex i to vertex $\sigma (i)$.
If the weight of a cycle-cover is defined to be the product of the weights of the arcs in each cycle, then
$\operatorname {weight} (\sigma )=\prod _{i=1}^{n}a_{i,\sigma (i)},$
implying that
$\operatorname {perm} (A)=\sum _{\sigma }\operatorname {weight} (\sigma ).$
Thus the permanent of A is equal to the sum of the weights of all cycle-covers of the digraph.
Perfect matchings
A square matrix $A=(a_{ij})$ can also be viewed as the adjacency matrix of a bipartite graph which has vertices $x_{1},x_{2},\dots ,x_{n}$ on one side and $y_{1},y_{2},\dots ,y_{n}$ on the other side, with $a_{ij}$ representing the weight of the edge from vertex $x_{i}$ to vertex $y_{j}$. If the weight of a perfect matching $\sigma $ that matches $x_{i}$ to $y_{\sigma (i)}$ is defined to be the product of the weights of the edges in the matching, then
$\operatorname {weight} (\sigma )=\prod _{i=1}^{n}a_{i,\sigma (i)}.$
Thus the permanent of A is equal to the sum of the weights of all perfect matchings of the graph.
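The correspondence can be checked on a small example; a Python sketch in which `matching_weight_sum` (an illustrative helper) enumerates perfect matchings by backtracking and is compared against a brute-force permanent:

```python
from itertools import permutations
from math import prod

def perm(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def matching_weight_sum(A):
    """Sum of weights of all perfect matchings of the bipartite graph whose
    biadjacency matrix is A, by explicit backtracking: match x_i in order,
    tracking which y_j are already used."""
    n = len(A)
    def go(i, used):
        if i == n:
            return 1
        return sum(A[i][j] * go(i + 1, used | {j})
                   for j in range(n) if j not in used and A[i][j])
    return go(0, frozenset())

A = [[1, 1, 0],
     [0, 1, 1],
     [1, 0, 1]]  # a 6-cycle as a bipartite graph: exactly two perfect matchings
print(matching_weight_sum(A), perm(A))  # 2 2
```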
Permanents of (0, 1) matrices
Enumeration
The answers to many counting questions can be computed as permanents of matrices that only have 0 and 1 as entries.
Let Ω(n,k) be the class of all (0, 1)-matrices of order n with each row and column sum equal to k. Every matrix A in this class has perm(A) > 0.[13] The incidence matrices of projective planes are in the class Ω(n2 + n + 1, n + 1) for n an integer > 1. The permanents corresponding to the smallest projective planes have been calculated. For n = 2, 3, and 4 the values are 24, 3852 and 18,534,400 respectively.[13] Let Z be the incidence matrix of the projective plane with n = 2, the Fano plane. Remarkably, perm(Z) = 24 = |det (Z)|, the absolute value of the determinant of Z. This is a consequence of Z being a circulant matrix and the theorem:[14]
If A is a circulant matrix in the class Ω(n,k) then if k > 3, perm(A) > |det (A)| and if k = 3, perm(A) = |det (A)|. Furthermore, when k = 3, by permuting rows and columns, A can be put into the form of a direct sum of e copies of the matrix Z and consequently, n = 7e and perm(A) = 24e.
Permanents can also be used to calculate the number of permutations with restricted (prohibited) positions. For the standard n-set {1, 2, ..., n}, let $A=(a_{ij})$ be the (0, 1)-matrix where aij = 1 if i → j is allowed in a permutation and aij = 0 otherwise. Then perm(A) is equal to the number of permutations of the n-set that satisfy all the restrictions.[9] Two well known special cases of this are the solution of the derangement problem and the ménage problem: the number of permutations of an n-set with no fixed points (derangements) is given by
$\operatorname {perm} (J-I)=\operatorname {perm} \left({\begin{matrix}0&1&1&\dots &1\\1&0&1&\dots &1\\1&1&0&\dots &1\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&1&1&\dots &0\end{matrix}}\right)=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}},$
where J is the n×n all 1's matrix and I is the identity matrix, and the ménage numbers are given by
${\begin{aligned}\operatorname {perm} (J-I-I')&=\operatorname {perm} \left({\begin{matrix}0&0&1&\dots &1\\1&0&0&\dots &1\\1&1&0&\dots &1\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&1&1&\dots &0\end{matrix}}\right)\\&=\sum _{k=0}^{n}(-1)^{k}{\frac {2n}{2n-k}}{2n-k \choose k}(n-k)!,\end{aligned}}$
where I' is the (0, 1)-matrix with nonzero entries in positions (i, i + 1) and (n, 1).
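Both counts can be reproduced numerically; a Python sketch for n = 5, where the helper `perm` is a brute-force permanent used only for illustration:

```python
from itertools import permutations
from math import prod, comb, factorial

def perm(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

n = 5
# Derangements: perm(J - I) versus the inclusion-exclusion formula
J_minus_I = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
derangements = perm(J_minus_I)
derangement_formula = round(factorial(n) * sum((-1) ** i / factorial(i)
                                               for i in range(n + 1)))

# Menage numbers: perm(J - I - I') versus Touchard's formula
menage_matrix = [[0 if j == i or j == (i + 1) % n else 1 for j in range(n)]
                 for i in range(n)]
menage = perm(menage_matrix)
menage_formula = sum((-1) ** k * (2 * n * comb(2 * n - k, k) // (2 * n - k))
                     * factorial(n - k) for k in range(n + 1))

print(derangements, derangement_formula, menage, menage_formula)  # 44 44 13 13
```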
Bounds
The Bregman–Minc inequality, conjectured by H. Minc in 1963[15] and proved by L. M. Brégman in 1973,[16] gives an upper bound for the permanent of an n × n (0, 1)-matrix. If A has ri ones in row i for each 1 ≤ i ≤ n, the inequality states that
$\operatorname {perm} A\leq \prod _{i=1}^{n}(r_{i})!^{1/r_{i}}.$
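The inequality can be spot-checked numerically; a Python sketch testing random (0, 1)-matrices against the bound (helper names are illustrative):

```python
from itertools import permutations
from math import prod, factorial
import random

def perm(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def bregman_minc_bound(A):
    """prod_i (r_i!)^(1/r_i), taken over rows with r_i > 0; a row of zeros
    forces perm(A) = 0, so the inequality holds trivially in that case."""
    return prod(factorial(r) ** (1.0 / r) for row in A if (r := sum(row)) > 0)

random.seed(1)
ok = all(
    perm(A) <= bregman_minc_bound(A) + 1e-9
    for A in ([[random.randint(0, 1) for _ in range(5)] for _ in range(5)]
              for _ in range(100))
)
print(ok)  # True
```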
Van der Waerden's conjecture
In 1926, Van der Waerden conjectured that the minimum permanent among all n × n doubly stochastic matrices is n!/nn, achieved by the matrix for which all entries are equal to 1/n.[17] Proofs of this conjecture were published in 1980 by B. Gyires[18] and in 1981 by G. P. Egorychev[19] and D. I. Falikman;[20] Egorychev's proof is an application of the Alexandrov–Fenchel inequality.[21] For this work, Egorychev and Falikman won the Fulkerson Prize in 1982.[22]
Computation
Main articles: Computing the permanent and Sharp-P-completeness of 01-permanent
Computing the permanent naïvely from the definition is infeasible even for relatively small matrices. One of the fastest known algorithms is due to H. J. Ryser.[23] Ryser's method is based on an inclusion–exclusion formula that can be given[24] as follows: Let $A_{k}$ be obtained from A by deleting k columns, let $P(A_{k})$ be the product of the row-sums of $A_{k}$, and let $\Sigma _{k}$ be the sum of the values of $P(A_{k})$ over all possible $A_{k}$. Then
$\operatorname {perm} (A)=\sum _{k=0}^{n-1}(-1)^{k}\Sigma _{k}.$
It may be rewritten in terms of the matrix entries as follows:
$\operatorname {perm} (A)=(-1)^{n}\sum _{S\subseteq \{1,\dots ,n\}}(-1)^{|S|}\prod _{i=1}^{n}\sum _{j\in S}a_{ij}.$
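The rewritten form translates directly into code; a Python sketch comparing it against the brute-force definition (helper names are illustrative):

```python
from itertools import permutations
from math import prod

def perm_naive(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def perm_ryser(A):
    """Ryser's inclusion-exclusion formula: O(2^n * n^2) work, versus
    O(n * n!) for the naive sum over permutations."""
    n = len(A)
    total = 0
    for mask in range(1 << n):                     # subsets S of {1, ..., n}
        S = [j for j in range(n) if mask >> j & 1]
        total += (-1) ** len(S) * prod(sum(A[i][j] for j in S) for i in range(n))
    return (-1) ** n * total

A = [[1, 1, 1, 1], [2, 1, 0, 0], [3, 0, 1, 0], [4, 0, 0, 1]]
print(perm_ryser(A), perm_naive(A))  # 10 10
```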
The permanent is believed to be more difficult to compute than the determinant. While the determinant can be computed in polynomial time by Gaussian elimination, Gaussian elimination cannot be used to compute the permanent. Moreover, computing the permanent of a (0,1)-matrix is #P-complete. Thus, if the permanent can be computed in polynomial time by any method, then FP = #P, which is an even stronger statement than P = NP. When the entries of A are nonnegative, however, the permanent can be computed approximately in probabilistic polynomial time, up to an error of $\varepsilon M$, where $M$ is the value of the permanent and $\varepsilon >0$ is arbitrary.[25] The permanent of a certain set of positive semidefinite matrices can also be approximated in probabilistic polynomial time: the best achievable error of this approximation is $\varepsilon {\sqrt {M}}$ ($M$ is again the value of the permanent).[26]
MacMahon's master theorem
Main article: MacMahon's master theorem
Another way to view permanents is via multivariate generating functions. Let $A=(a_{ij})$ be a square matrix of order n. Consider the multivariate generating function:
${\begin{aligned}F(x_{1},x_{2},\dots ,x_{n})&=\prod _{i=1}^{n}\left(\sum _{j=1}^{n}a_{ij}x_{j}\right)\\&=\left(\sum _{j=1}^{n}a_{1j}x_{j}\right)\left(\sum _{j=1}^{n}a_{2j}x_{j}\right)\cdots \left(\sum _{j=1}^{n}a_{nj}x_{j}\right).\end{aligned}}$
The coefficient of $x_{1}x_{2}\dots x_{n}$ in $F(x_{1},x_{2},\dots ,x_{n})$ is perm(A).[27]
As a generalization, for any sequence of n non-negative integers, $s_{1},s_{2},\dots ,s_{n}$ define:
$\operatorname {perm} ^{(s_{1},s_{2},\dots ,s_{n})}(A)$
as the coefficient of $x_{1}^{s_{1}}x_{2}^{s_{2}}\cdots x_{n}^{s_{n}}$ in $\left(\sum _{j=1}^{n}a_{1j}x_{j}\right)^{s_{1}}\left(\sum _{j=1}^{n}a_{2j}x_{j}\right)^{s_{2}}\cdots \left(\sum _{j=1}^{n}a_{nj}x_{j}\right)^{s_{n}}.$
MacMahon's master theorem relating permanents and determinants is:[28]
$\operatorname {perm} ^{(s_{1},s_{2},\dots ,s_{n})}(A)={\text{ coefficient of }}x_{1}^{s_{1}}x_{2}^{s_{2}}\cdots x_{n}^{s_{n}}{\text{ in }}{\frac {1}{\det(I-XA)}},$
where I is the order n identity matrix and X is the diagonal matrix with diagonal $[x_{1},x_{2},\dots ,x_{n}].$
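The coefficient statement for perm(A) can be verified by expanding the product explicitly; a Python sketch representing a multivariate polynomial as a dict from exponent tuples to coefficients (all helper names are illustrative):

```python
from itertools import permutations
from math import prod
from collections import defaultdict

def perm(A):
    n = len(A)
    return sum(prod(A[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def coeff_x1_to_xn(A):
    """Expand prod_i (sum_j a_ij x_j) as {exponent tuple: coefficient} and
    read off the coefficient of x_1 x_2 ... x_n."""
    n = len(A)
    poly = {(0,) * n: 1}
    for i in range(n):
        nxt = defaultdict(int)
        for expo, c in poly.items():
            for j in range(n):
                if A[i][j]:
                    e = list(expo)
                    e[j] += 1
                    nxt[tuple(e)] += c * A[i][j]
        poly = nxt
    return poly.get((1,) * n, 0)

A = [[1, 2, 0], [3, 1, 1], [0, 2, 2]]
print(coeff_x1_to_xn(A), perm(A))  # 16 16
```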
Rectangular matrices
The permanent function can be generalized to apply to non-square matrices. Indeed, several authors make this the definition of a permanent and consider the restriction to square matrices a special case.[29] Specifically, for an m × n matrix $A=(a_{ij})$ with m ≤ n, define
$\operatorname {perm} (A)=\sum _{\sigma \in \operatorname {P} (n,m)}a_{1\sigma (1)}a_{2\sigma (2)}\ldots a_{m\sigma (m)}$
where P(n,m) is the set of all m-permutations of the n-set {1,2,...,n}.[30]
Ryser's computational result for permanents also generalizes. If A is an m × n matrix with m ≤ n, let $A_{k}$ be obtained from A by deleting k columns, let $P(A_{k})$ be the product of the row-sums of $A_{k}$, and let $\sigma _{k}$ be the sum of the values of $P(A_{k})$ over all possible $A_{k}$. Then[10]
$\operatorname {perm} (A)=\sum _{k=0}^{m-1}(-1)^{k}{\binom {n-m+k}{k}}\sigma _{n-m+k}.$
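Both the definition and the generalized formula are short to implement; a Python sketch on a 2 × 4 example (helper names are illustrative):

```python
from itertools import permutations, combinations
from math import prod, comb

def perm_rect(A):
    """Permanent of an m x n matrix (m <= n): sum over all m-permutations
    of the n columns."""
    m, n = len(A), len(A[0])
    return sum(prod(A[i][s[i]] for i in range(m))
               for s in permutations(range(n), m))

def perm_rect_ryser(A):
    """Generalized Ryser formula: sum_k (-1)^k C(n-m+k, k) sigma_{n-m+k},
    where sigma_t sums the product of row sums over all deletions of t columns."""
    m, n = len(A), len(A[0])
    def sigma(t):
        total = 0
        for dele in combinations(range(n), t):
            keep = [j for j in range(n) if j not in dele]
            total += prod(sum(A[i][j] for j in keep) for i in range(m))
        return total
    return sum((-1) ** k * comb(n - m + k, k) * sigma(n - m + k)
               for k in range(m))

A = [[1, 1, 0, 1],
     [2, 0, 1, 1]]
print(perm_rect(A), perm_rect_ryser(A))  # 9 9
```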
Systems of distinct representatives
The generalization of the definition of a permanent to non-square matrices allows the concept to be used in a more natural way in some applications. For instance:
Let S1, S2, ..., Sm be subsets (not necessarily distinct) of an n-set with m ≤ n. The incidence matrix of this collection of subsets is an m × n (0,1)-matrix A. The number of systems of distinct representatives (SDR's) of this collection is perm(A).[31]
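For a concrete illustration, take the hypothetical collection S1 = {1, 2}, S2 = {2, 3}, S3 = {1, 3}; a Python sketch counting its SDRs via the rectangular permanent:

```python
from itertools import permutations
from math import prod

def perm_rect(A):
    m, n = len(A), len(A[0])
    return sum(prod(A[i][s[i]] for i in range(m))
               for s in permutations(range(n), m))

# Illustrative collection of subsets of {1, 2, 3}
subsets = [{1, 2}, {2, 3}, {1, 3}]
n = 3
A = [[1 if j + 1 in S else 0 for j in range(n)] for S in subsets]
print(perm_rect(A))  # 2: the SDRs are (1, 2, 3) and (2, 3, 1)
```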
See also
• Computing the permanent
• Bapat–Beg theorem, an application of permanents in order statistics
• Slater determinant, an application of permanents in quantum mechanics
• Hafnian
Notes
1. Marcus, Marvin; Minc, Henryk (1965). "Permanents". Amer. Math. Monthly. 72 (6): 577–591. doi:10.2307/2313846. JSTOR 2313846.
2. Minc (1978)
3. Muir & Metzler (1960)
4. Cauchy, A. L. (1815), "Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et de signes contraires par suite des transpositions opérées entre les variables qu'elles renferment.", Journal de l'École Polytechnique, 10: 91–169
5. Muir & Metzler (1960)
6. van Lint & Wilson 2001, p. 108
7. Ryser 1963, pp. 25 – 26
8. Percus 1971, p. 2
9. Percus 1971, p. 12
10. Ryser 1963, p. 26
11. Aaronson, Scott (14 Nov 2010). "The Computational Complexity of Linear Optics". arXiv:1011.3245 [quant-ph].
12. Bhatia, Rajendra (1997). Matrix Analysis. New York: Springer-Verlag. pp. 16–19. ISBN 978-0-387-94846-1.
13. Ryser 1963, p. 124
14. Ryser 1963, p. 125
15. Minc, Henryk (1963), "Upper bounds for permanents of (0,1)-matrices", Bulletin of the American Mathematical Society, 69 (6): 789–791, doi:10.1090/s0002-9904-1963-11031-9
16. van Lint & Wilson 2001, p. 101
17. van der Waerden, B. L. (1926), "Aufgabe 45", Jber. Deutsch. Math.-Verein., 35: 117.
18. Gyires, B. (1980), "The common source of several inequalities concerning doubly stochastic matrices", Publicationes Mathematicae Institutum Mathematicum Universitatis Debreceniensis, 27 (3–4): 291–304, MR 0604006.
19. Egoryčev, G. P. (1980), Reshenie problemy van-der-Vardena dlya permanentov (in Russian), Krasnoyarsk: Akad. Nauk SSSR Sibirsk. Otdel. Inst. Fiz., p. 12, MR 0602332. Egorychev, G. P. (1981), "Proof of the van der Waerden conjecture for permanents", Akademiya Nauk SSSR (in Russian), 22 (6): 65–71, 225, MR 0638007. Egorychev, G. P. (1981), "The solution of van der Waerden's problem for permanents", Advances in Mathematics, 42 (3): 299–305, doi:10.1016/0001-8708(81)90044-X, MR 0642395.
20. Falikman, D. I. (1981), "Proof of the van der Waerden conjecture on the permanent of a doubly stochastic matrix", Akademiya Nauk Soyuza SSR (in Russian), 29 (6): 931–938, 957, MR 0625097.
21. Brualdi (2006) p.487
22. Fulkerson Prize, Mathematical Optimization Society, retrieved 2012-08-19.
23. Ryser (1963, p. 27)
24. van Lint & Wilson (2001) p. 99
25. Jerrum, M.; Sinclair, A.; Vigoda, E. (2004), "A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries", Journal of the ACM, 51 (4): 671–697, CiteSeerX 10.1.1.18.9466, doi:10.1145/1008731.1008738, S2CID 47361920
26. Chakhmakhchyan, Levon; Cerf, Nicolas; Garcia-Patron, Raul (2017). "A quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices". Phys. Rev. A. 96 (2): 022329. arXiv:1609.02416. Bibcode:2017PhRvA..96b2329C. doi:10.1103/PhysRevA.96.022329. S2CID 54194194.
27. Percus 1971, p. 14
28. Percus 1971, p. 17
29. In particular, Minc (1978) and Ryser (1963) do this.
30. Ryser 1963, p. 25
31. Ryser 1963, p. 54
References
• Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications. Vol. 108. Cambridge: Cambridge University Press. ISBN 978-0-521-86565-4. Zbl 1106.05001.
• Minc, Henryk (1978). Permanents. Encyclopedia of Mathematics and its Applications. Vol. 6. With a foreword by Marvin Marcus. Reading, MA: Addison–Wesley. ISSN 0953-4806. OCLC 3980645. Zbl 0401.15005.
• Muir, Thomas; Metzler, William H. (1960) [1882]. A Treatise on the Theory of Determinants. New York: Dover. OCLC 535903.
• Percus, J.K. (1971), Combinatorial Methods, Applied Mathematical Sciences #4, New York: Springer-Verlag, ISBN 978-0-387-90027-8
• Ryser, Herbert John (1963), Combinatorial Mathematics, The Carus Mathematical Monographs #14, The Mathematical Association of America
• van Lint, J.H.; Wilson, R.M. (2001), A Course in Combinatorics, Cambridge University Press, ISBN 978-0521422604
Further reading
• Hall Jr., Marshall (1986), Combinatorial Theory (2nd ed.), New York: John Wiley & Sons, pp. 56–72, ISBN 978-0-471-09138-7 Contains a proof of the Van der Waerden conjecture.
• Marcus, M.; Minc, H. (1965), "Permanents", The American Mathematical Monthly, 72 (6): 577–591, doi:10.2307/2313846, JSTOR 2313846
External links
• Permanent at PlanetMath.
• Van der Waerden's permanent conjecture at PlanetMath.
Van der Waerden's theorem
Van der Waerden's theorem is a theorem in the branch of mathematics called Ramsey theory. It states that for any given positive integers r and k, there is some number N such that if the integers {1, 2, ..., N} are colored, each with one of r different colors, then there are k integers in arithmetic progression, all of the same color. The least such N is the Van der Waerden number W(r, k), named after the Dutch mathematician B. L. van der Waerden.[1]
Example
For example, when r = 2, you have two colors, say red and blue. W(2, 3) is bigger than 8, because you can color the integers from {1, ..., 8} like this:
1 2 3 4 5 6 7 8
B R R B B R R B
and no three integers of the same color form an arithmetic progression. But you can't add a ninth integer to the end without creating such a progression. If you add a red 9, then the red 3, 6, and 9 are in arithmetic progression. Alternatively, if you add a blue 9, then the blue 1, 5, and 9 are in arithmetic progression.
In fact, there is no way of coloring 1 through 9 without creating such a progression; this can be verified by checking all of the finitely many colorings. Therefore, W(2, 3) is 9.
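The value W(2, 3) = 9 is small enough to confirm by exhaustive search; a Python sketch over all 2-colorings, where the helper name `has_mono_3ap` is illustrative:

```python
from itertools import product

def has_mono_3ap(coloring):
    """coloring[i] is the color of the integer i + 1; check whether some
    3-term arithmetic progression a, a+d, a+2d is monochromatic."""
    N = len(coloring)
    return any(coloring[a] == coloring[a + d] == coloring[a + 2 * d]
               for d in range(1, (N - 1) // 2 + 1)
               for a in range(N - 2 * d))

# Some 2-coloring of {1, ..., 8} avoids a monochromatic 3-term progression ...
assert any(not has_mono_3ap(c) for c in product("RB", repeat=8))
# ... but every 2-coloring of {1, ..., 9} contains one, so W(2, 3) = 9.
assert all(has_mono_3ap(c) for c in product("RB", repeat=9))
print("W(2, 3) = 9 verified")
```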
Open problem
It is an open problem to determine the values of W(r, k) for most values of r and k. The proof of the theorem provides only an upper bound. For the case of r = 2 and k = 3, for example, the argument given below shows that it is sufficient to color the integers {1, ..., 325} with two colors to guarantee there will be a single-colored arithmetic progression of length 3. But in fact, the bound of 325 is very loose; the minimum required number of integers is only 9. Any coloring of the integers {1, ..., 9} will have three evenly spaced integers of one color.
For r = 3 and k = 3, the bound given by the theorem is $7(2\cdot 3^{7}+1)(2\cdot 3^{7}\cdot (2\cdot 3^{7}+1)+1)$, or approximately $4.22\cdot 10^{14616}$. But actually, you don't need that many integers to guarantee a single-colored progression of length 3; you only need 27. It is possible to color {1, ..., 26} with three colors so that there is no single-colored arithmetic progression of length 3; for example:
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26
R R G G R R G B G B B R B R R G R G G B R B B G B G
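That this 3-coloring avoids single-colored progressions can be checked mechanically; a Python sketch reading the row above as a string:

```python
# Positions 1..26, copied from the coloring displayed above
coloring = "RRGGRRGBGBBRBRRGRGGBRBBGBG"
assert len(coloring) == 26

mono_ap_found = any(
    coloring[a] == coloring[a + d] == coloring[a + 2 * d]
    for d in range(1, 13)
    for a in range(26 - 2 * d)
)
print(mono_ap_found)  # False: no single-colored 3-term progression
```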
An open problem is the attempt to reduce the general upper bound to any 'reasonable' function. Ronald Graham offered a prize of US$1000 for showing $W(2,k)<2^{k^{2}}$.[2] In addition, he offered a US$250 prize for a proof of his conjecture involving more general off-diagonal van der Waerden numbers, stating $W(2;3,k)\leq k^{O(1)}$, while mentioning numerical evidence suggests $W(2;3,k)=k^{2+o(1)}$. Ben Green disproved this latter conjecture, constructing colorings that show $W(2;3,k)$ grows faster than $k^{r}$ for any fixed r.[3] The best upper bound currently known is due to Timothy Gowers,[4] who establishes
$W(r,k)\leq 2^{2^{r^{2^{2^{k+9}}}}},$
by first establishing a similar result for Szemerédi's theorem, which is a stronger version of Van der Waerden's theorem. The previously best-known bound was due to Saharon Shelah and proceeded via first proving a result for the Hales–Jewett theorem, which is another strengthening of Van der Waerden's theorem.
The best lower bound currently known for $W(2,k)$ is that for all positive $\varepsilon $ we have $W(2,k)>2^{k}/k^{\varepsilon }$, for all sufficiently large $k$.[5]
Proof of Van der Waerden's theorem (in a special case)
The following proof is due to Ron Graham, B.L. Rothschild, and Joel Spencer.[6] Khinchin[7] gives a fairly simple proof of the theorem without estimating W(r, k).
Proof in the case of W(2, 3)
W(2, 3) table
b 	n 	c(n)
0 	1 2 3 4 5 	R R B R B
1 	6 7 8 9 10 	B R R B R
… 	… 	…
64 	321 322 323 324 325 	R B R B R
We will prove the special case mentioned above, that W(2, 3) ≤ 325. Let c(n) be a coloring of the integers {1, ..., 325}. We will find three elements of {1, ..., 325} in arithmetic progression that are the same color.
Divide {1, ..., 325} into the 65 blocks {1, ..., 5}, {6, ..., 10}, ... {321, ..., 325}, thus each block is of the form {5b + 1, ..., 5b + 5} for some b in {0, ..., 64}. Since each integer is colored either red or blue, each block is colored in one of 32 different ways. By the pigeonhole principle, there are two blocks among the first 33 blocks that are colored identically. That is, there are two integers b1 and b2, both in {0,...,32}, such that
c(5b1 + k) = c(5b2 + k)
for all k in {1, ..., 5}. Among the three integers 5b1 + 1, 5b1 + 2, 5b1 + 3, there must be at least two that are of the same color. (The pigeonhole principle again.) Call these 5b1 + a1 and 5b1 + a2, where the ai are in {1,2,3} and a1 < a2. Suppose (without loss of generality) that these two integers are both red. (If they are both blue, just exchange 'red' and 'blue' in what follows.)
Let a3 = 2a2 − a1. If 5b1 + a3 is red, then we have found our arithmetic progression: 5b1 + ai are all red.
Otherwise, 5b1 + a3 is blue. Since a3 ≤ 5, 5b1 + a3 is in the b1 block, and since the b2 block is colored identically, 5b2 + a3 is also blue.
Now let b3 = 2b2 − b1. Then b3 ≤ 64. Consider the integer 5b3 + a3, which must be ≤ 325. What color is it?
If it is red, then 5b1 + a1, 5b2 + a2, and 5b3 + a3 form a red arithmetic progression. But if it is blue, then 5b1 + a3, 5b2 + a3, and 5b3 + a3 form a blue arithmetic progression. Either way, we are done.
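The argument above is fully constructive, so it can be run as code; a Python sketch (the helper name is illustrative) that follows the proof step by step and returns a monochromatic progression for any 2-coloring of {1, ..., 325}:

```python
import random

def find_mono_3ap_325(c):
    """c maps {1, ..., 325} to 'R'/'B'; return a monochromatic 3-term
    arithmetic progression (p, q, r), following the proof's construction."""
    def block(b):  # colors of 5b+1, ..., 5b+5
        return tuple(c[5 * b + k] for k in range(1, 6))
    # Pigeonhole: among the first 33 blocks, two are colored identically
    seen = {}
    for b in range(33):
        if block(b) in seen:
            b1, b2 = seen[block(b)], b
            break
        seen[block(b)] = b
    # Pigeonhole again: two of 5b1+1, 5b1+2, 5b1+3 share a color
    a1, a2 = next((x, y) for x in (1, 2, 3) for y in (2, 3)
                  if x < y and c[5 * b1 + x] == c[5 * b1 + y])
    a3 = 2 * a2 - a1                  # a3 <= 5, still inside the b1 block
    if c[5 * b1 + a3] == c[5 * b1 + a1]:
        return (5 * b1 + a1, 5 * b1 + a2, 5 * b1 + a3)
    b3 = 2 * b2 - b1                  # b3 <= 64, so 5*b3 + a3 <= 325
    if c[5 * b3 + a3] == c[5 * b1 + a1]:
        return (5 * b1 + a1, 5 * b2 + a2, 5 * b3 + a3)
    return (5 * b1 + a3, 5 * b2 + a3, 5 * b3 + a3)

random.seed(2)
c = {i: random.choice("RB") for i in range(1, 326)}
p, q, r = find_mono_3ap_325(c)
assert q - p == r - q and c[p] == c[q] == c[r]
print("found", (p, q, r), c[p])
```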
Proof in the case of W(3, 3)
W(3, 3) table
g = 2·3^7·(2·3^7 + 1), m = 7(2·3^7 + 1)
b 	n 	c(n)
0 	1 2 3 … m 	G R R … B
1 	m+1 m+2 m+3 … 2m 	B R G … R
… 	… 	…
g 	gm+1 gm+2 gm+3 … (g+1)m 	B R B … G
A similar argument can be advanced to show that $W(3,3)\leq 7(2\cdot 3^{7}+1)(2\cdot 3^{7}\cdot (2\cdot 3^{7}+1)+1)$. One begins by dividing the integers into $2\cdot 3^{7}\cdot (2\cdot 3^{7}+1)+1$ groups of $7(2\cdot 3^{7}+1)$ integers each; of the first $3^{7(2\cdot 3^{7}+1)}+1$ groups, two must be colored identically.
Divide each of these two groups into $2\cdot 3^{7}+1$ subgroups of 7 integers each; of the first $3^{7}+1$ subgroups in each group, two of the subgroups must be colored identically. Within each of these identical subgroups, two of the first four integers must be the same color, say red; this implies either a red progression or an element of a different color, say blue, in the same subgroup.
Since we have two identically-colored subgroups, there is a third subgroup, still in the same group that contains an element which, if either red or blue, would complete a red or blue progression, by a construction analogous to the one for W(2, 3). Suppose that this element is green. Since there is a group that is colored identically, it must contain copies of the red, blue, and green elements we have identified; we can now find a pair of red elements, a pair of blue elements, and a pair of green elements that 'focus' on the same integer, so that whatever color it is, it must complete a progression.
Proof in general case
The proof for W(2, 3) depends essentially on proving that W(32, 2) ≤ 33. We divide the integers {1,...,325} into 65 'blocks', each of which can be colored in 32 different ways, and then show that two blocks of the first 33 must be the same color, and there is a block colored the opposite way. Similarly, the proof for W(3, 3) depends on proving that
$W(3^{7(2\cdot 3^{7}+1)},2)\leq 3^{7(2\cdot 3^{7}+1)}+1.$
By a double induction on the number of colors and the length of the progression, the theorem is proved in general.
Proof
A D-dimensional arithmetic progression (AP) consists of numbers of the form:
$a+i_{1}s_{1}+i_{2}s_{2}+\cdots +i_{D}s_{D}$
where a is the basepoint, the s's are positive step-sizes, and the i's range from 0 to L − 1. A D-dimensional AP is homogeneous for some coloring when it is all the same color.
A D-dimensional arithmetic progression with benefits is all numbers of the form above, but where you add on some of the "boundary" of the arithmetic progression, i.e. some of the indices i's can be equal to L. The sides you tack on are ones where the first k i's are equal to L, and the remaining i's are less than L.
The boundaries of a D-dimensional AP with benefits are these additional arithmetic progressions of dimension $D-1,D-2,\ldots $, down to 0. The 0-dimensional arithmetic progression is the single point at index value $(L,L,L,\ldots ,L)$. A D-dimensional AP with benefits is homogeneous when each of the boundaries is individually homogeneous, but different boundaries do not necessarily have to have the same color.
Next define the quantity MinN(L, D, N) to be the least integer so that any assignment of N colors to an interval of length MinN or more necessarily contains a homogeneous D-dimensional arithmetical progression with benefits.
The goal is to bound the size of MinN. Note that MinN(L,1,N) is an upper bound for Van der Waerden's number. There are two inductions steps, as follows:
Lemma 1 — Assume MinN is known for a given length L for all dimensions of arithmetic progressions with benefits up to D. This formula gives a bound on MinN when you increase the dimension to D + 1:
let $M=\operatorname {MinN} (L,D,n)$, then
$\operatorname {MinN} (L,D+1,n)\leq M\cdot \operatorname {MinN} (L,1,n^{M})$
Proof
First, if you have an n-coloring of the interval 1...I, you can define a block coloring of k-size blocks: treat each sequence of k colors in a k-block as a single new color. Call this k-blocking the n-coloring. k-blocking an n-coloring of length l produces an $n^{k}$-coloring of length l/k.
So given an n-coloring of an interval I of size $M\cdot \operatorname {MinN} (L,1,n^{M})$, you can M-block it into an $n^{M}$-coloring of length $\operatorname {MinN} (L,1,n^{M})$. But that means, by the definition of MinN, that you can find a 1-dimensional arithmetic sequence (with benefits) of length L in the block coloring, which is a sequence of equally spaced blocks that are all the same block-color, i.e. a collection of blocks of length M in the original sequence, equally spaced, which have exactly the same sequence of colors inside.
Now, by the definition of M, you can find a D-dimensional arithmetic sequence with benefits in any one of these blocks, and since all of the blocks have the same sequence of colors, the same D-dimensional AP with benefits appears in all of the blocks, just by translating it from block to block. This is the definition of a (D + 1)-dimensional arithmetic progression, so you have a homogeneous (D + 1)-dimensional AP. The new stride parameter sD + 1 is defined to be the distance between the blocks.
But you need benefits. The boundaries you get now are all old boundaries, plus their translations into identically colored blocks, because iD+1 is always less than L. The only boundary which is not like this is the 0-dimensional point when $i_{1}=i_{2}=\cdots =i_{D+1}=L$. This is a single point, and is automatically homogeneous.
Lemma 2 — Assume MinN is known for one value of L and all possible dimensions D. Then you can bound MinN for length L + 1.
$\operatorname {MinN} (L+1,1,n)\leq 2\operatorname {MinN} (L,n,n)$
Proof
Given an n-coloring of an interval of size MinN(L,n,n), by definition, you can find an arithmetic sequence with benefits of dimension n of length L. But now, the number of "benefit" boundaries is equal to the number of colors, so one of the homogeneous boundaries, say of dimension k, has to have the same color as another one of the homogeneous benefit boundaries, say the one of dimension p < k. This allows a length L + 1 arithmetic sequence (of dimension 1) to be constructed, by going along a line inside the k-dimensional boundary which ends right on the p-dimensional boundary, and including the terminal point in the p-dimensional boundary. In formulas:
if
$a+Ls_{1}+Ls_{2}+\cdots +Ls_{D-k}$ has the same color as
$a+Ls_{1}+Ls_{2}+\cdots +Ls_{D-p}$
then
$a+L\cdot (s_{1}+\cdots +s_{D-k})+u\cdot (s_{D-k+1}+\cdots +s_{D-p})$ have the same color for
$u=0,1,2,\cdots ,L-1,L$, i.e. u makes a sequence of length L + 1.
This constructs a sequence of dimension 1, and the "benefits" are automatic, just add on another point of whatever color. To include this boundary point, one has to make the interval longer by the maximum possible value of the stride, which is certainly less than the interval size. So doubling the interval size will definitely work, and this is the reason for the factor of two. This completes the induction on L.
Base case: MinN(1,d,n) = 1, i.e. if you want a length 1 homogeneous d-dimensional arithmetic sequence, with or without benefits, you have nothing to do. So this forms the base of the induction. The Van der Waerden theorem itself is the assertion that MinN(L,1,N) is finite, and it follows from the base case and the induction steps.[8]
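The bounds produced by this proof are astronomically large, but for tiny parameters the theorem can be checked directly by brute force. A sketch (illustrative only, not part of the proof) that recovers the classical value W(3, 2) = 9, the smallest N such that every 2-coloring of {1, ..., N} contains a monochromatic 3-term arithmetic progression; the helper names are hypothetical:

```python
from itertools import product

def has_mono_ap(coloring, L):
    """True if the coloring (a tuple indexed 0..N-1) contains a
    monochromatic arithmetic progression of length L."""
    N = len(coloring)
    for start in range(N):
        for step in range(1, N):
            idx = [start + i * step for i in range(L)]
            if idx[-1] >= N:
                break  # larger steps only overflow further
            if len({coloring[i] for i in idx}) == 1:
                return True
    return False

def W(L, n):
    """Smallest N such that every n-coloring of {1..N} contains a
    monochromatic L-term AP (exhaustive search, tiny cases only)."""
    N = L
    while True:
        if all(has_mono_ap(c, L) for c in product(range(n), repeat=N)):
            return N
        N += 1

assert W(3, 2) == 9  # the classical van der Waerden number W(3; 2)
```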
See also
• Van der Waerden numbers for all known values for W(n,r) and the best known bounds for unknown values.
• Van der Waerden game – a game where the player picks integers from the set 1, 2, ..., N, and tries to collect an arithmetic progression of length n.
• Hales–Jewett theorem
• Rado's theorem
• Szemerédi's theorem
• Bartel Leendert van der Waerden
Notes
1. van der Waerden, B. L. (1927). "Beweis einer Baudetschen Vermutung". Nieuw. Arch. Wisk. (in German). 15: 212–216.
2. Graham, Ron (2007). "Some of My Favorite Problems in Ramsey Theory". INTEGERS: The Electronic Journal of Combinatorial Number Theory. 7 (2): #A15.
3. Klarreich, Erica (2021). "Mathematician Hurls Structure and Disorder Into Century-Old Problem". Quanta Magazine.
4. Gowers, Timothy (2001). "A new proof of Szemerédi's theorem". Geometric and Functional Analysis. 11 (3): 465–588. doi:10.1007/s00039-001-0332-9. S2CID 124324198.
5. Szabó, Zoltán (1990). "An application of Lovász' local lemma-a new lower bound for the van der Waerden number". Random Structures & Algorithms. 1 (3): 343–360. doi:10.1002/rsa.3240010307.
6. Graham, Ronald; Rothschild, Bruce; Spencer, Joel (1990). Ramsey theory. Wiley. ISBN 0471500461.
7. Khinchin (1998, pp. 11–17, chapter 1)
8. Graham, R. L.; Rothschild, B. L. (1974). "A short proof of van der Waerden's theorem on arithmetic progressions". Proceedings of the American Mathematical Society. 42 (2): 385–386. doi:10.1090/S0002-9939-1974-0329917-8.
References
• Khinchin, A. Ya. (1998), Three Pearls of Number Theory, Mineola, NY: Dover, pp. 11–17, ISBN 978-0-486-40026-6 (second edition originally published in Russian in 1948)
External links
• O'Bryant, Kevin. "van der Waerden's Theorem". MathWorld.
• O'Bryant, Kevin & Weisstein, Eric W. "Van der Waerden Number". MathWorld.
Arithmetic progression game
The arithmetic progression game is a positional game where two players alternately pick numbers, trying to occupy a complete arithmetic progression of a given size.
The game is parameterized by two integers n > k. The game-board is the set {1,...,n}. The winning-sets are all the arithmetic progressions of length k. In a Maker-Breaker game variant, the first player (Maker) wins by occupying a k-length arithmetic progression, otherwise the second player (Breaker) wins.
The game is also called the van der Waerden game,[1] named after Van der Waerden's theorem. It says that, for any k, there exists some integer W(2,k) such that, if the integers {1, ..., W(2,k)} are partitioned arbitrarily into two sets, then at least one set contains an arithmetic progression of length k. This means that, if $n\geq W(2,k)$, then Maker has a winning strategy.
Unfortunately, this claim is not constructive: it does not exhibit a specific strategy for Maker. Moreover, the current upper bound for W(2,k) is extremely large (the currently known bounds are: $2^{k}/k^{\varepsilon }<W(2,k)<2^{2^{2^{2^{k+9}}}}$).
Let W*(2,k) be the smallest integer such that Maker has a winning strategy. Beck[1] proves that $2^{k-7k^{7/8}}<W^{*}(2,k)<k^{3}2^{k-4}$. In particular, if $k^{3}2^{k-4}<n$, then the game is Maker's win (even though this threshold is much smaller than the number that guarantees the game cannot end in a draw).
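For very small boards the game can be solved exactly by minimax over all play sequences. A brute-force sketch (the helper `maker_wins` is hypothetical, not from the literature); memoization on the pair of occupied sets keeps the tiny cases fast:

```python
from functools import lru_cache

def maker_wins(n, k):
    """Exact minimax for the Maker-Breaker van der Waerden game on
    {1..n}, with winning sets = all k-term arithmetic progressions."""
    wins = [frozenset(range(a, a + (k - 1) * d + 1, d))
            for d in range(1, n)
            for a in range(1, n + 1)
            if a + (k - 1) * d <= n]

    @lru_cache(maxsize=None)
    def play(maker, breaker):
        if any(w <= maker for w in wins):   # Maker owns a full AP
            return True
        free = [x for x in range(1, n + 1) if x not in maker | breaker]
        if not free:                        # board full: Breaker wins
            return False
        if len(maker) == len(breaker):      # Maker to move
            return any(play(maker | {x}, breaker) for x in free)
        return all(play(maker, breaker | {x}) for x in free)

    return play(frozenset(), frozenset())

assert maker_wins(5, 3)       # Maker wins 3-term APs on {1..5}
assert not maker_wins(4, 3)   # Breaker holds out on {1..4}
```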
References
1. Beck, József (1981). "Van der Waerden and Ramsey type games". Combinatorica. 1 (2): 103–116. doi:10.1007/bf02579267. ISSN 0209-9683.
Vandermonde polynomial
In algebra, the Vandermonde polynomial of an ordered set of n variables $X_{1},\dots ,X_{n}$, named after Alexandre-Théophile Vandermonde, is the polynomial:
$V_{n}=\prod _{1\leq i<j\leq n}(X_{j}-X_{i}).$
(Some sources use the opposite order $(X_{i}-X_{j})$, which changes the sign ${\binom {n}{2}}$ times: thus in some dimensions the two formulas agree in sign, while in others they have opposite signs.)
It is also called the Vandermonde determinant, as it is the determinant of the Vandermonde matrix.
The value depends on the order of the terms: it is an alternating polynomial, not a symmetric polynomial.
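These identities are easy to verify symbolically. A small sketch, assuming SymPy is available, checking for n = 3 that the product formula agrees with the determinant of the Vandermonde matrix and that a transposition of two variables flips the sign:

```python
import sympy as sp

n = 3
xs = sp.symbols(f'x1:{n + 1}')   # (x1, x2, x3)

# Product formula for the Vandermonde polynomial V_n
V = sp.Mul(*[xs[j] - xs[i] for i in range(n) for j in range(i + 1, n)])

# Determinant of the Vandermonde matrix with rows (1, x_i, ..., x_i^(n-1))
M = sp.Matrix([[xs[i] ** k for k in range(n)] for i in range(n)])
assert sp.expand(V - M.det()) == 0

# Alternating: swapping two variables negates the polynomial
V_swapped = V.subs({xs[0]: xs[1], xs[1]: xs[0]}, simultaneous=True)
assert sp.expand(V_swapped + V) == 0
```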
Alternating
The defining property of the Vandermonde polynomial is that it is alternating in the entries, meaning that permuting the $X_{i}$ by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the polynomial – in fact, it is the basic alternating polynomial, as will be made precise below.
It thus depends on the order, and is zero if two entries are equal – this also follows from the formula, but is also a consequence of being alternating: if two variables are equal, then switching them leaves the polynomial unchanged, yet by alternation negates it, yielding $V_{n}=-V_{n},$ and thus $V_{n}=0$ (assuming the characteristic is not 2; otherwise being alternating is equivalent to being symmetric).
Conversely, the Vandermonde polynomial is a factor of every alternating polynomial: as shown above, an alternating polynomial vanishes if any two variables are equal, and thus must have $(X_{i}-X_{j})$ as a factor for all $i\neq j$.
Alternating polynomials
Main article: Alternating polynomial
Thus, the Vandermonde polynomial (together with the symmetric polynomials) generates the alternating polynomials.
Discriminant
Its square is widely called the discriminant, though some sources call the Vandermonde polynomial itself the discriminant.
The discriminant (the square of the Vandermonde polynomial: $\Delta =V_{n}^{2}$) does not depend on the order of terms, as $(-1)^{2}=1$, and is thus an invariant of the unordered set of points.
If one adjoins the Vandermonde polynomial to the ring of symmetric polynomials in n variables $\Lambda _{n}$, one obtains the quadratic extension $\Lambda _{n}[V_{n}]/\langle V_{n}^{2}-\Delta \rangle $, which is the ring of alternating polynomials.
Vandermonde polynomial of a polynomial
Given a polynomial, the Vandermonde polynomial of its roots is defined over the splitting field; for a non-monic polynomial, with leading coefficient a, one may define the Vandermonde polynomial as
$V_{n}=a^{n-1}\prod _{1\leq i<j\leq n}(X_{j}-X_{i}),$
(multiplying by a power of the leading coefficient) to accord with the discriminant.
Generalizations
Over arbitrary rings, one instead uses a different polynomial to generate the alternating polynomials – see (Romagny, 2005).
The Vandermonde determinant is a very special case of the Weyl denominator formula applied to the trivial representation of the special unitary group $\mathrm {SU} (n)$.
See also
• Capelli polynomial (ref)
References
• The fundamental theorem of alternating functions, by Matthieu Romagny, September 15, 2005
Kummer–Vandiver conjecture
In mathematics, the Kummer–Vandiver conjecture, or Vandiver conjecture, states that a prime p does not divide the class number hK of the maximal real subfield $K=\mathbb {Q} (\zeta _{p})^{+}$ of the p-th cyclotomic field. The conjecture was first made by Ernst Kummer on 28 December 1849 and 24 April 1853 in letters to Leopold Kronecker, reprinted in (Kummer 1975, pages 84, 93, 123–124), and independently rediscovered around 1920 by Philipp Furtwängler and Harry Vandiver (1946, p. 576).
Kummer–Vandiver conjecture
FieldAlgebraic number theory
Conjectured byErnst Kummer
Conjectured in1849
Open problemYes
As of 2011, there is no particularly strong evidence either for or against the conjecture and it is unclear whether it is true or false, though it is likely that counterexamples are very rare.
Background
The class number h of the cyclotomic field $\mathbb {Q} (\zeta _{p})$ is a product of two integers h1 and h2, called the first and second factors of the class number, where h2 is the class number of the maximal real subfield $K=\mathbb {Q} (\zeta _{p})^{+}$ of the p-th cyclotomic field. The first factor h1 is well understood and can be computed easily in terms of Bernoulli numbers, and is usually rather large. The second factor h2 is not well understood and is hard to compute explicitly, and in the cases when it has been computed it is usually small.
Kummer showed that if a prime p does not divide the class number h, then Fermat's Last Theorem holds for exponent p.
The Kummer–Vandiver conjecture states that p does not divide the second factor h2. Kummer showed that if p divides the second factor, then it also divides the first factor. In particular the Kummer–Vandiver conjecture holds for regular primes (those for which p does not divide the first factor).
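Regularity can be checked computationally via Kummer's criterion: p divides the first factor h1 if and only if p divides the numerator of some Bernoulli number $B_{2k}$ with $2k\leq p-3$. A small sketch assuming SymPy's `bernoulli` (the helper name is an assumption for illustration):

```python
from sympy import bernoulli

def is_irregular(p):
    """Kummer's criterion: an odd prime p is irregular iff p divides
    the numerator of some Bernoulli number B_2k with 2k <= p - 3."""
    return any(bernoulli(2 * k).p % p == 0      # .p is the numerator
               for k in range(1, (p - 1) // 2))

# 37 is the smallest irregular prime (it divides the numerator of B_32)
assert is_irregular(37)
assert not is_irregular(31)
```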
Evidence for and against the Kummer–Vandiver conjecture
Kummer verified the Kummer–Vandiver conjecture for p less than 200, and Vandiver extended this to p less than 600. Joe Buhler, Richard Crandall, and Reijo Ernvall et al. (2001) verified it for p < 12 million. Buhler & Harvey (2011) extended this to primes less than 163 million, and Hart, Harvey & Ong (2017) extended this to primes less than $2^{31}$ (about two billion).
Washington (1996, p. 158) describes an informal probability argument, based on rather dubious assumptions about the equidistribution of class numbers mod p, suggesting that the number of primes less than x that are exceptions to the Kummer–Vandiver conjecture might grow like (1/2)log log x. This grows extremely slowly, and suggests that the computer calculations do not provide much evidence for Vandiver's conjecture: for example, the probability argument (combined with the calculations for small primes) suggests that one should only expect about 1 counterexample in the first $10^{100}$ primes, suggesting that it is unlikely any counterexample will be found by further brute force searches even if there are an infinite number of exceptions.
Schoof (2003) gave conjectural calculations of the class numbers of real cyclotomic fields for primes up to 10000, which strongly suggest that the class numbers are not randomly distributed mod p. They tend to be quite small and are often just 1. For example, assuming the generalized Riemann hypothesis, the class number of the real cyclotomic field for the prime p is 1 for p<163, and divisible by 4 for p=163. This suggests that Washington's informal probability argument against the conjecture may be misleading.
Mihăilescu (2010) gave a refined version of Washington's heuristic argument, suggesting that the Kummer–Vandiver conjecture is probably true.
Consequences of the Kummer–Vandiver conjecture
Kurihara (1992) showed that the conjecture is equivalent to a statement in the algebraic K-theory of the integers, namely that Kn(Z) = 0 whenever n is a multiple of 4. In fact from the Kummer–Vandiver conjecture and the norm residue isomorphism theorem follow a full conjectural calculation of the K-groups for all values of n; see Quillen–Lichtenbaum conjecture for details.
See also
• Regular and irregular primes
• Herbrand–Ribet theorem
References
• Buhler, Joe; Crandall, Richard; Ernvall, Reijo; Metsänkylä, Tauno; Shokrollahi, M. Amin (2001), Bosma, Wieb (ed.), "Irregular primes and cyclotomic invariants to 12 million", Computational algebra and number theory (Proceedings of the 2nd International Magma Conference held at Marquette University, Milwaukee, WI, May 12–16, 1996), Journal of Symbolic Computation, 31 (1): 89–96, doi:10.1006/jsco.1999.1011, ISSN 0747-7171, MR 1806208
• Ghate, Eknath (2000), "Vandiver's conjecture via K-theory" (PDF), in Adhikari, S. D.; Katre, S. A.; Thakur, Dinesh (eds.), Cyclotomic fields and related topics, Proceedings of the Summer School on Cyclotomic Fields held in Pune, June 7–30, 1999, Bhaskaracharya Pratishthana, Pune, pp. 285–298, MR 1802389
• Buhler, J. P.; Harvey, D. (2011), "Irregular primes to 163 million", Mathematics of Computation, 80 (276): 2435–2444, doi:10.1090/S0025-5718-2011-02461-0, MR 2813369
• Hart, William; Harvey, David; Ong, Wilson (2017), "Irregular primes to two billion", Mathematics of Computation, 86 (308): 3031–3049, arXiv:1605.02398, doi:10.1090/mcom/3211, MR 3667037, S2CID 37245286
• Kummer, Ernst Eduard (1975), Weil, André (ed.), Collected papers. Volume 1: Contributions to Number Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-06835-0, MR 0465760
• Kurihara, Masato (1992), "Some remarks on conjectures about cyclotomic fields and K-groups of Z", Compositio Mathematica, 81 (2): 223–236, ISSN 0010-437X, MR 1145807
• Mihăilescu, Preda (2010), Turning Washington's heuristics in favor of Vandiver's conjecture, arXiv:1011.6283, Bibcode:2010arXiv1011.6283M
• Schoof, René (2003), "Class numbers of real cyclotomic fields of prime conductor", Mathematics of Computation, 72 (242): 913–937, doi:10.1090/S0025-5718-02-01432-1, ISSN 0025-5718, MR 1954975
• Vandiver, H. S. (1946), "Fermat's last theorem. Its history and the nature of the known results concerning it", The American Mathematical Monthly, 53 (10): 555–578, doi:10.1080/00029890.1946.11991754, ISSN 0002-9890, JSTOR 2305236, MR 0018660
• Washington, Lawrence C. (1996), Introduction to Cyclotomic Fields, Springer, ISBN 978-0-387-94762-4
Vanessa Robins
Vanessa Robins is an Australian applied mathematician whose research interests include computational topology, image processing, and the structure of granular materials. She is a fellow in the departments of applied mathematics and theoretical physics at Australian National University, where she was ARC Future Fellow from 2014 to 2019.[1]
Education
Robins earned a bachelor's degree in mathematics at Australian National University in 1994.[1] She completed a PhD at the University of Colorado Boulder in 2000. Her dissertation, Computational Topology at Multiple Resolutions: Foundations and Applications to Fractals and Dynamics, was jointly supervised by James D. Meiss and Elizabeth Bradley.[2]
Contributions
One of Robins's publications, from 1999, is one of the three works that independently introduced persistent homology in topological data analysis.[3] As well as working on mathematical research, she has collaborated with artist Julie Brooke, of the Australian National University School of Art & Design, on the mathematical visualization of topological surfaces.[4]
References
1. "Dr Vanessa Robins", People, Australian National University Research School of Physics, retrieved 2020-05-04
2. Vanessa Robins at the Mathematics Genealogy Project
3. Edelsbrunner, Herbert; Morozov, Dmitriy (2013), "Persistent homology: theory and practice" (PDF), European Congress of Mathematics, Eur. Math. Soc., Zürich, pp. 31–50, MR 3469114
4. "The art of science in jewellery, metal, tape and music", Science in Public, 9 December 2014, retrieved 2020-05-04; "Julie Brooke: Minimal Surfaces", Art Almanac, 30 March 2015, retrieved 2020-05-04
External links
• Vanessa Robins publications indexed by Google Scholar
Missing square puzzle
The missing square puzzle is an optical illusion used in mathematics classes to help students reason about geometrical figures; or rather to teach them not to reason using figures, but to use only textual descriptions and the axioms of geometry. It depicts two arrangements made of similar shapes in slightly different configurations. Each apparently forms a 13×5 right-angled triangle, but one has a 1×1 hole in it.
Solution
The key to the puzzle is the fact that neither of the 13×5 "triangles" is truly a triangle, nor would either truly be 13×5 if it were, because what appears to be the hypotenuse is bent. In other words, the "hypotenuse" does not maintain a consistent slope, even though it may appear that way to the human eye.
A true 13×5 triangle cannot be created from the given component parts. The four figures (the yellow, red, blue and green shapes) total 32 units of area. The apparent triangles formed from the figures are 13 units wide and 5 units tall, so it appears that the area should be S = 13×5/2 = 32.5 units. However, the blue triangle has a ratio of 5:2 (=2.5), while the red triangle has the ratio 8:3 (≈2.667), so the apparent combined hypotenuse in each figure is actually bent. With the bent hypotenuse, the first figure actually occupies a combined 32 units, while the second figure occupies 33, including the "missing" square.
The amount of bending is approximately 1/28 unit (1.245364267°), which is difficult to see in the diagram of the puzzle. Note the grid point where the red and blue triangles in the lower image meet (5 squares to the right and 2 units up from the lower left corner of the combined figure), and compare it to the same point in the other figure; the edge lies slightly below the mark in the upper image but passes through it in the lower. Overlaying the hypotenuses from both figures results in a very thin parallelogram (represented with the four red dots) with an area of exactly one grid square (Pick's theorem gives 0 [1] + 4 [2]/2 − 1 = 1), which accounts exactly for the "missing" area.
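The arithmetic above can be verified with exact rational arithmetic. A small sketch using the component areas of the standard version of the puzzle (blue 5×2 triangle, red 8×3 triangle, and L-shaped pieces of 7 and 8 unit squares):

```python
from fractions import Fraction

# The two "hypotenuse" pieces have different slopes, so the edge is bent.
blue_slope = Fraction(2, 5)   # blue triangle: 5 wide, 2 tall
red_slope = Fraction(3, 8)    # red triangle: 8 wide, 3 tall
assert blue_slope != red_slope

# The component areas total 32 units, not the 32.5 of a true 13×5 triangle.
areas = [Fraction(5 * 2, 2),  # blue right triangle
         Fraction(8 * 3, 2),  # red right triangle
         Fraction(7),         # yellow piece (7 unit squares)
         Fraction(8)]         # green piece (8 unit squares)
assert sum(areas) == 32
assert Fraction(13 * 5, 2) - sum(areas) == Fraction(1, 2)
```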
Principle
According to Martin Gardner,[3] this particular puzzle was invented by a New York City amateur magician, Paul Curry, in 1953. However, the principle of a dissection paradox has been known since the start of the 16th century.
The integer dimensions of the parts of the puzzle (2, 3, 5, 8, 13) are successive Fibonacci numbers, which leads to the exact unit area in the thin parallelogram. Many other geometric dissection puzzles are based on a few simple properties of the Fibonacci sequence.[4]
Similar puzzles
Sam Loyd's chessboard paradox demonstrates two rearrangements of an 8×8 square. In the "larger" rearrangement (the 5×13 rectangle in the image to the right), the gaps between the figures have a combined area one unit square greater than the gaps in the original square, creating the illusion that the figures there take up more space than those in the original square figure.[5] In the "smaller" rearrangement (the shape below the 5×13 rectangle), each quadrilateral must overlap the triangle by an area of half a unit for its top/bottom edge to align with a grid line, resulting in an overall loss of one unit square of area.
Mitsunobu Matsuyama's "paradox" uses four congruent quadrilaterals and a small square, which form a larger square. When the quadrilaterals are rotated about their centers they fill the space of the small square, although the total area of the figure seems unchanged. The apparent paradox is explained by the fact that the side of the new large square is a little smaller than the original one. If θ is the angle between two opposing sides in each quadrilateral, then the ratio of the two areas is given by sec2 θ. For θ = 5°, this is approximately 1.00765, which corresponds to a difference of about 0.8%.
A vanishing puzzle is a mechanical optical illusion showing different numbers of a certain object when parts of the puzzle are moved around.[6]
See also
• Chessboard paradox
• Einstellung effect
• Hooper's paradox
• Missing dollar riddle
References
1. number of interior lattice points
2. number of boundary lattice points
3. Gardner, Martin (1956). Mathematics, Magic and Mystery. Dover. pp. 139–150. ISBN 9780486203355.
4. Weisstein, Eric. "Cassini's Identity". Math World.
5. "A Paradoxical Dissection". mathblag. 2011-08-28. Retrieved 2018-04-19.
6. The Guardian, Vanishing Leprechaun, Disappearing Dwarf and Swinging Sixties Pin-up Girls – puzzles in pictures
External links
Wikimedia Commons has media related to Missing square puzzle.
• A printable Missing Square variant with a video demonstration.
• Curry's Paradox: How Is It Possible? at cut-the-knot
• Jigsaw Paradox
• The Eleven Holes Puzzle
• "Infinite Chocolate Bar Trick", a demonstration of the missing square puzzle utilising a 4×6 chocolate bar
Vanishing cycle
In mathematics, vanishing cycles are studied in singularity theory and other parts of algebraic geometry. They are those homology cycles of a smooth fiber in a family which vanish in the singular fiber.
For example, in a map from a connected complex surface to the complex projective line, a generic fiber is a smooth Riemann surface of some fixed genus g and, generically, there will be isolated points in the target whose preimages are nodal curves. If one considers an isolated critical value and a small loop around it, in each fiber, one can find a smooth loop such that the singular fiber can be obtained by pinching that loop to a point. The loop in the smooth fibers gives an element of the first homology group of a surface, and the monodromy of the critical value is defined to be the monodromy of the first homology of the fibers as the loop is traversed, i.e. an invertible map of the first homology of a (real) surface of genus g.
A classical result is the Picard–Lefschetz formula,[1] detailing how the monodromy round the singular fiber acts on the vanishing cycles, by a shear mapping.
The classical, geometric theory of Solomon Lefschetz was recast in purely algebraic terms, in SGA7. This was for the requirements of its application in the context of l-adic cohomology; and eventual application to the Weil conjectures. There the definition uses derived categories, and looks very different. It involves a functor, the nearby cycle functor, with a definition by means of the higher direct image and pullbacks. The vanishing cycle functor then sits in a distinguished triangle with the nearby cycle functor and a more elementary functor. This formulation has been of continuing influence, in particular in D-module theory.
See also
• Thom–Sebastiani Theorem
References
1. Given in , for Morse functions.
• Dimca, Alexandru; Singularities and Topology of Hypersurfaces.
• Section 3 of Peters, C.A.M. and J.H.M. Steenbrink: Infinitesimal variations of Hodge structure and the generic Torelli problem for projective hypersurfaces, in: Classification of Algebraic Manifolds, K. Ueno ed., Progress in Math. 39, Birkhäuser 1983.
• For the étale cohomology version, see the chapter on monodromy in Freitag, E.; Kiehl, Reinhardt (1988), Etale Cohomology and the Weil Conjecture, Berlin: Springer-Verlag, ISBN 978-0-387-12175-8
• Deligne, Pierre; Katz, Nicholas, eds. (1973), Séminaire de Géométrie Algébrique du Bois Marie – 1967–69 – Groupes de monodromie en géométrie algébrique – (SGA 7) – vol. 2, Lecture Notes in Mathematics, vol. 340, Berlin, New York: Springer-Verlag, pp. x+438, see especially Pierre Deligne, Le formalisme des cycles évanescents, SGA7 XIII and XIV.
• Massey, David (2010). "Notes on Perverse Sheaves and Vanishing Cycles". arXiv:math/9908107.
External links
• Vanishing Cycle in the Encyclopedia of Mathematics
Vanja Dukic
Vanja Dukic is an expert in computational statistics and mathematical epidemiology who works as a professor of applied mathematics at the University of Colorado Boulder. Her research includes work on using internet search engine access patterns to track diseases,[1][2] and on the effects of climate change on the spread of diseases.
Dukic earned a bachelor's degree in finance and actuarial mathematics from Bryant University in 1995.[3] She completed her doctorate at Brown University in 2001, under the joint supervision of biostatisticians Constantine Gatsonis and Joseph Hogan.[4] She worked as a faculty member in the biostatistics program of the Department of Public Health Sciences at the University of Chicago from 2001 to 2010, before moving to Colorado.[3]
In 2015 she was elected as a Fellow of the American Statistical Association "for important contributions to Bayesian modeling of complex processes and analysis of Big Data, substantive and collaborative research in infectious diseases and climate change, and service to the profession, including excellence in editorial work."[5][6]
References
1. Wernau, Julie (December 11, 2009), "Flu is waning, say U. of C. professors: Trio uses Google data to track illness", Chicago Tribune.
2. Keim, Brandon (May 20, 2011), "Google search patterns could track MRSA spread", Wired.
3. Home page and brief biography, University of Colorado, retrieved 2016-07-09.
4. Vanja Dukic at the Mathematics Genealogy Project
5. "ASA name 62 new Fellows", IMS Bulletin, October 2, 2015.
6. ASA name 62 new Fellows: Selection honors each as "foremost members" of statistical science (PDF), American Statistical Association, June 4, 2015, archived from the original (PDF) on 2016-03-04, retrieved 2016-07-09.
External links
• Vanja Dukic publications indexed by Google Scholar
Vantieghem's theorem
In number theory, Vantieghem's theorem is a primality criterion. It states that a natural number n ≥ 3 is prime if and only if
$\prod _{1\leq k\leq n-1}\left(2^{k}-1\right)\equiv n\mod \left(2^{n}-1\right).$
Similarly, n is prime if and only if the following congruence for polynomials in X holds:
$\prod _{1\leq k\leq n-1}\left(X^{k}-1\right)\equiv n-\left(X^{n}-1\right)/\left(X-1\right)\mod \left(X^{n}-1\right)$
or:
$\prod _{1\leq k\leq n-1}\left(X^{k}-1\right)\equiv n\mod \left(X^{n}-1\right)/\left(X-1\right).$
Example
Let n = 7. Forming the product of $2^{k}-1$ for k = 1, ..., 6 gives 1·3·7·15·31·63 = 615195, and 615195 ≡ 7 (mod 127), so 7 is prime.
Let n = 9. Forming the product 1·3·7·15·31·63·127·255 = 19923090075, we find 19923090075 ≡ 301 (mod 511), so 9 is composite.
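The criterion is straightforward to test by accumulating the product modulo $2^{n}-1$. A small sketch (the function name is an assumption; the test is impractical for large n, since the product has n − 1 factors):

```python
def vantieghem_is_prime(n):
    """Vantieghem's criterion: n >= 3 is prime iff
    prod_{k=1}^{n-1} (2^k - 1) == n (mod 2^n - 1)."""
    m = 2 ** n - 1
    prod = 1
    for k in range(1, n):
        prod = prod * (2 ** k - 1) % m  # reduce at each step
    return prod == n % m

assert vantieghem_is_prime(7)       # matches the first example above
assert not vantieghem_is_prime(9)   # matches the second example above
```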
References
• Kilford, L.J.P. (2004). "A generalization of a necessary and sufficient condition for primality due to Vantieghem". Int. J. Math. Math. Sci. 2004 (69–72): 3889–3892. arXiv:math/0402128. Bibcode:2004math......2128K. doi:10.1155/S0161171204403226. Zbl 1126.11307.. An article with proof and generalizations.
• Vantieghem, E. (1991). "On a congruence only holding for primes". Indag. Math. New Series. 2 (2): 253–255. doi:10.1016/0019-3577(91)90013-W. Zbl 0734.11003.
Vanya Mirzoyan
Vanya Mirzoyan (Armenian: Վանյա Միրզոյան, born 5 July 1948) is an Armenian mathematician.
Vanya Aleksandrovich Mirzoyan
Born (1948-07-05) July 5, 1948
Mountainous Jagir Village, Shamkhor, Artsakh
Academic work
DisciplineScience
InstitutionsNational Polytechnic University of Armenia
Biography
V.A. Mirzoyan was born in Mountainous Jagir, an Armenian village in the Shamkhor District of Artsakh. His father, Aleksandr Ghazar Mirzoyan, taught Geography and Astronomy at the secondary school of Mountainous Jagir; his mother, Arshaluys Sergey Harutyunyan, was an employee. From 1964 to 1968 he studied at the Yerevan Technical College of Electronic Computers. In 1967 he graduated from Yerevan Secondary Correspondence School 3 and was admitted to the Department of Mechanics and Mathematics of Yerevan State University, from which he graduated in 1972. From 1972 to 1974 he served in the Soviet Army as an officer. From October 1975 to October 1978 he pursued targeted postgraduate studies in "Geometry and Topology" at the University of Tartu, Estonia, under the scientific supervision of professor Ülo G. Lumiste, Doctor of Physical and Mathematical Sciences and member of the Estonian Academy of Sciences. From 1979 to 1981 he was a professor in the Algebra and Geometry Department at the Armenian State Pedagogical University named after Khachatur Abovian. Since 1981 he has been a staff member of the National Polytechnic University of Armenia (Yerevan), holding the positions of Assistant, Associate Professor, Professor, and Head of Department.
Scientific interests
Scientific interests include Riemannian geometry, which studies Riemannian manifolds and submanifolds with natural parallel and semi-parallel tensor fields. These are Riemannian symmetric, semi-symmetric, Einstein, semi-Einstein, Ricci-semisymmetric manifolds and their isometric realizations in spaces of constant curvature.
Scientific results
• Gave a general local classification of Riemannian Ricci-semisymmetric manifolds,
• Introduced semi-Einstein manifolds and singled out the class of such manifolds in the form of cones over Einstein manifolds,
• Gave the local classification and geometric description of Ricci-semisymmetric hypersurfaces in Euclidean spaces,
• Studied and geometrically described various classes of semi-Einstein submanifolds of arbitrary codimension in Euclidean spaces,
• Established a fundamental interrelation between submanifolds with parallel tensor fields and submanifolds with corresponding semi-parallel tensor fields in spaces of constant curvature,
• Gave a general local classification of normally flat Ricci-semisymmetric submanifolds in Euclidean spaces.
Awards
• Candidate of Physical and Mathematical Sciences (21.01.1980, Moscow, Moscow State Pedagogical University named after V.I.Lenin)
• Doctor of Physical and Mathematical Sciences (21.01.1999, Kazan, KSU)
References
External links
• Profile at Marquis Who's Who
• Profile at Russian Mathematical Portal
• Profile at Hayazg.info
• Main Scientific Works
Variance
In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable; equivalently, it is the square of the standard deviation. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by $\sigma ^{2}$, $s^{2}$, $\operatorname {Var} (X)$, $V(X)$, or $\mathbb {V} (X)$.[1]
An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished.
There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical probability distribution and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real world system. If all possible observations of the system are present then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below.
The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling.
Etymology
The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance:[2]
The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations $\sigma _{1}$ and $\sigma _{2}$, it is found that the distribution, when both causes act together, has a standard deviation ${\sqrt {\sigma _{1}^{2}+\sigma _{2}^{2}}}$. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance...
Definition
The variance of a random variable $X$ is the expected value of the squared deviation from the mean of $X$, $\mu =\operatorname {E} [X]$:
$\operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right].$
This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself:
$\operatorname {Var} (X)=\operatorname {Cov} (X,X).$
The variance is also equivalent to the second cumulant of a probability distribution that generates $X$. The variance is typically designated as $\operatorname {Var} (X)$, or sometimes as $V(X)$ or $\mathbb {V} (X)$, or symbolically as $\sigma _{X}^{2}$ or simply $\sigma ^{2}$ (pronounced "sigma squared"). The expression for the variance can be expanded as follows:
${\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left[(X-\operatorname {E} [X])^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}-2X\operatorname {E} [X]+\operatorname {E} [X]^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]\operatorname {E} [X]+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}\end{aligned}}$
In other words, the variance of X is equal to the mean of the square of X minus the square of the mean of X. This equation should not be used for computations using floating point arithmetic, because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see Algorithms for calculating variance.
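The cancellation problem is easy to reproduce in floating-point arithmetic. The following is a minimal Python sketch (the function names and sample data are illustrative, not from any standard library) comparing the expanded formula E[X²] − E[X]² with the numerically stable two-pass formula:

```python
def variance_naive(xs):
    # Expanded formula E[X^2] - E[X]^2: prone to catastrophic cancellation
    # when the mean is large relative to the spread.
    n = len(xs)
    mean_sq = sum(x * x for x in xs) / n
    mean = sum(xs) / n
    return mean_sq - mean * mean

def variance_two_pass(xs):
    # Stable two-pass formula: average of squared deviations from the mean.
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / len(xs)

# A small spread around a huge offset: the true (population) variance is 22.5.
data = [1e9 + d for d in (4.0, 7.0, 13.0, 16.0)]
print(variance_naive(data))     # badly wrong: the 10^18-sized terms cancel
print(variance_two_pass(data))  # 22.5
```

With the offset removed (e.g. `data = [4.0, 7.0, 13.0, 16.0]`), both functions agree, which is why the expanded formula can look harmless in casual testing.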
Discrete random variable
If the generator of random variable $X$ is discrete with probability mass function $x_{1}\mapsto p_{1},x_{2}\mapsto p_{2},\ldots ,x_{n}\mapsto p_{n}$, then
$\operatorname {Var} (X)=\sum _{i=1}^{n}p_{i}\cdot (x_{i}-\mu )^{2},$
where $\mu $ is the expected value. That is,
$\mu =\sum _{i=1}^{n}p_{i}x_{i}.$
(When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.)
The variance of a collection of $n$ equally likely values can be written as
$\operatorname {Var} (X)={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}$
where $\mu $ is the average value. That is,
$\mu ={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.$
The variance of a set of $n$ equally likely values can be equivalently expressed, without directly referring to the mean, in terms of the squared distances between all pairs of points:[3]
$\operatorname {Var} (X)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {1}{2}}(x_{i}-x_{j})^{2}={\frac {1}{n^{2}}}\sum _{i}\sum _{j>i}(x_{i}-x_{j})^{2}.$
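Both expressions for the variance of equally likely values can be checked numerically. A short Python sketch (with an arbitrary sample list chosen for illustration):

```python
from itertools import combinations

def var_about_mean(xs):
    # Var = (1/n) * sum of (x_i - mu)^2
    n = len(xs)
    mu = sum(xs) / n
    return sum((x - mu) ** 2 for x in xs) / n

def var_pairwise(xs):
    # Var = (1/n^2) * sum over i < j of (x_i - x_j)^2, no mean required
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / n**2

xs = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
print(var_about_mean(xs))  # 4.0
print(var_pairwise(xs))    # 4.0
```

The pairwise form is occasionally convenient in streaming or distributed settings precisely because it never references the global mean.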
Absolutely continuous random variable
If the random variable $X$ has a probability density function $f(x)$, and $F(x)$ is the corresponding cumulative distribution function, then
${\begin{aligned}\operatorname {Var} (X)=\sigma ^{2}&=\int _{\mathbb {R} }(x-\mu )^{2}f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}f(x)\,dx-2\mu \int _{\mathbb {R} }xf(x)\,dx+\mu ^{2}\int _{\mathbb {R} }f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \int _{\mathbb {R} }x\,dF(x)+\mu ^{2}\int _{\mathbb {R} }\,dF(x)\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \cdot \mu +\mu ^{2}\cdot 1\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-\mu ^{2},\end{aligned}}$
or equivalently,
$\operatorname {Var} (X)=\int _{\mathbb {R} }x^{2}f(x)\,dx-\mu ^{2},$
where $\mu $ is the expected value of $X$ given by
$\mu =\int _{\mathbb {R} }xf(x)\,dx=\int _{\mathbb {R} }x\,dF(x).$
In these formulas, the integrals with respect to $dx$ and $dF(x)$ are Lebesgue and Lebesgue–Stieltjes integrals, respectively.
If the function $x^{2}f(x)$ is Riemann-integrable on every finite interval $[a,b]\subset \mathbb {R} ,$ then
$\operatorname {Var} (X)=\int _{-\infty }^{+\infty }x^{2}f(x)\,dx-\mu ^{2},$
where the integral is an improper Riemann integral.
Examples
Exponential distribution
The exponential distribution with parameter λ is a continuous distribution whose probability density function is given by
$f(x)=\lambda e^{-\lambda x}$
on the interval [0, ∞). Its mean can be shown to be
$\operatorname {E} [X]=\int _{0}^{\infty }x\lambda e^{-\lambda x}\,dx={\frac {1}{\lambda }}.$
Using integration by parts and making use of the expected value already calculated, we have:
${\begin{aligned}\operatorname {E} \left[X^{2}\right]&=\int _{0}^{\infty }x^{2}\lambda e^{-\lambda x}\,dx\\&=\left[-x^{2}e^{-\lambda x}\right]_{0}^{\infty }+\int _{0}^{\infty }2xe^{-\lambda x}\,dx\\&=0+{\frac {2}{\lambda }}\operatorname {E} [X]\\&={\frac {2}{\lambda ^{2}}}.\end{aligned}}$
Thus, the variance of X is given by
$\operatorname {Var} (X)=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}={\frac {2}{\lambda ^{2}}}-\left({\frac {1}{\lambda }}\right)^{2}={\frac {1}{\lambda ^{2}}}.$
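The closed-form results above can be sanity-checked by numerical integration of the density. A minimal Python sketch (the rate value, integration bound, and midpoint-rule helper are arbitrary illustrative choices):

```python
import math

lam = 0.5  # illustrative rate parameter; any lam > 0 behaves the same way

def integrate(f, a, b, steps=200_000):
    # simple midpoint rule; the tail beyond b is negligible for large b
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

pdf = lambda x: lam * math.exp(-lam * x)

mean = integrate(lambda x: x * pdf(x), 0.0, 200.0)
second_moment = integrate(lambda x: x * x * pdf(x), 0.0, 200.0)
var = second_moment - mean ** 2

print(mean)  # close to 1/lam = 2.0
print(var)   # close to 1/lam**2 = 4.0
```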
Fair die
A fair six-sided die can be modeled as a discrete random variable, X, with outcomes 1 through 6, each with equal probability 1/6. The expected value of X is $(1+2+3+4+5+6)/6=7/2.$ Therefore, the variance of X is
${\begin{aligned}\operatorname {Var} (X)&=\sum _{i=1}^{6}{\frac {1}{6}}\left(i-{\frac {7}{2}}\right)^{2}\\[5pt]&={\frac {1}{6}}\left((-5/2)^{2}+(-3/2)^{2}+(-1/2)^{2}+(1/2)^{2}+(3/2)^{2}+(5/2)^{2}\right)\\[5pt]&={\frac {35}{12}}\approx 2.92.\end{aligned}}$
The general formula for the variance of the outcome, X, of an n-sided die is
${\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left(X^{2}\right)-(\operatorname {E} (X))^{2}\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}i^{2}-\left({\frac {1}{n}}\sum _{i=1}^{n}i\right)^{2}\\[5pt]&={\frac {(n+1)(2n+1)}{6}}-\left({\frac {n+1}{2}}\right)^{2}\\[4pt]&={\frac {n^{2}-1}{12}}.\end{aligned}}$
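The closed form (n² − 1)/12 can be verified exactly with rational arithmetic. A short Python sketch using the standard-library `fractions` module:

```python
from fractions import Fraction

def die_variance(n):
    # Exact variance of a fair n-sided die, by direct summation.
    faces = range(1, n + 1)
    mean = Fraction(sum(faces), n)
    return sum((Fraction(i) - mean) ** 2 for i in faces) / n

for n in (4, 6, 20, 100):
    assert die_variance(n) == Fraction(n * n - 1, 12)

print(die_variance(6))  # 35/12
```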
Commonly used probability distributions
The following table lists the variance for some commonly used probability distributions.
Name of the probability distribution Probability distribution function Mean Variance
Binomial distribution $\Pr \,(X=k)={\binom {n}{k}}p^{k}(1-p)^{n-k}$ $np$ $np(1-p)$
Geometric distribution $\Pr \,(X=k)=(1-p)^{k-1}p$ ${\frac {1}{p}}$ ${\frac {(1-p)}{p^{2}}}$
Normal distribution $f\left(x\mid \mu ,\sigma ^{2}\right)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}$ $\mu $ $\sigma ^{2}$
Uniform distribution (continuous) $f(x\mid a,b)={\begin{cases}{\frac {1}{b-a}}&{\text{for }}a\leq x\leq b,\\[3pt]0&{\text{for }}x<a{\text{ or }}x>b\end{cases}}$ ${\frac {a+b}{2}}$ ${\frac {(b-a)^{2}}{12}}$
Exponential distribution $f(x\mid \lambda )=\lambda e^{-\lambda x}$ ${\frac {1}{\lambda }}$ ${\frac {1}{\lambda ^{2}}}$
Poisson distribution $f(k\mid \lambda )={\frac {e^{-\lambda }\lambda ^{k}}{k!}}$ $\lambda $ $\lambda $
Properties
Basic properties
Variance is non-negative because the squared deviations are never negative:
$\operatorname {Var} (X)\geq 0.$
The variance of a constant is zero.
$\operatorname {Var} (a)=0.$
Conversely, if the variance of a random variable is 0, then it is almost surely a constant. That is, it always has the same value:
$\operatorname {Var} (X)=0\iff \exists a:P(X=a)=1.$
Issues of finiteness
If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index $k$ satisfies $1<k\leq 2.$
Decomposition
The general formula for variance decomposition or the law of total variance is: If $X$ and $Y$ are two random variables, and the variance of $X$ exists, then
$\operatorname {Var} [X]=\operatorname {E} (\operatorname {Var} [X\mid Y])+\operatorname {Var} (\operatorname {E} [X\mid Y]).$
The conditional expectation $\operatorname {E} (X\mid Y)$ of $X$ given $Y$, and the conditional variance $\operatorname {Var} (X\mid Y)$ may be understood as follows. Given any particular value y of the random variable Y, there is a conditional expectation $\operatorname {E} (X\mid Y=y)$ given the event Y = y. This quantity depends on the particular value y; it is a function $g(y)=\operatorname {E} (X\mid Y=y)$. That same function evaluated at the random variable Y is the conditional expectation $\operatorname {E} (X\mid Y)=g(Y).$
In particular, if $Y$ is a discrete random variable assuming possible values $y_{1},y_{2},y_{3}\ldots $ with corresponding probabilities $p_{1},p_{2},p_{3}\ldots $, then in the formula for total variance, the first term on the right-hand side becomes
$\operatorname {E} (\operatorname {Var} [X\mid Y])=\sum _{i}p_{i}\sigma _{i}^{2},$
where $\sigma _{i}^{2}=\operatorname {Var} [X\mid Y=y_{i}]$. Similarly, the second term on the right-hand side becomes
$\operatorname {Var} (\operatorname {E} [X\mid Y])=\sum _{i}p_{i}\mu _{i}^{2}-\left(\sum _{i}p_{i}\mu _{i}\right)^{2}=\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2},$
where $\mu _{i}=\operatorname {E} [X\mid Y=y_{i}]$ and $\mu =\sum _{i}p_{i}\mu _{i}$. Thus the total variance is given by
$\operatorname {Var} [X]=\sum _{i}p_{i}\sigma _{i}^{2}+\left(\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2}\right).$
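The discrete decomposition above can be verified exactly on a toy example. In the Python sketch below (group probabilities and conditional distributions are arbitrary illustrative choices), the total variance is computed directly from the mixture and compared with the within-group plus between-group terms:

```python
from fractions import Fraction as F

# Each entry is (p_i, conditional distribution of X given Y = y_i),
# with the conditional distribution taken uniform over the listed values.
groups = [
    (F(1, 2), [F(0), F(2)]),    # p_1 = 1/2, X | Y=y_1 uniform on {0, 2}
    (F(1, 2), [F(10), F(14)]),  # p_2 = 1/2, X | Y=y_2 uniform on {10, 14}
]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Left-hand side: variance of X over the whole weighted population.
mu = sum(p * mean(xs) for p, xs in groups)
total_var = sum(p * sum((x - mu) ** 2 for x in xs) / len(xs) for p, xs in groups)

# Right-hand side: E[Var(X|Y)] + Var(E[X|Y]).
within = sum(p * var(xs) for p, xs in groups)
between = sum(p * mean(xs) ** 2 for p, xs in groups) - mu ** 2

assert total_var == within + between
print(total_var, within, between)  # 131/4 5/2 121/4
```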
A similar formula is applied in analysis of variance, where the corresponding formula is
${\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{between}}+{\mathit {MS}}_{\text{within}};$
here ${\mathit {MS}}$ refers to the Mean of the Squares. In linear regression analysis the corresponding formula is
${\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{regression}}+{\mathit {MS}}_{\text{residual}}.$
This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated.
Similar decompositions are possible for the sum of squared deviations (sum of squares, ${\mathit {SS}}$):
${\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{between}}+{\mathit {SS}}_{\text{within}},$
${\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{regression}}+{\mathit {SS}}_{\text{residual}}.$
Calculation from the CDF
The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using
$2\int _{0}^{\infty }u(1-F(u))\,du-\left(\int _{0}^{\infty }(1-F(u))\,du\right)^{2}.$
This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed.
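For instance, the exponential distribution has the simple survival function 1 − F(u) = e^{−λu}, so the CDF formula can be checked numerically against the known variance 1/λ². A Python sketch (rate, integration bound, and quadrature helper are illustrative choices):

```python
import math

lam = 2.0  # illustrative rate; CDF is F(u) = 1 - exp(-lam * u)
sf = lambda u: math.exp(-lam * u)  # survival function 1 - F(u)

def integrate(f, a, b, steps=100_000):
    # simple midpoint rule; the truncated tail is negligible here
    h = (b - a) / steps
    return sum(f(a + (k + 0.5) * h) for k in range(steps)) * h

first_term = 2.0 * integrate(lambda u: u * sf(u), 0.0, 40.0)
second_term = integrate(sf, 0.0, 40.0) ** 2
var = first_term - second_term

print(var)  # close to 1/lam**2 = 0.25
```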
Characteristic property
The second moment of a random variable attains its minimum value when taken around the first moment (i.e., the mean) of the random variable: $\mathrm {argmin} _{m}\,\mathrm {E} \left(\left(X-m\right)^{2}\right)=\mathrm {E} (X)$. Conversely, if a continuous function $\varphi $ satisfies $\mathrm {argmin} _{m}\,\mathrm {E} (\varphi (X-m))=\mathrm {E} (X)$ for all random variables X, then it is necessarily of the form $\varphi (x)=ax^{2}+b$, where a > 0. This also holds in the multidimensional case.[4]
Units of measurement
Unlike the expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is √(35/12) ≈ 1.7, slightly larger than the expected absolute deviation of 1.5.
The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution.
Propagation
Addition and multiplication by a constant
Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged:
$\operatorname {Var} (X+a)=\operatorname {Var} (X).$
If all values are scaled by a constant, the variance is scaled by the square of that constant:
$\operatorname {Var} (aX)=a^{2}\operatorname {Var} (X).$
The variance of a weighted sum or difference of two random variables is given by
$\operatorname {Var} (aX+bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)+2ab\,\operatorname {Cov} (X,Y)$
$\operatorname {Var} (aX-bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)-2ab\,\operatorname {Cov} (X,Y)$
where $\operatorname {Cov} (X,Y)$ is the covariance.
Linear combinations
In general, for the sum of $N$ random variables $\{X_{1},\dots ,X_{N}\}$, the variance becomes:
$\operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i,j=1}^{N}\operatorname {Cov} (X_{i},X_{j})=\sum _{i=1}^{N}\operatorname {Var} (X_{i})+\sum _{i\neq j}\operatorname {Cov} (X_{i},X_{j}),$
see also general Bienaymé's identity.
These results lead to the variance of a linear combination as:
${\begin{aligned}\operatorname {Var} \left(\sum _{i=1}^{N}a_{i}X_{i}\right)&=\sum _{i,j=1}^{N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+\sum _{i\not =j}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j}).\end{aligned}}$
If the random variables $X_{1},\dots ,X_{N}$ are such that
$\operatorname {Cov} (X_{i},X_{j})=0\ ,\ \forall \ (i\neq j),$
then they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables $X_{1},\dots ,X_{N}$ are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically:
$\operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i=1}^{N}\operatorname {Var} (X_{i}).$
Since independent random variables are always uncorrelated (see Covariance § Uncorrelatedness and independence), the equation above holds in particular when the random variables $X_{1},\dots ,X_{n}$ are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances.
Matrix notation for the variance of a linear combination
Define $X$ as a column vector of $n$ random variables $X_{1},\ldots ,X_{n}$, and $c$ as a column vector of $n$ scalars $c_{1},\ldots ,c_{n}$. Therefore, $c^{\mathsf {T}}X$ is a linear combination of these random variables, where $c^{\mathsf {T}}$ denotes the transpose of $c$. Also let $\Sigma $ be the covariance matrix of $X$. The variance of $c^{\mathsf {T}}X$ is then given by:[5]
$\operatorname {Var} \left(c^{\mathsf {T}}X\right)=c^{\mathsf {T}}\Sigma c.$
This implies that the variance of the mean can be written as (with $\mathbf {1} $ a column vector of ones)
$\operatorname {Var} \left({\bar {x}}\right)=\operatorname {Var} \left({\frac {1}{n}}\mathbf {1} ^{\mathsf {T}}X\right)={\frac {1}{n^{2}}}\mathbf {1} ^{\mathsf {T}}\Sigma \mathbf {1} .$
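The quadratic-form identity can be verified directly on a small finite population. In the Python sketch below (the data rows and weight vector are arbitrary illustrative values), the population covariance matrix is formed explicitly and $c^{\mathsf {T}}\Sigma c$ is compared with the variance of the scalar $c^{\mathsf {T}}x$ computed directly:

```python
# A tiny "population" of 3-dimensional observations (illustrative values).
rows = [
    [1.0, 2.0, 0.0],
    [3.0, 1.0, 4.0],
    [2.0, 5.0, 1.0],
    [0.0, 2.0, 3.0],
]
N, n = len(rows), len(rows[0])
means = [sum(r[k] for r in rows) / N for k in range(n)]

# Population covariance matrix Sigma (dividing by N).
Sigma = [[sum((r[i] - means[i]) * (r[j] - means[j]) for r in rows) / N
          for j in range(n)] for i in range(n)]

c = [1.0, -2.0, 0.5]  # arbitrary coefficient vector

# Direct variance of the scalar c^T x over the population.
combo = [sum(ci * xi for ci, xi in zip(c, r)) for r in rows]
m = sum(combo) / N
var_direct = sum((v - m) ** 2 for v in combo) / N

# Quadratic form c^T Sigma c.
var_quad = sum(c[i] * Sigma[i][j] * c[j] for i in range(n) for j in range(n))

print(abs(var_direct - var_quad) < 1e-12)  # True
```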
Sum of uncorrelated variables
One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances:
$\operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\operatorname {Var} (X_{i}).$
This statement is called the Bienaymé formula[6] and was discovered in 1853.[7][8] It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is
$\operatorname {Var} \left({\overline {X}}\right)=\operatorname {Var} \left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)={\frac {1}{n^{2}}}n\sigma ^{2}={\frac {\sigma ^{2}}{n}}.$
That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem.
To prove the initial statement, it suffices to show that
$\operatorname {Var} (X+Y)=\operatorname {Var} (X)+\operatorname {Var} (Y).$
The general result then follows by induction. Starting with the definition,
${\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} \left[(X+Y)^{2}\right]-(\operatorname {E} [X+Y])^{2}\\[5pt]&=\operatorname {E} \left[X^{2}+2XY+Y^{2}\right]-(\operatorname {E} [X]+\operatorname {E} [Y])^{2}.\end{aligned}}$
Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows:
${\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} \left[X^{2}\right]+2\operatorname {E} [XY]+\operatorname {E} \left[Y^{2}\right]-\left(\operatorname {E} [X]^{2}+2\operatorname {E} [X]\operatorname {E} [Y]+\operatorname {E} [Y]^{2}\right)\\[5pt]&=\operatorname {E} \left[X^{2}\right]+\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [X]^{2}-\operatorname {E} [Y]^{2}\\[5pt]&=\operatorname {Var} (X)+\operatorname {Var} (Y).\end{aligned}}$
Sum of correlated variables with fixed sample size
In general, the variance of the sum of n variables is the sum of their covariances:
$\operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\sum _{j=1}^{n}\operatorname {Cov} \left(X_{i},X_{j}\right)=\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)+2\sum _{1\leq i<j\leq n}\operatorname {Cov} \left(X_{i},X_{j}\right).$
(Note: The second equality comes from the fact that Cov(Xi,Xi) = Var(Xi).)
Here, $\operatorname {Cov} (\cdot ,\cdot )$ is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory.
So if the variables have equal variance σ2 and the average correlation of distinct variables is ρ, then the variance of their mean is
$\operatorname {Var} \left({\overline {X}}\right)={\frac {\sigma ^{2}}{n}}+{\frac {n-1}{n}}\rho \sigma ^{2}.$
This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to
$\operatorname {Var} \left({\overline {X}}\right)={\frac {1}{n}}+{\frac {n-1}{n}}\rho .$
This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to ρ if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have
$\lim _{n\to \infty }\operatorname {Var} \left({\overline {X}}\right)=\rho .$
Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables.
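The equicorrelation formula can be confirmed by summing all entries of the covariance matrix explicitly, since $\operatorname {Var} ({\overline {X}})$ is $1/n^{2}$ times that sum. A Python sketch with exact rational arithmetic (the value of ρ is an arbitrary illustrative choice):

```python
from fractions import Fraction as F

def var_of_mean(n, rho):
    # Equicorrelation covariance matrix: unit variances on the diagonal,
    # correlation rho everywhere else. Var(mean) = (sum of all entries) / n^2.
    Sigma = [[F(1) if i == j else rho for j in range(n)] for i in range(n)]
    return sum(sum(row) for row in Sigma) / F(n * n)

rho = F(3, 10)
for n in (2, 10, 50):
    # matches 1/n + ((n-1)/n) * rho exactly
    assert var_of_mean(n, rho) == F(1, n) + F(n - 1, n) * rho

# As n grows, Var(mean) approaches rho = 0.3 rather than 0:
print(float(var_of_mean(200, rho)))  # 0.3035
```

This makes the limit concrete: with ρ fixed, adding observations drives the variance of the mean down only as far as ρ, not to zero.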
Sum of uncorrelated variables with random sample size
There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size N is a random variable whose variation adds to the variation of X, such that,
$\operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\operatorname {E} \left[N\right]\operatorname {Var} (X)+\operatorname {Var} (N)(\operatorname {E} \left[X\right])^{2}$[9]
which follows from the law of total variance.
If N has a Poisson distribution, then $\operatorname {E} [N]=\operatorname {Var} (N)$ with estimator n = N. So, the estimator of $\operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)$ becomes $n{S_{x}}^{2}+n{\bar {X}}^{2}$, giving $\operatorname {SE} ({\bar {X}})={\sqrt {\frac {{S_{x}}^{2}+{\bar {X}}^{2}}{n}}}$ (see standard error of the sample mean).
Weighted sum of variables
The scaling property and the Bienaymé formula, along with the property of the covariance Cov(aX, bY) = ab Cov(X, Y) jointly imply that
$\operatorname {Var} (aX\pm bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)\pm 2ab\,\operatorname {Cov} (X,Y).$
This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionately large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y.
The expression above can be extended to a weighted sum of multiple variables:
$\operatorname {Var} \left(\sum _{i=1}^{n}a_{i}X_{i}\right)=\sum _{i=1}^{n}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq n}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j}).$
Product of independent variables
If two variables X and Y are independent, the variance of their product is given by[10]
$\operatorname {Var} (XY)=[\operatorname {E} (X)]^{2}\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\operatorname {Var} (X)+\operatorname {Var} (X)\operatorname {Var} (Y).$
Equivalently, using the basic properties of expectation, it is given by
$\operatorname {Var} (XY)=\operatorname {E} \left(X^{2}\right)\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (X)]^{2}[\operatorname {E} (Y)]^{2}.$
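The product formula can be verified exactly for small discrete distributions. In the Python sketch below (the two distributions are arbitrary illustrative choices), the joint distribution of XY is built under independence and its variance is compared with the right-hand side of the identity:

```python
from fractions import Fraction as F
from itertools import product

# Two small independent discrete distributions: lists of (value, probability).
X = [(F(1), F(1, 4)), (F(3), F(1, 2)), (F(5), F(1, 4))]
Y = [(F(2), F(1, 3)), (F(6), F(2, 3))]

def E(d, f=lambda v: v):
    # expectation of f(value) under distribution d
    return sum(p * f(v) for v, p in d)

def Var(d):
    return E(d, lambda v: v * v) - E(d) ** 2

# Joint distribution of the product XY, using independence.
XY = [(x * y, px * py) for (x, px), (y, py) in product(X, Y)]

lhs = Var(XY)
rhs = E(X) ** 2 * Var(Y) + E(Y) ** 2 * Var(X) + Var(X) * Var(Y)
assert lhs == rhs
print(lhs)  # 248/3
```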
Product of statistically dependent variables
In general, if two variables are statistically dependent, then the variance of their product is given by:
${\begin{aligned}\operatorname {Var} (XY)={}&\operatorname {E} \left[X^{2}Y^{2}\right]-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\operatorname {E} (X^{2})\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\left(\operatorname {Var} (X)+[\operatorname {E} (X)]^{2}\right)\left(\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\right)\\[5pt]&-[\operatorname {Cov} (X,Y)+\operatorname {E} (X)\operatorname {E} (Y)]^{2}\end{aligned}}$
Arbitrary functions
The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by
$\operatorname {Var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {Var} \left[X\right]$
provided that f is twice differentiable and that the mean and variance of X are finite.
Population variance and sample variance
See also: Unbiased estimation of standard deviation
Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example that sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest.
The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. However, using values other than n improves the estimator in various ways. Four common values for the denominator are n, n − 1, n + 1, and n − 1.5: n is the simplest (population variance of the sample), n − 1 eliminates bias, n + 1 minimizes mean squared error for the normal distribution, and n − 1.5 mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution.
Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting by this factor (dividing by n − 1 instead of n) is called Bessel's correction. The resulting estimator is unbiased, and is called the (corrected) sample variance or unbiased sample variance. For example, when n = 1 the variance of a single observation about the sample mean (itself) is obviously zero regardless of the population variance. If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean.
Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance), and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1), and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variance.
Population variance
In general, the population variance of a finite population of size N with values xi is given by
${\begin{aligned}\sigma ^{2}&={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}-\mu \right)^{2}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}^{2}-2\mu x_{i}+\mu ^{2}\right)\\[5pt]&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-2\mu \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)+\mu ^{2}\\[5pt]&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\mu ^{2}\end{aligned}}$
where the population mean is
$\mu ={\frac {1}{N}}\sum _{i=1}^{N}x_{i}.$
The population variance can also be computed using
$\sigma ^{2}={\frac {1}{N^{2}}}\sum _{i<j}\left(x_{i}-x_{j}\right)^{2}={\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}.$
This is true because
${\begin{aligned}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}\\[5pt]={}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}^{2}-2x_{i}x_{j}+x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2N}}\sum _{j=1}^{N}\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}\right)+{\frac {1}{2N}}\sum _{i=1}^{N}\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)-\mu ^{2}+{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)\\[5pt]={}&\sigma ^{2}\end{aligned}}$
The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations.
Sample variance
See also: Sample standard deviation
Biased sample variance
In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population.[11] This is generally referred to as sample variance or empirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution.
We take a sample with replacement of n values Y1, ..., Yn from the population, where n < N, and estimate the variance on the basis of this sample.[12] Directly taking the variance of the sample data gives the average of the squared deviations:
${\tilde {S}}_{Y}^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}=\left({\frac {1}{n}}\sum _{i=1}^{n}Y_{i}^{2}\right)-{\overline {Y}}^{2}={\frac {1}{n^{2}}}\sum _{i,j\,:\,i<j}\left(Y_{i}-Y_{j}\right)^{2}.$
Here, ${\overline {Y}}$ denotes the sample mean:
${\overline {Y}}={\frac {1}{n}}\sum _{i=1}^{n}Y_{i}.$
Since the Yi are selected randomly, both ${\overline {Y}}$ and ${\tilde {S}}_{Y}^{2}$ are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples {Yi} of size n from the population. For ${\tilde {S}}_{Y}^{2}$ this gives:
${\begin{aligned}\operatorname {E} [{\tilde {S}}_{Y}^{2}]&=\operatorname {E} \left[{\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\frac {1}{n}}\sum _{j=1}^{n}Y_{j}\right)^{2}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\operatorname {E} \left[Y_{i}^{2}-{\frac {2}{n}}Y_{i}\sum _{j=1}^{n}Y_{j}+{\frac {1}{n^{2}}}\sum _{j=1}^{n}Y_{j}\sum _{k=1}^{n}Y_{k}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left({\frac {n-2}{n}}\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left[{\frac {n-2}{n}}\left(\sigma ^{2}+\mu ^{2}\right)-{\frac {2}{n}}(n-1)\mu ^{2}+{\frac {1}{n^{2}}}n(n-1)\mu ^{2}+{\frac {1}{n}}\left(\sigma ^{2}+\mu ^{2}\right)\right]\\[5pt]&={\frac {n-1}{n}}\sigma ^{2}.\end{aligned}}$
Hence ${\tilde {S}}_{Y}^{2}$ gives an estimate of the population variance that is biased by a factor of ${\frac {n-1}{n}}$. For this reason, ${\tilde {S}}_{Y}^{2}$ is referred to as the biased sample variance.
Unbiased sample variance
Correcting for this bias yields the unbiased sample variance, denoted $S^{2}$:
$S^{2}={\frac {n}{n-1}}{\tilde {S}}_{Y}^{2}={\frac {n}{n-1}}\left[{\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}\right]={\frac {1}{n-1}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}$
Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution.
The use of the divisor n − 1 is called Bessel's correction, and it is also used in the sample covariance and the sample standard deviation (the square root of the variance). Because the square root is a concave function, it introduces a negative bias (by Jensen's inequality) whose size depends on the distribution, so the corrected sample standard deviation (even with Bessel's correction) remains biased. Unbiased estimation of the standard deviation is a technically involved problem, though for the normal distribution the divisor n − 1.5 yields an almost unbiased estimator.
The unbiased sample variance is a U-statistic for the function ƒ(y1, y2) = (y1 − y2)2/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.
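As a quick numerical illustration of the two estimators (a sketch using NumPy, whose ddof parameter selects the divisor n − ddof; the sample here is arbitrary simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=10)
n = y.size

biased = np.mean((y - y.mean()) ** 2)                 # divides by n
unbiased = np.sum((y - y.mean()) ** 2) / (n - 1)      # divides by n - 1

assert np.isclose(biased, np.var(y))                  # ddof=0 is NumPy's default
assert np.isclose(unbiased, np.var(y, ddof=1))        # Bessel's correction
assert np.isclose(unbiased, biased * n / (n - 1))
```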
Distribution of the sample variance
Distribution and cumulative distribution of S2/σ2, for various values of ν = n − 1, when the yi are independent and normally distributed.
Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that Yi are independent observations from a normal distribution, Cochran's theorem shows that S2 follows a scaled chi-squared distribution (see also: asymptotic properties and an elementary proof):[13]
$(n-1){\frac {S^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}.$
As a direct consequence, it follows that
$\operatorname {E} \left(S^{2}\right)=\operatorname {E} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)=\sigma ^{2},$
and[14]
$\operatorname {Var} \left[S^{2}\right]=\operatorname {Var} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)={\frac {\sigma ^{4}}{(n-1)^{2}}}\operatorname {Var} \left(\chi _{n-1}^{2}\right)={\frac {2\sigma ^{4}}{n-1}}.$
If the Yi are independent and identically distributed, but not necessarily normally distributed, then[15]
$\operatorname {E} \left[S^{2}\right]=\sigma ^{2},\quad \operatorname {Var} \left[S^{2}\right]={\frac {\sigma ^{4}}{n}}\left(\kappa -1+{\frac {2}{n-1}}\right)={\frac {1}{n}}\left(\mu _{4}-{\frac {n-3}{n-1}}\sigma ^{4}\right),$
where κ is the kurtosis of the distribution and μ4 is the fourth central moment.
If the conditions of the law of large numbers hold for the squared observations, S2 is a consistent estimator of σ2. Indeed, the variance of the estimator tends to zero asymptotically. An asymptotically equivalent formula is given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).[16][17][18]
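These moments can be checked by Monte Carlo simulation (a sketch; the sample size n = 10, σ = 2, and the number of replications are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma, reps = 10, 2.0, 200_000

samples = rng.normal(0.0, sigma, size=(reps, n))
s2 = samples.var(axis=1, ddof=1)          # unbiased sample variance of each row

# For normal data: E[S^2] = sigma^2 and Var[S^2] = 2 sigma^4 / (n - 1)
assert abs(s2.mean() - sigma**2) < 0.05
assert abs(s2.var() - 2 * sigma**4 / (n - 1)) < 0.2
```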
Samuelson's inequality
Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[19] Values must lie within the limits ${\bar {y}}\pm \sigma _{Y}(n-1)^{1/2}.$
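For instance, a quick check on an arbitrary sample (using the biased standard deviation, as the inequality requires):

```python
import numpy as np

y = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
n = y.size
mean = y.mean()
sd_biased = y.std()                      # ddof=0: the biased standard deviation

lo = mean - sd_biased * np.sqrt(n - 1)
hi = mean + sd_biased * np.sqrt(n - 1)
assert ((y >= lo) & (y <= hi)).all()     # every observation lies within the bounds
```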
Relations with the harmonic and arithmetic means
It has been shown[20] that for a sample {yi} of positive real numbers,
$\sigma _{y}^{2}\leq 2y_{\max }(A-H),$
where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and $\sigma _{y}^{2}$ is the (biased) variance of the sample.
This bound has been improved, and it is known that variance is bounded by
$\sigma _{y}^{2}\leq {\frac {y_{\max }(A-H)(y_{\max }-A)}{y_{\max }-H}},$
$\sigma _{y}^{2}\geq {\frac {y_{\min }(A-H)(A-y_{\min })}{H-y_{\min }}},$
where ymin is the minimum of the sample.[21]
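These bounds are easy to verify numerically for a sample of positive reals (a sketch with arbitrary data):

```python
import numpy as np

y = np.array([1.0, 2.0, 3.0, 10.0])
A = y.mean()                             # arithmetic mean
H = y.size / np.sum(1.0 / y)             # harmonic mean
var = y.var()                            # biased sample variance (ddof=0)
ymax, ymin = y.max(), y.min()

assert var <= 2 * ymax * (A - H)                          # Mercer's bound
assert var <= ymax * (A - H) * (ymax - A) / (ymax - H)    # improved upper bound
assert var >= ymin * (A - H) * (A - ymin) / (H - ymin)    # lower bound
```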
Tests of equality of variances
The F-test of equality of variances and the chi-squared tests are appropriate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult.
Several nonparametric tests have been proposed, including the Barton–David–Ansari–Freund–Siegel–Tukey, Capon, Mood, Klotz, and Sukhatme tests. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero. The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances; they allow the medians to be unknown but require them to be equal.
The Lehmann test is a parametric test of two variances, of which several variants are known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test.
Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances.
Moment of inertia
The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of $\Sigma $ is given by
$I=n\left(\mathbf {1} _{3\times 3}\operatorname {tr} (\Sigma )-\Sigma \right).$
This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like
$\Sigma ={\begin{bmatrix}10&0&0\\0&0.1&0\\0&0&0.1\end{bmatrix}}.$
That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis so the moment-of-inertia tensor is
$I=n{\begin{bmatrix}0.2&0&0\\0&10.1&0\\0&0&10.1\end{bmatrix}}.$
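The numerical example above can be reproduced directly from the formula (a sketch; n is the number of points in the cloud, set to 1 here so the entries match the tensor shown):

```python
import numpy as np

Sigma = np.diag([10.0, 0.1, 0.1])   # covariance of a cloud stretched along x
n = 1                               # one point's worth; scale by the cloud size

I = n * (np.trace(Sigma) * np.eye(3) - Sigma)
assert np.allclose(np.diag(I), [0.2, 10.1, 10.1])  # low moment about the x axis
```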
Semivariance
The semivariance is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation:
${\text{Semivariance}}={1 \over {n}}\sum _{i:x_{i}<\mu }(x_{i}-\mu )^{2}$
In various fields of application it serves as a domain-specific measure; in finance, for example, it is used as a measure of downside risk. For skewed distributions, the semivariance can provide information that the variance does not.[22]
For inequalities associated with the semivariance, see Chebyshev's inequality § Semivariances.
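A minimal sketch of the computation (note that the divisor is the full sample size n, not the number of below-mean observations; the data are arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 20.0])
mu = x.mean()                                        # 6.0

below = x[x < mu]                                    # observations under the mean
semivariance = np.sum((below - mu) ** 2) / x.size    # divide by the full n
assert np.isclose(semivariance, 10.8)
```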
Generalizations
For complex variables
If $x$ is a scalar complex-valued random variable, with values in $\mathbb {C} ,$ then its variance is $\operatorname {E} \left[(x-\mu )(x-\mu )^{*}\right],$ where $x^{*}$ is the complex conjugate of $x.$ This variance is a real scalar.
As a matrix
If $X$ is a vector-valued random variable, with values in $\mathbb {R} ^{n},$ and thought of as a column vector, then a natural generalization of variance is $\operatorname {E} \left[(X-\mu )(X-\mu )^{\operatorname {T} }\right],$ where $\mu =\operatorname {E} (X)$ and $X^{\operatorname {T} }$ is the transpose of $X,$ and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix).
If $X$ is a vector- and complex-valued random variable, with values in $\mathbb {C} ^{n},$ then the covariance matrix is $\operatorname {E} \left[(X-\mu )(X-\mu )^{\dagger }\right],$ where $X^{\dagger }$ is the conjugate transpose of $X.$ This matrix is also positive semi-definite and square.
As a scalar
Another generalization of variance for vector-valued random variables $X$, which results in a scalar value rather than in a matrix, is the generalized variance $\det(C)$, the determinant of the covariance matrix. The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.[23]
A different generalization is obtained by considering the variance of the Euclidean distance between the random variable and its mean. This results in $\operatorname {E} \left[(X-\mu )^{\operatorname {T} }(X-\mu )\right]=\operatorname {tr} (C),$ which is the trace of the covariance matrix.
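Both scalar generalizations can be computed from an estimated covariance matrix (a sketch with simulated data):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(5000, 3))          # 5000 draws of a 3-dimensional vector
C = np.cov(X, rowvar=False)             # variance-covariance matrix

gen_var = np.linalg.det(C)              # generalized variance det(C)
total_var = np.trace(C)                 # E[(X - mu)^T (X - mu)] = tr(C)

# tr(C) equals the sum of the per-coordinate variances:
assert np.isclose(total_var, X.var(axis=0, ddof=1).sum())
```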
See also
• Bhatia–Davis inequality
• Coefficient of variation
• Homoscedasticity
• Least-squares spectral analysis for computing a frequency spectrum with spectral magnitudes in % of variance or in dB
• Modern portfolio theory
• Popoviciu's inequality on variances
• Measures for statistical dispersion
• Variance-stabilizing transformation
Types of variance
• Correlation
• Distance variance
• Explained variance
• Pooled variance
• Pseudo-variance
References
1. Wasserman, Larry (2005). All of Statistics: a concise course in statistical inference. Springer texts in statistics. p. 51. ISBN 9781441923226.
2. Ronald Fisher (1918) The correlation between relatives on the supposition of Mendelian Inheritance
3. Yuli Zhang, Huaiyu Wu, Lei Cheng (June 2012). Some new deformation formulas about variance and covariance. Proceedings of 4th International Conference on Modelling, Identification and Control(ICMIC2012). pp. 987–992.{{cite conference}}: CS1 maint: uses authors parameter (link)
4. Kagan, A.; Shepp, L. A. (1998). "Why the variance?". Statistics & Probability Letters. 38 (4): 329–333. doi:10.1016/S0167-7152(98)00041-8.
5. Johnson, Richard; Wichern, Dean (2001). Applied Multivariate Statistical Analysis. Prentice Hall. p. 76. ISBN 0-13-187715-1.
6. Loève, M. (1977) "Probability Theory", Graduate Texts in Mathematics, Volume 45, 4th edition, Springer-Verlag, p. 12.
7. Bienaymé, I.-J. (1853) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", Comptes rendus de l'Académie des sciences Paris, 37, p. 309–317; digital copy available
8. Bienaymé, I.-J. (1867) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", Journal de Mathématiques Pures et Appliquées, Série 2, Tome 12, p. 158–167; digital copy available
9. Cornell, J R, and Benjamin, C A, Probability, Statistics, and Decisions for Civil Engineers, McGraw-Hill, NY, 1970, pp.178-9.
10. Goodman, Leo A. (December 1960). "On the Exact Variance of Products". Journal of the American Statistical Association. 55 (292): 708–713. doi:10.2307/2281592. JSTOR 2281592.
11. Navidi, William (2006) Statistics for Engineers and Scientists, McGraw-Hill, p. 14.
12. Montgomery, D. C. and Runger, G. C. (1994) Applied statistics and probability for engineers, page 201. John Wiley & Sons New York
13. Knight K. (2000), Mathematical Statistics, Chapman and Hall, New York. (proposition 2.11)
14. Casella and Berger (2002) Statistical Inference, Example 7.3.3, p. 331
15. Mood, A. M., Graybill, F. A., and Boes, D.C. (1974) Introduction to the Theory of Statistics, 3rd Edition, McGraw-Hill, New York, p. 229
16. Kenney, John F.; Keeping, E.S. (1951) Mathematics of Statistics. Part Two. 2nd ed. D. Van Nostrand Company, Inc. Princeton: New Jersey. http://krishikosh.egranth.ac.in/bitstream/1/2025521/1/G2257.pdf
17. Rose, Colin; Smith, Murray D. (2002) Mathematical Statistics with Mathematica. Springer-Verlag, New York. http://www.mathstatica.com/book/Mathematical_Statistics_with_Mathematica.pdf
18. Weisstein, Eric W. (n.d.) Sample Variance Distribution. MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/SampleVarianceDistribution.html
19. Samuelson, Paul (1968). "How Deviant Can You Be?". Journal of the American Statistical Association. 63 (324): 1522–1525. doi:10.1080/01621459.1968.10480944. JSTOR 2285901.
20. Mercer, A. McD. (2000). "Bounds for A–G, A–H, G–H, and a family of inequalities of Ky Fan's type, using a general method". J. Math. Anal. Appl. 243 (1): 163–173. doi:10.1006/jmaa.1999.6688.
21. Sharma, R. (2008). "Some more inequalities for arithmetic mean, harmonic mean and variance". Journal of Mathematical Inequalities. 2 (1): 109–114. CiteSeerX 10.1.1.551.9397. doi:10.7153/jmi-02-11.
22. Fama, Eugene F.; French, Kenneth R. (2010-04-21). "Q&A: Semi-Variance: A Better Risk Measure?". Fama/French Forum.
23. Kocherlakota, S.; Kocherlakota, K. (2004). "Generalized Variance". Encyclopedia of Statistical Sciences. Wiley Online Library. doi:10.1002/0471667196.ess0869. ISBN 0471667196.
Varadhan's lemma
In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small in terms of a rate function for the variables.
Statement of the lemma
Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let ϕ : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition
$\lim _{M\to \infty }\limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}\,\mathbf {1} {\big (}\phi (Z_{\varepsilon })\geq M{\big )}{\big ]}{\big )}=-\infty ,$
where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition
$\limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\gamma \phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}{\big )}<\infty .$
Then
$\lim _{\varepsilon \to 0}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}=\sup _{x\in X}{\big (}\phi (x)-I(x){\big )}.$
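For example (an illustrative check, not part of the original statement), let Zε be normally distributed with mean 0 and variance ε, so that (με)ε>0 satisfies the large deviation principle with good rate function I(x) = x2/2, and take ϕ(x) = x. Since Zε/ε is normal with mean 0 and variance 1/ε, the moment condition holds for every γ > 1, and
$\lim _{\varepsilon \to 0}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}Z_{\varepsilon }/\varepsilon {\big )}{\big ]}=\lim _{\varepsilon \to 0}\varepsilon \cdot {\frac {1}{2\varepsilon }}={\frac {1}{2}}=\sup _{x\in \mathbf {R} }{\big (}x-x^{2}/2{\big )},$
with the supremum attained at x = 1.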
See also
• Laplace principle (large deviations theory)
References
• Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See theorem 4.3.1)
Michela Varagnolo
Michela Varagnolo is a mathematician whose research topics have included representation theory, Hecke algebra, Schur–Weyl duality, Yangians, and quantum affine algebras. She earned a doctorate in 1993 at the University of Pisa, under the supervision of Corrado de Concini,[1] and is maître de conférences in the department of mathematics at CY Cergy Paris University,[2] affiliated there with the research laboratory on analysis, geometry, and modeling.[3]
Varagnolo was an invited speaker at the 2014 International Congress of Mathematicians.[4] In 2019, with Éric Vasserot, she won the Prix de l'État of the French Academy of Sciences for their work on the geometric representation theory of Hecke algebras and quantum groups.[2]
References
1. Michela Varagnolo at the Mathematics Genealogy Project
2. Lauréats 2019 du prix fondé par l'État : Michela Varagnolo et Éric Vasserot (in French), French Academy of Sciences, 15 October 2019, retrieved 2021-11-11
3. "Membres laboratoire / département", AGM - Analyse, géométrie et modélisation (in French), CY Cergy Paris University, retrieved 2021-11-11
4. "Invited section lectures", program, International Congress of Mathematicians, 2014, retrieved 2021-11-11
Varga K. Kalantarov
Varga K. Kalantarov (born 1950) is an Azerbaijani mathematician and professor of mathematics. He is a member of the Koç University Mathematics Department in İstanbul, Turkey.[1]
Education
Varga Kalantarov was born in 1950. He graduated from Baku State University in 1971. He received his PhD in Differential Equations and Mathematical Physics at the Baku Institute of Mathematics and Mechanics, Azerbaijan National Academy of Sciences in 1974. He received his Doctor of Sciences degree in 1988 under the supervision of Olga Ladyzhenskaya at the Steklov Institute of Mathematics, Saint Petersburg, Russia.[2]
Academic career
After receiving his PhD he held a research position at the Baku Institute of Mathematics and Mechanics; between 1975 and 1981 he was also a visiting researcher at the Steklov Institute of Mathematics. From 1989 to 1993 he headed the Department of Partial Differential Equations at the Baku Institute of Mathematics and Mechanics. After the perestroika era he moved to Turkey with his family in 1993. Between 1993 and 2001 he was a full professor in the Mathematics Department of Hacettepe University, Ankara.[3] In 2001 he became a full professor at Koç University.
He is an active researcher, having published more than 60 papers with more than 700 citations, and has supervised 16 PhD students.
Research areas
His research interests include PDEs and dynamical systems.
Representative scientific publications
• Kalantarov, V. K.; Ladyženskaja, O. A. Formation of collapses in quasilinear equations of parabolic and hyperbolic types. (Russian) Boundary value problems of mathematical physics and related questions in the theory of functions, 10. Zap. Naučn. Sem. LOMI 69 (1977), 77-102, 274.
• Kalantarov, Varga K.; Titi, Edriss S. Global attractors and determining modes for the 3D Navier-Stokes-Voight equations. Chin. Ann. Math. Ser. B 30 (2009), no. 6, 697–714.
• Kalantarov, Varga; Zelik, Sergey Finite-dimensional attractors for the quasi-linear strongly-damped wave equation. J. Differential Equations 247 (2009), no. 4, 1120–1155.
Memberships
Varga Kalantarov is a member of the Azerbaijan Mathematical Society, Turkish Mathematical Society and the American Mathematical Society.
References
1. His web page in Koç University Mathematics Department
2. Record in the Genealogy Project
3. Hacettepe University Mathematics Department
External links
• Varga Kalantarov's professional home page
• Varga K. Kalantarov publications indexed by Google Scholar
Constant-Q transform
In mathematics and signal processing, the constant-Q transform and variable-Q transform, known as the CQT and VQT, transform a data series to the frequency domain. They are related to the Fourier transform[1] and very closely related to the complex Morlet wavelet transform,[2] and their design is well suited to musical representation.
The transform can be thought of as a series of filters fk, logarithmically spaced in frequency, with the k-th filter having a spectral width δfk equal to a multiple of the previous filter's width:
$\delta f_{k}=2^{1/n}\cdot \delta f_{k-1}=\left(2^{1/n}\right)^{k}\cdot \delta f_{\text{min}},$
where δfk is the bandwidth of the k-th filter, δfmin is the bandwidth of the lowest filter, and n is the number of filters per octave.
Calculation
The short-time Fourier transform of x[n] for a frame shifted to sample m is calculated as follows:
$X[k,m]=\sum _{n=0}^{N-1}W[n-m]x[n]e^{-j2\pi kn/N}.$
Given a data series at sampling frequency fs = 1/T, T being the sampling period of our data, for each frequency bin we can define the following:
• Filter width, δfk.
• Q, the "quality factor":
$Q={\frac {f_{k}}{\delta f_{k}}}.$
This is shown below to be the integer number of cycles processed at a center frequency fk. As such, it largely determines the window length of each bin, and hence the time resolution and computational cost of the transform.
• Window length for the k-th bin:
$N[k]={\frac {f_{\text{s}}}{\delta f_{k}}}={\frac {f_{\text{s}}}{f_{k}}}Q.$
Since fs/fk is the number of samples processed per cycle at frequency fk, Q is the number of integer cycles processed at this central frequency.
The equivalent transform kernel can be found by using the following substitutions:
• The window length of each bin is now a function of the bin number:
$N=N[k]=Q{\frac {f_{\text{s}}}{f_{k}}}.$
• The relative power of each bin will decrease at higher frequencies, as these sum over fewer terms. To compensate for this, we normalize by N[k].
• Any windowing function will be a function of window length, and likewise a function of window number. For example, the equivalent Hamming window would be
$W[k,n]=\alpha -(1-\alpha )\cos {\frac {2\pi n}{N[k]-1}},\quad \alpha =25/46,\quad 0\leqslant n\leqslant N[k]-1.$
• Our digital frequency, ${\frac {2\pi k}{N}}$, becomes ${\frac {2\pi Q}{N[k]}}$.
After these modifications, we are left with
$X[k]={\frac {1}{N[k]}}\sum _{n=0}^{N[k]-1}W[k,n]x[n]e^{\frac {-j2\pi Qn}{N[k]}}.$
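A direct (slow) evaluation of a single bin X[k] can be sketched as follows; the sampling rate, test tone, Q and Hamming parameters here are arbitrary illustrative choices, not values prescribed by the source:

```python
import numpy as np

def cqt_bin(x, fs, fk, Q=17, alpha=25 / 46):
    """Evaluate one constant-Q bin at center frequency fk (naive, no FFT kernel)."""
    Nk = int(np.ceil(Q * fs / fk))                    # window length N[k] = Q fs / fk
    n = np.arange(Nk)
    window = alpha - (1 - alpha) * np.cos(2 * np.pi * n / (Nk - 1))   # Hamming
    kernel = window * np.exp(-2j * np.pi * Q * n / Nk)                # Q whole cycles
    return np.sum(x[:Nk] * kernel) / Nk               # normalize by N[k]

fs = 8000.0
t = np.arange(int(fs)) / fs
tone = np.sin(2 * np.pi * 440.0 * t)                  # one second of a 440 Hz tone

on_bin = abs(cqt_bin(tone, fs, 440.0))
off_bin = abs(cqt_bin(tone, fs, 620.0))
assert on_bin > 10 * off_bin                          # energy concentrates near 440 Hz
```

A full transform would loop this over logarithmically spaced center frequencies and frame offsets m; the FFT-kernel approach discussed under "Fast calculation" computes the same quantities far faster.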
Variable-Q bandwidth calculation
The variable-Q transform is the same as the constant-Q transform except that the filter Q is variable, hence the name. The variable-Q transform is useful where time resolution at low frequencies is important. There are several ways to calculate the bandwidths of the VQT; one of them uses the equivalent rectangular bandwidth as the bandwidth of each VQT bin.[3]
The simplest way to implement a variable-Q transform is to add a bandwidth offset, called γ, to the constant-Q relation δfk = fk/Q:
$\delta f_{k}={\frac {f_{k}+\gamma }{Q}}.$
This formula can be given an extra parameter that adjusts the sharpness of the transition between constant-Q and constant-bandwidth behavior:
$\delta f_{k}={\frac {\sqrt[{\alpha }]{f_{k}^{\alpha }+\gamma ^{\alpha }}}{Q}},$
where α controls the transition sharpness; α = 2 corresponds to a hyperbolic-sine frequency scale in terms of frequency resolution.
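Reading the bandwidth as δfk = (fkα + γα)1/α / Q (the dimensionally consistent form, since Q = fk/δfk in the constant-Q case), the interpolation between the two regimes is easy to check; the values of Q, γ and α below are arbitrary illustrative choices:

```python
import numpy as np

def vqt_bandwidth(f, Q=17.0, gamma=20.0, alpha=2.0):
    # delta_f = (f**alpha + gamma**alpha)**(1/alpha) / Q
    return (f**alpha + gamma**alpha) ** (1.0 / alpha) / Q

# Low frequencies: bandwidth flattens toward gamma / Q (constant bandwidth).
assert abs(vqt_bandwidth(1.0) - 20.0 / 17.0) / (20.0 / 17.0) < 0.01
# High frequencies: bandwidth approaches f / Q (constant Q).
assert abs(vqt_bandwidth(10000.0) - 10000.0 / 17.0) / (10000.0 / 17.0) < 0.01
```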
Fast calculation
The direct calculation of the constant-Q transform (either using a naive DFT or the slightly faster Goertzel algorithm) is slow compared with the fast Fourier transform (FFT). However, the FFT can itself be employed, in conjunction with the use of a kernel, to perform the equivalent calculation much faster.[4] An approximate inverse to such an implementation was proposed in 2006; it works by going back to the DFT and is only suitable for pitched instruments.[5]
A development on this method with improved invertibility involves performing CQT (via FFT) octave-by-octave, using lowpass filtered and downsampled results for consecutively lower pitches.[6] Implementations of this method include the MATLAB implementation and LibROSA's Python implementation.[7] LibROSA combines the subsampled method with the direct FFT method (which it dubs "pseudo-CQT") by having the latter process higher frequencies as a whole.[7]
The sliding DFT can be used for faster calculation of constant-Q transform, since the sliding DFT does not have to be linear-frequency spacing and same window size per bin.[8]
Alternatively, the constant-Q transform can be approximated by using multiple FFTs with different window sizes and/or sampling rates over different frequency ranges and stitching the results together. This is called a multiresolution STFT; however, its window sizes differ per octave rather than per bin.
Comparison with the Fourier transform
In general, the transform is well suited to musical data, and this can be seen in some of its advantages compared to the fast Fourier transform. As the output of the transform is effectively amplitude/phase against log frequency, fewer frequency bins are required to cover a given range effectively, and this proves useful where frequencies span several octaves. As the range of human hearing covers approximately ten octaves from 20 Hz to around 20 kHz, this reduction in output data is significant.
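The reduction can be made concrete with a back-of-the-envelope count (12 bins per octave is an assumed, typical musical choice, not a figure from the source):

```python
import numpy as np

f_min, f_max, bins_per_octave = 20.0, 20000.0, 12

n_octaves = np.log2(f_max / f_min)                      # ~9.97 octaves
n_cqt_bins = int(np.ceil(n_octaves * bins_per_octave))  # 120 log-spaced bins

# A linear-frequency DFT that resolves the lowest semitone (~1.2 Hz at 20 Hz)
# over the whole range needs vastly more bins:
df = f_min * (2 ** (1 / 12) - 1)
n_dft_bins = int(np.ceil(f_max / df))

assert n_cqt_bins == 120
assert n_dft_bins > 100 * n_cqt_bins
```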
The transform exhibits a reduction in frequency resolution at higher frequency bins, which is desirable for auditory applications. It mirrors the human auditory system, in which spectral resolution is better at lower frequencies while temporal resolution improves at higher frequencies. At the bottom of the piano scale (about 30 Hz), a difference of one semitone is approximately 1.5 Hz, whereas at the top of the musical scale (about 5 kHz) it is approximately 200 Hz.[9] For musical data, the exponential frequency resolution of the constant-Q transform is therefore ideal.
In addition, the harmonics of musical notes form a pattern characteristic of the timbre of the instrument in this transform. Assuming the same relative strengths of each harmonic, as the fundamental frequency changes, the relative position of these harmonics remains constant. This can make identification of instruments much easier. The constant Q transform can also be used for automatic recognition of musical keys based on accumulated chroma content.[10]
Relative to the Fourier transform, implementation of this transform is trickier, owing to the varying number of samples used in the calculation of each frequency bin, which also affects the length of any windowing function implemented.[11]
Also, because the frequency scale is logarithmic, there is no true zero-frequency (DC) term, which may be a drawback for applications that need one; for applications that do not, such as audio, it is not.
References
1. Judith C. Brown, Calculation of a constant Q spectral transform, J. Acoust. Soc. Am., 89(1):425–434, 1991.
2. Continuous Wavelet Transform "When the mother wavelet can be interpreted as a windowed sinusoid (such as the Morlet wavelet), the wavelet transform can be interpreted as a constant-Q Fourier transform. Before the theory of wavelets, constant-Q Fourier transforms (such as obtained from a classic third-octave filter bank) were not easy to invert, because the basis signals were not orthogonal."
3. Cwitkowitz, Frank C.Jr (2019). "End-to-End Music Transcription Using Fine-Tuned Variable-Q Filterbanks" (PDF). Rochester Institute of Technology: 32–34. Retrieved 2022-08-21.
4. Judith C. Brown and Miller S. Puckette, An efficient algorithm for the calculation of a constant Q transform, J. Acoust. Soc. Am., 92(5):2698–2701, 1992.
5. FitzGerald, Derry; Cychowski, Marcin T.; Cranitch, Matt (1 May 2006). "Towards an Inverse Constant Q Transform". Audio Engineering Society Convention. Paris: Audio Engineering Society. 120.
6. Schörkhuber, Christian; Klapuri, Anssi (2010). "Constant-Q Transform Toolbox for Music Processing". 7th Sound and Music Computing Conference. Barcelona. Retrieved 12 December 2018.
7. McFee, Brian; Battenberg, Eric; Lostanlen, Vincent; Thomé, Carl (12 December 2018). "librosa: core/constantq.py at 8d26423". GitHub. librosa. Retrieved 12 December 2018.
8. Bradford, R, ffitch, J & Dobson, R 2008, Sliding with a constant-Q, in 11th International Conference on Digital Audio Effects (DAFx-08) Proceedings September 1-4th, 2008 Espoo, Finland . DAFx, Espoo, Finland, pp. 363-369, Proc. of the Int. Conf. on Digital Audio Effects (DAFx-08), 1/09/08.
9. http://newt.phys.unsw.edu.au/jw/graphics/notes.GIF
10. Hendrik Purwins, Benjamin Blankertz and Klaus Obermayer, A New Method for Tracking Modulations in Tonal Music in Audio Data Format, International Joint Conference on Neural Networks (IJCNN'00), 6:270–275, 2000.
11. Benjamin Blankertz, The Constant Q Transform, 1999.
Free variables and bound variables
In mathematics, and in other disciplines involving formal languages, including mathematical logic and computer science, a variable may be said to be either free or bound. The terms are opposites. A free variable is a notation (symbol) that specifies places in an expression where substitution may take place and is not a parameter of this or any container expression. Some older books use the terms real variable and apparent variable for free variable and bound variable, respectively. The idea is related to a placeholder (a symbol that will later be replaced by some value), or a wildcard character that stands for an unspecified symbol.
For free variables in systems of linear equations, see Free variables (system of linear equations).
"Free variable" redirects here. Not to be confused with Free parameter or Dummy variable.
In computer programming, the term free variable refers to variables used in a function that are neither local variables nor parameters of that function. The term non-local variable is often a synonym in this context.
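A minimal Python sketch of this usage (the function names are illustrative): inside `step`, `count` is neither a local variable nor a parameter, so it is a free (non-local) variable bound in the enclosing scope:

```python
def make_counter():
    count = 0            # local to make_counter
    def step():
        nonlocal count   # count is free in step: neither local nor a parameter
        count += 1
        return count
    return step          # step carries its free variable in a closure

counter = make_counter()
counter()  # 1
counter()  # 2
```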
An instance of a variable symbol is bound, in contrast, if the value of that variable symbol has been bound to a specific value or range of values in the domain of discourse or universe. This may be achieved through the use of logical quantifiers, variable-binding operators, or an explicit statement of allowed values for the variable (such as "...where $n$ is a positive integer"). A variable symbol overall is bound if at least one occurrence of it is bound.[1]: 142–143  Since the same variable symbol may appear in multiple places in an expression, some occurrences of the variable symbol may be free while others are bound;[1]: 78  hence "free" and "bound" are first defined for occurrences and then generalized over all occurrences of the variable symbol in the expression. However it is done, the variable ceases to be an independent variable on which the value of the expression depends, whether that value is a truth value, the numerical result of a calculation, or, more generally, an element of an image set of a function.
While the domain of discourse in many contexts is understood, when an explicit range of values for the bound variable has not been given, it may be necessary to specify the domain in order to properly evaluate the expression. For example, consider the following expression in which both variables are bound by logical quantifiers:
$\forall y\,\exists x\,\left(x={\sqrt {y}}\right).$
This expression evaluates to false if the domain of $x$ and $y$ is the real numbers, but true if the domain is the complex numbers.
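This dependence on the domain can be illustrated with a small numerical check; the helper names below are hypothetical, and the finite samples merely stand in for the full domains:

```python
import cmath

def has_real_sqrt(y):
    # In the reals, an x with x * x == y exists only for y >= 0
    return y >= 0

def has_complex_sqrt(y):
    # In the complex numbers, cmath.sqrt is defined for every input
    return cmath.sqrt(y) is not None

sample = [-4.0, -1.0, 0.0, 1.0, 4.0]
over_reals = all(has_real_sqrt(y) for y in sample)       # False: y = -4 fails
over_complex = all(has_complex_sqrt(y) for y in sample)  # True
```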
The term "dummy variable" is also sometimes used for a bound variable (more commonly in general mathematics than in computer science), but this should not be confused with the identically named but unrelated concept of dummy variable as used in statistics, most commonly in regression analysis.
Examples
Before stating a precise definition of free variable and bound variable, the following are some examples that perhaps make these two concepts clearer than the definition would:
In the expression
$\sum _{k=1}^{10}f(k,n),$
n is a free variable and k is a bound variable; consequently the value of this expression depends on the value of n, but there is nothing called k on which it could depend.
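Translated into code, the same distinction appears directly: below, the value of `expr` depends on its argument `n`, while `k` exists only inside the summation (here `f` is an arbitrary illustrative function, not one from the text):

```python
def f(k, n):          # an arbitrary two-variable function, for illustration
    return k * n

def expr(n):          # n is free in the sum: the result depends on it
    return sum(f(k, n) for k in range(1, 11))  # k is bound by the summation

expr(1)   # 55  (1 + 2 + ... + 10)
expr(2)   # 110
```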
In the expression
$\int _{0}^{\infty }x^{y-1}e^{-x}\,dx,$
y is a free variable and x is a bound variable; consequently the value of this expression depends on the value of y, but there is nothing called x on which it could depend.
In the expression
$\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}},$
x is a free variable and h is a bound variable; consequently the value of this expression depends on the value of x, but there is nothing called h on which it could depend.
In the expression
$\forall x\ \exists y\ {\Big [}\varphi (x,y,z){\Big ]},$
z is a free variable and x and y are bound variables, associated with logical quantifiers; consequently the logical value of this expression depends on the value of z, but there is nothing called x or y on which it could depend.
Bound variables are used in most mathematical proofs. For example, the following proof shows that all squares of positive even integers are divisible by $4$:
Let $n$ be a positive even integer. Then there is an integer $k$ such that $n=2k$. Since $n^{2}=4k^{2}$, $n^{2}$ is divisible by $4$.
Here not only $k$ but also $n$ is used as a bound variable in the proof.
Variable-binding operators
The following
$\sum _{x\in S}\quad \quad \prod _{x\in S}\quad \quad \int _{0}^{\infty }\cdots \,dx\quad \quad \lim _{x\to 0}\quad \quad \forall x\quad \quad \exists x$
are some common variable-binding operators. Each of them binds the variable x for some set S.
Many of these are operators which act on functions of the bound variable. In more complicated contexts, such notations can become awkward and confusing. It can be useful to switch to notations which make the binding explicit, such as
$\sum _{1,\ldots ,10}\left(k\mapsto f(k,n)\right)$
for sums or
$D\left(x\mapsto x^{2}+2x+1\right)$
for differentiation.
Formal explanation
Variable-binding mechanisms occur in different contexts in mathematics, logic and computer science. In all cases, however, they are purely syntactic properties of expressions and variables in them. For this section we can summarize syntax by identifying an expression with a tree whose leaf nodes are variables, constants, function constants or predicate constants and whose non-leaf nodes are logical operators. This expression can then be determined by doing an inorder traversal of the tree. Variable-binding operators are logical operators that occur in almost every formal language. A binding operator Q takes two arguments: a variable v and an expression P, and when applied to its arguments produces a new expression Q(v, P). The meaning of binding operators is supplied by the semantics of the language and does not concern us here.
Variable binding relates three things: a variable v, a location a for that variable in an expression and a non-leaf node n of the form Q(v, P). Note: we define a location in an expression as a leaf node in the syntax tree. Variable binding occurs when that location is below the node n.
In the lambda calculus, x is a bound variable in the term M = λx. T and a free variable in the term T. We say x is bound in M and free in T. If T contains a subterm λx. U then x is rebound in this term. This nested, inner binding of x is said to "shadow" the outer binding. Occurrences of x in U are free occurrences of the new x.[2]
Variables bound at the top level of a program are technically free variables within the terms to which they are bound but are often treated specially because they can be compiled as fixed addresses. Similarly, an identifier bound to a recursive function is also technically a free variable within its own body but is treated specially.
A closed term is one containing no free variables.
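These notions can be computed mechanically. The sketch below uses a hypothetical tuple encoding of lambda terms — `('var', x)`, `('lam', x, body)`, `('app', f, a)` — and returns the set of free variables of a term, including the shadowing behavior described above:

```python
def free_vars(term):
    """Free variables of a lambda term in the tuple encoding above."""
    tag = term[0]
    if tag == 'var':
        return {term[1]}
    if tag == 'lam':   # λx. body binds x, so remove it from body's free set
        _, x, body = term
        return free_vars(body) - {x}
    if tag == 'app':
        _, fun, arg = term
        return free_vars(fun) | free_vars(arg)
    raise ValueError(f"unknown term tag: {tag}")

# In λx. x y, x is bound and y is free:
free_vars(('lam', 'x', ('app', ('var', 'x'), ('var', 'y'))))  # {'y'}
# λx. λx. x is closed: the inner binder shadows the outer one
free_vars(('lam', 'x', ('lam', 'x', ('var', 'x'))))           # set()
```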
Function expressions
To give an example from mathematics, consider an expression which defines a function
$f=\left[(x_{1},\ldots ,x_{n})\mapsto t\right]$
where t is an expression. t may contain some, all or none of the x1, …, xn, and it may contain other variables. In this case we say that the function definition binds the variables x1, …, xn.
In this manner, function definition expressions of the kind shown above can be thought of as variable-binding operators, analogous to the lambda expressions of lambda calculus. Other binding operators, like the summation sign, can be thought of as higher-order functions applying to a function. So, for example, the expression
$\sum _{x\in S}{x^{2}}$
could be treated as a notation for
$\sum _{S}{(x\mapsto x^{2})}$
where $\sum _{S}{f}$ is an operator with two parameters—a one-parameter function, and a set to evaluate that function over. The other operators listed above can be expressed in similar ways; for example, the universal quantifier $\forall x\in S\ P(x)$ can be thought of as an operator that evaluates to the logical conjunction of the boolean-valued function P applied over the (possibly infinite) set S.
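This reading of binding operators as higher-order functions is easy to express in code; the names `Sum` and `Forall` below are illustrative:

```python
def Sum(f, S):
    """Summation as a higher-order function: a function and a set to sum over."""
    return sum(f(x) for x in S)

def Forall(P, S):
    """Universal quantification as the conjunction of P over the set S."""
    return all(P(x) for x in S)

S = {1, 2, 3}
Sum(lambda x: x ** 2, S)      # 14 — x is bound inside the lambda
Forall(lambda x: x > 0, S)    # True
```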
Natural language
When analyzed in formal semantics, natural languages can be seen to have free and bound variables. In English, personal pronouns like he, she, they, etc. can act as free variables.
Lisa found her book.
In the sentence above, the possessive pronoun her is a free variable. It may refer to the previously mentioned Lisa or to any other female. In other words, her book could be referring to Lisa's book (an instance of coreference) or to a book that belongs to a different female (e.g. Jane's book). Whoever the referent of her is can be established according to the situational (i.e. pragmatic) context. The identity of the referent can be shown using coindexing subscripts where i indicates one referent and j indicates a second referent (different from i). Thus, the sentence Lisa found her book has the following interpretations:
Lisai found heri book. (interpretation #1: her = of Lisa)
Lisai found herj book. (interpretation #2: her = of a female that is not Lisa)
The distinction is not purely of academic interest, as some languages do actually have different forms for heri and herj: for example, Norwegian and Swedish translate coreferent heri as sin and noncoreferent herj as hennes.
English does allow specifying coreference, but it is optional, as both interpretations of the previous example are valid (the ungrammatical interpretation is indicated with an asterisk):
Lisai found heri own book. (interpretation #1: her = of Lisa)
*Lisai found herj own book. (interpretation #2: her = of a female that is not Lisa)
However, reflexive pronouns, such as himself, herself, themselves, etc., and reciprocal pronouns, such as each other, act as bound variables. In a sentence like the following:
Jane hurt herself.
the reflexive herself can only refer to the previously mentioned antecedent, in this case Jane, and can never refer to a different female person. In this example, the variable herself is bound to the noun Jane that occurs in subject position. Indicating the coindexation, the first interpretation with Jane and herself coindexed is permissible, but the other interpretation where they are not coindexed is ungrammatical:
Janei hurt herselfi. (interpretation #1: herself = Jane)
*Janei hurt herselfj. (interpretation #2: herself = a female that is not Jane)
The coreference binding can be represented using a lambda expression as mentioned in the previous Formal explanation section. The sentence with the reflexive could be represented as
(λx.x hurt x)Jane
in which Jane is the subject referent argument and λx.x hurt x is the predicate function (a lambda abstraction) with the lambda notation and x indicating both the semantic subject and the semantic object of sentence as being bound. This returns the semantic interpretation JANE hurt JANE with JANE being the same person.
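The same lambda abstraction can be mimicked in code; the tuple representation of the predicate is purely illustrative:

```python
# λx. x hurt x — a single parameter fills both the subject and object slots
predicate = lambda x: (x, 'hurt', x)

predicate('JANE')  # ('JANE', 'hurt', 'JANE'): both occurrences are bound
                   # to the same referent, giving the reflexive reading
```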
Pronouns can also behave in a different way. In the sentence below
Ashley hit her.
the pronoun her can only refer to a female that is not Ashley. This means that it can never have a reflexive meaning equivalent to Ashley hit herself. The grammatical and ungrammatical interpretations are:
*Ashleyi hit heri. (interpretation #1: her = Ashley)
Ashleyi hit herj. (interpretation #2: her = a female that is not Ashley)
The first interpretation is impossible. Only the second interpretation is permitted by the grammar.
Thus, it can be seen that reflexives and reciprocals are bound variables (known technically as anaphors), while true pronouns are free variables in some grammatical structures but variables that cannot be bound in other grammatical structures. The binding phenomena found in natural languages were particularly important to the syntactic government and binding theory (see also: Binding (linguistics)).
See also
• Closure (computer science)
• Combinatory logic
• Lambda lifting
• Name binding
• Scope (programming)
References
1. W. V. O. Quine, Mathematical Logic (1981). Harvard University Press. ISBN 0-674-55451-5.
2. Thompson 1991, p. 33.
• Thompson, Simon (1991). Type theory and functional programming. Wokingham, England: Addison-Wesley. ISBN 0201416670. OCLC 23287456.
Further reading
• Gowers, Timothy; Barrow-Green, June; Leader, Imre, eds. (2008). The Princeton Companion to Mathematics. Princeton, New Jersey: Princeton University Press. pp. 15–16. doi:10.1515/9781400830398. ISBN 978-0-691-11880-2. JSTOR j.ctt7sd01. LCCN 2008020450. MR 2467561. OCLC 227205932. OL 19327100M. Zbl 1242.00016.