Yeah, this software can't be too easy to install. My installer is very professional looking; it's currently not tied into that code, but it directs the user on how to search for their MiKTeX (and/or install it) and does a test LaTeX rendering
Somebody like Zeta (on codereview) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a review of the code.
he is usually in the 2nd monitor chat room. There are a lot of people in those chat rooms who help each other with projects.
i'm not sure how many of them are adept at category theory though... still, that chat tends to emphasize a lot of small problems and occasionally goes off on tangents.
your project is probably too large for an actual question on codereview, but there is a lot of github activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It can also be written as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}\cos(\ldots$...
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway, here's some food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style in which the prof (who creates the exam) writes questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
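As an illustration of the formula being discussed (my own sketch, not code from the quoted paper), here $N$ is the cutoff of the direct sum and $q$ is, as above, the upper summation index in the Bernoulli-number sum; a handful of terms already continues $\zeta$ to the left of $\sigma = 1$:

```python
import math
from fractions import Fraction

def bernoulli(kmax):
    # B_0, B_1, ..., B_kmax via the recurrence sum_{j=0}^{m} C(m+1, j) B_j = 0
    B = [Fraction(1)]
    for m in range(1, kmax + 1):
        B.append(-sum(math.comb(m + 1, j) * B[j] for j in range(m)) / (m + 1))
    return B

def zeta_em(s, N=10, q=3):
    # Euler-Maclaurin: zeta(s) ~ sum_{n=1}^{N-1} n^{-s} + N^(1-s)/(s-1) + N^(-s)/2
    #   + sum_{k=1}^{q} B_{2k}/(2k)! * s(s+1)...(s+2k-2) * N^(-s-2k+1),
    # valid as an analytic continuation roughly for Re(s) > 1 - 2q.
    B = bernoulli(2 * q)
    total = sum(n ** (-s) for n in range(1, N))
    total += N ** (1 - s) / (s - 1) + N ** (-s) / 2
    for k in range(1, q + 1):
        rising = 1
        for j in range(2 * k - 1):
            rising *= s + j  # the product s(s+1)...(s+2k-2)
        total += float(B[2 * k]) / math.factorial(2 * k) * rising * N ** (-s - 2 * k + 1)
    return total

print(zeta_em(2), math.pi ** 2 / 6)  # agree to many digits
print(zeta_em(-1))                   # approximately -1/12, by continuation
```

Already with $N = 10$ and $q = 3$ the truncation reproduces $\zeta(2) = \pi^2/6$ to roughly ten digits and gives the continued value $\zeta(-1) = -1/12$.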
|
It will be shown how to compute the density matrix for the harmonic oscillator:
\[ H = {P^2 \over 2m} + {1 \over 2}m\omega^2 X^2\]
using the functional integral representation. The density matrix is given by
\[ \rho(x,x';\beta) = \int_{x(0)=x}^{x(\beta\hbar)=x'}{\cal D} x (\tau ) \exp \left [ - {1 \over \hbar } \int _0^{\beta \hbar } d \tau \left ({1 \over 2}m\dot{x}^2 + {1 \over 2}m\omega^2x^2\right)\right]\]
As we saw in the last lecture, paths in the vicinity of the classical path on the inverted potential give rise to the dominant contribution to the functional integral. Thus, it proves useful to expand the path \( x (\tau )\) about the classical path. We introduce a change of path variables from \(x (\tau ) \) to \( y (\tau ) \), where
\[ x(\tau) = x_{\rm cl}(\tau) + y(\tau)\]
and where \(x_{\rm cl}(\tau)\) satisfies
\[ m\ddot{x}_{\rm cl} = m\omega^2 x_{\rm cl}\]
subject to the conditions
\[ x_{\rm cl}(0) = x,\;\;\;\;\;\;\;\;\;\;x_{\rm cl}(\beta\hbar)=x' \]
so that \(y (0) = y (\beta \hbar ) = 0 \).
Substituting this change of variables into the action integral yields
\[ S = \int_0^{\beta\hbar}d\tau\left[{1 \over 2}m\dot{x}^2 + {1 \over 2}m\omega^2 x^2\right] = \int_0^{\beta\hbar} d\tau\left[{1 \over 2}m(\dot{x}_{\rm cl} + \dot{y})^2 + {1 \over 2}m\omega^2(x_{\rm cl}+y)^2\right]\]
\[ = \int_0^{\beta\hbar} d\tau\left[{1 \over 2}m\dot{x}_{\rm cl}^2 + {1 \over 2} m \omega^2 x^2_{\rm cl} \right ] + \int _0^{\beta \hbar}d\tau\left[{1 \over 2}m\dot{y}^2 + {1 \over 2}m\omega^2y^2\right] + \int_0^{\beta\hbar}d\tau \left[m\dot{x}_{\rm cl}\dot{y} +m\omega^2 x_{\rm cl}y\right]\]
An integration by parts makes the cross terms vanish:
\[\int_0^{\beta\hbar}d\tau\left[m\dot{x}_{\rm cl}\dot{y} +m\omega^2x_{\rm cl}y\right] = m \dot {x}_{\rm cl}y \Big\vert _0^{\beta \hbar} + \int _0^{\beta \hbar } d\tau\left[-m\ddot{x}_{\rm cl} +m\omega^2 x_{\rm cl}\right]y = 0\]
where the surface term vanishes because \(y(0) = y (\beta \hbar ) = 0 \) and the second term vanishes because \(x_{\rm cl}\) satisfies the classical equation of motion.
The first term in the expression for \(S\) is the classical action, which we have seen is given by
\[ \int_0^{\beta\hbar}d\tau\left[{1 \over 2}m\dot{x}_{\rm cl}^2 + {1 \over 2} m\omega ^2 x^2_{\rm cl} \right] = {m\omega \over 2 \sinh (\beta \hbar \omega)} \left[(x^2 + x'^2)\cosh(\beta\hbar\omega) - 2xx'\right]\]
Therefore, the density matrix for the harmonic oscillator becomes
\[ \rho(x,x';\beta) = I[y]\exp\left[-{m\omega \over 2\sinh (\beta \hbar \omega )} \left ( (x^2 + x'^2)\cosh(\beta\hbar\omega) - 2xx'\right)\right]\]
where \(I[y]\) is the path integral
\[ I[y] = \int_{y(0)=0}^{y(\beta\hbar)=0}{\cal D}y(\tau) \exp\left [ - { 1 \over \hbar } \int _0^{\beta \hbar} d\tau \left ({m \over 2}\dot{y}^2 + {m \omega^2 \over 2}y^2\right)\right]\]
Note that \(I [y] \) does not depend on the points \(x\) and \(x'\) and therefore can only contribute an overall (temperature dependent) constant to the density matrix. This will affect the thermodynamics but not any averages of physical observables. Nevertheless, it is important to see how such a path integral is done.
To compute \(I [y] \), we note that it is a functional integral over functions \(y (\tau) \) that vanish at \(\tau = 0 \) and \(\tau = \beta \hbar \). Thus, they are a special class of periodic functions and can be expanded in a Fourier sine series:
\[ y(\tau) = \sum_{n=1}^{\infty}c_n \sin(\omega_n\tau)\]
where
\[ \omega_n = {n\pi\over \beta\hbar}\]
Thus, we wish to change from an integral over the functions \(y (\tau ) \) to an integral over the Fourier expansion coefficients \(c_n\). The two integrations should be equivalent, as the coefficients uniquely determine the functions \( y (\tau ) \). Note that
\[ \dot{y}(\tau) = \sum_{n=1}^{\infty} \omega_n c_n \cos(\omega_n \tau) \]
Thus, terms in the action are:
\[\int_0^{\beta\hbar} d\tau\, {1 \over 2}m\dot{y}^2 ={m \over 2} \sum_{n=1}^{\infty} \sum_{n'=1}^{\infty} c_nc_{n'}\omega _n\omega _{n'} \int_0^{\beta\hbar} d\tau \cos(\omega_n \tau)\cos(\omega_{n'}\tau) \]
Since the cosines are orthogonal between \(\tau = 0\) and \(\tau = \beta \hbar \), the integral becomes
\[ \int_0^{\beta\hbar} d\tau\, {1 \over 2}m\dot{y}^2 ={m \over 2}\sum_{n=1}^{\infty} c_n^2 \omega _n^2 \int _0^{\beta \hbar} d\tau \cos ^2 (\omega _n \tau ) = {m \over 2} \sum _{n=1}^{\infty} c_n^2 \omega _n^2 \int _0^{\beta \hbar } d \tau \left [ {1 \over 2} + {1 \over 2} \cos (2 \omega _n \tau ) \right] = {m\beta\hbar \over 4}\sum_{n=1}^{\infty}c_n^2 \omega_n^2 \]
similarly,
\[\int_0^{\beta\hbar} d\tau\, {1 \over 2}m\omega^2 y^2 = {m\beta\hbar \over 4}\omega^2\sum_{n=1}^{\infty}c_n^2\]
The measure becomes
\[{\cal D}y(\tau) \rightarrow \prod_{n=1}^{\infty} {dc_n \over \sqrt{4\pi/m\beta\omega_n^2}}\]
which is not an equivalent measure (it is not derived from a computation of the Jacobian), but is chosen to give the correct free-particle ( \(\omega = 0\)) limit; the normalization can ultimately be corrected by attaching an overall factor of \(\sqrt{m/2\pi\beta\hbar^2}\).
With this change of variables, \(I [y] \) becomes
\[ I[y] = \prod_{n=1}^{\infty} \int_{-\infty}^{\infty}{dc_n \over \sqrt {4\pi / m \beta \omega ^2_n}} \exp \left [ - {m \beta \over 4} (\omega ^2 + \omega ^2_n )c^2_n \right ] = \prod _{n=1}^{\infty} \left[{\omega_n^2 \over \omega^2 + \omega_n^2}\right]^{1/2} \]
The infinite product can be written as
\[\prod_{n=1}^{\infty} \left[{\pi^2 n^2/\beta^2\hbar^2 \over \omega^2 + \pi^2 n^2/ \beta^2 \hbar^2} \right ] = \left [\prod _{n=1}^{\infty} \left (1 + {\beta^2 \hbar^2 \omega^2 \over \pi^2 n^2}\right)\right]^{-1}\]
The product in the square brackets is just the infinite product formula for \(\sinh(\beta\hbar\omega)/(\beta\hbar\omega)\), so that \(I [y]\) is just
\[I[y] = \sqrt {\beta\hbar\omega \over \sinh(\beta\hbar\omega)}\]
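As a quick numerical sanity check (not part of the lecture), the infinite product formula for \(\sinh x / x\) can be verified in plain Python by truncating the product at a large cutoff:

```python
import math

x = 1.0        # stands for beta*hbar*omega
N = 100_000    # truncation of the infinite product

# Truncated product prod_{n=1}^{N} (1 + x^2 / (pi^2 n^2))
prod = 1.0
for n in range(1, N + 1):
    prod *= 1.0 + x * x / (math.pi * n) ** 2

print(prod, math.sinh(x) / x)  # the truncated product approaches sinh(x)/x
```

The tail contributes a relative error of order \(x^2/\pi^2 N\), so with \(N = 10^5\) the two values agree to about six digits.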
Finally, attaching the free-particle factor \(\sqrt {m/2\pi\beta\hbar^2}\), the harmonic oscillator density matrix becomes:
\[ \rho(x,x';\beta) = \sqrt {m\omega \over 2\pi\hbar \sinh(\beta \hbar \omega )} \exp \left [ - {m \omega \over 2 \sinh (\beta \hbar \omega ) } \left ( (x^2 + x'^2 ) \cosh (\beta \hbar \omega ) - 2xx' \right ) \right]\]
Notice that in the free-particle limit \( (\omega \rightarrow 0 ) \), \(\sinh (\beta \hbar \omega ) \approx \beta \hbar \omega \) and \(\cosh (\beta \hbar \omega ) \approx 1 \), so that
\[ \rho(x,x';\beta) \rightarrow\sqrt {m \over 2\pi\beta\hbar^2} \exp\left[-{m \over 2\beta\hbar^2}(x-x')^2\right] \]
which is the expected free-particle density matrix.
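As a further check (again not part of the lecture), the diagonal of this density matrix should integrate to the harmonic-oscillator partition function \(Z = 1/(2\sinh(\beta\hbar\omega/2))\). A sketch in units where \(m = \omega = \hbar = 1\):

```python
import math

beta = 1.0  # work in units where m = omega = hbar = 1

def rho_diag(x):
    # Diagonal x' = x of the density matrix above: the exponent reduces to
    # -[(cosh(beta) - 1)/sinh(beta)] * x^2
    s = math.sinh(beta)
    return math.sqrt(1.0 / (2.0 * math.pi * s)) * math.exp(-(math.cosh(beta) - 1.0) / s * x * x)

# Riemann sum for Z = Tr(rho) = integral of rho(x, x) over x
dx = 0.001
Z = sum(rho_diag(-10.0 + i * dx) for i in range(20001)) * dx

Z_exact = 1.0 / (2.0 * math.sinh(beta / 2.0))
print(Z, Z_exact)
```

The numerical trace matches \(1/(2\sinh(\beta/2))\), i.e. \(\sum_n e^{-\beta(n + 1/2)}\), as it must.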
|
How do I solve this equation?
$\displaystyle 3^x=4y+5$
$\displaystyle 3^x\equiv 5\equiv 1 \pmod{4}$
Ok, but what is that? How can I apply it to my equation?
What happens when x = 2? How about x = 4? ....
Can you solve the equation for me completely, step by step?
I pretty much solved it for you already. When x = 2, the congruence is true. When x = 4, it is true. x = 6, true as well. x = 8, guess what? True. Notice anything? Let's not forget that x = 0 is true too.
$\displaystyle 3^{2k}=9^k\equiv 1^k=1\pmod{4}$, but $\displaystyle 3^{2k+1}=9^k\cdot 3\equiv 3\pmod{4}$.
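A quick computational check of that pattern (my own sketch, not from the thread): for even $x$ we have $3^x \equiv 1 \equiv 5 \pmod 4$, so $y = (3^x - 5)/4$ is an integer.

```python
# 3^x mod 4 alternates: 1 for even x, 3 for odd x
for x in range(10):
    assert pow(3, x, 4) == (1 if x % 2 == 0 else 3)

# so 3^x = 4y + 5 has an integer solution y exactly when x is even
def y_from_x(x):
    assert x % 2 == 0
    return (3 ** x - 5) // 4

print([(x, y_from_x(x)) for x in (0, 2, 4, 6)])  # y = -1, 1, 19, 181
```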
|
Homoclinic solutions of discrete $ \phi $-Laplacian equations with mixed nonlinearities
School of Mathematics and Information Science, Guangzhou University, Center for Applied Mathematics, Guangzhou University, Guangzhou 510006, China
By using critical point theory, we obtain some new sufficient conditions for the existence of homoclinic solutions of a class of nonlinear discrete $ \phi $-Laplacian equations with mixed nonlinearities, for potentials that are either periodic or unbounded. We also prove that these conditions are necessary in some special cases. In addition, multiplicity results for homoclinic solutions of nonlinear discrete $ \phi $-Laplacian equations with unbounded potentials are considered. In our paper, the nonlinearities can mix super-$p$-linear with asymptotically $p$-linear behaviour at $ ∞ $ for $ p≥ 1 $. To the best of our knowledge, there is no previous existence result for homoclinic solutions with the discrete $ \phi $-Laplacian. Finally, an extension is also considered.
Keywords: Homoclinic solution, discrete $ \phi $-Laplacian, mixed nonlinearity, critical point theory, periodic potential, unbounded potential.
Mathematics Subject Classification: Primary: 39A14; Secondary: 34C37.
Citation: Genghong Lin, Zhan Zhou. Homoclinic solutions of discrete $ \phi $-Laplacian equations with mixed nonlinearities. Communications on Pure & Applied Analysis, 2018, 17 (5): 1723-1747. doi: 10.3934/cpaa.2018082
|
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV
(Springer, 2012-10)
The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-12)
In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ...
Measurement of prompt J/psi and beauty hadron production cross sections at mid-rapidity in pp collisions at root s=7 TeV
(Springer-verlag, 2012-11)
The ALICE experiment at the LHC has studied J/ψ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\rm int}$ = 5.6 nb$^{-1}$. The fraction ...
Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV
(Springer, 2012-09)
The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ...
Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012)
The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ...
Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2012-03)
The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ...
Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV
(American Physical Society, 2012-12)
The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
|
Wednesday, March 4, 2015
Since Github Pages is way nicer than Blogger, I'm writing over there at http://davidchudzicki.com now. To keep old posts alive, these pages at http://blog.davidchudzicki.com will remain as is.
Tuesday, January 21, 2014
Saturday, October 26, 2013
Then I learned that GIFs can only use 256 colors and we have to do something to map the many colors to the fewer colors. We could just use the closest available color in the new set of colors, but that results in sharp jumps and "color banding" where the colors change.
So instead we use (and ImageMagick's default is) dithering, which assigns pixels to a mixture of color values that locally average to the appropriate shade. I had a great time reading about this in ImageMagick's documentation! Here's a description of the default method:
The pixel is assigned the color closest to that pixel's value, and any difference between the pixel's original color and the selected color is saved and added to the next pixel's color values (which is always a neighbouring pixel) before a new color is again selected. In this way any color variations between selected colors and the image's original color are distributed to the other pixels in the same area. The result is that while only specific colors will be assigned to the final image, the same basic overall color for that area will closely match the original image.

When there aren't enough dots, the picture looks blurry.
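Here's a minimal one-dimensional, two-color sketch of that error-diffusion idea (my own toy version in Python, not ImageMagick's actual implementation):

```python
# Toy 1-D error-diffusion dither of grayscale values to two levels (0 and 255).
def dither_row(row):
    out = []
    err = 0.0
    for v in row:
        v = v + err                      # add the error carried over from the previous pixel
        chosen = 255 if v >= 128 else 0  # closest available color
        err = v - chosen                 # pass the difference on to the next pixel
        out.append(chosen)
    return out

print(dither_row([128, 128, 128, 128]))  # [255, 0, 255, 0]: locally averages to mid-gray
```

A row of mid-gray pixels comes out as alternating black and white, which is exactly the "mixture of color values that locally average to the appropriate shade" described above.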
The comparison images (at resolutions 800x600 and 200x150: PNG with lots of colors, GIF with no dither, and GIF with the default dither, all 256 colors) were generated with ImageMagick:

convert -resize 100% -dither None ./zoomed_in/Weierstrass001.png ./res100no_dither.gif
convert -resize 50% -dither None ./zoomed_in/Weierstrass001.png ./res50no_dither.gif
convert -resize 100% ./zoomed_in/Weierstrass001.png ./res100default_dither.gif
convert -resize 25% ./zoomed_in/Weierstrass001.png ./res25default_dither.gif
convert -resize 100% ./zoomed_in/Weierstrass001.png ./res100many_colors.png
convert -resize 25% ./zoomed_in/Weierstrass001.png ./res25many_colors.png

Wednesday, October 16, 2013

This post will describe a way I came up with of fitting a function that's constrained to be increasing, using Stan. If you want practical help, standard statistical approaches, or expert research, this isn't the place for you (look up "isotonic regression" or "Bayesian isotonic regression" or David Dunson, whose work Andrew Gelman pointed me to). This is the place for you if you want to read about how I thought about setting up a model, implemented the model in Stan, and created graphics to understand what was going on. Many thanks to Naftali Harris for comments and insights on an earlier version.
I recently read a paper that Andrew Gelman wrote back in 1996 about theoretical considerations that can restrict model choice even "in the absence of data with minimal applied context". As one example, he describes the problem of fitting a function \( f(t) \) that is known to be increasing and where \( f(0)=0 \) and \( f(1)=1 \) are known. We estimate the function given noisy observations of its value at \( (N-1) \) equally spaced sample points \( \theta_i=f(\frac{i}{N}) \) (for \( i \in \{1,2,3,...,N-1\} \)).
What prior should we use for the \( \{\theta_i\} \)? It might seem like a reasonable "uninformative" prior would make the \( \{\theta_i\} \) independent and uniform on \( [0,1] \), with the restriction that non-increasing sets of \( \{\theta_i\} \) are given zero probability. But this turns out badly! Gelman notes that this prior on the \( \{\theta_i\} \) is the same as the distribution of the order statistics of a sample of size \( N-1 \) from the uniform distribution on \( [0,1] \). As the number of points in the discretization increases, the mass of the prior concentrates around a straight line, which will overwhelm the data.
To illustrate, here are some samples from this distribution, for various \( N \):
This is weird! Under the uniform distribution, each (legal) path is equally likely. That's why, if we fit this model to noisy data, the maximum likelihood estimate (which is the mode of the posterior) would be reasonable. But it turns out that for large \( N \), almost all legal paths are basically straight lines, so the bulk of the posterior would be basically a straight line no matter what the data say. It reminds me a bit of statistical mechanics, which is largely about micro-level randomness determining macro-level properties. (I'm no expert, but I recommend this book.)
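The concentration is easy to see numerically; here's a quick simulation sketch (not the post's original code) measuring how far a sorted uniform sample strays from the straight line \( \theta_i = i/N \):

```python
import random

random.seed(0)

def max_deviation(N):
    # order statistics of N-1 uniforms, compared against the line theta_i = i/N
    theta = sorted(random.random() for _ in range(N - 1))
    return max(abs(t - (i + 1) / N) for i, t in enumerate(theta))

for N in (10, 100, 10000):
    print(N, max_deviation(N))
```

The maximum deviation shrinks on the order of \( 1/\sqrt{N} \), so for large \( N \) essentially every draw from this prior hugs the straight line.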
Tuesday, July 2, 2013

From 1886 onward, the mine seems to get deeper at a roughly constant rate (averaged over time). Are they digging it deeper smoothly over time, or does the depth increase in spurts? Why do we have so much more data in the period from 1886 to 1892 than in the rest?

Wednesday, June 19, 2013
Here are some guesses:
Monday, March 11, 2013
The code is here. The code is O(n^2) (where n is the number of terms I want images for) when it really should be O(n), since I'm in effect asking for a lot of repeated computation by making each image separately. Sorry!
Sunday, August 5, 2012
This time I looked at the digits data set that Kaggle is using as the basis of a competition for "getting started". The random forest is trained to classify the digits, and this is an embedding of 1000 digits into 2 dimensions preserving proximities from the random forest as closely as possible:
Here's the same but just for the 7's:
The random forest has done a reasonable job putting different types of 7's in different areas, with the most "canonical" 7's toward the middle.
You can see all of the other digits at http://www.learnfromdata.com/media/blog/digits/.
Note that this random forest is different from the one in my last post -- here it's built to classify the digits, not to separate digits from non-digits. I wonder what results a random forest trained to distinguish 7's from non-7's would give.
Code is on Github.
The recipe:

1. Have a data set.
2. Create a set of fake data by permuting the columns randomly -- each column will still have the same distribution, but the relationships are destroyed.
3. Train a random forest to distinguish the fake data from the original data.
4. Get a "proximity" measure between points, based on how often the points are in the same leaf node.
5. Embed the points in 2D in such a way as to distort these proximities as little as possible.

So I generated 931 images of diamonds varying in two dimensions: size, and position (only how far left/right). For the same difference in position, small diamonds need to be closer to each other than large diamonds. None of my diamonds have a diameter smaller than 4 pixels, but imagine if the sizes got so small the diamond wasn't even there -- then position wouldn't matter at all for those diamonds.
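The permutation and proximity steps can be sketched in plain Python (a sketch only: the forest itself and the 2D embedding are assumed to come from elsewhere, e.g. scikit-learn; `leaves` below is a hypothetical leaf-assignment table):

```python
import random

def make_fake(rows):
    # Permute each column independently: every column keeps its marginal
    # distribution, but relationships between columns are destroyed.
    cols = list(zip(*rows))
    shuffled = [random.sample(col, len(col)) for col in cols]
    return [list(r) for r in zip(*shuffled)]

def proximity(leaves, i, j):
    # leaves[t][p] is the leaf that tree t assigns to point p;
    # proximity = fraction of trees placing points i and j in the same leaf.
    return sum(1 for t in leaves if t[i] == t[j]) / len(leaves)
```

With a fitted scikit-learn forest, `leaves` would be something like `forest.apply(X).T`; the final step would feed `1 - proximity` as a dissimilarity into an MDS-style embedding.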
|
In the days before email, mathematicians relied upon pen, paper and the postman to share ideas and communicate fiendish numerical taunts. An excited Dirichlet wrote to Kronecker in 1858:
… that sum, which I could only describe up to an error of order $\sqrt{x}$ at the time of my last letter, I’ve now managed to home in on significantly.
Dirichlet’s sum is associated to the divisor function $d(n)$: the number of distinct divisors of the natural number $n$. The number six is divisible by 1, 2, 3 and 6, so $d(6)=4$. Four has divisors 1, 2 and 4, so $d(4)=3$.
It might seem odd that a great mathematician was troubled by a quantity whose description amounts to a good grasp of ‘times tables’ – still odder that he was confessing to a distinguished contemporary that the damned thing had caused him grief.
It is not hard (although perhaps time-consuming) to work out the value of $d(n)$ for any given natural number $n$. But what if we want to provide a formula to describe it? Where to begin?
Mean values
The trouble is, the divisor function is not at all well-behaved. In fact, it jumps significantly in value infinitely often. The adjacent integers $d(200560490130)=2^{11}$ and $d(200560490131) = 2$ provide one such example, but you can happily construct as large a jump as you fancy. Try it.
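Both values in that jump are quick to recover by trial division (a small sketch in Python):

```python
def d(n):
    # number of divisors of n, via the prime factorisation:
    # d(p1^e1 * ... * pk^ek) = (e1 + 1)...(ek + 1)
    count = 1
    m = n
    p = 2
    while p * p <= m:
        if m % p == 0:
            e = 0
            while m % p == 0:
                m //= p
                e += 1
            count *= e + 1
        p += 1 if p == 2 else 2
    if m > 1:
        count *= 2  # leftover prime factor
    return count

print(d(6), d(4))                        # 4 3, as above
print(d(200560490130), d(200560490131))  # 2048 2
```

The first number is the product of the first eleven primes (hence $2^{11}$ divisors); its successor happens to be prime.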
The divisor function is one of many interesting arithmetic functions; understanding their asymptotic behaviour as $n\to\infty$ is a key question in multiplicative number theory. However, as we saw above, the value of these functions can be erratic.
To try to smooth this wildness, you could instead consider certain types of ‘average’. Dirichlet’s sum was one such average.
He began by accumulating a sum from the divisor function on adjacent naturals:
\begin{equation}
D(x) = d(1) + d(2) + \cdots + d([x]) = \sum\limits_{n \leqslant x} d(n), \end{equation}
where $[x]$ indicates the largest integer less than or equal to $x$.
Upon dividing the sum $D(x)$ by $x$, we get the
mean value of the divisor function up to $x$.
Taking the mean has the effect of ‘smoothing out’ the jumps in the divisor function. But just how well-behaved is this average? Do we get some sort of regular behaviour as $x$ becomes very large?
Hyperbolæ
Estimating the divisor mean comes more naturally if we take a different perspective on the divisor function. Instead of viewing $d(n)$ as the number of divisors of $n$, we think of it as the number of natural-valued pairs $(a,b)$ such that $n=ab$. The number 6 may be written 1 × 6 = 2 × 3 = 3 × 2 = 6 × 1, giving us the four pairs (1,6), (2,3), (3,2) and (6,1). If we wanted to see the divisors, we could just look at the first number in each pair. For 4 we have the three pairs: (1,4), (2,2) and (4,1); again showing $d(4)=3$.
Why not treat these natural-valued pairs, from here on called lattice points, as coordinates in the plane? The question of counting the number of divisors of $n$ turns into that of counting the lattice points lying on the hyperbola $XY=n$. If $n$ is not a natural number, then the hyperbola contains no lattice points. If $n$ is a natural, then the hyperbola contains exactly $d(n)$ lattice points.
If we now let $n$ be real-valued and increase it continuously, the hyperbola sweeps through the upper quadrant of the plane. The only time that the curve passes through lattice points is when $n$ hits a natural number.
By the reasoning above, $d(1) + d(2) + \cdots + d([x])$ is a count of all of the pairs that the hyperbola sweeps through as we run $n$ from 1 through to $x$. Dirichlet's sum $D(x)$ counts the number of lattice points lying below the hyperbola $XY=x$.
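The lattice-point picture can be checked directly against the divisor sum with a small brute-force sketch:

```python
def d(n):
    # number of divisors, brute force
    return sum(1 for a in range(1, n + 1) if n % a == 0)

def D_sum(x):
    # Dirichlet's sum: d(1) + d(2) + ... + d(x)
    return sum(d(n) for n in range(1, x + 1))

def D_lattice(x):
    # lattice points (a, b) with a*b <= x, i.e. points under the hyperbola XY = x
    return sum(1 for a in range(1, x + 1) for b in range(1, x // a + 1))

for x in (10, 100):
    print(x, D_sum(x), D_lattice(x))  # the two counts agree
```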
Perhaps the simplest way of summing up these lattice points is to group them into vertical towers. For each natural-valued $X$-coordinate we look vertically, counting $\lbrack \frac{x}{n}\rbrack$ for the number of lattice points lying directly above $X=n$, but below the hyperbola. So
\begin{equation} D(x)= \sum\limits_{n \leqslant x}\left\lbrack \frac{x}{n}\right\rbrack. \end{equation}

Keeping track of errors: big-O notation
Our count above is still exact and depends upon the precise value of the divisor function, which we know to jump suddenly in value. If we are happy to settle for trying to determine the dominant behaviour of $D(x)$, then we can afford to be a little less precise: allowing for some small deviation can make it easier to provide a more useful formula whose value is easy to calculate. This deviation from the precise value of the function is usually called error, and provided that we don't introduce too much, we can get a good idea of the dominant behaviour.
For instance, say we want to get an idea of the size of the function $f(x) = [x^2]$ for large $x$. It makes sense to resign ourselves to an error of at most 1, taking just the larger and larger $x^2$ term. To keep track of our neglected error we write $f(x) = [x^2] = x^2 + O(1)$. The symbol $O(1)$ means that we are allowing an error that is at most constant with respect to $x$.
If our function were more complicated, we might wish to keep track of an error that varies with respect to $x$, instead of remaining constant. Big-O notation gives us this flexibility: we write $r(x) = O(g(x))$, for some positive function $g(x)$, to mean that there is a constant $K$ and a positive number $x_0$ such that whenever $x \geqslant x_0$ we have that $|r(x)| \leqslant K g(x)$.

Consider an analogue radio to be an obsolete cuboidal object with a knob, which ingests radio waves and—subject to correct knob position and alignment of the clouds—excretes a melody of questionable taste.
One day—not too cloudy—you twizzle the knob and your favourite tune blares out. At least, you think it is your favourite tune… There is some distractingly apparent white noise, rendering it a challenge to make out the precise melody. You try turning up the volume, but this only helps up to a certain point. From some volume onwards, the white noise remains proportionally loud.
We can return to big-O notation by thinking of the $x$ variable in the definition above as the volume of the radio’s excretion, the white noise as the error $r(x)$ from your favourite tune. Then $r(x) = O(g(x))$ is to say that: whenever the volume $x$ is greater than some $x_0$, the radio’s deviation $r(x)$ from your favourite tune is at most proportional to some function $g(x)$.
Now for a more mathematical example: $f(x) = x^7 + 5x + \sin(x)$. If $x$ is large then $x^7$ is very large. If you were to plot a graph of $f(x)$, the $5x$ and $\sin(x)$ terms would seem less and less significant. We could make use of big-O notation to indicate the dominant behaviour: $f(x) = x^7 + O(x)$.
For Dirichlet’s sum we can now write
\begin{equation} D(x) = \sum\limits_{n \leqslant x}\left(\frac{x}{n} + O(1)\right) = x\sum\limits_{n \leqslant\ x}\frac{1}{n} + O(x).\label{D1}\tag{1} \end{equation}
The sum $\sum_{n \leq x}\frac{1}{n}$ is (up to a constant) the same as the integral of $\frac{1}{t}$ over the interval $[1,x]$. This integral is $\log(x)$. The constant in question approaches the Euler–Mascheroni constant, $\gamma \approx 0.57$, at a rate of $O\left(\frac{1}{x}\right)$. As $x$ gets large this error vanishes, so that the sum $\sum_{n \leqslant x}\frac{1}{n}$ is close in value to $\log(x) + \gamma$. Substituting into \eqref{D1} gives \begin{equation} D(x) = x\left(\log x + \gamma\right) + O(x) = x\log x + O(x). \end{equation}
Phrased differently, the mean value of the divisor function does behave well for large $x$: it grows like $\log(x)$ with some fixed constant error: \begin{equation} \frac{1}{x} \sum\limits_{n\leqslant x} d(n) = \log x + O(1). \end{equation}

Exploiting symmetry
This was not good enough for Dirichlet. He wanted to pin down the constant—that number floating around in the $O(1)$ envelope above. With some rather clever counting he exploited a natural symmetry of the hyperbola $XY=x$ so as to make fewer than the $x$ lots of $O(1)$ approximations we made in \eqref{D1}. If we swap the $X$ and $Y$ axes the hyperbola is left unchanged: there is a reflective symmetry about the line $X=Y$. Counting the lattice points lying below the hyperbola but vertically above the $X$-axis coordinates $1, 2,\ldots, [\sqrt{x}]$ is exactly the same as counting the lattice points lying below the hyperbola but horizontally to the right of the $Y$-axis coordinates $1, 2, \ldots, [\sqrt{x}]$. Combined, these counts cover all of the lattice points under the hyperbola, except that we have over-counted the $[\sqrt{x}]\times[\sqrt{x}]$ lattice points lying in the square below the hyperbola which is sent to itself under the symmetry. In symbols:
\begin{align*}
D(x) = 2 \sum\limits_{n \leqslant \sqrt{x}} \left\lbrack \frac{x}{n} \right\rbrack - \left\lbrack \sqrt{x} \right\rbrack ^2.
\end{align*}
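As a quick numerical sanity check (my own sketch, not part of the article), the symmetry-counted total can be compared with the column-by-column count:

```python
import math

# Verify the hyperbola-method identity
# D(x) = 2*sum_{n<=sqrt(x)} floor(x/n) - floor(sqrt(x))^2
def D_columns(x):
    return sum(x // n for n in range(1, x + 1))

def D_hyperbola(x):
    s = math.isqrt(x)  # integer square root, i.e. floor(sqrt(x))
    return 2 * sum(x // n for n in range(1, s + 1)) - s * s

for x in (10, 100, 1234, 99991):
    assert D_columns(x) == D_hyperbola(x)
```

Note that the symmetric count only needs about $\sqrt{x}$ terms, which is why Dirichlet's version makes fewer approximations.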
Arithmetic along the lines of \eqref{D1} transforms this into:
\begin{align*} D(x) = x \log x + (2\gamma - 1)x + O(\sqrt{x}). \end{align*}
Dirichlet had pinned down the constant. Upon dividing both sides above by $x$, the $O(\sqrt{x})$ turns into $O(\frac{1}{\sqrt{x}})$, which vanishes as $x$ becomes large. We have shown that the mean value of the divisor function $d(n)$ taken over $n\leqslant x$, for large $x$, is $\log(x) + 2\gamma - 1$.
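One can watch this error term behave numerically; here is an illustrative Python sketch (the constant 3 in the assertion is a generous bound of my own choosing, not from the article):

```python
import math

# Empirical look at the error term
# Delta(x) = D(x) - x log x - (2*gamma - 1) x, which should be O(sqrt(x)).
GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def D(x):
    return sum(x // n for n in range(1, x + 1))

for x in (100, 1000, 10000):
    delta = D(x) - x * math.log(x) - (2 * GAMMA - 1) * x
    assert abs(delta) < 3 * math.sqrt(x)
```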
Dirichlet’s divisor problem
Well, I feel content. Don’t you? But not Dirichlet. Never Dirichlet. Despite a decent amount of work, a rather sneaky argument and answering all of the questions that he had asked, he wanted to further dissect the $O(\sqrt{x})$. He was convinced that he was being far more imprecise than he needed to be: like approximating $[x^2]$ by $x^2+O(x)$, when in fact we can be sure that it is $x^2 + O(1)$.
Writing
\begin{equation} \mathit{\Delta}(x) = D(x) - x\log x - (2\gamma - 1)x, \end{equation}
our efforts so far have shown that $\mathit{\Delta}(x) = O(\sqrt{x})$. In his letter to Kronecker, Dirichlet hinted that he could replace $O(\sqrt{x})$ by something significantly more precise. Nothing in all of Dirichlet’s notes has been found to back up this claim. Perhaps it was simply a taunt aimed at Kronecker? Perhaps he made a mistake. In any case it gave birth to the
Dirichlet divisor problem: determine the smallest $\alpha$ such that $\mathit{\Delta}(x) = O\left(x^{\alpha+\varepsilon}\right)$ for every $\varepsilon > 0$.

The “$\varepsilon$ condition” is a neat way of saying that we are only interested in determining the smallest possible $\alpha$ up to a fixed power of $\log(x)$. The logarithm grows more slowly than any positive power of $x$. So for instance $x^2(\log(x))^7 = O(x^{2+\varepsilon})$ for every $\varepsilon> 0$.

The other end up
The first fifty years after Dirichlet’s letter saw little improvement. Progress was slow and the going was tough. Together with Littlewood, Hardy decided to take a different tack to the ‘O’ approach. Noting the difficulty of containing the error in a big-O envelope, their idea was to “attack the problem from the other end”, introducing the notion of ‘Ω’ results. They sought to display positive functions $g(x)$, for which there exists a constant $K$ such that the multiple $Kg(x)$ is exceeded by $|\mathit{\Delta}(x)|$ for arbitrarily large values of $x$.
By finding such a $g$ of as large an order of magnitude as they could, they were proving that the divisor sum error $\mathit{\Delta}(x)$ could be at best $O(g(x))$. In this way, they could establish the lower end of an interval of possible $\alpha$. The year 1915 saw Hardy prove an ‘Ω’ result showing that $\alpha \geqslant \frac{1}{4}$. When read alongside Dirichlet’s hyperbola result we surmise that $\alpha \in \lbrack \frac{1}{4}, \frac{1}{2}\rbrack$.

Hot oscillations
Since around 1922, the sharp end of research has involved playing with a daunting representation of the error:
\begin{equation} \mathit{\Delta}(x) = \frac{x^{1/4}}{\mathrm{\pi}\sqrt{2}}\sum\limits_{n=1}^{\infty}\frac{d(n)}{n^{3/4}} \cos \left(4\mathrm{\pi}\sqrt{nx} – \frac{1}{4}\mathrm{\pi} \right). \end{equation}
Deriving such an expression is no mean feat. As illustration: we can encode the divisor function in a Dirichlet series, which turns out to equal the square of the Riemann zeta function. Since this is a meromorphic function, the techniques of complex analysis can be put to work. Mellin transforms, asymptotic formulæ for Bessel functions and contour integration all feature in the derivation of the equation for $\mathit{\Delta}(x)$ above.
Because of the oscillatory nature of the cosine, researchers have used this handy expression to provide both lower bounds and upper bounds for $\alpha$, turning that infamous phrase “it’s not the size, it’s what you do with it” on its head. Here, it’s what you do with it that indicates its size: ‘Ω’ or ‘O’.
In the form above, the representation is not very useful. Far better is a truncation of the sum at some carefully chosen $n = N(x)$, together with a big-O error for the remainder of the terms. There is an art to deciding the truncation point so as to reveal dominant behaviour of the expression whilst keeping the error in check.

Closing in on α
Where does the problem stand? Thanks to Hardy we have a lower bound $\alpha \geqslant \frac{1}{4}$. In fact, from computational evidence and heuristic arguments this is expected to be exactly on the money—we believe that $\alpha = \frac{1}{4}$. If true, the first person to show an upper bound that is also $\frac{1}{4}$ will have solved Dirichlet’s divisor problem.
However, 150 years of dedicated work on the upper bound can be read in the decelerating sequence of improvements: $\frac{1}{2}$, $\frac{1}{3}$, $\frac{33}{100}$, $\frac{27}{82}$, $\frac{15}{46}$, $\frac{12}{37}$, $\frac{346}{1067}$, $\frac{35}{108}$, $\frac{7}{22}$. The current best has stood since 2003, when a contribution by Martin Huxley showed that $\alpha \leqslant \frac{131}{416} \approx 0.3149$.
We have confined $\alpha$ to the interval [0.25, 0.3149]. At the current rate of progress, we are still a long time from squeezing this interval to one number, thus pinning down the precise value of $\alpha$. Well over a century on, Dirichlet’s challenge is still to be met.
|
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
|
Solving Nonlinear Static Finite Element Problems
Here, we begin an overview of the algorithms used for solving nonlinear static finite element problems. This information is presented in the context of a very simple 1D finite element problem, and builds upon our previous entry on Solving Linear Static Finite Element Models.
A System of a Spring Attached to a Rigid Wall
Consider the system shown below, of a spring that is attached to a rigid wall at one end, and with an applied force at the other end. The stiffness of the spring is a function of the distance it is stretched, k(u)=exp(u). That is, the spring stiffness increases exponentially as it is stretched.
We are interested in finding the displacement of the end of the spring, where the force is applied. Just as we did earlier for the
linear problem, we can now write the following function describing the balance of forces on the node for the nonlinear finite element problem:
In this case, only the spring stiffness is dependent on the solution, but more generally, both the load and the properties of the elements can be arbitrarily dependent upon the solution in a nonlinear problem.
Let us plot out this function, and keep in mind that we are trying to find u such that f(u)=0.
Finding the solution to the problem is, in fact, only marginally different from the linear case. Recall that to solve the linear problem we took a single Newton-Raphson iteration — and we do the exact same thing here:
As you can see, we again start at an initial guess to the solution, u_0=0, and evaluate the function, f(u_0), as well as its derivative, f'(u_0). This gets us to the point u_1. By examination, we see that this is not the solution, since f(u_1) \ne 0. But if we continue to take Newton-Raphson iterations, as shown below, it becomes clear that we are approaching the solution to the problem. (For more details about this algorithm, you can use this resource on Newton’s method.)
So finding the solution to a nonlinear problem is essentially identical to solving a linear problem, except that we take multiple Newton-Raphson steps to get to the solution. In fact, we could continue to take iterations and get arbitrarily close to the solution, but this is not needed. As discussed earlier, we always run into issues of numerical precision on computers, so there is a practical limit to how close we can get. Let’s have a look at the results after several iterations:
i    u_i      |f(u_i)|   |u_{i-1}-u_i|   |f(u_{i-1})-f(u_i)|
0    0.000    2.000
1    2.000    12.77      2.000           10.77
2    1.424    3.915      0.576           8.855
3    1.035    0.914      0.389           3.001
4    0.876    0.104      0.159           0.810
5    0.853    0.002      0.023           0.102
6    0.852    0.001      0.001           0.001
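The iteration history above can be reproduced in a few lines of Python (an illustrative sketch of undamped Newton-Raphson, not COMSOL code; the model is the force balance f(u) = u*exp(u) - p with p = 2):

```python
import math

# Undamped Newton-Raphson for the 1D spring with stiffness k(u) = exp(u)
# and applied load p = 2, so the force balance is f(u) = u*exp(u) - p.
def f(u, p=2.0):
    return u * math.exp(u) - p

def df(u):
    # derivative of u*exp(u) by the product rule
    return math.exp(u) * (1.0 + u)

def newton(u0=0.0, tol=1e-3, max_iter=25):
    u = u0
    for i in range(max_iter):
        if abs(f(u)) < tol:
            return u, i
        u = u - f(u) / df(u)
    return u, max_iter

u, iters = newton()
# the converged displacement satisfies u*exp(u) = 2
assert abs(u * math.exp(u) - 2.0) < 1e-2
```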
After six iterations, we see here that the difference between successive values of f(u) and u, as well as the absolute value of f(u), is reduced to 0.001 or less. After six Newton-Raphson iterations starting from u_0=0, the solution has converged to within a tolerance of 0.001. When we solve nonlinear problems, we apply this algorithm until the solution has converged to within the desired tolerance. There is a second termination criterion: that the solver should take no more than a specified number of iterations. Whichever criterion is satisfied first, the tolerance or the iteration limit, stops the solver. Also, keep in mind the discussion from the blog post on solving linear static finite element problems about the numerical scaling of the problem. The tolerance criterion applies to the scaled solution vector, not the absolute values of the solution.
Although it is more complicated to visualize, this is the same algorithm used to solve problems where u is a vector, as is the case for typical nonlinear finite element problems. However, when solving a problem with hundreds, thousands, or even millions of degrees of freedom, it is desirable to take as few Newton-Raphson steps as possible. Recall that we need to solve \mathbf{u}_{i+1}=\mathbf{u}_{i}-[\mathbf{f}'(\mathbf{u}_{i})]^{-1}\mathbf{f}(\mathbf{u}_{i}) and that computing the inverse of the derivative is the most computationally intensive step. To avoid proceeding into a region where there is no solution, and to minimize the number of Newton-Raphson steps taken, COMSOL uses a damping factor. Consider again the first Newton-Raphson step plotted earlier, and observe that for this step |\mathbf{f}(\mathbf{u}_{i+1})|>|\mathbf{f}(\mathbf{u}_{i})|. So for this iteration, we have taken too large of a step. When this happens, COMSOL will perform a simple search along the interval [\mathbf{u}_{i},\mathbf{u}_{i+1}] for a point \mathbf{u}_{damped}=\mathbf{u}_i+\alpha(\mathbf{u}_{i+1}-\mathbf{u}_i) such that |\mathbf{f(u}_{damped})|<|\mathbf{f(u}_{i})|. The Newton-Raphson iteration scheme is then restarted at this point.
The term \alpha is known as the damping factor and has bounds 0< \alpha \le 1. As \alpha \rightarrow 0 we say that the damping is increased, while \alpha = 1 means that the problem is undamped. This method is attractive because the search requires only that COMSOL evaluates \mathbf{f(u}_{damped}) and the computational cost of this is quite low as compared to computing the derivative \mathbf{f'(u}_{i}) and its inverse [\mathbf{f}'(\mathbf{u}_i)]^\mathbf{-1}.
It is important to emphasize that this damping term has no direct physical interpretation. Although this method works quite well to improve convergence, there is very little physical insight that can be gleaned by examining the damping factor. Furthermore, although COMSOL does allow you to manually modify the damping factor, it is not generally possible to use any physical understanding or information from the model as guidance when doing so. The default choice of damping algorithm is difficult to outperform through manual intervention. However, there are other techniques that can be used, which are usually motivated by the physics of the problem, that work well when the default damped Newton-Raphson methods converge slowly or not at all.
Why Nonlinear Problems May Not Converge
Nonlinear problems are inherently difficult to solve since there are multiple ways in which the above solution procedure can fail to converge. Although there are many ways in which the Newton-Raphson method can fail, in practice we can reduce the discussion to the following cases.
Case 1: The Initial Condition is Too Far Away from the Solution
First, consider the same nonlinear problem as before, but with a different starting point, for example, u_0=-2. As we can see from the plot below, if we choose any initial condition u_0\le-1, the Newton-Raphson method cannot find a solution since the derivatives of f(u) do not point towards the solution. There is no solution to be found to the left of u_0=-1, so these starting points are outside of the radius of convergence of the Newton-Raphson method. The choice of initial condition can cause the Newton-Raphson method to fail to converge, even if a solution exists. So, unlike the linear case, where a well-posed problem will always solve, the convergence of nonlinear models may be highly dependent on the choice of starting condition. We will address later how best to choose a good initial condition.
Case 2: The Problem Does Not Have a Solution
The nonlinear solver will also fail if the problem itself does not have a solution. Consider again the problem from above, but with a spring stiffness of k(u)=\exp(-u). In other words, as the spring gets stretched, the stiffness decreases. If we plot out f(u) for a load of p=2, we see that there is no solution to be found. Unfortunately, the Newton-Raphson algorithm cannot determine that this is the case; the algorithm will simply fail to find a solution and terminate after a user-specifiable number of iterations.
Case 3: The Problem Is Non-Smooth and Non-Differentiable
Last, consider the case of a material property that has a discontinuous change in properties. For example, consider the same system as before, but with a spring stiffness that has different values over different intervals, a value of k=0.5 for u\le1.8, a value of k=1 for 1.8<u<2.2, and k=1.5 for u\ge2.2. If we plot out f(u) for this case we see that it is non-differentiable and discontinuous, which is a violation of the requirements of the Newton-Raphson method. It is also clear by examination that unless we choose a starting point in the interval 1.8<u<2.2 the Newton-Raphson iterations will oscillate between iterations outside of this interval.
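This oscillation is easy to demonstrate numerically (the load value p = 2 below is my own choice, placing the solution u = 2 in the middle interval):

```python
# Case 3 demonstration: piecewise-constant stiffness
# k = 0.5 for u <= 1.8, k = 1 for 1.8 < u < 2.2, k = 1.5 for u >= 2.2,
# with load p = 2. Starting outside 1.8 < u < 2.2, Newton-Raphson
# oscillates between two points instead of converging to u = 2.
def k(u):
    if u <= 1.8:
        return 0.5
    if u < 2.2:
        return 1.0
    return 1.5

def f(u, p=2.0):
    return k(u) * u - p

def newton_iterates(u0, n=8):
    us = [u0]
    for _ in range(n):
        u = us[-1]
        us.append(u - f(u) / k(u))  # f'(u) = k(u) on each smooth piece
    return us

us = newton_iterates(0.0)
# the iterates bounce between two points on either side of the middle interval
assert abs(us[-1] - us[-3]) < 1e-12 and abs(us[-1] - us[-2]) > 0.1
```

From u_0 = 0 the iterates jump to 4, then 4/3, then back to 4, and so on, never entering the interval that contains the solution.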
To summarize, so far we have introduced the damped Newton-Raphson method used to solve nonlinear finite element problems and discussed the convergence criteria used. We introduced several ways in which this method can fail to find a solution, including:
Choosing an initial condition that is too far away from the solution
Setting up a problem that does not have a solution
Defining a problem that is non-smooth and non-differentiable

Interpreting the COMSOL Log File

The Log File
We will soon discuss ways of addressing all of these issues, but first, let’s take a look at the log file of a typical nonlinear finite element problem. Below you will see the log file (with line numbers added) from a geometric nonlinear structural mechanics problem:
 1) Stationary Solver 1 in Solver 1 started at 10-Jul-2013 15:23:07.
 2) Nonlinear solver
 3) Number of degrees of freedom solved for: 2002.
 4) Symmetric matrices found.
 5) Scales for dependent variables:
 6) Displacement field (Material) (mod1.u): 1
 7) Iter  ErrEst    Damping    Stepsize  #Res #Jac #Sol
 8)  1    6.1       0.1112155  7         3    1    3
 9)  2    0.12      0.6051934  1.2       4    2    5
10)  3    0.045     1.0000000  0.18      5    3    7
11)  4    0.012     1.0000000  0.075     6    4    9
12)  5    0.0012    1.0000000  0.018     7    5    11
13)  6    1.6e-005  1.0000000  0.0015    8    6    13
14) Stationary Solver 1 in Solver 1: Solution time: 1 s
15) Physical memory: 849 MB
16) Virtual memory: 946 MB

Explanations

Line 1 reports the type of solver called and the start time. Line 2 reports that the software is calling the nonlinear system solver. Line 3 reports the size of the problem in terms of the number of degrees of freedom. Line 4 reports on the type of finite element matrix to be solved. Lines 5-6 report the scaling. In this case, the displacement field scale is 1 m, which is appropriate for the expected magnitude of the solution. Lines 7-13 report that six Newton-Raphson iterations were used to arrive at the converged solution. The first column reports the iteration number and the second reports the error estimate used to define convergence. By default, the convergence criterion is 0.001. The third column shows that some damping was used for the first two steps, but steps 3-6 were undamped. Lines 14-16 report the solution time and memory requirements.
Now you should have gained an understanding of how nonlinear static problems are solved in COMSOL as well as how to interpret the log file.
Comments (7) CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science TAGS CATEGORIES Chemical COMSOL Now Electrical Fluid General Interfacing Mechanical Today in Science
|
L-function
Calculates an estimate of the \(L\)-function (Besag's transformation of Ripley's \(K\)-function) for a spatial point pattern.
Usage
Lest(X, ...)
Arguments

Details
This command computes an estimate of the \(L\)-function for the spatial point pattern
X. The \(L\)-function is a transformation of Ripley's \(K\)-function, $$L(r) = \sqrt{\frac{K(r)}{\pi}}$$ where \(K(r)\) is the \(K\)-function.
See
Kest for information about Ripley's \(K\)-function. The transformation to \(L\) was proposed by Besag (1977).
The command
Lest first calls
Kest to compute the estimate of the \(K\)-function, and then applies the square root transformation.
For a completely random (uniform Poisson) point pattern, the theoretical value of the \(L\)-function is \(L(r) = r\). The square root also has the effect of stabilising the variance of the estimator, so that \(L(r)\) is more appropriate for use in simulation envelopes and hypothesis tests.
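The transformation itself is one line; a minimal Python sketch (not spatstat code) applied to the theoretical CSR K-function recovers L(r) = r:

```python
import math

# Besag's transformation L(r) = sqrt(K(r) / pi) applied to the
# theoretical K-function of a uniform Poisson process, K(r) = pi*r^2,
# which recovers the theoretical value L(r) = r.
def L_from_K(K, r):
    return math.sqrt(K(r) / math.pi)

def K_poisson(r):
    return math.pi * r ** 2

for r in (0.0, 0.5, 1.0, 2.5):
    assert abs(L_from_K(K_poisson, r) - r) < 1e-12
```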
See
Kest for the list of arguments.
Value
Essentially a data frame containing columns
the vector of values of the argument \(r\) at which the function \(L\) has been estimated
the theoretical value \(L(r) = r\) for a stationary Poisson process
Variance approximations
If the argument
var.approx=TRUE is given, the return value includes columns
rip and
ls containing approximations to the variance of \(\hat L(r)\) under CSR. These are obtained by the delta method from the variance approximations described in
Kest.
References
Besag, J. (1977) Discussion of Dr Ripley's paper.
Journal of the Royal Statistical Society, Series B, 39, 193--195.

See Also

Aliases

Lest

Examples
# NOT RUN {
data(cells)
L <- Lest(cells)
plot(L, main="L function for cells")
# }
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
|
Again, let \(x_i\) and \(x_j\) be specific components of the phase space vector \(x = (p_1,\cdots ,p_{3N},q_1,\cdots,q_{3N})\). Consider the canonical average
\[ \langle x_i \frac {\partial H}{\partial x_j} \rangle \]
given by
\begin{align*}
\left\langle x_i \frac{\partial H}{\partial x_j} \right\rangle
&= \frac{1}{Q} C_N \int dx\, x_i \frac{\partial H}{\partial x_j} e^{-\beta H(x)} \\
&= \frac{1}{Q} C_N \int dx\, x_i \left(-\frac{1}{\beta}\frac{\partial}{\partial x_j}\right) e^{-\beta H(x)}
\end{align*}
But
\begin{align*}
x_i \frac{\partial}{\partial x_j} e^{-\beta H(x)}
&= \frac{\partial}{\partial x_j}\left(x_i e^{-\beta H(x)}\right) - e^{-\beta H(x)} \frac{\partial x_i}{\partial x_j} \\
&= \frac{\partial}{\partial x_j}\left(x_i e^{-\beta H(x)}\right) - \delta_{ij} e^{-\beta H(x)}
\end{align*}
Thus,
\begin{align*}
\left\langle x_i \frac{\partial H}{\partial x_j} \right\rangle
&= -\frac{1}{\beta Q} C_N \int dx\, \frac{\partial}{\partial x_j}\left(x_i e^{-\beta H(x)}\right) + \frac{1}{\beta Q}\,\delta_{ij}\, C_N \int dx\, e^{-\beta H(x)} \\
&= -\frac{1}{\beta Q} C_N \int dx' \int dx_j\, \frac{\partial}{\partial x_j}\left(x_i e^{-\beta H(x)}\right) + kT\delta_{ij} \\
&= -\frac{1}{\beta Q} C_N \int dx' \left. x_i e^{-\beta H(x)} \right\vert_{x_j=-\infty}^{\infty} + kT\delta_{ij}
\end{align*}
Several cases exist for the surface term \(x_i e^{-\beta H(x)}\):

1. \(x_i = p_i\), a momentum variable. Then, since \(H \sim p_i^2\), \(e^{-\beta H}\) evaluated at \(p_i = \pm \infty\) clearly vanishes.
2. \(x_i = q_i\) and \(U \rightarrow \infty\) as \(q_i \rightarrow \pm \infty\), thus representing a bound system. Then \(e^{-\beta H}\) also vanishes at \(q_i = \pm \infty\).
3. \(x_i = q_i\) and \(U \rightarrow 0\) as \(q_i \rightarrow \pm \infty\), representing an unbound system. Then the exponential tends to 1 at both \(q_i = \pm \infty\), hence the surface term vanishes.
4. \(x_i = q_i\) and the system is periodic, as in a solid. Then, the system will be represented by some supercell to which periodic boundary conditions can be applied, and the coordinates will take on the same value at the boundaries. Thus, \(H\) and \(e^{-\beta H}\) will take on the same value at the boundaries and the surface term will vanish.
5. \(x_i = q_i\) and the particles experience elastic collisions with the walls of the container. Then there is an infinite potential at the walls so that \(U \rightarrow \infty\) at the boundary and \(e^{-\beta H} \rightarrow 0\) at the boundary.
Thus, we have the result
\[\langle x_i \frac {\partial H}{\partial x_j} \rangle = kT\delta_{ij}\]
The above cases cover many but not all situations, in particular, the case of a system confined within a volume \(V\) with reflecting boundaries. Then, surface contributions actually give rise to an observable pressure (to be discussed in more detail in the next lecture).
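The result \(\langle x_i \,\partial H/\partial x_j \rangle = kT\delta_{ij}\) can be checked by direct canonical sampling; below is an illustrative Python sketch for a 1D harmonic oscillator (the parameter values are arbitrary, my own choices):

```python
import math, random

# For H = p^2/(2m) + k q^2/2 in the canonical ensemble,
# <p dH/dp> = <p^2/m> and <q dH/dq> = <k q^2> should both equal kB*T,
# while the cross term <q dH/dp> = <q p/m> should vanish.
random.seed(0)
m, k_spring, kT = 1.5, 2.0, 0.7
N = 200_000

# canonical sampling: p ~ N(0, sqrt(m kT)), q ~ N(0, sqrt(kT/k))
ps = [random.gauss(0.0, math.sqrt(m * kT)) for _ in range(N)]
qs = [random.gauss(0.0, math.sqrt(kT / k_spring)) for _ in range(N)]

avg_p_dHdp = sum(p * p / m for p in ps) / N              # <p dH/dp>
avg_q_dHdq = sum(k_spring * q * q for q in qs) / N       # <q dH/dq>
avg_q_dHdp = sum(q * p / m for q, p in zip(qs, ps)) / N  # <q dH/dp>

assert abs(avg_p_dHdp - kT) < 0.02
assert abs(avg_q_dHdq - kT) < 0.02
assert abs(avg_q_dHdp) < 0.02
```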
|
Given a single vector field $A_\mu(x)$, is it possible to make a diffeomorphism-invariant action in 4 dimensions, in the same way that General Relativity is diffeomorphism invariant?
My first guess would be:
$$ S = \int \left( \det_{ij}(\partial_i A_j(x))\right)^{1/2} d^4x $$
or the same question with a single scalar field if we set $A_\mu(x)=\partial_\mu \phi(x)$ ? (But then I think we'd end up with zero).
I think this might be wrong because it doesn't use covariant derivatives so won't be diffeomorphism invariant. Do we always require a metric tensor in 4 dimensions?
(I know it's possible to do it with 4 vector fields, since we can define a metric tensor as $g_{\mu\nu}(x)=\eta_{ij}A^i_\mu(x) A^j_\nu(x)$. This is just the vierbein. But can it be done with just one?)
|
FARADAY MAGNETO OPTICAL (MCD) ACTIVITY IN CUBIC COMPLEXES
Issue Date: 1970
Publisher: Ohio State University
Abstract:
Magneto-optical rotational strengths of d-d and charge-transfer electronic transitions were measured using MCD spectroscopy. Magnetic fields ranging from 25 000 gauss to 45 000 gauss were employed. The metal ions of most compounds investigated were in the environment of octahedral ($O_{h}$) microsymmetry. The magnitudes of Faraday parameters A(a$\rightarrow$j), B(a$\rightarrow$j) and C(a$\rightarrow$j), and of dipole strengths D(a$\rightarrow$j) were obtained from these experiments. Also, the magnitudes, signs, and usefulness of newly defined magneto-optical $g_{P}$-values for Faraday parameters, P = A, B, and C, of each electronic transition $a\rightarrow j$ will be discussed:
$$ g^{\circ K}_{P}[P(a\rightarrow j)] = \frac{[\vartheta(P)]^{\circ K}_{M\,\max}}{\epsilon_{\max}} \times 10^{4} $$
Experimental evidence for magnitudes of excited-state orbital angular momenta will be presented. In a number of compounds parameter A(a$\rightarrow$j) will be considered within the framework of Stephens’ proposed vibronic intensity mechanism.
Description:
Address of Robert G. Denning: University of Oxford, Oxford, England, since September 1968.
Author Institution: Department of Chemistry, North Carolina State University; Department of Chemistry, University of Illinois
Type:article Other Identifiers:1970-K-5
Items in Knowledge Bank are protected by copyright, with all rights reserved, unless otherwise indicated.
|
AliPhysics b76e98e (b76e98e)
#include <AliFMDCorrNoiseGain.h>
AliFMDCorrNoiseGain ()
AliFMDCorrNoiseGain (const AliFMDFloatMap &map)
Float_t Get (UShort_t d, Char_t r, UShort_t s, UShort_t t) const
void Set (UShort_t d, Char_t r, UShort_t s, UShort_t t, Float_t x) const
AliFMDFloatMap & Values ()
AliFMDFloatMap fValues
Get the noise calibration. That is, the ratio
\[ \frac{\sigma_{i}}{g_{i}k} \]
where \( k\) is a constant determined by the electronics of units DAC/MIP, and \( \sigma_i, g_i\) are the noise and gain of the \( i \) strip respectively.
This correction is needed because some of the reconstructed data (that which has an AliESDFMD class version less than or equal to 3) used the wrong zero-suppression factor. The zero-suppression factor used by the on-line electronics was 4, but due to a coding error in the AliFMDRawReader a zero-suppression factor of 1 was assumed during the reconstruction. This shifts the zero of the energy-loss distribution artificially towards the left (lower-valued signals).
So let's assume the real zero-suppression factor is \( f\) while the zero suppression factor \( f'\) assumed in the reconstruction was (wrongly) lower. The number of ADC counts \( c_i'\) used in the reconstruction can be calculated from the reconstructed signal \( m_i'\) by
\[ c_i' = m_i' \times g_i \times k / \cos\theta_i \]
where \(\theta_i\) is the incident angle of the \( i\) strip.
This number of counts used the wrong zero-suppression factor \( f'\), so to correct to the on-line value, we need to do
\[ c_i = c_i' - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor \]
which gives the correct number of ADC counts over the pedestal. To convert back to the scaled energy loss signal we then need to calculate (noting that \( f,f'\) are integers)
\begin{eqnarray} m_i &=& \frac{c_i \times \cos\theta_i}{g_i \times k}\\ &=& \left(c_i' - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor\right)\frac{\cos\theta}{g_i \times k}\\ &=& \left(\frac{m_i'\times g_i\times k}{\cos\theta} - \lfloor f'\times n_i\rfloor + \lfloor f\times n_i\rfloor\right) \frac{\cos\theta}{g_i \times k}\\ &=& m_i' + \frac{1}{g_i \times k} \left(\lfloor f\times n_i\rfloor- \lfloor f'\times n_i\rfloor\right)\cos\theta\\ &=& m_i' + \frac{\lfloor n_i\rfloor}{g_i \times k} \left(f-f'\right)\cos\theta \end{eqnarray}
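The correction derived above can be sketched in plain Python (this is an illustration of the algebra, not the AliPhysics C++ class; all numerical values are made up):

```python
import math

# Fix up a wrongly reconstructed signal m_i' using the true and assumed
# zero-suppression factors f and f', following the derivation above.
def correct_signal(m_prime, gain, k, noise, cos_theta, f=4, f_prime=1):
    # back out the ADC counts used in the reconstruction
    c_prime = m_prime * gain * k / cos_theta
    # restore the correct zero suppression
    c = c_prime - math.floor(f_prime * noise) + math.floor(f * noise)
    # convert back to a scaled energy-loss signal
    return c * cos_theta / (gain * k)

m_prime, gain, k, noise, cos_theta = 1.2, 2.5, 1.8, 1.3, 0.9
corrected = correct_signal(m_prime, gain, k, noise, cos_theta)
# agrees with the closed form m_i' + (floor(f n) - floor(f' n)) cos(theta)/(g k)
expected = m_prime + (math.floor(4 * 1.3) - math.floor(1 * 1.3)) * cos_theta / (gain * k)
assert abs(corrected - expected) < 1e-12
```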
|
I haven't checked the details of this myself, so I can't tell you the correct answers to your questions, but I would suggest that you try to apply the adjoint functor theorem of category theory:
http://en.wikipedia.org/wiki/Adjoint_functors#General_existence_theorem
To translate the problem into categorical language, let $\mathbf{Smgrp}$ denote the category of semigroups (with semigroup homomorphisms as morphisms) and let $\mathbf{Inv}$ denote the category of inverse semigroups (the morphisms are, again, semigroup homomorphisms; it is easy to see that a morphism of semigroups preserves inverses as you've defined them above). Clearly, $\mathbf{Inv}$ is a subcategory of $\mathbf{Smgrp}$, so there is a forgetful functor $F: \mathbf{Inv} \to \mathbf{Smgrp}$. Since the construction here is called the universal enveloping inverse semigroup of a semigroup, we should expect that the construction constitutes a left adjoint functor to $F$.
To spell out the universal property, this means that if $S_{I}$ is the universal inverse semigroup of the semigroup $S$, then there is a semigroup homomorphism $f: S \to S_{I}$ (formally, a morphism $f: S \to F(S_{I})$) such that whenever $g: S \to H$ (formally, $g: S \to F(H)$) is a semigroup homomorphism with $H$ an inverse semigroup, then there exists a unique semigroup homomorphism $h: S_{I} \to H$ such that $g = h \circ f$ (formally, $g = F(h) \circ f$).
It is now a simple matter of checking that the conditions of the theorem are satisfied; the category $\mathbf{Inv}$ is complete (it has products and equalizers, and hence all (small) limits), so the theorem applies. If you want the semigroups involved to be commutative, simply change the categories appropriately.
More details
Proof sketch that the left adjoint exists: First of all, we should look at limits in $\mathbf{Smgrp}$ and $\mathbf{Inv}$; they turn out to be "the same": if $\{ S_{i} \}_{i \in I}$ is some family of semigroups, the product is simply the cartesian product of the underlying sets, with the obvious "pointwise" operation. If all semigroups are inverse semigroups, the result is an inverse semigroup. You can easily check that it has the requisite universal property for a product, see
http://en.wikipedia.org/wiki/Product_(category_theory)
If $f,g: S_1 \to S_2$ are semigroup homomorphisms, their equalizer is $$E = \{ s \in S_1 \,|\, f(s) = g(s) \}$$ together with the inclusion homomorphism $e: E \hookrightarrow S_1$. $E$ is a semigroup in the obvious way, and is an inverse semigroup if $S_1$ and $S_2$ are (if $x \in E$ has inverse $y \in S_1$, we have $f(x) = g(x)$, and it follows that both $g(y)$ and $f(y)$ are inverses of $f(x) = g(x)$; by uniqueness, $f(y) = g(y)$, so $y \in E$ also). You can again check that $(E,e)$ has the requisite universal property, see
http://en.wikipedia.org/wiki/Equalizer_(mathematics)
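For finite semigroups the equalizer construction above is easy to compute directly. Here is a toy sketch in Python; the choice of $\mathbb{Z}_6$ under multiplication mod 6 and the two particular homomorphisms are purely illustrative:

```python
# Toy sketch: the equalizer of two semigroup homomorphisms f, g : S1 -> S2
# is the subset E = { s in S1 | f(s) == g(s) }, which is closed under the
# operation and hence a sub-semigroup.

def equalizer(S1, f, g):
    """Elements of S1 on which f and g agree."""
    return {s for s in S1 if f(s) == g(s)}

def is_subsemigroup(E, op):
    """Check closure of E under the semigroup operation."""
    return all(op(a, b) in E for a in E for b in E)

# Example: S1 = S2 = Z_6 under multiplication mod 6 (a commutative semigroup).
S1 = set(range(6))
op = lambda a, b: (a * b) % 6
f = lambda s: s            # the identity homomorphism
g = lambda s: (s * s) % 6  # s -> s^2 is a homomorphism since Z_6 is commutative

E = equalizer(S1, f, g)
print(E)                       # {0, 1, 3, 4} -- the idempotents of (Z_6, *)
print(is_subsemigroup(E, op))  # True
```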
Hence both $\mathbf{Smgrp}$ and $\mathbf{Inv}$ are (small-)complete categories, see http://en.wikipedia.org/wiki/Limit_(category_theory) (under Existence of limits).
We get that $F$ preserves limits for free, since limits are constructed in exactly the same manner in both categories, and the morphisms are the same. What remains is the solution set condition. Fix a semigroup $S$, and consider a semigroup homomorphism $f: S \to F(H)$, for some inverse semigroup $H$. The idea is to take the solution set to be the isomorphism classes of inverse semigroups generated by $f(S)$ for some $f$. This will be a set if the cardinality of the inverse semigroup $\langle f(S) \rangle \subseteq H$, generated by $f(S) \subseteq H$, is bounded, for a given $S$. This appears to be the case here, since you can start with $G = f(S) \cup f(S)^{-1}$ (where $f(S)^{-1}$ denotes the set of inverses in $H$ for elements in $f(S)$), and consider all finite products of elements in $G$. This will be an inverse semigroup (because, as you mentioned, the formula $(xy)^{-1} = y^{-1}x^{-1}$ holds). You still need to check that this actually is a solution set, but that should be reasonably easy.
Note that this only answers the question of existence, it doesn't provide an explicit construction. My guess is that coming up with one isn't going to be too difficult (and it appears that you already have some ideas in that direction, at least for specific examples), perhaps the construction of the Grothendieck group can be emulated.
|
I have just been to Perimeter Institute, by generous invitation of Thomas Galley. I gave a talk there about my recent-ish paper, Probability in two deterministic universes. Since I have already blogged about it here, I’m not writing about it again, but rather what I discussed with Thomas about his derivations of the Born rule.
I was interested in his most recent derivation, which, besides structural assumptions about measurements and probabilities, needs two substantial assumptions: no-signalling and the possibility of state estimation (state estimation for brevity). No-signalling is well-motivated and well-understood, but I was curious about state estimation. What does it mean? What does a theory that violates it look like?
The precise definition is that state estimation is true if there is a finite set of measurement outcomes1 whose probabilities completely determine the quantum state. Or conversely, if state estimation fails, then for any finite set of measurement outcomes there are two different quantum states that give the same probabilities for all these outcomes. This is clearly not obeyed by quantum mechanics in the case of infinite-dimensional systems — you need to know the probability at each point in space to completely determine the wavefunction, which is an infinite set of outcomes2 — so the authors require it only for finite-dimensional systems.
How bad is it to violate it for finite-dimensional systems, then? What can you learn about the quantum state with a reasonably small number of measurement outcomes? A good approximation, or would you have little idea about what the quantum state is? It seems that the former is the case. To illustrate that, we came up with a rather artificial theory where the measurements allow you to deterministically read off bits from some representation of the quantum state; for the case of a qubit $\ket{\psi}=\cos\theta\ket{0}+e^{i\varphi}\sin\theta\ket{1}$ a measurement would tell you the $n$th bit of $\theta$ or $\varphi$. It is clear that this theory violates state estimation: for any finite set of measurements there will be a largest $n$ that they can reach, and therefore any pair of quantum states that differ only on bits higher than $n$ will be indistinguishable for this set of measurements. It is also clear that this violation is mild: with only $2n$ measurements we can get an $n$-bit approximation of any qubit, which is much better than what can be done in reality! In reality we need about $2^n$ measurements to estimate the probabilities, and therefore the amplitudes, with such accuracy.
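To make the deterministic bit-reading theory concrete, here is a minimal sketch in Python; representing the angle as $x=\theta/\pi\in[0,1)$ and the particular encoding are my own illustrative choices, not part of the paper:

```python
def nth_bit(x, n):
    """The n-th binary digit (n >= 1) of a real number x in [0, 1)."""
    return int(x * 2**n) % 2

# "Measure" the first 4 bits of x = theta/pi for a qubit with theta = pi/3,
# i.e. x = 1/3 = 0.010101..._2
x = 1 / 3
bits = [nth_bit(x, n) for n in range(1, 5)]
print(bits)  # [0, 1, 0, 1]

# Four deterministic outcomes pin x down to within 2**-4; estimating an
# amplitude to the same accuracy from measured probabilities would need
# on the order of 2**8 repetitions.
approx = sum(b * 2**-n for n, b in enumerate(bits, start=1))
assert abs(approx - x) < 2**-4
```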
This already tells us that state estimation is too strong; it needs at least to be qualified somehow in order to exclude the deterministic theory above. What does it mean in probabilistic theories, though? An often-considered toy theory is one where the structure of quantum mechanics is kept as it is, but the exponent in the Born rule is changed from $2$ to some $n$. More precisely, let the probability of obtaining outcome $i$ when measuring the state $\psi$ in the orthogonal basis $\{\ket{e_i}\}$ be \[ p(i|\psi) = \frac{|\langle e_i|\psi\rangle|^n}{\sum_{i'}|\langle e_{i'}|\psi\rangle|^n}. \] An interesting feature of this theory is that a finite set of measurement outcomes can distinguish all pure states (in fact the same measurements that distinguish them in quantum theory), so state estimation can only fail here for mixed states.
A nice example is the pair of ensembles \[\omega_A = \{(p,\ket{0}),(1-p,\ket{1})\}\] and \[\omega_B = \{(1/2,p^\frac1n\ket{0}+(1-p)^\frac1n\ket{1}),(1/2,p^\frac1n\ket{0}-(1-p)^\frac1n\ket{1})\}.\] In quantum mechanics ($n=2$) they are equivalent, both being represented by the density matrix \[ \rho = \begin{pmatrix} p & 0 \\ 0 & 1-p \end{pmatrix}. \] If $n\neq 2$, though, they are no longer equivalent, even though they give the same probabilities for measurements in the X, Y, and Z bases3. To distinguish them we just need to measure the ensembles in the basis \[\{p^\frac1n\ket{0}+(1-p)^\frac1n\ket{1},(1-p)^\frac1n\ket{0}-p^\frac1n\ket{1}\}.\] The probability of obtaining the first outcome for ensemble $\omega_A$ is $p^2 +(1-p)^2$, and for ensemble $\omega_B$ it is some complicated expression that depends on $n$.
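A quick numerical check of this claim (a sketch of the modified Born rule above; the helper names and the value $p=0.3$ are just illustrative):

```python
import numpy as np

def born_n(state, basis, n):
    """Modified Born rule: p(i) proportional to |<e_i|psi>|^n.
    The two basis vectors used below have equal norm, so their (missing)
    normalization cancels in the ratio."""
    w = np.array([abs(np.vdot(e, state))**n for e in basis])
    return w / w.sum()

def ensemble_prob(ensemble, basis, n):
    """Probability of the first outcome for a mixture [(weight, state), ...]."""
    return sum(q * born_n(psi, basis, n)[0] for q, psi in ensemble)

def prob_pair(p, n):
    a, b = p**(1 / n), (1 - p)**(1 / n)
    basis = [np.array([a, b]), np.array([b, -a])]  # orthogonal pair
    omega_A = [(p, np.array([1.0, 0.0])), (1 - p, np.array([0.0, 1.0]))]
    omega_B = [(0.5, np.array([a, b])), (0.5, np.array([a, -b]))]
    return ensemble_prob(omega_A, basis, n), ensemble_prob(omega_B, basis, n)

pA2, pB2 = prob_pair(0.3, 2)  # n = 2: the ensembles agree, both p^2 + (1-p)^2
pA3, pB3 = prob_pair(0.3, 3)  # n = 3: they give different probabilities
print(pA2, pB2, pA3, pB3)
```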
Now this is by no means a proof4, but it makes me suspect that it will be rather easy to distinguish any two ensembles that are not equivalent, by making a measurement that contains one of the pure states that was mixed in to make the ensemble. Then if we divide the Bloch sphere into a number of regions, assigning a measurement to cover each such region, we can estimate the state with a good enough approximation. Unlike the deterministic theory explored above, in this toy theory it is clearly more laborious to do state estimation than in quantum mechanics, but it is still firmly within the realm of possibility.
What now, then? If the possibility of state estimation is not a good assumption from which to derive the Born rule, is there a derivation in this operational framework that follows from better assumptions? It turns out that Galley himself has such a derivation, based only on similar structural assumptions together with no-signalling and purification, with no need for state estimation. But rather ironically, here the roles flip: while I find purification an excellent axiom to use, Galley is not a fan.
Let me elaborate. Purification is the assumption that every mixed state (like the ensembles above) is obtained by ignoring part of a pure state. It implies then that there are no "external" probabilities in the theory; if you want to flip a coin in order to mix two pure states, you better model that coin inside the theory, and as a pure state. Now Galley doesn't find purification so nice: for one, because classical theories fail purification, and also because it feels like postulating that your theory is universal, which is a big step to take, in particular when the theory in question is quantum mechanics.
Well, I find that classical theories failing purification is just one more example in a huge pile of examples of how classical theories are wrong. In this particular case they are wrong by being essentially deterministic, and only allowing for probabilities when they are put there by hand. About postulating the universality of the theory, indeed that is a big assumption, but so what? I don’t think good assumptions need to be self-evidently true, I just think they should be well-motivated and physically meaningful.
Addendum: A natural question to ask is whether both no-signalling and purification are necessary in such a derivation. It turns out that the answer is yes: the toy theory where the exponent in the Born rule is $n$ respects purification, when extended in the obvious way for composite systems, but violates no-signalling, and Galley's rule respects no-signalling but violates purification.
|
I promised I'd come back to this one because it merits special discussion. Now it's time to do exactly that, as this one (as well as its many variations) has piqued my ire every time I've seen it.
Most people who have had at least a basic prealgebra class tend to agree that the \(1+2\) in parentheses should be evaluated first, reducing the problem to \[6\div 2(3).\]
After that the battlefield gets bloody.
Do we do the multiplication first, or do we do the division first?
Some people say the multiplication needs to come first, valiantly shouting "PEMDAS" as their battle cry, arguing that since "M" comes before "D", the answer is \(1\).
Those who have more experience with order of operations and don't just rely on a silly (and wrong) mnemonic say that the division comes first, since multiplication and division are really the same under the hood - after all, division is equivalent to multiplication by the reciprocal - and by convention* are performed from left to right in the order they appear. For these more seasoned warriors, the answer is "obviously" \(9\).
Those in the latter camp are definitely applying better mathematical reasoning than those in the former. But do they have "the" correct answer? If you're like me, though you know that multiplication and division are supposed to happen in order from left to right, there's just something about that \(2(3)\) that catches your eye, that makes you feel like for some reason it "should" come first.
And that's why we need to talk about implicit (or implied) multiplication.
When we first learn multiplication, we write it with a cross (\(\times\)). But once variables like \(x\) start to come into play, we have to find new, less confusing ways to write multiplication. So instead of writing \(a\times b\), we have a few options.
We can use a dot: \[a\cdot b\] We can use parentheses: \[a(b)\] Or as long as both factors aren't numerals, we can just concatenate (attach) them: \[ab\] The first option is called explicit multiplication, because we've explicitly indicated our operation using a symbol. The latter two are called implicit multiplication: we know the intended operation is multiplication even though no explicit symbol was provided.
Why does this matter? As it turns out, in some conventions, implicit multiplication may actually take precedence over explicit multiplication and therefore over division! For example, the "Style and Notation Guide" for Physical Review, an American scientific journal, specifies that implicit multiplication should come before division when submitting manuscripts**. Of course there are other conventions in which this is not the case, but the point to understand here is that multiple conventions do exist.
What's more, we can't even turn to our trusty calculators to tell us which way is "the" correct way, because different calculators may follow different conventions!
Two Casio calculators
Two TI calculators
(Try this on your own calculator and see which convention it uses!)
I'd like to posit one more reason that the \(2(3)\) may feel like it "should" come first. One unfortunate side effect of trying to use existing punctuation where possible to represent mathematics is that certain symbols become overloaded: the same symbols can represent different things. In this case, the notation \(2(3)\) for multiplication bears a very strong resemblance to the notation \(f(3)\) for function evaluation! If the question were \[6\div f(1+2),\] even though we have no idea what function \(f\) is, there's no question that it would be evaluated before the division took place! This may be a possible reason that the implicit-trumps-explicit convention is followed in some circles.
The inevitable conclusion is that there is no single correct answer - it all depends on what convention you're using. At this point you may be ready to throw your hands up in despair. But there is hope. The best way to solve this kind of problem is ... you guessed it ... to use better notation in the first place! (I mean who uses the obelus (\(\div\)) anymore past 5th grade anyway?)
If you mean for multiplication to be done first, then say so! \[\frac{6}{2(1+2)}=1\] If you mean for division to be done first, then say so! \[\frac{6}{2}(1+2)=9\]
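Programming languages make the same point: they have no implicit multiplication, so you are forced to write the grouping you mean. In Python, for instance:

```python
# "6 ÷ 2(1+2)" has to be spelled out one way or the other:
mult_first = 6 / (2 * (1 + 2))   # the "2(1+2) binds tightly" reading
left_to_right = 6 / 2 * (1 + 2)  # strict left-to-right, as Python parses it

print(mult_first)     # 1.0
print(left_to_right)  # 9.0
```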
It all comes back to the point of the previous article, which I will make explicit one more time: Math isn't about symbols. Math is about ideas. If your symbols don't unambiguously convey your ideas, then use better symbols.
* The left-to-right convention is probably so because those who established the convention spoke the sorts of European languages for which \(6\div 2\) would be vocalized in that order - not always the case if you've ever heard how fractions are read out loud in Japanese or Korean! ** The guide may seem like it's claiming that all multiplication should come before division, but this is because they don't use explicit multiplication at all except in specific contexts such as indicating dimensions and performing operations on vectors. *** If you have a newer calculator, you may have a nifty fraction template that you can use to clear up confusion even further! If you're using a TI-84+, and you've got the newest operating system on your calculator, try hitting [ALPHA] and then [Y=]. If you see a little menu come up, choose "n/d" and voilà - you can now do fractions without getting lost in a sea of parentheses!
|
I am learning qiskit software and this term keeps popping up and I am unable to get a grasp on the technical definition given by wikipedia. For example, the functions state fidelity and process fidelity.
Simply put, it is the distance (similarity measure) between two quantum states. For example, the fidelity between $|0\rangle$ and $|1\rangle$ is less than the fidelity between $|0\rangle$ and $\frac{1}{\sqrt{2}}\big(|0\rangle + |1\rangle\big)$. You can also say it is the cosine of the smallest angle between two states, also called the cosine similarity.
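A quick numerical illustration of those two examples (a minimal NumPy sketch of the $|\langle\psi|\phi\rangle|^2$ convention; in qiskit itself you would call its state-fidelity utility instead):

```python
import numpy as np

def fidelity(psi, phi):
    """Fidelity of two pure states: |<psi|phi>|^2."""
    return abs(np.vdot(psi, phi))**2

zero = np.array([1.0, 0.0])
one  = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

print(fidelity(zero, one))   # 0.0 -- orthogonal states
print(fidelity(zero, plus))  # ~0.5
print(fidelity(zero, zero))  # 1.0 -- identical states
```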
It might be worth mentioning the physical motivation for these definitions and the concept of fidelity itself.
Unlike the classical computers we all know and love, quantum computers are fundamentally analog machines. What that means practically is that the gates you apply when you run code on a real quantum computer are going to be parameterized by a real variable. For example, in superconducting qubits, applying a single-qubit gate means driving your qubit with a (typically microwave-range) pulse from an arbitrary waveform generator. The amplitude, frequency, and time-duration of that pulse are all real-valued parameters, and as such they're all subject to some amount of error. These so-called 'unitary' errors are a separate issue from the errors that result from your qubit interacting with the environment. You've applied a real quantum gate, and prepared a real, coherent quantum state, but neither the gate nor the state is ever going to be exactly the one you intended.
That's where measures of fidelity come in, as a way of keeping track of just how close you can expect to come on your actual, physical quantum computer to the circuits you are producing in code.
In a way, fidelity forms the essential link between the neat digital niceties of your high level implementation and the messy realities of the quantum hardware itself.
That, at least, is how I understand it. I'd welcome any corrections.
The following video gives some more specific examples of the same idea https://www.youtube.com/watch?v=MtD1Z8MMrgY, while this article gives a pretty friendly, historically motivated explanation of the math involved.
Qualitatively, fidelity is a measure of the distance between two quantum states. Fidelity equal to 1 means that the two states are equal. In the case of a density matrix, fidelity represents the overlap with a reference pure state; fidelity equal to 1 then means that the density matrix is equivalent to the pure reference state.
Like every distance in metric spaces, it is obtained by means of an inner product. The inner product of two states is their overlap. The square of its absolute value is the fidelity (in the same way that distance is defined in Euclidean space). That is, $|\langle \psi_1 | \psi_2 \rangle |^2$, also known as the spectroscopic factor in some physics contexts.
For a mixed state the equivalent operation is evaluation with the density matrix, that is, $\langle \psi_1 | \rho_1 |\psi_1 \rangle$.
This measure is particularly important because quantum devices are noisy devices, therefore the fidelity of a result with respect to an exact solution represents the inverse of the noise.
|
I am going through the derivation of CMS convexity from the notes of Lesniewski
There is a transformation from $T_p$ forward measure to annuity measure $Q$ as
$$ P(0,T_p)E^{Q_{T_p}}\left[S(T_0,T)\right]=A(0,T_0,T_n)E^Q\left[S(T_0,T)\frac{P(T_0,T_p)}{A(t,T_0,T_n)}\right] $$
where $A(t,T_0,T_n)=\sum_{1\le i \le n} \alpha_i P(t,T_i)$ is the price at $t$ of an annuity paying $\alpha_i$ at $T_1,\dots,T_n$
Why is there an additional $P(T_0,T_p)$ term (the zero-coupon price at $T_0$ of a bond maturing at $T_p$) in the above equation?
Edit 1: I guess his notation is not clear. The right-hand side can be written as $E[S(T_0;T_0,T)D(0,T_p)]$ where $D(t,T_p)=e^{-\int_t^{T_p}r_s\,ds}$ is the stochastic discount factor and $S(T_0;T_0,T)$ is the swap rate at $T_0$ for the period from $T_0$ to $T$. Then $E[S(T_0;T_0,T)D(0,T_p)]=E^{Q_{T_p}}[S(T_0;T_0,T)]P(0,T_p)$, assuming $T_p$ is after $T_0$. At the same time, $E[S(T_0;T_0,T)D(0,T_p)]=A(0;T_0,T_n)E^Q \left[S(T_0;T_0,T)\frac{D(T_0,T_p)}{A(T_0;T_0,T_n)}\right]$ through a change of measure.
|
Century
First, you'd have to watch through a night to see if Polaris wobbles - currently, the radius is about 1° I think, but that changes with precession (and nutation, but that's small enough to ignore).
Once you know that, you can try to find a point in the sky that stays still all the time (like Polaris nearly does in our time). This is the celestial pole, the direction the axis of the earth points to. It moves in a circle with ~23° radius (= obliquity of the ecliptic, currently ~23.4°). The center of the circle is roughly between Polaris and Vega, which makes it easy to find.
Then, you have to see whether Polaris is approaching or departing it on its way on the "precession circle".
If you can measure the angle $\alpha$ it has travelled on its way on this circle, you can estimate which century you have been transported to: $year = 2000 + \alpha \frac {26000} {360}$ If you take exact values and do all the calculations and especially the measurements very precisely, you might get more than just the century.
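The formula above is a one-liner to compute; here is a small sketch (the function name is mine) that also returns the matching past epoch, since measuring the position on the circle only fixes $\alpha$ modulo 360°:

```python
PRECESSION_PERIOD = 26000  # years, approximate

def epoch_from_precession(alpha_deg):
    """Estimated year from the angle (degrees) Polaris has travelled along
    the precession circle since ~2000 CE, plus the past-epoch twin."""
    forward = 2000 + alpha_deg * PRECESSION_PERIOD / 360
    backward = 2000 + (alpha_deg - 360) * PRECESSION_PERIOD / 360
    return forward, backward

print(epoch_from_precession(90))  # (8500.0, -17500.0)
```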
If you really want to prepare, print this and pin it somewhere you will see it every day: https://en.wikipedia.org/wiki/File:Precession_N.gif
Beware that if you are transported more than 13,000 years back or forth, you will guess wrong.
In this case, you'll have to take the proper motion of the stars into account - you'll have to know not only the sky but also the motion of the stars (or at least a few of them) very well. That is the first thing that is really hard to do.
Maybe you'd have to invent something with sticks for the measurements.
Year
I guess you need to know the positions of the planets (especially Jupiter, because it's bright and not too fast) at the time you are transported to in order to derive the year - but you'd have to be a walking ephemeris for this.
Time of the year
The time of the year is actually much easier than the year - most of the time, you can even feel it.
If you still need to calculate it, it is a direct function of the position of the sun in the sky: You need to know the orientation of the ecliptic in the sky and the equinox of our time - in Pisces; you can see it in the image in that Wikipedia link: the intersection of "0°" and the ecliptic. You measure the time in hours $h$ that passes from when either the spring or the fall equinox point (of our time) is at midheaven until the sun is at midheaven (noon). The current date of the year, measured (roughly) from January to December in values from 0 to 1, is: $ y = h/24-0.2+\frac \alpha {360}$ If you took the fall equinox, you have to add or subtract 0.5, whichever fits. Without the $-0.2$, it would be measured from spring equinox (~March 20th) to spring equinox.

Geographical latitude
The angle between the celestial pole mentioned above and the ground (as long as it's horizontal) gives you the geographical latitude directly: $\phi$ is just the altitude of the pole above the horizon.
Geographical longitude
Sorry, that's the thing I could not do.
|
Existence of Solutions of Some Nonlinear φ-Laplacian Equations with Neumann-Steklov Nonlinear Boundary Conditions
Abstract
We study the existence of solutions of the quasilinear equation $$(D(u(t))\phi(u'(t)))'=f(t,u(t),u'(t)),\qquad a.e. \;\;t\in [0,T],$$ subject to nonlinear Neumann-Steklov boundary conditions on $[0,T]$, where $\phi: (-a,a)\rightarrow \mathbb{R}$ (for $0 < a < \infty$) is an increasing homeomorphism such that $\phi(0)=0$, $f:[0,T]\times\mathbb{R}^{2} \rightarrow \mathbb{R}$ is an $L^1$-Carathéodory function, and $D: \mathbb{R}\rightarrow (0,\infty)$ is a continuous function. Using topological methods, we obtain existence and multiplicity results.
|
I thought I was done writing about this topic, but it just keeps coming back. The internet just cannot seem to leave this sort of problem alone:
oomfies solve this pic.twitter.com/0RO5zTJjKk — em ♥︎ (@pjmdolI) July 28, 2019

I don't know what it is about expressions of the form \(a\div b(c+d)\) that fascinates us as a species, but fascinate it does. I've written about this before (as well as why "PEMDAS" is terrible), but the more I've thought about it, the more sympathy I've found with those in the minority of the debate, and as a result my position has evolved somewhat.
So I'm going to go out on a limb, and claim that the answer should be \(1\).
Before you walk away shaking your head and saying "he's lost it, he doesn't know what he's talking about", let me assure you that I'm obviously not denying the left-to-right convention for how to do explicit multiplication and division. Nobody's arguing that.* Rather, there's something much more subtle going on here.
What we may be seeing here is evidence of a mathematical "language shift".
It's easy to forget that mathematics did not always look as it does today, but has arrived at its current form through very human processes of invention and revision. There's an excellent page by Jeff Miller that catalogues the earliest recorded uses of symbols like the operations and the equals sign -- symbols that seem timeless, symbols we take for granted every day.
People also often don't realize that this process of invention and revision still happens to this day. The modern notation for the floor function is a great example that was only developed within the last century. Even today on the internet, you occasionally see discussions in which people debate on how mathematical notation can be improved. (I'm still holding out hope that my alternative notation for logarithms will one day catch on.)
Of particular note is the evolution of grouping symbols. We usually think only of parentheses (as well as their variations like square brackets and curly braces) as denoting grouping, but an even earlier symbol used to group expressions was the vinculum -- a horizontal bar found over or under an expression. Consider the following expression: \[3-(1+2)\] If we wrote the same expression with a vinculum, it would look like this: \[3-\overline{1+2}\] Vincula can even be stacked: \[13-\overline{\overline{1+2}\cdot 3}=4\] This may seem like a quaint way of grouping, but it does in fact survive in our notation for fractions and radicals! You can even see both uses in the quadratic formula: \[x=\dfrac{-b\pm\sqrt{b^2-4ac}}{2a}\]
Getting back to the original problem, what I think we're seeing is evidence that concatenation -- placing symbols next to each other with no sort of explicit symbol -- has become another way to represent grouping.
"But wait", you might say, "concatenation is used to represent multiplication, not grouping!" That's certainly true in many cases, for example in how we write polynomials. However, there are a few places in mathematics that provide evidence that there's more to it than that.
First of all, as a beautifully-written Twitter thread by EnchantressOfNumbers (@EoN_tweets) points out, we use concatenation to show a special importance of grouping when we write out certain trigonometric expressions without putting their arguments in parentheses. Consider the following identity:
\[\sin 4u=2\sin 2u\cos 2u\] When we write such an equation, we're saying that not only do \(4u\) and \(2u\) represent multiplications, but that this grouping is so tight that they constitute the entire arguments of the sine and cosine functions. In fact, the space between \(\sin 2x\) and \(\cos 2x\) can also be seen as a somewhat looser form of concatenation. Then again, so can the space between \(\sin\) and \(x\), which represents a different thing -- the connection of a function to its argument. Perhaps this is why the popular (and amazing) online graphing calculator Desmos is only so permissive when it comes to parsing concatenation:
An even more curious case is mixed numbers. When writing mixed numbers, concatenation actually stands for addition, not multiplication. \[3\tfrac{1}{2}=3+\tfrac{1}{2}\] In fact, concatenation actually makes addition come before multiplication when we multiply mixed numbers! \[3\tfrac{1}{2}\cdot 5\tfrac{5}{6}=(3+\tfrac{1}{2})\cdot(5+\tfrac{5}{6})=20\tfrac{5}{12}\]
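Python's `fractions` module makes it easy to verify that mixed-number concatenation really does hide an addition:

```python
from fractions import Fraction

# 3 1/2 means 3 + 1/2, not 3 * 1/2:
three_and_a_half = 3 + Fraction(1, 2)      # 7/2
five_and_five_sixths = 5 + Fraction(5, 6)  # 35/6

product = three_and_a_half * five_and_five_sixths
print(product)  # 245/12, i.e. the mixed number 20 5/12
assert product == 20 + Fraction(5, 12)
```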
Now, you may feel that this example shows how mixed numbers are an inelegance in mathematical notation (and I would agree with you). Even so, I argue that this is evidence that we fundamentally view concatenation as a way to represent grouping. It just so happens that, since multiplication takes precedence over addition anyway in the absence of other grouping symbols, we use concatenation when we write it. This all stems from a sort of "laziness" in how we write things -- laying out precedence rules allows us to avoid writing parentheses, and once we've established those precedence rules, we don't even need to write out the multiplication at all.
So how does the internet's favorite math problem fit into all this?
The most striking feature of the expression \(8\div 2(2+2)\) is that it's written all in one line.
Mathematical typesetting is difficult. LaTeX is powerful, but has a steep learning curve, though various other editors have made it a bit easier, such as Microsoft Word's Equation Editor (which has much improved since when I first used it!). Calculators have also recognized this difficulty, which is why TI calculators now have MathPrint templates (though its entry is quite clunky compared to Desmos's "as-you-type" formatting via MathQuill).
Even so, all of these input methods exist in very specific applications. What about when you're writing an email? Or sending a text? Or a Facebook message? (If you're wondering "who the heck writes about math in a Facebook message", the answer at least includes "students who are trying to study for a test".) The evolution of these sorts of media has led to the importance of one-line representations of mathematics with easily-accessible symbols. When you don't have the ability (or the time) to neatly typeset a fraction, you're going to find a way to use the tools you've got. And that's even more important as we realize that everybody can (and should!) engage with mathematics, not just mathematicians or educators.
So that might explain why a physics student might type "hbar = h / 2pi", and others would know that this clearly means \(\hbar=\dfrac{h}{2\pi}\) rather than \(\hbar=\dfrac{h}{2}\pi\). Remember, mathematics is not just about answer-getting. It's about communication of ideas. And when the medium of communication limits how those ideas can be represented, the method of communication often changes to accommodate it.
What the infamous problem points out is that while almost nobody has laid out any explicit rules for how to deal with concatenation, we seem to have developed some implicit ones, which we use without thinking about them. We just never had to deal with them until recently, as more "everyday" people communicate mathematics on more "everyday" media.
Perhaps it's time that we address this convention explicitly and admit that concatenation really has become a way to represent grouping, just like parentheses or the vinculum. This is akin to taking a more descriptivist, rather than prescriptivist, approach to language: all we would be doing is recognizing that this is already how we do things everywhere else.
Of course, this would throw a wrench in PEMDAS, but that just means we'd need to actually talk about the mathematics behind it rather than memorizing a silly mnemonic. After all, as inane as these internet math problems can be, they've shown that (whether they admit it or not) people really do want to get to the bottom of mathematics, to truly understand it.
I'd say that's a good thing.
* If your argument for why the answer is \(16\) starts with "Well, \(2(2+2)\) means \(2\cdot(2+2)\), so...", then you have missed the point entirely.
|
LombScargle

class astropy.timeseries.LombScargle(t, y, dy=None, fit_mean=True, center_data=True, nterms=1, normalization='standard')
Compute the Lomb-Scargle Periodogram.
The implementations here are based on code presented in [R14388b5a5a57-1] and [R14388b5a5a57-2]; if you use this functionality in an academic application, citation of those works would be appreciated.
Parameters

t : array_like or Quantity
    sequence of observation times

y : array_like or Quantity
    sequence of observations associated with times t

dy : float, array_like, or Quantity, optional
    error or sequence of observational errors associated with times t

fit_mean : bool, optional (default=True)
    if True, include a constant offset as part of the model at each frequency. This can lead to more accurate results, especially in the case of incomplete phase coverage.

center_data : bool, optional (default=True)
    if True, pre-center the data by subtracting the weighted mean of the input data. This is especially important if fit_mean = False.

nterms : int, optional (default=1)
    number of terms to use in the Fourier fit

normalization : {'standard', 'model', 'log', 'psd'}, optional
    Normalization to use for the periodogram.
References
[R14388b5a5a57-1] VanderPlas, J., Connolly, A., Ivezic, Z. & Gray, A. Introduction to astroML: Machine learning for astrophysics. Proceedings of the Conference on Intelligent Data Understanding (2012)

[R14388b5a5a57-2] VanderPlas, J. & Ivezic, Z. Periodograms for Multiband Astronomical Time Series. ApJ 812.1:18 (2015)
Examples
Generate noisy periodic data:
>>> import numpy as np
>>> rand = np.random.RandomState(42)
>>> t = 100 * rand.rand(100)
>>> y = np.sin(2 * np.pi * t) + rand.randn(100)
Compute the Lomb-Scargle periodogram on an automatically-determined frequency grid & find the frequency of max power:
>>> frequency, power = LombScargle(t, y).autopower()
>>> frequency[np.argmax(power)]  # doctest: +FLOAT_CMP
1.0016662310392956
Compute the Lomb-Scargle periodogram at a user-specified frequency grid:
>>> freq = np.arange(0.8, 1.3, 0.1)
>>> LombScargle(t, y).power(freq)  # doctest: +FLOAT_CMP
array([0.0204304 , 0.01393845, 0.35552682, 0.01358029, 0.03083737])
If the inputs are astropy Quantities with units, the units will be validated and the outputs will also be Quantities with appropriate units:
>>> from astropy import units as u
>>> t = t * u.s
>>> y = y * u.mag
>>> frequency, power = LombScargle(t, y).autopower()
>>> frequency.unit
Unit("1 / s")
>>> power.unit
Unit(dimensionless)
Note here that the Lomb-Scargle power is always a unitless quantity, because it is related to the \(\chi^2\) of the best-fit periodic model at each frequency.
Attributes Summary
Methods Summary
autofrequency(self[, samples_per_peak, …]): Determine a suitable frequency grid for data.
autopower(self[, method, method_kwds, …]): Compute Lomb-Scargle power at automatically-determined frequencies.
design_matrix(self, frequency[, t]): Compute the design matrix for a given frequency.
distribution(self, power[, cumulative]): Expected periodogram distribution under the null hypothesis.
false_alarm_level(self, false_alarm_probability): Level of maximum at a given false alarm probability.
false_alarm_probability(self, power[, …]): False alarm probability of periodogram maxima under the null hypothesis.
from_timeseries(timeseries[, …]): Initialize a periodogram from a time series object.
model(self, t, frequency): Compute the Lomb-Scargle model at the given frequency.
model_parameters(self, frequency[, units]): Compute the best-fit model parameters at the given frequency.
offset(self): Return the offset of the model.
power(self, frequency[, normalization, …]): Compute the Lomb-Scargle power at the given frequencies.
Attributes Documentation
available_methods = ['auto', 'slow', 'chi2', 'cython', 'fast', 'fastchi2', 'scipy']
Methods Documentation
autofrequency(self, samples_per_peak=5, nyquist_factor=5, minimum_frequency=None, maximum_frequency=None, return_freq_limits=False)
Determine a suitable frequency grid for data.
Note that this assumes the peak width is driven by the observational baseline, which is generally a good assumption when the baseline is much larger than the oscillation period. If you are searching for periods longer than the baseline of your observations, this may not perform well.
Even with a large baseline, be aware that the maximum frequency returned is based on the concept of “average Nyquist frequency”, which may not be useful for irregularly-sampled data. The maximum frequency can be adjusted via the nyquist_factor argument, or through the maximum_frequency argument.
Parameters
samples_per_peak : float (optional, default=5)
    The approximate number of desired samples across the typical peak.
nyquist_factor : float (optional, default=5)
    The multiple of the average Nyquist frequency used to choose the maximum frequency if maximum_frequency is not provided.
minimum_frequency : float (optional)
    If specified, then use this minimum frequency rather than one chosen based on the size of the baseline.
maximum_frequency : float (optional)
    If specified, then use this maximum frequency rather than one chosen based on the average Nyquist frequency.
return_freq_limits : bool (optional)
    If True, return only the frequency limits rather than the full frequency grid.
Returns
frequency : ndarray or Quantity
    The heuristically-determined optimal frequency bin.
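The heuristic described above can be sketched in plain numpy. This is a rough reconstruction of the rule (grid spacing set by the observational baseline, maximum set by the "average Nyquist frequency"), not the astropy source; `autofrequency_grid` and the `0.5 * df` starting frequency are assumptions made here for illustration:

```python
import numpy as np

def autofrequency_grid(t, samples_per_peak=5, nyquist_factor=5,
                       minimum_frequency=None, maximum_frequency=None):
    """Rough sketch of the frequency-grid heuristic (not the astropy code)."""
    t = np.asarray(t, dtype=float)
    baseline = t.max() - t.min()
    # peak width in frequency is ~1/baseline; sample it samples_per_peak times
    df = 1.0 / (samples_per_peak * baseline)
    if minimum_frequency is None:
        minimum_frequency = 0.5 * df  # assumed starting point, half a grid step
    if maximum_frequency is None:
        # "average Nyquist frequency" for n points spread over the baseline
        avg_nyquist = 0.5 * len(t) / baseline
        maximum_frequency = nyquist_factor * avg_nyquist
    n_freq = int(np.ceil((maximum_frequency - minimum_frequency) / df))
    return minimum_frequency + df * np.arange(n_freq)
```

Note how the grid spacing shrinks as the baseline grows: a longer baseline means narrower peaks, hence a finer grid.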
autopower(self, method='auto', method_kwds=None, normalization=None, samples_per_peak=5, nyquist_factor=5, minimum_frequency=None, maximum_frequency=None)
Compute Lomb-Scargle power at automatically-determined frequencies.
Parameters
method : string (optional)
    Specify the Lomb-Scargle implementation to use. Options are:
    'auto': choose the best method based on the input
    'fast': use the O[N log N] fast method. Note that this requires evenly-spaced frequencies: by default this will be checked unless assume_regular_frequency is set to True.
    'slow': use the O[N^2] pure-python implementation
    'cython': use the O[N^2] cython implementation. This is slightly faster than method='slow', but much more memory efficient.
    'chi2': use the O[N^2] chi2/linear-fitting implementation
    'fastchi2': use the O[N log N] chi2 implementation. Note that this requires evenly-spaced frequencies: by default this will be checked unless assume_regular_frequency is set to True.
    'scipy': use scipy.signal.lombscargle, which is an O[N^2] implementation written in C. Note that this does not support heteroskedastic errors.
method_kwds : dict (optional)
    Additional keywords to pass to the Lomb-Scargle method.
normalization : {'standard', 'model', 'log', 'psd'}, optional
    If specified, override the normalization specified at instantiation.
samples_per_peak : float (optional, default=5)
    The approximate number of desired samples across the typical peak.
nyquist_factor : float (optional, default=5)
    The multiple of the average Nyquist frequency used to choose the maximum frequency if maximum_frequency is not provided.
minimum_frequency : float (optional)
    If specified, then use this minimum frequency rather than one chosen based on the size of the baseline.
maximum_frequency : float (optional)
    If specified, then use this maximum frequency rather than one chosen based on the average Nyquist frequency.
Returns
frequency, power : ndarrays
    The frequency and Lomb-Scargle power.
design_matrix(self, frequency, t=None)
Compute the design matrix for a given frequency.
Returns
X : np.ndarray, shape (len(t), n_parameters)
    The design matrix for the model at the given frequency.
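For intuition, the design matrix for the sinusoidal model described under model_parameters can be sketched as follows. This is a minimal reconstruction for illustration, not the astropy implementation; `design_matrix` here is a free-standing function:

```python
import numpy as np

def design_matrix(t, frequency, nterms=1, fit_mean=True):
    # Columns: optional constant offset, then a sin/cos pair per harmonic,
    # matching y(t) = theta_0 + sum_n [theta_{2n-1} sin(2 pi n f t)
    #                                  + theta_{2n} cos(2 pi n f t)].
    t = np.asarray(t, dtype=float)
    cols = [np.ones_like(t)] if fit_mean else []
    for n in range(1, nterms + 1):
        cols.append(np.sin(2 * np.pi * n * frequency * t))
        cols.append(np.cos(2 * np.pi * n * frequency * t))
    return np.column_stack(cols)
```

With `fit_mean=True` the matrix has `1 + 2 * nterms` columns, which is why the model-parameter vector has that length.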
distribution(self, power, cumulative=False)
Expected periodogram distribution under the null hypothesis.
This computes the expected probability distribution or cumulative probability distribution of periodogram power, under the null hypothesis of a non-varying signal with Gaussian noise. Note that this is not the same as the expected distribution of peak values; for that, see the false_alarm_probability() method.
Parameters
power : array_like
    The periodogram power at which to compute the distribution.
cumulative : bool (optional)
    If True, then return the cumulative distribution.
Returns
dist : np.ndarray
    The probability density or cumulative probability associated with the provided powers.
false_alarm_level(self, false_alarm_probability, method='baluev', samples_per_peak=5, nyquist_factor=5, minimum_frequency=None, maximum_frequency=None, method_kwds=None)
Level of maximum at a given false alarm probability.
This gives an estimate of the periodogram level corresponding to a specified false alarm probability for the largest peak, assuming a null hypothesis of non-varying data with Gaussian noise.
Parameters
false_alarm_probability : array-like
    The false alarm probability (0 < fap < 1).
maximum_frequency : float
    The maximum frequency of the periodogram.
method : {'baluev', 'davies', 'naive', 'bootstrap'}, optional
    The approximation method to use; default='baluev'.
method_kwds : dict, optional
    Additional method-specific keywords.
Returns
power : np.ndarray
    The periodogram peak height corresponding to the specified false alarm probability.
Notes
The true probability distribution for the largest peak cannot be determined analytically, so each method here provides an approximation to the value. The available methods are:
“baluev” (default): the upper-limit to the alias-free probability, using the approach of Baluev (2008) [1].
“davies” : the Davies upper bound from Baluev (2008) [1].
“naive” : the approximate probability based on an estimated effective number of independent frequencies.
“bootstrap” : the approximate probability based on bootstrap resamplings of the input data.
Note also that for normalization=’psd’, the distribution can only be computed for periodograms constructed with errors specified.
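As a sketch of the "naive" method above: treat the periodogram as some effective number of independent frequencies. The code below assumes the single-frequency tail probability $(1 - z)^{(N-3)/2}$ for the 'standard' normalization with a floating-mean model; `fap_naive` and `n_eff` are names chosen here for illustration, not astropy API:

```python
def fap_naive(z, n_samples, n_eff):
    # Single-frequency tail probability for 'standard' normalization
    # (floating-mean model): P(power > z) = (1 - z)**((N - 3)/2).
    p_single = (1.0 - z) ** ((n_samples - 3.0) / 2.0)
    # Naive approximation: n_eff independent draws of the single-frequency
    # statistic; the max exceeds z unless all draws stay below it.
    return 1.0 - (1.0 - p_single) ** n_eff
```

The Baluev and Davies variants refine exactly this step, replacing the independence assumption with an upper bound on the extreme-value probability.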
false_alarm_probability(self, power, method='baluev', samples_per_peak=5, nyquist_factor=5, minimum_frequency=None, maximum_frequency=None, method_kwds=None)
False alarm probability of periodogram maxima under the null hypothesis.
This gives an estimate of the false alarm probability given the height of the largest peak in the periodogram, based on the null hypothesis of non-varying data with Gaussian noise.
Parameters
power : array-like
    The periodogram value.
method : {'baluev', 'davies', 'naive', 'bootstrap'}, optional
    The approximation method to use.
maximum_frequency : float
    The maximum frequency of the periodogram.
method_kwds : dict (optional)
    Additional method-specific keywords.
Returns
false_alarm_probability : np.ndarray
    The false alarm probability.
Notes
The true probability distribution for the largest peak cannot be determined analytically, so each method here provides an approximation to the value. The available methods are:
“baluev” (default): the upper-limit to the alias-free probability, using the approach of Baluev (2008) [1].
“davies” : the Davies upper bound from Baluev (2008) [1].
“naive” : the approximate probability based on an estimated effective number of independent frequencies.
“bootstrap” : the approximate probability based on bootstrap resamplings of the input data.
Note also that for normalization=’psd’, the distribution can only be computed for periodograms constructed with errors specified.
classmethod from_timeseries(timeseries, signal_column_name=None, uncertainty=None, **kwargs)
Initialize a periodogram from a time series object.
If a binned time series is passed, the time at the center of the bins is used. Also note that this method automatically gets rid of NaN/undefined values when initializing the periodogram.
Parameters
signal_column_name : str
    The name of the column containing the signal values to use.
uncertainty : str or float or Quantity, optional
    The name of the column containing the errors on the signal, or the value to use for the error, if a scalar.
**kwargs
    Additional keyword arguments are passed to the initializer for this periodogram class.
model(self, t, frequency)
Compute the Lomb-Scargle model at the given frequency.
The model at a particular frequency is a linear model: model = offset + dot(design_matrix, model_parameters)
Parameters
t : array_like or Quantity, length n_samples
    Times at which to compute the model.
frequency : float
    The frequency for the model.
Returns
y : np.ndarray, length n_samples
    The model fit corresponding to the input times.
model_parameters(self, frequency, units=True)
Compute the best-fit model parameters at the given frequency.
The model described by these parameters is:
\[y(t; f, \vec{\theta}) = \theta_0 + \sum_{n=1}^{\tt nterms} [\theta_{2n-1}\sin(2\pi n f t) + \theta_{2n}\cos(2\pi n f t)]\]
where \(\vec{\theta}\) is the array of parameters returned by this function.
Parameters
frequency : float
    The frequency for the model.
units : bool
    If True (default), return design matrix with data units.
Returns
theta : np.ndarray (n_parameters,)
    The best-fit model parameters at the given frequency.
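The model formula above can be evaluated directly. A minimal sketch (not the astropy implementation; `sinusoid_model` is a name chosen here for illustration):

```python
import numpy as np

def sinusoid_model(t, frequency, theta):
    # y(t) = theta_0 + sum_n [theta_{2n-1} sin(2 pi n f t)
    #                         + theta_{2n} cos(2 pi n f t)]
    t = np.asarray(t, dtype=float)
    nterms = (len(theta) - 1) // 2
    y = np.full_like(t, theta[0])
    for n in range(1, nterms + 1):
        y = y + theta[2 * n - 1] * np.sin(2 * np.pi * n * frequency * t)
        y = y + theta[2 * n] * np.cos(2 * np.pi * n * frequency * t)
    return y
```

For example, with `theta = [1, 2, 3]` and `frequency = 1`, the model at `t = 0` is `1 + 2*sin(0) + 3*cos(0) = 4`.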
offset(self)
Return the offset of the model.
The offset of the model is the (weighted) mean of the y values. Note that if self.center_data is False, the offset is 0 by definition.
Returns
offset : scalar
power(self, frequency, normalization=None, method='auto', assume_regular_frequency=False, method_kwds=None)
Compute the Lomb-Scargle power at the given frequencies.
Parameters
frequency : array_like or Quantity
    Frequencies (not angular frequencies) at which to evaluate the periodogram. Note that in order to use method='fast', frequencies must be regularly-spaced.
method : string (optional)
    Specify the Lomb-Scargle implementation to use. Options are:
    'auto': choose the best method based on the input
    'fast': use the O[N log N] fast method. Note that this requires evenly-spaced frequencies: by default this will be checked unless assume_regular_frequency is set to True.
    'slow': use the O[N^2] pure-python implementation
    'cython': use the O[N^2] cython implementation. This is slightly faster than method='slow', but much more memory efficient.
    'chi2': use the O[N^2] chi2/linear-fitting implementation
    'fastchi2': use the O[N log N] chi2 implementation. Note that this requires evenly-spaced frequencies: by default this will be checked unless assume_regular_frequency is set to True.
    'scipy': use scipy.signal.lombscargle, which is an O[N^2] implementation written in C. Note that this does not support heteroskedastic errors.
assume_regular_frequency : bool (optional)
    If True, assume that the input frequency is of the form freq = f0 + df * np.arange(N). Only referenced if method is 'auto' or 'fast'.
normalization : {'standard', 'model', 'log', 'psd'}, optional
    If specified, override the normalization specified at instantiation.
fit_mean : bool (optional, default=True)
    If True, include a constant offset as part of the model at each frequency. This can lead to more accurate results, especially in the case of incomplete phase coverage.
center_data : bool (optional, default=True)
    If True, pre-center the data by subtracting the weighted mean of the input data. This is especially important if fit_mean = False.
method_kwds : dict (optional)
    Additional keywords to pass to the Lomb-Scargle method.
Returns
power : ndarray
    The Lomb-Scargle power at the specified frequency.
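For intuition about what power() computes, here is a minimal pure-numpy sketch of the classic O[N^2] Lomb-Scargle periodogram (pre-centered data, no floating mean, normalized so a noise-free sinusoid peaks near 1). This is an illustration of the underlying statistic, not the astropy implementation, and it supports none of the options above:

```python
import numpy as np

def lomb_scargle_power(t, y, freqs):
    t = np.asarray(t, dtype=float)
    y = np.asarray(y, dtype=float) - np.mean(y)  # pre-center the data
    yy = np.dot(y, y)
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        omega = 2.0 * np.pi * f
        # The time offset tau makes the sine and cosine terms orthogonal.
        tau = np.arctan2(np.sin(2 * omega * t).sum(),
                         np.cos(2 * omega * t).sum()) / (2.0 * omega)
        c = np.cos(omega * (t - tau))
        s = np.sin(omega * (t - tau))
        power[i] = (np.dot(y, c) ** 2 / np.dot(c, c)
                    + np.dot(y, s) ** 2 / np.dot(s, s)) / yy
    return power
```

On the noise-free version of the example data above, the peak lands on the true frequency of 1 cycle per unit time with power close to 1.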
|
Definition:Restriction/Operation Definition
Let $\left({S, \circ}\right)$ be an algebraic structure, and let $T \subseteq S$.
The restriction of $\circ$ to $T \times T$ is denoted $\circ {\restriction_T}$, and is defined as:
$\forall t_1, t_2 \in T: t_1 \mathbin{\circ {\restriction_T}} t_2 = t_1 \circ t_2$
The notation $\circ {\restriction_T}$ is generally used only if it is necessary to emphasise that $\circ {\restriction_T}$ is strictly different from $\circ$ (through having a different domain). When no confusion is likely to result, $\circ$ is generally used for both.
Thus in this context, $\left({T, \circ {\restriction_T}}\right)$ and $\left({T, \circ}\right)$ mean the same thing.
The use of the symbol $\restriction$ is a recent innovation over the more commonly-encountered $|$.
Thus the notation $\mathcal R |_{X \times Y}$ and $\struct {T, \circ|_T}$, etc. are currently more likely to be seen than $\mathcal R {\restriction_{X \times Y} }$ and $\struct {T, \circ {\restriction_T} }$.
No doubt as the convention becomes more established, the use of $\restriction$ will spread.
It is strongly arguable that $\restriction$, affectionately known as the harpoon, is preferable to $|$, as the latter suffers from the potential ambiguity of overuse.
Some authors prefer not to subscript the subset, and render the notation as:
$f \mathbin \restriction X = \set {\tuple {x, \map f x}: x \in X}$
but this is not recommended on $\mathsf{Pr} \infty \mathsf{fWiki}$ because it has less clarity.
Also see
The $\LaTeX$ code for \(f {\restriction_{X \times Y} }: X \to Y\) is f {\restriction_{X \times Y} }: X \to Y.
Note that because of the way MathJax renders the image, the restriction symbol and its subscript \restriction_T need to be enclosed within braces { ... } in order for the spacing to be correct.
The $\LaTeX$ code for \(s \mathrel {\mathcal R {\restriction_{X \times Y} } } t\) is s \mathrel {\mathcal R {\restriction_{X \times Y} } } t.
The $\LaTeX$ code for \(t_1 \mathbin {\circ {\restriction_T} } t_2\) is t_1 \mathbin {\circ {\restriction_T} } t_2.
Again, note the use of \mathrel { ... } and \mathbin { ... } so as to render the spacing evenly.
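The spacing advice above can be collected into one minimal fragment (assuming a MathJax or standard LaTeX math context):

```latex
% braces make \restriction and its subscript behave as a single symbol
$f {\restriction_{X \times Y}}: X \to Y$
% \mathrel / \mathbin request relation / binary-operator spacing
$s \mathrel {\mathcal R {\restriction_{X \times Y}}} t$
$t_1 \mathbin {\circ {\restriction_T}} t_2$
```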
|
Definition:Trigonometric Function
Definition
In the above right triangle, we are concerned about the angle $\theta$.
The sine of $\angle \theta$ is defined as being $\dfrac {\text{Opposite}} {\text{Hypotenuse}}$. Then the sine of $\theta$ is defined as the length of $AP$.
The real function $\sin: \R \to \R$ is defined as:
$\displaystyle \sin x = \sum_{n \mathop = 0}^\infty \paren {-1}^n \frac {x^{2 n + 1} } {\paren {2 n + 1}!} = x - \frac {x^3} {3!} + \frac {x^5} {5!} - \cdots$
The complex function $\sin: \C \to \C$ is defined as:
$\displaystyle \sin z = \sum_{n \mathop = 0}^\infty \paren {-1}^n \frac {z^{2 n + 1} } {\paren {2 n + 1}!} = z - \frac {z^3} {3!} + \frac {z^5} {5!} - \frac {z^7} {7!} + \cdots + \paren {-1}^n \frac {z^{2 n + 1} } {\paren {2 n + 1}!} + \cdots$
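The partial sums of this series converge very quickly for moderate arguments; a quick numerical check (illustrative only, `sin_series` is a name chosen here):

```python
import math

def sin_series(x, terms=20):
    # partial sum of sum_{n>=0} (-1)^n x^(2n+1) / (2n+1)!
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))
```

Twenty terms already agree with `math.sin` to machine precision for |x| around 1.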
In the above right triangle, we are concerned about the angle $\theta$.
The cosine of $\angle \theta$ is defined as being $\dfrac {\text{Adjacent}} {\text{Hypotenuse}}$. Then the cosine of $\theta$ is defined as the length of $OA$.
The real function $\cos: \R \to \R$ is defined as:
$\displaystyle \cos x = \sum_{n \mathop = 0}^\infty \paren {-1}^n \frac {x^{2 n} } {\paren {2 n}!} = 1 - \frac {x^2} {2!} + \frac {x^4} {4!} - \frac {x^6} {6!} + \cdots + \paren {-1}^n \frac {x^{2 n} } {\paren {2 n}!} + \cdots$
The complex function $\cos: \C \to \C$ is defined as:
$\displaystyle \cos z = \sum_{n \mathop = 0}^\infty \paren {-1}^n \frac {z^{2 n} } {\paren {2 n}!} = 1 - \frac {z^2} {2!} + \frac {z^4} {4!} - \frac {z^6} {6!} + \cdots + \paren {-1}^n \frac {z^{2 n} } {\paren {2 n}!} + \cdots$
In the above right triangle, we are concerned about the angle $\theta$.
The tangent of $\angle \theta$ is defined as being $\dfrac{\text{Opposite}} {\text{Adjacent}}$.
Let a tangent line be drawn to touch $C$ at $A = \left({1, 0}\right)$.
Then the tangent of $\theta$ is defined as the length of $AB$.
Let $x \in \R$ be a real number.
The real function $\tan x$ is defined as:
$\tan x = \dfrac {\sin x} {\cos x}$
where:
The definition is valid for all $x \in \R$ such that $\cos x \ne 0$.
Let $z \in \C$ be a complex number.
The complex function $\tan z$ is defined as:
$\tan z = \dfrac {\sin z} {\cos z}$
where:
The definition is valid for all $z \in \C$ such that $\cos z \ne 0$.
In the above right triangle, we are concerned about the angle $\theta$.
The cotangent of $\angle \theta$ is defined as being $\dfrac {\text{Adjacent}} {\text{Opposite}}$.
Let a tangent line be drawn to touch $C$ at $A = \left({0, 1}\right)$.
Then the cotangent of $\theta$ is defined as the length of $AB$.
Let $x \in \R$ be a real number.
The real function $\cot x$ is defined as:
$\cot x = \dfrac {\cos x} {\sin x} = \dfrac 1 {\tan x}$
where:
The definition is valid for all $x \in \R$ such that $\sin x \ne 0$.
Let $z \in \C$ be a complex number.
The complex function $\cot z$ is defined as:
$\cot z = \dfrac {\cos z} {\sin z} = \dfrac 1 {\tan z}$
where:
The definition is valid for all $z \in \C$ such that $\sin z \ne 0$.
In the above right triangle, we are concerned about the angle $\theta$.
The secant of $\angle \theta$ is defined as being $\dfrac{\text{Hypotenuse}} {\text{Adjacent}}$.
Let a tangent line be drawn to touch $C$ at $A = \left({1, 0}\right)$.
Then the secant of $\theta$ is defined as the length of $OB$.
Let $x \in \R$ be a real number.
The real function $\sec x$ is defined as:
$\sec x = \dfrac 1 {\cos x}$
where $\cos x$ is the cosine of $x$.
The definition is valid for all $x \in \R$ such that $\cos x \ne 0$.
Let $z \in \C$ be a complex number.
The complex function $\sec z$ is defined as:
$\sec z = \dfrac 1 {\cos z}$
where $\cos z$ is the cosine of $z$.
The definition is valid for all $z \in \C$ such that $\cos z \ne 0$.
In the above right triangle, we are concerned about the angle $\theta$.
The cosecant of $\angle \theta$ is defined as being $\dfrac{\text{Hypotenuse}} {\text{Opposite}}$.
Let a tangent line be drawn to touch $C$ at $A = \left({0, 1}\right)$.
Then the cosecant of $\theta$ is defined as the length of $OB$.
Let $x \in \R$ be a real number.
The real function $\csc x$ is defined as:
$\csc x = \dfrac 1 {\sin x}$
where $\sin x$ is the sine of $x$.
The definition is valid for all $x \in \R$ such that $\sin x \ne 0$.
Let $z \in \C$ be a complex number.
The complex function $\csc z$ is defined as:
$\csc z = \dfrac 1 {\sin z}$
where $\sin z$ is the sine of $z$.
The definition is valid for all $z \in \C$ such that $\sin z \ne 0$.
Also known as
A trigonometric function is sometimes referred to as a direct trigonometric function so as to distinguish it from an inverse trigonometric function.
|
On the geometry of the p-Laplacian operator
1. Mathematisches Institut, Universität zu Köln, 50923 Köln, Germany
2. Fakultät Maschinenbau, TH Ingolstadt, Postfach 21 04 54, 85019 Ingolstadt, Germany
The paper concerns the $p$-Laplacian operator $\Delta_p u = {\rm div}\left(|\nabla u|^{p-2}\nabla u\right)$ for $p \in (1,2) \cup (2,\infty)$ on a domain $\Omega$, its limiting behaviour as $p \to \infty$ and $p \to 1$ for $p \in [1,\infty]$, and the normalized operator $\Delta_p^N u := \tfrac{1}{p}|\nabla u|^{2-p}\Delta_p u = \tfrac{1}{p}\Delta_1^N u + \tfrac{p-1}{p}\Delta_\infty^N u$ together with the evolution equation $u_t - \Delta_p^N u = 0$ for $p \in (1,\infty)$.
Mathematics Subject Classification: Primary: 35J92; Secondary: 35K92, 35D40, 49L25.
Citation: Bernd Kawohl, Jiří Horák. On the geometry of the p-Laplacian operator. Discrete & Continuous Dynamical Systems - S, 2017, 10 (4): 799-813. doi: 10.3934/dcdss.2017040
|
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE
(Elsevier, 2017-11)
Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
|
Stoyan's Rule of Thumb for Bandwidth Selection
Computes a rough estimate of the appropriate bandwidth for kernel smoothing estimators of the pair correlation function and other quantities.
Usage
bw.stoyan(X, co=0.15)
Arguments
X : A point pattern (object of class "ppp").
co : Coefficient appearing in the rule of thumb. See Details.
Details
Estimation of the pair correlation function and other quantities by smoothing methods requires a choice of the smoothing bandwidth. Stoyan and Stoyan (1995, equation (15.16), page 285) proposed a rule of thumb for choosing the smoothing bandwidth.
For the Epanechnikov kernel, the rule of thumb is to set the kernel's half-width \(h\) to \(0.15/\sqrt{\lambda}\) where \(\lambda\) is the estimated intensity of the point pattern, typically computed as the number of points of X divided by the area of the window containing X.
For a general kernel, the corresponding rule is to set the standard deviation of the kernel to \(\sigma = 0.15/\sqrt{5\lambda}\).
The coefficient \(0.15\) can be tweaked using the argument co.
Value
A numerical value giving the selected bandwidth (the standard deviation of the smoothing kernel).
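The rule of thumb reduces to a single line. A sketch in Python (the package itself is R; `bw_stoyan` here is a hypothetical stand-alone version taking the point count and window area directly, rather than a "ppp" object):

```python
import math

def bw_stoyan(n_points, window_area, co=0.15):
    # Stoyan & Stoyan rule of thumb: kernel standard deviation
    # sigma = co / sqrt(5 * lambda), where lambda is the estimated
    # intensity (points per unit area).
    intensity = n_points / window_area
    return co / math.sqrt(5.0 * intensity)
```

Denser patterns get a smaller bandwidth, since the pair correlation function can be resolved at finer scales.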
References
Stoyan, D. and Stoyan, H. (1995) Fractals, random shapes and point fields: methods of geometrical statistics. John Wiley and Sons.
Aliases: bw.stoyan
Examples
# NOT RUN
data(shapley)
bw.stoyan(shapley)
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
|
K-Means, coverings, and Voronoi diagrams
This is the 4th in a series of posts on cluster-algorithms and ideas in data analysis.
The $k$-Means algorithm computes a Voronoi partition of the data set such that each landmark is given by the centroid of the corresponding cell. Let me quickly quote Wikipedia on the history of the algorithm before I explain what it is about:
The term “$k$-means” was first used by James MacQueen in 1967, though the idea goes back to Hugo Steinhaus in 1957. The standard algorithm was first proposed by Stuart Lloyd in 1957.
The $k$-Means Problem
Let $X \subset \mathbb{R}^n$ be a finite collection of data points. Fix a positive integer $k$. Then our aim is to find a partition $\boldsymbol S = \{S_1, \ldots, S_k\}$ of $X$ into $k$ subsets that minimizes the following function \[ J(\boldsymbol S) = \sum_{i=1}^k \sum_{x \in S_i} \| x - \mu(S_i) \|^2, \] where $\mu(S)$ denotes the mean of the points in $S$, i.e. \[ \mu(S) = \frac{1}{|S|}\sum_{x \in S} x. \] We denote by $\mu_* \big( \boldsymbol S \big)$ the collection of means of the sets in $\boldsymbol S$.
As a rule of thumb: in most of my posts, the $*$-functor applied to some construction (or function) $f$ can in functional-programming terms be translated to \[ f_*(Z) := \verb+map+ \ f \ Z. \]
Voronoi cells
Let $(Y,d)$ be a metric space. Let $\Lambda \subset Y$ be a finite subset called the landmarks. Given a landmark $\lambda \in \Lambda$ we define its associated Voronoi cell $V_\lambda$ by \[ V_\lambda := \{ y \in Y \ | \ d(y,\lambda) \leq d(y, \Lambda) \}. \] Suppose we are given a subset $X \subset Y$; then we introduce the following shorthand notation for a relative version of a Voronoi cell \[ V_{X, \lambda} := V_\lambda \cap X. \] When it is clear whether we are dealing with the relative or ordinary version we may omit the extra index. We write $V_*(\Lambda)$ resp. $(V_X)_*(\Lambda)$ for the whole collection of Voronoi cells associated to the landmarks $\Lambda$, i.e. for the relative version we have \[ (V_X)_*(\Lambda) := \{ V_{X,\lambda} \ | \ \lambda \in \Lambda \}. \]
Partitions and Voronoi cells
Suppose we have a discrete set $X$ embedded in $m$-dimensional Euclidean space $\mathbb{R}^m$ endowed with the Euclidean metric $d$. Suppose further we have chosen a family $\Lambda = \{\lambda_1,\ldots,\lambda_k\}$ of landmarks. We would like to produce a partition of $X$, i.e. a decomposition of $X$ into mutually disjoint sets, based on the Voronoi cells associated to $\Lambda$. However we are facing an ambiguity for points $x \in X$ with \[ d(x, \lambda_i) = d(x, \lambda_j), \text{ for some $i \neq j$}. \] We have to make a choice to which set we are assigning $x$ (and from which cell we are removing $x$). For the remaining part of this post we will:
Assign $x$ to the cell with the lower index, and remove it from the other.
There is no particular reason to go with this rule other than it is the easiest I could come up with. After reassigning all problematic points we end up with an honest partition of $X$. We will continue to denote these sets by $V_\lambda$ resp. $V_{X,\lambda}$, and continue to refer to them as Voronoi cells.
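The tie-breaking rule above can be implemented directly, since `np.argmin` returns the first (lowest-index) minimizer. A minimal sketch (`voronoi_partition` is a name chosen here):

```python
import numpy as np

def voronoi_partition(X, landmarks):
    # Assign each point of X to its nearest landmark; np.argmin picks the
    # first minimizer, which is exactly the lowest-index tie-breaking rule.
    X = np.asarray(X, dtype=float)
    L = np.asarray(landmarks, dtype=float)
    d = np.linalg.norm(X[:, None, :] - L[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return [X[labels == j] for j in range(len(L))]
```

A point exactly equidistant between two landmarks therefore always lands in the cell of the lower-indexed one.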
Lloyd’s algorithm
Computing the minimum of the function $J$ described above is usually too expensive. Instead one uses a heuristic algorithm to compute a local minimum. The most common is Lloyd’s algorithm, which I will sketch in the following: Suppose $X$ is a finite discrete set in $m$-dimensional Euclidean space $\mathbb{R}^m$ endowed with the Euclidean metric $d$, and suppose further we have fixed a positive integer $k$. Then choose an arbitrary partition $\boldsymbol S$, i.e. a decomposition of $X$ into a family of mutually disjoint sets $S_1,\ldots,S_k \subset X$. Then define a sequence $(C_n)_{n \in \mathbb{N}}$ of partitions as follows \[ C_n := L^n(\boldsymbol S) \ \ \ \text{, where } L:= (V_X \circ \mu)_*. \] It is not hard to show that this sequence converges (see the section below). Hence one can define the result of Lloyd’s algorithm applied to the initial partition $\boldsymbol S$ as \[ \mathscr{L}_{V,\mu}\big(\boldsymbol S \big) := \lim_{n \to \infty} C_{n}. \]
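The iteration $C_n = L^n(\boldsymbol S)$ can be sketched in a few lines of numpy; the function and parameter names here are illustrative, and the partition is represented by its label vector rather than by explicit sets:

```python
import numpy as np

def lloyd(X, init_means, max_iter=100):
    # Alternate the two steps of L = (V_X o mu)_*: compute the Voronoi
    # cells of the current means, then replace each mean by the centroid
    # of its cell.
    X = np.asarray(X, dtype=float)
    means = np.asarray(init_means, dtype=float).copy()
    for _ in range(max_iter):
        d = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
        labels = d.argmin(axis=1)  # ties break to the lower index
        new_means = np.array([X[labels == j].mean(axis=0)
                              if np.any(labels == j) else means[j]
                              for j in range(len(means))])
        if np.allclose(new_means, means):  # fixed point of L reached
            break
        means = new_means
    return labels, means
```

Since the loop stops at a fixed point of $L$, the convergence argument in the next section guarantees termination for exact arithmetic; `max_iter` is just a guard.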
Convergence of $C_n$
Observe that for any partition $\boldsymbol S$ of $X$ we have \[ J(\boldsymbol S) \geq J \big( L(\boldsymbol S) \big). \] Furthermore, equality holds if and only if $\boldsymbol S = L(\boldsymbol S)$. Therefore $J(C_n)$ defines a descending sequence in $\mathbb{R}$ which is bounded below by zero, hence it converges. Since the set of partitions of $X$ is finite, $J$ only takes a finite number of values. This implies that $J(C_n)$ takes a constant value for $n$ sufficiently large; by the observation above, $C_n$ is then constant for sufficiently large $n$ as well.
Drawbacks
A solution to the $k$-Means problem will always be a partition of $X$ into $k$ subsets, but such a solution is not always suited to be interpreted as a partition into “clusters”. Imagine a point cloud distributed according to a probability distribution centered along a straight line. Intuitively one would suggest a single “connected” cluster, yet $k$-means by definition suggests otherwise. Without further analysis we could not tell the difference between that particular data set and another one scattered around $k$ different centers. So one should really see the solution for what it is: a partition, or “cover”, of $X$. Luckily there are further directions to go from here and to build on top of $k$-Means. We briefly sketch one of those possible extensions in the section below.
Where to go from here – Witness complexes
Let $\boldsymbol S = \{ S_1,\ldots,S_k \} $ be a partition of $X$ obtained by Lloyd’s algorithm, say. We would like to associate a simplicial complex to $\boldsymbol S$. In a previous post on the Mapper construction I explained how to construct the nerve of a covering of $X$. However, since $\boldsymbol S$ is a partition, i.e. the sets are mutually disjoint, this construction will only yield a trivial zero-dimensional complex. All we have to do is to slightly enlarge the sets in the partition $\boldsymbol S$. For $\varepsilon > 0$ we define \[ \boldsymbol S_\varepsilon := \big\{ N_\varepsilon(S_1), \ldots, N_\varepsilon(S_k) \big\}, \] where $N_\varepsilon(S)$ denotes the epsilon neighbourhood of $S$ in $X$, i.e. the set of points in $X$ whose distance to $S$ is at most $\varepsilon$. We can compute the nerve $\check{N}(\boldsymbol S_\varepsilon)$ of the enlarged cover $\boldsymbol S_\varepsilon$. For the “right” choice of $\varepsilon$ we are now able to distinguish the two data sets given in the previous section.
A construction that is closely related (almost identical) to the above is the following. Suppose $\boldsymbol S$ is the set of Voronoi cells associated to a family $\Lambda$ of landmarks. The strong Witness complex $\mathcal{W}^s(X,\Lambda ,\varepsilon)$ is defined to be the complex whose vertex set is $\Lambda$, and where a collection $(\lambda_{j_0},\ldots, \lambda_{j_l})$ defines an $l$-simplex if and only if there is a witness $x \in X$ for $(\lambda_{j_0},\ldots, \lambda_{j_l})$, i.e. \[ d(x,\lambda_{j_m}) \leq d(x,\Lambda) + \varepsilon, \text{ for $m=0,\ldots,l$}. \]
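A naive computation of the strong witness complex just checks, for every candidate set of landmarks, whether some point of $X$ witnesses it. A small NumPy sketch (names are mine; it enumerates simplices up to a fixed dimension and is exponential in the number of landmarks, so it is only meant to illustrate the definition):

```python
import numpy as np
from itertools import combinations

def strong_witness_simplices(X, landmarks, eps, max_dim=2):
    """Simplices of the strong witness complex W^s(X, Lambda, eps):
    a set of l+1 landmarks spans an l-simplex iff some x in X lies
    within d(x, Lambda) + eps of every one of them."""
    D = np.linalg.norm(X[:, None, :] - landmarks[None, :, :], axis=-1)
    d_Lambda = D.min(axis=1)                      # d(x, Lambda) for each x
    simplices = []
    for l in range(1, max_dim + 1):               # l-simplices use l+1 landmarks
        for idx in combinations(range(len(landmarks)), l + 1):
            witnessed = (D[:, list(idx)] <= (d_Lambda + eps)[:, None]).all(axis=1)
            if witnessed.any():
                simplices.append(idx)
    return simplices

X = np.array([[0.0], [0.5], [1.0]])               # points on a line
landmarks = np.array([[0.0], [1.0]])
print(strong_witness_simplices(X, landmarks, eps=0.0))   # [(0, 1)]
```

The midpoint $0.5$ is equidistant from both landmarks, so it witnesses the edge between them even for $\varepsilon = 0$.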
|
Peter Saveliev
Hello! My name is Peter Saveliev. I am a professor of mathematics at Marshall University, Huntington WV, USA.
My current projects are these two books:
In part, the latter book is about
Discrete Calculus, which is based on a simple idea: $$\lim_{\Delta x\to 0}\left( \begin{array}{cc}\text{ discrete }\\ \text{ calculus }\end{array} \right)= \text{ calculus }.$$ I have been involved in research in algebraic topology and several other fields but nowadays I think this is a pointless activity. My non-academic projects have been: digital image analysis, automated fingerprint identification, and image matching for missile navigation/guidance. Once upon a time, I took a better look at the poster of Drawing Hands by Escher hanging in my office and realized that what is shown isn't symmetric! To fix the problem I made my own picture called Painting Hands:
Such a symmetry is supposed to be an involution of the $3$-space, $A^2=I$; therefore, its diagonalized matrix has only $\pm 1$ on the diagonal. These are the three cases:
(a) One $-1$: mirror symmetry, then pen draws pen. No! (b) Two $-1$s: $180$-degree rotation, then we have two right (or two left) hands. No! (c) Three $-1$s: central symmetry. Yes!

-Why is discrete calculus better than infinitesimal calculus? -Why? -Because it can be integer-valued! -And? -And the integer-valued calculus can detect if the space is non-orientable! Read Integer-valued calculus, an essay making a case for discrete calculus by appealing to topology and physics.

-The political “spectrum” might be a circle! -So? -Then there can be no fair decision-making system! Read The political spectrum is a circle, an essay based on the very last section of the topology book.

Note: I am frequently asked, what should "Saveliev" sound like? I used to care about that but got over that years ago. The one I endorse is the most popular: "Sav-leeeeeev". Or, simply call me Peter.
|
The qualifier "natural" is meant to exclude examples like "PA + P=NP" or "PA + True $\Pi_1$".
For concreteness, let's say that "natural" = sound, computably enumerable, with a feasible proof-checker. Context of the question.
A naive way to approach the P vs NP problem, from the logical point of view, could go like this:
Show that if P=NP, then P=NP is provable in some fixed system $S$, such as ZFC or ZFC+large cardinals; Improve the previous result for weaker and weaker $S$ $-$ for example, go from ZFC down to PA2, PA, $I\Sigma_1$, etc.; Once $S$ is as elementary as possible, use an ad-hoc argument to show $\not \vdash_S$ P=NP.
Of course, as all known approaches to the problem, this one quickly falls upon itself.
Proposition (folklore?). For every function $f$ which is computed by a Turing machine $M$, and for every natural formal system $S$ which proves that $M$ computes $f$, there exists a Turing machine $M'$ which computes $f$, such that $S$ does not prove that $M'$ computes $f$. The runtime of $M'$ is $O(n+Time(M))$.
Given input $x$, the machine $M'$ searches for a contradiction in $S$ for $|x|$ many steps. If no contradiction is found, it runs $M$ on $x$, returning the result; otherwise, it launches all nuclear missiles at once.
Of course, $M'$ computes $f$. But $S$ can never know this, because then it would know that there is no contradiction in $S$, contradicting Gödel's theorem.
The point of the above observation is that no formal system can make inferences about the correctness of algorithms from runtime constraints alone. Even if we assume that $S$ knows that some machine $M$ decides SAT in polynomial time, there will always be another machine $M'$ deciding SAT in polynomial time for which $S$ will not know this.
Motivation.
This seems troubling, since it can in principle be conceived that, while P=NP, the logical complexity of proving any polynomial-time satisfiability algorithm $M$ to be correct can be larger than the consistency strength of any formal theory $T$ that may be considered in say, the next 100 years:
$(\forall x\ M(x){\in}\{0,1\}\ \&\ (M(x)=1 \iff x \in SAT)) \Rightarrow \mathsf{Con}(T)$
Can such a situation be ruled out, for some natural extension $T$ of ZFC? This would mean exactly that $T$ answers the question posed in the title.
|
What is Perceptron?
The Perceptron is one of the simplest types of artificial neural network and was invented by Frank Rosenblatt in 1957. A single-layer Perceptron is typically used for binary classification problems (1 or 0, Yes or No). The goal of the Perceptron is to estimate the parameters that best predict the outcome, given the input features. The optimal parameters should yield the best approximation of the decision boundary which separates the input data into two parts. For data with a non-linear decision boundary, a more complicated algorithm, such as a deep neural network, is required instead of the Perceptron.
How the Perceptron works 1. Create and initialize the parameters of the network
In Perceptron, a single layer has two kinds of parameters: weights and biases.
Bias term is an additional parameter which is used to adjust the output along with the weighted sum of the inputs. Bias is a constant and often denoted as $w_0$.
Use random initialization for the weights: \[np.random.randn(shape)*0.01\]
Use zero initialization for the biases: \[np.zeros(shape)\]
2. Multiply weights by inputs and sum them up
Calculate the dot product of weights and inputs, and then add the bias term to it. This operation can be done easily by using NumPy, the package for scientific computing in Python:
\[np.dot(weights, inputs) + bias\]
This sum value is usually called the input of the activation function, or pre-activation parameter.
3. Apply activation function
The purpose of applying an activation function is to convert an input signal of a node to an output signal. In more detail, it restricts outputs to a certain range or value, which enhances the categorization of the data. There are many different types of activation functions. The Perceptron uses the binary step function, which is a threshold-based function:
\[\phi (z)=\begin{cases} 1 & \text{ if } z>0\\ -1 & \text{ if } z\leq 0 \end{cases}\]
4. Calculate the cost
In the Perceptron, the mean squared error (MSE) cost function is used to estimate how badly the model is performing. Our goal is to find the parameters that minimize this cost value.
The formula of MSE is:
\[\frac{1}{2m}\sum_{i=1}^{m}(y_i-\hat{y}_i)^2\]
where $m$: number of examples, $y$: true label vector, $\hat{y}$: output prediction vector.
5. Update the parameters
We need to update weights and biases by using derivative of cost function.
Perceptron uses an optimization algorithm called gradient descent to update the parameters.
Gradient can be computed as follows:
\[dw = \frac{1}{m}np.dot(inputs, \hat{y}-y)\]
\[db = \frac{1}{m}np.sum(\hat{y}-y)\]
where $dw$: derivative of cost function with respect to weights, $db$: derivative of cost function with respect to bias
Gradient descent algorithm:
\[weights = weights - learning\_rate*dw\]
\[bias = bias - learning\_rate*db\]
where $learning\_rate$ ($\alpha$): learning rate of the gradient descent update rule, with $0 < \alpha < 1$.
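Putting steps 1-5 together, a minimal end-to-end sketch of the loop described above (following the article's MSE/gradient-descent recipe on top of the step activation; function names and the toy data set are mine):

```python
import numpy as np

def step(z):
    """Step 3: binary step activation."""
    return np.where(z > 0, 1, -1)

def train(X, y, lr=0.1, epochs=20, seed=0):
    """X has shape (features, examples); y holds labels in {-1, +1}."""
    rng = np.random.default_rng(seed)
    m = X.shape[1]
    w = rng.standard_normal(X.shape[0]) * 0.01    # step 1: small random weights
    b = 0.0                                       # step 1: zero bias
    for _ in range(epochs):
        y_hat = step(np.dot(w, X) + b)            # steps 2-3: pre-activation + activation
        cost = np.sum((y - y_hat) ** 2) / (2 * m) # step 4: MSE (monitoring only)
        dw = np.dot(X, y_hat - y) / m             # step 5: dJ/dw (note the sign)
        db = np.sum(y_hat - y) / m                #         dJ/db
        w = w - lr * dw                           # gradient descent update
        b = b - lr * db
    return w, b

X = np.array([[1.0, 2.0, -1.0, -2.0]])            # 1 feature, 4 examples
y = np.array([1, 1, -1, -1])
w, b = train(X, y)
preds = step(np.dot(w, X) + b)                    # separates this toy data perfectly
```

Note that the step function is not differentiable, so this update is strictly speaking the delta rule applied through the activation rather than a true gradient of the MSE; it still converges on linearly separable data.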
|
I suppose the right way to do C (charge), T (time reversal), P(parity) transformation on the state $\hat{O}| v \rangle$ with operators $\hat{O}$ is that:
$$C(\hat{O}| v \rangle)=(C\hat{O}C^{-1})(C| v \rangle)\\P(\hat{O}| v \rangle)=(P\hat{O}P^{-1})(P| v \rangle)\\T(\hat{O}| v \rangle)=(T\hat{O}T^{-1})(T| v \rangle)$$
Thus to understand how an operator $\hat{O}$ transforms under C,P,T, we care about the following form$$\hat{O} \to (C\hat{O}C^{-1})\\\hat{O} \to (P\hat{O}P^{-1})\\\hat{O} \to (T\hat{O}T^{-1})$$
Here $\hat{O}=\hat{O}(\hat{\Phi},\hat{\Psi},a,a^\dagger)$ contains possible field operators ($\hat{\Phi},\hat{\Psi}$), or $a,a^\dagger$ etc.
To understand how a state $|v \rangle$ transforms, we care about$$| v \rangle\to C| v \rangle\\| v \rangle \to P| v \rangle\\| v \rangle\to T| v \rangle$$
However, in Peskin and Schroeder QFT book, throughout Chap 3, the transformation is done on the fermion field $\hat{\Psi}$(operator in the QFT) :$$\hat{\Psi} \to (C\hat{\Psi}C)? (Eq.3.145)\\\hat{\Psi} \to (P\hat{\Psi}P)? (Eq.3.128)\\\hat{\Psi} \to (T\hat{\Psi}T)? (Eq.3.139)$$
I suppose one should take one side as the inverse operator ($C\hat{\Psi}C^{-1}$, $P\hat{\Psi}P^{-1}$, $T\hat{\Psi}T^{-1}$). What is written there in Peskin and Schroeder QFT Chap 3 is incorrect, especially because $T \neq T^{-1}$ and $T^2 \neq 1$ in general ($T^2=-1$ for a spin-1/2 fermion).
Am I right? (P&S incorrect here) Or am I wrong on this point? (Why is that correct? I suppose S. Weinberg, M. Srednicki and A. Zee use the way I described.) This post imported from StackExchange Physics at 2014-06-04 11:39 (UCT), posted by SE-user Idear
|
Hananish Joy G. Odarve and Majvell Kay G. Odarve
The wavefunction, or quantum state, is a complete description that can be given into a physical system. The Schrodinger equation can describe how the wavefunction changes as time propagates.
A particle state, for example, can be determined by solving the Schrodinger equation. Since the potential of a system, [eq] V(r) [/eq], is a function of the distance from the origin, the spherical coordinates [eq] (r, \;\theta, \;\phi) [/eq] can be employed. The time-independent Schrodinger equation in spherical coordinates has the expression
[eq]\frac{-\hbar^2}{2m}[{\frac{1}{r^2}}\frac{\partial}{\partial r}\;(r^2 \frac{\partial \Psi}{\partial r})[/eq] [eq]+\frac{1}{{r^2}\sin \theta}\;\frac{\partial}{\partial \theta}(\sin \theta \frac{\partial \Psi}{\partial \theta}) [/eq][eq]+\frac{1}{{r^2}\sin^2 \theta}(\frac{\partial^2 \Psi}{\partial \phi^2})] + V\Psi = E\Psi[/eq] (1)
where [eq]\Psi[/eq] is a separable solution to Eqn.(1) given as
[eq]\Psi (r,\theta,\phi) = R(r)Y(\theta,\phi)[/eq].
Performing the separation of variables leads to the determination of the angular and the radial equations given, respectively as
[eq]\frac{1}{Y}\left[\frac{1}{\sin \theta}\frac{\partial}{\partial \theta}(\sin \theta \frac{\partial Y}{\partial \theta})+\frac{1}{\sin^2 \theta}\frac{\partial^2 Y}{\partial \phi^2}\right]= -l(l+1)[/eq] (2)
and
[eq]\frac{1}{R}\frac{d}{dr}(r^2\frac{dR}{dr})-\frac{2mr^2}{\hbar^2}\left(V(r)-E\right)=l(l+1)\;.[/eq] (3)
The actual shape of a given potential, [eq]V(r)[/eq], only affects the radial part of the wavefunction, [eq]R(r)[/eq], which can be determined from Eqn. (3). This can be simplified further by a change of variables where we let
[eq]U(r)=rR(r),[/eq]
leading to the general expression of the radial equation,
[eq]\frac{-\hbar^2}{2m}\;\frac{d^2U}{dr^2}+\left[V(r)+\frac{\hbar^2}{2m}\;\frac{l(l+1)}{r^2}\right]U = EU.[/eq] (4)
From here, we can compute the ground state of a particle with [eq]l=0[/eq]. To demonstrate this, we can study a particle of mass
m placed in a finite spherical well having a potential
[eq]V(r) = \left\{ \begin{array}{rl} -V_o, & if\;\;\;r\leq a\\ 0, & if\;\;\;r > a \end{array} \right.[/eq]
We also have to show that no bound states exist if
[eq]V_o\;\;a^2\;\;<\;\;\frac{\pi^2\hbar^2}{8m}.[/eq]
Now, the Schrodinger equation yields bound states when [eq]-V_o < E < 0[/eq]. We first investigate the region at [eq]r\leq a[/eq]. Eqn. (4) with [eq]l=0[/eq] becomes,
[eq]\frac{-\hbar^2}{2m}\;\frac{d^2U}{dr^2} - V_o U = EU[/eq]
[eq]\frac{d^2U}{dr^2} = -\frac{2m}{\hbar^2}(E+V_o)U[/eq]
where we let [eq]\epsilon = \sqrt{{\frac{2m}{\hbar^2}}(E+V_o)} [/eq]. Now we have,
[eq]\frac{d^2U}{dr^2}+\epsilon^2 U = 0[/eq]
The general solution for this expression is given as
[eq]U_{in}(r)=A\sin(\epsilon r)+ B\cos(\epsilon r).[/eq]
Thus, the radial equation inside the finite spherical well can be expressed as
[eq]R_{in}(r)=\frac{U_{in}(r)}{r}= \frac{A\sin(\epsilon r)}{r}+ \frac{B\cos(\epsilon r)}{r}.[/eq] (5)
We impose the boundary condition that the radial function must remain finite as [eq] r \rightarrow 0 [/eq]. However, the term [eq]\frac{B\cos(\epsilon r)}{r} [/eq] blows up there, so we set [eq]B=0[/eq]. Thus,
[eq]R_{in}(r)= \frac{A\sin(\epsilon r)}{r}.[/eq] (6)
Next, we investigate the region where [eq]r>a[/eq]. In this region, [eq]V(r)[/eq] is zero so Eqn. (4) becomes
[eq]\frac{-\hbar^2}{2m}\frac{d^2U}{dr^2} = EU[/eq]
[eq]\frac{d^2U}{dr^2} = -\frac{2m}{\hbar^2}EU.[/eq]
We then let [eq]\beta = \sqrt{-\frac{2mE}{\hbar^2}}[/eq] so we have
[eq]\frac{d^2U}{dr^2}-\beta^2 U = 0.[/eq]
The general solution for this expression is
[eq]U_{out}(r)=Ce^{\beta r}+ De^{-\beta r}.[/eq] (7)
The radial equation outside the spherical well is then expressed as
[eq]R_{out}(r)=\frac{U_{out}(r)}{r}= \frac{Ce^{\beta r}}{r}+ \frac{De^{-\beta r}}{r}.[/eq]
From here, we impose another boundary condition: the wavefunction must remain finite as [eq]r\rightarrow\infty[/eq]. However, the term [eq]e^{\beta r} \rightarrow\infty[/eq], so we set [eq]C=0[/eq]. Thus,
[eq]R_{out}(r)=\frac{De^{-\beta r}}{r}.[/eq] (8)
The continuity of [eq]R[/eq] and [eq]\frac{dR}{dr}[/eq] at the interface region, [eq]r=a[/eq], requires that,
i. [eq]R_{in}(r) = R_{out}(r)[/eq] and
ii. [eq]\frac{dR_{in}(r)}{dr}[/eq] = [eq]\frac{dR_{out}(r)}{dr}[/eq]
From condition (i):
[eq]\frac{A\sin(\epsilon a)}{a}=\frac{De^{-\beta a}}{a}[/eq]
[eq]A\sin(\epsilon a) = {De^{-\beta a}}.[/eq] (9)
and from condition (ii):
[eq]\frac{dR_{in}(r)}{dr} = \frac{Aa\epsilon \cos(\epsilon a) - A \sin (\epsilon a)}{a^2}[/eq]
[eq]\frac{dR_{out}(r)}{dr} = \frac{-Da\beta e^{-\beta a} - De^{-\beta a}}{a^2}.[/eq]
Equating [eq]\frac{dR_{in}(r)}{dr}[/eq] and [eq]\frac{dR_{out}(r)}{dr}[/eq],
[eq]Aa\epsilon \cos(\epsilon a) - A \sin (\epsilon a)=-Da\beta e^{-\beta a} - De^{-\beta a}[/eq]
[eq]A(a\epsilon \cos(\epsilon a) - \sin (\epsilon a))=-De^{-\beta a}(a\beta +1).[/eq] (10)
We divide Eqn. (10) by (9),
[eq]\frac{a\epsilon \cos(\epsilon a) - \sin (\epsilon a)}{\sin\epsilon a} = -(a\beta+1)[/eq]
[eq]a\epsilon\frac{\cos{\epsilon a}}{\sin{\epsilon a}} - 1= -a\beta - 1[/eq]
[eq]a\epsilon \cot(\epsilon a) = -a\beta[/eq]
we let [eq]k_1 = \epsilon a[/eq] and [eq]k_2 = \beta a[/eq] so,
[eq]k_1 \cot(k_1) = -k_2.[/eq] (11)
From our representation that [eq]\epsilon = \frac{k_1}{a}[/eq] and [eq]\beta=\frac{k_2}{a}[/eq], it tells us that the value of [eq]k_1[/eq] and [eq]k_2[/eq] should be positive.
[eq]k_1^2 + k_2^2 =\epsilon^2 a^2 + \beta^2 a^2 = a^2 \left[\frac{2m}{\hbar^2} (E+V_o) + (\frac{-2mE}{\hbar^2})\right][/eq][eq] = \frac{a^2}{\hbar^2} 2m (E+V_o-E)[/eq]
[eq]R^2 := k_1^2 + k_2^2 = \frac{2ma^2}{\hbar^2} V_o[/eq]
From the results, allowable states only occur when [eq]k_1[/eq] and [eq]k_2[/eq] are both positive, i.e. when the curve of Eqn. (11) meets the circle [eq]k_1^2+k_2^2=R^2[/eq] in the first quadrant. Since Eqn. (11) requires [eq]\cot(k_1)<0[/eq], the smallest solution has [eq]k_1>\frac{\pi}{2}[/eq]; hence no bound state can exist when [eq]R<\frac{\pi}{2}[/eq], that is, when
[eq]\frac{2ma^2}{\hbar^2}V_o\;\;<\;\;\frac{\pi^2}{4}[/eq]
[eq]V_o a^2\;\;<\;\;\frac{\hbar^2}{2m}\frac{\pi^2}{4}[/eq]
[eq]V_o a^2\;\;<\;\;\frac{\pi^2 \hbar^2}{8m}.[/eq]
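The threshold can be checked numerically: writing $R^2 = 2ma^2V_o/\hbar^2$, a bound state corresponds to a solution of $k_1\cot(k_1) = -k_2$ on the circle $k_1^2+k_2^2=R^2$, and none exists for $R < \pi/2$. A small bisection sketch in Python (names are mine; only the ground-state root is sought):

```python
import math

def bound_state_k1(R):
    """Solve k1*cot(k1) = -sqrt(R^2 - k1^2) for the lowest bound state
    of the finite spherical well, where R^2 = 2*m*a^2*V_o/hbar^2.
    Returns None when no bound state exists (R < pi/2)."""
    if R <= math.pi / 2:                 # cot(k1) > 0 on (0, pi/2): no root
        return None
    f = lambda k1: k1 / math.tan(k1) + math.sqrt(R * R - k1 * k1)
    lo, hi = math.pi / 2 + 1e-9, min(R, math.pi) - 1e-9
    # f(lo) > 0 and f(hi) < 0, so bisect the sign change
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(bound_state_k1(1.0))   # None: well too shallow, V_o a^2 < pi^2 hbar^2 / 8m
print(bound_state_k1(2.0))   # ground-state k1, between pi/2 and R
```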
|
Definition: Jump Discontinuity

Definition
Let $X$ be an open subset of $\R$.
Let $f: X \to Y$ be a real function.
Let $f$ be discontinuous at some point $c \in X$.
Then $c$ is called a jump discontinuity of $f$ if and only if: $\displaystyle \lim_{x \mathop \to c^-} \map f x$ and $\displaystyle \lim_{x \mathop \to c^+} \map f x$ exist and are not equal
Let $X$ be an open subset of $\R$.
Let $f: X \to Y$ be a real function.
The jump at $c$ is defined as: $\displaystyle \lim_{x \mathop \to c^+} \map f x - \lim_{x \mathop \to c^-} \map f x$

Also known as
Some authors take discontinuities of the first kind and jump discontinuities to be synonymous. The difference is that some authors allow removable discontinuities to be a subset of jump discontinuities. Other authors choose to distinguish between the two concepts.
Sources 1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics... (previous) ... (next): Entry: jump discontinuity 2008: David Nelson: The Penguin Dictionary of Mathematics(4th ed.) ... (previous) ... (next): Entry: discontinuity 2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics(5th ed.) ... (previous) ... (next): Entry: jump discontinuity
|
ISSN:
1078-0947
eISSN:
1553-5231
Discrete & Continuous Dynamical Systems - A
March 2007 , Volume 19 , Issue 1
Abstract:
We are interested in a remarkable property of certain nonlinear diffusion equations, which we call blow-down or delayed regularization. The following happens: a solution of one of these equations is shown to exist in some generalized sense, and it is also shown to be non-smooth for some time $ 0 < t < t_1$, after which it becomes smooth and still nontrivial. We use the logarithmic diffusion equation to examine an example of occurrence of this phenomenon starting from data that contain Dirac deltas, which persist for a finite time. The interpretation of the results in terms of diffusion is also unusual: if the process starts with one or several point masses surrounded by a continuous distribution, then the masses decay into the medium over a finite period of time. The study of the phenomenon implies consideration of a new concept of measure solution which seems natural for these diffusion processes.

Abstract:
The initial value problem for the $L^{2}$ critical semilinear Schrödinger equation with periodic boundary data is considered. We show that the problem is globally well-posed in $H^{s}( T^{d} )$, for $s>4/9$ and $s>2/3$ in 1D and 2D respectively, confirming in 2D a statement of Bourgain in [4]. We use the "$I$-method''. This method allows one to introduce a modification of the energy functional that is well defined for initial data below the $H^{1}(T^{d} )$ threshold. The main ingredient in the proof is a "refinement" of the Strichartz's estimates that hold true for solutions defined on the rescaled space, $T^{d}_\lambda = R^{d}/{\lambda Z^{d}}$, $d=1,2$.
Abstract:
We provide a dynamical portrait of singular-hyperbolic transitive attractors of a flow on a 3-manifold. Our Main Theorem establishes the existence of unstable manifolds for a subset of the attractor which is visited infinitely many times by a residual subset. As a consequence, we prove that the set of periodic orbits is dense, that it is the closure of a unique homoclinic class of some periodic orbit, and that there is an SRB-measure supported on the attractor.
Abstract:
In this paper, we attempt to clarify an open problem related to a generalization of the snap-back repeller. Constructing a semi-conjugacy from the finite product of a transformation $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ on an invariant set $\Lambda$ to a sub-shift of the finite type on a $w$-symbolic space, we show that the corresponding transformation associated with the generalized snap-back repeller on $\mathbb{R}^{n}$ exhibits chaotic dynamics in the sense of having a positive topological entropy. The argument leading to this conclusion also shows that a certain kind of degenerate transformations, admitting a point in the unstable manifold of a repeller mapping back to the repeller, have positive topological entropies on the orbits of their invariant sets. Furthermore, we present two feasible sufficient conditions for obtaining an unstable manifold. Finally, we provide two illustrative examples to show that chaotic degenerate transformations are omnipresent.
Abstract:
In this paper, the dynamics of transcendental meromorphic functions in the one-parameter family
$\mathcal{M} = \{ f_{\lambda}(z) = \lambda f(z) : f(z) = \tanh(e^{z}) \text{ for } z \in \mathbb{C} \text{ and } \lambda \in \mathbb{R} \setminus \{ 0 \} \}$
is studied. We prove that there exists a parameter value $\lambda^* \approx -3.2946$ such that the Fatou set of $f_{\lambda}(z)$ is a basin of attraction of a real fixed point for $\lambda > \lambda^*$ and is a parabolic basin corresponding to a real fixed point for $\lambda = \lambda^*$. It is a basin of attraction or a parabolic basin corresponding to a real periodic point of prime period $2$ for $\lambda < \lambda^*$. If $\lambda >\lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ is connected and infinitely connected. Consequently, the singleton components are dense in the Julia set of $f_{\lambda}$ for $\lambda >\lambda^*$. If $\lambda \leq \lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ contains infinitely many pre-periodic components and each component of the Fatou set of $f_{\lambda}$ is simply connected. Finally, it is proved that the Lebesgue measure of the Julia set of $f_{\lambda}$ for $\lambda \in \mathbb{R} \setminus \{ 0 \}$ is zero.
Abstract:
This note is a shortened version of my dissertation paper, defended at Stony Brook University in December 2004. It illustrates how dynamic complexity of a system evolves under deformations. The objects I considered are quartic polynomial maps of the interval that are compositions of two logistic maps. In the parameter space $P^{Q}$ of such maps, I considered the algebraic curves corresponding to the parameters for which critical orbits are periodic, and I called such curves left and right bones. Using quasiconformal surgery methods and rigidity, I showed that the bones are simple smooth arcs that join two boundary points. I also analyzed in detail, using kneading theory, how the combinatorics of the maps evolve along the bones. The behavior of the topological entropy function of the polynomials in my family is closely related to the structure of the bone-skeleton. The main conclusion of the paper is that the entropy level-sets in the parameter space that was studied are connected.
Abstract:
The upper semi-continuous convergence of approximate attractors for an infinite delay differential equation of logistic type is proved, first for the associated truncated delay equation with finite delay and then for a numerical scheme applied to the truncated equation.
Abstract:
We consider second order periodic systems with a nonsmooth potential and an indefinite linear part. We impose conditions under which the nonsmooth Euler functional is unbounded. Then using a nonsmooth variant of the reduction method and the nonsmooth local linking theorem, we establish the existence of at least two nontrivial solutions.
Abstract:
This paper is concerned with the existence and nodal character of the nontrivial solutions for the following equations involving critical Sobolev and Hardy exponents:
$-\Delta u + u - \mu \frac{u}{|x|^2}=|u|^{2^*-2}u + f(u),$
$u \in H^1_r (\R ^N), \qquad (1)$
where $2^*=\frac{2N}{N-2}$ is the critical Sobolev exponent for the embedding $H^1_r (\R ^N) \rightarrow L^{2^*} (\R ^N)$, $\mu \in [0, \ (\frac {N-2}{2})^2)$ and $f: \R \rightarrow\R $ is a function satisfying some conditions. The main results obtained in this paper are that there exists a nontrivial solution of equation (1) provided $N\ge 4$ and $\mu \in [0, \ (\frac {N-2}{2})^2-1] $ and there exists at least a pair of nontrivial solutions $u^+_k$, $u^-_k$ of problem (1) for each $k \in \mathbb{N} \cup \{0\}$ such that both $u^+_k$ and $u^-_k$ possess exactly $k$ nodes provided $N\ge 6$ and $\mu \in [0, \ (\frac {N-2}{2})^2-4]$.
|
Is there a group $G$ with the property that $G$ is a smooth manifold, the multiplication map of $G$ is smooth, but the inversion map of $G$ is not smooth?
Robert L. Bryant's "An Introduction to Lie Groups and Symplectic Geometry" requires, in the definition of a Lie group, only that the multiplication map be smooth, and then proves that the inversion map must also be smooth. (Proposition 1, page 14.)
Every Čech-complete paratopological group is a topological group. That means that for Čech-complete groups you do not have to require the continuity of the inverse, continuity of multiplication suffices. Every manifold is Čech-complete. Using the affirmative answer to Hilbert’s fifth problem we get that every paratopological group on a manifold is actually a Lie group uniquely determined by the topological group structure.
In the spirit of Martin let me give a correct (I hope I have not forgotten anything) definition which is even wronger than the definition without the inverse:
A topological space $G$ with a function $\cdot\colon G^2\to G$ is called an $n$-dimensional
Lie groupif and only if
$G$ is second-countable
There exists an injective, open continuous map $\iota\colon \mathbb{R}^n\to G$
For every $g\in G$ the map $x\mapsto g\cdot x$ is continuous and surjective
There exists $e\in G$ such that $x\mapsto x\cdot e$ is the identity
For every $g\in G$ the map $x\mapsto x\cdot g$ is continuous
$\cdot$ is associative
|
Surds are less common in MBA entrance tests, including CAT. However, the concept of surds is quite simple and could be applied in other calculations. Note that we may not have direct questions, but these concepts might have to be applied while solving other algebraic questions.
Questions on Indices are quite common in MBA entrance tests. The basics of indices is already covered in the lesson on Number Theory.
1. Surds
A surd is an irrational number that involves the root of an integer. Surds can also be expressed as the sum of a rational number and an irrational number. The following are examples of surds:
$\sqrt{5}, 2 + \sqrt{5}, 5^{\frac{1}{3}} + 6^{\frac{2}{3}}, \sqrt[7]{56} + \sqrt[9]{67}$
Surds where the highest power is $\dfrac{1}{2}$ are called quadratic surds, and where the highest power is $\dfrac{1}{3}$ are called cubic surds.
From the perspective of entrance tests, we will be primarily tested on quadratic surds, and not cubic or other lower power surds.
Where $a, b, c$ and $d$ are integers and $b$ and $d$ are not perfect squares, if $a + \sqrt{b} = c + \sqrt{d}$, then $a = c$ and $b = d$.
For instance, if $x + \sqrt{y} = 4 + 2 \sqrt{5}$
$\implies x + \sqrt{y} = 4 + \sqrt{2^{2} \times 5}$
$\implies x + \sqrt{y} = 4 + \sqrt{20}$
∴ $x = 4, y = 20$
To summarise, if two surds are equal, then their rational parts are equal and their irrational parts are equal.
1.1 Conjugate of surds
Quadratic surds can be eliminated if each of their terms is squared.

As $(a + b)(a - b) = a^{2} - b^{2}$, for the term $\bold{(a + b)}$, the conjugate is $\bold{(a - b)}$ and vice versa.

If a surd in the denominator has to be removed, then we multiply and divide by the conjugate of the denominator.
∴ $\dfrac{4}{\sqrt{5} + \sqrt{3}} = \dfrac{4}{\sqrt{5} + \sqrt{3}} \times \dfrac{\sqrt{5} - \sqrt{3}}{\sqrt{5} - \sqrt{3}} = \dfrac{4 \times (\sqrt{5} - \sqrt{3})}{(\sqrt{5})^{2} - (\sqrt{3})^{2}}$
$= \dfrac{4 \times (\sqrt{5} - \sqrt{3})}{5 - 3} = 2 \times (\sqrt{5} - \sqrt{3})$
Likewise, $\dfrac{2 + \sqrt{3}}{5 - 2 \sqrt{5}} = \dfrac{2 + \sqrt{3}}{5 - 2 \sqrt{5}} \times \dfrac{5 + 2 \sqrt{5}}{5 + 2 \sqrt{5}} = \dfrac{10 + 5 \sqrt{3} + 4 \sqrt{5} + 2 \sqrt{15}}{5}$
Example 1
Where $a$ and $b$ are rational numbers, if $\dfrac{3 + \sqrt{5}}{3 - \sqrt{5}} = a + \sqrt{b}$, then $a + 2b =$
Solution
$a + \sqrt{b} = \dfrac{3 + \sqrt{5}}{3 - \sqrt{5}} \times \dfrac{3 + \sqrt{5}}{3 + \sqrt{5}} = \dfrac{9 + 5 + 6 \sqrt{5}}{9 - 5}$
$\implies a + \sqrt{b} = \dfrac{7 + 3 \sqrt{5}}{2} = \dfrac{7}{2} + \sqrt{\dfrac{3^{2} \times 5}{2^{2}}} = \dfrac{7}{2} + \sqrt{\dfrac{45}{4}}$
∴ $a = \dfrac{7}{2}$ and $b = \dfrac{45}{4}$
$a + 2b = \dfrac{7}{2} + \dfrac{45}{2} = 26$
Answer: 26
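The answer can be sanity-checked numerically with exact rational arithmetic (a quick sketch; variable names are mine):

```python
import math
from fractions import Fraction

# Example 1: (3 + sqrt(5)) / (3 - sqrt(5)) should equal a + sqrt(b)
# with a = 7/2 and b = 45/4.
lhs = (3 + math.sqrt(5)) / (3 - math.sqrt(5))
a, b = Fraction(7, 2), Fraction(45, 4)
rhs = float(a) + math.sqrt(float(b))
print(abs(lhs - rhs) < 1e-12)   # True: the rationalised form agrees
print(a + 2 * b)                # 26
```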
|
I have a Hamiltonian and I want to know the corresponding density matrix. The matrix I'm interested in is the one in this question.
There are many different density matrices that can correspond to a given Hamiltonian.
For the 8x8 matrix in your question, there are 8 different "eigenstate" density matrices that can be obtained, one for each of the 8 eigenvectors. The density matrices are constructed by doing the outer product of the eigenvectors. For the $i^{\rm{th}}$ eigenstate of the Hamiltonian, the density matrix $\rho_i$ is:
$ \rho_i = |\psi_i\rangle \langle \psi_i| $.
A system can also be in a "pure" superposition of eigenstates, for example:
$|\psi \rangle = \frac{1}{\sqrt{2}}|\psi_1\rangle + \frac{1}{\sqrt{2}}|\psi_2\rangle $.
Then the density matrix is once again made by doing the outer product of the pure wave function $|\psi\rangle$ with itself.
A system can also be in a "mixed" state, which means its density matrix is a statistical (convex) mixture of "pure" states.
In this case you would construct the density matrix like this (for example):
$\rho = 0.5 \rho_1 + 0.5\rho_2$,
which describes a state which is a 50% mixture of $\rho_1$ and a 50% mixture of $\rho_2$.
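As an illustration, here is a small numpy sketch with a made-up 2x2 Hamiltonian (the construction is identical for an 8x8 one):

```python
import numpy as np

# Hypothetical 2x2 Hermitian Hamiltonian, for illustration only
H = np.array([[1.0, 0.5],
              [0.5, -1.0]])

# Eigenstate density matrices: rho_i = |psi_i><psi_i|
evals, evecs = np.linalg.eigh(H)
rhos = [np.outer(evecs[:, i], evecs[:, i].conj()) for i in range(len(evals))]

# A "pure" equal superposition of the two eigenstates ...
psi = (evecs[:, 0] + evecs[:, 1]) / np.sqrt(2)
rho_pure = np.outer(psi, psi.conj())

# ... versus a 50/50 "mixed" state of the same two eigenstates
rho_mixed = 0.5 * rhos[0] + 0.5 * rhos[1]

print(np.trace(rho_pure).real, np.trace(rho_mixed).real)  # both are 1 (unit trace)
```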
Your question remains very unclear as to what it actually is that you want to calculate.
There is no direct correspondence between a system Hamiltonian and the quantum state of the system. No matter what the Hamiltonian, any quantum state is a valid state of the system.
Where a Hamiltonian comes in useful is, if you know the state at some time (say, $t=0$), you can find out what the state is at any later time via the Schroedinger equation $$ i\frac{\partial |\psi\rangle}{\partial t}=H(t)|\psi\rangle. $$ If $H$ does not change in time, you get $$ |\psi(t)\rangle=e^{-iHt}|\psi(0)\rangle $$ or, if your initial state is a mixed state, $$ \rho(t)=e^{-iHt}\rho(0)e^{iHt}. $$
Now, there are two reasonable things that
might be relevant in terms of a state derived from a Hamiltonian - the thermal state and the ground state (which is the thermal state at 0 temperature). At temperature $T$, the thermal state is$$\rho_{\text{thermal}}=\frac{e^{-H/(k_BT)}}{\text{Tr}(e^{-H/(k_BT)})},$$while the ground state is simply the eigenstate of $H$ with the smallest energy. You can (crudely) think of the thermal state as the best guess about what the state would be if you cooled it to a temperature $T$.
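For instance, the thermal state can be computed through the eigendecomposition of $H$; a numpy sketch with a made-up 2x2 Hamiltonian, in units where $k_B = 1$:

```python
import numpy as np

H = np.array([[1.0, 0.2],
              [0.2, -1.0]])   # hypothetical Hamiltonian
T = 0.5                       # temperature, in units with k_B = 1

# exp(-H/T) through the eigendecomposition (H is Hermitian)
evals, evecs = np.linalg.eigh(H)
weights = np.exp(-evals / T)
rho_thermal = (evecs * weights) @ evecs.conj().T
rho_thermal /= np.trace(rho_thermal)

# As T -> 0 this approaches the ground-state projector.
print(np.trace(rho_thermal).real)  # 1.0 (unit trace)
```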
In one of the comments on another answer, you say
I need it to get the purity of my qubit states and the internal energy of the system vs. the magnetisation factor h
Purity has nothing to do with the Hamiltonian. If you know the density matrix $\rho$ of your system, purity is just $\text{Tr}(\rho^2)$. The Hamiltonian will help you with the expected internal energy: $\text{Tr}(\rho H)$ but, again, the state has to be provided from elsewhere, not from the Hamiltonian.
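Concretely, once a density matrix is supplied from elsewhere, both quantities are one-line traces; a toy numpy example (made-up values):

```python
import numpy as np

H = np.array([[1.0, 0.0],
              [0.0, -1.0]])    # hypothetical qubit Hamiltonian
rho = np.array([[0.5, 0.0],
                [0.0, 0.5]])   # maximally mixed qubit state

purity = np.trace(rho @ rho).real   # Tr(rho^2): 1 for pure states, 1/2 here
energy = np.trace(rho @ H).real     # expected internal energy Tr(rho H)
print(purity, energy)  # 0.5 0.0
```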
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
It's misleading. It's unnecessary.
Let's start with the "P" (Parentheses). People claim that PEMDAS
is "the order of operations." This is already problematic because parentheses aren't really a mathematical operation.** Operations do things. Parentheses don't actually do anything - they just group things together. This distinction may seem like more of a technicality, but it actually brings to light the main issue: parentheses aren't the important thing, but the idea of grouping in general. There are lots of ways to group expressions. You can group expressions using a fraction bar. \[\frac{1+2}{3+4}\] You can group expressions under a radical. \[\sqrt{3^2+4^2}\] You can even group expressions inside an exponent! \[2^{4+1}\] (How are you supposed to "do" exponents before addition if there's addition in the exponent and you don't have parentheses to tell you what to do?)
So in terms of grouping, PEMDAS is at best incomplete.
Though the "E" (Exponents) is pretty much unambiguous, the entire rest of the mnemonic causes problems. By putting the "MDAS" in linear order, a number of students get the idea that all Multiplication should be done before any Division, and that all Addition should be done before any Subtraction. Thus you get students who will make the following mistakes:\[4-1+2\\=4-3\\=1(?!)\]\[6\div2\times3\\=6\div6\\=1(?!)\]
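Python's arithmetic applies equal-precedence operators left to right, so the correct evaluations are easy to check:

```python
# Same-precedence operators evaluate left to right:
print(4 - 1 + 2)   # 5, i.e. (4 - 1) + 2, not 4 - (1 + 2) = 1
print(6 / 2 * 3)   # 9.0, i.e. (6 / 2) * 3, not 6 / (2 * 3) = 1
```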
What's even scarier is that PEMDAS has become so ingrained in our math education culture that
some teachers actually teach it this blatantly incorrect way. Don't believe me? Take a look at this video from TED-Ed:
Try to tell me that doesn't lead you to believe that MDAS is done in linear order. And we wonder why kids have trouble.
Of course, many teachers are careful to explain how the order of operations is supposed to work - that multiplication and division (which is just multiplication by the multiplicative inverse) are done in order from left to right as they appear. Likewise, subtraction is just addition by the additive inverse, so addition and subtraction are done left to right as well. Some teachers write the mnemonic as PE(MD)(AS) or in some other sort of arrangement to emphasize this fact. Others further extend the ridiculous mnemonic-for-a-mnemonic to say "Please Excuse My Dear Aunt Sally ... and Let her Rest".
But now why is PEMDAS unnecessary?
There is a way to bypass all of this mnemonic madness and teach the order of operations in a way that actually makes sense. How?
By teaching why it works that way.
When I've asked fellow teachers why the order of operations is the way it is, those who have been able to answer often gave something to the effect of "well, we needed to decide on some kind of convention to deal with possible ambiguity, so we decided on what we have today." This is half correct - getting rid of ambiguity is very much important. But it wasn't an arbitrary decision. It's not like we could have just as equally decided that addition and subtraction come first, then multiplication and division, and then exponents. There's a very good reason that the operations fall naturally in the order that they do.
Think back to elementary school when all you knew about was addition and subtraction. Eventually you ran into expressions that looked like this:\[3+3+3+3+3+3+3\]You didn't want to write so many 3's, so you were introduced to a shorthand to write this expression. Since there were seven 3's being added together, you learned you could instead write:\[3\times7\]Thus you learned that
multiplication is repeated addition.
Fast forward a few years, when you had multiplication and division under your belt. Now you saw expressions like this instead:\[4\times4\times4\times4\times4\]Again, you were introduced to a shorthand to keep from having to write all those 4's. Since there were five 4's being multiplied together, you wrote:\[4^5\]Thus you learned that
exponentiation is repeated multiplication.
Now we come to an expression like this.\[5^2+4\times3\]What do we do first? Well, remembering that exponentiation is repeated multiplication, we rewrite our exponent to say what it really means.\[5\times5+4\times3\]Next, remembering that multiplication is repeated addition, we rewrite our multiplication in even more basic terms.\[5+5+5+5+5+4+4+4\]Now the expression is a cinch to evaluate - anyone can add! The value just comes out to 37. But, more remarkably, what we've just done is uncovered the reason why the order of operations is as it is: The most compact shorthand is evaluated first.
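The expansion above can be checked mechanically; a small Python sketch:

```python
# "Shorthand comes first": expanding the shorthand leaves the value unchanged.
compact = 5**2 + 4*3                  # exponent and product notation
expanded = sum([5]*5) + sum([4]*3)    # 5+5+5+5+5 + 4+4+4
print(compact, expanded)  # 37 37
```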
Once students understand this, they won't need to actually write out the additions explicitly - they'll just evaluate things in the order they should be handled. But they'll know why to do it. With this in mind, I propose a better way to teach order of operations. Students need only remember two things.
Pay attention to grouping.
Shorthand comes first.
If we must use an acronym, instead of PEMDAS, let's use something like ... say ... GEMA. (Grouping, Exponentiation, Multiplication, Addition.) But if we do use GEMA or something similar, we shouldn't deprive students of the understanding that comes from knowing why the order of operations works.

* In other countries, variations on PEMDAS are used, such as BODMAS or BIDMAS. The "B" stands for Brackets, another word for parentheses, and "O" and "I" stand for Orders and Indices, respectively, which are both alternate words for exponents. Note that in BODMAS and BIDMAS, the "D" and "M" are interchanged - think about how much confusion that could cause for students!

** Thanks to Quintopia for pointing this out: Parentheses in the context of computer science CAN in fact be thought of as operators which let the computer know to make a call to a subroutine. It would be even harder to make this into an acronym ... unless it's a recursive acronym in which the G stands for GEMA, similar to how WINE stands for WINE Is Not an Emulator!
|
OpenCV 4.1.0
Open Source Computer Vision
void cv::accumulate(InputArray src, InputOutputArray dst, InputArray mask=noArray())
    Adds an image to the accumulator image.
void cv::accumulateProduct(InputArray src1, InputArray src2, InputOutputArray dst, InputArray mask=noArray())
    Adds the per-element product of two input images to the accumulator image.
void cv::accumulateSquare(InputArray src, InputOutputArray dst, InputArray mask=noArray())
    Adds the square of a source image to the accumulator image.
void cv::accumulateWeighted(InputArray src, InputOutputArray dst, double alpha, InputArray mask=noArray())
    Updates a running average.
void cv::createHanningWindow(OutputArray dst, Size winSize, int type)
    Computes Hanning window coefficients in two dimensions.
Point2d cv::phaseCorrelate(InputArray src1, InputArray src2, InputArray window=noArray(), double *response=0)
    Detects translational shifts that occur between two images.
void cv::accumulate(InputArray src, InputOutputArray dst, InputArray mask = noArray())
Python: dst = cv.accumulate(src, dst[, mask])
#include <opencv2/imgproc.hpp>
Adds an image to the accumulator image.
The function adds src or some of its elements to dst :
\[\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\]
The function supports multi-channel images. Each channel is processed independently.
The function cv::accumulate can be used, for example, to collect statistics of a scene background viewed by a still camera and for the further foreground-background segmentation.
src: Input image of type CV_8UC(n), CV_16UC(n), CV_32FC(n) or CV_64FC(n), where n is a positive integer.
dst: Accumulator image with the same number of channels as input image, and a depth of CV_32F or CV_64F.
mask: Optional operation mask.
void cv::accumulateProduct(InputArray src1, InputArray src2, InputOutputArray dst, InputArray mask = noArray())
Python: dst = cv.accumulateProduct(src1, src2, dst[, mask])
#include <opencv2/imgproc.hpp>
Adds the per-element product of two input images to the accumulator image.
The function adds the product of two images or their selected regions to the accumulator dst :
\[\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src1} (x,y) \cdot \texttt{src2} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\]
The function supports multi-channel images. Each channel is processed independently.
src1: First input image, 1- or 3-channel, 8-bit or 32-bit floating point.
src2: Second input image of the same type and the same size as src1.
dst: Accumulator image with the same number of channels as input images, 32-bit or 64-bit floating-point.
mask: Optional operation mask.
void cv::accumulateSquare(InputArray src, InputOutputArray dst, InputArray mask = noArray())
Python: dst = cv.accumulateSquare(src, dst[, mask])
#include <opencv2/imgproc.hpp>
Adds the square of a source image to the accumulator image.
The function adds the input image src or its selected region, raised to a power of 2, to the accumulator dst :
\[\texttt{dst} (x,y) \leftarrow \texttt{dst} (x,y) + \texttt{src} (x,y)^2 \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\]
The function supports multi-channel images. Each channel is processed independently.
src: Input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst: Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.
mask: Optional operation mask.
void cv::accumulateWeighted(InputArray src, InputOutputArray dst, double alpha, InputArray mask = noArray())
Python: dst = cv.accumulateWeighted(src, dst, alpha[, mask])
#include <opencv2/imgproc.hpp>
Updates a running average.
The function calculates the weighted sum of the input image src and the accumulator dst so that dst becomes a running average of a frame sequence:
\[\texttt{dst} (x,y) \leftarrow (1- \texttt{alpha} ) \cdot \texttt{dst} (x,y) + \texttt{alpha} \cdot \texttt{src} (x,y) \quad \text{if} \quad \texttt{mask} (x,y) \ne 0\]
That is, alpha regulates the update speed (how fast the accumulator "forgets" about earlier images). The function supports multi-channel images. Each channel is processed independently.
src: Input image as 1- or 3-channel, 8-bit or 32-bit floating point.
dst: Accumulator image with the same number of channels as input image, 32-bit or 64-bit floating-point.
alpha: Weight of the input image.
mask: Optional operation mask.
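As an illustration of the update rule, a numpy sketch of the formula (not OpenCV itself):

```python
import numpy as np

def accumulate_weighted(src, dst, alpha, mask=None):
    # dst <- (1 - alpha)*dst + alpha*src wherever mask is nonzero
    upd = (1.0 - alpha) * dst + alpha * src
    if mask is None:
        dst[...] = upd
    else:
        dst[mask != 0] = upd[mask != 0]

acc = np.zeros((2, 2))
frame = np.full((2, 2), 100.0)
accumulate_weighted(frame, acc, 0.5)
print(acc[0, 0])  # 50.0: the accumulator moves halfway toward the new frame
```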
void cv::createHanningWindow ( OutputArray dst, Size winSize, int type )
Python: dst = cv.createHanningWindow( winSize, type[, dst] )
#include <opencv2/imgproc.hpp>
This function computes Hanning window coefficients in two dimensions.
dst: Destination array to place Hann coefficients in.
winSize: The window size specifications (both width and height must be > 1).
type: Created array type.
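For intuition, a 2-D Hann window can be built as the outer product of two 1-D Hann windows (a numpy sketch; OpenCV's exact sampling convention may differ):

```python
import numpy as np

def hanning2d(height, width):
    # outer product of two 1-D Hann windows
    return np.outer(np.hanning(height), np.hanning(width))

win = hanning2d(5, 5)
print(win[2, 2], win[0, 0])  # 1.0 at the centre, 0.0 on the border
```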
Point2d cv::phaseCorrelate(InputArray src1, InputArray src2, InputArray window = noArray(), double *response = 0)
Python: retval, response = cv.phaseCorrelate(src1, src2[, window])
#include <opencv2/imgproc.hpp>
The function is used to detect translational shifts that occur between two images.
The operation takes advantage of the Fourier shift theorem for detecting the translational shift in the frequency domain. It can be used for fast image registration as well as motion estimation. For more information please see http://en.wikipedia.org/wiki/Phase_correlation
Calculates the cross-power spectrum of two supplied source arrays. The arrays are padded if needed with getOptimalDFTSize.
The function performs the following equations:
\[\mathbf{G}_a = \mathcal{F}\{src_1\}, \; \mathbf{G}_b = \mathcal{F}\{src_2\}\]where \(\mathcal{F}\) is the forward DFT.
\[R = \frac{ \mathbf{G}_a \mathbf{G}_b^*}{|\mathbf{G}_a \mathbf{G}_b^*|}\]
\[r = \mathcal{F}^{-1}\{R\}\]
\[(\Delta x, \Delta y) = \texttt{weightedCentroid} \{\arg \max_{(x, y)}\{r\}\}\]
src1: Source floating point array (CV_32FC1 or CV_64FC1).
src2: Source floating point array (CV_32FC1 or CV_64FC1).
window: Floating point array with windowing coefficients to reduce edge effects (optional).
response: Signal power within the 5x5 centroid around the peak, between 0 and 1 (optional).
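A pure-numpy sketch of these equations (no windowing, no sub-pixel weighted centroid; the sign convention here may differ from OpenCV's):

```python
import numpy as np

def phase_correlate(a, b):
    # cross-power spectrum, normalised to unit magnitude
    Ga, Gb = np.fft.fft2(a), np.fft.fft2(b)
    cross = Ga * np.conj(Gb)
    R = cross / np.maximum(np.abs(cross), 1e-12)
    r = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(r), r.shape)
    # fold peaks past the midpoint back to negative shifts
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return int(dx), int(dy)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, (3, 5), axis=(0, 1))   # circular shift: 3 down, 5 right
print(phase_correlate(shifted, img))  # (5, 3)
```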
|
I have a sequence of positive terms $(a_n)$, for which $\sum_{n=1}^\infty a_n = A <\infty$, and wish to take a psuedo-random sample from the discrete probability distribution
\begin{equation} \mathbf{P}[ X = n] = \frac{a_n}{A}. \end{equation}
The standard approach to sampling $X$ is to first sample a (continuous) uniform variable $U \sim \text{Unif}[0,1]$, and then set $X = x_U$, with \begin{equation} x_U = \min \Bigg\{n \, \colon \, \frac{1}{A}\sum_{k=1}^n a_k > U \Bigg\}. \end{equation}
In general if the $a_n$ are arranged in decreasing order of magnitude, then for typical samples of $U$, $x_U$ can be calculated rather quickly. However, given that I want to sample the distribution many times, inevitably I will encounter cases where computing $x_U$ is very time consuming.
I am looking for an efficient implementation of this algorithm in Mathematica. My guess is that there is perhaps a way to do this with recursive functions, saving the totals of the 'large sums' (i.e. those over many $k$) to be used for future calculations.
Does anybody have any suggestions? Thanks!
EDIT: To give an example of a distribution, consider $a_n = \log(n)/n^{3/2}$. This sequence is (eventually) decreasing, and the associated series converges to
-Zeta'[3/2], i.e. $A \approx 3.93224$.
On my (admittedly not particularly swift) computer, if I sample $U =0.9$, it takes approximately 1.25 seconds to return $X =2496$. My simple (naive) implementation is as follows:
X[u_] := Module[{A = N[-Zeta'[3/2]], k = 0, sumk = 0.},
  While[sumk/A < u,
    k++;
    sumk += Log[k]/k^(3/2)];
  k]
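One way to realise the caching idea (sketched in Python for clarity; the names are mine, and the same logic ports to Mathematica): extend a cached list of partial sums lazily, then binary-search it, so repeated samples never recompute earlier terms.

```python
import bisect

class DiscreteSampler:
    """Sample P[X = n] = a(n)/A by inverting cached partial sums."""

    def __init__(self, a, total):
        self.a = a            # term function a(n), n >= 1
        self.total = total    # A, the sum of the full series
        self.cum = [0.0]      # cum[k] = a(1) + ... + a(k)

    def _extend_to(self, target):
        # lazily extend the cached partial sums past `target`
        while self.cum[-1] <= target:
            k = len(self.cum)
            self.cum.append(self.cum[-1] + self.a(k))

    def sample(self, u):
        # X = min{ n : (a(1)+...+a(n))/A > u }, for u in [0, 1)
        target = u * self.total
        self._extend_to(target)
        return bisect.bisect_right(self.cum, target)

# example with a(n) = 2^-n, where A = 1 and the answers are easy to verify
s = DiscreteSampler(lambda n: 2.0**-n, 1.0)
print(s.sample(0.6))  # 2, since 1/2 < 0.6 <= 1/2 + 1/4
```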
|
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style that the prof. (who creates the exam) creates questions.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
It is asking how many terms of the Euler Maclaurin formula do we need in order to compute the Riemann zeta function in the complex plane?
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
|
I never learned stochastic differential equations, and so am trying to do some self-study. I've arrived at this question: is $tB_t\sim N(0,t^3)$? Here $B_t$ is standard Brownian motion. Since $B_t\sim N(0,t)$, just applying standard probability, we get the $t^3$. Is that ok?
But I have derived the formula: $tB_t=\int_0^t sdB_s+\int_0^t B_sds$. I think that both integrals on the right are $N(0,\frac{1}{3}t^3)$ random variables. So I can't just add the variances on the left to get $\frac{4}{3}t^3$.
It seems that the integrals have a non-zero covariance, so this doesn't surprise me, but I can't seem to figure out what I am misunderstanding.
So my question(s): Is $$tB_t=\int_0^t sdB_s+\int_0^t B_sds$$ correct? If so, how am I to see that the left side as a $N(0,t^3)$? Since these are stochastic integrals, am I to think of the $B_s$ in each integral as a distinct random process?
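For what it's worth, the identity is consistent: each integral is indeed $N(0,t^3/3)$, but they are positively correlated. A sketch of the covariance (writing $B_u=\int_0^t \mathbf{1}_{\{s\le u\}}\,dB_s$ and using the Itô isometry for the middle step):

```latex
\operatorname{Cov}\!\left(\int_0^t s\,dB_s,\ \int_0^t B_u\,du\right)
  = \int_0^t \mathbb{E}\!\left[B_u \int_0^t s\,dB_s\right] du
  = \int_0^t \left(\int_0^u s\,ds\right) du
  = \int_0^t \frac{u^2}{2}\,du
  = \frac{t^3}{6},
\qquad\text{so}\qquad
\operatorname{Var}(tB_t) = \frac{t^3}{3} + \frac{t^3}{3} + 2\cdot\frac{t^3}{6} = t^3 .
```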
|
Smoothing Estimate of Intensity as Function of a Covariate
Computes a smoothing estimate of the intensity of a point process, as a function of a (continuous) spatial covariate.
Usage
rhohat(object, covariate, ...)

# S3 method for ppp
rhohat(object, covariate, ..., baseline=NULL, weights=NULL,
  method=c("ratio", "reweight", "transform"), horvitz=FALSE,
  smoother=c("kernel", "local"), subset=NULL,
  dimyx=NULL, eps=NULL, n=512, bw="nrd0", adjust=1,
  from=NULL, to=NULL, bwref=bw, covname, confidence=0.95)

# S3 method for quad
rhohat(object, covariate, ..., baseline=NULL, weights=NULL,
  method=c("ratio", "reweight", "transform"), horvitz=FALSE,
  smoother=c("kernel", "local"), subset=NULL,
  dimyx=NULL, eps=NULL, n=512, bw="nrd0", adjust=1,
  from=NULL, to=NULL, bwref=bw, covname, confidence=0.95)

# S3 method for ppm
rhohat(object, covariate, ..., weights=NULL,
  method=c("ratio", "reweight", "transform"), horvitz=FALSE,
  smoother=c("kernel", "local"), subset=NULL,
  dimyx=NULL, eps=NULL, n=512, bw="nrd0", adjust=1,
  from=NULL, to=NULL, bwref=bw, covname, confidence=0.95)

# S3 method for lpp
rhohat(object, covariate, ..., weights=NULL,
  method=c("ratio", "reweight", "transform"), horvitz=FALSE,
  smoother=c("kernel", "local"), subset=NULL,
  nd=1000, eps=NULL, random=TRUE, n=512, bw="nrd0", adjust=1,
  from=NULL, to=NULL, bwref=bw, covname, confidence=0.95)

# S3 method for lppm
rhohat(object, covariate, ..., weights=NULL,
  method=c("ratio", "reweight", "transform"), horvitz=FALSE,
  smoother=c("kernel", "local"), subset=NULL,
  nd=1000, eps=NULL, random=TRUE, n=512, bw="nrd0", adjust=1,
  from=NULL, to=NULL, bwref=bw, covname, confidence=0.95)
Arguments

object: A point pattern (object of class "ppp" or "lpp"), a quadrature scheme (object of class "quad") or a fitted point process model (object of class "ppm" or "lppm").

covariate: Either a function(x,y) or a pixel image (object of class "im") providing the values of the covariate at any location. Alternatively one of the strings "x" or "y" signifying the Cartesian coordinates.

weights: Optional weights attached to the data points. Either a numeric vector of weights for each data point, or a pixel image (object of class "im") or a function(x,y) providing the weights.

baseline: Optional baseline for the intensity function. A function(x,y) or a pixel image (object of class "im") providing the values of the baseline at any location.

method: Character string determining the smoothing method. See Details.

horvitz: Logical value indicating whether to use Horvitz-Thompson weights. See Details.

smoother: Character string determining the smoothing algorithm. See Details.

subset: Optional. A spatial window (object of class "owin") specifying a subset of the data, from which the estimate should be calculated.

dimyx, eps, nd, random: Arguments controlling the pixel resolution at which the covariate will be evaluated. See Details.

bw: Smoothing bandwidth or bandwidth rule (passed to density.default).

adjust: Smoothing bandwidth adjustment factor (passed to density.default).

n, from, to: Arguments passed to density.default to control the number and range of values at which the function will be estimated.

bwref: Optional. An alternative value of bw to use when smoothing the reference density (the density of the covariate values observed at all locations in the window).

…

covname: Optional. Character string to use as the name of the covariate.

confidence: Confidence level for confidence intervals. A number between 0 and 1.
Details
This command estimates the relationship between point process intensity and a given spatial covariate. Such a relationship is sometimes called a resource selection function (if the points are organisms and the covariate is a descriptor of habitat) or a prospectivity index (if the points are mineral deposits and the covariate is a geological variable). This command uses a nonparametric smoothing method which does not assume a particular form for the relationship.

If object is a point pattern, and baseline is missing or null, this command assumes that object is a realisation of a Poisson point process with intensity function \(\lambda(u)\) of the form $$\lambda(u) = \rho(Z(u))$$ where \(Z\) is the spatial covariate function given by covariate, and \(\rho(z)\) is a function to be estimated. This command computes estimators of \(\rho(z)\) proposed by Baddeley and Turner (2005) and Baddeley et al (2012).

The covariate \(Z\) must have continuous values.

If object is a point pattern, and baseline is given, then the intensity function is assumed to be $$\lambda(u) = \rho(Z(u)) B(u)$$ where \(B(u)\) is the baseline intensity at location \(u\). A smoothing estimator of the relative intensity \(\rho(z)\) is computed.

If object is a fitted point process model, suppose X is the original data point pattern to which the model was fitted. Then this command assumes X is a realisation of a Poisson point process with intensity function of the form $$ \lambda(u) = \rho(Z(u)) \kappa(u) $$ where \(\kappa(u)\) is the intensity of the fitted model object. A smoothing estimator of \(\rho(z)\) is computed.
The estimation procedure is determined by the character strings method and smoother and the argument horvitz. The estimation procedure involves computing several density estimates and combining them. The algorithm used to compute density estimates is determined by smoother:

If smoother="kernel", the smoothing procedure is based on fixed-bandwidth kernel density estimation, performed by density.default.

If smoother="local", the smoothing procedure is based on local likelihood density estimation, performed by locfit.

The method determines how the density estimates will be combined to obtain an estimate of \(\rho(z)\):

If method="ratio", then \(\rho(z)\) is estimated by the ratio of two density estimates. The numerator is a (rescaled) density estimate obtained by smoothing the values \(Z(y_i)\) of the covariate \(Z\) observed at the data points \(y_i\). The denominator is a density estimate of the reference distribution of \(Z\).

If method="reweight", then \(\rho(z)\) is estimated by applying density estimation to the values \(Z(y_i)\) of the covariate \(Z\) observed at the data points \(y_i\), with weights inversely proportional to the reference density of \(Z\).

If method="transform", the smoothing method is variable-bandwidth kernel smoothing, implemented by applying the Probability Integral Transform to the covariate values, yielding values in the range 0 to 1, then applying edge-corrected density estimation on the interval \([0,1]\), and back-transforming.

If horvitz=TRUE, then the calculations described above are modified by using Horvitz-Thompson weighting. The contribution to the numerator from each data point is weighted by the reciprocal of the baseline value or fitted intensity value at that data point; and a corresponding adjustment is made to the denominator.
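To make the "ratio" estimator concrete, here is a hypothetical one-dimensional numpy sketch (my own simplification, not the spatstat implementation): \(\hat\rho(z) = n \hat f_{\mathrm{data}}(z) / (|W| \hat f_{\mathrm{ref}}(z))\), where \(\hat f_{\mathrm{data}}\) smooths the covariate values at the data points and \(\hat f_{\mathrm{ref}}\) is the reference density of the covariate.

```python
import numpy as np

def kde1d(samples, grid, bw):
    # fixed-bandwidth Gaussian kernel density estimate on a 1-D grid
    d = (grid[:, None] - samples[None, :]) / bw
    return np.exp(-0.5 * d**2).sum(axis=1) / (len(samples) * bw * np.sqrt(2 * np.pi))

def rhohat_ratio(z_data, z_ref, grid, bw, area):
    # sketch of method="ratio": numerator smooths covariate values at the
    # data points; denominator is the reference density of the covariate
    f_data = kde1d(z_data, grid, bw)
    f_ref = kde1d(z_ref, grid, bw)
    return len(z_data) * f_data / (area * np.maximum(f_ref, 1e-12))

# sanity check: for a homogeneous pattern, rho(z) is roughly the constant intensity
rng = np.random.default_rng(1)
z_data = rng.random(2000)               # covariate values at 2000 data points
z_ref = np.linspace(0.0, 1.0, 5001)     # covariate values across the whole window
grid = np.linspace(0.05, 0.95, 19)
rho = rhohat_ratio(z_data, z_ref, grid, bw=0.05, area=1.0)
```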
The covariate will be evaluated on a fine grid of locations, with spatial resolution controlled by the arguments
dimyx,eps,nd,random. In two dimensions (i.e. if
object is of class
"ppp",
"ppm" or
"quad") the arguments
dimyx, eps are passed to
as.mask to control the pixel resolution. On a linear network (i.e. if
object is of class
"lpp") the argument
nd specifies the total number of test locations on the linear network,
eps specifies the linear separation between test locations, and
random specifies whether the test locations have a randomised starting position.
If the argument
weights is present, then the contribution from each data point
X[i] to the estimate of \(\rho\) is multiplied by
weights[i].
If the argument
subset is present, then the calculations are performed using only the data inside this spatial region.
Value
A function value table (object of class "fv") containing the estimated values of \(\rho\) for a sequence of values of \(Z\). Also belongs to the class "rhohat" which has special methods for plot and predict.
Categorical and discrete covariates

This technique assumes that the covariate has continuous values. It is not applicable to covariates with categorical (factor) values or discrete values such as small integers. For a categorical covariate, use intensity.quadratcount applied to the result of quadratcount(X, tess=covariate).
References
Baddeley, A., Chang, Y.-M., Song, Y. and Turner, R. (2012) Nonparametric estimation of the dependence of a point process on spatial covariates.
Statistics and Its Interface 5 (2), 221--236.
Baddeley, A. and Turner, R. (2005) Modelling spatial point patterns in R. In: A. Baddeley, P. Gregori, J. Mateu, R. Stoica, and D. Stoyan, editors, Case Studies in Spatial Point Pattern Modelling, Lecture Notes in Statistics number 185. Pages 23--74. Springer-Verlag, New York, 2006. ISBN: 0-387-28311-0.

See Also

See ppm for a parametric method for the same problem.
Aliases

rhohat, rhohat.ppp, rhohat.quad, rhohat.ppm, rhohat.lpp, rhohat.lppm

Examples
X <- rpoispp(function(x,y){exp(3+3*x)})
rho <- rhohat(X, "x")
rho <- rhohat(X, function(x,y){x})
plot(rho)
curve(exp(3+3*x), lty=3, col=2, add=TRUE)
rhoB <- rhohat(X, "x", method="reweight")
rhoC <- rhohat(X, "x", method="transform")
fit <- ppm(X, ~x)
rr <- rhohat(fit, "y")
# linear network
Y <- runiflpp(30, simplenet)
rhoY <- rhohat(Y, "y")
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
|
Two common analytical problems are: (1) matrix components that interfere with an analyte's analysis; and (2) an analyte with a concentration that is too small to analyze accurately. We have seen that we can use a separation to solve the first problem. Interestingly, we often can use a separation to solve the second problem as well. For a separation in which we recover the analyte in a new phase, it may be possible to increase the analyte's concentration. This step in an analytical procedure is known as a preconcentration.
An example from the analysis of water samples illustrates how we can simultaneously accomplish a separation and a preconcentration. In the gas chromatographic analysis for organophosphorous pesticides in environmental waters, the analytes in a 1000-mL sample may be separated from their aqueous matrix by a solid-phase extraction using 15 mL of ethyl acetate.21 After the extraction, the analytes in the ethyl acetate have a concentration that is 67 times greater than that in the original sample (assuming the extraction is 100% efficient).
\[\mathrm{\dfrac{1000\;mL}{15\;mL}\approx 67\times}\]
|
January 11 Matt Papanikolas (Brown University) Periods of Drinfeld modules with complex multiplication
We investigate transcendence properties of periods of Drinfeld modules with complex multiplication. In particular we show that if such Drinfeld modules have different CM fields then their fundamental periods are algebraically independent over the algebraic numbers. Joint work with Dale Brownawell.
January 25 Robert Vaughan (PSU) Waring's problem: A Survey
Recent work on G(k) in Waring's problem, jointly with T. D. Wooley, will be described and placed in a historical context. Some speculations will be made about future directions in the Hardy-Littlewood method.

February 1 No seminar: Peter Sarnak (Princeton University and the Institute for Advanced Study) is giving the Russell Marker Lectures in Mathematics: L-Functions, Arithmetic and Semiclassics
Monday, January 29, 8:00 p.m., 112 Osmond Laboratory,
Hilbert's Eleventh Problem
Tuesday, January 30, 4:30 p.m., 112 Osmond Laboratory,
Lp Norms of Eigenfunctions
Wednesday, January 31, 4:30 p.m., 112 Osmond Laboratory,
Quantum Unique Ergodicity
Thursday, February 1, 4:30 p.m., 112 Osmond Laboratory,
Families of L-Functions and Symmetry
February 8 Sanju Velani (Queen Mary Westfield College, London), 103 McAllister On simultaneously badly approximable pairs
For any pair $i,j \geq 0$ with $i+j=1$ let $\Bad(i,j)$ denote the set of pairs $(\a,\b) \in \R^2$ for which $\max \{ ||q\a||^{1/i},\, ||q\b||^{1/j} \} > c/q$ for all $q \in \N$. Here $c = c(\a,\b)$ is a positive constant. If $i=0$ we identify the set $\Bad(0,1)$ with $\R \times \Bad$, where $\Bad$ is the set of badly approximable numbers. That is, $\Bad(0,1)$ consists of pairs $(\a,\b)$ with $\a \in \R$ and $\b \in \Bad$. If $j=0$ the roles of $\a$ and $\b$ are reversed. We prove that the set $\Bad(1,0) \cap \Bad(0,1) \cap \Bad(i,j)$ has Hausdorff dimension 2, i.e. full dimension. The method easily generalizes to give analogous statements in higher dimensions.
February 15 Mark Watkins (PSU), 10:10, 103 McAllister: NOTE earlier time. Special values of L-functions and modular parametrisations of elliptic curves
Formulae which relate L-values to arithmetic information can be viewed in two directions: you can first compute the arithmetic information to determine the size of the L-value, or conversely you can compute the special L-value by a separate method, thus gaining arithmetic information. The generic method for computing special L-values (actually any L-value) goes back to Cohen and Zagier in the 1970s, but only recently has it appeared in full generality in print (appendix of Cohen's latest book). We describe how their method works, and then use it in a specific example, namely the computation of the degree of modular parametrisation of an elliptic curve. In fact, we give data from a large-scale project to compute modular degrees, with over 40000 curves considered.
February 22 Edward Formanek (PSU) A relation between the Bezoutian and the Jacobian
March 1 Trevor Wooley (University of Michigan): Colloquium Slim exceptional sets in Waring's problem
A result of Hua from 1938 shows that the expected asymptotic formula holds in Waring's problem for sums of four squares of primes for almost all integers in the expected residue classes, in the sense that the number of exceptions up to N is O(N(log N)^{-A}). This estimate was recently improved by Liu and Liu to O(N^{13/15+epsilon}). From a naive viewpoint, both conclusions are surprisingly weak, in the sense that a similar conclusion holds already for sums of three squares of primes, and the excess square of a prime brings a negligible improvement in the estimate for the exceptional set. This phenomenon permeates the subject, especially when the variables under consideration are from such exotic sets as the prime numbers or integers possessing only small prime factors. We present a method for better exploiting excessive variables, especially exotic variables, and thereby slim down the available estimates for associated exceptional sets in various problems of Waring type. By way of illustration, we establish that the above exponent 13/15 may now be replaced by 13/30. As with the best miracle diet plans, this slimming process involves almost no effort.
March 15 Tonghai Yang (University of Wisconsin) Taylor expansion of an Eisenstein series
In this talk, we will give an analogue of the well-known Kronecker limit formula for a classical Eisenstein series (with character). In this case, the Eisenstein series is holomorphic at its center and its central value is given by theta functions via the Siegel-Weil formula. We will give an explicit formula for its central derivative. We will also use the formula to compute the central derivative of certain Hecke L-functions, which are related to CM elliptic curves.
March 22 Joel Anderson (PSU)
March 29 Scott Parsell (Texas A&M) Pairs of additive equations and inequalities
We will discuss recent progress on obtaining upper bounds for the number of variables required to ensure that a pair of diagonal forms of differing degree, satisfying appropriate local solubility conditions, has a non-trivial integral zero. The arguments are based on the Hardy-Littlewood method, and in particular on the iterative methods of Vaughan and Wooley for estimating mean values of exponential sums over smooth numbers. Our estimates can also be applied to the corresponding problem for inequalities, in which one tries to show that two forms with real coefficients assume arbitrarily small values simultaneously at integral points.
April 5 David Farmer (Bucknell) Deformation of Maass forms
Phillips and Sarnak conjecture that Maass forms on cofinite subgroups of SL(2,R) are destroyed by almost all deformations of the group. Some calculations will be described which indicate that "almost all" cannot be replaced by "all." Time permitting, the dynamics of the motion of the Maass forms under deformation will also be discussed.
April 12 William Stein (Harvard) Visibility of Mordell-Weil groups
I will introduce the notions of visibility and modularity of Mordell-Weil groups of abelian varieties. My notion of visibility is analogous and dual to Barry Mazur's notion of visibility of Shafarevich-Tate groups. In my talk, I will make conjectures about visibility of Mordell-Weil groups, prove that Mordell-Weil groups of certain elliptic curves are visible in an appropriate restriction of scalars, and give some explicit examples. If time permits, I will discuss connections with the Birch and Swinnerton-Dyer conjecture for elliptic curves of analytic rank greater than one.
April 19 Wenzhi Luo (Ohio State University) Equidistribution of Hecke eigenforms on modular surface
For the holomorphic Hecke eigenforms of weight 2k, one can naturally associate a probability measure $\mu_{k}$ on the modular surface X. We show that
$$\mu_{k}(A) = \mu(A) + O(k^{-1/2})$$
holds uniformly for any set A on X as k tends to infinity, where $\mu$ is the invariant measure associated to the Poincare metric. Moreover, the above decay rate is sharp. This equidistribution property of Hecke eigenforms can be regarded as an analogue of ergodicity of Laplacian eigenfunctions.
April 26 David Boyd (University of British Columbia): Colloquium Mahler's measure and the Bloch group
|
I am working with functions like
f[z_] = Hypergeometric2F1[4, 4, 8, z]
Here is a plot of this function over the interval $z \in [0,1]$:
Plot[f[z], {z, 0, 1}]
As you can see, Mathematica has difficulties evaluating it in the region $z \approx 0$. This is surprising, because the hypergeometric function admits by definition a simple series expansion around $z = 0$, $$ f(z) = \sum_{n = 0}^\infty \frac{7!}{(3!)^2} \frac{(n+1)(n+2)(n+3)}{(n+4)(n+5)(n+6)(n+7)} z^n = 1 + 2 z + \frac{25}{9} z^2 + \ldots $$
The problem is that Mathematica does not use this defining property of the hypergeometric function, but instead it "simplifies" it to
f[z] = (140*(-60*z + 60*z^2 - 11*z^3 - 60*Log[1 - z] + 90*z*Log[1 - z] - 36*z^2*Log[1 - z] + 3*z^3*Log[1 - z]))/(3*z^7)
and it turns out that cancellations between large numbers occur in this expression when $z \approx 0$.
How can I instruct Mathematica not to perform this "simplification" in general? Is there a way I can use the command Hold or something similar?
What I want to do eventually is evaluate numerically some integrals in which $f(z)$ appears in the integrand, so I need a robust way of evaluating the function over the interval $z \in [0,1]$.
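One workaround, illustrated here in Python rather than Mathematica as a sketch (`hyp2f1_series` and `closed_form` are my own helper names), is to sum the defining series directly, which is perfectly stable near $z = 0$, unlike the closed form that suffers catastrophic cancellation:

```python
import math

def hyp2f1_series(a, b, c, z, tol=1e-16):
    """Sum the defining series of 2F1(a,b;c;z) term by term.

    Stable for small |z|; converges for |z| < 1."""
    term, total, n = 1.0, 1.0, 0
    while abs(term) > tol * abs(total):
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
        total += term
        n += 1
    return total

def closed_form(z):
    """Mathematica's 'simplified' expression for 2F1(4,4;8;z);
    it loses essentially all significant digits as z -> 0."""
    L = math.log(1 - z)
    return (140 * (-60*z + 60*z**2 - 11*z**3
                   - 60*L + 90*z*L - 36*z**2*L + 3*z**3*L)) / (3 * z**7)

z = 1e-3
print(hyp2f1_series(4, 4, 8, z))   # ~ 1 + 2z + (25/9) z^2, accurate
print(closed_form(z))              # cancellation garbage in double precision
```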
|
Chaichenko S. O.
Ukr. Mat. Zh. - 2019. - 71, № 4. - pp. 516-542
We compute the exact values of the least upper bounds on the classes of bounded holomorphic and harmonic functions in a unit disk for the remainders in a Voronovskaya-type formula in the case of approximation by Fejér means. We also present some consequences that are of independent interest.
Ukr. Mat. Zh. - 2017. - 69, № 11. - pp. 1577-1584
We solve the problem of best rational approximations of the Bergman kernels on the unit circle of the complex plane in the quadratic and uniform metrics.
Ukr. Mat. Zh. - 2014. - 66, № 11. - pp. 1540–1549
We compute the values of the best approximations for the Cauchy kernel on the real axis $ℝ$ by some subspaces from $L_q (ℝ)$. This result is applied to the evaluation of the sharp upper bounds for pointwise deviations of certain interpolation operators with interpolation nodes in the upper half plane and certain linear means of the Fourier series in the Takenaka–Malmquist system from the functions lying in the unit ball of the Hardy space $H_p,\; 2 ≤ p < ∞$.
Ukr. Mat. Zh. - 2014. - 66, № 6. - pp. 835–846
We study some problems of imbedding of the sets of $ψ$-integrals of the functions $f \in L^{p(∙)}$ and determine the orders of approximations of functions from these sets by Fourier’s sums.
Ukr. Mat. Zh. - 2012. - 64, № 9. - pp. 1249-1265
In generalized Lebesgue spaces with variable exponent, we determine the order of the best approximation on the classes of $(\psi, \beta)$-differentiable $2\pi$-periodic functions. We also obtain an analog of the well-known Bernstein inequality for the $(\psi, \beta)$-derivative, with the help of which the converse theorems of approximation theory are proved on the indicated classes.
Ukr. Mat. Zh. - 2011. - 63, № 1. - pp. 102-109
On classes of convolutions of analytic functions in uniform and integral metrics, we find asymptotic equations for the least upper bounds of deviations of trigonometric polynomials generated by certain linear approximation method of a special form.
Approximation by de la Vallée-Poussin operators on the classes of functions locally summable on the real axis
Ukr. Mat. Zh. - 2010. - 62, № 7. - pp. 968–978
For the least upper bounds of deviations of the de la Vallée-Poussin operators on the classes $\widehat{L}^{\psi}_{\beta}$ of rapidly vanishing functions $ψ$ in the metric of the spaces $\widehat{L}_p,\; 1 ≤ p ≤ ∞$, we establish upper estimates that are exact on some subsets of functions from $\widehat{L}_p$.
Ukr. Mat. Zh. - 2002. - 54, № 12. - pp. 1653-1669
We investigate the approximation properties of the de la Vallée-Poussin sums on the classes \(C_{\beta }^q H_{\omega }\) . We obtain asymptotic equalities that, in certain cases, guarantee the solvability of the Kolmogorov–Nikol'skii problem for the de la Vallée-Poussin sums on the classes \(C_{\beta }^q H_{\omega }\) .
Ukr. Mat. Zh. - 2002. - 54, № 5. - pp. 681-691
We investigate the problem of the approximation of the classes $C^{\bar{\psi}} H_{\omega}$ introduced by Stepanets in 1996 by the de la Vallée-Poussin sums. We obtain asymptotic equalities that give a solution of the Kolmogorov–Nikol'skii problem for the de la Vallée-Poussin sums on the classes $C^{\bar{\psi}} H_{\omega}$ in several important cases.
Approximation of $\overline \psi$-Integrals of Periodic Functions by de la Vallée-Poussin Sums (Low Smoothness)
Ukr. Mat. Zh. - 2001. - 53, № 12. - pp. 1641-1653
We investigate the asymptotic behavior of the upper bounds of deviations of linear means of Fourier series from the classes $C_{\infty} ^{\psi}$. In particular, we obtain asymptotic equalities that give a solution of the Kolmogorov – Nikol'skii problem for the de la Vallée-Poussin sums on the classes $C_{\infty} ^{\psi}$.
|
JET-COOLED DIODE LASER SPECTRUM OF THE $\nu_{3}$ BAND OF $N_{2}O_{3}$ AT $1304 CM^{-1}$ Issue Date: 1993 Publisher: Ohio State University Abstract:
The jet-cooled spectrum of the $\nu_{3}$ (symmetric $NO_{2}$ stretch) band of $N_{2}O_{3}$ ($NO_{2}$-NO mixed dimer) at $1304 cm^{-1}$ has been observed using a tunable infrared diode laser spectrometer. The spectrum was produced by expanding a mixture of 50 torr $NO_{2}$, 150 torr NO and 500 torr Ar through a $10 cm \times 25 \mu m$ continuous slit nozzle. About 65\% of the region between $1299 cm^{-1}$ and $1310 cm^{-1}$ was covered using two diodes and transitions up to $J=20$ and $K=10$ were assigned. Transitions have a full-width at half maximum of 50 MHz and center frequencies are estimated to be accurate to $\pm 0.0002 cm^{-1}$. A total of 432 transitions were fit to three rotational and four distortion constants using the Watson A-reduced Hamiltonian with rms error of $0.0003 cm^{-1}$. A number of small perturbations were observed and the origin of these will be discussed.
Description:
Author Institution: Molecular Physics Division, National Institute of Standards and Technology; Instituto de Estructura de la Materia, CSIC
Type: article Other Identifiers: 1993-TE-5
Items in Knowledge Bank are protected by copyright, with all rights reserved, unless otherwise indicated.
|
Use Case
Mathematica evaluates the partial derivative as:
$$\frac{\partial}{\partial A_{abc}}\sum _{j=1}^J \sum _{k=1}^K \log \left(\sum _{l=1}^L A_{jkl} B_{jkl}\right) = \sum _{j=1}^J \sum _{k=1}^K \frac{\sum _{l=1}^L\delta _{aj} \delta _{bk} \delta _{cl} B_{jkl}}{\sum _{l=1}^L A_{jkl}B_{jkl}}$$
Instead of
$$\frac{B_{abc}}{\sum _{l=1}^L A_{abl} B_{abl}}$$
For my case, each summation is over all possible values of an index.
The following code gives the result above:
expr = Sum[Log[Sum[A[j, k, l]*B[j, k, l], {l, 1, L}]], {j, 1, J}, {k, 1, K}];
expr = Simplify[D[expr, A[a, b, c]]]
Current Solution
In my last question, Chris suggested the following rule to simplify the Kronecker deltas.
expr /. Sum[ y_ KroneckerDelta[s_, r_], {s_, 1, p_}] :> (y /. s -> r) /. Sum[y_ KroneckerDelta[s_, r_] KroneckerDelta[s1_, r1_], {s_, 1, p_}, {s1_, 1, p1_}] :> (y /. s -> r /. s1 -> r1)
Potential Improvement
However, the rule can be simpler if Mathematica can automatically apply the following rule multiple times.
expr = expr /. Sum[y_ KroneckerDelta[r_, s_], {s_, 1, p_}, z__] :> Sum[(y /. s -> r), z] /. Sum[y_ KroneckerDelta[r_, s_], {s_, 1, p_}] :> (y /. s -> r)
Question
How can I make Mathematica apply the same rule (or the same set of rules) for simplification whenever possible?
Can I make a function call SimplifyKroneckerDelta that would apply this rule exhaustively?
Thanks.
Update:
Merely defining the following function leads to an infinite loop.
SimplifyKronecker[expr_] = FixedPoint[expr /. Sum[y_ KroneckerDelta[r_, s_], {s_, 1, p_}, z__] :> Sum[(y /. s -> r), z] /. Sum[y_ KroneckerDelta[r_, s_], {s_, 1, p_}] :> (y /. s -> r), expr];
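In Mathematica itself, exhaustive rule application is what ReplaceRepeated does (`expr //. rules`), and FixedPoint expects a function as its first argument, e.g. `FixedPoint[# /. rules &, expr]`; note also that the definition above uses Set (`=`) where SetDelayed (`:=`) is needed. The underlying fixed-point idea can be sketched language-agnostically; the Python below uses an invented string-rewriting rule purely as a stand-in for the KroneckerDelta rules.

```python
def apply_until_fixed(rewrite, expr, max_steps=1000):
    """Apply `rewrite` repeatedly until the expression stops changing,
    mirroring Mathematica's ReplaceRepeated (//.) / FixedPoint."""
    for _ in range(max_steps):
        new = rewrite(expr)
        if new == expr:          # reached a fixed point
            return expr
        expr = new
    raise RuntimeError("no fixed point reached")

# Stand-in rule: each application strips one "sum_delta " marker,
# the way each delta-sum rule collapses one summation.
rule = lambda e: e.replace("sum_delta ", "", 1)

print(apply_until_fixed(rule, "sum_delta sum_delta f(a,b)"))  # f(a,b)
```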
|
I found that some ideas in quantum theory are similar to Fourier transform theory. For instance, it is said that "light of finite duration cannot have an exactly defined frequency", which is similar to "a finite signal has an infinite frequency spectrum" in Fourier analysis theory. I think that a continuous frequency spectrum cannot be measured accurately, which is similar to the uncertainty principle of Hermann Weyl. What do you think about this?
Yes, there is a very strong interconnection.
A particle in q.m. hasn't got a defined position. Instead, there is a function describing the probability amplitude distribution for the position: the wavefunction $u(x)$. This is always told even in books for the general public. However, the momentum of the particle isn't, in general, well defined either: for it also we have a probability amplitude distribution, let's call it $w(p)$. It happens that $u$ and $w$ are in some sense the Fourier transforms one of the other. The reason is the following. In Dirac's notation,$$u(x) = \langle x|\psi\rangle,\quad w(p)=\langle p|\psi\rangle $$where $|\psi\rangle$ is the state of the particle, and $|x\rangle,|p\rangle$ are respectively the eigenstates of the position and momentum operators.
Suppose we work in the $x$ basis. The $p$ operator is written $-i\hbar\partial/\partial x$. To find the eigenstates of $p$, we can call $\langle x|p\rangle=f_p(x)$:$$ -i\hbar\frac{\partial}{\partial x}f_p(x)=pf_p(x)$$which yields $f_p(x) = e^{ipx/\hbar}$.
Now, to pass from a basis to the other we can write $$\langle p|\psi\rangle= \int \langle p|x\rangle\langle x|\psi\rangle dx$$ or $$ w(p) = \int e^{-ipx/\hbar}u(x)dx$$ which is a Fourier transform! The $\hbar$ factor is to give the correct dimensionality.
Nice, isn't it? As you pointed out, the fact that if $u$ is "spread out" then $w$ is "peaked", and vice versa, is typical of Fourier-transform pairs. So Heisenberg's principle can be thought of as coming from here.
This holds for a lot of conjugate quantum variables.
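This spread/peaked duality can be checked numerically. The sketch below (pure Python, a hand-rolled discrete Fourier integral, $\hbar = 1$, grid sizes chosen ad hoc) computes the widths of a Gaussian $u(x)$ and of its transform $w(k)$ and recovers the minimum-uncertainty product $\Delta x\,\Delta k = 1/2$:

```python
import cmath, math

# Gaussian wavefunction u(x) = exp(-x^2 / (2 s^2)); its Fourier transform
# w(k) is again a Gaussian, and narrowing u broadens w (and vice versa).
s = 1.0
dx = 0.05
xs = [i * dx for i in range(-200, 201)]      # grid on [-10, 10]
ks = [i * dx for i in range(-120, 121)]      # grid on [-6, 6]

u = [math.exp(-x**2 / (2 * s**2)) for x in xs]
w = [sum(ui * cmath.exp(-1j * k * x) for ui, x in zip(u, xs)) * dx for k in ks]

def spread(vals, grid, step):
    """Standard deviation of |psi|^2 (the mean is 0 by symmetry here)."""
    norm = sum(abs(v)**2 for v in vals) * step
    var = sum(g**2 * abs(v)**2 for v, g in zip(vals, grid)) * step / norm
    return math.sqrt(var)

dx_spread = spread(u, xs, dx)     # = s / sqrt(2) for a Gaussian
dk_spread = spread(w, ks, dx)     # = 1 / (s * sqrt(2))
print(dx_spread * dk_spread)      # ~ 0.5, the Heisenberg minimum
```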
|
Some weeks ago, Robert Kucharczyk and Peter Scholze found a topological realisation of the ‘hopeless’ part of the absolute Galois group $\mathbf{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$. That is, they constructed a compact connected space $M_{cyc}$ such that etale covers of it correspond to Galois extensions of the cyclotomic field $\mathbb{Q}_{cyc}$. This gives, at least in theory, a handle on the hopeless part of the Galois group $\mathbf{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}_{cyc})$, see the previous post in this series.
Here, we will get halfway into constructing $M_{cyc}$. We will try to understand the topology of the prime ideal spectrum $\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}])$ of the complex group algebra of the multiplicative group $\overline{\mathbb{Q}}^{\times}$ of all non-zero algebraic numbers.
[section_title text=”Pontryagin duals”]
Take an Abelian locally compact group $A$ (for example, an Abelian group equipped with the discrete topology), then its Pontryagin dual $A^{\vee}$ is the space of all continuous group morphisms $A \rightarrow \mathbb{S}^1$ to the unit circle $\mathbb{S}^1$ endowed with the compact open topology.
There are these topological properties of the locally compact group $A^{\vee}$:
– $A^{\vee}$ is compact if and only if $A$ has the discrete topology,
– $A^{\vee}$ is connected if and only if $A$ is a torsion free group,
– $A^{\vee}$ is totally disconnected if and only if $A$ is a torsion group.
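As a concrete finite illustration (not from the post): the characters of $\mathbb{Z}/n\mathbb{Z}$ are determined by the image of $1$, which must be an $n$-th root of unity, so the dual group is finite, hence compact and totally disconnected, matching the torsion criterion above. A brute-force check in Python, with helper names of my own:

```python
import cmath

def characters(n):
    """All group morphisms Z/nZ -> S^1: chi_j(k) = exp(2*pi*i*j*k/n)."""
    return [[cmath.exp(2j * cmath.pi * j * k / n) for k in range(n)]
            for j in range(n)]

def is_morphism(chi, n):
    """Check chi((a + b) mod n) == chi(a) * chi(b) for all a, b."""
    return all(abs(chi[(a + b) % n] - chi[a] * chi[b]) < 1e-9
               for a in range(n) for b in range(n))

chars = characters(6)
print(len(chars))                                 # 6 = |(Z/6Z)^dual|
print(all(is_morphism(chi, 6) for chi in chars))  # True
```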
If we take the additive group of rational numbers with the discrete topology, the dual space $\mathbb{Q}^{\vee}$ is the one-dimensional solenoid
It is a compact and connected group, but it is not path connected. In fact, its path-connected components can be identified with the finite adele classes $\mathbb{A}_f/\mathbb{Q} = \widehat{\mathbb{Z}}/\mathbb{Z}$ where $\widehat{\mathbb{Z}}$ is the ring of profinite integers.
[section_title text=”The multiplicative group of algebraic numbers”]
A torsion element $x$ in the multiplicative group $\overline{\mathbb{Q}}^{\times}$ of all algebraic numbers must satisfy $x^N=1$ for some $N$ so is a root of unity, so we have the exact sequence of Abelian groups
$0 \rightarrow \pmb{\mu}_{\infty} \rightarrow \overline{\mathbb{Q}}^{\times} \rightarrow \overline{\mathbb{Q}}^{\times}_{tf} \rightarrow 0$
where the last term is the maximal torsion-free quotient of $\overline{\mathbb{Q}}^{\times}$. By Pontryagin duality this gives us an exact sequence of compact topological groups
$0 \rightarrow (\overline{\mathbb{Q}}^{\times}_{tf})^{\vee} \rightarrow (\overline{\mathbb{Q}}^{\times})^{\vee} \rightarrow \pmb{\mu}^{\vee}_{\infty} \rightarrow 0$
Here, the left-most space is connected and $\pmb{\mu}^{\vee}_{\infty}$ is totally disconnected. That is, the connected components of $(\overline{\mathbb{Q}}^{\times})^{\vee}$ are precisely the translates of the connected subgroup $(\overline{\mathbb{Q}}^{\times}_{tf})^{\vee}$.
[section_title text=”Prime ideal spectra”]
The short exact sequence of Abelian groups gives a short exact sequence of the corresponding group schemes
$0 \rightarrow \mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}_{tf}]) \rightarrow \mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}]) \rightarrow \mathbf{Spec}(\mathbb{C}[\pmb{\mu}_{\infty}]) \rightarrow 0$
The torsion free abelian group $\overline{\mathbb{Q}}^{\times}_{tf}$ is the direct limit $\underset{\rightarrow}{lim}~M_i$ of finitely generated abelian groups $M_i$ and as the corresponding group algebra $\mathbb{C}[M_i] = \mathbb{C}[x_1,x_1^{-1},\cdots, x_k,x_k^{-1}]$, we have that $\mathbf{Spec}(\mathbb{C}[M_i])$ is connected. But then this also holds for
$\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}_{tf}]) = \underset{\leftarrow}{lim}~\mathbf{Spec}(\mathbb{C}[M_i])$
The underlying group of $\mathbb{C}$-points of $\mathbf{Spec}(\mathbb{C}[\pmb{\mu}_{\infty}])$ is $\pmb{\mu}_{\infty}^{\vee}$ and is therefore totally disconnected. But then we have
$\pi_0(\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}])) \simeq \pi_0(\mathbf{Spec}(\mathbb{C}[\pmb{\mu}_{\infty}])) \simeq \pmb{\mu}_{\infty}^{\vee}$
and, more importantly, for the etale fundamental group
$\pi_1^{et}(\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}]),x) \simeq \pi_1^{et}(\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}_{tf}]),y)$
So, we have to compute the latter one. Again, write the torsion-free quotient as a direct limit of finitely generated torsion-free Abelian groups and recall that connected etale covers of $\mathbf{Spec}(\mathbb{C}[M_i])=\mathbf{Spec}(\mathbb{C}[x_1,x_1^{-1},\cdots,x_k,x_k^{-1}])$ are all of the form $\mathbf{Spec}(\mathbb{C}[N])$, where $N$ is a subgroup of $M_i \otimes \mathbb{Q}$ that contains $M_i$ with finite index (that is, adjoining roots of the $x_i$).
Again, this goes through the limit and so a connected etale cover of $\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}_{tf}])$ would be determined by a subgroup of the $\mathbb{Q}$-vectorspace $\overline{\mathbb{Q}}^{\times}_{tf} \otimes \mathbb{Q}$ containing $\overline{\mathbb{Q}}^{\times}_{tf}$ with finite index.
But, $\overline{\mathbb{Q}}^{\times}_{tf}$ is already a $\mathbb{Q}$-vectorspace as we can take arbitrary roots in it (remember we’re using the multiplicative structure). That is, $\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}])$ is simply connected!
[section_title text=”Bringing in the Galois group”]
Now, we’re closing in on the mysterious space $M_{cyc}$. Clearly, it cannot be the complex points of $\mathbf{Spec}(\mathbb{C}[\overline{\mathbb{Q}}^{\times}])$ as this has no proper etale covers, but we still have to bring the Galois group $\mathbf{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}_{cyc})$ into the game.
The group algebra $\mathbb{C}[\overline{\mathbb{Q}}^{\times}]$ is a commutative and cocommutative Hopf algebra, and all the elements of the Galois group act on it as Hopf-automorphisms, so it is natural to consider the fixed Hopf algebra
$H_{cyc}=\mathbb{C}[\overline{\mathbb{Q}}^{\times}]^{\mathbf{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}_{cyc})}$
This Hopf algebra has an interesting alternative description as a subalgebra of the Witt ring $W(\mathbb{Q}_{cyc})$, bringing it into the realm of $\mathbb{F}_1$-geometry.
This ring of Witt vectors has as its underlying set the set $1 + t\,\mathbb{Q}_{cyc}[[t]]$ of formal power series with constant term $1$. Addition on this set is defined by multiplication of power series. The surprising fact is that we can then put a ring structure on it by demanding that the product $\odot$ should obey the rule that for all $a,b \in \mathbb{Q}_{cyc}$ we have
$(1-at) \odot (1-bt) = 1 - abt$
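The additive structure is easy to verify by machine: Witt "addition" is ordinary multiplication of the power series, truncated at any finite order. A small Python sketch (the coefficient-list representation and helper name are mine):

```python
def witt_add(f, g, order=6):
    """'Addition' in the big Witt ring W(Q) is multiplication of the
    power series; f and g are coefficient lists [1, a1, a2, ...]."""
    h = [0] * order
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            if i + j < order:
                h[i + j] += a * b
    return h

# (1 - 2t) "+" (1 - 3t) = (1 - 2t)(1 - 3t) = 1 - 5t + 6t^2
print(witt_add([1, -2], [1, -3]))   # [1, -5, 6, 0, 0, 0]

# The neutral element for this addition is the constant series 1:
print(witt_add([1], [1, -5, 6]))    # [1, -5, 6, 0, 0, 0]
```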
In this mind-boggling ring the Hopf algebra $H_{cyc}$ is the subring consisting of all power series having a rational expression of the form
$\frac{1+a_1t+a_2t^2+ \cdots + a_n t^n}{1+b_1 t + b_2 t^2 + \cdots + b_m t^m}$
with all $a_i,b_j \in \mathbb{Q}_{cyc}$.
We can embed $\pmb{\mu}_{\infty}$ by sending a root of unity $\zeta$ to $1 - \zeta t$, and then the desired space $M_{cyc}$ will be close to
$\mathbf{Spec}(H_{cyc} \otimes_{\mathbb{Z}[\pmb{\mu}_{\infty}]} \mathbb{C})$
but I’ll spare the details for another time.
In case you want to know more about the title-picture, quoting from John Baez’ post The Beauty of Roots:
“Sam Derbyshire decided to make a high resolution plot of some roots of polynomials. After some experimentation, he decided that his favorite were polynomials whose coefficients were all 1 or -1 (not 0). He made a high-resolution plot by computing all the roots of all polynomials of this sort having degree ≤ 24. That’s $2^{24}$ polynomials, and about $24 \times 2^{24}$ roots — or about 400 million roots! It took Mathematica 4 days to generate the coordinates of the roots, producing about 5 gigabytes of data.”
|
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
|
Once we have ascertained that our Euler diagram fits well, we can turn to visualizing the solution. For this purpose,
eulerr relies on the grid graphics system (R Core Team 2017) and offers intuitive and granular control over the output.
Plotting the ellipses is straightforward using the parametrization of a rotated ellipse,
\[ \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} h + a \cos{\theta}\cos{\phi} - b \sin{\theta}\sin{\phi} \\ k + a \cos{\theta}\sin{\phi} + b \sin{\theta}\cos{\phi} \end{bmatrix}, \]
where \(\theta \in [0, 2\pi],\quad a,b>0\), and \(\phi\) is the counter-clockwise rotation of the ellipse.
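A quick numerical sanity check of this parametrization (in Python rather than R, purely illustrative, with a rotation angle \(\phi\); set \(\phi = 0\) for the axis-aligned case): generated boundary points should satisfy the implicit ellipse equation after undoing the translation and rotation.

```python
import math

def ellipse_points(h, k, a, b, phi, n=100):
    """Points on an ellipse centred at (h, k) with semi-axes a, b,
    rotated counter-clockwise by phi."""
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        x = h + a * math.cos(t) * math.cos(phi) - b * math.sin(t) * math.sin(phi)
        y = k + a * math.cos(t) * math.sin(phi) + b * math.sin(t) * math.cos(phi)
        pts.append((x, y))
    return pts

# Verify against the implicit equation: after translating by (-h, -k) and
# rotating by -phi, each point satisfies (x/a)^2 + (y/b)^2 = 1.
h, k, a, b, phi = 1.0, -2.0, 3.0, 1.5, 0.7
for x, y in ellipse_points(h, k, a, b, phi):
    xr = (x - h) * math.cos(phi) + (y - k) * math.sin(phi)
    yr = -(x - h) * math.sin(phi) + (y - k) * math.cos(phi)
    assert abs((xr / a) ** 2 + (yr / b) ** 2 - 1) < 1e-9
print("all points lie on the ellipse")
```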
Most users will also want to label the ellipses and their intersections with text; this, however, is considerably more involved.
Labeling the ellipses is complicated since the shapes of the intersections often are irregular, lacking well-defined centers; we know of no analytical solution to this problem. Instead,
eulerr relies on the polylabelr package (Larsson 2018), which was created by the author. It provides a simple wrapper for the polylabel (Mapbox 2018) C++ library from Mapbox.
Euler diagrams display both quantitative and qualitative data. The quantitative aspect is the quantities or sizes of the sets depicted in the diagram and is visualized by the relative sizes, and possibly the labels, of the areas of the shapes—this is the main focus of this paper. The qualitative aspects, meanwhile, consist of the mapping of each set to some quality or category, such as having a certain gene or not. In the diagram, these qualities can be separated through any of the following aesthetics:
or a combination of these. The main purpose of these aesthetics is to separate out the different ellipses so that the audience may interpret the diagram with ease and clarity.
Among these aesthetics, the best choice (from a viewer perspective) appears to be color (Blake 2016), which provides useful information without extraneous chart junk.
The issue with color, however, is that it cannot be perceived perfectly by all. Eight percent of men and 0.4% of women in European Caucasian countries, for instance, suffer from the most common form, red–green color deficiency. Moreover, color is often printed at a premium in scientific publications and adds no utility to a diagram of two shapes.
For these reasons,
eulerr defaults to distinguishing ellipses with color using a manually tuned color palette.
If there are disjoint clusters of ellipses, the optimizer will often spread these out more than is necessary, wasting space in our diagram. To tackle this, we use a SKYLINE-BL rectangle packing algorithm (Jylänki 2010) designed specifically for
eulerr. In it, we surround each ellipse cluster with a bounding box, pack these boxes into a bin of appropriate size and aspect ratio, and adjust the coordinates of the ellipses in the clusters to compact our diagram. As a bonus, this increases the chance of having similar layouts for different function calls.
Blake, Andrew. 2016. “The Impact of Graphical Choices on the Perception of Euler Diagrams.” Ph.D. dissertation, Brighton, UK: Brighton University. http://eprints.brighton.ac.uk/15754/1/main.pdf.
Jylänki, Jukka. 2010. “A Thousand Ways to Pack the Bin – a Practical Approach to Two-Dimensional Rectangle Bin Packing.”
Larsson, Johan. 2018. “polylabelr: Find the Pole of Inaccessibility (Visual Center) of a Polygon.”
Mapbox. 2018. “polylabel: A Fast Algorithm for Finding the Pole of Inaccessibility of a Polygon (in JavaScript and C++).” Mapbox.
R Core Team. 2017. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing. https://www.R-project.org/.
|
Consider two canonical systems, 1 and 2, with particle numbers \(N_1\) and \(N_2\), volumes \(V_1\) and \(V_2\) and at temperature \(T\). The systems are in chemical contact, meaning that they can exchange particles. Furthermore, we assume that \(N_2 \gg N_1 \) and \(V_2 \gg V_1 \) so that system 2 is a particle reservoir. The total particle number and volume are
\[V = V_1 + V_2\]
\[N = N_1 + N_2\]
The total Hamiltonian \(H (x, N ) \) is
\[ H(x,N) = H_1(x_1,N_1) + H_2(x_2,N_2)\]
If the systems
could not exchange particles, then the canonical partition function for the whole system would be
\[ Q(N,V,T) = \dfrac {1}{N! h^{3N}}\int dx \, e^{-\beta (H_1(x_1,N_1)+ H_2(x_2,N_2))}\]
\[ = \dfrac {N_1! N_2!}{N!}Q_1(N_1,V_1,T)Q_2(N_2,V_2,T)\]
where
\[ Q_1(N_1,V_1,T) = \dfrac{1}{N_1! h^{3N_1}}\int dx \, e^{-\beta H_1(x_1,N_1)}\]
\[ Q_2(N_2,V_2,T) = \dfrac{1}{N_2! h^{3N_2}}\int dx \, e^{-\beta H_2(x_2,N_2)}\]
However, \(N_1\) and \(N_2\) are
not fixed, therefore, in order to sum over all microstates, we need to sum over all values that \(N_1\) can take on subject to the constraint \(N = N_1 + N_2\). Thus, we can write the canonical partition function for the whole system as
\[ Q(N,V,T) = \sum_{N_1=0}^N f(N_1,N) \frac {N_1! N_2!}{N!} Q_1(N_1,V_1,T) Q_2(N_2,V_2,T)\]
where \(f(N_1, N)\) is a function that weights each value of \(N_1\) for a given \(N\). Thus,
\(f(0, N)\) is the number of configurations with 0 particles in \(V_1\) and \(N\) particles in \(V_2\).
\(f(1, N)\) is the number of configurations with 1 particle in \(V_1\) and \(N - 1\) particles in \(V_2\).
etc.
Determining the values of \(f (N_1, N ) \) amounts to a problem of counting the number of ways we can put \(N\) identical objects into 2 baskets. Thus,
\[ f (0, N ) = 1 \]
\[ f (1, N) = N = \frac {N!}{1! (N - 1)! } \]
\[f(2,N)=\frac {N(N-1)}{2} = \frac {N!}{2!(N-2)!}\]
etc. or generally,
\[ f(N_1,N) = \frac {N!}{N_1! (N-N_1)!} = \frac {N!}{N_1! N_2!}\]
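This counting can be verified by brute force: enumerate all \(2^N\) assignments of \(N\) labeled particles to the two volumes and compare with the binomial coefficient (a small Python check, not part of the derivation):

```python
from itertools import product
from math import comb, factorial

def count_configs(N, N1):
    """Number of ways to put N labeled particles into two baskets
    with exactly N1 of them in basket 1, by brute-force enumeration."""
    return sum(1 for assignment in product((1, 2), repeat=N)
               if assignment.count(1) == N1)

N = 6
for N1 in range(N + 1):
    f = factorial(N) // (factorial(N1) * factorial(N - N1))
    assert count_configs(N, N1) == f == comb(N, N1)

# The weights also sum to the total number of configurations, 2^N:
print(sum(count_configs(N, N1) for N1 in range(N + 1)))   # 64
```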
Recall, however, the ad hoc quantum correction \(\frac{1}{N!}\) in the expression for the canonical partition function, and we see that these quantum factors will exactly cancel the classical degeneracy factor, since \(f(N_1,N)\frac{N_1! N_2!}{N!} = 1\), leading to the following expression:
\[ Q(N,V,T) = \sum_{N_1=0}^{N} Q_1(N_1,V_1,T)\, Q_2(N_2,V_2,T)\]
which expresses the fact that, in reality, the various configurations are not distinguishable from each other, and so each one should count with equal weighting. Now, the distribution function \(\rho(x,N)\) is given by
\[\rho(x,N) = \frac{\frac{1}{N!h^{3N}}e^{-\beta H(x,N)}}{Q(N,V,T)}\]
\[ \int dx \rho(x,N) = 1\]
However, recognizing that \(N_2 \approx N\), we can obtain the distribution for \(\rho _1 (x_1, N_1)\) immediately, by integrating over the phase space of system 2:
\[ \rho_1(x_1,N_1) = \frac{1}{Q(N,V,T)} \frac{1}{N_1! h^{3N_1}}e^{-\beta H_1 (x_1, N_1)} \frac{1}{N_2! h^{3N_2}}\int dx_2 e^{-\beta H_2(x_2,N_2)}\]
where the \(\frac{1}{N_1! h^{3N_1}}\) prefactor has been introduced so that
\[\sum_{N_1=0}^N \int dx_1\, \rho_1(x_1,N_1) = 1\]
It is the usual ad hoc quantum correction factor that must be multiplied by the distribution function for each ensemble to account for the identical nature of the particles. Thus, we see that the distribution function becomes
\[ \rho_1(x_1,N_1) = \frac {Q_2(N_2,V_2,T)}{Q(N,V,T)} \frac{1}{N_1! h^{3N_1}}e^{-\beta H_1(x_1,N_1)}\]
Recall that the Helmholtz free energy is given by
\[A = -\frac {1}{\beta}\ln Q\]
Thus,
\[ Q(N,V,T) = e^{-\beta A(N,V,T)}\]
\[ Q_2(N_2,V_2,T) = e^{-\beta A(N_2,V_2,T)} = e^{-\beta A(N-N_1,V-V_1,T)}\]
or
\[\frac {Q_2(N_2,V_2,T)}{Q(N,V,T)} = e^{-\beta (A(N-N_1,V-V_1,T) -A(N,V,T))}\]
Since \(N_1 \ll N\) and \(V_1 \ll V\), we may expand:
\[A(N-N_1,V-V_1,T) \approx A(N,V,T) - \frac{\partial A}{\partial N}N_1 - \frac{\partial A}{\partial V}V_1 + \cdots = A(N,V,T) - \mu N_1 + PV_1 + \cdots\]
where we have used the thermodynamic relations \(\mu = \left(\frac{\partial A}{\partial N}\right)_{V,T}\) and \(P = -\left(\frac{\partial A}{\partial V}\right)_{N,T}\).
Therefore the distribution function becomes
\[ \rho_1(x_1,N_1) = \frac{1}{N_1! h^{3N_1}}e^{\beta \mu N_1}e^{-\beta PV_1}e^{-\beta H_1(x_1,N_1)} = \frac{1}{e^{\beta PV_1}}\frac{1}{N_1! h^{3N_1}} e^{\beta \mu N_1}e^{-\beta H_1(x_1,N_1)}\]
Dropping the ``1'' subscript, we have
\[\rho(x,N) =\frac{1}{e^{\beta PV}}\left[\frac{1}{N! h^{3N}}e^{\beta \mu N}e^{-\beta H(x,N)}\right]\]
The normalization condition becomes
\[ \sum_{N=0}^{\infty}\int dx\,\rho(x,N) = \frac{1}{e^{\beta PV}}\left[\sum_{N=0}^{\infty} \frac{1}{N! h^{3N}}e^{\beta \mu N}\int dx\, e^{-\beta H(x,N)}\right] = 1\]
Now, we define the grand canonical partition function
\[{\cal Z}(\mu,V,T) = \sum_{N=0}^{\infty} \frac{1}{N! h^{3N}}e^{\beta \mu N}\int dxe^{-\beta H(x,N)}\]
\[ {\cal Z}(\mu,V,T) = e^{\beta PV}\]
\[ \ln {\cal Z}(\mu,V,T) = \frac{PV}{kT}\]
Therefore \(PV\) is the free energy of the grand canonical ensemble, and the entropy \(S (\mu , V, T ) \) is given by
\[S(\mu,V,T) = \left(\frac{\partial (PV)}{\partial T}\right)_{\mu, V} = k \ln {\cal Z} (\mu, V, T) - k \beta \left ( \frac {\partial}{\partial \beta } \ln {\cal Z}(\mu,V,T)\right)_{\mu,V}\]
It is often convenient to work with the fugacity \(\zeta\), defined to be
\[\zeta = e^{\beta \mu}\]
\[{\cal Z}(\zeta,V,T) = \sum_{N=0}^{\infty} \frac{1}{N! h^{3N}} \zeta ^N \int dx e^{-\beta H(x,N)} =\sum_{N=0}^{\infty} \zeta^N Q(N,V,T)\]
Other thermodynamic quantities follow straightforwardly:
\[\frac{\partial}{\partial \mu} = \frac{\partial \zeta}{\partial \mu}\frac{\partial}{\partial \zeta}= \beta \zeta\frac{\partial}{\partial \zeta}\]
Thus,
\[\langle N \rangle = \zeta \frac{\partial}{\partial \zeta}\ln {\cal Z}(\zeta,V,T)\]
Energy
\[ E = \langle H(x,N)\rangle = \dfrac{1}{\cal {Z}}\sum_{N=0}^{\infty} \dfrac{\zeta^N}{N! h^{3N}}\int dxH(x,N)e^{-\beta H(x,N)}\]
\[ = -\left(\frac{\partial\ln {\cal Z}(\zeta,V,T)}{\partial \beta}\right)_{\zeta,V}\]
Average particle number
\[\langle N \rangle = kT\left(\frac{\partial \ln {\cal Z}(\mu,V,T)}{\partial \mu}\right)_{V,T}\]
This can also be expressed using the fugacity by noting that \(kT\frac{\partial}{\partial \mu} = \zeta \frac{\partial}{\partial \zeta}\), which recovers \(\langle N \rangle = \zeta \frac{\partial}{\partial \zeta}\ln {\cal Z}(\zeta,V,T)\).
|
You may as well assume that $U=N$, otherwise restrict to $U$.
Thus the question is: given a manifold $N$ and a distribution $\Delta$ such that $\Delta_q= \newcommand\Span{\operatorname{Span}}\Span(X_{1,q},\ldots,X_{i,q},\ldots)$ for $X_i$ global vector fields on $N$, then for any $X\in \Delta$, does there exist a locally finite collection of smooth functions $a_i$ such that $X=\sum_i a_iX_i$? (I've generalized slightly by allowing the index set to be infinite.)
The answer is yes. (Admittedly I'm a little new to differential geometry, but this should be correct).
Proof.
Let $k = \dim \Delta$.
For any $q\in N$, $\Delta_q$ is the span of the $X_{i,q}$, so there are indices $i_1,\ldots,i_k$ such that $X_{i_1},\ldots, X_{i_k}$ form a basis for $\Delta_q$ at $q$. Since being a basis for the distribution is an open condition ($\det\ne 0$), there is some neighborhood of $q$, $U_q$, on which $X_{i_1}|_{U_q},\ldots,X_{i_k}|_{U_q}$ form a local basis. On $U_q$, we can write $X|_{U_q}$ as $X|_{U_q} = \sum_{j=1}^k a_{q,i_j}X_{i_j}|_{U_q}$ for some smooth functions $a_{q,i_j}:U_q\to \Bbb{R}$.
Then for the other indices, define $a_{q,i} : U_q\to \Bbb{R}$ by $a_{q,i}=0$ when $i\ne i_j$ for some $j$.
On $U_q$, we now have $$X|_{U_q}=\sum_i a_{q,i} X_i |_{U_q}.$$
To extend this to the whole manifold, observe that since we constructed these functions for some neighborhood of every point $q$, we can cover $N$ by open sets $U_q$.
Taking a smooth partition of unity $\{\rho_q\}$, where $\operatorname{supp}\rho_q \subseteq U_q$, we now have$$ X = \sum_q \rho_q \sum_i a_{q,i}X_i =\sum_i \sum_q \rho_q a_{q,i} X_i.$$
Thus if we define $a_i = \sum_q \rho_q a_{q,i}$, we obtain smooth functions $a_i$ such that$$X=\sum_i a_iX_i,$$as desired.
Point of clarification
When I say that being a basis is an open condition ($\det\ne 0$), what I mean precisely is the following.
There is some open neighborhood of $q$ on which $\Delta$ has a local basis, $e_1,\ldots,e_k$. With respect to this basis, we can write our vector fields $$X_{i_j} =\sum_{\ell} a_{j\ell} e_\ell$$ for some smooth functions $a_{j\ell}$ defined on this neighborhood. Taking the determinant of the matrix $(a_{j\ell})$, we have that the vector fields $X_{i_j}$ are a local basis when the determinant of this matrix is nonzero. This is true on a neighborhood of $q$, since it is true at $q$.
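To make the "open condition" concrete, here is a small numerical illustration (entirely my own, with made-up fields): on $\Bbb R$, the $\Bbb R^2$-valued fields $X_1(x) = (1, x)$ and $X_2(x) = (x, 1)$ have $\det = 1 - x^2$, so they form a basis exactly where $x \neq \pm 1$ (an open set), and there the coefficients of any field in their span are recovered by Cramer's rule.

```python
def X1(x):
    return (1.0, x)

def X2(x):
    return (x, 1.0)

def decompose(v, x):
    """Solve a1*X1(x) + a2*X2(x) = v by Cramer's rule; needs 1 - x^2 != 0."""
    det = 1.0 - x * x
    a1 = (v[0] - x * v[1]) / det
    a2 = (v[1] - x * v[0]) / det
    return a1, a2

x = 0.5
v = (1.0 + 2 * x, x + 2.0)   # the field X = X1 + 2*X2, evaluated at x
print(decompose(v, x))        # (1.0, 2.0): the coefficients are recovered
```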
Side note
Note that this proof didn't require the index set to have any particular cardinality, nor did it require $\Delta$ to be a distribution specifically rather than a general vector bundle. Also, it's fairly clear that the $a_{q,i}$s we construct, if we regard them as functions of $X$, are linear in $X$. Thus, with some slight modifications of this proof, we obtain the following result.
Proposition: Let $E$ be a vector bundle on a manifold $N$. Given global sections $\{X_i\}\subseteq \Gamma(E)$ such that $\{X_{i,q}\}$ span $E_q$ at every point $q$ of $N$, there are global sections $a_i \in \Gamma(E^*)$ such that for every $X\in \Gamma(E)$, $$X=\sum_i a_i(X)X_i.$$
An algebraist will recognize this as roughly saying that $\Gamma(E)$ is a projective module over $C^\infty(N)$. This statement is a little stronger, at least when $N$ is compact. When $N$ isn't compact, the proof given here doesn't quite show that it's projective, since our sum is only locally finite. Compare with this characterization of projective modules.
|
Over the last few days I’ve been staring at the Bost-Connes algebra to find a ringtheoretic way into it. I’ve had some chats about it with the resident graded-guru but all we came up with so far is that it seems to be an extension of Fred’s definition of a ‘crystalline’ graded algebra. Knowing that several excellent ringtheorists keep an eye on my stumblings here, let me launch an appeal for help:
What is the most elegant ringtheoretic framework in which the Bost-Connes Hecke algebra is a motivating example?
Let us review what we know so far and extend upon it with a couple of observations that may (or may not) be helpful to you. The algebra $\mathcal{H} $ is the algebra of $\mathbb{Q} $-valued functions (under the convolution product) on the double coset-space $\Gamma_0 \backslash \Gamma / \Gamma_0 $ where
$\Gamma = \left\{ \begin{bmatrix} 1 & b \\ 0 & a \end{bmatrix}~:~a,b \in \mathbb{Q}, a > 0 \right\} $ and $\Gamma_0 = \left\{ \begin{bmatrix} 1 & n \\ 0 & 1 \end{bmatrix}~:~n \in \mathbb{N}_+ \right\} $
We have seen that a $\mathbb{Q} $-basis is given by the characteristic functions $X_{\gamma} $ (that is, such that $X_{\gamma}(\gamma') = \delta_{\gamma,\gamma'} $) with $\gamma $ a rational point represented by the couple $~(a,b) $ (the entries in the matrix definition of a representant of $\gamma $ in $\Gamma $) lying in the fractal comb
defined by the rule that $b < \frac{1}{n} $ if $a = \frac{m}{n} $ with $m,n \in \mathbb{N}, (m,n)=1 $. Last time we have seen that the algebra $\mathcal{H} $ is generated as a $\mathbb{Q} $-algebra by the following elements (changing notation)
$\begin{cases}X_m=X_{\alpha_m} & \text{with } \alpha_m = \begin{bmatrix} 1 & 0 \\ 0 & m \end{bmatrix}~\forall m \in \mathbb{N}_+ \\
X_n^*=X_{\beta_n} & \text{with } \beta_n = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{n} \end{bmatrix}~\forall n \in \mathbb{N}_+ \\ Y_{\lambda} = X_{\gamma_{\lambda}} & \text{with } \gamma_{\lambda} = \begin{bmatrix} 1 & \lambda \\ 0 & 1 \end{bmatrix}~\forall \lambda \in \mathbb{Q}/\mathbb{Z} \end{cases} $
Using the tricks of last time (that is, figuring out what functions convolution products represent, knowing all double-cosets) it is not too difficult to prove the
defining relations among these generators to be the following (( if someone wants the details, tell me and I’ll include a ‘technical post’ or consult the Bost-Connes original paper but note that this scanned version needs 26.8Mb ))
(1) : $X_n^* X_n = 1, \forall n \in \mathbb{N}_+$
(2) : $X_n X_m = X_{nm}, \forall m,n \in \mathbb{N}_+$
(3) : $X_n X_m^* = X_m^* X_n, \text{whenever } (m,n)=1$
(4) : $Y_{\lambda} Y_{\mu} = Y_{\lambda+\mu}, \forall \lambda,\mu \in \mathbb{Q}/\mathbb{Z}$
(5) : $Y_{\lambda}X_n = X_n Y_{n \lambda},~\forall n \in \mathbb{N}_+, \lambda \in \mathbb{Q}/\mathbb{Z}$
(6) : $X_n Y_{\lambda} X_n^* = \frac{1}{n} \sum_{n \delta = \lambda} Y_{\delta},~\forall n \in \mathbb{N}_+, \lambda \in \mathbb{Q}/\mathbb{Z}$
Simple as these equations may seem, they bring us into rather uncharted ringtheoretic territories. Here a few fairly obvious ringtheoretic ingredients of the Bost-Connes Hecke algebra $\mathcal{H} $
the group-algebra of $\mathbb{Q}/\mathbb{Z} $
The equations (4) can be rephrased by saying that the subalgebra generated by the $Y_{\gamma} $ is the rational groupalgebra $\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] $ of the (additive) group $\mathbb{Q}/\mathbb{Z} $. Note however that $\mathbb{Q}/\mathbb{Z} $ is a torsion group (that is, for all $\gamma = \frac{m}{n} $ we have that $n.\gamma = (\gamma+\gamma+ \ldots + \gamma) = 0 $). Hence, the groupalgebra has LOTS of zero-divisors. In fact, this group-algebra doesn’t have any good ringtheoretic properties except for the fact that it can be realized as a limit of finite groupalgebras (semi-simple algebras)
$\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] = \underset{\rightarrow}{lim}~\mathbb{Q}[\mathbb{Z}/n \mathbb{Z}] $
and hence is a quasi-free (or formally smooth) algebra, BUT far from being finitely generated…
the grading group $\mathbb{Q}^+_{\times} $
The multiplicative group of all positive rational numbers $\mathbb{Q}^+_{\times} $ is a torsion-free Abelian ordered group and it follows from the above defining relations that $\mathcal{H} $ is graded by this group if we give
$deg(Y_{\gamma})=1,~deg(X_m)=m,~deg(X_n^*) = \frac{1}{n} $
Now, graded algebras have been studied extensively in case the grading group is torsion-free abelian ordered AND finitely generated, HOWEVER $\mathbb{Q}^+_{\times} $ is infinitely generated and not much is known about such graded algebras. Still, the ordering should allow us to use some tricks such as taking leading coefficients etc.
the endomorphisms of $\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] $
We would like to view the equations (5) and (6) (the latter after multiplying both sides on the left with $X_n^* $ and using (1)) as saying that $X_n $ and $X_n^* $ are normalizing elements. Unfortunately, the algebra morphisms they induce on the group algebra $\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] $ are NOT isomorphisms, BUT endomorphisms. One source of algebra morphisms on the group-algebra comes from group-morphisms from $\mathbb{Q}/\mathbb{Z} $ to itself. Now, it is known that
$Hom_{grp}(\mathbb{Q}/\mathbb{Z},\mathbb{Q}/\mathbb{Z}) \simeq \hat{\mathbb{Z}} $, the profinite completion of $\mathbb{Z} $. A class of group-morphisms of interest to us are the maps given by multiplication by n on $\mathbb{Q}/\mathbb{Z} $. Observe that these maps are
epimorphisms with a cyclic order n kernel. On the group-algebra level they give us the epimorphisms
$\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] \longrightarrow^{\phi_n} \mathbb{Q}[\mathbb{Q}/\mathbb{Z}] $ such that $\phi_n(Y_{\lambda}) = Y_{n \lambda} $ whence equation (5) can be rewritten as $Y_{\lambda} X_n = X_n \phi_n(Y_{\lambda}) $, which looks good until you think that $\phi_n $ is not an automorphism…
There are even other (non-unital) algebra endomorphisms such as the map $\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] \rightarrow^{\psi_n} R_n $ defined by $\psi_n(Y_{\lambda}) = \frac{1}{n}(Y_{\frac{\lambda}{n}} + Y_{\frac{\lambda + 1}{n}} + \ldots + Y_{\frac{\lambda + n-1}{n}}) $ and then, we can rewrite equation (6) as $Y_{\lambda} X_n^* = X_n^* \psi_n(Y_{\lambda}) $, but again, note that $\psi_n $ is NOT an automorphism.
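(A quick computational sanity check, my own sketch rather than anything from the paper: model elements of $\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] $ as dictionaries mapping representatives in $[0,1) $ to rational coefficients. One then verifies that $\phi_n \circ \psi_n $ is the identity, so $\psi_n $ is a section of the epimorphism $\phi_n $, while $\psi_n \circ \phi_n $ is not, reflecting that $\phi_n $ has a kernel.)

```python
from fractions import Fraction

def frac_mod1(q):
    """Representative of q in [0, 1), i.e. the class of q in Q/Z."""
    return q - (q.numerator // q.denominator)

def phi(n, elem):
    """phi_n : Y_lam -> Y_{n lam}, induced by multiplication by n on Q/Z."""
    out = {}
    for lam, c in elem.items():
        key = frac_mod1(n * lam)
        out[key] = out.get(key, Fraction(0)) + c
    return {k: v for k, v in out.items() if v != 0}

def psi(n, elem):
    """psi_n : Y_lam -> (1/n) * sum over the n preimages of lam under phi_n."""
    out = {}
    for lam, c in elem.items():
        for k in range(n):
            key = frac_mod1((lam + k) / n)
            out[key] = out.get(key, Fraction(0)) + c / n
    return {k: v for k, v in out.items() if v != 0}

Y = {frac_mod1(Fraction(1, 3)): Fraction(1)}   # the generator Y_{1/3}
print(phi(6, psi(6, Y)) == Y)                   # True: phi_n o psi_n = id
print(psi(6, phi(6, Y)) == Y)                   # False: phi_n is not injective
```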
almost strongly graded, but not quite…
Recall from last time that the characteristic function $X_a $ for any double-coset-class $a \in \Gamma_0 \backslash \Gamma / \Gamma_0 $ represented by the matrix $a=\begin{bmatrix} 1 & \lambda \\ 0 & \frac{m}{n} \end{bmatrix} $ could be written in the Hecke algebra as $X_a = n X_m Y_{n \lambda} X_n^* = n Y_{\lambda} X_m X_n^* $. That is, we can write the Bost-Connes Hecke algebra as
$\mathcal{H} = \oplus_{\frac{m}{n} \in \mathbb{Q}^+_{\times}}~\mathbb{Q}[\mathbb{Q}/\mathbb{Z}] X_mX_n^* $
Hence, if only the morphisms $\phi_n $ and $\psi_m $ were automorphisms, this would say that $\mathcal{H} $ is a strongly $\mathbb{Q}^+_{\times} $-graded algebra with part of degree one the groupalgebra of $\mathbb{Q}/\mathbb{Z} $.
However, they are not. But there is an extension of the notion of strongly graded algebras which Fred has dubbed
crystalline graded algebras in which it is sufficient that the algebra maps are all epimorphisms. (maybe I’ll post about these algebras, another time). However, this is not the case for the $\psi_m $…
So, what is the most elegant ringtheoretic framework in which the algebra $\mathcal{H} $ fits??? Surely, you can do better than
generalized crystalline graded algebra…
|
The Compton effect concerns the inelastic scattering of x‑rays by electrons. Scattering means dispersing in different directions, and inelastic means that energy is lost by the scattered object in the process. The intensity of the scattered x‑ray is measured as a function of the wavelength shift \(\Delta \lambda\), where
\[ \lambda ' = \lambda + \Delta \lambda \label {2-6}\]
and the scattering angle \(\theta\).
Figure \(\PageIndex{1}\): The Compton Effect. X-rays scattered from a target at an angle have a different wavelength from the incident x-ray, and produce an ejected electron.
To explain the experimental observations, it is necessary to describe the situation just as one would when discussing two particles, e.g. marbles, colliding and scattering from each other. The x‑ray scatters (changes direction) and causes an electron with mass \(m_e\) to be ejected from the object with a direction that conserves the momentum of the system. Momentum and energy conservation equations then explain the scattering angles and the observed wavelength shift of the x-ray when the momentum of the x-rays is taken to be equal to \(h/\lambda\) and the energy is \(h\nu\).
These considerations lead to Equation \(\ref{2-7}\), which describes the experimental data for the variation of \(\Delta \lambda\) with \(\theta\). The success of using energy and momentum conservation for two colliding particles to explain the experimental data for the Compton effect is powerful evidence that electromagnetic radiation has momentum just like a particle and that the momentum and energy are given by \(h/\lambda\) and \(h\nu\), respectively.
\[ \Delta \lambda = \frac {h}{m_ec} (1 - \cos \theta ) \label {2-7}\]
Example \(\PageIndex{1}\)
For Compton scattering, determine the wavelength shift at a scattering angle of \(90^o\), and identify the scattering angles where the wavelength shift is the smallest and the largest.
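A short numerical sketch of Equation \(\ref{2-7}\) (my own; CODATA values for the constants):

```python
import math

h = 6.62607015e-34       # Planck constant (J s)
m_e = 9.1093837015e-31   # electron rest mass (kg)
c = 2.99792458e8         # speed of light (m/s)

def compton_shift(theta_deg):
    """Wavelength shift (m) for x-rays scattered through theta degrees."""
    return h / (m_e * c) * (1 - math.cos(math.radians(theta_deg)))

print(compton_shift(0))     # 0.0: the shift is smallest for no deflection
print(compton_shift(90))    # ~2.43e-12 m, the Compton wavelength h/(m_e c)
print(compton_shift(180))   # ~4.85e-12 m: largest shift, for backscattering
```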
Contributors Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
|
Steiner Quadruple Systems
A Steiner Quadruple System on \(n\) points is a family \(SQS_n \subset \binom {[n]} 4\) of \(4\)-sets, such that any set \(S\subset [n]\) of size three is a subset of exactly one member of \(SQS_n\).
This module implements Haim Hanani’s constructive proof that a Steiner Quadruple System exists if and only if \(n\equiv 2,4 \pmod 6\). Hanani’s proof consists of six different constructions that build a large Steiner Quadruple System from a smaller one, and though it does not give a very clear understanding of why it works (to say the least)… it does!
The constructions have been implemented while reading two papers simultaneously, for one of them sometimes provides the information that the other one does not. The first one is Haim Hanani’s original paper [Han1960], and the other one is a paper from Horan and Hurlbert which goes through all constructions [HH2012].
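As a quick arithmetic aside (a plain-Python sketch of mine, not part of the module): the admissibility condition is trivial to test, and the number of blocks of an \(SQS_n\) is forced to be \(\binom n3 / \binom 43 = n(n-1)(n-2)/24\), since each of the \(\binom n3\) triples lies in exactly one block and each block contains \(\binom 43 = 4\) triples.

```python
from math import comb

def sqs_exists(n):
    """Hanani's condition: an SQS_n exists iff n = 2 or 4 (mod 6)."""
    return n % 6 in (2, 4)

def num_blocks(n):
    """Forced block count of an SQS_n: C(n,3) / C(4,3)."""
    return comb(n, 3) // 4

print([n for n in range(4, 21) if sqs_exists(n)])   # [4, 8, 10, 14, 16, 20]
print(num_blocks(8))                                # 14 blocks, as for SQS_8
```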
It can be used through the
designs object:
sage: designs.steiner_quadruple_system(8)
Incidence structure with 8 points and 14 blocks
AUTHORS:
Nathann Cohen (May 2013, while listening to “Le Blues Du Pauvre Delahaye”)

Index
This module’s main function is the following:
steiner_quadruple_system()
Return a Steiner Quadruple System on \(n\) points
This function redistributes its work among 6 constructions :
Construction \(1\)
two_n()
Return a Steiner Quadruple System on \(2n\) points

Construction \(2\)
three_n_minus_two()
Return a Steiner Quadruple System on \(3n-2\) points

Construction \(3\)
three_n_minus_eight()
Return a Steiner Quadruple System on \(3n-8\) points

Construction \(4\)
three_n_minus_four()
Return a Steiner Quadruple System on \(3n-4\) points

Construction \(5\)
four_n_minus_six()
Return a Steiner Quadruple System on \(4n-6\) points

Construction \(6\)
twelve_n_minus_ten()
Return a Steiner Quadruple System on \(12n-10\) points
It also defines two specific Steiner Quadruple Systems that the constructions require, i.e. \(SQS_{14}\) and \(SQS_{38}\) as well as the systems of pairs \(P_{\alpha}(m)\) and \(\overline P_{\alpha}(m)\) (see [Han1960]).
Functions
sage.combinat.designs.steiner_quadruple_systems.P(alpha, m)
Return the collection of pairs \(P_{\alpha}(m)\)
For more information on this system, see [Han1960].
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import P
sage: P(3,4)
[(0, 5), (2, 7), (4, 1), (6, 3)]
sage.combinat.designs.steiner_quadruple_systems.barP(eps, m)
Return the collection of pairs \(\overline P_{\alpha}(m)\)
For more information on this system, see [Han1960].
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import barP
sage: barP(3,4)
[(0, 4), (3, 5), (1, 2)]
sage.combinat.designs.steiner_quadruple_systems.barP_system(m)
Return the 1-factorization \(\overline P(m)\) of \(K_{2m}\)
For more information on this system, see [Han1960].
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import barP_system
sage: barP_system(3)
[[(4, 3), (2, 5)], [(0, 5), (4, 1)], [(0, 2), (1, 3)], [(1, 5), (4, 2), (0, 3)], [(0, 4), (3, 5), (1, 2)], [(0, 1), (2, 3), (4, 5)]]
sage.combinat.designs.steiner_quadruple_systems.four_n_minus_six(B)
Return a Steiner Quadruple System on \(4n-6\) points.
INPUT:
B– A Steiner Quadruple System on \(n\) points.
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import four_n_minus_six
sage: for n in range(4, 20):
....:     if (n%6) in [2,4]:
....:         sqs = designs.steiner_quadruple_system(n)
....:         if not four_n_minus_six(sqs).is_t_design(3,4*n-6,4,1):
....:             print("Something is wrong !")
sage.combinat.designs.steiner_quadruple_systems.relabel_system(B)
Relabel the system so that \(\{n-4, n-3, n-2, n-1\}\) is a block of \(B\).
INPUT:
B– a list of 4-tuples on \(0,...,n-1\).
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import relabel_system
sage: SQS8 = designs.steiner_quadruple_system(8)
sage: relabel_system(SQS8)
Incidence structure with 8 points and 14 blocks
sage.combinat.designs.steiner_quadruple_systems.steiner_quadruple_system(n, check=False)
Return a Steiner Quadruple System on \(n\) points.
INPUT:
n– an integer such that \(n\equiv 2,4\pmod 6\)
check (boolean) – whether to check that the system is a Steiner Quadruple System before returning it (False by default)
EXAMPLES:
sage: sqs4 = designs.steiner_quadruple_system(4)
sage: sqs4
Incidence structure with 4 points and 1 blocks
sage: sqs4.is_t_design(3,4,4,1)
True
sage: sqs8 = designs.steiner_quadruple_system(8)
sage: sqs8
Incidence structure with 8 points and 14 blocks
sage: sqs8.is_t_design(3,8,4,1)
True
sage.combinat.designs.steiner_quadruple_systems.three_n_minus_eight(B)
Return a Steiner Quadruple System on \(3n-8\) points.
INPUT:
B– A Steiner Quadruple System on \(n\) points.
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import three_n_minus_eight
sage: for n in range(4, 30):
....:     if (n%12) == 2:
....:         sqs = designs.steiner_quadruple_system(n)
....:         if not three_n_minus_eight(sqs).is_t_design(3,3*n-8,4,1):
....:             print("Something is wrong !")
sage.combinat.designs.steiner_quadruple_systems.three_n_minus_four(B)
Return a Steiner Quadruple System on \(3n-4\) points.
INPUT:
B– A Steiner Quadruple System on \(n\) points where \(n\equiv 10\pmod{12}\).
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import three_n_minus_four
sage: for n in range(4, 30):
....:     if n%12 == 10:
....:         sqs = designs.steiner_quadruple_system(n)
....:         if not three_n_minus_four(sqs).is_t_design(3,3*n-4,4,1):
....:             print("Something is wrong !")
sage.combinat.designs.steiner_quadruple_systems.three_n_minus_two(B)
Return a Steiner Quadruple System on \(3n-2\) points.
INPUT:
B– A Steiner Quadruple System on \(n\) points.
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import three_n_minus_two
sage: for n in range(4, 30):
....:     if (n%6) in [2,4]:
....:         sqs = designs.steiner_quadruple_system(n)
....:         if not three_n_minus_two(sqs).is_t_design(3,3*n-2,4,1):
....:             print("Something is wrong !")
sage.combinat.designs.steiner_quadruple_systems.twelve_n_minus_ten(B)
Return a Steiner Quadruple System on \(12n-10\) points.
INPUT:
B– A Steiner Quadruple System on \(n\) points.
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import twelve_n_minus_ten
sage: for n in range(4, 15):
....:     if (n%6) in [2,4]:
....:         sqs = designs.steiner_quadruple_system(n)
....:         if not twelve_n_minus_ten(sqs).is_t_design(3,12*n-10,4,1):
....:             print("Something is wrong !")
sage.combinat.designs.steiner_quadruple_systems.two_n(B)
Return a Steiner Quadruple System on \(2n\) points.
INPUT:
B– A Steiner Quadruple System on \(n\) points.
EXAMPLES:
sage: from sage.combinat.designs.steiner_quadruple_systems import two_n
sage: for n in range(4, 30):
....:     if (n%6) in [2,4]:
....:         sqs = designs.steiner_quadruple_system(n)
....:         if not two_n(sqs).is_t_design(3,2*n,4,1):
....:             print("Something is wrong !")
|
This is mostly a recap of the observations made in the comments, plus some more analysis, because I think it's a nice problem to analyse.
First, both the functional form of the system and the reflection symmetry (see also the phase plane) suggest it's a good idea to introduce $x = \alpha+\beta$, $y = \alpha-\beta$, to obtain\begin{align} \dot{x} &= \frac{1}{2}(y^2 - 3 x^2), \tag{1a}\\ \dot{y} &= 3 x y.\tag{1b}\end{align}We can simplify system (1) somewhat by rescaling $y \to \sqrt{3} y$ and $t \to \frac{1}{3} t$, yielding\begin{align} \dot{x} &= \frac{1}{2}(y^2 - x^2), \tag{2a}\\ \dot{y} &= x y.\tag{2b}\end{align}The phase plane of system (2) looks like this:
What's immediately obvious is that this phase plane is highly symmetric. It seems to be invariant under rotation over an angle of $\frac{2 \pi}{3}$ (= 120 deg), and rotation over an angle of $\frac{2 \pi}{6}$ seems to keep the shape of the orbits invariant, but changes their flow direction. You can check that both of these observations are indeed correct by considering\begin{equation} \begin{pmatrix} \xi_1 \\ \eta_1 \end{pmatrix} := R(\frac{2\pi}{3}) \begin{pmatrix} x \\ y \end{pmatrix}\quad \text{and} \quad \begin{pmatrix} \xi_2 \\ \eta_2 \end{pmatrix} := R(\frac{2\pi}{6}) \begin{pmatrix} x \\ y \end{pmatrix},\end{equation}where\begin{equation} R(\theta) = \begin{pmatrix} \cos \theta & \sin \theta \\ -\sin \theta & \cos \theta \end{pmatrix}\end{equation}is the matrix that rotates a vector about the origin over angle $\theta$. Substituting $\xi_{1,2}$ and $\eta_{1,2}$ in system (2), you obtain\begin{align} \dot{\xi_1} &= \frac{1}{2}(\eta_1^2 - \xi_1^2), \\ \dot{\eta_1} &= \xi_1 \eta_1,\end{align}and\begin{align} \dot{\xi_2} &= -\frac{1}{2}(\eta_2^2 - \xi_2^2), \\ \dot{\eta_2} &= -\xi_2 \eta_2,\end{align}which implies the above observations.
It seems a good idea to 'factor out' this rotational symmetry present in system (2). Introducing polar coordinates $x = r \cos \theta$, $y = r \sin \theta$, we obtain the dynamical system\begin{align} \dot{r} &= -\frac{1}{2} r^2 \cos(3\theta),\tag{3a}\\ \dot{\theta} &= \frac{1}{2} r \sin(3\theta),\tag{3b}\end{align}which is indeed invariant under $\theta \to \theta + \frac{2 \pi}{3}$. Now, we rewrite the above in terms of the new angle $\phi := 3 \theta$ to obtain\begin{align} \dot{r} &= -\frac{1}{2} r^2 \cos(\phi), \tag{4a}\\ \dot{\phi} &= \frac{3}{2} r \sin(\phi). \tag{4b}\end{align}Now, the nice thing is that we can
reinterpret system (4) as the 'polar representation' of some Cartesian system. That is, if we introduce $X$ and $Y$ by $X := r \cos \phi$ and $Y := r \sin \phi$, we can rewrite system (4) in terms of $X$ and $Y$ to obtain\begin{align}\dot{X} &= -\frac{1}{2}(X^2 + 3 Y^2), \tag{5a}\\\dot{Y} &= X Y. \tag{5b}\end{align}The phase plane of system (5) looks as we would have expected:
Moreover, system (5) turns out to be Hamiltonian, i.e. of the form\begin{align} \dot{X} &= \frac{\partial H}{\partial Y},\\ \dot{Y} &= -\frac{\partial H}{\partial X},\end{align}with\begin{equation} H(X,Y) = -\frac{1}{2} Y(X^2 + Y^2).\end{equation}Therefore, orbits lie on level sets of $H$, i.e. those curves where $H = E$ (= constant), which can be used to express $X$ in terms of $Y$ as\begin{equation} X = \pm \sqrt{-\frac{2 E + Y^3}{Y}}.\end{equation}
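Since system (5) is Hamiltonian, $H$ should be conserved along orbits; this is easy to confirm numerically. A minimal sketch (my own, with an arbitrarily chosen initial condition, integrated only for a short time because the nontrivial orbits are unbounded):

```python
# Integrate system (5) with a classical RK4 scheme and verify that the
# Hamiltonian H(X,Y) = -Y(X^2 + Y^2)/2 stays constant along the orbit.

def field(s):
    X, Y = s
    return (-0.5 * (X * X + 3 * Y * Y), X * Y)

def rk4_step(s, h):
    def add(p, q, a):
        return (p[0] + a * q[0], p[1] + a * q[1])
    k1 = field(s)
    k2 = field(add(s, k1, h / 2))
    k3 = field(add(s, k2, h / 2))
    k4 = field(add(s, k3, h))
    return (s[0] + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            s[1] + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

def H(s):
    X, Y = s
    return -0.5 * Y * (X * X + Y * Y)

s = (1.0, 0.5)
H0 = H(s)
for _ in range(500):           # integrate to t = 0.5 with step 1e-3
    s = rk4_step(s, 1e-3)
print(abs(H(s) - H0) < 1e-10)  # True: the orbit stays on a level set of H
```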
As a final remark, the instability of the origin can be derived from system (5) as follows. Consider the horizontal line $\ell = \left\{ (X,Y) \vert Y=0 \right\}$, i.e. the $X$-axis. Substituting $Y=0$ in system (5), we obtain $\dot{Y} = 0$; therefore, the line $\ell$ is
invariant under the flow. In other words, every point on $\ell$ stays on $\ell$ for all time.
It so happens that the origin $(0,0)$ lies on the line $\ell$. Taking a point $(-\epsilon,0) \in \ell$ arbitrarily close to (and directly to the left of) the origin, we see that this point will flow
away from the origin, because the flow on $\ell$ is given by $\dot{X} = -\frac{1}{2} X^2$. This holds for all $\epsilon > 0$; hence, we have found an unstable flow direction at the origin, spanned by $(-1,0)$. Therefore, the origin is a (nonlinearly) unstable fixed point of system (5), and in extension of system (1).
|
To address your questions 1 and 2: this graph shows the real part of $\Psi(\vec r, t)=A e^{i(\vec k \cdot \vec r-\omega t)} $ in blue and the real part of $\Psi(\vec r, t)=A e^{i(\phi + \vec k \cdot \vec r-\omega t)} $ in purple. Obviously $\Psi$ is a function of two variables, so you can regard the graph either as keeping $\vec r$ constant and varying $t$ or keeping $t$ constant and varying $\vec r$.
The quantity $\phi$ is just the phase difference between the two waves e.g. the distance between the peaks shown by the arrow on the diagram.
The absolute value of $\phi$ has no physical significance because you can measure $\phi$ from any reference point you want. However the difference in $\phi$ between two wavefunctions has a very important physical meaning because it determines how the waves will interfere.
To address your question 3: actually the mention of the double slit experiment is spot on. The slits split the incoming light (or electrons or whatever) into two sources, call these $\Psi_a$ and $\Psi_b$, and if you take some point on the screen, this point will receive light from $\Psi_a$ and from $\Psi_b$, but the phase of the two waves, $\phi_a$ and $\phi_b$ won't be the same.
There isn't any physical meaning to the absolute phase of $\Psi_a$ and $\Psi_b$, $\phi_a$ and $\phi_b$, but if $\phi_a - \phi_b$ is an even multiple of $\pi$ ($2\pi$, $4\pi$, etc) the waves will be in sync and you'll get constructive interference and a bright area. If the phase difference is an odd multiple of $\pi$ ($\pi$, $3\pi$ etc) the waves will interfere destructively and you get a dark spot. This is exactly why you get the pattern of alternating bright and dark bands in the two slit experiment - it's because the phase difference, $\phi_a - \phi_b$, varies as you move along the screen.
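A minimal numerical sketch of this (my own, assuming two equal-amplitude waves) confirms that the intensity $|e^{i\phi_a} + e^{i\phi_b}|^2$ depends only on the difference $\phi_a - \phi_b$, with maxima at even multiples of $\pi$ and zeros at odd multiples:

```python
import cmath
import math

def intensity(phase_diff):
    """Intensity |Psi_a + Psi_b|^2 of two unit-amplitude waves whose
    phases differ by phase_diff (a global phase drops out of |.|^2)."""
    return abs(1 + cmath.exp(1j * phase_diff)) ** 2

print(round(intensity(0), 6))            # 4.0 -> constructive, bright fringe
print(round(intensity(math.pi), 6))      # 0.0 -> destructive, dark fringe
print(round(intensity(2 * math.pi), 6))  # 4.0 -> constructive again
```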
So no experiment can measure the absolute value of $\phi_a$ or $\phi_b$, because the absolute value has no physical significance. However the double slit experiment can measure the phase difference $\phi_a - \phi_b$.
|
Global classical large solution to compressible viscous micropolar and heat-conducting fluids with vacuum
School of Mathematics, South China University of Technology, Guangzhou 510641, China
In this paper we consider the non-stationary 1-D flow of a compressible viscous and heat-conducting micropolar fluid, assuming that it is perfect and polytropic in the thermodynamical sense. Owing to the strong nonlinearity and the degeneracies of the equations caused by the temperature equation and the vanishing of the density, there are few results on the global existence of classical solutions to this model. In this paper, we obtain a global classical solution to the equations with large initial data and vacuum. Moreover, we prove the uniqueness of the solution to this system in the absence of vacuum. The analysis is based on the assumption $ \kappa(\theta) = O(1+\theta^q) $ with $ q\geq0 $ and on delicate energy estimates.
Mathematics Subject Classification: Primary: 35Q35, 35B65; Secondary: 76N10. Citation: Zefu Feng, Changjiang Zhu. Global classical large solution to compressible viscous micropolar and heat-conducting fluids with vacuum. Discrete & Continuous Dynamical Systems - A, 2019, 39 (6) : 3069-3097. doi: 10.3934/dcds.2019127
|
Say $f:X\rightarrow Y$ and $g:Y\rightarrow X$ are functions where $g\circ f:X\rightarrow X$ is the identity. Which of $f$ and $g$ is onto, and which is one-to-one?
If it is just a matter of remembering what the right conclusion is, here's the picture I always use to remember: $$\begin{array}{rcl} &\bullet &\\ &&\searrow\\ \bullet\rightarrow& \bullet & \rightarrow\bullet\\ X\quad\quad&Y&\quad\quad Z \end{array}$$ The compositum is one-to-one and onto: the first function is one-to-one but not onto; the second function is onto, but not one-to-one.
So: if a compositum is one-to-one, the first function applied is one-to-one. If a compositum is onto, then the second function applied is onto.
If $g\circ f = \mathrm{id}$, then the first function ($f$) is one-to-one, and the second function ($g$), is onto.
If it is a matter of proving that the first function is one-to-one and the second function is onto, well, you'd need a proof. An example does not suffice.
HINT: $$\begin{array}{}&&\bullet&&\\ &&&\searrow&\\ \bullet&\to&\bullet&\to&\bullet\\ X&f&Y&g&X \end{array}$$
You already have several answers which can help you remember the theorem. If you're looking for a proof (and have problems with showing it yourself), you might try to have a look at these links:
http://www.proofwiki.org/wiki/Injection_if_Composite_is_an_Injection http://www.proofwiki.org/wiki/Surjection_if_Composite_is_a_Surjection
or these questions/answers:
If g(f(x)) is one-to-one (injective) show f(x) is also one-to-one (given that...) Surjection on composed function? Injective and Surjective Functions
More general results are here:
I think $f$ should be one-to-one and $g$ should be onto, since $g$ has to cover all of $X$ in its range and $f$ has to make $X$ correspond in a one-to-one fashion with $Y$. It seems that $g$ could be not one-to-one if it is an inverse of $f$ that discards some of the information that being a member of $Y$ conveys. For example, if $X = \mathbb{R}^{+}$, $Y = \mathbb{R}$, $f(x) = x$, $g(x) = |x|$, then $g$ is not one-to-one, but it is onto, and $f$ is one-to-one, but clearly not onto. Hopefully this example is valid and helps you out.
|
Definition:Strictly Positive/Real Number
Definition
The strictly positive real numbers are the set defined as: $\R_{>0} := \set {x \in \R: x > 0}$
The strictly positive real numbers, written $\R_{>0}$, form the subset of $\R$ that satisfies the following:

$(\R_{>0} 1)$ Closure under addition: $\forall x, y \in \R_{>0}: x + y \in \R_{>0}$

$(\R_{>0} 2)$ Closure under multiplication: $\forall x, y \in \R_{>0}: x y \in \R_{>0}$

$(\R_{>0} 3)$ Trichotomy: $\forall x \in \R: x \in \R_{>0} \lor x = 0 \lor -x \in \R_{>0}$

Also known as
Some sources merely refer to this as positive, as their treatments do not accept $0$ as being either positive or negative. The $\mathsf{Pr} \infty \mathsf{fWiki}$-specific notation $\R_{> 0}$ is actually non-standard. The conventional symbol to denote this concept is $\R_+^*$.
Note that $\R^+$ is also seen sometimes, but this is usually interpreted as the set $\left\{{x \in \R: x \ge 0}\right\}$.
|
When writing a document that contains mathematics, the need often arises to add an explanation (e.g. stating the theorem used). To answer this need I wrote two short LaTeX macros: \explain and \explainup.
\newcommand{\explain}[2]{\underset{\mathclap{\overset{\uparrow}{#2}}}{#1}}
\newcommand{\explainup}[2]{\overset{\mathclap{\underset{\downarrow}{#2}}}{#1}}
These macros add the explanation below and above the formula, respectively.
The first argument is the relation (actually it can be anything) that needs to be explained, for example =. The second argument is the actual explanation.
The macros depend on the mathtools package.
Example
\documentclass{article}
\usepackage{mathtools}
\makeatletter
\newcommand{\explain}[2]{\underset{\mathclap{\overset{\uparrow}{#2}}}{#1}}
\newcommand{\explainup}[2]{\overset{\mathclap{\underset{\downarrow}{#2}}}{#1}}
\makeatother
\begin{document}
\[
U_x\explain{=}{\textrm{C.-R.}}V_y\explainup{\geq}{V(x,y)=ye^x}0
\]
\end{document}
This will result in the formula with "C.-R." printed beneath the equality sign and $V(x,y)=ye^x$ printed above the inequality sign.
|
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE
(Elsevier, 2017-11)
We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...
System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...
Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions
(Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
|
This question came up when I was doing some reading into convolution squares of singular measures. Recall a function $f$ on the torus $\mathbb{T} = [-1/2,1/2]$ is said to be $\alpha$-Hölder (for $0 < \alpha < 1$) if $\sup_{t \in \mathbb{T}} \sup_{h \neq 0} |h|^{-\alpha}|f(t+h)-f(t)| < \infty$. In this case, define this value, $\omega_\alpha(f) = \sup_{t \in \mathbb{T}} \sup_{h \neq 0} |h|^{-\alpha}|f(t+h)-f(t)|$. This behaves much like a metric, except functions differing by a constant will not differ in $\omega_\alpha$. My primary question is this:
1) Is it true that the smooth functions are "dense" in the space of continuous $\alpha$-Hölder functions, i.e., for a given continuous $\alpha$-Hölder $f$ and $\varepsilon > 0$, does there exists a smooth function $g$ with $\omega_\alpha(f-g) < \varepsilon$?
To be precise, where this came up was worded somewhat differently. Suppose $K_n$ are positive, smooth functions supported on $[-1/n,1/n]$ with $\int K_n = 1$.
2) Given a fixed continuous function $f$ which is $\alpha$-Hölder and $\varepsilon > 0$, does there exist $N$ such that $n \geq N$ ensures $\omega_\alpha(f-f*K_n) < \varepsilon$?
This second formulation is stronger than the first, but is not needed for the final result, I believe.
To generalize, fix $0 < \alpha < 1$ and suppose $\psi$ is a function defined on $[0,1/2]$ that is strictly increasing, $\psi(0) = 0$, and $\psi(t) \geq t^{\alpha}$. Say that a function $f$ is $\psi$-Hölder if $\sup_{t \in \mathbb{T}} \sup_{h \neq 0} \psi(|h|)^{-1}|f(t+h)-f(t)| < \infty$. In this case, define this value, $\omega_\psi(f) = \sup_{t \in \mathbb{T}} \sup_{h \neq 0} \psi(|h|)^{-1}|f(t+h)-f(t)|$. Then we can ask 1) and 2) again with $\alpha$ replaced by $\psi$.
I suppose the motivation would be that the smooth functions are dense in the space of continuous functions under the usual metrics on function spaces, and this "Hölder metric" seems to be a natural way of defining a metric of the equivalence classes of functions (where $f$ and $g$ are equivalent if $f = g+c$ for a constant $c$). Any insight would be appreciated.
|
Last time we revisited Robin's theorem, which says that the Riemann hypothesis is equivalent to $5040$ being the largest counterexample to the bound
\[ \frac{\sigma(n)}{n~log(log(n))} < e^{\gamma} = 1.78107... \]
This time we look at Dedekind's Psi function
\[
\Psi(n) = n \prod_{p | n}(1 + \frac{1}{p}) \]
where $p$ runs over the prime divisors of $n$. It is series A001615 in the online encyclopedia of integer sequences and it starts off with
1, 3, 4, 6, 6, 12, 8, 12, 12, 18, 12, 24, 14, 24, 24, 24, 18, 36, 20, 36, 32, 36, 24, 48, 30, 42, 36, 48, 30, 72, 32, 48, 48, 54, 48, …
and here’s a plot of its first 1000 values
To understand this behaviour it is best to focus on the 'slopes' $\frac{\Psi(n)}{n}=\prod_{p|n}(1+\frac{1}{p})$. So, the red dots of minimal 'slope' $\approx 1$ correspond to the prime numbers, and the 'outliers' have a maximal number of distinct small prime divisors. Look at $210 = 2 \times 3 \times 5 \times 7$ and its multiples $420, 630$ and $840$ in the picture. For this reason the primorial numbers, which are the products of the first $k$ prime numbers, play a special role. This is series A002110 starting off with
1, 2, 6, 30, 210, 2310, 30030, 510510, 9699690, 223092870,…
In Patrick Solé and Michel Planat's Extreme values of the Dedekind $\Psi$ function, it is shown that the primorials play a similar role for Dedekind's Psi as the superabundant numbers play for the sum-of-divisors function $\sigma(n)$. That is, if $N_k$ is the $k$-th primorial, then for all $n < N_k$ we have that the 'slope' at $n$ is strictly below that of $N_k$ \[ \frac{\Psi(n)}{n} < \frac{\Psi(N_k)}{N_k} \] which follows immediately from the fact that any $n < N_k$ can have at most $k-1$ distinct prime factors and $p \mapsto 1 + \frac{1}{p}$ is a strictly decreasing function. Another easy, but nice, observation is that for all $n$ we have the inequalities
\[
n^2 > \phi(n) \times \psi(n) > \frac{n^2}{\zeta(2)} \]
where $\phi(n)$ is Euler’s totient function
\[
\phi(n) = n \prod_{p | n}(1 - \frac{1}{p}) \]
This follows as once from the definitions of $\phi(n)$ and $\Psi(n)$
\[
\phi(n) \times \Psi(n) = n^2 \prod_{p|n}(1 - \frac{1}{p^2}) > n^2 \prod_{p~\text{prime}} (1 - \frac{1}{p^2}) = \frac{n^2}{\zeta(2)} \] But now it starts getting interesting. In the proof of his theorem, Guy Robin used a result of his Ph.D. advisor Jean-Louis Nicolas,
known as Nicolas’ criterion for the Riemann hypothesis: RH is true if and only if for all $k$ we have the inequality for the $k$-th primorial number $N_k$
\[ \frac{N_k}{\phi(N_k)~log(log(N_k))} > e^{\gamma} \] From the above lower bound on $\phi(n) \times \Psi(n)$ we have for $n=N_k$ that \[ \frac{\Psi(N_k)}{N_k} > \frac{N_k}{\phi(N_k) \zeta(2)} \] and combining this with Nicolas’ criterion we get \[ \frac{\Psi(N_k)}{N_k~log(log(N_k))} > \frac{N_k}{\phi(N_k)~log(log(N_k)) \zeta(2)} > \frac{e^{\gamma}}{\zeta(2)} \approx 1.08… \] In fact, Patrick Solé and Michel Planat prove in their paper Extreme values of the Dedekind $\Psi$ function that RH is equivalent to the lower bound \[ \frac{\Psi(N_k)}{N_k~log(log(N_k))} > \frac{e^{\gamma}}{\zeta(2)} \] holding for all $k \geq 3$.
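As a quick numerical sanity check (my own sketch, not from the post), one can evaluate $\frac{\Psi(N_k)}{N_k~log(log(N_k))}$ for the first few primorials and compare it against $e^{\gamma}/\zeta(2) \approx 1.08$:

```python
import math

def slope(n):
    """Psi(n)/n = product of (1 + 1/p) over the prime divisors p of n."""
    s, m, p = 1.0, n, 2
    while p * p <= m:
        if m % p == 0:
            s *= 1 + 1 / p
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        s *= 1 + 1 / m
    return s

gamma = 0.5772156649015329                     # Euler-Mascheroni constant
bound = math.exp(gamma) / (math.pi ** 2 / 6)   # e^gamma / zeta(2), about 1.0828

N = 1
for k, p in enumerate([2, 3, 5, 7, 11, 13, 17, 19, 23, 29], start=1):
    N *= p                                     # N is now the k-th primorial N_k
    if k >= 3:                                 # the criterion is stated for k >= 3
        ratio = slope(N) / math.log(math.log(N))
        print(k, N, round(ratio, 4), ratio > bound)
```

For these small $k$ the ratio stays comfortably above the bound (it starts near $1.96$ at $N_3 = 30$ and decreases slowly), consistent with the Solé–Planat criterion; of course a finite check proves nothing about RH.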
Dedekind's Psi function also equals the index of the congruence subgroup $\Gamma_0(n)$ in the modular group. In other words, $\Psi(n)$ gives us the number of tiles needed in the Dedekind tessellation to describe the fundamental domain of the action of $\Gamma_0(n)$ on the upper half-plane by Moebius transformations.
When $n=6$ we have $\Psi(6)=12$ and we can view its fundamental domain via these Sage commands:
G = Gamma0(6)
FareySymbol(G).fundamental_domain()
giving us the 24 black or white tiles (note that these tiles are each fundamental domains of the extended modular group, so we have twice as many of them as for subgroups of the modular group)
But, there are plenty of other, seemingly unrelated, topics where $\Psi(n)$ appears. To name just a few:
The number of points on the projective line $\mathbb{P}^1(\mathbb{Z}/n\mathbb{Z})$.
The number of lattices at hyperdistance $n$ in Conway's big picture.
The number of admissible maximal commuting sets of operators in the Pauli group for the $n$ qudit.
and there are explicit natural one-to-one correspondences between all these manifestations of $\Psi(n)$, to be continued.
|
Yes, your example (pieced together from the original post and the comments) $\begin{bmatrix}F_2&F_2\\0&0\end{bmatrix}$ is a noncommutative rng without identity of order $4$.
Since $\begin{bmatrix}1&0\\0&0\end{bmatrix}$ acts as an identity on the left, it would have to be equal to any candidate two-sided identity, if it existed, but $\begin{bmatrix}0&1\\0&0\end{bmatrix}\begin{bmatrix}1&0\\0&0\end{bmatrix}\neq \begin{bmatrix}0&1\\0&0\end{bmatrix}$
Obviously you can't have such a ring with $1$ or $2$ elements.
If such a ring had $3$ elements, say $\{0,a,b\}$, all distinct, then $a+b$ must be somewhere in this set. $a+b\in \{a,b\}$ creates a contradiction, so $a+b=0$ necessarily, so that $b=-a$. But multiplication in $\{-a,0,a\}$ necessarily commutes.
What you gave is actually a nice representation for one of my favorite examples of semigroup rings. You take the semigroup $S=\{a,b\}$ with multiplication defined by $a^2=ba=a$ and $b^2=ab=b$, and make the semigroup ring $F_2[S]$. I think this is the same ring as if we had taken $a=\begin{bmatrix}1&0\\0&0\end{bmatrix}$ and $b=\begin{bmatrix}1&1\\0&0\end{bmatrix}$.
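All of these claims can be verified by brute force. A sketch of mine (encoding each matrix $\begin{bmatrix}x&y\\0&0\end{bmatrix}$ over $F_2$ as the pair $(x,y)$) checks that multiplication is noncommutative, that no element is a two-sided identity, and that both $a$ and $b$ above act as left identities:

```python
from itertools import product

# the four elements [[x, y], [0, 0]] over F_2, encoded as pairs (x, y)
R = list(product([0, 1], repeat=2))

def add(a, b):
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 2)

def mul(a, b):
    # [[x, y], [0, 0]] * [[u, v], [0, 0]] = [[x*u, x*v], [0, 0]]
    return ((a[0] * b[0]) % 2, (a[0] * b[1]) % 2)

noncommutative = any(mul(a, b) != mul(b, a) for a in R for b in R)
two_sided_identities = [e for e in R
                        if all(mul(e, a) == a and mul(a, e) == a for a in R)]
left_identities = [e for e in R if all(mul(e, a) == a for a in R)]

print(noncommutative)         # → True
print(two_sided_identities)   # → []
print(left_identities)        # → [(1, 0), (1, 1)]
```

The two left identities found are exactly the semigroup elements $a$ and $b$ of the answer; closure, associativity and distributivity come for free since this is a subset of $2\times 2$ matrices closed under the operations.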
|
Definition:Well-Defined/Mapping
Definition
Let $f: S \to T$ be a mapping.
Let $\mathcal R$ be an equivalence relation on $S$.
Let $S / \mathcal R$ be the quotient set determined by $\mathcal R$.
Let $\phi: S / \mathcal R \to T$ be a mapping such that: $\map \phi {\eqclass x {\mathcal R} } = \map f x$
Then $\phi: S / \mathcal R \to T$ is well-defined if and only if: $\forall \tuple {x, y} \in \mathcal R: \map f x = \map f y$
Comment
Suppose we are given a mapping $f: S \to T$.
Suppose we have an equivalence $\mathcal R$ on $S$, and we want to define a mapping on the quotient set: $\phi: S / \mathcal R \to T$ such that $\map \phi {\eqclass \ldots {\mathcal R} } = \map f \ldots$.
That is, we want every member of the equivalence class to map to the same element.
The only way this can be done is to set $\map \phi {\eqclass x {\mathcal R} } = \map f x$.
Now, if $x, y \in S$ are in the same equivalence class with respect to $\mathcal R$, then in order for $\map \phi {\eqclass x {\mathcal R} }$ to make any sort of sense, we need to make sure that $\map \phi {\eqclass x {\mathcal R} } = \map \phi {\eqclass y {\mathcal R} }$, or (which comes to the same thing) $\map f x = \map f y$.
So $\map \phi {\eqclass x {\mathcal R} } = \map f x$ defines a mapping $\phi: S / \mathcal R \to T$ if and only if $\forall \tuple {x, y} \in \mathcal R: \map f x = \map f y$.
If this holds, then the mapping $\phi$ is well-defined.
The terminology is misleading, as $\phi$ cannot be defined at all if the condition is not met. What this means is: if we want to define a mapping from a quotient set to any other set, then all the individual elements of each equivalence class in the domain must map to the same element in the codomain.
Therefore, when attempting to construct or analyse such a mapping, it is necessary to check for well-definedness.
Also known as
Some sources use the term consistent for well-defined.
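To make the check concrete, here is a small Python sketch (my own example, not from ProofWiki): with $S = \mathbb Z$ and $\mathcal R$ congruence modulo $6$, the rule $x \mapsto x \bmod 3$ is constant on each class and so descends to the quotient, while $x \mapsto x \bmod 4$ is not. Sampling finitely many representatives per class is enough to expose the failure:

```python
def is_well_defined(f, classes):
    """Check that f is constant on each (sampled) equivalence class,
    i.e. that f descends to a map on the quotient set."""
    return all(len({f(x) for x in cls}) == 1 for cls in classes)

# a few representatives of each equivalence class of congruence mod 6 on Z
classes_mod6 = [[r + 6 * q for q in range(5)] for r in range(6)]

print(is_well_defined(lambda x: x % 3, classes_mod6))  # → True
print(is_well_defined(lambda x: x % 4, classes_mod6))  # → False
```

The `True` case reflects the fact that $3 \divides 6$, so congruence mod $6$ refines congruence mod $3$; a finite sample can only certify failure, but here it already catches the mod-$4$ map.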
|
Ahh, trig identities... a rite of passage for any precalculus student.
This is a huge stumbling block for many students, because up until this point, many have been perfectly successful (or at least have gotten by) in their classes by learning canned formulas and procedures and then doing a bunch of exercises that just change a \(2\) to a \(3\) here and a plus to a minus there. Now, all of a sudden, there's no set way of going about things. No "step 1 do this, step 2 do that". Now they have to rely on their intuition and "play" with an identity until they prove that it's correct.
And to make matters worse, many textbooks --- and, as a result, many teachers --- make this subject arbitrarily and artificially harder for the students.
They insist that students are not allowed to work on both sides of the equation, but instead must specifically start at one end and work their way to the other. I myself once subscribed to this "rule", because it's how I'd always been taught, and I always fed students the old line of "you can't assume the thing you're trying to prove because that's a logical fallacy".
Then one of my Honors Precalculus students called me on it.
He asked me to come up with an example of a trig non-identity where adding the same thing on both sides would lead to a false proof that the identity was correct. After some thought, I realized that not only couldn't I think of one, but that mathematically, there's no reason that one should exist.
To begin with, one valid way to prove an identity is to work with each side of the equation separately and show that they are both equal to the same thing. For example, suppose you want to verify the following identity:
\[\dfrac{\cot^2{\theta}}{1+\csc{\theta}}=\dfrac{1-\sin{\theta}}{\sin{\theta}}\]
Trying to work from one side to the other would be a nightmare, but it's much simpler to show that each side is equal to \(\csc{\theta}-1\). This in fact demonstrates one of the oldest axioms in mathematics, as written by Euclid: "things which are equal to the same thing are equal to each other."
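For instance, a quick numerical spot-check (my own sketch; sampling is of course no substitute for the algebraic argument) confirms that both sides agree with \(\csc\theta-1\) wherever everything is defined:

```python
import math

def lhs(t):
    cot_sq = (math.cos(t) / math.sin(t)) ** 2
    return cot_sq / (1 + 1 / math.sin(t))      # cot^2(t) / (1 + csc(t))

def rhs(t):
    return (1 - math.sin(t)) / math.sin(t)     # (1 - sin t) / sin t

def csc_minus_one(t):
    return 1 / math.sin(t) - 1

# sample points chosen to avoid sin t = 0 and 1 + csc t = 0
for t in [0.3, 1.0, 2.5, 4.0, 5.5]:
    assert abs(lhs(t) - csc_minus_one(t)) < 1e-9
    assert abs(rhs(t) - csc_minus_one(t)) < 1e-9
print("both sides equal csc(t) - 1 at every sampled point")
```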
But what about doing the same thing to both sides of an equation?
There are two important points to realize about what's going on behind the scenes here.
The first is that if your "thing you do to both sides" is a reversible step --- that is, if you're applying a one-to-one function to both sides of an equation --- then it's perfectly valid to use that as part of your proof, because it establishes an if-and-only-if relationship. If that function is not one-to-one, all bets are off. You can't prove that \(2=-2\) by squaring both sides to get \(4=4\), because the function \(x\mapsto x^2\) maps multiple inputs to the same output.
It baffles me that most Precalculus textbooks mention one-to-one functions in the first chapter or two, yet completely fail to understand how this applies to solving equations.* A notable exception is UCSMP's Precalculus and Discrete Mathematics book, which establishes the following on p. 169:
Reversible Steps Theorem
Let \(f\), \(g\), and \(h\) be functions. Then, for all \(x\) in the intersection of the domains of functions \(f\), \(g\), and \(h\),
\(f(x)=g(x) \Leftrightarrow f(x)+h(x)=g(x)+h(x)\)
\(f(x)=g(x) \Leftrightarrow f(x)\cdot h(x)=g(x)\cdot h(x)\) [We'll actually come back to this one in a bit -- there's a slight issue with it.]
If \(h\) is 1-1, then for all \(x\) in the domains of \(f\) and \(g\) for which \(f(x)\) and \(g(x)\) are in the domain of \(h\), \[f(x)=g(x) \Leftrightarrow h(f(x))=h(g(x)).\]
Later on p. 318, the book says:
"...there is no new or special logic for proving identities. Identities are equations and all the logic that was discussed with equation-solving applies to them."
Yes, that whole "math isn't just a bunch of arbitrary rules" thing applies here too.
The second important point, which you may have noticed while looking at the statement of the Reversible Steps Theorem, is that the implied domain of an identity matters a great deal. When you're proving a trig identity, you are trying to establish that it is true for all inputs that are in the domain of both sides. Most textbooks at least pay lip service to this fact, even though they don't follow it to its logical conclusion.
To illustrate why domain is so important, consider this example:
\[\dfrac{\cos{x}}{1-\sin{x}} = \dfrac{1+\sin{x}}{\cos{x}}\]
To verify this identity, I'm going to do something that may give you a visceral reaction: I'm going to "cross-multiply". Or, more properly, I'm going to multiply both sides by the expression \((1 - \sin x)\cos x\). I claim that this is a perfectly valid step to take, and what's more, it makes the rest of the proof downright easy by reducing to everyone's favorite Pythagorean identity:
\[
\begin{align*}
(\cos{x})(\cos{x}) &= (1+\sin{x})(1-\sin{x})\\
\cos^2{x} &= 1-\sin^2{x}\\
\sin^2{x} + \cos^2{x} &= 1 \quad\blacksquare
\end{align*}
\]
"But wait," you ask, "what if \(x=\pi/2\)? Then you're multiplying both sides by zero, and that's certainly not reversible!"
True. But if \(x=\pi/2\), then the denominators of both sides of the equation are zero, so the identity isn't even true in the first place. For any value of \(x\) that does not yield a zero in either denominator, though, multiplying both sides of an equation by that value is a reversible operation and therefore completely valid.
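Indeed, sampling the original identity away from the problem points bears this out (a numerical sketch of mine, not part of the author's proof):

```python
import math

def left(x):
    return math.cos(x) / (1 - math.sin(x))

def right(x):
    return (1 + math.sin(x)) / math.cos(x)

# sample points staying away from odd multiples of pi/2,
# where a denominator vanishes
for x in [0.0, 0.7, 2.0, 3.5, 5.0]:
    assert abs(left(x) - right(x)) < 1e-9
print("identity holds at every sampled point")
```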
Now, this isn't to say that multiplying both sides of an equation by a function can't lead to problems --- for example, if \(h(x)=0\) (as in the zero function), then \(f(x)\cdot h(x)=g(x)\cdot h(x)\) no matter what. This can even lead to problems in more subtle cases: suppose \(f\) and \(g\) are equal everywhere but a single point \(a\); for example, perhaps \(f(a)=1\) and \(g(a)=2\). If it just so happens that \(h(a)=0\), then \(f\cdot h\) and \(g\cdot h\) will be equal as functions, even though \(f\) and \(g\) are not themselves equal.
The real issue here can be explained via a quick foray into higher mathematics. Functions form what's called a ring --- basically meaning you can add, subtract, and multiply them, and these operations have all the nice properties we'd expect. But being able to preserve that if-and-only-if relationship when multiplying both sides of an equation by a function requires a special kind of ring called an integral domain, in which it's impossible to multiply two nonzero functions together and get the zero function.
Unfortunately, functions in general don't form an integral domain --- not even continuous functions, or differentiable functions, or even infinitely differentiable functions do! But if we move up to the complex numbers (where everything works better!), then the set of analytic functions --- functions that can be written as power series (infinite polynomials) on an open domain --- is an integral domain. And most of the functions that precalculus students encounter generally turn out to be analytic**: polynomial, rational, exponential, logarithmic, trigonometric, and even inverse trigonometric. This means that when proving trigonometric identities, multiplying both sides by the same function is a "safe" operation.
So in sum, when proving trigonometric identities, as long as you're careful to only use reversible steps (what a great time to spiral back to one-to-one functions, by the way!), you are welcome to apply all the same algebraic operations that you would when solving equations, and the chain of equalities you establish will prove the identity. Even "cross-multiplying" is fair game, because any input that would make the denominator zero would invalidate the identity anyway.*** Since trigonometric functions are generally "safe" (analytic), we're guaranteed to never run into any issues.
Now, none of this is to say that there isn't intrinsic merit to learning how to prove an identity by working from one side to the other. Algebraic "tricks" --- like multiplying by an expression over itself (\(1\) in disguise!) to conveniently simplify certain expressions --- are important tools for students to have under their belts, especially when they encounter limits and integrals next year in calculus.
What we need to do, then, is encourage our students to come up with multiple solution methods, and perhaps present working from one side to the other as an added challenge to build their mathematical muscles. And if students are going to work on both sides of an equation at once, then we need to hold them to high standards and make them explicitly state in their proofs that all the steps they have taken are reversible! If they're unsure whether or not a step is valid, have them investigate it until they're convinced one way or the other.
If we're artificially limiting our students by claiming that only one solution method is correct, we're sending the wrong message about what mathematics really is. Instead, celebrating and cultivating our students' creativity is the best way to prepare them for problem-solving in the real world.
--
* Rather, I would say it baffles me, but actually I'm quite used to seeing textbooks treat mathematical topics as disparate and unconnected, like how a number of Precalculus books teach vectors in one chapter and matrices in the next, yet never once mention how they are so beautifully tied together via transformations.

** Except perhaps at a few points. The more correct term for rational functions and certain trigonometric functions is actually meromorphic, which describes functions that are analytic everywhere except a discrete set of points, called the poles of the function, where the function blows up to infinity because of division by zero.

*** If you extend the domains of the trig functions to allow for division by zero, you do need to be more careful. Not because there's anything intrinsically wrong with dividing by zero, but because \(0\cdot\infty\) is an indeterminate expression and causes problems that algebra simply can't handle.
|
Latest revision as of 03:06, 11 September 2019
Hello! My name is Peter Saveliev. I am a professor of mathematics at Marshall University, Huntington WV, USA.
My current projects are these two books:
*Topology Illustrated, published 2016
*Calculus Illustrated, 2019
In part, the latter book is about Discrete Calculus, which is based on a simple idea:
$$\lim_{\Delta x\to 0}\left( \begin{array}{cc}\text{ discrete }\\ \text{ calculus }\end{array} \right)= \text{ calculus }.$$
I have been involved in research in algebraic topology and several other fields, but nowadays I think this is a pointless activity. My non-academic projects have been: digital image analysis, automated fingerprint identification, and image matching for missile navigation/guidance. Once upon a time, I took a better look at the poster of Drawing Hands by Escher hanging in my office and realized that what is shown isn't symmetric! To fix the problem I made my own picture, called Painting Hands:
Such a symmetry is supposed to be an involution of the $3$-space, $A^2=I$; therefore, its diagonalized matrix has only $\pm 1$ on the diagonal. These are the three cases:
(a) One $-1$: mirror symmetry, then the pen draws the pen. No!
(b) Two $-1$s: $180$-degree rotation, then we have two right (or two left) hands. No!
(c) Three $-1$s: central symmetry. Yes!

-Why is discrete calculus better than infinitesimal calculus?
-Why?
-Because it can be integer-valued!
-And?
-And the integer-valued calculus can detect if the space is non-orientable!

Read Integer-valued calculus, an essay making a case for discrete calculus by appealing to topology and physics.

-The political "spectrum" might be a circle!
-So?
-Then there can be no fair decision-making system!

Read The political spectrum is a circle, an essay based on the very last section of the topology book.

Note: I am frequently asked, what should "Saveliev" sound like? I used to care about that but got over it years ago. The one I endorse is the most popular: "Sav-leeeeeev". Or, simply call me Peter.
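The slogan that ordinary calculus is the $\Delta x\to 0$ limit of discrete calculus can be illustrated in a few lines of Python (a sketch of my own, not taken from the book):

```python
# Discrete calculus works with difference quotients over a mesh of size dx;
# as dx -> 0 the difference quotient recovers the calculus derivative.
def discrete_derivative(f, x, dx):
    return (f(x + dx) - f(x)) / dx

f = lambda x: x ** 3          # exact derivative at x = 2 is 3 * 2^2 = 12
for dx in [1.0, 0.1, 0.01, 0.001]:
    print(dx, discrete_derivative(f, 2.0, dx))
# The printed difference quotients approach 12 as dx shrinks.
```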
|
Holt-Winters Filtering
Computes Holt-Winters Filtering of a given time series. Unknown parameters are determined by minimizing the squared prediction error.
Keywords: ts

Usage

HoltWinters(x, alpha = NULL, beta = NULL, gamma = NULL,
            seasonal = c("additive", "multiplicative"),
            start.periods = 2, l.start = NULL, b.start = NULL,
            s.start = NULL,
            optim.start = c(alpha = 0.3, beta = 0.1, gamma = 0.1),
            optim.control = list())
Arguments

x: An object of class ts.

alpha: $\alpha$ parameter of the Holt-Winters filter.

beta: $\beta$ parameter of the Holt-Winters filter. If set to FALSE, the function will do exponential smoothing.

gamma: $\gamma$ parameter used for the seasonal component. If set to FALSE, a non-seasonal model is fitted.

seasonal: Character string to select an "additive" (the default) or "multiplicative" seasonal model. The first few characters are sufficient. (Only takes effect if gamma is non-zero.)

start.periods: Start periods used in the autodetection of start values. Must be at least 2.

l.start: Start value for level (a[0]).

b.start: Start value for trend (b[0]).

s.start: Vector of start values for the seasonal component ($s_1[0] \ldots s_p[0]$).

optim.start: Vector with named components alpha, beta, and gamma containing the starting values for the optimizer. Only the values needed must be specified. Ignored in the one-parameter case.

optim.control: Optional list with additional control parameters passed to optim if this is used. Ignored in the one-parameter case.
Details
The additive Holt-Winters prediction function (for time series with period length p) is $$\hat Y[t+h] = a[t] + h b[t] + s[t - p + 1 + (h - 1) \bmod p],$$ where $a[t]$, $b[t]$ and $s[t]$ are given by $$a[t] = \alpha (Y[t] - s[t-p]) + (1-\alpha) (a[t-1] + b[t-1])$$ $$b[t] = \beta (a[t] -a[t-1]) + (1-\beta) b[t-1]$$ $$s[t] = \gamma (Y[t] - a[t]) + (1-\gamma) s[t-p]$$
The multiplicative Holt-Winters prediction function (for time series with period length p) is $$\hat Y[t+h] = (a[t] + h b[t]) \times s[t - p + 1 + (h - 1) \bmod p],$$ where $a[t]$, $b[t]$ and $s[t]$ are given by $$a[t] = \alpha (Y[t] / s[t-p]) + (1-\alpha) (a[t-1] + b[t-1])$$ $$b[t] = \beta (a[t] - a[t-1]) + (1-\beta) b[t-1]$$ $$s[t] = \gamma (Y[t] / a[t]) + (1-\gamma) s[t-p]$$ The data in x are required to be non-zero for a multiplicative model, but it makes most sense if they are all positive.
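As an illustration, the additive recursions above can be transcribed directly into a few lines of Python. This is a sketch of my own with fixed smoothing parameters; the R function additionally optimizes $\alpha$, $\beta$, $\gamma$ and infers start values:

```python
# Additive Holt-Winters recursions, following the formulas above.
# a, b, s are the level, trend, and seasonal components; the one-step
# prediction of y[t] is a[t-1] + b[t-1] + s[t-p].
def holt_winters_additive(y, period, alpha, beta, gamma, a0, b0, s0):
    """y: observations; s0: list of `period` initial seasonal terms."""
    a, b, s = a0, b0, list(s0)
    fitted = []
    for t, yt in enumerate(y):
        s_old = s[t % period]          # seasonal term from one period ago
        fitted.append(a + b + s_old)   # one-step-ahead prediction of y[t]
        a_new = alpha * (yt - s_old) + (1 - alpha) * (a + b)
        b_new = beta * (a_new - a) + (1 - beta) * b
        s[t % period] = gamma * (yt - a_new) + (1 - gamma) * s_old
        a, b = a_new, b_new
    return fitted, (a, b, s)

fitted, state = holt_winters_additive(
    [10, 14, 9, 13, 11, 15], period=2,
    alpha=0.3, beta=0.1, gamma=0.1, a0=11.0, b0=0.0, s0=[-1.0, 2.0])
```

With `beta = 0` and `gamma = 0` this collapses to ordinary exponential smoothing, mirroring the behaviour of the corresponding `FALSE` arguments.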
The function tries to find the optimal values of $\alpha$ and/or $\beta$ and/or $\gamma$ by minimizing the squared one-step prediction error if they are NULL (the default). optimize will be used for the single-parameter case, and optim otherwise.

For seasonal models, start values for a, b and s are inferred by performing a simple decomposition in trend and seasonal component using moving averages (see function decompose) on the start.periods first periods (a simple linear regression on the trend component is used for starting level and trend). For level/trend models (no seasonal component), start values for a and b are x[2] and x[2] - x[1], respectively. For level-only models (ordinary exponential smoothing), the start value for a is x[1].
Value

An object of class "HoltWinters", a list with components:

fitted: A multiple time series with one column for the filtered series as well as for the level, trend and seasonal components, estimated contemporaneously (that is, at time t and not at the end of the series).

x: The original series.

alpha: alpha used for filtering.

beta: beta used for filtering.

gamma: gamma used for filtering.

coefficients: A vector with named components a, b, s1, ..., sp containing the estimated values for the level, trend and seasonal components.

seasonal: The specified seasonal parameter.

SSE: The final sum of squared errors achieved in optimizing.

call: The call used.

References
C. C. Holt (1957) Forecasting trends and seasonals by exponentially weighted moving averages,
ONR Research Memorandum, Carnegie Institute of Technology 52.
P. R. Winters (1960) Forecasting sales by exponentially weighted moving averages,
Management Science 6, 324--342.

Aliases: HoltWinters, print.HoltWinters, residuals.HoltWinters

Examples
library(stats)
require(graphics)

## Seasonal Holt-Winters
(m <- HoltWinters(co2))
plot(m)
plot(fitted(m))

(m <- HoltWinters(AirPassengers, seasonal = "mult"))
plot(m)

## Non-Seasonal Holt-Winters
x <- uspop + rnorm(uspop, sd = 5)
m <- HoltWinters(x, gamma = FALSE)
plot(m)

## Exponential Smoothing
m2 <- HoltWinters(x, gamma = FALSE, beta = FALSE)
lines(fitted(m2)[,1], col = 3)
Documentation reproduced from package stats, version 3.3, License: Part of R 3.3
|
On a class of nonlinear time optimal control problems
1.
Dipartimento di Matematica, Università degli Studi di Roma "Tor Vergata", Via della Ricerca Scientifica, 00133 Roma
2.
Dipartimento di Matematica, Università di Roma, Via della Ricerca Scientifica 1, 00133 Roma
$$ y'(t)=f(y(t),u(t)), \quad y(t) \in \mathbb{R}^n,\ u(t)\in U \subset \mathbb{R}^d. $$
We assume $f(x,U)$ to be a convex set with $C^1$ boundary for all $x\in\mathbb{R}^n$ and the target $\kappa$ to satisfy an interior sphere condition. For such problems we prove necessary and sufficient optimality conditions using the properties of the minimum time function $T(x)$. Moreover, we give a local description of the singular set of $T$.
Mathematics Subject Classification: 49L20, 49L25, 93C1.

Citation: Piermarco Cannarsa, Carlo Sinestrari. On a class of nonlinear time optimal control problems. Discrete & Continuous Dynamical Systems - A, 1995, 1 (2): 285-300. doi: 10.3934/dcds.1995.1.285
|
A Belyi-extender (or dessinflateur) is a rational function $q(t) = \frac{f(t)}{g(t)} \in \mathbb{Q}(t)$ that defines a map
\[ q : \mathbb{P}^1_{\mathbb{C}} \rightarrow \mathbb{P}^1_{\mathbb{C}} \] unramified outside $\{ 0,1,\infty \}$, and has the property that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$.
An example of such a Belyi-extender is the power map $q(t)=t^n$, which is totally ramified in $0$ and $\infty$ and we clearly have that $q(0)=0,~q(1)=1$ and $q(\infty)=\infty$.
The composition of two Belyi-extenders is again an extender, and we get a rather mysterious monoid $\mathcal{E}$ of all Belyi-extenders.
Very little seems to be known about this monoid. Its units form the symmetric group $S_3$, which is the automorphism group of $\mathbb{P}^1_{\mathbb{C}} \setminus \{ 0,1,\infty \}$, and mapping an extender $q$ to its degree gives a monoid map $\mathcal{E} \rightarrow \mathbb{N}_+^{\times}$ to the multiplicative monoid of positive natural numbers.
If one relaxes the condition $q(t) \in \mathbb{Q}(t)$ and allows $q$ to be defined over the algebraic closure $\overline{\mathbb{Q}}$, then such maps/functions have been known for some time under the name of dynamical Belyi-functions, for example in Zvonkin’s Belyi Functions: Examples, Properties, and Applications (section 6).
Here, one is interested in the complex dynamical system of iterations of $q$, that is, the limit-behaviour of the orbits
\[ \{ z,q(z),q^2(z),q^3(z),… \} \] for all complex numbers $z \in \mathbb{C}$.
In general, the 2-sphere $\mathbb{P}^1_{\mathbb{C}} = S^2$ has a finite number of open sets (the Fatou domains) where the limit behaviour of the series is similar, and the union of these open sets is dense in $S^2$. The complement of the Fatou domains is the Julia set of the function, of which we might expect a nice fractal picture.
Let’s take again the power map $q(t)=t^n$. For a complex number $z$ lying outside the unit disc, the series $\{ z,z^n,z^{2n},… \}$ has limit point $\infty$ and for those lying inside the unit circle, this limit is $0$. So, here we have two Fatou domains (interior and exterior of the unit circle) and the Julia set of the power map is the (boring?) unit circle.
Fortunately, there are indeed dynamical Belyi-maps having a more pleasant looking Julia set, such as this one
But then, many dynamical Belyi-maps (and Belyi-extenders) are systems of an entirely different nature: they are completely chaotic, meaning that their Julia set is the whole $2$-sphere! Nowhere do we find an open region where points share the same limit behaviour… (the butterfly effect).
There’s a nice sufficient condition for chaotic behaviour, due to Dennis Sullivan, which is pretty easy to check for dynamical Belyi-maps.
A periodic point for $q(t)$ is a point $p \in S^2 = \mathbb{P}^1_{\mathbb{C}}$ such that $p = q^m(p)$ for some $m > 1$. A critical point is one such that either $q(p) = \infty$ or $q'(p)=0$.

Sullivan’s result is that $q(t)$ is completely chaotic when all its critical points $p$ become eventually periodic, that is, some $q^k(p)$ is periodic, but $p$ itself is not periodic.
For a Belyi-map $q(t)$ the critical points are either complex numbers mapping to $\infty$ or the inverse images of $0$ or $1$ (that is, the black or white dots in the dessin of $q(t)$) which are not leaf-vertices of the dessin.
Let’s do an example, already used by Sullivan himself:
\[ q(t) = (\frac{t-2}{t})^2 \] This is a Belyi-function, and in fact a Belyi-extender as it is defined over $\mathbb{Q}$ and we have that $q(0)=\infty$, $q(1)=1$ and $q(\infty)=1$. The corresponding dessin is (inverse images of $\infty$ are marked with an $\ast$)
The critical points $0$ and $2$ are not periodic, but they become eventually periodic:
\[
2 \rightarrow^q 0 \rightarrow^q \infty \rightarrow^q 1 \rightarrow^q 1 \] and $1$ is periodic.
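These orbits are easy to verify mechanically. Here is a small Python sketch of my own (not from the original post), tracking the point at infinity with a symbolic marker:

```python
# Follow the critical orbits of q(t) = ((t-2)/t)^2 on the Riemann sphere.
INF = "inf"  # symbolic stand-in for the point at infinity

def q(t):
    if t == INF:        # q(infinity) = ((1 - 2/t)^2 -> 1 as t -> infinity)
        return 1.0
    if t == 0:          # pole of q: q(0) = infinity
        return INF
    return ((t - 2) / t) ** 2

def orbit(p, steps):
    points = [p]
    for _ in range(steps):
        points.append(q(points[-1]))
    return points

print(orbit(2.0, 4))   # critical point 2: 2 -> 0 -> inf -> 1 -> 1
print(orbit(0.0, 3))   # critical point 0: 0 -> inf -> 1 -> 1
```

Both critical orbits land on the fixed point $1$ without being periodic themselves, exactly as the chain above shows.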
For a general Belyi-extender $q$, we have that the image under $q$ of any critical point is among $\{ 0,1,\infty \}$ and because we demand that $q(\{ 0,1,\infty \}) \subseteq \{ 0,1,\infty \}$, every critical point of $q$ eventually becomes periodic.
If we want to avoid the corresponding dynamical system to be completely chaotic, we have to ensure that one of the periodic points among $\{ 0,1,\infty \}$ (and there is at least one of those) must be critical.
Let’s consider the very special Belyi-extenders $q$ having the additional property that $q(0)=0$, $q(1)=1$ and $q(\infty)=\infty$, then all three of them are periodic.
So, the system is always completely chaotic unless the black dot at $0$ is not a leaf-vertex of the dessin, or the white dot at $1$ is not a leaf-vertex, or the degree of the region determined by the starred $\infty$ is at least two.
Going back to the mystery Manin-Marcolli sub-monoid of $\mathcal{E}$, this might explain why it is a good idea to restrict to very special Belyi-extenders having as associated dessin a $2$-coloured tree, for then the periodic point $\infty$ is critical (the degree of the outside region is at least two), and therefore the conditions of Sullivan’s theorem are not satisfied. So, these Belyi-extenders do not necessarily have to be completely chaotic. (tbc)
|
2017
Was T. S. Eliot's "tantalus Jar" actually a Leyden Jar?, Eric A. Schiff
2014
Electron and hole drift mobility measurements on thin film CdTe solar cells, Qi Long, Steluta A. Dinca, Eric A. Schiff, Ming Yu, and Jeremy Theil
2012
FINDCHIRP: An algorithm for detection of gravitational waves from inspiraling compact binaries, Bruce Allen, Warren G. Anderson, Patrick R. Brady, Duncan A. Brown, and Jolien D E Creighton
Electron drift-mobility measurements in polycrystalline CuIn1-xGaxSe2 solar cells, Steluta A. Dinca, Eric A. Schiff, William N. Shafarman, Brian Egaas, Rommel Noufi, and David L. Young
2011
Quantum Equivalence Principle Violations in Scalar-Tensor Theories, Christian Armendariz-Picon and Riccardo Penco
Generic Phases of Cross-Linked Active Gels: Relaxation, Oscillation and Contractility, Shiladitya Banerjee, Tanniemola B. Liverpool, and M. C. Marchetti
Substrate Rigidity Deforms and Polarizes Active Gels, Shiladitya Banerjee and M. C. Marchetti
Motor-Driven Dynamics of Cytoskeletal Filaments in Motility Assays, Shiladitya Banerjee, M. Cristina Marchetti, and Kristian Muller-Nedebock
Nonlinear Hydrodynamics of Disentangled Flux-Line Liquids, Panayotis Benetatos and M. Cristina Marchetti
Three-Dimensional Folding of the Triangular Lattice, Mark Bowick, P Di Francesco, Oliver Golinelli, and Emmanuel Guitter
Self-Propulsion of Droplets by Spatially-Varying Roughness, Mark Bowick and Zhenwei Yao
The Shrinking Instability of Toroidal Liquid Droplets in the Stokes Flow Regime, Mark Bowick and Zhenwei Yao
Search for Gravitational Wave Bursts from Six Magnetars, Duncan Brown and J. Abadie
All-Sky Search for Periodic Gravitational Waves in the Full S5 LIGO Data, Duncan Brown, J. Abadie, Stefan Ballmer, Collin Capano, and P. Couvares
Implementation and Testing of the First Prompt Search for Electromagnetic Counterparts to Gravitational Wave Transients, Duncan Brown, J. Abadie, Stefan Ballmer, Collin Capano, and P. Couvares
Search for Gravitational Waves from Binary Black Hole Inspiral, Merger and Ringdown, Duncan Brown, J. Abadie, Collin Capano, J. A. Garofoli, and Eiichi Hirose
Beating the Spin-Down Limit on Gravitational Wave Emission from the Vela Pulsar, Duncan Brown, J. Abadie, Collin Capano, J. A. Garofoli, and A. P. Lundgren
A Measurement of the Semileptonic Branching Fraction of the B_s Meson, Duncan Brown and J. P. Lees
MCRG Minimal Walking Technicolor, Simon Catterall, Luigi Del Debbio, and Joel Giedt
Systematic Errors of the MCRG Method, Simon Catterall, Luigi Del Debbio, Joel Giedt, and Liam Keegan
Perturbative Renormalization of Lattice N=4 Super Yang-Mills Theory, Simon Catterall, Eric Dzienkowski, Joel Giedt, Anosh Joseph, and Robert Wells
de Sitter Gravity from Lattice Gauge Theory, Simon Catterall, Daniel Ferrante, and Arwen Nicholson
An Object Oriented Code for Simulating Supersymmetric Yang--Mills Theories, Simon Catterall and Anosh Joseph
Cooperative Self-Propulsion of Active and Passive Rotors, Yaouen Fily, Aparna Baskaran, and M. Cristina Marchetti
Polar Patterns in Active Fluids, Luca Giomi and M. Cristina Marchetti
Active Jamming: Self-Propelled Soft Particles at High Density, Silke Henkes, Yaouen Fily, and M. Christina Marchetti
Theory of Double-Sided Flux Decorations, M. Cristina Marchetti and David R. Nelson
Critical Slowing Down in Polynomial Time Algorithms, Alan Middleton
Zero and Low Temperature Behavior of the Two-Dimensional \pm J Ising Spin Glass, Alan Middleton, Creighton K. Thomas, and David A. Huse
Observation of J/ψ Pair Production in pp Collisions at \sqrt{s}=7 TeV, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Determination of f_s/f_d for 7 TeV pp Collisions and a Measurement of the Branching Fraction of the Decay Bd->D-K+, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
First Observation of B0s → Ds2*+ Xη-ν Decays, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
First Observation of B0s → J/Ψf0(980) Decays, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
First Observation of Bs -> D_{s2}^{*+} X mu nu Decays, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
First Observation of Bs -> J/psi f0(980) Decays, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurement of J/psi Production in pp Collisions at Sqrt(s)=7 TeV, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurement of J/Ψ Production in pp Collisions at √s=7 TeV, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurement of the Inclusive phi Cross-Section in pp Collisions at Sqrt(s) = 7 TeV, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurement of the Inclusive Φ Cross-Section in pp Collisions at √s = 7 TeV, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurement of V0 Production Ratios in pp Collisions at √s= 0.9 and 7 TeV, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurements of the Branching Fractions for B_(s) -> D_(s)πππ and Λ_b^0 -> Λ_c^+πππ, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurements of the Branching Fractions for B(s) → D(s)πππ and Λb0 → Λc+πππ, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Observation of J/ψ Pair Production in pp Collisions at √s=7 TeV, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Search for Lepton-Number Violating Processes in B+ -> h- l+ l+ Decays, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Search for the Lepton Number Violating Decays B+ → π- μ+μ+ and B+→ K-μ+ μ+, Raymond Mountain, Marina Artuso, S. Blusk, and Alessandra Borgia
Measurement of the eta_b(1S) mass and the Branching Fraction for Upsilon(3S) --> gamma eta_b(1S), Raymond Mountain, Marina Artuso, S. Blusk, and Sadia Khalil
Amplitude Analyses of the Decays chi_c1 -> eta pi+ pi- and chi_c1 -> eta' pi+ pi-, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Amplitude Analyses of the Decays Χc1 →ηπ+π- and Χc1→η'π+π-, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Analysis of the Decay D0 → K0Sπ0π0, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Analysis of the Decay D^0 to K^0_S pi^0 pi^0, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Branching Fractions for Y(3S) -> pi^0 h_b and psi(2S) -> pi^0 h_c, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Branching Fractions for ϒ(3S)→ π0hb and Ψ(2S) →π0hc, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Measurements of Branching Fractions for Electromagnetic Transitions Involving the χ_{bJ}(1P) States, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Measurements of Branching Fractions for Electromagnetic Transitions Involving the χbJ(1P) States, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Observation of the Dalitz Decay D_{s}^{*+} \to D_{s}^{+} e^{+} e^{-}, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Observation of the h_c(1P) Using e^+e^- Collisions Above DDbar Threshold, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Observation of the Hc(1P) Using e+e- Collisions Above DD Threshold, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Search for the Decay D^+_{s}\toωe^+ν, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Studies of D^+ -> {eta', eta, phi} e^+ nu_e, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Studies of D+→ {η', η, φ}e+νe, Raymond Mountain, Marina Artuso, S. Blusk, and T. Skwarnicki
Determination of fs/fd for 7 TeV pp Collisions and a Measurement of the Branching Fraction of the Decay B0→D-K+, Raymond Mountain, Marina Artuso, S. Blusk, and S. Stone
Search for the Decay D+s→ωe+ν, Raymond Mountain, Marina Artuso, and T. Skwarnicki
Observation of the Dalitz Decay Ds*+ → Ds*+ e+e-, Raymond Mountain, Marina Artuso, T. Skwarnicki, and S. Stone
Study of Radiation Damage in Lead Tungstate Crystals Using Intense High Energy Beams, Raymond Mountain, K. Khroustalev, V.A. Batarin, and T. Brennan
Rectification of Vortex Motion in a Circular Ratchet Channel, Britton Plourde, N. S. Lin, T. W. Heitmann, Kang Yu, and V. R. Misko
Are Three Flavors Special?, Joseph Schechter, Amir H. Fariborz, Renata Jora, and M. Naeem Shahid
Chiral Nonet Mixing in pi pi Scattering, Joseph Schechter, Amir H. Fariborz, Renata Jora, and M. Naeem Shahid
Semi-Leptonic D_s^+(1968) Decays as a Scalar Meson Probe, Joseph Schechter, Amir H. Fariborz, Renata Jora, and M. Naeem Shahid
Possible Z-Width Probe of a "Brane-World" Scenario for Neutrino Masses, Joseph Schechter, Sherif Moussa, Salah Nasri, and F Sannino
The Importance of Slow-Roll Corrections During Multi-field Inflation, Scott Watson, Anastasios Avgoustidis, Sera Cremonini, and Anne-Christine Davis
The Baryon-Dark Matter Ratio Via Moduli Decay After Affleck-Dine Baryogenesis, Scott Watson, Gordon Kane, Jing Shao, and Hai-Bo Yu
Constraints on a Non-thermal History from Galactic Dark Matter Spikes, Scott Watson and Pearl Sandick
2010
Do Cosmological Perturbations Have Zero Mean?, Christian Armendariz-Picon
Effective Theory Approach to the Spontaneous Breakdown of Lorentz Invariance, Christian Armendariz-Picon, Alberto Diez-Tejedor, and Riccardo Penco
Primordial Perturbations in Einstein-Aether and BPSH Theories, Christian Armendariz-Picon, Noela Farina Sierra, and Jaume Garriga
Covariant Quantum Fields on Noncommutative Spacetimes, A. P. Balachandran, A. Ibort, G. Marmo, and M. Martone
Quantum Geons and Noncommutative Spacetimes, A. P. Balachandran, A. Ibort, G. Marmo, and M. Martone
Instabilities and Oscillations in Isotropic Active Gels, Shiladitya Banerjee and M. Cristina Marchetti
Nonequilibrium Statistical Mechanics of Self-Propelled Hard Rods, Aparna Baskaran and M. Cristina Marchetti
Paraboloidal Crystals, Mark Bowick and Luca Giomi
Crystalline Order on Catenoidal Capillary Bridges, Mark Bowick and Zhenwei Yao
A Search for Gravitational Waves Associated with the August 2006 Timing Glitch of the Vela Pulsar, Duncan Brown and J. Abadie
Calibration of the LIGO Gravitational Wave Detectors in the Fifth Science Run, Duncan Brown and J. Abadie
First Search for Gravitational Waves from the Youngest Known Neutron Star, Duncan Brown and J. Abadie
Methods for Reducing False Alarms in Searches for Compact Binary Coalescences in LIGO Data, Duncan Brown and J. Slutsky
Supersymmetric Lattices, Simon Catterall
Topological Gravity on the Lattice, Simon Catterall
Twisted Lattice Supersymmetry and Applications to AdS/CFT, Simon Catterall
First Results from Lattice Simulation of the PWMM, Simon Catterall and Greg Van Anders
MCRG Minimal Walking Technicolor, Simon Catterall, Luigi Del Debbio, Joel Giedt, and Liam Keegan
Realization of Center Symmetry in Two Adjoint Flavor Large-N Yang-Mills, Simon Catterall, Richard Galvez, and Mithat Unsal
Gauge Theory Duals of Black Hole - Black String Transitions of Gravitational Theories on a Circle, Simon Catterall, Anosh Joseph, and Toby Wiseman
Thermal Phases of D1-Branes on a Circle from Lattice Super Yang-Mills, Simon Catterall, Anosh Joseph, and Toby Wiseman
Discreteness and the Transmission of Light from Distant Sources, Fay Dowker, Joe Henson, and Rafael Sorkin
Light Scalar Puzzle in QCD, Amir H. Fariborz, Renata Jora, and Joseph Schechter
Sheared Active Fluids: Thickening, Thinning and Vanishing Viscosity, Luca Giomi, Tanniemola B. Liverpool, and M. Cristina Marchetti
|
Now showing items 1-10 of 26
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider
(American Physical Society, 2016-02)
The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(Elsevier, 2016-02)
Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(Springer, 2016-08)
The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2016-03)
The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...
Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV
(Elsevier, 2016-09)
The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...
Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV
(Elsevier, 2016-12)
We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...
Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV
(Springer, 2016-05)
Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector. Centrality classes are determined via the energy ...
|
A Belyi-extender (or dessinflateur) $\beta$ of degree $d$ is a quotient of two polynomials with rational coefficients
\[ \beta(t) = \frac{f(t)}{g(t)} \] with the special properties that for each complex number $c$ the polynomial equation of degree $d$ in $t$ \[ f(t)-c g(t)=0 \] has $d$ distinct solutions, except perhaps for $c=0$ or $c=1$, and, in addition, we have that \[ \beta(0),\beta(1),\beta(\infty) \in \{ 0,1,\infty \} \]
Let’s take for instance the power maps $\beta_n(t)=t^n$.
For every $c$ the degree $n$ polynomial $t^n - c = 0$ has exactly $n$ distinct solutions, except for $c=0$, when there is just one. And, clearly we have that $0^n=0$, $1^n=1$ and $\infty^n=\infty$. So, $\beta_n$ is a Belyi-extender of degree $n$.
A cute observation being that if $\beta$ is a Belyi-extender of degree $d$, and $\beta'$ is an extender of degree $d'$, then $\beta \circ \beta'$ is again a Belyi-extender, this time of degree $d.d'$.
That is, Belyi-extenders form a monoid under composition!
In our example, $\beta_n \circ \beta_m = \beta_{n.m}$. So, the power-maps are a sub-monoid of the Belyi-extenders, isomorphic to the multiplicative monoid $\mathbb{N}_{\times}$ of strictly positive natural numbers.
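As a quick illustration (my own check, not from the post), the composition rule for power maps can be verified symbolically:

```python
# Sketch: composing power maps multiplies degrees, beta_n o beta_m = beta_{n.m}.
import sympy as sp

t = sp.symbols('t')

def beta(n):
    return t**n

n, m = 3, 5
composed = beta(n).subs(t, beta(m))   # beta_n composed with beta_m
assert sp.expand(composed - beta(n * m)) == 0
print(composed)
```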
In their paper Quantum statistical mechanics of the absolute Galois group, Yuri I. Manin and Matilde Marcolli say they use the full monoid of Belyi-extenders to act on all Grothendieck’s dessins d’enfant.
But, they attach properties to these Belyi-extenders which they don’t have, in general. That’s fine, as they foresee in Remark 2.21 of their paper that the construction works equally well for any suitable sub-monoid, as long as this sub-monoid contains all power-map extenders.
I’m trying to figure out what the maximal mystery sub-monoid of extenders is satisfying all the properties they need for their proofs.
But first, let us see what Belyi-extenders have to do with dessins d’enfant.
In his user-friendlier period, Grothendieck told us how to draw a picture, which he called a dessin d’enfant, of an extender $\beta(t) = \frac{f(t)}{g(t)}$ of degree $d$:
Look at all complex solutions of $f(t)=0$ and label them with a black dot (and add a black dot at $\infty$ if $\beta(\infty)=0$). Now, look at all complex solutions of $f(t)-g(t)=0$ and label them with a white dot (and add a white dot at $\infty$ if $\beta(\infty)=1$).
Now comes the fun part.
Because $\beta$ has exactly $d$ pre-images for all real numbers $\lambda$ in the open interval $(0,1)$ (and $\beta$ is continuous), we can connect the black dots with the white dots by $d$ edges (the pre-images of the open interval $(0,1)$), giving us a $2$-coloured graph.
For the power-maps $\beta_n(t)=t^n$, we have just one black dot at $0$ (being the only solution of $t^n=0$), and $n$ white dots at the $n$-th roots of unity (the solutions of $x^n-1=0$). Any $\lambda \in (0,1)$ has as its $n$ pre-images the numbers $\zeta_i.\sqrt[n]{\lambda}$ with $\zeta_i$ an $n$-th root of unity, so we get here as picture an $n$-star. Here for $n=5$:
This dessin should be viewed on the 2-sphere, with the antipodal point of $0$ being $\infty$, so projecting from $\infty$ gives a homeomorphism between the 2-sphere and $\mathbb{C} \cup \{ \infty \}$.
To get all information of the dessin (including possible dots at infinity) it is best to slice the sphere open along the real segments $(\infty,0)$ and $(1,\infty)$ and flatten it to form a ‘diamond’ with the upper triangle corresponding to the closed upper semisphere and the lower triangle to the open lower semisphere.
In the picture above, the right hand side is the dessin drawn in the diamond, and this representation will be important when we come to the action of extenders on more general Grothendieck dessins d’enfant.
Okay, let’s try to get some information about the monoid $\mathcal{E}$ of all Belyi-extenders.
What are its invertible elements?
Well, we’ve seen that the degree of a composition of two extenders is the product of their degrees, so invertible elements must have degree $1$, so are automorphisms of $\mathbb{P}^1_{\mathbb{C}} - \{ 0,1,\infty \} = S^2-\{ 0,1,\infty \}$ permuting the set $\{ 0,1,\infty \}$.
They form the symmetric group $S_3$ on $3$-letters and correspond to the Belyi-extenders
\[ t,~1-t,~\frac{1}{t},~\frac{1}{1-t},~\frac{t-1}{t},~\frac{t}{t-1} \] You can compose these units with an extender to get another extender of the same degree where the roles of $0,1$ and $\infty$ are changed.
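A small sanity check (my own illustration) that these six Möbius transformations really form a group under composition:

```python
# Verify that the six degree-1 Belyi-extenders are closed under composition.
import sympy as sp

t = sp.symbols('t')
units = [t, 1 - t, 1/t, 1/(1 - t), (t - 1)/t, t/(t - 1)]

def compose(f, g):
    # (f o g)(t) = f(g(t)), simplified to a rational function
    return sp.cancel(f.subs(t, g))

for f in units:
    for g in units:
        h = compose(f, g)
        # h must again be one of the six units
        assert any(sp.simplify(h - u) == 0 for u in units)
print("closed under composition")
```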
For example, if you want to colour all your white dots black and the black dots white, you compose with the unit $1-t$.
Manin and Marcolli use this and claim that you can transform any extender $\eta$ to an extender $\gamma$ by composing with a unit, such that $\gamma(0)=0, \gamma(1)=1$ and $\gamma(\infty)=\infty$.
That’s fine as long as your original extender $\eta$ maps $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$, but usually a Belyi-extender only maps into $\{ 0,1,\infty \}$.
Here are some extenders of degree three (taken from Melanie Wood’s paper Belyi-extending maps and the Galois action on dessins d’enfants):
with dessin $5$ corresponding to the Belyi-extender
\[ \beta(t) = \frac{t^2(t-1)}{(t-\frac{4}{3})^3} \] with $\beta(0)=0=\beta(1)$ and $\beta(\infty) = 1$.
So, a first property of the mystery Manin-Marcolli monoid $\mathcal{E}_{MMM}$ must surely be that all its elements $\gamma(t)$ map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$, for they use this property a number of times, for instance to construct a monoid map
\[ \mathcal{E}_{MMM} \rightarrow M_2(\mathbb{Z})^+ \qquad \gamma \mapsto \begin{bmatrix} d & m-1 \\ 0 & 1 \end{bmatrix} \] where $d$ is the degree of $\gamma$ and $m$ is the number of black dots in the dessin (or white dots for that matter).
Further, they seem to believe that the dessin of any Belyi-extender must be a 2-coloured tree.
Already last time we’ve encountered a Belyi-extender $\zeta(t) = \frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with dessin
But then, you may argue, this extender sends all of $0,1$ and $\infty$ to $0$, so it cannot belong to $\mathcal{E}_{MMM}$.
Here’s a trick to construct Belyi-extenders from Belyi-maps $\beta : \mathbb{P}^1 \rightarrow \mathbb{P}^1$, defined over $\mathbb{Q}$ and having the property that there are rational points in the fibers over $0,1$ and $\infty$.
Let’s take an example, the ‘monstrous dessin’ corresponding to the congruence subgroup $\Gamma_0(2)$
with map $\beta(t) = \frac{(t+256)^3}{1728 t^2}$.
As it stands, $\beta$ is not a Belyi-extender because it does not map $1$ into $\{ 0,1,\infty \}$. But we have that
\[ -256 \in \beta^{-1}(0),~\infty \in \beta^{-1}(\infty),~\text{and}~512,-64 \in \beta^{-1}(1) \] (the last one follows from $(t+256)^3-1728 t^2=(t-512)^2(t+64)$).
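For the skeptical reader, the factorisation behind $512, -64 \in \beta^{-1}(1)$ is easy to verify symbolically (a quick check of my own):

```python
# Check that beta(t) - 1 has the claimed double root 512 and simple root -64,
# where beta(t) = (t+256)^3 / (1728 t^2).
import sympy as sp

t = sp.symbols('t')
numerator = (t + 256)**3 - 1728*t**2      # numerator of beta(t) - 1
assert sp.expand(numerator - (t - 512)**2 * (t + 64)) == 0

beta = (t + 256)**3 / (1728 * t**2)
assert beta.subs(t, 512) == 1 and beta.subs(t, -64) == 1
assert beta.subs(t, -256) == 0
```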
We can now pre-compose $\beta$ with the automorphism (defined over $\mathbb{Q}$) sending $0$ to $-256$, $1$ to $-64$ and fixing $\infty$ to get a Belyi-extender
\[ \gamma(t) = \frac{(192t)^3}{1728(192t-256)^2} \] which maps $\gamma(0)=0,~\gamma(1)=1$ and $\gamma(\infty)=\infty$ (so belongs to $\mathcal{E}_{MMM}$) with the same dessin, which is not a tree.
That is, $\mathcal{E}_{MMM}$ can at best consist only of those Belyi-extenders $\gamma(t)$ that map $\{ 0,1,\infty \}$ onto $\{ 0,1,\infty \}$ and such that their dessin is a tree.
Let me stop, for now, by asking for a reference (or counterexample) to perhaps the most startling claim in the Manin-Marcolli paper, namely that any 2-coloured tree can be realised as the dessin of a Belyi-extender!
|
I want to create a lognormal distribution of future stock prices. Using a monte carlo simulation I came up with the standard deviation as being $\sqrt{days/252} \cdot \text{volatility} \cdot \text{mean} \cdot \log(\text{mean})$. Is this correct?
I'm not sure I understand, but if you want to compute the variance of $\exp(X)$, where $X$ is normally distributed with mean $\mu$ and variance $\sigma^2$, that variance is (from Wikipedia): $$\left(\exp{(\sigma^2)} - 1\right) \exp{(2\mu + \sigma^2)}$$
The distribution of the log of a stock price in $n$ days is a normal distribution with mean $\log(\text{current\_price})$ and standard deviation $\text{volatility} \cdot \sqrt{n/365.2425}$ if you're using calendar days, and assuming no dividends and a 0% risk-free interest rate.
Note that the standard deviation is independent of the current_price: if $\log(\text{current\_price})$ increases by 0.3 (for example), the stock has increased by 35%, regardless of its current_price.
To include dividends and the risk-free interest rate, see:
which models future stock prices w/ an eye towards pricing options.
To create a lognormal distribution (that is, to generate values from it), you need to start with normally distributed numbers and then exponentiate them.
That is to say, take a sample $z$ from the standard normal distribution, and form the lognormally distributed underlying value
$$ U_T = U_0 \exp\left( (r-q-\sigma^2/2)T + \sigma \sqrt{T} z \right) $$
The probability density function of $U_T$ is formed from solving this for $z$ and then applying the normal PDF.
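A minimal Monte Carlo sketch of this recipe (the parameter values are made up for illustration):

```python
import numpy as np

# Illustrative parameters: spot, rate, dividend yield, volatility, horizon.
U0, r, q, sigma, T = 100.0, 0.02, 0.0, 0.2, 1.0

rng = np.random.default_rng(42)
z = rng.standard_normal(1_000_000)        # standard normal samples

# Exponentiate an affine function of z to get lognormally distributed prices.
UT = U0 * np.exp((r - q - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)

# Sanity check: the sample mean should be close to the forward U0 * exp((r - q) T).
print(UT.mean(), U0 * np.exp((r - q) * T))
```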
|
Suppose we have the following dataset that records individual survival times (dur) and a covariate z:
id | dur |  z
-------------
 1 |  1  | -1
 2 |  2  |  1
 3 |  3  | -1
 4 |  4  |  1
I want to model the duration as a function of z. I may specify dur ~ weibull() and parameterize the scale as a function of z. This model can be fitted easily, e.g. with phreg or survreg in R.
If I am now interested to incorporate time-dependent covariates, I need to transform the dataset into something like this:
id  | event | time |  z | orig.row
----------------------------------
1   |   1   |  1   | -1 |    1
2   |   0   |  1   |  1 |    2
2.1 |   1   |  2   |  1 |    2
3   |   0   |  1   | -1 |    3
3.1 |   0   |  2   | -1 |    3
3.2 |   1   |  3   | -1 |    3
4   |   0   |  1   |  1 |    4
4.1 |   0   |  2   |  1 |    4
4.2 |   0   |  3   |  1 |    4
4.3 |   1   |  4   |  1 |    4
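For illustration, here is a hedged Python sketch (my own, not the R `toBinary` function itself) of this person-period expansion, assuming integer durations:

```python
# Expand (id, dur, event, z) records into person-period format: each subject
# with duration d contributes d rows, with event = 1 only on the last row
# if the subject failed.
def to_person_period(records):
    """records: list of (id, dur, event, z) tuples with integer durations."""
    rows = []
    for sid, dur, event, z in records:
        for t in range(1, dur + 1):
            rows.append({
                "id": sid,
                "time": t,
                "event": int(event and t == dur),
                "z": z,
            })
    return rows

data = [(1, 1, 1, -1), (2, 2, 1, 1), (3, 3, 1, -1), (4, 4, 1, 1)]
expanded = to_person_period(data)
print(len(expanded))          # 1 + 2 + 3 + 4 = 10 rows
print(expanded[-1])           # last row: subject 4 fails at time 4
```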
Apparently, a model event ~ poisson() with a "Poisson mean" $$\log(u_i) = \beta_0 + \alpha \log(t_i) + \beta_1 z_i$$ is equivalent to the Weibull model above (Lindsey 1995). But when I run the analysis in R, I get two largely different values for the scale:
# Generate the data:
library(eha)
enter <- rep(0, 4)
exit <- 1:4
event <- rep(1, 4)
z <- rep(c(-1, 1), 2)
dat <- data.frame(enter, exit, event, z)
binDat <- toBinary(dat)
binDat <- binDat[order(rownames(binDat)), ]

# Run the models:
summary(phreg(Surv(enter, exit, event) ~ z, data = dat, dist = "weibull"))
summary(glm(event ~ z + log(risktime), data = binDat, family = poisson("log")))
The shape parameter (glm: log(risktime)) is roughly similar, as is the beta for z. But the scale parameter (glm: Intercept) is different. What am I doing wrong?
## Results

summary(phreg(Surv(enter, exit, event) ~ z, data = dat, dist = "weibull"))

Call:
phreg(formula = Surv(enter, exit, event) ~ z, data = dat, dist = "weibull")

Covariate          W.mean      Coef Exp(Coef)  se(Coef)    Wald p
z                   0.200    -0.432     0.649     0.519     0.406
log(scale)                    1.020     2.774     0.197     0.000
log(shape)                    0.985     2.677     0.421     0.019

Events                    4
Total time at risk        10
Max. log. likelihood      -5.6567
LR test statistic         0.68
Degrees of freedom        1
Overall p-value           0.410367

summary(glm(event ~ z + log(risktime), data = binDat, family = poisson("log")))

Call:
glm(formula = event ~ z + log(risktime), family = poisson("log"),
    data = binDat)

Deviance Residuals:
    Min       1Q   Median       3Q      Max
-1.0696  -0.7695  -0.5391   0.3482   1.0671

Coefficients:
              Estimate Std. Error z value Pr(>|z|)
(Intercept)    -1.6122     1.0019  -1.609    0.108
z              -0.3166     0.5149  -0.615    0.539
log(risktime)   1.0634     1.0928   0.973    0.330

(Dispersion parameter for poisson family taken to be 1)

    Null deviance: 7.3303  on 9  degrees of freedom
Residual deviance: 6.1388  on 7  degrees of freedom
AIC: 20.139

Number of Fisher Scoring iterations: 5
|
A tetrahedral snake, sometimes called a Steinhaus snake, is a collection of tetrahedra, linked face to face.
Steinhaus showed in 1956 that the last tetrahedron in the snake can never be a translation of the first one. This is a consequence of the fact that the group generated by the four reflexions in the faces of a tetrahedron is the free product $C_2 \ast C_2 \ast C_2 \ast C_2$.
For a proof of this, see Stan Wagon’s book The Banach-Tarski paradox, starting at page 68.
The thread $(3|3)$ is the spine of the $(9|1)$-snake which involves the following lattices \[ \xymatrix{& & 1 \frac{1}{3} \ar@[red]@{-}[dd] & & \\ & & & & \\ 1 \ar@[red]@{-}[rr] & & 3 \ar@[red]@{-}[rr] \ar@[red]@{-}[dd] & & 1 \frac{2}{3} \\ & & & & \\ & & 9 & &} \] It is best to look at the four extremal lattices as the vertices of a tetrahedron with the lattice $3$ corresponding to its point of gravity.
The congruence subgroup $\Gamma_0(9)$ fixes each of these lattices, and the arithmetic group $\Gamma_0(3|3)$ is the conjugate of $\Gamma_0(1)$
\[ \Gamma_0(3|3) = \{ \begin{bmatrix} \frac{1}{3} & 0 \\ 0 & 1 \end{bmatrix}.\begin{bmatrix} a & b \\ c & d \end{bmatrix}.\begin{bmatrix} 3 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} a & \frac{b}{3} \\ 3c & d \end{bmatrix}~|~ad-bc=1 \} \] We know that $\Gamma_0(3|3)$ normalizes the subgroup $\Gamma_0(9)$ and we need to find the moonshine group $(3|3)$ which should have index $3$ in $\Gamma_0(3|3)$ and contain $\Gamma_0(9)$.
So, it is natural to consider the finite group $A=\Gamma_0(3|3)/\Gamma_0(9)$ which is generated by the co-sets of
\[ x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix} \qquad \text{and} \qquad y = \begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix} \] To determine this group we look at the action of it on the lattices in the $(9|1)$-snake. It will fix the central lattice $3$ but will move the other lattices.
Recall that it is best to associate to the lattice $M.\frac{g}{h}$ the matrix
\[ \alpha_{M,\frac{g}{h}} = \begin{bmatrix} M & \frac{g}{h} \\ 0 & 1 \end{bmatrix} \] and then the action is given by right-multiplication.
\[
\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{1}{3} \\ 0 & 1 \end{bmatrix}.x = \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}, \quad \begin{bmatrix} 1 & \frac{2}{3} \\ 0 & 1 \end{bmatrix}.x=\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} \] That is, $x$ corresponds to a $3$-cycle $1 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 1$ and fixes the lattice $9$ (so is rotation around the axis through the vertex $9$).
To compute the action of $y$ it is best to use an alternative description of the lattice, swapping the roles of the base-vectors $\vec{e}_1$ and $\vec{e}_2$. These lattices are projectively equivalent
\[ \mathbb{Z} (M \vec{e}_1 + \frac{g}{h} \vec{e}_2) \oplus \mathbb{Z} \vec{e}_2 \quad \text{and} \quad \mathbb{Z} \vec{e}_1 \oplus \mathbb{Z} (\frac{g'}{h} \vec{e}_1 + \frac{1}{h^2M} \vec{e}_2) \] where $g.g' \equiv~1~(mod~h)$. So, we have equivalent descriptions of the lattices \[ M,\frac{g}{h} = (\frac{g'}{h},\frac{1}{h^2M}) \quad \text{and} \quad M,0 = (0,\frac{1}{M}) \] and we associate to the lattice in the second normal form the matrix \[ \beta_{M,\frac{g}{h}} = \begin{bmatrix} 1 & 0 \\ \frac{g'}{h} & \frac{1}{h^2M} \end{bmatrix} \] and then the action is again given by right-multiplication.
In the tetrahedral example we have
\[ 1 = (0,\frac{1}{3}), \quad 1\frac{1}{3}=(\frac{1}{3},\frac{1}{9}), \quad 1\frac {2}{3}=(\frac{2}{3},\frac{1}{9}), \quad 9 = (0,\frac{1}{9}) \] and \[ \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix}.y = \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix},\quad \begin{bmatrix} 1 & 0 \\ \frac{2}{3} & \frac{1}{9} \end{bmatrix}. y = \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 0 & \frac{1}{9} \end{bmatrix}. y = \begin{bmatrix} 1 & 0 \\ \frac{1}{3} & \frac{1}{9} \end{bmatrix} \] That is, $y$ corresponds to the $3$-cycle $9 \rightarrow 1 \frac{1}{3} \rightarrow 1 \frac{2}{3} \rightarrow 9$ and fixes the lattice $1$ so is a rotation around the axis through $1$.
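One can verify this $3$-cycle mechanically. Below is a check of my own, taking $y$ to be the unimodular matrix $\begin{bmatrix} 1 & 0 \\ 3 & 1 \end{bmatrix}$ and reducing the lower-left entry of each product mod $1$, since the description $(\frac{g'}{h},\frac{1}{h^2M})$ is only defined up to such integral shifts:

```python
# Exact 2x2 matrix arithmetic over the rationals, checking that
# right-multiplication by y cycles the lattices 1 1/3 -> 1 2/3 -> 9 -> 1 1/3.
from fractions import Fraction as F

def mul(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

def normalize(m):
    # reduce the lower-left entry modulo 1
    return [[m[0][0], m[0][1]], [m[1][0] % 1, m[1][1]]]

y = [[F(1), F(0)], [F(3), F(1)]]
l_113 = [[F(1), F(0)], [F(1, 3), F(1, 9)]]   # lattice 1 1/3
l_123 = [[F(1), F(0)], [F(2, 3), F(1, 9)]]   # lattice 1 2/3
l_9   = [[F(1), F(0)], [F(0),    F(1, 9)]]   # lattice 9

assert normalize(mul(l_113, y)) == l_123
assert normalize(mul(l_123, y)) == l_9
assert normalize(mul(l_9,   y)) == l_113
```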
Clearly, these two rotations generate the full rotation-symmetry group of the tetrahedron
\[ \Gamma_0(3|3)/\Gamma_0(9) \simeq A_4 \] which has a unique subgroup of index $3$ generated by the reflexions (rotations with angle $180^\circ$ around axes through midpoints of edges), generated by $x.y$ and $y.x$.
The moonshine group $(3|3)$ is therefore the subgroup generated by
\[ (3|3) = \langle \Gamma_0(9),\begin{bmatrix} 2 & \frac{1}{3} \\ 3 & 1 \end{bmatrix},\begin{bmatrix} 1 & \frac{1}{3} \\ 3 & 2 \end{bmatrix} \rangle \]
|
Magnetism results from the circular motion of charged particles. This property is demonstrated on a macroscopic scale by making an electromagnet from a coil of wire and a battery. Electrons moving through the coil produce a magnetic field (Figure \(\PageIndex{1}\)), which can be thought of as originating from a magnetic dipole or a bar magnet.
Figure \(\PageIndex{1}\): Faraday’s apparatus for demonstrating that a magnetic field can produce a current. A change in the field produced by the top coil induces an emf and, hence, a current in the bottom coil. When the switch is opened and closed, the galvanometer registers currents in opposite directions. No current flows through the galvanometer when the switch remains closed or open. Image used with permission (CC BY 3.0; OpenStax).
Magnetism results from the circular motion of charged particles.
Electrons in atoms also are moving charges with angular momentum so they too produce a magnetic dipole, which is why some materials are magnetic. A magnetic dipole interacts with an applied magnetic field, and the energy of this interaction is given by the scalar product of the magnetic dipole moment, and the magnetic field, \(\vec{B}\).
\[E_B = - \vec{\mu} _m \cdot \vec{B} \label {8.4.1}\]
Magnets are acted on by forces and torques when placed within an external applied magnetic field (Figure \(\PageIndex{2}\)). In a uniform external field, a magnet experiences no net force, but a net torque. The torque tries to align the magnetic moment \(\vec{\mu} _m\) of the magnet with the external field \(\vec{B}\). The magnetic moment of a magnet points from its south pole to its north pole.
Figure \(\PageIndex{2}\): A magnet will feel a force to realign in an external field, i.e. go from a higher energy to a lower energy. The energy of this system is determined by Equation \(\ref{8.4.1}\) and classically can vary since the angle between \(\vec{\mu} _m\) and \(\vec{B}\) can vary continuously from 0 (low energy) to 180° (high energy).
In a non-uniform magnetic field a current loop, and therefore a magnet, experiences a net force, which tries to pull an aligned dipole into regions where the magnitude of the magnetic field is larger and push an anti-aligned dipole into regions where the magnitude of the magnetic field is smaller.
Quantum Effects
As expected, the quantum picture is different. Pieter Zeeman was one of the first to observe the splittings of spectral lines in a magnetic field caused by this interaction. Consequently such splittings are known as the Zeeman effect. Let’s now use our current knowledge to predict what the Zeeman effect for the 2p to 1s transition in hydrogen would look like, and then compare this prediction with a more complete theory. To understand the Zeeman effect, which uses a magnetic field to remove the degeneracy of different angular momentum states, we need to examine how an electron in a hydrogen atom interacts with an external magnetic field, \(\vec{B}\). Since magnetism results from the circular motion of charged particles, we should look for a relationship between the angular momentum \(\vec{L}\) and the magnetic dipole moment \(\vec{\mu} _m\).
The relationship between the magnetic dipole moment \(\vec{\mu} _m\) (also referred to simply as the magnetic moment) and the angular momentum \(\vec{L}\) of a particle with mass m and charge \(q\) is given by
\[ \vec{\mu} _m = \dfrac {q}{2m} \vec{L} \label {8.4.2}\]
For an electron, this equation becomes
\[ \vec{\mu} _m = - \dfrac {e}{2m_e} \vec{L} \label {8.4.3}\]
where the specific charge and mass of the electron have been substituted for \(q\) and \(m\). The magnetic moment for the electron is a vector pointing in the direction opposite to \(\vec{L}\), both of which classically are perpendicular to the plane of the rotational motion.
Exercise \(\PageIndex{1}\)
Will an electron in the ground state of hydrogen have a magnetic moment? Why or why not?
The relationship between the angular momentum of a particle and its magnetic moment is commonly expressed as a ratio, called the gyromagnetic ratio, \(\gamma\). Gyro is Greek for turn so gyromagnetic simply relates turning (angular momentum) to magnetism. Now you also know why the Greek sandwiches made with meat cut from a spit turning over a fire are called gyros.
\[ \gamma = \dfrac {\mu _m}{L} = \dfrac {q}{2m} \label {8.4.4}\]
In the specific case of an electron,
\[ \gamma _e = - \dfrac {e}{2m_e} \label {8.4.5}\]
Exercise \(\PageIndex{2}\)
Calculate the magnitude of the gyromagnetic ratio for an electron.
To determine the energy of a hydrogen atom in a magnetic field we need to include the operator form of the hydrogen atom Hamiltonian. The Hamiltonian always consists of all the energy terms that are relevant to the problem at hand.
\[ \hat {H} = \hat {H} ^0 + \hat {H} _m \label {8.4.6}\]
where \(\hat {H} ^0\) is the Hamiltonian operator in the absence of the field and \(\hat {H} _m\) is written using the operator forms of Equations \(\ref{8.4.1}\) and \(\ref{8.4.3}\)),
\[ \hat {H}_m = - \hat {\mu} _m \cdot \vec{B} = \dfrac {e}{2m_e} \hat {L} \cdot \vec{B} \label {8.4.7}\]
The scalar product
\[ \hat {L} \cdot \vec{B} = \hat {L}_x B_x + \hat {L}_y B_y + \hat {L}_z B_z \label {8.4.8}\]
simplifies if the z-axis is defined as the direction of the external field because then \(B_x\) and \(B_y\) are automatically 0, and Equation \ref{8.4.6} becomes
\[ \hat {H} = \hat {H}^0 + \dfrac {eB_z}{2m_e} \hat {L} _z \label {8.4.9}\]
where \(B_z\) is the magnitude of the magnetic field, which is along the z-axis.
We now can ask, “What is the effect of a magnetic field on the energy of the hydrogen atom orbitals?” To answer this question, we will not solve the Schrödinger equation again; we simply calculate the expectation value of the energy, \(\left \langle E \right \rangle \), using the existing hydrogen atom wavefunctions and the new Hamiltonian operator.
\[ \left \langle E \right \rangle = \left \langle \hat {H}^0 \right \rangle + \dfrac {eB_z}{2m_e} \left \langle \hat {L} _z \right \rangle \label {8.4.10}\]
where
\[\left \langle \hat {H}^0 \right \rangle = \int \psi ^*_{n,l,m_l} \hat {H}^0 \psi _{n,l,m_l} d \tau = E_n \label {8.4.11}\]
and
\[\left \langle \hat {L}_z \right \rangle = \int \psi ^*_{n,l,m_l} \hat {L}_z \psi _{n,l,m_l} d \tau = m_l \hbar \label {8.4.12}\]
Exercise \(\PageIndex{3}\)
Show that the expectation value \(\left \langle \hat {L}_z \right \rangle = m_l \hbar\).
The expectation value approach provides an exact result in this case because the hydrogen atom wavefunctions are eigenfunctions of both \(\hat {H} ^0\) and \(\hat {L}_z\). If the wavefunctions were not eigenfunctions of the operator associated with the magnetic field, then this approach would provide a first-order estimate of the energy. First and higher order estimates of the energy are part of a general approach to developing approximate solutions to the Schrödinger equation. This approach, called perturbation theory, is discussed in the next chapter.
The expectation value calculated for the total energy in this case is the sum of the energy in the absence of the field, \(E_n\), plus the Zeeman energy, \(\dfrac {e \hbar B_z m_l}{2m_e}\)
\[\left \langle E \right \rangle = E_n + \dfrac {e \hbar B_z m_l}{2m_e} = E_n + \mu _B B_z m_l \label {8.4.13}\]
The factor
\[ \dfrac {e \hbar}{2m_e} = - \gamma _e \hbar = \mu _B \label {8.4.14}\]
defines the constant \(\mu _B\), called the Bohr magneton, which is taken to be the fundamental magnetic moment. It has units of \(9.2732 \times 10^{-21}\) erg/Gauss or \(9.2732 \times 10^{-24}\) Joule/Tesla. This factor will help you to relate magnetic fields, measured in Gauss or Tesla, to energies, measured in ergs or Joules, for any particle with a charge and mass the same as an electron.
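To get a feel for the size of these Zeeman energies, here is a short numerical illustration (constants are standard CODATA values; the 1 Tesla field strength is chosen arbitrarily):

```python
# Compute the Bohr magneton from fundamental constants and the Zeeman shifts
# of the three 2p levels (m_l = -1, 0, +1) in a 1 T field.
e = 1.602176634e-19       # elementary charge, C
hbar = 1.054571817e-34    # reduced Planck constant, J s
m_e = 9.1093837015e-31    # electron mass, kg

mu_B = e * hbar / (2 * m_e)   # Bohr magneton, J/T (~9.274e-24)
Bz = 1.0                      # magnetic field along z, T

for ml in (-1, 0, 1):
    shift_eV = mu_B * Bz * ml / e     # Zeeman energy in eV
    print(f"m_l = {ml:+d}: Zeeman shift = {shift_eV:+.3e} eV")
```

The splitting per unit \(m_l\) comes out to roughly \(5.8 \times 10^{-5}\) eV, tiny compared to the 2p–1s transition energy of about 10.2 eV, which is why the Zeeman effect appears as a fine splitting of a single emission line.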
Equation \(\ref{8.4.13}\) shows that the \(m_l\) quantum number degeneracy of the hydrogen atom is removed by the magnetic field. For example, the three states \(\Psi _{211}\) , \(\Psi _{21-1}\), and \(\Psi _{210}\), which are degenerate in zero field, have different energies in a magnetic field, as shown in Figure \(\PageIndex{3}\).
Figure \(\PageIndex{3}\): The Zeeman effect. Emission when an electron switches from a 2p orbital to a 1s orbital occurs at only one energy in the absence of a magnetic field, but can occur at three different energies in the presence of a magnetic field.
The \(m_l = 0\) state, for which the component of angular momentum and hence also the magnetic moment in the external field direction is zero, experiences no interaction with the magnetic field. The \(m_l = +1\) state, for which the angular momentum in the z-direction is +ħ and the magnetic moment is in the opposite direction, against the field, experiences a raising of energy in the presence of a field. Maintaining the magnetic dipole against the external field direction is like holding a small bar magnet with its poles aligned exactly opposite to the poles of a large magnet (Figure \(\PageIndex{4}\)). It is a higher energy situation than when the magnetic moments are aligned with each other.
Figure \(\PageIndex{4}\): The effect of an external magnetic field (B) on the energy of a magnetic dipole (L) oriented a) with and b) against the applied magnetic field.
Exercise \(\PageIndex{4}\)
Carry out the steps going from Equation \(\ref{8.4.10}\) to Equation \(\ref{8.4.13}\).
Exercise \(\PageIndex{5}\)
Consider the effect of changing the magnetic field on the magnitude of the Zeeman splitting. Sketch a diagram where the magnetic field strength is on the x-axis and the energy of the three 2p orbitals is on the y-axis to show the trend in splitting magnitudes with increasing magnetic field. Be quantitative, calculate and plot the exact numerical values using a software package of your choice.
Exercise \(\PageIndex{6}\)
Based on your calculations in Exercise \(\PageIndex{2}\) sketch a luminescence spectrum for the hydrogen atom in the n = 2 level in a magnetic field of 1 Tesla. Provide the numerical value for each of the transition energies. Use cm-1 or electron volts for the energy units.
Contributors: Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
|
The integrand being a polynomial, I used the binomial formula to separate the monomials:
$$\int_{\frac{1}{4}}^{\frac{3}{4}} x^n(1-x)^n \, dx = \sum_{k = 0}^{n}{ n \choose k}(-1)^{k}\int_{\frac{1}{4}}^{\frac{3}{4}} x^{n+k} \, dx = \sum_{k = 0}^{n}{ n \choose k}(-1)^{k}\left[\frac{(\frac{3}{4})^{n+k+1}-(\frac{1}{4})^{n+k+1}}{n+k+1}\right]. $$
I'm looking to get a closed form of the following sum or at least make it 'nicer' in terms of looks:
$$f(a) =\sum_{k = 0}^{n}{ n \choose k}(-1)^{k}\frac{a^{n+k+1}}{n+k+1},\,\, 0<a < 1.$$
We have $$f'(a) =\sum_{k = 0}^{n}{ n \choose k}(-1)^{k}a^{n+k} $$ Then:
$$f'(a) =a^n(-1)^n\sum_{k = 0}^{n}{ n \choose k}(-1)^{n-k}a^{k} = a^n(-1)^{n}(a-1)^n = a^n(1-a)^n $$
so we are back to square one.
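The circular computation is easy to confirm with a computer algebra system (a check of my own, for one small fixed $n$):

```python
# Verify f'(a) = a^n (1-a)^n for the finite sum, here with n = 4.
import sympy as sp

a = sp.symbols('a')
n = 4
f = sum(sp.binomial(n, k) * (-1)**k * a**(n + k + 1) / (n + k + 1)
        for k in range(n + 1))
fprime = sp.diff(f, a)
assert sp.expand(fprime - a**n * (1 - a)**n) == 0
```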
Past this circular simplification, I thought of integrating but that will just make the formula much messier.
Maybe there's another approach?
Edit 1: My main goal is to get a closed form of the integral, so if $\frac14$ and $\frac34$ give some cancellations, it would be great if someone pointed them out.

Edit 2: The original problem:
let $(X_1,\cdots,X_{2n+1})$ be a sample of $2n+1$ independent, identically distributed random variables that are uniformly distributed over $[0,1]$
Find the probability that the median lies in $[\frac14,\frac34]$.
Edit 3:
Using @JamesArathoon's observation,
$$\begin{align}I_n & =\int_0^1 x^n (1-x)^n \, dx-2\int_0^{\frac{1}{4}} x^n (1-x)^n \, dx = B(n+1,n+1) - 2B(\frac14;n+1,n+1) \end{align}$$
where first special function is the beta function and second one is the incomplete beta.
$$\begin{align}I_n & = B(n+1,n+1) - 2B(\frac14;n+1,n+1) = \frac{\Gamma^2(n+1)}{\Gamma(2n+2)} -\frac{2}{4^{n+1}(n+1)}{_{2}}F_{1}(n+1,-n,n+2,\frac{1}{4}) \\ &=\frac{(n!)^2}{(2n+1)!} -\frac{2}{4^{n+1}(n+1)} {_{2}}F_{1}(n+1,-n,n+2,\frac{1}{4}) \end{align}$$
because one of the arguments of the hypergeometric function is a negative integer, the series terminates and is given by
$$\begin{align}{_{2}}F_{1}(n+1,-n,n+2,\frac{1}{4}) &= \sum_{k=0}^{n} (-1)^k \binom{n}{k} \frac{(n+1)_k}{4^k(n+2)_k} \\ &= 1 + \sum_{k=1}^{n} (-1)^k \frac{n!}{k!(n-k)!} \frac{(n+1)(n+2)\cdots(n+k)}{4^k(n+2)(n+3)\cdots(n+k+1)} \\ &=1 + \sum_{k=1}^{n} (-1)^k \frac{n!}{k!(n-k)!} \frac{(n+1)}{4^k(n+k+1)} \\ \end{align}$$
now that looks like it's got the potential to be turned into a binomial formula with $a = -1, b = \frac14$
but I still can't see it
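In the meantime, the reduction can at least be checked symbolically (my own sanity check, for one value of $n$): the closed form built from the terminating series reproduces the integral exactly.

```python
# Check: integral of x^n (1-x)^n over [1/4, 3/4] equals
# (n!)^2/(2n+1)! - 2/(4^(n+1)(n+1)) * 2F1(n+1, -n; n+2; 1/4),
# with the 2F1 evaluated via its terminating series.
import sympy as sp

x = sp.symbols('x')
n = 5
I_exact = sp.integrate(x**n * (1 - x)**n, (x, sp.Rational(1, 4), sp.Rational(3, 4)))

# Terminating hypergeometric series, as simplified above.
F = 1 + sum((-1)**k * sp.binomial(n, k) * (n + 1) / sp.Integer(4**k * (n + k + 1))
            for k in range(1, n + 1))
closed = (sp.factorial(n)**2 / sp.factorial(2*n + 1)
          - sp.Rational(2, 4**(n + 1) * (n + 1)) * F)
assert sp.simplify(I_exact - closed) == 0
```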
Thanks in advance!
|
Let $\pi \colon M\to N$ be a smooth map between real smooth manifolds. Then $C^\infty(M)$ forms a module over $C^\infty(N)$ (via pullback). Is this module flat when $\pi$ is a submersion?
Recall that the usual definition of flatness is equivalent to the following
equational condition: whenever $h_1, \ldots, h_k \in C^\infty(N)$ and $g_1, \ldots, g_k \in C^\infty(M)$ are such that $$h_1 g_1 + \ldots + h_k g_k = 0$$ (as functions on $M$), then there are functions $G_1, \ldots, G_r \in C^\infty(M)$ and $a_{i,j} \in C^\infty(N)$ such that $$g_i = \sum_j a_{i,j} G_j \quad \forall i$$ and $$\sum_i h_i a_{i,j} = 0 \quad \forall j$$
Some remarks:
It's known that the inclusion of an open subset $U\subset N$ is a flat morphism since smooth functions on $U$ are obtained from smooth functions on $N$ by localizing w.r.t. functions vanishing nowhere on $U$.
It's also known that smooth flat maps have to be open. Proofs of both of these facts can be found for example in the book: Gonzales, Salas, $C^\infty$-differentiable spaces, Lecture notes in Mathematics, Springer 2000.
I've asked some of the experts including Malgrange and the above authors and it seems that the answer is not known.
I gave the equational condition of flatness since it seems like the most reasonable thing to use here. But considering already the simplest situation here's what gets me stuck: suppose you want to check flatness of the standard projection $\mathbb{R}^2 \to \mathbb{R}, (x,y)\mapsto x$, and take the case of just one $h\in C^\infty(\mathbb{R})$ and one $g\in C^\infty(\mathbb{R}^2)$ with $hg=0$. If you pick $h(x)$ to be strictly positive for $x<0$ and $0$ for $x\geq 0$, then the flatness condition translates into:
Any smooth function $g(x,y) \in C^\infty (\mathbb{R}^2)$ that vanishes on the half plane $x\leq 0 $ admits a "factorization": $$g(x,y)= \sum_j a_j (x)G_j (x,y)$$ where the $a_j\in C^\infty(\mathbb{R})$ all vanish on $x\leq 0$ and the $G_j\in C^\infty(\mathbb{R}^2)$ are arbitrary.
Anyone has an idea how to prove this "simple" case, or sees a counter example?
(Edit: George Lowther beautifully proves this "simple" case, and also comes closer to the full result in his second answer. If you also think he deserves some credit consider up-voting his second answer since the first one turned community wiki.)

Motivation
My personal interest is that a positive answer would allow me to finish a certain proof, which trying to explain here would take this too far afield. But I may try to put the question into context as follows: the notion of flat morphism plays an important role in algebraic geometry where it is basically the right way to formalize the notion of parametrized families of varieties (fibers of such a morphism being these families). One may also say that it is the right "technical" notion allowing one to do all the things one expects to do with such parametrized families (correct me if I'm wrong).
Now I've been taught that differential topology may also be seen as a part of commutative algebra (and that taking such a point of view might even be useful at times). For example: a manifold itself may be recovered completely from the algebra of smooth functions on it, and any smooth map between manifolds is completely encoded by the corresponding algebra morphism. Other examples: vector fields are just derivations of the algebra, vector bundles are just finitely generated projective modules over the algebra etc. Good places to learn this point of view are: Jet Nestruev, Smooth manifolds and observables, as well as the above mentioned book.
Now in differential topology there is a well-known notion of smooth parametrized families of manifolds, namely smooth fiber bundles. Hence from this algebraic point of view it would be natural to expect that fiber bundles are flat morphisms.
|
EDIT: After mrf's comment below and some discussion with my instructor for the course it was decided that the below was not really an issue. Namely, I went into reading this lecture with the notion that we were going to solve the $\bar{\partial}$ equation--that this was our main goal. In other words, in the below we were mainly $f$ focused and not $\phi$ focused. In all actuality, it is the other way around. We were supposed to know that the $\bar{\partial}$ equation always has distributional solutions and that, in fact, we were really interested in finding solutions to $\bar{\partial}u=f$ with $u$ having controlled $\|\cdot\|_\phi$ norm.
This raises two questions, though, that I would love for someone to fill in:
This is the one-dimensional case of Hörmander's theorem. Can someone give me intuition for why, as an algebraic/differential geometer, having Hörmander's theorem is such a huge deal (as it is made out to be)?
mrf says that the theorems below actually show that $\bar{\partial}u=f$ is always solvable for any $f$, since we can always find (given a fixed $f$) a $C^2(\Omega,\mathbb{R})$ subharmonic function $\phi$ for which $\displaystyle \int_\Omega\frac{|f|^2}{\Delta\phi}e^{-\phi}$ is finite (we need finiteness to actually show a solution exists). Is there an easy way to see why such a function $\phi$ always exists for a given $f$?
Thanks!
I am currently reading the Park City lecture notes on Analytic and Algebraic Geometry (this book) and am really confused by some implicit assumptions made in the first lecture of the first minicourse (Lecture 1 of Bo Berndtsson's "An Introduction to Things $\overline{\partial}$").
Let me explain some of the background to the issue I am having. Let $\phi\in C^2(\Omega,\mathbb{R})$ be subharmonic and define the inner product:
$$\langle f,g\rangle_\phi=\int_\Omega f\bar{g}e^{-\phi}$$
and the norm $\|\alpha\|_\phi^2=\langle \alpha,\alpha\rangle_\phi$. We then define $\bar{\partial}^\ast_\phi$ to be the adjoint of $\bar{\partial}$ with respect to $\langle\cdot,\cdot\rangle_\phi$. Explicitly one can show that
$$\bar{\partial}^\ast_\phi\alpha=-e^{\phi}\frac{\partial}{\partial z}\left(e^{-\phi}\alpha\right)$$
So, now we are trying to follow the proof of Theorem 1.1.3 in the book which is stated as follows:
Theorem 1.1.3Let $\Omega\subseteq\mathbb{C}$ be a domain and suppose that $\phi\in C^2(\Omega,\mathbb{R})$ which is subharmonic. Then, for any $f\in L^2_{\text{loc}}(\Omega)$ there is a distributional solution $u$ to $\displaystyle \frac{\partial u}{\partial \bar{z}}=f$ subject to $$\int_\Omega |u|^2 e^{-\phi}\leqslant \int_\Omega \frac{|f|^2}{\Delta \phi}e^{-\phi}$$
The author states that the theorem follows from the following three propositions:
Proposition 1.1.1 Given $f$ there exists a distributional solution to $\displaystyle \frac{\partial u}{\partial\bar{z}}=f$ satisfying $$\|u\|_\phi^2\leqslant C\quad \mathbf{(1.3)}$$ for some $C>0$ if and only if the estimate $$\left\langle f,\alpha\right\rangle_\phi \leqslant C\|\bar{\partial}^\ast_\phi \alpha\|_\phi\quad\mathbf{(1.4)}$$ holds for every $\alpha\in C^2_c(\Omega)$.
Proposition 1.1.1 (cont.) For any given $\mu:\Omega\to\mathbb{R}^+$, $\mathbf{(1.4)}$ holds for all $f$ satisfying $$\int_\Omega \frac{|f|^2}{\mu}e^{-\phi}\, dz\leqslant C\quad\mathbf{(1.5)}$$ if and only if $$\int_\Omega \mu|\alpha|^2 e^{-\phi}\, dz\leqslant \|\bar{\partial}^\ast_\phi\alpha\|_\phi^2\quad\mathbf{(1.6)}$$ holds for all $\alpha\in C^2_c(\Omega)$.
and,
Proposition 1.1.2 Let $\Omega\subseteq\mathbb{C}$ be a domain, $\phi\in C^2(\Omega,\mathbb{R})$, and $\alpha\in C_c^2(\Omega)$. Then, $$\int_\Omega \Delta\phi|\alpha|^2 e^{-\phi}+\int_\Omega\left|\frac{\partial \alpha}{\partial\bar{z}}\right|^2 e^{-\phi}=\|\bar{\partial}^\ast_\phi\alpha\|_\phi^2\quad\mathbf{(1.7)}$$
It seems, by the ease with which he claims Theorem 1.1.3 follows from these three propositions, that the easy answer should be the correct one. The easy answer is that Proposition 1.1.2 shows that (1.6) holds for $\mu=\Delta\phi$. Thus, Proposition 1.1.1 (cont.) implies that for every $f$ satisfying (1.5) we have that $f$ satisfies (1.4) for all $\alpha$, and thus we have a distributional solution to $\displaystyle \frac{\partial u}{\partial\bar{z}}=f$ satisfying (1.3).
Ok, so everything seems hunky-dory, all of this goes through correctly to prove Theorem 1.1.3 if, given $f\in L^2_{\text{loc}}(\Omega)$, we could take
$$C=\int_\Omega \frac{|f|^2}{\Delta\phi}e^{-\phi}$$
The only issue is that to apply the proof of Proposition 1.1.1 we apply Riesz-Fischer to a certain operator $L$, the boundedness of which follows because we obtain a bound $\|L\|_\text{op}\leqslant C$. Thus, everything breaks down if $C$ is infinite. So, all of this strongly seems to suggest that the integral
$$\int_\Omega\frac{|f|^2}{\Delta\phi}e^{-\phi}$$
is finite for every $f\in L^2_\text{loc}(\Omega)$ and every subharmonic $\phi\in C^2(\Omega,\mathbb{R})$. But, I am fairly sure this is not true (just take $\Omega=\mathbb{C}$, $\phi=x^2+y^2$, and $f=\exp(2(x^2+y^2))$). Even if we require that $f\in L^2_{\text{loc}}(\Omega)$ and $fe^{\frac{-\phi}{2}}\in L^2(\Omega)$ (which may be a possible typo) there is still doubt that this integral always converges.
If anyone could provide any insight into what I am missing/what the author may have meant I would be extremely grateful.
|
I’ve just seen that Open Science, a new journal by the prestigious Royal Society, published the article Quantum correlations are weaved by the spinors of the Euclidean primitives, by Joy Christian. The article, as numerous others by the same author, claims that Bell’s theorem is wrong, and that one can violate Bell inequalities using a local hidden-variables model.
This is of course nonsense. Bell’s theorem is not only a rather simple piece of mathematics, with a few-lines proof that can be understood by high-school students, but also the foundation of an entire field of research — quantum information theory. It has been studied, verified, and improved upon by thousands of scientists around the world.
The form of Bell’s theorem that is relevant for the article at hand is that for all probability distributions $\rho(\lambda)$ and response functions $A(a,\lambda)$ and $B(b,\lambda)$ with range $\{-1,+1\}$ we have that
\begin{multline*} -2 \le \sum_\lambda \rho(\lambda) \Big[A(a_1,\lambda)B(b_1,\lambda)+A(a_1,\lambda)B(b_2,\lambda) \\ +A(a_2,\lambda)B(b_1,\lambda)-A(a_2,\lambda)B(b_2,\lambda)\Big] \le 2 \end{multline*} The author’s proposed counterexample? It’s described in equations (3.48) and (3.49): A binary random variable $\lambda$ that can take values $-1$ or $+1$, with $\rho(-1)=\rho(+1)=1/2$, and response functions $A(a,\pm1)=\pm1$ and $B(b,\pm1)=\mp1$. That’s it. Just perfectly anti-correlated results, that do not even depend on the local settings $a$ and $b$. The value of the Bell expression above is simply $-2$.
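To see concretely that this model sits inside the classical bound, here is a quick sketch (plain Python; the settings passed in are arbitrary, since the model ignores them) evaluating the Bell expression above:

```python
# The hidden-variable model from equations (3.48)-(3.49): lambda = +-1 with
# probability 1/2, A(a, lambda) = lambda, B(b, lambda) = -lambda, i.e.
# perfect anti-correlation independent of the settings a, b.

def A(a, lam):
    return lam

def B(b, lam):
    return -lam

def bell_expression(a1, a2, b1, b2):
    total = 0.0
    for lam in (-1, +1):
        rho = 0.5  # rho(-1) = rho(+1) = 1/2
        total += rho * (A(a1, lam) * B(b1, lam) + A(a1, lam) * B(b2, lam)
                        + A(a2, lam) * B(b1, lam) - A(a2, lam) * B(b2, lam))
    return total

value = bell_expression(0.0, 1.0, 0.0, 1.0)  # settings are irrelevant here
print(value)  # -2.0: exactly at the classical bound, no violation at all
```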
Now how could Open Science let such trivial nonsense pass? They do provide the “Review History” of the article, so we can see what happened: there were two referees that pointed out that the manuscript was wrong, one that was unsure, and two that issued a blanket approval without engaging with the contents. And the editor decided to accept it anyway.
What now? Open Science can recover a bit of its reputation by withdrawing this article, as Annals of Physics did with a previous version, but I’m never submitting an article to them.
|
When I evaluate
Solve[a==Sin[b*c], b] to rearrange the following for $ b $:
$$ a = \sin(bc) $$
I get the following result from Mathematica:
$$\begin{align*} \left\{\left\{b\to \text{ConditionalExpression}\left[\frac{-\sin ^{-1}(a)+2 \pi c_1+\pi }{c},c_1\in \mathbb{Z}\right]\right\},\right.\left.\left\{b\to \text{ConditionalExpression}\left[\frac{\sin ^{-1}(a)+2 \pi c_1}{c},c_1\in \mathbb{Z}\right]\right\}\right\} \end{align*}$$
It seems far too complicated. Unless I'm making a huge mistake, surely solving the equation for $ b $ would give:
$$ b = \frac{\sin ^{-1}(a)}{c} $$
Am I doing something wrong?
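Nothing is wrong; the sine is periodic, so every branch of Mathematica's conditional answer really does solve the equation, and $b=\sin^{-1}(a)/c$ is only one member of the family. A numerical spot-check (with illustrative values of $a$ and $c$, which are my own assumption) confirms both families:

```python
import math

# Illustrative values (assumed, not from the question):
a, c = 0.5, 2.0

def branch1(n):
    # b = (pi - arcsin(a) + 2 pi n) / c  -- Mathematica's first family
    return (math.pi - math.asin(a) + 2 * math.pi * n) / c

def branch2(n):
    # b = (arcsin(a) + 2 pi n) / c  -- the second family; n = 0 gives
    # the "simple" answer b = arcsin(a)/c, which is just one solution
    return (math.asin(a) + 2 * math.pi * n) / c

for n in range(-2, 3):
    for b in (branch1(n), branch2(n)):
        assert abs(math.sin(b * c) - a) < 1e-12

print("all branches satisfy a == sin(b*c)")
```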
|
The Schrödinger equation for one-electron atoms and ions such as H, \(He^+\), \(Li^{2+}\), etc. is constructed using a Coulombic potential energy operator and the three-dimensional kinetic energy operator written in spherical coordinates. Because the radial and angular motions are separable, solutions to the Schrödinger equation consist of products \(R (r) Y (\theta , \varphi )\) of radial functions \(R(r) \) and angular functions \(Y (\theta , \varphi )\) that are called atomic orbitals. Three quantum numbers, n, \(l\), and \(m_l\), are associated with the orbitals. Numerous visualization methods are available to enhance our understanding of the orbital shapes and sizes represented by the modulus squared of the wavefunctions. The orbital energy eigenvalues depend only on the n quantum number and match the energies found using the Bohr model of the hydrogen atom. Because all orbitals with the same principal quantum number have the same energy in one-electron systems, each orbital energy level is \(n^2\)-degenerate. For example, the n = 3 level contains 9 orbitals (one 3s, three 3p’s and five 3d’s).
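The \(n^2\) count quoted above follows from summing the \(2l+1\) values of \(m_l\) over \(l = 0, \dots, n-1\); a one-line check:

```python
# Each shell n contains subshells l = 0 .. n-1, and each subshell holds
# 2l + 1 orbitals (one per m_l value), giving the n^2-fold degeneracy.

def orbitals_in_shell(n):
    return sum(2 * l + 1 for l in range(n))

for n in (1, 2, 3, 4):
    assert orbitals_in_shell(n) == n ** 2

print(orbitals_in_shell(3))  # 9: one 3s, three 3p, five 3d
```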
Atomic spectra measured in magnetic fields have more spectral lines than those measured in field-free environments. This Zeeman effect is caused by the interaction of the imposed magnetic field with the magnetic dipole moment of the electrons, which removes the \(m_l\) quantum number degeneracy.
In addition to the orbital wavefunctions obtained by solving the Schrödinger equation, electrons in atoms possess a quality called spin that has associated wavefunctions \(\sigma\), quantum numbers s and ms, spin angular momentum S and spectroscopic selection rules. Interaction with a magnetic field removes the degeneracy of the two spin states, which are labeled \(\alpha\) and \(\beta \), and produces additional fine structure in atomic spectra. While spin does not appear during the solution of the hydrogen atom presented in this text, spin is presented as a postulate because it is necessary to explain experimental observations about atoms.
Single-electron wavefunctions that incorporate both the orbital (spatial) and spin wavefunctions are called spin-orbitals. The occupancy of spin-orbitals is called the electron configuration of an atom. The lowest energy configuration is called the ground state configuration and all other configurations are called excited state configurations. To fully understand atomic spectroscopy, it is necessary to specify the total electronic state of an atom, rather than simply specifying the orbital configuration. An electronic state, or term, is characterized by a specific energy, total angular momentum and coupling of the orbital and spin angular momenta, and can be represented by a term symbol of the form \(^{2S+1}L_J\) where S is the total spin angular momentum quantum number, L is the total orbital angular momentum quantum number and J is the total angular momentum quantum number obtained by coupling L and S. One term may include several degenerate electron configurations. The degeneracy of a term is determined by the number of projections of the total angular momentum vector on the z-axis. The degeneracy of a term can be split by interaction with a magnetic field.
Overview of key concepts and equations for the hydrogen atom: potential energy, Hamiltonian, wavefunctions, quantum numbers, energies, spectroscopic selection rules, angular momentum properties.
Contributors: Adapted from "Quantum States of Atoms and Molecules" by David M. Hanson, Erica Harvey, Robert Sweeney, Theresa Julia Zielinski
|
I want to calculate a flux on my FPGA using the Euler equations with the finite volume method. Unfortunately the values of the state variables differ a lot. For example the pressure has a value of 100000 and the density 1.16. This makes it complicated to calculate on the FPGA. Now I'm wondering if there is a normalized form of the Euler equations for finite volumes, so that the values of the state variables are in the same range. I've tried to set them all to one, but my simulation crashed. I think that's not possible because of the non-linearity of the equations.
There is a normalized form, though it's properly called the dimensionless Euler equations.
The way to do it is define:
a time scale $t_0$, a density scale $\rho_0$, and a length scale $L_0$,
and then derive the remaining scales from these: $$v_0 = \frac{L_0}{t_0},\quad p_0=\rho_0v_0^2$$
NB: it is possible to use other combinations, but I find that these are often the easiest to employ. Your Euler equations then become, formally,$$\frac{\partial\rho'}{\partial t'}+\nabla\cdot\rho'\mathbf v'=0 \\\frac{\partial\rho'\mathbf v'}{\partial t'}+\nabla\cdot\left(\rho'\mathbf v'\mathbf v'+p'\mathbb I\right)=0 \\ \frac{\partial e'}{\partial t'}+\nabla\cdot\left(\left[e'+p'\right]\mathbf v'\right)=0$$where$$t'=\frac{t}{t_0},\quad\rho'=\frac{\rho}{\rho_0},\quad\mathbf v'=\frac{\mathbf v}{v_0},\quad p'=\frac{p}{p_0},\quad e'=\frac{e}{p_0}$$are your dimensionless quantities.
Note that these variables have not changed your equations, so the only thing that you need to do is define the scales and then divide your variables by the scales in your initial conditions file.
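As an illustration of this scaling, the snippet below brings the question's values (pressure 100000, density 1.16) to the same order of magnitude; the reference scales are my own assumption, loosely sea-level air, not part of the question:

```python
# Reference scales (illustrative, roughly sea-level air -- these numbers
# are an assumption, not from the original question):
rho0 = 1.16          # kg/m^3, density scale
L0 = 1.0             # m, length scale
t0 = L0 / 340.0      # s, chosen so the velocity scale is a sonic speed

v0 = L0 / t0         # derived velocity scale, 340 m/s
p0 = rho0 * v0 ** 2  # derived pressure scale

# Dimensional state from the question:
rho, p = 1.16, 100000.0

rho_prime = rho / rho0
p_prime = p / p0

print(rho_prime, p_prime)  # both now O(1) instead of spanning 5 orders
```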
|
Model Compensator of K Function
Given a point process model fitted to a point pattern dataset, this function computes the
compensator of the \(K\) function based on the fitted model (as well as the usual nonparametric estimates of \(K\) based on the data alone). Comparison between the nonparametric and model-compensated \(K\) functions serves as a diagnostic for the model.

Usage
Kcom(object, r = NULL, breaks = NULL, ..., correction = c("border", "isotropic", "translate"), conditional = !is.poisson(object), restrict = FALSE, model = NULL, trend = ~1, interaction = Poisson(), rbord = reach(interaction), compute.var = TRUE, truecoef = NULL, hi.res = NULL)
Arguments object
Object to be analysed. Either a fitted point process model (object of class
"ppm") or a point pattern (object of class
"ppp") or quadrature scheme (object of class
"quad").
r
Optional. Vector of values of the argument \(r\) at which the function \(K(r)\) should be computed. This argument is usually not specified. There is a sensible default.
breaks
This argument is for advanced use only.
…
Ignored.
correction
Optional vector of character strings specifying the edge correction(s) to be used. See
Kest for options.
conditional
Optional. Logical value indicating whether to compute the estimates for the conditional case. See Details.
restrict
Logical value indicating whether to compute the restriction estimator (
restrict=TRUE) or the reweighting estimator (
restrict=FALSE, the default). Applies only if
conditional=TRUE. See Details.
model
Optional. A fitted point process model (object of class
"ppm") to be re-fitted to the data using
update.ppm, if
object is a point pattern. Overrides the arguments
trend, interaction and rbord.
compute.var
Logical value indicating whether to compute the Poincare variance bound for the residual \(K\) function (calculation is only implemented for the isotropic correction).
truecoef
Optional. Numeric vector. If present, this will be treated as if it were the true coefficient vector of the point process model, in calculating the diagnostic. Incompatible with
hi.res.
hi.res
Optional. List of parameters passed to
quadscheme. If this argument is present, the model will be re-fitted at high resolution as specified by these parameters. The coefficients of the resulting fitted model will be taken as the true coefficients. Then the diagnostic will be computed for the default quadrature scheme, but using the high resolution coefficients.
Details
This command provides a diagnostic for the goodness-of-fit of a point process model fitted to a point pattern dataset. It computes an estimate of the \(K\) function of the dataset, together with a
model compensator of the \(K\) function, which should be approximately equal if the model is a good fit to the data.
The first argument,
object, is usually a fitted point process model (object of class
"ppm"), obtained from the model-fitting function
ppm.
For convenience,
object can also be a point pattern (object of class
"ppp"). In that case, a point process model will be fitted to it, by calling
ppm using the arguments
trend (for the first order trend),
interaction (for the interpoint interaction) and
rbord (for the erosion distance in the border correction for the pseudolikelihood). See
ppm for details of these arguments.
The algorithm first extracts the original point pattern dataset (to which the model was fitted) and computes the standard nonparametric estimates of the \(K\) function. It then also computes the
model compensator of the \(K\) function. The different function estimates are returned as columns in a data frame (of class
"fv").
The argument
correction determines the edge correction(s) to be applied. See
Kest for explanation of the principle of edge corrections. The following table gives the options for the
correction argument, and the corresponding column names in the result:
correction      description of correction             nonparametric   compensator
"isotropic"     Ripley isotropic correction           iso             icom
"translate"     Ohser-Stoyan translation correction   trans           tcom
The nonparametric estimates can all be expressed in the form $$ \hat K(r) = \sum_i \sum_{j < i} e(x_i,x_j,r,x) I\{ d(x_i,x_j) \le r \} $$ where \(x_i\) is the \(i\)-th data point, \(d(x_i,x_j)\) is the distance between \(x_i\) and \(x_j\), and \(e(x_i,x_j,r,x)\) is a term that serves to correct edge effects and to re-normalise the sum. The corresponding model compensator is $$ {\bf C} \, \tilde K(r) = \int_W \lambda(u,x) \sum_j e(u,x_j,r,x \cup u) I\{ d(u,x_j) \le r\} $$ where the integral is over all locations \(u\) in the observation window, \(\lambda(u,x)\) denotes the conditional intensity of the model at the location \(u\), and \(x \cup u\) denotes the data point pattern \(x\) augmented by adding the extra point \(u\).
If the fitted model is a Poisson point process, then the formulae above are exactly what is computed. If the fitted model is not Poisson, the formulae above are modified slightly to handle edge effects.
The modification is determined by the arguments
conditional and
restrict. The value of
conditional defaults to
FALSE for Poisson models and
TRUE for non-Poisson models. If
conditional=FALSE then the formulae above are not modified. If
conditional=TRUE, then the algorithm calculates the
restriction estimator if
restrict=TRUE, and calculates the
reweighting estimator if
restrict=FALSE. See Appendix D of Baddeley, Rubak and Moller (2011). Thus, by default, the reweighting estimator is computed for non-Poisson models.
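For intuition, the nonparametric estimate \(\hat K(r)\) displayed above can be sketched in a much-simplified form; the snippet below (plain Python, not spatstat code) omits the edge-correction term entirely and simulates a uniform pattern, for which \(K(r) \approx \pi r^2\):

```python
import math
import random

def k_estimate(points, r, window_area):
    # Naive K estimate with NO edge correction, i.e. the correction term
    # e(x_i, x_j, r, x) is just |W| / (n (n - 1)):
    #   K(r) = |W| / (n (n-1)) * #{ordered pairs i != j with d(x_i, x_j) <= r}
    n = len(points)
    close = 0
    for i in range(n):
        for j in range(n):
            if i != j and math.dist(points[i], points[j]) <= r:
                close += 1
    return window_area * close / (n * (n - 1))

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(500)]

# For complete spatial randomness K(r) is close to pi r^2; the naive
# estimate is biased slightly downward by edge effects, which is exactly
# what the correction term repairs.
print(k_estimate(pts, 0.05, 1.0), math.pi * 0.05 ** 2)
```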
The nonparametric estimates of \(K(r)\) are approximately unbiased estimates of the \(K\)-function, assuming the point process is stationary. The model compensators are unbiased estimates
of the mean values of the corresponding nonparametric estimates, assuming the model is true. Thus, if the model is a good fit, the mean value of the difference between the nonparametric estimates and model compensators is approximately zero.

Value
A function value table (object of class
"fv"), essentially a data frame of function values. There is a plot method for this class. See
fv.object.
References
Baddeley, A., Rubak, E. and Moller, J. (2011) Score, pseudo-score and residual diagnostics for spatial point process models.
Statistical Science 26, 613--646.

See Also
Point process models:
ppm.
Aliases
Kcom

Examples
fit0 <- ppm(cells, ~1)  # uniform Poisson
if(interactive()) {
  plot(Kcom(fit0))
  # compare the isotropic-correction estimates
  plot(Kcom(fit0), cbind(iso, icom) ~ r)
  # uniform Poisson is clearly not correct
}
fit1 <- ppm(cells, ~1, Strauss(0.08))
K1 <- Kcom(fit1)
K1
if(interactive()) {
  plot(K1)
  plot(K1, cbind(iso, icom) ~ r)
  plot(K1, cbind(trans, tcom) ~ r)
  # how to plot the difference between nonparametric estimates and compensators
  plot(K1, iso - icom ~ r)
  # fit looks approximately OK; try adjusting interaction distance
}
fit2 <- ppm(cells, ~1, Strauss(0.12))
K2 <- Kcom(fit2)
if(interactive()) {
  plot(K2)
  plot(K2, cbind(iso, icom) ~ r)
  plot(K2, iso - icom ~ r)
}
Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
|
Is there a decent way to understand coordinate transformations in this representation?
(By the way, the incredible similarity between that equation and Schrodinger's equation is pretty cool. That matrix there behaves a lot like the complex unit $i$ in that it is a rotation by 90 degrees and has eigenvalues $\pm i$.)
If we do the wick rotation such that τ = it, then Schrödinger equation, say of a free particle, does have the same form of heat equation. However, it is clear that it admits the wave solution so it is sensible to call it a wave equation.Whether we should treat it as a wave equation or a heat e...
This question prompted me to do a price check on osmium. I was surprised that it's relatively cheap compared to the rest of the platinum group, but that seems to be because it has a small & steady supply & demand, so it's not attractive to traders in precious metals.
I guess the toxicity of its oxide (& other tetravalent compounds) is also a disincentive. ;) Wikipedia says it sells for around $1000 USD per troy ounce, but other sites quote a starting price of $400.
These guys sell nice looking ingots of just about every metal you can think of, apart from the alkalis & radioactives. I think I'll pass on the osmium & iridium, but I suppose I could afford a 1 oz tungsten ingot. :) Sure, it's not quite as dense as osmium / iridium, but its density is still pretty impressive.
In mathematics, a complex structure on a real vector space V is an automorphism of V that squares to the minus identity, −I. Such a structure on V allows one to define multiplication by complex scalars in a canonical fashion so as to regard V as a complex vector space.Every complex vector space can be equipped with a compatible complex structure, however, there is in general no canonical such structure. Complex structures have applications in representation theory as well as in complex geometry where they play an essential role in the definition of almost complex manifolds, by contrast to complex...
I thought Arnol'd had a more in-depth discussion, but there's only a brief mention in §41.E
I just don't get why someone would think that proving a mathematical thing like Pythagoras' theorem is something that physics can do. physics.stackexchange.com/questions/468504/… I guess it'd be reasonable in Euclid's day, or even Newton's, but certainly not since the development of non-Euclidean geometries.
Hi, @Leyla. I hope you don't think my previous reply is rude, but it's much better if you write equations in MathJax. My old eyes can barely read the equations in that photo, especially on my phone. And MathJax is a lot easier to search than equations in images.
unless there's crucial bits of context that you've omitted, the tensor cannot be assumed to be symmetric
indeed if $t_{ijk}$ were totally symmetric, then $t_{[ijk]}$ would be identically zero and there would be no need to consider it
you're correct as far as $$t_{[321]} + t_{(321)} = \frac{2}{3!} \left[ t_{321} + t_{213} + t_{132} \right] $$ goes, but that's as far as you can take the calculation
this is enough to ensure that $t_{321} \neq t_{[321]} + t_{(321)} $ for an arbitrary rank-three tensor
particularly because it is perfectly possible for there to exist a rank-three tensor $t$ and a reference frame $R$ such that the components of $t$ on $R$ are such that $t_{321}=1$ and the rest of its components vanish.
@LeylaAlkan tensors are just vectors in a vector space. It's extremely important that you understand how these linear-independence and linearity arguments work, and that you get comfortable in producing them when they're needed.
i.e. the core take-home message you should be extracting from this is how the counter-example was generated and why it works.
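As a concrete instance of the counter-example described above, here is a numerical check (plain Python; indices are 0-based, so $t_{321}$ becomes `t[2][1][0]`):

```python
from itertools import permutations

def parity(p):
    # Sign of a permutation p of (0, ..., n-1), computed by counting swaps.
    p = list(p)
    sign = 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            sign = -sign
    return sign

# Rank-3 tensor with t_321 = 1 (0-based: t[2][1][0]) and all else zero.
t = [[[0.0] * 3 for _ in range(3)] for _ in range(3)]
t[2][1][0] = 1.0

def sym(t, i, j, k):
    # t_(ijk): average over all permutations of the index positions
    idx = (i, j, k)
    return sum(t[idx[a]][idx[b]][idx[c]]
               for (a, b, c) in permutations(range(3))) / 6.0

def antisym(t, i, j, k):
    # t_[ijk]: signed average over all permutations
    idx = (i, j, k)
    return sum(parity((a, b, c)) * t[idx[a]][idx[b]][idx[c]]
               for (a, b, c) in permutations(range(3))) / 6.0

s = sym(t, 2, 1, 0)      # 1/6
a = antisym(t, 2, 1, 0)  # 1/6
print(s + a)             # 1/3, which is not equal to t_321 = 1
```

The sum $1/3$ matches the formula $\frac{2}{3!}[t_{321}+t_{213}+t_{132}]$ quoted above, and differs from $t_{321}=1$.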
The total wavefunction of an electron $\psi(\vec{r},s)$ can always be written as $$\psi(\vec{r},s)=\phi(\vec{r})\zeta_{s,m_s}$$ where $\phi(\vec{r})$ is the space part and $\zeta_{s,m_s}$ is the spin part of the total wavefunction $\psi(\vec{r},s)$. In my notation, $s=1/2, m_s=\pm 1/2$.Questio...
I don't know if there is such a rule that only particles with nonzero flavor would undergo weak interaction. I read from [Wikipedia-Weak isospin](https://en.wikipedia.org/wiki/Weak_isospin) that "Fermions with positive chirality ("right-handed" fermions) and anti-fermions with negative chirality ("left-handed" anti-fermions) have $T = T_3 = 0$ and form singlets that do not undergo weak interactions." and "... all the electroweak bosons have weak hypercharge $Y_ w = 0$ , so unlike gluons and the color force, the electroweak bosons are unaffected by the force they mediate."
but $W^+$ has weak isospin 1 and $W^-$ has weak isospin -1, not zero, so they should participate weak interaction.
so I am confused as to what quantum number determines whether a particle participates weak interaction.
|
My book is Differential Forms in Algebraic Topology by Loring W. Tu and Raoul Bott of which An Introduction to Manifolds by Loring W. Tu is a prequel.
The characterization of the closed Poincaré dual is given here (the "(5.13)") in Section 5.5. This has $\int_M \omega \wedge \eta_S$, where $\eta_S$ is on the right rather than left.

Question: Why is it $\int_M \omega \wedge \eta_S$, where $\eta_S$ is on the right rather than left?
See below for why I think $\eta_S$ should be on the left rather than right.
Note: I believe the characterization for compact Poincaré dual for compact $S$ and $M$ of finite type is correct with $\eta_S'$ on the right.
Guess: Could have something to do with sign commutativity of Mayer-Vietoris, as described in Lemma 5.6.
Guess: Poincare dual as described is indeed with $\eta_S$ on the left, but there's also a unique cohomology class $[\gamma_S]$ that's on the right given by $[\gamma_S] = [-\eta_S]$.
How I got $\int_M \eta_S \wedge \omega$ instead of $\int_M \omega \wedge \eta_S$:
I use $()^{\vee}$, instead of $()^{*}$, to denote dual just like in Section 3.1 of the prequel.
Let $\varphi$ be the "linear functional on $H^{k}_cM$" given here.
Such $\varphi: H^{k}_cM \to \mathbb R$ is given by $\varphi[\omega] = \int_S \iota^{*}\omega$ for $[\omega] \in H^k_cM$ and $\iota: S \to M$ inclusion.
Let $\delta$ be the isomorphism of Poincaré duality (the "(5.4)").
Such $\delta: H^{n-k}M \to (H^{k}_cM)^{\vee}$ is given by $\delta([\tau]) = \delta_{[\tau]}$, for $[\tau] \in H^{n-k}M$ and $\delta_{[\tau]}$ given below.
$\delta_{[\tau]}([\omega]) = \int_M (\tau \wedge \omega)$, for $[\omega] \in H^k_cM$, under the well-definedness described in Section 24.4 of the prequel (which I think is the full details of the "Because the wedge product is an antiderivation, it descends to cohomology" here) and under the pairing given here, which I believe puts $\tau$ on the left rather than right.
$[\eta_S]$ is the inverse image of $\varphi$ under $\delta$.
By choosing $[\tau] = [\eta_S]$, we get $\delta([\eta_S]) = \delta_{[\eta_S]} = \varphi$, that is, for all $[\omega] \in H^k_cM$,
$$\int_M (\eta_S \wedge \omega) = \int_S \iota^{*}\omega,$$
where $\eta_S$ is on the left rather than right.

Edit: After doing some thinking (it's easier to think when you know something is right/wrong, as opposed to thinking about whether or not it's right/wrong, I believe), along with comments of Najib Idrissi and the answer of Prof Tu, I think I've got it. Is this right?
We get a unique class $[\gamma_S]$ where for $\gamma_S \in [\gamma_S]$ (or any other element of $[\gamma_S]$), we have that for all $[\omega] \in H^k_cM$ and $\omega \in [\omega]$ (or any other element of $[\omega]$),
$$\int_M (\gamma_S \wedge \omega) = \int_M ((-1)^{k(n-k)}\omega \wedge \gamma_S) = \int_M (\omega \wedge (-1)^{k(n-k)} \gamma_S) = \int_S \iota^{*}\omega$$ and then define $[\eta_S] := (-1)^{k(n-k)} [\gamma_S] := [(-1)^{k(n-k)} \gamma_S]$.
In this case, I think $[\eta_S] := - [\gamma_S] := [-\gamma_S]$ is a different definition from the one in the preceding paragraph unless $k(n-k)$ is an odd integer or something.
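For reference, the commutation sign for a $k$-form and an $(n-k)$-form is $(-1)^{k(n-k)}$, by graded commutativity of the wedge product:

```latex
% Graded commutativity: commuting a k-form past an (n-k)-form moves each of
% its k one-form factors past n-k others, i.e. k(n-k) transpositions:
\gamma_S \wedge \omega = (-1)^{k(n-k)}\, \omega \wedge \gamma_S ,
\qquad\text{so}\qquad
\int_M \gamma_S \wedge \omega = (-1)^{k(n-k)} \int_M \omega \wedge \gamma_S .
```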
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
I am looking for literature on this kind of problem. $$ \begin{align} \min_x \max_k &\quad \sum_{i,j} x_{ij}c_{ijk}\\ \text{subject to}&\\ &\sum_j x_{ij}=1,&& \forall i\in\mathcal J\\ &x_{ij}\in\{0,1\},&& \forall i\in\mathcal J, j\in\mathcal M \end{align} $$
$\mathcal J$ is a set of jobs and $\mathcal M$ is a set of machines. For each pair, $c_{ij}$ is a vector (indexed by $k$) describing the cost of job $i$ on machine $j$, so the cost has multiple dimensions.
The aim of the program is to find a schedule so that each job is scheduled on a machine and the biggest dimension of the added-up cost vector is minimized.
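On a toy instance (all numbers made up), this min-max objective can be brute-forced directly, which may help when sanity-checking a solver:

```python
from itertools import product

# Toy instance: c[i][j] is the cost VECTOR (indexed by k, here 2 dimensions)
# of running job i on machine j. All numbers are made up.
c = [
    [(4, 1), (2, 3)],  # job 0 on machine 0 / machine 1
    [(1, 5), (3, 2)],  # job 1
    [(2, 2), (2, 2)],  # job 2
]
n_jobs, n_machines, n_dims = 3, 2, 2

best_val, best_assign = None, None
for assign in product(range(n_machines), repeat=n_jobs):
    # add up the cost vectors of the chosen (job, machine) pairs ...
    totals = [sum(c[i][assign[i]][k] for i in range(n_jobs))
              for k in range(n_dims)]
    # ... and take the biggest dimension (the inner max of the min-max)
    val = max(totals)
    if best_val is None or val < best_val:
        best_val, best_assign = val, assign

print(best_val, best_assign)  # 7 (1, 1, 0)
```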
The problem is described here:
http://arxiv.org/pdf/1211.5729v2.pdf (Page 2 left column bottom) as Generalized load balancing
and here:
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=A8FEB960E21CA0361066CD5200C02718?doi=10.1.1.380.8241&rep=rep1&type=pdf (Page 4 left column) as generalized AP association problem
EDIT
Load balancing can be generalized in several ways, e.g. by restricting each job to run only on a specific set of machines. This is also a generalization of the scheduling problem.
I'm interested in the case where the scheduling is restricted and multidimensional. I'm looking for more literature for this problem.
Thank you
|
I’m trying to get into the latest Manin-Marcolli paper Quantum Statistical Mechanics of the Absolute Galois Group on how to create from Grothendieck’s dessins d’enfant a quantum system, generalising the Bost-Connes system to the non-Abelian part of the absolute Galois group $Gal(\overline{\mathbb{Q}}/\mathbb{Q})$.
In doing so they want to extend the action of the multiplicative monoid $\mathbb{N}_{\times}$ by power maps on the roots of unity to the action of a larger monoid on all dessins d’enfants.
Here they use an idea, originally due to Jordan Ellenberg, worked out by Melanie Wood in her paper Belyi-extending maps and the Galois action on dessins d’enfants.
To grasp this, it’s best to remember what dessins have to do with Belyi maps, which are maps defined over $\overline{\mathbb{Q}}$
\[ \pi : \Sigma \rightarrow \mathbb{P}^1 \] from a Riemann surface $\Sigma$ to the complex projective line (aka the 2-sphere), ramified only in $0,1$ and $\infty$. The dessin determining $\pi$ is the 2-coloured graph on the surface $\Sigma$ with as black vertices the pre-images of $0$, white vertices the pre-images of $1$ and these vertices are joined by the lifts of the closed interval $[0,1]$, so the number of edges is equal to the degree $d$ of the map.
Wood considers a very special subclass of these maps, which she calls Belyi-extender maps, of the form
\[ \gamma : \mathbb{P}^1 \rightarrow \mathbb{P}^1 \] defined over $\mathbb{Q}$ with the additional property that $\gamma$ maps $\{ 0,1,\infty \}$ into $\{ 0,1,\infty \}$.
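Both defining conditions can be checked symbolically. Here is a sketch with SymPy for the cubic extender $\gamma(t)=-\tfrac{27}{4}(t^3-t^2)$ used further below: it maps $\{0,1,\infty\}$ into itself and is ramified only over $0,1,\infty$.

```python
from sympy import symbols, Rational, diff, solve, simplify

t = symbols('t')
gamma = -Rational(27, 4) * (t**3 - t**2)

# gamma maps {0, 1} into {0, 1} (and, being a polynomial, infinity to infinity):
assert gamma.subs(t, 0) == 0 and gamma.subs(t, 1) == 0

# Finite critical points and the corresponding critical values:
crit = solve(diff(gamma, t), t)            # [0, 2/3]
values = {simplify(gamma.subs(t, c)) for c in crit}
print(values)  # {0, 1}: ramified only over 0, 1 (and infinity)
```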
The upshot being that post-compositions of Belyi’s with Belyi-extenders $\gamma \circ \pi$ are again Belyi maps, and if two Belyi’s $\pi$ and $\pi’$ lie in the same Galois orbit, then so must all $\gamma \circ \pi$ and $\gamma \circ \pi’$.
The crucial Ellenberg-Wood idea is then to construct “new Galois invariants” of dessins by checking existing and easily computable Galois invariants on the dessins of the Belyi’s $\gamma \circ \pi$.
For this we need to know how to draw the dessin of $\gamma \circ \pi$ on $\Sigma$ if we know the dessins of $\pi$ and of the Belyi-extender $\gamma$. Here’s the procedure
Here, the middle dessin is that of the Belyi-extender $\gamma$ (which in this case is the power map $t \rightarrow t^4$) and the upper graph is the unmarked dessin of $\pi$.
One has to replace each of the black-white edges in the dessin of $\pi$ by the dessin of the extender $\gamma$, but one must be very careful in respecting the orientations on the two dessins. In the upper picture just one edge is replaced, and one has to do this for all edges in a compatible manner.
Thus, a Belyi-extender $\gamma$ inflates the dessin of $\pi$ by a factor equal to the degree of $\gamma$. For this reason I prefer to call them
dessinflateurs, a contraction of dessin+inflator.
In her paper, Melanie Wood says she can separate dessins for which all known Galois invariants were the same, such as these two dessins,
by inflating them with a suitable Belyi-extender and computing the monodromy group of the inflated dessin.
This monodromy group is the permutation group generated by two elements, the first one gives the permutation on the edges given by walking counter-clockwise around all black vertices, the second by walking around all white vertices.
For example, by labelling the edges of $\Delta$, its monodromy is generated by the permutations $(2,3,5,4)(1,6)(8,10,9)$ and $(1,3,2)(4,7,5,8)(9,10)$ and GAP tells us that the order of this group is $1814400$. For $\Omega$ the generating permutations are $(1,2)(3,6,4,7)(8,9,10)$ and $(1,2,4,3)(5,6)(7,9,8)$, giving an isomorphic group.
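These orders are easy to double-check with a computer algebra system. A sketch for $\Delta$ using SymPy's permutation groups (SymPy is 0-indexed, so every edge label is shifted down by one):

```python
from sympy.combinatorics import Permutation, PermutationGroup

# Monodromy generators of Delta, edge labels shifted to 0..9:
black = Permutation([[1, 2, 4, 3], [0, 5], [7, 9, 8]], size=10)  # (2,3,5,4)(1,6)(8,10,9)
white = Permutation([[0, 2, 1], [3, 6, 4, 7], [8, 9]], size=10)  # (1,3,2)(4,7,5,8)(9,10)
G = PermutationGroup(black, white)
print(G.order())  # GAP reports 1814400, the order of A10
```

Both generators are even permutations and the action is transitive (the dessin is connected), which is consistent with the group being the alternating group $A_{10}$ of order $1814400$.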
Let’s inflate these dessins using the Belyi-extender $\gamma(t) = -\frac{27}{4}(t^3-t^2)$ with corresponding dessin
It took me a couple of attempts before I got the inflated dessins correct (as I knew from Wood that this simple extender would not separate the dessins). Inflated $\Omega$ on top:
Both dessins give a monodromy group of order $35838544379904000000$.
Now we’re ready to do serious work.
Melanie Wood uses in her paper the extender $\zeta(t)=\frac{27 t^2(t-1)^2}{4(t^2-t+1)^3}$ with associated dessin
and says she can now separate the inflated dessins by the order of their monodromy groups. She gets for the inflated $\Delta$ the order $19752284160000$ and for inflated $\Omega$ the order $214066877211724763979841536000000000000$.
It’s very easy to make mistakes in these computations, so probably I did something horribly wrong but I get for both $\Delta$ and $\Omega$ that the order of the monodromy group of the inflated dessin is $214066877211724763979841536000000000000$.
I’d be very happy when someone would be able to spot the error!
|
You can use the formula for centre-of-mass, CoM, to calculate the location of the "centre", which is the point the total mass "averages down to". Place your coordinate system somewhere and calculate the following x- and y-coordinates:
$$x_{com}=\frac{\sum mx}{\sum m}\quad , \quad y_{com}=\frac{\sum my}{\sum m}\,.$$
Similarly for the z-coordinate. If you have individual particles, it is fairly easy to sum them all up. If you have an extended body, you must integrate $\int$ instead of summing $\sum$; this may be tricky depending on the exact geometry.
But if the geometry happens to be fairly simple and symmetric and the mass evenly distributed, you may be able to figure out the CoM location easily, with no calculations. If, for example, your triangle is equilateral (and its mass evenly spread), then you know by symmetry that the CoM is exactly at the geometric centre.
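A quick numerical sketch of the discrete formula (unit masses at the vertices of an equilateral triangle, chosen purely for illustration) confirms the symmetry argument:

```python
import math

def centre_of_mass(masses, positions):
    """Weighted average of positions: (sum m*x / sum m, sum m*y / sum m, ...)."""
    M = sum(masses)
    return tuple(sum(m * p[i] for m, p in zip(masses, positions)) / M
                 for i in range(len(positions[0])))

# Equilateral triangle with side 1 and unit masses at the vertices.
verts = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)]
com = centre_of_mass([1.0, 1.0, 1.0], verts)
print(com)  # the geometric centre: (0.5, sqrt(3)/6) ≈ (0.5, 0.2887)
```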
Regardless of how you find the CoM, you can consider this point as where gravity pulls, as if the entire object's mass were concentrated at this single point. This makes it easy to plug into a gravitational potential energy formula, such as $$U=mgh\, ,$$
depending on your purpose. Just remember that the height $h$ in this particular formula is not the distance of the CoM from the ground, but the drop in the CoM's height between where it starts out and where it ends up after tilting over. Only this amount of potential energy is released.
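As a worked example with made-up numbers: a 2 kg object whose CoM drops by 1 m while tipping over releases

```python
m, g = 2.0, 9.81            # kg, m/s^2 (illustrative values)
h_start, h_end = 1.5, 0.5   # CoM height before and after tipping, in m
delta_U = m * g * (h_start - h_end)
print(delta_U)  # 19.62 J of potential energy released
```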
|
Let $\eta(\tau)$ be the
Dedekind eta function. In his Lost Notebook, Ramanujan played around with a related function and came up with some nice evaluations,
$$\begin{aligned} \eta(i) &= \frac{1}{2} \frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\\ \eta(2i) &= \frac{1}{2^{11/8}} \frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\\ \eta(3i) &= \frac{1}{2\cdot 3^{3/8}} \frac{1}{(2+\sqrt{3})^{1/12}} \frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\\ \eta(4i) &= \frac{1}{2^{29/16}} \frac{1}{(1+\sqrt{2})^{1/4}} \frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\\ \eta(5i) &= \frac{1}{2\sqrt{5}}\left(\tfrac{1+\sqrt{5}}{2}\right)^{-1/2}\, \frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\\ \eta(6i) &=\; \color{red}{??}\\ \eta(7i) &= \frac{1}{2\sqrt{7}}\left(-\tfrac{7}{2}+\sqrt{7}+\tfrac{1}{2}\sqrt{-7+4\sqrt{7}} \right)^{{1/4}}\, \frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\\ \eta(8i) &= \frac{1}{2^{73/32}} \frac{(-1+\sqrt[4]{2})^{1/2}}{(1+\sqrt{2})^{1/8}} \frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\\ \eta(16i) &= \frac{1}{2^{177/64}} \frac{(-1+\sqrt[4]{2})^{1/4}}{(1+\sqrt{2})^{1/16}} \left(-2^{5/8}+\sqrt{1+\sqrt{2}}\right)^{1/2}\,\frac{\Gamma\big(\tfrac{1}{4}\big)}{\pi^{3/4}}\end{aligned}$$
with the higher ones $>4$ added by this OP. (Note the powers of $2$.)
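The first evaluation is easy to verify numerically from the q-product $\eta(\tau) = q^{1/24}\prod_{n\ge 1}(1-q^n)$ with $q=e^{2\pi i\tau}$; a sketch with mpmath (truncation at 200 terms is far more than enough, since $q=e^{-2\pi}$ is tiny):

```python
import mpmath as mp
mp.mp.dps = 30

def eta(tau, nterms=200):
    """Dedekind eta via its q-product; converges very fast for Im(tau) >= 1."""
    q = mp.exp(2j * mp.pi * tau)
    prod = mp.mpf(1)
    for n in range(1, nterms + 1):
        prod *= 1 - q**n
    return q**(mp.mpf(1) / 24) * prod

lhs = eta(1j)
rhs = mp.gamma(mp.mpf(1) / 4) / (2 * mp.pi**(mp.mpf(3) / 4))
print(mp.chop(lhs - rhs))  # zero to working precision
```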
Questions: Similar to the others, what is the exact value of $\eta(6i)$? Is it true that the function $$F(\sqrt{-N}) = \frac{\pi^{3/4}}{\Gamma\big(\tfrac{1}{4}\big)}\,\eta(\sqrt{-N})$$ is an algebraic number only if $N$ is a square? P.S. It seems strange that there is a function that yields an algebraic number for square input $N$ and a transcendental number for non-square $N$. (Are there well-known functions like that?) For an example of non-square $N$, we have,
$$\eta(\sqrt{-3}) = \frac{3^{1/8}}{2^{4/3}} \frac{\Gamma\big(\tfrac{1}{3}\big)^{3/2}}{\pi} = 0.63542\dots$$
and $F(\sqrt{-3})$ seems to be transcendental.
|