Since my question relates directly to a part of the text from a 2004 book,
Logic in Computer Science: Modelling and Reasoning about Systems (2nd Edition) by Michael Huth and Mark Ryan, in order to provide context for the following discussion, I'm partially quoting the book verbatim:
The decision problem of validity in predicate logic is undecidable: no program exists which, given any $\varphi$, decides whether $\models\varphi$ holds.
PROOF: As said before, we pretend that validity is decidable for predicate logic and thereby solve the (insoluble) Post correspondence problem. Given a correspondence problem instance $C$: $$s_1 s_2 ... s_k$$ $$t_1 t_2 ... t_k$$ we need to be able to construct, within finite space and time and uniformly so for all instances, some formula $\varphi$ of predicate logic such that $\varphi$ holds iff the correspondence problem instance $C$ above has a solution.
As function symbols, we choose a constant $e$ and two function symbols $f_0$ and $f_1$ each of which requires one argument. We think of $e$ as the empty string, or word, and $f_0$ and $f_1$ symbolically stand for concatenation with 0, respectively 1. So if $b_1 b_2 ... b_l$ is a binary string of bits, we can code that up as the term $f_{b_l}(f_{b_{l−1}}...(f_{b_2}(f_{b_1}(e)))...)$. Note that this coding spells that word backwards. To facilitate reading those formulas, we abbreviate terms like $f_{b_l}(f_{b_{l−1}}...(f_{b_2}(f_{b_1}(t)))...)$ by $f_{{b_1}{b_2}...{b_l}}(t)$.
We also require a predicate symbol $P$ which expects two arguments. The intended meaning of $P(s,t)$ is that there is some sequence of indices $(i_1,i_2,...,i_m)$ such that $s$ is the term representing $s_{i_1} s_{i_2}...s_{i_m}$ and $t$ represents $t_{i_1} t_{i_2}...t_{i_m}$. Thus, $s$ constructs a string using the same sequence of indices as does $t$; only $s$ uses the $s_i$ whereas $t$ uses the $t_i$.
Our sentence $\varphi$ has the coarse structure $\varphi_1 \wedge \varphi_2 \implies \varphi_3$ where we set
$$\varphi_1 \stackrel{def}{=} \bigwedge\limits_{i=1}^k P\left(f_{s_i}(e),f_{t_i}(e)\right)$$
$$\varphi_2 \stackrel{def}{=} \forall v,w \hspace{1mm} P(v,w)\rightarrow\bigwedge\limits_{i=1}^kP(f_{s_i}(v),f_{t_i}(w))$$
$$\varphi_3 \stackrel{def}{=} \exists z\hspace{1mm} P(z,z)$$.
Our claim is $\varphi$ holds iff the Post correspondence problem $C$ has a solution.
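To make the construction concrete, here is a small sketch (my own illustrative code, not from the book) that builds $\varphi$ as a plain string for a given PCP instance; the formula syntax (`&`, `->`, `forall`, `exists`) is an ad-hoc notation:

```python
def build_phi(pairs):
    """Build the sentence (phi1 & phi2) -> phi3 for a PCP instance
    given as a list of pairs (s_i, t_i) of binary strings."""
    def f(word, t):
        # f_{word}(t): apply f_b for each bit b in order, so the
        # resulting term spells the word backwards, as in the book.
        for bit in word:
            t = f"f{bit}({t})"
        return t
    phi1 = " & ".join(f"P({f(s, 'e')},{f(t, 'e')})" for s, t in pairs)
    phi2 = ("forall v,w. P(v,w) -> ("
            + " & ".join(f"P({f(s, 'v')},{f(t, 'w')})" for s, t in pairs)
            + ")")
    phi3 = "exists z. P(z,z)"
    return f"(({phi1}) & ({phi2})) -> {phi3}"
```

For instance, for the pair $(s_1,t_1)=(1,101)$ the first conjunct of $\varphi_1$ becomes `P(f1(e),f1(f0(f1(e))))`.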
In proving PCP ⟹ Validity:
Conversely, let us assume that the Post correspondence problem C has some solution, [...] The way we proceed here is by
interpreting finite, binary strings in the domain of values $A′$ of the model $M′$. This is not unlike the coding of an interpreter for one programming language in another. The interpretation is done by a function interpret, which is defined inductively on the data structure of finite, binary strings:
$$\text{interpret}(\epsilon) \stackrel{def}{=} e^{M′}$$
$$\text{interpret}(s0) \stackrel{def}{=} {f_0}^{M′}(\text{interpret}(s))$$
$$\text{interpret}(s1) \stackrel{def}{=} {f_1}^{M′}(\text{interpret}(s))$$.
[...] Using [$\text{interpret}(b_1 b_2...b_l) = f_{b_l}^{M′}(f_{b_{l-1}}^{M′}(...(f_{b_1}^{M′}(e^{M′})...)))$] and the fact that $M′\models\varphi_1$, we conclude that $(\text{interpret}(s_i), \text{interpret}(t_i)) \in P^{M′}$ for $i = 1,2,...,k$. [...] since $M′ \models \varphi_2$, we know that for all $(s,t) \in P^{M′}$ we have that $(\text{interpret}(ss_i),\text{interpret}(tt_i)) \in P^{M′}$ for $i=1,2,...,k$. Using these two facts, starting with $(s, t) = (s_{i_1}, t_{i_1})$, we repeatedly use the latter observation to obtain
(2.9) $(\text{interpret}(s_{i_1}s_{i_2}...s_{i_n}),\text{interpret}(t_{i_1}t_{i_2}...t_{i_n})) \in P^{M′}$.
[...] Hence (2.9) verifies $\exists{z} P(z,z)$ in $M′$ and thus $M′ \models \varphi_3$.
In proving that the validity problem of predicate logic is undecidable, according to the approach I learned at school, which is based on that of the Huth & Ryan book (2nd edition, page 135), when constructing the reduction from PCP to the validity problem, the "finite binary strings" of the universe are interpreted with an "interpret function", which encodes binary strings into composites of functions of the model.
Then it goes on to show that, using the fact that the antecedent of $\varphi$ must hold for the claim to be non-trivial, both sub-formulae of the antecedent can be expressed with the said "interpret function". From there, it follows that the consequent holds too, since it can also be expressed with the interpret function in a way that follows from the previous expressions with interpret.
My question is: what is the purpose of this "interpret function"? Why can't we just use the previously devised $\varphi$ and get the same result? What do we get out of using interpret to express our elements?
And also, what if our universe contains some arbitrary elements; that is, what if they are not binary strings? Do we just construct some mapping of the two?
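The inductive definition of interpret quoted above can also be sketched in code; terms are modeled here as nested tuples, a representation I chose purely for illustration:

```python
def interpret(s):
    """interpret(epsilon) = e, interpret(w0) = f0(interpret(w)),
    interpret(w1) = f1(interpret(w)); terms are nested tuples."""
    t = ('e',)
    for b in s:
        t = ('f0' if b == '0' else 'f1', t)
    return t

def apply_word(word, t):
    """Build the term f_{word}(t), applying f_b for each bit of word."""
    for b in word:
        t = ('f0' if b == '0' else 'f1', t)
    return t
```

Here `interpret('01')` gives `('f1', ('f0', ('e',)))`, i.e. $f_1(f_0(e))$, matching the book's backwards spelling, and concatenation satisfies `interpret(s + w) == apply_word(w, interpret(s))`, which is exactly the property used when iterating $\varphi_2$.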
|
Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $\sqrt s = 13\ \rm{TeV}$, as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons, performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and}\ 8\ \rm{TeV}$, are summarized in the present proceedings, together with studies of Central Exclusive Production at $\sqrt s = 13\ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008. Geneva : CERN, 2019. In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019
LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC Long Shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger. [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007. Geneva : CERN, 2019. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006. Geneva : CERN, 2019. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018
XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) Recent years have seen a resurgence of interest in searches for exotic states, motivated by precision spectroscopy studies of beauty and charm hadrons that have provided the observation of several exotic states. The latest results on the spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004. Geneva : CERN, 2019. In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018
Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows tests of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest LHCb measurements of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons. [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003. Geneva : CERN, 2019. In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018
Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002. Geneva : CERN, 2019. In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018
Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031. Geneva : CERN, 2018. In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ \text{n}_{eq}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030. Geneva : CERN, 2018.
|
A positive solution for an asymptotically cubic quasilinear Schrödinger equation
School of Mathematical Sciences, Dalian University of Technology, 116024 Dalian, China
$$ - \Delta u + V(x)u - \Delta ({u^2})u = q(x)g(u), \qquad x \in {\mathbb{R}^N}, $$
where $N\ge 1$, $0 < q(x)\le \lim_{|x|\to\infty}q(x)$, $g\in C(\mathbb{R}^+, \mathbb{R})$, and $g(u)/u^3 \to 1$ as $u \to \infty$.
Keywords: Quasilinear Schrödinger equation, positive solution, asymptotically cubic, Nehari manifold.
Mathematics Subject Classification: Primary: 35J20, 35J62; Secondary: 49J35.
Citation: Xiang-Dong Fang. A positive solution for an asymptotically cubic quasilinear Schrödinger equation. Communications on Pure & Applied Analysis, 2019, 18 (1) : 51-64. doi: 10.3934/cpaa.2019004
|
$f\sim g$ does not imply $f'\sim g'$! L'Hôpital's rule only works in one direction:
$$\log x \sim \log \left((5+\sin x)x\right) \quad\text{but}\quad\frac1{x}\nsim\frac{((5+\sin x)x)'}{(5+\sin x)x}$$
or, if you want,
$$\log\log x \sim \log \log \left((5+\sin x)x\right) \quad\text{but}\quad\frac1{x\log x}\nsim\frac{((5+\sin x)x)'}{(5+\sin x)x \cdot \log\left((5+\sin x)x\right)}$$
(The factor $5+\sin x$ is there just to make the second quotient misbehave.)
The point is that we don't know (a priori) that$$\frac{\pi(x)}{x/\log x}$$has a limit for $x\to\infty$.
What l'Hopital does tell us, is that
if the limit of $(\pi(x)\log x)/x$ exists, then it is $1$.
I believe Chebyshev's original proof of this fact (and subsequent ones) also goes along these lines, via a Mertens-type estimate $\sum_{p\leq x}1/p\sim\int_1^x\pi(t)/t^2\,\mathrm{d}t$.
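As a numerical illustration (my own sketch, nothing more), one can watch $\pi(x)\log x/x$ creep toward $1$; the convergence is famously slow:

```python
import math

def prime_pi(n):
    """Prime-counting function via a simple sieve of Eratosthenes."""
    if n < 2:
        return 0
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # Cross off multiples of i starting from i*i.
            sieve[i * i :: i] = bytearray(len(range(i * i, n + 1, i)))
    return sum(sieve)

for x in (10 ** 4, 10 ** 5, 10 ** 6):
    print(x, prime_pi(x) * math.log(x) / x)
```

The ratios approach $1$ from above, but even at $x=10^6$ they are still around $1.08$; that the limit exists at all is the content of the prime number theorem, not of this experiment.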
|
Is it possible (tractable) to determine if the following system of equations has any nontrivial solutions (ie, none of the unknowns are zero) in the domain of integers?
$$A^2 + B^2=C^2 D^2$$ $$2 C^4 + 2 D^4 = E^2 + F^2$$
for the second one, take $C > D > 0,$ then $$ E = C^2 - D^2, \; \; \; F = C^2 + D^2 $$
If you wanted a system, take any $C,D \equiv 1 \pmod 4$ distinct primes, such as $5,13.$ We get the Pythagorean triple $16^2 + 63^2 = 65^2 = 5^2 13^2.$ Then $2 \cdot 5^4 + 2 \cdot 13^4 = (13^2 - 5^2)^2 + (13^2 + 5^2)^2 = 144^2 + 194^2.$
To solve,
$$A^2+B^2=C^2 D^2\\ 2C^4+2D^4=E^2+F^2$$
Choose,
$$\begin{aligned} A&=2(ac-bd)(ad+bc)\\ B&=(ac-bd)^2-(ad+bc)^2\\ C&=a^2+b^2\\ D&=c^2+d^2\\ E&=(a^2+b^2 )^2-(c^2+d^2 )^2\\ F&=(a^2+b^2 )^2+(c^2+d^2 )^2\\ \end{aligned}$$
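The parametrization can be checked mechanically; the values of $a,b,c,d$ below are arbitrary nonzero choices for illustration:

```python
a, b, c, d = 1, 2, 3, 4  # arbitrary illustrative values

A = 2 * (a * c - b * d) * (a * d + b * c)
B = (a * c - b * d) ** 2 - (a * d + b * c) ** 2
C = a ** 2 + b ** 2
D = c ** 2 + d ** 2
E = C ** 2 - D ** 2
F = C ** 2 + D ** 2

# Both equations of the system hold identically in a, b, c, d.
assert A ** 2 + B ** 2 == C ** 2 * D ** 2
assert 2 * C ** 4 + 2 * D ** 4 == E ** 2 + F ** 2
assert all(v != 0 for v in (A, B, C, D, E, F))  # nontrivial for these choices
```

The first identity is the Brahmagupta–Fibonacci composition $((ac-bd)^2+(ad+bc)^2)=(a^2+b^2)(c^2+d^2)$ squared; the second is just $(x-y)^2+(x+y)^2=2x^2+2y^2$ with $x=C^2$, $y=D^2$.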
|
Conservation of the number of particles follows from a symmetry of the system. As Akshay Kumar said in his response, when the number-of-particles operator commutes with the Hamiltonian, the number of particles is conserved. Particles are all that is discussed in condensed matter (better to say quasi-particles, actually), like electrons and holes (certainly the most famous ones, but we should say quasi-particles of positive and negative excitation energy relative to the Fermi energy if we were not lazy: I think the length of their exact names is reason enough to keep "electron" and "hole" in the following :-). So it is worth knowing whether some (quasi-)particles can pop out from nowhere. Fortunately, when the particle number is conserved, they do not pop out from nowhere; they can only be transmuted from another (quasi-)particle. That's what happens with superconductivity: two electrons disappear and one Cooper pair emerges (in a really pictorial way of speaking).
Now for superconductivity, it is easier to say that the number of particles is conserved if your Hamiltonian is invariant under the transformation
$$c\rightarrow e^{\mathbf{i}\theta}c$$ and $$c^{\dagger}\rightarrow e^{-\mathbf{i}\theta}c^{\dagger}$$
where the $c$'s are the fermionic operators and $\theta$ is an angle, the parameter of the U(1) rotation. In particular, if your Hamiltonian (better to say a Lagrangian) is invariant under the phase shift defined above, you can associate a Noether current with it. For the U(1) rotation symmetry, the conserved current is the current of particles. In particular, for time-independent problems (to simplify), the number of particles is conserved if your Hamiltonian is invariant under the transformation defined above.
The BCS Hamiltonian describing conventional superconductivity reads (I discard the one-body term and the spin for simplicity: they change nothing in the conclusions we want to arrive at)
$$H_{\text{BCS}}\propto c^{\dagger}c^{\dagger}cc$$
such that the U(1) rotation does not change it, since there are as many $c$ as $c^{\dagger}$ operators.
Below the critical temperature, the new superconducting phase appears, characterised by a non-vanishing order parameter (i.e. the number of Cooper pairs, still in a pictorial way of speaking; better to say the superconducting gap parameter)
$$\Delta\propto cc$$
which transforms under a U(1) phase shift like
$$\Delta\rightarrow e^{2\mathbf{i}\theta}\Delta$$
since there are now two $c$ operators not compensated by $c^{\dagger}$'s. So the order parameter $\Delta$ is not invariant under the U(1) phase transformation. One says that the superconducting ground state does not conserve the number of particles.
Note that:
Saying that the number of particles is not conserved is an abuse of language, since the total number of electrons is the same in both the normal and superconducting phases. The condensed (superconducting) phase simply does not respect the invariance under the U(1) rotation. But it is true that, in a sense, some electrons disappear. As I said in the introduction: they are transmuted into Cooper pairs (once again, a pictorial way of speaking).
Such a mechanism, where the Hamiltonian satisfies a symmetry that its ground state does not, is called a spontaneous symmetry breaking. Superconductivity is just one example of this mechanism.
$\Delta$ remains invariant under the restricted rotations $c\rightarrow e^{\mathbf{i}n\pi}c$ with $n\in\mathbb{Z}$. Since there are only two such rotation elements, $e^{\mathbf{i}n\pi}=\pm 1$, one says that U(1) has been broken to $\mathbb{Z}_{2}$ (a fancy notation for the group with only two elements).
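Since only the net count of $c$ versus $c^{\dagger}$ operators matters for the U(1) phase, the transformation behaviour of $H_{\text{BCS}}$ and $\Delta$ can be captured by a toy phase-bookkeeping sketch (purely illustrative bookkeeping, not an operator computation):

```python
import cmath

def u1_phase(ops, theta):
    """Net phase e^{i n theta} picked up by a product of operators under
    c -> e^{i theta} c, where n = (#c) - (#cdag)."""
    n = sum(+1 if op == 'c' else -1 for op in ops)
    return cmath.exp(1j * n * theta)

theta = 0.7
h_bcs = ['cdag', 'cdag', 'c', 'c']  # H_BCS ~ cdag cdag c c : net phase 0
delta = ['c', 'c']                  # Delta ~ c c : picks up e^{2 i theta}
```

Here `u1_phase(h_bcs, theta)` equals $1$ for every $\theta$, while `u1_phase(delta, theta)` equals $e^{2\mathbf{i}\theta}$, which is only trivial for $\theta=n\pi$: exactly the breaking of U(1) down to $\mathbb{Z}_2$ described above.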
Post-Scriptum: Please tell me if you need more explanations about some terminology. I don't know where you're starting from and my answer is a little bit abrupt for young students I believe.
|
The two previous answers certainly solve the problem. Soundness of Geoffroy's protocol is indeed fine, but the witness $x$ appears in the computation of the announcement $(A',B')$, as $B'=g^{r_x/x} h^r$. This can be avoided, however, and at the same time one can arrive at the protocol in a perhaps more natural way, as follows.
Starting with $A=g^x h^y$ and $B=g^{1/x} h^z$, we see that $B$ is a Pedersen commitment to the (multiplicative) inverse $1/x$ of the value $x$ committed to in the Pedersen commitment $A$. So $x$ appears both in $A$ and $B$, and these two occurrences need to be connected somehow. EQ-composition is a very effective way to accomplish this, but we cannot directly apply it to exponents of the form $x$ and $1/x$. A simple way out is to move $x$ around a bit in the equation for $B$ by raising both sides to the power of $x$, so that we get:$$ A=g^x h^y,\qquad g=B^x h^{-zx}.$$We can now apply EQ-composition to the factors $g^x$ and $B^x$, but one may wonder about the new factor $h^{-zx}$, which also depends on $x$. Fortunately, such a factor causes no problems, because we can think of $h^{z'}=h^{-zx}$ as a factor that is independent of $x$; it's like replacing $z$ with $z'=-zx$, which is fine because this is a one-to-one transformation for nonzero $x$.
For the $\Sigma$-protocol we get:
Prover sends announcement $(a,b)=(g^u h^v, B^u h^w)$ with $u,v,w\in_R\mathbb{Z}_n$.
Verifier sends challenge $c\in_R\mathbb{Z}_n$.
Prover sends response $(r,s,t)=(u+c\,x, v+c\,y, w-c\,z\,x) \bmod n$. Verifier accepts if $g^r h^s = a A^c$ and $B^r h^t = b g^c$.
It is instructive to see why special soundness holds. So, let's consider two accepting conversations $(a,b;c;r,s,t)$ and $(a,b;c';r',s',t')$ with $c\neq c'$. Then we find:$$\begin{array}{cl} & g^r h^s = a A^c,\ g^{r'} h^{s'} = a A^{c'},\quad B^r h^t = b g^c,\ B^{r'} h^{t'} = b g^{c'}\\ \Rightarrow& g^{r-r'} h^{s-s'} = A^{c-c'} ,\quad B^{r-r'} h^{t-t'} = g^{c-c'} \\ \Leftrightarrow& A = g^{\frac{r-r'}{c-c'}} h^{\frac{s-s'}{c-c'}},\quad B = g^{\frac{c-c'}{r-r'}} h^{-\frac{t-t'}{r-r'}}.\end{array}$$Here, we are using that $r\neq r'$ holds as well: otherwise we see that $B^{r-r'} h^{t-t'} = g^{c-c'}$ is equivalent to $h^{t-t'} = g^{c-c'}$, and we would have $\log_g h = (t-t')/(c-c')$, contradicting the assumption that $\log_g h$ is unknown.Hence, a witness is obtained as $x=(r-r')/(c-c')$, $y=(s-s')/(c-c')$, and $z=-(t-t')/(r-r')$.Clearly, $x\neq0$ and $B=g^{1/x} h^z$ holds, as well as $A=g^x h^y$.
The same line of reasoning can be applied to show special soundness for Geoffroy's protocol.
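The whole protocol fits in a few lines of code. The sketch below runs one honest execution in a toy group (an order-11 subgroup of $\mathbb{Z}_{23}^*$; all parameters are illustrative and far too small for real use, and $\log_g h$ is merely assumed unknown here):

```python
import random

# Toy parameters (illustration only): modulus p = 23, subgroup order n = 11,
# g and h generators of the order-11 subgroup. Exponents live mod n because
# g and h have order n. Requires Python 3.8+ for pow(x, -1, n).
p, n = 23, 11
g, h = 4, 9

# Prover's witness: x nonzero, with A = g^x h^y and B = g^{1/x} h^z.
x, y, z = 3, 5, 7
A = pow(g, x, p) * pow(h, y, p) % p
B = pow(g, pow(x, -1, n), p) * pow(h, z, p) % p

# Announcement.
u, v, w = (random.randrange(n) for _ in range(3))
a = pow(g, u, p) * pow(h, v, p) % p
b = pow(B, u, p) * pow(h, w, p) % p

# Challenge.
c = random.randrange(n)

# Response.
r = (u + c * x) % n
s = (v + c * y) % n
t = (w - c * z * x) % n

# Verification: g^r h^s = a A^c and B^r h^t = b g^c.
ok1 = pow(g, r, p) * pow(h, s, p) % p == a * pow(A, c, p) % p
ok2 = pow(B, r, p) * pow(h, t, p) % p == b * pow(g, c, p) % p
```

Both checks succeed for any honest $u,v,w$ and challenge $c$, by the same algebra as in the completeness argument ($B^x h^{-zx}=g$); special soundness is of course not demonstrated by a single run.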
|
The usual formula for euclidean distance that everybody uses is
$$d(x,y):=\sqrt{\sum (x_i - y_i)^2}$$
Now, as far as I know, the sum of squares usually comes with some problems with respect to numerical precision.
There is an obviously equivalent formula:
$$d(x,y):= c \sqrt{\sum \left(\frac{x_i - y_i}{c}\right)^2}$$
Where it seems to be a common practise to choose $c = \max_i |x_i - y_i|$.
For 2d, this simplifies to a formula of the form $d(x,y):= c \sqrt{1 + \left(\frac{b}{c}\right)^2}$
Some questions here:
How big is the gain in precision from doing this, in particular for high dimensionalities? How much does it increase computational costs? Is this choice of $c$ optimal?
To compute $c$, this needs two passes over the data. However, it should be possible in a single pass, by starting with $c_0=1$, and then adjusting it when necessary for optimal precision.
E.g. let $c_0=1$, $c_i=\max(c_{i-1}, |x_i-y_i|)$. Then $$S_i:=\sum_{j\leq i} \left(\frac{x_j - y_j}{c_i}\right)^2 = \sum_{j\leq i-1} \left(\frac{x_j - y_j}{c_{i-1}}\right)^2 \cdot \frac{c_{i-1}^2}{c_i^2}+\left(\frac{x_i - y_i}{c_i}\right)^2 = S_{i-1} \cdot \left(\frac{c_{i-1}}{c_i}\right)^2+\left(\frac{x_i - y_i}{c_i}\right)^2.$$ This should allow single-pass computation of this formula, right?
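The single-pass recurrence is straightforward to implement; here is a minimal sketch (the function name and structure are my own, not a standard API):

```python
import math

def scaled_euclidean(x, y):
    """Euclidean distance with running rescaling to avoid overflow and
    underflow: c is the current scale, s the scaled sum of squares."""
    c, s = 1.0, 0.0
    for xi, yi in zip(x, y):
        d = abs(xi - yi)
        if d > c:
            # Rescale the accumulated sum to the new, larger scale.
            s = s * (c / d) ** 2 + 1.0
            c = d
        elif d > 0.0:
            s += (d / c) ** 2
    return c * math.sqrt(s)
```

For components around $10^{200}$ the naive sum of squares overflows to infinity, while this version stays finite; for ordinary magnitudes it agrees with the textbook formula.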
Any comments in particular on the computational cost and precision benefits of computing Euclidean distance this way? Why is everybody using the naive way, is the gain in precision too small for low dimensionality and the associated computational cost too high?
P.S. At least to my understanding, the usual formula should be precise up to the value range of sqrt(Double.MAX_VALUE) down to sqrt(Double.MIN_NORMAL), which covers around 1e±154, at most divided by the dimensionality; so even for 1000 dimensions, that should be fine for most uses of a distance function ...
|
A corrected form of the question asks to show that $\int_{\mathbb R^n} e^{-x^tAx}\;dx\;=\; \pi^{n/2}/\sqrt{\det A}$ for symmetric $n$-by-$n$ $A$ with positive-definite real part. First, for $A$
real (positive-definite), there is a (unique) positive-definite square root $S$ of $A$, and the change of variables $x=S^{-1}y$ gives the result, as the questioner had noted.
The trick here, as in many similar situations asking for extension to complex parameters of a computation that succeeds simply by change of variables in the purely real case, is invocation of the Identity Principle from complex analysis. That is, if $f,g$ are holomorphic on a non-empty connected open set $\Omega$ and $f(z)=g(z)$ for $z$ in some subset with an accumulation point in $\Omega$, then $f=g$ throughout $\Omega$. This can be iterated to apply to several complex variables, in various manners. In the case at hand, this gives an extension from symmetric real matrices to symmetric complex matrices (with the constraint of positive-definiteness on the real part, for convergence of everything).
To be sure, the complex span (in the space of $n$-by-$n$ matrices) of the real symmetric matrices is the space of complex symmetric matrices, not that of $n$-by-$n$ complex matrices with arbitrary imaginary part.
EDIT: To discuss meromorphy in each of the entries, observe that if $A$ is symmetric with positive-definite real part, then so is $A+z\cdot (e_{ij}+e_{ji})$ for sufficiently small complex $z$, where $e_{ij}$ is the matrix with $ij$-th entry $1$ and otherwise $0$. Without attempting to describe the precise domain, this allows various proofs of holomorphy of both sides of the asserted equality. To prove connectedness of whatever that domain (for fixed $i>j$) is, it suffices to observe that it is
convex: if $A$ and $B$ are symmetric complex with positive-definite real part, then the same is true of $tA+(1-t)B$ for real $t$ in the range $0\le t\le 1$.
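The one-variable case of this continuation is easy to check numerically. The following sketch (truncation and step size are arbitrary choices of mine) compares a trapezoidal approximation of $\int_{\mathbb{R}} e^{-a x^2}\,dx$ against $\sqrt{\pi/a}$ for a non-real $a$ with positive real part:

```python
import cmath

# Check: integral of exp(-a x^2) over R equals sqrt(pi/a) for Re(a) > 0.
# The value of a, the truncation L, and the grid size N are illustrative.
a = 1.0 + 0.5j
L, N = 20.0, 40000
dx = 2 * L / N
vals = [cmath.exp(-a * (-L + i * dx) ** 2) for i in range(N + 1)]
integral = dx * (sum(vals) - 0.5 * (vals[0] + vals[-1]))
error = abs(integral - cmath.sqrt(cmath.pi / a))
```

The error comes out near machine precision, which is what the Identity Principle guarantees once the real-parameter case is established by the change of variables.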
|
I am studying from a set of notes (otherwise quite excellent) which does not explicitly specify some definitions of "ordering."
I have tried to accumulate the various pieces from various discussions and proofs in the text, but am having a hard time putting them together.
In one place it says that being strictly well-ordered by $\in$ means $n\lt m$ iff $n\in m$.
In another it says a well-ordered set is a totally ordered set in which every non-empty subset has a minimal element. And one aspect of the definition of a total ordering is that the inequality satisfied by "minimal" is weak ($\leq$) rather than strict ($\lt$).
Lastly, one of the two features of the definition of an ordinal is that it is strictly well-ordered by $\in$.
My confusion comes from a proof showing that a set $\alpha$ is an ordinal because "it has a minimal element and thus is well-ordered under $\in$." Whereas the definition of an ordinal says nothing about a minimal element. Are they the same thing?
And why does it not say: strictly well-ordered by $\in$?
I know this is probably torturous to read, and I've tried to be as clear as possible. Perhaps it would be easier to suggest a reference where these aspects are delineated. I've tried Wikipedia and several well-known texts.
Thanks
|
The problem isn't fully solvable in exact form, since it requires the solution of a transcendental equation to get $k$, and with it the energy eigenvalue. This shouldn't be surprising: it's exactly the same situation as for a particle in a disk (i.e. without the internal cutout).
You are correct in all the steps that lead down to your radial eigenfunction in the form$$R(r) = A_l J_l(kr) + B_l Y_l(kr),$$and now the task is indeed to solve for the coefficients $A_l$ and $B_l$ and for the wavenumber $k$. From the first two, one is easy, and you can get it by setting $R(r)$ to zero at one of the two borders, say,$$R(R_1) = A_l J_l(kR_1) + B_l Y_l(kR_1)=0,$$or in other words$$B_l = -\frac{ J_l(kR_1)}{Y_l(kR_1)}A_l,$$so that's $B_l$ down and $A_l$ to go. Shifting notation slightly by defining $C_l=A_l/Y_l(kR_1)$, you get your full eigenfunction in the form $$R(r) = C_l \bigg[Y_l(kR_1) J_l(kr) - J_l(kR_1) Y_l(kr)\bigg],$$and your second boundary condition in the form$$R(R_2) = C_l \bigg[Y_l(kR_1) J_l(kR_2) - J_l(kR_1) Y_l(kR_2)\bigg] = 0,$$and this is where things get tricky.
The reason that things get hairy at this point is that you are now solving for the wavenumber $k$ (itself a proxy for the energy eigenvalue $E=\hbar^2 k^2/2m$) in the transcendental equation$$Y_l(kR_1) J_l(kR_2) - J_l(kR_1) Y_l(kR_2) = 0,$$and this simply does not accommodate any exact-form solutions (much in the same way e.g. the finite square well will reduce to a transcendental equation of the form $\tan(ka) = \sqrt{C+k^2b^2}$ with no exact solutions). This is analogous to the situation for the no-inner-cutout case, where you'll just be left with$$J_l(kR)=0,$$where you know that the product $kR$ needs to be a Bessel zero, but that is as far as the analytical approaches can tell you.
In the no-inner-cutout case, of course, you're in luck: the Bessel zeros are extremely common and well-studied objects, and they are included in most tabulations of special functions (e.g. Abramowitz and Stegun, and the like) and they are a standard component of most mathematical software.
For your case, with a nonzero inner cutout, you're less lucky, because the situation is less common and it is therefore less widespread in both tabulations and mathematical software. The keyword to search for is the cross-product Bessel zeros: we do know a lot about them, but that knowledge doesn't always trickle down to applications.
When I work with those objects, I tend to do so in Mathematica, which does have a BesselJZero built-in function in the core language, but for which the rest of the BesselZeros package, including the cross-product zeros, didn't quite make it into the core system. The package is still available and it does work (though with some reliability problems on the cross-product zeros!) if that's the way you want to roll.
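If Mathematica isn't available, the cross-product zeros can also be bracketed and bisected directly. Below is a rough Python sketch for the $l=0$ case only, with $J_0$ and $Y_0$ summed from their power series; the term counts and scan resolution are ad hoc choices, adequate only for moderate arguments.

```python
import math

def j0(x, terms=40):
    # J_0(x) summed from its power series: sum_k (-1)^k (x^2/4)^k / (k!)^2
    q = x * x / 4.0
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= -q / ((k + 1.0) ** 2)
    return s

def y0(x, terms=40):
    # Y_0(x) = (2/pi) [ (ln(x/2) + gamma) J_0(x)
    #                   + sum_{k>=1} (-1)^{k+1} H_k (x^2/4)^k / (k!)^2 ]
    euler_gamma = 0.5772156649015329
    q = x * x / 4.0
    s, t, h = 0.0, 1.0, 0.0
    for k in range(1, terms):
        t *= -q / (k * k)   # t is now (-1)^k (x^2/4)^k / (k!)^2
        h += 1.0 / k        # harmonic number H_k
        s -= t * h          # adds (-1)^{k+1} H_k (x^2/4)^k / (k!)^2
    return (2.0 / math.pi) * ((math.log(x / 2.0) + euler_gamma) * j0(x, terms) + s)

def cross(k, R1, R2):
    # the cross product whose zeros are the allowed wavenumbers (l = 0)
    return y0(k * R1) * j0(k * R2) - j0(k * R1) * y0(k * R2)

def first_cross_zero(R1, R2):
    # crude scan for the first sign change, then bisection
    lo, hi = 1e-6, 1.5 * math.pi / (R2 - R1)
    a, fa = lo, cross(lo, R1, R2)
    b, fb = a, fa
    for i in range(1, 401):
        b = lo + (hi - lo) * i / 400.0
        fb = cross(b, R1, R2)
        if fa * fb < 0.0:
            break
        a, fa = b, fb
    for _ in range(60):
        m = 0.5 * (a + b)
        fm = cross(m, R1, R2)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)
```

For $R_1=1$, $R_2=2$ this returns $k\approx 3.12$, consistent with the leading-order estimate $k_n\approx n\pi/(R_2-R_1)$ for the cross-product zeros.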
And, finally, that third constant: for the normalization constant $C_l$, which is fixed by requiring that$$\int_{R_1}^{R_2} |R(r)|^2 r\,\mathrm dr = 1$$or some similar requirement $-$ yeah, there's no chance of that being analytically integrable, much like the no-inner-cutout case. You just integrate it numerically when it comes down to it.
|
$\Gamma$-function (latest revision as of 11:50, 5 December 2012)
2010 Mathematics Subject Classification:
Primary: 33B15 Secondary: 33B20 33D05 [MSN][ZBL]$\newcommand{\abs}[1]{\left|#1\right|}\newcommand{\Re}{\mathop{\mathrm{Re}}}\newcommand{\Im}{\mathop{\mathrm{Im}}}\newcommand{\rd}{\,\mathrm{d}}$
A transcendental function $\Gamma(z)$ that extends the values of the factorial $z!$ to any complex number $z$ (one writes $\Gamma(z) = (z-1)!$). It was introduced in 1729 by L. Euler in a letter to Ch. Goldbach, using the infinite product $$ \Gamma(z) = \lim_{n\rightarrow\infty}\frac{n!n^z}{z(z+1)\ldots(z+n)} = \lim_{n\rightarrow\infty}\frac{n^z}{z(1+z/2)\ldots(1+z/n)}, $$ which was used by L. Euler to obtain the integral representation (Euler integral of the second kind, cf. Euler integrals) $$ \Gamma(z) = \int_0^\infty x^{z-1}e^{-x} \rd x, $$ which is valid for $\Re z > 0$. The multi-valuedness of the function $x^{z-1}$ is eliminated by the formula $x^{z-1}=e^{(z-1)\ln x}$ with a real $\ln x$. The symbol $\Gamma(z)$ and the name gamma-function were proposed in 1814 by A.M. Legendre.
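The limit product above converges slowly, but for real $z>0$ it can be evaluated directly. A minimal numerical sketch (the truncation level $n$ is an arbitrary choice), working in log space to avoid overflow:

```python
import math

def gamma_euler(z, n=200000):
    # Gamma(z) = lim_{n -> inf} n! n^z / (z (z+1) ... (z+n)),
    # evaluated in log space so that n! does not overflow (real z > 0)
    log_val = z * math.log(n) - math.log(z)
    for k in range(1, n + 1):
        log_val += math.log(k) - math.log(z + k)
    return math.exp(log_val)
```

It reproduces $\Gamma(5)=4!=24$ and $\Gamma(1/2)^2=\pi$ to roughly $O(1/n)$ accuracy.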
If $\Re z < 0$ and $-k-1 < \Re z < -k$, $k=0,1,\ldots$, the gamma-function may be represented by the Cauchy–Saalschütz integral: $$ \Gamma(z) = \int_0^\infty x^{z-1} \left( e^{-x} - \sum_{m=0}^k (-1)^m \frac{x^m}{m!} \right) \rd x. $$ In the entire plane punctured at the points $z=0,-1,\ldots $, the gamma-function satisfies a Hankel integral representation: \begin{equation} \label{eq1} \Gamma(z) = \frac{1}{e^{2\pi iz} - 1} \int_C s^{z-1}e^{-s} \rd s, \end{equation} where $s^{z-1} = e^{(z-1)\ln s}$ and $\ln s$ is the branch of the logarithm for which $0 < \arg s < 2\pi$; the contour $C$ is represented in Figure 1. It is seen from the Hankel representation that $\Gamma(z)$ is a meromorphic function. At the points $z_n = -n$, $n=0,1,\ldots$ it has simple poles with residues $(-1)^n/n!$.
Fundamental relations and properties of the gamma-function.
1) Euler's functional equation: $$ z\Gamma(z) = \Gamma(z+1), $$ or $$ \Gamma(z) = \frac{\Gamma(z+n+1)}{z(z+1)\ldots(z+n)}; $$ $\Gamma(1)=1$, $\Gamma(n+1) = n!$ if $n$ is an integer; it is assumed that $0! = \Gamma(1) = 1$.
2) Euler's completion formula: $$ \Gamma(z)\Gamma(1-z) = \frac{\pi}{\sin \pi z}. $$ In particular, $\Gamma(1/2)=\sqrt{\pi}$; $$ \Gamma\left(n+\frac{1}{2}\right) = \frac{1\cdot 3\cdots(2n-1)}{2^n}\sqrt{\pi} $$ if $n>0$ is an integer; $$ \abs{\Gamma\left(\frac{1}{2} + iy\right)}^2 = \frac{\pi}{\cosh \pi y}, $$ where $y$ is real.
3) Gauss' multiplication formula: $$ \prod_{k=0}^{m-1} \Gamma\left( z + \frac{k}{m} \right) = (2\pi)^{(m-1)/2}m^{(1/2)-mz}\Gamma(mz), \quad m = 2,3,\ldots $$ If $m=2$, this is the Legendre duplication formula.
4) If $\Re z \geq \delta > 0$ or $\abs{\Im z} \geq \delta > 0$, then $\ln\Gamma(z)$ can be asymptotically expanded into the Stirling series: $$ \ln\Gamma(z) = \left(z-\frac{1}{2}\right)\ln z - z + \frac{1}{2}\ln 2\pi + \sum_{n=1}^m \frac{B_{2n}}{2n(2n-1)z^{2n-1}} + O\bigl(z^{-2m-1}\bigr), \quad m = 1,2,\ldots, $$ where $B_{2n}$ are the Bernoulli numbers. It implies the equality $$ \Gamma(z) = \sqrt{2\pi}\, z^{z-1/2} e^{-z} \left( 1 + \frac{1}{12}z^{-1} + \frac{1}{288}z^{-2} - \frac{139}{51840}z^{-3} - \frac{571}{2488320}z^{-4} + O\bigl(z^{-5}\bigr) \right). $$ In particular, $$ \Gamma(1+x) = \sqrt{2\pi}\, x^{x+1/2} e^{-x + \theta/(12x)}, \quad 0 < \theta < 1. $$ More accurate is Sonin's formula [So]: $$ \Gamma(1+x) = \sqrt{2\pi}\, x^{x+1/2} e^{-x + 1/(12(x+\theta))}, \quad 0 < \theta < 1/2. $$
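A few terms of the Stirling series already give high accuracy for moderate real $z$. A sketch using the first three Bernoulli terms, checked against the logarithm of the gamma-function:

```python
import math

def ln_gamma_stirling(z):
    # ln Gamma(z) ~ (z - 1/2) ln z - z + (1/2) ln(2 pi)
    #               + B_2/(2*1*z) + B_4/(4*3*z^3) + B_6/(6*5*z^5)
    # with B_2 = 1/6, B_4 = -1/30, B_6 = 1/42
    s = (z - 0.5) * math.log(z) - z + 0.5 * math.log(2.0 * math.pi)
    s += 1.0 / (12.0 * z)
    s -= 1.0 / (360.0 * z ** 3)
    s += 1.0 / (1260.0 * z ** 5)
    return s
```

For $z=10$ the truncation error is of the order of the next term, $B_8/(8\cdot 7\, z^7)\approx 6\cdot 10^{-11}$.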
5) In the real domain, $\Gamma(x) > 0$ for $x > 0$ and it assumes the sign $(-1)^{k+1}$ on the segments $-k-1 < x < -k$, $k = 0,1,\ldots$ (Figure 2).
For all real $x$ the inequality $$ \Gamma\Gamma^{\prime\prime} > \bigl(\Gamma^\prime\bigr)^2 \geq 0 $$ is valid, i.e. all branches of both $\abs{\Gamma(x)}$ and $\ln\abs{\Gamma(x)}$ are convex functions. The property of logarithmic convexity defines the gamma-function among all solutions of the functional equation $$ \Gamma(1+x) = x\Gamma(x) $$ up to a constant factor (see also the Bohr–Mollerup theorem).
For positive values of $x$ the gamma-function has a unique minimum at $x=1.4616321\ldots$ equal to $0.885603\ldots$. The local minima of the function $\abs{\Gamma(x)}$ form a sequence tending to zero as $x\rightarrow -\infty$.
6) In the complex domain, if $\Re z > 0$, the gamma-function rapidly decreases as $\abs{\Im z} \rightarrow \infty$, $$ \lim_{\abs{\Im z} \rightarrow \infty} \abs{\Gamma(z)}\abs{\Im z}^{(1/2)-\Re z}e^{\pi\abs{\Im z}/2} = \sqrt{2\pi}. $$
7) The function $1/\Gamma(z)$ (Figure 3) is an entire function of order one and of maximal type; asymptotically, as $r \rightarrow \infty$, $$ \ln M(r) \sim r \ln r, $$ where $$ M(r) = \max_{\abs{z} = r} \frac{1}{\abs{\Gamma(z)}}. $$ It can be represented by the infinite Weierstrass product: $$ \frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^\infty \left(\left( 1 + \frac{z}{n} \right) e^{-z/n} \right), $$ which converges absolutely and uniformly on any compact set in the complex plane ($\gamma$ is the Euler constant). A Hankel integral representation is valid: \begin{equation} \label{eq2} \frac{1}{\Gamma(z)} = \frac{1}{2\pi i} \int_{C'} e^s s^{-z} \rd s, \end{equation} where the contour $C'$ is shown in Figure 4.
G.F. Voronoi [Vo] obtained integral representations for powers of the gamma-function.
In applications, the so-called poly-gamma functions — $k$th derivatives of $\ln\Gamma(z)$ — are of importance. The function (Gauss' $\psi$-function) $$ \psi(z) = \frac{\mathrm{d}}{\mathrm{d}z}\ln\Gamma(z) = \frac{\Gamma'(z)}{\Gamma(z)} = -\gamma + \sum_{n=0}^\infty \frac{z-1}{(n+1)(z+n)} = -\gamma + \int_0^1 \frac{1 - (1-t)^{z-1}}{t} \rd t $$ is meromorphic, has simple poles at the points $z=0,-1,\ldots$ and satisfies the functional equation $$ \psi(z+1) - \psi(z) = \frac{1}{z}. $$ The representation of $\psi(z)$ for $\abs{z}<1$ yields the formula $$ \ln\Gamma(1+z) = -\gamma z + \sum_{k=2}^\infty \frac{(-1)^k S_k}{k} z^k, $$ where $$ S_k = \sum_{n=1}^\infty n^{-k}. $$ This formula may be used to compute $\Gamma(z)$ in a neighbourhood of the point $z=1$.
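The series for $\psi(z)$ converges like $1/N$, so truncating it gives a usable (if slow) evaluation for small real arguments. A sketch:

```python
def psi(z, terms=1000000):
    # Gauss' psi-function: psi(z) = -gamma + sum_{n>=0} (z - 1)/((n + 1)(z + n));
    # the tail of the truncated sum is O(1/terms)
    euler_gamma = 0.5772156649015329
    s = 0.0
    for n in range(terms):
        s += (z - 1.0) / ((n + 1.0) * (z + n))
    return -euler_gamma + s
```

The functional equation $\psi(z+1)-\psi(z)=1/z$ provides an easy check.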
For other poly-gamma functions see [BaEr]. The incomplete gamma-function is defined by the equation $$ I(x,y) = \int_0^y e^{-t}t^{x-1} \rd t. $$ The functions $\Gamma(z)$ and $\psi(z)$ are transcendental functions which do not satisfy any linear differential equation with rational coefficients (Hölder's theorem).
The exceptional importance of the gamma-function in mathematical analysis is due to the fact that it can be used to express a large number of definite integrals, infinite products and sums of series (for example, the beta-function). In addition, it is widely used in the theory of special functions (the hypergeometric function, of which the gamma-function is a limit case, cylinder functions, etc.), in analytic number theory, etc.
References
[An] A. Angot, "Compléments de mathématiques. A l'usage des ingénieurs de l'electrotechnique et des télécommunications", C.N.E.T. (1957)
[BaEr] H. Bateman (ed.) A. Erdélyi (ed.), "Higher transcendental functions, 1. The gamma function. The hypergeometric functions. Legendre functions", McGraw-Hill (1953)
[Bo] N. Bourbaki, "Elements of mathematics. Functions of a real variable", Addison-Wesley (1976) (Translated from French)
[JaEm] E. Jahnke, F. Emde, "Tables of functions with formulae and curves", Dover, reprint (1945) (Translated from German)
[Ni] N. Nielsen, "Handbuch der Theorie der Gammafunktion", Chelsea, reprint (1965)
[So] N.Ya. Sonin, "Studies on cylinder functions and special polynomials", Moscow (1954) (In Russian)
[Vo] G.F. Voronoi, "Studies of primitive parallelotopes", Collected works, 2, Kiev (1952) pp. 239–368 (In Russian)
[WhWa] E.T. Whittaker, G.N. Watson, "A course of modern analysis", Cambridge Univ. Press (1952)

Comments
The $q$-analogue of the gamma-function is given by $$ \Gamma_q(z) = (1-q)^{1-z} \prod_{k=0}^\infty \frac{1-q^{k+1}}{1-q^{k+z}}, \quad z \neq 0,-1,-2,\ldots;\quad 0<q<1, $$ cf. [As]. Its origin goes back to E. Heine (1847) and D. Jackson (1904).
References
[Ar] E. Artin, "The gamma function", Holt, Rinehart & Winston (1964)
[As] R. Askey, "The $q$-Gamma and $q$-Beta functions", Appl. Anal., 8 (1978) pp. 125–141

How to Cite This Entry:
Gamma-function. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Gamma-function&oldid=29082
|
Update: Rupert has withdrawn his claim. See the final bullet point below.
Rupert McCallum has posted a new paper to the mathematics arXiv
Rupert McCallum, The choiceless cardinals are inconsistent, mathematics arXiv 2017: 1712.09678.
He is claiming to establish the Kunen inconsistency in ZF, without the axiom of choice, which is a long-standing open question. In particular, this would refute the Reinhardt cardinals in ZF and all the stronger ZF large cardinals that have been studied.
If correct, this result will constitute a central advance in large cardinal set theory.
I am making this post to provide a place to discuss the proof and any questions that people might have about it. Please feel free to post comments with questions or answers to other questions that have been posted. I will plan to periodically summarize things in the main body of this post as the discussion proceeds.
My first question concerns lemma 0.4, where he claims that $j’\upharpoonright V_{\lambda+2}^N$ is a definable class in $N$. He needs this to get the embedding into $N$, but I don’t see why the embedding should be definable here. I wrote to Rupert about this concern, and he replied that it may be an issue, and that he intends to post a new version of his paper, where he may retreat to the weaker claim refuting only the super-Reinhardt cardinals.

- The updated draft is now available. Follow the link above. It will become also available on the arXiv later this week.
- The second January 2 draft has a new section claiming again the original refutation of Reinhardt cardinals.
- New draft January 3. Rupert has reportedly been in communication with Matteo Viale about his result.
- Rupert has announced (Jan 3) that he is going to take a week or so to produce a careful rewrite.
- He has made available his new draft, January 7. It will also be posted on the arXiv.
- January 8: In light of the issues identified on this blog, especially the issue mentioned by Gabe, Rupert has sent me an email stating (and asking me to post here) that he is planning to think it through over the next couple of weeks and will then make some kind of statement about whether he thinks he can save the argument. For the moment, therefore, it seems that we should consider the proof to be on hold.
- January 24: After consideration, Rupert has withdrawn the claim, sending me the following message:
“Gabriel has very kindly given me extensive feedback on many different drafts. I attach the latest version which he commented on [January 24 draft above]. He has identified the flaw, namely that on page 3 I claim that $\exists n \forall Y \in W_n \psi(Y)$ if and only if $\forall Y \in U \psi(Y)$. This claim is not justified, and this means that there is no way that is apparent to me to rescue the proof of Lemma 1.2. Gabriel has directed me to a paper of Laver which does indeed show that my mapping e is an elementary embedding but which does not give the stronger claim that I want.
…So, I withdraw my claim. It is possible that this method of proof can work somehow, but some new insight is needed to make it work.”
-Rupert McCallum, January 24, 2018
|
An object orbiting the earth has total mechanical energy equal to \begin{align*} E^{mech} = \frac{1}{2} m v^2 - \frac{GMm}{r} \end{align*} with $M$ the mass of the earth and $r$ the distance from its center. My course notes say we have to set $E^{mech} = 0$ to find the escape velocity, which then gives \begin{align*} v = \sqrt{\frac{2GM}{r}} \end{align*} But I don't understand why we should do this. In general we have $E = K_1 + U_1 = K_2 + U_2$. Now I see that since $U(r) \rightarrow 0$ as $r \rightarrow \infty$, the term $U_2$ becomes zero. But why should $K_2$ ever be set to zero? That means the object would come to rest somewhere, which we cannot know.
The easiest way to calculate the escape velocity is to neglect Earth's rotation and assume the object takes off in a radial direction. Then, indeed, you start from
$$E = K_1 + U_1 = K_2 + U_2$$
where $K_1=\frac{mv^2}{2}$ and $U_1=- \frac{GMm}{r}$.
Since the range of gravitational forces is infinite, you say (theoretically, not practically) that an object has escaped Earth's gravitation when it is infinitely far away, so $U_2 = 0$. Now, if the object's velocity reached zero before it got infinitely far away, then (neglecting the rest of the universe) it would fall back to Earth and hence would not have escaped. So it should still have a velocity when it is infinitely far away. This velocity may be as small as you want, so the borderline case between falling back to Earth and escaping is velocity $= 0$ at infinity. So take $v_2 = 0$ and you find the minimal launch speed such that the object's velocity doesn't become zero before reaching infinity.
When a rocket is fired from Earth with a sudden impulse, its total energy is given by: $$E_k \text{ (kinetic energy)} + E_p \text{ (potential energy)}= \frac{1}{2}mv^2 - \frac{GMm}{r} = constant$$ The potential energy here is negative because the reference point at which the potential energy is zero is chosen at infinite separation from Earth. Hence, after the rocket is fired (with no propulsion after the initial impulse) it is bound if $$E_{total} < 0$$ and unbound if $$E_{total} \geq 0$$ In your case $E^{mech}$ is $E_{total}$. Setting $E_{total} = 0$ you can calculate what the escape velocity must be.
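A quick numerical check of that last step (the constants below are rounded textbook values):

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_earth = 5.972e24   # mass of Earth, kg
R_earth = 6.371e6    # mean radius of Earth, m

def escape_velocity(M, r):
    # set E_total = (1/2) m v^2 - G M m / r = 0 and solve for v
    return math.sqrt(2.0 * G * M / r)

v = escape_velocity(M_earth, R_earth)   # about 1.12e4 m/s, i.e. ~11.2 km/s
```

Note that the launch radius $r$ matters: starting from a higher altitude, the required speed is smaller.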
|
I'm interested in the pure gauge (no matter fields) case on Minkowski spacetime with simple gauge groups. It would be nice if someone could find a review article discussing all such solutions.
EDIT: I think these are relevant to the physics of corresponding QFTs in the high energy / small scale regime. This is because the path integral for a pure gauge Yang-Mills theory is of the form
$$\int \exp\left(\frac{iS[A]}{ \hbar}\right) \, \mathcal{D}A$$
At high energies we have the renormalization group behavior $g \to 0$ (asymptotic freedom), which can be equivalently described by fixing $g$ and letting $\hbar \to 0$.
EDIT: For the purpose of this question, an "exact" solution is a solution in closed form modulo single variable functions defined by certain ODEs and initial / boundary conditions.
|
There are two ways to answer your question. One is direct and has less depth to it, the other is more indirect and has a lot of depth to it. I will begin with the indirect one because it has wide ranging applications beyond finance or economics and because it should serve as a warning to journal editors and so forth. Also, it covers an area of statistics that everyone but statisticians have forgotten about.
Although the idea of a statistic is rather old, the field of statistics is rather new. It is probably the newest, or nearly the newest, of all fields. Aeronautics is older. Genetics is older. Formalizing the field opened up a ton of practical questions that took time to solve, and this has to do with how the field defined the word "statistic." A statistic is any function of the data. This means that almost every statistic is useless, as there are uncountably many functions.
This led to a process to decide which statistics to keep and which to discard, and that process produced unexpected results. If you are finding poor estimators as a result of your theory, then there is a good chance you are doing it wrong. In the defense of finance, it has been struggling with this since Mandelbrot published the first empirical refutation of mean-variance finance. The field tried to solve it in the 1960s, but a couple of things got in the way. The first was the use of punch-card technology: even if the work by Eugene Fama or Mandelbrot was correct, it would have posed computational problems that would take decades to solve. The second was that there was no reason for them to be correct; there was no theory behind the observations.
The unexpected result, in searching for a statistic, was that all Bayesian statistics were admissible. This was surprising because it was proved with Frequentist axioms. It was also found that all other statistics were valid only to the extent that they either mapped to a Bayesian measure in a particular case or at the limit. This provides a test, however: if you can stochastically dominate a measure, then you drop that measure. If you are hunting for accurate measurements that work, then something deeper is going on and you are missing it.
The more direct answer is that, for the Markowitz model to be correct, the distribution of returns has to have certain properties. The first is that there needs to be a mean in order to have an expectation in the first place. Most standard distributions have a mean, but not all do. The Cauchy distribution and, in general, the Paretian distributions of Mandelbrot's article do not. The Cauchy density is $$\frac{1}{\pi}\frac{\sigma}{\sigma^2+(x-\mu)^2}.$$
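The absence of a mean for the Cauchy case is easy to see numerically: the right-tail contribution to the would-be expectation grows without bound as the truncation point increases. A sketch (standard Cauchy, midpoint-rule integration; the step count is an arbitrary choice):

```python
import math

def cauchy_pdf(x, mu=0.0, sigma=1.0):
    # density (1/pi) * sigma / (sigma^2 + (x - mu)^2)
    return sigma / (math.pi * (sigma * sigma + (x - mu) ** 2))

def half_mean_integral(T, steps=200000):
    # midpoint-rule value of the integral of x f(x) on [0, T] for the
    # standard Cauchy; the exact value is ln(1 + T^2) / (2 pi),
    # which diverges as T grows
    h = T / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x * cauchy_pdf(x) * h
    return total
```

Analytically, $\int_0^T x f(x)\,dx = \ln(1+T^2)/(2\pi)$, so the truncated "mean" climbs like $\ln T$ instead of settling down.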
The second is that if a mean exists, a covariance matrix needs to exist. Not all distributions with a variance have a covariance in their multivariate form. The hyperbolic secant distribution is an example of that; its density is $$\frac{1}{2\sigma}\operatorname{sech}\left(\frac{\pi(x-\mu)}{2\sigma}\right).$$
There have been attempts to use both in empirical finance. If either of those distributions is present in the likelihood function, then mean-variance finance is indefensible. The former is problematic because you cannot form an expectation of your returns in the first place; they are excluded by the laws of general summation. The second is a bit more subtle: if it is present, none of the assets can be independent, but none of them can covary either. They can comove, but not covary. It creates a very ugly issue.
There is a paper that derives the distribution of returns at https://ssrn.com/abstract=2828744. It shows that there are many distributions that can be present. The logic of the paper is that returns are not data, rather, prices are data. Returns are transformations of data. In particular, they are the ratio of jointly distributed variables, a present value and a future value. The distribution depends upon the rules in use to create the prices. As a result, stocks have different returns than antiques because the auction process is different.
As it happens, all distributions for equity securities include some mixture of a transformation of the Cauchy distribution. Because the distributions involved lack a sufficient statistic, any point estimator has to lose information, so no non-Bayesian solution exists for projective problems (such as choosing an allocation), and non-Bayesian estimators should be avoided for inferential questions if possible. You cannot avoid them if your true hypothesis is a sharp null hypothesis, as there is no good Bayesian solution for sharp null hypotheses.
A population test of the paper can be found at https://ssrn.com/abstract=2653151
There are also papers to replace the method of pricing options and the rules of econometrics. Papers to create optimal portfolios and to extend stochastic calculus are in progress. The distributions paper will be presented at the Southwestern Finance Association Conference in March.
Some things will have to change. You cannot make an assumption of i.i.d. variables, for example. The entire discussion of the Solow convergence will have to change in economics, and with it the core of the whole discussion of capital: physical, financial and human.
A lot of focus will end up on the scale parameter. In the Cauchy distribution, there is no covariance matrix. If you had a one-asset portfolio, denoted $a$, then it may have a scale parameter $\gamma_a$. If you switch to a two-asset portfolio you do not get two scale parameters, let alone a covariance-style matrix. Instead you get a new scale parameter $\gamma_{ab}$. If you got fancy and used a vector process, all the vectors would jointly share a scale parameter $\gamma_v$. Taking the logarithm brings you to the hyperbolic secant distribution, and so no gain is had. It also has no covariance matrix, but OLS presumes one, so OLS would be measuring something that does not exist.
The headaches are just starting.
|
Numerical simulations of physical processes generally involve solving some differential equation on a computational domain too complicated to solve analytically. Solving simple systems by “hand” is quite possible in one dimension, but things get more complicated as you go to higher dimensions. If the domain has a nice shape – that is, if it’s rectangular or cylindrical in nature – you may be able to solve it analytically using techniques such as separation of variables. But as you start introducing irregularities in the boundary or in the forcing function, things start getting hairy really soon. In that case, going to a numerical solution is the only viable option.
The Finite Difference Method (FDM) is a way to solve differential equations numerically. It is not the only option: alternatives include the finite volume and finite element methods, as well as various mesh-free approaches. However, FDM remains very popular, and its popularity stems from the fact that it is very simple to both derive and implement numerically.
Figure 1. Problem definition (left) and domain discretization (right)
As an example, let’s consider the Poisson equation, \(\nabla^2\phi=-\dfrac{\rho}{\epsilon_0}\). This equation governs the variation of the electric potential given some charge density distribution. It is one of the most fundamental equations in the field of electrostatic plasma simulations. We want to solve this equation numerically on the rectangular domain shown in Figure 1, subject to the boundary conditions listed in the figure. The domain contains two regions of fixed potential along the upper and bottom edges – these could represent charged electrodes. The remaining edges have zero electric field, except for the left edge, on which the electric field is specified.
We start by discretizing the domain – in other words, overlaying a computational mesh over the domain. In the Finite Difference Method, the solution is known only on the nodes of the computational mesh. As such, it is important to choose a mesh spacing fine enough to resolve the details of interest. In addition, cell edges must coincide with the axes of the coordinate system being used. This is one of the main disadvantages of FDM: complex geometries cannot be directly resolved by fitting the mesh to the object boundary.

Finite Difference representation of derivatives
We are looking for the solution to \( \nabla^2\phi=\dfrac{\partial^2\phi}{\partial x^2}+\dfrac{\partial^2\phi}{\partial y^2}=-\dfrac{\rho}{\epsilon_0}\). From Taylor’s series, we know that the value of a function some distance \(\Delta x\) from a known point \(x\) can be estimated from derivatives as $$ f(x+\Delta x)=f(x)+\dfrac{f'(x)}{1!}\Delta x + \dfrac{f''(x)}{2!}(\Delta x)^2 + \dfrac{f'''(x)}{3!}(\Delta x)^3+ O(\Delta x^4) $$ In other words, the second derivative at \(x\) is given by $$f''(x) = \dfrac{2}{(\Delta x)^2}\left[ f(x+\Delta x) - f(x) - f'(x)\Delta x - \dfrac{f'''(x)}{6}(\Delta x)^3 + O(\Delta x^4) \right]$$ The expression above is known as the forward difference: we are estimating the derivative at a point using data in front (in the positive direction) of that point. We can obtain a similar expression by going backward, $$f(x-\Delta x)=f(x)-\dfrac{f'(x)}{1!}\Delta x + \dfrac{f''(x)}{2!}(\Delta x)^2 - \dfrac{f'''(x)}{3!}(\Delta x)^3+ O(\Delta x^4) $$ or $$f''(x) = \dfrac{2}{(\Delta x)^2}\left[ f(x-\Delta x) - f(x) + f'(x)\Delta x + \dfrac{f'''(x)}{6}(\Delta x)^3 + O(\Delta x^4) \right]$$
We now have two expressions for the second derivative at point \(x\). Instead of using one or the other, it’s best practice to use their average. By adding the two expressions and dividing by two, we obtain the central difference representation of the second derivative: $$f''(x) = \dfrac{f(x-\Delta x) - 2f(x) + f(x+\Delta x)}{(\Delta x)^2} + O(\Delta x^2)$$ The first and third derivatives conveniently cancel out, and the resulting expression is second-order accurate. Numerical Implementation
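Before discretizing the full equation, it is worth checking the second-order claim numerically: halving $\Delta x$ should reduce the central-difference error by roughly a factor of four. A quick sketch (the test function, point, and step sizes are arbitrary choices):

```python
import math

def second_deriv_central(f, x, h):
    # central difference: f''(x) ~ (f(x - h) - 2 f(x) + f(x + h)) / h^2
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / (h * h)

x0 = 1.0
exact = -math.sin(x0)                     # (sin)'' = -sin
e1 = abs(second_deriv_central(math.sin, x0, 1e-2) - exact)
e2 = abs(second_deriv_central(math.sin, x0, 5e-3) - exact)
ratio = e1 / e2                           # should come out close to 4
```

With these step sizes, truncation error dominates round-off, so the measured ratio lands very close to the theoretical value of 4.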
On the computational domain, the potential is no longer a continuous function; instead, it is given by a collection of values at node indices as phi[i][j]. Assuming uniform spacing between the grid nodes in each direction, the second derivative in the x direction is written as d2f_dx2 = (phi[i-1][j]-2*phi[i][j]+phi[i+1][j])/(dx*dx). Adding the expression for the y direction, we obtain the discretized version of the Poisson equation:
(phi[i-1][j]-2*phi[i][j]+phi[i+1][j])/(dx*dx) + (phi[i][j-1]-2*phi[i][j]+phi[i][j+1])/(dy*dy) = -rho[i][j]/eps0
That’s it, quite simple. The expression above gives us an expression that can be used to solve for potential everywhere inside the domain. But to complete the problem, we need to include the boundaries.
Boundary Conditions
Our problem has two types of boundary conditions: fixed potential along portions of top and bottom boundary, and fixed derivative (electric field) on the remaining nodes.
The first case is known as the Dirichlet boundary condition. It is simple to implement. On each node along the boundary we have
phi[i][j]=g[i][j]
The second condition is known as the Neumann boundary condition. On the left face, we have \(\nabla \phi \cdot (-\hat{\imath})=-E_0\), or \(\dfrac{\partial \phi}{\partial x}=E_0\). From the Taylor expansion, the first derivative is given by $$f'(x)=\dfrac{f(x+\Delta x)-f(x)}{\Delta x} + O(\Delta x)$$ This is the forward difference for the first derivative. A central difference could be derived using a process analogous to the one above. However, since we do not have any data in the backward direction along the x=0 face, we are forced to use this less accurate representation. This equation is implemented numerically as
(phi[i+1][j]-phi[i][j])/dx = E0
Numerical implementation
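To tie the pieces together, here is a minimal Gauss-Seidel sweep for the discretized equation, written as a Python sketch (the grid size, electrode potentials, and iteration count are arbitrary demonstration values, not the setup of the linked solver). For simplicity it applies the Dirichlet electrodes along the entire top and bottom edges and zero-field Neumann conditions on the left and right edges, so the converged potential is a linear ramp between the electrodes:

```python
# Gauss-Seidel solution of the discretized Poisson equation on a uniform grid.
# Dirichlet electrodes on the bottom (0 V) and top (100 V) edges,
# zero-field Neumann conditions on the left and right edges, no space charge.
eps0 = 8.854e-12
nx, ny = 20, 20
dx = dy = 0.01
rho = [[0.0] * ny for _ in range(nx)]
phi = [[0.0] * ny for _ in range(nx)]
for i in range(nx):
    phi[i][0] = 0.0          # bottom electrode
    phi[i][ny - 1] = 100.0   # top electrode

inv = 1.0 / (2.0 / (dx * dx) + 2.0 / (dy * dy))
for it in range(3000):
    # Neumann boundaries: zero normal derivative, copy the neighbouring column
    for j in range(1, ny - 1):
        phi[0][j] = phi[1][j]
        phi[nx - 1][j] = phi[nx - 2][j]
    # Gauss-Seidel sweep over the interior nodes (updates in place)
    for i in range(1, nx - 1):
        for j in range(1, ny - 1):
            phi[i][j] = inv * ((phi[i - 1][j] + phi[i + 1][j]) / (dx * dx)
                               + (phi[i][j - 1] + phi[i][j + 1]) / (dy * dy)
                               + rho[i][j] / eps0)
```

With no space charge and these boundaries, the converged node values are simply $100\,j/(n_y-1)$, which makes the sweep easy to validate.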
This simple example is implemented in a simple Java Finite Difference Solver. Give the program a try (I used Eclipse to write it) and let me know if you have any questions.
Figure 2. Solution to the sample problem using the Finite Difference Method.
For an alternative, see The Finite Volume Method.
Don’t forget to link to this article if you find it useful.
|
I am trying to understand this paper: http://link.aps.org/doi/10.1103/PhysRevLett.99.236809
(Here is an arXiv version: http://arxiv.org/abs/0709.1274)
In the introduction, they mention certain symmetry arguments (the two paragraphs in the second column of the first page). Unfortunately, I am ill-equipped to understand these symmetry arguments. Would it be possible for an expert to walk me through these two paragraphs?
I am sorry if this is a poorly worded question (this is my first post here).
--
As per the comments, I am copying the relevant paragraphs here:
``Before starting specific calculations, it will be instructive to make some general symmetry analysis. A valley contrasting magnetic moment has the relation $ \mathfrak{m}_v=\chi \tau_z $, where $\tau_z = \pm 1$ labels the two valleys and $\chi$ is a coefficient characterizing the material. Under time reversal, $\mathfrak{m}_v$ changes sign, and so does $\tau_z$ (the two valleys switch when the crystal momentum changes sign). Therefore, $\chi$ can be non-zero even if the system is non-magnetic. Under spatial inversion, only $\tau_z$ changes sign. Therefore $\mathfrak{m}_v$ can be nonzero only in systems with broken inversion symmetry.
Inversion symmetry breaking simultaneously allows a valley Hall effect, with $\mathbf j^v = \sigma^v_H \hat{\mathbf z} \times \mathbf E$, where $\sigma^v_H$ is the transport coefficient (valley Hall conductivity), and the valley current $\mathbf j^v$ is defined as the average of the valley index times the velocity operator. Under time reversal, both the valley current and electric field are invariant. Under spatial inversion, the valley current is still invariant but the electric field changes sign. Therefore, the valley Hall conductivity can be non-zero when the inversion symmetry is broken, even if the time reversal symmetry remains.''
|
That depends on the groups you are working in.
Using a $\Sigma$-protocol
If you have a group $G$ of prime order $q$ where the DDH is hard and you have a DH tuple $(g,g^u,g^v,g^w)$ with $w\equiv uv \pmod q$, then if your prover knows one of these values, say $u$, then we can write the DH tuple as $(g,g^u,h,h^u)$ and he is able to convince a verifier that this is a DH tuple by means of a standard $\Sigma$-protocol, see here (Section 5) for a concrete protocol. This proof can be made non-interactive by using the Fiat Shamir heuristic.
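To make the $\Sigma$-protocol concrete, here is a toy non-interactive Chaum-Pedersen-style proof that $(g, h, g^u, h^u)$ is a DH tuple, with the challenge derived via Fiat-Shamir from SHA-256. The parameters $p=607$, $q=101$ and the bases below are deliberately tiny illustration values with no security whatsoever:

```python
import hashlib
import random

# Toy subgroup parameters: q divides p - 1 and g generates the order-q subgroup.
# These sizes are for illustration only and offer no security at all.
p, q = 607, 101
g = 64                 # 2^6 mod 607, an element of order 101
h = pow(g, 17, p)      # second base h = g^v (v = 17 here)

def fiat_shamir(*vals):
    # non-interactive challenge: hash the transcript, reduce mod q
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove_dh(u):
    # Chaum-Pedersen: prove log_g A = log_h B (= u) without revealing u
    A, B = pow(g, u, p), pow(h, u, p)
    r = random.randrange(q)
    t1, t2 = pow(g, r, p), pow(h, r, p)
    c = fiat_shamir(g, h, A, B, t1, t2)
    s = (r + c * u) % q
    return A, B, (t1, t2, s)

def verify_dh(A, B, proof):
    t1, t2, s = proof
    c = fiat_shamir(g, h, A, B, t1, t2)
    return (pow(g, s, p) == (t1 * pow(A, c, p)) % p
            and pow(h, s, p) == (t2 * pow(B, c, p)) % p)
```

A real instantiation would of course use a standardized group of at least 256-bit order and a domain-separated hash.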
Gap DH Groups
If you are working in a gap Diffie Hellman group setting, i.e., a setting where the DDH is easy but the CDH is still hard, that is easy without interactive proofs. I use additive notation for groups $G_1$ and $G_2$ below as we are talking about elliptic curve groups.
For instance, take a symmetric cryptographic pairing $e:G_1\times G_1 \rightarrow G_T$ where $G_1$ is an elliptic curve group of prime order $p$ and $G_T$ is a multiplicative subgroup of a finite field, also of order $p$. Then $G_1$ is a gap DH group.
Then given $(P, uP, vP, wP)\in G_1^4$ you can check if this is a DH tuple by checking if $$e(uP,vP)\stackrel{?}{=}e(wP,P).$$
Same works for an asymmetric type-2 pairing $e:G_1\times G_2\rightarrow G_T$, where you have an efficiently computable homomorphism $\psi: G_2 \rightarrow G_1$. Here $G_2$ is a gap DH group and when given a DH tuple $(P', uP', vP',wP')\in G_2^4$ you can efficiently check it by checking $$e(\psi(uP'),vP')\stackrel{?}{=}e(\psi(P'),wP').$$
Side note (which at least I find interesting)
An interesting side note is that the Chaum-Pedersen signature scheme uses the first approach in DDH-hard groups, where the proof is made non-interactive by using the Fiat-Shamir heuristic. Here, the DH tuple to check is $(g,g^x,m,m^x)$, where $\sigma=m^x$ is a signature for $m$ w.r.t. public key $h=g^x$, and they require a non-interactive proof $\pi$ that $\log_g h=\log_m \sigma$, i.e., a proof that the signer actually knows the secret key $x$.
The BLS signature scheme essentially does the same in gap DH groups, where they do not require this non-interactive proof $\pi$ but can use the pairing $e$ for checking the DH tuple. However, they have to hash the message to a group element using a secure hash function $H$, as the direct application of the Chaum-Pedersen approach would make the signature scheme insecure (otherwise one could forge signatures without the secret key). Here the DH tuple to check is $(P,xP,H(m),xH(m))$, where $xH(m)$ is a signature for $m$ w.r.t. public key $xP$.
|
I just reviewed the following problem:
How to find the limits $\lim\limits_{h\rightarrow 0} \frac{e^{-h}}{-h}$ and $\lim\limits_{h\rightarrow 0} \frac{|\cos h-1|}{h}$?
However, I still do not know how to solve the following: $$\lim_{x\rightarrow 0^+} \frac{e^{-a/x}}{x}, \ \ a>0$$
By L'Hôpital's rule:
$$\lim_{x\rightarrow 0^+} \frac{e^{-a/x} \frac{a}{x^2}}{1}= \lim_{x\rightarrow 0^+} \frac{a\,e^{-a/x}}{x^2}$$
it seems that the degree of the denominator only increases; however, I am still confused about the limit of this problem. Please advise, thanks!
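One way to see the answer: the substitution $t = 1/x$ turns the limit into $\lim_{t\to\infty} t\,e^{-at} = 0$, since the exponential dominates any polynomial. A quick numerical sanity check (a toy Python snippet, taking $a=1$):

```python
import math

def f(x, a=1.0):
    # f(x) = exp(-a/x) / x, for x > 0
    return math.exp(-a / x) / x

# The exponential decay of exp(-a/x) beats the 1/x blow-up as x -> 0+:
for x in (0.5, 0.1, 0.05, 0.01):
    print(x, f(x))
```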
|
The moments of a continuous distribution, and functions of them like the kurtosis, tell you extremely little about the graph of its density function.
Consider, for instance, the following graphs.
Each of these is the graph of a non-negative function integrating to $1$: they are all PDFs. Moreover, they all have exactly the same moments: all infinitely many of them. Thus they share a common kurtosis (which happens to equal $-3+3 e^2+2 e^3+e^4$).
The formulas for these functions are
$$f_{k,s}(x) = \frac{1}{\sqrt{2\pi}x} \exp\left(-\frac{1}{2}(\log(x))^2\right)\left(1 + s\sin(2 k \pi \log(x))\right)$$
for $x \gt 0,$ $-1\le s\le 1,$ and $k\in\mathbb{Z}.$
The figure displays values of $s$ at the left and values of $k$ across the top. The left-hand column shows the PDF for the standard lognormal distribution.
Exercise 6.21 in
Kendall's Advanced Theory of Statistics (Stuart & Ord, 5th edition) asks the reader to show that these all have the same moments.
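A numerical version of that exercise can be sketched in Python: integrating in log-space, every integer moment of $f_{k,s}$ comes out equal to the lognormal value $e^{n^2/2}$, whatever $k$ and $s$ (the integration bounds and step count below are ad-hoc choices, ample for small $n$):

```python
import math

def moment(n, k, s, lo=-12.0, hi=14.0, steps=160001):
    """E[X^n] under f_{k,s}, computed in log-space (y = log x), where
    E[X^n] = integral of exp(n*y) * phi(y) * (1 + s*sin(2*k*pi*y)) dy."""
    h = (hi - lo) / (steps - 1)
    total = 0.0
    for i in range(steps):
        y = lo + i * h
        phi = math.exp(-0.5 * y * y) / math.sqrt(2.0 * math.pi)
        w = 0.5 if i in (0, steps - 1) else 1.0  # trapezoid end weights
        total += w * math.exp(n * y) * phi * (1.0 + s * math.sin(2.0 * k * math.pi * y))
    return total * h

# Each integer moment matches the lognormal value exp(n^2 / 2), whatever k and s:
for k, s in [(0, 0.0), (3, 1.0)]:
    print(k, s, moment(1, k, s), moment(2, k, s))
```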
One can similarly modify
any pdf to create another pdf of radically different shape but with the same second and fourth central moments (say), which therefore would have the same kurtosis. From this example alone it should be abundantly clear that kurtosis is not an easily interpretable or intuitive measure of symmetry, unimodality, bimodality, convexity, or any other familiar geometric characterization of a curve.
Functions of moments, therefore (and kurtosis as a special case) do not describe geometric properties of the graph of the pdf. This intuitively makes sense: because a pdf represents probability by means of
area, we can almost freely shift probability density around from one location to another, radically changing the appearance of the pdf, while fixing any finite number of pre-specified moments.
|
Is this new mathematics?
No. Just an argument over mathematical presentation, bringing together individual observations previously made by e.g. Hatcher 2002.
What mathematical presentation?
It is standard practice to define ‘reduced’ homology after and on the basis of ‘traditional’ homology. But reduced homology is actually more fundamental, while traditional homology can be seen as secondary. Accordingly, I prefer to write \(H_i(X)\) for the \(i\)th reduced homology group, and \(H_i(X)_{\emptyset}\) for the \(i\)th traditional homology group of a topological space \(X\).
Traditional and reduced homology are almost identical.
Yes, for any non-empty space \(X\), the difference between traditional and reduced homology boils down to \(H_0(X)_{\emptyset} \simeq H_0(X) \times \mb{Z}\), so there’s not that much at stake here.
What kind of homology are we talking about?
Singular homology. (The arguments for cohomology are analogous.)
In what way is reduced homology more fundamental?
Reduced homology has a more parsimonious interpretation, it has a more natural definition, and traditional homology can in turn be elegantly defined in terms of reduced homology.
A more parsimonious interpretation?
Reduced homology is a better measure of the number of 1-dimensional holes in a space.
The 0th traditional Betti number encodes the number of connected components of a space. So the 0th reduced Betti number encodes “the number of connected components minus 1”. How is that more parsimonious?
Isn’t it unfortunate that with traditional homology, a contractible space, in particular a point — a simple space with no holes whatsoever — nonetheless has non-trivial homology in degree \(0\)? Isn’t it strange that whereas the \(n\)th Betti number generally represents the number of \(n+1\)-dimensional ‘holes’ in a structure, for \(n = 0\), it should be the number of connected components? With reduced homology, contractible spaces have trivial homology and the 0th Betti number actually encodes the number of 1-dimensional holes: the number of ‘gaps’ between components.
How do you define the number of ‘gaps’ between components? If a topological space \(X\) has \(k\) components, isn’t every pair of components separated by a gap, for a total of \(\binom{k}{2}\) gaps if \(k \geq 2\)? Whereas the 0th reduced Betti number is \(k - 1\)?
For 1-dimensional holes, we need a 1-dimensional perspective. This we get by projecting the topological space onto a line, in which case we find exactly \(k-1\) gaps between its components.
How can reduced homology be more fundamental if it assigns trivial homology both to contractible spaces and to the empty space?
The reduced homology of the empty space isn’t generally discussed. But if you follow the definition, you find that its \((-1)\)st reduced homology group is \(\mb{Z}\).
That just sounds wrong.
It is exactly what we want if homology encodes holes. The empty space has a hole of a unique, 0-dimensional kind: it is empty. In general, the prototypical space with a single \(n+1\)-dimensional hole is the \(n\)-sphere. The empty space is the \(-1\)-sphere (Definition 1) and it has the reduced homology of a sphere (Figure 1).
Definition 1: Let \(n \geq -1\) and let \(\left\|-\right\|\) be the euclidean norm on \(\mathbb{R}^{n+1}\). The \(n\)-sphere is the topological subspace \(S^n = \{a \in \mathbb{R}^{n+1} \mid \left\|a\right\| = 1\}\). The \(n+1\)-disk is the topological subspace \(D_{n+1} = \{a \in \mathbb{R}^{n+1} \mid \left\|a\right\| \leq 1\}\).
Figure 1: The groups \(H_i(S^n)\)
Is defining this empty \(-1\)-sphere in any way useful?
This is a bit tangential, but there exists an elegant proof that establishes the homology groups of the \(n\)-sphere through induction on \(n\), starting at \(n = -1\). In other words, we can derive the homology of the spheres from the homology of the empty space. Without providing full details, the proof, as covered in Hatcher 2002, hinges on the following theorem:
Theorem 1: For any topological space \(X\) and any two subspaces \(A, B \sbeq X\) whose interiors cover \(X\), there exists a short exact sequence of chain complexes$$0 \lra C(A \cap B) \lra C(A) \oplus C(B) \lra C(A) + C(B) \lra 0$$where the inclusion of the subcomplex \(C(A) + C(B)\) into \(C(X)\) induces isomorphisms on homology.
It is possible to apply this to \(X = S^n\), for any \(n \geq 0\), by taking \(A\) and \(B\) to be two overlapping hemispheres of \(X\) homeomorphic to the disk \(D_n\), such that \(A \cap B\) is homeomorphic to \(S^{n-1}\):
The Mayer-Vietoris theorem applied to this short exact sequence gives us a long exact sequence of homology groups. Because disks are contractible, the terms \(H_i(D_n \oplus D_n)\) are all trivial, hence \(H_{i+1}(S^{n+1})\) and \(H_{i}(S^{n})\) are isomorphic for all \(i\) and all \(n \geq -1\). We can thus derive any homology group of any sphere through induction from the homology groups of \(S^{-1}\), the empty space: for any \(n \geq -1\), all homology groups of \(S^n\) are trivial, except \(H_{n}(S^n)\), which is isomorphic to \(\mb{Z}\).
With traditional homology, this derivation is arguably less elegant because then induction cannot start with the empty space and the term \(H_0(D_n \oplus D_n)\) is non-trivial.
You said that reduced homology has a more elegant definition than traditional homology. How is this possible, given that reduced singular homology is usually defined by taking the singular chain complex of a space \(X\), from the definition of traditional homology, and extending it with a special map \(\epsilon: C_0(X) \lra \mb{Z}\)?
That construction is unnecessarily complicated. There is a more natural definition of reduced homology, and of \(\epsilon\) in particular. We obtain \(\epsilon\) automatically
as part of the chain complex if we remove an arbitrary restriction from the definition of traditional singular homology. Just watch the definitions unfold:
Definition 2: For any \(n \geq -1\), the standard \(n\)-simplex \(\Delta_n\) is the \(n\)-simplex spanned by the standard basis \((e_0,e_1,\dots,e_n)\) of \(\mathbb{R}^{n+1}\), i.e. the set$$\left\{(w_0,w_1,\dots,w_n) \in \mathbb{R}^{n+1} \Bigg| \sum_{0 \leq i \leq n}w_i = 1,\ w_0,w_1,\dots,w_n \geq 0\right\}.$$A singular \(n\)-simplex \(\sigma\) in a topological space \(X\) is a continuous map \(\Delta_n \longrightarrow X\). For any \(n \geq i \geq -1\), let \(\iota_i: \Delta_n \lra \Delta_{n+1}\) be the linear map which sends any \(e_j\) to \(e_j\) if \(j \leq i\) and to \(e_{j+1}\) if \(j > i\).
Definition 3: Define \(C: \mf{Top} \lra \mf{Ch_{\bullet}(Ab)}\) by letting \(C_n: \mathfrak{Top} \lra \mathfrak{Ab}\) be the composition of the functor \(\Cont(\Delta_n,-): \mf{Top} \lra \mf{Set}\), which sends a topological space \(X\) to the set of all singular \(n\)-simplices in \(X\), with the free functor \(\mf{Set} \lra \mf{Ab}\), and by having \(d_n: C_n(X) \lra C_{n-1}(X)\) send any generating \(n\)-simplex \(\sigma\) of \(C_n(X)\) to \(\sum_i (-1)^{i} \sigma \circ \iota_i\), for every \(n \in \mathbb{Z}\). Then for every \(n \in \mb{Z}\), the \(n\)th homology functor$$H_n: \mf{Top} \lra \mf{Ab}$$is the composition of \(C\) with the homology functor \(H_n: \mf{Ch_{\bullet}(Ab)} \lra \mf{Ab}\).
Isn’t this just the definition of traditional singular homology?
If we wanted to obtain traditional homology, we would have to restrict to \(n \geq 0\) in Definition 2. But there is no intrinsic reason for this restriction, as the case \(n = -1\) is still well-defined: \(\Delta_{-1} = \emptyset\), therefore \(\Cont(\Delta_{-1},X)\) is a singleton for any space \(X\), and consequently \(C_{-1}(X) \simeq \mb{Z}\) and \(d_0 = \epsilon\) maps every \(0\)-simplex in \(C_{0}(X)\) onto the unique \(-1\)-simplex generating \(C_{-1}(X)\). In contrast, the restriction \(n \geq -1\) really is necessary, because \(\mathbb{R}^{n+1}\) is not defined for \(n < -1\).
Also, note in particular that since \(C_0(\emptyset) = 0\), we get \(H_{-1}(\emptyset) \simeq \mb{Z}\).
Defining homology in degree \(-1\) looks anything but natural.
That is an esthetic argument, purely due to the choice of index. The standard \(n\)-simplex is defined as a subset of \(\mb{R}^{n+1}\), and so might just as well have been called the \(n+1\)-simplex. In essence, \(n = -1\) is the true ‘zero-case’ of (co)homology.
Do we really want to postulate an empty \(-1\)-simplex?
Besides being a building block in the definition of reduced homology, the \(-1\)-simplex is useful in other ways. It allows one to observe that every subset of vertices of a simplex spans a subsimplex (a face), and that, in a simplicial complex, the intersection of any two simplices is a face of both. More generally, the number of \(i\)-faces of an \(n\)-simplex is \(\binom{n+1}{i+1}\), but the resulting Pascal’s Triangle is not complete without the unique \(-1\)-dimensional face (Figure 2).
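This face count, including the \(-1\)-dimensional face, can be tabulated with a short Python sketch (an \(i\)-face corresponds to a choice of \(i+1\) of the \(n+1\) vertices):

```python
from math import comb

def face_counts(n):
    """Number of i-faces of an n-simplex for i = -1, 0, ..., n: an i-face
    is spanned by a choice of i+1 of the n+1 vertices, giving C(n+1, i+1);
    i = -1 contributes the single empty face."""
    return [comb(n + 1, i + 1) for i in range(-1, n + 1)]

# Complete rows of Pascal's triangle; each row sums to 2^(n+1):
for n in range(4):
    print(n, face_counts(n))
```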
Figure 2: The number of \(i\)-faces of an \(n\)-simplex
So how precisely is traditional homology a special case of reduced homology?
It is possible to give at least two equivalent definitions. First, traditional homology can be subsumed under the more general concept of
relative homology (Definition 4).
Definition 4: Let \(X\) be a topological space, and \(A \sbeq X\) a subspace. Then for any \(n \in \mb{Z}\), the relative homology group \(H_n(X,A)\) is the \(n\)th homology group of the chain complex \(C(X)/C(A)\).
This definition of relative homology is equally valid for reduced and traditional homology, and this choice does not affect the outcome: \(H_n(X,A)\) and \(H_n(X,A)_{\emptyset}\) are isomorphic for all \(n\). Moreover, both reduced and traditional homology can be derived back from relative homology. For \(x \in X\) any point, \(H_n(X,\{x\})\) is the reduced homology group \(H_n(X)\), while \(H_n(X,\emptyset)\) is the traditional homology group \(H_n(X)_{\emptyset}\) (hence this choice of notation). In other words, traditional homology is reduced homology relativised to the empty space.
One could also define reduced homology as traditional homology relativised to a point.
Except that the choice of a point of \(X\) makes this arguably less elegant, and it cannot be done for the empty space.
What is the second equivalent definition?
The traditional homology of a space \(X\) corresponds to the reduced homology of \(X \sqcup \{z\}\), where {z} is a singleton.
That’s not a principled definition, that’s a hack.
More formally, for any \(n \in \mb{Z}\), we let \({H_n}_\emptyset\) be the composition \(H_nFI\), where \(F: \mf{Top}_* \lra \mf{Top}\) is the forgetful functor and \(I\) is its left adjoint which sends \(X\) to \(X \sqcup \{z\}\). Note that \(I\) preserves coproducts and sends products to smash products, so this gives us a way to relate traditional and reduced homology.
If traditional homology is just a derivative of reduced homology, why does it feature in Poincaré Duality? That formula doesn’t work if you plug in reduced homology groups.
But there is a more general Poincaré duality theorem defined in terms of relative homology (Theorem 2, see Theorem 8.3 of Bredon 1993). And since traditional homology is a special case of relative homology, we obtain traditional Poincaré duality when \(L \simeq M\) and \(K = \emptyset\).
Theorem 2 (Poincaré–Alexander–Lefschetz): Let \(M\) be an \(n\)-dimensional oriented manifold and let \(K \sbeq L \sbeq M\) be compact subspaces of \(M\). Then for every \(i\),$$H_i(M\setminus K, M\setminus L) \simeq H^{n-i}(L,K).$$
What about the Künneth theorem?
Here too there exists a more general, relative theorem (Theorem 3, see p276 of Hatcher 2002) which reduces to the traditional Künneth theorem when \(A = B = \emptyset\).
Theorem 3 (Künneth): Let \(X\) and \(Y\) be two topological spaces, with subspaces \(A \sbeq X\) and \(B \sbeq Y\). Then for each \(n\), there exists a short exact sequence$$0 \lra \bigoplus_i H_i(X,A) \otimes H_{n-i}(Y,B) \lra H_n(X \times Y,(A \times Y) \cup (X \times B)) \lra \bigoplus_i \Tor(H_i(X,A), H_{n-i-1}(Y,B)) \lra 0$$which splits.
There also exists a reduced Künneth theorem, which we obtain when \(A = \{x\}\) and \(B = \{y\}\) (Corollary 3.1).
Corollary 3.1: Let \(X\) and \(Y\) be two non-empty topological spaces. Then for each \(n\), there exists a short exact sequence$$0 \lra \bigoplus_i H_i(X) \otimes H_{n-i}(Y) \lra H_n(X \wedge Y) \lra \bigoplus_i \Tor(H_i(X), H_{n-i-1}(Y)) \lra 0$$which splits.
We can also see the traditional Künneth theorem as an implication of this reduced Künneth theorem, given that the traditional homology of \(X\) and \(Y\) corresponds to the reduced homology of \(X \sqcup \{z\}\) and \(Y \sqcup \{z\}\), and that \((X \sqcup \{z\}) \wedge_z (Y \sqcup \{z\}) \simeq (X \times Y) \sqcup \{z\}\).
The \(i\)th traditional homology group of the \(n\)-fold torus \(T^n\) is \(\mb{Z}^{\binom{n}{i}}\). With reduced homology, this pattern breaks down for \(i=0\). Isn’t this an example where traditional homology produces more natural results?
That the ranks of the homology groups of a torus are binomial coefficients is a direct combinatorial consequence of repeated application of the Künneth theorem to the product of \(S^1\) with itself. One can obtain a similar result with the reduced Künneth theorem for a whole range of spaces. The reduced equivalent of multiplying \(S^1\) with itself is taking the smash product of \(S^0 \vee_x S^1 \simeq S^1 \sqcup \{z\}\) with itself in \(z\), producing \(S^0 \vee_x T^n \simeq T^n \sqcup \{z\}\). But one can also take the smash product in other points to obtain different spaces with the same homology. A consistent choice for \(x\) produces the wedge sum \(\bigvee_{i}\bigvee_{\binom{n}{i}}S^i\). Alternating choices for \(z\) and \(x\) produce various halfway structures, all with homology groups whose ranks correspond to binomial coefficients.
References
Bredon, Glen Eugene, 1993. Topology and Geometry. Graduate Texts in Mathematics 139. Springer Verlag, New York.
Hatcher, Allen Edward, 2002. Algebraic Topology.
|
The process $X$ is not gaussian and its increments are not independent.
Note first that $X$ is a Brownian martingale, hence a Brownian motion with a change of time, thus, it is distributed like $(\beta_{\langle X\rangle_t})$, where $\beta$ is a Brownian motion independent of $X$. For example, $X_1$ has the distribution of $\beta_{\langle X\rangle_1}=\sqrt{\alpha}\cdot\gamma$ where $\gamma$ is standard normal independent of $(X_t)$ and $\alpha=\langle X\rangle_1$. Thus, $E[X_1]=0$, $E[X_1^2]=E[\alpha]\cdot E[\gamma^2]=E[\alpha]$ and $E[X_1^4]=E[\alpha^2]\cdot E[\gamma^4]=3E[\alpha^2]$.
Since $E[Z^4]=3E[Z^2]^2$ for every centered normal random variable $Z$, these remarks show that if $X_1$ is normal then $E[\alpha^2]=E[\alpha]^2$, that is, $\alpha$ is almost surely constant. But $\alpha=\int\limits_0^1B_t^4\,\mathrm dt$ hence this is not so and $X_1$ is not normal.
To study the independence of the increments of $X$, fix some $s\geqslant0$ and consider the sigma-algebras $\mathcal F^X_s=\sigma(X_u;u\leqslant s)$ and $\mathcal F^B_s=\sigma(B_u;u\leqslant s)$, and the Brownian motion $C$ defined by $C_u=B_{s+u}-B_s$ for every $u\geqslant0$. Then $C$ is independent of $\mathcal F^B_s$. Furthermore, for every $t\geqslant0$,$$X_{t+s}=X_s+\int_0^t(B_s+C_u)^2\mathrm dC_u=X_s+B_s^2C_t+2B_s\int_0^tC_s\mathrm dC_s+\int_0^tC_s^2\mathrm dC_s.$$Rewrite this as$$X_{t+s}-X_s=B_s^2C_t+B_sD_t+G_t,$$where $D_t$ and $G_t$ are functionals of $C$ hence independent of $\mathcal F^B_s$. Thus,$$E[(X_{t+s}-X_s)^2\mid\mathcal F^B_s]=B_s^4E[C_t^2]+B_s^2E[D_t^2]+E[G_t^2]+2B_s^3E[C_tD_t]+2B_s^2E[C_tG_t]+2B_sE[D_tG_t].$$One can check that $E[C_tD_t]=E[D_tG_t]=0$, $E[C_t^2]=t$, $E[D_t^2]=2t^2$, $E[G_t^2]=t^3$ and $E[C_tG_t]=\frac12t^2$ hence$$E[(X_{t+s}-X_s)^2\mid\mathcal F^B_s]=tB_s^4+3t^2B_s^2+t^3.$$Note that $\mathrm d\langle X\rangle_s=B_s^4\mathrm ds$ and that $\langle X\rangle$ is $\mathcal F^X$-adapted hence $B_s^4$ and every function of $B_s^4$, for example $B_s^2$, are measurable with respect to $\mathcal F^X_s$. This yields$$E[(X_{t+s}-X_s)^2\mid\mathcal F^X_s]=tB_s^4+3t^2B_s^2+t^3.$$The RHS is not almost surely constant hence $(X_{t+s}-X_s)^2$ is not independent of $\mathcal F^X_s$, in particular the increments of $X$ are not independent.
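For readers who want to verify the stated second moments, they follow from Itô's isometry $E\big[\int_0^t H_s\,\mathrm dC_s\int_0^t K_s\,\mathrm dC_s\big]=\int_0^t E[H_sK_s]\,\mathrm ds$ applied to $D_t=2\int_0^tC_s\,\mathrm dC_s$ and $G_t=\int_0^tC_s^2\,\mathrm dC_s$; a sketch, using $E[C_s^2]=s$, $E[C_s^4]=3s^2$ and the vanishing of the odd moments:

```latex
\begin{aligned}
E[D_t^2] &= 4\int_0^t E[C_s^2]\,\mathrm ds = 4\int_0^t s\,\mathrm ds = 2t^2, &
E[G_t^2] &= \int_0^t E[C_s^4]\,\mathrm ds = \int_0^t 3s^2\,\mathrm ds = t^3,\\
E[C_tG_t] &= \int_0^t E[C_s^2]\,\mathrm ds = \tfrac12 t^2, &
E[C_tD_t] &= 2\int_0^t E[C_s]\,\mathrm ds = 0,\\
E[D_tG_t] &= 2\int_0^t E[C_s^3]\,\mathrm ds = 0. & &
\end{aligned}
```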
Edit: One may feel that the computation of the conditional expectation of $(X_{t+s}-X_s)^2$ above is rather cumbersome (it is) and try to replace it by the (definitely simpler) computation of the conditional expectation of $X_{t+s}-X_s$. Unfortunately,$$E[X_{t+s}-X_s\mid\mathcal F^X_s]=0,$$hence this computation is not sufficient to decide whether the conditional distribution of $X_{t+s}-X_s$ conditionally on $\mathcal F^X_s$ is constant or not (which is the reformulation of the independence of a random variable and a sigma-algebra this solution relies on). Another way of looking at the situation is that, fortunately, already the conditional second moments are not constant.
|
Assume that the model of computation is a standard Turing machine model with input alphabet $\Sigma = \{0,1\}$, work alphabet $\Gamma = \{0,1,\_\}$, 1 input tape, 1 work tape and 1 output tape.
We can build a Turing machine $U$ that accepts a (reasonable) binary representation of a Turing machine $M$ and
simulates it (possibly doing additional computation before and/or after the simulation).
Such a concept of simulation is used extensively and in different contexts (e.g. in the proof of the time hierarchy theorem).
I'm wondering whether there is a formal definition of "simulation" (or several different definitions), and when such definition(s) appeared for the first time.
Something like "Given a description $p$ of $M$, $U$ on input $p$ simulates $M$ if during the computation there is a one-to-one mapping between the internal state of $U$ and the internal state of the simulated $M$, and between the content of the work tape of $U$ and the simulated work tape of $M$"
Note 1: as commented by Zonko, we could use this definition: "Given a description $p$ of $M$, $U$ simulates $M$ if $U(p,x)=M(x)$ for all $x$ accepted by $M$"; but a machine $U$ with $U(p,x)=M(x)+1$ presumably also has to "simulate" $M$ internally, even though it does not satisfy this definition.
|
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta| < 0.8$ ...
|
Let $f:S\to B$ be an elliptic fibration from an integral surface $S$ to an integral curve $B$.
Here I use the following definitions:
A surface (resp. curve) is a $2$-dimensional (resp. $1$-dimensional) proper $k$-scheme over a fixed field $k$.
A fibration has two properties: 1. $\mathcal{O}_B = f_*\mathcal{O}_S$; 2. all fibers of $f$ are geometrically connected.
Furthermore, a fibration is elliptic if the generic fiber $S_{\eta}=f^{-1}(\eta)$ is an elliptic curve (over $k(\eta)$).
Denote by $i_S: S_{\eta} \to S$ the canonical immersion. Here I'm not 100% sure, but I guess that for the structure sheaf, $\mathcal{O}_{S_{\eta}}= \mathcal{O}_S \otimes_k k(\eta)$ holds.
Now the QUESTION:
Since $S_{\eta}$ is an elliptic curve and therefore smooth, the restriction of the Kähler differentials $\Omega^2_{S/B} \vert_{S_{\eta}}$ is invertible.
My question is: how can one see that there exists an open neighbourhood $U \subset S$ of $S_{\eta}$ such that the restriction $\Omega^2_{S/B} \vert_U$ is still invertible?
Suppose we have the group $G$ of invertible functions over some set $S \subseteq \mathbb{R}$ under composition. I'm interested in the divisibility of such a group. For example, taking $S=[-1, 1]$, for any $n \in \mathbb{N}$, one might try to find some function $f \in G$ such that $$\underbrace{f \circ f \circ \cdots \circ f}_{n \text{ times}} = \sin\left(\frac{\pi x}{2}\right).$$ More generally, we want to satisfy $\forall n \ \forall x \ \exists y \ y^n = x$, the usual divisibility axiom. My guess is that these groups are far from divisible, but it is known that they can always be embedded in some divisible group $\overline{G}$. However, the construction of $\overline{G}$ I have read about requires taking a sequence of wreath products and a direct limit. This usually makes things very abstract, and one might have a hard time identifying $f$ as an element of $\overline{G}$.
Is there a better way to visualize this construction? My goal is to make sense of the elements of $\overline{G}$ as functions over some larger (possibly not even real) $S' \supseteq S$.
Problem:
Let $F_q$ be a finite field with $q$ elements.
$T_n(F_q) := \{ A = (a_{ij}) \in F_q^{n \times n} \mid a_{ij} = 0 \text{ for } i < j, \text{ and } a_{ii} \neq 0 \text{ for all } i \}$.
Determine the number of elements in $ T_n(F_q)$ .
My solution is as follows:
Starting with the first row and going down, there are:
$q-1$ possibilities for the first row;
$(q-1)q$ possibilities for the second row;
...
$(q-1)q^{n-1}$ possibilities for the last row.
Therefore, in total there are $ (q-1)^nq^{\sum_{i=1}^{n-1} i} = (q-1)^nq^{\frac{n(n-1)}{2}}$ elements.
Could you, please, check my solution?
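For small cases, the closed form can be confirmed by brute-force enumeration (a Python sketch, assuming $q$ prime and using the fact that a triangular matrix over a field is invertible exactly when its diagonal entries are nonzero):

```python
from itertools import product

def count_invertible_lower_triangular(n, q):
    """Brute-force count of n x n matrices over F_q (q prime) with zeros
    above the diagonal that are invertible; invertibility of a triangular
    matrix reduces to all diagonal entries being nonzero."""
    positions = [(i, j) for i in range(n) for j in range(n) if j <= i]
    count = 0
    for values in product(range(q), repeat=len(positions)):
        entries = dict(zip(positions, values))
        if all(entries[(i, i)] != 0 for i in range(n)):
            count += 1
    return count

def formula(n, q):
    # The claimed closed form: (q-1)^n * q^(n(n-1)/2)
    return (q - 1) ** n * q ** (n * (n - 1) // 2)

print(count_invertible_lower_triangular(3, 3), formula(3, 3))  # both 216
```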
$\mathcal{K}_{X}$ denotes the sheaf of total rings of fractions of $\mathcal{O}_{X}$.
A)f′(x)=0 has n−1/2 distinct real roots B)f′(x)=0 has n−1 distinct real roots C)all the roots of f′(x)=0 are distinct D)none of these
I want to show that the set of all invertible operators $\mathcal{G}(\ell^2)$ is not dense in $\mathcal{B}(\ell^2)$.
Consider the right shift operator $T\in \mathcal{B}(\ell^2)$. We also know that $T\notin \mathcal{G}(\ell^2)$. I want to show now that $$B=\{R\in \mathcal{B}(\ell^2):\|R-T\|<1\}$$ is disjoint from $\mathcal{G}(\ell^2)$.
Suppose not. Then there exists $R\in \mathcal{G}(\ell^2)$ such that $\|R-T\|<1$. How can I arrive at a contradiction from here? Any help is appreciated.
The answer is obviously false. For example: $$A=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 1/2 \end{pmatrix},\quad A^{-1}=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & 2 \end{pmatrix},\quad B=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -1/2 \end{pmatrix},\quad B^{-1}=\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1/2 & 0 \\ 0 & 0 & -2 \end{pmatrix},\quad A+B=\begin{pmatrix} 2 & 0 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 0 \end{pmatrix}.$$ $A+B$ does not have an inverse matrix (its rank is not $n$). Is there a better way to prove it other than an example?
I started linear algebra, and I encountered the part about using the Gauss-Jordan method to compute the inverse of a matrix. How can we prove that a matrix is invertible if we can change that matrix into the identity matrix by doing some elementary row operations?
The following question is from the Fall 2016 UCLA algebra qualifying exam:
Let $F$ be a field and $a\in F$. Show that the functor taking a commutative $F$-algebra $R$ to the invertible elements of $R[X]/(X^2-a)$ is representable.
What I have so far: if $a$ is a nonzero square in $F$ (and $\mathrm{char}\,F \neq 2$), then $R[X]/(X^2-a)\cong R\times R$. Hence we get that $$\mathrm{Hom}_{F\text{-alg}}(F[t,t^{-1}]\otimes_F F[t,t^{-1}],R)\cong \mathrm{Hom}_{F\text{-alg}}(F[t,t^{-1}],R\times R)\cong U(R),$$ where $U$ is the functor that takes $R$ to the units of $R[X]/(X^2-a)$. Hence in this case the functor is representable.
I'm unsure how to extend this to the general case.
The question is: “If $A$ and $B$ are $n \times n$ matrices, show that they have the same null space if and only if $A = UB$ for some invertible matrix $U$.”
I started the question by saying $Ax = 0$ for some vector $x$ in $\operatorname{null}(A)$. Now I'm lost. Could someone please help me out with this question? Thank you very much.
|
From Folland's
Analysis, suppose $\{E_j\}_1^\infty \subset \mathcal{A}$. Set
$$F_k = E_k \setminus \bigcup_{j=1}^{k-1}E_j$$
Then the $F_k$'s are disjoint, and $\bigcup_1^\infty E_j = \bigcup_1^\infty F_k$.
My question is, what if you let $x \in \bigcap_1^\infty E_j$? Then wouldn't $x \in \bigcup_{j=1}^{k-1}E_j$ for all $k$, which would imply $x \not\in F_k$ for all $k$, and hence $\bigcup_1^\infty E_j \neq \bigcup_1^\infty F_k$?
I'm sure I'm just missing something really simple here, but I can't for the life of me figure out what it is!
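The construction can be experimented with on finite sets (a toy Python sketch; note that for $k=1$ the union $\bigcup_{j=1}^{0}E_j$ is over an empty index set, hence empty, so $F_1 = E_1$):

```python
def disjointify(E):
    """Return F_k = E_k minus (E_1 ∪ … ∪ E_{k-1}); the union over an
    empty index set is empty, so F_1 = E_1."""
    F, seen = [], set()
    for Ek in E:
        F.append(set(Ek) - seen)
        seen |= set(Ek)
    return F

# An x lying in every E_j (here x = 2) still appears in the union of the
# F_k: it lands in F_1.
E = [{1, 2}, {2, 3}, {2, 4}]
F = disjointify(E)
print(F)  # [{1, 2}, {3}, {4}] -- pairwise disjoint, same union as E
```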
|
A compact locally connected metric space is "uniformly locally connected". That is, for any $\epsilon > 0$, there is some $\delta > 0$ such that whenever $\rho(x, y) < \delta$, $x$ and $y$ both lie in some connected subset of $X$ of diameter $<\epsilon$.
Proof:
Since $X$ is a locally connected metric space, each $x\in X$ has a neighbourhood base of open connected sets. Given $\epsilon > 0$, for each $x \in X$ let $U_x=B_\rho(x, \epsilon)$ be a neighbourhood of $x$. There exists an open connected basic neighbourhood $V_x \subseteq U_x$ with diameter $<\epsilon$. Now $$X=\bigcup_{x\in X}{V_x},$$ so we have covered $X$ by open connected sets of diameter $<\epsilon$. Since $X$ is compact, reduce this to a finite subcover $\{V_{x_1},\dots, V_{x_n}\}$ and let $\delta$ be a Lebesgue number (Theorem 22.5) for this cover. Then if $\rho(x, y) < \delta$, both $x$ and $y$ belong to some $V_{x_i}$.
Theorem 22.5 (Lebesgue covering lemma). If $\{U_1,\dots, U_n\}$ is a finite open cover of a compact metric space $X$, there is some $\delta > 0$ such that if $A$ is any subset of $X$ of diameter $< \delta$, then $A \subset U_i$ for some $i$.
I am trying to write the proof better than this.
I would like to confirm this proof.
If acceptable, I would like to clarify and improve it (language and mathematics) as much as possible.
|
I think the question is more about the physical intepretation of the complex expression
$\psi (x,t)=Ae^{i(kx-\omega t)}$
than the mathematical meaning of it. For the physical meaning of it, we think of the probability amplitude like a rotating arrow, which rotates as the particle travels in space. The rotation frequency of the arrow is determined by the energy (frequency) of the particle (photon). This arrow has been given the name 'phasor' because the argument $\phi =kx-\omega t$ is an angle (in wave mechanics it is called the 'phase' of the wave). This phase tells us how many degrees the arrow has rotated from the moment the particle was created until it reaches the point $x$ at time $t$ of its journey.
This complex number representation is very convenient, not only because it shows the phase of the wave but also because it shows the direction (if the wave travels in 3-D). However, its importance in QM comes from the need to combine (add) waves coming from different sources at some point in space. This is not a simple algebraic addition, because the angles involved make the problem geometrical, and the complex number representation handles this very neatly. In a way the phasors add like vectors do (the real with the real, and the imaginary with the imaginary, and it's done!)
The calculation of the probabilities follows rules that are also geometrical. For example, let us think of two waves coming from the two slits in the double-slit (DS) experiment as:
from slit 1 $S_1: \psi_1(x_1,t)$ and from slit 2 $S_2: \psi_2(x_2,t)$.
The $x_1$ and $x_2$ show the distances the two phasors (waves) traveled by the time they reach some point P on the screen. When these two waves arrive at the screen, they will be added to get the total amplitude first
$A=\psi_1(x_1,t)+\psi_2(x_2,t)$
and then the probability will be the 'square of the modulus' of the total amplitude as
$P=|A|^2= |\psi_1 (x_1,t)|^2+ |\psi_2 (x_2,t)|^2 + 2|\psi_1 (x_1,t)|\times|\psi_2 (x_2,t)|\cos(\theta)$ where $\theta$ is the phase difference between the two amplitudes at P.
The third term in the equation above shows the real need for the complex representation of the wave functions in QM, as well as the need for finding first the total probability amplitude, and then finding the probability as the square of the total modulus. This term is the root of all the beautiful interference phenomena we observe in the quantum mechanical world. I hope this helps a little.
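The 'add amplitudes first, then square' rule described above can be sketched in a few lines; all numbers below ($A$, $k$, $\omega$, $t$, and the path lengths) are arbitrary illustrative choices, not values from any particular experiment:

```python
import cmath
import math

# Arbitrary illustrative parameters (hypothetical, not from a real experiment)
A = 1.0               # amplitude of each slit's wave
k = 2.0               # wavenumber
w = 3.0               # angular frequency omega
t = 0.5               # time of arrival at point P
x1, x2 = 10.0, 10.4   # path lengths from slit 1 and slit 2 to P

# Complex amplitudes ("phasors") from each slit
psi1 = A * cmath.exp(1j * (k * x1 - w * t))
psi2 = A * cmath.exp(1j * (k * x2 - w * t))

# Add the amplitudes first, then take the squared modulus
P_total = abs(psi1 + psi2) ** 2

# Compare with the expanded form containing the interference term
theta = k * (x2 - x1)   # phase difference between the two arms
P_expanded = (abs(psi1) ** 2 + abs(psi2) ** 2
              + 2 * abs(psi1) * abs(psi2) * math.cos(theta))

assert math.isclose(P_total, P_expanded)
```

The assertion confirms that the cross term $2|\psi_1||\psi_2|\cos\theta$ is exactly what the complex addition produces automatically.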
|
For example, I don't understand why the speed of a satellite moving in an orbit of radius 'a' around the Earth must be equal to $\sqrt{GM/a}$. If I release a particle in outer space with a velocity perpendicular to the line joining the particle and Earth, but with magnitude not equal to $\sqrt{GM/a}$, what will happen? For all I know, the acceleration will still be perpendicular to its velocity at all instants. And an always-perpendicular acceleration is only capable of changing the direction of velocity continuously. Then, wouldn't the particle still rotate even if its speed is not equal to the one given by $F=mv^2/r$?
You agree that a satellite with speed $\sqrt{\frac{GM}{a}}$ will undergo a circular orbit of radius $a$.
Now suppose that you take a satellite up to that distance $a$ and give it a tangential speed greater than $\sqrt{\frac{GM}{a}}$.
The satellite will start executing a curved path of smaller curvature, larger radius, than if the speed had been $\sqrt{\frac{GM}{a}}$ because the gravitational attraction of the Earth on the satellite is not large enough for the path to be of radius $a$. The satellite would therefore move further from the Earth gaining gravitational potential energy but at the same time losing kinetic energy i.e. moving at a slower speed. Remember that the satellite has a mass and if no force was acting on it the satellite would travel in a straight line. With the higher speed the gravitational attraction of the Earth cannot pull the satellite enough to make the satellite execute a circular path so it goes along a less curved, elliptical path.
What happens next depends on the speed that you give the satellite.
Below a certain value the satellite would execute an elliptical orbit about the Earth i.e. the distance between the Earth and the satellite would vary as would the speed of the satellite.
At a particular speed, the escape speed, the satellite would execute a parabolic path and escape from the Earth.
Above the escape speed the path of the satellite would be hyperbolic.
When you have a particle orbiting circularly around Earth, you can easily write the motion laws for two directions: radial direction and tangential direction. In tangential direction, you have uniform motion with constant speed. In radial direction, due to the motion of the particle, in order to have a circular motion, you should have a radial (i.e., centripetal) force with modulus: $$F=\frac{m v^2}{ R}$$ with $m$ mass of the particle, $v$ its speed and $R$ radius of the circular orbit.

In this case, this force is gravity and, from the law of Gravitational attraction, you can get: $$ F = \frac{G mM}{ R^2} $$ with $G$ constant, $M$ mass of the Earth and $R$ distance between them. If you put these two expressions equal, you can get the right speed for a satellite circularly orbiting the Earth. $$ v =\sqrt{\frac{GM}{R}} $$

If you have a satellite's speed greater than the one calculated for a circular motion, you can see that in the radial direction the required centripetal force is greater than the actual Gravitational one. This means that gravity cannot keep the satellite orbiting around the Earth: you have a force term in radial direction and it will cause the satellite to go further from the Earth while it is rotating around it. On the contrary, if the speed is less than required, you still have a force in radial direction and it will cause the satellite to fall toward Earth's surface, while orbiting around it, and the orbit will become elliptical.
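As a quick sanity check of $v=\sqrt{GM/R}$, plugging in rough textbook values for the Earth (approximate constants, not taken from the answer above) gives the familiar low-orbit speed of about 7.7 km/s:

```python
import math

# Approximate physical constants (SI units, rough textbook values)
G = 6.674e-11   # gravitational constant
M = 5.972e24    # mass of the Earth, kg
R = 6.771e6     # orbital radius: Earth radius + ~400 km, in m

# Circular-orbit condition m*v^2/R = G*m*M/R^2  =>  v = sqrt(GM/R)
v = math.sqrt(G * M / R)
print(round(v))   # roughly 7700 m/s
```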
For example, I don't understand why the speed of a satellite moving in an orbit of radius 'a' around the Earth must be equal to $\sqrt{GM/a}.$
For the satellite to orbit at radius $a$, the Earth's gravitational field must exert a
centripetal force $F_c$:
$$F_c=\frac{mv^2}{a}$$
This force is the gravitational force, so:
$$F_c=G\frac{mM}{a^2}$$
So:
$$\frac{mv^2}{a}=G\frac{mM}{a^2}$$ $$\implies v=\sqrt\frac{GM}{a}\tag{1}$$
If the 'launch speed' (as you defined it) is higher, the satellite will move to a different orbit, until $(1)$ is satisfied again.
So here we assume, as per the OP, that $v_0 \neq \sqrt\frac{GM}{a}$.
The 'final orbit' $r$ is found from energy conservation (we assume gravity to be the only external force). At launch the total energy $T$ is ($v_0$ is launch velocity):
$$T=\frac{mv_0^2}{2}-\frac{GMm}{a}$$
In final orbit the total energy $T$ is:
$$T=\frac{GmM}{2r}-\frac{GmM}{r}=-\frac{GmM}{2r}$$
From the identity, $r$ can be calculated:
$$-\frac{GM}{2r}=\frac{v_0^2}{2}-\frac{GM}{a}$$ $$\frac{GM}{r}=\frac{2GM-av_0^2}{a}$$ $$\boxed{r=\frac{aGM}{2GM-av_0^2}}$$
If $v_0 > \sqrt\frac{GM}{a}$, then $r>a$, so the satellite moves to a higher orbit. It slows down because some of its kinetic energy is converted to potential energy.
If $v_0 < \sqrt\frac{GM}{a}$, then $r<a$, so the satellite moves to a lower orbit. It will speed up because some of its potential energy is converted to kinetic energy.
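The boxed formula for the final radius can be sanity-checked numerically. The sketch below works in hypothetical units with $GM = a = 1$ (an arbitrary normalization chosen for simplicity, not values from the post above):

```python
import math

# Work in units where GM = 1 and the launch radius a = 1 (arbitrary choice)
GM = 1.0
a = 1.0
v_circ = math.sqrt(GM / a)   # circular-orbit speed at radius a

def final_radius(v0):
    """Radius of the circular orbit with the same total energy,
    r = a*GM / (2*GM - a*v0^2), valid while v0 is below escape speed."""
    return a * GM / (2 * GM - a * v0 ** 2)

# Launching faster than v_circ gives r > a (higher orbit) ...
assert final_radius(1.1 * v_circ) > a
# ... launching slower gives r < a (lower orbit) ...
assert final_radius(0.9 * v_circ) < a
# ... and at exactly v_circ the satellite stays at radius a.
assert math.isclose(final_radius(v_circ), a)
```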
I will compare circular motion in a rotating frame with circular motion in the Schwarzschild metric. In this general relativistic context we can examine what is meant by the centripetal force for a particle in a rotating frame and for a particle in a circular orbit around a gravitating mass. We can compare the two and see what is modified by general relativity, and what is interpreted as centripetal acceleration.
The metric for a rotating coordinate system with $d\phi~\rightarrow~d\phi~+~\omega dt$ is $$ ds^2~=~A(\omega,~r)\left(cdt~-~\frac{\omega r^2}{c^2A(\omega,~r)}d\phi\right)^2~-~dr^2~-~\frac{r^2}{A(\omega,~r)}d\phi^2~-~dz^2, $$ $$ A(\omega,~r)~=~(1~-~\omega^2r^2/c^2) $$ which gives the Christoffel symbols $$ \Gamma^r_{tt}~=~\omega^2 r,~\Gamma^\phi_{rr}~=~-\frac{\omega}{r},~\Gamma^\phi_{r\phi}~=~-\frac{1}{r},~\Gamma^r_{\phi\phi}~=~r. $$ For circular motion the terms involving $\Gamma^\phi_{rr}$ and $\Gamma^\phi_{r\phi}$ drop out, since $dr~=~0$. The geodesic equation of interest is of the form $$ \frac{d^2r}{ds^2}~+~\Gamma^r_{\phi\phi}\left(\frac{d\phi}{ds}\right)^2~=~0, $$ or equivalently $$ \frac{d^2r}{ds^2}~+~\omega^2 r\left(\frac{dt}{ds}\right)^2~+~r\left(\frac{d\phi}{ds}\right)^2~=~0. $$ This is similar to centripetal acceleration.
To make the connection to Newtonian physics let us transform this to acceleration in the standard coordinates of an observer. We then have $$ \frac{d^2r}{ds^2}~=~\left(\frac{d^2r}{dt^2}\right)\left(\frac{dt}{ds}\right)^2, $$ which employs $dr/dt~=~dz/dt~=~0$ for circular motion. We use the metric with $dr~=~0$ $$ ds^2~=~A(\omega,~r)\left(cdt~-~\frac{\omega r^2}{c^2A(\omega,~r)}d\phi\right)^2~-~\frac{r^2}{A(\omega,~r)}d\phi^2, $$ so the term $dt/ds$ is seen in $$ \left(\frac{ds}{dt}\right)^2~=~A(\omega,~r)\left(1~-~\frac{\omega^2r^2}{A(\omega,~r)}\right)~-~\frac{\omega^2r^2}{A(\omega,~r)}, $$ so that $dt/ds$ is a form of Lorentz gamma factor $$ \gamma(\omega)~\dot=~\frac{dt}{ds}~=~\frac{1}{\sqrt{A(\omega,~r)\left(1~-~\frac{\omega^2r^2}{A(\omega,~r)}\right)~-~\frac{\omega^2r^2}{A(\omega,~r)}}}. $$ This then gives us a gamma factor modified form of the centripetal acceleration. Since the modified gamma factor evidently divides out on both sides, this is the centripetal acceleration for a particle fixed to a rotating frame. The question is then whether this applies to gravitation.
For gravitation we turn to the Schwarzschild metric $$ ds^2~=~(1~-~2m/r)dt^2~-~(1~-~2m/r)^{-1}dr^2~-~r^2(d\theta^2~+~\sin^2\theta\, d\phi^2), \quad m~=~GM/c^2, $$ where for a circular orbit we have $dr~=~d\theta~=~0$ and $\theta~=~\pi/2$ so that $$ ds^2~=~(1~-~2m/r)dt^2~-~r^2d\phi^2. $$ Dividing through by $dt^2$ and letting $\omega~=~d\phi/dt$ gives $$ ds^2~=~[(1~-~2m/r)~-~r^2\omega^2]dt^2, $$ which gives a similar gamma factor $$ \gamma_m(\omega)~=~\frac{1}{\sqrt{1~-~2m/r~-~r^2\omega^2}}. $$ The Christoffel symbol relevant for the calculation is $$ \Gamma^r_{tt}~=~\frac{m(r~-~2m)}{r^3}, $$ so that $$ \frac{d^2r}{ds^2}~+~\frac{m(r~-~2m)}{r^3}\left(\frac{dt}{ds}\right)^2~=~0. $$ It is evident that the $\gamma_m(\omega)$ divides out and this leaves the dynamical equation $$ \frac{d^2r}{dt^2}~+~\frac{m(r~-~2m)}{r^3}~=~0. $$ For $r~\gg~2m$ this recovers Newton's second law with gravity.
It appears that in standard coordinates centripetal acceleration is the same. What is modified by general relativity is the nature of gravitation as a force interpreted in standard coordinates.
For all I know, the acceleration will still be perpendicular to its velocity at all instants. And, an always perpendicular acceleration is only capable of changing the direction of velocity continuously.
Well, let's see. For convenience, stipulate that at $t=0$, the particle is located at $r=R, \theta=\pi/2, \phi=0$ with velocity
$$\vec v(0)=v\;\hat{\boldsymbol{y}}$$
and acceleration
$$\vec a(0)=-\frac{GM}{R^2}\;\hat{\boldsymbol{x}}$$
The velocity at the next instant is then $$\vec v(0 + dt)=-\frac{GM}{R^2}dt\;\hat{\boldsymbol{x}} + v\;\hat{\boldsymbol{y}}$$
while the acceleration is
$$\vec a(0 + dt)=-\frac{GM}{R^2}\left(\hat{\boldsymbol{x}} + \frac{v}{R}\,dt\;\hat{\boldsymbol{y}}\right)$$
and so the dot product of these velocity and acceleration vectors is
$$\vec v(0 + dt)\cdot \vec a(0 + dt)=-\frac{GM}{R^2}\left[-\frac{GM}{R^2}+\frac{v^2}{R}\right]dt$$
which is zero only if
$$\frac{v^2}{R}=\frac{GM}{R^2}$$
or
$$v=\sqrt{\frac{GM}{R}}$$
Thus, it isn't the case that the acceleration remains perpendicular to the velocity at all instants for arbitrary $v$.
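This conclusion can be checked numerically with one small Euler step; the values $GM = R = 1$ and the step size below are arbitrary toy choices, not from the answer above:

```python
import math

# Arbitrary toy units: GM = 1, starting radius R = 1
GM = 1.0
R = 1.0
v_circ = math.sqrt(GM / R)

def v_dot_a_after_step(v, dt=1e-4):
    """Advance one Euler step from (R, 0) with velocity (0, v) and return
    the dot product of velocity and gravitational acceleration."""
    x, y = R, 0.0
    vx, vy = 0.0, v
    # acceleration at the starting point
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3
    # one small step of the motion
    x, y = x + vx * dt, y + vy * dt
    vx, vy = vx + ax * dt, vy + ay * dt
    # acceleration at the new position
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3
    return vx * ax + vy * ay

# At the circular-orbit speed, v.a stays (numerically) zero; otherwise not.
assert abs(v_dot_a_after_step(v_circ)) < 1e-6
assert abs(v_dot_a_after_step(1.3 * v_circ)) > 1e-6
```

For $v \ne \sqrt{GM/R}$ the dot product picks up the term $(v^2/R - GM/R^2)\,dt$ from the analysis above, so perpendicularity is lost after the very first instant.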
|
Hello, I am new here. I just took the m25 GMAT Club Test and I don't get the solution of a question. (Q19)
If equation \(|\frac{x}{2}| + |\frac{y}{2}| = 5\) encloses a certain region on the coordinate plane, what is the area of this region? 20 50 100 200 400
OA: 200
ME: well, since \(|x| + |y| = 10\) ; X can range from (-10) to (10) (when Y is 0) and the same for Y So the length of the side of the square should be 20. My Answer : 400
I think I am making a silly mistake somewhere but I just can't figure it out.
Thanks
Hi and welcome to the GMAT Club. Below is the solution for your problem. Hope it's clear.
\(|\frac{x}{2}| + |\frac{y}{2}| = 5\)
You will have 4 cases:
\(x<0\) and \(y<0\) --> \(-\frac{x}{2}-\frac{y}{2}=5\) --> \(y=-10-x\);
\(x<0\) and \(y\geq{0}\) --> \(-\frac{x}{2}+\frac{y}{2}=5\) --> \(y=10+x\);
\(x\geq{0}\) and \(y<0\) --> \(\frac{x}{2}-\frac{y}{2}=5\) --> \(y=x-10\);
\(x\geq{0}\) and \(y\geq{0}\) --> \(\frac{x}{2}+\frac{y}{2}=5\) --> \(y=10-x\);
So we have equations of 4 lines. If you draw these four lines you'll see that the figure bounded by them is a square turned by 45 degrees relative to the axes, with its center at the origin. This square has a diagonal equal to 20, so the \(Area_{square}=\frac{d^2}{2}=\frac{20*20}{2}=200\).
Or the \(Side= \sqrt{200}\) --> \(area=side^2=200\).
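As an independent check of the area, the shoelace formula applied to the four intercept points gives 200 directly (a generic computation, not part of the original solution):

```python
def shoelace_area(pts):
    """Area of a simple polygon given its vertices in order (shoelace formula)."""
    n = len(pts)
    s = 0.0
    for i in range(n):
        x1, y1 = pts[i]
        x2, y2 = pts[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2

# Vertices of the region |x| + |y| = 10, taken in order around the square
vertices = [(10, 0), (0, 10), (-10, 0), (0, -10)]
print(shoelace_area(vertices))  # 200.0
```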
If equation |x/2| + |y/2| = 5 enclose a certain region
15 Jan 2012, 10:26
Apex231 wrote:
If equation |x/2|+|y/2| = 5 encloses a certain region on the coordinate plane, what is the area of this region?
A 20 B 50 C 100 D 200 E 400
First of all, to simplify the given expression a little bit, let's multiply it by 2: \(|\frac{x}{2}|+|\frac{y}{2}|=5\) --> \(|x|+|y|=10\).
Now, find x and y intercepts of the region (x-intercept is a value(s) of x for y=0 and similarly y-intercept is a value(s) of y for x=0): \(y=0\) --> \(|x|=10\) --> \(x=10\) and \(x=-10\); \(x=0\) --> \(|y|=10\) --> \(y=10\) and \(y=-10\).
So we have 4 points: (10, 0), (-10, 0), (0, 10) and (0, -10).
When you join them you'll get the region enclosed by \(|x|+|y|=10\):
You can see that it's a square. Why a square? Because the diagonals of the quadrilateral are equal (20 and 20), and also are perpendicular bisectors of each other (as they lie on the X and Y axes), so it must be a square. As this square has a diagonal equal to 20, the \(Area_{square}=\frac{d^2}{2}=\frac{20*20}{2}=200\).
Or the \(Side= \sqrt{200}\) --> \(area=side^2=200\).
If equation |x/2|+|y/2| = 5 encloses a certain region
10 Sep 2012, 12:25
CMcAboy wrote:
Can someone help me with this question:
If equation |x/2| + |y/2| = 5 encloses a certain region on the coordinate plane, what is the area of this region?
A) 20 B) 50 C) 100 D) 200 E) 400
I believe this is the simplest & the quickest solution: |x/2| + |y/2| = 5. Put x = 0 in the above equation: we get |y/2| = 5, which means y = 10, -10. Put y = 0 in the above equation: we get |x/2| = 5, which means x = 10, -10.
If you plot these four points you get a square with two equal diagonals of length 20 units. Thus area = 1/2 * (Diagonal)^2 -----> 1/2 * 400 = 200
I hope this will help many.
If equation |x/2|+|y/2| = 5 encloses a certain region
15 Oct 2014, 19:17
Apex231 wrote:
If equation |x/2|+|y/2| = 5 encloses a certain region on the coordinate plane, what is the area of this region?
A. 20 B. 50 C. 100 D. 200 E. 400
Hello there,

The equation of a straight line whose x and y intercepts are a and b respectively is (x/a) + (y/b) = 1, i.e., the coordinates of the two ends of the line are (a,0) and (0,b).

Now, from the given question, |x/2|+|y/2| = 5; reducing this to intercept form we get |x/10|+|y/10| = 1. Considering the equation without modulus, the coordinates are (10,0) and (0,10). Since there is a modulus, the other two coordinates are (-10,0) and (0,-10).

The coordinates (10,0), (0,10), (-10,0) and (0,-10) form a square with diagonal length = 20. Here the diagonal length can be obtained by calculating the distance between (10,0) and (-10,0), or between (0,10) and (0,-10).

In a square, Diagonal = Side * sqrt(2), so Side = 10 * sqrt(2) and Area = Side * Side = 200.
Ans : D
Hope this helps! Thanks!
If equation |x/2| + |y/2| = 5 enclose a certain region
10 Jun 2015, 09:50
arshu27 wrote:
Bunuel wrote:
If equation \(|\frac{x}{2}| + |\frac{y}{2}| = 5\) encloses a certain region on the coordinate plane, what is the area of this region? 20 50 100 200 400
OA: 200
I had another way of solving. The answer is wrong but I wanted to know what is wrong in the method.
We can re-write the question as below
\(x^2/4 +y^2/4 = 5\) (since \(|x| = x^2\))
\(x^2 + y^2 = 20\)
This is the equation of a circle having the centre at (0,0) (general form is \(x^2 + y^2= r^2\))
area =\(3.14 * R^2\) = \(3.14 * 20\) = 62.8
What am I assuming wrong here? Thanks!
The part that I have highlighted above is WRONG, and it is the first step in your solution:

|x| is NOT equal to x^2 for all values of x.

The function "Modulus" only keeps the final sign positive, but that doesn't mean what you mentioned in the quoted highlighted section.
Alternatively you can solve this question in this way
Step 1: Substitute y=0, \(|\frac{x}{2}| + |\frac{0}{2}| = 5\) i.e. \(|\frac{x}{2}| = 5\) i.e. \(|x| = 10\) i.e. \(x = \pm 10\)
So on the X-Y plane you get two Point (+10,0) and (-10,0)
Step 2: Substitute x=0, \(|\frac{0}{2}| + |\frac{y}{2}| = 5\) i.e. \(|\frac{y}{2}| = 5\) i.e. \(|y| = 10\) i.e. \(y = \pm 10\)
So on the X-Y plane you get two points (0, +10) and (0, -10)
Join all four points; it's a square with side \(10\sqrt{2}\)
i.e. Area =\((10\sqrt{2})^2\) = 200
Prosper!!! GMATinsight Bhoopendra Singh and Dr.Sushma Jha e-mail: info@GMATinsight.com I Call us : +91-9999687183 / 9891333772 Online One-on-One Skype based classes and Classroom Coaching in South and West Delhi http://www.GMATinsight.com/testimonials.html
If equation |x/2| + |y/2| = 5 enclose a certain region
10 Jun 2015, 10:02
arshu27 wrote:
I had another way of solving. The answer is wrong but I wanted to know what is wrong in the method.
We can re-write the question as below
\(x^2/4 +y^2/4 = 5\) (since \(|x| = x^2\))
\(x^2 + y^2 = 20\)
This is the equation of a circle having the centre at (0,0) (general form is \(x^2 + y^2= r^2\))
area =\(3.14 * R^2\) = \(3.14 * 20\) = 62.8
What am I assuming wrong here? Thanks!
One More Clarification
(\(|x|\) is NOT equal to \(x^2\))
Instead, \(|x| = \sqrt{(x^2)}\)
Re: If equation |x/2| + |y/2| = 5 enclose a certain region
12 Jun 2015, 14:43
Why should I suppose that x or y equals ±10 and zero? What about ±5, as follows: |+5| + |-5| = 10; |-5| + |+5| = 10; |-5| + |-5| = 10; |+5| + |+5| = 10. So we have a square with side of length 10, and its area is 100.
Re: If equation |x/2| + |y/2| = 5 enclose a certain region
12 Jun 2015, 23:02
hatemnag wrote:
Why should I suppose that x or y equals ±10 and zero? What about ±5, as follows: |+5| + |-5| = 10; |-5| + |+5| = 10; |-5| + |-5| = 10; |+5| + |+5| = 10. So we have a square with side of length 10, and its area is 100.
Hi Hatemnag,
The given equation is basically representing FOUR linear equations which are representing 4 lines on the plane
One linear equation when x is +ve and y is +ve, i.e. X+Y = 10; a second linear equation when x is +ve and y is -ve, i.e. X-Y = 10; a third linear equation when x is -ve and y is +ve, i.e. -X+Y = 10; a fourth linear equation when x is -ve and y is -ve, i.e. -X-Y = 10
So you need to plot these equation and then take the area of Quadrilateral formed
Also, Please Note that Four Vertices of Quadrilateral are obtained where two lines Intersect, and The intersections of the lines are obtained at points (10,0), (-10,0), (0,10) and (0,-10)
Whereas what you have done is take four arbitrary points on those four lines, as per your convenience, and then assume that these points form the square
I hope this clears your doubt!
Re: If equation |x/2| + |y/2| = 5 enclose a certain region
20 Jun 2015, 01:53
jayanthjanardhan wrote:
Sorry, I don't know what I am missing; how do I get the diagonal to be 20? From the square I got, I have all the sides equal to 20, hence the area = 400.
Hi Jayanthjanardan,
The given equation is basically representing FOUR linear equations which are representing 4 lines on the plane
One linear equation when x is +ve and y is +ve, i.e. X+Y = 10; a second linear equation when x is +ve and y is -ve, i.e. X-Y = 10; a third linear equation when x is -ve and y is +ve, i.e. -X+Y = 10; a fourth linear equation when x is -ve and y is -ve, i.e. -X-Y = 10
NOTE: PLEASE PLOT THE LINES TO UNDERSTAND THE FIGURE (REFER TO THE FIGURE) and see that the diagonal of the square is 20
So you need to plot these equation and then take the area of Quadrilateral formed
Also, Please Note that Four Vertices of Quadrilateral are obtained where two lines Intersect, and The intersections of the lines are obtained at points (10,0), (-10,0), (0,10) and (0,-10)
Whereas, what you have done is taking any FOUR RANDOM POINTS on those four lines as per your convenience and then have assumed that these points form the Square
I hope this clears your doubt!
It's a similar problem, but the diagram we end up getting is a square and not a rhombus... what am I missing here?
Even that is a square, but never forget that a square is a specific type of rhombus.

I hope you can see that the product of the slopes of adjacent sides is -1 in that figure, which proves that the angle between adjacent sides is 90 degrees.

A square is a "rhombus with all angles 90 degrees", so calling it a rhombus won't be wrong either, but you are right about the figure being a square.
|
THE PACE OF SCIENCE -- THE DEVELOPMENT OF EXTENSIONS
Augustin-Louis Cauchy (1789-1857) published his famous inequality in 1821 in the second of two notes on the theory of inequalities that formed the final part of his book
Cours d'Analyse Algébrique, a volume which was perhaps the world's first rigorous calculus text. Oddly enough, Cauchy did not use his inequality in his text, except in some illustrative exercises. The first time Cauchy's inequality was applied in earnest by anyone was in 1829, when Cauchy used his inequality in an investigation of Newton's method for the calculation of the roots of algebraic and transcendental equations. This eight-year gap provides an interesting gauge of the pace of science; now, each month, there are hundreds--perhaps thousands--of new scientific publications where Cauchy's inequality is applied in one way or another.
A great many of those applications depend on a natural analog of Cauchy's inequality where sums are replaced by integrals,
$$ \int_{a}^{b} f(x)g(x) \, dx \leq \left(\int_{a}^{b} f^{2}(x)\, dx\right)^{\frac12}\left(\int_{a}^{b} g^{2}(x)\, dx\right)^{\frac12} \quad ... (*)$$
This bound first appeared in print in a
Mémoire by Victor Yacovlevich Bunyakovsky which was published by the Imperial Academy of Sciences of St. Petersburg in 1859. Bunyakovsky (1804-1889) had studied in Paris with Cauchy, and he was quite familiar with Cauchy's work on inequalities; so much so that by the time he came to write his Mémoire, Bunyakovsky was content to refer to the classical form of Cauchy's inequality for finite sums simply as well-known. Moreover, Bunyakovsky did not dawdle over the limiting process; he took only a single line to pass from Cauchy's inequality for finite sums to his continuous analog in $(*)$. By ironic coincidence, one finds that this analog is labelled as inequality $(\mathbf{C})$ in Bunyakovsky's Mémoire, almost as though Bunyakovsky had Cauchy in mind.
Bunyakovsky's
Mémoire was written in French, but it does not seem to have circulated widely in Western Europe. In particular, it does not seem to have been known in Göttingen in 1885 when Hermann Amandus Schwarz (1843-1921) was engaged in his fundamental work on the theory of minimal surfaces.
In the course of this work, Schwarz needed a two-dimensional integral analog of Cauchy's inequality. In particular, he needed to show that if $S \subseteq \mathbb{R}^{2}$ and $f \colon S \to \mathbb{R}$ and $g \colon S \to \mathbb{R}$, then the double integrals
$$ A = \iint_{S} f^{2} \, dxdy, \quad B = \iint_{S} fg \, dxdy \quad C = \iint_{S} g^{2} \, dxdy$$
must satisfy the inequality
$$ |B| \leq \sqrt{A} \cdot \sqrt{C}, $$
and Schwarz also needed to know that the inequality is strict unless the functions $f$ and $g$ are proportional.
An approach to this result via Cauchy's inequality would have been problematical for several reasons, including the fact that the strictness of a discrete inequality can be lost in the limiting passage to integrals. Thus, Schwarz had to look for an alternative path, and, faced with necessity, he discovered a proof whose charm has stood the test of time.
Schwarz based his proof on one striking observation. Specifically, he noted that the real polynomial
$$ p(t) = \iint_{S} \left(tf(x,y)+g(x,y)\right)^{2} \, dxdy = At^{2}+2Bt+C$$
is always nonnegative, and, moreover, $p(t)$ is strictly positive unless $f$ and $g$ are proportional. Since the quadratic $At^{2}+2Bt+C$ never takes a negative value, its discriminant cannot be positive, so the coefficients must satisfy $B^{2}\leq AC$, and unless $f$ and $g$ are proportional, one actually has the strict inequality $B^{2} < AC$. Thus, from a single algebraic insight, Schwarz found everything he needed to know.
Schwarz's proof requires the wisdom to consider the polynomial $p(t)$, but, granted that step, the proof is lightning quick. Moreover, ... Schwarz's argument can be used almost without change to prove the inner product form of Cauchy's inequality, and even there Schwarz's argument provides one with a quick understanding of the case of equality. Thus, there is little reason to wonder why Schwarz's argument has become a textbook favorite, even though it does require one to pull a rabbit--or at least a polynomial--out of a hat.
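Schwarz's bound is also easy to test numerically: for any two square-integrable functions, quadrature approximations of $A$, $B$, and $C$ satisfy $B^2 \le AC$. The test functions and the midpoint-rule grid below are arbitrary choices made for illustration:

```python
import math

def integrate2d(h, n=100):
    """Crude midpoint-rule approximation of the integral of h over [0,1]^2."""
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += h((i + 0.5) / n, (j + 0.5) / n)
    return total / (n * n)

# Two arbitrary, non-proportional test functions on S = [0,1]^2
def f(x, y):
    return math.sin(3 * x) + y

def g(x, y):
    return math.exp(x * y)

A = integrate2d(lambda x, y: f(x, y) ** 2)
B = integrate2d(lambda x, y: f(x, y) * g(x, y))
C = integrate2d(lambda x, y: g(x, y) ** 2)

# Schwarz's inequality |B| <= sqrt(A)*sqrt(C); strict here since f and g
# are not proportional
assert B * B < A * C
```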
THE NAMING OF THINGS -- ESPECIALLY INEQUALITIES
In light of the clear historical precedence of Bunyakovsky's work over that of Schwarz, the common practice of referring to the bound $(*)$ as Schwarz's inequality may seem unjust. Nevertheless, by modern standards, both Bunyakovsky and Schwarz might count themselves lucky to have their names so closely associated with such a fundamental tool of mathematical analysis. Except in unusual circumstances, one garners little credit nowadays for crafting a continuous analog to a discrete inequality, or vice versa...
Ultimately, one sees that inequalities get their names in a great variety of ways. Sometimes the name is purely descriptive, such as one finds with the triangle inequality... Perhaps, more often, an inequality is associated with the name of a mathematician, but even then there is no hard-and-fast rule to govern that association. Sometimes the inequality is named after the first finder, but other principles may apply--such as the framer of the final form, or the provider of the best known application.
If one were to insist on the consistent use of the rule of the first finder, then Hölder's inequality would become Rogers's inequality, Jensen's inequality would become Hölder's inequality, and only riotous confusion would result. The most practical rule--and the one used here--is simply to use the traditional names. Nevertheless, from time to time, it may be scientifically informative to examine the roots of those traditions.
|
arXiv:1906.04715 [math.AP] Asymptotic analysis of exit time for dynamical systems with a single well potential
Published 2019-06-11, Version 1
We study the exit time from a bounded multi-dimensional domain $\Omega$ of the stochastic process $\mathbf{Y}_\varepsilon=\mathbf{Y}_\varepsilon(t,a)$, $t\geqslant 0$, $a\in \mathcal{A}$, governed by the overdamped Langevin dynamics \begin{equation*} d\mathbf{Y}_\varepsilon =-\nabla V(\mathbf{Y}_\varepsilon) dt +\sqrt{2}\varepsilon\, d\mathbf{W}, \qquad \mathbf{Y}_\varepsilon(0,a)\equiv x\in\Omega \end{equation*} where $\varepsilon$ is a small positive parameter, $\mathcal{A}$ is a sample space, and $\mathbf{W}$ is an $n$-dimensional Wiener process. The exit time corresponds to the first hitting of $\partial\Omega$ by the trajectories of the above dynamical system, and the expectation of this exit time solves the boundary value problem \begin{equation*} (-\varepsilon^2\Delta +\nabla V\cdot \nabla)u_\varepsilon=1\quad\text{in}\quad\Omega,\qquad u_\varepsilon=0\quad\text{on}\quad\partial\Omega. \end{equation*} We assume that the function $V$ is smooth enough and has its only minimum at the origin (contained in $\Omega$); the minimum can be degenerate. At other points of $\Omega$ the gradient of $V$ is non-zero, and the normal derivative of $V$ at the boundary $\partial\Omega$ does not vanish either. Our main result is a complete asymptotic expansion for $u_\varepsilon$, as well as for the lowest eigenvalue of the considered problem and for the associated eigenfunction. The asymptotics for $u_\varepsilon$ involves a term exponentially large in $\varepsilon$; we find this term in closed form. Apart from this term, we also construct a power-in-$\varepsilon$ asymptotic expansion such that this expansion, together with the mentioned exponentially large term, approximates $u_\varepsilon$ up to an arbitrary power of $\varepsilon$. We also discuss some probabilistic aspects of our results.
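As a concrete illustration of this boundary value problem (my own sketch, not from the paper), one can solve the one-dimensional analogue $-\varepsilon^2 u'' + V'(x)\,u' = 1$ on $(-1,1)$ with $u(\pm 1)=0$, for the model potential $V(x)=x^2/2$, by central finite differences, and watch the expected exit time from the bottom of the well grow rapidly as $\varepsilon$ shrinks:

```python
import numpy as np

def mean_exit_time(eps, n=400):
    """Solve -eps^2 u'' + V'(x) u' = 1 on (-1, 1), u(+-1) = 0,
    for V(x) = x^2/2 (so V'(x) = x), by central finite differences.
    Returns u(0), the expected exit time started at the well bottom."""
    x = np.linspace(-1.0, 1.0, n + 1)
    h = x[1] - x[0]
    A = np.zeros((n - 1, n - 1))
    b = np.ones(n - 1)
    for i in range(1, n):                         # interior nodes x_1 .. x_{n-1}
        lower = -eps**2 / h**2 - x[i] / (2 * h)   # coefficient of u_{i-1}
        diag  =  2 * eps**2 / h**2                # coefficient of u_i
        upper = -eps**2 / h**2 + x[i] / (2 * h)   # coefficient of u_{i+1}
        if i > 1:
            A[i - 1, i - 2] = lower
        A[i - 1, i - 1] = diag
        if i < n - 1:
            A[i - 1, i] = upper                   # boundary values are zero
    u = np.linalg.solve(A, b)
    return u[n // 2 - 1]                          # u at x = 0

for eps in (0.5, 0.4, 0.3):
    print(eps, mean_exit_time(eps))
```

For this potential the exit time scales roughly like $e^{(V(1)-V(0))/\varepsilon^2}$, and the computed values of $u_\varepsilon(0)$ reproduce that rapid growth.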
|
A novel scheme suitable for the emulation of fractional-order capacitors and inductors of any order less than 2 is presented in this work. Classically, fractional-order impedances are characterized in the frequency domain by a fractional-order Laplacian of the form \(s^{\pm \alpha }\) with an order \(0<\alpha <1\). The ideal inductor and capacitor correspond, respectively, to setting \(\alpha =\pm 1\). In the range \(1<\alpha <2\), fractional-order impedances can still be obtained before turning into a Frequency-Dependent Negative Resistor (FDNR) at \(\alpha =\pm 2\). Here, we propose an electronically tunable fractional-order impedance emulator with adjustable order in the full range \(0<\alpha <2\). The values of the emulated capacitance/inductance, as well as the bandwidth of operation, are also electronically adjustable. The post-layout simulation results confirm the correct operation of the proposed circuits.
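The defining property of such a fractional-order (constant-phase) element is easy to check numerically: an ideal impedance \(Z(j\omega)=(j\omega)^{\alpha}\) has magnitude growing like \(\omega^{\alpha}\) (i.e. \(\alpha\cdot 20\) dB/decade on a Bode plot) and a phase pinned at \(\alpha\pi/2\) at every frequency. A short illustrative sketch, not taken from the paper:

```python
import numpy as np

alpha = 0.5                      # fractional order, here in (0, 1)
w = np.logspace(0, 6, 50)        # angular frequencies, rad/s
Z = (1j * w) ** alpha            # ideal fractional-order impedance

# Slope of log|Z| vs log(w) should equal alpha everywhere.
mag_slope = np.diff(np.log10(np.abs(Z))) / np.diff(np.log10(w))
phase = np.angle(Z)              # should be alpha * pi/2, frequency-independent

print(mag_slope[0])              # ~ alpha
print(np.degrees(phase[0]))      # ~ alpha * 90 degrees
```

The "constant phase" behavior is exactly what real emulator circuits approximate over a finite bandwidth.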
|
Journal of Modern Dynamics
ISSN: 1930-5311, eISSN: 1930-532X
Abstract:
This special issue presents some of the lecture notes of the courses held in the 2008 and 2011 Summer Institutes at the Mathematics Research and Conference Center of Polish Academy of Sciences at Będlewo, Poland. The school was structured as daily courses with a double lecture each, in two parts of 45-50 minutes with a break in between.
Abstract:
We review the Brin prize work of Artur Avila on Teichmüller dynamics and Interval Exchange Transformations. The paper is a nontechnical self-contained summary that intends to shed some light on Avila's early approach to the subject and on the significance of his achievements.
Abstract:
The field of one-dimensional dynamics, real and complex, emerged from obscurity in the 1970s and has been intensely explored ever since. It combines the depth and complexity of chaotic phenomena with a chance to fully understand it in probabilistic terms: to describe the dynamics of typical orbits for typical maps. It also revealed fascinating universality features that had never been noticed before. The interplay between real and complex worlds illuminated by beautiful pictures of fractal structures adds special charm to the field. By now, we have reached a full probabilistic understanding of real analytic unimodal dynamics, and Artur Avila has been the key player in the final stage of the story (which roughly started with the new century). To put his work into perspective, we will begin with an overview of the main events in the field from the 1970s up to the end of the last century. Then we will describe Avila's work on unimodal dynamics that effectively closed up the field. We will finish by describing his results in the closely related direction, the geometry of Feigenbaum Julia sets, including a recent construction of a new class of Julia sets of positive area.
Abstract:
Professor Michael Brin of the University of Maryland endowed an international prize for outstanding work in the theory of dynamical systems and related areas. The prize is given biennially for specific mathematical achievements that appear as a single publication or a series thereof in refereed journals, proceedings or monographs.
Abstract:
We compute the asymptotics, as $R$ tends to infinity, of the number $N(R)$ of closed geodesics of length at most $R$ in the moduli space of compact Riemann surfaces of genus $g$. In fact, $N(R)$ is the number of conjugacy classes of pseudo-Anosov elements of the mapping class group of a compact surface of genus $g$ of translation length at most $R$.
Abstract:
We show that given a fixed irrational rotation of the $d$-dimensional torus, any analytic $SL(2,\mathbb{R})$-cocycle can be perturbed in such a way that the Lyapunov exponent becomes positive. This result strengthens and generalizes previous results of Krikorian [6] and Fayad-Krikorian [5]. The key technique is the analyticity of $m$-functions (under the hypothesis of stability of zero Lyapunov exponents), first observed and used in the solution of the Ten-Martini Problem [2].
Abstract:
Given a hyperbolic matrix $H\in SL(2,\mathbb{R})$, we prove that for almost every $R\in SL(2,\mathbb{R})$, any product of length $n$ of $H$ and $R$ grows exponentially fast with $n$ provided the matrix $R$ occurs less than $o(\frac{n}{\log n\log\log n})$ times.
On measures invariant under diagonalizable actions: the Rank-One case and the general Low-Entropy method
Abstract:
We consider measures on locally homogeneous spaces $\Gamma \backslash G$ which are invariant and have positive entropy with respect to the action of a single diagonalizable element $a \in G$ by translations, and prove a rigidity statement regarding a certain type of measurable factors of this action.
This rigidity theorem, which is a generalized and more conceptual form of the low-entropy method of [14,3], is used to classify positive entropy measures invariant under a one-parameter group with an additional recurrence condition for $G=G_1 \times G_2$, with $G_1$ a rank one algebraic group. Further applications of this rigidity statement will appear in forthcoming papers.
Abstract:
The editors of the Journal of Modern Dynamics are happy to dedicate this issue to Gregory Margulis, who, over the last four decades, has influenced dynamical systems as deeply as few others have, and who has blazed broad trails in the application of dynamical systems to other fields of core mathematics.
Additional editors: Leonid Polterovich, Ralf Spatzier, Amie Wilkinson and Anton Zorich.
|
arXiv:1906.04719 [math.CO] The $h^*$-polynomials of locally anti-blocking lattice polytopes and their $γ$-positivity
Published 2019-06-11, Version 1
A lattice polytope $\mathcal{P} \subset \mathbb{R}^d$ is called a locally anti-blocking polytope if for any closed orthant $\mathbb{R}^d_{\varepsilon}$ in $\mathbb{R}^d$, $\mathcal{P} \cap \mathbb{R}^d_{\varepsilon}$ is unimodularly equivalent to an anti-blocking polytope by reflections of coordinate hyperplanes. In the present paper, we give a formula for the $h^*$-polynomials of locally anti-blocking lattice polytopes. In particular, we discuss the $\gamma$-positivity of the $h^*$-polynomials of locally anti-blocking reflexive polytopes.
Comments: 18 pages. Categories: math.CO
|
Can you all please solve this one?
I was originally thinking of $$\int_{-\infty}^{\infty} e^{-iax}\left(\frac{x^2}{n} + 1\right)^{-\frac12(n+1)} \,dx.$$
My mentor has told me that if $n$ is odd you can calculate this directly, and if $n$ is even you will get to use special functions.
If $n$ is odd: $$\int_{-\infty}^{\infty} e^{-iax}\left(\frac{x^2}{2k+1} + 1\right)^{-(k+1)} \,dx.$$ This case was done by Shashi by employing contour integration. Thanks.
If $n$ is even: $$\int_{-\infty}^{\infty} e^{-iax}\left(\frac{x^2}{2k} + 1\right)^{-\frac12(2k+1)} \,dx.$$ This is still unsolved. Pulling out the constant, \begin{align} I=(2k)^\frac{2k+1}{2}\int^\infty_{-\infty}\frac{e^{-iax}}{(x^2+2k)^\frac{2k+1}{2}}\,dx \end{align}
So it is enough to do the integral \begin{align} J = \int_\mathbb{R} \frac{e^{-iax}}{(x^2+b)^\frac{2k+1}{2}}\,dx \end{align} where $b = 2k > 0$. Consider the contour integral:
\begin{align} \oint_C \frac{e^{-iaz}}{(z^2+b)^\frac{2k+1}{2}}\,dz \end{align}
Shashi gave me some advice: since the exponent of the denominator is not an integer in the even case, one must be careful with the branch cut and all that. So this is a contour integration problem with branch cuts, but I don't know how to choose a proper contour for it...
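For what it's worth, the even case does have a closed form in terms of the modified Bessel function $K_\nu$, since the integrand is (up to normalization) a Student-$t$ density, whose characteristic function is known. Below is a numerical sanity check for $n=2$; the closed form $I(a)=4a\,K_1(\sqrt{2}\,a)$ used there is my own reduction, so treat it as a conjecture to verify rather than a quoted result:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

# Even case with n = 2 (k = 1):  I(a) = int e^{-iax} (x^2/2 + 1)^{-3/2} dx.
# The integrand is even in x, so I(a) = 2 * int_0^inf cos(ax) (x^2/2+1)^{-3/2} dx.
a = 1.0
numeric, _ = quad(lambda x: (x**2 / 2 + 1) ** (-1.5),
                  0, np.inf, weight='cos', wvar=a)   # QUADPACK Fourier routine
numeric *= 2

# Conjectured closed form via the Student-t characteristic function:
closed = 4 * a * kv(1, np.sqrt(2) * a)

print(numeric, closed)
```

A consistency check at $a\to 0$: $K_1(z)\sim 1/z$, so $4aK_1(\sqrt2 a)\to 2\sqrt2$, which matches the elementary value $\int(x^2/2+1)^{-3/2}dx = 2\sqrt{2}$.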
|
I understand that a function $f:X\to Y$ is surjective iff $\forall \, y \in Y, \exists x \in X$, such that $f(x)=y$
Basically, every element of $Y$ needs to have at least $1$ pre image.
Intuitively, I can show a function is surjective if I can show that Range = Codomain.
But given my limited skills, I'm not always able to find the range of every function. So, while looking up alternatives, I watched a video on YouTube that started off by expressing $x$ in terms of $y$ and then substituted this $x$ into $f(x)$ to show $f(x)=y$.
I don't know how correct this method is, but if it's correct, can anyone explain to me what is going on in it? And is this method always going to be right?
EDIT : I don't remember the video, but I'll just give my own self made example to show what exactly happened.
$f: \mathbb{R} \to \mathbb{R}$ and $f(x)= 2x+3$ clearly this is an onto function. But let's show it, using that method.
$x= \dfrac{y-3}{2}$ and $f\left(\dfrac{y-3}{2} \right) = y$, and it then concluded that $f$ is surjective.
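The method can be mirrored symbolically; here is a small sympy sketch of my own for the example above:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 2 * x + 3

# Step 1: solve f(x) = y for x -- this produces a candidate preimage of y.
preimage = sp.solve(sp.Eq(f, y), x)[0]

# Step 2: substitute back and confirm f(preimage) == y for every real y.
assert sp.simplify(f.subs(x, preimage) - y) == 0
print(preimage)   # y/2 - 3/2
```

The subtle point is step 1: the candidate preimage must actually lie in the domain for *every* $y$ in the codomain. That is exactly what can fail -- for $f(x)=x^2$ on $\mathbb{R}$, solving gives $x=\pm\sqrt{y}$, which is not real when $y<0$, and indeed that $f$ is not surjective.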
|
Let $f(x)=x^8-16$. Determine the Galois group of the splitting field of $f(x)$ over the field $K$ in each case.
a) $K=\Bbb{Q}$
b) $K=\Bbb{Z}_{17}$.
a) The roots of $f(x)$ are $\alpha$, $\alpha\omega$, $\alpha\omega^2$, ... ,$\alpha\omega^7$ where $\alpha=16^{1/8} = (2^4)^{1/8} = 2^{1/2}$ and $\omega$ is a primitive $8$th root of unity.
First we need to find the degree of the extension since it is equal to the order of the Galois group.
$[\Bbb{Q}(\alpha,\omega):\Bbb{Q}]=[\Bbb{Q}(\alpha,\omega):\Bbb{Q}(\omega)][\Bbb{Q}(\omega):\Bbb{Q}]$.
$[\Bbb{Q}(\omega):\Bbb{Q}]=\deg \Psi_8(x)$, where $\Psi_k(x)$ is the $k$th cyclotomic polynomial.
We know that
$$x^8-1 = \Psi_1(x)\Psi_2(x)\Psi_4(x)\Psi_8(x)$$
$$\implies x^8-1 = (x-1)(x+1)(x^2+1)\Psi_8(x)$$
$$\implies \Psi_8(x)=x^4+1$$
Since cyclotomic polynomials are irreducible over $\Bbb{Q}$, we know that $[\Bbb{Q}(\omega):\Bbb{Q}]=4$.
Now we need to find $[\Bbb{Q}(\omega,\alpha):\Bbb{Q}(\omega)]$.
Since $\alpha$ is a root of $x^8-16$, the minimal polynomial must be a divisor of it. We have $x^8-16 = (x^2-2)(x^2+2)(x^4+4)$. Here, we can see that $\alpha$ is a root of $x^2-2$, so we just need to check if $x^2-2$ is reducible. But $x^2-2=(x-\sqrt{2})(x+\sqrt{2})$, and we know that $\sqrt{2} \not\in \Bbb{Q}(\omega)$ since it is irrational. So $[\Bbb{Q}(\omega,\alpha):\Bbb{Q}(\omega)]=2 \implies [K:\Bbb{Q}]=8$
There are five groups of order 8 up to isomorphism:
$\bullet \Bbb{Z}_8$
$\bullet \Bbb{Z}_4 \times \Bbb{Z}_2$
$\bullet \Bbb{Z}_2 \times \Bbb{Z}_2 \times \Bbb{Z}_2$
$\bullet D_8$
$\bullet Q_8$
Since $x^8-16 = (x^2-2)(x^2+2)(x^4+4)$, the automorphisms are
$$\sigma_1: \alpha \rightarrow -\alpha$$
$$\sigma_2: \alpha\omega^2 \rightarrow \alpha\omega^6$$
$$\sigma_3: \alpha\omega \rightarrow \alpha\omega^3$$
$$\sigma_4: \alpha\omega \rightarrow \alpha\omega^5$$
$$\sigma_5: \alpha\omega \rightarrow \alpha\omega^7$$
$$\sigma_6: \alpha\omega^3 \rightarrow \alpha\omega^5$$
$$\sigma_7: \alpha\omega^3 \rightarrow \alpha\omega^7$$
$$\sigma_8: \alpha\omega^5 \rightarrow \alpha\omega^7$$
I guess I can tell which group of order 8 this is isomorphic to by direct computation, but I was kind of confused...for example, let's say I want to check if $\sigma_3\sigma_2(\alpha\omega) = \sigma_2\sigma_3(\alpha\omega)$. But $\sigma_2$ takes $\alpha\omega^2$ to $\alpha\omega^6$. But we don't have $\alpha\omega^2$, we have $\alpha\omega$. If we just raise it to the 3rd power, then what's the difference between $\sigma_2$ and $\sigma_3$? So that doesn't really make sense to me...
Also is there an easier way find out which group it's isomorphic to without actually having to directly go through all the elements and subgroups of the Galois group?
b) As in part a), $[\Bbb{Z_{17}}(\alpha,\omega):\Bbb{Z_{17}}]=[\Bbb{Z_{17}}(\alpha,\omega):\Bbb{Z_{17}}(\omega)][\Bbb{Z_{17}}(\omega):\Bbb{Z_{17}}]$. So, again, we need to check if $\Psi_8(x) = x^4+1$ is irreducible over $\Bbb{Z}_{17}$. In other words, since $x^4+1=0 \implies x^4=-1 \implies x^8=1$, we need to check if there are elements of order 8 in $\Bbb{Z}_{17}^{\times}$.
We know that the multiplicative group of $\Bbb{Z}_{17}$ is cyclic of order $16$. So it contains a cyclic subgroup of order 8 $\implies$ the minimal polynomial of $\omega$ over $\Bbb{Z}_{17}$ is of degree 1 $\implies$ $[\Bbb{Z_{17}}(\omega):\Bbb{Z_{17}}]=1$.
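This can be confirmed by brute force (an illustrative check of my own) that $\Bbb{Z}_{17}^{\times}$ contains elements of order $8$, i.e. that $x^4+1$ already has roots in $\Bbb{Z}_{17}$:

```python
# Multiplicative orders in Z_17^x, and roots of x^4 + 1 (mod 17).
p = 17
orders = {a: min(k for k in range(1, p) if pow(a, k, p) == 1)
          for a in range(1, p)}
roots = [a for a in range(1, p) if pow(a, 4, p) == p - 1]  # a^4 = -1 (mod 17)

print(sorted(a for a, k in orders.items() if k == 8))  # elements of order 8
print(roots)                                           # roots of x^4 + 1
```

Both lists coincide: an element satisfies $a^4\equiv -1$ exactly when it has order $8$, so $x^4+1$ splits over $\Bbb{Z}_{17}$ and $\omega$ already lies in the base field.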
For $[\Bbb{Z_{17}}(\alpha,\omega):\Bbb{Z_{17}}(\omega)]$, we get the same polynomial as we did for part a), which is not reducible since $x^2-2=(x-\sqrt{2})(x+\sqrt{2})$ and $\sqrt{2} \not\in \Bbb{Z}_{17}$. So $[\Bbb{Z_{17}}(\alpha,\omega):\Bbb{Z_{17}}]=2$, implying that the Galois group is isomorphic to a cyclic group of order 2.
Is my answer for part b), correct?
Thanks in advance
|
What do you mean by "pursue"? If you mean trying to produce a full-fledged theory of quantum gravity that can be "directly added" to the standard model, it is most likely not worth pursuing. See, for instance, this "proof".
But often, we can use theories which are actually inconsistent, but can give some meaningful results that can be used to test more complete theories. An obvious example that comes to my mind is Supergravity.
Similarly, trying to quantise gravity like any other force can actually yield meaningful results, making it certainly worthwhile to pursue. For instance, one may calculate graviton-graviton scattering amplitudes at tree level and check whether the corresponding string-theoretic prediction reduces to this in the limit \(\alpha'\to0\); the exact factor by which one multiplies the stringy prediction to get the field theory prediction is:
\( \frac{{\Gamma \left( {1 + \frac{{\alpha '}}{4}s} \right)\Gamma \left( {1 + \frac{{\alpha '}}{4}t} \right)\Gamma \left( {1 + \frac{{\alpha '}}{4}u} \right)}}{{\Gamma \left( {1 - \frac{{\alpha '}}{4}s} \right)\Gamma \left( {1 - \frac{{\alpha '}}{4}t} \right)\Gamma \left( {1 - \frac{{\alpha '}}{4}u} \right)}}\)
Take the field-theoretic limit and this approaches 1 (obviously) with no dependence on $s$, $t$, or $u$. See Mohaupt's lecture notes on string theory for more details.
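The \(\alpha'\to 0\) claim is easy to verify symbolically; a small sympy check of my own:

```python
import sympy as sp

ap, s, t, u = sp.symbols("alpha_prime s t u", positive=True)

# Ratio of Gamma functions multiplying the field-theory amplitude.
ratio = (sp.gamma(1 + ap * s / 4) * sp.gamma(1 + ap * t / 4) * sp.gamma(1 + ap * u / 4)) / \
        (sp.gamma(1 - ap * s / 4) * sp.gamma(1 - ap * t / 4) * sp.gamma(1 - ap * u / 4))

# At alpha' = 0 every factor is Gamma(1) = 1, so the stringy correction
# factor collapses to 1 and the field-theory amplitude is recovered.
print(ratio.subs(ap, 0))   # 1
```

On shell, $s+t+u=0$ for massless gravitons, so the linear (and squared-linear) terms in the expansion cancel and the first nonvanishing correction to this factor appears at order \(\alpha'^3\), with a coefficient involving \(\zeta(3)\).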
So to answer your question, yes it is worthwhile, but not as a theory of quantum gravity on its own right.
|
Extension of generalized solidarity values to interval-valued cooperative games
a. School of Economics and Management, Fuzhou University, Fuzhou, Fujian 350108, China
b. School of Architecture, Fuzhou University, Fuzhou, Fujian 350108, China
The main purpose of this paper is to extend the concept of generalized solidarity values to interval-valued cooperative games and thereby develop a simplified and fast approach for solving a subclass of interval-valued cooperative games. In this paper, we find some weaker coalition monotonicity-like conditions so that the generalized solidarity values of the $ \alpha $-cooperative games associated with interval-valued cooperative games are always monotonic and non-decreasing functions of any parameter $ \alpha \in [0,1] $. Thereby the interval-valued generalized solidarity values can be directly and explicitly obtained by computing their lower and upper bounds, using only the lower and upper bounds of the interval-valued coalitions' values, respectively. The developed method does not use interval subtraction and hence effectively avoids the issues resulting from it. Furthermore, we discuss the effect of the parameter $ \xi $ on the interval-valued generalized solidarity values of interval-valued cooperative games and some significant properties of interval-valued generalized solidarity values.
Keywords: Cooperative game, interval-valued cooperative game, solidarity value, interval computing, uncertainty. Mathematics Subject Classification: Primary: 91A12. Citation: Deng-Feng Li, Yin-Fang Ye, Wei Fei. Extension of generalized solidarity values to interval-valued cooperative games. Journal of Industrial & Management Optimization, doi: 10.3934/jimo.2018185
|
This one is obviously a little messy, but there is a simple methodology by which the volume may be reduced to a single integral. The idea is to compute the cross-sectional area $A(z)$ for each value of $z$ in the volume, and then integrate over $z$ to get the volume.
First, we need the bounds in $z$. This is done by observing that the relevant $z$ vertex of the ellipsoid is at $z=-2 \sqrt{6}$. In between, it should be clear that each cross section is an intersection of two ellipses centered at the origin. The first has equation
$$\frac{x^2}{z^2/3} + \frac{y^2}{z^2/5} = 1$$
the other has equation
$$\frac{x^2}{\frac{80}{3} \left(1-\frac{z^2}{24}\right)} + \frac{y^2}{\frac{8}{5} \left(1-\frac{z^2}{24}\right)}=1$$
There are three different scenarios: one in which the first ellipse is entirely within the second, one in which the second is entirely within the first, and one in which they stick out of each other. The intervals in $z$ where these different scenarios occur may be determined by equating the vertices in $x$ and $y$, respectively, in the first and second ellipses. The result is three intervals in $z$:
I: $z \in \left[-2 \sqrt{6},-\sqrt{\frac{240}{13}}\right]$ II: $z \in \left[-\sqrt{\frac{240}{13}},-\sqrt{6}\right]$ III: $z \in \left[-\sqrt{6},0\right]$
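As a quick numerical check on these endpoints (reading the squared semi-axes off the two ellipse equations above), equating the $x$- and $y$-vertices recovers the claimed values of $z^2$:

```python
import math

# squared semi-axes of the two cross-sectional ellipses, as functions of z^2
first_x2  = lambda z2: z2 / 3                      # first ellipse, x-vertex^2
second_x2 = lambda z2: (80 / 3) * (1 - z2 / 24)    # second ellipse, x-vertex^2
first_y2  = lambda z2: z2 / 5                      # first ellipse, y-vertex^2
second_y2 = lambda z2: (8 / 5) * (1 - z2 / 24)     # second ellipse, y-vertex^2

z2_x = 240 / 13   # claimed z^2 at which the x-vertices coincide
z2_y = 6.0        # claimed z^2 at which the y-vertices coincide

print(math.isclose(first_x2(z2_x), second_x2(z2_x)))  # True
print(math.isclose(first_y2(z2_y), second_y2(z2_y)))  # True
```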
In interval I, the area is simply the area of the ellipsoid cross section:
$$A(z) = \frac{8 \sqrt{6}}{3} \pi \left(1-\frac{z^2}{24}\right)$$
(Note that I used the formula $A=\pi a b$ as the area of the ellipse $(x^2/a^2)+(y^2/b^2)=1$.)
In interval III, the area is simply the area of the conical cross section:
$$A(z) = \frac{\pi}{\sqrt{15}} z^2$$
That leaves interval II, in which there is an intersection:
The cross sectional area is broken up into 2 pieces: one bounded by the first ellipse at the sides, and another bounded by the second ellipse in the center. The area is then
$$4 \int_{x_0(z)}^{|z|/\sqrt{3}} dx \: \frac{1}{\sqrt{5}}\sqrt{z^2-3 x^2} + 2 \int_{-x_0(z)}^{x_0(z)} dx \: \sqrt{\frac{8}{5}} \sqrt{1-\frac{z^2}{24}-3 \frac{x^2}{80}}$$
where
$$x_0(z)=\frac{\sqrt{40 z^2-240}}{9}$$
is the positive $x$ at which the two ellipses intersect (obtained by equating the two expressions for $y^2$ above).
|
The CAPM is an economic theory that expected returns in excess of the risk free rate should be linear in the regression beta on the market.
$$ \operatorname{E}[R_i - R^f] = \beta_i \operatorname{E}[R^m - R^f]$$
Graphically, it would look like this:
As market beta increases, expected returns increase.
Testing the CAPM with a cross-sectional regression
Conceptually, what Fama and Macbeth wanted to do was:
For each portfolio $i=1, \ldots, n$, run a time series regression to get market beta $\beta_i$. Test the CAPM with a cross-sectional regression of $\operatorname{E}[R_i - R^f]$ on $\beta_i$ using the $n$ securities. That is, run the regression:
$$ \bar{R_i} - R^f = \gamma_0 + \gamma_1 \beta_i + \epsilon_i$$
If you're a statistician/econometrician, you'll realize that naively running that regression will have a HUGE problem with inconsistent standard errors, because returns are cross-sectionally correlated!
A modern approach to consistently estimate standard errors might be to run the following panel regression and cluster by time $t$:
$$ R_{it} - R^f_t = \gamma_0 + \gamma_1 \beta_i + \epsilon_{it}$$
What Fama and Macbeth did back in the 1970s was develop an intuitive procedure to estimate consistent standard errors in the presence of cross-sectional correlation. For each time period $t$, they ran the cross-sectional regression:
$$ R_{it} - R^f_t = \gamma_{0,t} + \gamma_{1,t} \beta_i + \epsilon_{it}$$
They then assumed each time period was independent (broadly reasonable), so that $\gamma_{1,t}$ and $\gamma_{0,t}$ form an IID time series, and you can take time-series averages and calculate standard errors in the usual Statistics 1 way.
$$\hat{\gamma}_1 = \frac{1}{T} \sum_t \hat{\gamma}_{1,t} \quad \quad \hat{\operatorname{Var}}(\hat{\gamma}_1) = \frac{1}{T-1} \sum_t (\hat{\gamma}_{1,t} - \hat{\gamma}_1)^2$$
etc...
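To make the procedure concrete, here is a small self-contained simulation (the betas, premia, and noise scales below are made up purely for illustration): per-period cross-sectional slopes are averaged, and their time-series spread gives the standard error, exactly as described above.

```python
import math
import random
import statistics

random.seed(7)
n_assets, T = 50, 200
gamma0, gamma1 = 0.0, 0.5                      # "true" premia for the simulation
betas = [random.uniform(0.5, 1.5) for _ in range(n_assets)]
beta_bar = sum(betas) / n_assets
var_beta = sum((b - beta_bar) ** 2 for b in betas) / n_assets

slopes = []
for _ in range(T):
    f = random.gauss(0.0, 1.0)                 # common shock: cross-sectional correlation
    R = [gamma0 + gamma1 * b + f + random.gauss(0.0, 0.5) for b in betas]
    R_bar = sum(R) / n_assets
    cov = sum((b - beta_bar) * (r - R_bar) for b, r in zip(betas, R)) / n_assets
    slopes.append(cov / var_beta)              # period-t cross-sectional OLS slope

gamma1_hat = statistics.mean(slopes)           # Fama-MacBeth point estimate
se = statistics.stdev(slopes) / math.sqrt(T)   # Fama-MacBeth standard error
```

Note that the per-period intercept absorbs the common shock exactly, which is why the slope estimate stays consistent despite the cross-sectional correlation.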
Assumptions of the first stage?
If by "first stage" you are referring to the time-series regression:
$$ R_{it} - R^f_t = \alpha_i + \beta_i \left( R^m_t - R^f_t \right) + \epsilon_{it} $$
The classic assumptions employed by Fama were that each time period is independent and that the joint distribution of returns is multivariate normal, thereby making any regression of returns on returns a well specified regression.
You can relax these assumptions if you rely on asymptotic assumptions. Let $\mathbf{x}_t = \begin{bmatrix}1 \\ R^m_t - R^f_t \end{bmatrix}$ and $y_t = R_t - R^f_t$. Following Hayashi's
Econometrics (p. 133), the assumptions would be: (2.1) linearity: $y_t = \mathbf{x}_t \cdot \boldsymbol{\beta} + \epsilon_t$, (2.2) ergodic stationarity of $(y_t, \mathbf{x}_t)$, (2.3) predetermined regressors (i.e. regressors orthogonal to the contemporaneous error term), (2.4) $\operatorname{E}[\mathbf{x} \mathbf{x}']$ is full rank, and (2.5) $\mathbf{x}_t \epsilon_t$ is a martingale difference sequence.
References
Hayashi, Fumio, Econometrics, 2000, Princeton University Press
|
Difference between revisions of "Group cohomology of elementary abelian group of prime-square order"
(→Over an abelian group)
The homology groups with coefficients in an abelian group <math>M</math> are given as follows:
<math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>
Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>.
Revision as of 19:04, 24 October 2011
Homology groups
Over the integers
The first few homology groups are given below (<math>H_0 = \mathbb{Z}</math>; the entry shown is the rank of <math>H_q</math> as an elementary abelian <math>p</math>-group):
<math>q</math>: 1, 2, 3, 4, 5
rank: 2, 1, 3, 2, 4
Over an abelian group
The homology groups with coefficients in an abelian group <math>M</math> are given as follows:
<math>H_q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q+3)/2} \oplus (\operatorname{Ann}_M(p))^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{q/2} \oplus (\operatorname{Ann}_M(p))^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>
Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>.
These homology groups can be computed in terms of the homology groups over integers using the universal coefficients theorem for group homology.
Important case types for abelian groups
* <math>M</math> is uniquely <math>p</math>-divisible, i.e., every element of <math>M</math> can be divided uniquely by <math>p</math> (this includes the case that <math>M</math> is a field of characteristic not <math>p</math>): the odd-indexed and even-indexed homology groups are all zero.
* <math>M</math> is <math>p</math>-torsion-free, i.e., no nonzero element of <math>M</math> multiplies by <math>p</math> to give zero: here <math>\operatorname{Ann}_M(p) = 0</math>, so only the <math>M/pM</math> factors in the general formula survive.
* <math>M</math> is <math>p</math>-divisible, but not necessarily uniquely so: here <math>M/pM = 0</math>, so only the <math>\operatorname{Ann}_M(p)</math> factors survive.
* <math>M</math> is a finite abelian group: each <math>H_q</math> with <math>q \ge 1</math> is isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{s(q+1)}</math>, where <math>s</math> is the rank (i.e., minimum number of generators) of the <math>p</math>-Sylow subgroup of <math>M</math>.
* <math>M</math> is a finitely generated abelian group: the homology groups are determined by <math>s</math>, the rank of the <math>p</math>-Sylow subgroup of the torsion part of <math>M</math>, and the free rank (i.e., the rank as a free abelian group of the torsion-free part) of <math>M</math>.
The cohomology groups with coefficients in the integers are given as below:
<math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};\mathbb{Z}) = \left\lbrace\begin{array}{rl} (\mathbb{Z}/p\mathbb{Z})^{(q-1)/2}, & \qquad q = 1,3,5,\dots\\ (\mathbb{Z}/p\mathbb{Z})^{(q+2)/2}, & \qquad q = 2,4,6,\dots \\ \mathbb{Z}, & \qquad q = 0 \\\end{array}\right.</math>
The first few cohomology groups are given below (<math>H^0 = \mathbb{Z}</math>; the entry shown is the rank of <math>H^q</math> as an elementary abelian <math>p</math>-group):
<math>q</math>: 1, 2, 3, 4, 5
rank: 0, 2, 1, 3, 2
Over an abelian group
The cohomology groups with coefficients in an abelian group <math>M</math> are given as follows:
<math>H^q(\mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z};M) = \left\lbrace\begin{array}{rl} (M/pM)^{(q-1)/2} \oplus (\operatorname{Ann}_M(p))^{(q+3)/2}, & \qquad q = 1,3,5,\dots\\ (M/pM)^{(q+2)/2} \oplus (\operatorname{Ann}_M(p))^{q/2}, & \qquad q = 2,4,6,\dots \\ M, & \qquad q = 0 \\\end{array}\right.</math>
Here, <math>M/pM</math> is the quotient of <math>M</math> by <math>pM = \{ px \mid x \in M \}</math> and <math>\operatorname{Ann}_M(p) = \{ x \in M \mid px = 0 \}</math>.
These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology.
Important case types for abelian groups
* <math>M</math> is uniquely <math>p</math>-divisible, i.e., every element of <math>M</math> can be divided by <math>p</math> uniquely (this includes the case that <math>M</math> is a field of characteristic not <math>p</math>): the odd-indexed and even-indexed cohomology groups are all zero.
* <math>M</math> is <math>p</math>-torsion-free, i.e., no nonzero element of <math>M</math> multiplies by <math>p</math> to give zero: here <math>\operatorname{Ann}_M(p) = 0</math>, so only the <math>M/pM</math> factors in the general formula survive.
* <math>M</math> is <math>p</math>-divisible, but not necessarily uniquely so: here <math>M/pM = 0</math>, so only the <math>\operatorname{Ann}_M(p)</math> factors survive.
* <math>M</math> is a finite abelian group: each <math>H^q</math> with <math>q \ge 1</math> is isomorphic to <math>(\mathbb{Z}/p\mathbb{Z})^{s(q+1)}</math>, where <math>s</math> is the rank (i.e., minimum number of generators) of the <math>p</math>-Sylow subgroup of <math>M</math>.
* <math>M</math> is a finitely generated abelian group: the cohomology groups are determined by <math>s</math>, the rank of the <math>p</math>-Sylow subgroup of the torsion part of <math>M</math>, and the free rank (i.e., the rank as a free abelian group of the torsion-free part) of <math>M</math>.
|
While answering a question about the MacLaurin Series Expansion of a composite function I noticed something strange I can not explain to myself. The task was to verify that the MacLaurin Series Expansion of $\ln(1+\sin x)$ is up to the fourth term given by
$$\ln(1+\sin x)=x-\frac{x^2}2+\frac{x^3}6-\frac{x^4}{12}+\left(\frac{x^5}{24}-\frac{x^6}{45}+\frac{61x^7}{5040}-\frac{17x^8}{2520}+\frac{277x^9}{72576}-\cdots\right)$$
Not that hard actually. However, the OP tried something which made me smile at first but which, on closer examination, confused me right away. Instead of computing the derivatives and evaluating them at $0$ he decided to just plug in the expansion of $\sin x$ as the argument of the expansion of $\log(1+x)$. Well, apparently this works out; at least for a few terms!
Starting by using both expansions, i.e. the one of the sine and the one of the logarithm, up to the fifth term we obtain $$\ln(1+\sin x)=x-\frac{x^2}2+\frac{x^3}6-\frac{x^4}{12}+\frac{x^5}{24}+\color{red}{\frac{13x^6}{90}+\cdots}$$ I have marked the first erroneous term. At this point I thought it was only a coincidence that this naïve approach leads to the right solution. So I tried the same with more terms, to be precise with both expansions up to the ninth term, from which I got $$\ln(1+\sin x)=x-\frac{x^2}2+\frac{x^3}6-\frac{x^4}{12}+\frac{x^5}{24}-\frac{x^6}{45}+\frac{61x^7}{5040}-\frac{17x^8}{2520}+\frac{277x^9}{72576}+\color{red}{\frac{2773x^{10}}{28350}+\cdots}$$ Again, I marked the first erroneous term. Notice that it is the tenth term, i.e. the first one for which the truncated input series no longer carry correct information. At least the accuracy of the so-obtained series seems reasonable to me. However, I am totally confused by the fact that this ridiculously straightforward approach works out.
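The experiment described above is easy to replicate with exact rational arithmetic (plain Python, no CAS needed); composing the full degree-9 truncations reproduces the coefficients exactly:

```python
from fractions import Fraction as F

DEG = 9  # truncate all polynomials after x^9

def mul(p, q):
    # product of two coefficient lists, truncated at degree DEG
    r = [F(0)] * (DEG + 1)
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if b and i + j <= DEG:
                    r[i + j] += a * b
    return r

# sin x up to x^9
u = [F(0)] * (DEG + 1)
u[1], u[3], u[5], u[7], u[9] = F(1), F(-1, 6), F(1, 120), F(-1, 5040), F(1, 362880)

# ln(1+u) = u - u^2/2 + u^3/3 - ... with u = sin x
result = [F(0)] * (DEG + 1)
power = [F(0)] * (DEG + 1)
power[0] = F(1)                                  # u^0 = 1
for k in range(1, DEG + 1):
    power = mul(power, u)                        # u^k
    sign = 1 if k % 2 == 1 else -1
    result = [r + F(sign, k) * c for r, c in zip(result, power)]

print(result[4], result[6], result[9])  # -1/12 -1/45 277/72576
```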
Quite simple question: Why? Furthermore, is this actually used somewhere?
Thanks in advance!
|
I am reading Rebonato's Volatility and Correlation (2nd Edition) and I think it's a great book. I'm having difficulty trying to derive a formula he used that he described as the expression for standard deviation in a simple binomial replication example:
\begin{eqnarray}\sigma_S\sqrt{\Delta t}=\frac{\ln S_2-\ln S_1}{2}\end{eqnarray}
This expression is equation (2.48) on page 45. You can read that page and get some context from Google Books: http://goo.gl/uDgYg3
I understand continuous compounding is used in the example, if that helps any. It's a little confusing because the equations he listed a few pages above (pg.43; not available in Google Books) use a discrete rate of return, not continuous compounding. But in any case, this discrepancy does not seem to provide any hint as to how the standard deviation is obtained.
Any help is much appreciated.
|
I don't understand the difference between the parameters $C$ and $\lambda$ in the context of the SVM. It seems to me that they are both involved in regulating over-fitting of the data.
What is the difference between $C$ and $\lambda$?
Both are regularisation hyperparameters, related by $C \sim \frac{1}{\lambda}$. Which one you use will depend on the formulation of SVM that you're using. As mentioned on one of your question comments, it would be good to read into primal and dual statements for SVMs if you're feeling a bit lost - Chapter 12 in Elements of Statistical Learning covers this.
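A small numerical sketch of that correspondence (the toy data, step sizes, and the exact convention $\lambda = 1/(2Cn)$ are my own assumptions, tied to one common primal formulation, not part of the answer above): gradient descent on $\tfrac12\|w\|^2 + C\sum_i \text{hinge}_i$ and on $\lambda\|w\|^2 + \tfrac1n\sum_i \text{hinge}_i$ yields the same weights when the step sizes are matched, because the two objectives differ only by the constant factor $Cn$.

```python
import random

random.seed(0)
# toy 2D data: two Gaussian blobs with labels +1 / -1
X = [(random.gauss(2, 1), random.gauss(2, 1)) for _ in range(20)] + \
    [(random.gauss(-2, 1), random.gauss(-2, 1)) for _ in range(20)]
Y = [1] * 20 + [-1] * 20
n, C = len(X), 0.7
lam = 1.0 / (2 * C * n)          # the claimed correspondence lambda ~ 1/C

def hinge_grad(w):
    # subgradient of sum_i max(0, 1 - y_i <w, x_i>)
    g = [0.0, 0.0]
    for (x0, x1), y in zip(X, Y):
        if y * (w[0] * x0 + w[1] * x1) < 1:
            g[0] -= y * x0
            g[1] -= y * x1
    return g

eta = 1e-3
w1 = [0.0, 0.0]                  # minimize 0.5*||w||^2 + C * sum hinge
w2 = [0.0, 0.0]                  # minimize lam*||w||^2 + (1/n) * sum hinge
for _ in range(2000):
    h1, h2 = hinge_grad(w1), hinge_grad(w2)
    w1 = [w1[k] - eta * (w1[k] + C * h1[k]) for k in range(2)]
    # same objective scaled by 1/(C*n), so scale the step by C*n
    w2 = [w2[k] - eta * C * n * (2 * lam * w2[k] + h2[k] / n) for k in range(2)]

print(max(abs(a - b) for a, b in zip(w1, w2)))  # essentially zero
```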
|
Definition:Upper Closure Contents Definition
Let $\left({S, \preccurlyeq}\right)$ be an ordered set.
Let $a \in S$.
The upper closure of $a$ (in $S$) is defined as: $a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$
Let $T \subseteq S$.
The upper closure of $T$ (in $S$) is defined as: $T^\succcurlyeq := \bigcup \left\{{t^\succcurlyeq: t \in T}\right\}$
where $t^\succcurlyeq$ denotes the upper closure of $t$ in $S$.
That is:
$T^\succcurlyeq := \left\{{u \in S: \exists t \in T: t \preccurlyeq u}\right\}$
The related notations are:
$a^\preccurlyeq := \left\{{b \in S: b \preccurlyeq a}\right\}$: the lower closure of $a \in S$: everything in $S$ that precedes $a$
$a^\succcurlyeq := \left\{{b \in S: a \preccurlyeq b}\right\}$: the upper closure of $a \in S$: everything in $S$ that succeeds $a$
$a^\prec := \left\{{b \in S: b \preccurlyeq a \land a \ne b}\right\}$: the strict lower closure of $a \in S$: everything in $S$ that strictly precedes $a$
$a^\succ := \left\{{b \in S: a \preccurlyeq b \land a \ne b}\right\}$: the strict upper closure of $a \in S$: everything in $S$ that strictly succeeds $a$
$\displaystyle T^\preccurlyeq := \bigcup \left\{{t^\preccurlyeq: t \in T}\right\}$: the lower closure of $T \subseteq S$: everything in $S$ that precedes some element of $T$
$\displaystyle T^\succcurlyeq := \bigcup \left\{{t^\succcurlyeq: t \in T}\right\}$: the upper closure of $T \subseteq S$: everything in $S$ that succeeds some element of $T$
$\displaystyle T^\prec := \bigcup \left\{{t^\prec: t \in T}\right\}$: the strict lower closure of $T \subseteq S$: everything in $S$ that strictly precedes some element of $T$
$\displaystyle T^\succ := \bigcup \left\{{t^\succ: t \in T}\right\}$: the strict upper closure of $T \subseteq S$: everything in $S$ that strictly succeeds some element of $T$.
The astute reader may point out that, for example, $a^\preccurlyeq$ is ambiguous as to whether it means:
The lower closure of $a$ with respect to $\preccurlyeq$
The upper closure of $a$ with respect to the dual ordering $\succcurlyeq$
By Lower Closure is Dual to Upper Closure and Strict Lower Closure is Dual to Strict Upper Closure, the two are seen to be equal.
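As a concrete illustration (a small example supplied here, not part of the page above), take the divisors of $12$ ordered by divisibility:

```python
# S = divisors of 12, ordered by divisibility: a "precedes" b iff a divides b
S = {1, 2, 3, 4, 6, 12}
leq = lambda a, b: b % a == 0

def upper_closure(a):
    # the upper closure of a: everything in S that succeeds a
    return {b for b in S if leq(a, b)}

def upper_closure_of_set(T):
    # the upper closure of T: union of the upper closures of its elements
    return {u for u in S if any(leq(t, u) for t in T)}

print(sorted(upper_closure(2)))              # [2, 4, 6, 12]
print(sorted(upper_closure_of_set({3, 4})))  # [3, 4, 6, 12]
```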
Also denoted as
Other notations for closure operators include:
${\downarrow} a, {\bar \downarrow} a$ for lower closure of $a \in S$ ${\uparrow} a, {\bar \uparrow} a$ for upper closure of $a \in S$ ${\downarrow} a, {\dot \downarrow} a$ for strict lower closure of $a \in S$ ${\uparrow} a, {\dot \uparrow} a$ for strict upper closure of $a \in S$
However, as there is considerable inconsistency in the literature as to exactly which of these arrow notations is being used at any one time, its use is not endorsed on $\mathsf{Pr} \infty \mathsf{fWiki}$.
Also known as
An upper closure can also be referred to as a weak upper closure to distinguish it from a strict upper closure.
|
First find the molality of solute particles. Assume i = 1; we're just counting how many things are dissolved and it doesn't matter whether they are ions or molecules at this point. I'm using m for mass and b for molality throughout to avoid confusion.
$\Delta T = ibk_f\implies\frac{\Delta T}{k_f}=b$
$b=\frac{0.768}{1.86}=\pu{0.413 molal}$
Next find how many moles of particles were dissolved.
$b = \frac{n_{\text{solute}}}{m_{\text{solvent}}}$
$n_{\text{solute}}=b\times m_{\text{solvent}}=0.413\times 0.150 = 0.0619~\text{moles of particles}$
Now determine the mass of each species based on this total number of moles.
$n_{\text{particles}}=3n_{\ce{Zn(NO3)2}}+n_{\ce{C12H22O11}}$
$n_{\text{particles}}=3\frac{m_{\ce{Zn(NO3)2}}}{M_{\ce{Zn(NO3)2}}}+\frac{m_{\ce{C12H22O11}}}{M_{\ce{C12H22O11}}}$
You know the total number of moles of particles, 0.0619, and you know the molar masses of both zinc nitrate and sucrose, but you don't know either the actual mass of zinc nitrate or of sucrose. However, you do have one other piece of information that allows you to set up a system of equations.
$m_{\text{solute}}=m_{\ce{Zn(NO3)2}}+m_{\ce{C12H22O11}}=\pu{4g}$
Solving this system of equations should give you the masses of both solutes.
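Sketching that last step numerically (the molar masses below are approximate values supplied here, not given in the problem statement):

```python
M_zn, M_suc = 189.40, 342.30   # approx. molar masses of Zn(NO3)2 and sucrose, g/mol
n_particles = 0.0619           # mol of dissolved particles, from above
m_total = 4.0                  # g of mixed solute

# Solve:  3*m_zn/M_zn + m_suc/M_suc = n_particles,   m_zn + m_suc = m_total
# Substitute m_suc = m_total - m_zn and solve the resulting linear equation:
a, b = 3.0 / M_zn, 1.0 / M_suc
m_zn = (n_particles - b * m_total) / (a - b)
m_suc = m_total - m_zn

print(round(m_zn, 2), round(m_suc, 2))  # roughly 3.89 and 0.11 grams
```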
|
Maybe it could be called book-errata?
I cannot give many examples of discussions from math.SE offhand, but here is at least one example: https://math.stackexchange.com/questions/34641/find-limit-of-unknown-function
This is an example from a different forum: http://www.sosmath.com/CBB/viewtopic.php?p=181367
I guess questions (and answers) revealing mistakes in books will appear here occasionally. (Perhaps even without this being the original intent.)
A related (interesting) link: https://mathoverflow.net/questions/3038/errata-database/3040#3040
Although I have tag-creating privileges, I've never done this before and I wanted to ask about opinion of other members first.
EDIT: Here's a recent post of this type: Showing $\sum\limits^N_{n=1}\left(\prod\limits_{i=1}^n b_i \right)^\frac1{n}\le\sum\limits^N_{n=1}\left(\prod\limits_{i=1}^n a_i \right)^\frac1{n}$?
|
I am having a hard time understanding the number of probes that might occur when using different collision resolution methods such as separate chaining, linear probing, and double hashing, which are given here.
Let $\alpha = N/M$ (the load factor: average number of keys per array index). Analysis is probabilistic, rather than worst-case.
Expected number of probes:
$$\begin{eqnarray*} &\text{not found} & \quad\text{found}\\ \text{chaining}\quad & 1+\alpha &\quad1+\frac\alpha2\\ \text{linear probing}\quad & \frac12 + \frac1{2(1-\alpha)^2} &\quad \frac12 + \frac1{2(1-\alpha)} \\ \text{double hashing}\quad &\frac1{1-\alpha} &\quad \frac1\alpha \ln\frac1{1-\alpha} \end{eqnarray*}$$
A clarification of why the numbers are as they are would be appreciated. :)
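The linear-probing line of the table can be checked empirically; the sketch below (an illustration written for this question, assuming uniformly distributed hash values) fills a table to load factor $\alpha = 0.5$ and measures the average cost of a successful search, which the formula above predicts to be $\frac12 + \frac1{2(1-\alpha)} = 1.5$:

```python
import random

def avg_successful_probes(M, alpha, seed=1):
    # Fill a linear-probing table of size M to load factor alpha,
    # recording how many probes each key needed on insertion; since
    # entries never move, that equals the later successful-search cost.
    rng = random.Random(seed)
    n = int(alpha * M)
    table = [None] * M
    total = 0
    for k in rng.sample(range(10**9), n):
        i, probes = k % M, 1
        while table[i] is not None:
            i = (i + 1) % M
            probes += 1
        table[i] = k
        total += probes
    return total / n

avg = avg_successful_probes(M=20011, alpha=0.5)
print(avg)  # close to the predicted 1.5
```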
|
The Radon–Nikodym theorem states that,
given a measurable space $(X,\Sigma)$, if a $\sigma$-finite measure $\nu$ on $(X,\Sigma)$ is absolutely continuous with respect to a $\sigma$-finite measure $\mu$ on $(X,\Sigma)$, then there is a measurable function $f$ on $X$ and taking values in $[0,\infty)$, such that
$$\nu(A) = \int_A f \, d\mu$$
for any measurable set $A$.
$f$ is called the Radon–Nikodym derivative of $\nu$ wrt $\mu$.
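For orientation, here is the textbook special case where the two notions meet (a standard fact, stated here for concreteness):

```latex
\text{Let } \mu \text{ be Lebesgue measure on } \mathbb{R}
\text{ and } \nu(A) = \int_A f \, dx \text{ for a density } f \ge 0.
\text{Then } \frac{d\nu}{d\mu} = f \ \ \mu\text{-a.e., and for } F(t) := \nu\big((-\infty,t]\big)
\text{ the Lebesgue differentiation theorem gives } F'(t) = f(t) \text{ for a.e. } t,
```

so in this case the Radon–Nikodym derivative agrees almost everywhere with the ordinary derivative of the distribution function.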
I was wondering in what cases the concept of Radon–Nikodym derivative and the concept of derivative in real analysis can coincide and how?
Thanks and regards!
|
I'm starting to study elliptic partial differential equations and I just want to know if there are any connections between the following concepts:
An elliptic partial differential equation is given as being a second-order partial differential equation of the form $$Au_{xx} + 2Bu_{xy} + Cu_{yy}+Du_{x} + Eu_{y} + F = 0$$ that satisfies the condition $B^{2}-AC < 0$. The classification seems to be connected with conic sections.
And then there's the definition of an elliptic operator which is defined as a linear differential operator $L$ of order $m$ on a domain $\Omega$ in $\mathbb{R}^{d}$ given by $$Lu = \sum_{|\alpha| \leq m}a_{\alpha}(x)\partial^{\alpha}u$$ (where $\alpha$ is a multi-index) is called elliptic if for every $x$ in $\Omega$ and every non-zero $\zeta$ in $\mathbb{R}^{d}$ $$\sum_{|\alpha|=m}a_{\alpha}(x)\zeta^{\alpha} \neq 0$$
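For concreteness, the Laplacian can be checked against both definitions (a standard example, added for illustration):

```latex
\Delta u = u_{xx} + u_{yy}: \quad A = C = 1,\; B = D = E = F = 0
\;\Rightarrow\; B^2 - AC = -1 < 0 \quad \text{(elliptic PDE)};
\qquad \sum_{|\alpha| = 2} a_\alpha \zeta^\alpha = \zeta_1^2 + \zeta_2^2 \neq 0
\;\; \text{for all } \zeta \neq 0 \quad \text{(elliptic operator)}.
```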
I just have a couple of questions about these concepts. Firstly, why are PDEs classified in this way, by analogy with conic sections (elliptic, parabolic, hyperbolic)? Secondly, what is the connection between elliptic partial differential equations and elliptic operators? I thought that an elliptic operator would be an elliptic PDE in operator form, in the sense that if, say, $x-y=0$ were an elliptic PDE then $f(x,y) = x-y$ would be an elliptic operator. But it seems that there is no connection between elliptic operators and elliptic PDEs?
Thanks for any help.
|
It's been a while since I've done any geometry so I'm a bit confused by this question.
We have a triangle $\triangle PQR$ whose total area is $90 \mathrm{cm}^2$. Another triangle $\triangle PTU$ is formed inside $\triangle PQR$ where the point $T$ is such that $PT=2TQ$ and $U$ is such that $QU = 2UR$. The points $T,U$ are on the edges $PQ$ and $QR$, respectively.
So we can say that $TQ= x, PT = 2x, PQ =3x$ and $UR =y, UQ = 2y, QR= 3y$.
I'm trying to find the area of the triangle $\triangle PTU$ but I'm having some difficulties. If we let $\angle PQR=\gamma, \angle PRQ = \beta, \angle QPR = \alpha$, then $$y=kx$$ where $k=\frac{\sin(\alpha)}{\sin(\beta)}$ and $$x^2 \frac{\sin(\gamma)\sin(\alpha)}{\sin(\beta)}=20$$ via the Sine Rule and the Cosine Rule.
I seem to have a few too many variables to solve this though and was wondering if I could get some pointers. Thanks!
PS: This is supposed to be solvable using elementary methods (middle-high school), so nothing crazy in your solutions (even Routh's theorem would probably be a bit much). Sorry for the lack of a diagram too!
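Not the elementary argument itself, but a quick coordinate sanity check (with an arbitrarily chosen triangle) suggests the area ratio is independent of the triangle's shape:

```python
# arbitrary triangle; the ratio [PTU]/[PQR] should not depend on the shape
P, Q, R = (0.0, 0.0), (3.0, 0.0), (1.0, 2.0)

def area(A, B, C):
    # shoelace formula for the area of triangle ABC
    return 0.5 * abs((B[0]-A[0])*(C[1]-A[1]) - (C[0]-A[0])*(B[1]-A[1]))

# T on PQ with PT = 2*TQ, and U on QR with QU = 2*UR
T = (P[0] + 2*(Q[0]-P[0])/3, P[1] + 2*(Q[1]-P[1])/3)
U = (Q[0] + 2*(R[0]-Q[0])/3, Q[1] + 2*(R[1]-Q[1])/3)

ratio = area(P, T, U) / area(P, Q, R)
print(ratio)  # 4/9, which would give 90 * 4/9 = 40
```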
|
The skew/smile of long-term options is flatter than that of short-term options; the reason for this can be explained in several ways. The vega of a shorter-dated option is smaller than that of a longer-dated option. Vega is the dollar value of a 1% change in implied volatility. E.g., a 30d ATM option, $65 strike, 31% ivol = vega .07; a 30d 25-delta option, 31% ivol = ...
The volatilities of short-dated options are more sensitive to market changes than those of long-dated options. This is implied by the square root of time rule. As such, volatility skews are larger for short-dated options.
One possible reason could be jumps. Over a longer maturity there could be more jumps, so the jumps average out in a way; whereas over the short term a single jump can make a bigger difference, and hence the risk of jumps increases demand. This reasoning is used to justify stochastic volatility with jumps models in some books.
Given the variation ATM vol = alpha * F^(beta-1), if your stochastic process for the forward price is dF = alpha F^beta dW, that means your effective beta (CEV exponent) is 1. This gives a horizontal backbone of the vol surface. I think it all depends on whether this is what you expect to see: that the vol surface is sticky under shocked price scenarios.
Your question is twofold: (1) How should a market maker adjust his quotes on a vol surface with respect to his inventory? (2) How should the vol surface be adjusted when a new trade is observed on the markets? Let me focus on the market making question, and for that you need to be familiar with the optimal trading and optimal market making literature: A breakthrough has been ...
Yes, that's what we wish to see from the correctly-specified model. Now, let me try to answer your 2nd and 3rd questions together, as they are based on the same confusion. There are two different concepts: model-implied volatility and model-implied BSIV (Black-Scholes Implied Volatility). I think you are confused because of mixing them up. So yes, people ...
One way to think about this is to forget about equities (for a moment), and think about credit. 90% of the time, credit just gets paid. 5% of the time, credit still gets paid; but a booming economy means that rates rise, so the increased certainty of getting paid is worth less than the decline to NPVs from the coupon becoming worth less. 3% of the time, ...
The Black-Scholes model was based on assuming lognormal stock price fluctuations with a constant volatility. However, the modern practice is to use the Black-Scholes formula not as a prediction but merely as a parametrization of option prices, where the observed price of a given option at a given time translates to a "local" implied volatility (IV). Thus, ...
No, and this is wrong. The implied vols (from market prices) are actually not necessarily convex and yet may still be arbitrage-free; there are many examples of this for various equities. Furthermore, preserving convexity is not necessarily enough either. In terms of implied variance $w(y)=\sigma^2 T$ as a function of log-moneyness $y=\ln\frac{K}{F}$, the no ...
If you recall the derivation from Breeden and Litzenberger (1978), all you need (other than no-arbitrage and infinitely many call options) is the following:
$\max\{S_0e^{−qT} − Ke^{−rT} , 0\} \leq C(S_0,K,T) \leq S_0e^{−qT}$ for all strikes $K \geq0$,
$\frac{\partial C(S_0,K,T)}{\partial K}\geq -e^{-rT}$ for all strikes $K \geq0$,
$\lim\limits_{K\to\infty}\...
As @XiaotianDeng mentioned, the simple at-the-money approximation you mention does not always hold: it works only if you assume that $\alpha^2 T, \nu^2 T$ are small, typically $o(1)$. I wanted to add that there is really no need for such an approximation, except, possibly, to do calculations in your head, or for understanding the scale of $\alpha$ against $\...
Using a cubic spline or worse, SVI is overkill to find the at-the-money (ATM) volatility when it is not quoted by the market: both approaches are global in the sense that a small change of one of the quotes far from the money will have a not so small impact on the at-the-money implied vol. Yes, one solution is to truncate the range of option strikes ...
|
You can use a supermajority rule to eliminate the possibility of Condorcet cycles. e.g. if you require a \(\frac{2}{3}\) supermajority there is no possibility of creating a Condorcet triple with the majority strictly preferring A to B, B to C and C to A.
Unfortunately that’s only the three candidate case. If you consider the \(N\) candidate case you need a supermajority of \(1 - \frac{1}{N}\) to achieve the same result (just consider the \(N\) cyclic permutations of the candidates and you get a generalisation of the Condorcet example where every candidate beats every other this fraction of the time).
So there is no supermajority rule that in general prevents Condorcet cycles.
In “The Probability of Condorcet Cycles and Super Majority Rules”, Yves Balasko and Hervé Crès show the following asymptotic result: If you consider a profile of votes chosen uniformly at random on the set \(\{p \in \mathbb{R}^{n!}: p \geq 0, \sum p = 1\}\), then the probability of there being a Condorcet cycle in that set with a supermajority of \(\tau\) is bounded above by \(n! \left( \frac{1 - \tau}{0.4714}\right)^{n!}\).
From this they conclude that the critical threshold for a supermajority is around 54%, because this goes very rapidly towards zero. e.g. with the 54% threshold the probability of a cycle in \(7\) candidates is \(< 10^{-52}\), which we can probably treat as adequately safe.
They also say “Within our setup, this makes the Condorcet cycles a theoretical curiosity without any practical bearing for super majority rules that exceed the threshold value of 53%”.
Unfortunately this conclusion is totally wrong (the bound is probably correct, but I’ll confess to not having fully followed the rather complicated calculations), because it neglects the fact that \(n\) can be small as well as large.
e.g. for \(n=3\), their upper bound isn’t even smaller than \(1\). It’s about 5.18. It then grows before dropping – at 4 it’s 13.34, at 5 it’s 6.36, and then finally at 6 is becomes small and is 1.6e-5.
So in order to find what the appropriate super majority threshold is for making the probability of a cycle adequately small (say under 1 in 1000) we need to manually check the cases of 3, 4 and 5.
Fortunately it’s easy to do that by direct simulation, so I wrote some code, and the 54% threshold is indeed the wrong one and fails to handle the three candidate case well.
About 6% of elections on three candidates have majority cycles, and of those about a quarter would still have a cycle with a 54% supermajority. The 99.9% mark is a 59% supermajority: Above that, fewer than one in a thousand profiles have a cycle. For four candidates, the 99.9% mark is 57%.
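The simulation code referred to above isn't reproduced in this post, but a minimal sketch of the sampling model described (profiles uniform on the simplex over the \(3!\) orderings, i.e. Dirichlet(1, ..., 1)) might look like this:

```python
import itertools
import random

def cycle_fraction(tau, trials=4000, seed=0):
    # fraction of random 3-candidate profiles containing a Condorcet
    # cycle in which every pairwise defeat exceeds a tau share
    rng = random.Random(seed)
    perms = list(itertools.permutations((0, 1, 2)))
    count = 0
    for _ in range(trials):
        g = [rng.gammavariate(1.0, 1.0) for _ in perms]  # Dirichlet(1,...,1)
        total = sum(g)
        p = [x / total for x in g]
        def margin(a, b):
            # share of the electorate ranking a above b
            return sum(w for w, perm in zip(p, perms)
                       if perm.index(a) < perm.index(b))
        for a, b, c in ((0, 1, 2), (0, 2, 1)):   # the two cyclic orientations
            if margin(a, b) > tau and margin(b, c) > tau and margin(c, a) > tau:
                count += 1
                break
    return count / trials

print(cycle_fraction(0.5))      # a few percent of profiles have a majority cycle
print(cycle_fraction(2 / 3))    # 0.0: impossible at a strict 2/3 supermajority
```

The second result is exact, not just empirical: each linear order on three candidates agrees with at most two of the three comparisons in a cycle, so the three margins sum to at most 2 and cannot all strictly exceed 2/3.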
If we take the higher requirement of one in a million (which is well into the region where my simulation does not have sufficient fidelity to actually give a good answer) the thresholds become 64% and 61%.
That leaves the n=5 case. My simulation is a bit too inefficient (it has an O(n!) component in it) to run in reasonable time for n=5, but fortunately it doesn’t have to. The upper bound is now adequately low at the supermajority levels required by the smaller n: for a 59% supermajority it’s 6e-6, and for a 60% supermajority it’s 3e-7, so it goes below the one in a million mark too.
So, in conclusion, if you require a 60% supermajority, and not just a 54% one, you are probably free of Condorcet cycles. If you go for the full 2/3 supermajority (which probably makes a lot of sense for serious cases) you’re almost certainly safe – you’re provably safe at n=3 and, provably have vanishingly low probability for n >= 5, and for n=4 a simulation says there’s well under a one in a million chance of getting a cycle.
Unless of course your population is drawn from a model that isn’t well represented by a uniform distribution (i.e. all populations ever) or the candidates are adversarially chosen to manipulate the electorate (i.e. all candidates ever). In those edge cases you might still have some problems.
|
The first-ever article on this blog was a quick introduction to the Particle In Cell (PIC) Method. That article (and the follow up example) discussed a two-dimensional Cartesian implementation. The ideas presented there are easily extended to 3D codes. But many engineering applications exhibit azimuthal symmetry. For instance, I mainly specialize in electric (i.e. plasma) spacecraft propulsion. Hollow cathodes, used to neutralize the plasma beam, are cylindrical in nature. Similarly, Hall effect thrusters (HETs) are also typically studied with two-dimensional axisymmetric codes (although that assumption was brought into question by recent observations of a “rotating spoke” mode at Princeton’s PPPL, see Parker et al., APL 97, 091501, 2010, pdf).
Having said all that, how do we go about developing a particle in cell (PIC) code in cylindrical coordinates? This article will show you how.
Differences between XY and RZ codes
Moving from a 3D “xyz” code to a 2D “xy” code is relatively simple. An XY code models a slice through an infinitely deep slab, such as plasma flowing around a very long cylinder. Modeling this with a 2D code instead of 3D simply requires using a two-dimensional mesh, and replacing cell volumes with the cell areas, since unit depth can be assumed. Similarly, fractional areas are used for scattering and gathering between particles and the mesh.
An RZ code shares similar traits but there are two notable differences: 1) cell volumes grow with distance from the axis of rotation and 2) particle push must take into account the cylindrical geometry.
Problem Setup
To demonstrate the technique, we’ll develop a simulation of a simplistic ion gun. The geometry is shown below in Figure 1. In this example, we’ll use Python with the numPy and sciPy libraries. If you don’t have Python installed yet, the easiest way to get everything in one step, including the Spyder IDE is to download Anaconda.
Also, despite calling this method “RZ”, the code is actually “ZR”. The Z axis is the horizontal, and R is the vertical direction. We will refer to “Z” with the “i” index and to “R” with “j”. With this formulation, the third axis is theta, and it points “out of the screen” in the positive direction.
Potential Solver
In the older article on the Finite Volume Method, I showed you how to derive the expression for the Laplacian and gradient in cylindrical coordinates. In summary, we have
$$\nabla^2_R\phi \equiv \frac{\partial^2\phi}{\partial r^2} + \frac{1}{r}\frac{\partial \phi}{\partial r} + \frac{\partial^2\phi}{\partial z^2}$$ where we assumed axial symmetry \(\partial \phi/\partial \theta=0\).
Rewriting the Poisson’s equation using the standard central finite difference, we have
$$\frac{\phi_{i,j+1}-2\phi_{i,j}+\phi_{i,j-1}}{\Delta^2r} + \frac{1}{r_{i,j}}\frac{\phi_{i,j+1}-\phi_{i,j-1}}{2\Delta r} + \frac{\phi_{i-1,j}-2\phi_{i,j}+\phi_{i+1,j}}{\Delta^2z}=-\frac{\rho}{\epsilon_0}$$ To solve this using the Jacobi/Gauss-Seidel method, we simply need to move all non-\(\phi_{i,j}\) terms to the RHS. After a bit of math, we obtain $$\phi_{i,j}=\left[ \frac{\rho}{\epsilon_0} + \frac{\phi_{i,j+1}+\phi_{i,j-1}}{\Delta^2r}+\frac{1}{r_{i,j}}\frac{\phi_{i,j+1}-\phi_{i,j-1}}{2\Delta r} + \frac{\phi_{i-1,j}+\phi_{i+1,j}}{\Delta^2z}\right]/\left(\frac{2}{\Delta^2r}+\frac{2}{\Delta^2z}\right)$$
We also have the boundary condition
$$\partial \phi/\partial r\Big|_{r=0}=0$$ since there can’t be any sharp change across the axis. This is really just the Neumann boundary condition. In Python, the implementation of the solver may look like this:

    for i in range(1,nz-1):
        for j in range(1,nr-1):
            if (cell_type[i,j]>0): continue
            rho_e = QE*n0*math.exp((phi[i,j]-phi0)/kTe)
            b = (rho_i[i,j]-rho_e)/EPS0
            g[i,j] = (b + (phi[i,j-1]+phi[i,j+1])/dr2 +
                      (phi[i,j+1]-phi[i,j-1])/(2*dr*r[i,j]) +
                      (phi[i-1,j]+phi[i+1,j])/dz2) / (2/dr2 + 2/dz2)
            phi[i,j] = g[i,j]

Note that the sign of the \((1/r)\,\partial\phi/\partial r\) term follows the rearranged equation above: the \(\phi_{i,j+1}\) neighbor enters with a plus sign.
We can alternatively implement the solver using NumPy’s vector operators. The solution is shown below in Figure 2. You can see how the potential drop across the aperture optics creates a focusing effect that accelerates the ion beam. This type of configuration is also used to accelerate the ionized propellant in ion thrusters.
    #compute electron term
    rho_e = QE*n0*numpy.exp(numpy.subtract(P,phi0)/kTe)
    b = numpy.where(cell_type<=0,(rho_i - rho_e)/EPS0,0)

    #special form along centerline
    g[1:-1,0] = (b[1:-1,0] + (phi[2:,0] + phi[:-2,0])/dz2) / (2/dz2)

    #regular form everywhere else
    g[1:-1,1:-1] = (b[1:-1,1:-1] +
                    (phi[1:-1,2:]+phi[1:-1,:-2])/dr2 +
                    (phi[1:-1,2:]-phi[1:-1,0:-2])/(2*dr*r[1:-1,1:-1]) +
                    (phi[2:,1:-1] + phi[:-2,1:-1])/dz2) / (2/dr2 + 2/dz2)

    #neumann boundaries
    g[0] = g[1]        #left
    g[-1] = g[-2]      #right
    g[:,-1] = g[:,-2]  #top

    #dirichlet nodes
    phi = numpy.where(cell_type>0,P,g)

Electric Field
The electric field in cylindrical coordinates is computed the same way as in Cartesian coordinates, since \(\nabla_R = \nabla_C\).
    #computes electric field
    def computeEF(phi,efz,efr):
        #central difference
        efz[1:-1] = (phi[0:nz-2]-phi[2:nz+1])/(2*dz)
        efr[:,1:-1] = (phi[:,0:nr-2]-phi[:,2:nr+1])/(2*dr)
        #one sided difference on boundaries
        efz[0,:] = (phi[0,:]-phi[1,:])/dz
        efz[-1,:] = (phi[-2,:]-phi[-1,:])/dz
        efr[:,0] = (phi[:,0]-phi[:,1])/dr
        efr[:,-1] = (phi[:,-2]-phi[:,-1])/dr
Note that this code is not quite correct. It will use the central difference across solid boundaries, but in reality, we should use a one-sided difference. But for the sake of this example, this is sufficient.
Particle Push
OK, so now we have all we need to compute the forces acting on the particles. But how do we actually perform the particle push in cylindrical coordinates? We will do this by pushing the particle in a 3D Cartesian system, but then rotating the particle back to the RZ plane. We retain all three components of velocity. The third component is not \(u_\theta\) but a Cartesian velocity out of the screen, \(u_y\). As such, most particles will end up outside the RZ plane after the push. This is shown graphically in Figure 3. In this notation, “Y” is the perpendicular distance from the plane, and “R” (in capital letters) is the radial component of position after the push. The new radial distance, taking into account the movement off the plane, is given by the lower-case r.
The angle between the RZ plane and the particle position is given by \(\theta_r = \tan^{-1}(Y/R) = \sin^{-1}(Y/r)\). To return the position and velocity back to the RZ plane, we simply need to rotate those two vectors by \(-\theta_r\). Now, since we don’t actually care about the “Y” term, we can simplify the algorithm by rotating only the velocity and replacing the particle’s radial position with \(r\). The “rotate to RZ” algorithm is thus
$$ r= \sqrt{R^2+Y^2}$$ and $$\left[\begin{array}{c}u_r\\u_y\end{array}\right]=\left[\begin{array}{cc}\cos\theta_r & -\sin\theta_r \\ \sin\theta_r & \cos\theta_r\end{array}\right]\left[\begin{array}{c}U_r\\U_y\end{array}\right]$$
The particle push then looks like this:
    #push particles
    for part in particles:
        #gather electric field
        lc = XtoL(part.pos)
        part_ef = [gather(efz,lc), gather(efr,lc), 0]

        for dim in range(3):
            part.vel[dim] += qm*part_ef[dim]*dt
            part.pos[dim] += part.vel[dim]*dt

        #rotate particle back to ZR plane
        r = math.sqrt(part.pos[1]*part.pos[1] + part.pos[2]*part.pos[2])
        sin_theta_r = part.pos[2]/r
        part.pos[1] = r
        part.pos[2] = 0

        #rotate velocity
        cos_theta_r = math.sqrt(1-sin_theta_r*sin_theta_r)
        u2 = cos_theta_r*part.vel[1] - sin_theta_r*part.vel[2]
        v2 = sin_theta_r*part.vel[1] + cos_theta_r*part.vel[2]
        part.vel[1] = u2
        part.vel[2] = v2
Since we don’t actually need the angle itself, the code only computes \(\sin \theta_r\), and the cosine is obtained from the identity \(\cos\theta_r=\sqrt{1-\sin^2\theta_r}\). Also, because of our coordinate system, -pos[2] corresponds to positive sin_theta, but we then need the negative value for the rotation, hence sin_theta_r = part.pos[2]/r = -(-part.pos[2]/r).
Node Volumes
The final change that is required is accounting for the variable node volume. In this axisymmetric code, the simulation cells correspond to an annulus about the centerline. The volume of an annulus centered at node “j” is
$$ V = \pi(r_{j+0.5}^2 - r_{j-0.5}^2)\Delta z$$
For \(j>0\) and constant \(\Delta r\), this formulation is identical to \(2\pi r_j \Delta r \Delta z\); however, the form above makes it more obvious how to compute the node volume at the axis. In the code, we precompute the node volumes and store them in a NumPy matrix:
    node_volume = numpy.zeros([nz,nr])
    for i in range(0,nz):
        for j in range(0,nr):
            j_min = j-0.5
            j_max = j+0.5
            if (j_min<0): j_min=0
            if (j_max>nr-1): j_max=nr-1
            a = 0.5 if (i==0 or i==nz-1) else 1.0
            #note: this reduces to 2*pi*r_j*dr*dz for non-boundary nodes
            node_volume[i][j] = a*math.pi*dz*(R(j_max)**2-R(j_min)**2)

Particle Injection: Ionization Model
Even though we have now discussed how to move particles, we still don’t have a way to introduce them into the domain. Since the goal is to model an ion gun, we implement a simple ionization volumetric source. Chemical reactions such as
$$A^0 + e^- \to A^+ + 2e^-$$ can be modeled using rate coefficients, $$ \begin{align} \dot{n}_i & = k n_a n_e \\ \dot{n}_e & = k n_a n_e \\ \dot{n}_a & = -k n_a n_e \\ \end{align} $$ In other words, the ion and electron densities increase at the same rate at which the neutral density decays. In a real device, this reaction results in a “predator and prey” dynamic: ionization decreases when the neutral density is low, allowing the neutral density (the prey) to rebound. The ionization rate (the predator) then picks up, reducing the neutral density, and hence the ionization rate again. Here, to keep things simple, I assumed that the electron and neutral densities remain constant. To convert from a density increase to the number of macroparticles to inject, we simply multiply by the cell volume and divide by the specific weight. Since this will result in some floating-point number, we use a “remainder” array to store the fractional particles that were not injected and add them to the total at the next sampling. Ions are given a random isotropic velocity. This gives us the following code:

    #compute production rate
    na = 1e15
    ne = 1e12
    k = 2e-10    #not a physical value
    dni = k*ne*na*dt

    #inject particles
    for i in range(1,tube_i_max):
        for j in range(0,tube_j_max):
            #skip over solid cells
            if (cell_type[i][j]>0): continue

            #interpolate node volume to cell center to get cell volume
            cell_volume = gather(node_volume,(i+0.5,j+0.5))

            #floating point number of macroparticles to create
            mpf_new = dni*cell_volume/spwt + mpf_rem[i][j]

            #truncate down, adding randomness
            mp_new = int(math.trunc(mpf_new+random()))

            #save the new fractional remainder
            mpf_rem[i][j] = mpf_new - mp_new

            #generate this many particles
            for p in range(mp_new):
                pos = Pos([i+random(), j+random()])
                vel = sampleIsotropicVel(300)
                particles.append(Particle(pos,vel))

Results
And that’s it. You can download the entire code from the link below. It runs slow as molasses for me. Since I hardly ever program in Python, it’s possible that there are some obvious (but not to me) ways to speed it up – let me know if you find some! Running the profiler, it appears that most of the computational time is spent in the Gather function.
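One plausible speedup, since the profiler points at Gather, is to interpolate the field for all particles at once instead of calling the function once per particle. The sketch below is my illustration, not code from the article's script, and assumes the particle logical coordinates are stored as an (N,2) NumPy array of points strictly interior to the mesh:

```python
import numpy as np

def gather_all(field, lc):
    #vectorized bilinear gather: field is a 2D node-centered array,
    #lc is an (N,2) array of logical coordinates (i,j)
    i = lc[:,0].astype(int)
    j = lc[:,1].astype(int)
    di = lc[:,0] - i
    dj = lc[:,1] - j
    return (field[i,j]*(1-di)*(1-dj) +
            field[i+1,j]*di*(1-dj) +
            field[i,j+1]*(1-di)*dj +
            field[i+1,j+1]*di*dj)
```

A single call per field component then replaces the per-particle loop over gather(efz,lc) and gather(efr,lc).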
The final ion density is plotted in Figure 4. You can see how the plasma lens accelerates the ion beam. The final ion velocity of ~20km/s agrees reasonably well with the expected 22km/s. The difference is likely due to the ions not having fallen through the entire 100V of potential drop, since they are probably born at locations with potential <100V, and the potential in the plume is slightly higher than 0V.
Source Code
You can download the Python script here: rz-pic.py. Also, I recently started adding the blog examples to Github. You may be interested in cloning the following repo: github.com/particleincell/PICCBlog.
Also, if you would like to learn more about the PIC method, you should sign up for my simulation courses, offered every year.
|
Given an ellipse with semi-major axis $a$ and semi-minor axis $b$, what is the formula for the length of the chord formed by two points, say $P$ and $Q$, on the arc of the ellipse (the Euclidean distance between the two points)?
The parametric equations for the ellipse are $(x,y)=(a \cos \theta, b \sin \theta)$, so that $dx/d\theta=-a\sin\theta$ and $dy/d\theta=b\cos\theta$, and the length of the arc between two points $P$ and $Q$ is: $$ l_{PQ}=\int_{\theta _P}^{\theta_Q}\sqrt{dx^2+dy^2}=\int_{\theta _P}^{\theta_Q}\sqrt{\left(\frac{dx}{d \theta}\right)^2+\left(\frac{dy}{d \theta}\right)^2}\, d \theta = $$ $$ =\int_{\theta _P}^{\theta_Q}\sqrt{a^2\sin^2 \theta +b^2 \cos^2 \theta}\, d \theta $$
where $\theta_P$ and $\theta_Q$ are the parameter values of the two points; for $a\ne b$ these are not, in general, the angles between the $x$ axis and the lines through the origin and the points (note also that the angle subtending the arc is not sufficient to fix its length).
This integral cannot be evaluated in terms of elementary functions: it is an elliptic integral of the second kind.
For the length of the chord the result is simpler:
$$ L=\sqrt{(x_P-x_Q)^2+(y_P-y_Q)^2}=\sqrt{a^2(\cos \theta_P-\cos \theta_Q)^2+b^2(\sin \theta_P-\sin \theta_Q)^2} $$ but in this case, too, we need the two parameter angles.
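Since the arc length has no elementary closed form, in practice one evaluates it numerically. A short Python sketch (added for illustration, with arbitrary example values $a=5$, $b=3$) computes both the chord and the arc:

```python
import math

def ellipse_chord(a, b, tP, tQ):
    # closed-form chord length between parameter values tP and tQ
    return math.sqrt(a**2*(math.cos(tP)-math.cos(tQ))**2 +
                     b**2*(math.sin(tP)-math.sin(tQ))**2)

def ellipse_arc(a, b, tP, tQ, n=10000):
    # composite Simpson's rule for the elliptic arc-length integral
    # (n must be even)
    f = lambda t: math.sqrt(a**2*math.sin(t)**2 + b**2*math.cos(t)**2)
    h = (tQ - tP)/n
    s = f(tP) + f(tQ)
    for k in range(1, n):
        s += (4 if k % 2 else 2)*f(tP + k*h)
    return s*h/3
```

For the quarter arc from $(a,0)$ to $(0,b)$ with $a=5$, $b=3$, the chord is $\sqrt{a^2+b^2}=\sqrt{34}$ and the arc evaluates to roughly $6.38$, slightly longer than the chord, as it must be.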
|
Advance warning: This is a very boring post.
In my last post I outlined a ludicrously over-simplified model for why you might want to consider high variance strategies.
I’ve been thinking over some of the modelling assumptions and wondering whether it could be made a bit less over-simplified. The only ones that are obviously easy to weaken are the assumptions on the distribution shape.
Here’s an example that shows you need some assumptions on the distribution shape. Consider a distribution \(Z\) with \(P(Z = 1) = P(Z = -1) = \frac{1}{2}\) and suppose we can choose strategies of the form \(\mu + \sigma Z\). Note that \(E(Z) = 0\) and \(\mathrm{Var}(Z) = 1\), so these really are the mean and standard deviation of our distributions.
But \(E(\max\limits_{1 \leq k \leq n} Z_k) = 1 \cdot (1 - 2^{-n}) - 2^{-n} = 1 - 2^{1 - n}\) (because the maximum takes the value \(-1\) only if all of the individual values are \(-1\), which happens with probability \(2^{-n}\)). So \(E(\max\limits_{1 \leq k \leq n} (\mu + \sigma Z_k)) = \mu + (1 - 2^{1 - n})\sigma\). Since \(1 - 2^{1 - n} < 1\), you’re always better off raising \(\mu\) rather than \(\sigma\).
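This closed form is easy to sanity-check; the following Monte Carlo sketch (my illustration, not from the original post) estimates \(E(\max_{1\le k\le n} Z_k)\) for the coin-flip distribution:

```python
import random

def mc_expected_max(n, trials=200000, seed=1):
    # Monte Carlo estimate of E[max of n i.i.d. +/-1 coin flips]
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        total += max(rng.choice((-1, 1)) for _ in range(n))
    return total / trials
```

For \(n=3\) the estimate comes out near \(0.75 = 1 - 2^{1-3}\), matching the formula.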
The interesting feature of this example is that \(P(X_k \leq \mu + \sigma) = 1\). If this happens then it will always be the case that \(E(\max_k X_k) \leq \mu + \sigma\), so there’s no real benefit to raising \(\sigma\) instead of \(\mu\) (note: it’s conceivable that there’s some complicated dependency on \(\mu\) as a parameter, but I’m just going to assume that \(\mu\) is purely positional and not worry about that).
You only need to go slightly beyond that to show that for some sufficiently large group you’ll always eventually be better off raising \(\sigma\) rather than \(\mu\).
Suppose all our strategies are drawn from some distribution \(X = \mu + Z^\sigma\) with \(E(Z^\sigma) = 0\). The only dependency on \(\sigma\) that we care about is that \(P(Z^\sigma \geq (1 + \epsilon)\sigma) \geq p\) for fixed \(\epsilon > 0\), \(0 < p < 1\) and all \(\sigma > 0\) (this is trivially satisfied by the normal distribution, for example).
Then we have \(E(\max\limits_{1 \leq k \leq n} X_k) = \mu + E(\max\limits_{1 \leq k \leq n} Z^\sigma_k)\).
So we now just want to find a lower bound on \(E(T_n)\), where \(T_n = \max\limits_{1 \leq k \leq n} Z^\sigma_k\). We’ll split this up as three variables. Let \(T_n = U_n + V_n + W_n\), where \(U_n = T_n \mathbb{1}_{T_n \leq 0}\), \(V_n = T_n \mathbb{1}_{0 < T_n < (1 + \epsilon) \sigma }\) and \(W_n = T_n \mathbb{1}_{(1 + \epsilon) \sigma \leq T_n }\).
Because \(V_n \geq 0\) and \(W_n \geq (1 + \epsilon) \sigma \mathbb{1}_{T_n \geq (1+\epsilon)\sigma}\), this gives us the lower bound \(E(T_n) \geq E(U_n) + (1 + \epsilon)\sigma P(T_n \geq (1 + \epsilon) \sigma) \geq E(U_n) + (1 + \epsilon)\sigma (1 - (1 - p)^n)\), since \(T_n\) falls below \((1+\epsilon)\sigma\) only if all \(n\) draws do, which happens with probability at most \((1-p)^n\).
We now just need to bound \(E(U_n)\) below. But \(U_n \geq U_1 \mathbb{1}_{Z^\sigma_k \leq 0,\, k \geq 2}\), and these two random variables are independent, so \(E(U_n) \geq E(U_1) P(Z^\sigma \leq 0)^{n - 1}\). Therefore \(E(T_n) \geq E(U_1) P(Z^\sigma \leq 0)^{n-1} + (1 + \epsilon)\sigma (1 - (1 - p)^n)\).
This lower bound lets us show a much less pretty version of our last result:
Given a strategy \(\mu, \sigma\) being employed by \(n\) people, and given some increase \(a\) which could go to either \(\mu\) or \(\sigma\) there exists some sufficiently large \(m\) such that for \(m\) people, changing the strategy to \(\mu, \sigma + a\) would beat changing the strategy to \(\mu + a, \sigma\).
Yeah, that phrasing is kinda gross to me too.
Note though that if we go back to the previous case, where \(\sigma\) is just a scaling parameter, and merely drop the normality assumption, we can use our lower bound on \(E(T_n)\) to find some \(n\) for which \(E(T_n \mid \sigma = 1) > 1\), and for all \(m \geq n\) it will then be beneficial to increase \(\sigma\).
Note by the way the crucial role of \(\epsilon\). I think if you consider a distribution that takes with equal probability the values \(\sigma, -\sigma, \sigma + 2^{-\sigma}, -\sigma – 2^{-\sigma}\) (note that \(\sigma\) is not the standard deviation here) then it’s not actually helpful to raise \(\sigma\) instead of \(\mu\), even though \(P(Z^\sigma > \sigma) = \frac{1}{4}\). I have not bothered to work out the details.
|
There are several straightforward ways to do this in Excel.
Perhaps the simplest uses LINEST to fit the lines conditional on a trial value of the x-intercept. One of the outputs of this function is the root mean squared residual. Use Solver to find the x-intercept minimizing that residual. If you take some care in controlling Solver--especially by constraining the x-intercept within reasonable bounds and giving it a good starting value--you ought to get excellent estimates.
The fiddly part involves setting up the data in the right way. We can figure this out by means of a mathematical expression for the implicit model. There are five groups of data: let's index them by $k$ ranging from $1$ to $5$ (from bottom to top in the plot). Each data point can then be identified by means of a second index $j$ as the ordered pair $(x_{kj}, y_{kj})$. (It appears that $x_{kj} = x_{k'j}$ for any two indexes $k$ and $k'$, but this is not essential.) In these terms the model supposes there are five slopes $\beta_k$ and an x-intercept $\alpha$; that is, $y_{kj}$ should be closely approximated by $\beta_k (x_{kj}-\alpha)$. The combined LINEST/Solver solution minimizes the sum of squares of the discrepancies. Alternatively--this will come in handy for assessing confidence intervals--we can view the $y_{kj}$ as independent draws from normal distributions having a common unknown variance $\sigma^2$ and means $\beta_k(x_{kj}-\alpha)$.
This formulation, with five different coefficients, and the proposed use of LINEST suggest we should set up the data in an array with a separate column for each $k$, immediately followed by a column for the $y_{kj}$.
I worked up an example using simulated data akin to those shown in the question. Here is what the data array looks like:
     [B]   [C]    [D]    [E]    [F]    [G]    [H]     [I]
      k     x      1      2      3      4      5       y
     ----------------------------------------------------
      1   355   7355      0      0      0      0     636
      2   355      0   7355      0      0      0    3705
      3   355      0      0   7355      0      0    6757
      4   355      0      0      0   7355      0    9993
      5   355      0      0      0      0   7355   13092
      1   429   7429      0      0      0      0     539
     ...
The strange values 7355, 7429, etc., as well as all the zeros, are produced by formulas. The one in cell D3, for instance, is

    =IF($B2=D$1, $C2-Alpha, 0)

Here, Alpha is a named cell containing the intercept (currently set to -7000). This formula, when pasted down the full extent of the columns headed "1" through "5", puts a zero in each cell except when the value of $k$ (shown in the leftmost column) corresponds to the column heading, where it puts the difference $x_{kj}-\alpha$. This is what is needed to perform multiple linear regression with LINEST. The expression looks like

    =LINEST(I2:I126, D2:H126, FALSE, TRUE)
Range I2:I126 is the column of y-values; range D2:H126 comprises the five computed columns; FALSE stipulates that the y-intercept is forced to $0$; and TRUE asks for extended statistics. The formula's output occupies a range of 6 rows by 5 columns, of which the first three rows might look like
1.296 0.986 0.678 0.371 0.062
0.001 0.001 0.001 0.001 0.001
1.000 51.199
...
Strangely (you have to put up with the bizarre when doing stats in Excel :-), the output columns correspond to the input columns in reverse order: thus, 1.296 is the estimated coefficient for column H (corresponding to $k=5$, which we have named $\beta_5$), while 0.062 is the estimated coefficient for column D (corresponding to $k=1$, which we have named $\beta_1$).
Notice, in particular, the 51.199 in row 3, column 2 of the LINEST output: this is the root mean squared residual (the standard error of the estimate). That's what we would like to minimize. In my spreadsheet this value sits at cell U9. In eyeballing the plots, I figured the x-intercept was surely between $-20000$ and $0$. Here's the corresponding Solver dialog to minimize U9 by varying $\alpha$, named XIntercept in this sheet:
It returned a reasonable result almost instantly. To see how it can perform, compare the parameters as set in the simulation against the estimates obtained in this fashion:
Parameter Value Estimate
Alpha -10000 -9696.2
Beta1 .05 .0619
Beta2 .35 .3710
Beta3 .65 .6772
Beta4 .95 .9853
Beta5 1.25 1.2957
Sigma 50 51.199
Using these parameters, the fit is excellent:
One can go further by computing the fit and using that to calculate the log likelihood. Solver can modify a set of parameters (initialized to the LINEST estimates) one parameter at a time to attain any desired value of the log likelihood less than the maximum value. In the usual way--by reducing the log likelihood by a quantile of a $\chi^2$ distribution--you can obtain confidence intervals for each parameter. In fact, if you want--this is an excellent way to learn how the maximum likelihood machinery works--you can skip the LINEST approach altogether and use Solver to maximize the log likelihood. However, using Solver in this "naked" way--without knowing in advance approximately what the parameter estimates should be--is risky: Solver will readily stop at a (poor) local maximum. The combination of an initial estimate, such as that afforded by guessing at $\alpha$ and applying LINEST, along with a quick application of Solver to polish these results, is much more reliable and tends to work well.
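For readers who would rather check this outside Excel: the same two-stage idea--solve the group slopes in closed form conditional on a trial $\alpha$, then search over $\alpha$--can be sketched in Python on synthetic data. All names and values below are illustrative, not taken from the spreadsheet:

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic data: 5 lines through a common x-intercept (alpha, 0)
alpha_true = -10000.0
betas_true = np.array([0.05, 0.35, 0.65, 0.95, 1.25])
x = np.tile(np.linspace(300, 10000, 25), 5)
k = np.repeat(np.arange(5), 25)
y = betas_true[k]*(x - alpha_true) + rng.normal(0, 50, x.size)

def mse_for_alpha(alpha):
    # conditional on alpha, each group's slope has a closed-form LS solution
    resid2 = 0.0
    for g in range(5):
        xg, yg = x[k == g] - alpha, y[k == g]
        beta = (xg @ yg) / (xg @ xg)
        resid2 += ((yg - beta*xg)**2).sum()
    return resid2 / x.size

# crude 1D grid search over a bracket, standing in for Solver
grid = np.linspace(-20000, 0, 2001)
alpha_hat = grid[np.argmin([mse_for_alpha(a) for a in grid])]
```

A proper 1D minimizer would refine this, but even the grid search recovers the shared intercept to within the sampling error seen in the table above.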
|
I'm trying to understand why the following proposition is true:
Let $J$ be a small category and $F, G : J \to \textbf{Top}$ functors. If $\tau : F \Rightarrow G$ is a pointwise homotopy equivalence, then $\operatorname{hocolim}_J F \to \operatorname{hocolim}_J G$ is a homotopy equivalence.
This seems to be such a natural result that I'm surprised it's not mentioned at all in Riehl, Dugger or Hirschhorn's texts on homotopy theory, although I think all three mention a version of this result for weak homotopy equivalences.
For instance, Riehl has [Proposition 14.5.7, p. 259 of Categorical Homotopy Theory] that
If $X_\bullet \to Y_\bullet$ is a pointwise weak equivalence of split simplicial spaces, then $\vert X_\bullet \vert \to \vert Y_\bullet \vert$ is a weak equivalence.
The details of this are explained in Dugger [Theorem 3.5, p. 10]. Applying this to the $\operatorname{hocolim}$, after justifying a few points, one gets the desired result for weak homotopy equivalences [Dugger, Theorem 4.7, p. 17].
But this seems to be where the story ends and the fact that you actually get a homotopy equivalence doesn't seem to be that important. Can someone explain why this is the case?
Now, the only source I've found that states and proves this result is Munson and Volić's Cubical Homotopy Theory [Theorem 8.3.7, p. 409], but it's a (very technical) ten-page proof that in turn references results all over the book.
So my main question: is there a simpler way to see why this is true? If so, could you explain or point me in the right direction?
|
This question deals with Bayesian updating with a conjugate prior. Suppose we have a prior distribution of N(5, 3) and then we observe 5 data points (8, 9, 10, 8, 7) (assumed to be taken randomly from a N(9, 3) distribution). What would be the posterior after these observations, in the form N(x, y)? I read the Wikipedia article on conjugate priors, but I want to have a more precise understanding of how to solve this specific problem. If there is no way to solve it without assuming some things, can you please explain what needs to be known and solve under very simple assumptions? Thank you in advance.
I am afraid that you are misunderstanding what Bayesian inference is about in general. Bayes' theorem is
$$ \underbrace{p(\theta \mid X)}_\text{posterior} = \frac{\overbrace{p(X \mid \theta)}^\text{likelihood} \, \overbrace{p(\theta)}^\text{prior}}{\underbrace{p(X)}_\text{normalizing constant}} $$
To make it more concrete, you can estimate the $\mu$ and $\sigma^2$ parameters of a normal distribution (i.e. a normal likelihood) using data $X$, assuming a normal prior for $\mu$ with hyperparameters $\mu_0$ and $\sigma^2_0$, and a uniform prior for $\sigma^2$ with hyperparameters $a$ and $b$, to obtain posterior distributions for $\mu$ and $\sigma^2$:
$$ X \sim \mathrm{Normal}(\mu, \sigma^2) \\ \mu \sim \mathrm{Normal}(\mu_0, \sigma^2_0) \\ \sigma^2 \sim \mathrm{Uniform}(a,b) $$
So priors are assigned to parameters of interest, not to data. You also have to specify the assumed distribution of your data (the likelihood). Finally, the posterior is a distribution over the estimated parameters, so you need to specify what you are actually estimating.
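To make the original question concrete under the simplest assumptions (mine, not the asker's): read "N~(5, 3)" as mean 5 and variance 3, take the likelihood variance as known and equal to 3, and put the prior on $\mu$ alone. The conjugate normal-normal update then has a closed form, $\sigma_n^{-2} = \sigma_0^{-2} + n/\sigma^2$ and $\mu_n = \sigma_n^2\,(\mu_0/\sigma_0^2 + \sum_i x_i/\sigma^2)$:

```python
def normal_posterior(mu0, var0, data, var_lik):
    # conjugate normal-normal update for the mean, known likelihood variance
    n = len(data)
    prec = 1/var0 + n/var_lik                      # posterior precision
    mean = (mu0/var0 + sum(data)/var_lik) / prec   # precision-weighted mean
    return mean, 1/prec

mu_n, var_n = normal_posterior(5, 3, [8, 9, 10, 8, 7], 3)
# posterior: N(47/6, 1/2), i.e. roughly N(7.83, 0.5)
```

Because the five observations carry five times the prior's precision here, the posterior mean sits much closer to the sample mean 8.4 than to the prior mean 5.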
|
Continued fraction
A continued fraction is an expression of the form
$$a_0+{b_1|\over |a_1}+\cdots+{b_n|\over |a_n}+\cdots,\label{1}$$ where
$$\def\o{\omega}\{a_n\}_{n=0}^\o\label{2}$$ and
$$\{b_n\}_{n=1}^\o\label{3}$$ are finite or infinite sequences of complex numbers. Instead of the expression (1) one also uses the notation
$$a_0 + \cfrac{b_1}{a_1 + \cfrac{b_2}{{\kern-10pt\raise4pt\ddots} {\strut\atop{\displaystyle a_{n-1} + \cfrac{b_n}{a_n+ {\atop\displaystyle\ddots}}}}}}$$ The continued fraction of the sequence (2) is defined as the expression
$$a_0+{1|\over |a_1}+\cdots+{1|\over |a_n}+\cdots.$$ For every continued fraction (1) the recurrence equations
$$P_n = a_n P_{n-1}+b_n P_{n-2},$$
$$Q_n = a_n Q_{n-1}+b_n Q_{n-2},$$ with the initial conditions
$$b_0 = 1,\quad P_{-2} = 0,\quad P_{-1} = 1,\quad Q_{-2} = 1,\quad Q_{-1} = 0,$$ determine two sequences $\{P_n\}_{n=0}^\o$ and $\{Q_n\}_{n=0}^\o$ of complex numbers. As a rule, it is assumed that the sequences (2) and (3) are such that $Q_n\ne 0$ for all $n$, $0\le n \le \o+1$. The fraction $\def\d{\delta}\d_n = P_n/Q_n$ is called the $n$-th convergent of the continued fraction (1). Here
$$\d_0 = a_0,\quad \d_1 = a_0+{b_1\over a_1}, \quad \d_2 = a_0+{b_1\over {a_1+{b_2\over a_2}}}, \quad\cdots,$$ moreover,
$$\d_n-\d_{n-1} = {(-1)^{n-1}b_1\cdots b_n\over Q_n Q_{n-1}}.$$ It is convenient to denote the $n$-th convergent of the continued fraction of the sequence (2) by
$$[a_0;a_1,\dots,a_n].$$ These convergents satisfy the following equalities:
$$[a_n;\dots,a_1] = {Q_n\over Q_{n-1}}\quad \text{ for } n\ge 1,$$
$$[a_n;\dots,a_0] = {P_n\over P_{n-1}}\quad \text{ for } a_0\ne 0 \text{ and } n\ge 0.$$ If $\o = \infty$ and the sequence of convergents of (1) converges to some limit $l$, then the continued fraction (1) is called convergent and the number $l$ is its value. If $\o < \infty$, that is, the continued fraction is finite, then its value is defined as the last of its convergents.
If all terms of the sequences (2) and (3), except possibly $a_0$, are positive real numbers, and if $a_0$ is real, then the sequence $\d_0,\d_2,\d_4,\dots$ of convergents of even order of (1) increases, and the sequence $\d_1,\d_3,\d_5,\dots$ of convergents of odd order decreases. Here a convergent of even order is less than the corresponding convergent of odd order (see [Kh2]).
If $\def\a{\alpha}\a_0,\a_1, \dots $ is the sequence of complex numbers for which
$$\a_0 = a_0+{b_1\over\a_1},\quad \a_1 = a_1+{b_2\over \a_2},\; \dots,$$ then the expression (1) is called an expansion of the number $\a_0$ in a continued fraction. Not every continued fraction converges, and the value of a continued fraction is not always equal to the number from which it is expanded. There are a number of criteria for the convergence of continued fractions (see, for example, [Ma] and [Kh2]):
1) Suppose that $\o=\infty$, that all terms of the sequences (2) and (3) are real numbers, and that $a_n > 0$ for all natural numbers $n$ from some term onwards. If $a_n-|b_n| \ge 1$ for such $n$, then the continued fraction (1) converges.
2) Suppose that $\o=\infty$ and that all terms of the sequence (2) beginning with $a_1$ are positive. Then the continued fraction of the sequence (2) converges if and only if the series $\sum_{n=0}^\infty a_n$ diverges (Seidel's theorem).
The continued fraction of a sequence (2) is called regular if all its terms (except possibly $a_0$) are natural numbers, $a_0$ is an integer and $a_\o\ge 2$ for $1\le\o<\infty$. For every real number $r$ there exists a unique regular continued fraction with value $r$. This fraction is finite if and only if $r$ is rational (see [Bu], [Ve], [Kh]). An algorithm for the expansion of a real number $r$ in a regular continued fraction is defined by the following relations
$$ \begin{array}{lll} a_0=[r], & \a_1={1\over r-a_0} & \text{ if } a_0 \ne r,\\ a_1=[\a_1], & \a_2={1\over \a_1-a_1} & \text{ if } a_1\ne \a_1,\\ a_2=[\a_2], &\dots & \end{array}\label{4}$$ where $[x]$ denotes the integral part of $x$.
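Both the recurrence relations for $P_n$ and $Q_n$ and the expansion algorithm (4) translate directly into a few lines of code. The following Python sketch is an illustration added here, not part of the original article:

```python
from math import floor

def regular_cf(r, n_terms):
    # incomplete quotients a_0, a_1, ... of the regular
    # continued fraction of r, via the algorithm (4)
    a = []
    for _ in range(n_terms):
        a0 = floor(r)
        a.append(a0)
        if r == a0:
            break
        r = 1/(r - a0)
    return a

def convergents(a):
    # convergents P_n/Q_n from the recurrence with b_n = 1
    P, Pprev = 1, 0   # P_{-1} = 1, P_{-2} = 0
    Q, Qprev = 0, 1   # Q_{-1} = 0, Q_{-2} = 1
    out = []
    for an in a:
        P, Pprev = an*P + Pprev, P
        Q, Qprev = an*Q + Qprev, Q
        out.append((P, Q))
    return out
```

For $r=\pi$ (in double precision) the first incomplete quotients are $3, 7, 15, 1, 292$, giving the convergents $3$, $22/7$, $333/106$, $355/113$, $103993/33102$.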
The numbers $a_n$ and $\a_n$ defined by (4) are called, respectively, the incomplete and complete quotients of order $n$ of the expansion of $r$ in a continued fraction.
Around 1768 J. Lambert found the expansion of $\tan x$ in a continued fraction:
$${1|\over |1/x} - {1|\over |3/x} -\cdots - {1|\over |(2n+1)/x} - \cdots.$$ Under the assumption that this continued fraction converges, A. Legendre proved that its value for rational values of $x$ is irrational. It should be mentioned that in this way he proved the irrationality of the number $\pi$ (see [X]).
L. Euler found in 1737 that
$${1\over 2}(e-1)={1|\over|1} + {1|\over|6} + {1|\over|10} + {1|\over|14} +\cdots.$$ A real number $r$ is an irrational root of a polynomial of degree 2 with integer coefficients if and only if the incomplete quotients of the expansion of $r$ in a continued fraction from some term onwards are repeated periodically (the Euler–Lagrange theorem, see [Bu] and [Kh]). At present (1984) expansions in regular continued fractions of algebraic numbers of degree 3 and higher are not known. The assertion that the incomplete quotients of the expansion of $2^{1/3}$ in a continued fraction are bounded has not been proved.
Regular continued fractions are a very convenient tool for the approximation of real numbers by rational numbers. The following propositions hold:
1) If $\d_n = P_n/Q_n$ and $\d_{n+1} = P_{n+1}/Q_{n+1}$ are neighbouring convergents of the expansion of a number $r$ in a regular continued fraction, then
$$|r-\d_n| \ge |r -\d_{n+1}|$$ and
$$\Big|r-{P_n\over Q_n}\Big| \le {1\over Q_nQ_{n+1}},$$ where in the latter case equality holds only when $r= \d_{n+1}$.
2) For two neighbouring convergents of the expansion of a number $r$ in a regular continued fraction, at least one of them satisfies the inequality:
$$\Big|r-{P_n\over Q_n}\Big| < {1\over 2 Q_n^2},$$ 3) If $a$ and $b$ are integers, $b\ge 1$, if $r$ is a real number, and if
$$\Big|r-{a\over b}\Big| \le {1\over 2 b^2},$$ then $a/b$ is a convergent of the expansion of $r$ in a regular continued fraction.
4) If $\d_n = P_n/Q_n$ is a convergent of the expansion of a number $r$ into a regular continued fraction, then for any integers $a$ and $b$ it follows from $b>0$, $\d_n \ne a/b$ and
$$\Big|r-{a\over b}\Big| \le |r-\d_n|,$$ that $b> Q_n$ (theorem on the best approximation).
The first twenty-five incomplete quotients of the expansion of the number $\pi$ in a regular continued fraction are: 3, 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, 2, 1, 1, 2, 2, 2, 2, 1, 84, 2, 1, 1.
The first five convergents of the expansion of $\pi$ in a regular continued fraction are:
$$\d_0 = 3,\; \d_1 = {22\over 7},\; \d_2 = {333\over 106},\; \d_3 = {355 \over 113},\; \d_4 = {103993 \over 33102}.$$ Therefore,
$$\Big|\pi - {22\over 7}\Big| < {1\over742} \quad\text{ and }\quad \Big|\pi -{355\over 113}\Big|< 3\cdot 10^{-7}.$$ There exist several generalizations of continued fractions (see, e.g., [Sz]).
Comments
A classical reference on convergence is [Wa]. Together with [JoTh], recent references are [JoThWa],[Th],[Kr]. Some generalizations can be found in [Br],[Sk], [Bo]. Except for [Wa], [HaWr] all references contain extensive lists of references on recent developments and applications in Padé approximation; moment problems (cf. Moment problem); orthogonal polynomials; number theory; and the metrical theory of continued fractions (see also Metric theory of numbers).
References
[Bo] D.I. Bodnar, "Convergent continued fractions", Kiev (1986) (In Russian)
[Br] A.J. Brentjes, "Multidimensional continued fractions", CWI, Amsterdam (1981) MR0638474
[Bu] A.A. Bukhshtab, "Number theory", Moscow (1966) (In Russian)
[HaWr] G.H. Hardy, E.M. Wright, "An introduction to the theory of numbers", Oxford Univ. Press (1959) MR2568169 MR2445243 MR0568909 MR0067125 MR1561815
[He] P. Henrici, "Applied and computational complex analysis", 2, Wiley (1977) MR0453984
[JoTh] W.B. Jones, W.J. Thron, "Continued fractions, analytic theory and applications", Addison-Wesley (1980)
[JoThWa] W.B. Jones (ed.), W.J. Thron (ed.), E.H. Waadeland (ed.), "Analytic theory of continued fractions", Lect. Notes in Math., 932, Springer (1982) MR0690450
[Kh] A.Ya. Khintchine [A.Ya. Khinchin], "Kettenbrüche", Teubner (1956) (Translated from Russian) MR1544727 MR1512207 MR1544632
[Kh2] A.N. Khovanskii, "Application of continued fractions and their generalizations to problems in approximation theory", Moscow (1966) (In Russian) MR1533406 MR0156126
[Kr] C. Kraaikamp, "The distribution of some sequences connected with the nearest integer continued fraction", Indag. Math., 49 (1987) pp. 177–191 MR0898162
[Ma] A.A. Markov, "Selected works", Moscow-Leningrad (1948) (In Russian) MR2086689 MR2086688 MR0050525
[Pe] O. Perron, "Die Lehre von den Kettenbrüchen", 1–2, Teubner (1954–1957) MR0064172
[Sk] V.Ya. Skorobogatko, "The theory of convergent continued fractions and its applications in numerical mathematics", Moscow (1983) (In Russian)
[Sz] G. Szekeres, "Multidimensional continued fractions", Ann. Univ. Sci. Sec. Math., 13 (1970) pp. 113–140 MR0313198
[Th] W.J. Thron (ed.), "Analytic theory of continued fractions II", Lect. Notes in Math., 1199, Springer (1986) MR0870239
[Ve] B.A. Venkov, "Elementary number theory", Wolters-Noordhoff (1970) (Translated from Russian) MR0265267
[Wa] H.S. Wall, "Analytic theory of continued fractions", Chelsea (1973) MR0025596 MR0008102
[X] , "Ueber die Quadratur des Kreises" (1936)

How to Cite This Entry:
Continued fraction. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Continued_fraction&oldid=30344
|
Layer-adapted Methods for a Singularly Perturbed Singular Problem
|
We will develop conservation of energy relations for electricity that are analogous to those we just developed for flowing fluids. Instead of a real fluid flowing, current electricity (as opposed to static electricity) involves the flow of electric charge. To create intensive energy systems we divide by electric charge, rather than by volume as we did for fluids.
Summary of the Components of Our Energy-Density Model
We begin by summarizing the components of the steady-state energy-density model we developed in the context of fluids and which we will now generalize to the flow of electric charge. The complete energy-density equation(5.3.1) as applied to fluid phenomena,
\[\Delta (\text{total head}) = \frac{E_{pump}}{vol} - I R \tag{5.3.1}\]
says that the change in the total fluid energy-density as we move from one point to another point in the steady-state flow will increase due to energy added by a pump and will decrease due to the transfer of fluid energy-density to thermal energy-density. There are four fundamental constructs in this relation:
(1) change in the fluid energy-density, (2) fluid flow rate, (3) resistance to fluid flow, and (4) pumps, sources of fluid energy-density. We now look at the counterparts of these four constructs for the flow of electric charge. (1) Electric Energy-Density System
In one way, current electricity is simpler than dissipative fluid flow. With fluids we have three energy-density systems that all contribute to the total head. In current electricity, there is only one energy system: the
electric potential energy per charge. (Because the mass of charge carriers is typically so small and velocities are small, both the gravitational potential energy changes and the KE changes are totally negligible compared to the changes in the electric potential energy.) The electric potential energy is analogous to the gravitational potential energy we encountered previously. Both the gravitational potential energy and the electric potential energy depend on the amount and separation of something, mass in the gravitational case and charge in the electric case. By dividing by electric charge, we turn the extensive electric potential energy, which depends on the amount of charge, into an intensive quantity. The electric potential energy per charge is given the name electric potential. It is customary to omit the word electric, so frequently we will simply refer to the potential. If we are dealing with electricity, you will know that it is referring to electric potential.
Electric potential has SI units of volts, abbreviated V. Electric charge has units of coulombs, abbreviated \(C\). Since electric potential must have units of energy per charge, the volt must be a joule per coulomb.
\[electric~ potential = \frac{electric~potential~ energy}{charge} \]
\[volt = \frac{joule}{coulomb}\]
\[ V = \frac{J}{C} \tag{5.3.2} \]
Electric potential is commonly referred to as “voltage.” You should get into the habit of always consciously thinking “energy per charge” when you hear or use the term “voltage.”
(2) Flow of Electric Charge: Current
The unit of current, the quantity of what is flowing past a particular point per second, will be charge per time: coulombs per second. The unit of electric current is the amp, abbreviated A.
\[current = \frac{electric ~charge}{time}\]
\[amp = \frac{coulomb}{second}\]
\[ A = \frac{C}{s} \tag{5.3.3}\]
There is an important point we need to get very clear about right from the start. Sometimes the electric charge that flows is associated with a flow of electrons; other times it is associated with the flow of ions, either positively or negatively charged, or both. When we take an energy-system approach, we don’t need to know the details. In fact, we don’t want to get bogged down in messy questions, such as: “Just how do electrons move through a material?” Now that is indeed a very interesting question, but it is not the question we are addressing here. As always, with an energy-system approach, we focus on
changes in energy, not on the details of the interactions. So, for our purposes now, what flows in current electricity is electric charge. In this kind of analysis, we don’t usually care whether the charge is associated with electrons, protons, ions, or “holes”.
Historically, positive charge was defined in a way that makes the charge on an electron negative. Now, when we speak of current as being in a particular direction, we mean positive charge flow. So, if that charge flow is due to the motion of electrons, then those electrons are in fact moving in the opposite direction. We will always emphasize charge flow, not the flow of the charge carriers, e.g., the electrons, ions, or whatever, when using the steady-state energy-density model with electrical phenomena.
(3) Resistance to Electric Charge Flow
Resistance to the flow of a fluid causes a transfer of energy from the fluid energy-density systems to thermal systems. Likewise, in electric circuits, resistance to the flow of charge causes transfer of electric potential energy to thermal systems. In both cases the amount of energy transferred per unit of transported quantity is equal to the product of the current and the resistance. That is, \(\Delta E_{th}/charge = IR\). The unit of electrical resistance is the ohm, with abbreviation \(\Omega\).
(4) Sources of Electric Energy per Charge
Batteries and generators in electric circuits are analogous to pumps in fluid systems. Batteries convert chemical energy (bond energy) into electric potential energy. Generators convert mechanical energy (often from water or steam) into electric potential energy through a process involving changing magnetic fields, which we will study in Part 3. Historically, these were called sources of electromotive force, abbreviated emf. An upper-case script letter \(“\varepsilon”\) is usually used as the symbol for emf. Electromotive force, \(\varepsilon\), is analogous to \(E_{pump}/vol\); it is an energy “per charge.” Thus, the unit for \(\varepsilon\) is the volt, just like for electric potential. Common practice today is to speak of “voltage” instead of emf when referring to batteries and generators. Thus one commonly hears phrases such as, “The voltage of a ‘D’ battery is 1.5 volts, the same as a ‘double-A’ battery.”
The Complete Energy-Density Equation for Electric Circuits
Using the four electric components just discussed, the complete energy-density equation(5.3.4) for electric charge becomes
\[\Delta V = \varepsilon - I R \tag{5.3.4}\]
\[where~ ~~~~IR = \frac{\Delta E_{th}}{charge}\]
The meaning of this equation(5.3.4) is completely analogous to the meaning of the complete energy density equation(5.3.1) used for fluid flow phenomena. It says that the change in the electric potential energy per charge, or voltage, as we move from one point to another point will increase due to energy added by a battery or generator and will decrease due to the transfer of electric potential energy per charge to thermal energy systems.
The arguments we made in developing the fluid version of the energy-density equation(5.3.1) apply to current electricity as well. If there are no sources or energy transfer into or out of the electric charge system, then the electric potential does not change. But if we attach batteries or generators, we put energy into the system. If there is a current and charge flows through conductors that have resistance, then electric potential energy per charge will be converted to thermal energy, which decreases the electric potential.
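The bookkeeping in equation (5.3.4) can be sketched in a few lines of code. This is a minimal illustration, not part of the text; the battery emf, current, and resistance values below are invented:

```python
# Apply Delta-V = emf - I*R (Eq. 5.3.4) between two points of a circuit.
# Element values are invented for illustration.
def delta_v(emf, current, resistance):
    """Change in electric potential (volts) between two points."""
    return emf - current * resistance

# A 1.5 V battery driving 0.5 A through a 3-ohm resistor:
# across the battery segment (no resistance in this segment):
dv_battery = delta_v(emf=1.5, current=0.5, resistance=0.0)   # +1.5 V rise
# across the resistor segment (no emf source in this segment):
dv_resistor = delta_v(emf=0.0, current=0.5, resistance=3.0)  # -1.5 V drop

# Traversing the full loop, the potential changes sum to zero:
print(dv_battery + dv_resistor)  # 0.0
```

The zero sum around the loop is just conservation of energy per charge: the rise across the battery matches the IR drop across the resistor.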
As with fluid circuits, we must always remember that the complete energy-density equation (5.3.4) applies to two specific points along the current path. The algebraic sign of the term “IR” also works the same way. If we move in the direction of positive charge flow, i.e., in the direction of the current, then “IR” is positive, and the minus sign ensures that the electric potential decreases as we move in that direction. This is often referred to as a “voltage drop” or “IR drop.”
Power Relationships
The power relationships for current electricity are completely analogous to those for fluids.
\[P = \Delta V I \tag{5.3.5} \]
rate of change of the electric potential energy system
\[P = \varepsilon I \]
rate energy is transferred into the electric potential system by a battery or generator
\[P = I^2R = \frac{( \Delta V)^2}{R}\]
rate energy is transferred into the thermal system from the electric potential system
Example: Calculating Power Dissipation and Current: Hot and Cold Power
(a) Consider the examples given in 20.3 and 20.4. Then find the power dissipated by the car headlight in these examples, both when it is hot and when it is cold.
Strategy
For the hot headlight, we know voltage and current, so we can use \(P = IV\) to find the power. For the cold headlight, we know the voltage and resistance, so we can use \(P=V^{2}/R\) to find the power.
Solution
Entering the known values of current and voltage for the hot headlight, we obtain \[P = IV = \left(2.50 A\right)\left(12.0 V\right) = 30.0 W.\] The cold resistance was \(0.350 \Omega\), and so the power it uses when first switched on is \[P = \frac{V^{2}}{R} = \frac{\left(12.0 V\right)^{2}}{0.350 \Omega} = 411 W.\]
Discussion
The 30 W dissipated by the hot headlight is typical. But the 411 W when cold is surprisingly high. The initial power quickly decreases as the bulb’s temperature increases and its resistance increases.
(b) What current does it draw when cold?
Solution
The current when the bulb is cold can be found several different ways. We rearrange one of the power equations, \(P = I^{2}R\), and enter known values, obtaining \[I = \sqrt{\frac{P}{R}} = \sqrt{\frac{411 W}{0.350 \Omega}} = 34.3 A.\]
Discussion
The cold current is remarkably higher than the steady-state value of 2.50 A, but the current will quickly decline to that value as the bulb’s temperature increases. Most fuses and circuit breakers (used to limit the current in a circuit) are designed to tolerate very high currents briefly as a device comes on. In some cases, such as with electric motors, the current remains high for several seconds, necessitating special “slow blow” fuses.
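The arithmetic in this example can be checked directly; the sketch below uses only the values given in the example:

```python
import math

V = 12.0        # volts, across the headlight
I_hot = 2.50    # amps, current through the hot headlight
R_cold = 0.350  # ohms, cold resistance

P_hot = I_hot * V                    # P = IV
P_cold = V**2 / R_cold               # P = V^2 / R
I_cold = math.sqrt(P_cold / R_cold)  # rearranged from P = I^2 R

print(P_hot)          # 30.0  (W)
print(round(P_cold))  # 411   (W)
print(round(I_cold, 1))  # 34.3  (A)
```

Note that the cold current can also be found directly from \(I = V/R\), which gives the same 34.3 A.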
The Energy-Density Equation for Both Fluids and Electric Charge
The energy-density equations for fluids and current electricity (without pumps or batteries) are:
\[\Delta (\text{total head}) = -IR~~~~~~~~~(\text{fluids}) \tag{5.3.6}\]
\[\Delta V = -IR~~~~~~~(\text{current electricity}) \tag{5.3.7}\]
Up to this point we have emphasized the origin of these equations as residing in the fundamental principle of conservation of energy. However, they are examples of a general class of transport phenomena. Making the current the focus of the equations we have:
\[I = -\Delta(\text{total head})\Big(\frac{1}{R}\Big)~~~~~~~~~(\text{fluids}) \tag{5.3.8}\]
\[I = -\Delta V \Big(\frac{1}{R}\Big)~~~~~~~~~~~~~(\text{current electricity}) \tag{5.3.9}\]
We interpret these relations to mean that a current of something exists because there is a gradient in the “driving potential” for that something. (Gradient means change in the quantity with change in position.) For fluid flow there must be a gradient in the total head. In order to have electric charge flow, there must be a gradient in the electric potential. In each case, the flow is proportional to the inverse of the resistance.
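Because equations (5.3.8) and (5.3.9) have the same form, one function covers both cases; a minimal sketch (the numbers are invented):

```python
def flow(d_potential, resistance):
    """Transport relation I = -(change in driving potential) / R.
    The driving potential is total head for fluids and electric
    potential for circuits; the same formula serves both."""
    return -d_potential / resistance

# A potential that drops by 6 units across a resistance of 2 units
# drives a current of 3 units (invented values):
print(flow(d_potential=-6.0, resistance=2.0))  # 3.0
```

The minus sign encodes the physics: current flows from high driving potential toward low, i.e., down the gradient.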
|
The main concepts from this chapter are:
A material wave is a propagating disturbance in a material, while the atoms that make up that material do not travel very far.
Waves describe a large range of phenomena such as ripples in a medium, pressure fluctuations in sound or even fluctuations that describe light.
A wave function \(y(x,t)\) describes the behavior of the wave. It can be applied to find the displacement of a medium, or the change in pressure for sound, etc.
Harmonic waves have the form \[\Delta y = y(x,t) - y_0 = A \sin \Phi (x,t)\] where \(y_0\) is the equilibrium value of \(y(x,t)\) and \(\Phi (x,t)\) is the total phase.
The wave function and the total phase are functions of space and time; knowing only one is not good enough.
Two common representations of waves: \(y(x,t = \text{const})\) vs. \(x\) or \(y(x = \text{const}, t)\) vs. \(t\), and what the graphs of these correspond to physically.
The wave velocity \(v_{wave}\) is set by the medium.
The frequency \(f\) is set by the source.
The wavelength depends on both the frequency and the velocity: \(\lambda = v_{wave}/f\).
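A harmonic wave function with the total phase written out explicitly can be sketched as below. The particular sign convention (a wave moving in the \(+x\) direction) and the phase form are assumptions for illustration; the summary above fixes only the general form \(y = y_0 + A\sin\Phi\):

```python
import math

def harmonic_wave(x, t, A, wavelength, period, phi=0.0, y0=0.0):
    """Evaluate y(x, t) = y0 + A*sin(Phi) with total phase
    Phi = 2*pi*(x/wavelength - t/period) + phi (one common convention,
    describing a wave moving in the +x direction)."""
    Phi = 2 * math.pi * (x / wavelength - t / period) + phi
    return y0 + A * math.sin(Phi)

# The wave repeats when x changes by one wavelength or t by one period,
# because either change shifts the total phase by exactly 2*pi:
y1 = harmonic_wave(x=0.3, t=0.1, A=2.0, wavelength=0.5, period=0.02)
y2 = harmonic_wave(x=0.3 + 0.5, t=0.1, A=2.0, wavelength=0.5, period=0.02)
y3 = harmonic_wave(x=0.3, t=0.1 + 0.02, A=2.0, wavelength=0.5, period=0.02)
```

Here the wave speed is implicitly \(v_{wave} = \lambda/T = \lambda f\), tying together the last three bullet points.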
Below is a detailed summary of the properties of waves and the types of waves they apply to.

Amplitude \(A\) (all waves): The amount that the wave medium displaces from equilibrium. For some waves, like 1-D waves or plane waves, the amplitude is constant. For other waves, like sound waves or ripples on a pond, the amplitude decreases with distance from the source.

Speed \(v_{wave}\) (all waves): The speed at which the wave moves through the medium (describes motion of the disturbance, not motion of individual particles).

Dimensionality (all waves): The number of spatial dimensions in which the wave propagates (e.g., 1-D waves on a rope, 2-D ripples on a pond, 3-D sound waves).

Direction (all waves): The direction the wave travels. For waves that travel in only one dimension (like waves on a rope or plane waves) this takes one of two values: + or −. In 2-D and 3-D waves, propagation may occur outward in all directions.

Polarization (most waves): The direction of medium displacement with respect to the wave: longitudinal waves displace in the same direction as the wave; transverse waves displace perpendicularly to the direction of wave motion.

Period \(T\) (repetitive waves): The amount of time between successive crests or troughs on a wave. Also, the amount of time between identical configurations of a periodic wave.

Frequency \(f\) (repetitive waves): The number of crests or identical pieces that occur in a wave during a time of one second; \(f = \frac{1}{T}\).

Wavelength \(\lambda\) (repetitive waves): The distance between consecutive crests or identical pieces of a wave.

Fixed phase \(\phi\) (harmonic waves): Sets the conditions for the wave at \(t = 0\) and \(x = 0\).

Total phase \(\Phi\) (harmonic waves): Incorporates information from \(\phi\), \(T\), and \(\lambda\) into a new quantity to conveniently answer questions about a wave for any \(x\) or \(t\). Waves repeat when you increase or decrease \(\Phi\) by increments of \(2\pi\).
Almost all of physics 7C builds on these main ideas. Make sure you have a solid grasp of them!
|
I'm trying to go through the proof of rejection sampling and I found a paper, ACCEPTANCE-REJECTION SAMPLING MADE EASY, which provides several helpful explanations. For Lemma 2, the paper claims that if $Z$ has a uniform distribution on a set $A$ and $B \subset A$, then the conditional distribution of $Z$ given $Z \in B$ is uniform on $B$. However, it does not provide a proof. Can anyone help? Thanks.
I'll construct a proof of a simpler proposition which should make it clear how the more general one is done. Let $z \sim \text{U}(0,1)$. Then the density $p(z) = 1$ and the cumulative distribution $P(z) = z$. Now let us find the conditional distribution of $z | z < c$, i.e., $z \in (0,c)$.
Using the definition of conditional probability, $p(z|z<c)p(z<c) = p(z)$. In our case, $p(z<c) = c$ from the definition of the cumulative distribution and $p(z) = 1$ from the definition of the density. Rearranging terms gives:
$$p(z|z<c) = {p(z) \over p(z<c)} = {1 \over c}$$
Since $p(z|z<c)$ is constant for all $z$, the distribution is clearly Uniform over $(0,c)$. (The "constant for all $z$" part is why the distribution is called "Uniform", so this is really definitional.)
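The calculation can also be checked by simulation: the sketch below draws from $\text{U}(0,1)$, conditions on $z < c$, and compares the empirical mean of the accepted draws with $c/2$, the mean of $\text{U}(0,c)$. This is an illustration, not a proof:

```python
import random

random.seed(0)
c = 0.4

# Draw from U(0, 1) and keep only the draws that land in (0, c),
# i.e., condition on the event z < c.
accepted = [z for z in (random.random() for _ in range(200_000)) if z < c]

# If z | z < c is Uniform(0, c), the conditional mean should be c / 2.
mean = sum(accepted) / len(accepted)
print(abs(mean - c / 2) < 0.01)  # True
```

One could go further and compare the empirical CDF of the accepted draws against $z/c$, but the mean check already illustrates the lemma.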
|
At higher loops it certainly isn't true that amplitudes are real. By the optical theorem the imaginary part of an amplitude is related to the intermediate states that can go on-shell. At higher loops lots of intermediate particles can go on shell so one gets an interesting imaginary part.
But here's one interesting way of getting a phase just at tree level. At tree level one would conclude from the optical theorem that there's no phase except at discrete momenta where the Mandelstam variables are equal to some mass-squared in the theory. But suppose we have a theory with an infinite tower of particles with increasing masses, with some small spacing between the masses. Then if we average over momentum scales that are large compared to this spacing, we could get a large imaginary part.
The classic example is high energy, small angle scattering in flat space string theory (or QCD), where the four-point amplitude (averaged over a range of energies that's much larger than the string scale) is
\(\Gamma(-1-\alpha' t)e^{-i\pi \alpha' t}s^{\alpha' t},\)
where \(\alpha' \) is the Regge slope. The imaginary part is related to the fact that there is an infinite number of resonances that can be produced in the s-channel.
Another way you can get a phase in field theory is if there is a time delay. For example consider quantum field theory in the curved background of a highly boosted particle (a shockwave). In this background there is the Shapiro time-delay, see http://arxiv.org/pdf/1407.5597.pdf for a recent discussion. At large impact parameter and high energy (the eikonal approximation) one can resum the loops to get the answer
\(\exp(iG_{\text{N}}s\log b)\)
and differentiating with respect to the energy gives the standard answer for the Shapiro time-delay.
|
In a paper by Joos and Zeh, Z. Phys. B 59 (1985) 223, they say: "This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'" (Roughly: "The 'path' only comes into being because we observe it.") Google Translate says this means something ...
@EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics.
@JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;)
I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears.
@ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$
@BalarkaSen sorry if you were in our discord you would know
@ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$.
@Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communist, but it does have an ironic implication.
@Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist.
Then we can end up with people arguing in favor of "Communism" who distance themselves from, say, the USSR and red China, and people arguing in favor of "Capitalism" who distance themselves from, say, the US and the European Union.
since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap)
I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition
Why is the graviton spin 2, beyond hand-waving? My sense is, you do the gravitational-waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
|
Consider the following problem:
Input: lists $X, Y$ of integers
Goal: determine whether there exists an integer $x$ that is in both lists.
Suppose both lists $X,Y$ are of size $n$. Is there a deterministic, linear-time algorithm for this problem? In other words, can you solve this problem in $O(n)$ time deterministically, without using randomness?
Unfortunately, you cannot assume that the list elements are all small.
I can see how to solve it in $O(n)$ expected time using a randomized algorithm: randomly choose a 2-universal hash function $h$, store the elements of $X$ into a hashtable (using $h$ as the hash function), and then look up each element of $Y$ to see if it is in the hashtable. The expected running time will be $O(n)$. However, I can't see how to find a deterministic algorithm with $O(n)$ running time. If you try to derandomize this and fix a single specific hash function, there will exist a worst-case input that causes this procedure to run in $\Theta(n^2)$ time. The best deterministic algorithm I can find involves sorting the values, but that won't be linear-time. Can we achieve linear running time?
Also, I can see how to solve it in linear time if you assume that all of the list elements are integers in the range $[1,n]$ (basically, do counting sort) -- but I am interested in what happens in the general case when we cannot assume that.
If the answer depends on the model of computation, the RAM model jumps to mind, but I'd be interested in results for any reasonable model of computation. I'm aware of $\Omega(n \log n)$ lower bounds for decision tree algorithms for element uniqueness, but this isn't definitive, as sometimes we can find linear-time algorithms even when there is a $\Omega(n \log n)$ bound in the decision-tree model.
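For concreteness, the randomized hash-table approach described in the question can be sketched with Python's built-in `set`. Note the hedge: `set` is a hash table, but it does not draw from a 2-universal family at random, so adversarial inputs can still degrade it; this is exactly the derandomization difficulty discussed above:

```python
def have_common_element(X, Y):
    """Return True iff some integer appears in both lists X and Y.
    Expected O(n) time: insert X into a hash table, probe with Y.
    (Python's set is a hash table but not a randomly chosen
    2-universal hash family, so worst-case inputs can be slow.)"""
    table = set(X)                      # expected O(n) insertions
    return any(y in table for y in Y)   # expected O(1) per lookup

print(have_common_element([3, 1, 4], [9, 4, 6]))  # True
print(have_common_element([3, 1, 4], [9, 2, 6]))  # False
```

The deterministic baseline from the question (sort, then merge-scan or binary search) takes $O(n \log n)$, which is the bound the question is trying to beat.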
|
A very basic and very strange question came to me.
Let $D\subseteq\mathbb{R}$ and let $f:D\to\mathbb{R}$ be a continuous function. Then $f$ is uniformly continuous on $D$ if and only if $$\forall\epsilon>0\ \exists\delta>0\,\,\text{such that}\,\,\forall\,x,y\in D$$ we have $$|y-x|<\delta\,\Longrightarrow\, |f(y)-f(x)|<\epsilon.$$ Now, let $f$ be uniformly continuous on $D$. Show that for every $\epsilon>0$, there exists $\delta>0$ such that $$|f(y)-f(x)|<\frac{\epsilon}{2},\quad\text{whenever}\quad |x-y|<\frac{\delta}{2}.$$
I really do not know what needs to be proven, but I was asked to write a proof of this.
Thanks for any help.
|
Learning Objectives
Explain what an impulse is, physically
Describe what an impulse does
Relate impulses to collisions
Apply the impulse-momentum theorem to solve problems
We have defined momentum to be the product of mass and velocity. Therefore, if an object’s velocity should change (due to the application of a force on the object), then necessarily, its momentum changes as well. This indicates a connection between momentum and force. The purpose of this section is to explore and describe that connection.
Suppose you apply a force on a free object for some amount of time. Clearly, the larger the force, the larger the object’s change of momentum will be. Alternatively, the more time you spend applying this force, again the larger the change of momentum will be, as depicted in Figure \(\PageIndex{1}\). The amount by which the object’s motion changes is therefore proportional to the magnitude of the force, and also to the time interval over which the force is applied.
Figure \(\PageIndex{1}\): The change in momentum of an object is proportional to the length of time during which the force is applied. If a force is exerted on the lower ball for twice as long as on the upper ball, then the change in the momentum of the lower ball is twice that of the upper ball.
Mathematically, if a quantity is proportional to two (or more) things, then it is proportional to the product of those things. The product of a force and a time interval (over which that force acts) is called impulse, and is given the symbol \(\vec{J}\).
Definition: Impulse
Let \(\vec{F}\)(t) be the force applied to an object over some differential time interval \(dt\) (Figure \(\PageIndex{2}\)). The resulting impulse on the object is defined as
$$d \vec{J} \equiv \vec{F} (t) dt \ldotp \label{9.2}$$
Figure \(\PageIndex{2}\): A force applied by a tennis racquet to a tennis ball over a time interval generates an impulse acting on the ball.
The total impulse over the interval \(t_{f} - t_{i}\) is
\[\vec{J} = \int_{t_{i}}^{t_{f}} d \vec{J}\]
or
\[\vec{J} \equiv \int_{t_{i}}^{t_{f}} \vec{F} (t) dt \ldotp \label{9.3}\]
Equations \ref{9.2} and \ref{9.3} together say that when a force is applied for an infinitesimal time interval dt, it causes an infinitesimal impulse d\(\vec{J}\), and the total impulse given to the object is defined to be the sum (integral) of all these infinitesimal impulses.
To calculate the impulse using Equation \ref{9.3}, we need to know the force function F(t), which we often don’t. However, a result from calculus is useful here: Recall that the average value of a function over some interval is calculated by
$$f(x)_{ave} = \frac{1}{\Delta x} \int_{x_{i}}^{x_{f}} f(x)dx$$
where \(\Delta x = x_{f} - x_{i}\). Applying this to the time-dependent force function, we obtain
$$\vec{F}_{ave} = \frac{1}{\Delta t} \int_{t_{i}}^{t_{f}} \vec{F} (t)dt \ldotp \label{9.4}$$
Therefore, from Equation \ref{9.3},
$$\vec{J} = \vec{F}_{ave} \Delta t \ldotp \label{9.5}$$
The idea here is that you can calculate the impulse on the object even if you don’t know the details of the force as a function of time; you only need the average force. In fact, though, the process is usually reversed: You determine the impulse (by measurement or calculation) and then calculate the average force that caused that impulse.
To calculate the impulse, a useful result follows from writing the force in Equation \ref{9.3} as \(\vec{F}\)(t) = m \(\vec{a}\)(t):
$$\vec{J} = \int_{t_{i}}^{t_{f}} \vec{F} (t)dt = m \int_{t_{i}}^{t_{f}} \vec{a} (t)dt = m \big[ \vec{v} (t_{f}) - \vec{v} (t_{i}) \big] \ldotp$$
For a constant force \(\vec{F}_{ave}\) = \(\vec{F}\) = m\(\vec{a}\), this simplifies to
$$\vec{J} = m \vec{a} \Delta t = m \vec{v}_{f} - m \vec{v}_{i} = m (\vec{v}_{f} - \vec{v}_{i}) \ldotp$$
That is,
$$\vec{J} = m \Delta \vec{v} \ldotp \label{9.6}$$
Note that the integral form, Equation \ref{9.3}, applies to constant forces as well; in that case, since the force is independent of time, it comes out of the integral, which can then be trivially evaluated.
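The relationships \(\vec{J} = \int \vec{F}\,dt\), \(\vec{J} = \vec{F}_{ave}\Delta t\), and \(\vec{J} = m\Delta\vec{v}\) can be checked numerically. The force profile below (a half-sine pulse) and the mass are invented for illustration; analytically, a half-sine pulse of peak \(F_{max}\) and duration \(T\) carries impulse \(J = 2F_{max}T/\pi\):

```python
import math

# Invented example: a half-sine force pulse F(t) = F_max * sin(pi*t/T)
# acting on a mass m for a duration T (one dimension, magnitudes only).
m, F_max, T = 2.0, 10.0, 0.5
N = 10_000
dt = T / N

# Eq. 9.3: J = integral of F(t) dt, here as a midpoint Riemann sum
J = sum(F_max * math.sin(math.pi * (k + 0.5) * dt / T) * dt for k in range(N))

F_ave = J / T   # Eqs. 9.4-9.5: the average force over the pulse
dv = J / m      # Eq. 9.6 rearranged: the resulting change in speed
```

The numerically integrated `J` matches the analytic \(2F_{max}T/\pi\), and the average force comes out to \(2F_{max}/\pi \approx 0.64\,F_{max}\), illustrating how the average relates to the peak of a time-varying force.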
Example \(\PageIndex{1}\): The Arizona Meteor Crater
Approximately 50,000 years ago, a large (radius of 25 m) iron-nickel meteorite collided with Earth at an estimated speed of 1.28 x 10\(^{4}\) m/s in what is now the northern Arizona desert, in the United States. The impact produced a crater that is still visible today (Figure \(\PageIndex{3}\)); it is approximately 1200 m (three-quarters of a mile) in diameter, 170 m deep, and has a rim that rises 45 m above the surrounding desert plain. Iron-nickel meteorites typically have a density of \(\rho\) = 7970 kg/m\(^{3}\). Use impulse considerations to estimate the average force and the maximum force that the meteor applied to Earth during the impact.
Figure \(\PageIndex{3}\): The Arizona Meteor Crater in Flagstaff, Arizona (often referred to as the Barringer Crater after the person who first suggested its origin and whose family owns the land). (credit: “Shane.torgerson”/Wikimedia Commons)
Strategy
It is conceptually easier to reverse the question and calculate the force that Earth applied on the meteor in order to stop it. Therefore, we’ll calculate the force on the meteor and then use Newton’s third law to argue that the force from the meteor on Earth was equal in magnitude and opposite in direction.
Using the given data about the meteor, and making reasonable guesses about the shape of the meteor and impact time, we first calculate the impulse using Equation \ref{9.6}. We then use the relationship between force and impulse Equation \ref{9.5} to estimate the average force during impact. Next, we choose a reasonable force function for the impact event, calculate the average value of that function Equation \ref{9.4}, and set the resulting expression equal to the calculated average force. This enables us to solve for the maximum force.
Solution
Define upward to be the +y-direction. For simplicity, assume the meteor is traveling vertically downward prior to impact. In that case, its initial velocity is \(\vec{v}_{i} = -v_{i}\, \hat{j}\), and the force Earth exerts on the meteor points upward, \(\vec{F}(t) = +F(t)\, \hat{j}\). The situation at t = 0 is depicted below.
The average force during the impact is related to the impulse by
$$\vec{F}_{ave} = \frac{\vec{J}}{\Delta t} \ldotp$$
From Equation \ref{9.6}, \(\vec{J}\) = m\(\Delta \vec{v}\), so we have
$$\vec{F}_{ave} = \frac{m \Delta \vec{v}}{\Delta t} \ldotp$$
The mass is equal to the product of the meteor’s density and its volume:
$$m = \rho V \ldotp$$
If we assume (guess) that the meteor was roughly spherical, we have
$$V = \frac{4}{3} \pi R^{3} \ldotp$$
Thus we obtain
$$\vec{F}_{ave} = \frac{\rho V \Delta \vec{v}}{\Delta t} = \frac{\rho \left(\dfrac{4}{3} \pi R^{3}\right) (\vec{v}_{f} - \vec{v}_{i})}{\Delta t} \ldotp$$
The problem says the velocity at impact was \(-1.28 \times 10^{4}\; m/s\; \hat{j}\) (the final velocity is zero); also, we guess that the primary impact lasted about \(t_{max}\) = 2 s. Substituting these values gives
$$\begin{split} \vec{F}_{ave} & = \frac{(7970\; kg/m^{3}) \big[ \frac{4}{3} \pi (25\; m)^{3} \big] \big[ 0\; m/s - (-1.28 \times 10^{4}\; m/s\; \hat{j}) \big]}{2\; s} \\ & = + (3.33 \times 10^{12}\; N) \hat{j} \end{split}$$
This is the average force applied during the collision. Notice that this force vector points in the same direction as the change of velocity vector \(\Delta \vec{v}\).
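The arithmetic can be verified in a few lines of Python; the radius, density, speed change, and 2-s impact time are the values assumed in the example above:

```python
import math

# Estimate the average force on the meteor from impulse considerations.
rho = 7970.0   # density of an iron-nickel meteorite, kg/m^3
R = 25.0       # assumed radius, m
dv = 1.28e4    # magnitude of the velocity change: 0 - (-1.28e4 m/s), m/s
dt = 2.0       # guessed impact duration, s

V = (4.0 / 3.0) * math.pi * R**3   # spherical-shape assumption
m = rho * V                        # meteor mass, kg
F_ave = m * dv / dt                # magnitude of the average force, N

print(f"m = {m:.3e} kg, F_ave = {F_ave:.3e} N")
```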
Next, we calculate the maximum force. The impulse is related to the force function by
$$\vec{J} = \int_{t_{i}}^{t_{max}} \vec{F} (t)dt \ldotp$$
We need to make a reasonable choice for the force as a function of time. We define t = 0 to be the moment the meteor first touches the ground. Then we assume the force is a maximum at impact, and rapidly drops to zero. A function that does this is
$$F(t) = F_{max} e^{\frac{-t^{2}}{2 \tau^{2}}} \ldotp$$
The parameter \(\tau\) represents how rapidly the force decreases to zero. The average force is
$$F_{ave} = \frac{1}{\Delta t} \int_{0}^{t_{max}} F_{max} e^{\frac{-t^{2}}{2 \tau^{2}}} dt$$
where \(\Delta t = t_{max} - 0\; s\). Since we already have a numeric value for \(F_{ave}\), we can use the result of the integral to obtain \(F_{max}\). Choosing \(\tau = \frac{1}{e} t_{max}\) (this is a common choice, as you will see in later chapters), and guessing that \(t_{max}\) = 2 s, this integral evaluates to
$$F_{ave} = 0.458\; F_{max} \ldotp$$
Thus, the maximum force has a magnitude of
$$\begin{split} 0.458\; F_{max} & = 3.33 \times 10^{12}\; N \\ F_{max} & = 7.27 \times 10^{12}\; N \ldotp \end{split}$$
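The 0.458 factor can be checked with the closed form of the Gaussian integral in terms of the error function; the choice \(\tau = \frac{1}{e} t_{max}\) with \(t_{max}\) = 2 s is the one made above:

```python
import math

t_max = 2.0
tau = t_max / math.e   # the choice tau = (1/e) t_max made in the example

# Average of exp(-t^2 / (2 tau^2)) over [0, t_max], via the error function:
# (1/t_max) * integral = (tau/t_max) * sqrt(pi/2) * erf(t_max / (sqrt(2) tau))
ratio = (tau / t_max) * math.sqrt(math.pi / 2.0) \
        * math.erf(t_max / (math.sqrt(2.0) * tau))
print(f"F_ave / F_max = {ratio:.3f}")   # 0.458
```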
The complete force function, including the direction, is
$$\vec{F} (t) = (7.27 \times 10^{12}\; N) e^{\frac{-e^{2} t^{2}}{8\; s^{2}}} \hat{j} \ldotp$$
This is the force Earth applied to the meteor; by Newton’s third law, the force the meteor applied to Earth is
$$\vec{F} (t) = - (7.27 \times 10^{12}\; N) e^{\frac{-e^{2} t^{2}}{8\; s^{2}}} \hat{j}$$
which is the answer to the original question.
Significance
The graph of this function contains important information. Let’s graph (the magnitude of) both this function and the average force together (Figure \(\PageIndex{4}\)).
Figure \(\PageIndex{4}\): A graph of the average force (in red) and the force as a function of time (blue) of the meteor impact. The areas under the curves are equal to each other, and are numerically equal to the applied impulse.
Notice that the area under each plot has been filled in. For the plot of the (constant) force \(F_{ave}\), the area is a rectangle, corresponding to \(F_{ave} \Delta t = J\). As for the plot of F(t), recall from calculus that the area under the plot of a function is numerically equal to the integral of that function over the specified interval; so here, that is \(\int_{0}^{t_{max}} F(t)dt = J\). Thus, the areas are equal, and both represent the impulse that the meteor applied to Earth during the two-second impact. The average force on Earth sounds like a huge force, and it is. Nevertheless, Earth barely noticed it. The acceleration Earth obtained was just
$$\vec{a} = \frac{- \vec{F}_{ave}}{M_{Earth}} = \frac{- (3.33 \times 10^{12}\; N) \hat{j}}{5.97 \times 10^{24}\; kg} = - (5.6 \times 10^{-13} m/s^{2}) \hat{j}$$
which is completely immeasurable. That said, the impact created seismic waves that nowadays could be detected by modern monitoring equipment.
Example \(\PageIndex{2}\): The Benefits of Impulse
A car traveling at 27 m/s collides with a building. The collision with the building causes the car to come to a stop in approximately 1 second. The driver, who weighs 860 N, is protected by a combination of a variable-tension seatbelt and an airbag (Figure \(\PageIndex{5}\)). (In effect, the driver collides with the seatbelt and airbag and not with the building.) The airbag and seatbelt slow his velocity, such that he comes to a stop in approximately 2.5 s.
What average force does the driver experience during the collision? Without the seatbelt and airbag, his collision time (with the steering wheel) would have been approximately 0.20 s. What force would he experience in this case?
Figure \(\PageIndex{5}\): The motion of a car and its driver at the instant before and the instant after colliding with the wall. The restrained driver experiences a large backward force from the seatbelt and airbag, which causes his velocity to decrease to zero. (The forward force from the seatback is much smaller than the backward force, so we neglect it in the solution.)
Strategy
We are given the driver’s weight, his initial and final velocities, and the time of collision; we are asked to calculate a force. Impulse seems the right way to tackle this; we can combine Equation \ref{9.5} and Equation \ref{9.6}.
Solution
Define the +x-direction to be the direction the car is initially moving. We know $$\vec{J} = \vec{F} \Delta t$$ and $$\vec{J} = m \Delta \vec{v} \ldotp$$ Since \(\vec{J}\) is equal to both of these, they must be equal to each other: $$\vec{F} \Delta t = m \Delta \vec{v} \ldotp$$ We need to convert the driver's weight to the equivalent mass, expressed in SI units: $$\frac{860\; N}{9.8\; m/s^{2}} = 87.8\; kg \ldotp$$ Remembering that \(\Delta \vec{v} = \vec{v}_{f} − \vec{v}_{i}\), and noting that the final velocity is zero, we solve for the force: $$\vec{F} = m \frac{0 - v_{i}\; \hat{i}}{\Delta t} = (87.8\; kg) \left(\dfrac{-(27\; m/s) \hat{i}}{2.5\; s}\right) = - (948\; N) \hat{i} \ldotp$$ The negative sign implies that the force slows him down. For perspective, this is about 1.1 times his own weight. Without the seatbelt and airbag, the same calculation with the shorter time interval gives $$\vec{F} = (87.8\; kg) \left(\dfrac{-(27\; m/s) \hat{i}}{0.20\; s}\right) = - (11{,}853\; N) \hat{i} \ldotp$$ which is about 14 times his own weight. Big difference!
Significance
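Both numbers follow from \(\vec{F} \Delta t = m \Delta \vec{v}\); here is the comparison in Python (force magnitudes only, using the example's values):

```python
g = 9.8          # m/s^2
weight = 860.0   # driver's weight, N
m = weight / g   # equivalent mass, about 87.8 kg
v_i = 27.0       # initial speed, m/s

forces = []
for dt in (2.5, 0.20):   # stopping time: airbag/seatbelt vs. steering wheel
    F = m * v_i / dt     # magnitude of the average force, N
    forces.append(F)
    print(f"dt = {dt} s: F = {F:.0f} N ({F / weight:.1f} times body weight)")
```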
You see that the value of an airbag is how greatly it reduces the force on the vehicle occupants. For this reason, they have been required on all passenger vehicles in the United States since 1991, and have been commonplace throughout Europe and Asia since the mid-1990s. The change of momentum in a crash is the same, with or without an airbag; the force, however, is vastly different.
Contributors
Samuel J. Ling (Truman State University), Jeff Sanny (Loyola Marymount University), and Bill Moebs with many contributing authors. This work is licensed by OpenStax University Physics under a Creative Commons Attribution License (by 4.0).
|
A common mistake is to think that the total energy is the sum of all orbital energies $\{\epsilon_i\}$.
From Step #6 of Daniel Crawford's SCF programming project (modified slightly in some places):
The SCF electronic energy may be computed using the density matrix as:
$$
E_{\text{elec}} = \sum_{\mu\nu}^{\text{AO}} D_{\mu\nu} (H_{\mu\nu}^{\text{core}} + F_{\mu\nu})
$$
The total energy is the sum of the electronic energy and the nuclear repulsion energy:
$$
E_{\text{total}} = E_{\text{elec}} + E_{\text{nuc}},
$$
where the density matrix is defined as (Step #8)
$$
D_{\mu\nu} = \sum_{m}^{\text{occ. MO}} C_{\mu m} C_{\nu m},
$$
the Fock matrix as (Step #7)
$$
\begin{align}
F_{\mu\nu} &= H_{\mu\nu}^{\text{core}} + \sum_{\lambda\sigma}^{\text{AO}} D_{\lambda\sigma} \left[ 2(\mu\nu|\lambda\sigma) - (\mu\lambda|\nu\sigma) \right] \\
&= H_{\mu\nu}^{\text{core}} + 2 J_{\mu\nu} - K_{\mu\nu}
,
\end{align}
$$
and the core Hamiltonian as (Step #2)
$$
H_{\mu\nu}^{\text{core}} = T_{\mu\nu} + V_{\mu\nu}.
$$
I've also introduced the definitions of the Coulomb matrix $J$ and the exchange matrix $K$:
$$
\begin{align}
J_{\mu\nu} &= \sum_{\lambda\sigma}^{\text{AO}} D_{\lambda\sigma} (\mu\nu|\lambda\sigma) \\
K_{\mu\nu} &= \sum_{\lambda\sigma}^{\text{AO}} D_{\lambda\sigma} (\mu\lambda|\nu\sigma)
\end{align}
$$
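For concreteness, here is a minimal NumPy sketch of these contractions (the function and array names are mine, and the ERI array is assumed to hold $(\mu\nu|\lambda\sigma)$ in chemists' notation; this is not code from the project itself):

```python
import numpy as np

def scf_energy(D, Hcore, eri):
    """Return (E_elec, F) from the density matrix D, core Hamiltonian Hcore,
    and two-electron integrals eri[mu, nu, lam, sig] = (mu nu | lam sig)."""
    J = np.einsum("ls,mnls->mn", D, eri)   # J_{mu nu} = sum_{ls} D_{ls} (mn|ls)
    K = np.einsum("ls,mlns->mn", D, eri)   # K_{mu nu} = sum_{ls} D_{ls} (ml|ns)
    F = Hcore + 2.0 * J - K                # restricted Fock matrix
    E_elec = np.sum(D * (Hcore + F))       # sum_{mu nu} D_{mu nu} (H + F)_{mu nu}
    return E_elec, F
```

The total energy is then `E_elec` plus the nuclear repulsion energy.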
Now, identify each of the terms in the Kohn-Sham equations with the terms from above.
$$
\begin{align}
\hat{T}_{e} &= -\frac{1}{2} \nabla^2 \rightarrow T_{\mu\nu} = \left< \chi_{\mu} \left| \hat{T} \right| \chi_{\nu} \right> \\
\hat{V}_{eN}(\vec{r}) &= -\sum_{A}^{\text{nuclei}} \frac{Z_A}{|\vec{r} - \vec{R}_{A}|} \rightarrow V_{\mu\nu} = \left< \chi_{\mu} \left| \hat{V}_{eN} \right| \chi_{\nu} \right> \\
\hat{V}_{ee}(\vec{r}) &\stackrel{?}{\rightarrow} 2 \hat{J} \\
\hat{V}_{\text{XC}}(\vec{r}) &\stackrel{?}{\rightarrow} - \hat{K}
\end{align}
$$
This last part isn't quite correct, though. Usually, when looking at the Kohn-Sham equations, one replaces the full electron-electron interaction $\hat{V}_{ee}$ with the sum of the Hartree potential $\hat{V}_{H}$, which gives the Coulomb energy, and the exchange-correlation potential $\hat{V}_{\text{XC}}$, which replaces the exact exchange $\hat{K}$ with a (currently approximate) expression for both the exchange term and the true electron-electron (correlated) interaction.
In terms of how the energy is actually calculated, all quantities from above are the same as in Hartree-Fock theory, except the calculation of the exact exchange integrals during the Fock build is replaced with calculating the exchange-correlation matrix $F^{\text{XC}}$, leading to
$$
\begin{align}
F_{\mu\nu}^{\alpha} &= H_{\mu\nu}^{\text{core}} + J_{\mu\nu} + F_{\mu\nu}^{\text{XC}\alpha} \\
F_{\mu\nu}^{\beta} &= H_{\mu\nu}^{\text{core}} + J_{\mu\nu} + F_{\mu\nu}^{\text{XC}\beta}
\end{align}
$$
For a density functional approximation (DFA) based on the generalized gradient approximation (GGA), where the functional is dependent on both the density $\rho(\mathbf{r})$ and its gradient $\nabla \rho(\mathbf{r})$,
$$
\begin{align}
E_{\text{XC}} &= \int f_{GGA}^{\text{DFA}}(\rho_{\alpha},\rho_{\beta},\gamma_{\alpha\alpha},\gamma_{\alpha\beta},\gamma_{\beta\beta}) \, \mathrm{d}\mathbf{r} \\
\gamma_{\alpha\alpha} &= |\nabla \rho_{\alpha}|^{2} \\
\gamma_{\beta\beta} &= |\nabla \rho_{\beta}|^{2} \\
\gamma_{\alpha\beta} &= \nabla \rho_{\alpha} \cdot \nabla \rho_{\beta} \\
\end{align}
$$
The exchange-correlation parts of the Fock matrices are given by
$$
F_{\mu\nu}^{\text{XC}\alpha} = \int \left[ \frac{\partial f}{\partial \rho_{\alpha}} \chi_{\mu}\chi_{\nu} + \left( 2\frac{\partial f}{\partial \gamma_{\alpha\alpha}} \nabla\rho_{\alpha} + \frac{\partial f}{\partial \gamma_{\alpha\beta}} \nabla\rho_{\beta} \right) \cdot \nabla(\chi_{\mu}\chi_{\nu}) \right] \mathrm{d}\mathbf{r}
$$
$f^{\text{DFA}}$, $\frac{\partial f^{\text{DFA}}}{\partial \rho}$, and $\frac{\partial f^{\text{DFA}}}{\partial \gamma}$ are unique closed-form expressions for each DFA, and are usually evaluated numerically on an atom-centered grid (ACG) such as a Lebedev grid. This generally requires mapping the set of AOs/basis functions $\{\chi\}$ onto this grid.
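As an illustration of the quadrature, here is a density-only (LDA-type) sketch of the $F^{\text{XC}}$ build; it keeps only the $\partial f/\partial\rho$ term of the GGA expression above, and takes the grid weights, tabulated basis functions, and functional-derivative values as precomputed inputs (all names here are my own conventions):

```python
import numpy as np

def fxc_lda_matrix(weights, chi, dfdrho):
    """F^XC_{mu nu} ~ sum_g w_g (df/drho)(r_g) chi_mu(r_g) chi_nu(r_g).

    weights: (n_grid,)       quadrature weights w_g
    chi:     (n_grid, n_ao)  basis functions evaluated on the grid
    dfdrho:  (n_grid,)       df/drho evaluated on the grid
    The gradient (gamma) terms of the GGA formula are omitted here.
    """
    return np.einsum("g,gm,gn->mn", weights * dfdrho, chi, chi)
```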
$\tiny{\text{As usual, sorry if I'm lazy with notation, being consistent is so difficult...}}$
|
A random person (let's call him Bob) is given 3 cards, containing the names of 3 different countries. Bob is also given the names of the capital cities of these countries (in random order), and his task is to place the country cards with the correct capitals, i.e. to form correct pairs. It just so happens that Bob has absolutely no clue, so he just makes pairs randomly. We call the random variable that counts the number of correct pairs $X$.
So I have to find the mean and the standard deviation of $X$, but I'm not exactly sure what the correct probability distribution of this problem is. From all the probability distributions I have encountered as of yet, this seems most like a hypergeometric probability distribution, because there is no replacement in this problem. But I don't intuitively understand why. Wikipedia says:
The hypergeometric distribution is a discrete probability distribution that describes the probability of $k$ successes in $n$ draws, without replacement, from a finite population of size $N$ that contains exactly $K$ successes, wherein each draw is either a success or a failure.
I have trouble relating this to our problem. We also want to find the probability of $k$ successes in $n\ (\stackrel{?}{=} 3)$ draws, without replacement, from a finite population of size $N\ (\stackrel{?}{=} 3)$ that contains exactly $K\ (\stackrel{?}{=} 1)$ successes, wherein each draw is either a success or a failure.
So that would give us a mean of $n \times \dfrac{K}{N} = 3 \times \dfrac{1}{3} = 1$ and a standard deviation of $\sqrt{ n\dfrac{K}{N} \dfrac{(N-K)}{N} \dfrac{N-n}{N-1}} = \sqrt{1 \times \dfrac{2}{3} \times 0} = 0$, and herein lies the problem, because this is obviously wrong.
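Since there are only $3! = 6$ equally likely assignments, one can simply enumerate them (a brute-force check, not a distributional argument):

```python
from itertools import permutations
from statistics import mean, pstdev

# Count the number of correct (country, capital) pairs for each of the
# 3! equally likely random assignments.
outcomes = [sum(assigned == correct for assigned, correct in zip(p, range(3)))
            for p in permutations(range(3))]

print(outcomes)                          # [3, 1, 1, 0, 0, 1]
print(mean(outcomes), pstdev(outcomes))  # mean 1, standard deviation 1
```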
|
nuSTORM at CERN: Feasibility Study / Long, Kenneth Richard (Imperial College (GB)) The Neutrinos from Stored Muons, nuSTORM, facility has been designed to deliver a definitive neutrino-nucleus scattering programme using beams of $\bar{\nu}_e$ and $\bar{\nu}_\mu$ from the decay of muons confined within a storage ring. The facility is unique; it will be capable of storing $\mu^\pm$ beams with a central momentum of between 1 GeV/c and 6 GeV/c and a momentum spread of 16%. [...] CERN-PBC-REPORT-2019-003.- Geneva : CERN, 2019 - 150.
Physics Beyond Colliders at CERN: Beyond the Standard Model Working Group Report / Beacham, J. (Ohio State U., Columbus (main)) ; Burrage, C. (U. Nottingham) ; Curtin, D. (Toronto U.) ; De Roeck, A. (CERN) ; Evans, J. (Cincinnati U.) ; Feng, J.L. (UC, Irvine) ; Gatto, C. (INFN, Naples ; NIU, DeKalb) ; Gninenko, S. (Moscow, INR) ; Hartin, A. (U. Coll. London) ; Irastorza, I. (U. Zaragoza, LFNAE) et al. The Physics Beyond Colliders initiative is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and scientific infrastructures through projects complementary to the LHC and other possible future colliders. These projects will target fundamental physics questions in modern particle physics. [...] arXiv:1901.09966; CERN-PBC-REPORT-2018-007.- Geneva : CERN, 2018 - 150 p.
PBC technology subgroup report / Siemko, Andrzej (CERN) ; Dobrich, Babette (CERN) ; Cantatore, Giovanni (Universita e INFN Trieste (IT)) ; Delikaris, Dimitri (CERN) ; Mapelli, Livio (Universita e INFN, Cagliari (IT)) ; Cavoto, Gianluca (Sapienza Universita e INFN, Roma I (IT)) ; Pugnat, Pierre (Lab. des Champs Magnet. Intenses (FR)) ; Schaffran, Joern (Deutsches Elektronen-Synchrotron (DE)) ; Spagnolo, Paolo (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Ten Kate, Herman (CERN) et al. Goal of the technology WG set by PBC: exploration and evaluation of possible technological contributions of CERN to non-accelerator projects possibly hosted elsewhere: survey of suitable experimental initiatives and their connection to and potential benefit to and from CERN; description of identified initiatives and how their relation to the unique CERN expertise is facilitated. CERN-PBC-REPORT-2018-006.- Geneva : CERN, 2018 - 31.
AWAKE++: The AWAKE Acceleration Scheme for New Particle Physics Experiments at CERN / Gschwendtner, Edda (CERN) ; Bartmann, Wolfgang (CERN) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Calviani, Marco (CERN) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Damerau, Heiko (CERN) ; Depero, Emilio (ETH Zurich (CH)) ; Doebert, Steffen (CERN) ; Gall, Jonathan (CERN) et al. The AWAKE experiment reached all planned milestones during Run 1 (2016-18), notably the demonstration of strong plasma wakes generated by proton beams and the acceleration of externally injected electrons to multi-GeV energy levels in the proton driven plasma wakefields. During Run 2 (2021-2024) AWAKE aims to demonstrate the scalability and the acceleration of electrons to high energies while maintaining the beam quality. [...] CERN-PBC-REPORT-2018-005.- Geneva : CERN, 2018 - 11.
Particle physics applications of the AWAKE acceleration scheme / Wing, Matthew (University of London (GB)) ; Caldwell, Allen Christopher (Max-Planck-Institut fur Physik (DE)) ; Chappell, James Anthony (University of London (GB)) ; Crivelli, Paolo (ETH Zurich (CH)) ; Depero, Emilio (ETH Zurich (CH)) ; Gall, Jonathan (CERN) ; Gninenko, Sergei (Russian Academy of Sciences (RU)) ; Gschwendtner, Edda (CERN) ; Hartin, Anthony (University of London (GB)) ; Keeble, Fearghus Robert (University of London (GB)) et al. The AWAKE experiment had a very successful Run 1 (2016-8), demonstrating proton-driven plasma wakefield acceleration for the first time, through the observation of the modulation of a long proton bunch into micro-bunches and the acceleration of electrons up to 2 GeV in 10 m of plasma. The aims of AWAKE Run 2 (2021-4) are to have high-charge bunches of electrons accelerated to high energy, about 10 GeV, maintaining beam quality through the plasma and showing that the process is scalable. [...] CERN-PBC-REPORT-2018-004.- Geneva : CERN, 2018 - 11.
Summary Report of Physics Beyond Colliders at CERN / Jaeckel, Joerg (CERN) ; Lamont, Mike (CERN) ; Vallee, Claude (Centre National de la Recherche Scientifique (FR)) Physics Beyond Colliders is an exploratory study aimed at exploiting the full scientific potential of CERN's accelerator complex and its scientific infrastructure in the next two decades through projects complementary to the LHC, HL-LHC and other possible future colliders. These projects should target fundamental physics questions that are similar in spirit to those addressed by high-energy colliders, but that require different types of beams and experiments. [...] arXiv:1902.00260; CERN-PBC-REPORT-2018-003.- Geneva : CERN, 2018 - 66 p.
|
So I have two functions. $f(x) = e^{-x^2+1}$ and $g(x)=\sqrt{x^2-4x+3}$. I am then asked to determine the domain and range of
$a)f∘g,$
$b)g∘f$
I already did part $a)$ and the domain for part $b)$.
For part $a)$, the domain was $(-\infty,1]\cup[3,\infty)$ and the range was $(0,e]$.
For part $b$, I figured out that the domain was $(-\infty,-1]\cup[1,\infty)$. I am not sure how to find the range though. Normally, I would take the inverse of g∘f and find the domain of that, and although I can do it, I don't think I did it correctly.
Currently, I did figure out that $g∘f$ is $\sqrt{e^{-2x^2+2}-4e^{-x^2+1}+3}$. How do I find the range of this mess though? I attempted to take the inverse which I believe is:
$y=\pm\sqrt{1-\ln(2\pm\sqrt{1+y^2})}$
Although I know that Wolfram Alpha is not the arbiter of what is correct, it has generally been right, and my answer disagrees with what Wolfram Alpha has obtained (as seen here). In addition, I am not sure how Wolfram Alpha obtained its range (as seen here). This also looks REALLY messy.
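One way to tame the mess is the substitution $u = f(x) = e^{-x^2+1}$: on the domain, $u$ takes every value in $(0,1]$, and then $(g \circ f)(x) = \sqrt{(u-1)(u-3)}$, which sweeps out $[0,\sqrt{3})$. A quick numerical sample (the grid here is chosen arbitrarily) is consistent with this:

```python
import math

def gf(x):
    # (g o f)(x) = sqrt(e^{-2x^2+2} - 4 e^{-x^2+1} + 3), defined for |x| >= 1
    u = math.exp(1 - x * x)          # u = f(x) lies in (0, 1] on the domain
    return math.sqrt(u * u - 4 * u + 3)

xs = [1 + 0.001 * k for k in range(4000)]   # sample [1, 5); x <= -1 mirrors this
vals = [gf(x) for x in xs]
print(min(vals), max(vals), math.sqrt(3))   # min 0 at x = 1; values approach sqrt(3)
```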
Can anyone guide me as to how this was obtained? That would be much appreciated!
|
17 November 2016
Abstracts
Abstract: Finding cycles in graphs is a fundamental problem in algorithmic graph theory. In this paper, we consider the problem of finding and reporting a cycle of length 2k in a graph G with n nodes and m edges for constant k >= 2. A classic result by Bondy and Simonovits [J. Combinatorial Theory, 1974] implies that if m > 100 k n^{1+1/k}, then G contains a 2k-cycle, further implying that one needs to consider only graphs with m = O(n^{1+1/k}).
Previously the best known algorithms were an O(n^2) algorithm by Yuster and Zwick [J. Discrete Math 1997] and an O(m^{2-(1+1 / ceil(k/2))/(k+1)}) algorithm by Alon et al. [Algorithmica 1997]. We present an algorithm that uses O(m^{2k/(k+1)}) time and finds a 2k-cycle if one exists. This bound is O(n^2) exactly when m = Theta(n^{1+1/k}). When finding 4-cycles our new bound coincides with that of Alon et al., while for every k > 2 our new bound yields a polynomial improvement in m.
We also observe new conditional lower bounds for this problem. In particular, when k=3 we show that no combinatorial algorithm can decide whether G contains a 2k-cycle in time m^{3/2-eps} for any eps > 0 unless boolean matrix multiplication can be solved combinatorially in time n^{3-eps'} for some eps' > 0, which is widely believed to be false. Coupled with our main result, this gives tight bounds for finding 6-cycles combinatorially. Our conditional lower bounds also provide a separation in the complexity of finding 4- and 6-cycles combinatorially giving evidence that in the time complexity, the exponent of m should increase with k.
Yuster and Zwick noted that it is "plausible to conjecture that O(n^2) is the best possible bound in terms of n". We show "conditional optimality": if this hypothesis holds then our O(m^{2k/(k+1)}) algorithm is tight as well.
Our new algorithm is conceptually simple, and is in fact a slight modification of an O(mn) time algorithm by Monien [Ann. Discrete Math., 1985]. The time analysis, however, is significantly more involved. Here we introduce the notion of capped k-walks, which are walks of length k that visit only nodes according to a fixed ordering. Our technical contribution is several properties of such walks which may be of independent interest.
Joint work with Søren Dahlgaard and Morten Stöckel
In this talk I'll look at the pattern matching problem in the PRAM model. First I'll give a short introduction to the PRAM model followed by a short summary of known solutions for pattern matching in suffix trees in the PRAM model. I'll then show our new results that achieve optimal worst-case matching time O(log m), work O(m) and space O(n) for patterns of length m in a text of length n. The preprocessing part is non-deterministic. Our solution uses simple techniques based on hashing and Karp-Rabin fingerprints. The main challenge is to deal with colliding fingerprints. Finally, I'll shortly present the lower bound that shows the algorithm is time optimal.
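To make the sequential building blocks concrete, here is a standard (non-parallel) Karp-Rabin matcher; this sketch is mine and is not the PRAM algorithm from the talk, but it shows the fingerprinting and the collision check that the parallel version must also handle:

```python
# A sequential sketch of Karp-Rabin matching: compare rolling fingerprints,
# then verify candidate positions to weed out fingerprint collisions.
def rabin_karp(text: str, pattern: str, base: int = 256, mod: int = (1 << 61) - 1):
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)            # weight of the outgoing character
    fp_p = fp_t = 0
    for i in range(m):
        fp_p = (fp_p * base + ord(pattern[i])) % mod
        fp_t = (fp_t * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        # Fingerprints can collide, so confirm with a direct comparison.
        if fp_t == fp_p and text[i:i + m] == pattern:
            hits.append(i)
        if i + m < n:                       # roll the window one character right
            fp_t = ((fp_t - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits

print(rabin_karp("abracadabra", "abra"))   # [0, 7]
```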
Abstract: Given an edge-weighted undirected graph G=(V,E,w) with n vertices and m edges and t>=1, a subgraph H of G is called a t-spanner of G if for all u,v in V(G), the shortest path distance between u and v in H is at most a factor of t longer than in G. The two main measures of the sparseness of a spanner are the size (number of edges) and the lightness, defined as w(H)/w(MST(G)) where w(H) resp. w(MST(G)) is the total weight of edges in H resp. in an MST of G. It is known that O(n^(1+1/k)) size and O(n^(1/k)) lightness is optimal assuming the Erdős Girth Conjecture.
In this talk I will present a recent result obtaining fast construction of light spanners. Specifically, we present an algorithm that constructs a O(k)-spanner of size O(n^(1+1/k)) and lightness O(n^(1/k)) in time O(m + kn^(1+1/k+eps)) for any constant eps>0. As an important corollary we obtain an asymptotically optimal O(log n)-spanner of size O(n) and lightness O(1) in almost linear time O(m+n^(1+eps)). Joint work with: Stephen Alstrup, Morten Stöckel, and Christian Wulff-Nilsen
Abstract. In a storyline visualization, we visualize a collection of interacting characters (e.g., in a movie, play, etc.) by x-monotone curves that converge for each interaction, and diverge otherwise. Given a storyline with n characters, we show tight lower and upper bounds on the number of crossings required in any storyline visualization for a restricted case. In particular, we show that if (1) each meeting consists of exactly two characters and (2) the meetings can be modeled as a tree, then we can always find a storyline visualization with O(n log n) crossings. Furthermore, we show that there exist storylines in this restricted case that require Ω(n log n) crossings. Lastly, we show that, in the general case, minimizing the number of crossings in a storyline visualization is fixed-parameter tractable, when parameterized on the number of characters k. Joint work with I. Kostitsyna, M. Nöllenburg, A. Schulz, D. Strash, presented at Graph Drawing 2015
Abstract. A 2-partition (V_1,V_2) of a digraph D=(V,A) is a partition of V into disjoint sets V_1,V_2 such that V=V_1\cup V_2. We will consider problems of deciding whether a given digraph has a 2-partition (V_1,V_2) such that V_i induces a digraph with some specified property. In \cite{BJCH15} and \cite{BJH15}, Bang-Jensen, Cohen and Havet determined the complexity of 120 such problems. In this paper we consider tournaments and semicomplete digraphs and the complexity of the problems with graph properties minimum in-degree, minimum out-degree and minimum semidegree. While we can prove the existence of a polynomial algorithm for the (\delta^+\geq k, \delta^+\geq k)-partition problem for every k \in \mathbb{Z}, we can only prove the same for the (\delta^+\geq k, \delta^-\geq k)-partition and the (\delta^0 \geq k, \delta^0 \geq k)-partition problem in the case where k=1.
Abstract: In [Nat. Chem. Biol., 9(6):362--363, 2013], polypeptide sequences were designed such that the three-dimensional embedding of these sequences folds into a predefined polyhedron; the success was verified with atomic force microscopy. The biochemical design question is basically answered by finding a ``strong trace'', which essentially is identical to an embedding of a graph in a higher surface. This allows us to directly connect topological graph theory with the application of methods for efficient enumeration of all possible solutions. In this presentation we will introduce the problem and discuss methods for efficient enumeration of all possible one-face embeddings.
|
Fields Interact and Propagate as Waves
When we started discussing the electric and magnetic fields they seemed to be quite separate. We knew that any electric charge (moving or not) created an electric field, and that any moving charge created a magnetic field. Other than that there is not much similarity: the forces produced by the electric field look very different from the forces produced by the magnetic field, and there are no pieces of "magnetic charge" (monopoles) for magnetic field lines to start or end on. Then we started to discuss induction; we discovered that changing the magnetic flux through a loop caused an electric current to flow. By calculating the force on the moving charges within a wire we could show that we got a current only when the magnetic flux through the loop changed.
But one detail has been ignored in the preceding analysis. We know a change in flux, and hence a change in current, can be caused by increasing or decreasing the magnetic field. If we model our charges as starting at rest, then \(v = 0\) and the magnetic field (changing or not) does not seem capable of forcing them to move. Recall that magnetic forces should only exist for moving charges, as \(|\mathbf{F}_{\mathbf{B} \text{ on charge}}| = |q\mathbf{B} v \sin \theta | = 0\). So how does the current start? The answer to this question is that a changing magnetic field produces an electric field. The electric field so produced does not begin or end on charges; instead it connects with itself so that the field lines don't start or end (similar to the magnetic field). We should emphasize that even though these two methods of creating a current (creating forces on charges in the wire, or creating an electric field with a changing magnetic field) seem very different, for either of them, or any combination of the two, the method of calculating the voltage by looking at the change in flux works.
The fact that a changing magnetic field creates an electric field suggests that the electric and magnetic fields are more closely related than we originally thought. In fact, the rules of electromagnetism are inconsistent as we currently know them! If we kept only the rules we knew, the answers to some of our calculations would depend on how we choose to calculate them! The change that we make here (and which has been shown experimentally to be correct) is: a changing electric field creates a magnetic field.
If we accept this new rule, an interesting possibility arises. If we have a magnetic field that is changing, we can create an electric field. If that electric field changes, it can create a magnetic field. We can imagine a situation where we start a magnetic field going, and then it creates an electric field which is itself changing. This field creates a changing magnetic field, which creates a changing electric field, which creates a changing magnetic field, and so on. A proper mathematical treatment shows that not only can these disturbances occur, but that these disturbances do not happen in the same place – rather they propagate, and travel like material waves. We call these propagating disturbances in the electric and magnetic fields electromagnetic waves. They're more colloquially referred to as light.
Harmonic Plane Waves
While there are many different types of electromagnetic waves, including pulse waves and spherical waves, we will devote our attention to the plane wave. The reason is a practical one; at long distances from the source, wavefronts of most electromagnetic waves look flat, and the wave can be approximated as a plane wave. For electromagnetic waves, the \(\mathbf{E}\) and \(\mathbf{B}\) fields oscillate sinusoidally, as harmonic waves. Each has a harmonic wave equation: \[\mathbf{E} (x,t) = E_0 \sin \left( \dfrac{2 \pi t}{T} \pm \dfrac{2 \pi x}{\lambda} + \phi \right) \hat{\mathbf{e}}\] \[\mathbf{B} (x,t) = B_0 \sin \left( \dfrac{2 \pi t}{T} \pm \dfrac{2 \pi x}{\lambda} + \phi \right) \hat{\mathbf{b}}\]
The difference between these equations and the harmonic equations from earlier is the \(\hat{\mathbf{e}}\) and \(\hat{\mathbf{b}}\) at the end. These are vectors that specify the direction in which the electric and magnetic fields oscillate. The vector \(\hat{\mathbf{e}}\) is a unit vector, so its magnitude is \(| \hat{\mathbf{e}}| = 1\); it points along the direction in which the electric field is oscillating. Similar remarks apply for the magnetic field and its unit vector \(\hat{\mathbf{b}}\).
The directions \(\hat{\mathbf{e}}\) and \(\hat{\mathbf{b}}\) are related; for an electromagnetic wave traveling through free space the electric and magnetic fields oscillate perpendicular to one another, and both are perpendicular to the direction that the wave travels. If you hold your thumb, index, and middle fingers perpendicular to one another, you can always point your thumb in the direction of the \(\mathbf{E}\) field, your index finger in the direction of the \(\mathbf{B}\) field, and your middle finger will point in the direction that the wave is traveling. A model of an electromagnetic wave is shown below.
Because the oscillations in both the \(\mathbf{E}\) and \(\mathbf{B}\) fields are perpendicular to the direction of motion, an electromagnetic wave is a
transverse wave. The plane of polarisation is the plane containing the direction in which the electric field oscillates and the direction the wave travels (this is the plane containing \(\hat{\mathbf{e}}\) and \(\hat{\mathbf{k}}\) in the picture above).
The model above also demonstrates some characteristics found in the harmonic equations for the \(\mathbf{E}\) and \(\mathbf{B}\) fields. Notice that the \(\mathbf{E}\) and \(\mathbf{B}\) fields have the same wavelength \(\lambda\). They are also in phase, so they must have the same phase constant \(\phi\). That is why, unlike before, the \(\phi\) and \(\lambda\) don't have subscripts to specify which wave they reference. Furthermore, the disturbances in the \(\mathbf{E}\) and \(\mathbf{B}\) fields travel with the same speed, which tells us that the period \(T\) must be the same. Lastly, the amplitudes of the electric and magnetic fields, \(E_0\) and \(B_0\), are related: stronger magnetic field oscillations cause larger changes in the field, so we expect stronger electric fields too, and this is in fact the case. The relationship between the amplitudes of the electric and magnetic field oscillations is \[E_0 = c B_0\] where \(c\) is the speed of light (approximately \(3 \times 10^8 \text{ m/s}\)).
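These relationships can be checked with a short numeric sketch. The amplitude and wavelength below are made-up values; only the relation \(E_0 = cB_0\) and the harmonic form of the wave come from the text.

```python
import math

c = 3.0e8        # speed of light (m/s)
E0 = 100.0       # hypothetical electric field amplitude (V/m)
B0 = E0 / c      # amplitude relation E0 = c*B0, so B0 is about 3.3e-7 T

lam = 500e-9     # hypothetical wavelength (m)
T = lam / c      # shared period: both fields travel together at speed c

def E(x, t, phi=0.0):
    # harmonic plane wave; the '-' sign pairing corresponds to motion in +x
    return E0 * math.sin(2 * math.pi * t / T - 2 * math.pi * x / lam + phi)

# the pattern repeats after one period in time and one wavelength in space
assert abs(E(0.3e-6, 0.0) - E(0.3e-6, T)) < 1e-9 * E0
assert abs(E(0.3e-6, 0.0) - E(0.3e-6 + lam, 0.0)) < 1e-9 * E0
```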
Exercise
If an electromagnetic wave had its electric field pointing to the right of a page, and the magnetic field pointing to the top of the page, which way would the electromagnetic wave be traveling?
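One way to check the finger rule numerically: the wave travels along \(\mathbf{E} \times \mathbf{B}\) (a sketch assuming numpy; the specific field directions are those of the exercise).

```python
import numpy as np

E_hat = np.array([1.0, 0.0, 0.0])  # E field: to the right of the page (+x)
B_hat = np.array([0.0, 1.0, 0.0])  # B field: toward the top of the page (+y)

# the propagation direction is along E x B
k_hat = np.cross(E_hat, B_hat)
print(k_hat)  # [0. 0. 1.] : out of the page, toward the reader
```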
|
Browse by Person
Article (39)
Aad, G, Abbott, B, Abdallah, J et al. (2883 more authors) (2016)
Addendum to ‘Measurement of the $t\bar{t}$ production cross-section using eμ events with b-tagged jets in pp collisions at √s = 7 and 8 TeV with the ATLAS detector’. European Physical Journal C: Particles and Fields, 76. 642. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2855 more authors) (2016)
Performance of pile-up mitigation techniques for jets in pp collisions at √s=8 TeV using the ATLAS detector. European Physical Journal C, 76 (11). ISSN 1434-6044
Aad, G, Abajyan, T, Abbott, B et al. (2840 more authors) (2016)
Measurement of the centrality dependence of the charged-particle pseudorapidity distribution in proton–lead collisions at $\sqrt{s_{NN}} = 5.02$ TeV with the ATLAS detector. The European Physical Journal C, 76 (4). ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2879 more authors) (2016)
Centrality, rapidity, and transverse momentum dependence of isolated prompt photon production in lead-lead collisions at TeV measured with the ATLAS detector. Physical Review C, 93 (3). ISSN 0556-2813
Aad, G, Abbott, B, Abdallah, J et al. (2824 more authors) (2016)
Measurements of the Higgs boson production and decay rates and coupling strengths using pp collision data at $\sqrt{s}=7$ and 8 TeV in the ATLAS experiment. European Physical Journal C: Particles and Fields, 76. 6. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2854 more authors) (2015)
ATLAS Run 1 searches for direct pair production of third-generation squarks at the Large Hadron Collider. European Physical Journal C: Particles and Fields, 75 (10). 510. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2835 more authors) (2015)
Search for Higgs boson pair production in the $b\bar{b}b\bar{b}$ final state from pp collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (9). 412. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2825 more authors) (2015)
Search for heavy long-lived multi-charged particles in pp collisions at root s=8 TeV using the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (8). 362. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2819 more authors) (2015)
Constraints on the off-shell Higgs boson signal strength in the high-mass ZZ and WW final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (7). 335. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for a new resonance decaying to a W or Z boson and a Higgs boson in the $\ell\ell/\ell\nu/\nu\nu + b\bar{b}$ final states with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (6). 263. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2823 more authors) (2015)
Determination of spin and parity of the Higgs boson in the $WW^*\rightarrow e\nu\mu\nu$ decay channel with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 231. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2815 more authors) (2015)
Observation and measurements of the production of prompt and non-prompt $J/\psi$ mesons in association with a $Z$ boson in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 229. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2821 more authors) (2015)
Search for direct pair production of a chargino and a neutralino decaying to the 125 GeV Higgs boson in $\sqrt{s} = 8$ TeV $pp$ collisions with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (5). 208. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for $W' \rightarrow tb \rightarrow qqbb$ decays in $pp$ collisions at $\sqrt{s} = 8$ TeV with the ATLAS detector. European Physical Journal C: Particles and Fields, 75 (4). 165. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2822 more authors) (2015)
Search for Higgs and Z Boson Decays to J/ψγ and ϒ(nS)γ with the ATLAS Detector. Physical Review Letters, 114 (12). 121801. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2881 more authors) (2015)
Simultaneous measurements of the tt¯, W+W−, and Z/γ∗→ττ production cross-sections in pp collisions at √s=7 TeV with the ATLAS detector. Physical Review D - Particles, Fields, Gravitation and Cosmology, 91 (5). 052005. ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2015)
Search for dark matter in events with heavy quarks and missing transverse momentum in pp collisions with the ATLAS detector. European Physical Journal C, 75 (2). 92. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (2896 more authors) (2015)
Measurements of Higgs boson production and couplings in the four-lepton channel in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 91 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2888 more authors) (2014)
Search for nonpointing and delayed photons in the diphoton and missing transverse momentum final state in 8 TeV pp collisions at the LHC using the ATLAS detector. Physical Review D, 90 (11). ISSN 1550-7998
Aad, G, Abajyan, T, Abbott, B et al. (2793 more authors) (2014)
Measurements of normalized differential cross sections for tt¯ production in pp collisions at √(s)=7 TeV using the ATLAS detector. Physical Review D, 90 (7). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2886 more authors) (2014)
Measurement of the Higgs boson mass from the H→γγ and H→ZZ∗→4ℓ channels in pp collisions at center-of-mass energies of 7 and 8 TeV with the ATLAS detector. Physical Review D, 90 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (2878 more authors) (2014)
Search for high-mass dilepton resonances in pp collisions at √s=8 TeV with the ATLAS detector. Physical Review D, 90. 052005. ISSN 1550-7998
Aad, G, Abajyan, T, Abbott, B et al. (2920 more authors) (2013)
Evidence for the spin-0 nature of the Higgs boson using ATLAS data. Physics Letters B, 726 (1-3). pp. 120-144. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (1825 more authors) (2012)
Search for contact interactions in dilepton events from pp collisions at root s=7 TeV with the ATLAS detector. Physics Letters B, 712 (1-2). pp. 40-58. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (2923 more authors) (2012)
Measurement of D*± meson production in jets from pp collisions at √s=7 TeV with the ATLAS detector. Physical Review D, 85 (5). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (3057 more authors) (2012)
Search for the Standard Model Higgs Boson in the Diphoton Decay Channel with 4.9 fb−1 of pp Collision Data at √s=7 TeV with ATLAS. Physical Review Letters, 108. 111803. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2775 more authors) (2012)
Measurement of the ZZ Production Cross Section and Limits on Anomalous Neutral Triple Gauge Couplings in Proton-Proton Collisions at √s=7 TeV with the ATLAS Detector. Physical Review Letters, 108 (4). 041804. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2992 more authors) (2012)
K0s and Λ production in pp interactions at √s=0.9 and 7 TeV measured with the ATLAS detector at the LHC. Physical Review D, 85 (1). ISSN 1550-7998
Aad, G, Abbott, B, Abdallah, J et al. (3022 more authors) (2011)
Search for Dilepton Resonances in pp Collisions at √s=7 TeV with the ATLAS Detector. Physical Review Letters, 107 (27). ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3028 more authors) (2011)
Measurement of the transverse momentum distribution of Z/gamma* bosons in proton-proton collisions at root s=7 TeV with the ATLAS detector. Physics Letters B, 705 (5). pp. 415-434. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3023 more authors) (2011)
Search for a standard model Higgs boson in the H→ZZ→ℓ(+)ℓ(-)νν decay channel with the ATLAS detector. Physical Review Letters, 107 (22). 221802. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3017 more authors) (2011)
Search for new phenomena with the monojet and missing transverse momentum signature using the ATLAS detector in sqrt(s) = 7 TeV proton-proton collisions. Physics Letters B, 705 (4). pp. 294-312. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3033 more authors) (2011)
Measurement of the W+W− Cross Section in √s=7 TeV pp Collisions with ATLAS. Physical Review Letters, 107. 041802. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3046 more authors) (2011)
Measurement of the production cross section for W-bosons in association with jets in pp collisions at √s=7 TeV with the ATLAS detector. Physics Letters B, 698 (5). pp. 325-345. ISSN 0370-2693
Aad, G, Abbott, B, Abdallah, J et al. (3024 more authors) (2011)
Measurement of Dijet Azimuthal Decorrelations in pp Collisions at √s=7 TeV. Physical Review Letters, 106. 172002. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (3034 more authors) (2010)
Observation of a Centrality-Dependent Dijet Asymmetry in Lead-Lead Collisions at root s(NN)=2.76 TeV with the ATLAS Detector at the LHC. Physical Review Letters, 105 (25). 252303. ISSN 0031-9007
Aad, G, Abbott, B, Abdallah, J et al. (2517 more authors) (2010)
The ATLAS Simulation Infrastructure. European Physical Journal C: Particles and Fields, 70 (3). pp. 823-874. ISSN 1434-6044
Aad, G, Abbott, B, Abdallah, J et al. (3177 more authors) (2010)
Measurement of the W -> lv and Z/gamma* -> ll production cross sections in proton-proton collisions at root s=7 TeV with the ATLAS detector. Journal of High Energy Physics. 60. ISSN 1029-8479
|
Let $X:=W^{1,1}(\mathbb{R}^{2})\cap W^{1,\infty}(\mathbb{R}^{2})$ be given. Is it true that $$ X\overset{c}\hookrightarrow L^{1}(\mathbb{R}^{2}),$$ meaning that $X$ is compactly embedded in $L^{1}(\mathbb{R}^{2})$?
The typical Fréchet–Kolmogorov compactness theorem cannot be applied here directly, since we would need the underlying measure space to be bounded (whereas here it is all of $\mathbb{R}^{2}$).
There is a result by Adams stating that, for $p\in [1,\infty)$, one might have $$ W^{1,p}_{0}(\mathbb{R^{2}})\overset{c}{\hookrightarrow} L^{p}(\mathbb{R}^{2}), $$ and I was hoping that the additional Lipschitz continuity built into $X$ could make the compact embedding hold for $X$ as well!
Does anyone have an idea where I could find such a result, or -- even better -- if I have missed something obvious and the result is quite easy to prove or to disprove?
Thank you very much and best,
Alex
|
So the question states, Let $B = \{x = (x_1,x_2,x_3) \in \mathbb{R}^3: x_1^2 +x_2^2 +x_3^2 \leq 1 \}$ be the unit ball in $\mathbb{R}^3$. Compute the diameter of $B$ for each of the following metrics.
note: $diam(B) = \sup\{d(x,y): x,y \in B\}$
I know the diameter is 2, but I want to be able to do this in general for an arbitrary distance. I think seeing this one will help me do others. Here is what I have so far using the Euclidean distance.
Let $x,y \in B$ then \begin{eqnarray*} d(x,y) &=& ((x_1 - y_1)^2+(x_2 -y_2)^2 + (x_3-y_3)^2)^{1/2} \\ &=& ((x_1^2 +x_2^2+x_3^2)+(y_1^2+y_2^2+y_3^2) -2(x_1y_1+x_2y_2+x_3y_3))^{1/2} \\ &\leq& (1 + 1 - 2(x_1y_1+x_2y_2+x_3y_3))^{1/2} \\ &=& (2(1 -(x_1y_1+x_2y_2+x_3y_3)))^{1/2} \end{eqnarray*}
I have yet to use the $\sup$, and I'm not sure how to move on from here. Do I need to use Lagrange multipliers, or is there a better way to solve this?
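One way to sanity-check the derivation numerically: the bound $2(1 - x\cdot y)$ above is largest when $x \cdot y = -1$, i.e. for antipodal points on the boundary sphere. A quick numpy sketch (the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# sample 5000 points uniformly in the closed unit ball
u = rng.normal(size=(5000, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)       # uniform directions
pts = u * rng.uniform(size=(5000, 1)) ** (1.0 / 3)  # radii with the right law

# random pairs never exceed 2 ...
dists = np.linalg.norm(pts[:2500] - pts[2500:], axis=1)
assert dists.max() <= 2.0

# ... and the bound is attained by an antipodal pair on the boundary (x.y = -1)
x = np.array([1.0, 0.0, 0.0])
print(np.linalg.norm(x - (-x)))  # 2.0
```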
|
This is only a partial answer - it shows that the sequence converges, but does not give the limit.
Define $$I_n(x) = \int_0^x \cos^{2n+1}(t)\,dt.$$ We have $I_n(\pi) = 0$ because $\cos^{2n+1}(\pi/2 + t) = -\cos^{2n+1}(\pi/2 - t)$. For $a \in (0,2\pi)$ define $$f(n,a) = \int_0^a \left(1 + \tfrac{1}{4}\cos^{2n+1}(t)\right)dt = a + \tfrac{1}{4} I_n(a).$$ Then $a_{n+1} = f(n,a_n)$. For $a_0 = \pi$ we get $a_n = \pi$ for all $n$; this sequence trivially converges to $\pi$. We claim that $$a < f(n,a) <\pi \text{ for } 0 < a < \pi .$$ This implies that $(a_n)$ is bounded and strictly increasing, hence convergent. The case $\pi < a < 2\pi$ can be treated similarly (we get $\pi < f(n,a) < a$, so that $(a_n)$ is bounded and strictly decreasing).
Let us prove the above claim. For $0 < a \le \pi/2$ we have $0 < I_n(a) < a \le \pi/2$ which holds because $0 \le \cos^{2n+1}(t) \le 1$ for $0 \le t \le \pi/2$. For $\pi/2 < a < \pi$ we have $I_n(a) = I_n(\pi) - \int_a^\pi \cos^{2n+1}(t)dt = - \int_a^\pi \cos^{2n+1}(t)dt = \int_a^\pi \lvert \cos^{2n+1}(t) \rvert dt \in (0, \pi - a)$.
Added: For $a < b$ we have $f(n,a) < f(n,b)$ because $f(n,b) - f(n,a) = b - a + \tfrac{1}{4}\int_a^b \cos^{2n+1}(t)\,dt \ge b - a - \tfrac{1}{4}\int_a^b \lvert \cos^{2n+1}(t) \rvert\, dt > b - a - \tfrac{1}{4}(b-a) > 0$.
Letting $\overline{a}_0$ denote the limit of the sequence $(a_n)$ starting with $a_0$, we see that $\overline{a}_0 \le \overline{b}_0$ when $a_0 < b_0$.
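The monotone-bounded behavior can also be sanity-checked numerically. A sketch using trapezoidal integration; the starting value $a_0 = 1 \in (0,\pi)$ is arbitrary:

```python
import numpy as np

def I(n, a, m=20001):
    # I_n(a) = integral of cos^(2n+1)(t) from 0 to a (trapezoid rule)
    t = np.linspace(0.0, a, m)
    y = np.cos(t) ** (2 * n + 1)
    return float(np.sum((y[:-1] + y[1:]) * np.diff(t)) / 2.0)

a = 1.0  # an arbitrary starting point a_0 in (0, pi)
for n in range(60):
    a_next = a + 0.25 * I(n, a)  # a_{n+1} = f(n, a_n) = a_n + I_n(a_n)/4
    assert a < a_next < np.pi    # strictly increasing, bounded above by pi
    a = a_next
```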
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Your method for solving the problem is correct, but you have made some mistakes.
First, you have assumed that the current $i$ through the inductor towards Earth equals $-\frac{dq}{dt}$ where $q$ is the charge on the sphere. This is not quite correct because charge is also arriving on the sphere from the electron beam at the constant rate $i_0$. So you should have $$\frac{dq}{dt}=i_0 -i$$
Then we can differentiate and substitute into your 1st equation : $$\frac{q}{C}=L\frac{di}{dt}=-L\frac{d(i_0-i)}{dt}$$ $$\frac{1}{C}\frac{dq}{dt}=\frac{1}{C}(i_0-i)=-L\frac{d^2(i_0-i)}{dt^2}$$ $$\frac{d^2(i_0-i)}{dt^2}+\frac{1}{LC}(i_0-i)=0$$ $$(i_0-i)=A\sin(\omega t+\phi)$$ where $\omega^2 =\frac{1}{LC}$. I have used $\sin$, but you can instead use $\cos$; then the value of $\phi$ is different.
$A, \phi$ are constants which are to be determined from the initial conditions. When $t=0$ there is no charge on the sphere and no current flowing through the inductor ($i=0$) and also no voltage across the inductor ($V_L=L\frac{di}{dt}=0$) therefore $$i_0=A\sin\phi$$ $$-\frac{di}{dt}=0=\omega A\cos\phi$$ $$\implies \phi=\frac{\pi}{2}, A=i_0$$ $$i=i_0(1-\cos\omega t)$$ We can see from this that the maximum current through the inductor is $2i_0$, which occurs when $\cos\omega t=-1$.
So what is $i_0$? This is where you make another mistake or two. The rate at which charge falls onto the sphere is $$i_0=\text{cross-sectional area} \times \text{electron velocity} \times \text{beam number density} \times \text{electron charge} = -\pi R^2 une$$ (
Comment : This is not realistic because as -ve charge accumulates on the sphere other electrons in the beam will be repelled from it, so $i_0$ will not be a constant, as assumed above, but will decrease and will be a function of $q$ and therefore of time $t$. However, I think we are expected to ignore this effect, which would make the problem vastly more complicated.)
What about maximum charge? We integrate the equation for $i(t)$ using the initial condition $q=0$ when $t=0$ : $$i(t)=i_0-\frac{dq}{dt}=i_0(1-\cos\omega t)$$ $$\frac{dq}{dt}=i_0\cos\omega t$$ $$q=\frac{i_0}{\omega}\sin\omega t$$ Maximum charge occurs when $\sin\omega t=\pm 1$ and is $$Q=\pm \frac{i_0}{\omega}=\pm \sqrt{LC}\pi R^2 une$$
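As a sanity check, the closed-form expressions for $i(t)$ and $q(t)$ can be verified numerically against the charge balance $\frac{dq}{dt} = i_0 - i$. The component values below are made up for illustration:

```python
import numpy as np

L, C, i0 = 2.0e-3, 5.0e-6, 1.0e-3  # hypothetical inductance, capacitance, beam current
w = 1.0 / np.sqrt(L * C)           # omega^2 = 1/(LC)

t = np.linspace(0.0, 4 * np.pi / w, 200001)  # two full oscillation periods
i = i0 * (1.0 - np.cos(w * t))               # current through the inductor
q = (i0 / w) * np.sin(w * t)                 # charge on the sphere

# charge balance dq/dt = i0 - i holds along the whole trajectory
dqdt = np.gradient(q, t)
assert np.max(np.abs(dqdt[1:-1] - (i0 - i)[1:-1])) < 1e-3 * i0

# peak values: i_max = 2*i0 and |q|_max = i0/w = sqrt(LC)*i0
assert np.isclose(i.max(), 2 * i0)
assert np.isclose(np.abs(q).max(), i0 / w, rtol=1e-6, atol=0.0)
```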
|
NOTE - I didn't receive any answer here, I think because my first post was not clear, so I have made up an entirely new example:
$K={\{id,r^2,r^4,s,r^2s,r^4s}\}$ is a proper subgroup of the dihedral group $D_6$. As it is shown here by Gerry Myerson, $K$ is isomorphic to $S_3$. We label the vertices of the hexagon 1 through 6. Forget about 2, 4, and 6, and see what $K$ does to 1, 3, and 5. You will see that the 6 elements of $K$ are precisely the 6 permutations of 1, 3, 5, thus, precisely $S_3$; i.e: $$\begin{align*} \mathrm{id} &\longleftrightarrow \mathrm{id}\\ \tau_s=(35) &\longleftrightarrow s\\ (135) &\longleftrightarrow r^4\\ (13) &\longleftrightarrow r^4s\\ \tau_{r^2s}=(15) &\longleftrightarrow r^2s\\ \tau_{r^2}=(153) &\longleftrightarrow r^2 \end{align*},$$ $r$ means one rotation clockwise and $s$ means flip on horizontal line. In order to achieve the following new state of hexagon,
we first do $r^2$ then $s$. But in $S_3$ it happens in the reversed way, i.e. first $\tau_s$ (the permutation corresponding to $s$) and then $\tau_{r^2}$ (the permutation corresponding to $r^2$), or mathematically
$(15)(\mathrm{id})=(153)(35)(\mathrm{id}) \Leftrightarrow \tau_{r^2s} (\mathrm{id}) = \tau_{r^2} \circ \tau_s (\mathrm{id})$.
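The claimed composition can be checked concretely. Writing each permutation from the table above as a dict on $\{1,3,5\}$, function composition $f \circ g$ applies $g$ first:

```python
# Permutations on {1, 3, 5} as dicts; (f o g)(x) = f(g(x)), so g acts FIRST.
tau_s  = {1: 1, 3: 5, 5: 3}   # (35), the permutation corresponding to s
tau_r2 = {1: 5, 3: 1, 5: 3}   # (153), the permutation corresponding to r^2

def compose(f, g):
    return {x: f[g[x]] for x in g}

result = compose(tau_r2, tau_s)  # tau_{r^2} after tau_s
print(result)  # {1: 5, 3: 3, 5: 1}, i.e. the transposition (15) = tau_{r^2 s}
```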
My question is that to reach to the same figure
why (both in mathematical and intuitive explanations) in $K$ it is right-to-left (first $r^2$ then $s$) but in $S_3$ it is left-to-right (first $\tau_s$, the permutation corresponding to $s$, and then $\tau_{r^2}$, the permutation corresponding to $r^2$)?
EDIT - My knowledge of group theory is limited to the first few chapters of C. C. Pinter's Abstract Algebra, and I highly appreciate easy-to-understand explanations. My question is about the reason for the reversed order of 'actions', intuitively/geometrically/mathematically.
Thank you.
|
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
While I was working on noise profile creation for darktable, I had to learn how variance stabilization transforms work.
So, here is a summary of what I learnt about this.

What are variance stabilization transforms?

Noise variance is not constant: the variance depends on the light intensity. Typically, variance is higher in bright areas than in dark areas.
Plenty of noise algorithms are designed to work on images that have constant variance.
To use these algorithms, we can transform the data so that the variance becomes constant. Such a transform is called a variance stabilizing transform.

Why and how does it work?

The first thing to know is how the variance evolves depending on the mean. Let's say we have empirically: V(X)=g(E[X]). We can try fitting a model to these empirical data. For instance, for creating darktable's noise profiles, we use the Gaussian-Poissonian model, where the variance is assumed to evolve linearly with the mean: V(X)=a*E[X]+b. So, we try to find a and b such that the model fits reality as closely as possible (in practice, in darktable, these a and b are exactly what your noise profiles are). At this step, the "only" thing we have to do is to find a model that fits reality well.
Then, we can design the transform.
Let f be our variance stabilization transform. We want V(f(X))=c, and we know V(X)=g(E[X]). Let's write m = E[X]. Using a Taylor expansion (see https://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables), we get: V(f(X)) \approx V(f(m)+f'(m)(X-m)) Which we can simplify using the properties of variance: V(f(X)) \approx V(f'(m)(X-m)) V(f(X)) \approx f'(m)^2*V(X-m) V(f(X)) \approx f'(m)^2*V(X)
So our condition V(f(X))=c gives us the following condition:
f'(m)^2*V(X)=c We know that V(X)=g(E[X]), so we get: f'(m)^2*g(m)=c For all values of m such that g(m)\neq 0, we have: f'(m)=\sqrt{\frac{c}{g(m)}}
We usually choose c=1:
f'(m)=\frac{1}{\sqrt{g(m)}}
From this, we can integrate to get a function that stabilizes the variance.
For instance, with the Gaussian-Poissonian model: f'(m)=\frac{1}{\sqrt{am+b}} We can choose the following antiderivative: f(m)=\frac{2}{a}\sqrt{am+b}
If we transform our data by applying this function, the noise variance becomes 1.
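The whole recipe can be sketched numerically: simulate flat patches whose noise follows V(X)=a*E[X]+b, apply f, and check that the variance becomes roughly constant. The values of a and b below are made up for illustration; they are not real darktable profile values.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.5, 4.0  # hypothetical noise-profile parameters: V(X) = a*E[X] + b

def f(x):
    # f(m) = (2/a) * sqrt(a*m + b), so that f'(m) = 1/sqrt(a*m + b);
    # the argument is clamped at 0 for the (extremely rare) samples below -b/a
    return (2.0 / a) * np.sqrt(np.maximum(a * x + b, 0.0))

for m in (10.0, 100.0, 1000.0):
    # flat patch of true intensity m, with signal-dependent noise V = a*m + b
    x = m + rng.normal(0.0, np.sqrt(a * m + b), size=200_000)
    print(f"mean {m:6.0f}: var before = {np.var(x):8.2f}, var after = {np.var(f(x)):.3f}")
# 'var after' stays close to 1 at every intensity
```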
We can then denoise the data using our favorite denoising algorithm, and perform the inverse transformation afterwards.

In practice

Finding a model that fits reality perfectly and that is easy to compute is not easy. At least we can get as close as we can.
Here is an example of a fit using the Gaussian-Poissonian model (see how the variance varies with the mean):
Here is the variance of the transformed data:
Here is another example of a fit using a different model, where we assume V(X)=a*(E[X]+0.0005)^b:
Here is the variance of the transformed data:
Here, the second model gives a better variance stabilization in the dark areas, which can make denoising very noisy images easier.
In darktable, what we currently have is the first model.
In RawTherapee, while AFAIK there are no noise profiles, a variance stabilization transform is still performed: the gamma parameter in the noise reduction module lets the user find the variance-stabilizing transform himself. A gamma equal to 2 gives roughly the same results as the variance stabilization we currently use in darktable. If you find the right gamma value, you will get something similar to what we have with the second model shown here. Note that depending on the gamma value chosen, the variance will not always be stabilized, but it can be
|
This question was motivated by this related one: How "far" a differential form is from an exterior product .
Let $\mathbb{V}$ be a vector space of dimension $n$ with underlying field $\mathbb{F}$, and say (for lack of a better term) that the
wedge rank of a $k$-form $$\phi \in \Lambda^k \mathbb{V}^*$$ is the minimum number $r$ for which there exist wedge products $v_a^1 \wedge \cdots \wedge v_a^k$, $a = 1, \ldots, r$, of $1$-forms $v_a^b \in \mathbb{V}^*$, $b = 1, \ldots, k$, such that$$\phi = \sum_{a = 1}^r v_a^1 \wedge \cdots \wedge v_a^k.$$(For convenience, we can declare the empty sum to have value the $0$ k-form, so that the wedge rank of $0$ is $0$.) In general, given $\phi$, what is an effective way to determine its wedge rank $r$?
We can make a few obvious remarks: First, $r \leq \dim \Lambda^k \mathbb{V}^* = {{n}\choose{k}}$, but in general it is much smaller, and anyway for nonzero $0$-, $1$- and $n$-forms, $r = 1$, and exploiting the natural isomorphism $\Lambda^{n - 1} \mathbb{V}^* \cong \mathbb{V} \otimes \Lambda^n \mathbb{V}^*$ gives that the same applies to nonzero $(n - 1)$-forms.
For $k = 2$, a $2$-form $\phi$ has wedge rank $1$ (that is, it is decomposable) iff $\phi \wedge \phi = 0$, and we can exploit the isomorphism $\Lambda^{n - 2} \mathbb{V}^* \cong \Lambda^2 \mathbb{V} \otimes \Lambda^n \mathbb{V}^*$ to make an analogous statement about the $k = n - 2$ case. Furthermore, if $n$ is even, the wedge rank of $\phi$ is exactly $r$ iff $$\underbrace{\phi \wedge \cdots \wedge \phi}_r \neq 0 \qquad \text{but} \qquad \underbrace{\phi \wedge \cdots \wedge \phi}_{r + 1} = 0.$$ (Perhaps something similar holds for odd $n$?)
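For $k = 2$ this can also be checked numerically: identifying a 2-form with an antisymmetric matrix, its wedge rank is half the matrix rank. A small sketch (random 1-forms are generically independent, so the computed rank is the expected one with probability 1):

```python
import numpy as np

rng = np.random.default_rng(2)

def wedge_rank_2form(A):
    # a 2-form on R^n corresponds to an antisymmetric matrix A;
    # its wedge rank equals half the matrix rank of A
    assert np.allclose(A, -A.T)
    return np.linalg.matrix_rank(A) // 2

# build phi as a sum of r decomposable pieces v_a ^ w_a
n, r = 6, 2
A = np.zeros((n, n))
for _ in range(r):
    v, w = rng.normal(size=n), rng.normal(size=n)
    A += np.outer(v, w) - np.outer(w, v)

print(wedge_rank_2form(A))  # 2 for generic (independent) random 1-forms
```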
In higher tensor ranks, the story quickly becomes more complicated. For example, if $\dim \mathbb{V} = 7$ (and $\mathbb{F}$ perfect and $\text{char } \mathbb{F} \neq 2$), the tensor rank of a $3$-form $\phi$ is at most $5$ (this already seems nonobvious). It turns out (at least over $\mathbb{R}$ and $\mathbb{C}$) that $r = 5$ iff the $\Lambda^7 \mathbb{V}^*$-valued bilinear form $$(X, Y) \mapsto (i_X \phi) \wedge (i_Y \phi) \wedge \phi$$ is nondegenerate, but the rank of the bilinear form does not determine $r$ for all smaller values of $r$. Anyway, this particular property seems essentially unique to this $(n, k)$.
There's a further complication, namely that the wedge rank of a $k$-form need not remain the same under extension of the base field. This phenomenon already shows up in the smallest-dimensional case not covered by the above considerations: If $\mathbb{F} = \mathbb{R}$ and $\dim \Bbb V = 6$, there is a $3$-form whose stabilizer under the pullback action of $GL(\mathbb{V}) \cong GL(6, \mathbb{R})$ on $\Lambda^3 \mathbb{V}^*$ is exactly $SU(3)$, and any such $3$-form has wedge rank $4$ (in fact, there is a single $GL(\Bbb V)$-orbit of such $3$-forms, and it is open). When viewed as an element of the complex vector space $\mathbb{V} \otimes_{\mathbb{R}} \mathbb{C}$, however, any such $3$-form has wedge rank $2$. So, the structure of the underlying field $\mathbb{F}$ plays a (to me) subtle role, and quite possibly it turns out this question is easier to answer over algebraically closed fields.
|
Dynamical behaviors of stochastic type K monotone Lotka-Volterra systems
Department of Mathematics, Harbin Institute of Technology (Weihai), Weihai 264209, China
Two n-species stochastic type K monotone Lotka-Volterra systems are proposed and investigated. For non-autonomous system, we show that there is a unique positive solution to the model for any positive initial value. Moreover, sufficient conditions for stochastic permanence and global attractivity are established. For autonomous system, we prove that for each species, there is a constant which can be represented by the coefficients of the system. If the constant equals 1, then the corresponding species will be nonpersistent on average. To illustrate the theoretical results, the corresponding numerical simulations are also given.
Mathematics Subject Classification: Primary: 60G15, 60H10; Secondary: 37A50.

Citation: Dejun Fan, Xiaoyu Yi, Ling Xia, Jingliang Lv. Dynamical behaviors of stochastic type K monotone Lotka-Volterra systems. Discrete & Continuous Dynamical Systems - B, 2019, 24 (6) : 2901-2922. doi: 10.3934/dcdsb.2018291
|
Let $A(n, B, d)$ and $B(n, d)$ each be a Turing machine, random-access machine, or C program with finite tape (which I'm going to denote "Turing machine*" and "random-access machine*" respectively), with $n \in \mathbb{N}$. $A$ and $B$ are the same type of machine. For any fixed $N \in \mathbb{N}$, the statements below need not hold when $n \leq N$.
$n + |d|$ is the number of spatial units on $B$'s tape if $B$ is a Turing machine* or random-access machine*. If it is a C program, $n$ is the number of bytes in $B$'s read-write memory (registers are neglected; see the last sentence of the previous paragraph). The memory $d$ takes up is read-only and not accounted for. All spatial units regarded are capable of holding the same amount of information.
$d$ is the argument data. You may assume $n$ to be included in $d$.
If $A$ is a Turing machine* or random-access machine*, it may access $2 \cdot (n + |d|) + c$ spatial units where $c$ is constant.
If $A$ is a C program, it may access $n + c$ bytes of read-write memory. The memory $d$ takes up is read-only and not accounted for. It may have a function which, for any $p, k \in \mathbb{N}$, C program $B$, and $d$, returns the $k$th byte of $B(n, d)$'s memory after $p$ steps.
For any of the situations $B \in \{\text{Turing machine*}, \text{random-access machine*}, \text{C program}\}$: Is there an $A$ which decides whether $B(n, d)$ will halt?
I care about the reasons as to why this is the case. Unfortunately, I forgot a lot of what I learned about computability. I'm sorry if this is super trivial or unknown.
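One observation that seems relevant for all three machine types: since the memory is finite, $B(n, d)$ has only finitely many configurations, so its run either halts or revisits a configuration and loops forever. Floyd's tortoise-and-hare cycle detection stores only two configurations at a time, i.e. $O(n + |d|)$ space, which fits the budget given to $A$ above. A minimal sketch, where `step` is a hypothetical successor function standing in for one step of $B$ (returning `None` once $B$ has halted):

```python
def halts(step, initial):
    """Decide halting for a machine with finitely many configurations.

    `step(c)` returns the successor configuration, or None once halted.
    Because the configuration space is finite, the run either halts or
    enters a cycle; Floyd's tortoise-and-hare detects a cycle while
    storing only two configurations at a time.
    """
    slow = fast = initial
    while True:
        fast = step(fast)
        if fast is None:
            return True
        fast = step(fast)
        if fast is None:
            return True
        slow = step(slow)
        if slow == fast:      # a cycle: the machine runs forever
            return False

# Toy examples: an 8-state counter that halts at 0, and one that cycles.
print(halts(lambda c: None if c == 0 else (c - 1), 5))   # True
print(halts(lambda c: (c + 2) % 8, 1))                   # False
```

Since each configuration occupies at most $n + |d|$ spatial units, storing `slow` and `fast` takes roughly $2(n + |d|)$ units, matching the bound allowed for $A$.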
|
With the spring attached to a wall (and not the ceiling) one might assume that the mass is sliding on a friction-less horizontal surface. Then the starting kinetic energy = the final potential energy: (½) m vo^2 = (½) k x^2 where k is the spring constant and x is the stretch. If the spring is hanging vertically, the rest position occurs where its force ...
It depends what sign convention you choose. Suppose you have chosen upward displacement as negative; then the velocity and acceleration directions are different, hence you can take a = (+). For a basic problem, remember: if the direction of velocity is opposite to a force like gravity, then by assigning upward displacement as positive you can take a = (-).
Initially the mass has kinetic energy $\frac 1 2 mv^2$ and the spring has potential energy $E_{pot}=0$. So $E_{total} = \frac 1 2 mv^2$. When the spring is fully extended, the velocity of the mass is $0$, so $E_{kin}=0$, and the potential energy is a function $f$ of the spring's maximum extension $x_{max}$. So $E_{total}=f(x_{max})$. Since there is no ...
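The energy balance in the two answers above pins down the maximum stretch: $(1/2) m v_0^2 = (1/2) k x_{max}^2$ gives $x_{max} = v_0 \sqrt{m/k}$. A minimal sketch with illustrative numbers (the mass, speed, and spring constant are not from the question):

```python
import math

# Energy conservation for a mass hitting a spring on a frictionless surface:
#   (1/2) m v0^2 = (1/2) k x_max^2   =>   x_max = v0 * sqrt(m / k)
def max_compression(m, v0, k):
    return v0 * math.sqrt(m / k)

# Illustrative values: 2 kg moving at 3 m/s, spring constant 50 N/m.
x = max_compression(2.0, 3.0, 50.0)
print(f"x_max = {x:.3f} m")   # 3 * sqrt(2/50) = 0.6 m
```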
The equation you wrote is correct. All you need to do is take the partial derivative. I think this is where you got confused. You can check the difference between partial and total derivative. So here is the answer:$\dfrac{\partial KE}{\partial t} =\dfrac{\partial ( \dfrac{1}{2}\cdot m \cdot \dot{\vec{r}} \cdot \dot{\vec{r}})}{\partial t} = 0 $$\dfrac{\...
The 1st one is correct. The equation describes the action of the external force; as there's no external horizontal force acting on m1, it's not in the equation. m1 is instead accelerated by the tension of the string. Force F pushes horizontally on cart M, which pushes on block m2. They therefore accelerate at a = F / (M + m2). In the absence of friction, F cannot impart any ...
Gravity pulls down on M and m1, but the ground holds them up. These vertical forces are irrelevant in the absence of friction to convert them to horizontal forces. Gravity also pulls down on m2. For it not to move, the tension in the string must be just enough to hold it up: T = m2 g. Force F pushes horizontally on cart M, which pushes on block m2. They ...
I think you are querying the necessity of the statement "Infinite accelerations are not allowed". To get zero displacement one has to have a positive displacement and a negative displacement, and hence at some stage the velocity must be zero, as shown on the left-hand graph, which is continuous and well behaved. Now what about going from a velocity $...
In part. This is the equation of position as a function of time for uniformly accelerated motion with $t$ shifted by $t_0=2s$, i.e. in terms of $(t-t_0)$ rather than just $t$. Note that the velocity at $t=t_0=2s$ is $u$, but that's not the initial velocity. Since the acceleration is constant with $t$, you can read it off at any $t$, including $t=t_0=2s$, ...
So the answer should be given by taking a particular frame of reference. Imagine you are in the elevator: you will feel that the acceleration of the stone is g+a downwards. But now imagine that you're on the ground looking at the stone falling: you will see that the acceleration is simply g downwards. Hence I think that the answer was given taking ground ...
Your teacher is right. You are confused about acceleration and velocity, which can be tricky. Acceleration is the change of velocity with respect to time, and nothing can be accelerated without a force acting on it. Think about it. Your car won't drive unless the engine is powering it. Your bike won't go unless you pedal. So as soon as a force starts/...
You're confusing velocity with acceleration. The acceleration "a" acts on the stone only until you let go. "After the release" means "after you let go", hence you no longer impart any acceleration to the stone, and "a", the acceleration of the lift, no longer affects the stone. In your balloon experiment, the stone has velocity "u", but in the lift the ...
There isn't a single point of contact, when walking there is multitude of points or a whole surface that makes contact with the ground. But otherwise yes, those points of the boot, if it does not slip, have zero velocity with respect to the ground.
You're using $v$ in two different ways. In the first few equations, $v$ means the final velocity after a period of uniform acceleration, assuming one starts from rest. In the equation $v = s/t$, $v$ means the average velocity instead. That's half as much, which is why you're off by $2$. This is a general warning for learning physics. In other high school ...
You always hear that an increase in fluid velocity causes a decrease in pressure, but actually it is the other way around: the lower pressure causes the fluid to accelerate into the lower pressure zone. (This applies to steady state flow at subsonic speeds.) In common physics problems, where a fluid speeds up and then slows down again back to its original ...
I took another stab at it and I was able to solve it using the following equations: $$x_f=x_i+v_xt$$ $$y_f=y_i+v_{y_i}t +\frac 12at^2$$ Applying and rearranging these to my problem, for the x-direction I get: $$d=0+v_i\cos\theta\cdot t$$ $$v_i=\frac{d}{\cos\theta\cdot t}$$ And for the y-direction: $$0=h+v_i\sin\theta\cdot t -\frac 12gt^2$$ Apply the previous equation and ...
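The two equations above can be solved in closed form: substituting $v_i = d/(\cos\theta\, t)$ into the y-equation gives $t = \sqrt{2(h + d\tan\theta)/g}$, and then $v_i$ follows. A minimal numeric sketch (the values of $d$, $h$, and $\theta$ are illustrative, not from the question):

```python
import math

# Projectile launched at angle theta from height h, landing at y = 0 a
# horizontal distance d away:
#   x: d = v_i cos(theta) t      y: 0 = h + v_i sin(theta) t - (1/2) g t^2
# Substituting v_i = d / (cos(theta) t) into the y-equation yields
#   t = sqrt(2 (h + d tan(theta)) / g).
def launch_speed(d, h, theta, g=9.81):
    t = math.sqrt(2 * (h + d * math.tan(theta)) / g)
    return d / (math.cos(theta) * t), t

v_i, t = launch_speed(d=20.0, h=5.0, theta=math.radians(30))

# Verify the solution satisfies both original equations.
assert abs(v_i * math.cos(math.radians(30)) * t - 20.0) < 1e-9
assert abs(5.0 + v_i * math.sin(math.radians(30)) * t - 0.5 * 9.81 * t**2) < 1e-9
print(f"v_i = {v_i:.3f} m/s, flight time t = {t:.3f} s")
```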
Consider a streamline along which the fluid velocity is increasing. This means that the fluid is accelerating, so there must be a force. Forces in an ideal fluid are related to differences in pressure, so the pressure must be decreasing.
You had the correct tangential, $c$, and radial, $\dfrac {c^2t^2}{b}$, accelerations. If the radius is constant then in polar coordinates $\vec v = b\,\dot \theta \hat \theta = c\,t\, \hat \theta \Rightarrow \dot \theta = \dfrac{ct}{b} \Rightarrow \ddot \theta = \dfrac cb$ and $\vec a = -b\,\dot \theta^2\, \hat r + b\,\ddot\theta \,\hat \theta =-\dfrac{c^...
Both are correct as it is just a matter of wording although I would favour change in position and displacement most of the time.Consider one dimensional motion along the x-axis with the unit vector $\hat x$ defining the positive x-direction.A body starts at position $+3 \,\hat x$ with velocity $+6\,\hat x$ and after undergoing constant acceleration ...
You correctly wrote the expression for the four-velocity and $\gamma$, but it'd be much easier to use straightforwardly the time-coordinate (Rindler) transform for a uniformly accelerated frame (without integral calculus and 4-velocities):$$t=\frac{c}{a_0}\sinh\left(\frac{a_0}{c}\tau\right)$$where $\tau$ is proper time (in P's frame) and $t$ ...
$$v_\text{average}=\frac{\Delta s}{\Delta t}$$ $$v_\text{instantaneous}=\lim_{\Delta t\to0}\frac{\Delta s}{\Delta t}$$ If the time interval gets infinitesimally small, $\Delta t\to 0$, then you are dividing by something very, very tiny, so one might think the number should become very big: $$\frac{\cdots}{\Delta t}\to \infty \quad\text{ when } \quad\Delta t\to0$$ If the ...
Suppose you are travelling at a uniform velocity and you cover 1 meter in 1 second. Your average velocity is$$\frac{1\ {\rm m}}{1\ {\rm s}} = 1 \frac{\rm m}{\rm s}.$$If you consider a 1 millisecond interval within that 1 second, you cover 1 millimeter. Your average velocity in that milisecond is$$\frac{1\ {\rm mm}}{1\ {\rm ms}} = 1 \frac{\rm m}{\rm s}...
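The shrinking-interval idea in the two answers above is easy to check numerically. Here $s(t) = t^2$ is an illustrative choice (not from the question), for which the instantaneous velocity at $t = 1$ is exactly $2$:

```python
# Average velocity over shrinking intervals approaches the instantaneous value.
# For s(t) = t^2, the instantaneous velocity at t = 1 is 2.
s = lambda t: t * t

for dt in (1.0, 0.1, 0.001, 1e-6):
    v_avg = (s(1 + dt) - s(1)) / dt     # average velocity over [1, 1 + dt]
    print(f"dt = {dt:<8} v_avg = {v_avg}")
```

The printed averages are $2 + \Delta t$, converging to the instantaneous value $2$ as $\Delta t \to 0$, because the numerator shrinks at the same rate as the denominator.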
I believe not only is this question too linguistic or philosophical for this SE, but also that the concept of motion is understood as a postulate within physics. You don't define motion. You calculate and predict it.
The trajectories of the thrown object and its consecutive bounces will be parabolic in their shape. In this case the ball will lose some of its kinetic energy each time it makes contact with the ground. However, the angle it makes with the ground each time will be equal to the angle it was originally thrown at. This means that all of the trajectories will be ...
In almost every case, we choose to model an object as a point because we don't care about the object's internal structure for the calculation we're trying to make. This can have several causes; for example:The effect of the object's internal structure may simply be too small to be detectable in the results;The desired precision of the calculation may be ...
When we are dealing with their mass. For example, Newton's law of gravity treats two objects in terms of the distance between their masses. The "point" is the center of mass of the object, which is just a theoretical mathematical point. Remember that there is a difference between total mass and mass density. When you observe an object in space you are looking ...
Each derivation rests on the assumptions used. The standard kinematics equations you mention first depend on the assumption of constant acceleration. My problem is that your friend hasn't stated what assumptions he used to get to $x(t) = x_0 + v(t)\,t$. So let us differentiate both sides to see what kind of acceleration is needed (using the product rule)...
To give a purely qualitative answer, consider the meaning of your friend's third line $$x(t) = x_0 + v(t) \cdot t \;, \tag{1}$$ (where I've made the multiplication explicit). This claims that you find the position at moment $t$ by taking the initial position ($x_0$) and adding to that the elapsed time times the velocity the particle has at moment $t$. So, ...
"I think I understand it now, mathematically speaking, but is there a more conceptual answer?" OP evidently seeks a conceptual answer to why $x(t) \ne x_0 + v(t)\cdot t$ when $v(t) = v_0 + at$ and $a$ is a constant. Consider the simple case that the initial position and initial velocity are zero. Stipulate that $v(t) = at$ where $a$ is a constant and it ...
Given velocity $v(t)$, the distance moved after a certain time $t$ is not $v(t)t$ - this formula works at constant velocity, but when the velocity is changing, the correct expression is $\int^{t_f}_{t_0} v(t) dt$. Therefore your friend's third line is incorrect.
The error is just that $v(t)t$ is not the anti-derivative of $at$. This is easily checked by just taking the derivative.$$\frac{\text d}{\text dt}\left(v(t)\cdot t\right)=v(t)\cdot\frac{\text d}{\text dt}(t)+t\cdot\frac{\text d}{\text dt}(v(t))=v(t)+at\neq at$$It's a simple calculus mistake.
The displacement is only the velocity multiplied by the elapsed time if velocity is constant as you suggested. To derive the equation for varying velocity you must consider the infinitesimal case where the elapsed time is so small that you can consider velocity constant. In this case, a small displacement $dx$ is given by the product of velocity v(t) by ...
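A quick numerical check of the point made in the answers above, with illustrative values $x_0 = 0$, $v_0 = 0$, $a = 2\ \mathrm{m/s^2}$: the formula $x_0 + v(t)\,t$ gives exactly twice the true displacement under constant acceleration.

```python
# Constant acceleration with x0 = 0, v0 = 0 (illustrative values).
a = 2.0
v = lambda t: a * t                   # v(t) = a t
x_true = lambda t: 0.5 * a * t**2     # integral of v from 0 to t
x_friend = lambda t: v(t) * t         # the friend's formula: a t^2

t = 3.0
print(x_true(t), x_friend(t))         # 9.0 18.0 -- off by exactly a factor of 2
```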
Whether or not the person is "saved" would depend on the duration of the free fall, which would determine the person's velocity upon impact. For example, if a person jumps from some height above the ground, the person can lessen the impact force by bending the knees immediately on contact. This reduces the average force on the person by the work-energy ...
One way to understand this is to realize that $\vec{v_1} -\vec{v_2}$ is basically adding $\vec{v_1}$ with $\vec{-v_2}$ i.e. $\vec{v_1} -\vec{v_2} = \vec{v_1} + (\vec{-v_2})$. So, as the first answer explained, if you have two cars $A$ and $B$ travelling east at velocities $\vec{v_1}$ and $\vec{v_2}$ respectively, the person sitting in car $B$ will see car $A$...
Let's say you are observing from your point of view two objects, traveling in their own directions and at their own speeds. So you have two velocity vectors $\vec{v}_1$ and $\vec{v_2}$.By subtracting one from the other, you get the relative velocity between those objects: $\vec{v}_2 - \vec{v}_1$ will be the velocity of object 2 as observed by object 1 (...
You have set up a false comparison. You are computing two different energies. In the relativistic case, when you write $$E^2=m^2+p^2, $$ for a single particle, the energy term is the sum of the kinetic energy and the mass energy: $$E=K+m.$$ In the non-relativistic situation you have defined the energy simply as the kinetic energy and ignored the ...
That's because force is a vector quantity. Unlike a scalar quantity like for instance temperature, a force always has magnitude and direction. Therefore when adding two forces, we have to add them like vectors.If you're interested in why we use vector addition for forces, this answer, which draws from Newton's Principia Mathematica, might be for you. A ...
Your source should've worded that more carefully. This is not the acceleration of the particle in the rest frame; instead, it's the acceleration in a momentarily comoving inertial frame, that is, an inertial frame in which the particle is momentarily at rest. In the actual rest frame, which is a non-inertial frame, the velocity and acceleration of the ...
If you are able to specify acceleration as a function of position or velocity, then it is still true that $$a=\frac{\text dv}{\text dt}=\frac{\text d^2x}{\text dt^2}$$because this is just the definition of acceleration.You encounter the position case in things like simple harmonic motion (like a mass on a spring). Then you can think of acceleration as a ...
Acceleration is the derivative of velocity with respect to time, by definition.It doesn't matter what factors affect it, that's still what it is. You could have acceleration as a function of the amount a spring is stretched, acceleration as a function of how much you press the gas pedal, acceleration of a sail boat as a function of how fast the wind is ...
First, some information to motivate my answer.In circular motion, it is useful to break forces into components: the radial component $F_r$ (points towards or away from the center of the circle around which the particle is moving) and the tangential component $F_\theta$ (points tangent to the circle around which the particle is moving).For planar motion, ...
Average speed has its main meaning in situations when the actual speed is almost constant in the observed time period, i.e. it doesn't vary too much in the substantial part of the observed period. (As you can see, this is very subjective: what are "too much" and "substantial part"? Statisticians would probably use something such as 5%.) It is also meaningful when total time is ...
"Why do we use velocity instead of speed for different physics problems? I recognize how they are different, but why use one over the other?" My cottage is 100 km due north of my house in Toronto. If I drive 100 km/h, will I arrive at my cottage in an hour? What if I drive east? What if I start in Montreal? Thus velocity.
On the motorway, your passenger asks: "How fast are you going?". You look at the speedometer and answer: "109 km/hr". Had he asked 1 minute later, then you would have said: "111 km/hr". Had he asked 1 minute earlier, then you would have said: "110 km/hr". You are telling him the instantaneous speed.But what if he asks you afterwards: "How fast did we ...
"This average speed / velocity doesn't give accurate information about motion of an object, then why is it taught?" Primarily because that's what you'll actually use in the real world. For example, if you drive for an hour and cover 60 miles, I suspect you'll describe that you were "averaging 60 mph". I am sure that on such a trip there were times where ...
You have a few misconceptions. The friction which you draw in the AB direction actually acts in the opposite direction, toward the center of the curve. This friction component is the force component which has magnitude of precisely $mv^2/r$. That $mv^2/r$ value simply tells the magnitude of force necessary radially in order to travel a certain curved path at ...
|
There are cases where the symmetries of a problem (seem to) characterize its complexity. One very interesting example is constraint satisfaction problems (CSPs).
Definition of CSP
A CSP is given by a domain $U$ and a constraint language $\Gamma$ (a set of $k$-ary functions from $U^k$ to $\{0, 1\}$). A constraint satisfaction instance is given by a set of variables $V$ and constraints from $\Gamma$ applied to tuples of those variables. A solution to the instance is an assignment $\phi:V \rightarrow U$ such that all constraints are satisfied.
For example, in this language 3-SAT is given by $\Gamma$ which is the set of all disjunctions of 3 literals, $U$ is simply $\{0, 1\}$. For another example, systems of linear equations mod 2 are given by a $\Gamma$ which is all linear equations mod 2 with $k$ variables, and $U$ is again $\{0, 1\}$.
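A brute-force solver makes these definitions concrete. The encoding below, with each constraint as a (scope, function) pair over $U = \{0, 1\}$, is an illustrative choice rather than any standard API:

```python
from itertools import product

# Toy CSP instance in the notation above: variables V, and each constraint
# is a pair (scope, f) with f: U^k -> {0, 1}.
V = ["x", "y", "z"]
constraints = [
    (("x", "y", "z"), lambda a, b, c: int(a or b or (not c))),  # x OR y OR NOT z
    (("x", "z"),      lambda a, b: int((not a) or b)),          # NOT x OR z
]

def solve(V, constraints, U=(0, 1)):
    """Brute-force search for an assignment phi: V -> U satisfying all constraints."""
    for values in product(U, repeat=len(V)):
        phi = dict(zip(V, values))
        if all(f(*(phi[v] for v in scope)) for scope, f in constraints):
            return phi
    return None

print(solve(V, constraints))
```

Of course, the whole point of the complexity theory below is to determine for which languages $\Gamma$ this exponential search can be replaced by a polynomial-time algorithm.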
Polymorphisms
There is a sense in which the hardness of a CSP is characterized by its symmetries. The symmetries in question are called polymorphisms. A polymorphism is a way to
locally combine several solutions to a CSP to get a new solution. Locally here means that there is a function that is applied to each variable separately. More precisely, if you have several solutions (satisfying assignments) $\phi_1, \ldots, \phi_t$, a polymorphism is a function $f:U^t \rightarrow U$ that can be applied to each variable to get a new solution $\phi$: $\phi(v) = f(\phi_1(v), \ldots, \phi_t(v))$. For $f$ to be a polymorphism it should map all tuples of $t$ satisfying assignments to any instance to a satisfying assignment of the same instance.
A polymorphism for systems of linear equations for example is $f(x, y, z) = x + y + z \pmod 2$. Notice that $f(x, x, y) = f(y, x, x) = y$. An $f$ that satisfies this property is known as a Maltsev operation. CSPs that have a Maltsev polymorphism are solvable by Gaussian elimination.
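The polymorphism property can be verified exhaustively on a toy instance: applying $f(x, y, z) = x + y + z \pmod 2$ coordinate-wise to any three solutions of a system of linear equations mod 2 yields another solution. The particular system below is an illustrative choice:

```python
from itertools import product

# Toy system of linear equations mod 2 over variables x0, x1, x2:
#   x0 + x1 = 1,   x1 + x2 = 0       (all arithmetic mod 2)
constraints = [((0, 1), 1), ((1, 2), 0)]

def satisfies(phi):
    return all(sum(phi[v] for v in vs) % 2 == rhs for vs, rhs in constraints)

solutions = [phi for phi in product((0, 1), repeat=3) if satisfies(phi)]

def f(x, y, z):                        # the Maltsev operation x + y + z (mod 2)
    return (x + y + z) % 2

# Applying f coordinate-wise to any triple of solutions yields another solution.
for p1, p2, p3 in product(solutions, repeat=3):
    combined = tuple(f(a, b, c) for a, b, c in zip(p1, p2, p3))
    assert satisfies(combined)

print(f"f is a polymorphism on all triples of {len(solutions)} solutions")
```

This works for every instance over this language, not just the toy one, because $f$ is affine and each constraint is linear.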
On the other hand, disjunctions of 3 literals only have dictators as polymorphisms, i.e. projections of the type $f(x_1, \ldots, x_t) = x_i$.
Polymorphisms and Complexity (the dichotomy conjecture)
Polymorphisms in fact have computational implications: if a CSP $\Gamma_1$ admits all polymorphisms of $\Gamma_2$, then $\Gamma_1$ is polynomial-time reducible to $\Gamma_2$. This is a way to formally say that a CSP $\Gamma_2$ which is "less symmetric" than another CSP $\Gamma_1$ is in fact harder.
A major open problem in complexity theory is to characterize the hardness of CSPs. The dichotomy conjecture of Feder and Vardi states that any CSP is either in P or NP-complete. The conjecture can be reduced to a statement about polymorphisms: a CSP is NP-hard if and only if the only polymorphisms it admits are "dictators" (otherwise it is in P). I.e. a CSP is hard only if there is no local way to form genuine new solutions from old solutions. The if part (hardness) is known, but the only if part (designing a polytime algorithm) is open.
However, an important case where we do have a dichotomy is boolean CSPs (where $U = \{0, 1\}$). According to Schaefer's theorem, a boolean CSP is in P if it admits one of 6 polymorphisms; otherwise it is NP-complete. The six polymorphisms are basically what you need to solve the problem either by Gaussian elimination, or by propagation (as you do with Horn-SAT, for example), or by a trivial assignment.
To read more about polymorphisms, universal algebra, and the dichotomy conjecture, you can look at the survey by Bulatov.
Polymorphisms and Approximability
I also recommend an IAS lecture by Prasad Raghavendra, where he puts his result giving optimal approximability of any CSP assuming the unique games conjecture in a similar framework. On a high level, if all polymorphisms (this needs to be generalized to handle approximation problems) of a CSP are close to dictators, one can use the CSP to design a way to test if a function is a dictator, and that turns out to be all you need in order to give a hardness-of-approximation reduction from unique games. This gives the hardness direction of his result; the algorithmic direction is that when a CSP has a polymorphism which is far from a dictator, one can use an invariance principle (a generalization of central limit theorems) to argue that an SDP rounding algorithm gives a good approximation. A really sketchy intuition for the algorithmic part: a polymorphism that is far from a dictator doesn't care if it is given as arguments (a distribution over) variable assignments or Gaussian random variables that locally approximate a distribution over variable assignments. This is the same way that a sum function "doesn't care" if it is given discrete random variables with small variance or Gaussian r.v.'s with the same variance, by the central limit theorem. The Gaussian random variables we need can be computed from an SDP relaxation of the CSP problem. So we find a polymorphism that is far from a dictator, feed it the Gaussian samples, and get a good solution back.
|
Search
Now showing items 31-40 of 451
SOME APPLICATIONS OF COMPLEX GEOMETRY TO FIELD THEORY
(1981)
Let be compactified complexified Minkowski space, and ('*) twistor space and dual twistor space, respectively, and ambitwistor space, a complex hypersurface in x ('*). There is a geometric ...
Numerical safeguarded use of the implicit restarted Lanczos algorithm for solving nonlinear eigenvalue problems and its monotonicity analysis
(1993)
In this thesis, we develop an efficient accurate numerical algorithm for evaluating a few of the smallest eigenvalues and their corresponding eigenvectors for large scale nonlinear eigenproblems. The entries of the matrices ...
A robust choice of the Lagrange multipliers in the successive quadratic programming method
(1994)
We study the choice of the Lagrange multipliers in the successive quadratic programming method (SQP) applied to the equality constrained optimization problem. It is known that the augmented Lagrangian SQP-Newton method ...
Some static and dynamic problems in plasticity
(1991)
In part I of this thesis, we prove some regularity and uniqueness results of the minimizer for the problem $$\inf\{\int_\Omega \phi(Dv) + \int_{\partial\Omega} \vert v - g\vert \, dH^{n-1} : v \in BV(\Omega), g \in ...
Harmonic maps, heat flows, currents and singular spaces
(1995)
This thesis studies some problems in geometry and analysis with techniques developed from non-linear partial differential equations, variational calculus, geometric measure theory and topology. It consists of three independent ...
On proper holomorphic mappings: Smooth extension to the boundary
(1988)
The subject of proper holomorphic mappings is currently a very active area of research. One of the most interesting questions is the following: if $\Omega_1$, $\Omega_2 \subseteq \mathbb{C}^n$ are open sets with $C^{\infty}$ ...
INITIAL-VALUE METHOD FOR TWO-POINT BOUNDARY-VALUE PROBLEMS
(1982)
In this thesis, we consider two problems: (i) linear, two-point boundary-value problems with differential constraints and general boundary conditions; and (ii) nonlinear, two-point boundary-value problems with differential ...
EGOROV'S THEOREM FOR A DIFFRACTIVE BOUNDARY PROBLEM
(1980)
Let $\Delta$ be the Laplacian on $\mathbb{R}^n \setminus K$ with Dirichlet boundary conditions. Assume $K$ is smoothly bounded with strictly convex boundary. By the spectral theorem define $e^{it\sqrt{-\Delta}}$ and extend this ...
|
Search
Now showing items 1-1 of 1
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Search
Now showing items 1-10 of 32
The ALICE Transition Radiation Detector: Construction, operation, and performance
(Elsevier, 2018-02)
The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ...
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Elsevier, 2018-02)
In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows one to select events with the same centrality ...
First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC
(Elsevier, 2018-01)
This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...
First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV
(Elsevier, 2018-06)
The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...
D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV
(American Physical Society, 2018-03)
The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ...
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV
(Elsevier, 2018-05)
We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ...
Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(American Physical Society, 2018-02)
The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ...
$\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV
(Springer, 2018-03)
An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ...
J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2018-01)
We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ...
Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV
(Springer Berlin Heidelberg, 2018-07-16)
Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range $|\eta|$ < 0.8 ...
|
Measurement of electrons from beauty hadron decays in pp collisions at $\sqrt{s}$ = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE
(Elsevier, 2017-11)
Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...
Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions
(Elsevier, 2017-11)
Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Elliptic flow of muons from heavy-flavour hadron decays at forward rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(Elsevier, 2016-02)
The elliptic flow, $v_{2}$, of muons from heavy-flavour hadron decays at forward rapidity ($2.5 < y < 4$) is measured in Pb--Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The scalar ...
Centrality dependence of the pseudorapidity density distribution for charged particles in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2013-11)
We present the first wide-range measurement of the charged-particle pseudorapidity density distribution, for different centralities (the 0-5%, 5-10%, 10-20%, and 20-30% most central events) in Pb-Pb collisions at $\sqrt{s_{NN}}$ ...
Beauty production in pp collisions at √s=2.76 TeV measured via semi-electronic decays
(Elsevier, 2014-11)
The ALICE Collaboration at the LHC reports measurement of the inclusive production cross section of electrons from semi-leptonic decays of beauty hadrons with rapidity |y|<0.8 and transverse momentum 1<pT<10 GeV/c, in pp ...
|
06/04/2011, 09:43 PM (This post was last modified: 06/04/2011, 10:26 PM by tommy1729.)
(06/04/2011, 01:13 PM)Gottfried Wrote: Sometimes we find easter-eggs even after easter...
For the alternating iteration-series
(definitions as copied and extended from previous post, see below)
we find a rational polynomial for p=4. That means
(maybe this is trivial and a telescoping sum only, didn't check this thorough)
<hr>
Another one:
<hr>
Code:
\\ define function f(x) for forward iteration and g(x) for backward iteration (=negative height)
\\ (additional parameter h for positive integer heights is possible)
f(x,h=1) = for(k=1,h,x = x^2 - 0.5 ); return (x) ;
g(x,h=1) = for(k=1,h,x = sqrt(0.5 + x) ); return (x) ;
\\ do analysis at central value for alternating sums x0=1
x = 1.0
sp(x) = sumalt(h=0,(-1)^h * f(x , h))
sn(x) = sumalt(h=0,(-1)^h * g(x , h))
y(x) = sp(x) + sn(x) - x
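For readers without Pari/GP, the snippet above can be mimicked in Python. Since `sumalt` is not available, the alternating sums are regularized here by averaging two consecutive partial sums; this converges because the iterates approach their attracting fixpoints geometrically. A sketch, not a literal translation:

```python
import math

# Python sketch of the Pari/GP snippet above. Assumption: sumalt is
# replaced by averaging consecutive partial sums, which converges here
# because the iterates tend to attracting fixpoints geometrically.

def f(x, h=1):
    # forward iteration of x -> x^2 - 0.5 (height h)
    for _ in range(h):
        x = x * x - 0.5
    return x

def g(x, h=1):
    # backward iteration (= negative height): x -> sqrt(0.5 + x)
    for _ in range(h):
        x = math.sqrt(0.5 + x)
    return x

def alt_sum(iterate, x, n=200):
    # partial sums S_N = sum_{h=0}^{N} (-1)^h * iterate(x, h),
    # then (S_{N-1} + S_N)/2 to damp the residual oscillation
    s, partials = 0.0, []
    for h in range(n):
        s += (-1) ** h * iterate(x, h)
        partials.append(s)
    return (partials[-2] + partials[-1]) / 2

sp = alt_sum(f, 1.0)   # analogue of sp(x) = sumalt(h=0, (-1)^h * f(x,h))
sn = alt_sum(g, 1.0)   # analogue of sn(x) = sumalt(h=0, (-1)^h * g(x,h))
y = sp + sn - 1.0      # analogue of y(x) = sp(x) + sn(x) - x
print(sp, sn, y)
```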
this is not my expertise ... yet.
but i think i have seen those before in some far past.
for starters , i related your sums to equations of type f(x) = f(g(x)).
also , ergodic theory studies averages of type
F(x) = lim n-> oo 1/n (f^[0](x) + f^[1](x) + ... + f^[n](x)).
hidden telescoping can indeed occur.
and sometimes we can rewrite to an integral.
but again , this is not my expertise yet.
you gave me extra questions instead of an answer :p
in particular i do not understand your matrix idea in this thread.
my guess is that when you start at 1.0, you use carleman matrices to compute the sum, and one carleman matrix will not converge (lies outside the radius) for 1.0; so one is wrong and the other is not.
talking about alternating series 1/2 -1/3 + 1/5 -1/7 + 1/11 - ...
i believe this has a closed form/name and if i recall correctly its called the first mertens constant ...
there was something else i wanted to say ... forgot :s
edit : i do not know how to rewrite an average as a sum or superfunction ( do know integral and perhaps infinite product )... i say that because it might be useful to see the link with the " ergodic average " ( or whatever its called ).
it bothers me , i wanna get rid of this " lim **/n " term for averages. ( might also be of benefit for number theory and statistics )
(06/04/2011, 09:43 PM)tommy1729 Wrote: in particular i do not understand your matrix idea in this thread.
You may look at alternating sum of iterates (here: of exponential function)
There I describe the method for the first time, however with another function as basis: the exponential function.
The problem of convergence of series of matrices surfaces there, and especially the question of convergence of the shortcut formula for the geometric series.
Nearly everything was completely new for me, so this article should be rewritten; anyway, in its naivety it might be a good introductory impulse to understand the key idea of that matrix-method, and possibly to engage in the area which I now call "iteration-series", in resemblance to "power series" and "Dirichlet series".
Gottfried
Gottfried Helms, Kassel
06/05/2011, 11:40 AM (This post was last modified: 06/05/2011, 12:35 PM by Gottfried.)
Looking back at the article on the alternating iteration-series of the exponential, there was some confirmation of the matrix-based method missing. While I could use the serial summation (Abel- or Euler-summation of the explicit iterates) as a crosscheck of the matrix-method for the bases where the powertower of infinite height converges, I could not do that for the other bases, due to the too fast growth of the terms/iterated exponentials.
But well, if I take the (complex) fixpoint t of the exponential as initial value, then the alternating series of its iterates reduces to $t - t + t - \cdots = t/2$ (in the sense of Euler summation), which should be meaningful for each base, whether its exponential fixpoint is real or not.
With this I have now (at least) one check-value by serial summation for the comparison with the matrix-method.
The matrix-method, dimension 32x32, for instance for base e, which has a divergent iteration-series, comes out near the expected result to three/four digits, and the same was true for the conjugate of t. If the convergence could be accelerated, this would give another confirmation of the applicability of this method to the iteration-series.
Gottfried Helms, Kassel
(03/03/2009, 12:15 PM)Gottfried Wrote: serial summation
0.709801988103 towards 2'nd fixpoint
0.419756033790 towards 1'st fixpoint
Matrix-method:
0.580243966210 towards 2'nd fixpoint // incorrect, doesn't match serial summation
0.419756033790 towards 1'st fixpoint // matches serial summation
a reason might be this : the vandermonde matrix must have a determinant <> 1 for almost all functions.
hence the determinants of f^h(x) and f^-h(x) cannot both be within the radius ( determinant < 1 = within radius 1 ) for (1 - A)^-1.
basically just a taylor series radius argument for matrices.
have you considered this ?
if i am correct about that , the question becomes : what if the determinant of f(x) is 1 ? will the matrix method agree on both fixpoints ?
(06/05/2011, 01:45 PM)tommy1729 Wrote: if i am correct about that , the question becomes : what if the determinant of f(x) is 1 ? will the matrix method agree on both fixpoints ?
How do you compute, or at least estimate, the determinant of an (infinite-sized) Carleman-matrix (as simply the transposed of "matrix-operators")?
Gottfried
Gottfried Helms, Kassel
i've noticed we used both the terms vandermonde and carleman matrix.
ofcourse its carleman matrix and not vandermonde !
also note that the 2 matrix-method numbers must sum to 1 !! 0.580243966210 + 0.41975603379 = 0.9999999999 = 1, simply because 1/(1+x) + 1/(1+(1/x)) = 1.
- which also shows the importance of the determinant !! -
because of this sum = 1, the matrix methods cannot both match the serial summation.(*)
this is similar to my determinant argument made before, just an equivalent restatement.
(*) the sum of both serial summations is related to the equation f(g(x)) = f(x), whereas the sum of the matrix methods just gives 1 for all x.
(06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210
+
0.41975603379
=0.9999999999 = 1
simply because 1/(1+x) + 1/(1+(1/x)) = 1.
Yes, that observation is exactly what I have been discussing when presenting these considerations here since 2007; especially I had a conversation with Andy on this. The next step, which obviously needs to be done, is to search for the reason why powerseries-based methods disagree with the serial summation - and always in only one of the results.
And then possibly for some adaption/cure, so that the results can be made to match. For instance, Ramanujan-summation for divergent series includes one integral term to correct for the change of order of summation, which is an internal detail of that summation method; possibly we should find something analogous here.
Quote:also note that the 2 matrix-method number must sum to 1 !!
- which also shows the importance of the determinant !! -
Thank you for the double exclamation marks. They don't introduce a determinant of an infinite-sized matrix, but make much noise, which I do not like, as you know from earlier conversations of mine in sci.math. So I'll stop this small conversation on your postings here, as I don't have much more of relevance to say at the moment for the other occasional and interested reader.
Gottfried
Gottfried Helms, Kassel
10/19/2017, 10:38 AM (This post was last modified: 10/19/2017, 10:40 AM by Gottfried.)
(06/06/2011, 12:47 PM)Gottfried Wrote: (06/06/2011, 11:01 AM)tommy1729 Wrote: 0.580243966210
+
0.41975603379
=0.9999999999 = 1
simply because 1/(1+x) + 1/(1+(1/x)) = 1.
Yes, that observation was exactly what I was discussing when I presented these considerations here since 2007; especially I had a conversation with Andy on this. The next step which is obviously to do, is to search for the reason why powerseries-based methods disagree with the serial summation - and always only one of the results.
(...)
It should be mentioned also in this thread that the reason for this problem of matching the Carleman-based and the simple serial-summation-based results is simple, and simply correctable.
1) The Carleman-matrix is always based on the power series of a function f(x), and more specifically of the conjugated function g(x) = f(x+t_0) - t_0, where t_0 is the attracting fixpoint of the function f(x). For that option the Carleman-matrix-based and the serial-summation approach evaluate to the same value.
2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1](), we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x), and then take the Neumann-series of this Carleman matrix. This then again evaluates correctly and in concordance with the serial summation. (Of course, "serial summation" always means possibly including Cesaro or Euler summation or the like.)
So with the correct adaptation of the required two Carleman-matrices and their Neumann-series we correctly reproduce the iteration-series in question in both directions.
Gottfried
Gottfried Helms, Kassel
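The recipe above can be illustrated with a toy triangular case. Assumptions (all illustrative, not from the post): the convention $V(x)\cdot F = V(f(x))$ for the row vector $V(x)=[1,x,x^2,\dots]$, the function f(x) = x/(2+x) with attracting fixpoint 0 (so the Carleman matrix is already developed at the right fixpoint), and the Neumann-type value $V(x)(I+F)^{-1}$, whose entry at index 1 should reproduce the alternating iteration-series $\sum_h (-1)^h f^{[h]}(x)$:

```python
import numpy as np
from math import comb

# Toy check: for f(x) = x/(2+x) (attracting fixpoint 0, f'(0) = 1/2) the
# alternating iteration-series sum_h (-1)^h f^[h](x) can be read off the
# Neumann-type expression V(x) (I+F)^{-1}. Here F is the truncated
# Carleman matrix with F[m][n] = coeff of x^m in f(x)^n, so that
# V(x) F = V(f(x)) for the row vector V(x) = [1, x, x^2, ...].
# (Function and matrix size are illustrative choices.)

N = 24

def carleman(N):
    # f(x)^n = x^n (2+x)^{-n} = sum_k binom(-n,k) 2^{-n-k} x^{n+k}
    F = np.zeros((N, N))
    F[0, 0] = 1.0
    for n in range(1, N):
        for m in range(n, N):
            k = m - n
            # binom(-n,k) = (-1)^k binom(n+k-1,k)
            F[m, n] = (-1) ** k * comb(n + k - 1, k) * 2.0 ** (-n - k)
    return F

def f(x):
    return x / (2.0 + x)

x = 0.3
V = x ** np.arange(N)                             # row vector V(x)
S = V @ np.linalg.inv(np.eye(N) + carleman(N))    # Neumann-type value

# direct serial summation: terms decay like (1/2)^h, so plain summation works
direct, t = 0.0, x
for h in range(200):
    direct += (-1) ** h * t
    t = f(t)

print(S[1], direct)
```

Entry 0 of the same vector equals 1/2 exactly, the regularized value of the Grandi series 1 - 1 + 1 - ..., which is the regularization the matrix method applies silently.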
10/19/2017, 04:50 PM (This post was last modified: 10/19/2017, 05:21 PM by sheldonison.)
(10/19/2017, 10:38 AM)Gottfried Wrote: ...
1) The Carleman-matrix is always based on the power series of a function f(x) and more specifically of a function g(x+t_0)-t_0 where t_0 is the attracting fixpoint for the function f(x). For that option the Carleman-matrix-based and the serial summation approach evaluate to the same value.
2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1] () we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ...
So with the correct adapation of the required two Carleman-matrices and their Neumann-series we reproduce correctly the iteration-series in question in both directions.
Gottfried
Is there a connection between the Carlemann-matrix and Schröder's equation, $\Psi(f(x)) = \lambda\,\Psi(x)$? Here lambda is the derivative at the fixed point, $\lambda = f'(x_0)$, and then the iterated function g(x+1) = f(g(x)) can be generated from the inverse Schröder's equation:
$$g(z) = \Psi^{-1}(\lambda^z)$$
Does the solution to the Carlemann Matrix give you the power series for $\Psi$?
I would like a Matrix solution for the Schröder's equation. I have a pari-gp program for the formal power series for both $\Psi$ and $\Psi^{-1}$, iterating using Pari-gp's polynomials, but a Matrix solution would be easier to port over to a more accessible programming language and I thought maybe your Carlemann solution might be what I'm looking for
- Sheldon
10/19/2017, 09:33 PM (This post was last modified: 10/23/2017, 11:56 PM by Gottfried.)
(10/19/2017, 04:50 PM)sheldonison Wrote: (10/19/2017, 10:38 AM)Gottfried Wrote: ...
1) The Carleman-matrix is always based on the power series of a function f(x) and more specifically of a function g(x+t_0)-t_0 where t_0 is the attracting fixpoint for the function f(x). For that option the Carleman-matrix-based and the serial summation approach evaluate to the same value.
2) But for the other direction of the iteration series, with iterates of the inverse function f^[-1] () we need the Carleman matrix developed at that fixpoint t_1 which is attracting for f^[-1](x) ...
So with the correct adapation of the required two Carleman-matrices and their Neumann-series we reproduce correctly the iteration-series in question in both directions.
Gottfried
Is there a connection between the Carlemann-matrix and Schröder's equation, $\Psi(f(x)) = \lambda\,\Psi(x)$? Here lambda is the derivative at the fixed point, $\lambda = f'(x_0)$, and then the iterated function g(x+1) = f(g(x)) can be generated from the inverse Schröder's equation: $g(z) = \Psi^{-1}(\lambda^z)$
Does the solution to the Carlemann Matrix give you the power series for $\Psi$?
I would like a Matrix solution for the Schröder's equation. I have a pari-gp program for the formal power series for both $\Psi$ and $\Psi^{-1}$, iterating using Pari-gp's polynomials, but a Matrix solution would be easier to port over to a more accessible programming language and I thought maybe your Carlemann solution might be what I'm looking for
Hi Sheldon - yes that connection is exceptionally simple. The Schröder-function is simply expressed by the eigenvector-matrices which occur by diagonalization of the Carleman-matrix for function f(x).
In my notation, with a Carleman matrix F for your function f(x) we have, with the vector V(x) = [1, x, x^2, x^3, ...], the relation $V(x)\cdot F = V(f(x))$.
Then by diagonalization we find a solution in M and D such that $F = M\cdot D\cdot M^{-1}$.
The software must take care that the eigenvectors in M are correctly scaled; for instance in the triangular case (where f(x) has no constant term) the diagonal in M is the diagonal unit matrix I, such that indeed M is in Carleman-form. (Using M=mateigen(F) in Pari/GP does not suffice, you must scale the columns in M appropriately - I've built my own eigen-solver for triangular matrices which I can provide to you.)
Then the column of M belonging to the eigenvalue $\lambda = f'(0)$ contains the coefficients of the power series of the Schröder-function $\psi(x)$.
We only need to pay attention to the problem that non-triangular Carleman matrices of finite size - as they are the only ones available to our software packages - do not give the correct eigenvectors for the true power series of f(x). To learn about this it is best to use functions which have triangular Carleman-matrices, so for instance $f(x)=ax+b$, $f(x) = qx/(1+qx)$ or $f(x) = t^x-1$ or the like, where the coefficient at the linear term is neither zero nor 1.
For the non-triangular matrices, for instance for $f(x)=b^x$, the diagonalization gives only rough approximations to an - in some sense - "best-possible" solution for fractional iterations, and its eigenvector-matrices are in general not Carleman or truncated Carleman. But they nonetheless give real-to-real solutions also for $b > \eta$ and seem to approximate the Kneser-solution when the size of the matrices increases.
You can have my Pari/GP-toolbox for the adequate handling of that type of matrices, and especially for calculating the diagonalization for $t^x-1$ such that the eigenvector-matrices are of Carleman-type and true truncations of the $\psi$-power series for the Schröder-function (for which the builtin eigensolver in Pari/GP does not take care). If you are interested it is perhaps better to contact me via email, because the set of routines should come with some explanations, and I expect some need for didactical hints.
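A minimal sketch of that connection, for a toy function whose Schröder function is known in closed form: for f(x) = 2x + x^2 = (1+x)^2 - 1 one has ln(1+f(x)) = 2 ln(1+x), so the Schröder function is ln(1+x), with λ = f'(0) = 2. Writing σ(x) = V(x)·c turns σ(f(x)) = λσ(x) into the eigenproblem F c = λ c. The generic numpy eigensolver is used here instead of a specially scaled Carleman eigen-solver, which suffices for this triangular example; function choice and matrix size are illustrative assumptions:

```python
import numpy as np
from math import comb

# Schröder function from an eigenvector of the Carleman matrix (toy case).
# f(x) = 2x + x^2 = (1+x)^2 - 1 has Schröder function sigma(x) = ln(1+x),
# since ln(1+f(x)) = 2 ln(1+x). With the convention V(x) F = V(f(x)),
# F[m][n] = coeff of x^m in f(x)^n, the coefficient vector c of sigma
# satisfies F c = lambda c with lambda = f'(0) = 2.

N = 12

def carleman(N):
    # f(x)^n = x^n (2+x)^n = sum_k binom(n,k) 2^(n-k) x^(n+k)
    F = np.zeros((N, N))
    for n in range(N):
        for k in range(0, min(n, N - 1 - n) + 1):
            F[n + k, n] = comb(n, k) * 2.0 ** (n - k)
    return F

F = carleman(N)
w, M = np.linalg.eig(F)              # eigenvalues are 2^n (F is triangular)
c = M[:, np.argmin(abs(w - 2.0))]    # eigenvector for lambda = 2
c = c / c[1]                         # scale so that sigma'(0) = 1

# compare with the series of ln(1+x): x - x^2/2 + x^3/3 - x^4/4 + ...
print(np.round(c.real, 6))
```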
<hr>
For a "preview" of that toolbox see perhaps page 21 ff in http://go.helms-net.de/math/tetdocs/Cont...ration.pdf
which discusses the diagonalization for $t^x -1$ with its Schröder-function (and the "matrix-logarithm" method for the $ e^x - 1$ and $ \sin(x)$ functions, which have no diagonalization in the case of finite size).
Gottfried Helms, Kassel
|
I currently have a very limited understanding of this topic and would be very grateful if someone could walk me through a solution to this question:
Let $\phi : M \to N$ be a diffeomorphism of manifolds. For a vector field $X$ on $M$ define the
push-forward vector field $Z = \phi_* X$ on $N$ by $$Z|_y = \mathrm{d}\phi_x(X|_x)$$ where $x = \phi^{-1}(y)$. Show that for any function $f:N \to \mathbb{R}$ $$(\phi_*X)\cdot f = (X \cdot (f \circ \phi))\circ \phi^{-1}.$$
Where do I start? I don't really have a great understanding of vector fields to begin with, let alone how to manipulate the algebra to get to this solution. Thanks in advance.
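One way to start is to see what the identity says in coordinates in the simplest case M = N = R, where everything can be checked numerically. The choices below (φ(x) = x^3 + x as the diffeomorphism, X = cos(x) d/dx, f = sin) are arbitrary illustrations, and derivatives are taken by finite differences:

```python
import math

# 1-D numerical sanity check of (phi_* X) . f = (X . (f o phi)) o phi^{-1}.
# phi(x) = x^3 + x is a diffeomorphism of R; the vector field is
# X = cos(x) d/dx and f(y) = sin(y). All three choices are illustrative.

def phi(x):
    return x ** 3 + x

def phi_inv(y, lo=-10.0, hi=10.0):
    # phi is strictly increasing, so invert by bisection
    for _ in range(200):
        mid = (lo + hi) / 2
        if phi(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def d(fun, t, h=1e-6):
    # central finite difference
    return (fun(t + h) - fun(t - h)) / (2 * h)

X = math.cos                     # component of the vector field on M
f = math.sin                     # test function on N

y0 = 2.0
x0 = phi_inv(y0)

# left side: ((phi_* X) . f)(y0) = Z(y0) f'(y0), where
# Z(y0) = dphi_{x0}(X(x0)) = phi'(x0) X(x0)
lhs = d(phi, x0) * X(x0) * d(f, y0)

# right side: ((X . (f o phi)) o phi^{-1})(y0) = X(x0) * (f o phi)'(x0)
rhs = X(x0) * d(lambda x: f(phi(x)), x0)

print(lhs, rhs)
```

Both sides agree because, by the chain rule, each equals φ'(x0)·X(x0)·f'(y0) at y0 = φ(x0); the proof asked for in the exercise is exactly this computation done invariantly.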
|
Let the pdfs of $X$, $Y$, and $Z$ be denoted by $f_X(x)$, $f_Y(y)$, and $f_Z(z)$, respectively.
I don't immediately see how this problem can be solved when $f_X(x)$ and $f_Y(y)$ are
unknown. However, you say that $X$ and $Y$ are mean $0$ as well as Gaussian, so the only missing pieces of information are their variances. The following solution should work if $f_X(x)$ and $f_Y(y)$ are known.
Define new random variables $U = X + Z$ and $V = Y + Z$. The joint pdf of $U$ and $V$ is$$f_{UV}(u, v) = \frac{\partial}{\partial u}\frac{\partial}{\partial v}F_{UV}(u, v)\, ,$$where $F_{UV}(u, v)$, the joint
cdf of $U$ and $V$, is given by\begin{align}F_{UV}(u, v)&= P\left(X+Z < u,\; Y+Z < v\right)\\[0.1in]&= P\left(X < u-Z,\; Y < v-Z\right)\\[0.1in]&=\int_{-\infty}^{+\infty} dz\, f_Z(z)\int_{-\infty}^{u-z} dx\, f_X(x)\int_{-\infty}^{v-z} dy\, f_Y(y)\, .\end{align}
The joint pdf of $U$ and $V$ then becomes:\begin{align}f_{UV}(u, v)&= \frac{\partial}{\partial u}\frac{\partial}{\partial v} F_{UV}(u, v)\\[0.1in]&= \int_{-\infty}^{+\infty} dz\; f_Z(z)\; f_X(u-z)\; f_Y(v-z) \qquad\qquad (1)\end{align}
Define the two-dimensional Fourier transform of the pdf $f_{UV}(u, v)$ as\begin{equation}\hat{f}_{UV}(s, t)\;\equiv\;\int_{-\infty}^{+\infty}du\, e^{-isu}\int_{-\infty}^{+\infty}dv\, e^{-itv}f_{UV}(u, v)\, ,\end{equation}where $s$ and $t$ are frequencies.
Using this definition, take a Fourier transform of both sides of Eq. (1) above to yield\begin{align}\hat{f}_{UV}(s, t)&=\int_{-\infty}^{+\infty} dz\, f_Z(z)\;\int_{-\infty}^{+\infty} du\, e^{-isu}\, f_X(u-z)\;\int_{-\infty}^{+\infty} dv\, e^{-itv}\, f_Y(v-z)\\[0.1in]&=\int_{-\infty}^{+\infty} dz\,e^{-i(s+t)z}\, f_Z(z)\;\hat{f}_{X}(s)\; \hat{f}_{Y}(t)\\[0.1in]&=\hat{f}_{Z}(s+t)\; \hat{f}_{X}(s)\; \hat{f}_{Y}(t)\, .\end{align}Between the first and second lines above, we have made a change of variables $u\rightarrow u+z$, $v\rightarrow v+z$.
From here, one can set either $s = 0$ or $t = 0$, to yield$$\hat{f}_Z(t) = \frac{\hat{f}_{UV}(0, t)}{\hat{f}_Y(t)}$$or$$\hat{f}_Z(s) = \frac{\hat{f}_{UV}(s, 0)}{\hat{f}_X(s)}\, ,$$respectively. Note that we have used the fact that $\hat{f}_X(0) = \hat{f}_Y(0) = 1$, which must be true for the Fourier transform of any pdf.
Assuming that all of the relevant pdfs are analytically known originally, all that is left to do
in principle is to take an inverse Fourier transform of one of the above expressions for $\hat{f}_Z$:\begin{align}f_Z(z)&= \frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\, e^{+itz}\, \hat{f}_Z(t)\;=\;\frac{1}{2\pi}\int_{-\infty}^{+\infty}dt\, e^{+itz}\, \frac{\hat{f}_{UV}(0, t)}{\hat{f}_Y(t)}\\[0.1in]&= \frac{1}{2\pi}\int_{-\infty}^{+\infty}ds\, e^{+isz}\, \hat{f}_Z(s)\;=\;\frac{1}{2\pi}\int_{-\infty}^{+\infty}ds\, e^{+isz}\, \frac{\hat{f}_{UV}(s, 0)}{\hat{f}_X(s)}\end{align}
Edit:
It strikes me that $\hat{f}_{UV}(s, 0)$ and $\hat{f}_{UV}(0, t)$ are the Fourier transforms of the marginal pdfs $f_{U}(u)$ and $f_{V}(v)$ of $U$ and $V$, respectively. The final result above could therefore have been obtained in a much shorter way as follows:$$f_{U}(u) = \int_{-\infty}^{+\infty}dz\, f_Z(z)\, f_X(u-z)\;\;\rightarrow\;\;\hat{f}_U(s) = \hat{f}_Z(s)\, \hat{f}_X(s)$$
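The factorization $\hat{f}_U(s) = \hat{f}_Z(s)\,\hat{f}_X(s)$ that drives the whole argument can be checked by simulation. The distributions below (X standard normal, Z Laplace) are illustrative choices:

```python
import numpy as np

# Monte Carlo check of the factorization f_U^(s) = f_Z^(s) f_X^(s) for
# U = X + Z with X, Z independent. Illustrative choice of distributions:
# X ~ N(0,1) with transform exp(-s^2/2), Z ~ Laplace(0,1) with 1/(1+s^2).

rng = np.random.default_rng(0)
n = 200_000
X = rng.normal(0.0, 1.0, n)
Z = rng.laplace(0.0, 1.0, n)
U = X + Z

def ecf(sample, s):
    # empirical version of the transform E[exp(-i s T)]
    return np.mean(np.exp(-1j * s * sample))

for s in (0.3, 0.7, 1.2):
    lhs = ecf(U, s)
    rhs = ecf(X, s) * ecf(Z, s)
    print(s, abs(lhs - rhs))   # should be small (Monte Carlo error ~ 1/sqrt(n))
```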
|
There is an objective function:
$f(W)$ = $||W||_{2,1}$
For any element $w_{ab}$ in $W$, we use $F_{w_{ab}}$ to denote the part of $f(W)$ which is only related to $w_{ab}$.
$F'_{w_{ab}}=(DW)_{ab}$
$F''_{w_{ab}}=(D-D^{3}(W\odot W))_{aa}$, (the main problem)
where $D_{ii}$=$\frac{1}{||w^i||_2}$, $\odot$ denotes the element-wise multiplication.
About $F_{w_{ab}}$: why is the second-order derivative of $F$ given by equation (28)?
Note that: The norm $\|\cdot\|_{2,1}$ of a matrix $W\in\mathbb{R}^{n\times m}$ is defined as
$$ \Vert W \Vert_{2,1} = \sum_{i=1}^n \Vert w^{i} \Vert_2 = \sum_{i=1}^n \left( \sum_{j=1}^m |w_{ij}|^2 \right)^{1/2} $$ where $w^i$ denotes the $i^{th}$ row of $W$ and $w_{ij}$ denotes an element of $W$.
How to solve $F''_{w_{ab}}$? I want to know the detailed calculation process of solving the above formula.
There are some more explicit definitions in the following papers (I give the exact locations).
Some Related Papers:
Graph Regularized Nonnegative Matrix Factorization for Data Representation(Look Page 10, APPENDIX A, PROOFS OF THEOREM 1, formulas (26), (27) and (28))
In Nonnegative matrix factorization by joint locality-constrained and $l_{2,1}$-norm regularization (look at Page 7, the proof of Lemma 1: how is the result for $F^{''}_{v_{ab}}$ obtained there?)
Thank you all for your help.
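Reading the formula entrywise, direct differentiation of the row norm $\Vert w^a\Vert_2$ gives $F'_{w_{ab}} = w_{ab}/\Vert w^a\Vert_2 = (DW)_{ab}$ and $F''_{w_{ab}} = 1/\Vert w^a\Vert_2 - w_{ab}^2/\Vert w^a\Vert_2^3 = D_{aa} - D_{aa}^3\,w_{ab}^2$, which is one consistent reading of the $(D - D^3(W\odot W))_{aa}$ shorthand in the papers (this reading is my assumption, not the papers' verbatim notation). A finite-difference check:

```python
import numpy as np

# Entrywise check of the derivative formulas for f(W) = ||W||_{2,1}.
# With D_ii = 1/||w^i||_2, differentiating ||w^a||_2 = (sum_j w_aj^2)^(1/2)
# with respect to w_ab gives
#   dF/dw_ab    = w_ab / ||w^a||                 = (D W)_{ab}
#   d2F/dw_ab^2 = 1/||w^a|| - w_ab^2 / ||w^a||^3 = D_aa - D_aa^3 w_ab^2
# (my entrywise reading of (D - D^3(W o W))_{aa}). Verified against
# central finite differences on a random example matrix W.

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))
a, b = 1, 2        # the entry w_ab under study (arbitrary choice)
h = 1e-4

def f(W):
    # ||W||_{2,1} = sum of Euclidean norms of the rows
    return np.sum(np.sqrt(np.sum(W * W, axis=1)))

def fd(order):
    # central finite difference of order 1 or 2 in the single entry w_ab
    Wp, Wm = W.copy(), W.copy()
    Wp[a, b] += h
    Wm[a, b] -= h
    if order == 1:
        return (f(Wp) - f(Wm)) / (2 * h)
    return (f(Wp) - 2 * f(W) + f(Wm)) / h ** 2

r = np.linalg.norm(W[a])
d1 = W[a, b] / r                    # (D W)_{ab}
d2 = 1.0 / r - W[a, b] ** 2 / r ** 3  # D_aa - D_aa^3 w_ab^2

print(d1, fd(1))
print(d2, fd(2))
```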
|