Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
I think there are two legitimate sources of complaint. For the first, I will give you the anti-poem that I wrote in complaint against both economists and poets. A poem, of course, packs meaning and emotion into pregnant words and phrases. An anti-poem removes all feeling and sterilizes the words so that they are clear. The fact that most English-speaking humans cannot read this assures economists of continued employment. You cannot say that economists are not bright.
Live Long and Prosper-An Anti-Poem
May you be denoted as $k\in{I},I\in\mathbb{N}$, such that $I=1\dots{i}\dots{k}\dots{Z}$
where $Z$ denotes the most recently born human.
$\exists$ a fuzzy set $Y=\{y^i:\text{Human Mortality Expectations}\mapsto{y^i},\forall{i\in{I}}\},$
may $y^k\in\Omega,\Omega\in{Y}$ and $\Omega$ is denoted as "long"
and may $U(c)$, where c is the matrix of goods and services across your lifetime
$U$ is a function of $c$, where preferences are well-defined and $U$ is qualitative satisfaction,
be maximized $\forall{t}$, $t$ denoting time, subject to
$w^k=f'_t(L_t),$ where $f$ is your production function across time
and $L$ is the time vector of your amount of work,
and further subject to $w^i_tL^i_t+s^i_{t-1}=P_t^{'}c_t^i+s^i_t,\forall{i}$
where $P$ is the vector of prices and $s$ is a measure of personal savings across time.
May $\dot{f}\gg{0}.$
Let $W$ be the set $W=\{w^i_t:\forall{i,t}\text{ ranked ordinally}\}$
Let $Q$ be the fuzzy subset of $W$ such that $Q$ is denoted "high".
Let $w_t^k\in{Q},\forall{t}$
The second is mentioned above, which is the misuse of math and statistical methods. I would both agree and disagree with the critics on this. I believe that most economists are not aware of how fragile some statistical methods can be. To provide an example, I did a seminar for the students in the math club as to how your probability axioms can completely determine the interpretation of an experiment.
I proved using real data that newborn babies will float out of their cribs unless nurses swaddle them. Indeed, using two different axiomatizations of probability, I had babies clearly floating away and obviously sleeping soundly and securely in their cribs. It wasn't the data that determined the result; it was the axioms in use.
Now any statistician would clearly point out that I was abusing the method, except that I was abusing the method in a manner that is normal in the sciences. I didn't actually break any rules; I just followed a set of rules to their logical conclusion in a way that people do not consider, because babies don't float. You can get significance under one set of rules and no effect at all under another. Economics is especially sensitive to this type of problem.
I do believe that there is an error of thought in the Austrian school, and maybe the Marxist one, about the use of statistics in economics, and I believe it is based on a statistical illusion. I am hoping to publish a paper on a serious math problem in econometrics that nobody seems to have noticed before, and I think it is related to the illusion.
This image is the sampling distribution of Edgeworth's maximum likelihood estimator under Fisher's interpretation (blue) versus the sampling distribution of the Bayesian maximum a posteriori estimator (red) with a flat prior. It comes from a simulation of 1000 trials, each with 10,000 observations, so they should converge. The true value is approximately .99986. Since the MLE is also the OLS estimator in this case, it is also Pearson and Neyman's MVUE.
Note how relatively inaccurate the Frequency-based estimator is compared to the Bayesian. Indeed, the relative efficiency of $\hat{\beta}$ under the two methods is 20:1. Although Leonard Jimmie Savage was certainly alive when the Austrian school left statistical methods behind, the computational ability to use his methods didn't exist. The first element of the illusion is inaccuracy.
The second part can better be seen with a kernel density estimate of the same graph.
In the region of the true value, there are almost no examples of the maximum likelihood estimator being observed, while the Bayesian maximum a posteriori estimator closely covers .999863. In fact, the average of the Bayesian estimators is .99987, whereas the frequency-based solution is .9990. Remember this is with 10,000,000 data points overall.
Frequency-based estimators are averaged over the sample space. The missing implication is that such an estimator is unbiased, on average, over the entire space, but possibly biased for any specific value of $\theta$. You also see this with the binomial distribution. The effect is even greater on the intercept.
The red is the histogram of Frequentist estimates of the intercept, whose true value is zero, while the Bayesian is the spike in blue. The impact of these effects is worsened with small sample sizes because large samples pull the estimator toward the true value.
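The flavor of this bias can be reproduced with a few lines of simulation. The following is my own sketch, not the author's code: I assume the underlying model is an AR(1) regression with a slope equal to the quoted true value of .99986, a setting in which the small-sample downward bias of the OLS/MLE slope estimate is well documented.

```python
import numpy as np

rng = np.random.default_rng(0)
rho_true = 0.99986  # near-unit-root slope, matching the true value quoted above

def simulate_ar1(rho, n, rng):
    """x_t = rho * x_{t-1} + eps_t with standard normal shocks."""
    x = np.zeros(n)
    eps = rng.standard_normal(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + eps[t]
    return x

def ols_slope(x):
    """OLS (here also the Gaussian MLE) estimate: regress x_t on x_{t-1}."""
    y, z = x[1:], x[:-1]
    return float(z @ y / (z @ z))

# 200 trials of length 2000 (smaller than the text's simulation, same effect):
estimates = [ols_slope(simulate_ar1(rho_true, 2000, rng)) for _ in range(200)]
mean_ols = float(np.mean(estimates))
print(mean_ols)  # systematically below rho_true: bias for this specific theta
```

Averaged over repeated samples, the OLS/MLE estimates sit below the true slope, which is the "unbiased over the sample space, biased at a specific $\theta$" phenomenon described above.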
I think the Austrians were seeing results that were inaccurate and didn't always make logical sense. When you add data mining into the mix, I think they were rejecting the practice.
The reason I believe the Austrians are incorrect is that their most serious objections are solved by Leonard Jimmie Savage's personalistic statistics. Savage's Foundations of Statistics fully covers their objections, but I think the split had effectively already happened, and so the two have never really met up.
Bayesian methods are generative methods, while Frequency methods are sampling-based methods. While there are circumstances where it may be inefficient or less powerful, if a second moment exists in the data, then the t-test is always a valid test for hypotheses regarding the location of the population mean. You do not need to know how the data was created in the first place. You need not care. You only need to know that the central limit theorem holds.
Conversely, Bayesian methods depend entirely on how the data came into existence in the first place. For example, imagine you were watching English style auctions for a particular type of furniture. The high bids would follow a Gumbel distribution. The Bayesian solution for inference regarding the center of location would not use a t-test, but rather the joint posterior density of each of those observations with the Gumbel distribution as the likelihood function.
The Bayesian idea of a parameter is broader than the Frequentist and can accommodate completely subjective constructions. As an example, Ben Roethlisberger of the Pittsburgh Steelers could be considered a parameter. He would also have parameters associated with him such as pass completion rates, but he could have a unique configuration and he would be a parameter in a sense similar to Frequentist model comparison methods. He might be thought of as a model.
The complexity rejection isn't valid under Savage's methodology and indeed cannot be. If there were no regularities in human behavior, it would be impossible to cross a street or take a test. Food would never be delivered. It may be the case, however, that "orthodox" statistical methods can give pathological results that have pushed some groups of economists away.
|
Definition:Continuous Real Function/Left-Continuous/Point Definition
Let $f: A \to \R$ be a real function.
Let $x_0 \in A$. Then $f$ is left-continuous at $x_0$ if and only if: $\displaystyle \lim_{\substack{x \mathop \to x_0^- \\ x_0 \mathop \in A}} f \left({x}\right) = f \left({x_0}\right)$
where $\displaystyle \lim_{x \mathop \to x_0^-}$ is a limit from the left.
Furthermore, $f$ is said to be left-continuous if and only if $f$ is left-continuous at every $x_0 \in A$.
Also known as
A function which is left-continuous (either at a point or generally) is also seen referred to as continuous from the left.
|
We have now seen some of the most generally useful methods for discovering antiderivatives, and there are others. Unfortunately, some functions have no simple antiderivatives; in such cases if the value of a definite integral is needed it will have to be approximated. We will see two methods that work reasonably well and yet are fairly simple; in some cases more sophisticated techniques will be needed.
Of course, we already know one way to approximate an integral: if we think of the integral as computing an area, we can add up the areas of some rectangles. While this is quite simple, it is usually the case that a large number of rectangles is needed to get acceptable accuracy. A similar approach is much better: we approximate the area under a curve over a small interval as the area of a trapezoid. In figure 10.5.1 we see an area under a curve approximated by rectangles and by trapezoids; it is apparent that the trapezoids give a substantially better approximation on each subinterval.
As with rectangles, we divide the interval into $n$ equal subintervals of length $\Delta x$. A typical trapezoid is pictured in figure 10.5.2; it has area $\ds{f(x_i)+f(x_{i+1})\over2}\Delta x$. If we add up the areas of all trapezoids we get $$ \eqalign{ {f(x_0)+f(x_1)\over2}\Delta x&+{f(x_1)+f(x_2)\over2}\Delta x+\cdots+ {f(x_{n-1})+f(x_n)\over2}\Delta x=\cr &\left({f(x_0)\over2}+f(x_1)+f(x_2)+\cdots+f(x_{n-1})+{f(x_n)\over2}\right) \Delta x.\cr}$$ This is usually known as the Trapezoid Rule. For a modest number of subintervals this is not too difficult to do with a calculator; a computer can easily do many subintervals.
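In Python (a sketch of my own; the text's own computations use Sage, which is itself Python-based), the trapezoid sum can be coded directly from the formula above:

```python
import math

def trapezoid(f, a, b, n):
    """Trapezoid rule: (f(x0)/2 + f(x1) + ... + f(x_{n-1}) + f(xn)/2) * dx."""
    dx = (b - a) / n
    total = (f(a) + f(b)) / 2
    for i in range(1, n):
        total += f(a + i * dx)
    return total * dx

# Reproduces the six-subinterval value from example 10.5.2:
approx = trapezoid(lambda x: math.exp(-x * x), 0.0, 1.0, 6)
print(round(approx, 5))  # 0.74512
```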
In practice, an approximation is useful only if we know how accurate it is; for example, we might need a particular value accurate to three decimal places. When we compute a particular approximation to an integral, the error is the difference between the approximation and the true value of the integral. For any approximation technique, we need an error estimate, a value that is guaranteed to be larger than the actual error. If $A$ is an approximation and $E$ is the associated error estimate, then we know that the true value of the integral is between $A-E$ and $A+E$. In the case of our approximation of the integral, we want $E=E(\Delta x)$ to be a function of $\Delta x$ that gets small rapidly as $\Delta x$ gets small. Fortunately, for many functions, there is such an error estimate associated with the trapezoid approximation.
Theorem 10.5.1 Suppose $f$ has a second derivative $f''$ everywhere on the interval $[a,b]$, and $|f''(x)|\le M$ for all $x$ in the interval. With $\Delta x= (b-a)/n$, an error estimate for the trapezoid approximation is $$ E(\Delta x) = {b-a\over12}M(\Delta x)^2={(b-a)^3\over 12n^2}M. $$
Let's see how we can use this.
Example 10.5.2 Approximate $\ds\int_0^1 e^{-x^2}\,dx$ to two decimal places. The second derivative of $\ds f=e^{-x^2}$ is $\ds(4x^2-2)e^{-x^2}$, and it is not hard to see that on $[0,1]$, $\ds|(4x^2-2)e^{-x^2}|\le 2$. We begin by estimating the number of subintervals we are likely to need. To get two decimal places of accuracy, we will certainly need $E(\Delta x)< 0.005$ or $$ \eqalign{ {1\over12}(2){1\over n^2} &< 0.005\cr {1\over6}(200)&< n^2\cr 5.77\approx\sqrt{100\over3}&< n\cr} $$ With $n=6$, the error estimate is thus $\ds1/6^3< 0.0047$. We compute the trapezoid approximation for six intervals: $$ \left({f(0)\over2}+f(1/6)+f(2/6)+\cdots+f(5/6)+{f(1)\over2}\right){1\over6} \approx 0.74512. $$ So the true value of the integral is between $0.74512-0.0047=0.74042$ and $0.74512+0.0047=0.74982$. Unfortunately, the first rounds to $0.74$ and the second rounds to $0.75$, so we can't be sure of the correct value in the second decimal place; we need to pick a larger $n$. As it turns out, we need to go to $n=12$ to get two bounds that both round to the same value, which turns out to be $0.75$. For comparison, using $12$ rectangles to approximate the area gives $0.7727$, which is considerably less accurate than the approximation using six trapezoids.
In practice it generally pays to start by requiring better than the maximum possible error; for example, we might have initially required $E(\Delta x)< 0.001$, or $$ \eqalign{ {1\over12}(2){1\over n^2} &< 0.001\cr {1\over6}(1000)&< n^2\cr 12.91\approx\sqrt{500\over3}&< n\cr} $$ Had we immediately tried $n=13$ this would have given us the desired answer.
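This bookkeeping can be automated. Here is a small helper (my own sketch, not from the text) that returns the smallest $n$ whose trapezoid error bound beats a given tolerance:

```python
import math

def trapezoid_n(a, b, M, tol):
    """Smallest n with (b-a)^3 * M / (12 * n^2) < tol (the trapezoid bound)."""
    n = max(1, math.isqrt(math.ceil((b - a) ** 3 * M / (12 * tol))))
    while (b - a) ** 3 * M / (12 * n ** 2) >= tol:
        n += 1
    return n

print(trapezoid_n(0.0, 1.0, 2.0, 0.005))  # 6, as in example 10.5.2
print(trapezoid_n(0.0, 1.0, 2.0, 0.001))  # 13, the tighter requirement above
```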
The trapezoid approximation works well, especially compared to rectangles, because the tops of the trapezoids form a reasonably good approximation to the curve when $\Delta x$ is fairly small. We can extend this idea: what if we try to approximate the curve more closely, by using something other than a straight line? The obvious candidate is a parabola: if we can approximate a short piece of the curve with a parabola with equation $\ds y=ax^2+bx+c$, we can easily compute the area under the parabola.
There are an infinite number of parabolas through any two given points, but only one through three given points. If we find a parabola through three consecutive points $(x_i,f(x_i))$, $(x_{i+1},f(x_{i+1}))$, $(x_{i+2},f(x_{i+2}))$ on the curve, it should be quite close to the curve over the whole interval $[x_i,x_{i+2}]$, as in figure 10.5.3. If we divide the interval $[a,b]$ into an even number of subintervals, we can then approximate the curve by a sequence of parabolas, each covering two of the subintervals. For this to be practical, we would like a simple formula for the area under one parabola, namely, the parabola through $(x_i,f(x_i))$, $(x_{i+1},f(x_{i+1}))$, and $(x_{i+2},f(x_{i+2}))$. That is, we should attempt to write down the parabola $y=ax^2+bx+c$ through these points and then integrate it, and hope that the result is fairly simple. Although the algebra involved is messy, this turns out to be possible. The algebra is well within the capability of a good computer algebra system like Sage, so we will present the result without all of the algebra; you can see how to do it in this Sage worksheet.
To find the parabola, we solve these three equations for $a$, $b$, and $c$: $$ \eqalign{ f(x_i)&=a(x_{i+1}-\Delta x)^2+b(x_{i+1}-\Delta x)+c\cr f(x_{i+1})&=a(x_{i+1})^2+b(x_{i+1})+c\cr f(x_{i+2})&=a(x_{i+1}+\Delta x)^2+b(x_{i+1}+\Delta x)+c\cr}$$ Not surprisingly, the solutions turn out to be quite messy. Nevertheless, Sage can easily compute and simplify the integral to get $$ \int_{x_{i+1}-\Delta x}^{x_{i+1}+\Delta x} ax^2+bx+c\,dx= {\Delta x\over3}(f(x_i)+4f(x_{i+1})+f(x_{i+2})).$$ Now the sum of the areas under all parabolas is $$ \displaylines{ {\Delta x\over3}(f(x_0)+4f(x_{1})+f(x_{2})+f(x_2)+4f(x_{3})+f(x_{4})+\cdots +f(x_{n-2})+4f(x_{n-1})+f(x_{n}))=\cr {\Delta x\over3}(f(x_0)+4f(x_{1})+2f(x_{2})+4f(x_{3})+2f(x_{4})+\cdots +2f(x_{n-2})+4f(x_{n-1})+f(x_{n})).\cr}$$ This is just slightly more complicated than the formula for trapezoids; we need to remember the alternating 2 and 4 coefficients; note that $n$ must be even for this to make sense. This approximation technique is referred to as Simpson's Rule.
As with the trapezoid method, this is useful only with an error estimate:
Theorem 10.5.3 Suppose $f$ has a fourth derivative $f^{(4)}$ everywhere on the interval $[a,b]$, and $|f^{(4)}(x)|\le M$ for all $x$ in the interval. With $\Delta x= (b-a)/n$, an error estimate for Simpson's approximation is $$ E(\Delta x) = {b-a\over180}M(\Delta x)^4={(b-a)^5\over 180n^4}M. $$
Example 10.5.4 Let us again approximate $\ds\int_0^1 e^{-x^2}\,dx$ to two decimal places. The fourth derivative of $\ds f=e^{-x^2}$ is $\ds(16x^4-48x^2+12)e^{-x^2}$; on $[0,1]$ this is at most $12$ in absolute value. We begin by estimating the number of subintervals we are likely to need. To get two decimal places of accuracy, we will certainly need $E(\Delta x)< 0.005$, but taking a cue from our earlier example, let's require $E(\Delta x)< 0.001$: $$ \eqalign{ {1\over180}(12){1\over n^4} &< 0.001\cr {200\over3}&< n^4\cr 2.86\approx\root[4] \of {200\over3}&< n\cr} $$ So we try $n=4$, since we need an even number of subintervals. Then the error estimate is $\ds12/180/4^4< 0.0003$ and the approximation is $$ (f(0)+4f(1/4)+2f(1/2)+4f(3/4)+f(1)){1\over3\cdot4} \approx 0.746855. $$ So the true value of the integral is between $0.746855-0.0003=0.746555$ and $0.746855+0.0003=0.747155$, both of which round to $0.75$.
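Simpson's rule is equally short to code (again a Python sketch of my own):

```python
import math

def simpson(f, a, b, n):
    """Simpson's rule with the 1, 4, 2, 4, ..., 2, 4, 1 weights; n must be even."""
    if n % 2:
        raise ValueError("n must be even")
    dx = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * f(a + i * dx)
    return total * dx / 3

approx = simpson(lambda x: math.exp(-x * x), 0.0, 1.0, 4)
print(round(approx, 6))  # 0.746855, as in example 10.5.4
```

As a bonus check, `simpson` is exact on cubics, which is the content of exercise 10.5.11 below.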
Exercises 10.5
In the following problems, compute the trapezoid and Simpson approximations using 4 subintervals, and compute the error estimate for each. (Finding the maximum values of the second and fourth derivatives can be challenging for some of these; you may use a graphing calculator or computer software to estimate the maximum values.) If you have access to Sage or similar software, approximate each integral to two decimal places. You can use this Sage worksheet to get started.
Ex 10.5.1 $\ds\int_1^3 x\,dx$
Ex 10.5.2 $\ds\int_0^3 x^2\,dx$
Ex 10.5.3 $\ds\int_2^4 x^3\,dx$
Ex 10.5.4 $\ds\int_1^3 {1\over x}\,dx$
Ex 10.5.5 $\ds\int_1^2 {1\over 1+x^2}\,dx$
Ex 10.5.6 $\ds\int_0^1 x\sqrt{1+x}\,dx$
Ex 10.5.7 $\ds\int_1^5 {x\over 1+x}\,dx$
Ex 10.5.8 $\ds\int_0^1 \sqrt{x^3+1}\,dx$
Ex 10.5.9 $\ds\int_0^1 \sqrt{x^4+1}\,dx$
Ex 10.5.10 $\ds\int_1^4 \sqrt{1+1/x}\,dx$
Ex 10.5.11 Using Simpson's rule on a parabola $f(x)$, even with just two subintervals, gives the exact value of the integral, because the parabolas used to approximate $f$ will be $f$ itself. Remarkably, Simpson's rule also computes the integral of a cubic function $f(x)=ax^3+bx^2+cx+d$ exactly. Show this is true by showing that $$ \int_{x_0}^{x_2} f(x)\,dx={x_2-x_0\over3\cdot2}(f(x_0)+4f((x_0+x_2)/2)+f(x_2)).$$ This does require a bit of messy algebra, so you may prefer to use Sage.
|
In electrodynamics, for a line segment of length $2L$ with a uniform line charge density $\lambda$, the electric field at a distance $z$ above the midpoint of this straight line segment is
$E\left(\vec{r}\right)=\frac{1}{4 \pi \varepsilon_{0}}\frac{2 \lambda L}{z\sqrt{z^{2}+L^{2}}}\hat{z}$
For a field point with $z \gg L$, we may neglect $L^{2}$ next to $z^{2}$ under the square root, so that
$E\approx \frac{1}{4 \pi \varepsilon_{0}}\frac{2 \lambda L}{z^{2}}$
and
for the limit $L\rightarrow \infty$:
$E = \frac{1}{4 \pi \varepsilon_{0}}\frac{2 \lambda}{z}$
In the case of the second limit, I have forgotten how the derivation was done.
It seems to involve a Taylor series expansion, if I remember correctly.
Any jolt to my memory would be appreciated.
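One way the derivation may have gone (a sketch from the exact expression above, not necessarily the original argument): factor $L$ out of the square root,

$E = \frac{1}{4 \pi \varepsilon_{0}}\frac{2 \lambda L}{z\,L\sqrt{1+z^{2}/L^{2}}} = \frac{1}{4 \pi \varepsilon_{0}}\frac{2 \lambda}{z\sqrt{1+z^{2}/L^{2}}}$

then the Taylor expansion $(1+u)^{-1/2} \approx 1 - u/2$ with $u = z^{2}/L^{2}$ shows the square-root factor tends to $1$ as $L \rightarrow \infty$, leaving $E = \frac{1}{4 \pi \varepsilon_{0}}\frac{2 \lambda}{z}$.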
|
This is related to another question I just asked where I learned that the equation of motion of a harmonic oscillator is expressed as:
$$\ddot{x}+kx=0$$
What little physics I grasp centers on geodesics as derived from the principle of stationary action and the Euler-Lagrange equations. I have therefore become accustomed to understanding the equation of motion as the geodesic:
$$\ddot{x}^m+{\Gamma^{\:\:m}_{jk} \dot{x}^j \dot{x}^k}=0$$
which can also be thought of as the covariant derivative of the tangent vector of a particle's path. I guess this second equation is mostly used for the analysis of particle motion in GR, but I also understand it is applicable to other situations with position-dependent coefficients (like the motion of light through a medium with a varying refractive index). (We can drop all the indices, by the way, since the harmonic oscillator is one-dimensional.)
My question: Is it possible to reduce the second equation to the first? The acceleration term is the same, and (I think) Hooke's constant $k$ is basically like the Christoffel symbol in the second eq., but I don't see the similarity between $x$ and $\dot{x}^2$. I sense I am missing something big. Appreciate your help.
EDIT: --I include here a response to JerrySchirmer in the comments section below-- In the Newtonian limit (flat and slow) the $00$ component (or $tt$) of the Christoffel symbol is the only one that doesn't vanish. I wanted to see if this component could somehow be expressed as $-kx$. But (insofar as I understand) this one non-vanishing component is usually of first order (a field gradient), not "0th order" like $-kx$. Is there a way to think of $kx$ as a field gradient--like $$kx=\frac{\partial \phi}{\partial x}$$?
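For what it's worth, here is a sketch of how the reduction can go (my own reconstruction, assuming a weak static potential $\phi$ and $c=1$). In the Newtonian limit the geodesic equation reduces to

$$\ddot{x}^i + \Gamma^{\:\:i}_{tt}\,\dot{t}^2 \approx 0, \qquad \Gamma^{\:\:i}_{tt} \approx \frac{\partial \phi}{\partial x^i}, \qquad \dot{t} \approx 1,$$

so $\ddot{x} = -\partial \phi/\partial x$ in one dimension. Choosing $\phi = \tfrac{1}{2}kx^2$ then gives exactly $kx = \partial \phi/\partial x$ and recovers $\ddot{x}+kx=0$.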
|
Omega, $\omega$
The smallest infinite ordinal, often denoted $\omega$ (omega), has the order type of the natural numbers. As a von Neumann ordinal, $\omega$ is in fact equal to the set of natural numbers. Since $\omega$ is infinite, it is not equinumerous with any smaller ordinal, and so it is an initial ordinal, that is, a cardinal. When considered as a cardinal, the ordinal $\omega$ is denoted $\aleph_0$. So while these two notations are intensionally different---we use the term $\omega$ when using this number as an ordinal and $\aleph_0$ when using it as a cardinal---nevertheless in the contemporary treatment of cardinals in ZFC as initial ordinals, they are extensionally the same and refer to the same object.
Countable sets
A set is countable if it can be put into bijective correspondence with a subset of $\omega$. This includes all finite sets, and a set is countably infinite if it is countable and also infinite. Some famous examples of countable sets include: the natural numbers $\mathbb{N}=\{0,1,2,\ldots\}$; the integers $\mathbb{Z}=\{\ldots,-2,-1,0,1,2,\ldots\}$; the rational numbers $\mathbb{Q}=\{\frac{p}{q}\mid p,q\in\mathbb{Z}, q\neq 0\}$; and the real algebraic numbers $\mathbb{A}$, consisting of all zeros of nontrivial polynomials over $\mathbb{Q}$.
The union of countably many countable sets remains countable, although in the general case this fact requires the axiom of choice.
A set is uncountable if it is not countable.
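To make the first two examples concrete, the countability of $\mathbb{Z}$ can be exhibited by an explicit bijection from $\omega$; a small Python sketch of the familiar zig-zag enumeration $0, 1, -1, 2, -2, \ldots$:

```python
def nat_to_int(n):
    """Zig-zag bijection from the natural numbers onto the integers."""
    # Odd n map to 1, 2, 3, ...; even n map to 0, -1, -2, ...
    return (n + 1) // 2 if n % 2 else -(n // 2)

print([nat_to_int(n) for n in range(7)])  # [0, 1, -1, 2, -2, 3, -3]
```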
|
In this section, we show that the proof of Theorem 7 in [2] is wrong. Following [2], we define 1-liv\(_2\) to denote the family of languages \(B_n\) restricted to words of length 2. Let us first restate the theorem.
Theorem 7 (Bianchi et al. [2] ). Consider propositional variables r(a) (\(\lnot r(a)\), resp.) with the interpretation that the vertex a is reachable (not reachable, resp.) from the left side, and let \(\mathcal {F}\) be the set of all propositional formulæ on such variables. Then, any reasonable automaton over \(\mathcal {F}\) solving the n-th language from 1-liv\(_2\) must have \(\varOmega (2^{\frac{n}{2}})\) states.
Analogously to Sect. 2.2, let \(A=\{a_1, \ldots , a_n\}\), \(B=\{b_1, \ldots , b_n\}\), and \(C=\{c_1, \ldots , c_n\}\) be the vertices in the first, second, and third column, respectively.
The proof in [2] proceeds as follows. For any \(\emptyset \subsetneq R\subsetneq [n]\), let \(w^{(R)}\in \varSigma _n^2\) be a graph which contains exactly the edges \((a_1, b_i)\) for all \(i\in R\), and \((b_j, c_1)\) for all \(j\in [n]\setminus R\). Hence, any word \(w^{(R)}\) should be rejected by a valid reasonable automaton. For any set \(R\), let \(f_{R}\) be the formula which is defined as follows: \(f_{R}\equiv \bigwedge _{i\in R} r(b_i)\wedge \bigwedge _{j\in [n]\setminus R} \lnot r(b_j)\). Let \(L\) be the set of all \(2^n-2\) words \(w^{(R)}\).
To arrive at a contradiction, let \(\mathcal {A}\) be a reasonable automaton over \(\mathcal {F}\) solving the n-th language from 1-liv\(_2\), which is in the normal form defined in [2], with \(m<2^{\frac{n}{2}}\) states. Then there are at most \(m^2<2^n-2\) pairs of states, and thus there exists a pair of words \(w^{(R_1)}, w^{(R_2)}\in L\), such that the pair of states preceding the (rejecting) final state in the computation of \(\mathcal {A}\) on \(w^{(R_1)}, w^{(R_2)}\) is identical on both of them. Let \(q_{pp}, q_{p}\) be these two states in the order in which they appear in the computation of \(\mathcal {A}\) on \(w^{(R_1)}, w^{(R_2)}\). Since \(\mathcal {A}\) is in the normal form, the formula \(\kappa (q_{pp})\) together with the information carried by either \(w^{(R_1)}_{\tau (q_{pp})}\) or \(w^{(R_2)}_{\tau (q_{pp})}\) cannot imply either \(r(c_1)\) or \(\lnot r(c_1)\).
The next step in the proof is the conclusion that the formula \(\kappa (q_{pp})\) must hold for all words satisfying any of the following four formulas: \(f_{R_1}\wedge r(c_1)\), \(f_{R_1}\wedge \lnot r(c_1)\), \(f_{R_2}\wedge r(c_1)\), and \(f_{R_2}\wedge \lnot r(c_1)\). This conclusion is wrong as demonstrated by the formula \(\kappa (q_{pp})\equiv (f_{R_1}\Rightarrow \bigwedge _{c\in C} \lnot r(c))\wedge (f_{R_2}\Rightarrow \bigwedge _{c\in C} \lnot r(c))\). If \(\tau (q_{pp})=2\), then the formula \(\kappa (q_{pp})\) together with the information carried by either \(w^{(R_1)}_{2}\) or \(w^{(R_2)}_{2}\) does not imply either \(r(c_1)\) or \(\lnot r(c_1)\) (observe that the first symbol of the current word could potentially contain no edges, or contain edges to all vertices in the second column, and still satisfy the formula \(\kappa (q_{pp})\) which is a conjunction of two implications). However, the formula \(\kappa (q_{pp})\) contradicts the formula \(f_{R_1}\wedge r(c_1)\), and thus it does not hold for any word satisfying the formula \(f_{R_1}\wedge r(c_1)\).
Another minor issue in the reasoning of the proof is that the word \(w^{(R_1)}_1w^{(R_2)}_2\) is in the n-th language from 1-liv\(_2\) only if \(R_1\not \subseteq R_2\), which can nevertheless be assumed without loss of generality.
In the following, we construct a concrete counterexample to the (wrong) step in the proof in [2]. Let \(\mathcal {A}'=(\varSigma _n, Q', 2, q_s', Q_F', Q_R', \delta ', \tau ', \kappa ')\) be an arbitrary reasonable automaton over \(\mathcal {F}\) solving the n-th language from 1-liv\(_2\), which is in the normal form. We construct a valid reasonable automaton \(\mathcal {A}\) over \(\mathcal {F}\) solving the n-th language from 1-liv\(_2\), which is in the normal form, and still contradicts the reasoning in the proof of Theorem 7 in [2].
Let us fix two words \(w^{(R_1)}, w^{(R_2)}\in L\), \(R_1\ne R_2\), such that \(R_1\not \subseteq R_2\). In the following, let us abbreviate \(w^{(R_1)}\equiv x\), \(w^{(R_2)}\equiv y\).
Let us define the reasonable automaton \(\mathcal {A}=(\varSigma _n, Q, 2, q_s, Q_F, Q_R, \delta , \tau , \kappa )\) as follows. The set of states \(Q=\{q_s, q_x, q_y, q_{pp}, q_{p}, q_f, q_r\}\cup Q'\). The starting state is \(q_s\in Q\). The set of accepting states is \(Q_F=\{q_f\}\cup Q_F'\), and the set of rejecting states is \(Q_R=\{q_r\}\cup Q_R'\). The focus of a state \(q\in Q\) is as follows: \(\tau (q_s)=\tau (q_{p})=1\), \(\tau (q_x)=\tau (q_y)=\tau (q_{pp})=2\), and \(\tau (q')=\tau '(q')\) for any \(q'\in Q'\setminus (Q_F'\cup Q_R')\).
Let us define \(f_x\equiv f_{R_1}\), and \(f_y\equiv f_{R_2}\). The formula assigned to a state \(q\in Q\) is as follows: \(\kappa (q_s)=\top \), \(\kappa (q_x)=f_x\vee \bigwedge _{b\in B} r(b)\), \(\kappa (q_y)=f_y\vee \bigwedge _{b\in B} r(b)\), \(\kappa (q_{pp})=\kappa (q_{p})=(f_x\Rightarrow \bigwedge _{c\in C}\lnot r(c))\wedge (f_y\Rightarrow \bigwedge _{c\in C} \lnot r(c))\), \(\kappa (q_f)=\bigvee _{c\in C} r(c)\), \(\kappa (q_r)=\bigwedge _{c\in C}\lnot r(c)\), and \(\kappa (q')=\kappa '(q')\) for any \(q'\in Q'\).
The transition function is as follows: \(\delta (q_s, x_1)=q_x\), \(\delta (q_s, y_1)=q_y\), \(\delta (q_x, x_2)=\delta (q_y, y_2)=q_{pp}\), \(\delta (q_{pp}, x_2)=\delta (q_{pp}, y_2)=q_{p}\), \(\delta (q_{p}, x_1)=\delta (q_{p}, y_1)=q_r\), and \(\delta (q, a)\in \{q_f, q_r, q_s'\}\) for any \(q\in \{q_s, q_x, q_y, q_{pp}, q_{p}\}\) and \(a\in \varSigma _n\) such that \(\delta (q, a)\) has not been defined previously. Note that \(\delta (q, a)\in \{q_f, q_r\}\) whenever possible, so that \(\mathcal {A}\) is in the normal form. Finally, \(\delta (q', a)=\delta '(q', a)\) for any \(q'\in Q'\setminus (Q_F'\cup Q_R')\) and \(a\in \varSigma _n\).
The computation of \(\mathcal {A}\) on the words \(x\), \(y\) looks as follows:
$$\begin{aligned} (x, q_s)&\vdash _{\mathcal {A}} (x, q_x)\vdash _{\mathcal {A}} (x, q_{pp})\vdash _{\mathcal {A}} (x, q_{p})\vdash _{\mathcal {A}} (x, q_r),\\ (y, q_s)&\vdash _{\mathcal {A}} (y, q_y)\vdash _{\mathcal {A}} (y, q_{pp})\vdash _{\mathcal {A}} (y, q_{p})\vdash _{\mathcal {A}} (y, q_r). \end{aligned}$$
Hence, the computation of \(\mathcal {A}\) on \(x\), \(y\) has the same pair of states preceding the (rejecting) final state \(q_r\). According to the proof of Theorem 7 in [2], the formula \(\kappa (q_{pp})\) must hold for all words satisfying \(f_x\wedge r(c_1)\), which is not the case, as we see from the definition of \(\kappa (q_{pp})\).
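The explicitly fixed foci and transitions can be checked mechanically. The following Python sketch is hypothetical (the names 'qs', 'x1', etc. are my placeholders for \(q_s\), \(x_1\), and so on, and only the explicitly defined transitions are tabulated); it reproduces the two computations displayed above.

```python
# Focus tau(q): which position of the length-2 word is read in state q.
tau = {'qs': 1, 'qx': 2, 'qy': 2, 'qpp': 2, 'qp': 1}

# Only the transitions fixed explicitly in the construction.
delta = {
    ('qs', 'x1'): 'qx',  ('qs', 'y1'): 'qy',
    ('qx', 'x2'): 'qpp', ('qy', 'y2'): 'qpp',
    ('qpp', 'x2'): 'qp', ('qpp', 'y2'): 'qp',
    ('qp', 'x1'): 'qr',  ('qp', 'y1'): 'qr',
}

def run(word):
    """word maps position -> symbol name, e.g. {1: 'x1', 2: 'x2'}."""
    q, trace = 'qs', ['qs']
    while q in tau:                      # qr (and qf) are halting states
        q = delta[(q, word[tau[q]])]
        trace.append(q)
    return trace

print(run({1: 'x1', 2: 'x2'}))  # ['qs', 'qx', 'qpp', 'qp', 'qr']
print(run({1: 'y1', 2: 'y2'}))  # ['qs', 'qy', 'qpp', 'qp', 'qr']
```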
It only remains to show that \(\mathcal {A}\) is a valid reasonable automaton in the normal form. Since \(\mathcal {A}'\) is assumed to be a valid reasonable automaton in the normal form, it suffices to only check the transitions from the states \(q\in \{q_s, q_x, q_y, q_{pp}, q_{p}\}\). We omit the details.
|
Two players A and B each have a fair coin, and they toss simultaneously (each simultaneous toss counts as one round). They stop after $n$ ($\ge 1$) rounds because they have accumulated the same number of heads (possibly 0, in the case that neither of them got any head) for the first time. What is the distribution of $n$, and what is its expectation?
The problem can be mapped on a one-dimensional discrete random walk. Imagine that you start at 0 and you make a step +1 (for the outcome HT) with probability $1/4$, a step -1 (for the outcome TH) with probability $1/4$, and a step 0 (for the outcomes TT and HH) with probability $1/2$. The question about the number of coin tosses after which both have accumulated the same number of heads is then equivalent to asking after how many steps the random walk returns to the origin.
Let us introduce the random variable $Z_i$ at step $i$ with value +1 (for HT), -1 (for TH), and 0 in the other cases. As described above, it holds that $P(Z_i = 1) =1/4$, $P(Z_i = -1) =1/4$, and $P(Z_i = 0) =1/2$. Furthermore, introduce $S_n = \sum_{i=1}^n Z_i$. $S_n$ denotes the number of heads which player 1 has obtained minus the number of heads which player 2 has obtained after $n$ steps. The sequence $\{S_n\}$ is called a random walk. The question about both players having the same number of heads corresponds to the problem of the first return to the origin.
Let us first solve a simpler problem: what is the probability $P_n$ of a return after $n$ steps? We should have the same number of $Z$'s equal to $+1$ and $-1$. At each step we have 3 outcomes $Z= \pm 1,0$. In order to return to the origin, we need to have $m$ times $+1$, $m$ times $-1$, and $n-2m$ times $0$ ($0 \leq m \leq n/2$); this event has probability $$ \frac{n!}{m! m! (n-2m)!} P(Z_i = 1)^m P(Z_i = -1)^m P(Z_i = 0)^{n-2m} = \frac{n!}{(m!)^2 (n-2m)! 2^{n+2m}} .$$ The probability to return after $n$ steps is thereby given by $$ P_n = \sum_{m=0}^{\lfloor n/2 \rfloor} \frac{n!}{(m!)^2 (n-2m)! 2^{n+2m} }= {2n \choose n}2^{-2n}.$$ What remains to be done is to obtain the probability of the first return.
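The closed form claimed for $P_n$ can be sanity-checked against the multinomial sum with exact rational arithmetic (a quick Python check of my own):

```python
from fractions import Fraction
from math import comb, factorial

def P_direct(n):
    """Sum over m of n! / ((m!)^2 (n-2m)! 2^(n+2m))."""
    return sum(
        Fraction(factorial(n),
                 factorial(m) ** 2 * factorial(n - 2 * m) * 2 ** (n + 2 * m))
        for m in range(n // 2 + 1)
    )

def P_closed(n):
    """The closed form C(2n, n) / 2^(2n)."""
    return Fraction(comb(2 * n, n), 4 ** n)

assert all(P_direct(n) == P_closed(n) for n in range(12))
print(P_direct(2))  # 3/8
```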
There is a quite general relation between the probability $P_n$ of a return after $n$ steps and the probability $\pi_n$ of the first return: $$ P_n= \pi_1 P_{n-1} + \cdots + \pi_n P_0.$$ The interpretation is simple: to return at step $n$, either you have the first return after one step and then return again after $n-1$ steps, or you return the first time after the second step and then again after $n-2$ steps, ..., or you return the first time after $n$ steps ($P_0 = 1$).
The right hand side is a convolution of two probability distributions. Therefore, it is convenient to introduce the generating functions $P(x) = \sum_{n=0}^\infty P_n x^n$ and $\Pi (x) = \sum_{n=0}^\infty \pi_n x^n$ (we set $\pi_0=0$). Multiplying the aforementioned equation by $x^n$ and summing over $n$, we obtain the relation $P(x) = 1 + P(x) \Pi(x)$ (remember $\pi_0 =0$ and $P_0 =1$). We can solve this equation for $$\Pi(x) = \frac{ P(x) -1 }{P(x)}.$$ All that remains to be done is to evaluate $P(x)$. This can be done (binomial theorem) with the result $$ P(x) = \frac{1}{\sqrt{1-x}}$$ and therefore we obtain $$\Pi(x) = 1- \sqrt{1-x}.$$ Expanding this result in a Taylor series, we obtain the probability of the first return (the probability that the game stops after $n$ rounds) $$ \pi_n = \frac{{2n \choose n} }{(2n -1)\, 2^{2n}}, \quad n\geq1.$$
The expected number of rounds can also be calculated: $$ \bar n = \sum_{n=1}^\infty \pi_n n \to \infty,$$ i.e., it diverges. The reason is that the asymptotic behavior of $\pi_n$ is given by $$\pi_n \sim \frac{1}{2 \sqrt{\pi} n^{3/2}}.$$
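One can also verify the first-return formula against the renewal equation $P_n = \pi_1 P_{n-1} + \cdots + \pi_n P_0$ with exact fractions, and compare $\pi_n$ with its asymptotic form; a Python sketch:

```python
from fractions import Fraction
from math import comb, pi, sqrt

def P(n):                # return probability after n steps
    return Fraction(comb(2*n, n), 4**n)

def first_return(n):     # pi_n = binom(2n,n) / ((2n-1) 4^n)
    return Fraction(comb(2*n, n), (2*n - 1) * 4**n)

# renewal equation: P_n = sum_{k=1}^n pi_k P_{n-k}
for n in range(1, 15):
    assert P(n) == sum(first_return(k) * P(n - k) for k in range(1, n + 1))

# compare pi_n with the asymptotic form 1/(2 sqrt(pi) n^(3/2))
for n in (10, 100):
    print(n, float(first_return(n)), 1 / (2 * sqrt(pi) * n**1.5))
```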
Edit:
The question has arisen of how to evaluate the sum leading to $P_n$. Borrowing from Mike's solution, there is an alternative way to get $P_n$ (and thereby a proof that the sum really is what I wrote). Instead of looking at the game as having $n$ rounds, we can think of it as having $2n$ rounds, where we split the tosses of player 1 and player 2. If player 1 tosses H we move $+1/2$, for T we move $-1/2$. If player 2 tosses T we move $+1/2$, for H we move $-1/2$. After two rounds this leads to the same random walk as I described above (HH and TT: no move, HT: move $+1$, TH: move $-1$). The probability $P_n$ to return after $n$ rounds (of the original game) is therefore equal to the probability to return after $2n$ rounds in the new game. There are $2n \choose n$ possible paths which return after $2n$ rounds, because we can choose which $n$ steps move up (H for player 1 and T for player 2); in the rest of the steps we have to move down. In total there are $2^{2n}$ paths of length $2n$. The probability to return to the origin after $2n$ steps ($n$ steps in the original game) is therefore given by $$ P_n = {2n \choose n} 2^{-2n}.$$
The probability of the game ending after $n$ rounds is $\frac{2}{4^n}C_{n-1}$, where $C_n = \frac{1}{n+1} \binom{2n}{n}$, the $n$th Catalan number, and the expected number of rounds is infinite.
To see that the probability is $\frac{2}{4^n}C_{n-1}$, we use the interpretation of the $(n-1)$st Catalan number as the number of Dyck paths from $(0,0)$ to $(n,n)$ that lie strictly below and do not touch the diagonal $y=x$ except at $(0,0)$ and $(n,n)$.
Assume Player 1 has more heads except at the beginning and at the end. Let $P1(k)$ denote the number of heads Player 1 has after $k$ rounds and $P2(k)$ denote the number of heads Player 2 has after $k$ rounds. Let $Z_k = P1(k) - P2(k)$. Let $(x_k,y_k)$ denote the position on the lattice after $2k$ steps in a Dyck path. Let $X_k = x_k - k$.
Associate the outcomes of the coin flips (H = heads, T = tails, with Player 1's outcome first) with two steps in a Dyck path (R = right, U = up) in the following fashion:
HT $\Leftrightarrow$ RR
HH $\Leftrightarrow$ RU
TT $\Leftrightarrow$ UR
TH $\Leftrightarrow$ UU
We have the following:
If the flips are HT, then $Z_k$ increments by 1. With an RR move, $X_k$ increments by 1.
If the flips are HH, then $Z_k$ doesn't change. With an RU move, $X_k$ doesn't change.
If the flips are TT, then $Z_k$ doesn't change. With a UR move, $X_k$ doesn't change.
If the flips are TH, then $Z_k$ decrements by 1. With a UU move, $X_k$ decrements by 1.
$Z_0 = X_0 = 0$, and both the game and the Dyck path end when $Z_n = X_n = 0$.
So we see that there is a bijection between the set of ways for the game to end after $n$ rounds with Player 1 ahead except at the beginning and the end, and the set of Dyck paths from $(0,0)$ to $(n,n)$ that lie strictly below and do not touch the diagonal $y=x$ except at $(0,0)$ and $(n,n)$. Thus the number of ways for the game to end after $n$ rounds with Player 1 ahead except at the beginning and the end is $C_{n-1}$. Double that to account for Player 2 being ahead except at the beginning and the end, and then obtain the probability of ending after $n$ rounds by dividing by $4^n$, the number of possible sequences of flips after $n$ rounds.
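As a consistency check, this Catalan-number expression agrees with the first-return probability $\binom{2n}{n}/((2n-1)4^n)$ derived in the random-walk answer above; a small Python sketch:

```python
from fractions import Fraction
from math import comb

def catalan(n):
    return comb(2*n, n) // (n + 1)

def stop_prob(n):        # 2 C_{n-1} / 4^n, the Dyck-path count argument
    return Fraction(2 * catalan(n - 1), 4**n)

def first_return(n):     # binom(2n,n) / ((2n-1) 4^n), the random-walk answer
    return Fraction(comb(2*n, n), (2*n - 1) * 4**n)

for n in range(1, 25):
    assert stop_prob(n) == first_return(n)
print("both closed forms agree for n = 1..24")
```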
To see that the expected number of rounds is infinite, let $X_n$ denote the absolute value of the difference between the numbers of heads obtained by the two players by the $n$th round. Then $X_n$ is a discrete-time Markov chain in which the transition probabilities are $p_{00} = \frac{1}{2}$, $p_{01} = \frac{1}{2}$, and, for $i \geq 1$, $p_{i,i+1} = p_{i, i-1} = \frac{1}{4}$, $p_{ii} = \frac{1}{2}$. We want $M_0$, the mean recurrence time for state 0.
Such Markov chains are well-studied; see, for example, Example 5 in these notes. There the author solves the more general case in which $p_{00} = 1- \lambda$, $p_{01} = \lambda$, and, for $i \geq 1$, $p_{i,i+1} = \lambda(1-\mu), p_{i, i-1} = \mu (1 - \lambda)$, $p_{ii} = \lambda \mu + (1 - \lambda)(1 - \mu)$.
The solution is that if $\lambda < \mu$, then $$M_0 = \frac{1}{1 - \frac{(1 - \mu)\lambda}{(1- \lambda) \mu}},$$ and if $\lambda \geq \mu$, then $M_0 = \infty$. (As joriki points out in the comments, the author's expression for $\pi_0$ should actually be the expression for $M_0$.)
Since we have $\lambda = \mu = \frac{1}{2}$, $M_0 = \infty$.
The problem is symmetric with respect to the two players. Treating the first round separately, we have probability $1/2$ to stop at $n=1$ (both $0$ heads or both $1$ head), and probability $1/2$ to continue with a difference of $1$ in the head counts. It will be convenient to denote by $q_n$ the conditional probability that we end after round $n$ if the first step left us with a difference of $1$. The overall probability to end in round $n$ is then given by $1/2$ for $n=1$ and by $q_{n-1}/2$ for $n>1$.
The sign of the difference cannot change since we stop when it becomes zero, so we can assume without loss of generality that it's non-negative. Thus we are interested in the probability distribution $p_n(k)$, where $n$ is the round and $k$ is the difference in head counts. This is a Markov chain with an absorbing state at $k=0$. The evolution of the probabilities is given by
$$p_{n+1}(k)=\frac{1}{4}p_n(k-1)+\frac{1}{2}p_n(k)+\frac{1}{4}p_n(k+1)$$
for $k>1$ and $p_{n+1}(1)=\frac{1}{2}p_n(1) + \frac{1}{4}p_n(2)$ for $k=1$. Let's ignore the complication at the origin for a bit and just treat the recursion relation for general $k$. This is a linear operator being applied to the sequence $p_n (k)$ to obtain the sequence $p_{n+1}(k)$, and the eigenfunctions of this linear operator are the sequences $e^{ik\phi}$ with $-\pi < \phi \le \pi$ (since other values of $\phi$ just reproduce the same sequences). We can obtain the corresponding eigenvalue $\lambda(\phi)$ from
$$\lambda(\phi) e^{ik\phi}=\frac{1}{4}e^{i(k-1)\phi}+\frac{1}{2}e^{ik\phi}+\frac{1}{4}e^{i(k+1)\phi}\;,$$
$$\lambda(\phi) =\frac{1}{4}e^{-i\phi}+\frac{1}{2}+\frac{1}{4}e^{i\phi}\;,$$
$$\lambda(\phi)=\frac{1+\cos\phi}{2}=\cos^2\frac{\phi}{2}\;.$$
We have to combine the sequences $e^{ik\phi}$ and $e^{-ik\phi}$ into sines and cosines to get real sequences. Here's where the boundary at $k=0$ comes into play again. The equation $\lambda(\phi) p_n(1)=\frac{1}{2}p_n(1) + \frac{1}{4}p_n(2)$ provides an additional condition that selects a particular linear combination of the sines and cosines. In fact, it selects the sines, since this equation differs only by the term $\frac{1}{4}p_n(0)$ from the general recursion relation, and they can only both be satisfied if this term would have been zero anyway, which is the case for the sines.
Since we know the time evolution of the eigenfunctions (they are multiplied by the corresponding eigenvalue in each round), we can now decompose our initial sequence, $p_1(1)=1$ and $p_1(k)=0$ for $k\neq1$, into sines and write its time evolution as the sum of the time evolution of the sines. Thus,
$$p_1(k)=\int_0^\pi f(\phi) \sin (k\phi)\mathrm{d}\phi$$
for $k \ge 1$, and we obtain our initial sequence by taking $f(\phi)=(2/\pi)\sin\phi$. Then the time evolution is given by
$$p_n(k)=\frac{2}{\pi}\int_0^\pi \sin\phi \sin (k\phi)\left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi\;.$$
Now the probability $q_n$ that we end after round $n$ is just the probability $p_n(1)$ times the probability $1/4$ that we move from $k=1$ to $k=0$, and so
$$q_n=\frac{1}{2\pi}\int_0^\pi \sin^2\phi \left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi\;.$$
According to Wolfram, this is
$$q_n=\frac{\Gamma(n+1/2)}{\sqrt{\pi}\,\Gamma(n+2)}\;,$$
which I presume could be simplified for integer $n$. We can check that the $q_n$ sum up to $1$:
$$\sum_{n=1}^\infty q_n=\frac{1}{2\pi}\int_0^\pi \sin^2\phi \sum_{n=1}^\infty\left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi$$
$$=\frac{1}{2\pi}\int_0^\pi \frac{\sin^2\phi}{1-\frac{1+\cos\phi}{2}}\mathrm{d}\phi$$
$$=\frac{1}{\pi}\int_0^\pi \frac{\sin^2\phi}{1-\cos\phi}\mathrm{d}\phi$$
$$=\frac{1}{\pi}\int_0^\pi \frac{\sin^2\phi}{1-\cos^2\phi}\left(1+\cos\phi\right)\mathrm{d}\phi$$
$$=\frac{1}{\pi}\int_0^\pi 1+\cos\phi\,\mathrm{d}\phi$$
$$=1\;.$$
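A quick numerical check of the closed form against the integral (a sketch using plain trapezoidal quadrature; `steps` is an arbitrary resolution):

```python
import math

def q_integral(n, steps=20000):
    # trapezoidal rule for (1/(2 pi)) * int_0^pi sin^2(phi) ((1+cos phi)/2)^(n-1) dphi
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        phi = i * h
        f = math.sin(phi)**2 * ((1 + math.cos(phi)) / 2)**(n - 1)
        total += f / 2 if i in (0, steps) else f
    return total * h / (2 * math.pi)

def q_closed(n):
    # Gamma(n + 1/2) / (sqrt(pi) Gamma(n + 2))
    return math.gamma(n + 0.5) / (math.sqrt(math.pi) * math.gamma(n + 2))

for n in range(1, 8):
    assert abs(q_integral(n) - q_closed(n)) < 1e-6
print([q_closed(n) for n in range(1, 4)])
```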
We can also try to find the first moment of this probability distribution:
$$\sum_{n=1}^\infty n q_n=\frac{1}{2\pi}\int_0^\pi \sin^2\phi \sum_{n=0}^\infty n\left(\frac{1+\cos\phi}{2}\right)^{n-1}\mathrm{d}\phi$$
$$=-\frac{1}{\pi}\int_0^\pi \sin\phi \sum_{n=0}^\infty \frac{\mathrm{d}}{\mathrm{d}\phi}\left(\frac{1+\cos\phi}{2}\right)^n\mathrm{d}\phi$$
$$=\frac{1}{\pi}\int_0^\pi \cos\phi \sum_{n=0}^\infty \left(\frac{1+\cos\phi}{2}\right)^n\mathrm{d}\phi$$
$$=\frac{2}{\pi}\int_0^\pi \frac{\cos\phi}{1-\cos\phi}\mathrm{d}\phi\;.$$
This integral diverges at the limit $\phi=0$ (which corresponds to the sequence wandering off to large $k$), and thus, as Mike had already pointed out, the expected number of rounds is infinite.
|
As mentioned in the comments, Binet's formula,
$$F_n=\frac1{\sqrt{5}}(\phi^n-(-\phi)^{-n})$$
where $\phi=\frac12(1+\sqrt 5)$ is the golden ratio, is a closed-form expression for the Fibonacci numbers. See this related question for a few proofs.
As for counting how many Fibonacci numbers there are that are less than or equal to a given number $n$, one can derive an estimate from Binet's formula. The second term in the formula can be ignored for large enough $n$, so
$$F_n\approx\frac{\phi^n}{\sqrt{5}}$$
Solving for $n$ here gives
$$n=\frac{\log\,F_n+\frac12\log\,5}{\log\,\phi}$$
Taking the floor of that gives a reasonable estimate; that is, the expression
$$\left\lfloor\frac{\log\,n+\frac12\log\,5}{\log\,\phi}\right\rfloor$$
can be used to estimate the number of Fibonacci numbers $\le n$. This is inaccurate for small $n$, but does better for large $n$.
It turns out that by adding a fudge term of $\frac12$ to $n$, the false positives of the previous formula disappear. (Well, at least in the range I tested.) Thus,
$$\left\lfloor\frac{\log\left(n+\frac12\right)+\frac12\log\,5}{\log\,\phi}\right\rfloor$$
gives better results.
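A brute-force comparison supports this (a Python sketch; it uses the convention $F_1 = F_2 = 1$, so the "count" is the largest index $k$ with $F_k \le n$, counting the repeated 1 twice):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def count_formula(n):
    # floor((log(n + 1/2) + log(5)/2) / log(phi))
    return math.floor((math.log(n + 0.5) + 0.5 * math.log(5)) / math.log(PHI))

def count_direct(n):
    # count terms of 1, 1, 2, 3, 5, 8, ... that are <= n
    count, a, b = 0, 1, 1
    while a <= n:
        count += 1
        a, b = b, a + b
    return count

assert all(count_formula(n) == count_direct(n) for n in range(1, 10_000))
print("formula matches direct count for n = 1..9999")
```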
|
Does this inequality always hold: $$\frac{1}{6} \pi ^2 \prod _{i=1}^x \frac{p_i^2-1}{p_i^2}\leq \frac{1}{p_x}+1, $$
where $p_i$ is the $i$-th prime number?
Yes. We have $$ \frac{1}{6} \pi ^2 \prod _{i=1}^x \frac{\left(p_i\right){}^2-1}{\left(p_i\right){}^2}=\prod_{i>x} \frac{p_i^2}{p_i^2-1}\leqslant \prod_{n=p_x+1}^{\infty} \frac{n^2}{n^2-1}=\frac1{p_x}+1. $$
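The chain of inequalities can be spot-checked numerically (a sketch with a simple sieve; the `1000` bound is arbitrary):

```python
import math

def primes_up_to(limit):
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = [False] * len(sieve[i*i::i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

lhs = math.pi**2 / 6           # zeta(2) = prod_p p^2/(p^2-1)
for p in primes_up_to(1000):
    lhs *= (p*p - 1) / (p*p)   # truncate the Euler product at p
    assert 1 < lhs <= 1 + 1/p  # the claimed bound (lhs > 1 since a tail remains)
print("inequality holds for all primes up to 1000")
```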
|
Solving a differential equation generally goes something like this:
One is given a set of initial conditions. There is a large freedom of choice in doing so. For example, in Quantum Mechanics any normalizable $\psi(x,0)$ will do. The differential equation then determines how the state evolves, giving $\psi(x,t)$ for all times $t$.
Consider now QFT, in particular the free real scalar field as a simple example. The (Klein-Gordon) equation of motion for fields in the Heisenberg picture is solved by
$$\phi(\vec{x},t)=e^{itH}\phi(\vec{x},0)e^{-itH}$$
For the real scalar field, the explicit solution is written $$\phi(\vec{x},t)=\int (a(\vec{p})e^{-ip\cdot x}+a^{\dagger}(\vec{p})e^{ip\cdot x}) \frac{d^3\vec{p}}{\sqrt{(2\pi)^3 2E_0}}$$
In QFT lecture, it is often stated that this is the most general $\phi(x,t)$ which solves the equation of motion. Let's investigate that claim. In particular, plugging in $t=0$, we have assumed the initial condition
$$\phi(\vec{x},0)=\int (a(\vec{p})e^{i\vec{p}\cdot \vec{x}}+a^{\dagger}(\vec{p})e^{-i\vec{p}\cdot \vec{x}}) \frac{d^3\vec{p}}{\sqrt{(2\pi)^3 2E_0}}$$
But as far as I can tell, the only constraint on $\phi(x,0)$ introduced in lecture is that it should obey the commutation relations. $\phi(x,0)$ is a very important quantity, because in the Schrödinger picture it is the field operator, just as analogously $x$ and $p=-i\nabla$ were the position and momentum operators in Quantum Mechanics.
Therefore I'd like to ask for one of two possible solutions:
Can someone provide a proof that we must have this particular initial condition for the fields? My feeling is that the commutation relations alone are not enough to give this specific form, as operators have a huge amount of degrees of freedom. Just as interesting would be to provide a counterexample: an initial condition $\phi(\vec{x})$ which satisfies the commutation relations but is not equal to $\phi(\vec{x},0)$ above.
Either of these answers would be very elucidating for me. If not a proof, then a reference to a proof would of course also be just as helpful.
EDIT: After more thought, the answer to this question is sort of a mixture of knzhou's and Cosmos Zachos' answers. As knzhou remarked, the form $\phi(x,0)$ is not uniquely defined by the commutation relations or the KG equation, because you could just redefine $\phi(x,0)$ to be the field with $t=1$ and all equations would still hold. This would be equivalent to redefining the operators $a(\vec{p})$ by adding a phase.
However, there is a strong restriction on the $a(\vec{p})$ operators, namely* they can only be redefined up to a momentum-dependent phase $a(\vec{p}) \to e^{i \alpha(\vec{p})}a(\vec{p})$. The proof is along the lines of what Cosmos Zachos was mentioning and the treatment is done in many QFT textbooks. The essential reason this is possible is that redefining $a(\vec{p}) \to e^{i \alpha(\vec{p})}a(\vec{p})$ does not change the commutation relations.
*The momentum-dependence is my own realization and not taken from a book, but seeing as the commutation relations remain unchanged under the redefinition, it seems clear that they cannot constrain the phase of $a(\vec{p})$. In addition, redefining $\phi(\vec{x},0) \to \phi(\vec{x},1)$ is a momentum-dependent change in the $a$ operators: $a(\vec{p}) \to e^{i\sqrt{\vec{p}^2+m^2} t} a(\vec{p})$
|
Recently Makoto Kikuchi (Kobe University) asked me the following interesting question, which arises very naturally if one should adopt a mereological perspective in the foundations of mathematics, placing a focus on the parthood relation rather than the element-of relation. In set theory, this perspective would lead one to view the subset or inclusion relation $\subseteq$ as the primary fundamental relation, rather than the membership $\in$ relation.
Question. Can there be two different models of set theory, with the same inclusion relation?
We spent an evening discussing it, over delicious (Rokko-michi-style) okonomiyaki and bi-ru, just like old times, except that we are in Tokyo at the CTFM 2015, and I’d like to explain the answer, which is yes, this always happens in every model of set theory.
Theorem. In any universe of set theory $\langle V,\in\rangle$, there is a definable relation $\in^*$, different from $\in$, such that $\langle V,\in^*\rangle$ is a model of set theory, in fact isomorphic to the original universe $\langle V,\in\rangle$, for which the corresponding inclusion relation $$u\subseteq^* v\iff \forall a\, (a\in^* u\to a\in^* v)$$ is identical to the usual inclusion relation $u\subseteq v$. Proof. Let $\theta:V\to V$ be any definable non-identity permutation of the universe, and let $\tau:u\mapsto \theta[u]=\{\ \theta(a)\mid a\in u\ \}$ be the function determined by pointwise image under $\theta$. Since $\theta$ is bijective, it follows that $\tau$ is also a bijection of $V$ to $V$, since every set is the $\theta$-image of a unique set. Furthermore, $\tau$ is an automorphism of $\langle V,\subseteq\rangle$, since $$u\subseteq v\iff\theta[u]\subseteq\theta[v]\iff\tau(u) \subseteq\tau(v).$$ I had used this idea a few years ago in my answer to the MathOverflow question, Is the inclusion version of Kunen inconsistency theorem true?, which shows that there are nontrivial $\subseteq$ automorphisms of the universe. Note that since $\tau(\{a\})=\{\theta(a)\}$, it follows that any instance of nontriviality $\theta(a)\neq a$ in $\theta$ leads immediately to an instance of nontriviality in $\tau$.
Using the map $\tau$, define $a\in^* b\iff\tau(a)\in\tau(b)$. By definition, therefore, $\tau$ is an isomorphism of $\langle V,\in^*\rangle\cong\langle V,\in\rangle$. Let us show that $\in^*\neq \in$. Since $\theta$ is nontrivial, there is an $\in$-minimal set $a$ with $\theta(a)\neq a$. By minimality, $\theta[a]=a$ and so $\tau(a)=a$. But as mentioned, $\tau(\{a\})=\{\theta(a)\}\neq\{a\}$. So we have $a\in\{a\}$, but $\tau(a)=a\notin\{\theta(a)\}=\tau(\{a\})$ and hence $a\notin^*\{a\}$. So the two relations are different.
Meanwhile, consider the corresponding subset relation. Specifically, $u\subseteq^* v$ is defined to mean $\forall a\,(a\in^* u\to a\in^* v)$, which holds if and only if $\forall a\, (\tau(a)\in\tau(u)\to \tau(a)\in\tau(v))$; but since $\tau$ is surjective, this holds if and only if $\tau(u)\subseteq \tau(v)$, which as we observed at the beginning of the proof, holds if and only if $u\subseteq v$. So the corresponding subset relations $\subseteq^*$ and $\subseteq$ are identical, as desired.
Another way to express what is going on is that $\tau$ is an isomorphism of the structure $\langle V,{\in^*},{\subseteq}\rangle$ with $\langle V,{\in},{\subseteq}\rangle$, and so $\subseteq$ is in fact that same as the corresponding inclusion relation $\subseteq^*$ that one would define from $\in^*$.
QED

Corollary. One cannot define $\in$ from $\subseteq$ in a model of set theory.

Proof. The map $\tau$ is a $\subseteq$-automorphism, and so it preserves every relation definable from $\subseteq$, but it does not preserve $\in$. QED
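The construction can be played with concretely in a toy finite universe (a Python sketch using $V_4$, the 16 subsets of $V_3$, with $\theta$ a transposition of two elements of $V_3$; this is of course not a model of ZFC, but the computation of $\subseteq^*$ versus $\in^*$ goes through verbatim):

```python
from itertools import combinations

def powerset(s):
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

E = frozenset()                    # the empty set
V3 = [E, frozenset([E]), frozenset([frozenset([E])]), frozenset([E, frozenset([E])])]
U = powerset(V3)                   # V_4: all 16 subsets of V_3

s1, s2 = V3[1], V3[2]              # {emptyset} and {{emptyset}}
def theta(a):                      # nontrivial permutation: swap s1 and s2
    return s2 if a == s1 else s1 if a == s2 else a

def tau(u):                        # pointwise image under theta
    return frozenset(theta(a) for a in u)

def mem_star(a, b):                # a in* b  iff  tau(a) in tau(b)
    return tau(a) in tau(b)

def subset_star(u, v):             # the inclusion relation derived from in*
    return all(mem_star(a, v) for a in U if mem_star(a, u))

# the derived inclusion coincides with ordinary inclusion, yet in* differs from in
assert all(subset_star(u, v) == (u <= v) for u in U for v in U)
assert any(mem_star(a, b) != (a in b) for a in U for b in U)
print("subset* == subset on V_4, but in* != in")
```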
Nevertheless, I claim that the isomorphism type of $\langle V,\in\rangle$ is implicit in the inclusion relation $\subseteq$, in the sense that any other class relation $\in^*$ having that same inclusion relation is isomorphic to the $\in$ relation.
Theorem. Assume ZFC in the universe $\langle V,\in\rangle$. Suppose that $\in^*$ is a class relation for which $\langle V,\in^*\rangle$ is a model of set theory (a weak set theory suffices), such that the corresponding inclusion relation $$u\subseteq^* v\iff\forall a\,(a\in^* u\to a\in^* v)$$is the same as the usual inclusion relation $u\subseteq v$. Then the two membership relations are isomorphic $$\langle V,\in\rangle\cong\langle V,\in^*\rangle.$$ Proof. Since the singleton set $\{a\}$ has exactly two subsets with respect to the usual $\subseteq$ relation — the empty set and itself — this must also be true with respect to the inclusion relation $\subseteq^*$ defined via $\in^*$, since we have assumed $\subseteq^*=\subseteq$. Thus, the object $\{a\}$ is also a singleton with respect to $\in^*$, and so there is a unique object $\eta(a)$ such that $x\in^* a\iff x=\eta(a)$. By extensionality and since every object has its singleton, it follows that $\eta:V\to V$ is both one-to-one and onto. Let $\theta=\eta^{-1}$ be the inverse permutation.
Observe that $a\in u\iff \{a\}\subseteq u\iff \{a\}\subseteq^* u\iff\eta(a)\in^* u$. Thus, $$b\in^* u\iff \theta(b)\in u.$$
Using $\in$-recursion, define $b^*=\{\ \theta(a^*)\mid a\in b\ \}$. The map $b\mapsto b^*$ is one-to-one by $\in$-recursion, since if there is no violation of this for the elements of $b$, then we may recover $b$ from $b^*$ by applying $\theta^{-1}$ to the elements of $b^*$ and then using the induction assumption to find the unique $a$ from $a^*$ for each $\theta(a^*)\in b^*$, thereby recovering $b$. So $b\mapsto b^*$ is injective.
I claim that this map is also surjective. If $y_0\neq b^*$ for any $b$, then there must be an element of $y_0$ that is not of the form $\theta(b^*)$ for any $b$. Since $\theta$ is surjective, this means there is $\theta(y_1)\in y_0$ with $y_1\neq b^*$ for any $b$. Continuing, there is $y_{n+1}$ with $\theta(y_{n+1})\in y_n$ and $y_{n+1}\neq b^*$ for any $b$. Let $z=\{\ \theta(y_n)\mid n\in\omega\ \}$. Since $x\in^* u\iff \theta(x)\in u$, it follows that the $\in^*$-elements of $z$ are precisely the $y_n$’s. But $\theta(y_{n+1})\in y_n$, and so $y_{n+1}\in^* y_n$. So $z$ has no $\in^*$-minimal element, violating the axiom of foundation for $\in^*$, a contradiction. So the map $b\mapsto b^*$ is a bijection of $V$ with $V$.
Finally, we observe that because $$a\in b\iff\theta(a^*)\in b^*\iff a^*\in^* b^*,$$ it follows that the map $b\mapsto b^*$ is an isomorphism of $\langle V,\in\rangle$ with $\langle V,\in^*\rangle$, as desired.
QED
The conclusion is that although $\in$ is not definable from $\subseteq$, nevertheless, the isomorphism type of $\in$ is implicit in $\subseteq$, in the sense that any other class relation $\in^*$ giving rise to the same inclusion relation $\subseteq^*=\subseteq$ is isomorphic to $\in$.
Meanwhile, I do not yet know what the situation is when one drops the assumption that $\in^*$ is a class with respect to the $\langle V,\in\rangle$ universe.
Question. Can there be two models of set theory $\langle M,\in\rangle$ and $\langle M,\in^*\rangle$, not necessarily classes with respect to each other, which have the same inclusion relation $\subseteq=\subseteq^*$, but which are not isomorphic?
(This question is now answered! See my joint paper with Kikuchi at Set-theoretic mereology.)
|
I'm trying to derive the Chernoff bound $\mathrm{erfc}(x) \le \exp(-x^2)$, by first showing:
$$\mathrm{erfc}(x) = \frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}\exp\left(-\frac{x^2}{\cos^2\theta}\right)d\theta$$
This equality should be derived by considering 2 independent, standard gaussian random variables $x_1, x_2$ and the region in the $x_1x_2$-plane where $|x_1|\le x$.
What I have so far, is come up with a diagram like this:
where I consider the probability that $(x_1, x_2)$ falls in the square (4 $\times$ the "blue square"). Then
\begin{align} P(x_1<x,x_2<x) &= P(x_1<x)P(x_2<x)\\ &=\frac{1}{2\pi}\int_{0}^{x}\int_{0}^{x}e^{-\frac{x_1^2+x_2^2}{2}}dx_1dx_2\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{x}{\cos\theta}}e^{-\frac{r^2}{2}}rdrd\theta\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\frac{x^2}{2\cos^2\theta}}e^{-v}dvd\theta\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}-e^{-v}\Big|_0^{\frac{x^2}{2\cos^2\theta}}d\theta\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}1-e^{-\frac{x^2}{2\cos^2\theta}}d\theta\\ &=\frac{1}{4} - \frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}e^{-\frac{x^2}{2\cos^2\theta}}d\theta \end{align} This is the probability for the blue square, so for the entire square, I have: \begin{align} 4P(x_1<x,x_2<x)&=1-\frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}e^{-\frac{x^2}{2\cos^2\theta}}d\theta \end{align} The probability of falling outside the square is: $$1-\Big[1-\frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}e^{-\frac{x^2}{2\cos^2\theta}}d\theta\Big] = \frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}e^{-\frac{x^2}{2\cos^2\theta}}d\theta$$
I'm kind of stuck here. I think this is my $[Q(x)]^2$, but according to this paper (Eq 9), my $Q(x)$ is incorrect. How can I derive the $Q$-function in the paper using this gaussian RV derivation method?
ETA: As per Dilip Sarwate's suggestion, I considered the circle that circumscribes the squares, and I got the following by considering just 1 (upper-right) quadrant.
\begin{align} P(\text{in the circle quadrant}) &= \frac{1}{2\pi}\int_{0}^{\sqrt{2}x}\int_{0}^{\sqrt{2x^2-x_2^2}}e^{-\frac{x_1^2+x_2^2}{2}}dx_1dx_2\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}\int_{0}^{\sqrt{2}x}e^{-\frac{r^2}{2}}rdrd\theta\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}\int_{0}^{x^2}e^{-v}dvd\theta\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}-e^{-v}\Big|_0^{x^2}d\theta\\ &=\frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}1-e^{-x^2}d\theta\\ &=\frac{1}{4}-\frac{1}{4}e^{-x^2} \end{align}
\begin{align} \text{P(in the blue square)} &< \text{P(in the circle quadrant)}\\ \frac{1}{4} - \frac{1}{2\pi}\int_{0}^{\frac{\pi}{2}}e^{-\frac{x^2}{2\cos^2\theta}}d\theta &< \frac{1}{4}-\frac{1}{4}e^{-x^2}\\ \frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}e^{-\frac{x^2}{2\cos^2\theta}}d\theta &> e^{-x^2} \end{align} Let $v = \frac{x}{\sqrt{2}}$
\begin{align} \frac{2}{\pi}\int_{0}^{\frac{\pi}{2}}e^{-\frac{v^2}{\cos^2\theta}}d\theta &> e^{-2v^2} \end{align} I still don't quite understand how my left-hand expression became $\mathrm{erfc}(v)$, since it was calculated from $P(x_1<x,x_2<x)$, and how to eventually get $\mathrm{erfc}(x) \le \exp(-x^2)$.
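For what it's worth, the target identity and the bound can both be checked numerically (a Python sketch; trapezoidal quadrature with an arbitrary step count, skipping the $\theta = \pi/2$ endpoint since the integrand tends to 0 there):

```python
import math

def erfc_craig(x, steps=20000):
    # (2/pi) * int_0^{pi/2} exp(-x^2 / cos^2(theta)) d(theta)
    h = (math.pi / 2) / steps
    total = 0.0
    for i in range(steps):               # endpoint theta = pi/2 contributes 0
        theta = i * h
        f = math.exp(-x*x / math.cos(theta)**2)
        total += f / 2 if i == 0 else f
    return total * h * 2 / math.pi

for x in (0.5, 1.0, 1.5, 2.0):
    assert abs(erfc_craig(x) - math.erfc(x)) < 1e-6   # the identity to be derived
    assert math.erfc(x) <= math.exp(-x*x)             # the Chernoff-type bound
print(erfc_craig(1.0), math.erfc(1.0))
```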
|
In the following section, Set Theory is presupposed.
Definition: A family of sets \(\mathbb{Y} \subseteq \mathcal{P}(X)\) is called a topology on \(X \subseteq R\) if every intersection and union of sets of \(\mathbb{Y}\) belongs, apart from \(\emptyset\) and \(X\), to \(\mathbb{Y}\). The pair \((X, \mathbb{Y})\) is called a topological space. If \(\mathbb{Y} = \mathcal{P}(X)\), the topology is called discrete. A set \(B \subseteq \mathbb{Y}\) is called a base of \(\mathbb{Y}\) if every set of \(\mathbb{Y}\) can be written as a union of any number of sets of \(B\). Every irreflexive relation \(N \subseteq {A}^{2}\) defines a neighbourhood relation in \(A \subseteq X\) for the underlying set \(X\). If \((a, b) \in N\), \(a\) is called a neighbour of, or neighbouring to, \(b\).
Examples: The base for \(\mathbb{N}, \mathbb{Z}, \mathbb{Q}, \mathbb{A}_\mathbb{R}, \mathbb{A}_\mathbb{C}, \mathbb{R}\) and \(\mathbb{C}\) is in each case precisely the related discrete topology.
Definition: In particular, an element \(x \in A \subseteq X\) is called a neighbour of an element \(y \in A\), where \(x \ne y\), if we have for all \(z \in X\) and a mapping \(d: {X}^{2} \rightarrow \mathbb{R}_{\ge 0}\): (1) \(d(x, y) \le \text{max }\{\text{min }\{d(x, z), d(z, x)\}, \text{min }\{d(y, z), d(z, y)\}\}\) and (2) \(d(z, z) = 0\). Here \(d\) is called a neighbourhood metric. Let \(P = R \cup V\) be the set of all points, partitioned into actual points \(R\) and virtual points \(V\) with \(R, V \ne \emptyset = R \cap V\).
Definition: The set \(A' := R \setminus A\), where \(A \subseteq R\), is called the complement of \(A\) in \(R\). When \(R\) is clear from context, it can be omitted and \(A'\) can be called the exterior of \(A\). The set \(\partial V \; (\partial A)\) consists of all points of \(V \; (A)\) that have a neighbour in \(R \; (A' \cup V)\), and is called the (inner) boundary of \(V \; (A)\). Here \('\) takes precedence over \(\partial\). When we apply \(\partial\) successively beyond that, we assume the argument to be without complement. The set \(A ° := A \setminus \partial A\) is called the interior of \(A\).
Definition: A set \(S \subseteq R \; (V)\) is said to be connected if we have for every partition of \(S\) into \(Y \cup Z\) with \(Y, Z \ne \emptyset = Y \cap Z\): \(\partial Y' \cap \partial Z \ne \emptyset \ne \partial Z' \cap \partial Y\). \(S \subseteq R\) is moreover said to be simply connected if both \(\partial Y' \cap \partial Z \cup \partial Z' \cap \partial Y\) for every partition into connected \(Y\) and \(Z\), and \(S' \cup (\partial)V\) for \(S'\) the complement of \(S\) in \(R\), are connected for a connected \((\partial)V\). Let \(P\) and \(R\) be simply connected.
Definition: An \(h\)-homogeneous subset of \(R := \mathbb{R}^{m}\) for \(m \in \mathbb{N}^{*}\) is \(n\)-dimensional, where \(m \ge n \in \mathbb{N}^{*}\), if and only if it contains at least one \(n\)-cube with edge length \(h \in \mathbb{R}_{>0}\) and maximal \(n\). The definition for \(R := \mathbb{C}^{m}\) is analogous. Let dim \({}^{(\omega)}\mathbb{C} = 2\). The set \({\mathbb{B}}_{r}(a) := \{z \in K := {}^{(\omega)}\mathbb{K}^{n} : ||z - a|| \le r\}\) for \(\mathbb{K} = \mathbb{R} \; (\mathbb{C})\) is called the real (complex) (2)n-ball, or briefly ball, with radius \(r \in {}^{(\omega)}\mathbb{R}_{>0}\) around its centre \(a \in K\), and its boundary is called the real (complex) (2)n-sphere \({\mathbb{S}}_{r}(a)\), or briefly sphere.
Examples: Every ball is simply connected and, for \(r > d0\), every real \(n\)-sphere with \(n \ge 2\) is only connected, while every real 1-sphere is not connected.
Definition: When \(a = 0\) and \(r = 1\), we obtain the unit ball, with the special case of the unit disc \(\mathbb{D}\) for \(\mathbb{K} = \mathbb{C}\) and \(n = 1\). Every \(U \subseteq R\) is called a neighbourhood of \(x \in R\) if \(x \in U°\). A function between two topological spaces is said to be continuous if, for every point that can be mapped, every neighbourhood of the image of this point contains the complete image of some neighbourhood of the point.
Remark: The neighbouring boundary points of the conventional closed [0, 1] and the conventional open ]0, 1[ in particular do not have the Hausdorff property. So not every metric space can be a Hausdorff space, and (pre-)regular and normal spaces are limited. The spaces \(\mathbb{C}^{n}\) and \(\mathbb{R}^{n}\) with \(n \in {}^{\omega }\mathbb{N}^{*}\) therefore have only the Fréchet topology. The situation is, however, different in partially imprecise conventional mathematics.
© 2013-2019 by Boris Haase
|
Prove that: $$6 \nmid \left\lfloor\frac{1}{(\sqrt[3]{28} - 3)^{n}}\right\rfloor \quad (n \in \mathbb{Z}^+)$$ ($\lfloor x\rfloor$ = largest integer not exceeding $x$)
I am very bad at English and number theory, please help me.
Let $x=1/(\root3\of{28}-3)$. Then $\root3\of{28}=3+x^{-1}$. Cubing, $28=27+27x^{-1}+9x^{-2}+x^{-3}$, which says $x^3-27x^2-9x-1=0$. If we let $y$ and $z$ be the conjugates of $x$, and let $a_n=x^n+y^n+z^n$, then $a_n$ is an integer for all $n$, $a_n$ is the integer closest to $x^n$ (since $y^n$ and $z^n$ go to zero, quickly), and $a_n$ satisfies the recurrence $a_n=27a_{n-1}+9a_{n-2}+a_{n-3}$. Now you can figure out the initial conditions (that is, the values of $a_0,a_1,a_2$) and then you'll be in a position to use the recurrence to work on the residue of $a_n$ modulo $6$. If you look a little more closely at $y^n$ and $z^n$, you may find that $a_n=[x^n]$, I'm not sure. Anyway, there's some work to be done, but this looks like a promising approach.
If we set $\eta=\sqrt[3]{28}$ and $\omega=\dfrac1{\eta-3}=\dfrac{\eta^3-27}{\eta-3}=\eta^2+3\eta+9$, then, working $\bmod\ \eta^3-28$: $$ \begin{align} \omega^0&=1\\ \omega^1&=9+3\eta+\eta^2\\ \omega^2&=249+82\eta+27\eta^2\\ \omega^3&=6805+2241\eta+738\eta^2 \end{align}\tag{1} $$ Solving the linear equations involved yields $$ \omega^3-27\omega^2-9\omega-1=0\tag{2} $$ Looking at the critical points of $x^3-27x^2-9x-1$, we see that it has one real root and two complex conjugate roots. The real root is $\omega\stackrel.=27.3306395$, and since the product of all the roots is $1$, the absolute value of the two conjugate roots is less than $\frac15$.
Let $\omega_0=\omega$ and $\omega_1$ and $\omega_2=\overline{\omega}_1$ be the roots of $x^3-27x^2-9x-1=0$. Symmetric functions and the coefficients of $(2)$ yield $$ \begin{align} a_0=\omega_0^0+\omega_1^0+\omega_2^0&=3\\ a_1=\omega_0^1+\omega_1^1+\omega_2^1&=27\\ a_2=\omega_0^2+\omega_1^2+\omega_2^2&=747\quad=27^2-2(-9) \end{align}\tag{3} $$ and, because each $\omega_k$ satisfies $(2)$, $$ a_n=27a_{n-1}+9a_{n-2}+a_{n-3}\tag{4} $$ Because $|\omega_1|=|\omega_2|<\frac15$, $|\,a_n-\omega^n\,|\le\frac2{5^n}$. Also, $(3)$ and $(4)$ show that $a_n\equiv3\pmod{6}$.
Therefore, $\omega^0=1$ and for $n\ge1$, $$ \lfloor\omega^n\rfloor\in\{2,3\}\pmod{6}\tag{5} $$
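Not part of the original thread, but a quick numerical sanity check of the answer above: the recurrence with initial values $a_0=3$, $a_1=27$, $a_2=747$ gives $a_n\equiv3\pmod6$, and the floor of $\omega^n$ (computed in floating point, so only for small $n$) is never divisible by $6$.

```python
# Verify a_n = 27 a_{n-1} + 9 a_{n-2} + a_{n-3} and the mod-6 claim for
# x = 1/(28^(1/3) - 3) ≈ 27.33.
x = 1.0 / (28 ** (1.0 / 3.0) - 3.0)

a = [3, 27, 747]  # a_0, a_1, a_2 from the symmetric-function computation
for n in range(3, 15):
    a.append(27 * a[-1] + 9 * a[-2] + a[-3])

for n in range(1, 10):  # float precision limits how far x**n is trustworthy
    assert a[n] % 6 == 3          # a_n ≡ 3 (mod 6)
    assert int(x ** n) % 6 != 0   # floor(x^n) is 2 or 3 mod 6, never 0
```

Since $\lfloor\omega^n\rfloor$ is either $a_n$ or $a_n-1$ and $a_n\equiv3\pmod6$, the floor is $\equiv2$ or $3\pmod6$, matching the conclusion $(5)$.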
|
Inspired by Joshua Grochow and Iddo Tzameret's answers in a post on http://cstheory.stackexchange.com , I would like to get more references on possible connections between complexity theory and set ...
In this post, when I talk about bounded arithmetic theories,I mean the theories of arithmetic according to "Logical Foundations of Proof Complexity", which capture the complexity classes between $AC^...
Definition: Assume that $\phi(q)$ is of the form $\exists y \leq 2^{p(n)} \varphi(q,y)$, where $p$ is a polynomial and $n = |q|$ (i.e. $n$ is the length of the binary representation of $q$). Then a ...
Motivated by Suresh's post, Techniques for showing that problem is in hardness limbo, it seems that there might be an underlying theory that explains why some of these problems can not be complete for ...
We have all been there, when a formula works for the first 30 parameters,but it is not sufficient for a proof. My question is where one can actually just check a finite number of cases, to conclude ...
Assume you have some notion of proof complexity: for instance, at the basic level, the length of a proof, or the number of symbols used, take your pick (there are more involved measures, but for sake ...
The Completeness Theorem in first-order logic states that any mathematical validity is derivable from axioms. Hence, any informal mathematical proof (which is rigorous) can be translated into a formal ...
Is there an oracle such that in the relativized world, bd-Frege (bounded depth Frege propositional proof system) has FIP (feasible interpolation property) but Frege does not have FIP?Such an oracle ...
I've just been asked for a good example of a situation in maths where using infinity can greatly shorten an argument. The person who wants the example wants it as part of a presentation to the general ...
|
ISSN: 1534-0392
eISSN: 1553-5258
Communications on Pure & Applied Analysis
March 2002 , Volume 1 , Issue 1
Abstract:
An efficient adaptive moving mesh method for investigation of the semi-classical limit of the focusing nonlinear Schrödinger equation is presented. The method employs a dynamic mesh to resolve the sea of solitons observed for small dispersion parameters. A second order semi-implicit discretization is used in conjunction with a dynamic mesh generator to achieve a cost-efficient, accurate, and stable adaptive scheme. This method is used to investigate with highly resolved numerics the solution's behavior for small dispersion parameters. Convincing evidence is presented of striking regular space-time patterns for both analytic and non-analytic initial data.
Abstract:
We are interested in partial differential equations and systems of partial differential equations arising in some population dynamics models, for populations living in heterogeneous spatial domains. Discontinuities appear in the coefficients of divergence form operators and in reaction terms as well. Global well-posedness results are given. For models offering a great degree of heterogeneity we derive simpler models with constant coefficients by applying the homogenization method. Long-term behavior is then analyzed.
Abstract:
In this paper, we study the Lagrangian averaged Navier-Stokes (LANS-$\alpha$) equations on bounded domains. The LANS-$\alpha$ equations are able to accurately reproduce the large-scale motion (at scales larger than $\alpha >0$) of the Navier-Stokes equations while filtering or averaging over the motion of the fluid at scales smaller than $\alpha$, an a priori fixed spatial scale.
We prove the global well-posedness of weak $H^1$ solutions for the case of no-slip boundary conditions in three dimensions, generalizing the periodic-box results of [8]. We make use of the new formulation of the LANS-$\alpha$ equations on bounded domains given in [20] and [14], which reveals the additional boundary conditions necessary to obtain well-posedness. The uniform estimates yield global attractors; the bound for the dimension of the global attractor in 3D exactly follows the periodic box case of [8]. In 2D, our bound is $\alpha$-independent and is similar to the bound for the global attractor for the 2D Navier-Stokes equations.
Abstract:
This paper is concerned with the boundary layers that arise in solutions of a nonlinear hyperbolic system of conservation laws in presence of vanishing diffusion. We consider self-similar solutions of the Riemann problem in a half-space, following a pioneering idea by Dafermos for the standard Riemann problem. The system is strictly hyperbolic but no assumption of genuine nonlinearity is made; moreover, the boundary is possibly characteristic, that is, the wave speeds do not have a specific sign near the (stationary) boundary.
First, we generalize a technique due to Tzavaras and show that the boundary Riemann problem with diffusion admits a family of continuous solutions that remain uniformly bounded in the total variation norm. Careful estimates are necessary to cope with waves that collapse at the boundary and generate the boundary layer.
Second, we prove the convergence of these continuous solutions toward weak solutions of the Riemann problem when the diffusion parameter approaches zero. Following Dubois and LeFloch, we formulate the boundary condition in a weak form, based on a set of admissible boundary traces. Following Part I of this work, we identify and rigorously analyze the boundary set associated with the zero-diffusion method. In particular, our analysis fully justifies the use of the scaling $1/\varepsilon$ near the boundary (where $\varepsilon$ is the diffusion parameter), even in the characteristic case as advocated in Part I by the authors.
Abstract:
The carbonate system is an important reaction system in natural waters because it plays the role of a buffer, regulating the pH of the water. We present a global existence result for a system of partial differential equations that can be used to model the combined dynamics of diffusion, advection, and the reaction kinetics of the carbonate system.
Abstract:
We propose a method to investigate the structure of positive radial solutions to semilinear elliptic problems with various boundary conditions. It is already shown that the boundary value problems can be reduced to a canonical form by a suitable change of variables. We show structure theorems for canonical forms of equations with power nonlinearities and various boundary conditions. By using these theorems, it is possible to study the properties of radial solutions of semilinear elliptic equations in a systematic way, and to clarify the previously unknown structure of various equations.
Abstract:
We give a new proof based on Fourier Transform of the classical Glassey and Strauss [6] global existence result for the 3D relativistic Vlasov-Maxwell system, under the assumption of compactly supported particle densities. Though our proof is not substantially shorter than that of [6], we believe it adds a new perspective to the problem. In particular the proof is based on three main observations, see Facts 1-3 following the statement of Theorem 1.4, which are of independent interest.
Abstract:
In this paper we prove a fundamental estimate for the weak solution of a degenerate elliptic system: $\nabla\times [\rho(x)\nabla\times H]=F$, $\nabla\cdot H=0$ in a bounded domain in $R^3$, where $\rho(x)$ is only assumed to be in $L^{\infty}$ with a positive lower bound. This system is the steady-state of Maxwell’s system for the evolution of a magnetic field $H$ under the influence of an external force $F$, where $\rho(x)$ represents the resistivity of the conductive material. By using Campanato-type techniques, we show that the weak solution to the system is Hölder continuous, which is optimal under the assumption. This result solves the regularity problem for the system under the minimum assumption on the coefficient. Some applications arising in inductive heating are presented.
|
I'm asking about the answer here:
I didn't understand the best answer here
Choose edge $e \in T_1 \mathop{\Delta} T_2$ with $w(e) = \min W$, that is $e$ is an edge that occurs in only one of the trees and has minimum disagreeing weight. Such an edge, that is in particular $e \in T_1 \mathop{\Delta} T_2$, always exists: clearly, not all edges of weight $\min W$ can be in both trees, otherwise $\min W \notin W$. W.l.o.g. let $e \in T_1$ and assume $T_1$ has more edges of weight $\min W$ than $T_2$.
First of all, what does minimum disagreeing weight mean?
assume $T_1$ has more edges of weight $\min W$ than $T_2$.
Does the above line mean that $T_1$ has multiple edges with weight $\min W$?
Now consider all edges in $T_2$ that are also in the cut $C_{T_1}(e)$ that is induced by $e$ in $T_1$. If there is an edge $e'$ in there that has the same weight as $e$, update $T_1$ by using $e'$ instead of $e$; note that the new tree is still a minimal spanning tree with the same edge-weight multiset as $T_1$. We iterate this argument, shrinking $W$ by two elements and thereby removing one edge from the set of candidates for $e$ in every step. Therefore, we get after finitely many steps to a setting where all edges in $T_2 \cap C_{T_1}(e)$ (where $T_1$ is the updated version) have weights other than $g(e)$.
This paragraph seems to be a problem for me. So, we remove edges with weight $w(e)$, which is also $\min W$, and replace them with $e'$. Do we do that for every edge of weight $w(e)$ present in $T_1$?
Therefore, we get after finitely many steps to a setting where all edges in $T_2 \cap C_{T_1}(e)$ (where $T_1$ is the updated version) have weights other than $g(e)$.
I didn't understand this line at all. What is supposed to be $g(e)$?
Now we can always choose $e' \in C_{T_1}(e) \cap T_2$ such that we can swap $e$ and $e'$¹, that is we can create a new spanning tree
$\qquad \displaystyle T_3 = \begin{cases} (T_1 \setminus \{e\}) \cup \{e'\} &, w(e') \lt w(e) \\[.5em] (T_2 \setminus \{e'\}) \cup \{e\} &, w(e') \gt w(e) \end{cases}$
which has smaller weight than $T_1$ and $T_2$; this contradicts the choice of $T_1,T_2$ as minimal spanning trees. Therefore, $W_1 = W_2$.
Why is $w(e') \lt w(e)$ here? Isn't $e'$ the same as $e$? And why does this contradict $T_1,T_2$ being minimal spanning trees?
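An empirical illustration of the claim under discussion (added here, not part of the original thread): two minimum spanning trees built with different tie-breaking can differ as edge sets, but they always have the same multiset of edge weights. The graph below is a made-up example.

```python
# Kruskal's algorithm; the caller controls tie-breaking via the edge order.
def kruskal(n, edges):
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    tree = []
    for w, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((w, u, v))
    return tree

# A 4-cycle with two weight-1 and two weight-2 edges: several MSTs exist.
edges = [(1, 0, 1), (2, 1, 2), (1, 2, 3), (2, 3, 0)]
t1 = kruskal(4, sorted(edges))
t2 = kruskal(4, sorted(edges, key=lambda e: (e[0], -e[1])))

assert set(t1) != set(t2)  # genuinely different spanning trees
assert sorted(w for w, _, _ in t1) == sorted(w for w, _, _ in t2)
```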
|
An anisotropic harmonic oscillator in three dimensions is given by the potential:
$$V(X,Y,Z)=\frac{1}{2}m\omega^2\bigg[\bigg(1+\frac{2\lambda}{3}\bigg)(X^2+Y^2)+\bigg(1-\frac{4\lambda}{3}\bigg)Z^2\bigg].$$
The energy eigenvalues of the stationary states are:
$$E_{n_x,n_y,n_z}=(n_x+n_y+1)\hbar\omega\sqrt{1+\frac{2\lambda}{3}}+(n_z+\frac{1}{2})\hbar\omega\sqrt{1-\frac{4\lambda}{3}}.$$
It is obvious that the potential has a defined parity, because under a change $\boldsymbol{r}\rightarrow-\boldsymbol{r}$ it remains the same function. How does this defined parity affect the eigenvalues of $H$?
Because there is no explicit dependence on the coordinates in $E_{n_x,n_y,n_z}$, it seems like there is nothing to say about the parity of the eigenvalues. Still, the problem I am posting is from Cohen-Tannoudji's book on quantum mechanics, and he asks to discuss the parity and the degree of degeneracy of the ground state for this harmonic oscillator.
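A quick numerical enumeration of the spectrum (added for illustration; the value of $\lambda$ is arbitrary, not from the book) shows what the degeneracy question is after: only the sum $n_x+n_y$ enters the energy, so the level with $n_x+n_y=N$ is $(N+1)$-fold degenerate for generic $\lambda$, and the ground state is non-degenerate.

```python
# Enumerate E(nx,ny,nz) in units of hbar*omega and count degeneracies.
from math import sqrt, isclose

lam = 0.1  # illustrative; |lambda| small enough that both roots are real
wxy = sqrt(1 + 2 * lam / 3)
wz = sqrt(1 - 4 * lam / 3)

def E(nx, ny, nz):
    return (nx + ny + 1) * wxy + (nz + 0.5) * wz

levels = {}
for nx in range(4):
    for ny in range(4):
        for nz in range(4):
            key = round(E(nx, ny, nz), 9)
            levels[key] = levels.get(key, 0) + 1

ground = min(levels)
assert levels[ground] == 1               # non-degenerate ground state
assert isclose(ground, wxy + 0.5 * wz)   # E(0,0,0)
assert levels[round(E(1, 0, 0), 9)] == 2 # (1,0,.) and (0,1,.) coincide
```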
|
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
@ARTICLE{GitmanHamkinsKaragila:KM-set-theory-does-not-prove-the-class-Fodor-theorem, author = {Victoria Gitman and Joel David Hamkins and Asaf Karagila}, title = {Kelley-Morse set theory does not prove the class {F}odor theorem}, journal = {}, year = {}, volume = {}, number = {}, pages = {}, month = {}, note = {manuscript under review}, abstract = {}, keywords = {under-review}, eprint = {1904.04190}, archivePrefix = {arXiv}, primaryClass = {math.LO}, source = {}, doi = {}, url = {http://wp.me/p5M0LV-1RD}, }
Abstract. We show that Kelley-Morse (KM) set theory does not prove the class Fodor principle, the assertion that every regressive class function $F:S\to\newcommand\Ord{\text{Ord}}\Ord$ defined on a stationary class $S$ is constant on a stationary subclass. Indeed, it is relatively consistent with KM for any infinite $\lambda$ with $\omega\leq\lambda\leq\Ord$ that there is a class function $F:\Ord\to\lambda$ that is not constant on any stationary class. Strikingly, it is consistent with KM that there is a class $A\subseteq\omega\times\Ord$, such that each section $A_n=\{\alpha\mid (n,\alpha)\in A\}$ contains a class club, but $\bigcap_n A_n$ is empty. Consequently, it is relatively consistent with KM that the class club filter is not $\sigma$-closed.
The class Fodor principle is the assertion that every regressive class function $F:S\to\Ord$ defined on a stationary class $S$ is constant on a stationary subclass of $S$. This statement can be expressed in the usual second-order language of set theory, and the principle can therefore be sensibly considered in the context of any of the various second-order set-theoretic systems, such as Gödel-Bernays (GBC) set theory or Kelley-Morse (KM) set theory. Just as with the classical Fodor’s lemma in first-order set theory, the class Fodor principle is equivalent, over a weak base theory, to the assertion that the class club filter is normal. We shall investigate the strength of the class Fodor principle and try to find its place within the natural hierarchy of second-order set theories. We shall also define and study weaker versions of the class Fodor principle.
If one tries to prove the class Fodor principle by adapting one of the classical proofs of the first-order Fodor’s lemma, then one inevitably finds oneself needing to appeal to a certain second-order class-choice principle, which goes beyond the axiom of choice and the global choice principle, but which is not available in Kelley-Morse set theory. For example, in one standard proof, we would want for a given $\Ord$-indexed sequence of non-stationary classes to be able to choose for each member of it a class club that it misses. This would be an instance of class-choice, since we seek to choose classes here, rather than sets. The class choice principle $\text{CC}(\Pi^0_1)$, it turns out, is sufficient for us to make these choices, for this principle states that if every ordinal $\alpha$ admits a class $A$ witnessing a $\Pi^0_1$-assertion $\varphi(\alpha,A)$, allowing class parameters, then there is a single class $B\subseteq \Ord\times V$, whose slices $B_\alpha$ witness $\varphi(\alpha,B_\alpha)$; and the property of being a class club avoiding a given class is $\Pi^0_1$ expressible.
Thus, the class Fodor principle, and consequently also the normality of the class club filter, is provable in the relatively weak second-order set theory $\text{GBC}+\text{CC}(\Pi^0_1)$. This theory is known to be weaker in consistency strength than the theory $\text{GBC}+\Pi^1_1$-comprehension, which is itself strictly weaker in consistency strength than KM.
But meanwhile, although the class choice principle is weak in consistency strength, it is not actually provable in KM; indeed, even the weak fragment $\text{CC}(\Pi^0_1)$ is not provable in KM. Those results were proved several years ago by the first two authors, but they can now be seen as consequences of the main result of this article (see Corollary 15). In light of that result, however, one should perhaps not have expected to be able to prove the class Fodor principle in KM.
Indeed, it follows similarly from arguments of the third author in his dissertation that if $\kappa$ is an inaccessible cardinal, then there is a forcing extension $V[G]$ with a symmetric submodel $M$ such that $V_\kappa^M=V_\kappa$, which implies that $\mathcal M=(V_\kappa,\in, V^M_{\kappa+1})$ is a model of Kelley-Morse, and in $\mathcal M$, the class Fodor principle fails in a very strong sense.
In this article, adapting the ideas of Karagila to the second-order set-theoretic context and using similar methods as in Gitman and Hamkins’s previous work on KM, we shall prove that every model of KM has an extension in which the class Fodor principle fails in that strong sense: there can be a class function $F:\Ord\to\omega$, which is not constant on any stationary class. In particular, in these models, the class club filter is not $\sigma$-closed: there is a class $B\subseteq\omega\times\Ord$, each of whose vertical slices $B_n$ contains a class club, but $\bigcap B_n$ is empty.
Main Theorem. Kelley-Morse set theory KM, if consistent, does not prove the class Fodor principle. Indeed, if there is a model of KM, then there is a model of KM with a class function $F:\Ord\to \omega$, which is not constant on any stationary class; in this model, therefore, the class club filter is not $\sigma$-closed.
We shall also investigate various weak versions of the class Fodor principle.
Definition. For a cardinal $\kappa$, the class $\kappa$-Fodor principle asserts that every class function $F:S\to\kappa$ defined on a stationary class $S\subseteq\Ord$ is constant on a stationary subclass of $S$. The class ${<}\Ord$-Fodor principle is the assertion that the $\kappa$-class Fodor principle holds for every cardinal $\kappa$. The bounded class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is bounded on a stationary subclass of $S$. The very weak class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is constant on an unbounded subclass of $S$.
We shall separate these principles as follows.
Theorem. Suppose KM is consistent.
1. There is a model of KM in which the class Fodor principle fails, but the class ${<}\Ord$-Fodor principle holds.
2. There is a model of KM in which the class $\omega$-Fodor principle fails, but the bounded class Fodor principle holds.
3. There is a model of KM in which the class $\omega$-Fodor principle holds, but the bounded class Fodor principle fails.
4. $\text{GB}^-$ proves the very weak class Fodor principle.
Finally, we show that the class Fodor principle can neither be created nor destroyed by set forcing.
Theorem. The class Fodor principle is invariant by set forcing over models of $\text{GBC}^-$. That is, it holds in an extension if and only if it holds in the ground model.
Let us conclude this brief introduction by mentioning the following easy negative instance of the class Fodor principle for certain GBC models. This argument seems to be a part of set-theoretic folklore. Namely, consider an $\omega$-standard model of GBC set theory $M$ having no $V_\kappa^M$ that is a model of ZFC. A minimal transitive model of ZFC, for example, has this property. Inside $M$, let $F(\kappa)$ be the least $n$ such that $V_\kappa^M$ fails to satisfy $\Sigma_n$-collection. This is a definable class function $F:\Ord^M\to\omega$ in $M$, but it cannot be constant on any stationary class in $M$, because by the reflection theorem there is a class club of cardinals $\kappa$ such that $V_\kappa^M$ satisfies $\Sigma_n$-collection.
Read more by going to the full article:
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
|
You have the concept slightly wrong.
This part is mostly correct:
In Reinforcement learning a random-initialized network will first "play"/"do" a sequence of moves in an environment. (In this case a Game). After that, it will receive a reward r.
Technically neural networks are not required in RL, and it is really worth studying some simple systems that don't need them. It will make everything much clearer.
A reward $r$ can be received on every time step. However, some environments will only have a single reward at the end for success or failure for a whole episode - e.g. an instance of a game like chess where a player wins or loses.
This part is where things go a bit off track:
Furthermore a q-Value gets defined by the [developer]. This reward times the q-Value q to the power of the position n of the action will be [fed] back using BP.
Q values are one type of data that can be calculated for an agent acting in a Markov Decision Process. They are also called "action values" and they are not usually defined by a developer. The q value, if correct, should return the expected future sum of rewards from following a current policy. One way of writing this is:
$$q(s,a) = \mathbb{E}_{\pi}[\sum_{k=0}^{\infty}\gamma^k R_{t+k+1}| S_t=s, A_t=a]$$
In natural language, the q value for state s and action a is the expected value (when following the given policy) of the discounted sum of rewards, starting from the given state and action. The discount factor, $\gamma$, can take any value from $0$ up to $1$, but only strictly episodic problems (which always terminate) should use the value $1$.
A developer does not get to define that (except they might get to choose the reward system and the value of $\gamma$). Instead, they need to implement something that estimates the value of $q(s,a)$ based on what the agent has experienced. There are a few different algorithms that can do this. A popular one is called Q learning.
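The answer names Q learning without showing it; here is a minimal tabular sketch (the toy environment and hyperparameters are illustrative, not from the question). Each step moves $Q(s,a)$ toward the bootstrapped target $r + \gamma \max_{a'} Q(s',a')$.

```python
# One tabular Q-learning step, plus a toy demonstration of convergence.
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.5, gamma=0.9):
    target = r + gamma * max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

Q = defaultdict(float)
# Toy 2-state chain: action 1 in state 0 moves to state 1 with reward 1,
# and state 1 is terminal (its Q values stay 0).
for _ in range(50):
    q_update(Q, s=0, a=1, r=1.0, s_next=1, actions=[0, 1])

assert abs(Q[(0, 1)] - 1.0) < 1e-6  # estimate converges to the true value
```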
Regarding "[fed] back using BP", this is correct if you are using a neural network. Typically in DQN (Q learning with neural networks), this just consists of creating a small sample of training data from recent experience and training the neural network almost identically to supervised learning.
So how do I know how slight changes in $\vec{w}$ are changing $rq^n$?
Definitely don't use $rq^n$ - there is no purpose to that quantity in RL. Instead for value-based RL, you are mostly interested in your estimate for Q value. This might be written $\hat{q}(s,a,\vec{w}) \approx q(s,a)$.
However, in general your question stands. If you have implemented a neural network to learn q values, how do you know if it is working?
There are actually two parts to this problem: whether the agent is improving at the task, and whether the network is accurately learning the q values. What you need to do is measure, and maybe plot, some relevant quantities.
For the first question, you would typically plot the total reward that the agent gets each episode. This will be noisy, so it is a good idea to smooth it out by taking some kind of moving average (e.g. average total reward over last 100 episodes).
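A smoothed reward curve can be computed with a simple running window (a sketch; the window of 100 episodes matches the illustrative figure mentioned above):

```python
def moving_average(rewards, window=100):
    """Average of the last `window` episode rewards at each episode index."""
    out = []
    for i in range(len(rewards)):
        lo = max(0, i - window + 1)
        out.append(sum(rewards[lo:i + 1]) / (i + 1 - lo))
    return out

assert moving_average([1, 2, 3, 4], window=2) == [1.0, 1.5, 2.5, 3.5]
```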
For the second question, it is normal to plot some loss function of the network, just like supervised learning. Typically this is Mean Squared Error loss, as the network is learning a regression to predict q values given $s$ and $a$. You can compare observed sums of discounted reward (aka "return" or "utility") with the earlier predicted ones, and take the error function. You need to get some measure of a "true" value of q - usually a noisy sample taken during training or testing, and measure loss. For MSE that might be
$$J(\vec{w}) = \frac{1}{2|D|}\sum_{(s,a) \in D}(\hat{q}(s,a, \vec{w}) - q(s,a))^2$$
Where $D$ is some dataset you have put together of $s$, $a$ and $q(s,a)$ measurements to test with. If this looks familiar to you from supervised-learning MSE loss, that's because it is essentially the same thing; the difference is just how you go about collecting the data.
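The displayed loss translates directly to code. In this sketch the estimator `q_hat` and the dataset are stand-ins you would replace with your network and your collected $(s, a, q)$ samples:

```python
# J(w) = 1/(2|D|) * sum over (s,a) in D of (q_hat(s,a) - q(s,a))^2,
# term-by-term the formula above.
def mse_loss(q_hat, dataset):
    """dataset: iterable of (s, a, q_target) triples."""
    n = 0
    total = 0.0
    for s, a, q_target in dataset:
        total += (q_hat(s, a) - q_target) ** 2
        n += 1
    return total / (2 * n)

q_hat = lambda s, a: s + a            # stand-in estimator
data = [(0, 1, 1.0), (1, 1, 3.0)]     # second sample is off by 1
assert mse_loss(q_hat, data) == 0.25  # (0^2 + 1^2) / (2 * 2)
```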
You may expect the loss function for $\hat{q}$ in Q learning to be somewhat unstable as the agent learns. That's because in Q learning, the policy is updating at the same time as the estimates are improving. Which makes the estimates out-of-date. However, it should still be possible to see a reduction in error as learning progresses. If it becomes stable at a relatively low value compared to initially, then the agent has probably learned all that it can - although sometimes new discoveries by the agent can open up more improvements, even late in training, and throw the error function out again.
Note that a low value of the error function does not mean you have an optimal agent. It means that the value function estimate is good for how the agent is currently behaving. In turn that means the agent cannot make further improvements without new and different experience.
|
Parentheses and brackets are very common in mathematical formulas. You can easily control the size and style of brackets in LaTeX; this article explains how.
Contents
Here's how to type some common math braces and parentheses in LaTeX:
Type LaTeX markup Renders as Parentheses; round brackets
(x+y)
\((x+y)\) Brackets; square brackets
[x+y]
\([x+y]\) Braces; curly brackets
\{ x+y \}
\(\{ x+y \}\) Angle brackets
\langle x+y \rangle
\(\langle x+y\rangle\) Pipes; vertical bars
|x+y|
\(\displaystyle| x+y |\) Double pipes
\|x+y\|
\(\| x+y \|\)
The size of brackets and parentheses can be manually set, or they can be resized dynamically in your document, as shown in the next example:
\[ F = G \left( \frac{m_1 m_2}{r^2} \right) \]
Notice that to insert the parentheses or brackets, the \left and \right commands are used. Even if you are using only one bracket, both commands are mandatory.
\left and \right can dynamically adjust the size, as shown by the next example:
\[ \left[ \frac{ N } { \left( \frac{L}{p} \right) - (m+n) } \right] \]
When writing multi-line equations with the align, align* or aligned environments, the \left and \right commands must be balanced on each line and on the same side of &. Therefore the following code snippet will fail with errors:
\[ y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \\ & \quad + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \]
The solution is to use "invisible" brackets to balance things out, i.e. adding a \right. at the end of the first line, and a \left. at the start of the second line after &:
\[ y = 1 + & \left( \frac{1}{x} + \frac{1}{x^2} + \frac{1}{x^3} + \ldots \right. \\ & \quad \left. + \frac{1}{x^{n-1}} + \frac{1}{x^n} \right) \]
The size of the brackets can be controlled explicitly
The commands \big, \Big, \bigg and \Bigg establish increasingly larger fixed sizes for any delimiter. For a complete list of parentheses and sizes see the reference guide.
LaTeX markup Renders as
\big( \Big( \bigg( \Bigg(
\big] \Big] \bigg] \Bigg]
\big\{ \Big\{ \bigg\{ \Bigg\{
\big \langle \Big \langle \bigg \langle \Bigg \langle
\big \rangle \Big \rangle \bigg \rangle \Bigg \rangle
\big| \Big| \bigg| \Bigg|
\(\displaystyle\big| \; \Big| \; \bigg| \; \Bigg|\)
\big\| \Big\| \bigg\| \Bigg\|
\(\displaystyle\big\| \; \Big\| \; \bigg\| \; \Bigg\|\)
|
A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, but I don't know what to do after that. The velocity would then be $v(t) = 3t^2-12t+9$. How can I find the intervals?
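One way to finish (a sketch added here, not part of the original question): factor $v(t)=3t^2-12t+9=3(t-1)(t-3)$, so $v$ vanishes at $t=1$ and $t=3$ and, since the parabola opens upward, is negative exactly on $(1,3)$. A quick sign check:

```python
# v(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3): roots at t = 1 and t = 3;
# the particle moves left precisely where v(t) < 0, i.e. on (1, 3).
def v(t):
    return 3 * t**2 - 12 * t + 9

assert v(1) == 0 and v(3) == 0
assert v(2) < 0                # inside (1, 3): moving left
assert v(0) > 0 and v(4) > 0   # outside: moving right
```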
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that $$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$ but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering If it is easier to factor in a non-ufd then it is to factor in a ufd.I can come up with arguments for that , but I also have arguments in the opposite direction.For instance : It should be easier to factor When there are more possibilities ( multiple factorizations in a non-ufd...
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer $O(n-1)$ at a point.
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
The axioms of Kripke-Platek set theory
Kripke-Platek set theory ($\text{KP}$) is a collection of axioms that is considerably weaker than ZFC. The formal language used to express each axiom is first-order with equality ($=$) together with one binary relation symbol, $\in$, intended to denote set membership.
$L_\alpha$ is a model of $\mathrm{KP}$ for admissible $\alpha$.
Axiom of Extensionality
Sets are determined uniquely by their elements. This is expressed formally as $$ \forall x \forall y \big(\forall z (z\in x\leftrightarrow z\in y)\rightarrow x=y\big).$$
The “$\rightarrow$” can be replaced by “$\leftrightarrow$”, but the $\leftarrow$ direction is a theorem of logic.
Axiom of Null Set
There exists some set. In fact, there is a set which contains no members. This is expressed formally as $$ \exists x \forall y (y\not\in x).$$
Such an $x$ is unique by extensionality and this set is denoted by $\emptyset$.
Axiom of Pairing
For any two sets $x$ and $y$ (not necessarily distinct) there is a further set $z$ whose members are exactly the sets $x$ and $y$.
$$ \forall x \forall y \exists z \forall w \big(w\in z\leftrightarrow (w=x\vee w=y)\big).$$
Such a $z$ is unique by extensionality and is denoted as $\{x,y\}$.
Axiom of Union
For any set $x$ there is a further set $y$ whose members are exactly all the members of the members of $x$. That is, the union of all the members of a set exists. This is expressed formally as
$$\forall x \exists y \forall z \big(z\in y \leftrightarrow \exists w (w\in x \wedge z\in w)\big).$$
Such a $y$ is unique by extensionality and is written as $y = \bigcup x$.
Axiom Schema of Foundation
Suppose that a given property $P$ holds for some set $x$. Then there is an $\in$-minimal set for which $P$ holds. In more detail, given a formula $\varphi(x_1,\dots,x_n,x)$, the following is an instance of the foundation schema: $$\forall x_1, \ldots, x_n \big[ \exists x \varphi(x_1, \ldots, x_n, x) \rightarrow \exists y \big( \varphi(x_1, \ldots, x_n, y) \wedge \forall z \in y \neg \varphi(x_1, \ldots, x_n, z) \big) \big]$$
Axiom Schema of $\Sigma_0$-Separation
For any set $a$ and any $\Sigma_0$-predicate $P(x)$ written in the language of set theory, the set $\{x\in a: P(x)\}$ exists. In more detail, given any $\Sigma_0$-formula $\varphi$ with free variables $x_1,x_2,\dots,x_n$, the following is an instance of the $\Sigma_0$-separation schema: $$ \forall a \forall x_1 \forall x_2\dots \forall x_n \exists y \forall z \big(z\in y \leftrightarrow (z\in a \wedge \varphi(x_1,x_2,\dots,x_n,z))\big) $$
Such a $y$ is unique by extensionality and is written (for fixed sets $a, x_1\dots, x_n$) $y=\{z\in a: \varphi(x_1,x_2,\dots,x_n,z)\}$.
Axiom Schema of $\Sigma_0$-Collection
If $a$ is a set and for all $x\in a$ there is some $y$ such that $(x,y)$ satisfies a given $\Sigma_0$-property, then there is some set $b$ such that for all $x \in a$ there is some $y \in b$ such that $(x,y)$ satisfies that property. In more detail, given a $\Sigma_0$-formula $\varphi(x_1,\dots,x_n,x,y)$ the following is an instance of the $\Sigma_0$-collection schema: $$ \forall a \forall x_1 \dots \forall x_n \big[\big( \forall x\in a \exists y \varphi(x_1,\dots,x_n,x,y)\big)\rightarrow \big(\exists b \forall x \in a \exists y \in b \varphi(x_1, \ldots, x_n, x,y) \big) \big].$$
Axiom of Infinity
Some authors include the axiom of infinity in Kripke-Platek set theory, which states that there is an inductive set – a canonical example of an infinite set. More precisely: $$ \exists x \big( \emptyset \in x \wedge \forall y \in x (y \cup \{y \} \in x) \big).$$ The axiom of infinity combined with an instance of $\Sigma_0$-separation implies the axiom of null set, so that it can be dropped if one assumes the axiom of infinity.
|
I'm trying to solve a problem but I suspect my solution is incorrect. I'm hoping someone can verify and perhaps give me an idea about how to solve it correctly.
We have three fair coins and each is tossed until the first head appears on each coin. (So once one head appears on one of the three coins, we stop tossing that particular coin.) When we are finished, a total of $6$ tails have been obtained. What is the expected value of the number of tails on the first coin?
This is how I have tried to solve it:
If a total of $6$ tails have appeared the possible number of tails obtained with the first coin is 0, 1, 2, 3, 4, 5 or 6. I define a random variable $X \sim \mathrm{Geometric}(1/2)$ so that the value of $X$ is the number of tails (failures) until the first head (success) is obtained.
I know that the PMF of a random variable with a geometric distribution is $(1-p)^k\, p$, and in this case $p = 1/2$.
From here I went on to compute the expected value as $$\mathrm{E}(X) = \sum_{k=0}^6 k\,\mathrm{P}(X=k) = \sum_{k=0}^6 k\left(\frac{1}{2}\right)^k\cdot\frac{1}{2}$$
and reached the result $15/16.$
As I said, I'm pretty certain this is not the right answer, but I'm not sure why and how to solve it correctly, so any help is appreciated.
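As a sanity check on any proposed answer, the conditional expectation can be computed exactly by enumeration (a sketch; it assumes the intended reading is conditioning on the event that the three coins produced six tails in total). Every triple $(t_1,t_2,t_3)$ with $t_1+t_2+t_3=6$ has the same joint probability $(1/2)^9$, so the conditional distribution is uniform over the $28$ triples, and symmetry already suggests the answer should be $2$:

```python
from fractions import Fraction

# Exact conditional expectation E[T1 | T1+T2+T3 = 6], where each Ti is the
# number of tails before the first head on a fair coin: P(Ti = k) = (1/2)^(k+1).
half = Fraction(1, 2)

def p(k):  # P(Ti = k)
    return half ** (k + 1)

total = Fraction(0)     # P(T1 + T2 + T3 = 6)
weighted = Fraction(0)  # sum over outcomes of t1 * P(outcome)
for t1 in range(7):
    for t2 in range(7 - t1):
        t3 = 6 - t1 - t2
        joint = p(t1) * p(t2) * p(t3)
        total += joint
        weighted += t1 * joint

expectation = weighted / total
print(expectation)  # 2
```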
|
I need to prove the fact that a quantum channel (a superoperator) cannot increase the Holevo information of an ensemble $\epsilon = \{\rho_x, p_x\}$. Mathematically expressed I need to prove
$$\begin{align} \chi(\$(\epsilon)) \leq \chi(\epsilon) \end{align} \tag{1}\label{1}$$
where $\$$ represents a quantum channel (a positive trace preserving map) which works on the ensemble as $\$ \epsilon = \{ \$\rho_x, p_x \}$. This needs to be done with the property that a superoperator $\$$ cannot increase relative entropy (some may remember my previous question about this):
$$\begin{align} S(\$\rho || \$\sigma) \leq S(\rho || \sigma) \end{align}\tag{2}\label{2}$$
with relative entropy defined as $$S(\rho || \sigma) = \mathrm{tr}(\rho \log \rho - \rho \log \sigma).$$
The Holevo information is defined as (if someone would not know)
$$\begin{align} \chi(\epsilon) = S(\sum_x p_x \rho_x ) - \sum_x p_x S(\rho_x) \end{align}.$$
Does anyone know how we get from equation $\eqref{2}$ to equation $\eqref{1}$? Maybe what density operators to fill in $\eqref{2}$? Or how do we start such a proof?
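One standard route (a sketch, not necessarily the intended one) uses the identity $\chi(\epsilon)=\sum_x p_x S(\rho_x\|\bar\rho)$ with $\bar\rho=\sum_x p_x\rho_x$: applying the monotonicity $\eqref{2}$ to each pair $(\rho_x,\bar\rho)$ and using linearity of $\$$ yields $\eqref{1}$. Both the identity and the monotonicity of $\chi$ can be checked numerically; in this NumPy sketch the random qubit states and the depolarizing channel are illustrative choices, not part of the question:

```python
import numpy as np

def logm(rho):  # matrix log of a positive-definite Hermitian matrix
    w, v = np.linalg.eigh(rho)
    return v @ np.diag(np.log(w)) @ v.conj().T

def rel_entropy(rho, sigma):  # S(rho || sigma) = tr(rho log rho - rho log sigma)
    return np.real(np.trace(rho @ (logm(rho) - logm(sigma))))

def entropy(rho):  # von Neumann entropy
    w = np.linalg.eigvalsh(rho)
    return -np.sum(w * np.log(w))

def random_qubit(rng):
    # full-rank random density matrix (so the logs are well defined)
    A = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
    rho = A @ A.conj().T
    return rho / np.trace(rho)

def chi(ps, rhos):  # Holevo information of the ensemble {rho_x, p_x}
    rho_bar = sum(p * r for p, r in zip(ps, rhos))
    return entropy(rho_bar) - sum(p * entropy(r) for p, r in zip(ps, rhos))

rng = np.random.default_rng(1)
ps = [0.3, 0.7]
rhos = [random_qubit(rng) for _ in ps]
rho_bar = sum(p * r for p, r in zip(ps, rhos))

# chi equals the average relative entropy to the ensemble average
assert np.isclose(chi(ps, rhos),
                  sum(p * rel_entropy(r, rho_bar) for p, r in zip(ps, rhos)))

# an (illustrative) depolarizing channel can only decrease chi
def depolarize(rho, lam=0.5):
    return (1 - lam) * rho + lam * np.eye(2) / 2

assert chi(ps, [depolarize(r) for r in rhos]) <= chi(ps, rhos) + 1e-12
```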
|
I'm dealing with a CGARCH-M model in which the long- and short-run volatility components have different effects on returns. Here are its mean and variance equations:
Mean equation:
$$ y_t = \alpha + \beta x_t + \gamma_1 \sqrt{\sigma_{t}^2-q_{t}}+\gamma_2 \sqrt{q_{t}} +\varepsilon_{t} $$
Variance equation:
$$ \sigma_{t}^2 = q_{t}+ \alpha_1 (\varepsilon_{t-1}^2-q_{t-1})+\varphi (\varepsilon_{t-1}^2-q_{t-1}) D_{t-1} + \beta_1 (\sigma_{t-1}^2-q_{t-1}) $$
$$ q_t = V + \rho (q_{t-1}-V) + \theta (\varepsilon_{t-1}^2-\sigma_{t-1}^2) $$
How should I estimate such models?
|
What are the integrals of motion of a system with the following Lagrangian?
$$L=a\dot{\phi_1}^2+b\dot{\phi_2}^2+c\cos(\phi_1-\phi_2),$$
where $a,b,c$ are constants, $\phi_1,\phi_2$ are angles, and $\dot{\phi_i}$ denotes differentiation with respect to time.
I believe the Hamiltonian is conserved, but are there any more?
Perhaps there is an isotropy of space here, since $\phi_1,\phi_2$ appear only through the difference $\phi_1-\phi_2$? So angular momentum?
Are the above 2 right? Are there any more?
Thanks.
ADDED: "integrals of motion" are sometimes referred to elsewhere as "constants of motions" or "conserved quantities".
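To check candidates numerically, one can integrate the equations of motion $2a\ddot\phi_1=-c\sin(\phi_1-\phi_2)$ and $2b\ddot\phi_2=c\sin(\phi_1-\phi_2)$ and watch the candidate invariants: the "angular momentum" $2a\dot\phi_1+2b\dot\phi_2$ (from the shift symmetry $\phi_i\mapsto\phi_i+\mathrm{const}$) and the energy $a\dot\phi_1^2+b\dot\phi_2^2-c\cos(\phi_1-\phi_2)$. A sketch with made-up constants and initial conditions:

```python
import math

# RK4 integration of the two-angle system; check that the total momentum
# 2*a*phi1' + 2*b*phi2' and the energy stay constant along the trajectory.
a, b, c = 1.0, 2.0, 0.5

def deriv(state):
    p1, p2, v1, v2 = state
    s = math.sin(p1 - p2)
    return (v1, v2, -c * s / (2 * a), c * s / (2 * b))

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(tuple(x + h / 2 * k for x, k in zip(state, k1)))
    k3 = deriv(tuple(x + h / 2 * k for x, k in zip(state, k2)))
    k4 = deriv(tuple(x + h * k for x, k in zip(state, k3)))
    return tuple(x + h / 6 * (u + 2 * v + 2 * w + z)
                 for x, u, v, w, z in zip(state, k1, k2, k3, k4))

def momentum(state):
    _, _, v1, v2 = state
    return 2 * a * v1 + 2 * b * v2

def energy(state):
    p1, p2, v1, v2 = state
    return a * v1**2 + b * v2**2 - c * math.cos(p1 - p2)

state = (0.3, -0.1, 0.2, 0.0)  # illustrative initial condition
P0, E0 = momentum(state), energy(state)
for _ in range(10000):  # integrate to t = 10 with h = 1e-3
    state = rk4_step(state, 1e-3)
assert abs(momentum(state) - P0) < 1e-9
assert abs(energy(state) - E0) < 1e-9
```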
|
Let $A = C^\infty(S^1)$ be the ring of smooth functions on the circle (if you prefer, you can see it as the ring of smooth $2\pi$-periodic functions $\mathbb R \to \mathbb R$).
First, $A$ isn't Noetherian: the ideal $I_{\mathscr V(0)}$ of functions vanishing on a neighbourhood of $0$ isn't finitely generated.
But the maximal ideals of $A$ are exactly the $$\mathfrak m_p = \left\{ f \in A\, \Big |\, f(p) = 0 \right \},$$ for $p \in S^1$, which are generated by the two functions $(x,y) \mapsto x-x_p$ and $(x,y) \mapsto y - y_p$. (If you think of $A$ as a set of trigonometric functions, $x$ is $\cos$ and $y$ is $\sin$).
Proof of the various claims:
$I_{\mathscr V(0)}$ isn't f.g.: Suppose ad absurdum that $I_{\mathscr V(0)} = (f_1, \ldots, f_r)$, where each $f_i$ vanishes on a neighbourhood $V_i$ of $0$. Then any function of $(f_1, \ldots, f_r)$ vanishes on $V = \bigcap V_i$, which is a fixed neighbourhood of $0$. But it is easy to construct functions of $A$ vanishing on a neighbourhood of $0$ as small as desired (in particular, strictly smaller than $V$), a contradiction.

$\mathrm{Max}(A) = \left\{ \mathfrak m_p \, \Big | \, p \in S^1 \right\}$: Let $I$ be an ideal of $A$. We are going to prove that either $I$ is contained in some $\mathfrak m_p$ or $I = A$. The negation of “$I$ is contained in some $\mathfrak m_p$” is “for all $p \in S^1$, there is a function $f \in I$ s.t. $f(p) \neq 0$”. Since the set on which a function doesn't vanish is open and $S^1$ is compact, this implies the existence of finitely many functions $f_1, \ldots, f_r \in I$ such that $\forall p \in S^1, \exists i : f_i(p) \neq 0$. Then $f = f_1^2 + \cdots + f_r^2 \in I$ is everywhere nonzero, so it is invertible in $A$ and $I = A$.

$\mathfrak m_p = (x-x_p, y-y_p)$: The inclusion $\supseteq$ is clear. Let $f \in \mathfrak m_p$. By definition of a smooth function on a submanifold, $f$ is the restriction of a smooth function $F \in C^\infty(V)$ for some neighbourhood $V$ of $p$ in $\mathbb R^2$. Of course, $F$ still vanishes at $p$. The claim then follows from Hadamard's lemma.
PS: All this seems to indicate that $A$ has some strange (in particular non-f.g.) prime ideals. I must confess I cannot really understand what they are.
|
I have following problem: I have a sorted sequence of $N$ integers (assume they are monotonically increasing). I want to check whether there is any subsequence of length $\ge N/4$, such that consecutive elements of the subsequence all differ by the same value.
For example, in the sequence [3,4,5,8,12] there are two such subsequences: [3,4,5] (the difference is 1) and [4,8,12] (the difference is 4). Thus, the length of longest such subsequence is 3 for this example. Since $3 \ge 5/4$, the answer is yes, there is a subsequence of length $\ge N/4$ with the desired property.
In my real-life situation, the sequence is of length $N\approx 10^6$, and the elements are all 9-digit numbers. Is there an efficient algorithm to solve this problem?
My naive approach was to build the matrix of pairwise absolute differences between the numbers:
$$ \left( \begin{array}{ccccc} 0 & 1 & 2 & 5 & 9 \\ 1 & 0 & 1 & 4 & 8 \\ 2 & 1 & 0 & 3 & 7 \\ 5 & 4 & 3 & 0 & 4 \\ 9 & 8 & 7 & 4 & 0 \end{array} \right) $$
And then focus on top-right part and compute number of occurrences of each difference, so:
$$ \#(\text{diff-by-1}) = 2 \implies \text{3 numbers differ successively by 1}\\ \#(\text{diff-by-4}) = 2 \implies \text{3 numbers differ successively by 4} $$
This is very simple and very inefficient: it requires a lot of comparisons and does not scale at all; its running time is $\Theta(N^2)$. In my real-life scenario the sequence is ${\sim}10^6$ long, so this is too slow.
To give you a wider picture (as maybe there is a much better, e.g. probabilistic, approach to this problem): after the largest subsequence is found I want to compute the simple ratio
$$ r:=\frac{\text{largest sub-sequence length}}{\text{sequence length}} $$
and if $r$ is greater than some fixed value I want to raise an alarm (or do whatever I have to do ;-)).
Thanks for any help, references, pointers, etc.
BTW: here are things that I was/am looking at:
http://link.springer.com/article/10.1007/s00453-009-9376-2
http://en.wikipedia.org/wiki/Longest_increasing_subsequence_problem
http://en.wikipedia.org/wiki/Longest_common_subsequence_problem
http://en.wikipedia.org/wiki/Kalman_filter

Update: I was thinking a little bit more about it and started from the end: instead of computing all differences between numbers (the top-right corner of the matrix), I can derive a small $k$ value from the "fixed value" I mentioned at the end of the original question. For instance, if I am going to raise the alarm when 25% of all numbers are in some progression, I need to focus only on small "triangles" in the matrix, and the number of computations required is much smaller. When I add some sampling, it should be simple enough to implement at scale.

Update 2: Implemented @D.W.'s algorithm; sample run below:
11:51:06 ~$ time nodejs progression.js L: 694000000,694000002,694000006,694000007,694000009,694000010, 694000013,694000015,694000018,694000019,694000021,694000022,694000023, 694000026,694000028,694000030,694000034,694000036,694000038,694000040, 694000043,694000045,694000046,694000048,694000051,694000053,694000055, 694000057,694000060,694000061,694000063,694000067,694000069,694000072, 694000074,694000076,694000077,694000079,694000080,694000082,694000083, 694000084,694000086,694000090,694000091,694000093,694000095,694000099, 694000102,694000103,694000105,694000108,694000109,694000113,694000116, 694000118,694000122,694000125,694000128,694000131,694000134,694000137, 694000141,694000143,694000145,694000148,694000152,694000153,694000154, 694000157,694000160,694000162,694000163,694000166,694000170,694000173, 694000174,694000177,694000179,694000180,694000181,694000184,694000185, 694000187,694000189,694000193,694000194,694000198,694000200,694000203, 694000207,694000211,694000215,694000219,694000222,694000226,694000228, 694000232,694000235,694000236 N: 100 P: 0.1 L: 10 (min) D: 26 (max) [ 9, 18, 27, 36, 45, 54, 63, 72, 81, 90 ] Found progression of 10 elements, difference: 16 starts: 694000045, ends: 694000189. real 0m0.065s user 0m0.052s sys 0m0.004s
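For validating a fast or sampling-based solver on small inputs, the classic quadratic-time dynamic program computes the exact answer (a sketch; at $\Theta(N^2)$ it is far too slow for $N\approx 10^6$, but it makes a good reference oracle):

```python
# Longest arithmetic-progression subsequence of a sorted sequence, O(n^2) DP.
def longest_ap(seq):
    n = len(seq)
    if n <= 1:
        return n
    best = 1
    # dp[j] maps a common difference d to the length of the longest AP
    # ending at index j whose consecutive elements differ by d.
    dp = [dict() for _ in range(n)]
    for j in range(1, n):
        for i in range(j):
            d = seq[j] - seq[i]
            dp[j][d] = dp[i].get(d, 1) + 1
            best = max(best, dp[j][d])
    return best

assert longest_ap([3, 4, 5, 8, 12]) == 3  # [3,4,5] or [4,8,12], as in the example
```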
|
Let $G$ be an Abelian group, and let $H=\{g\in G \mid \mathrm{order}(g) < \infty\}$.
I need to prove that $H$ is a normal subgroup of $G$.
I know that if I prove it's a subgroup, it will be normal as well, since $G$ is Abelian. But I wonder about it being a subgroup.
I know this has to work: $$\forall h_1, h_2\in H,\hspace{1cm}h_1*h_2^{-1}\in H$$
But I wonder, is it enough to just show that the order of the product is still finite? Is there any special way to show the order is finite? Or is it just trivial?
If anyone can help clearing this out, I would really appreciate it.
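The closure step is where commutativity does the work; a short sketch (writing $m=\mathrm{order}(h_1)$ and $n=\mathrm{order}(h_2)$):

```latex
% Since G is abelian, powers distribute over products:
(h_1 h_2^{-1})^{mn} = h_1^{mn}\,(h_2^{-1})^{mn}
                    = (h_1^{m})^{n}\,\big((h_2^{n})^{m}\big)^{-1}
                    = e \cdot e = e.
% Hence the order of h_1 h_2^{-1} divides mn, so it is finite and h_1 h_2^{-1} \in H.
```

Together with $e\in H$ (it has order $1$, so $H\neq\emptyset$), this verifies the subgroup criterion.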
|
Can you please help me find this integral?
$$\int \sin(\ln(x)) dx$$
Give me a clue or show step by step solutions please.
Thank you very much.
Make a substitution: $u = \ln x$. Then $du = \frac{1}{x} dx$, so $dx = x du$. Then you can use the fact that $e^u = x$.
Hope this helps!
Putting $\ln x=y, x=e^y, dx=e^y dy$
So, $\int \sin(\ln x)dx=\int \sin y\cdot e^y dy$
Use Integration by parts, with $e^y$ as the first term
Alternatively, using Euler's formula, $e^{iy}=\cos y+i\sin y$
$\int \sin y\cdot e^y dy$ is the imaginary part of $\int e^y\cdot e^{iy}dy$
$$\int e^y\cdot e^{iy}dy=\int e^{y(1+i)}dy=\frac{e^y(e^{iy})}{(1+i)}=\frac{(1-i)e^y(\cos y+i\sin y)}2=\frac{e^y\{(\cos y+\sin y)+i(\sin y-\cos y)\}}2$$
$$\implies \int \sin y\cdot e^y dy=\frac{e^y(\sin y-\cos y)}2$$
$$\implies \int \sin(\ln x)dx=\frac{x(\sin(\ln x)-\cos(\ln x))}2$$
Let our integral be $I$. Use integration by parts. Let $u=\sin(\ln x)$ and $dv=dx$. Then $du=\frac{1}{x}\cos(\ln x)\,dx$, and we can take $v=x$. Thus $$I=x\sin(\ln x)-\int \cos(\ln x)\,dx.$$ Let $J=\int \cos(\ln x)\,dx$. The same sort of calculation as the one above yields $$J=x\cos(\ln x)+\int \sin(\ln x)\,dx.$$ Thus $$I=x\sin(\ln x)-J\qquad\text{and}\qquad J=x\cos(\ln x)+I.$$ Solve for $I$, and don't forget the $+C$. We get $$I=\frac{x\sin(\ln x)-x\cos(\ln x)}{2} +C.$$
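The closed form $\int \sin(\ln x)\,dx=\frac{x(\sin(\ln x)-\cos(\ln x))}{2}+C$ obtained above can be sanity-checked by differentiating numerically (a quick sketch):

```python
import math

# Central-difference derivative of the claimed antiderivative
# F(x) = x*(sin(ln x) - cos(ln x))/2 should match sin(ln x).
def F(x):
    return x * (math.sin(math.log(x)) - math.cos(math.log(x))) / 2

h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - math.sin(math.log(x))) < 1e-8
```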
|
I liked @ttnphns's suggestion (that he made in the comments above) so much that I could not resist trying it out.
As @ttnphns said, LDA is equivalent to canonical correlation analysis (CCA) between your multivariate data and the set of dummy variables coding class labels. In case of only two groups, there is only one non-redundant dummy variable that takes e.g. value of $0$ for one class and value of $1$ for the second class. Running CCA between this dummy variable and the multivariate data will yield the canonical axis (maximizing correlation with the dummy variable) identical to the LDA axis (maximizing between-class/within-class variance ratio).
If we now replace this dummy variable with your class probability variable, which takes values in $[0, 1]$, we can still run the CCA analysis and use the first canonical axis. It will no longer be equal to the LDA axis because, now that there are no clear groups, LDA does not make sense.
Here is a figure illustrating it:
Blue dots are one class, red dots are another class; the size of a dot represents the probabilistic label (smallest dots for $p\approx 0.5$, largest dots for $p\approx 0$ and $p\approx 1$). The solid line is the LDA axis; the dashed line is the CCA axis, computed using probabilistic labels. It goes diagonally because, if you look very carefully, you will notice that the probabilistic labels were assigned such that they increase along the diagonal.
Note that in this very simple case of only two classes and consequently only a one-dimensional label variable, CCA and LDA are equivalent to good old multiple regression (regressing label variable $l\in \mathbb R$ on $\mathbf x \in \mathbb R^2$ results in a two-dimensional $\beta$ coefficient that defines an axis in $\mathbb R^2$). I demonstrate this equivalence numerically in my code below.
MATLAB code
%// generate data
n = 100;
x = randn([n*2 2]);
x(1:n, :) = bsxfun(@times, x(1:n, :), [1 4]);
x(1:n, :) = bsxfun(@plus, x(1:n, :), [1 1]);
x(n+1:end, :) = bsxfun(@times, x(n+1:end, :), [1 4]);
x(n+1:end, :) = bsxfun(@plus, x(n+1:end, :), [10 10]);
x = bsxfun(@plus, x, -mean(x));
labels = [ones(n,1); 2*ones(n,1)]; %// group labels (ones and twos)
prob_labels = (labels-mean(labels) * x * [1 1]'); %// probabilistic labels (from 0 to 1)
prob_labels = prob_labels / max(abs(prob_labels)) / 2 + 0.5;
%// LDA via within- and between-class covariance matrices (my own function)
axisLDA = mylda(x, labels);
%// CCA with probabilistic labels
[axisCCAprob, ~] = canoncorr(x, prob_labels);
axisCCAprob = axisCCAprob / norm(axisCCAprob);
%// plot
axesScale = 15;
dotScale = 200;
figure
hold on
axis([-20 20 -20 20])
axis square
scatter(x(1:n,1), x(1:n,2), abs(prob_labels(1:n)-0.5)*dotScale, 'b')
scatter(x(n+1:end,1), x(n+1:end,2), abs(prob_labels(n+1:end,:)-0.5)*dotScale, 'r')
plot(axisLDA(1)*axesScale*[-1 1], axisLDA(2)*axesScale*[-1 1], 'k')
plot(axisCCAprob(1)*axesScale*[-1 1], axisCCAprob(2)*axesScale*[-1 1], 'k--')
We can now check that different approaches mentioned above give identical results:
%// check that different approaches give identical results
display(['LDA, take 1: ' num2str(axisLDA')])
[axisCCA, ~] = canoncorr(x, labels); %// LDA via canonical correlation with dummies
axisCCA = axisCCA / norm(axisCCA);
display(['LDA, take 2: ' num2str(axisCCA')])
axisRegression = regress(labels, x); %// LDA via regression
axisRegression = axisRegression / norm(axisRegression);
display(['LDA, take 3: ' num2str(axisRegression')])
display(['CCA with probabilistic labels, take 1: ' num2str(axisCCAprob')])
axisRegressionProb = regress(prob_labels, x); %// CCA via regression
axisRegressionProb = axisRegressionProb / norm(axisRegressionProb);
display(['CCA with probabilistic labels, take 2: ' num2str(axisRegressionProb')])
Running this code produces the following output:
LDA, take 1: 0.99735 0.072685
LDA, take 2: 0.99735 0.072685
LDA, take 3: 0.99735 0.072685
CCA with probabilistic labels, take 1: 0.68477 0.72876
CCA with probabilistic labels, take 2: -0.68477 -0.72876
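The same LDA/regression equivalence can be checked outside MATLAB. A minimal NumPy sketch with synthetic data mimicking the setup above (all constants are illustrative), verifying that the two-class LDA direction $S_W^{-1}(\mu_2-\mu_1)$ and the least-squares coefficient of the centered labels agree up to sign:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# two elongated Gaussian clouds, as in the MATLAB example
x1 = rng.standard_normal((n, 2)) * [1, 4] + [1, 1]
x2 = rng.standard_normal((n, 2)) * [1, 4] + [10, 10]
X = np.vstack([x1, x2])
X = X - X.mean(axis=0)
labels = np.concatenate([np.zeros(n), np.ones(n)])
labels = labels - labels.mean()

# LDA direction via the within-class scatter matrix
mu1, mu2 = X[:n].mean(axis=0), X[n:].mean(axis=0)
Sw = np.cov(X[:n].T) + np.cov(X[n:].T)
axis_lda = np.linalg.solve(Sw, mu2 - mu1)
axis_lda /= np.linalg.norm(axis_lda)

# same direction via least-squares regression of the centered labels on X
beta, *_ = np.linalg.lstsq(X, labels, rcond=None)
axis_reg = beta / np.linalg.norm(beta)

# the two unit directions agree up to sign
assert np.isclose(abs(axis_lda @ axis_reg), 1.0)
```

The agreement is exact (up to scaling) because the total scatter differs from the within-class scatter by a rank-one term along $\mu_2-\mu_1$, so both inverses send $\mu_2-\mu_1$ to proportional vectors.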
|
The inhomogeneous PME in several space dimensions. Existence and uniqueness of finite energy solutions
1. Departamento de Matemática Aplicada, E.T.S.I. de Caminos, Canales y Puertos, Universidad Politécnica de Madrid, 28040 Madrid, Spain
2. Departamento de Matemáticas, Universidad Autónoma de Madrid, Cantoblanco, 28049 Madrid
$$\rho(x)\,\partial_t u= \Delta u^m\qquad \text{in } Q:=\mathbb R^n\times\mathbb R_+,$$
$$u(x, 0)=u_0,$$
in dimensions $n\ge 3$. We deal with a class of solutions having finite energy
$E(t)=\int_{\mathbb R^n} \rho(x)u(x,t) dx$
for all $t\ge 0$. We assume that $m> 1$ (slow diffusion) and
the density $\rho(x)$ is positive, bounded and smooth. We
prove existence of weak solutions starting from data $u_0\ge 0$
with finite energy. We show that uniqueness takes place if $\rho$
has a moderate decay as $|x|\to\infty$ that essentially amounts to
the condition $\rho\notin L^1(\mathbb R^n)$. We also identify
conditions on the density that guarantee finite speed of
propagation and energy conservation, $E(t)=$const. Our
results are based on a new
a priori estimate of the
solutions.

Keywords: inhomogeneous porous medium flow, semigroup solution, classes of uniqueness, Cauchy problem.

Mathematics Subject Classification: 35K15, 35K65, 35B4.

Citation: Guillermo Reyes, Juan-Luis Vázquez. The inhomogeneous PME in several space dimensions. Existence and uniqueness of finite energy solutions. Communications on Pure & Applied Analysis, 2008, 7 (6): 1275-1294. doi: 10.3934/cpaa.2008.7.1275
|
It looks like you are on the wrong track.
While it is trivially true that $(L_1^*L_2^*)^{(0)} = \lambda = (L_1\cup L_2)^{(0)}$, where $\lambda$ is the language consisting of the empty word, it is more often than not that $(L_1^*L_2^*)^{(1)}\neq (L_1\cup L_2)^{(1)}$. For example, if either $L_1$ or $L_2$ has a non-empty word, the left side $(L_1^*L_2^*)^{(1)}=L_1^*L_2^*\supseteq L_1^*\cup L_2^*$ has infinitely many words. However, if furthermore we let both $L_1$ and $L_2$ be finite languages, the right side $(L_1\cup L_2)^{(1)}$ is a finite language. A concrete counterexample is given by $L_1=\{a\}$ and $L_2=\lambda$.
Here is an approach to move forward. Can you prove that for any language $L$, $(L^*)^*=L^*$? The use of that equality is that you can deduce that $(L_1\cup L_2)^{*}=\left((L_1\cup L_2)^{*}\right)^*$.
Here is another approach that I like. Intuitively, it is easy to "see" the equality $(L_1^*L_2^*)^* = (L_1\cup L_2)^*$. Note that:

A word in $L_1^*L_2^*$ is some number of words in $L_1$ followed by some number of words in $L_2$.

A word in $(L_1^*L_2^*)^*$ is some number of words in $L_1$ followed by some number of words in $L_2$, possibly followed by some number of words in $L_1$ followed by some number of words in $L_2$, ..., possibly followed by some number of words in $L_1$ followed by some number of words in $L_2$. Here "some number of" means zero or more.

A non-empty word in $(L_1\cup L_2)^*$ is a word in $L_1$ or $L_2$, possibly followed by a word in $L_1$ or $L_2$, ..., possibly followed by a word in $L_1$ or $L_2$.
Can you see how a word in the former language must be a word in the latter language? Can you see how a word in the latter language must be a word in the former language? If you can, try expressing those "how" in rigorous terms. That would be a proof.
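If you want to convince yourself experimentally before writing the proof, a brute-force comparison of both languages up to a length bound is easy (a sketch; the sample languages over $\{a,b\}$ are arbitrary choices):

```python
# Compare all words of (L1* L2*)* and (L1 ∪ L2)* up to a length bound.
def star_upto(L, max_len):
    """All words of L* with length <= max_len."""
    words = {""}
    frontier = {""}
    while frontier:
        new = set()
        for w in frontier:
            for u in L:
                x = w + u
                if len(x) <= max_len and x not in words:
                    new.add(x)
        words |= new
        frontier = new
    return words

def concat(A, B, max_len):
    return {a + b for a in A for b in B if len(a + b) <= max_len}

L1, L2 = {"a", "ba"}, {"b"}
m = 6
lhs = star_upto(concat(star_upto(L1, m), star_upto(L2, m), m), m)
rhs = star_upto(L1 | L2, m)
assert lhs == rhs
```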
Here is a related exercise.
Let $L_1$ and $L_2$ be languages such that $L_1\subset L_2^*$ and $L_2\subset L_1^*$. Prove that $L_1^*=L_2^*$.
|
I'm interested in numerical analysis but I don't have experience with it. I was wondering how one can numerically solve integral equations like $\int_0^x e^{t^3}dt=4$. Is there some numerical method faster than finding a range $a\leq x\leq b$ containing the solution and then repeatedly splitting that range in half?
I will map out the approach and see if you can fill in the details.
We are asked to find the value of $x$ where:
$$\int_0^x e^{t^3}~dt = 4$$
We need two numerical approaches here: one to find the zeros of the function $f(x)$ (Newton's Method), and one to estimate the integral (Composite Simpson, or whichever integration rule you prefer, like Composite Trapezoidal or many others), where:
$$f(x) = \displaystyle \int_0^{x}~ e^{t^3}~dt - 4 = 0$$
The derivative wrt $x$ of this function is:
$$f'(x) = e^{x^3}$$
The Newton-Raphson method is given by:
$$\displaystyle x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)} = x_n - \dfrac{\displaystyle \int_0^{x_n} e^{t^3}~dt - 4}{e^{x_n^3}}$$
At
each iteration, we have to use the Composite Simpson's Rule to find the value of that integral for the next $x_n$.
$$s = \int_a^b f(x)\,dx \approx \dfrac{h}{3} \left( f(a) + f(b) + 4 \sum_{i=1}^{n/2}~f(a + (2i - 1)h)+2 \sum_{i=1}^{(n-2)/2} f(a+2 ih) \right)$$
Choose an initial starting point $x_0$ and a desired accuracy $\epsilon$.
The iterations will proceed as follows:
1. Start from $x_0$.
2. Using Composite Simpson with the needed $n$, evaluate $s$ between $(0, x_0)$, giving $s = s_0$.
3. Using Newton's iteration, compute $x_1 = x_0 - \dfrac{f(x_0)}{f'(x_0)}$.
4. Evaluate $s$ between $(0, x_1)$, giving $s = s_1$.
5. Continue until the iterations converge to the desired accuracy.
The numerical approach should yield an $x \approx 1.39821$. Next, you can compare this to the exact result and validate we found the correct value of $x$.
Curious question: is it possible to calculate the value of $n$ for the desired accuracy a priori, before doing the iterative steps with these two numerical approaches? Probably (actually, the answer is yes), but I will leave that for you to ponder.
Aside: we can also just use a numerical integrator, randomly try points to bound the problem more easily, and then use the procedure above to fine-tune to the desired accuracy. It all comes down to computational complexity.
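A compact implementation of the scheme above (Simpson's rule inside a Newton iteration; the iteration count and $n$ are arbitrary but sufficient here):

```python
import math

def simpson(f, a, b, n=200):  # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

g = lambda t: math.exp(t ** 3)

x = 1.0  # initial guess
for _ in range(50):
    fx = simpson(g, 0.0, x) - 4.0
    x -= fx / g(x)  # Newton step, since f'(x) = e^{x^3}

# the root matches the value quoted above
assert abs(x - 1.39821) < 5e-4
assert abs(simpson(g, 0.0, x) - 4.0) < 1e-9
```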
|
I've seen Baire category theorem used to prove existence of objects with certain properties. But it seems there is another class of interesting applications of Baire category theorem that I have yet to see.
First I found this MathOverflow problem:
Let $f$ be an infinitely differentiable function on $[0,1]$ and suppose that for each $x \in [0,1]$ there is an integer $n \in \mathbb{N}$ such that $f^{(n)}(x)=0$. Then $f$ coincides on $[0,1]$ with some polynomial.
I found another one from Ben Green's notes:
Suppose that $f:\mathbb{R}^+\to\mathbb{R}^+$ is a continuous function with the following property: for all $x\in\mathbb{R}^+$, the sequence $f(x),f(2x),f(3x),\ldots$ tends to $0$. Prove that $\lim_{t\to\infty}f(t)=0$.
Are there any other classic problems of this type?
|
Search
Now showing items 1-10 of 53
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE
(Elsevier, 2017-11)
We report measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...
System-size dependence of the charged-particle pseudorapidity density at $\sqrt {s_{NN}}$ = 5.02 TeV with ALICE
(Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...
Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions
(Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...
Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE
(Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as function of event multiplicity. The interesting relative increase ...
|
The change of Gibbs energy at constant temperature and species numbers, $\Delta G$, is given by an integral $\int_{p_1}^{p_2}V\,{\mathrm d}p$. For the ideal gas law $$p\,V=n\,RT,$$ this comes down to $$\Delta G = n\,RT\int_{p_1}^{p_2}\frac{1}{p}\,{\mathrm d}p=n\,RT\,\ln\frac{p_2}{p_1}.$$ That logarithm is at fault for a lot of the formulas in chemistry.
I find I have a surprisingly hard time computing $\Delta G$ for gas governed by the equation of state $$\left(p + a\frac{n^2}{V^2}\right)\,(V - b\,n) = n\, R T,$$ where $a\ne 0,b$ are small constants. What is $\Delta G$, at least in low orders in $a,b$?
One might be able to compute $\Delta G$ via an integral whose integrand is not $V$.
Edit 19.8.15: My questions are mostly motivated by the desire to understand the functional dependencies of the chemical potential $\mu(T)$, which is essentially given by the Gibbs energy. For the ideal gas and any constant $c$, a state change from e.g. the pressure $c\,p_1$ to another pressure $c\,p_2$ gives the same Gibbs energy change as one from $p_1$ to $p_2$. The constant factors out in $\frac{1}{p}\,{\mathrm d}p$, resp. $\ln\frac{p_2}{p_1}$. However, this is a mere feature of the gas law with $V\propto \frac{1}{p}$, i.e. it likely comes from the ideal gas law being a model of particles without interaction with each other.
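For what it's worth, expanding the equation of state to first order in $a$ and $b$ gives $V \approx \frac{nRT}{p} + nb - \frac{an}{RT}$, hence $$\Delta G \approx nRT\ln\frac{p_2}{p_1} + \left(nb - \frac{an}{RT}\right)(p_2 - p_1).$$ A numeric sanity check of this first-order formula (my own sketch; the constants $n, T, a, b, p_1, p_2$ are arbitrary illustrative values in SI-like units):

```python
import math

def vdw_volume(p, n, T, a, b, R=8.314):
    # Solve (p + a*n^2/V^2) * (V - b*n) = n*R*T for V by bisection.
    # Assumes a single (gas-phase) root in the bracket, which holds for small a, b.
    def f(V):
        return (p + a * n**2 / V**2) * (V - b * n) - n * R * T
    lo, hi = b * n * (1.0 + 1e-9), 1.0   # f(lo) < 0 < f(hi) for these values
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def delta_G(p1, p2, n, T, a, b, R=8.314, steps=2000):
    # Delta G = integral of V dp from p1 to p2 (trapezoidal rule).
    h = (p2 - p1) / steps
    total = 0.5 * (vdw_volume(p1, n, T, a, b, R) + vdw_volume(p2, n, T, a, b, R))
    for i in range(1, steps):
        total += vdw_volume(p1 + i * h, n, T, a, b, R)
    return total * h

n, T, R = 1.0, 300.0, 8.314
a, b = 1e-2, 1e-5               # small, purely illustrative van der Waals constants
p1, p2 = 1.0e5, 2.0e5
ideal = n * R * T * math.log(p2 / p1)
first_order = ideal + (n * b - a * n / (R * T)) * (p2 - p1)
numeric = delta_G(p1, p2, n, T, a, b)
```

With these values the numerically integrated $\Delta G$ agrees with the first-order formula to well below the size of the correction term itself, which supports the expansion.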
|
Cardinal characteristics of the continuum The subject known as cardinal characteristics of the continuum explores the rich territory---sometimes hidden from view, depending on the set-theoretic background---between the countably infinite cardinal $\aleph_0$ and the uncountable cardinality of the continuum. The subject begins with Cantor's theorem laying out the basic dichotomy that the continuum $\frak{c}=2^{\aleph_0}$ is strictly larger than $\aleph_0$, and goes on to explore the various ways that properties of $\aleph_0$ might extend to uncountable cardinals.
For example, the union of countably many measure zero subsets of $\mathbb{R}$ has measure $0$; the union of countably many meager sets is meager; every countable family of functions $f:\omega\to\omega$ is bounded by a single function under eventual domination; every countable set of reals has measure $0$. To what extent can we hope to extend such properties to uncountable collections? The various cardinal characteristics of the continuum, many of which are described below, are defined exactly to be the cardinalities where these and other similar such properties first begin to fail for uncountable collections. Each cardinal characteristic measures the extent to which a particular mathematical phenomenon extends from the countable to the uncountable, and the lesson of the subject is that there is an enormous diversity of such characteristics, exhibiting diverse combinations in various models of set theory. When the continuum is small, the characteristics are pressed together---under the continuum hypothesis, for example, they are all equal to the continuum---but in other models, the different characteristics are teased apart and seen to express fundamentally different, inequivalent properties. The subject breaks into two major components: first, proving the positive relations amongst the characteristics, such as $\omega_1\leq\frak{b}\leq\frak{d}\leq\frak{c}$; and second, constructing models of set theory, generally by forcing, which reveal the range of possibility, such as a model in which $\omega_1\lt\frak{b}\lt\frak{d}\lt\frak{c}$. Thus, the philosophy of the subject naturally exhibits an unusual degree of contingency for set-theoretic truth: we understand the cardinal characteristics more deeply because we know the range of possibility for their relations to each other.
An excellent general resource on the subject is [1].
Contents

The bounding number

The bounding number $\frak{b}$ is the size of the smallest family of functions $f:\omega\to\omega$ that is not bounded with respect to eventual domination.

The dominating number

The dominating number $\frak{d}$ is the size of the smallest family of functions $f:\omega\to\omega$ such that every function is eventually dominated by a function in the family.

The covering numbers

The additivity numbers

Cichoń's diagram
The arrows in Cichoń's diagram indicate provable inequalities (an arrow from one characteristic to another meaning $\le$); here $\mathcal L$ is the ideal of Lebesgue null sets and $\mathcal K$ the ideal of meager sets:

    cov(L) --> non(K) --> cof(K) --> cof(L) --> 2^ℵ0
      ↑          ↑           ↑          ↑
      |          b  ---->    d          |
      |          ↑           ↑          |
     ℵ1 --> add(L) --> add(K) --> cov(K) --> non(L)

This article is a stub. Please help us to improve Cantor's Attic by adding information.
References

Blass, Andreas. Chapter 6: Cardinal characteristics of the continuum. Handbook of Set Theory, 2010.
|
Given initial values $d[0]$ and $k[0]$, I would like to solve for the initial rate of change, $\dot d[0]$, and compare this value against some data.
I have the following profit function, which I would like to maximize:
$$\pi=\int^T_0 e^{-pt}\left(-50 (d - .1)^2 (d - .95) (d - .7) - 4 (.3 - k)^2 + 2 d k - \dot d^2-\dot k^2\right) dt $$
where $d$ and $k$ are functions of time $t$. I get the following Euler equations:
$$ 4020 d - 11100 d^2 + 8000 d^3 + 80 p \dot d = 299 + 80 k + 80 \ddot d $$ $$ 6 + 5 d + 5 \ddot k =20 k + 5 p \dot k $$
And the following boundary values: $$d[0]=\alpha, k[0]=\beta$$ $$-2 e^{-pT} \dot d[T] = 0,-2 e^{-pT} \dot k[T] = 0$$ Ideally, I would like an analytic solution in terms of $\alpha, \beta, p,$ and $ T$ but specifying them from the start would also be ok.
I have tried solving the system in Mathematica with no luck. DSolve just returns the input, and NDSolve complains about stiffness and singularity. I don't know how to deal with this. I successfully used a Taylor expansion to approximate $\dot d[0]$, but given that there are supposed to be two stationary points I'm not sure how I can incorporate both so that I can test my predictions against my fairly large data set. How could I non-arbitrarily determine which stationary point to approximate the behavior around, given $d[0]$ and $k[0]$? Also, I'm not sure how appropriate the approximation is, given that most of the values I'm interested in will be somewhat far from the stationary points.
Is there any way at all that I can solve or approximate a value for $\dot d[0]$, either analytically or numerically?
Any help would be greatly appreciated!
|
I came across the following YouTube video by Numberphile today:
It is about multiplicative persistence, which means roughly the following: we take an integer and multiply its digits together to get a new number. This is done in a loop until we get a single-digit number. Example: 3279->378->168->48->32->6. The persistence of 3279 is 5, as it takes 5 steps.
Currently we only know numbers with a persistence of up to 11. The smallest number with this persistence is 277,777,788,888,899. The community is only interested in the smallest number. Therefore, $3279$ is not really interesting, as $2379$ has the same persistence and is smaller.
FYI: 679 is even better and the smallest with a persistence of 5.
You can find the list in the on-line encyclopedia of integer sequences oeis.
Mathematicians think that there is no number with a higher multiplicative persistence than 11. They tested it with all reasonable numbers up to over 230 digits.
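As a quick sanity check of the numbers above, here is my own minimal Python sketch of the persistence loop (not the code from the video):

```python
def persistence(n):
    # count multiplication steps until a single digit remains
    steps = 0
    while n >= 10:
        p = 1
        for d in str(n):
            p *= int(d)
        n = p
        steps += 1
    return steps

print(persistence(3279))             # 5, matching the example above
print(persistence(679))              # 5, the smallest number with persistence 5
print(persistence(277777788888899))  # 11
```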
Anyway I wanted to give it a try. In the video Matt Parker writes some code as well to get the persistence of a number in Python. I used Julia ;)
Matt used recursion which would look like this in Julia:
function per(x; steps=0)
    if x >= 10
        return per(prod(digits(x)); steps=steps+1)
    end
    return steps
end
More elegant solution: (mentioned by Jolle in the comment section)
function per(x)
    x < 10 && return 0
    return 1 + per(prod(digits(x)))
end
Then you don't need the
steps parameter and it is very easy to read.
Yes
digits and
prod are standard functions in Julia.
We can easily test the performance when we use BenchmarkTools
using BenchmarkTools
@btime Persistence.per(277777788888899)
  845.794 ns (11 allocations: 1.45 KiB)
11
and we know that we probably don't get in trouble with too many recursive calls as our expected limit is a persistence of 11 but it is also not really necessary and can be avoided by using:
function per_while_simple(x)
    steps = 0
    while x >= 10
        x = prod(digits(x))
        steps += 1
    end
    return steps
end
which has a similar performance.
@btime per_while_simple(277777788888899)
  832.013 ns (11 allocations: 1.45 KiB)
11
Now we have a problem: currently we are dealing with ordinary machine integers, which doesn't work if we actually want to deal with large numbers (like 230 digits...).
We have to change
prod(digits(x)) to
prod(convert.(BigInt,digits(x))) in both functions. This increases the runtime to $\approx 18\,\mu s$.
We can try it with a very big number:
27777778888888888888888888888888888888888888888888888888889999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999
which has 230 digits. This takes $\approx 170 \mu s$.
Let's generate some numbers and compute their persistence.
We call the function
function create_list_simple(;longest=15) and start with this code. BTW:
longest means the maximum number of digits we want to create.
best_x = 0
best_s = 0
n = zeros(Int, 8) # from 2 to 9
start_time = time()
Our current best one is
0 with
0 persistence. We store a number based on the fact that we only care about the smallest number in the following way:
n stores how often a digit is used. $0$ and $1$ are useless as a $0$ is always bad for persistence and a
1 can be removed. Now an example for $n$ can be
[1,0,0,0,0,6,6,2] which represents the number
277,777,788,888,899. All the interesting numbers have this ascending structure so we can store them this way. The next parts will get into a
while True loop.
for d in 8:-1:3
    if n[d] < longest-(sum(n)-n[d])
        n[d] += 1
        break
    end
    n[d:8] .= 0
end
My idea was to create numbers in this form like normal counting in our
n array structure. So we first have
9 then
99 and so on until we have 15 of them, then we get to
8,89,899 etc. This implies that we don't create our numbers in an ascending order.
We do this only down to 4, as $2 \cdot 2 = 4$ means we could make the number smaller; the same principle holds for $3$.
Attention: 8:-1:3 creates 8,...,3 but we store it in our special structure where we don't save the 1 so 3 corresponds later to 4.
In the end our
n would be
[0,0,0,0,0,0,0,0] then we want to add a
3 at the front and after that a
2.
if all(i -> i == 0, n[3:8])
    if n[2] == 0
        n[2] = 1
    elseif n[1] == 0
        n[1] = 1
        n[2] = 0
    else
        println("Checked all")
        break
    end
end
then we check if there is a
2 and a
4 as this could be again reduced to $8$.
if n[1]+n[3] < 2
    x = arr2x(n)
    s = per_while_simple(x)
    if s > best_s || (s == best_s && x < best_x)
        best_s = s
        best_x = x
        println("Found better: ", x, " needs ", s, " steps and has a length of: ", length(string(x)))
    end
end
The
arr2x function creates the actual number out of our array:
function arr2x(a::Vector{Int})
    str = ""
    for i in 2:9
        str *= repeat(string(i), a[i-1])
    end
    return parse(BigInt, str)
end
after closing the while loop we can
println("Time needed in total: ", time()-start_time)
julia> Persistence.create_list_simple(;longest=15)
Found better: 99 needs 2 steps and has a length of: 2
Found better: 999 needs 4 steps and has a length of: 3
Found better: 9999999 needs 5 steps and has a length of: 7
Found better: 8899999 needs 5 steps and has a length of: 7
Found better: 888899 needs 6 steps and has a length of: 6
Found better: 888888 needs 6 steps and has a length of: 6
Found better: 888888888 needs 7 steps and has a length of: 9
Found better: 77788899 needs 8 steps and has a length of: 8
Found better: 7777788999 needs 9 steps and has a length of: 10
Found better: 6668888888888 needs 10 steps and has a length of: 13
Found better: 6667788899 needs 10 steps and has a length of: 10
Found better: 666677777788888 needs 11 steps and has a length of: 15
Found better: 466777777888889 needs 11 steps and has a length of: 15
Found better: 367777778888889 needs 11 steps and has a length of: 15
Found better: 277777788888899 needs 11 steps and has a length of: 15
Checked all
Time needed in total: 1.5783820152282715
We found our $277777788888899$ which is satisfying. I know it was checked for over 200 digits but let's see how long it takes for
40 digits... It checks $6,404,978$ numbers in 440 s on my machine.
Can we do better?
If we increase the number of digits in a number we have to do a lot of $9^x$ calculations. My idea was to precompute those numbers.
global cache
function create_cache(;longest=240)
    global cache
    cache = Vector{Dict{Int,BigInt}}()
    for d = 2:9
        cd = Dict{Int,BigInt}()
        for i = 0:longest
            cd[i] = d^convert(BigInt,i)
        end
        push!(cache, cd)
    end
end
If we call that function we create a cache for each digit $d$ with powers up to $d^{240}$.
Now we create our
n structure back from a number and calculate the next number using that structure.
function per_while(x)
    steps = 0
    n = zeros(Int, 8) # from 2 to 9
    while x >= 10
        list = digits(x)
        for l in list
            if l == 0
                return steps+1
            end
            if l > 1
                n[l-1] += 1
            end
        end
        x = prod_arr(n)
        n[1:8] .= 0
        steps += 1
    end
    return steps
end
prod_arr is defined as:
function prod_arr(a::Vector{Int})
    global cache
    n = ones(BigInt, 8)
    for i=2:9
        n[i-1] = cache[i-1][a[i-1]]
    end
    return prod(n)
end
If we use this function instead we can calculate all reasonable numbers in about 300s.
In general there are options to further reduce the search space. ;)
I also created a small visualization of the persistence of the numbers up to 100.
Additional second part
I thought about this again after publishing it and improved the style and performance of the code. Additionally, I created some more visualizations.
A problem with the version I explained above for creating a list of numbers is that it doesn't generate them in ascending order, like
666677777788888 before
277777788888899. For this I actually got rid of the
n structure and created a list in a different way. This might not be the best way according to what I saw on Twitter, but I think it's alright. The thing that is changing is inside the
while True loop.We initialize a variable
x=BigInt(1) before which represents the number we want to test.
x += 1
sx = string(x)
# keeping track of the current length of the number
if x >= 10^convert(BigInt,current_length)
    current_length += 1
    println("current_length: ", current_length)
end
So we increment it by one each time and whenever it gets bigger than the
current_length we increase it and have a little output. This would now produce
2,3,4,5,6,7,8,9,10,11,... now we want
2,3,4,5,6,7,8,9,22,23,... instead.
if sx[end] == '0'
    # if the number is 10...0 then the next reasonable one is 22...2
    if sx[1] == '1'
        x = parse(BigInt,"2"^current_length)
    else
        # check how many 0 we have in the end like 2223000 after 2222999
        # then the next reasonable is 2223333
        tail = 1
        while sx[end-tail] == '0'
            tail += 1
        end
        front = sx[1:end-tail]
        back = front[end]
        x = parse(BigInt,front*back^tail)
    end
end
Whenever we have a
0 as the last digit we check whether we got a
10...0 or just sth. like from
229 to
230. In the first case we jump to
22...2; in the other case we check how many zeros we have and then replace them accordingly, like from
229 directly to
233 and from
2399 to
2444.Our check for break is then:
sx = string(x)
if length(sx) > longest
    println("Checked all")
    break
end
Running the program now creates:
current_length: 2
Found better: 25 needs 2 steps and has a length of: 2
Found better: 39 needs 3 steps and has a length of: 2
Found better: 77 needs 4 steps and has a length of: 2
current_length: 3
Found better: 679 needs 5 steps and has a length of: 3
current_length: 4
Found better: 6788 needs 6 steps and has a length of: 4
current_length: 5
Found better: 68889 needs 7 steps and has a length of: 5
current_length: 6
current_length: 7
Found better: 2677889 needs 8 steps and has a length of: 7
current_length: 8
Found better: 26888999 needs 9 steps and has a length of: 8
current_length: 9
current_length: 10
Found better: 3778888999 needs 10 steps and has a length of: 10
current_length: 11
current_length: 12
current_length: 13
current_length: 14
current_length: 15
Found better: 277777788888899 needs 11 steps and has a length of: 15
which creates the correct integer sequence (oeis).
For this I removed the checks whether it is reasonable so it creates
222 as well even if it can be reduced to
8. These kinds of checks are now generalized in my code in the cache part.
### check for up to 10^power whether the digits are reasonable
# i.e. 22 is not reasonable as 4 is smaller; here with power = 7
next_possible = zeros(Int, 10^power)
smallest_possible_for_x = 10^power*ones(Int, 9^power)
l = length(smallest_possible_for_x)
last_i = 1
for i=1:10^power
    m = prod(digits(i))
    # if it ends in 0 => not reasonable; only for step 2
    if 0 < m <= l && m % 10 != 0
        if i < smallest_possible_for_x[m]
            smallest_possible_for_x[m] = i
            next_possible[last_i+1:i] .= i
            last_i = i
        end
    end
end
smallest_possible_for_x[1:9] = collect(1:9);
The important array here is
next_possible, which indicates whether a number is reasonable. For example
next_possible[222] is
255, which means that all numbers from
222 up to
249 are not reasonable.
222 as mentioned is the same as
8,
223 is the same as
26 and so on up to
249 which is the same as
89. I save this up to 7 digits. If a number is reasonable, i.e. 26, then
next_possible[26] equals 26. The extra check
&& m % 10 != 0 is for something like
25 which creates a
0 in the end and is definitely useless so we consider it as not reasonable. This removes
25 from our list, which is a little sad, but I'm okay with it :D
Later we use this to jump to the next useable number:
first_digits = parse(Int,sx[1:min(length(sx),7)])
if next_possible[first_digits] != first_digits
    # jumping to the next reasonable number
    first_digits = next_possible[first_digits]
    x = parse(BigInt, string(first_digits)*string(first_digits)[end:end]^(current_length-length(string(first_digits))))
end
With this improved code we can check the numbers with up to 40 digits in $\approx 60s$ which is 5 times faster than the 300s we had before.
I read somewhere that it would be nice to have a histogram as well so I plotted it:
The first one shows the first version of the improved code (without the filtering check)
the next one shows the histogram with up to 20 digits as well but uses the filtering step.
Additional third part
There is another performance update, again mentioned by Jolle in the comment section below. Creating unnecessary arrays is quite bad, and I was just being lazy by using the default function
digits.
Instead using this function:
function digitprod(x::T)::T where {T<:Integer}
    val = one(T)
    ten = T(10)
    while !iszero(val) && !iszero(x)
        (x, r) = divrem(x, ten)
        val *= r
    end
    return val
end
It takes a number
x of some integer type and computes the digit product itself by repeatedly dividing by 10 and taking the remainder. This doesn't create arrays each time and is therefore quite a bit faster. Running my test again with all reasonable numbers up to 40 digits, it now takes $\approx 25$ s instead of 60 s. It also doesn't really matter whether we use the more complicated cache version or the simple
while version then.
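For comparison, the same divide-and-remainder idea in Python (my own port, not from the original post); note that it also stops early as soon as a 0 digit is seen:

```python
def digit_prod(x):
    # digit product without building a digit array; stops early on a 0 digit
    val = 1
    while val and x:
        x, r = divmod(x, 10)
        val *= r
    return val

print(digit_prod(3279))  # 378
print(digit_prod(100))   # 0
```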
There are some other people who saw the video on Numberphile and programmed their own versions. They specialized more on performance and on interesting ideas for reducing the search space. Please have a look at the code by Amit Moryossef on pastebin, who is also trying to prove that there is no number with a persistence of 12. I'll definitely keep you updated on that. Additionally, please check out the code by Davipb, which is written in C. He mentioned on Reddit that he checked it up to 400 digits.
Edit 31.03.2019: If you're interested in the newest code projects and findings, you might want to check /r/numberphile. There I found this database of numbers with a persistence of >2 up to a huge number of digits: Multiplicative persistence DB
The code for the visualization with d3 as well as the Julia code is on my GitHub.
I'll keep you updated if there are any news on persistence on Twitter OpenSourcES as well as my more personal one: Twitter Wikunia_de
Thanks for reading!
If you enjoy the blog in general please consider a donation via Patreon. You can read my posts earlier than everyone else and keep this blog running.
Become a Patron!
|
> Input
> Input
>> 1²
>> (3]
>> 1%L
>> L=2
>> Each 5 4
>> Each 6 7
>> L⋅R
>> Each 9 4 8
> {0}
>> {10}
>> 12∖11
>> Output 13
Try it online!
Returns a set of all possible solutions, and the empty set (i.e. \$\emptyset\$) when no solution exists.
How it works
Unsurprisingly, it works almost identically to most other answers: it generates a list of numbers and checks each one for inverse modulus with the argument.
If you're familiar with how Whispers' program structure works, feel free to skip ahead to the horizontal line. If not: essentially, Whispers works on a line-by-line reference system, starting on the final line. Each line is classed as one of two options. Either it is a
nilad line, or it is an operator line.
Nilad lines start with
>, such as
> Input or
> {0} and return the exact value represented on that line i.e
> {0} returns the set \$\{0\}\$.
> Input returns the next line of STDIN, evaluated if possible.
Operator lines start with
>>, such as
>> 1² or
>> (3] and denote running an operator on one or more values. Here, the numbers used do not reference those explicit numbers, instead they reference the value on that line. For example,
² is the square command (\$n \to n^2\$), so
>> 1² does not return the value \$1^2\$, instead it returns the square of line
1, which, in this case, is the first input.
Usually, operator lines only work using numbers as references, yet you may have noticed the lines
>> L=2 and
>> L⋅R. These two values,
L and
R, are used in conjunction with
Each statements.
Each statements work by taking two or three arguments, again as numerical references. The first argument (e.g.
5) is a reference to an operator line used as a function, and the rest of the arguments are arrays. We then iterate the function over the array, where the
L and
R in the function represent the current element(s) in the arrays being iterated over. As an example:
Let \$A = [1, 2, 3, 4]\$, \$B = [4, 3, 2, 1]\$ and \$f(x, y) = x + y\$. Assuming we are running the following code:
> [1, 2, 3, 4]
> [4, 3, 2, 1]
>> L+R
>> Each 3 1 2
We then get a demonstration of how
Each statements work. First, when working with two arrays, we zip them to form \$C = [(1, 4), (2, 3), (3, 2), (4, 1)]\$ then map \$f(x, y)\$ over each pair, forming our final array \$D = [f(1, 4), f(2, 3), f(3, 2), f(4, 1)] = [5, 5, 5, 5]\$
Try it online!
How this code works
Working counter-intuitively to how Whispers works, we start from the first two lines:
> Input
> Input
This collects our two inputs, let's say \$x\$ and \$y\$, and stores them in lines
1 and 2 respectively. We then store \$x^2\$ on line 3 and create a range \$A := [1 ... x^2]\$ on line 4. Next, we jump to the section
>> 1%L
>> L=2
>> Each 5 4
>> Each 6 7
The first thing executed here is line
7,
>> Each 5 4, which iterates line
5 over line 4. This yields the array \$B := [x \: \% \: i \: | \: i \in A]\$, where \$a \: \% \: b\$ is defined as the modulus of \$a\$ and \$b\$.
We then execute line
8,
>> Each 6 7, which iterates line
6 over \$B\$, yielding an array \$C := [(x \: \% \: i) = y \: | \: i \in A]\$.
For the inputs \$x = 5, y = 2\$, we have \$A = [1, 2, 3, ..., 23, 24, 25]\$, \$B = [0, 1, 2, 1, 0, 5, 5, ..., 5, 5]\$ and \$C = [0, 0, 1, 0, 0, ..., 0, 0]\$
We then jump down to
>> L⋅R
>> Each 9 4 8
which is our example of a dyadic
Each statement. Here, our function is line
9 i.e
>> L⋅R and our two arrays are \$A\$ and \$C\$. We multiply each element in \$A\$ with its corresponding element in \$C\$, which yields an array, \$E\$, where each element works from the following relationship:
$$E_i =\begin{cases}0 & C_i = 0 \\A_i & C_i = 1\end{cases}$$
We then end up with an array consisting of \$0\$s and the inverse moduli of \$x\$ and \$y\$. In order to remove the \$0\$s, we convert this array to a set (
>> {10}), then take the set difference between this set and \$\{0\}\$, yielding, then outputting, our final result.
|
Let $R$ be the ring of all functions $f : \Bbb{R}\longrightarrow \Bbb{R}$ which are continuous outside $(-1,1)$ and let $S$ be the ring of all functions $f : \Bbb{R}\longrightarrow \Bbb{R}$ which are continuous outside a bounded open interval containing zero (depended on $f$). Is it true that $R \cong S$?
They are not ring isomorphic, because e.g. $R$ has the following property of a ring $X$, and $S$ does not:
There is a non zero element $u \in X$ such that for any invertible $f\in X$ either $uf$ or $-uf$ is a square, and for some $g\in X$ neither $ug$ nor $-ug$ is a square.
An idempotent $u$ is called primitive if the equation of idempotents $$(1-u)x=0$$ has a unique non-zero solution $x=u$. The primitive idempotents in $R$ are $\chi_{\{a\}}$ for $a \in (-1,1)$ and $\chi_{(-\infty,-1]}$ and $\chi_{[1,+\infty)}$. For ease of notation, give the last two mentioned primitive idempotents the indices $-1$ and $1$ respectively. The primitive idempotents in $S$ are indexed by $a \in \mathbb R$.
A ring isomorphism $\phi \colon R \rightarrow S$ would then define a bijection $\tilde \phi \colon [-1,1] \rightarrow \mathbb R$. Let $A=\tilde \phi^{-1}(\mathbb Q) \subseteq [-1,1]$, where $\mathbb Q$ denotes the rational numbers. Then $\chi_A$ is an idempotent in $R$ with the property that $\chi_A \cdot \chi_{\{a\}} \neq 0$ for all $a \in A$ and $\chi_A \cdot \chi_{\{a\}}= 0$ for all $a \in [-1,1] \smallsetminus A$.
The idempotent $\phi(\chi_A) \in S$ must be of the form $\chi_B$ for some $B \subseteq \mathbb R$. We must have $$\chi_B \cdot \chi_{\{\tilde \phi(a)\}}=\phi(\chi_A \cdot \chi_{\{a\}}) \neq 0$$ for all $a \in A$ and $$\chi_B \cdot \chi_{\{\tilde \phi(a)\}}=\phi(\chi_A \cdot \chi_{\{a\}}) = 0$$ for all $a \in [-1,1] \smallsetminus A$. Therefore $B$ contains all the rational numbers and no numbers that are not rational. But $\chi_{\mathbb Q}$ is not an element of $S$ and we reach a contradiction. Therefore there is no ring isomorphism $\phi \colon R \rightarrow S$.
|
I began with a problem which looked simple in the beginning but became increasingly complex as I dug deeper.
Main question: Find the number of solutions $s(n)$ of the equation$$n = \frac{k_1}{1} + \frac{k_2}{2} + \ldots + \frac{k_n}{n}$$where $k_i \ge 0$ is a non-negative integer. This is my main question. After trying different approaches, the one that I found most promising is as follows. But soon even this turned out to be a devil (as we shall see why).
Let $l_n$ be the LCM of the first $n$ natural numbers. We know that $\log l_n =\psi(n)$, where $\psi$ is the second Chebyshev function. Multiplying both sides by $l_n$ we obtain $$ n l_n = \frac{k_1 l_n}{1} + \frac{k_2 l_n}{2} + \ldots + \frac{k_n l_n}{n} $$
Each term on the RHS is a positive integer thus our question is equivalent to finding the number of partitions of $nl_n$ in which each part satisfy some criteria.
Criterion 1: How small can a part be? Assume that there is a solution with $k_n = 1$; then the smallest term in the above sum will be the $n$-th term, which is $l_n / n$. Hence each term in our partition is $\ge l_n/n$. Criterion 2: How many prime factors can each part contain? If my calculation is correct, then for $n \ge 2, 2 \le r \le n$, the minimum number of prime factors that $l_n /r$ can contain is $\pi(n)-1$. With these two selection criteria we have: $s(n) \le $ the number of partitions of $n l_n$ into at most $n$ parts such that each part is at least $l_n / n$ and has at least $\pi(n) - 1$ different prime factors.
Maybe we can narrow down further by adding sharper selection criteria, but I thought it was already complicated enough for the time being. The asymptotics of the number of partitions of $n$ into $k$ parts, $p(n,k)$, is well known, but I have not found in the literature any asymptotics for the number of partitions of $n$ into $k$ parts such that each part is at least $m$, let alone the case when each part has a certain minimum number of prime factors. I am looking for any suggestions or reference materials that would help with these intermediate questions and ultimately with answering the main question.
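I don't have an asymptotic answer, but for experimentation a brute-force count of $s(n)$ with exact rational arithmetic is easy (my own sketch; the printed values are just small-case data, not a conjecture):

```python
from fractions import Fraction

def s(n):
    # number of solutions of n = k_1/1 + k_2/2 + ... + k_n/n with k_i >= 0 integers
    def count(i, remaining):
        if i > n:
            return 1 if remaining == 0 else 0
        total, k = 0, 0
        # try every k_i that does not overshoot the remaining sum
        while Fraction(k, i) <= remaining:
            total += count(i + 1, remaining - Fraction(k, i))
            k += 1
        return total
    return count(1, Fraction(n))

print([s(n) for n in range(1, 4)])  # [1, 3, 10]
```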
|
Thanks, but ERF or ERFC doesn't replicate =NORM.DIST(1,0,1,TRUE) = 0.84134475 - unless I'm missing something here.
you are missing something.
The error function and the cumulative normal distribution differ by several factors in their definitions and have different limits of integration.
(1+erf(1/sqrt(2.)))/2 = 0.8413447462 replicates your result
[$]{\rm erf}(x)=\frac{2}{\sqrt{\pi}}\int_{0}^{x}e^{-t^{2}}\,dt[$]
the cumulative normal distribution is
[$]\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-t^{2}/2}\,dt[$]
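In code the relationship is a one-liner; a quick Python check of my own (math.erf is the standard-library error function):

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function: Phi(x) = (1 + erf(x/sqrt(2))) / 2
    return (1.0 + math.erf(x / math.sqrt(2.0))) / 2.0

print(round(norm_cdf(1.0), 9))  # 0.841344746, matching NORM.DIST(1,0,1,TRUE)
```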
|
This question is inspired partly by this question Any reference on Brownian Motion continuity. In this post, the author asked if the following three axioms can define a Brownian motion without assuming the continuity axiom
"4. $W(t)$ is continuous with probability one, i.e. $\lim _{h\rightarrow 0}P(|W(t+h)-W(t)|>\epsilon )=0,\forall \epsilon>0, t\in S$."

By assuming this, Brownian motion is a special case of a Lévy process. The OP's axioms 1, 2, 3 are:

1. $W(0) = 0$.
2. For all $0 \le t_1 \le t_2 \le t_3 \le t_4$, $W(t_2) - W(t_1)$ and $W(t_4) - W(t_3)$ are independent random variables.
3. For all $0 \le t_1 \le t_2$, $W(t_2) - W(t_1)$ is normally distributed with mean 0 and variance $\sigma^2\,(t_2 - t_1)$.
In fact, [Karlin&Taylor] defined Brownian motion to be a stochastic process satisfying 1,2,3 axioms with an additional stipulation
"
4*.$W(t)$ is continuous at $t=0$"
And they derived continuity of the Brownian path as a result, using the Karhunen–Loève representation theorem in Sec 7.4. A possibly relevant clue is that we always require the characteristic function $E(e^{itX})$ to be continuous around the origin in order to determine a random variable in distribution via its characteristic function. So I guess axiom 4* is a guarantee that some transform exists?
My question is: if we only assume axioms 1, 2, 3 on a stochastic process as above, can we construct a stochastic process $W(t)$ that is not a Brownian motion (which is defined as a stochastic process satisfying axioms 1, 2, 3, 4, OR axioms 1, 2, 3, 4* as in [Karlin&Taylor])? Alternatively, is the continuity axiom redundant? (I do not think so, but it does not seem very clear how I can construct a counterexample to illustrate the point.) After looking at @Bjørn Kjos-Hanssen's answer, I felt a more appropriate question to ask is whether there is a stochastic process that is not càdlàg and satisfies axioms 1, 2, 3.
[Karlin&Taylor]Karlin, S., and H. M. Taylor. "A first course in stochastic processes" Academic Press. New York (1975).
|
Intersection with Complement is Empty iff Subset
Theorem $S \subseteq T \iff S \cap \map \complement T = \O$
where:
$S \subseteq T$ denotes that $S$ is a subset of $T$ $S \cap T$ denotes the intersection of $S$ and $T$ $\O$ denotes the empty set $\complement$ denotes set complement. $S \cap T = \O \iff S \subseteq \relcomp {} T$ Proof
$S \subseteq T$
$\leadstoandfrom S \setminus T = \O$ (Set Difference with Superset is Empty Set)
$\leadstoandfrom S \cap \map \complement T = \O$ (Set Difference as Intersection with Complement)
$\blacksquare$
Also see Sources 1955: John L. Kelley: General Topology... (previous) ... (next): Chapter $0$: Subsets and Complements; Union and Intersection: Theorem $1$ 1965: Seth Warner: Modern Algebra... (previous) ... (next): Exercise $3.3 \ \text{(b)}$ 1967: George McCarty: Topology: An Introduction with Application to Topological Groups... (previous) ... (next): $\text{I}$: Exercise $\text{B vi}$
|
Contents
Integral expression can be added using the
\int_{lower}^{upper}
command.
Note that an integral expression may look slightly different in
inline and display math mode: in inline mode the integral symbol and the limits are compressed.
LaTeX code Output
Integral $\int_{a}^{b} x^2 dx$ inside text
$$\int_{a}^{b} x^2 dx$$
To obtain double/triple/multiple integrals and cyclic integrals you must use
amsmath and
esint (for cyclic integrals) packages.
LaTeX code Output
Like integral, sum expression can be added using the
\sum_{lower}^{upper}
command.
LaTeX code Output
Sum $\sum_{n=1}^{\infty} 2^{-n} = 1$ inside text
$$\sum_{n=1}^{\infty} 2^{-n} = 1$$
In a similar way you can obtain an expression for the product of a sequence of factors using the
\prod_{lower}^{upper}
command.
LaTeX code Output
Product $\prod_{i=a}^{b} f(i)$ inside text
$$\prod_{i=a}^{b} f(i)$$
Limit expression can be added using the
\lim_{lower}
command.
LaTeX code Output
Limit $\lim_{x\to\infty} f(x)$ inside text
$$\lim_{x\to\infty} f(x)$$
In
inline math mode the integral/sum/product lower and upper limits are placed to the right of the symbol. The same holds for limit expressions. If you want the limits of an integral/sum/product to appear above and below the symbol in inline math mode, use the
\limits command before the limits specification.
LaTeX code Output
Integral $\int_{a}^{b} x^2 dx$ inside text
Improved integral $\int\limits_{a}^{b} x^2 dx$ inside text
Sum $\sum_{n=1}^{\infty} 2^{-n} = 1$ inside text
Improved sum $\sum\limits_{n=1}^{\infty} 2^{-n} = 1$ inside text
Moreover, adding
\displaystyle beforehand will make the symbol large and easier to read.
On the other hand, the
\mathlarger command (provided by the
relsize package) can be used to get a bigger integral symbol in display.
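For example, the following snippet (assuming the relsize package is loaded in the preamble) contrasts the default inline integral with the \limits, \displaystyle, and \mathlarger variants:

```latex
% in the preamble: \usepackage{relsize}
Default: $\int_{a}^{b} x^2 \, dx$,
with limits above and below: $\int\limits_{a}^{b} x^2 \, dx$,
display-style: $\displaystyle\int_{a}^{b} x^2 \, dx$,
enlarged symbol: $\displaystyle\mathlarger{\int}_{a}^{b} x^2 \, dx$.
```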
For more information see
|
Answer
Period $=\frac{\pi}{5}$ seconds; frequency $=\frac{5}{\pi}$ cycles per second
Work Step by Step
The period can be found by comparing the equation $s(t)=-4\cos 10 t$ to $s(t)=a\cos w t$, where $w$ is the angular velocity. By comparing the two equations, we find that $w=10$. Next, we find the period using the formula Period $=\frac{2\pi}{w}=\frac{2\pi}{10}=\frac{\pi}{5}$ seconds. Since frequency is the reciprocal of the period, the frequency is $\frac{5}{\pi}$ cycles per second.
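The same computation takes only a few lines; the angular frequency w = 10 is read off the given equation:

```python
import math

w = 10  # angular frequency from s(t) = -4*cos(10*t)
period = 2 * math.pi / w   # T = 2*pi/w
frequency = 1 / period     # cycles per second

print(f"period = {period:.4f} s, frequency = {frequency:.4f} Hz")
```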
|
Since the statement in question is an “if and only if” we need to prove both directions. That is, we need to show$$mn=1 \text{ for some integers $m,n$ implies } m=n=1 \text{ or }m=n=-1$$and $$m=n=1 \text{ or } m=n=-1 \text{ implies } mn=1.$$
To prove that $A$ implies $B$ we assume that $A$ is true and deduce that $B$ must also be true. We will consider the second statement (which is the easiest of the two).
Suppose that $m,n\in\mathbb{Z}$ and $m=n=1$. We are to show that $mn=1$. By multiplying $m$ and $n$ we have $$ m \cdot n = (1) \cdot (1) = 1 $$ and we are done.
Note that it suffices to only consider the case $m=n=1$ and not $m=n=-1$ because the hypothesis is an
or statement. Since we proved the statement true in one of the cases then the whole statement is true.
Using this strategy you can now prove the first statement. That is, assume that you have two integers $m$ and $n$ such that $mn=1$ and show that either $m=n=1$ or $m=n=-1$. Hint: use the fact that $u\in\mathbb{Z}$ is a unit if and only if $u\mid 1$, and that the only units in $\mathbb{Z}$ are $1$ and $-1$.
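A brute-force search over a small range of integers (the bound 100 is arbitrary) confirms what the proof establishes:

```python
# Find all integer pairs (m, n) with |m|, |n| <= 100 and m*n == 1
solutions = [(m, n)
             for m in range(-100, 101)
             for n in range(-100, 101)
             if m * n == 1]
assert sorted(solutions) == [(-1, -1), (1, 1)]
print(solutions)
```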
|
(Note: This post is more technical than most stuff I write here. The intended audience here is not the general public, or even the general educated public: it’s students of geometry, broadly understood. In any case, if you don’t know what a differential form is, you’re probably not going to get much out of this.)
I’d like to show you some very nice geometry, involving some vector fields and differential forms.
Consider a surface. In fact, consider the plane, \(\mathbb{R}^2\). That’s just the standard Euclidean plane, with coordinates \( x\) and \( y\).
Now let’s consider a differential 1-form on the plane; call it \( \beta\). We’ll impose one condition on \( \beta\): its exterior derivative \( d\beta\) should be everywhere nonzero.
For instance, we can take \( \beta = x \; dy\). In fact, we will take this as a running example. Its exterior derivative is \( d\beta = dx \wedge dy\), which is just the usual Euclidean area form on the plane, and which is nowhere zero.
Now, saying that \(d\beta\) is everywhere nonzero is the same as saying that \( d\beta\) is an area form (although in general it might be different from the Euclidean area form \(dx \wedge dy\)); and this is also the same as saying that \( d\beta\) is a non-degenerate 2-form. In fact, being exact, \(d\beta\) is also closed: and hence \( d\beta\) is a closed non-degenerate 2-form, also known as a
symplectic form.
Non-degenerate 2-forms are great. When you insert a vector into one, you get a 1-form; and because of the non-degeneracy, if the vector is nonzero, then the resulting 1-form is nonzero. So you get a bijective correspondence, or
duality, between 1-forms and vectors.
This means that, at each point \(p \in \mathbb{R}^2\), the non-degenerate 2-form \(d\beta\) provides a linear map of 2-dimensional vector spaces
\[ T_p \mathbb{R}^2 \rightarrow T_p^* \mathbb{R}^2, \] or in other words \[ \{ \text{Vectors at $p$} \} \rightarrow \{ \text{1-forms at $p$} \}, \] which sends a vector \(v\) to the 1-form \(\iota_v d\beta\) (i.e. the 1-form \(d\beta(v, \cdot)\), where you have fed \( d\beta\) one vector, but it eats two courses of vectors, and after its entree it remains a 1-form on its remaining main course). It’s a linear map and, by the non-degeneracy of \(d\beta\), its kernel/nullspace consists solely of the zero vector. Thus it’s injective and, both vector spaces being 2-dimensional, it’s an isomorphism.
If we consider (smooth) vector
fields rather than just single vectors, then we can simultaneously do this at each point of \(\mathbb{R}^2\), and we get a map \[ \{ \text{Vector fields on $\mathbb{R}^2$} \} \rightarrow \{ \text{1-forms on $\mathbb{R}^2$} \}. \]
So \( d\beta\), being a non-degenerate 2-form, gives us a way to go from 1-forms to vector fields and back again. We can think of this as a
duality: for each 1-form, this correspondence gives us a dual vector field, and vice versa.
So far, we only have \(\beta\) and \( d\beta\). But \( \beta\) is a 1-form! So some vector field must correspond to it. Let’s call it \( X\). As it turns out, the 1-form \( \beta\), the 2-form \( d\beta\), and the vector field \( X\), form a very nice structure.
The name of Joseph Liouville is often associated with this stuff. Often the 1-form \(\beta\) is called a
Liouville form, often the surface (or manifold in general) is called a Liouville manifold, and the whole thing is often called a Liouville structure.
In fact, we can draw pictures of such a structure.
The easiest thing to draw is the vector field \( X\): a vector, drawn as an arrow, at each point.
How do we draw \(\beta\)? It’s generally hard to draw a picture of a differential form! However for a 1-form, we can draw its
kernel \(\ker \beta\). At any point \(p\), \(\beta_p\) is a linear map from the tangent space \(T_p \mathbb{R}^2\), which is a 2-dimensional vector space, to \(\mathbb{R}\). When \(\beta_p = 0\), \(\beta_p\) is the zero map \(T_p \mathbb{R}^2 \rightarrow \mathbb{R}\) and hence the kernel is the whole 2-dimensional tangent space \(T_p \mathbb{R}^2\). But where \(\beta_p \neq 0\), we have a nontrivial linear map from a 2-dimensional vector space to a 1-dimensional vector space. Hence \(\beta_p\) has rank 1 and nullity 1, so the kernel is a 1-dimensional subspace of \(T_p \mathbb{R}^2\). We then have a 1-dimensional tangent subspace at each point; in other words, \(\ker \beta\) is a line field on \(\mathbb{R}^2\). We can even join up the lines (i.e. integrate them) to obtain a collection of curves on \(\mathbb{R}^2\), which become the leaves of a foliation. In this way \(\beta\) can be drawn as a collection of curves, or singular foliation, on \(\mathbb{R}^2\): the foliation is singular at the points where \(\beta = 0\). True, drawing it this way only shows the kernel of \(\beta\), i.e. in which direction you will get zero if you feed a vector into \(\beta\), and you will not see what you get if you feed vectors in other directions. So you don’t see how “strong” \(\beta\) is at each point. But it’s a useful way to represent \(\beta\) nonetheless.
As for the 2-form \(d\beta\)? It’s just an area form; we won’t attempt to draw anything to represent that.
So, let’s draw what we get in our example. In our example, the 1-form is \( \beta = x \; dy\), the 2-form is \( d\beta = dx \wedge dy\), and the vector field \( X\) must satisfy
\[ \iota_X d\beta = \beta, \quad \text{i.e.} \quad \iota_X ( dx \wedge dy ) = x \; dy. \] It’s not too difficult to calculate that \( X = x \partial_x\) (here \( \partial_x\) is a unit vector in the \( x\) direction; if we’re using \( (x,y)\) to denote coordinates, then \( \partial_x = (1,0)\)).
Being a multiple of \( \partial_x\), \( X\) always points in the \( x\) direction, i.e. horizontally. When \( x\) is positive it points to the right, when \( x\) is negative it points to the left; and when \( x=0\), i.e. along the \( y\)-axis, \( X\) is zero. So \( X\) is actually a
singular vector field, in the sense that it has zeroes. And it’s zero along a whole line. (So it’s not generic.)
As for \( \beta = x \; dy\), it is zero when \( x=0\), so the foliation \( \ker \beta\) is singular along the \( y\)-axis. When \( x \neq 0\), the kernel consists of anything pointing in the \( x\)-direction. Since \( \beta\) has only a \( dy\), but no \( dx\) term, if you feed it a \( \partial_x\), or any multiple thereof, you’ll get zero. So the line field you draw is horizontal, as is the foliation. In other words, \( \ker \beta\) is the singular foliation consisting of horizontal lines, with singularities along the \( y\)-axis.
Note that, although we might have expected the line field of \( \ker \beta\) and the arrows of \( X\) to point all over the place, in fact the arrows and lines point in the same direction, i.e. horizontal! The vectors of \( X\) point along the lines of \( \ker \beta\). This means that, at each point, the vector \( X\) lies in the kernel of \( \beta\), and hence \( \beta(X) = 0\).
Now in this example, the vector field \(X = x \partial_x\) has a very nice property. If you
flow it, then points move out horizontally, and exponentially. Indeed, if you interpret \( x \partial_x\) as a velocity vector field, it is telling a point with horizontal coordinate \( x\) to move to the right, with velocity \( x\). Telling something to move as fast as where it already is, is a hallmark of exponential movement.
Denoting the flow of \( X\) for time \( t\) by \( \phi_t\), we have a map \( \phi_t : \mathbb{R}^2 \rightarrow \mathbb{R}^2\). It’s an exponential function:
\[ \phi_t (x,y) = (x e^{t}, y). \] Indeed, you can check that \( \frac{\partial}{\partial t} \phi_t = (x e^t, 0)\), which at time \( t=0\) is \[ \frac{\partial}{\partial t} |_{t=0} = (x,0) = X. \]
Now the vector field \(X\) (or its flow \(\phi_t\)) expands in the horizontal direction exponentially, and does nothing in the \(y\) direction: from this it follows that \(X\) also expands
area exponentially. An infinitesimal volume \(V\), after flowing under \(\phi_t\) for time \(t\), expands so that \(\frac{\partial V}{\partial t} = V\), and hence grows exponentially: at time \(t\), the volume has expanded from \(V\) to \(V e^t\). Rephrased in terms of differential forms, the Lie derivative of the area form \(d\beta = dx \wedge dy\) under the flow of \(X\) is the area form itself: \[ L_X d\beta = d\beta. \]
To see this, we use the Cartan formula \(L = d\iota + \iota d\), which yields
\[ L_X d\beta = d \iota_X (d\beta) + \iota_X d (d\beta). \] From the definition of \(X\), being dual to \(\beta\), we have \(\iota_X (d\beta) = \beta\); using this, and the fact that \(d^2 = 0\), the first term becomes \(d\beta\) and the second term is zero.
In fact, \( X\) doesn’t just expand the
area \( d\beta\) exponentially; it also expands the Liouville 1-form \( \beta\) exponentially. In other words, \[ L_X \beta = \beta. \]
To see this, we just apply the Cartan formula,
\[ L_X \beta = d \iota_X \beta + \iota_X d\beta = 0 + \beta. \] The first term is zero because, as we saw, \( X\) points along \( \ker \beta\), so \( \iota_X \beta = \beta(X) = 0\); the second term is \( \beta\) by the definition of \( X\) as dual to \( \beta\).
To summarise our example so far: we started with a 1-form \( \beta\) whose exterior derivative \( d\beta\) was a non-degenerate 2-form. We took a vector field \( X\) dual to \( \beta\), using the non-degeneracy of \( d\beta\). We have found that:
The vector field \( X\) points along the foliation \( \ker \beta\); in other words, \( \beta(X) = 0\). Flowing \( X\) expands area exponentially: \( L_X d\beta = d\beta\). Flowing \( X\) in fact expands the 1-form \( \beta\) exponentially: \( L_X \beta = \beta\).
We proved some of these in sort-of generality, but not everything. Let’s prove it all at once in general, now.
PROPOSITION. Let \( \beta\) be a 1-form on a surface such that \( d\beta\) is non-degenerate. Let \( X\) be dual to \( \beta\), i.e. \( \iota_X d\beta = \beta\). Then: (i) \(\beta(X) = 0\) (ii) \( L_X d\beta = d\beta\) (iii) \( L_X \beta = \beta.\)
PROOF. For (i), we note \( \beta(X) = \iota_X \beta\), and then use \( \beta = \iota_X d\beta\) and the fact that differential forms are antisymmetric:
\[ \beta(X) = \iota_X \beta = \iota_X \iota_X d\beta = d\beta(X,X) = 0. \]
For (ii) and (iii), we can then follow the arguments above. For (ii), we use the Cartan formula \( L_X = d \iota_X + \iota_X d\), the fact that \( d^2 = 0\), and the fact that \( \iota_X d\beta = \beta\).
\[ L_X d\beta = d \iota_X d\beta + \iota_X d d\beta = d\beta + 0 \]
For (iii), we use the Cartan formula, part (i) that \(\iota_X \beta = 0\), and the fact that \(\iota_X d\beta =\beta\).
\[ L_X \beta = d \iota_X \beta + \iota_X d \beta = d 0 + \beta = \beta. \]
So Liouville structures have some very nice properties.
And, this is all in fact classical physics. We can think of the plane as a phase space, and \( \beta\) as an action \( x \; dy = p \; dq\) (with \( q = y\), \( p = x\)). Then \( d\beta\) is the symplectic form on phase space, and the equation \( L_X d\beta = d\beta\) shows that this symplectic form, the fundamental structure on the phase space, is expanded by the flow of \( X\). (There is a result in classical mechanics, or symplectic geometry, known as Liouville’s theorem, which also says something about the effect of the flow of a vector field on the symplectic form.)
Anyway, above we saw one example of a Liouville structure on the plane. Here’s another one, which is more “radial”.
Take \(\beta = \frac{1}{2}(x \, dy - y \, dx)\). Then \(d\beta = \frac{1}{2}(dx \wedge dy - dy \wedge dx) = dx \wedge dy\). The vector field \(X\) dual to \(\beta\) is then \(\frac{1}{2}(x \, \partial_x + y \, \partial_y)\), which is a radial vector field. The kernel of \(\beta\) is also the radial direction: \(\beta(X) = \frac{1}{4}(x \, dy - y \, dx)(x \, \partial_x + y \, \partial_y) = \frac{1}{4}(xy - yx) = 0\). The flow of \(X\) then looks like it should expand area exponentially, and it does.
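Both examples can be verified symbolically. The encoding below is mine: a 1-form $a\,dx + b\,dy$ is stored as the pair $(a, b)$ and a vector field $u\,\partial_x + v\,\partial_y$ as $(u, v)$; a sketch using sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')

def check_duality(beta, X):
    """beta = (a, b) stands for a*dx + b*dy; X = (u, v) for u*∂x + v*∂y.
    Checks iota_X(d beta) = beta and beta(X) = 0."""
    a, b = beta
    u, v = X
    # d(a dx + b dy) = (∂b/∂x - ∂a/∂y) dx∧dy =: f dx∧dy
    f = sp.diff(b, x) - sp.diff(a, y)
    # iota_X (f dx∧dy) = -f*v dx + f*u dy; this should equal a dx + b dy
    assert sp.simplify(-f * v - a) == 0 and sp.simplify(f * u - b) == 0
    # beta(X) = a*u + b*v should vanish (X lies in ker beta)
    assert sp.simplify(a * u + b * v) == 0

check_duality((0, x), (x, 0))               # beta = x dy,  X = x ∂x
check_duality((-y/2, x/2), (x/2, y/2))      # beta = (x dy - y dx)/2, X = (x ∂x + y ∂y)/2
print("both Liouville structures verified")
```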
Liouville structures can exist in other places too, and not just on surfaces: they exist in higher dimensions too. Notice that the fact that \(\beta\) was on a surface was never actually used in the proof above: it could have been any manifold, with any 1-form \( \beta\) such that \( d\beta\) is non-degenerate.
However, not
every manifold has a Liouville structure. In fact, there are many surfaces on which no Liouville structure exists: no compact surface (without boundary) has one.
Why is this? The idea is pretty simple. If you have a closed and bounded surface, it’s pretty hard to have a smooth vector field \( X\) which expands the area! Your surface has a given finite area, and then you flow it along \( X\) — moving the points of the surface around by a diffeomorphism — and now it has exponentially larger area! This is pretty paradoxical, and indeed it’s a contradiction.
The plane escapes this paradox, because it’s not bounded. You can indeed expand the plane by any factor you like, and it’s still the same plane. Surfaces with boundary also escape the paradox, because the flow of \( X\) will not be defined for all time: eventually points will be pushed off the edge.
But a sphere does not escape the paradox. Nor does a torus, or any higher-genus compact surface without boundary.
Further, Liouville structures can only exist in
even dimensions. So you can’t have one on a 3-dimensional space. But you can have one on a 4-dimensional space. Why is this? It can be seen from linear algebra. You simply can’t have non-degenerate 2-forms in odd dimensions. (The easiest way I know to see this is as follows. Let \( \omega\) be a non-degenerate 2-form on an \( n\)-dimensional vector space. Choose a basis \( e_1, \ldots, e_n\) and write \( \omega(e_i, e_j)\) as the \( (i,j)\) term of the \( n \times n\) matrix \( A\) for \( \omega\). The facts that \( \omega\) is antisymmetric and non-degenerate mean that \( A^T= -A\) and \( \det A \neq 0\) respectively. But then \( \det A = \det A^T = \det (-A) = (-1)^n \det A\), so \( (-1)^n = 1\), and \(n\) is even.)
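The determinant argument is easy to see numerically: random antisymmetric matrices in odd dimensions are always singular. A quick check with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)

# An antisymmetric matrix A satisfies det A = (-1)^n det A, so in odd
# dimensions det A = 0: every 2-form on an odd-dimensional space is degenerate.
for n in (3, 5, 7):
    B = rng.standard_normal((n, n))
    A = B - B.T                      # antisymmetric by construction
    assert abs(np.linalg.det(A)) < 1e-10
print("odd-dimensional antisymmetric matrices are singular")
```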
Just as a Liouville structure cannot exist on a compact surface (without boundary), it can’t exist on any
compact manifold (without boundary). The argument is similar: because \( X\) expands the 2-form \( d\beta\), it also expands all its exterior powers, and hence the volume form of the manifold, whatever (even) dimension it may be.
So, we’ve seen that a Liouville structure can only exist on a manifold with even dimension, and which is not compact, or has boundary. (If you know about de Rham cohomology, it’s not difficult to see why: on a compact manifold a symplectic form must represent a nonzero class in \( H^2\), but \( d\beta\) is exact, so its class is zero.)
But when it does, we have a wonderful little geometric triplet \( \beta, d\beta\) and \( X\).
|
Contents Homework 5, ECE438, Fall 2011, Prof. Boutin Question 1
Diagram of "decimation by two" FFT computing N-pt DFT. N=8 in this question.
where $ W_N^k = e^{-j2\pi k/N} $
Recall the definition of DFT:
$ X[k]=\sum_{n=0}^{N-1} x[n]e^{-j2\pi kn/N},\ k=0,...,N-1 $.
For each k, we need N complex multiplications and N-1 complex additions.
In total, we need $ N^2 = 64 $ complex multiplications and $ N^2-N = 56 $ complex additions.
Using the "decimation by two" FFT algorithm, the DFT is computed in two steps. For the first step, two N/2-point DFTs are computed, with $ 2\cdot (\frac{N}{2})^2 $ multiplications and $ 2((\frac{N}{2})^2-\frac{N}{2}) $ additions. For the second step, $ \frac{N}{2} $ multiplications and $ N $ additions are needed.
When $ N=8 $, the total numbers of complex operations are
multiplications: $ 2\cdot (\frac{N}{2})^2 + \frac{N}{2} = 36 $
additions: $ 2((\frac{N}{2})^2-\frac{N}{2}) + N = 32 $
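These counts are easy to tabulate; a quick check of the formulas for N = 8:

```python
N = 8

# Direct DFT: N multiplications and N-1 additions per output, N outputs
direct_mults, direct_adds = N * N, N * (N - 1)

# Decimation by two: two (N/2)-point DFTs, then N/2 twiddle
# multiplications and N additions to combine them
fft_mults = 2 * (N // 2) ** 2 + N // 2
fft_adds = 2 * ((N // 2) ** 2 - N // 2) + N

assert (direct_mults, direct_adds) == (64, 56)
assert (fft_mults, fft_adds) == (36, 32)
print(fft_mults, fft_adds)
```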
Question 2
Diagram of "radix-2" FFT computing 8-pt DFT.
where $ W_N^k = e^{-j2\pi k/N} $
Recall the definition of DFT:
$ X[k]=\sum_{n=0}^{N-1} x[n]e^{-j2\pi kn/N},\ k=0,...,N-1 $
In this question N=8
If we use the summation formula to compute the DFT, for each k we need N complex multiplications and N-1 complex additions.
In total, we need $ N^2=64 $ complex multiplications and $ N(N-1)=56 $ complex additions.
In the decimation-in-time FFT algorithm, we keep halving the number of points until we reach 2-point DFTs. At most, we can decimate $ v=\log_2 N $ times, giving $ v $ levels of DFT. Except for the first level (2-point FFTs), which needs only $ N $ complex additions, each remaining level needs $ N/2 $ complex multiplications and $ N $ complex additions.
In total, we need $ \frac{N}{2}(\log_2 N -1)=8 $ complex multiplications and $ N\log_2 N=24 $ complex additions.
(Note: when $ N $ is large, $ \log_2 N -1 \approx \log_2 N $, so the number of multiplications becomes $ \frac{N}{2}\log_2 N $.)
Question 3
The diagram is identical to the diagram in Question 1 except N=122.
By a similar argument to Question 1, a direct computation of the DFT requires $ N^2 = 14884 $ complex multiplications and $ N^2-N = 14762 $ complex additions.
Using the "decimation by two" FFT algorithm, the total number of multiplications is $ 2\cdot (\frac{N}{2})^2 + \frac{N}{2} = 7503 $ and the total number of additions is $ 2((\frac{N}{2})^2-\frac{N}{2}) + N = 7442 $.
Question 4
Let N denote the number of points of the input signal's DFT. In this question N=6.
1) Direct DFT: if we use the summation formula to compute the DFT, then according to the analysis of Question 1 we need $ N^2=36 $ complex multiplications and $ N(N-1)=30 $ complex additions.
Flow diagram of FFT:
Analytical Expressions:
$ \begin{align} X_6(k)&=\sum_{n=0}^{5} x(n)e^{-\frac{j2\pi kn}{6}} \\ &\text{Change variable n=2m+l, m=0,1,2; l=0,1 } \\ X_6(k)&=\sum_{l=0}^{1}\sum_{m=0}^{2}x(2m+l)e^{-\frac{j2\pi k(2m+l)}{6}} \\ &=\sum_{l=0}^{1}e^{-\frac{j2\pi kl}{6}}\sum_{m=0}^{2}x(2m+l)e^{-\frac{j2\pi km}{3}} \\ &=\sum_{l=0}^{1}W_6^{kl}X_l(k) \text{ ,k=0,1,...,5 } \end{align} $
Note that $ X_l(k) = X_l(k+3),\ k=0,1,2 $
In this FFT algorithm, the computation of the DFT is divided into two levels. First, we compute two 3-point DFTs, which are the DFTs of the even- and odd-indexed points.
In this step we need 4 complex multiplications (when m=0, or k=0,3, no multiplication is needed) and 6 complex additions for each 3-point DFT. Since we do this twice, we need 4*2=8 complex multiplications and 6*2=12 complex additions.
Second, we compute the final DFT by combining the first-level results. According to the analysis of Question 1, we need N/2=3 complex multiplications and N=6 complex additions.
In total, we need 8+3=11 complex multiplications and 12+6=18 complex additions.
2)
Flow diagram of FFT:
Analytical Expressions:
$ \begin{align} X_6(k)&=\sum_{n=0}^{5} x(n)e^{-\frac{j2\pi kn}{6}} \\ &\text{Change variable n=3m+l, m=0,1; l=0,1,2 } \\ X_6(k)&=\sum_{l=0}^{2}\sum_{m=0}^{1}x(3m+l)e^{-\frac{j2\pi k(3m+l)}{6}} \\ &=\sum_{l=0}^{2}e^{-\frac{j2\pi kl}{6}}\sum_{m=0}^{1}x(3m+l)e^{-\frac{j2\pi km}{2}} \\ &=\sum_{l=0}^{2}W_6^{kl}X_l(k) \text{ ,k=0,1,...,5 } \end{align} $
Note that $ X_l(k) = X_l(k+2) = X_l(k+4),\ k=0,1 $
In this FFT algorithm, the computation of the DFT is still divided into two levels. However, we first compute three 2-point DFTs, each pairing a point $x(l)$ with $x(l+3)$. According to the analysis of Question 1, we need N/2=3 complex multiplications and N=6 complex additions.
Second, we compute two 3-point DFTs using the first-level results. We need 4 complex multiplications (when l=0, or k=0,3, no multiplication is needed) and 6 complex additions for each 3-point DFT. Since we do this twice, we need 4*2=8 complex multiplications and 6*2=12 complex additions.
In total, we need 3+8=11 complex multiplications and 6+12=18 complex additions.
Note that the output ordering of the DFT in 2) differs from 1).
Comparing 1) and 2), both methods use the same number of complex operations.
Question 5
Flow Diagram:
$ 5120=5*1024=5*2^{10} $ So we can split the 5120-point DFT into 5 1024-point DFTs using decimation-in-time and compute the 1024-point DFT using the subroutine of radix 2 FFT.
Analytical expression:
$ \begin{align} X^{(5120)}(k)&=\sum_{n=0}^{5119} x(n)e^{-\frac{j2\pi kn}{5120}} \\ &\text{Change variable n=5m+l, m=0,1,...,1023; l=0,1,...,4 } \\ X^{(5120)}(k)&=\sum_{l=0}^{4}\sum_{m=0}^{1023}x(5m+l)e^{-\frac{j2\pi k(5m+l)}{5120}} \\ &=\sum_{l=0}^{4}e^{-\frac{j2\pi kl}{5120}}\sum_{m=0}^{1023}x(5m+l)e^{-\frac{j2\pi km}{1024}} \\ &=\sum_{l=0}^{4}W_{5120}^{kl}X_l^{(1024)}(k) \text{ ,k=0,1,...,5119 } \end{align} $
Direct computation requires $ N^2=5120^2=26214400 $ complex multiplications and $ N(N-1)=5120\cdot5119=26209280 $ complex additions.
Using the FFT, we first perform five 1024-point FFTs, each of which requires $ \frac{1024}{2}\log_2 1024=512\cdot10=5120 $ complex multiplications and $ 1024\log_2 1024=10240 $ complex additions, for a total of 25600 multiplications and 51200 additions.
To calculate each of the 5120 final outputs of the 5120-point DFT, we perform 5 complex multiplications and 4 complex additions. So this step needs 5120*5=25600 complex multiplications and 5120*4=20480 complex additions.
For the FFT-based method we have a total of 25600+25600=51200 complex multiplications and 51200+20480=71680 complex additions.
Comparing the two methods, we see that the FFT-based method is far more efficient than direct computation.
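Tallying the counts (note that the five sub-FFTs together cost five times the single-FFT figure):

```python
from math import log2

M, P = 1024, 5
N = M * P                                # 5120 = 5 * 2**10

direct_mults, direct_adds = N * N, N * (N - 1)

per_fft_mults = (M // 2) * int(log2(M))  # 512 * 10 = 5120
per_fft_adds = M * int(log2(M))          # 1024 * 10 = 10240

# P sub-FFTs, then P twiddle multiplications and P-1 additions per output
fft_mults = P * per_fft_mults + N * P
fft_adds = P * per_fft_adds + N * (P - 1)

assert (direct_mults, direct_adds) == (26214400, 26209280)
assert (fft_mults, fft_adds) == (51200, 71680)
print(fft_mults, fft_adds)
```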
Back to Homework 5
Back to ECE 438 Fall 2011
|
The concepts of on-policy vs off-policy and online vs offline are separate, but do interact to make certain combinations more feasible. When looking at this, it is worth also considering the difference between
prediction and control in Reinforcement Learning (RL). Online vs Offline
These concepts are not specific to RL, many learning systems can be categorised as online or offline (or somewhere in-between).
Online learning algorithms work with data as it is made available. Strictly online algorithms improve incrementally from each piece of new data as it arrives, then discard that data and do not use it again. It is not a requirement, but it is commonly desirable for an online algorithm to forget older examples over time, so that it can adapt to non-stationary populations. Stochastic gradient descent with back-propagation - as used in neural networks - is an example.
Offline learning algorithms work with data in bulk, from a dataset. Strictly offline learning algorithms need to be re-run from scratch in order to learn from changed data. Support vector machines and random forests are strictly offline algorithms (although researchers have constructed online variants of them).
Of the two types,
online algorithms are more general in that you can easily construct an offline algorithm from a strictly online one plus a stored dataset, but the opposite is not true for a strictly offline algorithm. However, this does not necessarily make them superior - often compromises are made in terms of sample efficiency, CPU cost or accuracy when using an online algorithm. Approaches such as mini-batches in neural network training can be viewed as attempts to find a middle ground between online and offline algorithms.
Experience replay, a common RL technique, used in Deep Q Networks amongst others, is another in-between approach. Although you
could store all the experience necessary to fully train an agent in theory, typically you store a rolling history and sample from it. It's possible to argue semantics about this, but I view the approach as being a kind of "buffered online", as it requires low-level components that can work online (e.g. neural networks for DQN). On-policy vs Off-Policy
These are more specific to control systems and RL. Despite the similarities in name between these concepts and online/offline, they refer to a different part of the problem.
On-policy algorithms work with a single policy, often symbolised as $\pi$, and require any observations (state, action, reward, next state) to have been generated using that policy.
Off-policy algorithms work with two policies (sometimes effectively more, though never more than two per step). These are a policy being learned, called the target policy (usually shown as $\pi$), and the policy being followed that generates the observations, called the behaviour policy (called various things in the literature - $\mu$, $\beta$, Sutton and Barto call it $b$ in the latest edition). A very common scenario for off-policy learning is to learn about best guess at optimal policy from an exploring policy, but that is not the definition of off-policy. The primary difference between observations generated by $b$ and the target policy $\pi$ is which actions are selected on each time step. There is also a secondary difference which can be important: The population distribution of both states and actions in the observations can be different between $b$ and $\pi$ - this can have an impact for function approximation, as cost functions (for e.g. NNs) are usually optimised over a population of data.
In both cases, there is no requirement for the observations to be processed strictly online or offline.
In contrast to the relationship between online and offline learning, off-policy is always a strict generalisation of on-policy. You can make any off-policy algorithm into an equivalent on-policy one by setting $\pi = b$. There is a sense in which you can do this by degrees, by making $b$ closer to $\pi$ (for instance, reducing $\epsilon$ in an $\epsilon$-greedy behaviour policy for $b$ where $\pi$ is the fully greedy policy). This can be desirable, as off-policy agents do still need to observe states and actions that occur under the target policy - if that happens rarely because of differences between $b$ and $\pi$, then learning about the target policy will happen slowly.
Prediction vs Control
This can get forgotten due to the focus on search for optimal policies.
The
prediction problem in RL is to estimate the value of a particular state or state/action pair, given an environment and a policy.
The (optimal)
control problem in RL is to find the best policy given an environment.
Solving the control problem when using value-based methods involves both estimating the value of being in a certain state (i.e. solving the prediction problem), and adjusting the policy to make higher value choices based on those estimates. This is called generalised policy iteration.
The main thing to note here is that the prediction problem is stationary (all long-term expected distributions are the same over time), whilst the control problem adds a non-stationary target for the prediction component (the policy changes, so do the expected return, the distribution of states, etc.).
Combinations That Work
Note this is not about choice of algorithms. The strongest driver for algorithm choice is on-policy (e.g. SARSA) vs off-policy (e.g. Q-learning). The same core learning algorithms can often be used online or offline, for prediction or for control.
Online, on-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs), and learns from observation data as it arrives. It should always act the same way (it may be observing some other control system with a fixed, and maybe unknown policy).
Online, off-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs) from the perspective of an arbitrary fixed target policy $\pi$ (which must be defined to the agent), and learns from observation data as it arrives. The observations can be from any behaviour policy $b$ - depending on the algorithm being used, it may be necessary to have $b$ defined as well as $\pi$.
Offline, on-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs), and is given a dataset of observations from the environment of an agent acting using some fixed policy.
Offline, off-policy prediction. A learning agent is set the task of evaluating certain states (or state/action pairs) from the perspective of an arbitrary fixed target policy $\pi$ (which must be defined to the agent), and is given a dataset of observations from the environment of an agent acting using some other policy $b$.
Online, on-policy control. A learning agent is set the task of behaving optimally in an environment, and learns from each observations as it arrives. It will adapt its own policy as it learns, making this a non-stationary problem, and also importantly
making its own history of observations off-policy data.
Online, off-policy control. A learning agent is set the task of behaving optimally in an environment. It may behave and gain observations from a behaviour policy $b$, but learns a separate optimal target policy $\pi$. It is common to link $b$ and $\pi$ - e.g. for $\pi$ to be deterministic greedy policy with respect to estimated action values, and for $b$ to be $\epsilon$-greedy policy with respect to the same action values.
Offline, on-policy control. This is not really possible in general, as an on-policy agent needs to be able to observe data about its current policy. As soon as it has learned a policy different from that in the stored dataset, all the data becomes off-policy to it, and the agent has no valid source data. In some cases you might still be able to get something to work.
Offline, off-policy control. A learning agent is set the task of learning an optimal policy from a stored dataset of observations. The observations can be from any behaviour policy $b$ - depending on the algorithm being used, it may be necessary to have $b$ defined as well as $\pi$.
As you can see above, only one combination, offline on-policy control, causes a clash.
There is a strong skew towards online learning in RL. Approaches that buffer some data, such as Monte Carlo control, experience replay or Dyna-Q, do mix in some of the traits of offline learning, but still require a constant supply of new observations and forget older ones. Control algorithms imply non-stationary data, and that requires forgetting behaviour from the estimators, which is another online learning trait.
However, mixing in a little "offline" data in experience replay can cause some complications for on-policy control algorithms. The experience buffer can contain things that are technically off-policy to the latest iteration of the agent. How much of a problem this is in practice will vary.
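The point from the opening paragraph, that the same core update rule can be driven either by live experience (online) or by a stored dataset (offline), can be sketched in a few lines of Python. The states, actions and rewards below are illustrative toy values, not from any specific library:

```python
# Sketch: one off-policy Q-learning backup, usable with live or stored data.
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[s_next].values()) if Q.get(s_next) else 0.0
    Q.setdefault(s, {}).setdefault(a, 0.0)
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])

# "Offline" use: sweep a fixed dataset of (s, a, r, s') transitions repeatedly.
# "Online" use would instead call q_update once per transition as it arrives.
dataset = [(0, "right", 0.0, 1), (1, "right", 1.0, 2)]
Q = {}
for _ in range(50):
    for s, a, r, s_next in dataset:
        q_update(Q, s, a, r, s_next)
```

With this toy dataset the estimates converge to $Q(1,\text{right}) \to 1$ and $Q(0,\text{right}) \to \gamma \cdot 1 = 0.9$, regardless of whether the transitions were replayed from storage or consumed one at a time.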
|
The situation with solids is considerably more complicated, with different speeds in different directions, in different kinds of geometries, and differences between transverse and longitudinal waves. Nevertheless, the speed of sound in solids is larger than in liquids and definitely larger than in gases. A representative value of Young's modulus for steel is \(160 \times 10^9\) N/\(m^2\). A list of materials with their typical velocities can be found in the above book. Using this general tabulated value gives a sound speed for structural steel of
\[
\nonumber c = \sqrt{ E \over \rho} = \sqrt{160 \times 10^{9}\, N/m^{2} \over 7860\, kg/m^3} = 4512\ m/s \tag{19} \]
Compared to tabulated values, this example value for stainless steel lies between the speeds for longitudinal and transverse waves.
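Equation (19) is easy to reproduce; a minimal check using the same values quoted in the text:

```python
import math

# Recompute equation (19): c = sqrt(E / rho) for structural steel.
E = 160e9      # Young's modulus, N/m^2 (value used in the text)
rho = 7860.0   # density, kg/m^3

c = math.sqrt(E / rho)   # approximately 4512 m/s, matching the text
```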
Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
|
Answer
$s(1.466)\approx 2$. This means that after $t=1.466$ seconds, the weight is about 2 inches above the equilibrium position.
Work Step by Step
We calculate $s(1.466)$ by substituting $t=1.466$ into the equation and evaluating:
$s(t)=-4\cos 10t$
$s(1.466)=-4\cos (10\times1.466)$
$s(1.466)=-4\cos (14.66)$
$s(1.466)\approx-4(-0.5)$
$s(1.466)\approx 2$
We know that the motion starts from $-4$ inches. Since the value of $s$ is positive, the weight has passed the equilibrium position and is moving upwards. Therefore, after $t=1.466$ seconds, the weight is about 2 inches above the equilibrium position.
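The substitution can be verified directly (note the argument of the cosine is in radians):

```python
import math

# Check s(1.466) for s(t) = -4*cos(10*t), t in seconds, s in inches.
s = -4 * math.cos(10 * 1.466)
# s comes out close to 2, i.e. about 2 inches above equilibrium
```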
|
An ideal diatomic gas undergoes an elliptic cyclic process characterized by the following points in a $PV$ diagram:
$$\left(\tfrac{3}{2}P_1,\ V_1\right)\qquad \left(2P_1,\ \tfrac{V_1+V_2}{2}\right)\qquad \left(\tfrac{3}{2}P_1,\ V_2\right)\qquad \left(P_1,\ \tfrac{V_1+V_2}{2}\right)$$
A rough sketch:
This system is used as a heat engine (converting the added heat into mechanical work).
Evaluate the efficiency of this engine
We know that the efficiency is defined as the benefit/cost ratio:
$$e = \frac{W}{Q_h}$$
Let's focus first on the work done by the engine; taking into account the quasistatic approximation, $W=PV$. Then:
$$W = (P_2 - P_1)(V_2 - V_1)$$
Note that from the given points we can guess that $P_2 = 2P_1$. Then:
$$W = P_1(V_2 - V_1)$$
Now let's focus on $Q_h$.
I have the following issue here: none of the 4 steps of the cycle has either $P$ or $V$ constant. This means that the strategy of using:
$$Q = nc\Delta T$$
won't work, because you can use neither $c_p$ nor $c_v$.
However, when we deal with a rectangular cycle:
It would be really easy to derive expressions for both $Q_a$ and $Q_b$, and the efficiency of the system would follow. That is because in each step where $Q_h$ is added, either $P$ or $V$ is constant (and thus $Q = nc\Delta T$ works).
What to do with the elliptical cycle to get $Q_h$?
EDIT
My bad, the net work done by the working substance over the cycle is the area enclosed by the $PV$ loop. So, as Chet Miller pointed out, the work is $\pi$ times the product of the semi-axes:
$$W = \pi\,\frac{(P_2 - P_1)}{2}\,\frac{(V_2 - V_1)}{2}$$
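The enclosed-area value can be checked numerically. Reading the semi-axes off the four given points gives $a = (2P_1 - \tfrac{3}{2}P_1) \cdot 2/2 = P_1/2$ in $P$ and $b = (V_2-V_1)/2$ in $V$, so the loop encloses area $\pi a b$. The sketch below uses illustrative values $P_1=1$, $V_1=1$, $V_2=3$, which are not from the problem statement:

```python
import math

# Numerically integrate W = loop integral of P dV around the ellipse
# through the four given points, with illustrative values P1=1, V1=1, V2=3.
P1, V1, V2 = 1.0, 1.0, 3.0
a = P1 / 2                        # P semi-axis (from 3/2*P1 up to 2*P1)
b = (V2 - V1) / 2                 # V semi-axis
Pc, Vc = 1.5 * P1, (V1 + V2) / 2  # ellipse centre

N = 20000
W = 0.0
for i in range(N):                # trapezoid rule over the parameter angle
    t0 = 2 * math.pi * i / N
    t1 = 2 * math.pi * (i + 1) / N
    pa, pb = Pc + a * math.sin(t0), Pc + a * math.sin(t1)
    va, vb = Vc + b * math.cos(t0), Vc + b * math.cos(t1)
    W += 0.5 * (pa + pb) * (vb - va)

# |W| equals pi*a*b, the area enclosed by the loop
```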
I have been trying to solve the heat equation so that we get the two angles.
So what I did was deriving $P$ and $V$ wrt the angle:
$$dP = (P_{max} -P_0)\cos \theta d\theta$$
$$dV = (V_{max} -V_0)\sin \theta d\theta$$
And plugging it into 3:
$$0=[-(C_v+R)P(V_{max} -V_0)\sin \theta+C_VV(P_{max} -P_0)\cos \theta]$$
This above equation is satisfied by two $\theta$ angles.
But how to solve it?
|
As I understand it a perfect electrical conductor (PEC) will expel all electric fields and time-varying magnetic fields. For this post I'm assuming low enough frequencies as to ignore displacement current and wave phenomenon. As a result the boundary conditions for the magnetic field become:
$$\hat n \times (H_1 - H_2) = \hat n \times H = K$$
$$\hat n \cdot (\mu_0 H_1 - \mu_0 H_2) = \hat n \cdot \mu_0 H = 0$$
I understand these to mean that there is no normal component of the magnetic field at the surface and that the surface current density $K$ is equal to the tangential $H$. Does this also imply what the phase of $K$ should be?
Based on this reference I am going to say that $H_i$ is a field which already exists in the environment before the PEC was introduced, and that no current sources are present locally. And, that $H_e$ is the field that is produced once a PEC is introduced. The total field is $H=H_i+H_e$. In this case $H_e$ is entirely responsible for enforcing the boundary condition of expelling $H_i$ from the material, and $K$ is the cause of $H_e$. Naively, without doing any math, I would have assumed that the phases of $H_i$, $H_e$ and $K$ should be the same.
But I also would have assumed, like this wiki article on Eddy currents, that Faraday's law of induction ($\nabla\times E_e = -\partial B/ \partial t$) would describe the process by which $K$ is generated. Unless the spatial derivative has an effect that I don't understand, it seems that there'd be a phase difference between $B$ and $K$ of 90°.
In the reference listed earlier, which was specifically written for finite conductivity, you can see that the phase of the current near the surface is 45°. I reproduced those plots with a significantly higher conductivity (see figure below) and the phase remains 45°.
What I am confused about is that if $K$ and $B$ are not in phase, then wouldn't there be times when $B$ would dip into the conductor violating the boundary condition? And since clearly I am wrong, why is it 45° and how does that not violate the boundary conditions?
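One standard way to see where the 45° comes from at finite conductivity is the surface impedance of a good conductor, $Z_s=\sqrt{j\omega\mu_0/\sigma}$, whose phase is exactly 45° and which fixes the phase of the near-surface current relative to the tangential field. A minimal numeric sketch; the frequency and conductivity values are illustrative:

```python
import cmath
import math

# Phase of the surface impedance Z_s = sqrt(j*omega*mu0/sigma) of a good
# conductor; the 45-degree result is independent of the numbers chosen.
omega = 2 * math.pi * 1e6   # angular frequency, rad/s (illustrative)
mu0 = 4e-7 * math.pi        # vacuum permeability, H/m
sigma = 5.8e7               # copper-like conductivity, S/m (illustrative)

Zs = cmath.sqrt(1j * omega * mu0 / sigma)
phase_deg = math.degrees(cmath.phase(Zs))   # 45 degrees
```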
|
ISSN:
1930-5311
eISSN:
1930-532X
All Issues
Journal of Modern Dynamics
January 2008 , Volume 2 , Issue 1
The editors of the Journal of Modern Dynamics are happy to dedicate this issue to Gregory Margulis, who, over the last four decades, has influenced dynamical systems as deeply as few others have, and who has blazed broad trails in the application of dynamical systems to other fields of core mathematics. For the full preface, please click the "full text" button above. Additional editors: Leonid Polterovich, Ralf Spatzier, Amie Wilkinson, and Anton Zorich.
Abstract:
This volume is devoted to the mathematical works of Gregory Margulis, and their influence. The editors have asked me to write about aspects of Margulis’ career that might be of general human interest. I shall restrict myself to experiences that I personally have witnessed. Thus, this will be one colleague’s observations of Grisha’s life in the USSR and in the USA.
Abstract:
When I received the unexpected but attractive suggestion to write something about Gregory (Grisha) Margulis I began to wonder what I could communicate to the reader. I wanted to avoid a stereotypical anniversary article of the kind seen for example in Russian Mathematical Surveys or Physics Uspechi that lists some biographical dates of the honoree as well as his titles and honors and reviews his scientific results. I will instead tell some episodes of my many years of acquaintance with Grisha. Perhaps, it is better to say that these are some observations on our life, based on our lasting friendship.
Abstract:
We consider a semi-simple algebraic group $\mathbf G$ defined over a local field of zero characteristic and we denote by $G$ the group of its $k$-rational points. For $\Gamma$ a "large" sub-semigroup of $G$ we define a closed subgroup 〈Spec$\Gamma$〉 associated with $\Gamma$, and we show that 〈Spec$\Gamma$〉 is large in a certain sense. This allows us to study the $\Gamma$-orbit closures for certain $\Gamma$-actions. The analytic structure of closed subgroups of $G$, over $\mathbb R$ or $\mathbb Q_{p}$, allows us to use Lie algebra techniques. The properties of the limit set of $\Gamma$ are developed; they play an important role in the proofs.
Abstract:
Given an $m \times n$ real matrix $Y$, an unbounded set $\mathcal{T}$ of parameters $t = (t_1, \ldots, t_{m+n}) \in \mathbb{R}_+^{m+n}$ with $\sum_{i=1}^m t_i = \sum_{j=1}^{n} t_{m+j}$ and $0<\varepsilon \leq 1$, we say that Dirichlet's Theorem can be $\varepsilon$-improved for $Y$ along $\mathcal{T}$ if for every sufficiently large $t \in \mathcal{T}$ there are nonzero $q \in \mathbb Z^n$ and $p \in \mathbb Z^m$ such that
$|Y_i q - p_i| < \varepsilon e^{-t_i},$ $i = 1,\ldots, m$
$|q_j| < \varepsilon e^{t_{m+j}},$ $j = 1,\ldots, n$
(here $Y_1,\ldots,Y_m$ are the rows of $Y$). We show that for any $\varepsilon<1$ and any $\mathcal{T}$ 'drifting away from walls', see (1.8), Dirichlet's Theorem cannot be $\varepsilon$-improved along $\mathcal{T}$ for Lebesgue almost every $Y$. In the case $m = 1$ we also show that for a large class of measures $\mu$ (introduced in [14]) there is $\varepsilon_0>0$ such that for any unbounded $\mathcal{T}$ drifting away from walls, any $\varepsilon<\varepsilon_0$, and for $\mu$-almost every $Y$, Dirichlet's Theorem cannot be $\varepsilon$-improved along $\mathcal{T}$. These measures include natural measures on sufficiently regular smooth manifolds and fractals.
Our results extend those of several authors, beginning with the work of Davenport and Schmidt done in the late 1960s. The proofs rely on a translation of the problem into a dynamical one regarding the action of a diagonal semigroup on the space $\mathrm{SL}_{m+n}(\mathbb R)/\mathrm{SL}_{m+n}(\mathbb Z)$.
Abstract:
We establish stable ergodicity of diffeomorphisms with partially hyperbolic attractors whose Lyapunov exponents along the central direction are all negative with respect to invariant SRB-measures.
Abstract:
We consider measures on locally homogeneous spaces $\Gamma \backslash G$ which are invariant and have positive entropy with respect to the action of a single diagonalizable element $a \in G$ by translations, and prove a rigidity statement regarding a certain type of measurable factors of this action.
This rigidity theorem, which is a generalized and more conceptual form of the low entropy method of [14,3] is used to classify positive entropy measures invariant under a one parameter group with an additional recurrence condition for $G=G_1 \times G_2$ with $G_1$ a rank one algebraic group. Further applications of this rigidity statement will appear in forthcoming papers.
Abstract:
Let $Q$ be a nondegenerate indefinite quadratic form on $\mathbb{R}^n$, $n\geq 3$, which is not a scalar multiple of a rational quadratic form, and let $C_Q=\{v\in \mathbb R^n | Q(v)=0\}$. We show that given $v_1\in C_Q$, for almost all $v\in C_Q \setminus \mathbb R v_1$ the following holds: for any $a\in \mathbb R$, any affine plane $P$ parallel to the plane of $v_1$ and $v$, and $\epsilon >0$ there exist primitive integral $n$-tuples $x$ within $\epsilon $ distance of $P$ for which $|Q(x)-a|<\epsilon$. An analogous result is also proved for almost all lines on $C_Q$.
Abstract:
Moduli spaces of Abelian and quadratic differentials are stratified by multiplicities of zeroes; connected components of the strata correspond to ergodic components of the Teichmüller geodesic flow. It is known that the strata are not necessarily connected; the connected components were recently classified by M. Kontsevich and the author and by E. Lanneau. The strata can be also viewed as families of flat metrics with conical singularities and with $\mathbb Z$/$2 \mathbb Z$-holonomy.
For every connected component of each stratum of Abelian and quadratic differentials we construct an explicit representative which is a Jenkins–Strebel differential with a single cylinder. By an elementary variation of this construction we represent almost every Abelian (quadratic) differential in the corresponding connected component of the stratum as a polygon with identified pairs of edges, where combinatorics of identifications is explicitly described.
Specifically, the combinatorics is expressed in terms of a generalized permutation. For any component of any stratum of Abelian and quadratic differentials we construct a generalized permutation in the corresponding extended Rauzy class.
|
Producing Reports With knitr
Overview
Teaching: 60 min
Exercises: 15 min
Questions
How can I integrate software and reports?
Objectives
Understand the value of writing reproducible reports
Learn how to recognise and compile the basic components of an R Markdown file
Become familiar with R code chunks, and understand their purpose, structure and options
Demonstrate the use of inline chunks for weaving R outputs into text blocks, for example when discussing the results of some calculations
Be aware of alternative output formats to which an R Markdown file can be exported
Data analysis reports
Data analysts tend to write a lot of reports, describing their analyses and results, for their collaborators or to document their work for future reference.
Many new users begin by writing a single R script containing all of their work, then sharing the analysis by emailing the script and various graphs as attachments. But this can be cumbersome, requiring a lengthy discussion to explain which attachment corresponds to which result.
Writing formal reports with Word or LaTeX can simplify this by incorporating both the analysis report and output graphs into a single document. But tweaking the formatting to make figures look right and fixing obnoxious page breaks can be tedious, leading to a lengthy "whack-a-mole" game of fixing new mistakes resulting from a single formatting change.
Creating a web page (as an html file) by using R Markdown makes things easier. The report can be one long stream, so tall figures that wouldn't ordinarily fit on one page can be kept full size and easier to read, since the reader can simply keep scrolling. Formatting is simple and easy to modify, allowing you to spend more time on your analyses instead of writing reports.
Literate programming
Ideally, such analysis reports are reproducible documents: if an error is discovered, or if some additional subjects are added to the data, you can just re-compile the report and get the new or corrected results (versus having to reconstruct figures, paste them into a Word document, and further hand-edit various detailed results).
The key R package is knitr. It allows you to create a document that is a mixture of text and chunks of code. When the document is processed by knitr, chunks of code will be executed, and graphs or other results inserted into the final document.
This sort of idea has been called “literate programming”.
knitr allows you to mix basically any sort of text with code from different programming languages, but we recommend that you use R Markdown, which mixes Markdown with R. Markdown is a light-weight mark-up language for creating web pages.
Creating an R Markdown file
Within RStudio, click File → New File → R Markdown and you’ll get a dialog box like this:
You can stick with the default (HTML output), but give it a title.
Basic components of R Markdown
The initial chunk of text (header) contains instructions for R to specify what kind of document will be created, and the options chosen. You can use the header to give your document a title, author, date, and tell it that you’re going to want to produce html output (in other words, a web page).
---
title: "Initial R Markdown document"
author: "Karl Broman"
date: "April 23, 2015"
output: html_document
---
You can delete any of those fields if you don't want them included. The double-quotes aren't strictly necessary in this case. They're mostly needed if you want to include a colon in the title.
RStudio creates the document with some example text to get you started. Note below that there are chunks like
```{r}
summary(cars)
```
These are chunks of R code that will be executed by knitr and replaced by their results. More on this later.
Also note the web address that's put between angle brackets (< >) as well as the double-asterisks in **Knit**. This is Markdown.
Markdown
Markdown is a system for writing web pages by marking up the text much as you would in an email rather than writing html code. The marked-up text gets converted to html, replacing the marks with the proper html code.
For now, let’s delete all of the stuff that’s there and write a bit of markdown.
You make things bold using two asterisks, like this: **bold**, and you make things italics by using underscores, like this: _italics_.
You can make a bulleted list by writing a list with hyphens or asterisks, like this:
* bold with double-asterisks
* italics with underscores
* code-type font with backticks
or like this:
- bold with double-asterisks
- italics with underscores
- code-type font with backticks
Each will appear as:
bold with double-asterisks
italics with underscores
code-type font with backticks
You can use whatever method you prefer, but be consistent. This maintains the readability of your code.
You can make a numbered list by just using numbers. You can even use the same number over and over if you want:
1. bold with double-asterisks
1. italics with underscores
1. code-type font with backticks
This will appear as:
bold with double-asterisks
italics with underscores
code-type font with backticks
You can make section headers of different sizes by initiating a line with some number of # symbols:
# Title
## Main section
### Sub-section
#### Sub-sub section
You compile the R Markdown document to an html webpage by clicking the "Knit" button in the upper-left.
Challenge 1
Create a new R Markdown document. Delete all of the R code chunks and write a bit of Markdown (some sections, some italicized text, and an itemized list).
Convert the document to a webpage.
Solution to Challenge 1
In RStudio, select File > New file > R Markdown…
Delete the placeholder text and add the following:
# Introduction

## Background on Data

This report uses the *gapminder* dataset, which has columns that include:

* country
* continent
* year
* lifeExp
* pop
* gdpPercap

## Background on Methods
Then click the ‘Knit’ button on the toolbar to generate an html document (webpage).
A bit more Markdown
You can make a hyperlink like this:
[text to show](http://the-web-page.com).
You can include an image file like this:

You can do subscripts (e.g., F₂) with F~2~ and superscripts (e.g., F²) with F^2^.
If you know how to write equations in LaTeX, you can use $ $ and $$ $$ to insert math equations, like $E = mc^2$ and
$$y = \mu + \sum_{i=1}^p \beta_i x_i + \epsilon$$
You can review Markdown syntax by navigating to the “Markdown Quick Reference” under the “Help” field in the toolbar at the top of RStudio.
R code chunks
The real power of Markdown comes from mixing markdown with chunks of code. This is R Markdown. When processed, the R code will be executed; if they produce figures, the figures will be inserted in the final document.
The main code chunks look like this:
```{r load_data}
gapminder <- read.csv("gapminder.csv")
```
That is, you place a chunk of R code between ```{r chunk_name} and ```. You should give each chunk a unique name, as it will help you to fix errors and, if any graphs are produced, the file names will be based on the name of the code chunk that produced them.
Challenge 2
Add code chunks to:
Load the ggplot2 package
Read the gapminder data
Create a plot
Solution to Challenge 2

```{r load-ggplot2}
library("ggplot2")
```

```{r read-gapminder-data}
gapminder <- read.csv("gapminder.csv")
```

```{r make-plot}
plot(lifeExp ~ year, data = gapminder)
```

How things get compiled
When you press the "Knit" button, the R Markdown document is processed by [knitr](http://yihui.name/knitr) and a plain Markdown document is produced (as well as, potentially, a set of figure files): the R code is executed and replaced by both the input and the output; if figures are produced, links to those figures are included.
The Markdown and figure documents are then processed by the tool pandoc, which converts the Markdown file into an html file, with the figures embedded.
Chunk options
There are a variety of options to affect how the code chunks are treated. Here are some examples:
Use echo=FALSE to avoid having the code itself shown.
Use results="hide" to avoid having any results printed.
Use eval=FALSE to have the code shown but not evaluated.
Use warning=FALSE and message=FALSE to hide any warnings or messages produced.
Use fig.height and fig.width to control the size of the figures produced (in inches).
So you might write:
```{r load_libraries, echo=FALSE, message=FALSE}
library("dplyr")
library("ggplot2")
```
Often there will be particular options that you'll want to use repeatedly; for this, you can set global chunk options, like so:

```{r global_options, echo=FALSE}
knitr::opts_chunk$set(fig.path="Figs/", message=FALSE,
                      warning=FALSE, echo=FALSE,
                      results="hide", fig.width=11)
```
The fig.path option defines where the figures will be saved. The / here is really important; without it, the figures would be saved in the standard place but just with names that begin with Figs.
If you have multiple R Markdown files in a common directory, you might want to use fig.path to define separate prefixes for the figure file names, like fig.path="Figs/cleaning-" and fig.path="Figs/analysis-".
Challenge 3
Use chunk options to control the size of a figure and to hide the code.
Solution to Challenge 3

```{r echo = FALSE, fig.width = 3}
plot(faithful)
```
You can review all of the R chunk options by navigating to the "R Markdown Cheat Sheet" under the "Cheatsheets" section of the "Help" field in the toolbar at the top of RStudio.
Inline R code
You can make every number in your report reproducible. Use `r and ` for an in-line code chunk, like so: `r round(some_value, 2)`. The code will be executed and replaced with the value of the result.
Don’t let these in-line chunks get split across lines.
Perhaps precede the paragraph with a larger code chunk that does calculations and defines variables, with include=FALSE for that larger chunk (which is the same as echo=FALSE and results="hide").
Rounding can produce differences in output in such situations. You may want 2.0, but round(2.03, 1) will give just 2.
Challenge 4
Try out a bit of in-line R code.
Solution to Challenge 4
Here's some inline code to determine that 2 + 2 = `r 2+2`.
Other output options
You can also convert R Markdown to a PDF or a Word document. Click the little triangle next to the "Knit" button to get a drop-down menu. Or you could put pdf_document or word_document in the initial header of the file.
Tip: Creating PDF documents
Creating .pdf documents may require installation of some extra software. If required this is detailed in an error message.
Resources
Knitr in a knutshell tutorial
Dynamic Documents with R and knitr (book)
R Markdown documentation
R Markdown cheat sheet
Getting started with R Markdown
R Markdown: The Definitive Guide (book by the RStudio team)
Reproducible Reporting
The Ecosystem of R Markdown
Introducing Bookdown
Key Points
Mix reporting written in R Markdown with software written in R.
Specify chunk options to control formatting.
Use knitr to convert these documents into PDF and other formats.
|
I have a question about solving equations with trigonometric functions. To my understanding, one can use inverse functions like this:
$$\sin \theta = a, \quad \arcsin a = \theta$$
But I do not know how to solve something with more than one function, like this one:
$$xy\sin \theta = zxy\cos \theta $$
I was able to simplify it to:
$$\sin \theta = z\cos \theta $$
How can I rewrite this to be equal to theta?
Sorry if I missed anything; this is for an intro calculus class. Thanks.
By the way, I was unable to get something from these: Multiple trigonometric functions, How can I solve for a single variable which occurs in multiple trigonometric functions in an equation?
|
First: The paper references [37] for Levy's Lemma, but you will find no mention of "Levy's Lemma" in [37]. You will find it called "Levy's Inequality", which is called Levy's Lemma in this, which is not cited in the paper you mention.
Second: There is an easy proof that this claim is false for VQE. In quantum chemistry we optimize the parameters of a wavefunction ansatz $|\Psi(\vec{p})\rangle$ in order to get the lowest (i.e. most accurate) energy. The energy is evaluated by:
$$E_{\vec{p}} = \frac{\left\langle \Psi(\vec{p})\right|H\left|\Psi(\vec{p})\right\rangle}{\left\langle\Psi(\vec{p}) \right|\left.\Psi(\vec{p}) \right\rangle}. $$
VQE just means we use a quantum computer to evaluate this energy, and a classical computer to choose how to improve the parameters in $\vec{p}$ so that the energy will be lower in the next quantum iteration.
So whether or not the "gradient will be 0 almost everywhere when the number of parameters in $\vec{p}$ is large" does not depend at all on whether we are using VQE (on a quantum computer) or just running a standard quantum chemistry program (like Gaussian) on a classical computer. Quantum chemists typically variationally optimize the above energy with up to $10^{10}$ parameters in $\vec{p}$, and the only reason we don't go beyond that is that we run out of RAM, not because the energy landscape starts to become flat. In this paper you can see at the end of the abstract that they calculated the energy for a wavefunction with about $10^{12}$ parameters, where the parameters are coefficients of Slater determinants. It is generally known that the energy landscape is not so flat (as it would be if the gradient were 0 almost everywhere) even when there are a trillion parameters or more.
Conclusion: The application of Levy's Lemma is going to depend on the particular energy landscape that you have, which will depend on both $H$ and your ansatz $|\Psi(\vec{p})\rangle$. In the case of their particular implementation of QNNs, they have found an application of Levy's Lemma to be appropriate. In the case of VQE, we have a counterexample to the claim that Levy's Lemma "always" applies. The counterexample where Levy's Lemma does not apply is when $H$ is a molecular Hamiltonian and $|\Psi\rangle$ is a CI wavefunction.
|
Answer
The central angle is $64.5^{\circ}$
Work Step by Step
Let $\theta$ be the angle in radians and let $r$ be the radius. We can use the arc length $d$ to write an expression for the radius $r$:
$d = \theta ~r$
$r = \frac{d}{\theta}$
Let $A$ be the area of the sector. Then the ratio of the angle $\theta$ to $2\pi$ is equal to the ratio of the sector area to the area of the whole circle:
$\frac{\theta}{2\pi} = \frac{A}{\pi ~r^2}$
$\frac{\theta}{2\pi} = \frac{A}{\pi ~(\frac{d}{\theta})^2}$
$\frac{\theta}{2\pi} = \frac{A~\theta^2}{\pi ~d^2}$
$\theta = \frac{d^2}{2~A}$
$\theta = \frac{(6.0~cm)^2}{(2)(16~cm^2)}$
$\theta = 1.125~rad$
We can convert the angle $\theta$ to degrees:
$\theta = (1.125~rad)(\frac{180^{\circ}}{\pi~rad}) \approx 64.5^{\circ}$
The central angle is $64.5^{\circ}$
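The arithmetic can be checked directly; note that converting $1.125$ rad to degrees uses the factor $180/\pi$, which gives about $64.5^{\circ}$:

```python
import math

# Re-derive the central angle from arc length d and sector area A.
d = 6.0    # arc length, cm
A = 16.0   # sector area, cm^2

theta = d**2 / (2 * A)       # radians
deg = math.degrees(theta)    # converts with the factor 180/pi
```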
|
@user193319 I believe the natural extension to multigraphs is just ensuring that $\#(u,v) = \#(\sigma(u),\sigma(v))$ where $\# : V \times V \rightarrow \mathbb{N}$ counts the number of edges between $u$ and $v$ (which would be zero).
I have this exercise: Consider the ring $R$ of polynomials in $n$ variables with integer coefficients. Prove that the polynomial $f(x _1 ,x _2 ,...,x _n ) = x _1 x _2 ···x _n$ has $2^ {n+1} −2$ non-constant polynomials in R dividing it.
But, for $n=2$, I can't find any non-constant divisor of $f(x,y)=xy$ other than $x$, $y$, and $xy$.
I am presently working through example 1.21 in Hatcher's book on wedge sums of topological spaces. He makes a few claims which I am having trouble verifying. First, let me set-up some notation.Let $\{X_i\}_{i \in I}$ be a collection of topological spaces. Then $\amalg_{i \in I} X_i := \cup_{i ...
Each of the six faces of a die is marked with an integer, not necessarily positive. The die is rolled 1000 times. Show that there is a time interval such that the product of all rolls in this interval is a cube of an integer. (For example, it could happen that the product of all outcomes between 5th and 20th throws is a cube; obviously, the interval has to include at least one throw!)
On Monday, I ask for an update and get told they’re working on it. On Tuesday I get an updated itinerary!... which is exactly the same as the old one. I tell them as much and am told they’ll review my case
@Adam no. Quite frankly, I never read the title. The title should not contain additional information to the question.
Moreover, the title is vague and doesn't clearly ask a question.
And even more so, your insistence that your question is blameless with regards to the reports indicates more than ever that your question probably should be closed.
If all it takes is adding a simple "My question is that I want to find a counterexample to _______" to your question body and you refuse to do this, even after someone takes the time to give you that advice, then ya, I'd vote to close myself.
but if a title inherently states what the OP is looking for, I hardly see the fact that it has been explicitly restated as a reason for it to be closed. No, it was because I originally had a lot of errors in the expressions when I typed them out in LaTeX, but I fixed them almost straight away
lol I registered for a forum on Australian politics and it just hasn't sent me a confirmation email at all how bizarre
I have a nother problem: If Train A leaves at noon from San Francisco and heads for Chicago going 40 mph. Two hours later Train B leaves the same station, also for Chicago, traveling 60mph. How long until Train B overtakes Train A?
@swagbutton8 as a check, suppose the answer were 6pm. Then train A will have travelled at 40 mph for 6 hours, giving 240 miles. Similarly, train B will have travelled at 60 mph for only four hours, giving 240 miles. So that checks out
By contrast, the answer key result of 4pm would mean that train A has gone for four hours (so 160 miles) and train B for 2 hours (so 120 miles). Hence A is still ahead of B at that point
So yeah, at first glance I’d say the answer key is wrong. The only way I could see it being correct is if they’re including the change of time zones, which I’d find pretty annoying
But 240 miles seems waaay too short to cross two time zones
So my inclination is to say the answer key is nonsense
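For what it's worth, a quick script (my own sketch, not from the chat) confirms the 6 pm answer by solving 40t = 60(t − 2):

```python
def overtake_hour(speed_a, speed_b, head_start_hours):
    """Hours after train A departs until train B catches up.

    A has covered speed_a * t miles; B has covered speed_b * (t - head_start).
    Setting them equal gives t = speed_b * head_start / (speed_b - speed_a).
    """
    return speed_b * head_start_hours / (speed_b - speed_a)

t = overtake_hour(40, 60, 2)
print(t)             # 6.0 hours after noon, i.e. 6 pm
print(40 * t)        # 240.0 miles for train A
print(60 * (t - 2))  # 240.0 miles for train B
```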
You can actually show this using only that the derivative of a function is zero if and only if it is constant, the exponential function differentiates to almost itself, and some ingenuity. Suppose that the equation starts in the equivalent form$$ y'' - (r_1+r_2)y' + r_1r_2 y =0. \tag{1} $$(Obvi...
Hi there,
I'm currently going through a proof of why all general solutions to second-order ODEs look the way they do. I have a question regarding the linked answer.
Where does the term e^{(r_1-r_2)x} come from?
It seems like it is taken out of the blue, but it yields the desired result.
|
For any integers $a,b,c\ge 0$,one has the well known identity or Dixon's Theorem:$$\sum_{k\in\mathbb{Z}} (-1)^k\left(\begin{array}{c}a+b\\a+k\end{array}\right)\left(\begin{array}{c}b+c\\b+k\end{array}\right)\left(\begin{array}{c}c+a\\c+k\end{array}\right)\ =\ \frac{(a+b+c)!}{a!\ b!\ c!}\ .$$There are tons of proofs in the literature, but I am wondering if there is one which
only uses (possibly repeated) applications of the Chu-Vandermonde identity$$\sum_{j\ge 0}\left(\begin{array}{c}a\\j\end{array}\right)\left(\begin{array}{c}b\\c-j\end{array}\right)\ =\ \left(\begin{array}{c}a+b\\c\end{array}\right)\ .$$
I should add, for more context, that I found such a proof and was wondering if something of that kind appeared in the literature. I would be very surprised if it didn't. A good example of what I mean by a proof only using Chu-Vandermonde is the derivation of the single index formula for 6j symbols by Racah in appendix B of his 1942 article "Theory of complex spectra II". BTW, appendix A of the same article also contains a proof of the above identity.
Edit (Mar 28, 2019):
The proof that I found is contained in Section 3 of the paper: "An algebraic independence result related to a conjecture of Dixmier on binary form invariants" which is now available. Given that I did not get an answer to this question since it was posted a few months ago, I would tend to think that my proof is new. Again, if you know otherwise, please point me towards the relevant reference.
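As a side note (not part of the original question), the identity is easy to check numerically for small $a,b,c$, since the summand vanishes outside $|k| \le \min(a,b,c)$:

```python
from math import comb, factorial

def dixon_lhs(a, b, c):
    # k runs over the finite range where all three binomials can be nonzero
    return sum((-1 if k % 2 else 1)
               * comb(a + b, a + k)
               * comb(b + c, b + k)
               * comb(c + a, c + k)
               for k in range(-min(a, b, c), min(a, b, c) + 1))

def dixon_rhs(a, b, c):
    return factorial(a + b + c) // (factorial(a) * factorial(b) * factorial(c))

# exhaustive check for all a, b, c up to 5
assert all(dixon_lhs(a, b, c) == dixon_rhs(a, b, c)
           for a in range(6) for b in range(6) for c in range(6))
```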
|
I could need some advice on extensions of the CIR model.
The standard CIR reads
$dr(t)=\kappa(\theta-r(t))dt + \sigma \sqrt{r(t)} dW(t)$.
A possible extension, if we would like the short-rate to also include negative values, could be a displaced version, so that $r(t)+\alpha$, where $\alpha>0$, follows a CIR model.
Further, to fit the initial term structure one could also consider the CIR++ (can be seen in Brigo et al) which is that
$r(t)=x(t)+\phi(t)$,
where $x$ is CIR and $\phi(t)$ is deterministic and chosen to fit the initial term structure.
My question is if it would make sense to consider a displaced CIR++, that is that $r(t)+\alpha=x(t)+\phi(t)$. My immediate thought is that the $\alpha$ does not provide any additional value for the model, and that the $\phi$-function already makes it possible for the short-rate to be negative?
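To illustrate that intuition numerically (a hedged sketch with made-up parameters, not taken from any reference): a displaced CIR++ with shift $\alpha$ produces the same short rate as a plain CIR++ whose deterministic function is $\tilde\phi(t)=\phi(t)-\alpha$, so $\alpha$ is simply absorbed into $\phi$.

```python
import random

def cir_path(x0, kappa, theta, sigma, dt, n, seed=0):
    """Euler (full-truncation) simulation of dx = kappa*(theta-x)dt + sigma*sqrt(x)dW."""
    rng = random.Random(seed)
    x = [x0]
    for _ in range(n):
        xp = max(x[-1], 0.0)  # full truncation keeps the square root well defined
        x.append(x[-1] + kappa * (theta - xp) * dt
                 + sigma * (xp ** 0.5) * rng.gauss(0.0, 1.0) * dt ** 0.5)
    return x

phi = lambda t: 0.01 + 0.002 * t      # toy deterministic shift "fitted" to the curve
alpha = 0.03
dt, n = 1.0 / 252, 252
x = cir_path(0.02, 0.5, 0.04, 0.1, dt, n)

# displaced CIR++:  r(t) = x(t) + phi(t) - alpha
r_displaced = [xi + phi(i * dt) - alpha for i, xi in enumerate(x)]
# plain CIR++ with phi absorbing the shift:  r(t) = x(t) + (phi(t) - alpha)
r_plain = [xi + (phi(i * dt) - alpha) for i, xi in enumerate(x)]

# identical up to floating-point roundoff: alpha adds no flexibility beyond phi
assert all(abs(p - q) < 1e-12 for p, q in zip(r_displaced, r_plain))
```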
|
First of all, you should specify whether you want all components or the most significant ones?
Denote your matrix $A \in \mathbb{R}^{N \times M}$ with $N$ being number of samples and $M$ dimensionality.
In case you want all components the classical way to go is to compute covariance matrix $C \in \mathbb{R}^{M\times M}$ (which has time complexity of $O(NM^2)$) and then apply SVD to it (additional $O(M^3)$). In terms of memory this would take $O(2M^2)$ (covariance matrix + singular vectors and values forming orthogonal basis) or $\approx 1.5$ GB in double precision for your particular $A$.
You could apply SVD directly to the matrix $A$ if you normalize each dimension prior to that and take left singular vectors. However, practically I would expect SVD of the matrix $A$ to take longer.
If you need only a fraction of the (most significant) components you may want to apply iterative PCA. As far as I know, all these algorithms are closely related to the Lanczos process, so convergence depends on the spectrum of $C$; in practice it is difficult to match the accuracy of a full SVD for the computed vectors, and the accuracy degrades with the number of singular vectors requested.
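A small sketch (assuming NumPy; variable names are mine) comparing the two routes on a toy matrix — eigendecomposition of the covariance and SVD of the centered data give the same components, with eigenvalues $\lambda_i = s_i^2/(N-1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))          # N samples x M dimensions
Ac = A - A.mean(axis=0)                # center each dimension

# Route 1: covariance matrix (O(N M^2)) + symmetric eigendecomposition (O(M^3))
C = Ac.T @ Ac / (len(A) - 1)
eigvals, eigvecs = np.linalg.eigh(C)   # eigh returns ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

# Route 2: SVD applied directly to the centered data matrix
U, s, Vt = np.linalg.svd(Ac, full_matrices=False)

assert np.allclose(eigvals, s**2 / (len(A) - 1))
# the component directions agree up to sign
assert np.allclose(np.abs(np.diag(Vt @ eigvecs)), 1.0)
```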
|
Answer
The area of the region that is cleaned is $75.4~in^2$
Work Step by Step
We can convert the angle to radians: $\theta = (95^{\circ})(\frac{\pi~rad}{180^{\circ}}) = 1.658~rad$

We can find the total area covered by the 10-inch arm. Let $A$ be the total area of the sector. The ratio of the angle $\theta$ to $2\pi$ is equal to the ratio of the sector area to the area of the whole circle:
$\frac{\theta}{2\pi} = \frac{A}{\pi ~r^2}$
$A = \frac{\theta ~r^2}{2} = \frac{(1.658~rad)(10~in)^2}{2} = 82.9~in^2$

We can find the smaller area $A_2$ that is not cleaned, and subtract this area from the total area $A$:
$A_2 = \frac{\theta ~r^2}{2} = \frac{(1.658~rad)(3~in)^2}{2} = 7.461~in^2$

We can then find the area $A_c$ of the region that is cleaned by the wiper:
$A_c = A - A_2 = 82.9~in^2 - 7.461~in^2 = 75.4~in^2$

The area of the region that is cleaned is $75.4~in^2$
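As a quick cross-check (not part of the original solution), the computation can be scripted, with the radii 10 in and 3 in taken from the worked solution:

```python
import math

theta = math.radians(95)                              # the 95-degree sweep in radians
area_cleaned = theta * 10**2 / 2 - theta * 3**2 / 2   # outer sector minus un-swept inner sector
print(round(theta, 3))          # 1.658
print(round(area_cleaned, 1))   # 75.4 square inches
```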
|
Topological Methods in Nonlinear Analysis Topol. Methods Nonlinear Anal. Volume 47, Number 1 (2016), 43-54. On the asymptotic relation of topological amenable group actions Abstract
For a topological action $\Phi$ of a countable amenable orderable group $G$ on a compact metric space we introduce a concept of the asymptotic relation $\mathbf{A}(\Phi)$ and we show that $\mathbf{A}(\Phi)$ is non-trivial if the topological entropy $h(\Phi)$ is positive. It is also proved that if the Pinsker $\sigma$-algebra $\pi_{\mu}(\Phi)$ is trivial, where $\mu$ is an invariant measure with full support, then $\mathbf{A}(\Phi)$ is dense. These results are generalizations of those of Blanchard, Host and Ruette that concern the asymptotic relation for $\mathbb{Z}$-actions.
We give an example of an expansive $G$-action ($G=\mathbb{Z}^2$) with $\mathbf{A}(\Phi)$ trivial, which shows that the classical result of Bryant and Walters ([B.F. Bryant and P. Walters, Asymptotic properties of expansive homeomorphisms, Math. Systems Theory 3 (1969), 60-66]) fails to be true in the general case.

Article information
Source: Topol. Methods Nonlinear Anal., Volume 47, Number 1 (2016), 43-54.
First available in Project Euclid: 23 March 2016
Permanent link: https://projecteuclid.org/euclid.tmna/1458740728
Digital Object Identifier: doi:10.12775/TMNA.2015.086
Mathematical Reviews number (MathSciNet): MR3469046
Zentralblatt MATH identifier: 1362.37025
Citation
Bułatek, Wojciech; Kamiński, Brunon; Szymański, Jerzy. On the asymptotic relation of topological amenable group actions. Topol. Methods Nonlinear Anal. 47 (2016), no. 1, 43--54. doi:10.12775/TMNA.2015.086. https://projecteuclid.org/euclid.tmna/1458740728
|
To help you better understand the actual usage of the query word (or its translation) in idiomatic English, we have prepared a large number of example sentences drawn from original English texts for your reference.
If is a $\mathbb{Z}_2$-grading of a simple Lie algebra, we explicitly describe a-module Spin0 () such that the exterior algebra of is the tensor square of this module times some power of 2.
The operation adj on matrices arises from the (n - 1)st exterior power functor on modules; the analogous factorization question for matrix constructions arising from other functors is raised, as are several other questions.
Let q be a power of p and let G(q) be the finite group of Fq-rational points of G.
We prove a Tauberian theorem of the form $\phi * g (x)\sim p(x)w(x)$ as $x \to \infty,$ where p(x) is a bounded periodic function and w(x) is a weighted function of power growth.
A new C*-algebra of strong limit power functions is proposed.
More
|
After a while since my last post, I have finally come back to getting some things written! Been really involved with work and other important things for myself, but I recently got most of these things out of the way so it is time to get back to the blogging grind! WOOT! Introduction
In the world of calculus, a fundamental part is the computation of derivatives. The classic formalization of a derivative is in the following form below:
\begin{align}
\left.\frac{df}{dx}\right|_x = \lim_{h \rightarrow 0} \frac{f(x+h) - f(x)}{h} \label{eq1} \end{align}
This is great and all, but what happens when you try to use equation $\refp{eq1}$ to compute a derivative on a computer? As one might guess, taking $h$ to $0$ in the denominator causes some undesirable effects on a computer, one being an estimate for the derivative that is not at all correct. So how can we begin to estimate the value of a derivative at some location? One of the common approaches is something referred to as
Finite Differences. Finite Differences Intro
The reality is estimating a derivative is much easier than it first appears. Let us pretend that instead of making $h = 0$ in equation $\refp{eq1}$, we choose $h = \epsilon$, where $\epsilon \ll 1$. Computing the derivative in this fashion produces a basic
Finite Difference scheme, where the name comes from the fact that we use small but finite changes in a function, based on a small $\epsilon$, to estimate the value of the derivative of that function. Using this approach, we can then just use the classic derivative formulation and approximate the derivative like so:
$$ \left.\frac{df}{dx}\right|_x \approx \frac{f(x+\epsilon) - f(x)}{\epsilon} $$
Error Bounds
As we can imagine, for small values of $\epsilon$, the approximation is close. However, it is really just an approximation. What's the theoretical error of this approximation? Well, we can estimate it by using a Taylor Series! This can be done using the following steps:
\begin{align*}
\left.\frac{df}{dx}\right|_x &= \frac{f(x+\epsilon) - f(x)}{\epsilon} + \text{Error} \\ \left.\frac{df}{dx}\right|_x &= \frac{1}{\epsilon} f(x+\epsilon) - \frac{1}{\epsilon} f(x) + \text{Error} \\ \left.\frac{df}{dx}\right|_x &\approx \frac{1}{\epsilon} \left(f(x) + \epsilon \left.\frac{df}{dx}\right|_x + \frac{\epsilon^2}{2!}\left.\frac{d^2f}{dx^2}\right|_x \right) - \frac{1}{\epsilon} f(x) + \text{Error} \\ \left.\frac{df}{dx}\right|_x &\approx \left.\frac{df}{dx}\right|_x + \frac{\epsilon}{2!}\left.\frac{d^2f}{dx^2}\right|_x + \text{Error} \\ \text{Error} &\approx -\frac{\epsilon}{2!}\left.\frac{d^2f}{dx^2}\right|_x = O(\epsilon) \end{align*}
What we get from this result is that the true error is approximately proportional to the value of $\epsilon$. So basically, if you cut the value of $\epsilon$ in half, you should expect the error of the derivative approximation to be cut in half. That’s pretty neat!
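We can sanity-check this $O(\epsilon)$ behavior numerically. A minimal sketch (using $f(x)=\sin x$ as an arbitrary test function of my choosing):

```python
import math

def forward_diff(f, x, eps):
    """Basic forward finite difference approximation of f'(x)."""
    return (f(x + eps) - f(x)) / eps

x = 1.0
exact = math.cos(x)                       # d/dx sin(x) = cos(x)
err1 = abs(forward_diff(math.sin, x, 1e-3) - exact)
err2 = abs(forward_diff(math.sin, x, 5e-4) - exact)

# halving epsilon should roughly halve the error (first order accuracy)
print(err1 / err2)   # approximately 2
```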
Deriving New Schemes – Taylor Series Approach
So now that we have proven we can approximate a derivative with this scheme, we’re done and it’s time to wrap things up… Oh wait, you want to know if we can do better than this scheme? Well that’s an interesting thought. Let’s try something. First, let’s assume we can approximate a derivative using the following weighted average:
$$ \left.\frac{df}{dx}\right|_{x_i} \approx \sum_{j=n}^{m} a_j f(x_i + jh)$$
where we are finding the derivative at some location $x_i$ and where $n$ and $m$ are integers one can choose such that $n \lt m$. Given this, let’s also state that the Taylor Series of some $f(x_i + jh)$ can be written out like so:
$$ f(x_i + jh) = f_{i+j} = f_i + \sum_{k=1}^{\infty} \frac{(jh)^k}{k!}f_i^{(k)}$$
What’s interesting is we can see that each $f_{i+j}$ in the weighted average will share similar Taylor Series, only differing in the $(jh)^{k}$ coefficient of each term. We can use this pattern to setup a set of equations. This can be shown to be the following:
\begin{align}
\left.\frac{df}{dx}\right|_{x_i} &\approx \sum_{j=n}^{m} a_j f(x_i + jh) \nonumber \\ \left.\frac{df}{dx}\right|_{x_i} &\approx \sum_{j=n}^{m} a_j \left( f_i + \sum_{k=1}^{\infty} \frac{(jh)^k}{k!}f_i^{(k)} \right) \nonumber \\ \left.\frac{df}{dx}\right|_{x_i} &\approx \left(\sum_{j=n}^{m} a_j\right)f_i + \sum_{j=n}^{m} a_j\sum_{k=1}^{\infty} \frac{(jh)^k}{k!}f_i^{(k)} \nonumber \\ \left.\frac{df}{dx}\right|_{x_i} &\approx \left(\sum_{j=n}^{m} a_j\right)f_i + \sum_{k=1}^{\infty} \left(\sum_{j=n}^{m} a_j j^{k}\right)\frac{h^{k}}{k!}f_i^{(k)} \label{eq_s} \end{align}
If we equate both sides of this equation, we end up with the following set of equations to solve for the $(m-n+1)$ weights:
\begin{align*}
0 &= \left(\sum_{j=n}^{m} a_j\right) \\ 1 &= \left(\sum_{j=n}^{m} a_j j\right)h \\ 0 &= \left(\sum_{j=n}^{m} a_j j^{2}\right) \\ 0 &= \left(\sum_{j=n}^{m} a_j j^{3}\right) \\ &\vdots \\ 0 &= \left(\sum_{j=n}^{m} a_j j^{m-n}\right) \end{align*}
If we solve this system of equations, we obtain the weights for the weighted average form of the Finite Difference scheme such that we zero out all but one of the $(m-n+1)$ terms in the truncated Taylor Series. This zeroing out of terms typically results in a numerical scheme of order $O(h^{m-n})$ for first order derivative approximations, though the exact order can be found by obtaining the first nonzero Taylor Series term found in the $\text{Error}$ after you plug in the values for $\left\lbrace a_j\right\rbrace$.
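The system above is easy to solve programmatically. Here is a sketch using exact rational arithmetic (pure Python; the function name and Gauss-Jordan helper are my own) that returns the weights $a_j$ for $j=n,\dots,m$ with the factor of $h$ pulled out:

```python
from fractions import Fraction

def fd_weights(n, m):
    """Solve sum_j a_j j^k = delta_{k,1} for k = 0..m-n via Gauss-Jordan elimination.

    Returns the weights scaled by h, i.e. the true a_j are these values divided by h.
    """
    js = list(range(n, m + 1))
    size = len(js)
    # augmented rows: [j^k for j in js | rhs_k], Vandermonde system in the a_j
    rows = [[Fraction(j) ** k for j in js] + [Fraction(1 if k == 1 else 0)]
            for k in range(size)]
    for col in range(size):
        piv = next(r for r in range(col, size) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        rows[col] = [v / rows[col][col] for v in rows[col]]   # normalize pivot row
        for r in range(size):
            if r != col and rows[r][col] != 0:                # eliminate the column
                rows[r] = [v - rows[r][col] * w for v, w in zip(rows[r], rows[col])]
    return [rows[i][-1] for i in range(size)]

# Example 1 below (n = -1, m = 1) gives weights -1/2, 0, 1/2, i.e. (f_{i+1} - f_{i-1})/(2h)
print(fd_weights(-1, 1))
```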
Example 1
As an example, let’s choose the case where $n = -1$ and $m = 1$. Using these values, we end up with the following system of equations:
\begin{align*}
0 &= a_{-1} + a_{0} + a_{1}\\ \frac{1}{h} &= -a_{-1} + a_{1}\\ 0 &= a_{-1} + a_{1} \end{align*}
We will solve this set of equations analytically as an example, but typically you'd want to compute these schemes numerically using some Linear Algebra routines. Based on these equations, we can first see that $a_{-1} = -a_{1}$. Thus, by the second equation, $a_{1} = \frac{1}{2h}$ and in turn $a_{-1} = -\frac{1}{2h}$. Plugging the values for $a_{-1}$ and $a_{1}$ into the first equation results in $a_{0} = 0$. Thus, our resulting Finite Difference scheme, known as a Second Order Central Difference, is:

$$ \left.\frac{df}{dx}\right|_{x_i} = \frac{f_{i+1} - f_{i-1}}{2h} $$
That’s pretty convenient! What’s interesting with this setup is we can pretty easily compute Finite Fifferences for more than just a single first order derivative.
Example 2
To show how Finite Difference schemes can be derived for more complicated expressions, let’s try a second example. So how about we try to compute a Finite Difference scheme to estimate the following quantity:
\begin{align}
\alpha\left.\frac{df}{dx}\right|_{x_i} + \beta\left.\frac{d^2f}{dx^2}\right|_{x_i} \approx \sum_{j=-2}^{2} a_j f(x_i + jh) \label{ex2} \end{align}
If we use the right-hand side of equation $\refp{eq_s}$ from earlier and equate it to the left-hand side of equation $\refp{ex2}$, we end up with the following:
\begin{align*}
0 &= \left(\sum_{j=-2}^{2} a_j\right) \\ \alpha &= \left(\sum_{j=-2}^{2} a_j j\right)h \\ \beta &= \left(\sum_{j=-2}^{2} a_j j^{2}\right)\frac{h^2}{2!} \\ 0 &= \left(\sum_{j=-2}^{2} a_j j^{3}\right) \\ 0 &= \left(\sum_{j=-2}^{2} a_j j^{4}\right) \end{align*}
If we expand the various series in the equations above and then write all these equations in matrix form, we end up with the following matrix equations to solve for the unknown coefficients $\left\lbrace a_j \right\rbrace$:
\begin{align}
\begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \\ 4 & 1 & 0 & 1 & 4 \\ -8 & -1 & 0 & 1 & 8 \\ 16 & 1 & 0 & 1 & 16 \end{pmatrix} \begin{pmatrix} a_{-2} \\ a_{-1} \\ a_{0} \\ a_{1} \\ a_{2} \end{pmatrix} = \begin{pmatrix} 0 \\ \frac{\alpha}{h} \\ \frac{2\beta}{h^2} \\ 0 \\ 0 \end{pmatrix} \label{ex2_mat} \end{align}
After solving this set of equations, using whatever method you prefer (numerically, symbolically, by hand, etc.), one is able to employ the scheme in whatever problems are needed. As one can see from this example, the thing that changes the most in this formulation is just the vector on the right-hand side of the matrix equation where you basically express which derivative quantities you want the Finite Difference scheme to approximate. So as you can see, it’s really not too difficult to develop a Finite Difference scheme!
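A sketch of solving this matrix system numerically (assuming NumPy; the α, β, h values are arbitrary picks of mine). With $\alpha=0$, $\beta=1$, $h=1$ it reproduces the classic 5-point second-derivative stencil $[-1/12, 4/3, -5/2, 4/3, -1/12]$:

```python
import numpy as np

def mixed_fd_weights(alpha, beta, h):
    """Weights a_j, j = -2..2, approximating alpha*f' + beta*f''."""
    j = np.arange(-2, 3)
    M = np.vander(j, 5, increasing=True).T        # row k holds j**k, matching the matrix above
    rhs = np.array([0.0, alpha / h, 2.0 * beta / h**2, 0.0, 0.0])
    return np.linalg.solve(M, rhs)

w = mixed_fd_weights(alpha=0.0, beta=1.0, h=1.0)
print(w)   # -1/12, 4/3, -5/2, 4/3, -1/12
```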
Deriving New Schemes – Lagrange Interpolation Approach
Now obtaining Finite Differences in this way is actually not the only approach. One potentially more straightforward way is to build Finite Difference schemes using Lagrange Interpolation. Essentially, the idea is to use Lagrange Interpolation to build an interpolant based on the number of points you wish to use to approximate the derivative. Then, you just take whatever derivatives you need of this interpolant to obtain the derivative you want! So to jump into it, we can write out our Lagrange Interpolant in 1-D, $\hat{f}(x)$, as the following, given we are evaluating our function $f(\cdot)$ at $x_j = x_i + jh \;\; \forall j \in \left\lbrace n, n+1, \cdots, m-1, m \right\rbrace$:
\begin{align}
\hat{f}(x) = \sum_{j=n}^{m} f(x_j) \prod_{k=n,k\neq j}^m \frac{(x-x_k)}{(x_j - x_k)} \end{align}
Given this expression, we can then compute the necessary derivatives we wish to approximate, evaluate the results at $x_i$, and obtain our Finite Difference scheme. For example, let’s try this again against Example 1. First, we can expand the interpolant to be the following based on the fact $n=-1$ and $m=1$:
\begin{align}
\hat{f}(x) = f(x_{-1})\frac{(x-x_{0})}{(x_{-1} - x_{0})}\frac{(x-x_{1})}{(x_{-1} - x_{1})} + f(x_{0})\frac{(x-x_{-1})}{(x_{0} - x_{-1})}\frac{(x-x_{1})}{(x_{0} - x_{1})} + f(x_{1})\frac{(x-x_{0})}{(x_{1} - x_{0})}\frac{(x-x_{-1})}{(x_{1} - x_{-1})} \end{align}
We then take the derivative once, resulting in the expression below:
\begin{align} \frac{d\hat{f}}{dx}(x) = f(x_{-1})\frac{(x-x_{0}) + (x-x_{1})}{(x_{-1} - x_{1})(x_{-1} - x_{0})} + f(x_{0})\frac{(x-x_{-1}) + (x-x_{1})}{(x_{0} - x_{-1})(x_{0} - x_{1})} + f(x_{1})\frac{(x-x_{0}) + (x-x_{-1})}{(x_{1} - x_{0})(x_{1} - x_{-1})} \end{align}
We then evaluate this derivative expression at $x_i$ and simplify the numerators and denominators, resulting in the following:
\begin{align} \frac{d\hat{f}}{dx}(x_i) &= f(x_{-1})\frac{-h}{(-2h)(-h)} + f(x_{0})\frac{0}{(h)(-h)} + f(x_{1})\frac{h}{(h)(2h)} \nonumber \\ \frac{d\hat{f}}{dx}(x_i) &= f(x_{1})\frac{1}{2h} - f(x_{-1})\frac{1}{2h} \nonumber \\ \frac{d\hat{f}}{dx}(x_i) &= \frac{f(x_{1}) - f(x_{-1})}{2h} \nonumber \\ \frac{d\hat{f}}{dx}(x_i) &= \frac{f_{i+1} - f_{i-1}}{2h} \label{ex1_v2} \end{align}
As we can see looking at equation $\refp{ex1_v2}$, the Lagrange Interpolant reproduced a Second Order Central Difference scheme at some location $x_i$, showing there’s more than one approach to generating a Finite Difference scheme. But now knowing a Lagrange Interpolant can help derive these schemes, there’s something quite interesting we can now understand with respect to Finite Differences.
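The Lagrange route also automates nicely: differentiate each basis polynomial and evaluate at $x_i$. A pure-Python sketch (exact arithmetic; the helper name is mine) that reproduces the same central-difference weights as the Taylor Series approach:

```python
from fractions import Fraction

def lagrange_fd_weights(n, m):
    """Weights a_j (times h) from d/dx of the Lagrange basis, evaluated at x_i = 0.

    Nodes are x_j = j*h for j = n..m.  Uses the standard formula
    l'_j(0) = sum over k != j of [1/(x_j - x_k)] * prod over q != j,k of (0 - x_q)/(x_j - x_q).
    """
    js = [Fraction(j) for j in range(n, m + 1)]
    weights = []
    for j in js:
        total = Fraction(0)
        for k in js:
            if k == j:
                continue
            term = Fraction(1) / (j - k)
            for q in js:
                if q != j and q != k:
                    term *= (Fraction(0) - q) / (j - q)
            total += term
        weights.append(total)
    return weights

print(lagrange_fd_weights(-1, 1))   # same -1/2, 0, 1/2 pattern as before
```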
Runge’s Phenomenon and Finite Differences
In model building, there exists a phenomenon named Runge’s Phenomenon that essentially displays how fitting a polynomial model on the order of the number of data points you’re fitting to results in oscillations between points due to overfitting. An example of this phenomenon can be seen below.
As one can see based on the graphic, the polynomial based on Lagrange interpolation goes through each data point, but gets large deviations between points as it works its way away from the center. This oscillation actually results in large errors in derivatives, which makes sense if you look at the picture. If we just compare the slopes at the right most point on the plot, for example, we can see the slope of the true function is much smaller in magnitude than the slope based on the Lagrange interpolation.
This phenomenon often occurs when data points are roughly equally spaced apart, though it can be shown placing the data differently (like using Chebyshev points) can greatly mitigate the occurrence of Runge's Phenomenon. Additionally, the distance between points makes a large difference, where larger distances between points increase the problem with Runge's Phenomenon. An example plot below shows how the fit using Lagrange Interpolation, even for a high order polynomial, does fine when the distances between points are small.
The improved fit using Lagrange interpolation makes sense because the true function approaches linear behavior between data points as the distance between points shrinks, which results in approximate fits being quite accurate.
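A small experiment makes the contrast concrete (pure Python; the classic Runge function $1/(1+25x^2)$ and the node counts are my own choices): with the same 11 nodes, equispaced interpolation blows up near the endpoints while Chebyshev nodes stay tame.

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolant through the points (xs, ys) at x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        term = yj
        for k, xk in enumerate(xs):
            if k != j:
                term *= (x - xk) / (xj - xk)
        total += term
    return total

runge = lambda x: 1.0 / (1.0 + 25.0 * x * x)
n = 11
equi = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
cheb = [math.cos((2 * i + 1) * math.pi / (2 * n)) for i in range(n)]

grid = [-1.0 + 2.0 * i / 400 for i in range(401)]   # fine grid for measuring max error
err = lambda xs: max(abs(lagrange_eval(xs, [runge(x) for x in xs], x) - runge(x))
                     for x in grid)
print(err(equi), err(cheb))   # equispaced error is an order of magnitude larger
```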
Now with respect to Finite Differences: since they can be modeled using Lagrange Interpolation, we can see that derivatives based on high order fits (that is, Finite Differences using many points in the weighted average) can pick up noticeable error, especially if the distances between the points are not very small. This property of Finite Differences makes them trickier to use successfully if you want to implement, for example, a $9^{th}$ order accurate Finite Difference scheme to estimate a first order derivative.
However, if you do manage to make the stepsize $h$ particularly small, you will likely obtain a fine approximation using a high order Finite Difference scheme. It is still important to validate that the scheme provides the sort of error one expects, especially since finite precision floating point arithmetic can generate its own set of problems (which won't be covered here).
Conclusion
In this post, we covered some fundamental theory and examples revolving around the derivation of Finite Differences. In future posts, we will investigate the use of Finite Differences in various problems to get a feel for their value in computational mathematics.
For those interested, I recommend working out some of the math, deriving some error terms for some different Finite Difference formulas, and taking the mathematical equations and trying to build some codes around them… aka be like Bender:
For further investigation in the fundamentals of Finite Difference and related topics covered here, I recommend the book
Fundamentals of Engineering Numerical Analysis by Parviz Moin. This covers a lot of topics fairly well for anyone first investigating the subject of Numerical Analysis and who may not be a mathematician (aka this book isn’t super mathematically rigorous). It is a good read nonetheless!
|
Basic Order Theory

We will denote an arbitrary ordering relation by $R$. We will establish some preliminary definitions:

An ordering $R$ is reflexive if and only if $xRx$, for all $x$ in the domain of $R$.
An ordering $R$ is irreflexive if and only if $\neg xRx$, for all $x$ in the domain of $R$.
An ordering $R$ is transitive if and only if $xRy$ and $yRz$ implies $xRz$, for all $x$, $y$, and $z$.
An ordering $R$ is antisymmetric if and only if $xRy$ and $yRx$ implies $x=y$.
An ordering $R$ is trichotomous if and only if $xRy$, $x=y$, or $yRx$ for all $x$ and $y$ in the field of $R$.

Partial Ordering
A partial ordering consists of a relation along with a set, such as $( A , \le )$, such that the order is reflexive, transitive, and antisymmetric for all members of $A$.
A strict partial ordering consists of an ordered pair $( A , \lt )$ where $\lt$ is irreflexive and transitive for all members of $A$.
All strict partial orders are asymmetric, meaning that $xRy$ implies $\neg yRx$.

Total Ordering
A total ordering consists of a partial ordering where any two elements are comparable; that is, for all $x$ and $y$ in $A$, $x\le y \lor y\le x$.
A strict total ordering is a strict partial ordering that is also trichotomous.

Well-Founded Relations
A minimal element of $B$ with respect to a strict ordering relation $\lt$ is an element $x$ of $B$ such that no element of $B$ is less than $x$. That is, $\forall y \in B: \neg (y \lt x)$.
A well-founded relation is an ordering $\lt$ on $A$ such that every nonempty subset $x$ of $A$ contains a minimal element.
There are many interesting properties of well-founded relations. For example, all well-founded relations do not have any ordering "loops". That is, they are irreflexive, asymmetric, etc.
Well-founded relations do not have any infinitely descending $<$-chains. Another way to state this is that there is no function $f$ mapping the natural numbers into a well-founded set $A$ such that $f(n+1) < f(n)$ for all natural numbers $n$.
Any subset of $A$, even if it is a proper class, must have a minimal element. The proof of this is not as straightforward as it sounds.
We can also prove schemas of well-founded induction and well-founded recursion; the first strongly resembles epsilon induction, while the second defines a function $F(x)$ in terms of a function $G$ of the restriction of $F$ to the initial segment of $x$.
An initial segment (or extension) of $x$ is the collection of all sets in $A$ less than $x$.
We call a well-founded relation setlike if the initial segments of all the elements of $A$ are sets.

Well-Ordering Relations
A well-ordering relation is a well-founded relation that is also a strict total order. Equivalently, we can define a well-ordering relation as a well-founded relation that satisfies trichotomy.
The ordinals can be defined as the set of all transitive sets that are well-ordered by the membership relation.
The well-ordering theorem shows that every set has some well-order associated with it.

Every well-ordered set is order-isomorphic to exactly one ordinal.
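These axioms are easy to machine-check on finite examples. A side sketch (helper names are mine): the divisibility relation on $\{1,\dots,12\}$ is reflexive, transitive, and antisymmetric — a partial ordering — but not total, since e.g. 2 and 3 are incomparable.

```python
from itertools import product

A = range(1, 13)
divides = lambda x, y: y % x == 0

reflexive = all(divides(x, x) for x in A)
transitive = all(not (divides(x, y) and divides(y, z)) or divides(x, z)
                 for x, y, z in product(A, repeat=3))
antisymmetric = all(not (divides(x, y) and divides(y, x)) or x == y
                    for x, y in product(A, repeat=2))
total = all(divides(x, y) or divides(y, x) for x, y in product(A, repeat=2))

assert reflexive and transitive and antisymmetric
assert not total   # neither 2 | 3 nor 3 | 2
```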
|
I was able to solve this using trig instead of vector math. Here's how it looks as a triangle.
Notice that since this is meant for a computer coordinate system, the \$y+\$ axis is down. Also the angles are as such: \$x+ = 0°\$, \$y+ = 90°\$, \$y- = -90°\$, and \$x- = ±180°\$
Additionally, we know that line B's angle is \$∠B = -10°\$.
The speeds don't matter since they can be expressed as a ratio \$r = 120 / 25\$. So \$A\$ can be expressed as \$B * r\$, as shown.
We can calculate the length and angle of \$C\$ as:
$$\begin{align}C &= \sqrt{(b.x - a.x)^2 + (b.y - a.y)^2} &&\approx 10.05 \\∠C &= atan2(b.y - a.y, b.x - a.x) &&\approx 84.29°\end{align}$$therefore:
\$∠a = ∠C - ∠B ≈ 94.29°\$
Now we have all we need to use the Law of Cosines, which states:
\$A^2 = B^2 + C^2 - 2*B*C*cos(a)\$
Since we know that \$A\$ is a ratio of \$B\$, we can express it like so:
\$(B*r)^2 = B^2 + C^2 - 2*B*C*cos(a)\$
Solve for \$B\$ and we get:
\$B = \frac{ \pm\sqrt{ C^2 * (cos(a)^2 + r^2 -1 ) } - cos(a)*C }{r^2 - 1} \approx 2.18\$
Since time is not a factor in this equation, we get two results, depending on if we're forward in time or backwards. We'd have to check that the \$B\$ we get is in the right direction.
Finally, we now know enough to calculate what we want:
$$\begin{align}c.x &= a.x + B * cos(∠B) &&\approx 4.14 \\c.y &= a.y + B * sin(∠B) &&\approx 1.62\end{align}$$
For bonus points, if we wanted the shooting angle \$∠A\$, we can get it like so:
\$∠A = atan2(c.y - b.y, c.x - b.x) \approx -83.72°\$
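As a cross-check, the whole computation can be scripted. The positions below are my own reconstruction from the numbers in the answer (target at \$a = (2, 2)\$ moving at 25 units/s along \$-10°\$, shooter at \$b = (3, 12)\$ with projectile speed 120); they reproduce \$C \approx 10.05\$, \$∠C \approx 84.29°\$, \$B \approx 2.18\$, \$c \approx (4.14, 1.62)\$ and \$∠A \approx -83.72°\$:

```python
import math

def intercept(ax, ay, bx, by, target_angle_deg, target_speed, bullet_speed):
    r = bullet_speed / target_speed
    C = math.hypot(bx - ax, by - ay)
    angle_C = math.atan2(by - ay, bx - ax)
    angle_B = math.radians(target_angle_deg)
    a = angle_C - angle_B                         # interior angle at the target
    # law of cosines with A = B*r, solved for B (taking the positive root)
    B = (math.sqrt(C * C * (math.cos(a) ** 2 + r * r - 1))
         - math.cos(a) * C) / (r * r - 1)
    cx = ax + B * math.cos(angle_B)               # interception point
    cy = ay + B * math.sin(angle_B)
    shoot = math.degrees(math.atan2(cy - by, cx - bx))
    return B, cx, cy, shoot

B, cx, cy, shoot_angle = intercept(2, 2, 3, 12, -10, 25, 120)
print(round(B, 2), round(cx, 2), round(cy, 2), round(shoot_angle, 2))
```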
|
I have data tracking about 25,000 individuals as each one moves through a Markov chain.
I want to know the shape of the relationship between a continuous 'covariate' (my independent variable of interest) and the transition intensities between the different states. The shape is really important for my question.
To find the shape of the relationship (linear, quadratic, etc.), I would imagine doing polynomial regression of expected hitting time versus the continuous covariate.
However, as I understand it, predictor covariates are typically fitted to Markov chains by applying a linear regression term to each transition intensity $\lambda_{ij}$. For instance, this paper
"introduces covariables as a proportional factor in the baseline transition intensities":
\begin{equation} \lambda_{ij}(\textbf{z})=\lambda_{ij}\textbf{e}^{\beta_{ij} \textbf{z}} \end{equation}
("where $\beta_{ij}$ is the vector of regression coefficients associated with the vector of covariables $\textbf{z}$ for the transition between states $i$ and $j$")
It doesn't seem impossible that there could be a nonlinear relationship between the predictor covariate and the transition intensities. Perhaps, for instance, a farm can rush its crops through the 'Markov chain' of crop stages really quickly at an optimum temperature (i.e. high transition rates), but lower temperatures and higher temperatures both have very low transition rates. In this case, plotting a linear relationship between the predictor covariate (temperature) and the transition rates would be inappropriate.
Is the way around this simply to apply different nonlinear transformations to my predictor covariate, treating each transformation as different predictors, and seeing which yields the maximum-likelihood fit? I can't see how this could solve the problem. Alternatively, are there methods for dealing with nonlinear predictors of a Markov chain process?
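One concrete way to encode the nonlinearity (a sketch of the idea only, not msm syntax): since the log-link form \$\lambda_{ij}(z)=\lambda_{ij}e^{\beta z}\$ is linear in whatever covariates you supply, you can pass \$z\$ and \$z^2\$ as two covariates, making the log-intensity quadratic so it can peak at an optimum (e.g. a temperature optimum). All numbers below are made up for illustration.

```python
import math

def intensity(baseline, betas, covariates):
    """Proportional-intensity model: lambda * exp(sum_k beta_k * z_k)."""
    return baseline * math.exp(sum(b * z for b, z in zip(betas, covariates)))

# quadratic log-intensity in temperature: peaks where b1 + 2*b2*T = 0
b1, b2 = 0.4, -0.01                 # hypothetical coefficients; b2 < 0 gives an optimum
rates = {T: intensity(0.05, [b1, b2], [T, T * T]) for T in range(0, 41, 5)}
optimum = max(rates, key=rates.get)
print(optimum)   # 20, i.e. -b1/(2*b2): fast transitions near the optimum, slow at extremes
```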
(I plan to work with the
msm package in R, but there may be other packages more suitable to this question out there...?)
Many thanks if you can help!
|
Subset Relation on Power Set is Partial Ordering Theorem
Let $S$ be a set.
Let $\powerset S$ be the power set of $S$.
Then $\struct {\powerset S, \subseteq}$ is an ordered set.
Proof
First, suppose $S$ has at least two elements.
Then $\exists a, b \in S$ such that $a \ne b$.
Then $\set a \in \powerset S$ and $\set b \in \powerset S$.
However, $\set a \nsubseteq \set b$ and $\set b \nsubseteq \set a$.
So in this case $\subseteq$ is a partial ordering on $\powerset S$, but not a total ordering.
Now suppose $S = \O$.
Then $\powerset S = \set \O$ and, by Empty Set is Subset of All Sets, $\O \subseteq \O$.
Hence, trivially, $\subseteq$ is a total ordering on $\powerset S$.
Now suppose $S$ is a singleton: let $S = \set a$.
Then $\powerset S = \set {\O, \set a}$.
So there are only two elements of $\powerset S$, and we see that $\O \subseteq \set a$ from Empty Set is Subset of All Sets.
So, trivially again, $\subseteq$ is a total ordering on $\powerset S$.
$\blacksquare$
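The theorem can also be checked exhaustively for a small $S$. A sketch (frozensets stand in for the elements of $\powerset S$) confirming that $\subseteq$ is reflexive, antisymmetric, and transitive on $\powerset {\set {a, b} }$ but not total:

```python
from itertools import combinations, product

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

P = powerset({'a', 'b'})
pairs = list(product(P, repeat=2))

assert all(x <= x for x in P)                                   # reflexive
assert all(x == y for x, y in pairs if x <= y and y <= x)       # antisymmetric
assert all(x <= z for x, y, z in product(P, repeat=3)
           if x <= y and y <= z)                                # transitive
assert any(not (x <= y or y <= x) for x, y in pairs)            # not total: {a} vs {b}
```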
Sources
1960: Paul R. Halmos: Naive Set Theory: $\S 14$: Order
1965: J.A. Green: Sets and Groups: $\S 1.2$: Subsets: Example $11$
1968: A.N. Kolmogorov and S.V. Fomin: Introductory Real Analysis: $\S 3.1$: Partially ordered sets: Example $3$
1975: T.S. Blyth: Set Theory and Abstract Algebra: $\S 7$: Example $7.1$
1982: P.M. Cohn: Algebra Volume 1 (2nd ed.): Chapter $1$: Sets and mappings: $\S 1.5$: Ordered Sets: $\text{(i)}$
1993: Keith Devlin: The Joy of Sets: Fundamentals of Contemporary Set Theory (2nd ed.): $\S 1$: Naive Set Theory: $\S 1.5$: Relations
|
Contents
Now that we have studied a couple of techniques to create a Bézier patch, a problem remains: we need to shade this patch. Generating the positions of the points is something we can already do, but for shading we also need a normal. How can we generate a normal at the position of each vertex of the grid? We can easily generate a face normal using a technique similar to the one we have been using in the getSurfaceProperties method of the TriangleMesh class: we can compute the cross product between two edges making up a face. But this technique only produces a face normal.
What we want is a vertex normal, a normal at each point making up the grid that is perpendicular to the Bézier surface. Hopefully, because the grid itself is computed from equations, we can also use maths to compute an accurate normal at any point on the surface of the Bézier patch. What we want to do is compute two tangents at the point of interest (whose coordinates on the patch are specified in terms of \(u\) and \(v\)), one along the \(u\) direction and one along the \(v\) direction. By taking the cross product of these two tangents we then get the normal we are looking for. The question now is, how do we compute these tangents? As suggested, we can express the problem in mathematical terms. In mathematics the tangent of a curve computed from a function, can be defined as the function derivative with respect to a parameter, for example:$$\dfrac{\partial}{\partial x}f(x),$$
would represent the derivative of the function \(f(x)\) with respect to the parameter \(x\). We can do the exact same thing with the equation to compute points on a Bézier patch. We can for example compute the derivative of that function or equation with respect to the parameter \(u\) or \(v\):$$\dfrac{\partial}{\partial u}B(u,v), \dfrac{\partial}{\partial v}B(u,v).$$
The first equation would give us a tangent along the \(u\) direction and the second, the tangent along the \(v\) direction. Now we know that the function \(B(u,v)\) itself can be written as:$$B(u,v) = \sum_i\sum_j b_i(u) b_j(v) p_{ij}.$$
Where \(b_i(u)\) and \(b_j(v)\) are the Bernstein polynomials and \(p_{ij}\) the control points. We also know how to compute the Bernstein polynomials:$$ \begin{array}{l} k_1(t) = (1 - t)^3\\ k_2(t) = 3t(1 - t)^2\\ k_3(t) = 3t^2(1 - t)\\ k_4(t) = t^3 \end{array} $$
So all we need to do is compute the derivative of these functions, which are:$$ \begin{array}{l} k'_1(t) = -3(1-t)^2\\ k'_2(t) = 3(1-t)^2-6t(1-t)\\ k'_3(t) = 6t(1-t)-3t^2\\ k'_4(t) = 3t^2 \end{array} $$
Each \(k_i\) is a function of a single parameter, so these derivatives follow from the ordinary rules of differentiation, in particular the chain rule: $$\dfrac{dy}{dx} = \dfrac{dy}{du} \dfrac{du}{dx}.$$ To apply it, you need to: recognise \(u\) (always choose the inner-most expression, usually the part inside brackets or under the square root sign); re-express \(y\) in terms of \(u\); differentiate \(y\) with respect to \(u\) and re-express the result in terms of \(x\); find \(\dfrac{du}{dx}\); and finally multiply \(\dfrac{dy}{du}\) by \(\dfrac{du}{dx}\).
If we differentiate the equation \(B(u,v) = \sum_i\sum_j b_i(u) b_j(v) p_{ij}\) with respect to \(u\), we get the tangent on the patch at the pair of parameters \((u,v)\) along the \(u\) direction:$$ \begin{array}{ll} \dfrac{\partial}{\partial u}B(u,v) = &-3(1-u)^2 \sum_j b_j(v) p_{0j}\;+\\ &\left(3(1-u)^2-6u(1-u)\right) \sum_j b_j(v) p_{1j}\;+\\ &\left(6u(1-u)-3u^2\right) \sum_j b_j(v) p_{2j}\;+\\ &3u^2 \sum_j b_j(v) p_{3j} \end{array} $$
What we actually do there is build a Bézier curve running along the \(u\) direction and compute its derivative using \(k'_1(u)\), \(k'_2(u)\), \(k'_3(u)\) and \(k'_4(u)\). The same thing can be done to find the tangent along the \(v\) direction:$$ \begin{array}{ll} \dfrac{\partial}{\partial v}B(u,v) = &-3(1-v)^2 \sum_i b_i(u) p_{i0}\;+\\ &\left(3(1-v)^2-6v(1-v)\right) \sum_i b_i(u) p_{i1}\;+\\ &\left(6v(1-v)-3v^2\right) \sum_i b_i(u) p_{i2}\;+\\ &3v^2 \sum_i b_i(u) p_{i3} \end{array} $$
Et voila! All that is left to do is evaluate these two equations to obtain the two tangents at the parametric coordinates \((u,v)\), and then compute the cross product between these two tangents to get the normal at this point. The implementation of this technique is shown below. We have two functions, dUBezier to compute the tangent along the \(u\) direction and dVBezier to compute the other tangent along the \(v\) direction. The result is striking. As you can see in the adjacent image, not only is the result very smooth, but the transitions at the patch boundaries are also invisible. Mathematics once again works remarkably well and gives us a perfect result, including at the edges and corners of the patches.
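A minimal sketch of this computation in Python follows (the lesson's actual code is C++; the names dUBezier and dVBezier mirror it, but the 4x4 control-point layout P[i][j] with i along \(u\) and j along \(v\), and the tuple-based vectors, are assumptions made for this illustration).

```python
def bernstein(t):
    # k1..k4 from the text
    return ((1 - t)**3, 3*t*(1 - t)**2, 3*t**2*(1 - t), t**3)

def d_bernstein(t):
    # k'1..k'4 from the text
    return (-3*(1 - t)**2,
            3*(1 - t)**2 - 6*t*(1 - t),
            6*t*(1 - t) - 3*t**2,
            3*t**2)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def _weighted_sum(P, wu, wv):
    # sum_i sum_j wu[i] * wv[j] * P[i][j], componentwise
    return tuple(sum(wu[i] * wv[j] * P[i][j][k]
                     for i in range(4) for j in range(4))
                 for k in range(3))

def dUBezier(P, u, v):
    # tangent along u: derivative weights in u, Bernstein weights in v
    return _weighted_sum(P, d_bernstein(u), bernstein(v))

def dVBezier(P, u, v):
    # tangent along v: Bernstein weights in u, derivative weights in v
    return _weighted_sum(P, bernstein(u), d_bernstein(v))

def patch_normal(P, u, v):
    # normal = normalized cross product of the two tangents
    n = cross(dUBezier(P, u, v), dVBezier(P, u, v))
    length = (n[0]**2 + n[1]**2 + n[2]**2) ** 0.5
    return tuple(c / length for c in n)
```

A quick sanity check: for a flat patch whose control points all lie in the z = 0 plane, the normal comes out as (0, 0, 1) everywhere on the patch.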
You can find the complete source code of the program in the last chapter of this lesson.
|
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Multiplicity dependence of two-particle azimuthal correlations in pp collisions at the LHC
(Springer, 2013-09)
We present the measurements of particle pair yields per trigger particle obtained from di-hadron azimuthal correlations in pp collisions at $\sqrt{s}$=0.9, 2.76, and 7 TeV recorded with the ALICE detector. The yields are ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV
(Springer, 2015-09)
We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV
(Springer, 2015-07-10)
The transverse momentum ($p_{\rm T}$) dependence of the nuclear modification factor $R_{\rm AA}$ and the centrality dependence of the average transverse momentum $\langle p_{\rm T} \rangle$ for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
|
In the previous sections, the density was assumed to be constant. For non-constant density the derivations aren't ``clean'' but are similar. Consider a straight (flat) body submerged in a liquid of varying density. If the density can be represented by an average density, the force acting on the body is \[F_{total} = \int_{A} g\rho h dA \sim \bar{\rho} \int_{A} g h dA \tag{164}\] In cases where an average density cannot represent the variation reasonably, the integral has to be carried out. In cases where the density is discontinuous but constant in segments, the following can be said \[F_{total} = \int_{A} g \rho h dA = \int_{A_{1}} g \rho_{1} h dA + \int_{A_{2}} g \rho_{2} h dA + \cdot \cdot \cdot + \int_{A_{n}} g \rho_{n} h dA \tag{165}\] As before for a single density, the following can be written \[F_{total} = g\sin \beta \left[\rho_{1} \int_{A_{1}} \xi dA + \rho_{2} \int_{A_{2}} \xi dA + \cdot \cdot \cdot + \rho_{n} \int_{A_{n}} \xi dA \right] \tag{166}\] Or, in a compact form, and also accounting for the ``atmospheric'' pressure, this can be written as
Total Static Force
\[F_{total} = P_{atmos} A_{total} + g sin\beta \sum_{i=1}^{n} \rho_{i}x_{ci}A_{i}\tag{167}\]
where the density \(\rho_{i}\) is the density of layer \(i\), and \(A_{i}\) and \(x_{ci}\) are the geometrical properties of the area in contact with that layer. The atmospheric pressure can be entered into the calculation in the same way as before. Moreover, the ``atmospheric'' pressure can include all the layer(s) that are not in contact with the wetted area. The moment around axis \(y\), \(M_{y}\), under the same considerations as before is \[M_{y} = \int_{A} g \rho \xi ^{2} \sin \beta dA \tag{168}\] After similar separation of the total integral, one can find that
Total Static Moment
\[M_{y} = P_{atmos}x_{c}A_{total} + g sin\beta \sum_{i=1}^{n} \rho_{i} I_{x'x'i}\tag{170}\]
In the same fashion one can obtain the moment for \(x\) axis as
Total Static Moment
\[M_{x} = P_{atmos}y_{c}A_{total} + g sin\beta \sum_{i=1}^{n} \rho_{i} I_{x'y'i}\tag{171}\]
To illustrate how to work with these equations the following example is provided.
Example 4.16
Consider the hypothetical case shown in Figure 4.25. The last layer is made of water with a density of \(1000 [kg/m^3]\). The densities are \(\rho_1 = 500[kg/m^3]\), \(\rho_2 = 800[kg/m^3]\), \(\rho_3 = 850[kg/m^3]\), and \(\rho_4 = 1000[kg/m^3]\). Calculate the forces at points \(a_1\) and \(b_1\). Assume that the layers are stable, without any movement between the liquids, and neglect any mass transfer phenomena that may occur. The heights are: \(h_1 = 1[m]\), \(h_2 = 2[m]\), \(h_3 = 3[m]\), and \(h_4 = 4[m]\). The force distances are \(a_1=1.5[m]\), \(a_2=1.75[m]\), and \(b_1=4.5[m]\). The angle of inclination is \(\beta= 45^\circ\).
Fig. 4.25 The effects of multi layers density on static forces.
Solution 4.16
Since there are only two unknowns, only two equations are needed: (170) and (167). The same solution method applies to cases with fewer layers (for example, by setting a specific height difference to zero). Equation (170) can be used after a small modification: instead of the regular atmospheric pressure, a new ``atmospheric'' pressure can be used,
\[ {P_{atmos}}^{'} = P_{atmos} + \rho_1\,g\,h_1 \tag{172} \] The distance to the center of each area is at the middle of each of the ``small'' rectangles. The geometric properties of each area are \begin{array}{lcr} {x_c}_1 = \dfrac{a_2 + \dfrac{h_2}{\sin\beta}}{2} & A_1 = ll \left( \dfrac{h_2}{\sin\beta} -a_2 \right) & {I_{x^{'}x^{'}}}_1 = \dfrac{ll\left(\dfrac{h_2}{\sin\beta}-a_2\right)^{3}}{36} + \left({x_c}_1\right)^2\, A_1 \\ {x_c}_2 = \dfrac{h_2 + h_3}{2\,\sin\beta} & A_2 = \dfrac{ll}{\sin\beta} \left(h_3 - h_2\right) & {I_{x^{'}x^{'}}}_2 = \dfrac{ll\left({h_3}-h_2\right)^{3}}{36\,\sin\beta} + \left({x_c}_2\right)^2\, A_2 \\ {x_c}_3 = \dfrac{h_3 + h_4}{2\,\sin\beta} & A_3 = \dfrac{ll}{\sin\beta} \left(h_4 - h_3\right) & {I_{x^{'}x^{'}}}_3 = \dfrac{ll\left({h_4}-h_3\right)^{3}}{36\,\sin\beta} + \left({x_c}_3\right)^2\, A_3 \tag{173} \end{array} After inserting the values, the first equation becomes \[ F_1 + F_2 = {P_{atmos}}^{'} \overbrace{ll (b_2-a_2)}^{A_{total}} + g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\, {x_c}_i\, A_i \tag{174} \] The second equation is the moment balance (170), written around the point ``O'' as \[ F_1\,a_1 + F_2\,b_1 = {P_{atmos}}^{'}\, \overbrace{\dfrac{(b_2+a_2)}{2}\,ll\,(b_2-a_2)}^{x_c\,A_{total}} + g\,\sin\beta\sum_{i=1}^{3}\rho_{i+1}\,{I_{x^{'}x^{'}}}_i \tag{175} \]
The solution for the above equations is
\[
F1=
\begin{array}{c}
\dfrac{
2\,b_1\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\, {x_c}_i\, A_i
-2\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\,{I_{x^{'}x^{'}}}_i }
{2\,b_1-2\,a_1} - \\
\qquad\dfrac{\left({b_2}^{2}-2\,b_1\,b_2+2\,a_2\,b_1-{a_2}^{2}\right)
ll\,P_{atmos}}
{2\,b_1-2\,a_1}
\end{array}
\]
\[
F2=
\begin{array}{c}
\dfrac{
2\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\,{I_{x^{'}x^{'}}}_i
-2\,a_1\,g\,\sin\beta\,\sum_{i=1}^{3}\rho_{i+1}\, {x_c}_i\,
A_i}
{2\,b_1-2\,a_1}
+
\\
\dfrac{
\left( {b_2}^{2}+2\,a_1\,b_2+{a_2}^{2}-2\,a_1\,a_2\right)
ll\,P_{atmos}}
{2\,b_1-2\,a_1}
\end{array}
\]
The solution provided isn't written out in its complete long form since that would make things messy. It is simpler to compute the terms separately. A mini source code for the calculations is provided in the text source. The intermediate results in SI units ([m], [\(m^2\)], [\(m^4\)]) are:
\[
\begin{array}{lcr}
x_{c1}=2.2892& x_{c2}=3.5355& x_{c3}=4.9497\\
A_1=2.696& A_2=3.535& A_3=3.535\\
{I_{x'x'}}_1=14.215& {I_{x'x'}}_2=44.292& {I_{x'x'}}_3=86.718
\end{array}
\]
The final answer is
\[
F_1=304809.79[N] \tag{176}
\]
and
\[
F_2=958923.92[N] \tag{177}
\]
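The numbers above can be reproduced with a short script. The sketch below follows equations (172)-(175). Note that the width \(ll\) is not stated in the problem as quoted here; the value \(ll = 2.5 [m]\) is inferred from the quoted intermediate areas, and \(g = 9.81 [m/s^2]\) and \(P_{atmos} = 101325 [Pa]\) are also assumptions, so the final forces land within a fraction of a percent of the quoted answers rather than matching them exactly.

```python
import math

g, beta = 9.81, math.radians(45)             # assumed g and angle of inclination
rho = [500.0, 800.0, 850.0, 1000.0]          # rho_1 .. rho_4
h = [1.0, 2.0, 3.0, 4.0]                     # h_1 .. h_4
a1, a2, b1 = 1.5, 1.75, 4.5
ll = 2.5                                     # assumed width [m], inferred from quoted areas
s = math.sin(beta)
b2 = h[3] / s                                # bottom edge of the plate, at depth h_4

# Geometry of the three wetted strips, eq. (173)
xc = [(a2 + h[1] / s) / 2, (h[1] + h[2]) / (2 * s), (h[2] + h[3]) / (2 * s)]
A = [ll * (h[1] / s - a2), ll * (h[2] - h[1]) / s, ll * (h[3] - h[2]) / s]
Ixx = [ll * (h[1] / s - a2) ** 3 / 36 + xc[0] ** 2 * A[0],
       ll * (h[2] - h[1]) ** 3 / (36 * s) + xc[1] ** 2 * A[1],
       ll * (h[3] - h[2]) ** 3 / (36 * s) + xc[2] ** 2 * A[2]]

P_atm = 101325.0 + rho[0] * g * h[0]         # effective "atmospheric" pressure, eq. (172)
A_tot = ll * (b2 - a2)
S1 = sum(rho[i + 1] * xc[i] * A[i] for i in range(3))
S2 = sum(rho[i + 1] * Ixx[i] for i in range(3))

# Force balance (174) and moment balance (175): a 2x2 linear system in F1, F2
rhs_F = P_atm * A_tot + g * s * S1
rhs_M = P_atm * (b2 + a2) / 2 * A_tot + g * s * S2
F2 = (rhs_M - a1 * rhs_F) / (b1 - a1)
F1 = rhs_F - F2
```

With these assumed constants the script reproduces the quoted geometric intermediates to the digits shown and the forces to within about 0.3%.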
Contributors
Dr. Genick Bar-Meir. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or later or Potto license.
|
Let $f$ be an entire function and $B$ a bounded open set in $\mathbb{C}$. Prove that the boundary of the image of $B$ under $f$ is contained in the image of the boundary of $B$. Is the same result true for an unbounded open set in $\mathbb{C}$?
If $f$ is constant it is clear. Assume it is non-constant.
Then by the open mapping theorem it is open. Therefore, interior points of $B$ are mapped to interior points of $f(B)$. Hence, boundary points of $f(B)$ can only be images of boundary points of $B$. In fact, if $w\in\partial f(B)$ there is a sequence $w_n=f(z_n)$ with $z_n\in B$ such that $w_n\to w$. Since $B$ is bounded, the $z_n$ have an accumulation point $z$, which lies in $\overline{B}$. If $z$ were in the open set $B$, then $w=f(z)$ would be an interior point of the open set $f(B)$, contradicting $w\in\partial f(B)$; hence $z\in\partial B$ and, by continuity, $w=f(z)\in f(\partial B)$.
For the unbounded case we can consider $B=\{z\in\mathbb{C}:\ -2\pi<Im(z)<2\pi\}$ and $f(z)=e^z$.
Then $f(B)$ is equal to $\mathbb{C}\setminus\{0\}$.
The origin is a boundary point of $f(B)$. But $e^z$ doesn't map any point of the plane to the origin, and in particular no point of the boundary of $B$.
|
In a triangle $ABC$, points $D$ and $E$ are taken on side $BC$ such that $BD=DE=EC$. If $\angle ADE=\angle AED=\phi$, how can we prove that
$$\frac{6\tan\phi}{\tan^2\phi-9}=\tan A$$
The key here is recognizing the identity we wish to prove. Note that
$$\tan{A} = -\frac{\frac{2}{3} \tan{\phi}}{1-(\frac13 \tan{\phi})^2}$$
This looks like a double-angle formula for tangent. Now note that the triangle is isosceles: since $AD=AE$, the apex $A$ lies directly above the common midpoint of $DE$ and $BC$, so $B=C$. Thus $A = \pi-2 B$ and $\tan{A} = -\tan{2 B}$. Therefore, the formula implies that
$$\tan{B} = \frac13 \tan{\phi}$$
This is what we need to prove. We do this by noting two items: 1) The triangle $\Delta ADE$ is isosceles with sides $AD=AE=d$ given by
$$d \cos{\phi} = \frac{a}{6}$$
2) The law of sines in either side triangle, say $\Delta ABD$ produces the equation
$$\frac{\sin{(\phi-B)}}{\sin{B}} = \frac{a}{3 d}$$
or
$$\frac{\sin{\phi}}{\tan{B}} - \cos{\phi} = 2 \cos{\phi} $$
Rearranging gives $\tan{B} = \frac13 \tan{\phi}$. The assertion, and consequently the original relation, is proven.
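The identity can also be checked numerically. The sketch below uses only the construction from the proof: $BC = 3$ with $D$, $E$ trisecting it, and $AD = AE$ forcing the apex $A$ to sit above the common midpoint of $DE$ and $BC$; the particular test angles are arbitrary choices that avoid $\tan\phi = 3$ (where $A = 90^\circ$ and both sides blow up).

```python
import math

def tan_A_geometric(phi):
    # D = (-1/2, 0), E = (1/2, 0); angle ADE = phi puts A at (0, tan(phi)/2)
    A = (0.0, math.tan(phi) / 2)
    B, C = (-1.5, 0.0), (1.5, 0.0)
    # interior angle at A, between rays A->B and A->C
    angle = abs(math.atan2(B[1] - A[1], B[0] - A[0])
                - math.atan2(C[1] - A[1], C[0] - A[0]))
    return math.tan(angle)

def tan_A_formula(phi):
    t = math.tan(phi)
    return 6 * t / (t * t - 9)

# agreement for a few sample angles
for phi in (0.6, 1.2, 1.4):
    assert abs(tan_A_geometric(phi) - tan_A_formula(phi)) < 1e-9
```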
|
To help you better understand the actual usage of the query term (or its translation) in idiomatic English, we have prepared a large number of example sentences drawn from original English texts for your reference.
As a corollary we obtain that the moduli space of rank 3 and degree 0 bundles on a smooth projective curve of genusg is C-M.
Lower degree bounds for modular invariants and a question of I.
In particular, we prove an integral formula for the degree of an ample divisor on a variety of complexity 1, and apply this formula to computing the degree of a closed 3-dimensional orbit in any SL2-module.
Given integers n,d,e with $1 \leqslant e < \frac{d}{2},$ let $X \subseteq {\Bbb P}^{\binom{d+n}{d}-1}$ denote the locus of degree d hypersurfaces in ${\Bbb P}^n$ which are supported on two hyperplanes with multiplicities d-e and e.
This paper studies intersection theory on the compactified moduli space ${\mbox{$\cal M$}} (n,d)$ of holomorphic bundles of rank n and degree d over a fixed compact Riemann surface $\Sigma$ of genus $g \geq 2$ where n and d may have common factors.
|
The key idea is that the sampling distribution of the median is simple to express in terms of the distribution function but more complicated to express in terms of the median value. Once we understand how the distribution function can re-express values as probabilities and back again, it is easy to derive the
exact sampling distribution of the median. A little analysis of the behavior of the distribution function near its median is needed to show that this is asymptotically Normal.
(The same analysis works for the sampling distribution of any quantile, not just the median.)
I will make no attempt to be rigorous in this exposition, but I do carry it out in steps that are readily justified in a rigorous manner if you have a mind to do that.
Intuition
These are snapshots of a box containing 70 atoms of a hot atomic gas:
In each image I have found a location, shown as a red vertical line, that splits the atoms into two equal groups between the left (drawn as black dots) and right (white dots). This is a
median of the positions: 35 of the atoms lie to its left and 35 to its right. The medians change because the atoms are moving randomly around the box.
We are interested in the distribution of this middle position. Such a question is answered by reversing my procedure: let's first draw a vertical line somewhere, say at location $x$ (take the box to have unit width and unit area, so that horizontal positions are uniform on $[0,1]$). What is the chance that half the atoms will be to the left of $x$ and half to its right? Each atom at the left individually had chance $x$ of being at the left. Each atom at the right individually had chance $1-x$ of being at the right. Assuming their positions are statistically independent, the chances multiply, giving $x^{35}(1-x)^{35}$ for the chance of this particular configuration. An equivalent configuration could be attained for a different split of the $70$ atoms into two $35$-element pieces. Adding these numbers for all possible such splits gives a chance of
$${\Pr}(x\text{ is a median}) = C x^{n/2} (1-x)^{n/2}$$
where $n$ is the total number of atoms and $C$ is proportional to the number of splits of $n$ atoms into two equal subgroups.
This formula identifies the distribution of the median as a Beta$(n/2+1, n/2+1)$ distribution.
Now consider a box with a more complicated shape:
Once again the medians vary. Because the box is low near the center, there isn't much of its volume there: a small change in the
volume occupied by the left half of the atoms (the black ones once again)--or, we might as well admit, the area to the left as shown in these figures--corresponds to a relatively large change in the horizontal position of the median. In fact, because the area subtended by a small horizontal section of the box is proportional to the height there, the changes in the medians are divided by the box's height. This causes the median to be more variable for this box than for the square box, because this one is so much lower in the middle.
In short, when we measure the position of the median in terms of
area (to the left and right), the original analysis (for a square box) stands unchanged. The shape of the box only complicates the distribution if we insist on measuring the median in terms of its horizontal position. When we do so, the relationship between the area and position representation is inversely proportional to the height of the box.
There is more to learn from these pictures. It is clear that when few atoms are in (either) box, there is a greater chance that half of them could accidentally wind up clustered far to either side. As the number of atoms grows, the potential for such an extreme imbalance decreases. To track this, I took "movies"--a long series of 5000 frames--for the curved box filled with $3$, then with $15$, then $75$, and finally with $375$ atoms, and noted the medians. Here are histograms of the median positions:
Clearly, for a sufficiently large number of atoms, the distribution of their median position begins to look bell-shaped and grows narrower: that looks like a Central Limit Theorem result, doesn't it?
Quantitative Results
The "box," of course, depicts the probability density of some distribution: its top is the graph of the density function (PDF). Thus areas represent probabilities. Placing $n$ points randomly and independently within a box and observing their horizontal positions is one way to draw a sample from the distribution. (This is the idea behind rejection sampling.)
The next figure connects these ideas.
This looks complicated, but it's really quite simple. There are four related plots here:
The top plot shows the PDF of a distribution along with one random sample of size $n$. Values greater than the median are shown as white dots; values less than the median as black dots. It does not need a vertical scale because we know the total area is unity.
The middle plot is the cumulative distribution function for the same distribution: it uses
height to denote probability. It shares its horizontal axis with the first plot. Its vertical axis must go from $0$ to $1$ because it represents probabilities.
The left plot is meant to be read sideways: it is the PDF of the Beta$(n/2+1, n/2+1)$ distribution. It shows how the median in the box will vary,
when the median is measured in terms of areas to the left and right of the middle (rather than measured by its horizontal position). I have drawn $16$ random points from this PDF, as shown, and connected them with horizontal dashed lines to the corresponding locations on the original CDF: this is how volumes (measured at the left) are converted to positions (measured across the top, center, and bottom graphics). One of these points actually corresponds to the median shown in the top plot; I have drawn a solid vertical line to show that.
The bottom plot is the sampling density of the median,
as measured by its horizontal position. It is obtained by converting area (in the left plot) to position. The conversion formula is given by the inverse of the original CDF: this is simply the definition of the inverse CDF! (In other words, the CDF converts position into area to the left; the inverse CDF converts back from area to position.) I have plotted vertical dashed lines showing how the random points from the left plot are converted into random points within the bottom plot. This process of reading across and then down tells us how to go from area to position.
Let $F$ be the CDF of the original distribution (middle plot) and $G$ the CDF of the Beta distribution. To find the chance that the median lies to the left of some position $x$, first use $F$ to obtain the
area to the left of $x$ in the box: this is $F(x)$ itself. The Beta distribution at the left tells us the chance that half the atoms will lie within this volume, giving $G(F(x))$: this is the CDF of the median position. To find its PDF (as shown in the bottom plot), take the derivative:
$$\frac{d}{dx}G(F(x)) = G'(F(x))F'(x) = g(F(x))f(x)$$
where $f$ is the PDF (top plot) and $g$ is the Beta PDF (left plot).
This is an exact formula for the distribution of the median, valid for any continuous distribution. (With some care in interpretation it can be applied to any distribution whatsoever, whether continuous or not.)
Asymptotic Results
When $n$ is very large and $F$ does not have a jump at its median, the sample median must vary closely around the true median $\mu$ of the distribution.
Also assuming the PDF $f$ is continuous near $\mu$, $f(x)$ in the preceding formula will not change much from its value at $\mu,$ given by $f(\mu).$ Moreover, $F$ will not change much from its value there either: to first order,
$$F(x) = F\left(\mu + (x-\mu)\right) \approx F(\mu) + F^\prime(\mu)(x-\mu) = 1/2 + f(\mu)(x-\mu).$$
Thus, with an ever-improving approximation as $n$ grows large,
$$g(F(x))f(x) \approx g\left(1/2 + f(\mu)(x-\mu)\right) f(\mu).$$
That is merely a shift of the location and scale of the Beta distribution. The rescaling by $f(\mu)$ will divide its variance by $f(\mu)^2$ (which had better be nonzero!). Incidentally, the variance of Beta$(n/2+1, n/2+1)$ is very close to $1/(4n)$.
This analysis can be viewed as an application of the Delta Method.
Finally, Beta$(n/2+1, n/2+1)$ is approximately Normal for large $n$. There are many ways to see this; perhaps the simplest is to look at the logarithm of its PDF near $1/2$:
$$\log\left(C(1/2 + x)^{n/2}(1/2-x)^{n/2}\right) = \frac{n}{2}\log\left(1-4x^2\right) + C' = C'-2nx^2 +O(x^4).$$
(The constants $C$ and $C'$ merely normalize the total area to unity.) Through third order in $x,$ then, this is the same as the log of the Normal PDF with variance $1/(4n).$ (This argument is made rigorous by using characteristic or cumulant generating functions instead of the log of the PDF.)
Putting this altogether, we conclude that
The distribution of the sample median has variance approximately $1/(4 n f(\mu)^2)$,
and it is approximately Normal for large $n$,
all provided the PDF $f$ is continuous and nonzero at the median $\mu.$
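A short simulation makes both the exact and asymptotic claims concrete. The sketch below uses the uniform distribution (so $F$ is the identity, $f(\mu)=1$, and $G(F(x))$ is just a Beta CDF); for odd $n = 2m+1$ the sample median is exactly Beta$(m+1, m+1)$, which matches the text's Beta$(n/2+1, n/2+1)$ up to the even/odd bookkeeping. The sample sizes and seed are arbitrary choices.

```python
import random
import statistics

random.seed(1)
n, m, trials = 15, 7, 20000
# median of n = 2m+1 uniform draws, repeated many times
medians = [sorted(random.random() for _ in range(n))[m] for _ in range(trials)]

mean = statistics.fmean(medians)
var = statistics.variance(medians)

a = m + 1                                          # Beta(8, 8) for n = 15
beta_var = a * a / ((2 * a) ** 2 * (2 * a + 1))    # exact variance, about 0.0147
asym_var = 1 / (4 * n)                             # 1/(4 n f(mu)^2) with f(mu) = 1
```

The simulated mean sits at the true median $1/2$, the simulated variance matches the exact Beta variance, and the Beta variance is already within about 12% of the asymptotic $1/(4n)$ at $n=15$.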
|
Second-Order Systems, Part I: Boing!!
Spring-mass-damper systems are fairly common; you’ve seen these before, whether you realize it or not. One household example of these is the spring doorstop (BOING!!):
(For what it’s worth: the spring doorstop appears to have been invented by Frank H. Chase during World War I.)
The mechanical systems may be more fun. But the electrical circuits are more relevant to embedded systems, so let’s start there.
Differential equations
Consider a series LRC circuit:
The equations for this system are
$$ \begin{aligned} C\frac{dV_C}{dt}&=I, \cr L\frac{dI}{dt}&=V_{in}-V_C-IR \end{aligned} $$
If you solve these equations for \( V_C \), you get
$$ LC\frac{d^2V_C}{dt^2}+RC\frac{dV_C}{dt}+V_C=V_{in} $$
The time-domain derivative operator \( \frac{d}{dt} \) is isomorphic to multiplication by the Laplace domain variable \( s \), so in the Laplace domain, we have
$$H(s) = \frac{V_C}{V_{in}} = \frac{1}{LCs^2 + RCs + 1}$$
Hey, look! It’s a second-order system with a DC gain of 1. What else can we figure out about this system?
There are two standard forms of this type of second-order system, depending on whether you want to think of the timescale in terms of a time constant or a natural frequency:
$$H(s) = \frac{1}{\tau^2s^2 + 2\zeta\tau s + 1} = \frac{{\omega_n}^2}{s^2 + 2\zeta\omega_n s + {\omega_n}^2}$$
With our LRC circuit, this gives us \( \tau = \sqrt{LC} \), \( \omega_n = \frac{1}{\sqrt{LC}} \) and \( \zeta = \frac{R}{2}\sqrt{\frac{C}{L}} \).
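This parameter mapping is easy to put into a helper function. The sketch below is a minimal illustration; the component values in the example are arbitrary choices, not from the article.

```python
import math

def second_order_params(L, R, C):
    """tau, omega_n and zeta for the series LRC low-pass above."""
    tau = math.sqrt(L * C)                 # tau = sqrt(LC)
    omega_n = 1.0 / tau                    # omega_n = 1/sqrt(LC)
    zeta = (R / 2.0) * math.sqrt(C / L)    # zeta = (R/2) sqrt(C/L)
    return tau, omega_n, zeta

# e.g. L = 1 mH, R = 20 ohm, C = 1 uF:
tau, omega_n, zeta = second_order_params(L=1e-3, R=20.0, C=1e-6)
# omega_n is about 31623 rad/s and zeta about 0.316, i.e. underdamped
```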
The reason we put the transfer function into this form is because we can normalize the system response and scale the Laplace frequency to define \( \bar{s} = \tau s = s/\omega_n \):
$$H(\bar{s}) = \frac{1}{\bar{s}^2 + 2\zeta\bar{s} + 1}$$
This leaves only one parameter ζ. Second order systems with a constant numerator in the transfer function (no zeros) have a behavior that is completely determined by a timescale (natural frequency \( \omega_n \) or time constant τ) and damping factor (\( \zeta \)). Let’s forget about the damping factor for a moment, and just take a look at how the system response varies with \( \omega_n \):
Same shape, different time scale. If we normalize the time scale \( \bar{t} = \omega_n t \), we get this:
Hey, great! It’s just like first-order systems: the natural frequency \( \omega_n \) is not an interesting parameter. All systems with the same damping factor but different values of \( \omega_n \) are exactly the same, just with different time scales.
Damping factor
So let’s focus on the damping factor ζ and choose a fixed value for the natural frequency. We’ll choose \( \omega_n = 2\pi \) and look what happens for different values of ζ:
Values of ζ that are less than 1.0 lead to
underdamped systems, which have an overshoot. Values of ζ that are greater than 1.0 lead to overdamped systems, which do not have an overshoot, and which settle more slowly. If ζ = 1.0 then the system is critically damped; this is the minimum value for ζ that does not have an overshoot.
How do we figure out the overshoot as a function of ζ ?
Poles and Overshoot
Let’s look again at the denominator of the transfer function, \( s^2 + 2\zeta\omega_n s + {\omega_n}^2 \). If we factor it, we can find the poles of the transfer function. So solve for \( s^2 + 2\zeta\omega_n s + {\omega_n}^2 = 0 \) using the quadratic formula:
$$ s = \frac{-2\zeta\omega_n \pm \sqrt{4\zeta^2{\omega_n}^2 - 4{\omega_n}^2} }{2} = \omega_n\left(-\zeta \pm \sqrt{\zeta^2 - 1}\right) $$
If \( \zeta > 1 \), both poles are real and the equation tells us how to compute them. (For example, if \( \zeta = 1.6 \), then the poles are at \( s=-0.351\omega_n \) and \( s=-2.849\omega_n \).)
If \( \zeta = 1 \), then both poles are at \( s=-\omega_n \).
If \( \zeta < 1 \) — the underdamped case — then the poles are a complex conjugate pair at \( s = \omega_n\left(-\zeta \pm j\sqrt{1-\zeta^2}\right) = -\alpha \pm j\omega_d \) where \( \alpha = \zeta\omega_n \) is the decay rate and \( \omega_d = \omega_n\sqrt{1-\zeta^2} \) is the damped natural frequency.
The underdamped case is the most interesting one. We can solve for the step response \( h_1(t) \) and impulse response \( h(t) = \frac{d}{dt}h_1(t) \) exactly; if you go through the grungy math, it turns out that
$$ \begin{aligned} h(t) &= u(t) \frac{\omega_n}{\sqrt{1-\zeta^2}} e^{-\alpha t} \sin \omega_d t \cr h_1(t) &= u(t) \left(1-\frac{\zeta}{\sqrt{1-\zeta^2}} e^{-\alpha t} \sin \omega_d t - e^{-\alpha t}\cos\omega_d t\right) \cr &= u(t) \left(1+\frac{1}{\sqrt{1-\zeta^2}} e^{-\alpha t} \sin (\omega_d t + \phi)\right) \end{aligned} $$
which is not too complicated. The nice thing is that we can make a couple of conclusions here:
At t = 0, the slope of the step response is zero. (In our LCR circuit, that’s because the rate of change in output voltage \( \frac{dV_C}{dt} = I/C \), and the inductor current starts off at zero.)
The local extrema (the points that are a local minimum or maximum) of the step response are the instants where its derivative (= the impulse response) is zero, which occurs when \( \sin \omega_d t = 0 \), namely at integer multiples of \( \frac{T}{2} = \frac{\pi}{\omega_d} \).
The point of maximum overshoot is the first extremum after \( t=0 \), and occurs at \( t=\frac{\pi}{\omega_d} \). For our example with \( \omega_n=2\pi \), at low values of ζ this occurs at t=0.5; as ζ increases, the damped natural frequency slows down a little bit.
The value of maximum overshoot is pretty easy to calculate; we just plug in \( t=\frac{\pi}{\omega_d} \), and since \( \sin\omega_d t = 0 \) there, we are left with
$$ h_1(t) = 1- e^{-\alpha t}\cos\omega_d t = 1+e^{-\alpha \frac{\pi}{\omega_d}}$$
and therefore the overshoot is the amount this exceeds 1.0, which is
$$ OV = e^{-\alpha \frac{\pi}{\omega_d}} = e^{-\pi \frac{\zeta}{\sqrt{1-\zeta^2}}}$$
If we want to solve this equation for the damping factor, we get \( \zeta = \sqrt{\frac{(\ln OV)^2}{\pi^2 + (\ln OV)^2}} \)
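The overshoot formula can be cross-checked against the closed-form step response $h_1(t)$ given earlier, by simply scanning the response for its peak. This is a sketch using only the standard library; the choice of $\omega_n = 2\pi$, the time span, and the grid resolution are arbitrary.

```python
import math

def overshoot_formula(zeta):
    # OV = exp(-pi * zeta / sqrt(1 - zeta^2))
    return math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta ** 2))

def overshoot_numeric(zeta, omega_n=2.0 * math.pi, t_end=5.0, steps=200_000):
    """Scan the closed-form underdamped step response for its peak."""
    alpha = zeta * omega_n
    omega_d = omega_n * math.sqrt(1.0 - zeta ** 2)
    k = zeta / math.sqrt(1.0 - zeta ** 2)
    peak = 0.0
    for i in range(steps + 1):
        t = t_end * i / steps
        h1 = 1.0 - math.exp(-alpha * t) * (k * math.sin(omega_d * t)
                                           + math.cos(omega_d * t))
        peak = max(peak, h1)
    return peak - 1.0
```

For example, at $\zeta = 0.5$ both approaches give an overshoot of about 16.3%.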
Note that the overshoot decreases rapidly with increasing ζ, and once you reach about ζ = 0.7, the overshoot is small and decreases more slowly. That’s an important factor for system design. Let’s say you are tuning a system and need to trade off the value of ζ against other design choices. If you can tolerate a small amount of overshoot, and your design tradeoffs would benefit from a smaller value of ζ, you don’t need an overdamped system; you can generally tolerate a ζ in the range of 0.7 - 0.9.
Summary and what’s next
We looked at second order systems of the form
$$H(s) = \frac{1}{\tau^2s^2 + 2\zeta\tau s + 1} = \frac{{\omega_n}^2}{s^2 + 2\zeta\omega_n s + {\omega_n}^2}$$
and examined features of the step response. This transfer function has a DC gain of 1, two poles, and no zeroes. Its timescale is determined by the natural frequency \( \omega_n \) and its shape is determined by the damping factor ζ. Systems with \( \zeta < 1 \) are underdamped and have two conjugate poles. Systems with \( \zeta > 1 \) are overdamped and have two real poles.
Other characteristics of the step response:
The poles are located at \( s=\omega_n\left(-\zeta \pm \sqrt{\zeta^2 - 1}\right) \) for overdamped systems and \( s=\omega_n\left(-\zeta \pm j\sqrt{1-\zeta^2}\right) \) for underdamped systems. For underdamped systems:
- the step response looks like \( 1 + \frac{1}{\sqrt{1-\zeta^2}}e^{-\alpha t} \sin(\omega_d t + \phi) \)
- the envelope of the step response decays with a rate of \( \alpha = \zeta\omega_n \)
- the damped oscillation (ringing) occurs at \( \omega_d = \omega_n\sqrt{1-\zeta^2} \)
- the overshoot is equal to \( OV = e^{-\alpha \frac{\pi}{\omega_d}} = e^{-\pi \frac{\zeta}{\sqrt{1-\zeta^2}}} \)
In Part II we’ll look at how the presence of a zero affects the behavior of second-order systems.
© 2014 Jason M. Sachs, all rights reserved.
|
Kolmogorov axioms
There are a few ways to axiomatize the subject of probability. This is one way of doing it:
$$P(E) \in \mathbb{R},\quad P(E) \ge 0$$ $$P(\Omega) = 1$$ $$P\left(\bigcup_{i=1}^nE_i\right) = \sum_{i=1}^n P(E_i) \text{ , where the sets $E_i$ are mutually exclusive}$$
Consequences
For this second result, let's start with a few facts: $$A \cap (B - A) = A \cap (B \cap A^c) = (A \cap A^c) \cap B = \emptyset$$ $$A \cup (B - A) = A \cup (B \cap A^c) = A \cup B$$ Now, let's make our assumptions. Let's say $A$ is completely contained in $B$. So, $A \cap B = A$ and $A \cup B = B$. $$P(A \cup (B - A)) = P(A) + P(B-A) \text{ (axiom 3, the sets are disjoint)}$$ $$P(B) = P(A) + P(B-A)$$ $$A \subseteq B \to P(B) \geq P(A) \text{ (axiom 1: all probabilities are $\geq 0$)}$$ From this, we can prove something else. Every event $S$ satisfies $S \subseteq \Omega$. Therefore, $P(S) \leq P(\Omega) = 1$. $$\forall S \subseteq \Omega, \boxed{0 \leq P(S) \leq 1}$$
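These consequences can be exercised on a tiny finite sample space, where a probability measure is just a set of nonnegative weights summing to one. The outcomes and weights below are arbitrary illustrative choices.

```python
from fractions import Fraction

# weights of the elementary outcomes; they sum to 1 (axiom 2)
weights = {"a": Fraction(1, 2), "b": Fraction(1, 3), "c": Fraction(1, 6)}

def P(event):
    """Probability of an event, i.e. a subset of the sample space."""
    return sum(weights[x] for x in event)

A = {"a"}
B = {"a", "b"}
Omega = set(weights)

assert P(A) + P(B - A) == P(B)   # additivity on the disjoint split of B
assert P(A) <= P(B)              # monotonicity: A contained in B
assert P(Omega) == 1             # axiom 2
assert all(0 <= P(S) <= 1 for S in (set(), A, B, Omega))
```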
|
We hope you’ve seen it many times. It’s on the covers of our books. It’s on our business cards and stationery. It’s even on a “sponsor a highway” sign on Route 9 in Natick, Massachusetts. But, do you really know what the logo is?
I’m talking about the L-shaped membrane. We’ve used various pictures of it ever since The MathWorks was founded almost twenty years ago, but it only recently became the official company logo. I’d like to tell you about its mathematical background.
The wave equation is a fundamental model in mathematical physics that describes how a disturbance travels through matter. If t is time and x and y are spatial coordinates with the units chosen so that the wave propagation speed is equal to one, then the amplitude of a wave satisfies the partial differential equation
\[\frac{\partial^2 u}{\partial t^2} = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\]
Periodic time behavior gives solutions of the form
\[\begin{gather}u(t,x,y) = \sin\left(\sqrt{\lambda}\, t\right) v(x,y), \text{ where }\\ \frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2} + \lambda v = 0\end{gather}\]
The quantities \(\lambda\) are the
eigenvalues and the corresponding functions \(v(x,y)\) are the eigenfunctions or modes of vibration. They are determined by the physical properties, the geometry, and the boundary conditions of each particular situation. Any solution to the wave equation can be expressed as a linear combination of these eigenfunctions. The square roots of the eigenvalues are resonant frequencies. A periodic external driving force at one of these frequencies will generate an unboundedly strong response in the medium.
The L-shaped region formed from three unit squares is interesting for several reasons. It is one of the simplest geometries for which solutions to the wave equation cannot be expressed analytically, so numerical computation is necessary. The 270º nonconvex corner causes a singularity in the solution. Mathematically, the gradient of the first eigenfunction is unbounded near the corner. Physically, a membrane stretched over such a region would rip at the corner. This singularity limits the accuracy of finite difference methods with uniform grids.
The simple physical situations involving waves on an L-shaped region include a vibrating L-shaped membrane, or tambourine, and a beach towel blowing in the wind, constrained by a picnic basket on one-fourth of the towel. A more practical example involves microwave waveguides. One such device is a waveguide-to-coax adapter. The active region is the channel with the H-shaped cross section visible at the end of the adapter. The ridges increase the bandwidth of the guide at the expense of higher attenuation and lower power-handling capability. Symmetry about the dotted lines in the contour plot of the electric field in the channel implies that only one-quarter of the cross section needs to be modeled and that the resulting geometry is our L-shaped region. The boundary conditions are not the same as the membrane problem, but the differential equation and the solution techniques are the same.
You can use classic finite difference methods to compute the eigenvalues and eigenfunctions of the L-shaped membrane in MATLAB with
n = 200
h = 1/n
A = delsq(numgrid('L',2*n+1))/h^2
lambda = eigs(A,12,0)
The resulting sparse matrix A has order 119201 and 594409 nonzero entries. The eigs function uses Arnoldi’s method from the MATLAB implementation of ARPACK to compute the first 12 eigenvalues. This takes only a little over a minute on a 1.4 GHz Pentium laptop. However, the corner singularity causes the computed eigenvalues to be accurate to only three or four significant digits. If you try for more accuracy with a finer mesh and a larger matrix, you soon exceed half a gigabyte of main memory.
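For readers without MATLAB, here is a rough Python/SciPy analogue of the delsq/numgrid/eigs computation (my own sketch, on a much coarser grid, so only the first couple of digits of each eigenvalue are reliable):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# 5-point Laplacian on the L-shaped region: the 2x2 square minus its
# upper-right unit square, with homogeneous Dirichlet boundary conditions.
n = 40                                  # grid intervals per unit square
h = 1.0 / n
m = 2 * n - 1                           # interior grid lines of the 2x2 square
I, J = np.meshgrid(np.arange(1, 2 * n), np.arange(1, 2 * n), indexing="ij")
mask = ~((I >= n) & (J >= n))           # drop the upper-right unit square
idx = -np.ones((m, m), dtype=int)
idx[mask] = np.arange(mask.sum())       # number the interior unknowns

rows, cols, vals = [], [], []
for i in range(m):
    for j in range(m):
        if not mask[i, j]:
            continue
        k = idx[i, j]
        rows.append(k); cols.append(k); vals.append(4.0)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < m and 0 <= b < m and mask[a, b]:
                rows.append(k); cols.append(idx[a, b]); vals.append(-1.0)

A = sp.csr_matrix((vals, (rows, cols))) / h**2
# Shift-invert about 0 to get the six smallest eigenvalues.
lam = np.sort(eigsh(A, k=6, sigma=0, return_eigenvectors=False))
```

With n = 40 the first computed eigenvalue comes out near the known value \(\lambda_1 \approx 9.64\); refining the grid improves this only slowly because of the corner singularity discussed above.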
For the L-shaped membrane and similar problems, a technique using analytic solutions to the underlying differential equation is much more efficient and accurate than finite difference methods. The building blocks are the fractional order Bessel functions and trig functions that yield eigenfunctions of circular sectors. Remember Pac-Man? How would Pac-Man vibrate? This simple graphics character from one of the earliest video games provides a two-dimensional test domain. When he was not chomping ghosts, Pac-Man was three-quarters of the unit disc. With polar coordinates \(r\) and \(\theta\), and parameters \(\alpha\) and \(\lambda\), the eigenfunctions of a circular sector are
\[v(r,\theta) = J\left(\alpha, \sqrt{\lambda} r\right) \sin(\alpha \theta)\]
The eigenvalues are determined by the requirement that \(v(r,\theta)\) vanish on the boundary of the sector. For Pac-Man, the straight edges have \(\theta = 0\) and \(\theta = 3\pi/2\), so we can satisfy the boundary conditions on these edges by choosing \(\alpha\) to be an integer multiple of two-thirds,
\[\alpha_j = \frac{2j}{3}, \quad j \in \mathbb{Z}\]
The circular portion of the boundary has \(r = 1\), so we can satisfy the boundary conditions on the circle by taking \(\lambda\) to be a square of a zero of a Bessel function,
\[J\left(\alpha, \sqrt{\lambda}\right) = 0 \]
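As a concrete sketch (my own, using SciPy), the smallest of these eigenvalues comes from the lowest admissible order \(\alpha_1 = 2/3\) and the first positive zero of \(J_{2/3}\):

```python
from scipy.special import jv
from scipy.optimize import brentq

# First eigenvalue of the Pac-Man (three-quarter disc) region: square the
# first positive zero of the Bessel function J_{2/3}.
alpha = 2.0 / 3.0
z = brentq(lambda x: jv(alpha, x), 2.5, 4.5)   # first positive zero of J_{2/3}
lam1 = z ** 2
```

The bracket [2.5, 4.5] is chosen because the first positive zero of \(J_\nu\) increases with the order \(\nu\), so for \(\nu = 2/3\) it lies between the first zeros of \(J_0\) (about 2.40) and \(J_1\) (about 3.83).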
Contour plots of the first six eigenfunctions show that two are symmetric about the center line, \(\theta = 3\pi / 4\); two are antisymmetric about the center line; and two are eigenfunctions of the quarter disc, reflected twice to fill out the entire region. Linear combinations of these circular sector basis functions provide accurate approximations to the eigenvalues and eigenfunctions of other regions with corners. Let
\[v(r,\theta) = \sum_j c_j J\left(\alpha_j, \sqrt{\lambda} r\right) \sin(\alpha_j \theta)\]
These functions are exact solutions to the eigenvalue differential equation. They also satisfy the boundary conditions on the two edges that meet at the origin. All that remains is to pick \(\lambda\) and the \(c_j\)’s so that the boundary conditions on the remaining edges are satisfied.
A least squares approach employing the matrix singular value decomposition is used to determine \(\lambda\) and the \(c_j\)’s. Pick \(m\) points, \((r_i, \theta_i)\), on the remaining edges of the boundary. Let \(n\) be the number of fundamental solutions to be used. Define an m-by-n matrix \(A\) with elements that depend upon \(\lambda\),
\[A_{i,j} = J\left(\alpha_j, \sqrt{\lambda}\, r_i\right) \sin(\alpha_j \theta_i), \quad i=1,\dots,m \text{ and } j=1,\ldots,n\]
Then, for any vector \(c\), the vector \(Ac\) is the vector of boundary values, \(v(r_i, \theta_i)\). We want to make \(\|Ac\|\) small without making \(\|c\|\) small. The SVD does this job. Let \(\sigma_n(A(\lambda))\) denote the smallest singular value of the matrix \(A(\lambda)\) and let \(\lambda_k\) denote the values of \(\lambda\) that produce local minima of the smallest singular value,
\[\lambda_k = \text{\(k\)-th minimizer }\sigma_n(A(\lambda))\]
Each \(\lambda_k\) approximates an eigenvalue of the region. The n-th column of the corresponding matrix \(V\) from the singular value decomposition provides the coefficient vector \(c\) for the linear combination.
The MATLAB function membrane uses an older version of this algorithm to compute eigenfunctions of the L-shaped membrane. Contour plots of the first six eigenfunctions show that the first, fifth, and sixth are symmetric about the center line; the second and fourth are antisymmetric about the center line; and the third is actually the first eigenfunction of a unit square, reflected into the other two squares.
The MATLAB function logo generates our company logo, a lighted, reflective, surface plot of a variant of the first eigenfunction. After being so careful to satisfy the boundary conditions, the logo uses only the first two terms in the sum. This artistic license gives the edge of the logo a more interesting, curved shape.
|
ISSN: 1078-0947
eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
July 2009 , Volume 23 , Issue 3
Abstract:
We consider the impact of noise on the stability and propagation of fronts in an excitable medium with a piecewise-smooth, discontinuous ignition process. In a neighborhood of the ignition threshold the system interacts strongly with noise; the front can lose monotonicity, resulting in multiple crossings of the ignition threshold. We adapt the renormalization group methods developed for coherent structure interaction, a key step being to determine pairs of function spaces for which the ignition function is Fréchet differentiable, but for which the associated semi-group, $S(t)$, is integrable at $t=0$. We parameterize a neighborhood of the front solution through a dynamic front position and a co-dimension one remainder. The front evolution and the asymptotic decay of the remainder are on the same time scale; the RG approach shows that the remainder becomes asymptotically small, in terms of the noise strength and regularity, and the front propagation is driven by a competition between the ignition process and the noise.
Abstract:
We introduce a class of non-symmetric bilinear forms on the $d$-dimensional canonical simplex, related to Fleming-Viot type operators.
Strong continuity, closedness and results in the spirit of Beurling-Deny criteria are established. Moreover, under suitable assumptions, we prove that the forms satisfy the Log-Sobolev inequality. As a consequence, regularity results for semigroups generated by a class of Fleming-Viot type operators are given.
Abstract:
We consider the existence and multiplicity of Riemannian metrics of prescribed scalar curvature and zero boundary mean curvature on the three-dimensional half sphere $(S^3_+,g_c)$ endowed with its standard metric $g_c$. Due to Kazdan-Warner type obstructions, conditions on the function to be realized as a scalar curvature have to be given. Moreover, the existence of critical points at infinity for the associated Euler-Lagrange functional makes the existence results harder to prove. However, it turns out that such noncompact orbits of the gradient can be treated as usual critical points once a Morse Lemma at infinity is performed. In particular, their topological contribution to the level sets of the functional can be computed. In this paper we prove that, under generic conditions on $K$, this topology at infinity is a lower bound for the number of metrics in the conformal class of $g_c$ having prescribed scalar curvature and zero boundary mean curvature.
Abstract:
In the space of $C^k$ piecewise expanding unimodal maps, $k\geq 1$, we characterize the $C^1$ smooth families of maps where the topological dynamics does not change (the "smooth deformations") as the families tangent to a continuous distribution of codimension-one subspaces (the "horizontal" directions) in that space. Furthermore such codimension-one subspaces are defined as the kernels of an explicit class of linear functionals. As a consequence we show the existence of $C^{k-1+Lip}$ deformations tangent to every given $C^k$ horizontal direction, for $k\ge 2$.
Abstract:
We present a topological method for the detection of normally hyperbolic type invariant sets for maps. The invariant set covers a sub-manifold without a boundary in $\mathbb{R}^k$. For the method to hold we only need to assume that the movement of the system transversal to the manifold has directions of topological expansion and contraction. The movement in the direction of the manifold can be arbitrary. The result is based on the method of covering relations and local Brouwer degree theory.
Abstract:
We study the existence of solution for the one-dimensional $\phi$-laplacian equation $(\phi(u'))'=\lambda f(t,u,u')$ with Dirichlet or mixed boundary conditions. Under general conditions, an explicit estimate $\lambda_0$ is given such that the problem possesses a solution for any $|\lambda|<\lambda_0$.
Abstract:
This paper proves the local well posedness of differential equations in metric spaces under assumptions that allow to comprise several different applications. We consider below a system of balance laws with a dissipative non local source, the Hille-Yosida Theorem, a generalization of a recent result on nonlinear operator splitting, an extension of Trotter formula for linear semigroups and the heat equation.
Abstract:
For two-dimensional Euler equation on the torus, we prove that the $L^\infty$ norm of the gradient can grow superlinearly for some infinitely smooth initial data. We also show the exponential growth of the gradient for finite time.
Abstract:
This paper deals with asymptotic profiles of solutions to the 2D viscous incompressible micropolar fluid flows in the whole space $R^2$. Based on the spectral decomposition of linearized micropolar fluid flows, the sharp algebraic time decay estimates of the micropolar fluid flows in $L_2$ and $L_\infty$ norms are obtained.
Abstract:
This paper is concerned with the homogenization of some particle systems with two-body interactions in dimension one and of dislocation dynamics in higher dimensions.
The dynamics of our particle systems are described by some ODEs. We prove that the rescaled "cumulative distribution function'' of the particles converges towards the solution of a Hamilton-Jacobi equation. In the case when the interactions between particles have a slow decay at infinity as $1/x$, we show that this Hamilton-Jacobi equation contains an extra diffusion term which is a half Laplacian. We get the same result in the particular case where the repulsive interactions are exactly $1/x$, which creates some additional difficulties at short distances.
We also study a higher dimensional generalisation of these particle systems which is particularly meaningful to describe the dynamics of dislocations lines. One main result of this paper is the discovery of a satisfactory mathematical formulation of this dynamics, namely a Slepčev formulation. We show in particular that the system of ODEs for particle systems can be naturally imbedded in this Slepčev formulation. Finally, with this formulation in hand, we get homogenization results which contain the particular case of particle systems.
Abstract:
In this paper, we introduce a penalisation method in order to approximate the solutions of the initial boundary value problem for a semi-linear first order symmetric hyperbolic system, with dissipative boundary conditions. The penalization is carefully chosen in order that the convergence to the wished solution is sharp, does not generate any boundary layer, and applies to fictitious domains.
Abstract:
A basic assumption of tiling theory is that adjacent tiles can meet in only a finite number of ways, up to rigid motions. However, there are many interesting tiling spaces that do not have this property. They have "fault lines", along which tiles can slide past one another. We investigate the topology of a certain class of tiling spaces of this type. We show that they can be written as inverse limits of CW complexes, and their Čech cohomology is related to properties of the fault lines.
Abstract:
We study the early stages of the nonlinear dynamics of pattern formation for unstable generalized thin film equations. For unstable constant steady states, we obtain rigorous estimates for the short- to intermediate-time nonlinear evolution which extends the mathematical characterization for pattern formation derived from linear analysis: formation of patterns can be bounded by the finitely many dominant growing eigenmodes from the initial perturbation.
Abstract:
In this paper, we study the asymptotic behavior and the convergence rates of solutions to the so-called $p$-system with nonlinear damping on quadrant $\mathbb{R^+}\times \mathbb{R^+}=(0,\infty)\times (0,\infty)$,
$$v_t - u_x = 0, \qquad u_t + p(v)_x = -\alpha u - g(u)$$
with the Dirichlet boundary condition $u|_{x=0}=0$ or the Neumann boundary condition $u_x|_{x=0}=0$. The initial data $(v_0,u_0)(x)$ has the constant states $(v_+,u_+)$ at $x=\infty$. In the case of null-Dirichlet boundary condition on $u$, we show that the corresponding problem admits a unique global solution $(v(x,t), u(x,t))$ and such a solution tends time-asymptotically to the corresponding nonlinear diffusion wave $(\bar{v}(x,t), \bar{u}(x,t))$ governed by the classical Darcy's law provided that the corresponding prescribed initial error function $(w_0(x), z_0(x))$ lies in $(H^3\times H^2)(\mathbb{R}^+)$ and $||v_0(x)-v_+||_{L^1}+||w_0||_3+||z_0||_2+||V_0||_5+||Z_0||_4$ is sufficiently small. Its optimal $L^\infty$ convergence rate is also obtained by using the Green function of the diffusion equation. In the case of null-Neumann boundary condition on $u$, the global existence of smooth solution with small initial data is obtained in both of the case of $v_0(0)= v_+$ and $v_0(0)\neq v_+$. The solution $(v(x,t), u(x,t))$ is proved to tend to $(\bar v(x,t), 0)$ as $t$ tends to infinity, and we also get the optimal $L^\infty$ convergence rate in the case of $v_0(0)= v_+$.
Abstract:
The paper is concerned with the equation $-\Delta_{h}u=f(u)$ on $S^d$ where $\Delta_{h}$ denotes the Laplace-Beltrami operator on the standard unit sphere $(S^d,h)$, while the continuous nonlinearity $f:\mathbb R\to \mathbb R$ oscillates either at zero or at infinity having an asymptotically critical growth in the Sobolev sense. In both cases, by using a group-theoretical argument and an appropriate variational approach, we establish the existence of $[{d}/{2}]+(-1)^{d+1}-1$ sequences of sign-changing weak solutions in $H_1^2(S^d)$ whose elements in different sequences are mutually symmetrically distinct whenever $f$ has certain symmetry and $d\geq 5$. Although we are dealing with a smooth problem, we are forced to use a non-smooth version of the principle of symmetric criticality (see Kobayashi-Ôtani, J. Funct. Anal. 214 (2004), 428-449). The $L^\infty$-- and $H_1^2$--asymptotic behaviour of the sequences of solutions are also fully characterized.
Abstract:
An IFS (iterated function system), $([0,1], \tau_{i})$, on the interval $[0,1]$, is a family of continuous functions $\tau_{0},\tau_{1}, ..., \tau_{d-1} : [0,1] \to [0,1]$. Associated to an IFS one can consider a continuous map $\hat{\sigma} : [0,1]\times \Sigma \to [0,1]\times \Sigma$, defined by $\hat{\sigma}(x,w)=(\tau_{X_{1}(w)}(x), \sigma(w))$ where $\Sigma=\{0,1, ..., d-1\}^{\mathbb{N}}$, $\sigma: \Sigma \to \Sigma$ is given by $\sigma(w_{1},w_{2},w_{3},...)=(w_{2},w_{3},w_{4},...)$ and $X_{k} : \Sigma \to \{0,1, ..., d-1\}$ is the projection on the coordinate $k$. A $\rho$-weighted system, $\rho \geq 0$, is a weighted system $([0,1], \tau_{i}, u_{i})$ such that there exists a positive bounded function $h : [0,1] \to \mathbb{R}$ and a probability $\nu$ on $[0,1]$ satisfying $ P_{u}(h)=\rho h, \quad P_{u}^{*}(\nu)=\rho \nu$.
A probability $\hat{\nu}$ on $[0,1]\times \Sigma$ is called holonomic for $\hat{\sigma}$, if, $ \int\, g \circ \hat{\sigma}\, d\hat{\nu}= \int \,g \,d\hat{\nu}, \, \forall g \in C([0,1])$. We denote the set of holonomic probabilities by $\mathcal H$.
For a holonomic probability $\hat{\nu}$ on $[0,1]\times \Sigma$ we define the entropy $h(\hat{\nu})=\inf_{f \in \mathbb{B}^{+}} \int \ln\left(\frac{P_{\psi}f}{\psi f}\right) d\hat{\nu}\geq 0$, where $\psi \in \mathbb{B}^{+}$ is a fixed (any one) positive potential.
Finally, we analyze the problem: given $\phi \in \mathbb{B}^{+}$, find solutions of the maximization problem $p(\phi)= \sup_{\hat{\nu} \in \mathcal{H}} \{\, h(\hat{\nu}) + \int \ln(\phi)\, d\hat{\nu} \,\}.$ We show an example where a holonomic not-$\hat{\sigma}$-invariant probability attains the supremum value. In the last section we consider maximizing probabilities, sub-actions and duality for potentials $A(x,w)$, $(x,w)\in [0,1]\times \Sigma$, for IFS.
Abstract:
The pre-image topological pressure is defined for bundle random dynamical systems. A variational principle for it has also been given.
Abstract:
In this work, we analyze a 3-d dynamic optimal design problem in conductivity governed by the two-dimensional wave equation. Under this dynamic perspective, the optimal design problem consists in seeking the time-dependent optimal layout of two isotropic materials on a 2-d domain ($\Omega\subset\mathbb{R}^2$); this is done by minimizing a cost functional depending on the square of the gradient of the state function involving coefficients which can depend on time, space and design. The lack of classical solutions of this type of problem is well-known, so that a relaxation must be sought. We utilize a specially appropriate characterization of 3-d ($(t,x)\in\mathbb{R}\times\mathbb{R}^2$) divergence-free vector fields through Clebsch potentials; this lets us transform the optimal design problem into a typical non-convex vector variational problem, to which Young measure theory can be applied to compute explicitly the "constrained quasiconvexification" of the cost density. Moreover, this relaxation is recovered by dynamic (time-space) first- or second-order laminates. There are two main concerns in this work: the 2-d hyperbolic state law, and the vector character of the problem. Though these two ingredients have been previously considered separately, we put them together in this work.
Abstract:
Sufficient conditions on the function $\,f\,$ are given which ensure the boundedness of the solutions of the second order linear differential equation $\,u''+(1+f(t))\,u=0\,$ as $\, t\rightarrow +\infty\,$. To do this, a suitable class of quadratic forms is introduced.
Abstract:
Consider the functional differential equation (FDE) $\dot{x}(t)=f(x_t)$ with $f$ defined on an open subset of the space $C^1=C^1([-h,0],\mathbb{R}^n)$. Under mild smoothness assumptions, which are designed for the application to differential equations with state-dependent delays, the FDE generates a semiflow on a submanifold of $C^1$ with continuously differentiable time-$t$-maps. We show that at a stationary point continuously differentiable local center-stable manifolds of the semiflow exist. The proof uses results of Chen, Hale and Tan and of Krisztin about invariant manifolds of time-$t$-maps and their invariance properties with respect to the semiflow.
Abstract:
We prove that every compact plane billiard, bounded by a smooth curve, is insecure: there exist pairs of points $A,B$ such that no finite set of points can block all billiard trajectories from $A$ to $B$.
Abstract:
The goal of this paper is to present an approximation scheme for a reaction-diffusion equation with finite delay, which has been used as a model to study the evolution of a population with density distribution $u$, in such a way that the resulting finite dimensional ordinary differential system contains the same asymptotic dynamics as the reaction-diffusion equation.
Abstract:
In this paper, we study the variable-territory prey-predator model. We first establish the global stability of the unique positive constant steady state for the ODE system and the reaction diffusion system, and then prove the existence, uniqueness and stability of positive stationary solutions for the heterogeneous environment case.
Abstract:
This paper is concerned with the following non-periodic diffusion system
$\partial_tu-\Delta_x u+b(t,x)\cdot\nabla_x u+V(x)u=H_v(t,x,u,v)$ in $\mathbb{R}\times\mathbb{R}^N,$
$-\partial_tv-\Delta_x v-b(t,x)\cdot\nabla_x v+V(x)v=H_u(t,x,u,v)$ in $\mathbb{R}\times\mathbb{R}^N,$
$u(t,x)\to 0$ and $v(t,x)\to0$ as $|t|+|x|\to\infty.$
Assuming the potential $V$ is bounded and has a positive bound from below, existence and multiplicity of solutions are obtained for the system with asymptotically quadratic nonlinearities via variational approach.
Abstract:
It is shown that a trick introduced by H. R. Thieme [6] to study a one-species integral equation model with a nonmonotone operator can be used to show that some multispecies reaction-diffusion systems which are cooperative for small population densities but not for large ones have a spreading speed. The ideas are explained by considering a model for the interaction between ungulates and grassland.
|
A sample of $\pu{1.42 g}$ of helium and an unweighed quantity of oxygen gas are mixed in a flask at room temperature. The partial pressure of helium in the flask is $\pu{42.5 torr}$, and the partial pressure of oxygen gas is $\pu{158 torr}$. What is the mass of the oxygen in the container?
As the volume is not stated, I assume it is constant, and the temperature is constant too. Therefore, from the ideal gas law,
$$\frac{p}{n} = \frac{RT}{V}$$
For helium and oxygen gas, the above formula is applicable, and the mass of oxygen can be determined:
\begin{align} \frac{\pu{0.5592 atm}}{1.42/4.0026} &= \frac{\pu{0.20789 atm}}{m(\ce{O2})/31.99} \\ \to m(\ce{O2}) &= \pu{4.22 g} \end{align}
This is my attempt on finding the mass. However, I believe it's not right. Why is that the case?
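For comparison, here is a quick numeric re-derivation (my own sketch): since $p/n = RT/V$ is the same for both gases at fixed $T$ and $V$, the pressures enter only as a ratio, so no torr-to-atm conversion is needed at all:

```python
# Mole ratio follows the pressure ratio at fixed T and V (p/n = RT/V),
# so we can work directly in torr. Molar masses in g/mol.
M_He, M_O2 = 4.0026, 31.998
p_He, p_O2 = 42.5, 158.0          # partial pressures, torr
n_He = 1.42 / M_He                # moles of helium
n_O2 = n_He * (p_O2 / p_He)       # n is proportional to p
m_O2 = n_O2 * M_O2
print(round(m_O2, 1))             # -> 42.2
```

Note the factor of ten relative to the attempt above: $\pu{42.5 torr}$ is about $\pu{0.0559 atm}$, not $\pu{0.559 atm}$.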
|
The answer is no, there are transcendental numbers $\theta$ which aren't rational multiples of $\pi$ such that $\tan(\theta)$ is algebraic. Indeed, since $\tan$ is onto $\mathbb R$, there exists a value of $\theta$ for which $\tan(\theta)=\sqrt{2}$. On one hand, by the Lindemann-Weierstrass theorem, $\theta$ has to be transcendental, for otherwise $e^{i\theta}$ would be transcendental and the equality$$-\frac{i}{2}\frac{e^{i\theta}-e^{-i\theta}}{e^{i\theta}+e^{-i\theta}}=\tan(\theta)=\sqrt{2}$$would be impossible. So now we want to argue that $\theta$ is not a rational multiple of $\pi$. Let me only outline the solution.
Suppose $\theta$ was a rational multiple of $\pi$. Then, because $\tan$ is $\pi$-periodic, the sequence $\tan(2^n\theta)$ for $n=0,1,2,\dots$ would take only finitely many values (since it only depends on $n$ modulo the denominator of $\theta/\pi$). However, by repeatedly using the double-angle formula for the tangent, we can derive $\tan(2^n\theta)=\frac{p_n}{q_n}\sqrt{2}$, where the sequences $p_n,q_n$ are defined by $p_0=q_0=1,p_{n+1}=2p_nq_n,q_{n+1}=q_n^2-2p_n^2$ and we can easily check that $p_n,q_n$ are nonzero and relatively prime. But then the values of $\tan(2^n\theta)$ are all clearly pairwise distinct, since $|p_n|$ is strictly increasing, which is a contradiction.
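The recurrence is easy to check numerically (a sanity check of the algebra, not a proof; the loop bound and tolerance below are my own choices):

```python
import math

# Compare tan(2^n * theta) with (p_n/q_n) * sqrt(2) for theta = arctan(sqrt(2)),
# where p_{n+1} = 2 p_n q_n and q_{n+1} = q_n^2 - 2 p_n^2.
theta = math.atan(math.sqrt(2))
p, q = 1, 1
for n in range(6):
    lhs = math.tan((2 ** n) * theta)
    rhs = (p / q) * math.sqrt(2)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
    assert math.gcd(abs(p), abs(q)) == 1      # p_n, q_n stay coprime
    p, q = 2 * p * q, q * q - 2 * p * p
```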
Just for fun, we can now conclude that $\theta$ is not an algebraic multiple of $\pi$ either, for if $\theta=\alpha\pi$, then $\alpha$ would be irrational and we would deduce that $e^{i\theta}=(e^{i\pi})^\alpha=(-1)^\alpha$ is transcendental by the Gelfond-Schneider theorem.
|
I am interested in understanding how and whether the transformation properties of a (classical or quantum) field under rotations or boosts relate in a simple way to the directional dependence of the radiation from an oscillating dipole.
For example, the EM field from an oscillating electric dipole $\mathbf{d}(t) = q x(t) \hat{\mathbf{x}}$ pointing along the $x$-axis vanishes in the $x$-direction (in the far-field). On the other hand, the sound radiating from a similar acoustic dipole vanishes in the $y$-$z$ plane (in the far-field). This result makes total sense classically because EM radiation consists of transverse waves while acoustic radiation is carried by longitudinal waves (try drawing a picture if this is not immediately obvious). The same holds true even if the fields and dipoles under consideration are treated quantum mechanically.
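A minimal numeric illustration of the two angular patterns (my own sketch; the $\sin^2\theta$ and $\cos^2\theta$ far-field intensities are the standard textbook forms for transverse and longitudinal dipole radiation):

```python
import math

# Far-field intensity vs. the angle theta measured from the dipole axis:
# a transverse (EM) dipole radiates like sin^2(theta), null along the axis;
# a longitudinal (acoustic) dipole radiates like cos^2(theta), null in the
# plane perpendicular to the axis.
def em_dipole_intensity(theta):
    return math.sin(theta) ** 2

def acoustic_dipole_intensity(theta):
    return math.cos(theta) ** 2

# The nulls of the two patterns sit 90 degrees apart.
assert em_dipole_intensity(0.0) == 0.0
assert acoustic_dipole_intensity(math.pi / 2) < 1e-30
```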
Now, the acoustic (displacement) field is represented by a scalar field, while the EM field is a (Lorentz) vector field. This leads me to wonder if one can draw some more general conclusions about multipole radiation, based on the transformation properties of the field in question.
Given a tensorial or spinorial field of rank $k$, and a multipole source of order $l$, what is the asymptotic angular dependence of the resulting radiation? In particular, how does the radiation from a "Dirac" dipole look, assuming such a thing can even make sense mathematically?
By the latter I mean writing the classical Dirac equation
$$ (\mathrm{i}\gamma^{\mu}\partial_{\mu} - m)\psi(x) = J(x), $$
for some spinor-valued source field $J(x)$ corresponding to something like an oscillating dipole. (I understand that in general this term violates gauge invariance and so is unphysical, but I am hoping this fact doesn't completely invalidate the mathematical problem.)
Apologies to the expert field theorists if this is all nonsense :)
|
I am trying to understand the Feynman path integral by reading the book from Leon Takhtajan.
In one of the examples, there is a full explanation of the calculation of the propagator
$$K(\mathbf{q'},t';\mathbf{q},t) = \frac{1}{(2\pi\hbar)^n} \int_{\mathbb{R}^n} e^{\frac{i}{\hbar}(\mathbf{p}(\mathbf{q'}-\mathbf{q})-\frac{\mathbf{p}^2}{2m}T)} d^n\mathbf{p},\quad T=t'-t.$$
in the case of a free quantum particle with Hamiltonian operator
$$H_0 = \frac{\mathbf{P}^2}{2m},$$
and the solution is given by
$$K(\mathbf{q'},t';\mathbf{q},t) = \left(\frac{m}{2\pi i \hbar T}\right)^{\frac{n}{2}} e^{\frac{im}{2\hbar T}(\mathbf{q}-\mathbf{q'})^2}.$$
Could you please help me to understand how to perform the calculation in the case where the Hamiltonian is given by
$$H_1 = \frac{\mathbf{P}^2}{2m} + V(\mathbf{Q})$$
where $V(\mathbf{Q})$ is the potential defined by
$$ V(\mathbf{Q})=\left\{ \begin{array}{cc} \infty, & \mathbf{Q} \leq b \\ 0, & \mathbf{Q}>b. \\ \end{array} \right. $$
Update :
I've read the article provided by Trimok, and another one found in the references, but I am still annoyed by the way the propagator is computed. I may be mistaken, but it seems that in this kind of article, they always start the computation from scratch, without using what they already know about path integrals.
I am actually trying to write something about the use of path integrals in option pricing. From Takhtajan's book, I know that for a general Hamiltonian $H=H_0 + V(q)$ where $H_0 = \frac{P^2}{2m}$, the path integral in the configuration space (or more precisely the propagator) is given by
\begin{equation} \begin{array}{c} \displaystyle K(q',t';q,t) = \lim_{n\to\infty}\left(\frac{m}{2\pi\hbar i \Delta t}\right)^{\frac{n}{2}} \\ \displaystyle \times \underset{\mathbb{R}^{n-1}}{\int \cdots\int} \exp\left\{\frac{i}{\hbar}\sum_{k=0}^{n-1}\left(\frac{m}{2}\left(\frac{q_{k+1} - q_k}{\Delta t}\right)^2 - V(q_k)\right)\Delta t\right\} \prod_{k=1}^{n-1} dq_k.\\ \end{array} \end{equation} I would like to start my computation from this result, and avoid repeating once again the time slicing procedure. So du to the particular form of the potential, I think I can rewrite the previous equation as \begin{equation} \begin{array}{c} \displaystyle K(q',t';q,t) = \lim_{n\to\infty}\left(\frac{m}{2\pi\hbar i \Delta t}\right)^{\frac{n}{2}} \\ \displaystyle \times \int_0^{+\infty} \cdots\int_0^{+\infty} \exp\left\{\frac{i}{\hbar}\frac{m}{2}\sum_{k=0}^{n-1}\frac{(q_{k+1} - q_k)^2}{\Delta t}\right\} \prod_{k=1}^{n-1} dq_k.\\ \end{array} \end{equation} Then I need a trick to go back to full integrals over $\mathbb{R}$ and use what I already know on the free particle propagator. However, since the integrals are coupled, I don't find the right way to end the calculation and find the result provided by Trimok.
Could you please tell me if I am right or wrong? Thanks.
|
Second-order set theories (under construction)
$\text{ZFC}$, the usual first-order axiomatic set theory, can only manipulate sets, and cannot formalize the notion of a proper class (e.g. the class of all sets, $V$), so when one needs to manipulate proper class objects, it is tempting to switch to a second-order logic form of $\text{ZFC}$. However, because many useful theorems, such as Gödel's Completeness Theorem, do not apply to second-order logic theories, it is more convenient to use first-order two-sorted axiomatic theories with two types of variables, one for sets and one for classes. Two such "false" second-order theories, $\text{NBG}$ and $\text{MK}$, are the most widely used extensions of $\text{ZFC}$.
Throughout this article, "first-order (set theory) formula" means a formula in the language of $\text{ZFC}$, possibly with class parameters, but only set quantifiers. "Second-order (set theory) formula" means a formula in the language of $\text{NBG}$ and $\text{MK}$, i.e. it can contain class quantifiers.

Von Neumann-Bernays-Gödel set theory (commonly abbreviated as $\text{NBG}$ or $\text{GBC}$) is a conservative extension of $\text{ZFC}$ - that is, it proves the same first-order sentences as $\text{ZFC}$. It is equiconsistent with $\text{ZFC}$. However, unlike $\text{ZFC}$ and $\text{MK}$, it is possible to finitely axiomatize $\text{NBG}$. It was named after John von Neumann, Paul Bernays and Kurt Gödel.

Morse-Kelley set theory (commonly abbreviated as $\text{MK}$ or $\text{KM}$) is an extension of $\text{NBG}$ which is stronger than $\text{ZFC}$ and $\text{NBG}$ in consistency strength. It was named after John L. Kelley and Anthony Morse. It differs from $\text{NBG}$ only by a single axiom schema. It is not finitely axiomatizable.
It is possible to turn both theories into one-sorted first-order axiomatic theories with only class variables: a class $X$ is called a set (abbreviated $\text{M}X$) iff $\exists Y(X\in Y)$. One can also define $\text{M}X\equiv X\in V$ with $V$ a symbol of the language representing the universal class containing all sets. That is, a set is a class that is a member of another class.
Axioms of $\text{NBG}$ and $\text{MK}$
We will be using capital letters to denote classes and lowercase letters to denote sets. The following axioms are common to both $\text{NBG}$ and $\text{MK}$:
Extensionality: two classes are equal iff they have the same elements: $\forall X\forall Y(X=Y\iff\forall z(z\in X\iff z\in Y))$.
Regularity: every nonempty class is disjoint from one of its elements: $\forall X(\exists a(a\in X)\implies\exists x(x\in X\land\forall y(y\in x\implies y\not\in X)))$.
Infinity: there exists an infinite set. For example, $\exists x(\exists a(a\in x)\land\forall y(y\in x\implies\exists z(y\in z\land z\in x)))$.
Union: the union of the elements of a set is a set: $\forall x\exists y\forall z(z\in y\iff\exists w(z\in w\land w\in x))$.
Pairing: the pair of two sets is itself a set: $\forall x\forall y\exists z\forall w(w\in z\iff(w=x\lor w=y))$.
Powerset: the powerset of a set is a set: $\forall x\exists y\forall z(z\in y\iff\forall w(w\in z\implies w\in x))$.
Limitation of Size: a class $X$ is proper if and only if there is a bijection between $X$ and the universal class $V$.
The axiom of limitation of size implies the axiom of global choice ("there is a well-ordering of the universe") and $\text{ZFC}$'s axiom of replacement. Using the one-sorted version of those theories, pairing becomes $\forall X\forall Y(\text{M}X\land\text{M}Y\Rightarrow\exists Z(\text{M}Z\land X\in Z\land Y\in Z))$. The other axioms are modified similarly.
$\text{NBG}$ is the theory obtained by adding the following axiom schema:
Class comprehension: for every first-order formula $\varphi(x)$ with a free variable $x$ and class parameters, there exists a class containing exactly the sets $x$ such that $\varphi(x)$.
Note that the resulting class only contains sets; in particular, there is no class of all classes.
$\text{MK}$ is obtained by removing the "first-order" restriction in class comprehension, i.e. second-order formulas are now allowed.
Finitely axiomatizing $\text{NBG}$
$\text{NBG}$ can be finitely axiomatized, as its class comprehension schema can be replaced by the following finite set of axioms. For all classes $A$ and $B$:
The complement $V\setminus A=\{x:x\not\in A\}$ exists.
The intersection $A\cap B=\{x:x\in A\land x\in B\}$ exists.
The range $\{y:\exists x((x,y)\in A)\}$ exists, using $(x,y)=\{\{x\},\{x,y\}\}$.
The product $A\times B=\{(a,b):a\in A\land b\in B\}$ exists.
The classes $\{(x,y):x\in y\}$ and $\{(x,y):x=y\}$ exist.
The following classes exist: $\{(b,a):(a,b)\in A\}$, $\{(b,(a,c)):(a,(b,c))\in A\}$.
The following classes exist: $\{((a,b),c):(a,(b,c))\in A\}$, $\{(d,(a,(b,c))):(d,((a,b),c))\in A\}$.

Models of $\text{MK}$
In consistency strength, $\text{MK}$ is stronger than $\text{ZFC}$ and weaker than the existence of an inaccessible cardinal. It directly implies the consistency of $\text{ZFC}$ plus "there is a proper class of worldly cardinals stationary in $\text{Ord}$". However, if a cardinal $\kappa$ is inaccessible, then $V_{\kappa+1}\models\text{MK}$; moreover, $\text{def}(V_\kappa)$ satisfies $\text{NBG}$ minus global choice (i.e. with limitation of size replaced by $\text{ZFC}$'s axiom of replacement).
Because it uses classes, set models of $\text{MK}$ are generally taken to be the powerset of some model of $\text{ZFC}$. For this reason, a large cardinal axiom with $V_\kappa$ having some elementary property can be strengthened by instead using $V_{\kappa+1}$. When doing this with $\Pi_m^0$-indescribability, one achieves $\Pi_m^1$-indescribability (which is considerably stronger). When doing this with 0-extendibility (which is equiconsistent with $\text{ZFC}$), one achieves 1-extendibility.
Between $\text{NBG}$ and $\text{MK}$

Class forcing theorem and equivalents
Class determinacy of open games

This article is a stub. Please help us to improve Cantor's Attic by adding information.
|
The well-ordered replacement axiom is the scheme asserting that if $I$ is well-ordered and every $i\in I$ has unique $y_i$ satisfying a property $\phi(i,y_i)$, then $\{y_i\mid i\in I\}$ is a set. In other words, the image of a well-ordered set under a first-order definable class function is a set.
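Stated as a scheme, one instance for each first-order formula $\phi$ (possibly with parameters), this reads:

```latex
% Well-ordered replacement, one instance per formula phi:
\bigl(I \text{ is well-ordered} \;\land\;
  \forall i\in I\ \exists!\, y\ \phi(i,y)\bigr)
\;\longrightarrow\;
\exists s\ \forall i\in I\ \exists y\in s\ \phi(i,y)
```

The set $\{y_i\mid i\in I\}$ is then extracted from $s$ by separation.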
Alfredo had introduced the theory Zermelo + foundation + well-ordered replacement, because he had noticed that it was this fragment of ZF that sufficed for an argument we were mounting in a joint project on bi-interpretation. At first, I had found the well-ordered replacement theory a bit awkward, because one can only apply the replacement axiom with well-orderable sets, and without the axiom of choice, it seemed that there were not enough of these to make ordinary set-theoretic arguments possible.
But now we know that in fact, the theory is equivalent to ZF.
Theorem. The axiom of well-ordered replacement is equivalent to full replacement over Zermelo set theory with foundation.
$$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{well-ordered replacement}$$
Proof. Assume Zermelo set theory with foundation and well-ordered replacement.
Well-ordered replacement is sufficient to prove that transfinite recursion along any well-order works as expected. One proves that every initial segment of the order admits a unique partial solution of the recursion up to that length, using well-ordered replacement to put them together at limits and overall.
Applying this, it follows that every set has a transitive closure, by iteratively defining $\cup^n x$ and taking the union. And once one has transitive closures, it follows that the foundation axiom can be taken either as the axiom of regularity or as the $\in$-induction scheme, since for any property $\phi$, if there is a set $x$ with $\neg\phi(x)$, then let $A$ be the set of elements $a$ in the transitive closure of $\{x\}$ with $\neg\phi(a)$; an $\in$-minimal element of $A$ is a set $a$ with $\neg\phi(a)$, but $\phi(b)$ for all $b\in a$.
Another application of transfinite recursion shows that the $V_\alpha$ hierarchy exists. Further, we claim that every set $x$ appears in the $V_\alpha$ hierarchy. This is not immediate and requires careful proof. We shall argue by $\in$-induction using foundation. Assume that every element $y\in x$ appears in some $V_\alpha$. Let $\alpha_y$ be least with $y\in V_{\alpha_y}$. The problem is that if $x$ is not well-orderable, we cannot seem to collect these various $\alpha_y$ into a set. Perhaps they are unbounded in the ordinals? No, they are not, by the following argument. Define an equivalence relation $y\sim y’$ iff $\alpha_y=\alpha_{y’}$. It follows that the quotient $x/\sim$ is well-orderable, and thus we can apply well-ordered replacement in order to know that $\{\alpha_y\mid y\in x\}$ exists as a set. The union of this set is an ordinal $\alpha$ with $x\subseteq V_\alpha$ and so $x\in V_{\alpha+1}$. So by $\in$-induction, every set appears in some $V_\alpha$.
The argument establishes the principle: for any set $x$ and any definable class function $F:x\to\text{Ord}$, the image $F\mathrel{\text{”}}x$ is a set. One proves this by defining an equivalence relation $y\sim y’\leftrightarrow F(y)=F(y’)$ and observing that $x/\sim$ is well-orderable.
We can now establish the collection axiom, using a similar idea. Suppose that $x$ is a set and every $y\in x$ has a witness $z$ with $\phi(y,z)$. Every such $z$ appears in some $V_\alpha$, and so we can map each $y\in x$ to the smallest $\alpha_y$ such that there is some $z\in V_{\alpha_y}$ with $\phi(y,z)$. By the observation of the previous paragraph, the set of $\alpha_y$ exists and so there is an ordinal $\alpha$ larger than all of them, and thus $V_\alpha$ serves as a collecting set for $x$ and $\phi$, verifying this instance of collection.
From collection and separation, we can deduce the replacement axiom. $\Box$
I’ve realized that this allows me to improve an argument I had made some time ago, concerning Transfinite recursion as a fundamental principle. In that argument, I had proved that ZC + foundation + transfinite recursion is equivalent to ZFC, essentially by showing that the principle of transfinite recursion implies replacement for well-ordered sets. The new realization here is that we do not need the axiom of choice in that argument, since transfinite recursion implies well-ordered replacement, which gives us full replacement by the argument above.
Corollary. The principle of transfinite recursion is equivalent to the replacement axiom over Zermelo set theory with foundation.
$$\text{ZF}\qquad = \qquad\text{Z} + \text{foundation} + \text{transfinite recursion}$$
There is no need for the axiom of choice.
|
This could relate to different applications, but my application of interest is in similarity-search systems based on high-dimensional feature vectors. In these systems, since search based on Euclidean distance is costly, the vectors are encoded as binary vectors. Then the search is done based on Hamming distance computation which is supposed to be faster than the Euclidean counterpart. My question is,
practically, how much is this speed-up for an average architecture?
More formally, suppose we have a database $\mathrm{X} = [\mathbf{x}_1, \cdots, \mathbf{x}_i, \cdots, \mathbf{x}_N]$ of $N$ vectors of dimension $n$, and a vector $\mathbf{y}$ of the same dimensionality. Now imagine two cases:
Vectors are real-valued, i.e., $\mathbf{x}_i \in \mathbb{R}^n$. We are interested in finding the $\mathbf{x}_i$ from $\mathrm{X}$ with the minimum Euclidean distance to $\mathbf{y}$, i.e.:
\begin{equation} i = \underset{{1 \leqslant i \leqslant N}}{\text{argmin}} ||\mathbf{y} - \mathbf{x}_i||_2. \end{equation}
Vectors are binary, i.e., $\mathbf{x}_i \in \mathcal{B}^n$ with $\mathcal{B} = \{0,1\}$. We are interested in finding the $\mathbf{x}_i$ from $\mathrm{X}$ with the minimum Hamming distance to $\mathbf{y}$, i.e.:
\begin{equation} i = \underset{{1 \leqslant i \leqslant N}}{\text{argmin}} \sum_{j=1}^n \big( \mathbf{y}(j) \oplus \mathbf{x}_i(j) \big), \end{equation} where $\oplus$ denotes the XOR operation.
Both of these operations have complexity $\mathcal{O}(Nn)$. However, the real-valued case requires floating-point arithmetic, while the binary case can use bitwise integer operations (XOR followed by a popcount), which are faster.
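As a rough illustration (a sketch, not a careful benchmark), both scans can be written in a few lines of NumPy; the binary vectors are packed eight bits per byte so that Hamming distance becomes XOR plus popcount. All sizes and names here are made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n = 10_000, 256          # database size and dimensionality (arbitrary)

# Case 1: real-valued vectors, brute-force Euclidean nearest neighbour.
X = rng.standard_normal((N, n)).astype(np.float32)
y = rng.standard_normal(n).astype(np.float32)
i_euclid = int(np.argmin(((X - y) ** 2).sum(axis=1)))  # argmin of squared L2

# Case 2: binary vectors packed 8 per byte, Hamming nearest neighbour.
Xb = np.packbits(rng.integers(0, 2, size=(N, n), dtype=np.uint8), axis=1)
yb = np.packbits(rng.integers(0, 2, size=n, dtype=np.uint8))
hamming = np.unpackbits(Xb ^ yb, axis=1).sum(axis=1)   # XOR, then popcount
i_hamming = int(np.argmin(hamming))
```

Timing these two loops (e.g. with `timeit`) on your own machine is the most honest way to get the constant factor, since it depends heavily on SIMD width, cache behaviour, and whether the library vectorizes the popcount.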
My question is, in practice, how are these complexities different? Can we say, very roughly though, that they have different constant factors and the ratio of these factors is a constant value? What would that constant value be then? Around 10 or 100 maybe? And how much is it architecture dependent?
|
I often encounter the integrals in the following form:
$\int_0^\infty{\rm Bessel}(ax)\cdot{\rm Bessel}(bx)\cdot f(cx)dx$,
where Bessel can be $J$, $N$, $H^{(1)}$, $H^{(2)}$, $I$, or $K$; and $f(x)$ can be $\sin(x)$, $e^x$, etc. For example,
$\int_0^\infty K_\nu(ax)I_\nu(bx)\cos(cx)dx=\frac{1}{2\sqrt{ab}}Q_{\nu-1/2}(\frac{a^2+b^2+c^2}{2ab})$,$\qquad{\rm Re}(a)>|{\rm Re}(b)|$, $c>0$, ${\rm Re}(\nu)>-1/2$
I already know the result of this integral because it is in Gradshtein & Ryzhik's book. Sometimes the integration is with respect to the order of the Bessel function.
Neither Mathematica 8 nor Maple 15 can do this kind of integral. When the integral involves two Bessel functions, or two other special functions, Mathematica and Maple usually cannot evaluate it even if the integral has a closed-form result. My questions are as follows:
Is there any general theory about how to do this kind of integrals? (I wonder how the authors of that book did the above integral.)
I know there is a Mathematica package, "HolonomicFunctions." It seems that this package can help, but it does not seem very straightforward to obtain the final result. This package can verify integrals with already-known results, but can it do new integrals such as the one above? Are there any better ways to deal with these integrals by computer?
|
The answer is
There is one valid solution which is $S=4, I=2, N=7, U=8, E=1, V=5, L=9$
Formatting as above
   427
*   27
------
  2989
  854
------
 11529
Proof
First we note that $N \times N \equiv L (\bmod 10)$, so immediately we cannot have $N=0,1,5,6$ as then $L=N$, which is not permitted. Also important is $I \times N \equiv S (\bmod 10)\qquad(1)$

We can thus consider the problem for different values of $N$. In the following, let $M_i$ represent the $i$th digit in the first multiplication (reading left to right).

$N=2 \Rightarrow L=4 \Rightarrow M_3 + S \equiv I (\bmod 10)$ and we must have $M_3 = S$, but then from (1) above that means that $4S \equiv 2I \equiv S (\bmod 10)$, which means $S=0$, and this cannot be since it is a leading digit.

$N=3 \Rightarrow L=9 \Rightarrow M_3 = S$, and using (1) we must have $6S \equiv 3I \equiv S (\bmod 10)$, which allows $S=2,4,6,8$ paired with $I=4,8,2,6$ respectively. Clearly, since the second multiplication has one less digit than the first, we must have $I < N$, so we can rule out all but the third of these possibilities; but then $623 \times 2$ has four digits instead of three, so this won't work.

$N=4 \Rightarrow L=6 \Rightarrow M_3 = S+1$, which means that $8S+4 \equiv 4I \equiv S (\bmod 10)$, which means $S=8$ and $I=7$, but this doesn't work since we must have $I<N$.

$N=7 \Rightarrow L=9 \Rightarrow M_3 \equiv S+4 (\bmod 10)$, which means that $14S+28 \equiv 7I \equiv S (\bmod 10)$, or more simply $3S \equiv 2 (\bmod 10)$, which gives $S=4$ and $I=2$. This gives a valid solution, with additionally $U=8$, $E=1$, $V=5$.

$N=8 \Rightarrow L=4 \Rightarrow M_3 \equiv S+6 (\bmod 10)$, which means that $16S + 48 \equiv 8I \equiv S (\bmod 10)$, or more simply $5S \equiv 2 (\bmod 10)$, which has no solutions.

Finally, $N=9 \Rightarrow L=1 \Rightarrow M_3 \equiv S+8 (\bmod 10)$, which gives us $18S+72 \equiv 9I \equiv S (\bmod 10)$, or more neatly $7S \equiv 8 (\bmod 10)$, which gives $S=4$ and $I=6$; but this gives four digits in the second multiplication instead of three.

Having checked all the cases, we find there is only one valid solution.
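The case analysis can also be confirmed by brute force. The sketch below assumes, reading off the layout and the digit assignment above, that the puzzle is $SIN \times IN$ with partial products $ILUL$ and $UVS$ and total $EEVIL$; if the actual puzzle words differ, the word list is the only thing to adjust:

```python
from itertools import permutations

def val(d, word):
    """Read a word as a base-10 number under digit assignment d."""
    return int(''.join(str(d[ch]) for ch in word))

solutions = []
for perm in permutations(range(10), 7):
    d = dict(zip('SINUEVL', perm))
    # Leading digits of SIN/IN, UVS, ILUL and EEVIL must be nonzero.
    if 0 in (d['S'], d['I'], d['U'], d['E']):
        continue
    sin, in_ = val(d, 'SIN'), val(d, 'IN')
    if (sin * d['N'] == val(d, 'ILUL')        # first partial product
            and sin * d['I'] == val(d, 'UVS')  # second partial product
            and sin * in_ == val(d, 'EEVIL')):  # total
        solutions.append(d)

print(solutions)   # should contain S=4, I=2, N=7, U=8, E=1, V=5, L=9
```

The search space is only $10 \cdot 9 \cdots 4 = 604{,}800$ assignments, so this runs in about a second and doubles as an independent check of uniqueness.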
|