I'm trying to measure the amplitude of a sinusoidal component in a noisy signal.
Since the signal frequency is known beforehand, I'm calculating the scalar product with sine and cosine base functions at the given frequency (see this thread: Measuring amplitude of a spectral component):
$$a = \left<y,\cos(2\pi t f_c/f_s)\right>$$
$$b = \left<y,\sin(2\pi t f_c/f_s)\right>$$
Now the problem that I've noticed is that when I'm running the ADC at half speed (312 ksps) I get less noise (less variance over multiple measurements, with the input signal stable and all other parameters unchanged) than at full speed (625 ksps) - in fact, the standard deviation doubles.
I know that this noise is coming from the digital processing, not from the ADC chip, because if I run the chip at full speed with 2x decimation (while keeping the same number of samples), the standard deviation is the same as running the chip at half speed.
I tried to investigate the source of the variance. Since the input signal is very stable (I know this from other measurements), I suspect the variance is coming from out-of-band noise.
I made a quick simulation to see how other frequencies around the base frequency affect the result. The base frequency was 7200 Hz. The curve below was generated by keeping the base functions constant and generating input signals with frequencies running from 7 to 8 kHz in small steps.
The zeros of the curve seem to follow $f_s/N_{samples}$. That is to say, if I take 65536 samples @ 625 ksps, the distance between the zeros is about 9.53 Hz. With the same number of samples @ 312 ksps, the distance is about 4.77 Hz. This means that the width of the central lobe (where the signal pickup is maximal) is twice as wide in the case of the 625 ksps sample rate.
I suspect that this is why more noise is picked up (noise appearing as components around the signal frequency with varying amplitude).
Now I understand that with the same number of samples and the lower $f_s$ you see more cycles of the signal, while with the higher $f_s$ you see fewer cycles at higher resolution. Shouldn't these be equivalent?
How do I get rid of the noise?
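A minimal numerical sketch of the estimator described above (in Python, using the sample rate, signal frequency and sample count quoted in the question) shows the scalar products recovering the amplitude of a clean sinusoid; sweeping the input frequency with the base functions held fixed reproduces the lobe pattern with nulls spaced about $f_s/N$ apart:

```python
import numpy as np

fs = 625_000   # sample rate (sps), value from the question
fc = 7_200     # known signal frequency (Hz)
N = 65_536     # number of samples
t = np.arange(N)

# Base (reference) functions at the known frequency
c = np.cos(2 * np.pi * t * fc / fs)
s = np.sin(2 * np.pi * t * fc / fs)

def amplitude(y):
    # Scalar products a = <y, cos>, b = <y, sin>; the factor 2/N
    # turns them into an amplitude estimate
    a = np.dot(y, c)
    b = np.dot(y, s)
    return 2.0 * np.hypot(a, b) / N

# A unit-amplitude sinusoid at fc (arbitrary phase) is recovered almost exactly
y = np.cos(2 * np.pi * t * fc / fs + 0.3)
print(amplitude(y))   # ~1.0

# Sweeping the input frequency from 7 to 8 kHz while keeping the base
# functions fixed reproduces the lobe pattern with nulls ~fs/N apart.
```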
|
Update: This implementation is now a package called CompoundMatrixMethod, hosted on GitHub. It can be installed easily by evaluating:
Needs["PacletManager`"]
PacletInstall["CompoundMatrixMethod", "Site" ->
"http://raw.githubusercontent.com/paclets/Repository/master"]
This version also includes a function ToMatrixSystem, which converts a system of ODEs to matrix form (linearising if necessary), including the boundary conditions. This removes the need to set up the matrices directly and also specifies which variable is the eigenvalue, simplifying the notation. Please use the package rather than the code below.
I've written an implementation of the Compound Matrix Method that suits my purposes, and so I'll put it here for other people. A good explanation of this method is available here. Basically the Compound Matrix Method takes an $n$ by $n$ eigenvalue problem of the form $$\mathbf{y}' = A(x, \lambda) \mathbf{y}, \quad a \leq x \leq b, \\ B(x,\lambda) \mathbf{y} = \mathbf{0}, \quad x=a, \\ C(x,\lambda) \mathbf{y} = \mathbf{0}, \quad x=b,$$ and converts it to a larger system of determinants that satisfy a different matrix equation $$ \mathbf{\phi}' = Q(x, \lambda) \mathbf{\phi}.$$ This removes a lot of the stiffness from the equations, as well as removing the exponential growth terms that dominate away from an eigenvalue.
The code is written for general size $n$, and I've used it for $n=10$. The first time you run the code for a particular size $n$ the general form of matrix $\mathbf{Q}$ will be calculated, for $n=10$ this takes about 3 minutes for me, after that the matrix will be cached. The matching should be independent of the choice of matching point, but you can change it in the code to check that.
reprules = ϕ[a_List] :> Signature[a] ϕ[Sort[a]];
minorsDerivs[list_?VectorQ,len_?NumericQ] :=
Sum[Sum[AA[y, z] ϕ[list /. y -> z], {z, Union[Complement[Range[len], list], {y}]}], {y, list}] /. reprules
qComponents[n_?NumericQ, len_?NumericQ] := qComponents[n, len] =
Coefficient[Table[minorsDerivs[ii, len], {ii, Subsets[Range[len], {len/2}]}]
/. Thread[Subsets[Range[len], {len/2}] -> Range[Binomial[len, len/2]]], ϕ[n]]
Evans[{λ_/;!NumericQ[λ], λλ_?NumericQ}, Amat_?MatrixQ, bvec_?MatrixQ, cvec_?MatrixQ,
{x_ /;!NumericQ[x], xa_?NumericQ, xb_?NumericQ,xmatch_:False}] :=
Module[{ya, yb, ϕpa, ϕmb, valsleft, valsright, ϕpainit, ϕmbinit, posint,
negint, ϕmvec, ϕpvec, det, QQ, len, subsets,matchpt},
len = Length[Amat];
If[(xa <= xmatch <= xb && NumericQ[xmatch]), matchpt = xmatch, matchpt = (xb - xa)/2];
If[!EvenQ[len], Print["Matrix A does not have even dimension"]; Abort[]];
If[Length[Amat] != Length[Transpose[Amat]],Print["Matrix A is not a square matrix"]; Abort[]];
subsets = Subsets[Range[len], {len/2}];
ya = NullSpace[bvec];
If[Length[ya] != len/2, Print["Rank of matrix B is not correct"];Abort[]];
yb = NullSpace[cvec];
If[Length[yb] != len/2, Print["Rank of matrix C is not correct"];Abort[]];
ϕmvec = Table[ϕm[i][x], {i, 1, Length[subsets]}];
ϕpvec = Table[ϕp[i][x], {i, 1, Length[subsets]}];
ϕpa = (Det[Transpose[ya][[#]]] & /@ subsets);
ϕmb = (Det[Transpose[yb][[#]]] & /@ subsets);
valsleft = Select[Eigenvalues[Amat /. x -> xa /. λ -> λλ], Re[#] > 0 &];
valsright = Select[Eigenvalues[Amat /. x -> xb /. λ -> λλ], Re[#] < 0 &];
ϕpainit = Thread[Through[Array[ϕp, {Length[subsets]}][xa]] == ϕpa];
ϕmbinit = Thread[Through[Array[ϕm, {Length[subsets]}][xb]] == ϕmb];
QQ = Transpose[Table[qComponents[i, len], {i, 1, Length[subsets]}]] /.
AA[i_, j_] :> Amat[[i, j]] /. λ -> λλ;
posint = NDSolve[{Thread[D[ϕpvec,x] == (QQ - Total[Re@valsleft] IdentityMatrix[Length[QQ]]).ϕpvec], ϕpainit},
Array[ϕp, {Length[subsets]}], {x, xa, xb}][[1]];
negint = NDSolve[{Thread[D[ϕmvec,x] == (QQ - Total[Re@valsright] IdentityMatrix[Length[QQ]]).ϕmvec], ϕmbinit},
Array[ϕm, {Length[subsets]}], {x, xa, xb}][[1]];
det = Total@Table[ϕm[i][x] ϕp[Complement[Range[len], i]][x] (-1)^(Total[Range[len/2] + i]) //. reprules /.
Thread[subsets -> Range[Length[subsets]]], {i, subsets}];
Exp[-Integrate[Tr[Amat], {x, xa, matchpt}]] det /. x -> matchpt /. posint /. negint]
For a simple 2nd order eigenvalue problem, $y''(x) + \lambda^2 y(x) = 0, y(0)=y(L)=0$, the roots can be found analytically as $\lambda = n \pi/L, n \in \mathbb{Z}$. Here the matrix $A$ is
{{0, 1}, {-λ^2, 0}}, and the BCs are DiagonalMatrix[{1, 0}]:
Plot[Evans[{λ, λλ}, {{0, 1}, {-λ^2, 0}},
DiagonalMatrix[{1, 0}], DiagonalMatrix[{1, 0}], {x, 0, 2}], {λλ, 0.1, 20}]
Changing the boundary conditions is straightforward, so for a Robin BC like $y(0)+2y'(0)=0$ the corresponding matrix $B$ would be {{1, 2}, {0, 0}}.
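As an independent cross-check of the simple example above (not using the Compound Matrix Method), a quick shooting-method sketch in Python confirms the first root at $\pi/L$. The interval $L=2$ and the bracket $[1, 2]$ below match the range plotted above:

```python
import math

def miss(lam, L=2.0, steps=2000):
    """Shoot y'' = -lam^2 y from y(0)=0, y'(0)=1 with RK4 and return y(L).
    An eigenvalue is a value of lam where the far boundary y(L) = 0 holds."""
    h = L / steps
    y, yp = 0.0, 1.0
    f = lambda y, yp: (yp, -lam * lam * y)
    for _ in range(steps):
        k1 = f(y, yp)
        k2 = f(y + h / 2 * k1[0], yp + h / 2 * k1[1])
        k3 = f(y + h / 2 * k2[0], yp + h / 2 * k2[1])
        k4 = f(y + h * k3[0], yp + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    return y

# Bisect for the first root in [1, 2]; the exact answer is pi/L = pi/2.
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if miss(lo) * miss(mid) <= 0:
        hi = mid
    else:
        lo = mid
root = (lo + hi) / 2
print(root)   # ~1.5708
```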
For the first 4th order example in the linked notes $$\epsilon^4 y''''(x) + 2 \epsilon^2 \omega \frac{d}{dx}\left[\sin(x) \frac{dy}{dx}\right]+y =0, \\ y(0) = y''(0) = y'(\pi/2) = y'''(\pi/2) = 0,$$ the matrices are given by:
A1={{0,1,0,0}, {0,0,1,0}, {0,0,0,1}, {-1/ϵ^4, -2 ω Cos[x]/ϵ^2, -2 ω Sin[x]/ϵ^2, 0}};
B1 = DiagonalMatrix[{1,0,1,0}]; C1 = DiagonalMatrix[{0,1,0,1}];
Evans[{ω, 1}, A1 /. ϵ-> 0.1, B1, C1, {x, 0, Pi/2}]
(* -0.650472 *)
And we can then vary the value of $\omega$ to see the roots:
Plot[Evans[{ω, ωω}, A1 /.ϵ->0.1, B1, C1, {x, 0, Pi/2}], {ωω, 1, 3}]
For a 10x10 example similar to my original question (that has positive eigenvalues):
A2 = {{0, 1, 0, 0, 0, 0, 5, 0, -5, 0}, {0, 0, 1, 0, 0, 0, 0, 0, 0,
0}, {0, 0, 0, 1, 0, 0, 0, 0, 0, 0}, {-625 ω, -(125/2), 2,
0, 0, 3, -300, 0, 300, 0}, {0, 0, 0, 0, 0, 1, 0, 0, 0, 0}, {0, 0,
0, -1.5, 1/2, 0, 0, 0, 0, 0}, {0, 0, 0, 0, 0, 0, 0, 1, 0,
0}, {0, -169, 0, 0, 0, 0, 9175 + 694 ω, 0, 811, 0}, {0, 0,
0, 0, 0, 0, 0, 0, 0, 1}, {0, 672, 0, 0, 0, 0, 3222,
0, -709 + 694 ω, 0}};
B2 = C2 = DiagonalMatrix[{0, 1, 1, 0, 1, 0, 0, 1, 0, 1}];
Evans[{ω, 1}, A2, B2, C2, {x, 0, 1}]
(* 0.672945 *)
We can plot and see some positive eigenvalues:
ListPlot[Table[{ωω, Evans[{ω, ωω}, A2, B2, C2, {x, 0, 1}]}, {ωω, 0.1, 1, 0.01}]]
And then FindRoot will find one:
FindRoot[Evans[{ω, ωω}, A2, B2, C2, {x, 0, 1}],{ωω,0.5}]
The eigenfunctions can be extracted from this method if required, but I haven't coded that here. The subtraction of the dominant growth rates from $Q$ may not be suitable for all problems, but is really useful when it works. The code will also use exact arithmetic if you give exact numbers in the original matrices, so it'll be faster if you give approximate (machine-precision) numbers instead.
|
What is a normal?
We briefly mentioned normals in the first chapter of this lesson. A surface normal at a point P is a vector perpendicular to the tangent plane of the surface at P. We will learn more about computing normals when we get to the lessons on geometric primitives. For now, let's just say that if you know the tangent T and bi-tangent B of the surface at P (which together define the plane tangent to the surface at P), then you can compute the surface normal at P with a simple cross product between T and B:$$N = T \times B$$
Remember what we said about the cross product: it is anticommutative, which means that swapping its two arguments negates the result. In other words, \(T \times B = N\) and \(B \times T = -N\). In practice, this just means that you have to be careful to compute the normal so that it points away from the surface (for reasons we will explain when we get to the lessons on shading), but we will come back to this in other lessons.
Transforming Normals
You may ask why we don't simply treat normals as vectors. Why take the pain of distinguishing them? In the previous chapters, we learned to use matrix multiplication to transform points and vectors. The problem with normals is that we tend to assume that transforming them the same way we transform points and vectors will work. This is sometimes the case, for example when the matrix scales the normal uniformly (that is, when the values along the diagonal of the matrix, which we have learned encode the scale applied to the transformed point or vector, are all the same). But now consider the case where a non-uniform scale is applied to an object. Let's draw (in 2D) a line passing through the points A=(0, 1, 0) and B=(1, 0, 0), as illustrated in figure 1. If you draw another line from the origin to the coordinate (1, 1, 0), you can see that it is perpendicular to our line. Let's take this as our normal N (technically, we should normalize this vector, but not doing so is not a problem for this explanation). Now suppose we apply a non-uniform scale to the line using the following matrix:$$M=\begin{bmatrix}2&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}$$
This matrix scales the x-coordinate of any point (or vector) by 2 and leaves the other coordinates unchanged. Applied to our example, A'=A*M gives A'=(0, 1, 0), and B'=B*M gives B'=(2, 0, 0). Similarly, computing N'=N*M gives N'=(2, 1, 0). Now, if we draw the transformed line (through A' and B') together with N', we can see that N' is no longer perpendicular to A'B'. The solution for transforming normals is in fact not to multiply them by the matrix used for transforming points and vectors, but to multiply them by the transpose of the inverse of that matrix:
$$N'=N*M^{-1T}$$
Before considering the mathematical proof, let's first try to explain intuitively why this solution works. First, we know that normals represent directions, so like vectors they are not affected by translation. In other words, we can ignore the fourth column and fourth row of our [4x4] matrix and consider only the inner, upper-left [3x3] matrix, which we know encodes the rotation and the scale. We have also explained in this lesson that the transpose of an orthogonal matrix is also its inverse, and that rotation matrices are orthogonal. In other words, if Q is an orthogonal matrix, we can write:
\(Q^T=Q^{-1}\) therefore \(Q=Q^{-1T}\)
The transpose of the inverse of an orthogonal matrix Q is Q itself; in other words, it changes nothing. So taking the transpose of the inverse doesn't alter the elements of the matrix that encode rotation, and transforming a normal with this transposed inverse will rotate it exactly as the original matrix would (we want the normal to follow any rotation you apply to an object).
Question from a reader: "But the elements of the matrix \(M\) along the diagonal can encode rotations and scaling at the same time. So if scale and rotation are mixed in one single matrix, is the matrix still orthogonal?". If the scale differs from 1 in any dimension, it is not. However, you can view a matrix that encodes both rotation and scale as the product of two distinct matrices: one that encodes rotation only, \(R\), and one that encodes scale only, \(S\):
$$M=R * S$$
And the matrix on the left, \(R\), is orthogonal, so the statement that the transpose of its inverse \(R^{-1T}\) equals the matrix \(R\) itself holds true. All that remains in our demonstration is to see what happens to the matrix \(S\) when we take the transpose of its inverse.
The last elements of the matrix we haven't looked at yet are the numbers along its diagonal, which we know encode the scale. What happens to them when we take the transpose of the inverse? The transpose itself doesn't change the diagonal; only the inverse does. If a point is scaled by a factor of 4, we need to scale it by 0.25 (\(1 \over 4\), the inverse of the original scale factor) to bring it back to its original position. Similarly, the inverse of a scale matrix is obtained by inverting its scale factors. Applied to our example we get:$$M^{-1T}=\begin{bmatrix}1 \over 2&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{bmatrix}$$
If we apply this matrix to our normal N=(1, 1, 0), we get N'=(0.5, 1, 0). Drawing this vector next to the line A'B' (figure 2c), we can check that it is indeed perpendicular to the transformed line A'B'.
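Here is a small numerical check of this example (a sketch in Python with NumPy, using only the upper-left 3x3 block of M, since translation doesn't affect directions):

```python
import numpy as np

# Non-uniform scale from the text: x by 2, y and z unchanged
M = np.diag([2.0, 1.0, 1.0])

A = np.array([0.0, 1.0, 0.0])
B = np.array([1.0, 0.0, 0.0])
N = np.array([1.0, 1.0, 0.0])   # normal to the line AB (not normalized)

A2, B2 = A @ M, B @ M           # transformed endpoints
edge = B2 - A2                  # direction of the transformed line A'B'

naive   = N @ M                      # (2, 1, 0): transformed like a vector
correct = N @ np.linalg.inv(M).T    # (0.5, 1, 0): transpose of the inverse

print(np.dot(naive, edge))      # nonzero: no longer perpendicular
print(np.dot(correct, edge))    # 0: perpendicular to the transformed line
```

The same check works for any invertible M, not just diagonal scales.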
It is also possible to compute normals from the transformed vertices, but this technique can't be used with, for example, quadric shapes. Imagine a sphere rendered as a quadric: if you scale it along the x-axis by 2, you get an ellipsoid. Try to visualize what happens to the sphere's normals if you simply apply the original matrix to them. If you can compute the derivatives of a point on the surface (the tangent and bitangent), you can compute a transformed normal from those transformed derivatives, whatever type of geometric primitive you are dealing with. This is in fact the technique we will use in our basic renderer, but we won't always have access to these derivatives, so using the transpose of the inverse matrix remains the only valid technique in those cases.
Here is now the mathematical proof that the transpose of the inverse is actually what we need to transform normals. Remember that the dot product of two orthogonal vectors is 0. Note also that we can rewrite the dot product as a matrix multiplication of a [1x3] by a [3x1] matrix, which gives a [1x1] matrix: a single number, just like the dot product. If the dot product of two vectors is 0, then this matrix product of the same two vectors is also 0. Imagine two vectors orthogonal to each other at a point P: \(v\) lies in the plane tangent to the surface at P, and \(n\) is the normal at P. Their dot product is 0. Writing \(n\) as a [3x1] matrix (its transpose \(n^T\)) and \(v\) as a [1x3] matrix, the product \(v * n^T\) must also be 0, since the matrix multiplication formula is the same as the dot product formula in this case:$$v \cdot n = \begin{pmatrix}v_x & v_y & v_z\end{pmatrix}*\begin{pmatrix}n_x\\n_y\\n_z\end{pmatrix}=v * n^T = v_x n_x + v_y n_y + v_z n_z = 0$$
We can also write:$$v * n^T = v * M * M^{-1} * n^T = v * I * n^T$$
where \(M\) is the matrix we want to transform P with and \(I\) is the identity matrix. We know that a matrix multiplied by its inverse gives the identity, so the term \(M * M^{-1}\) we inserted in the middle of \(v * n^T\) changes nothing. However, let's see what we get by rearranging and rewriting the terms:$$v * n^T = (v*M) * (n*M^{-1T})^T$$
First, notice that the factor on the left, \(v*M\), is nothing other than \(v'\), the vector \(v\) transformed by the matrix \(M\). We said before that transforming normals with the matrix itself doesn't work, but it does work for vectors lying in the plane tangent to P. In other words:$$v' = v * M$$
In the second factor, we moved the matrix \(M^{-1}\) to the right of \(n^T\). This is possible only if we also transpose it, which is why we wrote \((n*M^{-1T})^T\). Remember that \((A * B)^T = B^T * A^T\). Finally we can write:$$v * n^T =v' * n'^T$$
This equality has to hold because the dot product of v and n must remain 0 after both vectors are transformed (the transformed normal must stay perpendicular to the transformed tangent). Thus, if \((n * M^{-1T})^T=n'^T\), then \(n'=n * M^{-1T}\).
|
At what distance from the Earth's surface does the curvature of the Earth become visible? What layer of the atmosphere is that?
I've noticed that at a height of 9-12 km (the view from aeroplanes) it is not visible.
It depends on your eye. You can notice the curvature of the Earth just by going to the beach. Last summer I was on a scientific cruise in the Mediterranean. I took two pictures of a distant boat within an interval of a few seconds: one from the lowest deck of the ship (left image), the other from our highest observation platform, about 16 m higher (right image):
A distant boat seen from 6 m (left) and from 22 m (right) above the sea surface. The boat was about 30 km away. My pictures, taken with a 30x optical zoom camera.
The part of the boat missing in the left image is hidden by the quasi-spherical shape of the Earth. In fact, if we knew the size of the boat and its distance, we could infer the radius of the Earth. But since we already know that, let's do it the other way around and deduce the distance at which we can see the full boat:
The distance $d$ from an observer $O$ at an elevation $h$ to the visible horizon follows the equation (adopting a spherical Earth):
$$ d=R\times\arctan\left(\frac{\sqrt{2\times{R}\times{h}}}{R}\right) $$
where $d$ and $h$ are in meters and $R=6370\times 10^{3}\,m$ is the radius of the Earth. The plot looks like this:
Distance of visibility d (vertical axis, in km), as a function of the elevation h of the observer above the sea level (horizontal axis, in m).
From just 3 m above the surface, you can see the horizon 6.2 km away. If you are 30 m high, you can see up to 20 km. This is one of the reasons why the ancient cultures, at least since the sixth century BC, knew that the Earth was curved, not flat; they just needed good eyes. You can read first-hand Pliny (1st century) on the unquestionable spherical shape of our planet in his Historia Naturalis.
But to address the question more precisely: noticing that the horizon is lower than normal (lower than the plane perpendicular to gravity) means noticing the angle $\gamma$ by which the horizon drops below the flat-Earth horizon (the angle between $OH$ and the tangent to the circle at O; see the cartoon below). This angle depends on the altitude $h$ of the observer, following the equation:
$$ \gamma=\frac{180}{\pi}\times\arctan\left(\frac{\sqrt{2\times{R}\times{h}}}{R}\right) $$
where $\gamma$ is in degrees; see the cartoon below.
Angle of the horizon below the flat-Earth horizon ($\gamma$, in degrees, on the vertical axis of this plot) as a function of the observer's elevation h above the surface (meters). Note that the apparent angular size of the Sun or the Moon is around 0.5 degrees.
So, at an altitude of only 290 m above sea level, you can already see 60 km far, and the horizon will be lower than normal by the angular size of the Sun (half a degree). While we are normally not capable of perceiving this small drop of the horizon, there is a cheap telescopic device called a levelmeter that allows you to sight in the direction perpendicular to gravity, revealing how much the horizon is lowered when you are only a few meters high.
When you are on a plane, ca. 10,000 m above sea level, you see the horizon 3.2 degrees below the astronomical horizon (O-H), that is, around 6 times the angular size of the Sun or the Moon, and (under ideal meteorological conditions) you can see to a distance of 357 km.
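The two formulas above are easy to check numerically; here is a short Python sketch using the same $R = 6370$ km as in the answer:

```python
import math

R = 6370e3  # Earth radius in meters, same value as in the answer

def horizon(h):
    """Return (distance to the horizon in m, horizon dip in degrees)
    for an observer at elevation h meters, assuming a spherical Earth."""
    gamma = math.atan(math.sqrt(2 * R * h) / R)   # dip angle in radians
    return R * gamma, math.degrees(gamma)

d, dip = horizon(10_000)     # airliner cruising altitude
print(d / 1000, dip)         # ~357 km and ~3.2 degrees

d3, _ = horizon(3)           # eye height on the beach
print(d3 / 1000)             # ~6.2 km
```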
Felix Baumgartner roughly doubled this number, but the pictures circulated in the news were taken with a very wide-angle lens, so the ostensible curvature of the Earth they suggest is mostly an artifact of the camera, not what Felix actually saw.
A quick Google turned up a published article answering precisely this question (Lynch, 2008). The abstract states:
Reports and photographs claiming that visual observers can detect the curvature of the Earth from high mountains or high-flying commercial aircraft are investigated. Visual daytime observations show that the minimum altitude at which curvature of the horizon can be detected is at or slightly below 35,000 ft, providing that the field of view is wide (60°) and nearly cloud free. The high-elevation horizon is almost as sharp as the sea-level horizon, but its contrast is less than 10% that of the sea-level horizon. Photographs purporting to show the curvature of the Earth are always suspect because virtually all camera lenses project an image that suffers from barrel distortion. To accurately assess curvature from a photograph, the horizon must be placed precisely in the center of the image, i.e., on the optical axis.
Note that the given minimum of 35,000 feet (10.7 km) is a plausible cruise altitude for a commercial airliner, but you probably shouldn't expect to see the curvature on a typical commercial flight, because the required conditions (a wide, roughly 60° field of view and a nearly cloud-free horizon) are rarely met from a passenger window.
Lynch, D. K. (2008). Visually discerning the curvature of the Earth.
Applied Optics, 47(34), H39-H43.
It's hard to see the curvature of the earth from an altitude of 7 miles or 37,000 ft (typical cruising altitude of a jetliner) but easy to see from 250 miles (typical altitude of the ISS).
The line of sight from an aircraft at 37,000 feet is 235 miles. That's only about 3.4 degrees of the Earth's surface. From the ISS at 250 miles, the line of sight is 1,435 miles, which covers about 19.8 degrees of the Earth's surface - much easier to see the curve from this altitude.
Most people don't realize how large the earth is compared to the altitude of a passenger aircraft. It's easy to think we're really high up, but comparatively we're just skimming the surface.
The attached drawing is to scale, but the images of the jetliner and ISS are NOT to scale (much, much larger than their actual sizes).
Further to DrGC's excellent answer, a subjective assessment of the visibility of the Earth's curvature can be gleaned from pilots' experience over many decades. It can be summarized as:
High up on a peak in Hawaii, surrounded by nothing but water in every direction, seeing the curvature can be really quite humbling. As far as the boat theory goes, it's not something I'd be able to use, considering I'm aware of the unnerving sizes of deep-sea swells and counting rogue waves; naturally, between those the boat is at a low point. Having parents that used to go deep-sea fishing often, spending over a week out at sea, the swells are... huge.
Isn't the amount of curvature available to see reduced by viewing it at a very shallow angle, i.e. multiplied by the sine of that small angle? At 35,000 feet the horizon is 229 miles away and 440 miles long given the maximum human field of view of 110 degrees (not attainable in practice), so the curvature depth is 78 miles; but because of the flatness of the view, it foreshortens to about 2.4 miles (and much less with a narrower field of view). To resolve 2.4 miles at a distance of 229 miles over a 440-mile span is going some, and it is perhaps about 1 mile or less in practice through a window. Using a telescope does not help, as all it does is shrink the field of view proportionately.
|
I have been working on an old problem in one of my finance classes and, since no solution has been provided and I won't be able to contact my teacher anytime soon, I was hoping I could ask you guys to give me some feedback on my solution.
Here's the problem:
Consider a measure Q under which the dynamics of St are:
$\frac{dS_{t}}{S_{t}}=rdt+\sigma dW_{t}^{Q}$
where $W_{t}^{Q}$ is a Brownian motion under Q. Solve the boundary value problem you found above via an equivalent martingale method under the measure Q, i.e.
$P(t,S_{t})=e^{-r(T-t)}E_{t}^{Q}[P(T,S_{T})]$
Here is my attempt at a solution:
$E_{t}^{Q}[P(T,S_{T})]=E_{t}^{Q}[K\times 1_{S_{T}\leq K}]-E_{t}^{Q}[S_{T}\times 1_{S_{T}\leq K}]$ where 1 is an indicator function
$E_{t}^{Q}[P(T,S_{T})]=K\times Prob^{Q}(S_{T}\leq K)-E_{t}^{Q}[S_{t}e^{(r-\frac{\sigma ^{2}}{2})(T-t)+\sigma (W_{T}^{Q}-W_{t}^{Q})}\times 1_{S_{T}\leq K}]$
$E_{t}^{Q}[P(T,S_{T})]=K\times Prob^{Q}(S_{T}\leq K)-S_{t}e^{r(T-t)}E_{t}^{Q}[e^{-\frac{\sigma ^{2}}{2}(T-t)+\sigma (W_{T}^{Q}-W_{t}^{Q})}\times 1_{S_{T}\leq K}]$
Now, here comes the part where I am not sure whether I proceeded correctly.
Fact: $E[e^{yZ-0.5y^2}\times 1_{Z\geq -a}]=\Phi (y+a)$, for $Z\sim N(0,1)$
Proof (provided to us):
$E[e^{yZ-0.5y^2}\times 1_{Z\geq -a}]=\int_{-\infty }^{\infty }e^{yZ-0.5y^2}\times 1_{Z\geq -a}[\frac{1}{\sqrt{2\pi }}e^{-0.5Z^2}]dZ$ $=\frac{1}{\sqrt{2\pi }}\int_{-a}^{\infty }e^{-0.5(Z-y)^2}dZ=\Phi (y+a)$
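As a quick sanity check of this Fact (not part of the provided proof), a Monte Carlo estimate in Python, with arbitrary test values $y = 0.7$ and $a = 0.3$, reproduces $\Phi(y+a)$:

```python
import math
import random
from statistics import NormalDist

random.seed(1)
y, a = 0.7, 0.3          # arbitrary test values
n = 500_000

# Monte Carlo estimate of E[e^{yZ - y^2/2} 1_{Z >= -a}] for Z ~ N(0,1)
acc = 0.0
for _ in range(n):
    Z = random.gauss(0.0, 1.0)
    if Z >= -a:
        acc += math.exp(y * Z - 0.5 * y * y)

print(acc / n)                    # Monte Carlo estimate
print(NormalDist().cdf(y + a))    # Phi(y + a), the claimed value (~0.841)
```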
From this I went on to rewrite $1_{S_{T}\leq K}$:
$S_{T}\leq K$
$S_{t}e^{(r-\frac{\sigma ^{2}}{2})(T-t)+\sigma (W_{T}^{Q}-W_{t}^{Q})}\leq K$
$lnS_{t}+(r-\frac{\sigma ^2}{2})(T-t)+\sigma (W_{T}^{Q}-W_{t}^{Q})\leq lnK$
$\sigma (W_{T}^{Q}-W_{t}^{Q})\leq lnK -lnS_{t}-(r-\frac{\sigma ^2}{2})(T-t)$
$\sigma \sqrt{T-t}Z\leq ln\frac{K}{S_{t}}-(r-\frac{\sigma ^2}{2})(T-t)$
$Z \geq\frac{ln\frac{S_{t}}{K}+(r-\frac{\sigma ^2}{2})(T-t)}{\sigma \sqrt{T-t}}$
${ \frac{ln\frac{S_{t}}{K}+(r-\frac{\sigma ^2}{2})(T-t)}{\sigma \sqrt{T-t}}}\equiv a$
and so
$1_{Z\geq a}$
Therefore,
$E[e^{-\frac{\sigma ^{2}}{2}(T-t)+\sigma\sqrt{T-t}Z}\times 1_{Z\geq a}]=\Phi (y-a)$
Is it correct here to subtract $a$, instead of adding it as given in the proof?
Further:
$\Phi (y-a)=\sigma \sqrt{T-t}-\frac{ln\frac{S_{t}}{K}+(r-\frac{\sigma ^2}{2})(T-t)}{\sigma \sqrt{T-t}}$
$\Phi (y-a)=\frac{\sigma^2 (T-t)}{\sigma\sqrt{T-t}}\ - \frac{ln\frac{S_{t}}{K}+(r-\frac{\sigma ^2}{2})(T-t)}{\sigma \sqrt{T-t}}$
giving:
$\Phi (y-a)= -\frac{ln\frac{S_{t}}{K}+(r+\frac{\sigma ^2}{2})(T-t)}{\sigma \sqrt{T-t}}=\Phi (-d_{1})$
Finally:
$P(t,S_{t})=e^{-r(T-t)}[K \times Prob^{Q}(S_{T}\leq K)-S_{t}e^{r(T-t)}\Phi (-d_{1})]$
$=e^{-r(T-t)}K\Phi (-d_{2})-S_{t}\Phi (-d_{1})$
Which is the B-S put option formula, if I'm not mistaken. Again, as mentioned above, I am not sure if everything I did was correct, especially the application of the fact given above. I'd be grateful if someone more knowledgeable than me could quickly go over my solution and let me know if I went wrong anywhere :)
Thanks for any help!
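Not part of the question itself, but the final formula can be cross-checked numerically: the put price it gives should match a Monte Carlo estimate of $e^{-r(T-t)}E^{Q}[(K-S_{T})^{+}]$ under the stated dynamics. The parameter values below are arbitrary:

```python
import math
import random
from statistics import NormalDist

def bs_put(S, K, r, sigma, tau):
    # Black-Scholes European put: e^{-r tau} K Phi(-d2) - S Phi(-d1)
    d1 = (math.log(S / K) + (r + sigma**2 / 2) * tau) / (sigma * math.sqrt(tau))
    d2 = d1 - sigma * math.sqrt(tau)
    N = NormalDist().cdf
    return math.exp(-r * tau) * K * N(-d2) - S * N(-d1)

# Risk-neutral Monte Carlo: S_T = S exp((r - sigma^2/2) tau + sigma sqrt(tau) Z)
random.seed(0)
S, K, r, sigma, tau = 100.0, 100.0, 0.05, 0.2, 1.0   # arbitrary test values
n = 200_000
payoff = sum(
    max(K - S * math.exp((r - sigma**2 / 2) * tau
                         + sigma * math.sqrt(tau) * random.gauss(0.0, 1.0)), 0.0)
    for _ in range(n)
)
mc = math.exp(-r * tau) * payoff / n

print(bs_put(S, K, r, sigma, tau))   # ~5.57
print(mc)                            # should agree to within a few cents
```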
|
Now showing items 1-10 of 24
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV
(Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...
Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV
(Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...
Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV
(Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...
Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV
(Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...
Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV
(Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV
(American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...
Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV
(Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...
Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...
Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...
K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
|
J/ψ production and nuclear effects in p-Pb collisions at √sNN = 5.02 TeV
(Springer, 2014-02)
Inclusive J/ψ production has been studied with the ALICE detector in p-Pb collisions at the nucleon–nucleon center of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement is performed in the center of mass rapidity ...
Measurement of electrons from beauty hadron decays in pp collisions at √s = 7 TeV
(Elsevier, 2013-04-10)
The production cross section of electrons from semileptonic decays of beauty hadrons was measured at mid-rapidity (|y| < 0.8) in the transverse momentum range 1 < pT <8 GeV/c with the ALICE experiment at the CERN LHC in ...
Suppression of ψ(2S) production in p-Pb collisions at √sNN=5.02 TeV
(Springer, 2014-12)
The ALICE Collaboration has studied the inclusive production of the charmonium state ψ(2S) in proton-lead (p-Pb) collisions at the nucleon-nucleon centre of mass energy √sNN = 5.02TeV at the CERN LHC. The measurement was ...
Event-by-event mean pT fluctuations in pp and Pb–Pb collisions at the LHC
(Springer, 2014-10)
Event-by-event fluctuations of the mean transverse momentum of charged particles produced in pp collisions at √s = 0.9, 2.76 and 7 TeV, and Pb–Pb collisions at √sNN = 2.76 TeV are studied as a function of the ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV
(Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...
Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV
(American Physical Society, 2017-09-08)
The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...
Online data compression in the ALICE O$^2$ facility
(IOP, 2017)
The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...
Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV
(American Physical Society, 2017-09-08)
In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...
J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV
(American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Highlights of experimental results from ALICE
(Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...
|
Attainability of the fractional hardy constant with nonlocal mixed boundary conditions: Applications
1. Laboratoire d'Analyse Nonlinéaire et Mathématiques Appliquées, Université Abou Bakr Belkaïd, Tlemcen, Tlemcen 13000, Algeria
2. Département de Mathématiques, Université Ibn Khaldoun, Tiaret, Tiaret 14000, Algeria
3. Departamento de Matemáticas, Universidad Autonoma de Madrid, 28049 Madrid, Spain
The first goal of this paper is to study the attainability of the fractional Hardy constant
$\Lambda_{N}\equiv \Lambda_{N}(\Omega) := \inf\limits_{\{\varphi\in \mathbb{E}^{s}(\Omega, D),\ \varphi \ne 0\}}\dfrac{\frac{a_{d, s}}{2}\displaystyle\int_{\mathbb R^d}\int_{\mathbb R^d}\dfrac{|\varphi(x)-\varphi(y)|^{2}}{|x-y|^{d+2s}}\,dx\, dy}{\displaystyle\int_{\Omega}\frac{\varphi^2}{|x|^{2s}}\, dx},$
where $\Omega$ is a bounded domain of $\mathbb R^d$, $0<s<1$, $D\subset \mathbb R^d\setminus \Omega$ is an open set, $N = (\mathbb R^d\setminus \Omega)\setminus\overline{D}$, and $\mathbb{E}^{s}(\Omega, D) = \{ u \in H^s(\mathbb R^d):\, u = 0 \text{ in } D\}$. The second goal is to study the mixed Dirichlet-Neumann boundary problem associated to the minimization problem and related properties; precisely, the semilinear elliptic problem for the fractional Laplacian
$P_\lambda \equiv \begin{cases} (-\Delta)^s u = \lambda \dfrac{u}{|x|^{2s}} + u^p & \text{in } \Omega,\\ u > 0 & \text{in } \Omega,\\ \mathcal{B}_s u := u\chi_D + \mathcal{N}_s u\,\chi_N = 0 & \text{in } \mathbb R^d\setminus \Omega, \end{cases}$
where $N$ and $D$ are open sets in $\mathbb R^{d}\setminus\Omega$ with $N \cap D = \emptyset$ and $\overline{N}\cup \overline{D} = \mathbb R^{d}\setminus\Omega$, $d>2s$, $\lambda > 0$, $0<p\le 2_s^*-1$ with $2_s^* = \frac{2d}{d-2s}$, $(-\Delta)^s$ is the fractional Laplacian, and $\mathcal{N}_{s}$ is a nonlocal Neumann boundary condition.
Keywords: Fractional Laplacian, mixed boundary condition, Hardy inequality, doubly-critical problem.
Mathematics Subject Classification: Primary: 35R11, 35A15, 35A16; Secondary: 35J61, 47G20.
Citation: Boumediene Abdellaoui, Ahmed Attar, Abdelrazek Dieb, Ireneo Peral. Attainability of the fractional Hardy constant with nonlocal mixed boundary conditions: Applications. Discrete & Continuous Dynamical Systems - A, 2018, 38 (12): 5963-5991. doi: 10.3934/dcds.2018131
|
\(\newcommand{\norm}[1]{\left \lVert #1 \right \rVert}\)
Introduction
So in the world of practical optimization, especially with respect to applications in things like Machine Learning, it is super common to hear about the use of Gradient Descent. Gradient Descent is a simple recursive scheme used to find critical points (hopefully local optima!) of functions. This scheme takes the following form:
\begin{align}
x_{k+1} &= x_{k} - \alpha_k \nabla f(x_{k}) \end{align}
where $x_k \in \mathbb{R}^n$ is the $k^{th}$ estimate of a critical point for a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, $\alpha_k$ is called the stepsize, and $\nabla f(x)$ is the gradient of $f$ at some location $x$. The cool thing about using Gradient Descent is that it is fairly intuitive. Just thinking about the terms in the expression, you can view each improved estimate as taking the current estimate $x_k$ and stepping in the steepest direction from the current estimate, $-\nabla f(x_k)$, for some small distance $\alpha_k$ so that we hopefully get a new estimate $x_{k+1}$ closer to the actual critical point. Seems pretty intuitive to me!
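To make the scheme concrete, here is a minimal Python sketch of the recursion (my own illustration, not from the original post; the quadratic objective and the fixed stepsize are arbitrary choices):

```python
def gradient_descent(grad, x0, alpha, steps):
    """Iterate x_{k+1} = x_k - alpha * grad(x_k) with a fixed stepsize."""
    x = list(x0)
    for _ in range(steps):
        g = grad(x)
        x = [xi - alpha * gi for xi, gi in zip(x, g)]
    return x

# Example objective: f(x, y) = (x - 1)^2 + 2*(y + 3)^2, with gradient
# (2*(x - 1), 4*(y + 3)) and unique minimizer (1, -3).
grad_f = lambda x: [2.0 * (x[0] - 1.0), 4.0 * (x[1] + 3.0)]

x_min = gradient_descent(grad_f, [0.0, 0.0], alpha=0.1, steps=200)
print(x_min)  # converges to approximately [1.0, -3.0]
```

If the stepsize is too large relative to the curvature the iterates diverge instead, which is exactly why a careful choice of $\alpha_k$ shows up later in the analysis.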
Now it is neat to know we have some recursion we can use to arrive at a critical point, but what we really need to know is how quickly we will converge to such a location, because how else will we know if this strategy is useful? To take things further, we might also want to know if we are guaranteed to find a local minimum or even the global minimum... so some important things to worry about!
In this post, I'm going to walk through a convergence proof for the case where $f(\cdot)$ is strongly convex and Lipschitz smooth, with a Lipschitz constant $C$ that holds everywhere. This proof will highlight an interesting theoretical result when using Gradient Descent for subsets of problems that satisfy the above criteria. So with that, let us get started!
Theoretical Analysis
So the goal in this part of the post is to come up with some theoretical bounds on Gradient Descent that we can use to show convergence to a local minimum for some convex function $f: \mathbb{R}^n \rightarrow \mathbb{R}$. To get rolling on that, we should try to use our assumptions on $f$ of strong convexity and Lipschitz smoothness and bring them into a tangible, mathematical form we can use.
For both of the above assumptions, we can relate them to the Hessian of $f$, $\nabla^2 f(x)$, in the following manner:
\begin{align}
\nabla^2 f(x) &\succeq c I \tag{Strong Convexity}\\ \nabla^2 f(x) &\preceq C I \tag{Lipschitz Smoothness} \end{align}
where the above relationships hold for all $x \in \mathbb{R}^n$. Using Taylor’s Theorem and the above expressions, we can also find some convenient bounds on $f(x)$ $\forall x$. These bounds can be found to be the following:
\begin{align}
f(x) \geq f(y) + \nabla f(y)^T (x - y) + \frac{c}{2} \norm{x - y}^2 \tag{Strong Convexity}\\ f(x) \leq f(y) + \nabla f(y)^T (x - y) + \frac{C}{2} \norm{x - y}^2 \tag{Lipschitz Smoothness}\\ \end{align}
Dope! We have some inequalities that should become quite useful shortly! Now let us assume an optimal location $x^*$ exists such that $f(x^*) = \inf_{v \in \mathbb{R}^n} f(v)$. We can use this assumption to come up with the following useful inequality:
\begin{align*}
f(x^*) &= \inf_{v} f(v) \\ &= \inf_{v} f(x_k + v) \\ &\geq \inf_{v} \left(f(x_k) + \nabla f(x_k)^T v + \frac{c}{2} \norm{v}^2 \right) \\ &= f(x_k) - \frac{1}{2c} \norm{\nabla f(x_k)}^2 \tag{$\dagger$} \end{align*}
Using the definition of Gradient Descent, Lipschitz smoothness, and the choice of $\alpha_k = \frac{1}{C}$, we can obtain the other useful inequality:
\begin{align*}
f(x_k) &\leq f(x_{k-1}) + \nabla f(x_{k-1})^T \left(x_{k} - x_{k-1}\right) + \frac{C}{2} \norm{x_{k} - x_{k-1}}^2 \\ &= f(x_{k-1}) - \frac{1}{C}\nabla f(x_{k-1})^T \nabla f(x_{k-1}) + \frac{C}{2} \norm{\frac{1}{C} \nabla f(x_{k-1})}^2 \\ &= f(x_{k-1}) - \frac{1}{2C} \norm{\nabla f(x_{k-1})}^2 \tag{$\ddagger$} \end{align*}
Using both $(\dagger)$ and $(\ddagger)$, we can get close to our ultimate goal. We can make progress using the following steps:
\begin{align*}
f(x_k) &\leq f(x_{k-1}) - \frac{1}{2C} \norm{\nabla f(x_{k-1})}^2 \\ &\leq f(x_{k-1}) + \frac{c}{C}\left(f(x^*) - f(x_{k-1})\right) \\ &= \left(1 - r \right) f(x_{k-1}) + r f(x^*) \\ f(x_k) - f(x^*) &\leq \left(1 - r \right) f(x_{k-1}) - \left(1 - r \right) f(x^*) \\ &= \left(1 - r \right) \left(f(x_{k-1}) - f(x^*)\right) \\ &\leq \left(1 - r \right)^2 \left(f(x_{k-2}) - f(x^*)\right) \\ &\vdots \\ &\leq \left(1 - r \right)^k \left(f(x_{0}) - f(x^*)\right) \\ &\leq K \left(1 - r \right)^k \end{align*}
where $K > 0$ is some constant, $r = \frac{c}{C}$, and $0 \lt r \lt 1$. Now we must try to bound $\left(1 – r \right)^k$, which we can do in the following manner:
\begin{align*}
(1-r)^k &= \exp\left( \log (1 – r)^k\right) \\ &= \exp\left( k \log (1-r) \right) \\ &= \exp\left( -k \sum_{l=1}^{\infty} \frac{r^l}{l} \right) \\ &\leq \exp\left( -k r \right) \end{align*}
Using the above result, we can achieve the final bound to be
\begin{align*}
f(x_k) - f(x^*) &\leq K \left(1 - r \right)^k \\ &\leq K \exp\left( -k r \right) \end{align*}
Awesome! As we can see, we are able to bound the error between the minimum and our current estimate of the minimum by an exponentially decreasing term as the number of iterations increases! This shows that Gradient Descent, under the assumptions that $f(x)$ is Lipschitz smooth and strongly convex, leads to an exponential convergence to the local minimum. Note that since $f(x)$ is convex, the local minimum is also the global minimum! With that, we have found a pretty intriguing result!
Numerical Study
Just to check that this bound makes sense, I put together some basic code to show how the bound compares to the actual convergence. For this problem, I assumed $f(x) = (x - x^*)^T Q (x - x^*)$ where $Q$ was positive definite, and chose $K$ to be $f(x_0)$ since $f(x^*) = 0$. As the figure below shows, the bound certainly overestimates the error and yet still converges exponentially fast!
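The experiment described above is easy to reproduce in miniature. The sketch below is my own (the post's actual code is not shown); it assumes a diagonal $Q$ so that $c$, $C$ and $r = c/C$ can be read off directly, and checks the bound $f(x_k) - f(x^*) \leq K(1-r)^k$ at every iterate:

```python
def f(x, xstar):
    # f(x) = (x - x*)^T Q (x - x*) with Q = diag(1, 4), so f(x*) = 0
    return (x[0] - xstar[0]) ** 2 + 4.0 * (x[1] - xstar[1]) ** 2

def grad(x, xstar):
    return [2.0 * (x[0] - xstar[0]), 8.0 * (x[1] - xstar[1])]

# Hessian is 2Q = diag(2, 8), hence c = 2, C = 8, r = c/C = 1/4, alpha = 1/C
xstar, r, alpha = [1.0, -2.0], 0.25, 1.0 / 8.0
x = [4.0, 3.0]
K = f(x, xstar)  # K = f(x_0) - f(x^*) = f(x_0)

for k in range(1, 21):
    g = grad(x, xstar)
    x = [xi - alpha * gi for xi, gi in zip(x, g)]
    # the derived bound holds at every step (small slack for float rounding)
    assert f(x, xstar) <= K * (1.0 - r) ** k + 1e-12

print(f(x, xstar), K * (1.0 - r) ** 20)  # actual error is far below the bound
```

As in the figure, the actual error decays much faster than $K(1-r)^k$, but both shrink geometrically.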
Conclusion
As we can see in this post, having some extra assumptions on a function $f(x)$ we would like to minimize can result in some shocking convergence rates and potentially other useful properties! In this instance, we see that having a function $f(x)$ that is both Lipschitz smooth and strongly convex results in exponential convergence to the global minimum, and we have seen this hold true in a numerical experiment. It's always super cool to see how theory and practice mesh together, and it looks like in this instance we will be able to use it to better understand how to benefit from Gradient Descent in practice!
|
WP 34S vs. DM42 decimal128 differences?
03-20-2018, 09:15 AM (This post was last modified: 03-27-2018 09:51 AM by rkf.)
Post: #1
WP 34S vs. DM42 decimal128 differences?
Today I stumbled about a footnote in Walter's WP 34S Owner's manual, where at Page 319 the result of the Calculator Forensics Test is mentioned. To my big surprise, the DP difference between the test result, and 9, is
for WP 34S: -6.2465E-29
for DM42: -6.2466E-29
But why the difference of 1 ULP? For the other example (1.0000001^2^27), there isn't any difference between the two models, BTW.
03-20-2018, 09:27 AM
Post: #2
RE: WP 34S vs. DM42 decimal128 differences?
(03-20-2018 09:15 AM)rkf Wrote: Today I stumbled about a footnote in Walter's WP 34S Owner's manual, where at Page 319 the result of the Calculator Forensics Test is mentioned. To my big surprise, the DP difference between the test result, and 9, is
Both calculators use 34-digit precision, but this does not mean that they return the same results. The trig functions may be implemented differently, and there is no claim (at least for the 34s) that all 34 digits are correct. The 34s also has different round modes that can be set, which will affect the result as well.
Dieter
03-20-2018, 09:35 AM
Post: #3
RE: WP 34S vs. DM42 decimal128 differences?
I believe the 34S is correct here. Free42 uses the Intel decimal library which rounds some functions incorrectly.
Well, the 34S used to get this correct but there were some changes to the trig code to better handle cases near multiples of \( \frac{\pi}{2} \), it is possible they threw it off.
Pauli
03-26-2018, 09:51 PM
Post: #4
RE: WP 34S vs. DM42 decimal128 differences?
This is addressed to Pauli:
I've noticed that sin (pi radians) in double-precision mode on the WP34S is only correct to 17 significant figures. This disturbs me more than I would have expected (given that I normally have the calculator set to give me answers to 4 sf only!).
If I change SINCOSDIGITS in decn.c to 69, all calculated digits are correct. Trig functions are slower (more so in double precision than in single precision).
If I use 69 digits in the function sincosTaylor() when the calculator is in double precision mode, but 39 digits in single precision, I seem to get the best of both worlds - correct answers in both single and double precision, with no slow-down in single-precision mode.
Is this a change worth making (once I've done some more extensive testing, and looked at the code for tan x as well)? Or is there a downside other than speed to using these extra digits?
Nigel (UK)
03-26-2018, 10:43 PM
Post: #5
RE: WP 34S vs. DM42 decimal128 differences?
(03-26-2018 09:51 PM)Nigel (UK) Wrote: Is this a change worth making (once I've done some more extensive testing, and looked at the code for tan x as well)? Or is there a downside other than speed to using these extra digits?
I'd be concerned about overflowing the stack in other functions that call this. E.g. the gamma code can call this and gamma in turn is called from the statistical routines which are close to the limit already.
Using a smaller number of digits in single precision mode might also fall awry of this function's use elsewhere. If later digits carry an error they could cascade into the first sixteen.
I'll have to have a think about it...
Pauli
03-27-2018, 06:10 AM
Post: #6
RE: WP 34S vs. DM42 decimal128 differences?
(03-26-2018 09:51 PM)Nigel (UK) Wrote: This is addressed to Pauli:
Although I'm not Pauli but the original poster, I don't get the point here. Typing in Rad mode pi sin gives on both WP 34S DBLON, and DM42, a result of about -1.158E-34.
For me as a plain user this result seems to have nevertheless 34 significant figures, all of them zeroes (one before the decimal point, and 33 thereafter).
Or do you compare the (exact) result of taking sin of a number, which is exactly pi rounded to 34 significant figures (thus to be expected in the ballpark around +/- 1E-34) with the result the calculator gives, and then -1.158028306006248941790250554076922E-34 itself is only correct to 17 figures? But that would be no problem at all for me.
03-27-2018, 07:58 AM
Post: #7
RE: WP 34S vs. DM42 decimal128 differences?
(03-27-2018 06:10 AM)rkf Wrote: Or do you compare the (exact) result of taking sin of a number, which is exactly pi rounded to 34 significant figures (thus to be expected in the ballpark around +/- 1E-34) with the result the calculator gives, and then -1.158028306006248941790250554076922E-34 itself is only correct to 17 figures? But that would be no problem at all for me.
I'm counting significant figures from the first non-zero digit - i.e., what you say above.
I agree that it isn't likely to be a problem in any real-world application of the calculator. However, given that the WP34S has a double-precision mode it would be nice if the result was also correct to this level of precision, so long as this can be done without breaking the calculator's behaviour elsewhere. If you calculate sin (3.1415926536) on the HP28S (for example) you will get an answer correct to 12 non-zero significant digits, and Pauli has already adjusted the WP34S code to get cos (pi/2) (which is a similar case) correct to 34 sf. Most of the time the trig functions are correct to this level of precision, so trying to bring sin (pi) up to the same standard isn't unreasonable.
I think it would be nice to fix it, if it can be fixed.
Nigel (UK)
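[Editor's aside: the cancellation being discussed is easy to demonstrate with Python's standard `decimal` module. The sketch below is a toy, not the actual WP34S code: a Taylor-series sine, loosely analogous to `sincosTaylor`, evaluated at the 34-digit rounding of pi. Because the true answer is of order 1e-34 while the partial sums are of order 1, roughly 34 working digits are eaten by cancellation, which is why something like 69 internal digits is needed for a fully accurate 34-digit result.]

```python
from decimal import Decimal, getcontext

def sin_taylor(x, digits):
    """sin(x) by its Taylor series at roughly `digits` working precision
    (a toy stand-in for an internal high-precision sine routine)."""
    getcontext().prec = digits + 10           # a few guard digits
    x = Decimal(x)
    term, total, k = x, x, 1
    threshold = Decimal(10) ** -(digits + 10)
    while abs(term) > threshold:
        term *= -x * x / ((2 * k) * (2 * k + 1))  # next Taylor term
        total += term
        k += 1
    return total

# pi rounded to 34 significant digits -- the value a decimal128 machine stores
PI34 = '3.141592653589793238462643383279503'

# With 69 working digits the ~1e-34 result survives the cancellation:
print(sin_taylor(PI34, 69))  # ~ -1.15802830600624894179...E-34
```

The leading digits agree with the -1.158...E-34 value quoted earlier in the thread.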
03-27-2018, 08:22 AM
Post: #8
RE: WP 34S vs. DM42 decimal128 differences?
(03-27-2018 07:58 AM)Nigel (UK) Wrote: ... I'm counting significant figures from the first non-zero digit - i.e., what you say above. ...
OK - thanks for the clarification! BTW, this very situation is discussed in detail at p. 184 of the HP-15C Advanced Functions Handbook (although for the HP-15C's 10 displayed, and 13 internal digits - thus giving, depending on interpretation, either ten, or three significant digits). Since the WP 34S, and DM42 behave with respect to sin(pi) in complete accordance with the HP manual cited above, I'm fine with the status quo. :-)
03-27-2018, 09:42 AM
Post: #9
RE: WP 34S vs. DM42 decimal128 differences?
03-29-2018, 04:03 PM
Post: #10
RE: WP 34S vs. DM42 decimal128 differences?
To Pauli, and anyone else interested in this:
I've got full precision working correctly with just one 69-digit decNumber - x, in cvt_2rad_sincos. sincosTaylor itself isn't changed; its arguments are still pointers to SINCOS-digit decNumbers and the calculations in it are carried out to this precision.
I divide radian arguments to cvt_2rad_sincos by \(2\pi\) to 69-digit precision, and then do range reduction as with degrees and grads using fractions of a full circle as the thresholds. This means that any radian arguments that have a very small sine or cosine are mapped onto angles near zero, and so 69-digit precision isn't needed to calculate these functions correctly. I've included the constant 0.125 in consts.h and compile_consts.c for use as the \(45^\circ\) threshold.
I don't know whether the memory requirements will still be a problem. I've attached the modified source files in case you want to try them out yourself - if you don't, or you have a better approach of your own, that's fine as well!
Nigel (UK)
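[Editor's aside: the divide-by-\(2\pi\) range reduction described above can be sketched with Python's `decimal` module. This is a toy, not the WP34S source; the \(2\pi\) constant here is truncated to 40 decimal places, far fewer than the roughly 450 stored in the WP34S, so it only covers modest arguments.]

```python
from decimal import Decimal, getcontext

# 2*pi truncated to 40 decimal places; the stored constant's length bounds
# how large an argument can be reduced accurately
TWO_PI = Decimal('6.2831853071795864769252867665590057683943')

def reduce_angle(x, digits=69):
    """Map a non-negative angle x (radians) to its fraction of a full turn
    in [0, 1), doing the division at `digits` working digits."""
    getcontext().prec = digits
    turns = Decimal(x) / TWO_PI
    return turns - int(turns)  # int() truncates, leaving the fractional part

print(reduce_angle('1'))  # 1 rad is about 0.1591549... of a full turn
```

With the fraction of a turn in hand, thresholds such as the 0.125 (i.e. \(45^\circ\)) constant mentioned above can be compared exactly, and arguments whose sine or cosine is very small map to angles near zero.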
03-29-2018, 11:46 PM
Post: #11
RE: WP 34S vs. DM42 decimal128 differences?
An interesting approach, one I had considered when I added the range reduction to these functions. My concern, then and now, is that e.g. \( \frac{\pi}4 - x \) cannot be represented exactly and that this could change the result in some cases. You are dividing by \( 2\pi \) so the later range reduction will be exact, but this division step and the later inverse would be where the equivalent error occurs.
It is possible that at some level of extended precision, the errors become insignificant. Determining and proving this wouldn't be trivial.
It's quite a difficult problem. Still, it is an approach that's worth chasing.
Pauli
03-30-2018, 12:11 PM
Post: #12
RE: WP 34S vs. DM42 decimal128 differences?
(03-29-2018 11:46 PM)Paul Dale Wrote: An interesting approach, one I had considered when I added the range reduction to these functions. My concern, then and now, is that e.g. \( \frac{\pi}4 - x \) cannot be represented exactly and that this could change the result in some cases. You are dividing by \( 2\pi \) so the later range reduction will be exact, but this division step and the later inverse would be where the equivalent error occurs.
I'm sure that I'm missing some of the subtleties but the situation doesn't seem too bad to me.
Incidentally, calculating \(\sin(\pi)\) with a 34-digit value of \(\pi\), using newRPL with 34 digits of precision, gives an answer also correct to 34 digits. Well done Claudio!
Nigel (UK)
03-30-2018, 04:19 PM
Post: #13
RE: WP 34S vs. DM42 decimal128 differences?
(03-30-2018 12:11 PM)Nigel (UK) Wrote: Incidentally, calculating \(\sin(\pi)\) with a 34-digit value of \(\pi\), using newRPL with 34 digits of precision, gives an answer also correct to 34 digits. Well done Claudio!
Thanks, but there's not much glory on that achievement. It simply uses \(\pi\) with twice the current precision, so the range reduction produces the angle with full required precision. It's easy to achieve when the whole system has variable precision.
Now if you set the system to maximum precision:
Code:
Now what do you see? Only 22 good digits (thanks to a few extra guard digits beyond 2000), but certainly not 2000 good digits as you would've expected. At the limit of the system precision, newRPL has the same issues of the wp34s. The only solution I found was to use 4000 digits for pi, but what's the point: if the system works with 4000 digits, then I'd rather let the user use them all, just warning them that if you use more than half of that, don't expect all corner cases to be accurate. Due to memory limitations, I chose 2000 digits as the system limit. If you use 1000 digits or less, all functions are guaranteed to give you correctly rounded results in all corner cases with the 1000 digits precision you expect.
On the wp34s it's exactly the same: they used double precision to guarantee single precision on all corner cases. But since it's available, why not let the user access it? after all it's good for 99% of the cases. Now people want perfect results at double precision, so Paul needs quad precision, and then why not let the user use quad?... and here we go again.
03-30-2018, 04:44 PM
Post: #14
RE: WP 34S vs. DM42 decimal128 differences?
Thank you for explaining the situation in newRPL, which is an extraordinary achievement.
In the WP34S source code \(2\pi\) is already present as a constant to 450 decimal places (or thereabouts). So why not use it to get double precision answers for trig functions of double precision arguments? All it seems to take is a few code changes and one quad precision variable in one function (although whether the calculator has the RAM needed to cope with this remains an open question).
I understand that in the absence of variable-precision arithmetic, chasing perfection is a way without an end. I'm also sure that Pauli has far more important things to think about. But in this case it seems that a few simple changes might make it work, and if so, why not?
Nigel (UK)
03-30-2018, 11:26 PM
Post: #15
RE: WP 34S vs. DM42 decimal128 differences?
(03-30-2018 04:44 PM)Nigel (UK) Wrote: In the WP34S source code \(2\pi\) is already present as a constant to 450 decimal places (or thereabouts). So why not use it to get double precision answers for trig functions of double precision arguments?
The many decimals are required to get accurate answers in single precision. E.g. try \(\sin(10^{100})\). To get the same for double precision requires thousands of digits -- I'd guess about 8500, give or take. Making double precision trigonometric functions accurate across their entire range isn't feasible on the hardware; there isn't enough memory.
Still, you made a good argument for improving them where they are more typically used. I'll have to think it through in more detail. I'll also have to work out the worst case memory usage for functions that can call sine or cosine to see if the extra space required will fit. We're a few bytes from not fitting the stack in.
Pauli
|
Here is an interesting game I heard a few days ago from one of my undergraduate students; I’m not sure of the provenance.
The game is played with stones on a grid, which extends indefinitely upward and to the right, like the lattice $\mathbb{N}\times\mathbb{N}$. The game begins with three stones in the squares nearest the origin at the lower left. The goal of the game is to vacate all stones from those three squares. At any stage of the game, you may remove a stone and replace it with two stones, one in the square above and one in the square to the right, provided that both of those squares are currently unoccupied.
For example, here is a sample play.
Question. Can you play so as completely to vacate the yellow corner region?
One needs only to move the other stones out of the way so that the corner stones have room to move out. Can you do it? It isn’t so easy, but I encourage you to try.
Here is an online version of the game that I coded up quickly in Scratch: Escape!
My student mentioned the problem to me and some other students in my office on the day of the final exam, and we puzzled over it, but then it was time for the final exam. So I had a chance to think about it while giving the exam and came upon a solution. I’ll post my answer later on, but I’d like to give everyone a chance to think about it first.
Solution. Here is the solution I hit upon, and it seems that many others also found this solution. The main idea is to assign an invariant to the game positions. Let us assign weights to the squares in the lattice according to the following pattern. We give the corner square weight $1/2$, the next diagonal of squares $1/4$ each, and then $1/8$, and so on throughout the whole playing board. Every square should get a corresponding weight according to the indicated pattern.
The weights are specifically arranged so that making a move in the game preserves the total weight of the occupied squares. That is, the total weight of the occupied squares is invariant as play proceeds, because moving a stone with weight $1/2^k$ will create two stones of weight $1/2^{k+1}$, which adds up to the same. Since the original three stones have total weight $\frac 12+\frac14+\frac14=1$, it follows that the total weight remains $1$ after every move in the game.
Meanwhile, let us consider the total weight of all the squares on the board. If you consider the bottom row only, the weights add to $\frac12+\frac14+\frac18+\cdots$, which is the geometric series with sum $1$. The next row has total weight $\frac14+\frac18+\frac1{16}+\cdots$, which adds to $1/2$. And the next adds to $1/4$ and so on. So the total weight of all the squares on the board is $1+\frac12+\frac14+\cdots$, which is $2$. Since the board has exactly $k$ squares of weight $1/2^k$, another way to think about it is that we are essentially establishing the sum $\sum_k\frac k{2^k}=2$.
The subtle conclusion is that after any finite number of moves, only finitely many of the squares outside the original L-shape are occupied, and so infinitely many of them remain empty. Since the squares off the L-shape have total weight exactly $1$, the occupied squares off the L-shape therefore have total weight strictly less than $1$. Since the total weight of all the occupied squares is exactly $1$, this means that the L-shape has not been vacated.
So it is impossible to vacate the original L-shape in finitely many moves. $\Box$
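The weight bookkeeping is easy to check mechanically. Here is a small sketch (mine, not part of the original post) in exact rational arithmetic, where square $(r,c)$ carries weight $1/2^{\,r+c+1}$:

```python
# Exact-arithmetic check (my sketch) of the invariant: square (r, c) carries
# weight 1 / 2^(r + c + 1), a move preserves total weight, the three starting
# stones weigh 1, and the whole board's weight converges to 2.
from fractions import Fraction

def w(r, c):
    return Fraction(1, 2 ** (r + c + 1))

# a move replaces a stone at (r, c) by stones at (r+1, c) and (r, c+1)
assert w(3, 2) == w(4, 2) + w(3, 3)

print(w(0, 0) + w(0, 1) + w(1, 0))            # 1: the starting configuration

partial = sum(w(r, c) for r in range(60) for c in range(60))
print(float(partial))                          # ≈ 2.0: total weight of the board
```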
Suppose that we relax the one-stone-per-square requirement, and allow you to stack several stones on a single square, provided that you eventually unstack them. In other words, can you play the stacked version of the game, so as to vacate the original three squares, provided that all the piled-up stones eventually are unstacked?
No, it is impossible! And the proof is the same invariant-weight argument as above. The invariance argument does not rely on the one-stone-per-square rule during play, since it is still an invariant if one multiplies the weight of a square by the number of stones resting upon it. So we cannot transform the original stones, with total weight $1$, to any finite number of stones on the rest of the board (with one stone per square in the final position), since those other squares do not have sufficient weight to add up to $1$, even if we allow them to be stacked during intermediate stages of play.
Meanwhile, let us consider playing the game on a finite $n\times n$ board, with the rule modified so that stones that would be created in row or column $n+1$ in the infinite game simply do not materialize in the $n\times n$ game. This breaks the proof, since the weight is no longer an invariant for moves on the outer edges. Furthermore, one can win this version of the game. It is easy to see that one can systematically vacate all stones on the upper and outer edges, simply by moving any of them that is available, pushing the remaining stones closer to the outer corner and into oblivion. Similarly, one can vacate the penultimate outer edges, by doing the same thing, which will push stones into the outer edges, which can then be vacated. By reverse induction from the outer edges in, one can vacate every single row and column. Thus, for play on this finite board with the modified rule on the outer edges, one can vacate the entire $n\times n$ board!
Indeed, in the finite $n\times n$ version of the game, there is no way to lose! If one simply continues making legal moves as long as this is possible, then the board will eventually be completely vacated. To see this, notice first that if there are stones on the board, then there is at least one legal move. Suppose that we can make an infinite sequence of legal moves on the $n\times n$ board. Since there are only finitely many squares, some of the squares must have been moved-upon infinitely often. If you consider such a square closest to the origin (or of minimal weight in the scheme of weights above), then since the lower squares are activated only finitely often, it is clear that eventually the given square will be replenished for the last time. So it cannot have been activated infinitely often. (Alternatively, argue by induction on squares from the lower left that they are moved-upon at most finitely often.) Indeed, I claim that the number of steps to win, vacating the $n\times n$ board, does not depend on the order of play. One can see this by thinking about the path of a given stone and its clones through the board, ignoring the requirement that a given square carries only one stone. That is, let us make all the moves in parallel time. Since there is no interaction between the stones that would otherwise interfere, it is clear that the number of stones appearing on a given square in total is independent of the order of play. A tenacious person could calculate the sum exactly: each square becomes occupied by a number of stones that is equal to the number of grid paths to it from one of the original three stones, and one could use this sum to calculate the total length of play on the $n\times n$ board.
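The finite-board claim can also be checked by brute force. The following sketch (my own, with the off-board rule as I read it from the post) plays greedy legal moves on an $n\times n$ board until it empties:

```python
# Brute-force sketch (mine, not from the post) of the n-by-n variant: play
# greedy legal moves until the board empties. A stone at (r, c) may move when
# (r+1, c) and (r, c+1) are both free; clones that would land beyond the
# board edge simply do not materialize, per the modified rule.
def play(n):
    occupied = {(0, 0), (0, 1), (1, 0)}       # the three starting stones
    moves = 0
    while occupied:
        for (r, c) in sorted(occupied):
            up, right = (r + 1, c), (r, c + 1)
            if up not in occupied and right not in occupied:
                occupied.remove((r, c))
                if r + 1 < n:
                    occupied.add(up)          # off-board clones vanish
                if c + 1 < n:
                    occupied.add(right)
                moves += 1
                break
        else:
            return None                       # stuck; the argument says this never happens
    return moves

print(play(6) is not None)   # True: the 6x6 board can be fully vacated
```

Any other move order empties the board in the same number of moves, matching the order-independence claim above.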
|
If $I(\theta)$ is the Fisher information for $\theta$, how do I find $J=I(g(\theta))$?
Currently my thoughts are that:
$$I(\theta) = E\left( \left(\frac{\partial}{\partial \theta}\log f(X;\theta)\right)^2 \right)$$ and if $u=g(\theta)$ where $g( \cdot )$ is differentiable, then $$\frac{\partial}{\partial u}=\frac{\partial\theta}{\partial u} \frac{\partial}{\partial \theta} =\frac{1}{g'(\theta)}\frac{\partial}{\partial \theta}$$ So $$I(u) = E\left( \left(\frac{\partial}{\partial u}\log f(X;u)\right)^2 \right) = E\left( \frac{1}{(g'(\theta))^2}\left(\frac{\partial}{\partial \theta}\log f(X;\theta)\right)^2 \right) \ \ \text{as } u \text{ is a function of } \theta$$ but at this point I don't see the justification for taking the $\frac{1}{(g'(\theta))^2}$ out of the expectation...
I'm guessing the result should arrive at
$$ I(g(\theta))=\frac{I(\theta)}{(g'(\theta))^2}$$
but I'm not sure how to get this...
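For what it's worth, the missing justification is that $g'(\theta)$ is a constant with respect to the expectation over $X$, so it factors out. A numerical sanity check (my own sketch, using an exponential model and $g(\theta)=\theta^2$ as made-up choices):

```python
# Numerical sanity check (my own sketch): X ~ Exponential(rate theta) with the
# made-up reparametrization g(theta) = theta^2, so g'(theta) = 2*theta. Since
# g'(theta) does not depend on X, it factors out of the expectation.
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0
x = rng.exponential(scale=1.0 / theta, size=2_000_000)

# score in theta: d/dtheta log f(x; theta) = 1/theta - x
score_theta = 1.0 / theta - x
I_theta = np.mean(score_theta ** 2)        # approximates 1/theta^2 = 0.25

gprime = 2.0 * theta                       # g'(theta) for g(theta) = theta^2
score_u = score_theta / gprime             # chain rule: d/du = (1/g') d/dtheta
I_u = np.mean(score_u ** 2)

print(I_theta, I_u, I_theta / gprime ** 2)   # last two agree: I(u) = I(theta)/g'^2
```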
|
I think it might be helpful to put the new statement at the beginning and put the original post at the end. This new statement is more mathematically elegant.
Let $f\geq0$ be in $L^1(\mathbb{R}^d)$ and $g(x)=\exp(-\|x\|^2)$. Let $1_{B_1}$ be the indicator function of the unit ball centered at origin. Let $*$ be the convolution operation. Does the condition $$(f*g)(x)\leq C_1\exp(-C_2\|x\|^2)$$ for some $C_1,C_2>0$ imply $$\lim_{n\to+\infty}\frac{(f*1_{B_1})(\mu_n)}{(f*g)(\mu_n)}=0$$ for some sequence $\mu_n\in\mathbb{R}^d$? If this is not true, what additional regularity conditions on $f$ do we need? Any idea or possibly useful reference would be appreciated! The result can be verified easily when $f$ is another Gaussian function as well as some linear combination of Gaussian functions.
----------------Original post---------------------
Let $X$ be a random vector in $\mathbb{R}^d$ satisfying the following property: there exists $C_1,C_2>0$ such that $$\int_0^{+\infty}\mathbb{P}(\|X-\mu_0\|\leq\sqrt{t})\exp(-t)dt\leq C_1\exp(-C_2\|\mu_0\|^2)$$ for any $\mu_0\in\mathbb{R}^d$. Here $\|\cdot\|$ is the Euclidean norm in $\mathbb{R}^d$. If the above property holds, is the following statement true: there exists a sequence of vectors $\mu_n$ in $\mathbb{R}^d$ and a sequence of real numbers $t_n\to+\infty$ ($t_n$ may depend on $\mu_n$, for example $t_n=\|\mu_n\|^2/4$) such that: $$\lim_{n\to+\infty}\frac{\mathbb{P}(\|X-\mu_n\|\leq1)}{\mathbb{P}(\|X-\mu_n\|\leq \sqrt{t_n})\exp(-t_n)}=0$$
If this is not true, is there a counterexample? Or is the following result true? $$\lim_{n\to+\infty}\frac{\mathbb{P}(\|X-\mu_n\|\leq1)}{\int_0^{+\infty}\mathbb{P}(\|X-\mu_n\|\leq\sqrt{t})\exp(-t)dt}=0$$
|
In a physics textbook I need help making sense of the part highlighted in yellow:
This is out of context of course, so just to make it clearer:
$\tau$ is the mean free time of the electrons in a conductor (the average time between collisions with ions in the material).
This text snippet is part of a derivation of a microscale expression for the drift velocity. For times $t<0$ there is no field ($\vec E=0$), so electrons move randomly as always, with no average drift.
At time $t=0$ a field $\vec E \neq 0$ is applied and drift starts. Electrons are accelerated $\vec Eq=\vec F=m \vec a$ and speed up.
The question:
What I find unclear in their method is the postulate marked in yellow.
Electrons are accelerated until time $t=\tau$ since they (on average) don't collide with anything that could absorb their kinetic energy. But why are we sure that when the first collisions happen, the electrons are decelerated exactly as much as the acceleration caused by the field, so that it cancels out (it just balances, as the yellow-marked text says)?
Their net acceleration must be zero once we assume they have reached a steady drift velocity $\vec v_d$.
Apparently we expect constant acceleration before the first collision (during the time $\tau$) and suddenly zero net acceleration after that. How do we know that the drift velocity is steady from then on, and not only after $2\tau$ or $5\tau$ or more?
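One way to see why a single relaxation time gives a steady mean drift is a small Monte Carlo sketch (my own illustration, not from the textbook): each electron accelerates under the field and loses its drift velocity at randomly timed collisions with mean spacing $\tau$. The ensemble average settles at $v_d=qE\tau/m$ rather than growing:

```python
# Monte Carlo sketch of the Drude picture (my own illustration, not from the
# textbook): each electron gains drift velocity from the field and loses it
# at randomly timed collisions with mean spacing tau. Units are normalized
# so that q*E/m = 1 and tau = 1 (all parameter values are made up).
import numpy as np

rng = np.random.default_rng(2)
qE_over_m, tau = 1.0, 1.0
n, dt, steps = 50_000, 0.01, 2_000

v = np.zeros(n)                      # drift component only; thermal motion averages out
for _ in range(steps):
    v += qE_over_m * dt              # uniform acceleration by the field
    hit = rng.random(n) < dt / tau   # collision probability per time step
    v[hit] = 0.0                     # a collision erases the accumulated drift

print(v.mean())   # settles near q*E*tau/m = 1.0
```

The mean stops growing because, once collisions have begun, the distribution of "time since an electron's last collision" becomes stationary with mean $\tau$, no matter how long the field has been on; individual electrons keep accelerating and resetting, but the ensemble average holds still.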
|
What are some examples of functions which are continuous, but whose inverse is not continuous?
nb: I changed the question after a few comments, so some of the below no longer make sense. Sorry.
Define $f: [0,1) \cup [2,3] \rightarrow [0,2]$ by
$$f(x)=\begin{cases} x & x \in [0,1) \\ x-1 & x \in [2,3] \end{cases}$$
A bijective map that is continuous but with non-continuous inverse is the following parametrization of the unit circle $\mathbb{S}^1$:
$$f \colon [0, 2\pi) \to \mathbb{S}^1, \qquad f(\theta)=e^{i \theta}.$$
This map cannot have a continuous inverse, because $\mathbb{S}^1$ is compact, while $[0, 2\pi)$ is not. Indeed, $f^{-1}$ jumps abruptly from values near $2\pi$ back to $0$ as we travel round the unit circle.
Another example, somewhat similar in nature, is the map $g\colon [0,1] \cup (2, 3] \to [0, 2]$ defined by
$$g(x)=\begin{cases} x & 0 \le x \le 1 \\ x-1 & 2 < x \le 3 \end{cases}$$
The inverse map is $$g^{-1}(y)=\begin{cases} y & 0 \le y \le 1 \\ y+1 & 1 < y \le 2\end{cases}$$
and it is not continuous because of a jump at $y=1$. Note that, again, the range of $g$ is compact while the domain is not.
More generally, every bijective map $h\colon X \to K$ with $X$ non-compact and $K$ compact cannot have a continuous inverse.
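A quick numeric illustration (my own sketch) of the jump in $g^{-1}$ from the second example:

```python
# Numeric illustration (my sketch): g is a continuous bijection onto [0, 2],
# yet its inverse jumps at y = 1.
def g(x):
    return x if 0 <= x <= 1 else x - 1      # domain [0, 1] ∪ (2, 3]

def g_inv(y):
    return y if 0 <= y <= 1 else y + 1      # range [0, 2]

eps = 1e-9
print(g_inv(1.0), g_inv(1.0 + eps))         # 1.0 versus ~2.0: a jump of size 1
print(g(g_inv(0.5)), g(g_inv(1.7)))         # round trips recover the inputs
```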
Let $X$ be a set and $\tau_1,\tau_2$ two topologies on $X$ with $\tau_2\subsetneq\tau_1$. Then the identity function from the topological space $(X,\tau_1)$ to $(X,\tau_2)$ is a continuous bijection but the inverse function (the identity function from $(X,\tau_2)$ to $(X,\tau_1)$) is not continuous.
Let $\rm X$ be the set of rational numbers with the discrete topology. Then the identity map $\rm X\to \mathbb{Q} $ is bijective and continuous, with discontinuous inverse.
1) Take any topological space,
2) Obtain another space by refining its topology,
3) ...
4) PROFIT!
In fact, consider the forgetful functor $F: \mathbf{Top} \to \mathbf{Set}$. For any set $S$, the continuous functions of the form $f: X \to Y$ such that $FX = FY = S$ and $Ff = 1_S$ induce a partial order on the set of all topologies on $S$, and this order is in fact inverse to the usual one (given by set inclusion of topologies).
For example, extending the answer by Marco, consider a curve $\gamma: I \to M$ on some manifold with a finite number of self-intersections. For each intersection, remove all corresponding points from $I$ except for one. Voila :)
UPD: actually, you can remove all corresponding points, period!
Take a "8"-shaped plane curve $\mathcal C \subset \mathbb R^2$ endowed with the subspace topology. Let $\phi: \mathbb R \to \mathcal C$ be a continuous injective parametrization of $\mathcal C$. The inverse function $\phi^{-1}: \mathcal C\to\mathbb R$ cannot be continuous because $\phi^{-1}((a,+\infty))$ is not an open set for some $a$.
|
As mentioned in my other post, I am attempting to learn from Gross' "Relativistic Quantum Mechanics and Field Theory", and I have a question concerning the manipulation of the antisymmetric 4x4 tensors involved.
There are two points where this does not make sense to me. Firstly, when proving that the correct equations of motion can be derived from the EM Lagrangian density:
$$ L = -\frac{1}{4} F_{\mu \nu}F^{\mu \nu}-j_\mu A^{\mu} $$
Gross mentions that the equation can be simplified by expanding it the following way:
$$ -\frac{1}{4} F_{\mu \nu}F^{\mu \nu} = -\frac{1}{4} (\partial_\mu A_\nu -\partial_\nu A_\mu)(\partial^\mu A^\nu -\partial^\nu A^\mu) $$ $$ =-\frac{1}{2} g^{\mu \mu'}g^{\nu \nu'}(\partial_\mu A_\nu\partial_{\mu'} A_{\nu'} -\partial_\mu A_\nu\partial_{\nu'} A_{\mu'}) $$
We then construct the Lagrangian equations of motion, which doesn't bother me. My question is (hopefully) much simpler: no matter how I treat the product shown above, I cannot get rid of the factor of 2. What 4-vector or metric-tensor trickery is happening here?
The best I can do is to operate on all of the contravariant partials and A's with the metric tensor and get the following for the expansion: $$ -\frac{1}{4} (\partial_\mu A_\nu -\partial_\nu A_\mu)(\partial^\mu A^\nu -\partial^\nu A^\mu) $$ $$ = -\frac{1}{4} g^{\mu \mu'}g^{\nu \nu'}(\partial_\mu A_\nu\partial_{\mu'} A_{\nu'} -\partial_\mu A_\nu\partial_{\nu'} A_{\mu'}-\partial_\nu A_\mu\partial_{\mu'} A_{\nu'}+\partial_\nu A_\mu\partial_{\nu'} A_{\mu'}) $$
I understand that from here the argument for the factor of 2 is as simple as grouping these terms into two identical pairs, but I don't see how one can do so.
I can sort of see how, if $\mu = \mu'$ and $\nu = \nu'$, the middle two terms could be combined, i.e.: $$ = -\frac{1}{4} g^{\mu \mu'}g^{\nu \nu'}(\partial_\mu A_\nu\partial_{\mu'} A_{\nu'} -2\partial_\mu A_\nu\partial_{\nu'} A_{\mu'}+\partial_\nu A_\mu\partial_{\nu'} A_{\mu'}) $$
But even if I allow for that, the partials for the last two terms are completely different.
I expect that my misunderstanding here stems from being unfamiliar with 4-vectors. My background is Chemistry and I am just trying to understand some of these deeper concepts.
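Not an answer from the book, but the index gymnastics can be checked numerically: treat $\partial_\mu A_\nu$ as an arbitrary $4\times4$ array $d$ and compare the four-term and two-term expressions directly (my own sketch):

```python
# Numerical sanity check (mine, not from Gross): verify that the four-term
# expansion of -1/4 F_{mu nu} F^{mu nu} equals the two-term form carrying the
# factor 1/2, with d[mu, nu] standing in for partial_mu A_nu.
import numpy as np

rng = np.random.default_rng(1)
g = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric
d = rng.normal(size=(4, 4))            # arbitrary "field derivatives"

F = d - d.T                            # F_{mu nu} = d_mu A_nu - d_nu A_mu
F_up = g @ F @ g                       # raise both indices (g is diagonal)
four_term = -0.25 * np.sum(F * F_up)   # -1/4 F_{mu nu} F^{mu nu}

# Two-term form: relabeling the dummy indices mu <-> nu, mu' <-> nu' in the
# last two terms of the expansion maps them onto the first two.
two_term = -0.5 * (np.einsum('ma,nb,mn,ab', g, g, d, d)
                   - np.einsum('ma,nb,mn,ba', g, g, d, d))

print(np.isclose(four_term, two_term))  # True
```

This is the point of the trick: $\mu,\nu,\mu',\nu'$ are summed-over dummy indices, so the last two terms can be renamed into copies of the first two, turning $-\frac14$ times four terms into $-\frac12$ times two.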
A similar occurrence happens two pages later, where Gross states that in constructing the relativistic Lagrangian density, we can separate out the scalar-potential terms (the time components of the $A$ 4-vector) by writing:
$$ L = -\frac{1}{4} F_{\mu \nu}F^{\mu \nu}-j_\mu A^{\mu} $$ $$ = -\frac{1}{2} \partial_\mu A^0(\partial^\mu A^0 -\partial^0 A^\mu)+\frac{1}{2} \partial_\mu A^i(\partial^\mu A^i -\nabla_i A^\mu) - \rho A^0 + j \cdot A $$ where $\rho$ and $A^0$ are the time components of the $j$ and $A$ 4-vectors, respectively.
Similarly, I don't know how the $-\frac{1}{4}$ term can be broken up here either. I know that this is something dead simple, but I'm stuck and I'd appreciate any help.
|
I've solved Exercise 7.1.1 (the Bernstein–Vazirani problem) of the book "An introduction to quantum computing" (Mosca et al.). The problem is the following:
Show how to find $a \in Z_2^n$ given one application of a black box that maps $|x\rangle|b\rangle \to |x\rangle |b \oplus x · a\rangle$ for some $b\in \{0, 1\}$.
I'd say we can do it like this:
First I go from $|0\rangle|0\rangle$ to $\sum_{i \in \{0,1\}^n}|i\rangle|-\rangle$ using Hadamards (the target qubit is prepared in $|-\rangle$ so that the oracle produces a phase kickback). Then I apply the oracle: $$ \sum_{i \in \{0,1\}^n}(-1)^{i \cdot a} |i\rangle|-\rangle $$ Then I read off the phase with Hadamards (since we are in $Z_2^n$, our QFT is just the Hadamard transform): $$ |a\rangle|-\rangle $$
I think this is correct. Do you agree?
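For what it's worth, the phase-kickback circuit can be checked with a small state-vector simulation (my own sketch in plain NumPy, with a made-up hidden string $a$):

```python
# State-vector sketch (mine, plain NumPy) of the circuit for n = 3 with a
# made-up hidden string a = 101: Hadamards, phase oracle, Hadamards, measure.
import numpy as np

n = 3
a = np.array([1, 0, 1])                     # hidden string (assumed for the demo)

H1 = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
Hn = H1
for _ in range(n - 1):                      # build H tensored n times
    Hn = np.kron(Hn, H1)

state = np.zeros(2 ** n)
state[0] = 1.0                              # |0...0>; the |-> target is implicit
state = Hn @ state                          # uniform superposition

for x in range(2 ** n):                     # oracle acts as |x> -> (-1)^{x.a} |x>
    bits = [(x >> (n - 1 - k)) & 1 for k in range(n)]
    state[x] *= (-1) ** (int(np.dot(bits, a)) % 2)

state = Hn @ state                          # final Hadamards read out the phase
outcome = int(np.argmax(np.abs(state)))
print([(outcome >> (n - 1 - k)) & 1 for k in range(n)])   # [1, 0, 1] = a
```

All amplitude ends up on $|a\rangle$, confirming that a single oracle call suffices.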
|
How to Use Circular Ports in the RF Module
The Port boundary condition in the RF Module, an add-on to the COMSOL Multiphysics® software, can be used to launch and absorb electromagnetic energy. We explain how to set up a circular waveguide port and review the analytical solution that defines the port mode field. We also analyze a polarized circular port for power transmission with respect to port orientation, and then extend the model to include higher-order modes.

Circular Port Reference Axis for Describing Degenerate Modes
To simulate wave propagation in a circular waveguide, we need to set up the excitation and termination via boundary conditions that describe the mode field. However, circular ports exhibit degeneracy, which yields uncertainty in the orientation of the mode field. Let's begin our discussion of how we can use the Circular Port Reference Axis subfeature to suppress angular degeneracy in a circular waveguide port. The dominant TE11 mode exhibits degeneracy; to fix the orientation of the mode field, we use the Circular Port Reference Axis subfeature.
First, we run a Mode Analysis study to find the resonant modes on a simple circle in 2D, which represents our port boundary. Among the modes returned by default, some are simple rotations of the exact same TE11 mode shape about the origin. So, how can we determine which solution is correct? All of them are equally accurate solutions to the equations describing the transverse field components of a circular waveguide (Ref. 1):
$$E_{\rho}=\frac{-j\omega\mu m}{k_{c}^{2}\rho}\,(A\cos m\phi - B\sin m\phi)\,J_{m}(k_{c}\rho)\,e^{-j\beta z}$$
$$E_{\phi}=\frac{j\omega\mu}{k_{c}}\,(A\sin m\phi + B\cos m\phi)\,J'_{m}(k_{c}\rho)\,e^{-j\beta z}$$
$$H_{\rho}=\frac{-j\beta}{k_{c}}\,(A\sin m\phi + B\cos m\phi)\,J'_{m}(k_{c}\rho)\,e^{-j\beta z}$$
$$H_{\phi}=\frac{-j\beta m}{k_{c}^{2}\rho}\,(A\cos m\phi - B\sin m\phi)\,J_{m}(k_{c}\rho)\,e^{-j\beta z}$$
Here, $m$ is the first mode index, $J_{m}$ is the Bessel function of the first kind, and $J'_{m}$ is its derivative. The cutoff wavenumber appearing in the Bessel function argument is $k_{c}=\chi'_{mn}/a$.
The seemingly identical modes arise because circular ports are degenerate. This means that an infinite number of rotations of a given mode field can exist on the same boundary, which can be problematic for describing the orientation of port mode fields with respect to one another. We therefore define a Circular Port Reference Axis, which is available as a subfeature to the Port node. This feature allows us to select two vertices on the port circumference that define the orientation of fields on the port boundary. The mode field is then defined with respect to this reference axis, and any uncertainty in its orientation is resolved. We may now extend our study of circular ports to 3D.

[Animation: TE11 mode propagating through a circular waveguide. The contour plot shows the z-component of the E field; the arrow plot describes the electric mode field on the port boundary.]

Modeling a Polarized Circular Waveguide
Let’s consider the Polarized Circular Ports model, available in the RF Application Gallery. This tutorial demonstrates how to excite and terminate a port with degenerate port modes. The structure under study is a straight, circular waveguide surrounded by perfectly conducting walls.
As with any COMSOL Multiphysics model, we start by building the geometry, assigning materials, and then setting up the physics. Our structure here is a simple cylinder filled with air. We model the metallic boundaries on the exterior using the Perfect Electric Conductor boundary condition. Since this condition assumes a lossless conductor, there is no need to assign a material to these boundaries. Next, we add a Port boundary condition and select the circular boundary at one end of the waveguide.

[Figure: The outer walls of the waveguide are modeled as Perfect Electric Conductor boundaries. Streamlines of the electric fields are shown in red, and magnetic fields in blue. Arrow plots (black) show the direction of power flow from the excitation port to listener ports. Solid lines represent the reference axes for ports 1 and 2 on the near end, and ports 3 and 4 on the far end.]
In the Port 1 settings, we start by setting the geometry type to Circular. When using a circular port, the mode type (TE or TM) and mode number must be specified. TE and TM stand for transverse electric and transverse magnetic, respectively; both of these mode types are supported by circular ports. Circular mode numbers are described by two indices, $m$ and $n$, which are used in the transverse field equations shown above.
We are interested in the dominant TE11 mode. Therefore, in the Port 1 settings, we set the mode type to TE and the mode number to 11. We select two opposite vertices on the port circumference in the Circular Port Reference Axis subfeature. Now, the important question becomes: How can we terminate this mode at the other end of the waveguide?
Any incident field can be completely terminated by two mutually orthogonal ports with the same mode field shape. We can verify this by expanding the Polarized Circular Waveguide model to study transmittance when the listener ports’ reference axes are rotated with respect to that of the excitation port. In this model, we set up a total of four ports, only one of which is an excitation port (Port 1). Together, mutually orthogonal ports 1 and 2 receive all of the reflected energy, while mutually orthogonal ports 3 and 4 receive all of the transmitted energy.
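The expected angular behavior of this two-port termination can be sketched before running any sweep (my own illustration of the projection argument, not the COMSOL computation): a TE11 field whose polarization is rotated by $\theta$ relative to the two orthogonal receiving ports couples to them as $\cos\theta$ and $\sin\theta$, so the transmittances sum to one at every angle.

```python
# Projection sketch (mine, not the COMSOL computation): a TE11 field rotated
# by theta couples to two orthogonal receiving ports as cos(theta) and
# sin(theta), so the transmittances |S31|^2 + |S41|^2 always sum to one.
import math

for deg in range(0, 91, 15):
    th = math.radians(deg)
    t3, t4 = math.cos(th) ** 2, math.sin(th) ** 2   # |S31|^2 and |S41|^2
    print(f"{deg:3d} deg  T3={t3:.3f}  T4={t4:.3f}  sum={t3 + t4:.3f}")
```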
We run a Parametric Sweep where the reference axes of ports 3 and 4 are rotated together by an angle theta about the origin. This rotation angle is plotted on the x-axis in the plot below. S-parameters are available as built-in expressions, ready for evaluation in postprocessing. The power ratio transmitted to ports 3 and 4 can be evaluated as the magnitude squared of S31 and S41, respectively. We used this relation to evaluate the transmittance at each angle spanned in the plot below. At any angle, the transmittance values sum to one, indicating nearly zero reflection and therefore ideal termination. The reference axis of the first receiving port can be chosen freely as long as the reference axis of the second receiving port is a 90-degree rotation of the first about the waveguide axis.

An Important Consideration: Cutoff Frequency
Anytime we wish to excite a waveguide or port, it is important to consider the cutoff frequency of the structure — that is, the lowest frequency for which a particular mode can propagate. This is true not only for the dominant mode but also higher-order modes. Rectangular, circular, and coaxial ports each have analytical expressions for cutoff frequency. This value is dependent on the size of the structure, the medium inside it, as well as the mode number. Below are the equations for cutoff frequency for both TE and TM modes in a circular waveguide:
$$f_c=\frac{\chi'_{mn}}{2\pi a \sqrt{\mu\epsilon}} \quad \text{for TE modes}$$
$$f_c=\frac{\chi_{mn}}{2\pi a \sqrt{\mu\epsilon}} \quad \text{for TM modes}$$
Here, $a$ is the radius, $\mu$ is the permeability, and $\epsilon$ is the permittivity.
The values of $\chi'_{mn}$ are given by the zeros of the derivative of the Bessel function $J_m(x)$; these are needed to determine cutoffs for TE modes. The values of $\chi_{mn}$ are given by the zeros of the Bessel function itself; these are needed to determine cutoffs for TM modes. Luckily, the zeros of the Bessel function and its first derivative are well known, and some of them are listed here.
You may notice that the values in the $m = 0$ column of the $\chi'_{mn}$ table are identical to the values in the $m = 1$ column of the $\chi_{mn}$ table. Therefore, the TE0n and TM1n modes have identical cutoff frequencies and are referred to as degenerate modes.
Zeros $\chi_{mn}$ of the Bessel function of the first kind $J_m(x)$, used for TM modes (Ref. 2):

Mode Index | m = 0  | m = 1   | m = 2   | m = 3   | m = 4   | m = 5
n = 1      | 2.4049 | 3.8318  | 5.1357  | 6.3802  | 7.5884  | 8.7715
n = 2      | 5.5201 | 7.0156  | 8.4173  | 9.7610  | 11.0647 | 12.3386
n = 3      | 8.6537 | 10.1735 | 11.6199 | 13.0152 | 14.3726 | 15.7002
Zeros $\chi'_{mn}$ of the derivative of the Bessel function of the first kind $J'_m(x)$, used for TE modes (Ref. 2):

Mode Index | m = 0   | m = 1  | m = 2  | m = 3   | m = 4   | m = 5
n = 1      | 3.8318  | 1.8412 | 3.0542 | 4.2012  | 5.3175  | 6.4155
n = 2      | 7.0156  | 5.3315 | 6.7062 | 8.0153  | 9.2824  | 10.5199
n = 3      | 10.1735 | 8.5363 | 9.9695 | 11.3459 | 12.6819 | 13.9872
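The cutoff formulas and the tabulated zeros can be reproduced directly (a sketch assuming the third-party SciPy library is available; the 5 cm radius is a made-up example value, with air filling so that $1/\sqrt{\mu\epsilon}$ is the vacuum speed of light):

```python
# Sketch reproducing the cutoff formulas (assumes the third-party SciPy
# library; the 5 cm radius is a made-up example value, air-filled so that
# 1/sqrt(mu*eps) equals the vacuum speed of light).
import math
from scipy.special import jn_zeros, jnp_zeros

c0 = 299792458.0      # speed of light (m/s)
a = 0.05              # waveguide radius (m), assumed for the example

def cutoff_te(m, n, radius):
    chi_p = jnp_zeros(m, n)[n - 1]    # n-th zero of J_m', i.e. chi'_mn
    return chi_p * c0 / (2 * math.pi * radius)

def cutoff_tm(m, n, radius):
    chi = jn_zeros(m, n)[n - 1]       # n-th zero of J_m, i.e. chi_mn
    return chi * c0 / (2 * math.pi * radius)

print(f"TE11 cutoff: {cutoff_te(1, 1, a) / 1e9:.3f} GHz")   # uses chi'_11 = 1.8412
print(f"TM01 cutoff: {cutoff_tm(0, 1, a) / 1e9:.3f} GHz")   # uses chi_01  = 2.4049
```

TE11 comes out lowest, consistent with it being the dominant mode of a circular waveguide.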
The number of modes that can exist in a given waveguide increases with frequency. Below, we show the first 24 modes of a circular port in the order of increasing cutoff frequency.
TE11, TM01, TE21, TM11, TE01, TE31, TM21, TE41, TE12, TM02, TM31, TE51, TE22, TE02, TM12, TE61, TM41, TE32, TM22, TE13, TE71, TM03, TM51, TE42

[Figure: The first 24 modes of a circular port. For TE modes, a surface plot of the electric field norm and an arrow plot of the magnetic field are displayed; for TM modes, a surface plot of the magnetic field norm and an arrow plot of the electric field are displayed.]

Automate the Data Collection Process Using Methods
To produce the figure above, we used a powerful tool called methods to accelerate the modeling workflow. A method contains a series of commands; when called, these tasks are run automatically within the software. Here, we use a method to automate and expedite the process of producing the field distributions for the first 24 modes of a circular port. The method performs the following actions sequentially:

1. Calculate the cutoff frequency for a given mode
2. Enter the mode numbers in the Port node settings
3. Run the model at a frequency value just above the cutoff and store the solution at the port boundary
The method loops through this process for each mode, using its respective $\chi_{mn}$/$\chi'_{mn}$ constant (the values of which are entered in the Parameters node).

[Figure: The zeros of the Bessel function for each mode number are entered as parameters (left). These parameters are used in the method to compute the solution sets for each mode number (right), just above the cutoff frequency.]

Closing Remarks
This blog post has outlined how to use circular ports for waveguide excitation and termination. While only circular ports are discussed here, remember that cutoff frequency must be considered when using other port types as well. The only difference for rectangular and coaxial ports is their respective cutoff frequency equations, which are still a function of the mode number and port dimensions. You can visualize the mode shapes for these port types efficiently by implementing similar methods.
If you have a question about modeling circular ports, please contact COMSOL Support.
Next Step
Try modeling the polarized circular waveguide featured in this blog post by clicking the button below, which will take you to the Application Gallery. Once there, you can log into your COMSOL Access account and, with a valid software license, download the MPH-file.
References
1. David M. Pozar, Microwave Engineering, John Wiley & Sons, 1998.
2. Constantine A. Balanis, Advanced Engineering Electromagnetics, John Wiley & Sons, 1999.
|
Inaccessible cardinal

A cardinal $\kappa$ is inaccessible if it is an uncountable regular strong limit cardinal. Inaccessible cardinals are the traditional entry point to the large cardinal hierarchy, although weaker notions such as the worldly cardinals can still be viewed as large cardinals.
A cardinal $\kappa$ being inaccessible implies the following:
- $V_\kappa$ is a model of ZFC, and so inaccessible cardinals are worldly. The worldly cardinals are unbounded in $\kappa$, so $V_\kappa$ satisfies the existence of a proper class of worldly cardinals.
- $\kappa$ is an aleph fixed point and a beth fixed point, and consequently $V_\kappa=H_\kappa$.
- (Solovay) There is an inner model of a forcing extension satisfying ZF+DC in which every set of reals is Lebesgue measurable; in fact, this is equiconsistent with the existence of an inaccessible cardinal.
- For any $A\subseteq V_\kappa$, the set of all $\alpha<\kappa$ such that $\langle V_\alpha;\in,A\cap V_\alpha\rangle\prec\langle V_\kappa;\in,A\rangle$ is club in $\kappa$.
An ordinal $\alpha$ being inaccessible is equivalent to the following:
- $V_{\alpha+1}$ satisfies $\mathrm{KM}$.
- $\alpha>\omega$ and $V_\alpha$ is a Grothendieck universe.
- $\alpha$ is $\Pi_0^1$-indescribable.
- $\alpha$ is $\Sigma_1^1$-indescribable.
- $\alpha$ is $\Pi_2^0$-indescribable.
- $\alpha$ is $0$-indescribable.
- $\alpha$ is a nonzero limit ordinal and $\beth_\alpha=R_\alpha$, where $R_\beta$ is the $\beta$-th regular cardinal, i.e. the least regular $\gamma$ such that $\{\kappa\in\gamma:\mathrm{cf}(\kappa)=\kappa\}$ has order type $\beta$.
- $\alpha = \beth_{R_\alpha}$.
- $\alpha = R_{\beth_\alpha}$.
- $\alpha$ is a weakly inaccessible strong limit cardinal (see weakly inaccessible below).

Weakly inaccessible cardinal
A cardinal $\kappa$ is weakly inaccessible if it is an uncountable regular limit cardinal. Under GCH, this is equivalent to inaccessibility, since under GCH every limit cardinal is a strong limit cardinal. So the difference between weak and strong inaccessibility only arises when GCH fails badly. Every inaccessible cardinal is weakly inaccessible, but forcing arguments show that any inaccessible cardinal can become a non-inaccessible weakly inaccessible cardinal in a forcing extension, such as after adding an enormous number of Cohen reals (this forcing is c.c.c. and hence preserves all cardinals and cofinalities, and hence also all regular limit cardinals). Meanwhile, every weakly inaccessible cardinal is fully inaccessible in any inner model of GCH, since it will remain a regular limit cardinal in that model and hence also be a strong limit there. In particular, every weakly inaccessible cardinal is inaccessible in the constructible universe $L$. Consequently, although the two large cardinal notions are not provably equivalent, they are equiconsistent.
There are a few equivalent definitions of weakly inaccessible cardinals. In particular:
- Letting $R$ be the transfinite enumeration of the regular cardinals, a limit ordinal $\alpha$ is weakly inaccessible if and only if $R_\alpha=\aleph_\alpha$.
- A nonzero cardinal $\kappa$ is weakly inaccessible if and only if $\kappa$ is regular and there are $\kappa$-many regular cardinals below $\kappa$; that is, $\kappa=R_\kappa$.
- A regular cardinal $\kappa$ is weakly inaccessible if and only if $\mathrm{REG}$ is unbounded in $\kappa$ (showing the correlation between weakly Mahlo cardinals and weakly inaccessible cardinals, with stationary in $\kappa$ replaced by unbounded in $\kappa$).

Levy collapse
The Levy collapse of an inaccessible cardinal $\kappa$ is the $\lt\kappa$-support product of $\text{Coll}(\omega,\gamma)$ for all $\gamma\lt\kappa$. This forcing collapses all cardinals below $\kappa$ to $\omega$, but since it is $\kappa$-c.c., it preserves $\kappa$ itself, and hence ensures $\kappa=\omega_1$ in the forcing extension.
Inaccessible to reals
A cardinal $\kappa$ is inaccessible to reals if it is inaccessible in $L[x]$ for every real $x$. For example, after the Levy collapse of an inaccessible cardinal $\kappa$, which forces $\kappa=\omega_1$ in the extension, the cardinal $\kappa$ is of course no longer inaccessible, but it remains inaccessible to reals.

Universes
When $\kappa$ is inaccessible, then $V_\kappa$ provides a highly natural transitive model of set theory, a universe in which one can view a large part of classical mathematics as taking place. In what appears to be an instance of convergent evolution, the same universe concept arose in category theory out of the desire to provide a hierarchy of notions of smallness, so that one may form such categories as the category of all small groups, or small rings or small categories, without running into the difficulties of Russell's paradox. Namely, a
Grothendieck universe is a transitive set $W$ that is closed under pairing, power set and unions. That is:
(transitivity) If $b\in a\in W$, then $b\in W$.
(pairing) If $a,b\in W$, then $\{a,b\}\in W$.
(power set) If $a\in W$, then $P(a)\in W$.
(union) If $a\in W$, then $\cup a\in W$.
The
Grothendieck universe axiom is the assertion that every set is an element of a Grothendieck universe. This is equivalent to the assertion that the inaccessible cardinals form a proper class.

Degrees of inaccessibility
A cardinal $\kappa$ is
$1$-inaccessible if it is inaccessible and a limit of inaccessible cardinals. In other words, $\kappa$ is $1$-inaccessible if $\kappa$ is the $\kappa^{\rm th}$ inaccessible cardinal, that is, if $\kappa$ is a fixed point in the enumeration of all inaccessible cardinals. Equivalently, $\kappa$ is $1$-inaccessible if $V_\kappa$ is a universe and satisfies the universe axiom.
More generally, $\kappa$ is $\alpha$-inaccessible if it is inaccessible and for every $\beta\lt\alpha$ it is a limit of $\beta$-inaccessible cardinals.
$1$-inaccessibility is already consistency-wise stronger than the existence of a proper class of inaccessible cardinals, and $2$-inaccessibility is stronger than the existence of a proper class of $1$-inaccessible cardinals. More specifically, a cardinal $\kappa$ is $\alpha$-inaccessible if and only if for every $\beta<\alpha$: $$V_{\kappa+1}\models\mathrm{KM}+\text{There is a proper class of }\beta\text{-inaccessible cardinals}$$
As a result, if $\kappa$ is $\alpha$-inaccessible then for every $\beta<\alpha$: $$V_\kappa\models\mathrm{ZFC}+\text{There exists a }\beta\text{-inaccessible cardinal}$$
Therefore $2$-inaccessibility is weaker than $3$-inaccessibility, which is weaker than $4$-inaccessibility, and so on; all of these are weaker than $\omega$-inaccessibility, which is weaker than $(\omega+1)$-inaccessibility, which is weaker than $(\omega+2)$-inaccessibility, and so on; all of which are weaker than hyperinaccessibility, etc.
Hyper-inaccessible and more
A cardinal $\kappa$ is
hyperinaccessible if it is $\kappa$-inaccessible. One may similarly define that $\kappa$ is $\alpha$-hyperinaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$, it is a limit of $\beta$-hyperinaccessible cardinals. Continuing, $\kappa$ is hyperhyperinaccessible if $\kappa$ is $\kappa$-hyperinaccessible.
More generally, $\kappa$ is
hyper${}^\alpha$-inaccessible if it is hyperinaccessible and for every $\beta\lt\alpha$ it is $\kappa$-hyper${}^\beta$-inaccessible, where $\kappa$ is $\alpha$-hyper${}^\beta$-inaccessible if it is hyper${}^\beta$-inaccessible and for every $\gamma<\alpha$, it is a limit of $\gamma$-hyper${}^\beta$-inaccessible cardinals.
Meta-ordinal terms are terms like $\Omega^\alpha \cdot \beta + \Omega^\gamma \cdot \delta +\cdots+\Omega^\epsilon \cdot \zeta + \theta$, where $\alpha, \beta,\ldots$ are ordinals. They are ordered as if $\Omega$ were an ordinal greater than all the others. $(\Omega \cdot \alpha + \beta)$-inaccessible denotes $\beta$-hyper${}^\alpha$-inaccessible, $\Omega^2$-inaccessible denotes hyper${}^\kappa$-inaccessible $\kappa$, etc. Every Mahlo cardinal $\kappa$ is $\Omega^\alpha$-inaccessible for all $\alpha<\kappa$, and probably more. A similar hierarchy exists for Mahlo cardinals below weakly compact. All such properties can be killed softly by forcing, making them any of the weaker properties from this family.[1]
References
|
Stable
Stability was developed as a large countable ordinal property in order to generalize the various strengthened variants of admissibility. More specifically, these variants capture assertions of the form $L_\alpha\models\text{KP}+A$ for particular axioms $A$, and stability captures them all at once by requiring $L_\alpha\models\text{KP}+A$ for many axioms $A$ simultaneously. One could also argue that stability is a weakening of $\Sigma_1$-correctness (which is trivial) to a nontrivial form.
Definition and Variants
Stability is defined using a reflection principle. A countable ordinal $\alpha$ is called
stable iff $L_\alpha\prec_{\Sigma_1}L$; equivalently, $L_\alpha\prec_{\Sigma_1}L_{\omega_1}$. [1]

Variants
There are quite a few (weakened) variants of stability:[1]
A countable ordinal $\alpha$ is called $(+\beta)$-stable iff $L_\alpha\prec_{\Sigma_1}L_{\alpha+\beta}$.
A countable ordinal $\alpha$ is called $({}^+)$-stable iff $L_\alpha\prec_{\Sigma_1}L_{\beta}$ where $\beta$ is the least admissible ordinal larger than $\alpha$.
A countable ordinal $\alpha$ is called $({}^{++})$-stable iff $L_\alpha\prec_{\Sigma_1}L_{\beta}$ where $\beta$ is the least admissible ordinal larger than an admissible ordinal larger than $\alpha$.
A countable ordinal $\alpha$ is called inaccessibly-stable iff $L_\alpha\prec_{\Sigma_1}L_{\beta}$ where $\beta$ is the least computably inaccessible ordinal larger than $\alpha$.
A countable ordinal $\alpha$ is called Mahlo-stable iff $L_\alpha\prec_{\Sigma_1}L_{\beta}$ where $\beta$ is the least computably Mahlo ordinal larger than $\alpha$; that is, the least $\beta$ such that any $\beta$-recursive function $f:\beta\rightarrow\beta$ has an admissible $\gamma<\beta$ which is closed under $f$.
A countable ordinal $\alpha$ is called doubly $(+1)$-stable iff there is a $(+1)$-stable ordinal $\beta>\alpha$ such that $L_\alpha\prec_{\Sigma_1}L_\beta$.
A countable ordinal $\alpha$ is called nonprojectible iff the set of all $\beta<\alpha$ such that $L_\beta\prec_{\Sigma_1}L_\alpha$ is unbounded in $\alpha$.

Properties
Any $L$-stable ordinal is stable. This is because $L_\alpha^L=L_\alpha$ and $L^L=L$. [2] Any $L$-countable stable ordinal is $L$-stable for the same reason. Therefore, an ordinal is $L$-stable iff it is $L$-countable and stable. This property is the same for all variants of stability.
The smallest stable ordinal is also the smallest ordinal $\alpha$ such that $L_\alpha\models\text{KP}+\Sigma_2^1\text{-reflection}$, which in turn is the smallest ordinal which is not the order-type of any $\Delta_2^1$-ordering of the natural numbers. The smallest stable ordinal $\sigma$ has the property that any $\Sigma_1(L_\sigma)$ subset of $\omega$ is $\omega$-finite. [1]
If there is an ordinal $\eta$ such that $L_\eta\models\text{ZFC}$ (i.e. if there is a transitive model of $\text{ZFC}$), then the minimal such $\eta$ is smaller than the least stable ordinal. On the other hand, the least $(+1)$-stable ordinal and the least nonprojectible ordinal both lie between the least recursively weakly compact ordinal and the least $\Sigma_2$-admissible ordinal (and the same holds for the other weakened variants of stability defined above). [1]
References
1. Madore, David. A zoo of ordinals. 2017.
2. Jech, Thomas J. Set Theory. Third millennium edition (revised and expanded), Springer-Verlag, Berlin, 2003.
|
I have a question on the possibility of using Mathematica to plot a convex closed region satisfying a linear system of equalities and inequalities.
Let me first present the linear system. Let $x\equiv (x_1,...,x_{34})$ be a $34\times 1$ vector of unknowns. Let $A,C,E,b,d$ be matrices of known parameters with appropriate dimensions. The linear system is
$ 1) A\times (x_1,...,x_{32})=b$ ,
$ 2)C\times(x_1,...,x_{32}) \leq d$,
$ 3)E\times(x_1,...,x_{32})-(x_{33},x_{34})=0$,
The matrices are here https://filebin.net/e7k3749uxd2f1dg4, in
.mat format. They are too big to be reported.
My objective: I would like to plot the region of values of $(x_{33},x_{34})$ for which there exists $(x_1,...,x_{32})$ such that $(x_1,...,x_{34})$ satisfies $1),2),3)$.

Question: Can Mathematica allow me to plot the desired $2$-D region? The tricky part here is to explore the set of solutions of $1),2)$ with respect to $(x_1,...,x_{32})$. I typically use Matlab which, however, to the best of my knowledge, does not have packages doing what I want, due to the high dimension of the problem.

Clarification: I have never used Mathematica (hence, I don't have code from my attempts to show you), but I'd be happy to start studying it if you tell me that it can help me with my question.

Clarification 2: there exists at least one value of $(x_1,...,x_{34})$ satisfying 1),2),3). It is here https://filebin.net/e7k3749uxd2f1dg4 under the name
possible_solution_complete.mat.
|
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
@ARTICLE{GitmanHamkinsKaragila:KM-set-theory-does-not-prove-the-class-Fodor-theorem, author = {Victoria Gitman and Joel David Hamkins and Asaf Karagila}, title = {Kelley-Morse set theory does not prove the class {F}odor theorem}, journal = {}, year = {}, volume = {}, number = {}, pages = {}, month = {}, note = {manuscript under review}, abstract = {}, keywords = {under-review}, eprint = {1904.04190}, archivePrefix = {arXiv}, primaryClass = {math.LO}, source = {}, doi = {}, url = {http://wp.me/p5M0LV-1RD}, }
Abstract. We show that Kelley-Morse (KM) set theory does not prove the class Fodor principle, the assertion that every regressive class function $F:S\to\newcommand\Ord{\text{Ord}}\Ord$ defined on a stationary class $S$ is constant on a stationary subclass. Indeed, it is relatively consistent with KM for any infinite $\lambda$ with $\omega\leq\lambda\leq\Ord$ that there is a class function $F:\Ord\to\lambda$ that is not constant on any stationary class. Strikingly, it is consistent with KM that there is a class $A\subseteq\omega\times\Ord$, such that each section $A_n=\{\alpha\mid (n,\alpha)\in A\}$ contains a class club, but $\bigcap_n A_n$ is empty. Consequently, it is relatively consistent with KM that the class club filter is not $\sigma$-closed.
The
class Fodor principle is the assertion that every regressive class function $F:S\to\Ord$ defined on a stationary class $S$ is constant on a stationary subclass of $S$. This statement can be expressed in the usual second-order language of set theory, and the principle can therefore be sensibly considered in the context of any of the various second-order set-theoretic systems, such as Gödel-Bernays (GBC) set theory or Kelley-Morse (KM) set theory. Just as with the classical Fodor’s lemma in first-order set theory, the class Fodor principle is equivalent, over a weak base theory, to the assertion that the class club filter is normal. We shall investigate the strength of the class Fodor principle and try to find its place within the natural hierarchy of second-order set theories. We shall also define and study weaker versions of the class Fodor principle.
If one tries to prove the class Fodor principle by adapting one of the classical proofs of the first-order Fodor’s lemma, then one inevitably finds oneself needing to appeal to a certain second-order class-choice principle, which goes beyond the axiom of choice and the global choice principle, but which is not available in Kelley-Morse set theory. For example, in one standard proof, we would want for a given $\Ord$-indexed sequence of non-stationary classes to be able to choose for each member of it a class club that it misses. This would be an instance of class-choice, since we seek to choose classes here, rather than sets. The class choice principle $\text{CC}(\Pi^0_1)$, it turns out, is sufficient for us to make these choices, for this principle states that if every ordinal $\alpha$ admits a class $A$ witnessing a $\Pi^0_1$-assertion $\varphi(\alpha,A)$, allowing class parameters, then there is a single class $B\subseteq \Ord\times V$, whose slices $B_\alpha$ witness $\varphi(\alpha,B_\alpha)$; and the property of being a class club avoiding a given class is $\Pi^0_1$ expressible.
Thus, the class Fodor principle, and consequently also the normality of the class club filter, is provable in the relatively weak second-order set theory $\text{GBC}+\text{CC}(\Pi^0_1)$. This theory is known to be weaker in consistency strength than the theory $\text{GBC}+\Pi^1_1$-comprehension, which is itself strictly weaker in consistency strength than KM.
But meanwhile, although the class choice principle is weak in consistency strength, it is not actually provable in KM; indeed, even the weak fragment $\text{CC}(\Pi^0_1)$ is not provable in KM. Those results were proved several years ago by the first two authors, but they can now be seen as consequences of the main result of this article (see Corollary 15). In light of that result, however, one should perhaps not have expected to be able to prove the class Fodor principle in KM.
Indeed, it follows similarly from arguments of the third author in his dissertation that if $\kappa$ is an inaccessible cardinal, then there is a forcing extension $V[G]$ with a symmetric submodel $M$ such that $V_\kappa^M=V_\kappa$, which implies that $\mathcal M=(V_\kappa,\in, V^M_{\kappa+1})$ is a model of Kelley-Morse, and in $\mathcal M$, the class Fodor principle fails in a very strong sense.
In this article, adapting the ideas of Karagila to the second-order set-theoretic context and using similar methods as in Gitman and Hamkins’s previous work on KM, we shall prove that every model of KM has an extension in which the class Fodor principle fails in that strong sense: there can be a class function $F:\Ord\to\omega$, which is not constant on any stationary class. In particular, in these models, the class club filter is not $\sigma$-closed: there is a class $B\subseteq\omega\times\Ord$, each of whose vertical slices $B_n$ contains a class club, but $\bigcap B_n$ is empty.
Main Theorem. Kelley-Morse set theory KM, if consistent, does not prove the class Fodor principle. Indeed, if there is a model of KM, then there is a model of KM with a class function $F:\Ord\to \omega$, which is not constant on any stationary class; in this model, therefore, the class club filter is not $\sigma$-closed.
We shall also investigate various weak versions of the class Fodor principle.
Definition. For a cardinal $\kappa$:
The class $\kappa$-Fodor principle asserts that every class function $F:S\to\kappa$ defined on a stationary class $S\subseteq\Ord$ is constant on a stationary subclass of $S$.
The class ${<}\Ord$-Fodor principle is the assertion that the $\kappa$-class Fodor principle holds for every cardinal $\kappa$.
The bounded class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is bounded on a stationary subclass of $S$.
The very weak class Fodor principle asserts that every regressive class function $F:S\to\Ord$ on a stationary class $S\subseteq\Ord$ is constant on an unbounded subclass of $S$.
We shall separate these principles as follows.
Theorem. Suppose KM is consistent.
1. There is a model of KM in which the class Fodor principle fails, but the class ${<}\Ord$-Fodor principle holds.
2. There is a model of KM in which the class $\omega$-Fodor principle fails, but the bounded class Fodor principle holds.
3. There is a model of KM in which the class $\omega$-Fodor principle holds, but the bounded class Fodor principle fails.
4. $\text{GB}^-$ proves the very weak class Fodor principle.
Finally, we show that the class Fodor principle can neither be created nor destroyed by set forcing.
Theorem. The class Fodor principle is invariant by set forcing over models of $\text{GBC}^-$. That is, it holds in an extension if and only if it holds in the ground model.
Let us conclude this brief introduction by mentioning the following easy negative instance of the class Fodor principle for certain GBC models. This argument seems to be a part of set-theoretic folklore. Namely, consider an $\omega$-standard model of GBC set theory $M$ having no $V_\kappa^M$ that is a model of ZFC. A minimal transitive model of ZFC, for example, has this property. Inside $M$, let $F(\kappa)$ be the least $n$ such that $V_\kappa^M$ fails to satisfy $\Sigma_n$-collection. This is a definable class function $F:\Ord^M\to\omega$ in $M$, but it cannot be constant on any stationary class in $M$, because by the reflection theorem there is a class club of cardinals $\kappa$ such that $V_\kappa^M$ satisfies $\Sigma_n$-collection.
Read more by going to the full article:
V. Gitman, J. D. Hamkins, and A. Karagila, “Kelley-Morse set theory does not prove the class Fodor theorem.” (manuscript under review)
|
I want to solve the diffusion equation, i.e. $$ \dot{f} - f'' = 0 $$ with a boundary condition $f(0) = f(1) = 0$ and with an initial condition that $f$ is a boxcar function concentrated over some small region of size $d \ll 1$. The units are scaled so that the diffusion constant is equal to unity. Then I want to use the resulting solution to compute an integral of the form $$ \int dt \int dx F(f(x), f'(x)) , $$ where $F$ is some function (it is actually quadratic in $f$ with some $x$-dependent coefficients if this makes any difference).
I have tried two different ways to do this. The first is to solve the diffusion equation analytically in Fourier space and then write my integral also in Fourier space and calculate it numerically. The second is to solve the diffusion equation numerically (I've tried both explicit and implicit methods, as well as the well-known Crank-Nicolson method).
In all approaches, I end up with the same problem: the initial condition for $f$ is very concentrated, which means I need a ridiculously small space and time discretization (corresponding to taking into account very large Fourier modes in the Fourier-space solution). On the other hand, since I need to solve the equation also for large times (to calculate the integral), I must follow the solution over long times as well. The Fourier-space method is not faster, because there I actually need to compute three integrals numerically (two Fourier sums with a kernel that is calculated from the $x$-integral). This makes my calculations very slow.
Are there any better methods to do what I just described? Since the diffusion equation is very simple, I feel pretty stupid for not being able to do this faster. I've been thinking about implementing some kind of adaptive grid that would be concentrated at small times and close to the initial condition, but this seems like a complicated thing to do to solve such a simple problem.
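For concreteness, here is a minimal pure-Python sketch of the Crank-Nicolson scheme I mean (illustrative only, not my actual code; one time step with homogeneous Dirichlet boundaries, solved with the Thomas algorithm):

```python
def crank_nicolson_step(f, lam):
    """One Crank-Nicolson step for f_t = f_xx on [0, 1] with f(0) = f(1) = 0.

    f   : list of interior values f_1 .. f_n
    lam : dt / dx^2
    Solves (I - lam/2 * T) f_new = (I + lam/2 * T) f, where T = tridiag(1, -2, 1).
    """
    n = len(f)
    a = -0.5 * lam        # off-diagonal entry of the implicit matrix
    b = 1.0 + lam         # diagonal entry of the implicit matrix
    # explicit half-step: (I + lam/2 * T) f, with zero boundary values
    rhs = [(1.0 - lam) * f[i]
           + 0.5 * lam * ((f[i - 1] if i > 0 else 0.0)
                          + (f[i + 1] if i < n - 1 else 0.0))
           for i in range(n)]
    # Thomas algorithm for the constant-coefficient tridiagonal system
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = a / b, rhs[0] / b
    for i in range(1, n):
        m = b - a * cp[i - 1]
        cp[i] = a / m
        dp[i] = (rhs[i] - a * dp[i - 1]) / m
    out = [0.0] * n
    out[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        out[i] = dp[i] - cp[i] * out[i + 1]
    return out
```

Even with an unconditionally stable scheme like this, resolving a boxcar of width $d \ll 1$ forces $dx \ll d$, and integrating up to $t = O(1)$ then takes very many steps — which is exactly the problem.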
|
This question already has an answer here:
Consider the following complex power series $$\sum_{n \geq 1} \frac{z^n}{n} \,\,\,\,\,\,\, z \in \mathbb{C}$$
It surely converges conditionally for $z=-1$ (by the alternating series test), while for $z=1$ it diverges (it is the harmonic series).
My question is: how can one show that the power series converges conditionally for
any $z \in \mathbb{C}$ such that $|z|=1$ (except for $z=1$)?
|
This is not a solution, but it's too long for a comment.
As I wrote in the comment above, if $n$ is composite, then the statement is easily proved. Write $n=x \cdot y$ and pick $(a,b)=(x, xy-x)$. Since $$a^2+b^2=x^2(1+(y-1)^2)$$is divisible by $x^2$, it's not a prime.
This reduces the study of this problem to primes $n \ge 7$.
This is equivalent to the following:
$$\exists a \in \left\{ 1, \dots , \frac{n-1}{2} \right\}: \ a^2+(n-a)^2 \mbox{ is composite}$$
Now, for each $a \in \left\{ 1, \dots , \frac{n-1}{2} \right\}$, set $$s=n-2a.$$Note that $s$ is odd and that $s \in \left\{ 1, 3, 5, \dots , n-2 \right\}$. Then$$\frac{n^2+s^2}{2} = \frac{2n^2-4an+4a^2}{2} = n^2-2an+2a^2 = a^2+(n-a)^2,$$so the claim is that this quantity is composite for a suitable choice of $s$.
Hence your conjecture is equivalent to the following:
For all primes $n \ge 7$ there exists an odd integer $s \in \{ 1, \dots , n-2\}$ such that $\frac{n^2+s^2}{2}$ is composite.
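For what it's worth, this equivalent formulation is easy to test numerically (a throwaway brute-force script; the search range is arbitrary):

```python
def is_prime(m):
    """Trial-division primality test; fine for small m."""
    if m < 2:
        return False
    if m % 2 == 0:
        return m == 2
    f = 3
    while f * f <= m:
        if m % f == 0:
            return False
        f += 2
    return True

def has_composite_witness(n):
    """Is there an odd s in {1, 3, ..., n-2} with (n^2 + s^2) / 2 composite?"""
    return any(not is_prime((n * n + s * s) // 2) for s in range(1, n - 1, 2))

# check the claim for every prime 7 <= n < 2000
print(all(has_composite_witness(n) for n in range(7, 2000) if is_prime(n)))
```

This confirms the statement for all primes $7 \le n < 2000$; of course it proves nothing in general.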
|
From a categorical perspective, the subcategory of $\mathsf{Top}$ generated by spaces with open connected components does not have pullbacks.
Borceux and Janelidze,
Galois Theories, section 6.4:
The category $\mathsf{Top}$ of topological spaces is extensive, but not of the form $\mathsf{Fam}(\mathcal A)$. On the other hand the category of topological spaces with open connected components (see proposition 6.1.1), which is the "obvious best replacement" of $\mathsf{Top}$ by a category of the form $\mathsf{Fam}(\mathcal A)$, does not have pullbacks.
Regarding your second question (there's probably a mistake somewhere, since I doubt nobody else would have thought of this before):
First of all, I think $\Pi_0$ has a right adjoint whenever it exists and $\mathcal A$ admits a terminal object. It is given by the discrete functor $H$ defined by $A\mapsto A\cdot \mathbf{1}=\coprod_A\mathbf{1}$. The very
existence of a connected components functor means your category is actually of the form $\mathsf{Fam}(\mathcal A)$ i.e is a free coproduct cocompletion. These are precisely the extensive categories in which every object has a presentation as a coproduct of connected objects - the connected components.
Now, I think that for a space $X$ TFAE:
$X$ is the coproduct of its connected components.
$X$ has open connected components.
Proof. The fact that $\mathsf{Top}$ is extensive ensures that a coproduct decomposition is unique up to homeomorphism. By definition a space is disconnected if it has a nontrivial coproduct presentation by open subspaces. Hence, the uniqueness up to isomorphism guarantees that if $X$ is the coproduct of its connected components, they are open. You already wrote the converse.
This means the largest full subcategory of $\mathsf{Top}$ on which $\Pi_0$ is defined is the category of spaces with open connected components.
Added. In any category there is a notion of connected object. This notion is especially well behaved in extensive categories, in which coproducts interact well with pullback. In a "spatial" category, ideally, every object should have a presentation as a unique up to iso coproduct of connected objects - a canonical decomposition into simple parts. However, this is often not true - totally disconnected spaces need not be discrete and so are not the coproducts of copies of $\mathbf{1}$.
Given a category $\mathcal A$, $\mathsf{Fam}(\mathcal A)$ is its free coproduct completion. What it does is freely append coproducts of connected objects. It can be defined as a fibration, as in section 6.1 of
Galois Theories
For a family $(A_i)_{i\in I}$ of objects of a category $\mathcal A$ we
write $$(A_i)_{i\in I}=A=(A_i)_{i\in I(A)}$$ considering $I$ as a
functor $$I:\mathsf{Fam}(\mathcal A)\longrightarrow \mathsf{Set}$$
from the category of all families of objects in $\mathcal A$ to the
category of sets.
In this way, (writing $\Pi_0=I$) every object $(A_i)_{i\in \Pi_0(A)}$ in $\mathsf{Fam}(\mathcal A)$ is identified as the collection of connected components of the object $\coprod_{i\in \Pi_0(A)}A_i$.
The existence of $\Pi_0$ is equivalent to saying that
every object has a unique up to iso presentation as a coproduct of connected objects, and that's why it's important.
|
A countable limit ordinal $\kappa$ has cofinality $\omega$. One proves this in
ZF, say, using the usual trick for representing $\kappa$ as a countable set of reals having closed convex span $[0,1]$ (with the usual order) and then comparing with any increasing sequence in $[0,1]$ converging to 1.
Nevertheless, I suspect independence from
ZF for the following uniform version of this claim:
There exists a function $f:\omega \times \omega_1 \rightarrow \omega_1$ such that
1) $f(\alpha,\beta) < \beta$;
2) ${\rm sup}_\alpha f(\alpha,\beta) =\beta $ for $\beta$ a limit ordinal.
(For fixed $\beta$ assume that $f(\alpha,\beta)$ increases with $\alpha$, if you like.)
Briefly, such an $f$ would support, by induction and coding tricks, the construction of an injection from $\omega_1$ to ${\Bbb R}$; that in turn would mean that
CH implies the existence of a well-ordering of the reals. (Details on demand.) Questions:
1) Does this independence come up in the literature?
2) Can someone point me to a model of
ZF+CH where the reals have no well-ordering?
3) Is it easy to get directly a model of
ZF having no such $f$ by forcing?
4) Is the existence of $f$ equivalent to any well-known consequences of
AC?
5) Are there toposes where even the original cofinality statement (on some reasonable interpretation) fails (for lack, say, of a global bijection between $\kappa$ and the natural number object)?
|
I found at the beginning of tom Dieck's book the following (unproved) result:
Suppose $X$ is the colimit of the sequence $$ X_1 \subset X_2 \subset X_3 \subset \cdots $$ Suppose points in $X_i$ are closed. Then each compact subset $K$ of $X$ is contained in some $X_k$
Now I really don't know how to prove this fact. The idea would be to find a suitable open cover and, after taking a finite subcover, to argue that $K$ lies in one of the $X_k$. I'm able to do this reasoning in some more specific cases, where I have more control over what the open subsets look like, but in this full generality I don't see which open cover to take.
My attempt: The only idea or approach I've been able to come up with so far is to use some kind of sequence of points $x_n \in K\cap X_n \setminus X_{n-1}$, which can be assumed to exist by contradiction. Since $K$ is compact, there must be an accumulation point $k\in K$. Clearly $k \in X_k$ (a little abuse of notation here), and every neighbourhood of $k$ contains infinitely many terms of this sequence. Now everything seems to boil down to finding the right neighbourhood to derive the contradiction. It seems doable, but I don't have any idea how to choose it, because the only open sets I have for sure are complements of points, and those seem a little too coarse for what I want to do.
As a side note, May claims on page $67$ of his "Concise Course (revised)" that this result holds for arbitrary based spaces. The proof seems to use the above result without the T1 assumption. How can one prove the result in such generality? (No details were provided, only the rough idea.)
|
This is standard theory. Try
Birrell, N. D., & Davies, P. C. W. (1982). Quantum Fields in Curved Space. Cambridge: Cambridge University Press.
Bog standard Curved space QFT text. Don't remember how much is said specifically about spinors though.
Brill, D., & Wheeler, J. (1957). Interaction of Neutrinos and Gravitational Fields. Reviews of Modern Physics, 29(3), 465–479. doi:10.1103/RevModPhys.29.465
<-- This paper was particularly clear from memory.
Yepez, J. (2011). Einstein’s vierbein field theory of curved space. General Relativity and Quantum Cosmology; History of Physics. Retrieved from http://arxiv.org/abs/1106.2037
Great discussion. Thanks twistor59.
Boulanger, N., Spindel, P., & Buisseret, F. (2006). Bound states of Dirac particles in gravitational fields. Physical Review D, 74(12). doi:10.1103/PhysRevD.74.125014
Technical examples worked out in painful detail
Lasenby, A., Doran, C., & Gull, S. (1998). Gravity, gauge theories and geometric algebra. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 356(1737), 487–582. General Relativity and Quantum Cosmology; Astrophysics. doi:10.1098/rsta.1998.0178 http://arxiv.org/abs/gr-qc/0405033
Geometric algebra technique - a powerful and elegant modern formalism that I'm hardly an expert on. See Muphrid's answer for more details. :)
These are less specific to the question but still with material pertaining to it:
There are other references. I'll put them in as I think of them or others point them out (thanks guys!).
The reason you need the spin connection is because fundamentally you need the tetrad or orthonormal frame fields. These fields give a set of "laboratory frames" at every point in spacetime:
$$ e^\mu_a(x),\ \text{with}\ e^\mu_a(x) e_{\mu b}(x) = \eta_{ab}, $$
where $a$ labels the field $a=\hat{t},\hat{x},\cdots$ and $\mu$ is the spacetime vector index. The intuitive meaning of this is that $e^\mu_{\hat{t}}$ represents the 4-velocity of the lab, $e^\mu_{\hat{x}}$ is bolted down on the lab bench oriented along the $x$-axis etc. You can prove the relationship
$$ g_{\mu\nu}(x) = \eta_{ab} e^a_\mu(x) e^b_\nu(x). $$
For this reason the tetrad is commonly known as the "square root of the metric," which is not an entirely satisfactory notion. Anyway, you can see that the tetrad is not uniquely defined. Any tetrad related to another by $e'^a_\mu(x) = \Lambda^a_b(x) e^b_\mu(x)$ where $\Lambda^a_b(x)$ is a
local Lorentz transformation is just as good. This corresponds to the freedom of different labs to rotate their axes and boost themselves independently.
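As a quick numerical sanity check of $g_{\mu\nu}(x) = \eta_{ab} e^a_\mu(x) e^b_\nu(x)$, here is a sketch with the standard static, diagonal Schwarzschild tetrad (my own illustrative example, not taken from the references above):

```python
from math import sqrt, sin, pi

# Schwarzschild metric, static diagonal tetrad; parameter values are arbitrary
M, r, th = 1.0, 10.0, pi / 3
f = 1.0 - 2.0 * M / r

# Minkowski metric eta_{ab} = diag(-1, 1, 1, 1)
eta = [[0.0] * 4 for _ in range(4)]
eta[0][0], eta[1][1], eta[2][2], eta[3][3] = -1.0, 1.0, 1.0, 1.0

# e[a][mu]: frame (hatted) index a, coordinate index mu = (t, r, theta, phi)
e = [[0.0] * 4 for _ in range(4)]
e[0][0] = sqrt(f)         # e^that_t
e[1][1] = 1.0 / sqrt(f)   # e^rhat_r
e[2][2] = r               # e^thetahat_theta
e[3][3] = r * sin(th)     # e^phihat_phi

# reconstruct g_{mu nu} = eta_{ab} e^a_mu e^b_nu
g = [[sum(eta[a][b] * e[a][mu] * e[b][nu] for a in range(4) for b in range(4))
      for nu in range(4)] for mu in range(4)]
```

The reconstructed matrix is diagonal with $g_{tt}=-(1-2M/r)$, $g_{rr}=(1-2M/r)^{-1}$, $g_{\theta\theta}=r^2$ and $g_{\phi\phi}=r^2\sin^2\theta$ — the Schwarzschild line element.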
This means the theory has a huge built in redundancy - local Lorentz invariance - which plays a similar role for spinors as coordinate tranformation invariance does in GR. The tetrads are necessary because spinor representations are defined in relation to the double cover of the Lorentz group SL(2,C), which cannot be represented in terms of tensors under the diffeomorphism group. You can however define spinors relative to a locally Minkowski frame:
$$ \psi(x) \rightarrow \left( 1 - \frac{i}{2} \omega_{ab}(x) \sigma^{ab} \right) \psi(x), $$
where $\omega_{ab}(x)$ is a
local Lorentz transformation and $\sigma^{ab}\propto [\gamma^a,\gamma^b]$ are the generators of spinor transformations. The spinors basically live in an internal space. The next key idea is that you then have to be able to "solder" SL(2,C) representations in these frames together consistently to cover the spacetime. The consistency conditions you desire are that:
$\bar{\psi} \psi$ is a scalar field.
The product rule and linearity work for covariant derivatives.
The tetrad postulate holds, i.e. the spinor covariant derivative is compatible with the ordinary covariant derivative.
These conditions form the relationship between the internal spin and the spacetime, and they give the formula for the spin connection:
$$ \omega_{\mu b}^{a}=e_{\lambda}^{a}\Gamma_{\mu\nu}^{\lambda}e_{b}^{\nu}-\left(\partial_{\mu}e_{\nu}^{a}\right)e_{b}^{\nu}. $$
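As a sanity check of this formula on the simplest nontrivial example — the flat plane in polar coordinates, with tetrad $e^{\hat r}_r = 1$, $e^{\hat\theta}_\theta = r$ (my own illustration, not from the references):

```python
# Flat plane in polar coordinates, x^0 = r, x^1 = theta:
# g = diag(1, r^2); nonzero Christoffels are
# Gamma^r_{theta theta} = -r and Gamma^theta_{r theta} = Gamma^theta_{theta r} = 1/r.
r = 2.5

e = [[1.0, 0.0], [0.0, r]]            # e[a][mu] = e^a_mu
e_inv = [[1.0, 0.0], [0.0, 1.0 / r]]  # e_inv[mu][a] = e^mu_a
Gamma = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]  # Gamma[lam][mu][nu]
Gamma[0][1][1] = -r
Gamma[1][0][1] = Gamma[1][1][0] = 1.0 / r
de = [[[0.0] * 2 for _ in range(2)] for _ in range(2)]     # de[mu][a][nu] = d_mu e^a_nu
de[0][1][1] = 1.0  # only d_r e^thetahat_theta = 1 is nonzero

def omega(a, mu, b):
    # omega^a_{mu b} = e^a_lam Gamma^lam_{mu nu} e^nu_b - (d_mu e^a_nu) e^nu_b
    return sum(e[a][lam] * Gamma[lam][mu][nu] * e_inv[nu][b]
               for lam in range(2) for nu in range(2)) \
         - sum(de[mu][a][nu] * e_inv[nu][b] for nu in range(2))
```

The only nonvanishing components come out as $\omega^{\hat r}_{\theta\,\hat\theta} = -1$ and $\omega^{\hat\theta}_{\theta\,\hat r} = +1$, the familiar polar-coordinates spin connection.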
In older literature you may see curved space gamma matrices defined by contraction with the tetrad:
$$ \gamma^\mu(x) = \gamma^a e^\mu_a(x). $$
This is fine but I find it less confusing to keep the tetrads explicit. Note that the $\gamma^a$ are constant numerical matrices, whereas $\gamma^\mu(x)$ are spacetime functions.
This, combined with the references and some googling should hopefully get you started. If you are still really stuck after trying to work some of this out (Try to do it for yourself! There are so many different conventions in the literature it's hard to trust copy'n'pasting different people's results!) then I have an example calculation from go to woe in my honours thesis.
|
Think of moving volatility in the other direction.
As volatility approaches zero, any call with strike strictly greater than the ATM strike, $K>K_{ATM}$, has zero probability of ending in the money, and the corresponding option value is zero. An infinitesimally small change in the stock price does not change this, so the option value remains zero nearby. Thus, the sensitivity is zero.
Similarly, for $K<K_{ATM}$ all the options end up in the money, so the $\Gamma$ is also zero (though for these ITM options $\Delta=1$ rather than 0, ignoring interest and dividend rates).
Only strikes very close to $ATM$ have any likelihood of changing between $\Delta=0$ and $\Delta=1$.
Now, note that $\Gamma = \frac{\partial}{\partial S}\Delta$ for any volatility. Furthermore, for a call, $\Delta(S) \rightarrow 1$ as $ S \rightarrow \infty$ and $\Delta(S) \rightarrow 0$ as $S \rightarrow 0$.
Thus
$$\int_0^\infty \Gamma(S) dS = 1$$
That is to say, the area under the gamma curve is always 1.
In high-volatility cases, the $\Gamma$ is "spread out" over a wide range of $S$, so it never gets very big yet adds up to 1. When volatility is low, the $\Gamma$ is all concentrated near $K_{ATM}$ so it has to get very big.
We conclude that as volatility increases, $\Gamma$ decreases near $K_{ATM}$ and increases for other strikes.
(This is a more formal version of SolitonK's answer, which I have upvoted)
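As a quick numerical illustration (my own sketch, not part of the original argument; the Black-Scholes model and the parameter values below are assumed purely for the example), one can check that the area under the gamma curve stays at 1 for any volatility while the ATM peak grows as volatility shrinks:

```python
import math

def bs_gamma(S, K, r, sigma, T):
    """Black-Scholes gamma: N'(d1) / (S * sigma * sqrt(T))."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    pdf = math.exp(-0.5 * d1 * d1) / math.sqrt(2.0 * math.pi)
    return pdf / (S * sigma * math.sqrt(T))

def gamma_area(K, r, sigma, T, s_max=1000.0, n=100_000):
    """Riemann sum of gamma over S in (0, s_max]; should be close to 1."""
    ds = s_max / n
    return sum(bs_gamma(i * ds, K, r, sigma, T) * ds for i in range(1, n + 1))

K, r, T = 100.0, 0.0, 1.0
for sigma in (0.05, 0.20, 0.50):
    print(f"sigma={sigma:.2f}  area={gamma_area(K, r, sigma, T):.4f}  "
          f"ATM gamma={bs_gamma(K, K, r, sigma, T):.4f}")
```

The area is ~1 in every case, while the ATM gamma for $\sigma=0.05$ is roughly an order of magnitude larger than for $\sigma=0.50$.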
|
TURBO stream animation / LHCb Collaboration. An animation illustrating the TURBO stream is provided. It shows events discarded by the trigger in quick sequence, followed by an event that is kept but stripped of all data except four tracks [...] LHCB-FIGURE-2019-010. Geneva: CERN, 2019.
LHCB-FIGURE-2019-008 / LHCb Collaboration. Pending. Geneva: CERN, 2019.
Smog2 VELO tracking efficiency / LHCb Collaboration. The LHCb fixed-target programme is facing a major upgrade (Smog2) for Run 3 data taking, consisting in the installation of a confinement cell for the gas covering $z \in [-500, -300] \, mm$. Such a displacement of the $p$-gas collisions with respect to the nominal $pp$ interaction point requires a detailed study of the reconstruction performance. [...] LHCB-FIGURE-2019-007. Geneva: CERN, 2019.
Background rejection study in the search for $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ / LHCb Collaboration. A background rejection study has been made using LHCb simulation in order to investigate the capacity of the experiment to distinguish between $\Lambda^0 \rightarrow p^+ \mu^- \overline{\nu}$ and its main background $\Lambda^0 \rightarrow p^+ \pi^-$. Two variables were explored, and their rejection power was estimated by applying selection criteria. [...] LHCB-FIGURE-2019-006. Geneva: CERN, 2019.
Tracking efficiencies prior to alignment corrections from 1st data challenges / LHCb Collaboration. These plots show the first results on tracking efficiencies, before application of alignment corrections, as obtained from the 1st data challenge tests. In this challenge, several tracking detectors (the VELO, SciFi and Muon) have been misaligned and the effects on the tracking efficiencies are studied. [...] LHCB-FIGURE-2019-005. Geneva: CERN, 2019.
First study of the VELO pixel 2-half alignment / LHCb Collaboration. A first look into the 2-half alignment for the Run 3 Vertex Locator (VELO) has been made. The alignment procedure has been run on a minimum bias Monte Carlo Run 3 sample in order to investigate its functionality. [...] LHCB-FIGURE-2019-003. Geneva: CERN, 2019.
Variation of VELO alignment constants with temperature / LHCb Collaboration. A study of the variation of the alignment constants has been made in order to investigate the variations of the LHCb Vertex Locator (VELO) position under different set temperatures between $-30^\circ$ and $-20^\circ$. Alignment for both the translations and rotations of the two halves and of the modules, with certain constraints on the module positions, was performed for each run corresponding to a different temperature. [...] LHCB-FIGURE-2019-001. Geneva: CERN, 2019.
|
Let $B \in \mathbb{R}^{n \times n}$ be symmetric and positive semi-definite with $B = U\Lambda U^T$, where $U = [u_1,\cdots,u_n]$ is an orthogonal matrix with columns $u_i \in \mathbb{R}^n$, and $\Lambda=\text{diag}(0,\cdots,0,\lambda_k,\cdots,\lambda_n)$ is the diagonal matrix with $0<\lambda_k \leq\cdots \leq \lambda_n$. Show that for $\Delta >0$ the following are equivalent:
1- $u_i^Tc=0,\,\,\,\,\,\, \forall i=1,\cdots,k-1$, and $\sum_{i=k}^n \frac{(u_i^Tc)^2}{\lambda_i^2}\leq \Delta^2$
2- There exists $x \in \mathbb{R}^n$ such that $Bx+c = 0$ and $\|x\| \leq \Delta$.
I can prove that $(2)$ implies $(1)$, but I do not know how to show $(1)$ implies $(2)$.
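As a numerical sanity check of the direction $(1)\Rightarrow(2)$ (my own sketch, using a diagonal $B$ so that $U=I$ and the $u_i$ are standard basis vectors; the specific numbers are an arbitrary illustration): take $x=-B^{+}c$, the pseudoinverse solution, which inverts only the nonzero eigenvalues. Then $Bx+c=0$ holds exactly because $u_i^Tc=0$ on the null space, and $\|x\|^2=\sum_{i\ge k}(u_i^Tc)^2/\lambda_i^2$, which condition (1) bounds by $\Delta^2$.

```python
import math

# Diagonal PSD B: eigenvalues (0, 0, 1, 2, 4); the u_i are standard basis vectors.
lam = [0.0, 0.0, 1.0, 2.0, 4.0]
c   = [0.0, 0.0, 1.0, -2.0, 2.0]      # u_i^T c = 0 for the zero eigenvalues

# Pseudoinverse solution x = -B^+ c: invert only the nonzero eigenvalues.
x = [-(ci / li) if li > 0 else 0.0 for li, ci in zip(lam, c)]

residual = [li * xi + ci for li, xi, ci in zip(lam, x, c)]   # B x + c
norm_x   = math.sqrt(sum(xi * xi for xi in x))
bound    = math.sqrt(sum((ci / li) ** 2 for li, ci in zip(lam, c) if li > 0))

print("B x + c =", residual)            # all zeros
print("||x|| =", norm_x, "=", bound)    # equals the sum appearing in condition (1)
```

So any $\Delta \ge \|x\|$ (here $1.5$) makes $(2)$ hold, which is exactly what the inequality in $(1)$ guarantees.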
|
$\newcommand{\qr}[1]{|#1\rangle}$Grover algorithm's input is a superposition, representing the haystack, and the state $\qr{-}$. The $\qr{-}$ seems utterly important: when I replaced $\qr{-}$ by, say, $\qr{+}$, the oracle became the identity operator, and so Grover's algorithm couldn't work with it. So it seems that we need $\qr{-}$ so that $$U_f\qr{i}\qr{-} = (-1)^{f(i)}\qr{i}\qr{-},$$ which is the phase kickback trick. Is $\qr{-}$ the only state that can do the trick for Grover's algorithm?
It would still work if you rotated the $|-\rangle$ state you refer to by an angle whose cosine is less than $\frac{1}{L}$, where $L$ is the square root of the size of the search space (which is a power of 2). So it works with a different state which is a small rotation of the $|-\rangle$ state.
Remember that the states $|+\rangle$ and $|-\rangle$ form a basis. That means that any state $|\psi\rangle$ that you use can be written as $$ |\psi\rangle=\alpha|+\rangle+\beta|-\rangle. $$ By linearity, we can basically treat what these two components do separately. The $|+\rangle$ component is basically useless and gives the correct output with probability $1/N$ if there are $N$ items in the database, while the $|-\rangle$ component returns the marked item with a probability close to 1. Hence the overall success probability of a single run is approximately $$ p=\frac{|\alpha|^2}{N}+|\beta|^2. $$ With, on average $1/p$ repetitions, we'll find the answer we're looking for. So long as $1/p<\sqrt{N}$, we still get a speed-up over the classical case. Hence, roughly, we want $|\beta|^2>1/\sqrt{N}$.
A similar analysis can be repeated whenever the phase kickback trick is used (with the caveat that you never do anything to that qubit that should be in $|-\rangle$ except the operation that creates the phase kickback, but I believe that is always the case).
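To make the two components concrete, here is a toy state-vector simulation (my own sketch, not from the answers above): the $|-\rangle$ branch behaves like a phase oracle, so Grover's iteration concentrates amplitude on the marked item, while the $|+\rangle$ branch sees the oracle as the identity and the success probability stays at $1/N$:

```python
import math

def grover_success(n_items, marked, iterations, phase_oracle=True):
    """Run Grover iterations on a real state vector; return P(marked).

    phase_oracle=True models the |-> ancilla branch (oracle flips the sign
    of the marked amplitude); False models the |+> branch (oracle = identity).
    """
    amp = [1.0 / math.sqrt(n_items)] * n_items    # uniform superposition
    for _ in range(iterations):
        if phase_oracle:
            amp[marked] = -amp[marked]            # oracle via phase kickback
        mean = sum(amp) / n_items                 # diffusion: reflect about the mean
        amp = [2.0 * mean - a for a in amp]
    return amp[marked] ** 2

N = 8
k = int(math.pi / 4 * math.sqrt(N))               # optimal iteration count (= 2 here)
print(grover_success(N, 3, k, phase_oracle=True))   # ~0.945
print(grover_success(N, 3, k, phase_oracle=False))  # exactly 1/8 = 0.125
```

The diffusion step leaves the uniform state fixed, which is why the $|+\rangle$ branch never improves on random guessing.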
|
This result isn't in our book, but it is in my notes and I want to make sure it's correct. Please verify if you can.
I have two I.I.D random variables. I want the conditional expectation of Y given Y is less than some other independent random variable Z.
[tex] E(Y \, \vert \, Y < z) = \dfrac{\int_0^{z} y \cdot f(y) \, dy}{F(z)} [/tex]
Where $f$ is the pdf of $Y$ and $F$ is its cdf (the same as the cdf of $Z$, since they are i.i.d.), so that $F(z) = P(Y < z)$.
I've searched the book and the web, but all I find is the formula for conditional expectation [tex] E(X \, \vert \, Y = y) [/tex] for joint distributions and the like. Is my formula correct?
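A quick Monte Carlo sanity check of the formula (my own sketch, with $Y \sim \mathrm{Uniform}(0,1)$, for which the formula predicts $E(Y \mid Y < z) = (z^2/2)/z = z/2$):

```python
import random

random.seed(0)

def cond_mean_mc(z, trials=200_000):
    """Empirical E(Y | Y < z) for Y ~ Uniform(0, 1)."""
    kept = [y for y in (random.random() for _ in range(trials)) if y < z]
    return sum(kept) / len(kept)

z = 0.6
# Formula: int_0^z y * f(y) dy / F(z) = (z^2 / 2) / z = z / 2 in the uniform case.
print(cond_mean_mc(z), "vs", z / 2)
```

The empirical conditional mean lands on $z/2$ to within sampling noise, which supports the formula (for a fixed threshold $z$; conditioning on the random event $Y < Z$ would additionally require integrating over the distribution of $Z$).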
|
Positive set theory Positive set theory is the name of a certain group of axiomatic set theories originally created as examples of (nonstandard) set theories in which the axiom of foundation fails (e.g. there exists $x$ such that $x\in x$). [1] Those theories are based on a weakening of the (inconsistent) comprehension axiom of naive set theory (which asserts that every formula $\phi(x)$ defines a set that contains all $x$ such that $\phi(x)$) by restricting the formulas used to a smaller class of formulas called positive formulas.
While most positive set theories are weaker than $\text{ZFC}$ (and usually mutually interpretable with second-order arithmetic), one of them, $\text{GPK}^+_\infty$ turns out to be very powerful, being mutually interpretable with Morse-Kelley set theory plus an axiom asserting that the class of all ordinals is weakly compact. [2]
Positive formulas
In the first-order language $\{=,\in\}$, we define BPF formulas (bounded positive formulas) the following way [2]: for all variables $x$, $y$ and BPF formulas $\varphi$, $\psi$, the formulas $x=y$ and $x\in y$ are BPF, and $\varphi\land\psi$, $\varphi\lor\psi$, $\exists x\varphi$ and $(\forall x\in y)\varphi$ are BPF.
A formula is then a GPF formula (generalized positive formula) if it is a BPF formula, or if it is of the form $\forall x(\theta(x)\Rightarrow\varphi)$ with $\theta(x)$ a GPF formula with exactly one free variable $x$ and no parameters, and $\varphi$ a GPF formula (possibly with parameters). [3]
$\text{GPK}$ positive set theories
The positive set theory $\text{GPK}$ consists of the following axioms:
- Empty set: $\exists x\forall y(y\not\in x)$.
- Extensionality: $\forall x\forall y(x=y\Leftrightarrow\forall z(z\in x\Leftrightarrow z\in y))$.
- GPF comprehension: the universal closure of $\exists x\forall y(y\in x\Leftrightarrow\varphi)$ for every GPF formula $\varphi$ (with parameters) in which $x$ does not occur.
The empty set axiom is necessary, as without it the theory would hold in the trivial model which has only one element satisfying $x=\{x\}$. Note that, while $\text{GPK}$ does prove the existence of a set $x$ such that $x\in x$, Olivier Esser proved that it refutes the anti-foundation axiom (AFA). [3]
The theory $\text{GPK}^+$ is obtained by adding the following axiom:
Closure: the universal closure of $\exists x(\forall z(\varphi(z)\Rightarrow z\in x)\land\forall y(\forall z(\varphi(z)\Rightarrow z\in y)\Rightarrow x\subset y))$ for every formula $\varphi(z)$ (not necessarily BPF or GPF) with a free variable $z$ (and possibly parameters) such that $x$ does not occur in $\varphi$.
This axiom scheme asserts that for any (possibly proper) class $C=\{x|\varphi(x)\}$ there is a smallest set $X$ containing $C$, i.e. $C\subset X$ and for all sets $Y$ such that $C\subset Y$, one has $X\subset Y$. [4]
Note that replacing GPF comprehension in $\text{GPK}^+$ by BPF comprehension does not make the theory any weaker: BPF comprehension plus Closure implies GPF comprehension.
Both $\text{GPK}$ and $\text{GPK}^+$ are consistent relative to $\text{ZFC}$, in fact mutually interpretable with second-order arithmetic. However, a much stronger theory, $\text{GPK}^+_\infty$, is obtained by adding the following axiom:
- Infinity: the von Neumann ordinal $\omega$ is a set.
By "von Neumann ordinal" we mean the usual definition of ordinals as sets well-ordered by inclusion and containing all smaller ordinals. Here $\omega$ is the set of all finite ordinals (the natural numbers). The point of this axiom is not to imply the existence of an infinite set: the class $\omega$ exists in any case, so it has a set closure, which is certainly infinite. This set closure happens to satisfy the usual axiom of infinity of $\text{ZFC}$ (i.e. it contains 0 and the successor of each of its members), but in $\text{GPK}^+$ this is not enough to deduce that $\omega$ itself is a set (an improper class).
Olivier Esser showed that $\text{GPK}^+_\infty$ can not only interpret $\text{ZFC}$ (and prove it consistent), but is in fact mutually interpretable with a much stronger set theory, namely Morse-Kelley set theory with an axiom asserting that the (proper) class of all ordinals is weakly compact. This theory is powerful enough to prove, for instance, that there exists a proper class of hyper-Mahlo cardinals. [2]
As a topological set theory
To be expanded.
The axiom of choice and $\text{GPK}$ set theories
Other positive set theories and the inconsistency of the axiom of extensionality
To be expanded. [6]
References
1. Forti, M. and Hinnion, R. The Consistency Problem for Positive Comprehension Principles. Journal of Symbolic Logic 54(4):1401-1418, 1989.
2. Esser, Olivier. An Interpretation of the Zermelo-Fraenkel Set Theory and the Kelley-Morse Set Theory in a Positive Theory. Mathematical Logic Quarterly 43:369-377, 1997.
3. Esser, Olivier. Inconsistency of GPK+AFA. Mathematical Logic Quarterly 42:104-108, 1996.
4. Esser, Olivier. On the Consistency of a Positive Theory. Mathematical Logic Quarterly 45:105-116, 1999.
5. Esser, Olivier. Inconsistency of the Axiom of Choice with the Positive Theory $GPK^+_\infty$. Journal of Symbolic Logic 65(4):1911-1916, 2000.
6. Esser, Olivier. On the axiom of extensionality in the positive set theory. Mathematical Logic Quarterly 19:97-100, 2003.
|
I've just started learning about limits. Why can we say $$ \lim_{x\rightarrow \infty} \frac{\sin x}{x} = 0 $$ even though $\lim_{x\rightarrow \infty} \sin x$ does not exist?
It seems like the fact that sin is bounded could cause this, but I'd like to see it algebraically.
$$ \lim_{x\rightarrow \infty} \frac{\sin x}{x} = \frac{\lim_{x\rightarrow \infty} \sin x} {\lim_{x\rightarrow \infty} x} = ? $$
L'Hopital's rule gives a fraction whose numerator doesn't converge. What is a simple way to proceed here?
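The quotient limit law cannot be applied here, because it requires both limits to exist; the standard algebraic route is the squeeze theorem, using the boundedness of $\sin$:

```latex
\left|\frac{\sin x}{x}\right| \le \frac{1}{x} \quad (x>0),
\qquad
\lim_{x\to\infty}\frac{1}{x}=0
\;\Longrightarrow\;
\lim_{x\to\infty}\frac{\sin x}{x}=0 .
```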
|
Gabriel Romon
Student at ENSAE Paristech and ENS Paris-Saclay (MVA Master's degree). Interested in statistics and machine learning.
A few recent good answers of mine:
DCT for convergence in probability
$\frac{S_n}{\sqrt n}$ is dense in $\mathbb R$ almost surely $\cos(2^n)$ is dense in $[-1,1]$ Showing $(X_n >c_n \text{ i.o.})=(\max_{1\leq i\leq n}X_i >c_n \text{ i.o.})$ Derivative of the MGF Infinite convex combination of characteristic functions is a characteristic function Different $\mathcal C^\infty$ characteristic functions that coincide in a neighborhood of $0$ Different metrics that metrize convergence in probability Relations between different definitions of the Gaussian width Weak consistency from asymptotic unbiasedness $(\sum_{j=1}^{n} X_{j}) / b_{n} \overset {P}{\to} C$ implies $b_{n}\sim b_{n+1}$ CLT and pointwise convergence of densities If $X\in L^1$, $P(X>x)=o\left(\frac 1x\right)$ Convex function with directional derivatives in all directions is differentiable Concentration of the $q$-norm of a Gaussian vector Almost sure convergence of $\sum_n \frac{X_n}n$
Paris, France
|
This will be a talk for the Rutgers Logic Seminar, April 2, 2018. Hill Center, Busch campus.
Abstract. I shall define a certain finite set in set theory $$\{x\mid\varphi(x)\}$$ and prove that it exhibits a universal extension property: it can be any desired particular finite set in the right set-theoretic universe and it can become successively any desired larger finite set in top-extensions of that universe. Specifically, ZFC proves the set is finite; the definition $\varphi$ has complexity $\Sigma_2$ and therefore any instance of it $\varphi(x)$ is locally verifiable inside any sufficient $V_\theta$; the set is empty in any transitive model and others; and if $\varphi$ defines the set $y$ in some countable model $M$ of ZFC and $y\subset z$ for some finite set $z$ in $M$, then there is a top-extension of $M$ to a model $N$ in which $\varphi$ defines the new set $z$. The definition can be thought of as an idealized diamond sequence, and there are consequences for the philosophical theory of set-theoretic top-extensional potentialism.
This is joint work with W. Hugh Woodin.
This is a talk for the Rutgers Logic Seminar on March 25th, 2013. Simon Thomas specifically requested that I give a talk aimed at philosophers.
Abstract. I shall describe the debate on pluralism in the philosophy of set theory, specifically on the question of whether every mathematical and set-theoretic assertion has a definite truth value. A traditional Platonist view in set theory, which I call the universe view, holds that there is an absolute background concept of set and a corresponding absolute background set-theoretic universe in which every set-theoretic assertion has a final, definitive truth value. I shall try to tease apart two often-blurred aspects of this perspective, namely, to separate the claim that the set-theoretic universe has a real mathematical existence from the claim that it is unique. A competing view, the multiverse view, accepts the former claim and rejects the latter, by holding that there are many distinct concepts of set, each instantiated in a corresponding set-theoretic universe, and a corresponding pluralism of set-theoretic truths. After framing the dispute, I shall argue that the multiverse position explains our experience with the enormous diversity of set-theoretic possibility, a phenomenon that is one of the central set-theoretic discoveries of the past fifty years and one which challenges the universe view. In particular, I shall argue that the continuum hypothesis is settled on the multiverse view by our extensive knowledge about how it behaves in the multiverse, and as a result it can no longer be settled in the manner formerly hoped for.
Some of this material arises in my recent articles:
This will be a talk for the Rutgers Logic Seminar on November 19, 2012.
Abstract. I will speak on my recent theorem that every countable model of set theory $M$, including every well-founded model, is isomorphic to a submodel of its own constructible universe. In other words, there is an embedding $j:M\to L^M$ that is elementary for quantifier-free assertions. The proof uses universal digraph combinatorics, including an acyclic version of the countable random digraph, which I call the countable random $\mathbb{Q}$-graded digraph, and higher analogues arising as uncountable Fraisse limits, leading to the hypnagogic digraph, a set-homogeneous, class-universal, surreal-numbers-graded acyclic class digraph, closely connected with the surreal numbers. The proof shows that $L^M$ contains a submodel that is a universal acyclic digraph of rank $\text{Ord}^M$. The method of proof also establishes that the countable models of set theory are linearly pre-ordered by embeddability: for any two countable models of set theory, one of them is isomorphic to a submodel of the other. Indeed, the bi-embeddability classes form a well-ordered chain of length $\omega_1+1$. Specifically, the countable well-founded models are ordered by embeddability in accordance with the heights of their ordinals; every shorter model embeds into every taller model; every model of set theory $M$ is universal for all countable well-founded binary relations of rank at most $\text{Ord}^M$; and every ill-founded model of set theory is universal for all countable acyclic binary relations. Finally, strengthening a classical theorem of Ressayre, the same proof method shows that if $M$ is any nonstandard model of PA, then every countable model of set theory—in particular, every model of ZFC—is isomorphic to a submodel of the hereditarily finite sets $HF^M$ of $M$. Indeed, $HF^M$ is universal for all countable acyclic binary relations.
Article | Rutgers Logic Seminar
|
I've thought about this question for a while without an answer. The key is to consider the structure of the constructible real numbers. I was actually a bit cavalier with my original definition, "$x$ is irrational if $|x-p/q|<q^{-2}$ has infinitely many coprime solutions". The problem lies in what is meant by "infinitely many" here. If it means that there is an injection from $\Bbb N$, then we get the same set as described by the other definition using infinite continued fractions. In particular, if $x$ is irrational, then there is a function $f:\Bbb N\to\Bbb Q$ that converges to $x$, and under the "Russian constructivist" camp, we can assume that $f$ is a computable function, so $x$ is computable. And obviously the rational numbers are computable.
Thus Chaitin's constant $\Omega$ is neither rational nor irrational.
To be clear, in constructivist logic we don't necessarily know that $\Omega$ even exists; in fact, that's the whole point. But it does mean that we can take the Markovian model of constructible reals $\Bbb R_M$ and add $\Omega$ to get a model $\Bbb R_M[\Omega]=:\Bbb R_{\Omega}$ which can be viewed as a subset of $\Bbb R$ (in the ambient universe where LEM is true). Then $\Bbb Q=\Bbb Q_M=\Bbb Q_{\Omega}$, and $\Bbb I_M=\Bbb I_{\Omega}\subsetneq\Bbb I$, with $\Omega\in\Bbb I\setminus \Bbb I_{\Omega}$, yet at the same time $\Omega\in \Bbb R_{\Omega}$ by definition, so we have $\Bbb Q_{\Omega}\cup\Bbb I_{\Omega}\subsetneq\Bbb R_{\Omega}$. (We can even say $\Bbb Q_{\Omega}\cup\Bbb I_{\Omega}=\Bbb R_M$, but this is only valid as a proof in the full universe, using LEM.)
I mentioned that "infinitely many" had two interpretations above. The other one is that it is not finite, and this yields the poorer definition "$x$ is irrational if $x$ is not rational". In this case it is clear that no number can be neither rational nor irrational, because if it is not rational then it must be irrational by definition.
|
In the original form of Maldacena's AdS/CFT, the bulk is classical supergravity and the boundary is a superconformal field theory in the Maldacena limit. However, in many applications of AdS/CFT, for example in AdS/CMT, we only consider a bulk of classical gravity and a boundary CFT, which does not include supersymmetry. My question is: where is the supersymmetry, and why do we not need to consider it in most of the applications?
Though AdS/CMT is not my research area, I recall some basic points from the Nastase book, and thus this is the main reference to answer the AdS/CMT part.
This answer is in two parts.
Part I: Answer the AdS/CMT question.
So, in standard condensed matter theory, near a phase transition there are fixed points that can sometimes exhibit so-called "dynamical scaling". These are called Lifshitz points.
The Lifshitz scaling is
$$t \rightarrow \lambda^z t$$
$$\vec{x} \rightarrow \lambda \vec{x}$$
where in the above $z$ is the so-called critical exponent.
To describe these points, people use a phenomenological AdS/CFT approach. In particular, they try to realize the symmetry group geometrically. In the case of $AdS_{d+1}$ the symmetry group is $SO(d, 2)$; however, if you relax this condition in your bulk and assume that the $AdS/CFT$ correspondence holds for general gravity backgrounds, then you are able to describe a $d+1$-dimensional gravitational background dual to the Lifshitz point:
$$ds_{d+1}^2 = R^2 (- \frac{dt^2}{u^{2z}} + \frac{d \vec{x}^2}{u^2} + \frac{du^2}{u^2})$$
see Nastase's book for these and more formulas and details of course.
Some condensed matter theorists have been very sceptical of these methods so far. I just mention this as a fact.
As a punchline: SUSY is not there, by assumption, since this phenomenological approach to $AdS/CFT$ does not rely on it.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Part II: Comments on the "... supersymmetry, why we need not consider it in most of the applications?"
Most of the applications and formal developments of $AdS/CFT$ do consider SUSY backgrounds.
The reason for that: in a paper, Vafa argued that non-SUSY $AdS/CFT$ belongs to the swampland of string theory unless the theory comes with an infinite tower of Kaluza-Klein modes.
Examples are $AdS/QCD$, Dynamic $AdS/QCD$, and top-down models for holographic flavours (like the $D3/D7$ system, the Sakai-Sugimoto model and others), which possess SUSY.
Final point: Even if SUSY is not realised in nature it remains mathematically consistent -and it actually never started as a physical theory- and there is no problem using it in higher dimensional theories. The only thing that needs treatment is to find a mechanism to break/reduce the number of SUSY when you want to go in the $4d$-theory.
Sorry for the long reply.
Cheers!!!
|
I've tried to explain the solution using as little math as possible, and to give some intuition as to what makes it tick. Nonetheless there will be a little mathematical notation at the end.
First steps: going beyond the obvious solution in the simplest case (N=2)
The statement of the puzzle as presented here doesn't make this very clear, but the puzzle relies on the prisoners not knowing anything about which name is located in which box (until they get into the room, after which they cannot communicate anymore). If every prisoner checks 50 boxes at random, then each prisoner has a ½ chance of finding his own name. If all the prisoners choose a set of boxes at random, independently, then the probability that they all find their own name is ½ × … × ½ = $1/2^{100}$ — infinitesimal.
Making independent choices is a waste, though. If anyone gets it wrong, the situation isn't worse than if everybody gets it wrong. Rather than make independent choices, they can make correlated choices; the idea is to try to arrange that either everybody gets it right, or many get it wrong.
Let's consider the simpler case when there are two prisoners. If they both choose at random, then they have ½ × ½ = ¼ chance of surviving. But there's an obvious waste: suppose prisoner #1 opens the left-hand box and finds his name: then prisoner #2 will not find his name in the left-hand box. So the prisoners can decide that #1 will look at the box on the left and #2 looks at the box on the right: that way, either they both get it right or both get it wrong, and they have a ½ chance of survival.
Incidentally, note that another assumption that wasn't clearly stated here is that the prisoners get to formulate their strategy in secret. If the warden knows which prisoner chose which box, he can arrange for the prisoners to fail by putting prisoner #1's name in the right-hand box.
The next step: N=4
The obvious way to generalize this to more prisoners is to assign each prisoner a fixed set of boxes that he'll open. However, I won't pursue this further, because it doesn't take advantage of an important ability: after a prisoner has opened a first box, he can base his decision on which box to open next on the content of the first box, and so on.
Consider the case with 4 prisoners and 4 boxes. I'll use numbers for the prisoners' names, and assume that the boxes are numbered as well. Intuitively, it is preferable for each prisoner to pick a different box to open first, since otherwise some common choices are wasted. So prisoner #1 opens box #1 and finds a name (number). Now what? If he finds his own name (#1) (¼ chance), of course, he can stop. If he finds some different name (say 2) (¾ chance), what information does this provide? Well, since each box contains a different name, prisoner #1 now knows that box #2 does not contain 2, so prisoner #2 will not be lucky the first time either. Furthermore, the strategy should favor arranging for prisoner #2 to pick box #1 next.
To simplify the analysis, I'll only look at cases where all prisoners follow the same strategy. (I don't have an intuitive argument as to why breaking the symmetry wouldn't be advantageous.) Either they all open the box whose number they found in the first box, or they all open a different box.
If #1 opens box #2 and finds his name there, then either boxes #3 and #4 contain 3 and 4 respectively, or they contain 4 and 3. Either way, with the strategy of using the name in the first box, if one prisoner is lucky the second time then every prisoner is lucky! If #1 opens box #3 instead and finds his name there, then there is a ½ chance that prisoner #2 will find his name in box #2, and a ½ chance that he'll find his name in box #4. But what about prisoner #3? He finds the name of prisoner #1 in box #3, which doesn't give any clue as to where 3 might be instead.
So let's concentrate on the strategy where each prisoner opens the second box whose number is what he found in the first box. What arrangement of numbers in boxes make it work?
There are 4 ways to choose which box contains the number 1. The number 2 can go into any of the 3 remaining boxes. The number 3 can go in either of the 2 remaining boxes, and the number 4 must go into the one remaining box. So there are 4×3×2 = 24 different arrangements. The following arrangements lead to success because each number is either in its own box or swapped with another number:
1234 1243 1324 1432 2134 2143 3214 3412 4231 4321
That's 10 successful arrangements out of 24. The chance of success isn't very far from the theoretical maximum of ½, which is encouraging.
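That count can be verified by brute force (a sketch; `all_succeed` is a hypothetical helper checking that every prisoner's cycle closes within the allowed number of opened boxes):

```python
from itertools import permutations

def all_succeed(perm, max_opens):
    """True if every prisoner finds his own number with the cycle strategy.

    perm[b] is the number found in box b (0-based). Prisoner p opens box p,
    then the box named by what he reads, for at most max_opens boxes.
    """
    n = len(perm)
    for start in range(n):
        box = start
        for _ in range(max_opens):
            if perm[box] == start:
                break                 # found his own number
            box = perm[box]
        else:
            return False              # cycle longer than max_opens
    return True

wins = sum(all_succeed(p, 2) for p in permutations(range(4)))
print(wins, "of", 24)                 # 10 of 24
```

The 10 winners are exactly the permutations whose cycles all have length at most 2: the identity, the 6 single swaps, and the 3 double swaps.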
Note that in order for the chance of success to be 10/24, we need to know that the arrangements have an equal chance of being chosen. If the warden is nasty and arranges the numbers as 2341, the prisoners are sure to lose. This is where the fact that the prisoners choose a strategy in secret comes in. In my analysis I used numbers for prisoners, but in fact the prisoners have names, not numbers, and they can pick a random assignment of names to numbers as part of their secret strategy (in fact, this assignment is the only secret part, since the warden may have looked up the solution of the puzzle).
General analysis
Let's explore a strategy that generalizes what we explored for 4 boxes: each prisoner opens the box with his own number, then the box whose number is contained in the first box, and so on. Consider the sequence of numbers that a certain prisoner encounters: $x_0$ (the inital box numbered with the prisoner's own number), $x_1$ (number contained in box $x_0$), $x_2$ (number contained in box $x_1$), … Since each number is contained in only one box, this sequence cannot contain any repeated element as long as it doesn't loop back to $x_0$. Eventually the sequence has to loop back to $x_0$ since it will run out of numbers. At that point, the prisoner has found his own name. The critical problem for the prisoner is whether the loop completes before or after the prisoner has opened the maximum of 50 boxes.
From now on, let me use the proper mathematical vocabulary. A way to arrange $n$ distinct numbers into $n$ boxes is called a permutation. Opening box number $k$ and looking at the number that it contains is called applying that permutation. Repeated application of a permutation eventually runs into a loop; such a loop is called a cycle. The prisoners succeed if all of the cycles of the permutation have a length of at most 50.
Let's call a cycle long if it contains 51 or more elements. Observe that there cannot be more than one long cycle (if one cycle has at least 51 elements, then there are only 49 or fewer elements to share between the other cycles). So we can count the losing configurations by adding up the permutations of 100 elements that have a cycle of length 51, 52, …, 100.
Lemma: there are $n! = 1 \cdot 2 \cdot 3 \cdot \ldots \cdot (n-1) \cdot n$ distinct permutations of $n$ elements. Proof: there are $n$ ways to pick the image of the first element, $n-1$ remaining ways to pick the image of the second elements, etc., down to a single way to pick the image of the last element.
Now let's count the number of permutations that have a cycle of length $c$ (for $c \ge 51$, so that there's a single such cycle). We're actually going to count each permutation $c$ times, once for each element of the cycle. Pick the first element in the cycle: there are 100 possibilities. There are 99 possibilities for the second element, and so on, until we've picked $c$ elements. So far, that's $100 \times 99 \times \ldots \times (100-c+1)$ possibilities. There are $100-c$ remaining elements, and they can be permuted in any way, so there are $(100-c)!$ possibilities as per the lemma above. That's a total of $(100 \times 99 \times \ldots \times (100-c+1)) \times ((100-c) \times \ldots \times 2 \times 1)$ possibilities, which nicely collapses to $100!$. Recall that we counted each permutation $c$ times, since we counted it once per element in the cycle. So the number of permutations with a cycle of length $c$ is $100! / c$.
The number of permutations with a long cycle is thus$$ \frac{100!}{100} + \frac{100!}{99} + \ldots + \frac{100!}{51} $$That's out of a total of $100!$ permutation. The proportion of failing permutations is thus$$ \frac{1}{100} + \frac{1}{99} + \ldots + \frac{1}{51} $$Numerically, this is about 0.6882, i.e. the chance of success is about 31.18%, a little over the requisite 30%.
In general, the proportion of failing permutations for $N$ prisoners is $H_N - H_{N/2}$ where$$ H_n = 1 + \frac{1}{2} + \ldots + \frac{1}{n} $$is called the $n$th harmonic number. For large values of $n$, $H_n \approx \ln n + C$ for some number C, and the series $H_N - H_{N/2}$ converges to $\ln 2 \approx 0.6931$ from below. (I will not provide an elementary proof of that). This gives a lower limit to the chance of success for large numbers of prisoners: 30.68% is achievable.
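As a sanity check, the failure probability $H_{100} - H_{50}$ can be computed exactly with rational arithmetic; a minimal Python sketch:

```python
from fractions import Fraction

def failure_probability(n):
    """Probability that a uniformly random permutation of n elements
    (n even) has a cycle longer than n/2, i.e. H_n - H_{n/2}."""
    return sum(Fraction(1, c) for c in range(n // 2 + 1, n + 1))

p_fail = failure_probability(100)   # H_100 - H_50
print(float(p_fail))       # about 0.6882
print(float(1 - p_fail))   # chance of success, about 0.3118
```

Increasing `n` shows the failure probability creeping up toward $\ln 2 \approx 0.6931$ from below, as claimed.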
@DavidReed the notion of a "general polynomial" is a bit strange. The general polynomial over a field always has Galois group $S_n$ even if there is no polynomial over the field with Galois group $S_n$
Hey guys. Quick question. What would you call it when the period/amplitude of a cosine/sine function is given by another function? E.g. y=x^2*sin(e^x). I refer to them as variable amplitude and period but upon google search I don't see the correct sort of equation when I enter "variable period cosine"
@LucasHenrique I hate them, I tend to find algebraic proofs are more elegant than ones from analysis. They are tedious. Analysis is the art of showing you can make things as small as you please. The last two characters of every proof are $< \epsilon$
I enjoyed developing the lebesgue integral though. I thought that was cool
But since every singleton except 0 is open, and the union of open sets is open, it follows that all intervals of the form $(a,b)$, $(0,c)$, $(d,0)$ are also open. Thus we can use these 3 classes of intervals as a base, which then intersect to give the nonzero singletons?
uh wait a sec...
... I need arbitrary intersection to produce singletons from open intervals...
hmm... 0 does not even have a nbhd, since any set containing 0 is closed
I have no idea how to deal with points having empty nbhd
o wait a sec...
the open set of any topology must contain the whole set itself
so I guess the nbhd of 0 is $\Bbb{R}$
Btw, looking at this picture, I think the alternate name for this class of topologies, the British rail topology, is quite fitting (with the help of this WfSE to interpret it, of course: mathematica.stackexchange.com/questions/3410/…)
Since, as Leaky has noticed, every point is closest to 0 other than itself, to get from A to B, go to 0. The null line is then like a railway line which connects all the points together in the shortest time
So going from a to b directly is no more efficient than go from a to 0 and then 0 to b
hmm...
$d(A \to B \to C) = d(A,B)+d(B,C) = |a|+|b|+|b|+|c|$
$d(A \to 0 \to C) = d(A,0)+d(0,C)=|a|+|c|$
so the distance of travel depends on where the starting point is. If the starting point is 0, then distance only increases linearly for every unit increase in the value of the destination
But if the starting point is nonzero, then the distance increases quadratically
Combining with the animation in the WfSE, it means that in such a space, if one attempts to travel directly to the destination, then, say the travelling speed is 3 ms-1, for every meter forward the actual distance covered at 3 ms-1 decreases (as illustrated by the shrinking open ball of fixed radius)
only when travelling via the origin will such a quadratic penalty in travelling distance not apply
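The two itineraries computed above can be checked with a small sketch of the metric. This assumes the usual "post office" form $d(x,y)=|x|+|y|$ for $x\neq y$ (and $0$ for $x=y$), which is what the displayed distances use; the sample points $a=2$, $b=5$, $c=3$ are arbitrary:

```python
def d(x, y):
    """British rail (post office) metric on the real line:
    to go from x to y you travel via the origin, unless you stay put."""
    return 0.0 if x == y else abs(x) + abs(y)

a, b, c = 2.0, 5.0, 3.0
via_b = d(a, b) + d(b, c)      # |a|+|b| + |b|+|c|
via_0 = d(a, 0.0) + d(0.0, c)  # |a| + |c|
print(via_b, via_0)            # 15.0 5.0: any nonzero stopover costs extra
```

Note that `d(a, c)` equals `via_0` here, which is exactly the "go via 0" intuition from the chat.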
More interesting things can be said about slight generalisations of this metric:
Hi, looking at the graph isomorphism problem from the perspective of eigenspaces of the adjacency matrix, it gets a geometrical interpretation: the question whether two sets of points differ only by rotation - e.g. 16 points in 6D, forming a very regular polyhedron ...
To test if two sets of points differ by rotation, I thought to describe them as intersection of ellipsoids, e.g. {x: x^T P x = 1} for P = P_0 + a P_1 ... then generalization of characteristic polynomial would allow to test if our sets differ by rotation ...
1D interpolation: finding a polynomial satisfying $\forall_i\ p(x_i)=y_i$ can be written as a system of linear equations, having well known Vandermonde determinant: $\det=\prod_{i<j} (x_i-x_j)$. Hence, the interpolation problem is well defined as long as the system of equations is determined ($\d...
Any alg geom guys on? I know zilch about alg geom to even start analysing this question
Meanwhile I am going to analyse the SR metric later using open balls after the chat proceeds a bit
To add to gj255's comment: The Minkowski metric is not a metric in the sense of metric spaces but in the sense of a metric of Semi-Riemannian manifolds. In particular, it can't induce a topology. Instead, the topology on Minkowski space as a manifold must be defined before one introduces the Minkowski metric on said space. — balu, Apr 13 at 18:24
grr, thought I can get some more intuition in SR by using open balls
tbf there’s actually a third equivalent statement which the author does make an argument about, but they say nothing substantive about the first two.
The first two statements go like this : Let $a,b,c\in [0,\pi].$ Then the matrix $\begin{pmatrix} 1&\cos a&\cos b \\ \cos a & 1 & \cos c \\ \cos b & \cos c & 1\end{pmatrix}$ is positive semidefinite iff there are three unit vectors with pairwise angles $a,b,c$.
And all it has in the proof is the assertion that the above is clearly true.
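One direction of the "clearly true" assertion can at least be checked numerically: the Gram matrix of three unit vectors has exactly the displayed form (entries are cosines of the pairwise angles) and is positive semidefinite by construction. A minimal sketch with random unit vectors (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
# three random unit vectors in R^3
V = rng.normal(size=(3, 3))
V /= np.linalg.norm(V, axis=1, keepdims=True)

# Gram matrix: G[i, j] = <v_i, v_j> = cos(angle between v_i and v_j),
# with ones on the diagonal, matching the matrix in the statement
G = V @ V.T
eigvals = np.linalg.eigvalsh(G)
print(eigvals)   # all nonnegative (up to rounding): the matrix is PSD
```

The converse direction (PSD implies the vectors exist) is the part that actually needs an argument, e.g. via a Cholesky-type factorisation $G = V V^T$.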
I've a mesh specified as an half edge data structure, more specifically I've augmented the data structure in such a way that each vertex also stores a vector tangent to the surface. Essentially this set of vectors for each vertex approximates a vector field, I was wondering if there's some well k...
Consider $a,b$ both irrational and the interval $[a,b]$
Assuming the axiom of choice and CH, I can define an $\aleph_1$ enumeration of the irrationals by labelling them with ordinals from 0 all the way to $\omega_1$
It would seem we could have a cover $\bigcup_{\alpha < \omega_1} (r_{\alpha},r_{\alpha+1})$. However, the rationals are countable; thus we cannot have uncountably many disjoint open intervals, which means this union is not disjoint
This means we can only have countably many disjoint open intervals, so some irrationals will not be in the union, but uncountably many of them will be
If I consider an open cover of the rationals in [0,1], the sum of whose length is less than $\epsilon$, and then I now consider [0,1] with every set in that cover excluded, I now have a set with no rationals, and no intervals.One way for an irrational number $\alpha$ to be in this new set is b...
Suppose you take an open interval I of length 1, divide it into countable sub-intervals (I/2, I/4, etc.), and cover each rational with one of the sub-intervals.Since all the rationals are covered, then it seems that sub-intervals (if they don't overlap) are separated by at most a single irrat...
(For ease of construction of enumerations, WLOG, the interval [-1,1] will be used in the proofs) Let $\lambda^*$ be the Lebesgue outer measure. We previously proved that $\lambda^*(\{x\})=0$ where $x \in [-1,1]$ by covering it with the open cover $(-a,a)$ for some $a \in [0,1]$ and then noting that there are nested open intervals whose lengths tend to zero.
We also knew that by using the union $[a,b] = \{a\} \cup (a,b) \cup \{b\}$ for some $a,b \in [-1,1]$ and countable subadditivity, we can prove $\lambda^*([a,b]) = b-a$. Alternately, by using the theorem that $[a,b]$ is compact, we can construct a finite cover consisting of overlapping open intervals, then subtract away the overlaps to avoid double counting, or we can take the interval $(a,b)$ where $a<-1<1<b$ as an open cover and then consider the infimum of this interval such that $[-1,1]$ is still covered. Regardless of which route you take, the result is a finite sum whi…
We also knew that one way to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ is to take the union of all singletons that are rationals. Since there are only countably many of them, by countable subadditivity this gives us $\lambda^*(\Bbb{Q}\cap [-1,1]) = 0$. We also knew that one way to compute $\lambda^*(\Bbb{I}\cap [-1,1])$ is to use $\lambda^*(\Bbb{Q}\cap [-1,1])+\lambda^*(\Bbb{I}\cap [-1,1]) = \lambda^*([-1,1])$ and thus deducing $\lambda^*(\Bbb{I}\cap [-1,1]) = 2$
However, what I am interested here is to compute $\lambda^*(\Bbb{Q}\cap [-1,1])$ and $\lambda^*(\Bbb{I}\cap [-1,1])$ directly using open covers of these two sets. This then becomes the focus of the investigation to be written out below:
We first attempt to construct an open cover $C$ for $\Bbb{I}\cap [-1,1]$ in stages:
First denote an enumeration of the rationals as follows:
$\frac{1}{2},-\frac{1}{2},\frac{1}{3},-\frac{1}{3},\frac{2}{3},-\frac{2}{3}, \frac{1}{4},-\frac{1}{4},\frac{3}{4},-\frac{3}{4},\frac{1}{5},-\frac{1}{5}, \frac{2}{5},-\frac{2}{5},\frac{3}{5},-\frac{3}{5},\frac{4}{5},-\frac{4}{5},...$ or in short:
Actually wait: since, as the sequence grows, any rational of the form $\frac{p}{q}$ where $|p-q| > 1$ will be somewhere in between two consecutive terms of the sequence $\{\frac{n+1}{n+2}-\frac{n}{n+1}\}$, and the latter tends to zero as $n \to \infty$, it follows that all intervals will have an infimum of zero
However, any interval must contain uncountably many irrationals, so (somehow) the infimum of the union of them all is nonzero. Need to figure out how this works...
Let's say that for $N$ clients, Lotta will take $d_N$ days to retire.
For $N+1$ clients, clearly Lotta will have to make sure all the first $N$ clients don't feel mistreated. Therefore, she'll take the $d_N$ days to make sure they are not mistreated. Then she visits client $N+1$. Obviously that client won't feel mistreated anymore. But all the first $N$ clients are mistreated and, therefore, she'll start her algorithm once again and take (by supposition) $d_N$ days to make sure all of them are not mistreated. And therefore we have the recurrence $d_{N+1} = 2d_N + 1$
where $d_1 = 1$.
Yet we have $1 \to 2 \to 1$, that has $3 = d_2 \neq 2^2$ steps.
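For what it's worth, the recurrence $d_{N+1} = 2d_N + 1$ with $d_1 = 1$ solves to $d_N = 2^N - 1$ (not $2^N$), which is consistent with $d_2 = 3$ above and is easy to check mechanically:

```python
def d(n):
    """Days to retire for n clients, per the recurrence d_{N+1} = 2 d_N + 1, d_1 = 1."""
    days = 1
    for _ in range(n - 1):
        days = 2 * days + 1
    return days

print([d(n) for n in range(1, 8)])  # [1, 3, 7, 15, 31, 63, 127], i.e. 2^n - 1
```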
I was doing some computations for research purposes, which led me to this integral:
$$I(n) = \int_0^{\infty} (t^2+t^4)^n e^{-t^2-t^4}\,dt.$$
This is very suggestively written so as to employ a parametric differentiation technique, like so:
$$I(n) = (-1)^n \left.\frac{\partial^n}{\partial \alpha^n}\right|_{\alpha=1}\int_0^{\infty}e^{-\alpha(t^2+t^4)}\,dt.$$
This integral has a nice, closed form expression:
$$\int_0^{\infty} e^{-\alpha(t^2+t^4)}\,dt = \frac{1}{4} e^{\frac{\alpha}{8}}K_{\frac{1}{4}}\left(\frac{\alpha}{8}\right),$$
where $K_{\nu}$ is the modified Bessel function of the second kind. From here, I would have to employ $n$ differentiations which would be pretty messy to work out due to product rule. $n$ applications of product rule does have a nice combinatorial expression but it is far from explicit. Moreover, Bessel functions can get pretty complicated after differentiating so this seems like a bad approach.
Instead I decided to run some examples on Mathematica and computed the first 22 of these and noticed a very surprising pattern. In what follows $I_{\nu}$ is the modified Bessel function of the first kind.
$$I(0) = \frac{1}{4} e^{\frac{1}{8}}K_{\frac{1}{4}}\left(\frac{1}{8}\right)$$
$$I(1) = \frac{1}{32} e^{\frac{1}{8}}\left(K_{\frac{1}{4}}\left(\frac{1}{8}\right) + K_{\frac{3}{4}}\left(\frac{1}{8}\right)\right)$$
$$I(2) = \frac{3}{128\sqrt{2}} e^{\frac{1}{8}}\pi \left(3 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + I_{\frac{1}{4}}\left(\frac{1}{8}\right) - I_{\frac{3}{4}}\left(\frac{1}{8}\right) + I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$
$$I(3) = \frac{1}{256\sqrt{2}} e^{\frac{1}{8}}\pi \left(39 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 17 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 14 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 14 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$
$$I(4) = \frac{1}{2048\sqrt{2}} e^{\frac{1}{8}} \pi \left(1029 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 367 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 349 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 349 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$
$$I(5) = \frac{9}{8192\sqrt{2}} e^{\frac{1}{8}} \pi \left(1953 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 619 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 643 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 643 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$
$$I(6) = \frac{1}{16384\sqrt{2}} e^{\frac{1}{8}} \pi \left(185157 I_{-\frac{1}{4}} \left(\frac{1}{8}\right) + 53131 I_{\frac{1}{4}}\left(\frac{1}{8}\right) - 59572 I_{\frac{3}{4}}\left(\frac{1}{8}\right) + 59572 I_{\frac{5}{4}}\left(\frac{1}{8}\right)\right)$$
Repeat ad nauseam. Each of the denominators seems to be a power of $2$, the third and fourth terms seem to have the same coefficient (modulo a sign) and the signs are $+$, $+$, $-$, $+$. The "nice" output seems to suggest to me that there is a closed-form expression for $I(n)$ in general but I haven't the slightest clue as to how to come up with it. Can anyone shed some light on the matter?
A PDF with more expressions can be found here. (Mathematica output.)
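Short of a closed form, the parametric-differentiation identity itself is easy to sanity-check numerically for $n = 1$. A pure-Python sketch; the truncation point $t = 6$ (the integrand decays like $e^{-t^4}$), the Simpson step count, and the finite-difference step are all ad hoc choices:

```python
import math

def simpson(f, a, b, n=4000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

def F(alpha):
    # F(alpha) = integral of exp(-alpha (t^2 + t^4)); [0, 6] stands in for [0, inf)
    return simpson(lambda t: math.exp(-alpha * (t*t + t**4)), 0.0, 6.0)

def I_direct(n):
    return simpson(lambda t: (t*t + t**4)**n * math.exp(-(t*t + t**4)), 0.0, 6.0)

# I(1) = -F'(1): check with a central finite difference
h = 1e-5
fd = -(F(1 + h) - F(1 - h)) / (2 * h)
print(I_direct(1), fd)   # the two values agree to high accuracy
```

The same finite-difference trick (with higher-order stencils) would let one spot-check the Bessel-function expressions for small $n$ before hunting for the general pattern.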
Let $\Omega\subset\mathbb{R}^N$ be a bounded smooth domain. Suppose that $0<k<t<1$ and $\Phi$ is a smooth function satisfying: $\Phi(x)=0$ for $x\le k$, $\Phi(x)=1$ for $x\ge t$.
Take $u\in C_0^\infty(\overline{\Omega})$ with $u\ge 0$ and $u$ superharmonic ($-\Delta u\ge 0$). Note that $$\Delta (\Phi\circ u)=(\Phi''\circ u)|\nabla u|^2+(\Phi'\circ u)\Delta u,\tag{1}$$
so $$|\Delta (\Phi\circ u)|\le C(|\nabla u|^2+\chi_{u>k}|\Delta u|).\tag{2}$$
Since $u\ge 0$ and $-\Delta u\ge 0$, we have that $$\chi_{u>k}|\Delta u|\le \frac{1}{k}u|\Delta u|=-\frac{1}{k}u\Delta u\tag{3}.$$
We combine Green's identity with $(3)$ to conclude that $$\int_{u>k}|\Delta u|\le \frac{-1}{k}\int _\Omega u\Delta u=\frac{1}{k}\int_\Omega |\nabla u|^2.\tag{4}$$
Therefore, $(2)$ and $(4)$ give $$\int_\Omega |\Delta (\Phi\circ u)|\le C\left(1+\frac{1}{k}\right)\int_\Omega |\nabla u|^2,$$
or equivalently $$\|\Delta(\Phi\circ u)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u\|_2^2,\ \forall \ u\in C_0^\infty (\overline{\Omega}),\ u\ge 0,\ -\Delta u\ge 0.\tag{5}$$
So my question is: Can we extend $(5)$ to all functions $u\in W_0^{1,2}(\Omega)$ such that $u\ge 0$ and $u$ is superharmonic (the distributional Laplacian of $u$ is nonnegative)?
Remark 1: $C_0^\infty(\overline{\Omega})$ is the space of all $C^\infty(\overline{\Omega})$ functions which vanish on the boundary.
Remark 2: $C>0$ is a constant, which can change in every line and it depends only on $k$, $\|\Phi'\|_\infty$ and $\|\Phi''\|_\infty$.
Remark 3: In this question, I tried to solve this problem, by showing that $(1)$ were true in the sense of measure, also for $u\in W_0^{1,2}(\Omega)$, however, it does not seems to be true.
$\textbf{Update (A Supposed Proof)}$: Assume that $u\in W_0^{1,2}(\Omega)$ satisfies $u\ge 0$, $-\Delta u\ge 0$. Let $\varphi_n$ be the standard mollifier sequence. Extend $u$ by zero outside $\Omega$ (we are still using the same notation $u$) and consider the sequence $$u_n(x)=\int_{\mathbb{R}^N} \varphi_n(x-y)u(y)dy,\ x\in \mathbb{R}^N$$
Since $u\in W_0^{1,2}(\mathbb{R}^N)$, we have that $u_n\in C_0^\infty(\overline{\Omega})$, $u_n\ge 0$ and $-\Delta u_n\ge 0$. Moreover, $u_n\to u$ in $W^{1,2}(\Omega)$. From $(5)$, we have that $$\|\Delta (\Phi\circ u_n)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u_n\|_2^2,$$
therefore, we can assume without loss of generality that $\Delta(\Phi\circ u_n)$ converges in the weak star topology to some measure $\mu\in \mathcal{M}(\Omega)$. Let $v\in W_0^{1,1}(\Omega)$ be the solution of the problem
$$ \left\{ \begin{array}{ccc} \Delta v =\mu&\mbox{ in $\Omega$} \\ v=0 &\mbox{ on $\partial \Omega$} \end{array} \right. $$
By the weak convergence, we have that $$\int_\Omega \phi(x)\Delta (\Phi\circ u_n)\to \int_\Omega \phi(x) d\mu=\int_\Omega \phi(x)\Delta v, \forall \phi\in C_0(\overline{\Omega}),\tag{6}$$
hence, from $(6)$ and the definition of $v$, we conclude that $$\int_\Omega \Delta \phi(x) (\Phi\circ u_n)\to\int_\Omega \Delta \phi(x) v,\ \forall \phi\in C_0^\infty(\overline{\Omega}) .$$
As $\Phi\circ u_n \to \Phi\circ u$ in $W^{1,2}(\Omega)$, we must conclude that $v=\Phi\circ u$ and thus $\Delta(\Phi\circ u)\in \mathcal{M}(\Omega)$ and $$\|\Delta (\Phi\circ u)\|_{\mathcal{M}(\Omega)}\le C\|\nabla u\|_2^2$$
Is this proof right? Can anyone check it for me, please?
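As an aside, the mollification step in the update can be illustrated numerically in one dimension: convolving a nonnegative function with a nonnegative normalised bump keeps it nonnegative and converges back to the function. A sketch; the hat function and all parameters below are arbitrary choices, not part of the proof:

```python
import numpy as np

def bump(x):
    """Standard smooth bump supported on (-1, 1), unnormalised."""
    out = np.zeros_like(x)
    inside = np.abs(x) < 1
    out[inside] = np.exp(-1.0 / (1.0 - x[inside] ** 2))
    return out

eps = 0.05
dx = 0.001
x = np.arange(-2, 2, dx)
u = np.maximum(0.0, 1.0 - np.abs(x))           # u >= 0, vanishing away from 0

# discrete mollifier phi_eps on (-eps, eps), normalised to integrate to 1
k = bump(np.arange(-eps, eps + dx, dx) / eps)
k /= k.sum() * dx
u_eps = np.convolve(u, k, mode="same") * dx    # u_eps = u * phi_eps

print(u_eps.min() >= 0.0)                      # mollification preserves nonnegativity
print(np.abs(u_eps - u).max())                 # small, and it shrinks with eps
```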
Electromagnetic radiation possesses the properties of both a wave and a particle (the photon), and both theories are applicable. But how do we find out which theory is suitable or applicable in a particular explanation? For example, in laser trapping of atoms we use the photon concept rather than the wave. Why?
I'd like to add a slightly different take on both Anna V's and Ben Crowell's answers.
Classical optics IS the theory of one photon. There is NO approximation in this statement in free space (classical optics is actually a little bit more than this generally, but I'll get to that below). Maxwell's equations to the lone photon are EXACTLY what the Dirac equation is to the lone electron (indeed the two equation sets can be written in forms where they are the same aside from a mass term in the Dirac equation coupling the left and right circularly polarized fields together, whereas the two polarisations stay uncoupled in the mass-free Maxwell equations). Photons are Bosons, which means you can put as many of them as you like into the very same state: so you can build up classical states which correspond EXACTLY to one photon states. When you do experiments where one photon is transferred at a time, you solve Maxwell's equations for a field inside your experimental set up, normalise the solution field so that the total electromagnetic field energy is unity, and then the energy density $\frac{1}{2}\epsilon_0 |\mathbf{E}|^2 + \frac{1}{2}\mu_0 |\mathbf{H}|^2$ becomes the probability density that your one photon will be photodetected at the point in question.
The full quantised theory of light works like this: each monochromatic free space mode (plane wave) is replaced by a quantum mechanical harmonic oscillator (which you have likely dealt with). The motivation for this is that free space classical plane waves oscillate sinusoidally with time and so a classical free space wave is assumed to correspond to a coherent quantum state of the corresponding quantum harmonic oscillator. You may recall that energy may be given to/withdrawn from a quantum harmonic oscillator only in discrete packets. It is these discrete packets which are the "particles". In this bigger picture, a one photon state is a quantum superposition of next-to-ground-state (one photon) states of the infinite collection of plane wave harmonic oscillators that are "THE FIELD"; the spatial Fourier transform of the superposition coefficients propagates precisely following Maxwell's equations. Otherwise put in the Heisenberg picture: in a one photon state, the electric and magnetic field observables evolve with time precisely following Maxwell's equations. Note that in this description, it's very hard to say where the "particles" actually are: this is how I like to think of it: "the Electromagnetic Field communicates with the outside world (the other electron, quark, ... fields making up the universe) in discrete data packets, and these packets are what we call photons". Also note that, in the light of Anna V's comments below: one doesn't have to stick with plane waves: one can choose any complete orthonormal set of fields and assign quantum harmonic oscillators to each of these and the description is wholly equivalent. So one chooses whatever basis states make the analysis of their particular problem easiest.
Going back to our one photon state evolving following Maxwell's equations: when we put dielectrics and other matter into the description, we no longer purely have photons. If we represent the "atoms" of the matter by two level quantum systems, when light propagates in matter it is not really just light: it is a quantum superposition of free photons and excited matter states. The description of lossy materials in this picture is a little more complicated but it can be done: lossy materials are continua of quantum oscillators that the photon has an extremely low probability of re-emission from once it's absorbed there. So here then is how I like to think of classical optics:
Classical Optics = The Theory of One Photon + Optical Materials Science
Let's go back to the statement about putting many Bosons into the same state and thus building a classical light field that is mathematically the same as a one photon state. Mostly that's all there is to macroscopic optics: and this is what I believe Dirac meant when he said famously that "each photon interferes only with itself". In macroscopic states built by simply copying Bosons, it should be pretty clear that whether you calculate their propagations wholly separately as lone photons and then sum up their probability densities to get the field intensity, or if you simply classically calculate the field intensity, you'll get the very same result. Most macroscopic light fields behave like this and it is actually very hard to find deviations from this behavior. Aside from the experiment where we turn the light level right down low so that interference patterns are built up "click click click" one photon at a time, the photon is extremely hard to observe experimentally as a quantum and not as a classical light field. Mathematically what all this means is that macroscopic states behave as though they are products of one photon states (special Glauber "coherent" states - actually discovered by Schrödinger): and the last twist in the quantum optics tale is the phenomenon of entanglement. This is what we see in the rare and very hard to set up situations where the "product" behaviour no longer holds: but you may care to see the Wikipedia page on quantum entanglement or ask another question to find out about that one!
Lastly, to think about laser trapping, as Ben says indeed a classical field theory is enough. What you might be thinking of is laser cooling and in this case Anna V's comments apply. Here the atom is withdrawing one quantum at a time from the electromagnetic field and likewise emitting a few quanta at a time. But it's not all "particle-like" - for instance, the Fermi golden rule calculations that give the cross sections for these particle transitions all involve overlap integrals between the atomic dipoles and the wave fields - note that this is a very classical looking calculation wholly analogous to the analysis of the interaction between a short ($\ll \lambda$) dipole antenna and a classical electromagnetic field. As in Ben's answer, both the wave and particle aspects of the photon's behaviour are thus showing themselves here. Also - I might be guessing here as laser cooling is not my field - an optical photon's momentum is quite an appreciable chunk of a slow atom's momentum. Optical photons are of the order of 1eV so that their momentum is of the order $10^{-27}\mathrm{kg\,m\,s^{-1}}$ and a proton's mass is of the order of $10^{-27}\mathrm{kg}$, so the transfers are way too chunky to be reduced to continuous momentum / energy flux calculations and still hope for an accurate picture.
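The closing order-of-magnitude claims are easy to check; the constants below are standard rounded values:

```python
# An optical photon's momentum p = E/c versus a proton's mass.
eV = 1.602e-19          # J
c = 2.998e8             # m/s
m_proton = 1.673e-27    # kg

p_photon = 1.0 * eV / c         # momentum of a ~1 eV photon
print(p_photon)                 # ~5.3e-28 kg m/s, i.e. of order 1e-27
print(p_photon / m_proton)      # ~0.3 m/s velocity kick per photon on a proton
```

A recoil velocity of a few tenths of a metre per second per photon is indeed a sizeable chunk for an atom already cooled to low speeds, which is why the discreteness matters there.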
Since electromagnetic radiation possesses the property of both wave and particle (photon).
There is some confusion here: electromagnetic radiation is a classical physics concept, and yes it does display wave behavior classically.
The photon is an elementary particle, which depending on the experiment displays a wave property or a particle property. The same is true for all elementary particles. The classical electromagnetic wave is composed of zillions of photons which conspire to build up the frequency and behavior of the classical wave.
and both theories are applicable
the classical wave theory is applicable for classical optics,
but how do we find out which theory is suitable or applicable in a particular explanation? For example, in laser trapping of atoms we use the photon concept rather than the wave. Why?
In laser trapping the transitions are quantum mechanical and the photon is used because it is a quantum mechanical entity suitable for describing the microcosm. The wave/particle dual nature concept for an elementary particle defines its indeterminacy when trying to localize it.
A classical wave is a collective emergent phenomenon from an ensemble of photons and is suitable for macroscopic observations. When atomic transitions are studied the classical wave is not suitable.
both theories are applicable, but how do we find out which theory is suitable or applicable in a particular explanation?
There is no particle theory of light. There is a wave theory and a wave-particle theory. For example, the equation $E=h\nu$ can't be an element of a particle theory of light, since the left-hand side refers to a particle (the amount of energy per particle), and the right-hand side refers to a wave (the frequency of the wave).
Sometimes people will claim that light acts like a particle in some experiments and a wave in others. This is wrong for the reasons given above, and also because it implies that there are no experiments in which it acts like both. For example, you can observe double-slit diffraction with individual photons.
The wave-particle theory is valid in all cases. So the question becomes this: under what circumstances is it valid to save ourselves work by using the pure wave theory as an approximation? One way of answering this is that the pure wave theory applies when the density of photons is high enough to allow us to talk about measuring the value of a classical field at a certain point in space. A quantitative criterion for this is the concentration defined by the average number of photons found in a volume $\lambda^3$, where $\lambda$ is the wavelength. If this concentration is large, then the classical theory applies.
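This criterion can be made concrete with a back-of-envelope estimate; the beam power, wavelength, and spot size below are illustrative assumptions, not values from the answer:

```python
# Mean photon number in a volume lambda^3 for a tightly focused trapping beam.
h = 6.626e-34           # J s
c = 2.998e8             # m/s
lam = 1064e-9           # m, a common trapping wavelength (assumed)
P = 0.1                 # W, beam power (assumed)
spot_area = 1e-12       # m^2, roughly a 1 um^2 focus (assumed)

intensity = P / spot_area                 # W/m^2
E_photon = h * c / lam                    # J per photon
n_density = intensity / (c * E_photon)    # photons per m^3 in the beam
n_in_lambda3 = n_density * lam**3
print(n_in_lambda3)     # on the order of thousands, >> 1: classical field applies
```

With these numbers the concentration comes out in the thousands, consistent with the claim that a trapping beam can be treated classically.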
For example, in laser trapping of atoms we use the photon concept rather than the wave. Why?
I could be wrong about this example, since it's not my specialty, but I don't think it's true that laser trapping has to be discussed in terms of photons. The laser beam has a high concentration according to the definition above, and can be treated classically. This description is totally classical:
Proper explanation of optical trapping behavior depends upon the size of the trapped particle relative to the wavelength of light used to trap it. In cases where the dimensions of the particle are much greater than the wavelength, a simple ray optics treatment is sufficient. If the wavelength of light far exceeds the particle dimensions, the particles can be treated as electric dipoles in an electric field. For optical trapping of dielectric objects of dimensions within an order of magnitude of the trapping beam wavelength, the only accurate models involve the treatment of either time dependent or time harmonic Maxwell equations using appropriate boundary conditions.
(The preceding paragraph in the Wikipedia article does talk about photons, but that doesn't mean that the use of the photon theory is mandatory.)
Rocky Mountain Journal of Mathematics, Volume 44, Number 3 (2014), 1015-1026.

On quadratic twists of hyperelliptic curves

Abstract
Let $C$ be a hyperelliptic curve of good reduction defined over a discrete valuation field $K$ with algebraically closed residue field $k$. Assume moreover that $\text{char\,} k\ne2$. Given $d\in K^*\setminus K^{*2}$, we introduce an explicit description of the minimal regular model of the quadratic twist of $C$ by $d$. As an application, we show that if $C/\Bbb{Q}$ is a nonsingular hyperelliptic curve given by $y^2=f(x)$ with $f$ an irreducible polynomial, there exists a positive density family of prime quadratic twists of $C$ which are not everywhere locally soluble.
Article information: Rocky Mountain J. Math., Volume 44, Number 3 (2014), 1015-1026. First available in Project Euclid: 28 September 2014. Permanent link: https://projecteuclid.org/euclid.rmjm/1411945677. DOI: 10.1216/RMJ-2014-44-3-1015. Mathematical Reviews (MathSciNet): MR3264495. Zentralblatt MATH: 1304.14041.
Sadek, Mohammad. On quadratic twists of hyperelliptic curves. Rocky Mountain J. Math. 44 (2014), no. 3, 1015--1026. doi:10.1216/RMJ-2014-44-3-1015. https://projecteuclid.org/euclid.rmjm/1411945677
A key step in many proofs consists of showing that two possibly different values are in fact the same. The Pigeonhole principle can sometimes help with this.
Pigeonhole Principle: if more than $n$ objects are put into $n$ boxes, then some box contains at least two objects. Proof. Suppose each box contains at most one object. Then the total number of objects is at most $1+1+\cdots+1=n$, a contradiction.
This seemingly simple fact can be used in surprising ways. The key typically is to put objects into boxes according to some rule, so that when two objects end up in the same box it is because they have some desired relationship.
Example 1.6.2 Among any 13 people, at least two share a birth month.
Label 12 boxes with the names of the months. Put each person in the box labeled with his or her birth month. Some box will contain at least two people, who share a birth month.
Example 1.6.3 Suppose 5 pairs of socks are in a drawer. Picking 6 socks guarantees that at least one pair is chosen.
Label the boxes by "the pairs'' (e.g., the red pair, the blue pair, the argyle pair,…). Put the 6 socks into the boxes according to their descriptions.
Some uses of the principle are not nearly so straightforward.
Example 1.6.4 Suppose $a_1,\ldots,a_n$ are integers. Then some "consecutive sum'' $a_k+a_{k+1}+a_{k+2}+\cdots+a_{k+m}$ is divisible by $n$.
Consider these $n$ sums: $$\eqalign{ s_1&=a_1\cr s_2&=a_1+a_2\cr s_3&=a_1+a_2+a_3\cr &\vdots\cr s_n&=a_1+a_2+\cdots+a_n\cr }$$ These are all consecutive sums, so if one of them is divisible by $n$ we are done. If not, dividing each by $n$ leaves a non-zero remainder, $r_1=s_1\bmod n$, $r_2=s_2\bmod n$, and so on. These remainders have values in $\{1,2,3,\ldots,n-1\}$. Label $n-1$ boxes with these $n-1$ values; put each of the $n$ sums into the box labeled with its remainder mod $n$. Two sums end up in the same box, meaning that $s_i\bmod n=s_j\bmod n$ for some $j>i$; hence $s_j-s_i$ is divisible by $n$, and $s_j-s_i=a_{i+1}+a_{i+2}+\cdots+a_j$, as desired.
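The prefix-sum argument above translates directly into a short search; the sample list is arbitrary:

```python
def consecutive_sum_divisible(a):
    """Return (i, j) with i < j such that a[i] + ... + a[j-1] is divisible by
    n = len(a). Follows the pigeonhole argument on prefix sums s_0, ..., s_n:
    among n+1 remainders mod n, two must coincide, and their difference is a
    consecutive sum divisible by n."""
    n = len(a)
    seen = {0: 0}               # remainder of the empty prefix s_0 = 0
    total = 0
    for j in range(1, n + 1):
        total += a[j - 1]
        r = total % n
        if r in seen:           # two prefixes share a remainder
            return seen[r], j
        seen[r] = j

a = [3, 7, 1, 8, 4]
i, j = consecutive_sum_divisible(a)
print(i, j, sum(a[i:j]))        # 0 2 10: a[0:2] sums to 10, divisible by 5
```

Including the empty prefix means the function also finds sums $s_j$ that are divisible by $n$ outright, matching the "if one of them is divisible by $n$ we are done" case.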
A similar argument provides a proof of the Chinese Remainder Theorem: if $m$ and $n$ are relatively prime, then for any $a$ and $b$ there is an integer with remainder $a$ when divided by $m$ and remainder $b$ when divided by $n$.
Proof. Consider the integers $a,a+m,a+2m,\ldots a+(n-1)m$, each with remainder $a$ when divided by $m$. We wish to show that one of these integers has remainder $b$ when divided by $n$, in which case that number satisfies the desired property.
For a contradiction, suppose not. Let the remainders be $r_0=a\bmod n$, $r_1=a+m\bmod n$,…, $r_{n-1}=a+(n-1)m\bmod n$. Label $n-1$ boxes with the numbers $0,1,2,3,\ldots,b-1,b+1,\ldots n-1$. Put each $r_i$ into the box labeled with its value. Two remainders end up in the same box, say $r_i$ and $r_j$, with $j>i$, so $r_i=r_j=r$. This means that $$a+im=q_1n+r\quad\hbox{and}\quad a+jm=q_2n+r.$$ Hence $$\eqalign{ a+jm-(a+im)&=q_2n+r-(q_1n+r)\cr (j-i)m&=(q_2-q_1)n.\cr }$$ Since $n$ is relatively prime to $m$, this means that $n\mid(j-i)$. But since $i$ and $j$ are distinct and in $\{0,1,2,\ldots,n-1\}$, $0< j-i< n$, so $n\nmid(j-i)$. This contradiction finishes the proof.
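The existence argument can be mirrored by a brute-force scan over the same arithmetic progression $a, a+m, \ldots, a+(n-1)m$; the sample values are arbitrary:

```python
from math import gcd

def crt(a, m, b, n):
    """Find x in {a, a+m, ..., a+(n-1)m} with x % n == b % n, as in the proof.
    Requires gcd(m, n) == 1; every x in the progression has remainder a mod m."""
    assert gcd(m, n) == 1
    for k in range(n):
        x = a + k * m
        if x % n == b % n:
            return x
    raise AssertionError("unreachable when gcd(m, n) == 1")

x = crt(2, 7, 3, 5)
print(x, x % 7, x % 5)   # 23 2 3: remainder 2 mod 7 and remainder 3 mod 5
```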
More general versions of the Pigeonhole Principle can be proved by essentially the same method. A natural generalization would be something like this: If $X$ objects are put into $n$ boxes, some box contains at least $m$ objects. For example:
Theorem 1.6.6 Suppose that $r_1,\ldots,r_n$ are positive integers. If $X\ge(\sum_{i=1}^n r_i) -n + 1$ objects are put into $n$ boxes labeled $1,2,3,\ldots,n$, then some box labeled $i$ contains at least $r_i$ objects.
Proof. Suppose not. Then the total number of objects in the boxes is at most $(r_1-1)+(r_2-1)+(r_3-1)+\cdots+(r_n-1)=(\sum_{i=1}^n r_i) -n < X$, a contradiction.
This full generalization is only occasionally needed; often this simpler version is sufficient: if at least $X=n(r-1)+1$ objects are put into $n$ boxes, then some box contains at least $r$ objects. Proof. Apply the previous theorem with $r_i=r$ for all $i$.
$$\bullet\quad\bullet\quad\bullet$$
Here is a simple application of the Pigeonhole Principle that leads to many interesting questions: in any group of six people, either there are three who are mutually acquainted, or there are three who are mutually unacquainted.
We turn this into a graph theory question: Consider the graph consisting of 6 vertices, each connected to all the others by an edge, called the complete graph on $6$ vertices, and denoted $K_6$; the vertices represent the people. Color an edge red if the people represented by its endpoints are acquainted, and blue if they are not acquainted. Any choice of 3 vertices defines a triangle; we wish to show that either there is a red triangle or a blue triangle.
Consider the five edges incident at a single vertex $v$; by the Pigeonhole Principle (the version in corollary 1.6.7, with $r=3$, $X=2(3-1)+1=5$), at least three of them are the same color, call it color $C$; call the other color $D$. Let the vertices at the other ends of these three edges be $v_1$, $v_2$, $v_3$. If any of the edges between these vertices have color $C$, there is a triangle of color $C$: if the edge connects $v_i$ to $v_j$, the triangle is formed by $v$, $v_i$, and $v_j$. If this is not the case, then the three vertices $v_1$, $v_2$, $v_3$ are joined by edges of color $D$, and form a triangle of color $D$.
The number 6 in this example is special: with 5 or fewer vertices it is not true that there must be a monochromatic triangle, and with more than 6 vertices it is true. To see that it is not true for 5 vertices, we need only show an example, as in figure 1.6.1.
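Both halves of this claim are small enough to check exhaustively. The following Python sketch (my own illustration, not part of the text) enumerates every 2-coloring of the edges of $K_5$ and $K_6$ and tests for a monochromatic triangle:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    """coloring maps each edge (i, j) with i < j of K_n to a color 0 or 1."""
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def every_coloring_has_mono_triangle(n):
    """Brute force over all 2-colorings of the edges of K_n."""
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))
```

The search confirms that some coloring of $K_5$ avoids monochromatic triangles (the pentagon coloring of figure 1.6.1 is one) while no coloring of $K_6$ does, so the dividing line is indeed at 6 vertices.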
The Ramsey number $R(i)$ is the smallest integer $n$ such that when the edges of $K_n$ are colored with two colors, there is a monochromatic complete graph on $i$ vertices, $K_i$, contained within $K_n$. The example shows that $R(3)=6$.
More generally, $R(i,j)$ is the smallest integer $n$ such that when the edges of $K_n$ are colored with two colors, say $C_1$ and $C_2$, either there is a $K_i$ contained within $K_n$ all of whose edges are color $C_1$, or there is a $K_j$ contained within $K_n$ all of whose edges are color $C_2$. Using this notion, $R(k)=R(k,k)$. More generally still, $R(i_1,i_2,\ldots,i_m)$ is the smallest integer $n$ such that when the edges of $K_n$ are colored with $m$ colors, $C_1,\ldots,C_m$, then for some $j$ there is a $K_{i_j}$ contained in $K_n$ all of whose edges are color $C_j$.
Ramsey proved that in all of these cases, there actually is such a number $n$. Generalizations of this problem have led to the subject called Ramsey Theory.
Computing any particular value $R(i,j)$ turns out to be quite difficult; Ramsey numbers are known only for a few small values of $i$ and $j$, and in some other cases the Ramsey number is bounded by known numbers. Typically in these cases someone has exhibited a $K_m$ and a coloring of the edges without the existence of a monochromatic $K_i$ or $K_j$ of the desired color, showing that $R(i,j)>m$; and someone has shown that whenever the edges of $K_n$ have been colored, there is a $K_i$ or $K_j$ of the correct color, showing that $R(i,j)\le n$.
Exercises 1.6
Ex 1.6.2 Suppose that 501 distinct integers are selected from $1\ldots1000$. Show that there are distinct selected integers $a$ and $b$ such that $a\divides b$. Show that this is not always true if 500 integers are selected.
Ex 1.6.3 Each of 15 red balls and 15 green balls is marked with an integer between 1 and 100 inclusive; no integer appears on more than one ball. The value of a pair of balls is the sum of the numbers on the balls. Show there are at least two pairs, consisting of one red and one green ball, with the same value. Show that this is not necessarily true if there are 13 balls of each color.
Ex 1.6.4 Suppose we have 14 red balls and 14 green balls as in the previous exercise. Show that at least two pairs, consisting of one red and one green ball, have the same value. What about 13 red balls and 14 green balls?
Ex 1.6.5 Suppose $(a_1,a_2,\ldots,a_{52})$ are integers, not necessarily distinct. Show that there are two, $a_i$ and $a_j$ with $i\ne j$, such that either $a_i+a_j$ or $a_i-a_j$ is divisible by 100. Show that this is not necessarily true for integers $(a_1,a_2,\ldots,a_{51})$.
Ex 1.6.6 Suppose five points are chosen from a square whose sides are length $s$. (The points may be either in the interior of the square or on the boundary.) Show that two of the points are at most $s\sqrt2/2$ apart. Find five points so that no two are less than $s\sqrt2/2$ apart.
Ex 1.6.7 Show that if the edges of $K_6$ are colored with two colors, there are at least two monochromatic triangles. (Two triangles are different if each contains at least one vertex not in the other. For example, two red triangles that share an edge count as two triangles.) Color the edges of $K_6$ so that there are exactly two monochromatic triangles.
Ex 1.6.8 Suppose the edges of a $K_5$ are colored with two colors, say red and blue, so that there are no monochromatic triangles. Show that the red edges form a cycle, and the blue edges form a cycle, each with five edges. (A cycle is a sequence of edges $\{v_1,v_2\},\{v_2,v_3\},\ldots,\{v_k,v_1\}$, where all of the $v_i$ are distinct. Note that this is true in figure 1.6.1.)
Ex 1.6.10 Show that $R(3,4)=9$.
|
Tverberg Theorem (1965): Let $x_1,x_2,\dots,x_m$ be points in $R^d$, $m \ge (r-1)(d+1)+1$. Then there is a partition $S_1,S_2,\dots,S_r$ of $\{1,2,\dots,m\}$ such that $\bigcap_{j=1}^r \operatorname{conv}(x_i : i \in S_j) \ne \emptyset$.
Tverberg's theorem was conjectured by Birch, who also proved the planar case. The case $r=2$ is a 1920 theorem of Radon, which follows easily from linear algebra considerations.
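To illustrate the linear algebra behind Radon's theorem: any $d+2$ points in $R^d$ are affinely dependent, and splitting the dependence coefficients by sign produces the two parts of the partition. A NumPy sketch (my own construction, assuming the points affinely span $R^d$):

```python
import numpy as np

def radon_partition(points):
    """points: a (d+2) x d array. Returns (S1, S2, common_point): two index
    lists whose convex hulls both contain common_point."""
    pts = np.asarray(points, dtype=float)
    m, d = pts.shape                      # m = d + 2
    # Affine dependence: nonzero c with sum(c) = 0 and sum(c_i * x_i) = 0.
    A = np.vstack([pts.T, np.ones(m)])    # (d+1) x m has a nontrivial null space
    _, _, vt = np.linalg.svd(A)
    c = vt[-1]                            # null-space vector
    pos, neg = c > 0, c < 0
    t = c[pos].sum()                      # equals -c[neg].sum()
    common = (c[pos] / t) @ pts[pos]      # convex combination of the positive part,
    # which by the dependence also equals the convex combination of the negative part
    return np.where(pos)[0].tolist(), np.where(neg)[0].tolist(), common
```

For the four points $(0,0),(2,0),(0,2),(1,1)$ in the plane, the Radon point is $(1,1)$, lying in the hull of both parts.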
(The first thing to note is that Tverberg's theorem is sharp. If you have only $ (r-1)(d+1)$ points in $ R^d$ in a "generic" position then for every partition into $ r$ parts even the affine spans of the points in the parts will not have a point in common.)
The first proof of this theorem appeared in 1965. It was rather complicated and was based on the idea of first proving the theorem for points in some special position and then showing that the theorem remains true as the locations of the points are changed continuously. A common dream was to find an extension of the proof of Radon's theorem, a proof based on the two types of numbers, positive and negative; somehow we would need three, four, or $r$ types of numbers. In 1981 Helge Tverberg found yet another proof of his theorem. This proof was inspired by Barany's proof of the colored Caratheodory theorem (mentioned below) and it was still rather complicated. It once took me 6-7 hours in class to present it.
What could be the probability of hearing two new simple proofs of Tverberg's theorem on the same day? While visiting the Mittag-Leffler Institute in 1992, I met Helge one day around lunch and asked him if he had found a new proof. To my surprise, he told me about a new proof that he had found with Sinisa Vrecica. This is a proof that can be presented in class in 2 hours! It appeared (look here) along with a far-reaching conjecture (still unproved). Later in the afternoon I met Karanbir Sarkaria and he told me about a proof he had found of Tverberg's theorem which was absolutely startling. This is a proof you can present in a one-hour lecture; it also somehow goes along with the dream of having $r$ "types" of numbers replacing the role of positive and negative real numbers. Another very simple proof of Tverberg's theorem was found by Jean-Pierre Roudneff in 1999.
For further details see these blog posts (I,II).
|
In our day to day lives, we live with uncertainty about what might happen.
What will the weather be like today?
Will there be traffic on my way to work? Will I finally get the recognition and promotion I have been working for?
In life, there are many things we don’t have enough data to predict. Many things might appear to us like chance because we don’t understand the long-term effect of the choices and actions we take that might impact our life trajectories. I like to think of life as a complex system of differential algebraic equations, one we are very far from ever understanding in a mathematical sense.
However, in many areas, such as science and engineering, models have been discovered and used to make predictions of many phenomena, ranging from simple predictions of pendulum motion to complicated predictions of the stable flight of the space shuttle (how else can they know the guidance systems should work?).
So how are these predictions made, one might ask? The answer generally lies in how the model is formulated. For example, let’s imagine we have a statistical model that can state the probability that someone of a particular age will go clubbing on a Friday night, defined as the following:
$$
\begin{align} p &= \text{WillGoClubbing}(\text{Age})\\ \text{where } p &\in [0,1] \end{align} $$
In this case, we could state a probability that a person will go clubbing on a Friday night given their age. We could even use this to find the age someone is expected to be if they are going clubbing! Pretty interesting way to make predictions. These sorts of statistical predictions are done all the time in areas such as Machine Learning, Quantitative Finance, and even areas like Missile Performance (shameless plug since I work on this stuff.. okay?).
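Since WillGoClubbing is hypothetical, here is one way such a fitted model might look in code: a Python sketch where the peak age, spread, and maximum probability are all invented numbers of mine, not fitted values.

```python
import math

def will_go_clubbing(age, peak_age=22.0, spread=5.0, p_max=0.6):
    """Hypothetical probability that a person of the given age goes clubbing
    on a Friday night: a Gaussian-shaped bump in age, NOT a fitted model.
    All three shape parameters are invented for illustration."""
    return p_max * math.exp(-((age - peak_age) / spread) ** 2 / 2.0)
```

The output always lies in $[0,1]$, peaks at `peak_age`, and decays for ages far from the peak, which is the qualitative behavior the prose describes.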
However, these aren’t the only types of predictions that are made. For example, how do we predict the flight of a rocket, or the weather, or the trajectory of a bullet? How do we predict earthquakes or estimate where hurricanes will end up? Typically, these sorts of problems are tackled using models based on differential equations, some time varying and others steady state (meaning they won’t change over time). Below is an example of a few differential equations that you might find being simulated:
Newtonian Dynamics
$$
\begin{align} m \vec{a} = \sum_i^n \vec{F}_i \end{align} $$ Transient Heat Equation
$$
\begin{align} \frac{\partial T}{\partial t} + \nabla \cdot \nabla T = g(t,T) \end{align} $$ Navier-Stokes Momentum Equation
$$
\begin{align} \frac{\partial}{\partial t}\left(\rho \textbf{u}\right) + \nabla \cdot \left(\rho \textbf{u} \otimes \textbf{u} + p\textbf{I} \right) = \nabla \cdot \mathbf{\tau} + \rho \textbf{g} \end{align} $$
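As a minimal example of turning one of these models into a prediction, the sketch below integrates Newton's second law for a projectile under gravity alone, using the explicit Euler method (the constants and step size are arbitrary choices of mine):

```python
def simulate_projectile(v0x, v0y, m=1.0, g=9.81, dt=1e-4):
    """Euler-integrate m*a = sum of forces (here: gravity only) until the
    projectile returns to the ground; return the horizontal range."""
    x = y = 0.0
    vx, vy = v0x, v0y
    while True:
        ax, ay = 0.0, -g          # sum of forces per unit mass: gravity only
        x += vx * dt              # advance position with current velocity
        y += vy * dt
        vx += ax * dt             # advance velocity with current acceleration
        vy += ay * dt
        if y <= 0.0 and vy < 0.0:
            return x
```

The analytic range $2 v_{0x} v_{0y}/g$ is reproduced to within the discretization error, illustrating how a differential-equation model yields a concrete prediction.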
Many of these equations are used today by researchers to make predictions of very complicated and sensitive phenomena. Depending on the problem, solving these equations accurately, as with Navier-Stokes, can require resources like clusters or supercomputers that run for weeks or months and generate enough data to leave researchers working to understand the results for months or more. These sorts of activities are definitely not a cakewalk.
With all this said, there are still many things out there we don’t have the models or data for to make adequate predictions. Additionally, many of the models we do have are idealized or incomplete enough that they don’t always capture real-world phenomena accurately. This is why at times the weather predictions are wrong or why we can’t predict the stock market too well. The models used to make these predictions just break down over long periods of time, whether by assumptions, incomplete modeling, or not adequately taking into account transient inputs to the dynamical system.
Fortunately, due to the great abundance of data becoming available (think Big Data) and the great advancements in AI, statistical models are being built that can make improved predictions of many things: from what you’ll likely want to purchase, to what types of shows you might like based on the things you watch, to predicting captions for pictures, and more. Using these new techniques and data, more precise models are being empirically created and understood, helping pave the way to understanding more complicated phenomena down the road.
After explaining a few of the fundamentals of prediction and how it’s used, I hope in future posts to dive into algorithms that can be built to help make predictions of various kinds. In the meantime, thanks for reading and best wishes.
|
What are the conditions for formulating a Lagrangian, if the system's state is stochastic?
Some notation: \( x \in X \subset \mathbb{R}^d\) is the control variable, \(u \in \mathbb{R}^n\) is the state vector, and \( ( \Omega, S, P ) \) with \( \omega \in \Omega , 0 \leq P(A \in S) \leq 1\) is a probability space. We know that the state \(u\) obeys an equilibrium equation, \( G(u(x, \omega)) = 0\), and we wish to optimize a differentiable function of the state, \( F(u(x, \omega))\). Given any realization \( \omega \), \( x \mapsto u(x, \cdot)\) is differentiable, whereas \(F(u)\) is a simple function, e.g. \( \|u\|^2 \). If the equilibrium equation is linear, we can write it as \(G(x,\omega) u = b\), where \(b\) is a known forcing vector.
In general, if we formulate a Lagrangian functional \(L = F(u(x, \omega)) + \langle \lambda, G(u(x, \omega)) \rangle\), the multiplier vector \(\lambda\) will itself be a random variable (since the constraint function \(x \mapsto G(u(x, \omega))\) is random, there is a distinct multiplier for each realization, i.e. scenario; let's say we have \(N\) of them), and consequently, we will need to solve \(N\) adjoint problems, in addition to \(N\) "forward" solves, for a sample-average estimate of the subgradient \(\partial_x L\).
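To make the sample-average idea concrete, here is a toy sketch, entirely my own construction, with scalar state and control, linear equilibrium \((x+\omega)u = b\), and \(F(u) = u^2\): each scenario costs one forward solve and one adjoint solve, and the per-scenario values of \(\partial_x L = \lambda u\) are averaged.

```python
def saa_gradient(x, b, omegas):
    """Sample-average estimate of d/dx E[F(u(x, w))], where the state solves
    (x + w) u = b and F(u) = u**2, via one forward and one adjoint solve
    per scenario (toy scalar analogue of the N forward / N adjoint solves)."""
    grads = []
    for w in omegas:
        u = b / (x + w)              # forward solve: G(u) = (x + w) u - b = 0
        lam = -2.0 * u / (x + w)     # adjoint solve: (x + w) lam = -dF/du = -2u
        grads.append(lam * u)        # dL/dx = lam * d/dx[(x + w) u] = lam * u
    return sum(grads) / len(grads)

def saa_objective(x, b, omegas):
    """Sample-average of F(u(x, w)) over the same scenarios."""
    return sum((b / (x + w)) ** 2 for w in omegas) / len(omegas)
```

A finite-difference check on the sample-average objective confirms the adjoint-based estimate.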
I wonder if this reasoning is fruitful and whether it ties in with the more established linear stochastic programming theory.
Some doubts: in this case, what is the definition of the inner product \( \langle \cdot, \cdot \rangle\)? How is the notion of adjoint operator related to that of non-anticipating constraints?
Also, I would be happy if anyone could point me to related references. Thank you in advance.
|
In line with DilipSarwate's comment, let me put down what I understand from your question and the excerpt you posted.
First of all, the excerpt does not say that one can transmit a signal without distortion through a channel whose bandwidth is less than the signal's bandwidth. Rather it says that one can, in principle, perfectly reverse the distortions that occurred during transmission, and hence recover the original signal by suitable processing, provided that there is no noise in the channel. Of course this is effectively distortionless transmission overall.
The author gives this (idealized) example probably to support the idea that in digital communication systems one can transmit message signals over noisy channels without errors (or with very small error rates). The example in the second paragraph, however, is not what you would normally put forward to support the main idea, as the alternate answer describes.
Furthermore, the idea in the second paragraph is actually an example of signal recovery by inverse filtering; a signal $x(t)$ is distorted by a filter $H(\omega)$ while passing through a channel, as $y(t) = h(t) \star x(t)$ and $Y(\omega) = H(\omega) X(\omega)$. Then you can apply an inverse filter to $y(t)$, which would recover the original signal $x(t)$ as:$$X_i(\omega) = Y(\omega) H_{i}(\omega) \longleftrightarrow x_i(t) = x(t) \star h_i(t)$$
Which suggests that $$H_i(\omega) = \frac{1}{H(\omega)} $$for all $\omega$ such that $H(\omega) \neq 0$.
Now it's obvious that even in principle you cannot invert an arbitrary filter $H(\omega)$ into $1/H(\omega)$, because of the possibility of zeros of the forward filter; i.e., if the forward filter has nulls, then the inverse filter will approach $\infty$ at those frequencies and hence make the processed output $x_i(t)$ unbounded (useless).
That last observation is further reinforced by the introduction of noise into the system. In any practical system there is always noise. So a more correct inversion relation is:
$$ y(t) = h(t)\star x(t) + n(t) \leftrightarrow x_i(t) = y(t) \star h_i(t) = x(t)\star h(t) \star h_i(t) + n(t) \star h_i(t) $$
As you can see, the noise term at the output is what contributes to the imperfection of the inversion process, in addition to any imperfection of the inverse system impulse response $h_i(t)$.
In the example, the author therefore stresses the fact that there is no noise in the channel, $n(t) = 0$, and furthermore he implies (but does not clearly state) that the forward system (channel) frequency response has no null frequencies; i.e., there is no $\omega$ for which $H(\omega) = 0$, at least within the signal bandwidth. Under these two conditions, then, it's possible to perfectly recover the original transmitted signal $x(t)$ from the received signal $y(t)$, whatever the frequency response of the transmission channel is.
One (pretty unsuccessful) example of such a reversible channel could be obtained from a simple electronic RC filter whose impulse response is $$h_{RC}(t) = \frac{1}{RC} e^{-t/{RC}} u(t)$$ The corresponding frequency response $H_{RC}(\omega)$ does not have any zeros, and hence is invertible as $H_i(\omega) = 1/H_{RC}(\omega)$.
I leave the MATLAB simulation of this to you.
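Here is that simulation sketched in Python rather than MATLAB (one simplification of mine: everything is done on an FFT grid, so the distortion and the inversion are both exact circular convolutions). Because $H_{RC}$ has no zeros, the noiseless recovery is essentially exact:

```python
import numpy as np

# Sample h_RC(t) = (1/RC) * exp(-t/RC) * u(t) and work on an FFT grid.
fs, n, rc = 1000.0, 4096, 0.01           # sample rate, length, RC time constant
t = np.arange(n) / fs
h = (1.0 / rc) * np.exp(-t / rc) / fs    # sampled impulse response (unit area approx.)
H = np.fft.fft(h)                        # frequency response: no zeros on the grid

x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)
y = np.fft.ifft(np.fft.fft(x) * H).real      # distorted received signal y = h * x
x_rec = np.fft.ifft(np.fft.fft(y) / H).real  # inverse filtering: X_i = Y / H
```

Adding a noise term to `y` before the inversion is exactly where this recovery degrades, as described above.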
|
Let $T$ be a linear operator on an inner product space $V$. If $\langle T(x),y \rangle=0$ for all $x,y \in V$ then $T=T_0$ where it means zero transformation.
Prove this result if the equality holds for all $x,y$ in some basis for $V$.
The hint says use the theorem below.
[Let $V$ and $W$ be vector spaces over $F$ and suppose that $\{v_1,\dots,v_n\}$ is a basis for $V$. For $w_1,\dots,w_n$ in $W$, there exists exactly one linear transformation $T:V \rightarrow W$ such that $T(v_i)=w_i$.] But I don't know how to apply this. Actually, the true meaning of this question is still ambiguous to me. Does it mean that $x,y \in \beta$ for some basis $\beta$, and $\langle T(x),y \rangle=0$?
|
Definition
A two-class classifier.
A Feedforward Neural Network without hidden layers.
More specifically: It maps its input $x$ to output $f(x)$ with parameter $\theta$:
$$ \begin{equation} f(x) = \begin{cases} 1 & \text{if }x \cdot \theta > 0 \\ 0 & \text{otherwise} \end{cases} \end{equation} $$ Learning Algorithm Randomly initialize $\theta$ For each example $i$ in the training set, update the parameter $\theta$ as follows: $$\theta = \theta + (y_i - f(x_i)) \cdot x_i$$ Repeat step 2 until the classifier classifies most examples correctly.
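A minimal Python sketch of the algorithm above, using the conventional activation ($f(x)=1$ when $x\cdot\theta>0$) and a constant bias feature; the toy dataset (the AND function, which is linearly separable) is my own choice:

```python
def predict(theta, x):
    """Perceptron activation: 1 if theta . x > 0, else 0."""
    return 1 if sum(t * xi for t, xi in zip(theta, x)) > 0 else 0

def train_perceptron(examples, epochs=100):
    """examples: list of (x, y) with x a feature tuple (bias included), y in {0, 1}."""
    theta = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, y in examples:
            err = y - predict(theta, x)   # update rule: theta <- theta + (y - f(x)) x
            theta = [t + err * xi for t, xi in zip(theta, x)]
    return theta

# AND function with a constant bias feature in position 0.
data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
theta = train_perceptron(data)
```

Since the data is linearly separable, the convergence theorem guarantees the updates stop after finitely many mistakes, and the trained `theta` classifies every example correctly. Swapping in the XOR labels instead would never converge, as the Disadvantages section below notes.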
Properties Let’s change the view: let each example define a constraint, so the expected $\theta$ lies in the intersection of those constraints. If you know Linear Programming, then you will see that this is actually a Half-Plane Intersection Problem. It’s clear to see that if $\theta_1, \theta_2$ are legal, then $a\theta_1 + b\theta_2$ is legal when $a + b = 1, a, b \ge 0$. It is easy to prove: a half-plane is a convex set, and the intersection of convex sets is convex. This property helps to prove the convergence of the algorithm. If you know how to solve Linear Regression problems with Gradient Descent, then you will know that sometimes we may pass the best solution if we choose a learning rate which is too large. In this algorithm, the same problem exists: it will converge, but it may not converge to a solution which fits the dataset perfectly. So consider a “generously feasible solution”, one that lies within the feasible region by a margin at least as great as the length of the input vector that defines each constraint plane. The algorithm can only be proved to converge to a “generously feasible solution”. Thus the solution in the proof of convergence below means “generously feasible solution”. Convergence
As you can see from the definition, Perceptron is a linear classifier. Thus, if the dataset is not linearly separable, this algorithm will not converge.
If you can imagine plotting the dataset in a plane (or space), with some knowledge of linear algebra you can easily see that we are actually adjusting the decision boundary (separating hyperplane) according to every single example, and each iteration makes the parameter $\theta$ better. This algorithm will indeed converge, and here is the proof:
Because the feasible region is a convex set, the modifications made for other examples won’t make the decision boundary worse. As for a single misclassified example $i$, each modification changes the value of $\theta \cdot x_i$ by $||x_i||^2$, and the total amount that needs to change for it to be classified correctly is $|\theta \cdot x_i|$. Thus, the maximum number of iterations before the classifier makes the right classification for each example is $O(\max\limits_i{(\frac{|\theta \cdot x_i|}{||x_i||^2})})$
And here is a more rigorous proof, read it if you like: http://leijun00.github.io/2014/08/perceptron/
Disadvantages
The most well-known disadvantage of this algorithm is that it can’t simulate the XOR function. But actually, there are more general theorems, like the “Group Invariance Theorem”. So I decided to read the original text first; then I will come back to finish this part. Perceptrons: an introduction to computational geometry (Minsky & Papert, 1969)
—————————— UPD 2017.8.15 ——————————
I thought it was a paper, but it’s actually a book with 292 pages! So I’m giving up on reading it for now. Maybe I will read it in university? Perceptrons: an introduction to computational geometry
|
The Annals of Statistics, Volume 27, Number 3 (1999), 1041-1060.
Asymptotics when the number of parameters tends to infinity in the Bradley-Terry model for paired comparisons
Abstract
We are concerned here with establishing the consistency and asymptotic normality for the maximum likelihood estimator of a “merit vector” $(u_0,\dots,u_t)$, representing the merits of $t +1$ teams (players, treatments, objects), under the Bradley–Terry model, as $t \to \infty$. This situation contrasts with the well-known Neyman–Scott problem under which the number of parameters grows with $t$ (the amount of sampling), and for which the maximum likelihood estimator fails even to attain consistency. A key feature of our proof is the use of an effective approximation to the inverse of the Fisher information matrix. Specifically, under the Bradley–Terry model, when teams $i$ and $j$ with respective merits $u_i$ and $u_j$ play each other, the probability that team $i$ prevails is assumed to be $u_i/(u_i + u_j)$. Suppose each pair of teams play each other exactly $n$ times for some fixed $n$. The objective is to estimate the merits, $u_i$’s, based on the outcomes of the $nt(t +1)/2$ games. Clearly, the model depends on the $u_i$’s only through their ratios. Under some condition on the growth rate of the largest ratio $u_i/u_j (0 \leq i, j \leq t)$ as $t \to \infty$, the maximum likelihood estimator of $(u_1/u_0,\dots,u_t/u_0)$ is shown to be consistent and asymptotically normal. Some simulation results are provided.
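For experimentation, the maximum likelihood merits discussed in the abstract can be computed with the classical fixed-point (minorization-maximization) iteration for the Bradley-Terry model; the sketch below is mine, not from the paper:

```python
def bradley_terry_mle(wins, iters=1000):
    """wins[i][j] = number (possibly fractional) of games team i beat team j.
    Returns merit estimates u normalized so u[0] = 1; assumes every team
    has at least one win and the comparison graph is connected."""
    t = len(wins)
    u = [1.0] * t
    for _ in range(iters):
        new = []
        for i in range(t):
            w_i = sum(wins[i])                           # total wins of team i
            denom = sum((wins[i][j] + wins[j][i]) / (u[i] + u[j])
                        for j in range(t) if j != i)     # games against j, weighted
            new.append(w_i / denom)                      # fixed-point / MM update
        u = [v / new[0] for v in new]                    # normalize: u[0] = 1
    return u
```

With two teams of true merits $1$ and $2$ and win counts matching the model's expected proportions, the iteration recovers the merit ratio $u_1/u_0 = 2$.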
Article information
Source: Ann. Statist., Volume 27, Number 3 (1999), 1041-1060.
Dates: First available in Project Euclid: 5 April 2002
Permanent link: https://projecteuclid.org/euclid.aos/1018031267
Digital Object Identifier: doi:10.1214/aos/1018031267
Mathematical Reviews number (MathSciNet): MR1724040
Zentralblatt MATH identifier: 0951.62061
Citation
Simons, Gordon; Yao, Yi-Ching. Asymptotics when the number of parameters tends to infinity in the Bradley-Terry model for paired comparisons. Ann. Statist. 27 (1999), no. 3, 1041--1060. doi:10.1214/aos/1018031267. https://projecteuclid.org/euclid.aos/1018031267
|
J. D. Hamkins, “Every countable model of set theory embeds into its own constructible universe,” J. Math. Logic, vol. 13, iss. 2, p. 1350006, 27, 2013.
@article {Hamkins2013:EveryCountableModelOfSetTheoryEmbedsIntoItsOwnL,
AUTHOR = {Hamkins, Joel David},
TITLE = {Every countable model of set theory embeds into its own
constructible universe},
JOURNAL = {J. Math. Logic},
FJOURNAL = {J.~Math.~Logic},
VOLUME = {13},
YEAR = {2013},
NUMBER = {2},
PAGES = {1350006, 27},
ISSN = {0219-0613},
MRCLASS = {03C62 (03E99 05C20 05C60 05C63)},
MRNUMBER = {3125902},
MRREVIEWER = {Robert S. Lubarsky},
DOI = {10.1142/S0219061313500062},
eprint = {1207.0963},
archivePrefix = {arXiv},
primaryClass = {math.LO},
URL = {http://wp.me/p5M0LV-jn},
}
In this article, I prove that every countable model of set theory $\langle M,{\in^M}\rangle$, including every well-founded model, is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$. Another way to say this is that there is an embedding
$$j:\langle M,{\in^M}\rangle\to \langle L^M,{\in^M}\rangle$$ that is elementary for quantifier-free assertions in the language of set theory.
Main Theorem 1. Every countable model of set theory $\langle M,{\in^M}\rangle$ is isomorphic to a submodel of its own constructible universe $\langle L^M,{\in^M}\rangle$.
The proof uses universal digraph combinatorics, including an acyclic version of the countable random digraph, which I call the countable random $\mathbb{Q}$-graded digraph, and higher analogues arising as uncountable Fraisse limits, leading eventually to what I call the hypnagogic digraph, a set-homogeneous, class-universal, surreal-numbers-graded acyclic class digraph, which is closely connected with the surreal numbers. The proof shows that $\langle L^M,{\in^M}\rangle$ contains a submodel that is a universal acyclic digraph of rank $\text{Ord}^M$, and so in fact this model is universal for all countable acyclic binary relations of this rank. When $M$ is ill-founded, this includes all acyclic binary relations. The method of proof also establishes the following, thereby answering a question posed by Ewan Delanoy.
Main Theorem 2. The countable models of set theory are linearly pre-ordered by embeddability: for any two countable models of set theory $\langle M,{\in^M}\rangle$ and $\langle N,{\in^N}\rangle$, either $M$ is isomorphic to a submodel of $N$ or conversely. Indeed, the countable models of set theory are pre-well-ordered by embeddability in order type exactly $\omega_1+1$.
The proof shows that the embeddability relation on the models of set theory conforms with their ordinal heights, in that any two models with the same ordinals are bi-embeddable; any shorter model embeds into any taller model; and the ill-founded models are all bi-embeddable and universal.
The proof method arises most easily in finite set theory, showing that the nonstandard hereditarily finite sets $\text{HF}^M$ coded in any nonstandard model $M$ of PA or even of $I\Delta_0$ are similarly universal for all acyclic binary relations. This strengthens a classical theorem of Ressayre, while simplifying the proof, replacing a partial saturation and resplendency argument with a soft appeal to graph universality.
Main Theorem 3. If $M$ is any nonstandard model of PA, then every countable model of set theory is isomorphic to a submodel of the hereditarily finite sets $\langle \text{HF}^M,{\in^M}\rangle$ of $M$. Indeed, $\langle\text{HF}^M,{\in^M}\rangle$ is universal for all countable acyclic binary relations.
In particular, every countable model of ZFC and even of ZFC plus large cardinals arises as a submodel of $\langle\text{HF}^M,{\in^M}\rangle$. Thus, inside any nonstandard model of finite set theory, we may cast out some of the finite sets and thereby arrive at a copy of any desired model of infinite set theory, having infinite sets, uncountable sets or even large cardinals of whatever type we like.
The proof, in brief: for every countable acyclic digraph, consider the partial order induced by the edge relation, and extend this order to a total order, which may be embedded in the rational order $\mathbb{Q}$. Thus, every countable acyclic digraph admits a $\mathbb{Q}$-grading, an assignment of rational numbers to nodes such that all edges point upwards. Next, one can build a countable homogeneous, universal, existentially closed $\mathbb{Q}$-graded digraph, simply by starting with nothing, and then adding finitely many nodes at each stage, so as to realize the finite pattern property. The result is a computable presentation of what I call the countable random $\mathbb{Q}$-graded digraph $\Gamma$. If $M$ is any nonstandard model of finite set theory, then we may run this computable construction inside $M$ for a nonstandard number of steps. The standard part of this nonstandard finite graph includes a copy of $\Gamma$. Furthermore, since $M$ thinks it is finite and acyclic, it can perform a modified Mostowski collapse to realize the graph in the hereditarily finite sets of $M$. By looking at the sets corresponding to the nodes in the copy of $\Gamma$, we find a submodel of $M$ that is isomorphic to $\Gamma$, which is universal for all countable acyclic binary relations. So every model of ZFC is isomorphic to a submodel of $M$.
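The first step of this sketch, that every (here: finite) acyclic digraph admits a $\mathbb{Q}$-grading, is concretely algorithmic: topologically sort the digraph and assign increasing rationals. A toy Python illustration (my own code, for finite digraphs only):

```python
from fractions import Fraction

def q_grading(nodes, edges):
    """Assign rationals to the nodes of an acyclic digraph so that every
    edge (a, b) points upwards: grade[a] < grade[b]. Kahn's topological sort."""
    indeg = {v: 0 for v in nodes}
    out = {v: [] for v in nodes}
    for a, b in edges:
        out[a].append(b)
        indeg[b] += 1
    ready = [v for v in nodes if indeg[v] == 0]
    grade, next_q = {}, Fraction(0)
    while ready:
        v = ready.pop()
        grade[v] = next_q          # grades strictly increase in processing order
        next_q += Fraction(1, 2)   # any increasing sequence of rationals works
        for w in out[v]:
            indeg[w] -= 1
            if indeg[w] == 0:
                ready.append(w)
    if len(grade) != len(nodes):
        raise ValueError("digraph has a cycle")
    return grade
```

Every edge source is processed before its target, so the grading points upwards; a cyclic input is detected and rejected.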
The article closes with a number of questions, which I record here (and which I have also asked on mathoverflow: Can there be an embedding $j:V\to L$ from the set-theoretic universe $V$ to the constructible universe $L$, when $V\neq L$?) Although the main theorem shows that every countable model of set theory embeds into its own constructible universe $$j:M\to L^M,$$ this embedding $j$ is constructed completely externally to $M$ and there is little reason to expect that $j$ could be a class in $M$ or otherwise amenable to $M$. To what extent can we prove or refute the possibility that $j$ is a class in $M$? This amounts to considering the matter internally as a question about $V$. Surely it would seem strange to have a class embedding $j:V\to L$ when $V\neq L$, even if it is elementary only for quantifier-free assertions, since such an embedding is totally unlike the sorts of embeddings that one usually encounters in set theory. Nevertheless, I am at a loss to refute the hypothesis, and the possibility that there might be such an embedding is intriguing, if not tantalizing, for one imagines all kinds of constructions that pull structure from $L$ back into $V$.
Question 1. Can there be an embedding $j:V\to L$ when $V\neq L$?
By embedding, I mean an isomorphism from $\langle V,{\in}\rangle$ to its range in $\langle L,{\in}\rangle$, which is the same as a quantifier-free-elementary map $j:V\to L$. The question is most naturally formalized in Gödel-Bernays set theory, asking whether there can be a GB-class $j$ forming such an embedding. If one wants $j:V\to L$ to be a definable class, then this of course implies $V=\text{HOD}$, since the definable $L$-order can be pulled back to $V$, via $x\leq y\iff j(x)\leq_L j(y)$. More generally, if $j$ is merely a class in Gödel-Bernays set theory, then the existence of an embedding $j:V\to L$ implies global choice, since from the class $j$ we can pull back the $L$-order. For these reasons, we cannot expect every model of ZFC or of GB to have such embeddings. Can they be added generically? Do they have some large cardinal strength? Are they outright refutable?
If they are not outright refutable, then it would seem natural that these questions might involve large cardinals; perhaps $0^\sharp$ is relevant. But I am unsure which way the answers will go. The existence of large cardinals provides extra strength, but may at the same time make it harder to have the embedding, since it pushes $V$ further away from $L$. For example, it is conceivable that the existence of $0^\sharp$ will enable one to construct the embedding, using the Silver indiscernibles to find a universal submodel of $L$; but it is also conceivable that the non-existence of $0^\sharp$, because of covering and the corresponding essential closeness of $V$ to $L$, may make it easier for such a $j$ to exist. Or perhaps it is simply refutable in any case. The first-order analogue of the question is:
Question 2. Does every set $A$ admit an embedding $j:\langle A,{\in}\rangle \to \langle L,{\in}\rangle$? If not, which sets do admit such embeddings?
The main theorem shows that every countable set $A$ embeds into $L$. What about uncountable sets? Let us make the question extremely concrete:
Question 3. Does $\langle V_{\omega+1},{\in}\rangle$ embed into $\langle L,{\in}\rangle$? How about $\langle P(\omega),{\in}\rangle$ or $\langle\text{HC},{\in}\rangle$?
It is also natural to inquire about the nature of $j:M\to L^M$ even when it is not a class in $M$. For example, can one find such an embedding for which $j(\alpha)$ is an ordinal whenever $\alpha$ is an ordinal? The embedding arising in the proof of the main theorem definitely does not have this feature.
Question 4. Does every countable model $\langle M,{\in^M}\rangle$ of set theory admit an embedding $j:M\to L^M$ that takes ordinals to ordinals?
Probably one can arrange this simply by being a bit more careful with the modified Mostowski procedure in the proof of the main theorem. And if this is correct, then numerous further questions immediately come to mind, concerning the extent to which we ensure more attractive features for the embeddings $j$ that arise in the main theorems. This will be particularly interesting in the case of well-founded models, as well as in the case of $j:V\to L$, as in question , if that should be possible.
Question 5. Can there be a nontrivial embedding $j:V\to L$ that takes ordinals to ordinals?
Finally, I inquire about the extent to which the main theorems of the article can be extended from the countable models of set theory to the $\omega_1$-like models:
Question 6. Does every $\omega_1$-like model of set theory $\langle M,{\in^M}\rangle$ admit an embedding $j:M\to L^M$ into its own constructible universe? Are the $\omega_1$-like models of set theory linearly pre-ordered by embeddability?
|
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
So I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83g. I've also got a jug of water that has 500g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle such that the weight force of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1$$?
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including: ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat, commonly used in the Mathematics chat room. An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.
Anonymous
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic but not increased reviewing or something else, I'm not sure
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from the experts on the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation, made disjoint from authorial intent, however, which is a central part of Deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean towards that idea. I do too.
|
Let $n$ be a square-free integer. Then for a given integer $m$, $m$ is a square modulo $n$ if and only if the sum
$$\displaystyle \sum_{d | n} \left(\frac{m}{d}\right) > 0.$$
In fact one can write the indicator function $\Psi(m,n)$, which equals unity if $m$ is a square mod $n$ and zero otherwise, as
$$\Psi(m,n) = \frac{1}{2^{\omega(n)}} \sum_{d | n} \left(\frac{m}{d}\right).$$
In both instances $\left(\frac{\cdot}{\cdot}\right)$ denotes the Jacobi symbol.
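As a sanity check, the identity for $\Psi(m,n)$ can be verified directly for small odd square-free $n$ with $\gcd(m,n)=1$; the Jacobi-symbol routine below is a standard textbook implementation (not taken from the cited papers):

```python
def jacobi(a, n):
    """Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity."""
    assert n > 0 and n % 2 == 1
    a %= n
    result = 1
    while a != 0:
        while a % 2 == 0:          # pull out factors of 2
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                # flip, applying reciprocity
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def psi(m, n):
    """Indicator: 1 iff m is a square mod n (n odd square-free, gcd(m,n)=1)."""
    divisors = [d for d in range(1, n + 1) if n % d == 0]
    # omega(n) = number of prime divisors of n (trial division; n is tiny here)
    omega = sum(1 for p in divisors
                if p > 1 and all(p % q for q in range(2, p)))
    return sum(jacobi(m, d) for d in divisors) // 2**omega

# quadratic residues mod 7 are {1, 2, 4}; mod 15 a residue must be one mod 3 and mod 5
assert psi(2, 7) == 1 and psi(3, 7) == 0
assert psi(4, 15) == 1 and psi(2, 15) == 0
```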
It is a problem of central interest to evaluate the following sum:
$$\displaystyle \sum_{\substack{mn \leq X \\ mn \text{ square-free}}} \Psi(m,n) \Psi(n,m).$$
This sum, for example, is essential to computing the average size of the 4-class group of quadratic fields (E. Fouvry, J. Kluners,
On the 4-rank of class groups of quadratic fields, Invent. Math. 167 (2007), 455-513) and the average size of the 2-Selmer group of congruent number curves (D.R. Heath-Brown, The size of Selmer groups for the congruent number problem, Invent. Math 111 (1993), 171-195). This sum can be shown to be of size of order $X$ (see the arguments in the two papers cited above).
It turns out that in some cases one has to consider the following related sum. Let $m,n$ be positive integers such that $m^2 + n^2$ is square-free. Define the function
$$\displaystyle \Phi(m,n) = \begin{cases} 1 & \text{if } p | m^2 + n^2 \Rightarrow \left(\frac{m}{p}\right) = 1 \\ 0 & \text{otherwise}. \end{cases}$$
How does one evaluate the sum
$$\displaystyle \sum_{\substack{m^2 + n^2 \leq X \\ m^2 + n^2 \text{ square-free}}} \Phi(m,n) \Phi(n,m)?$$
|
LaTeX supports many worldwide languages by means of some special packages. In this article is explained how to import and use those packages to create documents in
German.
German language has some special characters. For this reason the preamble of your file must be modified accordingly to support these characters and some other features.
\documentclass{article}
%encoding
%--------------------------------------
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
%--------------------------------------
%German-specific commands
%--------------------------------------
\usepackage[ngerman]{babel}
%--------------------------------------
%Hyphenation rules
%--------------------------------------
\usepackage{hyphenat}
\hyphenation{Mathe-matik wieder-gewinnen}
%--------------------------------------
\begin{document}
\tableofcontents
\vspace{2cm} %Add a 2cm space
\begin{abstract}
Dies ist eine kurze Zusammenfassung der Inhalte des in deutscher Sprache verfassten Dokuments.
\end{abstract}
\section{Einleitendes Kapitel}
Dies ist der erste Abschnitt. Hier können wir einige zusätzliche Elemente hinzufügen und alles wird korrekt geschrieben und umgebrochen werden. Falls ein Wort für eine Zeile zu lang ist, wird \texttt{babel} versuchen je nach Sprache richtig zu trennen.
\section{Eingabe mit mathematischer Notation}
In diesem Abschnitt ist zu sehen, was mit Macros, die definiert worden, geschieht.
\[ \lim x = \theta + 152383.52 \]
\end{document}
There are three packages in this document related to the encoding and the special characters. These packages will be explained in the next sections.
If you are looking for instructions on how to use more than one language in a single document, for instance English and German, see the International language support article.
Modern computer systems allow you to input letters of national alphabets directly from the keyboard. In order to handle a variety of input encodings used for different groups of languages and/or on different computer platforms LaTeX employs the
inputenc package to set up input encoding. In this case the package properly displays characters in the German alphabet. To use this package add the next line to the preamble of your document: \usepackage[utf8]{inputenc} The recommended input encoding is utf-8. You can use other encodings depending on your operating system.
For proper LaTeX document generation you must also choose a font encoding that supports the specific characters of the German language; this is accomplished by the fontenc package: \usepackage[T1]{fontenc} Even though the default encoding works well for German, using this specific encoding avoids glitches that occur if you copy text containing some specific characters from the generated PDF. The default LaTeX font encoding is OT1.
To extend the default LaTeX capabilities, enabling proper hyphenation and translating the names of the document elements, import the
babel package for the German language. \usepackage[ngerman]{babel} As you may see in the example in the introduction, instead of "abstract" and "Contents" the German words "Zusammenfassung" and "Inhaltsverzeichnis" are used.
The new orthographic rules approved in 1998 are supported by babel using the ngerman parameter instead of german, which supports the old orthography.
Sometimes for formatting reasons some words have to be broken up into syllables separated by a - (hyphen) to continue the word on a new line. For example, Mathematik could become Mathe-matik. The package babel, whose usage was described in the previous section, usually does a good job breaking up the words correctly, but if this is not the case you can use a couple of commands in your preamble.
\usepackage{hyphenat} \hyphenation{Mathe-matik wieder-gewinnen}
The first command will import the package
hyphenat and the second line is a list of space-separated words with defined hyphenation rules. On the other hand, if you want a word not to be broken automatically, use the
{\nobreak word} command within your document.
Commands enabled for the German language
Command  Description
"a       Produces the character ä; can be used with upper-case and lower-case vowels.
"s, "z   Produce the German character ß; work on upper-case and lower-case.
"ck      For ck to be hyphenated as k-k.
"ff      For ff to be hyphenated as ff-f; also implemented for l, m, n, p, r and t.
"|       Disables the ligature at this point.
"-       An explicit hyphen sign that allows hyphenation in the rest of the word.
"`       Left German double quotes, „
"'       Right German double quotes, “
"<       French left double quotes, «
">       French right double quotes, »
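A few of these shorthands combined in context (a small sketch; it assumes the preamble shown earlier with \usepackage[ngerman]{babel}, which activates the shorthands):

```latex
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage[ngerman]{babel}
\begin{document}
% "a gives ä, "s gives ß, "` and "' give German double quotes
Stra"se und M"archen: "`Guten Tag!"'
% "- inserts an explicit hyphen while keeping hyphenation in the rest of the word
Donau"-dampfschiff
\end{document}
```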
For more information see
|
I am confused how to interpret the result of performing a normalized correlation with a constant vector. Since you have to divide by the standard deviation of both vectors (reference: http://en.wikipedia.org/wiki/Cross-correlation ), if one of them is constant (say a vector of all 5's, which has standard deviation of zero), then the correlation is infinity, but in fact the correlation should be zero right? This isn't just a corner case, in general if the standard deviation of one of the vectors is small, the correlation to
any other vector is very high, which obviously doesn't make sense. Can anyone explain my misinterpretation?
Let $\boldsymbol{x}$ and $\boldsymbol{y}$ be your two vectors and let $\boldsymbol{\bar{x}} \equiv \bar{x} \boldsymbol{1}$ and $\boldsymbol{\bar{y}} \equiv \bar{y} \boldsymbol{1}$ be constant vectors for the means of the two original vectors. The components of the sample correlation are:
$$\begin{matrix} s_{x,y}^2 = (\boldsymbol{x} - \boldsymbol{\bar{x}}) \cdot (\boldsymbol{y} - \boldsymbol{\bar{y}}) & & s_x = ||\boldsymbol{x} - \boldsymbol{\bar{x}}|| & & s_y = ||\boldsymbol{y} - \boldsymbol{\bar{y}}||. \end{matrix}$$
The sample correlation between $\boldsymbol{x}$ and $\boldsymbol{y}$ is just
the cosine of the angle between the vectors $\boldsymbol{x} - \boldsymbol{\bar{x}}$ and $\boldsymbol{y} - \boldsymbol{\bar{y}}$. Letting this angle be $\theta$ we have:
$$\rho_{x,y} = \frac{(\boldsymbol{x} - \boldsymbol{\bar{x}}) \cdot (\boldsymbol{y} - \boldsymbol{\bar{y}})}{||\boldsymbol{x} - \boldsymbol{\bar{x}}|| \cdot ||\boldsymbol{y} - \boldsymbol{\bar{y}}||} = \cos \theta.$$
Since scaling of either vector scales the covariance and standard deviation equivalently, this means that correlation is unaffected by scale. It is not correct to say that a low standard deviation gives a high correlation. What matters for correlation is the angle between the vectors, not their lengths.
In the special case where $\boldsymbol{y} \propto \boldsymbol{1}$ (i.e., $\boldsymbol{y}$ is a constant vector) you have $\boldsymbol{y} - \boldsymbol{\bar{y}} = \boldsymbol{0}$ which then gives $s_{x,y}^2 = 0$ and $s_{y} = 0$. In this case the correlation is undefined. Geometrically this occurs because there is no defined angle with the zero vector.
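This geometric picture is easy to check numerically; a small NumPy sketch (the helper name is ad hoc):

```python
import numpy as np

def corr_as_cosine(x, y):
    # correlation = cosine of the angle between the mean-centered vectors
    xc = x - x.mean()
    yc = y - y.mean()
    return xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])   # y = 2x: angle 0, correlation 1
assert abs(corr_as_cosine(x, y) - 1.0) < 1e-12
# scaling either vector leaves the correlation unchanged: no "small sd => high corr"
assert abs(corr_as_cosine(x, 100 * y) - 1.0) < 1e-12
# a constant vector centers to the zero vector: 0/0, i.e. undefined (nan)
z = np.full(4, 5.0)
with np.errstate(divide="ignore", invalid="ignore"):
    assert np.isnan(corr_as_cosine(x, z))
```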
|
Description
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of short vector problems on arbitrary lattices. As icing on the cake, our scheme is quite efficient, and has very short ciphertexts. Our construction improves upon previous works in two aspects: 1. We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new {\em re-linearization} technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. 2. More importantly, we deviate from the ``squashing paradigm'' used in all previous works. We introduce a new {\em dimension reduction} technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption). Since our scheme has very short ciphertexts, we use it to construct an asymptotically-efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \polylog\,k+\log |DB|$ bits per single-bit query, which is better than any known scheme. Previously, it was not known how to achieve a communication complexity of even $\poly(k, \log|DB|)$ based on LWE.
|
Consider a zero-temperature, one-dimensional crystal with allowed electron momenta $k_n = \frac{2\pi n}{L}$.
Question: Which is the more correct way to think about the Fermi sea?
Sharp plane waves --
$$ \prod_{\epsilon_k<\epsilon_f} c_{k\uparrow}^\dagger c_{k\downarrow}^\dagger \lvert 0\rangle$$
or
Wave packets that are very narrow in momentum space --
$$ \prod_{\epsilon_{k_1}<\epsilon_f, \epsilon_{k_2}<\epsilon_f} \alpha_{k_1\uparrow}^\dagger(x_{k_1}) \alpha_{k_2\downarrow}^\dagger(x_{k_2}) \lvert 0\rangle,$$
where
$$\alpha_{k\uparrow}^\dagger(x_k) = \sum_q \exp\biggl[-\frac{1}{2}\biggl(\frac{q-k}{\delta}\biggr)^2 + i q x_k\biggr]\ c_{q\uparrow}^\dagger,$$
with a small width $\delta$ and randomly distributed positions $x_{k_1},x_{k_2}$.
Also, if (2) is more correct, what determines the width $\delta$?
Discussion: I expected sharp plane waves. But wave packets seem necessary to make sense of the semiclassical equation of motion:
$$\frac{d}{dt}k = -e E\tag{3}$$
which, as I understand it, applies to the center $k$ of a given wave packet.
For a definite example where wave packets seem necessary, consider Bloch oscillations. One solves for the positions $x_k$ as a function of time using (3).
|
There are other ways that a function might be said to generate a sequence, other than as what we have called a generating function. For example,$$e^x = \sum_{n=0}^\infty {1\over n!} x^n$$is the generating function for the sequence $1,1,{1\over2}, {1\over3!},\ldots$. But if we write the sum as$$e^x = \sum_{n=0}^\infty 1\cdot {x^n\over n!},$$considering the $n!$ to be part of the expression $x^n/n!$, we might think of this same function as generating the sequence $1,1,1,\ldots$, interpreting 1 as the coefficient of $x^n/n!$. This is not a very interesting sequence, of course, but this idea can often prove fruitful. If $$f(x) = \sum_{n=0}^\infty a_n {x^n\over n!},$$we say that $f(x)$ is the
exponential generating function for $a_0,a_1,a_2,\ldots$.
Example 3.2.1 Find an exponential generating function for the number of permutations with repetition of length $n$ of the set $\{a,b,c\}$, in which there are an odd number of $a\,$s, an even number of $b\,$s, and any number of $c\,$s.
For a fixed $n$ and fixed numbers of the letters, we already know how to do this. For example, if we have 3 $a\,$s, 4 $b\,$s, and 2 $c\,$s, there are ${9\choose 3\;4\;2}$ such permutations. Now consider the following function: $$ \sum_{i=0}^\infty {x^{2i+1}\over (2i+1)!} \sum_{i=0}^\infty {x^{2i}\over (2i)!} \sum_{i=0}^\infty {x^{i}\over i!}. $$ What is the coefficient of $x^9/9!$ in this product? One way to get an $x^9$ term is $$ {x^3\over 3!}{x^4\over 4!}{x^2\over 2!}={9!\over 3!\;4!\;2!}{x^9\over 9!} ={9\choose 3\;4\;2}{x^9\over 9!}. $$ That is, this one term counts the number of permutations in which there are 3 $a\,$s, 4 $b\,$s, and 2 $c\,$s. The ultimate coefficient of $x^9/9!$ will be the sum of many such terms, counting the contributions of all possible choices of an odd number of $a\,$s, an even number of $b\,$s, and any number of $c\,$s.
Now we notice that $\ds \sum_{i=0}^\infty {x^{i}\over i!}=e^x$, and that the other two sums are closely related to this. A little thought leads to $$ e^x + e^{-x} = \sum_{i=0}^\infty {x^{i}\over i!} + \sum_{i=0}^\infty {(-x)^{i}\over i!} = \sum_{i=0}^\infty {x^i + (-x)^i\over i!}. $$ Now $x^i+(-x)^i$ is $2x^i$ when $i$ is even, and $0$ when $i$ is odd. Thus $$ e^x + e^{-x} = \sum_{i=0}^\infty {2x^{2i}\over (2i)!}, $$ so that $$ \sum_{i=0}^\infty {x^{2i}\over (2i)!} = {e^x+e^{-x}\over 2}. $$ A similar manipulation shows that $$ \sum_{i=0}^\infty {x^{2i+1}\over (2i+1)!} = {e^x-e^{-x}\over 2}. $$ Thus, the generating function we seek is $$ {e^x-e^{-x}\over 2}{e^x+e^{-x}\over 2} e^x= {1\over 4}(e^x-e^{-x})(e^x+e^{-x})e^x = {1\over 4}(e^{3x}-e^{-x}). $$ Notice the similarity to example 3.1.5.
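For small $n$ this can be brute-forced and compared with the coefficient of $x^n/n!$ in ${1\over4}(e^{3x}-e^{-x})$, namely $(3^n-(-1)^n)/4$; a quick sketch:

```python
from itertools import product

def count_direct(n):
    # strings of length n over {a,b,c} with an odd number of a's and an even number of b's
    return sum(1 for s in product("abc", repeat=n)
               if s.count("a") % 2 == 1 and s.count("b") % 2 == 0)

def count_from_egf(n):
    # coefficient of x^n/n! in (e^{3x} - e^{-x})/4
    return (3**n - (-1)**n) // 4

for n in range(9):
    assert count_direct(n) == count_from_egf(n)
```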
Exercises 3.2
Ex 3.2.2 Find an exponential generating function for the number of permutations with repetition of length $n$ of the set $\{a,b,c\}$, in which there are an odd number of $a\,$s, an even number of $b\,$s, and an even number of $c\,$s.
Ex 3.2.3 Find an exponential generating function for the number of permutations with repetition of length $n$ of the set $\{a,b,c\}$, in which the number of $a\,$s is even and at least 2, the number of $b\,$s is even and at most 6, and the number of $c\,$s is at least 3.
Ex 3.2.4 In how many ways can we paint the 10 rooms of a hotel if at most three can be painted red, at most 2 painted green, at most 1 painted white, and any number can be painted blue or orange?
Ex 3.2.5 Recall from section 1.4 that the Bell numbers $B_n$ count all of the partitions of $\{1,2,\ldots,n\}$.
Let $\ds f(x)=\sum_{n=0}^\infty B_n\cdot {x^n\over n!}$, and note that $$ f'(x)=\sum_{n=1}^\infty B_n {x^{n-1}\over (n-1)!}= \sum_{n=0}^\infty B_{n+1}{x^{n}\over n!}= \sum_{n=0}^\infty \left(\sum_{k=0}^n {n\choose k}B_{n-k}\right) {x^{n}\over n!}, $$ using the recurrence relation 1.4.1 for $B_{n+1}$ from section 1.4. Now it is possible to write this as a product of two infinite series: $$ f'(x) = \left(\sum_{n=0}^\infty B_n\cdot {x^n\over n!}\right) \left(\sum_{n=0}^\infty a_n x^n\right) = f(x)g(x). $$ Find an expression for $a_n$ that makes this true, which will tell you what $g(x)$ is, then solve the differential equation for $f(x)$, the exponential generating function for the Bell numbers. From section 1.4, the first few Bell numbers are 1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975, 678570, 4213597, 27644437. You can check your answer in Sage.
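The listed values can be reproduced directly from recurrence 1.4.1 (a quick check that does not give away the generating function):

```python
from math import comb

# Bell numbers via the recurrence B_{n+1} = sum_k C(n,k) B_{n-k}
B = [1]
for n in range(13):
    B.append(sum(comb(n, k) * B[n - k] for k in range(n + 1)))

assert B == [1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147,
             115975, 678570, 4213597, 27644437]
```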
|
\(R^2\) Statistic M Loecher
It seems odd to me that we measure the explanatory power of a regression model in “percent of variance explained”, or \(R^2 = cor(\hat{y},y)^2 = r^2\), even though we all know that variance is just an auxiliary quantity to compute the more meaningful measure of uncertainty, which is the standard deviation. Risk in finance or uncertainty in prediction is measured by \(\sigma\),
not by \(\sigma^2\). Knowing the reduction in variance in a regression model seems much less useful than the reduction in stdev.
In fact, whenever I try to explain \(R^2\) to my students, I usually start by comparing the overall variation of y (as measured by \(\sigma_y\)) to the remaining variation around the regression line (measured by \(\sigma_{\epsilon}\)). That idea is adopted much more naturally than the comparison of the variances, which really have no direct interpretation!
So I propose a new measure which is truly “the amount of standard deviation explained”. We can quickly derive it: \[
R^2 (=r^2) = 1 - \frac{RSS}{TSS} \Leftrightarrow 1 - \frac{\sqrt{RSS}}{\sqrt{TSS}} = 1-\sqrt{1-r^2} \] where RSS = “residual sum of squares” (\(\approx \sigma_{\epsilon}^2\)) and TSS = “total sum of squares” (\(\approx \sigma_{y}^2\))
Comparing the traditional \(r^2\) with the new measure \(1-\sqrt{1-r^2}\) reveals that substantially stronger correlations \(cor(\hat{y},y)\) are needed to result in similar “uncertainty reduction”. E.g. what one used to call a high value of \(R^2 = 0.8\) explaining 80% of the variance, would have reduced the true uncertainty by merely 55% !
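The arithmetic behind that claim is a one-liner; a quick check:

```python
import math

def sd_explained(r2):
    # fraction of the standard deviation explained: 1 - sqrt(1 - R^2)
    return 1 - math.sqrt(1 - r2)

# R^2 = 0.8 ("80% of variance explained") reduces the uncertainty by only ~55%
assert abs(sd_explained(0.8) - 0.5528) < 1e-3
assert sd_explained(0.0) == 0.0
assert sd_explained(1.0) == 1.0
```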
The graph below shows the stronger convexity of this alternative measure.
|
The idea behind this expression is indeed a fairly general one.
$\newcommand{\bs}[1]{\boldsymbol{#1}}\newcommand{\calS}{\mathcal{S}}$An entanglement witness $\mathcal W$ is defined as an operator such that $\operatorname{Tr}(\mathcal W\rho)\ge0$ for all separable $\rho\in\mathcal S$, while for some entangled $\rho_{ent}$ we have $\operatorname{Tr}(\mathcal W\rho_{ent})<0$.
Geometrically, this definition is very easily understood as saying that $\mathcal W$ defines a
hyperplane in the space of states that separates the separable states from the non-separable ones. Because of the convexity of the space of separable states, any non-separable state can be separated by such an operator (see e.g. (Horodecki 2007) or (Gühne 2008)).
Now forget for a second about states and quantum mechanics. Let $\calS$ denote a convex subset of $\mathbb R^n$ for some $n$, let $ v\notin \calS$, and let $ v_0\in\calS$ be the vector in $\calS$ that is the closest to $ v$ (in standard Euclidean distance). This means that the line (or more generally hyperplane) that is orthogonal to $ v- v_0$ and touches $ v_0$ is
tangent to $\calS$ at $ v_0$. Such a "line" is an "optimal" linear separation between $\calS$ and $ v$:
We now simply need to define an operator that tells us on which side of such separation we are in. A natural candidate would be an operator which projects a candidate vector $ w$ on the line $ v- v_0$, or, more precisely, an operator $A$ defined via
$$A w\equiv\langle v- v_0, w- v_0\rangle.$$
Clearly, we then have $A v>0$ and $A v_0=0$, while all vectors in $\calS$ (together with the points on the $\calS$ side of the separation) satisfy $A w\le 0$.
To obtain the given expression for the witness you now simply change the sign (because we conventionally define witnesses to be
positive on the separable).
In conclusion, you have$A_{opt} w=\langle v- v_0, v_0- w\rangle$, that corresponds to$$A_{opt}=( v_0^*-v^*)-\langle v_0, v_0-v\rangle I.$$where $v^*$ denotes the linear functional $v^*(w)\equiv \langle v,w\rangle$ (or, if you prefer, the bra $\langle v\rvert$).
To get to the expression given in the paper you now just add a normalisation factor, which I guess was added to make something simpler later on in the paper.
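A tiny numerical illustration of this geometry, with the hypothetical choices $\calS$ the unit disk in $\mathbb R^2$, $v=(2,0)$, and closest point $v_0=(1,0)$:

```python
import numpy as np

v = np.array([2.0, 0.0])
v0 = np.array([1.0, 0.0])            # closest point of the unit disk to v

def A(w):
    # A w = <v - v0, w - v0>: positive on v's side, zero on the tangent line
    return (v - v0) @ (np.asarray(w) - v0)

assert A(v) > 0                      # the separated point
assert A(v0) == 0                    # the tangent point
# every point of the disk lies on the non-positive side
rng = np.random.default_rng(1)
for _ in range(1000):
    w = rng.uniform(-1, 1, 2)
    if w @ w <= 1.0:                 # inside the unit disk
        assert A(w) <= 0
```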
|
Write the Taylor series of $\text{Log}(1+w)$ with center at $w=0$ on $|w|<1$;
check that if $|z-2|<1$, then $|z|>1$. (If you have difficulties in checking this formally, try to draw a picture of the situation). Use these facts to compute
$$ \int_C \text{Log}\left(1+\frac{1}{z}\right)dz\, $$ $$ C:w(\theta) = 2 + (1/2)e^{i\theta},\quad \theta \in [0,2\pi] $$
My attempt so far:
I know the Taylor series of Log(1+w).
(I am pretty sure) I know what |z-2| < 1 looks like on the complex plane, and |z| > 1 also, but how do those two hold at the same time? The region |z| > 1 contains the region |z-2| < 1 (the disk centered at 2 with radius 1), but that's it? They certainly aren't equal.
Writing out the Taylor series expansion of Log(1+(1/z)) seems to be reminiscent of a Laurent series with all positive term coefficients equal to zero, but given our simple closed curve, there seem to be singularities only at z = 0, so I don't see anything resembling using residues or such.
I am sorry I am not fluent in LaTeX to formalize my question, but thanks in advance.
EDIT: I have not done any problems in a long time that simply just refer to Cauchy-Goursat holding, but IIRC, if f(z) is holomorphic within the interior of our curve of integration (and writing out several Taylor expansion terms of Log(1+(1/z))), there are no singularities in the interior of our curve, so is the integral simply zero? I doubt this, but just throwing my ideas out there.
|
We know that the sign of the derivative tells us whether a function is increasing or decreasing; for example, when $f'(x)>0$, $f(x)$ is increasing. The sign of the second derivative $f''(x)$ tells us whether $f'$ is increasing or decreasing; we have seen that if $f'$ is zero and increasing at a point then there is a local minimum at the point, and if $f'$ is zero and decreasing at a point then there is a local maximum at the point. Thus, we extracted information about $f$ from information about $f''$.
We can get information from the sign of $f''$ even when $f'$ is not zero. Suppose that $f''(a)>0$. This means that near $x=a$, $f'$ is increasing. If $f'(a)>0$, this means that $f$ slopes up and is getting steeper; if $f'(a)< 0$, this means that $f$ slopes down and is getting {\it less\/} steep. The two situations are shown in figure 5.4.1. A curve that is shaped like this is called
concave up.
Now suppose that $f''(a)< 0$. This means that near $x=a$, $f'$ is decreasing. If $f'(a)>0$, this means that $f$ slopes up and is getting less steep; if $f'(a)< 0$, this means that $f$ slopes down and is getting steeper. The two situations are shown in figure 5.4.2. A curve that is shaped like this is called
concave down.
If we are trying to understand the shape of the graph of a function, knowing where it is concave up and concave down helps us to get a more accurate picture. Of particular interest are points at which the concavity changes from up to down or down to up; such points are called
inflection points. If the concavity changes from up to down at $x=a$, $f''$ changes from positive to the left of $a$ to negative to the right of $a$, and usually $f''(a)=0$. We can identify such points by first finding where $f''(x)$ is zero and then checking to see whether $f''(x)$ does in fact go from positive to negative or negative to positive at these points. Note that it is possible that $f''(a)=0$ but the concavity is the same on both sides; $\ds f(x)=x^4$ at $x=0$ is an example.
Example 5.4.1 Describe the concavity of $\ds f(x)=x^3-x$. $\ds f'(x)=3x^2-1$, $f''(x)=6x$. Since $f''(0)=0$, there is potentially an inflection point at zero. Since $f''(x)>0$ when $x>0$ and $f''(x)< 0$ when $x< 0$ the concavity does change from down to up at zero, and the curve is concave down for all $x< 0$ and concave up for all $x>0$.
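Such concavity claims can be sanity-checked numerically with a central-difference approximation to $f''$ (a quick sketch, not part of the text):

```python
def f(x):
    return x**3 - x

def f2(g, x, h=1e-5):
    # central-difference approximation to the second derivative
    return (g(x + h) - 2 * g(x) + g(x - h)) / h**2

# f''(x) = 6x: concave down for x < 0, concave up for x > 0
assert f2(f, -1.0) < 0
assert f2(f, 1.0) > 0
assert abs(f2(f, 2.0) - 12.0) < 1e-3
```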
Note that we need to compute and analyze the second derivative to understand concavity, so we may as well try to use the second derivative test for maxima and minima. If for some reason this fails we can then try one of the other tests.
Exercises 5.4
Describe the concavity of the functions in 1–18.
Ex 5.4.1$\ds y=x^2-x$ (answer)
Ex 5.4.2$\ds y=2+3x-x^3$ (answer)
Ex 5.4.3$\ds y=x^3-9x^2+24x$(answer)
Ex 5.4.4$\ds y=x^4-2x^2+3$ (answer)
Ex 5.4.5$\ds y=3x^4-4x^3$(answer)
Ex 5.4.6$\ds y=(x^2-1)/x$(answer)
Ex 5.4.7$\ds y=3x^2-(1/x^2)$ (answer)
Ex 5.4.8$y=\sin x + \cos x$ (answer)
Ex 5.4.9$\ds y = 4x+\sqrt{1-x}$(answer)
Ex 5.4.10$\ds y = (x+1)/\sqrt{5x^2 + 35}$(answer)
Ex 5.4.11$\ds y= x^5 - x$(answer)
Ex 5.4.12$\ds y = 6x + \sin 3x$(answer)
Ex 5.4.13$\ds y = x+ 1/x$(answer)
Ex 5.4.14$\ds y = x^2+ 1/x$(answer)
Ex 5.4.15$\ds y = (x+5)^{1/4}$(answer)
Ex 5.4.16$\ds y = \tan^2 x$(answer)
Ex 5.4.17$\ds y =\cos^2 x - \sin^2 x$(answer)
Ex 5.4.18$\ds y = \sin^3 x$(answer)
Ex 5.4.19 Identify the intervals on which the graph of the function $\ds f(x) = x^4-4x^3 +10$ is of one of these four shapes: concave up and increasing; concave up and decreasing; concave down and increasing; concave down and decreasing. (answer)
Ex 5.4.20 Describe the concavity of $\ds y = x^3 + bx^2 + cx + d$. You will need to consider different cases, depending on the values of the coefficients.
Ex 5.4.21 Let $n$ be an integer greater than or equal to two, and suppose $f$ is a polynomial of degree $n$. How many inflection points can $f$ have? Hint: Use the second derivative test and the fundamental theorem of algebra.
|
Time series pervade financial markets and, although some embrace the so-called efficient market hypothesis, which states that current market prices reflect all available information about a security, I'm more inclined to think they provide us with a lot of information that we rarely know how to exploit for our own benefit.
I agree that
financial time series may be damn difficult to predict, but that does not mean we cannot beat a guessing monkey or perform better than our favourite benchmark. Frequently, and this is common to many other disciplines, the two basic characteristics of time series (statistical distribution of amplitudes and time structure) are studied independently.
We all have heard of fat-tailed financial returns and using the autocorrelation function
à la Box-Jenkins to understand the time structure of the series but, sadly, in the vast majority of cases the analysis ends here.
However, real-world financial returns are hardly ever
independently distributed, and the relationship between them is not necessarily linear, a fact that tools like the autocorrelation function fail to identify. Remember, the autocorrelation function is basically a sequence of Pearson correlation coefficients (i.e., a measure of linear dependence) between the time series and lagged versions of itself. Let me illustrate this point with an example.
The following figure compares the autocorrelation function of the daily returns of the S&P 500 index with that of white Gaussian noise:
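The sample autocorrelation behind such a plot is just the sequence of Pearson correlations between the series and lagged copies of itself; a minimal sketch (checked on white noise, since the S&P data is not included here):

```python
import numpy as np

def sample_acf(x, max_lag):
    # autocorrelation = correlation between the series and lagged copies of itself
    x = np.asarray(x, dtype=float)
    xc = x - x.mean()
    denom = xc @ xc
    return np.array([xc[k:] @ xc[:len(x) - k] / denom for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
noise = rng.standard_normal(10_000)
acf = sample_acf(noise, 5)
assert abs(acf[0] - 1.0) < 1e-12            # lag 0 is always 1
assert all(abs(a) < 0.05 for a in acf[1:])  # white noise: near zero elsewhere
```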
The goal of this type of analysis is usually evaluating if it would be a good idea to come up with an autoregressive model of the form
$$y_t = w_0 + \sum_{i=1}^N w_i y_{t-i},$$ or, equivalently, $$y_t = \mathbf{w}^T\mathbf{x}_t,$$ where \(\mathbf{x}_t=[1,y_{t-1}, y_{t-2}, \ldots, y_{t-N}]^T\) denotes the vector of regressors or features used to model \(y_t\).
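Concretely, such a model can be fit by ordinary least squares; a minimal sketch with synthetic data (the series, lag order and coefficients here are illustrative placeholders, not anything from the post):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a toy AR(2) series so the fit has known coefficients to recover
true_w = np.array([0.1, 0.5, -0.3])            # w0 (bias), w1, w2
y = np.zeros(5000)
for t in range(2, len(y)):
    y[t] = true_w[0] + true_w[1]*y[t-1] + true_w[2]*y[t-2] + rng.normal(scale=0.1)

# Regressors x_t = [1, y_{t-1}, y_{t-2}]; solve argmin ||X w - y||^2
N = 2
targets = y[N:]
X = np.column_stack([np.ones(len(targets)),
                     y[N-1:-1],                # y_{t-1}
                     y[N-2:-2]])               # y_{t-2}
w_hat, *_ = np.linalg.lstsq(X, targets, rcond=None)
print(w_hat)
```

With enough data the estimated coefficients land close to the ones used to generate the series.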
In least-squares regression, the coefficients \(\mathbf{w}\) are obtained by solving the
training problem $$\underset{\mathbf{w}}{\text{argmin}\,}\|\mathbf{X}^T\mathbf{w}-\mathbf{y}\|^2,$$ and then used to predict future outcomes by applying the prediction/classification rule: $$y_{t+1} = \mathbf{w}^T\mathbf{x}_{t+1}.$$ However, from the autocorrelation function above, it looks like the time series at hand is rather white (the autocorrelation function is approximately zero everywhere except at the zeroth lag) and, therefore, no substantial linear correlation exists among lagged versions of the time series. In this case, the aforementioned autoregressive model would not be of much help.

Going non-linear
Nevertheless, a well-known property of financial returns is the so-called volatility clustering. According to this, similar price changes tend to cluster together, resulting in persistence of the amplitudes of price changes, but not of their sign.
A quantitative manifestation of this fact is that squared returns show a slow-decaying autocorrelation function, revealing that the series is not as white as previously indicated:
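This is easy to check numerically; a rough sketch using a synthetic ARCH(1)-style series as a stand-in for real returns (the parameters are arbitrary):

```python
import numpy as np

def acf(x, nlags):
    """Sample autocorrelation: Pearson coefficients vs. lagged copies of the series."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    denom = np.dot(x, x)
    return np.array([1.0] + [np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])

# Toy ARCH(1)-style series: uncorrelated in levels, correlated in squares
rng = np.random.default_rng(1)
n = 20000
r = np.zeros(n)
for t in range(1, n):
    r[t] = np.sqrt(0.1 + 0.5 * r[t-1]**2) * rng.normal()

print(acf(r, 5))       # roughly zero beyond lag 0: the levels look "white"
print(acf(r**2, 5))    # clearly positive at small lags: volatility clustering
```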
This observation has historically motivated the introduction of more elaborate tools such as ARCH, GARCH or stochastic volatility models. Another alternative is introducing a non-linear model trying to explain returns as a non-linear function of previous returns. For example, a quadratic model will look like this:
$$y_t=w_0+w_1y_{t-1}+w_2y_{t-2}+w_3y_{t-1}^2+w_4y_{t-1}y_{t-2}+w_5y_{t-2}^2,$$ which can be written as $$y_t = \mathbf{w}^T\phi(\mathbf{x}_t),$$ with \(\phi(\mathbf{x}_t)\) being the augmented feature vector $$\phi(\mathbf{x}_t)=[1,y_{t-1},y_{t-2},y_{t-1}^2,y_{t-1}y_{t-2}, y_{t-2}^2]^T,$$ and everything stays the same as before, with \(\mathbf{x}_t\) replaced by \(\phi(\mathbf{x}_t)\). The training problem and prediction rule stay exactly the same, but with a subtle difference: by augmenting our feature space, our model has gained in expressiveness, at the cost of making these two tasks a little more computationally demanding. For our toy quadratic example this may be affordable, but what if we want to include higher-order terms in our model? The computational cost will soon become prohibitive, unless we use the kernel trick.

Trick and treat
Many classification and regression problems can be written in the same general form, consisting of a training problem,
$$\underset{\mathbf{w}}{\text{argmin}\, }L(\mathbf{X}^T\mathbf{w},\mathbf{y}),$$ where \(L\) denotes a general loss function which depends on the particular problem, and a prediction/classification rule that depends only on \(\mathbf{w}^T\mathbf{x}\), where \(\mathbf{x}\) is the latest available data point.
Some representative examples belonging to this general family of problems are shown in the following table:
Algorithm | Loss function name | Loss function \(L(z,y)\)
Linear regression | Squared loss | \(\|z-y\|^2\)
SVM classification | Hinge loss | \(\sum_i\max(0,1-y_iz_i)\)
Logistic regression | Logistic loss | \(\sum_i\log(1+e^{-y_iz_i})\)
A collection of results known as representer theorems states that it’s possible to write the solution to any problem in this family as a linear combination of dot products in the feature space. As you may have already noticed, current formulations of the representer theorem sound a bit cryptic (if not, try Wikipedia’s entry). In the following, I will try to put it in simpler terms by resorting to elementary linear algebra facts involving two of the four fundamental subspaces.
First, note that \(\mathbf{X}^T\mathbf{w}\) depends only on the component of \(\mathbf{w}\) lying in the subspace spanned by the columns of \(\mathbf{X}\) and, consequently, \(\mathbf{w}\) can be written as the sum of two orthogonal vectors:
$$\mathbf{w} = \mathbf{X} \mathbf{v} + \mathbf{r},$$ where \(\mathbf{X} \mathbf{v}\) is, by construction, lying in the column space of \(\mathbf{X}\) and \(\mathbf{r}\) lies in the null space of \(\mathbf{X}^T\), i.e., fulfills \(\mathbf{X}^T\mathbf{r} = 0\).
That being said, both our general
training problem (with its loss function) and our prediction/classification rule can be reparameterised as a function of \(\mathbf{v}\) instead of \(\mathbf{w}\). In particular, the training problem turns into: $$\underset{\mathbf{v}}{\text{argmin}\,}L(\mathbf{X}^T\mathbf{X}\mathbf{v},\mathbf{y}),$$ while the prediction/classification rule depends solely on: $$\mathbf{v}^T\mathbf{X}^T\mathbf{x}.$$
And it’s here that the kernel trick comes to fruition. Note that the loss function depends only on what is called the kernel matrix \(\mathbf{K} = \mathbf{X}^T\mathbf{X}\), which basically contains dot products between all data pairs in the training set. Likewise, the prediction/classification rule depends on \(\mathbf{k} = \mathbf{X}^T\mathbf{x}\), that is, the dot products between data points in the training set and a newly available data point. Therefore, the computational effort involved in solving the training problem and making a prediction depends only on our ability to quickly evaluate such dot products. In order to apply this idea to large dimensional feature spaces, what we really need is a kernel function, \(k(\mathbf{x}_i,\mathbf{x}_j) = \phi(\mathbf{x}_i)^T\phi(\mathbf{x}_j)\), able to compute dot products efficiently in the feature space, and that will do the trick. A commonly used example is the Gaussian or radial basis function (RBF) kernel: $$k(\mathbf{x}_i,\mathbf{x}_j)=\exp(-\gamma\|\mathbf{x}_i-\mathbf{x}_j\|^2).$$
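The identity \(k(\mathbf{x}_i,\mathbf{x}_j) = \phi(\mathbf{x}_i)^T\phi(\mathbf{x}_j)\) can be verified directly for, say, a degree-2 polynomial kernel, whose feature map is essentially the quadratic one above (the \(\sqrt{2}\) scalings are what make the identity exact):

```python
import numpy as np

def phi(x):
    """Explicit degree-2 feature map (with sqrt(2) scalings so the identity is exact)."""
    x1, x2 = x
    return np.array([1.0,
                     np.sqrt(2)*x1, np.sqrt(2)*x2,
                     x1**2, np.sqrt(2)*x1*x2, x2**2])

def poly_kernel(x, y):
    """Degree-2 polynomial kernel: the same dot product, without ever building phi."""
    return (np.dot(x, y) + 1.0) ** 2

x = np.array([0.3, -1.2])
y = np.array([2.0, 0.5])
print(np.dot(phi(x), phi(y)))   # identical values,
print(poly_kernel(x, y))        # very different costs in high dimensions
```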
Toolkits like Python’s scikit-learn implement a variety of kernel functions and even allow you to define custom kernels that can be passed to many methods as parameters. As shown in the previous blog post, the best-known examples of kernel methods are support vector machines (SVMs), which are derived from the hard-margin version of the generalised portrait algorithm (which was, indeed, the first algorithm using kernels), but many other kernel methods are also available. Some of them are already implemented in scikit-learn, such as kernel principal component analysis (K-PCA), kernel ridge regression and Gaussian processes.
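To make the reparameterised training problem and prediction rule concrete, here is a from-scratch kernel ridge sketch using only \(\mathbf{K}\) and \(\mathbf{k}\) (toy data; the squared loss plus a ridge term is one particular choice of regularised loss, and `alpha` and `gamma` are arbitrary):

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    """Gaussian/RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(2)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

alpha = 1e-2                                        # ridge (regularisation) weight
K = rbf(X, X)                                       # K: dot products in feature space
v = np.linalg.solve(K + alpha * np.eye(len(X)), y)  # training uses only K

x_new = np.array([[0.5]])
k = rbf(X, x_new)[:, 0]                             # dot products with the new point
y_pred = v @ k                                      # prediction rule v^T k
print(y_pred)                                       # prediction near sin(0.5)
```

Note that the feature space of the RBF kernel is infinite-dimensional, yet training and prediction never leave the space of kernel evaluations.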
However, as you can imagine, the ideas above can be applied to a plethora of machine learning algorithms, thus turning them into kernel methods. To cite some examples, we have relevance vector machines, and the kernelised versions of Fisher discriminant analysis, canonical correlation analysis, independent component analysis, K-means, spectral clustering, …
Last but not least, I would like to stress that, although I have omitted it here for the sake of clarity, the training problem for these methods usually includes a regularisation term to avoid overfitting; a common phenomenon in such a high-dimensional feature space.
Getting fundamental
Going back to where we started, would it be possible to apply this kernelisation idea to perform a non-linear extension of the concept of autocorrelation function? The answer is yes and, indeed, the resulting function is known as generalised autocorrelation function.
Interestingly, when the Gaussian kernel is applied, the generalised autocorrelation function shows some interesting connections with information-theoretic concepts such as the quadratic Renyi’s entropy of the underlying random process, thus closing the gap between the two apparently separated (purely statistical and time structural) approaches I started this post with. For this reason, the generalised autocorrelation function is also known as autocorrentropy, but I will leave the details for a different post.
Meanwhile, you can find out more about the interplay of information theory and efficient market hypotheses in yet-another great Turing Finance post.
|
What does it mean to measure a qubit (or multiple qubits) in standard basis?
A $1$-qubit system, in general, can be in a state $a|0\rangle+b|1\rangle$ where $|0\rangle$ and $|1\rangle$ are basis vectors of a two-dimensional complex vector space. The standard basis for measurement here is $\{|0\rangle,|1\rangle\}$. When you are measuring in this basis, with $\frac{|a|^2}{|a|^2+|b|^2}\times 100\%$ probability you will find that the state after measurement is $|0\rangle$ and with $\frac{|b|^2}{|a|^2+|b|^2}\times 100\%$ probability you'll find that the state after measurement is $|1\rangle$.
You could also carry out the measurement in some other basis, say $\{\frac{|0\rangle+|1\rangle}{\sqrt{2}},\frac{|0\rangle-|1\rangle}{\sqrt{2}}\}$, but that wouldn't be the standard basis.
Exercise: Express $a|0\rangle+b|1\rangle$ in the form $c(\frac{|0\rangle+|1\rangle}{\sqrt{2}})+d(\frac{|0\rangle-|1\rangle}{\sqrt{2}})$ where $a,b,c,d\in\Bbb C$.
If you measure in this basis, the probability of ending in the state $\frac{|0\rangle+|1\rangle}{\sqrt{2}}$ after a measurement is $\frac{|c|^2}{|c|^2+|d|^2}\times 100\%$ and the probability of ending in the state $\frac{|0\rangle-|1\rangle}{\sqrt{2}}$ is $\frac{|d|^2}{|c|^2+|d|^2}\times 100\%$.
Similarly, for a $2$-qubit system the standard basis would be $\{|00\rangle,|01\rangle,|10\rangle,|11\rangle\}$ and its general state can be expressed as $\alpha|00\rangle + \beta|01\rangle + \gamma|10\rangle + \delta|11\rangle$. When you measure this in the standard basis you can easily see that the probability of ending up in the state (say) $|00\rangle$ will be $\frac{|\alpha|^2}{|\alpha|^2+|\beta|^2+|\gamma|^2+|\delta|^2}\times 100\%$. Similarly you can deduce the probabilities for the other states.
You should be able to extrapolate this same logic to general $n$-qubit states, now. Feel free to ask questions in the comments.
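As a quick numerical illustration of the rule above (the amplitudes are arbitrary and deliberately unnormalised):

```python
import numpy as np

a, b = 3 + 4j, 1 - 2j                          # arbitrary, unnormalised amplitudes
norm2 = abs(a)**2 + abs(b)**2
p0, p1 = abs(a)**2 / norm2, abs(b)**2 / norm2  # outcome probabilities, standard basis
print(p0, p1)

# Same state in the basis {(|0>+|1>)/sqrt(2), (|0>-|1>)/sqrt(2)} (cf. the exercise):
c, d = (a + b) / np.sqrt(2), (a - b) / np.sqrt(2)
p_plus = abs(c)**2 / (abs(c)**2 + abs(d)**2)
p_minus = abs(d)**2 / (abs(c)**2 + abs(d)**2)
print(p_plus, p_minus)
```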
You define the projectors $$ P_0=|0\rangle\langle 0|=\left(\begin{array}{cc} 1 & 0 \\ 0 & 0 \end{array}\right)\qquad P_1=|1\rangle\langle 1|=\left(\begin{array}{cc} 0 & 0 \\ 0 & 1 \end{array}\right). $$ For any state $|\psi\rangle$, the probability of getting answer $x$ is $p_x=\langle\psi|P_x|\psi\rangle$ and, after the measurement, the qubit is in the state $|x\rangle$.
If you want to measure multiple (say $n$) qubits in the standard basis, you can take arbitrary tensor products of the single-qubit terms, $$ P_x=|x\rangle\langle x|=\bigotimes_{i=1}^nP_{x_i} $$ for $x\in\{0,1\}^n$.
Where you have to be a little more careful is if you're measuring only a subset of qubits. Then, the probability is still $p_x=\langle\psi|P_x|\psi\rangle$, but the output state is $P_x|\psi\rangle/\sqrt{p_x}$, which could still be a superposition of multiple basis states (all those for which the measured qubits correspond to $x$).
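A small sketch of this projector formalism, measuring only the first qubit of a Bell state (the state is chosen for illustration):

```python
import numpy as np

P = [np.array([[1., 0.], [0., 0.]]),      # P_0 = |0><0|
     np.array([[0., 0.], [0., 1.]])]      # P_1 = |1><1|
I2 = np.eye(2)

# Bell state (|00> + |11>)/sqrt(2), amplitudes ordered |00>, |01>, |10>, |11>
psi = np.array([1., 0., 0., 1.]) / np.sqrt(2)

# Measure only the first qubit: project with P_x tensor I
for x in (0, 1):
    Px = np.kron(P[x], I2)
    p_x = psi @ Px @ psi                  # p_x = <psi|P_x|psi>
    post = Px @ psi / np.sqrt(p_x)        # post-measurement state P_x|psi>/sqrt(p_x)
    print(x, p_x, post)
```

Each outcome occurs with probability 1/2, and the post-measurement state collapses to $|00\rangle$ or $|11\rangle$ accordingly.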
|
Let $\mathscr C$ be a small category. Necessary and sufficient conditions for a presheaf $F$ to be cofibrant in the global projective model structure on $[\mathscr C^\mathrm{op}, [\Delta^\mathrm{op}, \mathbf{Set}]]$ are that:
(1) Each $F(-)(n) \colon \mathscr C^\mathrm{op} \to \mathbf{Set}$ is projective (i.e., a coproduct of retracts of representables; if $\mathscr C$ is Cauchy-complete, then equivalently a coproduct of representables).
(2) $F \colon \mathscr C^\mathrm{op} \to [\Delta^\mathrm{op}, \mathbf{Set}]$ factors through the subcategory $[\Delta^\mathrm{op}, \mathbf{Set}]_{\mathrm{nd}}$ of $[\Delta^\mathrm{op}, \mathbf{Set}]$ whose objects are simplicial sets and whose maps are simplicial maps that send non-degenerate simplices to non-degenerate simplices.
On the one hand, it's easy to show by induction that any cofibrant object satisfies (1) and (2). Conversely, condition (2) means that each $F(-)(n)$ breaks up as a coproduct $G_n(-) + H_n(-)$ of non-degenerate and degenerate simplices, and the object $G_n(-)$ of non-degenerate simplices is projective since $F(-)(n)$ is. This means that any object satisfying (1) and (2) can be built up dimension by dimension using the generating cofibrations: first construct the $0$-skeleton using pushouts of maps
$\partial \Delta_0 \cdot \mathscr C(-, W) \to \Delta_0 \cdot \mathscr C(-, W)$ (for $W \in \mathscr C$)
one for each representable summand in the projective object $F(-)(0)$; then construct the $1$-skeleton by using pushouts of maps
$\partial \Delta_1 \cdot \mathscr C(-, W) \to \Delta_1 \cdot \mathscr C(-, W)$ (for $W \in \mathscr C$)
one for each representable summand in the projective object $G_1(-)$ of non-degenerate $1$-simplices; and so on.
|
Howto Raytracer: Ray / Triangle Intersection Theory
Besides sphere and plane intersections, another important one is the ray/triangle intersection, because most 3D models consist of triangles or can be converted to such a representation. So let's learn how to do it to be able to render more complex models.
A triangle $$T$$ can be represented by three points $$v0, v1, v2$$ that define a plane. So first, we check if the ray intersects this plane. I already did a tutorial on ray / plane intersection so I won’t cover it again. If there is such an intersection it means we just have to check if this hitpoint $$P$$ lies within the bounds of the triangle. For this we calculate a different representation of $$P$$ with respect to the triangle: As the point is on the plane, it can be written as $$P = v0 + su + tv$$ for some $$s,t$$ where $$u$$ and $$v$$ are the “edge vectors” incident to $$v0$$.
Once we have found the values for $$s,t$$ the following has to be true for the point to be inside the triangle:
$$0 \leq s,t \leq 1$$ $$s + t \leq 1$$
These constraints on $$s,t$$ essentially define the triangle structure and all in all, the set of points of the triangle is $$ \begin{aligned} T = \{P \mid & P \in T_{\text{Plane}} \land P = v0 + s(v1-v0) + t(v2-v0) \land \\ & 0 \leq s,t \leq 1 \land s + t \leq 1\} \end{aligned}$$
Assume we have checked that $$P$$ lies inside the triangle's plane; then we just have to solve $$P = v0 + su + tv$$ for $$s,t$$, or equivalently $$w = P - v0 = su + tv$$.
Solving the equation
Unfortunately, $$w = su + tv$$ has two unknowns ($$s,t$$) in only one equation, so we have to use a little trick to get two equations out of this. The normal $$n$$ of the triangle can be computed by the cross product of $$u$$ and $$v$$. Let’s now consider the vector $$u^\perp$$ that is both perpendicular to $$u$$ and to $$n$$. This vector lies in the plane and can again be computed by using the cross product, i.e., $$u^\perp = n \times u$$. We can now (dot product-) multiply our equation $$w = su + tv$$ on both sides by $$u^\perp$$:
$$ \begin{aligned} w = su + tv \\ w \cdot u^\perp = s u \cdot u^\perp + t v \cdot u^\perp \\ w \cdot u^\perp = t v \cdot u^\perp \end{aligned} $$
The term $$u \cdot u^\perp$$ is $$0$$, because by definition of $$u^\perp$$ it is perpendicular to $$u$$, and with it the $$s$$ vanishes and we have reduced the equation to only one unknown, so we solve it for $$t$$ by:
$$ \begin{aligned} t = \frac{w \cdot u^\perp}{v \cdot u^\perp} \end{aligned} $$
We do a similar thing to compute $$s$$ by multiplying with $$v^\perp = n \times v$$: $$ \begin{aligned} w = su + tv \\ w \cdot v^\perp = s u \cdot v^\perp + t v \cdot v^\perp \\ w \cdot v^\perp = s u \cdot v^\perp \\ s = \frac{w \cdot v^\perp}{u \cdot v^\perp} \end{aligned} $$
Some optimizations
We can reduce the number of computations by seeing that the denominator used to compute $$s$$ is just the negative of the denominator for $$t$$:$$u\cdot v^\perp = u \cdot (n \times v) = v \cdot (u \times n) = v \cdot (-n \times u) = - v \cdot (n \times u) = - v \cdot u^\perp$$ This gives us the following equations:
$$\begin{aligned} s = \frac{w \cdot v^\perp}{u \cdot v^\perp} && v^\perp = n \times v && w = P - v0 \\ t = \frac{w \cdot u^\perp}{-u \cdot v^\perp} && u^\perp = n \times u & \end{aligned}$$
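These formulas, and the sign identity behind the shared denominator, are easy to sanity-check numerically; this sketch just mirrors the math with an arbitrary triangle and is not part of the raytracer itself:

```python
import numpy as np

v0 = np.array([0., 0., 0.])
v1 = np.array([1., 0., 0.])
v2 = np.array([0., 2., 1.])
u, v = v1 - v0, v2 - v0
n = np.cross(u, v)
u_perp, v_perp = np.cross(n, u), np.cross(n, v)

# Shared denominator: u . v_perp == -(v . u_perp)
print(np.dot(u, v_perp), -np.dot(v, u_perp))

# Build P from known (s, t) and recover them with the formulas above
s_true, t_true = 0.25, 0.5
P = v0 + s_true * u + t_true * v
w = P - v0
s = np.dot(w, v_perp) / np.dot(u, v_perp)
t = np.dot(w, u_perp) / -np.dot(u, v_perp)
print(s, t)
```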
In the case of raytracing where we shoot a ray through the scene for every pixel, we can save a lot of computation time by caching some of these values. In fact, the only term that depends on the ray is $$w$$, whereas $$u^\perp, v^\perp, u \cdot v^\perp$$ only have to be computed once for each triangle.
Here’s some C# code that implements the ray/triangle intersection test:
using UnityEngine;

public class RTTriangle : RTObject
{
    protected Vector3 v0, v1, v2;
    protected Vector3 normal;
    protected Vector3 u, v;
    protected Vector3 uPerp, vPerp;
    protected float denominatorST;

    public RTTriangle(Vector3 v0, Vector3 v1, Vector3 v2, bool clockwise = false)
    {
        Init(v0, v1, v2, clockwise);
    }

    protected void Init(Vector3 v0, Vector3 v1, Vector3 v2, bool clockwise = false)
    {
        this.v0 = v0;
        this.v1 = v1;
        this.v2 = v2;
        u = v1 - v0;
        v = v2 - v0;
        // Unity uses clockwise winding order to determine front-facing triangles
        // and a left-handed coordinate system, so the normal will face the front.
        // If the direction of the normal is not important to you,
        // just remove the clockwise branching.
        normal = (clockwise ? 1 : -1) * Vector3.Cross(u, v).normalized;
        uPerp = Vector3.Cross(normal, u);
        vPerp = Vector3.Cross(normal, v);
        denominatorST = Vector3.Dot(u, vPerp);
        if (Mathf.Abs(denominatorST) < Mathf.Epsilon)
        {
            Debug.LogError("Triangle is broken");
            return;
        }
    }

    public override RayTracer.HitInfo Intersect(Ray ray)
    {
        RayTracer.HitInfo info = new RayTracer.HitInfo();
        Vector3 d = ray.direction;
        float denominator = Vector3.Dot(d, normal);
        if (Mathf.Abs(denominator) < Mathf.Epsilon)
            return info; // direction and plane parallel, no intersection

        float tHit = Vector3.Dot(v0 - ray.origin, normal) / denominator;
        if (tHit < 0)
            return info; // plane behind ray's origin

        // we have a hit point with the triangle's plane
        Vector3 w = ray.GetPoint(tHit) - v0;
        float s = Vector3.Dot(w, vPerp) / denominatorST;
        if (s < 0 || s > 1)
            return info; // won't be inside triangle

        float t = Vector3.Dot(w, uPerp) / -denominatorST;
        if (t >= 0 && (s + t) <= 1)
        {
            info.time = tHit;
            info.hitPoint = ray.GetPoint(tHit);
            info.normal = normal;
        }
        return info;
    }
}
You can now import a 3D model consisting of triangles in .obj format, parse the triangles and do your ray test. This is what the result might look like, a lot better than just planes and spheres.
|
Suppose we have a surface given in cylindrical coordinates as $z=f(r,\theta)$ and we wish to find the integral over some region. We could attempt to translate into rectangular coordinates and do the integration there, but it is often easier to stay in cylindrical coordinates.
How might we approximate the volume under such a surface in a way that uses cylindrical coordinates directly? The basic idea is the same as before: we divide the region into many small regions, multiply the area of each small region by the height of the surface somewhere in that little region, and add them up. What changes is the shape of the small regions; in order to have a nice representation in terms of $r$ and $\theta$, we use small pieces of ring-shaped areas, as shown in figure 17.2.1. Each small region is roughly rectangular, except that two sides are segments of a circle and the other two sides are not quite parallel. Near a point $(r,\theta)$, the length of either circular arc is about $r\Delta\theta$ and the length of each straight side is simply $\Delta r$. When $\Delta r$ and $\Delta \theta$ are very small, the region is nearly a rectangle with area $r\Delta r\Delta\theta$, and the volume under the surface is approximately $$\sum\sum f(r_i,\theta_j)r_i\Delta r\Delta\theta.$$ In the limit, this turns into a double integral $$\int_{\theta_0}^{\theta_1}\int_{r_0}^{r_1} f(r,\theta)r\,dr\,d\theta.$$
Example 17.2.1 Find the volume under $z=\sqrt{4-r^2}$ above the quarter circle bounded by the two axes and the circle $x^2+y^2=4$ in the first quadrant.
In terms of $r$ and $\theta$, this region is described by the restrictions $0\le r\le 2$ and $0\le\theta\le\pi/2$, so we have $$\eqalign{ \int_{0}^{\pi/2}\int_{0}^{2} \sqrt{4-r^2}\;r\,dr\,d\theta &=\int_{0}^{\pi/2}\left. -{1\over3}(4-r^2)^{3/2}\right|_0^2\,d\theta\cr &=\int_{0}^{\pi/2} {8\over3}\,d\theta\cr &={4\pi\over3}.\cr }$$ The surface is a portion of the sphere of radius 2 centered at the origin, in fact exactly one-eighth of the sphere. We know the formula for volume of a sphere is $(4/3)\pi r^3$, so the volume we have computed is $(1/8)(4/3)\pi 2^3=(4/3)\pi$, in agreement with our answer.
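The closed form can be corroborated with a midpoint Riemann sum over the $r\,\Delta r\,\Delta\theta$ cells described above (grid size chosen arbitrarily):

```python
import numpy as np

nr = 2000
dr = 2.0 / nr
r = (np.arange(nr) + 0.5) * dr        # midpoints of the r-cells in [0, 2]
# The integrand sqrt(4 - r^2) is independent of theta,
# so the theta sum contributes just a factor of pi/2.
vol = np.sum(np.sqrt(4 - r**2) * r) * dr * (np.pi / 2)
print(vol, 4 * np.pi / 3)
```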
This example is much like a simple one in rectangular coordinates: the region of interest may be described exactly by a constant range for each of the variables. As with rectangular coordinates, we can adapt the method to deal with more complicated regions.
Example 17.2.2 Find the volume under $z=\sqrt{4-r^2}$ above the region enclosed by the curve $r=2\cos\theta$, $-\pi/2\le\theta\le\pi/2$; see figure 17.2.2. The region is described in polar coordinates by the inequalities $-\pi/2\le\theta\le\pi/2$ and $0\le r\le2\cos\theta$, so the double integral is $$ \int_{-\pi/2}^{\pi/2}\int_{0}^{2\cos\theta} \sqrt{4-r^2}\;r\,dr\,d\theta =2\int_{0}^{\pi/2}\int_{0}^{2\cos\theta} \sqrt{4-r^2}\;r\,dr\,d\theta. $$ We can rewrite the integral as shown because of the symmetry of the volume; this avoids a complication during the evaluation. Proceeding: $$\eqalign{ 2\int_{0}^{\pi/2}\int_{0}^{2\cos\theta} \sqrt{4-r^2}\;r\,dr\,d\theta &=2\int_{0}^{\pi/2}-{1\over3}\left.(4-r^2)^{3/2}\right|_0^{2\cos\theta}\,d\theta\cr &=2\int_{0}^{\pi/2}-{8\over3}\sin^3\theta+{8\over3}\,d\theta\cr &=\left.2\left(-{8\over3}{\cos^3\theta\over3}-\cos\theta+{8\over3}\theta\right)\right|_0^{\pi/2}\cr &={8\over3}\pi-{32\over9}.\cr }$$
You might have learned a formula for computing areas in polar coordinates. It is possible to compute areas as volumes, so that you need only remember one technique. Consider the surface $z=1$, a horizontal plane. The volume under this surface and above a region in the $x$-$y$ plane is simply $1\cdot(\hbox{area of the region})$, so computing the volume really just computes the area of the region.
Example 17.2.3 Find the area outside the circle $r=2$ and inside $r=4\sin\theta$; see figure 17.2.3. The region is described by $\pi/6\le\theta\le5\pi/6$ and $2\le r\le4\sin\theta$, so the integral is $$\eqalign{ \int_{\pi/6}^{5\pi/6}\int_2^{4\sin\theta}1\,r\,dr\,d\theta &=\int_{\pi/6}^{5\pi/6}\left. {1\over2}r^2\right|_2^{4\sin\theta}\,d\theta\cr &=\int_{\pi/6}^{5\pi/6}8\sin^2\theta-2\,d\theta\cr &={4\over3}\pi+2\sqrt3.\cr }$$
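The same midpoint-sum check works here, now with a $\theta$-dependent upper limit for $r$ (grid sizes are arbitrary):

```python
import numpy as np

nt, nr = 2000, 400                              # midpoint cells in theta and r
dth = (5*np.pi/6 - np.pi/6) / nt
area = 0.0
for j in range(nt):
    theta = np.pi/6 + (j + 0.5) * dth
    r_lo, r_hi = 2.0, 4*np.sin(theta)
    r_mid = r_lo + (np.arange(nr) + 0.5) * (r_hi - r_lo) / nr
    area += np.sum(r_mid) * (r_hi - r_lo) / nr * dth   # integrand is 1 * r dr dtheta
print(area, 4*np.pi/3 + 2*np.sqrt(3))
```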
Exercises 17.2
Ex 17.2.1 Find the volume above the $x$-$y$ plane, under the surface $r^2=2z$, and inside $r=2$. (answer)
Ex 17.2.2 Find the volume inside both $r=1$ and $r^2+z^2=4$. (answer)
Ex 17.2.3 Find the volume below $z=\sqrt{1-r^2}$ and above the top half of the cone $z=r$. (answer)
Ex 17.2.4 Find the volume below $z=r$, above the $x$-$y$ plane, and inside $r=\cos\theta$. (answer)
Ex 17.2.5 Find the volume below $z=r$, above the $x$-$y$ plane, and inside $r=1+\cos\theta$. (answer)
Ex 17.2.6 Find the volume between $x^2+y^2=z^2$ and $x^2+y^2=z$. (answer)
Ex 17.2.7 Find the area inside $r=1+\sin\theta$ and outside $r=2\sin\theta$. (answer)
Ex 17.2.8 Find the area inside both $r=2\sin\theta$ and $r=2\cos\theta$. (answer)
Ex 17.2.9 Find the area inside the four-leaf rose $r=\cos(2\theta)$ and outside $r=1/2$. (answer)
Ex 17.2.10 Find the area inside the cardioid $r=2(1+\cos\theta)$ and outside $r=2$. (answer)
Ex 17.2.11 Find the area of one loop of the three-leaf rose $r=\cos(3\theta)$. (answer)
Ex 17.2.12 Compute $\ds \int_{-3}^3\int_0^{\sqrt{9-x^2}}\sin(x^2+y^2)\,dy\,dx$ by converting to cylindrical coordinates. (answer)
Ex 17.2.13 Compute $\ds \int_{0}^a\int_{-\sqrt{a^2-x^2}}^0 x^2y\,dy\,dx$ by converting to cylindrical coordinates. (answer)
Ex 17.2.14 Find the volume under $z=y^2+x+2$ above the region $x^2+y^2\le 4$. (answer)
Ex 17.2.15 Find the volume between $z=x^2y^3$ and $z=1$ above the region $x^2+y^2\le 1$. (answer)
Ex 17.2.16 Find the volume inside $x^2+y^2=1$ and $x^2+z^2=1$. (answer)
Ex 17.2.17 Find the volume under $z=r$ above $r=3+\cos\theta$. (answer)
Ex 17.2.18 Figure 17.2.4 shows the plot of $r=1+4\sin(5\theta)$.
a. Describe the behavior of the graph in terms of the given equation. Specifically, explain maximum and minimum values, number of leaves, and the `leaves within leaves'.
b. Give an integral or integrals to determine the area outside a smaller leaf but inside a larger leaf.
c. How would changing the value of $a$ in the equation $r=1+a\cos(5\theta)$ change the relative sizes of the inner and outer leaves? Focus on values $a\geq 1$. (Hint: How would we change the maximum and minimum values?)
Ex 17.2.19Consider the integral $\ds\dint{D} {1\over\sqrt{x^2+y^2}} \;dA$, where $D$ is the unit disk centered at the origin.
a. Why might this integral be considered improper?
b. Calculate the value of the integral of the same function $\ds 1/\sqrt{x^2+y^2}$ over the annulus with outer radius 1 and inner radius $\delta$.
c. Obtain a value for the integral on the whole disk by letting $\delta$ approach 0. (answer)
d. For which values $\lambda$ can we replace the denominator with $(x^2+y^2)^\lambda$ in the original integral and still get a finite value for the improper integral?
|
I have a measurement of $x$ coordinates of an oscillating particle at presence of noise taken at constant time steps of 1/60 s.
The corresponding data set can be obtained here:
I would like to measure the power spectrum density (PSD) using the autoregressive (AR) estimation method.
In the paper http://www.measurement.sk/2011/Angrisani.pdf (equation 1) it is supposed that the $x$ coordinates are the output of a linear system:
$x(n)=-\sum\limits_{m=1}^{p} a_{p,m} x(n-m) + \epsilon(n)$
where $x(n)$ is the analyzed signal sample at the time interval $n$, $a_{p,1}$, $a_{p,2}$, ... , $a_{p,p}$ are the model coefficients, $\{\epsilon(n)\}$ is a white noise process with variance $\sigma^2_p$, and $p$ is the model order. The PSD of a modeled variation of the $x$ coordinates in this way is totally described by the model parameters and the variance of the white noise process (equation 2):
$S(f)=\frac{\sigma^2_p T_S}{\left|1+ \sum\limits_{m=1}^{p} a_{p,m}e^{-j2\pi m f T_S}\right|^2}$ with $|f|\leq f_N$
where $T_S=1/f_S$ is the sampling interval and $f_N=1/(2T_S)$ is the Nyquist frequency.
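For readers working outside Mathematica, equations (1) and (2) can be sketched directly, estimating the coefficients with the Yule-Walker equations (synthetic data standing in for the measured coordinates; the order and coefficients are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 60.0                                  # 1/60 s steps -> 60 Hz sampling
p = 2                                      # model order (a free choice)
a_true = np.array([-1.5, 0.9])             # x(n) = 1.5 x(n-1) - 0.9 x(n-2) + eps
x = np.zeros(8192)                         # a noisy oscillator
for n in range(p, len(x)):
    x[n] = -a_true[0]*x[n-1] - a_true[1]*x[n-2] + rng.normal()

# Yule-Walker: solve R a = -r using the sample autocovariances
r = np.array([np.dot(x[:len(x)-k], x[k:]) / len(x) for k in range(p + 1)])
R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
a = np.linalg.solve(R, -r[1:])             # estimated a_{p,m}
sigma2 = r[0] + np.dot(a, r[1:])           # estimated white-noise variance sigma_p^2

# Evaluate equation (2): S(f) = sigma2 * Ts / |1 + sum_m a_m e^{-j 2 pi m f Ts}|^2
Ts = 1.0 / fs
f = np.linspace(0, fs/2, 512)
denom = np.abs(1 + sum(a[m] * np.exp(-2j*np.pi*(m+1)*f*Ts) for m in range(p)))**2
S = sigma2 * Ts / denom
print(a, sigma2)
```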
How can I find the model coefficients ($a_{p,1}$, $a_{p,2}$, ..., $a_{p,p}$) and the noise variance $\sigma^2_p$ with Mathematica? I think that ARProcess is the right function, but I do not know how to use it.
|
These results hold for absolutely continuous random variables (see this question for the sample median behavior for discrete random variables). $F$ will be the cumulative distribution function, $f$ the probability density function, $\mu$ the mean. A hat will indicate sample quantities, and a bar sample means. Denote the quantile associated with probability $p$ by $\xi_p$, so that the median is $\xi_{1/2}\equiv \theta$.
The Bahadur representation of a sample quantile is
$$\hat \xi_p = \xi_p +\frac{1-\hat F_n(\xi_p)-(1-p)}{f(\xi_p)} +R_{n,p},\;\; R_{n,p} =o(1/\sqrt n)$$
$$\Rightarrow \sqrt n(\hat \xi_p - \xi_p) = \sqrt n\frac{1-\hat F_n(\xi_p)-(1-p)}{f(\xi_p)} +\sqrt nR_{n,p} \tag{1}$$
Define the indicator function $I_i\equiv I(X_i>\xi_p)$ (this is the complementary indicator function of the one the OP uses). We have that
$$E(I_i)= P(X_i>\xi_p) = 1-p,\;\; \operatorname{Var}(I_i)=p(1-p)\tag {2}$$
and also
$$\bar S_I \equiv \frac 1n\sum_{i=1}^nI_i = 1-\hat F_n(\xi_p), \;\; E(\bar S_I) = 1-p, \operatorname{Var}(\bar S_I) = p(1-p)/n \tag {3}$$
Using $(2)$ and $(3)$ we can write $(1)$ as
$$\sqrt n(\hat \xi_p - \xi_p) = \sqrt n\frac{1}{f(\xi_p)}\left(\bar S_I-E(\bar S_I)\right) +\sqrt nR_{n,p} \tag{4}$$
The quantity in the big parentheses is a centered sample mean, the term in front of it is a constant, and the last term vanishes asymptotically, so we are led to the CLT for sample medians. But this is not what we are after.
Consider now the bivariate random vector $(X_i, I_i)'$. The covariance between the two components is
$$\operatorname{Cov}(X_i,I_i) = E(X_iI_i) - \mu(1-p) \tag{5}$$
By an application of a multivariate CLT, $(5)$ will also be the covariance of the asymptotic distribution of the bivariate vector $\sqrt n\big(\bar X -\mu,\, \bar S_I-E(\bar S_I)\big)'$.
We are interested in the (asymptotic) quantity
$$\operatorname{Cov}\Big(\sqrt n(\bar X -\mu),\sqrt n(\hat \xi_p - \xi_p)\Big) = \operatorname{Cov}\Big(\sqrt n(\bar X -\mu),\sqrt n\frac{1}{f(\xi_p)}\left(\bar S_I-E(\bar S_I)\right)\Big) $$
$$=\frac{1}{f(\xi_p)}\operatorname{Cov}\Big(\sqrt n(\bar X -\mu),\sqrt n\left(\bar S_I-E(\bar S_I)\right)\Big)$$
$$= \frac{1}{f(\xi_p)}\left(E(X_iI_i) - \mu(1-p)\right) \tag {6}$$
For the sample median, $(6)$ becomes
$$\operatorname{Cov}\Big(\sqrt n(\bar X -\mu),\sqrt n(\hat \theta - \theta)\Big) = \frac{1}{2f(\theta)}\left(2\int_{\theta}^{\infty}xf(x)dx - \mu\right) \tag{7}$$
Eq. $(7)$ is the same as what the OP found (remember, we have defined the indicator function complementarily). Now, Ferguson's paper gives this quantity as $E(|X-\theta|)/2f(\theta)$. We have to verify that this is the same as $(7)$.
$$E(|X-\theta|) = \int_{-\infty}^{\infty}|x-\theta|f(x)dx = \int_{-\infty}^{\theta}(\theta-x)f(x)dx+\int_{\theta}^{\infty}(x-\theta)f(x)dx$$
$$=\theta\int_{-\infty}^{\theta}f(x)dx - \int_{-\infty}^{\theta}xf(x)dx + \int_{\theta}^{\infty}xf(x)dx - \theta\int_{\theta}^{\infty}f(x)dx$$
Bring the terms not involving $x$ together and add and subtract $\int_{\theta}^{\infty}xf(x)dx$ to obtain
$$E(|X-\theta|) =\theta F(\theta) -\theta[1-F(\theta)] + \\+2\int_{\theta}^{\infty}xf(x)dx - \int_{-\infty}^{\theta}xf(x)dx - \int_{\theta}^{\infty}xf(x)dx$$
$$= \theta/2 - \theta/2 +2\int_{\theta}^{\infty}xf(x)dx -\int_{-\infty}^{\infty}xf(x)dx$$
$$\Rightarrow E(|X-\theta|) = 2\int_{\theta}^{\infty}xf(x)dx -\mu$$
So Ferguson's expression for the asymptotic covariance between sample mean and sample median is correct, without needing to impose $\mu = \theta$.
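As a sanity check (not part of the original derivation), a Monte Carlo simulation for the standard normal, where $\mu=\theta=0$, $f(\theta)=1/\sqrt{2\pi}$ and $E|X-\theta|=\sqrt{2/\pi}$, so the asymptotic covariance should equal $1$:

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 400, 10000
X = rng.normal(size=(reps, n))             # N(0,1): mu = theta = 0

means = np.sqrt(n) * X.mean(axis=1)
medians = np.sqrt(n) * np.median(X, axis=1)
cov_mc = np.mean(means * medians)          # both are already centred at 0

# Ferguson's value: E|X - theta| / (2 f(theta)) = sqrt(2/pi) * sqrt(2 pi) / 2 = 1
print(cov_mc)
```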
|
ISSN: 1078-0947
eISSN: 1553-5231
Discrete & Continuous Dynamical Systems - A
January 2006 , Volume 14 , Issue 1
Special Issue
Qualitative theory of nonlinear elliptic and parabolic problems
Abstract:
Interaction between nonlinearity and diffusion is a fascinating subject. Many interesting phenomena are known to result from such interaction, for example, generation of interfaces and singularities, formation of spatial and temporal patterns, propagation of waves, symmetrization and symmetry-breaking of solutions, and so on. Nonlinear diffusive systems have undergone a thorough investigation aimed at mathematical understanding of the mechanisms behind the phenomena.
The analysis serving this aim uses various classical and newly developed techniques relying on results from the bifurcation theory, singular perturbation theory, variational method, and the theory of finite and infinite dimensional dynamical systems. The qualitative theory of parabolic and elliptic equations has been developing extensively in the last few decades, and a lot of important, interesting and beautiful results have been obtained concerning the dynamics of solutions and qualitative description of steady states.
Abstract:
We study the degenerate logistic model described by the equation $u_t - \Delta u = au - b(x)u^p$ with standard boundary conditions, where $p>1$, $b$ vanishes on a nontrivial subset $\Omega_0$ of the underlying bounded domain $\Omega\subset R^N$ and $b$ is positive on $\Omega_+=\Omega\setminus \overline{\Omega}_0$. We consider the difficult case where $\partial\Omega_0\cap \partial \Omega\neq\emptyset$ and $\partial\Omega_+\cap \partial \Omega\neq\emptyset$, and examine the asymptotic behaviour of the solutions. By a detailed study of a singularly mixed boundary blow-up problem, we obtain some basic results on the dynamics of the model.
Abstract:
This paper is concerned with the dynamics of travelling spot solutions in two dimensions. Travelling spot solutions are constructed under the bifurcation structure with Jordan block type degeneracy. It is shown that if the velocity is very slow, such travelling spots possess a reflection property. To show this, we derive the reduced ordinary differential equations describing the dynamics of interacting travelling spots in RD systems by using center manifold theory. This reduction enables us to prove that two very slowly travelling spots reflect before collision as if they were elastic particles.
Abstract:
This paper examines the following question: Suppose that we have a reaction-diffusion equation or system such that some solutions which are homogeneous in space blow up in finite time. Is it possible to inhibit the occurrence of blow-up as a consequence of imposing Dirichlet boundary conditions, or other effects where diffusion plays a role? We give examples of equations and systems where the answer is affirmative.
Abstract:
In this paper we study solutions to reaction-diffusion equations in the bistable case, defined on the whole space in dimension $N$. The existence of solutions with cylindric symmetry is already known. Here we prove the uniqueness of these cylindric solutions whose level sets are curved Lipschitz graphs. Using a centre manifold-like argument, we also give the precise asymptotics of these level sets at infinity. In dimension 2, we classify all solutions under weak conditions at infinity. Finally, we also provide an alternative proof of the existence of these solutions in dimension 2, based on a continuation argument.
Abstract:
We study the existence of multiple positive stable solutions for
$ -\epsilon^2\Delta u(x) = u(x)^2(b(x)-u(x)) \ \mbox{in}\ \Omega, \quad$ $ \frac{\partial u}{\partial n}(x) = 0 \ \mbox{on}\ \partial\Omega.$
Here $\epsilon>0$ is a small parameter and $b(x)$ is a piecewise continuous function which changes sign. This type of equation appears in a population growth model of species with a saturation effect in biology.
Abstract:
The blowup behaviors of solutions to a scalar-field equation with the Robin condition are discussed. For some range of the parameter, there exist at least two positive solutions to the equation. Here, the blowup rate of the large solution and the scaling properties are discussed.
Abstract:
In this paper, we consider a Lotka-Volterra competition model with diffusion, and show that the global bifurcation structure of positive stationary solutions for the model is similar to that for a certain scalar reaction-diffusion equation. To do this, the comparison principle, the bifurcation theory, and the numerical verification are employed.
Abstract:
We study a Ginzburg-Landau energy in a one-dimensional ring with nonuniform thickness, where the nonuniformity is expressed by a piecewise-constant function. That is a simplified model describing a supercurrent in the superconducting ring. Then the Ginzburg-Landau equation with a discontinuous coefficient subject to periodic boundary conditions is derived as the Euler-Lagrange equation of the energy functional. Since the unknown variable of the equation is complex-valued, we can define the phase of a solution if the solution has no zero. The purpose of this article is to establish the existence of nontrivial solutions with no zero and to reveal the configuration of the phase of the solutions as the coefficient converges to zero in a set of subintervals. More precisely we control the convergence of the coefficient with a small positive parameter $\varepsilon$ having various orders in the subintervals and prove the convergence of the solutions to those of a limiting equation as $\varepsilon\to0$ together with the convergence rate. As a consequence, for small $\varepsilon$ most of the phase variation takes place on the subintervals where the coefficient converges to zero with the highest order. Finally we show the stability of those solutions.
Abstract:
We study the continuation of solutions of superlinear indefinite parabolic problems after the blow-up time. The nonlinearity is of the form $a(x)u^p$, where $p>1$ is subcritical and $a$ changes sign. Unlike the case $a>0$, the solutions will never blow up completely in the whole domain but only in a certain subdomain. In some cases we give a precise description of this subdomain. We also derive sufficient conditions for the blow-up of the associated energy.
Abstract:
We consider positive solutions of the equation $-\varepsilon^2 \Delta u + u = u^p$ in $\Omega$, where $\Omega \subseteq \R^n$, $p > 1$ and $\varepsilon$ is a small positive parameter. Neumann boundary conditions are imposed in general. We prove existence of solutions which concentrate at curves or manifolds in $\overline{\Omega}$ as $\varepsilon \to 0$.
Abstract:
This paper is concerned with the long time behavior for the evolution of a curve governed by the curvature flow with constant driving force in two-dimensional space. Especially, the asymptotic stability of a traveling wave whose shape is a line is studied. We deal with moving curves represented by the entire graphs on the $x$-axis. By studying the Cauchy problem, the asymptotic stability of traveling waves with spatially decaying initial perturbations and the convergence rate are obtained. Moreover we establish the stability result where initial perturbations do not decay to zero but oscillate at infinity. In this case, we prove that one of the sufficient conditions for asymptotic stability is that a given perturbation is asymptotic to an almost periodic function in the sense of Bohr at infinity. Our results hold true with no assumptions on the smallness of given perturbations, and include the curve shortening flow problem as a special case.
Abstract:
This paper is devoted to the analysis of a case of singularity formation in infinite time for a semilinear heat equation involving linear diffusion and superlinear convection. A feature to be noted is that blow-up happens not for the main unknown but for its derivative. The singularity builds up at the boundary. The formation of inner and outer regions is examined, as well as the matching between them. As a consequence, we obtain the precise exponential rates of blow-up in infinite time.
|
On a question on this site there is an explanation of the algorithm Knuth gives in
The Art Of Computer Programming to compute an approximation of $y = \log_bx$.
Now, I understand why it works; anyway, the only question arising in my mind is: how can we pre-compute a table of logarithms with arguments of the type $\frac{2^k}{2^k-1}$? Or, generally speaking, is there an algorithm to compute a good approximation of $y = \log_b\frac{a^k}{a^k-1}$, considering such a logarithm as a special case?
I see that the simplest case is when $a = b$. So we can write $y = k - \log_b(b^k-1)$. But then? Of course we cannot execute the same algorithm to compute $y = \log_b(b^k-1)$, for, unless $k=1$, the argument $x = b^k-1$ won't respect the initial condition of $1 \leq x < a$
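For what it's worth, the table entries themselves can be computed directly to any precision, since $\log_b\frac{a^k}{a^k-1} = -\log_b(1-a^{-k})$ and the Mercator series $-\ln(1-q)=\sum_{n\ge 1} q^n/n$ converges geometrically for $q=a^{-k}$. A sketch (the function name and cutoffs are my own choices):

```python
import math

def log_table(b, a=2, kmax=10, terms=60):
    """Entries log_b(a^k / (a^k - 1)) for k = 1..kmax.

    Uses -ln(1 - q) = sum_{n>=1} q^n / n with q = a^(-k); the tail after
    `terms` terms is below q^terms, so convergence is very fast.
    """
    table = []
    for k in range(1, kmax + 1):
        q = a ** (-k)
        s = sum(q ** n / n for n in range(1, terms + 1))   # -ln(1 - q)
        table.append(s / math.log(b))                      # convert ln to log_b
    return table

tbl = log_table(2)   # log2(2/1), log2(4/3), log2(8/7), ...
```

The first entry is exactly $\log_2 2 = 1$, which gives a quick sanity check on the series.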
Probably a solution would be to factorize $x$ in prime numbers and then sum the logarithms of each one, since once I read that Henry Briggs (who derived the fundamental idea behind this algorithm) found a clever way to take logarithms of prime numbers (this is Chapter Nine of his
Arithmetica Logarithmica: see here); but, you know, I had no more motivation to inform myself on that as I got to the first page of his book: "Logarithms are numbers which, adjoined to numbers in proportions, maintain equal differences".
I would rather like more "modern" explanations of the problem :)
|
equation
Banach Spaces of Solutions of the Helmholtz Equation in the Plane
Time Decay and Regularity of Solutions to the Wave Equation
Mapping Properties of a Projection Related to the Helmholtz Equation
Herglotz Wave Functions are the entire solutions of the Helmholtz equation which have an $L^2$ far-field pattern.
The Ces\'aro operator $\mathcal{C}_{\alpha}$ is defined by \begin{equation*} (\mathcal{C}_{\alpha}f)(x) = \int_{0}^{1}t^{-1}f\left( t^{-1}x \right)\alpha (1-t)^{\alpha -1}\,dt~, \end{equation*} where $f$ denotes a function on $\mathbb{R}$.
On Global Finite Energy Solutions of the Camassa-Holm Equation
We consider the Camassa-Holm equation with data in the energy space $H^1(R^1)$.
Sharp Global Well-Posedness for a Higher Order Schrodinger Equation
It is transferred to a boundary value problem for analytic functions and then further reduced to a singular integral equation, the unique solvability and an effective method of solution for which are established.
This paper provides a functional equation satisfied by the dichromatic sum function of rooted outer-planar maps.
By the equation, the dichromatic sum function can be found explicitly.
By means of our inequality and the method of Lyapunov function we study the stability of two kinds of large scale differential systems with time lag and the stability of a higher-order differential equation with time lag.
The Alternating Segment Crank-Nicolson scheme for one-dimensional diffusion equation has been developed in [1], and the Alternating Block Crank-Nicolson method for two-dimensional problem in [2].
In this paper for the two-dimensional diffusion equation, the net region is divided into bands, a special kind of block.
The error estimation of the Chebyshev spectral-difference method for the two-dimensional vorticity equation
We construct a Chebyshev spectral-difference scheme for solving two-dimensional vorticity equation with semi-homogeneous boundary conditions.
The Chebyshev spectral method with a restraint operator for the Burgers equation
For a differential equation, a theoretical proof of the relationship between the symmetry and the one-parameter invariant group is given: the relationship between symmetry and the group-invariant solution is presented.
As an application, some solutions of the KdV equation are discussed.
Exact envelope wave solution to nonlinear Schrödinger equation with dissipative term
|
Expectations and conditional expectations are either random or fixed points. It depends upon your choice of axioms. Unfortunately, I have never found a good book that fairly describes both well.
On the Bayesian side, I would suggest the very polemical "Probability Theory: The Logic of Science" by E.T. Jaynes. On the null hypothesis side, I would suggest the undergraduate text "John Freund's Mathematical Statistics." The late John Freund wrote my introductory undergraduate text back when dinosaurs roamed the Earth. The undergraduate text for "Mathematical Statistics" as opposed to "Elementary Statistics" provides a good grounding in null hypothesis methods.
Most graduate students are trained in null hypothesis methodologies. In that axiomatic framework, parameters are fixed points and data is random. Because the null hypothesis fixes the parameter space, all randomness is due to chance alone. The probability statement concerns a result as extreme or more extreme than the observed result. Randomness is chance.
Bayesian methods are orthogonal to null hypothesis methods. Parameters are random and data is fixed. After all, you saw the data; there is no uncertainty about it. It is fixed. The data fixes the sample space; all randomness is due to uncertainty about the location of the parameter. The probability statement is about the truth of a hypothesis given the observed data. Randomness is defined as uncertainty.
You will need to be mentally careful when reading books on either one as they often define the same words with fundamentally different meanings. The simple example is the definition of an expectation.
The expectation under null hypothesis thinking is $$E(\tilde{x})=\int_{\tilde{x}\in\chi}\tilde{x}p(\tilde{x})\mathrm{d}\tilde{x},$$ while the expectation under Bayesian thinking is $$E(\theta)=\int_{\theta\in\Theta}\theta{p}(\theta)\mathrm{d}\theta.$$
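The two integrals can be contrasted numerically. A small Monte Carlo sketch (the Beta-Bernoulli setup and all the numbers are illustrative assumptions of mine, not taken from the text):

```python
import numpy as np

rng = np.random.default_rng(0)

# Null-hypothesis side: theta is a fixed point, the data x~ is random.
theta_fixed = 0.3
x = rng.binomial(n=1, p=theta_fixed, size=200_000)   # draws from p(x~)
e_x = x.mean()                                       # approximates E(x~), roughly 0.3

# Bayesian side: the data is fixed, theta is random.
# With a Beta(1, 1) prior and fixed observed data of 7 successes in 20
# trials, the posterior p(theta) is Beta(1 + 7, 1 + 13).
theta = rng.beta(1 + 7, 1 + 13, size=200_000)        # draws from p(theta)
e_theta = theta.mean()                               # approximates E(theta), roughly 8/22
```

The first expectation integrates over the sample space $\chi$ with $\theta$ held fixed; the second integrates over the parameter space $\Theta$ with the data held fixed.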
Using Keynesian notation, a Bayesian test of a hypothesis is $\Pr(\theta|X)$ while a Frequentist test is $\Pr(X|\theta)$.
Some terms, such as conditional probability, don't resemble the meaning in the other framework. All Bayesian inference is called conditional probability. A conditional expectation in a Bayesian framework is a posterior expectation $E(\theta|X)$ and is a random variable. An unconditional expectation would be a prior expectation $E(\theta)$.
On the null hypothesis side, it is a bit more complicated. An unconditional expectation is just the expectation of the distribution involved, $E(P_\theta(X))$. Conditional expectation is more complex. It depends on whether you are conditioning on a stochastic or non-stochastic variable. The added richness to the discussion comes from the differing role the sample space has. On the Bayesian side, all data is fixed and the remainder of the sample space is discarded as irrelevant.
As for links, on the null hypothesis side consider reading Deborah Mayo whose area is the philosophy of science. Her website is https://errorstatistics.com/
Alternatively, you could read Cosma Shalizi who is a statistician at http://www.stat.cmu.edu/~cshalizi/
On the Bayesian side, consider Andrew Gelman, a statistician, at http://www.stat.columbia.edu/~gelman/
or consider the psychologist Eric-Jan Wagenmakers at https://www.ejwagenmakers.com/
There is also a good posting on an existing stack exchange via the idea of an interval. The post constructs Frequentist confidence intervals for a data set of cookies versus the same Bayesian credible intervals (also called credible sets). It also gives a good idea of how the two groups think of conditioning. Since the intervals do not match and do not have the same properties it gives a way to think about the consequence of considering one thing random versus another. It is at https://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval
|
Can you interchange limits and supremums of functions?
That is, does $$\lim_{n \to \infty} \sup_{x \in X} f_n(x) = \sup_{x \in X} \lim_{n \to \infty} f_n(x) ?$$
Thank you!
No, in general you cannot do that. Imagine if $X$ is $\mathbb{R}$ and $f_n(x)$ is zero except on $[n,n+1]$ where it is $1$. Work out both sides of the equation in this case...
No. Consider, for example $$f_n(x) = \frac{1}{(x-n)^2+1}$$ and $X=\mathbb R$. The supremum of each $f_n$ (and thus the limit of the suprema) is $1$, but the pointwise limit at each $x$ (and thus the supremum of the limits) is $0$.
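A quick numerical check of this counterexample on a finite grid (the grid bounds and cutoffs below are arbitrary choices):

```python
import numpy as np

x = np.linspace(-50.0, 150.0, 20_001)        # a grid standing in for X = R

def f(n, x):
    return 1.0 / ((x - n) ** 2 + 1.0)

# sup_x f_n(x) = 1 for every n (attained at x = n),
# so lim_n sup_x f_n(x) = 1 ...
sups = [f(n, x).max() for n in range(1, 51)]

# ... while for each fixed x, f_n(x) -> 0 as n -> infinity,
# so sup_x lim_n f_n(x) = 0.  A large n stands in for the limit:
far = f(10_000, x)
```

The suprema stay pinned at 1 while the pointwise values collapse toward 0, so the two sides of the equation disagree.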
You'll have better luck if you can assume that the $f_n$s converge
uniformly on $X$.
$$f_n(x)\leq \sup_{x\in A} f_n(x) $$ $$\Rightarrow {\liminf}_{n\rightarrow \infty} f_n(x)\leq {\liminf}_{n\rightarrow \infty}\sup_{x\in A}f_n(x) $$ $$\Rightarrow \sup_{x\in A}{\liminf}_{n\rightarrow \infty} f_n(x)\leq {\liminf}_{n\rightarrow \infty}\sup_{x\in A}f_n(x) $$
|
Limit ordinal Properties
All limit ordinals are equal to their union.
All limit ordinals contain an ordinal $\alpha$ if and only if they contain $\alpha + 1$.
$\omega$ is the smallest nonzero limit ordinal, and the smallest ordinal of infinite cardinal number.
$(\omega + \omega)$, also written $( \omega \cdot 2 )$, is the next limit ordinal. $( \omega \cdot \alpha )$ is a limit ordinal for any ordinal $\alpha$.
Types of Limits
A limit ordinal $\alpha$ is called
additively indecomposable (or a $\gamma$ number) if it cannot be written as the sum of two ordinals less than $\alpha$. These numbers are any ordinal of the form $\omega^\beta$ for $\beta$ an ordinal. The smallest is written $\gamma_0$, and the smallest larger than that is $\gamma_1$, etc.
A limit ordinal $\alpha$ is called
multiplicatively indecomposable (or a $\delta$ number) if it cannot be written as the product of two ordinals less than $\alpha$. These numbers are any ordinal of the form $\omega^{\omega^{\beta}}$. The smallest is written $\delta_0$, and the smallest larger than that is $\delta_1$, etc.
Interestingly, this pattern does not continue with
exponentially indecomposable (or $\varepsilon$ numbers) ordinals being $\omega^{\omega^{\omega^\beta}}$, but rather $\varepsilon_0=\sup_{n<\omega}f^n(0)$ with $f(\alpha)=\omega^\alpha$ and $f^n(\alpha)=f(f(\dotsb f(\alpha)\dotsb))$ with $n$ iterations of $f$. It is the smallest fixed point of $f$. The next $\varepsilon$ number (i.e. the next fixed point of $f$) is then $\varepsilon_1=\sup_{n<\omega}f^n(\varepsilon_0+1)$, and more generally the $(\alpha+1)$th fixed point of $f$ is $\varepsilon_{\alpha+1}=\sup_{n<\omega}f^n(\varepsilon_\alpha+1)$, also $\varepsilon_\lambda=\bigcup_{\alpha<\lambda}\varepsilon_\alpha$ for limit $\lambda$.
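Unwinding the iteration for the smallest fixed point: $f^0(0)=0$, $f^1(0)=\omega^0=1$, $f^2(0)=\omega$, $f^3(0)=\omega^\omega$, and so on, hence $\varepsilon_0=\sup\{0,\,1,\,\omega,\,\omega^\omega,\,\omega^{\omega^\omega},\dotsc\}$.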
The
tetrationally indecomposable ordinals (or $\zeta$ numbers) are then the ordinals $\zeta$ such that $\varepsilon_\zeta=\zeta$. These are obtained similarly as $\varepsilon$ numbers by taking $f(\alpha)=\varepsilon_\alpha$. Pentationally indecomposable ordinals (or $\eta$ ordinals) are then obtained by taking $f(\alpha)=\zeta_\alpha$, and so on.
This pattern continues on with the Veblen Hierarchy, continuing up to the Feferman-Schütte ordinal $\Gamma_0$, the smallest ordinal such that this process does not generate any larger kind of ordinals.
|
The terms can mean almost anything, but I will try to present here
one way in which the terms "parallel algorithms" and "distributed algorithms" are understood. Here we interpret "distributed algorithms" from the perspective of "network computing" (think: algorithms that keep the Internet running).
I will use as a running example the problem of finding a
proper 3-colouring of a directed path (linked list). I will first describe the problem from the perspective of "traditional" algorithms — those are also known as centralised algorithms, to emphasise that they are not distributed, or sequential algorithms, to emphasise that they are not parallelised.
Centralised sequential algorithms
The model of computing is e.g. the familiar
RAM model.
The input is a linked list that is stored in the main memory of the computer. There is a read-only array $x$ with $n$ elements; node number $x[i]$ is the successor of node number $i$.
The output will be also stored in the main memory of the computer. There is a write-only array $y$ with $n$ elements.
We need to find a proper colouring of the list with $3$ colours. That is, for each index $i$ we must choose a colour $y[i] \in \{1,2,3\}$ such that $y[i] \ne y[j]$ whenever node $j$ is the successor of node $i$.
There is a single processor that can directly access any part of the main memory. In one time unit, the processor can read from main memory, write to main memory, or perform elementary operations such as arithmetic or comparisons. The
running time of the algorithm is defined to be the number of time units until the algorithms stops.
Clearly, the problem can be solved in time $O(n)$, and this is optimal. For the upper bound, follow the linked list and colour the nodes with e.g. colours $1,2,1,2,\dotsc$. For the lower bound, observe that we need to write $\Omega(n)$ elements of output.
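The upper-bound argument can be sketched in a few lines (the head node and the end-of-list sentinel $-1$ are my own assumptions about the representation):

```python
def colour_path(x, head=0):
    """2-colour a linked list given by the successor array x.

    Follows the list from `head`, alternating colours 1, 2, 1, 2, ...;
    the sentinel -1 marks the end of the list.  One pass, so O(n) time.
    """
    y = [0] * len(x)
    node, colour = head, 1
    while node != -1:
        y[node] = colour
        colour = 3 - colour       # alternate between colours 1 and 2
        node = x[node]
    return y

y = colour_path([3, 2, -1, 1])    # the path 0 -> 3 -> 1 -> 2
```

Here `y == [1, 1, 2, 2]`: node 0 and its successor 3 receive colours 1 and 2, and so on along the path, so every node differs from its successor.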
Parallel algorithms
The only difference between parallel and sequential algorithms is that we will use the
PRAM model instead of the RAM model. In the PRAM model we can consider any number of processors, but here a particularly interesting case is what happens if there are precisely $n$ processors.
While we will have
multiple processors, there is still just one main memory. As before, the input is stored as a single array in the main memory, and the output will be written in a single array in the main memory.
Now in one time unit, each processor in parallel can read from main memory, write to main memory, or perform elementary operations such as arithmetic or comparisons. Some care is needed with memory accesses that may conflict. For the sake of concreteness, let us focus on the CREW PRAM model: the processors may freely read any part of the memory, but concurrent writes are forbidden.
Now in this setting it is not at all obvious what is the time complexity of $3$-colouring linked lists. Perhaps we could solve the problem in $O(1)$ time, as we have $n$ processors, and only $n$ units of input to read and $n$ units of output to write?
However, it turns out that the time complexity of this problem is precisely $\Theta(\log \log^* n)$. So it can be solved in
almost constant time, but not quite.
Distributed algorithms
Now things change radically. The model of computing is e.g. the
LOCAL model, which has very little resemblance to RAM or PRAM.
There is no "main memory". There are no "arrays".
We are only given a
computer network that consists of $n$ nodes. Each node is labelled with a unique identifier (say, a number from $\{1,2,\dotsc,n\}$). Each node has two communication ports: one port that connects the node with its successor, and one port that connects it with its predecessor.
The same (unknown) computer network is both our input and the tool that we are supposed to use to solve the problem. Each node is a computational entity that has to output its own colour, and the colours have to form a proper colouring of the network (i.e., my colour has to be different from the colours of my neighbours).
Note that everything is distributed: no single entity holds the entire input, and no single entity needs to know the entire output.
All nodes run the same algorithm. In one time unit, all nodes in parallel can send messages to their neighbours, receive messages from their neighbours, or perform elementary operations. The
running time of the algorithm is defined to be the number of time units until all nodes have stopped and produced their local outputs.
Again, it is not at all obvious what is the time complexity of $3$-colouring. It turns out that it is precisely $\Theta(\log^* n)$.
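The $O(\log^* n)$ upper bound comes from Cole-Vishkin-style colour reduction. A centralised simulation of one round (this is not message-passing code, and the tail-node convention is my own choice):

```python
def cv_round(succ, colour):
    """One Cole-Vishkin colour-reduction round on a directed path.

    Each node v with colour c compares c with its successor's colour,
    takes the index i of the lowest bit where they differ, and recolours
    itself 2*i + (bit i of c).  Adjacent nodes provably keep distinct
    colours, and O(log* n) rounds shrink n initial colours to O(1).
    """
    new = {}
    for v, c in colour.items():
        # the tail has no successor; compare against a flipped copy of c
        other = colour[succ[v]] if succ[v] is not None else c ^ 1
        d = c ^ other
        i = (d & -d).bit_length() - 1       # index of the lowest differing bit
        new[v] = 2 * i + ((c >> i) & 1)
    return new

# a directed path on 64 nodes; unique identifiers serve as initial colours
succ = {v: v + 1 for v in range(63)}
succ[63] = None
colour = {v: (37 * v) % 64 for v in range(64)}   # a permutation of 0..63
for _ in range(4):
    colour = cv_round(succ, colour)
```

After a handful of rounds the palette has collapsed from 64 colours to at most 6, while the colouring stays proper throughout; a few extra local steps then bring 6 colours down to 3.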
From this perspective:
Research on parallel algorithms is primarily about
understanding how to harness the computational power of a massively parallel computer. For practical applications, consider high-performance computing, number-crunching, multicore, GPU computing, OpenMP, MPI, grids, clouds, clusters, etc.
Research on distributed algorithms is primarily about
understanding which tasks can be solved efficiently in a distributed system. For practical applications, consider computer networks, communication networks, social networks, markets, biological systems, chemical systems, physical systems, etc.
For example:
If you want to know how to multiply two huge matrices efficiently with modern computer hardware, it may be a good idea to first have a look at research related to "parallel algorithms".
If you want to know if there is any hope people could form stable marriages in their real-world social network, by just exchanging information with those whom they know, it may be a good idea to first have a look at research related to "distributed algorithms".
Once again, I emphasise that this is just one way in which the terms are used. There are many other interpretations. However, this is perhaps the most interesting interpretation in the sense that e.g. PRAM and LOCAL are radically different models.
As many other answers show, another possible interpretation is to understand "distributed algorithms" from the perspective of e.g. distributed high-performance computing (computer clusters, cloud computing, MPI, etc.). Then you could indeed say that distributed algorithms are not necessarily that different from e.g. I/O efficient parallel algorithms. At least if we put aside e.g. issues related to fault tolerance.
Incidentally, there is apparently some interest in the community to make the terminology slightly less confusing. People occasionally use the term
distributed graph algorithms (cf. http://adga.hiit.fi/) or the term network computing to emphasise the perspective that I described here. However, there is not that much pressure to do that, as we can use formally precise terms such as "LOCAL" and "CONGEST" for distributed graph algorithms, "PRAM" for parallel algorithms, and e.g. "congested clique" and "BSP" (bulk synchronous parallel) for various in-between cases.
References
|
A contradiction is usually represented as $A \land \lnot A$. It's typical in intuitionistic logic to
define $\lnot A$ as $A \Rightarrow \bot$. It's clear we can derive $\bot$ from $A \land \lnot A$. Ultimately, a contradiction will be a hypothetical derivation of $\bot$ as the very definition of $\lnot$ suggests. It will be hypothetical because otherwise your logic is inconsistent.
The point Harper is making is that to prove something is to
have a proof and to refute something is to have a proof that it implies $\bot$. However, you can easily be in the situation that you can (meta-logically) prove that you are unable to provide either a proof or refutation. In such a situation, the proposition is neither constructively true nor false.
A way to understand classical logic and contrast it to the above is the following (essentially Kolmogorov's double negation interpretation): we say a proposition is
false if it implies a contradiction, i.e. it implies $\bot$. A proposition is true if we can prove that it can't be contradicted, i.e. we can show assuming it is false leads to a contradiction. In symbols, $A$ is false in this sense if $A \Rightarrow \bot$, as usual. $A$ is true in this sense if $\lnot A \Rightarrow \bot$, i.e. $\lnot \lnot A$ is provable. You can show that the Law of the Excluded Middle holds constructively if we interpret "true" and "false" in this sense. That is, you can prove that $\lnot \lnot (\lnot \lnot A \lor \lnot A)$ holds constructively. More compactly, you can show $\lnot \lnot \lnot A \Rightarrow \lnot A$. With this notion of "true" and "false", we can say that a proposition is true if we can prove that no refutation exists. By contrast, constructively a proposition can fail to be constructively true even if we can demonstrate within the system that no refutation can exist.
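Both of the compact claims at the end can be checked in a proof assistant; a minimal Lean 4 sketch (the theorem names are mine):

```lean
-- ¬¬¬A → ¬A: a triple negation collapses to a single one.
theorem triple_neg {A : Prop} (h : ¬¬¬A) : ¬A :=
  fun a => h (fun na => na a)

-- ¬¬(¬¬A ∨ ¬A): excluded middle for "true/false" in the double-negation sense.
theorem nn_em {A : Prop} : ¬¬(¬¬A ∨ ¬A) :=
  fun h => h (Or.inr (fun a => h (Or.inl (fun na => na a))))
```

Both proofs use nothing beyond intuitionistic implication and disjunction, which is the point: no appeal to classical axioms is needed.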
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
I would like to generate an isotropic gaussian random field described by a power spectrum $P(k)$ on a 3D grid which represents spherical polar coordinates (i.e., the angular separation between pixels at each slice in the radial direction is equal and each pixel represents a solid angle).
I am currently generating a random field on a cartesian grid by
1. Filling a 3D grid in Fourier space $(k_x, k_y, k_z)$ with random numbers drawn from a normal distribution with standard deviation $\sigma = \frac{P(k)}{2}$ where $k = \sqrt{k_x^2 + k_y^2 + k_z^2}$
2. Imposing Hermitian symmetry to ensure a real field
3. Fourier transforming into real space
My only thought is that I could do this process on a high resolution cartesian grid and average into spherical bins, though this seems convoluted.
Is there a more efficient way that I could generate this directly?
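For concreteness, here is a compact sketch of the cartesian-grid approach; it colours real white noise in Fourier space, which makes the Hermitian symmetry automatic rather than imposed by hand (the power-law spectrum is only a placeholder):

```python
import numpy as np

def gaussian_random_field(n, pk, seed=0):
    """Gaussian random field on an n^3 cartesian grid with spectrum pk(k).

    Multiplies the FFT of real white noise by sqrt(pk(k)); because the
    input noise is real, the coloured field is real after the inverse FFT.
    """
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n, n))
    k1d = np.fft.fftfreq(n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    safe_k = np.where(kmag > 0, kmag, 1.0)           # avoid evaluating pk(0)
    amp = np.where(kmag > 0, np.sqrt(pk(safe_k)), 0.0)
    return np.fft.ifftn(np.fft.fftn(noise) * amp).real

field = gaussian_random_field(32, lambda k: k**-3.0)  # placeholder power law
```

Zeroing the $k=0$ amplitude gives a mean-zero field; the question of doing this directly on a spherical-polar grid is of course still open above.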
|
Algebra
Algebra is a part of mathematics (often called math in the United States and maths in the United Kingdom [1] ). It uses variables to represent a value that is not yet known. When an equals sign (=) is used, this is called an equation. A very simple equation using a variable is:
2 + 3 = x In this example,
x = 5, or it could also be said, "x is five". This is called
solving for x. [2]
Algebra can be used to solve real problems because the rules of algebra work in real life and numbers can be used to represent the values of real things. Physics, engineering and computer programming are areas that use algebra all the time. It is also useful to know in surveying, construction and business, especially accounting.
People who do algebra need to know the rules of numbers and mathematical operations used on numbers, starting with adding, subtracting, multiplying, and dividing. More advanced operations involve exponents, starting with squares and square roots. Many of these rules can also be used on the variables, and this is where it starts to get interesting.
Algebra was first used to solve equations and inequalities. Two examples are linear equations (the equation of a line,
y=mx+b) and quadratic equations, which has variables that are squared (power of two, a number that is multiplied by itself, for example:
2*2, 3*3, x*x). How to factor polynomials is needed for quadratic equations.
History
Early forms of algebra were developed by the Babylonians and the Greeks. However the word "algebra" is a Latin form of the Arabic word
Al-Jabr ("restoration") and comes from a mathematics book Al-Maqala fi Hisab-al Jabr wa-al-Muqabilah ("Essay on the Computation of Restoration and Balancing") written in the 9th century by a famous Persian mathematician, Muhammad ibn Mūsā al-Khwārizmī, who was a Muslim born in Khwarizm in Uzbekistan. He flourished under Al-Ma'moun in Baghdad, Iraq through 813-833 AD, and died around 840 AD. The book was brought into Europe and translated into Latin in the 12th century. The book was then given the name 'Algebra'. (The ending of the mathematician's name, al-Khwarizmi, was changed into a word easier to say in Latin, and became the English word algorithm.) [3]
Examples
Here is a simple example of an algebra problem:
Sue has 12 jellybeans, Ann has 24 jellybeans. They decide to share so that they have the same number of jellybeans.
These are the steps you can use to solve the problem:
To have the same number of jellybeans, Ann has to give some to Sue. Let x represent the number of jellybeans Ann gives to Sue. Sue's jellybeans, plus x, must be the same as Ann's jellybeans minus x. This is written as:
12 + x = 24 - x
Subtract 12 from both sides of the equation. This gives:
x = 12 - x. (What happens on one side of the equals sign must happen on the other side too, for the equation to still be true. So in this case when 12 was subtracted from both sides, there was a middle step of
12 + x - 12 = 24 - x - 12. After a person is comfortable with this, the middle step is not written down.)
Add x to both sides of the equation. This gives:
2x = 12
Divide both sides of the equation by 2. This gives
x = 6. The answer is six. If Ann gives Sue 6 jellybeans, they will have the same number of jellybeans.
To check this, put 6 back into the original equation wherever x was:
12 + 6 = 24 - 6
This gives
18=18, which is true. They both now have 18 jellybeans.
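The same steps can be carried out by a computer. A tiny sketch (my own illustration, mirroring the steps above):

```python
# Solve 12 + x = 24 - x using the same steps as above.
left_number, right_number = 12, 24
# Adding x to both sides and subtracting 12 gives 2x = 24 - 12.
x = (right_number - left_number) / 2
# Check the answer, just like putting 6 back into the equation.
assert left_number + x == right_number - x    # 18 = 18
```

The answer found this way is again 6 jellybeans.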
With practice, algebra can be used when faced with a problem that is too hard to solve any other way. Problems such as building a freeway, designing a cell phone, or finding the cure for a disease all require algebra.
Writing algebra
As in most parts of mathematics, adding
z to y (or y plus z) is written as
y + z. Subtracting
z from y (or y minus z) is written as
y − z. Dividing
y by z (or y over z: [math]y \over z[/math]) is written as
y ÷ z or
y/z.
y/z is more commonly used.
In algebra, multiplying y by z (or y times z) can be written in four ways: y × z, y * z, y·z, or just yz. The multiplication symbol "×" is usually not used, because it looks too much like the letter x, which is often used as a variable. Also, when multiplying a larger expression, parentheses can be used: y (z+1).
When we multiply a number and a letter in algebra, we write the number in front of the letter: 5 × y = 5y. When the number is 1, the 1 is not written, because 1 times any number is that number (1 × y = y) and so it is not needed.
Functions and Graphs
An important part of algebra is the study of functions, since functions often appear in equations that we are trying to solve. A function is like a box you can put a number or numbers into and get a certain number out. When using functions, graphs can be powerful tools in helping us to study the solutions to equations.
A graph is a picture that shows all the values of the variables that make the equation or inequality true. Usually this is easy to make when there are only one or two variables. The graph is often a line; if the line does not bend or go straight up-and-down, it can be described by the basic formula y = mx + b. The variable b is the y-intercept of the graph (where the line crosses the vertical axis) and m is the slope or steepness of the line. This formula applies to the coordinates of a graph, where each point on the line is written (x, y).
In some math problems, like the equation for a line, there can be more than one variable (x and y in this case). To find points on the line, one variable is changed. The variable that is changed is called the "independent" variable. Then the math is done to make a number. The number that is made is called the "dependent" variable. Most of the time the independent variable is written as x and the dependent variable is written as y, for example, in y = 3x + 1. This is often put on a graph, using an x axis (going left and right) and a y axis (going up and down). It can also be written in function form: f(x) = 3x + 1. So in this example, we could put in 5 for x and get y = 16. Putting in 2 for x would get y = 7, and putting in 0 for x would get y = 1. So there is a line going through the points (5,16), (2,7), and (0,1), as seen in the graph to the right.
If x has a power of 1, it is a straight line. If it is squared or some other power, it will be curved. If the equation uses an inequality (< or >), then usually part of the graph is shaded, either above or below the line.
Rules of algebra
In algebra, there are a few rules that can be used for further understanding of equations. These are called the rules of algebra. While these rules may seem senseless or obvious, it is wise to understand that these properties do not hold throughout all branches of mathematics. Therefore, it will be useful to know how these axiomatic rules are declared, before taking them for granted. Before going on to the rules, reflect on two definitions that will be given.
Opposite - the opposite of [math]a[/math] is [math]-a[/math].
Reciprocal - the reciprocal of [math]a[/math] is [math]\frac{1}{a}[/math].
Rules
Commutative property of addition
'Commutative' means that a function has the same result if the numbers are swapped around. In other words, the order of the terms in an equation does not matter. When the operator of two terms is an addition, the 'commutative property of addition' is applicable. In algebraic terms, this gives [math]a + b = b + a[/math].
Note that this does not apply for subtraction! (i.e. [math]a - b \ne b - a[/math])
Commutative property of multiplication
When the operator of two terms is a multiplication, the 'commutative property of multiplication' is applicable. In algebraic terms, this gives [math]a \cdot b = b \cdot a[/math].
Note that this does not apply for division! (i.e. [math]\frac{a}{b} \ne \frac{b}{a}[/math], when [math]a \neq b [/math])
Associative property of addition
'Associative' refers to the grouping of numbers. The associative property of addition implies that, when adding three or more terms, it doesn't matter how these terms are grouped. Algebraically, this gives [math]a + (b + c) = (a + b) + c[/math]. Note that this does not hold for subtraction, e.g. [math]1 = 0 - (0 - 1) \neq (0 - 0) - 1 = -1[/math] (see the distributive property).
Associative property of multiplication
The associative property of multiplication implies that, when multiplying three or more terms, it doesn't matter how these terms are grouped. Algebraically, this gives [math]a \cdot (b \cdot c) = (a \cdot b) \cdot c[/math]. Note that this does not hold for division, e.g. [math]2 = 1/(1/2) \neq (1/1)/2 = 1/2[/math].
Distributive property
The distributive property states that the multiplication of a number by another term can be distributed. For instance: [math]a \cdot (b + c) = ab + ac[/math]. (Do
not confuse this with the associative properties! For instance, [math]a \cdot (b + c) \ne (a \cdot b) + c[/math].)
Additive identity property
An 'identity' is a number that leaves other numbers unchanged when combined with them. The additive identity property states that the sum of any number and 0 is that number: [math]a + 0 = a[/math]. This also holds for subtraction: [math]a - 0 = a[/math].
Multiplicative identity property
The multiplicative identity property states that the product of any number and 1 is that number: [math]a \cdot 1 = a[/math]. This also holds for division: [math]\frac{a}{1} = a[/math].
Additive inverse property
The additive inverse property is somewhat like the opposite of the additive identity property. It states that the sum of a number and its opposite is 0. Algebraically, [math]a + (-a) = a - a = 0[/math].
Multiplicative inverse property
The multiplicative inverse property states that the product of a nonzero number and its reciprocal is 1. Algebraically, [math]a \cdot \frac{1}{a} = \frac{a}{a} = 1[/math] (for [math]a \neq 0[/math]).
Advanced Algebra
In addition to "elementary algebra", or basic algebra, there are advanced forms of algebra, taught in colleges and universities, such as abstract algebra, linear algebra, and universal algebra. This includes how to use a matrix to solve many linear equations at once.
Abstract algebra is the study of the kinds of objects that are found in equations, for example polynomials and matrices. These objects are known as algebraic structures. Understanding the properties of more complex algebraic structures can help us solve more complex equations.
Many math problems are about physics and engineering. In many of these physics problems time is a variable, written with the letter t. Using the basic ideas in algebra can help reduce a math problem to its simplest form, making it easier to solve difficult problems. Energy is e, force is f, mass is m, acceleration is a, and the speed of light is sometimes c. This is used in some famous equations, like f = ma and e = mc^2 (although more complex math beyond algebra was needed to come up with that last equation).
Recall that we were able to analyze all geometric series "simultaneously'' to discover that $$\sum_{n=0}^\infty kx^n = {k\over 1-x},$$ if $|x|< 1$, and that the series diverges when $|x|\ge 1$. At the time, we thought of $x$ as an unspecified constant, but we could just as well think of it as a variable, in which case the series $$\sum_{n=0}^\infty kx^n$$ is a function, namely, the function $k/(1-x)$, as long as $|x|< 1$. While $k/(1-x)$ is a reasonably easy function to deal with, the more complicated $\sum kx^n$ does have its attractions: it appears to be an infinite version of one of the simplest function types—a polynomial. This leads naturally to the questions: Do other functions have representations as series? Is there an advantage to viewing them in this way?
The geometric series has a special feature that makes it unlike a typical polynomial—the coefficients of the powers of $x$ are the same, namely $k$. We will need to allow more general coefficients if we are to get anything other than the geometric series.
Definition 13.8.1 A power series has the form $$\ds\sum_{n=0}^\infty a_nx^n,$$ with the understanding that $\ds a_n$ may depend on $n$ but not on $x$.
Example 13.8.2 $\ds\sum_{n=1}^\infty {x^n\over n}$ is a power series. We can investigate convergence using the ratio test: $$ \lim_{n\to\infty} {|x|^{n+1}\over n+1}{n\over |x|^n} =\lim_{n\to\infty} |x|{n\over n+1} =|x|. $$ Thus when $|x|< 1$ the series converges and when $|x|>1$ it diverges, leaving only two values in doubt. When $x=1$ the series is the harmonic series and diverges; when $x=-1$ it is the alternating harmonic series (actually the negative of the usual alternating harmonic series) and converges. Thus, we may think of $\ds\sum_{n=1}^\infty {x^n\over n}$ as a function from the interval $[-1,1)$ to the real numbers.
A bit of thought reveals that the ratio test applied to a power series will always have the same nice form. In general, we will compute $$ \lim_{n\to\infty} {|a_{n+1}||x|^{n+1}\over |a_n||x|^n} =\lim_{n\to\infty} |x|{|a_{n+1}|\over |a_n|} = |x|\lim_{n\to\infty} {|a_{n+1}|\over |a_n|} =L|x|,$$ assuming that $\ds \lim |a_{n+1}|/|a_n|$ exists. Then the series converges if $L|x|< 1$, that is, if $|x|< 1/L$, and diverges if $|x|>1/L$. Only the two values $x=\pm1/L$ require further investigation. Thus the series will definitely define a function on the interval $(-1/L,1/L)$, and perhaps will extend to one or both endpoints as well. Two special cases deserve mention: if $L=0$ the limit is $0$ no matter what value $x$ takes, so the series converges for all $x$ and the function is defined for all real numbers. If $L=\infty$, then no matter what value $x$ takes the limit is infinite and the series converges only when $x=0$. The value $1/L$ is called the radius of convergence of the series, and the interval on which the series converges is the interval of convergence.
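As a numerical illustration (a Python sketch, not part of the text), the ratio $|a_{n+1}|/|a_n|$ for Example 13.8.2, where $a_n = 1/n$, tends to $L = 1$, giving radius of convergence $1/L = 1$:

```python
from fractions import Fraction

# a_n = 1/n for the series sum x^n / n (Example 13.8.2)
def a(n):
    return Fraction(1, n)

# The successive ratios a_{n+1}/a_n = n/(n+1) approach L = 1.
ratios = [a(n + 1) / a(n) for n in (10, 100, 1000)]
print([float(r) for r in ratios])  # values creeping up toward 1
```

Since $L = 1$, the radius of convergence is $1$, consistent with the interval $[-1,1)$ found in the example.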
Consider again the geometric series, $$\sum_{n=0}^\infty x^n={1\over 1-x}.$$ Whatever benefits there might be in using the series form of this function are only available to us when $x$ is between $-1$ and $1$. Frequently we can address this shortcoming by modifying the power series slightly. Consider this series: $$ \sum_{n=0}^\infty {(x+2)^n\over 3^n}= \sum_{n=0}^\infty \left({x+2\over 3}\right)^n={1\over 1-{x+2\over 3}}= {3\over 1-x}, $$ because this is just a geometric series with $x$ replaced by $(x+2)/3$. Multiplying both sides by $1/3$ gives $$\sum_{n=0}^\infty {(x+2)^n\over 3^{n+1}}={1\over 1-x},$$ the same function as before. For what values of $x$ does this series converge? Since it is a geometric series, we know that it converges when $$\eqalign{ |x+2|/3&< 1\cr |x+2|&< 3\cr -3 < x+2 &< 3\cr -5< x&< 1.\cr }$$ So we have a series representation for $1/(1-x)$ that works on a larger interval than before, at the expense of a somewhat more complicated series. The endpoints of the interval of convergence now are $-5$ and $1$, but note that they can be more compactly described as $-2\pm3$. We say that $3$ is the radius of convergence, and we now say that the series is centered at $-2$.
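The recentered series can be checked numerically; here is a short Python sketch (illustrative) comparing partial sums against $1/(1-x)$ at a few points of $(-5,1)$:

```python
# Partial sums of sum_{n>=0} (x+2)^n / 3^(n+1) should approach 1/(1-x)
# for any x in the interval of convergence (-5, 1).
def partial_sum(x, terms=200):
    return sum((x + 2) ** n / 3 ** (n + 1) for n in range(terms))

for x in (-4.0, -2.0, 0.5):
    assert abs(partial_sum(x) - 1 / (1 - x)) < 1e-9
print("series matches 1/(1-x) on (-5, 1)")
```

At the center $x=-2$ every term after the first vanishes, while near the endpoints more terms are needed, reflecting the slower geometric decay there.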
Definition 13.8.3 A power series centered at $a$ has the form $$\ds\sum_{n=0}^\infty a_n(x-a)^n,$$ with the understanding that $\ds a_n$ may depend on $n$ but not on $x$.
Exercises 13.8
Ex 13.8.1 $\ds\sum_{n=0}^\infty n x^n$
Ex 13.8.2 $\ds\sum_{n=0}^\infty {x^n\over n!}$
Ex 13.8.3 $\ds\sum_{n=1}^\infty {n!\over n^n}x^n$
Ex 13.8.4 $\ds\sum_{n=1}^\infty {n!\over n^n}(x-2)^n$
Ex 13.8.5 $\ds\sum_{n=1}^\infty {(n!)^2\over n^n}(x-2)^n$
Ex 13.8.6 $\ds\sum_{n=1}^\infty {(x+5)^n\over n(n+1)}$
We begin with a definition:
Definition 1.4.1 A partition of a set $S$ is a collection of non-empty subsets $A_i\subseteq S$, $1\le i\le k$ (the parts of the partition), such that $\bigcup_{i=1}^k A_i=S$ and for every $i\not=j$, $A_i\cap A_j=\emptyset$.
Example 1.4.2 The partitions of the set $\{a,b,c\}$ are $\{\{a\},\{b\},\{c\}\}$, $\{\{a,b\},\{c\}\}$, $\{\{a,c\},\{b\}\}$, $\{\{b,c\},\{a\}\}$, and $\{\{a,b,c\}\}$.
Partitions arise in a number of areas of mathematics. For example, if $\equiv$ is an equivalence relation on a set $S$, the equivalence classes of $\equiv$ form a partition of $S$. Here we consider the number of partitions of a finite set $S$, which we might as well take to be $[n]=\{1,2,3,\ldots,n\}$, unless some other set is of interest. We denote the number of partitions of an $n$-element set by $B_n$; these are the Bell numbers. From the example above, we see that $B_3=5$. For convenience we let $B_0=1$. It is quite easy to see that $B_1=1$ and $B_2=2$.
There are no known simple formulas for $B_n$, so we content ourselves with a recurrence relation.
Theorem 1.4.3 The Bell numbers satisfy $$ B_{n+1} = \sum_{k=0}^n {n\choose k} B_k. $$
Proof. Consider a partition of $S=\{1,2,\ldots,n+1\}$, $A_1$,…,$A_m$. We may suppose that $n+1$ is in $A_1$, and that $|A_1|=k+1$, for some $k$, $0\le k\le n$. Then $A_2$,…,$A_{m}$ form a partition of the remaining $n-k$ elements of $S$, that is, of $S\backslash A_1$. There are $B_{n-k}$ partitions of this set, so there are $B_{n-k}$ partitions of $S$ in which one part is the set $A_1$. There are ${n\choose k}$ sets of size $k+1$ containing $n+1$, so the total number of partitions of $S$ in which $n+1$ is in a set of size $k+1$ is ${n\choose k}B_{n-k}$. Adding up over all possible values of $k$, this means $$ \eqalignno{ B_{n+1} &= \sum_{k=0}^n {n\choose k} B_{n-k}.& (1.4.1)\cr } $$ We may rewrite this, using theorem 1.3.3, as $$ B_{n+1} = \sum_{k=0}^n {n\choose n-k} B_{n-k}, $$ and then notice that this is the same as the sum $$ B_{n+1} = \sum_{k=0}^n {n\choose k} B_k, $$ written backwards.
Example 1.4.4 We apply the recurrence to compute the first few Bell numbers: $$ \eqalign{ B_1&=\sum_{k=0}^0 {0\choose 0}B_0 = 1\cdot 1 = 1\cr B_2&=\sum_{k=0}^1 {1\choose k}B_k = {1\choose 0}B_0 + {1\choose 1}B_1 = 1\cdot 1+1\cdot 1 =1+1 =2\cr B_3&=\sum_{k=0}^2 {2\choose k}B_k = 1\cdot 1 + 2\cdot 1 + 1\cdot 2 = 5\cr B_4&=\sum_{k=0}^3 {3\choose k}B_k = 1\cdot 1 + 3\cdot 1 + 3\cdot 2 + 1\cdot 5 = 15\cr } $$
The Bell numbers grow exponentially fast; the first few are 1, 1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975, 678570, 4213597, 27644437.
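The recurrence of Theorem 1.4.3 is easy to run mechanically; a short Python sketch (illustrative) reproducing the values above:

```python
from math import comb

# Bell numbers via Theorem 1.4.3: B_{n+1} = sum_k C(n, k) B_k, with B_0 = 1.
def bell_numbers(count):
    B = [1]                                # B_0 = 1
    while len(B) < count:
        n = len(B) - 1
        B.append(sum(comb(n, k) * B[k] for k in range(n + 1)))
    return B

print(bell_numbers(8))  # [1, 1, 2, 5, 15, 52, 203, 877]
```

This matches the hand computations of Example 1.4.4 and the list of values just given.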
The Bell numbers turn up in many other problems; here is an interesting example. A common need in some computer programs is to generate a random permutation of $1,2,3,\ldots,n$, which we may think of as a shuffle of the numbers, visualized as numbered cards in a deck. Here is an attractive method that is easy to program: Start with the numbers in order, then at each step, remove one number at random (this is easy in most programming languages) and put it at the front of the list of numbers. (Viewed as a shuffle of a deck of cards, this corresponds to removing a card and putting it on the top of the deck.) How many times should we do this? There is no magic number, but it certainly should not be small relative to the size of $n$. Let's choose $n$ as the number of steps.
We can write such a shuffle as a list of $n$ integers, $(m_1,m_2,\ldots,m_n)$. This indicates that at step $i$, the number $m_i$ is moved to the front.
Example 1.4.5 Let's follow the shuffle $(3,2,2,4,1)$: $$ \eqalign{ (3)&:\quad 3 1 2 4 5\cr (2)&:\quad 2 3 1 4 5\cr (2)&:\quad 2 3 1 4 5\cr (4)&:\quad 4 2 3 1 5\cr (1)&:\quad 1 4 2 3 5\cr } $$
Note that we allow "do nothing'' moves, removing the top card and then placing it on top. The number of possible shuffles is then easy to count: there are $n$ choices for the card to remove, and this is repeated $n$ times, so the total number is $n^n$. (If we continue a shuffle for $m$ steps, the number of shuffles is $n^m$.) Since there are only $n!$ different permutations of $1,2,\ldots,n$, this means that many shuffles give the same final order.
Here's our question: how many shuffles result in the original order?
Example 1.4.6 These shuffles return to the original order: $(1,1,1,1,1)$, $(5,4,3,2,1)$, $(4,1,3,2,1)$.
Theorem 1.4.7 The number of shuffles of $[n]$ that result in the original sorted order is $B_n$.
Proof. Since we know that $B_n$ counts the number of partitions of $\{1,2,3,\ldots,n\}$, we can prove the theorem by establishing a 1–1 correspondence between the shuffles that leave the deck sorted and the partitions. Given a shuffle $(m_1,m_2,\ldots,m_n)$, we group the indices into sets: indices $i$ and $j$ go into the same set exactly when $m_i=m_j$. For example, using the shuffle $(4,1,3,2,1)$: since $m_2=m_5$, one set is $\{2,5\}$. All the other values are distinct, so the other sets in the partition are $\{1\}$, $\{3\}$, and $\{4\}$.
Note that every shuffle, no matter what the final order, produces a partition by this method. We are only interested in the shuffles that leave the deck sorted. What we now need is to show that each partition results from exactly one such shuffle.
Suppose we have a partition with $k$ parts. If a shuffle leaves the deck sorted, the last entry, $m_n$, must be 1. If the part containing $n$ is $A_1$, then it must be that $m_i=1$ if and only if $i\in A_1$. If $k=1$, then the only part contains all of $\{1,2,\ldots,n\}$, and the shuffle must be $(1,1,1,\ldots,1)$.
If $k>1$, the last move that is not 1 must be 2, since 2 must end up immediately after 1. Thus, if $j_2$ is the largest index such that $j_2\notin A_1$, let $A_2$ be the part containing $j_2$, and it must be that $m_i=2$ if and only if $i\in A_2$. We continue in this way: Once we have discovered which of the $m_i$ must have values $1,2,\ldots,p$, let $j_{p+1}$ be the largest index such that $j_{p+1}\notin A_1\cup\cdots\cup A_p$, let $A_{p+1}$ be the part containing $j_{p+1}$, and then $m_i=p+1$ if and only if $i\in A_{p+1}$. When $p=k$, all values $m_i$ have been determined, and this is the unique shuffle that corresponds to the partition.
Example 1.4.8 Consider the partition $\{\{1,5\},\{2,3,6\},\{4,7\}\}$. We must have $m_7=m_4=1$, $m_6=m_3=m_2=2$, and $m_5=m_1=3$, so the shuffle is $(3,2,2,1,3,2,1)$.
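Theorem 1.4.7 can be checked by brute force for small $n$; a Python sketch (illustrative) that enumerates all $n^n$ shuffles:

```python
from itertools import product

# Apply a shuffle (m_1, ..., m_n): at each step move the card m_i to the front.
def apply_shuffle(moves, n):
    deck = list(range(1, n + 1))
    for m in moves:
        deck.remove(m)
        deck.insert(0, m)
    return deck

# Count the length-n shuffles of [n] that restore sorted order; by
# Theorem 1.4.7 this count should be the Bell number B_n.
def sorted_shuffles(n):
    return sum(apply_shuffle(moves, n) == list(range(1, n + 1))
               for moves in product(range(1, n + 1), repeat=n))

print(sorted_shuffles(3), sorted_shuffles(4))  # 5 15
```

The counts agree with $B_3=5$ and $B_4=15$, and `apply_shuffle` reproduces the step-by-step trace of Example 1.4.5.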
Returning to the problem of writing a computer program to generate a random permutation: is this a good method? When we say we want a random permutation, we mean that we want each permutation to occur with equal probability, namely, $1/n!$. Since the original order is one of the permutations, we want the number of shuffles that produce it to be exactly $n^n/n!$, but $n!$ does not divide $n^n$, so this is impossible. The probability of getting the original permutation is $B_n/n^n$, and this turns out to be quite a bit larger than $1/n!$. Thus, this is not a suitable method for generating random permutations.
The recurrence relation above is a somewhat cumbersome way to compute the Bell numbers. Another way to compute them is with a different recurrence, expressed in the Bell triangle, whose construction is similar to that of Pascal's triangle: $$\matrix{ A_{1,1}\cr A_{2,1}&A_{2,2}\cr A_{3,1}&A_{3,2}&A_{3,3}\cr A_{4,1}&A_{4,2}&A_{4,3}&A_{4,4}\cr} \qquad\matrix{ 1\cr 1&2\cr 2&3&5\cr 5&7&10&15\cr} $$ The rule for constructing this triangle is: $A_{1,1}=1$; the first entry in each row is the last entry in the previous row; other entries are $A_{n,k}=A_{n,k-1}+A_{n-1,k-1}$; row $n$ has $n$ entries. Both the first column and the diagonal consist of the Bell numbers, with $A_{n,1}=B_{n-1}$ and $A_{n,n}=B_n$.
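The triangle construction can be written out directly; a Python sketch (illustrative):

```python
# Build the Bell triangle: the first entry of each row is the last entry of
# the previous row; each other entry is the entry to its left plus the entry
# above-left (A_{n,k} = A_{n,k-1} + A_{n-1,k-1}).
def bell_triangle(rows):
    triangle = [[1]]
    while len(triangle) < rows:
        prev = triangle[-1]
        row = [prev[-1]]
        for entry in prev:
            row.append(row[-1] + entry)
        triangle.append(row)
    return triangle

for row in bell_triangle(4):
    print(row)
# [1]
# [1, 2]
# [2, 3, 5]
# [5, 7, 10, 15]
```

The first column gives $B_0, B_1, B_2, \ldots$ and the diagonal gives $B_1, B_2, B_3, \ldots$, as claimed.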
$A_{n,k}$ may be interpreted as the number of partitions of $\{1,2,\ldots,n+1\}$ in which $\{k+1\}$ is the singleton set with the largest entry in the partition. For example, $A_{3,2}=3$; the partitions of $3+1=4$ in which $2+1=3$ is the largest number appearing in a singleton set are $\{\{1\},\{2,4\},\{3\}\}$, $\{\{2\},\{1,4\},\{3\}\}$, and $\{\{1,2,4\},\{3\}\}$.
To see that this indeed works as advertised, we need to confirm a few things. First, consider $A_{n,n}$, the number of partitions of $\{1,2,\ldots,n+1\}$ in which $\{n+1\}$ is the singleton set with the largest entry in the partition. Since $n+1$ is the largest element of the set, all partitions containing the singleton $\{n+1\}$ satisfy the requirement, and so the $B_n$ partitions of $\{1,2,\ldots,n\}$ together with $\{n+1\}$ are exactly the partitions of interest, that is, $A_{n,n}=B_n$.
Next, we verify that under the desired interpretation, it is indeed true that $A_{n,k}=A_{n,k-1}+A_{n-1,k-1}$ for $k>1$. Consider a partition counted by $A_{n,k-1}$. This contains the singleton $\{k\}$, and the element $k+1$ is not in a singleton. If we interchange $k$ and $k+1$, we get the singleton $\{k+1\}$, and no larger element is in a singleton. This gives us partitions in which $\{k+1\}$ is a singleton and $\{k\}$ is not. Now consider a partition of $\{1,2,\ldots,n\}$ counted by $A_{n-1,k-1}$. Replace all numbers $j>k$ by $j+1$, and add the singleton $\{k+1\}$. This produces a partition in which both $\{k+1\}$ and $\{k\}$ appear. In fact, we have described how to produce each partition counted by $A_{n,k}$ exactly once, and so $A_{n,k}=A_{n,k-1}+A_{n-1,k-1}$.
Finally, we need to verify that $A_{n,1}=B_{n-1}$. We know that $A_{1,1}=1=B_0$. Now we claim that for $n>1$, $$ A_{n,1}=\sum_{k=0}^{n-2}{n-2\choose k}A_{k+1,1}. $$ In a partition counted by $A_{n,1}$, 2 is the largest element in a singleton, so $\{n+1\}$ is not in the partition. Choose any $k\ge 1$ elements of $\{3,4,\ldots,n\}$ to form the set containing $n+1$. There are $A_{n-k-1,1}$ partitions of the remaining $n-k$ elements in which 2 is the largest element in a singleton. This accounts for ${n-2\choose k}A_{n-k-1,1}$ partitions of $\{1,2,\ldots,n+1\}$, or over all $k$: $$ \sum_{k=1}^{n-2}{n-2\choose k}A_{n-k-1,1}= \sum_{k=1}^{n-2}{n-2\choose n-k-2}A_{n-k-1,1}= \sum_{k=0}^{n-3} {n-2\choose k}A_{k+1,1}. $$ We are missing those partitions in which 1 is in the part containing $n+1$. We may produce all such partitions by starting with a partition counted by $A_{n-1,1}$ and adding $n+1$ to the part containing 1. Now we have $$ A_{n,1} = A_{n-1,1}+\sum_{k=0}^{n-3} {n-2\choose k}A_{k+1,1}= \sum_{k=0}^{n-2} {n-2\choose k}A_{k+1,1}. $$ Although slightly disguised by the shifted indexing of the $A_{n,1}$, this is the same as the recurrence relation for the $B_n$, and so $A_{n,1}=B_{n-1}$ as desired.
Exercises 1.4
Ex 1.4.1 Show that if $\{A_1,A_2,\ldots,A_k\}$ is a partition of $\{1,2,\ldots,n\}$, then there is a unique equivalence relation $\equiv$ whose equivalence classes are $\{A_1,A_2,\ldots,A_k\}$.
Ex 1.4.2 Suppose $n$ is a square-free number, that is, no number $m^2$ divides $n$; put another way, square-free numbers are products of distinct prime factors, that is, $n=p_1p_2\cdots p_k$, where each $p_i$ is prime and no two prime factors are equal. Find the number of factorizations of $n$. For example, $30=2\cdot 3\cdot 5$, and the factorizations of 30 are 30, $6\cdot 5$, $10\cdot 3$, $2\cdot 15$, and $2\cdot 3\cdot 5$. Note we count 30 alone as a factorization, though in some sense a trivial factorization.
Ex 1.4.3 The rhyme scheme of a stanza of poetry indicates which lines rhyme. This is usually expressed in the form ABAB, meaning the first and third lines of a four line stanza rhyme, as do the second and fourth, or ABCB, meaning only lines two and four rhyme, and so on. A limerick is a five line poem with rhyming scheme AABBA. How many different rhyme schemes are possible for an $n$ line stanza? To avoid duplicate patterns, we only allow a new letter into the pattern when all previous letters have been used to the left of the new one. For example, ACBA is not allowed, since when C is placed in position 2, B has not been used to the left. This is the same rhyme scheme as ABCA, which is allowed.
Ex 1.4.4 Another way to express the Bell numbers for $n>0$ is $$B_n=\sum_{k=1}^n S(n,k),$$ where $S(n,k)$ is the number of partitions of $\{1,2,\ldots,n\}$ into exactly $k$ parts, $1\le k\le n$. The $S(n,k)$ are the Stirling numbers of the second kind. Find a recurrence relation for $S(n,k)$. Your recurrence should allow a fairly simple triangle construction containing the values $S(n,k)$, and then the Bell numbers may be computed by summing the rows of this triangle. Show the first five rows of the triangle, $n\in\{1,2,\ldots,5\}$.
Ex 1.4.5 Let $A_n$ be the number of partitions of $\{1,2,\ldots,n+1\}$ in which no consecutive integers are in the same part of the partition. For example, when $n=3$ these partitions are $\{\{1\},\{2\},\{3\},\{4\}\}$, $\{\{1\},\{2,4\},\{3\}\}$, $\{\{1,3\},\{2\},\{4\}\}$, $\{\{1,3\},\{2,4\}\}$, $\{\{1,4\},\{2\},\{3\}\}$, so $A_3=5$. Let $A(n,k)$ be the number of partitions of $\{1,2,\ldots,n+1\}$ into exactly $k$ parts, in which no consecutive integers are in the same part of the partition. Thus $$A_n=\sum_{k=2}^{n+1} A(n,k).$$ Find a recurrence for $A(n,k)$ and then show that $A_n=B_n$.
There are a number of solutions to this problem online that use identities I have not been taught. Here is where I am in relation to my own coursework:
$ \sin(z) = 2 $
$ \exp(iz) - \exp(-iz) = 4i $
$ \exp(2iz) - 1 = 4i \cdot \exp (iz) $
Then, setting $w = \exp(iz),$ I get:
$ w^2 - 4iw -1 = 0$
I can then use the quadratic formula to find:
$ w = i(2 \pm \sqrt 3 )$
So therefore,
$\exp(iz) = w = i(2 \pm \sqrt 3 ) $ implies
$ e^{-y}\cos(x) = 0 $, thus $ x = \frac{\pi}{2} $, and $ ie^{-y}\sin(x) = i(2 \pm \sqrt 3 ) $, so $ y = -\ln( 2 \pm \sqrt 3 ) $.
So I have come up with $ z = \frac{\pi}{2} - i \ln( 2 \pm \sqrt 3 )$. But the back of the book has $ z = \frac{\pi}{2} \pm i \ln( 2 + \sqrt 3 ) +2n\pi$.
Now, the $+2n\pi$ I understand because sin is periodic, but how did the plus/minus come out of the natural log? There is no identity for $\ln(a+b)$ that I am aware of. I believe I screwed up something in the calculations, but for the life of me cannot figure out what. If someone could point me in the right direction, I would appreciate it.
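A numerical sanity check (a Python sketch using cmath; purely illustrative) shows that both forms satisfy $\sin z = 2$; the key identity is $(2+\sqrt 3)(2-\sqrt 3)=1$, so $\ln(2-\sqrt 3) = -\ln(2+\sqrt 3)$ and the two answer sets actually coincide:

```python
import cmath
from math import pi, log, sqrt

# The two branches of the attempted answer z = pi/2 - i*ln(2 +/- sqrt(3)):
z_minus = complex(pi / 2, -log(2 + sqrt(3)))   # pi/2 - i ln(2 + sqrt 3)
z_plus = complex(pi / 2, -log(2 - sqrt(3)))    # pi/2 - i ln(2 - sqrt 3)
# The book's form pi/2 +/- i*ln(2 + sqrt(3)), taking n = 0:
book_plus = complex(pi / 2, log(2 + sqrt(3)))
book_minus = complex(pi / 2, -log(2 + sqrt(3)))

for z in (z_minus, z_plus, book_plus, book_minus):
    assert abs(cmath.sin(z) - 2) < 1e-9

# Since (2 + sqrt(3)) * (2 - sqrt(3)) = 1, ln(2 - sqrt 3) = -ln(2 + sqrt 3),
# so each branch of one form equals a branch of the other:
assert abs(z_plus - book_plus) < 1e-12 and abs(z_minus - book_minus) < 1e-12
print("both forms give sin(z) = 2")
```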
Which of these terms is greater?
$2x-6y+1$ or $1$
if $x^4 + 3y^2=0$
According to the text they are equal? How is that?
But since $x^4$ and $3y^2$ are non-negative (assuming $x,y$ real), $x^4+3y^2=0$ forces $x=y=0$, so the text is correct!
What number system are $x,y$ in? If $x,y\in\mathbb{R}$, then it is simple: since $x^4, 3y^2 \ge 0$, we must have $x=y=0$, hence $2x-6y+1=1$.
If a sum of squares is $0$, then each term of the sum is $0$ individually (if they belong to $\Bbb R$) $\implies$ $x=0,y=0$ in your problem, which gives $2x-6y+1=1$.
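A trivial numerical check (Python, illustrative only) of the reasoning in the answers above:

```python
# For real x, y, x**4 + 3*y**2 is a sum of non-negative terms, so it is 0
# only when x = y = 0, and then 2x - 6y + 1 = 1.
x = y = 0
assert x**4 + 3 * y**2 == 0
assert 2 * x - 6 * y + 1 == 1

# Any nonzero real pair makes the constraint strictly positive:
for x, y in [(1, 0), (0, -2), (0.5, 0.1), (-3, 4)]:
    assert x**4 + 3 * y**2 > 0
print("constraint forces x = y = 0, so the two expressions are equal")
```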
we know that for positive terms Arithmetic Mean $\geq$ Geometric Mean
i.e. $$(x^4 + y^2 + y ^2 + y^2)/4 \geq \sqrt[4]{(x^4)(y^2)(y^2)(y^2)}$$
i.e. $(x^4 + 3y^2)/4 \ge (x)(y)\sqrt{y}$
i.e. $(x)(y)\sqrt{y} \le (x^4 + 3y^2)/4$
i.e. $(x)(y)\sqrt{y} \le 0$
for $\sqrt{y}$ to be real $y \ge 0$
i.e. $y \ge 0$ ------- (1)
i.e. $x \le 0$ ------- (2)
we know that for positive terms Arithmetic Mean $\geq$ Harmonic Mean
i.e. $$(x^4 + y^2 + y^2 + y^2)/4 \geq {4/(1/x^4 + 1/y^2 + 1/y^2 + 1/y^2)}$$
i.e. $$(x^4 + 3y^2)(1/x^4 + 3/y^2) \geq {(4)(4)}$$
i.e. $$(1 + 3(y/x^2)^2 + 3(x^2/y)^2 + 9) \geq {16}$$
i.e. $$3(y^2/x^4) + 3(x^4/y^2) \geq {6}$$
i.e. $$(y^2/x^4) + (x^4/y^2) \geq {2}$$
i.e. $$(y^4 + x^8) \geq {2(y^2)(x^4)}$$
i.e. $$(y^2 - x^4)^2 \geq {0}$$
i.e. $$(x^4 - y^2)^2 \geq {0}$$
consider $$x^4 \geq {y^2}$$
i.e. $$(x^4 - y^2) \geq {0}$$
i.e. $$(x^2 - y)(x^2 + y) \geq {0}$$
i.e. $$(x^2 - y)(x^2 + y) \geq {0}$$
i.e. $$(y - x^2)(y + x^2) \leq {0}$$
i.e. ${-x^2} \leq y \leq {x^2}$ ------- (3)
The three inequalities listed above will hold concurrently only when $x = y = 0$
I'm stuck with finding the recursion relation of this differential equation using the power series method. So I started by setting: $$y(x)=\sum\limits_{n=0}^{\infty}{a_n x^n} \\ e^x=\sum\limits_{n=0}^{\infty}{\dfrac{x^n}{n!}}$$
But when I implemented this in the equation, I got stuck with finding the recursion relation because of the double summation. This is what I have:
$\sum\limits_{n=2}^\infty (n)(n-1) a_n x^{n-2}+\left(\sum\limits_{n=0}^\infty\dfrac{x^n}{n!}\right)\left(\sum\limits_{n=1}^\infty n a_nx^{n-1}\right) - \sum\limits_{n=0}^{\infty}{a_n x^n} =0$
I know there exist the Cauchy product but I don't know how to use it in this case.
PS: I know it is also possible to set $y=\sum\limits_{n=0}^\infty\dfrac{a_nx^n}{n!}$ , but I chose to use $y(x)=\sum\limits_{n=0}^{\infty}{a_n x^n}$.
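One way to sidestep the hand computation of the Cauchy product is to let a computer algebra system expand a truncated series; here is a SymPy sketch (assuming the equation is $y'' + e^x y' - y = 0$, as the displayed sum suggests; the truncation order N is an arbitrary choice for illustration):

```python
import sympy as sp

x = sp.symbols('x')
N = 5                                  # how many coefficient equations to extract
a = sp.symbols(f'a0:{N + 2}')          # a0, ..., a6 (enough terms for y'' up to x^(N-1))
y = sum(a[n] * x**n for n in range(N + 2))

# y'' + e^x * y' - y, expanded as a truncated power series in x;
# sp.series handles the product with e^x (the Cauchy product) automatically.
expr = sp.series(sp.diff(y, x, 2) + sp.exp(x) * sp.diff(y, x) - y, x, 0, N).removeO()

# Setting each coefficient of x^n to zero gives the recurrence, term by term.
equations = [sp.expand(expr).coeff(x, n) for n in range(N)]
print(equations[0])   # the x^0 equation: 2*a2 + a1 - a0 = 0
```

Reading off `equations[n]` for successive n shows the pattern of the recurrence: each new coefficient $a_{n+2}$ is determined by all the earlier ones, because the $e^x$ factor couples every lower-order term into the $x^n$ coefficient.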
Let's first address your comment in response to Igor Rivin's answer: why don't we find this topic addressed in textbooks on Lie groups? Beyond the definite (= compact) case, disconnectedness issues become more complicated and your question is thereby very much informed by the theory of linear algebraic groups $G$ over $\mathbf{R}$. That in turn involves two subtle aspects (see below) that are not easy to express solely in analytic terms and are therefore beyond the level of such books (which usually don't assume familiarity with algebraic geometry at the level needed to work with linear algebraic groups over a field such as $\mathbf{R}$ that is not algebraically closed). And books on linear algebraic groups tend to say little about Lie groups.
The first subtlety is that $G(\mathbf{R})^0$ can be smaller than $G^0(\mathbf{R})$ (i.e., connectedness for the analytic topology may be finer than for the Zariski topology), as we know already for indefinite orthogonal groups, and textbooks on Lie groups tend to focus on the connected case for structural theorems. It is a deep theorem of Élie Cartan that if a linear algebraic group $G$ over $\mathbf{R}$ is Zariski-connected semisimple and simply connected (in the sense of algebraic groups; e.g., ${\rm{SL}}_n$ and ${\rm{Sp}}_{2n}$ but not ${\rm{SO}}_n$) then $G(\mathbf{R})$ is connected, but that lies beyond the level of most textbooks. (Cartan expressed his result in analytic terms via anti-holomorphic involutions of complex semisimple Lie groups, since there was no robust theory of linear algebraic groups at that time.) The group $G(\mathbf{R})$ has finitely many connected components, but that is not elementary (especially if one assumes no knowledge of algebraic geometry), and the theorem on maximal compact subgroups of Lie groups $H$ in case $\pi_0(H)$ is finite but possibly not trivial appears to be treated in only one textbook (Hochschild's "Structure of Lie groups", which however does not address the structure of automorphism groups); e.g., Bourbaki's treatise on Lie groups assumes connectedness for much of its discussion of the structure of compact Lie groups.
The second subtlety is that when the purely analytic operation of "complexification" for Lie groups (developed in Hochschild's book too) is applied to the Lie group of $\mathbf{R}$-points of a (Zariski-connected) semisimple linear algebraic group, it doesn't generally "match" the easier algebro-geometric scalar extension operation on the given linear algebraic group (e.g., the complexification of the Lie group ${\rm{PGL}}_3(\mathbf{R})$ is ${\rm{SL}}_3(\mathbf{C})$, not ${\rm{PGL}}_3(\mathbf{C})$). Here too, things are better-behaved in the "simply connected" case, but that lies beyond the level of introductory textbooks on Lie groups.
Now let us turn to your question. Let $n = p+q$, and assume $n \ge 3$ (so the Lie algebra is semisimple;the cases $n \le 2$ can be analyzed directly anyway). I will only address ${\rm{SO}}(p,q)$ rather than ${\rm{O}}(p, q)$, since it is already enough of a headache to keep track of disconnected effects in the special orthogonal case. To be consistent with your notation, we'll write $\mathbf{O}(p,q) \subset {\rm{GL}}_n$ to denote the linear algebraic group over $\mathbf{R}$ "associated" to the standard quadratic form of signature $(p, q)$ (so its group of $\mathbf{R}$-points is what you have denoted as ${\rm{O}}(p,q)$), and likewise for ${\mathbf{SO}}(p,q)$.
We will show that ${\rm{SO}}(p, q)$ has only inner automorphisms for odd $n$, and only the expected outer automorphism group of order 2 (arising from reflection in any nonzero vector) for even $n$ in both the definite case and the case when $p$ and $q$ are each odd. I will leave it to someone else to figure out (or find a reference on?) the case with $p$ and $q$ both even and positive.
We begin with some preliminary comments concerning the definite (= compact) case for all $n \ge 3$, for which the Lie group ${\rm{SO}}(p,q) = {\rm{SO}}(n)$ is connected. The crucial (non-trivial) fact is that the theory of connected compact Lie groups is completely "algebraic'', and in particular if $G$ and $H$ are two connected semisimple $\mathbf{R}$-groups for which $G(\mathbf{R})$ and $H(\mathbf{R})$ are compact then every Lie group homomorphism $G(\mathbf{R}) \rightarrow H(\mathbf{R})$ arises from a (unique) algebraic homomorphism $G \rightarrow H$. In particular, the automorphism groups of $G$ and $G(\mathbf{R})$ coincide, so the automorphism group of ${\rm{SO}}(n)$ coincides with that of $\mathbf{SO}(n)$.
Note that any linear automorphism preserving a non-degenerate quadratic form up to a nonzero scaling factor preserves its orthogonal and special orthogonal group. It is a general fact (due to Dieudonne over general fields away from characteristic 2) that if $(V, Q)$ is a non-degenerate quadratic space of dimension $n \ge 3$ over any field $k$ and if ${\mathbf{GO}}(Q)$ denotes the linear algebraic $k$-group of conformal automorphisms then the action of the algebraic group ${\mathbf{PGO}}(Q) = {\mathbf{GO}}(Q)/{\rm{GL}}_1$ on ${\mathbf{SO}}(Q)$ through conjugation gives exactly the automorphisms as an algebraic group. More specifically, $${\mathbf{PGO}}(Q)(k) = {\rm{Aut}}_k({\mathbf{SO}}(Q)).$$ This is proved using a lot of the structure theory of connected semisimple groups over an extension field that splits the quadratic form, so it is hard to "see'' this fact working directly over the given ground field $k$ (such as $k = \mathbf{R}$); that is one of the great merits of the algebraic theory (allowing us to prove results over a field by making calculations with a geometric object over an extension field, and using techniques such as Galois theory to come back to where we began).
Inside the automorphism group of the Lie group ${\rm{SO}}(p,q)$, we have built the subgroup ${\rm{PGO}}(p,q) := {\mathbf{PGO}}(p,q)(\mathbf{R})$ of "algebraic'' automorphisms (and it gives all automorphisms when $p$ or $q$ vanish). This subgroup is $${\mathbf{GO}}(p,q)(\mathbf{R})/\mathbf{R}^{\times} = {\rm{GO}}(p,q)/\mathbf{R}^{\times}.$$ To analyze the group ${\rm{GO}}(p,q)$ of conformal automorphisms of the quadratic space, there are two possibilities: if $p \ne q$ (such as whenever $p$ or $q$ vanish) then any such automorphism must involve a positive conformal scaling factor due to the need to preserve the signature, and if $p=q$ (the "split'' case: orthogonal sum of $p$ hyperbolic planes) then signature-preservation imposes no condition and we see (upon choosing a decomposition as an orthogonal sum of $p$ hyperbolic planes) that there is an evident involution $\tau$ of the vector space whose effect is to negate the quadratic form. Thus, if $p \ne q$ then ${\rm{GO}}(p,q) = \mathbf{R}^{\times} \cdot {\rm{O}}(p,q)$ whereas ${\rm{GO}}(p,p) = \langle \tau \rangle \ltimes (\mathbf{R}^{\times} \cdot {\rm{O}}(p,p))$. Hence, ${\rm{PGO}}(p,q) = {\rm{O}}(p,q)/\langle -1 \rangle$ if $p \ne q$ and ${\rm{PGO}}(p,p) = \langle \tau \rangle \ltimes ({\rm{O}}(p,p)/\langle -1 \rangle)$ for an explicit involution $\tau$ as above.
We summarize the conclusions for outer automorphisms of the Lie group ${\rm{SO}}(p, q)$ arising from the algebraic theory. If $n$ is odd (so $p \ne q$) then ${\rm{O}}(p,q) = \langle -1 \rangle \times {\rm{SO}}(p,q)$ and so the algebraic automorphisms are inner (as is very well-known in the algebraic theory). Suppose $n$ is even, so $-1 \in {\rm{SO}}(p, q)$. If $p \ne q$ (with the same parity) then the group of algebraic automorphisms contributes a subgroup of order 2 to the outer automorphism group (arising from any reflection in a non-isotropic vector, for example). Finally, the contribution of algebraic automorphisms to the outer automorphism group of ${\rm{SO}}(p,p)$ has order 4 (generated by two elements of order 2: an involution $\tau$ as above and a reflection in a non-isotropic vector). This settles the definite case as promised (i.e., all automorphisms inner for odd $n$ and outer automorphism group of order 2 via a reflection for even $n$) since in such cases we know that all automorphisms are algebraic.
Now we may and do assume $p, q > 0$. Does ${\rm{SO}}(p, q)$ have any non-algebraic automorphisms? We will show that if $n \ge 3$ is odd (i.e., $p$ and $q$ have opposite parity) or if $p$ and $q$ are both odd then there are no non-algebraic automorphisms (so we would be done).
First, let's compute $\pi_0({\rm{SO}}(p,q))$ for any $n \ge 3$. By the spectral theorem, the maximal compact subgroups of ${\rm{O}}(p,q)$ are the conjugates of the evident subgroup ${\rm{O}}(p) \times {\rm{O}}(q)$ with 4 connected components, and one deduces in a similar way that the maximal compact subgroups of ${\rm{SO}}(p, q)$ are the conjugates of the evident subgroup $$\{(g,g') \in {\rm{O}}(p) \times {\rm{O}}(q)\,|\, \det(g) = \det(g')\}$$ with 2 connected components. For any Lie group $\mathscr{H}$ with finite component group (such as the group $G(\mathbf{R})$ for any linear algebraic group $G$ over $\mathbf{R}$), the maximal compact subgroups $K$ constitute a single conjugacy class (with every compact subgroup contained in one) and as a smooth manifold $\mathscr{H}$ is a direct product of such a subgroup against a Euclidean space (see Chapter XV, Theorem 3.1 of Hochschild's book "Structure of Lie groups'' for a proof). In particular, $\pi_0(\mathscr{H}) = \pi_0(K)$, so ${\rm{SO}}(p, q)$ has exactly 2 connected components for any $p, q > 0$.
Now assume $n$ is odd, and swap $p$ and $q$ if necessary (as we may) so that $p$ is odd and $q>0$ is even. For any $g \in {\rm{O}}(q) - {\rm{SO}}(q)$, the element $(-1, g) \in {\rm{SO}}(p, q)$ lies in the unique non-identity component. Since $n \ge 3$ is odd, ${\rm{SO}}(p, q)^0$ is the quotient of the connected (!) Lie group ${\rm{Spin}}(p, q)$ modulo its order-2 center, so the algebraic theory in characteristic 0 gives $${\rm{Aut}}({\mathfrak{so}}(p,q)) = {\rm{Aut}}({\rm{Spin}}(p, q)) = {\rm{SO}}(p, q).$$ Thus, to find nontrivial elements of the outer automorphism group of the disconnected Lie group ${\rm{SO}}(p, q)$ we can focus attention on automorphisms $f$ of ${\rm{SO}}(p, q)$ that induce the identity on ${\rm{SO}}(p, q)^0$.
We have arranged that $p$ is odd and $q>0$ is even (so $q \ge 2$). The elements $$(-1, g) \in {\rm{SO}}(p, q) \cap ({\rm{O}}(p) \times {\rm{O}}(q))$$ (intersection inside ${\rm{O}}(p, q)$, so $g \in {\rm{O}}(q) - {\rm{SO}}(q)$) have an intrinsic characterization in terms of the Lie group ${\rm{SO}}(p, q)$ and its evident subgroups ${\rm{SO}}(p)$ and ${\rm{SO}}(q)$: these are the elements outside ${\rm{SO}}(p, q)^0$ that centralize ${\rm{SO}}(p)$ and normalize ${\rm{SO}}(q)$. (To prove this, consider the standard representation of ${\rm{SO}}(p) \times {\rm{SO}}(q)$ on $\mathbf{R}^{p+q} = \mathbf{R}^n$, especially the isotypic subspaces for the action of ${\rm{SO}}(q)$ with $q \ge 2$.) Hence, for every $g \in {\rm{O}}(q) - {\rm{SO}}(q)$ we have $f(-1,g) = (-1, F(g))$ for a diffeomorphism $F$ of the connected manifold ${\rm{O}}(q) - {\rm{SO}}(q)$.
Since $f$ acts as the identity on ${\rm{SO}}(q)$, it follows that the elements $g, F(g) \in {\rm{O}}(q) - {\rm{SO}}(q)$ have the same conjugation action on ${\rm{SO}}(q)$. But ${\rm{PGO}}(q) \subset {\rm{Aut}}({\rm{SO}}(q))$, so $F(g)g^{-1} \in \mathbf{R}^{\times}$ inside ${\rm{GL}}_q(\mathbf{R})$ with $q>0$ even. Taking determinants, this forces $F(g) = \pm g$ for a sign that may depend on $g$. But $F$ is continuous on the connected space ${\rm{O}}(q) - {\rm{SO}}(q)$, so the sign is actually independent of $g$. The case $F(g) = g$ corresponds to the identity automorphism of ${\rm{SO}}(q)$, so for the study of non-algebraic contributions to the outer automorphism group of ${\rm{SO}}(p, q)$ (with $p$ odd and $q > 0$ even) we are reduced to showing that the case $F(g) = -g$ cannot occur.
We are seeking to rule out the existence of an automorphism $f$ of ${\rm{SO}}(p, q)$ that is the identity on ${\rm{SO}}(p, q)^0$ and satisfies $(-1, g) \mapsto (-1, -g)$ for $g \in {\rm{O}}(q) - {\rm{SO}}(q)$. For this to be a homomorphism, it is necessary (and sufficient) that the conjugation actions of $(-1, g)$ and $(-1, -g)$ on ${\rm{SO}}(p, q)^0$ coincide for all $g \in {\rm{O}}(q) - {\rm{SO}}(q)$. In other words, this requires that the element $(1, -1) \in {\rm{SO}}(p, q)$ centralizes ${\rm{SO}}(p, q)^0$. But the algebraic group ${\mathbf{SO}}(p, q)$ is connected (for the Zariski topology) with trivial center and the same Lie algebra as ${\rm{SO}}(p, q)^0$, so by consideration of the compatible algebraic and analytic adjoint representations we see that $(1, -1)$ cannot centralize ${\rm{SO}}(p, q)^0$. Thus, no non-algebraic automorphism of ${\rm{SO}}(p, q)$ exists in the indefinite case when $n \ge 3$ is odd.
Finally, suppose $p$ and $q$ are both odd, so ${\rm{SO}}(p,q)^0$ does not contain the element $-1 \in {\rm{SO}}(p,q)$ that generates the center of ${\rm{SO}}(p,q)$ (and even the center of ${\rm{O}}(p,q)$). Thus, we have ${\rm{SO}}(p,q) = {\rm{SO}}(p,q)^0 \times \langle -1 \rangle$ with ${\rm{SO}}(p,q)^0$ having trivial center. Any (analytic) automorphism of ${\rm{SO}}(p,q)$ clearly acts trivially on the order-2 center $\langle -1 \rangle$ and must preserve the identity component too, so such an automorphism is determined by its effect on the identity component. It suffices to show that every analytic automorphism $f$ of ${\rm{SO}}(p,q)^0$ arises from an algebraic automorphism of ${\rm{SO}}(p,q)$, as then all automorphisms of ${\rm{SO}}(p,q)$ would be algebraic (so the determination of the outer analytic automorphism group for $p, q$ odd follows as for the definite case with even $n \ge 4$).
By the theory of connected semisimple algebraic groups in characteristic 0, for any $p, q \ge 0$ with $p+q \ge 3$ every analytic automorphism of the connected (!) group ${\rm{Spin}}(p,q)$ is algebraic. Thus, it suffices to show that any automorphism $f$ of ${\rm{SO}}(p,q)^0$ lifts to an automorphism of the degree-2 cover $\pi:{\rm{Spin}}(p,q) \rightarrow {\rm{SO}}(p,q)^0$. (Beware that this degree-2 cover is
not the universal cover if $p, q \ge 2$, as ${\rm{SO}}(p,q)^0$ has maximal compact subgroup ${\rm{SO}}(p) \times {\rm{SO}}(q)$ with fundamental group of order 4.) The Lie algebra automorphism ${\rm{Lie}}(f)$ of ${\mathfrak{so}}(p,q) = {\mathfrak{spin}}(p,q)$ arises from a unique algebraic automorphism of the group ${\mathbf{Spin}}(p,q)$ since this latter group is simply connected in the sense of algebraic groups. The induced automorphism of the group ${\rm{Spin}}(p,q)$ of $\mathbf{R}$-points does the job, since its compatibility with $f$ via $\pi$ can be checked on Lie algebras (as we are working with connected Lie groups).
This final argument also shows that the remaining problem for even $p, q \ge 2$ is to determine if any automorphism of ${\rm{SO}}(p,q)$ that is the identity map on ${\rm{SO}}(p,q)^0$ is itself the identity map. (If affirmative for such $p, q$ then the outer automorphism group of ${\rm{SO}}(p,q)$ is of order 2, and if negative then the outer automorphism group is bigger.)
|
Alladi Ramakrishnan Hall
Spinoriality of Representations of Lie Groups
Steven Spallone
IISER Pune
Let $G$ be a connected semisimple complex Lie group and $\pi: G \to SO(V)$ an orthogonal representation with highest weight $\lambda$. We give a polynomial formula in $\lambda$ which determines whether $\pi$ lifts to a homomorphism $\tilde{\pi}: G \to Spin(V)$. This is joint work with Rohit Joshi.
|
I recently came across this in a textbook (NCERT class 12, chapter: wave optics, pg: 367, example 10.4(d)) of mine while studying the Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ...
The accepted answer is clearly wrong. The OP's textbook refers to $s$ as the "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes $s$ to be the "fringe width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being the accepted answer), only to realise it proved something entirely different and trivial.
This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ...
I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$. External boson: $1$. Photon: $\epsilon_{\mu}$. Multiplying these will give the inv...
As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google, but I am not getting good answers for some of them. So I want to ask you to suggest a good app for studying the history of these scientists.
I am working on correlation in quantum systems. Consider an arbitrary finite-dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under the assumption that continuity is fulfilled. My question is whether it would be possib...
@EmilioPisanty Sup. I finished Part I of Q is for Quantum. I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc.
Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/…
You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago
So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball.
@ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why?
@AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially...
@vzn for physics/simulation, you may use Blender, which is very accurate. If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes.
@RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course For Mathematicians*, but I haven't read it myself
@AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that?
@ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions...
When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former.
@RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that
And that is what I mean by "the basics".
Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers
@RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... Kurzgesagt optimistic nihilism youtube.com/watch?v=MBRqu0YOH14
The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): one form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for...
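The one-handed count above is easy to mechanize. Here is a small illustrative sketch (the function name and the 1..12 month numbering are my own) that reproduces the knuckle/depression pattern:

```python
def days_in_month(month, leap=False):
    """Days in a Gregorian month (month = 1..12) via the knuckle mnemonic."""
    # January..July occupy positions 0..6 on the hand; August..December
    # return to the first knuckle, so they restart at position 0.
    pos = month - 1 if month <= 7 else month - 8
    if pos % 2 == 0:          # knuckles are the 31-day months
        return 31
    if month == 2:            # the first depression is February (28 or 29)
        return 29 if leap else 28
    return 30                 # every other depression is a 30-day month

assert [days_in_month(m) for m in range(1, 13)] == \
       [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
```

Note how July and August (the knuckle counted twice) both come out as 31 days, which is the point of restarting at the first knuckle.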
@vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world.
@Slereah It's like the brain has a limited capacity on math skills it can store.
@NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life"
I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money
It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge
Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
|
In general, Weierstrass is probably a good idea for such trigonometric integrals. However, your progress left the denominator much more manageable. I would start as in David H's answer up until the step
$$\frac12\int\frac{\sin\theta-\cos\theta}{1+\sin\theta}d\theta$$
Instead of Weierstrass from here, simply multiply by $\frac{1-\sin\theta}{1-\sin\theta}$
$$\frac12\int\frac{(\sin\theta-\cos\theta)(1-\sin\theta)}{1-\sin^2\theta}d\theta=\frac12\int\frac{\sin\theta-\cos\theta-\sin^2\theta+\sin\theta\cos\theta}{\cos^2\theta}d\theta=$$$$\frac12\int\sec\theta\tan\theta d\theta-\frac12\int\sec\theta d\theta-\frac12\int(\sec^2\theta-1)d\theta+\frac12\int\tan\theta d\theta$$
You should have no trouble with these remaining integrals.
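As a sanity check, the algebraic steps above can be verified numerically: the integrand $\frac12\frac{\sin\theta-\cos\theta}{1+\sin\theta}$ and the decomposed form agree pointwise wherever both are defined (a quick sketch; the sample points are arbitrary):

```python
import math

def integrand(t):
    # the form after David H's step: (sin t - cos t) / (2 (1 + sin t))
    return 0.5 * (math.sin(t) - math.cos(t)) / (1 + math.sin(t))

def decomposed(t):
    sec, tan = 1 / math.cos(t), math.tan(t)
    # (1/2) [ sec*tan - sec - (sec^2 - 1) + tan ]
    return 0.5 * (sec * tan - sec - (sec**2 - 1) + tan)

for t in (0.3, 1.0, -0.7, 2.0):
    assert abs(integrand(t) - decomposed(t)) < 1e-9
```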
|
Answer
The male actors can be selected in 116,280 ways for four roles.
Work Step by Step
We know that the order in which the four actors are selected makes a difference, as all the actors would be cast in different roles. Since the order matters, we use permutations. The four actors are to be selected from a group of twenty. So, $ n=20,r=4$. Hence, $\begin{align} & _{20}{{P}_{4}}=\frac{20!}{\left( 20-4 \right)!} \\ & =\frac{20!}{16!} \\ & =\frac{20\times 19\times 18\times 17\times 16!}{16!} \\ & =116,280 \end{align}$
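For reference, the same count can be checked with Python's standard library, where `math.perm(n, r)` computes $_{n}P_{r}$ directly:

```python
import math

ways = math.perm(20, 4)   # ordered choices of 4 roles from 20 actors
assert ways == 20 * 19 * 18 * 17 == 116280
print(ways)  # 116280
```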
|
Coordinate systems are tools that let us use algebraic methods to understand geometry. While the
rectangular (also called Cartesian) coordinates that we have been discussing are the most common, some problems are easier to analyze in alternate coordinate systems.
A coordinate system is a scheme that allows us to identify any point in the plane or in three-dimensional space by a set of numbers. In rectangular coordinates these numbers are interpreted, roughly speaking, as the lengths of the sides of a rectangular "box.''
In two dimensions you may already be familiar with an alternative, called
polar coordinates. In this system, each point in the plane is identified by a pair of numbers $(r,\theta)$. The number $\theta$ measures the angle between the positive $x$-axis and a vector with tail at the origin and head at the point, as shown in figure 14.6.1; the number $r$ measures the distance from the origin to the point. Either of these may be negative; a negative $\theta$ indicates the angle is measured clockwise from the positive $x$-axis instead of counter-clockwise, and a negative $r$ indicates the point at distance $|r|$ in the opposite of the direction given by $\theta$. Figure 14.6.1 also shows the point with rectangular coordinates $\ds (1,\sqrt3)$ and polar coordinates $(2,\pi/3)$, 2 units from the origin and $\pi/3$ radians from the positive $x$-axis.
We can extend polar coordinates to three dimensions simply by adding a $z$ coordinate; this is called
cylindrical coordinates. Each point in three-dimensional space is represented by three coordinates $(r,\theta,z)$ in the obvious way: this point is $z$ units above or below the point $(r,\theta)$ in the $x$-$y$ plane, as shown in figure 14.6.2. The point with rectangular coordinates $\ds (1,\sqrt3, 3)$ and cylindrical coordinates $(2,\pi/3,3)$ is also indicated in figure 14.6.2.
Some figures with relatively complicated equations in rectangular coordinates will be represented by simpler equations in cylindrical coordinates. For example, the cylinder in figure 14.6.3 has equation $\ds x^2+y^2=4$ in rectangular coordinates, but equation $r=2$ in cylindrical coordinates.
Given a point $(r,\theta)$ in polar coordinates, it is easy to see (as in figure 14.6.1) that the rectangular coordinates of the same point are $(r\cos\theta,r\sin\theta)$, and so the point $(r,\theta,z)$ in cylindrical coordinates is $(r\cos\theta,r\sin\theta,z)$ in rectangular coordinates. This means it is usually easy to convert any equation from rectangular to cylindrical coordinates: simply substitute $$\eqalign{ x&=r\cos\theta\cr y&=r\sin\theta\cr} $$ and leave $z$ alone. For example, starting with $\ds x^2+y^2=4$ and substituting $x=r\cos\theta$, $y=r\sin\theta$ gives $$\eqalign{ r^2\cos^2\theta+r^2\sin^2\theta&=4\cr r^2(\cos^2\theta+\sin^2\theta)&=4\cr r^2&=4\cr r&=2.\cr }$$ Of course, it's easy to see directly that this defines a cylinder as mentioned above.
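The substitution translates directly into code. As a quick illustration (using only the formulas above), the point with cylindrical coordinates $(2,\pi/3,3)$ converts to the rectangular point $(1,\sqrt3,3)$ from figure 14.6.2, and it lies on the cylinder $x^2+y^2=4$:

```python
import math

def cyl_to_rect(r, theta, z):
    # x = r cos(theta), y = r sin(theta), z unchanged
    return (r * math.cos(theta), r * math.sin(theta), z)

x, y, z = cyl_to_rect(2, math.pi / 3, 3)
assert abs(x - 1) < 1e-12 and abs(y - math.sqrt(3)) < 1e-12 and z == 3
assert abs(x**2 + y**2 - 4) < 1e-12   # the point lies on the cylinder r = 2
```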
Cylindrical coordinates are an obvious extension of polar coordinates to three dimensions, but the use of the $z$ coordinate means they are not as closely analogous to polar coordinates as another standard coordinate system. In polar coordinates, we identify a point by a direction and distance from the origin; in three dimensions we can do the same thing, in a variety of ways. The question is: how do we represent a direction? One way is to give the angle of rotation, $\theta$, from the positive $x$ axis, just as in cylindrical coordinates, and also an angle of rotation, $\phi$, from the positive $z$ axis. Roughly speaking, $\theta$ is like longitude and $\phi$ is like latitude. (Earth longitude is measured as a positive or negative angle from the prime meridian, and is always between 0 and 180 degrees, east or west; $\theta$ can be any positive or negative angle, and we use radians except in informal circumstances. Earth latitude is measured north or south from the equator; $\phi$ is measured from the north pole down.) This system is called
spherical coordinates; the coordinates are listed in the order $(\rho,\theta,\phi)$, where $\rho$ is the distance from the origin, and like $r$ in cylindrical coordinates it may be negative. The general case and an example are pictured in figure 14.6.4; the length marked $r$ is the $r$ of cylindrical coordinates.
As with cylindrical coordinates, we can easily convert equations in rectangular coordinates to the equivalent in spherical coordinates, though it is a bit more difficult to discover the proper substitutions. Figure 14.6.5 shows the typical point in spherical coordinates from figure 14.6.4, viewed now so that the arrow marked $r$ in the original graph appears as the horizontal "axis'' in the left hand graph. From this diagram it is easy to see that the $z$ coordinate is $\rho\cos\phi$, and that $r=\rho\sin\phi$, as shown. Thus, in converting from rectangular to spherical coordinates we will replace $z$ by $\rho\cos\phi$. To see the substitutions for $x$ and $y$ we now view the same point from above, as shown in the right hand graph. The hypotenuse of the triangle in the right hand graph is $r=\rho\sin\phi$, so the sides of the triangle, as shown, are $x=r\cos\theta=\rho\sin\phi\cos\theta$ and $y=r\sin\theta=\rho\sin\phi\sin\theta$. So the upshot is that to convert from rectangular to spherical coordinates, we make these substitutions: $$\eqalign{ x&=\rho\sin\phi\cos\theta\cr y&=\rho\sin\phi\sin\theta\cr z&=\rho\cos\phi.\cr} $$
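These three substitutions also translate into a conversion routine; a brief sketch (the sample point is arbitrary) confirms that $\rho$ is the distance from the origin, that $z=\rho\cos\phi$, and that $r=\rho\sin\phi$ as in the diagram:

```python
import math

def sph_to_rect(rho, theta, phi):
    # x = rho sin(phi) cos(theta), y = rho sin(phi) sin(theta), z = rho cos(phi)
    return (rho * math.sin(phi) * math.cos(theta),
            rho * math.sin(phi) * math.sin(theta),
            rho * math.cos(phi))

x, y, z = sph_to_rect(2, math.pi / 3, math.pi / 4)
assert abs(math.hypot(x, y, z) - 2) < 1e-12                        # rho = distance to origin
assert abs(z - 2 * math.cos(math.pi / 4)) < 1e-12                  # z = rho cos(phi)
assert abs(math.hypot(x, y) - 2 * math.sin(math.pi / 4)) < 1e-12   # r = rho sin(phi)
```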
Example 14.6.1 As the cylinder had a simple equation in cylindrical coordinates, so does the sphere in spherical coordinates: $\rho=2$ is the sphere of radius 2. If we start with the Cartesian equation of the sphere and substitute, we get the spherical equation: $$\eqalign{ x^2+y^2+z^2&=2^2\cr \rho^2\sin^2\phi\cos^2\theta+ \rho^2\sin^2\phi\sin^2\theta+\rho^2\cos^2\phi&=2^2\cr \rho^2\sin^2\phi(\cos^2\theta+\sin^2\theta)+\rho^2\cos^2\phi&=2^2\cr \rho^2\sin^2\phi+\rho^2\cos^2\phi&=2^2\cr \rho^2(\sin^2\phi+\cos^2\phi)&=2^2\cr \rho^2&=2^2\cr \rho&=2\cr }$$
Example 14.6.2 Find an equation for the cylinder $\ds x^2+y^2=4$ in spherical coordinates.
Proceeding as in the previous example: $$\eqalign{ x^2+y^2&=4\cr \rho^2\sin^2\phi\cos^2\theta+ \rho^2\sin^2\phi\sin^2\theta=4\cr \rho^2\sin^2\phi(\cos^2\theta+\sin^2\theta)&=4\cr \rho^2\sin^2\phi&=4\cr \rho\sin\phi&=2\cr \rho&={2\over\sin\phi}\cr }$$
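As a numerical confirmation of this example (sample points chosen arbitrarily), points of the cylinder $x^2+y^2=4$ at various heights $z$ do satisfy $\rho\sin\phi=2$:

```python
import math

for t, z in [(0.0, 1.0), (1.2, -3.0), (2.5, 0.5)]:
    x, y = 2 * math.cos(t), 2 * math.sin(t)   # a point with x^2 + y^2 = 4
    rho = math.sqrt(x*x + y*y + z*z)          # distance from the origin
    phi = math.acos(z / rho)                  # angle from the positive z axis
    assert abs(rho * math.sin(phi) - 2) < 1e-12
```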
Exercises 14.6
Ex 14.6.1 Convert the following points in rectangular coordinates to cylindrical and spherical coordinates:
a. $(1,1,1)$
b. $(7,-7,5)$
c. $(\cos(1),\sin(1),1)$
d. $(0,0,-\pi)$ (answer)
Ex 14.6.2 Find an equation for the sphere $\ds x^2+y^2+z^2=4$ in cylindrical coordinates. (answer)
Ex 14.6.3 Find an equation for the $y$-$z$ plane in cylindrical coordinates. (answer)
Ex 14.6.4 Find an equation equivalent to $\ds x^2+y^2+2z^2+2z-5=0$ in cylindrical coordinates. (answer)
Ex 14.6.5 Suppose the curve $\ds z=e^{-x^2}$ in the $x$-$z$ plane is rotated around the $z$ axis. Find an equation for the resulting surface in cylindrical coordinates. (answer)
Ex 14.6.6 Suppose the curve $\ds z=x$ in the $x$-$z$ plane is rotated around the $z$ axis. Find an equation for the resulting surface in cylindrical coordinates. (answer)
Ex 14.6.7 Find an equation for the plane $y=0$ in spherical coordinates. (answer)
Ex 14.6.8 Find an equation for the plane $z=1$ in spherical coordinates. (answer)
Ex 14.6.9 Find an equation for the sphere with radius 1 and center at $(0,1,0)$ in spherical coordinates. (answer)
Ex 14.6.10 Find an equation for the cylinder $\ds x^2+y^2=9$ in spherical coordinates. (answer)
Ex 14.6.11 Suppose the curve $\ds z=x$ in the $x$-$z$ plane is rotated around the $z$ axis. Find an equation for the resulting surface in spherical coordinates. (answer)
Ex 14.6.12 Plot the polar equations $r=\sin(\theta)$ and $r=\cos(\theta)$ and comment on their similarities. (If you get stuck on how to plot these, you can multiply both sides of each equation by $r$ and convert back to rectangular coordinates.)
Ex 14.6.14 Convert the spherical formula $\rho=\sin \theta \sin \phi$ to rectangular coordinates and describe the surface defined by the formula. (Hint: Multiply both sides by $\rho$.) (answer)
Ex 14.6.15 We can describe points in the first octant by $x >0$, $y>0$ and $z>0$. Give similar inequalities for the first octant in cylindrical and spherical coordinates. (answer)
|
I am working on estimating the position and orientation (pose) of a model (rigid object) from its silhouette in an image. For this, I have constructed an error measure between the model in its pose and the silhouette, which looks roughly like:
$$ \epsilon ( \bar{x} ) = \sum_{\forall i} \| f(\bar{x}, m_i) - s_i \|^2 $$
where $\bar{x}$ is a six-dimensional vector describing the 3D translation and rotation, which act on a model point $p$ as
$$ f( \bar{x}, p ) = R_{\bar{x}} \cdot p + t_{\bar{x}} $$
Ordinarily, this could be nonlinear least squares; however, there is a catch: an assignment needs to be made between model points $m_i$ and silhouette points $s_i$, which complicates the evaluation of the error measure.
I am approaching the problem as a general nonlinear optimization problem. I already know that this error measure is continuous, but not continuously differentiable due to the aforementioned assignment. I do have gradient information, however, but this does not take the assignment into account and is therefore not completely accurate.
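For concreteness, here is a minimal sketch of how such an error measure might be evaluated (all names are mine, and I use a plain nearest-neighbor assignment on 3D point sets rather than an actual image-plane silhouette); the `min` over assignments is exactly what makes the measure continuous but not continuously differentiable:

```python
import numpy as np

def pose_error(R, t, model_pts, silhouette_pts):
    """Sum of squared distances from each transformed model point to its
    nearest silhouette point (a stand-in for the assignment step)."""
    transformed = model_pts @ R.T + t          # f(x, m_i) = R m_i + t
    d2 = ((transformed[:, None, :] - silhouette_pts[None, :, :]) ** 2).sum(axis=-1)
    return d2.min(axis=1).sum()                # nearest-neighbor assignment

# toy check: the identity pose on matching point sets gives zero error
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
assert pose_error(np.eye(3), np.zeros(3), pts, pts) == 0.0
```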
The question: Is there a method which can calculate/approximate and visualize the basins of attractions in this six-dimensional space?
If this is absolutely not feasible, is there a method which can calculate/approximate the number of local minima within a "bounded" region?
|
Interested in the following function: $$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$ where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following: $$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...
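The first few values of $\pi(n)$, and hence the partial sums of $\Psi(2)$, are easy to tabulate with a small sieve; this sketch just confirms the expansion above:

```python
def prime_pi_table(N):
    """pi[n] = number of primes <= n, via a basic sieve of Eratosthenes."""
    sieve = [False, False] + [True] * (N - 1)
    for i in range(2, int(N**0.5) + 1):
        if sieve[i]:
            for j in range(i * i, N + 1, i):
                sieve[j] = False
    pi, count = [0] * (N + 1), 0
    for n in range(N + 1):
        count += sieve[n]
        pi[n] = count
    return pi

pi = prime_pi_table(10)
# pi(2)=1, pi(3)=pi(4)=2, pi(5)=pi(6)=3, matching 1 + 1/2^2 + 1/2^2 + 1/3^2 + 1/3^2 + ...
assert [pi[n] for n in range(2, 8)] == [1, 2, 2, 3, 3, 4]
partial_sum = sum(1 / pi[n]**2 for n in range(2, 11))
```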
Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ where the $x$-th bit is set to 1. Moreover, $y$ bits are set to 1, including the $x$-th bit, and there are no runs of $k$ consecutive zer...
The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$.
Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the condition '$\overline F$ is algebraic over $F$' from the definition of algebraic closure, do we get a different result?
Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...
@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.
Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme the past 2 months.
Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that usually part of the definition, or does it follow from the definition (I don't see how it could be the latter)?
Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals?
I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...
I was watching this lecture, and in reference to the above screenshot, the professor says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$; power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.
On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them, or where their definition appears in Hatcher's book?
Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$. Then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, which is dominated by a convergent geometric series, so $a_n z^n$ is absolutely summable, hence summable.
Let $g : [0,\frac{1}{2}] \to \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] \to \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$ for all $n \ge 1$. Show that $\lim_{n\to\infty} n!\,g_n(t) = 0$ for all $t \in [0,\frac{1}{2}]$.
Can you give some hint?
My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \left|\frac{a_{n+1}(t)}{a_n(t)}\right|<1$, then $a_n(t)$ converges to zero.
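For what it's worth, a standard bound that avoids ratio-test delicacies (a sketch of one possible route, not necessarily the intended one): with $M = \sup_{t\in[0,1/2]}|g(t)|$, induction on the recursion $g_{n+1}(t)=\int_0^t g_n(s)\,ds$ gives $|g_n(t)| \le M\,t^{n-1}/(n-1)!$, so

```latex
\bigl|n!\,g_n(t)\bigr|
  \;\le\; n!\,\frac{M\,t^{\,n-1}}{(n-1)!}
  \;=\; n\,M\,t^{\,n-1}
  \;\le\; \frac{n\,M}{2^{\,n-1}}
  \;\xrightarrow[n\to\infty]{}\; 0
  \qquad\text{for } t\in\bigl[0,\tfrac12\bigr].
```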
I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of $n$ independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional with respect to the coefficients equal to zero), I get a set of $n$ linear homogeneous equations in the $n$ coefficients.

Now, instead of directly attempting to solve the equations for the coefficients, I rather look at the secular determinant, which must be zero, since otherwise no nontrivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values of the functional from my linear ansatz, avoiding the necessity of solving for the coefficients.

I have trouble formulating the question precisely, but it strikes me that a direct solution of the equations can be circumvented, and the values of the functional are instead obtained directly from the condition that the determinant is zero. I wonder whether there is something deeper in the background, or, so to say, a more general principle.
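The pattern being described looks like the Rayleigh-Ritz procedure: stationarity of the quotient $c^\mathsf{T}Hc / c^\mathsf{T}Sc$ over the coefficients $c$ leads to the generalized eigenvalue problem $Hc = \lambda Sc$, and the roots of the secular equation $\det(H-\lambda S)=0$ are exactly the stationary values of the functional on the ansatz subspace. A minimal numerical sketch for a two-function ansatz (the matrices $H$ and $S$ below are made-up placeholders, not from any particular problem):

```python
import math

# Hypothetical 2x2 symmetric matrices for a two-function ansatz:
# H[i][j] = B(phi_i, phi_j) (the bilinear form), S[i][j] = <phi_i, phi_j>.
H = [[2.0, 0.5],
     [0.5, 3.0]]
S = [[1.0, 0.1],
     [0.1, 1.0]]

def secular_roots(H, S):
    """Roots of det(H - lam*S) = 0 for 2x2 matrices.

    Expanding the determinant gives a quadratic a*lam^2 + b*lam + c = 0.
    """
    a = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    b = -(H[0][0] * S[1][1] + S[0][0] * H[1][1]
          - H[0][1] * S[1][0] - S[0][1] * H[1][0])
    c = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    disc = math.sqrt(b * b - 4 * a * c)
    return sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])

def rayleigh(H, S, cvec):
    """Value of the quotient c^T H c / c^T S c at coefficients c."""
    num = sum(cvec[i] * H[i][j] * cvec[j] for i in range(2) for j in range(2))
    den = sum(cvec[i] * S[i][j] * cvec[j] for i in range(2) for j in range(2))
    return num / den

lam_lo, lam_hi = secular_roots(H, S)
# The quotient at any coefficient vector lies between the two roots,
# and the lowest root is the best bound available from this subspace.
print(lam_lo, lam_hi)
```

So the "deeper principle" is that the stationary values are eigenvalues of the pair $(H,S)$: they can be read off from the characteristic polynomial without ever computing the coefficient vectors themselves.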
If $x$ is a prime number and the number $y$ obtained by reversing the digits of $x$ is also prime (with $y \neq x$), then the integer $z$ midway between $x$ and $y$, i.e. $z=(x+y)/2$, is a palindrome with $\operatorname{digitsum}(z)=\operatorname{digitsum}(x)$.
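A quick empirical check of this claim (my own sketch; is_prime is a naive trial-division test, fine for small bounds):

```python
def is_prime(n):
    """Naive trial-division primality test."""
    if n < 2:
        return False
    for d in range(2, int(n ** 0.5) + 1):
        if n % d == 0:
            return False
    return True

def digitsum(n):
    return sum(int(ch) for ch in str(n))

def check_claim(limit):
    """For every prime x < limit whose digit reverse y is a different prime,
    check that z = (x + y) / 2 is a palindromic integer with
    digitsum(z) == digitsum(x).  Returns the list of counterexamples."""
    failures = []
    for x in range(2, limit):
        if not is_prime(x):
            continue
        y = int(str(x)[::-1])
        if y == x or not is_prime(y):
            continue
        if (x + y) % 2 != 0:       # x, y both odd here, so this never fires
            failures.append((x, y, None))
            continue
        z = (x + y) // 2
        if str(z) != str(z)[::-1] or digitsum(z) != digitsum(x):
            failures.append((x, y, z))
    return failures

# Prints [] if no counterexample is found below 1000
# (e.g. x = 13, y = 31 gives z = 22, a palindrome with digit sum 4).
print(check_claim(1000))
```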
> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.
(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of an everywhere continuous function whose Fourier series diverges at a point. Hermann Amandus Schwarz then gave a simpler example.)
It's discussed very carefully (though no formula is given explicitly) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that there is an $L^1$ function whose Fourier series diverges everywhere!
|
The following two facts concerning totally disconnected spaces should (separately) help you demonstrate that your space is totally disconnected.
Fact 1: Every T$_1$-space with a basis consisting of clopen sets (i.e., every zero-dimensional space) is totally disconnected.
proof. Suppose that $A \subseteq X$ contains at least two points, and let $x , y \in A$ be distinct. Since the space is T$_1$, there is an open set containing $x$ but not $y$, and shrinking it to a basic set gives a clopen $U \subseteq X$ such that $x \in U$ and $y \notin U$. But then $U$ and $X \setminus U$ witness that $A$ is not a connected subset of $X$. $\quad\Box$
Fact 2: Every product of (nonempty) totally disconnected spaces is totally disconnected.
proof. Suppose that $X_i$ is totally disconnected for all $i \in I$, and let $A \subseteq \prod_{i \in I} X_i$ contain at least two points. Then there must be a $j \in I$ such that $A_j = \{ x_j : x = ( x_i )_{i \in I} \in A \}$ contains at least two points. As $X_j$ is totally disconnected, there are open $U_j , V_j \subseteq X_j$ such that $U_j \cap A_j \neq \emptyset \neq V_j \cap A_j$, $U_j \cap V_j \cap A_j = \emptyset$, and $A_j \subseteq U_j \cup V_j$. Let $$U = {\textstyle \prod_{i \neq j}} X_i \times U_j; \quad V = {\textstyle \prod_{i \neq j}} X_i \times V_j.$$ Then $U , V$ are open subsets of $\prod_{i \in I} X_i$, $U \cap A \neq \emptyset \neq V \cap A$, $U \cap V \cap A = \emptyset$, and $A \subseteq U \cup V$. Thus $A$ is not a connected subset of $\prod_{i \in I} X_i$. $\quad\Box$
Either of these should be useful (but especially the second) because your space appears to be a subspace of $\{ 1 , \ldots , k \}^{\mathbb{N}}$ taking $\{ 1 , \ldots , k \}$ to be discrete, and then taking the product topology. (Also, total disconnectedness is a hereditary property of topological spaces.)
|
In LaTeX2e, how can I sum two values and assign the result to another variable?
I want to compute something like:
var=\textwidth - 1cm
And if both were constants:
var=1+1
In regular LaTeX, the calc package allows for easy manipulation of length arithmetic:
\documentclass{article}
\usepackage{calc}% http://ctan.org/pkg/calc
\newlength{\mylength}
\begin{document}
\setlength{\mylength}{\textwidth}%
\noindent\rule{\mylength}{20pt}

\bigskip
\setlength{\mylength}{\textwidth-1cm}%
\noindent\rule{\mylength}{20pt}

\bigskip
\setlength{\mylength}{\textwidth-80pt+5mm-1bp}%
\noindent\rule{\mylength}{20pt}
\end{document}
\documentclass{article}
\usepackage[nomessages]{fp}% http://ctan.org/pkg/fp
\begin{document}
The following arithmetic is easy:
\begin{itemize}
  \item \FPeval{\result}{clip(5+6)}%
    $5+6=\result$
  \item \FPeval{\result}{round(2+3/5*pi,5)}%
    $2+3/5\times\pi=\result$
\end{itemize}
\end{document}
In classical Knuth TeX,
\newdimen\len
\len=\hsize
\advance\len by -1cm
\newcount\cnt
\cnt=1
\advance\cnt by 1
eTeX,
\newdimen\len
\len=\dimexpr\hsize-1cm\relax
\newcount\cnt
\cnt=\numexpr1+1\relax
LaTeX with calc,
\usepackage{calc}
\newlength\len
\setlength{\len}{\textwidth+1cm}
\newcounter{cnt}
\setcounter{cnt}{1+1}
LaTeX2e with expl3 (LaTeX3),
\usepackage{expl3}
\ExplSyntaxOn
\dim_new:N \l_len_dim
\dim_set:Nn \l_len_dim {\textwidth + 1cm}
\int_new:N \l_cnt_int
\int_set:Nn \l_cnt_int {1+1}
\ExplSyntaxOff
Since LuaTeX is available, forget all that complicated stuff and do something like:
\directlua{
  a = 0
  a = a + 1
  tex.print(a)
}
In LaTeX, if you just want to subtract one known length (say, 1cm) from another (say, \textwidth) to obtain a new length variable, you can do so using the \newlength, \setlength, and \addtolength instructions, as in the following example:
\newlength\mylength
\setlength\mylength\textwidth
\addtolength\mylength{-1cm} %% note the minus sign
With a fairly recent TeX distribution
\newdimen\len
\len=\dimexpr\textwidth-1cm\relax
\newcount\cnt
\cnt=\numexpr1+1\relax
It's not quite clear which framework you're interested in, though.
This is a bit overkill for the particular examples that you mention, but since this works for more complicated expressions I tend to use it:
\documentclass{article}
\usepackage{pgf}
\begin{document}
\pgfmathsetmacro{\var}{\textwidth - 1cm}
The value of var is \var
\end{document}
Note that with \pgfmathsetmacro the result is a decimal without units. If you are only interested in lengths, then you can use the similar macro \pgfmathsetlength.
If you want to minimize what gets loaded and still use pgfmath, then see Is it possible to load pgfmath without loading the full pgf package?
I often use the tikz package for its wide range of capabilities; consider the following:
\documentclass[varwidth]{standalone}
\usepackage{tikz}
\usetikzlibrary{math}
\begin{document}
textwidth = \the\textwidth

\tikzmath{
  \resultone = \textwidth - 1cm;
}
textwidth - 1cm = \resultone

\tikzmath{
  \resulttwo = 1 + 1; % default type is float
  integer \resultthree; % declare a variable as an integer
  \resultthree = 1 + 1;
}
1 + 1 as float = \resulttwo\\
1 + 1 as integer = \resultthree
\end{document}
produces the following:
textwidth = 345.0pt
textwidth - 1cm = 316.54726
1 + 1 as float = 2.0
1 + 1 as integer = 2
More information can be found in the pgf/tikz manual in the chapter for the Math Library.
There is the adjcalc package, which is a sub-package of adjustbox, used to abstract the different calculation backends (eTeX, calc, pgfmath) into a common interface. Which backend is used can be selected by package options.
\usepackage{adjcalc} % or {adjustbox}
\newdimen\lengthmacro
\adjsetlength{\lengthmacro}{\textwidth-1cm}
This helped me:
\documentclass{article}
\usepackage{xfp}
\begin{document}
$12.5+13.8 = \fpeval{12.5+13.8}$
\end{document}
|
This seems like an interesting question, but I am not sure I understand all the details here, so this is more a comment than an answer. Because the preference relation is incomplete, we cannot have a real-valued utility function representing it (recall that indifference is different from indecisiveness, because the latter is not transitive).
Moreover, I do not think that you can have an arbitrary set $X$ together with an arbitrary $\succeq$ represented by a finite-dimensional $u$. To give a simple example, suppose that $X=[1,p_n]$ with $p_n$ the $n$-th prime, and your preferences are $x\succeq y$ if $x\geq y$ and $x,y\in(p_k,p_{k+1}]$ for some $k<n$. There are $n$ subsets of alternatives, each with a well-defined preference order, but alternatives are not comparable across subsets, so that you need at least $f(n)$ (I am not sure about the exact $f$) dimensions in your vector-valued utility function. When $X=[1,\infty)$, the number of dimensions explodes.
An important example is the case of social choice. Suppose that you have $k$ individuals with preferences over $X$ and you want to aggregate them. The Pareto order then gives you $u(x)=(u_i(x))_i$, where $u_i(x)$ is the (ordinal) utility of individual $i$ from alternative $x$. Indeed, if $u_i(x)>u_i(y)$ for all $i$, then $u(x)>u(y)$.
Finally, if $X$ were finite, then simply order the elements in some way $x^1,x^2,\ldots,x^k$ and construct $u\in \mathbb R^k$ as follows: $u_l(x^m)=1$ iff $x^m\succeq x^l$ and $u_l(x^m)=0$ otherwise (so in particular $u_l(x^l)=1$ by reflexivity).
To see that this represents the preferences, notice that if $x^l\succeq x^{l'}$, then $u_l(x^l)=1>u_l(x^{l'})$ and $u_{l'}(x^l)=u_{l'}(x^{l'})$; if $x^m\succeq x^l$ it must be that $x^m \succeq x^{l'}$, so that $u_m(x^l)=u_m(x^{l'})$, and vice versa, but if $x^l\succeq x^m\succeq x^{l'}$, then $u_m(x^l)>u_m(x^{l'})$; and if $x^m$ is not comparable with $x^l$ it cannot be worse than $x^{l'}$, so that $u_m(x^l)=u_m(x^{l'})=0$, while if it is not comparable with $x^{l'}$ it cannot be better than $x^l$.
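To make the finite construction concrete, here is a small sketch of the idea on a three-element poset (the alternatives, relation, and all names below are my own illustration, not from the answer): define $u_l(x)=1$ iff $x\succeq x^l$, so that strict preference corresponds to componentwise dominance of the utility vectors.

```python
from itertools import product

# A small partial order on three alternatives: x1 >= x2, and x3 is
# incomparable to both (a hypothetical example relation).
X = ["x1", "x2", "x3"]
succeq = {("x1", "x1"), ("x2", "x2"), ("x3", "x3"), ("x1", "x2")}

def utility(x):
    """Vector-valued utility: u_l(x) = 1 iff x >= x^l, else 0."""
    return tuple(1 if (x, xl) in succeq else 0 for xl in X)

def dominates(u, v):
    """Componentwise u >= v with at least one strict inequality."""
    return all(a >= b for a, b in zip(u, v)) and u != v

# Check the representation: for distinct alternatives, x is preferred
# to y exactly when u(x) dominates u(y) componentwise.
for x, y in product(X, repeat=2):
    if x == y:
        continue
    assert ((x, y) in succeq) == dominates(utility(x), utility(y))

print({x: utility(x) for x in X})
```

Incomparable pairs (here x3 versus x1 or x2) end up with utility vectors neither of which dominates the other, which is exactly how the vector-valued representation encodes indecisiveness.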
|