A particle moves along the x-axis so that at time $t$ its position is given by $x(t) = t^3-6t^2+9t+11$. During what time intervals is the particle moving to the left? I know that we need the velocity for that, and we can get it by taking the derivative, so $v(t) = 3t^2-12t+9$. But I don't know what to do after that: how can I find the intervals?
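For reference, a small sketch of the standard sign analysis: factor $v(t)=3(t-1)(t-3)$, then test one point in each interval determined by the roots (the code and names here are purely illustrative):

```python
# v(t) = 3t^2 - 12t + 9 = 3(t - 1)(t - 3): roots at t = 1 and t = 3.
# The particle moves left exactly where v(t) < 0, i.e. between the roots.

def v(t):
    return 3 * t**2 - 12 * t + 9

# Sign check with one test point per interval:
assert v(0) > 0      # moving right on (-inf, 1)
assert v(2) < 0      # moving left on (1, 3)
assert v(4) > 0      # moving right on (3, inf)
```

So the particle moves to the left exactly on the interval $(1, 3)$.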
Fix $c\in\{0,1,\dots\}$, let $K\geq c$ be an integer, and define $z_K=K^{-\alpha}$ for some $\alpha\in(0,2)$. I believe I have numerically discovered that$$\sum_{n=0}^{K-c}\binom{K}{n}\binom{K}{n+c}z_K^{n+c/2} \sim \sum_{n=0}^K \binom{K}{n}^2 z_K^n \quad \text{ as } K\to\infty$$but cannot ...
So, the whole discussion is about some polynomial $p(A)$, for $A$ an $n\times n$ matrix with entries in $\mathbf{C}$, and eigenvalues $\lambda_1,\ldots, \lambda_k$.
Anyways, part (a) is talking about proving that $p(\lambda_1),\ldots, p(\lambda_k)$ are eigenvalues of $p(A)$. That's basically routine computation. No problem there. The next bit is to compute the dimension of the eigenspaces $E(p(A), p(\lambda_i))$.
Seems like this bit follows from the same argument. An eigenvector for $A$ is an eigenvector for $p(A)$, so the rest seems to follow.
Finally, the last part is to find the characteristic polynomial of $p(A)$. I guess this means in terms of the characteristic polynomial of $A$.
Well, we do know what the eigenvalues are...
The so-called Spectral Mapping Theorem tells us that the eigenvalues of $p(A)$ are exactly the $p(\lambda_i)$.
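A quick numerical illustration (not a proof) of that statement, with an arbitrary polynomial $p(t)=t^2+2t+3$ chosen just for the example:

```python
import numpy as np

# For a random symmetric (hence diagonalizable) A and p(t) = t^2 + 2t + 3,
# the eigenvalues of p(A) should be exactly the values p(lambda_i).
rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                      # symmetric matrix

lam = np.linalg.eigvalsh(A)            # eigenvalues of A
pA = A @ A + 2 * A + 3 * np.eye(4)     # p(A)
lam_pA = np.linalg.eigvalsh(pA)        # eigenvalues of p(A)

# Sort both sides, since p need not preserve the ordering:
assert np.allclose(np.sort(lam**2 + 2 * lam + 3), np.sort(lam_pA))
```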
Usually, by the time you start talking about complex numbers you consider the real numbers as a subset of them, since a and b are real in a + bi. But you could define it that way and call it a "standard form" like ax + by = c for linear equations :-) @Riker
"a + bi where a and b are integers" Complex numbers a + bi where a and b are integers are called Gaussian integers.
I was wondering if it is easier to factor in a non-UFD than it is to factor in a UFD. I can come up with arguments for that, but I also have arguments in the opposite direction. For instance: it should be easier to factor when there are more possibilities (multiple factorizations in a non-UFD...
Does anyone know if $T: V \to \mathbb{R}^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book doesn't say so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such a $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to that is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If this is empty, weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
$O(n)$ acts transitively on $S^{n-1}$ with stabilizer at a point $O(n-1)$
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
Yeah, this software cannot be too easy to install; my installer is very professional-looking. It's currently not tied into that code, but it directs the user how to search for their MiKTeX (or install it) and does a test LaTeX rendering
Somebody like Zeta (on Code Review) might be able to help a lot... I'm not sure if he does a lot of category theory, but he does a lot of Haskell (not that I'm trying to conflate the two)... so he would probably be one of the better bets for asking for a revision of the code.
He is usually in the 2nd monitor chat room. There are a lot of people in those chat rooms who help each other with projects.
I'm not sure how many of them are adept at category theory though... still, this chat tends to emphasize a lot of small problems and occasionally goes off on tangents.
Your project is probably too large for an actual question on Code Review, but there is a lot of GitHub activity in the chat rooms. gl.
In mathematics, the Fabius function is an example of an infinitely differentiable function that is nowhere analytic, found by Jaap Fabius (1966). It was also written down as the Fourier transform of $\hat{f}(z)=\prod_{m=1}^{\infty}(\cos\ldots$
Defined as the probability that $\sum_{n=1}^\infty2^{-n}\zeta_n$ will be less than $x$, where the $\zeta_n$ are chosen randomly and independently from the unit interval
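That description can be simulated directly; a hedged Monte Carlo sketch (truncating the series at 40 terms, which perturbs each sum by less than $2^{-40}$):

```python
import random

# Monte Carlo sketch of the distribution in question: F(x) is the
# probability that sum_{n>=1} 2^{-n} zeta_n < x, with zeta_n ~ U(0,1) i.i.d.

def sample(n_terms=40):
    return sum(random.uniform(0, 1) / 2**n for n in range(1, n_terms + 1))

random.seed(1)
draws = [sample() for _ in range(100_000)]

# The sum has mean 1/2 and a distribution symmetric about 1/2,
# so F(1/2) should be close to 0.5:
F_half = sum(d < 0.5 for d in draws) / len(draws)
assert abs(F_half - 0.5) < 0.01
```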
@AkivaWeinberger are you familiar with the theory behind Fourier series?
anyway, here's some food for thought
for $f : S^1 \to \Bbb C$ square-integrable, let $c_n := \displaystyle \int_{S^1} f(\theta) \exp(-i n \theta) (\mathrm d\theta/2\pi)$, and $f^\ast := \displaystyle \sum_{n \in \Bbb Z} c_n \exp(in\theta)$. It is known that $f^\ast = f$ almost everywhere.
(a) is $-^\ast$ idempotent? i.e. is it true that $f^{\ast \ast} = f^\ast$?
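A discrete sanity check (not the $L^2$ statement itself, where the answer is yes because $f^\ast = f$ almost everywhere and the coefficients only depend on the a.e. class): with finitely many samples, analysis followed by synthesis is a projection, so applying it twice changes nothing. numpy's FFT is used as a stand-in here.

```python
import numpy as np

# Discrete sketch of (a): computing Fourier coefficients and resynthesising
# is the identity on the sampled data, so doing it twice gives the same result.
rng = np.random.default_rng(0)
f = rng.standard_normal(64)            # samples of f on a uniform grid of S^1

def star(g):
    # synthesis(analysis(g)): in the finite setting this is ifft(fft(g))
    return np.fft.ifft(np.fft.fft(g))

f1 = star(f)
f2 = star(f1)
assert np.allclose(f1, f2)             # idempotent: f** = f*
```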
@AkivaWeinberger You need to use the definition of $F$ as the cumulative function of the random variables. $C^\infty$ was a simple step, but I don't have access to the paper right now so I don't recall it.
> In mathematics, a square-integrable function, also called a quadratically integrable function, is a real- or complex-valued measurable function for which the integral of the square of the absolute value is finite.
I am having some difficulties understanding the difference between simplicial and singular homology. I am aware of the fact that they are isomorphic, i.e. the homology groups are in fact the same (and maybe this doesn't help my intuition), but I am having trouble seeing where in the setup they d...
Usually it is a great advantage to consult the notes, as they tell you exactly what has been done. A book will teach you the field, but not necessarily help you understand the style of questions that the prof (who creates the exam) asks.
@AkivaWeinberger having thought about it a little, I think the best way to approach the geometry problem is to argue that the relevant condition (centroid is on the incircle) is preserved by similarity transformations
hence you're free to rescale the sides, and therefore the (semi)perimeter as well
so one may (for instance) choose $s=(a+b+c)/2=1$ without loss of generality
that makes a lot of the formulas simpler, e.g. the inradius is identical to the area
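That normalization can be sanity-checked numerically: the inradius is $r = \text{Area}/s$, so with $s = 1$ it equals the area. A small sketch using Heron's formula (all names illustrative):

```python
import math

# Check of the normalisation trick: r = Area / s, so after rescaling a
# triangle so that s = (a+b+c)/2 = 1, the inradius equals the area.

def semiperimeter_area_inradius(a, b, c):
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    return s, area, area / s

# Rescale a 3-4-5 triangle so its semiperimeter is 1:
a, b, c = 3, 4, 5
scale = 2 / (a + b + c)
s, area, r = semiperimeter_area_inradius(a * scale, b * scale, c * scale)

assert abs(s - 1) < 1e-12
assert abs(r - area) < 1e-12
```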
It is asking how many terms of the Euler–Maclaurin formula we need in order to compute the Riemann zeta function in the complex plane.
$q$ is the upper summation index in the sum with the Bernoulli numbers.
This appears to answer it in the positive: "By repeating the above argument we see that we have analytically continued the Riemann zeta-function to the right-half plane σ > 1 − k, for all k = 1, 2, 3, . . .."
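For concreteness, here is a sketch of the Euler–Maclaurin evaluation of $\zeta(s)$ with $q$ Bernoulli-number terms (my own illustration, not taken from the quoted source; the cutoff $N$ and $q$ are arbitrary, and only real $s$ is exercised here, though the same formula performs the analytic continuation):

```python
import math

# zeta(s) ~ sum_{n=1}^{N} n^{-s} + N^{1-s}/(s-1) - N^{-s}/2
#           + sum_{k=1}^{q} B_{2k}/(2k)! * s(s+1)...(s+2k-2) * N^{-s-2k+1}
# B_{2k} hardcoded for k = 1..3.
B = {2: 1/6, 4: -1/30, 6: 1/42}

def zeta_em(s, N=10, q=3):
    total = sum(n**-s for n in range(1, N + 1))
    total += N**(1 - s) / (s - 1) - 0.5 * N**-s
    for k in range(1, q + 1):
        prod = 1.0
        for j in range(2 * k - 1):      # rising product s(s+1)...(s+2k-2)
            prod *= s + j
        total += B[2 * k] / math.factorial(2 * k) * prod * N**(-s - 2 * k + 1)
    return total

# Even with N = 10 and q = 3 this matches zeta(2) = pi^2/6 very closely:
assert abs(zeta_em(2) - math.pi**2 / 6) < 1e-8
```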
|
I am trying to understand the way Smith demonstrates that the general solution to his equation (12) is (13) (see page 6).
(12) \begin{eqnarray} \dot{z} &=& (1- \alpha)\left[1-\left(\delta + \frac{\bar{x}}{1+\bar{x}Ae^{\bar{x}t}}\right)z\right] \end{eqnarray}
In the Appendix I demonstrate that the general solution to Equation (12) is:
(13)\begin{eqnarray} z &=& \frac{1}{\bar{x}+\delta}\, {}_2F_1(1-\alpha,1,d;\omega)+B\bar{x}^{\alpha-1}e^{-(1-\alpha)(\bar{x}+\delta)t}(1+\bar{x}Ae^{\bar{x}t})^{1-\alpha} \end{eqnarray}
The Appendix: A.1 \begin{eqnarray} \dot{z} &=& -(1- \alpha)\left(\delta + \frac{\bar{x}}{1+\bar{x}Ae^{\bar{x}t}}\right)z \end{eqnarray} This can be integrated to find the complementary solution:(A.2) \begin{eqnarray} z_c &=& \bar{x}^{\alpha-1}e^{-(1-\alpha)(\bar{x}+\delta)t}(1+\bar{x}Ae^{\bar{x}t})^{1-\alpha} \end{eqnarray}
To find the particular solution to equation (12), I will use the method of variation of parameters. Conjecture that the particular solution is $z_p = z_c\Psi$, where $\Psi$ is an unknown function of time. Substituting this conjecture into equation (12), it follows that: (A.3) \begin{eqnarray} \dot{\Psi} &=& \frac{1-\alpha}{z_c} = (1-\alpha)\bar{x}^{1-\alpha}e^{(1-\alpha)(\bar{x}+\delta)t}(1+\bar{x}Ae^{\bar{x}t})^{\alpha-1} \end{eqnarray}
First, is (A.2) then just an integrated version of (A.1) or are there any other steps involved?
Second, I really do not see how he substituted the conjecture into (12), and how he obtains (A.3) from this substitution. Especially here I am lost; I cannot see how he came up with $\dot{\Psi}$. Did he substitute $z_p$ for $\dot{z}$ or $z$ in (12)?
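For what it's worth, here is my reading of the substitution step (an assumption about what the appendix intends, not the author's own words): differentiate the conjecture $z_p = z_c\Psi$ and use the fact that $z_c$ solves the homogeneous equation (A.1).

```latex
% Differentiate z_p = z_c \Psi:
\dot z_p = \dot z_c \Psi + z_c \dot\Psi .
% Substitute z_p for z (and \dot z_p for \dot z) in (12):
\dot z_c \Psi + z_c \dot\Psi
  = (1-\alpha)
  - (1-\alpha)\left(\delta + \frac{\bar{x}}{1+\bar{x}Ae^{\bar{x}t}}\right) z_c \Psi .
% By (A.1), \dot z_c = -(1-\alpha)\bigl(\delta + \bar{x}/(1+\bar{x}Ae^{\bar{x}t})\bigr) z_c,
% so the two \Psi terms cancel, leaving
z_c \dot\Psi = 1-\alpha
\quad\Longrightarrow\quad
\dot\Psi = \frac{1-\alpha}{z_c},
% which is the first equality in (A.3).
```

On this reading, the answer to the second question is "both": $z$ is replaced by $z_p$ and $\dot z$ by $\dot z_p$ simultaneously.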
Citation: [Smith, William. (2006). A Closed Form Solution to the Ramsey Model. Contributions to Macroeconomics. 6.]
|
Square brackets $[\;]$ will denote taking the ring of polynomials, and round brackets $(\;)$ will denote taking the field of rational functions.
My homework assignment from about a month ago had the following problem in it.
Find the Galois group of the field extension $\mathbb F_3(x^4)\subset\mathbb F_{3^2}(x).$
I didn't do it then and now I'm trying to do it to prepare myself for the exam.
The problem looks very exotic to me and I'm having trouble starting to do it. I haven't determined any Galois groups by myself yet. I first tried to see what I know about this extension.
First of all, we have $$\mathbb F_3(x^4)\subset \mathbb F_{3}(x)\subset \mathbb F_{3^2}(x).$$
The first extension has degree at most $4$ because $x$ is a zero of $f\in \mathbb F_{3}(x^4)[y],$ where $$f(y)=y^4-x^4.$$ But $f$ is irreducible by Eisenstein's criterion: $x^4$ is a prime element in $\mathbb F_3[x^4]$. Therefore $f$ is the minimal polynomial of $x$ over $\mathbb F_3(x^4)$ and the first extension has degree $4$.
The extension $\mathbb F_3\subset \mathbb F_{3^2}$ has degree $2$ by an element count. Therefore by what's been said here, the second extension has degree $2$. Thus, the extension in the problem is finite and has degree $8$.
Unfortunately, $\mathbb F_3(x^4)$ isn't finite, nor is it of characteristic $0$. If it were one of these things, I could say that the order of the Galois group of the extension is $\leq 8.$ The best I'm able to see is that there is an eight-element $\mathbb F_3(x^4)$-basis of $\mathbb F_{3^2}(x),$ for example $(1,x_1,x_2,\ldots,x_7)$. So $$\mathbb F_{3^2}(x)=\mathbb F_3(x^4)(x_1,x_2,\ldots,x_7).$$
I could try to give an upper bound to the order of the Galois group of the extension by saying that each of $x_i$ must be mapped by any automorphism to some root of its minimal polynomial. But I don't think I can get a satisfactory bound this way.
In general, most of the study of field automorphisms done in classes was in characteristic zero and for finite fields, so most of the theorems don't apply here. I think this must mean that the problem must have an elementary solution, but I don't see it.
|
RS Aggarwal Class 9 Chapter 4 – Angles, Lines and Triangles Ex 4B Solutions Free PDF
The RS Aggarwal Class 9 Solutions Chapter 4, Angles, Lines and Triangles, Ex 4B have been formulated in accordance with the latest CBSE syllabus, which makes these solutions highly effective from an exam point of view. They help students improve their performance and give them the ability to solve difficult questions with ease. They are the best resource to prepare for the exam, as they provide an ample number of questions to practice from each and every topic.
Students are advised to work through these solutions so that they feel confident while attempting the final question paper. The RS Aggarwal Class 9 Solutions are simple, can be practiced easily, and are highly recommended for students as reference material. They will help you clear all your doubts and attempt a greater number of questions during the exam.
Download PDF of RS Aggarwal Class 9 Solutions Chapter 4 – Angles, Lines and Triangles Ex 4B
Question 1: In \(\bigtriangleup ABC\), if \(\angle B=76^{0}\) and \(\angle C=48^{0}\), find \(\angle A\).
Ans: \(\angle A + \angle B + \angle C = 180 ^{0}\) [Sum of the angles of a triangle] \(\Rightarrow \angle A + 76^{0} + 48^{0} = 180^{0}\) \(\Rightarrow \angle A + 124^{0} = 180^{0}\) \(\Rightarrow \angle A = 56^{0}\)
Question 2: The angles of a triangle are in the ratio 2:3:4 . Find the angles.
Ans:
Let the angles of the given triangle measure \((2x)^{0}\), \((3x)^{0}\) and \((4x)^{0}\), respectively.
Then,
\(2x + 3x + 4x = 180\) [Sum of the angles of a triangle]
\(\Rightarrow 9x = 180\)
\(\Rightarrow x = 20\)
Hence, the measures of the angles are \(2 \times 20^{0} = 40^{0}\), \(3 \times 20^{0} = 60^{0}\) and \(4 \times 20^{0} = 80^{0}\).
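The same computation can be verified mechanically; a tiny illustrative sketch:

```python
# Question 2: angles in the ratio 2:3:4 summing to 180 degrees.
x = 180 / (2 + 3 + 4)          # 9x = 180  =>  x = 20
angles = [2 * x, 3 * x, 4 * x]
assert angles == [40.0, 60.0, 80.0]
assert sum(angles) == 180.0
```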
Question 3: In \(\bigtriangleup ABC\), if \(3\angle A = 4\angle B=6\angle C\), calculate \(\angle A , \angle B\; and \;\angle C\).
Ans:
Let \(3 \angle A = 4 \angle B = 6 \angle C = x^{0}\)
Then,\(\angle A = (\frac{x}{3})^{0} , \angle B = (\frac{x}{4})^{0}\ and\ \angle C = (\frac{x}{6})^{0}\)
Therefore, \( \frac{x}{3} + \frac{x}{4} + \frac{x}{6} = 180^{0}\) [Sum of the angles of a triangle]
\(\Rightarrow 4x + 3x + 2x = 2160^{0}\)
\(\Rightarrow 9x = 2160^{0}\)
\(\Rightarrow x = 240^{0}\)
Therefore,\(\angle A = (\frac{240}{3})^{0} = 80^{0}\) \(\angle B = (\frac{240}{4})^{0} = 60^{0}\) \(\angle c = (\frac{240}{6})^{0} = 40^{0}\)
Question 4: In \(\bigtriangleup ABC,\angle A +\angle B=108^{0}\; and \;\angle B+\angle C =130^{0}\), find \(\angle A,\angle B\; and \angle C\) .
Ans:
Let \(\angle A + \angle B = 108^{0}\ and\ \angle B + \angle C = 130^{0}\) \(\Rightarrow \angle A + \angle B + \angle B + \angle C = (108+130)^{0} \) \(\Rightarrow (\angle A + \angle B + \angle C) + \angle B = 238^{0}\ [\angle A + \angle B + \angle C = 180^{0}]\) \(\Rightarrow 180^{0} + \angle B = 238^{0}\) \(\Rightarrow \angle B = 58^{0}\)
Therefore, \( \angle C = 130^{0} - \angle B = (130 - 58)^{0} = 72^{0}\)
Therefore, \( \angle A = 108^{0} - \angle B = (108 - 58)^{0} = 50^{0}\)
Question 5: In \(\bigtriangleup ABC,\angle A +\angle B=125^{0}\; and \;\angle A+\angle C =113^{0}\), find \(\angle A,\angle B\; and \angle C\) .
Ans:
Let \(\angle A + \angle B = 125^{0}\; and\; \angle A + \angle C = 113^{0}\)
Then,\(\angle A + \angle B + \angle A + \angle C = (125+113)^{0}\) \(\Rightarrow (\angle A + \angle B + \angle C ) + \angle A = 238^{0}\) \(\Rightarrow 180^{0} + \angle A = 238^{0}\) \(\Rightarrow \angle A = 58^{0}\)
Therefore, \( \angle B = 125^{0} - \angle A = (125 - 58)^{0} = 67^{0}\)
Therefore, \( \angle C = 113^{0} - \angle A = (113 - 58)^{0} = 55^{0}\)
Question 6: In \(\bigtriangleup PQR, \angle P -\angle Q = 42 ^{0}\; and \;\angle Q – \angle R = 21^{0}\), find \(\angle P ,\angle Q \; and \;\angle R \) .
Ans:
Given :\(\angle P – \angle Q = 42^{0}\; and\; \angle Q – \angle R = 21^{0}\)
Then,\(\angle P = 42^{0} + \angle Q\; and\; \angle R = \angle Q – 21^{0}\)
Therefore, \( 42^{0} + \angle Q + \angle Q + \angle Q – 21^{0} = 180^{0}\) [Sum of the angles of a triangle] \(\Rightarrow 3 \angle Q = 159^{0}\) \(\Rightarrow \angle Q = 53^{0}\)
Therefore, \( \angle P = 42^{0} + \angle Q = (42 + 53)^{0} = 95^{0}\)
Therefore, \( \angle R = \angle Q - 21^{0} = (53 - 21)^{0} = 32^{0}\)
Question 7: The sum of two angles of a triangle is \(116^{0}\) and their difference is \(24^{0}\). Find the measure of each angle of the triangle.
Ans:
Let \(\angle A + \angle B = 116^{0}\ and \ \angle A – \angle B = 24^{0} \)
Then, \( \angle A + \angle B + \angle A - \angle B = (116+24)^{0}\) \(\Rightarrow 2 \angle A = 140^{0}\) \(\Rightarrow \angle A = 70^{0}\)
Therefore, \( \angle B = 116^{0} - \angle A = (116 - 70)^{0} = 46^{0}\)
Also, in \(\triangle ABC\) \(\angle A + \angle B + \angle C = 180^{0}\) [Sum of the angles of a triangle]
Therefore, \( 70^{0} + 46^{0} + \angle C = 180^{0}\)
Therefore, \( \angle C = 64^{0}\)
Question 8: Two angles of a triangle are equal and the third angle is greater than each of them by \(18^{0}\). Find the angles.
Ans:
Let \(\angle A = \angle B\ and\ \angle C = \angle A + 18^{0}\)
Then,\(\angle A + \angle B + \angle C = 180^{0}\) [Sum of the angles of a triangle] \(\angle A + \angle A + \angle A + 18^{0} = 180^{0}\) \(\Rightarrow 3 \angle A = 162^{0}\) \(\Rightarrow \angle A = 54^{0}\)
Since,\(\angle A = \angle B\) \(\Rightarrow \angle B = 54^{0}\)
Therefore, \( \angle C = \angle A + 18^{0} = (54 + 18)^{0} = 72^{0}\)
Question 9: Of the three angles of a triangle, one is twice the smallest and another one is thrice the smallest. Find the angles.
Ans:
Let the smallest angle of the triangle be \(\angle C\), and let \(\angle A = 2\angle C\) and \(\angle B = 3\angle C\).
Then,\(\angle A + \angle B + \angle C = 180^{0}\) [Sum of the angles of a triangle] \(\Rightarrow 2 \angle C + 3 \angle C + \angle C = 180^{0}\) \(\Rightarrow 6 \angle C = 180^{0}\) \(\Rightarrow \angle C = 30^{0}\)
Therefore, \( \angle A = 2\angle C = 2(30)^{0} = 60^{0}\)
Also, \(\angle B = 3\angle C = 3(30)^{0} = 90^{0}\)
Question 10: In a right-angled triangle, one of the acute angles measures \(53^{0}\). Find the measure of each angle of the triangle.
Ans:
Let ABC be a triangle right-angled at B.
Then, \(\angle B = 90^{0}\ and\ let\ \angle A=53^{0}\)
Therefore, \( \angle A + \angle B + \angle C = 180^{0}\) [Sum of the angles of a triangle]\(\Rightarrow 53^{0} + 90^{0} + \angle C = 180^{0}\)\(\Rightarrow \angle C = 37^{0}\)
Hence, \(\angle A = 53^{0}, \angle B = 90^{0}, \angle C = 37^{0}\)
Question 11: If one angle of a triangle is equal to the sum of the other two, show that the triangle is right angled.
Ans:
Let ABC be a triangle.
Then \(\angle A = \angle B + \angle C\)
Therefore, \( \angle A + \angle B + \angle C = 180^{0}\) [Sum of the angles of a triangle] \(\Rightarrow \angle B + \angle C + \angle B + \angle C = 180^{0}\) \(\Rightarrow 2 (\angle B + \angle C) = 180^{0}\) \(\Rightarrow \angle B + \angle C = 90^{0}\) \(\Rightarrow\angle A = 90^{0}\) [\( \angle A = \angle B + \angle C\)]
This implies that the triangle is right-angled at A.
Question 12: A \( \bigtriangleup ABC\) is right angled at A. If \(AL\perp BC\), prove that \(\angle BAL=\angle ACB\).
Ans: We know that the sum of the two acute angles of a right-angled triangle is \(90^{0}\).
From the right \(\triangle ABL\), we have:
Therefore, \( \angle BAL + \angle ABL = 90^{0}\) \(\Rightarrow \angle BAL = 90^{0} – \angle ABL\) \(\Rightarrow \angle BAL = 90^{0} – \angle ABC\) …(1)
Also, from the right \(\triangle ABC\), we have:\( \angle ABC+ \angle ACB = 90^{0}\) \( \Rightarrow \angle ACB = 90^{0} - \angle ABC\) …(2)
From (1) and (2), we get:\( \angle ACB = \angle BAL [\angle BAL = 90^{0} – \angle ABC]\)
Therefore, \( \angle BAL = \angle ACB\)
Question 13: If each angle of a triangle is less than the sum of the other two, show that the triangle is acute-angled.
Ans:
Let ABC be the triangle
Let \( \angle A < \angle B + \angle C\)
Then,\( 2\angle A < \angle A + \angle B + \angle C \) [Adding Angle A to both sides ] \( \Rightarrow 2 \angle A < 180^{0}\) \( \Rightarrow \angle A < 90^{0}\)
Also, let \( \angle B < \angle A + \angle C\)
Then,\( 2\angle B < \angle A + \angle B + \angle C\) [Adding Angle B to both sides ] \( \Rightarrow 2 \angle B < 180^{0}\) \( \Rightarrow \angle B < 90^{0}\)
And let \( \angle C < \angle A + \angle B\)
Then,\( 2 \angle C < \angle A + \angle B + \angle C\) [Adding \(\angle C\) to both sides] \(\Rightarrow 2 \angle C < 180^{0} [\angle A + \angle B + \angle C = 180^{0}] \) \( \Rightarrow \angle C < 90^{0}\)
Hence, each angle of the triangle is less than \(90^{0}\).
Therefore, the triangle is acute-angled.
Question 14: If one angle of a triangle is greater than the sum of the other two, show that the triangle is obtuse-angled.
Ans:
Let ABC be a triangle and let \(\angle C > \angle A + \angle B \)
Then, we have\( 2 \angle C > \angle A + \angle B + \angle C\) [Adding Angle C to both sides ] \( \Rightarrow 2 \angle C > 180^{0} [\angle A + \angle B + \angle C = 180^{0}]\) \( \Rightarrow \angle C > 90^{0}\)
Since one of the angles of the triangle is greater than \(90^{0}\), the triangle is obtuse-angled.
Question 15: In the given figure, side BC of \(\bigtriangleup ABC\) is produced to D. If \(\angle ACD= 128^{0}\) and \(\angle ABC= 43^{0}\), find \(\angle BAC\; and \; \angle ACB\).
Ans:
Side BC of triangle ABC is produced to D.
Therefore, \( \angle ACD = \angle A + \angle B \) [Exterior angle property] \( \Rightarrow 128^{0} = \angle A + 43^{0}\) \(\Rightarrow \angle A = (128 – 43)^{0} \) \( \Rightarrow \angle A = 85^{0} \) \( \Rightarrow \angle BAC = 85^{0}\)
Also, in triangle ABC,\( \angle BAC + \angle ABC + \angle ACB = 180^{0}\) [Sum of the angles of a triangle] \( \Rightarrow 85^{0} + 43^{0} + \angle ACB = 180^{0}\) \( \Rightarrow 128^{0} + \angle ACB = 180^{0}\) \( \Rightarrow \angle ACB = 52^{0} \)
Question 16: In the given figure, the side BC of \(\bigtriangleup ABC\) has been produced on both sides, to D on the left and to E on the right. If \(\angle ABD= 106^{0}\; and \; \angle ACE = 118^{0}\), find the measure of each angle of the triangle.
Ans:
Side CB of triangle ABC is produced to D.
Therefore, \( \angle ABD = \angle A + \angle C\) [Exterior angle property] \( \Rightarrow 106^{0} = \angle A + \angle C \) …(i)
Also, side BC of triangle ABC is produced to E.\( \angle ACE = \angle A + \angle B \) \( \Rightarrow 118^{0} = \angle A + \angle B \) …(ii)
Adding (i) and (ii), we get:\( \angle A + \angle A + \angle B + \angle C = (106 + 118)^{0} \) \( \Rightarrow ( \angle A + \angle B + \angle C ) + \angle A = 224^{0} \;[\angle A + \angle B + \angle C = 180^{0}] \) \(\Rightarrow 180^{0} + \angle A = 224^{0} \) \(\Rightarrow \angle A = 44^{0}\)
Therefore, \( \angle B = 118^{0} - \angle A\ [Using (ii)] \) \( \Rightarrow \angle B = (118 - 44)^{0} \) \( \Rightarrow \angle B = 74^{0} \)
And,\( \angle C = 106^{0} – \angle A\ [Using (i)] \) \(\Rightarrow \angle C = (106 – 44)^{0} \) \( \Rightarrow \angle C = 62^{0} \)
Question 17: Calculate the value of x in each of the following figures.
Ans:
(i) Side AC of a triangle ABC is produced to E.
Therefore, \( \angle EAB = \angle B + \angle C \) \( \Rightarrow 110^{0} = x + \angle C\) ….(i)
Also,\( \angle ACD + \angle ACB = 180^{0}\) [Linear Pair] \( \Rightarrow 120^{0} + \angle ACB = 180^{0} \) \( \Rightarrow \angle ACB = 60^{0} \) \( \angle C = 60^{0} \)
Substituting the value of \( \angle C \) in (i), we get x = 50
(ii)
From \( \triangle ABC \) we have:\( \angle A + \angle B + \angle C = 180^{0} \) [Sum of the angles of a triangle] \( \Rightarrow 30^{0} + 40^{0} +\angle C = 180^{0} \) \( \Rightarrow \angle C = 110^{0} \) \( \Rightarrow \angle ACB = 110^{0} \)
Also,\( \angle ECB + \angle ECD = 180^{0} \) \( \Rightarrow 110^{0} + \angle ECD = 180^{0} \) \( \Rightarrow \angle ECD = 70^{0} \)
Now, in \( \triangle ECD, \)
Therefore, \( \angle AED = \angle ECD + \angle EDC \) [exterior angle property] \( \Rightarrow x = 70^{0} + 50^{0} \) \( \Rightarrow x = 120^{0} \)
(iii)\( \angle ACB + \angle ACD = 180^{0} \) [Linear Pair] \( \Rightarrow \angle ACB + 115^{0} = 180^{0} \) \( \Rightarrow \angle ACB = 65^{0} \)
Also,\( \angle EAF = \angle BAC\) [Vertically opposite angles] \( \Rightarrow \angle BAC = 60^{0} \)
Therefore, \( \angle BAC + \angle ABC + \angle ACB = 180^{0} \) [Sum of the angles of a triangle] \( \Rightarrow 60^{0} + x + 65^{0} = 180^{0} \) \( \Rightarrow x = 55^{0} \)
(iv)\( \angle BAE = \angle CDE\) [Alternate angles] \( \Rightarrow \angle CDE = 60^{0} \)
Therefore, \( \angle ECD + \angle CDE + \angle CED = 180^{0} \) [Sum of the angles of a triangle] \( \Rightarrow 45^{0} + 60^{0} + x = 180^{0} \) \( \Rightarrow x = 75^{0} \)
(v)
From \( \triangle ABC \), we have:\( \angle BAC + \angle ABC + \angle ACB = 180^{0} \) \( \Rightarrow 40^{0} + \angle ABC + 90^{0} = 180^{0} \) \( \Rightarrow \angle ABC = 50^{0} \)
Also, from \( \triangle EBD \), we have:\( \angle BED + \angle EBD + \angle BDE = 180^{0} \) [Sum of the angles of a triangle] \( \Rightarrow 100^{0} + 50^{0} + x = 180^{0} [\angle ABC = \angle EBD] \) \( \Rightarrow x = 30^{0} \)
(vi)
From \( \triangle ABE, \) we have:\( \angle BAE + \angle ABE + \angle AEB = 180^{0} \) [Sum of the angles of a triangle] \( \Rightarrow 75^{0} + 65^{0} + \angle AEB = 180^{0} \) \( \Rightarrow \angle AEB = 40^{0} \)
Therefore, \( \angle AEB = \angle CED \) [Vertically opposite angles]
Therefore, \( \angle CED = 40^{0} \)
Also, from \( \triangle CDE,\) we have:\( \angle ECD + \angle CDE + \angle CED = 180^{0} \) [Sum of the angles of a triangle] \( \Rightarrow 110^{0} + x + 40^{0} = 180^{0} \) \( \Rightarrow x = 30^{0} \)
Question 18: Calculate the value of x in the given figure.
Ans:
Join A and D to produce AD to E.
Then,\( \angle CAD + \angle DAB = 55^{0} \) \( \angle CDE + \angle EDB = x^{0} \)
Side AD of triangle ACD is produced to E.
Therefore, \( \angle CDE = \angle CAD + \angle ACD \) …(i) (Exterior angle property)
Side AD of triangle ABD is produced to E.
Therefore, \( \angle EDB = \angle DAB + \angle ABD \) …(ii) (Exterior angle property)
Adding (i) and (ii), we get:\( \angle CDE + \angle EDB = \angle CAD + \angle ACD + \angle DAB + \angle ABD \) \( \Rightarrow x^{0} = ( \angle CAD + \angle DAB) + 30^{0} + 45^{0} \) \(\Rightarrow x^{0} = 55^{0} + 30^{0} + 45^{0} \) \(\Rightarrow x^{0} = 130^{0} \)
Key Features of RS Aggarwal Class 9 Solutions Chapter 4 – Angles, Lines and Triangles Ex 4B
The solutions can be referred to by students to clear their doubts. All the solutions are solved accurately and in simple language. They are the best study material for students who want to score good marks in their exams, and they provide easy methods to solve tricky and difficult questions.
|
I'm following this tutorial where at some point the PDF derived in spherical coordinates for a Lambertian surface is
\begin{array}{l} p(\theta, \phi) = \dfrac{\sin \theta}{2 \pi}. \end{array}
But as soon as they compute a sample, the result is instead divided by $ \dfrac{1}{2\pi} $, which they say is the "pdf of the integral"
Why isn't it divided by $ \frac{\sin \theta}{2 \pi} $ instead?
If we were using differential solid angle over the unit hemisphere, the only constant probability density function integrating to 1 is in fact $ \frac{1}{2\pi} $
But if we separate the integral over the hemisphere traced by spherical coordinates
$$ \int_{\phi = 0}^{2\pi} \int_{\theta = 0}^{\frac{\pi}{2}} \, \frac{\sin{\theta}}{2\pi}d\theta d\phi= 1. $$
The PDF now becomes $ \frac{\sin \theta}{2 \pi} $ yet they still divide by $ \frac{1}{2\pi} $
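The normalization of $\frac{\sin\theta}{2\pi}$ in $(\theta,\phi)$ coordinates can be checked directly; a quick Monte Carlo sketch (the sample count is arbitrary):

```python
import math, random

# Monte Carlo check that p(theta, phi) = sin(theta) / (2*pi) integrates
# to 1 over theta in [0, pi/2], phi in [0, 2*pi].
random.seed(0)
N = 100_000
acc = 0.0
for _ in range(N):
    theta = random.uniform(0, math.pi / 2)
    phi = random.uniform(0, 2 * math.pi)   # integrand is independent of phi
    acc += math.sin(theta) / (2 * math.pi)

domain_area = (math.pi / 2) * (2 * math.pi)  # size of the (theta, phi) box
integral = domain_area * acc / N
assert abs(integral - 1.0) < 0.02
```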
EDIT: after carefully reviewing the concept of PDFs and integration over the hemisphere I'm starting to think the article I've linked is making a substantial error, mixing the idea of importance sampling with the pdf of choosing a direction of reflectance from a lambertian surface
Radiance is defined as $$ L_{(x,\omega)} = \frac{\mathrm{d}^2\Phi}{\mathrm{d}\omega\ \cos\theta\ \mathrm{d}x} $$ Since it's defined over differential solid angles, we can interpret the result of one sample as if it was the flux density "over a unit steradian"
If we use monte carlo estimation and find "the average flux density over a single steradian" and multiply the result by $2\pi$, we get irradiance:
$$ (2\pi) \frac{1}{n}\sum^n L_{(x,\omega)} $$ But in this particular case, the $2\pi$ has nothing to do with PDFs, since it's just the measure of the integration domain used for the Monte Carlo estimation!
Instead, the real pdf is computed for $\theta$ and $\phi$ because that is the probability density function of choosing
one direction over another, according to the particular properties of a Lambertian surface. A mirror-like surface has a different probability of choosing one direction over another, but this has nothing to do with dividing the sample by $\frac{1}{2\pi}$. It would be different if we were using importance sampling, but in this case it seems like we're not
Is my reasoning correct? If not, what am I missing?
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$, and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ from this.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between two cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q\mid(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
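A minimal sketch of that rewriting loop. The rule set below (deciding the word problem for $\Bbb Z/3 = \langle a \mid a^3\rangle$, with uppercase letters denoting inverses) is a toy, not a genuine Dehn presentation of a hyperbolic group; it only illustrates the greedy replace-and-free-reduce algorithm.

```python
# Rules u_i -> v_i with |u_i| > |v_i| (here: a^3 = 1, and a^2 = a^{-1}):
RULES = [("aaa", ""), ("AAA", ""), ("aa", "A"), ("AA", "a")]

def free_reduce(w):
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()                 # cancel an adjacent x x^{-1} pair
        else:
            out.append(ch)
    return "".join(out)

def is_trivial(w):
    w = free_reduce(w)
    changed = True
    while changed:                    # each rule shortens w, so this halts
        changed = False
        for u, v in RULES:
            if u in w:
                w = free_reduce(w.replace(u, v, 1))
                changed = True
                break
    return w == ""

assert is_trivial("aaa")              # a^3 = 1
assert is_trivial("aAaaa")            # freely reduces to a^3
assert not is_trivial("a")
```

Note how the "hit the trivial word or get stuck" dichotomy plays out: every free-reduced word over one generator is a power of $a$ or of $A$, so a stuck word is one of $\varepsilon, a, A$, and only the first is trivial.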
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk on really cheap paper (almost transparent and very thin), and an offset machine is really cheaper per page than a printer, but you should be printing in bulk; it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good master's program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is not a polynomial unless $C = 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
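To sanity-check that kernel claim numerically, here is a small sketch that applies $F$ to coefficient lists $[a_0, a_1, a_2, a_3]$; all the helper names below are mine, purely illustrative:

```python
def deriv(c):
    # derivative of a polynomial given as coefficients [a0, a1, ...]
    return [i * c[i] for i in range(1, len(c))] or [0]

def mul_x(c):
    # multiply a polynomial by x
    return [0] + c

def padd(c, d):
    # add two coefficient lists of possibly different lengths
    n = max(len(c), len(d))
    return [(c[i] if i < len(c) else 0) + (d[i] if i < len(d) else 0) for i in range(n)]

def F(c):
    # F(P) = x*P'' + (x+1)*P'''
    p2 = deriv(deriv(c))
    p3 = deriv(p2)
    return padd(mul_x(p2), padd(mul_x(p3), p3))

print(F([5, 7, 0, 0]))  # all zeros: ax+b is in the kernel
print(F([0, 0, 1, 0]))  # nonzero: x^2 is not
```

$F$ vanishes exactly on the degree-at-most-1 coefficient lists, matching $\ker(F) = \{ax+b\}$.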
|
Suppose infinite 3D Euclidean space is uniformly filled with dense matter except a spherical cavity. Will a particle in this cavity experience gravitational acceleration towards the nearest wall?
Either there is no force, or the situation is ill-defined.
The case for "no force" being the only consistent answer seems evident: we may divide the infinite matter into infinitely many spherical shells of finite thickness, and for each of those shells we know that there is no force exerted on anything within.
However, we may also imagine dividing the matter up in some other asymmetrical way, and it is not evident that the limit of all such divisions should agree with each other. For any finite distribution of mass, this would be the case because $\int_{\mathbb{R}^3} \rho(x) \mathrm{d}V = \sum_i \int_{V_i}\rho(x)\mathrm{d}x$ for any collection of disjoint sets $V_i$ with $\bigcup_i V_i = \mathbb{R}^3$, but in this case the integral $\int_{\mathbb{R}^3}\rho(x)\mathrm{d}V$ doesn't exist. That is, we might produce different answers depending on the limiting procedure used to model "infinite space filled with matter".
Well, after reading other answers, I say different. Point me out where I am wrong and I would be glad to understand my mistake.
Suppose the entire region was filled with matter. Then, by symmetry, there would be no force on any particle. Now we remove a sphere which contains the particle that we're observing. Isolating the particle and the sphere, we know that the force due to the sphere is the same as the force on a particle inside a sphere with uniformly distributed mass.
Thus, by removing the sphere, we tend to create some sort of 'negative' mass, which causes a force on the particle in the opposite direction.
Also note that there is symmetry only when the particle we consider is at the center of the sphere, wherein my answer also stands in agreement with the no force result.
Hope this helped!
I think that the result is indeterminate. Let me explain why.
At first, one could think that the answer is that there is no force inside the cavity because of Newton's shell theorem, which is just an application of Gauss's law plus the divergence theorem. The usual proof is this:
We start from the differential form of Gauss's law (proof):
$$\tag{1} \nabla \cdot \vec g (\vec r) = - 4 \pi G \rho (\vec r)$$
Then we use the divergence theorem:
$$\tag{2} \int_V \nabla \cdot \vec g \ d \vec r = \int_S \vec g \cdot d\vec S$$
where $S$ is an arbitrary closed surface and $V$ the volume enclosed by it. For simplicity, we will take a spherical surface centered at the center of our cavity.
The left side is easy to evaluate using $(1)$:
$$\int_V \nabla \cdot \vec g \ d \vec r =- 4 \pi G \int _V \rho(\vec r) \ d \vec r = -4 \pi G M$$
where $M$ is the mass contained in $V$. We therefore obtain from $(2)$:
$$\int_S \vec g \cdot d\vec S = - 4 \pi G M$$
If the chosen surface contains no mass, then
$$\tag{3} \int_S \vec g \cdot d\vec S = 0$$
Now the crucial passage: usually, we say that since we are considering a uniform spherical shell, then
by symmetry the force must be directed radially outwards and must be constant over $S$, so that we have
$$\int_S \vec g \cdot d\vec S = \int_S | \vec g | \ \hat r \cdot d \vec S = | \vec g | \int_S dS = |\vec g | 4 \pi R^2 = 0$$
where $R$ is the radius of the spherical surface and $|\vec g |$ is the magnitude of $\vec g$ evaluated over the surface. We therefore conclude that
$$|\vec g|=0$$
for every spherical surface inside the shell, and therefore
$$\vec g = \vec 0$$
inside the shell.
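The thin-shell step above can also be checked numerically. The sketch below (my own construction, not part of the original argument) evaluates the field of a thin uniform shell of radius $R$ on its axis at distance $d$ from the centre, in units of $2\pi G\sigma R^2$, using Simpson's rule in $u=\cos\theta$; the integral vanishes inside the shell and equals $2/d^2$ (that is, $GM/d^2$) outside:

```python
import math

def axial_field(d, R=1.0, n=20001):
    # axial field at distance d from the centre of a thin shell of radius R,
    # up to the constant factor 2*pi*G*sigma*R^2; Simpson's rule in u = cos(theta)
    def f(u):
        return (d - R * u) / (R * R + d * d - 2.0 * R * d * u) ** 1.5
    h = 2.0 / (n - 1)
    s = f(-1.0) + f(1.0)
    for i in range(1, n - 1):
        s += (4 if i % 2 else 2) * f(-1.0 + i * h)
    return s * h / 3.0

print(abs(axial_field(0.4)) < 1e-6)        # True: zero field inside the shell
print(abs(axial_field(2.0) - 0.5) < 1e-6)  # True: 2/d^2 outside, i.e. GM/d^2
```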
So what is the problem with an "infinitely thick" shell, like in the problem you propose? The problem is in the passage where we say that
by symmetry the force must be radial and constant over the surface. This is certainly true for every shell of uniform density and finite thickness, but if the thickness is infinite the whole concept of spherical symmetry is in my opinion undefined and we cannot proceed from $(3)$. Therefore, I think that the problem is indeterminate.
To put it simply, think of the problem this way: imagine, instead of a uniform shell, a uniform spherical distribution of separate point masses (picture below). Now, place your test mass closer to the "left side" of this distribution. For such a finite set of point masses, you can see intuitively why the shell theorem works: on the "left side" the test mass interacts strongly with a small number of masses, while on the "right side" it interacts with many more masses, but more weakly (of course, the definition of "left side" and "right side" is completely arbitrary). But if you have an infinitely thick set of points both on the "right side" and on the "left side", then what is going to happen...?
|
Recent Posts Recent Comments Archives Categories Meta Author Archives: jal6318
Climate change has been a recognized problem for years now. Although we all want to help, not every citizen has taken action. Climate change affects weather, our lives, food, water and, most of all, the ice caps. We know the … Continue reading
Ever since taking Math for Sustainability I have become more aware of my surroundings, specifically food waste. I never realized how tossing away our food could have such an impact on our society and our landfills. Whenever recycling is being … Continue reading
As we know, global warming is an undeniable subject across the globe, although many companies and nonprofit organizations are making leaps in slowing down, preventing and even reversing the mass destruction taking place. There is no denying that global warming … Continue reading
Jordan Lane 9/18/17 Climate change is increasing as we disregard the real-world impact of plastic bottles on our atmosphere. We have to change our habits in order to save our planet, our lives and our future. I’m going to be … Continue reading
As climate change, the biggest issue of recent years, continues to stun us with its atrocious effects, a relatively new one has arisen. A swarm of jellyfish has taken ownership of the coastal water in Madonna di Roca Vecchia, … Continue reading
\(x^3-3x^2-10x=0\) \((1+r)^n\) \((5.7 \times 10^{-8}) \times (1.6\times 10^{12})=9.12 \times 10^{4}\) \(\pi L(1- \alpha)R^{2}=4\pi\sigma T^{4}R^{2}\) \(12\text{ km} \times \frac{0.6 \text{ miles}}{1 \text{ km}} \approx 7.2 \text{ miles}\) \(4,173,445,346.50 \approx 4,200,000,000 = 4.2 \times 10^{9}\) \(50 \text{ m} \times \frac … Continue reading
|
In this module:
Example (ABC Farming Co): net present value (NPV) in table format Excel pointers and the fill handle Net present value
An important concept in finance is the application of the
Net Present Value method to project evaluation. Example: ABC Farming Co is considering an investment of $80,000 in a new carrot washing machine, that is expected to generate the following incremental cash inflows: $10,000 (year 1), $20,000 (year 2), $15,000 (year 3), $50,000 (year 4), and $20,000 (year 5). Assume that all cash inflows occur at the end of the year. The discount rate is 5.0% pa. Required: Calculate the net present value of the project, and present your answer in table form based on the NPV equation – $$\begin{equation} NPV=C_0 + \sum_{t=1}^{n} \frac{C_t}{(1+k)^t} \end{equation}$$ where \(C_0\) is the initial cash flow at time zero. The future cash flows at time \(t\) are denoted \(C_t\) over \(n\) periods. \(k\) is the periodic discount rate.
Using equation 1, the solution is:
$$ NPV=-80,000 + \frac{10,000}{(1+0.05)^{1}} + \frac{20,000}{(1+0.05)^{2}}+ \frac{15,000}{(1+0.05)^{3}}+ \frac{50,000}{(1+0.05)^{4}}+ \frac{20,000}{(1+0.05)^{5}} = \$17,428 $$
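As a cross-check of equation 1 outside the spreadsheet, here is a minimal Python sketch (the function and variable names are mine):

```python
def npv(rate, cashflows):
    # cashflows[0] is C_0 at time zero; later entries are end-of-year flows C_t
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

flows = [-80_000, 10_000, 20_000, 15_000, 50_000, 20_000]
print(round(npv(0.05, flows), 2))  # 17427.61
```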
ABC Farming Co
The task is to replicate the NPV calculation in table format as illustrated in figure 1.
The following points are a summary of the session demonstration. All cell references relate to figure 1.
Data entry, and the leading equals sign in an equation. Row 36 applies index numbers to the Period, Cash flow, and Discount factor columns. Cell W36 contains a text string indicating that the column W value is the product of index 2 and index 3.
The entry in cell W36 commences with an equals (=) sign. Excel will interpret this as an equation and return an error or an information message, and then attempt to return the value 6. (In an equation, the multiplication symbol is a * not an x, and white space is not allowed.) To solve this problem, either change the cell format to Text BEFORE the text is entered, or leave the format as General and include a leading apostrophe in the string: '= 2 x 3. This is the usual method to control the operation of (=). The apostrophe appears in the formula bar, but not in the cell.
Manually enter the individual cash flows in column index 2.
Period – fill down. In the range T38:T39 enter 0 and 1. Select these two cells, then use the fill handle (click and drag down) to complete the sequence. The fill handle is discussed later in this module.
Discount factor – select cell V38 and enter the formula =(1+0.05)^-T38. The idea is to use the corresponding Period value, 0 in this case, as the exponent in the formula: \((1+0.05)^{0}=1\)
Period discounted cash flow – complete the formula in cell W38: =U38*V38
Fill down index 3 and 4 – select V38:W38, then double-click the fill handle to complete the two column sequences
Sum the cash flows and discounted cash flows – use Home > Editing > AutoSum, as this will automatically select the range for the SUM function.
The concatenated text and value cell – in cell T46 enter the formula ="Net cash flows ="&TEXT(U44,"$#,#"). Without the TEXT function, the formula in cell T47 would return
Net present value at 5% = 17427.6101399059
Format of table Color format– use Home > Font > Fill Color Border format– use Home > Font > Borders
The completed table is shown in the Excel Web App #1 in figure 2.
Excel pointers and the fill handle
Excel uses a number of specialised cursors or pointers for the mouse or other pointing device. In normal operation, the pointer is an open cross as shown in figure 3. Left click a cell, or click and drag when making a multi-cell selection.
To move the selection a short distance on the current worksheet, place the pointer on the border of the cell or selection. The open cross will change to a fine cross with arrow tips and an arrow pointer – see figure 4. Left click on the border of the selection and drag to the new location.
The fill handle is used to copy or extend the selection to adjacent cells. It is the small square in the bottom right-hand corner of the selection, shown in the circle in figure 5.
When the pointer is placed over the fill handle, the pointer changes to a black closed cross – see figure 6. Either left click and drag to manually extend the selection, or double click to fill down for the length of the adjacent column vector.
Move selection mode (figure 4) and fill handle mode (figure 6) are collectively described as
drag-and-drop in Excel’s editing options. Drag-and-drop inoperative?
If the fill handle and cell drag-and-drop are not working, click on File > Options > Advanced and in the Editing Options group tick the Enable fill handle and cell drag-and-drop item.
Related material: Fill handle mode examples This example was developed in Excel 2013 Pro 64 bit. Last modified: 13 Mar 2017, 7:52 pm [Australian Eastern Time (AET)] Thanks to … … for identifying an error in the NPV expanded numerical example of equation 1
|
8.1.1.1.4 - Example: Seatbelt Usage
In the year 2001 Youth Risk Behavior survey done by the U.S. Centers for Disease Control, 747 out of 1168 female 12th graders said they always use a seatbelt when driving. Let’s construct a 95% confidence interval for the proportion of 12th grade females in the population who always use a seatbelt when driving.
\(\widehat{p}=\frac{747}{1168}=0.640\)
First we need to check our assumptions that both \(n\hat{p} \geq 10\) and \(n(1-\hat{p}) \geq 10\)
\(n\hat{p}=1168 \times 0.640 \approx 747\) and \(n(1-\hat{p})=1168 \times (1-0.640) \approx 421\). Both are greater than 10 so this assumption has been met and we can use the standard normal approximation with this data.
Now we can compute the standard error.
\(SE=\sqrt{\frac{\hat{p} (1-\hat{p})}{n}}=\sqrt{\frac{0.640 (1-0.640)}{1168}}=0.014\)
The \(z^*\) multiplier for a 95% confidence interval is 1.960
Our 95% confidence interval for \(p\) is \(0.640\pm 1.960(0.014)=0.640\pm0.028=[0.612, \;0.668]\)
We are 95% confident that between 61.2% and 66.8% of all 12th grade females say that they always use a seatbelt when driving.
Let’s think about how our interval will change. The 99% confidence interval will be wider than the 95% confidence interval. In order to increase our level of confidence, we will need to expand the interval.
In terms of computing the 99% confidence interval, we will use the same point estimate \(\widehat{p}\) and the same standard error. The multiplier will change though. From the plot below, we see that the \(z^*\) multiplier for a 99% confidence interval is 2.576. The standard error is still 0.014; it has not changed because neither \(n\) nor \(\hat{p}\) has changed.
\(99\%\;C.I.:\;0.640\pm 2.576 (0.014)=0.640\pm 0.036=[0.604, \; 0.676]\)
We are 99% confident that between 60.4% and 67.6% of all 12th grade females say that they always use a seatbelt when driving.
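Both intervals can be reproduced in a few lines; the sketch below (helper name mine) carries full precision in \(\hat{p}\) and the SE, so the endpoints differ from the rounded hand calculation only in the third decimal:

```python
import math

def prop_ci(successes, n, z):
    # Wald interval: p_hat +/- z * sqrt(p_hat * (1 - p_hat) / n)
    p_hat = successes / n
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

print(prop_ci(747, 1168, 1.960))  # about (0.612, 0.667) for 95%
print(prop_ci(747, 1168, 2.576))  # about (0.603, 0.676) for 99%
```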
|
How do you measure mass? Weight is easy using a scale, but we can't measure mass that way, because then mass would be different on every planet. I know there was a Veritasium video (here) on defining what, exactly, one kilogram was, but they can only define that if they know some previous measurement (e.g., one cube of metal is 2 kg)!
You measure mass by observing its acceleration in response to a force (i.e. by applying Newton's second law).
Now, because it is impractical to accurately measure straight-line accelerations over a wide range, we actually use periodic motions and measure frequency.
Mass-on-a-spring harmonic oscillator: $\omega = \sqrt{\frac{k}{m}}$ with known spring constant.
Measure the centripetal force on a centrifuge: $F = m \frac{v^2}{r} = m \omega^2 r$ is the naive approach, but on the surface of the planet you have to be a little more clever (adding the centripetal force to the existing weight). Here you would put a scale between the test mass and the centrifuge to get $F$.
An alternative is to measure both the weight and the local value of $g$, which can be done with a small-angle pendulum ($\omega = \sqrt{g/\ell}$).
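As a toy illustration of backing the unknown quantity out of a measured period, here is a short sketch with hypothetical numbers (function names are mine):

```python
import math

def mass_from_spring(k, period):
    # omega = sqrt(k/m)  =>  m = k / omega^2, with omega = 2*pi/T
    omega = 2 * math.pi / period
    return k / omega ** 2

def g_from_pendulum(length, period):
    # omega = sqrt(g/L)  =>  g = L * omega^2
    return length * (2 * math.pi / period) ** 2

# round trip: a 2 kg mass on a k = 8 N/m spring has T = 2*pi*sqrt(m/k) = pi s
print(mass_from_spring(8.0, math.pi))  # 2.0
```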
I would maintain that we most often explicitly measure mass by comparing the pull of the local (unknown) gravity on the mass to be measured to the pull of the same gravity on a known, reference mass or masses.
If you look at the measuring devices in stores, you sometimes see the slogan, "Honest weight; no springs!" They are proudly claiming that they are not using a direct, force measuring device, but rather a force comparing system.
A traditional chemist's scale uses an equal-arm balance and a set of various masses to achieve balance between the object on one side and the collection of calibrated masses on the other side. A suspicious individual can reverse the positions of object and masses to check the geometry (Is it really equal arm?). This device will give exactly the same result anywhere on earth, on the moon, or on an accelerating elevator. Keeping the reference masses uncontaminated is a requirement...
The mass measuring device you encounter in a clinical setting: differs slightly in one detail. It relies on a small number of reference masses, but uses the geometry of the bars on which the masses slide (and the notches on the bars) to create the different forces that balance the force of gravity on the subject. Again, contamination of the masses would be a problem, as would be wear and tear on the notches on the bars...
In the absence of gravity, this method can be used to compare a known mass (the saddle alone) to an unknown mass (saddle plus astronaut). This compensates for the possible error caused by changes in the spring...
We almost always determine mass by measuring weight. Weight is the force exerted on an object by a gravitational field, and is proportional to the mass. On the Earth's surface, W = m*g. We can convert weight to mass if our measuring scale is calibrated, usually with an object of known mass. This would work for any planet.
Even though the weight would change from planet to planet, the mass would not. But we would need to bring along a known mass to calibrate our scale on the new planet. At the moment, the kilogram, the unit of mass, is defined in terms of an object stored in a vault in France. Every mass-measuring instrument has been at least indirectly compared to this standard kilogram.
The Veritasium video talks about a new method proposed to define the kilogram, which depends on counting the number of atoms in an object. The mass of one atom is known, so simple multiplication will give you the mass of an object. This will really only give you a new way to define a standard kilogram (if, for instance, the present kilogram was destroyed).
|
I was reading up on various graph algorithms (Dijkstra's algorithm and some variants) and found the runtime $O(m + n \log n)$, where $m$ is the number of edges in the graph and $n$ is the number of nodes. Intuitively, this makes sense, but I recently realized that I don't know, formally, what this statement means.
The definition of big-O notation that I am familiar with concerns single-variable functions; that is, $f(n) = O(g(n))$ if $\exists n_0, c$ such that $\forall n > n_0. |f(n)| \le c|g(n)|$. However, this definition doesn't make sense for something like $O(m + n \log n)$, since there are two free parameters here - $m$ and $n$. Although in the context of graphs there are well-specified relations between $m$ and $n$, in some other algorithms (for example, string matching) the runtime might be described as $O(f(m, n))$ where $m$ and $n$ are completely independent of one another.
My question is this:
what is the formal definition of the statement $f(m, n) = O(g(m, n))$? Is it a straightforward generalization of the definition for one variable where we give lower bounds on both $m$ and $n$ that must be simultaneously satisfied, or is there some other definition defined in terms of limits?
Thanks!
|
I'm trying to find the probability distribution of a sum of a random number of variables which aren't identically distributed. Here's an example:
John works at a customer service call center. He receives calls with problems and tries to solve them. The ones he can't solve, he forwards to his superior. Let's assume that the number of calls he gets in a day follows a Poisson distribution with mean $\mu$. The difficulty of each problem varies from pretty simple stuff (which he can definitely deal with) to very specialized questions which he won't know how to solve. Assume that the probability $p_i$ that he'll be able to solve the $i$-th problem follows a Beta distribution with parameters $\alpha$ and $\beta$ and is independent of the previous problems. What is the distribution of the number of calls he solves in a day?
More formally, I have:
$Y = I(N > 0)\sum_{i = 0}^{N} X_i$ for $i = 0, 1, 2, ..., N$
where $N \sim \mathrm{Poisson}(\mu)$ , $(X_i | p_i) \sim \mathrm{Bernoulli}(p_i)$ and $p_i \sim \mathrm{Beta}(\alpha, \beta)$
Note that, for now, I'm happy to assume that the $X_i$'s are independent. I'd also accept that the parameters $\mu, \alpha$ and $\beta$ do not affect each other although in a real-life example of this when $\mu$ is large, the parameters $\alpha$ and $\beta$ are such so that the Beta distribution has more mass on low success rates $p$. But let's ignore that for now.
I can calculate $P(Y = 0)$ but that's about it. I can also simulate values to get an idea of what the distribution of $Y$ looks like (it looks like Poisson but I don't know if that's down to the numbers of $\mu, \alpha$ and $\beta$ I tried or whether it generalizes, and how it might change for different parameter values). Any idea of what this distribution is or how I could go about deriving it?
Please note that I have also posted this question on TalkStats Forum but I thought that it might get more attention here. Apologies for cross-posting and many thanks in advance for your time.
EDIT: As it turns out (see the very helpful answers below - and thanks for those!), it is indeed a $\mathrm{Poisson}(\frac{\mu\alpha}{\alpha + \beta})$ distribution, something which I was guessing based on my intuition and some simulations, but was not able to prove. What I now find surprising though, is that the Poisson distribution only depends on the mean of the $\mathrm{Beta}$ distribution but is not affected by its variance.
As an example, the following two Beta distributions have the same mean but different variance. For clarity, the blue pdf represents a $\mathrm{Beta}(2, 2)$ and the red one $\mathrm{Beta}(0.75, 0.75)$.
However, they would both result in the same $\mathrm{Poisson}(0.5\mu)$ distribution which, to me, seems slightly counter-intuitive. (Not saying that the result is wrong, just surprising!)
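The claimed result can be illustrated by simulation. The sketch below is standard-library only and uses Knuth's Poisson sampler, which is adequate for small $\mu$; with $\mu=4$, $\alpha=\beta=2$ the sample mean of $Y$ sits near $\mu\alpha/(\alpha+\beta)=2$ and the variance matches the mean, as a Poisson should:

```python
import math, random

def poisson(mu, rng):
    # Knuth's method, adequate for small mu
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def solved_in_a_day(mu, a, b, rng):
    n = poisson(mu, rng)
    # each of the n calls is solved with probability p_i ~ Beta(a, b), independently
    return sum(rng.random() < rng.betavariate(a, b) for _ in range(n))

rng = random.Random(1)
trials = 50_000
ys = [solved_in_a_day(4.0, 2.0, 2.0, rng) for _ in range(trials)]
mean = sum(ys) / trials                          # near mu*a/(a+b) = 2
var = sum((y - mean) ** 2 for y in ys) / trials  # Poisson: variance matches mean
```

Rerunning with $\mathrm{Beta}(0.75, 0.75)$ in place of $\mathrm{Beta}(2,2)$ leaves the sample mean and variance essentially unchanged, illustrating that only the Beta mean enters.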
|
Let $y$ be the proportion in $[0,1]$ instead of the percentage. I think the issues here are possible nonlinearity and censoring. You can try including quadratic terms on the right hand side. At the same time, you can try the following models.
I. Linear models: If no $y$ is exactly 0 or 1, linear models will be OK. The following four options come to my mind.
$y=\beta_0 + \beta_1 x + u$ if no $y$ is close to 0 or 1.
A log model $\ln y = \beta_0 + \beta_1 x + u$, which is helpful when some $y$ values are close to zero but all are far from one. Note that interpretation changes.
Transform the dependent variable to $-\ln (1-y)$. This is helpful if $y$ are all far from zero but some close to one. But this looks a bit unnatural to me, and I would consider the next logistic model instead.
The logistic model $\ln \frac{y}{1-y} = \beta_0 + \beta_1 x +u$. This usually helps if there are many $y\simeq 0$ and $y\simeq 1$. Interpretation is done in terms of logits.
II. Tobit models: If some $y$ are exactly 0 or 1, you can try Tobit models (help tobit in Stata). Remember that normality is assumed for the error term before censoring. Also, using Tobit models means that "I think $y$ could be bigger than 1 (smaller than 0) if not censored."
|
Given a list of floating point numbers, standardize it.

Details: A list \$x_1,x_2,\ldots,x_n\$ is standardized if the mean of all values is 0 and the standard deviation is 1. One way to compute this is by first computing the mean \$\mu\$ and the standard deviation \$\sigma\$ as $$ \mu = \frac1n\sum_{i=1}^n x_i \qquad \sigma = \sqrt{\frac{1}{n}\sum_{i=1}^n (x_i -\mu)^2} ,$$ and then computing the standardization by replacing every \$x_i\$ with \$\frac{x_i-\mu}{\sigma}\$. You can assume that the input contains at least two distinct entries (which implies \$\sigma \neq 0\$). Note that some implementations use the sample standard deviation, which is not equal to the population standard deviation \$\sigma\$ we are using here. There is a CW answer for all trivial solutions.

Examples:
[1,2,3] -> [-1.224744871391589,0.0,1.224744871391589]
[1,2] -> [-1,1]
[-3,1,4,1,5] -> [-1.6428571428571428,-0.21428571428571433,0.8571428571428572,-0.21428571428571433,1.2142857142857144]
(These examples have been generated with this script.)
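For reference, a straightforward (non-golfed) implementation using the population standard deviation as specified:

```python
import math

def standardize(xs):
    n = len(xs)
    mu = sum(xs) / n
    sigma = math.sqrt(sum((x - mu) ** 2 for x in xs) / n)  # population sd
    return [(x - mu) / sigma for x in xs]

print(standardize([1, 2]))  # [-1.0, 1.0]
```

The first and third examples above are reproduced to floating-point precision.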
|
Calculus (http://mymathforum.com/calculus/)
Omnipotent September 17th, 2017 07:33 AM
Inverse of KT Probability Weighting Function http://oi67.tinypic.com/28uo45i.jpg
I'm trying to find the inverse function of above. I can't seem to get it right, and websites such as wolfram and symbolab aren't working for me as well.
Would be great to get some pointers. Many thanks! (Relates to my thesis)
Country Boy September 17th, 2017 11:01 AM
That would involve solving a 'polynomial' of degree $\gamma$. (I put "polynomial" in quotes because $\gamma$ is not necessarily an integer.) There is no general formula for that.
romsek September 17th, 2017 12:15 PM
$\omega = \left(\dfrac{p^\gamma}{p^\gamma+(1-p)^\gamma}\right)^\frac 1 \gamma$
$\omega^\gamma = \dfrac{p^\gamma}{p^\gamma+(1-p)^\gamma}$
$\omega^\gamma = \dfrac{1}{1+\left(\frac{1-p}{p}\right)^\gamma}$
$\omega^{-\gamma} = 1+\left(\frac{1-p}{p}\right)^\gamma$
$\omega^{-\gamma}-1 = \left(\frac{1-p}{p}\right)^\gamma$
$\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} = \dfrac{1-p}{p} = \dfrac 1 p - 1$
$\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} +1 = \dfrac 1 p$
$p = \dfrac{1}{\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} +1 }$
Omnipotent September 18th, 2017 01:52 PM
Quote:
Originally Posted by
romsek
(Post 580552)
$\omega = \left(\dfrac{p^\gamma}{p^\gamma+(1-p)^\gamma}\right)^\frac 1 \gamma$
$\omega^\gamma = \dfrac{p^\gamma}{p^\gamma+(1-p)^\gamma}$
$\omega^\gamma = \dfrac{1}{1+\left(\frac{1-p}{p}\right)^\gamma}$
$\omega^{-\gamma} = 1+\left(\frac{1-p}{p}\right)^\gamma$
$\omega^{-\gamma}-1 = \left(\frac{1-p}{p}\right)^\gamma$
$\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} = \dfrac{1-p}{p} = \dfrac 1 p - 1$
$\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} +1 = \dfrac 1 p$
$p = \dfrac{1}{\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} +1 }$
There is a mistake at the beginning: the $1/\gamma$ exponent is supposed to apply to the denominator only.
But, thanks nonetheless, I think I have an idea now
Omnipotent September 18th, 2017 04:24 PM
Quote:
Originally Posted by
Country Boy
(Post 580547)
That would involve solving a 'polynomial' of degree $\gamma$. (I put "polynomial" in quotes because $\gamma$ is not necessarily an integer.) There is no general formula for that.
How would I go about finding the inverse if $\gamma$ were 0.65?
Omnipotent September 18th, 2017 04:59 PM
Quote:
Originally Posted by
romsek
(Post 580552)
$\omega = \left(\dfrac{p^\gamma}{p^\gamma+(1-p)^\gamma}\right)^\frac 1 \gamma$
$\omega^\gamma = \dfrac{p^\gamma}{p^\gamma+(1-p)^\gamma}$
$\omega^\gamma = \dfrac{1}{1+\left(\frac{1-p}{p}\right)^\gamma}$
$\omega^{-\gamma} = 1+\left(\frac{1-p}{p}\right)^\gamma$
$\omega^{-\gamma}-1 = \left(\frac{1-p}{p}\right)^\gamma$
$\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} = \dfrac{1-p}{p} = \dfrac 1 p - 1$
$\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} +1 = \dfrac 1 p$
$p = \dfrac{1}{\left(\omega^{-\gamma}-1 \right)^{\frac 1 \gamma} +1 }$
Nevermind, I seem to have confused myself again. Your help would be greatly appreciated!
romsek September 18th, 2017 05:35 PM
Quote:
Originally Posted by
Omnipotent
(Post 580669)
Nevermind, I seem to have confused myself again. Your help would be greatly appreciated!
It appears the correct expression doesn't admit a closed form solution for $p$ in terms of $\omega$
You'll have to use numeric methods.
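For the corrected form $w(p)=p^\gamma/\left(p^\gamma+(1-p)^\gamma\right)^{1/\gamma}$ (the $1/\gamma$ on the denominator only), a plain bisection inverts it to machine precision, since $w$ is strictly increasing on $[0,1]$ for $\gamma=0.65$. A sketch (function names are mine):

```python
def w(p, g=0.65):
    # Tversky-Kahneman probability weighting, 1/gamma on the denominator only
    return p ** g / (p ** g + (1 - p) ** g) ** (1.0 / g)

def w_inverse(omega, g=0.65, tol=1e-12):
    # w is strictly increasing with w(0)=0 and w(1)=1, so bisect on [0, 1]
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if w(mid, g) < omega:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(w_inverse(w(0.3)), 6))  # 0.3
```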
romsek September 18th, 2017 09:13 PM
Toying with this a bit in Mathematica produced
$p \approx 0.477328 -0.502426 \cos (3.33061 \omega+0.141105), ~\omega \in [0,1]$ as a pretty good fit to an inverse of the curve with $\gamma = 0.65$
Omnipotent September 19th, 2017 08:06 AM
Quote:
Originally Posted by
romsek
(Post 580697)
Toying with this a bit in Mathematica produced
$p \approx 0.477328 -0.502426 \cos (3.33061 \omega+0.141105), ~\omega \in [0,1]$
as a pretty good fit to an inverse of the curve with $\gamma = 0.65$
Thank you very much for your help!
I tried graphing that alongside the original equation, but the results were a bit weird.
I'm trying to go for something like this: http://oi63.tinypic.com/2zgsydj.jpg
Where the green line is the inverse, and the red line is the original. Is this worth pursuing?
romsek September 19th, 2017 08:19 AM
1 Attachment(s)
Quote:
Originally Posted by
romsek
(Post 580697)
Toying with this a bit in Mathematica produced
$p \approx 0.477328 -0.502426 \cos (3.33061 \omega+0.141105), ~\omega \in [0,1]$
as a pretty good fit to an inverse of the curve with $\gamma = 0.65$
looks pretty good to me http://mymathforum.com/attachment.ph...1&d=1505837930
Copyright © 2019 My Math Forum. All rights reserved.
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ from this.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
Now consider the case $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
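To make that loop concrete, here is a minimal Python sketch of the rewriting procedure on a made-up rule set (the rules and words below are hypothetical toys, not a genuine Dehn presentation of a hyperbolic group):

```python
# Dehn-style rewriting: repeatedly replace a long subword u_i by the shorter
# v_i, freely reducing as we go; the empty word means the input was trivial.

def free_reduce(word):
    """Cancel adjacent inverse pairs like 'aA' or 'Aa' (capital = inverse)."""
    out = []
    for c in word:
        if out and out[-1] != c and out[-1].lower() == c.lower():
            out.pop()
        else:
            out.append(c)
    return "".join(out)

def dehn_reduce(word, rules):
    """Apply the replacement rules until no u_i occurs as a subword."""
    word = free_reduce(word)
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = word.find(u)
            if i != -1:
                word = free_reduce(word[:i] + v + word[i + len(u):])
                changed = True
                break
    return word

# Toy rules: pretend 'aba' may be replaced by 'c', and 'cc' by the empty word.
rules = [("aba", "c"), ("cc", "")]
result = dehn_reduce("abaaba", rules)
assert result == ""   # reduces to the empty word, i.e. the identity
```

For a genuine Dehn presentation this loop terminates because each replacement strictly shortens the word, so the word problem is solvable in linear time.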
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by the straight-line homotopy, with straight lines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk; it's all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a nonzero polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
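Since $\mathbb{R}_3[x]$ is finite dimensional, the claim is also easy to check numerically; a small Python sketch (representing a polynomial by its coefficient list, my own convention, not part of the question):

```python
# Represent P in R_3[x] by [c0, c1, c2, c3], meaning c0 + c1 x + c2 x^2 + c3 x^3,
# and evaluate F(P) = x P'' + (x+1) P''' on coefficient lists.

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def shift(p):                    # multiply by x
    return [0] + p

def add(p, q):
    n = max(len(p), len(q))
    p, q = p + [0] * (n - len(p)), q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def F(p):
    p2, p3 = deriv(deriv(p)), deriv(deriv(deriv(p)))
    # x P'' + (x+1) P''' = x P'' + x P''' + P'''
    return add(add(shift(p2), shift(p3)), p3)

def is_zero(p):
    return all(c == 0 for c in p)

assert is_zero(F([5, -2, 0, 0]))      # P = 5 - 2x lies in the kernel
assert not is_zero(F([0, 0, 1, 0]))   # P = x^2 does not (F(P) = 2x)
assert not is_zero(F([0, 0, 0, 1]))   # P = x^3 does not (F(P) = 6x^2 + 6x + 6)
```

So the kernel is exactly the polynomials of degree at most 1, as claimed.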
|
Probability Distributions
Summary
The DynaML dynaml.probability.distributions package leverages and extends the breeze.stats.distributions package. Below is a list of distributions implemented.
Specifying Distributions¶
Every probability density function \rho(x) defined over some domain x \in \mathcal{X} can be represented as \rho(x) = \frac{1}{Z} f(x), where f(x) is the un-normalized probability weight and Z is the normalization constant. The normalization constant ensures that the density integrates (or, in the discrete case, sums) to 1 over the whole domain \mathcal{X}.
Describing Skewness¶
An important analytical way to create skewed distributions was described by Azzalini et al. It consists of four components.
A symmetric probability density \varphi(.)
An odd function w(.)
A cumulative distribution function G(.) of some symmetric density
A cut-off parameter \tau

Distributions API¶
The Density[T] and Rand[T] traits form the API entry points for implementing probability distributions in breeze. In the dynaml.probability.distributions package, these two traits are inherited by GenericDistribution[T], which is extended by the AbstractContinuousDistr[T] and AbstractDiscreteDistr[T] classes.
Distributions which can produce confidence intervals
The trait HasErrorBars[T] can be used as a mix-in to give a distribution the ability to produce error bars. To extend it, one has to implement the confidenceInterval(s: Double): (T, T) method.
Skewness
The SkewSymmDistribution[T] class is the generic base implementation for the skew symmetric family of distributions in DynaML.
Distributions Library¶
Apart from the distributions defined in breeze.stats.distributions, users have access to the following distributions implemented in dynaml.probability.distributions.
Multivariate Students T¶
Defines a Students' T distribution over the domain of finite dimensional vectors.
\mathcal{X} \equiv \mathbb{R}^{n}
f(x) = \left[1+{\frac {1}{\nu }}({\mathbf {x} }-{\boldsymbol {\mu }})^{\rm {T}}{\boldsymbol {\Sigma }}^{-1}({\mathbf {x} }-{\boldsymbol {\mu }})\right]^{-(\nu +p)/2}
Z = \frac{\Gamma \left[(\nu +p)/2\right]}{\Gamma (\nu /2)\nu ^{p/2}\pi ^{p/2}\left|{\boldsymbol {\Sigma }}\right|^{1/2}}
Usage:
val mu = 2.5
val mean = DenseVector(1.0, 0.0)
val cov = DenseMatrix((1.5, 0.5), (0.5, 2.5))
val d = MultivariateStudentsT(mu, mean, cov)

Matrix T¶
Defines a Students' T distribution over the domain of matrices.
\mathcal{X} \equiv \mathbb{R}^{n \times p}
f(x) = \left|{\mathbf {I}}_{n}+{\boldsymbol \Sigma }^{{-1}}({\mathbf {X}}-{\mathbf {M}}){\boldsymbol \Omega }^{{-1}}({\mathbf {X}}-{\mathbf {M}})^{{{\rm {T}}}}\right|^{{-{\frac {\nu +n+p-1}{2}}}}
Z = {\frac {\Gamma_{p}\left({\frac {\nu +n+p-1}{2}}\right)}{(\pi )^{{\frac {np}{2}}}\Gamma _{p}\left({\frac {\nu +p-1}{2}}\right)}}|{\boldsymbol \Omega }|^{{-{\frac {n}{2}}}}|{\boldsymbol \Sigma }|^{{-{\frac {p}{2}}}}
Usage:
val mu = 2.5
val mean = DenseMatrix((-1.5, -0.5), (3.5, -2.5))
val cov_rows = DenseMatrix((1.5, 0.5), (0.5, 2.5))
val cov_cols = DenseMatrix((0.5, 0.1), (0.1, 1.5))
val d = MatrixT(mu, mean, cov_rows, cov_cols)

Matrix Normal¶
Defines a Gaussian distribution over the domain of matrices.
\mathcal{X} \equiv \mathbb{R}^{n \times p}
f(x) = \exp\left( -\frac{1}{2} \, \mathrm{tr}\left[ \mathbf{V}^{-1} (\mathbf{X} - \mathbf{M})^{T} \mathbf{U}^{-1} (\mathbf{X} - \mathbf{M}) \right] \right)
Z = (2\pi)^{np/2} |\mathbf{V}|^{n/2} |\mathbf{U}|^{p/2}
Usage:
val mean = DenseMatrix((-1.5, -0.5), (3.5, -2.5))
val cov_rows = DenseMatrix((1.5, 0.5), (0.5, 2.5))
val cov_cols = DenseMatrix((0.5, 0.1), (0.1, 1.5))
val d = MatrixNormal(mean, cov_rows, cov_cols)

Truncated Normal¶
Defines a univariate Gaussian distribution restricted to a finite interval.
\mathcal{X} \equiv [a, b]
f(x) = \begin{cases} \phi ({\frac {x-\mu }{\sigma }}) & a \leq x \leq b\\0 & else\end{cases}
Z = \sigma \left(\Phi ({\frac {b-\mu }{\sigma }})-\Phi ({\frac {a-\mu }{\sigma }})\right)
\phi() and \Phi() being the gaussian density function and cumulative distribution function respectively
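As an aside, the same density can be checked numerically with SciPy's truncnorm (assuming SciPy is available; note the gotcha that its clip points must be given in standard units, a' = (a - \mu)/\sigma and b' = (b - \mu)/\sigma):

```python
import numpy as np
from scipy import stats

# Same parameter values as the DynaML usage example below.
mu, sigma, a, b = 1.5, 1.5, -0.5, 2.5
d = stats.truncnorm((a - mu) / sigma, (b - mu) / sigma, loc=mu, scale=sigma)

# Assemble f(x)/Z directly from the formulas above and compare.
x = np.array([0.0, 1.0, 2.0])
Z = sigma * (stats.norm.cdf((b - mu) / sigma) - stats.norm.cdf((a - mu) / sigma))
pdf_manual = stats.norm.pdf((x - mu) / sigma) / Z

assert np.allclose(pdf_manual, d.pdf(x))
samples = d.rvs(size=1000, random_state=0)
assert samples.min() >= a and samples.max() <= b   # support really is [a, b]
```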
Usage:
val mean = 1.5
val sigma = 1.5
val (a,b) = (-0.5, 2.5)
val d = TruncatedGaussian(mean, sigma, a, b)

Skew Gaussian¶
Univariate¶
\mathcal{X} \equiv \mathbb{R}
f(x) = \phi(\frac{x - \mu}{\sigma}) \Phi(\alpha (\frac{x-\mu}{\sigma}))
Z = \frac{1}{2}
\phi() and \Phi() being the standard gaussian density function and cumulative distribution function respectively
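The normalization Z = \frac{1}{2} means the standardized density is simply 2\phi(x)\Phi(\alpha x), which can be verified against SciPy's skewnorm (an aside assuming SciPy is available; its shape parameter is called a):

```python
import numpy as np
from scipy import stats

# With mu = 0 and sigma = 1 the skew-normal pdf is 2 * phi(x) * Phi(alpha * x).
alpha = -0.5
x = np.linspace(-3.0, 3.0, 7)
pdf_manual = 2.0 * stats.norm.pdf(x) * stats.norm.cdf(alpha * x)
assert np.allclose(pdf_manual, stats.skewnorm.pdf(x, alpha))
```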
Multivariate¶
\mathcal{X} \equiv \mathbb{R}^d
f(x) = \phi_{d}(\mathbf{x}; \mathbf{\mu}, {\Sigma}) \Phi(\mathbf{\alpha}^{\intercal} L^{-1}(\mathbf{x} - \mathbf{\mu}))
Z = \frac{1}{2}
\phi_{d}(.; \mathbf{\mu}, {\Sigma}) and \Phi() are the multivariate gaussian density function and standard gaussian univariate cumulative distribution function respectively and L is the lower triangular Cholesky decomposition of \Sigma.
Skewness parameter \alpha
The parameter \alpha determines the skewness of the distribution and its sign tells us in which direction the distribution has a fatter tail. In the univariate case the parameter \alpha is a scalar, while in the multivariate case \alpha \in \mathbb{R}^d, so for the multivariate skew gaussian distribution, there is a skewness value for each dimension.
Usage:
//Univariate
val mean = 1.5
val sigma = 1.5
val a = -0.5
val d = SkewGaussian(a, mean, sigma)

//Multivariate
val mu = DenseVector.ones[Double](4)
val alpha = DenseVector.fill[Double](4)(1.2)
val cov = DenseMatrix.eye[Double](4)*1.5
val md = MultivariateSkewNormal(alpha, mu, cov)

Extended Skew Gaussian¶
Univariate¶
The generalization of the univariate skew Gaussian distribution.
\mathcal{X} \equiv \mathbb{R}
f(x) = \phi(\frac{x - \mu}{\sigma}) \Phi(\alpha (\frac{x-\mu}{\sigma}) + \tau\sqrt{1 + \alpha^{2}})
Z = \Phi(\tau)
\phi() and \Phi() being the standard gaussian density function and cumulative distribution function respectively
Multivariate¶
\mathcal{X} \equiv \mathbb{R}^d
f(x) = \phi_{d}(\mathbf{x}; \mathbf{\mu}, {\Sigma}) \Phi(\mathbf{\alpha}^{\intercal} L^{-1}(\mathbf{x} - \mathbf{\mu}) + \tau\sqrt{1 + \mathbf{\alpha}^{\intercal}\mathbf{\alpha}})
Z = \Phi(\tau)
\phi_{d}(.; \mathbf{\mu}, {\Sigma}) and \Phi() are the multivariate gaussian density function and standard gaussian univariate cumulative distribution function respectively and L is the lower triangular Cholesky decomposition of \Sigma.
Usage:
//Univariate
val mean = 1.5
val sigma = 1.5
val a = -0.5
val c = 0.5
val d = ExtendedSkewGaussian(c, a, mean, sigma)

//Multivariate
val mu = DenseVector.ones[Double](4)
val alpha = DenseVector.fill[Double](4)(1.2)
val cov = DenseMatrix.eye[Double](4)*1.5
val tau = 0.2
val md = ExtendedMultivariateSkewNormal(tau, alpha, mu, cov)
Confusing Nomenclature
The following distribution has a name and form very similar to the extended skew gaussian distribution shown above, but despite its deceptively similar formula it is a very different object. We use the name MESN to denote the variant below instead of its expanded form.
MESN¶
The Multivariate Extended Skew Normal or MESN distribution was formulated by Adcock and Shutes. It is given by
\mathcal{X} \equiv \mathbb{R}^d
f(x) = \phi_{d}(\mathbf{x}; \mathbf{\mu} + \mathbf{\alpha}\tau, {\Sigma} + \mathbf{\alpha}\mathbf{\alpha}^\intercal) \Phi\left(\frac{\mathbf{\alpha}^{\intercal} \Sigma^{-1}(\mathbf{x} - \mathbf{\mu}) + \tau}{\sqrt{1 + \mathbf{\alpha}^{\intercal}\Sigma^{-1}\mathbf{\alpha}}}\right)
Z = \Phi(\tau)
\phi_{d}(.; \mathbf{\mu}, {\Sigma}) and \Phi() are the multivariate gaussian density function and standard gaussian univariate cumulative distribution function respectively.
Usage:
//Univariate
val mean = 1.5
val sigma = 1.5
val a = -0.5
val c = 0.5
val d = UESN(c, a, mean, sigma)

//Multivariate
val mu = DenseVector.ones[Double](4)
val alpha = DenseVector.fill[Double](4)(1.2)
val cov = DenseMatrix.eye[Double](4)*1.5
val tau = 0.2
val md = MESN(tau, alpha, mu, cov)

Extended Skew Gaussian Process ESGP
The MESN distribution is used to define the finite dimensional probabilities for the ESGP process.
|
Ok, Xenapior and Reynolds together have the right idea, but the explanation is a bit lacking, so here is an image to explain it all and some further musings. First let us start by drawing an image (yes, I know that is what they say in school for you to do, but nobody does it).
From the image we can see that there are 2 equal right triangles $V_2, A, C$ and $V_1, B, C$. In these triangles we have one value we can choose, namely the rounding radius $r$; we also know the right angle is 90°. The angle between the line $V_1-V_2 = \vec a$ and the line $V_2-V_3 = \vec b$ is easy to compute with the formula for the angle between vectors
$$\cos(\beta) = \frac{\vec a·\vec b}{ |\vec a|·|\vec b|}$$
That in turn can be simplified if the vectors are already normalized. Thus three things of the triangle are known, which means all is known. So if you know the rounding radius to use, you can calculate the points $A$, $B$ and $C$. So finally:
a = normalize(V2-V1);
b = normalize(V2-V3);
halfang = acos(dot(a, b))/2.;
// skip center C if you use splines
C = V2 - r / sin(halfang) * normalize((a+b)/2);
A = V2 - r/tan(halfang)*a;
B = V2 - r/tan(halfang)*b;
You can simplify this a bit with trigonometric identities. Or if you use rational B-splines you can skip the calculation of $C$.
Note that this is only one possible formulation.
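As a sanity check, here is a small Python version of the formulas above applied to a concrete right-angle corner (the vertex coordinates and radius are made-up test values):

```python
import math

# Corner at V2 = (4, 0), with edges along the x-axis and the line x = 4.
V1, V2, V3, r = (0.0, 0.0), (4.0, 0.0), (4.0, 4.0), 1.0

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def normalize(v):
    n = math.hypot(*v)
    return (v[0] / n, v[1] / n)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a = normalize(sub(V2, V1))
b = normalize(sub(V2, V3))
halfang = math.acos(dot(a, b)) / 2.0

bis = normalize(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
C = (V2[0] - r / math.sin(halfang) * bis[0], V2[1] - r / math.sin(halfang) * bis[1])
A = (V2[0] - r / math.tan(halfang) * a[0], V2[1] - r / math.tan(halfang) * a[1])
B = (V2[0] - r / math.tan(halfang) * b[0], V2[1] - r / math.tan(halfang) * b[1])

# The center C must sit at distance r from both edges,
# and A, B are the tangent points on the edges.
assert abs(abs(C[1]) - r) < 1e-12          # edge V1-V2 is the x-axis
assert abs(abs(C[0] - 4.0) - r) < 1e-12    # edge V2-V3 is the line x = 4
```

For this corner the construction gives A ≈ (3, 0), B ≈ (4, 1) and C ≈ (3, 1), i.e. a quarter-circle fillet of radius 1.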
|
As a first remark: the Anscombe-Aumann axioms, in particular Independence, are defined over acts taking the state space to a linear space (generally simple lotteries over consumption objects). Even when we consider the restriction of the model to purely subjectively uncertain acts, we still need to employ the full model or we will lose information.
That being said: Lets let $S$ be a finite state space, and $X$ a finite set of alternatives. Let $\Delta(X)$ denote all the lotteries over $X$ and $f: S \to \Delta(X)$ is an act. For an event $E \subseteq S$, let $f_{-E}g$ be the act defined by$$f_{-E}g \begin{cases} f(s) \text{ if } x \in E \\ g(s) \text{ if } x \notin E. \end{cases}$$
Now, we can say that our model satisfies the
sure thing principle if $f_{-E}h \succsim g_{-E}h$ and $f_{-E^c}h \succsim g_{-E^c}h$ then $f \succsim g.$ This definition is valid for all acts, not just ones without objective risk, but clearly you can consider only the relevant projection.
Assume the antecedent of the STP. From $f_{-E}h \succsim g_{-E}h$ and independence we have that $$\frac12 f_{-E}h + \frac12 f_{-E^c}h \succsim \frac12 g_{-E}h + \frac12 f_{-E^c}h.$$ Notice we can rewrite this as$$\frac12 f + \frac12 h \succsim \frac12 g_{-E}f + \frac12h$$and, applying independence again, we get\begin{equation}\tag{1}f \succsim g_{-E}f.\end{equation}
In an analogous fashion, from $f_{-E^c}h \succsim g_{-E^c}h$ and independence we have that $$\frac12 f_{-E^c}h + \frac12 g_{-E}h \succsim \frac12 g_{-E^c}h + \frac12 g_{-E}h.$$ Again, we can rewrite as$$\frac12 g_{-E}f + \frac12 h \succsim \frac12 g + \frac12h$$and, applying independence again, we get\begin{equation}\tag{2}g_{-E}f \succsim g.\end{equation}
Combining (1) and (2) via transitivity yields the desired relations. Going back to the prefatory remark, notice that to apply independence, we need to mix acts, appealing to objective risk. Thus, even when $f$, $g$, and $h$ have no objective risk, we still need risky acts to serve as an intermediary in the proof. In a sense, this is the grand insight to the whole AA framework---using objective risk to get around the necessity of an infinite state space by using the linearity of expectations to force the STP.
Notice only independence and transitivity were used. This should indicate that even state-dependent EU (where monotonicity / state-independence fails) or Bewley EU (where completeness is relaxed) will still satisfy the STP.
Edit in response to a comment: Let's call the above notion of the Sure Thing Principle STP1, and say the preference satisfies STP2 if $f_{-E}h \succsim g_{-E}h \iff f_{-E}h' \succsim g_{-E}h'$ for all $f,g,h,h'$. Then if $\succsim$ is a preorder, it satisfies STP1 if and only if it satisfies STP2.
First assume STP2 holds and that $f_{-E}h \succsim g_{-E}h$ and $f_{-E^c}h \succsim g_{-E^c}h$. Then by STP2 we have $$f = f_{-E}f \succsim g_{-E}f \qquad \text{ and } \qquad g_{-E}f = f_{-E^c}g \succsim g.$$ Transitivity implies $f \succsim g$; STP1 holds.
Next, assume STP1 holds and $f_{-E}h \succsim g_{-E}h$. Define $\hat f = f_{-E}h'$ and $\hat g$ analogously. By definition $$\hat f_{-E}h = f_{-E}h \qquad \text{ and } \qquad \hat g_{-E}h = g_{-E}h,$$so our assumption is identically that \begin{equation}\tag{3}\hat f_{-E}h \succsim \hat g_{-E}h.\end{equation}Further $\hat f_{-E^c}h = \hat g_{-E^c}h = h_{-E}h'$ so we have, by the reflexivity of preference, that\begin{equation}\tag{4}\hat f_{-E^c}h \succsim \hat g_{-E^c}h.\end{equation}Now we can apply STP1 to (3) and (4) to obtain that $\hat f \succsim \hat g$, which, given their definitions, is exactly what we need to show for STP2 to hold.
|
Let $n\geqslant 3$. Show that the unique element $\sigma$ of $S_n$ that satisfies $\sigma\gamma=\gamma\sigma$ for all $\gamma\in S_n$ is the identity (id).
Proposition: If $\alpha,\beta$ are disjoint, they commute.
If we have $\alpha\in S_n$, then $id\circ\alpha=\alpha\circ id=\alpha$
Question:
I am not understanding what is asked. If the permutations are disjoint or if one is raised to a certain power (e.g. $\alpha^2=\alpha\circ\alpha$), they commute. How can id (the identity) be the only element that assures $\sigma\gamma=\gamma\sigma$?
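For what it's worth, the claim is easy to confirm by brute force for small $n$, e.g. $n=3$ (a quick Python sketch):

```python
from itertools import permutations

# A permutation of {0, 1, 2} is a tuple s with s[i] = image of i;
# composition (s ∘ g)(i) = s[g[i]].
def compose(s, g):
    return tuple(s[g[i]] for i in range(len(g)))

Sn = list(permutations(range(3)))
# Elements commuting with every gamma in S_3: the center of the group.
center = [s for s in Sn if all(compose(s, g) == compose(g, s) for g in Sn)]
assert center == [(0, 1, 2)]   # only the identity survives
```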
|
12.2.1.2 - Example: Age & Height
Data concerning body measurements from 507 adults were retrieved from body.dat.txt; for more information see body.txt. In this example we will use the variables age (in years) and height (in centimeters).
Research question: Is there a relationship between age and height in adults?
Age (in years) and height (in centimeters) are both quantitative variables. From the scatterplot below we can see that the relationship is linear (or at least not non-linear).
\(H_0: \rho = 0\)
\(H_a: \rho \neq 0\)
From Minitab Express:
Pearson correlation of Height (cm) and Age = 0.067883
P-Value = 0.1269
\(r=0.067883\)
\(p=.1269\)
\(p > \alpha\) therefore we fail to reject the null hypothesis.
There is not sufficient evidence of a relationship between age and height in the population from which this sample was drawn.
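For readers without Minitab Express, the statistic behind this output can be sketched in a few lines of Python; the data below are made up for illustration (the body.dat.txt sample is not reproduced here), so the numbers differ from the output above:

```python
import math

# Pearson correlation coefficient r and its t statistic (df = n - 2).
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

x = [1, 2, 3, 4, 5]            # hypothetical predictor values
y = [1, 2, 3, 4, 6]            # hypothetical responses
r = pearson_r(x, y)
t = r * math.sqrt(len(x) - 2) / math.sqrt(1 - r ** 2)
assert 0.98 < r < 0.99         # strongly (but not perfectly) correlated
```

The p-value then comes from comparing t to a t distribution with n - 2 degrees of freedom, which is exactly what Minitab reports.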
|
In water wave physics, when we say that the wave "feels" the bottom, we mean that the water depth affects the properties of the wave. The dispersion relationship for water waves is:$$\omega^2 = gk \tanh{(kd)}$$where $\omega$ is the wave frequency, $k$ is the wavenumber, $d$ is the mean water depth, and $g$ is gravitational acceleration. We ...
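Since the dispersion relation is implicit in $k$, it is usually solved iteratively; a small Python sketch using Newton's method (the example values of $\omega$ and $d$ are arbitrary):

```python
import math

g = 9.81  # gravitational acceleration, m/s^2

def wavenumber(omega, d, tol=1e-12):
    """Solve g*k*tanh(k*d) = omega^2 for k by Newton iteration."""
    k = omega ** 2 / g                  # deep-water limit as first guess
    for _ in range(100):
        f = g * k * math.tanh(k * d) - omega ** 2
        df = g * math.tanh(k * d) + g * k * d / math.cosh(k * d) ** 2
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

omega = 2 * math.pi / 10.0              # a 10 s wave
k = wavenumber(omega, d=5.0)            # in 5 m of water it "feels" the bottom
assert abs(g * k * math.tanh(k * 5.0) - omega ** 2) < 1e-9
```

In finite depth tanh(kd) < 1, so the solved k is larger (the wave is shorter) than the deep-water value ω²/g, which is the shoaling behaviour described elsewhere on this page.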
There are two important ways to recognize different types of waves in seismic records:Their velocity. These waves travel at different speeds: P-waves are fastest, then S-waves, then Love waves, then Rayleigh. Since seismic recordings are measures of earth displacement, particle velocity, or water pressure over elapsed time, this means the waves show up at ...
The physical process you describe is known as wave shoaling.At the basic level, waves propagating into shallow water become shorter and higher, and consequently, steeper. In shallow water, the water particles near the crest move forward faster than those below them. Similarly, the particles near the trough move backward faster than those above them. This ...
In simplest terms, it simply means that:if the source signal is shifted by some amount of time Δt, but otherwise unchanged, then the seismogram will also be shifted by Δt, but otherwise unchanged; andthe seismogram generated by the sum of two (or more) source signals is the sum of the seismograms that would've been generated by each of the ...
Tsunamis and sound waves are different types of wave - one is a transverse wave and the other is a longitudinal one. Let's look at the factors that influence the speed of each one is determined.Tsunami - transverse wave in shallow waterA transverse wave is one of the type that we think of from day to day - where the direction of oscillation is ...
Feel the bottom refers to the fact that the wave-induced velocity field extends all the way from the top of the water column to the bottom of the water column. When the wave "feels the bottom" it means that there is some interaction with the bottom boundary. A very thin boundary layer develops at the bottom where vorticity is generated due to the velocity ...
To my knowledge, the best study looking at potential explanations for the Red Sea crossing is the one by Nof and Paldor (1992). They present a couple of plausible scenarios for the crossing. The main one is the effect of strong winds blowing along the Gulf of Suez and they find that the sea level drop could be sufficient:It is found that, even for ...
P and S waves are fundamentally different, when it comes to properties of the wave. An example might be that P waves can travel through fluids while S waves cannot. However, when it comes down to wave theory, these two are just different polarizations of a mechanical wave.SeismologyIn seismics this concept may be puzzling as we make some very ...
Yes. Spectral wave models cannot model storm surge because the wave energy balance equation that they integrate does not describe the physical processes associated with storm surge.Wave models solve the wave energy balance equation:$$\dfrac{\partial E}{\partial t}+ \dfrac{\partial (c_gE)}{\partial x}+ \dfrac{\partial (\dot{k}E)}{\partial k}+ \dfrac{...
The stability correction factor ASF is related to the effects of atmospheric stability (function of buoyancy and shear) on wave growth, and has been implemented in Wavewatch3 in the Tolman and Chalikov 1996 input source term. The code where the correction happens can be found in w3updtmd.ftn:! 5. Stability correction ( !/STAB2 )! Original settings :!...
This is a very good question, not just important to seismic inversion, but also modeling in general.Lets set this problem up differently. Lets say point's A and D are nodes.Each node represents a system of equations, and these equations are only calculated on these points. Therefore, the model can only exist on the points in which they are calculated. ...
There are few known mechanisms that lead to the generation of Rogue waves, such as the ones you mentioned, but essentially all Rogue waves are the due to the nonlinear wave dispersion characteristics of large groups of waves.I can imagine one approach to predicting the emergence of such waves is to simulate the evolution of initial wave states with Navier-...
As SimonW points out strong tidal currents will modify the wave shape and significant height. The Wolf & Prandle (1999) study provides a neat summary description of the effects of currents (of any kind) on waves:(i) Wave generation by wind—the effective wind is that relative to the surface current, and the wave age (cp/U*) and effective surface ...
Whitecapping refers to the steepness-induced wave dissipation in deep water during which some air is entrained into the near-surface water, forming an emulsion of water and air bubbles (foam) that appears white. It occurs when the velocity of individual water particles near the wave crest exceed the phase speed of the wave, causing the front face of the wave ...
I think the best option for sediment transport modeling is the Community Sediment Transport Modeling System (CSTMS) package that was developed for ROMS. CSTMS was created by a group of sediment transport modelers lead by the USGS. One of the many benefits is that it is open-source and, thus, free. The model was designed for realistic simulations of processes ...
These are rotor clouds, and are manifestations of "Lee Waves", a particular kind of internal "gravity wave" (better defined as "buoyancy effect").Forced convection helps form these clouds as warm, moist air is forced upward by both wind from behind and the mountain barrier in front. The upward movement forces cooling and condensation of vapor into clouds. ...
Yes, the wave variance or energy spectrum, directional or non-directional, is positive-definite as @aretxabaleta said in the comment. In linear water-wave theory, the surface elevation is described as a linear superposition of sinusoids:$$\eta(t) = \sum_{i=1}^{N}a_i \sin(f_i t + \phi_i)$$where $a_i$, $f_i$ and $\phi_i$ are the amplitude, frequency and ...
Two-way time to depth calibration is a vertical problem.How you handle deviated wells probably depends a bit on how you are tying the wells. Here are two things to watch out for:You should be tying to true vertical depth (TVD) anyway — make sure you're not using measured depth somehow. I expect you are using TVD — so the deviated section will be ...
The only open and ongoing data source for in-situ ocean wave measurements I am aware of is the National Data Buoy Center. Though NDBC manages data service from plenty of moored buoys in the Gulf of Mexico and the Atlantic and Pacific coasts of North America, unfortunately there isn't much in your region of interest.The only buoys I have found that are ...
Coastal trapped Kelvin waves are important processes contributing to variability in the sea surface height and temperature near the coast. Field studies have measured large temperature fluctuations mainly made up of low-frequency internal Kelvin waves mostly of semi-diurnal tidal period at the continental shelf on the great barrier reef (Wolanski, 1983). ...
You can automatically detect the P and S waves for an event but I don't know of a way to automatically extract the Rayleigh and Love waves directly from the seismograms.As @kwinkunks points out things are complicated in the real world. Understanding an event is an iterative bootstrapping process that is best done by looking at recordings from several ...
One thing that I would add is a discussion of a physical parameter that is a simple measure of whether a wave is going to topple or not. The Froude number is defined as the ratio of the maximum absolute fluid velocity, $U$, and the wavespeed, $c$:$F = U/c$Because of the mechanism explained by IRO-bot,In shallow water, the water particles near the ...
In the picture above we see an internal wave propagating in the direction of the wavenumber vector$$\mathbf{K} = k \mathbf{e_x} + m \mathbf{e_z}$$which (in 2D) is given by the vector sum of its components. $\mathbf{e_x}$ and $\mathbf{e_z}$ are unit vectors in the horizontal and vertical directions, respectively.The continuity condition implies that $\...
Large parts of Delft3D - including, I think, the sediment transport module - are available in an open source form. The GUI is not currently open source, but (a) Deltares have been offering licences for this for free for academic use; (b) if they are no longer doing this, it is entirely possible to use the software without the GUI.FVCOM also has a sediment ...
Seismic, but...There are lots of ways of estimating wavelets. None of them rely on well logs alone, because they don't contain any information about the wavelet. Some methods are purely statistical, some use the seismic data, and some of them use seismic and well logs.BackgroundI recommend reading what you can about wavelet extraction. Especially these ...
I don't have personal experience with this situation, but reading around suggests it depends what kind of mixing you are talking about, for example whether it is vertical or horizontal.Looking around for recent examples, I see Chao et al. (2007) had a value of 4.0 m2/s for horizontal diffusivity. This larger value would give you a smaller Péclet ...
To supplement the current and very good answers, we can look at a seismogram: from (http://www.bgs.ac.uk/discoveringGeology/hazards/earthquakes/images/dia_seismogram.jpg) As you can see, we know that P-waves (compressional) will generally arrive before S-waves (shear). Also, in general, S-waves will arrive before the much stronger (in response) surface waves. ...
The answer to your question, based on linear theory, is no. The short answer is that the slope of these waves is very small, and the displacement of the particle paths is proportional to $(ak)$ (times, perhaps, a depth dependent term), with $a$ the wave amplitude and $k$ its wavenumber. Now let's make this answer rigorous. Recall, for irrotational inviscid 2 ...
The reason we use convolution is because we consider the earth to be a linear, time-invariant, passive system. The output of any such system is the convolution of the input and the impulse response of the system."linear" means that if input x(t) produces output X(t) and input y(t) produces output Y(t), then input Ax(t)+By(t) produces output AX(t)+BY(t) [...
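Both properties are easy to verify numerically; here is a small pure-Python sketch (the filter coefficients are arbitrary, not a real seismic wavelet):

```python
# Direct (naive) discrete convolution: y[n] = sum_k x[k] * h[n - k].
def conv(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

h = [1.0, -0.5, 0.25]                     # a made-up "impulse response"
x = [0.0, 1.0, 2.0, 0.0]
y = [3.0, 0.0, -1.0, 0.5]

# Linearity: conv(2x + 3y, h) == 2*conv(x, h) + 3*conv(y, h)
lhs = conv([2 * a + 3 * b for a, b in zip(x, y)], h)
rhs = [2 * a + 3 * b for a, b in zip(conv(x, h), conv(y, h))]
assert all(abs(u - v) < 1e-12 for u, v in zip(lhs, rhs))

# Time-invariance: delaying the input by one sample just delays the output.
shifted = conv([0.0] + x, h)
assert shifted[0] == 0.0 and shifted[1:] == conv(x, h)
```

Convolving the unit impulse [1.0] with h returns h itself, which is exactly the "impulse response" characterization in the answer above.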
It is a kind of spectral shaping, intended to increase the vertical resolution of seismic reflection data.The logic goes like this:Seismic data is band-limited and lacks high frequencies. This limits its vertical (travel time, and thus thickness) resolution. This is annoying because we often care about thin beds.The spectral peak of seismic data tends ...
|
On the method of orthogonal extension of overdetermined systems
I. S. Gudovich
Abstract: In the article a description is given of Noether boundary value problems for overdetermined systems of partial differential equations with constant coefficients of the form
\begin{equation}
\mathscr L(D)u=f,\qquad\mathscr W^*(D)u=g,\tag{1}
\end{equation}
where $\mathscr L(\xi)$ ($\xi=(\xi_1,…,\xi_m)$) is an $N\times n$ matrix inducing a homomorphism $\mathscr L\colon\mathscr P^n\to\nobreak\mathscr P^N$ whose kernel and cokernel are assumed to be free modules ($\mathscr P^n$ is the module composed of all $n$-dimensional vectors with coordinates polynomially depending on $\xi$). The matrix $\mathscr W(\xi)$ is composed of column vectors forming a basis in the kernel of $\mathscr L$.
A necessary condition for the solvability of (1) is
\begin{equation}
\mathscr V(D)f=0,
\end{equation}
where $\mathscr V(\xi)$ is a matrix of row vectors forming a basis in the cokernel of $\mathscr L$.
The system
\begin{equation}
\mathscr L(D)u+\mathscr V^*(D)p=f,\qquad\mathscr W^*(D)u=g,
\end{equation}
which is called an orthogonal extension of the original system, is introduced into consideration.
Bibliography: 13 titles. Full text: PDF file (874 kB) References: PDF file HTML file English version: Mathematics of the USSR-Sbornik, 1974, 22:3, 456–464 Bibliographic databases: UDC: 517.946 MSC: 35N05 Received: 10.05.1973 Citation: I. S. Gudovich, “On the method of orthogonal extension of overdetermined systems”, Mat. Sb. (N.S.), 93(135):3 (1974), 451–459; Math. USSR-Sb., 22:3 (1974), 456–464 Citation in format AMSBIB
DOI: 10.1070/SM1974v022n03ABEH002169
|
This vignette illustrates the use of mitml for the treatment of missing data at Level 2. Specifically, the vignette addresses the following topics:
Further information can be found in the other vignettes and the package documentation.
For purposes of this vignette, we make use of the leadership data set, which contains simulated data from 750 employees in 50 groups, including ratings on job satisfaction, leadership style, and work load (Level 1) as well as cohesion (Level 2).
The package and the data set can be loaded as follows.
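The code chunk itself appears to have been lost in extraction; based on the package's standard usage, the loading step is presumably:

```r
# Load the package and the simulated leadership data set (shipped with mitml)
library(mitml)
data(leadership)

summary(leadership)
```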
In the summary of the data, it becomes visible that all variables are affected by missing data.
#     GRPID          JOBSAT              COHES             NEGLEAD          WLOAD
#  Min.   : 1.0   Min.   :-7.32934   Min.   :-3.4072   Min.   :-3.13213   low :416
#  1st Qu.:13.0   1st Qu.:-1.61932   1st Qu.:-0.4004   1st Qu.:-0.70299   high:248
#  Median :25.5   Median :-0.02637   Median : 0.2117   Median : 0.08027   NA's: 86
#  Mean   :25.5   Mean   :-0.03168   Mean   : 0.1722   Mean   : 0.04024
#  3rd Qu.:38.0   3rd Qu.: 1.64571   3rd Qu.: 1.1497   3rd Qu.: 0.79111
#  Max.   :50.0   Max.   :10.19227   Max.   : 2.5794   Max.   : 3.16116
#                 NA's   :69         NA's   :30        NA's   :92
The following data segment illustrates this fact, including cases with missing data at Level 1 (e.g., job satisfaction) and 2 (e.g., cohesion).
#    GRPID      JOBSAT     COHES     NEGLEAD WLOAD
# 73     5 -1.72143400 0.9023198  0.83025589  high
# 74     5          NA 0.9023198  0.15335056  high
# 75     5 -0.09541178 0.9023198  0.21886272   low
# 76     6  0.68626611        NA -0.38190591  high
# 77     6          NA        NA          NA   low
# 78     6 -1.86298201        NA -0.05351001  high
In the following, we will employ a two-level model to address missing data at both levels simultaneously.
The specification of the two-level model involves two components, one pertaining to the variables at each level of the sample (Goldstein, Carpenter, Kenward, & Levin, 2009; for further discussion, see also Enders, Mistler, & Keller, 2016; Grund, Lüdtke, & Robitzsch, in press).
Specifically, the imputation model is specified as a list with two components, where the first component denotes the model for the variables at Level 1, and the second component denotes the model for the variables at Level 2.
For example, using the formula interface, an imputation model targeting all variables in the data set can be written as follows.
The first component of this list includes the three target variables at Level 1 and a fixed (1) as well as a random intercept (1|GRPID). The second component includes the target variable at Level 2 with a fixed intercept (1).
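The formula list itself was dropped from this copy of the vignette; in mitml's formula interface, the specification described above would presumably look like this, with one list entry per level:

```r
# Level 1: three target variables, fixed and random intercepts;
# Level 2: one target variable, fixed intercept only.
fml <- list(JOBSAT + NEGLEAD + WLOAD ~ 1 + (1|GRPID),
            COHES ~ 1)
```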
From a statistical point of view, this specification corresponds to the following model \[ \begin{aligned} \mathbf{y}_{1ij} &= \boldsymbol\mu_{1} + \mathbf{u}_{1j} + \mathbf{e}_{ij} \\ \mathbf{y}_{2j} &= \boldsymbol\mu_{2} + \mathbf{u}_{2j} \; , \end{aligned} \] where \(\mathbf{y}_{1ij}\) denotes the target variables at Level 1, \(\mathbf{y}_{2j}\) the target variables at Level 2, and the right-hand side of the model contains the fixed effects, random effects, and residual terms as mentioned above.
Note that, even though the two components of the model appear to be separated, they define a single (joint) model for all target variables at both Level 1 and 2. Specifically, this model employs a two-level covariance structure, which allows for relations between variables at both Level 1 (i.e., correlated residuals at Level 1) and 2 (i.e., correlated random effects residuals at Level 2).
Because the data contain missing values at both levels, imputations will be generated with jomoImpute (and not panImpute). Except for the specification of the two-level model, the syntax is the same as in applications with missing data only at Level 1.
Here, we will run 5,000 burn-in iterations and generate 20 imputations, each 250 iterations apart.
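The corresponding call (as echoed in the summary output shown further down) is:

```r
imp <- jomoImpute(data = leadership, formula = fml, n.burn = 5000,
                  n.iter = 250, m = 20)
```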
By looking at the summary, we can then review the imputation procedure and verify that the imputation model converged.
# Call:
#
# jomoImpute(data = leadership, formula = fml, n.burn = 5000, n.iter = 250,
#     m = 20)
#
# Level 1:
#
# Cluster variable:         GRPID
# Target variables:         JOBSAT NEGLEAD WLOAD
# Fixed effect predictors:  (Intercept)
# Random effect predictors: (Intercept)
#
# Level 2:
#
# Target variables:         COHES
# Fixed effect predictors:  (Intercept)
#
# Performed 5000 burn-in iterations, and generated 20 imputed data sets,
# each 250 iterations apart.
#
# Potential scale reduction (Rhat, imputation phase):
#
#          Min   25%  Mean Median   75%   Max
# Beta:  1.001 1.001 1.001  1.001 1.001 1.001
# Beta2: 1.001 1.001 1.001  1.001 1.001 1.001
# Psi:   1.000 1.001 1.003  1.001 1.003 1.009
# Sigma: 1.000 1.003 1.004  1.004 1.006 1.009
#
# Largest potential scale reduction:
# Beta: [1,3], Beta2: [1,1], Psi: [4,3], Sigma: [3,1]
#
# Missing data per variable:
#     GRPID JOBSAT NEGLEAD WLOAD COHES
# MD%     0    9.2    12.3  11.5   4.0
Due to the greater complexity of the two-level model, the output includes more information than in applications with missing data only at Level 1. For example, the output features the model specification for variables at both Level 1 and 2. Furthermore, it provides convergence statistics for the additional regression coefficients for the target variables at Level 2 (i.e., Beta2).
Finally, it also becomes visible that the two-level model indeed allows for relations between target variables at Levels 1 and 2. This can be seen from the fact that the potential scale reduction factor (\(\hat{R}\)) for the covariance matrix at Level 2 (Psi) was largest for Psi[4,3], which is the covariance between cohesion and the random intercept of work load.
The completed data sets can then be extracted with mitmlComplete.
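The extraction dropped the code chunk here; based on the package's documented interface, it is presumably:

```r
# Extract all 20 completed data sets as a list
implist <- mitmlComplete(imp, "all")
```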
When inspecting the completed data, it is easy to verify that the imputations for variables at Level 2 are constant within groups as intended, thus preserving the two-level structure of the data.
#    GRPID      JOBSAT     NEGLEAD WLOAD      COHES
# 73     5 -1.72143400  0.83025589  high  0.9023198
# 74     5 -2.80749991  0.15335056  high  0.9023198
# 75     5 -0.09541178  0.21886272   low  0.9023198
# 76     6  0.68626611 -0.38190591  high -1.0275552
# 77     6  1.52825873 -1.11035850   low -1.0275552
# 78     6 -1.86298201 -0.05351001  high -1.0275552
Enders, C. K., Mistler, S. A., & Keller, B. T. (2016). Multilevel multiple imputation: A review and evaluation of joint modeling and chained equations imputation. Psychological Methods, 21, 222–240. doi:10.1037/met0000063
Goldstein, H., Carpenter, J. R., Kenward, M. G., & Levin, K. A. (2009). Multilevel models with multivariate mixed response types. Statistical Modelling, 9, 173–197. doi:10.1177/1471082X0800900301
Grund, S., Lüdtke, O., & Robitzsch, A. (in press). Multiple imputation of missing data for multilevel models: Simulations and recommendations. Organizational Research Methods. doi:10.1177/1094428117703686
# Author: Simon Grund (grund@ipn.uni-kiel.de)
# Date: 2019-01-02
|
In a linear regression problem with a sparsity constraint, $P = (P_1, \cdots, P_N)^{T}$ is the column vector of outputs, and $D = (d_{j, k})$ is the $(N \times M)$-dimensional matrix of inputs. The objective function is
$$\text{argmin}_{c \in \Bbb R^{M}} (\Vert P - Dc \Vert_2^2 + \lambda \Vert c \Vert_0)$$
in which $\Vert c \Vert_0 = \# \{j: c_j \neq 0\}$
I learnt that this problem is NP-hard, but I don't understand why.
What I think:
There are in total $S$ cases that we need to consider,
$$S = C(M, 1) + C(M, 2) + \cdots + C(M, M) = 2^M - 1,$$ in which $M$ is the dimension of the coefficient vector $c$ and $C(M, n) = \begin{pmatrix} M\\ n \end{pmatrix} = \frac{M!}{(M-n)!n!}$. (I write $S$ rather than $N$ here, since $N$ already denotes the number of outputs.)
For each tuple of selected features, we perform OLS (without taking into account the regularizer) and record the loss: $$L = RMSE + \lambda \Vert c \Vert_0$$
After doing $N$ such calculations, we can choose the tuple of features that yield the smallest $L$.
However, I don't know whether the result obtained really is the solution to the original objective function, since we don't take into account the effect of the regularizer in the first place.
Or perhaps this algorithm does not make sense: instead of comparing $L = RMSE + \lambda \Vert c \Vert_0$ between tuples of different sizes, can we only compare the $RMSE$ of feature tuples of the same size?
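That worry can actually be resolved: because the penalty $\lambda \Vert c \Vert_0$ is constant once the support is fixed, minimizing the residual sum of squares within each support and then comparing the penalized losses across supports does recover the global minimizer. Here is a toy sketch of that exhaustive search (the function name and the synthetic example are my own, assuming NumPy; the $2^M$ loop is exactly why this is only feasible for tiny $M$):

```python
import itertools

import numpy as np

def best_subset(D, P, lam):
    """Exhaustive search for argmin_c ||P - D c||_2^2 + lam * ||c||_0.

    Tries every support set S (including the empty one), fits OLS
    restricted to the columns in S, and keeps the support with the
    smallest penalized loss.  The number of supports grows as 2^M.
    """
    N, M = D.shape
    best_loss = float(P @ P)      # empty support: c = 0, loss = ||P||^2
    best_c = np.zeros(M)
    for k in range(1, M + 1):
        for S in itertools.combinations(range(M), k):
            cols = list(S)
            # OLS restricted to the columns in S (penalty is constant here)
            c_S, *_ = np.linalg.lstsq(D[:, cols], P, rcond=None)
            resid = P - D[:, cols] @ c_S
            loss = float(resid @ resid) + lam * k
            if loss < best_loss:
                best_loss = loss
                best_c = np.zeros(M)
                best_c[cols] = c_S
    return best_c, best_loss
```

For example, if `P` depends only on the first column of `D`, the search returns a coefficient vector supported on that single column, because adding further columns cannot reduce the residual enough to pay the extra `lam` per nonzero coefficient.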
|
People often say some event has a 50-60% chance of happening. Sometimes I will even see people give explicit error bars on probability assignments. Do these statements have any meaning or are they just a linguistic quirk of discomfort choosing a specific number for something that is inherently unknowable?
It wouldn't make sense if you were talking about known probabilities; e.g., with a fair coin, the probability of throwing heads is 0.5 by definition. However, unless you are talking about a textbook example, the exact probability is never known; we only know it approximately.
A different story is when you estimate the probabilities from data; e.g., you observed 13 winning tickets among the 12,563 tickets you bought, so from this data you estimate the probability to be 13/12563. This is something you estimated from the sample, so it is uncertain, because with a different sample you could observe a different value. The uncertainty is not about the probability itself, but about your estimate of it.
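As a rough numerical sketch of this lottery example (the helper function and the normal-approximation interval are my own illustration, not part of the original answer):

```python
import math

def proportion_estimate(successes, trials, z=1.96):
    """Point estimate of an unknown probability, plus a rough
    normal-approximation (Wald) interval around that estimate.
    The interval quantifies sampling uncertainty about the
    estimate, not randomness in the probability itself."""
    p_hat = successes / trials
    se = math.sqrt(p_hat * (1.0 - p_hat) / trials)
    return p_hat, (p_hat - z * se, p_hat + z * se)

# 13 winning tickets among 12,563: estimate ~0.001, with an
# interval whose width shrinks like 1/sqrt(trials).
p_hat, (lo, hi) = proportion_estimate(13, 12563)
```

Buying ten times as many tickets would shrink the interval by a factor of about $\sqrt{10}$, which is the sense in which the "error bars" are about the estimate rather than the probability.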
Another example would be when the probability is not fixed, but depends on other factors. Say we are talking about the probability of dying in a car accident. We can consider a "global" probability, a single value that is marginalized over all the factors that directly and indirectly lead to car accidents. On the other hand, we can consider how the probabilities vary across the population given the risk factors.
You can find many more examples where probabilities themselves are considered as random variables, so they vary rather than being fixed.
[Figure] with associated caption:
...an effect size of 1.68 (95% CI: 1.56 (95% CI: 1.52 (95% CI: 1.504 (95% CI: 1.494 (95% CI: 1.488 (95% CI: 1.485 (95% CI: 1.482 (95% CI: 1.481 (95% CI: 1.4799 (95% CI: 1.4791 (95% CI: 1.4784...
I know of two interpretations. The first was said by Tim: We have observed $X$ successes out of $Y$ trials, so if we believe the trials were i.i.d. we can estimate the probability of the process at $X/Y$ with some error bars, e.g. of order $1/\sqrt{Y}$.
The second involves "higher-order probabilities" or uncertainties about a generating process. For example, say I have a coin in my hand manufactured by a crafty gambler, who with $0.5$ probability made a 60%-heads coin, and with $0.5$ probability made a 40%-heads coin. My best guess is a 50% chance that the coin comes up heads, but with big error bars: the "true" chance is either 40% or 60%.
In other words, you can imagine running the experiment a billion times and taking the fraction of successes $X/Y$ (actually the limiting fraction). It makes sense, at least from a Bayesian perspective, to give e.g. a 95% confidence interval around that number. In the above example, given current knowledge, this is $[0.4,0.6]$. For a real coin, maybe it is $[0.47,0.53]$ or something. For more, see:
Do We Need Higher-Order Probabilities and, If So, What Do They Mean? Judea Pearl. UAI 1987. https://arxiv.org/abs/1304.2716
All measurements are uncertain.
Therefore, any measurement of probability is also uncertain.
This uncertainty on the measurement of probability can be visually represented with an uncertainty bar. Note that uncertainty bars are often referred to as error bars. This is incorrect or at least misleading, because it shows uncertainty and not error (the error is the difference between the measurement and the unknown truth, so the error is unknown; the uncertainty is a measure of the width of the probability density after taking the measurement).
A related topic is meta-uncertainty. Uncertainty describes the width of an a posteriori probability distribution function, and in the case of a Type A uncertainty (uncertainty estimated by repeated measurements), there is inevitably an uncertainty on the uncertainty; metrologists have told me that metrological practice dictates expanding the uncertainty in this case (IIRC, if uncertainty is estimated by the standard deviation of $N$ repeated measurements, one should multiply the resulting standard deviation by $\frac{N}{N-2}$), which is essentially a meta-uncertainty.
How could an error bar on a probability arise? Suppose we can assign $\mathrm{prob}(\mathcal{A} | \Theta = \theta, \mathcal{I})$. If $\mathcal{I}$ implies $\Theta = \theta_0$, then $\mathrm{prob}(\Theta = \theta | \mathcal{I}) = \delta_{\theta \theta_0}$ and \begin{align} \mathrm{prob}(\mathcal{A} | \mathcal{I}) &= \sum_\theta \mathrm{prob}(\mathcal{A} | \Theta = \theta, \mathcal{I}) \: \delta_{\theta \theta_0} \\ &= \mathrm{prob}(\mathcal{A} | \Theta = \theta_0, \mathcal{I}) \end{align}
Now if $\Theta$ cannot be deduced from $\mathcal{I}$, then it's tempting to think that the uncertainty in $\mathrm{prob}(\Theta = \theta | \mathcal{I})$ must lead to uncertainty in $\mathrm{prob}(\mathcal{A} | \mathcal{I})$. But it doesn't. It merely implies a joint probability for $\mathcal{A}$ and $\Theta = \theta$, which, when $\Theta$ is marginalised, gives a definitive probability for $\mathcal{A}$: \begin{align} \mathrm{prob}(\mathcal{A}, \Theta = \theta | \mathcal{I}) &= \mathrm{prob}(\mathcal{A} | \Theta = \theta, \mathcal{I}) \: \mathrm{prob}(\Theta = \theta | \mathcal{I}) \\ \mathrm{prob}(\mathcal{A} | \mathcal{I}) &= \sum_\theta \mathrm{prob}(\mathcal{A} | \Theta = \theta, \mathcal{I}) \: \mathrm{prob}(\Theta = \theta | \mathcal{I}) \end{align}
Thus, adding error bars to a probability is akin to adding uncertainty to nuisance parameters, which can modify the probability, but cannot make it uncertain.
There are very often occasions where you want to have a probability of a probability. Say for instance you worked in food safety and used a survival analysis model to estimate the probability that botulinum spores would germinate (and thus produce the deadly toxin) as a function of the food preparation steps (i.e. cooking) and incubation time/temperature (c.f. paper). Food producers may then want to use that model to set safe "use-by" dates so that consumer's risk of botulism is appropriately small. However, the model is fit to a finite training sample, so rather than picking a use-by date for which the probability of germination is less than, say 0.001, you might want to choose an earlier date for which (given the modelling assumptions) you could be 95% sure the probability of germination is less than 0.001. This seems a fairly natural thing to do in a Bayesian setting.
tl;dr: Any one-off guess from a particular guesser can be reduced to a single probability. However, that's just the trivial case; probability structures can make sense whenever there's some contextual relevance beyond just a single probability.
The chance of a random coin landing on Heads is 50%.
Doesn't matter if it's a fair coin or not; at least, not to me. Because while the coin may have bias that a knowledgeable observer could use to make more informed predictions, I'd have to guess 50% odds.
My probability table is: $$ \begin{array}{c|c} \textbf{Heads} & \textbf{Tails} \\ \hline 50 \% & 50 \% \end{array}_{.} $$ But what if I tell someone that the coin has 50% odds, and then they have to make a decision about what happens on two coin flips? Lacking further information, they'd have to default to guessing that coin flips are independent events, arriving at: $$ {\newcommand{\rotate}[2]{{\style{transform-origin: center middle; display: inline-block; transform: rotate(#1deg); padding: 25px}{\rlap{#2}}}}} \hspace{-165px} \begin{array}{rc} & \qquad \qquad \small{\text{First flip}} \\ \rotate{-90}{\hspace{-25px}\small{\begin{array}{c}\text{Second} \\ \text{flip} \end{array}}} & \begin{array}{r|c|c} & \textbf{Heads} & \textbf{Tails} \\ \hline \textbf{Heads} & 25 \% & 25 \% \\ \hline \textbf{Tails} & 25 \% & 25 \% \end{array}_{,} \end{array} $$ from which they might conclude $$ \begin{array}{c|c} \begin{array}{c}\textbf{Same side} \\[-5px] \textbf{twice}\end{array} & \begin{array}{c}\textbf{Heads} \\[-5px] \textbf{and Tails} \end{array} \\ \hline 50 \% & 50 \% \end{array}_{.} $$ However, the coin flips aren't independent events; they're connected by a common causal agent, describable as the coin's bias.
If we assume a model in which a coin has a constant probability of Heads, $P_{\small{\text{Heads}}} ,$ then it might be more precise to say $$ \begin{array}{c|c} \textbf{Heads} & \textbf{Tails} \\ \hline P_{\small{\text{Heads}}} & 1 - P_{\small{\text{Heads}}} \end{array}_{.} $$ From this, someone might think $$ {\newcommand{\rotate}[2]{{\style{transform-origin: center middle; display: inline-block; transform: rotate(#1deg); padding: 25px}{\rlap{#2}}}}} \hspace{-165px} \begin{array}{rc} & \qquad \qquad \small{\text{First flip}} \\ \rotate{-90}{\hspace{-25px}\small{\begin{array}{c}\text{Second} \\ \text{flip} \end{array}}} & \begin{array}{r|c|c} & \textbf{Heads} & \textbf{Tails} \\ \hline \textbf{Heads} & P_{\small{\text{Heads}}}^{2} & P_{\small{\text{Heads}}} \left(1-P_{\small{\text{Heads}}}\right) \\ \hline \textbf{Tails} & P_{\small{\text{Heads}}} \left(1-P_{\small{\text{Heads}}}\right) & {\left(1-P_{\small{\text{Heads}}}\right)}^{2} \end{array}_{,} \end{array} $$ from which they might conclude $$ \begin{array}{c|c} \begin{array}{c}\textbf{Same side} \\[-5px] \textbf{twice}\end{array} & \begin{array}{c}\textbf{Heads} \\[-5px] \textbf{and Tails} \end{array} \\ \hline 1 - 2 P_{\small{\text{Heads}}} \left(1 - P_{\small{\text{Heads}}} \right) & 2 P_{\small{\text{Heads}}} \left(1 - P_{\small{\text{Heads}}} \right) \end{array}_{.} $$ If I had to guess $P_{\small{\text{Heads}}} ,$ then I'd still go with $50 \% ,$ so it'd seem like this would reduce to the prior tables.
So it's the same thing, right?
It turns out that the probability of getting the same side twice is always greater than that of getting one of each, except in the special case of a perfectly fair coin. So if you do reduce the table, assuming that a single probability captures the uncertainty, your predictions become absurd when extended.
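This claim is easy to check: under the constant-bias model, the probability of the same side twice is $1 - 2 P_{\small{\text{Heads}}}(1 - P_{\small{\text{Heads}}})$, which is at least $1/2$ with equality only at the fair coin. A quick numerical sketch (the helper name is mine):

```python
def same_side_twice(p):
    # P(HH) + P(TT) for a coin with P(heads) = p, assuming the
    # two flips are independent given p; algebraically 1 - 2p(1-p).
    return p * p + (1.0 - p) * (1.0 - p)

# Scan the whole range of biases: the minimum sits at the fair coin.
grid = [i / 100 for i in range(101)]
vals = [same_side_twice(p) for p in grid]
```

So collapsing the two-flip table to "50/50 either way" is only exact when $P_{\small{\text{Heads}}} = 0.5$; any bias at all tilts the answer toward "same side twice".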
That said, there's no "true" coin flip. We could have all sorts of different flipping methodologies that could yield very different results and apparent biases. So the idea that there's a consistent value of $P_{\small{\text{Heads}}}$ would also tend to lead to errors when we construct arguments based on that premise.
So if someone asks me the odds of a coin flip, I wouldn't say $`` 50 \% " ,$ despite it being my best guess. Instead, I'd probably say $`` \text{probably about}~50\% " .$
And what I'd be trying to say is roughly:
If I had to make a one-off guess, I'd probably go with about $50 \% .$ However, there's further context that you should probably ask me to clarify if it's important.
People often say some event has a 50-60% chance of happening.
If you sat down with them and worked out all of their data, models, etc., you might be able to generate a better number, or, ideally, a better model that'd more robustly capture their predictive ability.
But if you split the difference and just call it 55%, that'd be like assuming $P_{\small{\text{Heads}}} = 50 \%$ in that you'd basically be running with a quick estimate after having truncated the higher-order aspects of it. Not necessarily a bad tactic for a one-off quick estimate, but it does lose something.
I would argue that only the error bars matter, but in the given example, the whole thing is probably almost meaningless. The example lends itself to interpretation as a confidence interval, in which the upper and lower bounds of some degree of certainty are the range of probability. This proposed answer will deal with that interpretation. Majority source: https://www.amazon.com/How-Measure-Anything-Intangibles-Business-ebook/dp/B00INUYS2U
The example says that to a given level of confidence, the answer is unlikely to be above 60% and equally unlikely to be below 50%. This is so convenient a set of numbers that it resembles "binning", in which a swag of 55% is further swagged to a +/- 5% range. Familiarly round numbers are immediately suspect.
One way to arrive at a confidence interval is to decide upon a chosen level of confidence -- let's say 90% -- and allow that the thing could be either lower or higher than our estimate, but that there is only a 10% chance the "correct" answer lies outside of our interval. So we estimate an upper bound such that "there is only a 1/20 chance of the proper answer being greater than this upper bound", and do similarly for the lower bound. This can be done through "calibrated estimation", which is one form of measurement, or through other forms of measurement.

Regardless, the point is to A) admit from the beginning that there is an uncertainty associated with our uncertainty, and B) avoid throwing up our hands at the thing, calling it a mess, and simply tacking on 5% above and below. The benefit is that an approach rigorous to a chosen degree can yield results which are still mathematically relevant, to a degree which can be stated mathematically: "There is a 90% chance that the correct answer lies between these two bounds..." This is a properly formed confidence interval (CI), and it can be used in further calculations. What's more, by assigning it a confidence, we can calibrate the method used to arrive at the estimate, by comparing predictions vs. results and acting on what we find to improve the estimation method. Nothing can be made perfect, but many things can be made 90% effective.

Note that the 90% CI has nothing to do with the fact that the example given in the OP contains 10% of the field and omits 90%. What is the wingspan of a Boeing 747-100, to a 90% CI? Well, I'm 95% sure that it is not more than 300 ft, and I am equally sure that it is not less than 200 ft. So off the top of my head, I'll give you a 90% CI of 200-235 feet. NOTE that there is no "central" estimate. CIs are not formed by guesses plus fudge factors. This is why I say that the error bars probably matter more than a given estimate.
That said, an interval estimate (everything above) is not necessarily better than a point estimate with a properly calculated error (which is beyond my recall at this point -- I recall only that it's frequently done incorrectly). I am just saying that many estimates expressed as ranges -- and I'll hazard that most ranges with round numbers -- are point+fudge rather than either interval or point+error estimates.
One proper use of point+error:
"A machine fills cups with a liquid, and is supposed to be adjusted so that the content of the cups is 250 g of liquid. As the machine cannot fill every cup with exactly 250.0 g, the content added to individual cups shows some variation, and is considered a random variable X. This variation is assumed to be normally distributed around the desired average of 250 g, with a standard deviation, σ, of 2.5 g. To determine if the machine is adequately calibrated, a sample of n = 25 cups of liquid is chosen at random and the cups are weighed. The resulting measured masses of liquid are X1, ..., X25, a random sample from X."
Key point: in this example, both the mean and the error are specified/assumed, rather than estimated/measured.
|
Under a Lorentz transformation, a spinor living in \(d\) dimensions transforms as
\(\psi (x) \rightarrow \psi'(x') = e^{\frac{1}{2} \lambda^{\mu \nu} \Sigma_{\mu \nu}} \psi (x)\)
up to some numerical factors in the exponential from convention. This is only true if we're dealing with the orthochronous proper Lorentz transforms \(SO^+ (1,3)\), because the projective spinor representations mean we can just deal with the algebra, and the \(\Sigma_{\mu \nu}\) are representations of \(so (1, d-1)\). How does this change when we want to think about \(P\) and \(T\)? That is, how does one derive the action of elements of \(O (1, d-1)\) on spinors?
The only progress I've made towards understanding this is the coordinate-dependent interpretation put forth in standard texts on QFT, where \(P\) and \(T\) act with some product of gamma matrices. I was looking for a slightly more general definition.
A related question: is the number of connected components of \(O (1, d-1)\) the same for even and odd dimension?
|
But if you don't want to have a Google account: Chrome is really good. Much faster than FF (I can't run FF on either of the laptops here) and more reliable (if it crashes, it restores your previous session with 100% certainty).
And Chrome has a Personal Blocklist extension which does what you want.
: )
Of course you already have a Google account but Chrome is cool : )
Guys, I feel a little defeated in trying to understand infinitesimals. I'm sure you all think this is hilarious. But if I can't understand this, then I'm yet again stalled. How did you guys come to terms with them, later in your studies?
do you know the history? Calculus was invented based on the notion of infinitesimals. There were serious logical difficulties found in it, and a new theory developed based on limits. In modern times using some quite deep ideas from logic a new rigorous theory of infinitesimals was created.
@QED No. This is my question as best as I can put it: I understand that lim_{x->a} f(x) = f(a), but then to say that the gradient of the tangent curve is some value, is like saying that when x=a, then f(x) = f(a). The whole point of the limit, I thought, was to say, instead, that we don't know what f(a) is, but we can say that it approaches some value.
I have a problem showing that the limit of the following function $$\frac{\sqrt{\frac{3 \pi}{2n}} -\int_0^{\sqrt 6}(1-\frac{x^2}{6}+\frac{x^4}{120})^ndx}{\frac{3}{20}\frac 1n \sqrt{\frac{3 \pi}{2n}}}$$ equals $1$ as $n \to \infty$.
@QED When I said, "So if I'm working with function f, and f is continuous, my derivative dy/dx is by definition not continuous, since it is undefined at dx=0." I guess what I'm saying is that (f(x+h)-f(x))/h is not continuous since it's not defined at h=0.
@KorganRivera There are lots of things wrong with that: dx=0 is wrong. dy/dx - what's y? "dy/dx is by definition not continuous" - it's not a function, so how can you ask whether or not it's continuous, ... etc.
In general this stuff with 'dy/dx' is supposed to help as some kind of memory aid, but since there's no rigorous mathematics behind it - all it's going to do is confuse people
in fact there was a big controversy about it since using it in obvious ways suggested by the notation leads to wrong results
@QED I'll work on trying to understand that the gradient of the tangent is the limit, rather than the gradient of the tangent approaches the limit. I'll read your proof. Thanks for your help. I think I just need some sleep. O_O
@NikhilBellarykar Either way, don't highlight everyone and ask them to check out some link. If you have a specific user which you think can say something in particular feel free to highlight them; you may also address "to all", but don't highlight several people like that.
@NikhilBellarykar No. I know what the link is. I have no idea why I am looking at it, what should I do about it, and frankly I have enough as it is. I use this chat to vent, not to exercise my better judgment.
@QED So now it makes sense to me that the derivative is the limit. What I think I was doing in my head was saying to myself that g(x) isn't continuous at x=h so how can I evaluate g(h)? But that's not what's happening. The derivative is the limit, not g(h).
@KorganRivera, in that case you'll need to be proving $\forall \varepsilon > 0,\,\,\,\, \exists \delta,\,\,\,\, \forall x,\,\,\,\, 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$ by picking some correct L (somehow)
Hey guys, I have a short question a friend of mine asked me which I cannot answer because I have not learnt about measure theory (or whatever is needed to answer the question) yet. He asks what is wrong with \int_0^{2 \pi} \frac{d}{dn} e^{inx} dx when he applies Lebesgue's dominated convergence theorem, because apparently, if he first integrates and then differentiates, the result is 0, but if he first differentiates and then integrates, it's not 0. Does anyone know?
|
The derived category of abelian groups is somewhat special, in a way that makes the Künneth and universal coefficient theorems take an unusually simple form.
An abstract way to state this property is
Theorem: Every object of the derived category of abelian groups is a direct sum of one-term complexes.
Proof: Every chain complex is quasi-isomorphic to a complex of free abelian groups. And if $C_\bullet$ is a complex of free abelian groups, the fact that every subgroup of a free abelian group is free implies that you can split $C_n = \ker(\partial_n) \oplus L_n$, where $\partial_n$ maps $L_n$ isomorphically onto $\mathrm{im}(\partial_n)$. Thus you can decompose $C_\bullet$ into a direct sum of two-term complexes of the form $\mathrm{im}(\partial_n) \hookrightarrow \ker(\partial_{n-1})$, each of which is quasi-isomorphic to the one-term complex $H_{n-1}(C_\bullet)$ concentrated in degree $n-1$. $\square$
In particular, the equivalence class of every chain complex $C_\bullet$ includes the complex
$$ \ldots \xrightarrow{0} H_1(C_\bullet) \xrightarrow{0} \underline{H_0(C_\bullet)} \xrightarrow{0} H_{-1}(C_\bullet) \xrightarrow{0} \ldots $$
which, of course, breaks apart into the direct sum of its individual terms.
From the form of tor and ext for one-term complexes, we can then write
$$ H_n (C_\bullet \otimes^\mathbb{L} D_\bullet)\cong \bigoplus_i \bigoplus_j H_{n-i-j} \left( H_i(C_\bullet) \otimes^\mathbb{L} H_j(D_\bullet) \right) \\ \cong\bigoplus_i \bigoplus_j \begin{cases}H_i(C_\bullet) \otimes H_j(D_\bullet) & n = i+j\\ \mathrm{tor}(H_i(C_\bullet), H_j(D_\bullet)) & n = i+j+1\end{cases}$$
The universal coefficient theorem is the special case where $C_\bullet$ is the complex of coefficients concentrated in degree zero. Similarly,
$$ H_n (\mathbb{R}{\hom}(C_\bullet, D_\bullet))\cong \prod_i \bigoplus_j H_{n+i-j} \left( \mathbb{R}{\hom}(H_i(C_\bullet), H_j(D_\bullet)) \right) \\ \cong\prod_i \bigoplus_j \begin{cases}\hom(H_i(C_\bullet), H_j(D_\bullet)) & n = j-i\\ \mathrm{ext}(H_i(C_\bullet), H_j(D_\bullet)) & n = j-i-1\end{cases}\\\cong\prod_i \hom(H_i(C_\bullet), H_{n+i}(D_\bullet)) \oplus\mathrm{ext}(H_i(C_\bullet), H_{n+i+1}(D_\bullet))$$
With $D$ concentrated in degree zero, this becomes the familiar
$$ H_{-n} (\mathbb{R}{\hom}(C_\bullet, D))\cong \hom(H_n(C_\bullet), D) \oplus\mathrm{ext}(H_{n-1}(C_\bullet), D) $$
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$, and it is presented as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure of $H$ from this.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
Now take the case $|F|=pr$. In this case $\phi(F)=1$ and $\operatorname{Aut}(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
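A toy sketch of this procedure in code (illustrative only: the presentation below, $\Bbb Z/3=\langle a\mid a^3\rangle$ with rules $aa\to a^{-1}$ and $a^{-1}a^{-1}\to a$, is an invented small stand-in, not a genuine Dehn presentation of a hyperbolic surface group; uppercase letters denote inverse generators):

```python
# Toy Dehn-style rewriting: uppercase = inverse letter.  Rules replace a
# longer word u_i by a shorter word v_i; free reduction cancels s s^{-1}.
# Here the group is Z/3 = <a | a^3>, with rules aa -> A and AA -> a.
RULES = [("aa", "A"), ("AA", "a")]

def free_reduce(w):
    out = []
    for ch in w:
        if out and out[-1] == ch.swapcase():
            out.pop()               # cancel an adjacent s s^{-1} pair
        else:
            out.append(ch)
    return "".join(out)

def dehn_reduce(w):
    w = free_reduce(w)
    while True:
        for u, v in RULES:
            if u in w:
                w = free_reduce(w.replace(u, v, 1))
                break
        else:
            return w                # no rule applies; "" means trivial word

print(dehn_reduce("aaa"))    # a^3 is trivial, reduces to ""
print(dehn_reduce("aaaa"))   # a^4 = a is not trivial, reduces to "a"
```

Each rewrite strictly shortens the word, so the loop terminates; that is exactly why a Dehn presentation gives a (linear-time) solution to the word problem.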
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by the straight-line homotopy, with "straight lines" being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is really cheaper per page than a printer, you know, but you have to print in bulk; it's all economies of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.
== Branches of algebraic graph theory ==
=== Using linear algebra ===
The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you do not mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C'(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
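A quick machine check of this (a sketch, not from the thread; the representation assumption is that a polynomial is a plain Python list $p$ with $p[k]$ the coefficient of $x^k$):

```python
# Apply F(P) = x*P'' + (x + 1)*P''' to the basis 1, x, x^2, x^3 of R_3[x]
# and read off the kernel from which basis vectors map to zero.
def deriv(p):
    return [k * p[k] for k in range(1, len(p))] or [0]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def shift(p):          # multiply by x
    return [0] + p

def F(p):              # F(P) = x*P'' + (x + 1)*P'''
    p2 = deriv(deriv(p))
    p3 = deriv(p2)
    return add(shift(p2), add(shift(p3), p3))

print(F([0, 0, 0, 1]))   # x^3 |-> 6 + 6x + 6x^2, i.e. [6, 6, 6]
print(F([0, 0, 1, 0]))   # x^2 |-> 2x, i.e. [0, 2, 0]
print(F([0, 1, 0, 0]))   # x   |-> 0
print(F([1, 0, 0, 0]))   # 1   |-> 0, so ker F = {ax + b}
```

Only the degree ≤ 1 basis vectors map to zero, matching the claim that $\ker(F) = \{ax+b : a,b\in\mathbb{R}\}$.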
|
Okay. Let $A \subseteq \Bbb{R}^\omega$ be the set of all bounded sequences in $\Bbb{R}$. The problem I am working on is trying to show that $x \in \Bbb{R}^\omega$ lies in the same component as $0$ if and only if $x$ is a bounded sequence, where $\Bbb{R}^\omega$ is endowed with the uniform topology; or, in other words, that $C(0)=A$, where $C(0)$ is the connected component of $0$. Here is how I set about doing this. I mistakenly thought that $f : \Bbb{R} \to \Bbb{R}^\omega$ defined by $f(t) = xt + (1-t)y$ is continuous, where $x,y \in \Bbb{R}^\omega$. I then proceeded to use this to show that the $\epsilon$-balls are path connected, which would show that $\Bbb{R}^\omega$ is locally path connected and therefore that the components and path components coincide. Of course, if it were the case that $f$ is continuous, $\Bbb{R}^\omega$ would be a path-connected and therefore connected space; but it isn't, as $A$ is clopen. My first question is: is $\Bbb{R}^\omega$ locally path connected?
I did, however, show that $f$ is continuous when the codomain is restricted to $A$ (see General Topology Chat); and from this I was able to conclude that $A$ is path connected. Martin Sleziak and I were able to show that $A$ is in fact a path component of $0$. But seeing as $\Bbb{R}^\omega$ being locally path connected is up in the air at this point, I cannot conclude that $A$ is also a component of $0$. How do I fix this?
EDIT:
As I said in my comment below, most of the concepts mentioned in Alex Ravsky's answer haven't yet been introduced in my book. But I think I gathered from his answer, as well as Brian Scott's which he linked below, the essential ideas.
Claim: Let $\{P_i\}$ be the path components of some topological space $X$. If the $P_i$ are open in $X$ and are locally path connected, then $X$ is also locally path connected (LPC).
Proof: If $P_i$ is LPC, then its subspace topology is generated by a basis $\mathcal{B}_i$ consisting entirely of path connected sets. Since $P_i$ is open in $X$, each $B \in \mathcal{B}_i$ is open in $X$, and, moreover, path connected in $X$. If $x \in X$, then $x \in P_i$ for some $i$; and so there exists a $B \in \mathcal{B}_i$ such that $x \in B \subseteq P_i \subseteq X$. This means that $\mathcal{B} = \bigcup \mathcal{B}_i$ is a basis for $X$'s topology consisting entirely of path connected sets, thereby showing that $X$ is LPC.
Now, since $A$ is open (in fact, it is clopen) and locally path connected, and all other path components of $\Bbb{R}^\omega$ are homeomorphic to $A$, $\Bbb{R}^\omega$ must also be locally path connected, in which case the components and path components coincide. Martin and I proved that $A$ is a path component of $0$, so it must also be a component of $0$.
How does this sound?
|
Your question is based on a misapprehension. The heating in microwave ovens is not a resonant process. See the answers to Does a domestic microwave work by emitting an electromagnetic wave at the same frequency as a OH bond in water? for a discussion of this.
But let's leave this aside, because there is some interesting physics in your question. Suppose there was something that created an energy difference between orientations of the water dipole. For example, suppose we apply an external electric field so that the dipole aligned with the field has a lower energy than the dipole aligned against it. We'll tweak the field so that the energy difference between the two alignments is equal to the energy $h\nu$ of the microwave radiation.
A dipole aligned with the field can absorb a photon and flip into anti-alignment. This is like hitting the bell i.e. it adds energy to the bell. However the radiation can also cause stimulated emission of the excited state so it decays to the ground state by emitting a photon - you'd put one photon in and get two out. This is like hitting the bell in antiphase - hitting the bell in antiphase means your hammer rebounds with more energy than you put in.
The problem is you don't have a single bell. You have $6.022 \times 10^{23}$ bells for every 18 g of water, and those bells have no mechanism to keep themselves in phase with each other. So when you turn on the microwaves you end up with an equilibrium between absorption and stimulated (and spontaneous) emission. Typically you get roughly equal numbers of dipoles in the ground and excited state.
But actually this still isn't really heating in the sense we normally use the word. Heating occurs when the excited dipoles can relax by transferring their energy to lattice vibrations rather than by emitting a photon. So the overall process is
photon $\rightarrow$ excited dipole $\rightarrow$ lattice vibrations = heat. This is more like what happens in a microwave oven. The reverse, i.e. cooling, process is when lattice vibrations cause transient dipoles and these radiate photons. This is just black body radiation.
|
The word "Earth" is sometimes misleading. I think (if I get the sense of your question right) it is more properly called a "protective earth". In home electricity supplies, one side of the supply is "tethered" to the same potential as a protective earth circuit. This latter is simply a system of conductors, going through the third "earth" pin on the socket outlet, that tethers any conducting surface of an electrical appliance to that one side of the supply (called the "neutral"). The other side of the supply is called the "active".
If a fault happens in an appliance such that the active touches the conductive housing (say of a toaster), we have a dangerous situation, since anyone touching the appliance can then get an electric shock. However, if the housing is connected to the protective earth circuit, there is a redundant path back to the supply's neutral. This leads to a high current in the redundant path and, hopefully, a blown fuse in the active side of the supply.
A more modern and safer way to achieve this protection is earth leakage protection: a system that detects, through the Ampère-law-begotten "magnetomotive force" (MMF), when the current through the active is different from that through the neutral. In this system, both active and neutral lines pass through a torus-shaped ferromagnetic core. If the current in one doesn't match that in the other (i.e. the current going into the appliance through the active is not of the same magnitude as, and exactly out of phase with, the current coming out through the neutral), then there is a nonzero $\oint \vec{H}\cdot{\rm d}\vec{\ell}$ (magnetomotive force) around the core and thus an AC magnetic field through a sense coil wound threading the torus; by Faraday's law, this begets an EMF in the sense coil, which trips a circuit breaker. These devices can be made very cheaply and are extremely effective, shutting off within milliseconds if an imbalance of more than typically $50\,{\rm \mu A}$ is sensed.
Also, note that electric current can in general flow without a return path through the mechanisms of displacement current and capacitance. See my answer to the question "Does alternating current (AC) require a complete circuit?" for details.
Lastly, there have been some really bizarre truly one-line power transmission systems thought of in the past, where the one line works as a waveguide and does not need a return path. See the Wiki Page for the Goubau Line for details.
|
G is an abelian group with the following property:
(*) If H is any subgroup of G then there exists a subgroup F of G such that G/H is isomorphic to F.
Now I want to prove: if $G$ is a finitely generated abelian group and $G$ has the above property, then $G$ is finite.
This is my solution, and I have a sense that it is not correct but I can't figure out my mistake. Assume that $G=\langle g_{1},g_{2},\dots,g_{n}\rangle$. My solution is based on proving that the order of every $g_{i}$ is finite. Consider the group generated by $g_{i}$, namely $\langle g_{i}\rangle$. We have that $G/\langle g_{i}\rangle$ is isomorphic to a subgroup of $G$, so $G/\langle g_{i}\rangle$ is finitely generated. Now from there, for all $g\in G$ we have $g\langle g_{i}\rangle=\sum_{j=1}^{n} a_{i_{j}}g_{i_{j}}$, so $g_{i}$ and all its powers can be written as a finite combination of the $g_{j}$'s. Now since $G$ is finitely generated, there are only finitely many such combinations of the $g_{j}$, $j=1,\dots,n$. In this case there are $k$ and $l$ such that $g_{i}^{k}=g_{i}^{l}$, which means that $g_{i}$ has finite order.
|
I have a photon counting system that uses a gated avalanche diode to detect single photons. The repetition frequency of the gates is $f_1$ and the temporal gate width is $\tau_1$ (so the duty cycle is $\tau_1 f_1$)
I want to get an estimate of the quantum efficiency $\eta_{\lambda}$, i.e., the probability of detecting single photons, for this diode at some wavelength $\lambda$. Note $\lambda$ is not anywhere close to the peak wavelength $\lambda_p$ of the diode, so $\eta_{\lambda}$ is expected to be small.
I have a pulsed laser system with repetition frequency $f_2 \gg f_1$, pulse width $\tau_2 \ll \tau_1$, and center wavelength $\lambda$. I connect the output of this laser to the avalanche diode through a variable optical attenuator (VOA) already characterized at this wavelength before. I control the VOA setting so that the power impinging on the diode is $P$ and observe a probability of detection per gate $p_d$.
The main thing to note is that in this experiment the
laser pulses are not synchronized to the detection gates because the laser and photon counting system do not have/cannot share the same clock. In this case:
Q1. Can I assume the diode essentially sees the laser light as a quasi-CW source? As in, is the power P more or less uniformly distributed throughout the train of gates?
Q2. If the answer to the above is yes, the mean photon number seen inside a gate should be $\mu_p = P\lambda\tau_1/hc$ (each photon carries energy $hc/\lambda$). Can I then invert the equation $p_d = 1 - e^{-\mu_p \eta_{\lambda}}$ to obtain an estimate for $\eta_{\lambda}$ (ignoring dark counts here)?
Q3. If the above is correct, can it be said that the actual value -- found using a more precise method -- would surely be larger/smaller than the estimated one? As in, does the estimate provide a lower/upper bound in this experiment?
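For what it's worth, a numerical sketch of the Q2 inversion (the power, wavelength, gate width, and click probability below are invented for illustration, not taken from the question):

```python
# Invert p_d = 1 - exp(-mu_p * eta) with mu_p = P * lam * tau1 / (h * c).
import math

h, c = 6.62607015e-34, 2.99792458e8   # Planck constant [J s], speed of light [m/s]

P    = 1e-12        # average power on the diode [W]   (assumed)
lam  = 1550e-9      # center wavelength [m]            (assumed)
tau1 = 5e-9         # temporal gate width [s]          (assumed)
p_d  = 0.01         # observed clicks per gate         (assumed)

mu_p = P * lam * tau1 / (h * c)        # mean photon number per gate
eta  = -math.log(1.0 - p_d) / mu_p     # inverted Poissonian click probability
print(mu_p, eta)
```

With these numbers the gate sees a mean photon number well below one, so the linearized estimate $\eta \approx p_d/\mu_p$ would be nearly identical.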
|
Consider a static, complete information game with 2 players.
Strategy sets are $S_1=\{U,D\},S_2=\{l,m,r\}$.
Payoffs are irrelevant to this question as I am trying to get the concept of rationalizability correct.
Suppose I want to verify whether $m$ is a rationalizable strategy for player 2.
Then, I want to ask the following question:
$\exists \sigma_1=(q^*,1-q^*)\in\Delta(S_1)$ such that for $\sigma^*_2=(0,1,0),$ $u_2(\sigma_1,\sigma^*_2)\geq u_2(\sigma_1,\sigma_2)$ for all $\sigma_2\in\Delta(S_2)?$
Now, suppose I have payoff matrix such that I could find $(q^*,1-q^*)$ such that it satisfies both:
(1) $u_2(\sigma_1,\sigma^*_2)\geq u_2(\sigma_1,(1,0,0))$
(2) $u_2(\sigma_1,\sigma^*_2)\geq u_2(\sigma_1,(0,0,1))$.
This means, I could find a valid range of $q^*$ such that for player 2, choosing $m$ provides a weakly better payoff for her compared to the degenerate (i.e. pure) strategies of $l$ or $r$.
My question is:
If I could find such $q^*$ that satisfies both (1),(2), then I do not have to check any other strategy profiles in $\Delta(S_2)$, that is, any convex combo of $(1,0,0)$ and $(0,0,1)$? My intuition is that for any $\alpha\in[0,1]$, I could simply have:
(1)' $\alpha u_2(\sigma_1,\sigma^*_2)\geq \alpha u_2(\sigma_1,(1,0,0))$
(2)' $(1-\alpha)u_2(\sigma_1,\sigma^*_2)\geq (1-\alpha)u_2(\sigma_1,(0,0,1))$.
and adding (1)' and (2)' shows that the degenerate strategy $m$ for player 2 is a best response to some belief $\sigma_1\in\Delta(S_1)$. Hence, the bottom line is that (1) and (2) are sufficient, and I do not have to check the convex combinations of the two other pure strategies.
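A small numerical sketch of this logic (the payoff matrix is invented for illustration): if $m$ weakly beats the pure strategies $l$ and $r$ against the belief $(q^*, 1-q^*)$, then it weakly beats every mixture over $S_2$, because the expected payoff is linear in $\sigma_2$.

```python
# Player 2's payoffs, indexed by (player 1's action, player 2's action).
import random

U2 = {("U", "l"): 3, ("U", "m"): 2, ("U", "r"): 0,
      ("D", "l"): 0, ("D", "m"): 2, ("D", "r"): 3}

q = 0.5                                   # belief sigma_1 = (q, 1 - q) on {U, D}

def payoff(col):                          # expected payoff of a pure column
    return q * U2[("U", col)] + (1 - q) * U2[("D", col)]

pure = {s: payoff(s) for s in ("l", "m", "r")}
assert pure["m"] >= pure["l"] and pure["m"] >= pure["r"]   # checks (1) and (2)

random.seed(0)
for _ in range(1000):                     # random mixtures (a_l, a_m, a_r)
    a, b = sorted(random.random() for _ in range(2))
    mix = (a, b - a, 1 - b)
    mixed = sum(w * pure[s] for w, s in zip(mix, ("l", "m", "r")))
    assert pure["m"] >= mixed - 1e-12     # m weakly beats every mixture
print("m is a best response to the belief against all mixtures")
```

The key line is that `mixed` is a convex combination of the pure payoffs, so it can never exceed their maximum; that is exactly the (1)' + (2)' argument.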
|
3.4.2.1 - Formulas for Computing Pearson's r
There are a number of different versions of the formula for computing Pearson's \(r\). You should get the same correlation value regardless of which formula you use.
Note that you will not have to compute Pearson's \(r\) by hand in this course. These formulas are presented here to help you understand what the value means. You should always be using technology to compute this value.
First, we'll look at the conceptual formula, which uses \(z\) scores. To use this formula we would first compute the \(z\) score for every \(x\) and \(y\) value. We would multiply each case's \(z_x\) by their \(z_y\). If their \(x\) and \(y\) values were both above the mean then this product would be positive. If their \(x\) and \(y\) values were both below the mean this product would also be positive. If one value was above the mean and the other was below the mean this product would be negative. Think of how this relates to the correlation being positive or negative. The sum of all of these products is divided by \(n-1\) to obtain the correlation.
Pearson's r: Conceptual Formula
\(r=\dfrac{\sum{z_x z_y}}{n-1}\)
where \(z_x=\dfrac{x - \overline{x}}{s_x}\) and \(z_y=\dfrac{y - \overline{y}}{s_y}\)
When we replace \(z_x\) and \(z_y\) with the \(z\) score formulas and move the \(n-1\) to a separate fraction we get the formula in your textbook: \(r=\frac{1}{n-1}\Sigma{\left(\frac{x-\overline x}{s_x}\right) \left( \frac{y-\overline y}{s_y}\right)}\)
|
I'm a beginner in using LaTeX and my problem is that the dots of the following diagram are too close to the boundary (the dots should have the same distance to the boundary as the text), and the terms in the lower row of this diagram look much bigger than the terms in the upper row... i.e. the whole diagram looks ugly. I used tikzcd for this, here is the code:
\documentclass[pdftex,12pt,a4paper,twoside]{article}\usepackage{adjustbox}\usetikzlibrary{arrows,chains,matrix,positioning,scopes,snakes,cd}\begin{document}\[\adjustbox{scale=0.9,center}{\begin{tikzcd}[font=\small, row sep=1.2em]\cdots \arrow{r} & \overset{\large{K_*(A)}}{\underset{\large{K_*(B)}}{\otimes}}\arrow{r} \arrow{d} {\alpha(A,B)}[swap]{\cong} & \overset{\Large{K_*(A)}}{\underset{\Large{K_*(B)}}{\otimes}} \arrow{r} \arrow{d}{\alpha(A,B)}[swap]{\cong}& \overset{\large{K_*(A)}}{\underset{\large{K_*(B\rtimes_\varphi\mathbb{Z})}}{\otimes}} \arrow{r} \arrow{d}{\alpha(A,B\rtimes_\varphi\mathbb{Z})} & \overset{\Large{K_{*}(A)}}{\underset{\Large K_{*-1}(B)}{\otimes}} \arrow{r}\arrow{d}{\alpha(A,B)}[swap]{\cong} & \cdots \\ \cdots \arrow{r} & K_*(A\otimes B) \arrow{r} & K_*(A\otimes B) \arrow{r} & \tiny{K_*(A\otimes (B\rtimes_\varphi\mathbb{Z}))} \arrow{r} & \tiny{K_{*-1}(A\otimes B)} \arrow{r} & \cdots \end{tikzcd} }\]\end{document}
If I scale everything down, it becomes unreadable, so that isn't a solution. One idea is to shorten just the outer arrows so that the dots have the same distance to the boundary as the text, but I don't know how that works. Can you help me to draw it in a 'more efficient' way, so that it looks better than now? Would drawing it in tikzpicture be a solution?
Thank you.
|
Area can be defined using calculus. Suppose you have a square in $\mathbb{R}^2$. Let $x$ be the change in the first coordinate going from E to D, and let $y$ be the change in the second coordinate going from E to D. We can see that the area of the square is $(x - y)^2 + 2xy = x^2 - 2xy + y^2 + 2xy = x^2 + y^2$. With no assumptions about what properties the distance formula satisfies, that just proves that the area is $x^2 + y^2$ and proves nothing about what the distance formula is. We could seek a binary function $d$ on $\mathbb{R}^2$, in other words a function from $(\mathbb{R}^2)^2$ to $\mathbb{R}$, satisfying certain properties, where we say that for all $x, y, z, w \in \mathbb{R}$, $d((x, y), (z, w))$ is the distance from $(x, y)$ to $(z, w)$. We seek a function $d$ satisfying the following properties:
- The distance from any point to itself is 0
- For any square, the distance from any vertex to the one adjacent to it in the counterclockwise direction is the square root of the area
It can be easily shown that $d((x, y), (z, w)) = \sqrt{(z - x)^2 + (w - y)^2}$ is the unique function satisfying those properties. That just shows that using this definition of distance, the Pythagorean theorem holds for all right angle triangles whose legs are parallel to the axes. To show that the Pythagorean theorem holds for all right angle triangles, we also have to show that that function satisfies the following property
$\forall x \in \mathbb{R}\forall y \in \mathbb{R}\forall z \in \mathbb{R}\forall w \in \mathbb{R}d((0, 0), (xz - yw, xw + yz)) = d((0, 0), (x, y))d((0, 0), (z, w))$
That can be done as follows. $d((0, 0), (xz - yw, xw + yz)) = \sqrt{(xz - yw)^2 + (xw + yz)^2} = \sqrt{x^2z^2 - 2xyzw + y^2w^2 + x^2w^2 + 2xyzw + y^2z^2} = \sqrt{x^2z^2 + x^2w^2 + y^2z^2 + y^2w^2} = \sqrt{(x^2 + y^2)(z^2 + w^2)} = \sqrt{x^2 + y^2}\sqrt{z^2 + w^2} = d((0, 0), (x, y))d((0, 0), (z, w))$
Some people find several other properties of distance intuitively obvious. How do we know that there exists a way of defining distance that satisfies all of them? Because it has been proven in this answer that $d((x, y), (z, w)) = \sqrt{(z - x)^2 + (w - y)^2}$ is the unique function satisfying the following properties
- $\forall x, y, z, w \in \mathbb{R}$: $d((x, y), (x + z, y + w)) = d((0, 0), (z, w))$
- $\forall x, y, z, w \in \mathbb{R}$: $d((x, y), (z, w))$ is nonnegative
- $\forall$ nonnegative $x \in \mathbb{R}$: $d((0, 0), (x, 0)) = x$
- $\forall x, y \in \mathbb{R}$: $d((0, 0), (x, -y)) = d((0, 0), (x, y))$
- $\forall x, y, z, w \in \mathbb{R}$: $d((0, 0), (xz - yw, xw + yz)) = d((0, 0), (x, y))\,d((0, 0), (z, w))$
and it also satisfies the additional properties
- The area of any square is the square of the length of its edges
- $\forall x \in \mathbb{R}$: $d((0, 0), (\cos(x), \sin(x))) = 1$
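A quick numerical spot-check of the multiplicativity property (illustrative only, not a proof):

```python
# Check d((0,0),(xz - yw, xw + yz)) = d((0,0),(x,y)) * d((0,0),(z,w))
# for d((a,b),(c,e)) = sqrt((c-a)^2 + (e-b)^2), on random samples.
import math, random

def d(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

random.seed(1)
for _ in range(1000):
    x, y, z, w = (random.uniform(-10, 10) for _ in range(4))
    lhs = d((0, 0), (x*z - y*w, x*w + y*z))
    rhs = d((0, 0), (x, y)) * d((0, 0), (z, w))
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9)
print("multiplicativity holds on 1000 random samples")
```

The identity is of course just $|pq| = |p|\,|q|$ for complex numbers $p = x+iy$, $q = z+iw$, which is the algebraic computation carried out above.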
Image source: https://www.maa.org/press/periodicals/convergence/proportionality-in-similar-triangles-a-cross-cultural-comparison-the-student-module
|
Suppose $a<b$ and let $f:[a,b] \to [a,b]$ be continuous. Suppose that $x \neq y$ in $[a,b]$ with $f(x)=y$ and $f(y)=x$. Prove that $f$ has a fixed point in $(x,y)$.
So I was thinking of considering the function $g(x)=f(x)-x$, which we know is continuous. Then we also know that because $f(a) \geq a$ that $g(a)=f(a)-a \geq 0$. Similarly, because $f(b) \leq b$ then $g(b)=f(b)-b \leq 0$.
Can we just use the fact that because $g(x)$ is continuous, $0 \in [g(b),g(a)]$, the IVT says there exists $c \in [a,b]$ such that $g(c)=f(c)-c=0$ so $f(c)=c$? Then we know $c$ is a fixed point.
How do we show that $c$ is in $(x,y)$?
We know that $g(x)=f(x)-x=y-x \neq 0$ and $g(y)=f(y)-y=x-y \neq 0$ but we don't know that those are in $(a,b)$?
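A numerical illustration of the intended argument (the function $f(t)=1-t$ and the swapped pair $x=0.2$, $y=0.8$ are invented for illustration): since $g(x)=y-x$ and $g(y)=x-y$ are nonzero with opposite signs, one can apply the IVT, here realized as bisection, on $[x,y]$ instead of $[a,b]$, which locates a fixed point strictly between $x$ and $y$.

```python
# f swaps x = 0.2 and y = 0.8; g(t) = f(t) - t changes sign on [x, y],
# so bisection on [x, y] converges to a fixed point inside (x, y).
def f(t):
    return 1.0 - t

def bisect_fixed_point(f, lo, hi, tol=1e-12):
    g = lambda t: f(t) - t
    assert g(lo) * g(hi) < 0          # g(x) = y - x and g(y) = x - y: opposite signs
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

c = bisect_fixed_point(f, 0.2, 0.8)
print(c)   # ~0.5, and indeed 0.2 < c < 0.8
```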
|
Basic tutorial: qubit rotation
To see how PennyLane allows the easy construction and optimization of quantum functions, let's consider the simple case of qubit rotation, the PennyLane version of the 'Hello, world!' example.
The task at hand is to optimize two rotation gates in order to flip a single qubit from state \(\left|0\right\rangle\) to state \(\left|1\right\rangle\).
The quantum circuit
In the qubit rotation example, we wish to implement the following quantum circuit:
Breaking this down step-by-step, we first start with a qubit in the ground state \(|0\rangle = \begin{bmatrix}1 & 0 \end{bmatrix}^T\), and rotate it around the x-axis by applying the gate
\(R_x(\phi_1) = e^{-i \phi_1 \sigma_x / 2}\),
and then around the y-axis via the gate
\(R_y(\phi_2) = e^{-i \phi_2 \sigma_y / 2}\).
After these operations the qubit is now in the state
\(\left|\psi\right\rangle = R_y(\phi_2) R_x(\phi_1) \left|0\right\rangle.\)
Finally, we measure the expectation value \(\langle \psi \mid \sigma_z \mid \psi \rangle\) of the Pauli-Z operator
\(\sigma_z = \begin{bmatrix}1 & 0 \\ 0 & -1\end{bmatrix}.\)
Using the above to calculate the exact expectation value, we find that
\(\langle \psi \mid \sigma_z \mid \psi \rangle = \cos(\phi_1)\cos(\phi_2).\)
Depending on the circuit parameters \(\phi_1\) and \(\phi_2\), the output expectation lies between \(1\) (if \(\left|\psi\right\rangle = \left|0\right\rangle\)) and \(-1\) (if \(\left|\psi\right\rangle = \left|1\right\rangle\)).
Let’s see how we can easily implement and optimize this circuit using PennyLane.
Importing PennyLane and NumPy
The first thing we need to do is import PennyLane, as well as the wrapped version of NumPy provided by PennyLane.
import pennylane as qml
from pennylane import numpy as np
Important
When constructing a hybrid quantum/classical computational model with PennyLane, it is important to always import NumPy from PennyLane, not the standard NumPy!
By importing the wrapped version of NumPy provided by PennyLane, you can combine the power of NumPy with PennyLane:
continue to use the classical NumPy functions and arrays you know and love
combine quantum functions (evaluated on quantum hardware/simulators) and classical functions (provided by NumPy)
allow PennyLane to automatically calculate gradients of both classical and quantum functions
Creating a device
Before we can construct our quantum node, we need to initialize a device.
Definition
Any computational object that can apply quantum operations and return a measurement value is called a quantum device.
In PennyLane, a device could be a hardware device (such as the IBM QX4, via the PennyLane-PQ plugin), or a software simulator (such as Strawberry Fields, via the PennyLane-SF plugin).
Tip
Devices are loaded in PennyLane via the function
pennylane.device()
PennyLane supports devices using both the qubit model of quantum computation and devices using the CV model of quantum computation. In fact, even a hybrid computation containing both qubit and CV quantum nodes is possible; see the hybrid computation example for more details.
For this tutorial, we are using the qubit model, so let's initialize the 'default.qubit' device provided by PennyLane: a simple pure-state qubit simulator.
dev1 = qml.device("default.qubit", wires=1)
For all devices, device() accepts the following arguments:
name: the name of the device to be loaded
wires: the number of subsystems to initialize the device with
Here, as we only require a single qubit for this example, we set wires=1.
Constructing the QNode
Now that we have initialized our device, we can begin to construct a quantum node (or QNode).
Definition
QNodes are an abstract encapsulation of a quantum function, described by a quantum circuit. QNodes are bound to a particular quantum device, which is used to evaluate expectation and variance values of this circuit.
First, we need to define the quantum function that will be evaluated in the QNode:
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))
This is a simple circuit, matching the one described above. Notice that the function circuit() is constructed as if it were any other Python function; it accepts a positional argument params, which may be a list, tuple, or array, and uses the individual elements for gate parameters.
However, quantum functions are a restricted subset of Python functions. For a Python function to also be a valid quantum function, there are some important restrictions: Quantum functions must only contain quantum operations, one operation per line, in the order in which they are to be applied.
In addition, we must always specify the subsystem the operation applies to, by passing the wires argument; this may be a list or an integer, depending on how many wires the operation acts on.
For a full list of quantum operations, see supported operations.
Quantum functions must return either a single or a tuple of measured observables.
As a result, the quantum function always returns a classical quantity, allowing the QNode to interface with other classical functions (and also other QNodes).
Quantum functions must not contain any classical processing of circuit parameters.
Note
Certain devices may only support a subset of the available PennyLane operations/observables, or may even provide additional operations/observables. Please consult the documentation for the plugin/device for more details.
We then convert this quantum function into a QNode bound to our device by applying the qnode() decorator directly above the function definition:

@qml.qnode(dev1)
def circuit(params):
    qml.RX(params[0], wires=0)
    qml.RY(params[1], wires=0)
    return qml.expval(qml.PauliZ(0))
Thus, our circuit() quantum function is now a QNode, which will run on device dev1 every time it is evaluated.
To evaluate, we simply call the function with some appropriate numerical inputs:
print(circuit([0.54, 0.12]))
Out:
0.8515405859048367
Calculating quantum gradients
The gradient of the function circuit, encapsulated within the QNode, can be evaluated by utilizing the same quantum device (dev1) that we used to evaluate the function itself.
PennyLane incorporates both analytic differentiation, as well as numerical methods (such as the method of finite differences). Both of these are done automatically.
We can differentiate by using the built-in grad() function. This returns another function, representing the gradient (i.e., the vector of partial derivatives) of circuit. The gradient can be evaluated in the same way as the original function:
dcircuit = qml.grad(circuit, argnum=0)
The function grad() itself returns a function, representing the derivative of the QNode with respect to the argument specified in argnum. In this case, the function circuit takes one argument (params), so we specify argnum=0. Because the argument has two elements, the returned gradient is two-dimensional. We can then evaluate this gradient function at any point in the parameter space.
print(dcircuit([0.54, 0.12]))
Out:
[-0.5104386525165021, -0.10267819945693202]
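As a sanity check: for this circuit the exact expectation value is \(\cos(\phi_1)\cos(\phi_2)\) (straightforward to derive for an RX followed by an RY acting on \(|0\rangle\)), so the analytic partial derivatives reproduce the printed gradient without needing PennyLane at all:

```python
# d/dphi1 [cos(phi1) cos(phi2)] = -sin(phi1) cos(phi2)
# d/dphi2 [cos(phi1) cos(phi2)] = -cos(phi1) sin(phi2)
import math

phi1, phi2 = 0.54, 0.12
d1 = -math.sin(phi1) * math.cos(phi2)
d2 = -math.cos(phi1) * math.sin(phi2)
print(d1, d2)   # matches the output above: -0.51043865..., -0.10267819...
```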
A note on arguments
Quantum circuit functions, being a restricted subset of Python functions, can also make use of multiple positional arguments and keyword arguments. For example, we could have defined the above quantum circuit function using two positional arguments, instead of one array argument:
@qml.qnode(dev1)
def circuit2(phi1, phi2):
    qml.RX(phi1, wires=0)
    qml.RY(phi2, wires=0)
    return qml.expval(qml.PauliZ(0))
When we calculate the gradient for such a function, the usage of argnum will be slightly different. In this case, argnum=0 will return the gradient with respect to only the first parameter (phi1), and argnum=1 will give the gradient for phi2. To get the gradient with respect to both parameters, we can use argnum=[0,1]:
dcircuit = qml.grad(circuit2, argnum=[0, 1])
print(dcircuit(0.54, 0.12))
Out:
(array(-0.51043865), array(-0.1026782))
Keyword arguments may also be used in your custom quantum function. PennyLane does not differentiate QNodes with respect to keyword arguments, so they are useful for passing external data to your QNode.
Optimization¶
Definition
If using the default NumPy/Autograd interface, PennyLane provides a collection of optimizers based on gradient descent. These optimizers accept a cost function and initial parameters, and utilize PennyLane’s automatic differentiation to perform gradient descent.
Tip
See pennylane.optimize for details and documentation of available optimizers.
Next, let’s make use of PennyLane’s built-in optimizers to optimize the two circuit parameters \(\phi_1\) and \(\phi_2\) such that the qubit, originally in state \(\left|0\right\rangle\), is rotated to be in state \(\left|1\right\rangle\). This is equivalent to measuring a Pauli-Z expectation value of \(-1\), since the state \(\left|1\right\rangle\) is an eigenvector of the Pauli-Z matrix with eigenvalue \(\lambda=-1\).
In other words, the optimization procedure will find the weights \(\phi_1\) and \(\phi_2\) that result in the following rotation on the Bloch sphere:
To do so, we need to define a cost function. By minimizing the cost function, the optimizer will determine the values of the circuit parameters that produce the desired outcome.
In this case, our desired outcome is a Pauli-Z expectation value of \(-1\). Since we know that the Pauli-Z expectation is bound between \([-1, 1]\), we can define our cost directly as the output of the QNode:
def cost(var):
    return circuit(var)
To begin our optimization, let’s choose small initial values of \(\phi_1\) and \(\phi_2\):
init_params = np.array([0.011, 0.012])
print(cost(init_params))
Out:
0.9998675058299389
We can see that, for these initial parameter values, the cost function is close to \(1\).
Finally, we use an optimizer to update the circuit parameters for 100 steps. We can use the built-in
pennylane.optimize.GradientDescentOptimizer class:
# initialise the optimizer
opt = qml.GradientDescentOptimizer(stepsize=0.4)

# set the number of steps
steps = 100
# set the initial parameter values
params = init_params

for i in range(steps):
    # update the circuit parameters
    params = opt.step(cost, params)

    if (i + 1) % 5 == 0:
        print("Cost after step {:5d}: {: .7f}".format(i + 1, cost(params)))

print("Optimized rotation angles: {}".format(params))
Out:
Cost after step     5:  0.9961778
Cost after step    10:  0.8974944
Cost after step    15:  0.1440490
Cost after step    20: -0.1536720
Cost after step    25: -0.9152496
Cost after step    30: -0.9994046
Cost after step    35: -0.9999964
Cost after step    40: -1.0000000
Cost after step    45: -1.0000000
Cost after step    50: -1.0000000
Cost after step    55: -1.0000000
Cost after step    60: -1.0000000
Cost after step    65: -1.0000000
Cost after step    70: -1.0000000
Cost after step    75: -1.0000000
Cost after step    80: -1.0000000
Cost after step    85: -1.0000000
Cost after step    90: -1.0000000
Cost after step    95: -1.0000000
Cost after step   100: -1.0000000
Optimized rotation angles: [9.08664625e-17 3.14159265e+00]
We can see that the optimization converges after approximately 40 steps.
Substituting this into the theoretical result \(\langle \psi \mid \sigma_z \mid \psi \rangle = \cos\phi_1\cos\phi_2\), we can verify that this is indeed one possible value of the circuit parameters that produces \(\langle \psi \mid \sigma_z \mid \psi \rangle=-1\), resulting in the qubit being rotated to the state \(\left|1\right\rangle\).
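Because the circuit's expectation value has the closed form cos(phi1)cos(phi2), the whole optimization loop can be replicated without a quantum device. The sketch below is a standalone plain-Python stand-in (these cost/grad functions are analytic substitutes, not the QNode) using the same step size, step count, and initial values as above:

```python
import math

def cost(params):
    # Closed form of the circuit expectation: cos(phi1) * cos(phi2)
    phi1, phi2 = params
    return math.cos(phi1) * math.cos(phi2)

def grad(params):
    # Analytic gradient of cos(phi1) * cos(phi2)
    phi1, phi2 = params
    return [-math.sin(phi1) * math.cos(phi2),
            -math.cos(phi1) * math.sin(phi2)]

params = [0.011, 0.012]  # same as init_params
stepsize = 0.4

for _ in range(100):
    g = grad(params)
    params = [p - stepsize * gi for p, gi in zip(params, g)]

print(cost(params))  # converges to -1
print(params)        # phi1 near 0, phi2 near pi, as in the output above
```

Vanilla gradient descent on this surface reproduces the tutorial's result: the larger initial angle (phi2) escapes toward pi while phi1 is pushed back to 0.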
Note
Some optimizers, such as AdagradOptimizer, have internal hyperparameters that are stored in the optimizer instance. These can be reset using the reset() method.
Continue on to the next tutorial, Gaussian transformation, to see a similar example using continuous-variable (CV) quantum nodes.
Total running time of the script: (0 minutes 0.453 seconds)
|
Smoothing Estimate of Intensity as Function of a Covariate
Computes a smoothing estimate of the intensity of a point process, as a function of a (continuous) spatial covariate.
Usage
rhohat(object, covariate, ...)
# S3 method for ppp
rhohat(object, covariate, ..., baseline=NULL, weights=NULL,
       method=c("ratio", "reweight", "transform"), horvitz=FALSE,
       smoother=c("kernel", "local"), subset=NULL,
       dimyx=NULL, eps=NULL, n=512, bw="nrd0", adjust=1,
       from=NULL, to=NULL, bwref=bw, covname, confidence=0.95, positiveCI)

# S3 method for quad
rhohat(object, covariate, ..., baseline=NULL, weights=NULL,
       method=c("ratio", "reweight", "transform"), horvitz=FALSE,
       smoother=c("kernel", "local"), subset=NULL,
       dimyx=NULL, eps=NULL, n=512, bw="nrd0", adjust=1,
       from=NULL, to=NULL, bwref=bw, covname, confidence=0.95, positiveCI)

# S3 method for ppm
rhohat(object, covariate, ..., weights=NULL,
       method=c("ratio", "reweight", "transform"), horvitz=FALSE,
       smoother=c("kernel", "local"), subset=NULL,
       dimyx=NULL, eps=NULL, n=512, bw="nrd0", adjust=1,
       from=NULL, to=NULL, bwref=bw, covname, confidence=0.95, positiveCI)

# S3 method for lpp
rhohat(object, covariate, ..., weights=NULL,
       method=c("ratio", "reweight", "transform"), horvitz=FALSE,
       smoother=c("kernel", "local"), subset=NULL,
       nd=1000, eps=NULL, random=TRUE, n=512, bw="nrd0", adjust=1,
       from=NULL, to=NULL, bwref=bw, covname, confidence=0.95, positiveCI)

# S3 method for lppm
rhohat(object, covariate, ..., weights=NULL,
       method=c("ratio", "reweight", "transform"), horvitz=FALSE,
       smoother=c("kernel", "local"), subset=NULL,
       nd=1000, eps=NULL, random=TRUE, n=512, bw="nrd0", adjust=1,
       from=NULL, to=NULL, bwref=bw, covname, confidence=0.95, positiveCI)
Arguments
object
A point pattern (object of class "ppp" or "lpp"), a quadrature scheme (object of class "quad") or a fitted point process model (object of class "ppm" or "lppm").
covariate
Either a function(x,y) or a pixel image (object of class "im") providing the values of the covariate at any location. Alternatively one of the strings "x" or "y" signifying the Cartesian coordinates.
weights
Optional weights attached to the data points. Either a numeric vector of weights for each data point, or a pixel image (object of class "im") or a function(x,y) providing the weights.
baseline
Optional baseline for the intensity function. A function(x,y) or a pixel image (object of class "im") providing the values of the baseline at any location.
method
Character string determining the smoothing method. See Details.
horvitz
Logical value indicating whether to use Horvitz-Thompson weights. See Details.
smoother
Character string determining the smoothing algorithm. See Details.
subset
Optional. A spatial window (object of class
"owin") specifying a subset of the data, from which the estimate should be calculated.
dimyx,eps,nd,random
Arguments controlling the pixel resolution at which the covariate will be evaluated. See Details.
bw
Smoothing bandwidth or bandwidth rule (passed to
density.default).
adjust
Smoothing bandwidth adjustment factor (passed to
density.default).
n, from, to
Arguments passed to density.default to control the number and range of values at which the function will be estimated.
bwref
Optional. An alternative value of bw to use when smoothing the reference density (the density of the covariate values observed at all locations in the window).
…
covname
Optional. Character string to use as the name of the covariate.
confidence
Confidence level for confidence intervals. A number between 0 and 1.
positiveCI
Logical value. If TRUE, confidence limits are always positive numbers; if FALSE, the lower limit of the confidence interval may sometimes be negative. Default is FALSE if smoother="kernel" and TRUE if smoother="local". See Details.
Details
This command estimates the relationship between point process intensity and a given spatial covariate. Such a relationship is sometimes called a
resource selection function (if the points are organisms and the covariate is a descriptor of habitat) or a prospectivity index (if the points are mineral deposits and the covariate is a geological variable). This command uses a nonparametric smoothing method which does not assume a particular form for the relationship.
If
object is a point pattern, and
baseline is missing or null, this command assumes that
object is a realisation of a Poisson point process with intensity function \(\lambda(u)\) of the form $$\lambda(u) = \rho(Z(u))$$ where \(Z\) is the spatial covariate function given by
covariate, and \(\rho(z)\) is a function to be estimated. This command computes estimators of \(\rho(z)\) proposed by Baddeley and Turner (2005) and Baddeley et al (2012).
The covariate \(Z\) must have continuous values.
If
object is a point pattern, and
baseline is given, then the intensity function is assumed to be $$\lambda(u) = \rho(Z(u)) B(u)$$ where \(B(u)\) is the baseline intensity at location \(u\). A smoothing estimator of the relative intensity \(\rho(z)\) is computed.
If
object is a fitted point process model, suppose
X is the original data point pattern to which the model was fitted. Then this command assumes
X is a realisation of a Poisson point process with intensity function of the form $$ \lambda(u) = \rho(Z(u)) \kappa(u) $$ where \(\kappa(u)\) is the intensity of the fitted model
object. A smoothing estimator of \(\rho(z)\) is computed.
The estimation procedure is determined by the character strings
method and
smoother and the argument
horvitz. The estimation procedure involves computing several density estimates and combining them. The algorithm used to compute density estimates is determined by
smoother:
If smoother="kernel", the smoothing procedure is based on fixed-bandwidth kernel density estimation, performed by density.default.
If
smoother="local", the smoothing procedure is based on local likelihood density estimation, performed by
locfit.
The
method determines how the density estimates will be combined to obtain an estimate of \(\rho(z)\):
If
method="ratio", then \(\rho(z)\) is estimated by the ratio of two density estimates. The numerator is a (rescaled) density estimate obtained by smoothing the values \(Z(y_i)\) of the covariate \(Z\) observed at the data points \(y_i\). The denominator is a density estimate of the reference distribution of \(Z\).
If
method="reweight", then \(\rho(z)\) is estimated by applying density estimation to the values \(Z(y_i)\) of the covariate \(Z\) observed at the data points \(y_i\), with weights inversely proportional to the reference density of \(Z\).
If
method="transform", the smoothing method is variable-bandwidth kernel smoothing, implemented by applying the Probability Integral Transform to the covariate values, yielding values in the range 0 to 1, then applying edge-corrected density estimation on the interval \([0,1]\), and back-transforming.
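For intuition, the method="ratio" estimator can be sketched outside spatstat. The toy below (plain Python; all helper names are mine, not spatstat's) simulates a one-dimensional Poisson process with intensity exp(3+3x) on [0,1] — the same intensity used in the Examples section — takes Z(u)=x as the covariate, and estimates rho(z) as a rescaled kernel density of the observed covariate values divided by the reference density of Z (uniform on [0,1], so the denominator is 1):

```python
import math, random

random.seed(42)

LAM_MAX = math.exp(6.0)  # maximum of exp(3 + 3x) on [0, 1]

def intensity(x):
    return math.exp(3.0 + 3.0 * x)

# Simulate an inhomogeneous Poisson process on [0, 1] by thinning:
# points of a homogeneous process at rate LAM_MAX are kept with
# probability intensity(x) / LAM_MAX.
points = []
t = 0.0
while True:
    t += random.expovariate(LAM_MAX)
    if t > 1.0:
        break
    if random.random() < intensity(t) / LAM_MAX:
        points.append(t)

def rho_hat(z, bw=0.05):
    # Ratio estimator: n * (Gaussian KDE of covariate values at data points),
    # divided by the reference density of Z (uniform on [0,1] => 1).
    total = 0.0
    for zi in points:
        u = (z - zi) / bw
        total += math.exp(-0.5 * u * u) / (bw * math.sqrt(2 * math.pi))
    return total

print(len(points))                  # roughly (e^6 - e^3)/3, about 128 points
print(rho_hat(0.2), rho_hat(0.8))  # increasing in z, tracking exp(3 + 3z)
```

The estimate is noisy and boundary-biased (spatstat's implementation handles edge correction and bandwidth selection properly), but the increasing shape of rho_hat mirrors the true exp(3+3z).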
If
horvitz=TRUE, then the calculations described above are modified by using Horvitz-Thompson weighting. The contribution to the numerator from each data point is weighted by the reciprocal of the baseline value or fitted intensity value at that data point; and a corresponding adjustment is made to the denominator.
The covariate will be evaluated on a fine grid of locations, with spatial resolution controlled by the arguments
dimyx,eps,nd,random. In two dimensions (i.e. if
object is of class
"ppp",
"ppm" or
"quad") the arguments
dimyx, eps are passed to
as.mask to control the pixel resolution. On a linear network (i.e. if
object is of class
"lpp") the argument
nd specifies the total number of test locations on the linear network,
eps specifies the linear separation between test locations, and
random specifies whether the test locations have a randomised starting position.
If the argument
weights is present, then the contribution from each data point
X[i] to the estimate of \(\rho\) is multiplied by
weights[i].
If the argument
subset is present, then the calculations are performed using only the data inside this spatial region.
Pointwise confidence intervals for the true value of \(\rho(z)\) are also calculated for each \(z\) using the central limit theorem, and will be plotted as grey shading. If
positiveCI=FALSE, the lower limit of the confidence interval may sometimes be negative, because the confidence intervals are based on a normal approximation to the estimate of \(\rho(z)\). If
positiveCI=TRUE, the confidence limits are always positive, because the confidence interval is based on a normal approximation to the estimate of \(\log(\rho(z))\). For consistency with earlier versions, the default is
positiveCI=FALSE for
smoother="kernel" and
positiveCI=TRUE for
smoother="local".
Value
A function value table (object of class
"fv") containing the estimated values of \(\rho\) for a sequence of values of \(Z\). Also belongs to the class
"rhohat" which has special methods for
plot and
predict.
Categorical and discrete covariates
This technique assumes that the covariate has continuous values. It is not applicable to covariates with categorical (factor) values or discrete values such as small integers. For a categorical covariate, use
intensity.quadratcount applied to the result of
quadratcount(X, tess=covariate).
References
Baddeley, A., Chang, Y.-M., Song, Y. and Turner, R. (2012) Nonparametric estimation of the dependence of a point process on spatial covariates.
Statistics and Its Interface 5 (2), 221--236.
Baddeley, A. and Turner, R. (2005) Modelling spatial point patterns in R. In: A. Baddeley, P. Gregori, J. Mateu, R. Stoica, and D. Stoyan, editors,
Case Studies in Spatial Point Pattern Modelling, Lecture Notes in Statistics number 185. Pages 23--74. Springer-Verlag, New York, 2006. ISBN: 0-387-28311-0.
See Also
See
ppm for a parametric method for the same problem.
Aliases
rhohat rhohat.ppp rhohat.quad rhohat.ppm rhohat.lpp rhohat.lppm
Examples
# NOT RUN {
X <- rpoispp(function(x,y){exp(3+3*x)})
rho <- rhohat(X, "x")
rho <- rhohat(X, function(x,y){x})
plot(rho)
curve(exp(3+3*x), lty=3, col=2, add=TRUE)
rhoB <- rhohat(X, "x", method="reweight")
rhoC <- rhohat(X, "x", method="transform")
# }
# NOT RUN {
fit <- ppm(X, ~x)
rr <- rhohat(fit, "y")
# linear network
Y <- runiflpp(30, simplenet)
rhoY <- rhohat(Y, "y")
# }
Documentation reproduced from package spatstat, version 1.59-0, License: GPL (>= 2)
|
Let $A:=\{(x,x)\mid x\in[0,1]\}\subset[0,1]\times[0,1]$.
I see $A$ as $A=\{(x,y)\mid x\in[0,1],\ x=y\}$, so it is the straight line segment with endpoints $(0,0)$ and $(1,1)$, if I understand this construction.
Is this set countable?
I could not find a suitable injective function from $A$ to $\mathbb{N}$ so I was thinking of taking other approach through countable unions.
Can we write $A$ as a countable union of the points it contains, $A=\displaystyle\bigcup_{i=1}^{\infty}\{(x_{i},y_{i})\mid x_{i}\in[0,1],\ x_{i}=y_{i}\}$? Would that work? How can I show it is countable, or show otherwise?
|
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if not typesetting includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash wat did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead) but it still overprinted when in the \ialign construct, but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html with latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given
x=\frac{-b\pm\sqrt{b^2-4ac}}{2a}
Make a small html file that looks like
<!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
|
Lesson 13: Proportional Hazards Regression
After reading this lesson material, you will be able to:
differentiate between proportional hazards regression and logistic regression
define a hazard ratio
Suppose you wish to consider the impact of a risk factor on the
time to the occurrence of an event. For example, Arnlov et al (2010) consider the impact of body mass index (BMI) and metabolic syndrome on the development of cardiovascular disease and death in middle-aged men. The associations were investigated using data from a cohort study of 1758 middle-aged Swedish men residing in one county with over 30 years of follow-up. The figure below depicts the time to a major cardiovascular event by BMI category and presence (B) or absence (A) of metabolic syndrome. Is there a difference in these survival curves?
Figure 2: Kaplan-Meier curves for major cardiovascular events in different BMI categories in individuals without MetS (A) and with MetS (B).
(Figures reproduced from Arnlov, J et al. Impact of Body Mass Index and the Metabolic Syndrome on the Risk of Cardiovascular Disease and Death in Middle-Aged Men.
Circulation 2010; 121; 230-236, originally published online 12/28/2009; DOI 10.1161/CIRCULATIONAHA.109.88752)
Q: The occurrence of a major cardiovascular event is a binary response. Would logistic regression, with BMI as a predictor variable, be appropriate to analyze these data?
A: The relationship between the presence or absence of a major cardiovascular event and the predictor variable could be assessed with logistic regression at a particular time, but this would not directly compare the survival curves. A survival analysis would compare the curves on the basis of time to the event.
Survival analysis methods, such as proportional hazards regression differ from logistic regression by assessing a rate instead of a proportion.
Proportional hazards regression, also called Cox regression, models the incidence or hazard rate, the number of new cases of disease per population at-risk per unit time. If the outcome is death, this is the mortality rate. The hazard function is the instantaneous rate at which the event occurs among individuals who have survived to time t.
Logistic regression in contrast, considers the
proportion of new cases that develop in a given time period, i.e. the cumulative incidence. Logistic regression estimates the odds ratio; proportional hazards regression estimates the hazard ratio.
The
proportional hazards model is as follows:
Let \(\lambda(t|X_{1i}, X_{2i}, \cdots ,X_{Ki})\) denote the hazard function for the i
th person at time \(t, i = 1, 2, \cdots , n\), where the K regressors are denoted as \(X_{1i}, X_{2i}, \cdots , X_{Ki}\). The baseline hazard function at time t, i.e., when \(X_{1i} = 0, X_{2i} = 0, ... , X_{Ki} = 0\), is denoted as \(\lambda_0 (t)\). The baseline hazard function is analogous to the intercept term in a multiple regression or logistic regression model. Notice the baseline hazard function is not specified, but must be positive.
The hazard ratio, \(\lambda_1 (t) / \lambda_0 (t)\) can be regarded as the relative risk of the event occurring at time
t.
The log of the hazard ratio, i.e. the hazard function divided by the baseline hazard function at time
t, is a linear combination of parameters and regressors, i.e.,
\[log\left ( \frac{\lambda \left ( t|X_{1i},X_{2i},...,X_{Ki} \right )}{\lambda_{0}(t) } \right )= \beta_{1}X_{1i}+\beta_{2}X_{2i}+...+\beta_{K}X_{Ki}\]
The ratio of hazard functions can be considered a ratio of risk functions, so the proportional hazards regression model can be considered as function of relative risk (while logistic regression models are a function of an odds ratio). Changes in a covariate have a
multiplicative effect on the baseline risk. The model in terms of the hazard function at time t is:
\[\lambda \left ( t|X_{1i},X_{2i},...,X_{Ki} \right )=\lambda_{0} (t)exp\left ( \beta_{1}X_{1i}+\beta_{2}X_{2i}+...+\beta_{K}X_{Ki} \right )\]
Although no particular probability model is selected to represent the survival times, proportional hazards regression does have an important assumption: the hazard for any individual is a fixed proportion of the hazard for any other individual. (i.e.,
proportional hazards). Notice if \(\lambda_0 (t)\) is the hazard function for a subject with all the predictor values equal to zero and \(\lambda_1 (t)\) is the hazard function for a subject with other values for the predictor variables, then the hazard ratio depends only on the predictor variables and not on time t. This assumption means if a covariate doubles the risk of the event on day one, it also doubles the risk of the event on any other day.
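The proportionality property can be checked numerically on the model formula itself. In this minimal Python sketch (the value of \(\beta\) and the baseline hazard shape are made-up illustrations, not estimates from any data), the hazard ratio between two covariate values equals \(\exp(\beta \Delta X)\) at every time point, regardless of how the baseline hazard varies with time:

```python
import math

beta = 0.7  # hypothetical log hazard ratio for a single covariate

def baseline_hazard(t):
    # Any positive, time-varying baseline works; the shape here is arbitrary.
    return 0.01 * (2.0 + math.sin(t))

def hazard(t, x):
    # Cox model: lambda(t | x) = lambda_0(t) * exp(beta * x)
    return baseline_hazard(t) * math.exp(beta * x)

# Ratio of hazards for x = 2 vs x = 1 at several times:
ratios = [hazard(t, 2.0) / hazard(t, 1.0) for t in (0.5, 1.0, 5.0, 10.0)]
print(ratios)  # every entry equals exp(beta * (2.0 - 1.0)) = exp(0.7)
```

The baseline cancels in the ratio, which is exactly the "doubles the risk on day one, doubles it on any other day" statement above.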
Proportional hazards models can be used for discrete or continuous measures of event time and can incorporate time-dependent covariates (covariates whose values that may change during the observation period). Using proportional hazards regression, covariate-adjusted hazard (or risk) ratios can be produced.
Let’s return to the original question posed by Arnlov and colleagues…do BMI and metabolic syndrome affect the development of cardiovascular disease? Read the Arnlov et al. Impact of Body Mass Index and the Metabolic Syndrome on the Risk of Cardiovascular Disease and Death in Middle Aged Men
Circulation 2010;121;230-236, giving particular attention to the statistical methods, results and conclusions.
Compare and contrast the
proportional hazards regression approach to a logistic regression approach by reading Franco et al. Trajectories of Entering the Metabolic Syndrome: The Framingham Heart Study. Circulation 2009;120;1943-1950; originally published online Nov 2, 2009; American Heart Association. 7272 Greenville Avenue, Dallas, TX DOI: 10.1161/CIRCULATIONAHA.109.855817
You may also compare the results of the two studies. Both papers are in the Readings folder for Week 14.
(accessed 4/16/2010 at https://www.oxfordjournals.org/our_journals/tropej/online/ma_chap12.pdf )
|
These three circuits are all equivalent:
(A) A resistor at nonzero temperature, which has Johnson noise;
(B) A noiseless resistor in series with a noise-creating voltage source (i.e. the Thévenin equivalent circuit);
(C) A noiseless resistance in parallel with a noise-creating current source (i.e. the Norton equivalent circuit).
Johnson–Nyquist noise ( thermal noise, Johnson noise, or Nyquist noise) is the electronic noise generated by the thermal agitation of the charge carriers (usually the electrons) inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. The generic, statistical physical derivation of this noise is called the fluctuation-dissipation theorem, where generalized impedance or generalized susceptibility is used to characterize the medium.
Thermal noise in an ideal resistor is approximately white, meaning that the power spectral density is nearly constant throughout the frequency spectrum (however see the section below on extremely high frequencies). When limited to a finite bandwidth, thermal noise has a nearly Gaussian amplitude distribution.
[1]
This type of noise was first measured by John B. Johnson at Bell Labs in 1926.
[2] [3] He described his findings to Harry Nyquist, also at Bell Labs, who was able to explain the results. [4]
Noise voltage and power
Thermal noise is distinct from shot noise, which consists of additional current fluctuations that occur when a voltage is applied and a macroscopic current starts to flow. For the general case, the above definition applies to charge carriers in any type of conducting medium (e.g. ions in an electrolyte), not just resistors. It can be modeled by a voltage source representing the noise of the non-ideal resistor in series with an ideal noise free resistor.
The one-sided power spectral density, or voltage variance (mean square) per hertz of bandwidth, is given by
\overline {v_{n}^2} = 4 k_\text{B} T R
where
k_\text{B} is Boltzmann's constant in joules per kelvin, T is the resistor's absolute temperature in kelvin, and R is the resistance in ohms (Ω). For a quick calculation at room temperature:
\sqrt{\overline {v_{n}^2}} = 0.13 \sqrt{R} ~\mathrm{nV}/\sqrt{\mathrm{Hz}}.
For example, a 1 kΩ resistor at a temperature of 300 K has
\sqrt{\overline {v_{n}^2}} = \sqrt{4 \cdot 1.38 \cdot 10^{-23}~\mathrm{J}/\mathrm{K} \cdot 300~\mathrm{K} \cdot 1~\mathrm{k}\Omega} = 4.07 ~\mathrm{nV}/\sqrt{\mathrm{Hz}}.
For a given bandwidth, the root mean square (RMS) of the voltage, v_{n}, is given by
v_{n} = \sqrt{\overline {v_{n}^2}}\sqrt{\Delta f } = \sqrt{ 4 k_\text{B} T R \Delta f }
where Δ
f is the bandwidth in hertz over which the noise is measured. For a 1 kΩ resistor at room temperature and a 10 kHz bandwidth, the RMS noise voltage is 400 nV. [5] A useful rule of thumb to remember is that 50 Ω at 1 Hz bandwidth correspond to 1 nV noise at room temperature.
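These formulas are easy to verify numerically. A short Python sketch using the values from this section (a 1 kΩ resistor at 300 K over a 10 kHz bandwidth):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def johnson_vrms(R, T, df):
    # RMS noise voltage over bandwidth df: sqrt(4 k_B T R df)
    return math.sqrt(4.0 * K_B * T * R * df)

v = johnson_vrms(R=1e3, T=300.0, df=1e4)
print(v)  # about 4.07e-7 V, i.e. the ~400 nV quoted above

# Rule-of-thumb check: the spectral density in nV/sqrt(Hz) should be
# roughly 0.13 * sqrt(R) at room temperature.
density_nv = johnson_vrms(1e3, 300.0, 1.0) * 1e9
print(density_nv)  # about 4.07 nV/sqrt(Hz) for R = 1 kOhm
```

The same function also reproduces the 50 Ω rule of thumb: at 1 Hz bandwidth it gives roughly 0.9 nV, on the order of the 1 nV mentioned above.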
A resistor in a short circuit dissipates a noise power of
P = {v_{n}^2}/R = 4 k_\text{B} \,T \Delta f.
The noise generated at the resistor can transfer to the remaining circuit; the maximum noise power transfer happens with impedance matching when the Thévenin equivalent resistance of the remaining circuit is equal to the noise generating resistance. In this case each one of the two participating resistors dissipates noise in both itself and in the other resistor. Since only half of the source voltage drops across any one of these resistors, the resulting noise power is given by
P = k_\text{B} \,T \Delta f
where
P is the thermal noise power in watts. Notice that this is independent of the noise-generating resistance.
Noise current
The noise source can also be modeled by a current source in parallel with the resistor by taking the Norton equivalent, which corresponds simply to dividing by R. This gives the root mean square value of the current source as:
i_n = \sqrt {{4 k_\text{B} T \Delta f } \over R}.
Thermal noise is intrinsic to all resistors and is not a sign of poor design or manufacture, although resistors may also have excess noise.
Noise power in decibels
Signal power is often measured in dBm (decibels relative to 1 milliwatt). From the equation above, noise power in a resistor at room temperature, in dBm, is then:
P_\mathrm{dBm} = 10\ \log_{10}(k_\text{B} T \Delta f \times 1000)
where the factor of 1000 is present because the power is given in milliwatts, rather than watts. This equation can be simplified by separating the constant parts from the bandwidth:
P_\mathrm{dBm} = 10\ \log_{10}(k_\text{B} T \times 1000) + 10\ \log_{10}(\Delta f)
which is more commonly seen approximated for room temperature (T = 300 K) as:
P_\mathrm{dBm} = -174 + 10\ \log_{10}(\Delta f)
where \Delta f is given in Hz; e.g., for a noise bandwidth of 40 MHz, \Delta f is 40,000,000.
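A quick sketch comparing the exact kT formula with the room-temperature −174 dBm shortcut (plain Python; the table rows below can be used as reference values):

```python
import math

K_B = 1.380649e-23  # Boltzmann's constant, J/K

def noise_dbm(df, T=300.0):
    # Exact form: 10 log10(k_B T df * 1000), power referenced to 1 mW
    return 10.0 * math.log10(K_B * T * df * 1000.0)

def noise_dbm_approx(df):
    # Common room-temperature shortcut
    return -174.0 + 10.0 * math.log10(df)

for df in (1.0, 1e6, 40e6, 1e9):
    print(df, noise_dbm(df), noise_dbm_approx(df))
```

At T = 300 K the exact value is about −173.8 dBm/Hz, so both forms agree with the tabulated −174, −114, −98, and −84 dBm figures to within a fraction of a dB.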
Using this equation, noise power for different bandwidths is simple to calculate:
Bandwidth (\Delta f) | Thermal noise power | Notes
1 Hz | −174 dBm |
10 Hz | −164 dBm |
100 Hz | −154 dBm |
1 kHz | −144 dBm |
10 kHz | −134 dBm | FM channel of 2-way radio
15 kHz | −132.24 dBm | One LTE subcarrier
100 kHz | −124 dBm |
180 kHz | −121.45 dBm | One LTE resource block
200 kHz | −121 dBm | GSM channel
1 MHz | −114 dBm | Bluetooth channel
2 MHz | −111 dBm | Commercial GPS channel
3.84 MHz | −108 dBm | UMTS channel
6 MHz | −106 dBm | Analog television channel
20 MHz | −101 dBm | WLAN 802.11 channel
40 MHz | −98 dBm | WLAN 802.11n 40 MHz channel
80 MHz | −95 dBm | WLAN 802.11ac 80 MHz channel
160 MHz | −92 dBm | WLAN 802.11ac 160 MHz channel
1 GHz | −84 dBm | UWB channel

Thermal noise on capacitors
Thermal noise on capacitors is referred to as
kTC noise. Thermal noise in an RC circuit has an unusually simple expression, as the value of the resistance (R) drops out of the equation. This is because higher R contributes to more filtering as well as to more noise. The noise bandwidth of the RC circuit is 1/(4RC), [6] which can be substituted into the above formula to eliminate R. The mean-square and RMS noise voltage generated in such a filter are: [7] \overline {v_{n}^2} = k_\text{B} T / C and v_{n} = \sqrt{ k_\text{B} T / C }.
Thermal noise in the resistor accounts for 100% of
kTC noise.
In the extreme case of the
reset noise left on a capacitor by opening an ideal switch, the resistance is infinite, yet the formula still applies; however, now the RMS must be interpreted not as a time average, but as an average over many such reset events, since the voltage is constant when the bandwidth is zero. In this sense, the Johnson noise of an RC circuit can be seen to be inherent, an effect of the thermodynamic distribution of the number of electrons on the capacitor, even without the involvement of a resistor.
The noise is not caused by the capacitor itself, but by the thermodynamic equilibrium of the amount of charge on the capacitor. Once the capacitor is disconnected from a conducting circuit, the thermodynamic fluctuation is
frozen at a random value with standard deviation as given above.
The reset noise of capacitive sensors is often a limiting noise source, for example in image sensors. As an alternative to the voltage noise, the reset noise on the capacitor can also be quantified as the electrical charge standard deviation, as
Q_{n} = \sqrt{ k_\text{B} T C }.
Since the charge variance is k_\text{B} T C, this noise is often called
kTC noise.
Any system in thermal equilibrium has state variables with a mean energy of
kT/2 per degree of freedom. Using the formula for energy on a capacitor (E = ½CV²), mean noise energy on a capacitor can be seen to also be ½C(kT/C), or also kT/2. Thermal noise on a capacitor can be derived from this relationship, without consideration of resistance.
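The two equivalent ways of quantifying kTC noise (voltage and charge) can be checked numerically; this is a sketch with hypothetical function names, assuming T = 300 K:

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def ktc_noise_voltage(capacitance_f, temperature_k=300.0):
    """RMS kTC noise voltage sqrt(kT/C) on a capacitor."""
    return math.sqrt(K_B * temperature_k / capacitance_f)

def ktc_noise_electrons(capacitance_f, temperature_k=300.0):
    """Charge standard deviation sqrt(kTC), expressed in electrons."""
    return math.sqrt(K_B * temperature_k * capacitance_f) / Q_E

print(ktc_noise_voltage(1e-12))    # about 64 µV for a 1 pF capacitor
print(ktc_noise_electrons(1e-12))  # about 400 electrons
```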
The
kTC noise is the dominant noise source for small capacitors.
Noise of capacitors at 300 K
Capacitance | \sqrt{ k_\text{B} T / C } | Electrons
1 fF | 2 mV | 12.5 e−
10 fF | 640 µV | 40 e−
100 fF | 200 µV | 125 e−
1 pF | 64 µV | 400 e−
10 pF | 20 µV | 1250 e−
100 pF | 6.4 µV | 4000 e−
1 nF | 2 µV | 12500 e−

Generalized forms
The 4 k_\text{B} T R voltage noise described above is a special case for a purely resistive component for low frequencies. In general, the thermal electrical noise continues to be related to resistive response in many more generalized electrical cases, as a consequence of the fluctuation-dissipation theorem. Below a variety of generalizations are noted. All of these generalizations share a common limitation, that they only apply in cases where the electrical component under consideration is purely passive and linear.
Reactive impedances
Nyquist's original paper also provided the generalized noise for components having partly reactive response, e.g., sources that contain capacitors or inductors.
[4] Such a component can be described by a frequency-dependent complex electrical impedance Z(f). The formula for the power spectral density of the series noise voltage is S_{v_n v_n}(f) = 4 k_\text{B} T \eta(f) \operatorname{Re}[Z(f)].
The function \eta(f) is simply equal to 1 except at very high frequencies (see below).
The real part of impedance, \operatorname{Re}[Z(f)], is in general frequency dependent and so the Johnson–Nyquist noise is not white noise. The rms noise voltage over a span of frequencies f_1 to f_2 can be found by integration of the power spectral density:
\sqrt{\langle v_n^2 \rangle} = \sqrt{\int_{f_1}^{f_2} S_{v_n v_n}(f) df}.
Alternatively, a parallel noise current can be used to describe Johnson noise, its power spectral density being
S_{i_n i_n}(f) = 4 k_\text{B} T \eta(f) \operatorname{Re}[Y(f)].
where Y(f) = 1/Z(f) is the electrical admittance; note that \operatorname{Re}[Y(f)] = \operatorname{Re}[Z(f)]/|Z(f)|^2.
Quantum effects at high frequencies
Nyquist also pointed out that quantum effects analogous to Planck's law occur for very high frequencies.
[4] The function \eta(f) is in general given by \eta(f) = \frac{hf/k_\text{B} T}{e^{hf/k_\text{B} T} - 1},
where h is Planck's constant. At very high frequencies f \gtrsim k_\text{B} T/h, the function \eta(f) starts to exponentially decrease to zero. At room temperature this transition occurs in the terahertz, far beyond the capabilities of conventional electronics, and so it is valid to set \eta(f)=1 for conventional electronics work.
Multiport electrical networks
Richard Q. Twiss extended Nyquist's formulas to multi-port passive electrical networks, including non-reciprocal devices such as circulators and isolators.
[8] Thermal noise appears at every port, and can be described as random series voltage sources in series with each port. The random voltages at different ports may be correlated, and their amplitudes and correlations are fully described by a set of cross-spectral density functions relating the different noise voltages, S_{v_m v_n}(f) = 2 k_\text{B} T \eta(f) (Z_{mn}(f) + Z_{nm}(f)^*)
where the Z_{mn} are the elements of the impedance matrix \mathbf{Z}. Again, an alternative description of the noise is instead in terms of parallel current sources applied at each port. Their cross-spectral density is given by
S_{i_m i_n}(f) = 2 k_\text{B} T \eta(f) (Y_{mn}(f) + Y_{nm}(f)^*)
where \mathbf{Y} = \mathbf{Z}^{-1} is the admittance matrix.
Continuous electrodynamic media
The full generalization of Nyquist noise is found in fluctuation electrodynamics, which describes the noise current density inside continuous media with dissipative response in a continuous response function such as dielectric permittivity or magnetic permeability. The equations of fluctuation electrodynamics provide a common framework for describing both Johnson–Nyquist noise and free space blackbody radiation.
[9]

See also

References

^ John R. Barry, Edward A. Lee, and David G. Messerschmitt (2004). Digital Communications. Springer. p. 69.
^ "Proceedings of the American Physical Society: Minutes of the Philadelphia Meeting December 28, 29, 30, 1926", Phys. Rev. 29, pp. 367–368 (1927) – a February 1927 publication of an abstract for a paper entitled "Thermal agitation of electricity in conductors", presented by Johnson during the December 1926 APS Annual Meeting.
^ J. Johnson, "Thermal Agitation of Electricity in Conductors", Phys. Rev. 32, 97 (1928) – details of the experiment.
^ a b c H. Nyquist, "Thermal Agitation of Electric Charge in Conductors", Phys. Rev. 32, 110 (1928) – the theory.
^ Google Calculator result for 1 kΩ room temperature 10 kHz bandwidth.
^ Kent H. Lundberg, see pdf, page 10: http://web.mit.edu/klund/www/papers/UNP_noise.pdf
^ R. Sarpeshkar, T. Delbruck, and C. A. Mead, "White noise in MOS transistors and resistors", IEEE Circuits Devices Mag., pp. 23–29, Nov. 1993.
^ Twiss, R. Q. (1955). "Nyquist's and Thevenin's Theorems Generalized for Nonreciprocal Linear Networks". Journal of Applied Physics 26 (5): 599.
^
This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).
External links
Amplifier noise in RF systems Thermal noise (undergraduate) with detailed math Johnson–Nyquist noise or thermal noise calculator – volts and dB Derivation of the Nyquist relation using a random electric field, H. Sonoda Applet of the thermal noise.
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to work with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish among 2 cases. First, suppose that the sylow $q$-subgroup of $G/F$ acts non-trivially on the sylow $p$-subgroup of $F$. Then $q\mid(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + c \implies G = C(1+x)e^{-x}$, which is obviously not a polynomial, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
|
In the experts problem, $n$ experts give you binary predictions on a daily basis, and you have to predict whether it's going to rain tomorrow.
That is, at day $t$, you know the past predictions of the experts, the actual weather for days $1,2,\ldots t$, and the experts' predictions for tomorrow, and have to predict whether it will rain the next day.
In the classic Weighted Majority algorithm, the algorithm makes $O(\log n + m)$ mistakes, where $m$ is the number of mistakes of the best expert.
To me, this seems like an extremely weak promise, as it does not allow any benefit from combining predictions of several experts.
Assume that each outcome is $\{\pm 1\}$, prediction of expert $i$ on day $t$ is $p_{i,t}$, and the outcome of day $t$ is $o_t$. We can define an ``optimal weighted majority'' adversary as an optimal weight function $w\in\Delta([n])$, such that the decision made by the adversary on day $t$ is defined as $sign(w\cdot p_t)$, i.e. the weighted majority of the predictions, with respect to the vector $w$. Using this notation, the previous adversary (best expert) could only pick unit vectors.
We can then define the optimal error for days $1,2,\ldots T$ as: $$E = \frac{1}{2}\min_{w\in\Delta([n])} \sum_{t=1}^T|sign(w\cdot p_t)-o_t|$$
How would you minimize the regret, compared to $E$?
To see that this is a much more powerful adversary, consider the case of $3$ experts and $3$ days in which the outcome was always $1$. If $p_1=(1,1,-1), p_2 = (1,-1,1), p_3=(-1,1,1)$, then each expert had a mistake, but a weighted majority vector of $(1/3,1/3,1/3)$ had none.
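The example above can be checked by brute force; this is an illustrative sketch only (variable names are mine), with rows indexed by days and columns by experts:

```python
# p_t for days t = 1..3, entries are the three experts' predictions
preds = [(1, 1, -1), (1, -1, 1), (-1, 1, 1)]
outcomes = [1, 1, 1]

# every single expert (a unit-vector w) makes exactly one mistake
expert_errors = [
    sum(p[i] != o for p, o in zip(preds, outcomes)) for i in range(3)
]

def sign(x):
    return 1 if x >= 0 else -1

# the uniform weight vector w = (1/3, 1/3, 1/3) makes none
w = (1/3, 1/3, 1/3)
majority_errors = sum(
    sign(sum(wi * pi for wi, pi in zip(w, p))) != o
    for p, o in zip(preds, outcomes)
)
print(expert_errors, majority_errors)  # [1, 1, 1] 0
```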
|
This may be a sledgehammer for your problem but there is a general theorem.
Poincare theorem$\color{blue}{{}^{[1]}}$
Given any recurrence relation with non-constant coefficients
$$x_{n+k} + p_1(n)x_{n+k-1} + p_2(n)x_{n+k-2} + \cdots + p_k(n) x_{n} = 0\tag{*1}$$
such that there are real numbers $p_i, 1 \le i \le k$ with
$$\lim_{n\to\infty} p_i(n) = p_i, \quad 1 \le i \le k$$
and the roots $\lambda_1, \lambda_2 \ldots, \lambda_k$ for the associated characteristic equation:
$$\lambda^k + p_1 \lambda^{k-1} + \cdots + p_k = 0$$
have distinct moduli.
For any solution of $(*1)$, either $x_n = 0$ for all large $n$
or $\displaystyle\;\lim_{n\to\infty} \frac{x_{n+1}}{x_n} = \lambda_i\;$ for some $i$.
In short, if the coefficients of a recurrence relation converge and the corresponding $|\lambda_i|$ are distinct, then either the sequence $x_n$ terminates (i.e. infinite radius of convergence) or the radius of convergence is one of $\frac{1}{|\lambda_i|}$.
For your case, it is clear $c_n$ doesn't terminate. Since the characteristic equation of your sequence "converges" to $\lambda - \frac15 = 0$, its radius of convergence is $5$.
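A quick numerical illustration (not your recurrence; a made-up one with the same flavor): for $x_{n+2} = (1+\frac1n)x_{n+1} + (2+\frac1n)x_n$ the limiting characteristic equation is $\lambda^2-\lambda-2=0$ with roots $2$ and $-1$, distinct in modulus, and the ratio $x_{n+1}/x_n$ indeed approaches the dominant root:

```python
# Iterate the ratio r_n = x_{n+1}/x_n for
#   x_{n+2} = (1 + 1/n) x_{n+1} + (2 + 1/n) x_n,  x_0 = x_1 = 1,
# whose limiting characteristic equation is λ^2 - λ - 2 = 0 (roots 2 and -1).
r = 1.0
for n in range(1, 10000):
    r = (1 + 1/n) + (2 + 1/n) / r
print(r)  # close to 2, the root of largest modulus
```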
Notes/References $\color{blue}{[1]}$ - Saber Elaydi, An Introduction to difference equations, $\S 8.2$ Poincare theorem.
|
Check My Steps
Quickly and easily check your algebra steps using this simple GeoGebra utility.
Enter each step in the math box below, and press the ADD STEP (+) button. After each step, or at the end, tap Check My Steps. You may also tap Show Editor and enter your steps directly into the Editor. (If you need to make corrections to your steps, the Editor is the place to do that; then just Check My Steps again!)
Hint: When entering mathematical expressions in the math box, use the space key to step out of fractions, powers, etc. More...
Back to top
Type simple mathematical expressions and equations as you would normally enter these: for example, "x^2[space]-4x+3", and "2/3[space]". For more interesting elements, use Latex notation (prefix commands such as "sqrt" and "nthroot3" with a backslash (\)): for example: "\sqrt(2)[space][space]".
Note: To check indefinite integrals (such as \int(x)) do not include the constant term in your answer.
Some to try...

Cube root of 8
GeoGebra: nroot(8,3)
Enter: \nthroot3[space]8[space]

Greatest Common Divisor of 24 and 32
GeoGebra: GCD(24,32)
Enter: gcd(24,32)

Derivative of \(y=x^2\)
GeoGebra: Derivative(\(x^2\))
Enter: d/dx[space](x^2) (Be sure to include parentheses around the expression!)

Tangent of \(y=x^2\) at x = 1
GeoGebra: Tangent(1,\(x^2\))
Enter: tngnt(1,x^2)

Integral of \(y=x^2\) between 0 and 1
GeoGebra: Integral(\(x^2\),0,1)
Enter: \int_0[space]^1[space] (x^2 )[space][space][space]dx (Be sure to include parentheses around the expression!)

Sum of \(x^2\) from x = 1 to 10
GeoGebra: Sum(\(x^2,x,0,10\))
Enter: \sum(x=0[space]10[space]x^2[space])

Complete the Square for \(x^2+4x+3\)
GeoGebra: CompleteSquare(\(x^2+4x+3\))
Enter: cs(x^2 [space]+4x+3)

Factorise \(x^2+4x+3\)
GeoGebra: Factor(\(x^2+4x+3\))
Enter: factor(x^2 [space]+4x+3)

Complex Factors of \(x^2+4x+3\)
GeoGebra: CFactor(\(x^2+4x+3\))
Enter: cfactor(x^2 [space]+4x+3) (CAS only)

Solve \(x^2+4x+3=0\)
GeoGebra: Solve(\(x^2+4x+3=0\))
Enter: solve(x^2 [space]+4x+3) (CAS only)

Complex Solve \(x^2+4x+3=0\)
GeoGebra: CSolve(\(x^2+4x+3=0\))
Enter: csolve(x^2 [space]+4x+3) (Input Field only)

Continued Fraction for \(pi\)
GeoGebra: ContinuedFraction(\(pi\))
Enter: cf(\Pi[space]) (Input Field only)

Stat Plots: BarChart
GeoGebra: BarChart(dataSet1,dataSet2)
Enter: barchart(dataSet1,dataset2) (Input Field only)

Stat Plots: BoxPlot
GeoGebra: BoxPlot(1,1,dataSet1)
Enter: boxplot(1,1,dataSet1) (Input Field only)

Stat Plots: DotPlot
GeoGebra: DotPlot(dataSet1)
Enter: dotplot(dataSet1) (Input Field only)

Stat Plots: StemPlot
GeoGebra: StemPlot(dataSet1)
Enter: stemplot(dataSet1)
This utility uses MathQuill for live entry of mathematical terms, and MathJax to present the mathematics beautifully. A latex2math.js function converts the latex form produced by MathQuill into a text form which can be read by GeoGebra.
|
In Finding Lexicographic Orders for Termination Proofs in Isabelle/HOL the authors construct a method for proving termination of functions based on constructing a matrix that registers, for each row, the recursive calls in the body and, for each column, a measure. The matrix entries contain the information about the decrease of the measure on that specific call.
As an example of mutually recursive functions (page 46) they give:
even 0 = True
even (Suc n) = odd n
odd 0 = False
odd (Suc n) = even n
they state that they convert these functions into a single function over the sum type nat+nat. Apparently this transformation is explained in this thesis (page 117).
What would the transformed function look like?
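For intuition only, here is my guess at the shape of the combined function, written in Python with a tagged pair standing in for a sum-type value (("Inl", n) encodes a call "even n", ("Inr", n) a call "odd n"; this is not Isabelle's actual construction):

```python
def even_odd(x):
    """Single function over nat + nat: ("Inl", n) ~ even n, ("Inr", n) ~ odd n."""
    tag, n = x
    if tag == "Inl":   # even 0 = True;  even (Suc n) = odd n
        return True if n == 0 else even_odd(("Inr", n - 1))
    else:              # odd 0 = False;  odd (Suc n) = even n
        return False if n == 0 else even_odd(("Inl", n - 1))

print(even_odd(("Inl", 4)))  # True  (even 4)
print(even_odd(("Inr", 3)))  # True  (odd 3)
```

The two mutually recursive functions become one self-recursive function, so a single termination relation ?R on the sum type suffices.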
They go on stating that this transformation give the following proof obligations:
wf ?R
$\bigwedge n.\ (\text{Inr } n, \text{Inl}\,(\text{Suc } n)) \in\ ?R$
$\bigwedge n.\ (\text{Inl } n, \text{Inr}\,(\text{Suc } n)) \in\ ?R$
where Inl, Inr are the injection functions for sum types.
Do you understand why these proof obligations are derived? What are their meaning?
Finally, the step in which I am most interested. They introduce measure functions on this sum type in order to prove termination. The set of measures for $t_1 + t_2$ is:
$M(t_1+t_2) =\{case_+ m_1 m_2 | m_1 \in M(t_1),m_2 \in M(t_2)\}$
and they explain that this means that the measures are built taking all combinations for measures of $t_1$,$t_2$ and combining them with a case combinator for sum types which has signature $case_+ :: (\alpha \Rightarrow \gamma) \Rightarrow (\beta \Rightarrow \gamma) \Rightarrow (\alpha + \beta \Rightarrow \gamma)$.
Could you explain what the case combinator does?
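A hedged sketch of what the case combinator does, again with tagged pairs standing in for sum-type values (names are mine): it builds one function on $t_1+t_2$ from a measure $m_1$ on $t_1$ and a measure $m_2$ on $t_2$:

```python
def case_sum(m1, m2):
    """case_+ m1 m2: apply m1 to left injections, m2 to right injections."""
    def measure(x):
        tag, v = x
        return m1(v) if tag == "Inl" else m2(v)
    return measure

# combine a measure on each summand into one measure on the sum type
m = case_sum(lambda n: n, lambda n: 2 * n + 1)
print(m(("Inl", 3)))  # 3: first measure applied
print(m(("Inr", 3)))  # 7: second measure applied
```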
So summarizing, I need a clarification on the construction idea, assuming no knowledge on sum types.
|
Prosjek
You are given an array of $N$ integers. Find a consecutive subsequence of length at least $K$ that has the maximal possible average.
Input
The first line of input contains two integers $N$ ($1 \leq N \leq 3 \cdot 10^5$) and $K$ ($1 \leq K \leq N$). The second line of input contains $N$ integers $a_ i$ ($1 \leq a_ i \leq 10^6$).
Output
The first and only line of output must contain the maximal possible average. An absolute deviation of $\pm 0.001$ from the official solution is permitted.
Sample Input 1 Sample Output 1 4 1 1 2 3 4 4.000000
Sample Input 2 Sample Output 2 4 2 2 4 3 4 3.666666
Sample Input 3 Sample Output 3 6 3 7 1 2 1 3 6 3.333333
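Not an official solution, but one standard way to attack this: binary-search the answer $x$, subtract $x$ from every element, and use prefix sums to check whether some window of length at least $K$ has nonnegative shifted sum. A sketch (function name is mine):

```python
def max_average(a, k, iterations=60):
    """Binary-search the largest x such that some window of length >= k
    has average >= x, i.e. sum of (a_i - x) >= 0 over that window."""
    lo, hi = min(a), max(a)
    for _ in range(iterations):
        mid = (lo + hi) / 2
        prefix = [0.0]
        for v in a:
            prefix.append(prefix[-1] + v - mid)
        best, feasible = float("inf"), False
        for i in range(k, len(a) + 1):
            best = min(best, prefix[i - k])  # smallest prefix at least k back
            if prefix[i] - best >= 0:
                feasible = True
                break
        lo, hi = (mid, hi) if feasible else (lo, mid)
    return lo

print(round(max_average([2, 4, 3, 4], 2), 6))  # 3.666667 (sample 2)
```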
|
Revision as of 11:40, 25 January 2013
The most important generalization of the concept of an integral. Let $(X,\mu)$ be a space with a non-negative complete countably-additive measure $\mu$ (cf. Countably-additive set function; Measure space), where $\mu(X)<\infty$. A simple function is a measurable function $g:X\to\mathbb R$ that takes at most a countable set of values: $g(x)=y_n$, $y_n\ne y_k$ for $n\ne k$, if $x\in X_n$, $\bigcup\limits_{n=1}^{\infty}X_n=X$. A simple function $g$ is said to be summable if the series \begin{equation} \sum\limits_{n=1}^{\infty}y_n\mu(X_n) \end{equation}
converges absolutely (cf. Absolutely convergent series); the sum of this series is the Lebesgue integral
$$\int_X g\,d\mu.$$
A function $f$ is summable on $X$, $f\in L_1(X,\mu)$, if there is a sequence of simple summable functions $g_n$ uniformly convergent (cf. Uniform convergence) to $f$ on a set of full measure, and if the limit
$$I=\lim_{n\to\infty}\int_X g_n\,d\mu$$
is finite. The number $I$ is the Lebesgue integral
$$\int_X f\,d\mu=I.$$
This is well-defined: the limit exists and does not depend on the choice of the sequence $g_n$. If $f\in L_1(X,\mu)$, then $f$ is a measurable almost-everywhere finite function on $X$. The Lebesgue integral is a linear non-negative functional on $L_1(X,\mu)$ with the following properties:
1) if $f\in L_1(X,\mu)$ and if
$$\mu(\{x\in X: f(x)\neq g(x)\})=0,$$
then $g\in L_1(X,\mu)$ and
$$\int_X f\,d\mu=\int_X g\,d\mu;$$
2) if $f\in L_1(X,\mu)$, then $|f|\in L_1(X,\mu)$ and
$$\left|\int_X f\,d\mu\right|\leq\int_X|f|\,d\mu;$$
3) if $f\in L_1(X,\mu)$, $|g|\leq f$ and $g$ is measurable, then $g\in L_1(X,\mu)$ and
$$\left|\int_X g\,d\mu\right|\leq\int_X f\,d\mu;$$
4) if $m\leq f\leq M$ and $f$ is measurable, then $f\in L_1(X,\mu)$ and
$$m\mu(X)\leq\int_X f\,d\mu\leq M\mu(X).$$
In the case when $\mu(X)=\infty$ and $f$ is measurable and non-negative, the Lebesgue integral is defined as
$$\int_X f\,d\mu=\lim_{n\to\infty}\int_{X_n}f\,d\mu,$$
under the condition that this limit exists and is finite for any sequence of measurable sets $X_n$ such that $\mu(X_n)<\infty$, $X_n\subset X_{n+1}$, $\bigcup_{n=1}^\infty X_n=X$. In this case the properties 1), 2), 3) are preserved, but condition 4) is violated.
For the transition to the limit under the Lebesgue integral sign see Lebesgue theorem.
If $E$ is a measurable set in $X$, then the Lebesgue integral
$$\int_E f\,d\mu$$
is defined either as above, by replacing $X$ by $E$, or as
$$\int_X f\chi_E\,d\mu,$$
where $\chi_E$ is the characteristic function of $E$; these definitions are equivalent. If $f\in L_1(X,\mu)$, then $f\in L_1(E,\mu)$ for any measurable $E\subset X$. If
$$X=\bigcup_{n=1}^\infty X_n,$$
if $X_n$ is measurable for every $n$, if $X_m\cap X_n=\emptyset$ for $m\neq n$, and if $f\in L_1(X,\mu)$, then
$$\int_X f\,d\mu=\sum_{n=1}^\infty\int_{X_n}f\,d\mu.$$
Conversely, if under these conditions on $X_n$ one has $f\in L_1(X_n,\mu)$ for every $n$ and if
$$\sum_{n=1}^\infty\int_{X_n}|f|\,d\mu<\infty,$$
then $f\in L_1(X,\mu)$ and the previous equality is true ($\sigma$-additivity of the Lebesgue integral).
The function of sets given by
$$F(E)=\int_E f\,d\mu$$
is absolutely continuous with respect to $\mu$ (cf. Absolute continuity); if $f\geq 0$, then $F$ is a non-negative measure that is absolutely continuous with respect to $\mu$. The converse assertion is the Radon–Nikodým theorem.
For functions $f\colon[a,b]\to\mathbb R$ the name "Lebesgue integral" is applied to the corresponding functional if the measure is the Lebesgue measure; here, the set of summable functions is denoted simply by $L_1[a,b]$, and the integral by
$$\int_a^b f(x)\,dx.$$
For other measures this functional is called a Lebesgue–Stieltjes integral.
If $f\in L_1[a,b]$, and if $F$ is a non-decreasing absolutely continuous function, then
$$\int_a^b f\,dF=\int_a^b f(x)F'(x)\,dx.$$
If $f\in L_1[a,b]$, and if $g$ is monotone on $[a,b]$, then $fg\in L_1[a,b]$ and there is a point $\xi\in[a,b]$ such that
$$\int_a^b f(x)g(x)\,dx=g(a)\int_a^\xi f(x)\,dx+g(b)\int_\xi^b f(x)\,dx$$
(the second mean-value theorem).
In 1902 H. Lebesgue gave (see [Le]) a definition of the integral for $X=[a,b]$ and measure equal to the Lebesgue measure. He constructed simple functions that uniformly approximate almost-everywhere on a set of finite measure a measurable non-negative function $f$, and proved the existence of a common limit (finite or infinite) of the integrals of these simple functions as they tend to $f$. The Lebesgue integral is a basis for various generalizations of the concept of an integral. As N.N. Luzin remarked [Lu], property 2), called absolute integrability, distinguishes the Lebesgue integral for $f\in L_1[a,b]$ from all possible generalized integrals.
References
[Le] H. Lebesgue, "Leçons sur l'intégration et la récherche des fonctions primitives", Gauthier-Villars (1928)
[Lu] N.N. Luzin, "The integral and trigonometric series", Moscow-Leningrad (1915) (In Russian) (Thesis; also: Collected Works, Vol. 1, Moscow, 1953, pp. 48–212)
[KF] A.N. Kolmogorov, S.V. Fomin, "Elements of the theory of functions and functional analysis", 1–2, Graylock (1957–1961) (Translated from Russian)

Comments
For other generalizations of the notion of an integral see $A$-integral; Bochner integral; Boks integral; Burkill integral; Daniell integral; Darboux sum; Denjoy integral; Kolmogorov integral; Perron integral; Perron–Stieltjes integral; Pettis integral; Radon integral; Stieltjes integral; Strong integral; Wiener integral. See also, of course, Riemann integral. See also Double integral; Improper integral; Fubini theorem (on changing the order of integration).
References
[H] P.R. Halmos, "Measure theory", Van Nostrand (1950)
[P] I.N. Pesin, "Classical and modern integration theories", Acad. Press (1970) (Translated from Russian)
[S] S. Saks, "Theory of the integral", Hafner (1952) (Translated from French)
[Ro] H.L. Royden, "Real analysis", Macmillan (1968)
[Ru] W. Rudin, "Real and complex analysis", McGraw-Hill (1978) p. 24
[HS] E. Hewitt, K.R. Stromberg, "Real and abstract analysis", Springer (1965)

How to Cite This Entry:
Lebesgue integral.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Lebesgue_integral&oldid=29350
|
Naively, in my very limited awareness, it feels that the Max-CUT is a very "special" NP-Hard problem because for a graph with edge-set $E$, it can be written as the question of trying to maximize a $n$ variable polynomial $\sum_{(i,j)\in E} \frac{ (x_i - x_j)^2 }{4}$ over the hypercube $\{ -1, 1\}^n$.
Is Max-CUT really special or is there always a reduction from a target NP-complete question to Max-CUT so that the optimization version of the target question can also be written as trying to optimize some polynomial over the hypercube? Do we know how to systematically get these special reductions or do they provably not exist or its just that we still don't know?
At some level all NP-complete questions are "equivalent" (sure, they have variously different approximation-hardness properties!), but it seems intuitively a bit surprising that the optimization versions of the others could not be written in this form, when one of them trivially can be!
Is it possible that the different approximation-hardness behaviour of the different NP-complete problems is somehow related to this issue of their optimization versions not having a uniform representation as polynomials (one for each) to be optimized over the hypercube?
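To make the polynomial formulation above concrete, here is a small brute-force sketch (a toy check, not an efficient algorithm; the example graphs are my own) that maximizes $\sum_{(i,j)\in E}(x_i-x_j)^2/4$ over $\{-1,1\}^n$; each term is $1$ exactly when the edge is cut, so the maximum equals the max-cut value:

```python
from itertools import product

def max_cut_via_polynomial(n, edges):
    """Maximize sum over edges of (x_i - x_j)^2 / 4 on the hypercube {-1,1}^n.
    Each term is 1 when the endpoints get opposite signs and 0 otherwise,
    so the maximum equals the size of a maximum cut."""
    best = 0
    for x in product((-1, 1), repeat=n):
        best = max(best, sum((x[i] - x[j]) ** 2 // 4 for i, j in edges))
    return best

# 4-cycle: the maximum cut uses all 4 edges (bipartition {0,2} vs {1,3}).
print(max_cut_via_polynomial(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # → 4
# triangle: at most 2 edges can be cut.
print(max_cut_via_polynomial(3, [(0, 1), (1, 2), (0, 2)]))          # → 2
```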
|
We use the standard notation that we have been using all along:
\(X_{ik}\) = Response for variable \(k\) in sample unit \(i\) (the number of individuals of species \(k\) at site \(i\))
\(n\) = Number of sample units
\(p\) = Number of variables
Johnson and Wichern list four different measures of association (similarity) that are frequently used with continuous variables in cluster analysis:
Some other distances also use a similar concept.
Euclidean Distance
This is used most commonly. For instance, in two dimensions, we can plot the observations in a scatter plot, and simply measure the distances between the pairs of points. More generally we can use the following equation:
\(d(\mathbf{X_i, X_j}) = \sqrt{\sum\limits_{k=1}^{p}(X_{ik} - X_{jk})^2}\)
This is the square-root of the sum of the squared differences between the measurements for each variable. (This is the only method that is available in SAS. In Minitab there are other distances like Pearson, Squared Euclidean, etc.)
Minkowski Distance
\(d(\mathbf{X_i, X_j}) = \left[\sum\limits_{k=1}^{p}|X_{ik}-X_{jk}|^m\right]^{1/m}\)
Here the square is replaced by raising the absolute difference to the power \(m\), and instead of taking the square root we take the \(m\)th root.
Here are two other methods for measuring association:
Canberra Metric
\(d(\mathbf{X_i, X_j}) = \sum\limits_{k=1}^{p}\frac{|X_{ik}-X_{jk}|}{X_{ik}+X_{jk}}\)
Czekanowski Coefficient
\(d(\mathbf{X_i, X_j}) = 1- \frac{2\sum\limits_{k=1}^{p}\text{min}(X_{ik},X_{jk})}{\sum\limits_{k=1}^{p}(X_{ik}+X_{jk})}\)
For each distance measure, similar subjects have smaller distances than dissimilar subjects. Similar subjects are more strongly associated.
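The four measures can be written compactly in code; the sketch below is a minimal illustration (the function names and sample data are mine), with the Canberra and Czekanowski forms assuming nonnegative entries, as is usual for species counts:

```python
def euclidean(x, y):
    # square root of the sum of squared differences
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def minkowski(x, y, m):
    # generalizes Euclidean distance: m = 2 recovers it
    return sum(abs(a - b) ** m for a, b in zip(x, y)) ** (1 / m)

def canberra(x, y):
    # assumes nonnegative entries with a + b > 0, e.g. species counts
    return sum(abs(a - b) / (a + b) for a, b in zip(x, y) if a + b > 0)

def czekanowski(x, y):
    # 1 minus twice the sum of coordinatewise minima over the total
    return 1 - 2 * sum(min(a, b) for a, b in zip(x, y)) / sum(a + b for a, b in zip(x, y))

x, y = [3, 0, 2], [1, 1, 2]
print(euclidean(x, y))      # → 2.23606797749979 (sqrt of 5)
print(minkowski(x, y, 2))   # same value as Euclidean
```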
Or, if you like, you can invent your own measure! However, whatever you invent, the measure of association must satisfy the following properties:
Symmetry
\(d(\mathbf{X_i, X_j}) = d(\mathbf{X_j, X_i})\), i.e., the distance between subject one and subject two must be the same as the distance between subject two and subject one.
Positivity
\(d(\mathbf{X_i, X_j}) > 0\) if \(\mathbf{X_i} \ne \mathbf{X_j}\)...the distances must be positive - negative distances are not allowed!
Identity
\(d(\mathbf{X_i, X_j}) = 0\) if \(\mathbf{X_i} = \mathbf{X_j}\)...the distance between the subject and itself should be zero.
Triangle inequality
\(d(\mathbf{X_i, X_k}) \le d(\mathbf{X_i, X_j}) + d(\mathbf{X_j, X_k}) \). This follows from the geometric fact that the sum of two sides of a triangle cannot be smaller than the third side.
|
Consider a mechanism $M: \mathcal{R} \rightarrow X$, where
$\mathcal{R}$ is a domain of preference profiles $R = (R_1,\dots, R_n)$, and $X$ is a set of outcomes.
I believe that the following is a folk theorem
If the domain $\mathcal{R}$ is a Cartesian domain, i.e., $\mathcal{R} = \times_{i\in \{1,\dots,n\}} \mathcal{R}_i$ for some collection of individual domains $(\mathcal{R}_1,\dots, \mathcal{R}_n)$, and the game $(M,R)$ has a truthful Nash equilibrium for every profile $R \in \mathcal{R}$, then for every profile $R\in \mathcal{R}$, every player has a truthful dominant strategy in the game $(M,R)$.
This is not hard to prove and follows almost directly by definition, but I'd rather include a reference than a proof. I think I remember seeing this result proven somewhere (in a paper by Morris et al.?), but I cannot find that paper anymore.
Do you know any reference of a paper where that result (or a similar result) would be proven?
|
It depends on how you define "growth". Trade models have multiple goods in them, so there's no unique way to define "growth" - you are just moving along a fixed production possibility frontier. You can calculate nominal GDP, but then you have to come up with a measure of inflation so that you can deflate the nominal figure into something real.
One of the results here is that if your economy, which produces $ n $ goods, has a production possibility frontier given by the zero set of a $ C^2 $ function $ Q : \mathbb R^n \to \mathbb R $ (with the usual properties, i.e. increasing in each coordinate and with positive definite Hessian at every point), then a simple application of the chain rule gives the following. (Here the $ dx $ notation denotes derivatives with respect to time, so we are assuming that the economy shifts continuously along the PPF instead of jumping from one point to another; this assumption is important, since otherwise we cannot drop the higher-order terms in the Taylor expansion of $ Q $.)
$$ 0 = dQ = \sum_{k=1}^{n} \frac{\partial Q}{\partial x_k} dx_k $$
if you're moving along the ppf, so that the value of $ Q $ is conserved. However, we also know from the first order conditions of producers in a competitive market that
$$ \frac{\partial Q/\partial x_i}{\partial Q / \partial x_j} = \frac{p_i}{p_j} $$
where $ p_i, p_j $ are the nominal prices of good $ i $ and good $ j $ respectively. Substituting this into the above identity gives
$$ 0 = \sum_{k=1}^n p_k dx_k $$
This is an important result: it says that if you calculate growth by looking at changes in production and hold the prices fixed, then you will detect no growth if the economy moves along a fixed production possibility frontier. Furthermore, this identity combined with the identity
$$ NGDP = \sum_{k=1}^{n} p_k x_k $$
gives, using the product rule together with $ \sum_k p_k \, dx_k = 0 $, that

$$ d(\log(NGDP)) = \frac{1}{NGDP} \sum_{k=1}^n x_k \, d p_k = \text{GDP-deflator inflation} $$
In other words, if we deflate GDP by using the GDP deflator which is defined as the percent change in the price of a basket of goods weighted by the share of total production they represented in a given time period, we will measure zero RGDP growth so long as the economy remains along the same production possibility frontier $ Q(x_1, x_2, \ldots, x_n) = 0 $. The results hold true in autarky as well as in an environment of free trade, so in this sense free trade does not lead to "more growth", at least not due to the effects present in Heckscher-Ohlin.
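A quick numerical sanity check of the identity $\sum_k p_k\,dx_k = 0$ along a PPF (the quarter-circle frontier and the point chosen are purely illustrative): with $Q(x_1,x_2)=x_1^2+x_2^2-1$ and competitive prices proportional to the gradient of $Q$, measured "real growth" at fixed prices vanishes as the economy moves along the frontier.

```python
import math

# Illustrative PPF: Q(x1, x2) = x1^2 + x2^2 - 1 = 0, traced as x(t) = (cos t, sin t).
def x(t):
    return (math.cos(t), math.sin(t))

def prices(x1, x2):
    # competitive prices are proportional to the gradient of Q
    return (2 * x1, 2 * x2)

t, dt = 0.7, 1e-6
(x1, x2), (x1n, x2n) = x(t), x(t + dt)
p1, p2 = prices(x1, x2)
# sum of p_k * dx_k: should vanish to first order in dt
growth_at_fixed_prices = p1 * (x1n - x1) + p2 * (x2n - x2)
print(abs(growth_at_fixed_prices) < 1e-9)  # → True
```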
We can get away from this result if we choose to deflate nominal GDP by an alternative measure, but I think this should be enough illustration that you should be thinking about the
welfare effects of trade and not about the "growth" effects. If you do come up with an appropriate notion of CPI inflation in a model, it will most likely be a "cost of living index" of the kind you find in Dixit-Stiglitz type models, so you will still indirectly be measuring welfare. The basic proof of the first welfare theorem carries over to the case of international free trade vs autarky, so under the usual assumptions (locally nonsatiated preferences and such) free trade outcomes are Pareto optimal, whereas autarky outcomes are in general not Pareto optimal.
|
Calculate (via two methods) the flux integral $$\int_S (\nabla \times \vec{F}) \cdot \vec{n} dS$$ where $\vec{F} = (y,z,x^2y^2)$ and $S$ is the surface given by $z = x^2 +y^2$ and $0 \leq z \leq 4$, oriented so that $\vec{n}$ points downwards.
I have applied Stokes' Theorem, which gives $$\oint_{\partial S} \vec{F} \cdot d\vec{r}.$$ Further, I calculated a downward-pointing normal vector $$(r\sin\theta, r\cos\theta, -r).$$ I am stuck now because I cannot interpret the meaning of $\vec{r}$.
My second method would be to directly compute the curl, $$\text{curl}\, \vec{F} = (2x^2y - 1,-2xy^2,-1).$$ Compute the normal via the gradient, $$\nabla z = (2x,2y,-1),$$ and substitute everything to get $$\int_S \dots dS. $$
I'm stuck now and having difficulty finding the boundary in terms of $x$ and $y$. Can anyone please lend me a hand? Thanks.
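One hedged way to sanity-check either route is numeric: the boundary is the circle $x^2+y^2=4$ at $z=4$, and since any surface with the same oriented boundary gives the same flux of the curl, the paraboloid can be swapped for the flat disk it caps. Note the downward normal pairs with a clockwise boundary traversal, hence the sign flip below; the grid sizes are arbitrary choices.

```python
import math

def line_integral(steps=200000):
    # counterclockwise boundary curve: (2 cos t, 2 sin t, 4)
    total, dt = 0.0, 2 * math.pi / steps
    for i in range(steps):
        t = (i + 0.5) * dt
        y, z = 2 * math.sin(t), 4.0
        dx, dy = -2 * math.sin(t) * dt, 2 * math.cos(t) * dt
        total += y * dx + z * dy          # F = (y, z, x^2 y^2), and dz = 0 here
    return total

def curl_flux(n=400):
    # Replace the paraboloid by the flat disk r <= 2 at z = 4 (same boundary,
    # same orientation). Downward normal is (0, 0, -1), so the integrand is
    # (curl F) . (0, 0, -1) = (-1) * (-1) = 1, integrated in polar coordinates.
    total = 0.0
    dr, dth = 2.0 / n, 2 * math.pi / n
    for i in range(n):
        for j in range(n):
            r = (i + 0.5) * dr
            total += 1.0 * r * dr * dth
    return total

# downward normal ⇒ clockwise boundary ⇒ negate the counterclockwise integral
print(round(curl_flux(), 3), round(-line_integral(), 3))  # both ≈ 4*pi ≈ 12.566
```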
|
a) Regular implies normal.
Okay, maybe that's too high-tech. If $f=g+hy$ lies in the integral closure, then it satisfies the quadratic polynomial $(z-g)^2 - h^2(x^3-x)$. If the coefficients are polynomials (which they must be, by Gauss's Lemma), then $g$ is a polynomial, and $h^2 (x^3-x)$ is a polynomial. But $x^3-x$ is square-free (here we use that the characteristic is not $2$), so $h$ is a polynomial and $f\in R$.
b) Since $y^2,x^3 \in P^2$, it is straightforward that $x\in P^2$ and $P^2 = (x)$.
c) (Note: it is sufficient to address this problem for $R\otimes_k \overline{k}$, so we assume that $k$ is algebraically closed to avoid some technicalities.)
First of all, $X=\operatorname{Spec}k[x,y]/(y^2-x^3+x)$ is an open subset of an elliptic curve $\overline{X} = Y\cup \{\infty\}$. This is a rather important observation, because it implies that $R=k[x,y]/(y^2-x^3+x)$
doesn't have any principal prime ideals whatsoever (except, of course, for $(0)$).
We will demonstrate this, real mathematician-style, by looking at the global behavior of a hypothetical generator of $P$. If $P=(f)$ is principal, then $f$ extends to a rational function on $\overline{X}$ with a simple pole at $\infty$. Therefore $1/f$ is a non-constant rational function on $\overline{X}$ with only a simple pole at $P$. This implies that $l(P) \geq 2$.
Here $l(P)$ is shorthand for the dimension of the space of rational functions on $\overline{X}$ which have at worst a simple pole at $P$, and are regular elsewhere. In general, $l(D)$ is defined for $D$ a Weil divisor, i.e. a formal sum of points. For example, $l(P+2Q-3R)$ counts the number of linearly independent functions which must have a triple zero at $R$, and are allowed to have a pole of order $1$ at $P$, and $2$ at $Q$. The sum of the coefficients of $D$ is called its
degree.
We will calculate $l(P)$ exactly using the Riemann-Roch theorem, which says that, for any Weil divisor $D$, $l(D)-l(K-D) = \operatorname{deg}{D} + 1 - g$, where $g$ is the genus of $\overline{X}$, and $K$ is the divisor associated to any differential form on $\overline{X}$. For an elliptic curve, we must have $K=0$ and $g=1$, so this equation takes the simple form $l(D)-l(-D) = \operatorname{deg}{D}$.
We set $D=P$. Since $\operatorname{deg}{P} = 1$, and $l(-P)=0$ (in fact, $l(-D)=0$ for any divisor of positive degree), we conclude that $l(P) = 1$. Since earlier we found that $l(P)\geq 2$ for any principal maximal ideal on $X$ (in fact, the proof works for any projective curve with a single point removed), we have our contradiction.
(The above does have a nice, concrete interpretation: when you strip away the machinery, the argument is that, if $P=(f)$, then the map $k[x]\to R$ sending $x$ to $f$ is an isomorphism. We use basic facts about divisors to avoid some difficult calculations, and genus to show that $R$ cannot be a polynomial ring.)
|
General Relativity and Quantum Cosmology (gr-qc)
Title: Longterm general relativistic simulation of binary neutron stars collapsing to a black hole
(Submitted on 29 Apr 2009 (v1), last revised 31 Aug 2009 (this version, v2))
Abstract: General relativistic simulations for the merger of binary neutron stars are performed as an extension of a previous work\cite{Shibata:2006nm}. We prepare binary neutron stars with a large initial orbital separation and employ the moving-puncture formulation, which enables us to follow merger and ringdown phases for a long time, even after black hole formation. For modeling inspiraling neutron stars, which should be composed of cold neutron stars, the Akmal-Pandhalipande-Ravenhall (APR) equation of state (EOS) is adopted. After the onset of the merger, the hybrid-type EOS is used; i.e., the cold and thermal parts are given by the APR and $\Gamma$-law EOSs, respectively. Three equal-mass binaries each with mass $1.4M_\odot,1.45M_\odot,1.5M_\odot$ and two unequal-mass binaries with mass 1.3--$1.6M_\odot$, 1.35--$1.65M_\odot$ are prepared. We focus primarily on the black hole formation case, and explore mass and spin of the black hole, mass of disks which surround the black hole, and gravitational waves emitted during the black hole formation. We find that (i) for the systems of $m_0=2.9$--$3.0M_\odot$ and of mass ratio $\approx 0.8$, the mass of disks which surround the formed black hole is 0.006--$0.02M_{\odot}$; (ii) the spin of the formed black hole is $0.78 \pm 0.02$ when a black hole is formed after the merger in the dynamical time scale. This value depends weakly on the total mass and mass ratio, and is about 0.1 larger than that of a black hole formed from nonspinning binary black holes; (iii) for the black-hole formation case, the Fourier spectrum shape of gravitational waves emitted in the merger and ringdown phases has a universal qualitative feature irrespective of the total mass and mass ratio, but quantitatively, the spectrum reflects the parameters of the binary neutron stars.
Submission history: From: Kenta Kiuchi [view email]; [v1] Wed, 29 Apr 2009 08:27:50 GMT (691kb); [v2] Mon, 31 Aug 2009 05:53:51 GMT (732kb)
|
A matrix $M$ is
sorted if $M_{i,j}\leq M_{i+1,j}$ and $M_{i,j}\leq M_{i,j+1}$.
Consider the following problem.
Search in a sorted matrix
Given a $n\times m$ sorted matrix $M$, where $n\leq m$. There is a hidden number $x$. We have access to an oracle $f$, where $f(y)$ returns if $y< x$, $y=x$ or $y>x$. Decide if $x$ is in $M$.
The naive algorithm would be to go through every element in the matrix and call the oracle on each element. This takes $O(nm)$ time and $O(nm)$ oracle calls.
One can get much better in terms of oracle calls. A simple algorithm is to sort all elements in the matrix in $O(nm \log nm)$ time, and then use the oracle $O(\log nm)=O(\log m)$ times to do a binary search on the sorted list.
The best-known algorithm can achieve $O(\log m)$ oracle calls and $O(n\log \frac{m}{n})$ time. See here. This is best possible, as one can show a $\Omega(\log m)$ lower bound on oracle calls and $\Omega(n\log \frac{m}{n})$ on running time.
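For intuition, the classical "staircase" (saddleback) walk is a simple baseline, hedged: it uses $O(n+m)$ time and $O(n+m)$ oracle calls, so it matches neither of the optimal bounds above, but it traces exactly the staircase boundary discussed below. The example matrix and oracle are my own.

```python
def search_sorted_matrix(M, oracle):
    """Walk from the top-right corner: each oracle call discards a row or a column.
    oracle(y) returns -1, 0, or 1 according to y < x, y == x, or y > x."""
    if not M or not M[0]:
        return False
    i, j = 0, len(M[0]) - 1
    while i < len(M) and j >= 0:
        c = oracle(M[i][j])
        if c == 0:
            return True
        elif c > 0:       # M[i][j] > x: everything below in this column is too big
            j -= 1
        else:             # M[i][j] < x: everything left in this row is too small
            i += 1
    return False

M = [[1, 3, 5], [2, 4, 8], [6, 7, 9]]   # rows and columns both increasing
make_oracle = lambda x: (lambda y: (y > x) - (y < x))
print(search_sorted_matrix(M, make_oracle(4)))   # → True
print(search_sorted_matrix(M, make_oracle(10)))  # → False
```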
The problem is about a more refined running time. The entries no larger than $x$ form an (inverted) staircase region of the matrix. We measure the complexity of the problem by $h$, the number of steps of the staircase, and $k$, the number of elements no larger than $x$.
Example: Let $x=11$ and $M$ be the matrix below. The staircase shape is highlighted. The staircase shape has $h=3$ steps, and there are $k=11$ elements in the staircase.
One can obtain an $O(h\log \frac{k}{h^2})$ time algorithm for the problem by using exponential search to find the boundary of the staircase. This is never worse than $O(n\log \frac{m}{n})$, and is significantly better when $h$ is small. However, the algorithm has to use the oracle $O(h\log \frac{k}{h^2})$ times.
One can show $\Omega(m)$ is the lower bound on the number of oracle calls.
Can we get the best of both worlds? A faster running time together with small number of oracle calls?
Is it possible to use $O(h\log \frac{k}{h^2})$ time and $O(\log m)$ oracle calls to search in a sorted matrix?
|
@Mathphile I found no prime of the form $$n^{n+1}+(n+1)^{n+2}$$ for $n>392$ yet and neither a reason why the expression cannot be prime for odd n, although there are far more even cases without a known factor than odd cases.
@TheSimpliFire That's what I'm thinking about. I had some "vague feeling" that there must be some elementary proof, so I decided to find it, and then I found it; it is really "too elementary", but I like surprises, if they're good.
It is in fact difficult; I did not understand all the details either. But the ECM method is analogous to the $p-1$ method, which works well when there is a factor $p$ such that $p-1$ is smooth (has only small prime factors).
Brocard's problem is a problem in mathematics that asks to find integer values of $n$ and $m$ for which $$n!+1=m^2,$$ where $n!$ is the factorial. It was posed by Henri Brocard in a pair of articles in 1876 and 1885, and independently in 1913 by Srinivasa Ramanujan. == Brown numbers == Pairs of the numbers $(n, m)$ that solve Brocard's problem are called Brown numbers. There are only three known pairs of Brown numbers: (4,5), (5,11...
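The known pairs are easy to reproduce with a quick search (the bound 100 below is an arbitrary cutoff for illustration; the problem remains open far beyond the much larger ranges already checked in the literature):

```python
from math import factorial, isqrt

def brown_pairs(limit):
    """Return all pairs (n, m) with n! + 1 = m^2 for 1 <= n <= limit."""
    pairs = []
    for n in range(1, limit + 1):
        v = factorial(n) + 1
        m = isqrt(v)          # integer square root, exact for big ints
        if m * m == v:
            pairs.append((n, m))
    return pairs

print(brown_pairs(100))  # → [(4, 5), (5, 11), (7, 71)]
```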
$\textbf{Corollary.}$ No solutions to Brocard's problem (with $n>10$) occur when $n$ that satisfies either \begin{equation}n!=[2\cdot 5^{2^k}-1\pmod{10^k}]^2-1\end{equation} or \begin{equation}n!=[2\cdot 16^{5^k}-1\pmod{10^k}]^2-1\end{equation} for a positive integer $k$. These are the OEIS sequences A224473 and A224474.
Proof: First, note that since $(10^k\pm1)^2-1\equiv((-1)^k\pm1)^2-1\equiv1\pm2(-1)^k\not\equiv0\pmod{11}$, $m\ne 10^k\pm1$ for $n>10$. If $k$ denotes the number of trailing zeros of $n!$, Legendre's formula implies that \begin{equation}k=\min\left\{\sum_{i=1}^\infty\left\lfloor\frac n{2^i}\right\rfloor,\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\right\}=\sum_{i=1}^\infty\left\lfloor\frac n{5^i}\right\rfloor\end{equation} where $\lfloor\cdot\rfloor$ denotes the floor function.
The upper limit can be replaced by $\lfloor\log_5n\rfloor$ since for $i>\lfloor\log_5n\rfloor$, $\left\lfloor\frac n{5^i}\right\rfloor=0$. An upper bound can be found using geometric series and the fact that $\lfloor x\rfloor\le x$: \begin{equation}k=\sum_{i=1}^{\lfloor\log_5n\rfloor}\left\lfloor\frac n{5^i}\right\rfloor\le\sum_{i=1}^{\lfloor\log_5n\rfloor}\frac n{5^i}=\frac n4\left(1-\frac1{5^{\lfloor\log_5n\rfloor}}\right)<\frac n4.\end{equation}
Thus $n!$ has $k$ trailing zeros for some $n\in(4k,\infty)$. Since $2\cdot5^{2^k}-1\pmod{10^k}$ and $2\cdot16^{5^k}-1\pmod{10^k}$ have at most $k$ digits, $m^2-1$ has at most $2k$ digits under the conditions of the Corollary. The Corollary therefore follows if $n!$ has more than $2k$ digits for $n>10$. From the bound $k<n/4$, $n!$ has at least as many digits as $(4k)!$. Stirling's formula implies that \begin{equation}(4k)!>\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\end{equation}
Since the number of digits of an integer $t$ is $1+\lfloor\log t\rfloor$ where $\log$ denotes the logarithm in base $10$, the number of digits of $n!$ is at least \begin{equation}1+\left\lfloor\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)\right\rfloor\ge\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right).\end{equation}
Therefore it suffices to show that for $k\ge2$ (since $n>10$ and $k<n/4$), \begin{equation}\log\left(\frac{\sqrt{2\pi}\left(4k\right)^{4k+\frac{1}{2}}}{e^{4k}}\right)>2k\iff8\pi k\left(\frac{4k}e\right)^{8k}>10^{4k}\end{equation} which holds if and only if \begin{equation}\left(\frac{10}{\left(\frac{4k}e\right)}\right)^{4k}<8\pi k\iff k^2(8\pi k)^{\frac1{4k}}>\frac58e^2.\end{equation}
Now consider the function $f(x)=x^2(8\pi x)^{\frac1{4x}}$ over the domain $\Bbb R^+$, which is clearly positive there. Then after considerable algebra it is found that \begin{align*}f'(x)&=2x(8\pi x)^{\frac1{4x}}+\frac14(8\pi x)^{\frac1{4x}}(1-\ln(8\pi x))\\\implies f'(x)&=\frac{2f(x)}{x^2}\left(x-\frac18\ln(8\pi x)\right)>0\end{align*} for $x>0$ as $\min\{x-\frac18\ln(8\pi x)\}>0$ in the domain.
Thus $f$ is monotonically increasing in $(0,\infty)$, and since $2^2(8\pi\cdot2)^{\frac18}>\frac58e^2$, the inequality in equation $(8)$ holds. This means that the number of digits of $n!$ exceeds $2k$, proving the Corollary. $\square$
We get $n^n+3\equiv 0\pmod 4$ for odd $n$, so we can see from here that it is even (or, we could have used @TheSimpliFire's one-or-two-step method to derive this without any contradiction - which is better)
@TheSimpliFire Hey! with $4\pmod {10}$ and $0\pmod 4$ then this is the same as $10m_1+4$ and $4m_2$. If we set them equal to each other, we have that $5m_1=2(m_2-m_1)$ which means $m_1$ is even. We get $4\pmod {20}$ now :P
Yet again a conjecture! Motivated by Catalan's conjecture and a recent question of mine, I conjecture that for distinct, positive integers $a,b$, the only solution to this equation $$a^b-b^a=a+b\tag1$$ is $(a,b)=(2,5).$ It is of anticipation that there will be much fewer solutions for incr...
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this on a classification of groups of order $p^2qr$. There order of $H$ should be $qr$ and it is present as $G = C_{p}^2 \rtimes H$. I want to know that whether we can know the structure of $H$ that can be present?
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order, $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be a Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
And when $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q\mid(p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: take a word $w$, check whether it has some $u_i$ as a subword; in that case replace it by $v_i$, and keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
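The rewriting loop itself is tiny. Below is a hedged toy sketch: the single rule $u = ab$, $v = \varepsilon$ merely mimics free cancellation (with $b$ playing the role of $a^{-1}$), not an actual Dehn presentation of a hyperbolic group; termination is guaranteed because every rule has $|u_i|>|v_i|$.

```python
def dehn_reduce(word, rules):
    """Repeatedly replace the first occurrence of any u_i by v_i.
    With a genuine Dehn presentation, a word represents the identity
    iff this process terminates at the empty word."""
    changed = True
    while changed:
        changed = False
        for u, v in rules.items():
            if u in word:
                word = word.replace(u, v, 1)  # length strictly drops since |u| > |v|
                changed = True
                break
    return word

# toy rule: "ab" -> "" mimics cancelling a against its inverse b
print(dehn_reduce("aabb", {"ab": ""}))  # → "" (trivial element)
print(dehn_reduce("aab", {"ab": ""}))   # → "a" (nontrivial)
```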
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ by straightline homotopy, but straightlines being the hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so projects to a homotopy of $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$) downstairs, and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and offset machine is really cheaper per page than a printer, you know, but you should be printing in bulk, its all economy of scale.
@ParasKhosla Yes, I am Indian, and trying to get in some good masters progam in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.== Branches of algebraic graph theory ===== Using linear algebra ===The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. Especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good thing about Palka. Also, if you do not worry about little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition, rather than, on winding numbers, etc.), Howie's Complex analysis is good. It is teeming with typos here and there, but you will be fine, i think. Also, thisbook contains all the solutions in appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the DE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C'(1+x)e^{-x}$, which is obviously not a polynomial unless $C' = 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
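A hedged way to double-check the kernel computation without symbolic libraries: represent polynomials in $\mathbb{R}_3[x]$ by coefficient lists (lowest degree first; the helper names are mine) and evaluate $F(P)=xP''+(x+1)P'''$ directly.

```python
def deriv(c):
    # derivative of a polynomial given by coefficients [a0, a1, ...]
    return [i * c[i] for i in range(1, len(c))] or [0]

def add(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [u + v for u, v in zip(a, b)]

def times_x(c):
    # multiply a polynomial by x
    return [0] + c

def F(c):
    # F(P) = x*P'' + (x + 1)*P''' = x*P'' + x*P''' + P'''
    p2 = deriv(deriv(c))
    p3 = deriv(p2)
    return add(times_x(p2), add(times_x(p3), p3))

is_zero = lambda c: all(v == 0 for v in c)
print(is_zero(F([5, 3, 0, 0])))  # P = 3x + 5 → True, in the kernel
print(is_zero(F([0, 0, 1, 0])))  # P = x^2    → False (F(P) = 2x)
print(is_zero(F([0, 0, 0, 1])))  # P = x^3    → False (F(P) = 6x^2 + 6x + 6)
```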
|
The Breit Wigner cross section derived in my lecture notes is
$\sigma = \frac{g\pi}{p_i^2}\frac{\Gamma_{Z\rightarrow i}\Gamma_{Z \rightarrow f}}{(E-E_0)^2+\frac{\Gamma^2}{4}}$
where $g$ is the spin degeneracy factor, $\Gamma$ is the resonance width for the intermediate particle $Z$, and $\Gamma_{Z \rightarrow i/f}$ are the partial decay widths.
The $p_i$ factor corresponds to the density of states due to the incident momentum, and is supposedly the momentum in the centre of mass frame. But this can't be true, as in head-on collisions the momentum in the centre of mass frame is zero. While the decay rates correspond to decays to a specified state (e.g. electron and positron), the density of states was derived as if the initial and final states were a single particle, a single plane wave. I am not sure how to consider this for the case of two colliding particles, say an electron and positron with equal and opposite momenta in the lab frame, compared with stationary-target collisions in the lab frame. What is the form of $p_i$ then?
|
The classical 2d Ising model has a Hamiltonian of the form:
\begin{equation} H = -\sum_{m,n = 0}^{M,N} \left( J_1 x_{m,n}x_{m+1,n} + J_2 x_{m,n}x_{m,n+1} \right) \end{equation}
The partition function for the model can be written as the sum over all configurations of the spins $x_{ij}$ times the Boltzmann factor. Up to an overall multiplicative constant we can rewrite this as:
\begin{equation} Z = \sum_{x_{ij}}\prod_{m,n} (1+t_1 x_{mn} x_{m+1,n})(1+t_2 x_{mn} x_{m,n+1}) \end{equation}
Then we can perform the sum by whichever means we like. My current favourite follows this reference: http://link.springer.com/article/10.1007%2FBF01017042?LI=true, or see http://link.springer.com/article/10.1007/BF02896231. They re-write the partition sum as an integral over Grassmann variables. The computation is technical at points, but to my understanding they essentially sum over all loops in the plane with appropriate Boltzmann weights by a convenient choice of integral. This can be seen by expanding the exponential and recognizing that the only terms left after performing the integration will be loops that describe domain walls.
They successfully perform the integration on a finite lattice with the topology of a torus and obtain the free energy. When I have more time I will detail some of these calculations. But, for now I have a question: What if we want to consider vortices in the system?
One way to introduce two vortices into the model is to require that some finite number of neighbouring rows satisfy boundary conditions different from all the others.
Another may be to consider the system Hamiltonian on a cylinder and specify antiperiodic boundary conditions on the lower half and periodic on the upper half.
In particular, I would be interested in knowing what happens to the free energy relative to the case of no vortices in the thermodynamic limit (this may be the only tractable part of the problem).
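As a minimal anchor for any analytic manipulation of $Z$ like the ones above, one can enumerate all spin configurations on a tiny open-boundary lattice by brute force (lattice size and couplings below are arbitrary toy choices) and verify, e.g., that $Z$ reduces to $2^{MN}$ at infinite temperature:

```python
from itertools import product
import math

def Z(M, N, J1, J2, beta):
    """Brute-force partition function for a small 2d Ising lattice with
    open boundaries and H = -sum(J1 * s(m,n)*s(m+1,n) + J2 * s(m,n)*s(m,n+1))."""
    total = 0.0
    for spins in product((-1, 1), repeat=M * N):
        s = lambda m, n: spins[m * N + n]
        H = 0.0
        for m in range(M):
            for n in range(N):
                if m + 1 < M:
                    H -= J1 * s(m, n) * s(m + 1, n)
                if n + 1 < N:
                    H -= J2 * s(m, n) * s(m, n + 1)
        total += math.exp(-beta * H)
    return total

print(Z(2, 2, 1.0, 1.0, 0.0))  # → 16.0, i.e. 2^(M*N) at beta = 0
```

By Jensen's inequality (the uniform average of $H$ over configurations is zero), $Z(\beta) \geq 2^{MN}$ for all $\beta$, which the brute-force sum also confirms.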
|
Kutta-Merson method
A five-stage Runge–Kutta method with fourth-order accuracy. Applied to the Cauchy problem
$$y'(x)=f(x,y(x)),\quad x_0\leq x\leq X,\quad y(x_0)=y_0,\tag{1}$$
the method is as follows: \begin{equation}\label{eq2}\tag{2} \begin{gathered} k_1 = h f(x_0,y_0), \quad y_0 = y(x_0),\\ k_2 = h f \big( x_0 + \tfrac13 h, y_0 + \tfrac13 k_1 \big),\\ k_3 = h f \big( x_0 + \tfrac13 h, y_0 + \tfrac16 k_1 + \tfrac16 k_2 \big),\\ k_4 = h f \big( x_0 + \tfrac12 h, y_0 + \tfrac18 k_1 + \tfrac38 k_3 \big),\\ k_5 = h f \big( x_0 + h, y_0 + \tfrac12 k_1 - \tfrac32 k_3 + 2 k_4 \big),\\ y^1(x_0+h) = y_0 + \tfrac12 k_1 - \tfrac32 k_3 + 2k_4,\\ y^2(x_0+h) = y_0 + \tfrac16 k_1 + \tfrac23 k_4 + \tfrac16 k_5. \end{gathered} \end{equation} The number $R=0.2|y^1-y^2|$ serves as an estimate of the error and is used for automatic selection of the integration step. If $\epsilon$ is the prescribed accuracy of the computation, the integration step is selected as follows. First choose some initial step and start the computation by \eqref{eq2} to obtain the number $R$. If $R>\epsilon$, divide the integration step by 2; if $R\leq\epsilon/64$, double it. If $\epsilon/64<R\leq\epsilon$, the selected integration step is satisfactory. Now replace the initial point $x_0$ by $x_0+h$ and repeat the entire procedure. This yields an approximate solution $y^2$; the quantity $y^1$ is mainly auxiliary.
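A direct transcription of scheme \eqref{eq2} as a single step (the test equation $y'=y$ and the step size are my own choices for illustration):

```python
import math

def merson_step(f, x0, y0, h):
    """One step of the Kutta-Merson scheme (2): returns the fourth-order
    approximation y2 and the error estimate R = 0.2*|y1 - y2|."""
    k1 = h * f(x0, y0)
    k2 = h * f(x0 + h / 3, y0 + k1 / 3)
    k3 = h * f(x0 + h / 3, y0 + k1 / 6 + k2 / 6)
    k4 = h * f(x0 + h / 2, y0 + k1 / 8 + 3 * k3 / 8)
    k5 = h * f(x0 + h,     y0 + k1 / 2 - 3 * k3 / 2 + 2 * k4)
    y1 = y0 + k1 / 2 - 3 * k3 / 2 + 2 * k4   # auxiliary (third-order) value
    y2 = y0 + k1 / 6 + 2 * k4 / 3 + k5 / 6   # main (fourth-order) value
    return y2, 0.2 * abs(y1 - y2)

# y' = y, y(0) = 1: one step of size 0.1 should reproduce e^0.1 to roughly 1e-8
y2, R = merson_step(lambda x, y: y, 0.0, 1.0, 0.1)
print(abs(y2 - math.exp(0.1)) < 1e-7)  # → True
```

In a full integrator, $R$ would then be compared against the prescribed accuracy $\epsilon$ to halve or double $h$ as described above.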
Since
$$y^2(x_0+h)=y_0+\frac16k_1+\frac23k_4+\frac16hf(x_0+h,y^1(x_0+h)),$$
i.e. the formula for $y^1$ is as it were "nested" in the formula for $y^2$, the method described here for the estimation of the error and the selection of the integration step is known as an imbedded Runge–Kutta method.
References
[1] J. Christiansen, "Numerical solution of ordinary simultaneous differential equations of the 1st order using a method for automatic step change" Numer. Math., 14 (1970) pp. 317–324
[2] P.M. Lukehart, "Algorithm 218. Kutta Merson" Comm. Assoc. Comput. Mach., 6 : 12 (1963) pp. 737–738
[3] L. Fox, "Numerical solution of ordinary and partial differential equations", Pergamon (1962)
[4] G.N. Lance, "Numerical methods for high speed computers", Iliffe (1960)
Comments
Often Runge's name is added: Runge–Kutta–Merson method.
The (Runge–) Kutta–Merson method is due to R.H. Merson [a6]. The order of the numerical approximation defined by $y^2$ is four and that of the auxiliary (reference) solution $y^1$ is three. Hence, in general, the difference of these two numerical approximations is only of order three, so that a conservative error estimate results (i.e., it overestimates the local error $y(x_0+h)-y^2$ for small $h$). However, for linear equations with constant coefficients, a correct estimate of the local error is obtained as $h\to0$. This can be shown by observing that for linear equations
$$y^1=\left[1+h\frac{d}{dx}+\frac{\left(h\frac{d}{dx}\right)^2}{2}+\frac{\left(h\frac{d}{dx}\right)^3}{6}+\frac{\left(h\frac{d}{dx}\right)^4}{24}\right]y(x_0),$$
$$y^2=y^1+\frac{\left(h\frac{d}{dx}\right)^5}{144}y(x_0),$$
$$y(x_0+h)=y^1+\left[\frac{\left(h\frac{d}{dx}\right)^5}{120}+O(h^6)\right]y(x_0).$$
Thus,
$$R=0.2|y^1-y^2|=\left|\frac{\left(h\frac{d}{dx}\right)^5}{720}y(x_0)\right|$$
and
$$y(x_0+h)-y^2=\left[\frac{\left(h\frac{d}{dx}\right)^5}{720}+O(h^6)\right]y(x_0),$$
so that $R$ equals the local error $y(x_0+h)-y^2$ within order $h^6$. A further discussion of this method may be found in [a1] and [a5]. A Fortran code of the Kutta–Merson method is available in the NAG library. The Kutta–Merson method is the earliest proposed method belonging to the family of imbedded methods. Higher-order imbedded formulas providing asymptotically-correct approximations to the local error have been derived by E. Fehlberg [a3], [a4]; they have the additional feature that the error constants of the main formula (the formula of highest order) are minimized. However, since the relation between the true (global) error and the local error is generally not known, it is questionable whether one should use the highest-order formula as the main formula, and modern insights advocate to interchange the roles of the main formula and the reference formula (see [a5]). The recently developed imbedded method of J.R. Dormand and P.J. Prince [a2], which combines an eighth-order formula with seventh-order reference formula, is considered as one of the most efficient high-accuracy methods nowadays available [a5].
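This agreement is easy to observe numerically for the linear test equation $y'=y$. The following Python sketch (my own illustration) performs one Kutta–Merson step and compares $R$ with the true local error; for small $h$ the ratio is close to 1, as the expansions above predict.

```python
import math

def merson_pair(h, y0=1.0):
    """One Kutta-Merson step for y' = y; returns (y1, y2)."""
    f = lambda x, y: y
    k1 = h * f(0.0, y0)
    k2 = h * f(h/3, y0 + k1/3)
    k3 = h * f(h/3, y0 + k1/6 + k2/6)
    k4 = h * f(h/2, y0 + k1/8 + 3*k3/8)
    k5 = h * f(h,   y0 + k1/2 - 3*k3/2 + 2*k4)
    y1 = y0 + k1/2 - 3*k3/2 + 2*k4
    y2 = y0 + k1/6 + 2*k4/3 + k5/6
    return y1, y2

h = 0.05
y1, y2 = merson_pair(h)
R = 0.2 * abs(y1 - y2)        # for this equation, exactly h^5/720
err = abs(math.exp(h) - y2)   # true local error after one step
print(R / err)                # close to 1 for small h
```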
References
[a1] J.C. Butcher, "The numerical analysis of ordinary differential equations. Runge–Kutta and general linear methods", Wiley (1987)
[a2] J.R. Dormand, P.J. Prince, "A family of embedded Runge–Kutta formulae" J. Comp. Appl. Math., 6 (1980) pp. 19–26
[a3] E. Fehlberg, "Classical fifth-, sixth-, seventh-, and eighth-order Runge–Kutta formulas with stepsize control" NASA Techn. Rep., 287 (Abstract in: Computing (1969), 93–106)
[a4] E. Fehlberg, "Low-order classical Runge–Kutta formulas with stepsize control and their application to some heat transfer problems" NASA Techn. Rep., 315 (Abstract in: Computing (1969), 61–71)
[a5] E. Hairer, S.P. Nørsett, G. Wanner, "Solving ordinary differential equations", I. Nonstiff problems, Springer (1987)
[a6] R.H. Merson, "An operational method for the study of integration processes", Proc. Symp. Data Processing, Weapons Res. Establ. Salisbury, Salisbury (1957) pp. 110–125
[a7] S.O. Fatunla, "Numerical methods for initial value problems in ordinary differential equations", Acad. Press (1988)
How to Cite This Entry:
Kutta–Merson method.
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Kutta%E2%80%93Merson_method&oldid=22696
|
I have the following problem that asks me to solve for the "maximal growth portfolio."
Suppose that the equilibrium stochastic discount factor evolves as $$ \log S_{t+1} - \log S_t = \kappa_s(X_t, W_{t+1}). $$Solve the following maximization problem: \begin{equation*} \begin{aligned} & \underset{x}{\text{maximize}} & & E[\log(R_{t+1})] \\ & \text{subject to} & & E\left[ \frac{S_{t+1}}{S_t} R_{t+1}\, \middle | X_t = x \right] = 1. \end{aligned} \end{equation*}
It seems clear how we would derive the following solution: $R^*_{t+1} = \exp(-\kappa(X_t, W_{t+1})) = \frac{S_t}{S_{t+1}}$. However, my question is related to understanding the economics behind this problem. The constraint in this problem is clearly saying that whatever portfolio we construct, it must be fairly priced (given the stochastic discount factor process $S_t$). However, I don't understand why there would be a portfolio that produces a "maximal expected return." So, here are my two questions:
Why isn't the objective here unbounded? Can't we always choose a portfolio with a higher expected return (with correspondingly higher risk)? Also, why is there a log in the objective? Would this problem work just the same as having the objective be $E[R_{t+1}]$?
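A Monte Carlo sketch may help build intuition (all distributions and parameters below are illustrative assumptions, not part of the problem): with $M_{t+1} = S_{t+1}/S_t$, the return $R^* = 1/M_{t+1}$ satisfies the pricing constraint by construction, and any other fairly-priced return has a lower sample average of $\log R$, since $\frac{1}{N}\sum\log(M R) \le \log\big(\frac{1}{N}\sum M R\big) = 0$ by Jensen's inequality. With the objective $E[R_{t+1}]$ instead of $E[\log R_{t+1}]$, the problem would typically be unbounded over fairly-priced payoffs, which is one reason the log is there.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Hypothetical lognormal SDF growth M = S_{t+1}/S_t (illustrative parameters)
M = np.exp(rng.normal(-0.03, 0.20, n))

R_star = 1.0 / M                        # candidate maximal-growth return
print(np.mean(M * R_star))              # = 1 exactly: priced by construction

# Any other payoff X, rescaled so that it also satisfies E[M R] = 1
X = np.exp(rng.normal(0.05, 0.30, n))
R_alt = X / np.mean(M * X)

# Jensen: mean(log(M*R_alt)) <= log(mean(M*R_alt)) = 0, so R_star wins in log terms
print(np.mean(np.log(R_star)), np.mean(np.log(R_alt)))
```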
|
This is an interesting question! I apologize if the following is unnecessarily verbose, but there's a lot of math to keep track of.
If you have an object $P$ moving around in an arbitrary time-dependent (but rigid) coordinate frame $\mathcal{B}$ with possibly moving origin $\mathcal{O'}$, you can check how accurately Newton's laws describe the dynamics of that object as if it were in an inertial reference frame, by checking the magnitude of the acceleration components not present in an inertial frame relative to some arbitrary stationary frame $\mathcal{F}$ with origin $\mathcal{O}$.
Concretely, you can calculate some sort of inertial ratio vector $\vec{\mathcal{I}}$ that compares the magnitudes of the accelerations you'd see if $P$ was moving in an inertial frame versus the accelerations you'd see if it was not:
$$\vec{\mathcal{I}} = \frac{\vec{a}_{P/\mathcal{B}}}{\vec{a}_{\mathcal{O}/\mathcal{O'}}+ 2\vec{\omega}_\mathcal{B}\times\vec{v}_{P/\mathcal{B}}+ \vec{\alpha}_\mathcal{B}\times\vec{r}_{P/\mathcal{O'}}+\vec{\omega}_\mathcal{B}\times(\vec{\omega}_\mathcal{B}\times\vec{r}_{P/\mathcal{O'}})+\vec{a}_{P/\mathcal{B}}}$$
where $/$ indicates "relative to", $\vec{r}$'s, $\vec{v}$'s and $\vec{a}$'s are positions, velocities and accelerations respectively, and $\vec{\omega}$'s and $\vec{\alpha}$'s are angular velocities and accelerations respectively.
If $\mathcal{B}$ is inertial, then $\vec{\mathcal{I}} = \vec{1}$; the non-inertial effects can bring this down to at worst $\vec{\mathcal{I}} = \vec{0}$, which is also the case if the object $P$ is not accelerating relative to $\mathcal{B}$ (i.e. there are no net "real" forces on the object $P$ in this frame).
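As a made-up numerical illustration, one can compare magnitudes instead of the componentwise vector ratio (a scalar variant of $\vec{\mathcal{I}}$; every number below is arbitrary):

```python
import numpy as np

# Illustrative numbers: frame B rotates at 0.5 rad/s about z and spins up slowly
omega = np.array([0.0, 0.0, 0.5])    # angular velocity of B
alpha = np.array([0.0, 0.0, 0.01])   # angular acceleration of B
a_O   = np.array([0.0, 0.0, 0.0])    # acceleration of B's origin
r     = np.array([2.0, 0.0, 0.0])    # position of P in B
v     = np.array([0.0, 1.0, 0.0])    # velocity of P relative to B
a_rel = np.array([0.3, 0.0, 0.0])    # acceleration of P relative to B

coriolis    = 2.0 * np.cross(omega, v)
euler       = np.cross(alpha, r)
centripetal = np.cross(omega, np.cross(omega, r))

total = a_O + coriolis + euler + centripetal + a_rel
ratio = np.linalg.norm(a_rel) / np.linalg.norm(total)
print(ratio)   # well below 1: Newton's laws in B would be a poor approximation here
```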
Hope this helps!
|
Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV
(Springer, 2012-09)
Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ...
Measurement of inelastic, single- and double-diffraction cross sections in proton-proton collisions at the LHC with ALICE
(Springer, 2013-06)
Measurements of cross sections of inelastic and diffractive processes in proton--proton collisions at LHC energies were carried out with the ALICE detector. The fractions of diffractive processes in inelastic collisions ...
Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Elsevier, 2016-03)
The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...
Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-06)
The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...
Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV
(Elsevier, 2016-03)
Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...
Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV
(Elsevier, 2016-07)
The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...
Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC
(Springer, 2017-01)
The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC
(Springer, 2017-06)
We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
Transverse Momentum Distribution and Nuclear Modification Factor of Charged Particles in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV
(American Physical Society, 2013-02)
The transverse momentum ($p_T$) distribution of primary charged particles is measured in non single-diffractive p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV with the ALICE detector at the LHC. The $p_T$ spectra measured ...
Measurement of azimuthal correlations of D mesons and charged particles in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV
(Springer, 2017-04)
The azimuthal correlations of D mesons and charged particles were measured with the ALICE detector in pp collisions at $\sqrt{s}=7$ TeV and p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV at the Large Hadron Collider. ...
|
Consider an improper integral such that: $$I = \int_0^{+\infty} \frac{f(x)}{x}dx.$$
If $\int_0^{+\infty}f(x)dx < +\infty$, can we conclude that the integral $I$ converges? Thanks for any answer or suggestion.
No. Consider $f=1_{[0,1]}$, that is, $f(x)=1$ when $x\in[0,1]$, and $0$ otherwise.
Another example: take $f(x)=e^{-x}$.
The answer is a partial YES and a partial NO.
If you split the Riemann improper integral into two pieces:
$$\int_0^\infty \frac{f(x)}{x} dx \stackrel{def}{=} \lim_{\Lambda \to \infty, \lambda \to 0} \int_\lambda^\Lambda \frac{f(x)}{x} dx = \lim_{\Lambda\to\infty} \int_1^\Lambda \frac{f(x)}{x} dx + \lim_{\lambda\to 0} \int_\lambda^1 \frac{f(x)}{x}dx $$ The first piece, for large $x$, exists. This is because the condition $$\int_0^\infty f(x)dx \stackrel{def}{=} \lim_{\Lambda\to\infty,\lambda \to 0} \int_\lambda^\Lambda f(x)dx < \infty$$ implies that, as a function of $\Lambda$, the integral $\displaystyle\;\int_1^\Lambda f(x) dx\;$ converges and hence is bounded as $\Lambda \to \infty$.
This in turn implies the collection of integrals $\displaystyle\;\int_a^b f(x) dx\;$ for $(a,b) \subset (1,\infty)$ is uniformly bounded. Since $\frac{1}{x}$ is monotonically decreasing to $0$ as $x \to \infty$, Dirichlet's test for improper integrals tells us the following limit exists: $$\lim_{\Lambda\to\infty} \int_1^\Lambda \frac{f(x)}{x} dx$$
However, this doesn't mean $\displaystyle\;\int_0^\infty \frac{f(x)}{x} dx\;$ exists. This is because the second piece for small $x$, $\displaystyle\;\lim_{\lambda\to 0}\int_\lambda^1 \frac{f(x)}{x} dx\;$, can blow up. A simple example is to take $f(x)$ to be any function equal to $1$ on $[0,1]$. Independently of how well behaved $f(x)$ is for $x > 1$, we have
$$\int_\lambda^1 \frac{f(x)}{x}dx = \int_\lambda^1 \frac{1}{x} dx = -\log\lambda \to +\infty \quad\text{ as }\quad\lambda \to 0$$
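A quick numerical illustration of both halves, taking $f(x) = e^{-x}$ (so $\int_0^\infty f < \infty$): the tail piece converges, while the piece near $0$ grows like $-\log\lambda$. (The mpmath usage below is my own illustration.)

```python
from mpmath import mp, quad, exp

mp.dps = 20
f = lambda x: exp(-x) / x

# the piece near zero blows up like -log(lam)
vals = [quad(f, [lam, 1]) for lam in (1e-2, 1e-4, 1e-6)]
print(vals)

# the piece at infinity is finite (Dirichlet-type behaviour)
tail = quad(f, [1, mp.inf])
print(tail)
```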
|
I have been told that $$[\hat x^2,\hat p^2]=2i\hbar (\hat x\hat p+\hat p\hat x)$$ illustrates
. operator ordering ambiguity
What does that mean? I tried googling but to no avail.
The ordering ambiguity is the statement – or the "problem" – that for a classical function $f(x,p)$, or a function of analogous phase space variables, there may exist multiple operators $\hat f(\hat x,\hat p)$ that represent it. In particular, the quantum Hamiltonian isn't uniquely determined by the classical limit.
This ambiguity appears even if we require the quantum operator corresponding to a real function to be Hermitian and $x^2 p^2$ is the simplest demonstration of this "more serious" problem. On one hand, the Hermitian part of $\hat x^2 \hat p^2$ is $$ \hat x^2 \hat p^2 - [\hat x^2,\hat p^2]/2 = \hat x^2\hat p^2 -i\hbar (\hat x\hat p+\hat p\hat x)$$ where I used your commutator.
On the other hand, we may also classically write the product and add the hats as $\hat x \hat p^2\hat x$ which is already Hermitian. But $$ \hat x \hat p^2\hat x = \hat x^2 \hat p^2+\hat x[\hat p^2,\hat x] = \hat x^2\hat p^2-2i\hbar\hat x\hat p $$ where you see that the correction is different because $\hat x\hat p+\hat p\hat x$ isn't quite equal to $2\hat x\hat p$ (there's another, $c$-valued commutator by which they differ). So even when you consider the Hermitian parts of the operators "corresponding" to classical functions, there will be several possible operators that may be the answer. The $x^2p^2$ is the simplest example and the two answers we got differed by a $c$-number. For higher powers or more general functions, the possible quantum operators may differ by $q$-numbers, nontrivial operators, too.
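Both computations are easy to verify symbolically by letting $\hat x$ act as multiplication by $x$ and $\hat p = -i\hbar\,d/dx$ on a test function; here is a small sympy sketch (my own check, not part of the original argument):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', positive=True)
psi = sp.Function('psi')(x)

X = lambda g: x * g                          # position operator
P = lambda g: -sp.I * hbar * sp.diff(g, x)   # momentum operator

# [x^2, p^2] psi = 2 i hbar (x p + p x) psi
lhs = X(X(P(P(psi)))) - P(P(X(X(psi))))
rhs = 2 * sp.I * hbar * (X(P(psi)) + P(X(psi)))
assert sp.simplify(lhs - rhs) == 0

# x p^2 x psi = (x^2 p^2 - 2 i hbar x p) psi
lhs2 = X(P(P(X(psi))))
rhs2 = X(X(P(P(psi)))) - 2 * sp.I * hbar * X(P(psi))
assert sp.simplify(lhs2 - rhs2) == 0
print("both operator identities verified")
```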
This is viewed as a deep problem (perhaps too excessive a description) by the physicists who study various effective quantum mechanical models such as those with a position-dependent mass – where we need $p^2/2m(x)$ in the kinetic energy and by an expansion of $m(x)$ around a minimum or a maximum, we may get the $x^2p^2$ problem suggested above.
But the ambiguity shouldn't really be surprising because it's the quantum mechanics, and not the classical physics, that is fundamental. The quantum Hamiltonian contains all the information, including all the behavior in the classical limit. On the other hand, one can't "reconstruct" the full quantum answer out of its classical limit. If you know the limit $\lim_{\hbar\to 0} g(\hbar)$ of one variable $g(\hbar)$, it clearly doesn't mean that you know the whole function $g(\hbar)$ for any $\hbar$.
Many people don't get this fundamental point because they think of classical physics as the fundamental theory and they consider quantum mechanics just a confusing cherry on a pie that may nevertheless be obtained by quantization, a procedure they consider canonical and unique (just adding hats). It's the other way around: quantum mechanics is fundamental, classical physics is just a derivable approximation valid in a limit, and the process of quantization doesn't produce unique results for a sufficiently general classical limit.
The ordering ambiguity also arises in field theory. In that case, all the ambiguous corrections are actually divergent, due to short-distance singularities, and the proper definition of the quantum theory requires one to understand renormalization. At the end, what we should really be interested in is the space of relevant/consistent quantum theories, not "the right quantum counterpart" of a classical theory (the latter isn't fundamental so it shouldn't stand at the beginning or base of our derivations).
In the path-integral approach, one effectively deals with classical fields and their classical functions so the ordering ambiguities seem to be absent; in reality, all the consequences of these ambiguities reappear anyway due to the UV divergences that must be regularized and renormalized. The process of regularization and renormalization depends on the subtraction of various divergent counterterms, to get the finite answer, which isn't quite unique, either (the finite leftover coupling may be anything).
That's why the renormalization ambiguities are just the ordering ambiguities in a different language. Whether we study those things as ordering ambiguities or renormalization ambiguities, the lesson is clear: the space of possible classical theories isn't the same thing as the space of possible quantum theories and we shouldn't think about the classical answers when we actually want to do something else – to solve the problems in quantum mechanics.
|
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV
(Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
|
Measurement of transverse energy at midrapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV
(American Physical Society, 2016-09)
We report the transverse energy ($E_{\mathrm T}$) measured with ALICE at midrapidity in Pb-Pb collisions at ${\sqrt{s_{\mathrm {NN}}}}$ = 2.76 TeV as a function of centrality. The transverse energy was measured using ...
Elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV
(Springer, 2016-09)
The elliptic flow of electrons from heavy-flavour hadron decays at mid-rapidity ($|y| < 0.7$) is measured in Pb–Pb collisions at $\sqrt{s_{\rm NN}}= 2.76$ TeV with ALICE at the LHC. The particle azimuthal distribution with ...
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV
(Springer, 2016-09)
The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
D-meson production in $p$–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV and in $pp$ collisions at $\sqrt{s}=7$ TeV
(American Physical Society, 2016-11)
The production cross sections of the prompt charmed mesons D$^0$, D$^+$, D$^{*+}$ and D$_{\rm s}^+$ were measured at mid-rapidity in p-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}=5.02$ TeV ...
Azimuthal anisotropy of charged jet production in $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions
(Elsevier, 2016-02)
This paper presents measurements of the azimuthal dependence of charged jet production in central and semi-central $\sqrt{s_{\rm NN}}=2.76$ TeV Pb–Pb collisions with respect to the second harmonic event plane, quantified ...
Jet shapes in pp and Pb–Pb collisions at ALICE
(Elsevier, 2016)
The aim of this work is to explore possible medium modifications to the substructure of inclusive charged jets in Pb-Pb relative to proton-proton collisions by measuring a set of jet shapes. The set of shapes includes the ...
Particle identification in ALICE: a Bayesian approach
(Springer Berlin Heidelberg, 2016-05-25)
We present a Bayesian approach to particle identification (PID) within the ALICE experiment. The aim is to more effectively combine the particle identification capabilities of its various detectors. After a brief explanation ...
|
Recent developments of the CERN RD50 collaboration / Menichelli, David (U. Florence (main) ; INFN, Florence)/CERN RD50 The objective of the RD50 collaboration is to develop radiation hard semiconductor detectors for very high luminosity colliders, particularly to face the requirements of the possible upgrade of the large hadron collider (LHC) at CERN. Some of the RD50 most recent results about silicon detectors are reported in this paper, with special reference to: (i) the progresses in the characterization of lattice defects responsible for carrier trapping; (ii) charge collection efficiency of n-in-p microstrip detectors, irradiated with neutrons, as measured with different readout electronics; (iii) charge collection efficiency of single-type column 3D detectors, after proton and neutron irradiations, including position-sensitive measurement; (iv) simulations of irradiated double-sided and full-3D detectors, as well as the state of their production process. 2008 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 596 (2008) 48-52 In : 8th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 27 - 29 Jun 2007, pp.48-52
Performance of irradiated bulk SiC detectors / Cunningham, W (Glasgow U.) ; Melone, J (Glasgow U.) ; Horn, M (Glasgow U.) ; Kazukauskas, V (Vilnius U.) ; Roy, P (Glasgow U.) ; Doherty, F (Glasgow U.) ; Glaser, M (CERN) ; Vaitkus, J (Vilnius U.) ; Rahman, M (Glasgow U.)/CERN RD50 Silicon carbide (SiC) is a wide bandgap material with many excellent properties for future use as a detector medium. We present here the performance of irradiated planar detector diodes made from 100-$\mu \rm{m}$-thick semi-insulating SiC from Cree. [...] 2003 - 5 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 509 (2003) 127-131 In : 4th International Workshop on Radiation Imaging Detectors, Amsterdam, The Netherlands, 8 - 12 Sep 2002, pp.127-131
Measurements and simulations of charge collection efficiency of p$^+$/n junction SiC detectors / Moscatelli, F (IMM, Bologna ; U. Perugia (main) ; INFN, Perugia) ; Scorzoni, A (U. Perugia (main) ; INFN, Perugia ; IMM, Bologna) ; Poggi, A (Perugia U.) ; Bruzzi, M (Florence U.) ; Lagomarsino, S (Florence U.) ; Mersi, S (Florence U.) ; Sciortino, Silvio (Florence U.) ; Nipoti, R (IMM, Bologna) Due to its excellent electrical and physical properties, silicon carbide can represent a good alternative to Si in applications like the inner tracking detectors of particle physics experiments (RD50, LHCC 2002–2003, 15 February 2002, CERN, Ginevra). In this work p$^+$/n SiC diodes realised on a medium-doped ($1 \times 10^{15} \rm{cm}^{−3}$), 40 $\mu \rm{m}$ thick epitaxial layer are exploited as detectors and measurements of their charge collection properties under $\beta$ particle radiation from a $^{90}$Sr source are presented. [...] 2005 - 4 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 546 (2005) 218-221 In : 6th International Workshop on Radiation Imaging Detectors, Glasgow, UK, 25-29 Jul 2004, pp.218-221
Measurement of trapping time constants in proton-irradiated silicon pad detectors / Krasel, O (Dortmund U.) ; Gossling, C (Dortmund U.) ; Klingenberg, R (Dortmund U.) ; Rajek, S (Dortmund U.) ; Wunstorf, R (Dortmund U.) Silicon pad-detectors fabricated from oxygenated silicon were irradiated with 24-GeV/c protons with fluences between $2 \cdot 10^{13} \ n_{\rm{eq}}/\rm{cm}^2$ and $9 \cdot 10^{14} \ n_{\rm{eq}}/\rm{cm}^2$. The transient current technique was used to measure the trapping probability for holes and electrons. [...] 2004 - 8 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 3055-3062 In : 50th IEEE 2003 Nuclear Science Symposium, Medical Imaging Conference, 13th International Workshop on Room Temperature Semiconductor Detectors and Symposium on Nuclear Power Systems, Portland, OR, USA, 19 - 25 Oct 2003, pp.3055-3062
Lithium ion irradiation effects on epitaxial silicon detectors / Candelori, A (INFN, Padua ; Padua U.) ; Bisello, D (INFN, Padua ; Padua U.) ; Rando, R (INFN, Padua ; Padua U.) ; Schramm, A (Hamburg U., Inst. Exp. Phys. II) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) ; Wyss, J (Cassino U. ; INFN, Pisa) Diodes manufactured on a thin and highly doped epitaxial silicon layer grown on a Czochralski silicon substrate have been irradiated by high energy lithium ions in order to investigate the effects of high bulk damage levels. This information is useful for possible developments of pixel detectors in future very high luminosity colliders because these new devices present superior radiation hardness than nowadays silicon detectors. [...] 2004 - 7 p. - Published in : IEEE Trans. Nucl. Sci. 51 (2004) 1766-1772 In : 13th IEEE-NPSS Real Time Conference 2003, Montreal, Canada, 18 - 23 May 2003, pp.1766-1772
Radiation hardness of different silicon materials after high-energy electron irradiation / Dittongo, S (Trieste U. ; INFN, Trieste) ; Bosisio, L (Trieste U. ; INFN, Trieste) ; Ciacchi, M (Trieste U.) ; Contarato, D (Hamburg U., Inst. Exp. Phys. II) ; D'Auria, G (Sincrotrone Trieste) ; Fretwurst, E (Hamburg U., Inst. Exp. Phys. II) ; Lindstrom, G (Hamburg U., Inst. Exp. Phys. II) The radiation hardness of diodes fabricated on standard and diffusion-oxygenated float-zone, Czochralski and epitaxial silicon substrates has been compared after irradiation with 900 MeV electrons up to a fluence of $2.1 \times 10^{15} \ \rm{e} / cm^2$. The variation of the effective dopant concentration, the current related damage constant $\alpha$ and their annealing behavior, as well as the charge collection efficiency of the irradiated devices have been investigated. 2004 - 7 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 530 (2004) 110-116 In : 6th International Conference on Large Scale Applications and Radiation Hardness of Semiconductor Detectors, Florence, Italy, 29 Sep - 1 Oct 2003, pp.110-116
Recovery of charge collection in heavily irradiated silicon diodes with continuous hole injection / Cindro, V (Stefan Inst., Ljubljana) ; Mandić, I (Stefan Inst., Ljubljana) ; Kramberger, G (Stefan Inst., Ljubljana) ; Mikuž, M (Stefan Inst., Ljubljana ; Ljubljana U.) ; Zavrtanik, M (Ljubljana U.) Holes were continuously injected into irradiated diodes by light illumination of the n$^+$-side. The charge of holes trapped in the radiation-induced levels modified the effective space charge. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 343-345 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.343-345
First results on charge collection efficiency of heavily irradiated microstrip sensors fabricated on oxygenated p-type silicon / Casse, G (Liverpool U.) ; Allport, P P (Liverpool U.) ; Martí i Garcia, S (CSIC, Catalunya) ; Lozano, M (Barcelona, Inst. Microelectron.) ; Turner, P R (Liverpool U.) Heavy hadron irradiation leads to type inversion of n-type silicon detectors. After type inversion, the charge collected at low bias voltages by silicon microstrip detectors is higher when read out from the n-side compared to p-side read out. [...] 2004 - 3 p. - Published in : Nucl. Instrum. Methods Phys. Res., A 518 (2004) 340-342 In : 9th Pisa Meeting on Advanced Detectors, La Biodola, Italy, 25 - 31 May 2003, pp.340-342
Formation and annealing of boron-oxygen defects in irradiated silicon and silicon-germanium n$^+$–p structures / Makarenko, L F (Belarus State U.) ; Lastovskii, S B (Minsk, Inst. Phys.) ; Korshunov, F P (Minsk, Inst. Phys.) ; Moll, M (CERN) ; Pintilie, I (Bucharest, Nat. Inst. Mat. Sci.) ; Abrosimov, N V (Unlisted, DE) New findings on the formation and annealing of interstitial boron-interstitial oxygen complex ($\rm{B_iO_i}$) in p-type silicon are presented. Different types of n+−p structures irradiated with electrons and alpha-particles have been used for DLTS and MCTS studies. [...] 2015 - 4 p. - Published in : AIP Conf. Proc. 1583 (2015) 123-126
|
Introduction:
Mankind has been fascinated with $\pi$, the ratio between the circumference of a circle and its diameter, for at least 2500 years. Ancient Hebrews used the approximation 3 (see 1 Kings 7:23 and 2 Chron. 4:2). Babylonians used the approximation 3 1/8. Archimedes, in the first rigorous analysis of $\pi$, proved that 3 10/71 < $\pi$ < 3 1/7, by means of a sequence of inscribed and circumscribed polygons. Later scholars in India (where decimal arithmetic was first developed, at least by 300 CE), China and the Middle East computed $\pi$ ever more accurately. In 1665, Newton computed 16 digits, but, as he later confessed, “I am ashamed to tell you to how many figures I carried these computations, having no other business at the time.” In 1844 the computing prodigy Johann Dase produced 200 digits. In 1874, Shanks published 707 digits, although later it was found that only the first 527 were correct.
In the 20th century, $\pi$ was computed to thousands, then to millions, then to billions of digits, in part due to some remarkable new formulas and algorithms for $\pi$, and in part due to clever computational techniques, all accelerated by the relentless advance of Moore’s Law. The most recent computation, as far as the present author is aware, produced 22.4 trillion digits. One may download up to the first trillion digits here. For additional details on the history and computation of $\pi$, see The quest for $\pi$ and The computation of previously inaccessible digits of pi^2 and Catalan’s constant.
For many years, part of the motivation for computing $\pi$ was to answer the question of whether $\pi$ was a rational number — if $\pi$ was rational, the decimal expansion would eventually repeat. But given that no repetitions were found in early computations, many leading mathematicians of the 17th and 18th centuries concluded that $\pi$ must be irrational. In 1761, Johann Heinrich Lambert settled the question by proving that $\pi$ is irrational. Then in 1882 Ferdinand von Lindemann proved that $\pi$ is transcendental, which proved once and for all that the ancient Greek problem of squaring the circle is impossible (because ruler-and-compass constructions can only produce power-of-two degree algebraic numbers).
We present here what we believe to be the simplest proof that $\pi$ is irrational. It was first published in the 1930s as an exercise in the Bourbaki treatise on calculus. It requires only a familiarity with integration, including integration by parts, which is a staple of any high school or first-year college calculus course. It is similar to, but simpler than (in the present author’s opinion), a related proof due to Ivan Niven.
Gist of the proof: The basic idea is to define a function $A_n(b)$, based on an integral from $0$ to $\pi$. This function has the property that for each positive integer $b$ and for all sufficiently large integers $n$, $A_n(b)$ lies strictly between 0 and 1. Yet another line of reasoning, assuming $\pi$ is a rational number, and applying integration by parts, concludes that $A_n(b)$ must be an integer. This contradiction shows that $\pi$ must be irrational. THEOREM: $\pi$ is irrational. Proof: For each positive integer $b$ and non-negative integer $n$, define $$A_n(b) = b^n \int_0^\pi \frac{x^n(\pi - x)^n \sin(x)}{n!} \, {\rm d}x.$$ Note that the integrand function of $A_n(b)$ is zero at $x = 0$ and $x = \pi$, but is strictly positive elsewhere in the integration interval. Thus $A_n(b) \gt 0$. In addition, since $x (\pi - x) \le (\pi/2)^2$, we can write $$A_n(b) \le \frac{\pi b^n}{n!}\left(\frac{\pi}{2}\right)^{2n} = \frac{\pi (b \pi^2 / 4)^n}{n!},$$ which is less than one for large $n$, since, by Stirling’s formula, the $n!$ in the denominator increases faster than the $n$-th power in the numerator. Thus we have established, for any integer $b \ge 1$ and all sufficiently large $n$, that $0 \lt A_n(b) \lt 1.$
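As a sanity check (not part of the proof), the bound $0 < A_n(b) < 1$ can be confirmed numerically; for instance, with $b = 1$ and $n = 10$ the value is tiny but positive. A short mpmath sketch:

```python
from mpmath import mp, mpf, pi, sin, quad, factorial

mp.dps = 30

def A(n, b):
    """A_n(b) = b^n / n! * integral_0^pi x^n (pi - x)^n sin(x) dx."""
    integrand = lambda x: x**n * (pi - x)**n * sin(x)
    return mpf(b)**n / factorial(n) * quad(integrand, [0, pi])

val = A(10, 1)
print(val)      # strictly between 0 and 1
```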
Now let us assume that $\pi = a/b$ for relatively prime positive integers $a$ and $b$. Define $f(x) = x^n (a - bx)^n / n!$, and note that $f(x) = b^n x^n (\pi - x)^n / n!$, so that $A_n(b) = \int_0^\pi f(x)\sin(x)\,{\rm d}x$. Then we can write, using repeated integration by parts, $$A_n(b) = \int_0^\pi f(x) \sin(x) \, {\rm d}x = \left[-f(x) \cos(x)\right]_0^\pi - \left[-f'(x) \sin(x)\right]_0^\pi + \cdots$$ $$ \cdots \pm \left[f^{(2n)} (x) \cos(x)\right]_0^\pi \pm \int_0^\pi f^{(2n+1)} (x) \cos(x) \, {\rm d}x.$$ Now note that for $k = 0$ to $k = n - 1$, the derivative $f^{(k)}(x)$ is zero at $x = 0$ and $x = \pi$, because the factor $x^n (a - bx)^n$ vanishes to order $n$ at both endpoints. For $k = n$ to $k = 2n$, $f^{(k)}(x)$ takes integer values at $x = 0$ and $x = \pi$: at $x = 0$, $f^{(k)}(0)$ equals $k!/n!$ times the (integer) coefficient of $x^k$ in $x^n(a - bx)^n$, and $k!/n!$ is an integer for $k \ge n$; the same holds at $x = \pi$ by the symmetry $f(x) = f(\pi - x)$. For $k = 2n + 1$, the derivative is identically zero, since $f$ is a polynomial of degree $2n$, so the final integral vanishes. Also, $\sin(x)$ is $0$ at $x = 0$ and $x = \pi$, while $\cos(x)$ is $1$ at $x = 0$ and $-1$ at $x = \pi$. Combining these facts, we conclude that $A_n(b)$ must be an integer. The contradiction with the above-proven fact that $0 \lt A_n(b) \lt 1$ proves that $\pi$ is irrational.
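As a sanity check (not part of the proof), the integral defining $A_n(b)$ can be evaluated numerically. The sketch below approximates it with the composite trapezoidal rule; it illustrates that "sufficiently large $n$" matters — for instance $A_1(1) = 4$ exactly, while $A_n(b)$ quickly drops below one as $n$ grows.

```python
import math

def A(n, b, steps=20000):
    # Trapezoidal approximation of A_n(b) = b^n/n! * integral_0^pi x^n (pi-x)^n sin(x) dx
    h = math.pi / steps
    total = 0.0
    for i in range(steps + 1):
        x = i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * (x * (math.pi - x)) ** n * math.sin(x)
    return b ** n * total * h / math.factorial(n)

print(A(1, 1))    # close to 4.0: for small n the value can exceed 1
print(A(30, 2))   # a tiny positive number, strictly between 0 and 1
```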
For other proofs in this series see the listing at Simple proofs of great theorems.
|
I don't understand why the conservation of angular momentum can imply an acceleration, in absence of a force.
Consider for instance planetary motion. The angular momentum $\vec{L}$ of a planet is conserved, which means $|\vec{L}| = mr^2 \dot{\theta} = mrv_{\theta}$ is conserved too.
Consider the acceleration in polar coordinates: $$ \vec{a} = \left( \ddot r - r\dot\theta^2 \right) \hat{\mathbf r} + \left( r\ddot\theta + 2\dot r \dot\theta\right) \hat{\boldsymbol{\theta}}. $$
The second term is zero since $\vec{L}$ is constant. This means that there is no acceleration in the direction of $\hat{\boldsymbol{\theta}}$, which is clear since the gravitational force is a central force.
But if the distance $r$ decreases, $v_{\theta}$ (i.e. the velocity in the direction of $\hat{\boldsymbol{\theta}}$) must increase in order to keep $|\vec{L}|$ constant.
How can $v_{\theta}$ increase if there is no acceleration in the direction of $ \hat{\boldsymbol{\theta}} $?
I understand that it happens because of the conservation of angular momentum, but if there is an acceleration, a force is needed. I don't see where this force comes from.
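A small numerical sketch (my own illustration, not from the original question) of what is going on: pick any shrinking $r(t)$, enforce constant $L = r^2\dot\theta$ (taking $m = 1$), and check both claims at once. The $\hat{\boldsymbol\theta}$-component of acceleration vanishes identically, yet $v_\theta = L/r$ still grows, because $\dot v_\theta = a_\theta - \dot r\dot\theta$: the unit vector $\hat{\boldsymbol\theta}$ itself rotates, so $v_\theta$ can change without any force along $\hat{\boldsymbol\theta}$.

```python
import math

# Toy check: choose r(t) freely, then fix theta_dot by constant L = r^2 * theta_dot.
L = 1.0
r = lambda t: 1.0 / (1.0 + 0.1 * t)      # radius shrinking over time
theta_dot = lambda t: L / r(t) ** 2      # enforces constant angular momentum
v_theta = lambda t: r(t) * theta_dot(t)  # = L / r(t), grows as r shrinks

# Central finite differences for the time derivatives:
h = 1e-5
r_dot = lambda t: (r(t + h) - r(t - h)) / (2 * h)
th_ddot = lambda t: (theta_dot(t + h) - theta_dot(t - h)) / (2 * h)

t0 = 2.0
a_theta = r(t0) * th_ddot(t0) + 2 * r_dot(t0) * theta_dot(t0)
print(a_theta)                        # ~0: no acceleration along theta-hat
print(v_theta(3.0) > v_theta(2.0))    # True: v_theta increases anyway
```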
|
The following is about a Left-Right Symmetric model.
$SU(2)\otimes SU(2)$ ($2\otimes 2=3\oplus 1$) will generate a triplet, which in the Left-Right Symmetric model is $$\vec{\Delta}=\begin{pmatrix}\delta_{1}\\ \delta_{2} \\ \delta_{3} \end{pmatrix}.$$ The $SU(2)$ quantum numbers/charges of these components are $$\begin{pmatrix} 1\\0\\-1 \end{pmatrix}.$$ We can write this in the $2\times 2$ representation as $\Delta=\frac{1}{2}\vec{\tau}\cdot \vec{\Delta}$, where $\vec{\tau}$ is the vector of Pauli matrices.
What is the complete mathematical explanation of $2\otimes 2=3\oplus 1$?
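One concrete way to see the decomposition (a numerical sketch of a standard argument, not from the original post): on $\mathbb{C}^2\otimes\mathbb{C}^2$ the swap operator commutes with the simultaneous $SU(2)$ action, so the product representation splits into the swap's $+1$ eigenspace (the symmetric triplet, dimension 3) and its $-1$ eigenspace (the antisymmetric singlet, dimension 1).

```python
import numpy as np

# Swap operator S(v (x) w) = w (x) v on C^2 (x) C^2, in the basis e_i (x) e_j
# (the vector e_i (x) e_j sits at flat index 2*i + j):
swap = np.zeros((4, 4))
for i in range(2):
    for j in range(2):
        swap[2 * j + i, 2 * i + j] = 1.0

# Eigenvalue +1 with multiplicity 3 (triplet), -1 with multiplicity 1 (singlet):
eigvals = np.sort(np.linalg.eigvalsh(swap))
print(eigvals)  # one -1, three +1  ->  2 (x) 2 = 3 (+) 1
```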
How to make a scalar triplet from a scalar doublet?
Answer. There is some procedure like $H^{T} i \tau_{2} \tau H$. (PS: I don't know the exact form of this at this point.)
What is the exact form? and why it is correct or how to make a cross product out of two doublet scalar fields?
How to calculate the charges of the components $\Delta$?
Answer. There is a way to calculate $T_{3L}+T_{3R}$ using a commutator, $T_{3L}+T_{3R}=\frac{1}{2}[\tau_{3},\Delta]$. Equivalently, the formula for $Q$ is $Q=T_{3L}+T_{3R}+\frac{1}{2}(B-L)=\frac{1}{2}[\tau_{3},\Delta]+ \frac{1}{2}(B-L)$.
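The commutator formula can be checked mechanically in the $2\times 2$ representation (a sketch with placeholder entries, my own illustration): for $T_3 = \tau_3/2$, entry $(i,j)$ of $[T_3, \Delta]$ equals $(t_i - t_j)\,\Delta_{ij}$ with $t = (\tfrac12, -\tfrac12)$, so the adjoint action assigns charges $0, +1, -1, 0$ to the four entries — which is exactly how the three independent components of the triplet pick up $T_{3L}+T_{3R}$ values $+1, 0, -1$.

```python
import numpy as np

T3 = np.diag([0.5, -0.5])  # tau_3 / 2

# Generic 2x2 matrix standing in for the triplet Delta (placeholder entries):
Delta = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

comm = T3 @ Delta - Delta @ T3
# [T3, Delta]_ij = (t_i - t_j) * Delta_ij, i.e. entrywise charges [[0, +1], [-1, 0]]:
charges = np.array([[0.0, 1.0],
                    [-1.0, 0.0]])
print(np.allclose(comm, charges * Delta))  # True
```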
Main Question. What is the explanation of this? I tried to work it out but failed. There is also a charge assignment which is unexplained to me and can be written as: $$\begin{pmatrix}\delta_{1}^{++}\\ \delta_{2}^{+}\\ \delta_{3}^{0}\\ \end{pmatrix}$$
Why is the $B-L$ charge for $\Delta_{L}$ not $0$, like the Higgs doublet's?
Answer. We need to break the $U(1)_{B-L}$ symmetry so we need $B-L$ charge in Triplets.
What is the other explanation of this? We could put this charge on the bidoublet too, but we are not choosing that for some reason.
Answer. The bidoublet is responsible for the symmetry breaking of the intermediate Salam-Weinberg gauge group, which requires zero $B-L$ charge.
How does the bidoublet balance the left-handed and right-handed scalar fields? That is, we need a doublet in the minimal Standard Model, so now we should need another doublet, yet instead we define a bidoublet. Won't the left-handed part disturb the right-handed fields? If not, why not?
How to distinguish between $\Delta_{L}$ & $\Delta_{R}$?
|
I am having trouble with this example :
Since the contact angle of the glass-water interface is $0^{\circ}$, the force due to surface tension should act downward on the plates (rather than along the horizontal, which would have given a clear picture of an attractive interaction).
However, I also notice that the tendency of the water would be to decrease the area of its free surface. Therefore, I tried to approach this by calculating the surface potential energy as a function of the separation between the plates, $$U = T \times A,$$ and thereby calculate the force $F$ as $F = -\frac{dU}{dx}$, where $x$ is the separation between the plates.
I am, unfortunately, still unable to solve it, because I am failing to find the expression for $U$. May I get help as to how to approach the problem and hence derive the expression for the force between the plates?
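Not a full answer, but here is how the $U = T\times A$ approach can be set up under explicit assumptions (all mine, since the problem figure is not shown): two vertical plates of width $w$ at separation $x$ trap a fixed water volume $V$, so the wetted height is $h = V/(wx)$; with contact angle $0^{\circ}$, wetting the glass lowers the energy by $T$ per unit wetted area on each plate. A symbolic sketch:

```python
import sympy as sp

T, V, w, x = sp.symbols('T V w x', positive=True)

h = V / (w * x)     # wetted height for fixed trapped volume V
U = -2 * T * w * h  # energy gained by wetting both plates (x-independent terms dropped)

F = -sp.diff(U, x)  # force conjugate to the plate separation
print(sp.simplify(F))  # -2*T*V/x**2: negative, i.e. the plates attract
```

The $1/x^2$ attraction agrees with the Laplace-pressure argument: a pressure deficit $2T/x$ acting over the wetted area $wh = V/x$ gives the same $2TV/x^2$.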
|
@egreg It does this "I just need to make use of the standard hyphenation function of LaTeX, except "behind the scenes", without actually typesetting anything." (if "not typesetting" includes typesetting in a hidden box) it doesn't address the use case that he said he wanted that for
@JosephWright ah yes, unlike the hyphenation near box question, I guess that makes sense, basically can't just rely on lccode anymore. I suppose you don't want the hyphenation code in my last answer by default?
@JosephWright anyway if we rip out all the auto-testing (since mac/windows/linux come out the same anyway) but leave in the .cfg possibility, there is no actual loss of functionality if someone is still using a vms tex or whatever
I want to change the tracking (space between the characters) for a sans serif font. I found that I can use the microtype package to change the tracking of the smallcaps font (\textsc{foo}), but I can't figure out how to make \textsc{} a sans serif font.
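For the record, a minimal preamble sketch of the direct route (from memory of microtype's documented interface; the family name `cmss` is an assumption — swap in whatever sans family your document actually uses): microtype can track a sans-serif family by naming it in `\SetTracking`, with no detour through `\textsc`.

```latex
\usepackage[tracking=true]{microtype}
% Letterspace the Computer Modern Sans family by 50/1000 em;
% replace cmss with the family \sffamily resolves to in your document:
\SetTracking{encoding = *, family = cmss}{50}
% ...then in the body:
% {\sffamily this sans-serif text is tracked}
```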
@DavidCarlisle -- if you write it as "4 May 2016" you don't need a comma (or, in the u.s., want a comma).
@egreg (even if you're not here at the moment) -- tomorrow is international archaeology day: twitter.com/ArchaeologyDay , so there must be someplace near you that you could visit to demonstrate your firsthand knowledge.
@barbarabeeton I prefer May 4, 2016, for some reason (don't know why actually)
@barbarabeeton but I have another question maybe better suited for you please: If a member of a conference scientific committee writes a preface for the special issue, can the signature say John Doe \\ for the scientific committee or is there a better wording?
@barbarabeeton overrightarrow answer will have to wait, need time to debug \ialign :-) (it's not the \smash that did it) on the other hand if we mention \ialign enough it may interest @egreg enough to debug it for us.
@DavidCarlisle -- okay. are you sure the \smash isn't involved? i thought it might also be the reason that the arrow is too close to the "M". (\smash[t] might have been more appropriate.) i haven't yet had a chance to try it out at "normal" size; after all, \Huge is magnified from a larger base for the alphabet, but always from 10pt for symbols, and that's bound to have an effect, not necessarily positive. (and yes, that is the sort of thing that seems to fascinate @egreg.)
@barbarabeeton yes I edited the arrow macros not to have relbar (ie just omit the extender entirely and just have a single arrowhead but it still overprinted when in the \ialign construct but I'd already spent too long on it at work so stopped, may try to look this weekend (but it's uktug tomorrow)
if the expression is put into an \fbox, it is clear all around. even with the \smash. so something else is going on. put it into a text block, with \newline after the preceding text, and directly following before another text line. i think the intention is to treat the "M" as a large operator (like \sum or \prod, but the submitter wasn't very specific about the intent.)
@egreg -- okay. i'll double check that with plain tex. but that doesn't explain why there's also an overlap of the arrow with the "M", at least in the output i got. personally, i think that that arrow is horrendously too large in that context, which is why i'd like to know what is intended.
@barbarabeeton the overlap below is much smaller, see the righthand box with the arrow in egreg's image, it just extends below and catches the serifs on the M, but the overlap above is pretty bad really
@DavidCarlisle -- i think other possible/probable contexts for the \over*arrows have to be looked at also. this example is way outside the contexts i would expect. and any change should work without adverse effect in the "normal" contexts.
@DavidCarlisle -- maybe better take a look at the latin modern math arrowheads ...
@DavidCarlisle I see no real way out. The CM arrows extend above the x-height, but the advertised height is 1ex (actually a bit less). If you add the strut, you end up with too big a space when using other fonts.
MagSafe is a series of proprietary magnetically attached power connectors, originally introduced by Apple Inc. on January 10, 2006, in conjunction with the MacBook Pro at the Macworld Expo in San Francisco, California. The connector is held in place magnetically so that if it is tugged — for example, by someone tripping over the cord — it will pull out of the socket without damaging the connector or the computer power socket, and without pulling the computer off the surface on which it is located.The concept of MagSafe is copied from the magnetic power connectors that are part of many deep fryers...
has anyone converted from LaTeX -> Word before? I have seen questions on the site but I'm wondering what the result is like... and whether the document is still completely editable etc after the conversion? I mean, if the doc is written in LaTeX, then converted to Word, is the word editable?
I'm not familiar with word, so I'm not sure if there are things there that would just get goofed up or something.
@baxx never use word (have a copy just because but I don't use it;-) but have helped enough people with things over the years, these days I'd probably convert to html latexml or tex4ht then import the html into word and see what comes out
You should be able to cut and paste mathematics from your web browser to Word (or any of the Microsoft Office suite). Unfortunately at present you have to make a small edit, but any text editor will do for that. Given $$x=\frac{-b\pm\sqrt{b^2-4ac}}{2a},$$ make a small html file that looks like <!...
@baxx all the convertors that I mention can deal with document \newcommand to a certain extent. if it is just \newcommand\z{\mathbb{Z}} that is no problem in any of them, if it's half a million lines of tex commands implementing tikz then it gets trickier.
@baxx yes but they are extremes but the thing is you just never know, you may see a simple article class document that uses no hard looking packages then get half way through and find \makeatletter several hundred lines of trick tex macros copied from this site that are over-writing latex format internals.
|
Bulletin of the American Physical Society APS March Meeting 2013 Volume 58, Number 1 Monday–Friday, March 18–22, 2013; Baltimore, Maryland
Session T13: Focus Session: Topological Materials - Quasi 1-dimensional Sponsoring Units: DMP
Chair: Joel Moore, University of California, Berkeley
Room: 315
Thursday, March 21, 2013
8:00AM - 8:12AM
T13.00001: Transition from fractional to Majorana fermions in Rashba nanowires
Jelena Klinovaja, Peter Stano, Daniel Loss
We study hybrid superconducting-semiconducting nanowires in the presence of Rashba spin-orbit interaction as well as helical magnetic fields [1]. We show that the interplay between them leads to a competition of phases with two topological gaps closing and reopening, resulting in unexpected reentrance behavior. Besides the topological phase with localized Majorana fermions (MFs) we find new phases characterized by fractionally charged fermion (FF) bound states of Jackiw-Rebbi type. The system can be fully gapped by the magnetic fields alone, giving rise to FFs that transmute into MFs upon turning on superconductivity. We find explicit analytical solutions for MF and FF bound states and determine the phase diagram numerically by determining the corresponding Wronskian null space. We show by renormalization group arguments that electron-electron interactions enhance the Zeeman gaps opened by the fields. [1] J. Klinovaja, P. Stano, and D. Loss, arXiv:1207.7322 (2012).
Thursday, March 21, 2013
8:12AM - 8:24AM
T13.00002: Majorana fermions in topological insulator nanoribbons with multiband occupancy
Piyapong Sitthison, Tudor Stanescu
We present the phase diagram of a topological insulator nanoribbon with proximity-induced superconductivity as a function of the chemical potential and the Zeeman field applied parallel to the ribbon. We find that, in doped topological insulator systems, both surface-like and bulk-like states contribute to the low-energy physics and that proximity-induced quantities, such as the induced superconducting pair potential, have different energy scales in these channels. We study the effect of this band-specific proximity coupling on the properties and the stability of Majorana zero-energy bound states in multiband topological insulator nanoribbons.
Thursday, March 21, 2013
8:24AM - 8:36AM
T13.00003: Time Reversal Invariant Topological Superconductors and Majorana Pairs
Fan Zhang, Eugene Mele, Charles Kane
We propose a feasible route to engineer two dimensional (2D) and one dimensional (1D) time reversal invariant topological superconductors via proximity effects. At a boundary of the 2D (1D) topological superconductor, a time reversal pair of Majorana edge (bound) states emerge as the localized midgap states. We analyze how the Majorana pair evolves in the presence of a Zeeman field, as the superconductor undergoes the symmetry class change as well as the topological phase transitions. A fractional Josephson effect with time reversal symmetry occurs in the presence of a mirror symmetry, realizing a topological crystalline superconducting state. We also briefly discuss the possible realization in materials and the unique signature in experiments.
Thursday, March 21, 2013
8:36AM - 9:12AM
T13.00004: Ripple modulated electronic structure of a 3D topological insulator
Invited Speaker: Vidya Madhavan
Many of the unusual properties of topological insulators can only be realized through a delicate tuning of the Dirac surface state, rendering their detection thus far elusive. We have discovered that the surface state dispersion of a prototypical topological insulator can be continuously tuned via a novel topographical route. STM images of Bi$_{2}$Te$_{3}$ show one-dimensional (striped) ripples with 100 nm periodicity. By combining information from Landau level spectra [1] and Fourier transforms of interference patterns [2] we show that the ripples induce spatial modulations in the surface state dispersion. We describe how the ripples create topological channels for chiral spin modes at the boundaries such that placing the Fermi energy between the Landau levels of these periodic stripes would result in the first experimental realization of the ideal 1D dissipationless quantum wire. This ability to tune the surface state dispersion locally opens the door to a host of new phenomena in topological insulators. [1] Yoshinori Okada, Wenwen Zhou, C. Dhital, D. Walkup, Ying Ran, Z. Wang, Stephen D. Wilson & V. Madhavan, Visualizing Landau levels of Dirac electrons in a one dimensional potential, Phys. Rev. Lett. 109, 166407 (2012). [2] Yoshinori Okada, Wenwen Zhou, C. Dhital, D. Walkup, Stephen D. Wilson & V. Madhavan, Ripple modulated electronic structure of a 3D topological insulator, Nature Communications 3, 1158 (2012).
Thursday, March 21, 2013
9:12AM - 9:24AM
T13.00005: Novel giant Rashba spin splitting of holes in semiconductor nanowires for Majorana Fermions
Jun-Wei Luo, Lijun Zhang, Alex Zunger
Majorana Fermions (MFs) are particles identical to their own antiparticles that have been first theoretically predicted and then experimentally observed in hybrid superconductor-semiconductor nanowire devices. The appearance of MFs requires the (spin-orbit-induced) giant nanowire spin splitting (SS) to exceed the topological superconductor gap, a condition realized by tuning the magnetic field. Because the SS due to the conventional Dresselhaus or Rashba mechanisms is inversely proportional to the wire diameter, these mechanisms contribute only a vanishing SS ($\ll1$ meV Å) for wide ($\sim100$ nm) wires that are appropriate for device use, a significant disadvantage of nanowires for this application. Our atomistic pseudopotential calculation predicted a novel large Rashba SS in GaAs/AlAs wires under an electric field [1], which increases with the wire diameter, to the potential benefit of nanowire MF devices. This emerged automatically when the ordinary Schrödinger equation is solved in the presence of spin-orbit interaction. We will report such giant Rashba SS coefficients of the order of $\sim200$ meV Å in a number of semiconductor wire materials $\sim100$ nm wide. [1] J.W. Luo, L. Zhang, and A. Zunger, Phys. Rev. B 84, 121303(R) (2011), see Ref. 25.
Thursday, March 21, 2013
9:24AM - 9:36AM
T13.00006: Classification of the 2D topological insulator/ superconductors through their 1D Dirac edge Hamiltonians
Yi-Ting Hsu, Abolhassan Vaezi, Eun-Ah Kim
Ref. [1] analyzes the consequences of discrete symmetries for 1D Dirac Hamiltonians as candidate descriptions of 2D topological insulators/superconductors (TI/TS), formally revealing that there are multiple inequivalent representations of time reversal symmetry as required by $\mathbf{T}^\dagger H \mathbf{T}=H^*$. This is special to 1D Dirac edge Hamiltonians and leads to additional possibilities in the classification of 2D TI/TS. In this talk, we present physical implications of the multiple representations through additional hidden symmetries $X_i$ implicit in the 1D Dirac Hamiltonians. When the $X_i$ do not commute with any of the existing discrete symmetries, it is necessary to consider the $X_i$ alone as individual symmetries for the purpose of classifying the edge theory, which usually extends its classification. Graphene-based topological insulators are physical examples of a resulting new Z-type topological phase obtained through imposing an additional $U(1)$ symmetry due to the absence of inter-valley scattering. [1] D. Bernard, E.-A. Kim, and A. LeClair, arXiv:1202.5040 (2012)
Thursday, March 21, 2013
9:36AM - 9:48AM
T13.00007: Topological pi Josephson effect and Majorana states in Rashba wires
Teemu Ojanen
Rashba-based topological superconductor nanowires, where the spin-orbit coupling may change its sign, support three topological phases protected by chiral symmetry. When a superconducting phase gradient is applied over the interface of the two nontrivial phases, the Andreev spectrum is qualitatively phase shifted by $\pi$ compared to usual Majorana weak links. The topological $\pi$-junction has the striking property of exhibiting maximum supercurrent in the vicinity of vanishing phase difference. The studied system could be realized by local gating of the wire or by an appropriate stacking of permanent magnets in synthetic Rashba systems.
Thursday, March 21, 2013
9:48AM - 10:00AM
T13.00008: Majorana fermions in hybrid superconductor-semiconductor nanowire devices
Vincent Mourik, Kun Zuo, David van Woerkom, Sergey Frolov, Sebastien Plissard, Erik Bakkers, Leo Kouwenhoven
Our recent experiment carried out in hybrid superconductor-semiconductor nanowire devices gave the first experimental evidence for the existence of Majorana fermions [1]. However, some open questions need to be answered. Majorana fermions have to come in pairs, but before we were only capable of probing one Majorana fermion. Besides, Majorana fermions should be fully gate controllable, which could not be demonstrated very convincingly. Furthermore the observed conductance peak was only at 5% of the theoretically expected height of $2e^2/h$. Currently we are performing new experiments in similar but improved devices. We study three terminal normal-superconductor-normal InSb nanowire devices. This enables the possibility to simultaneously probe both Majorana fermions occurring at the ends of the superconducting contact by using tunneling spectroscopy from normal to superconducting contact. Furthermore, the devices have an improved gate design enabling more efficient gating under the superconducting contact. The first measurements already give a larger peak amplitude and the peak is visible in a larger magnetic field range. [1] V. Mourik, K. Zuo et al., Science, Vol. 336 no. 6084 pp. 1003-1007
Thursday, March 21, 2013
10:00AM - 10:12AM
T13.00009: Detecting Majorana fermions in quasi-1D topological phases using non-local order parameters
Yasaman Bahri, Ashvin Vishwanath
There has been much recent interest in realizing Majorana fermions in solid-state or cold atom systems. A primary goal has been to identify the topological phases which host them and propose routes towards their experimental detection. Such topological phases cannot be distinguished via local order parameters. Instead, we propose non-local string order parameters to distinguish 1D topological phases hosting Majorana zero modes. We also discuss potential cold atom measurements of string order, based on recent experimental developments, as a new and alternative route towards their detection. We further consider N identical chains of interacting fermions and use the group cohomology approach to construct non-local order parameters to distinguish topological phases of this quasi-1D system.
Thursday, March 21, 2013
10:12AM - 10:24AM
T13.00010: Gate-defined wires in HgTe quantum wells as a robust Majorana platform
Johannes Reuther, Jason Alicea, Amir Yacoby
We propose a new quasi-1D platform for Majorana zero-modes based on gate-defined wires in HgTe. Due to the Dirac-like band structure for HgTe such wires exhibit several remarkable properties. Most strikingly, modest gate-tuning allows one to modulate the Rashba spin-orbit energy from zero up to $\sim30K$, and the effective g-factor from zero up to giant values of $\sim600$. The large achievable spin-orbit coupling and g-factor together allow one to access Majorana modes in this setting at exceptionally low magnetic fields while maintaining robustness against disorder. Moreover, gate-defined wires may facilitate the fabrication of networks required for realizing non-Abelian statistics and quantum information devices. The exquisite tunability of parameters further suggests applications in spintronics.
Thursday, March 21, 2013
10:24AM - 10:36AM
T13.00011: Hints of hybridizing Majorana fermions in a nanowire coupled to superconducting leads
A.D.K. Finck, D.J. Van Harlingen, P.K. Mohseni, K. Jung, X. Li
It has been proposed that a nanowire with strong spin-orbit coupling that is contacted with a conventional superconductor and subjected to a large magnetic field can be driven through a topological phase transition. In this regime, the two ends of the nanowire together host a pair of quasi-particles known as Majorana fermions (MFs). A key feature of MFs is that they are pinned to zero energy when the topological nanowire is long enough such that the wave functions of the two MFs do not overlap significantly, resulting in a zero bias anomaly (ZBA). It has been recently predicted that changes in external parameters can vary the wave function overlap and cause the MFs to hybridize in an oscillatory fashion. This would lead to a non-monotonic splitting or broadening of the ZBA and help distinguish MF transport signatures from a Kondo effect. Here, we present transport studies of an InAs nanowire contacted with niobium nitride leads in high magnetic fields. We observe a number of robust ZBAs that can persist for a wide range of back gate bias and magnetic field strength. Under certain conditions, we find that the height and width of the ZBA can oscillate with back gate bias or magnetic field.
Thursday, March 21, 2013
10:36AM - 10:48AM
T13.00012: Using InAs quantum wells to navigate the Majorana parameter space
Peter O'Malley, Pedram Roushan, Yu Chen, Brooks Campbell, Borzoyeh Shojaei, Javad Shabani, Brian Shultz, Chris Palmstrom, John Martinis
Although superconducting contacts laid down on self-assembled nanowires have produced impressive experimental results, the desire to build complex and scalable devices using Majorana modes leads us to want to develop lithographically defined nanowires. Our strategy is to deposit a superconducting layer in situ on an MBE-grown InAs 2DEG, and etch nanowires in subsequent microfabrication. This allows control over nanowire properties as well as the ability to vary the superconductor-semiconductor coupling strength in a precise manner. We plan to present measurements of both Nb coupled to an InAs 2DEG and nanowires fabricated out of two-dimensional InAs systems. We then discuss where these measurements put our system in the parameter space needed to observe the Majorana fermion, and propose a path forward.
Thursday, March 21, 2013
10:48AM - 11:00AM
T13.00013: High-Performance Topological Insulator Bi$_2$Se$_3$ Nanowire Field Effect Transistors
Hao Zhu, Curt Richter, Erhai Zhao, Hui Yuan, Haitao Li, Dimitris Ioannou, Qiliang Li
Single crystal topological insulator Bi$_{2}$Se$_{3}$ nanowires were synthesized by the Vapor-Liquid-Solid (VLS) mechanism. Bi$_{2}$Se$_{3}$ NW field-effect transistors were fabricated using a self-alignment method with HfO$_{2}$ as the gate dielectric. Bi$_{2}$Se$_{3}$ NWFETs were measured in vacuum at different temperatures. Excellent MOSFET characteristics were achieved: smooth and well-saturated output characteristics, large On/Off ratio ($10^{7}$), zero Off-state current and good subthreshold slope in transfer characteristics. We have observed linear behavior of the saturation current extracted from the I$_{\mathrm{ds}}$-V$_{\mathrm{ds}}$ curves as a function of the overthreshold voltage (V$_{\mathrm{g}}$-V$_{\mathrm{th}}$), which indicated the main role of the metallic surface conduction in the Bi$_{2}$Se$_{3}$ nanowire channel. Both effective mobility and field-effect mobility have been extracted. Very good effective mobility ($>$ 5000 cm$^{2}$V$^{-1}$s$^{-1}$ at 77 K) was obtained under a low gate voltage. From the off-state current we calculated a bulk band gap of about 0.33 eV, in good agreement with the reported value of 0.35 eV.
|
Please assume that this graph is a highly magnified section of the derivative of some function, say $F(x)$. Let's denote the derivative by $f(x)$. Let's denote the width of a sample by $h$, where $h\rightarrow0$. Now, for finding the area under the curve between the bounds $a$ & $b$ we can a...
@Ultradark You can try doing a finite difference to get rid of the sum and then compare term by term. Otherwise I'm terrible at anything to do with primes; I don't know the identities of $\pi(n)$ well
@Silent No, take for example the prime 3. 2 is not a residue mod 3, so there is no $x\in\mathbb{Z}$ such that $x^2-2\equiv 0$ mod $3$.
However, you have two cases to consider. The first where $\left(\frac{2}{p}\right)=-1$ and $\left(\frac{3}{p}\right)=-1$ (in which case what does $\left(\frac{6}{p}\right)$ equal?) and the case where one or the other of $\left(\frac{2}{p}\right)$ and $\left(\frac{3}{p}\right)$ equals $1$.
Also, probably something useful for congruences, if you didn't already know: if $a_1\equiv b_1 \pmod p$ and $a_2\equiv b_2 \pmod p$, then $a_1a_2\equiv b_1b_2 \pmod p$
Is there any book or article that explains the motivations behind the definitions of group, ring, field, ideal, etc. in abstract algebra and/or gives a geometric or visual representation of Galois theory?
Jacques Charles François Sturm ForMemRS (29 September 1803 – 15 December 1855) was a French mathematician.== Life and work ==Sturm was born in Geneva (then part of France) in 1803. The family of his father, Jean-Henri Sturm, had emigrated from Strasbourg around 1760 - about 50 years before Charles-François's birth. His mother's name was Jeanne-Louise-Henriette Gremay. In 1818, he started to follow the lectures of the academy of Geneva. In 1819, the death of his father forced Sturm to give lessons to children of the rich in order to support his own family. In 1823, he became tutor to the son...
I spent my career working with tensors. You have to be careful about defining multilinearity, domain, range, etc. Typically, tensors of type $(k,\ell)$ involve a fixed vector space, not so many letters varying.
UGA definitely grants a number of masters to people wanting only that (and sometimes admitted only for that). You people at fancy places think that every university is like Chicago, MIT, and Princeton.
hi there, I need to linearize a nonlinear system about a fixed point. I've computed the Jacobian matrix but one of the elements of this matrix is undefined at the fixed point. What is a better approach to solve this issue? The element is (24*x_2 + 5cos(x_1)*x_2)/abs(x_2). The fixed point is x_1=0, x_2=0
Consider the following integral: $\int \frac{1}{4}\cdot\frac{1}{1+(u/2)^2}\,dx$. Why does it matter if we put the constant 1/4 in front of the integral versus keeping it inside? The solution is $\frac{1}{2}\arctan{(u/2)}$. Or am I overlooking something?
*it should be du instead of dx in the integral
**and the solution is missing a constant C of course
Is there a standard way to divide radicals by polynomials? Stuff like $\frac{\sqrt a}{1 + b^2}$?
My expression happens to be in a form I can normalize to that, just the radicand happens to be a lot more complicated. In my case, I'm trying to figure out how to best simplify $\frac{x}{\sqrt{1 + x^2}}$, and so far, I've gotten to $\frac{x \sqrt{1+x^2}}{1+x^2}$, and it's pretty obvious you can move the $x$ inside the radical.
My hope is that I can somehow remove the polynomial from the bottom entirely, so I can then multiply the whole thing by a square root of another algebraic fraction.
Complicated, I know, but this is me trying to see if I can skip calculating Euclidean distance twice going from atan2 to something in terms of asin for a thing I'm working on.
"... and it's pretty obvious you can move the $x$ inside the radical" To clarify this in advance, I didn't mean literally move it verbatim, but via $x \sqrt{y} = \text{sgn}(x) \sqrt{x^2 y}$. (Hopefully, this was obvious, but I don't want to confuse people on what I meant.)
Ignore my question. I'm coming of the realization it's just not working how I would've hoped, so I'll just go with what I had before.
|
Edit: The algebra I speak of here is not actually the Grassmann numbers at all -- they are $\mathbb{R}[X]/(X^n)$, whose generators don't satisfy the anticommutativity relation even though they satisfy all the nilpotency relations. The dual-number stuff for 2 by 2 is still correct, just ignore my use of the word "Grassmann".
Non-diagonalisable 2 by 2 matrices can be diagonalised over the dual numbers -- and the "weird cases" like the Galilean transformation are not fundamentally different from the nilpotent matrices.
The intuition here is that the Galilean transformation is sort of a "boundary case" between real-diagonalisability (skews) and complex-diagonalisability (rotations) (which you can sort of think in terms of discriminants). In the case of the Galilean transformation $\left[\begin{array}{*{20}{c}}{1}&{v}\\{0}&{1}\end{array}\right]$, it's a small perturbation away from being diagonalisable, i.e. it sort of has "repeated eigenvectors" (you can visualise this with MatVis). So one may imagine that the two eigenvectors are only an "epsilon" away, where $\varepsilon$ is the unit dual satisfying $\varepsilon^2=0$ (called the "soul"). Indeed, its characteristic polynomial is:
$$(\lambda-1)^2=0$$
Whose solutions among the dual numbers are $\lambda=1+k\varepsilon$ for real $k$. So one may "diagonalise" the Galilean transformation over the dual numbers as e.g.:
$$\left[\begin{array}{*{20}{c}}{1}&{0}\\{0}&{1+v\varepsilon}\end{array}\right]$$
Granted, this is not unique: it is formed from the change-of-basis matrix $\left[\begin{array}{*{20}{c}}{1}&{1}\\{0}&{\varepsilon}\end{array}\right]$, but any vector of the form $(1,k\varepsilon)$ is a valid eigenvector. You could, if you like, consider this a canonical or "principal value" of the diagonalisation, and in general each diagonalisation corresponds to a limit you can take of real/complex-diagonalisable transformations. Another way of thinking about this is that there is an entire eigenspace spanned by $(1,0)$ and $(1,\varepsilon)$ in that little gap of multiplicity. In this sense, the geometric multiplicity is forced to be equal to the algebraic multiplicity*.
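The relation $AP = PD$ can be checked mechanically. Here is a minimal sketch (my own code, not part of the original argument) with a tiny `Dual` class; note that this verifies only $AP = PD$, not a true similarity, since $\det P = \varepsilon$ is not invertible over the duals:

```python
from fractions import Fraction

class Dual:
    """Dual number a + b*eps with eps**2 == 0 (a minimal hypothetical class)."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return Dual(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps, since eps^2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

def matmul(X, Y):
    """2x2 matrix product over the duals."""
    return [[X[i][0] * Y[0][j] + X[i][1] * Y[1][j] for j in range(2)]
            for i in range(2)]

v = 3
one, zero, eps = Dual(1), Dual(0), Dual(0, 1)

A = [[one, Dual(v)], [zero, one]]        # the Galilean transformation
P = [[one, one], [zero, eps]]            # eigenvector columns (1,0) and (1,eps)
D = [[one, zero], [zero, Dual(1, v)]]    # diag(1, 1 + v*eps)

# A P = P D -- only the eigen-relation, not D = P^{-1} A P.
assert matmul(A, P) == matmul(P, D)
```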
Then a nilpotent matrix with characteristic polynomial $\lambda^2=0$ has solutions $\lambda=k\varepsilon$, and is simply diagonalised as:
$$\left[\begin{array}{*{20}{c}}{0}&{0}\\{0}&{\varepsilon}\end{array}\right]$$
(Think about this.) Indeed, the resulting matrix has minimal polynomial $\lambda^2=0$, and the eigenvectors are as before.
What about higher dimensional matrices? Consider:
$$\left[ {\begin{array}{*{20}{c}}0&v&0\\0&0&w\\0&0&0\end{array}} \right]$$
This is a nilpotent matrix $A$ satisfying $A^3=0$ (but not $A^2=0$). The characteristic polynomial is $\lambda^3=0$. Although $\varepsilon$ might seem like a sensible choice, it doesn't really do the trick -- if you try a diagonalisation of the form $\mathrm{diag}(0,v\varepsilon,w\varepsilon)$, it has minimal polynomial $A^2=0$, which is wrong. Indeed, you won't be able to find three linearly independent eigenvectors to diagonalise the matrix this way -- they'll all take the form $(a+b\varepsilon,0,0)$.
Instead, you need to consider a generalisation of the dual numbers, called the Grassmann numbers, with the soul satisfying $\varepsilon^n=0$ (but $\varepsilon^{n-1}\neq 0$). Then the diagonalisation takes for instance the form:
$$\left[ {\begin{array}{*{20}{c}}0&0&0\\0&{v\varepsilon}&0\\0&0&{w\varepsilon}\end{array}} \right]$$
*Over the reals and complexes, when one defines algebraic multiplicity (as "the multiplicity of the corresponding factor in the characteristic polynomial"), there is a single eigenvalue corresponding to that factor. This is of course no longer true over the Grassmann numbers, because they are not a field, and $ab=0$ no longer implies "$a=0$ or $b=0$".
In general, if you want to prove things about these numbers, the way to formalise them is by constructing them as the quotient $\mathbb{R}[X]/(X^n)$, so you actually have something clear to work with.
(Perhaps relevant: Grassmann numbers as eigenvalues of nilpotent operators? -- discussing the fact that the Grassmann numbers are not a field).
You might wonder if this sort of approach can be applicable to LTI differential equations with repeated roots -- after all, their characteristic matrices are exactly of this Grassmann form. As pointed out in the comments, however, this diagonalisation is still not via an invertible change-of-basis matrix, it's still only of the form $PD=AP$, not $D=P^{-1}AP$. I don't see any way to bypass this. See my posts All matrices can be diagonalised (a re-post of this answer) and Repeated roots of differential equations for ideas, I guess.
|
Whenever you make a part in Inventor, the software calculates the properties of the whole body given a constant density. Then, automatically, it shows the inertia tensor. As you may recall, the inertia equation for the $x$ axis is $I_x= \rho\int(y^2+z^2)\,dV$. So, with $\rho$ factored out, how can I get the right-hand side of the equation that is being solved in the underlying algorithm without doing any hand calculations?
I guess your question is how to compute the inertia tensor automatically. A parametric equation may not be always possible, but we could write a program to avoid hand calculations as much as possible.
The inertia tensor $\mathbf{I}$ is defined as an integral over the object domain $\Omega$ (see Inertia tensor): $$ \newcommand{\V}[1]{\mathbf{#1}} \begin{align} \V{I} &= \int_\Omega \rho (\V{r}\cdot\V{r} \delta - \V{r}\V{r}^T)\,dV \end{align} $$ where $\V{r}$ is the position relative to the center of mass (COM), $\rho$ is the density, and $\delta$ is the identity matrix.
Since $\text{tr}(\V{r}\V{r}^T) = \V{r}\cdot \V{r}$, the above definition can be rearranged as $$ \begin{align} \V{T} &= \V{r}\V{r}^T \\ \V{I} &= \int_\Omega \rho (\text{tr} \V{T} \delta - \V{T})\,dV \end{align} $$
Analytic object
For analytic objects you can substitute their equations into the integral and do symbolic integration. This step can be automated with any symbolic computing software.
Object in tetrahedral mesh
We assume the object is simple and has a closed boundary. An analytic solution is not possible in this case because we do not have an explicit equation for the object. But we could compute the inertia tensor numerically. The only integral we have to compute is $$ \begin{align} \V{C} &= \int_\Omega \rho \V{r}\V{r}^T\,dV \end{align} $$
To compute this integral, this paper could be helpful. The general idea is to compute $\V{C}_t$ for each tetrahedron $t$ and sum the $\V{C}_t$ to get $\V{C}$. We first compute the integral for a canonical tetrahedron $t_0$ whose vertices are $(0,0,0)$, $(1,0,0)$, $(0,1,0)$, and $(0,0,1)$.
In Maple, this canonical integral $\V{C}_{t_0}$ is (assuming $\rho$ is constant):
r := <x, y, z>;
with(LinearAlgebra):
rrt := r . Transpose(r);
# rrt = [[x^2, x*y, x*z], [x*y, y^2, y*z], [x*z, y*z, z^2]]
seq(seq(int(int(int(rrt[i,j], x = 0..1-y-z), y = 0..1-z), z = 0..1), i = 1..3), j = 1..3);
# output: 1/60, 1/120, 1/120, 1/120, 1/60, 1/120, 1/120, 1/120, 1/60
Then the integral $\V{C}_t$ can be computed by a change of variables. Assume the coordinates of a world-space tetrahedron can be represented as $\V{r}=\V{F}\V{r}_0 + \V{x}$, where $\V{F}$ and $\V{x}$ are constant. We have $$ \begin{align} \V{C}_t &= \rho \int_{\Omega_t} \V{r}\V{r}^T\,dV \\ &= \rho \int_{\Omega_{t_0}} (\V{F}\V{r}_0 + \V{x})(\V{F}\V{r}_0+\V{x})^T \det \V{F} \,dV_0 \\ &= \rho \int_{\Omega_{t_0}} (\V{F}\V{r}_0\V{r}_0^T\V{F}^T + \V{x}\V{x}^T + \V{F}\V{r}_0\V{x}^T + \V{x}\V{r}_0^T\V{F}^T) \det \V{F} \,dV_0 \\ &= \det(\V{F})\, \V{F}\V{C}_{t_0}\V{F}^T + \det(\V{F})\, m \V{x}\V{x}^T + \det(\V{F})\, m (\V{F}\V{\bar{r}}_0)\V{x}^T + \det(\V{F})\, m \V{x}(\V{F}\V{\bar{r}}_0)^T \end{align} $$ where $\V{\bar{r}}_0$ is the centroid of the canonical tetrahedron and $m = \rho V_0$ is its mass.
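To make the change-of-variables step concrete, here is a short Python sketch (the names and the use of NumPy are my own; it assumes constant $\rho=1$) that assembles the per-tetrahedron second moment from the canonical-tetrahedron integrals computed above:

```python
import numpy as np

# rho * integral of r0 r0^T over the canonical tetrahedron (the Maple values)
RHO = 1.0
C_T0 = RHO * np.array([[1/60, 1/120, 1/120],
                       [1/120, 1/60, 1/120],
                       [1/120, 1/120, 1/60]])
B0 = RHO * np.full(3, 1/24)   # rho * integral of r0 over the canonical tet
V0 = 1/6                      # volume of the canonical tetrahedron

def second_moment(v0, v1, v2, v3):
    """rho * integral of r r^T over the tetrahedron (v0, v1, v2, v3)."""
    F = np.column_stack([v1 - v0, v2 - v0, v3 - v0])   # r = F r0 + x
    x = v0
    detF = np.linalg.det(F)
    return detF * (F @ C_T0 @ F.T
                   + np.outer(F @ B0, x) + np.outer(x, F @ B0)
                   + RHO * V0 * np.outer(x, x))

def inertia_tensor(C):
    """I = tr(C) * identity - C (about the origin); sum C over all tets first."""
    return np.trace(C) * np.eye(3) - C
```

Summing `second_moment` over all tetrahedra of the mesh and applying `inertia_tensor` then gives $\mathbf{I}$ about the origin.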
Objects in other form
I have not used Inventor before, but I suspect the parts it produces may come in other formats, such as Bézier surfaces. For now I would suggest tetrahedralizing the object and applying the above method.
|
I'm trying to understand the full bit-complexity of computing the determinant of an $n\times n$ integer matrix, with each entry represented by $M$ bits. I would like to know what is the state-of-the-art bit-complexity. As far as I could find, the two possible candidates are:
(1) The low-depth circuits, due to Csanky [1976] and Berkowitz [1984], but these, despite having $\log^2 n$ depth, require some $n^4$ bit operations.
(2) The Bunch and Hopcroft [1974] algorithm, which takes any black-box algorithm for integer matrix multiplication and produces an algorithm for the determinant with the same $\textbf{arithmetic}$ complexity. Since two $M$-bit integers can be multiplied in $O(M \log^2 M)$ bit operations, and the largest value during the computation can be as large as $2^{Mn}$, its bit-complexity appears to be $\tilde{O}(M n^{\omega+1})$, where $\omega < 2.38$ is the state-of-the-art matrix-multiplication exponent.
Is there a better upper-bound?
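For reference, the classical exact-integer route (distinct from both candidates above) is Bareiss's fraction-free elimination: $O(n^3)$ arithmetic operations in which every intermediate entry is an integer minor of the input, hence bounded by the Hadamard-type bound mentioned. A minimal sketch of the algorithm, written by me for illustration:

```python
def bareiss_det(M):
    """Exact determinant of an integer matrix via Bareiss's fraction-free
    elimination; every division below is exact (no fractions appear)."""
    M = [row[:] for row in M]
    n, sign, prev = len(M), 1, 1
    for k in range(n - 1):
        if M[k][k] == 0:                       # pivot: swap in a nonzero row
            for i in range(k + 1, n):
                if M[i][k] != 0:
                    M[k], M[i], sign = M[i], M[k], -sign
                    break
            else:
                return 0                       # whole column zero => singular
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return sign * M[n - 1][n - 1]
```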
|
Electronic Journal of Probability (Electron. J. Probab.), Volume 18 (2013), paper no. 7, 21 pp.
Regular conditional distributions of continuous max-infinitely divisible random fields
Abstract
This paper is devoted to the prediction problem in extreme value theory. Our main result is an explicit expression for the regular conditional distribution of a max-stable (or max-infinitely divisible) process $\{\eta(t)\}_{t\in T}$ given observations $\{\eta(t_i)=y_i,\ 1\leq i\leq k\}$. Our starting point is the point process representation of max-infinitely divisible processes by Giné, Hahn and Vatan (1990). We carefully analyze the structure of the underlying point process, introduce the notions of extremal function, sub-extremal function and hitting scenario associated to the constraints, and derive the associated distributions. This allows us to make the conditional distribution explicit as a mixture over all hitting scenarios compatible with the conditioning constraints. This formula extends a recent result by Wang and Stoev (2011) dealing with the case of spectrally discrete max-stable random fields. This paper offers new tools and perspectives for prediction in extreme value theory together with numerous potential applications.
Article information
Source: Electron. J. Probab., Volume 18 (2013), paper no. 7, 21 pp.
Dates: Accepted: 13 January 2013. First available in Project Euclid: 4 June 2016.
Permanent link to this document: https://projecteuclid.org/euclid.ejp/1465064232
Digital Object Identifier: doi:10.1214/EJP.v18-1991
Mathematical Reviews number (MathSciNet): MR3024101
Zentralblatt MATH identifier: 1287.60066
Rights: This work is licensed under a Creative Commons Attribution 3.0 License.
Citation
Dombry, Clément; Eyi-Minko, Frédéric. Regular conditional distributions of continuous max-infinitely divisible random fields. Electron. J. Probab. 18 (2013), paper no. 7, 21 pp. doi:10.1214/EJP.v18-1991. https://projecteuclid.org/euclid.ejp/1465064232
|
"Abstract: We prove, once and for all, that people who don't use superspace are really out of it. This includes QCDers, who always either wave their hands or gamble with lettuce (Monte Zuma calculations). Besides, all nonsupersymmetric theories have divergences which lead to problems with things like renormalons, instantons, anomalons, and other phenomenons. Also, they can't hide from gravity forever."
Can a gravitational field possess momentum? A gravitational wave can certainly possess momentum just like a light wave has momentum, but we generally think of a gravitational field as a static object, like an electrostatic field.
"You have to understand that compared to other professions such as programming or engineering, ethical standards in academia are in the gutter. I have worked with many different kinds of people in my life, in the U.S. and in Japan. I have only encountered one group more corrupt than academic scientists: the mafia members who ran Las Vegas hotels where I used to install computer equipment. "
so I've got a small bottle that I filled up with salt. I put it on the scale and its mass is 83 g. I've also got a jug that holds 500 g of water. I put the bottle in the jug and it sank to the bottom. I have to figure out how much salt to take out of the bottle so that the weight force of the bottle equals the buoyancy force.
For the buoyancy do I: density of water * volume of water displaced * gravity acceleration?
so: mass of bottle * gravity = volume of water displaced * density of water * gravity?
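That balance is the right setup; $g$ cancels from both sides. As a numerical sketch (the bottle's volume isn't given in the problem, so the 80 cm³ below is purely a made-up placeholder):

```python
rho_water = 1000.0  # kg/m^3
m_bottle = 0.083    # kg (bottle + salt, from the problem)
V_bottle = 80e-6    # m^3 -- hypothetical; the problem doesn't state this

# Neutral buoyancy: m * g = rho_water * V * g  =>  m_target = rho_water * V
m_target = rho_water * V_bottle          # mass at which weight = buoyancy
salt_to_remove = m_bottle - m_target     # with these numbers, 3 g
```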
@EmilioPisanty The measurement operators that I suggested in the comments of the post are fine, but I additionally would like to control the width of the Poisson distribution (much like we can do for the normal distribution using the variance). Do you know whether this can be achieved while still maintaining the completeness condition $$\int A^{\dagger}_{C}A_CdC = 1?$$
As a workaround while this request is pending, there exist several client-side workarounds that can be used to enable LaTeX rendering in chat, including:ChatJax, a set of bookmarklets by robjohn to enable dynamic MathJax support in chat. Commonly used in the Mathematics chat room.An altern...
You're always welcome to ask. One of the reasons I hang around in the chat room is because I'm happy to answer this sort of question. Obviously I'm sometimes busy doing other stuff, but if I have the spare time I'm always happy to answer.
Though as it happens I have to go now - lunch time! :-)
@JohnRennie It's possible to do it using the energy method. We just need to carefully write down the potential function, which is $U(r)=\frac{1}{2}\frac{mg}{R}r^2$ with zero point at the center of the earth.
Anonymous
Also I don't particularly like this SHM problem because it causes a lot of misconceptions. The motion is SHM only under particular conditions :P
I see with concern that the close queue has not shrunk considerably in the last week and is still at 73 items. This may be an effect of increased traffic without increased reviewing, or something else; I'm not sure
Not sure about that, but the converse is certainly false :P
Derrida has received a lot of criticism from experts in the fields he tried to comment on
I personally do not know much about postmodernist philosophy, so I shall not comment on it myself
I do have strong affirmative opinions on textual interpretation made disjoint from authorial intent, however, which is a central part of deconstruction theory. But I think that dates back to Heidegger.
I can see why a man of that generation would lean toward that idea. I do too.
|
Let's define the class
$ZBQP = \{ L \mid \exists \textit{P-uniform circuit family } \{C_i\}, \forall n \in \mathbb{N}, |x| = n, |\langle 0|C_n|x \rangle - I(x \in L)| \leq 9/10 \Longleftrightarrow x \in L\}.$
We note a construction of Ashley Montanaro, that for any quantum circuit $C$ there exists some multilinear polynomial of size linearly proportional to $C$, $f_C$, such that
$\langle 0 | C|0\rangle = \frac{gap(f_C)}{2^n}$
where $n$ is the number of variables in the polynomial and
$gap(f) = |f^{-1}(0)| - |f^{-1}(1)|.$
We now define CAPP, the Circuit Approximate Probability Problem, which is a function problem:
$C \mapsto v \textit{ where } |v - Pr_x(C(x) = 1)| \leq 1/10.$
Using this problem, we can get a $2/10$ error approximation for $gap(f)$, as
$gap(f) = 2^n (Pr(f(x) = 0) - Pr(f(x) = 1))$
and thus we can get an approximation for $\langle 0|C_n|0 \rangle$ (or for $\langle y| C_n |x \rangle$ by modifying the circuit, and thus $f_C$, in the obvious way). As we have a two-sided error for $ZBQP$, and this approximation gets us close enough to distinguish which side we're on, it seems that this allows us to solve $ZBQP$ problems in essentially any function class where we can do $CAPP$, which is suspected to be in $FP$ and is already in $FBPP$.
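The bookkeeping between $gap(f)$ and the acceptance probabilities can be sanity-checked on a toy Boolean function (the 3-bit AND below is an arbitrary choice of mine):

```python
from itertools import product

def f(x):                      # toy Boolean function: AND of three bits
    return x[0] & x[1] & x[2]

n = 3
inputs = list(product([0, 1], repeat=n))
zeros = sum(1 for x in inputs if f(x) == 0)
ones = sum(1 for x in inputs if f(x) == 1)
gap = zeros - ones             # gap(f) = |f^{-1}(0)| - |f^{-1}(1)|

p0, p1 = zeros / 2**n, ones / 2**n
assert gap == 2**n * (p0 - p1)     # the identity used above
```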
Does $ZBQP \neq BQP$ for any reason I'm missing? It seems like they may be equal, but I dunno.
|
My 6 year old wants to know if infinity is an odd or even number. His 38 year old father is keen to know too.
In the context of transfinite ordinals, the usual definition is that an ordinal number $\alpha$ is even if it is a multiple of $2$, specifically: if there is another ordinal $\beta$ such that $2\cdot\beta=\alpha$. In other words, the order type $\alpha$ can be viewed as $\beta$ many pairs in sequence, or in other words, $\alpha$ is left-divisible by $2$. Otherwise, it is odd.
It is easy to prove from this definition by transfinite recursion that the ordinals come in an alternating even/odd pattern, and that every limit ordinal (and hence every infinite cardinal) is even. Many transfinite constructions proceed by doing something different on the even as opposed to the odd stages, just as with finite constructions.
The smallest infinite ordinal is $\omega$, which is even on this definition, since having $\omega$ many pairs in sequence is order-isomorphic to $\omega$, and so $2\cdot\omega=\omega$. Meanwhile, the next infinite ordinal is $\omega+1$, which is odd. The ordinal $\omega+2$ is even, since it is equal to $2\cdot(\omega+1)$, even though it is not $\beta+\beta$ for any $\beta$.
(Please note that $\alpha=2\cdot\beta$ is not at all the same as saying $\alpha=\beta+\beta$, since $\beta$ copies of $2$ is not the same order type as $2$ copies of $\beta$, a phenomenon at the heart of the non-commutativity of ordinal multiplication. )
To explain the idea to a child, I would focus on the principal idea: whether finite or infinite, a number is even when it can be divided into pairs. For finite sets, this is the same as the ability to divide the set into two sets of equal size, since one may consider the first element of each pair and the second element of each pair. In the infinite context, as others have noted, there are numerous concepts of infinity, each with its own concept of even and odd. In my experience with children, one of the easiest-to-grasp concepts of infinity is provided by the transfinite ordinals, since it can be viewed as a continuation of the usual counting manner of children, but proceeding into the transfinite:
$$1,2,3,\cdots,\omega,\omega+1,\omega+2,\cdots,\omega+\omega=\omega\cdot2,\omega\cdot 2+1,\cdots,\omega\cdot 3,\cdots,\omega^2,\omega^2+1,\cdots,\omega^2+\omega,\cdots\cdots$$
This concept of infinity is attractive to children, because they can learn to count into the infinite this way. Also, this concept of infinity has one of the most successful parity concepts, since one maintains the even/odd pattern into the transfinite. The smallest infinity $\omega$ is even, $\omega+1$ is odd, $\omega+2$ is even and so on. Every limit ordinal is even, and then it repeats even/odd up to the next limit ordinal.
I suggest that you read the discussion at Is infinity a number? first (since of course you need to answer that question to answer this question). There are some senses in which infinity is a number, and there are some senses in which infinity is not a number, and it all depends on what exactly you mean by "number," which in turn depends on what applications you have in mind.
On the other hand, there is a useful sense in which infinity is even. To explain this we have to replace "numbers" with cardinalities of sets.
Definition: A set $S$ has even cardinality if it can be written as the disjoint union of two subsets $A, B$ which have the same cardinality.
In other words, we need to be able to divide $S$ into pairs. This definition reduces to the ordinary definition for finite sets, but an infinite set always has even cardinality. For example, the cardinality of the natural numbers $\mathbb{N}$ is even because we can pair up even numbers with odd numbers.
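For the natural numbers this pairing can even be written down as an explicit fixed-point-free involution (a small illustration of mine, not part of the argument):

```python
def partner(n):
    """Pair each even number with the next odd number: {0,1}, {2,3}, ..."""
    return n + 1 if n % 2 == 0 else n - 1

# A fixed-point-free involution on N: applying it twice is the identity,
# and no number is its own partner -- i.e. N splits into disjoint pairs.
assert all(partner(partner(n)) == n and partner(n) != n for n in range(1000))
```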
This definition of "even" came up in my answer to this question, where precisely the above property turned out to be relevant.
Be aware that there are many different notions of infinity in mathematics, so the answer to your query will depend on the particular notion of infinity that you have in mind, and how it interacts with the operations and relations of the extended "number" system. For example, if your notion of $\infty$ satisfies $\:1 +\infty = \infty\:$ then this may yield an obstruction to extending parity arithmetic.
Here is a simple example that has some hope of being comprehensible to a 6-year-old. I will explain it in a language that is hopefully comprehensible to his 38-year-old father. Consider the ring of polynomial functions with integer coefficients, i.e. $\rm\:\mathbb Z[x] = \{\: a_0 + a_1\ x\ +\:\cdots\: + a_n\: x^n\ :\ a_i \in \mathbb Z\:\}\:.\:$ If we consider these functions in a neighborhood of $\rm\:+\infty\:$ we obtain an ordered ring. Namely, define $\rm\ f(x) > g(x)\:$ if this holds true on some neighborhood $\rm\:(x_0,\:+\infty)\:$ of $\rm\:+\infty\:,\:$ i.e. if there is some $\rm\:x_0\:$ such that it holds true for all $\rm\:x > x_0\:,\:$ i.e. if it is "eventually" true. One easily checks that this is well-defined. Indeed, since polynomials have only a finite number of roots, they eventually have constant sign. Thus if $\rm\:f\ne g$ then eventually $\rm\:f-g > 0\:$ or $<0$ so eventually $\rm\:f>g\:$ or $\rm\:g>f\:$. In fact one easily deduces that this is equivalent to defining the sign of a polynomial to be the sign of its leading coefficient (the leading term eventually dominates lower-degree terms). This makes it clear that every polynomial is either positive, negative or zero, and the positive polynomials are closed under addition and multiplication (these are precisely the properties required in general to define a total order on a ring, compatible with the ring operations).
This ring $\rm\:\mathbb Z[x]\:$ has "infinite" elements, e.g. $\rm\:x > n\:$ for all integers $\rm\:n\:$ since $\rm\:x - n\:$ is eventually $> 0\:.\:$ Can we extend parity arithmetic from $\:\mathbb Z\:$ to such infinite elements? In fact we can, in two different ways. First, we can define $\rm\:x\:$ to be even. Since, by the Factor Theorem, $\rm\:f(x) = f(0) + x\ g(x)\:$ for some $\rm\:g(x)\in \mathbb Z[x]\:,\:$ this amounts to defining the parity of $\rm\:f(x)\:$ to be the parity of its constant coefficient $\rm\:f(0)\:.\:$ Alternatively we can define $\rm\:x\:$ to be odd. Again, by the Factor Theorem, we have $\rm\:f(x) = f(1) + (x-1)\ g(x)\:$ for some $\rm\:g(x)\in \mathbb Z[x]\:.\:$ Since $\rm\:x-1\:$ is even, this amounts to defining the parity of $\rm\:f(x)\:$ to be that of $\rm\:f(1)\:,\:$ i.e. the sum of its coefficients. Both definitions lead to a consistent parity arithmetic in the extension ring $\rm\:\mathbb Z[x]\:.\:$ But in general there is no compelling reason to decide which parity we should assign to the infinite element $\rm\:x\:.\:$
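As a small illustration of the two conventions (the helper names are mine): each parity is just reduction of $f(0)$ or $f(1)$ mod $2$, and each respects ring multiplication.

```python
def parity_x_even(coeffs):
    """Parity of f when x is declared even: the constant coefficient f(0) mod 2."""
    return coeffs[0] % 2

def parity_x_odd(coeffs):
    """Parity of f when x is declared odd: f(1) = sum of coefficients, mod 2."""
    return sum(coeffs) % 2

def poly_mul(f, g):
    """Product of two polynomials given as coefficient lists (low degree first)."""
    h = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            h[i + j] += a * b
    return h

# Both parities are multiplicative, e.g. for f = 3 + 2x^2 and g = 1 + 5x:
f, g = [3, 0, 2], [1, 5]
for parity in (parity_x_even, parity_x_odd):
    assert parity(poly_mul(f, g)) == (parity(f) * parity(g)) % 2
```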
In contrast, there are "number" systems extending the integers where parity arithmetic has a unique extension. For example, the rational numbers (fractions) expressible with odd denominator have parity arithmetic given by defining the parity of $\rm\: m/(2\:n+1)\:$ to be the parity of $\rm\:m\:.\:$ Also the Gaussian integers $\ m + n\ i\:,\ m,n\in \mathbb Z\:,\ i = \sqrt{-1}\:,\: $ have parity arithmetic given by defining $\:i\:$ to be odd. On the other hand, there are also such number rings with no extension of parity, or with numerous possible extensions. For further discussion see my post here.
Also, as JDH mentioned, parity arithmetic extends in some sense to more exotic structures such as ordinals, which may or may not satisfy your definition of a "number". Based on a few decades teaching such concepts, I suspect that you'll have much more luck teaching a 6-year-old a concept of parity of polynomials vs. ordinals. Indeed, my experience is that many adult educated layfolks have difficulty comprehending ordinals (I've had hundreds of interactions with such adults based on my popular posts about Goodstein's Theorem, e.g. see my sci.math post of Dec 11 1995; update: now migrated here).
A nice introduction to the many different notions of infinity in mathematics is Rudy Rucker's book: Infinity and the Mind. Unlike many other popularizations, the author has expertise in the field, having completed a Ph.D. on a related topic. Moreover, Rucker has gone to great lengths to make the presentation faithful to the mathematics but still accessible to an educated layperson.
JH Conway's Surreal Numbers have a well-defined notion of Omnific Integer which extends the definition of integer from finite numbers. I believe this splits infinite integers between odd and even numbers according to whether they are twice an integer or not, and such that Omnific Integers which differ by 1 always have opposite parities.
I would not recommend the theory to a 6-year-old, but Knuth's "Surreal Numbers" would be a good introduction for his father, and might give some interesting ideas on how to explore the idea of numbers with a child who is asking interesting questions.
"Infinity" is not a number, but there are numbers that are infinite, including cardinals, ordinals, infinite nonstandard reals, and other things. Some of those can be considered even numbers.
The "infinity" you encounter in calculus would not normally be considered a number.
There are also other notions of "infinity", such as the ones involved in the Dirac delta function and its derivatives. But it's hard to see how to view those as being numbers.
|
Definition:Upper Bound of Mapping/Real-Valued Definition
Let $f: S \to \R$ be a real-valued function.
Let $f$ be bounded above in $\R$ by $H \in \R$. Then $H$ is an upper bound of $f$.
Thus the construction:
The set of numbers which fulfil the propositional function $P \left({n}\right)$ is bounded above with the upper bound $N$
would be reported as:
This construct obscures the details of what is actually being stated. Its use on $\mathsf{Pr} \infty \mathsf{fWiki}$ is considered an abuse of notation and so discouraged. This also applies in the case where it is the upper bound of a mapping which is under discussion. Also see
|
Difference between revisions of "Quasi-metric"
(References are added)
Line 16:
  |valign="top"|{{Ref|Sch}}|| V. Schroeder, "Quasi-metric and metric spaces". Conform. Geom. Dyn. 10, 355 - 360 (2006) {{ZBL|1113.54014}}
  |-
− |valign="top"|{{Ref|Wil}}|| W. A. Wilson, "On Quasi-Metric Spaces". American Journal of Mathematics
− Vol. 53, No. 3 (1931), pp. 675-684 {{ZBL|0002.05503}}
+ |valign="top"|{{Ref|Wil}}|| W. A. Wilson, "On Quasi-Metric Spaces". American Journal of Mathematics Vol. 53, No. 3 (1931), pp. 675-684 {{ZBL|0002.05503}}
  |-
  |}
Revision as of 10:25, 7 December 2012
Let $\mathbb X$ be a nonempty set. A function $d:\mathbb{X}\times\mathbb{X}\to[0,\infty)$ which satisfies the following conditions for all $x,y,z\in\mathbb X$
1) $d(x,y)=0$ if and only if $x = y$ (the identity axiom);
2) $d(x,y) + d(y,z) \geq d(x,z)$ (the triangle axiom);
is called a quasi-metric. The pair $(\mathbb X, d)$ is a quasi-metric space.
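A concrete example (my own illustration, not from the references below): the function in this Python sketch satisfies axioms 1) and 2) on a grid of reals, yet $d(x,y)\ne d(y,x)$ in general, so it is a quasi-metric but not a metric.

```python
def d(x, y):
    """Going right costs the distance travelled; going left costs a flat 1."""
    return y - x if y >= x else 1.0

pts = [i / 4 for i in range(-8, 9)]   # a small grid of reals to test on

# identity axiom: d(x, y) == 0 iff x == y
assert all((d(x, y) == 0) == (x == y) for x in pts for y in pts)
# triangle axiom: d(x, z) <= d(x, y) + d(y, z)
assert all(d(x, z) <= d(x, y) + d(y, z) for x in pts for y in pts for z in pts)
# symmetry fails, so d is a quasi-metric but not a metric
assert d(0, 2) == 2 and d(2, 0) == 1
```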
The difference between a metric and a quasi-metric is that a quasi-metric need not satisfy the symmetry axiom: we allow $d(x,y)\ne d(y,x)$ for some $x,y\in \mathbb X$. References
[Sch] V. Schroeder, "Quasi-metric and metric spaces". Conform. Geom. Dyn. 10, 355 - 360 (2006) Zbl 1113.54014 [Wil] W. A. Wilson, "On Quasi-Metric Spaces". American Journal of Mathematics Vol. 53, No. 3 (1931), pp. 675-684 Zbl 0002.05503 How to Cite This Entry:
Quasi-metric.
Encyclopedia of Mathematics.URL: http://www.encyclopediaofmath.org/index.php?title=Quasi-metric&oldid=29111
|
Does anyone know if $T: V \to R^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidently sent this before writing my answer. Writing it now)
Isn't there this theorem saying that if $v,w \in V$ ($V$ being an inner product space), then $||v|| = ||(v)_S||$? (where the left norm is defined as the norm in $V$ and the right norm is the euclidean norm) I thought that this would somehow result from isomorphism
@AlessandroCodenotti Actually, such an $f$ in fact needs to be surjective. Take any $y \in Y$; the maximal ideal of $k[Y]$ corresponding to it is $(Y_1 - y_1, \cdots, Y_n - y_n)$. The ideal corresponding to the subvariety $f^{-1}(y) \subset X$ in $k[X]$ is then nothing but $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$. If $f^{-1}(y)$ is empty, the weak Nullstellensatz kicks in to say that there are $g_1, \cdots, g_n \in k[X]$ such that $\sum_i (f^* Y_i - y_i)g_i = 1$.
Well, better to say that $(f^* Y_1 - y_1, \cdots, f^* Y_n - y_n)$ is the trivial ideal I guess. Hmm, I'm stuck again
O(n) acts transitively on S^(n-1) with stabilizer at a point O(n-1)
For any transitive G action on a set X with stabilizer H, G/H $\cong$ X set theoretically. In this case, as the action is a smooth action by a Lie group, you can prove this set-theoretic bijection gives a diffeomorphism
|
Let $X_n(\Bbb{Z})$ be the simplicial complex whose vertex set is $\Bbb{Z}$ and such that the vertices $v_0,...,v_k$ span a $k$-simplex if and only if $|v_i-v_j| \le n$ for every $i,j$. Prove that $X_n(\Bbb{Z})$ is $n$-dimensional...
no kidding, my maths is foundations (basic logic but not pedantic), calc 1 which I'm pretty used to working with, analytic geometry, and basic linear algebra (by basic I mean matrices and systems of equations only)
Anyway, I would assume if removing $ fixes it, then you probably have an open math expression somewhere before it, meaning you didn't close it with $ earlier. What's the full expression you're trying to get? If it's just the frac, then your code should be fine
This is my first time chatting here in Math Stack Exchange. So I am not sure if this is frowned upon but just a quick question: I am trying to prove that a proper subgroup of $\mathbb{Z}^n$ is isomorphic to $\mathbb{Z}^k$, where $k \le n$. So we must have $rank(A) = rank(\mathbb{Z}^k)$ , right?
For four proper fractions $a, b, c, d$ X writes $a+ b + c >3(abc)^{1/3}$. Y also added that $a + b + c> 3(abcd)^{1/3}$. Z says that the above inequalities hold only if a, b,c are positive. (a) Both X and Y are right but not Z. (b) Only Z is right (c) Only X is right (d) Neither of them is absolutely right.
Yes, @TedShifrin, the order of $GL(2,p)$ is $p(p+1)(p-1)^2$. But I found this in a classification of groups of order $p^2qr$. There the order of $H$ should be $qr$, and it appears as $G = C_{p}^2 \rtimes H$. I want to know whether we can determine the structure that $H$ can have.
Like can we think $H=C_q \times C_r$ or something like that from the given data?
When we say it embeds into $GL(2,p)$ does that mean we can say $H=C_q \times C_r$? or $H=C_q \rtimes C_r$? or should we consider all possibilities?
When considering finite groups $G$ of order $|G|=p^2qr$, where $p,q,r$ are distinct primes, let $F$ be the Fitting subgroup of $G$. Then $F$ and $G/F$ are both non-trivial, and $G/F$ acts faithfully on $\bar{F}:=F/ \phi(F)$ so that no non-trivial normal subgroup of $G/F$ stabilizes a series through $\bar{F}$.
Now suppose $|F|=pr$. In this case $\phi(F)=1$ and $Aut(F)=C_{p-1} \times C_{r-1}$. Thus $G/F$ is abelian and $G/F \cong C_{p} \times C_{q}$.
In this case how can I write G using notations/symbols?
Is it like $G \cong (C_{p} \times C_{r}) \rtimes (C_{p} \times C_{q})$?
First question: Then it is, $G= F \rtimes (C_p \times C_q)$. But how do we write $F$ ? Do we have to think of all the possibilities of $F$ of order $pr$ and write as $G= (C_p \times C_r) \rtimes (C_p \times C_q)$ or $G= (C_p \rtimes C_r) \rtimes (C_p \times C_q)$ etc.?
As a second case we can consider the case where $C_q$ acts trivially on $C_p$. So then how to write $G$ using notations?
There it is also mentioned that we can distinguish between 2 cases. First, suppose that the Sylow $q$-subgroup of $G/F$ acts non-trivially on the Sylow $p$-subgroup of $F$. Then $q \mid (p-1)$ and $G$ splits over $F$. Thus the group has the form $F \rtimes G/F$.
A presentation $\langle S\mid R\rangle$ is a Dehn presentation if for some $n\in\Bbb N$ there are words $u_1,\cdots,u_n$ and $v_1,\cdots, v_n$ such that $R=\{u_iv_i^{-1}\}$, $|u_i|>|v_i|$ and for all words $w$ in $(S\cup S^{-1})^\ast$ representing the trivial element of the group one of the $u_i$ is a subword of $w$
If you have such a presentation there's a trivial algorithm to solve the word problem: Take a word $w$, check if it has $u_i$ as a subword, in that case replace it by $v_i$, keep doing so until you hit the trivial word or find no $u_i$ as a subword
There is good motivation for such a definition here
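The greedy rewriting procedure described above can be sketched in a few lines. This is a simplified sketch: it treats words as plain strings over positive generators only (no inverses or free reduction), and the toy example in the test is the one-relator presentation $\langle a \mid a^3 \rangle$, whose single rule rewrites the subword "aaa" to the empty word:

```python
def dehn_reduce(word, rules):
    """Greedily rewrite `word` using Dehn rules (u -> v with |u| > |v|).

    `word` is a string over the generators; `rules` is a list of (u, v)
    pairs. Since every rule strictly shortens the word, this terminates.
    For a genuine Dehn presentation the result is the empty string iff
    the input represents the identity element.
    """
    changed = True
    while changed:
        changed = False
        for u, v in rules:
            i = word.find(u)
            if i != -1:
                word = word[:i] + v + word[i + len(u):]
                changed = True
                break
    return word

print(dehn_reduce("aaaa", [("aaa", "")]))  # prints "a"
```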
So I don't know how to do it precisely for hyperbolic groups, but if $S$ is a surface of genus $g \geq 2$, to get a geodesic representative for a class $[\alpha] \in \pi_1(S)$ where $\alpha$ is an embedded loop, one lifts it to $\widetilde{\alpha}$ in $\Bbb H^2$ by the locally isometric universal covering, and then the deck transformation corresponding to $[\alpha]$ is an isometry of $\Bbb H^2$ which preserves the embedded arc $\widetilde{\alpha}$
It has to be an isometry fixing a geodesic $\gamma$ with endpoints at the boundary being the same as the endpoints of $\widetilde{\alpha}$.
Consider the homotopy of $\widetilde{\alpha}$ to $\gamma$ given by the straight-line homotopy, with the straight lines taken to be hyperbolic geodesics. This is $\pi_1(S)$-equivariant, so it projects to a homotopy downstairs between $\alpha$ and the image of $\gamma$ (which is a geodesic in $S$), and you have your desired representative
I don't know how to interpret this coarsely in $\pi_1(S)$
@anakhro Well, they print in bulk, and on really cheap paper, almost transparent and very thin, and an offset machine is much cheaper per page than a printer, you know, but you have to be printing in bulk; it's all economies of scale.
@ParasKhosla Yes, I am Indian, and trying to get into some good masters program in math.
Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants. The first branch involves the study of graphs in connection with linear algebra; especially, it studies the spectrum of the adjacency matrix, or the Lap...
I can probably guess that they are using symmetries and permutation groups on graphs in this course.
For example, orbits and studying the automorphism groups of graphs.
@anakhro I have heard really good things about Palka. Also, if you don't mind a little sacrifice of rigor (e.g. counterclockwise orientation based on your intuition rather than on winding numbers, etc.), Howie's Complex Analysis is good. It is teeming with typos here and there, but you will be fine, I think. Also, this book contains all the solutions in an appendix!
Got a simple question: I gotta find kernel of linear transformation $F(P)=xP^{''}(x) + (x+1)P^{'''}(x)$ where $F: \mathbb{R}_3[x] \to \mathbb{R}_3[x]$, so I think it would be just $\ker (F) = \{ ax+b : a,b \in \mathbb{R} \}$ since only polynomials of degree at most 1 would give zero polynomial in this case
@chandx you're looking for all the $G = P''$ such that $xG + (x+1)G' = 0$; if $G \neq 0$ you can solve the ODE to get $G'/G = -x/(x+1) = -1 + 1/(x+1) \implies \ln G = -x + \ln(1+x) + C \implies G = C(1+x)e^{-x}$, which is not a polynomial unless $C = 0$, so $G = 0$ and thus $P = ax + b$
could you suppose that $\operatorname{deg} P \geq 2$ and show that you wouldn't have nonzero polynomials? Sure.
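For what it's worth, the kernel claim can also be double-checked symbolically (a sketch assuming SymPy is available): apply $F$ to a general cubic and force every coefficient of the result to vanish.

```python
import sympy as sp

x, a, b, c, d = sp.symbols("x a b c d")
P = a*x**3 + b*x**2 + c*x + d                      # general element of R_3[x]
F = x*sp.diff(P, x, 2) + (x + 1)*sp.diff(P, x, 3)  # F(P) = x P'' + (x+1) P'''

# F(P) = 0 as a polynomial forces every coefficient to vanish
coeffs = sp.Poly(sp.expand(F), x).all_coeffs()
sol = sp.solve(coeffs, [a, b, c, d], dict=True)
print(sol)  # [{a: 0, b: 0}] -- c, d free, so ker F = {c*x + d}
```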
|
I proceed assuming you wanted addition to stay the same (since you said nothing about changing it.)
If you don't care about distributivity, or you don't care about addition period, then yeah, you can take whatever function you want $\mathbb R\times\mathbb R\to \mathbb R$ and sometimes the order of the inputs will matter.
If you *do* care about addition, then you can propose whatever weird rules you want for a binary operation, but it will often be disastrous for other properties that we value about multiplication, like distributivity.
Take the second proposed axiom for example: $n\otimes(-m)=nm$
If we wanted distributivity, $n\otimes m + n\otimes(-m)=n\otimes(m-m)=0$, so that $n\otimes(-m)=-(n\otimes m)=-nm$. With your axiom above, we'd have $nm=-nm$ so that $2nm=0$. But this is using regular multiplication and we know that's not true in the real numbers for nonzero $n,m$.
I think there are some oddball binary operations on $\mathbb R$ that can be useful, but by and large the most-used ones are those which cooperate with addition, so that you have a ring structure.
How could this math be applied?
Try not to fall down the rabbit hole of spending time with "solutions looking for problems" and try to get into the mindset of "problems looking for solutions."
Almost always (or always?) the most fruitful mathematics is generated in the service of solving a problem, not the other way around.
BUT perhaps you meant to ask something more like this, which I think is a fair question:
What are some examples of noncommutative binary operations on the reals that have applications?
Well, now that I think about it, two come to mind:
$a\otimes b=a/b$ and $a\otimes b=a-b$. These 'have applications' but their study does not seem to go very far beyond what we already learn with regular multiplication.
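A trivial numeric illustration of the subtraction example (the name `otimes` is ours, just for the demo): order of the inputs matters, and left-distributivity over addition fails.

```python
# A hypothetical noncommutative "multiplication": plain subtraction.
def otimes(a, b):
    return a - b

print(otimes(3, 5), otimes(5, 3))   # -2 2  -> order matters

# Left-distributivity over addition fails:
print(otimes(2, 3 + 4))             # -5
print(otimes(2, 3) + otimes(2, 4))  # -3
```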
|
One application of uncountable sums (or, to be more precise, sums along arbitrary index set) I am aware of is the definition of the Hilbert space $\ell_2(A)$.
A very basic example of a Hilbert space is the space $\ell_2=\ell_2(\mathbb N)$. The elements of this space are sequences such that $\sum\limits_{i\in\mathbb N} x(i)^2<\infty$. It is endowed with the inner product $\langle x,y \rangle =\sum\limits_{i\in\mathbb N} x(i)y(i)$.
If we allow summation over arbitrary sets, then we can define $\ell_2(A)$ using almost the same construction; in this case, we take all functions $x\colon A\to\mathbb R$ such that $$\sum_{i\in A} x(i)^2 < \infty$$ and the inner product will be $$\langle x,y\rangle = \sum\limits_{i\in A} x(i)y(i).$$
It can be shown that this is indeed a Hilbert space and that every Hilbert space $X$ is isomorphic to $\ell_2(A)$ for some set $A$. The cardinality of $A$ is precisely the "Hilbert dimension", i.e. the cardinality of an orthonormal basis for $X$. This result is, in some sense, a classification of all Hilbert spaces.
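As an illustration: for elements with only finitely many nonzero entries, the inner product on $\ell_2(A)$ only ever sums finitely many terms, so it can be modeled directly. Here is a toy Python sketch (not a full construction), with dicts standing in for finitely supported functions on an arbitrary index set $A$:

```python
import math

# Elements of l^2(A) with finite support, as dicts {index: value}.
# The index set A need not be countable or ordered.
def inner(x, y):
    # Only indices where both entries are nonzero contribute to the sum.
    return sum(x[i] * y[i] for i in x.keys() & y.keys())

def norm(x):
    return math.sqrt(inner(x, x))

x = {"red": 3.0, "blue": 4.0}
y = {"blue": 2.0, "green": 5.0}
print(inner(x, y))  # 8.0
print(norm(x))      # 5.0
```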
These results can be found, for example, in:
Chapter 13 of Roman's Advanced Linear Algebra; Chapter IX of Dixmier's General Topology; Chapter II of Retherford's Hilbert Space; Corollary 1.4.19 in Tao's An Epsilon of Room...
And there are probably many other places to look. Just google for some reasonable phrases, for example:
See also: Is every Hilbert space an $L^2$ space?
|
Hydrostatic equilibrium
We simulate the hydrostatic equilibrium of a box in heave and pitch, comparing the solutions given by the linear and nonlinear approximations.
We consider a box with dimensions \((L,B,H) = (8,4,2)\,m\), with its center of gravity located at the center of the box. Its mass is taken as
\[mass = \dfrac{1}{2} \rho_{water}\times V_{box} = \dfrac{1}{2} \rho_{water}\times L \times B \times H\]
An artificial linear damping force is introduced to account for the hydrodynamic radiation damping (see Damping force). The diagonal coefficients are taken as \(10^4\) for both the translation and rotation degrees of freedom.
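As a quick numeric check of the mass formula above (assuming a water density of 1025 kg/m³, which the text does not specify):

```python
# Box mass = half the mass of the displaced water at full submersion,
# i.e. mass = 0.5 * rho_water * L * B * H
L, B, H = 8.0, 4.0, 2.0          # box dimensions in metres
rho_water = 1025.0               # assumed sea-water density, kg/m^3
mass = 0.5 * rho_water * L * B * H
print(mass)  # 32800.0 kg
```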
Fig. 24 Decay test in heave, small-amplitude motions: blue = linear approximation, orange = nonlinear approximation
Fig. 25 Decay test in heave, large-amplitude motions: green = linear approximation, red = nonlinear approximation
Fig. 26 Decay test in pitch, small-amplitude motions: violet = linear approximation, brown = nonlinear approximation
Fig. 27 Decay test in pitch, large-amplitude motions: pink = linear approximation, grey = nonlinear approximation
|
Polygon is a word derived from the Greek language, where poly means many and gonia means angle. So we can say that a closed plane figure with many angles is called a polygon.
A polygon has many associated quantities: sides, diagonals, area, angles, etc. Let's see how to find them using the polygon formulae below.
Polygon formula to find area:
\[\large Area\;of\;a\;regular\;polygon=\frac{1}{2}n\; sin\left(\frac{360^{\circ}}{n}\right)s^{2}\]
Polygon formula to find the sum of interior angles:
\[\large Sum\;of\;interior\;angles\;of\;a\;polygon=\left(n-2\right)180^{\circ}\]
Polygon formula to find the number of triangles:
\[\large Number\;of\;triangles\;in\;a\;polygon=\left(n-2\right)\]
Where,
n is the number of sides and s is the length from the center to a corner (the circumradius).

Solved Example

Question: A polygon is an octagon and its circumradius is 5 cm. Calculate its area.

Solution:

Given: the polygon is an octagon, hence n = 8 and s = 5 cm.

Area of a regular polygon = $\frac{1}{2}\, n \sin\left(\frac{360^{\circ}}{n}\right) s^{2}$

Area of the octagon = $\frac{1}{2} \times 8 \times \sin\left(\frac{360^{\circ}}{8}\right) \times 5^{2}$
= $0.5 \times 8 \times 0.707 \times 25$
= $70.7\; cm^{2}$.
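The area formula can be checked numerically (a small sketch; the function name is ours):

```python
import math

# Area of a regular n-gon with circumradius s:
# A = (1/2) * n * sin(360°/n) * s^2, with 360°/n written as 2*pi/n radians.
def regular_polygon_area(n, s):
    return 0.5 * n * math.sin(2 * math.pi / n) * s ** 2

print(round(regular_polygon_area(8, 5), 1))  # 70.7
```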
|
Does anyone know if $T: V \to \mathbb{R}^n$ is an inner product space isomorphism if $T(v) = (v)_S$, where $S$ is a basis for $V$? My book isn't saying so explicitly, but there was a theorem saying that an inner product isomorphism exists, and another theorem kind of suggesting that it should work.
@TobiasKildetoft Sorry, I meant that they should be equal (accidentally sent this before writing my answer. Writing it now)
|
$$\lim_{x\to16}\frac{\sqrt[4]x-2}{x-16}$$
My suspicion is that I need to find the conjugate to get this in a more factorable form. From there, I think I can plug in $16$ to find the limit. The only issue is that I'm not quite sure how to factor this. This is my first Calc class and I've never factored anything quite like this before. I found the following format for factoring this sort of problem:
$$(a-b)\left(a^3+a^2b+ab^2+b^3\right) = a^4-b^4$$
But I'm pretty confused in terms of how to apply it (do I apply it to the numerator and then try to get the denominator into a similar form to cancel stuff out?). Does $a = x^\frac14$, and $b = 2$? I feel like the toughest part of this problem is the factoring, not the Calculus.
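As a sanity check (not part of the question), SymPy confirms the value that the factorization with $a = x^{1/4}$, $b = 2$ yields, namely $1/(a^3+a^2b+ab^2+b^3)\big|_{a=2} = 1/32$:

```python
import sympy as sp

# Evaluate lim_{x -> 16} (x^(1/4) - 2) / (x - 16) symbolically.
x = sp.symbols("x", positive=True)
L = sp.limit((x ** sp.Rational(1, 4) - 2) / (x - 16), x, 16)
print(L)  # 1/32
```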