3,984,322
<p>I have the 2nd order homogeneous ODE: <span class="math-container">$$y''(x)+\frac{2}{x}y'(x)-By(x)=0 $$</span> where <span class="math-container">$B$</span> is a constant. Since I am a bit rusty with analytical solutions, I plugged it into MATLAB's symbolic solver, which found the following solution. <span class="math-container">$$y(x)=C_1\frac{e^{\sqrt{B}x}}{x}-C_2\frac{e^{-\sqrt{B}x}}{2\sqrt{B}x}$$</span> Unfortunately, the constant <span class="math-container">$B$</span> takes both positive and negative values, but since the ODE is derived from a physical problem I'd like the solution to always be real. Any suggestion/comment is highly appreciated.</p>
Gribouillis
398,505
<p><strong>Hint</strong> Try the substitution <span class="math-container">\begin{equation} y(x) = \frac{z(x)}{x} \end{equation}</span></p>
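The hint can be sanity-checked numerically (a sketch, not part of the original answer; the test function `z` and the constant `B` are arbitrary choices): with $y = z/x$, the left-hand side of the ODE becomes $(z'' - Bz)/x$, so the equation reduces to $z'' = Bz$.

```python
import math

# Verify that y = z/x turns y'' + (2/x) y' - B y into (z'' - B z)/x
# for an arbitrary smooth test function z, using finite differences.
B = 4.0
z = lambda x: math.sin(3 * x) + x**2
y = lambda x: z(x) / x

def d1(f, x, h=1e-5):  # central first derivative
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-5):  # central second derivative
    return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

x0 = 1.3
lhs = d2(y, x0) + (2 / x0) * d1(y, x0) - B * y(x0)
rhs = (d2(z, x0) - B * z(x0)) / x0
assert abs(lhs - rhs) < 1e-3
```

Since $z'' = Bz$ has real solutions $e^{\pm\sqrt{B}x}$ for $B>0$ and $\cos(\sqrt{-B}\,x)$, $\sin(\sqrt{-B}\,x)$ for $B<0$, the substitution yields a real-valued $y$ in both cases.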
1,291,657
<p>Is Bézout's lemma enough to confirm the GCD of two numbers?</p> <p>So suppose we have <span class="math-container">$$ax+by=z$$</span> does this mean <span class="math-container">$$\gcd(a,b)=z$$</span>?</p>
davidlowryduda
9,754
<p>Consider the following: $$ 1\cdot 2 + 1\cdot 3 = 5,$$ and so $$\gcd (1,1) = 5.$$ You might try almost any combination of numbers and check for yourself the validity of your statement.</p>
1,291,657
<p>Is Bézout's lemma enough to confirm the GCD of two numbers?</p> <p>So suppose we have <span class="math-container">$$ax+by=z$$</span> does this mean <span class="math-container">$$\gcd(a,b)=z$$</span>?</p>
Bill Dubuque
242
<p>A <em>common</em> divisor <span class="math-container">$\,c\mid a,b\,$</span> is <em>greatest</em> if it can be written in linear form <span class="math-container">$\,c = j a + k b,\,$</span> since then <span class="math-container">$\,d\mid a,b\,\Rightarrow\, d\mid ja+kb = c,\,$</span> so <span class="math-container">$\, d\le c.\,$</span> Conversely, by <a href="https://math.stackexchange.com/q/718827/242">Bezout,</a> we know every greatest common divisor can be expressed as such a linear combination.</p> <p>But only the <em>least</em> positive such linear combination of <span class="math-container">$\,a,b\,$</span> equals the gcd. Indeed, such linear combinations are preserved by scaling by arbitrary integers, so any multiple of the gcd is also such a linear combination. Further, one can show that the integers of the form <span class="math-container">$\ ja + kb \ $</span> are precisely all integer multiples of the gcd, i.e. <span class="math-container">$\ a \Bbb Z + b\Bbb Z\, =\, \gcd(a,b)\Bbb Z.\,$</span> Said equivalently, there exist integers <span class="math-container">$\,x,y\,$</span> such that <span class="math-container">$\ a x + by = c \iff\gcd(a,b)\mid c,\,$</span> see <a href="https://math.stackexchange.com/a/3290965/242">here.</a></p>
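The identity $a\,\Bbb Z + b\,\Bbb Z = \gcd(a,b)\,\Bbb Z$ can be spot-checked by brute force (a sketch, not part of the answer; the values of $a$, $b$ and the search ranges are arbitrary):

```python
from math import gcd

a, b = 12, 18
g = gcd(a, b)  # = 6

# all values j*a + k*b for small j, k
combos = {j * a + k * b for j in range(-20, 21) for k in range(-20, 21)}

# every linear combination is a multiple of gcd(a, b) ...
assert all(c % g == 0 for c in combos)
# ... and every small multiple of gcd(a, b) is a linear combination
assert set(range(-10 * g, 10 * g + 1, g)) <= combos
# the least positive linear combination is the gcd itself
assert min(c for c in combos if c > 0) == g
```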
503,691
<p>I'm a bit confused about a statement that I see often in the $\lambda$-calculus literature. Namely, what exactly does the following statement mean: "By induction on the length of $M\in\Lambda$." ?</p> <p>In such literature, $\Lambda$ is defined as the smallest subset of $\mathcal{V}\cup\{(,),\lambda\}$ such that:</p> <ul> <li>$x\in\Lambda$, if $x\in\mathcal{V}$</li> <li>$(PQ)\in\Lambda$, if $P,Q\in\Lambda$</li> <li>$(\lambda x P)\in\Lambda$, if $x\in\mathcal{V}$ and $P\in\Lambda$</li> </ul> <p>where $\mathcal{V}$ is any fixed infinite set of symbols.</p> <p>Now, the length of $M\in\Lambda$ is given by:</p> <ul> <li>$|x| = 1$</li> <li>$|(PQ)| = |P|+|Q|$</li> <li>$|(\lambda x P)| = 1+|P|$</li> </ul> <p>Therefore, my question is: in what way is <em>"structural induction over $\Lambda$"</em> any different from <em>"induction on the length of a $\lambda$-term"</em>? In particular, what is the inductive hypothesis in such inductive method? </p>
MJD
25,554
<p>As a practical matter the technique of structural induction and the technique of induction on length turn out to be the same, and you can take "induction on the length of the term" to be synonymous with "structural induction on the term".</p> <p>In more detail, what is meant here is actually <a href="http://enwp.org/Strong_induction" rel="nofollow"><em>strong</em> induction</a> on the length of the term. The induction hypothesis is that some predicate $\Phi$ holds for all terms of length strictly <em>less than</em> $n$, and the inductive step uses the hypothesis to show that it must also hold for all terms of length $n$.</p> <p>In practice this nearly always takes the form of a structural induction argument, something like this: </p> <blockquote> <p>We want to show $\Phi(T)$, where $T$ has length $n$. Let $S$ be any proper subterm of $T$. Then since the length of $S$ is strictly less than the length of $T$, we know by the induction hypothesis that $\Phi(S)$…(etc.).</p> </blockquote> <p>The "length" might be a structural length that counts the subterms, as the one in your question, or it could be the actual number of symbols in the λ-term:</p> <p>$\def\size#1{\left|#1\right|}$</p> <p>$$ \begin{align} \size v &amp; = 1 \\ \size{\lambda x.T} &amp; = 3 + \size T \\ \size{(PQ)} &amp; = 2 + \size P + \size Q \end{align}$$</p> <p>The details are unimportant. The important thing here is just that the size function $\size\cdot$ is strictly monotonic: we must have $\size S \lt\size T$ whenever $S$ is a subterm of $T$.</p>
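Both notions of size can be made concrete (a sketch, not from the answer; the nested-tuple encoding of terms is an arbitrary choice):

```python
# λ-terms as nested tuples: a variable is a string, an application is
# a pair (P, Q), and an abstraction is a triple ('lam', x, P).

def length(t):
    """Structural length, as defined in the question."""
    if isinstance(t, str):                  # variable
        return 1
    if t[0] == 'lam':                       # (λ x P)
        return 1 + length(t[2])
    return length(t[0]) + length(t[1])      # (P Q)

def size(t):
    """Symbol count, as defined in the answer."""
    if isinstance(t, str):
        return 1
    if t[0] == 'lam':
        return 3 + size(t[2])
    return 2 + size(t[0]) + size(t[1])

term = ('lam', 'x', ('x', 'y'))             # λx.(x y)
print(length(term), size(term))             # 3 7
```

Both functions are strictly monotonic on proper subterms, which is the only property the strong-induction argument needs.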
403,344
<p>I'm trying to determine whether this statement is true or false:</p> <p>Assume that $\dim V = n$ and let $T \in L(V)$. Let $f(z) \in P_n(F)$ be a monic polynomial of degree $n$ such that $f(T) = 0$. Then $f(z)$ is the characteristic polynomial of $T$.</p> <p>I know that because $f(T) = 0$, the minimal polynomial of $T$ divides it, and that the minimal polynomial of $T$ divides the characteristic polynomial of $T$ as well, but I can't seem to put these together in a way that conclusively establishes the truth or falsehood of the above statement.</p> <p>If anyone could shed some insight on the matter, it would be greatly appreciated. Thanks!</p>
Tom Oldfield
45,760
<p>This is not true in general. The minimum polynomial is the smallest degree polynomial that the linear map satisfies, and the characteristic polynomial is defined to be $\det(xI-T)$. If the minimum polynomial is degree $m$, we know that the characteristic polynomial will be degree $n$ so $m\leq n$. If $p(x)$ is the minimum polynomial and $g(x)$ is any other poly, then $T$ will satisfy the polynomial $p(x)g(x)$. So if $m &lt; n$, we can multiply $p(x)$ by any polynomial of degree $(n-m)$ to get a new polynomial of degree $n$ that the linear map satisfies. This will not, in general, be the characteristic polynomial.</p> <p>The above is all done thinking almost solely about polynomials. Now I'll talk about how we can understand the problem thinking more about linear maps. Working over an algebraically closed field, we know that we can pick a basis with respect to which the matrix of $T$ is in Jordan Normal Form (JNF). The multiplicities of the roots of the characteristic polynomial are the number of times that root appears on the diagonal of the matrix in JNF, but the multiplicities of the roots of the minimum polynomial tell us the size of the largest block corresponding to that root. From looking at the matrix, it is intuitively clear that the matrix will satisfy a given polynomial $f(x)$ iff $p(x)$ divides $f(x)$. Thus we can, as before, multiply $p(x)$ by any polynomial of degree $n-m$ to get one of degree $n$ that the linear map satisfies.</p>
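A concrete counterexample can be checked directly (a sketch, not from the answer): take $T = 0$ on $\Bbb R^2$, whose characteristic polynomial is $x^2$, yet the monic degree-2 polynomial $f(x) = x^2 - x$ also kills $T$.

```python
# T = 0 on R^2: char poly is x^2, but f(x) = x^2 - x (monic, degree 2 = dim V)
# also satisfies f(T) = 0, so f need not be the characteristic polynomial.
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[0, 0], [0, 0]]
T2 = matmul(T, T)
f_of_T = [[T2[i][j] - T[i][j] for j in range(2)] for i in range(2)]
assert f_of_T == [[0, 0], [0, 0]]   # f(T) = 0, though f != char poly
```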
2,421,421
<blockquote> <p>Evaluate $$ \lim_{x\to \pi/2} \frac{\sqrt{1+\cos(2x)}}{\sqrt{\pi}-\sqrt{2x}}$$ </p> </blockquote> <p>I tried to solve this by L'Hospital's rule, but that doesn't give a solution. I'd appreciate it if you could give a clue.</p>
Marios Gretsas
359,315
<p>$\cos{2x}=2\cos^2{x}-1$</p> <p>Thus the numerator becomes:</p> <p>$$\sqrt{1+\cos{2x}}=\sqrt{1+2\cos^2{x}-1}=\sqrt{2}|\cos{x}|$$</p> <p>As $x \to \frac{\pi}{2}^-$ we have $|\cos{x}|=\cos{x}$</p> <p>As $x \to \frac{\pi}{2}^+$ we have $|\cos{x}|=-\cos{x}$</p> <p>Now you can use L'Hospital's rule to evaluate both limits.</p>
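As a quick numerical sanity check (not part of the answer), the two one-sided limits come out as $\pm\sqrt{2\pi}$:

```python
import math

def g(x):
    return math.sqrt(1 + math.cos(2 * x)) / (math.sqrt(math.pi) - math.sqrt(2 * x))

h = 1e-6
left = g(math.pi / 2 - h)    # approaches +sqrt(2*pi)
right = g(math.pi / 2 + h)   # approaches -sqrt(2*pi)
assert abs(left - math.sqrt(2 * math.pi)) < 1e-2
assert abs(right + math.sqrt(2 * math.pi)) < 1e-2
```

Since the one-sided limits differ, the two-sided limit does not exist.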
3,553,471
<blockquote> <p>For the triangle <span class="math-container">$\triangle ABC$</span>, prove the inequality <span class="math-container">$$\frac1{4+\cos A\cos(B-C)}+\frac1{4+\cos B\cos(C-A)}+\frac1{4+\cos C\cos(A-B)}\ge \frac23$$</span> where <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span> are the vertex angles. </p> </blockquote> <p>I had trouble tackling it. I have treated it as a minimization problem, subject to the constraint <span class="math-container">$A+B+C=\pi$</span>. In principle, it could be solved with the Lagrange multiplier. However, the required algebraic work seems to be prohibitive. I wonder whether there is a more elegant path to show it.</p>
lab bhattacharjee
33,337
<p>Using <span class="math-container">$\cos(B+C)=\cos(\pi-A)=-\cos A$</span></p> <p>and <a href="https://math.stackexchange.com/questions/345703/prove-that-cos-a-b-cos-a-b-cos-2a-sin-2b">Prove that $\cos (A + B)\cos (A - B) = {\cos ^2}A - {\sin ^2}B$</a>,</p> <p><span class="math-container">$$4+\cos A\cos(B-C)$$</span> <span class="math-container">$$=4-\cos(B+C)\cos(B-C)$$</span> <span class="math-container">$$=4-(\cos^2B-\sin^2C)$$</span> <span class="math-container">$$=5-(\cos^2B+\cos^2C)$$</span></p> <p>Now using the AM-HM inequality, </p> <p><span class="math-container">$$\dfrac{\sum_{\text{cyc}}\dfrac1{5-(\cos^2 B+\cos^2C)}}3\ge\dfrac3{15-2(\cos^2A+\cos^2B+\cos^2C)}\ge\dfrac3{15-\dfrac32}=\dfrac29$$</span></p> <p>where the last step uses <span class="math-container">$\cos^2A+\cos^2B+\cos^2C\ge\dfrac34$</span> for the angles of a triangle; hence the sum is at least <span class="math-container">$\dfrac23$</span>.</p> <p>The equality will occur if <span class="math-container">$5-(\cos^2 B+\cos^2C)=5-(\cos^2C+\cos^2A)=5-(\cos^2A+\cos^2B)$</span></p> <p><span class="math-container">$\iff\cos^2A=\cos^2B=\cos^2C$</span> </p> <p><span class="math-container">$\iff\sin^2A=\sin^2B=\sin^2C$</span> </p> <p>which will occur if <span class="math-container">$A=B=C$</span> </p> <p>using <a href="https://math.stackexchange.com/questions/175143/prove-sinab-sina-b-sin2a-sin2b">Prove $ \sin(A+B)\sin(A-B)=\sin^2A-\sin^2B $</a></p>
3,553,471
<blockquote> <p>For the triangle <span class="math-container">$\triangle ABC$</span>, prove the inequality <span class="math-container">$$\frac1{4+\cos A\cos(B-C)}+\frac1{4+\cos B\cos(C-A)}+\frac1{4+\cos C\cos(A-B)}\ge \frac23$$</span> where <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and <span class="math-container">$C$</span> are the vertex angles. </p> </blockquote> <p>I had trouble tackling it. I have treated it as a minimization problem, subject to the constraint <span class="math-container">$A+B+C=\pi$</span>. In principle, it could be solved with the Lagrange multiplier. However, the required algebraic work seems to be prohibitive. I wonder whether there is a more elegant path to show it.</p>
The 2nd
751,538
<p>First, we use the trig identity <span class="math-container">$ \cos A \cos B = \frac{1}{2} (\cos(A+B) + \cos(A-B)) $</span>: </p> <p><span class="math-container">$$\frac1{4 + \cos A \cos(B-C)}+\frac1{4 +\cos B \cos(C-A)} + \frac1{4 + \cos C \cos(A-B)} \ge \frac23$$</span> <span class="math-container">$$ \Leftrightarrow \frac1{4 + \frac{1}{2} (\cos(A+B-C) + \cos(A-B+C))} + \frac1{4 + \frac{1}{2} (\cos(B+C-A) + \cos(B-C+A))} + \frac1{4 + \frac{1}{2} (\cos(C+A-B) + \cos(C-A+B))} \ge \frac23 $$</span></p> <p>Since <span class="math-container">$A + B + C = \pi$</span>, we get:</p> <p><span class="math-container">$$ \frac1{4 + \frac{1}{2} (\cos(\pi-2C) + \cos(\pi-2B))} + \frac1{4 + \frac{1}{2} (\cos(\pi-2A) + \cos(\pi-2C))} + \frac1{4 + \frac{1}{2} (\cos(\pi-2B) + \cos(\pi-2A))} \ge \frac23 $$</span></p> <p>Using the trig identity <span class="math-container">$ \cos(\pi-x) = - \cos x$</span>:</p> <p><span class="math-container">$$ \frac1{4 - \frac{1}{2} (\cos(2C) + \cos(2B))} + \frac1{4 - \frac{1}{2} (\cos(2A) + \cos(2C))} + \frac1{4 - \frac{1}{2} (\cos(2B) + \cos(2A))} \ge \frac23 $$</span></p> <p>Applying the inequality <span class="math-container">$ \frac1a + \frac1b + \frac1c \ge \frac9{a+b+c}$</span>:</p> <p><span class="math-container">$$ \frac1{4 - \frac{1}{2} (\cos(2C) + \cos(2B))} + \frac1{4 - \frac{1}{2} (\cos(2A) + \cos(2C))} + \frac1{4 - \frac{1}{2} (\cos(2B) + \cos(2A))} $$</span></p> <p><span class="math-container">$$ \ge \frac9{12 - (\cos(2A) + \cos(2B) + \cos(2C))} $$</span></p> <p>Finally, use the inequality <span class="math-container">$ \cos(2A) + \cos(2B) + \cos(2C) \ge - \frac{3}{2}$</span> (a proof of which can be found here: <a href="https://math.stackexchange.com/questions/1100551/prove-that-cos2a-cos2b-cos2c-geq-frac32-for-angles-of-a-tr?rq=1">Prove that $\cos(2a) + \cos(2b) + \cos(2c) \geq -\frac{3}{2}$ for angles of a triangle</a>):</p> <p><span class="math-container">$$ \frac{9}{12 - (\cos(2A) + \cos(2B) + \cos(2C))} \geq \frac{9}{12 - (-\frac{3}{2})} = \frac23 $$</span></p> <p>Equality holds when </p> <p><span class="math-container">$$A = B = C = \frac{\pi}{3}, $$</span></p> <p>i.e. when <span class="math-container">$ABC$</span> is an equilateral triangle.</p>
954,281
<p>Given a complex measure: $\mu:\Sigma\to\mathbb{C}$.</p> <p><img src="https://i.stack.imgur.com/YS26y.jpg" alt="Variation Measure"></p> <p>Consider its decomposition into positive measures: $$\mu=\Re_+\mu-\Re_-\mu+i\Im_+\mu-i\Im_-\mu=:\sum_{\alpha=0\ldots3}i^\alpha\mu_\alpha$$</p> <blockquote> <p>Does it split into disjoint regions: $\mu_\alpha(E)=|\mu|(E\cap A_\alpha)\quad(A_\alpha\cap A_\beta=\varnothing)$</p> </blockquote>
Alex R.
22,064
<p>Sure. Set $A$ to be the set on which $\mu$ is positive and $B$ the set on which it is negative.</p>
2,280,323
<p>I am given a linear transformation $$T:\mathbb{R^3} \rightarrow \mathbb{R^2}$$ $$T((x,y,z)) = (x+y,-y+z)$$ The task is to find a basis in $\mathbb{R^3}$, let's call it $B=\{e_1, e_2, e_3\}$, and in $\mathbb{R^2}$, let's call it $B'=\{f_1, f_2\}$, such that $A$ is the matrix of this transformation with respect to the found bases. </p> <p>Here is the matrix $A$: $$A = \begin{bmatrix} 1&amp;0&amp;0\\0&amp;2&amp;0\end{bmatrix}$$</p> <p>I think I am not sure how to interpret the given matrix $A$.</p>
egreg
62,967
<p>According to the definition of associated matrix with respect to given bases, you have to find $\{e_1,e_2,e_3\}$ and $\{f_1,f_2\}$ such that \begin{align} T(e_1)&amp;=f_1\\ T(e_2)&amp;=2f_2\\ T(e_3)&amp;=0 \end{align} Note that the problem is undetermined: you can find infinitely many bases with this property.</p> <p>First find a basis for the kernel of $T$ and you will have $e_3$. Then complete it to a basis for $\mathbb{R}^3$ and define $\{f_1,f_2\}$ according to the specification.</p>
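One concrete choice can be verified directly (a sketch following the recipe above; the particular vectors $e_1=(1,0,0)$, $e_2=(0,1,0)$, $e_3=(1,-1,-1)$ are an arbitrary valid choice, and any other works too):

```python
def T(v):  # the given linear map
    x, y, z = v
    return (x + y, -y + z)

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (1, -1, -1)
assert T(e3) == (0, 0)              # e3 spans ker T

f1 = T(e1)                          # T(e1) = 1*f1 + 0*f2
Te2 = T(e2)
f2 = (Te2[0] / 2, Te2[1] / 2)       # T(e2) = 0*f1 + 2*f2

# {f1, f2} is a basis of R^2 (nonzero determinant), so by construction
# the matrix of T with respect to these bases is exactly A.
det = f1[0] * f2[1] - f1[1] * f2[0]
assert det != 0
```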
3,786,797
<p>I have two variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span> which are measured at two time steps <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. Available to me is the percent change from the first timestep to the second, relative to the first i.e. <span class="math-container">$\frac{x_1 - x_0}{x_0}$</span> and <span class="math-container">$\frac{y_1 - y_0}{y_0}$</span>. I would like to recover the ratio <span class="math-container">$\frac{x_1}{y_1}$</span>. I've tried adding 1 = <span class="math-container">$\frac{x_0}{x_0} = \frac{y_0}{y_0}$</span> to the two percent changes and then dividing them but this only gets me to <span class="math-container">$\frac{x_1}{y_1} \cdot \frac{y_0}{x_0}$</span>. It seems to me that my problem has 2 equations and 4 unknowns, even though 2 of the unknowns don't appear in the desired ratio, making it have infinite solutions. Is that correct? Otherwise, is there another manipulation I can make to get my desired ratio?</p>
Fawkes4494d3
260,674
<blockquote> <p>First, just arrange only the <span class="math-container">$5$</span> men in a line. You can do that in <span class="math-container">$5!=120$</span> ways, number of permutations of <span class="math-container">$5$</span> distinct objects.</p> </blockquote> <blockquote> <p>Take <strong>any such permutation</strong>, call it <span class="math-container">$A,B,C,D,E$</span> and note that now you can place the women in the spaces denoted as <span class="math-container">$$\_,A,\_,B,\_,C,\_,D,\_,E,\_$$</span> Let the number of women placed in the <span class="math-container">$i^{th}$</span> space be <span class="math-container">$a_i$</span>. Then <span class="math-container">$$a_1\ge 0; \ a_2,a_3,a_4,a_5 &gt; 0 ; \ a_6 \ge 0 \\ \text{and }\ \ \ \ \ a_1+a_2+a_3+a_4+a_5+a_6=5$$</span> which is same as saying <span class="math-container">$$a_1\ge 0; \ a_2,a_3,a_4,a_5 \ge 1 ; \ a_6 \ge 0 \implies a_1, a_2-1,a_3-1,a_4-1,a_5-1,a_6\ge 0 \\ \text{and }\ \ \ \ \ a_1+(a_2-1)+(a_3-1)+(a_4-1)+(a_5-1)+a_6=1$$</span></p> </blockquote> <blockquote> <p>Now the question is, how many such combinations of <span class="math-container">$(a_1,a_2,\cdots,a_6)$</span> are there? 
Basically you have to find the <a href="https://math.stackexchange.com/q/919676/260674">number of non-negative integral solutions</a> to the equation <span class="math-container">$a+b+c+d+e+f=1$</span>, which can be obtained by the formula in the above link (or by simply observing that one of the variables has to be <span class="math-container">$1$</span> and the rest <span class="math-container">$0$</span>) to be <strong>6</strong>.</p> </blockquote> <blockquote> <p>For each of these <span class="math-container">$6$</span> solutions, you can arrange the <span class="math-container">$5$</span> women in a line in <span class="math-container">$5!=120$</span> ways and divvy them up as the solutions <span class="math-container">$(a_1,a_2,a_3,a_4,a_5,a_6)$</span> in <span class="math-container">$6\times 5!=720$</span> ways.</p> </blockquote> <blockquote> <p>The <span class="math-container">$720$</span> ways you have got for arranging the women, is for &quot;<strong>any such permutation</strong>&quot; from step 2. How many were there? There were <span class="math-container">$120$</span> of them (from step 1). So you have a total of <span class="math-container">$720\times 120=86400$</span> ways.</p> </blockquote> <p>As for why your idea wouldn't work in its current form (it would need a refinement by the principle of inclusion-exclusion): you have removed the possibility <span class="math-container">$MWMMWMMWWW$</span> as many times as <span class="math-container">$MM$</span> occurs for this permutation, for each of the <span class="math-container">$9!$</span>(?, should have been 8!, because you are fixing the positions of <span class="math-container">$2$</span> [if I am reading it right]) ways the remaining <span class="math-container">$9$</span> people could fit in this <em>invalid</em> permutation, which obviously removes each <em>invalid</em> permutation multiple times from the total of <span class="math-container">$10!$</span> and makes it negative.</p>
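The count can be confirmed by brute force over the position patterns (a sketch, not part of the answer): choose which 5 of the 10 slots hold men, require no two adjacent, and multiply by the orderings of the men and of the women.

```python
from itertools import combinations
from math import factorial

# choose 5 of 10 positions for the men so that no two are adjacent
patterns = [c for c in combinations(range(10), 5)
            if all(b - a > 1 for a, b in zip(c, c[1:]))]

assert len(patterns) == 6                    # matches the 6 gap solutions
total = len(patterns) * factorial(5) * factorial(5)
assert total == 86400
```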
2,592,581
<p>Let's define the following series $$S_n =\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)...(i+n)}.$$ I know that $S_0$ does not converge, so let's suppose $n \in \mathbb{N}$, $n\ge 1$, and define $S_n$ for such $n$. We have $S_1=1$ , $S_2=\frac1 4$ , $S_3=\frac 1 {18}$ , $S_4=\frac 1{96}$ , $S_5=\frac 1{600}$ etc.<br/><br/> The numerator in all results is 1.<br/>The pattern in the denominators is $[1,4,18,96,600,\ldots]$ and <a href="https://oeis.org/A001563" rel="nofollow noreferrer">can be found here</a>; it is equal to $n\cdot n!$. Finally I want to prove the general equality:<br/><br/> $\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)...(i+n)}=\frac 1 {n\cdot n!}$</p>
Math Lover
348,257
<p>Hint: Telescopic sum and $$\frac{n}{i(i+1)(i+2)\cdots(i+n)}=\frac{1}{i(i+1)\cdots(i+n-1)}-\frac{1}{(i+1)(i+2)\cdots(i+n)}.$$</p>
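The telescoping identity can be checked exactly with rational arithmetic (a sketch, not part of the hint; the sample ranges are arbitrary):

```python
from fractions import Fraction
from math import factorial

def prod(lo, hi):
    """lo * (lo+1) * ... * hi"""
    p = 1
    for k in range(lo, hi + 1):
        p *= k
    return p

for n in range(1, 6):
    for i in range(1, 30):
        lhs = Fraction(n, prod(i, i + n))
        rhs = Fraction(1, prod(i, i + n - 1)) - Fraction(1, prod(i + 1, i + n))
        assert lhs == rhs
    # telescoping: n * S_n = 1/(1*2*...*n) = 1/n!, hence S_n = 1/(n*n!)
    assert Fraction(1, prod(1, n)) == Fraction(1, factorial(n))
```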
2,019,542
<p>I read the <a href="http://mathworld.wolfram.com/PolynomialOrder.html" rel="nofollow noreferrer">polynomial order</a> definition, but I am still confused about how to find the order of a given polynomial. What does the polynomial order represent?</p> <p>How do we find the order of, for example, $x^{5}+x^{2}+1$? Apparently it has order 31, but I don't know why.</p>
iam_agf
196,583
<p>You want the minimum $n$ such that $x^5+x^2+1$ divides $x^n+1$ in $\mathbb{F}_2$, i.e. such that</p> <p>$$\frac{x^n+1}{x^5+x^2+1}$$</p> <p>has residue zero.</p> <p>Doing the division without knowing $n$ (only trying to get residue zero), the polynomial $x^n+1$ gives the quotient</p> <p>$$x^{n-5}+x^{n-8}+ x^{n-10}+x^{n-11}+x^{n-14}+x^{n-15}+x^{n-16}+x^{n-17}+x^{n-18}+x^{n-22}+x^{n-23}+x^{n-25}+x^{n-26}+x^{n-27}+x^{n-29}+x^{n-31}$$</p> <p>Because:</p> <p>\begin{align*} x^n &amp; +x^{n-3} &amp; +x^{n-5}\\ &amp; +x^{n-3}&amp;&amp; +x^{n-6}&amp;+x^{n-8}\\ &amp;&amp; x^{n-5}&amp;&amp;+x^{n-8}&amp;+x^{n-10}\\ &amp;&amp;&amp;&amp;... \end{align*}</p> <p>(It is a very long division.)</p> <p>So you want the minimal $n$ for which this division has residue zero. The lowest term of the quotient is $x^{n-31}$, so you need $n-31=0$, which gives exactly $n=31$, and the quotient polynomial will be: </p> <p>$$1+x^2+x^4+x^5+x^6+x^8+x^9+x^{13}+x^{14}+x^{15}+x^{16}+x^{17}+x^{20}+x^{21}+x^{23}+x^{26}$$</p> <p>If you consider any $n&gt;31$ for which the division works, it is not minimal. If you consider any $n&lt;31$, the division makes no sense, since you need $n-31\ge 0$. </p>
2,019,542
<p>I read the <a href="http://mathworld.wolfram.com/PolynomialOrder.html" rel="nofollow noreferrer">polynomial order</a> definition, but I am still confused about how to find the order of a given polynomial. What does the polynomial order represent?</p> <p>How do we find the order of, for example, $x^{5}+x^{2}+1$? Apparently it has order 31, but I don't know why.</p>
Gerry Myerson
8,269
<p>Let's write $F$ for the field of two elements, and $f(x)$ for $x^5+x^2+1$. You have to show $f$ is irreducible over $F$. Having done that, it follows that $K=F[x]/\langle\,f\,\rangle$ is a field of degree 5 as an extension of $F$. It follows that $K$ has $2^5=32$ elements. So there are 31 nonzero elements. The nonzero elements of any field form a group, so in this case it's a group of order 31. Now 31 is a prime, so every element of this group has order 31 (except, of course, the multiplicative identity, which has order 1). </p>
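The order can also be computed directly by repeated multiplication by $x$ modulo $f$ (a sketch, not from the answer; polynomials over $\mathbb{F}_2$ are encoded as integer bitmasks, with bit $i$ holding the coefficient of $x^i$):

```python
def gf2_mod(a, m):
    """Remainder of polynomial a modulo m over F_2 (bitmask encoding)."""
    dm = m.bit_length() - 1
    while a.bit_length() - 1 >= dm:
        a ^= m << (a.bit_length() - 1 - dm)
    return a

def poly_order(f):
    """Smallest n >= 1 with x^n = 1 (mod f), i.e. f | x^n + 1 over F_2."""
    p, n = gf2_mod(0b10, f), 1   # p tracks x^n mod f
    while p != 1:
        p = gf2_mod(p << 1, f)   # multiply by x, then reduce
        n += 1
    return n

print(poly_order(0b100101))      # f = x^5 + x^2 + 1 -> 31
```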
2,037,652
<p>I'm taking a course on smooth manifolds and the following exercise was given to me:</p> <blockquote> <p>If $P\in\mathbb{R}[X_1, ..., X_{n+1}]$ is a homogeneous polynomial of degree $d$ such that $\left(\frac{\partial P}{\partial X_1}, ..., \frac{\partial P}{\partial X_{n+1}}\right)$ is never zero, prove that $Z(P):=\{[x_1:\dots:x_{n+1}]\in \mathbb{RP}^n\mid P(x_1,\dots, x_{n+1})=0\}$ is a regular submanifold of $\mathbb{RP}^n$.</p> </blockquote> <p>Here was my attempt: by interpreting $P$ as a smooth function $P:\mathbb{RP}^n\to \mathbb{R}$, we have that $\text{rank}(dP)\equiv 1$ in the whole domain, so by the constant rank theorem, $Z(P)=P^{-1}(0)$ is a regular submanifold of codimension $1$ (if this is wrong, please let me know).</p> <p>First of all, the fact that $d$ was not important was really surprising to me. Also, it seemed to me that the condition $\left(\frac{\partial P}{\partial X_1}, ..., \frac{\partial P}{\partial X_{n+1}}\right)\neq 0$ was pretty strong, and made me wonder if it was really necessary. For example, taking $n=2$ and $P(X, Y, Z)=X^2-XZ+YZ$, we have $\left(\frac{\partial P}{\partial X}, \frac{\partial P}{\partial Y},\frac{\partial P}{\partial Z}\right)(0,0,0)= 0$, but $Z(P)$ is still a regular submanifold of codimension $1$ (right?).</p> <p>So my question is if there's a more general way to analyse whether or not $Z(P)$ is a regular submanifold, i.e., without imposing $\left(\frac{\partial P}{\partial X_1}, ..., \frac{\partial P}{\partial X_{n+1}}\right)\neq 0$? And what would be the relationship between $d$ and the codimension of $Z(P)$? Thanks!</p>
Ted Shifrin
71,348
<p><strong>HINT</strong>: Consider, for example, the function $$f_{n+1}(u_1,\dots,u_n) = P(u_1,\dots,u_n,1)$$ defined for $[u_1,\dots,u_n,1]\in U_{n+1} \cong \Bbb R^n$. Apply the regular value theorem (or constant rank theorem) to the functions $f_i$, $i=1,\dots,n+1$. Note that every $x\in Z(P)$ belongs to at least one $U_i$.</p>
1,737,835
<p>You have decided to buy candy for the trick-or-treaters. You have estimated there will be 200 children coming to your door, and plan to give each child three pieces of candy. You have decided to offer Twix and 3 Musketeers. The cost of buying these two candies is<br> $$C= 5T^2 + 2TM + 3M^2 + 800$$ where $T$ is the number of Twix and $M$ is the number of 3 Musketeers. How many of each candy should you get to minimize the cost? </p>
Christian Blatter
1,303
<p><strong>Update:</strong> I had forgotten an essential point; see the comments to my answer.</p> <p>Let $B\subset{\mathbb R}^n$ be any compact convex set which is symmetric about the origin: $-B=B$, and has nonempty interior. Then $$\|x\|:=\inf\{\lambda\geq0 \mid x\in\lambda B\}$$ is a norm on ${\mathbb R}^n$ having $B$ as closed unit ball.</p>
2,348,641
<p><span class="math-container">$f$</span> is a periodic continuous function of period <span class="math-container">$T &gt; 0 $</span>.</p> <ol> <li><p>Let <span class="math-container">$ a &gt; 0$</span>, prove that: <span class="math-container">$ \int_{0}^{+ \infty} f(t)e^{-at}dt $</span> is absolutely convergent.</p> </li> <li><p>Find a positive constant <span class="math-container">$ \lambda $</span> dependent only on <span class="math-container">$a$</span> and <span class="math-container">$T$</span> such that: <span class="math-container">$$ \int_{0}^{+ \infty} f(t)e^{-at}dt = \lambda \int_{0}^{T}f(t)e^{-at}dt $$</span></p> </li> <li><p>Find the expression of <span class="math-container">$ \mu $</span> that is dependent only on <span class="math-container">$ a$</span> and <span class="math-container">$T$</span> such that:</p> </li> </ol> <p><span class="math-container">$$\left|\int_{0}^{T}f(t)e^{-at}\,dt\right| \leq \mu\sqrt{\int_{0}^{T}f^{2}(t)e^{-at}\,dt} $$</span></p> <ol start="4"> <li>Find <span class="math-container">$ \lim_{a \to + \infty } \int_{0}^{+\infty} f(t) e^{-at}\, dt$</span>.</li> </ol> <hr /> <ol> <li><p><span class="math-container">$f$</span> is continuous and <span class="math-container">$T$</span>-periodic, hence bounded: there is an <span class="math-container">$M \in \mathbb{R}$</span> such that:</p> <p><span class="math-container">$|f(t)| &lt; M \implies |f(t) e^{-at}| &lt; Me^{-at} $</span></p> </li> </ol> <p><span class="math-container">$ \int_{0}^{+ \infty}Me^{-at}\,dt$</span> is convergent; by the comparison test, so is <span class="math-container">$\int_{0}^{+ \infty}|f(t) e^{-at}|\,dt$</span>, which proves that <span class="math-container">$ \int_{0}^{+ \infty} f(t)e^{-at}dt $</span> is absolutely convergent. Is this correct?</p> <p>I also need help with questions 2, 3 and 4.</p> <p>Thank you for your help.</p>
RRL
148,510
<p>Hint 2: </p> <p>Using the substitution $t = u + (k-1)T$,</p> <p>$$\int_0^\infty f(t)e^{-at} \, dt = \sum_{k=1}^\infty\int_{(k-1)T}^{kT}f(t)e^{-at} \, dt = \sum_{k=1}^\infty \int_{0}^{T}f(u+(k-1)T)e^{-au}e^{-a(k-1)T} \, du \\ = \sum_{k=1}^\infty e^{-a(k-1)T}\int_{0}^{T}f(u)e^{-au} \, du $$</p> <p>Hint 3:</p> <p>Use the Cauchy-Schwarz inequality on the RHS of </p> <p>$$\int_0^\infty f(t)e^{-at} \, dt = \int_0^\infty f(t)e^{-at/2}e^{-at/2} \, dt $$</p>
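Summing the geometric series in Hint 2 gives $\lambda = \frac{1}{1 - e^{-aT}}$, which can be sanity-checked numerically (a sketch with arbitrary choices of $f$, $a$, $T$, using crude trapezoidal integration):

```python
import math

a, T = 0.5, 1.0
f = lambda t: math.cos(2 * math.pi * t / T)   # any continuous T-periodic f

def trap(g, lo, hi, n):
    """Composite trapezoidal rule."""
    h = (hi - lo) / n
    s = 0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, n))
    return s * h

integrand = lambda t: f(t) * math.exp(-a * t)
I_T = trap(integrand, 0.0, T, 20_000)
I_inf = trap(integrand, 0.0, 80.0, 400_000)   # tail beyond 80 is ~e^{-40}
lam = 1.0 / (1.0 - math.exp(-a * T))
assert abs(I_inf - lam * I_T) < 1e-4
```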
38,480
<p>(Before reading, I apologize for my poor English ability.)</p> <p>I have enjoyed calculating some symbolic integrals as a hobby, and this has been one of the main sources of my interest in the vast world of mathematics. For instance, the integral below $$ \int_0^{\frac{\pi}{2}} \arctan (1 - \sin^2 x \; \cos^2 x) \,\mathrm dx = \pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right). $$ is what I succeeded in calculating today.</p> <p>But recently, as I learn more advanced fields, it seems to me that symbolic integration is of little use in most fields of mathematics. For example, in analysis, where integration first stems from, people now seem to be interested only in performing numerical integration. One integrates in order to find the evolution of a compact hypersurface governed by mean curvature flow, to calculate a probabilistic outcome described by an Ito integral, or something like that. Numerical calculation is quite adequate for those problems, but it seems that few people are interested in finding an exact value for a symbolic integral.</p> <p>So this is my question: Is it true that problems related to symbolic integration have lost their attraction nowadays? Is there no field that seriously deals with symbolic calculation (including integration and summation) anymore?</p>
Mitch
1,919
<p>If you're talking about practical engineering applications, then really only numerical approximations are used (and studied in computer science as 'numerical analysis' or more recently 'scientific computing'). </p> <p>As to an academic mathematical field nowadays that deals with symbolic integration, first some perspective. Newton and Leibniz invented integral calculus in the late 1600's, and it was popularized (as much as you can say that) in the 1700's. Some basic symbolic integration even occurred (without that name and system) before then. So let's just say there's been at least 300 years of work there.</p> <p>Also, there's more to inverting derivatives than just integrals. Solving systems of partial differential equations seems to be the big thing (both numerically and symbolically), and has been for almost as long as simple single-variable integrals.</p> <p>That said, there is a small academic group of people working in 'symbolic computation' (with their own journals), and one subarea is symbolic integration. There are proofs of impossibility (i.e. proving that given certain restrictions there is no 'closed form' for a particular integral), and there are algorithms for computing integrals given other certain restrictions (the Risch algorithm). The latter are often implemented in computer algebra packages (Mathematica, Maple, etc.).</p> <p>There is surely room for solving particular integrals (though in the AMM's Problems section there don't seem to be many integrals) and for finding patterns. I'd look at those journals to see what particular interest there is for integrals.</p>
38,480
<p>(Before reading, I apologize for my poor English ability.)</p> <p>I have enjoyed calculating some symbolic integrals as a hobby, and this has been one of the main sources of my interest in the vast world of mathematics. For instance, the integral below $$ \int_0^{\frac{\pi}{2}} \arctan (1 - \sin^2 x \; \cos^2 x) \,\mathrm dx = \pi \left( \frac{\pi}{4} - \arctan \sqrt{\frac{\sqrt{2} - 1}{2}} \right). $$ is what I succeeded in calculating today.</p> <p>But recently, as I learn more advanced fields, it seems to me that symbolic integration is of little use in most fields of mathematics. For example, in analysis, where integration first stems from, people now seem to be interested only in performing numerical integration. One integrates in order to find the evolution of a compact hypersurface governed by mean curvature flow, to calculate a probabilistic outcome described by an Ito integral, or something like that. Numerical calculation is quite adequate for those problems, but it seems that few people are interested in finding an exact value for a symbolic integral.</p> <p>So this is my question: Is it true that problems related to symbolic integration have lost their attraction nowadays? Is there no field that seriously deals with symbolic calculation (including integration and summation) anymore?</p>
JWalter
791,148
<p>Symbolic integration is indeed becoming less popular, and most researchers prefer numerical integration. However, the significance of symbolic integration should not be underestimated. This can be shown by using the integration formula: <span class="math-container">$$\int_0^1 f(x)dx = 2\sum_{m=1}^M{\sum_{n=0}^\infty{\frac{1}{{\left(2M\right)}^{2n+1}\left({2n+1}\right)!}\left.f^{(2n)}\left(x\right)\right|_{x=\frac{m-1/2}{M}}}}\,\,,$$</span> where <span class="math-container">$\left.f^{(2n)}(x)\right|_{x=\frac{m-1/2}{M}}$</span> denotes the <span class="math-container">$2n$</span>-th derivative at the points <span class="math-container">$x=\frac{m-1/2}{M}$</span>. Once the integral is expanded as a series, we can use it either numerically or analytically (i.e. symbolically). It may be tedious to find <span class="math-container">$2n$</span>-th derivatives by hand. However, with powerful packages supporting symbolic programming like Maple, Mathematica or MATLAB this can be done easily. For example, by taking <span class="math-container">$f(x) = \frac{\theta}{1 + \theta^2 x^2}$</span>, even at the smallest <span class="math-container">$M = 1$</span> we can find a rapidly convergent series for the arctangent function: <span class="math-container">$$\tan^{-1}(\theta)=i\sum_{n=1}^{\infty}\frac{1}{2n-1}\left(\frac{1}{\left(1+2i/\theta\right)^{2n-1}}-\frac{1}{\left(1-2i/\theta\right)^{2n-1}}\right),$$</span> where <span class="math-container">$i=\sqrt{-1}$</span>. This example shows that symbolic integration may be highly efficient in many numerical applications.</p>
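The rapid convergence of this arctangent series can be verified directly (a sketch, not part of the answer; for moderate $\theta$ the terms decay geometrically, e.g. like $5^{-n}$ at $\theta = 1$):

```python
import math

def arctan_series(theta, terms=30):
    """Partial sum of the series above, using complex arithmetic."""
    s = 0.0 + 0.0j
    for n in range(1, terms + 1):
        k = 2 * n - 1
        s += (1.0 / k) * (1.0 / (1 + 2j / theta) ** k
                          - 1.0 / (1 - 2j / theta) ** k)
    return (1j * s).real

for theta in (0.25, 0.5, 1.0):
    assert abs(arctan_series(theta) - math.atan(theta)) < 1e-12
```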
2,232,677
<p>A parallellogram $ABCD$ is given, where $E$ is the midpoint of $BC$ and $F$ is a point on $AD$ so that $|FD| = 3|AF|$. $G$ is the point where $BF$ and $AE$ intersects.</p> <p>Express the vector $AG$ in terms of vectors $AB$ and $AD$.</p> <p>My solution to the problem is the following: Impose an <em>affine transformation</em> so that $ABCD$ becomes a unit square with point $A$ residing on origo. Then $BF$ is on the line $y = \frac{1}{4} - \frac{1}{4}x$ and $AE$ on $y = \frac{1}{2}x$. The lines' intersection gives me the required coefficients to express $AG$ in terms of $AB$ and $AD$. </p> <p>My question is how would you solve this problem without the affine transformation? The reason I'm asking is because this problem was given at an early stage of the course before affine transformations was introduced. So I want to know if there is a simpler or "more intuitive" way to solve it which I haven't learned.</p>
Misha Lavrov
383,078
<p>This is straightforward once you show that $\overrightarrow{AG} = \frac13 \overrightarrow{AE}$. Then we have $$\overrightarrow{AG} = \frac13(\frac12\overrightarrow{AB} + \frac12\overrightarrow{AC}) = \frac16 \overrightarrow{AB} + \frac16 \overrightarrow{AC} = \frac16 \overrightarrow{AB} + \frac16 (\overrightarrow{AB} + \overrightarrow{AD}) = \frac13 \overrightarrow{AB} + \frac16 \overrightarrow{AD}.$$</p> <p>We can get here in a few ways, including but not limited to:</p> <ul> <li>By similar triangles: if you choose $H$ on $AD$ such that $EH \parallel BF$, then $\triangle AFG \sim \triangle AHE$. We can get the coefficient of similarity from the observation that $$AH = AF + FH = AF + BE = \frac14 AD + \frac12 AD = \frac34 AD,$$ while $AF = \frac14 AD$, so the coefficient of similarity is $\frac13$, and $AG = \frac13 AE$.</li> <li>By the area method. The ratio $AG : GE$ is the ratio $S_{ABF} : S_{BEF}$. Since \begin{align} S_{ABF} &amp;= \frac14 S_{ABD} = \frac18 S_{ABCD} \\ S_{BEF} &amp;= \frac12 S_{BCF} = \frac12 S_{BCA} = \frac14 S_{ABCD}\end{align} we get $AG : GE = \frac18 : \frac14 = 1 : 2$.</li> <li>It is evident from the diagram below once we draw some more lines parallel to $BF$:</li> </ul> <p><a href="https://i.stack.imgur.com/fAUks.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fAUks.png" alt="parallelogram"></a></p> <p>The similar triangles method (or some equivalent vector computation) is probably what you were intended to use, but I prefer the area approach for being systematic. We have $E$ in terms of $B$ and $C$, $F$ in terms of $A$ and $D$, and $G$ in terms of $BF$ and $AE$, so we just undo these constructions until we have expressed the desired ratio $AG : GE$ in terms of $A$, $B$, $C$, and $D$.</p>
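(My addition, not part of the answer above.) The coefficients can be confirmed with exact rational coordinates in the unit-square model the asker mentioned:

```python
from fractions import Fraction as F

A = (F(0), F(0)); B = (F(1), F(0)); D = (F(0), F(1))
C = (B[0] + D[0], B[1] + D[1])               # parallelogram: C = B + D
E = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)   # E = midpoint of BC
Fp = (D[0] / 4, D[1] / 4)                    # |FD| = 3|AF|  =>  F = A + (1/4)AD

# G = s*E lies on line AE and must equal B + t*(Fp - B) on line BF.
# Solve  s*E - t*(Fp - B) = B  for s by Cramer's rule, exactly.
a11, a12 = E[0], B[0] - Fp[0]
a21, a22 = E[1], B[1] - Fp[1]
det = a11 * a22 - a12 * a21
s = (B[0] * a22 - a12 * B[1]) / det
G = (s * E[0], s * E[1])
print(G)   # expect (1/3, 1/6), i.e. AG = (1/3)AB + (1/6)AD
```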
3,933,135
<p>Let <span class="math-container">$m$</span> and <span class="math-container">$n$</span> be positive integers.</p> <blockquote> <p>Can it be shown that, for every <span class="math-container">$m\ge 5$</span>,</p> <p><span class="math-container">$$\sum_{i=1}^ni^m-m^i&gt;0\iff n=2,3,...,m$$</span></p> </blockquote> <p>Example: let <span class="math-container">$m=5$</span> and choose any <span class="math-container">$n$</span> between <span class="math-container">$2$</span> and <span class="math-container">$5$</span>; for <span class="math-container">$n=2$</span> we get <span class="math-container">$\sum_{i=1}^2i^5-5^i=(1^5-5^1)+(2^5-5^2)=-4+7=3&gt;0$</span></p> <p>Moreover, by observation, <span class="math-container">$\sum_{i=1}^ni^m-m^i=0$</span> holds only for <span class="math-container">$(m,n)=\{(1,1),(2,3),(2,4)\}$</span></p> <p>Pari/GP script:</p> <pre><code>for(m=5,50,for(n=1,50,if(sum(i=1,n,i^m-m^i)&gt;0,print([m,n])))) </code></pre>
Neat Math
843,178
<p><strong>Lemma 1:</strong> <span class="math-container">$2^m&gt; m^2+m-1, \forall m\ge 5$</span>.</p> <p>When <span class="math-container">$m=5, 2^5=32&gt;29=5^2+5-1$</span>. If <span class="math-container">$2^m&gt;m^2+m-1$</span> then <span class="math-container">$$2^{m+1}-[(m+1)^2+(m+1)-1]\\&gt;2(m^2+m-1)-(m+1)^2-(m+1)+1 = m(m-1)-3 &gt;0.$$</span></p> <p><strong>Lemma 2:</strong> <span class="math-container">$i^j &lt; j^i, \forall i&gt;j&gt;e.$</span></p> <p>We check the first derivative of the function <span class="math-container">$f(x)=x^\frac 1x$</span>: <span class="math-container">$$f'(x)=x^{\frac 1x -2}(1-\ln x) &lt; 0, \forall x&gt;e.$$</span></p> <p><strong>First we prove <span class="math-container">$\Leftarrow$</span>.</strong></p> <p>From the lemmas <span class="math-container">$$\sum_{i=1}^n (i^m-m^i)=1+2^m-m-m^2+\sum_{i=3}^n(i^m-m^i)&gt;0.$$</span></p> <p><strong>Next we prove <span class="math-container">$\implies$</span>.</strong></p> <p>If <span class="math-container">$n=1$</span>, it's trivial.</p> <p>If <span class="math-container">$n&gt;m$</span>, <span class="math-container">$$\sum_{i=1}^n (i^m-m^i)=\sum_{i=1}^{m+1} (i^m-m^i) + \sum_{i&gt;m+1} (i^m-m^i) \le \sum_{i=1}^{m+1} (i^m-m^i) \text{ via Lemma 2}$$</span></p> <p>Note that <span class="math-container">$e^{-\frac km} \ge 1-\frac km \implies \left(1-\frac km \right)^m \le e^{-k}, k=0, 1, \cdots, m-1$</span></p> <p>Hence <span class="math-container">$$ \sum_{i=1}^{m+1} (i^m-m^i) = \sum_{i=1}^{m+1} i^m - \sum_{i=1}^{m+1} m^i = m^m\left(\left(1+\frac 1m\right)^m + \sum_{k=0}^{m-1} \left(1-\frac km\right)^m \right) - \frac{m^{m+2} - m}{m-1}\\ &lt; m^m \left(e+\sum_{k=0}^\infty e^{-k} - \frac{m^2-m^{1-m}}{m-1}\right) &lt; m^m \left(e+\frac{1}{1-e^{-1}} - m\right) \approx m^m(4.3003-m) &lt;0.\blacksquare $$</span></p>
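(A supplement to the answer above.) The claimed equivalence is cheap to spot-check with exact integer arithmetic; the ranges m = 5..15 and n ≤ 3m below are arbitrary choices:

```python
def positive_ns(m, n_max):
    """Return the set of n <= n_max with sum_{i=1}^n (i**m - m**i) > 0."""
    total, hits = 0, set()
    for n in range(1, n_max + 1):
        total += n**m - m**n          # exact big-integer arithmetic
        if total > 0:
            hits.add(n)
    return hits

for m in range(5, 16):
    # the sum should be positive exactly for n = 2, ..., m
    assert positive_ns(m, 3 * m) == set(range(2, m + 1)), m
print("checked m = 5..15")
```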
1,575,920
<p>The global error of $\int f(x) \mathrm{d}x$, between two $x$-values by the trapezoidal rule is $-(1/12)h^3f''(ξ)$</p> <p>a) $f(x) = x^3, x := [0.2,0.5]$</p> <p>Find the value for $ξ$. Not really sure where to start with this problem. We never did any examples in class.</p>
abel
9,252
<p>you have $$\frac14\left(0.5^4 - 0.2^4\right)=\int_{0.2}^{0.5}x^3 \,dx = 0.3\left(\frac12 0.2^3 + \frac12 0.5^3\right) -\frac1{12}(0.3^3) 6\xi \tag 1 $$</p> <p>equation $(1)$ determines what $\xi$ is in this case.</p>
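(My addition.) Solving equation (1) for ξ numerically — with f''(ξ) = 6ξ the arithmetic gives ξ = 0.35, which happens to be the midpoint of [0.2, 0.5]:

```python
h = 0.5 - 0.2
exact = (0.5**4 - 0.2**4) / 4          # ∫_{0.2}^{0.5} x^3 dx
trap = h * (0.2**3 + 0.5**3) / 2       # one-panel trapezoidal estimate
# exact = trap - (1/12) h^3 f''(xi)  with  f''(xi) = 6*xi, so:
xi = (trap - exact) / (h**3 / 12 * 6)
print(xi)  # 0.35
```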
309,841
<p>Let $m$ be a positive integer. Prove that the dihedral group $D_{4m+2}$ of order $8m+4$ contains a subgroup of order 4.</p> <p>I'm fairly stumped as to where to start with this one; any input would be helpful. Thanks</p>
Haskell Curry
39,362
<p>The concept of tensor products of vector spaces is required for the description of <a href="http://en.wikipedia.org/wiki/General_relativity" rel="noreferrer">General Relativity</a>. For example, the metric tensor $ \mathbf{g} $ of a <a href="http://en.wikipedia.org/wiki/Pseudo-Riemannian_manifold" rel="noreferrer">pseudo-Riemannian manifold</a> $ (M,\mathbf{g}) $ can be viewed as a ‘disjoint collection’ of tensor products. In terms of a local-coordinate system $ (x^{1},\ldots,x^{n}) $, we can express $ \mathbf{g} $ as $ g_{ij} \cdot d{x^{i}} \otimes d{x^{j}} $, where $ d{x^{i}} $ and $ d{x^{j}} $ are <a href="http://en.wikipedia.org/wiki/Differential_form" rel="noreferrer">differential $ 1 $-forms</a> and $ g_{ij} $ is just a scalar coefficient. At each $ p \in M $, what $ \mathbf{g} $ does is to take two tangent vectors at $ p $ as input and produce a scalar in $ \mathbb{R} $ as output, all in a linear fashion. In other words, at each point $ p $, we can view $ \mathbf{g} = g_{ij} \cdot d{x^{i}} \otimes d{x^{j}} $ as a bilinear mapping from $ {T_{p}}(M) \times {T_{p}}(M) $ to $ \mathbb{R} $, where $ {T_{p}}(M) $ denotes the <a href="http://en.wikipedia.org/wiki/Tangent_space" rel="noreferrer">tangent space</a> at $ p $.</p> <p>The metric tensor is the very object that encodes geometrical information about the manifold $ M $. All other useful quantities are constructed from it, such as the <a href="http://en.wikipedia.org/wiki/Riemann_curvature_tensor" rel="noreferrer">Riemann curvature tensor</a> and the <a href="http://en.wikipedia.org/wiki/Ricci_curvature" rel="noreferrer">Ricci curvature tensor</a> (of course, we still need something called an ‘<a href="http://en.wikipedia.org/wiki/Affine_connection" rel="noreferrer">affine connection</a>’ in order to define these tensors). 
In General Relativity, the metric tensor encodes the curvature of spacetime, which can and does affect the performance of high-precision instrumentation such as the <a href="http://en.wikipedia.org/wiki/Global_Positioning_System" rel="noreferrer">Global Positioning System (GPS)</a>.</p> <p>For vector spaces over a field $ \mathbb{K} $, tensor products are usually defined in terms of multilinear mappings. You may have seen the more abstract definition using a system of generators, but it can be shown that these two definitions are the same. From the categorical point of view, both constructions satisfy the same <a href="http://en.wikipedia.org/wiki/Universal_property" rel="noreferrer">universal property</a> (this property is explicated in the <a href="http://en.wikipedia.org/wiki/Tensor_product" rel="noreferrer">Wikipedia article on tensor products</a>), so they must be isomorphic in the category of $ \mathbb{K} $-vector spaces.</p> <p>You need to understand that <strong>tensor products are algebraic objects</strong>, while <strong><a href="http://en.wikipedia.org/wiki/Tensor_field" rel="noreferrer">tensor fields</a> are geometrical objects</strong>. Even more fundamental than the concept of a tensor field is that of a <a href="http://en.wikipedia.org/wiki/Tensor_bundle" rel="noreferrer">tensor bundle</a>. You can think of a tensor bundle as a collection of tensor products attached to individual points of a manifold in a consistent manner (by ‘consistent’, I mean that the axioms of a <a href="http://en.wikipedia.org/wiki/Vector_bundle" rel="noreferrer">vector bundle</a> must be satisfied). How we relate <em>the tensor product attached to a point</em> to <em>the tensor product attached to another point</em> is what the subject of differential geometry is all about. 
On the other hand, what we do with the tensor product attached to a fixed single point is pure algebra.</p> <p>Any discussion of spacetime on a global scale requires General Relativity, which takes into account the curvature of spacetime. However, in a small neighborhood of any fixed point in spacetime, one always has what is called a ‘<a href="http://en.wikipedia.org/wiki/Local_reference_frame" rel="noreferrer">local inertial frame</a>’. In such a frame, gravitational effects can be transformed away, in which case, General Relativity simply reduces to <a href="http://en.wikipedia.org/wiki/Special_relativity" rel="noreferrer">Special Relativity</a>, which some mathematicians view as nothing more than just advanced linear algebra. This illustration reinforces the idea that ‘global is geometry but local is algebra’.</p>
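(An illustrative addition, not part of the original answer.) At a single fixed point, the coordinate expression $g_{ij}\, dx^i \otimes dx^j$ is just a bilinear form on the tangent space; a toy sketch using the flat Minkowski metric as the $g_{ij}$:

```python
# g = g_ij dx^i ⊗ dx^j at one point: a bilinear map (u, v) -> sum_ij g_ij u^i v^j
G = [[-1, 0, 0, 0],   # flat Minkowski metric, signature (-,+,+,+)
     [0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]

def g(u, v):
    return sum(G[i][j] * u[i] * v[j] for i in range(4) for j in range(4))

u = [1, 0.5, 0, 0]    # a timelike tangent vector: g(u, u) < 0
print(g(u, u))        # -1 + 0.25 = -0.75
```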
145,868
<p>I have a data set like below:</p> <pre><code>368 150 0.31895877 369 150 0.32141133 370 150 0.3190167 371 150 0.33890757 372 150 0.3649395 373 150 0.37585051 374 150 0.37916244 375 150 0.37619812 376 150 0.37287654 377 150 0.37249031 378 150 0.37102263 379 150 0.34839918 380 150 No Data 381 150 No Data 382 150 No Data 383 150 No Data 384 150 No Data 385 150 No Data 386 150 No Data 387 150 No Data 388 150 No Data 389 150 No Data 390 150 No Data 391 150 No Data 392 150 No Data </code></pre> <p>This data set sometimes contains "No Data" and I want to ListPointPlot3D with this data.</p> <p>Therefore, I imported this data and plotted it:</p> <pre><code>data=Import["datafile.txt","Table"]; ListPointPlot3D[data] </code></pre> <p>As I expected, I got error about the invalid list.</p> <p>How can I plot the graph with these No Data rows eliminated?</p> <hr> <p>This is a sample file which I want to plot with.</p> <p><a href="https://www.taiki-home.net/nextcloud/index.php/s/iyy60PCz5niXBTW" rel="nofollow noreferrer">datafile.txt</a></p>
JimB
19,758
<p>Because your distribution is (at least) bimodal and because the probability density is greater than zero at zero, you might consider using nonparametric density estimation rather than maybe a poor approximation with a mixture of a small number of distributions. (However, if it works with a mixture distribution, then that's great.)</p> <p>You've got lots and lots of data and you don't need to force a description with a mixture of standard distributions.</p> <p>Many times just using <code>SmoothKernelDistribution</code> on the data gets you everything you need. In this case because the density is definitely not zero at the border of zero, one needs to do a bit of (commonly used) trickery but one still gets a legitimate and appropriate estimate of the probability density. (See <a href="https://ned.ipac.caltech.edu/level5/March02/Silverman/paper.pdf" rel="noreferrer">Silverman, Page 20</a>).</p> <p>First one creates a dataset that includes a reflection of the data:</p> <pre><code>data2 = Flatten[{data, -data}]; </code></pre> <p>Then the <code>SmoothKernelDistribution</code> function is used followed by truncating the resulting distribution to be between zero and $\infty$.</p> <pre><code>skd2 = SmoothKernelDistribution[data2]; skd = TruncatedDistribution[{0, ∞}, skd2]; </code></pre> <p>One then has access to the functions associated with a probability distribution: PDF, CDF, Expectation, etc.</p> <pre><code>Plot[PDF[skd, x], {x, 0, 0.4}] </code></pre> <p><a href="https://i.stack.imgur.com/r34xu.png" rel="noreferrer"><img src="https://i.stack.imgur.com/r34xu.png" alt="Nonparametric density estimate"></a></p> <p>To repeat: when you've got lots of data you are not restricted to standard distributions. Really.</p>
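(A toy re-implementation, not part of the answer above, which uses Mathematica's built-ins.) The reflection trick can be sketched with a hand-rolled Gaussian KDE; the sample, the rule-of-thumb bandwidth, and the grid below are arbitrary choices:

```python
import math, random

random.seed(0)
data = [abs(random.gauss(0, 1)) for _ in range(1000)]   # toy positive-valued sample
refl = data + [-x for x in data]                        # reflect about 0

n = len(refl)
sd = math.sqrt(sum(x * x for x in refl) / n)            # mean of refl is 0 by construction
h = 1.06 * sd * n ** (-1 / 5)                           # rule-of-thumb bandwidth

def kde(x):
    # Gaussian KDE of the reflected sample; the estimate is symmetric about 0,
    # so truncating to [0, inf) just doubles the density (TruncatedDistribution
    # plays the same role in the Mathematica code above).
    s = sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in refl)
    return 2 * s / (n * h * math.sqrt(2 * math.pi))

xs = [6 * i / 300 for i in range(301)]
ys = [kde(x) for x in xs]
area = sum((y0 + y1) / 2 * (xs[1] - xs[0]) for y0, y1 in zip(ys, ys[1:]))
print(round(ys[0], 3), round(area, 3))   # density at 0 is clearly nonzero; total mass ≈ 1
```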
1,736,360
<p>Using the binomial expansion, find the taylor series of the function $$f(z) = \frac{1}{(1+z)^3}$$ and find its radius of convergence.</p> <hr> <p>The solution is</p> <p>$$f(z) = \sum_{n=0}^\infty (-1)^n\frac{(n+1)(n+2)}2z^n$$</p> <p>How do we find this? I thought the binomial expansion was</p> <p>$$(1+z)^n = \sum_{r=0}^n \binom{n}{r}z^r$$</p> <p>but here $n=-3$ so how do we progress?</p>
Bernard
202,857
<p>The binomial formula is still valid for exponents $\alpha$ which are not natural numbers, but the binomial coefficients $$\binom\alpha k=\frac{\alpha(\alpha -1)\dots(\alpha -k+1)}{1\cdot 2\dotsm k}$$ are never $0$, so the expansion is an infinite series.</p> <p>Here, with your notations, you get: $$\binom{-3}n=\frac{(-3)(-4)\dotsm(-(n+2))}{1\cdot2\dotsm n}=(-1)^n\frac{(n+1)(n+2)}2.$$</p> <p><strong>Alternative way:</strong> Start from $\;\dfrac1{1+z}=\sum_{n\ge 0}(-1)^nz^n$, and differentiate twice term by term. The radius of convergence does not change (it equals $1$).</p>
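(My addition.) The coefficients can be verified by exact integer convolution: multiplying the claimed series by $(1+z)^3$ must give the constant polynomial $1$:

```python
N = 40
c = [(-1)**n * (n + 1) * (n + 2) // 2 for n in range(N)]   # claimed Taylor coefficients
cube = [1, 3, 3, 1]                                        # coefficients of (1+z)^3

# convolve, keeping only degrees < N - 3 so truncation effects cannot creep in
prod = [sum(cube[k] * c[m - k] for k in range(4) if 0 <= m - k < N)
        for m in range(N - 3)]
print(prod[:6])   # [1, 0, 0, 0, 0, 0]
```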
58,920
<p>I have a csv that encodes the results of plugging certain parameters (say, <code>A</code> and <code>B</code>) into certain functions (say, <code>f</code> and <code>g</code>). Think matrix with (say, 4) columns and a million rows with the columns corresponding to <code>A, B, C = f(A, B)</code>, and <code>D = g(A, B)</code> respectively. I'd like to be able to use the dynamic visualization features of Mathematica, but this requires turning this data into discrete functions, and the columns into their own lists to use as domains in the Manipulate environment. </p> <p>I want to be able to do something like <code>Manipulate[Plot[function[A, B], {A, (list of values)}], {B, (list of values)}, {function, {f, g}}]</code></p> <p>I have been poring over the Help guide for hours and this seems to be such a basic thing to do that I feel crazy for not being able to find it. </p>
kglr
125
<p>Perhaps you can use <a href="http://reference.wolfram.com/mathematica/ref/DiscretePlot3D.html" rel="nofollow noreferrer">DiscretePlot3D</a></p> <pre><code>{alist, blist} = RandomReal[{-2 Pi, 2 Pi}, {2, 20}]; Manipulate[DiscretePlot3D[f[a + b], {a, alist}, {b, blist}, PlotStyle -&gt; Hue[RandomReal[]], ExtentSize -&gt; Scaled[.5]], {f, {Sin, Cos}}] </code></pre> <p><img src="https://i.stack.imgur.com/D9RWL.png" alt="enter image description here"><img src="https://i.stack.imgur.com/QJLWC.png" alt="enter image description here"></p>
1,160,741
<p>Prove that continuous functions on $\mathbb{R}$ with period $1$ can be uniformly approximated by trigonometric polynomials, i.e. functions in the linear span of $\{e^{−2πinx}\}_{n \in \mathbb{Z}}$. Explain why $n \in \mathbb{N}$ is not enough.</p> <p>I think I need to restrict the domain into a compact set and consider the interval $[0,1]$. Since the functions are periodic with period $1$ the approximation on $[0,1]$ can be translated to any point on $\mathbb{R}$. It is easy to show that linear span of $\{e^{−2πinx}\}_{n \in \mathbb{Z}}$ is an algebra and that this algebra has the constant function. </p> <p>How do I show that this algebra is closed? </p> <p>Why is $n \in \mathbb{N}$ is not enough?</p>
Berrick Caleb Fillmore
85,964
<p>Let us restrict the domain to the interval $ [0,1] $ and use some of the theory of inner-product spaces.</p> <p>One reason why having $ \mathbb{N} $ as the index set is not enough is because $ e_{1}: x \mapsto e^{2 \pi i x} $ is orthogonal to the set $ \{ e_{- n}: x \mapsto e^{- 2 \pi i n x} \}_{n \in \mathbb{N}_{0}} $. You can easily check this by computing $$ \forall n \in \mathbb{N}_{0}: \quad \langle e_{1},e_{- n} \rangle = \int_{0}^{1} \overline{{e_{1}}(x)} \cdot {e_{- n}}(x) ~ \mathrm{d}{x} = 0. $$ Now, for any $ N \in \mathbb{N}_{0} $ and $ c_{0},\ldots,c_{N} \in \mathbb{C} $, we have \begin{align*} \| e_{1} \|_{2}^{2} &amp; = \left| \left\langle e_{1},e_{1} - \sum_{n = 0}^{N} c_{n} e_{- n} \right\rangle \right| \\ &amp; \leq \| e_{1} \|_{2} \cdot \left\| e_{1} - \sum_{n = 0}^{N} c_{n} e_{- n} \right\|_{2} \quad (\text{By the Cauchy-Schwarz Inequality.}) \\ &amp; \leq \| e_{1} \|_{2} \cdot \left\| e_{1} - \sum_{n = 0}^{N} c_{n} e_{- n} \right\|_{\infty}. \end{align*} Hence, as $ \| e_{1} \|_{2} \neq 0 $, we obtain $$ \| e_{1} \|_{2} \leq \left\| e_{1} - \sum_{n = 0}^{N} c_{n} e_{- n} \right\|_{\infty}. $$ If we could approximate $ e_{1} $ uniformly by the linear span of $ \{ e_{- n} \}_{n \in \mathbb{N}_{0}} $, then the right-hand side of the inequality above could be made arbitrarily small, while the left-hand side remains as a non-zero constant. This is a contradiction.</p>
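(My addition.) The orthogonality relations used above are easy to confirm by quadrature; the midpoint rule is exact here up to rounding because the integrands are pure complex exponentials:

```python
import cmath

def inner(n, K=1024):
    # midpoint rule for <e_1, e_{-n}> = ∫_0^1 conj(e^{2πix}) e^{-2πinx} dx
    #                                 = ∫_0^1 e^{-2πi(1+n)x} dx
    return sum(cmath.exp(-2j * cmath.pi * (1 + n) * (k + 0.5) / K)
               for k in range(K)) / K

for n in range(8):
    assert abs(inner(n)) < 1e-10, n          # e_1 ⟂ e_{-n} for all n >= 0
print("inner(-1) =", inner(-1))              # <e_1, e_1> = 1, by contrast
```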
4,017,790
<p>We consider the equation <span class="math-container">$y^2-4y+4+x^2y+\arctan(x)=0$</span>, the exercice of my book say that the normal vector <span class="math-container">$n$</span> of the cartesian curve implicitally defined by the previous equation is <span class="math-container">$n=l(1,0)$</span> for <span class="math-container">$l \neq 0$</span>. I don't understand the reason of this affermation, because I think that if <span class="math-container">$y(x)$</span> is the implicit function obtained form the equation, the parametric equation of the curve is <span class="math-container">$(t, y(t))$</span>, so the first coordinate of the normal vector is zero. I would therefore expect that the normal vector is <span class="math-container">$n=l(0,1)$</span>. Is it right?</p>
JMoravitz
179,297
<p>The difference between <span class="math-container">$\left(\frac{1}{2}\right)^8$</span> versus <span class="math-container">$\left(\frac{1}{2}\right)^7$</span> depends on how we interpret the problem.</p> <p>Problem 1:</p> <ul> <li>We have two pizzas, each with eight pizza slices. Each time someone enters the room, they open one of the two boxes uniformly at random, independent of each other person's choices, and will take a slice if able and then close the lid, except when they have taken the last slice, in which case they throw out the box. We ask what the probability is that the first person to come in the room after a box has been thrown out will find a full eight slices of pizza in the remaining box.</li> </ul> <p>For this problem, let us label the two pizza boxes <span class="math-container">$H$</span> and <span class="math-container">$T$</span> and let the people make the decision which box to eat from based on a coin flip. Here, the two sequences <code>HHHHHHHH</code> and <code>TTTTTTTT</code> both result in the ninth person coming in and seeing one of the boxes missing and the remaining box having all of the slices of pizza in it, with a probability of <span class="math-container">$\left(\frac{1}{2}\right)^8 + \left(\frac{1}{2}\right)^8 = \left(\frac{1}{2}\right)^7$</span></p> <p>The calculation could have been made simpler, as your friend noted, by not caring which of the two boxes was chosen by the first person and then for the next seven all of them choosing the same box, yielding the same <span class="math-container">$\left(\frac{1}{2}\right)^7$</span></p> <hr /> <p>Problem 2:</p> <ul> <li>We have two pizzas, each with eight pizza slices. Each time someone enters the room, they open one of the two boxes uniformly at random, independent of each other person's choices, and will take a slice if able and then close the lid... <em>even in the case that they closed the lid on a now empty box</em>. 
We ask what the probability is that the first time someone comes into the room and opens an empty box (<em>that they did not cause to be empty themselves</em>), the remaining box has all eight slices still in it.</li> </ul> <p>For this problem, similarly to the last, we describe the process via a sequence of coin flips. Here, however, after the first box becomes empty, we require that the empty box be discovered before slices from the still full box are taken. In this case we require <em>nine</em> heads in a row or nine tails in a row. This occurs with probability <span class="math-container">$\left(\frac{1}{2}\right)^9 + \left(\frac{1}{2}\right)^9 = \left(\frac{1}{2}\right)^8$</span></p> <hr /> <p>Which of these was the intended problem? That is up to the person who asked the problem, but I interpret it as the first as per the included phrase &quot;<em>If all eight slices from a box are taken then the lid is not closed.</em>&quot; which implies to me that no random event needs to occur for the ninth person in the event that the first eight people all ate from the same box since the contents of the empty box are plainly visible due to the lid remaining open.</p> <p>As for why the linked Matchbox Problem gives the other answer, that is because it follows the style of problem 2 where we do not remove the newly emptied box once it becomes empty and we leave it until we attempt to retrieve a match from it and discover there are none to be had.</p>
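(My addition.) Both interpretations can be computed exactly by recursing over the pair of remaining slice counts with rational arithmetic:

```python
from fractions import Fraction
from functools import lru_cache

HALF = Fraction(1, 2)

@lru_cache(maxsize=None)
def p1(a, b):
    # Problem 1: prob. that when a box is first emptied (and thrown out),
    # the other box still holds all 8 slices.  Only the two all-one-side
    # paths succeed, each with probability (1/2)^8.
    if a == 0:
        return Fraction(b == 8)
    if b == 0:
        return Fraction(a == 8)
    return HALF * p1(a - 1, b) + HALF * p1(a, b - 1)

@lru_cache(maxsize=None)
def p2(a, b):
    # Problem 2: prob. that the first person to *open* an empty box finds the
    # other box still full.  Now a ninth matching choice is needed: (1/2)^9 each.
    left = Fraction(b == 8) if a == 0 else p2(a - 1, b)
    right = Fraction(a == 8) if b == 0 else p2(a, b - 1)
    return HALF * left + HALF * right

print(p1(8, 8), p2(8, 8))   # 1/128 and 1/256, i.e. (1/2)^7 and (1/2)^8
```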
962,301
<p>$$(\sqrt 2-\sqrt 1)+(\sqrt 3-\sqrt 2)+(\sqrt 4-\sqrt 3)+(\sqrt 5-\sqrt 4)…$$</p> <p>I have found partial sums equal to natural numbers. The first 3 addends sum to 1. The first 8 sum to 2. The first 15 sum to 3. When the minuend in an addend is the square root of a perfect square, the partial sum is a natural number. So I believe this series to be divergent.</p> <p>Am I right? Have I used correct terminology? How would this be expressed using sigma notation? Is there a proof that this series diverges?</p>
Stephen Quan
170,354
<p>@Jasper Joy is indeed right. It's a telescoping series whose partial sums eventually only have a fixed number of terms after cancellation:</p> <p>$$(\sqrt{n+1} - \sqrt n) + ... + (\sqrt 4 - \sqrt 3) + (\sqrt 3 - \sqrt 2) + (\sqrt 2 - \sqrt 1)$$ $$= \sqrt{n+1} + (-\sqrt n + \sqrt n) + ... + (-\sqrt 3 + \sqrt 3) + (-\sqrt 2 + \sqrt 2) - \sqrt 1$$ $$= \sqrt{n+1} - 1$$</p> <p>In series sigma notation we can summarize this as:</p> <p>$$\sum_{i=1}^{n} (\sqrt{i+1}-\sqrt{i}) = {\sqrt{n+1} - 1}$$</p> <p>As an infinite sum it diverges to $\infty$:</p> <p>$$\sum_{i=1}^{\infty} (\sqrt{i+1}-\sqrt{i}) = \lim_{n \to \infty}\left(\sqrt{n+1} - 1\right) = \infty$$</p>
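(My addition.) A quick numerical confirmation of the telescoped partial sum, including the natural-number sums the question observed:

```python
import math

def partial_sum(n):
    return sum(math.sqrt(i + 1) - math.sqrt(i) for i in range(1, n + 1))

# telescoping: the partial sum equals sqrt(n+1) - 1
for n in (3, 8, 15, 1000):
    assert math.isclose(partial_sum(n), math.sqrt(n + 1) - 1)

# the pattern from the question: n = k^2 - 1 terms sum to k - 1
print(partial_sum(3), partial_sum(8), partial_sum(15))   # ≈ 1, 2, 3
```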
52,194
<p>Assuming you have a set of nodes, how do you determine how many connections are needed to connect every node to every other node in the set?</p> <p>Example input and output:</p> <pre><code>In Out &lt;=1 0 2 1 3 3 4 6 5 10 6 15 </code></pre>
Jake Petroules
4,775
<p>Figured it out. The formula is:</p> <pre><code>x = n(n - 1) / 2 </code></pre>
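(My addition.) The formula agrees with a brute-force count of unordered node pairs:

```python
from itertools import combinations

def edges(n):
    return n * (n - 1) // 2   # complete-graph edge count

for n in range(11):
    brute = len(list(combinations(range(n), 2)))  # every unordered pair of nodes
    assert edges(n) == brute, (n, brute)

print([edges(n) for n in range(1, 7)])   # [0, 1, 3, 6, 10, 15] — matches the table
```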
2,679,525
<p>Find the $\gcd(x^3-6x^2+14x-15, x^3-8x^2+21x-18)$ over $\mathbb{Q}[x]$. Then find two polynomials $a(x),b(x) \in \mathbb{Q}[x]$ such that, $$a(x)(x^3-6x^2+14x-15) + b(x)(x^3-8x^2+21x-18)=\gcd(x^3-6x^2+14x-15, x^3-8x^2+21x-18)$$ </p> <p>I have managed to find, $$x^3-6x^2+14x-15=(x-3)(x^2-3x+5)$$ $$x^3-8x^2+21x-18=(x-3)(x-3)(x-2)$$</p> <p>Now since $x^2-3x+5$ is irreducible over $\mathbb{Q}[x]$ and so the greatest common divisor is $(x-3)$. Now to find $a(x)$ and $b(x)$ I have no clue how to do that. I have looked online and it seems there is extended euclidean algorithm for polynomials but I haven't formally learned it in my class yet, so I was wondering if there is another efficient way to find these polynomials. Any help is appreciated, thanks!</p>
Macavity
58,320
<p>You have the complete extended Euclid algorithm (which IMHO is the best way) given already in Will Jagy's post. </p> <p>Given that you already extracted $x-3$ as GCD, here is a perhaps quicker way (which may not generalise much). We simplify a bit by noting we seek <em>linear</em> $a(x), b(x)$, s.t.: </p> <p>$a(x)\color{blue}{(x^2-3x+5)} + b(x) \color{blue}{(x-3)(x-2)}=1$. Setting $x=2, 3$ we get $a(x) = \frac1{15}(-2x+9)$. </p> <p>Thus we are left with $\tfrac1{15}(-2x+9)\color{blue}{(x^2-3x+5)} + (px+q)\color{blue}{ (x-3)(x-2)}=1$<br> Comparing coeffs of the cubic term and the constant, $\implies p=\frac2{15},\; 3+6q = 1\implies q = -\frac13$.<br> So $$\tfrac1{15}(-2x+9)\color{blue}{(x^2-3x+5)} + \tfrac1{15}(2x-5)\color{blue}{(x-3)(x-2)}=1$$</p> <p>Multiply this by $\color{blue}{x-3}$ if you want to see the equation in the form $a(x) p_1(x) + b(x) p_2(x) = \gcd(p_1, p_2)$.</p>
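(My addition.) The final identity can be verified with exact rational polynomial arithmetic (coefficients listed in ascending degree):

```python
from fractions import Fraction as F

def mul(p, q):
    # polynomial product
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a_i in enumerate(p):
        for j, b_j in enumerate(q):
            r[i + j] += a_i * b_j
    return r

def add(p, q):
    r = [F(0)] * max(len(p), len(q))
    for i, a_i in enumerate(p):
        r[i] += a_i
    for i, b_i in enumerate(q):
        r[i] += b_i
    return r

a = [F(9, 15), F(-2, 15)]                  # (1/15)(-2x + 9)
b = [F(-5, 15), F(2, 15)]                  # (1/15)(2x - 5)
p1 = [F(5), F(-3), F(1)]                   # x^2 - 3x + 5
p2 = mul([F(-3), F(1)], [F(-2), F(1)])     # (x - 3)(x - 2)

combo = add(mul(a, p1), mul(b, p2))
print(combo)   # [1, 0, 0, 0] -> the constant polynomial 1
```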
2,866,389
<p>Let $\gamma\colon\left[0,1\right]\to\mathbb{R}^{2}$ be a curve such that $\gamma\in\mathcal{C}^{1}$ (continuously differentiable). I need to show that $\gamma\left(\left[0,1\right]\right)$ has measure zero, that is, to show there are generalized rectangles $R_{1},R_{2},\ldots$ such that $\gamma\left(\left[0,1\right]\right)\subseteq\bigcup_{i=1}^{\infty}R_{i}$ and $\sum_{i=1}^{\infty}V\left(R_{i}\right)&lt;\varepsilon$ where $V$ is the volume function.</p> <p>Now I know how to show that the graph of a continuous function $f\colon\mathbb{R}^{n}\to\mathbb{R}$ has measure zero by using the uniform continuity, but that argument doesn't work here.</p> <p>Any help?</p>
Angina Seng
436,618
<p>As $\gamma$ is $C^1$ on a compact set, it is Lipschitz. Let $K$ be a Lipschitz constant for $\gamma$. Then, for each $N\in \Bbb N$, $\gamma([0,1])$ can be covered by $N$ squares, each of sidelength $2K/N$. These squares have total area $N\cdot(2K/N)^2=4K^2/N$, which can be made smaller than any given $\varepsilon$ by taking $N$ large enough.</p>
47,655
<p>Let $X_n$ be the set of all words of length $3 n$ over the alphabet $\{A,B,C\}$ which contain each of the three letters <em>n</em> times.</p> <p>The number of elements of $X_n$ is $\frac{(3n)!}{(n!)^3}$, but why?</p> <p>I tried to split the problem into three smaller ones by only looking at the distribution of the single letters. For each one there should be $\binom{3n}{n} = \frac{(3n)!}{n!(2n)!}$ possibilities, but if this is correct, how do I combine them? According to Wolfram Alpha $\binom{3n}{n}^3 \neq \frac{(3n)!}{(n!)^3}$.</p> <p>Thanks in advance!</p>
Arturo Magidin
742
<p>Two ways of doing it. The first follows your idea, but corrects the error:</p> <ol> <li><p>You are right that there are $\binom{3n}{n}$ possible ways to place one of the letters. However, <em>once</em> you place one of the letters (say, you place all the $A$s), and you try to place the $B$s, you'll find that you do <em>not</em> have $3n$ positions into which the $B$s can go, because you've already used up $n$ of them. So in fact, there are only $2n$ locations remaining for the $B$'s. That means that after placing the $A$s, the number of ways in which you can place the $B$s is only $\binom{2n}{n}$. And once you place the $A$s and the $B$s, the location of the $C$s is forced. So in total you have $$\binom{3n}{n}\times\binom{2n}{n} = \frac{(3n)!}{n!(2n)!}\times\frac{(2n)!}{n!n!} = \frac{(3n)!}{(n!)^3}.$$</p></li> <li><p>Instead of thinking of it as three letters, think of it as $3n$ letters: $$\{A_1, A_2,\ldots,A_{n}, B_1, B_2,\ldots,B_n, C_1,\ldots, C_n\}.$$ How many ways can you arrange them? Well, $(3n)!$. Now, erase the subindices on the $A$s; that means that any way you shuffle the $A$s will correspond to the same word. So, how many different orderings of the $A$s are there? $n!$; so you've overcounted by a factor of $n!$. The same is true when you erase the indices of the $B$s, and again for the indices of the $C$s. So the total is $$\frac{(3n)!}{n!n!n!} = \frac{(3n)!}{(n!)^3}.$$</p></li> </ol>
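(My addition.) For small n the formula agrees with brute-force enumeration of all $3^{3n}$ words:

```python
from itertools import product
from math import factorial

def count_words(n):
    # keep the words containing each letter exactly n times
    return sum(1 for w in product("ABC", repeat=3 * n)
               if w.count("A") == w.count("B") == w.count("C") == n)

for n in (1, 2, 3):
    formula = factorial(3 * n) // factorial(n) ** 3
    assert count_words(n) == formula, n

print(count_words(2))   # 6!/(2!)^3 = 90
```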
47,655
<p>Let $X_n$ be the set of all words of length $3 n$ over the alphabet $\{A,B,C\}$ which contain each of the three letters <em>n</em> times.</p> <p>The number of elements of $X_n$ is $\frac{(3n)!}{(n!)^3}$, but why?</p> <p>I tried to split the problem into three smaller ones by only looking at the distribution of the single letters. For each one there should be $\binom{3n}{n} = \frac{(3n)!}{n!(2n)!}$ possibilities, but if this is correct, how do I combine them? According to Wolfram Alpha $\binom{3n}{n}^3 \neq \frac{(3n)!}{(n!)^3}$.</p> <p>Thanks in advance!</p>
trutheality
3,638
<p>If we number each occurrence of a letter, e.g. $A_1, A_2,...$ there are $3n$ numbered letters to rearrange, so $(3n)!$ permutations.</p> <p>But permutations of the same letters ($A_1A_2$ vs $A_2A_1$) are indistinguishable, so we have $n!$ permutations of the numbering of each kind of letter. Hence you divide by $n!$ three times, getting:</p> <p>$$ \frac{(3n)!}{(n!)^3} $$</p>
3,107,755
<p>When working with skew Schur functions, they can be defined as follows.</p> <p>Let <span class="math-container">$C^{\lambda}_{\mu, \nu}$</span> be the integers such that </p> <p><span class="math-container">$$s_{\mu}s_{\nu}=\sum_{\lambda} C^{\lambda}_{\mu, \nu} s_{\lambda}$$</span></p> <p>Then, we can define skew Schur functions as</p> <p><span class="math-container">$$s_{\lambda/\mu}= \sum_{\nu} C^{\lambda}_{\mu, \nu} s_{\nu}$$</span></p> <p>My question is, if we can calculate each of these <span class="math-container">$s_{\mu}$</span>, <span class="math-container">$s_{\nu}$</span>, and <span class="math-container">$s_{\lambda}$</span>, why can't we sometimes find <span class="math-container">$C^{\lambda}_{\mu, \nu}$</span>?</p> <p>My teacher told me that having a formula and having an explicit product are two very different things. He told me that these coefficients are not always easy to compute. And I have seen in some papers that it is equal to the number of tableaux that have shape <span class="math-container">$\lambda$</span> and satisfy certain conditions. I mean, they use other methods to compute such coefficients.</p> <p>Why does this happen if we know how to compute all but one object in this formula?</p>
enedil
126,823
<p>You can count the number of ways you can divide <span class="math-container">$11$</span> elements into <span class="math-container">$8$</span> groups (a group can't be empty and order is important), then place an asterisk between every two groups.</p>
3,107,755
<p>When working with skew Schur functions, they can be defined as follows.</p> <p>Let <span class="math-container">$C^{\lambda}_{\mu, \nu}$</span> be the integers such that </p> <p><span class="math-container">$$s_{\mu}s_{\nu}=\sum_{\lambda} C^{\lambda}_{\mu, \nu} s_{\lambda}$$</span></p> <p>Then, we can define skew Schur functions as</p> <p><span class="math-container">$$s_{\lambda/\mu}= \sum_{\nu} C^{\lambda}_{\mu, \nu} s_{\nu}$$</span></p> <p>My question is, if we can calculate each of these <span class="math-container">$s_{\mu}$</span>, <span class="math-container">$s_{\nu}$</span>, and <span class="math-container">$s_{\lambda}$</span>, why can't we sometimes find <span class="math-container">$C^{\lambda}_{\mu, \nu}$</span>?</p> <p>My teacher told me that having a formula and having an explicit product are two very different things. He told me that these coefficients are not always easy to compute. And I have seen in some papers that it is equal to the number of tableaux that have shape <span class="math-container">$\lambda$</span> and satisfy certain conditions. I mean, they use other methods to compute such coefficients.</p> <p>Why does this happen if we know how to compute all but one object in this formula?</p>
saulspatz
235,128
<p>Put the <span class="math-container">$8$</span> *s in a row. We have to put <span class="math-container">$7$</span> #s between them. Now there are <span class="math-container">$9$</span> places to put the remaining <span class="math-container">$4$</span> asterisks. Use stars and bars. </p>
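<p>The resulting count, $\binom{9+4-1}{4}=495$, can be checked by brute force; a small Python sketch (my own check, not part of the hint):</p>

```python
from itertools import combinations_with_replacement
from math import comb

# 4 indistinguishable asterisks distributed over 9 available places
# (a place may receive several): a multiset of size 4 from 9 slots.
placements = list(combinations_with_replacement(range(9), 4))
brute = len(placements)
formula = comb(9 + 4 - 1, 4)   # stars and bars: C(n+k-1, k)
print(brute, formula)          # both 495
```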
185,900
<p>What is the <strong>fastest way</strong> to find the smallest positive root of the following transcendental equation:</p> <p><span class="math-container">$$a + b\cdot e^{-0.045 t} = n \sin(t) - m \cos(t)$$</span></p> <pre><code>eq = a + b E^(-0.045 t) == n Sin[t] - m Cos[t]; </code></pre> <p>where <span class="math-container">$a,b,n,m$</span> are some real constants.</p> <p>For instance, I tried:</p> <pre><code>eq = 5 E^(-0.045 t) + 0.1 == -0.3 Cos[t] + 0.009 Sin[t]; sol = FindRoot[eq, {t, 1}] {t -&gt; 117.349} </code></pre> <p>This returns an answer, but that does not mean it is the smallest positive root.</p> <p>I don't like <code>FindRoot[]</code> because it needs a starting point, which changes with the initial parameters <span class="math-container">$(a,b,n,m)$</span>.</p> <p>Is there a way to find the <strong>smallest positive root</strong> of the equation for any <span class="math-container">$(a,b,n,m)$</span> (if a solution exists), without <em>starting points</em>?</p> <p>If not, how can a suitable starting point be determined automatically for given parameters?</p> <p>There are numerical and graphical answers in <em>Wolfram Alpha</em>:</p> <p><a href="https://i.stack.imgur.com/Y90wg.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y90wg.jpg" alt="enter image description here"></a></p>
John Doty
27,989
<p>Here's a method that requires no plotting or guessing of initial points. It uses <code>FindInstance</code> to find successively smaller solutions until no smaller solution can be found.</p> <pre><code>try[tt_] := With[{r = FindInstance[eq &amp;&amp; t &lt; tt, t, Reals]}, If[r == {}, tt, t /. r[[1]]]] FixedPoint[try, Infinity] (* 72.0486 *) </code></pre>
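<p>For reference, the same goal can be sketched outside Mathematica as a plain scan for the first sign change followed by bisection; the scan range and step size below are ad-hoc choices of mine, not part of the answer:</p>

```python
import math

# g(t) = (n sin t - m cos t) - (a + b e^{-0.045 t}) for the example values
def g(t):
    return -0.3 * math.cos(t) + 0.009 * math.sin(t) - (0.1 + 5.0 * math.exp(-0.045 * t))

def smallest_positive_root(f, t_max=200.0, step=0.01, tol=1e-10):
    """Scan [0, t_max] for the first sign change, then bisect it."""
    lo = 0.0
    while lo < t_max:
        hi = lo + step
        if f(lo) == 0.0:
            return lo
        if f(lo) * f(hi) < 0:             # bracketed a root
            while hi - lo > tol:          # plain bisection
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        lo = hi
    return None

root = smallest_positive_root(g)
print(root)  # close to the 72.0486 reported by FindInstance/FixedPoint
```

The step size must be small enough not to skip a narrow interval where the sign flips; here the envelope argument makes 0.01 comfortably safe.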
1,343,554
<p>This is exercise 6.10 in Resnick's book "A Probability Path".</p> <p>We're given a sequence of random variables $(X_n)$ and an increasing function $f: [0, \infty) \rightarrow [0, \infty)$ such that </p> <p>$$ \frac{f(x)}{x} \rightarrow \infty, \qquad x \rightarrow \infty, $$</p> <p>and</p> <p>$$ \sup_{n \geq 1} E(f(|X_n|)) &lt; \infty, $$</p> <p>and the goal is to conclude that $\{X_n\}$ is uniformly integrable. I know I'm supposed to show my work so far but honestly I find the problem hard to even get a start on, mostly because the existence of $f$ is abstract and I just don't see how to get from A to B. No, I am not a student. Any help is appreciated.</p>
henderfen25
774,103
<p>I just started using the math stack exchange community to help with my self study of probability theory, and I have a comment in general about this problem. It should really be a comment on the above post by David, but I don't have the reputation points for that, so I am posting it here. Also, at the end of this post I am posting my own solution to the problem, so in that way I suppose it is appropriate as an answer to the original question.</p> <p>In regard to David's post, where have we used that <span class="math-container">$f$</span> is increasing? I also did the problem myself -- my proof is below -- and I have a slightly different argument, but I didn't use that <span class="math-container">$f$</span> is increasing. And I don't see in David's post the use of this assumption either. But I could be missing something, so please correct me if I'm wrong.</p> <p>On a slightly more pedantic note, according to the definition of uniform integrability given in Resnick, David's argument is missing the check that <span class="math-container">$sup_{n \geq 1} E(|X_n|) &lt; \infty$</span>. This is not hard to provide, but it is worth noting.</p> <p>So as promised, here is my solution to the problem -- it is based directly on the definition of uniform integrability given in Resnick. I'm not suggesting it is any better or worse than David's approach. Just giving my approach:</p> <p>For <span class="math-container">$a &gt; 0$</span>,</p> <p><span class="math-container">$$\int_{[|X_n| &gt; a]} |X_n| dP \leq \int_{[|X_n|&gt;a] \cap [|X_n| \leq f(|X_n|)]} |X_n| dP + \int_{[|X_n| &gt; a] \cap [|X_n| &gt; f(|X_n|)]} |X_n| dP$$</span>. 
Thus</p> <p><span class="math-container">$$sup_{n \geq 1} \int_{[|X_n| &gt; a]} |X_n| dP \leq sup_{n \geq 1} \int_{[|X_n|&gt;a] \cap [|X_n| \leq f(|X_n|)]} |X_n| dP+sup_{n\geq1}\int_{[|X_n|&gt;a] \cap [|X_n| &gt; f(|X_n|)]} |X_n| dP$$</span></p> <p>So it is sufficient to show that the two terms on the right <span class="math-container">$\to 0$</span> as <span class="math-container">$a \to \infty$</span>.</p> <p>For the first term (<span class="math-container">$sup_{n \geq 1} \int_{[|X_n|&gt;a] \cap [|X_n| \leq f(|X_n|)]} |X_n| dP$</span>), let <span class="math-container">$\epsilon &gt; 0$</span>. There exists <span class="math-container">$M &gt; 0 $</span> such that <span class="math-container">$\frac{x}{f(x)} &lt; \frac{\epsilon}{sup_{n \geq 1} E(f(|X_n|))}$</span> whenever <span class="math-container">$x &gt; M$</span>. Note this is possible since <span class="math-container">$\frac{f(x)}{x} \to \infty$</span> as <span class="math-container">$x \to \infty$</span>. So, since <span class="math-container">$\frac{|X_n|}{f(|X_n|)} &lt; \frac{\epsilon}{sup_{n \geq 1} E(f(|X_n|))}$</span> on the set <span class="math-container">$[|X_n| &gt; a]$</span>, we have that for <span class="math-container">$a &gt; M$</span></p> <p><span class="math-container">$$sup_{n \geq 1} \int_{[|X_n|&gt;a] \cap [|X_n| \leq f(|X_n|)]} |X_n| dP = sup_{n \geq 1} \int_{[|X_n|&gt;a] \cap [|X_n| \leq f(|X_n|)]} \frac{|X_n|}{f(|X_n|)} f(|X_n|) dP \leq \frac{\epsilon}{sup_{n \geq 1} E(f(|X_n|))} sup_{n \geq 1} E(f(|X_n|)) = \epsilon$$</span></p> <p>So the first term <span class="math-container">$\to 0$</span> as <span class="math-container">$a \to \infty$</span>. For the second term (<span class="math-container">$sup_{n\geq1}\int_{[|X_n|&gt;a] \cap [|X_n| &gt; f(|X_n|)]} |X_n| dP$</span>), there exists <span class="math-container">$c &gt; 0$</span> such that if <span class="math-container">$a &gt; c$</span>, then <span class="math-container">$\frac{f(a)}{a} &gt; 1$</span>. Thus</p> <p><span class="math-container">$[|X_n| &gt; f(|X_n|)] \subseteq [|X_n| \leq c]$</span>. 
Thus for <span class="math-container">$a &gt; c$</span>, <span class="math-container">$[|X_n| &gt; a] \cap [|X_n| &gt; f(|X_n|)] = \emptyset$</span>. Thus for <span class="math-container">$a &gt; c$</span>, the second term is <span class="math-container">$0$</span>. Thus the second term <span class="math-container">$\to 0$</span> as <span class="math-container">$a \to \infty$</span>.</p>
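<p>As a concrete instance of the criterion (an illustration of mine, not part of the original argument): taking <span class="math-container">$f(x)=x^{p}$</span> with <span class="math-container">$p&gt;1$</span>, which is increasing with <span class="math-container">$f(x)/x=x^{p-1}\to\infty$</span>, the exercise specializes to the familiar fact that boundedness in <span class="math-container">$L^p$</span> implies uniform integrability:</p>

```latex
% f(x) = x^p with p > 1 satisfies the hypotheses of the exercise, so
\sup_{n\ge 1} E\left(|X_n|^{p}\right) < \infty \quad (p > 1)
\quad\Longrightarrow\quad \{X_n\} \text{ is uniformly integrable.}
```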
2,181,633
<p>Given a real polynomial $P(x)$, find another real polynomial $Q(x)$ such that the powers of $x$ in $P(x)Q(x)$ are multiples of an integer $n$. Is it always possible to find such a polynomial?</p> <p>For e.g if $P(x) = 1+3x+2x^2$ I can find $Q(x)=(2 x - 1) (x - 1) (4 x^2 + 1) (x^2 + 1)$ such that $P(x)Q(x) = 16 x^8 - 17 x^4 + 1$ has powers which are multiples of 4. </p> <p>I'm bumping the question because I'm not satisfied of the original answer.</p>
A. Napster
27,945
<p>If $R(x)= x^{n} - a^{n} $ then by factorisation we can see that $ x^{n} - a^{n} = (x - a)(x^{n - 1} + ax^{n - 2}+ a^2x^{n - 3} + \dots + a^{n-1})$, or we can simply observe that $R(a)=0$ implies $x-a$ is a factor of $x^{n} - a^{n}$. </p> <p>In the same way we can deduce that $x+a$ is a factor of $x^n -a^n$ when $n$ is even (if $n =2p$ then $x^n -a^n = x^{2p} -a^{2p} $ is divisible by $x^2 - a^2$, hence divisible by $x+a$). Also $x+a$ is a factor of $x^n +a^n$ when $n$ is odd.</p> <p>Let's suppose that $P(x)$ is monic. By the fundamental theorem of algebra $P(x) = \prod^{d}_{i=1} (x - a_i)$, where $d$ is the degree of $P$. For each root $a_i$ we can find a polynomial $Q_{a_i} (x)$ such that $Q_{a_i}(x) \times (x-{a_i}) = x^n - {a_i}^n$, since $x-a_i$ always divides $x^n - a_i^n$; if $a_i$ is real then $Q_{a_i}$ has real coefficients. In the case $a_i$ isn't real, we know that non-real complex roots come in conjugate pairs. </p> <p>$(x-a)(x- \bar a) = x^2 -2Re(a)x + |a|^2 $ is a polynomial with real coefficients and so is $(x^n -a^n)(x^n - \bar a^n)$. Hence the quotient $Q_{a}(x) Q_{\bar a}(x)$ of $(x^n -a^n)(x^n - \bar a^n)$ divided by $(x-a)(x- \bar a)$ will also be a polynomial with real coefficients.</p> <p>Since $ \prod^{d}_{i=1} (x^n - a_i^n)$ is a polynomial in which all powers of $x$ are multiples of $n$, we have shown that a polynomial $Q(x)$ as in the question exists. </p> <p>We can also use the fact that $x^n - a_j^n = \Pi^{n}_{i=1} (x- \mathcal{E}_ia_j)$ where $\mathcal{E}_i$ is an $n^{th}$ root of unity.</p>
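<p>The worked example from the question fits this construction and can be checked by multiplying coefficient lists; a small Python sketch (my own verification, not part of the argument):</p>

```python
def poly_mul(a, b):
    """Multiply polynomials given as ascending coefficient lists."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

P = [1, 3, 2]                                              # 1 + 3x + 2x^2
Q = [1]
for factor in ([-1, 2], [-1, 1], [1, 0, 4], [1, 0, 1]):    # (2x-1)(x-1)(4x^2+1)(x^2+1)
    Q = poly_mul(Q, factor)

product = poly_mul(P, Q)
print(product)  # [1, 0, 0, 0, -17, 0, 0, 0, 16], i.e. 1 - 17x^4 + 16x^8
```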
1,382,003
<p>I'm trying to solve problem 7 from the IMC 2015, Blagoevgrad, Bulgaria (Day 2, July 30). Here is the problem</p> <blockquote> <p>Compute $$\large\lim_{A\to\infty}\frac{1}{A}\int_1^A A^\frac{1}{x}\,\mathrm dx$$</p> </blockquote> <p>And here is my approach:</p> <p>Since $A^\frac{1}{x}=\exp\left(\frac{\ln A}{x}\right)$, then using the Taylor series for the exponential function, we have $$\exp\left(\frac{\ln A}{x}\right)=1+\frac{\ln A}{x}+\frac{\ln^2 A}{x^2}+\frac{\ln^3 A}{x^3}+\cdots=1+\frac{\ln A}{x}+\sum_{k=2}^\infty\frac{\ln^k A}{x^k}$$ Hence, integrating term by term is trivial. \begin{align} \lim_{A\to\infty}\frac{1}{A}\int_1^A A^\frac{1}{x}\,\mathrm dx&amp;=\lim_{A\to\infty}\frac{1}{A}\int_1^A \left[1+\frac{\ln A}{x}+\sum_{k=2}^\infty\frac{\ln^k A}{x^k}\right]\,\mathrm dx\\ &amp;=\lim_{A\to\infty}\frac{1}{A}\left[A-1+\ln^2 A-\sum_{k=2}^\infty\frac{\ln^k A}{(k-1)A^{k-1}}+\sum_{k=2}^\infty\frac{\ln^k A}{(k-1)}\right]\\ &amp;=1+\lim_{A\to\infty}\sum_{k=2}^\infty\frac{\ln^k A}{A(k-1)}\\ \end{align} I'm stuck here because I can't evaluate the last term as $k\to\infty$. My guess is that the answer is $1$, but I'm not sure. Is my approach correct? If it is, how does one evaluate the last limit? I'm also interested in knowing other approaches to this problem in a formal mathematical setting. Thanks.</p>
Leucippus
148,155
<p>For the integral in question: $$\large\lim_{A\to\infty} \, \frac{1}{A} \, \int_{1}^{A} A^{\frac{1}{x}}\,dx$$ the following is considered: \begin{align} A^{\frac{1}{x}} = e^{\frac{1}{x} \, \ln A} = 1 + \frac{\ln A}{x} + \sum_{k=2}^{\infty} \frac{\ln^{k}A}{k! \, x^{k}} \end{align} for which \begin{align} \int_{1}^{A} A^{\frac{1}{x}} \, dx &amp;= [ x ]_{1}^{A} + \ln A \, [ \ln x ]_{1}^{A} + \sum_{k=2}^{\infty} \frac{\ln^{k}A}{k! \, (1-k)} \, [ x^{1-k} ]_{1}^{A} \\ &amp;= A - 1 + \ln^{2} A - \sum_{k=2}^{\infty} \frac{1}{k! \, (k-1)} \, \left(\frac{A \, \ln^{k}A}{A^{k}} - \ln^{k}A \right). \end{align} Now dividing by $A$ leads to \begin{align} \frac{1}{A} \, \int_{1}^{A} A^{\frac{1}{x}} \, dx &amp;= 1 - \frac{1}{A} + \frac{\ln^{2} A}{A} - \sum_{k=2}^{\infty} \frac{1}{k! \, (k-1)} \, \left(\frac{ \ln^{k}A}{A^{k}} - \frac{\ln^{k}A}{A} \right). \end{align} In order to evaluate the necessary limits the following components are needed: \begin{align} \lim_{A \to \infty} \frac{\ln A}{A} &amp;= 0 \\ \lim_{A \to \infty} \frac{\ln^{k}A}{A^{k}} &amp;= \left( \lim_{A \to \infty} \frac{\ln A}{A} \right)^{k} = 0 \\ \lim_{A \to \infty} \frac{\ln^{k}A}{A} &amp;= k \, \lim_{A \to \infty} \frac{\ln^{k-1}A}{A} = \cdots = \frac{k!}{1!} \, \lim_{A \to \infty} \frac{\ln A}{A} = 0. \end{align} With these limiting values it is evident that \begin{align} \lim_{A \to \infty} \, \frac{1}{A} \, \int_{1}^{A} A^{\frac{1}{x}} \, dx &amp;= \lim_{A \to \infty} \left[ 1 - \frac{1}{A} + \frac{\ln^{2} A}{A} - \sum_{k=2}^{\infty} \frac{1}{k! \, (k-1)} \, \left(\frac{ \ln^{k}A}{A^{k}} - \frac{\ln^{k}A}{A} \right) \right] \\ &amp;= 1 - \lim_{A \to \infty} \frac{1}{A} + \lim_{A \to \infty} \frac{\ln^{2} A}{A} - \sum_{k=2}^{\infty} \frac{1}{k! \, (k-1)} \, \lim_{A \to \infty} \left(\frac{ \ln^{k}A}{A^{k}} - \frac{\ln^{k}A}{A} \right) \\ &amp;= 1. \end{align}</p>
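<p>The closed form for $\int_1^A A^{1/x}\,dx$ can be checked numerically for a moderate $A$; a small Python sketch (my own check; the value $A=10$ and the truncation at $k&lt;80$ are ad hoc):</p>

```python
from math import log, exp, factorial

A = 10.0
L = log(A)

# Truncated series:  A - 1 + ln^2 A - sum_{k>=2} (A^{1-k} ln^k A - ln^k A) / (k! (k-1))
series = A - 1 + L**2 - sum(
    (A**(1 - k) * L**k - L**k) / (factorial(k) * (k - 1)) for k in range(2, 80)
)

# Composite Simpson's rule for the integral of A^{1/x} on [1, A].
def simpson(f, a, b, n=20000):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

numeric = simpson(lambda x: exp(L / x), 1.0, A)
print(series, numeric)  # the two values agree to many digits
```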
1,836,998
<p>$$ \lim_{x\to 0^{+}}\left[\left(1+\frac{1}{x}\right)^x+\left(\frac{1}{x}\right)^x+\left(\tan(x)\right)^{\frac{1}{x}}\right]$$</p> <p>Attempt: I used $\tan(x)\approx x$ and also $(1+n)^{1/n}\to e$ as $n\to 0$, so I let $x=0+h$ and the limit changes to a limit as $h$ tends to $0$. But that gives me $e+\infty+h^{1/h}$, while the answer is an integer between $0$ and $9$. Where is my mistake? Thanks</p>
Barry Cipra
86,747
<p>Note first of all that $(\tan x)^{1/x}\to0^\infty=0$ as $x\to0^+$, so the trickiest-looking part of the limit is actually the easiest. More formally, $0\le\tan x\le1$ for $0\le x\le\pi/4$ and $1/x\ge4/\pi\gt1$ for $0\lt x\le\pi/4$, so</p> <p>$$0\le(\tan x)^{1/x}\le\tan x\quad\text{for }0\lt x\le{\pi\over4}$$</p> <p>and thus $(\tan x)^{1/x}\to0$ as $x\to0^+$ by the squeeze theorem.</p> <p>As for the rest of the limit, it's convenient to write</p> <p>$$\left(1+{1\over x}\right)^x+\left(1\over x\right)^x={(x+1)^x+1\over x^x}$$</p> <p>and then "distribute" the limit:</p> <p>$$\lim_{x\to0^+}\left[\left(1+{1\over x}\right)^x+\left(1\over x\right)^x\right]={\lim_{x\to0^+}((x+1)^x+1)\over\lim_{x\to0^+}(x^x)}={2\over\lim_{x\to0^+}(x^x)}$$</p> <p>So all that remains is to show that $\lim_{x\to0^+}(x^x)=1$, which is equivalent to showing</p> <p>$$\lim_{x\to0^+}x\ln x=0$$</p> <p>Computing this limit without using L'Hopital requires additional steps. I will omit them here, unless there is a request for them in comments. (The other answers, I see, took this limit for granted.)</p>
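<p>The resulting value $2/1=2$ can be spot-checked numerically; a small Python sketch (my own check, sample points ad hoc):</p>

```python
from math import tan

def F(x):
    return (1 + 1 / x) ** x + (1 / x) ** x + tan(x) ** (1 / x)

vals = [F(10.0 ** (-k)) for k in range(2, 6)]   # x = 1e-2, ..., 1e-5
print(vals)  # approaches 2 from above as x -> 0+
```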
643,658
<p>Let <strong><em>K</em></strong> be a finite field, $F=K(\alpha)$ a finite simple extension of degree $n$, and $ f \in K[x]$ the minimal polynomial of $\alpha$ over $K$. Let $\frac{f(x)}{x-\alpha}=\beta_0+\beta_1 x+\cdots+\beta_{n-1}x^{n-1}\in F[x]$ and $\gamma=f'(\alpha)$.</p> <p>Prove that the dual basis of $\left\{ 1,\alpha,\cdots,\alpha^{n-1} \right\}$ is $\left\{ \beta_0\gamma^{-1},\beta_1\gamma^{-1},\cdots,\beta_{n-1}\gamma^{-1} \right\}$.</p> <p>I met this exercise in "Finite Fields" by Lidl &amp; Niederreiter, Exercise 2.40, and I do not know how to do the calculation from Definition 2.30, which reads:</p> <p>Definition 2.30. Let $K$ be a finite field and $F$ a finite extension of $K$. Then two bases $\left\{ \alpha_1,\alpha_2,\cdots,\alpha_m \right\}$ and $\left\{ \beta_1,\beta_2,\cdots,\beta_m \right\}$ of $F$ over $K$ are said to be dual bases if for $1\le i,j\le m$ we have $$\operatorname{Tr}_{F/K}\left( \alpha_i\beta_j \right)=\begin{cases} 0 &amp; \text{for}\ i\neq j, \\ 1 &amp; \text{for}\ i=j. \end{cases}$$</p> <p>I think $\gamma=\lim_{x\to\alpha}\frac{f(x)-f(\alpha)}{x-\alpha}=\beta_0+\beta_1\alpha+\cdots+\beta_{n-1}\alpha^{n-1}$, using $f(\alpha)=0$.</p> <p>How can I continue? The lecturer did not teach the "dual bases" section.</p>
DonAntonio
31,254
<p>If $\;G=Gal(F/K)=\{\sigma_1:=Id,\sigma_2,...,\sigma_n\}\;$, then using your nice characterization </p> <p>$$\;\gamma:=f'(\alpha)=\sum_{k=0}^{n-1}\beta_k\alpha^k\;$$</p> <p>we get:</p> <p>$$tr.(\alpha^i\beta_j\gamma^{-1})=\sum_{k=1}^n\sigma_k(\alpha^i\beta_j\gamma^{-1})=\sum_{k=1}^n\sigma_k(\alpha)^i\sigma_k(\beta_j)\sigma_k(\gamma^{-1})=$$</p> <p>$$=\sum_{k=1}^n\sigma_k(\alpha)^i\sigma_k(\beta_j)\sigma_k\left(\left(\sum_{t=0}^{n-1}\beta_t\alpha^t\right)^{-1}\right)=\sum_{k=1}^n\sigma_k(\alpha)^i\sigma_k(\beta_j)\left(\sum_{t=0}^{n-1}\sigma_k(\beta_t)\sigma_k(\alpha)^t\right)^{-1}=$$</p> <p>$$=\frac{\alpha^i\beta_j}{\beta_0+\beta_1\alpha+\ldots+\beta_{n-1}\alpha^{n-1}}+$$</p> <p>$$\frac{\sigma_2(\alpha)^i\sigma_2(\beta_j)}{\sigma_2(\beta_0)+\sigma_2(\beta_1)\sigma_2(\alpha)+\ldots+\sigma_2(\beta_{n-1})\sigma_2(\alpha^{n-1})}+\ldots+$$</p> <p>$$+\frac{\sigma_n(\alpha)^i\sigma_n(\beta_j)}{\sigma_n(\beta_0)+\sigma_n(\beta_1)\sigma_n(\alpha)+\ldots+\sigma_n(\beta_{n-1})\sigma_n(\alpha^{n-1})}$$</p> <p>But $\;\sigma_k(\beta_j)=\beta_j\;$ since $\;\beta_j\in K\;$ , so the above is</p> <p>$$=\frac{\alpha^i\beta_j}{\beta_0+\beta_1\alpha+\ldots+\beta_{n-1}\alpha^{n-1}}+$$</p> <p>$$\frac{\sigma_2(\alpha)^i\beta_j}{\beta_0+\beta_1\sigma_2(\alpha)+\ldots+\beta_{n-1}\sigma_2(\alpha^{n-1})}+\ldots+$$</p> <p>$$+\frac{\sigma_n(\alpha)^i\beta_j}{\beta_0+\beta_1\sigma_n(\alpha)+\ldots+\beta_{n-1}\sigma_n(\alpha^{n-1})}$$</p> <p>Now, if $\;i=j\;$ we simply have</p> <p>$$tr.\left(\alpha^i\beta_j\gamma^{-1}\right)=\sum_{k=1}^n\frac{\sigma_k(\alpha)^i\beta_i}{\sum_{t=0}^{n-1}\sigma_k(\alpha)^t\beta_t}=1$$</p> <p>and if $\;i\neq j\;$ </p> <p>$$tr.\left(\alpha^i\beta_j\gamma^{-1}\right)=\sum_{k=1}^n\frac{\sigma_k(\alpha)^i}{\sum_{t=0}^{n-1}\sigma_k(\alpha)^t\beta_t}\beta_j$$</p> <p>I shall come back to this later...perhaps.</p>
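<p>The exercise's claim can at least be sanity-checked in the smallest nontrivial case $K=\mathbb F_2$, $f(x)=x^2+x+1$, $F=\mathbb F_4$ (my own check, not part of the argument above): there $f(x)/(x-\alpha)=(1+\alpha)+x$, so $\beta_0=1+\alpha$, $\beta_1=1$, and $\gamma=f'(\alpha)=2\alpha+1=1$, which predicts $\{1+\alpha,\,1\}$ as the dual basis of $\{1,\alpha\}$:</p>

```python
# GF(4) elements as pairs (a, b) meaning a + b*alpha, with alpha^2 = alpha + 1, char 2.
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*alpha)(c + d*alpha) = (ac + bd) + (ad + bc + bd)*alpha
    return ((a * c + b * d) % 2, (a * d + b * c + b * d) % 2)

def trace(x):            # Tr_{F4/F2}(x) = x + x^2
    a, b = mul(x, x)     # x^2
    return ((x[0] + a) % 2, (x[1] + b) % 2)

one, alpha = (1, 0), (0, 1)
basis = [one, alpha]                 # {1, alpha}
dual = [(1, 1), one]                 # {beta0*gamma^-1, beta1*gamma^-1} = {1+alpha, 1}

gram = [[trace(mul(u, v)) for v in dual] for u in basis]
print(gram)  # [[(1, 0), (0, 0)], [(0, 0), (1, 0)]], i.e. the identity
```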
3,629,790
<p>So I am a bit stuck on applying the limit definition for this problem (proving that <span class="math-container">$\lim_{x\to-2}(x^2+x-5)=-3$</span>). This is what I have so far: if <span class="math-container">$\lvert x - c \rvert &lt; \delta $</span> then <span class="math-container">$\lvert f(x) - l \rvert &lt; \epsilon$</span>, which here reads <span class="math-container">$$\lvert x + 2 \rvert &lt; \delta \implies \lvert x^2 + x -2 \rvert &lt; \epsilon $$</span> </p> <p>which means <span class="math-container">$\lvert x -1 \rvert \lvert x + 2 \rvert &lt; \epsilon $</span> </p> <p><span class="math-container">$\lvert x + 2 \rvert &lt; {\epsilon\over \lvert x -1 \rvert} $</span></p> <p>let <span class="math-container">$\delta = 1 $</span> then: <span class="math-container">$\lvert x + 2 \rvert &lt; \delta &lt; 1 $</span> which means <span class="math-container">$-1 &lt; x + 2 &lt; 1 $</span> <span class="math-container">$-4 &lt; x -1 &lt; -2 $</span> Thus the upper bound of <span class="math-container">$\lvert x -1 \rvert &lt; -2 $</span> is <span class="math-container">$-2$</span></p> <p>let <span class="math-container">$$\delta = \min(1, {\epsilon\over -2}) $$</span></p> <p>But what is confusing me here is: how is <span class="math-container">$ \lvert x + 2 \rvert &lt; {\epsilon\over -2} $</span>?</p> <p>Because if <span class="math-container">$\epsilon$</span> is any number more than <span class="math-container">$0$</span>, then <span class="math-container">${\epsilon\over -2} $</span> will be negative and it can't be greater than <span class="math-container">$ \lvert x + 2 \rvert $</span>. </p>
José Carlos Santos
446,262
<p>Note that<span class="math-container">\begin{align}x^2+x-5-(-3)&amp;=x^2-4+x+2\\&amp;=(x-2)(x+2)+x+2\\&amp;=(x-1)(x+2).\end{align}</span>So, if <span class="math-container">$\lvert x+2\rvert&lt;1$</span>, then <span class="math-container">$\lvert x-1\rvert\leqslant\lvert x+2\rvert+3&lt;4$</span>. So, take <span class="math-container">$\delta=\min\left\{1,\frac\varepsilon4\right\}$</span>. Then, if <span class="math-container">$\lvert x+2\rvert&lt;\delta$</span>, then<span class="math-container">$$\lvert x^2+x-5-(-3)\rvert&lt;4\frac\varepsilon4=\varepsilon.$$</span></p>
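<p>The choice <span class="math-container">$\delta=\min\{1,\frac\varepsilon4\}$</span> can be spot-checked numerically; a small Python sketch (my own check, sampling points with <span class="math-container">$\lvert x+2\rvert&lt;\delta$</span>):</p>

```python
# Spot-check delta = min(1, eps/4) for lim_{x -> -2} (x^2 + x - 5) = -3.
def check(eps, samples=1000):
    delta = min(1.0, eps / 4.0)
    for i in range(1, samples):
        x = -2.0 + delta * (2.0 * i / samples - 1.0)   # points with |x + 2| < delta
        if not abs(x * x + x - 5 - (-3)) < eps:
            return False
    return True

ok = all(check(eps) for eps in (10.0, 1.0, 0.1, 1e-3, 1e-6))
print(ok)  # True
```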
1,179,922
<p>I would like to inquire whether there is a simple way to prove that a function is decreasing or not. For example, how would I prove that the function</p> <p>$$Y = (X^{0.5} - 1)/0.5$$ </p> <p>is decreasing? I am not sure whether the negativity of the 2nd derivative solves this and would really like to understand the intuition better. </p>
user149418
149,418
<p>I am posting this answer because you wanted to know the monotonicity in terms of the derivative; otherwise this answer does not make much sense, as there is already a nice answer to your question.</p> <p>So if a function is differentiable then the monotonicity can be described as follows.</p> <blockquote> <p>Let <span class="math-container">$f:[a,b]\to \Bbb R$</span> be a continuous function which is differentiable on <span class="math-container">$(a,b)$</span>, then</p> <p>(i) <span class="math-container">$f^\prime(x)&gt;0,\, \forall x\in (a,b)\implies f$</span> is (strictly) increasing on <span class="math-container">$[a,b]$</span></p> <p>(ii) <span class="math-container">$f^\prime(x)&lt;0,\, \forall x\in (a,b)\implies f$</span> is (strictly) decreasing on <span class="math-container">$[a,b]$</span></p> <p>(iii) <span class="math-container">$f^\prime(x)=0,\, \forall x\in (a,b)\implies f$</span> is constant on <span class="math-container">$[a,b]$</span>.</p> </blockquote> <p>For your problem <span class="math-container">$f(x)=\frac{1}{0.5}(x^{0.5}-1)=2(\sqrt{x}-1)$</span>. Clearly <span class="math-container">$f$</span> is differentiable on <span class="math-container">$(0,\infty)$</span> and <span class="math-container">$f^\prime(x)=\frac{1}{\sqrt{x}},\,\forall x\in(0,\infty)$</span>. Also <span class="math-container">$f^\prime(x)&gt;0$</span> for all <span class="math-container">$x\in(0,\infty)$</span>. Thus <span class="math-container">$f$</span> is strictly increasing on <span class="math-container">$[0,\infty)$</span>.</p>
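<p>The conclusion can be spot-checked numerically: sampled values of <span class="math-container">$f$</span> should be strictly increasing, and a forward difference should approximate <span class="math-container">$f'(x)=1/\sqrt x$</span>. A small Python sketch (my own check):</p>

```python
# f(x) = (x^0.5 - 1)/0.5 = 2*(sqrt(x) - 1); expected f'(x) = 1/sqrt(x) > 0.
def f(x):
    return (x ** 0.5 - 1) / 0.5

xs = [k / 100.0 for k in range(1, 1001)]          # 0.01, 0.02, ..., 10.0
values = [f(x) for x in xs]
increasing = all(a < b for a, b in zip(values, values[1:]))

# forward-difference slope at x = 4 should be close to 1/sqrt(4) = 0.5
slope_at_4 = (f(4 + 1e-6) - f(4)) / 1e-6
print(increasing, slope_at_4)
```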
1,898,839
<p>Is the following inequality true?</p> <p><span class="math-container">$$\mbox{Tr} \left( \mathrm P \, \mathrm M^T \mathrm M \right) \leq \lambda_{\max}(\mathrm P) \, \|\mathrm M\|_F^2$$</span></p> <p>where <span class="math-container">$\mathrm P$</span> is a positive definite matrix with appropriate dimension. How about the following?</p> <p><span class="math-container">$$\mbox{Tr}(\mathrm A \mathrm B)\leq \|\mathrm A\|_F \|\mathrm B\|_F$$</span></p>
Branimir Ćaćić
49,610
<p>The heart of the matter is the following key fact:</p> <blockquote> <p>The assignment $(A,B) \mapsto \langle A,B \rangle := \operatorname{Tr}(A^T B)$ defines a Euclidean inner product on the real vector space $\mathbb{R}^{m \times n}$ of all $m \times n$ real matrices that induces the Frobenius norm $\|\cdot\|_F$ on $\mathbb{R}^{m\times n}$, in the sense that $$\forall C \in \mathbb{R}^{m \times n}, \quad \|C\|_F = \sqrt{\langle C,C\rangle} = \sqrt{\operatorname{Tr}(C^T C)}.$$</p> </blockquote> <p>For this, of course, you need the un-normalised trace $\operatorname{Tr}(C) = \sum_{k=1}^n C_{kk}$ on $\mathbb{R}^{n \times n}$.</p> <p>Once you know this and observe (by whichever definition of the Frobenius norm you prefer) that $\|C^T\|_F = \|C\|_F$ for all $C \in \mathbb{R}^{m \times n}$, the Cauchy–Schwarz inequality for the inner product $\langle \cdot,\cdot \rangle$ immediately yields your second inequality.</p> <p>The first inequality is a little bit more subtle, but isn't too bad once you recall the following basic fact about traces:</p> <blockquote> <p>For any orthonormal basis $\{v_1,\dotsc,v_n\}$ of $\mathbb{R}^n$, we have $$\forall C \in \mathbb{R}^{n\times n}, \quad \operatorname{Tr}(C) = \sum_{k=1}^n \langle v_k,Cv_k \rangle.$$</p> </blockquote> <p>Now, since $P$ is positive definite, let $\{v_1,\dotsc,v_n\}$ be an orthonormal basis for $\mathbb{R}^n$ consisting of eigenvectors of $P$, with $Pv_k = \lambda_k(P)v_k$ for $0 &lt; \lambda_1(P) \leq \cdots \leq \lambda_n(P) = \lambda_{\text{max}}(P)$. If you now write $$ \operatorname{Tr}(PM^T M) = \sum_{k=1}^n \langle v_k,P M^T M v_k \rangle = \sum_{k=1}^n \langle P v_k,M^T M v_k\rangle = \sum_{k=1}^n \lambda_k(P)\langle v_k, M^T M v_k \rangle $$ in terms of this special orthonormal basis and bound each eigenvalue of $P$ from above by $\lambda_{\text{max}}(P)$, it's now easy to get the first inequality.</p>
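<p>Both inequalities can be spot-checked on small concrete matrices; a pure-Python sketch (my own illustration, matrices chosen arbitrarily):</p>

```python
# Spot-check Tr(P M^T M) <= lambda_max(P) ||M||_F^2 and Tr(AB) <= ||A||_F ||B||_F.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

def trace(X):
    return sum(X[i][i] for i in range(len(X)))

def frob(X):
    return sum(v * v for r in X for v in r) ** 0.5

P = [[2.0, 0.0], [0.0, 1.0]]          # positive definite, lambda_max = 2
M = [[1.0, 2.0], [3.0, 4.0]]

lhs = trace(matmul(P, matmul(transpose(M), M)))
rhs = 2.0 * frob(M) ** 2
print(lhs, rhs)   # 40.0 <= 60.0

A, B = [[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]
print(trace(matmul(A, B)), frob(A) * frob(B))   # 69.0 <= sqrt(30*174)
```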
4,802
<p>Let $z^i(X, m)$ be the free abelian group generated by all codimension $i$ subvarieties on $X \times \Delta^m$ which intersect all faces $X \times \Delta^j$ properly for all j &lt; m. Then, for each i, these groups assemble to give, with the restriction maps to these faces, a simplicial group whose homotopy groups are the higher Chow groups CH^i(X,m) (m=0 gives the classical ones).</p> <p>Does anyone have an intuition to share about these higher Chow groups? What do they measure/mean? If I pass from the simplicial group to a chain complex, what does it mean to be in the kernel/image of the differential?</p> <p>Could one say that the higher Chow groups keep track of in how many ways two cycles can be rationally equivalent (and which of these different ways are then equivalent etc.)?</p> <p>Finally: I don't see any reason why the definition shouldn't make sense over the integers or worse base schemes. Is this true? Does it maybe still make sense but lose its intended meaning?</p>
Steven Landsburg
10,503
<p>I believe Bloch's original insight was something like the following:</p> <p>First, if $X$ is a regular scheme, you can filter $K_0$ by ``codimension of support''; that is, view $K_0(X)$ as the Grothendieck group of the category of all finitely generated modules and let $F^iK_0(X)$ be the part generated by modules with codimension of support greater than or equal to $i$.</p> <p>Next, suppose you want to mimic this construction for $K_m$ instead of $K_0$. The first step is to notice that if you patch two copies of $\Delta^m_X$ together along their "boundary" (i.e. the union of the images of the various copies of $\Delta^{m-1}_X$) and call the result $S^m_X$, then Karoubi-Villamayor theory tells you that $K_m(X)$ is a direct summand of $K_0(S^m_X)$. (The complementary direct summand is $K_0(X)$.)<br> So it suffices to find a "filtration by codimension of support" on $K_0(S^m_X)$. </p> <p>The usual constructions don't work because $S^m_X$ is not regular (so that in particular, not all modules correspond to $K$-theory classes.)</p> <p>But: a cycle in $z^i(X,m)$ has a positive part $z_+$ and a negative part $z_-$ which, (if it is homologically a cycle) must agree on the boundary. Therefore you can imagine taking $\Delta^m_X$-modules $M_+$ and $M_-$ supported on these positive and negative parts and patching them along the boundary to get a module on $S^m_X$. If this module has finite projective dimension (which it ``ought'' to because of all the proper-meeting conditions, and as long as it has no bad imbedded components), then it gives a class in $K_0(S^m_X)$, hence a class in $K_m(X)$, and we can take the $i-th$ part of the filtration to be generated by the classes that arise in this way.</p> <p>The Bloch-Lichtenbaum work largely bypasses this intuition, but this was (I think) the original intuition for why it ought to work.</p>
2,898,616
<p>My Math teacher told me that $\frac{dx}{dy} = \frac{1}{\frac{dy}{dx}}$. I asked for a proof and he gave me the result by using the chain rule. I don't understand why this is the case for every function because the function $y=x^2$ doesn't have an inverse function. If this is the case then why is $\frac{dx}{dy}$ not $\frac{-1}{2x}$? Maybe it only works for injective functions or we assume only one of the values to be true.</p>
Christoph
86,801
<p>$\newcommand\R{\mathbb R}$Let's avoid Leibniz notation. In Lagrange notation, the <a href="https://en.wikipedia.org/wiki/Inverse_function_theorem" rel="nofollow noreferrer">inverse function theorem</a> for a differentiable function $f\colon\R\to\R$ states that when $f(a)=b$ then $$ g'(b) = \frac 1{f'(a)}, $$ where $g$ is a <em>local inverse</em> of $f$ at $a$. This means, that $g$ is a function $g\colon U\to V$, where $U$ is an open set containing $f(a)$, $V$ is an open set containing $a$ and we have \begin{align*} f(g(y)) &amp;= y \quad\text{for all $y\in U$}, \\ g(f(x)) &amp;= x \quad\text{for all $x\in V$}. \end{align*}</p> <p>Let's apply this to $f(x) = x^2$ with derivative $f'(x)=2x$. For $x=2$ we have $f(x)=4$ and we can pick the local inverse $g_1\colon \R_{&gt;0}\to\R_{&gt;0}, y\mapsto \sqrt y$. We see that $g_1'(y) = \frac 1{2\sqrt{y}}$ and can verify $$ g_1'(4) = \frac{1}{4} = \frac{1}{f'(2)}. $$ For $x=-2$, we can not pick the same $g_1$ as a local inverse, since $-2$ is not even contained in $V=\R_{&gt;0}$ and there is no hope in making $V$ bigger since we would get $$ g_1(f(-2)) = g_1(4) = 2 \neq -2. $$ But we can instead pick $g_2\colon \R_{&gt;0}\to\R_{&lt;0}, y\mapsto -\sqrt{y}$ to get a correct local inverse and again verify the inverse function theorem: $$ g_2'(4) = -\frac{1}{4} = \frac{1}{f'(-2)}. $$</p>
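<p>Both local inverses can be checked with finite differences; a small Python sketch (my own check):</p>

```python
# Finite-difference check of g'(b) = 1/f'(a) for f(x) = x^2 at a = +/-2, b = 4.
def g1(y):          # local inverse of f around a = 2
    return y ** 0.5

def g2(y):          # local inverse of f around a = -2
    return -(y ** 0.5)

h = 1e-6
d_g1 = (g1(4 + h) - g1(4 - h)) / (2 * h)
d_g2 = (g2(4 + h) - g2(4 - h)) / (2 * h)
print(d_g1, d_g2)   # about +0.25 and -0.25, i.e. 1/f'(2) and 1/f'(-2)
```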
2,007,123
<p>Suppose $R=\mathbb{Q}[x]$ is a ring and define $T$ to be $T=\{f(x) \in R : f(0) \neq 0\}$ </p> <p>Prove that $\frac{x}{1}$ is the only irreducible element in $T^{-1}R$ (disregarding associates).</p> <hr> <p>My approach: assume $\frac{x}{1}$ is reducible. So there exist non-unit elements $\frac{a}{b}, \frac{c}{d}$ in $T^{-1}R $ such that $\frac{x}{1}=\frac{a}{b} \frac{c}{d}$. So $x=ac$ and since the degree of $x$ is $1$ it follows that the degree of $a$ or $c$ needs to be $0$. So one of the elements $\frac{a}{b}, \frac{c}{d}$ is a unit and therefore our assumption that $\frac{x}{1}$ is reducible can't be true. Is my reasoning thus far correct?</p> <p>Now I need to show that $\frac{x}{1}$ is the <strong>only</strong> irreducible element. But how can one prove that?</p>
Han de Bruijn
96,057
<p>Not meant as a duplicate of someone else's answer - hope it's complementary, but I want this section to be coherent and complete, for future reference as well. Apologies if things have become somewhat overdone.<P> The parent element that will be adopted for our own purpose is <I>not quite the unit</I> square. Instead it is: $\,[-1,+1]\times[-1,+1]$ . And the local coordinates inside our parent quadrilateral are defined accordingly: $\,-1 \leq \xi \leq +1\,$ and $\,-1 \leq \eta \leq +1\,$ . It's the same material as employed by Rahul and Nominal Animal if we only substitute $\,\xi = 2u-1$ and $\eta = 2v-1\;$ i.e. translation and scaling of the same shape (see the question's EDIT). The <I>node numbering</I> as proposed by Nominal Animal has been implemented here as follows: $\,\operatorname{nr}(x,y) = 2y+x+1\,$ with $\,x \in \{0,1\}\,$ and $\,y \in \{0,1\}$ . The advantage being that, in contrast with the more common counterclockwise convention, such a numbering can be generalized smoothly to three dimensions and higher. <BR> <a href="https://i.stack.imgur.com/QRsHV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QRsHV.jpg" alt="enter image description here"></a> <BR> Okay, whatever. Let function behaviour inside a FEM quadrilateral be approximated by a bilinear interpolation between the function values at the vertices or nodal points, let $T$ be such a function: $$ T = A_T + B_T.\xi + C_T.\eta + D_T.\xi.\eta $$ It is assumed that the <I>same parameters</I> $(\xi,\eta)$ are employed for the function $T$ as well as for the (global Cartesian) coordinates $x$ and $y$. 
Herewith it is expressed that we have, as with the linear triangle, an <I>isoparametric transformation</I>: $$ x = A_x + B_x.\xi + C_x.\eta + D_x.\xi.\eta \\ y = A_y + B_y.\xi + C_y.\eta + D_y.\xi.\eta $$ Now specify any function $T$ for the vertices and the basic functions: $$ \begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{bmatrix} = \begin{bmatrix} +1 &amp; -1 &amp; -1 &amp; +1 \\ +1 &amp; +1 &amp; -1 &amp; -1 \\ +1 &amp; -1 &amp; +1 &amp; -1 \\ +1 &amp; +1 &amp; +1 &amp; +1 \end{bmatrix} \begin{bmatrix} A_T \\ B_T \\ C_T \\ D_T \end{bmatrix} \quad \mbox{ F.E } \leftarrow \mbox{ F.D. } $$ It is remarked that the above matrix is <I>orthogonal</I>, i.e. its columns are mutually perpendicular. This also means that the "condition" of the matrix is optimal. Even better, it is similar to the well-known $4\times 4$ <A HREF="https://en.wikipedia.org/wiki/Hadamard_matrix" rel="nofollow noreferrer">Hadamard matrix</A>. Apart from a scaling factor $4$, the inverse matrix is equal to the transpose, which can be determined easily: $$ \begin{bmatrix} A_T \\ B_T \\ C_T \\ D_T \end{bmatrix} = \frac{1}{4} \begin{bmatrix} +1 &amp; +1 &amp; +1 &amp; +1 \\ -1 &amp; +1 &amp; -1 &amp; +1 \\ -1 &amp; -1 &amp; +1 &amp; +1 \\ +1 &amp; -1 &amp; -1 &amp; +1 \end{bmatrix} \begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{bmatrix} \quad \mbox{ F.D } \leftarrow \mbox{ F.E. 
} $$ Writing out the matrix notation: $$ \begin{array}{l} A_T = \frac{1}{4} ( + T_1 + T_2 + T_3 + T_4 ) \\ B_T = \frac{1}{4} ( - T_1 + T_2 - T_3 + T_4 ) \\ C_T = \frac{1}{4} ( - T_1 - T_2 + T_3 + T_4 ) \\ D_T = \frac{1}{4} ( + T_1 - T_2 - T_3 + T_4 ) \end{array} $$ Hence $A_T,B_T,C_T,D_T$ are equal to local partial derivatives: $$ T(0) = A_T \quad ; \quad \frac{\partial T}{\partial \xi}(0) = B_T \quad ; \quad \frac{\partial T}{\partial \eta}(0) = C_T \quad ; \quad \frac{\partial^2 T}{\partial \xi \, \partial \eta}(0) = D_T $$ These coefficients form a Finite Difference formulation: $$ T = T(0) + \frac{\partial T}{\partial \xi}(0).\xi + \frac{\partial T}{\partial \eta}(0).\eta + \frac{\partial^2 T}{\partial \xi \, \partial \eta}(0).\xi.\eta $$ Shape functions may be constructed as follows: $$ T = N_1.T_1 + N_2.T_2 + N_3.T_3 + N_4.T_4 = \begin{bmatrix} N_1 &amp; N_2 &amp; N_3 &amp; N_4 \end{bmatrix} \begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{bmatrix} =\\ \begin{bmatrix} 1 &amp; \xi &amp; \eta &amp; \xi.\eta \end{bmatrix} \begin{bmatrix} A_T \\ B_T \\ C_T \\ D_T \end{bmatrix} = \begin{bmatrix} 1 &amp; \xi &amp; \eta &amp; \xi.\eta \end{bmatrix} \frac{1}{4} \begin{bmatrix} +1 &amp; +1 &amp; +1 &amp; +1 \\ -1 &amp; +1 &amp; -1 &amp; +1 \\ -1 &amp; -1 &amp; +1 &amp; +1 \\ +1 &amp; -1 &amp; -1 &amp; +1 \end{bmatrix} \begin{bmatrix} T_1 \\ T_2 \\ T_3 \\ T_4 \end{bmatrix} \quad \Longrightarrow \\ \begin{bmatrix} N_1 &amp; N_2 &amp; N_3 &amp; N_4 \end{bmatrix} = \begin{bmatrix} 1 &amp; \xi &amp; \eta &amp; \xi.\eta \end{bmatrix} \frac{1}{4} \begin{bmatrix} +1 &amp; +1 &amp; +1 &amp; +1 \\ -1 &amp; +1 &amp; -1 &amp; +1 \\ -1 &amp; -1 &amp; +1 &amp; +1 \\ +1 &amp; -1 &amp; -1 &amp; +1 \end{bmatrix} \quad \Longrightarrow \\ \begin{array}{l} N_1 = \frac{1}{4}(1 - \xi - \eta + \xi.\eta) = \frac{1}{4}(1 - \xi).(1 - \eta)\\ N_2 = \frac{1}{4}(1 + \xi - \eta - \xi.\eta) = \frac{1}{4}(1 + \xi).(1 - \eta)\\ N_3 = \frac{1}{4}(1 - \xi + \eta - \xi.\eta) = \frac{1}{4}(1 - \xi).(1 + \eta)\\ N_4 = 
\frac{1}{4}(1 + \xi + \eta + \xi.\eta) = \frac{1}{4}(1 + \xi).(1 + \eta) \end{array} $$ Any shape function $N_k$ has a value $1$ at vertex $(k)$ and it is zero at all other vertices. The global and local coordinates of an arbitrary quadrilateral are related to each other via: $$ \begin{array}{l} x = N_1.x_1 + N_2.x_2 + N_3.x_3 + N_4.x_4 \\ y = N_1.y_1 + N_2.y_2 + N_3.y_3 + N_4.y_4 \end{array} $$ The equivalent Finite Difference representation is: $$ \begin{array}{l} x(\xi,\eta) = A_x + B_x.\xi + C_x.\eta + D_x.\xi.\eta \\ y(\xi,\eta) = A_y + B_y.\xi + C_y.\eta + D_y.\xi.\eta \end{array} \qquad \mbox{ where: } $$ $$ \begin{array}{ll} A_x = \frac{1}{4} ( x_1 + x_2 + x_3 + x_4 ) &amp; ; \quad A_y = \frac{1}{4} ( y_1 + y_2 + y_3 + y_4 ) \\ B_x = \frac{1}{4} \left[(x_2 + x_4) - (x_1 + x_3)\right] &amp; ; \quad B_y = \frac{1}{4} \left[(y_2 + y_4) - (y_1 + y_3)\right] \\ C_x = \frac{1}{4} \left[(x_3 + x_4) - (x_1 + x_2)\right] &amp; ; \quad C_y = \frac{1}{4} \left[(y_3 + y_4) - (y_1 + y_2)\right] \\ D_x = \frac{1}{4} ( + x_1 - x_2 - x_3 + x_4 ) &amp; ; \quad D_y = \frac{1}{4} ( + y_1 - y_2 - y_3 + y_4 ) \end{array} $$ The origin of the local $(\xi,\eta)$ coordinate system is determined by $\xi=0$ and $\eta=0$. Hence by $(A_x,A_y) = (\overline{x},\overline{y})$ = midpoint = centroid.<BR> The $\xi$-axis is defined by $-1 &lt; \xi &lt; +1$ and $\eta = 0$. Hence by the (dashed) line $(x,y) = (A_x,A_y) + \xi. (B_x,B_y) $.<BR> The $\eta$-axis is defined by $-1 &lt; \eta &lt; +1$ and $\xi = 0$. Hence by the (dashed) line $(x,y) = (A_x,A_y) + \eta. (C_x,C_y) $.</p>
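The matrix identities and shape-function properties above can be verified mechanically; here is a small sketch in Python with NumPy (the node ordering $(-1,-1),(+1,-1),(-1,+1),(+1,+1)$ is read off from the formulas for $N_1,\dots,N_4$):

```python
import numpy as np

# The 4x4 Hadamard-like matrix mapping the F.D. coefficients (A,B,C,D)
# to the nodal values T1..T4:
H = np.array([[ 1, -1, -1,  1],
              [ 1,  1, -1, -1],
              [ 1, -1,  1, -1],
              [ 1,  1,  1,  1]], dtype=float)

# Orthogonal columns: H^T H = 4 I, so the inverse is the transpose over 4.
assert np.allclose(H.T @ H, 4 * np.eye(4))
assert np.allclose(np.linalg.inv(H), H.T / 4)

def N(xi, eta):
    """Bilinear shape functions for nodes at (-1,-1), (1,-1), (-1,1), (1,1)."""
    return 0.25 * np.array([(1 - xi) * (1 - eta), (1 + xi) * (1 - eta),
                            (1 - xi) * (1 + eta), (1 + xi) * (1 + eta)])

# Partition of unity, and the nodal (Kronecker delta) property:
assert np.isclose(N(0.3, -0.6).sum(), 1.0)
for k, (xi, eta) in enumerate([(-1, -1), (1, -1), (-1, 1), (1, 1)]):
    assert np.allclose(N(xi, eta), np.eye(4)[k])
print("shape-function identities verified")
```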
3,401
<p>Consider an idealized classical particle confined to a two-dimensional surface that is frictionless. The particle's initial position on the surface is randomly selected, a nonzero velocity vector is randomly assigned to it, and the direction of the particle's movement changes only at the surface boundaries where perfectly elastic collisions occur (i.e. there is no information loss over time).</p> <p>My question is - Does there exist such a bounded surface where the probability of the particle visiting any given position at some time 't', P(x,y,t), becomes equal to unity at infinite time? In other words, no matter where we initialize the particle, and no matter the velocity vector assigned to it, are there surfaces that will always be 'everywhere accessible'?</p> <p>(Once again, I welcome any help asking this question in a more appropriate manner...)</p>
Joseph O'Rourke
6,094
<p>To supplement Andy's answer, there is a recent survey by Laura Demarco, "The conformal geometry of billiards," <a href="http://www.ams.org/journals/bull/2011-48-01/S0273-0979-2010-01322-7/home.html" rel="nofollow"><em>Bulletin AMS</em> 48(1), Jan 2011, pp. 33-52</a>. She defines a billiard table as <em>ergodically optimal</em> if, for each direction $\theta$, either every trajectory that avoids vertices is periodic, or every trajectory that avoids vertices is "uniformly distributed." It may be that your 'everywhere accessible' criterion is adequately captured by her definition of uniformly distributed. Ergodically optimal dynamics are also called <a href="http://journals.cambridge.org/action/displayAbstract;jsessionid=8FAA712C48489147DC7D334DAA03DB20.tomcat1?fromPage=online&amp;aid=2660296" rel="nofollow"><em>Veech's dichotomy</em></a>.</p> <p>Any billiard table that can be tiled by squares is ergodically optimal; in particular, the square is (every rational $\theta$ is periodic, every irrational $\theta$ will lead to the particle "spending equal time in regions with equal area" [modulo avoiding vertices]). The regular $n$-gon is ergodically optimal.</p> <p>There are examples that have billiard trajectories that are dense but not uniformly distributed.</p>
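As a rough numerical illustration of the "uniformly distributed" case for the square table: the standard unfolding trick turns a billiard trajectory in $[0,1]^2$ into straight-line motion folded back by a tent map, and for an irrational direction the visited positions spread out evenly. A Python sketch (the particular direction, sampling times and bin counts are arbitrary choices of mine):

```python
import math

def fold(p):
    # Unfold-and-fold: reflections in [0, 1] = a tent map of period 2 on the line.
    p %= 2.0
    return p if p <= 1.0 else 2.0 - p

vx, vy = math.cos(1.0), math.sin(1.0)   # a direction with irrational slope tan(1)
x0, y0 = 0.2, 0.7
bins = [[0] * 4 for _ in range(4)]      # 4x4 grid of equal-area cells
N = 400_000
for k in range(N):
    t = 0.01 * k
    x, y = fold(x0 + vx * t), fold(y0 + vy * t)
    bins[min(int(x * 4), 3)][min(int(y * 4), 3)] += 1

flat = [c for row in bins for c in row]
print(min(flat), max(flat))             # every cell receives roughly N/16 points
```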
4,253,485
<p>I understand that if you roll <span class="math-container">$10$</span> <span class="math-container">$6$</span>-faced dice (D6), the number of different outcomes should be <span class="math-container">$60,466,176$</span>, right? (meaning a <span class="math-container">$1/60,466,176$</span> probability)</p> <p>But if you are playing a board game, you don't care about that, right? What matters is the sum of the numbers in the end, so it should be <span class="math-container">$1/60$</span> anyway, right? Because it's a sum, not <span class="math-container">$10$</span> individual rolls.</p> <p>So a <span class="math-container">$60$</span>-face die (D60) should be the same as rolling all the dice, right? If not, can someone please explain? I've been looking all over the net for a compelling answer but there is none. I've seen that a D60 can be used as <span class="math-container">$2$</span> D6, but also there is no explanation for this.</p> <p>If not, what would be the number of faces needed for an equivalent single die that can replace <span class="math-container">$10$</span> D6? I know so far the D120 is the last possible die that can be made, but it doesn't matter, I just want to know too...</p> <p>EDIT: Sorry, I forgot to mention that in some tabletop games dice can have <span class="math-container">$0$</span>, so you'd treat all <span class="math-container">$6$</span>-sided dice as going from <span class="math-container">$0$</span> to <span class="math-container">$5$</span>, making a minimum value of <span class="math-container">$0$</span> possible in all <span class="math-container">$10$</span> rolls. But still, that changes the maximum in the same way as the minimum: now the max you get with the 6-sided dice is <span class="math-container">$50$</span>, while on the 60-sided die it would be <span class="math-container">$59$</span> if we apply a <span class="math-container">$-1$</span> to all results... it gets messy, and this detail doesn't really change much</p>
Oscar Lanzi
248,217
<p>When you roll a 60-face die, you are in effect rolling ten six-face dice but counting only one at a time, which is very different from counting the entire combination of ten independent six-face dice.</p> <p>You can approximate a fair 60-face die with a red six-face die, a blue one and several white ones, call the number of white dice <span class="math-container">$n$</span>. Interpret a roll as follows:</p> <ul> <li><p>Count the actual outcome of the red die.</p> </li> <li><p>Count the blue die as 1 if its outcome is odd, 2 if even.</p> </li> <li><p>Sum the white dice and divide by 5; count the remainder. You may add 1 to each remainder, or exchange a remainder of 0 for 5, if you want this number to range from 1 to 5 instead of 0 to 4.</p> </li> </ul> <p>You then have 60 ordered triples whose probabilities are nearly equal for reasonable values of <span class="math-container">$n$</span>. The only deviation is associated with the remainder from the white-dice sum divided by <span class="math-container">$5$</span> being congruent with <span class="math-container">$n\bmod 5$</span>. In that case the probability is increased by a factor of <span class="math-container">$(6^n+4)/(6^n-1)$</span> compared with the other remainders (which all come out equal). This corresponds to an excess of just over 2% with <span class="math-container">$n=3$</span> and less than 0.4% with <span class="math-container">$n=4$</span>. Each increment of <span class="math-container">$n$</span> by <span class="math-container">$1$</span> reduces the deviation by a factor of six.</p>
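The claimed deviation can be checked by brute-force enumeration. A short Python sketch for $n=3$ white dice, counting the remainder of the sum modulo $5$:

```python
from itertools import product
from collections import Counter

n = 3  # number of white dice
counts = Counter(sum(roll) % 5 for roll in product(range(1, 7), repeat=n))
for r in range(5):
    print(r, counts[r])
```

The remainder congruent to $n \bmod 5$ occurs $(6^n+4)/5$ times against $(6^n-1)/5$ for each of the others (44 versus 43 for $n=3$), reproducing the excess factor $(6^n+4)/(6^n-1)$ quoted above.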
2,828,023
<p><strong>Four cards are dealt off the top of a well-shuffled deck. Find the chance that:</strong></p> <p><strong>You get a queen or a king.</strong> </p> <p>The solution shows $ 1- \frac { \binom {44}{4}}{ \binom {52}{4}}$ </p> <p>I get that in this case it's easier to use the complement rule; however, I'm trying another method and I got the following:</p> <p>$ \binom {4}{1} \cdot \frac {4}{52} \cdot \frac {48}{51} \cdot \frac {47}{50} \cdot \frac {46}{49} + \binom {4}{1} \cdot \frac {4}{52} \cdot \frac {48}{51} \cdot \frac {47}{50} \cdot \frac {46}{49} - \left( \binom {4}{1} \cdot \frac {4}{52} \cdot \frac {48}{51} \cdot \frac {47}{50} \cdot \frac {46}{49} \right)^2$</p> <p>My reasoning is P(1 Queen or 1 King) = P(1 Queen) + P(1 King) - P(1 Queen)$\times$P(1 King)</p> <p>The answer that I got is about 0.2499, which is different from the solution obtained from the complement method. What did I miss here?</p> <p>Thank you!</p>
Boyku
567,523
<p>If the deal has no Queens and no Kings, the four cards are chosen among the 44 remaining, and we have:</p> <p>$ \binom {52-8}{4} $ possibilities among $ \binom {52}{4} $.</p> <p>Hence, for the chance of getting some queens or some kings, we get $1- \frac { \binom {44}{4}}{ \binom {52}{4}}$, the quoted solution.</p>
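The complement-method value is easy to evaluate and cross-check numerically; a quick Python sketch (the Monte Carlo part encodes the eight queens/kings as card indices $0$ through $7$):

```python
from math import comb
import random

p = 1 - comb(44, 4) / comb(52, 4)   # complement method: 1 - P(no queen, no king)
print(f"{p:.4f}")

# Monte Carlo cross-check: a hand contains a queen or king
# exactly when its smallest card index is below 8.
random.seed(1)
trials = 100_000
hits = sum(min(random.sample(range(52), 4)) < 8 for _ in range(trials))
print(hits / trials)
```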
4,417,576
<p>Say we divide 1858.54 cm by 5 -- where 5 is an exact number. We get the quotient 371.708.</p> <p>According to the rules of significant figures, we would write 371.708 cm, since &quot;1858.54&quot; has 6 significant figures, and 5, being an exact number, doesn't count.</p> <p>My question is, how do you make sense of <strong>371.708 cm</strong>, which is more precise than the original? That is, the quotient is precise to the nearest thousandth while the original is precise to the nearest hundredth. To me that doesn't make sense.</p>
Ross Millikan
1,827
<p>Significant figures are a heuristic for keeping track of accuracy in a calculation. If you want to be careful, you need to assign an uncertainty to each number and keep track of how they build up. You also need to decide whether you want the error band on your result to be a worst case or some sort of &quot;<span class="math-container">$3 \sigma$</span>&quot; based on assumptions of the distribution of errors in your input. Often we assume normal errors because they are easy to compute with, but real world errors tend to have longer tails than that.</p> <p>For your specific case, absent other information we assume <span class="math-container">$1858.54$</span> is <span class="math-container">$\pm 0.005$</span>. If we divide by an exact <span class="math-container">$5$</span> we get <span class="math-container">$371.708 \pm 0.001$</span>. This is not quite as good as a <span class="math-container">$6$</span> significant figure <span class="math-container">$371.708$</span>, which should be <span class="math-container">$\pm 0.0005$</span> but it is better than <span class="math-container">$371.71\pm 0.005$</span>. For many purposes we do not care to this level of detail and we just track significant figures. If we do care, give the result as <span class="math-container">$371.708\pm 0.001$</span> and indicate whether this is a hard limit (like if the original <span class="math-container">$1858.54$</span> was measured with a graduated scale) or has some chance of being outside the range.</p>
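The propagation described above is a one-liner to reproduce; here is a minimal Python sketch (taking the quoted reading as $\pm$ half of its last displayed digit, as in the answer):

```python
x, dx = 1858.54, 0.005   # a value quoted to 0.01 cm is taken as +/- 0.005
y, dy = x / 5, dx / 5    # dividing by an exact 5 scales the uncertainty too
print(y, "+/-", dy)      # about 371.708 +/- 0.001
```

The result sits between a true six-significant-figure value ($\pm 0.0005$) and a five-figure one ($\pm 0.005$), which is exactly the point made above.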
119,492
<p>First I define a function, just a sum of a few sin waves at different angular frequencies:</p> <pre><code>ubdat = 50; ws = 10*{2, 5, 10, 20, 40} fn = Table[Sum[Sin[w*x], {w, ws}], {x, 0, ubdat, .001}]; pts = Length@fn ListPlot[fn, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] {20, 50, 100, 200, 400} </code></pre> <p><a href="https://i.stack.imgur.com/ZCcaO.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ZCcaO.png" alt="enter image description here"></a></p> <p>If I take the Fourier transform and scale it correctly, you can see the correct peaks:</p> <pre><code>fnft = Abs@Fourier@fn; fnftnormed = Table[{2*Pi*i/ubdat, fnft[[i]]}, {i, Length@fnft}]; ListPlot[fnftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p><a href="https://i.stack.imgur.com/adDJS.png" rel="noreferrer"><img src="https://i.stack.imgur.com/adDJS.png" alt="enter image description here"></a></p> <p>Now, I want to do a low pass filter on it, for, say, $\omega_c=140$. This should get rid of the peaks at 200 and 400, ideally. Doing it this way returns the same plots as above:</p> <pre><code>fnfilt = LowpassFilter[fn, 140]; ListPlot[fnfilt, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] fnfiltft = Abs@Fourier@fnfilt; fnfiltftnormed = Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}]; ListPlot[fnfiltftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p>I assume the problem is something to do with defining SampleRate, but the documentation explaining how it's defined or how to use it on the <a href="https://reference.wolfram.com/language/ref/LowpassFilter.html" rel="noreferrer">LowpassFilter page</a> is <em>very</em> sparse:</p> <blockquote> <p>By default, SampleRate->1 is assumed for images as well as data. For a sampled sound object of sample rate of r, SampleRate->r is used. 
With SampleRate->r, the cutoff frequency should be between 0 and $r*\pi$.</p> </blockquote> <p>It appears to have a broken link at the bottom, so maybe that had something helpful. The page for SampleRate itself has even less info.</p> <p>My naive attempt at choosing a sample rate would be dividing the number of samples by the total range, so in this case, <code>Floor[pts/ubdat]=1000</code>. Using this <em>does</em> affect the FT, but not a whole lot:</p> <pre><code>fnfilt = LowpassFilter[fn, 140, SampleRate -&gt; 1000]; ListPlot[fnfilt, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] fnfiltft = Abs@Fourier@fnfilt; fnfiltftnormed = Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}]; ListPlot[fnfiltftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p><a href="https://i.stack.imgur.com/XUqiC.png" rel="noreferrer"><img src="https://i.stack.imgur.com/XUqiC.png" alt="enter image description here"></a></p> <p>So what am I missing? I've tried googling for some sort of guide on using filters in Mathematica, but I can't find anything and it's very frustrating.</p>
bill s
1,783
<p>Actually, LowpassFilter is pretty easy to use, if you understand the parameters it needs. For instance, replacing the OPs line with</p> <pre><code>fnfilt = LowpassFilter[fn, 140, 200, SampleRate -&gt; 1000]; </code></pre> <p>gives exactly what you would expect.</p> <pre><code>ListPlot[fnfilt, Joined -&gt; True, PlotRange -&gt; {{0, 1000}, All}] fnfiltft = Abs@Fourier@fnfilt; fnfiltftnormed = Table[{2*Pi*i/ubdat, fnfiltft[[i]]}, {i, Length@fnfiltft}]; ListPlot[fnfiltftnormed, Joined -&gt; True, PlotRange -&gt; {{0, 500}, All}] </code></pre> <p><a href="https://i.stack.imgur.com/ydYMI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ydYMI.png" alt="enter image description here"></a></p> <p>LowpassFilter works fine in 1D as well as 2D. The filter needs to know the cutoff frequency (140 in this case), the sampling rate (1000 in the OPs example), and it needs help choosing the length of the filter. Longer filter lengths allow sharper cutoffs but are computationally more expensive. </p>
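For readers without Mathematica, the experiment can be approximated in Python with SciPy. This is only an analogue, not what <code>LowpassFilter</code> does internally: the 6th-order Butterworth design and the zero-phase <code>sosfiltfilt</code> application are my own choices, and the cutoff must be converted from the post's angular frequency (140 rad per unit of $x$) to Hz:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1000.0                                # samples per unit of x (step .001, as in the post)
t = np.arange(0, 50, 1 / fs)
ws = [20, 50, 100, 200, 400]               # angular frequencies, rad per unit of x
sig = sum(np.sin(w * t) for w in ws)

wc_hz = 140 / (2 * np.pi)                  # cutoff 140 rad -> about 22.3 Hz
sos = butter(6, wc_hz, fs=fs, output='sos')
filt = sosfiltfilt(sos, sig)               # zero-phase low-pass filtering

kept = sum(np.sin(w * t) for w in ws[:3])  # the components below the cutoff
err = np.max(np.abs(filt - kept)[1000:-1000])  # compare away from the edges
print(err)                                 # small: the 200 and 400 rad components are gone
```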
396,363
<p>I have two random variables $X,Y$ which are independent and uniformly distributed on $(\frac{1}{2},1]$. Then I consider two more random variables, $D=|X-Y|$ and $Z=\log\frac{X}{Y}$. I would like to calculate both the distribution functions $F_D(t), F_Z(t)$ and the density functions $f_D(t),f_Z(t)$.</p> <p>To do that, I think the first thing we need to do is to evaluate the density of the joint distribution of $X$ and $Y$, but I do not know how to do that.</p> <p>The only thing which is clear to me is the density and distribution function of $X$ and $Y$, because we know that they are uniform.</p> <p><strong>EDIT</strong>: Please read my own answer to this question. I need someone who can show me my calculation mistakes.</p>
Did
6,179
<blockquote> <p>The area of a $\color{red}{\mathrm{right\ triangle}}$ being half the product of its legs is all that one needs, really...</p> </blockquote> <p>For example, $D\geqslant x$ with $0\leqslant x\leqslant\frac12$ means that $(X,Y)$ is either in the triangle $\color{red}{T_x}$ with vertices $(\frac12+x,\frac12)$, $(1,\frac12)$, $(1,1-x)$ or in the triangle $\color{red}{S_x}$ symmetric of $\color{red}{T_x}$ with respect to the first diagonal. Both legs of $\color{red}{T_x}$ are $\frac12-x$, the triangles $\color{red}{T_x}$ and $\color{red}{S_x}$ are disjoint and with the same area, and the domain of $(X,Y)$ is the full square $(\frac12,1)\times(\frac12,1)$ with area $\frac14$, hence $$ P[D\geqslant x]=4\cdot2\cdot|\color{red}{T_x}|=(1-2x)^2=1-4x(1-x). $$ From here, one gets for every $0\leqslant x\leqslant\frac12$, $$ F_D(x)=4x(1-x),\qquad f_D(x)=4(1-2x). $$ Likewise, consider $R=\frac{Y}X$, then $\frac12\leqslant R\leqslant2$ and $R$ and $\frac1R$ are identically distributed. For every $\frac12\leqslant x\leqslant 1$, $R\leqslant x$ means that $(X,Y)$ is in the triangle $\color{red}{U_x}$ with vertices $(\frac1{2x},\frac12)$, $(1,\frac12)$, $(1,x)$. The legs of $\color{red}{U_x}$ are $1-\frac1{2x}$ and $x-\frac12$ hence $$ P[R\leqslant x]=4\cdot|\color{red}{U_x}|=\frac1{2x}(2x-1)^2. $$ Likewise, for every $1\leqslant x\leqslant2$, $$ P[R\geqslant x]=\frac1{2x}(2-x)^2. $$ (This can be proved either by considering the triangle $\color{red}{V_x}$ which corresponds to the event $R\geqslant x$, or directly using the equidistribution of $R$ and $\frac1R$.)</p> <p>Now, if $Z=\log R$ then, for every $-\log2\leqslant z\leqslant0$, $[Z\leqslant z]=[R\leqslant\mathrm e^z]$ hence $$ P[Z\leqslant z]=\frac12\mathrm e^{-z}(2\mathrm e^z-1)^2, $$ and for every $0\leqslant z\leqslant\log2$, $[Z\geqslant z]=[R\geqslant\mathrm e^z]$ hence $$ P[Z\geqslant z]=\frac12\mathrm e^{-z}(2-\mathrm e^z)^2. 
$$ From here, one gets, for every $-\log2\leqslant z\leqslant0$, $$ F_Z(z)=\tfrac12\mathrm e^{-z}(2\mathrm e^z-1)^2,\qquad f_Z(z)=2\mathrm e^z-\tfrac12\mathrm e^{-z}, $$ and, for every $0\leqslant z\leqslant\log2$, $$ F_Z(z)=1-\tfrac12\mathrm e^{-z}(2-\mathrm e^z)^2,\qquad f_Z(z)=2\mathrm e^{-z}-\tfrac12\mathrm e^{z}. $$ The invariance of $f_Z(z)$ by the symmetry $z\mapsto-z$, which follows from the fact that $R$ and $\frac1R$ are identically distributed, should be apparent.</p>
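The closed forms above are easy to sanity-check by simulation; a short Python sketch comparing empirical CDFs against $F_D(\tfrac14)=4\cdot\tfrac14\cdot\tfrac34=\tfrac34$ and $F_Z(0.2)=1-\tfrac12\mathrm e^{-0.2}(2-\mathrm e^{0.2})^2$ (since $R$ and $\frac1R$ are identically distributed, $\log(X/Y)$ and $\log(Y/X)$ have the same law, so the formulas apply to either):

```python
import math
import random

random.seed(0)
n = 200_000
pairs = [(random.uniform(0.5, 1.0), random.uniform(0.5, 1.0)) for _ in range(n)]

F_D = sum(abs(x - y) <= 0.25 for x, y in pairs) / n      # should be near 0.75
F_Z = sum(math.log(x / y) <= 0.2 for x, y in pairs) / n  # should be near F_Z(0.2)
print(F_D, F_Z)
```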
2,845,033
<p>Empirically this equality holds:</p> <p>$\gcd(10,8) = 2 $ and $\gcd(\operatorname{mod}(10,8),8) = \gcd(2,8) = 2 $</p> <p>$\gcd(18,9) = 9 $ and $\gcd( \operatorname{mod}(18,9),9) = \gcd(0,9) = 9 $</p> <p>I am stuck on how to prove it, though, and do not understand why this holds true.</p>
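A quick numerical stress test of the claimed identity, using Python's built-in <code>math.gcd</code>:

```python
import math
import random

random.seed(0)
for _ in range(10_000):
    a = random.randint(1, 10**9)
    b = random.randint(1, 10**9)
    # the identity behind the Euclidean algorithm: gcd(a, b) = gcd(a mod b, b)
    assert math.gcd(a, b) == math.gcd(a % b, b)
print("identity held for 10000 random pairs")
```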
dan_fulea
550,003
<p>Let us take $\beta=-2$. Then: $$ \begin{aligned} &amp;\frac{\partial}{\partial x}\Big(\ e^{-2x}\cdot y\ \Big) + \frac{\partial}{\partial y}\Big(\ e^{-2x}\cdot (-2x-y-3x^4+y^2)\ \Big) \\ &amp;\qquad = -2e^{-2x}\cdot y+ e^{-2x}\cdot (-1+2y)=-e^{-2x} \\ &amp;\qquad&lt;0\ . \end{aligned} $$ This is enough to apply Dulac's criteria.</p>
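The divergence computation can be double-checked numerically with central differences; a minimal Python sketch (the sample points are arbitrary choices of mine):

```python
import math

def g1(x, y):
    return math.exp(-2 * x) * y

def g2(x, y):
    return math.exp(-2 * x) * (-2 * x - y - 3 * x**4 + y**2)

h = 1e-5
for x, y in [(0.0, 0.0), (0.5, -1.2), (1.3, 2.0), (-0.7, 0.4)]:
    # central-difference approximation of d(g1)/dx + d(g2)/dy
    div = ((g1(x + h, y) - g1(x - h, y)) + (g2(x, y + h) - g2(x, y - h))) / (2 * h)
    assert abs(div + math.exp(-2 * x)) < 1e-6   # divergence = -exp(-2x) < 0
print("divergence equals -exp(-2x) at the sampled points")
```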
3,107,194
<p>Suppose <span class="math-container">$X$</span> is a compact metric space and <span class="math-container">$f: X \rightarrow \mathbb{R}$</span> is a function for which <span class="math-container">$f^{-1}([t,\infty))$</span> is closed for any real <span class="math-container">$t$</span>. Then <span class="math-container">$f$</span> achieves its maximum value on <span class="math-container">$X$</span>. </p> <p>I am first trying to prove that <span class="math-container">$f$</span> is bounded above. Suppose, for the sake of contradiction, that <span class="math-container">$f$</span> is not bounded above. So for all <span class="math-container">$M \in \Bbb{R}$</span>, there is an <span class="math-container">$x \in X$</span> such that <span class="math-container">$f(x) &gt; M$</span>. So there exists a sequence <span class="math-container">$\{x_n\}$</span> in <span class="math-container">$X$</span> such that for all <span class="math-container">$n \in \Bbb{N}$</span>, we have <span class="math-container">$f(x_n) &gt; n$</span>. So <span class="math-container">$x_n \in f^{-1}([n,\infty))$</span> for all <span class="math-container">$n \in \Bbb{N}$</span>. Since <span class="math-container">$f^{-1}([n,\infty)) \subseteq X$</span> are closed for all <span class="math-container">$n \in \Bbb{N}$</span>, <span class="math-container">$f^{-1}([n,\infty))$</span> are compact for all <span class="math-container">$n \in \Bbb{N}$</span>. Further, since each <span class="math-container">$f^{-1}([n,\infty))$</span> is nonempty, and <span class="math-container">$f^{-1}([n + 1,\infty)) \subseteq f^{-1}([n,\infty))$</span> for all <span class="math-container">$n \in \Bbb{N}$</span>, <span class="math-container">$\bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span> is not empty, and it is also closed. 
</p> <p>I am trying to show now that <span class="math-container">$\{x_n\}$</span> has a convergent subsequence <span class="math-container">$\{x_{n_k}\}$</span> that converges to some <span class="math-container">$x \in \bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span>, and reach a contradiction since <span class="math-container">$$\bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty)) = f^{-1}(\bigcap\limits_{n = 1}^\infty [n,\infty))$$</span> is empty (since <span class="math-container">$\bigcap\limits_{n = 1}^\infty [n,\infty)$</span> is empty).</p> <p>My issue is, how do I know <span class="math-container">$x \in \bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span>? I know its in <span class="math-container">$X$</span> since it is compact, but I don't know if its in <span class="math-container">$x \in \bigcap\limits_{n = 1}^\infty f^{-1}([n,\infty))$</span>. </p> <p>Help, please.</p>
Sangchul Lee
9,340
<p>Here is a slightly more general statement:</p> <blockquote> <p><strong>Claim.</strong> Let <span class="math-container">$K_1 \supseteq K_2 \supseteq \cdots $</span> be non-empty closed subsets of a metric space <span class="math-container">$X$</span>. If <span class="math-container">$x_n \in K_n$</span> for each <span class="math-container">$n$</span>, then every subsequential limit of <span class="math-container">$(x_n)$</span> lies in <span class="math-container">$K_{\infty} := \bigcap_{n=1}^{\infty} K_n$</span>.</p> </blockquote> <ul> <li><p>For instance, we may apply this to <span class="math-container">$K_n = f^{-1}([n, \infty))$</span>.</p></li> <li><p>This claim does not involve compactness. As such, <span class="math-container">$K_{\infty}$</span> can be empty. In particular, if <span class="math-container">$K_{\infty} = \varnothing$</span> then <span class="math-container">$(x_n)$</span> has no subsequential limit in <span class="math-container">$X$</span>.</p></li> </ul> <p><em>Proof.</em> Let <span class="math-container">$(x_{n_k})$</span> be a convergent subsequence of <span class="math-container">$(x_n)$</span> and write <span class="math-container">$x$</span> for its limit in <span class="math-container">$X$</span>. Then for each fixed <span class="math-container">$N$</span> and for each <span class="math-container">$k \geq N$</span>, we have <span class="math-container">$x_{n_k} \in K_{n_k} \subseteq K_{n_N}$</span>. So, by the closedness of <span class="math-container">$K_{n_N}$</span>, we have <span class="math-container">$x \in K_{n_N}$</span>. 
But since this is true for every <span class="math-container">$N$</span>, we must have <span class="math-container">$x \in \bigcap_{N=1}^{\infty} K_{n_N} = K_{\infty}$</span>.</p> <hr> <p>As for the last equality, we have</p> <blockquote> <p><strong>Lemma.</strong> If <span class="math-container">$A_1 \supseteq A_2 \supseteq \cdots$</span> is a sequence of sets, then for any subsequence <span class="math-container">$(A_{n_k})$</span> we have <span class="math-container">$\bigcap_{k=1}^{\infty} A_{n_k} = \bigcap_{n=1}^{\infty} A_n$</span>.</p> </blockquote> <p><em>Proof.</em> To show that two sets are equal, we may try to show that one contains the other.</p> <p>First, assume that <span class="math-container">$x \in \bigcap_{k=1}^{\infty} A_{n_k}$</span>. This means that <span class="math-container">$x \in A_{n_k}$</span> for each <span class="math-container">$k$</span>. Now, for each <span class="math-container">$n$</span>, there exists <span class="math-container">$k$</span> such that <span class="math-container">$n_k \geq n$</span>, and so, <span class="math-container">$x \in A_{n_k} \subseteq A_n$</span>. Since this is true for all <span class="math-container">$n$</span>, we have <span class="math-container">$x \in \bigcap_{n=1}^{\infty} A_n$</span>, and so, <span class="math-container">$\bigcap_{k=1}^{\infty} A_{n_k} \subseteq \bigcap_{n=1}^{\infty} A_n$</span>.</p> <p>The other direction <span class="math-container">$\bigcap_{n=1}^{\infty} A_n \subseteq \bigcap_{k=1}^{\infty} A_{n_k}$</span> is almost obvious. Indeed, if <span class="math-container">$x \in \bigcap_{n=1}^{\infty} A_n$</span>, then <span class="math-container">$x \in A_n$</span> for any <span class="math-container">$n$</span>, and so, <span class="math-container">$x \in A_{n_k}$</span> for any <span class="math-container">$k$</span>.</p>
3,451,782
<blockquote> <p>Let <span class="math-container">$M \in \mathcal{M}_n(\mathbb{R})$</span> be invertible. Find a matrix <span class="math-container">$N$</span> such that <span class="math-container">$M + N$</span> is not invertible and <span class="math-container">$\|N\|_2$</span> is minimum.</p> </blockquote> <p>It is easy to find matrices <span class="math-container">$N$</span> such that <span class="math-container">$M + N$</span> is not invertible (in fact, there are results that give necessary and sufficient conditions for that). My problem is with the requirement "<span class="math-container">$\|N\|_2$</span> is minimum". I really have no idea about how to approach this problem. I appreciate any hints.</p>
Fred
380,717
<p>No. We denote the unit in a <span class="math-container">$C^*$</span>-algebra by <span class="math-container">$e$</span>. Then take <span class="math-container">$a=ie$</span> and <span class="math-container">$b=e$</span> and <span class="math-container">$ \lambda=1-i.$</span> Then </p> <p><span class="math-container">$$a+ \lambda b = ie+(1-i)e=e.$$</span></p> <p>Thus <span class="math-container">$a+ \lambda b$</span> is positive, but <span class="math-container">$\lambda \ne \overline{\lambda}.$</span></p>
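The counterexample can be sanity-checked in the simplest $C^*$-algebra, $\mathbb{C}$ itself (where $e=1$ and "positive" means a nonnegative real); a two-line Python check:

```python
# a = ie, b = e, lambda = 1 - i, modelled with complex scalars (e = 1)
a, b, lam = 1j, 1, 1 - 1j
s = a + lam * b
print(s)                        # (1+0j): equals e, which is positive
assert s == 1                   # a + lambda*b = e
assert lam != lam.conjugate()   # yet lambda is not real
```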
3,451,782
<blockquote> <p>Let <span class="math-container">$M \in \mathcal{M}_n(\mathbb{R})$</span> be invertible. Find a matrix <span class="math-container">$N$</span> such that <span class="math-container">$M + N$</span> is not invertible and <span class="math-container">$\|N\|_2$</span> is minimum.</p> </blockquote> <p>It is easy to find matrices <span class="math-container">$N$</span> such that <span class="math-container">$M + N$</span> is not invertible (in fact, there are results that give necessary and sufficient conditions for that). My problem is with the requirement "<span class="math-container">$\|N\|_2$</span> is minimum". I really have no idea about how to approach this problem. I appreciate any hints.</p>
Sriram Balasubramanian
273,433
<p>It seems like one can conclude at least <span class="math-container">$a = a^*$</span> and <span class="math-container">$\lambda = \overline{\lambda}$</span> from just assuming that <span class="math-container">$a + \lambda b$</span> is self-adjoint, as follows. We have <span class="math-container">$a + \lambda b = a^* + \overline{\lambda}b$</span>. If <span class="math-container">$\overline{\lambda} \neq \lambda$</span>, then <span class="math-container">$b = \frac{1}{\overline{\lambda} - \lambda}(a - a^*)$</span>. This contradicts the hypothesis that <span class="math-container">$b$</span> does not belong to the operator system generated by <span class="math-container">$\{1, a\}$</span>. Thus <span class="math-container">$\lambda = \overline{\lambda}$</span>, which in turn implies that <span class="math-container">$a = a^*$</span>. Can anything more be concluded?</p>
4,626,019
<p>I recently encountered a quant interview question, which I think is not super hard, but still I am not very sure about my results.</p> <p>Could you guys give me some hints on it?</p> <p>The question goes as follows:</p> <blockquote> <p>An ant is put on a vertex of a simplex, and at each vertex, it has equal probability to move to any other one. Once it goes back to the original start point and also it has visited all the vertices, it stops.</p> <p>What is the expected number of steps for its move?</p> </blockquote> <p>The way I did is to consider a 'state', <span class="math-container">$(n,x)$</span>, where <span class="math-container">$n$</span> represents the number of vertices it has visited, and <span class="math-container">$x$</span> is a binary represents whether it is at the original or not. It should be a regular Markov Chain.</p> <p>Therefore, the ant starts at <span class="math-container">$(1,\text{True})$</span> state and move to <span class="math-container">$(2,\text{False})$</span> state with probability <span class="math-container">$1$</span> <span class="math-container">$\ldots$</span> <span class="math-container">$(4,\text{True})$</span> state is the only absorbing state.</p> <p>After some calculations, my result is <span class="math-container">$\mathbf{7}$</span>. I am wondering whether it is correct or not and do you guys have other insights?</p> <p>Appreciate the community!!!</p> <hr /> <p><strong>Edit.</strong> I'm sorry. I messed up with the matrix. The correct answer should be <span class="math-container">$8.5$</span> from my side.</p>
Sarvesh Ravichandran Iyer
316,409
<h5>A brief note on simplification</h5> <p>This question is related to the harmonic numbers because it can be formulated as a version of the coupon collector problem.</p> <p>Indeed, this is how it would work. The &quot;graph&quot; of the <span class="math-container">$n$</span>-simplex, when you treat each of the corners as vertices and each of the edges joining the vertices as graph edges, is in fact an <span class="math-container">$n+1$</span>-clique: a graph with <span class="math-container">$n+1$</span> vertices, in which every possible edge exists. For example, in a tetrahedron, every corner is connected to every other one via an edge.</p> <p>The advantage that this symmetry affords us is that, remarkably, it actually allows us to <em>eliminate the &quot;original&quot; coordinate</em> from your Markov chain.</p> <p>Here's why. Let us phrase everything in terms of the Markov chain given just by the random walk part (so you forget whether you are at the &quot;original&quot; vertex or not). You start from a vertex, get to every other vertex, and then return to the original vertex.</p> <p>We break this up into two key steps.</p> <ul> <li><p>Visiting every other vertex starting from the original.</p> </li> <li><p>Once you are at the vertex that was visited last, then you need to return to the original vertex.</p> </li> </ul> <p>Now, let <span class="math-container">$\tau$</span> be the stopping time in question, i.e. the time at which you have visited every vertex and then returned to the original vertex. Let <span class="math-container">$\tau_c$</span> be the time taken only to visit every vertex, and not to return. 
Let <span class="math-container">$\tau_d$</span> be the time taken to return from the last vertex to the original vertex.</p> <p>By the symmetry of the situation, we are afforded the following facts.</p> <ul> <li><p>The choice of original vertex <em>does not matter</em> because the graph is so symmetric!</p> </li> <li><p>The time <span class="math-container">$\tau_c$</span> is actually <em>equivalent</em> to the coupon-collector problem on <span class="math-container">$n$</span> coupons, but <em>you cannot pick the coupon you picked previously</em>, so that is a small tweak that needs to be added.</p> </li> <li><p>Once you complete <span class="math-container">$\tau_c$</span> and visit every vertex, the distribution of the &quot;last&quot; vertex is in fact <em>uniform</em> over all points other than the original point. So, <span class="math-container">$\tau_d$</span> becomes the answer to the following question: given a point uniformly picked from the vertices other than the original, what is the expected time for it to return to the original vertex?</p> </li> <li><p>Finally, <span class="math-container">$\tau_d$</span> is dependent only upon the Markov chain's values once <span class="math-container">$\tau_c$</span> occurs, and therefore by the strong Markov property, <span class="math-container">$\tau_d$</span> doesn't really depend on <span class="math-container">$\tau_c$</span>.</p> </li> </ul>
It is clear that <span class="math-container">$U$</span> is a (the) stationary distribution for the Markov chain given purely by the random walk on the vertices of the <span class="math-container">$n$</span>-simplex.</p> <p>Let <span class="math-container">$X_n$</span> denote this random walk on the vertex set <span class="math-container">$V$</span> and for <span class="math-container">$f : V \to \mathbb R$</span> write <span class="math-container">$\mathbb E_U[f]$</span> for the expectation of <span class="math-container">$f$</span> under the uniform distribution.</p> <p>Claim : Suppose that <span class="math-container">$X_0 \sim U$</span> (the original vertex is uniformly randomly chosen) , then <span class="math-container">$X_{\tau_c} \sim U$</span> (the &quot;last visited vertex&quot; is also uniformly distributed!).</p> <p>The proof is quite easy, and is done using the idea of &quot;transitivity&quot; or using the symmetry of the geometric figure that is the <span class="math-container">$n$</span>-simplex. Observe that <span class="math-container">$\mathbb P(X_{\tau_c} = y | X_0 = x) = \sum_{p \in P_{x,y}} \mathbb P(p | X_0 = x)$</span>, where</p> <ul> <li><p><span class="math-container">$P_{x,y}$</span> is the set of all paths from <span class="math-container">$x$</span> to <span class="math-container">$y$</span> which start at <span class="math-container">$x$</span> and visit every other vertex before visiting and ending at <span class="math-container">$y$</span>, and</p> </li> <li><p><span class="math-container">$\mathbb P(p | X_0 = x)$</span> is the probability that the Markov chain follows exactly the path <span class="math-container">$p$</span> (at least) starting from <span class="math-container">$X_0 = x$</span>. 
(Note that there are infinitely many such paths <span class="math-container">$p$</span> so the sum is infinite, but we can still work things out by truncating the sum to paths of length <span class="math-container">$\leq N$</span> and then letting <span class="math-container">$N \to \infty$</span>, something that I'll leave a keen reader to fill in).</p> </li> </ul> <p>Here's an important thing to keep in mind : <span class="math-container">$\mathbb P(p | X_0 =x)$</span> <em>depends only on the length of <span class="math-container">$p$</span></em> because every element of the transition matrix that isn't equal to <span class="math-container">$0$</span> is equal to <span class="math-container">$\frac 1n$</span>, so that <span class="math-container">$\mathbb P(p | X_0 = x) = \frac 1{n^{|p|}}$</span> where <span class="math-container">$|p|$</span> is the number of edges in <span class="math-container">$p$</span>.</p> <p>Let <span class="math-container">$y' \neq y, y' \neq x$</span> be another possible last vertex. We will <em>show that there is a bijection</em> between <span class="math-container">$P_{x,y}$</span> and <span class="math-container">$P_{x,y'}$</span>.</p> <p>How? Well, we will show a hidden bijection within the simplex itself, and this will transfer onto a bijection between the paths.</p> <p>Within an <span class="math-container">$n$</span>-simplex, imagine any three vertices <span class="math-container">$a,b,c$</span>. Suppose that you &quot;hold&quot; only the vertex <span class="math-container">$a$</span> in your hand, with the rest of the vertices jutting out. (You'll need to only imagine this for <span class="math-container">$2$</span> and <span class="math-container">$3$</span> dimensional simplices, the rest will be similar). 
Then, you can &quot;rotate&quot; the simplex in your hand, like a fan, and what this does is: it gives you a new orientation of the simplex, but <span class="math-container">$a$</span> remains <span class="math-container">$a$</span> while every other vertex goes to a different vertex.</p> <p>For example, imagine keeping a triangle in your hand, holding a vertex and then flipping the triangle about that vertex. Or holding one vertex of a tetrahedron and then rotating about it, so that all other vertices change position except this one.</p> <p>These &quot;visual&quot; depictions can, in fact, be translated into the following theorem, which I'm not going to prove, but I'm going to leave for those interested to verify.</p> <blockquote> <p>Theorem : Given a simplex with three distinct vertices <span class="math-container">$a,b,c$</span>, there is a <em>bijection</em> <span class="math-container">$\phi$</span> from the vertices of the simplex to itself such that <span class="math-container">$\phi(b) = c$</span> and the only vertex <span class="math-container">$x$</span> satisfying <span class="math-container">$\phi(x) = x$</span> is the point <span class="math-container">$a$</span>.</p> </blockquote> <p>Using this, one does as follows. Suppose that <span class="math-container">$p \in P_{x,y}$</span>. Then, using the theorem, we obtain a bijection <span class="math-container">$\phi_{x,y,y'}$</span> such that <span class="math-container">$\phi_{x,y,y'}(y) = y'$</span> and <span class="math-container">$\phi_{x,y,y'}(x) = x$</span> is the only fixed point. Create a new path <span class="math-container">$p'$</span> as follows : if <span class="math-container">$p = (x,p_1,p_2,\ldots, y)$</span> then <span class="math-container">$$ p' = (\phi_{x,y,y'}(x), \phi_{x,y,y'}(p_1),\phi_{x,y,y'}(p_2),\ldots, \phi_{x,y,y'}(y))$$</span></p> <p>One sees that <span class="math-container">$p' \in P_{x,y'}$</span> by the properties of <span class="math-container">$\phi_{x,y,y'}$</span>.
Furthermore, the map <span class="math-container">$p \to p'$</span> created from <span class="math-container">$\phi_{x,y,y'}$</span> is invertible, with inverse created similarly from <span class="math-container">$\phi_{x,y',y}$</span>. Also observe that this map preserves the length of the path <span class="math-container">$p$</span>!</p> <p>Putting all this together, we get that there is a bijection from <span class="math-container">$P_{x,y}$</span> to <span class="math-container">$P_{x,y'}$</span>, and because the map is length-preserving, it follows that <span class="math-container">$$ \sum_{P_{x,y}} \mathbb P(p|X_0 = x) = \sum_{n=1}^{\infty} \sum_{P_{x,y} : |p| = n} \mathbb P(p|X_0 = x) \\ \overset{\text{bij.}}{=} \sum_{n=1}^{\infty} \sum_{P_{x,y'} : |p'| = n} \mathbb P(p'|X_0 = x) = \sum_{P_{x,y'}} \mathbb P(p'|X_0 = x) $$</span></p> <p>This shows the required equality. Hence, we are done!</p> <hr /> <h5><span class="math-container">$\tau_c$</span> and <span class="math-container">$\tau_d$</span></h5> <p>For <span class="math-container">$\tau_c$</span> now, we just need to note that we are performing the coupon-collector problem with the only condition being that we cannot pick the same coupon consecutively. However, this doesn't change things very much, because if you break <span class="math-container">$\tau_c$</span> into the stopping times <span class="math-container">$t_1 &lt; t_2 &lt; \ldots &lt; t_{n+1}$</span>, where <span class="math-container">$t_i$</span> is the stopping time until you see <span class="math-container">$i$</span> unique vertices, then <span class="math-container">$t_0=t_1 = 0$</span> and <span class="math-container">$\mathbb E_U[\tau_c] = \sum_{i=0}^{n} \mathbb E_U[t_{i+1} - t_i]$</span>.</p> <p>The coupon-collector argument follows.
If <span class="math-container">$t_i$</span> has occurred, then from the current vertex you can go to any non-visited vertex with probability <span class="math-container">$\frac{n+1-i}{n}$</span> (of the <span class="math-container">$n+1$</span> vertices, <span class="math-container">$i$</span> have been visited, the current one among them, so <span class="math-container">$n+1-i$</span> of the <span class="math-container">$n$</span> neighbours are unvisited); otherwise you go to an already visited vertex. Considering the visit of a new vertex as a success, clearly <span class="math-container">$t_{i+1}-t_i$</span> is a geometric random variable with success probability <span class="math-container">$\frac{n+1-i}{n}$</span>. In particular, we see that its expectation is <span class="math-container">$\frac{n}{n+1-i}$</span>.</p> <p>All in all, we have that <span class="math-container">$$ \mathbb E_U[\tau_c] = \sum_{i=1}^{n} \mathbb E_U[t_{i+1} - t_i] = \sum_{i=1}^{n} \frac{n}{n+1-i} = n \sum_{i=1}^n \frac{1}{i} = nH_n $$</span></p> <p>What about <span class="math-container">$\tau_d$</span> now? Well, the claim that we have proven earlier is that, once <span class="math-container">$\tau_c$</span> has occurred, the resulting distribution over the rest of the vertices is <em>uniform</em>. Hence, we must find <span class="math-container">$\mathbb E_{U}[\tau_d]$</span>.</p> <p>However, it's easy to see that from any vertex <span class="math-container">$y \neq x$</span>, you either visit <span class="math-container">$x$</span> with probability <span class="math-container">$\frac 1n$</span> or don't with probability <span class="math-container">$\frac{n-1}{n}$</span>. That is, <span class="math-container">$\tau_d$</span> is <em>also</em> a geometric random variable but with success probability <span class="math-container">$\frac 1n$</span>. This leads to <span class="math-container">$\mathbb E_U[\tau_d] = n$</span>.</p> <p>Finally, the answer is <span class="math-container">$$ \mathbb E_{U}[\tau] = \mathbb E_U[\tau_c] + \mathbb E_{U}[\tau_d] = n(H_n+1) $$</span></p> <p>However, <span class="math-container">$E_{U}[\tau]$</span> actually equals <span class="math-container">$E[\tau|X_0 = x]$</span> for any value of <span class="math-container">$x$</span>.
To see why, if <span class="math-container">$x \neq y$</span> then one can use the same &quot;rotation&quot; argument (much more loosely) to obtain a bijection from paths starting at <span class="math-container">$x$</span> and ending at time <span class="math-container">$\tau$</span>, to paths starting at <span class="math-container">$y$</span> and ending at time <span class="math-container">$\tau$</span>, which is also length-preserving. Using the same &quot;path-break&quot; argument, one shows that <span class="math-container">$E[\tau|X_0 = x] = \mathbb E[\tau|X_0 = y]$</span> is independent of the start point, whence the equality follows from definition.</p> <p>Therefore, we get <span class="math-container">$$ \mathbb E[\tau|X_0 = x] = n(H_n+1) $$</span></p> <p>For example,</p> <ul> <li><p><span class="math-container">$n=2$</span> : <span class="math-container">$2(1+\frac{1}{2}+1) = 5$</span>.</p> </li> <li><p><span class="math-container">$n=3$</span> : <span class="math-container">$3(1+\frac{1}{2}+\frac 13 + 1) = 8.5$</span>.</p> </li> </ul> <p>and so on.</p>
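<p>As a sanity check of the closed form <span class="math-container">$n(H_n+1)$</span>, one can simulate the walk directly. The sketch below is my own (standard library only; the function names are mine, not from the argument above):</p>

```python
import random

def cover_and_return_time(n, trials=20000, seed=0):
    """Average number of steps for the uniform random walk on the
    (n+1)-clique (the vertex graph of the n-simplex) to visit every
    vertex and then return to its starting vertex."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        current, start, seen, steps = 0, 0, {0}, 0
        while len(seen) < n + 1:          # phase 1: cover all vertices
            current = rng.choice([v for v in range(n + 1) if v != current])
            seen.add(current)
            steps += 1
        while current != start:           # phase 2: return to the start
            current = rng.choice([v for v in range(n + 1) if v != current])
            steps += 1
        total += steps
    return total / trials

def harmonic(n):
    return sum(1 / i for i in range(1, n + 1))

def predicted(n):
    return n * (harmonic(n) + 1)          # n(H_n + 1)
```

<p>With the fixed seed, the simulated averages agree with the formula's predictions for small <span class="math-container">$n$</span> to within sampling error.</p>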
973,101
<p>I am trying to find a way to generate random points uniformly distributed on the surface of an ellipsoid.</p> <p>If it were a sphere there is a neat way of doing it: Generate three $N(0,1)$ variables $\{x_1,x_2,x_3\}$, calculate the distance from the origin</p> <p>$$d=\sqrt{x_1^2+x_2^2+x_3^2}$$</p> <p>and calculate the point </p> <p>$$\mathbf{y}=(x_1,x_2,x_3)/d.$$</p> <p>It can then be shown that the points $\mathbf{y}$ lie on the surface of the sphere and are uniformly distributed on the sphere surface, and the argument that proves it is just one word, "isotropy". No preferred direction.</p> <p>Suppose now we have an ellipsoid</p> <p>$$\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}=1$$</p> <p>How about generating three $N(0,1)$ variables as above, calculating</p> <p>$$d=\sqrt{\frac{x_1^2}{a^2}+\frac{x_2^2}{b^2}+\frac{x_3^2}{c^2}}$$</p> <p>and then using $\mathbf{y}=(x_1,x_2,x_3)/d$ as above. That way we get points guaranteed on the surface of the ellipsoid but will they be uniformly distributed? How can we check that?</p> <p>Any help greatly appreciated, thanks.</p> <p>PS I am looking for a solution without accepting/rejecting points, which is kind of trivial.</p> <p>EDIT:</p> <p>Switching to polar coordinates, the surface element is $dS=F(\theta,\phi)\ d\theta\ d\phi$ where $F$ is expressed as $$\frac{1}{4} \sqrt{r^2 \left(16 \sin ^2(\theta ) \left(a^2 \sin ^2(\phi )+b^2 \cos ^2(\phi )+c^2\right)+16 \cos ^2(\theta ) \left(a^2 \cos ^2(\phi )+b^2 \sin ^2(\phi )\right)-r^2 \left(a^2-b^2\right)^2 \sin ^2(2 \theta ) \sin ^2(2 \phi )\right)}$$</p>
Stéphane Laurent
38,217
<ul> <li><p>Simulate some points on the sphere (with arbitrary dimension) <span class="math-container">$x_1$</span>, <span class="math-container">$x_2$</span>, <span class="math-container">$...$</span></p> </li> <li><p>Take the upper Cholesky factor <span class="math-container">$U$</span> of the shape matrix <span class="math-container">$A$</span> of the ellipsoid (<span class="math-container">$x'Ax = r^2$</span>)</p> </li> <li><p>Then the solution <span class="math-container">$Y$</span> of <span class="math-container">$UY = (x_1 \, x_2 \, ...)$</span> is a matrix of uniformly random points on the surface of the ellipsoid.</p> </li> </ul>
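<p>A short numerical sketch of this recipe (my own code, assuming NumPy; here the axis-aligned ellipsoid <span class="math-container">$x^2/a^2+y^2/b^2+z^2/c^2=1$</span> has shape matrix <span class="math-container">$A=\operatorname{diag}(1/a^2,1/b^2,1/c^2)$</span> and <span class="math-container">$r=1$</span>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 2.0, 1.0, 0.5                   # semi-axes (illustrative values)
A = np.diag([1/a**2, 1/b**2, 1/c**2])     # shape matrix: y' A y = 1

# Step 1: points on the unit sphere (normalised Gaussians), one per column.
X = rng.standard_normal((3, 1000))
X /= np.linalg.norm(X, axis=0)

# Step 2: upper Cholesky factor U of A, so that A = U' U.
U = np.linalg.cholesky(A).T

# Step 3: solve U Y = X; each column of Y lies on the ellipsoid.
Y = np.linalg.solve(U, X)

# Each column y then satisfies y' A y = x' x = 1.
vals = np.einsum('ik,ij,jk->k', Y, A, Y)
```

<p>The check at the end confirms that every column of <span class="math-container">$Y$</span> satisfies <span class="math-container">$y'Ay=1$</span>, i.e. lands on the ellipsoid.</p>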
913,239
<p>Given a circle with known center $c$, known radius $r$ and perimeter point $x$: $$ (x - c_x)^2 + (y - c_y)^2 = r^2 $$ with a tangent line that also goes through a point $p$ lying outside the circle. How do I find the point $x$ at which the line touches the circle?</p> <p>Given that the tangent line is orthogonal to the vector $(x-c)$ and also that the vector $(x-p)$ lies on the tangent line we have $(x-c) \cdot (x-p) = 0$ which can be expanded to:</p> <p>$$ (x - c_x) (x - p_x) + (y - c_y) (y - p_y) = 0 $$</p> <p>Thus my question is: How do I find the point $x$?</p>
loldrup
84,883
<p>Inspired by the <a href="http://www.mathopenref.com/consttangents.html" rel="nofollow noreferrer">visual proof</a> supplied by <a href="https://math.stackexchange.com/users/43949/angryavian">angryavian</a> I have derived this proof:</p> <p>Lets define the middle point between $c$ and $p$: $$ m = ((c - p) / 2) + p $$</p> <p>And the distance from $m$ to $p$: $$ r_m = dist(m - p) $$</p> <p>Now we can define the circle with center $m$ and radius $r_m$: $$ (x - m_x)^2 + (y - m_y)^2 = r_m^2 $$</p> <p>This gives us two equations both containing the unknown variables $x$ and $y$. First we transform both equations to be equal to zero: $$ (x - c_x)^2 + (y - c_y)^2 - r^2 = 0 $$</p> <p>$$ (x - m_x)^2 + (y - m_y)^2 - r_m^2 = 0 $$</p> <p>Then we equal the two equations to each other: $$ (x - c_x)^2 + (y - c_y)^2 - r^2 = (x - m_x)^2 + (y - m_y)^2 - r_m^2 $$</p> <p>Now we isolate $x$: $$ x = \dfrac {c_x^2-c_x p_x+c_y^2-c_y p_y-c_y y+p_y y-r^2}{c_x-p_x} $$</p> <p>Inserting this equation into the equation for the circle with center $c$ and solving for $y$ we get two solutions: $$ y_1 = \dfrac {c_y k + r^2 (p_y - c_y) + \sqrt{-r^2 + k} \cdot r (c_x - p_x)} {k} $$ $$ y_2 = \dfrac {c_y k + r^2 (p_y - c_y) - \sqrt{-r^2 + k} \cdot r (c_x - p_x)} {k} $$</p> <p>Where $k = (c_x - p_x)^2 + (c_y - p_y)^2$</p> <p>These can then be inserted into the equation for $x$ to derive the corresponding solutions for $x$: $$ x_1 = \dfrac {c_x^2-c_x p_x+c_y^2-c_y p_y-c_y \left( \dfrac {c_y k + r^2 (p_y - c_y) + \sqrt{-r^2 + k} \cdot r (c_x - p_x)} {k} \right) +p_y \left( \dfrac {c_y k + r^2 (p_y - c_y) + \sqrt{-r^2 + k} \cdot r (c_x - p_x)} {k} \right)-r^2}{c_x-p_x} $$</p> <p>and: $$ x_2 = \dfrac {c_x^2-c_x p_x+c_y^2-c_y p_y-c_y \left( \dfrac {c_y k + r^2 (p_y - c_y) - \sqrt{-r^2 + k} \cdot r (c_x - p_x)} {k} \right) +p_y \left( \dfrac {c_y k + r^2 (p_y - c_y) - \sqrt{-r^2 + k} \cdot r (c_x - p_x)} {k} \right)-r^2}{c_x-p_x} $$</p> <p>Which can be shortened to: $$ x_1 = \dfrac {c_x k + r^2 (p_x - c_x) - 
\sqrt{-r^2 + k} \cdot r (c_y - p_y)} {k} $$ $$ x_2 = \dfrac {c_x k + r^2 (p_x - c_x) + \sqrt{-r^2 + k} \cdot r (c_y - p_y)} {k} $$</p>
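<p>As a quick numerical check of the shortened formulas (my own snippet; the concrete values of $c$, $p$ and $r$ are arbitrary choices with $p$ outside the circle), both points should lie on the circle and satisfy the tangency condition $(x-c)\cdot(x-p)=0$:</p>

```python
import math

# Illustrative data: circle centre (cx, cy), radius r, external point (px, py).
cx, cy, r = 1.0, 2.0, 2.0
px, py = 6.0, 5.0

k = (cx - px)**2 + (cy - py)**2      # must exceed r^2 for p outside the circle
s = math.sqrt(k - r**2) * r          # the common square-root factor

# The two tangent points, as in the shortened formulas above.
x1 = (cx*k + r**2*(px - cx) - s*(cy - py)) / k
y1 = (cy*k + r**2*(py - cy) + s*(cx - px)) / k
x2 = (cx*k + r**2*(px - cx) + s*(cy - py)) / k
y2 = (cy*k + r**2*(py - cy) - s*(cx - px)) / k
```

<p>For these values both points satisfy $(x-c_x)^2+(y-c_y)^2=r^2$ and the orthogonality condition to machine precision.</p>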
195,065
<p>Let $f:\mathbb{R}\rightarrow\mathbb{R}$ be a nondecreasing function.</p> <p>Let $a&lt;f(a)$ and $f(b)&lt;b$. Prove that there is a $c$ with $a&lt;c&lt;b$ such that $f(c)=c$. </p> <p>My attempt at a proof is as follows. Let $c:=\sup\{x:a\leq x\leq b\text{, }x\leq f(x)\}$.</p> <p>This is where I'm stuck. Since I can't use more powerful theorems such as the IVT, I find this problem far more complex.</p>
Taylor
22,255
<p>Define another function $g(x) = f(x) - x$. Then try to find out if this function crosses $0$. </p>
4,361,063
<blockquote> <p>Suppose that <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_1 \subset \tau_2$</span>. Is the space <span class="math-container">$(X, \tau_2)$</span> compact? Does the converse hold, i.e. if <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_2 \subset \tau_1$</span>?</p> </blockquote> <p>The first one shouldn't hold since if <span class="math-container">$X= [0,1]$</span> and <span class="math-container">$\tau_1$</span> is the usual topology of <span class="math-container">$\Bbb R$</span>, then I think that if <span class="math-container">$\tau_2$</span> is the lower limit topology we have <span class="math-container">$\tau_1 \subset \tau_2$</span> and <span class="math-container">$X$</span> wouldn't be compact?</p> <p>The second one also doesn't seem true. If <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_2 \subset \tau_1$</span>, then every open cover of <span class="math-container">$X$</span> has a finite subcover, but I don't see why this would hold for the coarser topology <span class="math-container">$\tau_2$</span>? I think it could be that <span class="math-container">$\tau_2$</span> doesn't have &quot;enough&quot; elements to satisfy this.</p>
User
525,755
<p>I think your confusion for the second one arises from the meaning of &quot;finite subcover&quot;. A finite subcover is a finite cover consisting of elements of the initial cover. So, if you cover <span class="math-container">$X$</span> by elements of <span class="math-container">$\tau_2$</span>, these elements are also elements of <span class="math-container">$\tau_1$</span> and hence give a finite subcover, which is what you need.</p>
4,361,063
<blockquote> <p>Suppose that <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_1 \subset \tau_2$</span>. Is the space <span class="math-container">$(X, \tau_2)$</span> compact? Does the converse hold, i.e. if <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_2 \subset \tau_1$</span>?</p> </blockquote> <p>The first one shouldn't hold since if <span class="math-container">$X= [0,1]$</span> and <span class="math-container">$\tau_1$</span> is the usual topology of <span class="math-container">$\Bbb R$</span>, then I think that if <span class="math-container">$\tau_2$</span> is the lower limit topology we have <span class="math-container">$\tau_1 \subset \tau_2$</span> and <span class="math-container">$X$</span> wouldn't be compact?</p> <p>The second one also doesn't seem true. If <span class="math-container">$(X, \tau_1)$</span> is compact and <span class="math-container">$\tau_2 \subset \tau_1$</span>, then every open cover of <span class="math-container">$X$</span> has a finite subcover, but I don't see why this would hold for the coarser topology <span class="math-container">$\tau_2$</span>? I think it could be that <span class="math-container">$\tau_2$</span> doesn't have &quot;enough&quot; elements to satisfy this.</p>
MBolin
251,330
<p><span class="math-container">$X$</span> being <span class="math-container">$\tau_1$</span>-compact means that for every net there exists a <span class="math-container">$\tau_1$</span>-convergent subnet. If <span class="math-container">$\tau_2 \subseteq \tau_1$</span>, convergence in <span class="math-container">$\tau_1$</span> implies convergence in <span class="math-container">$\tau_2$</span>. Therefore, for every net there also exists a <span class="math-container">$\tau_2$</span>-convergent subnet, and thus <span class="math-container">$X$</span> is <span class="math-container">$\tau_2$</span> compact. The converse is not true.</p>
1,401,116
<p>An aeroplane is observed by two persons travelling at $60 \frac{km}{hr} $ in two vehicles moving in opposite directions on a straight road. To an observer in one vehicle the plane appears to cross the road track at right angles while to the observer in the other vehicle the angle appears to be $45°$. At what angle does the plane actually cross the road track and what is its speed relative to the ground.<br> My attempt :<br> Now for vehicle 1 $$ \vec{V_{p1}} =\vec{V_{p}}-\vec{V_{1}} $$ where $ \vec{V_{p}}$ is velocity of plane.<br> Similarly for vehicle 2 $$ \vec{V_{p2}} =\vec{V_{p}}-\vec{V_{2}} $$ How should I proceed further? I am stuck at this step.</p>
Batominovski
72,152
<p>Although I didn't try to prove it, I think that all polynomial solutions to this functional equation (without the boundedness condition) are of the form $f(x,y)=a+bx+cy+dxy+e\left(x^2-y^2\right)$ for all $x,y\in\mathbb{Z}$, where $a,b,c,d,e$ are fixed real numbers. There are other solutions like $f(x,y)=\text{Re}\left(c\alpha^x\beta^y\right)$ for all $x,y\in\mathbb{Z}$, where $\alpha,\beta,c\in\mathbb{C}$ are such that $\alpha+\frac{1}{\alpha}+\beta+\frac{1}{\beta}=4$.</p>
2,922,494
<p>I came across the following problem today.</p> <blockquote> <p>Flip four coins. For every head, you get <span class="math-container">$\$1$</span>. You may reflip one coin after the four flips. Calculate the expected returns.</p> </blockquote> <p>I know that the expected value without the extra flip is <span class="math-container">$\$2$</span>. However, I am unsure of how to condition on the extra flips. I am tempted to claim that having the reflip simply adds <span class="math-container">$\$\frac{1}{2}$</span> to each case with tails since the only thing which affects the reflip is whether there are tails or not, but my gut tells me this is wrong. I am also told the correct returns is <span class="math-container">$\$\frac{79}{32}$</span> and I have no idea where this comes from.</p>
Thomas Andrews
7,933
<p>This particular problem can be solved simply by flipping five coins but winning a maximum of \$4.</p> <p>That gives an expected value of: <span class="math-container">$$\frac{1\cdot 0 + 5\cdot 1 + 10\cdot 2 + 10\cdot 3 + (5+1)\cdot 4}{2^5}=\frac{79}{32}.$$</span></p>
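<p>Both this capped five-flip model and the direct &quot;reflip one tail&quot; model can be checked by exact enumeration. A sketch (my own code, exact rational arithmetic):</p>

```python
from itertools import product
from fractions import Fraction

# Model 1: five fair flips, winnings capped at $4.
capped = sum(Fraction(min(sum(f), 4), 32) for f in product((0, 1), repeat=5))

# Model 2: four fair flips, then reflip one tail if any showed.
# A reflip of a tail is worth 1/2 in expectation; with four heads
# there is no tail to reflip (and no reason to reflip a head).
direct = sum(
    (Fraction(sum(f)) + (Fraction(1, 2) if sum(f) < 4 else 0)) / 16
    for f in product((0, 1), repeat=4)
)
```

<p>Both enumerations give exactly <span class="math-container">$\frac{79}{32}$</span>.</p>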
392,441
<p>Assume that $f$ is infinitely differentiable. Let $\delta$ be the (Dirac) delta functional.</p> <p>I know that $f\delta = f(0)\delta$, but I'm not sure how to derive the equation $f\delta′=f(0)\delta′−f′(0)\delta$.</p>
not all wrong
37,268
<p>I'm assuming you mean the delta function(al) $\delta(x)$.</p> <p>If so, you should use formal integration by parts. $$\int g\times (f\delta') =\int -(fg)'\delta = \int -(f'(0)g(x)+g'(x)f(0))\delta = -f'(0) \int g\delta-f(0)\int g'\delta$$</p> <p>Integrating the last term by parts and stripping off the $\int g\times$ gives the result.</p>
2,996,476
<p>So I was asked to solve the following sum using Fourier Series:</p> <p><span class="math-container">$$\sum_{n\in\Bbb N}\frac{(-1)^n}{4n^2-1}^{(1)}$$</span></p> <p>I decided to try splitting it up into two sums (for even and odd <span class="math-container">$n$</span>) in the hopes that it would telescope and be easier to evaluate. It did not, but I was able to salvage my work by employing the arctangent Taylor Series. The purpose of this post is twofold: to post a rather unusual approach and to ask others how they might do this problem more easily/differently.</p> <p>First I noted</p> <p><span class="math-container">$$\sum_{n\in\Bbb N}\frac{(-1)^n}{4n^2-1}=\sum_{n\in\mathbb N}\frac{1}{16n^2-1}^{(2)}-\sum_{n\in\mathbb N}\frac{1}{4(2n-1)^2-1}^{(3)}$$</span></p> <p>Using partial fractions of the first sum on the RHS:</p> <p><span class="math-container">$$(2)\;\sum_{n\in\mathbb N}\frac{1}{16n^2-1}=\frac{1}{2}\sum_{n\in\mathbb N}\left(\frac{1}{4n-1}-\frac{1}{4n+1}\right)$$</span></p> <p>Using partial fractions on the second sum on the RHS (and then shifting the index in the <span class="math-container">$\frac{1}{4n-3}$</span> terms):</p> <p><span class="math-container">$$(3)\;\sum_{n\in\mathbb N}\frac{1}{4(2n-1)^2-1}=\frac{1}{2}\sum_{n\in\mathbb N}\left(\frac{1}{4n-3}-\frac{1}{4n-1}\right)$$</span></p> <p><span class="math-container">$$=\frac{1}{2}-\frac{1}{2}\sum_{n\in\mathbb N}\left(\frac{1}{4n-1}-\frac{1}{4n+1}\right)$$</span></p> <p>Combining results for the even and odd sums from the RHS:</p> <p><span class="math-container">$$\sum_{n\in\mathbb N}\frac{1}{16n^2-1}-\sum_{n\in\mathbb N}\frac{1}{4(2n-1)^2-1}=-\frac{1}{2}+\sum_{n\in\mathbb N}\left(\frac{1}{4n-1}-\frac{1}{4n+1}\right)^{(4)}$$</span></p> <p>Then I saw that the rightmost sum was the following:</p> <p><span class="math-container">$$\sum_{n\in\mathbb N}\left(\frac{1}{4n-1}-\frac{1}{4n+1}\right)=\sum_{n=2}^\infty\frac{(-1)^{n}}{2n-1}^{(5)}$$</span></p> <p>Note the Taylor series for arctangent:</p> <p><span class="math-container">$$\text{atan}(x)=\sum_{n=1}^\infty\frac{(-1)^{n-1}x^{2n-1}}{2n-1}$$</span></p> <p>Letting <span class="math-container">$x=1$</span>:</p> <p><span class="math-container">$$\text{atan}(1)=\frac{\pi}{4}=\sum_{n=1}^\infty\frac{(-1)^{n+1}}{2n-1}$$</span></p> <p>Accounting for index and sign difference, we see that</p> <p><span class="math-container">$$(5)\;\sum_{n=2}^\infty\frac{(-1)^{n}}{2n-1}=1-\frac{\pi}{4}$$</span></p> <p>And finally, from the <span class="math-container">$-1/2$</span> term in <span class="math-container">$(4)$</span>, we see that</p> <p><span class="math-container">$$(1)\;\sum_{n\in\Bbb N}\frac{(-1)^n}{4n^2-1}=\frac{2-\pi}{4}$$</span></p>
Andreas
317,854
<p>I would go about it in a similar way like you do, just shorter, as follows:</p> <p>By partial fractions, </p> <p><span class="math-container">$$ \frac{1}{4n^2-1} = \frac12 \left(\frac{1}{2n-1} - \frac{1}{2n+1} \right) $$</span> So <span class="math-container">$$ \sum_{n\in\Bbb N}\frac{(-1)^n}{4n^2-1}^{(1)} = \frac12 \sum_{n\in\Bbb N} \left(\frac{(-1)^n}{2n-1} - \frac{(-1)^n}{2(n+1)-1} \right) $$</span> and by shifting the sum index <span class="math-container">$$ \sum_{n\in\Bbb N}\frac{(-1)^n}{4n^2-1}^{(1)} = \frac12 \left( \sum_{n=1}^\infty \frac{(-1)^n}{2n-1} + \sum_{n=2}^\infty \frac{(-1)^n}{2n-1} \right)\\ = \frac12 \left( 1 - 2 \sum_{n=1}^\infty \frac{(-1)^{n+1}}{2n-1} \right) = \frac{2-\pi}{4} $$</span> where in the last line the <span class="math-container">$\arctan$</span> result was used.</p>
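<p>As a numerical sanity check (my own snippet), the partial sums of this alternating series settle onto <span class="math-container">$(2-\pi)/4$</span>; for an alternating series with decreasing terms, the truncation error is at most the first omitted term:</p>

```python
import math

# Partial sum of sum_{n>=1} (-1)^n / (4 n^2 - 1).
N = 100_000
partial = sum((-1)**n / (4*n*n - 1) for n in range(1, N + 1))
closed_form = (2 - math.pi) / 4
# Truncation error is bounded by 1 / (4 (N+1)^2 - 1), about 2.5e-11 here.
```
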
163,654
<p>Do You know any successful applications of the geometric group theory in the number theory? GTG is my main field of interest and I would love to use it to prove new facts in the number theory.</p>
Włodzimierz Holsztyński
8,385
<p>For the sake of completeness, let's look at the following simple example of very <em>nice</em> continua, and of their inverse system of inclusions, for which there is no <strong>homological</strong> surjection.</p> <p>All continua are subspaces of $\ \mathbb R^2$. Let $\ X:=(0\ 0)\ $ be a single-point space. Let $\ C(p;r)\ $ and $\ B(p;r)\ $ stand for closed and open discs which have their center in $\ p\ $ and their radius equal to $\ r.\ $ Define:</p> <p>$$X_n\ \,:=\ \,C\left(\left(2^{-n}\ 0\right);\ 2^{-n}\right)\ \setminus\ B\left(\left(\frac 3{2^{n+1}}\ 0\right);\ 2^{-(n+1)}\right)$$</p> <p>Thus $\ H_*(X)=0\ $ while $\ H_*(X_n) = H_*(S^1).\ $ As we see, there are no (homological) surjections. </p>
2,058,560
<p>Can anyone please simplify this boolean expression? My answer always reduces to a single variable, i.e. $x$, but my instructor reduced it to three literals. $$x'y'z+x'yz'+xy'z'+xyz$$ </p>
barak manos
131,263
<p>You have $3$ variables, hence $2^3=8$ possible combinations of them.</p> <p>Your expression is an OR on $4$ out of these $8$ combinations:</p> <p>$x'y'z+x'yz'+xy'z'+xyz$.</p> <p>So there isn't really any way to simplify it as a sum of products.</p> <p>You can take the NOT of the OR of the other $4$ combinations:</p> <p>$(x'yz+xy'z+xyz'+x'y'z')'$.</p> <p>But that leaves you with an expression of the same "size"...</p>
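<p>As a quick check (my own enumeration), a truth table confirms that the expression agrees with the complement of the OR of the other four combinations at all $2^3 = 8$ assignments, and also that it coincides with the odd-parity function $x \oplus y \oplus z$, which has the three-literal form mentioned in the question:</p>

```python
from itertools import product

rows = []
for x, y, z in product((0, 1), repeat=3):
    nx, ny, nz = 1 - x, 1 - y, 1 - z
    # The original sum-of-products expression.
    f = (nx & ny & z) | (nx & y & nz) | (x & ny & nz) | (x & y & z)
    # Complement route: NOT of the OR of the other four combinations.
    g = 1 - ((nx & y & z) | (x & ny & z) | (x & y & nz) | (nx & ny & nz))
    # Odd-parity (XOR) form.
    p = x ^ y ^ z
    rows.append((f, g, p))
```
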
3,954,010
<p>The case <span class="math-container">$n=3$</span> is from <a href="https://usamts.org/Tests/Problems_31_1.pdf" rel="nofollow noreferrer">here</a>. It's straightforward to prove it's true:</p> <p>First we notice that if any two of <span class="math-container">$x_1, x_2, x_3$</span> are equal then all must be equal.</p> <p>Denote <span class="math-container">$a=x_1^{x_2}=x_2^{x_3}=x_3^{x_1}$</span> then <span class="math-container">$$x_1 = a^{1/x_2}, x_2=a^{1/x_3}, x_3=a^{1/x_1}$$</span> WLOG assume <span class="math-container">$x_1 \ge x_2, x_3$</span>. There are two cases:</p> <ul> <li><p><span class="math-container">$x_1 \ge x_2 \ge x_3 \implies \frac{1}{x_2} \ge \frac{1}{x_3} \ge \frac{1}{x_1} \implies x_1 \ge x_3 \ge x_2 \implies x_2=x_3 \implies x_1=x_2=x_3$</span>.</p> </li> <li><p><span class="math-container">$x_1 \ge x_3 \ge x_2 \implies \frac{1}{x_2} \ge \frac{1}{x_1} \ge \frac{1}{x_3} \implies x_3 \ge x_1 \ge x_2 \implies x_1=x_3 \implies x_1=x_2=x_3$</span>.</p> </li> </ul> <hr /> <p>If <span class="math-container">$n$</span> is even, then <span class="math-container">$x_{2i-1} = 2, x_{2i}=4$</span> is a counterexample.</p> <p>When <span class="math-container">$n=5$</span> the above method will need to examine <span class="math-container">$4!=24$</span> cases. Basically we map <span class="math-container">$x_i$</span> to <span class="math-container">$x_{(i \mod 5) +1}$</span> and reverse the order. In many of cases we can deduce <span class="math-container">$4$</span> or all <span class="math-container">$5$</span> of the <span class="math-container">$x_i$</span>'s are equal. 
For example, if <span class="math-container">$$x_1 \ge x_3 \ge x_2 \ge x_5 \ge x_4 \tag 1$$</span> then <span class="math-container">$$ x_5 \ge x_1 \ge x_3 \ge x_4 \ge x_2 \tag 2 $$</span> Since the order of <span class="math-container">$x_2$</span> and <span class="math-container">$x_5$</span> reversed from <span class="math-container">$(1)$</span> to <span class="math-container">$(2)$</span>, they must be equal and so are everything in between them from both <span class="math-container">$(1)$</span> and <span class="math-container">$(2)$</span> hence all <span class="math-container">$x_i$</span>'s are equal.</p> <p>There are other cases that are different.</p> <p><strong>Example 2:</strong> <span class="math-container">$$x_1 \ge x_3 \ge x_4 \ge x_2 \ge x_5 \tag 3$$</span> then <span class="math-container">$$x_1 \ge x_3 \ge x_5 \ge x_4 \ge x_2 \implies x_1 \ge x_3 \ge x_2=x_4=x_5 \tag 4$$</span></p> <p><strong>Example 3:</strong> <span class="math-container">$$x_1 \ge x_3 \ge x_4 \ge x_5 \ge x_2 \tag 5$$</span> then <span class="math-container">$$x_3 \ge x_1 \ge x_5 \ge x_4 \ge x_2 \implies x_1=x_3 \ge x_2=x_4=x_5 \tag 6$$</span></p> <p>But they all lead to the conclusion that <span class="math-container">$x_1=x_2=x_3=x_4=x_5$</span>.</p> <hr /> <p>Now my questions:</p> <p><strong>Question #1:</strong> Is it always true if <span class="math-container">$n&gt;1$</span> and <span class="math-container">$n$</span> is odd?</p> <p><strong>Question #2:</strong> What if we allow <span class="math-container">$x_i&gt;0$</span> instead of <span class="math-container">$x_i&gt;1$</span>?</p>
Neat Math
843,178
<p>After reading <a href="https://math.stackexchange.com/a/3358130/843178">this answer</a> to another question, as well as <a href="https://math.stackexchange.com/a/3954365/843178">Andreas's answer</a>, I decided to write up the following solution. I believe John Omielan had already figured out how to do it, though he chose to leave that to the one who asked the case <span class="math-container">$n=3$</span> question last year. Then Andreas's answer made everything clear to me.</p> <p>Assume <span class="math-container">$n=2m+1$</span>. First notice <span class="math-container">$$x_2 \ln x_1 = x_3 \ln x_2 = \cdots = x_n \ln x_{n-1} = x_1 \ln x_n$$</span> Then there are only three possibilities: 1) <span class="math-container">$x_i = 1, \forall i$</span>; 2) <span class="math-container">$x_i &gt; 1, \forall i$</span>; 3) <span class="math-container">$x_i &lt; 1, \forall i$</span>.</p> <p>For case 2), if not all <span class="math-container">$x_i$</span>'s are equal, WLOG we assume <span class="math-container">$x_1 &gt; x_2$</span>.</p> <p>Then <span class="math-container">$x_2 \ln x_1 = x_3 \ln x_2 \implies x_2 &lt; x_3$</span>.</p> <p>Then <span class="math-container">$x_3 \ln x_2 = x_4 \ln x_3 \implies x_3 &gt; x_4$</span>.</p> <p>...</p> <p><span class="math-container">$x_{2m+1} \ln x_{2m} = x_1 \ln x_{2m+1} \implies x_{2m+1}&gt;x_1$</span></p> <p><span class="math-container">$x_1 \ln x_{2m+1} = x_2 \ln x_1 \implies x_1 &lt; x_2$</span>, a contradiction. Therefore all <span class="math-container">$x_i$</span>'s must be equal.</p> <p>For case 3), again assuming WLOG <span class="math-container">$x_1 &gt; x_2$</span>, since <span class="math-container">$\ln x_i &lt; 0$</span> we'd have <span class="math-container">$x_i &gt; x_{i+1}$</span> all the way around, and <span class="math-container">$x_{2m+1}&gt;x_1$</span>, also a contradiction (in fact this also works when <span class="math-container">$n$</span> is even).</p>
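<p>The boundary cases are easy to sanity-check numerically (my own snippet): the alternating even-<span class="math-container">$n$</span> counterexample <span class="math-container">$2,4,2,4$</span> quoted in the question makes all the cyclic powers equal, and constant tuples always work:</p>

```python
def cyclic_powers(xs):
    """Return [x_1^{x_2}, x_2^{x_3}, ..., x_n^{x_1}]."""
    n = len(xs)
    return [xs[i] ** xs[(i + 1) % n] for i in range(n)]

even_counterexample = cyclic_powers([2, 4, 2, 4])   # non-constant, n even
constant_tuple = cyclic_powers([3, 3, 3, 3, 3])     # constant, n odd
```
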
111,343
<p>Hello,</p> <p>I have the following question: Let $M$ be a smooth analytic manifold of dimension $4n$. Assume furthermore that $M$ admits two foliations $A$, $B$, both with leaves of dimension $2n$, such that the leaves of $A$ are transverse to the leaves of $B$ at each point. Also, the leaves of $A$ are $n$-dimensional complex manifolds with complex structure $J_{\alpha}$ (for a leaf $\alpha \in A$) which varies smoothly as $\alpha$ varies in $A$. The leaves of $B$ are $n$-dimensional complex manifolds with complex structure $J_{\beta}$ (for a leaf $\beta \in B$) which varies smoothly as $\beta$ varies in $B$. My question is now: Is it possible to define a complex structure on the manifold $M$ as follows: for $x \in M$ there exists exactly one leaf $\alpha \in A$ that contains $x$ and one leaf $\beta \in B$ that contains $x$, and we set $J := J_{\alpha, x} + J_{\beta, x}$. This is an almost complex structure. Is this structure integrable if $J_{\alpha}$ and $J_{\beta}$ are integrable on each leaf? If not, what assumptions do I need in order to make it integrable? I hope for a lot of answers. Thanks in advance.</p> <p>Marin </p>
s-f
27,789
<p>We start by giving an answer that rephrases the Nijenhuis integrability condition using the foliations, $A,B$, but the main point here is to present a more geometric<br> criterion for integrability, related to complex structures of moduli spaces. The main claim is that leafwise holomorphicity of certain natural holonomy induced maps (ie transverse moduli) give necessary and sufficient conditions for integrability. We finish with some more speculative questions/answers.</p> <p>To pick up where Robert Bryant left off, by Nijenhuis, Newlander--Nirenberg and the hypotheses on $A,B,J$, (leafwise complex structures), our problem reduces to the pair of systems of equations, $$ (1) P_B^{0,1}(L_{X_i}Y_j)=0, \ (2) {\rm\ likewise\ with\ A,B \ switched}$$ where $L_{X_i}$ is a Lie derivative, (tensored by $C$) $ X_i,Y_j$ a basis, for $T^{1,0} M$, $ X_i\in T^{1,0} A$ is a J-holomorphic vector field along $A$, and likewise for $Y_j$ along $B$, and $P_A^{0,1}: T^C M \mapsto T^{0,1} A$, ($T^C M=T^{1,0} M \oplus T^{0,1} M $), are ($A,B, T^{i,j}$ adapted) projections.</p> <p>So integrability of $J$ is equivalent to formal vanishing of these projected mixed Lie-bracket terms, and the general idea here is that this strongly suggests putting things in terms of the holonomy of $A$ acting on $B$ (or on $J_{|TB}$). Likewise, switching $A,B$ provides the other needed equations. We want to interpret this system of equations in terms of some kind of holomorphicity of <em>holonomy</em>, along a leaf.</p> <p>I don't know any references dealing with this specific structure, but what follows is related to things like holomorphic motions, and HCMA (homogeneous complex Monge-Ampere equations). So what we will propose now may have some new aspects, but it is not too far from things already done in these related areas.</p> <p>Now we consider more directly the relation of the holonomies of $A$ and $B$ to integrability of $J$. 
A (local) leaf $A_x \owns x$ parametrizes a family $B_y, y\in A_x$ of the (local) leaves it intersects, and the foliation $A$ near $A_x$ induces a map $H:A_x\to {\cal J}$ where ${\cal J}$ is the space of<br> complex structures on the (local) leaf $B_x$. (we just pullback restrictions of $J$ to $B_y$ by the Holonomy, ie a flow in $A$).</p> <p>But ${\cal J}$ itself has a complex structure, as in the theory of moduli spaces; it comes from the complex structure induced by $J$ on the Lie algebra of diffeos of $B_x$ (or vector-fields).<br> Basically, because ${\cal J}$ is a quotient space of the space (or pseudogroup) of Diffeos of $B_x$.</p> <p><strong>Claim</strong>: Given $J$ integrable on $M$,</p> <ol> <li><p>If $H_A: A_x\to {\cal J}$ is holomorphic for all $x\in U$, an open set, then either $A$ or $B$ is transversely holomorphic on $U$.</p></li> <li><p>Conversely, if $B$ is transversely holomorphic on $U$ then $H_A: A_x\to {\cal J}$ is holomorphic on $U$. </p></li> </ol> <p>This shows that holomorphic $H_A$ is too strong for our purposes, but a transversely linearized version of the 2nd part of the claim will turn out to be just what we need.</p> <p><strong>Proof</strong> of claim: </p> <p>$H_A$ is a constant map iff $A$ is transversely holomorphic. If $H_A$ is a non-constant map then since the holonomy of $B$ commutes (by the definition of $H_A$) with the holomorphic maps $H_A: A_x\to {\cal J}$, the holonomy of $B$ must be itself holomorphic, proving the first part. Notice that branch points of $H_A$ give removable singularities for holonomy ($L_Y J_{|A}$) of $B$.</p> <p>2nd part: $B$ is a transversely holomorphic foliation by smooth holomorphic varieties in a ball in $C^N$, so up to a holomorphic coordinate change, $f$, it is just a family of parallel n-planes. Also $A_0$ can be taken to be a k-plane for the same $f$, ($x=0$ here). Representing leaves of $A$ as graphs over $A_0$ with values in $B_0$ gives the converse. 
(Leafwise holomorphicity is the main point, but $H_A$ is even fully holomorphic.) <strong>qed</strong>.</p> <p>We will weaken the holomorphicity property of $H_A$ by just using the $J$ induced on the normal bundle of a leaf $NA_x$, and we now consider holomorphicity of the holonomy pullback induced on $(NA_x,J)$. ${{\cal J}_{N_xA}}={{\cal J}_N}$ is the homogeneous space, $SL(2n,R)/SL(n,C)$, with its natural complex structure, ie the space of<br> complex structures on the tangent space $T_xB=T_xB_x$. Consider now the map $NH:A_x\to {{\cal J}_N}$, namely the restriction of $H_A$ on the leaf $A_x$ to 1-jets at $x$ of $B_x$.</p> <p><strong>Corollary</strong>: Given $J$ integrable on $M$, $NH:A_x\to {{\cal J}_N}$ is A-leafwise holomorphic.</p> <p><strong>Proof</strong>: Apply the 2nd part of the claim above, but using a transversely holomorphic foliation $B'$ transverse to $A_x$. A family of parallel n-planes suffices (working locally). <strong>qed</strong>.</p> <p>Now we note that there is no discrepancy between pulling-back $J$ from $NA$ instead of $TB$, (even though $TB$ is not holomorphic along a leaf), and we will show that the Nijenhuis system above is equivalent to holomorphicity of $NH$. Given any $x\in M,A,B,J$ as in the question, consider the holonomy $A'$ and complex structure $J'$, induced on the normal bundle $NA_x$ of a leaf, and the associated transversal linearization $B'$ of $B$ along $A_x$, obtained as a limit of $A_x$--transverse rescalings of the original $M,A,B,J$. This uses a chart adapted to $A_x$ (as for $f$ above) but more to the point, a holomorphic trivialization $\tau$ of $TM$ along $A_x$, adapted to $A_x$. 
Observe that the limit $B'$ of $B$--transforms becomes holomorphic in the limit (the tangential components are scaled away, analogously to Poincare normal form constructions).</p> <p>These special cases of the original problem, with $x\in M'=NA_x,A',B',J'$, always have transversely holomorphic $B'$, so the Corollary provides leafwise holomorphic $H_{A'}$, when $J$ is integrable, and conversely, $J'$ is integrable if $H_{A'}$ is holomorphic.</p> <p>Thus, by comparing, in the context of $M'$, the Nijenhuis integrability condition in the ($A,B$ adapted) form above, with the criterion suggested by the Corollary, it now seems clear that these are equivalent.<br> Note that the transverse linearization, which gives $M'$, stabilizes the Nijenhuis system (equation (1)) along a leaf $A_x$, just as it stabilizes $H_{A'}$. (Furthermore, they have very similar formal structure and both provide the integrability conditions.<br> It may be more direct to just do the calculation, but these geometric constructions may have some further usefulness as we will see below.) This leads us to the desired criterion:</p> <p><strong>Main Claim</strong>: </p> <p>$J$ is integrable on $U$, an open set, iff the A-maps $NH:A_x\to {{\cal J}_N}$ are A-leafwise holomorphic, for all $x\in U$, and likewise with<br> $A,B$ interchanged. (ie holomorphicity of the A-maps and B-maps together is necessary and sufficient).</p> <p>Our discussion of this criterion so far relies implicitly on the Newlander--Nirenberg theorem. We propose to consider a direct geometric construction which might eliminate this dependence: given as data leafwise holomorphic maps, $NH_A, NH_B$, the goal is to construct transverse foliations ${\hat A},{\hat B}$ by holomorphic curves on a complex chart, ${\hat M}\subset C^N$ realizing this data. Then to observe that it is diffeo to the given double foliation with $J$. 
In the speculative finishing discussion below, we sketch a possible new proof of the Newlander--Nirenberg theorem, but only for 4-manifolds, and a related possible application to moduli of these $ M,A,B,J$--structures.</p> <p>To get started, we use the preceding remarks of Robert Bryant giving transverse foliations by pseudo-holomorphic curves in the almost complex case, which requires 4-dimensionality. Then use the formal equivalence of the Nijenhuis integrability condition to holomorphicity of $NH_A, NH_B$, to proceed and use the latter. The idea for constructing ${\hat M}$ is quite simply to reverse the arguments that led to holomorphic maps, $NH_A, NH_B$; this would realize transverse foliations ${\hat A},{\hat B}$ by 'developing' graphs, reversing the proof of the 1st claim (2nd part) above.<br> There is some ambiguity going backwards from $NH_A, NH_B$ to ${\hat A},{\hat B}$; one needs to choose a lift from almost complex structures to vector fields, recalling how ${\cal J}$ is a quotient space of the space (or pseudogroup) of Diffeos of $B_x$, for example. This corresponds well to the ambiguity, in the presence of ${\hat A},{\hat B}$, of choosing ${\hat M}\subset C^N$ up to biholomorphisms. Also to integrate the transverse vector fields (the lifts) we use trivializations $\tau$ as above to choose the lift from $NA$ to $TM$. One has to be careful that these can be chosen without any constraints beyond what has been discussed here. Note that $\tau$ exists by a very simple case of Newlander--Nirenberg; the special case of<br> a transverse linearization, $x\in M'=NA_x,A',B',J'$, as above, ie we must use the integrability, coming from holomorphicity of $H'= NH$.</p> <p>Note that the integrability criterion of the corollary above is nontrivial, since $NH$, along a single leaf, can be arbitrarily specified, in the almost complex, leafwise-integrable case. 
The construction sketched here now seems to suggest that the whole family of $NH_A, NH_B$ can also be specified quite freely. This would mean that they give good moduli for these $ M,A,B,J$--structures.</p> <p>To extend this reconstruction of an $ M,A,B,J$--structure to higher dimensions, from data such as holomorphic $NH_A$, one might use large families of (pseudo-)holomorphic curves; we are guessing that sprays of such curves that cover the tangent bundle of $M$ could have enough Jacobi-fields (ie holonomy) to formally express the Nijenhuis condition.</p>
1,296,669
<p>A tapas bar serves 15 dishes, of which 7 are vegetarian, 4 are fish and 4 are meat. A table of customers decides to order 8 dishes, possibly including repetitions.</p> <p>a) Calculate the number of possible dish combinations. </p> <p>b) The customers decide to order 3 vegetarian dishes, 3 fish and 2 meat. Calculate the number of possible orders.</p> <hr> <p><em>Progress</em>. For a) I think that the answer would be $15^8$ as this would be the number of different ordered sequences of 8 elements from the 15 possible dishes.</p>
Diamantino
133,038
<p>a) Given that each of the 8 choices can independently be any of the 15 dishes, $$ N_{orders}=15^8 $$</p> <p>b) In this case, we can calculate how many different orders there are for each type and then combine them: $$ N_{orders} = 7^3 \times 4^3 \times 4^2 $$</p>
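<p>A quick check of the arithmetic, plus a brute-force verification of the multiplication principle on a tiny analogue (3 dishes, 2 ordered choices with repetition):</p>

```python
from itertools import product

# Part a): each of the 8 choices can independently be any of the 15 dishes.
n_orders_a = 15 ** 8
print(n_orders_a)                      # 2562890625

# Part b): choose the 3 vegetarian, 3 fish and 2 meat dishes independently.
n_orders_b = 7**3 * 4**3 * 4**2
print(n_orders_b)                      # 351232

# Brute-force check of the counting principle on a tiny analogue:
# 3 dishes, 2 choices with repetition -> 3**2 ordered selections.
tiny = list(product(range(3), repeat=2))
print(len(tiny) == 3**2)               # True
```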
2,925,450
<p>I am trying to solve the following problem: <span class="math-container">$|4-x| \leq |x|-2$</span>. I am trying to do it algebraically, but I'm getting a solution to the problem that makes no sense. I fail to see the error in my reasoning though. I hope to get an explanation where I went wrong.</p> <p><span class="math-container">$|4-x| \leq |x|-2$</span></p> <p><span class="math-container">$|4-x|-|x| \leq -2$</span></p> <p>If <span class="math-container">$4-x$</span> and <span class="math-container">$x$</span> are both negative, then for them to be equal to <span class="math-container">$2$</span>, we need to multiply both expressions by <span class="math-container">$-1$</span>.</p> <p><span class="math-container">$-4+x+x \leq -2$</span></p> <p><span class="math-container">$-4+2x \leq -2$</span></p> <p><span class="math-container">$2x \leq 2$</span></p> <p><span class="math-container">$x \leq 1$</span></p> <p>But if you sub in any <span class="math-container">$x$</span> less than or equal to <span class="math-container">$1$</span>, the inequality doesn't work! Can you please explain where in my logic, where in the steps, have I gone wrong? Thank you!</p>
hamam_Abdallah
369,188
<p><strong>Hint</strong></p> <p>Observe that there is no solution when the RHS is negative, which happens for $$|x|&lt;2.$$</p> <p>So, solve on $[2,4]$, $[4,+\infty)$ and $(-\infty,-2]$.</p>
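<p>Following the hint, a brute-force scan (a sketch; the grid bounds and step are chosen just for illustration) suggests the solution set is $x \ge 3$:</p>

```python
# Scan a grid covering the hint's candidate regions plus |x| < 2,
# checking |4 - x| <= |x| - 2 pointwise.
def holds(x):
    return abs(4 - x) <= abs(x) - 2

step = 0.001
grid = [round(-10 + k * step, 3) for k in range(int(20 / step) + 1)]
solutions = [x for x in grid if holds(x)]

print(min(solutions), max(solutions))   # 3.0 10.0 -> suggests x >= 3
print(any(holds(x) for x in grid if abs(x) < 2))   # False: nothing there
```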
2,692,124
<blockquote> <p>A group <span class="math-container">$G$</span> is finite iff all its elements have the same finite order?</p> </blockquote> <p>One direction (when <span class="math-container">$G$</span> is finite) is trivial following from Lagrange's theorem. But if a each element of <span class="math-container">$G$</span> has the same finite order (says <span class="math-container">$n&lt;\infty$</span>), i.e. <span class="math-container">$\forall g\in G$</span>, <span class="math-container">$g^n=e_G$</span>. Will that implies <span class="math-container">$G$</span> is finite? I try to find some counter examples like infinite group consists of only nilpotent elements (not nilpotent group), but it doesn't work. So the statement is true?</p> <p>=========================</p> <p>Edit: The statement should be &quot;A group <span class="math-container">$G$</span> is finite iff the order of any elements divides a fixed finite number n?&quot; So one direction should be: Given <span class="math-container">$n&lt;\infty$</span>, <span class="math-container">$\forall g\in G$</span>, if <span class="math-container">$g^{m}=e_G\neq g^{m-1}$</span>, then <span class="math-container">$m\mid n\implies |G|&lt;\infty$</span></p>
Robert Z
299,698
<p>No, take for example the infinite group $G=\prod_{n=1}^\infty{\mathbb Z}_2$ where every non-identity element has order 2.</p>
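<p>A finite truncation of this example can be played with directly (a sketch; the length $k$ is an arbitrary choice): elements are $0/1$ vectors under componentwise addition mod $2$, every non-identity element has order $2$, and the group can be made as large as desired by increasing $k$:</p>

```python
from itertools import product

# Truncation of prod Z_2: bit-vectors of length k under componentwise XOR.
def op(g, h):
    return tuple(a ^ b for a, b in zip(g, h))

k = 10
e = (0,) * k
elements = list(product((0, 1), repeat=k))

# Every element satisfies g * g = e, i.e. non-identity elements have order 2.
print(all(op(g, g) == e for g in elements))   # True
print(len(elements))                          # 2**10; grows without bound in k
```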
549,438
<p>Need to take the limit: $$\lim_{x \to 1}\left(\frac{x}{x-1}-\frac{1}{\ln(x)}\right) = \lim_{x \to 1}\left(\frac{x\cdot \ln(x)-x+1}{(x-1)\cdot \ln(x)}\right)=(0/0)$$ Now I can use L'Hospital's rule: $$\lim_{x \to 1}\left(\frac{1\cdot \ln(x)+x\cdot \frac{1}{x}-1}{1\cdot \ln(x)+(x-1)\cdot\frac{1}{x}}\right)= \lim_{x \to 1}\left(\frac{\ln(x)+1-1}{\ln(x)+\frac{(x-1)}{x}}\right)=\lim_{x \to 1}\left(\frac{\ln(x)}{\frac{(x-1)+x \cdot \ln(x)}{x}}\right)=\lim_{x \to 1}\frac{x\cdot \ln(x)}{x-1+x\cdot \ln(x)}=\frac{1\cdot 0}{1-1+0}=(0/0)$$ As you can see, I came to $(0/0)$ again. So what do I have to do to solve this problem?</p>
Ramanpreet Nara
100,442
<p>You're not guaranteed a solution with one application of L'Hopital's rule. Sometimes, you'll have to apply it multiple times to get the limit. </p>
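<p>Indeed, one more application of the rule finishes it: differentiating $x\ln x$ and $x-1+x\ln x$ gives $\frac{\ln x + 1}{2 + \ln x} \to \frac12$ as $x\to 1$. A quick numeric check of the original expression is consistent with that value:</p>

```python
import math

def f(x):
    return x / (x - 1) - 1 / math.log(x)

# Approach x = 1 from both sides; the values tend to 1/2.
samples = [f(1 + h) for h in (1e-2, 1e-3, 1e-4, -1e-2, -1e-3, -1e-4)]
print(samples)
print(all(abs(v - 0.5) < 1e-2 for v in samples))   # True
```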
1,808,183
<p>Consider the Cauchy problem: $$\left\{\begin{array}{lll} x^2\partial_x u+y^2\partial_yu=u^2\\ u(x,2x)=1 \end{array}\right.$$ It is easy to show that the characteristic equations are given by: $$\frac{dx}{x^2}=\frac{dy}{y^2}=\frac{dz}{px^2+qy^2}=\frac{dp}{2pu-2xp}=\frac{dq}{2qu-2yq}=dt$$ By the Lagrange-Charpit's method, we need a first integral $\Psi$ different to $\Phi=x^2p+y^2q-u^2$.</p> <p>How I can find $\Psi$? Maybe we should get directly the characteristic curves of the previous system?</p> <p>Many thanks!</p>
Gino
780,023
<p>As you well stated, the Brownian bridge is a GP. That means that given training outputs <span class="math-container">$f$</span> and test outputs <span class="math-container">$f_{\star}$</span> the joint prior distribution is</p> <p><span class="math-container">$$\begin{bmatrix}f\\f_{\star}\end{bmatrix} \sim N \Bigg(0, \begin{bmatrix}K(X,X) &amp; K(X, X_{\star})\\K(X_{\star}, X) &amp; K(X_{\star}, X_{\star})\end{bmatrix}\Bigg) \tag{1}$$</span></p> <p>where <span class="math-container">$X$</span> and <span class="math-container">$X_{\star}$</span> are the training and test inputs respectively. If there are <span class="math-container">$n$</span> training points and <span class="math-container">$n_{\star}$</span> test points then <span class="math-container">$K(X,X_{\star})$</span> denotes the <span class="math-container">$n \times n_{\star}$</span> matrix of the covariances evaluated at all pairs of training and test points, and similarly for the other entries <span class="math-container">$K(X, X)$</span>, <span class="math-container">$K(X_{\star}, X_{\star})$</span> and <span class="math-container">$K(X_{\star}, X)$</span>. Each covariance corresponds to the Wiener process:</p> <p><span class="math-container">$$k(x,x’) = \min(x,x’) \tag{2}$$</span></p> <p>To get the posterior distribution over functions you need to restrict this joint prior distribution to contain only those functions which agree with the observed data points, corresponding to conditioning the joint Gaussian prior distribution from eq. (1) on the observations, i.e</p> <p><span class="math-container">$$f_{\star}|X_{\star}, X, f \sim N \big(K(X_{\star}, X)K(X,X)^{-1}f, K(X_{\star}, X_{\star}) - K(X_{\star}, X)K(X,X)^{-1}K(X,X_{\star})\big) \tag{3}$$</span></p> <p>(See, e.g. von Mises [1964, sec. 
9.3] for this deduction)</p> <p>In our case: <span class="math-container">$X = \begin{bmatrix}0\\1\end{bmatrix}$</span> and <span class="math-container">$f = \begin{bmatrix}0\\0\end{bmatrix}$</span></p> <p>if you compute the covariances from eq. (2) and replace them in the posterior covariance of eq. (3) you get the desired output: <span class="math-container">$\text{cov} (x,x’|x(t) = 0) = \min(x,x') - \frac{xx’}{t}$</span></p> <p>in particular, for the variance (<span class="math-container">$x=x’=s$</span>) you have: <span class="math-container">$\text{var} (s|x(t) = 0) = \frac{s(t-s)}{t}$</span></p> <p>Caveat: <span class="math-container">$K(X,X)$</span> is singular given the training points, to compute <span class="math-container">$K(X,X)^{-1}$</span> you can add noise <span class="math-container">$\sigma^{2}$</span> to the model and then compute the limit when <span class="math-container">$\sigma \rightarrow 0$</span>. The equivalent eq. (3) with noise is obtained by doing the following replacement: <span class="math-container">$K(X,X) \rightarrow K(X,X) + \sigma^{2}I$</span></p>
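<p>A minimal numeric check of eqs. (2)–(3) for this bridge case, in pure Python (the jitter value and the test points are illustrative assumptions, and the $2\times 2$ inverse is done by hand):</p>

```python
def k(x, y):                       # Wiener-process covariance, eq. (2)
    return min(x, y)

t = 1.0                            # condition on x(0) = 0 and x(t) = 0
X = [0.0, t]
sigma2 = 1e-10                     # small jitter: K(X,X) alone is singular

# K(X,X) + sigma2 * I and its inverse (2x2, by hand).
K = [[k(a, b) + sigma2 * (a == b) for b in X] for a in X]
det = K[0][0] * K[1][1] - K[0][1] * K[1][0]
Kinv = [[ K[1][1] / det, -K[0][1] / det],
        [-K[1][0] / det,  K[0][0] / det]]

def post_cov(s, sp):               # posterior covariance, eq. (3)
    ks  = [k(s, x) for x in X]
    ksp = [k(sp, x) for x in X]
    corr = sum(ks[i] * Kinv[i][j] * ksp[j] for i in range(2) for j in range(2))
    return k(s, sp) - corr

s, sp = 0.3, 0.7
print(post_cov(s, sp), min(s, sp) - s * sp / t)   # both ~ 0.09
print(post_cov(s, s),  s * (t - s) / t)           # both ~ 0.21
```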
2,070,353
<p>Let there be two arbitrary functions $f$ and $g$, and let $g \circ f$ be their composition.</p> <ol> <li><p>Suppose $g \circ f$ is one-one. Then prove that this implies $f$ is also one-one.</p></li> <li><p>Suppose $g \circ f$ is onto. Then prove that this implies $g$ is also onto.</p></li> </ol> <p>I tried to do both of the proofs by contradiction method but got stuck in that approach. Please help me in this proof. Proofs in more than one way is welcomed.</p>
Paolo Leonetti
45,736
<p>$$ 1+\frac{1}{2}((2n+1)-7)=n-2. $$</p>
4,610,640
<p>I have this equation :</p> <p><span class="math-container">$$\frac{d\alpha}{dz} = - \frac{dr}{dz} * \frac{\tan(\alpha)}r $$</span></p> <p>I searched for some similar examples but non of these equations was like this one. I'm confused about this one. As far as I know, I used RK4 for equations like this : <span class="math-container">$$y'(t) = F(t,y(t))$$</span></p> <p>Thank you for helping me !</p> <hr /> <p>Here's the context for the equation. <a href="https://i.stack.imgur.com/UsOkt.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UsOkt.png" alt="enter image description here" /></a></p> <p>I just deleted the lambda part to make the equation easier. But I still don't figure out how to solve it!</p>
Bob Dobbs
221,315
<p><span class="math-container">\begin{array}{rr|l} &amp;s+1 &amp; s+2\\ &amp;s+2 &amp; \\ \hline &amp;\text{(Remainder)}\color{lime}{-1}&amp;\color{purple}{1} \text{(Quotient)} \end{array}</span> Hence, <span class="math-container">$$\frac{s+1}{s+2}=\color{purple}{1}+\frac{\color{lime}{-1}}{s+2}=1-\frac{1}{s+2}.$$</span></p>
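<p>The same division can be checked mechanically (a small sketch on coefficient lists, highest degree first; <code>polydiv</code> is a throwaway helper, not a library routine):</p>

```python
# Divide s + 1 by s + 2 on coefficient lists (highest degree first).
def polydiv(num, den):
    num = list(num)
    q = []
    while len(num) >= len(den):
        c = num[0] / den[0]
        q.append(c)
        num = [a - c * b for a, b in zip(num, den + [0] * len(num))][1:]
    return q, num  # quotient, remainder

quot, rem = polydiv([1, 1], [1, 2])   # (s + 1) / (s + 2)
print(quot, rem)                      # [1.0] [-1.0]

# Spot-check the identity (s+1)/(s+2) = 1 - 1/(s+2) at a few points.
for s in (0.0, 2.5, -7.0):
    assert abs((s + 1) / (s + 2) - (1 - 1 / (s + 2))) < 1e-12
```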
4,139,003
<p>For this I used the method of making the determinant nonzero. First I create the array</p> <p><span class="math-container">\begin{equation} \begin{pmatrix} a^2 &amp; 0 &amp; 1\\ 0 &amp; a &amp; 2\\ 1 &amp; 0 &amp; 1 \end{pmatrix} \end{equation}</span></p> <p>Then <span class="math-container">$\det(A)=a^3-a=a(a^2-1)$</span></p> <p>Where we can see that for the determinant to be different from zero <span class="math-container">$ a $</span> cannot take the values of <span class="math-container">$0,1,-1$</span>.</p> <p>Therefore the values that <span class="math-container">$ a $</span> can take so that <span class="math-container">$ A $</span> is a base of <span class="math-container">$\mathbb{R} ^ 3 $</span> are <span class="math-container">$\mathbb{R}\backslash \{0,1, -1\}$</span>.</p> <p>This is correct?</p>
Jean Marie
305,862
<p><a href="https://i.stack.imgur.com/VdglK.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VdglK.jpg" alt="enter image description here" /></a></p> <p><em>Fig. 1: (Some circles of) the pencil of circles <span class="math-container">$P_1$</span> in blue, and <span class="math-container">$P_2$</span> in red (notations of (1)). Their common circle, the one we want, is in black. Point-circles <span class="math-container">$(x_1,y_1)$</span> and <span class="math-container">$(x_2,y_2)$</span> are materialized by small stars.</em></p> <p>I understand your method in the following way. You consider two <strong>linear pencils of circles</strong> each one generated by a &quot;point-circle&quot; [...] and a straight line:</p> <p><span class="math-container">$$\begin{cases}[(x-x_1)^2+(y-y_1)^2]+L(y-2x)&amp;=&amp;0&amp;\text{pencil} \ P_1\\ [(x-x_2)^2+(y-y_2)^2]+M(y-x)&amp;=&amp;0&amp;\text{pencil} \ P_2\end{cases} \tag{1}$$</span></p> <p>and you want to find the common circle to the two pencils ; the problem you meet is that, by identification of like coefficients,</p> <p><span class="math-container">$$\begin{cases}x_1 + L&amp;=&amp;x_2+\frac{M}{2}\\y_1 - \frac{L}{2}&amp;=&amp;y_2 - \frac{M}{2}\\x_1^2 + y_1^2&amp;=&amp;x_2^2 + y_2^2\end{cases}\tag{2}$$</span></p> <p>you have 3 equations with 6 unknowns...</p> <ol> <li><p>In fact you should use only 4 unknowns because <span class="math-container">$y_1=2x_1$</span> and <span class="math-container">$y_2=x_2$</span>.</p> </li> <li><p>Moreover you haven't used the fact that the radius is <span class="math-container">$3$</span> ; this can be done by expressing that the center of the circle whose coordinates are &quot;visibly&quot; <span class="math-container">$(x_1+L,y_1-\frac{L}{2})=(x_1+L,2x_1-\frac{L}{2})$</span> is at distance <span class="math-container">$3$</span> from <span class="math-container">$(x_1,y_1)=(x_1,2x_1)$</span> ; this condition is expressed by</p> </li> </ol> <p><span 
class="math-container">$$L^2+\frac{L^2}{4}=3^2 \ \implies \ L=\dfrac{6}{\sqrt{5}}\tag{3}$$</span></p> <p>(The plus sign has been chosen in order to have the center of the circle under the line with equation <span class="math-container">$y=2x$</span> ; we could have as well chosen to determine <span class="math-container">$M=-3\sqrt{2}$</span>).</p> <p><span class="math-container">$L$</span> being known, system (2) is reduced to 3 simple equations with 3 unknowns.</p> <p>In this way your method is fully viable.</p> <p>Remark: the third equation in (2) expresses the equality of the lengths of the tangents <span class="math-container">$OT_1$</span> and <span class="math-container">$OT_2$</span> issued from the origin.</p> <hr /> <p><strong>Here is a different way</strong>. Let <span class="math-container">$C=(x_0,y_0)$</span> be the center of the circle.</p> <p>The different constraints of distances can be encapsulated into the double equation (see <a href="https://math.stackexchange.com/q/85761">here</a>):</p> <p><span class="math-container">$$\underbrace{\dfrac{1}{\sqrt{2}}(y_0-x_0)}_{\text{distance from C to line } y-x=0}=\underbrace{\dfrac{1}{\sqrt{5}}(2x_0-y_0)}_{\text{distance from C to line } y-2x=0} \ = \ \ 3$$</span></p> <p>whose solution is</p> <p><span class="math-container">$$x_0=3(\sqrt{2}+\sqrt{5}), \ \ y_0=3(2\sqrt{2}+\sqrt{5})$$</span></p> <p>Therefore, the equation of your circle is:</p> <p><span class="math-container">$$(x-x_0)^2+(y-y_0)^2=3^2$$</span></p> <p>that can be somewhat simplified if you expand the squares.</p>
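<p>The final answer can be verified numerically (a quick sketch; the distances are computed from the standard point-line distance formula):</p>

```python
import math

x0 = 3 * (math.sqrt(2) + math.sqrt(5))
y0 = 3 * (2 * math.sqrt(2) + math.sqrt(5))

# Distance from (x0, y0) to the lines y - x = 0 and y - 2x = 0.
d1 = abs(y0 - x0) / math.sqrt(2)
d2 = abs(2 * x0 - y0) / math.sqrt(5)
print(d1, d2)   # both equal the radius 3
```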
2,107,164
<p>Let $G$ be a group and $H$ a normal subgroup of $G$.</p> <p>Suppose that $H \cong \mathbb{Z}$ (the integers) and $G/H \cong \mathbb{Z}/ n \mathbb{Z}$ for some integer $n \geq 2$.</p> <p>I'm stuck on the following two problems.</p> <p>(1) If $n$ is an odd number, show that $G$ is an abelian group.</p> <p>(2) Classify the group $G$ up to isomorphism.</p> <p>Please tell me any idea and help me.</p>
Nicky Hekster
9,605
<p>Let me give you a headstart: in general, if $N \unlhd G$, then conjugation on $N$ induces an automorphism of $N$, hence $G/C_G(N) \hookrightarrow Aut(N)$. Here $C_G(N)=\{g \in G: g^{-1}ng=n, $ for all $n \in N\}$, the centralizer of $N$ in $G$. Now $Aut(\mathbb{Z}) \cong \mathbb{Z}/2\mathbb{Z}$. Hence, if $n$ is odd, we see that in your case $G/C_G(H)$ must be trivial, that is $H \subseteq Z(G)$. Since $G/H$ is cyclic, it follows that also $G/Z(G)$ is cyclic and it is well known that implies that $G$ is abelian. This proves (1). Can you work on (2) now?</p>
3,714,021
<p>I've been working with the power series <span class="math-container">$\sum_{n=0}^{\infty}n(1-2^{-n})z^n$</span> for the last few hours and I'm starting to get a little lost in trying to figure out a few things for an exercise.</p> <p>First of all, I've managed to find the radius of convergence for the series with relative ease using <span class="math-container">$R=\lim\frac{\vert a_n\vert}{\vert a_{n+1}\vert}$</span> as <span class="math-container">$n\rightarrow\infty$</span>. Now the radius of convergence is 1 here, and I'm asked whether or not the sum converges <em>at</em> <span class="math-container">$\vert z\vert=R$</span>. I'm a little confused here, as I don't really know how much I can say about it considering this is on the boundary of convergence. I considered saying something about Abel's Theorem, but as far as I understand the theorem only says something <em>if</em> the series is indeed convergent. Besides, when I set <span class="math-container">$z=1$</span> the series becomes:</p> <p><span class="math-container">$\sum_{n=0}^{\infty}n-\frac{n}{2^n}$</span></p> <p>The second term grows smaller and the first grows larger, suggesting the whole expression grows larger. This must mean that the series diverges here, right?</p> <p>Secondly, I'm asked to show that the sum function for the power series can be written as:</p> <p><span class="math-container">$F(x)=\frac{x}{(1-x)^2}-\frac{2x}{(2-x)^2}$</span></p> <p>I'm assuming this has something to do with the convergence of the series, but I'm not really able to see how to show this.</p> <p>Any help would be deeply appreciated!</p>
Fred
380,717
<p>If <span class="math-container">$b_n=a_n z^n$</span>, then we have for <span class="math-container">$|z|=1$</span> that</p> <p><span class="math-container">$$|b_n|=|a_n|=n-n\frac{1}{2^n} \to \infty$$</span></p> <p>as <span class="math-container">$n \to \infty$</span>.</p> <p>Conclusion ?</p>
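<p>Both parts can be checked numerically (a sketch; the divergence on $|z|=R$ shows up as unbounded terms, and the closed form follows from applying $\sum n x^n = \frac{x}{(1-x)^2}$ at $x$ and at $x/2$):</p>

```python
# Terms a_n = n (1 - 2^{-n}) are unbounded, so the series cannot
# converge anywhere on |z| = 1.
a = lambda n: n * (1 - 2 ** (-n))
print(a(10), a(100))            # grows like n

# Closed form F(x) = x/(1-x)^2 - 2x/(2-x)^2 versus partial sums, |x| < 1.
def F(x):
    return x / (1 - x) ** 2 - 2 * x / (2 - x) ** 2

x = 0.5
partial = sum(a(n) * x ** n for n in range(200))
print(partial, F(x))            # both ~ 1.5555...
```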
432,291
<p>This question is related to <a href="https://mathoverflow.net/questions/421450/homotopy-type-theory-how-to-disprove-that-0-operatornamesucc0-in-the-ty">Homotopy type theory : how to disprove that <span class="math-container">$0=\mathrm{succ}(0)$</span> in the type <span class="math-container">$\mathbb N$</span></a>.</p> <p>Section 2.13 in <a href="https://homotopytypetheory.org/book/" rel="nofollow noreferrer">The HoTT Book</a> uses &quot;the encode-decode method to characterize the path space of the natural numbers&quot; by defining a type family:</p> <p><span class="math-container">$$\mathrm{code} : \mathbb N \to \mathbb N \to \cal U$$</span></p> <p>with</p> <p><span class="math-container">$$\begin{array}{rcl} \mathrm{code}(0,0) &amp; :\equiv &amp; \mathbf 1\newline \mathrm{code}(\mathrm{succ} (m),0) &amp; :\equiv &amp; \mathbf 0\newline \dots &amp; :\equiv &amp; \dots\newline \dots &amp; :\equiv &amp; \dots \end{array}$$</span></p> <p>To my understanding, <span class="math-container">$\mathrm{code}$</span> is only well-defined, if (in particular) <span class="math-container">$0:\mathbb N$</span> and <span class="math-container">$\mathrm{succ}(0):\mathbb N$</span> are <strong>not</strong> judgementally equal. How can we be sure that this is the case?</p>
Daniel Gratzer
76,636
<p>It's a bit confused to view <span class="math-container">$\mathsf{code}$</span> as well-defined because 0 and its successor are not definitionally equal. Rather, it's well-defined because of the induction principle associated with the natural number type; the pattern-matching notation is a shorthand for invoking the induction principle. See Section 1.9 of the HoTT book for a discussion of this induction principle.</p> <p>If we lacked this induction principle, even if <span class="math-container">$0$</span> and <span class="math-container">$\mathsf{succ}(0)$</span> were not definitionally equal we would not be able to define <span class="math-container">$\mathsf{code}$</span> in this way.</p>
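<p>To illustrate the point, here is a very informal, untyped sketch of how <span class="math-container">$\mathsf{code}$</span> can be assembled from the recursion principle alone, with unary naturals iterated by a recursor and the types <span class="math-container">$\mathbf 1$</span> and <span class="math-container">$\mathbf 0$</span> modelled as strings; the function names are mine, not the book's, and no definitional equality of <span class="math-container">$0$</span> and <span class="math-container">$\mathsf{succ}(0)$</span> is ever consulted:</p>

```python
UNIT, EMPTY = "1", "0"   # stand-ins for the unit and empty type

def nat_rec(base, step, n):
    """Recursion principle of N: iterate `step` n times from `base`."""
    acc = base
    for _ in range(n):
        acc = step(acc)
    return acc

# code(0, -)      : 0 |-> 1,  succ(n) |-> 0   (step ignores its argument)
code_zero = lambda n: nat_rec(UNIT, lambda _prev: EMPTY, n)

# code(succ m, -) is built from code(m, -):
#   0 |-> 0,  succ(n) |-> code(m, n)
def succ_case(code_m):
    return lambda n: EMPTY if n == 0 else code_m(n - 1)

def code(m, n):
    return nat_rec(code_zero, succ_case, m)(n)

print(code(0, 0), code(1, 0), code(0, 1), code(3, 3), code(2, 5))
# prints: 1 0 0 1 0  -- code(m, n) is the unit type exactly when m == n
```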
2,391,812
<p>The question is to find the largest integer that divides all $p^4-1$, where p is a prime greater than 5. Being asked this question, I just assume this number exists. Set $p = 7$, then $p^4-1=2400$. I don't have any background in number theory and not sure what to do next. Thank you for your help! </p>
Community
-1
<p>Let $n$ be the number you are trying to find.</p> <p>One of the main reasons to look at an example $p=7$ is to gain information about $n$. If $n$ divides all $p^4 - 1$, and $7^4 - 1 = 2400$, then $n$ divides $2400$.</p> <p>Thus, you've gained information about $n$. Follow that up &mdash; what information does knowing that $n \mid 2400$ give you?</p> <p>In my opinion, the next obvious things to do are to try the next few primes and gain more information about $n$.</p> <p>My expectation is that will be enough to completely determine what $n$ ought to be, or at least narrow it down to a very small number of possibilities, at which point you can turn towards trying to <em>prove</em> that one of the possibilities is actually the value of $n$.</p>
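<p>Following that program by machine (a sketch: gather information from the first several primes, then confirm the resulting divisibility claim):</p>

```python
from math import gcd

primes = [7, 11, 13, 17, 19, 23, 29, 31, 37, 41]   # primes > 5

n = 0
for p in primes:
    n = gcd(n, p**4 - 1)
print(n)   # 240

# 240 = 16 * 3 * 5 really does divide p^4 - 1 for every prime p > 5:
# p is odd and coprime to 3 and 5, so 16 | p^4 - 1 (since 8 | p^2 - 1),
# 3 | p^2 - 1, and 5 | p^4 - 1 by Fermat's little theorem.
assert all((p**4 - 1) % 240 == 0 for p in primes)
```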
567,428
<p>Let $A \ne V$ be a subspace of $V$ and $B$ a linearly independent subset of $V$. Prove that $B$ can be completed to a basis of $V$ with vectors from $V \setminus A$.</p> <p>OK, I started with: </p> <p>$K=\operatorname{lin}(v_1,v_2,v_3,\dotsc)$</p> <p>and if any of $v_1,v_2,\dotsc$ belongs to $A$ we can replace this vector by a linear combination of vectors which don't belong to $A$ </p> <p>So $v_{x_1},v_{x_2},\dotsc \in A$</p> <p>$v_{x_1}\in\operatorname{lin}(w_{1,1},w_{1,2},\dotsc)$<br> $v_{x_2}\in\operatorname{lin}(w_{2,1},w_{2,2},\dotsc)$<br> etc.</p> <p>Hence $V=\operatorname{lin}(w_{1,1},w_{1,2},\dots,w_{2,1},\dots,w_{i,j})$</p> <p>and I don't know what to do next, but what I know is that I should use the Steinitz exchange lemma.</p> <p>Does anybody have an idea how to finish this proof?</p>
user99680
99,680
<p>This should do it: Let $B:=\{v_1,\dots,v_n\}$ and consider a basis $B_V:=\{v_1,\dots,v_m\}$ for $V$ (i.e., let $\dim V=m$); here $n \le m$. Construct a matrix $M_{ij}$ with $n+m$ rows, where the first $n$ rows are the vectors in $B$ and the remaining $m$ rows are the vectors of $B_V$. Now do Gaussian elimination on $M$. The result will be a collection of $m$ linearly-independent vectors, the first $n$ of which will be the vectors in $B$.</p>
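<p>The elimination idea can be sketched in code (exact arithmetic with fractions; this is a greedy variant of the Gaussian-elimination approach, keeping the $B$ rows and then adding candidate rows that are independent of what has been kept so far; the function names are mine):</p>

```python
from fractions import Fraction

def independent(rows, v):
    """Check whether v lies outside the span of `rows` by elimination.

    Assumes `rows` are already linearly independent, as maintained below.
    """
    m = [list(map(Fraction, r)) for r in rows] + [list(map(Fraction, v))]
    row = 0
    for c in range(len(m[0])):
        piv = next((r for r in range(row, len(m)) if m[r][c]), None)
        if piv is None:
            continue
        m[row], m[piv] = m[piv], m[row]
        for r in range(len(m)):
            if r != row and m[r][c]:
                f = m[r][c] / m[row][c]
                m[r] = [a - f * b for a, b in zip(m[r], m[row])]
        row += 1
    return row == len(rows) + 1   # rank grew, so v was independent

def extend_to_basis(B, candidates):
    basis = list(B)
    for v in candidates:
        if independent(basis, v):
            basis.append(v)
    return basis

B = [(1, 1, 0)]
basis = extend_to_basis(B, [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
print(basis)   # keeps (1,1,0) and adds two of the standard vectors
```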
2,447,549
<p>If so, could someone give me an example?</p>
Ben Grossmann
81,360
<p>Yes. For example, take the transformation $T:\Bbb R \to \Bbb R^2$ given by $$ T(x) = (x,0) $$</p>
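<p>A minimal numerical sketch of this example, viewing $T$ as the $2\times 1$ matrix and checking ranks with numpy:</p>

```python
import numpy as np

# T : R -> R^2, T(x) = (x, 0), represented by the 2x1 matrix A
A = np.array([[1.0],
              [0.0]])

# injective: full column rank, so the kernel {x : Ax = 0} is trivial
assert np.linalg.matrix_rank(A) == A.shape[1]

# not surjective: the image is only the x-axis, e.g. (0, 1) is never attained
assert np.linalg.matrix_rank(A) < 2
```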
2,447,549
<p>If so, could someone give me an example?</p>
Fred
380,717
<p>Let $V$ be the vector space of all real sequences and let $T:V \to V$ be defined by</p> <p>$T(x_1,x_2,x_3,....)=(0,x_1,x_2,x_3,...)$.</p> <p>Then $T$ is injective but not surjective.</p>
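<p>A small sketch of the shift map, with finite initial segments standing in for the infinite sequences (a simplifying assumption for illustration):</p>

```python
def T(x):
    """Right shift: (x1, x2, ...) -> (0, x1, x2, ...)."""
    return (0,) + tuple(x)

def left_inverse(y):
    """Recovers x from T(x); its existence witnesses injectivity."""
    return tuple(y)[1:]

x = (3, 1, 4, 1, 5)
assert left_inverse(T(x)) == x   # injective: T(x) determines x

# not surjective: every T(x) starts with 0, so (1, 0, 0, ...) has no preimage
assert T(x)[0] == 0
```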
4,214,892
<p>In the spherical coordinate system</p> <p><span class="math-container">$x=r \sin \theta \cos \phi$</span></p> <p><span class="math-container">$y=r \sin \theta \sin \phi$</span></p> <p><span class="math-container">$z=r \cos \theta$</span></p> <p><span class="math-container">$\theta$</span> lies in <span class="math-container">$[0,\pi]$</span>, while <span class="math-container">$\phi$</span> is in <span class="math-container">$[-\pi,\pi)$</span>. Why do we not need for the sign of <span class="math-container">$\theta$</span>?</p>
Mathman
777,644
<p>Imagine standing in the middle of your kitchen. You can turn around through a vertical axis (your $\phi$) and, with your arm by your side, can raise or lower your hand (your $\theta$). Then you can point to any object in your room without breaking your elbow (that would be $\theta$ more than $180^\circ$).</p> <p>Also take care here: many sources flip theta and phi. I think in your definition theta is the angle to the z-axis (not really the polar angle). For most it is phi. Your definition is common but confusing. Much better is <span class="math-container">$z=r\cos(\phi)$</span> with <span class="math-container">$\phi$</span> ranging from 0 to 180. This is the source of the confusion.</p>
4,214,892
<p>In the spherical coordinate system</p> <p><span class="math-container">$x=r \sin \theta \cos \phi$</span></p> <p><span class="math-container">$y=r \sin \theta \sin \phi$</span></p> <p><span class="math-container">$z=r \cos \theta$</span></p> <p><span class="math-container">$\theta$</span> lies in <span class="math-container">$[0,\pi]$</span>, while <span class="math-container">$\phi$</span> is in <span class="math-container">$[-\pi,\pi)$</span>. Why do we not need for the sign of <span class="math-container">$\theta$</span>?</p>
miracle173
11,206
<p>We have</p> <p><span class="math-container">$$x=r \sin (-\theta) \cos \phi=r (-\sin \theta) \cos \phi=r \sin \theta (-\cos \phi)=r \sin \theta \cos(\phi+\pi)$$</span> <span class="math-container">$$y=r \sin (-\theta) \sin \phi=r (-\sin \theta) \sin \phi=r \sin \theta (-\sin \phi)=r \sin \theta \sin (\phi+\pi)$$</span> <span class="math-container">$$z=r \cos (-\theta)=r \cos \theta$$</span></p> <p>So <span class="math-container">$(r,-\theta,\phi)$</span> represents the same point as <span class="math-container">$(r,\theta,(\phi+\pi )\pmod {2\pi})$</span></p>
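<p>A quick numerical check of this identity (plain Python, with arbitrarily chosen $r$, $\theta$, $\phi$):</p>

```python
import math

def to_cartesian(r, theta, phi):
    return (r * math.sin(theta) * math.cos(phi),
            r * math.sin(theta) * math.sin(phi),
            r * math.cos(theta))

r, theta, phi = 2.0, 0.7, 1.1
p = to_cartesian(r, -theta, phi)                        # negative polar angle
q = to_cartesian(r, theta, (phi + math.pi) % (2 * math.pi))
# (r, -theta, phi) and (r, theta, phi + pi mod 2pi) are the same point
assert all(math.isclose(a, b, abs_tol=1e-12) for a, b in zip(p, q))
```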
2,925,160
<p>We know that <span class="math-container">$\mathbb{R}$</span> is a linear space over <span class="math-container">$\mathbb{Q}$</span>, denoted by <span class="math-container">$\mathbb{R_{\mathbb{Q}}}$</span>, where the vector addition is that of the real numbers and the scalar multiplication is a number in <span class="math-container">$\mathbb{Q}$</span> times a number in <span class="math-container">$\mathbb{R}$</span>. Then <span class="math-container">$\forall n\in\mathbb{N^{+}}$</span>, <span class="math-container">$\{1,\sqrt[n]{2},\sqrt[n]{2^2},\cdots,\sqrt[n]{2^{n-1}}\}$</span> is a linearly independent set in <span class="math-container">$\mathbb{R_{\mathbb{Q}}}.$</span> Further, we get that <span class="math-container">$\mathbb{R_{\mathbb{Q}}}$</span> is an infinite dimensional linear space.</p> <p>Let <span class="math-container">$\mathbb{Q}(\sqrt[3]{2}):=\{a+b\sqrt[3]{2}+c\sqrt[3]{2^2} \Big|a,b,c\in \mathbb{Q}\}.$</span></p> <p>Similarly, <span class="math-container">$\mathbb{R}$</span> is a linear space over <span class="math-container">$\mathbb{Q}(\sqrt[3]{2})$</span>, denoted by <span class="math-container">$\mathbb{R_{\mathbb{Q}(\sqrt[3]{2})}}$</span>, where the vector addition is that of the real numbers and the scalar multiplication is a number in <span class="math-container">$\mathbb{Q}(\sqrt[3]{2})$</span> times a number in <span class="math-container">$\mathbb{R}$</span>. We can prove that <span class="math-container">$\mathbb{R_{\mathbb{Q}(\sqrt[3]{2})}}$</span> is also an infinite dimensional linear space. In fact, if we suppose <span class="math-container">$\dim(\mathbb{R_{\mathbb{Q}(\sqrt[3]{2})}})\in \mathbb{N^{+}},$</span> then <span class="math-container">$\dim(\mathbb{R_{\mathbb{Q}}})\in\mathbb{N^{+}}$</span> holds, a contradiction. 
But for all <span class="math-container">$n\in\mathbb{N^{+}}$</span>, how can I find <span class="math-container">$n$</span> elements of <span class="math-container">$\mathbb{R_{\mathbb{Q}(\sqrt[3]{2})}}$</span> that are linearly independent in <span class="math-container">$\mathbb{R_{\mathbb{Q}(\sqrt[3]{2})}}$</span>?</p>
Magdiragdag
35,584
<p>The following may feel like cheating: pick your favorite transcendental element of <span class="math-container">${\mathbb R}$</span>, e.g., <span class="math-container">$\pi$</span>. Then <span class="math-container">$1, \pi, \pi^2, \dots, \pi^{n-1}$</span> are linearly independent over the field of algebraic reals and in particular over <span class="math-container">${\mathbb Q}(\sqrt[3]{2})$</span>. </p> <p>If you want to stick to algebraic elements, look at <span class="math-container">$1, \sqrt[n]{2}, \dots, \sqrt[n]{2^{n-1}}$</span> (i.e., your own example) for <span class="math-container">$n$</span> that are not divisible by <span class="math-container">$3$</span>. They form a basis of <span class="math-container">${\mathbb Q}(\sqrt[n]{2})$</span> over <span class="math-container">${\mathbb Q}$</span>, but also of <span class="math-container">${\mathbb Q}(\sqrt[n]{2},\sqrt[3]{2})$</span> over <span class="math-container">${\mathbb Q}(\sqrt[3]{2})$</span>. </p>
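<p>For the algebraic example, the degree fact behind the independence can be checked with sympy (here for $n=5$, which is not divisible by $3$):</p>

```python
from sympy import Symbol, Rational, minimal_polynomial, degree

x = Symbol('x')
# 2**(1/5) has degree 5 over Q; since gcd(5, 3) = 1, the powers
# 1, 2**(1/5), ..., 2**(4/5) remain independent over Q(2**(1/3))
p = minimal_polynomial(2 ** Rational(1, 5), x)
assert (p - (x ** 5 - 2)).expand() == 0   # minimal polynomial is x^5 - 2
assert degree(p, x) == 5
```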
3,490,024
<p>This is part of a physics problem I was doing yesterday. I am supposed to find the maximum value of <span class="math-container">$$\mu^{-1}\cos\theta+\sin\theta$$</span> This is is supposed to produce the result of <span class="math-container">$\sqrt{1+\mu^{-2}}$</span>. However I am not getting that. By taking the derivative of the equation, and setting it equal to zero I got <span class="math-container">$$-\mu^{-1}\sin\theta+\cos\theta=0$$</span> <span class="math-container">$$\mu\cos\theta=\sin\theta$$</span> <span class="math-container">$$\mu=\tan\theta$$</span> I don’t get the same result of <span class="math-container">$\sqrt{1+\mu^{-2}}$</span> after plugging <span class="math-container">$\tan^{-1}{(\mu)}$</span> back into our equation. Can anyone help?</p>
rogerl
27,542
<p>You just need to simplify the result.</p> <p>When you set <span class="math-container">$\theta = \tan^{-1}\mu$</span> you get <span class="math-container">$$\cos\theta = \frac{1}{\sqrt{1+\mu^2}},\quad \sin\theta = \frac{\mu}{\sqrt{1+\mu^2}},$$</span> so that <span class="math-container">$$\mu^{-1}\cos\theta + \sin\theta = \frac{\mu^{-1}+\mu}{\sqrt{1+\mu^2}} = \frac{1+\mu^2}{\mu\sqrt{1+\mu^2}} = \frac{\sqrt{1+\mu^2}}{\mu} = \sqrt{\mu^{-2}+1}.$$</span></p>
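<p>A quick numerical sanity check (Python, with one arbitrary $\mu$): maximize over a fine grid of $\theta$ and compare with $\sqrt{1+\mu^{-2}}$.</p>

```python
import numpy as np

mu = 0.8
theta = np.linspace(0.0, np.pi, 200001)        # fine grid over [0, pi]
values = np.cos(theta) / mu + np.sin(theta)

# amplitude form: (1/mu) cos t + sin t has maximum sqrt(1 + 1/mu^2)
assert abs(values.max() - np.sqrt(1.0 + mu ** -2)) < 1e-8
```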
1,836,236
<p>For which $a$ and $b$ is this matrix diagonalizable?</p> <p>$$A=\begin{pmatrix} a &amp; 0 &amp; b \\ 0 &amp; b &amp; 0 \\ b &amp; 0 &amp; a \end{pmatrix}$$</p> <p>How to get those $a$ and $b$? I calculated eigenvalues and eigenvectors, but don't know what to do next?</p>
Merkh
141,708
<p>Calculating the eigenvalues, $b, b+a, a-b,$ one can then easily calculate their respective eigenvectors. For eigenvalue $b,$ \begin{equation} \begin{pmatrix} a-b&amp;0&amp;b\\0&amp;0&amp;0\\b&amp;0&amp;a-b \end{pmatrix}\begin{pmatrix} 0\\1\\0\end{pmatrix} = \begin{pmatrix} 0\\0\\0\end{pmatrix}, \end{equation} There may be special cases, such as if $a = 0,$ then the vector $\begin{pmatrix} 1\\0\\1\end{pmatrix}$ works too.</p> <p>Likewise for $a+b,$ \begin{equation} \begin{pmatrix} -b&amp;0&amp;b\\0&amp;-a&amp;0\\b&amp;0&amp;-b \end{pmatrix}\begin{pmatrix} 1\\0\\1\end{pmatrix} = \begin{pmatrix} 0\\0\\0\end{pmatrix}, \end{equation}</p> <p>And last for $a-b,$ \begin{equation} \begin{pmatrix} b&amp;0&amp;b\\0&amp;2b-a&amp;0\\b&amp;0&amp;b \end{pmatrix}\begin{pmatrix} 1\\0\\-1\end{pmatrix} = \begin{pmatrix} 0\\0\\0\end{pmatrix}. \end{equation}</p> <p>Because no assumptions about the values of $a$ and $b$ were made along the way, this operator must be diagonalizable (which we already knew from symmetry), with an eigenbasis \begin{equation} \begin{pmatrix} 0\\1\\0\end{pmatrix}, \begin{pmatrix} 1\\0\\-1\end{pmatrix}, \text{ and } \begin{pmatrix} 1\\0\\1\end{pmatrix}\end{equation}</p> <hr> <p>Note that if one remembers that matrices of the form $\begin{pmatrix} a &amp; b\\ b&amp; a \end{pmatrix} $ have the eigenbasis $\begin{pmatrix} 1 &amp; 1\\ 1&amp; -1 \end{pmatrix},$ then from a quick glance at the original matrix, you can already "know" the eigenvectors since this $2 \times 2$ is hidden within it.</p>
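<p>A numerical cross-check with numpy for one arbitrary choice of $a$ and $b$ (since the matrix is symmetric, <code>eigh</code> applies):</p>

```python
import numpy as np

a, b = 2.0, 5.0
A = np.array([[a, 0.0, b],
              [0.0, b, 0.0],
              [b, 0.0, a]])

w, V = np.linalg.eigh(A)            # symmetric matrix: eigh is appropriate
assert np.allclose(np.sort(w), np.sort([b, a + b, a - b]))

# V diagonalizes A: V^T A V is diagonal (V is orthogonal)
D = V.T @ A @ V
assert np.allclose(D - np.diag(np.diag(D)), 0.0)
```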
3,453,626
<p>I was playing around with the concepts of injectivity, surjectivity, and bijectivity recently. I used these three Wikipedia articles as references:</p> <p><a href="https://en.wikipedia.org/wiki/Injective_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Injective_function</a></p> <p><a href="https://en.wikipedia.org/wiki/Surjective_function" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Surjective_function</a></p> <p><a href="https://en.wikipedia.org/wiki/Bijection" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Bijection</a></p> <p>The third article on bijection states the following:</p> <ul> <li>A bijection is a one-to-one (injective) and onto (surjective) mapping of a set X to a set Y.</li> <li>A bijection is a function between the elements of two sets, where each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set.</li> </ul> <p>I've encountered these definitions of bijectivity many times before and they're not surprising. However, the article on injectivity also states the following about injective functions:</p> <ul> <li>Every element of the function's codomain is the image of at most one element of its domain.</li> </ul> <p>So then let's say we have two sets <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> with three and two elements respectively. We have a function <span class="math-container">$f$</span> such that <span class="math-container">$f$</span> maps exactly one element of <span class="math-container">$X$</span> onto one element of <span class="math-container">$Y$</span>. Then, by the definition of injectivity above, <span class="math-container">$f$</span> should be injective since every element of <span class="math-container">$Y$</span> is the image of one or zero (i.e. at most one) elements in <span class="math-container">$X$</span>. 
Of course, <span class="math-container">$f$</span> is also non-total.</p> <p>Now let's rather say that <span class="math-container">$f$</span> maps <em>two</em> elements of <span class="math-container">$X$</span> onto two elements of <span class="math-container">$Y$</span>. Then <span class="math-container">$f$</span> should not only be injective but also surjective since <span class="math-container">$Y$</span> has only two elements and they are both covered by elements in <span class="math-container">$X$</span>.</p> <p>Such a function should be considered bijective since it's both injective and surjective. However, it's incorrect to say that each element of <span class="math-container">$X$</span> is paired with exactly one element of <span class="math-container">$Y$</span>.</p> <p>So what am I missing here?</p>
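<p>The situation can be modeled concretely (a small Python sketch; the particular pairing is my own illustrative choice):</p>

```python
X = {1, 2, 3}
Y = {'a', 'b'}
f = {1: 'a', 2: 'b'}          # partial function: 3 has no image

total      = set(f) == X
injective  = len(set(f.values())) == len(f)   # no two inputs share an image
surjective = set(f.values()) == Y

assert injective and surjective and not total
```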
user
293,846
<p><span class="math-container">$$\begin{align} m_a^4+m_b^4+m_c^4&amp;=\frac1{16}\sum_\text{cyc.}(2b^2+2c^2-a^2)^2\\ &amp;=\frac1{16}\sum_\text{cyc.}\left(4b^4+4c^4+a^4+8b^2c^2-4a^2b^2-4c^2a^2\right)\\ &amp;=\frac1{16}(9a^4+9b^4+9c^4). \end{align}$$</span></p>
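<p>The identity is easy to confirm numerically using the median length formula $m_a=\frac12\sqrt{2b^2+2c^2-a^2}$ (Python, random triangle side lengths):</p>

```python
import math, random

def median(p, q, r):
    # length of the median to side p:  m_p = (1/2) sqrt(2q^2 + 2r^2 - p^2)
    return 0.5 * math.sqrt(2 * q * q + 2 * r * r - p * p)

random.seed(0)
for _ in range(100):
    a, b = random.uniform(1, 5), random.uniform(1, 5)
    c = random.uniform(abs(a - b) + 1e-6, a + b - 1e-6)   # triangle inequality
    lhs = median(a, b, c) ** 4 + median(b, c, a) ** 4 + median(c, a, b) ** 4
    rhs = 9.0 / 16.0 * (a ** 4 + b ** 4 + c ** 4)
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```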
1,180,293
<p>This is from <em>Herstein</em>. </p> <blockquote> <p>$4.$ $\;a)$ Given a group $G$ and a subset $U$ denote by $\hat U$ the smallest subgroup of $G$ which contains $U$ (the subgroup generated by $U$). Prove there is a subgroup $\hat U$ of $G$. </p> <p>$\;\;\;\;\;b)$ If $gug^{-1} \in U$ for all $g \in G$ and $u \in U$, prove that $\hat U$ is a normal subgroup of $G$.</p> </blockquote> <p><strong>My Work:</strong> Let $\mathscr A$ be the collection of subgroups of $G$ that contain $U$. This collection is non-empty since it includes $G$ itself. I defined the subgroup generated by $U$ to be, $$ \hat U = \bigcap_{A \in \mathscr A} A $$ Then we can prove that $\hat U$ is a subgroup of $G$, it contains $U$ and if $H$ is any subgroup of $G$ that contains $U$ then $\hat U$ is a subgroup of $H$. </p> <p>But I am having trouble proving part $b)$. Any help is appreciated. A hint would be fabulous. </p> <p>Thanks in advance. </p>
Frank Lu
41,622
<p><strong>Hint:</strong> Show that $$V:=\{v\in \hat{U}: gvg^{-1}\in \hat{U},\forall g\in G\}$$ is a subgroup containing $U$. It is clear that $V\subseteq\hat{U}$. Also by minimality of $\hat{U}$, it follows that $V\supseteq\hat{U}$. Hence $V=\hat{U}$, this means for all $v\in\hat{U}$ we have $gvg^{-1}\in\hat{U},\forall g\in G$, which means $\hat{U}$ is normal.</p>
1,180,293
<p>This is from <em>Herstein</em>. </p> <blockquote> <p>$4.$ $\;a)$ Given a group $G$ and a subset $U$ denote by $\hat U$ the smallest subgroup of $G$ which contains $U$ (the subgroup generated by $U$). Prove there is a subgroup $\hat U$ of $G$. </p> <p>$\;\;\;\;\;b)$ If $gug^{-1} \in U$ for all $g \in G$ and $u \in U$, prove that $\hat U$ is a normal subgroup of $G$.</p> </blockquote> <p><strong>My Work:</strong> Let $\mathscr A$ be the collection of subgroups of $G$ that contain $U$. This collection is non-empty since it includes $G$ itself. I defined the subgroup generated by $U$ to be, $$ \hat U = \bigcap_{A \in \mathscr A} A $$ Then we can prove that $\hat U$ is a subgroup of $G$, it contains $U$ and if $H$ is any subgroup of $G$ that contains $U$ then $\hat U$ is a subgroup of $H$. </p> <p>But I am having trouble proving part $b)$. Any help is appreciated. A hint would be fabulous. </p> <p>Thanks in advance. </p>
Fan Zheng
94,743
<p>First show that for every $g\in G$ and $A\in \mathscr A$, $gAg^{-1}\in \mathscr A$. Indeed (note $gUg^{-1}=U$, which follows from the hypothesis applied to both $g$ and $g^{-1}$),</p> <p>$$A\in \mathscr A\Rightarrow U\subset A\Rightarrow U=gUg^{-1}\subset gAg^{-1}.$$</p> <p>Now</p> <p>$$\hat U=\bigcap_{A\in\mathscr A}A\subset gAg^{-1}$$</p> <p>for every $A\in \mathscr A$. Hence</p> <p>$$\hat U\subset\bigcap_{A\in\mathscr A} gAg^{-1}=g\hat Ug^{-1}.$$</p> <p>This can also be written as</p> <p>$$g^{-1}\hat Ug\subset \hat U.$$</p> <p>Replacing $g$ by $g^{-1}$ gives</p> <p>$$g\hat Ug^{-1}\subset\hat U.$$</p> <p>Since we have inclusions both ways, the identity is proved.</p>
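<p>A concrete sanity check of the statement in a small group (Python; here $G=S_3$ and $U$ is the set of $3$-cycles, which is closed under conjugation, so $\hat U=A_3$ should come out normal):</p>

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p[q[i]], permutations represented as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def generated_subgroup(gens):
    # closure under products; in a finite group this is already a subgroup
    elems = set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return elems
        elems |= new

G = set(permutations(range(3)))     # S3
U = {(1, 2, 0), (2, 0, 1)}          # the two 3-cycles
# hypothesis: U is closed under conjugation by every g in G
assert all(compose(compose(g, u), inverse(g)) in U for g in G for u in U)

U_hat = generated_subgroup(U)       # = A3
# conclusion: U_hat is normal, i.e. g U_hat g^{-1} = U_hat for all g
assert all({compose(compose(g, h), inverse(g)) for h in U_hat} == U_hat
           for g in G)
```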
1,180,293
<p>This is from <em>Herstein</em>. </p> <blockquote> <p>$4.$ $\;a)$ Given a group $G$ and a subset $U$ denote by $\hat U$ the smallest subgroup of $G$ which contains $U$ (the subgroup generated by $U$). Prove there is a subgroup $\hat U$ of $G$. </p> <p>$\;\;\;\;\;b)$ If $gug^{-1} \in U$ for all $g \in G$ and $u \in U$, prove that $\hat U$ is a normal subgroup of $G$.</p> </blockquote> <p><strong>My Work:</strong> Let $\mathscr A$ be the collection of subgroups of $G$ that contain $U$. This collection is non-empty since it includes $G$ itself. I defined the subgroup generated by $U$ to be, $$ \hat U = \bigcap_{A \in \mathscr A} A $$ Then we can prove that $\hat U$ is a subgroup of $G$, it contains $U$ and if $H$ is any subgroup of $G$ that contains $U$ then $\hat U$ is a subgroup of $H$. </p> <p>But I am having trouble proving part $b)$. Any help is appreciated. A hint would be fabulous. </p> <p>Thanks in advance. </p>
athos
26,632
<p>First we prove that every $\hat u \in \hat U$ is a finite product of elements of $U$ and their inverses: $\hat u = u_1^{\epsilon_1} u_2^{\epsilon_2} \cdots u_k^{\epsilon_k}$ with $u_i \in U$ and $\epsilon_i \in \{1,-1\}$.</p> <p>To do so we define $\hat W$ to be the set of all such finite products, and prove $\hat W$ is exactly $\hat U$.</p> <p>Indeed, $\hat W$ contains $U$, it is closed under multiplication (concatenate two products), and it is closed under inverses (reverse the order of the factors and flip the exponents). So $\hat W$ is a subgroup of $G$ containing $U$, and by minimality $\hat U \subseteq \hat W$. Conversely, $\hat W \subseteq \hat U$, since $\hat U$ is a subgroup containing $U$ and hence contains all such products. Therefore $\hat W = \hat U$.</p> <p>With this we can proceed to</p> <blockquote> <p>b) If $gug^{-1} \in U$ for all $g\in G$ and $u\in U$, prove that $\hat U$ is a normal subgroup of G.</p> </blockquote> <p>We just proved above that every $\hat u \in \hat U$ can be written as $\hat u = u_1^{\epsilon_1} \cdots u_k^{\epsilon_k}$ with $u_i \in U$.</p> <p>So $$g\hat u g^{-1} = g\, u_1^{\epsilon_1} \cdots u_k^{\epsilon_k}\, g^{-1} = (g u_1 g^{-1})^{\epsilon_1} \cdots (g u_k g^{-1})^{\epsilon_k},$$ by inserting $g^{-1}g$ between consecutive factors. Each $g u_i g^{-1}$ lies in $U$ by hypothesis, so $g \hat u g^{-1} \in \hat U$.</p> <p>So $\hat U$ is normal.</p>
2,076,984
<p>I have already asked <a href="https://math.stackexchange.com/questions/2075949/how-to-draw-diagram-for-line-and-parabola">a similar question</a>. But the answer in that question is very difficult to understand. I am new to this concept so I am looking for an easier explanation.</p> <blockquote> <p>My main <strong>question</strong> is: why do we subtract things to find the area using the definite integral?</p> </blockquote> <p>Here are a couple of figures -</p> <ol> <li>Two parabolas - <a href="https://i.stack.imgur.com/aQC8h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aQC8h.jpg" alt="page 1"></a></li> </ol> <p>Area $\displaystyle = \int \left(\sqrt{x} - x^2 \right) dx$</p> <p>Why do we subtract to find the area? Why not add?</p> <ol start="2"> <li>Similarly in parabola and line.</li> </ol> <p><a href="https://i.stack.imgur.com/emac3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/emac3.png" alt="page 2"></a></p> <p>Area $\displaystyle = \int (x + 2 - x^2)dx$</p>
Dan Uznanski
167,895
<p>$\int_a^b f(x)\,dx$ gives the area "under the curve" of $f(x)$ between $a$ and $b$: the area from the $x$-axis to the curve, across that interval.</p> <p>In the cases given, you have two curves that you're dealing with instead; one (which I'll call $f(x)$) is higher and the other ($g(x)$) is lower. Finding the area between these curves means finding the area that is under $f$ but not under $g$. This, it is simple to see, we can do by subtraction: the vertical space between $f(x) = \sqrt{x}$ and $g(x) = x^2$ at, say, $1/4$ is equal to $$f\left(\frac{1}{4}\right)-g\left(\frac{1}{4}\right) = \frac{1}{2} - \frac{1}{16} = \frac{7}{16}$$ because while there's $1/2$ below $f$ there, $1/16$ is also below $g$ and thus shouldn't get counted.</p> <p>Here's this pair of functions with their integrals displayed, overlapping. The vertical space mentioned above is also shown as a vertical line.</p> <p><a href="https://i.stack.imgur.com/BxN2p.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BxN2p.png" alt="enter image description here"></a></p> <p>The area under $f$ contains both the pale area and the darker area; the area under $g$ contains only the darker area. But we need only the pale area; we can find both (by taking the integral of $f$), and then remove the dark area (by taking the integral of $g$ and subtracting that from the integral of $f$).</p>
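<p>This subtraction can be checked numerically for the first figure, where $f(x)=\sqrt x$ and $g(x)=x^2$ cross at $x=0$ and $x=1$ (a plain-Python midpoint rule; the exact area is $\int_0^1(\sqrt x - x^2)\,dx = \tfrac23-\tfrac13=\tfrac13$):</p>

```python
def midpoint_integral(f, a, b, n=100000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# area between the curves = integral of (upper - lower)
area = midpoint_integral(lambda x: x ** 0.5 - x ** 2, 0.0, 1.0)
assert abs(area - 1.0 / 3.0) < 1e-6     # exact value: 1/3
```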
2,076,984
<p>I have already asked <a href="https://math.stackexchange.com/questions/2075949/how-to-draw-diagram-for-line-and-parabola">a similar question</a>. But the answer in that question is very difficult to understand. I am new to this concept so I am looking for an easier explanation.</p> <blockquote> <p>My main <strong>question</strong> is: why do we subtract things to find the area using the definite integral?</p> </blockquote> <p>Here are a couple of figures -</p> <ol> <li>Two parabolas - <a href="https://i.stack.imgur.com/aQC8h.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/aQC8h.jpg" alt="page 1"></a></li> </ol> <p>Area $\displaystyle = \int \left(\sqrt{x} - x^2 \right) dx$</p> <p>Why do we subtract to find the area? Why not add?</p> <ol start="2"> <li>Similarly in parabola and line.</li> </ol> <p><a href="https://i.stack.imgur.com/emac3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/emac3.png" alt="page 2"></a></p> <p>Area $\displaystyle = \int (x + 2 - x^2)dx$</p>
Narasimham
95,860
<p>It is not complicated. The two regions overlap, and your doubt is where the minus sign comes from &mdash; which you expressed nicely with a graphic. I edited your picture to explain as follows.</p> <p><a href="https://i.stack.imgur.com/ny5W9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ny5W9.png" alt="Y difference"></a></p> <p>For a rectangle standing on the x-axis, Area = height $H$ multiplied by width. Here the height is <em>variable</em>: $H = y_2-y_1$. In the sideways bulged patch this height goes from zero up to a maximum and back to zero, while the width of each strip is the constant $dx$. The area is then the summation of the strips $(y_2-y_1)\,dx$ by integration.</p>
2,444,376
<p>Let $u:\mathbb{R}^n\rightarrow(-\infty,+\infty]$ be a convex function, and suppose that $u$ admits a point of minimum. I define:</p> <p>$$(\varphi_\epsilon*u)(x)=\int_{\mathbb{R}^n}\varphi_\epsilon(y)u(x-y)dy, $$ where $\varphi_{\epsilon}$ is the standard mollifier. Let's introduce the notation: $$\tilde{u}_i=\varphi_{1/i}*u,\quad\forall i\in\mathbb{N}. $$</p> <p>I know that the function $\tilde{u}_i$ is convex, that it converges pointwise to $u$ in $\mathbb{R}^n$ and uniformly on compact sets of $\mathbb{R}^n$.</p> <blockquote> <p>If I denote with $y_i:=\min_{\mathbb{R}^n}\tilde{u}_i$, is it true that the sequence of the $y_i$ converges to $y=\min u$?</p> </blockquote> <p>I think yes, because I have uniform convergence on compact sets. However I cannot prove it. How can I do it?</p> <p>Thanks for the help!</p>
H. H. Rugh
355,946
<p>Assuming that your mollifier has compact support, e.g. within $[-\epsilon,\epsilon]$ and is positive, the result follows from the fact that $u_\epsilon(x)=(\phi_\epsilon \star u)(x)$ lies in the convex hull of the values of $u$ within the interval $[x-\epsilon,x+\epsilon]$.</p> <p>Suppose the minimum is $m=u(x_0)$ and let $\delta&gt;0$ be given. We want to find $\epsilon(\delta)$ so that the minimum of $u_\epsilon(x)$ is in the $\delta$ neighborhood of $x_0$ whenever $0&lt;\epsilon&lt;\epsilon(\delta)$.</p> <p>By convexity, $m_1=\min_{|x-x_0|\geq \delta/2}u(x)&gt;m$ so if $\epsilon&lt;\delta/2$ then by what was said above regarding convex hull, for any $|x-x_0|\geq \delta$ we have $u_\epsilon(x)\geq m_1$. So now choose $\epsilon_1$ so that $\max_{|x-x_0|\leq \epsilon_1} u(x) &lt;m_1$. Then for $\epsilon&lt;\min\{\delta/2,\epsilon_1\}$ we have $u_\epsilon(x_0)&lt;m_1$ and we are done.</p> <p>Convexity is not really needed, it is enough to assume that $u$ is continuous, has a unique minimum and $$ \liminf_{|x|\rightarrow \infty} \;u(x) &gt; \min u $$</p>
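<p>A numerical illustration of the statement (Python/numpy; the discrete convolution with a bump kernel, the test function $u(x)=|x|+0.1x$, and the grids are my own choices):</p>

```python
import numpy as np

def mollify(u, eps, x_grid, n_kernel=801):
    """Riemann-sum approximation of (phi_eps * u) on x_grid."""
    y = np.linspace(-eps, eps, n_kernel)[1:-1]     # open interval: avoids 1/0
    phi = np.exp(-1.0 / (1.0 - (y / eps) ** 2))    # standard bump kernel shape
    w = phi / phi.sum()                            # discrete weights, sum = 1
    return np.array([np.dot(w, u(x - y)) for x in x_grid])

u = lambda x: np.abs(x) + 0.1 * x                  # convex, min u = u(0) = 0
x_grid = np.linspace(-2.0, 2.0, 401)

# minima of the mollified functions decrease toward min u = 0 as eps -> 0
mins = [mollify(u, eps, x_grid).min() for eps in (0.5, 0.25, 0.125)]
assert mins[0] > mins[1] > mins[2] > 0
```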
971,997
<p>How do we find $$\Re\left[\int_0^{\large\frac{\pi}{2}} e^{\Large e^{i\theta}}d\theta\right]$$</p> <p>In the shortest and easiest possible manner?</p> <p>I cannot think of anything good.</p>
Arian
172,588
<p>$$e^{i\theta}=z\Rightarrow ie^{i\theta}d\theta=dz\Rightarrow d\theta=\frac{dz}{iz}$$ Consider the following integral $$\oint_{\gamma}e^z\frac{dz}{iz}$$ where $\gamma$ is a contour of a quarter of a unit disk. Note that the integrand has a simple pole at $z=0$ so the chosen contour will go in the clockwise direction around zero for a quarter of a circle. The above integral can be partitioned as follows $$\oint_{\gamma}e^z\frac{dz}{iz}=\int_{|z|=1,0\leq\theta\leq \pi/2}e^z\frac{dz}{iz}+\int^{\epsilon}_{1}e^{ix}\frac{dx}{ix}+\oint_{|z|=\epsilon,-\pi/2\leq\theta\leq 0}e^{z}\frac{dz}{iz}+\int^{1}_{\epsilon}e^{x}\frac{dx}{ix}$$ The first integral is the one of interest, as $$\int_{|z|=1,0\leq\theta\leq \pi/2}e^z\frac{dz}{iz}=\int^{\pi/2}_{0}e^{e^{i\theta}}\,d\theta$$ The second integral can be rewritten, as $\epsilon\to 0$, in the following way $$\lim_{\epsilon\to 0}\int^{\epsilon}_{1}e^{ix}\frac{dx}{ix}=\int^{0}_{1}e^{ix}\frac{dx}{ix}=-\int^{1}_{0}e^{ix}\frac{dx}{ix}$$ The third integral can be computed using the fractional residue theorem $$\oint_{|z|=\epsilon,-\pi/2\leq\theta\leq 0}e^{z}\frac{dz}{iz}=-\frac{\pi}{2}i\cdot Res\{\frac{e^{z}}{iz},z=0\}=-\frac{\pi}{2}$$ Note that the angle is negative as the orientation is clockwise for the little arc of radius $\epsilon$. The last integral can be written in the limit as $$\lim_{\epsilon\to 0}\int^{1}_{\epsilon}e^{x}\frac{dx}{ix}=\int^{1}_{0}e^{x}\frac{dx}{ix}$$ Putting all the pieces together yields $$\int^{\pi/2}_{0}e^{e^{i\theta}}\,d\theta-\int^{1}_{0}e^{ix}\frac{dx}{ix}-\frac{\pi}{2}+\int^{1}_{0}e^{x}\frac{dx}{ix}=0$$ The equality to zero comes from the fact that there are no poles within the quarter-disk contour, so Cauchy's theorem applies. 
The last equality is equivalent to $$\int^{\pi/2}_{0}e^{e^{i\theta}}\,d\theta=\int^{1}_{0}e^{ix}\frac{dx}{ix}+\frac{\pi}{2}-\int^{1}_{0}e^{x}\frac{dx}{ix}=-i\int^{1}_{0}e^{ix}\frac{dx}{x}+\frac{\pi}{2}+i\int^{1}_{0}e^{x}\frac{dx}{x}$$ Using $e^{ix}=\cos(x)+i\sin(x)$ then \begin{align}\int^{\pi/2}_{0}e^{e^{i\theta}}\,d\theta&amp;=-i\int^{1}_{0}(\cos(x)+i\sin(x))\frac{dx}{x}+\frac{\pi}{2}+i\int^{1}_{0}e^{x}\frac{dx}{x}\\&amp;=\frac{\pi}{2}+\int^{1}_{0}\sin(x)\frac{dx}{x}+i\int^{1}_{0}\frac{e^x-\cos(x)}{x}\,dx\end{align} Equating the real and imaginary parts, you get $$\Re(\int^{\pi/2}_{0}e^{e^{i\theta}}\,d\theta)=\frac{\pi}{2}+\int^{1}_{0}\sin(x)\frac{dx}{x}$$</p>
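<p>The final identity can be verified numerically (plain Python; a midpoint rule conveniently keeps $\sin(x)/x$ away from $x=0$):</p>

```python
import cmath, math

def midpoint_integral(f, a, b, n=200000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# left side: real part of the original integral
lhs = midpoint_integral(lambda t: cmath.exp(cmath.exp(1j * t)).real,
                        0.0, math.pi / 2)
# right side: pi/2 + integral of sin(x)/x over [0, 1]
rhs = math.pi / 2 + midpoint_integral(lambda x: math.sin(x) / x, 0.0, 1.0)
assert abs(lhs - rhs) < 1e-6
```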