1,885,492
<p>Question.</p> <blockquote> <p>Prove that if $a_{1}, a_{2}, \dots, a_{n} &gt; 0$ then $$ \frac{a_{1}+a_{2}+\dots+a_{n}}{n} \ge \frac{n}{\frac{1}{a_{1}}+\frac{1}{a_{2}}+\dots+\frac{1}{a_{n}}} $$ </p> </blockquote> <p><strong>Proof</strong> $$\left(a_{1}+a_{2}+\dots+a_{n}\right)\left(\frac{1}{a_{1}}+\frac{1}{a_{2}}+\dots+\frac{1}{a_{n}}\right) = n+\underbrace{\left(\frac{a_{1}}{a_{2}}+\frac{a_{2}}{a_{1}}\right)+\dots+\left(\frac{a_{n-1}}{a_{n}}+\frac{a_{n}}{a_{n-1}}\right)}_{n(n-1)/2\ \text{terms}} \ge n+2\cdot\frac{n(n-1)}{2}=n^{2}$$ In this proof I didn't understand the step "$n(n-1)/2$ terms". I mean, how can the number of terms be $n(n-1)/2$? Can anybody explain it? Thanks in advance!</p>
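The count in question is just the number of unordered pairs $\{i,j\}$ with $i\ne j$, which is $\binom{n}{2}=n(n-1)/2$. As a sketch (my own illustration, not part of the original post), exact rational arithmetic in Python can confirm both the pair count and the expansion used in the proof:

```python
from fractions import Fraction
from itertools import combinations

def check_identity(a):
    """(a_1+...+a_n)(1/a_1+...+1/a_n) = n + sum over pairs (a_i/a_j + a_j/a_i)."""
    n = len(a)
    lhs = sum(a, Fraction(0)) * sum((Fraction(1) / x for x in a), Fraction(0))
    pairs = list(combinations(range(n), 2))
    assert len(pairs) == n * (n - 1) // 2          # the n(n-1)/2 count in question
    rhs = n + sum(a[i] / a[j] + a[j] / a[i] for i, j in pairs)
    return lhs == rhs

print(check_identity([Fraction(k) for k in (1, 2, 3, 4, 5)]))  # True
```

Each pair then contributes a term of the form $x + 1/x \ge 2$, which is where the lower bound $n + 2\cdot n(n-1)/2$ comes from.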
Tito Piezas III
4,781
<p>The first has no solutions with positive integer $m$. However, the second case,</p> <p>$$\begin{aligned} m&amp;=\dfrac{1}{18}\left(\sqrt{48r^2+1}-1\right)\\ &amp;=\dfrac{1}{18}\left(x-1\right)\\ &amp;\equiv {3\pmod{4}} \end{aligned}$$</p> <p>has infinitely many given by,</p> <p>$$x= \frac{ (7+\sqrt{48})^{6k+3}+(7-\sqrt{48})^{6k+3}}{2}=1351,\; 9863382151,\; 72010600134783751\dots$$</p> <p>$$r = \frac{ (7+\sqrt{48})^{6k+3}-(7-\sqrt{48})^{6k+3}}{2\sqrt{48}}=195,\; 1423656585,\; 10393834843080975\dots$$</p>
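The listed values can be verified exactly. The sketch below is my own illustration (not part of the answer); it computes $(7+\sqrt{48})^{6k+3}$ as an integer pair in $\mathbb{Z}[\sqrt{48}]$, assuming only the formulas displayed above:

```python
# Exact arithmetic in Z[sqrt(48)]: the pair (u, v) stands for u + v*sqrt(48).
def mul(p, q):
    (a, b), (c, d) = p, q
    return (a * c + 48 * b * d, a * d + b * c)

def power(p, n):
    out = (1, 0)
    for _ in range(n):
        out = mul(out, p)
    return out

for k in range(3):
    x, r = power((7, 1), 6 * k + 3)   # x + r*sqrt(48) = (7 + sqrt(48))^(6k+3)
    assert x * x - 48 * r * r == 1    # norm 1, i.e. 48 r^2 + 1 = x^2
    m = (x - 1) // 18                 # m = (sqrt(48 r^2 + 1) - 1)/18
    assert (x - 1) % 18 == 0 and m % 4 == 3
    print(x, r, m)
```

For $k=0$ this reproduces $x=1351$, $r=195$, and $m=75\equiv 3\pmod 4$.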
4,187,498
<p>I am studying the proof of the Prime Number Theorem and I want to show that the function <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> has a simple pole at <span class="math-container">$s=1$</span>.</p> <p>I think that if I can find the Laurent series expansion of <span class="math-container">$\zeta(s)$</span>, I could then find the same for <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> and then conclude that it has a simple pole at <span class="math-container">$s=1$</span>.(Correct me if I am wrong.)</p> <p>But, how do I find the Laurent expansion ? I know that <span class="math-container">$\zeta(s)$</span> has a simple pole at <span class="math-container">$s=1$</span> but how can I use this to find the complete expansion ? Also, do I even need to find the complete expansion to show that <span class="math-container">$\frac{\zeta'(s)}{\zeta(s)}$</span> has a simple pole at <span class="math-container">$s=1$</span> ? Is there any other way ?</p> <p>Please help. Any help/hint shall be highly appreciated.</p>
robjohn
13,854
<p>This answer was posted to <a href="https://math.stackexchange.com/questions/123531/how-to-show-that-the-laurent-series-of-the-riemann-zeta-function-has-gamma-as">another question</a> which dealt only with the constant term of the Laurent expansion, but it actually answers this question, so I will repost it here.</p> <hr /> <p><strong>A Simple Derivation of the Laurent Series for Zeta</strong> <span class="math-container">$$ \begin{align} &amp;\frac1{s-1}+\sum_{k=1}^mk^{-s}-\frac{m^{1-s}-1}{1-s}\tag1\\ &amp;=\frac1{s-1}+\sum_{k=1}^m\frac1ke^{(1-s)\log(k)}-\frac{e^{(1-s)\log(m)}-1}{1-s}\tag2\\ &amp;=\frac1{s-1}+\sum_{n=0}^\infty\left[\sum_{k=1}^m\frac1k\frac{(1-s)^n\log(k)^n}{n!}-\frac{(1-s)^n\log(m)^{n+1}}{(n+1)!}\right]\tag3\\ &amp;=\frac1{s-1}+\sum_{n=0}^\infty\frac{(1-s)^n}{n!}\left[\sum_{k=1}^m\frac{\log(k)^n}k-\frac{\log(m)^{n+1}}{n+1}\right]\tag4 \end{align} $$</span> Explanation:<br /> <span class="math-container">$(2)$</span>: convert powers to exponentials<br /> <span class="math-container">$(3)$</span>: expand exponentials about <span class="math-container">$s=1$</span><br /> <span class="math-container">$(4)$</span>: pull out a common factor</p> <p>Taking the limit as <span class="math-container">$m\to\infty$</span>, for <span class="math-container">$s\gt1$</span>, <span class="math-container">$$ \bbox[5px,border:2px solid #C0A000]{\zeta(s)=\frac1{s-1}+\sum_{n=0}^\infty\frac{(1-s)^n}{n!}\,\gamma_n}\tag5 $$</span> where <span class="math-container">$$ \bbox[5px,border:2px solid #C0A000]{\gamma_n=\lim_{m\to\infty}\left[\sum_{k=1}^m\frac{\log(k)^n}k-\frac{\log(m)^{n+1}}{n+1}\right]}\tag6 $$</span> <span class="math-container">$\gamma_n$</span> is the <span class="math-container">$n^\text{th}$</span> <a href="https://en.wikipedia.org/wiki/Stieltjes_constants" rel="nofollow noreferrer">Stieltjes constant</a>; <span class="math-container">$\gamma_0$</span> is the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Mascheroni_constant" rel="nofollow 
noreferrer">Euler-Mascheroni constant</a>.</p>
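Euler-Mascheroni">
Formula $(6)$ converges slowly but is easy to try numerically. A small sketch I added (not part of the answer), whose output should approach $\gamma_0\approx 0.57722$, the Euler-Mascheroni constant, for moderate $m$:

```python
from math import log

def stieltjes_partial(n, m):
    """The bracket in (6): sum_{k<=m} log(k)^n / k  -  log(m)^(n+1) / (n+1)."""
    return sum(log(k) ** n / k for k in range(1, m + 1)) - log(m) ** (n + 1) / (n + 1)

# gamma_0 is the Euler-Mascheroni constant, approximately 0.57722
print(stieltjes_partial(0, 10 ** 6))
```

The same function with `n=1` approximates $\gamma_1\approx-0.07282$, the next Stieltjes constant.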
970,409
<p>Let $A$, $B\in{M_2}(\mathbb{R})$ be matrices such that $A^2=B^2=I$, where $I$ is the identity matrix. </p> <p>Why can the numbers $3+2\sqrt2$ and $3-2\sqrt2$ be eigenvalues of the matrix $AB$? </p> <p>Can the numbers $2,1/2$ be eigenvalues of the matrix $AB$? </p>
Ben Grossmann
81,360
<p>Note that $$ \pmatrix{1&amp;0\\0&amp;-1} \pmatrix{a&amp; b\\-(a^2-1)/b &amp; -a} = \pmatrix{ a &amp; b\\ (a^2 - 1)/b &amp; a } $$ Every such product is similar to a matrix of this form or a triangular matrix with 1s on the diagonal. The associated characteristic polynomial is $$ x^2 - 2ax + 1 $$ Check the possible roots of this polynomial. The product of two such matrices must have eigenvalues of the form $$ \lambda = a \pm \sqrt{a^2 - 1} $$ where $a \in \Bbb C$ is arbitrary. Setting $a=3$ gives you the eigenvalues you mention, and setting $a= 5/4$ gives you $1/2$ and $2$.</p>
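As a numerical sanity check (my addition, not part of the answer), one can form the displayed product for sample values of $a$ and read the eigenvalues off the trace and determinant; the choice $b=1$ below is arbitrary:

```python
from math import isclose, sqrt

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def eig_of_product(a, b=1.0):
    """Eigenvalues of diag(1,-1) @ [[a, b], [-(a^2-1)/b, -a]] via x^2 - 2ax + 1."""
    A = [[1.0, 0.0], [0.0, -1.0]]
    B = [[a, b], [-(a * a - 1) / b, -a]]
    P = matmul(A, B)
    tr = P[0][0] + P[1][1]                       # = 2a
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]  # = 1
    assert isclose(det, 1.0)
    d = sqrt(tr * tr / 4 - det)
    return (tr / 2 + d, tr / 2 - d)

hi, lo = eig_of_product(3.0)
assert isclose(hi, 3 + 2 * sqrt(2)) and isclose(lo, 3 - 2 * sqrt(2))
hi, lo = eig_of_product(1.25)
assert isclose(hi, 2.0) and isclose(lo, 0.5)
```

This restricts to real $a$ so that the square root stays real; it confirms both pairs of eigenvalues mentioned in the answer.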
3,955,162
<p>I was trying to write this linear transformation as a matrix. The exercise is the following: let <span class="math-container">$f: R^3 \to R^4$</span> be a linear transformation whose null space is Ker <span class="math-container">$f$</span> = {(x, y, z) in <span class="math-container">$R^3$</span>: <span class="math-container">$x-2y+z = 0$</span>}. Determine the canonical matrix.</p> <p>I know how to solve this type of exercise (generally), as follows:</p> <ol> <li>I use the basis of the first vector space (R3 in this case),</li> <li>I plug x, y, z into the equations of the linear transformation,</li> <li>then I'm ready to use each component of the vectors (of R4, in this case) to create the matrix. (The canonical matrix uses the standard basis of the entire space, for example 1 0 0, 0 1 0, 0 0 1.)</li> </ol> <p>But when I'm about to solve these exercises, I simply cannot solve them. I don't know why, because I know how to do them theoretically.</p> <p>Can you help me to solve this exercise?</p>
Peter Morfe
711,689
<p>If <span class="math-container">$f([0,1])$</span> is an immersed closed curve (suitably defined --- you need <span class="math-container">$f'(1) = f'(0)$</span> --- or, in other words, <span class="math-container">$f$</span> should descend to a <span class="math-container">$C^{1}$</span> map of <span class="math-container">$S^{1}$</span>), then the answer is yes. Otherwise, it's false.</p> <p>Suppose that <span class="math-container">$\Gamma = f([0,1])$</span> is an immersed closed curve. Precisely, assume that <span class="math-container">$f(0) = f(1)$</span>, <span class="math-container">$f'(0) = f'(1)$</span>, and <span class="math-container">$f'$</span> is non-vanishing in <span class="math-container">$[0,1]$</span>. Up to redefining <span class="math-container">$f$</span>, we can then assume that <span class="math-container">$f(t + 1) = f(t)$</span> and <span class="math-container">$f'(t + 1) = f'(t)$</span> and <span class="math-container">$f$</span> remains at least <span class="math-container">$C^{1}$</span>.</p> <p>Henceforth, given <span class="math-container">$e \in S^{1}$</span>, denote by <span class="math-container">$e^{\perp} \in S^{1}$</span> the vector with <span class="math-container">$e \cdot e^{\perp} = 0$</span> and <span class="math-container">$(e,e^{\perp})$</span> right-hand oriented.</p> <p>Pick <span class="math-container">$e \in S^{1}$</span>. Define <span class="math-container">$\ell_{e} : \mathbb{R}^{2} \to \mathbb{R}$</span> by <span class="math-container">$\ell_{e}(x) = \langle x,e \rangle$</span>. Since <span class="math-container">$\Gamma$</span> is compact, <span class="math-container">$\ell_{e}$</span> achieves its maximum at some point <span class="math-container">$x_{M} = f(t_{M})$</span> and its minimum at <span class="math-container">$x_{m} = f(t_{m})$</span>. 
Thus, an advanced calculus exercise shows that <span class="math-container">$D \ell_{e}$</span> is parallel to the &quot;normal vector&quot;* <span class="math-container">$f'(t_{m})^{\perp}$</span> and <span class="math-container">$f'(t_{M})^{\perp}$</span> to <span class="math-container">$\Gamma$</span> at <span class="math-container">$x_{m}$</span> and <span class="math-container">$x_{M}$</span>. This tells us that <span class="math-container">$f'(t_{M})^{\perp} = \pm e$</span> and <span class="math-container">$f'(t_{m})^{\perp} = \mp e$</span> since <span class="math-container">$x_{M}$</span> is a maximum and <span class="math-container">$x_{m}$</span>, a minimum.</p> <p>Since <span class="math-container">$e \mapsto e^{\perp}$</span> is a surjective map onto <span class="math-container">$S^{1}$</span>, it follows that the tangent vector always points in each direction at least once. (Note we proved that the normal vector always points in each direction at least once.)</p> <p>A counter-example if <span class="math-container">$\Gamma$</span> is not immersed: take <span class="math-container">$f$</span> so that it traces out a square. (This can be done smoothly by asking that <span class="math-container">$f$</span> stops and takes a break at each corner point.) The normal vector (where defined) only takes on four values. (If it's the coordinate square, then these four are <span class="math-container">$(1,0)$</span>, <span class="math-container">$(-1,0)$</span>, <span class="math-container">$(0,1)$</span>, and <span class="math-container">$(0,-1)$</span> so you won't see <span class="math-container">$(1/\sqrt{2},1/\sqrt{2})$</span>.) The problem is <span class="math-container">$f'$</span> vanishes.</p> <ul> <li>&quot;Normal vector&quot; in quotes since <span class="math-container">$\Gamma$</span> is only immersed. 
For example, if <span class="math-container">$\Gamma$</span> is a figure-eight, there is no well-defined normal vector at the intersection point between the two loops. However, <span class="math-container">$f'(t)$</span> and <span class="math-container">$f'(t)^{\perp}$</span> are always well-defined and the non-vanishing condition on <span class="math-container">$f'$</span> ensures that, locally in <span class="math-container">$t$</span>, we only see one of the possibilities.</li> </ul>
3,955,162
<p>I was trying to write this linear transformation as a matrix. The exercise is the following: let <span class="math-container">$f: R^3 \to R^4$</span> be a linear transformation whose null space is Ker <span class="math-container">$f$</span> = {(x, y, z) in <span class="math-container">$R^3$</span>: <span class="math-container">$x-2y+z = 0$</span>}. Determine the canonical matrix.</p> <p>I know how to solve this type of exercise (generally), as follows:</p> <ol> <li>I use the basis of the first vector space (R3 in this case),</li> <li>I plug x, y, z into the equations of the linear transformation,</li> <li>then I'm ready to use each component of the vectors (of R4, in this case) to create the matrix. (The canonical matrix uses the standard basis of the entire space, for example 1 0 0, 0 1 0, 0 0 1.)</li> </ol> <p>But when I'm about to solve these exercises, I simply cannot solve them. I don't know why, because I know how to do them theoretically.</p> <p>Can you help me to solve this exercise?</p>
Krishnarjun
722,463
<p>This answer is motivated by what Peter wrote in his answer. I realize now that when I was thinking about this question, I was thinking of something along the lines of the intermediate value theorem, but didn't know how to apply it for a curve in <span class="math-container">$\mathbb{R}^2$</span>.</p> <p>Suppose that <span class="math-container">$f'(t)$</span> is a continuous, nowhere-zero function of <span class="math-container">$t$</span>. Choose a unit vector <span class="math-container">$e_1$</span> and let <span class="math-container">$e_2$</span> be perpendicular to <span class="math-container">$e_1$</span>. Then it follows that <span class="math-container">$t\mapsto \langle e_2, f'(t)\rangle$</span> is a continuous function of <span class="math-container">$t$</span>. Now if <span class="math-container">$\langle e_2, f'(t)\rangle|_{t=0} &lt; 0$</span> and <span class="math-container">$\langle e_2, f'(t)\rangle|_{t=1} &gt; 0$</span>, we can apply the intermediate value theorem to the function <span class="math-container">$t\mapsto \langle e_2, f'(t)\rangle$</span> and get the required result.</p>
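The argument can be seen concretely (a sketch I added, not part of the answer) on the unit circle $f(t)=(\cos 2\pi t,\sin 2\pi t)$: a sign change of $\langle e_2,f'(t)\rangle$ locates, via bisection (a constructive stand-in for the intermediate value theorem), a parameter where $f'$ is parallel to $e_1$:

```python
from math import cos, sin, pi

# f traces the unit circle, so f'(t) = 2*pi*(-sin(2*pi*t), cos(2*pi*t)) never vanishes.
def fprime(t):
    return (-2 * pi * sin(2 * pi * t), 2 * pi * cos(2 * pi * t))

def g(t):                         # <e2, f'(t)> with e1 = (1,0), e2 = (0,1)
    return fprime(t)[1]

# g(0.5) < 0 < g(0.9), so the intermediate value theorem gives a zero of g,
# i.e. a parameter where f'(t) is parallel to e1; bisection finds it.
lo, hi = 0.5, 0.9
assert g(lo) < 0 < g(hi)
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
assert abs(lo - 0.75) < 1e-9      # indeed f'(0.75) = (2*pi, 0) points along e1
```

The interval endpoints are just sample values where the inner product has opposite signs.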
1,242,317
<p>If I have a unitary square matrix $U$, i.e. $U^{\dagger}U=I$ (where $^\dagger$ stands for the conjugate transpose), then for what cases is $U^{T}$ also unitary? One simple case I can think of is $U=U^{T}$ with all entries of $U$ real (where $^T$ stands for transpose). Are there any other cases?</p>
TZakrevskiy
77,314
<p>$$(U^T)^\dagger = \bar U = (U^\dagger)^T, $$ where $\bar U$ is the complex conjugate of $U$.</p> <p>Moreover, $$(U^T)^\dagger U^T = (U^\dagger)^T U^T = (UU^\dagger)^T = I^T = I.$$</p> <p>Therefore, your proposition is always true: the transpose of a unitary matrix is unitary.</p>
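A quick numerical illustration (added by me, not part of the answer): take any concrete $2\times 2$ unitary and check that its transpose is unitary as well. The particular parametrization below is just a convenient sample:

```python
import cmath
from math import cos, sin

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def dagger(M):
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M))]

def is_identity(M, tol=1e-12):
    return all(abs(M[i][j] - (1 if i == j else 0)) < tol
               for i in range(len(M)) for j in range(len(M)))

theta, phi = 0.7, 1.3
U = [[cos(theta), cmath.exp(1j * phi) * sin(theta)],
     [-cmath.exp(-1j * phi) * sin(theta), cos(theta)]]

assert is_identity(matmul(dagger(U), U))             # U is unitary
T = [[U[j][i] for j in range(2)] for i in range(2)]  # its transpose
assert is_identity(matmul(dagger(T), T))             # ... and so is T
print("transpose is unitary")
```

Any other angles give the same conclusion, in line with the proof above.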
815,770
<p>Compute $F_{1000} \bmod F_{11}$, where $F_n$ denotes the Fibonacci numbers.</p> <p>Progress:</p> <p>$F_{11}=89$. I believe you should find the period of $F_n \bmod 89$ and use that to solve it, but I'm not getting anywhere with that.</p> <p>Thanks!</p>
Matt Gallagher
148,015
<p>Assuming we're starting with $F_0=0$ and $F_1 = 1$, an easy induction shows that for $n \geq 11$ $$F_n = F_{n-10}\cdot F_{11} + F_{n-11}\cdot F_{10}$$</p> <p>So $F_n \equiv F_{n-11} \cdot F_{10} \bmod F_{11}$ for $n \geq 11$.</p> <p>Edit: I made a mistake from here on. See heropup's answer for the rest.</p>
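The computation can be carried out directly. The sketch below is my addition (not part of the answer); it checks straight iteration, the identity above, and the asker's period idea against each other:

```python
def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_mod(n, m):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, (a + b) % m
    return a

F11 = fib(11)
assert F11 == 89
direct = fib_mod(1000, F11)

# The identity F_n = F_{n-10} F_11 + F_{n-11} F_10 gives, mod F_11,
# F_1000 = F_10^91 (mod 89), since 1000 = 11*90 + 10.
assert direct == pow(fib(10), 91, F11)

# The asker's idea: find the Pisano period of F_n mod 89, then reduce the index.
a, b, period = 0, 1, 0
while True:
    a, b = b, (a + b) % F11
    period += 1
    if (a, b) == (0, 1):
        break
assert fib_mod(1000 % period, F11) == direct
print(direct, period)   # 34 44
```

All three routes agree: the period of $F_n \bmod 89$ is $44$, and $F_{1000}\equiv F_{32}\equiv 34 \pmod{89}$.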
3,403,272
<p>I'm currently taking abstract algebra and I'm very lost.</p> <blockquote> <p>Let <span class="math-container">$G = (\Bbb Z/18\Bbb Z, +)$</span> be a cyclic group of order <span class="math-container">$18$</span>.</p> <p>(1) Find a subgroup <span class="math-container">$H$</span> of <span class="math-container">$G$</span> with <span class="math-container">$|H|= 3.$</span></p> <p>(2) What are the elements of <span class="math-container">$G/H$</span>?</p> <p>(3) Find a familiar group that is isomorphic to <span class="math-container">$G/H$</span>.</p> </blockquote> <p>For (1), I think I understand: since it is a cyclic group we need a generator, so I choose <span class="math-container">$\langle [6]\rangle$</span>. <span class="math-container">$[6]+[6]=[12]$</span> and <span class="math-container">$[6]+[6]+[6]=[18]=[0]$</span>, so <span class="math-container">$H=\langle [6]\rangle=\{[0],[6],[12]\}$</span>. Here we see <span class="math-container">$18$</span> divided by <span class="math-container">$6$</span> is <span class="math-container">$3$</span>, so <span class="math-container">$|H| = 3.$</span></p> <p>For the next part: are the elements of <span class="math-container">$G/H$</span> just the subgroup I wrote down before?</p> <p>The last question is confusing me the most. In order to be isomorphic to one another, the group that I select must have three elements as well, correct? The problem is there is no other subgroup of <span class="math-container">$G$</span> that has order <span class="math-container">$3$</span>.</p>
Bernard
202,857
<p>For the last question, you can use the <em>third isomorphism theorem</em>:</p> <p><span class="math-container">$G=\mathbf Z/18\mathbf Z$</span>, <span class="math-container">$H=6\mathbf Z/18 \mathbf Z$</span>, so <span class="math-container">$$G/H=\mathbf Z/18\mathbf Z\Big/6\mathbf Z/18\mathbf Z\simeq\mathbf Z/6\mathbf Z.$$</span></p>
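A brute-force check of parts (2) and (3) (an illustration I added, not part of the answer): list the cosets of $H=\langle[6]\rangle$ and verify that $g+H\mapsto g\bmod 6$ is well defined, matching $G/H\simeq\mathbf Z/6\mathbf Z$:

```python
# G = Z/18Z, H = <[6]> = {0, 6, 12}: list the cosets of H and compare with Z/6Z.
H = {0, 6, 12}
cosets = sorted({frozenset((g + h) % 18 for h in H) for g in range(18)}, key=min)
assert len(cosets) == 6                    # |G/H| = 18/3 = 6
for C in cosets:
    assert len({g % 6 for g in C}) == 1    # g + H  ->  g mod 6 is well defined
print([sorted(C) for C in cosets])
```

Six cosets, and reduction mod 6 is constant on each, so the quotient really is a copy of $\mathbf Z/6\mathbf Z$.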
3,050,295
<p>I'm having a problem with the following expression:</p> <p>I'm applying <span class="math-container">$(a+b)^2$</span> and <span class="math-container">$(a-b)^2$</span>, but am unable to get the correct answer.</p> <blockquote> <p><span class="math-container">$$(\sqrt{3}+i)^{2017} + (\sqrt{3} - i)^{2017}$$</span></p> </blockquote>
cansomeonehelpmeout
413,677
<p><strong>Hint</strong>:</p> <p>It's probably easier to write <span class="math-container">$\sqrt 3+i=re^{i\theta}$</span> and <span class="math-container">$\sqrt 3-i=re^{-i\theta}$</span>, by finding <span class="math-container">$r,\theta$</span>, and then use <span class="math-container">$$r\cos(\theta)=r\frac{e^{i\theta}+e^{-i\theta}}{2}$$</span></p>
3,050,295
<p>I'm having a problem with the following expression:</p> <p>I'm applying <span class="math-container">$(a+b)^2$</span> and <span class="math-container">$(a-b)^2$</span>, but am unable to get the correct answer.</p> <blockquote> <p><span class="math-container">$$(\sqrt{3}+i)^{2017} + (\sqrt{3} - i)^{2017}$$</span></p> </blockquote>
Mohammad Riazi-Kermani
514,496
<p>Let <span class="math-container">$z=(\sqrt 3 +i)^{2017}$</span>; then <span class="math-container">$$z=2^{2017}e^{2017 i\pi /6}.$$</span></p> <p>Since <span class="math-container">$2017\pi/6=336\pi+\pi/6$</span>, <span class="math-container">$$Re(z)=2^{2017}\cos(2017\pi /6) =2^{2017} \cos(\pi/6).$$</span></p> <p><span class="math-container">$z+\bar z =2Re(z)=2^{2018}\cos(\pi/6)=2^{2017}\sqrt 3$</span></p>
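The closed form $2^{2017}\sqrt 3$ can be verified exactly, without floating point. This is my own sketch (not part of the answer): write $s_n=(\sqrt3+i)^n+(\sqrt3-i)^n$ as $c_n+d_n\sqrt3$ with integer $c_n,d_n$, using that both bases are roots of $t^2-2\sqrt3\,t+4=0$:

```python
# s_n = (sqrt(3)+i)^n + (sqrt(3)-i)^n lies in Z[sqrt(3)]; store it as (c, d)
# meaning c + d*sqrt(3).  The two bases are the roots of t^2 - 2*sqrt(3)*t + 4,
# so s_n = 2*sqrt(3)*s_{n-1} - 4*s_{n-2}, with s_0 = 2 and s_1 = 2*sqrt(3).
c0, d0 = 2, 0   # s_0
c1, d1 = 0, 2   # s_1
for _ in range(2016):
    c0, d0, c1, d1 = c1, d1, 6 * d1 - 4 * c0, 2 * c1 - 4 * d0
assert (c1, d1) == (0, 2 ** 2017)   # the sum equals 2^2017 * sqrt(3) exactly
```

The update rule comes from $2\sqrt3(c+d\sqrt3)=6d+2c\sqrt3$; Python's big integers keep the 600-digit coefficients exact.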
1,103,239
<p>For example, if I multiply the value of a base squared by four, I also get twice the base if it's squared. Look:$$6^2\cdot4=12^2$$ because $$36\cdot4=144$$and $36$ is the square of $6$ and $144$ is the square of $12$. Why does this always happen?</p>
agha
118,032
<p>Suppose that you have $a^2$, then:</p> <p>$$4 \cdot a^2 = 2^2 \cdot a^2= (2 \cdot a)^2=(2a)^2$$</p>
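A one-line check of this factorisation on a few sample values (added for illustration, not part of the answer):

```python
# (2a)^2 = 4 a^2, as in the answer, checked on a few sample values
for a in (6, 7, 12, 2.5, -3):
    assert (2 * a) ** 2 == 4 * a ** 2
print((2 * 6) ** 2)   # 144, i.e. 6^2 * 4 = 12^2
```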
2,264,791
<p>I'm having trouble figuring out the distribution in the problem below.</p> <p>It concerns 1/(<span class="math-container">$X$</span>+1), where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1.</p> <blockquote> <p><strong>Original Problem:</strong></p> <p>What is the distribution of 1/(<span class="math-container">$X$</span>+1), where <span class="math-container">$X$</span> is an exponentially distributed random variable with parameter 1?</p> </blockquote> <p>With parameter 1, the density of <span class="math-container">$X$</span> can be written as <span class="math-container">$e^{-x}$</span>, and after plugging it into the given function, I got <span class="math-container">$$\frac{1}{e^{-x}+1} = \frac{e^{x}}{e^{x}+1}$$</span></p> <p>What type of distribution is this?</p>
callculus42
144,421
<p>Keep it short and simple. Let <span class="math-container">$Y=\frac{1}{1+X}$</span> where <span class="math-container">$0&lt;Y&lt;1$</span>. Then</p> <p><span class="math-container">$P(Y&lt;y)=P\left(\frac{1}{1+X}&lt;y\right)=P(1&lt;y+Xy)$</span></p> <p><span class="math-container">$=P\left(X&gt;\frac{1}{y}-1\right)=e^{1-1/y} \ \ $</span> for <span class="math-container">$0&lt;y&lt;1$</span></p> <p>To get the pdf, differentiate. I think you can manage it.</p>
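A Monte Carlo check of the resulting CDF (my addition, not part of the answer; it simulates $X\sim\mathrm{Exp}(1)$ by the inverse transform $X=-\log(1-U)$):

```python
import random
from math import exp, log

random.seed(1)
N = 200_000
# X ~ Exp(1) by inverse transform; then Y = 1/(1+X) takes values in (0, 1)
ys = [1.0 / (1.0 - log(1.0 - random.random())) for _ in range(N)]

for y in (0.2, 0.5, 0.8):
    empirical = sum(v < y for v in ys) / N
    assert abs(empirical - exp(1.0 - 1.0 / y)) < 0.01   # CDF e^{1 - 1/y}
print("empirical CDF matches e^(1-1/y)")
```

The empirical frequencies track $e^{1-1/y}$ within Monte Carlo error, confirming the derivation above.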
267,707
<p>The weight 2, level 1 Eisenstein series $E_2(z)$ is a non-holomorphic automorphic form. It is defined as the analytic continuation to $s = 0$ of the series $$ E_2(z, s) = \sum_{\substack{m, n \in \mathbf{Z} \\ (m, n) \ne (0,0)}} \frac{\operatorname{Im}(z)^s}{(mz + n)^2 |mz + n|^{2s}} $$ which is convergent for $\operatorname{Re}(s) &gt; 0$ (but not for $s = 0$). Moreover for any prime p the function $E_2(z)-pE_2(pz)$ is holomorphic. </p> <p>My question is: what's the Archimedean component of the automorphic representation corresponding to $E_2$? (If it it the holomorphic discrete series of weight two then seems the vector corresponding to $E_2$ should be holomorphic.)</p>
GH from MO
11,919
<p>Excellent question indeed. The quick answer is that $E_2(z)$ is an <em>almost holomorphic modular form</em> of weight $2$ and level $1$, so the automorphic representation generated by it is not irreducible. For more details (and my thought process), read below.</p> <p>Consider the Maass raising operator in weight $0$, $$ R:=y\left(i\frac{\partial}{\partial x}+\frac{\partial}{\partial y}\right).$$ Let $(m,n)\in\mathbb{Z}^2$ be a nonzero pair of integers. Then a small calculation gives that, for $z=x+iy$ and $s\in\mathbb{C}$, $$ R\left(\frac{y^s}{|mz+n|^{2s}}\right) =\frac{sy^s}{(mz+n)^2|mz+n|^{2s-2}}.$$</p> <p>Now let us introduce the usual weight $0$ level $1$ (nonholomorphic) Eisenstein series $$ E(z,s):=\sum_{\substack{m, n \in \mathbb{Z} \\ (m, n) \ne (0,0)}} \frac{\operatorname{Im}(z)^s}{|mz + n|^{2s}},\qquad \operatorname{Re}(s)&gt;1$$ then we see that $$ R\,E(z,s+1) = (s+1)\,yE_2(z,s),\qquad \operatorname{Re}(s)&gt;0.\tag{1}$$ On the right hand side, $yE_2(z,s)$ is the canonical weight $2$ level $1$ (nonholomorphic) Eisenstein series, the one which transforms as a weight $2$ Maass form. It is worthwhile to recall here that weight $k$ holomorphic forms embed into the weight $k$ Maass spectrum by multiplying each weight $k$ holomorphic form by $y^{k/2}$. In our case $k=2$, which explains why we multiply by $y$.</p> <p>So your Eisenstein series, after inserting the factor $y$ to make it into a canonical weight $2$ form, and also inserting the scaling factor $s+1$, equals the Maass raising shift of $E(z,s+1)$. It belongs to the same automorphic representation as $E(z,s+1)$, hence it has the same Langlands parameters as $E(z,s+1)$ at every place. 
In particular, the archimedean Langlands parameters are $$ (s+1)-\frac{1}{2}=s+\frac{1}{2}\qquad\text{and}\qquad \frac{1}{2}-(s+1)=-s-\frac{1}{2}.$$</p> <p><strong>Added and revised.</strong> Well, we still need to specify all this to $s=0$, but in this case the above argument breaks down, because $E(z,s+1)$ has a pole at $s=0$. So we need to be more careful. Let us use the results and notation of Section 4 of Duke-Friedlander-Iwaniec: The subconvexity problem for Artin L-functions (Invent. Math. 149 (2002), 489-577). Then for $\operatorname{Re}(s)&gt;1$ we have the Fourier decomposition \begin{align*} \frac{1}{2}E(z,s)&amp;=\ \zeta(2s)y^s+\pi^{2s-1}\frac{\Gamma(1-s)}{\Gamma(s)}\zeta(2-2s)y^{1-s}\\&amp;+\ \frac{\pi^s}{\Gamma(s)}\sum_{n=1}^\infty\frac{\sigma_{2s-1}(n)}{n^s}\bigl\{f_0^+(nz,s)+f_0^-(nz,s)\bigr\}.\end{align*} Let us replace $s$ by $s+1$ here, and then apply the raising operator $R$ along with $(1)$. Then for $\operatorname{Re}(s)&gt;0$ we obtain the Fourier decomposition \begin{align*} \frac{1}{2}E_2(z,s)&amp;=\ \zeta(2s+2)y^s+\pi^{2s+1}\frac{\Gamma(1-s)}{\Gamma(2+s)}\zeta(-2s)y^{-s-1}\\&amp;-\ \frac{\pi^{s+1}}{y\Gamma(2+s)}\sum_{n=1}^\infty\frac{\sigma_{2s+1}(n)}{n^{s+1}}\bigl\{f_2^+(nz,s+1)+s(s+1)f_2^-(nz,s+1)\bigr\}.\end{align*} The right hand side is indeed holomorphic at $s=0$, and at this value it specifies to \begin{align} \frac{1}{2}E_2(z)&amp;=\ \frac{\pi^2}{6}-\frac{\pi}{2y}- \frac{\pi}{y}\sum_{n=1}^\infty\frac{\sigma_1(n)}{n}f_2^+(nz,1)\\ &amp;=\ \frac{\pi^2}{6}-\frac{\pi}{2y}-4\pi^2\sum_{n=1}^\infty\sigma_1(n)e(nz).\tag{2}\end{align} It is clear now that the $L$-function of $E_2(z)$ is $\zeta(s-1/2)\zeta(s+1/2)$, and $E_2(z)$ should belong to the holomorphic discrete series of weight $2$ even though its constant term is not holomorphic. I think this paradox arises from the fact that $E_2(z)$ is not a true automorphic form. 
(More precisely, $E_2(z)$ is not a vector from an irreducible automorphic representation, see the Added 3 section below.)</p> <p><strong>Added 2.</strong> Indeed, $E_2(z)$ is an <em>almost holomorphic modular form</em> of weight $2$ and level $1$: it equals $2G_2^*(z)$ in the notation of Section 2.3 of Bruinier-v.d.Geer-Harder-Zagier's book "The 1-2-3 of modular forms". In particular, (19) and (21) in this book reveal that $E_2(z)$ transforms <em>precisely</em> like a holomorphic modular form of weight $2$ and level $1$, even though it is not holomorphic. Of course, the same also follows from the fact that $E_2(z)=\lim_{s-&gt;0+}E_2(z,s)$, where $yE_2(z,s)$ for $\operatorname{Re}(s)&gt;0$ transforms precisely like a Maass form of weight $2$ and level $1$. One can read more about almost holomorphic modular forms in Section 5.3 of the book.</p> <p><strong>Added 3.</strong> We can learn more by applying the Maass lowering operator in weight $2$, $$L:=1+y\left(i\frac{\partial}{\partial x}-\frac{\partial}{\partial y}\right).$$ Using the Fourier decomposition (2), we can see directly that $$ L(yE_2(z))=-\pi, $$ which harmonizes with the facts that $ L(yE_2(z,s)) = -s E(z,s+1)$ for $\operatorname{Re}(s)&gt;0$ and $\operatorname{res}_{s=0}E(z,s+1)=\pi$. So we see that the automorphic representation generated by the weight $2$ vector $yE_2(z)$ is reducible: it contains the trivial representation $\mathbb{C}$ as a subrepresentation, while its quotient by $\mathbb{C}$ is irreducible and belongs to the holomorphic discrete series of weight $2$. See also Theorem 2.5.2 in Bump: Automorphic forms and representations, specified to $k=2$ and $\lambda=0$, which helped me understand what is going on.</p>
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this: $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$, then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes out the same, which is $2.807$.</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer, but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things?</p></li> </ol> <p>Thank you</p>
Tony Ma
526,038
<p>Question 1) As you see, both methods arrive at the same answer; however, $\log$ base 10 is a natural choice, as stated in the comment by Matti P above.</p> <p>Question 2) It is because for $a&gt;0,a\neq 1,b&gt;0, c&gt;0, c\neq 1$,$$\log_a b=\frac{\log_c b}{\log_c a}$$In particular, in your example $c=10$, so both methods arrive at the same answer. The formula is known as the change of base formula; see <a href="https://en.wikipedia.org/wiki/Logarithm" rel="noreferrer">here</a>.</p>
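Both routes can be compared directly (an illustrative sketch I added, not part of the answer):

```python
from math import isclose, log, log10

x = log(7, 2)                            # log base 2 directly
assert isclose(x, log10(7) / log10(2))   # the textbook's base-10 route
assert isclose(x, log(7) / log(2))       # natural log works just as well
assert isclose(2 ** x, 7)
print(round(x, 3))   # 2.807
```

Any base gives the same quotient, which is exactly the change of base formula in action.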
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this: $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$, then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes out the same, which is $2.807$.</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer, but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things?</p></li> </ol> <p>Thank you</p>
Martin Argerami
22,857
<p>Other answers mention the change of base $$ \log_a x = \frac {\log_b x}{\log_b a}. $$ An immediate consequence is that a quotient of logarithms does not depend on the base: $$\frac {\log_ax}{\log_ay}=\frac {\frac {\log_bx}{\log_ba}}{\frac {\log_by}{\log_ba}} =\frac {\log_bx}{\log_by}. $$</p>
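A quick numerical confirmation of this base-independence (my addition, not part of the answer):

```python
from math import log

# log_a(x) / log_a(y) is the same in every base a
x, y = 7.0, 5.0
ratios = [log(x, base) / log(y, base) for base in (2, 10, 2.718281828459045, 42)]
assert all(abs(r - ratios[0]) < 1e-12 for r in ratios)
print(ratios[0])   # = log_5(7), independent of the base used
```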
2,661,443
<p>For the equation $2^x = 7$</p> <p>The textbook says to use log base ten to solve it like this: $\log 2^x = \log 7$. </p> <p>I then re-arrange it so that it reads $x \log 2 = \log 7$, then divide the RHS by $\log 2$ to isolate the $x$. I understand this part.</p> <p>I can alternatively solve it in an easier way by simply using $\log_2 7$ on my calculator.</p> <p>Using both methods the answer comes out the same, which is $2.807$.</p> <p>My question is twofold:</p> <ol> <li><p>Why would the textbook suggest to use log base ten rather than simply using log base two?</p></li> <li><p>I can see how using log base ten and the suggested method in the textbook makes me arrive at the same answer, but I don't understand WHY this is so. How does base ten play a factor in the whole scope of things?</p></li> </ol> <p>Thank you</p>
Mark Miller
534,643
<p>The only reason we use base ten for anything (including decimal numbers) is that biological evolution endowed humans with ten fingers. The Mayans are supposed to have <a href="https://en.wikipedia.org/wiki/Maya_numerals" rel="nofollow noreferrer">used base twenty</a>, apparently because they were barefoot and thought that toes were convenient for counting too.</p>
77,379
<p>The task is to show, for an $a\in \mathbb{C}^{\ast}$, that $aB_{1}(1)= B_{|a|}(a)$,</p> <p>where $B$ denotes a disc.</p> <p>Okay, maybe this is correct:</p> <p>$aB_{1}(1) = a(e^{i\phi}) = ae^{i\phi} = |a|e^{i\phi} = B_{|a|}(a)$</p> <p>But this seems very wrong!</p> <p>V</p>
Adam Smith
15,426
<p>Since this looks like homework, I'll just give you a hint. You are on the right track, but it is not true that $a e^{i\phi} = |a| e^{i \phi}$. Indeed, remember that $a = |a| e^{i \theta}$ for some $\theta$ (the argument of $a$). Geometrically, what is going on is that multiplication by $a = |a| e^{i \theta}$ scales things by $|a|$ and rotates things by $\theta$.</p>
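A small numerical illustration of the hint (added by me; the sample values of $\theta$ and $\phi$ are arbitrary):

```python
import cmath
from math import isclose

a = 2.0 * cmath.exp(1j * 0.9)     # a = |a| e^{i theta} with |a| = 2, theta = 0.9
z = cmath.exp(1j * 1.4)           # a sample point e^{i phi} with phi = 1.4
w = a * z

assert isclose(abs(w), abs(a) * abs(z))       # multiplication scales by |a|
assert isclose(cmath.phase(w), 0.9 + 1.4)     # ... and rotates by theta
```

Applied to every point of the disc $B_1(1)$, this scaling-plus-rotation is what carries it onto $B_{|a|}(a)$.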
218,933
<p>The space $L^2 (\mathbb{R}^2; \mathbb{C})$ can be decomposed as $$ L^2 (\mathbb{R}^2; \mathbb{C}) = \bigoplus_{k \in \mathbb{Z}} L^2_k (\mathbb{R}^2; \mathbb{C}), $$ where $$ L^2_k (\mathbb{R}^2; \mathbb{C}) = \bigl\{ f \in L^2 (\mathbb{R}^2;\mathbb{C}) : \text{for almost every \(z \in \mathbb{R}^2 \simeq \mathbb{C}\) and \(\theta \in \mathbb{R}\), }\\ f (e^{i \theta} z) = e^{i k \theta} f (z) \bigr\}, $$ where the summands are mutually orthogonal (see for example Stein and Weiss, <em>Introduction to Fourier analysis on Euclidean spaces</em>, 1971, §IV.2).</p> <p>The summands are invariant under the Fourier transform, and the Fourier transform can there be described by Bessel functions. This decomposition also appears implicitly in separation of variables arguments.</p> <p>As Stein and Weiss do not give any name to this decomposition, I was wondering under which name(s) this decomposition is known in the literature. </p>
Yemon Choi
763
<p>Thinking about this a bit more, here is a <strong>guess</strong> &mdash; I haven't got time right now to check the details, so comments and corrections are welcome from others.</p> <p>The Euclidean motion group $E(2)$ is the group of all isometries of ${\bf R}^2$ with its usual Euclidean metric. Since $E(2)$ acts on ${\bf R}^2$ in a measure-preserving way, this action gives a complex representation $\pi$ of $E(2)$ on $L^2({\bf R}^2)$. Then the restriction of $\pi$ to $U(1)$ should decompose in the way shown above.</p> <p>If this is correct, then I guess another way to see this would be: take the trivial representation $\varepsilon$ of $U(1)$, and form the induced representation ${\rm Ind}_{U(1)}^{E(2)} \varepsilon$. My guess is that this is once again $\pi$, with the decomposition now emerging naturally from the standard construction of "induction from a compact subgroup".</p>
2,579,140
<p>N. Elkies' page <a href="http://www.math.harvard.edu/~elkies/trinomial.html" rel="nofollow noreferrer">http://www.math.harvard.edu/~elkies/trinomial.html</a> ends with information about octic trinomials "whose Galois group is contained in $G_{1344}$".</p> <p>One of the reported trinomials, $x^8+324x+567$, "has a smaller Galois group, which embeds as a <strong>transitive subgroup</strong> into $G_{1344}$ (...) This Galois group is isomorphic with $G_{168}$, acting on 8 letters via the other guise of that group, as $PSL_{2}(\mathbb{Z}/7\mathbb{Z})$".</p> <p>Using GAP one can check that the mentioned trinomial indeed has Galois group $PSL(2,7)$:</p> <pre><code>gap&gt; x:=Indeterminate(Rationals, "x");; gap&gt; GaloisType(x^8+324*x+567); 37 gap&gt; t37:=TransitiveGroup(8,37); L(8)=PSL(2,7) gap&gt; Size(t37); 168 </code></pre> <p>In the context of the above web page it is surprising to me that $PSL(2,7)$ <strong>is not</strong> a subgroup of $G_{1344}$:</p> <pre><code>gap&gt; t48:=TransitiveGroup(8,48); E(8):L_7=AL(8) gap&gt; Size(t48); 1344 gap&gt; IsSubgroup(t48, t37); false </code></pre> <p>Moreover, there is another Galois group $G_{168}$, $C_2^3:(C_7: C_3)$, which <strong>is</strong> a subgroup of $G_{1344}$:</p> <pre><code>gap&gt; t36:=TransitiveGroup(8,36); E(8):F_21 gap&gt; Size(t36); 168 gap&gt; IsSubgroup(t48, t36); true </code></pre> <p><strong>Question</strong>: How should I understand this contradiction/"contradiction"?</p>
ahulpke
159,739
<p><code>TransitiveGroup</code> will give you <em>one</em> representative from a conjugacy class of subgroups. For two subgroups $A$, $B$ in the library it is not guaranteed (in fact not guaranteeable) that if a conjugate of $A$ is contained in $B$, then automatically $A$ is contained in $B$.</p> <p>To test for such inclusions, you will need to either calculate subgroups of $B$ and find $A$ there, or search for <code>IsomorphicSubgroups</code> (which finds monomorphism into the larger group):</p> <pre><code>gap&gt; homs:=IsomorphicSubgroups(t48,t37); [ [ (1,8,2)(3,7,6), (1,2)(3,5)(4,7)(6,8) ] -&gt; [ (2,4,7)(3,6,8), (4,5)(6,7) ], [ (1,8,2)(3,7,6), (1,2)(3,5)(4,7)(6,8) ] -&gt; [ (2,4,7)(3,6,8), (1,3)(2,8)(4,7)(5,6) ] ] gap&gt; u:=Image(homs[1],t37); Group([ (2,4,7)(3,6,8), (4,5)(6,7) ]) gap&gt; NrMovedPoints(u); TransitiveIdentification(u); 7 5 </code></pre> <p>So you see that the subgroup isomorphic to $L_2(7)$ is in fact <code>TransitiveGroup(7,5)</code>. (the groups are isomorphic but not permutation isomorphic.)</p>
4,188,020
<p>I know that there exists a connection on a principal bundle and via parallel transport it is possible to define a a covariant derivative on the associated bundle.</p> <p>However, can we also define a covariant derivative on the principal bundle. I.e. something that can differentiate a section along a vector field? Or do we need a linear structure like the one in a vector bundle to 'take derivatives'?</p>
nicrot000
932,402
<p>Given my view of what a covariant derivative should be or do, I'd say that you do indeed need a linear structure.</p> <p>In principle, when passing from plain vector-space-valued functions <span class="math-container">$M\to W$</span> (or sections of the trivial bundle <span class="math-container">$M\times W$</span>, resp.) to sections of a not-necessarily-trivial bundle <span class="math-container">$E\to M$</span> with fiber type <span class="math-container">$W$</span>, you run into trouble defining a derivative of such functions. Usually this is paraphrased as saying that it is intrinsically &quot;not possible to compare points in different fibers&quot;. However, it is possible to choose a covariant derivative, and thereby (at least locally) a frame which is &quot;constant&quot; or parallel over <span class="math-container">$M$</span> (cf. my comment above). Once you have distinguished such a notion of parallelism everywhere, you can indeed take derivatives and view them as measuring &quot;how a function changes compared to what we call constant&quot;, which is of course just the action of the covariant derivative. For this notion you need a vector space structure.</p> <p>On the other hand, on a principal bundle you also have a notion of parallel, in the sense of everywhere horizontal; but since you don't have a vector space structure in the fibers, &quot;comparison with a constant section&quot; works a bit differently, and in particular not via something with properties similar to a derivative in the above sense. Rather, you would do something like @Deane in the comments above, but I'd say this is not a &quot;derivative&quot; in the sense of what a derivative should do.</p>
359,742
<p>I have a mathematical problem that leads me to a particular necessity. I need to calculate the convolution of a function with itself a certain number of times. </p> <p>So consider a generic function $f : \mathbb{R} \mapsto \mathbb{R}$ and consider these hypotheses:</p> <ul> <li>$f$ is continuous on $\mathbb{R}$.</li> <li>$f$ is bounded, so: $\exists A \in \mathbb{R} : |f(x)| \leq A, \forall x \in \mathbb{R}$.</li> <li>$f$ is integrable, so its area is a real number: $\exists \int_a^bf(x)\mathrm{d}x &lt; \infty, \forall a,b \in \mathbb{R}$, which implies that such a function tends to zero at infinity.</li> </ul> <p><strong>Probability mass functions:</strong> Such functions fit the constraints given before. So it might be easier for you to consider $f$ as the pmf of some continuous r.v.</p> <p>Consider the convolution operation: $a(x) \ast b(x) = c(x)$. I always name the variable $x$.</p> <p>Consider now the following function:</p> <p>$$ F^{(n)}(x) = f(x) \ast f(x) \ast \dots \ast f(x), \text{ for } n \text{ times} $$</p> <p>I want to evaluate $F^{(\infty)}(x)$, and I would like to know whether there is a generic final result given a function like $f$.</p> <h3>My trials</h3> <p>I tried a little in Mathematica using the Gaussian distribution. What happens is that, as $n$ increases, the bell stretches and its peak gets lower and lower until the function almost lies flat along the x axis. It seems like $F^{(\infty)}(x)$ tends to the function $y=0$...</p> <p><img src="https://i.stack.imgur.com/8FDFH.png" alt="Trials in Mathematica"></p> <p>As $n$ increases, the curves get lower and lower. </p>
user268922
268,922
<p>You can use the cumulants of the original distribution and then take the inverse Fourier transform. If $m_1$ is the mean and $k_n$ are the cumulants of $f$, then after $N$ self-convolutions the resulting mean is $N m_1$ and the cumulants are $N k_n$. If $f$ has all cumulants well defined, the result tends to a Gaussian with mean $N m_1$ and variance $N k_2$ (this is essentially the central limit theorem). </p> <p>Note: in your mathematical experiment (which has $0$ mean and unit variance), the result is a Gaussian with variance $N$: it is more dispersed. That is why you see it as if it tends to zero: the area, which is conserved, is spread over a larger interval, so the maximum decreases... If you expand the $x$-axis you will recover a Gaussian.</p>
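To make the cumulant bookkeeping concrete, here is a small pure-Python sketch (my own illustration, using a made-up discrete distribution rather than the Gaussian from the question): for a probability vector, self-convolving $N$ times multiplies both the mean and the variance by exactly $N$.

```python
def convolve(p, q):
    """Discrete convolution of two probability vectors."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def mean_var(p):
    """Mean and variance of a probability vector supported on 0, 1, 2, ..."""
    m = sum(i * w for i, w in enumerate(p))
    v = sum((i - m) ** 2 * w for i, w in enumerate(p))
    return m, v

f = [0.2, 0.5, 0.3]          # an arbitrary distribution on {0, 1, 2}
m1, k2 = mean_var(f)

N = 6
g = f
for _ in range(N - 1):       # g = f * f * ... * f  (N copies)
    g = convolve(g, f)

mN, vN = mean_var(g)         # should equal N*m1 and N*k2
```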
230,971
<p>At the moment I use <code>Length[ DeleteDuplicates[ array ] ] == 1</code> to check whether an array is constant, but I'm not sure whether this is optimal.</p> <p>What would be the quickest way to test whether an array consists of equal elements?</p> <p>What if the elements would be integers?</p> <p>What if they are floats?</p>
Henrik Schumacher
38,178
<p><code>Equal@@MinMax[array]</code> might be quite fast if <code>array</code> is a packed list of integers. But it cannot short-circuit like <code>Statistics`Library`ConstantVectorQ</code> does. And it is also not very robust with regard to (machine) floating point numbers: <code>Equal</code> and <code>SameQ</code> both use a certain tolerance for their equality checks (I forgot which precise one they use; I just recall that the tolerance of <code>SameQ</code> should be the lower one). This may or may not be the desired behavior in a particular application.</p>
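For comparison, here is a rough Python analogue (my own sketch, not Mathematica) of the same min/max idea, including a tolerance variant mirroring the remark that <code>Equal</code> and <code>SameQ</code> compare machine reals only up to a tolerance.

```python
# For exact (e.g. integer) data, an array is constant iff min equals max.
def constant_exact(xs):
    return len(xs) == 0 or min(xs) == max(xs)

# For floats one usually wants an explicit tolerance instead.
def constant_approx(xs, tol=1e-12):
    return len(xs) == 0 or max(xs) - min(xs) <= tol

ints_const = constant_exact([7, 7, 7, 7])
ints_mixed = constant_exact([7, 7, 8])
floats_const = constant_approx([1.0, 1.0 + 1e-15, 1.0 - 1e-15])
```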
3,988,180
<p>The task: <span class="math-container">$$ \text{Calculate } \int_{C}{}f(z)dz\text{, where } f(z)=\frac{\bar{z}}{z+i}\text{, and } C \text{ is a circle } |z+i|=3\text{.} $$</span> Finding the circle's center and radius: <span class="math-container">$$ |z+i|=|x+yi+i|=|x+(y+1)i|=3 \\ x^2+(y+1)^2=3^2 $$</span> Parametrizing the circle: <span class="math-container">$$ z(t)=i+3e^{-2\pi i t} $$</span></p> <p>Now I need to calculate this integral: <span class="math-container">$$ \int_{0}^{1}{f(z(t))z'(t)dt}= 2\pi\int_{0}^{1}{ \frac{1-3ie^{-2\pi i t}}{e^{4 \pi i t}}dt } $$</span></p> <p>Unfortunately I calculated this integral, and it's equal to <span class="math-container">$0$</span>. Is this correct? I don't think so. Where did I go wrong? Maybe I made a mistake when calculating the integral - what would be the best way to calculate it?</p>
Milo Moses
630,231
<p>You cannot directly use the Residue Theorem in this case since <span class="math-container">$\frac{\overline{z}}{z+i}$</span> is not holomorphic, so your approach is entirely reasonable. Your calculation of the integral is correct since</p> <p><span class="math-container">$$\int_{0}^{1}e^{2k\pi ti}dt=\left.\frac{1}{2\pi ki}e^{2k\pi ti}\right|_0^{1}=\frac{1}{2\pi ki}(1-1)=0$$</span></p> <p>for any nonzero integer <span class="math-container">$k$</span>, and so</p> <p><span class="math-container">\begin{align*} 2\pi\int_{0}^{1}\frac{1-3ie^{-2\pi it}}{e^{4\pi i t}}dt&amp;=2\pi\int_{0}^{1}e^{-4\pi i t}dt-6\pi i\int_{0}^{1}e^{-6\pi it}dt\\ &amp;=2\pi(0)-6\pi i(0)\\ &amp;=0 \end{align*}</span></p>
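As a sanity check, the result can also be confirmed numerically; the sketch below (mine, not part of the original answer) evaluates the parameter integral with a simple midpoint Riemann sum.

```python
import cmath

# Integrand of 2*pi * Integral_0^1 (1 - 3i e^{-2 pi i t}) e^{-4 pi i t} dt,
# which the computation above shows is exactly 0.
def integrand(t):
    return (1 - 3j * cmath.exp(-2j * cmath.pi * t)) * cmath.exp(-4j * cmath.pi * t)

n = 20000                 # number of midpoint sample points
h = 1.0 / n
riemann = sum(integrand((k + 0.5) * h) for k in range(n)) * h
value = 2 * cmath.pi * riemann   # should be numerically ~0
```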
158,916
<p>Let $M$ be a closed Riemannian 3-manifold. I think that the following fact should be true and should have a relatively simple proof, but I cannot figure it out.</p> <blockquote> <p>For every $\varepsilon&gt;0$ there is a $\delta&gt;0$ such that every smooth 2-sphere in $M$ of area smaller than $\delta$ bounds a ball of volume smaller than $\varepsilon$.</p> </blockquote> <p>Roughly, small-area spheres must bound small-volume balls. </p> <p>Note that: </p> <ul> <li> If $M\neq S^3$ then $M$ contains spheres that bound regions of arbitrarily small volume that are not balls (just take a spine of $M$ and small regular neighborhoods of it). <li> It suffices to prove that the 2-sphere is contained in a small-volume ball and invoke Alexander's theorem. <li> The same fact stated for 3-spheres in $S^4$ would imply the (open) Schoenflies problem (every 3-sphere bounds a 4-ball), since every 3-sphere in $S^4$ can be shrunk to have arbitrarily small area. <li> It is not true in general that a torus of small area is contained in a ball (pick neighborhoods of a homotopically non-trivial knot). </ul>
Misha
21,684
<p>This is a direct corollary of the Federer-Fleming deformation lemma, which says that a sphere of very small area can be homotoped into the 1-skeleton of a fixed triangulation of the manifold. The dimension assumption is irrelevant. A proof of this lemma can be found somewhere in Federer's book (I will chase down the precise reference when I can).</p>
2,080,716
<p>I have the quadratic form $$Q(x)=x_1^2+2x_1x_4+x_2^2 +2x_2x_3+2x_3^2+2x_3x_4+2x_4^2$$</p> <p>I want to diagonalize the matrix of Q. I know I need to find the matrix of the associated bilinear form but I am unsure on how to do this.</p>
Vidyanshu Mishra
363,566
<p>First posted as a hint, but now it is a complement to @Atul Mishra's answer.</p> <p>Let us see what is happening in a diagrammatic way:</p> <p><img src="https://i.stack.imgur.com/CHw7J.png" alt="Image Describing unions and intersections" /></p> <p>Now, look at the image carefully. The coloured regions represent the following data:</p> <p><em>The rectangle: Sum of All the numbers from <span class="math-container">$1-100$</span>.</em></p> <p><em>White region: Sum of Numbers that are neither multiple of <span class="math-container">$3$</span> nor of <span class="math-container">$7$</span>.</em></p> <p><em>Green region: Sum of Numbers that are multiple of <span class="math-container">$3$</span> only.</em></p> <p><em>Blue region: Sum of Numbers that are multiple of <span class="math-container">$7$</span> only.</em></p> <p><em>Light blue region: Sum of Numbers that are multiple of <span class="math-container">$3$</span> and <span class="math-container">$7$</span>.</em></p> <p>We have to find the sum of numbers which are neither divisible by <span class="math-container">$3$</span> nor by <span class="math-container">$7$</span>. So, we will find the sum of numbers that are in the white region.</p> <p>Sum of Numbers in white region <span class="math-container">$=$</span> Sum of Numbers in Rectangle <span class="math-container">$-$</span> Sum of Numbers in (Blue <span class="math-container">$+$</span> Light Blue) <span class="math-container">$-$</span> Sum of Numbers in (Green + Light Blue) <span class="math-container">$+$</span> Sum of Numbers in light blue region.</p> <p>From here we get the formula used by other answers.
i.e.,</p> <p>Sum of Required numbers <span class="math-container">$=$</span> Sum of Total Numbers <span class="math-container">$-$</span> Sum of Numbers divisible by <span class="math-container">$7-$</span> Sum of Numbers divisible by <span class="math-container">$3+$</span> Sum of Numbers divisible by both <span class="math-container">$3$</span> and <span class="math-container">$7$</span>.</p> <p>Next question you may ask is that, How to find the sum of numbers from <span class="math-container">$1-100$</span> or sum of multiples of <span class="math-container">$3$</span> etc.</p> <p>There is no problem in it, you just have to identify the A.P.</p> <p>Sum of numbers from <span class="math-container">$1$</span> to <span class="math-container">$100$</span> equals <span class="math-container">$\frac{100}{2}\times {101}$</span>.</p> <p>Sum of multiples of <span class="math-container">$3$</span> equals <span class="math-container">$\frac{33}{2}\times 102$</span>. ...... ...... ......</p> <p>I think I shall let you conclude now. :) :) :)</p>
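Since every quantity here is a small finite sum, the whole computation can be cross-checked by brute force; the sketch below (my own addition) verifies the inclusion-exclusion formula against a direct enumeration.

```python
# Inclusion-exclusion for the sum of 1..100 not divisible by 3 or 7,
# checked against direct enumeration.
total = sum(range(1, 101))                                   # (100/2) * 101
mult3 = sum(k for k in range(1, 101) if k % 3 == 0)
mult7 = sum(k for k in range(1, 101) if k % 7 == 0)
mult21 = sum(k for k in range(1, 101) if k % 21 == 0)        # lcm(3, 7) = 21

by_formula = total - mult3 - mult7 + mult21
direct = sum(k for k in range(1, 101) if k % 3 != 0 and k % 7 != 0)
```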
2,804,129
<p>To Evaluate the Limit $$L=\lim_{n \to \infty}\left(1+\sum_{k=1}^{n} \frac{1}{\binom{n}{k}}\right)^n \tag{1}$$</p> <p>My try:</p> <p>I tried to use $$\frac{1}{\binom{n}{k}}+\frac{1}{\binom{n}{k+1}}=\frac{n+1}{n} \frac{1}{\binom{n-1}{k}} $$</p> <p>taking summation both sides from $k=1$ to $k=n$ we get</p> <p>$$\sum_{k=1}^{n} \frac{1}{\binom{n}{k}}+\sum_{k=1}^{n} \frac{1}{\binom{n}{k+1}}=\frac{n+1}{n} \sum_{k=1}^{n} \frac{1}{\binom{n-1}{k}} \tag{2}$$</p> <p>Now let $$S=\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{\binom{n}{k}}$$ we have from $(2)$</p> <p>$$S+S=S$$</p> <p>hence $$S=0$$</p> <p>Now $(1)$ is in form of $1^{\infty}$ Indeterminate form whose limit is given by</p> <p>$$L=e^\left({\lim_{n \to \infty}}n \times \sum_{k=1}^{n} \frac{1}{\binom{n}{k}}\right)$$</p> <p>How to proceed now?</p>
Fred
380,717
<p>Your matrix $M$ is o.k.</p> <p>The equation in the question has a solution $ \iff$ the linear system</p> <p>$(*)$ $M(x,y,z)^T=(1,2,3)^T$ </p> <p>has a solution. Since $M$ is invertible, the equation $(*)$ has exactly one solution. Hence the equation</p> <p>$y+e^{-2t}\frac{dy}{dt}=1+2e^{2t}+3e^{4t}$ has exactly one solution. </p>
2,804,129
<p>To Evaluate the Limit $$L=\lim_{n \to \infty}\left(1+\sum_{k=1}^{n} \frac{1}{\binom{n}{k}}\right)^n \tag{1}$$</p> <p>My try:</p> <p>I tried to use $$\frac{1}{\binom{n}{k}}+\frac{1}{\binom{n}{k+1}}=\frac{n+1}{n} \frac{1}{\binom{n-1}{k}} $$</p> <p>taking summation both sides from $k=1$ to $k=n$ we get</p> <p>$$\sum_{k=1}^{n} \frac{1}{\binom{n}{k}}+\sum_{k=1}^{n} \frac{1}{\binom{n}{k+1}}=\frac{n+1}{n} \sum_{k=1}^{n} \frac{1}{\binom{n-1}{k}} \tag{2}$$</p> <p>Now let $$S=\lim_{n \to \infty} \sum_{k=1}^{n} \frac{1}{\binom{n}{k}}$$ we have from $(2)$</p> <p>$$S+S=S$$</p> <p>hence $$S=0$$</p> <p>Now $(1)$ is in form of $1^{\infty}$ Indeterminate form whose limit is given by</p> <p>$$L=e^\left({\lim_{n \to \infty}}n \times \sum_{k=1}^{n} \frac{1}{\binom{n}{k}}\right)$$</p> <p>How to proceed now?</p>
Atif Farooq
451,530
<p><em>Solution</em>. The matrix representation $M$ shows that the dimension of the column space, and consequently that of $\operatorname{range}T$, is $3$. Thus, by the rank-nullity theorem, $$\dim\operatorname{null}T = \dim\operatorname{span}\{1,e^{2t},e^{4t}\}- \dim\operatorname{span}\{T(1),T(e^{2t}),T(e^{4t})\} = 0,$$ implying that $T$ is injective and surjective; consequently, the equation in question not only has a solution but has a unique one.</p> <p>I hope you found the above useful.</p>
21,243
<p>I know that this question is some sort of bridge between Informatics and Mathematics; not knowing the best place to post it, I opted for this site because of the type of answer I want (which concerns Math more than anything).</p> <p>Consider a graph represented by a collection of nodes and connections. The best way to describe the graph is through the adjacency matrix (AM): a matrix that has a 1 if one node is connected to another and a 0 otherwise (we consider non-directed graphs, so all connections are bidirectional).</p> <p>Does anyone know whether the eigenvalues of this matrix imply something about the graph: properties, topology...? I ask this question because I've almost finished studying Markov chains. For a chain, the eigenvalues of the matrix of transition probabilities P (for discrete-time Markov chains), or of the matrix of transition rates Q (for continuous-time Markov chains), can be examined in order to check whether the chain is ergodic or not (with a parallel to Control Theory: eigenvalues in the unit circle or in the negative half-plane).</p> <p>I am trying to find something similar for graphs.</p> <p>Thank you </p>
Sam
6,582
<p>There are a <strong>lot</strong> of properties of the eigenvalues of the adjacency matrix $A$.</p> <p>If the diameter of $G$ is $d$, then $A$ has at least $d+1$ distinct eigenvalues.</p> <p>For instance, if a graph is $d$-regular, then its largest eigenvalue is bounded by $d$. It is connected if and only if the multiplicity of $d$ is 1. </p> <p>A complete bipartite graph $K_{m,n}$ has three eigenvalues : 0, $\lambda$, $-\lambda$ where $\lambda = \sqrt{mn}$.</p> <p>For more, you should check a book on algebraic graph theory like Algebraic Graph Theory, Godsil and Royle, Springer.</p> <p>Note : The AM is not the "best" way to describe a graph. It's one way, very interesting in case you're looking to compute paths.</p>
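The $K_{m,n}$ spectrum can be checked without any linear-algebra library (a small illustration of mine): since the eigenvalues are $0$ and $\pm\sqrt{mn}$, the adjacency matrix must satisfy $A^3 = mn\,A$, and $\operatorname{trace}(A^2)$ counts closed walks of length 2, i.e. $2|E| = 2mn$.

```python
# Adjacency matrix of K_{m,n}: vertices 0..m-1 on one side, m..m+n-1 on the other.
def complete_bipartite(m, n):
    size = m + n
    return [[1 if (i < m) != (j < m) else 0 for j in range(size)]
            for i in range(size)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

m, n = 3, 4
A = complete_bipartite(m, n)
A2 = matmul(A, A)
A3 = matmul(A2, A)

trace_A2 = sum(A2[i][i] for i in range(m + n))     # = 2|E| = 2*m*n
cubic_identity = all(A3[i][j] == m * n * A[i][j]   # A^3 = mn * A entrywise
                     for i in range(m + n) for j in range(m + n))
```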
1,515,478
<p>Given a quadratic equation with one and only one root (for example $6-\sqrt{2}$). Do there exist integers $a,b$ and $c$ with $ax^2 + bx + c = 0$ for that root?</p>
marty cohen
13,079
<p>Not even if the coefficients are rational. If a quadratic equation has $r$ and $s$ as roots, it can be written as $(x-r)(x-s) =x^2-(r+s)x + rs $.</p> <p>If the coefficients are rational, then $r+s$ is rational, so $r$ and $s$ are both rational or both irrational.</p>
211,290
<p>Is it possible to import graphs generated by <code>geng</code> (a tool from <a href="http://pallini.di.uniroma1.it/" rel="noreferrer">the nauty suite</a>) one by one, rather than all at once. If one could also specify not only the order but also the number of edges that would be great, but the main thing is to be able to get one at a time rather than dump them all in memory at once. Thanks</p>
Mariusz Iwaniuk
26,828
<p>Try this:</p> <pre><code>a = 7.06; b = 0; c = 0.107763;(*initial values to obtain the root*)(*defining the equations*) Eq1 = ((a - b)*((628*d)^(1 - c))*Cos[c*Pi/2]); Eq2 = 1 + 2*((628*d)^(1 - c))*Sin[c*Pi/2] + ((628*d)^(2*(1 - c))); (*define the equation which the root must be calculated*) Eqt = Eq1 - 0.08*Eq2 == 0; FindRoot[Eqt, {d, 1/10000}] (* {d -&gt; 0.0000107223 } *) FindRoot[Eqt, {d, 1/2}] (*{d -&gt; 0.23648} *) </code></pre> <p>If <code>d</code> is <code>Real</code> we have only 2 roots:</p> <pre><code>NSolve[Eqt &amp;&amp; -100 &lt; d &lt; 100, d, Reals] {{d -&gt; 0.0000107223}, {d -&gt; 0.23648}} </code></pre> <p><strong>EDITED:</strong></p> <p>It is better to use exact numbers for that use <code>Rationalize[0.265616, 0]</code> function.</p> <pre><code>a = 7013/500;(*14.026*) b = 241/25000000;(*9.64*10^(-6)*) c = 16601/62500;(*0.265616*) Eq1 = ((a - b)*((628*d)^(1 - c))*Cos[c*Pi/2]); Eq2 = 1 + 2*((628*d)^(1 - c))*Sin[c*Pi/2] + ((628*d)^(2*(1 - c))); Eqt = Eq1 - 8/100*Eq2 == 0; sol = FindRoot[Eqt, {d, 1/10}, WorkingPrecision -&gt; 100] (*{d -&gt; 1.59474734438386133056761886005121544936415083334269688311103192\ 8277447098063374108097226699429567875*10^-6 - 3.11673146273605497900349372926910978847189125527343938927043516405\ 9118848431652462070982280427383199*10^-117 I}*) sol // Chop (*{d -&gt; 1.59474734438386133056761886005121544936415083334269688311103192\ 8277447098063374108097226699429567875*10^-6}*) </code></pre>
4,041,140
<p>this is a problem which is for homework in my math course. The problem states that you must find two distinct, non-zero matrices, (Size 2x2) such that A * B + A + B = 0.</p> <p>I'm not really looking for an answer, but rather the methodology I should be using to come to this answer. It seems like the easiest way to do this would be through brute force, but I am somewhat slow when it comes to math and so I was hoping there might be an easier method out there. Thank you in advance.</p>
Kenta S
404,616
<p><span class="math-container">$AB+A+B=0$</span> is equivalent to <span class="math-container">$(A+I)(B+I)=AB+A+B+I=I$</span> for the identity matrix <span class="math-container">$I=\begin{bmatrix}1 &amp;0\\0&amp;1\end{bmatrix}$</span>.</p> <p>Thus, picking any invertible matrix <span class="math-container">$M$</span>, the matrices <span class="math-container">$A=M-I$</span> and <span class="math-container">$B=M^{-1}-I$</span> are solutions. In fact, it is easy to show that these are the only solutions.</p>
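Here is a quick sketch (my own, with an arbitrarily chosen invertible $M$) verifying the construction in exact rational arithmetic:

```python
from fractions import Fraction as F

M = [[F(2), F(1)], [F(1), F(1)]]          # det = 1, so M is invertible
Minv = [[F(1), F(-1)], [F(-1), F(2)]]     # inverse of M

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[F(1), F(0)], [F(0), F(1)]]
negI = [[-I[i][j] for j in range(2)] for i in range(2)]

A = add(M, negI)        # A = M - I
B = add(Minv, negI)     # B = M^{-1} - I
residual = add(add(mul(A, B), A), B)      # AB + A + B, should be zero
```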
2,301,198
<p>Solve the initial value problem for the sequence $\left \{ u_{n}| n \in \mathbb{N} \right \}$ satisfying the recurrence relation: $u_n − 5u_{n-1} + 6u_{n−2} = 0 $ with $u_0 = 1$ and $u_1 = 1$.</p> <p>I've gotten the general solution to be $u_n = A(2)^n + B(3)^n$. </p> <p>Once I sub the initial values: </p> <p>$u_0 = 2A + 3B = 1$</p> <p>$u_1 = 2A + 3B = 1$</p> <p>And I'm unsure how to solve this system. Any help appreciated, thanks. </p>
Cye Waldman
424,641
<p>These problems of generalized Fibonacci sequences seem to show up here weekly. Here, then, is a generalized solution for solving <strong>all</strong> such problems with a recursion formula of the type $f_n=af_{n-1}+bf_{n-2}$ and arbitrary initial conditions.</p> <p>There have been many extensions of the sequence with adjustable (integer) coefficients and different (integer) initial conditions, e.g., $f_n=af_{n-1}+bf_{n-2}$. (You can look up Pell, Jacobsthal, Lucas, Pell-Lucas, and Jacobsthal-Lucas sequences.) Maynard has extended the analysis to $a,b\in\mathbb{R}$, (Ref: Maynard, P. (2008), “Generalised Binet Formulae,” $Applied \ Probability \ Trust$; available at <a href="http://ms.appliedprobability.org/data/files/Articles%2040/40-3-2.pdf" rel="nofollow noreferrer">http://ms.appliedprobability.org/data/files/Articles%2040/40-3-2.pdf</a>.)</p> <p>We have extended Maynard's analysis to include arbitrary $f_0,f_1\in\mathbb{R}$. It is relatively straightforward to show that</p> <p>$$f_n=\left(f_1-\frac{af_0}{2}\right) \frac{\alpha^n-\beta^n}{\alpha-\beta}+\frac{af_0}{2} \frac{\alpha^n+\beta^n}{\alpha+\beta}= \left(f_1-\frac{af_0}{2}\right)F_n+\frac{af_0}{2}L_n$$</p> <p>where $\alpha,\beta=(a\pm\sqrt{a^2+4b})/2$, $F_n=\frac{\alpha^n-\beta^n}{\alpha-\beta}$, and $L_n=\frac{\alpha^n+\beta^n}{\alpha+\beta}$. </p> <p>The result is written in this form to underscore that it is the sum of a Fibonacci-type and Lucas-type Binet-like terms. It will also reduce to the standard Fibonacci and Lucas sequences for $a=b=1, f_1=1, \text{ and } f_0=0 \text{ or }2$.</p> <p>This can also be expressed as</p> <p>$$f_n=f_1F_n+bf_0F_{n-1}$$</p>
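The closed form can be sanity-checked against the recursion directly; the sketch below (mine, with arbitrarily chosen parameters) compares the Binet-like formula with a plain iterative computation.

```python
import math

def binet(n, a, b, f0, f1):
    """Generalized Binet-like formula quoted above (requires a != 0)."""
    disc = math.sqrt(a * a + 4 * b)
    alpha, beta = (a + disc) / 2, (a - disc) / 2
    Fn = (alpha ** n - beta ** n) / (alpha - beta)
    Ln = (alpha ** n + beta ** n) / (alpha + beta)   # note alpha + beta = a
    return (f1 - a * f0 / 2) * Fn + (a * f0 / 2) * Ln

def by_recursion(n, a, b, f0, f1):
    """f_n = a*f_{n-1} + b*f_{n-2} computed iteratively (n >= 1)."""
    prev, cur = f0, f1
    for _ in range(n - 1):
        prev, cur = cur, a * cur + b * prev
    return cur

fib10 = binet(10, 1, 1, 0, 1)            # classic Fibonacci case
gen = binet(12, 2, 3, 5, 1)              # an arbitrary generalized case
gen_rec = by_recursion(12, 2, 3, 5, 1)   # same case by direct iteration
```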
1,545,929
<p>Assume $0&lt;\alpha\leq 1$ and $x&gt;0$. Does the following inequality hold? $$(1-e^{-x})^{\alpha}\leq (1-\alpha e^{-x})$$ I know that the reverse inequality holds if $\alpha\ge 1$.</p>
layman
131,740
<p>First of all, all norms on $\Bbb R^{n}$ are equivalent (do you know what that means?). As a consequence, if a function is continuous with respect to one norm, it is continuous with respect to every norm, so you can just pick the norm that makes the proof of continuity easiest based on the given function.</p> <p>Also, just as we prove continuity in the $1$-dimensional case, we have to show <em>for every</em> $\epsilon &gt; 0$, blah blah blah.</p> <p>To prove it, we let $\epsilon &gt;0$ be <em>arbitrary</em>, i.e., instead of checking each $\epsilon &gt; 0$ on its own (which would mean us checking uncountably infinite points), we do all of the work at once. We find a $\delta$ that is a formula based on $\epsilon$, so that given any $\epsilon$, you can compute a $\delta$ that works. It's the exact same in multi-dimensions. Let $\epsilon$ be arbitrary, and find a working $\delta$ which is some formula with $\epsilon$ in it. Then for any $\epsilon$ you pick, the formula will give you a $\delta$ that works.</p>
2,511,095
<p>Let $p$ be an odd prime. We know that the polynomial $x^{p-1}-1$ splits into linear factors modulo $p$. If $p$ is of the form $4k+1$ then we can write $$x^{p-1}-1=x^{4k}-1=(x^{2k}+1)(x^{2k}-1).$$ The theorem of Lagrange tells us that any polynomial congruence of degree $n$ mod $p$ has at most $n$ solutions. Hence we can deduce from this factorization that $-1$ is a quadratic residue modulo $p$. Similarly if $p$ is of the form $3k+1$ we can write $4(x^{p-1}-1)=4(x^{3k}-1)=(x^k-1)((2x^{k}+1)^2+3)$ and deduce that $-3$ is a quadratic residue mod $p$. </p> <p><strong>Can we prove in this fashion that $-2$ is a quadratic residue mod $p$ if $p$ is of the form $8k+1$ or $8k+3$?</strong></p> <p>Note that I am interested only in this specific method. I know how to prove this using different means.</p>
John Alexiou
3,301
<p>You can try eliminating the $a^3$ term, by multiplying the first equation by $b$ and the second equation by $\frac{a}{3}$ and subtracting the two.</p> <p>$$ \begin{cases} b \left(a^3 + 39 ab^2 - 18 = 0 \right) \\ \frac{a}{3} \left( 3a^2 b + 13 b^3 - 5 = 0 \right) \end{cases} \Rightarrow $$</p> <p>$$ \begin{cases} b a^3 + 39 ab^3 - 18b = 0 \\ b a^3 + \frac{13}{3} a b^3 - \frac{5}{3} a = 0 \end{cases} \Rightarrow \frac{104}{3} a b^3 + \frac{5}{3} a - 18 b = 0 $$</p> <p>$$ a= \frac{54 b}{104 b^3 + 5} $$</p> <p>Substituting $a$ into the second equation yields</p> <p>$$ \frac{8748 b^3}{(104 b^3+5)^2} + 13 b^3 -5 =0$$ which is solved with $$b^3 = \frac{1}{8} \Rightarrow b = \frac{1}{2} $$</p> <p>finally you get</p> <p>$$ \begin{cases} a = \frac{3}{2} \\ b = \frac{1}{2} \end{cases} $$</p>
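One can confirm the result by substituting back, e.g. in exact rational arithmetic (a quick check of mine, not part of the original answer):

```python
from fractions import Fraction

a = Fraction(3, 2)
b = Fraction(1, 2)

eq1 = a**3 + 39 * a * b**2 - 18          # first original equation, should be 0
eq2 = 3 * a**2 * b + 13 * b**3 - 5       # second original equation, should be 0
a_from_b = 54 * b / (104 * b**3 + 5)     # the elimination step a = 54b/(104b^3+5)
```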
2,292,324
<p>I know what the answer to this question is, but I am not sure how the answer was reached and I would really like to understand it! I am omitting the base case because it is not relevant for my question.</p> <p>Inductive hypothesis:</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{n(n+1)} = \frac{n}{n+1}$$ is true when $n = k$ and $k &gt; 1$</p> <p>Therefore: $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1}$$</p> <p>Inductive step:</p> <p>Prove that $$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+1+1} = \frac{k+1}{k+2}$$</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \left[\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)}\right] + \frac{1}{(k+1)(k+2)}$$</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k}{k+1} + \frac{1}{(k+1)(k+2)}$$</p> <p>$$\frac{1}{1\cdot2} + \frac{1}{2\cdot3} + \frac{1}{3\cdot4} + \dotsb + \frac{1}{k(k+1)} = \frac{k+1}{k+2}$$</p> <p>What I am confused about is where the $\frac{1}{(k+1)(k+2)}$ comes from in the first line of the inductive step. Can someone please explain this in a little more detail? The source of the answer explains it as "break last term from sum", but I am unclear on what that means.</p>
fleablood
280,126
<p>"What I am confused about is where the 1/(k+1)*(k+2) comes from in the first line of the inductive step."</p> <p>It comes in because you are trying to evaluate for $n = k+1$.</p> <p>You want to prove the statement $\frac 1{1*2} + .....+ \frac 1{(n-1)(n)} + \frac 1{n*(n+1)} = \frac n{n+1}$.</p> <p>For $n = k$: if you replace $n$ with $k$ you are assuming $\frac 1{1*2} + .....+ \frac 1{(k-1)(k)} + \frac 1{k*(k+1)} = \frac k{k+1}$.</p> <p>If you replace $n$ with $k + 1$ you get the statement:</p> <p>$\frac 1{1*2} + .....+ \frac 1{(k-1)(k)} + \frac 1{k*(k+1)}+ \frac 1{(k+1)(k+2)} = \frac {k+1}{k+2}$.</p> <p>And that is the statement you wish to prove. </p> <p>We are assuming $\frac 1{1*2} + .....+ \frac 1{(k-1)(k)} + \frac 1{k*(k+1)} = \frac k{k+1}$</p> <p>So $\frac 1{1*2} + .....+ \frac 1{(k-1)(k)} + \frac 1{k*(k+1)}+ \frac 1{(k+1)(k+2)} = \frac k{k+1} + \frac 1{(k+1)(k+2)}$</p> <p>You just need to prove that $\frac k{k+1} + \frac 1{(k+1)(k+2)}= \frac {k+1}{k+2}$.</p> <p>.....</p> <p>Anyway, you certainly did not transcribe the proof correctly.</p>
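Both the closed form and the algebra of the inductive step can be checked mechanically with exact fractions (a sketch of mine, not part of the original answer):

```python
from fractions import Fraction

def partial_sum(n):
    """1/(1*2) + 1/(2*3) + ... + 1/(n*(n+1)), exactly."""
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

# closed form: the sum equals n/(n+1)
closed_ok = all(partial_sum(n) == Fraction(n, n + 1) for n in range(1, 30))

def step_ok(k):
    """k/(k+1) + 1/((k+1)(k+2)) == (k+1)/(k+2), the inductive step."""
    return Fraction(k, k + 1) + Fraction(1, (k + 1) * (k + 2)) == Fraction(k + 1, k + 2)

induction_ok = all(step_ok(k) for k in range(1, 30))
```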
878,785
<p>I know that the common approach in order to find an angle is to calculate the dot product between 2 vectors and then calculate arcus cos of it. But in this solution I can get an angle only in the range(0, 180) degrees. What would be the proper way to get an angle in range of (0, 360)?</p>
AgvaniaRekuva
530,850
<p><strong>Before reading this answer</strong> - Imagine your angle in 3D space - you can look at it from the "front" and from the "back" (front and back are defined by you). The angle from the front will be the opposite of the angle that you see from the back. So there is no real sense in a value in a range larger than $[0,180]$.</p> <p><strong>If you still want to read more, enjoy</strong></p> <p>In the 3D case, your two vectors lie on some plane (the plane whose normal you can get from the cross product of the two vectors). Getting an angle in $[0,180]$ degrees is of course possible by computing $arccos(\frac{\vec{a}*\vec{b}}{|\vec{a}| *|\vec{b}|})$.</p> <p>I think what you can do is fix the Z axis, so that the two vectors will only differ in X and Y. Then you solve a 2D geometry problem. You should be able to fix the Z axis by dividing the two vectors by their Z components. Since each Z component is just a scalar, the vector directions will remain the same (the orientation might change, though, if this scalar is negative). You should remember whether these two scalars were both positive, both negative, or one positive and one negative. <strong>In the last case, your final result will be flipped!</strong></p> <p>If the Z component of the two vectors is $0$, then this step can (and actually must) be skipped. However, <strong>if only one of them has $0$ in its Z component, then it is probably impossible to compute the very precise angle</strong>, but you can still compute an approximation, by dividing the other vector by a very large number. </p> <p>Having that, you can subtract vector $\vec{a}$ from both vectors $\vec{a}$ and $\vec{b}$ and then add the vector $(1, 0, 0)$ to both. Thus, vector $\vec{a}$ will be $(1, 0, 0)$, while vector $\vec{b}$ will be $(b_x, b_y, 0)$.</p> <p>Now, if $b_y$ is positive, then your angle is $arccos(\frac{\vec{a}*\vec{b}}{|\vec{a}| *|\vec{b}|})$. Else, it is $360 - arccos(\frac{\vec{a}*\vec{b}}{|\vec{a}| *|\vec{b}|})$.
Remember to use the opposite result if only one of the original Z values was negative, as described above!</p>
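For what it's worth, once the problem has been reduced to 2D, the usual programmatic way to get a full $[0,360)$ angle is atan2 on the dot and cross products; a sketch (my own, not from the answer above):

```python
import math

def angle_deg_0_360(ax, ay, bx, by):
    """Counterclockwise angle from (ax, ay) to (bx, by) in [0, 360) degrees."""
    dot = ax * bx + ay * by
    cross = ax * by - ay * bx          # z-component of the 2D cross product
    ang = math.degrees(math.atan2(cross, dot))
    return ang % 360                   # wraps (-180, 180] onto [0, 360)

a90 = angle_deg_0_360(1, 0, 0, 1)      # quarter turn counterclockwise
a270 = angle_deg_0_360(1, 0, 0, -1)    # quarter turn clockwise -> 270
```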
878,785
<p>I know that the common approach in order to find an angle is to calculate the dot product between 2 vectors and then calculate arcus cos of it. But in this solution I can get an angle only in the range(0, 180) degrees. What would be the proper way to get an angle in range of (0, 360)?</p>
theodore panagos
615,306
<p>I write the formula as I wrote it in Excel (put xa, ya, xb, yb in cells a2, b2, c2, d2).</p> <p>angle(vector.a,vector.b)</p> <p>=(180/pi())* abs(pi()/2*((1+sign(a2))* (1-sign(b2^2))-(1+sign(c2))* (1-sign(d2^2)))</p> <p>+pi()/4*((2+sign(a2))*sign(b2)-(2+sign(c2))*sign(d2))</p> <p>+sign(a2*b2)*atan((abs(a2)-abs(b2))/(abs(a2)+abs(b2)))</p> <p>-sign(c2*d2)*atan((abs(c2)-abs(d2))/(abs(c2)+abs(d2))))</p> <p>The formula gives the angle between two vectors a and b from 0 to 360 degrees,</p> <p>in the counterclockwise direction, for any values of the vectors' coordinates.</p> <p>For xa=ya=0 and/or xb=yb=0 the result is undefined.</p>
436,225
<p><a href="http://en.wikipedia.org/wiki/Incidence_matrix">The incidence matrix</a> of a graph is a way to represent the graph. Why go through the trouble of creating this representation of a graph? In other words what are the applications of the incidence matrix or some interesting properties it reveals about its graph?</p>
dtldarek
26,306
<p>There are many advantages, especially if the total number of edges is $|E| = \Omega(|V|^2)$. First of all, worst-case constant time for adding, deleting edges, also testing if edge exists (adjacency lists/sets might have some additional $\log n$ factors). Second, simplicity: no advanced structures needed, easy to work with, etc. Moreover some algorithms like to store data for each edge (like flows), matrix representation is then very convenient, and sometimes has nice properties like:</p> <ul> <li>if $A$ contains $1$ for edges and $0$ otherwise, then $A^k$ contains the number of paths of length $k$ between all vertices,</li> <li>if $A$ contains weights (with $\infty$ meaning no edge), then $A^k$ using the <a href="http://en.wikipedia.org/wiki/Tropical_geometry" rel="nofollow">min-tropical semiring </a> gives you the lightest paths of length $k$ between all the pairs.</li> </ul> <p>Finally, there is <a href="http://en.wikipedia.org/wiki/Spectral_graph_theory" rel="nofollow">spectral graph theory</a>.</p> <p>I hope this explains something ;-)</p>
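The min-tropical remark can be made concrete in a few lines (my own sketch; the weight matrix is a made-up example): replacing (+, ×) by (min, +) in the matrix product, the $k$-th tropical power of the weight matrix holds the lightest walks of exactly $k$ edges.

```python
INF = float("inf")   # stands for "no edge"

def minplus(X, Y):
    """Matrix product over the (min, +) semiring."""
    n = len(X)
    return [[min(X[i][k] + Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# weighted digraph on 3 vertices: 0->1 (1), 1->2 (1), 0->2 (4), 2->0 (2)
W = [[INF, 1, 4],
     [INF, INF, 1],
     [2, INF, INF]]

W2 = minplus(W, W)                  # lightest walks of exactly 2 edges
lightest_0_to_2_in_2 = W2[0][2]     # 0 -> 1 -> 2 costs 1 + 1 = 2
```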
2,241,100
<p>Please someone help me solve the following equation in terms of $y$:</p> <blockquote> <p><strong>$\frac{y^2}{2}+y = \frac{x^3}{3}+\frac{x^2}{2}+c_1$</strong></p> </blockquote> <p>The calculator gives me:</p> <blockquote> <p>$y = \frac{1}{3}(\sqrt{3}\sqrt{c_1+2x^3+3x^2+3}-3), -\frac{1}{3}(\sqrt{3}\sqrt{c_1+2x^3+3x^2+3}-3)$</p> </blockquote> <p>I do not know the procedure to get to the answer. Somebody help please. Thank you.</p>
Ángel Mario Gallegos
67,622
<p>Since $$\frac{y^2}{2}+y = \frac{x^3}{3}+\frac{x^2}{2}+c_1\qquad\iff\qquad \frac12(y+1)^2-\frac12= \frac{x^3}{3}+\frac{x^2}{2}+c_1$$ It follows $$(y+1)^2=2\left(\frac{x^3}{3}+\frac{x^2}{2}+c_1\right)+1$$ Then $$y=-1\pm\sqrt{2\left(\frac{x^3}{3}+\frac{x^2}{2}+c_1\right)+1}$$</p>
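A quick numerical sanity check of the two branches (my own addition, not part of the answer; the test values $c_1=0$, $x=1$ are arbitrary):

```python
import math

c1, x = 0.0, 1.0  # arbitrary test values
rhs = x**3 / 3 + x**2 / 2 + c1

# The two branches obtained by completing the square:
root = math.sqrt(2 * rhs + 1)
y_plus = -1 + root
y_minus = -1 - root

# Both should satisfy the original equation y^2/2 + y = x^3/3 + x^2/2 + c1.
err_plus = abs(y_plus**2 / 2 + y_plus - rhs)
err_minus = abs(y_minus**2 / 2 + y_minus - rhs)
```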
2,106,003
<p>I was just reading about the <a href="https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski_paradox">Banach–Tarski paradox</a>, and after trying to wrap my head around it for a while, it occurred to me that it is basically saying that for any set A of infinite size, it is possible to divide it into two sets B and C such that there exists some mapping of B onto A and C onto A.</p> <p>This seems to be such a blatantly obvious, intuitively self-evident fact, that I am sure I must be missing something. It wouldn't be such a big deal if it was really that simple, which means that I don't actually understand it.</p> <p>Where have I gone wrong? Is this not a correct interpretation of the paradox? Or is there something else I have missed, some assumption I made that I shouldn't have?</p>
Paul Sinclair
258,282
<p>The fullest version of the paradox (that is known to me) says that if you have two sets $A, B \subseteq \Bbb R^3$, both of which have non-empty interior, then you can divide $A$ up into some finite number $n$ of disjoint subsets, then move those subsets around isometrically so that they remain disjoint and their union is now $B$.</p> <p>More formally, if $A, B$ have non-empty interior, there exist sets $A_k, k = 1, ..., n$ such that $A = \bigcup_{k=1}^n A_k$ and $A_j\cap A_k = \emptyset$ when $j \ne k$. And there exist isometries $f_k, k = 1, ..., n$ of $\Bbb R^3$ such that $B = \bigcup_{k=1}^n f_k(A_k)$ and $f_j(A_j) \cap f_k(A_k) = \emptyset$ when $j \ne k$.</p> <p>Because of the paradox, we know that if $V : \scr P(\Bbb R^3) \to \Bbb R$ is a set function, then one of these 3 conditions must hold: </p> <ul> <li>there are disjoint sets $A, B \subseteq \Bbb R^3$ such that $V(A\cup B) \ne V(A) + V(B)$, or</li> <li>there are isometries $f$ of $\Bbb R^3$ and sets $A \subseteq \Bbb R^3$ such that $V(A) \ne V(f(A))$, or</li> <li>$V$ is constant on all sets with interior.</li> </ul> <hr> <p><strong>Edit</strong> Adding explanation why at least one of the three conditions must hold:</p> <p>The first two conditions are just "$V$ is not additive" and "$V$ is not preserved under isometries". So if neither of those hold, then $V$ is additive and is preserved by isometries. In this case, let $A$ and $B$ be two sets with interior. Then per the BTP result I gave, we can write $A = \bigcup_{k=1}^n A_k$ and $B = \bigcup_{k=1}^n f_k(A_k)$ for disjoint $A_k$ and $f_k(A_k)$. 
Hence $$V(A) = V\left(\bigcup_{k=1}^n A_k\right) = \sum_{k=1}^n V(A_k) = \sum_{k=1}^n V(f_k(A_k)) = V\left(\bigcup_{k=1}^n f_k(A_k)\right) = V(B)$$ Thus if $V$ is additive and preserved under isometries, it must have the same value for all sets with interior (and in fact, that value must be $0$, since any set with interior is the disjoint union of two other sets with interior).</p> <hr> <p>The problem is that all three are violations of properties that any concept of "volume" should have:</p> <ul> <li>Volume is additive: if two sets are disjoint, the volume of their union should be the sum of their volumes.</li> <li>Volume is unchanged by isometries. Volume is supposed to depend only on size and shape, not location or orientation. So moving a set around rigidly should not change its volume.</li> <li>Volume is obviously not constant on sets with interior. The only way it could be both additive and constant is if it was always $0$. But the volume of the unit cube is $1$.</li> </ul> <p>The conclusion is therefore that it is impossible to define a concept of volume that works for all subsets of $\Bbb R^3$.</p>
413,165
<p>I am a graduate student and I've been thinking about this fun but frustrating problem for some time. Let <span class="math-container">$d = \frac{d}{dx}$</span>, and let <span class="math-container">$f \in C^{\infty}(\mathbb{R})$</span> be such that for every real <span class="math-container">$x$</span>, <span class="math-container">$$g(x) := \lim_{n \to \infty} d^n f(x)$$</span> converges. A simple example for such an <span class="math-container">$f$</span> would be <span class="math-container">$ce^x + h(x)$</span> for any constant <span class="math-container">$c$</span> where <span class="math-container">$h(x)$</span> converges to <span class="math-container">$0$</span> everywhere under this iteration (in fact my hunch is that every such <span class="math-container">$f$</span> is of this form), eg. <span class="math-container">$h(x) = e^{x/2}$</span> or simply a polynomial, of course.</p> <p>I've been trying to show that <span class="math-container">$g$</span> is, in fact, differentiable, and thus is a fixed point of <span class="math-container">$d$</span>. Whether this is true would provide many interesting properties from a dynamical systems point of view if one can generalize to arbitrary smooth linear differential operators, although they might be too good to be true.</p> <p>Perhaps this is a known result? If so I would greatly appreciate a reference. If not, and this has a trivial counterexample I've missed, please let me know. 
Otherwise, I've been dealing with some tricky double limit using tricks such as in <a href="https://math.stackexchange.com/a/15257/354855">this MSE answer</a>, to no avail.</p> <p>Any help is kindly appreciated.</p> <p><span class="math-container">$\textbf{EDIT}$</span>: Here is a discussion of some nice consequences now that we know the answer is positive, which I hope can be generalized.</p> <p>Let <span class="math-container">$A$</span> be the set of fixed points of <span class="math-container">$d$</span> (in this case, just multiples of <span class="math-container">$e^x$</span> as we know), let <span class="math-container">$B$</span> be the set of functions that converge everywhere to zero under the above iteration. Let <span class="math-container">$C$</span> be the set of functions that converge to a smooth function with the above iteration. Then we have the following:</p> <p><span class="math-container">$C$</span> = <span class="math-container">$A + B = \{ g + h : g\in A, h \in B \}$</span>.</p> <p>Proof: Let <span class="math-container">$f \in C$</span>. Let <span class="math-container">$g$</span> be what <span class="math-container">$d^n f$</span> converges to. Let <span class="math-container">$h = f-g$</span>. Clearly <span class="math-container">$d^n h$</span> converges to <span class="math-container">$0$</span> since <span class="math-container">$g$</span> is fixed. Then we get <span class="math-container">$f = g+h$</span>.</p> <p>Now take any <span class="math-container">$g\in A$</span> and <span class="math-container">$h \in B$</span>, and set <span class="math-container">$f = g+h$</span>. 
Since <span class="math-container">$d^n h$</span> converges to <span class="math-container">$0$</span> and <span class="math-container">$g$</span> is fixed, <span class="math-container">$d^n f$</span> converges to <span class="math-container">$g$</span>, and we are done.</p> <p>Next, here I'm assuming the result of this thread holds for a general (possibly elliptic) smooth linear differential operator <span class="math-container">$d : C^\infty (\mathbb{R}) \to C^\infty (\mathbb{R}) $</span>. A first note is that fixed points of one differential operator correspond to solutions of another, i.e. of a homogeneous PDE. Explicitly, if <span class="math-container">$d_1 g = g$</span>, then setting <span class="math-container">$d_2 = d_1 - Id$</span>, we get <span class="math-container">$d_2 g = 0$</span>. This much is simple.</p> <p>So given <span class="math-container">$d$</span>, finding <span class="math-container">$A$</span> from above amounts to finding the space of solutions of a PDE. I'm hoping that one can use techniques from dynamical systems to find the set <span class="math-container">$C$</span> and thus get <span class="math-container">$A$</span> after the iterations. But I'm approaching this naively and I do not know the difficulty or complexity of such an affair.</p> <p>One thing to note is that once we find some <span class="math-container">$g \in A$</span>, we can set <span class="math-container">$h(x) = g(\varepsilon x)$</span> for small <span class="math-container">$\varepsilon$</span> and <span class="math-container">$h \in B$</span>. Conversely, given <span class="math-container">$h \in B$</span>, I'm wondering what happens when we set <span class="math-container">$f(x) = h(x/\varepsilon)$</span>, and vary <span class="math-container">$\varepsilon$</span>. 
It might not coincide with a fixed point of <span class="math-container">$d$</span>, but could very well coincide with a fixed point of the new operator <span class="math-container">$d^k$</span> for some <span class="math-container">$k$</span>. For example, take <span class="math-container">$h(x) = \cos(x/2)$</span>. The iteration converges to 0 everywhere, and multiplying the interior variable by <span class="math-container">$2$</span> we do NOT get a fixed point of <span class="math-container">$d = \frac{d}{dx}$</span> but we do for <span class="math-container">$d^4$</span>.</p> <p>I'll leave it at this, let me know again if there is anything glaringly wrong I missed.</p>
Terry Tao
766
<p>I was able to adapt <a href="https://mathoverflow.net/questions/34059/if-f-is-infinitely-differentiable-then-f-coincides-with-a-polynomial/34067#34067">the accepted answer</a> to <a href="https://mathoverflow.net/questions/34059/if-f-is-infinitely-differentiable-then-f-coincides-with-a-polynomial">this MathOverflow post</a> to positively answer the question. The point is that one can squeeze more out of Petrov's Baire category argument if one applies it to the &quot;singular set&quot; of the function, rather than to an interval.</p> <p>The key step is to establish</p> <blockquote> <p><strong>Theorem 1</strong>. Let <span class="math-container">$f \in C^\infty({\bf R})$</span> be such that the quantity <span class="math-container">$M(x) := \sup_{m \geq 0} |f^{(m)}(x)|$</span> is finite for all <span class="math-container">$x$</span>. Then <span class="math-container">$f$</span> is the restriction to <span class="math-container">${\bf R}$</span> of an entire function (or equivalently, <span class="math-container">$f$</span> is real analytic with an infinite radius of convergence).</p> </blockquote> <p><strong>Proof</strong>. Suppose this is not the case. Let <span class="math-container">$X$</span> denote the set of real numbers <span class="math-container">$x$</span> for which there does not exist any entire function that agrees with <span class="math-container">$f$</span> on a neighbourhood of <span class="math-container">$x$</span> (this is the &quot;entire-singular set&quot; of <span class="math-container">$f$</span>). Then <span class="math-container">$X$</span> is non-empty (by analytic continuation) and closed. Next, let <span class="math-container">$S_n$</span> denote the set of all <span class="math-container">$x$</span> such that <span class="math-container">$M(x) \leq n$</span>. 
As <span class="math-container">$M$</span> is lower semicontinuous, the <span class="math-container">$S_n$</span> are closed, and by hypothesis one has <span class="math-container">$\bigcup_{n=1}^\infty S_n = {\bf R}$</span>. Hence, by the Baire category theorem applied to the complete non-empty metric space <span class="math-container">$X$</span>, one of the sets <span class="math-container">$S_n \cap X$</span> contains a non-empty set <span class="math-container">$(a,b) \cap X$</span> for some <span class="math-container">$a &lt; b$</span>.</p> <p>Now let <span class="math-container">$(c,e)$</span> be a maximal interval in the open set <span class="math-container">$(a,b) \backslash X$</span>, then (by analytic continuation) <span class="math-container">$f$</span> agrees with an entire function on <span class="math-container">$(c,e)$</span>, and hence on <span class="math-container">$[c,e]$</span> by smoothness. On the other hand, at least one endpoint, say <span class="math-container">$c$</span>, lies in <span class="math-container">$S_n$</span>, thus <span class="math-container">$$ |f^{(m)}(c)| \leq n$$</span> for all <span class="math-container">$m$</span>. By Taylor expansion of the entire function, we then have <span class="math-container">$$ |f^{(m)}(x)| \leq \sum_{j=0}^\infty \frac{|f^{(m+j)}(c)|}{j!} |x-c|^j$$</span> <span class="math-container">$$ \leq \sum_{j=0}^\infty \frac{n}{j!} (b-a)^j$$</span> <span class="math-container">$$ \leq n \exp(b-a)$$</span> for all <span class="math-container">$m$</span> and <span class="math-container">$x \in [c,e]$</span>. Letting <span class="math-container">$(c,e)$</span> and <span class="math-container">$m$</span> vary, we conclude that the bound <span class="math-container">$$ M(x) \leq n \exp(b-a)$$</span> holds for all <span class="math-container">$x \in (a,b) \backslash X$</span>. 
Since <span class="math-container">$(a,b) \cap X$</span> is contained in <span class="math-container">$S_n$</span>, these bounds also hold on <span class="math-container">$(a,b) \cap X$</span>, hence they hold on all of <span class="math-container">$(a,b)$</span>. Now from Taylor's theorem with remainder we see that <span class="math-container">$f$</span> agrees on <span class="math-container">$(a,b)$</span> with an entire function (the Taylor expansion of <span class="math-container">$f$</span> around any point in <span class="math-container">$(a,b)$</span>), and so <span class="math-container">$(a,b) \cap X$</span> is empty, giving the required contradiction. <span class="math-container">$\Box$</span></p> <p>The function <span class="math-container">$f$</span> in the OP question obeys the hypotheses of Theorem 1. By Taylor expansion applied to the entire function that <span class="math-container">$f$</span> agrees with, and performing the same calculation used to prove the above theorem, we obtain the bounds <span class="math-container">$$ M(x) = \sup_{m \geq 0} |f^{(m)}(x)| \leq M(0) \exp(|x|)$$</span> for all <span class="math-container">$x \in {\bf R}$</span>. We now have locally uniform bounds on all of the <span class="math-container">$f^{(m)}$</span> and the argument given by username (or the variant given in Pinelis's comment to that argument) applies to conclude.</p>
2,506,279
<blockquote> <p>If $\lim_{x\to \infty}xf(x^2+1)=2$ then find $$\lim_{x\to 0}\dfrac{2f'(1/x)}{x\sqrt{x}}=?$$</p> </blockquote> <p>My Try : $$g(x):=xf(x^2+1)\\g'(x)=f(x^2+1)+2xf'(x^2+1)$$ Now what?</p>
Lutz Lehmann
115,115
<p>Using Python to replicate your calculations, and enumerating the bit string from left-to-right,</p> <p><code>sum(int(b)*2**(1-k) for k,b in enumerate("110010010000111111011011")) </code> </p> <p>correctly returns <code>3.1415927410125732</code>. However, enumerating the bit string in the wrong direction (<em>from right-to-left</em>)</p> <p><code>sum(int(b)*2**(1-k) for k,b in enumerate("110010010000111111011011"[-1::-1]))</code></p> <p>indeed delivers the same incorrect <code>3.436558485031128</code> result mentioned in the OP.</p> <p>The enumeration in the Wikipedia citation is indeed somewhat in conflict with the common notation. For example, in $$ \sum b_n2^{-n}\cdot 2^e $$ the index starts on the left with $n=0$ enumerating the most significant bit. That is to say, that notation instructs you to read the bit string from left-to-right as $b_0,b_1,...,b_{23}$.</p> <p>In the more common notation, however, bit counting is usually oriented on the dyadic powers (<em>the powers of 2</em>) that make up the binary integers. So bit0 would be the right-most bit. The bit string would, therefore, be read from left-to-right as $bit_{23},bit_{22},...,bit_0$ and the corresponding formula should be $$ \sum_{n=0}^{23} bit_n\cdot 2^{n-23} $$</p> <p>So the WP formula is in accordance with the usual array indexing idiom where elements are read left-to-right starting with index 0 or 1.</p>
2,613,484
<p>Give an example of a vector space which has 125 elements. I don't know how to proceed! Is there a technique involving the choice of the field?</p>
GhD
191,008
<p>Just consider the field $\mathbb Z_5$ and multiply it $3$ times with itself. Then the new set over the field $\mathbb Z_5$ is a vector space with $125$ elements.</p>
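A brute-force enumeration confirming the count (my own sketch, not part of the answer):

```python
from itertools import product

p = 5
V = list(product(range(p), repeat=3))   # all vectors of (Z_5)^3
num_elements = len(V)                   # 5^3 = 125

# Componentwise operations mod 5 make V a vector space over Z_5:
def add(u, v):
    return tuple((a + b) % p for a, b in zip(u, v))

def scale(c, v):
    return tuple((c * a) % p for a in v)

w = add((1, 2, 3), (4, 4, 0))   # (0, 1, 3)
s = scale(2, (1, 2, 3))         # (2, 4, 1)
```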
1,401,516
<p>Given is the unit circle in the plane. Choose randomly point in it, such that $P(\left(x,y\right)\in A)$ is proportional to area of $A$, where $A$ is measurable set in plane. Find density function of random variable $X$ which represents the $x$ coordinate of this point.</p> <p>My idea was to find $P(X\leq x)$ and then differentiate, but I'm struggling with determining area of subset of a circle where all x-coordinates are less or equal to given $x$ while $x$ varies in $\left[-1,1\right]$. Attached is the figure for fixed $x$.<a href="https://i.stack.imgur.com/89o8Z.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/89o8Z.jpg" alt="enter image description here"></a></p>
Community
-1
<p>It is easier to find the marginal density directly. For each $-1&lt;x&lt;1$, we have $$f_X(x)=\int_{-\infty}^\infty f(x,y)\,dy= \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}}{1\over \pi}\,dy= {2\sqrt{1-x^2}\over \pi},$$ that is, $f_X(x)$ is the length of the slice at $x$ divided by the area of the circle. </p>
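A Monte Carlo check of the derived density (my own addition, not part of the answer; the sample size and test point are arbitrary choices):

```python
import math
import random

random.seed(0)
N = 200_000

# Sample uniformly in the unit disk by rejection from the square [-1, 1]^2.
xs = []
while len(xs) < N:
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y <= 1:
        xs.append(x)

# Compare the empirical mass of a thin slab with f_X(x0) * width.
x0, width = 0.3, 0.05
frac = sum(1 for x in xs if x0 <= x < x0 + width) / N
predicted = 2 * math.sqrt(1 - x0**2) / math.pi * width
gap = abs(frac - predicted)   # small: the histogram matches the formula
```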
1,724,812
<p>I'm struggling with the following limit:</p> <p>$$\lim_{x \to 0} \frac{1-(\cos x)^{\sin x}}{x^2}$$</p> <p>Don't know where to start with this. Hints/solutions very appreciated.</p>
jim
289,829
<p>For a series approach, just look at $y(x) = \cos(x)^{\sin(x)}$ and consider the logarithm $\ln y = \sin(x) \ln \cos(x)$. For $x \to 0$, $\ln y \approx (x - x^3/6) \ln (1 - x^2/2)$ and the expansion for the $\ln$ term is well known.</p>
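Numerically, the series conclusion can be checked (my own addition, not part of the answer): since $\ln y \sim -x^3/2$, the numerator behaves like $x^3/2$, the quotient like $x/2$, and the limit is $0$:

```python
import math

def q(x):
    """The quotient (1 - cos(x)**sin(x)) / x**2 from the limit."""
    return (1 - math.cos(x) ** math.sin(x)) / x**2

# The quotient shrinks to 0 roughly like x/2 as x -> 0.
xs = [0.1, 0.01, 0.001]
vals = [q(x) for x in xs]
ratios = [q(x) / (x / 2) for x in xs]   # should approach 1
```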
3,830,204
<p>Working through <em>Spivak's Calculus</em> and using old assignments from the course offered at my school I'm working on the following problem, asking me to find the integral <span class="math-container">$$\int \frac{1}{x^{2}+x+1} dx$$</span></p> <p>Looking through Spivak and previous exercises I worked on, I thought using a partial fraction decomposition would be the technique, but even in Spivak the only exercises I've seen which are similar involve:</p> <p><span class="math-container">$$\int \frac{1}{(x^{2}+x+1)^{n}} dx\ ,\text{where}\ n&gt; 1$$</span></p> <p>In which case it is pretty straightforward to solve. So there must be a reason why the exercise isn't presented unless it is so straightforward.</p> <p>Integration by parts and substitution (at least for now) have proven fruitless as well. So I come here to ask if I'm missing any special trick to compute this integral ?</p>
Karagum
428,319
<p>It's actually very simple to identify this integral as this:</p> <p><span class="math-container">$$\int \frac{1}{x^2+x+1} dx = \int \frac{1}{\left(x+\frac{1}{2} \right)^2+ \frac{3}{4}} dx.$$</span></p> <p>Now you can see that you can use the following rule for integration: <span class="math-container">$$\int \frac{1}{x^2+a^2}dx=\frac{1}{a} \arctan \left(\frac{x}{a} \right)+ C.$$</span></p> <p>And now you get <span class="math-container">$$\int \frac{1}{\left(x+\frac{1}{2} \right)^2+ \frac{3}{4}} dx = \frac{2}{\sqrt{3}} \arctan \left(\frac{2x+1}{\sqrt{3}} \right) + C.$$</span></p>
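One can sanity-check the antiderivative by differentiating it numerically (my own addition, not part of the answer; the sample points are arbitrary):

```python
import math

def F(x):
    """The antiderivative found by completing the square."""
    return 2 / math.sqrt(3) * math.atan((2 * x + 1) / math.sqrt(3))

def f(x):
    """The integrand 1 / (x^2 + x + 1)."""
    return 1 / (x**2 + x + 1)

# Central differences of F should reproduce f at arbitrary points.
h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (-2.0, -0.5, 0.0, 1.5))
```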
3,830,204
<p>Working through <em>Spivak's Calculus</em> and using old assignments from the course offered at my school I'm working on the following problem, asking me to find the integral <span class="math-container">$$\int \frac{1}{x^{2}+x+1} dx$$</span></p> <p>Looking through Spivak and previous exercises I worked on, I thought using a partial fraction decomposition would be the technique, but even in Spivak the only exercises I've seen which are similar involve:</p> <p><span class="math-container">$$\int \frac{1}{(x^{2}+x+1)^{n}} dx\ ,\text{where}\ n&gt; 1$$</span></p> <p>In which case it is pretty straightforward to solve. So there must be a reason why the exercise isn't presented unless it is so straightforward.</p> <p>Integration by parts and substitution (at least for now) have proven fruitless as well. So I come here to ask if I'm missing any special trick to compute this integral ?</p>
Community
-1
<p>Though the efficient way has been given several times, decomposition in simple fractions remains your good friend.</p> <p>The polynomial <span class="math-container">$x^2+x+1$</span> has complex roots, let <span class="math-container">$\omega$</span> and <span class="math-container">$\omega^*$</span>, which, incidentally, are cube roots of unity. Now,</p> <p><span class="math-container">$$\frac1{x^2+x+1}=\frac1{(x-\omega)(x-\omega^*)}=\frac1{2i\Im(\omega)}\left(\frac1{x-\omega}-\frac1{x-\omega^*}\right)$$</span> and after integration,</p> <p><span class="math-container">$$\frac1{2i\Im(\omega)}\log\frac{x-\omega}{x-\omega^*}.$$</span></p> <p>As <span class="math-container">$|x-\omega|=|x-\omega^*|$</span>, the logarithm reduces to the difference of the arguments,</p> <p><span class="math-container">$$-\frac1{\Im(\omega)}\arctan\frac{\Im(\omega)}{x-\Re(\omega)}=-\frac2{\sqrt3}\arctan\frac{\sqrt3}{2x+1}.$$</span></p> <hr /> <p>There is even a faster way, noticing that</p> <p><span class="math-container">$$\frac1{x-\omega}=\frac{x-\omega^*}{(x-\omega)(x-\omega^*)}=\frac{x-\Re(\omega)}{x^2+x+1}+i\frac{\Im(\omega)}{x^2+x+1}.$$</span></p> <p>Hence, taking the imaginary part,</p> <p><span class="math-container">$$\int\frac{dx}{x^2+x+1}=\frac{\Im(\log(x-\omega))}{\Im(\omega)}=-\frac1{\Im(\omega)}\arctan\frac{\Im(\omega)}{x-\Re(\omega)}.$$</span></p> <p>As a byproduct,</p> <p><span class="math-container">$$\int\frac{{x-\Re(\omega)}}{x^2+x+1}dx=\frac{\Re(\log(x-\omega))}{\Im(\omega)}=\frac1{\Im(\omega)}\log\sqrt{(x-\Re(\omega))^2+\Im^2(\omega)}.$$</span></p>
3,552,915
<p>Determine the point on the plane <span class="math-container">$4x-2y+z=1$</span> that is closest to the point <span class="math-container">$(-2, -1, 5)$</span>. This question is from Pauls's Online Math Notes. He starts by defining a distance function: </p> <p><span class="math-container">$z = 1 - 4x + 2y$</span></p> <p><span class="math-container">$d(x, y) = \sqrt{(x + 2)^2 + (y + 1)^2 + (-4 -4x + 2y)^2}$</span></p> <p>However, at this point, to make the calculus simpler he finds the partial derivatives of <span class="math-container">$d^2$</span> instead of <span class="math-container">$d$</span>. Why does this give you the same answer? </p>
MilesB
744,186
<p>In general, if <span class="math-container">$$d(x,y)=\sqrt {f(x,y)}$$</span> for some positive function <span class="math-container">$f(x,y)$</span> then the minima of d will correspond to the minima of f. So, if <span class="math-container">$f(x,y)$</span> is differentiable, it makes sense to search for solutions of <span class="math-container">$$ \frac{\partial f}{\partial x}=0, \frac{\partial f}{\partial y}=0$$</span> where <span class="math-container">$f=d^2$</span> rather than calculating partial derivatives of <span class="math-container">$d$</span>.</p> <p>In any case, <span class="math-container">$$ \frac{\partial d}{\partial x} = 0$$</span> is the same as <span class="math-container">$$\frac {1}{2} \frac {1}{\sqrt {f(x,y)}} \frac{\partial f}{\partial x}=0 $$</span> by applying the chain rule which requires <span class="math-container">$$\frac{\partial f}{\partial x}=0$$</span> which is where we started. (Similar conclusion for derivative with respect to <span class="math-container">$y$</span>)</p>
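Applied to the concrete problem (my own sketch, not part of the answer; the orthogonal-projection formula used as a cross-check is standard but not mentioned in the question): setting the partials of $d^2$ to zero gives a linear system whose exact solution is the projection of $(-2,-1,5)$ onto the plane.

```python
from fractions import Fraction as Fr

# Partials of d^2(x, y) = (x+2)^2 + (y+1)^2 + (-4 - 4x + 2y)^2
# set to zero reduce to the linear system
#   17x - 8y = -18
#   -8x + 5y =   7
det = 17 * 5 - (-8) * (-8)                 # 21
x = Fr(-18 * 5 - (-8) * 7, det)            # -34/21  (Cramer's rule)
y = Fr(17 * 7 - (-8) * (-18), det)         # -25/21
z = 1 - 4 * x + 2 * y                      # back-substitute into the plane

# Cross-check: orthogonal projection q = p - ((n.p - 1)/|n|^2) n
n, p = (4, -2, 1), (-2, -1, 5)
t = Fr(sum(a * b for a, b in zip(n, p)) - 1, sum(a * a for a in n))
proj = tuple(Fr(pi) - t * ni for pi, ni in zip(p, n))
```

Both routes give the same point, and it satisfies the plane equation exactly.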
1,284,039
<p>What function satisfies $f(x)+f(−x)=f(x^2)$?</p> <p>$f(x)=0$ is obviously a solution to the above functional equation.</p> <p>We can assume f is continuous or differentiable or similar (if needed).</p>
Lukas Geyer
43,179
<p>Define $f(x)$ any way you want for $x &gt; 0$, then define $f(-x) = f(x^2) - f(x)$ also for $x&gt;0$. If you want continuity, make sure that $\lim_{x\to 0} f(x) = 0$.</p>
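The construction can be checked directly (my own sketch, not part of the answer; the choice $f(x)=\sin x$ for $x>0$ is an arbitrary illustration, and continuity at $0$ holds since $\sin x \to 0$):

```python
import math

def f_pos(x):
    return math.sin(x)   # arbitrary choice on the positive axis

def f(x):
    if x > 0:
        return f_pos(x)
    if x == 0:
        return 0.0       # forced: f(0) + f(0) = f(0) gives f(0) = 0
    y = -x               # y > 0
    return f_pos(y * y) - f_pos(y)   # f(-y) := f(y^2) - f(y)

# The functional equation f(x) + f(-x) = f(x^2) holds for every x.
samples = [-2.0, -0.5, 0.0, 0.3, 1.7]
max_err = max(abs(f(x) + f(-x) - f(x * x)) for x in samples)
```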
2,438,362
<p>We have, $\rho(A) \leq \|A\|$ where $\rho(A)$ denotes the spectral radius of $A$.</p> <p>Now there is a corollary that $\rho(A) &lt; 1$ iff $\|A\|&lt;1$ it is clear that when $\|A\|&lt;1$ then $\rho(A)&lt;1$</p> <p>but how to show that if $\rho(A)&lt;1$ then $\|A\|&lt;1$, perhaps it is because of this one</p> <p>$\|A\| = \sup_{x\neq 0}(\frac{\|Ax\|}{\|x\|})$ and $\|Ax\| = \|\lambda x\| = |\lambda| \|x\|$ and hence $\|A\| = \sup(|\lambda|)$</p> <p>$\|A\| = \rho(A)&lt;1$ so $\|A\|&lt;1$</p> <p><strong>EDIT : -</strong></p> <p>I see this but for only $||A||_{2} = \sqrt{\rho(A^{*}A}) $ and $A^{*}$ is the conjugate transpose of $A$ , so in case of say real Symmetric matrix $A$ , $A^{*} = A^{T}$ so $||A||_{2} = \sqrt{\rho(A^{2})} = \sqrt{(\rho(A) )^ 2} &lt; 1 ,$ since $\rho(A)&lt;1$ implying $||A||_{2}&lt;1$,but what about its natural norm that is $\|A\| = \sup_{x\neq 0}(\frac{\|Ax\|}{\|x\|})$?</p>
GAVD
255,061
<p>You can use <a href="https://en.wikipedia.org/wiki/Spectral_radius#Gelfand.27s_formula" rel="nofollow noreferrer">Gelfand's formula</a>: $$\rho(A) = \lim_{k\to \infty} \|A^k\|^{\frac{1}{k}}.$$</p> <p>Otherwise, following <a href="https://mathoverflow.net/questions/179105/operator-norm-vs-spectral-radius-for-positive-matrices">this post</a> (which concerns the positive matrices discussed there), you have $\rho(A) \leq \|A\| \leq \rho(A)^2$.</p>
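A numerical illustration of Gelfand's formula (my own sketch, not part of the answer; the Frobenius norm and the sample matrix are my choices — the formula holds for any matrix norm):

```python
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def frob(X):
    """Frobenius norm of a matrix given as a list of lists."""
    return math.sqrt(sum(v * v for row in X for v in row))

# Jordan-type block: spectral radius 0.5, but ||A|| > 0.5.
A = [[0.5, 1.0], [0.0, 0.5]]

P, norms = A, []
for k in range(1, 201):
    norms.append(frob(P) ** (1.0 / k))   # ||A^k||^(1/k)
    P = mat_mul(P, A)
# norms[0] starts well above the spectral radius and the sequence
# drifts down toward rho(A) = 0.5 (convergence is slow for this block).
```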
277,594
<p><a href="https://i.stack.imgur.com/yX9my.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/yX9my.gif" alt="enter image description here" /></a></p> <pre><code>Manipulate[ ParametricPlot[{Sec[t], Tan[t]}, {t, 0, u}, PlotStyle -&gt; Dashed, PerformanceGoal -&gt; &quot;Quality&quot;, Exclusions -&gt; All, PlotRange -&gt; 2], {u, 0.001, 2 Pi}] </code></pre> <p>I found that for parametric curves with singularities, using <code>ParametricPlot</code> with the dashed line style, the plot shakes in some animations. Is there a simple way to eliminate the shaking?</p>
Syed
81,355
<p>As a possibility, let:</p> <pre><code>expr = a[1, 1] x[1]^2 + a[1, 2] x[1] x[2] + a[2, 3] x[2] x[3] rule = {Times[y__, x_[a_Integer] x_[b_Integer]] :&gt; y C[a, b] , Times[y__, x_[a_Integer] x_[a_Integer]] :&gt; y G[a, a] }; </code></pre> <p>Test:</p> <pre><code>expr /. rule </code></pre> <blockquote> <pre><code>a[1, 2] C[1, 2] + a[2, 3] C[2, 3] + a[1, 1] G[1, 1] </code></pre> </blockquote>
277,594
<p><a href="https://i.stack.imgur.com/yX9my.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/yX9my.gif" alt="enter image description here" /></a></p> <pre><code>Manipulate[ ParametricPlot[{Sec[t], Tan[t]}, {t, 0, u}, PlotStyle -&gt; Dashed, PerformanceGoal -&gt; &quot;Quality&quot;, Exclusions -&gt; All, PlotRange -&gt; 2], {u, 0.001, 2 Pi}] </code></pre> <p>I found that for parametric curves with singularities, using <code>ParametricPlot</code> with the dashed line style, the plot shakes in some animations. Is there a simple way to eliminate the shaking?</p>
Trev
52,996
<p>I'm not sure whether this would work for you here, but this reminds me of similar cases where <code>Plus</code> and <code>Times</code> would give me headaches. An easy fix was to just replace as <code>expr /. Times -&gt; times</code> (and/or <code>Plus</code> to <code>plus</code>), do whatever, and replace back when you're ready.</p> <p>(I believe this is also the idea behind <code>Deactivate</code> and related functions, but they seemed slightly more finicky to me.)</p>
3,371,339
<p>How to show <span class="math-container">$Pr(X&gt;2E(X))\le 1/2$</span> given that <span class="math-container">$X$</span> is a continuous random variable and <span class="math-container">$P(X\le 0)=0$</span>? <span class="math-container">$E(X)$</span> here is the mean of <span class="math-container">$X$</span>.</p> <p>I started with the definition of <span class="math-container">$E(X)$</span> for the continuous case. Then, I broke the integral into the integral from -infinity to 0 plus the integral from 0 to +infinity. Since <span class="math-container">$Pr(X\le 0)=0$</span>, the first term vanishes by using integration by parts. The second term will be greater than or equal to the integral from <span class="math-container">$0$</span> to <span class="math-container">$x$</span> for any <span class="math-container">$x$</span> in the open interval <span class="math-container">$(0,+\infty)$</span>. I got stuck here. I am trying to end up with an inequality relationship between <span class="math-container">$E(X)$</span> and the probability density function <span class="math-container">$f(x)$</span> so I can use that in the definition of <span class="math-container">$Pr(X&gt;2E(X))$</span>. Could anyone help me with this please?</p>
Reveillark
122,262
<p>Here's another way of doing it. Define</p> <p><span class="math-container">$$ g:=\liminf_{n\to\infty} f_n $$</span></p> <p><span class="math-container">$$ h:=\limsup_{n\to\infty} f_n $$</span></p> <p>The functions <span class="math-container">$g$</span> and <span class="math-container">$h$</span> are both measurable.</p> <p>Then note that</p> <p><span class="math-container">$$ E:=\{x\in X: \lim_{n\to\infty} f_n(x) \text{ exists}\}=\{x\in X: g(x)=h(x)\} $$</span></p> <p>As <span class="math-container">$g$</span> and <span class="math-container">$h$</span> are measurable, so is <span class="math-container">$E$</span>.</p> <p>Edit: If by "the limit exists" you mean also that it is finite, consider the set <span class="math-container">$$ A=\{ x\in X: \limsup_{n\to\infty} f_n(x) &lt;\infty\} $$</span> Note that <span class="math-container">$A$</span> is measurable, and thus so is <span class="math-container">$E\cap A$</span>, which is what you want.</p>
1,710,304
<p>I have a boolean algebra equation that I'm not able to simplify fully.</p> \begin{align} &amp;(c+ab)(d+b(a+c))\\ &amp;(c+ab)(d+ba+bc)\\ &amp;cd+ abc + bc^2+abd+a^2 b^2 + ab^2 c\\ &amp;\text{using boolean laws $x^2=x$ and $x+x=x$}\\ &amp;cd + bc + abd + ab + (abc + abc)\\ &amp;cd + bc + abd + ab + abc \end{align} And now I get stuck. Mathematica simplifies this to $ac+bc+bd$, but I just don't see how.</p>
CAGT
119,244
<p>You have the AND's and the OR's entered backwards in Mathematica; you should use:</p> <p>BooleanMinimize[(C || (A &amp;&amp; B)) &amp;&amp; (D || (B &amp;&amp; (A || C)))]</p> <p>|| = OR (+)</p> <p>&amp;&amp; = AND (*)</p>
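A brute-force truth-table check (my own addition, not part of the answer) confirms both points: with + read as OR and juxtaposition as AND, the expression minimizes to $ab+bc+cd$, while the swapped reading is exactly the function Mathematica minimized to $ac+bc+bd$:

```python
from itertools import product

def original(a, b, c, d):
    # (c + ab)(d + b(a + c)) with + as OR and juxtaposition as AND
    return (c or (a and b)) and (d or (b and (a or c)))

def minimal(a, b, c, d):
    return (a and b) or (b and c) or (c and d)       # ab + bc + cd

def swapped(a, b, c, d):
    # the same structure with AND and OR interchanged (the mistaken entry)
    return (c and (a or b)) or (d and (b or (a and c)))

def mathematica_result(a, b, c, d):
    return (a and c) or (b and c) or (b and d)       # ac + bc + bd

table = list(product([False, True], repeat=4))
correct_ok = all(original(*t) == minimal(*t) for t in table)
swapped_ok = all(swapped(*t) == mathematica_result(*t) for t in table)
```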
2,172,975
<p>I am reading <a href="http://www.deeplearningbook.org/contents/linear_algebra.html" rel="nofollow noreferrer">http://www.deeplearningbook.org/contents/linear_algebra.html</a> Chapter $2$, page $44$ ($3$rd paragraph) of this book and got confused. Can any body help me to understand this paragraph? Thanks in advance.</p> <p><em>While any real symmetric matrix $A$ is guaranteed to have an eigendecomposition, the eigendecomposition may not be unique. If any two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently choose a $Q$ using those eigenvectors instead.</em></p>
Ethan Bolker
72,858
<p>I'll start by assuming that you understand that if an $n \times n$ symmetric matrix has $n$ distinct eigenvalues then the eigenspaces are all one dimensional and any basis of eigenvectors is essentially unique: it contains one nonzero vector from each eigenspace.</p> <p>One way to understand the nonuniqueness when an eigenvalue is repeated is to think about the $2 \times 2$ identity matrix. The two eigenvalues are $1$ and $1$. Every basis is a basis of eigenvectors! </p>
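A tiny numerical illustration (my own sketch, not part of the answer): for $A=2I$, every orthonormal basis is a basis of eigenvectors, so two different orthogonal matrices both give valid eigendecompositions $A = QDQ^T$:

```python
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[2.0, 0.0], [0.0, 2.0]]   # symmetric, eigenvalue 2 repeated
D = [[2.0, 0.0], [0.0, 2.0]]

Q1 = [[1.0, 0.0], [0.0, 1.0]]                      # standard basis
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
Q2 = [[c, -s], [s, c]]                             # basis rotated by 45 degrees

R1 = mat_mul(mat_mul(Q1, D), transpose(Q1))        # both reconstruct A
R2 = mat_mul(mat_mul(Q2, D), transpose(Q2))
max_err = max(abs(R[i][j] - A[i][j])
              for R in (R1, R2) for i in (0, 1) for j in (0, 1))
```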
1,307,159
<p>Let $B$ denote the unit ball of $L^\infty$. Question: is $B$ sequentially compact for the topology of convergence in measure ? I am not necessarily assuming that the measure is finite (but $\sigma$ finite is fine).</p> <p>(I have looked a bit, and a counterexample to a lot of similar questions was the sequence of characteristic functions $\chi_{I_n}$ where $I_n$ are intervals of length going to zero but such that $\cup_{n \geq N} I_n$ covers the real line for all $N$. However it seems to me that such a sequence does converge to $0$ in measure, so that is at least a good sign to me! )</p>
Martin Argerami
22,857
<p>With $\sigma$-finite there is a counterexample, which shows that the ball is not sequentially compact: consider $L^\infty(\mathbb R)$ with Lebesgue measure, and let $I_n=(n,n+1)$, $n\in\mathbb Z$, and $f_n=\chi^{\phantom{I_n}}_{I_n}$. Then $$ \mu(\{x:\ |f_m-f_n|\geq\varepsilon\})=2 $$ for any $\varepsilon\leq1$. </p>
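The key measure computation here is easy to sanity-check numerically: for the shifted indicators $f_n=\chi_{(n,n+1)}$, the set where two of them differ by at least $\varepsilon\le 1$ is the union of the two disjoint unit intervals, which has measure $2$. A throwaway midpoint-grid approximation (grid and tolerance are arbitrary):

```python
def chi(a, b):
    # indicator function of the open interval (a, b)
    return lambda x: 1.0 if a < x < b else 0.0

f0, f3 = chi(0, 1), chi(3, 4)          # f_n = indicator of I_n = (n, n+1)

h = 1e-4                               # midpoint grid over [-1, 5]
measure = sum(h for k in range(60000)
              if abs(f0(-1 + (k + 0.5) * h) - f3(-1 + (k + 0.5) * h)) >= 0.5)
print(measure)                         # ≈ 2, the measure of I_0 ∪ I_3
```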
1,307,159
<p>Let $B$ denote the unit ball of $L^\infty$. Question: is $B$ sequentially compact for the topology of convergence in measure ? I am not necessarily assuming that the measure is finite (but $\sigma$ finite is fine).</p> <p>(I have looked a bit, and a counterexample to a lot of similar questions was the sequence of characteristic functions $\chi_{I_n}$ where $I_n$ are intervals of length going to zero but such that $\cup_{n \geq N} I_n$ covers the real line for all $N$. However it seems to me that such a sequence does converge to $0$ in measure, so that is at least a good sign to me! )</p>
Albert
19,331
<p>Even in the finite measure case this seems false. </p> <p>Take $f_n(x)=\mathrm{sign}(\sin(2^n x))$ in $L^\infty(0,2\pi)$. Then for all $n \neq m$, $\mathrm{Leb}\{ x : |f_n(x)-f_m(x)|&gt;1\}=\pi$.</p>
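Reading the functions as the square waves $f_n(x)=\operatorname{sign}(\sin(2^n x))$ — presumably what is meant, since $\operatorname{sign}(2^n x)$ would not depend on $n$ on $(0,2\pi)$ — the claim checks out numerically: within each interval where the slower wave has constant sign, the faster wave spends equal time positive and negative, so the two disagree on exactly half of $(0,2\pi)$, a set of measure $\pi$. A midpoint-grid sketch:

```python
import math

def f(n, x):
    # the square waves sign(sin(2^n x)) on (0, 2π)
    return 1.0 if math.sin((2 ** n) * x) > 0 else -1.0

N = 200000                              # midpoint grid on (0, 2π)
h = 2 * math.pi / N
measure = sum(h for k in range(N)
              if abs(f(1, (k + 0.5) * h) - f(3, (k + 0.5) * h)) > 1)
assert abs(measure - math.pi) < 1e-2    # disagreement set has measure π
print(measure)                          # ≈ π ≈ 3.1416
```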
4,338,285
<p>I have been thinking about the problem of finding the sum of the first squares for a long time and now I have an idea how to do it. However, the second step of this technique looks suspicious.</p> <ol> <li><p><span class="math-container">$$\sum_{i=1}^n i = \frac{n^2+n}{2}$$</span></p> </li> <li><p><span class="math-container">$$\int\sum_{i=1}^{n}idi=\int\frac{\left(n^{2}+n\right)}{2}dn$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\left(\frac{i^{2}}{2}+C_{1}\right)=\left(\frac{n^{3}}{3}+\frac{n^{2}}{2}\right)\cdot\frac{1}{2}+C_{0}$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}-2nC_{1}+2C_{0} $$</span></p> </li> <li><p>Assuming <span class="math-container">$C_{0}=0$</span>. Next, we are going to find the constant <span class="math-container">$C_{1}$</span></p> </li> <li><p>From step 4, we can conclude that: <span class="math-container">$C_{1}=\frac{n^{2}}{6}+\frac{n}{4}-\sum_{i=1}^{n}\frac{i^{2}}{2n}$</span>. We can fix <span class="math-container">$n$</span>, at any value, it is more convenient to take one(<span class="math-container">$n=1$</span>) then <span class="math-container">$C_{1}=-\frac{1}{12}$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}$$</span></p> </li> </ol> <p>Using the induction method, we can prove the correctness of this formula and that the value of the constant <span class="math-container">$C_{0}$</span> is really zero. But I created this question because the second step looks very strange, since the left part was multiplied by differential <span class="math-container">$di$</span>, and the right by <span class="math-container">$dn$</span>. 
If we assume that the second step is wrong, then why did we get the correct formula of summation of first squares?</p> <p>Note: The technique shown based on the integrated one is really interesting for me, using the same reasoning we can get the formula of the first cubes and so on</p> <p><strong>EDIT1</strong></p> <p>According to @DatBoi's comment, we can calculate constants <span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span> by solving a system of linear equations. The desired system must contain two equations, since we have two unknown values(<span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span>). To achieve this, we need to use the right part of the statement from step 4 twice, for two different n. For simplicity, let's take <span class="math-container">$n=1$</span> for first equation and <span class="math-container">$n=2$</span> for second equation, then the sum of the squares for these <span class="math-container">$n$</span> is 1 and 5, respectively.</p> <ol> <li>The main system <span class="math-container">$$ \left\{ \begin{array}{c} \frac{1}{3}+\frac{1}{2}-2C_{1}+2C_{0}=1 \\ \frac{8}{3}+\frac{4}{2}-4C_{1}+2C_{0}=5 \\ \end{array} \right. $$</span></li> <li>After simplification <span class="math-container">$$ \left\{ \begin{array}{c} \ C_{0}-C_{1}=\frac{1}{12} \\ \ C_{0}-2C_{1}=\frac{1}{6} \\ \end{array} \right. $$</span></li> <li>Roots: <span class="math-container">$C_{0}=0$</span> and <span class="math-container">$C_{1}=-\frac{1}{12}$</span></li> </ol> <p><strong>EDIT2</strong></p> <p>Considering @epi163sqrt's answer, the second step should be changed and it will take this form:</p> <ol start="2"> <li><span class="math-container">$$\sum_{i=1}^{n}\int_{ }^{ }idi=\int_{}^{}\frac{\left(n^{2}+n\right)}{2}dn$$</span></li> </ol> <p><em>My hypothesis</em>. 
If we have: <span class="math-container">$$\sum_{i=1}^{n}i^{p}=f\left(n,p\right)$$</span> Where <span class="math-container">$f$</span> is a closed form for summation, then this should be true for any natural degree</p> <p><span class="math-container">$$\sum_{i=1}^{n}\int_{}^{}i^{p}di=\int_{}^{}f\left(n,p\right)dn\ \to\ \sum_{i=1}^{n}\frac{i^{\left(p+1\right)}}{p+1}=\int_{}^{}f\left(n,p\right)dn-nC_{1}$$</span> Can you prove or disprove this hypothesis? My questions above are no longer relevant</p> <p><strong>EDIT3. Time for fun. Let's try to get a formula for summing the first cubes</strong></p> <ol> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\int_{ }^{ }i^{2}di=\int_{ }^{ }\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}dn$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\frac{i^{3}}{3}=\frac{n^{4}}{12}+\frac{n^{3}}{6}+\frac{n^{2}}{12}-nC_{1}+C_{0}$$</span></p> </li> <li><p><span class="math-container">$$ \left\{ \begin{array}{c} \frac{1}{4}+\frac{1}{2}+\frac{1}{4}-3C_{1}+3C_{0}=1 \\ \frac{16}{4}+\frac{8}{2}+\frac{4}{4}-6C_{1}+3C_{0}=9 \\ \end{array} \right. $$</span> Roots: <span class="math-container">$C_{0}=0$</span> and <span class="math-container">$C_{1}=0$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}i^{3}=\frac{n^{4}}{4}+\frac{n^{3}}{2}+\frac{n^{2}}{4}$$</span></p> </li> </ol> <p><strong>GREAT EDIT4 19.01.2022</strong></p> <p>So far I have no proof, however, the calculation of constants(<span class="math-container">$C_{0}$</span> and <span class="math-container">$C_{1}$</span>) can be significantly simplified by changing the lower index of summation to 0.</p> <p>1b. Let <span class="math-container">$M_{p}(n)$</span> be a closed form to obtain the summation, with degree of <span class="math-container">$p$</span>. I. e. 
<span class="math-container">$$\sum_{i=0}^{n}i^{p}=M_{p}\left(n\right)$$</span></p> <p>2b. Now let's assume that the statement written below is true <span class="math-container">$$\sum_{i=0}^{n}\int_{ }^{ }i^{p}di=\int_{ }^{ }M_{p}\left(n\right)dn$$</span></p> <p>3b. For now, we'll just take the integrals. <span class="math-container">$$\sum_{i=0}^{n}\left(\frac{i^{p+1}}{p+1}+C_{1}\right)=\int_{ }^{ }M_{p}\left(n\right)dn$$</span></p> <p>4b. Now let's express the sum explicitly. Also, we will move the <span class="math-container">$C_{1}$</span> without changing its sign, this is a valid action, since multiplying the constant by (-1) leads to another constant <span class="math-container">$$\sum_{i=0}^{n}i^{p+1}=\left(\int_{ }^{ }M_{p}\left(n\right)dn+nC_{1}\right)\left(p+1\right)$$</span></p> <p>5b. So we got the recurrent formula: <span class="math-container">$$M_{p}(n) = \left(\int_{ }^{ }M_{p-1}\left(n\right)dn+nC_{p}\right)p$$</span> <span class="math-container">$$M_{0}(n) = n+1$$</span></p> <p>6b. Now we have to build and resolve a system for two unknown constants. Therefore, the number of equations is two, we are also going to take n=0 and n=1: <span class="math-container">$$ \left\{ \begin{array}{c} M_{p}(0)=0 \\ M_{p}(1)=1 \end{array} \right. $$</span> 7b. As I said, we have two constants. In order to see this, we will add a new definition for <span class="math-container">$W_{p-1}(n)$</span> that satisfies the following expression: <span class="math-container">$\int_{ }^{ }M_{p-1}\left(n\right)dn=W_{p-1}\left(n\right)+C_{-p}$</span>. <span class="math-container">$$ \left\{ \begin{array}{c} \left(W_{p-1}\left(0\right)+C_{-p}+0C_{p}\right)p=0 \\ \left(W_{p-1}\left(1\right)+C_{-p}+1C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b. I will skip the formal proof of the fact, but the intuition is that <span class="math-container">$W_{p}(n)$</span> is a polynomial that does not have a constant term. 
Therefore, we can safely know that <span class="math-container">$W_{p}(0)=0$</span>. let's rewrite and simplify the system:</p> <p>8b.1. <span class="math-container">$$ \left\{ \begin{array}{c} \left(C_{-p}\right)p=0 \\ \left(W_{p-1}\left(1\right)+C_{-p}+C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b.2. <span class="math-container">$$ \left\{ \begin{array}{c} C_{-p}=0 \\ \left(W_{p-1}\left(1\right)+C_{p}\right)p=1 \end{array} \right. $$</span></p> <p>8b.3 <span class="math-container">$$ C_{p}=\frac{1}{p}-W_{p-1}\left(1\right) $$</span></p> <p>9b. We have completed the study of the constant. The last action is to match everything together. <span class="math-container">$$ M_{p}\left(n\right)=p\left(\left(\int_{ }^{ }M_{p-1}\left(n\right)dn\right)_{n}-n\left(\int_{ }^{ }M_{p-1}\left(n\right)dn\right)_{1}\right)+n $$</span> <span class="math-container">$$M_{0}(n) = n+1$$</span></p> <p>10b. (New step 29.04.2022) The previous step was not recorded correctly. I will also proceed to the calculation of definite integrals: <span class="math-container">$$ M_{p}(n) = \begin{cases} n+1, &amp; \text{if $p$ is zero } \\ p\int_{0}^{n}M_{p-1}\left(t\right)dt-np\int_{0}^{1}M_{p-1}\left(t\right)dt+n, &amp; \text{otherwise} \end{cases} $$</span></p>
PavelDev
709,857
<p>After 4 months, I suspect that the proof of this assumption is very simple. Currently, this answer is a draft, but it will demonstrate all the necessary ideas to generalize to an arbitrary case. We will start with the next special case:</p> <ol> <li><p><span class="math-container">$$\sum_{i=1}^{n}i=\frac{n^{2}+n}{2}$$</span></p> </li> <li><p>To perform the integration operation, we will add the function variable <span class="math-container">$x$</span>: <span class="math-container">$$\sum_{i=1}^{n}\left(x+i\right)=\frac{\left(n+x\right)^{2}+\left(n+x\right)-x^{2}-x}{2}$$</span></p> </li> <li><p>Now we can actually perform the integration of both parts with respect to the variable <span class="math-container">$x$</span>: <span class="math-container">$$\int_{ }^{ }\sum_{i=1}^{n}\left(x+i\right)dx=\int_{ }^{ }\frac{\left(n+x\right)^{2}+\left(n+x\right)-x^{2}-x}{2}dx$$</span></p> </li> <li><p><span class="math-container">$$\sum_{i=1}^{n}\left(\frac{\left(x+i\right)^{2}}{2}+C_{1}\right)=\frac{\frac{\left(n+x\right)^{3}}{3}+\frac{\left(n+x\right)^{2}}{2}-\frac{x^{3}}{3}-\frac{x^{2}}{2}+C_{0}}{2}$$</span></p> </li> <li><p>Now we just assume that <span class="math-container">$x=0$</span> and move <span class="math-container">$C_1$</span> to right side: <span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+2nC_{1}+C_{0}$$</span></p> </li> <li><p>After solving a system of two equations (for details, see my question), we get <span class="math-container">$C_0=0, C_1=\frac{1}{12}$</span>, the formula for summation of first squares is equivalent: <span class="math-container">$$\sum_{i=1}^{n}i^{2}=\frac{n^{3}}{3}+\frac{n^{2}}{2}+\frac{n}{6}$$</span></p> </li> </ol> <p><strong>Note section:</strong></p> <p>This proof looks correct, but contains a few oddities:</p> <ol> <li><p>In the second step, the right part of the expression was not specially simplified. 
Otherwise, it would have affected the integration result, and the presented proof probably would not have gone through.</p> </li> <li> <p>In step 4, when integrating the left part, we should use only this result: <span class="math-container">$\int_{ }^{ }\left(x+i\right)dx=\frac{\left(x+i\right)^{2}}{2}$</span>. However, it can be written in another way: <span class="math-container">$\int_{ }^{ }\left(x+i\right)dx=\frac{x^{2}}{2}+ix$</span>. With respect to the rules of integration, both answers are correct, but choosing the second option would not give the desired results.</p> </li> </ol>
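The recurrence in step 10b of the question can be verified mechanically with exact rational arithmetic, representing polynomials in $n$ as coefficient lists (a sketch; the helper names are mine):

```python
from fractions import Fraction as F

# polynomials in n as coefficient lists [c0, c1, ...] = c0 + c1*n + c2*n^2 + ...
def integrate(p):
    # antiderivative with zero constant term, i.e. ∫_0^n p(t) dt
    return [F(0)] + [c / (k + 1) for k, c in enumerate(p)]

def evaluate(p, n):
    return sum(c * F(n) ** k for k, c in enumerate(p))

def M(p):
    # step 10b:  M_0(n) = n + 1,
    # M_p(n) = p ∫_0^n M_{p-1}(t) dt - n p ∫_0^1 M_{p-1}(t) dt + n
    if p == 0:
        return [F(1), F(1)]
    W = integrate(M(p - 1))              # W_{p-1}, no constant term, so W(0) = 0
    poly = [p * c for c in W]
    poly[1] += -p * evaluate(W, 1) + 1   # the -np∫_0^1 correction and the trailing +n
    return poly

for p in range(5):
    for n in range(8):
        assert evaluate(M(p), n) == sum(F(i) ** p for i in range(n + 1))
print([int(evaluate(M(2), n)) for n in range(5)])   # [0, 1, 5, 14, 30]
```

Every value agrees exactly with the brute-force sums $\sum_{i=0}^{n} i^p$ for $p\le 4$, which is good evidence that the recurrence is stated correctly.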
2,384,538
<p>I am studying Linear Algebra Done Right; chapter 2, problem 6 states:</p> <blockquote> <p>Prove that the real vector space consisting of all continuous real valued functions on the interval $[0,1]$ is infinite dimensional.</p> </blockquote> <p><strong>My solution:</strong></p> <p>Consider the sequence of functions $x, x^2, x^3, \dots$ This is a linearly independent infinite sequence of functions, so clearly this space cannot have a finite basis. However, this proof relies on the fact that no $x^n$ is a linear combination of the previous terms. In other words, is it possible for a polynomial of degree $n$ to be equal to a polynomial of degree less than $n$? I believe this is not possible, but does anyone know how to prove this? More specifically, could the following equation ever be true for all $x$?</p> <p>$x^n = \sum\limits_{k=1}^{n-1} a_kx^k$ where each $a_k \in \mathbb R$</p>
Kamil Maciorowski
331,040
<p>Two identical functions have identical derivatives. Differentiate your equality $n$ times and you will see the two sides cannot be identical.</p>
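The derivative argument is easy to make concrete: the $n$-th derivative of $x^n$ is the nonzero constant $n!$, while the $n$-th derivative of any polynomial of degree less than $n$ is identically zero, so the two sides of $x^n = \sum_{k=1}^{n-1} a_k x^k$ can never agree for all $x$. A toy check with polynomials as coefficient lists (a sketch; names are mine):

```python
from math import factorial

def derivative(p):
    # p = [a0, a1, ..., ad] represents a0 + a1*x + ... + ad*x^d
    return [k * c for k, c in enumerate(p)][1:]

def nth_derivative(p, n):
    for _ in range(n):
        p = derivative(p)
    return p

n = 7
x_to_n = [0] * n + [1]                                # the polynomial x^n
assert nth_derivative(x_to_n, n) == [factorial(n)]    # constant n! ≠ 0

lower = [3, -2, 0, 5, 1, 4, 9]                        # arbitrary, degree < n
assert nth_derivative(lower, n) == []                 # the zero polynomial
print("the n-th derivative separates x^n from every lower-degree polynomial")
```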
1,970,305
<p>I have just begun reading through Section 3.2 of Hatcher's Algebraic Topology. While I reasonably understood the computations relating to the cup product, I was unsure of the purpose of the cup product. From what I knew, it does not help us to compute cohomology groups, given that we need the cohomology groups to compute the cup product. </p> <p>In a nutshell, why do we care about the cup product? </p>
Len West
377,227
<p>Evaluating the equation mod 5 we get x $\equiv$ 3 (mod 5). Evaluating the equation mod 3 we get z $\equiv$ 2 (mod 3). Setting x = 5m+3, z = 3n+2 in the equation we get y = 1-2m-2n. Integer values for m &amp; n will generate integer solutions for x, y, z.</p>
97,788
<p>What exactly is the connection between knots and operator algebra? I heard that Jones established such a connection while discovering the celebrated Jones Polynomial. </p> <p>Now Jones Polynomial is probably understood out of that context on its own, but what was it with Operator algebras in this space ? Can someone explain in plain English?</p> <p>As is probably very evident, I am a complete newbie to this area.</p>
MTS
703
<p>I would recommend you to look at Jones' survey paper from 1986, entitled <em>A New Knot Polynomial and Von Neumann Algebras.</em> It is very readable. Let me try to make a brief summary, though.</p> <p>The basic object you start with is a $II_1$ factor. This is a von Neumann algebra $M$ with trivial center $Z(M)\simeq \mathbb{C}$, possessing a faithful trace $\tau : M \to \mathbb{C}$ (i.e. a positive normal state such that $\tau(a^*a)=0$ implies $a=0$) and having no minimal projections (this excludes matrix algebras).</p> <p>For whatever reason, it is a good idea to study subfactors of $N \subseteq M$. It turns out that the most important thing about a subfactor is not so much its isomorphism type (it is very hard to tell if von Neumann algebras are isomorphic or not) but rather the way it sits inside the big factor.</p> <p>Using the trace $\tau$, you can perform the GNS construction for $M$. This means that you define an inner product on $M$ via $\langle x,y \rangle = \tau(y^*x)$ (which is positive definite since $\tau$ is faithful), and then take the completion to get a Hilbert space called $L^2(M,\tau)$. Then $M$ acts on this Hilbert space by left-multiplication.</p> <p>Now you do what is called Jones' basic construction. Inside the Hilbert space $L^2(M,\tau)$ is the subspace $L^2(N,\tau)$, so there is the orthogonal projection of $L^2(M,\tau)$ onto $L^2(N,\tau)$. Call that projection $e_1$. Then define a new algebra $M_1$ to be the von Neumann algebra generated by $M$ and $e_1$ (inside $\mathcal{L}(L^2(M,\tau))$). It is immediate that $e_1$ commutes with $N$.</p> <p>It turns out that if the inclusion $N \subseteq M$ was of finite index (which I won't get into here) then $M_1$ is also a $II_1$ factor (so it comes with a faithful trace also), and the inclusion $M \subseteq M_1$ has the same index as $N \subseteq M$.</p> <p>Then you just keep going! 
Repeat the basic construction for $M \subseteq M_1$ to get a projection $e_2$, then let $M_2$ be the von Neumann algebra generated by $M_1$ and $e_2$, etc. So you end up with a sequence of projections $e_1,e_2,\dots$ which satisfy the following relations:</p> <ol> <li>$e_i e_j = e_j e_i$ if $|i-j| &gt; 1$.</li> <li>$e_i e_j e_i = \lambda e_i$ whenever $|i-j| = 1$, where $\lambda$ is the inverse of the index of $N$ in $M$.</li> </ol> <p>The projections $e_i$ give a representation of something called the Temperley-Lieb algebra. You can see that these relations are reminiscent of the relations in the braid group. That is where the connection comes in. Knots are connected to braids, braids are connected to the Temperley-Lieb algebra (and hence to these projections) and then you can use the trace in the von Neumann algebra to define invariants of knots.</p> <p>That is the gist of it. Read Jones' paper for more details.</p>
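The second relation can be seen very concretely with two rank-one orthogonal projections: if $e_1$ and $e_2$ project onto lines meeting at angle $\theta$, then $e_1e_2e_1=\cos^2\!\theta\,e_1$ and $e_2e_1e_2=\cos^2\!\theta\,e_2$, so $\lambda=\cos^2\theta$. A toy $2\times 2$ check — this is only the geometry behind the relation, not the actual subfactor projections:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def proj(theta):
    # orthogonal projection onto the line spanned by (cos θ, sin θ)
    c, s = math.cos(theta), math.sin(theta)
    return [[c * c, c * s], [c * s, s * s]]

theta = 0.7                            # arbitrary angle; lam plays the role of λ
e1, e2 = proj(0.0), proj(theta)
lam = math.cos(theta) ** 2

def close(A, B):
    return all(abs(A[i][j] - B[i][j]) < 1e-12 for i in range(2) for j in range(2))

scale = lambda t, A: [[t * x for x in row] for row in A]
assert close(matmul(matmul(e1, e2), e1), scale(lam, e1))   # e1 e2 e1 = λ e1
assert close(matmul(matmul(e2, e1), e2), scale(lam, e2))   # e2 e1 e2 = λ e2
```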
4,459,722
<p>I have the vector field <span class="math-container">\begin{align*} X:\mathbb R^d&amp;\to\mathbb R^d\\ x&amp;\mapsto\frac{x}{\|x\|} \end{align*}</span> which is a differentiable vector field outside of the origin, and I am interested in its divergence. After some easy computation we get <span class="math-container">$$ \mathrm{div}(X)=\left\{\begin{array}{ll} 2\delta_0&amp;d=1\\ \frac{d-1}{\|x\|}&amp;d\geq 2. \end{array}\right. $$</span> The problem with the above computation is that the dimension 1 case is computed as a distributional derivative and the case of bigger dimension is computed classically. This is a problem because for the longest time I thought that <span class="math-container">$$ \mathrm{div}(X) $$</span> should have a Dirac delta in the origin in any dimension, and I still believe it has to have it.</p> <p>The reasons I do believe it has a Dirac delta component in the origin is because for <span class="math-container">$X$</span> infinite integral curves start from the origin. To put it differently, if we were to look at <span class="math-container">$-X$</span>, if one were to search for the solutions to the continuity equation for <span class="math-container">$-X$</span>, i.e. <span class="math-container">$$ \partial \mu_t+\mathrm{div}((-X)\mu_t)=0 $$</span> one would see how mass accumulates at the origin.</p> <p>Question 1): did I make a mistake in my computations and <span class="math-container">$\mathrm{div}(X)$</span> has a Dirac delta in the origin? Or more correctly, if my computation isn't wrong classically but is wrong distributionally, how would I carry out that computation? Because it still doesn't yield a singular part when I evaluate with polar coordinates.</p> <p>Question 2): is my intuition wrong and in reality in dimension bigger than one there is never a Dirac delta in that computation even distributionally?</p> <p>Any help or literature is appreciated. Maybe it is all because my differential geometry skills are very rusty.</p>
peek-a-boo
568,204
<p>The divergence you wrote is correct distributionally as well. To see this, let <span class="math-container">$\phi$</span> be any test function, then using the definition of distributional divergence, dominated convergence, and the divergence theorem, we get <span class="math-container">\begin{align} \langle\text{div}(X),\phi\rangle&amp;:=-\sum_{i=1}^n\langle X^i,\partial_i\phi\rangle\\ &amp;=-\sum_{i=1}^n\int_{\Bbb{R}^n}X^i\partial_i\phi\,dV\\ &amp;=-\lim_{\epsilon\to 0^+}\sum_{i=1}^n\int_{B_{\epsilon}(0)^c}X^i\partial_i\phi\,dV\tag{DCT}\\ &amp;=\lim_{\epsilon\to 0^+}\sum_{i=1}^n\int_{B_{\epsilon}(0)^c}[(\partial_iX^i)\phi - \partial_i(X^i\phi)]\,dV\\ &amp;=\lim_{\epsilon\to 0^+}\left[\int_{B_{\epsilon}(0)^c}\frac{n-1}{\|x\|}\phi\,dV +\int_{S_{\epsilon}(0)}\phi\,dA \right]\tag{$*$}. \end{align}</span> Notice that in this last equality, I used the divergence theorem, and beware that the outward normal to the boundary of <span class="math-container">$B_{\epsilon}(0)^c$</span> actually points into the origin, i.e is <span class="math-container">$-X$</span>, so this extra minus sign cancels the minus sign which is already present. Now, in the first term, <span class="math-container">$\frac{1}{\|x\|}$</span> is integrable in a neighborhood of the origin in <span class="math-container">$\Bbb{R}^n$</span> for <span class="math-container">$n\geq 2$</span>, so we can use the dominated convergence theorem to say the first limit is <span class="math-container">$\int_{\Bbb{R}^n}\frac{n-1}{\|x\|}\phi\,dV$</span>. For the second term, since <span class="math-container">$n\geq 2$</span>, the 'surface area' of the sphere <span class="math-container">$S_{\epsilon}(0)$</span> grows like <span class="math-container">$\epsilon^{n-1}$</span>, which vanishes as <span class="math-container">$\epsilon\to 0^+$</span>. So, the fact that <span class="math-container">$\phi$</span> is a test function means the intergal over the sphere vanishes too. 
Hence, the final result is <span class="math-container">\begin{align} \langle\text{div}(X),\phi\rangle&amp;=\int_{\Bbb{R}^n}\frac{n-1}{\|x\|}\phi\,dV+0= \left\langle\frac{n-1}{\|x\|},\phi\right\rangle. \end{align}</span> Thus, even in the distributional sense, we have <span class="math-container">$\text{div}(X)=\frac{n-1}{\|x\|}$</span> for <span class="math-container">$n\geq 2$</span>.</p> <hr /> <p>Note, the proof follows through up to <span class="math-container">$(*)$</span> even for <span class="math-container">$n=1$</span>. To proceed further, note that if <span class="math-container">$n=1$</span>, the first integral vanishes (this is just reflecting that <span class="math-container">$\frac{x}{|x|}$</span> is constantly equal to <span class="math-container">$\pm 1$</span> away from the origin, so the derivative vanishes there). In the 1-dimensional case, the &quot;boundary sphere <span class="math-container">$S_{\epsilon}(0)$</span>&quot; really consists of a 2-point set <span class="math-container">$\{-\epsilon,\epsilon\}$</span>, and the integral over this set just means adding the values of <span class="math-container">$\phi$</span> at these points <span class="math-container">$\phi(-\epsilon)+\phi(\epsilon)$</span>. Now, taking the limit <span class="math-container">$\epsilon\to 0^+$</span>, and using continuity of <span class="math-container">$\phi$</span>, we get <span class="math-container">$2\phi(0)= \langle 2\delta_0,\phi\rangle$</span>, as expected.</p>
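The identity can also be sanity-checked numerically in dimension $n=2$ with the particular (rapidly decaying, though not compactly supported) test function $\varphi(x)=e^{-\|x\|^2}$: here $X\cdot\nabla\varphi=-2re^{-r^2}$, so the two pairings reduce to the radial integrals $-\int X\cdot\nabla\varphi\,dV=4\pi\int_0^\infty r^2e^{-r^2}\,dr$ and $\int\varphi/\|x\|\,dV=2\pi\int_0^\infty e^{-r^2}\,dr$, both equal to $\pi^{3/2}$. A quick midpoint-rule check:

```python
import math

def midpoint(f, a, b, n):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

# -∫ X·∇φ dV  with  X·∇φ = -2 r e^{-r^2}   (polar coordinates, dV = 2πr dr)
lhs = midpoint(lambda r: 2 * r * math.exp(-r * r) * 2 * math.pi * r, 0.0, 12.0, 200000)

# ∫ (n-1)/‖x‖ · φ dV = ∫ φ/r dV  for n = 2
rhs = midpoint(lambda r: math.exp(-r * r) / r * 2 * math.pi * r, 0.0, 12.0, 200000)

assert abs(lhs - rhs) < 1e-6
assert abs(lhs - math.pi ** 1.5) < 1e-6     # both equal π^{3/2} ≈ 5.5683
print(lhs, rhs)
```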
779,696
<p>So my problem is:</p> <p>$$\arcsin (x) = \arccos (5/13)$$ </p> <p><strong>^ Solve for $x$.</strong></p> <p>How would I begin this problem? Do I draw a triangle and find the $\sin(x)$ or is there a more algebraic way of doing this? Thanks in advance for any help.</p>
drhab
75,923
<p><strong>Hint</strong>:</p> <p>It is sufficient to prove that $P\left[B_{1}\cap\cdots\cap B_{n-1}\right]=P\left(B_{1}\right)\times\cdots\times P\left(B_{n-1}\right)$ whenever $B_{i}\in\left\{ A_{i},A_{i}^{c}\right\} $ for $i=1,\dots,n-1$. </p> <p>You can probably do that with induction to $n$.</p>
112,021
<p>Let $n$ be a positive integer. The $n$ by $n$ Fourier matrix may be defined as follows:</p> <p>$$ F^{*} = (1/\sqrt{n}) (w^{(i-1)(j-1)}) $$</p> <p>where </p> <p>$$ w = e^{2 i \pi /n} $$</p> <p>is the complex $n$-th root of unity with smallest positive argument, and $*$ means transpose-conjugate.</p> <p>It is well known that $F$ is diagonalizable with eigenvalues $1,-1,i,-i$</p> <p>where $i^2 =-1.$</p> <p>It is also known that $F$ has real eigenvectors:</p> <p>COMMENT: (I was unable to get this paper)</p> <p>McClellan, James H.; Parks, Thomas W. Eigenvalue and eigenvector decomposition of the discrete Fourier transform. IEEE Trans. Audio Electroacoust. AU-20 (1972), no. 1, 66--74. END of COMMENT</p> <p>QUESTION:</p> <p>Is there some simple way to get just one of these real eigenvectors?</p> <p>For example, how does one get a real vector with an odd number $n=2k+1$ of coordinates such that</p> <p>$$ F(x) =x. $$</p>
Balu Santhanam
350,201
<p>There are several known eigenvector basis for the DFT using matrices that commute with the DFT matrix:</p> <ol> <li>Alberto Grunbaum (Tridiagonal Commuting matrix), Journal of Mathematical Analysis and Applications, 1982.</li> <li>B. Santhanam and T. S. Santhanam (quantum mechanics in finite dimensions)(Symmetric matrix), Signal Processing, Elsevier, 2008.</li> <li>Steiglitz and Dickinson (S matrix approach) (Almost tridiagonal), IEEE Transactions on Signal Processing, Feb. 1982.</li> </ol>
3,295,021
<p>The Hopf fibration is a mapping <span class="math-container">$h:\mathbb{S^3} \mapsto\mathbb{S}^2$</span> defined by <span class="math-container">$r\mapsto ri\bar{r}$</span> where <span class="math-container">$r$</span> is a unit quaternion in the form <span class="math-container">$r=a+bi+cj+dk $</span> where <span class="math-container">$a,b,c,d \in \mathbb{R}$</span> and <span class="math-container">$ijk=-1$</span>. Explicitly, <span class="math-container">$$h(a,b,c,d)=(a^2+c^2-b^2+d^2,2(ad+bc),2(bd-ac))$$</span> Now, it is known that for a point in <span class="math-container">$\mathbb{S^2}$</span> the preimage <span class="math-container">$h^{-1}$</span> is a circle in <span class="math-container">$\mathbb{S^3}$</span>. How can this be? If you consider the set of points <span class="math-container">$$C= \{(\cos(t),0,0,\sin(t)) \mid t\in\mathbb{R}\} \in \mathbb{S^3}$$</span> which can be written in terms of quaternions as <span class="math-container">$C=e^{k t}$</span> where <span class="math-container">$ t \in \mathbb{R}$</span>. This clearly doesn't map to a single point in <span class="math-container">$\mathbb{S^2}$</span> under the Hopf fibration. It maps to the circle <span class="math-container">$i\cos(2t)+j\sin(2t)$</span>. So I am not understanding what's going on here. Why is this circle on in 4 dimensional space not mapping to a point in 3 dimensional space? </p>
Thomas Andrews
7,933
<p><strong>Part ii:</strong></p> <p>In general: <span class="math-container">$$\lfloor z\rfloor \leq z&lt; \lfloor z\rfloor+1.$$</span> </p> <p>For ii, set <span class="math-container">$z=x/2$</span> then we get: <span class="math-container">$$\left\lfloor\frac{x}{2}\right\rfloor\leq \frac{x}{2}&lt;\left\lfloor\frac{x}{2}\right\rfloor +1$$</span></p> <p>Double and you get:</p> <p><span class="math-container">$$2\left\lfloor\frac{x}{2}\right\rfloor\leq x&lt;2\left\lfloor\frac{x}{2}\right\rfloor +2$$</span></p> <p>Taking the floor gives:</p> <p><span class="math-container">$$2\left\lfloor\frac{x}{2}\right\rfloor\leq\lfloor x\rfloor&lt;2\left\lfloor\frac{x}{2}\right\rfloor +2$$</span></p> <p>Subtracting gets:</p> <p><span class="math-container">$$0\leq \left\lfloor x\right\rfloor-2\left\lfloor\frac{x}{2}\right\rfloor&lt;2.$$</span></p> <p>Which is the result you want.</p> <p><strong>Part i</strong></p> <p>You can actually prove the same way that <span class="math-container">$\lfloor x+y\rfloor-\lfloor x\rfloor -\lfloor y\rfloor$</span> is always either <span class="math-container">$0$</span> or <span class="math-container">$1.$</span> More specifically, you can show that <span class="math-container">$\lfloor x+y\rfloor-\lfloor x\rfloor -\lfloor y\rfloor=1$</span> if and only if <span class="math-container">$\{x\}+\{y\}\geq 1.$</span> </p> <p>In particular, when <span class="math-container">$y=x,$</span> <span class="math-container">$\lfloor 2x\rfloor -2\lfloor x\rfloor=1$</span> if and only if <span class="math-container">$\{x\}\geq \frac{1}{2}.$</span></p> <p>Now, if <span class="math-container">$\{x\}+\{y\}\geq 1$</span> then one or both of <span class="math-container">$\{x\}$</span> and <span class="math-container">$\{y\}$</span> are <span class="math-container">$\geq \frac{1}{2}$</span>, so one or both of <span class="math-container">$\lfloor 2x\rfloor -2\lfloor x\rfloor$</span> and <span class="math-container">$\lfloor 2y\rfloor-2\lfloor y\rfloor$</span> are one. 
</p> <p>So that means, for any <span class="math-container">$x,y:$</span></p> <p><span class="math-container">$$\lfloor x+y\rfloor -\lfloor x\rfloor -\lfloor y\rfloor\leq \left(\lfloor 2x\rfloor -2\lfloor x\rfloor\right)+\left(\lfloor 2y\rfloor -2\lfloor y\rfloor\right)$$</span></p> <p>This is because when the left side is <span class="math-container">$0,$</span> we know the right side is at least <span class="math-container">$0,$</span> and when the left side is <span class="math-container">$1$</span>, then the right side is either <span class="math-container">$1$</span> or <span class="math-container">$2.$</span></p> <p>Adding <span class="math-container">$2\lfloor x\rfloor + 2\lfloor y\rfloor$</span> to both sides gives you:</p> <p><span class="math-container">$$\lfloor x+y\rfloor +\lfloor x\rfloor +\lfloor y\rfloor\leq \lfloor 2x\rfloor +\lfloor 2y\rfloor$$</span></p>
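Neither inequality is hard to test exhaustively on a grid of exact rationals as a cross-check (a throwaway sketch; the grid of eighths is an arbitrary choice that exercises the fractional-part thresholds $0$, $\tfrac12$, etc.):

```python
from fractions import Fraction as F
from math import floor

grid = [F(k, 8) for k in range(-40, 41)]    # exact rationals in [-5, 5], step 1/8
for x in grid:
    # part ii:  0 ≤ ⌊x⌋ - 2⌊x/2⌋ < 2
    assert 0 <= floor(x) - 2 * floor(x / 2) < 2
    for y in grid:
        # part i:  ⌊x+y⌋ + ⌊x⌋ + ⌊y⌋ ≤ ⌊2x⌋ + ⌊2y⌋
        assert floor(x + y) + floor(x) + floor(y) <= floor(2 * x) + floor(2 * y)
print("both inequalities hold at all", len(grid) ** 2, "grid pairs")
```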
3,295,021
<p>The Hopf fibration is a mapping <span class="math-container">$h:\mathbb{S^3} \mapsto\mathbb{S}^2$</span> defined by <span class="math-container">$r\mapsto ri\bar{r}$</span> where <span class="math-container">$r$</span> is a unit quaternion in the form <span class="math-container">$r=a+bi+cj+dk $</span> where <span class="math-container">$a,b,c,d \in \mathbb{R}$</span> and <span class="math-container">$ijk=-1$</span>. Explicitly, <span class="math-container">$$h(a,b,c,d)=(a^2+c^2-b^2+d^2,2(ad+bc),2(bd-ac))$$</span> Now, it is known that for a point in <span class="math-container">$\mathbb{S^2}$</span> the preimage <span class="math-container">$h^{-1}$</span> is a circle in <span class="math-container">$\mathbb{S^3}$</span>. How can this be? If you consider the set of points <span class="math-container">$$C= \{(\cos(t),0,0,\sin(t)) \mid t\in\mathbb{R}\} \in \mathbb{S^3}$$</span> which can be written in terms of quaternions as <span class="math-container">$C=e^{k t}$</span> where <span class="math-container">$ t \in \mathbb{R}$</span>. This clearly doesn't map to a single point in <span class="math-container">$\mathbb{S^2}$</span> under the Hopf fibration. It maps to the circle <span class="math-container">$i\cos(2t)+j\sin(2t)$</span>. So I am not understanding what's going on here. Why is this circle on in 4 dimensional space not mapping to a point in 3 dimensional space? </p>
G Cab
317,234
<p>Let's denote with <span class="math-container">$\{ x \}$</span> the fractional part: <span class="math-container">$$ x = \left\lfloor x \right\rfloor + \left\{ x \right\} $$</span></p> <p>Then I would suggest that you first master the addition <span class="math-container">$$ \eqalign{ &amp; \left\lfloor {x + y} \right\rfloor = \left\lfloor {\left\lfloor x \right\rfloor + \left\{ x \right\} + \left\lfloor y \right\rfloor + \left\{ y \right\}} \right\rfloor = \cr &amp; = \left\lfloor x \right\rfloor + \left\lfloor y \right\rfloor + \left\lfloor {\left\{ x \right\} + \left\{ y \right\}} \right\rfloor = \cr &amp; = \left\lfloor x \right\rfloor + \left\lfloor y \right\rfloor + \left[ {1 \le \left\{ x \right\} + \left\{ y \right\}} \right] \cr} $$</span> where <span class="math-container">$[P]$</span> denotes the <a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="nofollow noreferrer"><em>Iverson bracket</em></a> <span class="math-container">$$ \left[ P \right] = \left\{ {\begin{array}{*{20}c} 1 &amp; {P = TRUE} \\ 0 &amp; {P = FALSE} \\ \end{array} } \right. $$</span></p> <p>Thereafter you simply have</p> <p>i)<br> <span class="math-container">$$ \left\lfloor {2x} \right\rfloor = \left\lfloor {x + x} \right\rfloor = 2\left\lfloor x \right\rfloor + \left[ {1/2 \le \left\{ x \right\}} \right] $$</span> so <span class="math-container">$$ \left\{ \matrix{ \left\lfloor {2x} \right\rfloor + \left\lfloor {2y} \right\rfloor = 2\left\lfloor x \right\rfloor + 2\left\lfloor y \right\rfloor + \left[ {1/2 \le \left\{ x \right\}} \right] + \left[ {1/2 \le \left\{ y \right\}} \right] \hfill \cr \left\lfloor {x + y} \right\rfloor + \left\lfloor x \right\rfloor + \left\lfloor y \right\rfloor = 2\left\lfloor x \right\rfloor + 2\left\lfloor y \right\rfloor + \left[ {1 \le \left\{ x \right\} + \left\{ y \right\}} \right] \hfill \cr} \right. 
$$</span> and clearly <span class="math-container">$$ \left[ {1 \le \left\{ x \right\} + \left\{ y \right\}} \right] \le \left[ {1/2 \le \left\{ x \right\}} \right] + \left[ {1/2 \le \left\{ y \right\}} \right] $$</span></p> <p>ii)<br> <span class="math-container">$$ \eqalign{ &amp; \left\lfloor x \right\rfloor = \left\lfloor {x/2 + x/2} \right\rfloor = \cr &amp; = 2\left\lfloor {x/2} \right\rfloor + \left[ {1/2 \le \left\{ {x/2} \right\}} \right] \cr} $$</span></p>
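<p>As a quick sanity check of the identities above, here is a numeric sketch (not part of the original answer; the dyadic sample grid is an arbitrary choice, picked so that every floating-point comparison below is exact):</p>

```python
import math

def frac(x):
    """Fractional part {x} = x - floor(x)."""
    return x - math.floor(x)

# Dyadic rationals are exact in binary floating point.
samples = [k / 8 - 3 for k in range(48)]

ok = True
for x in samples:
    # i) doubling identity: floor(2x) = 2*floor(x) + [1/2 <= {x}]
    ok &= math.floor(2 * x) == 2 * math.floor(x) + (frac(x) >= 0.5)
    # ii) halving identity: floor(x) = 2*floor(x/2) + [1/2 <= {x/2}]
    ok &= math.floor(x) == 2 * math.floor(x / 2) + (frac(x / 2) >= 0.5)
    for y in samples:
        # addition rule: floor(x+y) = floor(x) + floor(y) + [1 <= {x}+{y}]
        ok &= math.floor(x + y) == math.floor(x) + math.floor(y) + (frac(x) + frac(y) >= 1)
        # inequality from part i): floor(2x) + floor(2y) >= floor(x+y) + floor(x) + floor(y)
        ok &= math.floor(2 * x) + math.floor(2 * y) >= \
              math.floor(x + y) + math.floor(x) + math.floor(y)
```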
1,081,447
<p>I'm talking about a Roulette wheel with $38$ equally probable outcomes. Someone mentioned that he guessed the correct number five times in a row, and said that this was surprising because the probability of this happening was $$\left(\frac{1}{38}\right)^5$$</p> <p>This is true if you only play the game $5$ times. However, if you play it more than $5$ times there's a higher (should be much higher?) probability that you'll get $5$ in a row at <em>some point</em>. </p> <p>I was thinking about <em>how</em> surprised this person should be at their streak of $m$ correct guesses given that they play $n$ games, each with probability $p$ of success. It makes intuitive sense that their surprise should be proportional to $1/q$ (or maybe $\log(1/q)$ since $1$ in a billion doesn't surprise you $10$ times more than $1$ in $100$ million), where $q$ is the probability that they get at least one streak of $m$ correct guesses at some point in their $n$ games. </p> <p>So, with the Roulette example I was thinking about, $p=1/38$ and $m=5$. </p> <p>I tried to find an explicit formula for $q$ in terms of $n$, and encountered some difficulty, because of the non-independence of "getting a streak in the first five tries" and "getting a streak in tries $2$ through $6$" (if the first is a failure, it's much more likely that the second will be too). </p> <hr> <p>In summary, two questions:</p> <ol> <li><p>How do I find the probability that you get $5$ correct guesses in a row at some point if you play $n$ games of Roulette?</p></li> <li><p>More generally, what is the probability that you get $m$ successes at some point in a series of $n$ events, each with probability $p$ of success? 
</p></li> </ol> <p>The variables satisfy $\,\,\,m,n \in \mathbb{N}$, $\,\,\,m\leq n$, $\,\,\,p \in \mathbb{R}$, $\,\,\,0 \leq p \leq 1$.</p> <hr> <p>If we write the answer to the second question as a function $q(m,n,p)$, then we can say that $q$ should be increasing with $n$, decreasing with $m$, and increasing with $p$. It should equal $p^n$ when $m=n$ and should equal $1$ when $p=1$ and $0$ when $p=0$. </p> <p>I feel as though this should be a basic probability problem, but I'm having trouble solving it. Maybe some kind of recursive approach would work? Given $q(n,m,p)$, I think I could write $q(n+1,m,p)$ using the probability that the last $m-1$ results are all successes ...</p>
awkward
76,172
<p>Feller, "An Introduction to Probability Theory and Its Applications", Third Edition, gives a useful approximation on p. 325, equation 7.11.</p> <p>Suppose we toss a possibly biased coin $n$ times, where the probability of a head is $p$ and $q = 1-p$. Let $q_n$ be the probability there is no run of $r$ successive heads. Then</p> <p>$$q_n \sim \frac{1-px}{(r+1-rx)q} \cdot \frac{1}{x^{n+1}} $$</p> <p>where $x$ is the smallest positive root of $1 - x + q p^r x^{r+1} = 0$.</p> <p>For your problem, we have $r = 5$, $p = 1/38$, and $q = 37/38$, from which we calculate $x \approx 1 + 1.228854 \times 10^{-8}$. </p> <p>It works out that $q_n = 1/2$ for $n \approx 5.64 \times 10^7$, i.e. it takes about 56 million trials to have a 50% chance of guessing correctly 5 times in a row.</p>
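<p>A sketch reproducing the quoted numbers (the fixed-point iteration and the way of extracting the half-life <span class="math-container">$n$</span> are my own choices, not Feller's):</p>

```python
import math

# Feller's approximation for the probability q_n of NO run of r successes in n
# Bernoulli(p) trials, applied to r = 5, p = 1/38.
p, q, r = 1 / 38, 37 / 38, 5

x = 1.0
for _ in range(100):              # solve 1 - x + q p^r x^(r+1) = 0 for the root near 1
    x = 1 + q * p**r * x**(r + 1)

prefactor = (1 - p * x) / ((r + 1 - r * x) * q)

def q_no_run(n):
    return prefactor / x**(n + 1)

# n at which q_n = 1/2, i.e. a 50% chance of having seen a run of 5 by trial n
n_half = math.log(2 * prefactor) / math.log(x) - 1
```

This gives <span class="math-container">$x \approx 1 + 1.228854\times10^{-8}$</span> and <span class="math-container">$n \approx 5.64\times10^7$</span>, matching the figures above.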
832,710
<p>Does there exist an algebraic structure $(\mathbb{K},+)$ such that equations of the form $x+a=x+b$, $a\neq b$ have solutions for all $a,b\in \mathbb{K}$?</p>
Community
-1
<p>This is a version of the <a href="http://mathworld.wolfram.com/MovingLadderProblem.html" rel="nofollow">moving ladder problem</a>, though more difficult because the road and lane have different widths. Let $\alpha$ be the angle between the pole and the direction of the road. In the process of moving, $\alpha$ takes on values between $0$ and $\pi/2$. </p> <p>Only $64/\sin \alpha$ of the length of the pole can fit within the road. And if the width of the lane is $x$, then $x/\cos \alpha$ of the pole will fit within the lane. Therefore, we must have $$\frac{64}{\sin\alpha} + \frac{x}{\cos\alpha}\ge 125\tag1$$ for all $\alpha$ between $0$ and $\pi/2$. Rearrange as $$x\ge 125\cos\alpha - 64\cot \alpha \tag2$$ The optimal value of $x$ (when the pole <strong>just</strong> fits) is the maximum of the function on the right of (2) on the interval $(0, \pi/2)$. </p>
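<p>A numeric sketch of that maximization (not in the original answer; the grid resolution is arbitrary). Setting the derivative of the right side of (2) to zero gives <span class="math-container">$\sin^3\alpha = 64/125$</span>, i.e. <span class="math-container">$\sin\alpha = 4/5$</span>, where the maximum value is <span class="math-container">$27$</span>:</p>

```python
import math

def f(a):
    # right-hand side of (2); 64/tan(a) = 64*cot(a) on (0, pi/2)
    return 125 * math.cos(a) - 64 / math.tan(a)

# fine grid search over (0, pi/2)
N = 200_000
grid = [1e-6 + k * (math.pi / 2 - 2e-6) / N for k in range(N)]
a_star = max(grid, key=f)
x_min = f(a_star)    # smallest lane width for which the pole just fits
```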
3,152,021
<p>I'm wondering if there are well known sorting techniques for the following problem.</p> <p><strong>Problem</strong>:</p> <p>Suppose you would like to sort a list of integer numbers <span class="math-container">$0, 1, 2, \ldots, d$</span>. If one is only allowed to use swaps of adjacent positions the major part of programmers would choose a bubble sort type algorithm.</p> <p>The <span class="math-container">$d$</span> sized set of adjacent swaps <span class="math-container">$P^d_{\text{adj}}$</span> can be interpreted as the permutations, e.g. for <span class="math-container">$d=9$</span></p> <p><span class="math-container">$P^9_{\text{adj}} = \{\\ \quad [1,0,2,3,4,5,6,7,8,9],\\ \quad [0,2,1,3,4,5,6,7,8,9],\\ \quad [0,1,3,2,4,5,6,7,8,9],\\ \quad [0,1,2,4,3,5,6,7,8,9],\\ \quad [0,1,2,3,5,4,6,7,8,9],\\ \quad [0,1,2,3,4,6,5,7,8,9],\\ \quad [0,1,2,3,4,5,7,6,8,9],\\ \quad [0,1,2,3,4,5,6,8,7,9],\\ \quad [0,1,2,3,4,5,6,7,9,8]\\ \}.$</span></p> <p>Question: Are there sorting techniques for other sets of permutations?</p> <p>I would like to use the following <span class="math-container">$1+d+d$</span> sized set, e.g. 
for <span class="math-container">$d=9$</span></p> <p><span class="math-container">$P^9 = \{\\ \quad [0,1,2,3,4,5,6,7,9,8],\\ \quad\\ \quad [8,0,1,2,3,4,5,6,7,9],\\ \quad [8,1,0,2,3,4,5,6,7,9],\\ \quad [8,1,2,0,3,4,5,6,7,9],\\ \quad [8,1,2,3,0,4,5,6,7,9],\\ \quad [8,1,2,3,4,0,5,6,7,9],\\ \quad [8,1,2,3,4,5,0,6,7,9],\\ \quad [8,1,2,3,4,5,6,0,7,9],\\ \quad [8,1,2,3,4,5,6,7,0,9],\\ \quad [8,1,2,3,4,5,6,7,9,0],\\ \quad\\ \quad [9,0,1,2,3,4,5,6,7,8],\\ \quad [9,1,0,2,3,4,5,6,7,8],\\ \quad [9,1,2,0,3,4,5,6,7,8],\\ \quad [9,1,2,3,0,4,5,6,7,8],\\ \quad [9,1,2,3,4,0,5,6,7,8],\\ \quad [9,1,2,3,4,5,0,6,7,8],\\ \quad [9,1,2,3,4,5,6,0,7,8],\\ \quad [9,1,2,3,4,5,6,7,0,8],\\ \quad [9,1,2,3,4,5,6,7,8,0]\\ \}.$</span></p> <p>For general <span class="math-container">$P^d$</span>, the first permutation swaps <span class="math-container">$d-1$</span> with <span class="math-container">$d$</span>.</p> <p>The 1st subset of <span class="math-container">$d$</span> permutations start with <span class="math-container">$d-1$</span> and the zero is moving to the right.</p> <p>The 2nd subset of <span class="math-container">$d$</span> permutations start with <span class="math-container">$d$</span> and the zero is moving to the right.</p> <p>This set of permutations comes from an integral decomposition problem.</p> <p>Additional question:<br> Can the sorting be done in <span class="math-container">$d$</span> steps with the given set <span class="math-container">$P^d$</span>?</p> <p><strong>Edit 2019-03-18</strong>:</p> <p>@Jaap I will accept your answer if no other more sophisticated algorithm will be given.</p> <p>Now that we have at least an algorithm which uses <span class="math-container">$\mathcal{O}(d^2)$</span> permutations<br> I would like to tighten some constraints and give more information.</p> <ul> <li>Every permutation used will result in an inverse matrix multiplication for my numerical analysis program, which will worsen the numerical error, thus I would like to have a better upper 
bound.</li> <li>Instead of <span class="math-container">$P^d$</span> one is allowed to use <span class="math-container">$P^{-d} := \{ \pi^{-1} : \pi \in P^d\}$</span>. </li> <li>Algorithms which recognize some/all of the permutations get a bonus.</li> <li>Algorithms which use a specific permutation repeatedly do NOT get a bonus.</li> <li>Algorithms which only work for odd permutations but have a somehow nice twist get a bonus.</li> </ul> <p><strong>Edit 2019-03-21</strong>:</p> <p>I wrote a small C++ program, which calculates the height of a shortest path tree (SPT) of the cayley graph <span class="math-container">$\Gamma = \Gamma(S_{d+1},P^d)$</span> with the root node of the tree placed at the identity permutation.</p> <p>This gives for <span class="math-container">$3 \le d \le 9$</span></p> <ul> <li><p><span class="math-container">$height(SPT(\Gamma)) = d$</span>.</p></li> <li><p>If one removes the 1st subset we have<br> <span class="math-container">$height(SPT(\Gamma)) = d$</span>.</p></li> <li><p>If one removes the 2nd subset we have<br> <span class="math-container">$height(SPT(\Gamma)) = d+1$</span>.</p></li> </ul> <p><strong>Edit 2019-04-04</strong>:</p> <p>Numerical evidence suggests that for <span class="math-container">$d \ge 4$</span> the only SPT being invariant under edge completion (EC)<br> is the one coming from the subset <span class="math-container">$Q \subseteq P^d$</span> consisting of the 2nd subset of permutations, i.e.</p> <p><span class="math-container">$ Q = \{\\ \quad [9,0,1,2,3,4,5,6,7,8],\\ \quad [9,1,0,2,3,4,5,6,7,8],\\ \quad [9,1,2,0,3,4,5,6,7,8],\\ \quad [9,1,2,3,0,4,5,6,7,8],\\ \quad [9,1,2,3,4,0,5,6,7,8],\\ \quad [9,1,2,3,4,5,0,6,7,8],\\ \quad [9,1,2,3,4,5,6,0,7,8],\\ \quad [9,1,2,3,4,5,6,7,0,8],\\ \quad [9,1,2,3,4,5,6,7,8,0]\\ \}.$</span></p> <p>Where edge completion (EC) means adding an edge<br> iff two nodes of tree depth k and k+1 can be connected with a valid generator/permutation.</p> <p><strong>Edit 2019-04-06</strong>: 
(Additional information)</p> <p>We have an accepted answer now.</p> <p>There are similar more difficult problems,<br> where one has faster growing sets <span class="math-container">$P^{d,m}$</span> of valid permutations<br> and one expects to do the sorting in <span class="math-container">$d-m$</span> steps. </p> <p>Numerical evidence suggests there might be a relation to <a href="http://oeis.org/A130477" rel="nofollow noreferrer">http://oeis.org/A130477</a> </p> <p>where e.g. for <span class="math-container">$d=4$</span>, the row <span class="math-container">$1, 4, 15, 40, 60$</span> would tell us,<br> there is 1 permutation of <span class="math-container">$S_{d+1}$</span> which can be sorted in <span class="math-container">$0$</span> steps.<br> there are 4 permutations of <span class="math-container">$S_{d+1}$</span> which can be sorted in <span class="math-container">$1$</span> step.<br> there are 15 permutations of <span class="math-container">$S_{d+1}$</span> which can be sorted in <span class="math-container">$2$</span> steps.<br> there are 40 permutations of <span class="math-container">$S_{d+1}$</span> which can be sorted in <span class="math-container">$3$</span> steps.<br> there are 60 permutations of <span class="math-container">$S_{d+1}$</span> which can be sorted in <span class="math-container">$4$</span> steps. </p> <p><strong>Edit 2019-04-06</strong>: (Example of antkam's algorithm)</p> <p>If one uses the "right-first" convention for multiplying two permutations,<br> antkam's solution uses the inverted permutations of <span class="math-container">$Q$</span>.</p> <p>As can be seen with the following example <span class="math-container">$[0,\color{red}{4},3,1,2] \rightarrow [2,\color{red}{4,0},3,1] \rightarrow [1,\color{red}{2,4,0},3] \rightarrow [3,\color{red}{1,2,4,0}] \rightarrow [\color{red}{0,1,2,3,4}]. 
$</span></p> <p>And as decomposition with inverted permutations from <span class="math-container">$Q$</span>:</p> <p><span class="math-container">$ \begin{align} &amp;[0,\color{red}{4},3,1,2]\\ =&amp;[2,\color{red}{4,0},3,1][2,1,3,4,0]\\ =&amp;[1,\color{red}{2,4,0},3][1,2,3,4,0][2,1,3,4,0]\\ =&amp;[3,\color{red}{1,2,4,0}][1,2,3,4,0][1,2,3,4,0][2,1,3,4,0]\\ =&amp;[4,1,2,0,3]^{-1}[4,0,1,2,3]^{-1}[4,0,1,2,3]^{-1}[4,1,0,2,3]^{-1}. \end{align} $</span></p> <p><strong>Edit 2019-04-08</strong>: (Example of improved algorithm)</p> <p>There is an obvious improvement to antkam's algorithm,<br> where one simply keeps the red run (ascending numbers) as large as possible.</p> <p>Antkam's algorithm would give for <span class="math-container">$[0,2,3,4,1]$</span><br> <span class="math-container">$[0,\color{red}{2},3,4,1] \rightarrow [1,\color{red}{2,0},3,4] \rightarrow [4,\color{red}{1,2,0},3] \rightarrow [3,\color{red}{1,2,4,0}] \rightarrow [\color{red}{0,1,2,3,4}]. $</span></p> <p>Whereas the improved algorithm would give for <span class="math-container">$[0,2,3,4,1]$</span><br> <span class="math-container">$[0,\color{red}{2,3,4},1] \rightarrow [1,\color{red}{2,3,4,0}] \rightarrow [\color{red}{0,1,2,3,4}]. $</span></p> <p>It looks like the transition graph of the improved algorithm<br> is isomorphic to the tree graph <span class="math-container">$SPT(\Gamma(S_{d+1},Q)$</span> discussed earlier. </p> <p>I have added an image for <span class="math-container">$d=4$</span> with the tree graph, <a href="https://i.stack.imgur.com/vKzrK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vKzrK.png" alt="SPT(Gamma(S,Q))"></a> where the root of the tree represents the identity permutation.</p>
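<p>For concreteness, here is a sketch of two helper functions that build the generator sets <span class="math-container">$P^d_{\text{adj}}$</span> and <span class="math-container">$P^d$</span> for general <span class="math-container">$d$</span> (the function names and construction are mine; for <span class="math-container">$d=9$</span> they reproduce the listings above):</p>

```python
def P_adj(d):
    """Adjacent-transposition generators of S_{d+1} (d permutations)."""
    perms = []
    for k in range(d):
        p = list(range(d + 1))
        p[k], p[k + 1] = p[k + 1], p[k]
        perms.append(p)
    return perms

def P(d):
    """The 1 + d + d generators described in the question."""
    first = list(range(d + 1))
    first[-2], first[-1] = first[-1], first[-2]        # swap d-1 and d
    perms = [first]
    for head, tail in ((d - 1, d), (d, d - 1)):        # the two subsets
        base = [head] + list(range(1, d - 1)) + [tail]  # everything except 0
        for pos in range(1, d + 1):                     # the zero moves to the right
            perms.append(base[:pos] + [0] + base[pos:])
    return perms
```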
angryavian
43,949
<p>If <span class="math-container">$D$</span> is a diagonal matrix with diagonal entries <span class="math-container">$\lambda_1, \ldots, \lambda_n$</span>, then the eigenvectors are the standard basis vectors <span class="math-container">$e_1 = (1,0,\ldots, 0)$</span>, <span class="math-container">$e_2 = (0,1,0, \ldots, 0)$</span>, and so on. Specifically, check that <span class="math-container">$$D e_j = \lambda_j e_j$$</span> for each <span class="math-container">$j = 1,\ldots, n$</span>. Geometrically, this means that the linear transformation defined by <span class="math-container">$D$</span> (i.e. sending vectors <span class="math-container">$v$</span> to <span class="math-container">$Dv$</span>) just "stretches/shrinks" along each standard basis direction, according to the values of <span class="math-container">$\lambda_j$</span>.</p> <hr> <p>Now what if you have a matrix <span class="math-container">$M = S D S^{-1}$</span> where <span class="math-container">$D$</span> is still diagonal (as above) and where <span class="math-container">$S$</span> is invertible? If <span class="math-container">$s_1, \ldots, s_n$</span> are the columns of <span class="math-container">$S$</span> (note that these form a basis because <span class="math-container">$S$</span> is invertible), then you can show that <span class="math-container">$$Ms_j = \lambda_j s_j,$$</span> that is, the <span class="math-container">$s_j$</span> are the eigenvectors of <span class="math-container">$M$</span>. Geometrically, <span class="math-container">$M$</span> also stretches/shrinks along certain directions according to the values of the <span class="math-container">$\lambda_j$</span>, but along the directions defined by <span class="math-container">$s_j$</span> instead of the standard basis directions.</p>
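<p>A tiny numeric illustration of the second part (the matrices are arbitrary choices, small enough to check by hand):</p>

```python
# M = S D S^{-1}: the columns of S are eigenvectors of M.
# 2x2 example with S = [[1,1],[0,1]] and D = diag(2,3).
S     = [[1, 1], [0, 1]]
S_inv = [[1, -1], [0, 1]]          # inverse of S
D     = [[2, 0], [0, 3]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(2)) for i in range(2)]

M = matmul(matmul(S, D), S_inv)

s1 = [S[0][0], S[1][0]]            # first column of S
s2 = [S[0][1], S[1][1]]            # second column of S
```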
2,939,163
<p>I want to find a certain <span class="math-container">$x$</span> that belongs to <span class="math-container">$\mathbb R$</span> so that </p> <p><span class="math-container">$$\left|\begin{array}{r}1&amp;x&amp;1\\x&amp;1&amp;0\\0&amp;1&amp;x\end{array}\right|=1$$</span></p> <p>This should be easy enough. I apply the Laplace expansion along the third row, so I get</p> <p><span class="math-container">$$0-\left|\begin{array}{r}1 &amp; 1\\x&amp;0\end{array}\right|+x\left|\begin{array}{r}1&amp;x\\x &amp;1\end{array}\right|=1$$</span></p> <p>So we have</p> <p><span class="math-container">$$-(0-x)+x(1-x^2)=1\implies x+x-x^3=1\implies x^3-2x+1=0$$</span></p> <p>I'm kind of stuck because I'm not entirely familiar with solving cubic equations. I don't see an obvious way to factor this. Perhaps I should have found another way to solve this. <span class="math-container">$x=1$</span> is definitely a solution, but there are others that I'm missing. Any hints?</p>
Siong Thye Goh
306,553
<p>Hint:</p> <p>Notice that <span class="math-container">$x=1$</span> is a solution, hence you can reduce the problem to a quadratic equation.</p>
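<p>Working out the hint (not part of the original answer): dividing by <span class="math-container">$x-1$</span> gives <span class="math-container">$x^3-2x+1=(x-1)(x^2+x-1)$</span>, so the other solutions come from the quadratic:</p>

```python
import math

# x^3 - 2x + 1 = (x - 1)(x^2 + x - 1); the remaining roots are (-1 ± sqrt(5))/2.
roots = [1.0, (-1 + math.sqrt(5)) / 2, (-1 - math.sqrt(5)) / 2]
residuals = [abs(x**3 - 2 * x + 1) for x in roots]
```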
4,062,667
<p>Let <span class="math-container">$A$</span> be an <span class="math-container">$n^{th}$</span>-order skew-symmetric matrix. If <span class="math-container">$(E-A)$</span> is an invertible matrix, show that <span class="math-container">$(E+A)(E-A)^{-1}$</span> is an invertible matrix (where <span class="math-container">$E$</span> is the identity matrix).</p> <p>This is part of a question in which I have already proved that <span class="math-container">$||(E-A)u|| \geq ||u||$</span> for any column vector <span class="math-container">$u \in \mathbb{R^n}$</span>.</p> <p>If we can show that <span class="math-container">$(E+A)$</span> is invertible, then the proof will be trivial. Any ideas would be appreciated.</p>
Priya
741,323
<p>I am writing an answer to my own question; please let me know if I am wrong.</p> <p>1. <span class="math-container">\begin{align*} ||(E-A)u|| &amp;= \sqrt{((E-A)u)^T(E-A)u}\\ &amp;= \sqrt{u^T (E-A)^T (E-A)u}\\ &amp;= \sqrt{u^T(E^T- A^T)(E-A)u}\\ &amp;= \sqrt{u^T (E^TE - E^TA -A^TE + A^TA)u} \qquad \qquad \text{since}\,\, A^T = -A\\ &amp;= \sqrt{u^T (E - EA + EA + A^TA)u}\\ &amp;= \sqrt{u^T (E + A^TA)u}\\ &amp;= \sqrt{u^TE u+ u^TA^TAu}\\ &amp;= \sqrt{u^Tu + (Au)^TAu}\\ &amp;= \sqrt{||u||^2 + ||Au||^2}\\ ||(E-A)u|| &amp;\geq ||u|| \end{align*}</span></p> <ol start="2"> <li>If <span class="math-container">$E-A$</span> is invertible, then <span class="math-container">$(E-A)^T$</span> is invertible as well (the transpose of an invertible matrix is invertible).<span class="math-container">\begin{align*} (E-A)^T &amp;= (E^T-A^T)\\ &amp;= (E-A^T) \qquad\qquad \text{since $A$ is a skew-symmetric matrix}\\ &amp;= (E+A) \end{align*}</span></li> </ol> <p>Now we can say <span class="math-container">$(E+A)$</span> is an invertible matrix. Since both <span class="math-container">$(E-A)$</span> and <span class="math-container">$(E+A)$</span> are invertible,</p> <p><span class="math-container">\begin{align*} (E+A)(E-A)^{-1}\big[(E-A)(E+A)^{-1}\big] &amp;= (E+A)E(E+A)^{-1} \\ &amp;= E, \end{align*}</span> so <span class="math-container">$(E+A)(E-A)^{-1}$</span> is invertible, with inverse <span class="math-container">$(E-A)(E+A)^{-1}$</span>.</p>
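<p>A quick check of the final claim with exact rational arithmetic (the skew-symmetric matrix <span class="math-container">$A$</span> below is an arbitrary <span class="math-container">$2\times2$</span> example; for skew-symmetric <span class="math-container">$A$</span> the map <span class="math-container">$(E+A)(E-A)^{-1}$</span> is the Cayley transform):</p>

```python
from fractions import Fraction

A = [[0, 1], [-1, 0]]              # skew-symmetric
E = [[1, 0], [0, 1]]

def add(X, Y, s=1):
    return [[X[i][j] + s * Y[i][j] for j in range(2)] for i in range(2)]

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(X):
    det = Fraction(X[0][0] * X[1][1] - X[0][1] * X[1][0])
    return [[ X[1][1] / det, -X[0][1] / det],
            [-X[1][0] / det,  X[0][0] / det]]

M       = mul(add(E, A), inv2(add(E, A, -1)))   # (E+A)(E-A)^{-1}
M_inv   = mul(add(E, A, -1), inv2(add(E, A)))   # claimed inverse (E-A)(E+A)^{-1}
product = mul(M, M_inv)                         # should be E
```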
736,036
<p><strong>Problem</strong> - The least number which leaves remainders 2, 3, 4, 5 and 6 on dividing by 3, 4, 5, 6 and 7 is?</p> <p><strong>Solution</strong> - Here 3-2 = 1, 4-3 = 1, 5-4 = 1 and so on.</p> <p>So required number is (LCM of 3, 4, 5, 6, 7) - 1 = 419</p> <p><strong>My confusion</strong> - </p> <p>I didn't get the solution of this. Please explain how by subtracting the conclusion was drawn that the number will be the (LCM - 1)?</p>
Bill Dubuque
242
<p>$\, x\!+\!1\equiv 0\pmod{m_i}\iff m_i\mid x\!+\!1\iff {\rm lcm}\{m_i\}\mid x\!+\!1\iff x\equiv -1\pmod{{\rm lcm}\{m_i\}}$</p> <p>Or, equivalently: $\,\ x\equiv -1\pmod{m_i}\iff x\equiv -1\pmod{{\rm lcm}\{m_i\}},\ $ which may be viewed as the special <a href="https://math.stackexchange.com/a/73541/242"><em>constant-case</em> of the Chinese Remainder Theorem (CRT).</a></p>
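<p>The claimed answer <span class="math-container">$419 = \operatorname{lcm}(3,4,5,6,7)-1$</span> is easy to check directly:</p>

```python
from math import gcd
from functools import reduce

moduli = [3, 4, 5, 6, 7]
lcm = reduce(lambda a, b: a * b // gcd(a, b), moduli)   # 420
x = lcm - 1                                             # 419
remainders = [x % m for m in moduli]
```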
736,036
<p><strong>Problem</strong> - The least number which leaves remainders 2, 3, 4, 5 and 6 on dividing by 3, 4, 5, 6 and 7 is?</p> <p><strong>Solution</strong> - Here 3-2 = 1, 4-3 = 1, 5-4 = 1 and so on.</p> <p>So required number is (LCM of 3, 4, 5, 6, 7) - 1 = 419</p> <p><strong>My confusion</strong> - </p> <p>I didn't get the solution of this. Please explain how by subtracting the conclusion was drawn that the number will be the (LCM - 1)?</p>
Man_From_India
134,937
<p>I just found this <a href="https://math.stackexchange.com/a/731302/134937">answer</a> in another similar <a href="https://math.stackexchange.com/questions/731299/find-the-least-value-of-x-which-when-divided-by-3-leaves-remainder-1">question</a>.</p> <p>And this explains it.</p> <p>Thanks.</p>
1,765,530
<p>How many $5$-digit numbers (including leading $0$'s) are there with no digit appearing exactly $2$ times? The solution is supposed to be derived using Inclusion-Exclusion.</p> <p>Here is my attempt at a solution:</p> <p>Let $A_0$= sequences where there are two $0$'s that appear in the sequence.</p> <p>...</p> <p>$A_{9}$=sequences where there are two $9$'s that appear in the sequence.</p> <p>I want the intersection of $A_0^{'}A_1^{'}...A_9^{'}$= $N-S_1+S_2$ because you can only have at most two digits who are used exactly two times each in a $5$ digit sequence.</p> <p>$N=10^5$, $S_1=10\cdot \binom{5}{2}\cdot[9+9\cdot 8 \cdot 7]$, and $S_2=10 \cdot 9 \cdot \binom{5}{4} \cdot8$.</p> <p>The $S_1$ term comes from selecting which of the ten digits to use twice, selecting which two places those two digits take, and then either having the same digit used three times for the other three places, or having different digits used for the other three digits.</p> <p>The $S_2$ term comes from selecting which two digits are used twice, selecting where those four digits go, and then having eight choices for the remaining spot.</p> <p>So my answer becomes $10^5 -10 \cdot \binom{5}{2} \cdot [9+9 \cdot 8 \cdot 7]+10 \cdot 9 \cdot \binom{5}{4} \cdot 8$.</p> <p>Am I doing this correctly?</p>
Christian Blatter
1,303
<p>The partitions of $5$ showing the admissible multiplicities of occurring digits are $(1,1,1,1,1)$, $(3,1,1)$, $(4,1)$, and $(5)$.</p> <p>There are $10\cdot9\cdot 8\cdot 7\cdot 6=30\,240$ numbers with five different digits.</p> <p>There are $10\cdot{9\choose 2}=360$ ways to choose three digits whereby the first chosen digit will be used three times. You can arrange these digits in ${5!\over3!}=20$ ways, so that we get $7200$ numbers with multiplicities $(3,1,1)$.</p> <p>There are $10\cdot9=90$ ways to choose two digits whereby the first chosen digit will be used four times. The second chosen digit can be placed in five different slots, so that we get $450$ numbers with multiplicities $(4,1)$.</p> <p>There are $10$ ways to choose a single digit to be used five times.</p> <p>In all there are $37\,900$ admissible numbers.</p>
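<p>The total <span class="math-container">$37\,900$</span> can be confirmed by brute force (not part of the original answer):</p>

```python
from collections import Counter

# Count 5-digit strings (leading zeros allowed) in which no digit occurs exactly twice.
count = 0
for n in range(100000):
    digit_counts = Counter(f"{n:05d}")
    if 2 not in digit_counts.values():
        count += 1
```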
1,649,320
<p>I am having a hard time proving this simple and natural identity of sets. What I do is go round and round in circles:</p> <p>$$A\cup( A\cap B) = (A\cup A) \cap (A\cup B)$$ $$= A \cap(A\cup B)$$</p> <p>Now what? I apply the distributive property again and reach the first expression. How can I show this using set properties (distributive, idempotent, associative, De Morgan, etc.)?</p>
J.ALI
296,390
<p>$A∪(A∩B)=(A∪A)∩(A∪B)=A∩(A∪B)=A$: the first equality uses the distributive law, the second the idempotent law ($A∪A=A$), and the last holds since $A\subset A∪B$.</p>
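<p>The absorption law can also be checked exhaustively over a small universe (the universe <span class="math-container">$\{1,2,3,4\}$</span> is an arbitrary choice):</p>

```python
from itertools import chain, combinations

# all subsets of a set, via the standard powerset recipe
def subsets(s):
    s = list(s)
    return [set(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

U = {1, 2, 3, 4}
ok = all(A | (A & B) == A for A in subsets(U) for B in subsets(U))
```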
2,994,296
<p>I'm trying to figure out how to prove that <span class="math-container">$$\lim_{n\to \infty} \frac{n^{4n}}{(4n)!} = 0$$</span> The problem is that <span class="math-container">$$\lim_{n\to \infty} \frac{n^{n}}{n!} = \infty$$</span> and I have no idea how to prove the first limit equals <span class="math-container">$0$</span>. </p>
Samvel Safaryan
534,896
<p>Define <span class="math-container">$$a_n=\frac{n^{4n}}{(4n)!}.$$</span> Then <span class="math-container">$$\frac{a_{n+1}}{a_n}=\frac{(n+1)^{4(n+1)}}{n^{4n}}\cdot\frac{(4n)!}{(4n+4)!}=(n+1)^4\Big(1+\frac{1}{n}\Big)^{4n}\cdot\frac{1}{(4n+1)(4n+2)(4n+3)(4n+4)}=:b_n.$$</span> Estimate the two factors separately. Since <span class="math-container">$n+1\ge 2$</span>, <span class="math-container">$$\frac{(4n+1)(4n+2)(4n+3)(4n+4)}{(n+1)^4}=\Big(4-\frac{3}{n+1}\Big)\Big(4-\frac{2}{n+1}\Big)\Big(4-\frac{1}{n+1}\Big)\cdot 4\ge\frac{5}{2}\cdot 3\cdot\frac{7}{2}\cdot 4=105,$$</span> while <span class="math-container">$$\Big(1+\frac{1}{n}\Big)^{4n}=\Big(\Big(1+\frac{1}{n}\Big)^{n}\Big)^{4}&lt;e^{4}&lt;3^{4}=81.$$</span> Hence <span class="math-container">$$b_n&lt;\frac{81}{105}&lt;1,$$</span> and since <span class="math-container">$a_1=\frac{1}{24}&lt;1$</span>, <span class="math-container">$$a_n&lt;\Big(\frac{81}{105}\Big)^{n-1}a_1&lt;\Big(\frac{81}{105}\Big)^{n-1}\implies\lim_{n\to+\infty}a_n=0.$$</span></p>
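<p>The ratio bound can be verified numerically with exact rational arithmetic (a sketch, not part of the original answer; the range of <span class="math-container">$n$</span> sampled is arbitrary):</p>

```python
from fractions import Fraction
from math import factorial

def a(n):
    return Fraction(n ** (4 * n), factorial(4 * n))

# consecutive ratios b_n = a_{n+1}/a_n; the argument above bounds them by 81/105
ratios = [a(n + 1) / a(n) for n in range(1, 20)]
```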
79,084
<p>Let $X$ be a topological space (say a manifold). A result of R. Thom states that the pushforwards of fundamental classes of closed, smooth manifolds generate the rational homology of $X$. This work of Thom predates the development of bordism. Is there now a more elementary proof of this result that does not rely on spectral sequence techniques?</p>
Tom Goodwillie
6,666
<p>I suspect that the "obvious" proof used an Atiyah-Hirzebruch spectral sequence, so it's not obvious unless you are happy with spectral sequences.</p> <p>Here is an argument with no spectral sequence in it.</p> <p>There is a homology theory $\Pi_\ast$ called stable homotopy theory. It has a natural map to ordinary homology $H_\ast$, given by the Hurewicz map. After tensoring with $\mathbb Q$ this map $\Pi_\ast(X)\to H_\ast(X)$ becomes an isomorphism. The proof of this for finite complexes $X$ uses the five lemma plus the fact that it is an isomorphism when $X$ is a point. In the case when $X$ is a point this is the result (Serre's thesis) that $\pi_k(S^n)$ is finite if $k&gt;n$.</p> <p>On the other hand, by the Thom-Pontryagin construction, stable homotopy is the same as framed bordism.</p> <p>(Oh wait, Serre's result used a spectral sequence ...)</p>
2,458,863
<p>I tried to find the critical points of the function</p> <p>$$f(x,y) = x^2y-2xy + \arctan y $$</p> <p>I found that it is $P(1,0)$; the problem is that the Hessian is null, and I don't know how to proceed to determine the nature of that point. Can you help me?</p> <p><strong>Update:</strong> Thank you all. I tried to study the sign of the function, but the problem is that I don't know how to proceed, since I have $Δf(x,y)=x^2y-2xy + \arctan y $ and I don't know how to study the sign locally around $(1,0)$.</p>
Arthur
15,500
<p>More generally: the product of all elements in <em>any</em> finite abelian group has order $2$ or $1$, and the order is $2$ iff there is exactly one element of order $2$ in the group.</p> <p>The first claim is easy to show: For each element that does not have order $2$, its inverse is also in the product, so they pair up and disappear from the final result. Thus the product of all elements is the same as the product of all elements of order $2$, which necessarily has order $2$ or $1$.</p> <p>From here on, I will just disregard all the other elements and concentrate on the subgroup consisting of elements of order $2$ (plus the identity), since all the other elements cancel anyways and don't contribute to the final product. If this subgroup is trivial, then we are done, since there is no element of order $2$, and the product must therefore have order $1$.</p> <p>Otherwise, take any non-identity element $g$ in the group, and divide all the elements of the group up in pairs so that $h$ is paired with $gh$ for all elements $h$. The product of the two elements in a pair is $g$, so it remains to count the number of pairs.</p> <p>In fact, the number of elements in the group (and therefore the number of pairs) is a power of $2$: If you want to count the elements, start with the identity, that's one element. Now take a $g_1\neq e$. We now have two elements. Take a new $g_2\neq e, g_1$. This $g_2$ gives two new elements, because neither $g_2$ nor $g_1g_2$ have been counted already. Next, we pick a new element $g_3$, and get four new elements: $g_3, g_1g_3, g_2g_3, g_1g_2g_3$. We keep going, for each new element doubling the number of elements we can reach. Thus when we are done we must have ended up on a power of $2$.</p> <p>So we see that if there is exactly one element of order $2$, then we just get one pair, so the final product is that $g$. Otherwise, the number of pairs must be even, and so the final product becomes $e$.</p>
2,458,863
<p>I tried to find the critical points of the function</p> <p>$$f(x,y) = x^2y-2xy + \arctan y $$</p> <p>I found that it is $P(1,0)$; the problem is that the Hessian is null, and I don't know how to proceed to determine the nature of that point. Can you help me?</p> <p><strong>Update:</strong> Thank you all. I tried to study the sign of the function, but the problem is that I don't know how to proceed, since I have $Δf(x,y)=x^2y-2xy + \arctan y $ and I don't know how to study the sign locally around $(1,0)$.</p>
lhf
589
<p>The product of all units mod $m$ can be written as $P=\prod_{x^2\ne1} x \prod_{y^2=1} y$.</p> <p>The first product is $1$ because each $x$ is paired with $x^{-1}$ and they are different.</p> <p>Therefore, $P=y_1 \cdots y_n$ with $y_i^2=1$.</p> <p>If $n=1$, then $P=1$ because $y=1$ is certainly there.</p> <p>If $n=2$, then $P=-1$ because $y=-1$ is certainly there.</p> <p>If $n&gt;2$, Then $y \mapsto -y$ is a permutation of the elements such $y^2=1$. This permutation has no fixed points, that is, $-y\ne y$. This implies than $n$ is even. Thus, every $y$ is paired with $-y\ne y$ and $y(-y)=-1$. Therefore, $P=(-1)^{n/2}=\pm 1$.</p> <p>(Actually, in the last case, $n/2$ is even and $P=1$. This follows from Lagrange's theorem applied to a subgroup $\{1,-1,y,-y\}$.)</p>
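<p>Both claims are easy to verify for small moduli (a sketch; the range of moduli checked is arbitrary):</p>

```python
from math import gcd

def unit_product_mod(m):
    """Product of all units of Z/mZ, reduced mod m."""
    P = 1
    for y in range(1, m):
        if gcd(y, m) == 1:
            P = P * y % m
    return P

def order_two_count(m):
    """Number of units y != 1 with y^2 = 1 mod m (elements of order exactly 2)."""
    return sum(1 for y in range(2, m) if gcd(y, m) == 1 and y * y % m == 1)

results = {m: unit_product_mod(m) for m in range(3, 60)}
```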
1,012,344
<p>My knowledge of $C^*$-algebras is very limited.</p> <p>We call an element positive if $a=b^*b$ for some $b$, and we define a relation by saying $$ b \geqslant a \iff b-a \text{ is positive}. $$ I can't figure out why this gives us a poset.</p>
Martin Argerami
22,857
<p>Below is one avenue. </p> <p>Since $a=b^*b$, we have $a^*=a$. Then $C^*(a)$, the C$^*$-subalgebra generated by $a$, is commutative. In a commutative Banach algebra, we have $$ \sigma(a)=\{\tau(a):\ \tau \text{ is a character }\}. $$ So, when $a=b^*b$, $\tau(a)=\tau(b^*)\tau(b)=\overline{\tau(b)}\tau(b)=|\tau(b)|^2\geq0$ for any character $\tau$. Thus, $\sigma(a)\subset[0,\infty)$.</p> <p>If $a\geq0$ and $a\leq0$, we deduce that $\sigma(a)=\{0\}$. Being selfadjoint, the spectral theorem implies that $a=0$. </p>
38,439
<p>I've mentioned before that I'm using this forum to expand my knowledge on things I know very little about. I've learnt integrals like everyone else: there is the Riemann integral, then the Lebesgue integral, and then we switch framework to manifolds, and we have that trick of using partitions of unity to define integrals.</p> <p>This all seems very ad hoc, however. Not natural. I'm aware this is a pretty trivial question for a lot of you (which is why I'm asking it!), but what is the "correct" natural definition we should think of when we think of integrals?</p> <p>I know there's some relation to a perfect pairing of homology and cohomology, somehow relating to Poincare duality (is that right?). And there's also something about Chern classes? My geometry, as you can see, is pretty confused (being many years in my past).</p> <p>If you can come up with a natural framework that doesn't have to do with the keywords I mentioned, that would also be very welcome.</p>
Paul Siegel
4,362
<p>As the others have mentioned, integration over a connected oriented smooth manifold $M$ can be characterized (modulo some technicalities) according to the fact that it fits into an exact sequence:</p> <p>$\Omega_c^{n-1}(M) \to \Omega_c^n(M) \to \mathbb{R} \to 0$ </p> <p>where $\Omega_c^*$ refers to the De Rham complex, the first arrow is the De Rham differential, and the second arrow is integration. In particular, $\int_M$ defines an isomorphism $H^n(M^n) \to \mathbb{R}$. This line of thinking leads to Poincare duality: if one takes a submanifold $V$ representing a dimension $p$ homology class of $M$ (this is always possible <em>rationally</em>) then the integration isomorphism determines a dimension $p$ cohomology class $[\omega]$, and this can be paired with a cohomology class $[\eta]$ in dimension $n-p$ via the exterior product (cup product):</p> <p>$([\omega],[\eta]) \mapsto \int_M \omega \wedge \eta$</p> <p>Poincare duality is precisely the assertion that this pairing is nondegenerate. I think this point of view on integration - as a pairing between homology and cohomology - leads to many of the genuinely non-analytic formulations of integration.</p> <hr> <p>Still, I would not consider this to have the final say as the correct natural definition of integration. For one thing, manifolds are far from the only spaces that one could want to integrate over, and I highly doubt integration has a cohomological formulation in any serious generality beyond manifolds. Second, the theory of measures truly is fundamental to integration and should be involved in an essential way; dynamical systems people often care as much about the measure as they do the integral.</p> <p>Here is what I think integration is all about. 
Let $X$ be a locally compact Hausdorff space (this is enough generality for the vast majority of applications of integration that I know of) and consider the space $C_0(X)$ of continuous functions on $X$ vanishing at infinity (in the sense of the one-point compactification). Whatever intuition you have about integration, it must tell you that the integral should be a way of assigning a real number to a continuous function (maybe other functions too) which depends linearly on the function. In other words, it has to be some sort of linear functional on $C_0(X)$. Of course, not every linear functional deserves to be called an integral - if $x \in X$ then $f \mapsto f(x)$ is a linear functional on $C_0(X)$, but it doesn't make much sense to call it an "integral". </p> <p>So we allow the topology of $X$ to play a greater role. Recall that $C_0(X)$ is a Banach space if it is equipped with the uniform norm, and as such it comes equipped with a preferred collection of linear functionals: the set $C_0(X)^*$ of continuous linear functionals. If we pretend for a moment that we have already worked really hard and built the theory of integration with respect to a Borel measure $\mu$, then assuming the measure is tied closely enough to the topology of $X$ (precisely, it must be a "Radon measure") we would have a continuous linear functional $I_\mu$ on $C_0(X)$ given by $I_\mu(f) = \int f d\mu$.</p> <p>Riesz Representation Theorem: Let $M(X)$ denote the Banach space of Radon measures on $X$. The map $M(X) \to C_0(X)^*$ given by $\mu \mapsto I_\mu$ is an isometric isomorphism.</p> <p>Consequently, if we hadn't already invented a notion of integration it would be perfectly possible to simply define $\int_X f d\mu$ to be $I_\mu(f)$. I personally think this is the right way to think about integrals.</p>
2,713,201
<p>How would you work something like this out? </p> <p>Are there similar problems to $$\frac{d\left( (\cos(x))^{\cos(x)}\right)}{dx}$$ which could be worked out the same way?</p>
kayush
532,597
<p><strong>Hint</strong>: Given proper domain for the function so that $\cos(x) &gt;0$ we can write:<br> $$(\cos x)^{\cos x} = e^{(\cos x) \ln(\cos x)}$$</p>
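<p>As a quick numerical sanity check: differentiating $f(x)=e^{\cos x \ln(\cos x)}$ with the chain and product rules gives $f'(x)=(\cos x)^{\cos x}\,(-\sin x)\,(\ln(\cos x)+1)$, and we can compare that formula against a finite difference on an interval where $\cos x &gt; 0$ (a sketch, not part of the hint itself):</p>

```python
import math

def f(x):
    # (cos x)^(cos x), valid where cos x > 0
    return math.cos(x) ** math.cos(x)

def claimed_derivative(x):
    # from f = exp(cos x * ln(cos x)):
    # f'(x) = (cos x)^(cos x) * (-sin x) * (ln(cos x) + 1)
    return f(x) * (-math.sin(x)) * (math.log(math.cos(x)) + 1)

# compare against a central finite difference at a few points in (0, pi/2)
h = 1e-6
for x in (0.3, 0.7, 1.1):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - claimed_derivative(x)) < 1e-6
print("derivative formula checked")
```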
2,658,691
<p>I have one last question regarding permutations. I understand the problem and the rule of product, but this problem seems to be in a different format compared with the two questions I asked before.</p> <p>A committee of eight is to form a round table discussion group. In how many ways may they be seated if 2 particular members are to be seated next to each other?</p> <p>If I understand it correctly, two persons are to choose from 6 spaces to sit; this would be 8*7=56. Am I correct?</p>
gerw
58,577
<p>You can easily prove the reverse of the sentence after your question: If an $S$ as in the question exists, then $T$ is weak*-weak* continuous.</p>
3,443,094
<blockquote> <p>If <span class="math-container">$$\lim_{x\to 0}\frac{ae^x-b}{x}=2$$</span> then find <span class="math-container">$a,b$</span></p> </blockquote> <p><span class="math-container">$$ \lim_{x\to 0}\frac{ae^x-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)+a-b}{x}=\lim_{x\to 0}\frac{a(e^x-1)}{x}+\lim_{x\to 0}\frac{a-b}{x}=\boxed{a+\lim_{x\to 0}\frac{a-b}{x}=2}\\ \lim_{x\to 0}\frac{a-b}{x} \text{ must be finite}\implies \boxed{a=b}\\ $$</span> Now I think I am stuck, how do I proceed ?</p>
Andrei
331,661
<p>You are almost done. <span class="math-container">$a+0=2$</span>.</p>
1,390,423
<p>An acquaintance of mine proposed a scenario. Imagine parents who ground their child. Initially, the grounding is for 5 days, but for every day the child misbehaves while they're grounded, the parents will tack on an extra 2 days. The child is very predictable and has a 30% chance of misbehaving on any given day, no matter how long they've already been grounded.</p> <p>What is the expected value for this poor kid's imprisonment? When I look at the problem, I come up with $\infty$ because there's a non-zero chance he'll keep adding days on forever. My acquaintance said this wasn't the case but can't prove it.</p> <p>If the EV isn't $\infty$, can someone explain where I'm going wrong? My probability knowledge is very old and rusty, unfortunately.</p>
mjqxxxx
5,546
<p>Let $f(n)$ be the expected value of the remaining sentence given that the current sentence is $n$ days. At the end of the day, the new sentence is either $n-1$ days if the kid behaved (with probability $0.7$), or $n+1$ days if the kid didn't behave (with probability $0.3$). So $$ f(n)=1+\frac{7}{10}f(n-1)+\frac{3}{10}f(n+1), $$ with the boundary condition that $f(0)=0$. Assuming a linear solution, $f(n)=An$, gives $$An=1+\frac7{10}(An-A)+\frac3{10}(An+A)=An+1-\frac25A$$ or $A=5/2$. So the expected sentence is just $5/2$ times the current sentence, or $12.5$ days if the initial grounding was five days.</p>
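<p>A quick Monte Carlo simulation of the process in the question (a sketch, just to corroborate the $12.5$-day answer):</p>

```python
import random

def simulate(initial=5, extra=2, p=0.3, rng=random):
    # serve one day at a time; each served day has probability p of
    # misbehaving, which adds `extra` days to the remaining sentence
    remaining = initial
    served = 0
    while remaining > 0:
        served += 1
        remaining -= 1
        if rng.random() < p:
            remaining += extra
    return served

random.seed(0)
trials = 200_000
avg = sum(simulate() for _ in range(trials)) / trials
print(round(avg, 2))  # close to 12.5
```

The drift per served day is $-1 + 0.3\cdot 2 = -0.4$, so the walk terminates almost surely and the empirical mean settles near $5/0.4 = 12.5$.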
1,246,356
<p>Let $A,B \in {M_n}$. Suppose $A$ is a normal matrix with distinct eigenvalues, and $AB=0$. Why is $B$ a normal matrix?</p>
JJacquelin
108,514
<p>One has to take care of the boundaries. One little picture says more than a long speech!</p> <p><img src="https://i.stack.imgur.com/0jWkj.jpg" alt="enter image description here"> $$ \sum_{k=5}^m\frac{\log(\log(k))}{k\log(k)} $$ $$\int_5^{m+1}\frac{\ln(\ln (x))}{x\ln(x)}dx&lt;\sum_{k=5}^m\frac{\log(\log(k))}{k\log(k)}&lt;\int_5^{m+1}\frac{\ln(\ln (x-1))}{(x-1)\ln(x-1)}dx$$ $$ \frac{1}{2}\left(\ln (\ln(m+1))\right)^2-\frac{1}{2}\left(\ln (\ln(5))\right)^2&lt;\sum_{k=5}^m\frac{\log(\log(k))}{k\log(k)}&lt;\frac{1}{2}\left(\ln (\ln(m))\right)^2-\frac{1}{2}\left(\ln (\ln(4))\right)^2 $$ The mean value is a very good approximation:</p> <p>$$\sum_{k=5}^m\frac{\log(\log(k))}{k\log(k)}\simeq \frac{1}{4}\left(\left(\ln (\ln(m+1))\right)^2-\left(\ln (\ln(5))\right)^2 + \left(\ln (\ln(m))\right)^2-\left(\ln (\ln(4))\right)^2\right)$$</p> <p>An even better approximation is obtained by considering the integral of the "mean" function $y=\int_5^{m+1}\frac{\ln(\ln (x-0.5))}{(x-0.5)\ln(x-0.5)}dx$ $$\sum_{k=5}^m\frac{\log(\log(k))}{k\log(k)}\simeq \int_5^{m+1}\frac{\ln(\ln (x-0.5))}{(x-0.5)\ln(x-0.5)}dx=\int_{4.5}^{m+0.5}\frac{\ln(\ln (x))}{x\ln(x)}dx=\frac{1}{2}\left(\ln (\ln(m+0.5))\right)^2-\frac{1}{2}\left(\ln (\ln(4.5))\right)^2$$</p> <p>The comparison is shown below:</p> <p><img src="https://i.stack.imgur.com/J2UnS.jpg" alt="enter image description here"></p>
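<p>A quick numerical check of the final closed-form estimate $\frac{1}{2}\left(\ln\ln(m+0.5)\right)^2-\frac{1}{2}\left(\ln\ln(4.5)\right)^2$ against the partial sum itself (a sketch, not part of the derivation):</p>

```python
import math

def f(x):
    return math.log(math.log(x)) / (x * math.log(x))

def approx(m):
    # the "mean function" estimate:
    # (1/2) ln(ln(m + 0.5))^2 - (1/2) ln(ln(4.5))^2
    return 0.5 * (math.log(math.log(m + 0.5)) ** 2
                  - math.log(math.log(4.5)) ** 2)

m = 10_000
partial = sum(f(k) for k in range(5, m + 1))
print(partial, approx(m))  # the two agree to a few decimal places
```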
601,951
<p><em><strong>2</strong> + <strong>5</strong> + <strong>8</strong> + . . . + <strong>(6n-1)</strong> = <strong>n(6n+1)</strong></em></p> <p>This is what I have so far. </p> <p>The <strong>sum</strong> of <strong>(3j-1)</strong> from <strong>j=1</strong> to <em>something I'm not sure of</em>.</p>
Marko Riedel
44,883
<p>This can also be done using a basic complex variables technique, which gives a close variation on what was posted by @GrigoryM.</p> <p>Suppose we seek to evaluate <span class="math-container">$$\sum_{k=0}^n {2n+1\choose 2k+1} {m+k\choose 2n}.$$</span></p> <p>Introduce the integral representation <span class="math-container">$${m+k\choose 2n} = \frac{1}{2\pi i} \int_{|z|=\varepsilon} \frac{1}{z^{2n+1}} (1+z)^{m+k} \; dz.$$</span></p> <p>This gives the following integral <span class="math-container">$$\frac{1}{2\pi i} \int_{|z|=\varepsilon} \sum_{k=0}^n {2n+1\choose 2k+1} \frac{1}{z^{2n+1}} (1+z)^{m+k} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\varepsilon} \frac{(1+z)^m}{z^{2n+1}} \sum_{k=0}^n {2n+1\choose 2k+1} (1+z)^k \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\varepsilon} \frac{(1+z)^{m-1/2}}{z^{2n+1}} \sum_{k=0}^n {2n+1\choose 2k+1} \sqrt{1+z}^{2k+1} \; dz.$$</span></p> <p>The sum is</p> <p><span class="math-container">$$\sum_{k=0}^{2n+1} {2n+1\choose k} \sqrt{1+z}^k \frac{1}{2} (1-(-1)^k) \\ = \frac{1}{2} ((1+\sqrt{1+z})^{2n+1} - (1-\sqrt{1+z})^{2n+1})$$</span></p> <p>and we get for the integral</p> <p><span class="math-container">$$\frac{1}{2\pi i} \int_{|z|=\varepsilon} \frac{(1+z)^{m-1/2}}{2z^{2n+1}} \left((1+\sqrt{1+z})^{2n+1} - (1-\sqrt{1+z})^{2n+1})\right) \; dz.$$</span></p> <p>By way of ensuring analyticity we observe that we must have <span class="math-container">$\varepsilon \lt 1$</span> owing to the branch cut <span class="math-container">$(-\infty, -1]$</span> of the square root. 
Now put <span class="math-container">$1+z = w^2$</span> so that <span class="math-container">$dz = 2w\; dw$</span> and the integral becomes</p> <p><span class="math-container">$$\frac{1}{2\pi i} \int_{|w-1|=\gamma} \frac{w^{2m-1}}{(w^2-1)^{2n+1}} \left((1+w)^{2n+1} - (1-w)^{2n+1})\right) \; w \; dw.$$</span></p> <p>This is <span class="math-container">$$\frac{1}{2\pi i} \int_{|w-1|=\gamma} w^{2m} \left(\frac{1}{(w-1)^{2n+1}} + \frac{1}{(w+1)^{2n+1}}\right) \; dw.$$</span></p> <p>Treat the two terms in the parentheses in turn. The first contributes</p> <p><span class="math-container">$$[(w-1)^{2n}] w^{2m} = [(w-1)^{2n}] \sum_{q=0}^{2m} {2m\choose q} (w-1)^q = {2m\choose 2n}.$$</span></p> <p>The second term is analytic on and inside the circle that <span class="math-container">$w$</span> traces round the value <span class="math-container">$1$</span> with no poles (pole is at <span class="math-container">$w=-1$</span>) and hence does not contribute anything. This concludes the argument.</p> <P> <p><strong>Remark.</strong> We must document the choice of <span class="math-container">$\gamma$</span> so that <span class="math-container">$|w-1|=\gamma$</span> is entirely contained in the image of <span class="math-container">$|z|=\varepsilon$</span>, which since <span class="math-container">$w=1+\frac{1}{2} z + \cdots$</span> makes one turn around <span class="math-container">$w=1$</span> and may then be continuously deformed to the circle <span class="math-container">$|w-1|=\gamma.$</span> We need a bound on where this image comes closest to one. 
We have <span class="math-container">$w = 1 + \frac{1}{2} z + \sum_{q\ge 2} (-1)^{q+1} \frac{1}{4^q} \frac{1}{2q-1} {2q\choose q} z^q.$</span> The modulus of the series term is bounded by <span class="math-container">$\sum_{q\ge 2} \frac{1}{4^q} \frac{1}{2q-1} {2q\choose q} |z|^q = 1 - \frac{1}{2} |z| - \sqrt{1-|z|}.$</span> Therefore choosing <span class="math-container">$\gamma = \frac{1}{2}\varepsilon - 1 + \frac{1}{2} \varepsilon + \sqrt{1-\varepsilon} = \sqrt{1-\varepsilon} + \varepsilon - 1$</span> will fit the bill. For example with <span class="math-container">$\varepsilon = 1/2$</span> we get <span class="math-container">$\gamma = (\sqrt{2}-1)/2.$</span> It is a matter of arithmetic to verify that with the formula we have <span class="math-container">$\gamma \lt 1$</span>.</p>
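<p>The identity $\sum_{k=0}^n {2n+1\choose 2k+1}{m+k\choose 2n} = {2m\choose 2n}$ is also easy to confirm numerically for small parameters (a quick check, independent of the contour argument; here $m \ge 2n$ so that ordinary binomial coefficients apply):</p>

```python
from math import comb

def lhs(n, m):
    return sum(comb(2*n + 1, 2*k + 1) * comb(m + k, 2*n)
               for k in range(n + 1))

# check sum_{k=0}^n C(2n+1, 2k+1) C(m+k, 2n) = C(2m, 2n)
for n in range(6):
    for m in range(2*n, 2*n + 6):
        assert lhs(n, m) == comb(2*m, 2*n)
print("identity verified for small n, m")
```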
324,594
<p>You have three buckets, two big buckets holding <code>8 litres</code> of water each and one small empty bucket that can hold <code>3 litres</code> of water. How will you split the <code>16 litres</code> of water among <code>four people</code> evenly? Each person has a container, but once water is distributed to someone it cannot be taken back. </p> <p>In this puzzle, we need to allocate 4 litres to each person. So I considered the initial state as below <br></p> <blockquote> <p>8 8 0 [0, 0, 0, 0]</p> </blockquote> <p>The values in the bracket are the water given to those 4 people. I tried the below steps </p> <blockquote> <p>5 8 3 [0, 0, 0, 0] <br> 5 8 0 [3, 0, 0, 0] <br> 5 5 3 [3, 0, 0, 0] <br> 5 5 0 [3, 3, 0, 0] <br> 2 8 0 [3, 3, 0, 0] <br> 0 8 2 [3, 3, 0, 0] <br> 0 7 3 [3, 3, 0, 0] <br> 0 4 3 [3, 3, 3, 0] <br> 0 1 3 [3, 3, 3, 3] <br> 0 0 3 [4, 3, 3, 3] <br></p> </blockquote> <p>But I couldn't take it further by giving that 1 litre to the rest of the 3 persons, since there is a constraint that water given to a person can't be retrieved back. <br> Any help/hint towards the final solution is greatly appreciated.</p>
Ravi Gupta
528,046
<p>The above can be completed in 12 moves (13 states):</p> <pre><code>8,8,0 [0,0,0,0]
8,5,3 [0,0,0,0]
8,5,0 [3,0,0,0]
5,5,3 [3,0,0,0]
5,5,0 [3,3,0,0]
5,2,3 [3,3,0,0]
5,2,0 [3,3,3,0]
5,0,2 [3,3,3,0]
4,0,3 [3,3,3,0]
0,0,3 [3,3,3,4]
0,0,2 [4,3,3,4]
0,0,1 [4,4,3,4]
0,0,0 [4,4,4,4]</code></pre>
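<p>The state sequence can be verified mechanically; this sketch checks that every state conserves the 16 litres, respects the 3-litre capacity of the small bucket, never takes water back from a person, and ends with 4 litres each:</p>

```python
# states: (bucket A, bucket B, small bucket, shares of the four people)
states = [
    (8, 8, 0, (0, 0, 0, 0)),
    (8, 5, 3, (0, 0, 0, 0)),
    (8, 5, 0, (3, 0, 0, 0)),
    (5, 5, 3, (3, 0, 0, 0)),
    (5, 5, 0, (3, 3, 0, 0)),
    (5, 2, 3, (3, 3, 0, 0)),
    (5, 2, 0, (3, 3, 3, 0)),
    (5, 0, 2, (3, 3, 3, 0)),
    (4, 0, 3, (3, 3, 3, 0)),
    (0, 0, 3, (3, 3, 3, 4)),
    (0, 0, 2, (4, 3, 3, 4)),
    (0, 0, 1, (4, 4, 3, 4)),
    (0, 0, 0, (4, 4, 4, 4)),
]

for a, b, s, people in states:
    assert a + b + s + sum(people) == 16  # water is conserved
    assert 0 <= s <= 3                    # small bucket holds at most 3 L

# once given, water is never taken back from a person
for (_, _, _, prev), (_, _, _, cur) in zip(states, states[1:]):
    assert all(c >= p for c, p in zip(cur, prev))

print("final shares:", states[-1][3])  # (4, 4, 4, 4)
```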
1,034,698
<p>I have an assignment with the following question:</p> <blockquote> <p>Does an orthogonal matrix exist such that its first row consists of the following values: ($1$/$\sqrt{3}$, $1$/$\sqrt{3}$, $1$/$\sqrt{3}$)? If there is, find one.</p> </blockquote> <p>I know I can solve this question with the Gram-Schmidt algorithm, but it includes a lot of complicated calculations.</p> <p>Is there any other way to prove this statement without the Gram-Schmidt algorithm?</p> <p>Thanks,</p> <p>Alan </p>
JohnD
52,893
<p>To show $W$ is a subspace, use the Subspace Theorem: $0\in W$, $W$ is closed under addition and scalar multiplication.</p> <p>To compute the dimension of $W$, just note that $2x+3y-z=0$ implies $x=-3y+z$. So you have two free variables: $y$ and $z$. Thus, $W$ is two dimensional. </p> <p>Since $\mathbb{R}^3$ is three dimensional and $W$ is two dimensional, then $\mathbb{R}^3/W$ is one dimensional.</p>
619,890
<p>I have a question. There is a group of 5 men and a group of 7 women. In how many ways can each of the 5 men get married to one of the 7 women?</p>
Emanuele Paolini
59,304
<p>I would define the projection onto an affine subspace of a Euclidean space as the map which sends any point to the closest one in the affine subspace.</p> <p>In your case the subspace is defined as the smallest affine subspace containing a given set of points $v_1,\dots, v_k$. Let $w_j = v_{j}-v_k$ for $j=1,\dots, k-1$. You should apply the <a href="https://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process" rel="nofollow">orthonormalization</a> process to the vectors $w_j$ to find an orthonormal basis $e_1,\dots, e_n$ such that the space generated by $e_1,\dots,e_m$ is equal to the space generated by $w_1,\dots,w_{k-1}$ (of course $m\le k-1$). In such a basis the linear part of the projection is represented in coordinates by $P(x_1,\dots,x_n) = (x_1, \dots, x_m, 0, \dots ,0)$.</p>
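<p>In coordinates, the whole recipe can be sketched as follows (the function name is just for illustration; the sketch assumes we may skip any dependent directions during the Gram-Schmidt step):</p>

```python
def project_onto_affine_hull(points, x):
    # affine hull of v_1, ..., v_k: take w_j = v_j - v_k, run Gram-Schmidt
    # to get an orthonormal basis e_1, ..., e_m of the direction space,
    # then project the displacement x - v_k onto that span
    base = points[-1]
    ws = [[a - b for a, b in zip(v, base)] for v in points[:-1]]

    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    es = []
    for w in ws:                      # Gram-Schmidt orthonormalization
        for e in es:
            c = dot(w, e)
            w = [a - c * b for a, b in zip(w, e)]
        norm = dot(w, w) ** 0.5
        if norm > 1e-12:              # skip linearly dependent directions
            es.append([a / norm for a in w])

    d = [a - b for a, b in zip(x, base)]
    proj = list(base)
    for e in es:
        c = dot(d, e)
        proj = [p + c * a for p, a in zip(proj, e)]
    return proj

# project (0, 0, 5) onto the plane through (1,0,0), (0,1,0), (0,0,1)
p = project_onto_affine_hull([(1, 0, 0), (0, 1, 0), (0, 0, 1)], (0, 0, 5))
print(p)  # close to (-4/3, -4/3, 11/3), the nearest point on x+y+z=1
```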
3,583,879
<blockquote> <p>a) $P_5=11$</p> <p>b) <span class="math-container">$P_1+P_2+P_3+P_4+P_5 =26$</span></p> </blockquote> <p>For the first part <span class="math-container">$$\alpha^5+\beta ^5$$</span> <span class="math-container">$$=(\alpha^3+\beta ^3)^2-2(\alpha \beta )^3$$</span></p> <p>I found the value of <span class="math-container">$\alpha^3+\beta^3=4$</span></p> <p>So <span class="math-container">$$16-2(-1)=18$$</span> which doesn’t match.</p> <p>The second part depends on the value obtained from part 1, so I need to get that cleared up. </p> <p>I checked the computation many times, but it might end up being just that. Also, is there a more efficient way to do this?</p>
heropup
118,193
<p>First, prove the identity <span class="math-container">$$P_{k+1} = P_1 P_k - \alpha\beta P_{k-1}.$$</span> Then note that if <span class="math-container">$\alpha, \beta$</span> are roots of <span class="math-container">$x^2 - x - 1$</span>, then they admit the factorization <span class="math-container">$$x^2 - x - 1 = (x-\alpha)(x-\beta) = x^2 - (\alpha + \beta) x + \alpha \beta.$$</span> Equating coefficients in <span class="math-container">$x$</span>, we get <span class="math-container">$\alpha + \beta = 1$</span>, <span class="math-container">$\alpha \beta = -1$</span>. Thus <span class="math-container">$$P_{k+1} = P_k + P_{k-1}.$$</span> Now since <span class="math-container">$$P_0 = \alpha^0 + \beta^0 = 2,$$</span> we compute the recursion for <span class="math-container">$P_k$</span> easily: <span class="math-container">$$P_1 = 1, \\ P_2 = P_1 + P_0 = 3, \\ P_3 = P_2 + P_1 = 4, \\ \text{etc.}$$</span></p>
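<p>In case it helps to double-check the arithmetic, the recursion is a couple of lines of code, and the closed form with the actual roots of $x^2-x-1$ gives the same values:</p>

```python
def P(n):
    # P_0 = 2, P_1 = 1, P_{k+1} = P_k + P_{k-1}  (the Lucas numbers)
    a, b = 2, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print([P(k) for k in range(6)])        # [2, 1, 3, 4, 7, 11]
print(sum(P(k) for k in range(1, 6)))  # 26

# cross-check against P_k = alpha^k + beta^k
alpha = (1 + 5 ** 0.5) / 2
beta = (1 - 5 ** 0.5) / 2
assert abs(alpha ** 5 + beta ** 5 - P(5)) < 1e-9
```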
916,963
<p>I am reading an introductory book on mathematical proofs and I don't seem to understand the mechanics of proof by contradiction. Consider the following example. </p> <p>$\textbf{Theorem:}$ If $P \rightarrow Q$ and $R \rightarrow \neg Q$, then $P \rightarrow \neg R$.</p> <p>$\textbf{Proof:}$ (by contradiction) Assume $P$, then it follows that $Q$. Now, assume $R$, then it follows that $\neg Q$. Contradiction, we have $Q$ and $\neg Q$ at the same time. Hence, $\neg R$. Therefore, if $P \rightarrow Q$ and $R \rightarrow \neg Q$, then $P \rightarrow \neg R$, as desired.</p> <p>What I don't understand in this proof, is that why having arrived at contradiction, we decide that our assumption that $R$ is necessarily false? It also could have been that our first assumption, namely, $P$, was false. Or both of them could be false.</p> <p>So my question is: in general, when proving by contradiction, how do we know which assumption exactly is false? And how do we know that exactly one assumption must be wrong in order to proceed with the proof?</p>
paw88789
147,810
<p>In proving $A\rightarrow B$ by contradiction, you assume $\neg(A\rightarrow B)$. The negation of $A\rightarrow B$ is $A\wedge \neg B$ (the one false case of an implication).</p> <p>In your case this means you get to assume $P\rightarrow Q$, $R\rightarrow \neg Q$, and $\neg(P\rightarrow \neg R)$. However $\neg(P\rightarrow \neg R)$ is equivalent to $P\wedge R$. So you get $P$, $R$, $P\rightarrow Q$ and $R\rightarrow\neg Q$.</p> <p>From that you should be able to get your contradiction. (There are other ways of proving this problem as well.)</p>
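<p>Since the statement $((P\rightarrow Q)\wedge(R\rightarrow\neg Q))\rightarrow(P\rightarrow\neg R)$ is purely propositional, one can also confirm it is a tautology by brute force over all eight truth assignments (a quick check, not a replacement for the proof):</p>

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# the statement should hold in every row of the truth table
for P, Q, R in product([False, True], repeat=3):
    premises = implies(P, Q) and implies(R, not Q)
    conclusion = implies(P, not R)
    assert implies(premises, conclusion)
print("tautology confirmed over all 8 assignments")
```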
183,749
<p>Consider two functions <span class="math-container">$\psi_{1}(u,v)$</span> and <span class="math-container">$\psi_{4}(u,v)$</span>. We have these two partial differential equations for them:</p> <p><span class="math-container">$(-2 i Sech[\frac{u}{\alpha}] \frac{\partial\psi_{4}}{\partial u}+2 i Sech[\frac{v}{\alpha}] \frac{\partial\psi_{4}}{\partial v})+(2 i Sech[\frac{u}{\alpha}] \frac{\partial\psi_{1}}{\partial u}+2 i Sech[\frac{v}{\alpha}] \frac{\partial\psi_{1}}{\partial v}-m\psi_{1})=0$</span></p> <p><span class="math-container">$(2 i Sech[\frac{u}{\alpha}] \frac{\partial\psi_{1}}{\partial u}-2 i Sech[\frac{v}{\alpha}] \frac{\partial\psi_{1}}{\partial v})+(-2 i Sech[\frac{u}{\alpha}] \frac{\partial\psi_{4}}{\partial u}-2 i Sech[\frac{v}{\alpha}] \frac{\partial\psi_{4}}{\partial v}-m\psi_{4})=0$</span></p> <p>I need to find <span class="math-container">$\psi_{1}(u,v)$</span> and <span class="math-container">$\psi_{4}(u,v)$</span>. I wrote this code for them, but I don't know what the problem is and why it doesn't give the answer.</p> <pre><code>DSolve[{((-2 I Sech[u/α] D[ψ4[u, v], u] + 2 I Sech[v/α] D[ψ4[u, v], v]) +
    (-m ψ1[u, v] + 2 I Sech[u/α] D[ψ1[u, v], u] +
      2 I Sech[v/α] D[ψ1[u, v], v])) == 0,
  ((2 I Sech[u/α] D[ψ1[u, v], u] - 2 I Sech[v/α] D[ψ1[u, v], v]) +
    (-m ψ4[u, v] - 2 I Sech[u/α] D[ψ4[u, v], u] -
      2 I Sech[v/α] D[ψ4[u, v], v])) == 0},
 {ψ1[u, v], ψ4[u, v]}, {u, v}]
</code></pre> <p>I also tried <code>NDSolve</code> but still didn't get the answer.</p>
Alex Trounev
58,388
<p>An example of a numerical solution:</p> <pre><code>eq = {((-2 I Sech[u/α] D[ψ4[u, v], u] + 2 I*Sech[v/α] D[ψ4[u, v], v]) +
     (-m ψ1[u, v] + 2 I Sech[u/α] D[ψ1[u, v], u] +
       2 I Sech[v/α] D[ψ1[u, v], v])) == 0,
   ((2 I*Sech[u/α] D[ψ1[u, v], u] - 2 I Sech[v/α] D[ψ1[u, v], v]) +
     (-m ψ4[u, v] - 2 I Sech[u/α] D[ψ4[u, v], u] -
       2 I Sech[v/α] D[ψ4[u, v], v])) == 0};
bc = {ψ1[0, v] == Exp[-v^2], ψ4[0, v] == 0,
   ψ1[L, v] == Exp[-L^2], ψ4[L, v] == 0};
L = 5; α = 1; m = 1;
s = NDSolve[{eq, bc}, {ψ1[u, v], ψ4[u, v]}, {u, 0, L}, {v, 0, L}];
Plot3D[Evaluate[Abs[ψ1[u, v]] /. s], {u, 0, L}, {v, 0, L},
 PlotRange -&gt; All, PlotPoints -&gt; 50, Mesh -&gt; None, ColorFunction -&gt; Hue]
</code></pre> <p><a href="https://i.stack.imgur.com/Stigg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Stigg.png" alt="fig1"></a></p>
2,805,312
<p>Suppose $E$ is a vector bundle over $M, d^E$ a covariant derivative, $\sigma\in\Omega^p(E)$ and $\mu$ a q-form.</p> <p>I have seen the following pair of formulae for wedge products:</p> <p>$d^E(\mu\wedge \sigma)=d\mu\wedge\sigma+(-1)^q\mu\wedge d^E\sigma$</p> <p>$d^E(\sigma\wedge\mu)=d^E\sigma\wedge\mu+(-1)^p\sigma\wedge d\mu$</p> <p>I am quite happy with the first one but I can only get the second one if $q$ is even.</p> <p>Here is my calculation when we write $\sigma=\omega\otimes s$:</p> <p>$d^E((\omega\wedge\mu)\otimes s)=(d\omega\wedge\mu+(-1)^p\omega\wedge d\mu)\otimes s+(-1)^{p+q}(\omega\wedge\mu)d^E s$</p> <p>$=(-1)^p\sigma\wedge d\mu+(d\omega+(-1)^{p+q}\omega d^Es)\wedge\mu$</p> <p>I think I must have something wrong as there is a similar formula involving the connection on the endomorphism bundle.</p>
Jackozee Hakkiuz
497,717
<p>Just to add to the discussion, I'd like to write the derivation I went through using abstract index notation. It also allows one to work without writing $\sigma = \omega\otimes s$.</p> <p>I introduce a bit of notation</p> <blockquote> <p>$$\begin{align} \sigma &amp;\equiv {\sigma^{c}}_{a_1\dots a_p} &amp; \sigma&amp;\in\Omega^{p}\otimes E\\ \mu &amp;\equiv \mu_{b_1\dots b_q} &amp; \mu&amp;\in\Omega^{q}\\ (d\omega)_{a_0\dots a_k} &amp;= (k+1)\nabla_{[a_0}\omega_{a_1\dots a_k]} &amp;\omega&amp;\in\Omega^{k}\\ {(d^{E}\omega)^{c}}_{a_0\dots a_k} &amp;= (k+1)\nabla_{[a_0}{\omega^{c}}_{a_1\dots a_k]} &amp;\omega&amp;\in\Omega^{k}\otimes E\\ (\alpha\wedge\beta)_{a_1\dots a_{p+q}} &amp;= \frac{(p+q)!}{p!q!} (\alpha\otimes\beta)_{[a_1\dots a_{p+q}]} &amp; \alpha&amp;\in\Omega^{p}, \beta\in\Omega^{q} \end{align}$$</p> </blockquote> <p>Now, in our case: $${(\sigma\wedge\mu)^{c}}_{a_1\dots a_p b_1\dots b_q} = \frac{(p+q)!}{p!q!}{\sigma^{c}}_{[a_1\dots a_p}\mu_{b_1\dots b_{q}]}$$ Hence $$\begin{align} {[d^{E}(\sigma\wedge\mu)]^{c}}_{da_1\dots a_pb_1\dots b_q} &amp;= (p+q+1)\frac{(p+q)!}{p!q!}\nabla_{[d}({\sigma^{c}}_{a_1\dots a_p}\mu_{b_1\dots b_{q}]}) \\ &amp;= \frac{(p+q+1)!}{p!q!}\left( \nabla_{[d}{\sigma^{c}}_{a_1\dots a_p}\mu_{b_1\dots b_{q}]} + {\sigma^{c}}_{[a_1\dots a_p}\nabla_{d}\mu_{b_1\dots b_{q}]} \right) \\ &amp;= \left( \frac{((p+1)+q)!}{(p+1)!q!}(p+1)\nabla_{[d}{\sigma^{c}}_{a_1\dots a_p}\mu_{b_1\dots b_{q}]} + \frac{(p+(q+1))!}{p!(q+1)!}{\sigma^{c}}_{[a_1\dots a_p}(q+1)\nabla_{d}\mu_{b_1\dots b_{q}]} \right) \\ &amp;= \left( \frac{((p+1)+q)!}{(p+1)!q!}{(d^{E}\sigma)^{c}}_{[da_1\dots a_p}\mu_{b_1\dots b_{q}]} + \frac{(p+(q+1))!}{p!(q+1)!}{\sigma^{c}}_{[a_1\dots a_p}(d\mu)_{db_1\dots b_{q}]} \right) \\ &amp;= {(d^{E}\sigma\wedge\mu)^{c}}_{da_1\dots a_pb_1\dots b_q} + {(\sigma \wedge d\mu)^{c}}_{a_1\dots a_pdb_1\dots b_q} \\ &amp;= {(d^{E}\sigma\wedge\mu)^{c}}_{da_1\dots a_pb_1\dots b_q} + (-1)^{p}{(\sigma \wedge d\mu)^{c}}_{da_1\dots a_pb_1\dots b_q} 
\end{align}$$</p> <p>I.e</p> <p>$${[d^{E}(\sigma\wedge\mu)]^{c}}_{da_1\dots a_pb_1\dots b_q} = {(d^{E}\sigma\wedge\mu)^{c}}_{da_1\dots a_pb_1\dots b_q} + (-1)^{p}{(\sigma \wedge d\mu)^{c}}_{da_1\dots a_pb_1\dots b_q}$$</p> <p>or, without all the indices:</p> <p>$$d^{E}(\sigma\wedge\mu) = d^{E}\sigma\wedge\mu + (-1)^{p}\sigma \wedge d\mu$$</p> <p>I know it is quite messy, but for me it makes things so much clearer. Hope it is helpful.</p>
131,283
<p>I came across a question which required us to find $\displaystyle\sum_{n=3}^{\infty}\frac{1}{n^5-5n^3+4n}$. I simplified it to $\displaystyle\sum_{n=3}^{\infty}\frac{1}{(n-2)(n-1)n(n+1)(n+2)}$ which simplifies to $\displaystyle\sum_{n=3}^{\infty}\frac{(n-3)!}{(n+2)!}$. I thought it might have something to do with partial fractions, but since I am relatively inexperienced with them I was unable to think of anything useful to do. I tried to check WolframAlpha and it gave $$\sum_{n=3}^{m}\frac{(n-3)!}{(n+2)!}=\frac{m^4+2m^3-m^2-2m-24}{96(m-1)m(m+1)(m+2)}$$ From this it is clear that as $m\rightarrow \infty$ the sum converges to $\frac{1}{96}$, however I have no idea how to get there. Any help would be greatly appreciated!</p>
Did
6,179
<p><strong>Hint:</strong> There exists some $c_k$ independent of $n$ such that $$ \frac1{(n-2)(n-1)n(n+1)(n+2)}=\sum_{k=-2}^2\frac{c_k}{n+k}. $$ To find $c_k$, multiply both sides by $n+k$ and evaluate the result at $n=-k$. For example, $$ c_{-2}=\left.\frac1{(n-1)n(n+1)(n+2)}\right|_{n=2}=\frac1{24}. $$ <em>Sanity check:</em> $\sum\limits_{k=-2}^2c_k=0$ and every $c_k$ should be a multiple of $\frac1{24}$ with the sign of $(-1)^k$ and depending only on $|k|$.</p> <p>Once this is done, note that the value $S$ of the series you are looking for is $$ S=c_{-2}\cdot\left(\frac11+\frac12\right)+c_{-1}\cdot\left(\frac12\right)+c_{1}\cdot\left(-\frac13\right)+c_{2}\cdot\left(-\frac13-\frac14\right), $$ which yields the value you got thanks to W|A, namely, $S=\dfrac1{96}$.</p>
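<p>Exact rational arithmetic makes it easy to confirm both the coefficients and the limit $\frac1{96}$ (a sketch of the computation, not part of the hint):</p>

```python
from fractions import Fraction

def term(n):
    return Fraction(1, (n - 2) * (n - 1) * n * (n + 1) * (n + 2))

def c(k):
    # cover-up rule: multiply by (n + k) and evaluate at n = -k,
    # i.e. c_k = 1 / prod_{j != k} (j - k) over j in {-2, ..., 2}
    prod = Fraction(1)
    for j in range(-2, 3):
        if j != k:
            prod *= (j - k)
    return 1 / prod

coeffs = {k: c(k) for k in range(-2, 3)}
assert sum(coeffs.values()) == 0
assert coeffs[-2] == Fraction(1, 24)

# the decomposition reproduces the original term...
for n in range(3, 30):
    assert term(n) == sum(coeffs[k] / (n + k) for k in range(-2, 3))

# ...and the partial sums telescope towards 1/96
S = sum(term(n) for n in range(3, 2001))
assert abs(S - Fraction(1, 96)) < Fraction(1, 10**9)
print(float(S))
```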
2,713,038
<p>I've seen some solutions to this problem but I'm wondering what is incorrect about an argument like this:</p> <p>$S = \{x \in \mathbb{R}^d: |x| = 1\}$, then $\delta S = \{x \in \mathbb{R}^d: |\delta x| = 1\}$, and so</p> <p>\begin{align*} \{ x \in \mathbb{R}^d: |\delta ||x| = 1\} &amp; = \{ x \in \mathbb{R}^d: |x| = 1 / |\delta | \} \end{align*}</p> <p>As we take $\delta $ large, then $\delta S$ becomes the set in which $|x| = 0$, i.e. the origin in $\mathbb{R}^d$ , which has zero measure, and since: $m(\delta S) = \delta^d m(S)$ where $m$ is the Lebesgue measure, it follows that $m(S) = 0$</p>
David C. Ullrich
248,223
<p>The argument you give is wrong, as already noted. One <em>can</em> show that <span class="math-container">$m(S)=0$</span> by considering the sets <span class="math-container">$\delta S$</span>.</p> <p>For example, say <span class="math-container">$(\delta_j)\subset(1,2)$</span> and <span class="math-container">$\delta_j\ne\delta_k$</span> for <span class="math-container">$j\ne k$</span>. Let <span class="math-container">$$A=\bigcup_j\delta_j S.$$</span> Then <span class="math-container">$$m(\delta_j S)=\delta_j^n m(S)\ge m(S);$$</span>since the <span class="math-container">$\delta_j S$</span> are disjoint, if <span class="math-container">$m(S)&gt;0$</span> this would show that <span class="math-container">$m(A)=\infty$</span>, which is false since <span class="math-container">$A$</span> is bounded.</p> <p>(Here of course I'm taking <span class="math-container">$\delta S=\{\delta x:x\in S\}$</span> as usual...)</p>
462,397
<p>So, I read the John Baez essay "Lectures on n-categories and cohomology" and I understand the notion of a "(-1)-category" and a "(-2)-category" and how to derive them. However, I'm not totally clear on what a (-1)-morphism is.</p> <p>At nLab at <a href="http://ncatlab.org/nlab/show/k-morphism" rel="nofollow noreferrer">http://ncatlab.org/nlab/show/k-morphism</a>, I found:</p> <blockquote>For the purposes of negative thinking, it may be useful to recognise that every ∞-category has a (−1)-morphism, which is the source and target of every object.</blockquote> <p>I have an idea of how this works, but I'm not sure if it's correct. Basically, my thinking is that if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are objects in a 1-category <span class="math-container">$C$</span>, and <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are both parallel morphisms that map <span class="math-container">$A\to B$</span>, and we take <span class="math-container">$S = hom_C(A,B)$</span> then we get a 0-category (a set) that contains <span class="math-container">$f$</span> and <span class="math-container">$g$</span> as elements (and their identities as morphisms). Both <span class="math-container">$hom_S(f,g)$</span> and <span class="math-container">$hom_S(g,f)$</span> each give us an empty (-1)-category (obviously isomorphic). Then <span class="math-container">$hom_S(f,f)$</span> and <span class="math-container">$hom_S(g,g)$</span> each give us a (-1)-category with a single object (also obviously isomorphic). Now, we treat the isomorphic (-1)-categories as equivalent, so there are only the two: the empty one named <span class="math-container">$False$</span>; and, the non-empty one named <span class="math-container">$True$</span>. So, since all non-empty (-1)-categories are equivalent, we decide that their morphisms are also equivalent. 
</p> <p>So, here's my question that I'm not sure about: </p> <ul> <li><b>Question 1: </b> Am I correct in my understanding that the identity morphism of an object in a (-1)-category is a (-1)-morphism? It's a 0-morphism if the (-1)-category were treated as a 1-category, but if we take into account that the (-1)-category is derived from some higher category, then does that effectively make the single 1-morphism in the (-1)-category a (-1) morphism in the higher-category? </li> </ul> <p>This seems like it would make sense of the assertion that the single (-1)-morphism is the source and target of every 0-morphism/object.</p> <p>I also came across another passage meant to explain the (-1)-morphism that threw me off a little bit:</p> <blockquote> Also note that every k-morphism has k+1 identity (k+1)-morphisms, which just happen to all be the same (which can be made a result of the Eckmann–Hilton argument). Thus, the (−1)-morphism has 0 identity 0-morphisms, so we don't need any object. (This confused me once.) </blockquote> <ul> <li><b>Question 2:</b> Does this mean that an empty category still has a (-1)-morphism? If so, where does it come from?</li> <li><b>Question 3:</b> Does this mean that a 1-morphism has 2 identity 2-morphisms? My understanding of an "identity" necessitates uniqueness. Having 2 identities for the same thing seems to cause a contradiction. How can I reconcile this?</li> </ul>
Zhen Lin
5,191
<ol> <li>An <strong>object</strong> is always a 0-morphism, so the identity morphism of an object is a 1-morphism.</li> <li>Yes, the empty category has a $(-1)$-morphism. The convention is that the free $\infty$-category generated by a $(-1)$-morphism is the empty $\infty$-category, and so every $\infty$-category has a unique $(-1)$-morphism. Note that this accords with the homotopy hypothesis: it is commonly accepted that a $(-1)$-cell should be the empty space, because this yields reduced homology in a natural way.</li> <li>Higher identity morphisms are unique in an $\infty$-category with strict interchange laws, as noted in the quote. In principle they could be different (but equivalent): an $n$-cube can be degenerate in <em>at least</em> $n$ different ways, if we respect orientation.</li> </ol>
4,281,654
<p>My professor gave me an exercise where I had to show that the special linear group <span class="math-container">$SL(2,\mathbb{R})$</span> is a lie subgroup of <span class="math-container">$GL(2,\mathbb{R})$</span>. I was able to do this part. However, I was then asked to do the following:</p> <p>All real <span class="math-container">$2\times 2$</span> matrices <span class="math-container">$\begin{pmatrix} a &amp; b \\ c &amp; d\end{pmatrix}$</span> can be identified with <span class="math-container">$(a,b,c,d) \in \mathbb{R}^4$</span>. In this way, <span class="math-container">$SL(2,\mathbb{R})$</span> can be thought of as a subset of <span class="math-container">$\mathbb{R}^4$</span>. In this correspondence, find all matrices in <span class="math-container">$SL(2,\mathbb{R})$</span> that are closest to the origin.</p> <p>I really don't have any idea how to approach this problem. The only things I have ever seen like this are Lagrange multipliers, but those don't seem to apply here. For reference, though this exercise is not in the text, our course is using Introduction to Smooth Manifolds by Lee.</p>
Theone
463,966
<p>You can actually do this with Lagrange multipliers. In the following, I always identify the matrix <span class="math-container">$\begin{pmatrix} a &amp; b\\ c &amp; d \end{pmatrix}$</span> with the corresponding point <span class="math-container">$(a, b, c, d) \in \mathbb{R}^4$</span>.</p> <p>First, you have to recognize that <span class="math-container">$SL(2, \mathbb{R})$</span> is a 3-dimensional submanifold of <span class="math-container">$\mathbb{R}^4$</span> defined by the simple equation <span class="math-container">$ad - bc = 1$</span>. In other words, setting <span class="math-container">$f(a, b, c, d) = ad - bc - 1$</span>, <span class="math-container">$SL(2, \mathbb{R})$</span> is defined by the equation <span class="math-container">$f(a, b, c, d) = 0$</span>. Importantly, <span class="math-container">$f$</span> has nonzero gradient everywhere on <span class="math-container">$SL(2, \mathbb{R})$</span>, which means we can use it for Lagrange multipliers.</p> <p>The function you are trying to minimize is the distance function, but to simplify things, I am actually going to minimize the squared distance function, <span class="math-container">$g(a, b, c, d) = a^2 + b^2 + c^2 + d^2$</span>. 
Because <span class="math-container">$SL(2, \mathbb{R})$</span> is a non-empty closed subset of <span class="math-container">$\mathbb{R}^4$</span>, there <em>is</em> a closest point to the origin, and this closest point can be found by Lagrange multipliers.</p> <p>As you know from Lagrange multipliers, the minimum of <span class="math-container">$g$</span> is found when <span class="math-container">$\nabla g = \lambda \nabla f$</span> for some <span class="math-container">$\lambda \in \mathbb{R}$</span>.</p> <p>In other words,</p> <p><span class="math-container">$$ (2a, 2b, 2c, 2d) = \lambda (d, -c, -b, a) $$</span></p> <p>Pairing these equations (the first with the last, the second with the third) forces <span class="math-container">$\lambda^2 = 4$</span>, since <span class="math-container">$a, b, c, d$</span> cannot all vanish when <span class="math-container">$ad - bc = 1$</span>. So either <span class="math-container">$\lambda = 2$</span>, giving <span class="math-container">$a = d$</span> and <span class="math-container">$b = -c$</span>, or <span class="math-container">$\lambda = -2$</span>, giving <span class="math-container">$a = -d$</span> and <span class="math-container">$b = c$</span>. In the second case, the determinant is <span class="math-container">$ad - bc = -a^2 - b^2 \le 0$</span>, so it cannot equal <span class="math-container">$1$</span>. So, if the distance to the origin is minimized, then <span class="math-container">$a = d$</span> and <span class="math-container">$b = -c$</span>. The constraint then reads <span class="math-container">$a^2 + b^2 = 1$</span>, so every such matrix satisfies <span class="math-container">$g = 2(a^2 + b^2) = 2$</span>, i.e. lies at distance <span class="math-container">$\sqrt{2}$</span> from the origin. The matrices in <span class="math-container">$SL(2,\mathbb{R})$</span> closest to the origin are therefore exactly the rotation matrices <span class="math-container">$\begin{pmatrix} \cos\theta &amp; -\sin\theta \\ \sin\theta &amp; \cos\theta \end{pmatrix}$</span>, i.e. the subgroup <span class="math-container">$SO(2)$</span>.</p>
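<p><em>(Not part of the original answer.)</em> As a quick numerical sanity check of this conclusion, assuming NumPy is available: sample random determinant-one matrices and verify that none is closer to the origin (Frobenius norm, which equals the Euclidean norm of <span class="math-container">$(a,b,c,d)$</span>) than the rotation matrices, which sit at distance <span class="math-container">$\sqrt{2}$</span>.</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sl2(rng):
    """Draw a random 2x2 matrix with positive determinant and rescale it
    so that its determinant is exactly 1 (det(cM) = c^2 det(M) for 2x2)."""
    while True:
        m = rng.normal(size=(2, 2))
        d = np.linalg.det(m)
        if d > 1e-6:
            return m / np.sqrt(d)

# Frobenius norm of the matrix == Euclidean norm of (a, b, c, d) in R^4.
samples = [random_sl2(rng) for _ in range(10_000)]
min_norm = min(np.linalg.norm(m) for m in samples)

# A rotation matrix (a = d = cos t, b = -c = -sin t) has norm sqrt(2) for any t.
t = 0.7
rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

print(round(np.linalg.norm(rot), 6))           # 1.414214, i.e. sqrt(2)
print(min_norm >= np.linalg.norm(rot) - 1e-9)  # True: no sample beats sqrt(2)
```

<p>The inequality holds for every sample, as the Lagrange computation predicts.</p>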
86,536
<p>Suppose we have an association:</p> <pre><code>assc = &lt;|"A" -&gt; &lt;|"a" -&gt; 1, "aa" -&gt; 2|&gt;, "B" -&gt; 0, "C" -&gt; 5, "D" -&gt; &lt;|"d" -&gt; 2, "dd" -&gt; 12|&gt;|&gt; </code></pre> <p>Let's also say we have two known lists, plus one list for the nested keys (** - <em>how to create this list?</em> **):</p> <pre><code>bigKeys = {"A", "D"} lst1 = {"a", "dd", "B"} lst2 = {"aa", "d", "C"} </code></pre> <p>I would like to loop over <code>assc</code> and assign <strong>0</strong> to the values whose keys come from <code>lst1</code> and are <strong>not</strong> in <code>bigKeys</code>, and <strong>1</strong> to the values whose keys come from <code>lst2</code> and again are <strong>not</strong> in <code>bigKeys</code>.</p> <p>As a result we would have:</p> <pre><code>asscNew = &lt;|"A" -&gt; &lt;|"a" -&gt; 0, "aa" -&gt; 1|&gt;, "B" -&gt; 0, "C" -&gt; 1, "D" -&gt; &lt;|"d" -&gt; 1, "dd" -&gt; 0|&gt;|&gt; </code></pre> <p>Thanks!</p>
SquareOne
19,960
<p>I've found a simple approach using the function <code>Part</code> (its shorthand is <code>[[ ]]</code>):</p> <h2>1.</h2> <p>If you write your list <code>lst1</code> this way instead:</p> <pre><code>newlst1 = {{"A", "a"}, {"D", "dd"}, {"B"}} </code></pre> <p>you can assign any value (for example here <code>999</code>) to all these keys simply with:</p> <pre><code>(assc[[##]] = 999) &amp; @@@ newlst1; </code></pre> <p>You can check the result:</p> <pre><code>assc </code></pre> <blockquote> <p><code>&lt;|"A" -&gt; &lt;|"a" -&gt; 999, "aa" -&gt; 2|&gt;, "B" -&gt; 999, "C" -&gt; 5, "D" -&gt; &lt;|"d" -&gt; 2, "dd" -&gt; 999|&gt;|&gt;</code></p> </blockquote> <h2>2.</h2> <p>Alright, let's say you don't want to modify your <code>lst1={"a", "dd", "B"};</code>. The keys are <strong>unique</strong>, so you don't want to waste time rewriting them.</p> <p>Then we just need to generate this key map first to help us:</p> <pre><code>keymap = Reap@MapIndexed[Sow[Rest@{##}] &amp;, assc, {0, Infinity}] // Flatten[#, 3] &amp; // DeleteCases[#, {}] &amp; // Map[#[[-1, 1]] -&gt; # &amp;, #] &amp; </code></pre> <blockquote> <p><code>{"a" -&gt; {Key["A"], Key["a"]}, "aa" -&gt; {Key["A"], Key["aa"]}, "A" -&gt; {Key["A"]}, "B" -&gt; {Key["B"]}, "C" -&gt; {Key["C"]}, "d" -&gt; {Key["D"], Key["d"]}, "dd" -&gt; {Key["D"], Key["dd"]}, "D" -&gt; {Key["D"]}}</code></p> </blockquote> <p>Then, for example, to assign a value you just do (almost as in 1.):</p> <pre><code>(assc[[##]] = 111111) &amp; @@@ (lst1 /. 
keymap); </code></pre> <p>You can check:</p> <pre><code>assc </code></pre> <blockquote> <p><code>&lt;|"A" -&gt; &lt;|"a" -&gt; 111111, "aa" -&gt; 2|&gt;, "B" -&gt; 111111, "C" -&gt; 5, "D" -&gt; &lt;|"d" -&gt; 2, "dd" -&gt; 111111|&gt;|&gt;</code></p> </blockquote> <h2>Explanation</h2> <p>You can use <code>Part</code> or <code>[[ ]]</code> with Associations almost as you do with <code>List</code>s, except you use Keys as parameters (see the <a href="http://reference.wolfram.com/language/ref/Part.html" rel="nofollow">Part</a> documentation).</p> <p>Let's try some examples:</p> <p>Given</p> <pre><code>assc = &lt;|"A" -&gt; &lt;|"a" -&gt; 1, "aa" -&gt; 2|&gt;, "B" -&gt; 0, "C" -&gt; 5, "D" -&gt; &lt;|"d" -&gt; 2, "dd" -&gt; 12|&gt;|&gt; </code></pre> <p>see what this does:</p> <pre><code>assc[["A"]] </code></pre> <blockquote> <p><code>&lt;|"a" -&gt; 1, "aa" -&gt; 2|&gt;</code></p> </blockquote> <pre><code>assc[["A", "a"]] </code></pre> <blockquote> <p><code>1</code></p> </blockquote> <p>and you can directly assign new values:</p> <pre><code>assc[["A", "a"]] = 123456789; assc </code></pre> <blockquote> <p><code>&lt;|"A" -&gt; &lt;|"a" -&gt; 123456789, "aa" -&gt; 2|&gt;, "B" -&gt; 0, "C" -&gt; 5, "D" -&gt; &lt;|"d" -&gt; 2, "dd" -&gt; 12|&gt;|&gt;</code></p> </blockquote>
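<p>For readers outside the Wolfram Language, here is a <em>hypothetical</em> Python analogue of the update the question asks for (not part of this thread; the names mirror the question's). Instead of a <code>bigKeys</code> check, it simply recurses into dict-valued entries:</p>

```python
# Hypothetical Python version of the task: walk a nested dict and overwrite
# leaf values whose keys are in lst1 (-> 0) or lst2 (-> 1).
assc = {"A": {"a": 1, "aa": 2}, "B": 0, "C": 5, "D": {"d": 2, "dd": 12}}
lst1 = {"a", "dd", "B"}
lst2 = {"aa", "d", "C"}

def relabel(node):
    for key, value in node.items():
        if isinstance(value, dict):
            relabel(value)  # nested "association": recurse instead of checking bigKeys
        elif key in lst1:
            node[key] = 0
        elif key in lst2:
            node[key] = 1

relabel(assc)
print(assc)
# {'A': {'a': 0, 'aa': 1}, 'B': 0, 'C': 1, 'D': {'d': 1, 'dd': 0}}
```

<p>The result matches the <code>asscNew</code> shown in the question.</p>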
812,345
<p><a href="https://math.stackexchange.com/a/87705/53259">This</a> proves: Similar matrices have the same characteristic polynomial. (Lay P277 Theorem 4)</p> <p>I prefer <a href="https://math.stackexchange.com/a/8407/53259">https://math.stackexchange.com/a/8407/53259</a>, but this proves that they have the same eigenvalues. </p> <p>Are they equivalent? What about in general, even for matrices which are NOT similar? </p>
Frank Wan
70,425
<p>"What about in general, even for matrices which are NOT similar?" The two properties are not equivalent in general: having the same characteristic polynomial implies having the same eigenvalues, but not conversely — diag(1, 1, 2) and diag(1, 2, 2) share the eigenvalues 1 and 2 yet have different characteristic polynomials. Moreover, you can find Jordan matrices A and B with the same characteristic polynomial (and hence the same eigenvalues) that are NOT similar: take A the 2×2 Jordan block with eigenvalue 1 and B the 2×2 identity. Both have characteristic polynomial (x - 1)^2, but they are not similar, since the identity is similar only to itself.</p>
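<p>A concrete check of the non-similarity claim (not from the original answer; assumes NumPy): the 2×2 Jordan block A = [[1, 1], [0, 1]] and the identity B both have characteristic polynomial (x - 1)^2, yet rank(A - I) differs from rank(B - I), and rank is a similarity invariant, so A and B cannot be similar.</p>

```python
import numpy as np

# Two Jordan matrices with the same characteristic polynomial (x - 1)^2:
A = np.array([[1.0, 1.0], [0.0, 1.0]])  # one 2x2 Jordan block for eigenvalue 1
B = np.eye(2)                           # two 1x1 blocks for eigenvalue 1

# np.poly returns the characteristic polynomial's coefficients: x^2 - 2x + 1.
print(np.poly(A))  # approximately [ 1. -2.  1.]
print(np.poly(B))  # approximately [ 1. -2.  1.]

# Not similar: rank(A - I) != rank(B - I), and rank is a similarity
# invariant since P (A - I) P^-1 = P A P^-1 - I.
print(np.linalg.matrix_rank(A - np.eye(2)))  # 1
print(np.linalg.matrix_rank(B - np.eye(2)))  # 0
```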