658,758
<p>How do you plot $$f(x,y) = \frac{x}{1-y} \text{with}~ x^2+y^2&lt;1$$ in Mathematica or Maple?</p>
heropup
118,193
<p>In <em>Mathematica</em> (version 6 or newer)</p> <pre><code>Plot3D[x/(1 - y), {x, -1, 1}, {y, -1, 1}, RegionFunction -&gt; Function[{x, y}, x^2 + y^2 &lt; 1]] </code></pre>
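For readers without Mathematica or Maple, the same region-restricted surface can be sketched in Python (a sketch assuming numpy and matplotlib are available; the NaN-masking step plays the role of <code>RegionFunction</code>):

```python
import numpy as np

# Sample a grid on [-1, 1]^2 and evaluate f(x, y) = x / (1 - y).
x = np.linspace(-1.0, 1.0, 200)
y = np.linspace(-1.0, 1.0, 200)
X, Y = np.meshgrid(x, y)
with np.errstate(divide="ignore", invalid="ignore"):
    Z = X / (1.0 - Y)
Z[X**2 + Y**2 >= 1.0] = np.nan  # mask the exterior of the unit disk

# To render (requires matplotlib):
# import matplotlib.pyplot as plt
# ax = plt.figure().add_subplot(projection="3d")
# ax.plot_surface(X, Y, Z)
# plt.show()
```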
1,301,937
<p>A curve $C$ is said to be <em>trigonal</em> if it admits a rational function $f: C \to \mathbb{CP}^1$ of degree $3$. Is any nonsingular plane curve of degree four trigonal? Can the map be chosen so that it has exactly $10$ ramification points?</p>
Alex Fok
223,498
<p>For the first question, the answer is yes. Choose a point $P$ on the nonsingular plane curve $C$ of degree 4, and project $C$ from $P$ onto a line (i.e. a copy of $\mathbb{P}^1$) in $\mathbb{P}^2$. Since a general line passing through $P$ cuts $C$ in three other points, this projection is a rational map of degree 3.</p>
3,340,107
<p>In the solution to justify why <span class="math-container">$E_{\mathbb Q}[\vert X_{n}-X\vert ]\xrightarrow{n \to \infty} 0$</span> does not necessarily hold, given <span class="math-container">$\mathbb Q &lt;&lt; \mathbb P$</span> and <span class="math-container">$E_{\mathbb P}[\vert X_{n}-X\vert ]\xrightarrow{n \to \infty} 0$</span>, it is stated that:</p> <p>Defining <span class="math-container">$Y_{n}:=\frac{dQ}{dP}\vert X_{n}-X\vert$</span> and assuming that <span class="math-container">$X_{n}\to X, \mathbb P-$</span>a.s. </p> <p>Note <span class="math-container">$E_{\mathbb Q}[\vert X_{n}-X\vert]=E_{\mathbb P}[\frac{dQ}{dP}\vert X_{n}-X\vert]$</span>. Then:</p> <p><span class="math-container">$0=E_{\mathbb P}[\liminf\limits_{n \to \infty}Y_{n} ]\leq \liminf\limits_{n \to \infty}E_{\mathbb P}[Y_{n}]$</span> by Fatou's Lemma, and this is used to justify why it does not necessarily hold even if <span class="math-container">$X_{n}\to X, \mathbb P-$</span>a.s. </p> <p>But I believe it does by simply using <span class="math-container">$\limsup\limits_{n \to \infty}$</span></p> <p><span class="math-container">$0=E_{\mathbb P}[\limsup\limits_{n \to \infty}Y_{n} ]\geq \limsup\limits_{n \to \infty}E_{\mathbb P}[Y_{n}]$</span>(*)</p> <p>and since it always holds that:</p> <p><span class="math-container">$\limsup\limits_{n \to \infty}\geq \liminf\limits_{n \to \infty}$</span></p> <p>and by (*) we also have that <span class="math-container">$\limsup\limits_{n \to \infty}\leq 0\leq \liminf\limits_{n \to \infty}$</span></p> <p>so surely <span class="math-container">$E_{\mathbb Q}[\vert X_{n}-X\vert ]\xrightarrow{n \to \infty} 0$</span> holds using Fatou's Lemma twice?</p>
Simon
649,092
<p>Since <span class="math-container">$(Y_n)$</span> may not be uniformly bounded above (i.e. <span class="math-container">$(-Y_n)$</span> may not be uniformly bounded below), your statement that <span class="math-container">$$ E\left[\limsup_{n\to\infty} Y_n\right]\ge \limsup_{n\to\infty} E[Y_n] $$</span> does not follow from Fatou's lemma. Note that even if <span class="math-container">$E_{\Bbb P}[|X_n - X|]\to 0$</span> and <span class="math-container">$\Bbb Q \ll \Bbb P$</span>, it can happen that <span class="math-container">$E_{\Bbb Q}[|X_n-X|] =\infty$</span>, so <span class="math-container">$E_{\Bbb Q}[|X_n-X|]\to 0$</span> does not necessarily hold.</p>
3,340,107
<p>In the solution to justify why <span class="math-container">$E_{\mathbb Q}[\vert X_{n}-X\vert ]\xrightarrow{n \to \infty} 0$</span> does not necessarily hold, given <span class="math-container">$\mathbb Q &lt;&lt; \mathbb P$</span> and <span class="math-container">$E_{\mathbb P}[\vert X_{n}-X\vert ]\xrightarrow{n \to \infty} 0$</span>, it is stated that:</p> <p>Defining <span class="math-container">$Y_{n}:=\frac{dQ}{dP}\vert X_{n}-X\vert$</span> and assuming that <span class="math-container">$X_{n}\to X, \mathbb P-$</span>a.s. </p> <p>Note <span class="math-container">$E_{\mathbb Q}[\vert X_{n}-X\vert]=E_{\mathbb P}[\frac{dQ}{dP}\vert X_{n}-X\vert]$</span>. Then:</p> <p><span class="math-container">$0=E_{\mathbb P}[\liminf\limits_{n \to \infty}Y_{n} ]\leq \liminf\limits_{n \to \infty}E_{\mathbb P}[Y_{n}]$</span> by Fatou's Lemma, and this is used to justify why it does not necessarily hold even if <span class="math-container">$X_{n}\to X, \mathbb P-$</span>a.s. </p> <p>But I believe it does by simply using <span class="math-container">$\limsup\limits_{n \to \infty}$</span></p> <p><span class="math-container">$0=E_{\mathbb P}[\limsup\limits_{n \to \infty}Y_{n} ]\geq \limsup\limits_{n \to \infty}E_{\mathbb P}[Y_{n}]$</span>(*)</p> <p>and since it always holds that:</p> <p><span class="math-container">$\limsup\limits_{n \to \infty}\geq \liminf\limits_{n \to \infty}$</span></p> <p>and by (*) we also have that <span class="math-container">$\limsup\limits_{n \to \infty}\leq 0\leq \liminf\limits_{n \to \infty}$</span></p> <p>so surely <span class="math-container">$E_{\mathbb Q}[\vert X_{n}-X\vert ]\xrightarrow{n \to \infty} 0$</span> holds using Fatou's Lemma twice?</p>
mbartczak
699,276
<p>One can construct a counterexample such as Simon suggested, e.g. <span class="math-container">$$\Omega = (0,1),\ \mathbb{P}(dx) = dx,\ \mathbb{Q}(dx) = \frac{dx}{2\sqrt{x}},\ X_n(x) = n^{3/4}\ 1_{(0,1/n)}(x).$$</span> Then <span class="math-container">$$\mathbb{E_P}|X_n| = \frac{n^{3/4}}n\rightarrow 0$$</span> and <span class="math-container">$$\Bbb{E_Q}|X_n| = \int_0^{1/n}n^{3/4}\ \frac{dx}{2\sqrt{x}} = n^{3/4}\sqrt{\frac1n} = n^{1/4} \rightarrow \infty.$$</span></p>
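The two integrals in this counterexample can be sanity-checked numerically (a sketch using a midpoint Riemann sum; the helper name <code>expectation</code> is mine, not from the answer):

```python
import math

def expectation(n, density, steps=100_000):
    # Midpoint-rule approximation of the integral of n^{3/4} * density(x)
    # over (0, 1/n), i.e. E|X_n| under the measure with the given density.
    h = (1.0 / n) / steps
    return sum(n ** 0.75 * density((k + 0.5) * h) * h for k in range(steps))

E_P = expectation(4, lambda x: 1.0)                          # exact value: 4^(-1/4)
E_Q = expectation(4, lambda x: 1.0 / (2.0 * math.sqrt(x)))   # exact value: 4^(1/4)
```

Increasing $n$ drives the first expectation toward $0$ while the second grows like $n^{1/4}$.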
18,413
<p>The thought came from the following problem:</p> <p>Let $V$ be a Euclidean space. Let $T$ be an inner product on $V$. Let $f$ be a linear transformation $f:V \to V$ such that $T(x,f(y))=T(f(x),y)$ for $x,y\in V$. Let $v_1,\dots,v_n$ be an orthonormal basis, and let $A=(a_{ij})$ be the matrix of $f$ with respect to this basis.</p> <p>The goal here is to prove that $A$ is symmetric. I can prove this easily enough by saying:</p> <p>Since $T$ is an inner product, $T(v_i,v_j)=\delta_{ij}$.</p> <p>\begin{align*} T(A v_j,v_i)&amp;=T(\sum_{k=1}^n a_{kj} v_k,v_i)\\ &amp;=T(a_{1j} v_1,v_i) + \dots + T(a_{nj} v_n,v_i)\\ &amp;=a_{1j} T(v_1,v_i) + \dots + a_{nj} T(v_n,v_i)\tag{bilinearity}\\ &amp;=a_{ij}\tag{$T(v_i,v_j)=\delta_{ij}$}\\ \end{align*}</p> <p>By the same logic,</p> <p>\begin{align*} T(A v_j,v_i)&amp;=T(v_j,A v_i)\\ &amp;=T(v_j,\sum_{k=1}^n a_{ki} v_k)\\ &amp;=T(v_j,a_{1i} v_1)+\dots+T(v_j,a_{ni} v_n)\\ &amp;=a_{1i} T(v_j,v_1)+\dots+a_{ni} T(v_j,v_n)\\ &amp;= a_{ji}\\ \end{align*}</p> <p>By hypothesis, $T(A v_j,v_i)=T(v_j,A v_i)$, therefore $a_{ij}=T(A v_j,v_i)=T(v_j,A v_i)=a_{ji}$.</p> <p>I had this other idea though, that since $T$ is an inner product, its matrix is positive definite.</p> <p>$T(x,f(y))=T(f(x),y)$ in matrix notation is $x^T T A y=(A x)^T T y$.</p> <p>\begin{align*} x^T T A y &amp;= (A x)^T T y\\ &amp;=x^T A^T T y\\ TA &amp;= A^T T\tag{holds for all $x,y$}\\ (TA)^T &amp;= (A^T T)^T\\ A^T T^T &amp;= T^T A\\ TA &amp;= A^T T^T\tag{T is symmetric}\\ &amp;= (TA)^T\tag{transpose of matrix product}\\ \end{align*}</p> <p>This is where I got stuck. We know that $T$ and $TA$ are both symmetric matrices. Clearly $T^{-1}$ is symmetric. If it can be shown that $T^{-1}$ and $TA$ commute, that would show it.</p>
Listing
3,123
<p>I did some numerical search for higher dimensions:</p> <p>$n=3:$</p> <p>$\left( \begin{array}{ccc} 1 &amp; 1 &amp; 0 \\ 1 &amp; 1 &amp; 1 \\ 0 &amp; 1 &amp; 1 \end{array} \right).\left( \begin{array}{ccc} -383 &amp; 13 &amp; -13 \\ -36 &amp; -445 &amp; -36 \\ -13 &amp; 13 &amp; -383 \end{array} \right)=\left( \begin{array}{ccc} -419 &amp; -432 &amp; -49 \\ -432 &amp; -419 &amp; -432 \\ -49 &amp; -432 &amp; -419 \end{array} \right)$</p> <p>$n=4:$</p> <p>$\left( \begin{array}{cccc} 1 &amp; 1 &amp; 0 &amp; 0 \\ 1 &amp; 1 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 1 &amp; 1 \\ 0 &amp; 0 &amp; 1 &amp; 1 \end{array} \right).\left( \begin{array}{cccc} -383 &amp; 13 &amp; -36 &amp; -23 \\ 85 &amp; -360 &amp; 49 &amp; -49 \\ -49 &amp; 49 &amp; -360 &amp; 85 \\ -23 &amp; -36 &amp; 13 &amp; -383 \end{array} \right)=\left( \begin{array}{cccc} -298 &amp; -347 &amp; 13 &amp; -72 \\ -347 &amp; -298 &amp; -347 &amp; 13 \\ 13 &amp; -347 &amp; -298 &amp; -347 \\ -72 &amp; 13 &amp; -347 &amp; -298 \end{array} \right)$</p> <p>$n=5:$</p> <p>$\left( \begin{array}{ccccc} 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 \\ 1 &amp; 2 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 3 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 4 &amp; 1 \\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 5 \end{array} \right).\left( \begin{array}{ccccc} -7526 &amp; 3158 &amp; -9379 &amp; 7340 &amp; -8405 \\ 5216 &amp; -3477 &amp; 6079 &amp; -6570 &amp; 6359 \\ -3225 &amp; 1486 &amp; -3098 &amp; 2500 &amp; -3543 \\ 1159 &amp; -1300 &amp; 905 &amp; -1249 &amp; 970 \\ -641 &amp; 414 &amp; -841 &amp; 186 &amp; -656 \end{array} \right)=$</p> <p>$=\left( \begin{array}{ccccc} -2310 &amp; -319 &amp; -3300 &amp; 770 &amp; -2046 \\ -319 &amp; -2310 &amp; -319 &amp; -3300 &amp; 770 \\ -3300 &amp; -319 &amp; -2310 &amp; -319 &amp; -3300 \\ 770 &amp; -3300 &amp; -319 &amp; -2310 &amp; -319 \\ -2046 &amp; 770 &amp; -3300 &amp; -319 &amp; -2310 \end{array} \right)$</p> <p>All matrices have full rank. However, for high dimensions they are quite ugly.</p>
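The $n=3$ example is easy to verify directly (a minimal pure-Python check; nothing here depends on how the matrices were found):

```python
# S is symmetric, A is the (nonsymmetric) factor from the n = 3 example above.
S = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
A = [[-383, 13, -13],
     [-36, -445, -36],
     [-13, 13, -383]]

def matmul(X, Y):
    # Plain triple-loop product of two square integer matrices.
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = matmul(S, A)
is_symmetric = all(P[i][j] == P[j][i] for i in range(3) for j in range(3))
```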
3,082,944
<blockquote> <p>Prove that the space <span class="math-container">$X$</span> of all symmetric matrices in <span class="math-container">$GL_2(\mathbb R)$</span> with both eigenvalues belonging to the interval <span class="math-container">$(0,2),$</span> with the topology inherited from <span class="math-container">$M_2(\mathbb R) $</span>, is <strong>connected</strong>.</p> </blockquote> <p>The space of all symmetric matrices in <span class="math-container">$M_2(\mathbb R)$</span> is path-connected. </p> <p>I was not able to show why <span class="math-container">$\det(\lambda A+(1-\lambda)B)\neq 0$</span> where <span class="math-container">$\lambda \in (0,1)$</span> and <span class="math-container">$A,B\in GL_2(\mathbb R), A=A^T,B=B^T.$</span></p> <p>Also, how can the assumption that the eigenvalues lie in <span class="math-container">$(0,2)$</span> be used to prove connectedness?</p> <p>I hope my doubts are clear to you.</p> <p>Any help is appreciated. Thank you.</p>
Community
-1
<p>We can prove much more (in <span class="math-container">$S_n(\mathbb{R})$</span>, the set of <span class="math-container">$n\times n$</span> real symmetric matrices).</p> <p><span class="math-container">$\textbf{Proposition 1}$</span>. The set <span class="math-container">$Z_I=\{A\in S_n(\mathbb{R});spectrum(A)\subset I\}$</span> is convex (where <span class="math-container">$I$</span> is some interval of <span class="math-container">$\mathbb{R}$</span>).</p> <p><span class="math-container">$\textbf{Proof}$</span>. We use the fact that if <span class="math-container">$A\in S_n(\mathbb{R})$</span>, then <span class="math-container">$\sup_{||x||_2=1}x^TAx=\sup(spectrum(A)),\inf_{||x||_2=1}x^TAx=\inf(spectrum(A))$</span>.</p> <p>Then <span class="math-container">$\lambda x^TAx+(1-\lambda)x^TBx\in I$</span> when <span class="math-container">$A,B\in Z_I,||x||_2=1,\lambda\in [0,1]$</span>. <span class="math-container">$\square$</span></p> <p><span class="math-container">$\textbf{Proposition 2}$</span>. If <span class="math-container">$I=[a,b],a&lt;b$</span>, then <span class="math-container">$Z_I$</span> is homeomorphic to the compact ball <span class="math-container">$B_{n(n+1)/2}$</span>.</p> <p><span class="math-container">$\textbf{Proof}.$</span> <span class="math-container">$Z_I$</span> is also closed (the eigenvalues are continuous functions of the entries) and bounded (<span class="math-container">$||A||_2=\sup_{\lambda\in spectrum(A)}(|\lambda|))$</span>.</p> <p>Finally, <span class="math-container">$Z_I$</span> is convex and compact and, therefore, homeomorphic to the compact ball in <span class="math-container">$\mathbb{R}^k$</span>, where <span class="math-container">$k=n(n+1)/2$</span> is the dimension of <span class="math-container">$S_n(\mathbb{R})$</span>.</p>
3,082,944
<blockquote> <p>Prove that the space <span class="math-container">$X$</span> of all symmetric matrices in <span class="math-container">$GL_2(\mathbb R)$</span> with both eigenvalues belonging to the interval <span class="math-container">$(0,2),$</span> with the topology inherited from <span class="math-container">$M_2(\mathbb R) $</span>, is <strong>connected</strong>.</p> </blockquote> <p>The space of all symmetric matrices in <span class="math-container">$M_2(\mathbb R)$</span> is path-connected. </p> <p>I was not able to show why <span class="math-container">$\det(\lambda A+(1-\lambda)B)\neq 0$</span> where <span class="math-container">$\lambda \in (0,1)$</span> and <span class="math-container">$A,B\in GL_2(\mathbb R), A=A^T,B=B^T.$</span></p> <p>Also, how can the assumption that the eigenvalues lie in <span class="math-container">$(0,2)$</span> be used to prove connectedness?</p> <p>I hope my doubts are clear to you.</p> <p>Any help is appreciated. Thank you.</p>
user1551
1,551
<p>One-line proof: every <span class="math-container">$A\in X$</span> is path-connected to <span class="math-container">$I$</span> by the line segment <span class="math-container">$\{tA+(1-t)I: 0\leq t\leq 1\}\subseteq X$</span>.</p> <p>Your original approach also works. Let <span class="math-container">$A,B\in X$</span> and <span class="math-container">$0\leq t\leq 1$</span>. Then <span class="math-container">$A,B$</span> and in turn <span class="math-container">$tA+(1-t)B$</span> are positive definite and hence all eigenvalues of <span class="math-container">$tA+(1-t)B$</span> are positive. Also, <span class="math-container">$$ \|tA+(1-t)B\|_2 \leq t\|A\|_2+(1-t)\|B\|_2 &lt; 2t+2(1-t)=2. $$</span> Therefore all eigenvalues of <span class="math-container">$tA+(1-t)B$</span> lie inside the interval <span class="math-container">$(0,2)$</span>. Consequently, <span class="math-container">$X$</span> is path-connected because <span class="math-container">$\{tA+(1-t)B: 0\leq t\leq 1\}\subseteq X$</span> for any <span class="math-container">$A,B\in X$</span>.</p>
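The segment argument can be illustrated numerically with closed-form $2\times 2$ eigenvalues (a sketch; the two sample matrices are my own choice of elements of $X$):

```python
import math

def sym2_eigs(a, b, d):
    # Eigenvalues of the symmetric matrix [[a, b], [b, d]]; for symmetric
    # matrices the discriminant is nonnegative, so both roots are real.
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 - disc, tr / 2.0 + disc

A = (1.5, 0.3, 0.5)    # (a, b, d) entries of a matrix in X
B = (0.4, -0.2, 1.8)   # another matrix in X

# Every point on the segment tA + (1-t)B keeps its spectrum inside (0, 2).
on_segment = []
for k in range(101):
    t = k / 100.0
    m = tuple(t * p + (1.0 - t) * q for p, q in zip(A, B))
    lo, hi = sym2_eigs(*m)
    on_segment.append(0.0 < lo and hi < 2.0)
```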
3,763,744
<p>The helix is a curve <span class="math-container">$x(t) \in \mathbb{R}^3$</span> defined by:</p> <p><span class="math-container">$$ x(t) = \begin{bmatrix} \sin(t) \\ \cos(t) \\ t \end{bmatrix} $$</span></p> <p>and it takes the classic shape:</p> <p><a href="https://en.wikipedia.org/wiki/File:Rising_circular.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4W73Vm.png" alt="simple helix" /></a></p> <p>Does this have a natural extension from <span class="math-container">$\mathbb{R}^3$</span> to <span class="math-container">$\mathbb{R}^4$</span>? (Or even <span class="math-container">$\mathbb{R}^n$</span>?)</p> <hr /> <hr /> <h3>What I've tried so far:</h3> <p>The classic <span class="math-container">$\mathbb{R}^3$</span> helix curve above has two nice properties:</p> <ul> <li><span class="math-container">$x(t)$</span> has constant distance from the axis of propagation <span class="math-container">$\hat{e}_3$</span>, where <span class="math-container">$\hat{e}_3 = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}$</span></li> <li><span class="math-container">$x(t)$</span> has constant angular velocity when projected onto the plane normal to <span class="math-container">$\hat{e}_3$</span>. i.e. the vector <span class="math-container">$(x_1(t), x_2(t))$</span> has polar coordinates <span class="math-container">$(r, \theta) = (1, t)$</span>, so <span class="math-container">$\dot{\theta} \equiv 1$</span>.</li> </ul> <p>The classic helix can be viewed as a parametric walk of a circle in <span class="math-container">$\mathbb{R}^2$</span>, with the parameter <span class="math-container">$t$</span> added as the third dimension. A natural extension to a helix in <span class="math-container">$\mathbb{R}^n$</span> would be a parametric walk of a curve on a hypersphere in <span class="math-container">$\mathbb{R}^{n-1}$</span>, with parameter <span class="math-container">$t$</span> added as the nth dimension. 
So for <span class="math-container">$\mathbb{R}^4$</span>, one could choose a <a href="https://en.wikipedia.org/wiki/Spiral#Spherical_spirals" rel="nofollow noreferrer">spherical spiral</a> to walk the sphere in <span class="math-container">$\mathbb{R}^3$</span>, and use parameter t as the 4th dimension:</p> <p><span class="math-container">$$ x(t) = \begin{bmatrix} \sin(t) \cos(ct) \\ \sin(t) \sin(ct) \\ \cos(t) \\ t \end{bmatrix} $$</span></p> <p>The first three components are rendered on wikipedia as:</p> <p><a href="https://en.wikipedia.org/wiki/File:Kugel-spirale-1-2.svg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uAlBAm.png" alt="spherical spiral" /></a></p> <p>This construction matches the two properties I listed:</p> <ul> <li><span class="math-container">$x(t)$</span> has constant distance from the axis of propagation <span class="math-container">$\hat{e}_4$</span>, where <span class="math-container">$\hat{e}_4 = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 1 \end{bmatrix}$</span></li> <li>When <span class="math-container">$c=1$</span>, <span class="math-container">$x(t)$</span> has constant angular velocity when projected onto the 3-plane normal to <span class="math-container">$\hat{e}_4$</span>. i.e. the vector <span class="math-container">$(x_1(t), x_2(t), x_3(t))$</span> has spherical coordinates <span class="math-container">$(r, \theta, \phi) = (1, t, t)$</span>, so <span class="math-container">$\dot{\theta} = \dot{\phi} \equiv 1$</span>.</li> </ul> <p>It's technically a direct extension of the <span class="math-container">$\mathbb{R}^3$</span> helix, since <span class="math-container">$c=0$</span> induces an identical curve (up to a projection.) But it still feels a little arbitrary, and the closed form will be quite ugly in higher dimensions.</p> <p>Is there a generally accepted extension of the classical circular helix in <span class="math-container">$\mathbb{R}^3$</span> to <span class="math-container">$\mathbb{R}^4$</span>? 
(Or even <span class="math-container">$\mathbb{R}^n$</span>?) And do its properties or construction at all resemble the above?</p> <hr /> <p>After some research, I've learned that there are interesting generalizations of helices in <span class="math-container">$\mathbb{R}^n$</span>, defined in terms of derivative constraints, Frenet frames, etc. such that even polynomial curves can behave as helices. [<a href="https://link.springer.com/article/10.1007/s00006-018-0835-1" rel="nofollow noreferrer">Altunkaya and Kula 2018</a>]. However, that's much more general than I'm seeking, since those are aperiodic, and may have unbounded distance from the axis of propagation. But the existence of such work is promising - I just don't know how to search this space well.</p>
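The first bullet-point property of the proposed $\mathbb{R}^4$ curve is easy to confirm numerically (a sketch; $c$ is the same free spiral-speed parameter as above):

```python
import math

def helix4(t, c=2.0):
    # The proposed R^4 helix: a spherical spiral in the first three
    # coordinates, with the parameter t itself as the fourth coordinate.
    return (math.sin(t) * math.cos(c * t),
            math.sin(t) * math.sin(c * t),
            math.cos(t),
            t)

def axis_distance(p):
    # Distance from the propagation axis e_4 = norm of the first three coords.
    return math.sqrt(p[0] ** 2 + p[1] ** 2 + p[2] ** 2)

radii = [axis_distance(helix4(0.1 * k)) for k in range(200)]
```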
Mike F
6,608
<p>Any answer to this question is necessarily going to be a bit arbitrary, but here are a few thoughts:</p> <ul> <li>We have an interesting map <span class="math-container">$\theta \mapsto (\cos \theta, \sin \theta) : \mathbb{R} \to S^1$</span>. The helix is the graph of this map.</li> <li>In this spirit, we might consider that the graph of a parametrization of a manifold is a generalized helix. For example, we have the spherical coordinates parametrization of the 2-sphere <span class="math-container">$(\theta, \phi) \mapsto (\cos\theta \sin \phi,\sin \theta \sin \phi, \cos \phi) : \mathbb{R}^2 \to S^2$</span>. We could consider its graph, a subset of <span class="math-container">$\mathbb{R}^2 \times \mathbb{R}^3 = \mathbb{R}^5$</span> to be a generalized helix.</li> <li>We could also concentrate on the fact that <span class="math-container">$\mathbb{R}$</span> is the universal cover of <span class="math-container">$S^1$</span>. So maybe, given a submanifold <span class="math-container">$M \subset \mathbb{R}^n$</span>, we should consider the graph of the projection <span class="math-container">$\widetilde M \to M$</span> to be a generalized helix. Since <span class="math-container">$S^2$</span> is its own universal cover, we just get another copy of <span class="math-container">$S^2$</span> back in this case.</li> </ul>
2,839,802
<p>Consider the following system of equations</p> <p>$$\begin{cases} \dot x=y-x^2-x \\ \dot y=3x-x^2-y \\ \end{cases} $$ Then the equilibria are $(0,0)$ and $(1,2)$. Using linearization around $(1,2)$ one can obtain $$ \begin{pmatrix} \dot x \\ \dot y \\ \end{pmatrix} = \begin{pmatrix} -3 &amp; 1 \\ 1 &amp; -1 \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ \end{pmatrix} $$ The matrix has two distinct eigenvalues and both of them have negative real parts, implying that the equilibrium $(1,2)$ is asymptotically stable.</p> <p>My problem is with $(0,0)$, where the linearization fails, as we have an eigenvalue with positive real part $$ \begin{pmatrix} \dot x \\ \dot y \\ \end{pmatrix} = \begin{pmatrix} -1 &amp; 1 \\ 3 &amp; -1 \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ \end{pmatrix} $$ So, we need to construct a Lyapunov function, but the "usual" $V(x,y)=ax^2+by^2$ doesn't seem to work in this case. </p>
Alex Jones
350,433
<p>Linear stability analysis comes to a useful conclusion as long as there are no eigenvalues with zero real part. If all eigenvalues have nonzero real part, the stability of the nonlinear system matches that of the linear system. Simply put, since the linearization at the origin has an eigenvalue with positive real part, the fixed point is unstable in both systems.</p> <p>The existence of a Lyapunov function implies stability, but it will be hard to show the existence of such a function, and even harder to show nonexistence, directly. Lyapunov functions should be used as a last resort, when linear stability analysis fails and you're pretty sure that the nonlinear system is stable, but you need to prove it. </p>
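The eigenvalues of both linearizations can be checked with the closed-form $2\times 2$ formula (a quick sketch, not part of the original answer):

```python
import math

def eigs2(a, b, c, d):
    # Real eigenvalues of [[a, b], [c, d]], assuming a nonnegative discriminant.
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 - disc, tr / 2.0 + disc

at_12 = eigs2(-3.0, 1.0, 1.0, -1.0)   # Jacobian at (1, 2): -2 +/- sqrt(2)
at_00 = eigs2(-1.0, 1.0, 3.0, -1.0)   # Jacobian at (0, 0): -1 +/- sqrt(3)
```

Both eigenvalues at $(1,2)$ are negative (asymptotic stability); at the origin one eigenvalue is positive, i.e. the origin is an unstable saddle.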
2,839,802
<p>Consider the following system of equations</p> <p>$$\begin{cases} \dot x=y-x^2-x \\ \dot y=3x-x^2-y \\ \end{cases} $$ Then, the equilibriums are $(0,0)$ and $(1,2)$. Using linearization around $(1,2)$ one can obtain $$ \begin{pmatrix} \dot x \\ \dot y \\ \end{pmatrix} = \begin{pmatrix} -3 &amp; 1 \\ 1 &amp; -1 \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ \end{pmatrix} $$ The matrix has two distinct eigenvalues and both of them have negative real values, implying that the equilibria $(1,2)$ is asymptotically stable.</p> <p>My problem is with $(0,0)$, where the linearization fails, as we have eigenvalues with positive real values $$ \begin{pmatrix} \dot x \\ \dot y \\ \end{pmatrix} = \begin{pmatrix} -1 &amp; 1 \\ 3 &amp; -1 \\ \end{pmatrix} \begin{pmatrix} x \\ y \\ \end{pmatrix} $$ So, we need to construct a Lyapunov function, but the "usual" $V(x,y)=ax^2+by^2$ doesn't seem to work in this case. </p>
Cesareo
397,348
<p>Near the point $(0,0)$ the DE system can be approximated as</p> <p>$$ \dot x = y-x\\ \dot y = 3x-y $$</p> <p>or</p> <p>$$ 3 x\dot x = 3 x y - 3 x^2\\ y \dot y = 3 x y - y^2 $$</p> <p>Now, subtracting, we have</p> <p>$$ \frac 12\frac{d}{dt}(y^2-3x^2)+(y^2-3x^2) = 0 $$</p> <p>or, calling $z = y^2-3x^2$, the equivalent equation $\dot z + 2z = 0\;\;$ with solution $z = C e^{-2t}$, i.e.</p> <p>$$ y^2-3x^2 = (y+\sqrt 3 x)(y-\sqrt 3 x)=C e^{-2t}, $$</p> <p>which gives the typical orbits of a saddle point; hence $(0,0)$ is unstable. </p> <p><a href="https://i.stack.imgur.com/MUskC.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MUskC.jpg" alt="enter image description here"></a></p> <p>I hope this helps.</p>
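The decay law $z = C e^{-2t}$ for $z = y^2 - 3x^2$ along the linearized flow can be checked with a crude Euler integration (a sketch; step size, horizon, and initial point are arbitrary choices of mine):

```python
import math

# Euler-integrate the linearized system x' = y - x, y' = 3x - y
# and track z = y^2 - 3x^2, which should follow z(0) * exp(-2t).
x, y = 1.0, 0.0
z0 = y * y - 3.0 * x * x
h, steps = 1e-4, 10_000          # integrate up to t = 1
for _ in range(steps):
    x, y = x + h * (y - x), y + h * (3.0 * x - y)
z = y * y - 3.0 * x * x
```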
2,538,305
<p>Here is the question I am struggling with:</p> <p>A box has 16 Balls, of which 8 are Green, 6 are Red, and 2 are Blue. If you draw 2 Balls with replacement, what is the probability of getting 1 Green Ball and 1 Blue Ball in no particular order?</p> <p>I see three different ways to get an answer to this problem. Please refute my wrong answers with explanations because I am confused.</p> <p>Method 1:</p> <p>Probability of getting one green: 8/16</p> <p>Probability of getting one blue: 2/16</p> <p>(8/16) * (2/16) = 1/16 final answer</p> <p>Method 2: Since the question said order does not matter, I still figured order does count into this equation so I approached it by finding the probability that the green ball is selected first, then the blue ball. Then add that probability to selecting the blue ball first, then the green ball.</p> <p>Probability of getting green first then blue: (8/16) * (2/16) = 1/16</p> <p>Probability of getting blue first then green: (2/16) * (8/16) = 1/16</p> <p>therefore, the final answer is 1/16 + 1/16 = 1/8.</p> <p>Note: This confuses me because we are double counting the answer, the problem said that order does not matter, but why doesn't the Method 1 take this into account?</p> <p>Method 3 (Combination Method):</p> <p>There are Comb(16,2) possible ways to select 2 balls out of 16</p> <p>There are Comb(8,1)*Comb(2,1) ways to select a green and a blue ball</p> <p>Probability of one green and one blue = Comb(8,1)*Comb(2,1)/Comb(16,2) = 16/120 = 2/15 final answer</p> <p>Which one of these, if any, is the correct answer? The book says it is 1/8, but can someone please explain more and explain why my other methods are wrong. Thanks!</p>
Xander Henderson
468,350
<p>Another way of thinking of the problem is to actually write down the sample space and compute the probabilities. In this case, there are 9 possible outcomes: $$ \{ GG, GR, GB, RG, RR, RB, BG, BR, BB \}, $$ where, for example, $RG$ denotes the event of first drawing a red ball then a green ball. There are two "favorable" events: $GB$ and $BG$. Since the two favorable outcomes are disjoint, and each draw is independent (thanks to the fact that we are replacing the balls), we have $$ P(GB \lor BG) = P(GB) + P(BG) = \frac{8}{16} \cdot \frac{2}{16} + \frac{2}{16}\cdot \frac{8}{16} = 2 \cdot \frac{1}{2}\cdot \frac{1}{8} = \frac{1}{8}. $$</p> <hr> <p>EDIT: Upon reflection, it may also be useful to explain where you went wrong.</p> <p><strong>Method 1:</strong> Here you have computed the probability of first drawing a green ball, then drawing a blue ball. Order matters in this computation, but order <em>does not</em> matter in the answer. Thus you need to also work out the probability of first drawing a blue ball, then drawing a green ball, and add the two together.</p> <p><strong>Method 2:</strong> This approach is correct.</p> <p><strong>Method 3:</strong> Typically, combinations aren't the right tool when trying to work with problems that include replacement. In particular, your denominator $\binom{16}{2}$ represents the number of ways of drawing two balls <em>without</em> replacement. I don't think that there is really a nice combination-y way of writing out this problem.</p>
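The sample-space computation can be reproduced exactly with rational arithmetic (a short sketch):

```python
from fractions import Fraction
from itertools import product

# Two draws with replacement from 8 green, 6 red, 2 blue out of 16 balls.
p = {"G": Fraction(8, 16), "R": Fraction(6, 16), "B": Fraction(2, 16)}

# Sum P(first) * P(second) over the two ordered favorable outcomes GB and BG.
prob = sum(p[a] * p[b] for a, b in product(p, repeat=2) if {a, b} == {"G", "B"})
```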
163,534
<p>Consider a set of $N$ points in $n$-dimensional space, i.e. \begin{align*} \{x_1, \dots, x_N\} \subset \mathbb R^n. \end{align*} Let us be given a finite family of non-injective matrices \begin{align*} \{M_j \in \mathbb R^{m \times n} : j = 1, \dots, J\}, \end{align*} e.g. $m&lt;n$.</p> <p>In a nutshell, the problem I would like to address is the following: For each $j = 1, \dots, J$ we are given the set of points (i.e. no knowledge about ordering!) \begin{align*} \{M_j x_1 , \dots, M_j x_N\}, \end{align*} which can be seen as a projection of the set $\{x_1, \dots, x_N\}$. </p> <p>My question is: Under which conditions on the family of projection matrices can we uniquely reconstruct the set $\{x_1, \dots, x_N\}$? Intuitively I would say that $J$ has to be large enough (dependent on $N$) and that the matrices should fulfill some assumption like \begin{align*} \bigcap_{j = 1,\dots, J} \ker M_j = \{0\}. \end{align*}</p>
Tommi
1,445
<p>I don't provide an actual answer (I don't have one), but do provide some musings that might be helpful to others who would like to consider this problem.</p> <p>First, if there is only one point, then the condition $$\bigcap_{j = 1,\dots, J} \ker M_j = \{0\}$$ is both necessary and sufficient. It is clearly necessary even if there are more points.</p> <p>Consider the two-dimensional case, i.e. the situation $n = m = 2$, and suppose the matrices are projections to one-dimensional subspaces. For any two projections it seems possible to place three points so that they can't be distinguished from four points - for example, suppose the two projections are coordinate projections, and take the three or four points to be corners of a square.</p> <p>By drawing additional pictures it seems that, in the plane, $J$ maps are not enough (in the sense that one can select $J+2$ points so that omitting a specific one of them does not change the set of projections), but $J+1$ projections to different lines do seem to suffice. Actually proving this would presumably be a matter of linear algebra, but I have not done it.</p> <p>In two dimensions the strategy seems to be to consider projections to arbitrary and different lines. In higher dimensions it might be useful to first consider projections to one-dimensional or $(n-1)$-dimensional subspaces, and only after getting some grip on those try to consider a situation with projections onto subspaces of mixed dimensions.</p>
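The square example from the third paragraph can be made concrete (a tiny sketch: three corners and all four corners of the unit square cast identical coordinate-projection shadows):

```python
# Project point sets onto each coordinate axis; only the *sets* of shadows
# are observed, with no ordering information.
def shadows(points):
    return ({x for x, _ in points}, {y for _, y in points})

three = {(0, 0), (0, 1), (1, 0)}      # three corners of the unit square
four = three | {(1, 1)}               # all four corners

indistinguishable = shadows(three) == shadows(four)
```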
163,534
<p>Consider a set of $N$ points in $n$-dimensional space, i.e. \begin{align*} \{x_1, \dots, x_N\} \subset \mathbb R^n. \end{align*} Let us be given a finite family of non-injective matrices \begin{align*} \{M_j \in \mathbb R^{m \times n} : j = 1, \dots, J\}, \end{align*} e.g. $m&lt;n$.</p> <p>In a nutshell, the problem I would like to address is the following: For each $j = 1, \dots, J$ we are given the set of points (i.e. no knowledge about ordering!) \begin{align*} \{M_j x_1 , \dots, M_j x_N\}, \end{align*} which can be seen as a projection of the set $\{x_1, \dots, x_N\}$. </p> <p>My question is: Under which conditions on the family of projection matrices can we uniquely reconstruct the set $\{x_1, \dots, x_N\}$? Intuitively I would say that $J$ has to be large enough (dependent on $N$) and that the matrices should fulfill some assumption like \begin{align*} \bigcap_{j = 1,\dots, J} \ker M_j = \{0\}. \end{align*}</p>
Liviu Nicolaescu
20,302
<p>Denote by $S$ your finite collection of $N$ points in $\newcommand{\bR}{\mathbb{R}}$ $\bR^n$. Here is how you can recover $S$ from the knowledge of its images via a finite collection of linear maps of rank $&lt;n$. More precisely one can use a universal family consisting of roughly $\frac{N^4}{2}$ matrices of type $(n-1)\times n$ and $n+2$ matrices of type $1\times n$. This may not be optimal but at least it is polynomial in $N$. (For a precise statement you can skip to the highlighted portion at the end of my answer.)</p> <p>Pick a finite collection $\newcommand{\eL}{\mathscr{L}}$ $\eL$ of linear maps $\bR^n\to\bR$ in <em>general position</em>, i.e., any $n$ of them are linearly independent. Denote by $\nu$ the cardinality of $\eL$. The number $\nu$ is $&gt; n$ and will be specified later. For any collection $C\subset \eL$ we obtain a linear map</p> <p>$$L_C:\bR^n\to\bR^C. $$</p> <p>Denote by $\binom{\eL}{n-1}$ the collection of subsets of $\eL$ of cardinality $n-1$. There are $\binom{\nu}{n-1}$ such subsets. If $C$ is such a collection, then the linear map $L_C:\bR^n\to\bR^{n-1}$ is surjective and it has a one-dimensional kernel. The <em>general position</em> assumption shows that if $C_0,C_1\in \binom{\eL}{n-1}$, then</p> <p>$$ C_0=C_1\iff \ker L_{C_0}=\ker L_{C_1}. $$</p> <p><strong>A. Suppose we know $L_C(S)$ for any collection $C\in\binom{\eL}{n-1}$.</strong> </p> <p>Assume $\nu$ is large enough so that</p> <p>$$\binom{\nu}{n-1}&gt;\binom{N}{2}. $$</p> <p>Since the $N$ points in $S$ determine at most $\binom{N}{2}$ lines, we deduce that at least one of the linear maps $L_C$, $C\in\binom{\eL}{n-1}$, restricts to an injective map $S\to \bR^C$. In particular we deduce that</p> <p>$$ N=\# S= \max_{C\in\binom{\eL}{n-1}} \# L_C(S). $$</p> <p>Choose $C_0\in\binom{\eL}{n-1}$ such that $\# L_{C_0}(S)=\# S=N$. Without loss of generality we can assume that $L_{C_0}$ is the projection</p> <p>$$P_0:\bR^n\to \bR^{n-1},\;\;(x_1,\dotsc,x_n)\mapsto (x_1,\dotsc, x_{n-1}). 
$$</p> <p>For each point $s\in S$ we set $s':=P_0(s)$. Now we have complete knowledge of the set</p> <p>$$ S'=\bigl\lbrace\; s';\;\;s\in S\;\bigr\}=P_0(S). $$</p> <p>The set $S'\subset \bR^{n-1}$ has the same cardinality as $S$. Moreover any point $s'\in S'$ determines a vertical line, i.e., a line parallel with $\ker P_0$, </p> <p>$$ \ell_{s'}=P_0^{-1}(s')=\bigl\{\; (s', t)\in\bR^n;\;\;t\in\bR\;\bigr\}. $$</p> <p>We now have determined $N$ vertical lines and each one of them contains exactly one point in $S$. </p> <p><strong>B. Suppose that we know $L(S)\subset \bR$ for any $L\in\eL$.</strong></p> <p>Choose a linear functional $L\in \eL\setminus C_0$. The set $L(S)$ has $m\leq N$ elements $r_1&lt;\cdots &lt;r_m$. We obtain $m$ hyperplanes</p> <p>$$H_j(L)=\{ L(x)=r_j\},\;\;j=1,\dotsc, m, $$</p> <p>and a set $X(S,L)$ consisting of $Nm$ points</p> <p>$$ H_j(L)\cap \ell_{s'},\;\;j=1,\dotsc, m,\;\;s'\in S'. $$</p> <p>Clearly $S\subset X(S,L)$. Thus $S$ can only be one of the $\binom{Nm}{N}$ subsets of $X(S,L)$ of cardinality $N$. Doing this with any $L\in \eL\setminus C_0$ we deduce</p> <p>$$ S\subset \bigcap_{L\in\eL\setminus C_0} X(S,L). $$</p> <p>Fix a linear map $L_0\in \eL\setminus C_0$ and set $X_0=X(S, L_0)$. We know that </p> <p>$$ S\subset X_0,\;\; \# X_0\leq N^2. $$</p> <p>Suppose that $\nu$ is large enough so that</p> <p>$$\binom{\nu}{n-1}&gt;\binom{N^2}{2} +2. $$</p> <p>We can then find a collection $C_1\in\binom{\eL}{n-1}$ such that $C_1\neq C_0$ and the restriction of $L_{C_1}$ to $X_0$ is injective. We now know exactly $L_{C_1}(X_0)$ and $S_1:=L_{C_1}(S)\subset L_{C_1}(X_0)$. Note that $\# S_1=\# S=N$.</p> <p>For each point $s_1\in S_1$ we get a line $\ell_{s_1}= L_{C_1}^{-1}(s_1)$.
Let us observe that each line $\ell_{s_1}$ intersects exactly one of the lines $\ell_{s'}$, $s'\in S'$, because </p> <p>$$\ell_{s_1}\cap\ell_{s'}\subset X_0, $$</p> <p>and the restriction of $L_{C_1}$ to $X_0$ is one-to-one.</p> <blockquote> <p>To conclude, if $\eL\subset {\rm Hom}\;(\bR^n,\bR)$ is a finite collection in general position whose cardinality $\nu$ satisfies </p> <p>$$\binom{\nu}{n-1}&gt;\binom{N^2}{2}+2, \tag{$\nu$}$$</p> <p>and we know $L_C(S)$ $\forall C\subset \eL$ of cardinality $1$ or $n-1$, then we can completely recover $S$.</p> </blockquote> <p><strong>Remark.</strong> We can relax assumption <strong>B</strong> to</p> <p><strong>B'. We know $L(S)$ for any $L$ in a family $F\subset \eL$ of cardinality $n+2$.</strong></p> <p><strong>Update.</strong> Let me explain how the above procedure can be used to recover <em>multisets</em>. First, let me define a <em>discrete weight distribution</em> or <em>d.w.d.</em> in $\bR^n$ to be a pair $(S, w)$ where $S$ is a finite subset of $\bR^n$ and $w$ is a function $w:S\to (0,\infty)$. We say that $S$ is the <em>support</em> of the d.w.d.</p> <p>Given a d.w.d. $(S,w)$ in $\bR^n$ and a map $f:\bR^n\to\bR^m$ we obtain a d.w.d. $f_*(S,w)$ in $\bR^m$ given by</p> <p>$$ f_*( S, w)= \bigl(\; f(S),\; f_* w\;\bigr), $$</p> <p>where for any $y\in f(S)$ we set</p> <p>$$ f_* w(y)=\sum_{x\in f^{-1}(y)\cap S} w(x). $$</p> <p>Suppose that $(S,w)$ is a d.w.d. in $\bR^n$ $\DeclareMathOperator{\Hom}{Hom}$ such that $|S|=N$, and $\eL\subset \Hom(\bR^n,\bR)$ is of cardinality $\nu$ constrained by the inequality ($\nu$) above. I claim that if we know the d.w.d.'s $(L_C)_*(S,w)$ for any subset $C\subset \eL$ of cardinality $1$ and $n-1$, then we can completely determine $(S,w)$.</p> <p>To see this, note that the above discussion shows that this information can be used to determine the support $S$ of the unknown d.w.d. $(S,w)$.
To determine $w$ choose a subset $C_0\in \binom{\eL}{n-1}$ such that the restriction of $L_{C_0}$ to $S$ is injective. Let $x\in S$ and set $y=L_{C_0}(x)\in\bR^{C_0}$. In this special case we have</p> <p>$$ w(x)= (L_{C_0})_*w(y). $$</p> <p>From our assumption, the quantity in the right hand side of the above equality is known.</p>
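The key counting step in part <strong>A</strong> — that a generic rank-$(n-1)$ projection is already injective on a fixed finite set — can be illustrated numerically. The sketch below uses random points and random $(n-1)\times n$ Gaussian matrices as stand-ins for the maps $L_C$; it is an illustration of genericity, not the universal family constructed in the answer.

```python
import random

random.seed(0)
n, N = 3, 6
# an illustrative finite set S of N points in R^n
S = [tuple(random.uniform(-1, 1) for _ in range(n)) for _ in range(N)]

def apply_map(L, x):
    # L is a list of rows (linear functionals); returns L(x)
    return tuple(sum(a * b for a, b in zip(row, x)) for row in L)

# random (n-1) x n matrices stand in for the maps L_C of the answer
found = False
for _ in range(20):
    L = [[random.gauss(0, 1) for _ in range(n)] for _ in range(n - 1)]
    if len({apply_map(L, s) for s in S}) == N:   # L is injective on S
        found = True
        break
print(found)  # True: a generic rank-(n-1) projection separates the N points
```

With probability one a single random projection already has $\#L(S)=\#S$, which is exactly the fact the pigeonhole count over $\binom{\eL}{n-1}$ guarantees deterministically.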
3,134,991
<p>If nine coins are tossed, what is the probability that the number of heads is even?</p> <p>So there can either be 0 heads, 2 heads, 4 heads, 6 heads, or 8 heads.</p> <p>We have <span class="math-container">$n = 9$</span> trials, find the probability of each <span class="math-container">$k$</span> for <span class="math-container">$k = 0, 2, 4, 6, 8$</span></p> <p><span class="math-container">$n = 9, k = 0$</span></p> <p><span class="math-container">$$\binom{9}{0}\bigg(\frac{1}{2}\bigg)^0\bigg(\frac{1}{2}\bigg)^{9}$$</span> </p> <p><span class="math-container">$n = 9, k = 2$</span></p> <p><span class="math-container">$$\binom{9}{2}\bigg(\frac{1}{2}\bigg)^2\bigg(\frac{1}{2}\bigg)^{7}$$</span> </p> <p><span class="math-container">$n = 9, k = 4$</span> <span class="math-container">$$\binom{9}{4}\bigg(\frac{1}{2}\bigg)^4\bigg(\frac{1}{2}\bigg)^{5}$$</span></p> <p><span class="math-container">$n = 9, k = 6$</span></p> <p><span class="math-container">$$\binom{9}{6}\bigg(\frac{1}{2}\bigg)^6\bigg(\frac{1}{2}\bigg)^{3}$$</span></p> <p><span class="math-container">$n = 9, k = 8$</span></p> <p><span class="math-container">$$\binom{9}{8}\bigg(\frac{1}{2}\bigg)^8\bigg(\frac{1}{2}\bigg)^{1}$$</span></p> <p>Add all of these up: </p> <p><span class="math-container">$$=.64$$</span> so there's a 64% chance of probability?</p>
Remellion
639,782
<p>There's a way to do it with barely any maths:</p> <p>It's clear that if there's an odd number of heads, there's an even number of tails and vice versa, so P(even number of heads) + P(even number of tails) = 1.</p> <p>Formally rename "heads" to "tails". The problem remains unchanged.</p> <p>So P(even number of heads) = P(even number of tails) = 1/2.</p>
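The symmetry argument can be confirmed by summing the binomial probabilities directly (the computation attempted in the question):

```python
from math import comb

n = 9
# P(even number of heads) = sum over even k of C(9, k) / 2^9
p_even = sum(comb(n, k) for k in range(0, n + 1, 2)) / 2 ** n
print(p_even)  # 0.5, as the symmetry argument predicts
```

The even binomial coefficients sum to $2^{8}=256$, so the probability is $256/512=1/2$ exactly (the $0.64$ in the question is an arithmetic slip).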
2,406,061
<p>I am also confused about whether these are symbols or have some meaning of their own. PS- I know that <span class="math-container">$\operatorname{d}y\over\operatorname{d}x$</span> geometrically represents the slope. But, I've come across <span class="math-container">$\operatorname{d}x\over\operatorname{d}y$</span> to make problems easier. What does <span class="math-container">$\operatorname{d}x\over\operatorname{d}y$</span> mean?</p>
Michael Hardy
11,667
<p><a href="https://math.stackexchange.com/questions/200393/what-is-dx-in-integration/200403#200403">Here</a> $\leftarrow$ is something I wrote about that.</p> <p>$dx$ is thought of as an infinitely small but nonzero increment of $x$, just as $\Delta x$ is a finite increment of $x$.</p> <p>$dy$ is the corresponding infinitely small increment of $y$.</p> <p>Thus if at some point on the graph, $y$ is changing $3$ times as fast as $x$ is changing, then $dy=3\,dx$ at that point.</p> <p>This is a quite useful heuristic even if not logically rigorous. Mathematicians have become extraordinarily squeamish about heuristics that are not logically rigorous, and that's why you don't often see this in textbooks today. Look at Silvanus Thompson's <em>Calculus Made Easy</em>.</p> <p>Gottfried Wilhelm Leibniz introduced this notation in the 1600s.</p>
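The heuristic "$dy=3\,dx$ when $y$ changes $3$ times as fast as $x$" can be seen numerically. Here $y=x^3$ near $x=1$ is an illustrative choice, since its rate of change there is $3$:

```python
def f(x):
    return x ** 3        # illustrative curve: dy/dx = 3 at x = 1

x0 = 1.0
for dx in (1e-2, 1e-4, 1e-6):
    dy = f(x0 + dx) - f(x0)
    print(dy / dx)       # approaches 3: for small dx, dy is about 3*dx
```

As the increment shrinks, the ratio of finite increments $\Delta y/\Delta x$ settles at $3$ — the finite shadow of the infinitesimal statement $dy=3\,dx$.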
97,130
<p>I tried to prove that $$(1-2x)^2=1/3+4/\pi^2\sum_1^\infty \cos(2n x \pi)/n^2$$ for $x \in [0,1)$ with Fourier analysis, but I just found a Fourier series which defines the function. I also found the fourier series of $\cos(2n x \pi)$.</p> <p>I don't think these results are helpful.</p> <p>Any suggestions on how to prove this equation?</p>
Christian Blatter
1,303
<p>The series on the right hand has terms of the form $a_n\cos(2\pi n x)$ with $a_n\ne0$ for all $n$. This indicates a fundamental period of length $L=1$. When the sum of such a series is claimed to be equal to the function $f\!: x\mapsto (1-2x)^2$ on the interval $[0,1]$ then it has to be the Fourier series of the function $\tilde f$ obtained from $f$ by periodic extension with period $1$. As $\tilde f$ is even (check it!) its Fourier series contains only $\cos$-terms $a_n\cos(2\pi n x)$ to begin with. The $a_n$ are given by the formula $$ a_n={2\over L}\int_0^L(1-2x)^2\ \cos(2\pi n x)\ dx = 2\int_0^1(1-2x)^2\ \cos(2\pi n x)\ dx\ .$$ For $n\geq 1$ the integral can be evaluated by partial integration (2 times); the case $n=0$ is immediate. Doing the calculations you will see that you get exactly the $a_n$ in the title.</p> <p>So we know now that the formal Fourier series of $\tilde f$ is actually the series in the title. In order to finish the case we have to invoke a fundamental theorem about such series: When $\tilde f$ is an $L$-periodic continuous function of bounded variation per period then its formal Fourier series converges uniformly on ${\mathbb R}$ to $\tilde f(x)$. Since the assumptions of the theorem are obviously satisfied in our case, the claim follows.</p>
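As a numerical sanity check, the coefficient formula above can be evaluated by quadrature (a simple midpoint rule; the step count is an arbitrary choice) and compared with the claimed values $a_0=2/3$ (so the constant term is $a_0/2=1/3$) and $a_n=4/(\pi^2 n^2)$:

```python
from math import cos, pi

def a(n, steps=20000):
    # a_n = 2 * integral_0^1 (1-2x)^2 cos(2*pi*n*x) dx, midpoint rule
    h = 1.0 / steps
    return 2 * sum((1 - 2 * (i + 0.5) * h) ** 2 * cos(2 * pi * n * (i + 0.5) * h) * h
                   for i in range(steps))

print(abs(a(0) - 2 / 3) < 1e-6)             # True: constant term a_0/2 = 1/3
print(abs(a(1) - 4 / pi ** 2) < 1e-6)       # True
print(abs(a(3) - 4 / (9 * pi ** 2)) < 1e-6) # True
```

The quadrature reproduces the coefficients in the title to well within the midpoint-rule error.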
455,979
<p>Suppose we have three 6-sided die that all share the same common bias:</p> <p>For a single dice: let the probability of rolling a 2 or $P(2) = 2{\times}P(1$), let the probability of rolling a 3 or $P(3) = 3{\times}P(1)$, and so on...</p> <p>Such that: $P(2) = 2P(1), P(3) = 3P(1), P(4) = 4P(1), P(5)=5P(1), P(6)=6P(1)$</p> <p>Given that the sum of all the probabilities is 1, we can get that $P(1) = 1/21, P(1) {\approx} .0476$</p> <p>My question are as follow:</p> <p>A) Given two die are rolled what is the probability that the sum of the faces of the die is 3? How about 6?</p> <p>B) Given that three die are rolled what is the probability that the sum of the faces of the die is 9? What about 10?</p> <p>I made this problem myself after adapting a problem I saw that involved a single dice that had the bias described above. I have a good idea of how the probability is calculated, I just wanted to see how other people approach the problem, just to make sure I am doing it the most efficient way possible. I was also wondering if there is a frequentist method of solving the problem.</p> <p>Also does anyone have Diagrams of how a PDF of would look like for 2 or 3 die rolled?</p>
Henry
6,460
<p>For (A) and two biased dice, $P(S=3)=\frac{1\times 2 + 2\times 1}{21^2} = \frac{4}{441}$ and similarly $P(S=6)= \frac{1\times 5 + 2\times 4 + 3\times 3 + 4\times 2 + 5\times 1}{441}$ (which you can simplify).</p> <p>For (B) and three biased dice, you cannot get a sum above $18$.</p> <p>The probability mass functions look like this, and you can see the Central Limit Theorem starting to have an impact despite the biasedness</p> <p><img src="https://i.stack.imgur.com/ViNGH.png" alt="enter image description here"></p>
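These values can be obtained mechanically by convolving the single-die distribution with itself, using exact rational arithmetic:

```python
from fractions import Fraction

# single-die pmf from the question: P(k) = k/21
die = {k: Fraction(k, 21) for k in range(1, 7)}

def convolve(p, q):
    # pmf of the sum of two independent discrete variables
    out = {}
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] = out.get(a + b, 0) + pa * qb
    return out

two = convolve(die, die)          # sum of two biased dice
three = convolve(two, die)        # sum of three biased dice
print(two[3])                     # 4/441, matching P(S=3) above
print(two[6], three[9], three[10])
```

Plotting `two` and `three` reproduces the probability mass functions shown in the figure.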
4,437,921
<p>Given <span class="math-container">$X$</span>~<span class="math-container">$N(0,\sigma^2_X)$</span> and <span class="math-container">$Y$</span>~<span class="math-container">$N(0,\sigma^2_Y)$</span> (<span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are normally distributed random variables with expectations given by <span class="math-container">$\mu_X=\mu_Y=0$</span> and variances given by <span class="math-container">$\sigma^2_X$</span> and <span class="math-container">$\sigma^2_Y)$</span>, can anyone help me figure out the expectation and variance of <span class="math-container">$Z=\sin(X)\cos(Y)$</span>?</p> <p>NOTE: <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are independent. I have found some threads that purport the following:</p> <p><span class="math-container">$E[\cos(X)]=e^{-\sigma^2_X/2}$</span></p> <p><span class="math-container">$E[\sin(X)]=0$</span></p> <p>but i still have trouble using this info to deduce <span class="math-container">$Z=\sin(X)\cos(Y)$</span></p> <p>I just realized the answer is zero. Ayyyyye</p> <h2><em><strong>EDIT:</strong></em></h2> <p>I am also curious to know...</p> <p><span class="math-container">$E[\sin X \cos X]$</span></p> <p>where <span class="math-container">$X$</span>~<span class="math-container">$N(0,\sigma)$</span>. Here, the domain of <span class="math-container">$X$</span> is any real number, but any such number actually corresponds to an angle in radians.</p>
heropup
118,193
<p>Independence of <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> implies <span class="math-container">$$\operatorname{E}[Z] = \operatorname{E}[\sin X] \operatorname{E}[\cos Y] = 0,$$</span> as you already observed.</p> <p>The variance is more complicated. You'd need to compute <span class="math-container">$$\operatorname{Var}[Z] = \operatorname{E}[Z^2] - \operatorname{E}[Z]^2 = \operatorname{E}[Z^2] = \operatorname{E}[\sin^2 X \cos^2 Y] = \operatorname{E}[\sin^2 X] \operatorname{E}[\cos^2 Y].$$</span></p> <p>To this end, recall that <span class="math-container">$$\cos^2 \theta = \frac{1 + \cos 2\theta}{2}, \quad \sin^2 \theta = \frac{1 - \cos 2\theta}{2}.$$</span> So for instance, <span class="math-container">$$\operatorname{E}[\sin^2 X] = \frac{1 - \operatorname{E}[\cos 2X]}{2} = \frac{1 - e^{-2\sigma_X^2}}{2},$$</span> since <span class="math-container">$2X \sim \operatorname{Normal}(0, 4 \sigma_X^2)$</span>. A similar result holds for <span class="math-container">$\operatorname{E}[\cos^2 Y]$</span>.</p> <p>All that is left is to demonstrate that <span class="math-container">$\operatorname{E}[\cos X] = e^{-\sigma_X^2/2}$</span>, which you state without proof. (The fact that <span class="math-container">$\operatorname{E}[\sin X] = 0$</span> is an immediate consequence of the fact that <span class="math-container">$|\sin X| \le 1$</span> and <span class="math-container">$\sin (-X) = -\sin X$</span>.) I recommend that you try this as an exercise.</p>
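Combining the pieces gives $\operatorname{Var}[Z]=\frac{1-e^{-2\sigma_X^2}}{2}\cdot\frac{1+e^{-2\sigma_Y^2}}{2}$. A quadrature check of this closed form (the standard deviations below are arbitrary illustrative values, and the midpoint rule is an ad hoc choice):

```python
from math import sin, cos, exp, pi, sqrt

def gauss_expect(f, sigma, steps=20000, width=10.0):
    # E[f(X)] for X ~ N(0, sigma^2), midpoint rule on [-width*sigma, width*sigma]
    h = 2 * width * sigma / steps
    total = 0.0
    for i in range(steps):
        x = -width * sigma + (i + 0.5) * h
        total += f(x) * exp(-x * x / (2 * sigma * sigma)) * h
    return total / (sigma * sqrt(2 * pi))

sx, sy = 0.7, 1.3   # arbitrary standard deviations
e_sin2 = gauss_expect(lambda x: sin(x) ** 2, sx)
e_cos2 = gauss_expect(lambda y: cos(y) ** 2, sy)

closed = (1 - exp(-2 * sx ** 2)) / 2 * (1 + exp(-2 * sy ** 2)) / 2
print(abs(e_sin2 * e_cos2 - closed) < 1e-5)  # True: Var[Z] matches the closed form
```

The numerically computed $\operatorname{E}[\sin^2 X]\operatorname{E}[\cos^2 Y]$ agrees with the closed form to quadrature accuracy.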
2,736
<p><a href="https://mathoverflow.net/questions/18989/generating-classical-groups-over-finite-local-rings">Generating Classical Groups over Finite Local Rings</a> asks a question that, according to the poster's own 'answer' <a href="https://mathoverflow.net/a/19098/2383">https://mathoverflow.net/a/19098/2383</a>, is not what was actually meant. I edited the question to reflect the stated intention (changing the words "semisimple elements" to "tori").</p> <p>One rejection said:</p> <blockquote> <p>This edit does not make the post even a little bit easier to read, easier to find, more accurate or more accessible. Changes are either completely superfluous or actively harm readability.</p> </blockquote> <p>I have the impression that this rejection was based on the idea that I was trying to substitute a synonym for "semisimple elements" merely to improve the wording, whereas actually I was (intentionally) changing the meaning (so that it agreed with the poster's stated intentions—5 years' interval suggesting that he or she was not going to make the edit him- or herself).</p> <p>That brings me to the next rejection, which said:</p> <blockquote> <p>There is already an answer; rather than edit out the question that was answered, you should add the clarification so answer is not dinged as wrong.</p> </blockquote> <p>This seems like a very clear action plan to me, and I agree that doing this is better than making my original edit. However, given the first rejection, I am reluctant to try again to make this edit in case there is any penalty for appearing 'argumentative'. Is it reasonable for me to try again?</p> <p>EDIT: I was too slow, and, in the meantime, the post was edited in a better way, simply inlining asm's clarification. I didn't pay enough attention and submitted my in-progress edit anyway, but I guess that it will (properly) get re-rejected.</p>
Todd Trimble
2,926
<p>Since there was a moderator flag, I went ahead and performed the edit to the question based on asm's "answer" and evident intentions. </p>
203,673
<p>Let $X$ be a locally compact Hausdorff space. Does there exist a locally finite open covering consisting of relatively compact sets?</p>
user642796
13,653
<p>Not necessarily. The ordinal space $\omega_1 = [ 0 , \omega_1 )$ provides a counterexample. </p> <p>To see that there is no locally finite cover by relatively compact sets, note that every compact subset &mdash; and therefore every relatively compact subset &mdash; is bounded. So if $\mathcal{A}$ is a cover by relatively compact sets, we may inductively pick $\alpha_n \in \omega_1$, $A_n \in \mathcal{A}$ ($n \in \omega$) satisfying</p> <ul> <li>$\alpha_n \in A_n$;</li> <li>$\alpha_{n+1} &gt; \sup ( A_n )$.</li> </ul> <p>Then each neighbourhood of $\alpha = \sup_n \alpha_n &lt; \omega_1$ meets infinitely many $A_n$.</p>
2,275,016
<p>I am currently going through Dummit/Foote's <em>Abstract Algebra</em>, and was asked to prove the above for a specific case but was wondering if it holds in the general case.</p> <p>I have a feeling it might be false but I am bad at coming up with counterexamples so I tried to think of some simple contradictions if it were to be true but didn't get anywhere. Any help is appreciated.</p> <p>Edit: By a commutative element, I mean that for any element $y\in G$, we have that $xy=yx$.</p>
Lukas Heger
348,926
<p>For any $y \in G$, we have $xy = yx$. Multiplying both sides by $x^{-1}$ on the left and on the right, we get $yx^{-1}=x^{-1}y$, so $x^{-1}$ commutes with every $y\in G$ as well. One can say even more: the set of commutative elements forms a subgroup, called the center of $G$.</p>
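A finite sanity check of the closure claim (the dihedral group $D_4$, realized as permutations of the square's corners, is an illustrative choice — it has a nontrivial center $\{e, r^2\}$):

```python
def comp(p, q):                        # composition of permutations: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

r, s = (1, 2, 3, 0), (0, 3, 2, 1)      # a rotation and a reflection generating D_4
G = {tuple(range(4))}
while True:                            # closure under composition
    new = {comp(a, b) for a in G | {r, s} for b in G | {r, s}} - G
    if not new:
        break
    G |= new

center = {g for g in G if all(comp(g, h) == comp(h, g) for h in G)}
e = tuple(range(4))
inverse = {g: next(h for h in G if comp(g, h) == e) for g in G}
print(len(G), len(center))                          # 8 2: center is {e, r^2}
print(all(inverse[g] in center for g in center))    # True: closed under inverses
```

The central elements' inverses are again central, as the two-line algebraic argument predicts.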
108,409
<p>Given a commutative ring <span class="math-container">$A$</span> with unity, Grothendieck used universal polynomials to define a <em>special</em> <span class="math-container">$\lambda$</span>-ring structure on <span class="math-container">$\Lambda(A):=1+t\:A[[t]]$</span>. Suppose <span class="math-container">$A$</span> is graded, say <span class="math-container">$A=\bigoplus_{i=0}^\infty A_i$</span>. In <a href="https://doi.org/10.1007/978-1-4757-1858-4" rel="nofollow noreferrer"><strong>Riemann–Roch Algebra</strong></a>, p. 11, Fulton and Lang define <span class="math-container">$\Lambda^{\circ}(A):=\{1+a_1t+a_2t^2\dotsb\mid a_i\in A_i\}$</span>. Then on page 15 they state that since the product and <span class="math-container">$\lambda$</span> operations of <span class="math-container">$\Lambda(A)$</span> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself, <span class="math-container">$\Lambda^\circ(A)$</span> becomes a <span class="math-container">$\lambda$</span>-ring (without unit). They use this <span class="math-container">$\lambda$</span>-ring structure of <span class="math-container">$\Lambda^\circ(A)$</span> in the proof of Theorem 3.1 on p. 16.</p> <p>However, a straightforward computation shows that the product in <span class="math-container">$\Lambda(A)$</span> does <em>not</em> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself. For example, if <span class="math-container">$1+a_1t+a_2t^2\dotsb$</span> and <span class="math-container">$1+b_1t+b_2t^2\dotsb$</span> are elements in <span class="math-container">$\Lambda^\circ(A)$</span>, then their product using the product of <span class="math-container">$\Lambda(A)$</span> is given by <span class="math-container">$1+P_1(a_1;b_1)t+P_2(a_1,a_2;b_1,b_2)t^2+\dotsb$</span>, where <span class="math-container">$P_1,P_2,\dotsc$</span> are certain universal polynomials. 
But <span class="math-container">$P_1(a_1;b_1)$</span> turns out to be <span class="math-container">$a_1b_1$</span> (<a href="https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxkYXJpamdyaW5iZXJnfGd4OjEwNzljZThlNDcwMGE5YmU" rel="nofollow noreferrer">see here</a>, p. 22) and <span class="math-container">$a_1b_1$</span> is not in <span class="math-container">$A_1$</span>, which shows the product is not in <span class="math-container">$\Lambda^\circ(A)$</span>.</p> <p><strong>Question.</strong> Is there an error in the book? If yes, can it be fixed?</p> <p><strong>Edit.</strong> If you know other errors in this book that one should be aware of, please share them here.</p>
darij grinberg
2,530
<p>This is not an answer, as I don't exactly know what Fulton and Lang are trying to achieve with the $\lambda$-ring structure on $\Lambda^{\circ}\left(A\right)$ (I must admit that, while I had the quixotic intent to read and rewrite Fulton-Lang's Chapter I in the notes that you cited, I never found the resolve to walk that talk). I can confirm your counterexample.</p> <p>What I think can be done (don't know if it is of any help) is the following: For every $i\in\mathbb N$, let $\Lambda^{i}_{\circ}\left(A\right)$ be the subset of $\Lambda\left(A\right)$ consisting of all formal power series of the form $1+a_1t+a_2t^2+a_3t^3+...$ with every $k$ satisfying $a_k\in A^{ik}$. Then, each such $\Lambda^{i} _ {\circ}\left(A\right)$ is an additive subgroup of $\Lambda\left(A\right)$, and the direct sum $\bigoplus\limits_{i\in\mathbb N}\Lambda^{i}_{\circ}\left(A\right)$ is well-defined and a sub-$\lambda$-ring of $\Lambda\left(A\right)$. (This is easy to prove by means of the usual grading on the ring of symmetric functions.) This sub-$\lambda$-ring, of course, is graded (and does have a $1$). I have no idea in how far it is what Fulton and Lang wanted.</p> <p>We could also construct a greater graded sub-$\lambda$-ring of $\Lambda\left(A\right)$ by allowing $i$ rational (with $A^x$ defined as $0$ when $x\not\in\mathbb Z$), but then it will be graded by rationals. This greater graded sub-$\lambda$-ring is actually dense in $\Lambda\left(A\right)$ (in the usual topology on formal power series).</p> <p>Does it make sense to replace $\Lambda^{\circ}\left(A\right)$ by $\Lambda^{\geq 1}_{\circ}\left(A\right)$ in the definition of a Chern class homomorphism? I don't know. It seems that most notions in Fulton-Lang are motivated by geometry, and without understanding it I am not the one to judge.</p>
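The degree-one computation in the question can also be checked on split elements: if $f=\prod_i(1+\alpha_it)$ and $g=\prod_j(1+\beta_jt)$, the Grothendieck product is $\prod_{i,j}(1+\alpha_i\beta_jt)$, whose linear coefficient is $(\sum_i\alpha_i)(\sum_j\beta_j)=a_1b_1$ — confirming $P_1(a_1;b_1)=a_1b_1$. A numeric sketch (the "roots" are arbitrary):

```python
from itertools import product

alpha, beta = (2.0, 3.0), (5.0, 7.0)    # arbitrary splitting roots

def poly_mul(p, q):
    # multiply polynomials given as coefficient lists (constant term first)
    out = [0.0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def from_roots(roots):                   # prod (1 + r*t) as a coefficient list
    p = [1.0]
    for r in roots:
        p = poly_mul(p, [1.0, r])
    return p

f = from_roots(alpha)                    # 1 + a1*t + ...
g = from_roots(beta)                     # 1 + b1*t + ...
fg = from_roots([a * b for a, b in product(alpha, beta)])  # Grothendieck product
print(fg[1], f[1] * g[1])                # both 60.0: P_1(a_1; b_1) = a_1*b_1
```

Since $a_1b_1$ sits in degree $2$ of a graded ring, this confirms the question's point that $\Lambda^\circ(A)$ as defined is not closed under the product.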
108,409
<p>Given a commutative ring <span class="math-container">$A$</span> with unity, Grothendieck used universal polynomials to define a <em>special</em> <span class="math-container">$\lambda$</span>-ring structure on <span class="math-container">$\Lambda(A):=1+t\:A[[t]]$</span>. Suppose <span class="math-container">$A$</span> is graded, say <span class="math-container">$A=\bigoplus_{i=0}^\infty A_i$</span>. In <a href="https://doi.org/10.1007/978-1-4757-1858-4" rel="nofollow noreferrer"><strong>Riemann–Roch Algebra</strong></a>, p. 11, Fulton and Lang define <span class="math-container">$\Lambda^{\circ}(A):=\{1+a_1t+a_2t^2\dotsb\mid a_i\in A_i\}$</span>. Then on page 15 they state that since the product and <span class="math-container">$\lambda$</span> operations of <span class="math-container">$\Lambda(A)$</span> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself, <span class="math-container">$\Lambda^\circ(A)$</span> becomes a <span class="math-container">$\lambda$</span>-ring (without unit). They use this <span class="math-container">$\lambda$</span>-ring structure of <span class="math-container">$\Lambda^\circ(A)$</span> in the proof of Theorem 3.1 on p. 16.</p> <p>However, a straightforward computation shows that the product in <span class="math-container">$\Lambda(A)$</span> does <em>not</em> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself. For example, if <span class="math-container">$1+a_1t+a_2t^2\dotsb$</span> and <span class="math-container">$1+b_1t+b_2t^2\dotsb$</span> are elements in <span class="math-container">$\Lambda^\circ(A)$</span>, then their product using the product of <span class="math-container">$\Lambda(A)$</span> is given by <span class="math-container">$1+P_1(a_1;b_1)t+P_2(a_1,a_2;b_1,b_2)t^2+\dotsb$</span>, where <span class="math-container">$P_1,P_2,\dotsc$</span> are certain universal polynomials. 
But <span class="math-container">$P_1(a_1;b_1)$</span> turns out to be <span class="math-container">$a_1b_1$</span> (<a href="https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxkYXJpamdyaW5iZXJnfGd4OjEwNzljZThlNDcwMGE5YmU" rel="nofollow noreferrer">see here</a>, p. 22) and <span class="math-container">$a_1b_1$</span> is not in <span class="math-container">$A_1$</span>, which shows the product is not in <span class="math-container">$\Lambda^\circ(A)$</span>.</p> <p><strong>Question.</strong> Is there an error in the book? If yes, can it be fixed?</p> <p><strong>Edit.</strong> If you know other errors in this book that one should be aware of, please share them here.</p>
JBorger
1,114
<p>As others have said, the definition of the Chern ring there is wrong. But if memory serves, the only mistake is that they forgot to introduce the right multiplication law on the sets of power series they consider. The usual one in the theory is given by the universal formulas for exterior powers of tensor products $\Lambda^n(E\otimes F)$, but the one they want is for Chern classes $c_n(E\otimes F)$. When $n=1$ and $E$ and $F$ are line bundles, the first is multiplication and the second is addition. So it's obviously just an oversight, but one that can be confusing if you're seeing these things for the first time. (In the copy at U Chicago, someone mercifully added a warning note in the margin. There are a few obvious suspects.)</p> <p>If you want a reference where the details are correct, I'd recommend SGA6. Grothendieck's introduction in expose 0 is very clear. Page 28 is where the discussion of the Chern ring starts. If I remember, Berthelot's expose goes into more depth, but I found Grothendieck's easier to read. Berthelot gets to the Chern ring on page 344. Atiyah-Tall is also generally a good reference, but I think they don't cover the Chern ring (although they do introduce the gamma-filtration).</p>
108,409
<p>Given a commutative ring <span class="math-container">$A$</span> with unity, Grothendieck used universal polynomials to define a <em>special</em> <span class="math-container">$\lambda$</span>-ring structure on <span class="math-container">$\Lambda(A):=1+t\:A[[t]]$</span>. Suppose <span class="math-container">$A$</span> is graded, say <span class="math-container">$A=\bigoplus_{i=0}^\infty A_i$</span>. In <a href="https://doi.org/10.1007/978-1-4757-1858-4" rel="nofollow noreferrer"><strong>Riemann–Roch Algebra</strong></a>, p. 11, Fulton and Lang define <span class="math-container">$\Lambda^{\circ}(A):=\{1+a_1t+a_2t^2\dotsb\mid a_i\in A_i\}$</span>. Then on page 15 they state that since the product and <span class="math-container">$\lambda$</span> operations of <span class="math-container">$\Lambda(A)$</span> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself, <span class="math-container">$\Lambda^\circ(A)$</span> becomes a <span class="math-container">$\lambda$</span>-ring (without unit). They use this <span class="math-container">$\lambda$</span>-ring structure of <span class="math-container">$\Lambda^\circ(A)$</span> in the proof of Theorem 3.1 on p. 16.</p> <p>However, a straightforward computation shows that the product in <span class="math-container">$\Lambda(A)$</span> does <em>not</em> take <span class="math-container">$\Lambda^\circ(A)$</span> to itself. For example, if <span class="math-container">$1+a_1t+a_2t^2\dotsb$</span> and <span class="math-container">$1+b_1t+b_2t^2\dotsb$</span> are elements in <span class="math-container">$\Lambda^\circ(A)$</span>, then their product using the product of <span class="math-container">$\Lambda(A)$</span> is given by <span class="math-container">$1+P_1(a_1;b_1)t+P_2(a_1,a_2;b_1,b_2)t^2+\dotsb$</span>, where <span class="math-container">$P_1,P_2,\dotsc$</span> are certain universal polynomials. 
But <span class="math-container">$P_1(a_1;b_1)$</span> turns out to be <span class="math-container">$a_1b_1$</span> (<a href="https://docs.google.com/viewer?a=v&amp;pid=sites&amp;srcid=ZGVmYXVsdGRvbWFpbnxkYXJpamdyaW5iZXJnfGd4OjEwNzljZThlNDcwMGE5YmU" rel="nofollow noreferrer">see here</a>, p. 22) and <span class="math-container">$a_1b_1$</span> is not in <span class="math-container">$A_1$</span>, which shows the product is not in <span class="math-container">$\Lambda^\circ(A)$</span>.</p> <p><strong>Question.</strong> Is there an error in the book? If yes, can it be fixed?</p> <p><strong>Edit.</strong> If you know other errors in this book that one should be aware of, please share them here.</p>
John Baez
2,893
<p>Just to save people some work, here are some problems with of <em>Riemann-Roch Algebra</em> pointed out by K. R. Coombes in his review on MathSciNet:</p> <blockquote> <p>The beginner, however, may find the going rough at first. Chapters I and III in particular could have been written more carefully. Nowhere is there an unambiguous definition of &quot;special'' λ-ring, even though the term is prominently introduced on p. 6. The reader should perhaps follow the authors' repeated advice to look at a paper by M. F. Atiyah and D. O. Tall [Topology 8 (1969), 253–297; MR0244387] for a &quot;readable account&quot;, and also consult one of two papers by A. Grothendieck [Théorie des intersections et théorème de Riemann-Roch (SGA 6), Exposé 0, 1–19, Lecture Notes in Math., 225, Springer, Berlin, 1971; see MR0354655; Bull. Soc. Math. France 86 (1958), 137–154; MR0418782]. It is rarely made clear when the hypothesis &quot;special&quot; is used. It is not in the statement of Theorem I.2.1, for instance, but is in the proof. The statement of the graded splitting principle on p. 49 leaves out one of the main points, namely, that there exists an extension which splits a given element. That these inaccuracies can be removed by reference to other sources should not relieve the authors of this book, which explicitly addresses itself to beginners and claims to be elementary and self-contained, from an obligation to meet standards of exposition higher than those applied to an ordinary advanced monograph.</p> </blockquote>
879,640
<p>Does a matrix have only one inverse matrix (like the inverse of an element in a field)? If so, does this mean that</p> <p>$A,B \text{ have the same inverse matrix} \iff A=B$?</p>
Andreas Blass
48,510
<p>More generally, in any situation where the associative law holds, if some $x$ has both a left-inverse $l$ and a right inverse $r$, then $l=r$. The reason is that $l=l(xr)=(lx)r=r$. In particular, if $x$ has a $2$-sided inverse, then that's unique. On the other hand, it is entirely possible for some $x$ to have many different left-inverses if it has no right-inverse. It is also possible for some $x$ to have many right-inverses if it has no left-inverse. Both of these possibilities actually happen in the case of non-square matrices.</p>
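The non-square case can be made concrete: a $3\times 2$ matrix that is injective but not surjective has infinitely many left inverses and no right inverse. A minimal illustration (the helper and the particular matrices are ad hoc):

```python
def matmul(A, B):
    # plain-list matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0], [0, 1], [0, 0]]      # 3x2: has left inverses, no right inverse
L1 = [[1, 0, 5], [0, 1, -2]]      # two *different* left inverses of A
L2 = [[1, 0, 0], [0, 1, 7]]
I2 = [[1, 0], [0, 1]]
print(matmul(L1, A) == I2, matmul(L2, A) == I2, L1 != L2)  # True True True
```

Any matrix of the form $\begin{pmatrix}1&0&a\\0&1&b\end{pmatrix}$ is a left inverse of $A$, so without a right inverse there is no uniqueness.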
3,243,503
<p>If <span class="math-container">$x + y = 2c$</span>, find minimum value of <span class="math-container">$ \sec x +\sec y $</span> if <span class="math-container">$x,y\in(0,\pi/2)$</span>, in terms of <span class="math-container">$c$</span>.</p> <p>I was able to solve by differentiating the equation and got the answer as 2secc. But i would like to know solution with trigonometry as base or without differentiating the equation.</p>
DeepSea
101,504
<p>The function <span class="math-container">$f(x) = \sec x$</span> has <span class="math-container">$f''(x) = \sec x\tan^2 x + \sec^3 x &gt; 0$</span> on <span class="math-container">$(0,\frac{\pi}{2})$</span>. Thus <span class="math-container">$f(x)$</span> is convex on the indicated domain and it follows that <span class="math-container">$\sec x + \sec y \ge 2\sec\left(\frac{x+y}{2}\right)= 2\sec c$</span> which is the minimum value.</p>
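The Jensen-type bound $\sec x+\sec y\ge 2\sec\frac{x+y}{2}$ can be checked on a grid (the value $c=0.6$ is an arbitrary choice keeping $x$ and $y=2c-x$ inside $(0,\pi/2)$):

```python
from math import cos, pi

def sec(t):
    return 1.0 / cos(t)

c = 0.6                                   # arbitrary; x and 2c - x stay in (0, pi/2)
lo = max(1e-4, 2 * c - pi / 2 + 1e-4)
hi = 2 * c - lo
xs = [lo + (hi - lo) * i / 10000 for i in range(10001)]
best = min(sec(x) + sec(2 * c - x) for x in xs)
print(best >= 2 * sec(c) - 1e-9)          # True: no grid point beats 2*sec(c)
print(abs(best - 2 * sec(c)) < 1e-4)      # True: the minimum sits at x = y = c
```

The grid minimum coincides with $2\sec c$, attained at $x=y=c$, exactly as convexity dictates.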
902,653
<p>I have recently been learning some hyperbolic geometry and the professor briefly mentioned spherical geometry. From a modern, naive point of view, it seems quite easy to show that spherical geometry is an example of non euclidean geometry.</p> <p>Clearly, this is not true since it took over a 1000 years to do this. Is the apparent difference in difficulty due to my modern viewpoint or were there technical difficulties like with hyperbolic geometry? </p>
Per Erik Manne
33,572
<p>To an ancient geometer (or one from the 18th century), spherical geometry would seem to violate Euclid's second postulate, which says that any finite straight line can be extended indefinitely. It is also a problem that two great circles intersect in <em>two</em> points, rather than one, which would be natural for straight lines. Thus, it would seem as if spherical geometry differs from Euclidean geometry in more than the validity of the parallel postulate. </p>
1,507,526
<p>Let $S$ be the portion of the sphere $x^2+y^2+z^2=9$, where $1\leq x^2+y^2\leq4$ and $z\geq0$. Calculate the surface area of $S$</p> <p>OK, I'm really confused by this one. I know I have to apply the surface area formula, possibly with spherical coordinates, but I can't work out how to set up the integral.</p> <p>I thought of using a spherical coordinate system, but after doing so I ended up with a three-coordinate system. I'm not even sure how to begin with this one.</p>
Christian Blatter
1,303
<p>The surface $S$ in question is a spherical zone. Its area can be found by elementary means: If $R$ is the radius of the sphere and $h$ is the $z$-height of the zone then the area $\omega(S)$ is given by $$\omega(S)=2\pi R h\ .$$ As $R=3$ and $h$ is easily computed as $h=\sqrt{9-1}-\sqrt{9-4}$ we obtain $$\omega(S)=6\pi(\sqrt{8}-\sqrt{5})\ .$$ Now for the integral: Use polar coordinates in the $(x,y)$-plane as parameter variables. Then $S$ is produced by $$S:\quad(r,\phi)\mapsto{\bf x}(r,\phi):=(r\cos\phi, \&gt;r\sin\phi, \&gt;\sqrt{9-r^2})\qquad(1\leq r\leq 2, \ 0\leq\phi\leq 2\pi)\ .$$ Then $${\bf x}_r=\bigl(\cos\phi,\sin\phi,-{r\over\sqrt{9-r^2}}\bigr),\quad{\bf x}_\phi=(-r\sin\phi,r\cos\phi,0)$$ and $${\bf x}_r\times{\bf x}_\phi=\left({r^2\cos\phi\over\sqrt{9-r^2}}, \ {r^2\sin\phi\over\sqrt{9-r^2}}, \ r\right)\ .$$ Therefore the area element becomes $${\rm d}\omega=|{\bf x}_r\times{\bf x}_\phi|\&gt;{\rm d}(r,\phi)={3r\over\sqrt{9-r^2}}\&gt;{\rm d}(r,\phi)\ ,$$ and we obtain $$\omega(S)=\int_0^{2\pi} \int_1^2{3r\over\sqrt{9-r^2}}\&gt;dr\&gt;d\phi=2\pi\ \left(-3\sqrt{9-r^2}\right)\biggr|_1^2=6\pi(\sqrt{8}-\sqrt{5})\ ,$$ as before.</p>
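Both computations are easy to cross-check numerically (a sketch using only the standard library; the midpoint rule stands in for the exact antiderivative):

```python
import math

# midpoint rule for  2*pi * Integral_1^2  3r / sqrt(9 - r^2)  dr
n = 200000
total = 0.0
for i in range(n):
    r = 1 + (i + 0.5) / n          # midpoints of [1, 2]
    total += 3 * r / math.sqrt(9 - r * r)
numeric = 2 * math.pi * total / n

closed_form = 6 * math.pi * (math.sqrt(8) - math.sqrt(5))
assert abs(numeric - closed_form) < 1e-6
print(closed_form)  # about 11.17
```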
761,286
<p>Let $G$ be an infinite group of the form $G_1 \oplus G_2 \oplus \dots \oplus G_n$ where each $G_i$ is a <strong>non trivial</strong> group and $n&gt;1$. Prove that $G$ is not cyclic.</p> <p><strong>Attempt</strong> : Let $G = G_1 \oplus G_2 \oplus \dots \oplus G_n$ be cyclic.</p> <p>then $\exists ~g =(g_1,g_2,......,g_n)$ such that $g$ is generator of $G$ if and only if :</p> <p>$(i)~~ g_1$ generates $G_1, ~g_2$ generates $ G_2$, and so on.</p> <p>$(ii)~\gcd(|g_i|,|g_j|) =1$ whenever $i \neq j$ .</p> <p>Since a group of prime order is cyclic, this means that if we take $G_i$ such that $|G_i|=p_i$ , where $p_i$ is a prime number and for any $i \neq j,~\gcd (p_i,p_j) =1$ , hence, if we take $G_i$'s in this fashion such that $g_i$ of prime order generates $G_i$, $G$ should turn out to be cyclic?</p> <p>Where could I be making a mistake? Thank you for the help.</p>
Vector_13
96,276
<p>If the inner loop runs $\sqrt n$ times and the outer loop runs $n$ times, as you indicated in your comments, then you get: $$n\cdot \sqrt n = n^{3/2}$$ Since $f=O(g)$ means that $f$ grows no faster than $g$, it follows that $$n^{3/2}=O(n^{3/2}),$$ i.e. there is a constant $C$ such that $f(n) \le C\,g(n)$ for all large $n$.</p>
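The count is easy to confirm empirically (a small sketch of mine, using `math.isqrt` for the inner bound):

```python
import math

def count_inner_iterations(n):
    ops = 0
    for _ in range(n):                      # outer loop: n passes
        for _ in range(math.isqrt(n)):      # inner loop: floor(sqrt(n)) passes
            ops += 1
    return ops

for n in [100, 2500, 10000]:
    assert count_inner_iterations(n) == n * math.isqrt(n)
print("total work is n * floor(sqrt(n)), i.e. Theta(n^(3/2))")
```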
2,299,466
<p>For the first-order language with vocabulary $(E)$ (the binary relation $E$ which holds if two vertices have an edge) together with a set $G$ of vertices, I've been told that the property "a symmetric graph is connected" cannot be axiomatized by any set of first-order sentences. </p> <p>I think the proof involved taking two constants, let's say $x,y \in G $ and building an infinite set of sentences $T$ that states "there is no path of length $n$ between $x$ and $y$" for each $n$ (where a path is, for some vertices $(v_1,...,v_n) \in G$, $[E(x, v_1) \wedge ... \wedge E(v_n, y)]$). By compactness, I see that $T$ is satisfiable. However, I don't see how $T$ shows the impossibility of constructing another set of sentences $T'$ which is satisfied by connected graphs. </p> <p>I do understand the use of compactness to show that a theory whose models have arbitrarily large domains also has a model with an infinite domain, but I don't understand its use here. I don't have any experience with ultraproducts, so answers using that concept may be lost on me.</p>
Daron
53,993
<p>Suppose our language is $(V,E,a,b)$ where $V$ is the set of vertices; $E(\cdot,\cdot)$ is the edge relation, and $a$ and $b$ are distinct constants.</p> <p>Let each $T_n$ encode "There is no path of length $n$ between $a$ and $b$"</p> <p>And suppose conversely there is a sentence $T$ that encodes "The graph is connected".</p> <p>Consider the set of sentences $T,T_1,T_2,\ldots$ . It's an exercise to show every finite subset is modeled by some finite graph with some large number of edges that are arranged along a straight line. The bigger you make your finite subset the bigger the graph needs to be. But that's okay for the hypotheses of the compactness theorem.</p> <p>By compactness there is a model $\mathcal M$ for the entire set of sentences $T,T_1,T_2,\ldots$ . That model is a connected graph with the property for each $n$ that "There is no path of length $n$ between $a$ and $b$". Contradiction.</p>
2,721,836
<p>I recently found a different method to compute prime numbers in $\mathcal O(\log(\log n))$ complexity. At present, that logic works fine for $300$ digit prime numbers, which I found on websites. I need to validate whether that logic will work for a higher number of digits. At present, I have computed a prime number of $300\ 000$ digits (but I am not sure whether this is valid).</p> <p>My questions are:</p> <ul> <li>Where can I find a prime number with more digits, i.e., more than $300\ 000$ digits?</li> <li>Where can I validate that a $300\ 000$ digit prime number is a valid one?</li> </ul>
gammatester
61,216
<p>Check it with a table of Mersenne primes (e.g. <a href="http://oeis.org/A000043" rel="nofollow noreferrer">http://oeis.org/A000043</a>) or use provable primes (e.g. Maurer's method, see Alg. 4.62 in the <a href="http://cacr.uwaterloo.ca/hac/" rel="nofollow noreferrer">Handbook of Applied Cryptography</a>).</p>
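For an independent, self-contained check (rather than a website table), a Miller–Rabin test needs only integer arithmetic. The sketch below is my own and is probabilistic, not a proof of primality; for certified primes use a provable method like Maurer's, as mentioned above.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test (stdlib only)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # write n - 1 = d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # composite witness found
    return True  # probably prime

assert is_probable_prime(2**127 - 1)        # a known Mersenne prime
assert not is_probable_prime(2**11 - 1)     # 2047 = 23 * 89
assert is_probable_prime(2**521 - 1)        # another Mersenne prime
```

For numbers with hundreds of thousands of digits, each `pow` call is expensive, so in practice you would run far fewer rounds after a trial-division sieve.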
320,557
<p>Let $S$ be the set of pairs $(x,y)$ where $x,y$ are orthogonal unit vectors in $\mathbb R^3$. I am trying to show this is a topological manifold. For starters, one needs to define a suitable topology on it. I was thinking: let a set $U$ be open in $S$ iff $U \cap S^2$ (intersection with the sphere) is open in $S^2$ in the subspace topology? Am I going in the right direction? I'd really appreciate some help. Thank you. </p> <p>This is really about topological manifolds, sorry for not stating it earlier. </p> <p>Here is the definition:</p> <blockquote> <p>a manifold is a second countable Hausdorff space that is locally Euclidean.</p> </blockquote>
Ryan Budney
642
<p>Your set $S$ is a subset of $\mathbb R^6$, so give it the subspace topology. That ensures it's 2nd countable and Hausdorff.</p> <p>To show it's a manifold, notice $S$ is the pairs $(x,y) \in \mathbb R^3 \times \mathbb R^3$ such that:</p> <p>$$ |x|^2 =1,\ \ |y|^2=1, \ \ x\cdot y = 0 $$</p> <p>This is the same as saying $S = f^{-1}(1,1,0)$ where $f(x,y) = (|x|^2, |y|^2, x \cdot y)$. </p> <p>So the idea would be to show $(1,1,0)$ is a regular-value of $f$, then apply the preimage theorem as Qiaochu cites. The pre-image theorem is basically just the implicit function theorem from calculus, but re-cast in a convenient formalism for saying things are manifolds. </p>
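To make the regular-value step concrete, here is a small pure-Python sanity check (my own sketch, not part of the original argument) that the Jacobian of $f$ has full rank $3$ at the sample point $(x,y)=(e_1,e_2)\in S$; the same row structure $(2x,0)$, $(0,2y)$, $(y,x)$ gives full rank at every point of $S$, since $x\perp y$ forces any vanishing linear combination to be trivial.

```python
# Jacobian of f(x, y) = (|x|^2, |y|^2, x.y) on R^3 x R^3 has rows
# (2x, 0), (0, 2y), (y, x).  At the sample point x = e1, y = e2:
rows = [
    [2, 0, 0, 0, 0, 0],   # (2x, 0)
    [0, 0, 0, 0, 2, 0],   # (0, 2y)
    [0, 1, 0, 1, 0, 0],   # (y, x)
]

# The 3x3 submatrix on columns 0, 4, 1 is [[2,0,0],[0,2,0],[0,0,1]];
# a nonzero determinant certifies that the Jacobian has full rank 3.
cols = [0, 4, 1]
sub = [[row[c] for c in cols] for row in rows]
det = (sub[0][0] * (sub[1][1] * sub[2][2] - sub[1][2] * sub[2][1])
       - sub[0][1] * (sub[1][0] * sub[2][2] - sub[1][2] * sub[2][0])
       + sub[0][2] * (sub[1][0] * sub[2][1] - sub[1][1] * sub[2][0]))
assert det == 4
print("full rank at (e1, e2), so (1,1,0) is a regular value there")
```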
30,653
<p>I have a data file which contains thousands of lines and each line has eight elements. Here is a small sample of the data file</p> <pre><code>-4.00 -0.80 0.1886024468848907E+01 0.1467147621657460E+01 -.1217067274319363E+01 0.7206100000000000E+03 0.7693457688734395E-12 5 -4.00 -0.70 0.1430357986632780E+01 -.1404093461650013E+01 -.1742223680347601E+01 0.1824700000000000E+03 0.8439003681169850E-12 8 -4.00 -0.60 -.1324719768465547E+01 0.1740130076002850E+01 0.1497978622206873E+01 0.5479900000000000E+03 0.4264485578634903E-14 2 -4.00 -0.50 0.1358536876189560E+01 -.1580696533502541E+01 0.1621539980382560E+01 0.2881100000000000E+03 0.1319885603098060E-13 4 -4.00 -0.40 -.1588487538231399E+01 0.1275577589608218E+01 0.1707607247015512E+01 0.1337500000000000E+03 0.1057878487713421E-12 2 -4.00 -0.30 0.1755414125374284E+01 0.1332710827520201E+01 0.1477984475201826E+01 0.6426400000000000E+03 0.1459764022611444E-13 1 -4.00 -0.20 0.1245972697710741E+01 0.1540633564543885E+01 0.1777167629372046E+01 0.6112200000000000E+03 0.5718386586661092E-13 1 -4.00 -0.10 -.1311461418732105E+01 -.1594149065989313E+01 0.1661344176980193E+01 0.3507800000000000E+03 0.6765588799377521E-14 3 </code></pre> <p>I read this file using</p> <pre><code>data = ReadList["data.out", Number, RecordLists -&gt; True]; </code></pre> <p>The total length of the list is obtained, of course as</p> <pre><code>ntot = Length[data]; </code></pre> <p>The list contains eight elements per row and the last of them is an integer taking values in the interval [0,8]. What I want is the following:</p> <p>(a). Count how many rows have 0 value at the last element (let's suppose there are <code>n0</code>), how many have 1, 2, 3, ... , 8. Then calculate the corresponding percentages <code>per0 = n0/ntot</code>, <code>per1 = n1/ntot</code>, etc. It could be nice if this was inside a DO loop with <code>i = 0,8</code>.</p> <p>(b). Count again percentages but using more than one criteria this time. 
For example, count how many rows have 1 at the last element and the value of the seventh element is smaller than <code>10^{-4}</code>.</p> <p>Any suggestions?</p> <p><strong>EDIT</strong></p> <p>Using's @Kuba's solution we have</p> <pre><code>{{5, 656}, {8, 640}, {2, 673}, {4, 663}, {1, 673}, {3, 663}, {6, 656}, {7, 640}, {0, 19}} </code></pre> <p>Is it possible to divide each sum automatically with <code>ntot</code> thus obtaining the percentages? </p> <pre><code>{{5, 656/ntot}, {8, 640/ntot}, {2, 673/ntot}, {4, 663/ntot}, {1, 673/ntot}, {3, 663/ntot}, {6, 656/ntot}, {7,640/ntot}, {0, 19/ntot}} </code></pre> <p>Also it would be great if they were sorted from 0 to 8 not randomly as they are now. </p>
Kuba
5,478
<p>There are many ways to achieve this:</p> <pre><code>SetDirectory@NotebookDirectory[]; data = Import["list.txt", "Table"] (*I saved your sample in txt file*) ntot = Length@data; temp = SortBy[Tally@data[[;; , -1]], First]; temp[[;; , 2]] = temp[[;; , 2]] 100/ntot // N; (*I've multiplied by 100 to get % value not the ratio*) temp </code></pre> <blockquote> <pre><code>{{1, 25.}, {2, 25.}, {3, 12.5}, {4, 12.5}, {5, 12.5}, {8, 12.5}} </code></pre> </blockquote> <p>Also useful, though the difference is that it will "count" also what is not there (e.g. <code>7</code>):</p> <pre><code>viahist = HistogramList[data[[;; , 8]], {1}, "Probability"] </code></pre> <blockquote> <pre><code>{{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, {0., 0.25, 0.25, 0.125, 0.125, 0.125, 0., 0., 0.125}} </code></pre> </blockquote> <p>notice that the first part is a list of interval limits, so at the end:</p> <pre><code>{Most@#, #2 100} &amp; @@ viahist // Transpose </code></pre> <blockquote> <pre><code>{{0, 0.}, {1, 25.}, {2, 25.}, {3, 12.5}, {4, 12.5}, {5, 12.5}, {6, 0.}, {7, 0.}, {8, 12.5}} </code></pre> </blockquote> <hr> <p>For your second question, one way to do this is:</p> <pre><code>Length@Select[data, #[[7]] &lt; 10^(-13) &amp;&amp; #[[8]] == 2 &amp;] 100 / ntot // N </code></pre> <blockquote> <pre><code>12.5 </code></pre> </blockquote> <hr>
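As an aside, the tally can be cross-checked outside Mathematica in a few lines of Python on the eight sample rows (my own sketch; file reading is omitted and the last column is typed in directly):

```python
from collections import Counter

# last column of the eight sample rows shown in the question
last = [5, 8, 2, 4, 2, 1, 1, 3]
ntot = len(last)
percentages = sorted((k, 100 * v / ntot) for k, v in Counter(last).items())
print(percentages)
assert percentages == [(1, 25.0), (2, 25.0), (3, 12.5),
                       (4, 12.5), (5, 12.5), (8, 12.5)]
```

The result agrees with the Mathematica output above.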
3,930,659
<blockquote> <p>Evaluate: <span class="math-container">$$ \int \frac{x^2}{\sqrt{1-x^2}}\,dx$$</span></p> </blockquote> <p>The solution I came across does a <span class="math-container">$u$</span>-substitution by letting <span class="math-container">$x = \sin(t)$</span>. But why <span class="math-container">$\sin(t)$</span>? It seems a lot like guessing to me - should I just guess the right <span class="math-container">$x$</span> for substitution? Why not <span class="math-container">$\cos(t)$</span>? Is there any way that I could identify that &quot;this&quot; expression is solved by &quot;that&quot; trig substitution?</p>
DMcMor
155,622
<p>Draw a triangle! When looking at the term <span class="math-container">$\sqrt{1 - x^{2}}$</span> in the integrand, you should immediately be reminded of the Pythagorean Theorem. If you draw a right triangle with hypotenuse length <span class="math-container">$1$</span>, and side lengths <span class="math-container">$x$</span> and <span class="math-container">$\sqrt{1 - x^{2}}$</span>, you end up with the following triangle. I've labelled the remaining two angles <span class="math-container">$s$</span> and <span class="math-container">$t$</span>. So, if you want to isolate <span class="math-container">$x$</span> for a potential change of variables, based on the triangle, you can either use <span class="math-container">$\sin(s) = x$</span> or <span class="math-container">$\cos(t) = x$</span>. Either would give you a valid change of variables.</p> <p><a href="https://i.stack.imgur.com/LboFX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LboFX.png" alt="enter image description here" /></a></p>
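Carrying the sine substitution through for this particular integral (my own worked steps, consistent with the triangle above):

```latex
\text{Let } x=\sin t,\ t\in\left(-\tfrac{\pi}{2},\tfrac{\pi}{2}\right),
\text{ so } dx=\cos t\,dt \text{ and } \sqrt{1-x^2}=\cos t. \text{ Then}
\int \frac{x^2}{\sqrt{1-x^2}}\,dx
  =\int \frac{\sin^2 t}{\cos t}\,\cos t\,dt
  =\int \sin^2 t\,dt
  =\frac{t}{2}-\frac{\sin t\cos t}{2}+C
  =\frac{\arcsin x - x\sqrt{1-x^2}}{2}+C.
```

The cosine substitution works just as well and yields the same antiderivative up to a constant.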
18,960
<p>I am making this post in regards to the ongoing delete/undelete skirmish (let's at least change the monotonicity of the use of "war"). The old version of the question is <a href="https://math.stackexchange.com/revisions/172652/3">here</a>, the current version (after edits today) <a href="https://math.stackexchange.com/q/172652/11619">here</a>, and the answer <a href="https://math.stackexchange.com/questions/172652/conics-passing-through-integer-lattice-points/172683#172683">here</a> </p> <p>There are two facts:</p> <ol> <li>The original post is of poor quality: it is a problem statement of various simultaneous problems, which shows no effort from the OP.</li> <li>Nonetheless, the post has received a very good answer, which is worth keeping (in my personal opinion) for the sake of future users.</li> </ol> <p>As a consequence, the moderator team decided to undelete the question some days ago. The question was again deleted by two of the three users that had deleted it before. Since the question is already closed, I find no reason to delete it, given it signifies the deletion of a useful answer. </p> <p>The mod team (more precisely, Pedro and Jyrki) have undeleted, locked, edited, unlocked and reopened the question today.</p>
Jonas Meyer
1,424
<p>Now that it has been edited to be reasonable, I would prefer for it to be unlocked and reopened.</p>
920,782
<p>How do I find the number of integral solutions to the equation - </p> <p>$$2x_1 + 2x_2 + \cdots + 2x_6 + x_7 = N$$</p> <p>$$x_1,x_2,\ldots,x_7 \ge 1$$</p> <p>I just thought that I should reduce this a bit more, so I replace $x_i$ with $(y_i+1)$, so we have:</p> <p>$$y_1 + y_2 + \cdots + y_6 = \tfrac{1}{2}(N - 13 - y_7)$$</p> <p>$$y_1,y_2,\ldots,y_7 \ge 0$$</p> <p>I will be solving this as a programming problem by looping over $y_7$ from $[0, N-13]$. How do I find the number of solutions to this equation in each looping step?</p>
user2566092
87,313
<p>Note $x_7$ must have the same parity as $N$, since $2(x_1 + \ldots + x_6)$ is even. If $N$ is odd, write $x_7 = 2y+1$ with $y \geq 0$; then the problem becomes counting solutions of $x_1 + \ldots + x_6 + y = (N-1)/2$ where $x_i \geq 1$ for $1 \leq i \leq 6$ and $y \geq 0$. If $N$ is even, write $x_7 = 2y$ with $y \geq 1$ (since $x_7 \geq 1$ and even forces $x_7 \geq 2$), and count solutions of $x_1 + \ldots + x_6 + y = N/2$ with all seven variables $\geq 1$. Either way this can be solved with a single binomial coefficient (no big summation required), see the "stars and bars" construction for how to do it, basically you just add 1 to the target sum for each variable that is $\geq 0$ instead of $\geq 1$, and then if you have target sum $M$ and $k$ variables all $\geq 1$ then the number of solutions is ${M-1} \choose {k-1}$.</p>
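The stars-and-bars count is easy to verify by brute force for small parameters (a quick sketch, not part of the original answer):

```python
from math import comb
from itertools import product

def brute_count(M, k):
    """Count solutions of x_1 + ... + x_k = M with every x_i >= 1."""
    return sum(1 for xs in product(range(1, M + 1), repeat=k) if sum(xs) == M)

for M in range(1, 9):
    for k in range(1, 5):
        assert brute_count(M, k) == comb(M - 1, k - 1)
print("stars and bars confirmed: C(M-1, k-1) solutions")
```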
504,524
<p>I'm trying to learn probability and statistics but I can't really get my head around this one. I realize after drawing the first card there will only be 51 cards in the deck but I'm having trouble calculating the chance that the second one is an Ace if I don't know what the first card is?</p> <p>Assuming that the initial card was an Ace, I could say the probability of the second being an Ace is 3/51... but how do I approach this with an unknown card?</p>
drmortimer
34,817
<p>Break the answer into two scenarios:</p> <ol> <li>If the first card is an ace and the second card is also an ace</li> </ol> <p>The probability, in this case, is <span class="math-container">$$\dfrac{4}{52}\dfrac{3}{51}.$$</span></p> <ol start="2"> <li><p>If the first card is not an ace but the second card is an ace</p> <p>The probability, in this case, is <span class="math-container">$$\dfrac{48}{52}\dfrac{4}{51}.$$</span></p> </li> </ol> <p>Since these two scenarios are mutually exclusive, the final probability is the sum of these two probabilities: <span class="math-container">$$\dfrac{4}{52}\dfrac{3}{51} + \dfrac{48}{52}\dfrac{4}{51}.$$</span></p>
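A quick exact-arithmetic check of the sum (using Python's `fractions`; variable names are mine):

```python
from fractions import Fraction

case1 = Fraction(4, 52) * Fraction(3, 51)    # first card ace, second ace
case2 = Fraction(48, 52) * Fraction(4, 51)   # first card not ace, second ace
total = case1 + case2
assert total == Fraction(4, 52) == Fraction(1, 13)
print(total)  # 1/13
```

The total equals $4/52$, the unconditional chance that any fixed position in the deck holds an ace, which is what symmetry predicts.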
2,898,390
<p>Is there any algorithm or a technique to calculate how many prime numbers lie in a given closed interval [a1, an], knowing the values of a1 and an, with a1,an ∈ ℕ?</p> <p>Example: </p> <p>[2, 10] --> 4 prime numbers {2, 3, 5, 7}</p> <p>[4, 12] --> 3 prime numbers {5, 7, 11}</p>
Doesbaddel
587,094
<blockquote> <p>$(A \implies C) \wedge \neg (B \wedge C \wedge D).$ </p> <p>Truth table:</p> <p>\begin{array}{| c | c | c | c | c | c | c |} \hline A &amp; B &amp; C &amp; D &amp; &gt; \underbrace{A \implies B}_{E} &amp; \underbrace{\neg (B \wedge C \wedge D)}_{F} &amp; \underbrace{E \wedge F}_{G} \\ \hline 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \\ \hline 1 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ \hline 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \\ \hline 1 &amp; 1 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ \hline 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \\ \hline 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \\ \hline 0 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \\ \hline 1 &amp; 1 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \\ \hline 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\ \hline 1 &amp; 0 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 \\ \hline 0 &amp; 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\ \hline 1 &amp; 1 &amp; 0 &amp; 1 &amp; 0 &amp; 1 &amp; 0 \\ \hline 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\ \hline 1 &amp; 0 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 \\ \hline 0 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 0 \\ \hline 1 &amp; 1 &amp; 1 &amp; 1 &amp; 1 &amp; 0 &amp; 0 \\ \hline \end{array}</p> </blockquote> <p>In order to fill the <em>Karnaugh Map</em>, you'll need to search for all the given combinations of "Trues" in your <em>Truth Table.</em> Thanks to AdrianKeister for pointing out. 
Fill in all the gaps and extract the minimal DNF by circling all $2^k$ fields, that contain the number <em>1</em>.</p> <p>Karnaugh map: \begin{array}{| c | c | c |c | c | c |}\hline C &amp; C &amp; \neg C &amp; \neg C \\ \hline \neg D&amp; D &amp; D &amp; \neg D \\ \hline \color{yellow}{1} &amp; \color{blue}{1} &amp; 0 &amp; 0 &amp; \neg B &amp; A\\ \hline \color{darkorange}{1} &amp; 0 &amp; 0 &amp; 0 &amp; B &amp; A\\ \hline \color{darkorange}{1} &amp; 0 &amp; \color{red}{1} &amp; \color{red}{1} &amp; B &amp; \neg A \\ \hline \color{yellow}{1} &amp; \color{blue}{1} &amp; \color{red}{1} &amp; \color{red}{1} &amp; \neg B &amp; \neg A\\ \hline \end{array} Numbers in similar colors are circled. The color <em>yellow</em> should indicate, that two circles are overlapping each other.</p> <p>The <em>DNF</em> is $G=(C \wedge \neg D) \vee (C \wedge \neg B) \vee (\neg A \wedge \neg C)$ or </p> <p>like <em>Adrian Keister</em> already mentioned: </p> <blockquote> <p>$G=(\neg A \land \neg C)\lor(\neg B \land C)\lor(C \land \neg D)$</p> </blockquote> <p>If you want to receive the <em>CNF</em>, you will need to mark all $2^k$ fields that contain <em>0</em>:</p> <p>Karnaugh map: \begin{array}{| c | c | c |c | c | c |}\hline C &amp; C &amp; \neg C &amp; \neg C \\ \hline \neg D&amp; D &amp; D &amp; \neg D \\ \hline 1 &amp; 1 &amp; \color{blue}{0} &amp; \color{blue}{0} &amp; \neg B &amp; A\\ \hline 1 &amp; \color{red}{0} &amp; \color{blue}{0} &amp; \color{blue}{0} &amp; B &amp; A \\ \hline 1 &amp; \color{red}{0} &amp; 1 &amp; 1 &amp; B &amp; \neg A \\ \hline 1 &amp; 1 &amp; 1 &amp; 1 &amp; \neg B &amp; \neg A\\ \hline \end{array}</p> <p>For <em>CNF</em> you will get $G_2=(\neg A \vee C) \wedge (\neg B \wedge C \wedge \neg D)$.</p>
1,527,197
<p>So in the case where data points have the same variance $\sigma^2$, the estimator (in normal equation form) can be written as </p> <p>$$\theta=(X^TX)^{-1}X^TY$$</p> <p>I'm not sure how to derive a similar formula when the data points have different variances, and thus the covariance matrix would be</p> <p>$$\Sigma = diag(\sigma_1^2, \sigma_2^2, ...,\sigma_n^2)$$</p>
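For what it's worth, a hedged sketch of the diagonal-covariance case: minimizing $(Y-X\theta)^T\Sigma^{-1}(Y-X\theta)$ gives the generalized least squares estimator $\theta=(X^T\Sigma^{-1}X)^{-1}X^T\Sigma^{-1}Y$, i.e. ordinary least squares with per-point weights $w_i=1/\sigma_i^2$. The pure-Python line-fit specialization below, and its function name, are my own.

```python
# Weighted least squares fit of y = t0 + t1*x with weights w_i = 1/sigma_i^2,
# solving the 2x2 normal equations (X^T W X) theta = X^T W Y directly.
def wls_line(xs, ys, sigmas):
    ws = [1.0 / s ** 2 for s in sigmas]
    S = sum(ws)
    Sx = sum(w * x for w, x in zip(ws, xs))
    Sxx = sum(w * x * x for w, x in zip(ws, xs))
    Sy = sum(w * y for w, y in zip(ws, ys))
    Sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = S * Sxx - Sx * Sx
    t0 = (Sxx * Sy - Sx * Sxy) / det       # Cramer's rule
    t1 = (S * Sxy - Sx * Sy) / det
    return t0, t1

# noise-free data from y = 2 + 3x is recovered exactly, whatever the weights
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2 + 3 * x for x in xs]
t0, t1 = wls_line(xs, ys, sigmas=[1.0, 0.5, 2.0, 4.0])
assert abs(t0 - 2) < 1e-9 and abs(t1 - 3) < 1e-9
```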
As soon as possible
288,506
<p>The series $\sum a_n$ is divergent if $(a_n)$ does not have limit $0$. So the series in 1. is divergent.</p> <p>For 3., the sequence $\frac{1}{\sqrt n+1}$ is decreasing and has limit $0$, so you can apply the alternating series test (AST).</p>
3,488,405
<blockquote> <p>Let <span class="math-container">$L(n)$</span> denote the number of positive divisors of a number <span class="math-container">$n$</span>. Prove that <span class="math-container">$\sum_{n=1}^N L(n)=\lfloor{\sqrt N}\rfloor\pmod 2$</span>.</p> </blockquote> <p>I wanted to prove that by induction. For <span class="math-container">$n=1$</span> it is true. Assume it is true for some N. Now we have <span class="math-container">$N+1$</span>. I observed that <span class="math-container">$L(n)=n-\phi(n)+1$</span> where phi is Euler's function. And what now... If <span class="math-container">$N+1$</span> is a square, then <span class="math-container">$\lfloor\sqrt N\rfloor\ne\lfloor\sqrt{N+1}\rfloor\pmod 2$</span>, whereas they are the same modulo 2 if <span class="math-container">$N+1$</span> is not a square. When we add the <span class="math-container">$N+1$</span>th element to the sum, we add <span class="math-container">$N+1-\phi(N+1)+1$</span> which modulo 2 is <span class="math-container">$N-\phi(N+1)$</span>. So, the function <span class="math-container">$k(n)=n-\phi(n+1)$</span> should be odd if <span class="math-container">$n+1$</span> is square, and even if it is not square. But I opened a table of Euler's phi function, and that function seems to be odd when n is odd and even when n is even - I checked all naturals &lt; 500. So, where am I wrong? </p> <p>I would also be happy to see a non-induction solution, because it is probably a better way. I just can't imagine where that sqrt(N) comes from when I think about that sum.</p> <p>Thanks in advance for the help. I will probably reply to comments tomorrow.</p>
Will Jagy
10,400
<p>Your evaluation of <span class="math-container">$L$</span> using the Euler phi function is wrong, it works only for primes and 1 and 4.</p> <pre><code>Thu Dec 26 13:34:20 PST 2019 1 = 1 divisor count: 1 n - phi + 1: 1 EQUAL 2 = 2 divisor count: 2 n - phi + 1: 2 EQUAL 3 = 3 divisor count: 2 n - phi + 1: 2 EQUAL 4 = 2^2 divisor count: 3 n - phi + 1: 3 EQUAL 5 = 5 divisor count: 2 n - phi + 1: 2 EQUAL 6 = 2 * 3 divisor count: 4 n - phi + 1: 5 7 = 7 divisor count: 2 n - phi + 1: 2 EQUAL 8 = 2^3 divisor count: 4 n - phi + 1: 5 9 = 3^2 divisor count: 3 n - phi + 1: 4 10 = 2 * 5 divisor count: 4 n - phi + 1: 7 11 = 11 divisor count: 2 n - phi + 1: 2 EQUAL 12 = 2^2 * 3 divisor count: 6 n - phi + 1: 9 13 = 13 divisor count: 2 n - phi + 1: 2 EQUAL 14 = 2 * 7 divisor count: 4 n - phi + 1: 9 15 = 3 * 5 divisor count: 4 n - phi + 1: 8 16 = 2^4 divisor count: 5 n - phi + 1: 9 17 = 17 divisor count: 2 n - phi + 1: 2 EQUAL 18 = 2 * 3^2 divisor count: 6 n - phi + 1: 13 19 = 19 divisor count: 2 n - phi + 1: 2 EQUAL 20 = 2^2 * 5 divisor count: 6 n - phi + 1: 13 21 = 3 * 7 divisor count: 4 n - phi + 1: 10 22 = 2 * 11 divisor count: 4 n - phi + 1: 13 23 = 23 divisor count: 2 n - phi + 1: 2 EQUAL 24 = 2^3 * 3 divisor count: 8 n - phi + 1: 17 25 = 5^2 divisor count: 3 n - phi + 1: 6 26 = 2 * 13 divisor count: 4 n - phi + 1: 15 27 = 3^3 divisor count: 4 n - phi + 1: 10 28 = 2^2 * 7 divisor count: 6 n - phi + 1: 17 29 = 29 divisor count: 2 n - phi + 1: 2 EQUAL 30 = 2 * 3 * 5 divisor count: 8 n - phi + 1: 23 31 = 31 divisor count: 2 n - phi + 1: 2 EQUAL 32 = 2^5 divisor count: 6 n - phi + 1: 17 33 = 3 * 11 divisor count: 4 n - phi + 1: 14 34 = 2 * 17 divisor count: 4 n - phi + 1: 19 35 = 5 * 7 divisor count: 4 n - phi + 1: 12 36 = 2^2 * 3^2 divisor count: 9 n - phi + 1: 25 37 = 37 divisor count: 2 n - phi + 1: 2 EQUAL 38 = 2 * 19 divisor count: 4 n - phi + 1: 21 39 = 3 * 13 divisor count: 4 n - phi + 1: 16 40 = 2^3 * 5 divisor count: 8 n - phi + 1: 25 41 = 41 divisor count: 2 n - phi + 
1: 2 EQUAL 42 = 2 * 3 * 7 divisor count: 8 n - phi + 1: 31 43 = 43 divisor count: 2 n - phi + 1: 2 EQUAL 44 = 2^2 * 11 divisor count: 6 n - phi + 1: 25 45 = 3^2 * 5 divisor count: 6 n - phi + 1: 22 46 = 2 * 23 divisor count: 4 n - phi + 1: 25 47 = 47 divisor count: 2 n - phi + 1: 2 EQUAL 48 = 2^4 * 3 divisor count: 10 n - phi + 1: 33 49 = 7^2 divisor count: 3 n - phi + 1: 8 50 = 2 * 5^2 divisor count: 6 n - phi + 1: 31 51 = 3 * 17 divisor count: 4 n - phi + 1: 20 52 = 2^2 * 13 divisor count: 6 n - phi + 1: 29 53 = 53 divisor count: 2 n - phi + 1: 2 EQUAL 54 = 2 * 3^3 divisor count: 8 n - phi + 1: 37 55 = 5 * 11 divisor count: 4 n - phi + 1: 16 56 = 2^3 * 7 divisor count: 8 n - phi + 1: 33 57 = 3 * 19 divisor count: 4 n - phi + 1: 22 58 = 2 * 29 divisor count: 4 n - phi + 1: 31 59 = 59 divisor count: 2 n - phi + 1: 2 EQUAL 60 = 2^2 * 3 * 5 divisor count: 12 n - phi + 1: 45 61 = 61 divisor count: 2 n - phi + 1: 2 EQUAL 62 = 2 * 31 divisor count: 4 n - phi + 1: 33 63 = 3^2 * 7 divisor count: 6 n - phi + 1: 28 64 = 2^6 divisor count: 7 n - phi + 1: 33 65 = 5 * 13 divisor count: 4 n - phi + 1: 18 66 = 2 * 3 * 11 divisor count: 8 n - phi + 1: 47 67 = 67 divisor count: 2 n - phi + 1: 2 EQUAL 68 = 2^2 * 17 divisor count: 6 n - phi + 1: 37 69 = 3 * 23 divisor count: 4 n - phi + 1: 26 70 = 2 * 5 * 7 divisor count: 8 n - phi + 1: 47 71 = 71 divisor count: 2 n - phi + 1: 2 EQUAL 72 = 2^3 * 3^2 divisor count: 12 n - phi + 1: 49 73 = 73 divisor count: 2 n - phi + 1: 2 EQUAL 74 = 2 * 37 divisor count: 4 n - phi + 1: 39 75 = 3 * 5^2 divisor count: 6 n - phi + 1: 36 76 = 2^2 * 19 divisor count: 6 n - phi + 1: 41 77 = 7 * 11 divisor count: 4 n - phi + 1: 18 78 = 2 * 3 * 13 divisor count: 8 n - phi + 1: 55 79 = 79 divisor count: 2 n - phi + 1: 2 EQUAL 80 = 2^4 * 5 divisor count: 10 n - phi + 1: 49 81 = 3^4 divisor count: 5 n - phi + 1: 28 82 = 2 * 41 divisor count: 4 n - phi + 1: 43 83 = 83 divisor count: 2 n - phi + 1: 2 EQUAL 84 = 2^2 * 3 * 7 divisor count: 12 n - phi + 1: 61 85 = 
5 * 17 divisor count: 4 n - phi + 1: 22 86 = 2 * 43 divisor count: 4 n - phi + 1: 45 87 = 3 * 29 divisor count: 4 n - phi + 1: 32 88 = 2^3 * 11 divisor count: 8 n - phi + 1: 49 89 = 89 divisor count: 2 n - phi + 1: 2 EQUAL 90 = 2 * 3^2 * 5 divisor count: 12 n - phi + 1: 67 91 = 7 * 13 divisor count: 4 n - phi + 1: 20 92 = 2^2 * 23 divisor count: 6 n - phi + 1: 49 93 = 3 * 31 divisor count: 4 n - phi + 1: 34 94 = 2 * 47 divisor count: 4 n - phi + 1: 49 95 = 5 * 19 divisor count: 4 n - phi + 1: 24 96 = 2^5 * 3 divisor count: 12 n - phi + 1: 65 97 = 97 divisor count: 2 n - phi + 1: 2 EQUAL 98 = 2 * 7^2 divisor count: 6 n - phi + 1: 57 99 = 3^2 * 11 divisor count: 6 n - phi + 1: 40 100 = 2^2 * 5^2 divisor count: 9 n - phi + 1: 61 Thu Dec 26 13:34:20 PST 2019 </code></pre>
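The table above shows the proposed identity for $L$ fails, but the parity claim in the question itself is easy to check by machine (my own sketch; it reflects the classical fact that $L(n)$ is odd exactly when $n$ is a perfect square, so the sum's parity is the number of squares up to $N$, i.e. $\lfloor\sqrt N\rfloor$):

```python
import math

def divisor_count(n):
    cnt = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            cnt += 1 if d * d == n else 2
    return cnt

total = 0
for N in range(1, 2001):
    total += divisor_count(N)
    assert total % 2 == math.isqrt(N) % 2
print("sum_{n<=N} L(n) has the same parity as floor(sqrt(N)) for all N <= 2000")
```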
2,523,342
<p>Assume $f\in L^p(\Bbb R^d) $ and $g\in L^q(\Bbb R^d) $, where $1&lt;p&lt;\infty$ and $1&lt;q&lt;\infty$ are dual exponents, namely $$\frac1p+\frac1q =1$$ Then for every $s\in\Bbb R$ such that $sp\le d$ show that $$\lim_{j\to\infty} \int_{\Bbb R^d}f_j(x)g(x)dx = 0$$</p> <p>where $$f_j(x) = j^sf(jx)~~~$$</p> <p><strong>My attempt:</strong></p> <p>I applied Hölder's inequality and used the change of variables $u =jx$ to get the following,</p> <p>\begin{align}\left|\int_{\Bbb R^d}f_j(x)g(x)dx\right| &amp;\le \left(\int_{\Bbb R^d}|f_j(x)|^pdx\right)^{1/p} \left(\int_{\Bbb R^d}|g(x)|^qdx\right)^{1/q}\\&amp;=\frac{1}{j^{\frac dp-s}}\left(\int_{\Bbb R^d}|f(x)|^pdx\right)^{1/p} \left(\int_{\Bbb R^d}|g(x)|^qdx\right)^{1/q} \\&amp;=\frac{1}{j^{\frac dp-s}}\|f\|_{p}\|g\|_{q}\end{align}</p> <p>If I suppose that $\color{blue}{d&gt;sp}$, then $\frac{1}{j^{\frac dp-s}}\to 0$ and the result follows. </p> <blockquote> <p><strong>Question:</strong> How do I prove the case where $sp= d$?</p> </blockquote>
David C. Ullrich
248,223
<p>Note it's not true for $p=1$. Assume $p&gt;1$. Come to think of it, it's also false for $p=\infty$. Assume $p&lt;\infty$.</p> <p>Let $\epsilon&gt;0$. Choose $A&gt;0$ so that $$\left(\int_{|x|\le A}|g(x)|^q\right)^{1/q}&lt;\epsilon.$$(This is where $q&lt;\infty$ is needed.) An explicit calculation shows that $$\left(\int_{|x|&gt;A}|f_j(x)|^p\right)^{1/p}\to0.$$(This uses $p&lt;\infty$.)</p> <p><strong>Edit:</strong> What's above will look like a solution to many readers. Others may be wondering how those two lines prove what's asserted. The (standard) argument goes like so: Assume $\epsilon$ and $A$ are as above. Choose $N$ so that $$\left(\int_{|x|&gt;A}|f_j(x)|^p\right)^{1/p}&lt;\epsilon\quad(j&gt;N).$$ Noting that $||f_j||_p=||f||_p$ (since $sp=d$), for $j&gt;N$ we have $$\left|\int f_jg\right|\le\left|\int_{|x\le A}f_jg\right|+\left|\int_{|x|&gt;A}f_jg\right|\le||f||_p\epsilon+\epsilon||g||_q.$$</p>
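The scale invariance $\|f_j\|_p=\|f\|_p$ when $sp=d$, used in the last display, can be checked numerically (a sketch with my own sample profile $f(x)=e^{-x^2}$ in the case $d=1$, $p=2$, $s=1/2$):

```python
import math

# Check that ||f_j||_p = ||f||_p when s*p = d, in the simplest case
# d = 1, p = 2, s = 1/2, with the (hypothetical) profile f(x) = exp(-x^2),
# so that |f_j(x)|^2 = j * exp(-2 (j x)^2).
def norm_sq(j, half_width=12.0, n=100000):
    h = 2 * half_width / n          # midpoint rule on [-12, 12]
    total = 0.0
    for i in range(n):
        x = -half_width + (i + 0.5) * h
        total += j * math.exp(-2 * (j * x) ** 2)
    return total * h

base = norm_sq(1)                   # exact value is sqrt(pi/2)
for j in [2, 5, 10]:
    assert abs(norm_sq(j) - base) < 1e-5
print("||f_j||_2^2 is independent of j when s*p = d")
```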
422,225
<p>The proof uses this lemma which I understand: </p> <p>$\mathbf {Lemma}$: Suppose $x$ and $y$ are positive real numbers such that $x&gt;y$. If we decrease $x$ and increase $y$ by some positive quantity $E$ such that $x-E \ge y+E$, then $(x-E)(y+E) \gt xy$ . $\;$Hence, by subtracting $E$ from $x$ and adding it to $y$, we leave the average of the two numbers unchanged while increasing their product. </p> <p>$\mathbf {Proof}:$ Suppose $a_{1}, a_{2}, a_{3}... a_{n}$ are positive real numbers with average $A$ and product $P$. If all $a_{i}$ are equal, then both the geometric mean and the arithmetic mean are equal to $A$. Let $a_{j}$ be one number closest to $A$ without being equal to $A$. Without loss of generality, let $a_{j} \lt A$ . Since the average of the numbers is $A$, there is at least one member of the set greater than $A$. Let $a_{k}$ be the greatest of these numbers. Clearly we must have $a_{k}-A \gt A-a_{j}$ since $a_j$ is closer to $A$ than any other $a_i$ not equal to $A$. We now use our lemma. Replace $a_j$ with $A$ and $a_k$ with $a_k-(A-a_j)$ . Note that $a_k-(A-a_j) \ge a_j +(A-a_j)$ , so we can apply our lemma with $(A-a_j)$ as our $E$ . By our lemma, the average of the numbers in the new set is the same, but the product is now higher. If we continue this process, we make one of the members of the set equal to $A$ with each application of the process. Hence, in some finite number of steps, we will make all the numbers equal to $A$. Thus, we prove that of all the sets of positive numbers with average $A$, the set with maximum product has all the elements equal to $A$.</p> <p>There are three things I don't understand about this proof:</p> <p>$1)$ I don't understand why they don't lose generality when they say to let $a_j$ be the number closest to $A$ and let $a_j \lt A$. It certainly is possible for this not to be the case, for example the set $\{2, 10, 10, 10\}$. 
The average is $8$, but the number closest to $A$ is greater than $A$, so I don't see how the proof can apply to this set. </p> <p>$2)$ I don't see how this process makes the elements of the set equal to $A$. If you want $a_j+E$ and $a_k-E$ to be equal to $A$, then $a_j$ and $a_k$ have to be equidistant from $A$. </p> <p>$3)$ If you do bring one pair of terms at a time equal to $A$, then that means you must have an equal number of terms below and above $A$.</p> <p>Any help is appreciated, thanks!</p>
Hagen von Eitzen
39,174
<p>1) I'd suggest choosing <em>any</em> indices $j,k$ with $a_j&lt;A&lt;a_k$ (which exist unless all $a_i$ are equal) and letting $E=\min\{a_k-A,A-a_j\}$ in the lemma.</p> <p>2) $E$ is specifically chosen so that at least one of $a_j+E$, $a_k-E$ equals $A$. The text explicitly makes $a_j=A$, i.e. chooses $E=A-a_j$, and decreases $a_k$ accordingly (I do similarly in my suggestion for 1)). It is enough to make only <em>one</em> of these equal to $A$ in order to increase the count.</p> <p>3) You can only be sure to have at least one bigger and at least one smaller number. Only in the last step are you sure to "accidentally" bring two numbers at once to the average $A$ (because it is not possible that all numbers but one are equal to $A$).</p>
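Suggestion 1) can be run mechanically; the sketch below (function names mine) applies $E=\min\{a_k-A,\,A-a_j\}$ repeatedly and checks that the product grows at each step, including on the set $\{2,10,10,10\}$ from the question:

```python
# Applies the suggestion in 1): pick any a_j < A < a_k and shift both toward A
# by E = min(a_k - A, A - a_j).  Function names are mine; arithmetic is exact
# for the dyadic sample values used below.
def smooth(nums):
    nums = list(nums)
    A = sum(nums) / len(nums)
    steps = 0
    while any(x != A for x in nums):
        j = min(range(len(nums)), key=lambda i: nums[i])   # some a_j < A
        k = max(range(len(nums)), key=lambda i: nums[i])   # some a_k > A
        old_product = 1.0
        for x in nums:
            old_product *= x
        E = min(nums[k] - A, A - nums[j])
        nums[j] += E
        nums[k] -= E
        new_product = 1.0
        for x in nums:
            new_product *= x
        assert new_product > old_product    # the lemma: product strictly grows
        steps += 1
    return nums, steps

final, steps = smooth([2, 10, 10, 10])      # the troublesome set from 1)
assert final == [8.0, 8.0, 8.0, 8.0] and steps == 3
```

Each step fixes at least one entry to $A$, so the process terminates in at most $n-1$ steps.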
3,009,345
<p>I ran into this question which hints me to use Cauchy's Integral Theorem for Derivatives, however I don't seem to be able to fit this integral into the form of the Integral Formula</p> <p><span class="math-container">$$\displaystyle \int_{|z|=2} \frac{\cos(z)}{z(z^2+8)}dz$$</span> I tried using the fact that <span class="math-container">$\displaystyle \int_\gamma f(z)dz=\int_a^b f(\gamma(t))\gamma'(t)dt$</span> for <span class="math-container">$\gamma(t)$</span> where <span class="math-container">$t \in [a, b]$</span>. But got nowhere. Is there a way I can transform the given integral into the form in which I can use the Integral Formula as stated below?</p> <p><span class="math-container">$$f^{(k)}(w)=\frac{k!}{2\pi i}\int_\gamma \frac{f(z)}{(z-w)^{k+1}}dz$$</span> where <span class="math-container">$f^{(k)}(w)$</span> is the <span class="math-container">$k^{th}$</span> derivative of <span class="math-container">$f$</span></p>
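An addendum, not part of the original question: the integrand's only singularity inside $|z|=2$ is $z=0$ (the poles $\pm 2\sqrt{2}\,i$ of $1/(z^2+8)$ lie outside), so the Integral Formula with $f(z)=\cos(z)/(z^2+8)$, $w=0$, $k=0$ gives $2\pi i\, f(0)=\pi i/4$. A direct numerical parametrization of the circle agrees:

```python
# Evaluate the contour integral over |z| = 2 with z(t) = 2 e^{it},
# dz = 2i e^{it} dt, using the trapezoidal rule (spectrally accurate
# for smooth periodic integrands).
import cmath
import math

def integrand(z):
    return cmath.cos(z) / (z * (z ** 2 + 8))

n = 4000
total = 0j
for k in range(n):
    t = 2 * math.pi * k / n
    z = 2 * cmath.exp(1j * t)
    dz = 2j * cmath.exp(1j * t)
    total += integrand(z) * dz
total *= 2 * math.pi / n

expected = 2j * math.pi / 8  # 2*pi*i * f(0), where f(z) = cos(z)/(z^2 + 8)
```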
Math Lover
348,257
<p>Note that <span class="math-container">$Y&lt;X^3$</span> and <span class="math-container">$XY&lt;z \implies Y&lt;z/X$</span>; i.e., <span class="math-container">$Y &lt; \min\{X^3,z/X\}$</span>. </p> <p>For <span class="math-container">$X^3&lt;z/X$</span>, or <span class="math-container">$X &lt; z^{1/4}$</span>, we have <span class="math-container">$$Y &lt; \min\{X^3,z/X\} = X^3;$$</span> otherwise <span class="math-container">$$Y &lt; \min\{X^3,z/X\} = z/X.$$</span> Consequently,</p> <p><span class="math-container">$$\Pr\{XY &lt;z\} = \int_{0}^{z^{1/4}}\int_{0}^{x^3}\frac{1}{4}dy\,dx + \int_{z^{1/4}}^{2}\int_{0}^{z/x}\frac{1}{4}dy\,dx,$$</span> where <span class="math-container">$0 &lt; z &lt; 16$</span>.</p>
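A Monte Carlo sketch of the calculation above. The question this answers is not shown here, so the setup is my assumption: $(X,Y)$ uniform on the region $\{0 \lt x \lt 2,\ 0 \lt y \lt x^3\}$, which has area $4$ and hence the joint density $1/4$ used in the integrals.

```python
# Monte Carlo check of Pr{XY < z} for (X, Y) uniform on {0 < x < 2, 0 < y < x^3}.
# Closed form of the two integrals above: z/16 + (z/4) * (ln 2 - (1/4) ln z).
import math
import random

z = 1.0
exact = z / 16 + (z / 4) * (math.log(2) - math.log(z) / 4)

random.seed(0)
hits = accepted = 0
for _ in range(400_000):
    x = random.uniform(0.0, 2.0)
    y = random.uniform(0.0, 8.0)   # bounding box [0,2] x [0,8]
    if y < x ** 3:                 # rejection step: keep uniform points in the region
        accepted += 1
        if x * y < z:
            hits += 1
estimate = hits / accepted
```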
2,064,643
<p>Let $a_n,b_n &gt; 0, \sum_{n=1}^{\infty}a_n &lt; \infty, \sum_{n=1}^{\infty}b_n = \infty$. Is it possible for the Cauchy product of the two series to converge?</p>
Tsemo Aristide
280,301
<p>Hint: $a_n={1\over{n^3}}$, $b_n=n$</p>
2,780,731
<p>In school, I have recently been learning about simple differential equations. We know that the solution of $y'=y$ is $y=Ae^x$, where $A$ is a constant. But how can we know that it is the <strong>only</strong> solution? The only thing I can figure out is that $y$ is continuously differentiable. Help me, please.</p>
Artem
29,547
<p>Let $z$ be a solution to $y'=y$. Consider $z(t)e^{-t}$. We have $$ \frac{d}{dt}(z(t)e^{-t})=z'(t)e^{-t}-z(t)e^{-t}=0, $$ therefore $$ z(t)e^{-t}=Const\implies z(t)=Ae^{t}. $$</p>
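The key step — that $z(t)e^{-t}$ has zero derivative for any solution of $z'=z$ — can be checked symbolically; a sketch using sympy (my addition, not part of the original answer):

```python
# Symbolic verification that (z(t) e^{-t})' = 0 whenever z' = z.
import sympy as sp

t = sp.symbols('t')
z = sp.Function('z')

expr = sp.diff(z(t) * sp.exp(-t), t)                    # z'(t) e^{-t} - z(t) e^{-t}
on_solutions = expr.subs(sp.Derivative(z(t), t), z(t))  # impose the ODE z' = z
result = sp.simplify(on_solutions)                      # 0, so z(t) e^{-t} is constant

# sympy's ODE solver agrees: the general solution is C1 * exp(t).
sol = sp.dsolve(sp.Eq(z(t).diff(t), z(t)), z(t))
```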
4,315,449
<p>Is it correct to say that <span class="math-container">$\mathbb{R}^n \subset \mathbb{R}^n$</span> ?</p> <p><strong>EDIT</strong>: The context of may question is that I am having a function that is defined as <span class="math-container">$f \colon D \subset \mathbb{R}^n \to \mathbb{R}^n $</span> and I am wondering if I can just generalize the definition as <span class="math-container">$f \colon \mathbb{R}^n \to \mathbb{R}^n $</span>. The symbol <span class="math-container">$\subset$</span> refers to set inclusion.</p> <p><sup> I was just reading &quot;J. M. Ortega, Iterative Solution of Nonlinear Equations in Several Variables&quot;. I hope I will not cause more agitation. </sup></p>
Lorenzo Pompili
884,561
<p>The symbol “<span class="math-container">$\subset$</span>” could mean different things depending on the author.</p> <p>In your case, I suggest trying to figure it out from the context. If there are no other restrictions on <span class="math-container">$D$</span> (e.g. bounded,compact,…), then it is very likely that you can also take <span class="math-container">$D=\mathbb R^n$</span>. If you can’t figure it out in this example, try to find other occurrences in the book. Also, check if by any chance there is a section of the book dedicated to notations.</p> <p>Just a small comment: in analysis I can’t recall so many statements that hold for a certain class of possibly unbounded subsets <span class="math-container">$\mathbb R^n$</span> (or <span class="math-container">$\mathbb C^n$</span>) with the only exception being the whole space. Two examples that I have in mind are the Riemann mapping theorem and the strict inclusion between the Sobolev spaces <span class="math-container">$W^{1,p}(\Omega)$</span> and <span class="math-container">$W^{1,p}_0(\Omega)$</span>.</p>
1,341,486
<p>Problem: Find the sum to $n$ terms of \begin{eqnarray*} \frac{1}{1\cdot 2\cdot 3} + \frac{3}{2\cdot 3\cdot 4} + \frac{5}{3\cdot 4\cdot 5} + \frac{7}{4\cdot 5\cdot 6}+\cdots \\ \end{eqnarray*} Answer: The way I see it, the problem is asking me to find this series: \begin{eqnarray*} S_n &amp;=&amp; \sum_{i=1}^{n} {a_i} \\ \text{with } a_i &amp;=&amp; \frac{2i-1}{i(i+1)(i+2)} \\ \end{eqnarray*} We have: \begin{eqnarray*} S_n &amp;=&amp; S_{n-1} + a_n \\ S_n &amp;=&amp; S_{n-1} + \frac{2n-1}{n(n+1)(n+2)} \\ \end{eqnarray*} I am tempted to apply the technique of partial fractions but I believe there is no closed formula for a series of the form:</p> <p>\begin{eqnarray*} \sum_{i=1}^{n} \frac{1}{i+k} \\ \end{eqnarray*} where $k$ is a fixed constant. Therefore I am stuck. I am hoping that somebody can help me.</p> <p>Thanks Bob</p>
Michael Galuza
240,002
<p>There is a simpler way (for me). You have $$a_n=\frac{2n-1}{n(n+1)(n+2)}=-\frac52\frac{1}{n+2} + \frac{3}{n+1} - \frac{1}{2n};$$ hence $$S_N = \sum_{n=1}^N a_n = -\frac52\left(H_{N+2}-1-\frac12\right) + 3(H_{N+1}-1) - \frac12 H_N,$$ where $H_N$ is the $N$-th harmonic number. Simplifying: $$S_N = -\frac52\left(H_N + \frac{1}{N+1} + \frac{1}{N+2}-\frac32\right) + 3\left(H_N+\frac{1}{N+1}-1\right) - \frac12H_N=\\= \frac34 + \frac{1}{2(N+1)}-\frac{5}{2(N+2)}.$$</p>
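Both the partial fraction decomposition and the final closed form can be verified in exact rational arithmetic (a sketch, not part of the original answer):

```python
# Verify a_n = -1/(2n) + 3/(n+1) - 5/(2(n+2)) and
# S_N = 3/4 + 1/(2(N+1)) - 5/(2(N+2)) in exact rational arithmetic.
from fractions import Fraction

def a(n):
    return Fraction(2 * n - 1, n * (n + 1) * (n + 2))

def a_decomposed(n):
    return -Fraction(1, 2 * n) + Fraction(3, n + 1) - Fraction(5, 2 * (n + 2))

def S_closed(N):
    return Fraction(3, 4) + Fraction(1, 2 * (N + 1)) - Fraction(5, 2 * (N + 2))

decomposition_matches = all(a(n) == a_decomposed(n) for n in range(1, 60))
partial_sums_match = all(
    sum(a(n) for n in range(1, N + 1)) == S_closed(N) for N in range(1, 60)
)
```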
872,657
<p>For $1 \leq r &lt; p &lt; \infty$ prove the continuous injection of $L^p([0, 1])$ into $L^r([0, 1])$. </p> <p>I am having a hard time starting. Any suggestions. I tried a straight forward approach. That is, given $\epsilon &gt; 0$, I tried to find a $\delta &gt;0$ such that $||f - g||_p &lt; \delta$ implies that $||f - g||_r &lt; \epsilon.$</p> <p>Thanks for any help.</p>
Mohammad Khosravi
87,886
<p>M. Slater, "Lagrange Multipliers Revisited," Cowles Commission Discussion Paper No. 403, November, 1950</p>
858,576
<p>Prove that the union of three subspaces of V is a subspace iff one of the subspaces contains the other two.</p> <p>I can do this problem when I am working in only two subspaces of $V$ but I don't know how to do it with three. </p> <p>What I tried is: If one of the subspaces contains the other two, Then their union is obviously a subspace because the subspace that contains them is a subspace. (Is this sufficient??).</p> <p>If the union of three subspaces is a subspace..... How do I prove that one of the subspaces must contain the other two from here?</p> <p>*When proving this for two I said that there is an element in one of the subspaces that is not the other and proved by contradiction that one of the subspaces must be contained in the other. How would I do this for three?</p>
W.Leywon
300,668
<p>Gina gave an excellent answer; in fact, we can have: If $V$ is a vector space over the field $F$ and there is a finite collection of subspaces of $V$, $\{U_1,U_2,U_3,\cdots ,U_n\}$, where $n$, the number of elements of the collection, is not more than the cardinality of $F$ (when $F$ is finite; or $F$ is simply infinite), then the union of the subspaces $U_1,U_2,U_3,\cdots ,U_n$ is a subspace of $V$ if and only if one of the subspaces $U_1,U_2,U_3,\cdots ,U_n$ contains all the others. The proof is similar to the way one proves that "a vector space over an infinite field cannot be a finite union of proper subspaces of its own", arguing by contradiction. (Use the pigeonhole principle to deduce an absurdity: imagine that the elements of $F$ "fly" into the subspaces $U_1,U_2,U_3,\cdots ,U_n$.)</p>
858,576
<p>Prove that the union of three subspaces of V is a subspace iff one of the subspaces contains the other two.</p> <p>I can do this problem when I am working in only two subspaces of $V$ but I don't know how to do it with three. </p> <p>What I tried is: If one of the subspaces contains the other two, Then their union is obviously a subspace because the subspace that contains them is a subspace. (Is this sufficient??).</p> <p>If the union of three subspaces is a subspace..... How do I prove that one of the subspaces must contain the other two from here?</p> <p>*When proving this for two I said that there is an element in one of the subspaces that is not the other and proved by contradiction that one of the subspaces must be contained in the other. How would I do this for three?</p>
Jia Cheng Sun
669,184
<p>With regard to Gina and Jeff's answers, I believe the simplification at the start can be made easier (at least in my opinion).</p> <p>Suppose that subspaces <span class="math-container">$U_1,U_2,U_3$</span> union to form a subspace. We have 2 cases.</p> <p>Case 1: Suppose <span class="math-container">$U_i\subseteq \cup_{j\neq i}U_j$</span> for some <span class="math-container">$i\in \{1,2,3\}$</span>. WLOG, let <span class="math-container">$i=1$</span>. Then the problem is reduced to the case of 2 subspaces <span class="math-container">$U_j, j=2,3$</span>. WLOG again, we have <span class="math-container">$U_2\subseteq U_3$</span>, then <span class="math-container">$U_1\subseteq U_2 \cup U_3 = U_3$</span>. Hence <span class="math-container">$U_3$</span> contains the other 2 sets.</p> <p>Case 2: <span class="math-container">$\forall i\in \{1,2,3\}, U_i\not\subseteq \cup_{j\neq i}U_j$</span>. Then for each <span class="math-container">$i$</span>, there exists <span class="math-container">$\mathbf{u}_i\in U_i$</span> such that <span class="math-container">$u_i\notin \cup_{j\neq i}U_j$</span>, i.e. <span class="math-container">$u_i\notin U_j$</span> for <span class="math-container">$j\neq i$</span>.</p> <p>After this, we can make a similar argument to that of Gina's. For simplicity, I assume the field in concern to be the real/complex field like in Axler's book.</p> <p><span class="math-container">$\mathbf{u}_1 + \mathbf{u}_2, 2\mathbf{u}_1 + \mathbf{u}_2, \mathbf{u}_1 + 2\mathbf{u}_2$</span> must all be in <span class="math-container">$U_3\setminus (U_1 \cup U_2)$</span>. (Note that all these vectors are in <span class="math-container">$\cup_{1\leq i\leq 3} U_i$</span>.) In particular, these vectors lie in <span class="math-container">$U_3$</span>. Hence, <span class="math-container">$\mathbf{u}_1, \mathbf{u}_2 \in U_3$</span>, a contradiction.</p> <p>Hence only case 1 is possible and we are done.</p>
76,853
<p>I have a list of stock symbols and related information containing some entries <code>Missing["NotAvailable"]</code>. I would like to delete all nested lists which contain a NotAvaiable entry, as <em>Mathematica</em> obviously does not support these instruments anymore (see also <a href="http://reference.wolfram.com/mathematica/ref/FinancialData.html" rel="nofollow">http://reference.wolfram.com/mathematica/ref/FinancialData.html</a>).</p> <p>The list entries are formated as follows:</p> <pre><code>indexMaster= {"^RDM-SO", Missing["NotAvailable"], "AMEX"} </code></pre> <p>I tried to use the following function, but it does not work. </p> <pre><code>instruments = DeleteCases[indexMaster, {p__, q_String, r__} /; StringMatchQ[q, "*NotAvailable*"] -&gt; {p, q, r}] </code></pre> <p>Does anyone have an idea how to delete the <code>Missing["NotAvailable"]</code> entries? </p> <p>Thanks</p>
kglr
125
<pre><code>indexMaster = {"^RDM-SO", Missing["NotAvailable"], "AMEX"}

Select[indexMaster, Internal`LiterallyAbsentQ[#, "NotAvailable"] &amp;]
(* {"^RDM-SO", "AMEX"} *)

indexMaster2 = {"^RDM-SO", Missing["NotAvailable"], Missing[], "AMEX", "NotAvailable"};

Select[indexMaster2, Internal`LiterallyAbsentQ[#, "NotAvailable"] &amp;]
(* {"^RDM-SO", Missing[], "AMEX"} *)

DeleteCases[indexMaster2, _?(Internal`LiterallyOccurringQ[#, "NotAvailable"] &amp;)]
(* {"^RDM-SO", Missing[], "AMEX"} *)
</code></pre>
891,575
<p>The circumference of a circle has length 90 centimeters. Three points on the circle divide the circle into three equal lengths. Three ants A, B, and C start to crawl clockwise on the circle, each starting from one of the three points. Initially A is ahead of B and B is ahead of C. Ant A crawls 3 centimeters per second, ant B 5 centimeters, and ant C 10 centimeters. How long does it take for the three ants to arrive at the same spot for the first time?</p> <p>I tried making a list and writing down the numbers, but they seem to never be the same. I know the distance formula is <em>d=rt</em>, but I don't know how to use it to solve this problem. Any help? Thanks!</p>
David
119,775
<p><strong>Hint</strong>. Suppose that the ants meet after $t$ seconds, and measure distance around the circle from where C starts. Then A has travelled $3t$ centimetres, but had a $60$ centimetre start for a total of $60+3t$ from the initial point. Likewise B will be a distance $30+5t$ from the initial point. However B may have travelled a number of times around the circle, say $x$ times more than A, and therefore has travelled $90x$ centimetres further than A. So we have the equation $$60+3t+90x=30+5t\ .$$ See if you can explain by using similar ideas why $$60+3t+90y=10t\ ,$$ where $y$ is the number of times C has "lapped" A.</p> <p>Now eliminate $t$ from these two equations; then find the smallest possible values of $x$ and $y$, remembering that while $t$ could be any positive number, $x$ and $y$ must be positive integers.</p> <p>Good luck!</p>
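Carrying the hint to its conclusion numerically (my addition; the offsets $60$ and $30$ cm are the answer's convention of measuring clockwise from C's start): A and B coincide exactly when $2t \equiv 30 \pmod{90}$, i.e. $t = 15 + 45k$, so it suffices to scan those times for one at which C coincides too.

```python
# First time all three ants are at the same point of the 90 cm circle.
# Positions (mod 90): A at 60 + 3t, B at 30 + 5t, C at 10t.
t_meet = None
for k in range(100):
    t = 15 + 45 * k                 # all solutions of 2t = 30 (mod 90)
    a_pos = (60 + 3 * t) % 90
    b_pos = (30 + 5 * t) % 90
    c_pos = (10 * t) % 90
    if a_pos == b_pos == c_pos:
        t_meet = t
        break
```

The search returns $t=60$ seconds, at which point all three ants sit $60$ cm clockwise from C's starting point.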
1,521,779
<p>I have a homework question that I want to make sure I'm getting it right.</p> <p>This is a joint probability table for the proportions of survey respondents who smoke and who have had heart attacks.</p> <p><kbd>&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</kbd><kbd>Smoker&nbsp;</kbd><kbd>Non-Smoker</kbd><br/> <kbd>Heart Attack&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</kbd><kbd>0.03&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</kbd><kbd>0.03&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</kbd><br/> <kbd>No Heart Attack</kbd><kbd>0.44&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</kbd><kbd>0.50&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;</kbd><br/></p> <p>If a person is a smoker, are they more likely to have had a heart attack than someone who is not a smoker?</p> <p>So I know that $P(A|B) = \dfrac{P(A \cap B)}{P(B)}$</p> <p>So I could think of the question like: <em>Find the conditional probability that a randomly selected person is a victim of a heart-attack, given that they’re a smoker/ non-smoker.</em></p> <p>$ P(X = Smoker \cap Y = Heart Attack) = 0.03 $<br> $ P(X = NonSmoker \cap Y = Heart Attack) = 0.03 $<br> $P(Y) = 0.06$</p> <p>That would mean that a smoker and a non-smoker would both have a $\dfrac{0.03}{0.06}$ chance of having a heart attack?</p> <p>This defies my intuition because $\dfrac{3}{47}$ smokers get heart attacks, while only $\dfrac{3}{53}$ non-smokers get heart attacks.</p>
N. S.
9,176
<p>Hopefully I didn't make any mistake.</p> <p>Note that the squares modulo <span class="math-container">$9$</span> are <span class="math-container">$0,1,4,7$</span>.</p> <p>Now <span class="math-container">$$2017 = a^2 + b^2 \Rightarrow \\ 1 \equiv a^2+b^2 \pmod{9}$$</span></p> <p>This gives that, WLOG, we have <span class="math-container">$$ a^2 \equiv 0 \pmod{9} \\ b^2 \equiv 1 \pmod{9} $$</span> and hence <span class="math-container">$a=3k$</span> and <span class="math-container">$b=9l \pm 1$</span>.</p> <p>Then, <span class="math-container">$$ 2017 = 9k^2+ 81l^2 \pm 18l +1 \Rightarrow \\ 224 = k^2+ 9l^2 \pm 2l $$</span> Finally, since <span class="math-container">$9l^2-2l \geq 7l^2$</span> for <span class="math-container">$l \geq 1$</span>, <span class="math-container">$$ 224 \geq 9l^2 \pm 2l \geq 9l^2-2l \geq 7l^2 \Rightarrow \\ l \leq 5 $$</span></p> <p>A case-by-case analysis is now very fast.</p>
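A brute-force cross-check (my addition, not in the original answer): the only way to write $2017=a^2+b^2$ with $0 \lt a \le b$ is $(a,b)=(9,44)$, consistent with $a \equiv 0 \pmod 3$ and $b \equiv -1 \pmod 9$.

```python
# Enumerate all representations of 2017 as a sum of two positive squares.
import math

n = 2017
solutions = sorted(
    (a, b)
    for a in range(1, math.isqrt(n) + 1)
    for b in range(a, math.isqrt(n) + 1)
    if a * a + b * b == n
)
```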
2,012,532
<p>The following is all confirmed to be true:</p> <p>Matrix A = $ \begin{bmatrix} 0 &amp; 1 &amp; -2 \\ -1 &amp; 2 &amp; -1 \\ 2 &amp; -4 &amp; 3 \\ 1 &amp; -3 &amp; 2 \\ \end{bmatrix} $</p> <p>U = $ \begin{bmatrix} -1 &amp; 2 &amp; -1 \\ 0 &amp; 1 &amp; -2 \\ 0 &amp; 0 &amp; 1 \\ 0 &amp; 0 &amp; 0 \\ \end{bmatrix} $</p> <p>L = $ \begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; 0 &amp; 0\\ -2 &amp; 0 &amp; 1 &amp; 0\\ -1 &amp; -1 &amp; -1 &amp; 0\\ \end{bmatrix} $</p> <p>Okay so using that I need to solve the following system:</p> <p>$ x_2 - 2x_3 = 0 \\ -x_1 + 2x_2 - x_3 = -2 \\ 2x_1 -4x_2 + 3x_3 = 5 \\ x_1 - 3x_2 + 2x_3 = 1 $</p> <p>So step one is solving $Ly = b$, where $y = Ux$</p> <p>So that is:</p> <p>$ y_1 = 0\\ y_2 = -2\\ -2y_1 + y_3 = 5 \\ -y_1 - y_2 -y_3 = 1 \\ $</p> <p>How can we find $y_3$ in the last two equations? Because,</p> <p>$ -2(0) + y_3 = 5 \\ -(0) - (-2) - y_3 = 1 \\ $</p> <p>So in the second to last equation $y_3 = 5$, but in the last equation $y_3 = 1$. Very confused.</p>
dantopa
206,581
<p>$$ \begin{align} \mathbf{P} \mathbf{A} &amp;= \mathbf{L} \mathbf{U} \\ % P \left[ \begin{array}{cccc} 0 &amp; \boxed{1} &amp; 0 &amp; 0 \\ \boxed{1} &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1 \\ \end{array} \right] % A \left[ \begin{array}{rrr} 0 &amp; 1 &amp; -2 \\ -1 &amp; 2 &amp; -1 \\ 2 &amp; -4 &amp; 3 \\ 1 &amp; -3 &amp; 2 \\ \end{array} \right] % &amp;= % L \left[ \begin{array}{rrr} 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 \\ -2 &amp; 0 &amp; 1 &amp; 0 \\ -1 &amp; -1 &amp; -1 &amp; 1 \\ \end{array} \right] % U \left[ \begin{array}{rrr} -1 &amp; 2 &amp; -1 \\ 0 &amp; 1 &amp; -2 \\ 0 &amp; 0 &amp; 1 \\ 0 &amp; 0 &amp; 0 \\ \end{array} \right] \end{align} $$</p>
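A numerical sketch (not part of the original answer) confirming $\mathbf{P}\mathbf{A}=\mathbf{L}\mathbf{U}$ and then running the two triangular solves $\mathbf{L}\mathbf{y}=\mathbf{P}\mathbf{b}$, $\mathbf{U}\mathbf{x}=\mathbf{y}$; permuting $\mathbf{b}$ along with $\mathbf{A}$ is what removes the inconsistency in the forward substitution:

```python
# Verify P A = L U numerically, then solve A x = b via L y = P b and U x = y.
import numpy as np

P = np.array([[0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]], float)
A = np.array([[0, 1, -2], [-1, 2, -1], [2, -4, 3], [1, -3, 2]], float)
L = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [-2, 0, 1, 0], [-1, -1, -1, 1]], float)
U = np.array([[-1, 2, -1], [0, 1, -2], [0, 0, 1], [0, 0, 0]], float)
b = np.array([0, -2, 5, 1], float)

factorization_ok = np.allclose(P @ A, L @ U)

# Forward substitution L y = P b (L is unit lower triangular).
pb = P @ b
y = np.zeros(4)
for i in range(4):
    y[i] = pb[i] - L[i, :i] @ y[:i]

# Back substitution on the 3x3 top block of U; y[3] must vanish for consistency.
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (y[i] - U[i, i + 1:3] @ x[i + 1:]) / U[i, i]

residual_ok = np.allclose(A @ x, b)
```

The last component of $\mathbf{y}$ comes out $0$, matching the zero bottom row of $\mathbf{U}$, and the solution is $\mathbf{x}=(5,2,1)$.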
2,867,404
<p>I have a problem how to get the area from the picture. Some ideas I got are not good enough to get the correct value of the whole element.</p> <p><a href="https://i.stack.imgur.com/PeQ5O.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PeQ5O.jpg" alt="enter image description here"></a></p>
高田航
407,845
<p><strong>Hint</strong>: You know the width is $13.62$, and the maximum height it reaches is $1.65$, so it must fit within a rectangle of those dimensions.</p>
2,867,404
<p>I have a problem how to get the area from the picture. Some ideas I got are not good enough to get the correct value of the whole element.</p> <p><a href="https://i.stack.imgur.com/PeQ5O.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PeQ5O.jpg" alt="enter image description here"></a></p>
Servaes
30,382
<p><strong>Hint:</strong> Draw straight lines between the upper left corner and the midline, and upper right corner and the midline. Now you have two rectangles and two triangles. Can you compute their areas?</p>
1,311,367
<p>Recently, I have been considering a question. As is well known, Cauchy's inequality is a famous and useful inequality: $$\left|\int_{a}^{b}f(x)dx\right|^2\leq|b-a| \int_{a}^{b}f^2(x)dx.$$ My question is: can we obtain an inequality such that $$\left|\int_{a}^{b}f(x)dx\right|^2\ge |A|\times \left|\int_{a}^{b}f(x)^2dx\right| $$ holds? Namely, can we find something about $A$ such that the inequality $|\int_{a}^{b}f(x)dx|^2\ge |A|\times |\int_{a}^{b}f(x)^2dx| $ holds? Can anyone help me? Thanks!</p>
robjohn
13,854
<p>Your inequalities are reversed. If you reverse both, you get the following: $$ \left|\int_a^bf(x)\,\mathrm{d}x\right|^2\le(b-a)\int_a^b|f(x)|^2\,\mathrm{d}x $$ This inequality follows from <a href="http://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality" rel="nofollow">Cauchy-Schwarz</a> or <a href="http://en.wikipedia.org/wiki/H%C3%B6lder%27s_inequality" rel="nofollow">Hölder's Inequality</a> or <a href="http://en.wikipedia.org/wiki/Jensen%27s_inequality" rel="nofollow">Jensen's Inequality</a>. The converse cannot hold for any $A$. Consider $f(x)=x$ on $[-1,1]$</p> <hr> <p><strong>Positive Functions</strong></p> <p>In the example above, we looked at a function whose positive and negative parts cancel, so that $\int_a^bf(x)\,\mathrm{d}x=0$ cannot bound $\int_a^bf(x)^2\,\mathrm{d}x\gt0$. However, if we restrict $f(x)\ge0$, we have that if $\int_a^bf(x)^2\,\mathrm{d}x\gt0$, then $\int_a^bf(x)\,\mathrm{d}x\gt0$. This removes the simple counterexample above.</p> <p>However, consider $f_n(x)=nx^n$ on $[0,1]$. As $n\to\infty$, $$ \left(\int_0^1f_n(x)\,\mathrm{d}x\right)^2=\left(\int_0^1nx^n\,\mathrm{d}x\right)^2=\frac{n^2}{(n+1)^2}\to1 $$ and $$ \int_0^1f_n(x)^2\,\mathrm{d}x=\int_0^1n^2x^{2n}\,\mathrm{d}x=\frac{n^2}{2n+1}\to\infty $$ Again, we have that there is no constant $A$ so that $$ \int_0^1f_n(x)^2\,\mathrm{d}x\le A\left(\int_0^1f_n(x)\,\mathrm{d}x\right)^2 $$ since the right hand side is bounded by $A$, yet the left side can be made as large as possible.</p>
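The two integrals in the counterexample can be confirmed symbolically (a sketch using sympy, my addition):

```python
# Symbolic confirmation of the integrals behind the counterexample f_n(x) = n x^n.
import sympy as sp

n = sp.symbols('n', positive=True)
x = sp.symbols('x')

I1 = sp.integrate(n * x**n, (x, 0, 1))           # n / (n + 1)
I2 = sp.integrate(n**2 * x**(2 * n), (x, 0, 1))  # n^2 / (2n + 1)

lim_sq = sp.limit(I1**2, n, sp.oo)  # the squared means converge to 1
lim_I2 = sp.limit(I2, n, sp.oo)     # while the second moments blow up
```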
1,419,315
<p>I have a particular scenario.</p> <p>In this scenario, we have the standard cubic equation,</p> <pre><code>ax^3 + bx^2 + cx + d = y </code></pre> <p>as well as 3 points that are graphed, <a href="https://i.imgur.com/VCZKuGW.png" rel="nofollow noreferrer">as can be seen in this graph</a>. (The line is irrelevant for now)</p> <p>Assume that </p> <pre><code>a, b, and c </code></pre> <p>will all adapt themselves so that they can fit all three dots as well as </p> <pre><code>d </code></pre> <p>and create a cubic line.</p> <p>The problem at hand, now, is to find some sort of general equation that would find the value of d leading to the smallest sum of all coefficients, those being </p> <pre><code>a, b, c and d </code></pre> <p>The end result doesn't have to be a specific number, though. A reasonably narrow range of values that is guaranteed to contain the number is acceptable as well.</p> <p>If any clarifications or more details are needed, then feel free to comment and I'll add anything necessary.</p> <p><strong>EDIT</strong></p> <p>I should have mentioned that by minimum, I mean the smallest sum of the absolute values of the coefficients, in other words</p> <pre><code>|a| + |b| + |c| + |d| </code></pre> <p><a href="https://jsfiddle.net/gamea12/crkg1f6c/" rel="nofollow noreferrer">Here's a program that simulates the scenario</a></p>
Amitai Yuval
166,201
<p>Find $A$'s eigenvalues first. Once you know them, you know everything you need about $A$.</p> <p>More explicitly, you can start by calculating $A$'s characteristic polynomial. A straightforward calculation shows that its roots are $0$ and $3$. These are $A$'s eigenvalues, and hence, with respect to an appropriate orthonormal basis, $A$ becomes$$\left(\begin{array}{ccc}0&amp;0&amp;0\\0&amp;3&amp;0\\0&amp;0&amp;3\end{array}\right).$$This means that $A$ is positive semi-definite.</p>
272,057
<p>Let $\mu$ and $\nu$ be two probability measures on $\mathbb R^n$ with finite first moment. Denote by $d:=W_1(\mu,\nu)$, where $W_1(\cdot,\cdot)$ stands for the Wasserstein distance of order $1$. </p> <p>My question is the following: Let $X$ be a random variable defined on some probability space (rich enough) with law $\mu$, could we find <strong>a measurable function $f:\mathbb R^n\times \mathbb R^n\to\mathbb R^n$ and a random variable $G$ independent of $X$</strong> s.t.</p> <p>$$Y:=f(X,G)~\sim~\nu~~~~~~ \mbox{ and }~~~~~~ \mathbb E[|X-Y|]~\le ~2d~?$$</p> <p><strong>Thought 1:</strong> Let $d_0:=\rho(\mu,\nu)$, where $\rho(\cdot,\cdot)$ denotes the Prokhorov distance. Then we have a measurable function $f_0:\mathbb R^n\times \mathbb R^n\to\mathbb R^n$ and a random variable $G_0$ independent of $X$ s.t.</p> <p>$$Y_0:=f_0(X,G)~\sim~\nu~~~~~~ \mbox{ and }~~~~~~ \mathbb P[\{|X-Y_0|\ge2d_0\}]~\le ~2d_0.$$</p> <p>The above construction is from the paper <em>On a representation of random variables</em> by Skorokhod, but I can't find this paper. </p> <p><strong>Thought 2:</strong> Let $\pi(dx,dy)$ be the optimal transport plan, i.e. $\pi(A\times\mathbb R^n)=\mu(A)$ and $\pi(\mathbb R^n\times A)=\nu(A)$ for all measurable $A\subset\mathbb R^n$. Disintegration w.r.t. the first coordinate $x$, one has $\pi(dx,dy)=\mu(dx)\otimes \lambda_x(dy)$, where $(\lambda_x)_{x\in\mathbb R^n}$ denotes the r.c.p.d. (regular conditional probability distribution). But I've no idea how to recover the function $f$ using $\lambda_x$.</p> <p>Any answer, help or comment is highly appreciated. Thanks a lot!</p>
MB2009
111,097
<p>I've a solution but it's not perfectly satisfying. Assume that </p> <p>$$V~~~:=~~~\int |x|^pd\mu(x)~+~\int |x|^pd\nu(x)~~~&lt;~~~+\infty$$</p> <p>for some fixed $p&gt;1$. It follows from <strong>Thought 1</strong> that, there exists $f_0$ and $G$ s.t. </p> <p>$$Y_0~:=~f_0(X,G)~\sim~\nu~~~~\mbox{ and }~~~~\mathbb P[|X-Y_0|\ge 2d_0]~\le~2d_0.$$</p> <p>It follows that</p> <p>\begin{eqnarray} \mathbb E[|X-Y_0|]\quad&amp;=&amp;\quad\mathbb E[|X-Y_0|{\bf 1}_{\{|X-Y_0|\ge 2d_0\}}]~+~\mathbb E[|X-Y_0|{\bf 1}_{\{|X-Y_0|&lt; 2d_0\}}] \\ &amp;\le&amp;\quad 2d_0 ~+~ \big(E[|X-Y_0|^p]\big)^{1/p}\cdot \mathbb P[|X-Y_0|\ge 2d_0]^{1/q} \\ &amp;\le&amp;\quad 2d_0 ~+~ \left[\big(E[|X|^p]\big)^{1/p}~+~\big(E[|Y_0|^p]\big)^{1/p}\right]\cdot (2d_0)^{1/q} \\ &amp;\le&amp;\quad 2d_0 ~+~ 2V^{1/p}\cdot (2d_0)^{1/q}, \end{eqnarray} where $q&gt;1$ denotes the conjugate number of $p$, i.e. $1/p+1/q=1$. Let $\alpha(x):=2x ~+~ 2V^{1/p}\cdot (2x)^{1/q}$, then one has </p> <p>$$\mathbb E[|X-Y_0|]\quad\le\quad \alpha(d_0)\quad\le \quad \alpha(d),$$</p> <p>as $d_0\le d$.</p> <p><strong>My question is that could we remove the assumption $V&lt;+\infty$.</strong></p>
4,500,928
<p>How many ten-digit positive integers are there such that all of the following conditions are satisfied:</p> <p>(a) each of the digits 0, 1, ... , 9 appears exactly once;</p> <p>(b) the first digit is odd;</p> <p>(c) five even digits appear in five consecutive positions?</p> <p>From Combinatorics by Pavle Mladenovic</p> <p>My approach is as follows: We first choose where to place the five even digits since that is the most restrictive condition. So they can be placed in slots 1-5 all the way to 6-10 (ex. 1st to 5th digits). There are 6 ways to do this, and then we have 120 ways (5!) to place the even digits in the slots, as well as 5! ways to place the odds digits, for a total of 6 * 120 * 120 ways. Wondering if this is accurate or if I'm miscounting something.</p> <p>Hello everybody, was wondering if I can get some help solving this problem. Not sure how to approach it. Thank you in advance.</p>
Johnson
1,071,001
<p>Focussing on the 5 even numbers first is a good way to go. There are 5 different positions the set of them may go in (2,3,4,5 or 6 for the first even digit).</p> <p>For each of the 5! * 5 arrangements of the even digits, there will be 5! different orders for the odd digits.</p> <p>This is exactly the same as your method except you forgot the condition that the first digit must be odd. Therefore there is one less position for the consecutive even digits (5 not 6).</p> <p>(You have 5!*5!*5 = 72,000 possible combinations)</p> <p>I hope this helps.</p>
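The count is easy to cross-check (my addition): brute-force which $5$-subsets of the ten positions can hold the even digits — they must be consecutive (so their max and min differ by $4$) and must exclude the first position, which is reserved for an odd digit.

```python
# Count valid placements: evens occupy a consecutive block of 5 positions
# among 0..9, never including position 0; odds fill the rest.
from itertools import combinations
from math import factorial

valid_blocks = [
    s for s in combinations(range(10), 5)
    if 0 not in s and max(s) - min(s) == 4
]
total = len(valid_blocks) * factorial(5) * factorial(5)
```

This confirms $5$ valid blocks and $5\cdot 5!\cdot 5! = 72{,}000$ numbers in total.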
736,684
<p>I'm trying to figure the probability that <span class="math-container">$X &lt; Y$</span> with:</p> <p><span class="math-container">$$X, Y \in \mathbb R^+;\ X\in [0,5] ; \ Y \in [0,2]$$</span> What is the law to use?</p>
Sergio Parreiras
33,890
<p>Assuming independence: $$\Pr(X&lt;Y)= \int_{0}^5 Pr(x&lt;Y) \cdot f_X(x)dx= \int_{0}^5 (1-F_Y(x)) \cdot f_X(x)dx$$</p>
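Specializing the formula (my assumption: the question only gives ranges, so take independent uniforms $X\sim U[0,5]$, $Y\sim U[0,2]$): then $1-F_Y(x)=1-x/2$ on $[0,2]$ and $0$ beyond, $f_X=1/5$, so $\Pr(X \lt Y)=\int_0^2 (1-x/2)\cdot\frac15\,dx=\frac15$. A quick Monte Carlo agrees:

```python
# Monte Carlo estimate of Pr(X < Y) for independent X ~ U[0,5], Y ~ U[0,2].
import random

random.seed(1)
trials = 200_000
hits = sum(
    1 for _ in range(trials)
    if random.uniform(0, 5) < random.uniform(0, 2)
)
estimate = hits / trials
exact = 1 / 5   # value of the integral above
```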
1,263,865
<p>So I have that $700=7\cdot2^2\cdot5^2$ and I got that $3^2\equiv1\pmod2$ so then $3^{1442}\equiv1\pmod2$ also $3^2\equiv1\pmod{2^2}$ so $3^{1442}\equiv1\pmod{2^2}$ which covers one of the divisors of $700$. Im not sure if I'm supposed to use $2$ or $2^2$ and I was able to find that $3^2\equiv-1\pmod5$ so $3^{1442}\equiv-1\pmod5$, For mod $7$ I wasn't able to come up with an answer in a way like the other two, and I'm not really sure how to do this to find the least non negative residue </p>
lab bhattacharjee
33,337
<p><a href="http://en.wikipedia.org/wiki/Carmichael_function" rel="nofollow">Carmichael function</a> $\lambda(700)=60$</p> <p>As $(3,700)=(3,3\cdot233+1)=(3,1)=1,3^{60}\equiv1\pmod{700}$</p> <p>and $1442=24\cdot60+2\equiv2\pmod{60}\implies3^{1442}\equiv3^2\pmod{700}$</p>
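Both steps can be checked directly (my addition, not part of the original answer):

```python
# 3^60 = 1 (mod 700) since lambda(700) = 60 and gcd(3, 700) = 1,
# and 1442 = 2 (mod 60), so the least nonnegative residue is 3^2 = 9.
residue = pow(3, 1442, 700)
order_check = pow(3, 60, 700)
exponent_reduction = 1442 % 60
```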
2,792,651
<p>I want to prove that $h_K=2$ if $K=\mathbb{Q}[\sqrt{-6}]$. Using Minkowski Theorem I have that $Cl_K=\{(1),(3,\sqrt{-6}),(2,\sqrt{-6})\}$, and I thought it was a good idea to use Lagrange Theorem (order of an element divides order of the gorup).</p> <p>The main problem is that I can't reduce $(2,\sqrt{-6})^2$: </p> <p>$$(2,\sqrt{-6})(2,\sqrt{-6})=(4,2\sqrt{-6},2\sqrt{-6},-6)=(2,\sqrt{-6})$$</p> <p>so $I^2=I$, but I think there is a mistake, since I should end up with $I^2=(1)$. </p>
dan_fulea
550,003
<p>I will display the answer as an answer, rather than as a simple comment. The above computation line is "almost ok"; correctly:</p> <p>$$(2,\sqrt{-6})(2,\sqrt{-6})=(4,2\sqrt{-6},2\sqrt{-6},-6)=(2)\ .$$</p> <p>For the last equality, we have to show the double inclusion. It is clear that from $4,-6$ we can exhibit the $2$ by sum. Conversely, $2$ divides all generators.</p> <p>$\blacksquare$</p> <p>Comment: Note that (for instance) <a href="http://www.sagemath.org" rel="nofollow noreferrer">sage</a> can perform/check this and similar computations using mathematically natural thinking and notation. For instance:</p> <pre><code>sage: K.&lt;a&gt; = QuadraticField(-6)
sage: K
Number Field in a with defining polynomial x^2 + 6
sage: a^2
-6
sage: K.class_group()
Class group of order 2 with structure C2 of Number Field in a with defining polynomial x^2 + 6
sage: K.class_group().gens()
(Fractional ideal class (2, a),)
sage: J = K.ideal( 2,a )
sage: J^2
Fractional ideal (2)
</code></pre> <p>(There are many computer algebra systems that can be used as an aid for calculations in number fields; <a href="http://www.sagemath.org" rel="nofollow noreferrer">sage</a> is a free one, and collects "batteries" from all related free maths software, so that there is a seamless way to switch between fields without contortion.)</p> <p>The comment is here since, having its answer, we also have the answer to the posted question.</p>
556,054
<p>Please help me to prove the inequality $$ \sqrt{a^2 + b^2} \geq \frac{|a-b|}{\sqrt{2}}. $$</p>
André Nicolas
6,312
<p><strong>Hint:</strong> Note that $(a-b)^2+(a+b)^2=2(a^2+b^2)$.</p>
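Unpacking the hint (my addition): since $(a-b)^2+(a+b)^2=2(a^2+b^2)$ and $(a+b)^2\ge 0$, we get $(a-b)^2 \le 2(a^2+b^2)$, and taking square roots gives the claim. A quick symbolic and numeric check:

```python
# Symbolic identity behind the hint, plus numeric spot checks of the inequality.
import math
import random
import sympy as sp

a, b = sp.symbols('a b', real=True)
identity_residual = sp.expand((a - b)**2 + (a + b)**2 - 2 * (a**2 + b**2))  # 0

random.seed(0)
spot_checks = all(
    math.sqrt(u * u + v * v) >= abs(u - v) / math.sqrt(2) - 1e-12
    for u, v in ((random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(1000))
)
```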
16,982
<p>One can obtain solutions to the <a href="http://en.wikipedia.org/wiki/Laplace%27s_equation" rel="noreferrer">Laplace equation</a> <span class="math-container">$$\Delta\psi(x) = 0$$</span></p> <p>or even for the <a href="http://en.wikipedia.org/wiki/Poisson%27s_equation" rel="noreferrer">Poisson equation</a> <span class="math-container">$\Delta\psi(x)=\varphi(x)$</span> in a <a href="http://en.wikipedia.org/wiki/Dirichlet_problem" rel="noreferrer">Dirichlet boundary value problem</a> using a random-walk approach; see e.g. <a href="http://www.maths.leeds.ac.uk/%7Everetenn/bmp%28w%29.pdf" rel="noreferrer">Introduction to Brownian Motion</a>.</p> <p>Now, already this fascinating concept is not really clear to me. Nevertheless, it would be worthwhile to get into the details if such a connection also exists for the <a href="http://en.wikipedia.org/wiki/Helmholtz_equation" rel="noreferrer">Helmholtz equation</a> <span class="math-container">$$\Delta\psi(x) +k^2\psi(x)= 0$$</span></p> <p>Hence my question:</p> <h3>Can we use some random walk to calculate solutions of the Helmholtz equation numerically?</h3> <p>It would also be interesting if this is still true for <strong>open systems</strong>, where the boundary conditions are different from those in the Dirichlet case and for which <span class="math-container">$k$</span> is now a domainwise constant function.<br /> Thank you in advance</p> <p>Robert</p>
Raskolnikov
3,567
<p>It does not seem that anybody has a more complete reply, so I might as well summarize what I have said in the comments. The technique you are refering to in the OP is an application of the <a href="http://en.wikipedia.org/wiki/Feynman%E2%80%93Kac_formula" rel="noreferrer">Feynman-Kac formula</a>.</p> <p>This technique has in fact been applied to solving the Helmholtz equation, as a Google search will reveal:</p> <ul> <li><p><a href="http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&amp;id=JASMAN000109000005002260000001&amp;idtype=cvips&amp;gifs=yes" rel="noreferrer">Probabilistic solutions of the Helmholtz equation</a></p></li> <li><p><a href="http://link.aip.org/link/?JASMAN/114/1733/1" rel="noreferrer">Random walk approach to wave propagation in wedges and cones</a></p></li> </ul>
2,204,944
<p>A line is a collection of infinitely many points. By definition, a point has no dimensions. But how can infinitely many dimensionless points give rise to a line with a dimension? The same question applies to planes, solids and higher dimensions too...</p> <p>Thanks in advance for any help!</p>
uniquesolution
265,735
<p>The answer is that $X_i$ are copies of one and the same random variable, namely, the random variable that returns the number on a card randomly drawn from your card collection.</p>
2,677,823
<p>How can I precisely prove the existence of a continuous function $\rho(x)$ with $0 \leq \rho(x) \leq 1$ for all $x \in R^d$, such that $g(x) \rho(x)$ is bounded and continuous whenever $g(x)$ is continuous? Both $g(x)$ and $\rho(x)$ are defined on $R^d$.</p> <p>My idea was that we can choose $\rho(x)$ such that $\rho(x)g(x)$ goes exponentially to zero outside a compact set in $R^d$, but I can't argue this rigorously.</p> <p>Any hints on how I could proceed?</p>
user
505,767
<p>Let $f=g+ih$ then</p> <p>$$\overline{\int_{E}f}=\overline{\int_{E}g+ih}=\overline{\int_{E}g}+\overline{\int_{E}ih}=\int_{E}g-i\int_{E}h=\int_{E}\bar f,$$</p>
2,677,823
<p>How can I precisely prove the existence of a continuous function $\rho(x)$ with $0 \leq \rho(x) \leq 1$ for all $x \in R^d$, such that $g(x) \rho(x)$ is bounded and continuous whenever $g(x)$ is continuous? Both $g(x)$ and $\rho(x)$ are defined on $R^d$.</p> <p>My idea was that we can choose $\rho(x)$ such that $\rho(x)g(x)$ goes exponentially to zero outside a compact set in $R^d$, but I can't argue this rigorously.</p> <p>Any hints on how I could proceed?</p>
Community
-1
<p>The Lebesgue integral of <em>complex</em>-valued functions is not (that I know of) defined with the same measure-theoretic machinery that you use for $\Bbb R$ (the notions of which, starting from <em>positivity</em> or <em>increasing convergence</em>, would actually be moot if $f$ could take non-real values): it's just defined as the component-wise Lebesgue integral of $f$ as a function $X\to\Bbb C=\Bbb R^2$. The result is an element of $(a,b)\in\Bbb R^2$, which is then identified with the number $a+ib\in\Bbb C$. So, the identity is indeed obvious: you are saying that if $(a,b)=(\int f_1, \int f_2)$, then $(a,-b)=(\int f_1,\int -f_2)$.</p>
99,799
<p>I have a <code>Solve</code> similar to the following:</p> <pre><code>Solve[e^2 - c^2 == -15, {e, c}, Integers] (* {{e -&gt; -7, c -&gt; -8}, {e -&gt; -7, c -&gt; 8}, {e -&gt; -1, c -&gt; -4}, {e -&gt; -1, c -&gt; 4}, {e -&gt; 1, c -&gt; -4}, {e -&gt; 1, c -&gt; 4}, {e -&gt; 7, c -&gt; -8}, {e -&gt; 7, c -&gt; 8}} *) </code></pre> <p>I need to add a region constraint to get the solution I want from the unconstrained list of solutions. I tried the following:</p> <pre><code>Solve[e^2 - c^2 == -15 ∧ {e, c} ∈ Interval[{0, 4}], {e, c}, Integers] (* {{e -&gt; {1}, c -&gt; {4}}} *) </code></pre> <p>However, when I do this it wraps the variable's solutions in <code>List</code>. Is there a way to turn this off so I just get <code>{{e -&gt; 1, c -&gt; 4}}</code> or <code>{e -&gt; 1, c -&gt; 4}</code> as the result? The current result is a pain as I have to massage it for use with <code>Replace</code>. Also, can any explain why it is doing this when I constrain the variables? </p>
Jason B.
9,490
<p>The <code>Interval</code> seems to be the problem, it returns an "Interval Object" and, rather than figure out what that is, just use the <code>&lt;=</code> operator to state the conditions explicitly</p> <pre><code>Solve[ e^2 - c^2 == -15 &amp;&amp; 0 &lt;= e &lt;= 4 &amp;&amp; 0 &lt;= c &lt;= 4, {e, c}, Integers] (* {{e -&gt; 1, c -&gt; 4}} *) </code></pre> <p>or</p> <pre><code>Solve[{e^2 - c^2 == -15, 0 &lt;= e &lt;= 4, 0 &lt;= c &lt;= 4}, {e, c}, Integers] (* {{e -&gt; 1, c -&gt; 4}} *) </code></pre>
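<p>As a cross-check of the constrained solution (in Python rather than Mathematica, for readers without a Wolfram kernel), a brute-force enumeration over the bounded integer range gives the same single pair:</p>

```python
# Enumerate integer pairs (e, c) with 0 <= e, c <= 4 satisfying
# e^2 - c^2 == -15, mirroring the constrained Solve above.
solutions = [(e, c) for e in range(5) for c in range(5)
             if e * e - c * c == -15]
```

<p>This recovers only <code>(1, 4)</code>, matching the output of the constrained <code>Solve</code>.</p>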
250,454
<p>Is there a <code>ReplaceOnce</code> which does only one replacement if possible by trying the rules sequentially in order. Consider the following as an example:</p> <pre><code>ReplaceOnce[{&quot;May&quot;,&quot;5&quot;,&quot;May&quot;,&quot;5&quot;},{&quot;May&quot;-&gt;1,&quot;5&quot;-&gt;2}] </code></pre> <p>should produce:</p> <pre><code>{1,&quot;5&quot;,&quot;May&quot;,&quot;5&quot;} </code></pre> <p>Similarly,</p> <pre><code>ReplaceOnce[{&quot;May&quot;,&quot;5&quot;,&quot;May&quot;,&quot;5&quot;},{&quot;5&quot;-&gt;2,&quot;May&quot;-&gt;1}] </code></pre> <p>should produce:</p> <pre><code>{&quot;May&quot;,2,&quot;May&quot;,&quot;5&quot;} </code></pre>
kglr
125
<pre><code>ClearAll[replace1ce] replace1ce = Block[{$done = False}, Fold[ReplaceAll, #, # :&gt; RuleCondition[$done = True; #2, ! $done] &amp; @@@ #2]] &amp;; </code></pre> <p><em><strong>Examples:</strong></em></p> <pre><code>replace1ce[{&quot;May&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;}, {&quot;May&quot; -&gt; 1, &quot;5&quot; -&gt; 2}] </code></pre> <blockquote> <pre><code>{1, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;} </code></pre> </blockquote> <pre><code>replace1ce[{&quot;May&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;}, {&quot;5&quot; -&gt; 2, &quot;May&quot; -&gt; 1}] </code></pre> <blockquote> <pre><code>{&quot;May&quot;, 2, &quot;May&quot;, &quot;5&quot;} </code></pre> </blockquote> <pre><code>replace1ce[{&quot;May&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;}, {&quot;blah&quot; -&gt; 10, &quot;5&quot; -&gt; 2, &quot;May&quot; -&gt; 1}] </code></pre> <blockquote> <pre><code>{&quot;May&quot;, 2, &quot;May&quot;, &quot;5&quot;} </code></pre> </blockquote> <pre><code>replace1ce[{&quot;May&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;}, {&quot;x&quot; -&gt; 10, s_String :&gt; ToUpperCase[s], &quot;5&quot; -&gt; 2}] </code></pre> <blockquote> <pre><code>{&quot;MAY&quot;, &quot;5&quot;, &quot;May&quot;, &quot;5&quot;} </code></pre> </blockquote>
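<p>For comparison, the same first-match-only semantics can be sketched in Python (my illustration, not part of the answer; it handles only literal left-hand sides, not patterns like <code>s_String</code>):</p>

```python
def replace_once(items, rules):
    """Try the rules in order and perform at most one replacement,
    at the first position matching the first applicable rule."""
    out = list(items)
    for old, new in rules:
        for i, item in enumerate(out):
            if item == old:
                out[i] = new
                return out
    return out
```

<p>The two examples from the question behave as required: the first rule that matches anywhere wins, and only its first occurrence is replaced.</p>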
834,228
<p>$$u_{1}=2, \quad u_{n+1}=\frac{1}{3-u_n}$$ Prove it is decreasing and convergent and calculate its limit. Is it possible to define $u_{n}$ in terms of $n$?</p> <p>In order to prove it is decreasing, I calculated some terms but I would like to know how to do it in a more "elaborated" way.</p>
Jacob Bond
120,503
<p>After seeing that $u_{1} &gt; u_{2}$, the result follows since $u_{n} &gt; u_{n + 1}$ implies $$\frac{1}{3 - u_{n}} &gt; \frac{1}{3 - u_{n + 1}}.$$</p> <p>Once you know that there is a limit, say $L$, you have $$\lim_{n\rightarrow\infty} \frac{1}{3-u_{n}} = L.$$ But as $n \rightarrow \infty$, $u_{n} \rightarrow L$, so also $$\lim_{n \rightarrow\infty} \frac{1}{3-u_{n}} = \frac{1}{3 - L}.$$ Now you can just solve for $L$.</p>
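<p>A quick numerical check (my addition, illustrative only): solving $L = \frac{1}{3-L}$ gives $L^2 - 3L + 1 = 0$, and since the terms drop below $1$ the limit must be the smaller root $L = \frac{3-\sqrt 5}{2}$. Iterating confirms both monotonicity and the limit:</p>

```python
import math

u = 2.0
seq = [u]
for _ in range(60):
    u = 1.0 / (3.0 - u)
    seq.append(u)

# Fixed point L = 1/(3 - L) with L < 1: smaller root of L^2 - 3L + 1 = 0
L = (3.0 - math.sqrt(5.0)) / 2.0
# Check strict decrease on an early stretch (before floating-point
# convergence makes consecutive terms equal):
strictly_decreasing = all(a > b for a, b in zip(seq[:15], seq[1:16]))
error = abs(seq[-1] - L)
```
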
4,206,039
<p>Find the radius of convergence of the following power series <span class="math-container">$$\sum_{n=1}^\infty \frac{(-1)^n z^{n(n+1)}}{n}$$</span></p> <p>Here's my working <span class="math-container">$$\lim_{n\to \infty}| \frac{(-1)^{n+1} z^{(n+1)(n+2)}}{n+1} \frac{n}{(-1)^nz^{n(n+1)}}|$$</span> <span class="math-container">$$=\lim_{n\to \infty}\big| \frac{(-1)^n(-1) z^{(n^2+3n+2)}}{n+1} \frac{n}{(-1)^nz^{n^2}z^n}\big|$$</span> <span class="math-container">$$= \lim_{n\to \infty}\big| \frac{ -z^{3n}z^2}{n+1} \frac{n}{{}z^n}\big|$$</span> <span class="math-container">$$=\lim_{n\to \infty}\big| \frac{ -nz^{2n}z^2}{n+1}\big|$$</span> I am stuck after this. Is the limit greater than 1 since anything raised to the power infinity is very huge? How do I do this? Also what happens if z=i?</p>
José Carlos Santos
446,262
<p>Now, use the fact that<span class="math-container">$$\lim_{n\to\infty}\left|\frac{-nz^{2n+2}}{n+1}\right|=\lim_{n\to\infty}\frac n{n+1}|z|^{2n+2}=\begin{cases}0&amp;\text{ if }|z|&lt;1\\1&amp;\text{ if }|z|=1\\\infty&amp;\text{ if }|z|&gt;1.\end{cases}$$</span>So, your series converges if <span class="math-container">$|z|&lt;1$</span> and diverges if <span class="math-container">$|z|&gt;1$</span>. Therefore, its radius of convergence is <span class="math-container">$1$</span>.</p>
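<p>A heuristic numerical illustration of this dichotomy (my addition, not a proof): the general term $|z|^{n(n+1)}/n$ is already astronomically small at $|z|=0.95$ and astronomically large at $|z|=1.05$ for moderate $n$, consistent with $R=1$.</p>

```python
# Size of the general term a_n(z) = (-1)^n z^(n(n+1)) / n
def term_size(z, n):
    return abs(z) ** (n * (n + 1)) / n

inside = term_size(0.95, 30)    # |z| < 1: super-geometric decay
outside = term_size(1.05, 30)   # |z| > 1: unbounded growth
```
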
4,206,039
<p>Find the radius of convergence of the following power series <span class="math-container">$$\sum_{n=1}^\infty \frac{(-1)^n z^{n(n+1)}}{n}$$</span></p> <p>Here's my working <span class="math-container">$$\lim_{n\to \infty}| \frac{(-1)^{n+1} z^{(n+1)(n+2)}}{n+1} \frac{n}{(-1)^nz^{n(n+1)}}|$$</span> <span class="math-container">$$=\lim_{n\to \infty}\big| \frac{(-1)^n(-1) z^{(n^2+3n+2)}}{n+1} \frac{n}{(-1)^nz^{n^2}z^n}\big|$$</span> <span class="math-container">$$= \lim_{n\to \infty}\big| \frac{ -z^{3n}z^2}{n+1} \frac{n}{{}z^n}\big|$$</span> <span class="math-container">$$=\lim_{n\to \infty}\big| \frac{ -nz^{2n}z^2}{n+1}\big|$$</span> I am stuck after this. Is the limit greater than 1 since anything raised to the power infinity is very huge? How do I do this? Also what happens if z=i?</p>
ancient mathematician
414,424
<p>As so often straightforward comparisons will give the result.</p> <p>The series <span class="math-container">$\sum z^n$</span> has radius of convergence <span class="math-container">$1$</span>. The series <span class="math-container">$\sum (n+1)$</span> diverges.</p> <p>(a) For <span class="math-container">$|z|&lt;1$</span> we have that <span class="math-container">$$\left|\frac{(-1)^n}{n}z^{n(n+1)}\right|\leqslant |z|^n$$</span> so by the Comparison Test our series converges for <span class="math-container">$|z|&lt;1$</span> and so <span class="math-container">$R\geqslant 1$</span>.</p> <p>(b) For any <span class="math-container">$a&gt;0$</span> we have that <span class="math-container">$$ \left|\frac{(-1)^n}{n}(1+a)^{n(n+1)}\right|\geqslant\frac{1+n(n+1)a}{n}\geqslant (n+1)a $$</span> and so by the Comparison Test our series diverges at <span class="math-container">$1+a$</span>, and so <span class="math-container">$R\leqslant 1$</span>.</p>
2,624,498
<p>Evaluate $$\lim_{n \rightarrow\infty} \sqrt[n]{3^{n} +5^{n}}$$</p> <p>Attempt:</p> <p>The only sort of manipulation that has come to mind is: $$e^{\frac{1}{n}ln(e^{n\ln(3)} + e^{n\ln(5)})}$$</p> <p>So what is the trick to successfully evaluate this?</p>
user577215664
475,762
<p>With the well known limit of the exponential $\lim\limits_{n \rightarrow\infty} \frac 1 {e^{n}}=0 $</p> <p>$$\lim_{n \rightarrow\infty} \sqrt[n]{3^{n} +5^{n}}=5\lim_{n \rightarrow\infty} \sqrt[n]{ \left(\frac 35\right)^{n} +1}=5\lim_{n \rightarrow\infty} \sqrt[n]{ \frac 1 {e^{n(\ln 5 -\ln 3)}} +1}=5(0+1)^0=5$$</p>
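<p>The limit is easy to confirm numerically (an illustrative check, my addition — the exact powers are computed as Python integers before taking the single floating-point root):</p>

```python
def a(n):
    # (3^n + 5^n)^(1/n), exact integer arithmetic inside the root
    return float(3 ** n + 5 ** n) ** (1.0 / n)

values = [a(n) for n in (5, 20, 100, 400)]
```

<p>The values start slightly above $5$ and settle onto $5$ as $n$ grows, since the $(3/5)^n$ correction dies off geometrically.</p>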
2,624,498
<p>Evaluate $$\lim_{n \rightarrow\infty} \sqrt[n]{3^{n} +5^{n}}$$</p> <p>Attempt:</p> <p>The only sort of manipulation that has come to mind is: $$e^{\frac{1}{n}ln(e^{n\ln(3)} + e^{n\ln(5)})}$$</p> <p>So what is the trick to successfully evaluate this?</p>
user
505,767
<p>The correct manipulation by exponential is</p> <p>$$\sqrt[n]{3^{n} +5^{n}}=(3^{n} +5^{n})^\frac1n=e^{\frac{\log (3^n+5^n)}{n}}\to5$$</p> <p>indeed</p> <p>$$\frac{\log (3^n+5^n)}{n}=\frac{\log 5^n+\log \left(1+\left(\frac{3}{5}\right)^n\right)}{n}=\frac{n\log 5+\log \left(1+\left(\frac{3}{5}\right)^n\right)}{n}=$$ $$=\log 5+\frac{\log \left(1+\left(\frac{3}{5}\right)^n\right)}{n}\to \log5+0=\log 5$$</p>
91,302
<p>So, we represent numbers usually in a form of a sequence of digits where each one of them multiplies the power of a base:</p> <p>$13.2 = 1 * 10^1 + 3 * 10^0 + 2 * 10^{-1}$</p> <p>So that much is clear, perfectly. But what interests me is the "symmetry" between the left and right of the radix point which separates the integer and fractional part. Specifically, the "significant digits" or "where $0$s matter" to put it blatantly clear:</p> <p>Clear example:</p> <p>$00000050.02000000$ -> $50.02$ </p> <p>What is curious to me is the fact that after the radix point after the last non-zero digit, zeroes do not matter whereas on the left side it is the zeroes before the first non-zero digit that do not matter.</p> <p>Does this symmetry come simply from the fact that the ratio does not change on the fractional side:</p> <p>$2/100 = 20 / 1000 = 200/10000 $ etc. Is this the reason the added digits on the right do not matter?</p> <p>I know it's trivial, but it kind of captured my attention since I like little details and they make me restless. Thank you for trying to assist in advance.</p>
lhf
589
<p>Zero digits do not count, ever. A zero digit in position $k$ corresponds to a term $0\cdot 10^k$. For instance, $502.03= 5 \cdot 10^2 + 0 \cdot 10^1 + 2 \cdot 10^0 + 0 \cdot 10^{-1} + 3 \cdot 10^{-2}$. There is no deeper reason.</p>
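<p>The point can be made exact with rational arithmetic (an illustrative sketch, my addition): each digit contributes $d\cdot 10^k$, so padding with zero digits on either side adds terms $0\cdot 10^k$ and changes nothing.</p>

```python
from fractions import Fraction

# 502.03 = 5*10^2 + 0*10^1 + 2*10^0 + 0*10^-1 + 3*10^-2, exactly
digits = {2: 5, 1: 0, 0: 2, -1: 0, -2: 3}
value = sum(Fraction(d) * Fraction(10) ** k for k, d in digits.items())

# Extra leading/trailing zeros are terms 0 * 10^k, i.e. nothing:
padded = value + 0 * Fraction(10) ** 7 + 0 * Fraction(10) ** -9
```
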
31,099
<p>I was wondering what "anti-optimization" is about? Is it related to optimization? What topics does it cover? </p> <p>All I can find out from Google is <a href="http://www.sciencedirect.com/science?_ob=ArticleURL&amp;_udi=B6TJ4-42WP6GH-M&amp;_user=10&amp;_coverDate=07/31/2001&amp;_rdoc=1&amp;_fmt=high&amp;_orig=gateway&amp;_origin=gateway&amp;_sort=d&amp;_docanchor=&amp;view=c&amp;_rerunOrigin=google&amp;_acct=C000050221&amp;_version=1&amp;_urlVersion=0&amp;_userid=10&amp;md5=abd081ecba664ee663f2157095a2bb4a&amp;searchtype=a" rel="nofollow">this paper</a>. It looks like having some relation with probability and optimization?</p> <p>Thanks and regards!</p>
Henry
6,460
<p><em>Anti-optimization</em> is a term popularized by Isaac Elishakoff for an approach to safety factors in engineering structures which he describes as pessimistic and searching for least favourable responses, in combination with optimistization techniques but in contrast to probabilistic approaches.</p> <p>You might be able to discover more from the preface or first chapter of <em><a href="http://www.worldscibooks.com/engineering/p678.html" rel="nofollow">Optimization and Anti-Optimization of Structures under Uncertainty</a></em></p>
448
<p>Let's say, I have 4 yellow and 5 blue balls. How do I calculate in how many different orders I can place them? And what if I also have 3 red balls?</p>
Noldorin
56
<p>This is a standard problem involving the <a href="http://en.wikipedia.org/wiki/Combination" rel="noreferrer">combinations of sets</a>, though perhaps not very obvious intuitively.</p> <p>Firstly consider the number of ways you can rearrange the entire set of balls, counting each ball as independent (effectively ignoring colours for now). This is simply $(4 + 5)! = 9!$, since the 1st ball can be any of the $9$, the 2nd can be any of the remaining $8$, and so on.</p> <p>Then we calculate how many different ways the yellow balls can be arranged within themselves, since for the purpose of this problem they are considered equivalent. The number of combinations is of course $4!$; similarly, for the blue balls the number is $5!$.</p> <p>Hence, overall we find:</p> <p>$$\text{total arrangements} = \frac{\text{arrangements of all balls}}{\text{arrangements of yellow balls} \times \text{arrangements of blue balls}}$$</p> <p>Therefore in our case we have:</p> <p>$$\text{total arrangements} = \frac{9!}{5! \times 4!} = 126$$</p> <p>I'm sure you can see how this can be easily extended if we also have 3 red balls. (Hint: the total changes and we have another multiple of identical arrangements to account for.)</p>
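<p>The formula above, including the three-colour extension hinted at, can be checked directly (my addition, a quick multinomial-coefficient computation):</p>

```python
from math import factorial

yellow, blue, red = 4, 5, 3

# 9 balls, identical within each colour:
two_colours = factorial(yellow + blue) // (factorial(yellow) * factorial(blue))

# Adding the red balls contributes one more factor of identical arrangements:
three_colours = factorial(yellow + blue + red) // (
    factorial(yellow) * factorial(blue) * factorial(red))
```

<p>This gives $126$ for the two-colour case and $\frac{12!}{4!\,5!\,3!} = 27720$ once the red balls are included.</p>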
2,677,584
<p>I have the following question:</p> <blockquote> <p>Find the real values of $a$ for which the equation $$(1+\tan^2\theta)^2 + 4a\tan\theta(\tan^2\theta + 1) + 16\tan^2\theta = 0$$ has four distinct real roots in $\left(0, \dfrac{\pi}{2}\right)$.</p> </blockquote> <p>I tried to solve the above equation by dividing the entire equation by $\tan^2\theta$ and then substituting $\tan\theta + \dfrac{1}{\tan\theta}$ as $y$ and then solving for $y$. Then I tried to apply the inequality $\tan\theta + \dfrac{1}{\tan\theta} \geqslant 2$ but couldn't find a proper range of values of $a$.</p> <p>Please help. Please point out if there is any mistake in my work. Thanks in advance.</p>
CY Aries
268,334
<p>You have $y^2+4ay+16=0$, which gives $y=-2a\pm2\sqrt{a^2-4}$. Note that $\tan\theta&gt;0$ for $\displaystyle \theta\in\left(0,\frac{\pi}{2}\right)$. The equation has four real roots in $\displaystyle \left(0,\frac{\pi}{2}\right)$ if $-2a+2\sqrt{a^2-4}&gt;2$ and $-2a-2\sqrt{a^2-4}&gt;2$. </p> <p>As $\sqrt{a^2-4}$ is real and $a&lt;-1-\sqrt{a^2-4}$, $a$ is negative. So, $a\le-2$.</p> <p>If $a=-2$, then $-2a+2\sqrt{a^2-4}=-2a-2\sqrt{a^2-4}$ and the equation will not have $4$ real roots on the interval. So $a&lt;-2$</p> <p>$\displaystyle \begin{cases} a&lt;-2\\ -2a-2\sqrt{a^2-4}&gt;2\end{cases}$ $\implies$ $\displaystyle \begin{cases} a&lt;-2\\ -a-1&gt;\sqrt{a^2-4}\end{cases}$ $\implies$ $\displaystyle \begin{cases} a&lt;-2\\ a^2+2a+1&gt;a^2-4\end{cases}$ $\implies$ $\displaystyle a&gt;\frac{-5}{2}$</p> <p>We can check that when $\displaystyle -\frac{5}{2}&lt;a&lt;-2$, $-2a+2\sqrt{a^2-4}&gt;2$ and $-2a-2\sqrt{a^2-4}&gt;2$.</p>
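<p>A numerical spot check of the conclusion (my addition, illustrative): substituting $t=\tan\theta$, roots $\theta\in(0,\frac{\pi}{2})$ correspond to positive roots of $f(t)=(1+t^2)^2+4at(t^2+1)+16t^2$. Counting sign changes on a fine grid shows four roots for $a$ strictly between $-\frac{5}{2}$ and $-2$, but only two once $a<-\frac52$ (where one $y$-root drops below $2$).</p>

```python
def count_roots(a, t_max=20.0, steps=100000):
    """Count sign changes of f(t) = (1+t^2)^2 + 4 a t (t^2+1) + 16 t^2
    on (0, t_max); each simple positive root t gives one
    theta = arctan(t) in (0, pi/2)."""
    def f(t):
        return (1.0 + t * t) ** 2 + 4.0 * a * t * (t * t + 1.0) + 16.0 * t * t
    h = t_max / steps
    count, prev = 0, f(h)
    for i in range(2, steps + 1):
        cur = f(i * h)
        if prev * cur < 0.0:
            count += 1
        prev = cur
    return count

four = count_roots(-2.25)   # a strictly between -5/2 and -2
two = count_roots(-3.0)     # a < -5/2: only one y-root exceeds 2
```
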
3,419,276
<p>I'm reading about the directional derivative:</p> <blockquote> <p>Let <span class="math-container">$(E,\|\cdot\|)$</span> and <span class="math-container">$(F,\|\cdot\|)$</span> be Banach spaces over the field <span class="math-container">$\mathbb{K}$</span>, and <span class="math-container">$X$</span> an open subset of <span class="math-container">$E$</span>. A function <span class="math-container">$f: X \rightarrow F$</span> is differentiable at <span class="math-container">$a \in X$</span> if there is <span class="math-container">$A \in \mathcal{L}(E, F)$</span> such that <span class="math-container">$$f(x)=f\left(a\right)+A\left(x-a\right)+o\left(\left\|x-a\right\|\right) \quad\left(x \rightarrow a\right)$$</span></p> </blockquote> <p>The author continues to define the directional derivative:</p> <blockquote> <p><a href="https://i.stack.imgur.com/pmF7C.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pmF7C.png" alt="enter image description here"></a></p> </blockquote> <p>and prove a proposition:</p> <blockquote> <p><a href="https://i.stack.imgur.com/xWPS6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xWPS6.png" alt="enter image description here"></a></p> </blockquote> <p>We define <span class="math-container">$g:\mathbb K \rightarrow F, \quad t \mapsto f\left(x_{0}+t v\right)$</span>. Our goal is to find the derivative of <span class="math-container">$g$</span> at <span class="math-container">$t=0$</span>.</p> <p>Because <span class="math-container">$f$</span> is differentiable at <span class="math-container">$x_0$</span>, <span class="math-container">$$ f(x)=f\left(x_{0}\right)+\partial f\left(x_{0}\right)\left(x-x_{0}\right) + o(\left\|x-x_{0}\right\|) \quad (x \to x_0)$$</span> for all <span class="math-container">$x \in X$</span>. 
It follows that <span class="math-container">$$\begin{aligned}g(t) &amp;= f(x_0+tv) \\ &amp;=f\left(x_{0}\right)+\partial f\left(x_{0}\right)\left((x_0+tv)-x_{0}\right)+o(\left\|(x_0+tv)-x_{0}\right\|) \quad ((x_0+tv) \to x_0)\\ &amp;= g\left(0 \right)+\partial f\left(x_{0}\right)\left(tv\right)+o(\left\|tv\right\|) \quad (t \to 0) \\&amp;= g\left(0\right)+ \partial f\left(x_{0}\right)\left(v\right) \cdot (t-0) + o(\left\|t\right\|) \quad (t \to 0) \end{aligned}$$</span></p> <p>Let <span class="math-container">$\partial g(0) \in \mathcal{L}(\mathbb K, F)$</span> be the derivative of <span class="math-container">$g$</span> at <span class="math-container">$t=0$</span>. It follows that <span class="math-container">$$g(t)= g(0) + \partial g(0)(t-0) + o(\left\|t\right\|) \quad (t \to 0)$$</span></p> <p>To sum up, we have <span class="math-container">$$\begin{aligned}g(t) &amp;=g\left(0\right)+ \partial f\left(x_{0}\right)\left(v\right) \cdot (t-0) + o(\left\|t\right\|) \quad (t \to 0)\\ &amp;= g(0) + \partial g(0)(t-0) + o(\left\|t\right\|) \quad (t \to 0)\end{aligned}$$</span></p> <p>Hence <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot (t-0) = \partial g(0)(t-0)$</span> and thus <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot t = \partial g(0)(t)$</span>. Because <span class="math-container">$(t)$</span> on the right hand is just the input of the function <span class="math-container">$\partial g(0)$</span>, we have <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot t = \partial g(0)$</span>.</p> <p>In my understanding, <span class="math-container">$\partial g(0)(t)$</span> denote the value of the function <span class="math-container">$\partial g(0)$</span> at <span class="math-container">$t$</span>. 
On the contrary, <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right) \cdot t$</span> denoted the product of <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right)$</span> and <span class="math-container">$t$</span>.</p> <p><strong>My question:</strong></p> <p>I could not understand why the answer given in my textbook is just <span class="math-container">$\partial f\left(x_{0}\right)\left(v\right)$</span>, which is lack of <span class="math-container">$t$</span>.</p> <p>Could you please elaborate on this point?</p>
Allawonder
145,126
<p>Sometimes we find it useful to express a given positive number as a power of some fixed positive number different from <span class="math-container">$1,$</span> usually called a base. Then if I expressed <span class="math-container">$u&gt;0$</span> as a power of <span class="math-container">$1\ne b&gt;0,$</span> then I have something like <span class="math-container">$$u=b^{\ell}.$$</span> This <span class="math-container">$\ell$</span> is what's called the logarithm of the number <span class="math-container">$u$</span> to the base <span class="math-container">$b,$</span> or for short, we say that <span class="math-container">$\ell=\log_bu.$</span> Thus, we may truthfully write that <span class="math-container">$$u=b^{\log_bu}.$$</span></p>
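<p>The defining identity $u=b^{\log_b u}$ is immediate to verify numerically (my addition; the base and value are arbitrary):</p>

```python
import math

b, u = 3.0, 42.0
ell = math.log(u, b)        # the logarithm of u to base b
reconstructed = b ** ell    # recovers u = b^(log_b u), up to rounding
```
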
1,234,471
<p>Given two sequences $(a_k),(b_k)$ with $a_k\geq0,b_k&gt;0$ such that the power series $\sum_{k=0}^\infty a_k b_kr^{k}$ and $\sum_{k=0}^\infty a_kr^k$ converge for each $r&gt;0$. My question now is: Does there exist a constant $c$ (depending only on $(b_k))$ such that \begin{align*} \sum_{k=0}^\infty a_kb_kr^{k}\geq c\sum_{k=0}^\infty a_kr^{k} \end{align*} for all large $r&gt;0$? This is obvious for if $(b_k)$ is bounded away from zero, but what if $\liminf b_k=0?$ In my case, $b_k=1/k!$, and I have no idea how to approach this problem... Any help is highly appreciated! Thanks in advance.</p>
wlad
228,274
<p>Let $S$ be this set, and $\epsilon$ be the empty string.</p> <p>Define $S_0 ::= 0S_*$ and $S_* ::= \epsilon \mid 1S_0 \mid 1S_*$. </p> <p>$S ::= S_0 \mid S_*$.</p> <p>This is BNF notation.</p> <hr> <p>Let $u_n$ be the number of strings of length $n$ in $S_0$, and $v_n$ be the number of strings of length $n$ in $S_*$.</p> <p>We have $u_n = v_{n-1}$ and $v_n = u_{n-1} + v_{n-1}$. $u_0$ = 0 (because $u_0$ counts the number of strings of length 0 that end in a $0$), and $v_0$ = 1.</p> <p>$$ \left[ \begin{array}\\ u_{n+1} \\ v_{n+1} \end{array} \right] = \left[ \begin{array}\\ 0 &amp; 1 \\ 1 &amp; 1 \end{array} \right] \left[ \begin{array}\\ u_{n} \\ v_{n} \end{array} \right] $$</p> <p>Call the matrix above $M$. Diagonalise the matrix into the form $D = PMP^{-1}$, and then notice that $M^n = P^{-1}D^nP$.</p> <p>The answer to your question is then $u_{10} + v_{10}$.</p>
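<p>The recurrence can be cross-checked by brute force (my addition, illustrative; I am reading the grammar as generating exactly the binary strings with no two consecutive <code>0</code>s, since every <code>0</code> must be followed by an $S_*$ string, which is empty or starts with <code>1</code>):</p>

```python
from itertools import product

# The recurrence from the answer, starting from u_0 = 0, v_0 = 1:
u, v = 0, 1
for _ in range(10):
    u, v = v, u + v          # u_n = v_{n-1},  v_n = u_{n-1} + v_{n-1}
by_recurrence = u + v

# Brute force under the "no 00 substring" reading of the grammar:
by_enumeration = sum(1 for bits in product("01", repeat=10)
                     if "00" not in "".join(bits))
```

<p>Both counts agree ($u_{10}+v_{10} = 55+89 = 144$, a Fibonacci number, as the matrix diagonalisation predicts).</p>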
4,092,643
<p>The solutions manual says</p> <p><span class="math-container">$$\lim_{x \to \infty}e^{-x^2}\int_{x}^{x+\frac1x}e^{t^2}dt=\lim_{x \to \infty}\frac{e^{(x+\frac1x)^2}-e^{x^2}}{2xe^{x^2}}$$</span></p> <p>I'm trying to understand how they arrived there. Using L'Hôpital's rule rule, I have</p> <p><span class="math-container">$$\lim_{x \to \infty}e^{-x^2}\int_{x}^{x+\frac1x}e^{t^2}dt= \lim_{x \to \infty}\frac{\int_{0}^{x+\frac1x}e^{t^2}dt-\int_{0}^{x}e^{t^2}dt}{e^{x^2}}= \lim_{x \to \infty}\frac{e^{(x+\frac1x)^2}(1-\frac1{x^2})-e^{x^2}}{2xe^{x^2}}$$</span></p> <p>I'm getting this additional factor <span class="math-container">$(1-\frac1{x^2})$</span> in <span class="math-container">$\frac{e^{(x+\frac1x)^2}(1-\frac1{x^2})-e^{x^2}}{e^{x^2}2x}$</span>, which is <span class="math-container">$\frac{d}{dx}(x+\frac1x)$</span> and it comes from the application of the chain rule.</p> <p>Is the chain rule not applicable there? My thinking was: let <span class="math-container">$F(x)=\int_{0}^{x}e^{t^2}dt$</span>, and <span class="math-container">$G(x)=x+\frac1x$</span>. Then <span class="math-container">$\int_{0}^{x+\frac1x}e^{t^2}dt=F(G(x))$</span>, which is the functions' composition and I must apply the chain rule.</p>
Aatmaj
769,348
<p>Hint: use the <a href="https://mathworld.wolfram.com/LeibnizIntegralRule.html" rel="nofollow noreferrer">Leibniz rule of integration</a>.</p> <hr /> <p>What they have done is basically use l'Hôpital's rule. To take the derivative they have used the Leibniz rule, and then dropped the chain-rule factor <span class="math-container">$1-\frac{1}{x^2}$</span>, since <span class="math-container">$\frac{1}{x^2}\to 0$</span> as <span class="math-container">$x\to\infty$</span>, so the factor tends to <span class="math-container">$1$</span> and does not affect the limit.</p>
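<p>One can see numerically that the factor $(1-\frac{1}{x^2})$ is immaterial (my addition, illustrative; $e^{x^2}$ is cancelled analytically inside the quotient so nothing overflows):</p>

```python
import math

def quotient(x, keep_factor):
    """The L'Hopital quotient from the solutions manual, with e^{x^2}
    cancelled; keep_factor retains the chain-rule factor (1 - 1/x^2)
    that the asker is worried about."""
    top = math.exp((x + 1.0 / x) ** 2 - x * x)   # e^{(x+1/x)^2} / e^{x^2}
    if keep_factor:
        top *= 1.0 - 1.0 / (x * x)
    return (top - 1.0) / (2.0 * x)

# The two versions of the quotient agree to high accuracy already at x = 100,
# and both tend to 0 like (e^2 - 1)/(2x):
ratio = quotient(100.0, True) / quotient(100.0, False)
```
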
4,092,643
<p>The solutions manual says</p> <p><span class="math-container">$$\lim_{x \to \infty}e^{-x^2}\int_{x}^{x+\frac1x}e^{t^2}dt=\lim_{x \to \infty}\frac{e^{(x+\frac1x)^2}-e^{x^2}}{2xe^{x^2}}$$</span></p> <p>I'm trying to understand how they arrived there. Using L'Hôpital's rule rule, I have</p> <p><span class="math-container">$$\lim_{x \to \infty}e^{-x^2}\int_{x}^{x+\frac1x}e^{t^2}dt= \lim_{x \to \infty}\frac{\int_{0}^{x+\frac1x}e^{t^2}dt-\int_{0}^{x}e^{t^2}dt}{e^{x^2}}= \lim_{x \to \infty}\frac{e^{(x+\frac1x)^2}(1-\frac1{x^2})-e^{x^2}}{2xe^{x^2}}$$</span></p> <p>I'm getting this additional factor <span class="math-container">$(1-\frac1{x^2})$</span> in <span class="math-container">$\frac{e^{(x+\frac1x)^2}(1-\frac1{x^2})-e^{x^2}}{e^{x^2}2x}$</span>, which is <span class="math-container">$\frac{d}{dx}(x+\frac1x)$</span> and it comes from the application of the chain rule.</p> <p>Is the chain rule not applicable there? My thinking was: let <span class="math-container">$F(x)=\int_{0}^{x}e^{t^2}dt$</span>, and <span class="math-container">$G(x)=x+\frac1x$</span>. Then <span class="math-container">$\int_{0}^{x+\frac1x}e^{t^2}dt=F(G(x))$</span>, which is the functions' composition and I must apply the chain rule.</p>
Peter Szilas
408,605
<p>MVT for integrals as an option:</p> <p><span class="math-container">$f(x) = e^{-x^2}e^{z^2}\int_{x}^{x+1/x}dt=$</span></p> <p><span class="math-container">$e^{-x^2}e^{z^2}(1/x)$</span>, where <span class="math-container">$z \in [x, x+1/x];$</span></p> <p><span class="math-container">$1/x \le f(x) \le e^{-x^2}e^{(x+1/x)^2}(1/x);$</span></p> <p>Squeeze.</p>
474,632
<p>Let $M$ be a set with three elements: $a$, $b$, and $c$. Define $D\colon M\times M\to[0,\infty)$ so that $D(x, x) = 0$ for all $x$, $D(x, y) = D(y, x)$ for $x \ne y$. Say $D(a, b) = r$, $D(a, c) = s$, $D(b, c) = t$, and $r \le s \le t$. </p> <p>Prove that $D$ makes $M$ a metric space iff $t \le r + s$.</p> <p>I have no idea on how to begin this proof. </p>
Don Larynx
91,377
<p>P1) $t &lt;= r + s$ implies $D(b, c) &lt;= D(a, b) + D(a, c)$.</p> <p>P2) Either $D(x, y) = 0$ or $r$.</p> <p>P3) Suppose $D(a, b) = D(a, c) = 0$. Then $a = b = c$. So the triangle inequality is satisfied.</p> <p>P4) In case $D(a, b) = D(a, c) = r$, $b = c$. Thus the triangle inequality is trivially satisfied.</p> <p>P5) In case $D(a, b) = 0, D(a, c) = r$ (or the other way around), $a = b$. The triangle inequality gives $D(a, c) = D(b, c)$.</p> <p>Q) $D$ thus defines a metric space $M$.</p>
474,632
<p>Let $M$ be a set with three elements: $a$, $b$, and $c$. Define $D\colon M\times M\to[0,\infty)$ so that $D(x, x) = 0$ for all $x$, $D(x, y) = D(y, x)$ for $x \ne y$. Say $D(a, b) = r$, $D(a, c) = s$, $D(b, c) = t$, and $r \le s \le t$. </p> <p>Prove that $D$ makes $M$ a metric space iff $t \le r + s$.</p> <p>I have no idea on how to begin this proof. </p>
Stefan Hamcke
41,672
<p>Your argument is a bit complicated. It is much easier:</p> <p>We want to verify the triangle inequality $d(x,y)\le d(x,z)+d(y,z)$.</p> <p>If $x,y,z$ are not all distinct, then it is satisfied as shown in your previous question. So let's assume they are all distinct.</p> <p>There are three possibilities for $x,y$:</p> <p>If $x=a, y=b$, then $d(a,b)=r$.<br> If $x=a, y=c$, then $d(a,c)=s$.<br> Since $r,s\le t$ and $t$ will appear on the right hand side in either case, the TI is satisfied.</p> <p>If $x=b, y=c$, then $d(b,c)=t\le r+s=d(a,b)+d(b,c)$, so we are happy.</p>
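<p>Since the space has only three points, the whole axiom check is a finite computation; here is an exhaustive verification sketch (my addition, illustrative) over all ordered triples, with one metric and one non-metric example:</p>

```python
def is_metric(r, s, t):
    """Build d on {a, b, c} with d(a,b)=r, d(a,c)=s, d(b,c)=t (symmetry
    and d(x,x)=0 by construction) and check the triangle inequality
    over all 27 ordered triples."""
    d = {("a", "a"): 0, ("b", "b"): 0, ("c", "c"): 0,
         ("a", "b"): r, ("b", "a"): r,
         ("a", "c"): s, ("c", "a"): s,
         ("b", "c"): t, ("c", "b"): t}
    pts = ("a", "b", "c")
    return all(d[x, y] <= d[x, z] + d[z, y]
               for x in pts for y in pts for z in pts)

good = is_metric(2, 3, 4)   # r <= s <= t and t <= r + s: a metric
bad = is_metric(1, 2, 5)    # t > r + s: triangle inequality fails
```
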
3,846,891
<p>Let me know if this proof is correct. This is in French.</p> <p>Translation French --&gt; English</p> <p><strong>prémisse = premise</strong></p> <p><strong>supposition = assumption</strong></p> <p><a href="https://i.stack.imgur.com/Ey8bH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ey8bH.png" alt="Fitch Proof" /></a></p> <p>I now know that this is 100% incorrect. Does anyone know how to resolve it mathematically?</p> <p>Edit 1: Thanks to @Mauro Curto, here is what I came up with</p> <p><a href="https://i.stack.imgur.com/r090i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r090i.png" alt="Fitch proof" /></a></p>
Bram28
256,001
<p>No, not correct.</p> <p>Line 5 cannot refer to line 3, since that is part of a subproof that was closed right after line 3.</p> <p>Lines 6, 7, and 8 are all incorrect: you introduce a conditional by <em>closing</em> a subproof, inferring a conditional with as its antecedent the assumption of that subproof, and as its consequent the last line of that subproof (or at the very least, a line contained within that subproof). On lines 6 and 7 you are not closing any subproofs, and line 8 is not referring to the assumption of the subproof you just closed.</p> <p>To be harsh but frank: it's a mess! Please study the rule for conditional introduction more carefully ... not just syntactically, but also conceptually and indeed purely logically. Nothing you do here makes any logical sense if you step back from the strings of symbols and think about what you do here.</p>
3,846,891
<p>Let me know if this proof is correct. This is in French.</p> <p>Translation French --&gt; English</p> <p><strong>prémisse = premise</strong></p> <p><strong>supposition = assumption</strong></p> <p><a href="https://i.stack.imgur.com/Ey8bH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ey8bH.png" alt="Fitch Proof" /></a></p> <p>I now know that this is 100% incorrect. Does anyone know how to resolve it mathematically?</p> <p>Edit 1: Thanks to @Mauro Curto, here is what I came up with</p> <p><a href="https://i.stack.imgur.com/r090i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r090i.png" alt="Fitch proof" /></a></p>
Mauro curto
781,761
<p>Your derivation is incorrect. Your supposition in step 2 is not the appropriate supposition. After you close a supposition you can't use that supposition anymore, so your step 7 is wrong, and therefore step 8 is also wrong.</p> <p>Since the main connective of the consequent is a conditional, you have to use the &quot;practical rule&quot; of conditional introduction, which states that you have to assume the antecedent and then try to derive the consequent. You have to use this practical rule twice in this derivation.</p> <p>A derivation looks like:</p> <p><span class="math-container">$1). p → (q → r) - premise$</span></p> <p><span class="math-container">$2). p → q - supposition$</span></p> <p><span class="math-container">$3). p - supposition$</span></p> <p><span class="math-container">$4)....$</span></p> <p>So now you close supposition 3 after you derive <span class="math-container">$r$</span>, and then you close supposition 2 after you derive <span class="math-container">$p → r$</span>.</p>
3,846,891
<p>Let me know if this proof is correct. This is in French.</p> <p>Translation French --&gt; English</p> <p><strong>prémisse = premise</strong></p> <p><strong>supposition = assumption</strong></p> <p><a href="https://i.stack.imgur.com/Ey8bH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ey8bH.png" alt="Fitch Proof" /></a></p> <p>I now know that this is 100% incorrect. Does anyone know how to resolve it mathematically?</p> <p>Edit 1: Thanks to @Mauro Curto, here is what I came up with</p> <p><a href="https://i.stack.imgur.com/r090i.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/r090i.png" alt="Fitch proof" /></a></p>
Graham Kemp
135,106
<p><span class="math-container">$\def\fitch#1#2{~~\begin{array}{|l}#1\\\hline#2\end{array}}$</span></p> <blockquote> <p><img src="https://i.stack.imgur.com/r090i.png" alt="no" /></p> </blockquote> <p>Not quite.</p> <p>Conditional Introduction deduces a conditional statement whose antecedent is the assumption of a subproof and whose consequent is the conclusion. Thus when you assume <span class="math-container">$p$</span> with the aim of deriving <span class="math-container">$r$</span>, you are intending to introduce <span class="math-container">$p\to r$</span> outside the context of that subproof.</p> <p>Conversely, Conditional Elimination requires a conditional statement and its antecedent to both be accessible in the context where you intend to derive the consequent. Thus all such eliminations using the assumed conditionals of <span class="math-container">$p\to \textsf{whatever}$</span> must take place inside a context where <span class="math-container">$p$</span> is available.</p> <p>That is in the innermost nest of the suproofs, which derives <span class="math-container">$q$</span> and <span class="math-container">$q\to r$</span>, and from these another Conditional Elimination derives <span class="math-container">$r$</span>. And so...</p> <p><span class="math-container">$$\fitch{~~1.~p\to(q\to r)}{\fitch{~~2.~p\to q}{\fitch{~~3.~p}{~~4.~q\\~~5.~q\to r\\~~6.~r}\\~~7.~p\to r}\\~~8.~(p\to q)\to(p\to r)}$$</span></p>
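<p>As a cross-check outside natural deduction, the corresponding formula <span class="math-container">$(p\to(q\to r))\to((p\to q)\to(p\to r))$</span> can be confirmed to be a tautology by enumerating all eight valuations. A minimal Python sketch of that brute-force check (only an illustration; the names are mine):</p>

```python
from itertools import product

def implies(a, b):
    # material conditional: a -> b is false only when a is true and b is false
    return (not a) or b

def formula(p, q, r):
    # (p -> (q -> r)) -> ((p -> q) -> (p -> r))
    return implies(implies(p, implies(q, r)),
                   implies(implies(p, q), implies(p, r)))

# a tautology: true under every valuation of p, q, r
assert all(formula(p, q, r) for p, q, r in product([False, True], repeat=3))
```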
3,688,680
<p>I know the Cantor set and the rational numbers in <span class="math-container">$\mathbb{R}$</span> are meagre, but both have measure zero.</p> <p>So is there a meagre set of non-zero measure?</p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> <span class="math-container">\begin{align} \int_{0}^{\infty}{\sin^{2}\pars{x} \over x^{5/2}}\,\dd x &amp; = \int_{0}^{\infty}\ \overbrace{1 - \cos\pars{2x} \over 2} ^{\ds{\sin^{2}\pars{x}}}\ \overbrace{{1 \over \Gamma\pars{5/2}}\int_{0}^{\infty}t^{3/2}\expo{-xt}\dd t} ^{\ds{1 \over x^{5/2}}}\ \dd x \\[5mm] &amp; = {1 \over 2\,\Gamma\pars{5/2}}\int_{0}^{\infty}t^{3/2}\, \Re\int_{0}^{\infty}\bracks{\expo{-xt} - \expo{-\pars{t - 2\ic}x}}\dd x \,\dd t \\[5mm] &amp; = {2 \over 3\root{\pi}}\int_{0}^{\infty}t^{3/2}\, \Re\pars{{1 \over t} - {1 \over t - 2\ic}}\dd t \\[5mm] &amp; = {2 \over 3\root{\pi}}\int_{0}^{\infty}t^{3/2}\, \pars{{1 \over t} - {t \over t^{2} + 4}}\dd t \\[5mm] &amp; = {8 \over 3\root{\pi}}\int_{0}^{\infty}\,{t^{1/2} \over t^{2} + 4}\,\dd t \\[5mm] &amp; = {8 \over 3\root{\pi}}\,{1 \over 4}\,2\root{2} \int_{0}^{\infty}\,{t^{1/2} \over t^{2} + 1}\,\dd t \\[5mm] &amp; = {4 \over 3}\root{2 \over \pi} \int_{0}^{\infty}\,{t^{1/4} \over t + 1}\,{1 \over 2}\,t^{-1/2}\,\dd t \\[5mm] &amp; = {2 \over 3}\root{2 \over \pi} \int_{1}^{\infty}\,{\pars{t - 1}^{-1/4} \over t}\,\dd t \\[5mm] &amp; = {2 \over 3}\root{2 \over \pi} \int_{1}^{0}\,{\pars{1/t - 1}^{-1/4} \over 1/t} \pars{-\,{\dd t \over t^{2}}} \\[5mm] &amp; = {2 \over 3}\root{2 \over 
\pi} \int_{0}^{1}t^{-3/4}\pars{1 - t}^{-1/4}\,\dd t = {2 \over 3}\root{2 \over \pi}\,{\Gamma\pars{1/4}\Gamma\pars{3/4} \over \Gamma\pars{1}} \\[5mm] &amp; = {2 \over 3}\root{2 \over \pi}\,{\pi \over \sin\pars{\pi/4}} = \bbx{{4 \over 3}\root{\pi}}\ \approx 2.3633 \end{align}</span></p>
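<p>A quick numerical spot-check of the reduced form <span class="math-container">$\frac{8}{3\sqrt{\pi}}\int_{0}^{\infty}\frac{t^{1/2}}{t^{2}+4}\,dt$</span> reached mid-derivation against the closed form <span class="math-container">$\frac{4}{3}\sqrt{\pi}$</span>. This Python/SciPy sketch is only an illustration and assumes SciPy is available:</p>

```python
from math import sqrt, pi
import numpy as np
from scipy.integrate import quad

# the reduced integral reached mid-derivation
inner, _ = quad(lambda t: np.sqrt(t) / (t**2 + 4), 0, np.inf)

value = 8 / (3 * sqrt(pi)) * inner   # should equal the original integral
closed_form = (4 / 3) * sqrt(pi)     # ~ 2.3633, as stated above
assert abs(value - closed_form) < 1e-6
```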
632,891
<p>I'm trying to solve this limit, for which I already know the solution thanks to Wolfram|Alpha to be $\sqrt[3]{abc}$:</p> <p>$$\lim_{n\rightarrow\infty}\left(\frac{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}}{3}\right)^n:\forall a,b,c\in\mathbb{R}^+$$</p> <p>As this limit is an indeterminate form of the type $1^\infty$, I've been trying to approach it by doing:</p> <p>$$\lim_{n\rightarrow\infty}\left(\frac{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}}{3}\right)^n=\lim_{n\rightarrow\infty}\left(1+\frac{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}-3}{3}\right)^n=\lim_{n\rightarrow\infty}\left(1+\frac{1}{\frac{3}{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}-3}}\right)^n=\lim_{n\rightarrow\infty}\left(1+\frac{1}{\frac{3}{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}-3}}\right)^{\frac{3}{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}-3}\cdot\frac{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}-3}{3}\cdot n}=e^{\lim_{n\rightarrow\infty}\frac{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}-3}{3}\cdot n}$$</p> <p>But now when I approach that top limit this is what I get:</p> <p>$$\lim_{n\rightarrow\infty}\frac{a^\frac{1}{n}+b^\frac{1}{n}+c^\frac{1}{n}-3}{3}\cdot n=\lim_{n\rightarrow\infty}\frac{n\cdot a^{\frac{1}{n}}}{3}+\frac{n\cdot b^{\frac{1}{n}}}{3}+\frac{n\cdot c^{\frac{1}{n}}}{3}-n=\lim_{n\rightarrow\infty}\frac{n\cdot a^0}{3}+\frac{n\cdot b^0}{3}+\frac{n\cdot c^0}{3}-n=\lim_{n\rightarrow\infty}\frac{n}{3}+\frac{n}{3}+\frac{n}{3}-n=0$$</p> <p>And hence the final limit should be $e^0=1$ which is clearly wrong but I honestly don't know what I did wrong, so what do you suggest me to solve this limit?</p>
Hagen von Eitzen
39,174
<p>While $\sqrt[n]a\to 1$, it is not correct to conclude that $n\sqrt[n]a-n\to 0$. Actually, $\sqrt[n]a=e^{(\ln a)/n}\approx 1+\frac{\ln a}{n}$, so $n\sqrt[n]a-n\to \ln a$. The exponent in your last step therefore tends to $\frac{\ln a+\ln b+\ln c}{3}=\ln\sqrt[3]{abc}$, which gives the expected limit $\sqrt[3]{abc}$.</p>
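<p>Numerically, <span class="math-container">$n(\sqrt[n]a-1)$</span> approaches <span class="math-container">$\ln a$</span>, and the full expression approaches <span class="math-container">$\sqrt[3]{abc}$</span>. A plain-Python check of both facts (the values of $a,b,c$ are my own choice):</p>

```python
from math import isclose, log

a, b, c = 2.0, 3.0, 5.0
n = 10**6

# the building block: n(a^{1/n} - 1) -> ln(a)
assert isclose(n * (a**(1 / n) - 1), log(a), rel_tol=1e-3)

# the original limit tends to the geometric mean of a, b, c
approx = ((a**(1 / n) + b**(1 / n) + c**(1 / n)) / 3) ** n
assert isclose(approx, (a * b * c) ** (1 / 3), rel_tol=1e-4)
```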
51,390
<p>Say $g$ is a matrix which is given as, $g = g_0 + xg_2 + x^2 g_4 .. +x^{d/2 -1}g_{d-2}+ x^{d/2}(g_d + h_d(\log (x)))$ where $d$ is an even number and each $g_i$ is a matrix (same dimension as $g$) and $h_d$ is another matrix. </p> <ul> <li>For such a set of arbitrary matrices, how can one power-series expand $\sqrt {\det(g)}$ in $x$? </li> </ul>
Dr. Wolfgang Hintze
16,361
<p>For any square matrix M which is the sum of two similar matrices M = A + B the determinant can be written as a sum of determinants as follows (example for two dimensions):</p> <pre><code>det(M) = det( ( A11 + B11, A12 + B12), (A21 + B21, A22 + B22) ) = det( ( A11 + 0, A12 + 0), (A21 + B21, A22 + B22) ) + det( ( 0 + B11, 0 + B12), (A21 + B21, A22 + B22) ) </code></pre> <p>and, expanding the lower row similarly,</p> <pre><code>= det( ( A11 + 0, A12 + 0), (A21 + 0, A22 + 0) ) + det( ( A11 + 0, A12 + 0), (0 + B21, 0 + B22) ) + det( ( 0 + B11, 0 + B12), (A21 + 0, A22 + 0) ) + det( ( 0 + B11, 0 + B12), (0 + B21, 0 + B22) ) = det( ( A11, A12), (A21, A22) ) + det( ( A11, A12), (B21, B22) ) + det( ( B11, B12), (A21, A22) ) + det( ( B11, B12), (B21, B22) ) </code></pre> <p>Letting B = x C gives then the expansion</p> <pre><code>det(M) = = det( ( A11, A12), (A21, A22) ) + x det( ( A11, A12), (C21, C22) ) + x det( ( C11, C12), (A21, A22) ) + x^2 det( ( C11, C12), (C21, C22) ) </code></pre> <p>Here we recognise det(A) and det(C) but also determinants of matrices mixed between A and C, more exactly, with replacement of rows.</p> <p>This procedure obviously generalizes to your problem. You might wish to write it down in MMA terms.</p> <p>Regards, Wolfgang</p>
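<p>The row-wise expansion above can be verified symbolically. The following Python/SymPy sketch (my own check, not written in MMA terms) confirms the 2×2 identity for <code>det(A + x C)</code>:</p>

```python
import sympy as sp

x = sp.symbols('x')
A = sp.Matrix(2, 2, sp.symbols('A11 A12 A21 A22'))
C = sp.Matrix(2, 2, sp.symbols('C11 C12 C21 C22'))

def mixed_det(r0, r1):
    # determinant of the 2x2 matrix built from the two given rows
    return sp.Matrix([r0, r1]).det()

lhs = sp.expand((A + x * C).det())
rhs = sp.expand(
    mixed_det(A.row(0), A.row(1))
    + x * mixed_det(A.row(0), C.row(1))
    + x * mixed_det(C.row(0), A.row(1))
    + x**2 * mixed_det(C.row(0), C.row(1))
)
assert sp.simplify(lhs - rhs) == 0
```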
2,436,268
<p>My problem is evaluating the following limit: $$\lim_{(x,y)\to(0,0)}\frac{x^5+y^2}{x^4+|y|}$$ The answer should be 0. I tried to convert the limit into polar form, but it didn't help because I couldn't isolate the $r$ and $\theta$-variables of the expression. My "toolbox" for solving problems like these is very limited... If polar form doesn't work, then I usually have no clue on how to continue.</p> <p><strong>Edit</strong>: I think this is the solution. $$ \lim_{(x,y)\to(0,0)}\left|\frac{x^5+y^2}{x^4+|y|}\right| = \frac{|x^5+y^2|}{|x^4+|y||} $$ Applying the triangle inequality gives $$ \frac{|x^5+y^2|}{|x^4+|y||} \leq \left|\frac{x^5}{x^4+|y|}\right| + \left|\frac{y^2}{x^4+|y|}\right| $$ Inspecting the denominators on the RHS gives: $$ \left|\frac{x^5}{x^4+|y|}\right| \leq |x|, \quad\left|\frac{y^2}{x^4+|y|}\right| \leq |y| $$ So $$ \left|\frac{x^5}{x^4+|y|}\right| + \left|\frac{y^2}{x^4+|y|}\right| \leq |x| + |y| $$ Since $|x| + |y| \to 0$ when $x,y\to 0$, the sandwich theorem states that $|\frac{x^5+y^2}{x^4+|y|}| \to 0$. And if $\lim |f(x)|=0$ then $\lim f(x)=0$ which solves the original problem.</p>
Nosrati
108,128
<p><strong>Hint:</strong></p> <p>Along the paths $x=0$ (as $y\to0$) and $y=0$ (as $x\to0$) the limit is clearly zero; furthermore $$\Big|\frac{x^5+y^2}{x^4+|y|}\Big|\leq\Big|\frac{x^5}{x^4+|y|}\Big|+\Big|\frac{y^2}{x^4+|y|}\Big|\leq|x|+|y|$$</p>
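<p>The bound <span class="math-container">$|f(x,y)|\leq|x|+|y|$</span> can be sanity-checked numerically; a throwaway Python sketch (illustration only):</p>

```python
import random

def f(x, y):
    return (x**5 + y**2) / (x**4 + abs(y))

random.seed(0)
for _ in range(10_000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x == 0 and y == 0:
        continue  # f is not defined at the origin
    # the squeeze bound |f(x, y)| <= |x| + |y|
    assert abs(f(x, y)) <= abs(x) + abs(y) + 1e-12
```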
3,640,298
<p><span class="math-container">$$ f(x,y,z)= \int_{-\infty}^{+\infty} e^ {-t(t-x)(t-y)(t-z)}\;dt$$</span> </p> <p><span class="math-container">$$t=p+x$$</span> <span class="math-container">$$ f(x,y,z)= \int_{-\infty}^{+\infty} e^ {-p(p+x)(p-(y-x))(p-(z-x))}\;dp$$</span> </p> <p><span class="math-container">$$ f(-x,y-x,z-x)= f(x,y,z) \tag {1} $$</span> </p> <p><span class="math-container">$$ f(0,a,b)= \int_{-\infty}^{+\infty} e^ {-t^2(t-a)(t-b)}\;dt$$</span> </p> <p><span class="math-container">$$ f(0,a,b)e^ {c}= \int_{-\infty}^{+\infty} e^ {-t^2(t-a)(t-b)+c}\;dt$$</span> </p> <p>We can find the roots of <span class="math-container">$$t^2(t-a)(t-b)-c=0$$</span> as <span class="math-container">$$x_1,x_2,x_3,x_4.$$</span> The roots depend on <span class="math-container">$a,b,c$</span>: <span class="math-container">$$ f(0,a,b)e^ {c}= \int_{-\infty}^{+\infty} e^ {-(t-x_1)(t-x_2)(t-x_3)(t-x_4)}\;dt$$</span> </p> <p><span class="math-container">$$t=p+x_1$$</span></p> <p><span class="math-container">$$ f(0,a,b)e^ {c}= \int_{-\infty}^{+\infty} e^ {-p(p-(x_2-x_1))(p-(x_3-x_1))(p-(x_4-x_1))}\;dp$$</span> </p> <p><span class="math-container">$$ f(0,a,b)e^ {c}= f(x_2-x_1,x_3-x_1,x_4-x_1)=\int_{-\infty}^{+\infty} e^ {-p(p-(x_2-x_1))(p-(x_3-x_1))(p-(x_4-x_1))}\;dp$$</span> </p> <p>It is enough to solve <span class="math-container">$$ f(0,a,b)= \int_{-\infty}^{+\infty} e^ {-t^2(t-a)(t-b)}\;dt$$</span> </p> <p><span class="math-container">$$t=p+\frac{a+b}{4}$$</span></p> <p><span class="math-container">$$ f(0,a,b)= \int_{-\infty}^{+\infty} e^ {-p^4+\left(\frac{3(a+b)^2}{8}-ab\right)p^2+\frac{a+b}{2}\left(\frac{(a+b)^2}{4}-ab\right)p+\frac{(a+b)^2}{16}\left(\frac{3(a+b)^2}{16}-ab\right)}\;dp$$</span> </p> <p><span class="math-container">$$ f(0,a,b)= e^ {\frac{(a+b)^2}{16}\left(\frac{3(a+b)^2}{16}-ab\right)}\int_{-\infty}^{+\infty} e^ {-p^4+\left(\frac{3(a+b)^2}{8}-ab\right)p^2+\frac{a+b}{2}\left(\frac{(a+b)^2}{4}-ab\right)p}\;dp$$</span> </p> <p><span class="math-container">$$m=\frac{3(a+b)^2}{8}-ab$$</span></p> <p><span class="math-container">$$x=\frac{a+b}{2}\left(\frac{(a+b)^2}{4}-ab\right)$$</span></p> <p><span class="math-container">$$ f(0,a,b)= e^ {\frac{(a+b)^2}{16}\left(\frac{3(a+b)^2}{16}-ab\right)}\int_{-\infty}^{+\infty} e^ {-p^4+mp^2+xp}\;dp$$</span> </p> <p><span class="math-container">$$ f(0,a,b)e^ {-\frac{(a+b)^2}{16}\left(\frac{3(a+b)^2}{16}-ab\right)}= \int_{-\infty}^{+\infty} e^ {-p^4+mp^2+xp}\;dp$$</span> </p> <p><span class="math-container">$$ y= g_m(x)= \int_{-\infty}^{+\infty} e^ {-p^4+mp^2+xp}\;dp$$</span> If we define <span class="math-container">$$y'=\frac{dy}{dx}$$</span> <span class="math-container">$$y'''=\frac{d^3y}{dx^3}$$</span></p> <p>and <span class="math-container">$m$</span> is a constant, then <span class="math-container">$$-4y'''+2my'+xy=\int_{-\infty}^{+\infty}(-4p^3+2mp+x) e^ {-p^4+mp^2+xp}\;dp=e^ {-p^4+mp^2+xp}\Big|{_{p=-\infty}^{p=+\infty}}=0$$</span> </p> <p>I need to solve the third-order linear differential equation <span class="math-container">$$-4y'''+2my'+xy=0 \tag{2}$$</span> I do not know its solution.</p> <p>I believe we can also find a way to solve <span class="math-container">$f(x,y,z)$</span> using an iteration method.
</p> <p>The solution can be written <span class="math-container">$$ f(x,y,z)e^{c_1}= \int_{-\infty}^{+\infty} e^ {-t(t-x)(t-y)(t-z)+c_1}\;dt$$</span> </p> <p>where <span class="math-container">$c_1$</span> depends on <span class="math-container">$x,y,z$</span>:</p> <p><span class="math-container">$$ f(x,y,z)e^ {c_1}= \int_{-\infty}^{+\infty} e^ {-(t-x_1)(t-y_1)(t-z_1)(t-b_1)}\;dt$$</span> </p> <p><span class="math-container">$t\to t+x_1$</span></p> <p><span class="math-container">$$ f(x,y,z)e^ {c_1}= \int_{-\infty}^{+\infty} e^ {-t(t-(y_1-x_1))(t-(z_1-x_1))(t-(b_1-x_1))}\;dt$$</span><br> where <span class="math-container">$c_2$</span> depends on <span class="math-container">$(y_1-x_1),(z_1-x_1),(b_1-x_1)$</span>:</p> <p><span class="math-container">$$ f(x,y,z)e^ {c_1}e^ {c_2}= \int_{-\infty}^{+\infty} e^ {-(t-x_2)(t-y_2)(t-z_2)(t-b_2)}\;dt$$</span> </p> <p>And if we iterate in this way,</p> <p>finally we can get <span class="math-container">$x_n=y_n=z_n=b_n=k$</span> as <span class="math-container">$n\to\infty$</span>:</p> <p><span class="math-container">$$ f(x,y,z)e^ {c_1+c_2+c_3+....}=\int_{-\infty}^{+\infty} e^ {-(t-k)^4}\;dt$$</span> </p> <p><span class="math-container">$t\to t+k$</span></p> <p><span class="math-container">$$ f(x,y,z)e^ {c_1+c_2+c_3+....}=\int_{-\infty}^{+\infty} e^ {-t^4}\;dt=2 Γ(5/4)$$</span> </p> <p><span class="math-container">$$ f(x,y,z)=2 Γ(5/4)e^ {-(c_1+c_2+c_3+....)}$$</span><br> My question:</p> <p>Is it possible to find an iteration algorithm as defined above to solve <span class="math-container">$f(x,y,z)$</span>?</p> <p>How should <span class="math-container">$c_n$</span> be selected for the next iteration so that we finally get <span class="math-container">$x_n=y_n=z_n=b_n=k$</span> as <span class="math-container">$n\to\infty$</span>?</p> <p>If you have another method to solve <span class="math-container">$ f(x,y,z)$</span>, please let me know.</p> <p>Note: This kind of iteration method can be used to solve similar differential 
equations like equation (2).</p> <p>Thanks for help and comments.</p> <p>EDIT:</p> <p>I used the method that @mrc ntn gave a clue to in his answer below. I would like to find more terms of <span class="math-container">$f(x,y,z)$</span>.</p> <p><span class="math-container">$$g(s)= f(sx,sy,sz)= \int_{-\infty}^{+\infty} e^ {-t(t-sx)(t-sy)(t-sz)}\;dt$$</span></p> <p>There are no odd terms in the expansion because <span class="math-container">$g(-s)=g(s)$</span>.</p> <p>Proof: <span class="math-container">$$g(-s)=\int_{-\infty}^{+\infty} e^ {-t(t+sx)(t+sy)(t+sz)}\;dt$$</span></p> <p><span class="math-container">$$g(-s)=\int_{-\infty}^{+\infty} e^ {-t (-1)^3(-t-sx)(-t-sy)(-t-sz)}\;dt$$</span> <span class="math-container">$t=-u$</span> <span class="math-container">$$g(-s)=-\int_{+\infty}^{-\infty} e^ {-u(u-sx)(u-sy)(u-sz)}\;du=\int_{-\infty}^{+\infty} e^ {-u(u-sx)(u-sy)(u-sz)}\;du=g(s)$$</span> Thus we can write the series expansion of <span class="math-container">$g(s)$</span>: <span class="math-container">$$g(s)= f(sx,sy,sz)= g(0)+\frac{g''(0)s^2}{2!} ((x+y+z)^2+a_1(xy+yz+xz))+\frac{g^{(4)}(0)s^4}{4!} ((x+y+z)^4+a_2(xy+yz+xz)^2+b_2(xy+yz+xz)(x+y+z)^2+c_2 xyz(x+y+z))+\frac{g^{(6)}(0)s^6}{6!} P_6(x+y+z,xy+yz+xz,xyz) +......$$</span></p> <p>If <span class="math-container">$y=0$</span> and <span class="math-container">$z=0$</span> then</p> <p><span class="math-container">$$g(s)= f(sx,0,0)= \int_{-\infty}^{+\infty} e^ {-t^3(t-sx)}\;dt$$</span></p> <p><span class="math-container">$$g(1)= f(x,0,0)= \int_{-\infty}^{+\infty} e^ {-t^3(t-x)}\;dt=\int_{-\infty}^{+\infty} e^ {-t(t+x)^3}\;dt$$</span></p> <p><span class="math-container">$$g(1)= f(x,0,0)= f(-x,-x,-x)$$</span></p> <p><span class="math-container">$$ f(x,0,0)= \int_{-\infty}^{+\infty} e^ {-t^3(t-x)}\;dt= g(0)+\frac{g''(0)}{2!} x^2+\frac{g^{(4)}(0)}{4!} x^4+\frac{g^{(6)}(0)x^6}{6!}+......$$</span></p> <p><span 
class="math-container">$$g^{(n)}(0)=\int_{-\infty}^{+\infty} t^{3n}e^ {-t^4}\;dt$$</span> <span class="math-container">$$g(0)=\frac{Γ(1/4)}{2}$$</span> <span class="math-container">$$g'(0)=0$$</span> <span class="math-container">$$g''(0)=\frac{Γ(7/4)}{2}$$</span> <span class="math-container">$$g'''(0)=0$$</span> <span class="math-container">$$g^{(4)}(0)=\frac{Γ(13/4)}{2}$$</span> <span class="math-container">$$g^{(2n)}(0)=\frac{Γ((1+6n)/4)}{2}$$</span></p> <p><span class="math-container">$$f(-x,-x,-x)= g(0)+\frac{g''(0)}{2!} (9+3a_1)x^2+\frac{g^{(4)}(0)}{4!} (81+9a_2+27b_2+3c_2)x^4+\frac{g^{(6)}(0)}{6!} P_6(-3x,+3x^2,-x^3) +......$$</span></p> <p><span class="math-container">$9+3a_1=1$</span> </p> <p><span class="math-container">$a_1=-\frac{8}{3}$</span></p> <p>We can find <span class="math-container">$a_2,b_2,c_2$</span> by using <span class="math-container">$ f(x,y,z)= f(-x,y-x,z-x) $</span> </p> <p><span class="math-container">$$(x+y+z)^4+a_2(xy+yz+xz)^2+b_2(xy+yz+xz)(x+y+z)^2+c_2 xyz(x+y+z)=(y+z-3x)^4+a_2(-x(y-x)-x(z-x)+(z-x)(y-x))^2+b_2(-x(y-x)-x(z-x)+(z-x)(y-x))(y+z-3x)^2-c_2x(y-x)(z-x)(y+z-3x)$$</span></p> <p>I will write them down when I find the constants. More terms can be found via this method, but maybe someone knows an easier way. </p> <p>EDIT : I have noticed that any term can be calculated via the method below:</p> <p><span class="math-container">$$g(s)= f(sx,sy,sz)= \int_{-\infty}^{+\infty} e^ {-t(t-sx)(t-sy)(t-sz)}\;dt=\int_{-\infty}^{+\infty} e^ {-t^4} e^ {st^3(x+y+z)} e^ {-s^2t^2(xy+xz+yz)} e^ {s^3t(xyz)} \;dt$$</span></p> <p><span class="math-container">$$g(s)= f(sx,sy,sz)= \int_{-\infty}^{+\infty} e^ {-t^4} (1+st^3(x+y+z)+\frac{s^2t^6(x+y+z)^2}{2!}+....) (1-s^2t^2(xy+xz+yz)+\frac{s^4t^4(xy+xz+yz)^2}{2!}+....) (1+s^3t(xyz)+\frac{s^6t^2(xyz)^2}{2!}+....) 
\;dt$$</span></p> <p><span class="math-container">$s^2$</span> terms can be written for <span class="math-container">$g(s)$</span> :</p> <p><span class="math-container">$$ \int_{-\infty}^{+\infty} \frac{s^2(t^3)^2(x+y+z)^2}{2!} e^ {-t^4} \;dt+ \int_{-\infty}^{+\infty} (-s^2)t^2(xy+xz+yz) e^ {-t^4} \;dt=s^2 \frac{(x+y+z)^2}{2!}\int_{-\infty}^{+\infty} t^6 e^ {-t^4} \;dt-s^2(xy+xz+yz) \int_{-\infty}^{+\infty} t^2 e^ {-t^4} \;dt= s^2 \frac{Γ(7/4)(x+y+z)^2}{2.2!} -s^2(xy+xz+yz)\frac{Γ(3/4)}{2}= \frac{s^2}{2!} \frac{Γ(7/4)}{2}[ (x+y+z)^2 -\frac{8}{3}(xy+xz+yz)] $$</span></p> <p><span class="math-container">$s^4$</span> terms can be written for <span class="math-container">$g(s)$</span> :</p> <p><span class="math-container">$$ \int_{-\infty}^{+\infty} \frac{s^4(t^3)^4(x+y+z)^4}{4!} e^ {-t^4} \;dt+ \int_{-\infty}^{+\infty} \frac{ (s^4)(-t^2)^2(xy+xz+yz)^2}{2!} e^ {-t^4} \;dt+ \int_{-\infty}^{+\infty} \frac{ s^2(t^3)^2(x+y+z)^2(s^2)(-t^2)(xy+xz+yz)}{2!} e^ {-t^4} \;dt+\int_{-\infty}^{+\infty} s^3(t)(xyz) s(t^3)(x+y+z) e^ {-t^4} \;dt=$$</span></p> <p><span class="math-container">$$ s^4[\frac{Γ(13/4)}{2.4!} (x+y+z)^4 + \frac{ Γ(5/4)}{2.2!}(xy+xz+yz)^2 -\frac{ Γ(9/4)}{2.2!}(x+y+z)^2 (xy+xz+yz) +\frac{ Γ(5/4)}{2}(xyz)(x+y+z)]=\frac{s^4}{4!} \frac{Γ(13/4)}{2} [(x+y+z)^4 + \frac{64}{15}(xy+xz+yz)^2-\frac{16}{3}(x+y+z)^2 (xy+xz+yz)+\frac{128}{15}(xyz)(x+y+z)]$$</span></p> <p>Any <span class="math-container">$s^{2n}$</span> term can be found via this method but I have not got an answer for my iteration question above. I wonder if we can find a way for iteration solution or not. Thanks a lot for answers and comments</p>
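<p>The moment formulas used above, <span class="math-container">$g^{(2n)}(0)=\frac{1}{2}Γ\left(\frac{1+6n}{4}\right)=\int_{-\infty}^{+\infty} t^{6n}e^{-t^4}\,dt$</span>, can be checked numerically. A small SciPy sketch (illustration only, assuming SciPy is available):</p>

```python
from math import gamma
import numpy as np
from scipy.integrate import quad

def moment(k):
    # numerical value of  ∫_{-∞}^{∞} t^k e^{-t^4} dt
    return quad(lambda t: t**k * np.exp(-t**4), -np.inf, np.inf)[0]

# g^{(2n)}(0) = Γ((1 + 6n)/4) / 2 for n = 0, 1, 2
for n in range(3):
    assert abs(moment(6 * n) - gamma((1 + 6 * n) / 4) / 2) < 1e-6
```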
Asinomás
33,907
<p>Because for each prime <span class="math-container">$p$</span> that does not divide <span class="math-container">$b$</span> we have <span class="math-container">$v_p(a)= v_p(ab) = v_p(m^3)=3v_p(m)$</span>.</p> <p>We can clearly make the part that is not <span class="math-container">$dp_1^{r_1}\dots p_t^{r_t}$</span> equal to the product of all of the <span class="math-container">$p^{v_p(ab)}$</span> where <span class="math-container">$p$</span> is a prime not dividing <span class="math-container">$b$</span>.</p>
550,188
<p>Okay so I have an equation in my book which is as follows.. $$ \frac {a}{s(s+a)} $$ it says "using partial fractions this can be expanded to $$ \frac {1}{s} + \frac {-1}{s+a} $$</p> <p>My usual method would be to cross multiply and do something like this $$ \frac {a}{s(s+a)} = \frac {A(s+a)}{s(s+a)} + \frac {B(s)}{s(s+a)} $$</p> <p>Then cancel off the denominators and solve..</p> <p>$$ a = A(s+a) + B(s) $$</p> <p>usually though the a would be some constant but here I have no values to play around with.. how has he done it in the book?</p>
Amzoti
38,839
<p>We have:</p> <p>$$\dfrac{a}{s(s+a)} = \dfrac{A}{s}+\dfrac{B}{s+a}$$</p> <p>So,</p> <p>$$a = A(s+a) + Bs = (A+B)s + A a$$</p> <p>Comparing coefficients gives $A+B = 0$ and $Aa = a$, so $A = 1, B = -1$.</p> <p>Final result:</p> <p>$$\dfrac{a}{s(s+a)} = \dfrac{1}{s}-\dfrac{1}{s+a}$$</p>
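<p>For reference, the decomposition can be reproduced symbolically with $a$ kept as a parameter. A SymPy sketch (my illustration, not from the book):</p>

```python
import sympy as sp

s, a = sp.symbols('s a', positive=True)

# partial fractions with respect to s, treating a as a symbolic constant
parts = sp.apart(a / (s * (s + a)), s)
assert sp.simplify(parts - (1 / s - 1 / (s + a))) == 0
```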
323,109
<p>Could someone help with the following integration: $$\int^0_\pi \frac{x \sin x}{1+\cos^2 x}$$</p> <p>So far I have done the following, but I am stuck:</p> <p>I denoted $ y=-\cos x $ then: $$\begin{align*}&amp;\int^{1}_{-1} \frac{\arccos(-y) \sin x}{1+y^2}\frac{\mathrm dy}{\sin x}\\&amp;= \arccos(-1) \arctan 1+\arccos 1 \arctan(-1) - \int^1_{-1}\frac{1}{\sqrt{1-y^2}}\frac{1}{1+y^2} \mathrm dy\\&amp;=\frac{\pi^2}{4}-\int^{1}_{-1}\frac{1}{\sqrt{1-y^2}}\frac{1}{1+y^2} \mathrm dy\end{align*}$$</p> <p>Then I am really stuck. Could someone help me?</p>
Lai
732,917
<p>Let <span class="math-container">$y=x-\frac{\pi}{2}$</span>.</p> <p><span class="math-container">\begin{array}{l}\displaystyle \int_{0}^{\pi} \frac{x \sin x}{1+\cos ^{2} x} d x&amp;=\displaystyle \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\left(y+\frac{\pi}{2}\right) \cos y}{1+\sin ^{2} y} d y\\&amp;=\displaystyle \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{y \cos y}{1+\sin ^{2} y} d y +\frac{\pi}{2} \int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} \frac{\cos y}{1+\sin ^{2} y} d y \\\displaystyle &amp;=\displaystyle \pi \int_{0}^{\frac{\pi}{2}} \frac{d(\sin y)}{1+\sin ^{2} y} \\&amp;=\displaystyle 0+\pi\left[\tan ^{-1}(\sin y)\right]_{0}^{\frac{\pi}{2}}\\&amp;=\displaystyle \frac{\pi^{2}}{4} \end{array}</span> Hence <span class="math-container">$$\int_{\pi}^{0} \frac{x \sin x}{1+\cos ^{2} x} d x =-\frac{\pi^2}{4}$$</span></p>
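<p>A numerical confirmation of <span class="math-container">$\int_{0}^{\pi}\frac{x\sin x}{1+\cos^{2}x}\,dx=\frac{\pi^{2}}{4}$</span> (a SciPy sketch, illustration only; with the reversed limits of the question the value is the negative of this):</p>

```python
from math import pi, sin, cos
from scipy.integrate import quad

val, _ = quad(lambda x: x * sin(x) / (1 + cos(x)**2), 0, pi)
assert abs(val - pi**2 / 4) < 1e-8
```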
3,024,169
<p>First, my apologies. This question may have been asked many times before but I do not know the correct terms to search on, and my school trigonometry is many years ago. Pointing me to an appropriate already-answered question would be an ideal solution for me.</p> <p>I am writing a program to do a 3D view from an observer. I want to calculate motion based upon the direction the observer is looking. I currently define this as the angle from directly ahead in two planes. Rotation about the x (left-right) axis gives me elevation (elevationRadians), and rotation about the y (up-down) axis gives me left-right. Rotation about z never happens.</p> <p>I need to calculate the change in Cartesian coordinates caused by moving D units in the direction of view.</p> <pre><code>dx = D * cos(elevationRadians)
dy = D * sin(deflectionRadians)
</code></pre> <p>But I now get two components for dz:</p> <pre><code>dz = D * cos(deflectionRadians)
dz = D * sin(elevationRadians)
</code></pre> <p>How should I combine these terms to give realistic movement? Should I add them, average them, or something else? Part of me thinks that simply adding them will give too large a dz. Could someone confirm, deny, or point me to a good resource for this please?</p> <p>Edit: rotation about y gives 'deflection' (deflectionRadians).</p> <p>My axes: x: left -> right, y: down -> up, z: behind -> in front.</p> <p>Thanks to Andei's answer which almost worked (I think our axes differed), I wound up with this:</p> <pre><code>dx = D cos(elevationRadians) cos(deflectionRadians)
dy = D sin(elevationRadians) sin(deflectionRadians)
dz = D sin(elevationRadians)
</code></pre> <p>which <em>seems</em> to work a lot better than what I had.</p> <p>What is the mathematical term for what I am trying to do? I need to read some theory, preferably at the simple end of the scale.</p>
goulding
621,218
<p>I would look to solve this using utility methods contained in 3D Vectors. I'm not sure what you're using for creating the application, but I've found Unity 3D's documentation on 3D vector Math to be very easy to follow (regardless of whether you're working in Unity).</p> <p><a href="https://unity3d.com/learn/tutorials/topics/scripting/vector-maths" rel="nofollow noreferrer">https://unity3d.com/learn/tutorials/topics/scripting/vector-maths</a></p> <p>For example - in response to your specific question about the size of "dz" I would use the "normalize" method of a Vector3 which gives you a vector unit length of 1 and then scale this vector by whatever magnitude you need.</p>
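<p>Independent of any particular engine, the underlying construction is: build a unit direction vector from the two view angles, then scale it by the distance to travel. A plain-Python sketch of that idea (the axis convention, x right / y up / z forward, is the questioner's; the function name and exact angle-to-axis mapping are my own assumptions):</p>

```python
import math

def move_delta(distance, elevation, deflection):
    """Displacement for moving `distance` units along the view direction.

    Axes: x right, y up, z forward; elevation rotates about x,
    deflection about y.  A sketch, not any specific engine's API.
    """
    dx = math.cos(elevation) * math.sin(deflection)
    dy = math.sin(elevation)
    dz = math.cos(elevation) * math.cos(deflection)
    # (dx, dy, dz) is already unit length, but normalizing makes the
    # "unit vector times magnitude" step of the vector approach explicit
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (distance * dx / norm,
            distance * dy / norm,
            distance * dz / norm)
```

<p>Looking straight ahead (<code>elevation = deflection = 0</code>) moves only along z, and looking straight up moves only along y, so no components need to be added or averaged by hand.</p>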
946,973
<p>After completing the square, what are the solutions to the quadratic equation below? <span class="math-container">$$x^2 + 2x = 25$$</span></p> <p><img src="https://i.stack.imgur.com/AoFhV.png" alt="enter image description here" /></p> <p>Honstely I think it's B. But I'm not sure.</p>
Timbuc
118,527
<p>$$25=x^2+2x=(x+1)^2-1\implies (x+1)^2=26\implies x+1=\pm\sqrt{26}$$</p>
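<p>A quick check that both roots <span class="math-container">$x=-1\pm\sqrt{26}$</span> satisfy the original equation (plain Python, illustration only):</p>

```python
from math import sqrt, isclose

# roots read off from (x + 1)^2 = 26
for x in (-1 + sqrt(26), -1 - sqrt(26)):
    assert isclose(x**2 + 2 * x, 25)
```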
578,487
<p>Maybe this is a well-known result; however, I could not find it. Before stating it, let me write here a well-known result (at least for me)</p> <blockquote> <p>Assume that $\Omega\subset\mathbb{R}^N$ is an open domain and $f:\Omega\to\mathbb{R}$. If there are constants $L&gt;0$ and $\alpha&gt;1$ such that $$|f(x)-f(y)|\leq L |x-y|^\alpha,\ \forall\ x,y\in\Omega$$ then $f$ is constant in each connected component of $\Omega$.</p> </blockquote> <p>The above result can be proved, for example, by showing that $\nabla f=0$ and then joining points in the same connected component by a continuous curve.</p> <p>Now my question is: </p> <blockquote> <p>Assume that $\Omega\subset\mathbb{R}^N$ and $f:\Omega\to\mathbb{R}$. Suppose that there are constants $L&gt;0$ and $\alpha&gt;1$ such that $$|f(x)-f(y)|\leq L |x-y|^\alpha,\ \forall\ x,y\in\Omega$$ Can we conclude that $f$ is constant in each connected component of $\Omega$?</p> </blockquote> <p>Maybe it is necessary to add the hypothesis that each connected component of $\Omega$ is pathwise connected? </p> <p>Remark: Note that in the question, $\Omega$ does not need to be an open set. It is now any set.</p>
Post No Bulls
111,742
<h3>Counterexample</h3> <p>As usual, studiosus is right: the answer is negative. A natural parametrization of an arc of the <a href="http://en.wikipedia.org/wiki/Koch_snowflake" rel="noreferrer">von Koch snowflake</a> gives a topological embedding $g:[0,1]\to \mathbb R^2$ such that $$|g(x)-g(y)|\ge C|x-y|^{p},\quad p=\frac{\log 3}{\log 4}$$ for all $x,y\in [0,1]$, with $C$ independent of $x,y$. The inverse $f=g^{-1}$ is a continuous map from a curve to $[0,1]$ which is H&ouml;lder continuous with exponent $1/p&gt;1$. </p> <p>More generally, for every $p\in (0,1)$ there is a topological embedding $g$ of $\mathbb R $ into a Euclidean space such that $$C |x-y|^p\le |g(x)-g(y)|\le C'|x-y|^{p}$$ for all $x,y\in\mathbb R $. This can be constructed directly, or obtained as a special case of <a href="http://en.wikipedia.org/wiki/Doubling_space" rel="noreferrer">Assouad's embedding theorem</a>. The inverse of $g$ is H&ouml;lder continuous with exponent $1/p$ which can be arbitrarily large. </p> <h3>Positive result</h3> <p>To conclude that $f$ is constant, you need an additional <em>geometric</em> (not just <em>topological</em>) assumption on $\Omega$. It suffices to assume that $\Omega$ is <em>quasiconvex</em> (there is $C$ such that every two points $x,y\in \Omega$ can be joined by a curve of length at most $C|x-y|$). </p> <p>Here is a weaker assumption: for every $x,y\in \Omega$ there is a connected set $E\subset \Omega$ that contains both $x$ and $y$ and has Hausdorff dimension less than $\alpha$. </p> <p><em>Proof</em>. Given $x,y\in\Omega$, take $E$ as above. The behavior of Hausdorff dimension under $\alpha$-H&ouml;lder maps is well known: $\operatorname{dim} f(E)\le \alpha \operatorname{dim} E$. Hence $\operatorname{dim} f(E)&lt;1$. On the other hand, $f(E)$ is a connected subset of $\mathbb R$, i.e., either a point or an interval. Thus, $f(E)$ is a point, and $f(x)=f(y)$. $\quad\Box$</p>
643,918
<blockquote> <p>Let $G$ be a group and $a, b \in G$. Show that $(a*b)' = a' * b'$ if and only if $a*b = b*a$.</p> </blockquote> <p>While this is simple to see by intuition, I am having a hard time expressing this formally. It seems as if I want to show that $(a*b)' = a' * b'$ strictly implies $a*b = b*a$, but I'm not sure how much I am allowed to tweak with the equations, and every time I've tried to solve this, I've caught myself in assuming $a*b = b*a$, which naturally results in a foul proof. </p>
voldemort
118,052
<p>Assuming by $a'$ you mean $a^{-1}$:</p> <p>Suppose $ab=ba$. In any group $(ab)^{-1}=b^{-1}a^{-1}$, and since $ab=ba$ we also get $(ab)^{-1}=(ba)^{-1}=a^{-1}b^{-1}$, which is the desired identity.</p> <p>Conversely, assume $(ab)^{-1}=a^{-1}b^{-1}$. Since $a^{-1}b^{-1}=(ba)^{-1}$ always holds, this gives $(ab)^{-1}=(ba)^{-1}$, and taking inverses of both sides yields $ab=ba$.</p>
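<p>A concrete illustration in the group of invertible matrices (a NumPy sketch; the particular matrices are my own example): the identity $(ab)^{-1}=b^{-1}a^{-1}$ always holds, while $(ab)^{-1}=a^{-1}b^{-1}$ fails exactly when the elements do not commute.</p>

```python
import numpy as np

inv = np.linalg.inv
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])

# the universally valid identity (AB)^{-1} = B^{-1} A^{-1}
assert np.allclose(inv(A @ B), inv(B) @ inv(A))

# A and B do not commute, and accordingly (AB)^{-1} != A^{-1} B^{-1}
assert not np.allclose(A @ B, B @ A)
assert not np.allclose(inv(A @ B), inv(A) @ inv(B))
```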
643,918
<blockquote> <p>Let $G$ be a group and $a, b \in G$. Show that $(a*b)' = a' * b'$ if and only if $a*b = b*a$.</p> </blockquote> <p>While this is simple to see by intuition, I am having a hard time expressing this formally. It seems as if I want to show that $(a*b)' = a' * b'$ strictly implies $a*b = b*a$, but I'm not sure how much I am allowed to tweak with the equations, and every time I've tried to solve this, I've caught myself in assuming $a*b = b*a$, which naturally results in a foul proof. </p>
user44197
117,158
<p>From your equation $$ a * b = \bigl((a*b)'\bigr)' = (a' * b')' = b'' * a'' = b * a,$$ which proves one direction. The converse is similar: if $a*b=b*a$, then $(a*b)' = (b*a)' = a'*b'$.</p>