Columns: qid (int64, 1 – 4.65M) · question (string, length 27 – 36.3k) · author (string, length 3 – 36) · author_id (int64, −1 – 1.16M) · answer (string, length 18 – 63k)
2,357,115
<blockquote> <p>$A$ is an invertible matrix over $\mathbb{R}$; prove that $AA^t+A^tA$ is invertible.</p> </blockquote> <p>It seems to be a trivial question, but it's not. I tried using determinants, i.e. showing $|A| \ne 0 \implies |AA^t+A^tA|\ne0$, but calculating $|AA^t+A^tA|$ is not easy.</p>
user1551
1,551
<p>Both $AA^t$ and $A^tA$ are positive definite: since $A$ (hence also $A^t$) is invertible, $x^tAA^tx=\|A^tx\|^2&gt;0$ for every $x\ne0$, and similarly for $A^tA$. Hence their sum is positive definite, and in particular invertible.</p>
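A quick numeric illustration of this argument (my own sketch using NumPy; the seed and matrix size are arbitrary choices, not from the post): for an invertible $A$, every eigenvalue of $AA^t+A^tA$ should be strictly positive.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))       # a generic matrix is invertible almost surely
assert abs(np.linalg.det(A)) > 1e-9   # confirm invertibility

S = A @ A.T + A.T @ A                 # symmetric by construction
eigvals = np.linalg.eigvalsh(S)       # real eigenvalues of the symmetric matrix

# Positive definiteness: all eigenvalues strictly positive, hence S is invertible.
assert eigvals.min() > 0
```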
2,412,454
<p>I was obviously not clear enough in my first question, so I will reformulate. I have the following equation $$ A=\frac{B\sin 2\theta}{C+D\cos 2\theta} $$ where $A,B,C,D$ are variables. I need to solve or rewrite the equation to easily obtain $\theta$ (or $2\theta$), given known values for $A, B, C, D$. Thanks for any help.</p>
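One way to attack this (my own sketch, not taken from the thread): clearing the denominator gives $B\sin 2\theta - AD\cos 2\theta = AC$, and a linear combination of sine and cosine collapses to $R\sin(2\theta-\varphi)$ with $R=\sqrt{B^2+A^2D^2}$ and $\varphi=\operatorname{atan2}(AD,B)$, so $2\theta=\varphi+\arcsin(AC/R)$ up to the usual arcsine branch. A numeric round-trip check:

```python
import math

def solve_two_theta(A, B, C, D):
    """Return one solution 2*theta of A = B*sin(2t) / (C + D*cos(2t)).

    Rearranged: B*sin(2t) - A*D*cos(2t) = A*C, then collapsed to
    R*sin(2t - phi) = A*C. Other solutions differ by the arcsine branch.
    """
    R = math.hypot(B, A * D)
    phi = math.atan2(A * D, B)
    return phi + math.asin(A * C / R)

# Round trip: pick theta, compute A from the equation, then recover 2*theta.
theta, B, C, D = 0.3, 2.0, 3.0, 1.0
A = B * math.sin(2 * theta) / (C + D * math.cos(2 * theta))
assert math.isclose(solve_two_theta(A, B, C, D), 2 * theta, rel_tol=1e-12)
```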
José Carlos Santos
446,262
<p>Yes, $\emptyset$ is an equivalence relation on $\emptyset$. What is $\emptyset$ as a binary relation? It's the relation such that no two elements are related.</p> <p>However, if $A$ is not empty, then $\emptyset$ is <em>not</em> an equivalence relation on $A$ (and the authors do <em>not</em> claim that it is), because if $a\in A$, then it is <em>not</em> true that $(a,a)\in\emptyset$.</p>
2,684,805
<p>This question was asked by my 12-year-old cousin, and I seem to be failing to give him a convincing explanation. Here is a summary of our discussion so far.</p> <p>Case 1: $a&gt;0, b&gt;0$<br> <a href="https://i.stack.imgur.com/fuoZS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fuoZS.png" alt="enter image description here"></a></p> <p>I asked him to put another block of length $a$ adjacent to $b$ and stare at the symmetry. He quickly told me that the midpoint of $a, b$ equals half the length of $a+b$: <a href="https://i.stack.imgur.com/ShBQb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ShBQb.png" alt="enter image description here"></a></p> <p>So far we're good. But when either one of $a,b$ is negative, I feel stuck, and I fail to give him a similar explanation using symmetry. I would greatly appreciate any help. Thanks! </p>
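For the negative cases, the same symmetry survives if "length" is read as distance: $m=\frac{a+b}{2}$ satisfies $m-a=b-m=\frac{b-a}{2}$, so it is equidistant from $a$ and $b$ whatever their signs. A small check of that identity (my own sketch, not from the thread):

```python
# m = (a + b) / 2 is equidistant from a and b regardless of signs:
# m - a == b - m == (b - a) / 2, hence |m - a| == |b - m|.
def midpoint(a, b):
    return (a + b) / 2

for a, b in [(3, 7), (-3, 7), (3, -7), (-3, -7)]:
    m = midpoint(a, b)
    assert m - a == b - m           # exact for these dyadic inputs
    assert abs(m - a) == abs(b - m)
```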
Robert Z
299,698
<p>If we use only two of the three colours then $(A,B,C,D)=(X,Y,X,Y)$ where $X$ can be chosen in $3$ ways and $Y$ in $2$ ways. So the number of such colourings is $3\cdot 2=6$.</p> <p>If we use all three colours then $(A,B,C,D)=(X,Y,X,Z)$ or $(A,B,C,D)=(Y,X,Z,X)$ where $X$ can be chosen in $3$ ways and $Y$ in $2$ ways and $Z$ in $1$ way. So the number of such colourings is $2\cdot (3\cdot 2\cdot 1)=12$.</p> <p>Hence the total number of colourings is $6+12=18$ (not $24$).</p>
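The case analysis can be confirmed by brute force (my own sketch; I am assuming, as the patterns $(X,Y,X,Y)$ and $(X,Y,X,Z)$ indicate, that $A,B,C,D$ sit on a cycle, so the adjacent pairs $AB$, $BC$, $CD$, $DA$ must receive different colours):

```python
from itertools import product

colours = "XYZ"  # three available colours
# Proper colourings of the 4-cycle A-B-C-D-A: adjacent vertices differ.
count = sum(
    1
    for a, b, c, d in product(colours, repeat=4)
    if a != b and b != c and c != d and d != a
)
assert count == 18  # 6 two-colour + 12 three-colour colourings
```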
110,162
<p>One can use <code>$Epilog</code> to do something when the Kernel is quit, or put an <code>end.m</code> file next to the <code>init.m</code>.</p> <blockquote> <p>For Wolfram System sessions, <code>$Epilog</code> is conventionally defined to read in a file named end.m.</p> </blockquote> <p>But if <code>$Epilog</code> is set by the user, then <code>end.m</code> is skipped.</p> <p><strong>Question:</strong> So what should I do if I want something to be done each time the Kernel is quit, while staying free to play with <code>$Epilog</code>? In the sense that if I set <code>$Epilog</code>, I would like my default action to be taken anyway.</p> <p>I need to stress that I want to establish an action that will not be accidentally overwritten during daily (but advanced) <em>Mathematica</em> usage.</p>
Chris Degnen
363
<p>Add the default epilog code to your own definition. This is what the default does:</p> <pre><code>?? $Epilog </code></pre> <blockquote> <pre><code>$Epilog := If[FindFile["end`"] =!= $Failed, &lt;&lt;end`] </code></pre> </blockquote>
2,781,017
<p>I know that $\sum a_i b_i \leq \sum a_i \sum b_i$ for $a_i$, $b_i &gt; 0$. It seems this inequality will also hold when $a_i$, $b_i \in (0,1)$. However, I am unable to find out whether</p> <p>$\sum \frac{a_i}{b_i} \leq \frac{\sum a_i}{\sum b_i}$ </p> <p>holds for $a_i$, $b_i \in (0,1)$.</p>
Mundron Schmidt
448,151
<p>Consider $$ \sum\frac{a_i}{b_i}\leq \frac{\sum a_i}{\sum b_i} \Leftrightarrow \sum b_i\sum\frac{a_i}{b_i}\leq\sum a_i. $$ Renaming $c_i=\frac{a_i}{b_i}&gt;0$ implies $a_i=b_ic_i$ and $$ \sum b_i\sum\frac{a_i}{b_i}\leq \sum a_i\Leftrightarrow \sum b_i\sum c_i\leq \sum b_ic_i. $$ But for $n\ge 2$ and positive terms, $\sum b_i\sum c_i=\sum_i b_ic_i+\sum_{i\ne j} b_ic_j&gt;\sum_i b_ic_i$, so the last inequality fails, and hence the proposed inequality is false.</p>
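A concrete numeric counterexample (my own sketch, not part of the original answer): the proposed inequality already fails for $n=2$ with all $a_i,b_i\in(0,1)$.

```python
# Counterexample to sum(a_i/b_i) <= sum(a_i)/sum(b_i) with a_i, b_i in (0,1).
a = [0.5, 0.5]
b = [0.5, 0.5]

lhs = sum(x / y for x, y in zip(a, b))   # 1 + 1 = 2
rhs = sum(a) / sum(b)                    # 1 / 1 = 1
assert lhs > rhs                         # the proposed inequality fails
```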
2,948,045
<p>In Eric Gourgoulhon's "Special Relativity in General Frames", it is claimed that the two-dimensional sphere is not an affine space. Here an affine space of dimension <em>n</em> over <span class="math-container">$\mathbb R$</span> is defined to be a non-empty set E such that there exists a vector space V of dimension <em>n</em> over <span class="math-container">$\mathbb R$</span> and a mapping </p> <p><span class="math-container">$\phi:E \times E \rightarrow V,\space\space\space (A,B) \mapsto \phi(A,B)=:\vec {AB}$</span></p> <p>that obeys the following properties:</p> <p>(i) For any point O <span class="math-container">$\in E$</span>, the function </p> <p><span class="math-container">$\phi_O: E \rightarrow V,\space\space\space M \mapsto \vec {OM}$</span></p> <p>is bijective. </p> <p>(ii) For any triplet (A,B,C) of elements of E, the following relation holds:</p> <p><span class="math-container">$\vec {AB} + \vec {BC} = \vec {AC}.$</span></p> <p>I would like to show that the sphere is not an affine space using this definition. My approach has been to assume that such a <span class="math-container">$\phi$</span> exists and then seek a contradiction. I can construct specific <span class="math-container">$\phi_O$</span>'s that are bijective, and I can show that a contradiction arises if I use the same construction centered at a new point A, with <span class="math-container">$\phi_A$</span>, but this only invalidates the specific construction I made. I am having trouble generalizing this to any <span class="math-container">$\phi$</span>. </p>
Community
-1
<p>Your definition is different from the definition <a href="https://en.m.wikipedia.org/wiki/Affine_space#Affine_subspaces_and_parallelism" rel="nofollow noreferrer">here</a>. (Though they are probably equivalent.)</p> <p>Using that definition, it is straightforward that <span class="math-container">$E$</span> is "flat". That is, given <span class="math-container">$A,B\in E$</span>, the line joining them, <span class="math-container">$A+t\cdot (B-A)$</span>, is contained in <span class="math-container">$E$</span>.</p> <p>To see this, note that <span class="math-container">$A=x+u$</span> and <span class="math-container">$B=x+v$</span>, for <span class="math-container">$x\in E,u,v\in\vec{E}$</span>. Then a typical point on the line connecting them is <span class="math-container">$x+u+t\cdot (v-u)\in E$</span>.</p> <p>Of course, <span class="math-container">$S^2$</span> doesn't have this property. (One may have to note that <span class="math-container">$S^2$</span> would have to be an affine subspace of <span class="math-container">$\mathbb R^3$</span>, and conclude it would have to be a <span class="math-container">$2$</span>-plane.)</p>
623,190
<p>What would be the formula to determine a rectangle's edges, given the perimeter and the area? For example, if the rectangle's area is 80 and the perimeter is 36, the edges would be 8 and 10, but how do I find them?</p> <p>I know that the formula for the perimeter would be 2x+2y=per, or 2(area/y)+2y=per. However, I'm trying to figure out how to find x and y when I only know the area and the perimeter.</p>
chaosflaws
81,884
<p>I am going to use $A$ for area and $p$ for perimeter to enhance readability.</p> <p>$a, b &gt; 0$, so you can divide, take roots, etc. - whatever you want. Your goal is to solve the system of equations: $$A = a\cdot b$$ $$p = 2a+2b$$</p> <p>So you do the following: $$A = a\cdot b \implies \frac{A}{a}=b$$ $$\implies p = 2a+\frac{2A}{a}$$ $$\implies \frac{a \cdot p}{2}=a^2+A$$ $$\implies 0=a^2-\frac{p}{2}a+A$$ Solving the quadratic equation gives: $$a=\frac{p}{4} \pm \sqrt{\frac{p^2}{16}-A}$$ Put your two results for $a$ into the original equation, solve it for $b$, and you get the following pairs of solutions: $$\left(\frac{p}{4} + \sqrt{\frac{p^2}{16}-A},\frac{p}{4} - \sqrt{\frac{p^2}{16}-A}\right),\left(\frac{p}{4} - \sqrt{\frac{p^2}{16}-A},\frac{p}{4} + \sqrt{\frac{p^2}{16}-A}\right)$$</p> <p>(due to the fact that $a$ and $b$ are interchangeable).</p> <p>You'll like the solutions, by the way.</p> <p>By the way, this is not really geometry - it's a little piece of algebra.</p> <p>(If you like, you can turn this into an analysis question now: what's the maximum area for a given perimeter?)</p>
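Plugging the numbers from the question ($A=80$, $p=36$) into the quadratic above as a sanity check (my own sketch):

```python
import math

def rectangle_sides(A, p):
    """Solve a*b = A, 2a + 2b = p via the quadratic a^2 - (p/2)a + A = 0."""
    disc = p * p / 16 - A              # (p/4)^2 - A under the square root
    if disc < 0:
        raise ValueError("no rectangle with this area and perimeter")
    root = math.sqrt(disc)
    return p / 4 + root, p / 4 - root

a, b = rectangle_sides(80, 36)
assert (a, b) == (10.0, 8.0)           # the 10-by-8 rectangle from the question
assert math.isclose(a * b, 80) and math.isclose(2 * (a + b), 36)
```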
312,878
<p>Why is $\mathbb{Z} [\sqrt{24}] \ne \mathbb{Z} [\sqrt{6}]$, while $\mathbb{Q} (\sqrt{24}) = \mathbb{Q} (\sqrt{6})$ ?</p> <p>(Just guessing, is there some implicit division operation taking $2 = \sqrt{4}$ out from under the $\sqrt{}$ which you can't do in the ring?)</p> <p>Thanks. (I feel like I should apologize for such a simple question.) </p>
Boris Novikov
62,565
<p>Because $\sqrt{6}\not\in\mathbb{Z} [\sqrt{24}]$: every element of $\mathbb{Z}[\sqrt{24}]$ has the form $a+b\sqrt{24}=a+2b\sqrt{6}$ with $a,b\in\mathbb{Z}$, so the coefficient of $\sqrt{6}$ is always even. Your guess is right: in $\mathbb{Q}(\sqrt{24})$ you may divide by $2$ to get $\sqrt{6}=\frac{1}{2}\sqrt{24}$, and that division is exactly what is unavailable in the ring.</p>
2,646,890
<blockquote> <p>If <span class="math-container">$p(x)$</span> is a polynomial of degree <span class="math-container">$n$</span> such that <span class="math-container">$$p(-2)=-15,\ p(-1)=1,\ p(0)=7,\ p(1)=9,\ p(2)=13,\ p(3)=25,$$</span> then the smallest possible value of <span class="math-container">$n$</span> is</p> <p>Options <span class="math-container">$(a)\; 2\;\;(b)\; 3\;\; (c)\;\; 4\;\; (d)\; 5$</span></p> </blockquote> <p>Try: tracing the curve on the coordinate axes gives one point of intersection. Further, <span class="math-container">$p(x)$</span> must be an odd-degree polynomial, and the slope of the function is not the same in each interval, so it is not linear. So it must have least degree <span class="math-container">$3$</span>.</p> <p>Can someone explain whether I am doing this right? Thanks.</p> <p>Otherwise, please provide a solution.</p>
Ross Millikan
1,827
<p>There is exactly one fifth (or lower) degree polynomial passing through six points with distinct $x$-coordinates. You can find it, for example, by Newton interpolation, or by writing the polynomial as $ax^5+bx^4+\ldots+f$ and writing six simultaneous equations to reflect the data you have. Solve them for $a,b,c,d,e,f$. If $a \neq 0$ the polynomial has degree $5$. The fact that $p(0)=7$ gives $f=7$.</p>
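A quick way to see the minimal degree for these particular data (my own sketch, complementing the answer): take repeated forward differences of the values at the equally spaced points $x=-2,\dots,3$. They become constant at order $3$, so the interpolating polynomial is a cubic, and the smallest possible $n$ is $3$.

```python
def diffs(values):
    """One round of forward differences."""
    return [b - a for a, b in zip(values, values[1:])]

values = [-15, 1, 7, 9, 13, 25]    # p(x) at x = -2, -1, 0, 1, 2, 3
d1 = diffs(values)                 # [16, 6, 2, 4, 12]
d2 = diffs(d1)                     # [-10, -4, 2, 8]
d3 = diffs(d2)                     # [6, 6, 6] -- constant: degree 3
assert d3 == [6, 6, 6]
assert diffs(d3) == [0, 0]         # fourth differences vanish

# The cubic itself: p(x) = x^3 - 2x^2 + 3x + 7 reproduces all six values.
assert all(x**3 - 2 * x**2 + 3 * x + 7 == y for x, y in zip(range(-2, 4), values))
```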
1,181,269
<p>I have a function $f(x)=x+\sin x$ and I want to prove that it is strictly increasing. A natural thing to do would be examine $f(x+\epsilon)$ for $\epsilon &gt; 0$, and it is equal to $(x+\epsilon)+\sin(x+\epsilon)=x+\epsilon+\sin x\cos \epsilon + \sin \epsilon \cos x$.</p> <p>Now all I need to prove is that $x+\epsilon+\sin x\cos \epsilon + \sin \epsilon \cos x - \sin x - x$ is always greater than $0$ but it's a dead end for me as I don't know how to proceed. Any hints?</p>
Mark Joshi
106,024
<p>Differentiate: you get $1 + \cos x$. This is nonnegative, and positive except on a discrete set of points. Integrate it and you get a strictly increasing function. </p>
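A numeric spot-check of strict monotonicity (my own sketch; the grid and interval are arbitrary choices):

```python
import math

# f(x) = x + sin x should be strictly increasing: f'(x) = 1 + cos x >= 0,
# vanishing only at the isolated points x = pi + 2*pi*k.
f = lambda x: x + math.sin(x)

xs = [-10 + 20 * k / 2000 for k in range(2001)]   # uniform grid on [-10, 10]
assert all(f(x1) < f(x2) for x1, x2 in zip(xs, xs[1:]))
```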
1,987,230
<p>On Socratica, I saw a video demonstrating how to construct groups by writing a Cayley table of the desired order satisfying three conditions. (1) The neutral element's row and column are copies of the row and column headers. (2) Every row and column contains the neutral element exactly once. (3) All the elements of the set are present in each row and column. </p> <p>My question is: does this always lead to a group? I think the conditions are necessary but insufficient, because they do not test associativity. </p>
Arthur
15,500
<p>If you are unfamiliar with modular arithmetic, then this might help: If the remainder of $\frac a5$ is $1$, that is another way of saying that there is an integer $r$ such that $a=5r+1$. Squaring both sides, we get $$a^2=(5r+1)^2=25r^2+10r+1=5(5r^2+2r)+1$$which means that the remainder of $\frac{a^2}5$ is also $1$.</p> <p>Doing the same thing for the other four possibilities will tell you that the remainder is never $3$.</p> <p>Note that this is <em>exactly</em> the same as the modular arithmetic approach, only phrased in what is hopefully more accessible language.</p>
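All five cases can be checked at once (a small sketch): the possible remainders of $a^2$ modulo $5$ are only $0$, $1$ and $4$.

```python
# Squares modulo 5: only the residues 0, 1 and 4 ever occur,
# so a^2 never leaves remainder 3 (or 2) on division by 5.
residues = {a * a % 5 for a in range(5)}
assert residues == {0, 1, 4}
assert 3 not in residues
```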
95,126
<p>Consider the finite sum</p> <pre><code>rs[x_, n_] := x/n Sum[n^2/(i + (n - i) x)^2, {i, 1, n}] </code></pre> <p>Is there a way to get <em>Mathematica</em> to calculate the limit for <code>n -&gt; ∞</code>?</p> <p>I have tried <code>Limit[]</code> as well as <code>NLimit[]</code> without success.</p>
SonerAlbayrak
49,198
<h2>The answer</h2> <blockquote> <p><span class="math-container">\begin{equation} \lim\limits_{n\rightarrow\infty}\frac{x}{n}\sum _{i=1}^n \frac{n^2}{(x (n-i)+i)^2}=\lim\limits_{a\rightarrow\infty}\left\{\begin{aligned} a^2 \left(\pi ^2 \csc ^2(\pi a x)-\log (a)-\pi \cot (\pi a)\right)&amp;&amp; x&gt;1\\ 1&amp;&amp;x=1\\ -a^2 \log (a)&amp;&amp; 0\le x&lt;1\\ a^2 \left(\pi ^2 \csc ^2(\pi a x)-\log (a)\right) &amp;&amp; x&lt;0 \end{aligned}\right. \end{equation}</span> for <span class="math-container">$$a=\frac{n}{x-1}$$</span> if <span class="math-container">$$x\in\mathbb{Q}$$</span></p> </blockquote> <h2>Derivation</h2> <p>We first note that we can do the symbolic sum</p> <pre><code>Sum[1/(i + (n - i) 2)^2, {i, 1, n}] </code></pre> <blockquote> <pre><code>PolyGamma[1, 1 - 2 n] - PolyGamma[1, 1 - n] </code></pre> </blockquote> <p>which is true for all <span class="math-container">$n\not\in\mathbb{Z}$</span>. For integer <span class="math-container">$n$</span>, we need to take the finite part; for example:</p> <pre><code>With[{n=5}, Sum[1/(i+(n-i) 2)^2,{i,1,n}] ==Normal[Series[ PolyGamma[1,1-2 k]-PolyGamma[1,1-k] ,{k,n,0}]]/.k-&gt;\[Infinity] ] </code></pre> <blockquote> <p>True</p> </blockquote> <p>Since we are interested in <span class="math-container">$n\rightarrow\infty$</span> limit, we will not be bothered with <span class="math-container">$n\in\mathbb{Z}$</span> cases below, but they can be extracted as above.</p> <p>As the next thing, we observe the following sequence:</p> <pre><code>Table[{x, Sum[1/(i + (n - i) x)^2, {i, 1, n}]}, {x, 1, 5}] // Column </code></pre> <blockquote> <pre><code>{1,1/n} {2,PolyGamma[1,1-2 n]-PolyGamma[1,1-n]} {3,1/4 (PolyGamma[1,1-(3 n)/2]-PolyGamma[1,1-n/2])} {4,1/9 (PolyGamma[1,1-(4 n)/3]-PolyGamma[1,1-n/3])} {5,1/16 (PolyGamma[1,1-(5 n)/4]-PolyGamma[1,1-n/4])} </code></pre> </blockquote> <p>We can actually check this for non-integer <span class="math-container">$x\in\mathbb{Q}$</span> values too, they obey the observed pattern. 
I am not sure if we can enlarge the domain from <span class="math-container">$x\in\mathbb{Q}$</span> to <span class="math-container">$x\in\mathbb{R}$</span>, hence I will simply assume <span class="math-container">$x\in\mathbb{Q}$</span> in the rest of the derivation: <span class="math-container">\begin{equation} \sum _{i=1}^n \frac{n^2}{(x (n-i)+i)^2}=\frac{n^2 \left(\psi ^{(1)}\left(1-\frac{x n}{x-1}\right)-\psi \left(1-\frac{n}{x-1}\right)\right)}{(x-1)^2} \end{equation}</span></p> <p>Clearly, we can rewrite this equation in a more compact form for <span class="math-container">$a\equiv\frac{n}{x-1}$</span> which covers all cases for which the sum is divergent (x=1 case is the only convergent case at hand):</p> <p><span class="math-container">\begin{equation} \sum _{i=1}^n \frac{n^2}{(x (n-i)+i)^2}=a^2 (\psi ^{(1)}(1-a x)-\psi ^{(0)}(1-a)) \end{equation}</span></p> <p>We can now consider four different cases: <span class="math-container">$x&gt;1,1&gt;x&gt;0,x=0,0&gt;x$</span>. For each case, we series expand around <span class="math-container">$a\sim\pm\infty$</span> and take the leading piece. In the end, we get the answer written at the top of the post. As an example, consider one of the cases:</p> <pre><code>FullSimplify[Normal[Series[ a^2 (-PolyGamma[0, 1 - a] + PolyGamma[1, 1 - a x]) , {a, \[Infinity], -2}, Assumptions -&gt; a &gt; 0 &amp;&amp; x &gt; 1]], a &gt; 0 &amp;&amp; x &gt; 1] </code></pre> <blockquote> <p><span class="math-container">$$a^2 \left(\pi ^2 \csc ^2(\pi a x)-\log (a)-\pi \cot (\pi a)\right)$$</span></p> </blockquote>
3,289,401
<p>As an example in MATLAB</p> <pre><code>[U,S,V]=svd(randn(3,2)+1j*randn(3,2)) assert(isreal(V(1,:))) </code></pre> <p>Why is the first row of V purely real?</p>
ndrizza
147,166
<p>The solution is to do the following projections:</p> <p>1) Projection to unit sphere</p> <p>2) Stereographic projection</p> <p>3) Rescaling by radius R</p>
2,968,235
<p><span class="math-container">$\log_3 4$</span> and <span class="math-container">$\log_7 10$</span>: which of these two logarithms is greater?</p> <p>I figured out that both are between <span class="math-container">$1$</span> and <span class="math-container">$2$</span>, then between <span class="math-container">$1$</span> and <span class="math-container">$1.5$</span>. And then <span class="math-container">$\log_34$</span> is greater than <span class="math-container">$1.25$</span>, and <span class="math-container">$\log_710$</span> is smaller than <span class="math-container">$1.25$</span>. However, that method doesn't work for every example, and I wonder if there's an easier way to solve this? </p>
Calum Gilhooley
213,690
<p>Here is a proof using no numbers larger than <span class="math-container">$128$</span>. We have <span class="math-container">$81 &lt; 125 &lt; 128$</span>, i.e. <span class="math-container">$3^4 &lt; 5^3 &lt; 2^7$</span>, therefore <span class="math-container">$\log_35 &gt; \tfrac{4}{3}$</span> and <span class="math-container">$\log_25 &lt; \tfrac{7}{3}$</span>, therefore <span class="math-container">$\log_25 &lt; 1 + \log_35$</span>, therefore: <span class="math-container">$$ \log_210 &lt; 2 + \log_35 = \log_345 &lt; \log_349 = 2\log_37, $$</span> therefore <span class="math-container">$\log_410 &lt; \log_37$</span>, therefore <span class="math-container">$\log_34 &gt; \log_710$</span>. <span class="math-container">$\square$</span></p> <p>The last step uses the general proposition that <span class="math-container">$\log_ab &gt; \log_cd$</span> if and only if <span class="math-container">$\log_bd &lt; \log_ac$</span>, which can be proved by rewriting all the logarithms in terms of logarithms to a single base (e.g. <span class="math-container">$\log_ab = \ln b/\ln a$</span>, etc.).</p>
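A floating-point sanity check of the conclusion (a sketch, not a proof):

```python
import math

# log_3 4 vs log_7 10, via the change-of-base formula log_a b = ln b / ln a.
log3_4 = math.log(4) / math.log(3)    # ~1.2619
log7_10 = math.log(10) / math.log(7)  # ~1.1833
assert log3_4 > log7_10

# The intermediate inequality log_4 10 < log_3 7 used in the proof:
assert math.log(10) / math.log(4) < math.log(7) / math.log(3)
```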
1,296,230
<p>This is from Lang's <em>Algebra</em> (page 251):</p> <blockquote> <p><strong>Proposition 6.11</strong> <em>Let <span class="math-container">$E/F$</span> be a normal field extension. Let <span class="math-container">$E^G$</span> be the fixed field of <span class="math-container">$\operatorname{Aut}(E/F)$</span>. Then, <span class="math-container">$E^G$</span> is purely inseparable over <span class="math-container">$F$</span> and <span class="math-container">$E$</span> is separable over <span class="math-container">$E^G$</span>.</em></p> </blockquote> <p>And below is a corollary of this proposition:</p> <blockquote> <p><strong>Corollary 6.12.</strong> <em>Let <span class="math-container">$F$</span> be a field with characteristic <span class="math-container">$p\neq 0$</span> such that <span class="math-container">$F^p=F$</span>. Then, every algebraic extension <span class="math-container">$E$</span> of <span class="math-container">$F$</span> is separable and <span class="math-container">$E^p=E$</span>.</em></p> </blockquote> <p>How is this a corollary of the above proposition?</p> <p>Lang states that "Every algebraic extension is contained in a normal extension, so Proposition 6.11 can be applied to get this", but how?</p> <p>Let <span class="math-container">$E$</span> be an algebraic extension of <span class="math-container">$F$</span>. 
Then, there is a field extension <span class="math-container">$L$</span> of <span class="math-container">$E$</span> such that <span class="math-container">$L/F$</span> is normal.</p> <p>Let <span class="math-container">$\phi\colon F\to F:a\mapsto a^p$</span>.</p> <p>Then, by hypothesis, <span class="math-container">$\phi$</span> is a field automorphism of <span class="math-container">$F$</span>.</p> <p>Then, <span class="math-container">$\phi$</span> can be extended to a field monomorphism <span class="math-container">$\sigma \colon \bar F \to \bar F$</span>, but since <span class="math-container">$\phi$</span> does not fix <span class="math-container">$F$</span> pointwise, I don't see what this has to do with Proposition 6.11.</p> <p>This is all I know. How do I proceed to prove the corollary?</p>
Community
-1
<p><em>Let <span class="math-container">$\operatorname{char}(k) = p &gt; 0$</span>. Suppose that <span class="math-container">$k$</span> is perfect, that is, <span class="math-container">$k^p = k$</span>.</em></p> <p><strong>We prove the first part of the corollary for finite extensions of <span class="math-container">$k$</span>.</strong></p> <p>Let <span class="math-container">$E/k$</span> be a finite extension. There is a normal extension <span class="math-container">$K/k$</span> such that <span class="math-container">$E \subset K \subset E^a$</span>. Let <span class="math-container">$G = \operatorname{Aut}(K/k)$</span> and let <span class="math-container">$K^G$</span> be the fixed field of <span class="math-container">$G$</span>. Then <span class="math-container">$K/K^G$</span> is separable and <span class="math-container">$K^G/k$</span> is purely inseparable, by Proposition 6.11.</p> <p>It suffices to show that <span class="math-container">$K^G=k$</span>. For then <span class="math-container">$K/K^G = K/k$</span>, so <span class="math-container">$K/k$</span> is separable. Hence, <span class="math-container">$E/k$</span> is also separable.</p> <p>Now we show that <span class="math-container">$K^G=k$</span>. Since <span class="math-container">$K^G / k$</span> is purely inseparable, every <span class="math-container">$\alpha \in K^G$</span> is purely inseparable over <span class="math-container">$k$</span>, that is, for each <span class="math-container">$\alpha \in K^G$</span> there exists <span class="math-container">$n \geq 0$</span> such that <span class="math-container">$\alpha^{p^n} \in k$</span>. To show that <span class="math-container">$\alpha \in K^G \implies \alpha \in k$</span>, we need to show that <span class="math-container">$n = 0$</span> works for every <span class="math-container">$\alpha \in K^G$</span>. 
For the sake of contradiction, suppose that <span class="math-container">$\alpha \in K^G$</span> such that <span class="math-container">$\alpha \not\in k$</span>. Let <span class="math-container">$n \geq 1$</span> be the least positive integer such that <span class="math-container">$\alpha^{p^n} \in k$</span>. Since <span class="math-container">$k^p = k$</span>, there exists <span class="math-container">$a \in k$</span> such that <span class="math-container">$\alpha^{p^n} = a^p$</span>. Hence, <span class="math-container">$\alpha^{p^{n-1}} = a \in k$</span>, which contradicts the minimality of <span class="math-container">$n$</span>. Hence, proved.</p> <p><strong>Next, we prove the first part of the corollary for algebraic extensions of <span class="math-container">$k$</span> of infinite degree.</strong></p> <p>Let <span class="math-container">$E/k$</span> be an algebraic extension such that <span class="math-container">$[E:k] = \infty$</span>. To show that <span class="math-container">$E/k$</span> is separable, we show equivalently that every <span class="math-container">$\alpha \in E$</span> is separable over <span class="math-container">$k$</span>. So, let <span class="math-container">$\alpha \in E$</span> and consider the extension <span class="math-container">$k(\alpha)/k$</span>. This is a finite extension, so, by what we have proved earlier, <span class="math-container">$k(\alpha)/k$</span> is separable. Hence, <span class="math-container">$\alpha$</span> is separable over <span class="math-container">$k$</span>.</p> <p><strong>Now, we prove the second part of the corollary for finite extensions of <span class="math-container">$k$</span>.</strong></p> <p>Let <span class="math-container">$E/k$</span> be a finite extension. We have proved that <span class="math-container">$E/k$</span> is separable. So, by Corollary 6.10 (see below), <span class="math-container">$E^{p^n}k = E$</span> for all <span class="math-container">$n \geq 1$</span>. 
In particular, <span class="math-container">$E^p k = E$</span>. Since <span class="math-container">$k \subset E \implies k^p \subset E^p$</span> and since <span class="math-container">$k$</span> is perfect, we have that <span class="math-container">$k \subset E^p$</span>. So, <span class="math-container">$E^p k = E^p$</span>. Thus, <span class="math-container">$E^p = E$</span>.</p> <p><strong>Next, we prove the second part of the corollary for algebraic extensions of <span class="math-container">$k$</span> of infinite degree.</strong></p> <p>Let <span class="math-container">$E/k$</span> be an algebraic extension such that <span class="math-container">$[E:k] = \infty$</span>. Then, <span class="math-container">$E$</span> is the union of all its finitely generated subextensions containing <span class="math-container">$k$</span>, that is, <span class="math-container">$$ E = \bigcup k(\alpha_1,\dots,\alpha_n), $$</span> where the union is taken over all finite subsets <span class="math-container">$\{ \alpha_1,\dots,\alpha_n \}$</span> of <span class="math-container">$E$</span>. Each subextension <span class="math-container">$k(\alpha_1,\dots,\alpha_n)$</span> is a finite extension of <span class="math-container">$k$</span>, so, by what we have shown above, each such subextension is perfect. Since <span class="math-container">$$ E^p = \bigcup k(\alpha_1,\dots,\alpha_n)^p, $$</span> we have that <span class="math-container">$E^p = E$</span>. Hence, proved.</p> <hr> <blockquote> <p><strong>Corollary 6.10.</strong> <em>Let <span class="math-container">$E^p$</span> denote the field of all elements <span class="math-container">$x^p$</span>, <span class="math-container">$x \in E$</span>. Let <span class="math-container">$E$</span> be a finite extension of <span class="math-container">$k$</span>. If <span class="math-container">$E^p k = E$</span>, then <span class="math-container">$E$</span> is separable over <span class="math-container">$k$</span>. 
If <span class="math-container">$E$</span> is separable over <span class="math-container">$k$</span>, then <span class="math-container">$E^{p^n} k = E$</span> for all <span class="math-container">$n \geq 1$</span>.</em></p> </blockquote>
2,061,063
<p>Let $X \subset C(\mathbb R;\mathbb R)$ be the space of all continuous functions $u: \mathbb R \to \mathbb R$ where </p> <p>$$\lim_{x \to \pm \infty} u(x)=0$$</p> <p>provided with the $\sup$-norm. Let $k \in L^1(\mathbb R)$, $u \in X$ and </p> <p>$$(Ku)(x) := \int_\mathbb R k(x-y)u(y)\,dy, \,\,\,x \in \mathbb R.$$</p> <p>Furthermore $$\int_\mathbb R |k(s)|\, ds &lt;1$$</p> <p>How can I show that for every $f \in X$ there exists exactly one $u \in X$ such that $$u-Ku=f\,\,? $$</p>
Fred
380,717
<p>$K$ is a bounded linear operator with</p> <p>$\|K\|\le\int_\mathbb R |k(s)|\, ds &lt;1$.</p> <p>Hence (Neumann series!) $I-K$ is invertible, and $u=(I-K)^{-1}f=\sum_{n\ge 0}K^nf$ is the unique solution of $u-Ku=f$.</p>
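The Neumann-series argument can be watched numerically (my own sketch with an arbitrary kernel and right-hand side, discretised by a Riemann sum, none of which appear in the thread): since the discrete analogue of $\|K\|\le\int|k|<1$ makes $u\mapsto f+Ku$ a contraction, fixed-point iteration converges to the unique solution of $u-Ku=f$.

```python
import numpy as np

# Grid on [-10, 10]; k(s) = 0.2*exp(-|s|) has integral ~0.4 < 1 (assumed kernel).
h = 0.05
x = np.arange(-10, 10 + h, h)
k = 0.2 * np.exp(-np.abs(x))
f = np.exp(-x**2)                       # an arbitrary f in X

def K(u):
    """Discretised convolution (Ku)(x) = integral of k(x - y) u(y) dy."""
    return np.convolve(u, k, mode="same") * h

# Fixed-point iteration u_{n+1} = f + K u_n, a contraction with rate ~0.4.
u = np.zeros_like(f)
for _ in range(60):
    u = f + K(u)

residual = np.max(np.abs(u - K(u) - f))
assert residual < 1e-10                 # u solves the discrete u - Ku = f
```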
1,728,595
<p>Here is the claim I'm trying to understand: Given that $N$ is an integer-valued random variable, why is it true that</p> <p>$$Var(N) = \sum_{i=1}^\infty Var(1_{N\ge i})$$</p> <p>For context, this is a step in the answer to exercise 4.5.10 in Rosenthal, <em>A First Look at Rigorous Probability Theory</em>, 2nd ed., p. 53, which I'm trying to work through in self-study.</p> <p>If one were to substitute expectation for variance, then proposition 4.2.9 would apply, which shows that $\sum_{k=1}^\infty P(X \ge k) = E\lfloor X \rfloor$. And we no longer need the floor function if $X$ is integer valued. Variance is of course defined as an expectation, but even if the original random variable ($N$ above) is integer-valued, its mean doesn't have to be, and that means that the random variable $(N - E(N))^2$ isn't necessarily integer valued, so that theorem doesn't apply directly.</p> <p>Is there another way to see why the above claim is true? </p>
joriki
6,622
<p>This is wrong. We have <span class="math-container">$N=\sum_i1_{N\ge i}$</span>, so the equation would hold if the indicator variables were all independent. But they're not; they're all positively correlated, so we need to add their positive covariances.</p> <p><a href="http://math.uh.edu/%7Ejosic/myweb/teaching/probability/A%20Collection%20of%20Exercises%20in%20Avdanced%20Prob%20Theory.pdf" rel="nofollow noreferrer">Here's a solution manual</a> that claims to solve this exercise on p. <span class="math-container">$17$</span>. The error is in the step where they replace <span class="math-container">$E(1_{N\ge i}1_{N\ge j})$</span> by <span class="math-container">$E(1_{N\ge i})E(1_{N\ge j})$</span> in the second term of <span class="math-container">$E(S^2)$</span>, thus ignoring the correlation between the two indicator variables.</p> <p>An easy case to see this clearly is <span class="math-container">$j=i+1$</span> and <span class="math-container">$P(N=i)=0$</span>; then <span class="math-container">$1_{N\ge i}=1_{N\ge j}$</span> with complete correlation.</p>
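A tiny exact counterexample confirming this (my own sketch): take $N$ uniform on $\{1,2,3\}$; then $\operatorname{Var}(N)=2/3$, while the indicator variances sum to only $4/9$.

```python
from fractions import Fraction as F

# N uniform on {1, 2, 3}; exact arithmetic with fractions.
support = [1, 2, 3]
p = F(1, 3)

EN = sum(p * n for n in support)                  # E[N] = 2
var_N = sum(p * n * n for n in support) - EN**2   # 14/3 - 4 = 2/3

# Var(1_{N>=i}) = q(1-q) with q = P(N >= i), for i = 1, 2, 3.
sum_var = F(0)
for i in support:
    q = sum(p for n in support if n >= i)
    sum_var += q * (1 - q)                        # 0 + 2/9 + 2/9 = 4/9

assert var_N == F(2, 3) and sum_var == F(4, 9)
assert var_N > sum_var          # the missing positive covariances make up the gap
```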
162,630
<p>Let $\mathbb{G}$ be a reductive group defined over a number field $K$, let $Z$ be its center, and let $\mathbb{A}:=\mathbb{A}_K$ be the ring of adeles of $K$. Reasonably, we care about the $\mathbb{G}(\mathbb{A})$-representation $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$. It naturally contains the sub-representations $$L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}),\omega):=\{f\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash\mathbb{G}(\mathbb{A}))|\,\,\,|f|\in L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A})), \forall z\in Z(\mathbb{A}), g \in \mathbb{G}(\mathbb{A})\,\,\, f(zg)=\omega(z)f(g)\} $$</p> <p>for every unitary character $\omega$ of $Z(\mathbb{A})$. In fact $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ is the direct integral of these subrepresentations.</p> <p>I understand that it is generally desirable to deal with $L^2(\mathbb{G}(K)\backslash \mathbb{G}(\mathbb{A}))$ by decomposing it into the cuspidal part, which is discrete, and the Eisenstein part, which is (I think!) continuous. In order to define this cuspidal part, people define $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ to be the subrepresentation of $L^2(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}),\omega)$ consisting of all functions $f$ such that for every proper $K$-parabolic subgroup $\mathbb{P}$ of $\mathbb{G}$, with unipotent radical $N$, the integral $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn$ vanishes for almost all $g\in\mathbb{G}(\mathbb{A})$.</p> <p>The definition of a cuspidal representation is then an irreducible unitary subrepresentation of $L^2_0(Z(\mathbb{A})\mathbb{G}(K)\backslash \mathbb{G} (\mathbb{A}), \omega)$ for some central character $\omega$.</p> <p>I feel that I really do not understand the intuition behind the condition with the parabolic subgroups. 
Parabolic subgroups and their unipotent radicals seem like very formal constructions to me, but I bet there is some geometric intuition that I'm missing. Is there some geometry that should be in the back of my mind that explains the condition $\int_{N(K)\backslash N(\mathbb{A})} f(gn)dn=0$? How does this condition relate to vanishing at the cusps in the classical definition of cusp forms? </p>
Alexander Braverman
3,891
<p>I think the simplest answer is this. Every reductive group $G$ comes together with a bunch of smaller reductive groups - the Levi subgroups (they are smaller in the sense that their semi-simple rank is smaller). Now, given such a subgroup $M$, you have a way to construct representations of $G$ out of representations of $M$ (in the context of automorphic forms this procedure is called Eisenstein series). However, not all representations of $G$ arise in this way (for strictly smaller $M$) -- cuspidal representations are exactly those "which have nothing to do" with smaller Levi subgroups (so you can't construct them out of something simpler).</p>
3,063,053
<p>I'm a Calculus I student and my teacher has given me a set of problems to solve with L'Hoptial's rule. Most of them have been pretty easy, but this one has me stumped. <br /></p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>You'll notice that using L'Hopital's rule flips the value of the top to the bottom. For example, using it once returns: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{\sqrt{x^2 + 1}}{x}$$</span> </p> <p>And doing it again returns you to the beginning: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>I of course plugged it into my calculator to find the limit to evaluate to 1, but I was wondering if there was a better way to do this algebraically?</p>
mechanodroid
144,766
<p>Set <span class="math-container">$x = \sinh t$</span>. We have <span class="math-container">$$\frac{x}{\sqrt{x^2+1}}= \frac{\sinh t}{\sqrt{1+\sinh^2t}} = \frac{\sinh t}{\cosh t} = \tanh t$$</span></p> <p><span class="math-container">$x \to \infty$</span> is equivalent to <span class="math-container">$t\to\infty$</span> so <span class="math-container">$$\lim_{x\to\infty} \frac{x}{\sqrt{x^2+1}} = \lim_{t\to\infty} \tanh t = \lim_{t\to\infty}\frac{e^t - e^{-t}}{e^t+e^{-t}} = \lim_{t\to\infty}\frac{e^{2t}-1}{e^{2t}+1} = 1$$</span></p>
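A quick numerical cross-check of the substitution above (a sketch, not part of the original answer): $x/\sqrt{x^2+1}$ coincides with $\tanh(\operatorname{arsinh} x)$, and both approach $1$ as $x$ grows.

```python
from math import sqrt, tanh, asinh

def f(x):
    # the original expression x / sqrt(x^2 + 1)
    return x / sqrt(x * x + 1)

# the substitution x = sinh(t) says f(x) = tanh(asinh(x))
agree = all(abs(f(x) - tanh(asinh(x))) < 1e-12 for x in (0.5, 3.0, 100.0))

# values marching toward the limit 1
vals = [f(10.0 ** k) for k in range(1, 7)]
```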
956,235
<p>This may be a little low-brow for this forum, but I'm trying to figure out what the common base number set is between two other sets of numbers. Here's the situation: I have received quotes from two vendors for a list of products that they sell, and the prices they have quoted are:</p> <pre><code> Vendor 1's Price Vendor 2's Price Item #1 $9.76 $9.12 Item #2 $15.60 $14.56 Item #3 $9.76 $9.12 Item #4 $15.60 $14.56 Item #5 $9.76 $9.12 </code></pre> <p>Each vendor is taking a certain "list price" and each is applying its own margin. Is there a way to figure out what the "list price" is that these two vendors are working off of?</p> <p>Thanks very much!!!</p>
symmetricuser
125,084
<p>You made a minor mistake: $v(t)$ should be $112 - 32t$. This should make everything correct. As for the second part, ground means $s(t)=0$. So find the corresponding time and plug into $v(t)$ to find the impact velocity.</p>
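The original problem statement is not shown in this excerpt, so the following is a heavily hedged sketch: assuming the standard textbook setup behind $v(t)=112-32t$ (an object launched straight up at $112\ \mathrm{ft/s}$ with $g = 32\ \mathrm{ft/s^2}$, so $s(t)=112t-16t^2$), the suggested steps can be checked numerically.

```python
# Hypothetical setup (assumed, not taken from the original post):
# s(t) = 112*t - 16*t**2 feet, so v(t) = s'(t) = 112 - 32*t.
def s(t):
    return 112 * t - 16 * t ** 2

def v(t):
    return 112 - 32 * t

# "Ground" means s(t) = 0: 16*t*(7 - t) = 0, so t = 0 (launch) or t = 7.
impact_time = 7.0
impact_velocity = v(impact_time)   # negative sign: moving downward at impact
```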
748,325
<p>In order to prove non-uniqueness of singular vectors when a repeated singular value is present, the book (Trefethen), argues as follows: Let $\sigma$ be the first singular value of A, and $v_{1}$ the corresponding singular vector. Let $w$ be another linearly independent vector such that $||Aw||=\sigma$, and construct a third vector $v_{2}$ belonging to span of $v_{1}$ and $w$, and orthogonal to $v_{1}$. All three vectors are unitary, so $w=av_{1}+bv_{2}$ with $|a|^2+|b|^2=1$, and $v_{2}$ is constructed (Gram-Schmidt style) as follows:</p> <p>$$ {v}_{2}= \dfrac{{w}-({v}_{1}^{T} w ){v}_{1}}{|| {w}_{1}-({v}_{1}^{T} {w} ){v}_{1} ||_{2}}$$</p> <p>Now, Trefethen says, $||A||=\sigma$, so $||Av_{2}||\le \sigma$ but this must be an equality (and so $v_{2}$ is another singular vector relative to $\sigma$), since otherwise we would have $||Aw||&lt;\sigma$, in contrast with the hypothesis.</p> <p>How that? I cannot see any elementary application of triangle inequality or Schwarz inequality to prove this claim.</p> <p>I am pretty well convinced of partial non-uniqueness of SVD in certain situations. Other proofs are feasible, but I wish to undestand this specific algebraic step of this specific proof.</p> <p>Thanks.</p>
Etienne
80,469
<p>Trefethen's argument is indeed very strange.</p> <p>As I understood the above answers, everything would be fine if one knew that $Av_1$ and $Av_1$ are orthogonal. </p> <p>This can be done by observing that since $\sigma_1=\Vert A\Vert$ and $v_1$ is a unit vector such that $\Vert Av_1\Vert=\sigma_1$, then $v_1$ is an eigenvector for $A^*A$ with eigenvalue $\sigma_1^2$. </p> <p>Indeed, we have $\sigma_1^2=\Vert Av_1\Vert^2=\langle A^*Av_1,v_1\rangle$, i.e. $\langle (\sigma_1^2Id-A^*A)v_1,v_1\rangle=0$. Since the sesquilinear form $B(u,v)=\langle (\sigma_1^2Id-A^*A)u,v\rangle$ is positive semi-definite (because $\sigma_1^2=\Vert A\Vert^2=\Vert A^*A\Vert$) it follows, by Cauchy-Schwarz's inequality applied to $B$, that $(\sigma_1^2Id-A^*A)v_1=0$, i.e. $A^*Av_1=\sigma_1^2v_1$. </p> <p>The details for Cauchy-Schwarz are as follows: we have $B(v_1,v_1)=0$, hence by Cauchy-Schwarz: $$\vert B(v_1,u)\vert\leq B(v_1,v_1)^{^1/2}B(u,u)^{1/2}=0;$$ that is, $B(v_1,u)=0$ for all vectors $u$. This means that $(\sigma_1^2Id-A^*A)v_1$ is orthogonal to everybody, and hence is equal to $0$.</p> <p>Once you know that $A^*Av_1=\sigma_1^2v_1$, simply write $$\langle Av_1,Av_2\rangle=\langle A^*Av_1,v_2\rangle=\sigma_1^2\langle v_1,v_2\rangle=0\, .$$</p> <p>Having said that, it seems however much simpler to prove that $\Vert Av_2\Vert=\sigma_1$ as follows. By the above reasoning, $v_1$ and $w$ are both eigenvectors of $A^*A$ with eigenvalue $\sigma_1^2$. Hence, so is $v_2$, since $v_2$ is a linear combination of $v_1$ and $w$. So we have $\Vert Av_2\Vert^ 2=\langle A^*Av_2,v_2\rangle=\langle\sigma_1^2v_2,v_2\rangle=\sigma_1^2$, as required.</p> <p><strong>Edit</strong> Actually, I didn't read Ewan's answer carefully, since he's also proving that $Av_1$ and $Av_2$ are orthogonal! His argument is in fact the one that is used in the proof of Cauchy-Schwarz's inequality.</p>
1,075,215
<p>Question: An actuary is studying the prevalence of three health risk factors, denoted by A, B, and C, within a population of women. For each of the three factors, the probability is 0.1 that a woman in the population only has this risk factor (and no others). For any two of three factors, the probability is 0.12 that she has exactly two of these risk factors (but not the other). The probability that a woman has all three risk factors given that she has A and B, is (1/3). What is the probability that a woman has none of the three risk factors, given that she does not have risk factor A?</p> <p>My attempt: I wrote the "probability that a woman has none of the three risk factors, given that she does not have risk factor A" as Pr(A'andB'andC'|A') as Pr(A'andB'andC'andA')/Pr(A') which just simplifies to Pr(A'andB'andC')/Pr(A') where Pr(A') = (1-.1) = .9. I'm not entirely sure where to go on from there. I also tried to draw a Venn Diagram with three intersecting circles where Pr(AandB'andC') = .1 (same for B and C), but that didn't really get me anywhere The answer is 0.467 (rounded). Can you guys please show me what I'm doing wrong or what I should be doing?</p> <p>Thank you guys so much!</p>
megas
191,170
<p>First, lets write down what we know:</p> <ul> <li><span class="math-container">$ P(A \cap B' \cap C') = P(A' \cap B \cap C')= P(A' \cap B' \cap C) = 0.1$</span></li> <li><span class="math-container">$P(A \cap B \cap C') = P(A \cap B' \cap C)= P(A' \cap B \cap C) = 0.12$</span></li> <li><span class="math-container">$P(A \cap B \cap C|A,B) = \frac{1}{3}.$</span></li> </ul> <p>As you noted, we are interested in computing the probability <span class="math-container">$$ P(A' \cap B' \cap C'|A') = \frac{P(A' \cap B' \cap C')}{P(A')}. $$</span> We have <span class="math-container">$$ P(A') = 1 - P(A). $$</span> By the law of total probability, <span class="math-container">\begin{align} P(A) &amp;= P(A \cap (B \cap C)) + P(A \cap (B' \cap C)) + P(A \cap (B \cap C')) + P(A \cap (B' \cap C')) \\ &amp; = P(A \cap B \cap C) + 0.12 + 0.12 +0.1 \\ &amp; = P(A \cap B \cap C) + 0.34. \end{align}</span> Lets try to determine <span class="math-container">$P(A \cap B \cap C)$</span>. We have <span class="math-container">\begin{align} P(A \cap B \cap C) &amp; = P( A \cap B\cap C | A \cap B) \cdot P(A \cap B) = \frac{1}{3} \cdot P(A \cap B). \end{align}</span> Further, by the law of total probability, <span class="math-container">\begin{align} P(A \cap B) = P(A \cap B \cap C) + P(A \cap B \cap C') = P(A \cap B \cap C) + 0.12. \end{align}</span> Combining the two previous equations, we find <span class="math-container">\begin{align} P(A \cap B) = \frac{1}{3}P(A \cap B) + 0.12 \quad \Rightarrow \quad P(A \cap B) = \frac{3\cdot 0.12}{2} = 0.18, \end{align}</span> and <span class="math-container">\begin{align} P(A \cap B \cap C) &amp; = \frac{1}{3} \cdot P(A \cap B) = \frac{1}{3}0.18 = .06. 
\end{align}</span> Returning to the calculation of <span class="math-container">$P(A)$</span>, we get <span class="math-container">\begin{align} P(A) = P(A \cap B \cap C) + 0.34 = .06 +0.34 = 0.4, \end{align}</span> and in turn <span class="math-container">\begin{align} P(A') = 1 - P(A) = 1- 0.4 = 0.6. \end{align}</span> Finally, the probability that a woman has <strong>no</strong> factor can be found by subtracting from <span class="math-container">$1$</span> the sum of the probabilities of all <em>disjoint</em> events in which a woman has a factor (exactly one, exactly two, or all three): <span class="math-container">\begin{align} P(A' \cap B' \cap C') &amp;=1 - \left[P(A \cap B \cap C) + 3\cdot 0.1 + 3 \cdot 0.12 \right]\\ &amp;=1 - (0.06 + 0.3 + 0.36) = 0.28. \end{align}</span> We now have everything we need to compute the desired result: <span class="math-container">$$ P(A' \cap B' \cap C'|A') = \frac{P(A' \cap B' \cap C')}{P(A')} =\frac{0.28}{0.6} = 0.4666... $$</span></p>
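The arithmetic above can be re-traced with exact fractions (a verification sketch, not part of the original answer):

```python
from fractions import Fraction

p_one = Fraction(1, 10)    # P(exactly one given factor, no others)
p_two = Fraction(12, 100)  # P(exactly a given pair, not the third)

# P(A∩B) = P(A∩B∩C) + P(A∩B∩C') and P(A∩B∩C) = (1/3)·P(A∩B)
# together give P(A∩B) = p_two / (1 - 1/3) = 0.18.
p_AB = p_two / (1 - Fraction(1, 3))
p_all = p_AB / 3                              # P(A∩B∩C) = 0.06

p_A = p_all + 2 * p_two + p_one               # 0.06 + 0.24 + 0.10 = 0.40
p_none = 1 - (p_all + 3 * p_one + 3 * p_two)  # 1 - 0.72 = 0.28
answer = p_none / (1 - p_A)                   # P(no factor | not A)
```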
864,237
<p>Let's take a short exact sequence of groups $$1\rightarrow A\rightarrow B\rightarrow C\rightarrow 1$$ I understand what it says: the image of each homomorphism is the kernel of the next one, so the one between $A$ and $B$ is injective and the one between $B$ and $C$ is surjective. I get it. But other than being a sort of curiosity, what is it really telling me?</p>
Jessica B
81,247
<p>This set up will tell you that, essentially, $C=B/A$. Think of the first isomorphism theorem.</p> <p>In some cases, you get that $B=A\oplus C$ (see <a href="http://en.wikipedia.org/wiki/Splitting_lemma" rel="noreferrer">Wikipedia: Splitting lemma</a>).</p>
3,682,661
<p>I needed help with Part (A) without using L'Hopital's rule, because it's getting too lengthy. Can someone help me obtain a solution with series, without using L'Hopital's rule?</p> <p>I'm trying something out with series </p> <blockquote> <p><a href="https://i.stack.imgur.com/FC6Xs.jpg" rel="nofollow noreferrer">Question Image here</a></p> </blockquote> <p><a href="https://i.stack.imgur.com/qjBQ0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qjBQ0.jpg" alt="enter image description here"></a></p>
Andrés Villa
672,548
<p>Note that <span class="math-container">$$\text{SU}(2):={\{A\in M(2,\mathbb{C}): \langle Ax,Ay\rangle=\langle x,y\rangle\; \forall x,y\in\mathbb{C}^2,\;\text{det}(A)=1}\}$$</span> where <span class="math-container">$\langle x,y\rangle:=x_1\bar{y_1}+x_2\bar{y_2}.$</span> Then <span class="math-container">$$\text{SU}(2):={\{A\in M(2,\mathbb{C}): AA^*=A^*A=I_2,\;\text{det}(A)=1}\}$$</span> where <span class="math-container">$A^*=\overline{(A^{t})}.$</span></p> <p>So for all <span class="math-container">$A=\begin{pmatrix}a &amp; b \\ c &amp; d\end{pmatrix}\in\text{SU}(2)$</span> it is true that <span class="math-container">$A^{-1}=A^{*}\;\text{and}\;\text{det}(A)=1$</span>: </p> <p><span class="math-container">$A^{-1}=\begin{pmatrix}d &amp; -b \\ -c &amp; a\end{pmatrix}$</span> and <span class="math-container">$A^{*}=\begin{pmatrix}\overline{a} &amp; \overline{c} \\ \overline{b} &amp; \overline{d}\end{pmatrix}$</span> hence <span class="math-container">$A^{-1}=A^*$</span> implies that <span class="math-container">$$d=\overline{a}\;\text{and}\; b=-\overline{c}.$$</span> Then <span class="math-container">$A=\begin{pmatrix}a &amp; -\overline{c} \\ c &amp; \overline{a}\end{pmatrix}.$</span></p>
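The resulting parametrization can be sanity-checked numerically (a sketch; the normalization $|a|^2+|c|^2=1$ follows from $\det A = a\bar a + c\bar c = 1$):

```python
import numpy as np

rng = np.random.default_rng(1)

# draw a random pair (a, c) with |a|^2 + |c|^2 = 1 ...
w = rng.standard_normal(4)
w /= np.linalg.norm(w)
a = complex(w[0], w[1])
c = complex(w[2], w[3])

# ... and build A = [[a, -conj(c)], [c, conj(a)]] as in the answer
A = np.array([[a, -np.conj(c)], [c, np.conj(a)]])

unitary_defect = np.linalg.norm(A @ A.conj().T - np.eye(2))
det_defect = abs(np.linalg.det(A) - 1)
```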
2,267,935
<p>There is a fibration $SO(n-1) \mapsto SO(n) \mapsto S^{n-1}$, from basically taking the first column of the matrix in $\mathbb{R}^n$. Is this fibration trivializable? </p>
Ted Shifrin
71,348
<p>$SO(n)$ is the orthonormal frame bundle of $S^{n-1}$. A trivialization of this bundle would in particular imply that $S^{n-1}$ is parallelizable (i.e., has trivial tangent bundle), so this happens precisely for $n=2$, $4$, and $8$.</p>
650,450
<p>Suppose that $a_n$ and $b_n$ are Cauchy sequences, and that $a_n &lt; b_n$ for all $n$. Prove that $\lim_{n \to \infty}a_n \le \lim_{n \to \infty}b_n$.</p> <p>Is it sufficient to say that we know both Cauchy sequences must converge to a limit, and since $a_n$ is always less than $b_n$, the limits will follow the desired inequality?</p> <p>Edit: since this is not true, what would be the appropriate strategy to prove it?</p>
splinter123
118,883
<p>It is not true, take $b_n=1/n=-a_n$, the strict inequality is not respected at the limit.</p>
3,418,526
<p>The problem is as follows:</p> <blockquote> <p>The figure from below shows the squared speed against distance attained of a car. It is known that for <span class="math-container">$t=0$</span> the car is at <span class="math-container">$x=0$</span>. Find the time which will take the car to reach <span class="math-container">$24\,m$</span>.</p> </blockquote> <p><a href="https://i.stack.imgur.com/rZpd9.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rZpd9.png" alt="Sketch of the problem"></a></p> <p>The given alternatives on my book are:</p> <p><span class="math-container">$\begin{array}{ll} 1.&amp;8.0\,s\\ 2.&amp;9.0\,s\\ 3.&amp;7.0\,s\\ 4.&amp;6.0\,s\\ 5.&amp;10.0\,s\\ \end{array}$</span></p> <p>What I attempted to do to solve this problem was to find the acceleration of the car given that from the graph it can be inferred that:</p> <p><span class="math-container">$\tan 45^{\circ}=\frac{v^{2}\left(\frac{m^{2}}{s^{2}}\right)}{m}=1\,\frac{m}{s^{2}}$</span></p> <p>Using this information I went to the position equation as follows:</p> <p><span class="math-container">$x(t)=x_{o}+v_{o}t+\frac{1}{2}at^2$</span></p> <p>Since it is mentioned that <span class="math-container">$x=0$</span> when <span class="math-container">$t=0$</span> this would make the equation of position into:</p> <p><span class="math-container">$0=x(0)=x_{o}+v_{o}(0)+\frac{1}{2}a(0)^2$</span></p> <p>Therefore,</p> <p><span class="math-container">$x_{o}=0$</span></p> <p><span class="math-container">$x(t)=v_{o}t+\frac{1}{2}at^2$</span></p> <p>From the graph I can spot that:</p> <p><span class="math-container">$v_{o}^2=1$</span></p> <p><span class="math-container">$v_{o}=1$</span></p> <p>Since <span class="math-container">$a=1$</span></p> <p><span class="math-container">$x(t)=t+\frac{1}{2}t^2$</span></p> <p>Then:</p> <p><span class="math-container">$t+\frac{1}{2}t^2=24$</span></p> <p><span class="math-container">$t^2+2t-48=0$</span></p> <p><span class="math-container">$t=\frac{-2\pm 
\sqrt{4+192}}{2}=\frac{-2\pm \sqrt{196}}{2}=\frac{-2\pm 14}{2}$</span></p> <p><span class="math-container">$t=6,-8$</span></p> <p>Therefore the time would be <span class="math-container">$6$</span> but apparently the answer listed on my book is <span class="math-container">$8$</span>. Could it be that I misunderstood something, or what happened? Is the answer given wrong? Can somebody help me here?</p>
AgentS
168,854
<p>You're wrongly assuming <span class="math-container">$\color{blue}{a=1}$</span>. </p> <p>From the kinematics equation <span class="math-container">$v^2 = 2\color{blue}{a}x+u^2$</span>, with constant acceleration,<br> when you graph <span class="math-container">$v^2$</span> against <span class="math-container">$x$</span>, you get a linear equation of form <span class="math-container">$y=2\color{blue}{a}x + y_0$</span>.<br> Here the <em>slope</em> represents <span class="math-container">$2\color{blue}{a}$</span>. </p> <hr> <p><span class="math-container">$$2\color{blue}{a} = \tan(45) \implies \color{blue}{a = \frac{1}{2}}$$</span> Then the position function would be <span class="math-container">$$x(t)=t+\frac{1}{2}(\color{blue}{\frac{1}{2}})t^2 = t + \frac{1}{4}t^2 $$</span></p> <p>Setting that equal to <span class="math-container">$24$</span> and solving gives <span class="math-container">$t=8$</span></p>
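The corrected position function can be verified directly (a quick numerical sketch, not part of the original answer):

```python
def x(t):
    # From the graph: the slope of v^2 vs x is tan(45°) = 1 = 2a, so a = 0.5,
    # and v^2 = 1 at x = 0, so the initial speed is u = 1.
    a, u = 0.5, 1.0
    return u * t + 0.5 * a * t ** 2

t_hit = 8.0
position = x(t_hit)   # 8 + 16 = 24 m, matching the book's answer
```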
3,819,658
<p>Calculate, <span class="math-container">$$\lim\limits_{(x,y)\to (0,0)} \dfrac{x^4}{(x^2+y^4)\sqrt{x^2+y^2}},$$</span> if it exists.</p> <p>My attempt:</p> <p>I have tried several paths, for instance: <span class="math-container">$x=0$</span>, <span class="math-container">$y=0$</span>, <span class="math-container">$y=x^m$</span>. In all cases I got that the limit is <span class="math-container">$0$</span>. But I couldn't figure out how to prove it. Any suggestions?</p>
Michael Rozenberg
190,319
<p>Use <span class="math-container">$$0\leq \dfrac{x^4}{(x^2+y^4)\sqrt{x^2+y^2}}\leq|x|.$$</span></p>
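The bound follows from $x^4 \le x^2(x^2+y^4)$ and $x^2 \le |x|\sqrt{x^2+y^2}$, and can be spot-checked at random points (a numerical sketch, not part of the original answer):

```python
import random

def f(x, y):
    # the expression whose limit is being computed
    return x ** 4 / ((x ** 2 + y ** 4) * (x ** 2 + y ** 2) ** 0.5)

random.seed(0)
violations = 0
for _ in range(10_000):
    x = random.uniform(-1, 1)
    y = random.uniform(-1, 1)
    # check the squeeze bound 0 <= f(x, y) <= |x| (skip the removable origin)
    if (x, y) != (0.0, 0.0) and f(x, y) > abs(x) + 1e-12:
        violations += 1
```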
4,330,991
<p>I do understand that if:</p> <p><span class="math-container">$a=b \Rightarrow a^2 = b^2 $</span></p> <p>But clearly, the graph representing these two equations won't be the same. So, (correct me if I'm wrong) this would suggest that if you square both sides of the equation, you essentially get a different set of answers (or graph). What confuses me is this question from my textbook:</p> <blockquote> <p>Find and graph all <span class="math-container">$z$</span> such that <span class="math-container">$|z-3| = |z+2i|$</span>.</p> </blockquote> <p>The solution goes as such:</p> <p><span class="math-container">$z=a+bi$</span></p> <p><span class="math-container">$\sqrt{(a-3)^2+b^2}$</span> = <span class="math-container">$\sqrt{a^2+(b+2)^2}$</span></p> <p>Squaring both sides then simplifying we end up with the equation:</p> <p><span class="math-container">$6a + 4b = 5$</span></p> <p>It proceeds to graph the equation on the complex plane.</p> <p>How can we claim that the graph of the equation <span class="math-container">$6a + 4b = 5$</span> represents the graph of <span class="math-container">$\sqrt{(a-3)^2+b^2}$</span> = <span class="math-container">$\sqrt{a^2+(b+2)^2}$</span> when squaring both sides of the equation was an intermediary step? Doesn't that create extraneous solutions which ends up being graphed? Doesn't this mean that our new graph represents more solutions then what the initial equation was intended for?</p> <p>I hope that made sense...</p> <p>It's quite frustrating looking back at some of these concepts you thought you understood and realizing that you didn't.</p> <p>Anyways, thanks in advance!</p>
Peter Szilas
408,605
<p><span class="math-container">$1)a=b;$</span></p> <p>Squaring yields</p> <p><span class="math-container">$2)a^2=b^2$</span>, or <span class="math-container">$a^2-b^2=0$</span>, and factoring</p> <p><span class="math-container">$(a-b)(a+b) =0$</span>, i.e.</p> <p><span class="math-container">$a-b=0$</span>, this is equation <span class="math-container">$1)$</span>, or</p> <p><span class="math-container">$a+b=0$</span>, or <span class="math-container">$a=-b$</span>, an additional solution.</p> <p>If you know that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are of the same sign,</p> <p>you can discard <span class="math-container">$a=-b$</span>.</p>
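In the textbook locus problem, both sides $|z-3|$ and $|z+2i|$ are nonnegative, so squaring introduces no extraneous solutions. A quick numerical check (sketch, not part of the original answer) that points on the line $6a+4b=5$ satisfy the unsquared equation:

```python
# Sample points on the line 6a + 4b = 5 and compare |z - 3| with |z + 2i|.
max_gap = 0.0
for k in range(-50, 51):
    a = k / 10
    b = (5 - 6 * a) / 4          # solve the line equation for b
    z = complex(a, b)
    max_gap = max(max_gap, abs(abs(z - 3) - abs(z + 2j)))
```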
2,919,683
<p>I am using the Monte Carlo method to evaluate the integral $$\int_0^\infty \frac{x^4\sin(x)}{e^{x/5}} \ dx $$ I transformed variables using $u=\frac{1}{1+x}$ so I have the following finite integral: $$\int_0^1 \frac{(1-u)^4 \sin\frac{1-u}{u}}{u^6e^{\frac{1-u}{5u}}} \ du $$ I wrote the following code in R:</p> <p>set.seed (666)</p> <p>n &lt;- 1e6</p> <p>f &lt;- function(x) ( (1-x)^4 * sin ((1-x)/x) ) / ( exp((1-x)/(5*x) ) * x^6 )</p> <p>x &lt;- runif(n)</p> <p>I &lt;- sum (f(x))/n</p> <p>But I get the wrong answer, but if I integrate f(x) using the R built-in function and not Monte Carlo I get the right answer.</p>
Henno Brandsma
4,280
<p>Suppose $f$ is quotient. (So for all $U \subseteq Y$, $f^{-1}[U]$ open iff $U$ is open)</p> <p>Let's check it satisfies: $f$ is continuous and maps open saturated sets to open sets.</p> <p>$f$ continuous is clear: if $U \subseteq Y$ is open, so is $f^{-1}[U]$, by the right to left implication in the definition of quotient map.</p> <p>Suppose that $S$ is saturated and open. $S$ saturated means that $S = f^{-1}[C]$ for some $C \subseteq Y$. So now we know $S = f^{-1}[C]$ is open and the other implication of the definition of quotient map gives us that $C$ is open and as $f[S] = f[f^{-1}[C]] = C$ (last equality by surjectivity of $f$) we know that $f[S]$ is indeed open, as required.</p> <p>Suppose now that $f$ is continuous and maps saturated open sets to open sets.</p> <p>To see that $f$ is quotient we need to show $U \subseteq Y$ open in $Y$ iff $f^{-1}[U]$ open in $X$. Now, if $U$ is open in $Y$, $f^{-1}[U]$ is open in $X$ by continuity of $f$. And if $f^{-1}[U]$ is open in $X$ we note that $f^{-1}[U]$ is saturated (and open) so by assumption $f[f^{-1}[U]] = U$ is open. This shows that $f$ is quotient.</p> <p>The saturated closed case is exactly similar, using the alternative definition of quotient maps in terms of closed sets.</p>
2,250,469
<p>Let n $\geq$ 4 be an integer. Determine the number of permutations of $\{1, 2, . . . , n\}$, in which $1$ and $2$ are next to each other, with $1$ to the left of $2$.<br> I can't make sense of this problem statement. The way I see it, if $n$ is an integer, then the pair $1,2$ could be formed by any pair with the form $\overline{...x_{i-2}x_{i-1}x_i1}, \overline{2y_{1}y_2y_3...}$ or a number with the form $\overline{...x_{i-2}x_{i-1}x_i12x_{i+1}x_{i+2}x_{i+3}..}$ with $x$'s and $y$'s are some mysterious digits. Can anyone explain this problem?</p>
John
7,163
<p>Some hints:</p> <ol> <li>How many places can you put $12$ in the sequence?</li> <li>How many ways can you arrange the other $n-2$ numbers in the spaces that remain?</li> </ol>
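Following the hints: there are $n-1$ places for the block $12$ and $(n-2)!$ arrangements of the rest, so the count is $(n-1)\cdot(n-2)! = (n-1)!$. A brute-force check for small $n$ (a sketch, not part of the original answer):

```python
from itertools import permutations
from math import factorial

def count_adjacent_12(n):
    # count permutations of 1..n where 1 sits immediately to the left of 2
    total = 0
    for p in permutations(range(1, n + 1)):
        for i in range(n - 1):
            if p[i] == 1 and p[i + 1] == 2:
                total += 1
                break
    return total

checks = [count_adjacent_12(n) == factorial(n - 1) for n in (4, 5, 6)]
```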
1,531,755
<p>Let $a\in [0,1)$. I want to show that $$\lim_{n\to \infty}{na^n}=0$$</p> <p>My try : $$na^n={n\over e^{-(\log{a})n}}$$ and the limit is $${+\infty\over +\infty}$$ Hence by l'Hopital's rule we have that $$\lim_{n\to \infty}{1\over -(\log{a})e^{-(\log{a})n}}={1\over -\infty}=0$$</p> <p>Is there any other way to compute this limit ? thanks!</p>
Jimmy R.
128,037
<p>(Similar) By L'Hopital's Rule $$\lim_{n \to +\infty}na^n=\lim_{n \to +\infty}\frac{n}{\frac{1}{a^n}}\overset{\frac{+\infty}{+\infty}}=\lim_{n \to +\infty}\frac{1}{{-\frac{1}{a^n}\ln(a)}}=-\frac{1}{\ln(a)}\lim_{n \to +\infty} a^n =0$$</p>
1,531,755
<p>Let $a\in [0,1)$. I want to show that $$\lim_{n\to \infty}{na^n}=0$$</p> <p>My try : $$na^n={n\over e^{-(\log{a})n}}$$ and the limit is $${+\infty\over +\infty}$$ Hence by l'Hopital's rule we have that $$\lim_{n\to \infty}{1\over -(\log{a})e^{-(\log{a})n}}={1\over -\infty}=0$$</p> <p>Is there any other way to compute this limit ? thanks!</p>
paw88789
147,810
<p>Hint:</p> <p>Another approach would be to look at the ratio of term $n+1$ to term $n$.</p> <p>This gives $\frac{(n+1)a^{n+1}}{na^n}=\left(1+\frac{1}{n}\right)a$</p> <p>Since $a&lt;1$, this factor becomes strictly less than $1$ and then stays bounded away from $1$. Thus eventually the terms of the sequence start decreasing by these factors.</p>
1,531,755
<p>Let $a\in [0,1)$. I want to show that $$\lim_{n\to \infty}{na^n}=0$$</p> <p>My try : $$na^n={n\over e^{-(\log{a})n}}$$ and the limit is $${+\infty\over +\infty}$$ Hence by l'Hopital's rule we have that $$\lim_{n\to \infty}{1\over -(\log{a})e^{-(\log{a})n}}={1\over -\infty}=0$$</p> <p>Is there any other way to compute this limit ? thanks!</p>
lcn
290,103
<p>First, $a$ should be in $(0,1)$, otherwise L'Hopital's Rule won't work until the following step:</p> <p>$\lim \limits_{n \to \infty} \frac{1}{-(\ln{a})e^{-n\ln{a}}}$</p> <p>Second, we can use the Squeeze Theorem to prove this limit.</p> <p>Proof :</p> <p>Let $a = \frac{1}{1+b}$, where $b&gt;0$</p> <p>$\because na^n &gt; 0$</p> <p>$na^n=\frac{n}{(1+b)^n}=\frac{n}{1+nb+\frac{n(n-1)}{2}b^2+...}$</p> <p>$&lt;\frac{n}{\frac{n(n-1)}{2}b^2}=\frac{2}{(n-1)b^2}$</p> <p>$\because \lim \limits_{n \to \infty} 0 = 0$</p> <p>$\lim \limits_{n \to \infty} \frac{2}{(n-1)b^2} = 0$</p> <p>By the Squeeze Theorem, $\lim \limits_{n \to \infty} na^n =0$</p>
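Note the binomial step actually gives $na^n<\frac{2}{(n-1)b^2}$ (the $b^2$ must be carried along), which still tends to $0$. A numerical confirmation for one choice of $a$ (a sketch, not part of the original answer):

```python
a = 0.9            # any fixed a in (0, 1); 0.9 decays slowly but visibly
b = 1 / a - 1      # so that a = 1/(1+b)

# the bound n*a^n < 2 / ((n-1) * b^2) for every n >= 2
bound_holds = all(n * a ** n < 2 / ((n - 1) * b ** 2) for n in range(2, 2000))

tail = 1999 * a ** 1999   # n*a^n is already vanishingly small here
```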
2,262,371
<p>If $a,b,c$ are positive real numbers, prove that $$\frac{2}{a+b}+\frac{2}{b+c}+ \frac{2}{c+a}≥ \frac{9}{a+b+c}$$</p>
Dr. Sonnhard Graubner
175,066
<p>Your inequality is equivalent to $$2\,{a}^{3}-{a}^{2}b-{a}^{2}c-a{b}^{2}-a{c}^{2}+2\,{b}^{3}-{b}^{2}c-b{c}^{2}+2\,{c}^{3}\geq 0$$ after clearing the denominators, and this is equivalent to $$(a-b)(a^2-b^2)+(a-c)(a^2-c^2)+(b-c)(b^2-c^2)\geq 0$$ which is true. The equal sign holds if $$a=b=c$$</p>
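A randomized spot-check of the original inequality and of the sum-of-same-sign-products identity (a sketch, not part of the proof):

```python
import random

random.seed(42)

def lhs_minus_rhs(a, b, c):
    return 2/(a+b) + 2/(b+c) + 2/(c+a) - 9/(a+b+c)

def sos(a, b, c):
    # (a-b)(a^2-b^2) + (a-c)(a^2-c^2) + (b-c)(b^2-c^2); each term >= 0
    return (a-b)*(a*a-b*b) + (a-c)*(a*a-c*c) + (b-c)*(b*b-c*c)

all_nonneg = True
for _ in range(10_000):
    a, b, c = (random.uniform(0.1, 10) for _ in range(3))
    if lhs_minus_rhs(a, b, c) < -1e-12 or sos(a, b, c) < -1e-12:
        all_nonneg = False

equality_case = lhs_minus_rhs(3.0, 3.0, 3.0)   # 0 when a = b = c
```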
128,666
<p>If we start with a number like 1234 and produce the following sum 1234 + 123 + 12 + 1 = 1370.</p> <p>If we are given the 1370 can I retrieve the 1234? A similar question was migrated over to the Math.SE because the OP did not in any way relate it to MMa. The math given over there is in no way too tough for anyone over here but the solution will still require MMA for anything more than a trivial problem. I wish to do it using MMa. This is what I did:</p> <p>Suppose we have the non trivial problem of reconstructing the original number from 308460277.</p> <p>Basically, I just used the structure of the FindInstance command as a template and used MMa commands to fill in the various fields.</p> <pre><code>n = 9; (*digit length of the number*) m = 308460277; var = Table[Subscript[a, k], {k, n, 1, -1}]; f = Sum[1/9 (-1 + 10^k) Subscript[a, k], {k, 1, n}] == m; s = FindInstance[Append[{f, 10 &gt; Subscript[a, 1] &gt; 0}, Table[10 &gt; Subscript[a, k] &gt;= 0, {k, 2, n}]] // Flatten, var,Integers]; var /. s (*{{2, 7, 7, 6, 1, 4, 2, 5, 3}}*) </code></pre> <p>So the starting number was 277614253</p> <p>To check:</p> <p>277614253 + 27761425 + 2776142 + 277614 + 27761 + 2776 + 277 + 27 + 2 = 308460277</p> <p>This is pretty fast and can do 98766665555567902460 instantaneously.</p> <p>What is the right way to do this?</p> <p>Addenda: My solution gags on m = 137174210013717421001371742085. I was worried about FindInstance being able to do larger ones, that is the reason I posted for a better way.</p>
george2079
2,079
<p>This is a direct implementation of this answer <a href="https://math.stackexchange.com/a/1967330/92921">https://math.stackexchange.com/a/1967330/92921</a></p> <pre><code>m = 308460277; Reap[NestWhile[{#[[1]] - #[[2]] Sow@Floor[Divide @@ #]], Floor[#[[2]]/10]} &amp;, {m, FromDigits@ConstantArray[1, Ceiling@Log[10, m]]} , #[[2]] &gt; 0 &amp;]][[2, 1]] // FromDigits </code></pre> <blockquote> <p>277614253</p> </blockquote> <p>note this gives the 'closest' value for any input <code>m</code>. You need to check that its correct:</p> <pre><code> Total@NestWhileList[ Floor[#/10] &amp;, 277614253, # &gt; 0 &amp;]==m </code></pre> <blockquote> <p>True</p> </blockquote> <p>I'm not sure of the etiquette re: reposting here vs giving code answers on the math site..</p>
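For readers without Mathematica, the same greedy recovery (peel off one digit per repunit) can be sketched in Python; this mirrors the `NestWhile` loop above rather than adding anything new:

```python
def reconstruct(m):
    """Recover n from m = n + n//10 + n//100 + ...

    If n has digits d1 d2 ... dk, then m = sum of d_i * repunit(k - i + 1),
    so dividing by successively shorter repunits peels off the digits.
    Assumes m really does arise from such a sum.
    """
    rep = int("1" * len(str(m)))   # analogue of Ceiling@Log[10, m] ones
    digits = []
    while rep > 0:
        d, m = divmod(m, rep)
        digits.append(str(d))
        rep //= 10
    return int("".join(digits))

def forward(n):
    # the original sum n + n//10 + n//100 + ... , for checking
    total = 0
    while n > 0:
        total += n
        n //= 10
    return total
```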
3,832,684
<p>Does the following inequality hold? <span class="math-container">$$\sqrt {x-z} \geq \sqrt x -\sqrt{z} \ , $$</span> for all <span class="math-container">$x \geq z \geq 0$</span>.</p> <p>My justification <span class="math-container">\begin{equation} z \leq x \Rightarrow \\ \sqrt z \leq \sqrt {x} \Rightarrow \\ 2\sqrt z \sqrt z \leq 2\sqrt z\sqrt {x} \Rightarrow \\ 2 z \leq 2\sqrt z\sqrt {x} \Rightarrow \\ z - 2\sqrt z\sqrt {x} + x \leq x - z \Rightarrow \\ (\sqrt x -\sqrt z )^2 \leq x - z \Rightarrow \\ \sqrt x -\sqrt z \leq \sqrt {x - z} \end{equation}</span></p>
user
505,767
<p>More simply we have that</p> <p><span class="math-container">$$x-z=\left(\sqrt x -\sqrt{z}\right)\left(\sqrt x +\sqrt{z}\right)\ge \left(\sqrt x -\sqrt{z}\right)^2$$</span></p>
2,174,340
<p>Given the function $$F(X,Y,Z) = \alpha^TXYZ$$ in which $X, Y, Z $ are matrices of size $n \times n$ and $\alpha$ is a vector of size $n \times 1$, how to compute the derivative of $F$ with respect to $Y$?</p> <p>Actually I found some related questions but did not help.</p> <p>Edit: if the function is of the form: $F(X,Y,Z) = \alpha^TXYZ\beta$, then based on the Matrix Cookbook, derivative is : $f' = (\alpha^T X)^T (Z\beta)^T$, but if there is no $\beta$, then the dimensions do not match.</p> <p>Thank you,</p>
greg
357,854
<p>Let ${\mathcal E}$ be the 4th-order tensor with components $$\eqalign{ {\mathcal E}_{ijkl} &amp;= \delta_{ik}\,\delta_{jl} \cr }$$ Using this tensor, we can calculate the differential and gradient of the function as $$\eqalign{ f &amp;= a^TXYZ \cr \cr df &amp;= a^T(X\,dY\,Z) \cr &amp;= a^T(X\,{\mathcal E}\,Z^T):dY \cr \cr \frac{\partial f}{\partial Y}&amp;= a^TX\,{\mathcal E}\,Z^T \cr }$$ As expected, the gradient of a vector wrt a matrix is a 3rd-order tensor.</p> <p>If you are unable to work with tensors, you can vectorize the differential to obtain $$\eqalign{ {\rm vec}(df) &amp;= {\rm vec}(a^TX\,dY\,Z) \cr df &amp;= (Z^T\otimes a^TX)\,{\rm vec}(dY) \cr &amp;= (Z^T\otimes a^TX)\,dy \cr \cr \frac{\partial f}{\partial y}&amp;= Z^T\otimes a^TX \cr }$$ which is an ordinary matrix quantity.</p> <p>This is equivalent to the previous result, if you swap the order of the factors and replace the kronecker product symbol with the ${\mathcal E}$ tensor.</p>
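The vectorized formula can be verified numerically; since $f$ is linear in $Y$, the exact increment $f(Y+dY)-f(Y)$ must equal $(Z^T\otimes a^TX)\,\mathrm{vec}(dY)$ (a verification sketch, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, 1))
X, Y, Z, dY = (rng.standard_normal((n, n)) for _ in range(4))

def f(Y):
    return (a.T @ X @ Y @ Z).ravel()   # the row vector a^T X Y Z, flattened

increment = f(Y + dY) - f(Y)           # exact, because f is linear in Y
J = np.kron(Z.T, a.T @ X)              # claimed gradient, shape (n, n*n)
predicted = J @ dY.ravel(order="F")    # vec() stacks columns (Fortran order)

gap = np.linalg.norm(increment - predicted)
```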
114,147
<p>I have two lists, let's say</p> <pre><code>listF = {{7, 2}, {2, 6}, {8, 1}, {1, 7}, {11, 8}, {6, 11}}; </code></pre> <p>and </p> <pre><code>newD = {{{2, 7}, {7, 9}, {9, 2}}, {{7, 2}, {2, 6}, {6, 7}}, {{7, 2}, {2, 6}, {6, 7}}, {{11, 6}, {6, 2}, {2, 11}}, {{8, 1}, {1, 7}, {7, 8}}, {{11, 1}, {1, 8}, {8, 11}}, {{1, 5}, {5, 7}, {7, 1}}, {{8, 1}, {1, 7}, {7, 8}}, {{11, 1}, {1, 8}, {8, 11}}, {{11, 8}, {8, 6}, {6, 11}}, {{11, 6}, {6, 2}, {2, 11}}, {{11, 8}, {8, 6}, {6, 11}}}; </code></pre> <p>Question: How can I delete the parts of <code>listF</code> from <code>newD</code>, regardless of the order of elements in the sub-lists of <code>listF</code>? For example, I need to delete parts from <code>newD</code> that are in the form of either <code>{2,7}</code> or <code>{7,2}</code>. I would prefer to apply the <code>Alternatives</code> command, but any solution would be appreciated.</p>
Edmund
19,542
<p>You may use <code>OrderlessPatternSequence</code> and <code>DeleteCases</code>.</p> <p>Build a set of <code>Alternatives</code> with <code>OrderlessPatternSequence</code> and <code>Map</code> <code>DeleteCases</code> over the sublists.</p> <pre><code>DeleteCases[Alternatives @@ ({OrderlessPatternSequence @@ #} &amp; /@ listF)] /@ newD (* { {{7, 9}, {9, 2}}, {{6, 7}}, {{6, 7}}, {{2, 11}}, {{7, 8}}, {{11, 1}}, {{1, 5}, {5, 7}}, {{7, 8}}, {{11, 1}}, {{8, 6}}, {{2, 11}}, {{8, 6}} } *) </code></pre> <p>Hope this helps.</p>
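A Python analogue of the same idea, for readers without Mathematica (order-insensitive membership via frozensets; a sketch, not part of the original answer):

```python
listF = [(7, 2), (2, 6), (8, 1), (1, 7), (11, 8), (6, 11)]
newD = [
    [(2, 7), (7, 9), (9, 2)], [(7, 2), (2, 6), (6, 7)],
    [(7, 2), (2, 6), (6, 7)], [(11, 6), (6, 2), (2, 11)],
    [(8, 1), (1, 7), (7, 8)], [(11, 1), (1, 8), (8, 11)],
    [(1, 5), (5, 7), (7, 1)], [(8, 1), (1, 7), (7, 8)],
    [(11, 1), (1, 8), (8, 11)], [(11, 8), (8, 6), (6, 11)],
    [(11, 6), (6, 2), (2, 11)], [(11, 8), (8, 6), (6, 11)],
]

# frozenset membership ignores the order inside each pair
# (caveat: a repeated-entry pair like (2, 2) would collapse to {2}; fine here)
banned = {frozenset(p) for p in listF}
result = [[p for p in sub if frozenset(p) not in banned] for sub in newD]
```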
634,132
<p>Let $G$ be a cyclic group with $N$ elements. Then it follows that</p> <p>$$N=\sum_{d|N} \sum_{g\in G,\text{ord}(g)=d} 1.$$</p> <p>I simply can not understand this equality. I know that for every divisor $d|N$ there is a unique subgroup in $G$ of order $d$ with $\phi(d)$ elements. But how come that when you add all these together you end up with the number of elements in the group $G$. </p>
angryavian
43,949
<p>Hint: by definition, a function takes in some inputs, and produces a <em>unique</em> output.</p>
587,077
<p>Given any prime $p$, prove that $(p-1)! \equiv -1 \pmod p$.</p> <p>How can this be proved?</p>
user66733
66,733
<p>OK. The first thing you need to know is that if $x^2 \equiv 1 \pmod{p}$ where $p$ is a prime number then you have $x \equiv 1 \pmod{p}$ or $x\equiv -1 \pmod{p}$.</p> <p>I'm going to state it as a Lemma and prove it:</p> <p>Lemma: If $p$ is a prime number, the only solutions to the congruence $x^2 \equiv 1 \pmod{p}$ are $x \equiv 1 \pmod{p}$ and $x \equiv -1 \pmod{p}$.</p> <p>Proof: $$x^2 \equiv 1 \pmod{p} \implies p \mid x^2-1 \implies p \mid (x-1)(x+1) \implies p \mid x-1 \text{ or } p \mid x+1 \implies x \equiv 1 \pmod{p} \text{ or } x \equiv -1 \pmod{p}$$</p> <p>The last implication is correct because $p$ is a prime number.</p> <p>This Lemma tells us that the only elements in a congruence modulo $p$ that are self-invertible are $1$ and $p-1 \equiv -1 \pmod{p}$.</p> <p>Now, suppose that you multiply all non-zero elements $\pmod {p}$ as follows:</p> <p>$A \equiv 1 \cdot 2 \cdot 3 \cdots (p-2)(p-1) \pmod{p}$</p> <p>Since $p$ is a prime number, all of these are non-zero and are relatively prime to $p$. So, they have inverses modulo $p$. Now we're going to use the Lemma we just proved.</p> <p>The case $p=2$ is obviously correct because $(2-1)!=1 \equiv -1 \pmod{2}$, so you can assume that $p$ is an odd prime number. Since $p$ is odd, the number of terms in $N=\{ 2, 3, \cdots, p-2 \}$ is even.</p> <p>Now, just match each number in the set $N$ with its inverse modulo $p$. They all get canceled in pairs since there is an even number of them and none of them are self-invertible. At the end everything in $N$ gets canceled and becomes $1$, and you're left with only $1$ and $p-1$, whose product is obviously equal to $p-1$, which is the same as $-1$ modulo $p$. Just like below:</p> <p>$$1 \cdot \underbrace{ (2 \cdot 3 \cdots (p-3) \cdot (p-2))}_\text{Numbers in here get canceled in pairs} \cdot (p-1) \equiv 1 \cdot (p-1) \equiv -1 \pmod{p}$$</p>
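The pairing argument, and Wilson's theorem itself, is easy to confirm computationally for small primes (a sketch, not part of the original answer):

```python
from math import factorial

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Wilson's theorem for every prime p < 100: (p-1)! ≡ -1 ≡ p-1 (mod p)
wilson_ok = all(factorial(p - 1) % p == p - 1
                for p in range(2, 100) if is_prime(p))

# For composite n > 4 one instead gets (n-1)! ≡ 0 (mod n),
# so the congruence characterizes primes (n = 4 is the lone exception).
composite_ok = all(factorial(n - 1) % n == 0
                   for n in range(5, 100) if not is_prime(n))
```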
1,063,352
<p>$A$ and $B$ are sets and $\mathcal{F}$ is a family of sets. I'm trying to prove that</p> <p>$\bigcap_{A \in \mathcal{F}}(B \cup A) \subseteq B \cup (\cap \mathcal{F})$</p> <p>I start with "Let $x$ be arbitrary and let $x \in \bigcap_{A \in \mathcal{F}}(B \cup A)$, which means that $\forall C \in \mathcal{F}(x \in B \cup C)$. So, I need some set to plug in for $C$.</p> <p>Looking at the goal, I need to prove that $x \in B \cup (\cap \mathcal{F})$, which is $x \in B \lor \forall C \in \mathcal{F}(x \in C)$. But I'm stuck here too because I need to break up the givens into cases in order to break up the goals into cases. I think.</p>
Fin8ish
166,485
<p><strong>hint:</strong> </p> <p>$$\displaystyle\sum_{n=0}^\infty \dfrac {5^n}{25^n + 1} = \displaystyle\sum_{n=0}^\infty \dfrac {1}{5^n + 5^{-n}}=\displaystyle\sum_{n=0}^\infty \dfrac {1}{e^{\ln(5)n} + e^{-\ln(5)n}}=\displaystyle\sum_{n=0}^\infty \dfrac {1}{2\cosh(\ln(5)n)} $$</p> <p>If you could find a formula for $$\displaystyle\sum_{n=0}^\infty \dfrac {1}{\cosh(xn)} $$ you have solved the problem. And <strong><em>maybe</em></strong> it can be computed by residue theory from complex analysis.</p>
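The chain of rewritings in the hint can be sanity-checked numerically (my own quick sketch, not from the answer; partial sums of all three forms should agree):

```python
import math

N = 50  # the terms decay like 5^{-n}, so 50 terms is far past convergence
lhs = sum(5**n / (25**n + 1) for n in range(N))
mid = sum(1 / (5**n + 5**(-n)) for n in range(N))
rhs = sum(1 / (2 * math.cosh(n * math.log(5))) for n in range(N))
```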
4,549,070
<p>How can I prove this without using Stirling's formula?</p> <p><span class="math-container">$${n\choose an} \le 2^{nH(a)}$$</span> <span class="math-container">$$H(a) := -a\log_2a -(1-a)\log_2(1-a)$$</span></p>
adrien_vdb
1,013,037
<p>Assuming <span class="math-container">$a\in [0,1]$</span>, you can prove it through this simple chain of (in)equalities</p> <p><span class="math-container">$$1 = 1^n = (a+(1-a))^n = \sum_{k=0}^n \binom{n}{k}a^k(1-a)^{n-k}\geq \binom{n}{an}a^{an}(1-a)^{n-an} = \binom{n}{an}2^{-nH(a)}.$$</span></p> <p>First step is to use the Binomial Theorem, followed by lower-bounding the sum by a carefully chosen term. The rest is just about rewriting it in terms of <span class="math-container">$H$</span>. After re-arranging, you get <span class="math-container">$\binom{n}{an}\leq 2^{nH(a)}$</span> as expected.</p>
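A small numerical check of the bound (my addition; `H` mirrors the binary entropy defined in the question, and the tiny multiplicative slack only guards against floating-point rounding):

```python
import math

def H(a):  # binary entropy, in bits
    return -a * math.log2(a) - (1 - a) * math.log2(1 - a)

checks = []
for n in [10, 50, 200]:
    for k in range(1, n):       # a = k/n, so that an = k is an integer
        a = k / n
        checks.append(math.comb(n, k) <= 2 ** (n * H(a)) * (1 + 1e-9))
```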
159,529
<p>The category of representations $\text{Rep}(D(G))$ of the quantum double of a finite group is well-known to be a modular tensor category. Can these modular tensor categories also be obtained as representation categories of vertex operator algebras?</p>
S. Carnahan
121
<p>A lot has happened in the last four years, and we now have lots of positive results.</p> <p>The current state of knowledge is given in <a href="https://arxiv.org/abs/1804.11145" rel="noreferrer">Evans-Gannon, "Reconstruction and Local Extensions for Twisted Group Doubles, and Permutation Orbifolds"</a>. In particular, if $G$ is a finite solvable group, then $D(G)$ (and more generally, any twist $D^\omega(G)$) is the representation category of some vertex operator algebra (in particular, the fixed points of a $G$-action on some holomorphic vertex operator algebra).</p> <p>For non-solvable groups, the result you want would follow from the conjectured regularity of fixed points (i.e., a suitable generalization of <a href="https://arxiv.org/abs/1603.05645" rel="noreferrer">C-Miyamoto</a>).</p> <p>Oddly enough, it turns out that permutation orbifolds can have non-trivial twists. This is discussed in the Evans-Gannon paper, and earlier in Johnson-Freyd's <a href="https://arxiv.org/abs/1707.08388" rel="noreferrer">"The Moonshine Anomaly"</a>.</p>
1,093,717
<p>Let $\xi_1, \xi_2, \ldots \xi_n, \ldots$ be independent random variables having the exponential distribution $p_{\xi_i} (x) = \lambda e^{- \lambda x}, \; x \ge 0$ and $p_{\xi_i} (x) = 0, \; x &lt; 0$. Let $\nu = \min \{n \ge 1 : \xi_n &gt; 1\}$. We need to find the distribution function of the random variable $g = \xi_1 + \xi_2 + \ldots + \xi_{\nu}$, that is, the probability $\mathbb{P}(g &lt; x) = \mathbb{P} (\xi_1 + \xi_2 + \ldots + \xi_{\nu} &lt; x)$.</p> <p>I made the following calculations:</p> <p>$\mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_{\nu} &lt; x) = \sum_{k = 1}^{\infty} \mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_k &lt; x, \nu = k) = \sum_{k = 1}^{\infty} \mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_k &lt; x, \xi_1 \le 1, \ldots \xi_{k-1} \le 1, \xi_k &gt; 1)$.</p> <p>The probability of the sum can be represented as an integral:</p> <p>$\mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_k &lt; x, \xi_1 \le 1, \ldots \xi_{k-1} \le 1, \xi_k &gt; 1) = \int\limits_D \lambda^k e^{- \lambda u_1} e^{- \lambda u_2} \ldots e^{- \lambda u_k} {d}u_1 \ldots {d}u_k$, where $D = \{ u_1 + \ldots u_k &lt; x, u_1 \le 1, \ldots u_{k-1} \le 1, u_k &gt; 1\}$. I'm afraid that this integral cannot be calculated.</p> <p>Is it somehow easier to find the distribution function $\mathbb{P} (g &lt; x)$?</p>
ki3i
202,257
<p>The fact that this involves "convolutions" and sums of i.i.d. random variables makes me think of trying to deduce the distribution from Moment generating functions. Using the independence of the $\xi_i$, we have (for $t&lt;\lambda$),</p> <p>$$ \begin{eqnarray*} \mathbb{E}\left[e^{t\sum\limits_{n=1}^{\nu}\xi_{n}}\right]&amp;{}={}&amp;\sum\limits_{r=1}^{\infty}\int{\textbf{1}}_{\left\{\xi_{1}\leq 1,\,\ldots\,,\,\xi_{r-1}\leq 1\,,\,\xi_{r}&gt;1 \right\}}e^{t\left(\sum\limits_{n=1}^{r}z_{n}\right)}f_{z_1}\ldots f_{z_r}dz_1\ldots dz_r\newline &amp;{}={}&amp;\sum\limits_{r=1}^{\infty}\left(\dfrac{\lambda}{\lambda{}-{}t}\right)^r e^{t-\lambda}\left(1{}-{}e^{t-\lambda}\right)^{r-1}\newline &amp;{}={}&amp;e^{t-\lambda}\dfrac{\lambda}{\lambda{}-{}t}\sum\limits_{r=1}^{\infty}\left(\dfrac{\lambda}{\lambda{}-{}t}\right)^{r-1} \left(1{}-{}e^{t-\lambda}\right)^{r-1}\newline &amp;{}={}&amp;\left(\dfrac{e^{t-\lambda}\dfrac{\lambda}{\lambda{}-{}t}}{1{}-{}\left(\dfrac{\lambda}{\lambda{}-{}t}\right) \left(1{}-{}e^{t-\lambda}\right)}\right)\,\newline &amp;{}={}&amp;\dfrac{1}{1{}-{}e^{\lambda{}-{}t}t/\lambda}\,. \end{eqnarray*} $$</p> <p>This looks like $\sum\limits_{n=1}^{\nu}\xi_{n}$ is trying to be exponentially distributed with "rate" $\lambda/e^{\lambda{}-{}t}$, but I do not know this functional form by heart. Any ideas?</p> <p><strong>Edit:</strong> Not knowing the explicit inverse of the final generating function form above, I thought of examining each term in the equivalent series representation: perhaps the individual terms have nicer inverses. If this is the case, then a series representation might be sufficient. 
If we perform the substitution $u{}={}t/\lambda$, so that $u&lt;1$, note that the moment generating function may be re-written as </p> <p>$$ \sum\limits_{r=1}^{\infty}\left(\dfrac{1}{1-u}\right)^r\left(1-e^{\lambda\left(u-1\right)}\right)^{r-1}e^{\lambda\left(u-1\right)}{}={}\sum\limits_{r=1}^{\infty}\sum\limits_{k=0}^{r-1}{r-1\choose k}\left(\dfrac{1}{1-u}\right)^r(-)^ke^{(k+1)\lambda(u-1)}\,. $$</p> <p>A series representation may be obtained, therefore, if we can invert the "atomic" moment generating functions</p> <p>$$ \left(\dfrac{1}{1-u}\right)^r e^{(k+1)\lambda(u-1)}\,. $$</p> <p>Heuristically, we wish to solve an integral of the kind</p> <p>$$ \int\limits_{-\infty}^{1}\left(\dfrac{1}{1-u}\right)^r e^{(k+1)\lambda(u-1)-xu}\,\,\mbox{d}u\,. $$</p> <p>For $x&lt;\lambda(k+1)$, the integral's solution has the form $$ (-\lambda)^{r}e^{-x}\left(\dfrac{x^{r-1}}{(r-1)!}\log(\lambda(k+1)-x){}+{}f_{r-1}(k)\right) $$</p> <p>where $f_{r-1}(k)$ is a rational function involving terms of, at most, degree "$r-1$" in $k$. </p> <p>(Note: the explicit solution can be obtained by integrating the expression $\dfrac{(-\lambda)^re^{-x}}{\lambda(k+1)-x}$, "$r$"-times, w.r.t "$k$". A justification of this follows by <a href="http://en.wikipedia.org/wiki/Differentiation_under_the_integral_sign" rel="nofollow">differentiating</a> the integral we wish to solve. Note, also, that our "$u$" substitution above was merely to make this presentation look nicer and puts this solution "off" by a factor of $\lambda$: the actual solution follows analogous operations using the $t$ variable, instead). </p>
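The geometric summation in the first display can be verified numerically (this check is mine, not part of the answer). Note the series converges only where the summation ratio $\frac{\lambda}{\lambda-t}\left(1-e^{t-\lambda}\right)$ is below $1$, i.e. for $t$ small enough relative to $\lambda$:

```python
import math

lam, t = 1.5, 0.3           # a pair well inside the region of convergence
ratio = lam / (lam - t)
w = math.exp(t - lam)       # the weight of the terminating factor xi_r > 1
series = sum(ratio**r * w * (1 - w)**(r - 1) for r in range(1, 400))
closed = 1 / (1 - math.exp(lam - t) * t / lam)
```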
1,093,717
<p>Let $\xi_1, \xi_2, \ldots \xi_n, \ldots$ - independent random variables having exponential distribution $p_{\xi_i} (x) = \lambda e^{- \lambda x}, \; x \ge 0$ and $p_{\xi_i} (x) = 0, \; x &lt; 0$. Let $\nu = \min \{n \ge 1 : \xi_n &gt; 1\}$. Need to find the distribution function of a random variable $g = \xi_1 + \xi_2 + \ldots \xi_{\nu}$ that is, find the probability $\mathbb{P}(g &lt; x) = \mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_{\nu} &lt; x)$.</p> <p>I made the following calculations:</p> <p>$\mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_{\nu} &lt; x) = \sum_{k = 1}^{\infty} \mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_k &lt; x, \nu = k) = \sum_{k = 1}^{\infty} \mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_k &lt; x, \xi_1 \le 1, \ldots \xi_{k-1} \le 1, \xi_k &gt; 1)$.</p> <p>The probability of the sum can be represented as integral:</p> <p>$\mathbb{P} (\xi_1 + \xi_2 + \ldots \xi_k &lt; x, \xi_1 \le 1, \ldots \xi_{k-1} \le 1, \xi_k &gt; 1) = \int\limits_D \lambda^k e^{- \lambda u_1} e^{- \lambda u_2} \ldots e^{- \lambda u_k} {d}u_1 \ldots {d}u_k$, where $D = \{ u_1 + \ldots u_k &lt; x, u_1 \le 1, \ldots u_{k-1} \le 1, u_k &gt; 1\}$. I'm afraid that this integral cannot be calculated.</p> <p>Is it somehow easier to find the distribution function $\mathbb{P} (g &lt; x)$?</p>
Did
6,179
<p>There is a subtle dependence between $\nu$ and the sums of random variables $\xi_k$, in particular $\nu$ is not independent of $(\xi_k)$. However, note that:</p> <ul> <li>$g\gt1$ almost surely</li> <li>if $\xi_1\gt1$ then $g=\xi_1$</li> <li>if $\xi_1\lt1$ then $g=\xi_1+g'$ where $g'$ is independent of $\xi_1$ and distributed like $g$</li> </ul> <p>Thus, for every $x\gt1$, $$P(g\gt x)=P(\xi\gt x)+\int_0^1\mathrm dP_{\xi}(t)P(g\gt x-t),$$ that is, $$P(g\gt x)=\mathrm e^{-\lambda x}+\int_0^1\lambda\mathrm e^{-\lambda t}P(g\gt x-t)\mathrm dt.$$ From this point, one can show that there exists some sequence $(p_n)$ of polynomials such that, for every nonnegative integer $n$ and every $u$ in $[0,1]$, $$P(g\gt n+u)=p_n(u).$$ The sequence $(p_n)$ is uniquely determined by the recursion $$p'_{n+1}(u)=-\lambda\mathrm e^{-\lambda}p_n(u),\quad p_{n+1}(0)=p_n(1),$$ for every $n\geqslant0$, with the initial condition $$p_0(u)=1.$$ Thus, every $p_n(u)$ depends on the parameter $$a=\lambda\mathrm e^{-\lambda},$$ and the generating function of $(p_n)$ is $$\sum_{n\geqslant0}p_n(u)x^n=\frac{\mathrm e^{-axu}}{1-x\mathrm e^{-ax}}.$$ Explicit general formulas for $p_n$ are not so easy to deduce from this but one sees that each polynomial $p_n$ has degree $n$, and even that, for $n\geqslant2$, $$p_n(u)=p_n(0)-a\,p_{n-1}(0)\,u+[\text{some monomials from $u^2$ to $u^{n-1}$}]+(-a)^n\frac{u^n}{n!}.$$ Nevertheless, note as an example of application that, for every $\lambda\leqslant1$, the smallest positive root of $x=\mathrm e^{ax}$ is $x=\mathrm e^\lambda$ hence $$p_n(u)\approx\mathrm e^{-n\lambda}.$$</p>
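The recursion and the generating function can be cross-checked numerically (a sketch I'm adding; representing each polynomial as a list of coefficients is my choice, not part of the answer):

```python
import math

lam = 1.0
a = lam * math.exp(-lam)            # a = lambda * e^{-lambda}

def evalp(c, u):                    # evaluate coefficient list c at u
    return sum(ck * u**k for k, ck in enumerate(c))

# p_0 = 1, and p_{n+1}(u) = p_n(1) - a * Int_0^u p_n(v) dv, which is the
# integrated form of p'_{n+1} = -a p_n with p_{n+1}(0) = p_n(1).
polys = [[1.0]]
for _ in range(30):
    prev = polys[-1]
    nxt = [evalp(prev, 1.0)] + [-a * ck / (k + 1) for k, ck in enumerate(prev)]
    polys.append(nxt)

# Compare the truncated series sum p_n(u) x^n with the closed form; since
# each p_n(u) is a probability, the tail beyond n = 30 is below 0.1^31.
u, x = 0.3, 0.1
series = sum(evalp(p, u) * x**n for n, p in enumerate(polys))
closed = math.exp(-a * x * u) / (1 - x * math.exp(-a * x))
```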
3,282,895
<p>How do I calculate <span class="math-container">$$\int_{0}^{2\pi} (2+4\cos(t))/(5+4\sin(t)) dt$$</span></p> <p>I've recently started calculating integral via the residue theorem. Somehow I'm stuck with this certain integral. I've substituted t with e^it and received two polynoms but somehow I only get funny solutions.. Could someone please help me finding the residues?</p>
Sohom Paul
686,081
<p>It looks like you are already familiar with the substitution <span class="math-container">$z=e^{it}$</span> which allows us to transform this real integral into a contour integral. Note that on the unit circle, <span class="math-container">$\bar{z} = 1/z$</span>, so we get <span class="math-container">$\cos t = (z + 1/z)/2$</span>, <span class="math-container">$\sin t = (z - 1/z)/2i$</span>, and <span class="math-container">$dt = dz/iz$</span>. Substituting in, we get <span class="math-container">$$\int_{0}^{2\pi}\frac{2+4\cos t}{5 + 4\sin t}\,dt = \oint_{|z|=1}\frac{2 + 4(\frac{z + 1/z}{2})}{5 + 4(\frac{z-1/z}{2i})}\,\frac{dz}{iz} = \oint_{|z| = 1}\frac{2(1 + z + z^2)}{2z(z+\frac{i}{2})(z+2i)}\,dz$$</span> The only singularities that lie within the contour are the simple poles at <span class="math-container">$z=0$</span> and <span class="math-container">$z=-i/2$</span>. <span class="math-container">$$\textrm{Res}(0) = \frac{2}{-2} = -1$$</span> <span class="math-container">$$\textrm{Res}\left(-\frac{i}{2}\right) = \frac{\frac{3}{2}-i}{\frac{3}{2}} = 1 - \frac{2i}{3}$$</span> Taking <span class="math-container">$2\pi i \sum \textrm{Res}$</span>, our answer is just <span class="math-container">$4\pi/3$</span>.</p>
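A quick numerical confirmation of the value $4\pi/3$ (my addition, not part of the answer; the trapezoidal rule over a full period of a smooth periodic integrand converges extremely fast):

```python
import math

N = 4096
h = 2 * math.pi / N
# Equally spaced samples over one period; no endpoint correction is needed
# because the integrand is 2*pi-periodic.
val = h * sum((2 + 4 * math.cos(t)) / (5 + 4 * math.sin(t))
              for t in (k * h for k in range(N)))
```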
2,933,572
<p>Suppose <span class="math-container">$A = 1/2^{100\log(n)}$</span>, and <span class="math-container">$B = e^{-100\log(2) \log(n)}$</span>.</p> <p>I'm required to prove that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are equal, how should I prove this? I tried applying some rules of logarithms that I have learned but I'm not able to show this.</p>
TheSilverDoe
594,484
<p><span class="math-container">$C$</span> is closed </p> <p><span class="math-container">$\Leftrightarrow$</span> <span class="math-container">$\mathbb{R} \setminus C$</span> is open </p> <p><span class="math-container">$\Leftrightarrow$</span> <span class="math-container">$ \forall x \in \mathbb{R}\setminus C, \exists \varepsilon &gt; 0, (x-\varepsilon, x + \varepsilon) \subset \mathbb{R} \setminus C$</span></p> <p><span class="math-container">$\Leftrightarrow$</span> <span class="math-container">$ \forall x \in \mathbb{R}\setminus C, \exists \varepsilon &gt; 0, (x-\varepsilon, x + \varepsilon) \cap C = \emptyset$</span></p> <p><span class="math-container">$\Leftrightarrow$</span> <span class="math-container">$ \forall x \in \mathbb{R}\setminus C, \exists \varepsilon &gt; 0, d(x,C) \geq \varepsilon$</span></p> <p><span class="math-container">$\Leftrightarrow$</span> <span class="math-container">$ \forall x \in \mathbb{R}\setminus C, d(x,C) &gt; 0$</span></p>
4,253,160
<p>I was recently taught that a subset W is a subspace of V if and only if:</p> <ol> <li>W is non-empty.</li> <li>W is closed under vector addition.</li> <li>W is closed under scalar multiplication.</li> </ol> <p>So we only need to prove 3 out of the 10 vector space axioms; why is this? Is it because it's redundant to prove the other axioms once those 3 specific axioms are proven?</p>
Richard Jensen
658,583
<p>Hint: <span class="math-container">$\cos(\frac{1}{x}) \le 1$</span>. Therefore</p> <p><span class="math-container">$|\sin(x)\cos(1/x)|\le |\sin(x)|$</span></p>
192,095
<p>Suppose I have a convex program which has only two variables, the objective function is strictly convex, and the constraints are linear functions. </p> <p>I think removing all non-tight constraints doesn't change the optimal solution.</p> <p>However, when there are more than 2 tight constraints, I am not sure if removing all other tight constraints but only leaving two of them still keep the optimal solution unchanged. </p> <p>Any advice would be appreciated! </p>
J Fabian Meier
3,816
<p>In convex optimization, a local optimum is a global one. So if you do not change the neighbourhood of the optimal solution, you do not change the optimal solution.</p>
2,713,937
<p>I need this lemma for another proof I'm doing, but I can't crack it. I want something of the structure:</p> <p>$$\frac{pq}{(p-1)(q-1)} &lt; \dots = \frac{pq}{\frac{1}{2}pq} = 2,$$ but I can't figure out what to do with the denominator. </p>
Joffan
206,402
<p>You could prove that $1&lt;\frac{k}{k-1} &lt; \frac{\ell}{\ell-1}$ for $k&gt;\ell$ and then observe that $\frac {3\times 5}{2\times4}&lt;2$, and that choosing larger odd primes will thus make the result smaller.</p>
2,713,937
<p>I need this lemma for another proof I'm doing, but I can't crack it. I want something of the structure:</p> <p>$$\frac{pq}{(p-1)(q-1)} &lt; \dots = \frac{pq}{\frac{1}{2}pq} = 2,$$ but I can't figure out what to do with the denominator. </p>
egreg
62,967
<p>Assume $p&gt;q$: $$ 2(p-1)(q-1)-pq=pq-2p-2q+2&gt;pq-4p+2=p(q-4)+2 $$ which is $&gt;0$ for $q&gt;3$. For $q=3$, $$ 2(p-1)(q-1)-pq=4(p-1)-3p=p-4&gt;0 $$ Therefore $2(p-1)(q-1)&gt;pq$.</p>
2,713,937
<p>I need this lemma for another proof I'm doing, but I can't crack it. I want something of the structure:</p> <p>$$\frac{pq}{(p-1)(q-1)} &lt; \dots = \frac{pq}{\frac{1}{2}pq} = 2,$$ but I can't figure out what to do with the denominator. </p>
Hans
64,809
<p>$$\frac{pq}{(p-1)(q-1)} &lt;2\Longleftrightarrow pq-2p-2q+2 =(p-2)(q-2)-2&gt;0,$$ which holds for distinct odd primes: up to swapping we may take $p\ge 3$ and $q\ge 5$, so $(p-2)(q-2)\ge 3&gt;2$.</p>
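All three answers prove the same inequality; here is a brute-force numerical confirmation over small distinct odd primes (my own sketch, using the equivalent integer form $pq < 2(p-1)(q-1)$ to avoid floating point):

```python
def primes_upto(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, b in enumerate(sieve) if b]

odd_primes = [p for p in primes_upto(200) if p > 2]
ok = all(p * q < 2 * (p - 1) * (q - 1)
         for p in odd_primes for q in odd_primes if p != q)
```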
102,427
<p>I just coded a simple simulation module that looks at the evolution of a continuous trait in a haploid asexually reproducing population under density dependent competition in discrete time (i.e. non-overlapping generations, using recurrence equations). What I am interested in is finding out whether evolution would always favour selecting for increased intrinsic growth rates R, perhaps eventually pushing the population to go extinct (a scenario known as "evolutionary suicide"), or if instead there would be selection for restrained growth rates, e.g. due to the fact that more selfish lineages with higher intrinsic growth rates would more often move into the chaotic regime and go extinct faster.</p> <p>The module I have takes as arguments the desired fitness function (e.g. the discrete logistic model), the initial trait values of your individuals in the 1st generation, the mutation rate and the standard deviation of the normal deviation that is applied to mutants and the nr of generations to run the simulation :</p> <pre><code>EvolveHapl[fitnessfunc_, initpop_, mutrate_, stdev_, generations_] := Module[{ndist, traitvalues, currpopsize, fastPoisson, fitnessinds, numberoffspring, nrmutants, rnoise, rndelem}, ndist = NormalDistribution[0, stdev] ; traitvalues = Table[{}, {generations + 1}]; (* list of lists containing ind trait values in each generation *) traitvalues[[1]] = initpop; currpopsize = Length[traitvalues[[1]]]; (* fast Poisson random number generator *) fastPoisson = Compile[{{\[Lambda], _Real}}, Module[{b = 1., i, a = Exp[-\[Lambda]]}, For[i = 0, b &gt;= a, i++, b *= RandomReal[]]; i - 1], RuntimeAttributes -&gt; {Listable}, Parallelization -&gt; True]; Do[fitnessinds = Table[fitnessfunc[traitvalues[[gen - 1]][[i]], currpopsize], {i, 1, currpopsize}]; (* fitness of every individual in the population, in mean number of offspring *) numberoffspring = fastPoisson[fitnessinds]; (* absolute number of offspring that every individual produces *) 
traitvalues[[gen]] = Flatten[Table[ Table[traitvalues[[gen - 1]][[i]], {j, 1, numberoffspring[[i]]}], {i, 1, currpopsize}]]; (* expected offspring trait values before mutation *) currpopsize = Length[traitvalues[[gen]]]; (* new population size *) nrmutants = RandomVariate[BinomialDistribution[currpopsize, mutrate]]; (* nr of offspring that should mutate *) rnoise = RandomReal[ndist, nrmutants]; (* noise to be added to the trait values of the mutants *) Do[rndelem = RandomInteger[{1, currpopsize}]; traitvalues[[gen]][[rndelem]] = Max[traitvalues[[gen]][[rndelem]] + rnoise[[i]], 0];, {i, 1, nrmutants}];, (* mutate trait values *) {gen, 2, generations + 1}]; Return[traitvalues ]]; </code></pre> <p>And to plot the resulting list of individual trait values I use</p> <pre><code>PlotResult[traitvalues_] := Module[{}, generations = Length[traitvalues]; Print["Mean phenotype at the beginning : " &lt;&gt; ToString[Mean[traitvalues[[1]]]]]; Print["Maximum mean phenotype at any generation : " &lt;&gt; ToString[ Max[Table[ Mean[traitvalues[[i]]], {i, 1, generations}] /. {Mean[{}] -&gt; 0}]]]; Print["Mean phenotype after " &lt;&gt; ToString[ngens] &lt;&gt; " generations : " &lt;&gt; ToString[ Mean[traitvalues[[generations]]] /. 
{Mean[{}] -&gt; "- (population extinct)"}]]; Print["Final population size : " &lt;&gt; ToString[Length[traitvalues[[generations]]]]]; maxscaleN = Max[Table[{Length[traitvalues[[i]]]}, {i, 1, generations}]]; minscaleN = Min[Table[{Length[traitvalues[[i]]]}, {i, 1, generations}]]; maxscaleP = Max[Flatten[traitvalues]]; GraphicsRow[{Show[ ArrayPlot[ Table[BinCounts[ traitvalues[[i]], {0, maxscaleP + 0.5, 0.05}], {i, 1, generations}]/ Table[Length[traitvalues[[i]]] + 0.00001, {i, 1, generations}], DataRange -&gt; {{0, maxscaleP + 0.5}, {1, generations}}, DataReversed -&gt; True], Frame -&gt; True, FrameLabel -&gt; {"Phenotype frequency", "Generation"}, FrameTicks -&gt; True, AspectRatio -&gt; 2], ListPlot[ Table[{Length[traitvalues[[i]]], i}, {i, 1, generations}], Joined -&gt; True, Frame -&gt; True, FrameLabel -&gt; {"Population size", "Generation"}, AspectRatio -&gt; 2, PlotRange -&gt; {{Clip[minscaleN - 50, {0, \[Infinity]}], maxscaleN + 50}, {0, generations}}]}] ] </code></pre> <p>Running it, however, is quite slow, e.g.</p> <pre><code>psize = 300; ngens = 5000; mutrate = 0.01; stdev = 0.05; K = 2*psize; f[R_, popsize_] := Max[(1 + R *(1 - popsize/K)), 0.00001]; First@AbsoluteTiming[ traitvalues = EvolveHapl[ f, RandomReal[{2.5, 2.6}, psize], mutrate, stdev, ngens];] PlotResult[traitvalues] 204.02 </code></pre> <p><a href="https://i.stack.imgur.com/iblgg.png" rel="noreferrer"><img src="https://i.stack.imgur.com/iblgg.png" alt="enter image description here"></a></p> <p>I was just wondering if there might be any obvious ways to speed up this routine in Mathematica? E.g. could I get rid of all my <code>Do[]</code> and <code>Table[]</code> loops somehow? Or could this whole routine or parts of it be compiled to speed it up? I've already replaced the Poisson number generation with a faster compiled version and used <code>Max[]</code> rather than <code>Clip[]</code> in my original post, but this hasn't improved speed as much as I had hoped it would... 
Or better to go to C or C++ for this kind of simple simulation?</p>
m_goldberg
3,066
<p>When building a simulation like yours you should test the performance of the individual components before incorporating them into the simulation. That is, you should know the cost of the components as well as the values they return. Here is a simple example based on your code. </p> <p>You use <code>Clip</code> in a couple of places to limit values from below. That is suspect, because <code>Clip</code> is designed to confine a value to an interval, i.e., from both below and above. It is likely that <code>Max</code> will be better. Let's test this theory with your fitness function.</p> <pre><code>initsize = 500; With[{K = 2*initsize}, fit1[R_, psize_] := Clip[(1 + R (1 - psize/K)), {0.000001, ∞}]; fit2[R_, psize_] := Max[(1 + R (1 - psize/K)), 0.000001]] With[{R = 2.5, p = 1000, n = 100000}, {First @ AbsoluteTiming[Do[fit1[R, p], n]], First @ AbsoluteTiming[Do[fit2[R, p], n]]}] </code></pre> <blockquote> <p><code>{0.415949, 0.371045}</code></p> </blockquote> <p>So there is a real advantage in using <code>Max</code> rather than <code>Clip</code>. Since your simulation evaluates its fitness function thousands of times, it would pay to use <code>Max</code>.</p> <p>Thorough component testing will likely reveal many more places where your code can be improved. I personally think that generating so many tables over and over again is a performance area which you should investigate.</p> <p>When developing a simulation that will be run for many thousands of iterations, testing each piece of component code is well worth the time and effort it takes. One of <em>Mathematica</em>'s advantages for building simulations is that it makes the sort of incremental coding and unit testing for both correctness and performance much easier than traditional lower level languages such as C or C++.</p>
2,861,867
<p>Let $A$ be any commutative ring (with $1$) and $x,y \in A$ such that $x+y = 1$. Then it follows that for any $k,l$, there exist $a,b \in A$ such that $ax^k+by^l = 1$. </p> <p>(Proof: Suppose otherwise. Then, $(x^k,y^l) \subset \mathfrak p$ for some prime ideal $\mathfrak p$. But then this implies that $x,y \in \mathfrak p$ and therefore $1 = x+y \in \mathfrak p$, contradiction.)</p> <p>My question is: Can we give a method for constructing the $a,b$ given any $k,l$? Maybe this is too much to ask for in general. What about if I restrict $A$ to be a finitely generated algebra over a field?</p>
Thomas Andrews
7,933
<p>Assuming $k,l$ non-negative integers.</p> <p>Write $$1=(x+y)^{k+l}=\sum_{i=0}^{k+l}\binom{k+l}{i}x^iy^{k+l-i}$$</p> <p>Now, for $i=0,\dots,k+l$ either $i\geq k$ or $k+l-i\geq l.$</p> <p>So we just separate the terms. If we set:</p> <p>$$\begin{align}a&amp;=\sum_{i=k}^{k+l}\binom{k+l}{i}x^{i-k}y^{k+l-i}\\ b&amp;=\sum_{i=0}^{k-1}\binom{k+l}{i}x^{i}y^{k-i}\end{align}$$</p> <p>Then $ax^k+by^l=1.$</p> <p>This actually works if $a_0x+b_0y=1$ for any $x,y,a_0,b_0\in R.$</p> <p>For the case $x+y=1,$ you don't need the ring to be commutative, you just need $x,y$ to commute.</p>
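The construction is completely explicit, so it can be tested directly in the ring $\mathbb Z$ (a sketch I'm adding; `bezout_powers` is my name for the two formulas above):

```python
from math import comb

def bezout_powers(x, y, k, l):
    # assumes x + y == 1; returns (a, b) with a*x**k + b*y**l == 1,
    # by splitting the binomial expansion of (x + y)**(k + l)
    a = sum(comb(k + l, i) * x**(i - k) * y**(k + l - i)
            for i in range(k, k + l + 1))
    b = sum(comb(k + l, i) * x**i * y**(k - i) for i in range(k))
    return a, b

for x, k, l in [(7, 3, 2), (-6, 2, 5), (100, 1, 4)]:
    y = 1 - x
    a, b = bezout_powers(x, y, k, l)
    assert a * x**k + b * y**l == 1
```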
890,313
<p>Say the probability of an event occurring is 1/1000, and there are 1000 trials.</p> <p>What's the expected number of events that occur? </p> <p>I got to an answer in a quick script by doing the above 100,000 times and averaging the results. I got 0.99895, which seems like it makes sense. How would I use math to get right to this answer? The only thing I can think of to calculate is the probability that an event never occurs, which would be 0.999^1000, but I am stuck there. </p>
Did
6,179
<p>The expected number of events occurring in each trial equals the probability of the event, that is, $1/1000$. By linearity of expectation, the expected number of events occurring in $1000$ trials is $1000$ times $1/1000$, that is, $1$.</p>
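Since $p = 1/1000$ is rational, the binomial expectation can also be computed with exact integer arithmetic (my own check of $E[X]=np=1$, not part of the answer):

```python
from math import comb

# X ~ Binomial(n = 1000, p = 1/1000).  Writing every probability over the
# common denominator 1000**n, E[X] = 1 is the exact integer identity
#   sum_k k * C(n, k) * 999**(n - k) == 1000**n.
n = 1000
numerator = sum(k * comb(n, k) * 999**(n - k) for k in range(n + 1))

p_no_event = 0.999**1000   # the quantity the asker computed, about 0.3677
```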
2,344,758
<p>Quoting from Wikipedia article on Euler's totient function theorem :---</p> <blockquote> <p>In general, when reducing a power of <span class="math-container">$a$</span> modulo <span class="math-container">$n$</span> (where <span class="math-container">$a$</span> and <span class="math-container">$n$</span> are coprime), one needs to work modulo <span class="math-container">$φ(n)$</span> in the exponent of <span class="math-container">$a$</span>:</p> <p>if <span class="math-container">$x ≡ y \pmod{φ(n)}$</span>, then <span class="math-container">$a^x ≡ a^y \pmod{n}$</span>.</p> </blockquote> <p>Is this really true generally? And how to prove that the original statement in Euler's theorem is equivalent to that?</p>
Ronald Blaak
458,842
<p>Euler's theorem states:</p> <p>For all integers $a$ and $n$ with $GCD(a,n)=1$ we have $a^{\phi(n)} \equiv 1\mod n$.</p> <p>Hence if we assume $x \geq y$ and $x \equiv y \mod \phi(n)$, there is some nonnegative integer $k$ that satisfies $x = y + k \phi(n)$. Then we get: $$ a^x = a^{y + k \phi(n)} = a^y \left[a^{\phi(n)}\right]^k \equiv a^y [1]^k \equiv a^y \mod n $$ which proves your version with the condition that $a$ and $n$ are coprime. For the reverse and starting with the theorem:</p> <p>For all integers $a,n,x,y$ with $GCD(a,n)=1$ and $x \equiv y \mod \phi(n)$ we have $a^x \equiv a^y \mod n$.</p> <p>We consider the special case that $x = y + \phi(n)$, which gives $$ a^x = a^{y + \phi(n)} \equiv a^y \mod n $$ and hence $$ a^{y+\phi(n)} - a^y = a^y \left(a^{\phi(n)} -1 \right) \equiv 0 \mod n $$ Since $GCD(a,n)=1$ the factor $a^y$ has no common factor with $n$ and hence $a^{\phi(n)} -1$ must be a multiple of $n$, which gives Euler's theorem in the usual form $$ a^{\phi(n)} \equiv 1 \mod n $$ So the two theorems are fully equivalent and generally true under the constraint that $a$ and $n$ have no common divisor.</p>
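Both directions rest on Euler's theorem, which is easy to spot-check (my sketch; `phi` is a naive totient by direct counting, fine for small $n$):

```python
from math import gcd

def phi(n):  # Euler's totient, by direct count
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

ok = True
for n in [5, 12, 35, 100]:
    f = phi(n)
    for a in range(2, n):
        if gcd(a, n) == 1:
            ok &= pow(a, f, n) == 1                      # Euler's theorem
            ok &= pow(a, 7 + 3 * f, n) == pow(a, 7, n)   # exponent reduction
```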
367,497
<p>Let $\{f_n\}$ be a sequence of $L^1(\mathbb R)$ functions converging a.e. to zero. Does $$ \lim_{n\to \infty} \int_{\mathbb R} \sin(f_n(x)) dx = 0? $$</p> <p>I think the answer is no, but I can't find a counterexample.</p>
Davide Giraudo
9,849
<p>Try $f_n:=\frac{\pi}2\chi_{(n,n+1)}$. Each $f_n$ is in $L^1(\mathbb R)$ and $f_n\to 0$ pointwise (every fixed $x$ lies outside $(n,n+1)$ for all large $n$), yet $$\int_{\mathbb R}\sin(f_n(x))\,dx=\int_n^{n+1}\sin\frac{\pi}{2}\,dx=1$$ for every $n$.</p>
372,064
<p>can someone explain me why</p> <p>$\dot{a}\ddot{a}=\frac{1}{2}\frac{d}{dt}\left(\dot{a}^{2}\right)$</p> <p>Many thanks</p>
Marlo
74,183
<p>Or use the chain rule: $\frac{1}{2}\frac{d}{dt}\big((\dot a)^2\big)=\frac{d}{dt}f(\dot a(t))$, where $f(x)=\frac{x^2}{2}$. You first take the derivative of $f$ evaluated at $\dot a$, which gives $f'(\dot a)=\dot a$, and then multiply by the time derivative of the argument of $f$, which is $\ddot a$. Hence the result is $\dot a\ddot a$.</p>
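The identity is easy to confirm with a finite difference (my own sketch, using $a(t)=\sin t$ as a test function, so $\dot a=\cos t$ and $\ddot a=-\sin t$):

```python
import math

def g(t):                       # g(t) = a'(t)^2 / 2 with a(t) = sin(t)
    return math.cos(t)**2 / 2

t, h = 0.8, 1e-5
lhs = (g(t + h) - g(t - h)) / (2 * h)   # central difference for dg/dt
rhs = math.cos(t) * (-math.sin(t))      # a' * a''
```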
3,829,431
<blockquote> <p>If the area of equilateral triangle is <span class="math-container">$3\sqrt3$</span> cm<span class="math-container">$^2$</span> , then what is the height of the equilateral triangle?</p> </blockquote> <p>I am stuck with this question <br>I solved it like this: <br>Area of equilateral triangle is <span class="math-container">$\frac{a\sqrt3}4$</span> <br>So, <span class="math-container">$\frac{\sqrt3}4 \cdot a = 3 \sqrt 3$</span> <br><span class="math-container">$a = \frac{3\sqrt3}{\sqrt3/4} = 3\sqrt3 \cdot \frac4{\sqrt3}$</span>; <span class="math-container">$\sqrt 3$</span>'s cancel <br><span class="math-container">$a = 3 \cdot 4 = 12$</span> <br>I found the side is <span class="math-container">$12$</span>. <br>How do I find the height with the side length? <br>And also kindly say if I made any mistake in my calculations.</p>
SBRJCT
39,413
<p>Since I've accepted an answer, I thought I would post an answer that addressed my issue directly, in case someone else hits the same snag. I assumed, without checking, that the maximal ideals in <span class="math-container">$\mathbb{R}[X,Y]$</span> are in correspondence with those of the form <span class="math-container">$(x^2+ax+b,y^2+cx+d)$</span>, but this is not true! Maximal ideals are in correspondence with the kernels of surjective maps from <span class="math-container">$\mathbb{R}[X,Y] \to \mathbb{R}$</span> or <span class="math-container">$\mathbb{C}$</span>. Since, in the latter case, <span class="math-container">$[\mathbb{C}:\mathbb{R}]=2$</span>, a map from <span class="math-container">$\mathbb{R}[X,Y] \to \mathbb{C}$</span> sending <span class="math-container">$X \mapsto z, Y \mapsto w$</span> will be surjective iff <span class="math-container">$z$</span> and <span class="math-container">$w$</span> are linearly independent over <span class="math-container">$\mathbb{R}$</span>. Therefore, along with others, ideals of the form <span class="math-container">$(X^2+b, Y^2+d)$</span> (like those considered above) are <em>not</em> maximal.</p> <p>See also <a href="https://math.stackexchange.com/a/2844259/39413">https://math.stackexchange.com/a/2844259/39413</a>, or <a href="https://math.stackexchange.com/a/2781509/39413">https://math.stackexchange.com/a/2781509/39413</a>, or more generally <a href="https://mathoverflow.net/a/26503/69608">https://mathoverflow.net/a/26503/69608</a></p>
213,665
<p><strong>I've tried 3 methods but all failed to do that.</strong></p> <p>1st Method</p> <pre><code>Apply[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}] </code></pre> <p>2nd Method</p> <pre><code>Map[Flatten, {1, {2, {3, 4}, 5}, 6}, {2}] </code></pre> <p>3rd Method</p> <pre><code>Flatten[{1, {2, {3, 4}, 5}, 6}, {2}] </code></pre> <p>I wanna get {1, {2, 3, 4, 5}, 6}</p>
Fraccalo
40,354
<p>This will work:</p> <pre><code>l = {1, {2, {3, 4}, 5}, 6} MapAt[Flatten, l, 2] </code></pre> <blockquote> <p>{1, {2, 3, 4, 5}, 6}</p> </blockquote> <p>also:</p> <pre><code># /. x_ /; Length[x] &gt; 1 :&gt; Flatten@x &amp; /@ l </code></pre> <blockquote> <p>{1, {2, 3, 4, 5}, 6}</p> </blockquote>
2,538,297
<p>This is my first question here so I hope I'm doing it right :) sorry otherwise!</p> <p>As in the title, I was wondering if and when it is OK to calculate a limit i three dimensions through a substitution that "brings it down to two dimensions". Let me explain what I mean in a clearer way through an example. I was calculating this limit:<br> $$\lim_{(x,y) \to (0,0)} \frac{\ln (1+\sin^2(xy))}{x^2+y^2} =\lim_{(x,y) \to (0,0)} \frac{\ln (1+\sin(xy)\cdot \sin(xy))}{x^2+y^2}$$ $$=\lim_{(x,y) \to (0,0)} \frac{\ln (1+xy\cdot xy)}{x^2+y^2} =\lim_{(x,y) \to (0,0)} \frac{\ln (1+x^2y^2)}{x^2+y^2}=\lim_{(x,y) \to (0,0)} \frac{x^2y^2}{x^2+y^2}$$ $$=\lim_{(x,y) \to (0,0)}\frac{1}{\frac{1}{y^2}+\frac{1}{x^2}}="\frac{1}{\infty}"=0.$$ Where I have used: $$ \lim_{(x,y) \to (0,0)} \frac{\sin(xy)}{xy}=[z=xy]=\lim_{z\to 0}\frac{\sin z}{z}=1$$ and $$ \lim_{(x,y) \to (0,0)} \frac{\ln(1+xy)}{xy}=[z=xy]=\lim_{z\to 0}\frac{\ln(1+z)}{z}=1.$$ Is the way I calculated the limits for $(x,y)\to (0,0)$ by substituting with $z=xy$ legit? Also, if it is... am I allowed to substitute an expression with its limit <em>inside</em> a limit, as in <em>while</em> calculating the limit, or can I only take the limits in one last step (I'm a bit confused by this exercise in general, I have solved it with Taylor series but I'm curious to know whether this works too)?<br> Thank you so much in advance!</p>
Reese Johnston
351,805
<p>Your substitution is correct - substituting $z = xy$ isn't really "doing" anything, it's just giving a name to something. It's always fine to rename things.</p> <p>On the other hand, replacing an expression with its limit inside a limit is <em>not</em> in general permissible. To take an excellent example: consider $\lim_{x \to 0}(1 + x)^{1/x}$. If we permit ourselves to take that step, then we could observe that $1 + x \to 1$ as $x \to 0$, so this is really $\lim_{x \to 0}1^{1/x}$. Since $1^y = 1$ for every $y$, this is $\lim_{x\to 0}1$, and so the limit is just $1$. On the other hand, $\lim_{x \to 0}(1 + x)^{1/x}$ is one of the definitions of $e$, so it can't be $1$!</p> <p>In this particular case, it worked - but this is because you got lucky with the functions you were considering. Basically, $\ln$ "damps" small errors near and above $1$, so when you replaced $\sin(xy)$ with the slightly-different $xy$ the difference between $\ln(1 + \sin^2(xy))$ and $\ln(1 + (xy)^2)$ was even smaller. Replacing $\ln(1 + x^2y^2)$ with $x^2y^2$ was basically okay (though only because the final answer was $0$), but technically incorrect - what you should do is use little-$o$ notation here, and write $\ln(1 + x^2y^2) = x^2y^2 + o(x^2y^2)$. If you're familiar with little-$o$ notation, you should be able to demonstrate that $\frac{o(x^2y^2)}{x^2 + y^2}$ goes to zero.</p>
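A quick numeric illustration of the $(1+x)^{1/x}$ example (the sample points are my own choice):

```python
import math

def g(x):
    """(1 + x)**(1/x): the inner 1 + x tends to 1, but the limit is e."""
    return (1.0 + x) ** (1.0 / x)

values = [g(10.0 ** (-k)) for k in range(1, 8)]
gap_from_e = abs(values[-1] - math.e)   # shrinks as x -> 0
gap_from_1 = abs(values[-1] - 1.0)      # stays near e - 1
```

So substituting the limit of $1+x$ before taking the outer limit would change the answer from $e$ to $1$.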
65,810
<p>Recently, I have been learning about nef line bundles. I know that when $X$ is projective or Moishezon, a line bundle $L$ over $X$ is said to be nef iff $$L.C=\int_{C}c_{1}(L)\ge 0$$ for every curve $C$ in $X$.</p> <p>Demailly gave a definition of nefness that works on an arbitrary compact complex manifold, i.e., a line bundle $L$ over $X$ is said to be nef if for every $\varepsilon &gt;0$ there exists a smooth hermitian metric $h_{\varepsilon}$ on $L$ such that its curvature $\Theta_{h_{\varepsilon}}(L)\ge -\varepsilon\omega$. For projective manifolds, Demailly's definition coincides with the above one given by integration (this is an easy consequence of Seshadri's ampleness criterion).</p> <p><strong>Question:</strong> Is this equivalence also true for Moishezon manifolds?</p> <p>I don't know of any counterexamples. If it is not true, could someone give me a counterexample?</p>
YangMills
13,168
<p>Yes, the equivalence is true (the second notion used to be called "metric nef" by some). This was an open problem for quite some time until it was solved in</p> <p>M. Paun "Sur l'effectivité numérique des images inverses de fibrés en droites" Math. Ann. 310 (1998), no. 3, 411–421, see the Corollaire on page 412.</p>
23,181
<p>I have n sectors, enumerated 0 to n-1 counterclockwise. The boundaries between these sectors are infinite branches (n of them). The sectors live in the complex plane, and for n even, sector 0 and n/2 are bisected by the real axis, and the sectors are evenly spaced.</p> <p>These branches meet at certain points, called junctions. Each junction is adjacent to a subset of the sectors (at least 3 of them). </p> <p>Specifying the junctions (in pre-fix order, let's say, starting from the junction adjacent to sectors 0 and 1), and the distance between the junctions, uniquely describes the tree.</p> <p>Now, given such a representation, how can I see if it is symmetric wrt the real axis?</p> <p>For example, for n=6, the tree (0,1,5)(1,2,4,5)(2,3,4) has three junctions on the real line, so it is symmetric wrt the real axis. If the distance between (015) and (1245) is equal to the distance from (1245) to (234), this is also symmetric wrt the imaginary axis.</p> <p>The tree (0,1,5)(1,2,5)(2,4,5)(2,3,4) has 4 junctions, and this is never symmetric wrt either the imaginary or the real axis, but it has 180 degrees rotation symmetry if the distances between the first two and the last two junctions in the representation are equal.</p> <p>Edit: Here are all trees with 6 branches, distances 1. <a href="http://www2.math.su.se/~per/files/allTrees.pdf" rel="nofollow">http://www2.math.su.se/~per/files/allTrees.pdf</a></p> <p>So, given the description/representation, I want to find some algorithm to decide if it is symmetric wrt real, imaginary, and rotation 180 degrees. The last example has 180 degree symmetry.</p> <p>Edit 2: If the distances between the junctions were all the same, it would be quite easy to find the reflection/rotation of a tree. The problem arises when the distances are of unequal length.</p> <p>Notice that a regular n-gon with some non-intersecting chords is sort of the dual to my trees. 
I use this in the drawing algorithm, for those that wonder.</p> <p>That is, I create the n roots of unity (possible with some rotation), then the angle between junction (123) and (345) would be the same as for the mean of vertices 1,2,3 to the mean of vertices 3,4,5 in this n-gon.</p> <p>The angles in the drawing is not really important, you may change the angles, but the order of the long branches should be the same, and you cannot rotate the tree.</p> <p>EDIT 3:</p> <p>Observe that there are many ways of drawing the trees. What I have is an equivalence relation, T1 ~ T2 if the two trees have the same junction representation. If S is an axis symmetry, or rotation by 180 degrees, Then S(T1) ~ S(T2), so the notion of being the same tree is well-defined. The question is therefore, how to determine if S(T1) ~ T1, or even better, compute S(T1). By above, this is independent on how I draw the tree.</p>
Marcos Cossarini
4,118
<p>I don't understand how the angles of the connecting finite segments are determined, so I'll assume the angles are set so that they don't break any symmetry. First observe that the reflection wrt the real axis sends sectors 0,1,2,3,4,5 to 0,5,4,3,2,1 respectively. So in your second example, tree</p> <p>(0,1,5)(1,2,5)(2,4,5)(2,3,4) turns into </p> <p>(0,5,1)(5,4,1)(4,2,1)(4,3,2)</p> <p>which is different from the original tree (the original and transformed tree share only the first and last junction). So the transformation is not a symmetry of the tree. However, the same transformation sends the first example</p> <p>(0,1,5)(1,2,4,5)(2,3,4) to</p> <p>(0,5,1)(5,4,2,1)(4,3,2)</p> <p>which is the same tree, represented in a non standard way because the junctions appear in the wrong order and the sectors of each junction are also in the wrong order. </p> <p>Rotation of 180 degrees sends 0,1,2,3,4,5 to 3,4,5,0,1,2 (add 3 mod 6) so </p> <p>(0,1,5)(1,2,5)(2,4,5)(2,3,4) turns into </p> <p>(3,4,2)(4,5,2)(5,1,2)(5,0,1)</p> <p>which is the same tree, again represented in a non standard way (the junctions appear in the inverse order, and each junction has its incident sectors cycled). </p> <p>So the recipe seems to be the following: Find out, for your transformation of the plane, which sectors go to which, apply this permutation to the tree representation, and then reorder each tree representation (original and transformed) in a standard way that allows one to compare whether they are equal. If they are equal, then (assuming the angles are nice) the transformation of the plane is a symmetry of the tree. If they are not equal, then the transformation is not a symmetry of the tree. </p>
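The recipe in the last paragraph can be sketched in code (function names and the choice of canonical form are mine, not from the original post; distances between junctions are ignored in this sketch):

```python
def transform(tree, perm):
    """Apply a sector permutation to every junction of the tree."""
    return [tuple(perm[s] for s in junction) for junction in tree]

def canonical(tree):
    """Order-independent canonical form: sort the sectors within each
    junction, then sort the junctions themselves."""
    return sorted(tuple(sorted(j)) for j in tree)

def is_symmetric(tree, perm):
    return canonical(tree) == canonical(transform(tree, perm))

n = 6
reflect_real = {s: (-s) % n for s in range(n)}   # 0,1,2,3,4,5 -> 0,5,4,3,2,1
rotate_180   = {s: (s + n // 2) % n for s in range(n)}

tree_a = [(0, 1, 5), (1, 2, 4, 5), (2, 3, 4)]            # first example
tree_b = [(0, 1, 5), (1, 2, 5), (2, 4, 5), (2, 3, 4)]    # second example
```

With these definitions `tree_a` is reflection-symmetric while `tree_b` is not, and both have the 180-degree rotation symmetry, matching the discussion above.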
3,174,339
<p>Let <span class="math-container">$M$</span> be a <span class="math-container">$C^{\infty}$</span> manifold. Let <span class="math-container">$U$</span> be an open subset of <span class="math-container">$M$</span>. Now take a closed subset (with respect to the subspace topology on <span class="math-container">$U$</span>) <span class="math-container">$V \subseteq U$</span>. Does it then follow that <span class="math-container">$V$</span> is closed in <span class="math-container">$M$</span>? Thank you. </p> <p>PS I would further like to add more conditions as it appears that I may have simplified the problem too much as shown by Kavi Rama Murthy below. We further let <span class="math-container">$(U, \phi)$</span> be a local chart with <span class="math-container">$p \in U$</span>, and <span class="math-container">$V = \phi^{-1}(\overline{B(\phi(p), \varepsilon)})$</span> for <span class="math-container">$\varepsilon &gt; 0$</span> small so that <span class="math-container">$\overline{B(\phi(p), \varepsilon)} \subseteq \phi(U)$</span>. </p>
5xum
112,884
<p><strong>Answer to old question before a significant edit was made</strong>:</p> <p>No they do not. It's not true in euclidean space.</p> <p>The function <span class="math-container">$$f(x)=\begin{cases}x&amp; x\leq 0\\ x+1 &amp; x&gt;0\end{cases}$$</span> is an injection, however, <span class="math-container">$f((-1,1)) = (-1, 0]\cup (1, 2)$</span> which is not open.</p>
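A small numeric check of this counterexample (the sampling grid is mine): every point of $(-1,1)$ lands in $(-1,0]\cup(1,2)$, so the image misses the interval $(0,1]$ and cannot contain an open neighbourhood of $f(0)=0$.

```python
def f(x):
    return x if x <= 0 else x + 1

samples = [-1 + 2 * k / 10000 for k in range(1, 10000)]  # points in (-1, 1)
image = [f(x) for x in samples]

# Every image point lies in (-1, 0] or (1, 2); the interval (0, 1] is missed.
in_gap = [y for y in image if 0 < y <= 1]
```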
3,174,339
<p>Let <span class="math-container">$M$</span> be a <span class="math-container">$C^{\infty}$</span> manifold. Let <span class="math-container">$U$</span> be an open subset of <span class="math-container">$M$</span>. Now take a closed subset (with respect to the subspace topology on <span class="math-container">$U$</span>) <span class="math-container">$V \subseteq U$</span>. Does it then follow that <span class="math-container">$V$</span> is closed in <span class="math-container">$M$</span>? Thank you. </p> <p>PS I would further like to add more conditions as it appears that I may have simplified the problem too much as shown by Kavi Rama Murthy below. We further let <span class="math-container">$(U, \phi)$</span> be a local chart with <span class="math-container">$p \in U$</span>, and <span class="math-container">$V = \phi^{-1}(\overline{B(\phi(p), \varepsilon)})$</span> for <span class="math-container">$\varepsilon &gt; 0$</span> small so that <span class="math-container">$\overline{B(\phi(p), \varepsilon)} \subseteq \phi(U)$</span>. </p>
GEdgar
442
<p>Isn't the case <span class="math-container">$\mathbb R^n \to \mathbb R^n$</span> a result of L.E.J. Brouwer? In any case: <span class="math-container">$\mathbb R^1 \to \mathbb R^1$</span> is easy from calculus facts, while <span class="math-container">$\mathbb R^2 \to \mathbb R^2$</span> is harder, but can be proved from the Jordan curve theorem.</p> <p><strong>added</strong><br /> I remembered the name, &quot;Invariance of domain&quot;<br /> See <a href="https://en.wikipedia.org/wiki/Invariance_of_domain" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Invariance_of_domain</a></p>
1,260,260
<blockquote> <p>Find, with proof, the smallest value of $N$ such that $$x^N \ge \ln x$$ for all $0 &lt; x &lt; \infty$. </p> </blockquote> <p>I thought of adding the natural logarithm to both sides and taking derivative. This gave me $N \ge \frac 1{\ln x}$. However, is there a better way to this?</p> <p>Please note that I would like to see only a <em>hint</em>, not a complete solution.</p> <p>If anything, I am made aware that the answer is $N \ge \frac 1e$.</p>
Apurv
109,643
<p>As you increase $n$, the graph of $x^n$ widens and moves away from $\ln x$. At the smallest value of $n$ (so that the inequality holds), the two graphs touch each other. So the two slopes at that point of touching must be the same. $$\implies nx^{n-1}=\dfrac {1}{x}$$ $$\implies x^{n}=\dfrac {1}{n}=\ln x$$ as $x^n=\ln x$ at that point. Now $$x^n=\ln x\implies n\ln x=\ln (\ln x)\implies 1=\ln \left (\dfrac {1}{n}\right)$$ $$\implies n=\dfrac {1}{e}$$ Therefore $n\geq \dfrac {1}{e}$</p> <p>NOTE: Only when the two curves are tangent at the point of intersection will the inequality hold, because there are three possibilities: the curves either meet tangentially, or cross once and go down (thus violating the inequality), or never meet and stay apart (never satisfying the equality sign).</p>
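A numeric sanity check of the conclusion (the grid is my own choice): with $n = 1/e$, the difference $x^{1/e} - \ln x$ stays nonnegative and nearly vanishes near the tangency point $x = e^e$, where both sides equal $e$.

```python
import math

n = 1.0 / math.e

def diff(x):
    return x ** n - math.log(x)

# Sample a wide range of x > 0 on a log grid, from e^-10 to e^10.
xs = [math.exp(t / 10.0) for t in range(-100, 101)]
min_diff = min(diff(x) for x in xs)

# At x = e^e both x^(1/e) and ln x equal e, so the difference vanishes.
touch = diff(math.exp(math.e))
```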
244,333
<p>Consider this equation : </p> <p><span class="math-container">$$\sqrt{\left( \frac{dy\cdot u\,dt}{L}\right)^2+(dy)^2}=v\,dt,$$</span></p> <p>where <span class="math-container">$t$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$T$</span> , and <span class="math-container">$y$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$L$</span>. Now how to proceed ? </p> <p>This equation arises out of following problem : </p> <p>A cat sitting in a field suddenly sees a standing dog. To save its life, the cat runs away in a straight line with speed <span class="math-container">$u$</span>. Without any delay, the dog starts with running with constant speed <span class="math-container">$v&gt;u$</span> to catch the cat. Initially, <span class="math-container">$v$</span> is perpendicular to <span class="math-container">$u$</span> and <span class="math-container">$L$</span> is the initial separation between the two. If the dog always changes its direction so that it is always heading directly at the cat, find the time the dog takes to catch the cat in terms of <span class="math-container">$v, u$</span> and <span class="math-container">$L$</span>. <hr> See my solution below : </p> <p>Let initially dog be at <span class="math-container">$D$</span> and cat at <span class="math-container">$C$</span> and after time <span class="math-container">$dt$</span> they are at <span class="math-container">$D'$</span> and <span class="math-container">$C'$</span> respectively. 
Dog velocity is always pointing towards cat.</p> <p>Let <span class="math-container">$DA = dy, \;AD' = dx$</span></p> <p>Let <span class="math-container">$CC'=udt,\;DD' = vdt$</span> as interval is very small so <span class="math-container">$DD'$</span> can be taken straight line.</p> <p>Also we have <span class="math-container">$\frac{DA}{DC}= \frac{AD'}{ CC'}$</span> using triangle property.</p> <p><span class="math-container">$\frac{dy}{L}= \frac{dx}{udt}\\ dx = \frac{dy.udt}{L}$</span></p> <p><span class="math-container">$\sqrt{(dx)^2 + (dy)^2} = DD' = vdt \\ \sqrt{(\frac{dy.udt}{L})^2 + (dy)^2} = vdt $</span></p> <p>Here <span class="math-container">$t$</span> varies from <span class="math-container">$0-T$</span>, and <span class="math-container">$y$</span> varies from <span class="math-container">$0-L$</span>. Now how to proceed?<img src="https://i.stack.imgur.com/Ji3Fc.jpg" alt="enter image description here"></p>
siddhadev
13,133
<p>Working in polar coordinates can be very handy. As already posted by Egor Skriptunoff the differential equations would then look like this: $$ \frac{dr}{dt} = -v - u\sin{\varphi} \\ r\frac{d\varphi}{dt} = -u\cos{\varphi} $$ and for $\frac{dr}{d\varphi}$ we obtain (by formal division): $$ \frac{dr}{d\varphi} = r\frac{v+u\sin{\varphi}}{u\cos{\varphi}} \\ \frac{dr}{r} = k\frac{d\varphi}{\cos{\varphi}} + \tan{\varphi}\,d\varphi $$ where $k = v/u$, which can be integrated as $$ \ln{r} = k\ln{\left|\frac{1}{\cos\varphi}+\tan{\varphi}\right|} - \ln{\left|\cos{\varphi}\right|} +const $$ to give the beautiful $$ r(\varphi) = \frac{R_{0}}{\cos{\varphi}}\left(\frac{1}{\cos{\varphi}} + \tan{\varphi} \right)^k $$</p>
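A finite-difference check (the sample values of $R_0$ and of the speed ratio $k = v/u$ are mine) that the closed form indeed satisfies $\frac{dr}{d\varphi} = r\left(\frac{k}{\cos\varphi} + \tan\varphi\right)$:

```python
import math

R0, k = 1.0, 2.0          # sample initial distance and speed ratio k = v/u

def r(phi):
    sec = 1.0 / math.cos(phi)
    return R0 * sec * (sec + math.tan(phi)) ** k

def rhs(phi):
    return r(phi) * (k / math.cos(phi) + math.tan(phi))

# Central differences approximate dr/dphi at several angles in (-pi/2, pi/2).
h = 1e-6
errors = []
for phi in (-0.8, -0.3, 0.0, 0.4, 1.0):
    numeric = (r(phi + h) - r(phi - h)) / (2 * h)
    errors.append(abs(numeric - rhs(phi)) / max(1.0, abs(rhs(phi))))
max_err = max(errors)
```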
3,253,891
<p>I'm a complete n00b at math, but I'm wondering how one would go about determining the value of <code>n</code> in the following comparison.</p> <p><code>n * 1.5 + 12.5 = 12.5 / 2 + n</code></p> <p>I'm new to the math StackExchange, so I'm also not sure how to properly format this question. Feel free to edit.</p> <p><span class="math-container">$1.5n+12.5=\displaystyle \frac{12.5}{2}+n$</span></p> <hr> <h1>Explanation</h1> <p>I don't think I formulated my mathematical equation properly because I know that the value I'm looking for is obviously a positive integer.</p> <p><strong>I'm trying to figure out what size the squares must be, so that the center most square in each row is centered above or below the gap between the two squares in the other row.</strong></p> <ol> <li>All the squares must be the same size.</li> <li>All of the gaps are <code>12.5</code> pixels.</li> </ol> <p>The squares in the image below are obviously not big enough as of right now.</p> <p><a href="https://i.stack.imgur.com/8qNNh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8qNNh.png" alt="enter image description here" /></a></p>
Vítězslav Štembera
663,062
<p>Let us talk here about second order partial differential equations (PDEs) with constant coefficients. The difference between elliptic PDEs and parabolic/hyperbolic PDEs is in their eigenvalues: elliptic PDEs have only complex eigenvalues. This means that they contain no real characteristics, i.e. no characteristic line along which the information propagates (from a region with information to a region without information). Parabolic and hyperbolic PDEs always contain some real characteristics.</p> <p>This means that if you numerically solve parabolic or hyperbolic PDEs (using a finite difference scheme, for example, with independent variables <span class="math-container">$t$</span> and <span class="math-container">$x$</span>), you can go from a region containing new information (typically from <span class="math-container">$t=0$</span>, where the initial condition <span class="math-container">$u(t=0,x)=u_0(x)$</span> is defined) to larger times <span class="math-container">$t&gt;0$</span> (in the direction of a characteristic line) and compute new time levels from the previous ones. In this way you numerically propagate information to new time levels. (In order to judge stability of the particular finite difference scheme in this case, one uses the von Neumann stability condition, which also yields the CFL condition. 
This condition tests whether an error which propagates with the solution along characteristic lines is damped or amplified by the particular finite difference scheme.)</p> <p>In an elliptic PDE (let us consider independent variables <span class="math-container">$x$</span> and <span class="math-container">$y$</span> living in a domain <span class="math-container">$\Omega$</span>), having no real characteristics, you cannot go from one part of the domain <span class="math-container">$\Omega$</span> to another, solving the &quot;unknown solution&quot; from an &quot;already known part of the solution&quot;, because the solution at any point depends on the solution at every other point in <span class="math-container">$\Omega$</span>. You have to solve the whole system at once, by forming a system of linear equations <span class="math-container">$Ax=b$</span> (after discretization) in which all solution points are interdependent. Inverting this system then gives the discretized solution at all points in <span class="math-container">$\Omega$</span> at once.</p>
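A minimal illustration of the "solve the whole system at once" point (the model problem and grid are my own): discretizing the 1-D elliptic problem $-u''=1$, $u(0)=u(1)=0$ with central differences couples all interior unknowns in one tridiagonal system $Ax=b$, solved here by the Thomas algorithm.

```python
m = 99                 # interior grid points
h = 1.0 / (m + 1)

# Tridiagonal system: (-u[i-1] + 2u[i] - u[i+1]) / h^2 = 1 at each point.
a = [-1.0] * m         # sub-diagonal
d = [2.0] * m          # main diagonal
c = [-1.0] * m         # super-diagonal
b = [h * h] * m        # right-hand side

# Thomas algorithm: forward elimination, then back substitution.
for i in range(1, m):
    w = a[i] / d[i - 1]
    d[i] -= w * c[i - 1]
    b[i] -= w * b[i - 1]
u = [0.0] * m
u[-1] = b[-1] / d[-1]
for i in range(m - 2, -1, -1):
    u[i] = (b[i] - c[i] * u[i + 1]) / d[i]

# Exact solution is u(x) = x(1 - x)/2; central differences are exact for it.
err = max(abs(u[i] - (i + 1) * h * (1 - (i + 1) * h) / 2) for i in range(m))
```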
278,368
<p><strong>Problem:</strong></p> <p>Assume the number of cars passing a road crossing during an hour satisfies a Poisson distribution with parameter $\mu$, and that the number of passengers in each car satisfies a binomial distribution with parameters $n \in \mathbb{N}$ and $p \in (0,1)$. Let $Y$ denote the total number of passengers passing the road crossing during an hour. Compute $\mathbb{E}[Y]$ and Var$(Y)$.</p> <p><strong>My thoughts:</strong></p> <p>If we let $X_i$ be the number of passengers in the $i^{\text{th}}$ car, we have that $$\mathbb{E}[Y]= \sum_{i=1}^N \mathbb{E}[X_i], $$ where $N$ is the total number of cars. Since $X_i$ ~ Binomial$(n,p)$, $\mathbb{E}[X_i]=np$ $\hspace{1mm}$ $\forall$ $i \in \mathbb{N}$. Furthermore, since $N$ ~ Poisson$(\mu)$, we have $\mathbb{E}[N]=\mu$, yielding $\mathbb{E}[Y]=\mu np.$ </p> <p>This <em>seems</em> right, but I'm dissatisfied with the lack of thoroughness with my argument. Any ideas?</p> <p>(Edit: This isn't homework.)</p>
eeeeeeeeee
55,017
<p>Calculating Var$(Y)$ using leonbloy's suggestion of the use of law of total expectation:</p> <p>First, note that $$\text{Var}(Y|N)=\sum_{i=1}^N \text{Var}(X_i) = Nnp(1-p).$$ Now, by the <a href="http://en.wikipedia.org/wiki/Law_of_total_variance" rel="nofollow">law of total variance</a>, we have $$\begin{align*} \text{Var}(Y) &amp;= \mathbb{E}[\text{Var}(Y|N)]+\text{Var}(\mathbb{E}[Y|N]) \\ &amp;= \mathbb{E}[Nnp(1-p)]+\text{Var}(Nnp) \\ &amp;= np(1-p)\mathbb{E}[N]+n^2p^2Var(N) \\ &amp;=np(1-p)\mu+n^2p^2\mu. \end{align*}$$</p>
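A Monte Carlo sanity check of both formulas (the parameters and the samplers are my own; the Poisson sampler uses Knuth's multiplication method since Python's standard library has none):

```python
import math
import random

random.seed(0)
mu, n, p = 3.0, 4, 0.3
trials = 100_000

def poisson(lam):
    """Knuth's multiplication method for Poisson sampling."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < limit:
            return k
        k += 1

def binomial(n, p):
    return sum(random.random() < p for _ in range(n))

ys = []
for _ in range(trials):
    cars = poisson(mu)
    ys.append(sum(binomial(n, p) for _ in range(cars)))

mean = sum(ys) / trials
var = sum((y - mean) ** 2 for y in ys) / trials

exp_mean = mu * n * p                               # = 3.6
exp_var = n * p * (1 - p) * mu + n**2 * p**2 * mu   # = 2.52 + 4.32 = 6.84
```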
2,156,357
<p>if $H$ and $K$ are nonabelian simple groups prove that :</p> <blockquote> <p>$H$ $\times$ $K$ has exactly four distinct normal subgroups. </p> </blockquote> <p>Please help me prove this.</p>
Andreas Caranti
58,401
<p>I assume for simplicity that this is an <em>internal</em> direct product, so that $H$ and $K$ are normal subgroups of the group $G = H \times K$.</p> <p>The four subgroups $1$, $H$, $K$ and $G$ are clearly normal in $G$, and they are distinct since $H$ and $K$ are non-trivial. To see that there are no others, let $L \trianglelefteq G$ with $L \ne 1$; we show that $L$ must be $H$, $K$ or $G$.</p> <p>I claim that either $L \cap H \ne 1$ or $L \cap K \ne 1$. In fact if $1 \ne (h, k) \in L$, with $h \ne 1$, since $H$ is non-abelian, there is $t \in H$ such that $h^{t} \ne h$. Then $(h, k)^{(t, 1)} = (h^{t}, k) \in L$, and $(h, k)^{-1} \cdot (h^{t}, k) = (h^{-1} h^{t}, 1)$ is a non-trivial element of $L \cap H$. A similar argument applies if $k \ne 1$.</p> <p>Suppose then for instance that $L \cap H \ne 1$. Since $1 \ne L \cap H \trianglelefteq H$, and $H$ is simple, we have that $L \cap H = H$, so $L \ge H$, and since $L/H$ is a normal subgroup of the simple group $G/H \cong K$, then either $L = H$ or $L = G$.</p>
1,529,827
<p>In order to make it clear, I ask three questions:</p> <ol> <li>Does $|2^m - 3^n|&lt;10^6$ have any integers solution for $m&gt;20$ ?</li> <li>Is $ \liminf |2^m - 3^n|$ infinite ?</li> <li>Is $ \liminf |2^m - 3^n|/m$ finite ?</li> </ol>
Wojowu
127,263
<p>The answer to question 1 is <strong>yes</strong>, since $2^{21}-3^{13}=502829$.</p> <p>The answer to question 3 is <strong>no</strong>, and hence the answer to question 2 is <strong>yes</strong>. The following argument is adapted from Terry Tao's <a href="https://terrytao.wordpress.com/2011/08/21/hilberts-seventh-problem-and-powers-of-2-and-3/" rel="nofollow">blog post</a>.</p> <p>By <a href="https://en.wikipedia.org/wiki/Baker%27s_theorem" rel="nofollow">Baker's theorem</a> applied to $n=1,\lambda_1=\log_2 3,\beta_0=\frac{m}{n},\beta=1$, we have for some constant $C$ $|\frac{m}{n}-\log_2 3|&gt;H^{-C}$, where $H=\max\{m,n\}$.</p> <p>We have $|2^m-3^n|=2^m|1-2^{n(\log_2 3-\frac{m}{n})}|$. Assume first $|\log_2 3-\frac{m}{n}|&lt;1$. Since $|2^x-1|&gt;c|x|$ for some constant $c&gt;0$ assuming $|x|&lt;1$, we get $|2^m-3^n|&gt;2^m\cdot cn|\log_2 3-\frac{m}{n}|&gt;2^m\cdot cn\cdot H^{-C}$. We also get $\frac{m}{n}&lt;3$, so $n&gt;\frac{m}{3}$, and $\frac{m}{n}&gt;\log_2 3-1&gt;\frac12$, so $n&lt;2m$ and hence $H=\max\{m,n\}&lt;2m$. Applying this, $|2^m-3^n|&gt;2^m\cdot cn\cdot H^{-C}&gt;2^m\cdot c\cdot\frac{m}{3}\cdot(2m)^{-C}=2^m\cdot c'\cdot m^{-C+1}$, hence $|2^m-3^n|/m&gt;2^m\cdot c'\cdot m^{-C}$, which can be arbitrarily large for large $m$.</p> <p>For $|\log_2 3-\frac{m}{n}|\geq 1$ we have $|1-2^{n(\log_2 3-\frac{m}{n})}|$ greater than some fixed constant $d$, so $|2^m-3^n|&gt;2^md$ which again can be arbitrarily large.</p>
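The numerics here are easy to confirm with exact integer arithmetic (the search bound is my own choice). Note that $m=22$ also answers question 1, since $3^{14}-2^{22}=588665&lt;10^6$:

```python
best = {}
for m in range(1, 200):
    # The closest power of 3 has exponent near m * log(2)/log(3).
    n = round(m * 0.6309297535714574)
    for cand in (n - 1, n, n + 1):
        if cand >= 1:
            gap = abs(2**m - 3**cand)
            best[m] = min(best.get(m, gap), gap)

# 2^21 - 3^13 = 502829 as in the answer above.
check = 2**21 - 3**13

# Exponents m > 20 whose gap falls below 10^6; in this range, only 21 and 22.
small = sorted(m for m, g in best.items() if m > 20 and g < 10**6)
```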
2,791,863
<p>We need to calculate the limit $$ \lim _{n\rightarrow \infty}((4^n+3)^{1/n}-(3^n+4)^{1/n})^{n3^n} $$</p> <p>I have tried taking the logarithm, but the limit doesn't seem to arrive at any familiar form.</p>
Alex
38,873
<p>I think the limit should be $e^{-12}$.</p> <p>First, after you rewrite the expression within the brackets, you get $$ 4(1+\frac{3}{4^n})^{\frac{1}{n}} - 3(1+\frac{4}{3^n})^{\frac{1}{n}} $$ Both expressions within the two brackets can be expanded using the generalized binomial theorem, and the leading terms will be $$ 4(1+\frac{3}{n 4^n}) - 3(1+\frac{4}{n 3^n}) + o\Big(\frac{1}{n 3^n}\Big) = 1 - \frac{12}{n 3^n} + O\Big(\frac{1}{n4^n}\Big) $$</p> <p>so the limit becomes $$ \bigg(1 - \frac{12}{n 3^n} \bigg)^{n 3^n} \to_n e^{-12} $$</p> <p>EDIT: </p> <p>because $O\Big(\frac{1}{n4^n}\Big)$ doesn't affect the convergence rate. To see that, consider \begin{align} \Big(1+\frac{1}{n} +\frac{1}{n^2} \Big)^n &amp;= \Big(\Big(1+\frac{1}{n}\Big)\Big(1+\frac{\frac{1}{n^2}}{\frac{n+1}{n}}\Big)\Big)^n\\ &amp;= \Big(1+\frac{1}{n}\Big)^n\Big(1+\binom{n}{1}\frac{1}{n(n+1)} +o(1)\Big)\\ &amp; = \Big(1+\frac{1}{n}\Big)^n\Big(1+o(1)\Big) \to_n e \end{align}</p>
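A numeric check of the last step (the values of $n$ are my own): $\big(1 - \frac{12}{n3^n}\big)^{n3^n}$ approaches $e^{-12}$ quickly.

```python
import math

def bracket(n):
    N = n * 3**n
    # exp(N * log1p(-12/N)) is numerically safer than direct powering.
    return math.exp(N * math.log1p(-12.0 / N))

target = math.exp(-12.0)
rel_errs = [abs(bracket(n) - target) / target for n in (4, 6, 8, 10)]
```

The relative errors shrink roughly like $\frac{72}{n3^n}$, consistent with the expansion above.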
2,438,111
<p>I hope my title somehow encapsulates my problem.</p> <p>Let's say we have a 1-D Grid with the values 2,1,5,8,1,1. Imagine those values are of some physical quantity $\alpha$. The mean of this would be $(2+1+5+8+1+1)/6 = 3$</p> <p>Now let's say we have some function $f(x) = x^2$, which computes another quantity $\beta$ out of the initial ones. When we put in our mean it yields $f(3) = 9$. So one could think that the total $\beta$ would be $6 \times 9=54$.</p> <p>Now let's compute $\beta$ directly for each grid element and sum it up to get the total amount. $\beta_{total} = 4+1+25+64+1+1=96$</p> <p>$96 \neq 54$.</p> <p>Intuitively, I'd say 96 is the right result, but I'm kind of at a loss why the mean times the number of values fails.</p>
Apurv Anand
428,144
<p>The amplitude of $A \cos x + B \sin x$ is $\sqrt{A^2+B^2}$.<br> You can check this easily by differentiating $f(x) = A \cos x + B \sin x$: setting $f'(x) = -A\sin x + B\cos x = 0$ gives, at the maximum, $\sin x = \frac{B}{\sqrt{A^2+B^2}}$ and $\cos x = \frac{A}{\sqrt{A^2+B^2}}$, so the maximum value of $f(x)$ is $\sqrt{A^2+B^2}$.<br> So here the amplitude is $2$.</p>
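A quick numeric confirmation (the coefficient pairs are my own): the maximum of $A\cos x + B\sin x$ over one period agrees with $\sqrt{A^2+B^2}$.

```python
import math

def grid_max(A, B, steps=200_000):
    """Maximum of A*cos(x) + B*sin(x) over one period, by dense sampling."""
    return max(A * math.cos(2 * math.pi * k / steps) +
               B * math.sin(2 * math.pi * k / steps) for k in range(steps))

pairs = [(3.0, 4.0), (1.0, 1.0), (2.0, -5.0)]
results = [(grid_max(A, B), math.hypot(A, B)) for A, B in pairs]
```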
2,322,646
<p>Let $f$ and $\varphi$ be continuous real valued functions on $\mathbb{R}$. Suppose $\varphi(x)=0$ for $|x|&gt;5$ and that $\int_{\mathbb{R}}\varphi(x)\mathbb{d}x=1$. Show that $$\lim_{h\to 0}\left[\frac{1}{h}\int_{\mathbb{R}}f(x-y)\varphi\left(\frac{y}{h}\right)\mathbb{d}y\right]=f(x).$$ I don't know how to proceed. Please help.</p>
José Carlos Santos
446,262
<p>If you do the substitution $y=ht$ and $dy=h\,dt$, then you get$$\frac1h\int_{\mathbb R}f(x-y)\varphi\left(\frac yh\right)\,dy=\int_{\mathbb R}f(x-ht)\varphi(t)\,dt$$On the other hand,\begin{align*}\int_{\mathbb R}f(x-ht)\varphi(t)\,dt-f(x)&amp;=\int_{\mathbb R}\bigl(f(x-ht)-f(x)\bigr)\varphi(t)\,dt\\&amp;=\int_{-5}^5\bigl(f(x-ht)-f(x)\bigr)\varphi(t)\,dt.\end{align*}In order to prove that the limit of this integral at $0$ is $0$, it will be enough to use the fact that $f$ is uniformly continuous on some interval larger than $[-5,5]$ (take $[-6,6]$, for instance), together with the fact that $\int_{\mathbb R}\varphi=1$.</p>
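A numeric illustration of the statement (the choices of $f$, $\varphi$, and quadrature are mine): with a triangular $\varphi$ supported on $[-5,5]$ and $\int_{\mathbb R}\varphi = 1$, the convolution $\frac1h\int f(x-y)\,\varphi(y/h)\,dy$ approaches $f(x)$ as $h\to0$.

```python
import math

def phi(t):
    """Triangular bump supported on [-5, 5] with total integral 1."""
    return max(0.0, 1.0 - abs(t) / 5.0) / 5.0

def f(x):
    return math.cos(x)

def smoothed(x, h, steps=20_000):
    """Midpoint rule for (1/h) * integral over [-5h, 5h] of f(x-y) phi(y/h) dy."""
    width = 10.0 * h
    total = 0.0
    for k in range(steps):
        y = -5.0 * h + width * (k + 0.5) / steps
        total += f(x - y) * phi(y / h)
    return total * width / steps / h

x0 = 0.7
errs = [abs(smoothed(x0, h) - f(x0)) for h in (1.0, 0.1, 0.01)]
```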
738,083
<blockquote> <p>Show that if two random variables X and Y are equal almost surely, then they have the same distribution. Show that the reverse direction is not correct.</p> </blockquote> <p>If $2$ r.v.s are equal a.s., can we write $\mathbb P((X\in B)\triangle (Y\in B))=0$? (How to write this better?)</p> <p>Then</p> <p>$\mathbb P(X\in B)-\mathbb P(Y\in B)=\mathbb P(X\in B \setminus Y\in B)\le \mathbb P((X\in B)\triangle (Y\in B))=0$</p> <p>$\Longrightarrow \mathbb P(X\in B)=\mathbb P(Y\in B)$</p> <p>But the other direction makes no sense to me; I don't know how this can be true.</p>
BCLC
140,308
<p>Consider flipping a fair coin. Then $1_H$ and $1_T$ have the same distribution.</p> <p>However, they are more than just not equal almost surely ($P(X \ne Y)&gt;0$): they are almost surely not equal ($P(X \ne Y)=1$)!</p>
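The coin example can be checked exhaustively (the encoding is my own):

```python
# Sample space of one fair coin flip, with the two indicator variables.
outcomes = ["H", "T"]

def X(w):            # indicator of heads
    return 1 if w == "H" else 0

def Y(w):            # indicator of tails
    return 1 if w == "T" else 0

# Same distribution: each takes the value 1 on exactly one of two
# equally likely outcomes ...
dist_X = sorted(X(w) for w in outcomes)
dist_Y = sorted(Y(w) for w in outcomes)

# ... yet they differ on *every* outcome, so P(X != Y) = 1.
always_different = all(X(w) != Y(w) for w in outcomes)
```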
2,007,224
<p>Analysis problem:</p> <p><strong>Let $f$ and $g$ be differentiable on $ \mathbb R$. Suppose that $f(0)=g(0)$ and that $f'(x)$ is less than or equal to $g'(x)$ for all $x$ greater than or equal to $0$. Show that $f(x)$ is less than or equal to $g(x)$ for all $x$ greater than or equal to $0$.</strong></p> <p>Is my proof correct?</p> <p>I am trying to use the Generalized Mean Value Theorem:</p> <p><a href="https://i.stack.imgur.com/SOfvH.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SOfvH.jpg" alt="Generalized Mean Value Theorem"></a></p> <p>As $f$ and $g$ are differentiable on $ \mathbb R$, $f$ and $g$ are continuous on $ \mathbb R$ and we can use the Generalized Mean Value Theorem. Using the starting condition $f(0)=g(0)$, we have that for any $b$ that is greater than $0$, there exists a $c$ element of $(0,b)$ such that</p> <p>$f'(c) g(b) = g'(c) f(b)$</p> <p>By the starting conditions,</p> <p>$f'(x)$ is less than or equal to $g'(x)$ for all $x$ greater than or equal to $0$</p> <p>Therefore, $f(b)$ is less than or equal to $g(b)$ for any $b$ element of $(0, b)$</p> <p>As $b$ is any number bigger than $0$,</p> <p><strong>$f(x)$ is less than or equal to $g(x)$ for any $x$ greater than or equal to $0$. Q.E.D.</strong></p>
Martin Argerami
22,857
<p>I don't really see how you conclude from $ f'(c)g(b)=g'(c)f(b) $ and $f'(c)\leq g'(c)$ that $f(b)\leq g(b)$. For instance, $0&lt;2$ and $0\times (-1) = 2\times 0$, but you cannot conclude that $0\leq-1$. </p> <p>As mentioned by dxiv, a proof can be achieved by using the Mean Value Theorem, applied to the function $g-f$: given any $x\geq0$, there exists $c$ with $0\leq c\leq x$ and $$g(x)-f(x)=g(x)-f(x)-(g(0)-f(0))=(g'(c)-f'(c))\,(x-0)\geq0.$$</p> <p>It is also worth mentioning that the "direct way": $$ f(x)=\int_0^xf'(t)\,dt\leq\int_0^xg'(t)\,dt=g(x) $$ doesn't work in general, because it is not true in general that $f(x)=\int_0^xf'(t)\,dt$; this requires $f$ to be <a href="https://en.wikipedia.org/wiki/Absolute_continuity" rel="nofollow noreferrer">absolutely continuous</a> (see also <a href="https://math.stackexchange.com/questions/191268/absolutely-continuous-functions">here</a> and <a href="https://math.stackexchange.com/questions/292275/discontinuous-derivative">here</a>).</p>
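A concrete spot check of the correct statement (the pair of functions is my choice): $f(x)=\sin x$ and $g(x)=x$ satisfy $f(0)=g(0)=0$ and $f'(x)=\cos x\le 1=g'(x)$, so the result predicts $\sin x \le x$ for $x\ge 0$.

```python
import math

def f(x):
    return math.sin(x)

def g(x):
    return x

start_equal = (f(0.0) == g(0.0))

# g - f should be nonnegative on a grid of [0, 20]; it vanishes at x = 0.
min_gap = min(g(k / 100.0) - f(k / 100.0) for k in range(0, 2001))
```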
72,613
<p>Given a list or string, how do I get a list of all (contiguous) sublists/substrings? The order is not important.</p> <p>Example for lists:</p> <pre><code>list = {1, 2, 3}; sublists[list] (* {{}, {}, {}, {}, {1}, {2}, {3}, {1, 2}, {2, 3}, {1, 2, 3}} *) </code></pre> <p>Example for strings:</p> <pre><code>string = "abc"; substrings[string] (* {"", "", "", "", "a", "b", "c", "ab", "bc", "abc"} *) </code></pre>
SquareOne
19,960
<p>For <strong>strings</strong>, there is also :</p> <pre><code>string = "abcd"; StringCases[string, _ ~~ LetterCharacter ..., Overlaps -&gt; All] </code></pre> <p>or rather (as @Kuba greatly suggested)</p> <pre><code>StringCases[string, __, Overlaps -&gt; All] </code></pre> <blockquote> <p>{abcd, abc, ab, a, bcd, bc, b, cd, c, d}</p> </blockquote> <p><em>(5 <code>""</code> are missing but that's not the point here)</em> </p> <p>It seems almost as fast as <code>substrings1</code> according to my tests.</p>
432,811
<p>I'm trying to solve $$\operatorname{Arg}(z-2) - \operatorname{Arg}(z+2) = \frac{\pi}{6}$$ for $z \in \mathbb{C}$.</p> <p>I know that $$\operatorname{Arg} z_1 - \operatorname{Arg} z_2 = \operatorname{Arg} \frac{z_1}{z_2},$$ but that's only valid when $\operatorname{Arg} z_1 - \operatorname{Arg} z_2 \in (-\pi,\pi]$, so I'm not sure how to even begin solving this.</p> <p>I'm not familiar with modular arithmetic so if it is possible to solve this without using it then that would be great! (not that I know whether it is required to solve this in the first place)</p> <p>Thank you in advance.</p>
dfeuer
17,596
<p>Think about the geometric significance of the difference between the arguments of two complex numbers. Then think about where in the plane $z-2$ and $z+2$ must lie to satisfy your equation.</p>
432,811
<p>I'm trying to solve $$\operatorname{Arg}(z-2) - \operatorname{Arg}(z+2) = \frac{\pi}{6}$$ for $z \in \mathbb{C}$.</p> <p>I know that $$\operatorname{Arg} z_1 - \operatorname{Arg} z_2 = \operatorname{Arg} \frac{z_1}{z_2},$$ but that's only valid when $\operatorname{Arg} z_1 - \operatorname{Arg} z_2 \in (-\pi,\pi]$, so I'm not sure how to even begin solving this.</p> <p>I'm not familiar with modular arithmetic so if it is possible to solve this without using it then that would be great! (not that I know whether it is required to solve this in the first place)</p> <p>Thank you in advance.</p>
Mark Bennet
2,906
<p>Here is a way of proceeding which depends on special features of the particular problem, so is not really general.</p> <p>Construct an equilateral triangle on the line segment between $z-2$ and $z+2$ choosing the one in which the third vertex $V$ is nearest to the origin. Then, given the angle subtended at the origin, $V$ is at the centre of a circle passing through the three points $0$, $z-2$ and $z+2$ (angle at centre is twice angle at circumference).</p> <p>The radius of the circle is 4 (since it forms the side of an equilateral triangle with a segment of length 4). So one has the point $V$ on the circle radius 4 centre origin. The midpoint of the segment (so the point $z$) is either vertically above or vertically below this (case depends on which side of the $x$-axis we are), and all that is needed is to calculate the co-ordinates using the simple geometry of the equilateral triangle.</p> <p>Now take care to identify the sign of the difference in angles so that the difference comes out as $\frac{\pi}6$ rather than $-\frac{\pi}6$, and avoid cases where the angle becomes $2\pi \pm \frac {\pi}6$.</p>
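A numeric check of the resulting locus (the parametrization and centre below are my own reconstruction from inscribed-angle reasoning, so treat them as an assumption rather than part of the answer): points strictly inside the major arc of the circle of radius $4$ centred at $2\sqrt3\,i$, whose endpoints are $\pm2$, satisfy the equation.

```python
import cmath
import math

# Assumed locus: major arc of the circle of radius 4 centred at 2*sqrt(3)*i.
center = 2 * math.sqrt(3) * 1j
R = 4.0

def lhs(z):
    return cmath.phase(z - 2) - cmath.phase(z + 2)

# The endpoints -2 and 2 sit at parameter angles 4*pi/3 and -pi/3;
# sample strictly between them along the major arc.
gaps = []
for k in range(1, 60):
    t = -math.pi / 3 + (5 * math.pi / 3) * k / 60
    z = center + R * cmath.exp(1j * t)
    gaps.append(abs(lhs(z) - math.pi / 6))
max_gap = max(gaps)

# A point off the arc does not satisfy the equation.
off_arc = abs(lhs(1j) - math.pi / 6)
```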
3,491,867
<p>I'm working on an integral used to illustrate <span class="math-container">$\pi &gt; \frac{22}{7}$</span> and I'm stuck on finding the name of a theorem for the following:</p> <p>Let <span class="math-container">$f(x)$</span> be a continuous Real Valued function on the interval <span class="math-container">$[a,b]$</span> (where <span class="math-container">$a$</span>, <span class="math-container">$b$</span> can be finite or infinite). If <span class="math-container">$f(x) \geq 0$</span> on <span class="math-container">$[a,b]$</span> then <span class="math-container">$$ \int_a^b f(x) \:dx \geq 0 $$</span></p> <p>Does anyone know what the name of this theorem is?</p>
Stefan Perko
166,694
<p>A theorem stating exactly this property has (in all likelihood) no name since this is neither a deep result nor a property specific to integrals.</p> <p>However, given an ordered real vector space <span class="math-container">$(V,\leq)$</span> and a functional (linear map) <span class="math-container">$F : V\to \mathbb R$</span> we say <span class="math-container">$F$</span> is <a href="https://en.wikipedia.org/wiki/Positive_linear_functional" rel="nofollow noreferrer">positive</a> if <span class="math-container">$v\geq 0$</span> implies <span class="math-container">$F(v)\geq 0$</span> for all <span class="math-container">$v\in V$</span>. Note that we can define functionals of the form</p> <p><span class="math-container">$$\int_a^b : A\to \mathbb R, f\mapsto \int_a^b f(x)\,dx,$$</span></p> <p>for any vector space <span class="math-container">$A$</span> of integrable functions <span class="math-container">$[a,b]\to \mathbb R$</span>.</p> <p>So it would be most reasonable to call this property <em>positivity of the integral</em>.</p>
542,391
<p>I understand the processes of putting a matrix into Jordan normal form and forming the transformation matrix associated to "diagonalizing" the matrix. So here's my question:</p> <p>Why is it that when you have an eigenvalue x=0 with algebraic multiplicity greater than 1, that you don't put a 1 in the superdiagonal of the JNF matrix but when the eigenvalue is non-zero and satisfies the same properties, we put a 1 in the superdiagonal of the Jordan normal form?</p> <p>My professor posted solutions to an assignment involving finding a matrix exponential, but the JNF of a matrix had eigenvalue x=0 with algebraic multiplicity of 3,yet had no entries of 1 along the superdiagonal.</p> <p>In advance, I would like to thank you for your help.</p>
Marc van Leeuwen
18,880
<p>All Jordan <em>blocks</em> do have their entries on the super-diagonal (if any) equal to$~1$, whether the eigenvalue of the block is$~0$ or not. What confuses you is that one can have multiple Jordan blocks for the same eigenvalue; then between adjacent Jordan blocks for the same $\lambda$ there is a super-diagonal entry that is not part of any Jordan block at all, and which therefore is$~0$. Don't make the error of thinking that taking these blocks together forms a new, larger, Jordan block with (exceptionally?) some entries$~0$ on the super-diagonal; they don't. If all Jordan blocks have size$~1$, as is the case in the example, then none have any entries on the super-diagonal; therefore the entire super-diagonal will be zero in this case (and the Jordan form is a diagonal matrix).</p>
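<p>A small concrete illustration (the matrices below are my own example, not taken from the question): the nilpotent matrix $A$ has eigenvalue $0$ with algebraic multiplicity $3$ and two Jordan blocks, of sizes $2$ and $1$; the super-diagonal entry of $J$ inside the $2\times2$ block is $1$, while the super-diagonal entry between the two blocks is $0$.</p>

```python
# Pure-Python check that A P = P J for an invertible P, i.e. A is similar
# to a Jordan form J with a 2x2 and a 1x1 block for eigenvalue 0.
def matmul(X, Y):
    n, m, p = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[0, 1, 1],
     [0, 0, 0],
     [0, 0, 0]]

# J[0][1] = 1 lies inside the 2x2 Jordan block; J[1][2] = 0 sits *between*
# the 2x2 block and the 1x1 block.
J = [[0, 1, 0],
     [0, 0, 0],
     [0, 0, 0]]

# Jordan basis (columns): the chain A v, v with v = (0,1,0), plus an extra
# kernel vector (0,1,-1).
P = [[1, 0, 0],
     [0, 1, 1],
     [0, 0, -1]]

print(matmul(A, P) == matmul(P, J))
```
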
3,269,861
<p>I have to determine a direct sum of cyclic groups to which <span class="math-container">$(\mathbb{Z}/200\mathbb{Z})^\times$</span> is isomorphic. For general <span class="math-container">$n\in\mathbb{Z}_{\geq2}$</span>, we can write <span class="math-container">$n=p_1^{e_1}\cdot\dots \cdot p_k^{e_k}$</span> for pairwise distinct prime numbers <span class="math-container">$p_i$</span> and positive exponents <span class="math-container">$e_i$</span>. And from the Chinese remainder theorem we can conclude that <span class="math-container">$(\mathbb{Z}/n\mathbb{Z})^\times\cong(\mathbb{Z}/p_1^{e_1}\mathbb{Z})^\times\oplus\dotsb\oplus(\mathbb{Z}/p_k^{e_k}\mathbb{Z})^\times$</span>. I learned that <span class="math-container">$(\mathbb{Z}/2^e\mathbb{Z})^\times$</span> is not cyclic for <span class="math-container">$e\geq3$</span>, but <span class="math-container">$200=2^3\cdot5^2$</span>, so that's where I get stuck.</p>
Dr. Sonnhard Graubner
175,066
<p>From your formula we get <span class="math-container">$$\frac{n(2a_1+(n-1)d)}{2}=n^2+3n$$</span> for <span class="math-container">$$n\neq 0$$</span> we get <span class="math-container">$$2a_1=2n+6-(n-1)d$$</span> so...?</p>
3,269,861
<p>I have to determine a direct sum of cyclic groups to which <span class="math-container">$(\mathbb{Z}/200\mathbb{Z})^\times$</span> is isomorphic. For general <span class="math-container">$n\in\mathbb{Z}_{\geq2}$</span>, we can write <span class="math-container">$n=p_1^{e_1}\cdot\dots \cdot p_k^{e_k}$</span> for pairwise distinct prime numbers <span class="math-container">$p_i$</span> and positive exponents <span class="math-container">$e_i$</span>. And from the Chinese remainder theorem we can conclude that <span class="math-container">$(\mathbb{Z}/n\mathbb{Z})^\times\cong(\mathbb{Z}/p_1^{e_1}\mathbb{Z})^\times\oplus\dotsb\oplus(\mathbb{Z}/p_k^{e_k}\mathbb{Z})^\times$</span>. I learned that <span class="math-container">$(\mathbb{Z}/2^e\mathbb{Z})^\times$</span> is not cyclic for <span class="math-container">$e\geq3$</span>, but <span class="math-container">$200=2^3\cdot5^2$</span>, so that's where I get stuck.</p>
lulu
252,071
<p>Taking <span class="math-container">$n=1$</span> we see that <span class="math-container">$S_1=a_1=4$</span>.</p>
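<p>A quick numerical check (assuming, as in the equation appearing in the other answer, that the partial sums are $S_n=n^2+3n$): combining $a_1=4$ with $d=2$ gives $a_n=2n+2$, which reproduces every partial sum.</p>

```python
# Check that a_1 = 4, d = 2 (i.e. a_n = 2n + 2) reproduces S_n = n^2 + 3n.
def a(n):
    return 4 + 2 * (n - 1)  # arithmetic sequence with a_1 = 4, d = 2

ok = all(sum(a(k) for k in range(1, n + 1)) == n**2 + 3 * n
         for n in range(1, 101))
print(ok)
```
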
4,002,458
<p>I'm a geometry student. Recently we were doing all kinds of crazy circle stuff, and it occurred to me that I don't know why <span class="math-container">$\pi r^2$</span> is the area of a circle. I mean, how do I <em>really</em> know that's true, aside from just taking my teachers + books at their word?</p> <p>So I tried to derive the formula myself. My strategy was to fill a circle with little squares. But I couldn't figure out how to generate successively smaller squares in the right spots. So instead I decided to graph just one quadrant of the circle (since all four quadrants are identical, I can get the area of the easy +x, +y quadrant and multiply the result by 4 at the end) and put little rectangles along the curve of the circle. The more rectangles I put, the closer I get to the correct area. If you graph it out, my idea looks like this:</p> <p><a href="https://i.stack.imgur.com/5JMSb.png" rel="noreferrer"><img src="https://i.stack.imgur.com/5JMSb.png" alt="Approximating circle area using rectangles" /></a></p> <p>Okay, so to try this in practice I used a Python script (less tedious):</p> <pre><code>from math import sqrt, pi

# Approximate the area of the top-right quadrant of the circle by summing
# thin rectangles under the curve, then multiply by 4 for the full circle.

# Based on the Pythagorean circle relation r**2 = x**2 + y**2
def circle_y(radius, x):
    return sqrt(radius**2 - x**2)

def circleAreaApprox(radius, rectangles):
    area_approx = 0
    little_rectangles_width = 1 / rectangles * radius
    for i in range(rectangles):
        x = radius / rectangles * i
        little_rectangle_height = circle_y(radius, x)
        area_approx += little_rectangle_height * little_rectangles_width
    return area_approx * 4
</code></pre> <p>This works. 
The more rectangles I put, the error in my estimate goes down and down:</p> <pre><code>for i in range(3):
    rectangles = 6 * 10 ** i
    delta = circleAreaApprox(1, rectangles) - pi  # For a unit circle area: pi * 1 ** 2 == pi
    print(delta)
</code></pre> <h3>Output</h3> <pre><code>0.25372370203838557
0.030804314363409357
0.0032533219749364406
</code></pre> <p>Even if you test with big numbers, it just gets closer and closer forever. Infinitely small rectangles <code>circleAreaApprox(1, infinity)</code> is presumably the true area. But I can't calculate that, because I'd have to loop forever, and that's too much time. How do I calculate the 'limit' of a for loop?</p> <p>Ideally, in an intuitive way. I want to reduce the magic and really understand this, not 'solve' this by piling on more magic techniques (like the <span class="math-container">$\pi \times radius^2$</span> formula that made me curious in the first place).</p> <p>Thanks!</p>
Ethan Bolker
72,858
<p>This is an excellent question. You are following in Archimedes' footsteps and starting to invent integral calculus and the idea of a limit.</p> <p>I will try to address (briefly!) the mathematical and philosophical issues here, not the programming question.</p> <p>You are right to worry about a process that has to go on forever. The way mathematicians deal with that question is to replace the infinitely many operations it would take to &quot;reach a limit&quot; by infinitely many inequalities any one of which can be justified in a predictable finite number of steps. If in your picture you calculate the total area of the inscribed slices just as you have calculated the area of the circumscribed ones you can show (with logic, not a program) that the difference between those two areas is as small as you please as long as you are willing to use thin enough rectangles. Then you can argue (though it's not easy) that there is just one number less than all the overestimates and greater than all the underestimates. For a circle of radius <span class="math-container">$1$</span> we call that number <span class="math-container">$\pi$</span>.</p> <p>The next job is to over and underestimate the circumference of the unit circle with the same kind of argument, using circumscribed and inscribed polygons. There too you can show that they tell you a number for the circumference.</p> <p>The final step is to show that number is exactly twice the <span class="math-container">$\pi$</span> that you found for the area.</p> <p>For a circle of radius <span class="math-container">$r$</span> the circumference will be <span class="math-container">$r$</span> times as large, so <span class="math-container">$2\pi r$</span>, and the area will be <span class="math-container">$r^2$</span> times as large, so <span class="math-container">$\pi r^2$</span>. (Carefully proving those proportionalities for curved shapes like circles requires estimations and limits.)</p>
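<p>The bracketing idea can be tried numerically (a sketch building on the questioner's own setup): for the decreasing function $\sqrt{1-x^2}$ on $[0,1]$, left-endpoint rectangles circumscribe the quarter circle and right-endpoint rectangles inscribe it, and their difference is exactly one step width, so the two estimates squeeze together.</p>

```python
# Trap pi between circumscribed (left-endpoint) and inscribed
# (right-endpoint) rectangle sums for a quarter of the unit circle.
from math import sqrt, pi

def quarter_sums(n):
    """Return (lower, upper) Riemann sums for sqrt(1 - x^2) on [0, 1]."""
    h = 1.0 / n
    f = lambda x: sqrt(max(0.0, 1.0 - x * x))
    upper = sum(f(i * h) for i in range(n)) * h        # left endpoints
    lower = sum(f((i + 1) * h) for i in range(n)) * h  # right endpoints
    return lower, upper

for n in (10, 100, 1000):
    lo, up = quarter_sums(n)
    print(n, 4 * lo, 4 * up)   # pi is trapped between these
```

Since the function drops from $1$ to $0$, the gap between the two sums telescopes to exactly one rectangle width, $1/n$.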
363,911
<p>If a function $f:[-2,3]\to \mathbb{R}$ is defined by </p> <p>$f(x)=\begin{cases} 2|x|+1 \; ;\; \text{ if } x \in \Bbb Q \\ 0 \; ;\; \text{ if } x \notin \Bbb Q \end{cases}$ </p> <p>Prove that $f$ is not Riemann integrable.</p> <p>What I came up with:<br> $m_k=0$,$M_k=7$ </p> <p>Which implies $U(P,f)=35$ and $L(P,f)=0$, for any partition of $[-2,3]$. So the upper and lower integrals are not equal,<br> hence $f \notin {\mathscr R}[-2,3]$</p>
Did
6,179
<p>Let $g:[-2,3]\to\mathbb R$ and $h:[-2,3]\to\mathbb R$ be defined by $g(x)=0$ and $h(x)=2|x|+1$ for every $x$. Then every lower sum of $f$ is a lower sum of $g$ and every upper sum of $f$ is an upper sum of $h$ (can you show this?). </p> <p>Furthermore, $g$ and $h$ are continuous hence integrable. Thus, for every partition $P$, $$ L(P,f)=L(P,g)\leqslant\int_{-2}^3g(x)\mathrm dx=0, $$ and $$ U(P,f)=U(P,h)\geqslant\int_{-2}^3h(x)\mathrm dx=c\gt0. $$ This proves that $U(P,f)-L(P,f)\geqslant c$ for every partition $P$ hence $f$ is not Riemann integrable.</p>
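<p>A numerical illustration of this squeeze (a sketch; it relies on the density facts behind the argument: on every subinterval $\inf f=0$ because the irrationals are dense, and $\sup f$ is the endpoint maximum of $h(x)=2|x|+1$ because the rationals are dense and $h$ is convex and continuous; here $c=\int_{-2}^3 h=18$):</p>

```python
# Upper sums of f over uniform partitions of [-2, 3].
# On each subinterval: inf f = 0 (irrationals dense), and
# sup f = max of h(x) = 2|x| + 1 at the endpoints (rationals dense,
# h continuous and convex). Here int_{-2}^{3} h(x) dx = 4 + 9 + 5 = 18.
def upper_sum(n):
    h = lambda x: 2 * abs(x) + 1
    step = 5.0 / n
    total = 0.0
    for i in range(n):
        left = -2 + i * step
        right = left + step
        total += max(h(left), h(right)) * step
    return total

for n in (10, 100, 10000):
    print(n, upper_sum(n))   # decreases toward 18, never below it
```

Every lower sum is $0$, so the gap $U(P,f)-L(P,f)$ never drops below $18$.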
821,845
<p>As the title says, why are those two equivalent? I can find a simple derivation (using natural deduction) of $\bot$ from $\neg\neg\bot$, but i fail at proving the other implication.</p>
Peter Smith
35,151
<p>(a) One of the rules of inference in standard natural deduction systems for intuitionistic logic is ex falso quodlibet, i.e.</p> <blockquote> <p>From $\bot$ infer $\varphi$ for any $\varphi$.</p> </blockquote> <p>So, as a particular application, we have a one-step derivation of, in particular, $\neg\neg\bot$ from $\bot$.</p> <p>(b) As you say the other direction is also intuitionistically provable. Thus $\bot$ trivially entails $\bot$ so (by the intuitionistically acceptable version of reductio that if you can infer $\bot$ from $\psi$, then that gives you $\neg\psi$), $\neg\bot$ is a theorem. But then $\neg\neg\bot$ as an assumption, combined with this theorem (so a pair of the form $\neg\psi$, $\psi$) gives you $\bot$ (by the introduction rule for $\bot$).</p>
925,140
<p>$$f(x)=\frac { x }{ x+4 } $$</p> <p>I am not sure how to go about solving this but here is what I have done so far:</p> <p>$$y=\frac { x }{ x+4 } $$</p> <p>$$(x+4)y=\frac { x }{ x+4 } (x+4)$$</p> <p>$$yx+4y=x$$</p> <p>I feel stuck now. Where do I go from here?</p>
lab bhattacharjee
33,337
<p>$$y=f(x)=\frac x{x+4}$$</p> <p>$$\implies f^{-1}(y)=x$$</p> <p>Now $y=\dfrac x{x+4}$</p> <p>Assuming $x+4\ne0, 4y+xy=x\iff x=\dfrac{4y}{1-y}$</p> <p>Equate the values of $x$</p>
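<p>Equating the two values of $x$ gives $f^{-1}(y)=\dfrac{4y}{1-y}$ (for $y\ne 1$), which a quick numerical check confirms:</p>

```python
# Verify that f(x) = x / (x + 4) and f_inv(y) = 4y / (1 - y) invert each other.
def f(x):
    return x / (x + 4)

def f_inv(y):
    return 4 * y / (1 - y)

ok = all(abs(f(f_inv(y)) - y) < 1e-12 for y in (-2, -0.5, 0, 0.25, 0.9)) \
     and all(abs(f_inv(f(x)) - x) < 1e-12 for x in (-3, -1, 0, 1, 10))
print(ok)
```
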
3,575,804
<p>The question goes as follows. My attempts are below it. </p> <p>A motel has ten rooms, all located on the same side of a single corridor and numbered 1 to 10 in numerical order. The motel always randomly allocates rooms to its guests. There are no other guests besides those mentioned.</p> <p>a) Friends Molly and Polly have been allocated two separate rooms at the motel. What is the likely number of rooms between their rooms? </p> <p>b) Molly believes there is a greater than 1/3 chance that at most one room will separate them, but Polly disagrees. Who is right? Explain why.</p> <p>c) On another occasion, Molly, Polly and a third friend, Ollie, were allocated three separate rooms. Molly believes there is a better than 1/3 chance that they are all within a block of five consecutive rooms. Ollie believes that there is exactly 1/3 chance and Polly believes there is less than 1/3 chance. Who is right? Explain why. </p> <p>d) Ollie arrived after rooms were allocated to Molly and Polly. There was then a 50% chance he would be in a room adjacent to Molly or Polly or both. In how many ways could a pair of rooms have been allocated to Molly and Polly?</p> <p>Here I am bit sure of how to work through (c). Do I use permutations or combinations - could you please give me a step by step solution as I am still a bit wobbly on combinatorics.</p> <p>Also could someone please check my answer for (d) (30)?</p> <p>Thanks.</p>
Rezha Adrian Tanuharja
751,970
<p>For C:</p> <p>The number of ways to have three rooms within 5 consecutive rooms is <span class="math-container">$3!$</span> multiplied by the number of increasing triples <span class="math-container">$a&lt;b&lt;c$</span> with <span class="math-container">$c-a\le 4$</span>. For each span <span class="math-container">$s=c-a\in\{2,3,4\}$</span> there are <span class="math-container">$s-1$</span> choices for <span class="math-container">$b$</span> and <span class="math-container">$10-s$</span> choices for <span class="math-container">$a$</span>, giving <span class="math-container">$8+14+18=40$</span> triples, hence <span class="math-container">$3!\times 40=240$</span> ordered allocations. So the probability is <span class="math-container">$\frac{240}{10\times 9\times 8}=\frac{1}{3}$</span>, and Ollie is right.</p> <p>For D:</p> <p>Ollie lands adjacent to Molly or Polly with probability <span class="math-container">$\frac12$</span> exactly when <span class="math-container">$4$</span> of the <span class="math-container">$8$</span> remaining rooms are adjacent to them, which forces rooms <span class="math-container">$m&lt;p$</span> with <span class="math-container">$2\le m,p\le 9$</span> and <span class="math-container">$p-m\ge 3$</span>. Writing <span class="math-container">$A=m-2$</span>, <span class="math-container">$B=p-m-3$</span>, <span class="math-container">$C=9-p$</span>, the number of such pairs equals the number of non-negative solutions of</p> <p><span class="math-container">$$ A+B+C=4 $$</span></p> <p>which is <span class="math-container">$\binom{6}{2}=15$</span>; doubling for the two orderings gives <span class="math-container">$2 \times 15 = 30$</span>, so you are right.</p>
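<p>Both counts can be confirmed by brute force (a sketch; allocations are treated as ordered, matching the $10\times9\times8$ and the factor of $2$ above):</p>

```python
# Brute-force check of parts C and D.
from itertools import permutations
from fractions import Fraction

rooms = range(1, 11)

# Part C: ordered triples of distinct rooms all within 5 consecutive rooms,
# i.e. max - min <= 4.
triples = list(permutations(rooms, 3))
within = sum(1 for t in triples if max(t) - min(t) <= 4)
print(within, Fraction(within, len(triples)))   # 240 1/3

# Part D: ordered pairs (m, p) for Molly and Polly such that Ollie, placed
# at random in one of the 8 remaining rooms, is adjacent to Molly or Polly
# with probability exactly 1/2 (i.e. 4 of the 8 free rooms are adjacent).
def adjacent_free_rooms(m, p):
    adj = {m - 1, m + 1, p - 1, p + 1}
    return [r for r in adj if 1 <= r <= 10 and r not in (m, p)]

pairs = sum(1 for m in rooms for p in rooms
            if m != p and len(adjacent_free_rooms(m, p)) == 4)
print(pairs)   # 30
```
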
3,578,191
<p>Without tables or a calculator, find the value of <span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span>.</p> <p>I do not understand how the positive/negative signs are obtained as shown in the book; is there a formula for expanding these kind of things (what kind of expression is it, by the way?)?</p> <p><a href="https://i.stack.imgur.com/TZjZo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TZjZo.png" alt="enter image description here"></a></p> <p>This is my solution:</p> <p><span class="math-container">$\displaystyle\frac{(\sqrt5 +2)^6 - (\sqrt5 - 2)^6}{8\sqrt5}$</span></p> <p><span class="math-container">$= \displaystyle\frac{[(\sqrt5+2)^3+(\sqrt5-2)^3][(\sqrt5+2)^3-(\sqrt5-2)^3]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{(\sqrt5+2+\sqrt5-2)[(\sqrt5+2)^2\color{red}{+}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2](\sqrt5+2-\sqrt5+2)[(\sqrt5+2)^2\color{red}{-}(\sqrt5+2)(\sqrt5-2)+(\sqrt5-2)^2]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{[2\sqrt5(5+4\sqrt5+4+\color{red}{5-4}+5-4\sqrt5+4][4(5+4\sqrt5+4\color{red}{-(5-4)}+(5-4\sqrt5+4)]}{8\sqrt5}$</span></p> <p><span class="math-container">$=\displaystyle\frac{2584\sqrt5}{8\sqrt5}$</span></p> <p><span class="math-container">$=323$</span></p> <p>Because of the multiplication, I still got the same answer as given in the book. However, is the book or I correct in terms of the positive/negative signs(in red)?</p>
lab bhattacharjee
33,337
<p>Hint</p> <p><span class="math-container">$a-b=4$</span></p> <p><span class="math-container">$a+b=2\sqrt5$</span></p> <p><span class="math-container">$ab=1$</span></p> <p><span class="math-container">$a^3-b^3=(a-b)^3+3ab(a-b)=?$</span></p> <p><span class="math-container">$a^3+b^3=(a+b)(a^2-ab+b^2)=(a+b)((a-b)^2+ab)=?$</span></p>
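<p>Putting numbers to the hint (with $a=\sqrt5+2$ and $b=\sqrt5-2$, so the target expression is $\frac{a^6-b^6}{8\sqrt5}$):</p>

```python
# With a = sqrt(5)+2, b = sqrt(5)-2: a-b = 4, a+b = 2*sqrt(5), ab = 1,
# so a^3 - b^3 = 4^3 + 3*1*4 = 76 and a^3 + b^3 = 2*sqrt(5)*(4^2 + 1) = 34*sqrt(5).
# Then (a^6 - b^6)/(8*sqrt(5)) = 76 * 34*sqrt(5) / (8*sqrt(5)) = 323.
from math import sqrt, isclose

a, b = sqrt(5) + 2, sqrt(5) - 2
value = (a**6 - b**6) / (8 * sqrt(5))
print(value)   # 323, up to floating-point error

assert isclose(a - b, 4) and isclose(a * b, 1)
assert isclose(a**3 - b**3, 76)
assert isclose(a**3 + b**3, 34 * sqrt(5))
```
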
3,691,692
<p>Find all real values of a such that <span class="math-container">$x^2+(a+i)x-5i=0$</span> has at least one real solution. </p> <p><span class="math-container">$$x^2+(a+i)x-5i=0$$</span></p> <p>I have tried two ways of solving this and cannot seem to find a real solution.</p> <p>First if I just solve for <span class="math-container">$a$</span>, I get <span class="math-container">$$a=-x+i\frac{5-x}{x}$$</span> Which is a complex solution, not a real solution...</p> <p>Then I tried using the fact that <span class="math-container">$x^2+(a+i)x-5i=0$</span> is in quadratic form of <span class="math-container">$x^2+px+q=0$</span> with <span class="math-container">$p=(a+i)$</span> and <span class="math-container">$q=5i$</span></p> <p>So I transform <span class="math-container">$$x^2+(a+i)x-5i=0$$</span> to <span class="math-container">$$(x+\frac{a+i}{2})^2=(\frac{a+i}{2})^2+5i$$</span></p> <p>Now it is in the form that one side is the square of the other but I don't know how to find the roots since I'm not sure if I'm supposed to convert <span class="math-container">$(\frac{a+i}{2})^2+5i$</span> to polar form since I can't take the modulus of <span class="math-container">$(\frac{a+i}{2})^2+5i$</span> (or at least I don't know how).</p> <p>At thins point I feel like I'm just using the wrong method if anyone could guide me in the right direction I would very much appreciate it. Thank you. </p>
CopyPasteIt
432,081
<p>To continue from</p> <p><span class="math-container">$\tag 1 \left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right) = \dfrac{(ad+bc)^2}{abcd}$</span></p> <p>set </p> <p><span class="math-container">$\quad u = ad$</span></p> <p>and</p> <p><span class="math-container">$\quad v = bc$</span></p> <p>Then substituting in the rhs of <span class="math-container">$\text{(1)}$</span>, we have</p> <p><span class="math-container">$\quad \dfrac{(u+v)^2}{uv} \ge 4 \text{ iff } (u-v)^2 \ge 0$</span></p> <p>Note that if we let <span class="math-container">$a,b,c,d \in \Bbb R$</span> satisfy <span class="math-container">$abcd \gt 0$</span> then </p> <p><span class="math-container">$\quad \left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right) \ge 4$</span></p> <p>and if <span class="math-container">$abcd \lt 0$</span> then</p> <p><span class="math-container">$\quad \left(\dfrac{b}{a}+\dfrac{d}{c}\right)\cdot\left(\dfrac{a}{b}+\dfrac{c}{d}\right) \le 4$</span></p>
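<p>A randomized numerical check of the identity and of both sign cases (a sketch):</p>

```python
# Check (b/a + d/c)(a/b + c/d) = (ad + bc)^2 / (abcd), and that the product
# is >= 4 when abcd > 0 and <= 4 when abcd < 0.
import random

random.seed(0)

def lhs(a, b, c, d):
    return (b / a + d / c) * (a / b + c / d)

def rhs(a, b, c, d):
    return (a * d + b * c) ** 2 / (a * b * c * d)

for _ in range(10000):
    a, b, c, d = (random.uniform(-5, 5) for _ in range(4))
    if min(abs(a), abs(b), abs(c), abs(d)) < 1e-3:
        continue  # avoid division blow-ups
    tol = 1e-6 * max(1.0, abs(rhs(a, b, c, d)))
    assert abs(lhs(a, b, c, d) - rhs(a, b, c, d)) < tol
    if a * b * c * d > 0:
        assert lhs(a, b, c, d) >= 4 - 1e-9
    else:
        assert lhs(a, b, c, d) <= 4 + 1e-9

print("all checks passed")
```
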
2,256,534
<p>As I just started learning the different rules of differentiation, I have some burning question marks in my head as such in the picture . I'm required to differentiate the following with respect to $x$.</p> <blockquote> <p><strong>1)</strong> $$\frac{2x^2+4x}{x}$$ <strong>2)</strong> $$\frac{(1-x)(x-2)}{x}$$</p> </blockquote> <p>For 1. It is easy to bring the $x$ up and then differentiate from there . </p> <p>For 2. Once I bring the $x$ up , I'm stuck as I can't differentiate it from $x^{-1}((1-x)(x-2))$ I was told not to change the question by expanding $(1-x)(x-2)$ .. So is there any way that I can use the rule of addition and subtraction of function to solve this ? </p>
Sri-Amirthan Theivendran
302,692
<p>If you're only allowed to use the sum rule, there is no real way of getting around expanding the product. Note that $$ f(x) \colon=\frac{(1-x)(x-2)}{x} =\left(1-x\right)\left(1-\frac{2}{x}\right) =3-\frac{2}{x}-x. $$ So $$ f'(x)=\frac{2}{x^2}-1. $$</p>
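<p>A quick finite-difference check of the result (a sketch):</p>

```python
# Verify f'(x) = 2/x^2 - 1 for f(x) = (1 - x)(x - 2)/x via central differences.
def f(x):
    return (1 - x) * (x - 2) / x

def fprime(x):
    return 2 / x**2 - 1

h = 1e-6
max_err = max(abs((f(x + h) - f(x - h)) / (2 * h) - fprime(x))
              for x in (-3.0, -1.0, 0.5, 1.0, 2.0, 7.0))
print(max_err)   # tiny
```
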
2,943,790
<p>A function is said to be <em>continuous at zero</em> iff:</p> <p><span class="math-container">$\lim_{x \rightarrow 0}{f(x)} = f(0)$</span></p> <p>Could this be the same as saying:</p> <ul> <li>Let <span class="math-container">$\Delta$</span> = <em>the smallest open set containing zero</em></li> <li><span class="math-container">$f(x) = f(0), \forall x \in \Delta$</span></li> </ul> <p>Am I misunderstanding what a limit is, or are these two definitions equivalent?</p> <p>Edit: I've got a few responses saying that there is no such set as <span class="math-container">$\Delta$</span>. I agree. However, it seems to me that the expression:</p> <p><span class="math-container">$\lim_{x \rightarrow 0}{f(x)} = f(0)$</span></p> <p>is definitively claiming to evaluate <span class="math-container">$f$</span> at <em>some</em> unspecified value or values <span class="math-container">$x \neq 0$</span> but not claiming any particular values. To be specific, for any particular value you could name, the claim that the limit exists does <em>not</em> claim it needs to evaluate f at that point. So, what exactly is it claiming? </p>
qualcuno
362,866
<p>As others have commented, no smaller set exist (for a standard usage of smaller, at least). In terms of open sets, what you can say is that <span class="math-container">$f$</span> is continuous at <span class="math-container">$p$</span> if for every open set <span class="math-container">$V$</span> containing <span class="math-container">$f(0)$</span>, there is an open set <span class="math-container">$U$</span> containing <span class="math-container">$0$</span> such that <span class="math-container">$f(U) \subseteq V$</span>. This allows you to take arbitrarily small intervals, in particular, which approximates your intuitive idea of taking the 'smallest' such set. </p> <p>Many concepts in analysis, in particular, use this same thought process: defining notions by making sense of approximations up to arbitrary precision.</p>
1,652,758
<p>the question (not homework) I am trying to answer is, in part:</p> <blockquote> <p><em>Let $f$ be an analytic function that maps the open unit disk $D$ into itself and vanishes at the origin. Prove that the inequality $$|f(z)| + |f(−z)| ≤ 2 |z^2| $$ is strict, except at the origin, unless f has the form $f(z) = λz^2$ for some $λ$ a constant of absolute value one</em>.</p> </blockquote> <p>I applied Schwarz' lemma to obtain the inequality. Below is my answer:</p> <blockquote> <p><em>It is clear that the inequality holds at the origin. The hypotheses given for $f$ are the same as those required for Schwarz' lemma to apply to $f$; the lemma clearly applies to both $f(z)$ and $f(-z)$. Thus I have $$|f(z) + f(-z)| \leq |f(z)| + |f(-z)| \leq |z| + |z| = 2|z|$$ Divide both sides by $|2z|$ (I have assumed $z\neq 0$): $$\frac{|f(z) + f(-z)|}{|2z|} \leq 1$$ This fact shows that the function $(f(z) + f(-z))/ 2z$ has a removable singularity at $z = 0$ (since it is bounded in a neighbourhood of that point). Calling the analytic continuation $g(z)$, $g$ is a holomorphic map from $D$ to $D$ and vanishes at the origin; to see why, expand $f(z) + f(-z)$ into the sum of two power series, note that the first two terms vanish, and conclude that $g$ has a zero of order at least one at the origin. So Schwarz' lemma applies to $g(z)$, and in particular $|g(z)| \leq |z|$. But this fact directly implies the desired inequality.</em></p> </blockquote> <p><strong>The problem is</strong> that I have come most of the way to proving the strict inequality, but cannot prove that the given form is the only possible form for $f(z)$. I proved that if $|f(c) + f(-c)| = 2|c^2|$ for some $c$ not the origin, then the constructed function $g(z)$ is a rotation by Schwarz' lemma, which means that $f(z) + f(-z) = \lambda z^2$. This means that all even-index coefficients of the power series of $f$ must be zero, but it does <em>not</em> rule out the possibility that there are odd-index coefficients. 
None of the standard tricks like Cauchy inequalities work since the domain is the unit disc. I also tried looking at the other implications of Schwarz (i.e., that $|f'(0)| &lt; 1$) and that, too, led nowhere. What am I missing here?</p>
Community
-1
<p>Apply the Schwarz Lemma to $$ g(z)=\frac{f(z)+f(-z)}{2z^2} $$ to deduce that $|g|\le 1$. If equality holds somewhere on $|z|&lt;1$, then $g$ is constant, by the Schwarz Lemma or the maximum principle, so $$ f(z)+f(-z)=2\lambda z^2 $$ for some $|\lambda|=1$. This says that $f(z)=\lambda z^2 + h(z)$, with $h$ odd. We want to show that $h\equiv 0$, and this follows by considering points near $|z|=1$: Fix $\alpha$ and consider $$ f(\pm re^{i\alpha/2}) = \lambda r^2 e^{i\alpha} \pm h(re^{i\alpha/2}) . $$ The first term approaches a limit of absolute value $1$ as $r\to 1-$, so we must have that $\lim_{r\to 1-} h(re^{i\beta})=0$ for all $\beta$, or $f$ would not take values inside the unit disk. A bounded holomorphic function with (radial) boundary value zero is well known to be identically zero.</p> <p>A more elementary argument is also possible in this last step: the convergence is uniform in $\beta$, so the maximum principle also shows that $h\equiv 0$.</p>
387,505
<p>Let <span class="math-container">$f$</span> be a non-invertible bounded outer function on the unit disk. Does <span class="math-container">$f$</span> has radial limit <span class="math-container">$0$</span> somewhere? Note that such a property holds for singular inner functions.</p>
ray
61,993
<p>It is worth to mention that there exists a special class of outer functions where the answer is positive: let <span class="math-container">$u$</span> be a non-constant inner function and <span class="math-container">$|\alpha|=1$</span>. Then <span class="math-container">$f:=u-\alpha$</span> is an outer function which has radial limit <span class="math-container">$0$</span> somewhere. This is a direct consequence of a theorem of Hoessjer-Frostman (see Noshiro's book, &quot;Cluster sets&quot; Theorem 6).</p>
1,821,582
<blockquote> <p>Find all solutions of $$\{x^3\}+[x^4]=1$$ where $[x]=\lfloor x\rfloor$</p> </blockquote> <p>$$$$</p> <p>I know that $0\le\{x^3\}&lt;1\Rightarrow 0&lt;[x^4]\le 1$. Thus $[x^4]=1$. I couldn't get any further though since I'm having trouble with $x^4$ in the term $[x^4]$. $$$$As an example, in another question, I was given the expression $$[2x]-[x+1]$$ In this, to 'convert' the terms inside the floor function, I split the values of $x$ into 2 cases: $x=[x]+\{x\}, \{x\}\in[0,0.5)$ and $x=[x]+\{x\}, \{x\}\in[0.5,1)$. $$$$In this way, I was able to split the value of $[2x]$ into $2[x]$ and $2[x]+1$ respectively. Hence the expressions within the floor function became easier to deal with as the expression $[2x]-[x+1]$ was converted into $2[x]-[x]-1$ and $2[x]+1-[x]-1$ respectively.$$$$ Is there any way to do something similar with $x^4$ in $[x^4]$ so that the $x^3$ inside $\{x^3\}$ and $x^4$ inside $[x^4]$ are converted into the 'same kind'? If this isn't possible, is there any other way to solve this problem, and other problems in which the expressions within the floor/fractional part function, or the floor/fractional part functions themselves are raised to powers? $$$$Many thanks in anticipation.</p>
Jal
295,771
<p>I think taking intervals for {$x$} will complicate the question. Since $[x^4]$ must equal $1$, the equation forces $\{x^3\}=0$, so it is easier to ask whether $\{x^3\}$ can be zero at all. If it is not zero, the equation has no solution.</p>
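<p>Carrying the idea through (a sketch): $[x^4]=1$ forces $1\le x^4&lt;2$, and $\{x^3\}=0$ forces $x^3$ to be an integer $k$ with $|k|&lt;2^{3/4}&lt;2$, so only a few candidates need checking.</p>

```python
# Enumerate candidates x = k^(1/3) for small integers k and keep those
# satisfying {x^3} + floor(x^4) = 1.
from math import floor, copysign

solutions = []
for k in range(-2, 3):
    x = copysign(abs(k) ** (1 / 3), k) if k != 0 else 0.0
    frac_x3 = k - floor(k)            # {x^3} = {k} = 0 exactly for integer k
    if frac_x3 + floor(x ** 4) == 1:
        solutions.append(x)

print(solutions)   # [-1.0, 1.0]
```
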
1,893,540
<p>I've been asked to prove the following, if $x - ε ≤ y$ for all $ε&gt;0$ then $x ≤ y$. I tried proof by contrapositive, but I keep having trouble choosing the right $ε$. Can you guys help me out? </p>
YoTengoUnLCD
193,752
<p><strong>Hint</strong> $$ x-\varepsilon \leq y\iff x-y \leq \varepsilon $$</p> <p>Suppose that $x&gt;y$. This implies that $\bar \varepsilon=\frac {x-y}{2}&gt;0$, then $\dots$</p> <hr> <p>The contrapositive of $$\forall \varepsilon &gt; 0 \, (x-\varepsilon \leq y) \rightarrow x\leq y$$ is $$x&gt; y \rightarrow \exists \varepsilon&gt;0\,( x-\varepsilon &gt; y)$$</p> <p>So suppose that $x&gt;y$. Then $\bar \varepsilon=\frac {x-y}{2}$ is positive and $x-\bar \varepsilon=\frac {x+y}{2}&gt;y$, which establishes the contrapositive.</p>
1,902,455
<p>$x=e^t$, $y=te^{-t}$</p> <p>$\frac{dy}{dx}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>$\frac{d^2y}{dx^2}= \frac{\frac{dy}{dx}}{\frac{dx}{dt}}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>I don't know why it's giving me this trouble. I entered these answers into my homework and it said it could not understand my answers and could not be graded. Also it asked for the interval over which it is concave upward, and if I'm not mistaken from my work it would be from 0 to infinity.</p>
Doug M
317,162
<p>I have a stupid one..</p> <p>$\int \frac{1-y^2}{(1+y^2)^2} dy\\ \int \frac{1}{1+y^2}-\frac{2y^2}{(1+y^2)^2} dy$</p> <p>we know that $\int \frac{1}{1+y^2} = tan^{-1} y + c$ but what about the other term?</p> <p>$-\int \frac{y(2y)}{(1+y^2)^2} dy\\ u = y, dv = \frac{2y}{(1+y^2)^2} dy\\ du = dy, v =-\frac{1}{(1+y^2)}\\ \frac {y}{1+y^2} - \int \frac {1}{(1+y^2)} dy$</p> <p>So,</p> <p>$\int \frac{1}{1+y^2}-\frac{2y^2}{(1+y^2)^2} dy = \int \frac {1}{(1+y^2)} dy + \frac {y}{1+y^2} - \int \frac {1}{(1+y^2)} dy$</p> <p>$\frac {y}{1+y^2} + c$</p>
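<p>A numerical cross-check of the antiderivative (a sketch): over $[0,1]$ the integral should equal $F(1)-F(0)=\tfrac12$ for $F(y)=\frac{y}{1+y^2}$.</p>

```python
# Compare a fine trapezoidal estimate of the integral on [0, 1]
# with the claimed antiderivative F(y) = y / (1 + y^2).
def integrand(y):
    return (1 - y * y) / (1 + y * y) ** 2

def F(y):
    return y / (1 + y * y)

n = 200000
h = 1.0 / n
trap = (integrand(0) + integrand(1)) / 2 + sum(integrand(i * h) for i in range(1, n))
trap *= h

print(trap, F(1) - F(0))   # both close to 0.5
```
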
1,902,455
<p>$x=e^t$, $y=te^{-t}$</p> <p>$\frac{dy}{dx}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>$\frac{d^2y}{dx^2}= \frac{\frac{dy}{dx}}{\frac{dx}{dt}}= \frac{e^{-t}(1-t)}{e^{t}}$</p> <p>I don't know why it's giving me this trouble. I entered these answers into my homework and it said it could not understand my answers and could not be graded. Also it asked for the interval over which it is concave upward, and if I'm not mistaken from my work it would be from 0 to infinity.</p>
Felix Marin
85,343
<p>$\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} \color{#f00}{\int{1 - y^{2} \over \pars{1 + y^{2}}^{2}}\,\dd y} &amp; = \int{\pars{1 + y^{2}} - 2y^{2} \over \pars{1+y^{2}}^{2}}\,\dd y = {\Huge\int}{\ds{\totald{y}{y}\pars{1 + y^{2}} - \totald{\pars{1 + y^{2}}}{y}\,y} \over \ds{\pars{1 + y^{2}}^{2}}}\,\dd y \\[5mm] &amp; = \int\dd\pars{y \over 1 + y^{2}} = \color{#f00}{y \over 1 + y^{2}} + \pars{~\mbox{a constant}~} \end{align}</p>
2,435
<p>I'm not sure we already have something similar, but I'm working on more code inspections for the IntelliJ plugin and it's always a good idea to ask the community. Since it doesn't really fit on main, I'm posting it here on Meta.</p> <p>Linting is an excellent way to point the developer to probable errors they might have overlooked. With a dynamic language like Mathematica's, we are a bit restricted in what we can do, since we cannot evaluate code and since most things require evaluation to be sure whether they are a bug or not. Nevertheless, there are checks we can do. For instance <code>If[a=b, ..]</code> is most likely a bug, and even if the developer knew what they were doing, it is bad style.</p> <p>There are trickier examples like <code>If[a&lt;5,...]</code>. This looks okay but knowing that <code>a&lt;5</code> stays unevaluated if the comparison cannot be done, it is a source of error because you end up with the unevaluated <code>If</code> expression in your wrong result and debugging might be complicated.</p> <p>In both examples, wrapping <code>TrueQ</code> around the condition resolves the issue and although there might still be a bug, at least you can be sure your <code>If</code> expression is evaluated to some branch. Other common sources of error are, e.g. <code>x_?testFunc[#]&amp;</code> or implicit multiplication through linebreaks.</p> <p><strong>Question:</strong> What are common bugs in your code and could they have been pointed out by a linter? If you like to share your thoughts, please provide one issue per answer, so that others can vote. I'm looking forward to your suggestions and will see whether I can implement some of them in IntelliJ.</p> <hr> <p>Example issue: With the <a href="https://mathematica.stackexchange.com/a/176489/187">alternative layout for packages</a> which was pointed out by Leonid, we can use <em>directives</em> for a static code analyzer to easily export symbols or declare them as package symbols. 
As Leonid pointed out, the directives need to be on their own source-line with nothing else on it. So for the directives</p> <pre><code>PackageScope["myFunc"] PackageExport["MyExportedFunc"] </code></pre> <p>I implemented the following rules</p> <ol> <li>They need to be on their own source line with nothing else on it</li> <li>Their string argument must be a valid identifier</li> </ol> <p><a href="https://i.stack.imgur.com/3bO61.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3bO61.gif" alt="enter image description here"></a></p>
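<p>The two rules are also easy to prototype outside the IDE; here is a rough Python sketch (the simplified identifier pattern is an assumption — real Wolfram Language symbol names are more permissive, e.g. they may carry context marks):</p>

```python
# Toy linter for PackageScope["..."] / PackageExport["..."] directives:
# rule 1: the directive must be alone on its source line;
# rule 2: its string argument must be a valid (simplified) identifier.
import re

DIRECTIVE = re.compile(r'^\s*(PackageScope|PackageExport)\["([^"]*)"\]\s*$')
IDENTIFIER = re.compile(r'^[A-Za-z$][A-Za-z0-9$]*$')  # simplified WL symbol name

def lint_line(line):
    issues = []
    if 'PackageScope[' in line or 'PackageExport[' in line:
        m = DIRECTIVE.match(line)
        if m is None:
            issues.append('directive must be on its own source line')
        elif not IDENTIFIER.match(m.group(2)):
            issues.append('argument is not a valid identifier: ' + m.group(2))
    return issues

print(lint_line('PackageExport["MyExportedFunc"]'))   # no issues
print(lint_line('x = 1; PackageScope["myFunc"]'))     # own-line violation
print(lint_line('PackageScope["3bad"]'))              # identifier violation
```
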
Szabolcs
12
<blockquote> <h1>Status Completed</h1> </blockquote> <h2>Invalid attributes</h2> <p>The Workbench has a feature where it will warn about invalid attributes in constructs similar to</p> <pre><code>Attributes[symbol] = {attr1, attr2, ...}; </code></pre> <p><a href="https://i.stack.imgur.com/fSuDH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fSuDH.png" alt="enter image description here"></a></p> <p>It does not at this point warn about wrong attributes in <code>SetAttributes</code> though.</p> <p>I think this would be a useful inspection because it is not an uncommon mistake to try to use <code>Hold</code> instead of <code>HoldAll</code>, or <code>HoldComplete</code> instead of <code>HoldAllComplete</code>.</p> <p>This should <em>not</em> be of high priority because:</p> <ul> <li><p><code>Attributes[foo] = ...</code> will typically appear at the top level and get evaluated at package load time.</p></li> <li><p>Mathematica will show an error message when trying to set a wrong attribute.</p></li> </ul> <p>Thus this error won't go unnoticed for long even if the IDE doesn't warn about it.</p> <hr> <p><strong>Comment halirutan:</strong> I have implemented this</p> <p><img src="https://i.imgur.com/xr3miAB.png" alt="img"></p> <p>It will be available in the WL Plugin <strong>version 2019.1.2</strong></p>