| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,270,382 | <p>I'm having trouble constructing, and understanding the construction, of a metric that induces the dictionary order topology on <span class="math-container">$\mathbb R \times \mathbb R$</span>. There are example metrics posted here on Math.SE, but that is not what I'm looking for. I'd like to understand how to <strong>approach</strong> constructing the proper metric. My first attempt was to define <span class="math-container">$$d((x_1,x_2),(y_1,y_2)) = \begin{cases} |y_1 - x_1|, & \mbox{if } 0 < |y_1 - x_1| \\ |y_2 - x_2|, & \mbox{if } x_1 = y_1, \mbox{and } 0 < |y_2 - x_2| \\ 0, & \mbox{if } x = y \end{cases}$$</span> However, even if I didn't make a mistake in my calculations, and this is in fact a metric, I don't know how to approach proving that <span class="math-container">$d$</span> induces the dictionary order topology. To summarize, I'd like to know how to approach constructing the metric and if my first attempt even works.</p>
| Henno Brandsma | 4,280 | <p>Intuitively, the plane <span class="math-container">$\Bbb R^2$</span> in the lexicographic order is just a set of vertical lines, all of which are topological copies of <span class="math-container">$\Bbb R$</span> and such that different vertical lines do not really interact: the horizontal line <span class="math-container">$\mathbb{R} \times \{0\}$</span> has the discrete topology as <span class="math-container">$(x,0)$</span> has the neighbourhood <span class="math-container">$O(x):= \{x\} \times (-1,1)$</span> which equals the open interval <span class="math-container">$((x,-1),(x,1))$</span> in the order, such that <span class="math-container">$O(x) \cap \left(\mathbb{R} \times \{0\}\right) = \{(x,0)\}$</span>. Some more thought will tell you that in fact <span class="math-container">$\Bbb R^2$</span> in the order topology is just homeomorphic to the product <span class="math-container">$(\Bbb R, \mathcal{T}_d) \times (\Bbb R, \mathcal{T}_e)$</span>, where the first factor has the (metrisable) discrete topology and the second the normal Euclidean one. This suggests a metric: we can use the sum (or <span class="math-container">$\max$</span>-metric) of the component metrics of the factors to get the right product topology, so define</p>
<p><span class="math-container">$$d((x_1, x_2), (y_1,y_2)) = \begin{cases}
|x_2-y_2| & \text{ if } x_1 = y_1 \\
1+|x_2-y_2| & \text{ if } x_1 \neq y_1 \\
\end{cases}$$</span> </p>
<p>which is then (by standard facts, I hope for you) a metric that induces the product topology of the discrete metric and the Euclidean metric, and so the right topology for the lexicographically ordered plane.</p>
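<p>As a quick sanity check (not part of the original answer), here is a small Python sketch that implements this sum of the discrete and Euclidean metrics and spot-checks the metric axioms, together with the "isolated vertical lines" behaviour, on a few sample points:</p>

```python
import itertools

def d(p, q):
    # Sum of the discrete metric on the first coordinate and the
    # Euclidean metric on the second, as in the answer above.
    return (0 if p[0] == q[0] else 1) + abs(p[1] - q[1])

pts = [(0.0, 0.0), (0.0, 0.3), (1.0, 0.0), (1.0, -2.5), (2.0, 7.0)]
for p, q in itertools.product(pts, repeat=2):
    assert d(p, q) == d(q, p)                      # symmetry
    assert (d(p, q) == 0) == (p == q)              # zero iff equal
for p, q, r in itertools.product(pts, repeat=3):
    assert d(p, r) <= d(p, q) + d(q, r) + 1e-12    # triangle inequality

# A d-ball of radius at most 1 around (x, 0) stays on the vertical line
# {x} x R, matching the order-open interval ((x,-1),(x,1)) in the text.
assert all(d((0.0, 0.0), (1.0, y)) >= 1 for y in (-0.5, 0.0, 0.5))
print("metric checks passed")
```

<p>The triangle inequality holds exactly here because a sum of two metrics is again a metric.</p>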
|
65,083 | <p>Rotman's book <em>An Introduction to the Theory of Groups</em> (Fourth Edition) asks, on page 22, Exercise 2.8, to show that <span class="math-container">$S(n)$</span> cannot be embedded in <span class="math-container">$A(n+1)$</span>, where <span class="math-container">$S(n)$</span> = the symmetric group on <span class="math-container">$n$</span> elements, and <span class="math-container">$A(n)$</span> = the alternating group on <span class="math-container">$n$</span> elements. I have a proof but it uses Bertrand's Postulate, which seems a bit much for page 22 of an introductory text. Does anyone have a more appropriate (i.e., easier) proof?</p>
| darij grinberg | 2,530 | <p>I think this is solved on <a href="http://www.artofproblemsolving.com/Forum/viewtopic.php?f=61&t=333049">http://www.artofproblemsolving.com/Forum/viewtopic.php?f=61&t=333049</a> .</p>
|
65,083 | <p>Rotman's book <em>An Introduction to the Theory of Groups</em> (Fourth Edition) asks, on page 22, Exercise 2.8, to show that <span class="math-container">$S(n)$</span> cannot be embedded in <span class="math-container">$A(n+1)$</span>, where <span class="math-container">$S(n)$</span> = the symmetric group on <span class="math-container">$n$</span> elements, and <span class="math-container">$A(n)$</span> = the alternating group on <span class="math-container">$n$</span> elements. I have a proof but it uses Bertrand's Postulate, which seems a bit much for page 22 of an introductory text. Does anyone have a more appropriate (i.e., easier) proof?</p>
| Derek Holt | 35,840 | <p>Of course this is not a research level question, and so is not appropriate for MO, but I remember being puzzled myself about what proof Rotman had in mind for this. I think we had better assume Lagrange's Theorem or it will be completely hopeless! Perhaps the proof using Bertrand's Postulate was intended, because students might expect to have heard of that, even if they have not read a proof?</p>
<p>Let's spell that out. As already noted, we can assume $n+1 = 2m$ is even by Lagrange. If $S_n$ embeds into $A_{n+1}$, then the index of the image of the embedding is $m$, so there is a nontrivial homomorphism (multiplicative action on cosets) $\phi: A_{n+1} \rightarrow S_m$. </p>
<p>By BP, there is a prime $p$ with $m < p < n+1$, so $p$ does not divide $|S_m|$. Hence all elements of order $p$ lie in ${\rm Ker}(\phi)$, including $g = (1,2,\ldots,p-1,p)$ and $h = (1,2,\ldots,p-1,p+1)$. Then $g^{-1}h$ is a 3-cycle ( $(1,p,p+1)$ if you multiply permutations left to right), so ${\rm Ker}(\phi)$ contains all 3-cycles, which generate $A_{n+1}$, contradicting the nontriviality of $\phi$.</p>
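<p>The claim that <span class="math-container">$g^{-1}h$</span> is the 3-cycle <span class="math-container">$(1,p,p+1)$</span> can be spot-checked mechanically; the following sketch (my own, for the hypothetical small instance <span class="math-container">$p=5$</span>, <span class="math-container">$n+1=7$</span>) multiplies permutations left to right as in the answer:</p>

```python
def cycle(points, n):
    """Permutation of {1,...,n} acting as the given cycle, stored as a dict."""
    perm = {i: i for i in range(1, n + 1)}
    for a, b in zip(points, points[1:] + points[:1]):
        perm[a] = b
    return perm

def compose_lr(f, g):
    """Left-to-right product fg: apply f first, then g (Rotman's convention)."""
    return {i: g[f[i]] for i in f}

def inverse(f):
    return {v: k for k, v in f.items()}

p, n1 = 5, 7                                     # prime p with m < p < n+1
g_perm = cycle(list(range(1, p + 1)), n1)        # g = (1,2,...,p-1,p)
h_perm = cycle(list(range(1, p)) + [p + 1], n1)  # h = (1,2,...,p-1,p+1)
k_perm = compose_lr(inverse(g_perm), h_perm)     # g^{-1} h
moved = sorted(i for i in k_perm if k_perm[i] != i)
assert moved == [1, p, p + 1]                    # exactly the 3-cycle (1,p,p+1)
print("g^{-1}h moves exactly", moved)
```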
|
65,083 | <p>Rotman's book <em>An Introduction to the Theory of Groups</em> (Fourth Edition) asks, on page 22, Exercise 2.8, to show that <span class="math-container">$S(n)$</span> cannot be embedded in <span class="math-container">$A(n+1)$</span>, where <span class="math-container">$S(n)$</span> = the symmetric group on <span class="math-container">$n$</span> elements, and <span class="math-container">$A(n)$</span> = the alternating group on <span class="math-container">$n$</span> elements. I have a proof but it uses Bertrand's Postulate, which seems a bit much for page 22 of an introductory text. Does anyone have a more appropriate (i.e., easier) proof?</p>
| KhashF | 128,556 | <p>Here is a short proof that follows from Rotman's material: Automorphisms of <span class="math-container">$S_m$</span> are all inner unless <span class="math-container">$m=2 \text{ or } 6$</span>. It is not hard to use this to show that for <span class="math-container">$m\neq 2, 6$</span> subgroups of <span class="math-container">$S_m$</span> of index <span class="math-container">$m$</span> are of the form <span class="math-container">$\{\sigma\mid \sigma(i)=i\}$</span> where <span class="math-container">$i\in\{1,\dots,m\}$</span> (see this <a href="https://math.stackexchange.com/questions/1075548/subgroups-of-s-n-of-index-n">stackexchange post</a>).</p>
<p>Now a subgroup <span class="math-container">$H$</span> of <span class="math-container">$A_{n+1}$</span> isomorphic to <span class="math-container">$S_n$</span> yields an index-<span class="math-container">$(n+1)$</span> subgroup of <span class="math-container">$S_{n+1}$</span>. Hence, except for <span class="math-container">$n=5$</span>, <span class="math-container">$H$</span> must coincide with a stabilizer of the action <span class="math-container">$S_{n+1}\curvearrowright\{1,\dots,n+1\}$</span>. But all such subgroups contain odd permutations, a contradiction. It remains to show that <span class="math-container">$S_5$</span> cannot be embedded in <span class="math-container">$A_6$</span>. For this, notice that the former has elements of order six while the latter doesn't.</p>
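<p>The final claim, that <span class="math-container">$A_6$</span> has no element of order six while <span class="math-container">$S_5$</span> does, is small enough to verify by brute force; this sketch (my own) enumerates all even permutations of six points:</p>

```python
import itertools
from math import gcd

def sign(p):
    # parity via counting inversions
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return 1 if inv % 2 == 0 else -1

def order(p):
    # order of a permutation = lcm of its cycle lengths
    seen, o = set(), 1
    for i in range(len(p)):
        if i not in seen:
            j, length = i, 0
            while j not in seen:
                seen.add(j); j = p[j]; length += 1
            o = o * length // gcd(o, length)
    return o

orders_A6 = {order(p) for p in itertools.permutations(range(6)) if sign(p) == 1}
assert 6 not in orders_A6             # A6 has no element of order 6 ...
assert order((1, 0, 3, 4, 2)) == 6    # ... while (0 1)(2 3 4) in S5 has order 6
print(sorted(orders_A6))
```

<p>The element orders occurring in <span class="math-container">$A_6$</span> come out as <span class="math-container">$1,2,3,4,5$</span>.</p>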
|
1,233,700 | <p>Given an infinitely generated additive group $G$ and a subgroup $K$ of $G$, when are they isomorphic to each other? Is there any theorem on this?</p>
| P Vanchinathan | 28,915 | <p>Take the set of polynomials in a variable $x$ with integer coefficients; it is an infinitely generated group on the generating set $\{1,x,x^2,x^3,\ldots\}$.
Now take the subgroup of polynomials involving only the even powers:
it is a proper subgroup isomorphic to the whole group (it is in fact isomorphic even as a ring).</p>
|
606,656 | <p>So I heard this a long time ago and I recently started thinking about it again. So I was told that the complex function $f(z)=1/z$ maps everything inside a circle to points outside the circle (the remaining part of the complex plane). Why is this?</p>
<p>I realize we can write $$f(z)=\frac{1}{z}=\frac{\overline{z}}{|z|^2}$$ so that there is a reflection $\overline{z}$ and a dilation $1/|z|^2$. But I don't see why this then only maps to points outside the circle. Can you explain?</p>
| Robert Israel | 8,508 | <p>$0 < |z| < 1$ iff $1 < |1/z| < \infty$</p>
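<p>A quick numerical illustration (mine, not the answerer's) of both the inversion of moduli and the reflection-plus-dilation form from the question:</p>

```python
import cmath, random
random.seed(0)

for _ in range(1000):
    r = random.uniform(1e-6, 0.999)           # strictly inside the unit circle
    t = random.uniform(0.0, 2 * cmath.pi)
    z = cmath.rect(r, t)
    assert abs(z) < 1 < abs(1 / z)            # image lies strictly outside
    # the reflection-and-dilation form conj(z)/|z|^2 agrees with 1/z:
    assert abs(z.conjugate() / abs(z) ** 2 - 1 / z) < 1e-9 * abs(1 / z)
print("ok")
```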
|
3,864,364 | <p>Let <span class="math-container">$g \colon \mathbb{N} \rightarrow \{0,1\}$</span> be such that <span class="math-container">$g(n)=0$</span> if <span class="math-container">$n \equiv 0$</span> or <span class="math-container">$n \equiv 1 ~(\bmod~4)$</span>, and <span class="math-container">$g(n)=1$</span> if <span class="math-container">$n \equiv 2$</span> or <span class="math-container">$n \equiv 3 ~(\bmod~4)$</span>.</p>
<p>Does <span class="math-container">$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$</span> converge?</p>
<p>The series is something like this</p>
<ul>
<li><p>If <span class="math-container">$n=1$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{1}} = 1$</span>, partial sum of 1</p>
</li>
<li><p>If <span class="math-container">$n=2$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{2}} \approx -0.707$</span>, partial sum of <span class="math-container">$\approx$</span> 0.293</p>
</li>
<li><p>If <span class="math-container">$n=3$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{3}} \approx -0.577$</span>, partial sum of <span class="math-container">$\approx$</span> -0.284</p>
</li>
<li><p>If <span class="math-container">$n=4$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{4}} = 0.5$</span>, partial sum of <span class="math-container">$\approx$</span> 0.216</p>
</li>
<li><p><span class="math-container">$\cdots$</span> and so on</p>
</li>
</ul>
<p>What I thought of doing?</p>
<ol>
<li>Break the series (call it <span class="math-container">$a_n$</span>) into two (<span class="math-container">$b_n$</span> and <span class="math-container">$c_n$</span>), where <span class="math-container">$b_n$</span> is the positive terms and make the negative terms = <span class="math-container">$0$</span>; and <span class="math-container">$c_n$</span> is only the negative terms, and the terms which are indices of positive = <span class="math-container">$0$</span>. So we have an equality of <span class="math-container">$a_n = b_n - c_n$</span>. If I show that either <span class="math-container">$b_n$</span> or <span class="math-container">$c_n$</span> diverges then <span class="math-container">$a_n$</span> diverges?</li>
<li>I know, by using the Alternating Series Test, that <span class="math-container">$\sum _{n=1}^{\infty }\frac{(-1)^n}{\sqrt{n}}$</span> converges, but does this also mean that <span class="math-container">$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$</span> converges? It is not properly a reordering, so I have my doubts.</li>
</ol>
<p>I am not sure about either approach, so that's why I am here. Thank you.</p>
| Angina Seng | 436,618 | <p>I think that's
<span class="math-container">$$\sum_{m=0}^\infty\left(\frac1{\sqrt{4m+1}}-\frac1{\sqrt{4m+2}}
-\frac1{\sqrt{4m+3}}+\frac1{\sqrt{4m+4}}\right).$$</span>
The mean value theorem implies that both
<span class="math-container">$$\frac1{\sqrt{4m+1}}-\frac1{\sqrt{4m+2}}$$</span>
and
<span class="math-container">$$-\frac1{\sqrt{4m+3}}+\frac1{\sqrt{4m+4}}$$</span>
are <span class="math-container">$O(m^{-3/2})$</span>, and so this series converges by comparison with
<span class="math-container">$\sum_{m=1}^\infty m^{-3/2}$</span>.</p>
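<p>The grouped blocks and the <span class="math-container">$O(m^{-3/2})$</span> bound can be checked numerically; the sketch below (mine) uses the concrete bound <span class="math-container">$m^{-3/2}$</span>, which is far weaker than what the mean value theorem actually gives:</p>

```python
def block(m):
    # one grouped block of four consecutive terms, as displayed above
    return (1 / (4 * m + 1) ** 0.5 - 1 / (4 * m + 2) ** 0.5
            - 1 / (4 * m + 3) ** 0.5 + 1 / (4 * m + 4) ** 0.5)

# each block is positive and O(m^{-3/2}), with room to spare:
assert all(0 < block(m) <= m ** -1.5 for m in range(1, 10000))

# and the partial sums of the original series look convergent:
def g(n):
    return 0 if n % 4 in (0, 1) else 1

s = sum((-1) ** g(n) / n ** 0.5 for n in range(1, 200001))
print("partial sum:", round(s, 3))
```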
|
3,864,364 | <p>Let <span class="math-container">$g \colon \mathbb{N} \rightarrow \{0,1\}$</span> be such that <span class="math-container">$g(n)=0$</span> if <span class="math-container">$n \equiv 0$</span> or <span class="math-container">$n \equiv 1 ~(\bmod~4)$</span>, and <span class="math-container">$g(n)=1$</span> if <span class="math-container">$n \equiv 2$</span> or <span class="math-container">$n \equiv 3 ~(\bmod~4)$</span>.</p>
<p>Does <span class="math-container">$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$</span> converge?</p>
<p>The series is something like this</p>
<ul>
<li><p>If <span class="math-container">$n=1$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{1}} = 1$</span>, partial sum of 1</p>
</li>
<li><p>If <span class="math-container">$n=2$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{2}} \approx -0.707$</span>, partial sum of <span class="math-container">$\approx$</span> 0.293</p>
</li>
<li><p>If <span class="math-container">$n=3$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{3}} \approx -0.577$</span>, partial sum of <span class="math-container">$\approx$</span> -0.284</p>
</li>
<li><p>If <span class="math-container">$n=4$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{4}} = 0.5$</span>, partial sum of <span class="math-container">$\approx$</span> 0.216</p>
</li>
<li><p><span class="math-container">$\cdots$</span> and so on</p>
</li>
</ul>
<p>What I thought of doing?</p>
<ol>
<li>Break the series (call it <span class="math-container">$a_n$</span>) into two (<span class="math-container">$b_n$</span> and <span class="math-container">$c_n$</span>), where <span class="math-container">$b_n$</span> is the positive terms and make the negative terms = <span class="math-container">$0$</span>; and <span class="math-container">$c_n$</span> is only the negative terms, and the terms which are indices of positive = <span class="math-container">$0$</span>. So we have an equality of <span class="math-container">$a_n = b_n - c_n$</span>. If I show that either <span class="math-container">$b_n$</span> or <span class="math-container">$c_n$</span> diverges then <span class="math-container">$a_n$</span> diverges?</li>
<li>I know, by using the Alternating Series Test, that <span class="math-container">$\sum _{n=1}^{\infty }\frac{(-1)^n}{\sqrt{n}}$</span> converges, but does this also mean that <span class="math-container">$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$</span> converges? It is not properly a reordering, so I have my doubts.</li>
</ol>
<p>I am not sure about either approach, so that's why I am here. Thank you.</p>
| QC_QAOA | 364,346 | <p>We have</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}=\frac{1}{1}-\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{3}}+\frac{1}{\sqrt{4}}+\cdots$$</span></p>
<p>Now, note that for for <span class="math-container">$n\equiv 2\ (\text{mod}\ 4)$</span> we have the terms</p>
<p><span class="math-container">$$\cdots-\frac{1}{\sqrt{n}}-\frac{1}{\sqrt{n+1}}+\frac{1}{\sqrt{n+2}}+\frac{1}{\sqrt{n+3}}-\cdots$$</span></p>
<p><span class="math-container">$$\cdots-\frac{\sqrt{n}+\sqrt{n+1}}{\sqrt{n(n+1)}}+\frac{\sqrt{n+2}+\sqrt{n+3}}{\sqrt{(n+2)(n+3)}}$$</span></p>
<p>That is, the sum can be rewritten as</p>
<p><span class="math-container">$$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}=1+\sum_{n=1}^\infty (-1)^n\frac{\sqrt{2n}+\sqrt{2n+1}}{\sqrt{2n(2n+1)}}$$</span></p>
<p>But we see that the term in this sum is decreasing in absolute value and tends to <span class="math-container">$0$</span> (this is verified below). We conclude by the alternating series test that the sum converges.</p>
<hr />
<p>REQUESTED EDIT: Note that</p>
<p><span class="math-container">$$0\leq \left(\frac{\sqrt{2n}+\sqrt{2n+1}}{\sqrt{2n(2n+1)}}\right)^2=\frac{4n+1+2\sqrt{2n(2n+1)}}{2n(2n+1)}$$</span></p>
<p><span class="math-container">$$\leq\frac{5n+2\sqrt{2n(2n+1)}}{2n(2n+1)}=\frac{5}{4n+2}+\frac{2}{\sqrt{2n(2n+1)}}$$</span></p>
<p><span class="math-container">$$<\frac{5}{4n}+\frac{2}{\sqrt{2n(2n-n)}}=\frac{5}{4n}+\frac{2}{n\sqrt{2}}$$</span></p>
<p><span class="math-container">$$<\frac{2}{n}+\frac{2}{n}=\frac{4}{n}\to 0$$</span></p>
<hr />
<p>SECOND REQUESTED EDIT: Note that</p>
<p><span class="math-container">$$1=\frac{1}{1}$$</span></p>
<p><span class="math-container">$$n=1:\ (-1)^1\frac{\sqrt{2\cdot 1}+\sqrt{2\cdot 1+1}}{\sqrt{2\cdot 1(2\cdot 1+1)}} =-\frac{\sqrt{2}+\sqrt{3}}{\sqrt{2}\sqrt{3}}=-\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{3}}$$</span></p>
<p><span class="math-container">$$n=2:\ (-1)^2\frac{\sqrt{2\cdot 2}+\sqrt{2\cdot 2+1}}{\sqrt{2\cdot 2(2\cdot 2+1)}} =\frac{\sqrt{4}+\sqrt{5}}{\sqrt{4}\sqrt{5}}=\frac{1}{\sqrt{4}}+\frac{1}{\sqrt{5}}$$</span></p>
<p><span class="math-container">$$n=3:\ (-1)^3\frac{\sqrt{2\cdot 3}+\sqrt{2\cdot 3+1}}{\sqrt{2\cdot 3(2\cdot 3+1)}} =-\frac{\sqrt{6}+\sqrt{7}}{\sqrt{6}\sqrt{7}}=-\frac{1}{\sqrt{6}}-\frac{1}{\sqrt{7}}$$</span></p>
<p><span class="math-container">$$\vdots$$</span></p>
<p>The left-hand sides are the terms we found, while the right-hand sides are the original terms.</p>
|
3,864,364 | <p>Let <span class="math-container">$g \colon \mathbb{N} \rightarrow \{0,1\}$</span> be such that <span class="math-container">$g(n)=0$</span> if <span class="math-container">$n \equiv 0$</span> or <span class="math-container">$n \equiv 1 ~(\bmod~4)$</span>, and <span class="math-container">$g(n)=1$</span> if <span class="math-container">$n \equiv 2$</span> or <span class="math-container">$n \equiv 3 ~(\bmod~4)$</span>.</p>
<p>Does <span class="math-container">$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$</span> converge?</p>
<p>The series is something like this</p>
<ul>
<li><p>If <span class="math-container">$n=1$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{1}} = 1$</span>, partial sum of 1</p>
</li>
<li><p>If <span class="math-container">$n=2$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{2}} \approx -0.707$</span>, partial sum of <span class="math-container">$\approx$</span> 0.293</p>
</li>
<li><p>If <span class="math-container">$n=3$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{1}}{\sqrt{3}} \approx -0.577$</span>, partial sum of <span class="math-container">$\approx$</span> -0.284</p>
</li>
<li><p>If <span class="math-container">$n=4$</span>, then <span class="math-container">$\frac{(-1)^{g(n)}}{\sqrt{n}} = \frac{(-1)^{0}}{\sqrt{4}} = 0.5$</span>, partial sum of <span class="math-container">$\approx$</span> 0.216</p>
</li>
<li><p><span class="math-container">$\cdots$</span> and so on</p>
</li>
</ul>
<p>What I thought of doing?</p>
<ol>
<li>Break the series (call it <span class="math-container">$a_n$</span>) into two (<span class="math-container">$b_n$</span> and <span class="math-container">$c_n$</span>), where <span class="math-container">$b_n$</span> is the positive terms and make the negative terms = <span class="math-container">$0$</span>; and <span class="math-container">$c_n$</span> is only the negative terms, and the terms which are indices of positive = <span class="math-container">$0$</span>. So we have an equality of <span class="math-container">$a_n = b_n - c_n$</span>. If I show that either <span class="math-container">$b_n$</span> or <span class="math-container">$c_n$</span> diverges then <span class="math-container">$a_n$</span> diverges?</li>
<li>I know, by using the Alternating Series Test, that <span class="math-container">$\sum _{n=1}^{\infty }\frac{(-1)^n}{\sqrt{n}}$</span> converges, but does this also mean that <span class="math-container">$\sum_{n=1}^\infty \frac{(-1)^{g(n)}}{\sqrt{n}}$</span> converges? It is not properly a reordering, so I have my doubts.</li>
</ol>
<p>I am not sure about either approach, so that's why I am here. Thank you.</p>
| saulspatz | 235,128 | <p>The series converges by <a href="https://en.wikipedia.org/wiki/Dirichlet%27s_test" rel="nofollow noreferrer">Dirichlet's test</a>. In the notation of the Wikipedia article, let <span class="math-container">$a_n=\frac1{\sqrt n}$</span>, and let <span class="math-container">$b_n=(-1)^{g(n)}$</span>. Then <span class="math-container">$\mid\sum_{k=1}^n b_k\mid\leq2$</span> and the test applies.</p>
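<p>The boundedness of the partial sums <span class="math-container">$\sum_{k=1}^n b_k$</span> is the only thing Dirichlet's test needs here beyond monotonicity of <span class="math-container">$a_n$</span>; a quick check (mine) shows the sign pattern <span class="math-container">$+,-,-,+$</span> makes the partial sums cycle through <span class="math-container">$1,0,-1,0$</span>:</p>

```python
def g(n):
    return 0 if n % 4 in (0, 1) else 1

partial, worst = 0, 0
for n in range(1, 100001):
    partial += (-1) ** g(n)          # b_n = (-1)^{g(n)}
    worst = max(worst, abs(partial))
assert worst <= 2                    # the bound quoted in the answer
print("max |sum of b_k| =", worst)   # in fact the maximum is 1
```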
|
142,858 | <p>Let $f:X\to Y$ be a proper surjection of complex algebraic varieties. Let $H_i$ denote Borel-Moore homology. Then $$ \mathrm{Gr}^W_{-k} H_k(X) \to \mathrm{Gr}^W_{-k} H_k(Y) $$ is surjective.</p>
<p><strong>Question:</strong> Does anyone know a reference for this fact?</p>
<p>I have a proof, but it's not as simple as it could be. And it uses generic smoothness of $f$, so it's only valid in characteristic zero.</p>
<hr>
<p>The cycle class map from Chow groups to Borel-Moore homology lands in the lowest weight part, so the above is somehow analogous to the fact that proper pushforward is surjective in Chow for a surjective map.</p>
<p>If $f$ instead is an open immersion, then the induced map on lowest weights is also surjective. This is similarly analogous to the fact that flat pullback is surjective in Chow for open immersions. But here the proof for the result in Borel-Moore homology is very simple, which is one reason I think there should be a simple proof of the above result, too.</p>
<hr>
<p>Addendum. Here's my proof, it's very similar to the one linked to by novice. However, the proof in Lewis's book doesn't need generic smoothness; it uses instead the existence of a subvariety mapping generically finitely onto $Y$, as in the suggestion of ACL below.</p>
<p>Note first that if $f$ is in addition smooth, then $H_k(X) \to H_k(Y)$ is onto and there's not much to it. For the general case take $U \subset X$ where $f$ is smooth, and let $Z = X \setminus U$. Then there is a map between the long exact sequences
$$ \cdots \to H_k(Z) \to H_k(X) \to H_k(U) \to H_{k-1}(Z) \to \cdots $$
and
$$ \cdots \to H_k(f(Z)) \to H_k(Y) \to H_k(Y \setminus f(Z)) \to H_{k-1}(f(Z)) \to \cdots $$
as follows. The maps $H_\bullet(X) \to H_\bullet(Y)$ and $H_\bullet(Z) \to H_\bullet(f(Z))$ are the obvious ones. The map $H_\bullet(U) \to H_\bullet(Y \setminus f(Z))$ is the composite
$$ H_\bullet(U) \to H_\bullet(X \setminus f^{-1}(f(Z))) \to H_\bullet(Y \setminus f(Z)). $$</p>
<p>Now apply $W_{-k}$ to the long exact sequences. The two maps $H_k(Z) \to H_k(f(Z))$ and $H_k(U) \to H_k(Y \setminus f(Z))$ are surjective on lowest weights: the former by noetherian induction and the latter because it's the composition of pullback for an open immersion and a pushforward for a smooth proper morphism. Since in addition $W_{-k}H_{k-1}(Z) = W_{-k}H_{k-1}(f(Z)) = 0$ the result follows by the four lemma.</p>
| novice | 40,331 | <p>See <a href="http://books.google.com/books?id=uNeu1hjOh7YC&lpg=PA284&ots=gPlXNOtiXu&dq=borel-moore%20weights%20mixed%20hodge%20structure&pg=PA285#v=onepage&q=borel-moore%20weights%20mixed%20hodge%20structure&f=false" rel="nofollow">Lewis's book</a> (page 285) for the case of projective varieties</p>
|
3,551,976 | <p>Let <span class="math-container">$E$</span> be a uniformly convex Banach space, <span class="math-container">$K$</span> convex and closed in <span class="math-container">$E$</span>, and <span class="math-container">$x\in E\setminus K$</span>.</p>
<p>Can someone give a concise proof that there is a unique <span class="math-container">$y\in K$</span> s.t. <span class="math-container">$dist(x,K) = \Vert x-y\Vert$</span> using the notion of uniform convexity exactly as given on the relevant <a href="https://en.wikipedia.org/wiki/Uniformly_convex_space" rel="nofollow noreferrer">wikipedia page</a> and without using weak compactness of reflexive spaces?</p>
| Matematleta | 138,929 | <p><a href="https://books.google.com.mx/books/about/Functional_Analysis.html?id=kSJZAAAACAAJ&redir_esc=y" rel="nofollow noreferrer">Larsen's Functional Analysis, an Introduction, Thm 8.2.2</a> has an elementary proof of this, that relies on a lemma, which he leaves to the reader. Here is the statement and proof of the lemma:</p>
<p>Let <span class="math-container">$E$</span> be a uniformly convex normed linear space over <span class="math-container">$F$</span>. If <span class="math-container">$(x_n)\subseteq E$</span> is a sequence such that</p>
<p><span class="math-container">$(i) \underset{n\to \infty}\lim \|x_n\| = 1,\ (ii) \ \underset{n,m\to \infty}\lim \|x_n + x_m\| = 2.\ \text{Then},\ (x_n)\ \text{is Cauchy.}$</span></p>
<p><span class="math-container">$(i)$</span> implies that for each integer <span class="math-container">$k$</span>, there is an <span class="math-container">$x_{n_k}$</span> such that <span class="math-container">$\|x_{n_k}\|<1+1/k$</span>.</p>
<p>Now, if <span class="math-container">$(x_n)$</span> is not Cauchy, then we may choose <span class="math-container">$(x_{n_k})$</span> so that the subsequence is not Cauchy either. Then, there is an <span class="math-container">$\epsilon>0$</span> such that for all integers <span class="math-container">$K$</span>, there are integers <span class="math-container">$n_k,n_l$</span> such that <span class="math-container">$\|x_{n_k}-x_{n_l}\|>\epsilon$</span> with <span class="math-container">$k,l>K.$</span></p>
<p>Without loss of generality, assume <span class="math-container">$l>k$</span> so that <span class="math-container">$1+1/l<1+1/k$</span> and therefore <span class="math-container">$\|x_{n_k}\|<1+1/k$</span> and <span class="math-container">$\|x_{n_l}\|<1+1/l<1+1/k.$</span></p>
<p>Then, the hypothesis of uniform convexity gives us a <span class="math-container">$\delta>0$</span> such that <span class="math-container">$\frac{\|x_{n_k}+x_{n_l}\|}{2(1+1/k)}<1-\delta.$</span> But now, on taking limits and applying <span class="math-container">$(ii)$</span>, we get a contradiction: <span class="math-container">$1=\underset{k,l\to \infty}\lim\frac{\|x_{n_k}+x_{n_l}\|}{2(1+1/k)}\le 1-\delta$</span>.</p>
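<p>For intuition about the modulus <span class="math-container">$\delta(\epsilon)$</span> in the Wikipedia definition, here is a numerical illustration (mine) in the Hilbert-space case <span class="math-container">$\mathbb{R}^2$</span>, where the parallelogram law gives the modulus explicitly:</p>

```python
import math, random
random.seed(3)

def norm(v):
    return math.hypot(v[0], v[1])

# Parallelogram law: ||(x+y)/2||^2 + ||(x-y)/2||^2 = (||x||^2 + ||y||^2)/2,
# so ||x||,||y|| <= 1 and ||x-y|| >= eps force ||(x+y)/2|| <= sqrt(1-(eps/2)^2);
# hence delta(eps) = 1 - sqrt(1-(eps/2)^2) > 0 witnesses uniform convexity.
for _ in range(20000):
    x = (random.uniform(-1, 1), random.uniform(-1, 1))
    y = (random.uniform(-1, 1), random.uniform(-1, 1))
    if norm(x) <= 1 and norm(y) <= 1:
        eps = norm((x[0] - y[0], x[1] - y[1]))
        mid = norm(((x[0] + y[0]) / 2, (x[1] + y[1]) / 2))
        assert mid <= math.sqrt(max(0.0, 1 - (eps / 2) ** 2)) + 1e-12
print("parallelogram bound verified")
```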
|
3,243,655 | <p>Question:</p>
<blockquote>
<p>Prove the equation <span class="math-container">$2x - 6y = 3$</span> has no integer solution to <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p>
</blockquote>
<p>I need to verify my proof. I think I did it correctly, but am not fully sure, since I don't have solutions in my book. I basically proved by contradiction and assumed there was an integer solution for <span class="math-container">$x$</span> or <span class="math-container">$y$</span>. I then solved for <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$2x - 6y = 3$</span>, getting <span class="math-container">$x = 3y + 3/2$</span> and <span class="math-container">$y = x/3 - 1/2$</span>. Since both <span class="math-container">$x,y$</span> are not integers, I said this contradicts the assumption that <span class="math-container">$x$</span> or <span class="math-container">$y$</span> had an integer solution, meaning the original statement was correct. Did I prove this right, or should I redo it?</p>
| Parcly Taxel | 357,390 | <p>Rewind to the point where you say <span class="math-container">$x=3y+3/2$</span>. We rearrange this to <span class="math-container">$x-3y=3/2$</span>, then note that since we have taken <span class="math-container">$x$</span> and <span class="math-container">$y$</span> to be integers, <span class="math-container">$x-3y$</span> is also an integer. But <span class="math-container">$3/2$</span> is not an integer, a contradiction.</p>
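<p>The parity argument can be confirmed by a brute-force scan (my own check, over an arbitrary finite window of integers):</p>

```python
# 2x - 6y = 2(x - 3y) is always even, so it can never equal the odd number 3.
window = range(-100, 101)
assert all((2 * x - 6 * y) % 2 == 0 for x in window for y in window)
assert all(2 * x - 6 * y != 3 for x in window for y in window)
print("no integer solutions in the window")
```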
|
301,038 | <p>If you want to show that a sequence $(a_{n})$ in $\mathbb{R}$ is convergent, when is it sufficient to show that there is a number $b\in\mathbb{R}$ such that
$$ \liminf a_{n} \geq b \geq \limsup a_{n}\,?$$</p>
<p>In particular, I have a situation where my sequence is bounded and I wanted to use this approach, but I'm not sure I really understand what is going on and why or if this works.</p>
<p>Thanks for any illumination!</p>
| superAnnoyingUser | 34,371 | <p>Geometrically, $\lim \inf$ is the leftmost (or smallest) accumulation point of a sequence, while $\lim \sup$ is the rightmost (or biggest) one. If you can prove that the one on the left is in fact bigger than or equal to the one on the right, then they coincide. Thus, your bounded sequence is convergent by the Bolzano–Weierstrass theorem, since it has only one accumulation point. (Some take this as a definition.)</p>
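<p>A small numerical picture of this squeeze (my own example, using tail infima/suprema as finite proxies for <span class="math-container">$\liminf$</span> and <span class="math-container">$\limsup$</span>):</p>

```python
def tail_inf_sup(a, k):
    """inf and sup of the tail a[k:], finite proxies for liminf and limsup."""
    tail = a[k:]
    return min(tail), max(tail)

# a_n = (-1)^n / n is bounded with liminf = limsup = 0, so it converges to 0.
a = [(-1) ** n / n for n in range(1, 5001)]
lo, hi = tail_inf_sup(a, 4000)
assert lo <= 0 <= hi and hi - lo < 1e-3   # the two limits are squeezed together
print(lo, hi)
```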
|
109,307 | <p><strong>Problem.</strong> Let $d=(a,b)$, $b=\beta d$ and $n>1$. If $\beta$ is an odd number, prove that $(n^a+1,n^b-1)\le 2$.</p>
<p><em>Solution (from the book).</em> Each common divisor of the numbers $n^a+1$ and $n^b-1$ has to be a divisor of their sum $n^a+n^b=n^b(n^{a-b}+1)$; since it divides $n^b-1$, it is coprime to $n$, so it has to be a common divisor of $n^b-1$ and $n^{a-b}+1$ (we assume $a>b$). Continuing in this way, as in the Euclidean algorithm, we conclude that any common divisor of $n^a+1$ and $n^b-1$ has to be a divisor of $n^d+1$. Let $x=n^d$, so that $n^b-1=x^\beta-1$ with $\beta$ odd. Dividing $n^b-1$ by $n^d+1$ means dividing $x^\beta-1$ by $x+1$. The remainder of this division is $-2$ (evaluate $x^\beta-1$ at $x=-1$), and this means that the numbers $n^a+1$ and $n^b-1$ cannot have a common divisor greater than 2. $\square$</p>
<p>I'm horrible at number theory, because it always seems too abstract for me, and solutions always seem kind of forced, as if the solution was written first, and then the problem.</p>
<p>So I've decided to start from the beginning of my <em>Introduction to number theory</em> book, and this problem was supposed to be an easy example of previously learned.</p>
<p>There is a bunch of things I don't understand in this proof, beginning with "it has to be a common divisor of $n^b-1$ and $n^{a-b}+1$". After that, I understand literally nothing. I wish someone could write a more detailed explanation, but less formal, so that it can include many words and explanations, and examples, too.</p>
<p>Also, are there any hints/directions you could give me that I could follow when encountering number theory problems? What am I supposed to see, what am I supposed to look for, and how deep do I have to go to prove something? Sometimes things that sound pretty obvious to me are explained at great length, while other things, such as the steps I mention above, are simply passed over without a word of explanation.</p>
<p>I hope I'm not asking too much.</p>
| Math Gems | 75,092 | <p>The second half of the proof is more clearly presented using congruence arithmetic. Namely, working mod $\:m := (n^a+1,n^b-1)$ we have $ n^b\equiv 1$, so from $n^d\equiv -1$ we deduce</p>
<p>$$ 1 \equiv n^b \equiv (n^d)^{\beta} \equiv (-1)^{\beta} \equiv -1 \quad \text{by } \beta \text{ odd}$$</p>
<p>Therefore $1 \equiv -1\ \Rightarrow\ 2 \equiv 0,\:$ i.e. $\:m\ |\ 2,\:$ as desired.</p>
<p>I haven't checked the details of the first half of the proof, but it should proceed mimicing the Euclidean algorithm, the same way as <a href="https://math.stackexchange.com/a/11636/23500">this proof</a> that $\:(n^a-1,n^b-1) \: =\: n^{(a,b)}-1$.</p>
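<p>The statement itself is easy to stress-test numerically; this sketch (mine, with arbitrary small sample ranges) checks <span class="math-container">$(n^a+1,n^b-1)\le 2$</span> whenever <span class="math-container">$d=(a,b)$</span> and <span class="math-container">$b/d$</span> is odd:</p>

```python
from math import gcd

checked = 0
for n in range(2, 12):
    for d in range(1, 5):
        for beta in (1, 3, 5):                 # beta odd, b = beta * d
            for alpha in range(1, 7):          # a = alpha * d
                a, b = alpha * d, beta * d
                if gcd(a, b) != d:             # require d = (a, b) exactly
                    continue
                assert gcd(n ** a + 1, n ** b - 1) <= 2
                checked += 1
print("verified", checked, "cases")
```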
|
794,032 | <p>Prove by induction that $(A_1 \cup A_2 \cup \cdots \cup A_n)^c = A_1^c \cap A_2^c \cap \cdots \cap A_n^c$.</p>
<p>My approach: the base step is true, $\overline{A_1} = A_1^c$,</p>
<p>then assume $(A_1 \cup A_2 \cup \cdots \cup A_k)^c = A_1^c \cap A_2^c \cap \cdots \cap A_k^c$, and prove the case $k+1$ is true. How should I do that?</p>
| Paul Hurst | 149,898 | <p>The basic structure is to show</p>
<p>1) $(A_1 \cup A_2 \cup \cdots \cup A_k \cup A_{k+1})^c \subset A_1^c \cap A_2^c \cap \cdots \cap A_k^c \cap A_{k+1}^c$ </p>
<p>and</p>
<p>2) $A_1^c \cap A_2^c \cap \cdots \cap A_k^c \cap A_{k+1}^c \subset (A_1 \cup A_2 \cup \cdots \cup A_k \cup A_{k+1})^c$</p>
<p>Let $x \in (A_1 \cup A_2 \cup \cdots \cup A_k \cup A_{k+1})^c$. Then $x \notin A_1 \cup A_2 \cup \cdots \cup A_k \cup A_{k+1}$. This implies $x \notin A_1 \cup A_2 \cup \cdots \cup A_k$ and $x \notin A_{k+1}$. Then use the induction hypothesis, etc., and this accomplishes #1. Then do a similar thing for #2.</p>
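<p>De Morgan's law for finitely many sets is also easy to spot-check with set operations; here is a small sketch (mine, using random subsets of an arbitrary universe):</p>

```python
import random
random.seed(1)

U = set(range(30))                               # ambient universe

for _ in range(200):
    k = random.randint(1, 6)
    sets = [set(random.sample(range(30), random.randint(0, 10)))
            for _ in range(k)]
    union = set().union(*sets)
    inter_of_complements = U
    for A in sets:
        inter_of_complements &= (U - A)          # intersect the complements
    assert U - union == inter_of_complements     # De Morgan for k sets
print("De Morgan verified")
```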
|
4,435,497 | <p>I have to prove the following:</p>
<p>Let <span class="math-container">$p$</span> be a prime and denote <span class="math-container">$\mathbb{Z}_p=\mathbb{Z}/p\mathbb{Z}$</span>. Show that <span class="math-container">$\mathbb{Z}_p[x]/(x^2+x+1)$</span> is not a field if and only if <span class="math-container">$p=3$</span> or <span class="math-container">$p \equiv 1 \pmod 3$</span>.</p>
<p>First I consider showing that <span class="math-container">$x^2+x+1$</span> is reducible, We may write <span class="math-container">$x^2+x+1 \equiv 0$</span> which implies that <span class="math-container">$x^2 \equiv -x -1$</span> which implies <span class="math-container">$x^2 \equiv -x-1 \equiv 2x+2 \pmod 3$</span>. Thus, finally showing that <span class="math-container">$x^2+x+1 \equiv x^2-2x-2$</span>. Now, <span class="math-container">$x^2-2x-2$</span> <strong>is</strong> reducible. Thus our original polynomial can be written as a reducible polynomial, thus we do not have a field?</p>
<p>Any help would be greatly appreciated.</p>
| Sammy Black | 6,509 | <p>Everything you write is correct, but it winds about unnecessarily, and comes up short of the conclusion: How do we know that <span class="math-container">$x^2 - 2x - 2$</span> is reducible?</p>
<p>Instead, just add/subtract polynomial multiples of <span class="math-container">$3$</span> to put the expression in a form in which it factors. One such way:
<span class="math-container">$$
x^2 + x + 1 \equiv x^2 - 2x + 1 = (x-1)^2 \pmod 3
$$</span>
Can you figure out how to do something similar for the case where <span class="math-container">$p \equiv 1 \pmod 3$</span>? Why does this necessarily <em>not</em> work when <span class="math-container">$p \equiv -1 \pmod 3$</span>?</p>
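<p>A quick computational check of the pattern (an illustration, not a proof): a root of <span class="math-container">$x^2+x+1$</span> modulo <span class="math-container">$p$</span> makes the polynomial reducible, and roots turn out to exist exactly when <span class="math-container">$p=3$</span> or <span class="math-container">$p \equiv 1 \pmod 3$</span>:</p>

```python
# For each small prime p, test whether x^2 + x + 1 has a root mod p.
# For a quadratic, having a root is equivalent to being reducible.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in [q for q in range(2, 60) if is_prime(q)]:
    has_root = any((x * x + x + 1) % p == 0 for x in range(p))
    predicted = (p == 3) or (p % 3 == 1)
    assert has_root == predicted
print("pattern confirmed for all primes below 60")
```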
|
362,944 | <blockquote>
<p>Consider a Brownian bridge <span class="math-container">$B: [0,1]\to \mathbb{R}$</span> with <span class="math-container">$B(0)=B(1)=0$</span>. Let <span class="math-container">$M[0, 1/2]=\max_{x\in[0,1/2]}B(x)$</span>. How to prove that
<span class="math-container">$$\mathbb{P}(M[0, 1/2]\geq s)\leq 2\mathbb{P}(B(1/2)\geq s/2)?$$</span></p>
</blockquote>
<p>Actually, here is a "no big max" argument that could be used in the proof of the construction of the Airy line ensemble. "No big max" means the top curve between <span class="math-container">$(a, b)$</span> cannot get too high. </p>
<p>(Definition of Brownian bridge) If <span class="math-container">$\{B(t): t\geq 0\}$</span> is standard Brownian motion, then <span class="math-container">$\{Z(t): 0\leq t\leq 1\}$</span> is a Brownian bridge process when <span class="math-container">$$Z(t)=B(t)-tB(1).$$</span></p>
| Iosif Pinelis | 36,721 | <p>Let <span class="math-container">$B_t:=B(t)$</span>. We have to show that
<span class="math-container">\begin{equation}
P(M_{1/2}\ge s)\le2P(B_{1/2}\ge s/2) \tag{1}
\end{equation}</span>
for <span class="math-container">$s\ge0$</span>, where <span class="math-container">$M_T:=\max_{0\le t\le T}B_t$</span> for <span class="math-container">$T\in(0,1)$</span>.</p>
<p>We shall prove (1) by first obtaining an explicit expression for <span class="math-container">$P(M_T\ge s)$</span>.</p>
<p>Indeed, for <span class="math-container">$t\in[0,1]$</span>, we can write
<span class="math-container">$$B_t=W_t-tW_1,$$</span>
where <span class="math-container">$W$</span> is a standard Wiener process. For each real <span class="math-container">$x$</span>, consider
<span class="math-container">\begin{equation}
C^{-x}_t:=W_t-tx,
\end{equation}</span>
so that <span class="math-container">$C^{-x}$</span> is a Wiener process with drift <span class="math-container">$-x$</span>. Then
<span class="math-container">\begin{align*}
P(M_T\ge s|W_1=x)&=P(\max_{0\le t\le T}(W_t-tW_1)\ge s|W_1=x) \\
&=P(\max_{0\le t\le T}(W_t-tx)\ge s|W_1=x) \\
&=P(\max_{0\le t\le T}C^{-x}_t\ge s|C^{-x}_1=0) \\
&=e^{-2s^2}G\Big(\frac{(1-2T)s}{\sqrt{(1-T)T}}\Big)+G\Big(\frac{s}{\sqrt{(1-T)T}}\Big)
\end{align*}</span>
by <a href="https://www.researchgate.net/publication/236984395_On_the_maximum_of_the_generalized_Brownian_bridge" rel="nofollow noreferrer">Theorem 3.1</a> (with <span class="math-container">$-x,s,1,T,0$</span> in place of <span class="math-container">$\mu,\beta,u,t,\eta$</span>, respectively), where <span class="math-container">$1-G$</span> is the standard normal cdf.
Note that <span class="math-container">$P(M_T\ge s|W_1=x)$</span> does not depend on <span class="math-container">$x$</span> (which of course should have been expected, since the Brownian bridge <span class="math-container">$B$</span> is independent of <span class="math-container">$W_1$</span>). We conclude that
<span class="math-container">\begin{equation}
P(M_T\ge s)
=e^{-2s^2}G\Big(\frac{(1-2T)s}{\sqrt{(1-T)T}}\Big)+G\Big(\frac{s}{\sqrt{(1-T)T}}\Big).
\end{equation}</span>
In particular,
<span class="math-container">\begin{equation}
P(M_{1/2}\ge s)
=\tfrac12\,e^{-2s^2}+G(2s)=:l(s).
\end{equation}</span>
On the other hand, since <span class="math-container">$B_{1/2}$</span> equals <span class="math-container">$\tfrac12\,W_1$</span> in distribution, we have <span class="math-container">$P(B_{1/2}\ge s/2)=P(W_1\ge s)=G(s)$</span>.</p>
<p>So, it remains to show that
<span class="math-container">\begin{equation}
d(s):=l(s)-2G(s)\le0.
\end{equation}</span>
For real <span class="math-container">$s>0$</span>, let
<span class="math-container">\begin{equation}
d_1(s):=d'(s)e^{2 s^2}
=\sqrt{\tfrac{2}{\pi }} e^{3 s^2/2}-2 s-\sqrt{\tfrac{2}{\pi }}.
\end{equation}</span>
Then <span class="math-container">$d_1'(s)=3 \sqrt{\frac{2}{\pi }} e^{3 s^2/2} s-2$</span> and hence <span class="math-container">$d_1'$</span> is clearly <span class="math-container">$-+$</span>; that is, <span class="math-container">$d_1'$</span> may change its sign at most once (on <span class="math-container">$(0,\infty)$</span>) and only from <span class="math-container">$-$</span> to <span class="math-container">$+$</span>.
So, <span class="math-container">$d_1$</span> is down-up -- that is, there is some <span class="math-container">$c\in[0,\infty]$</span> such that <span class="math-container">$d_1$</span> is decreasing on <span class="math-container">$(0,c]$</span> and increasing on <span class="math-container">$[c,\infty)$</span>. Also, <span class="math-container">$d_1(0)=0$</span> and <span class="math-container">$d_1(\infty-)=\infty$</span>. So, <span class="math-container">$d_1$</span> is <span class="math-container">$-+$</span> and hence <span class="math-container">$d$</span> is down-up. Also, <span class="math-container">$d(0)=0=d(\infty-)$</span>. So, <span class="math-container">$d<0$</span> (on <span class="math-container">$(0,\infty)$</span>), as desired.</p>
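<p>A numerical sanity check of the final inequality (an addition to the proof above; it evaluates <span class="math-container">$d(s)$</span> on a grid using <span class="math-container">$G(s)=\frac12\operatorname{erfc}(s/\sqrt2)$</span>):</p>

```python
# Check d(s) = (1/2) e^{-2 s^2} + G(2s) - 2 G(s) <= 0 on a grid,
# where G(s) = P(Z >= s) is the standard normal tail probability.
import math

def G(s):
    return 0.5 * math.erfc(s / math.sqrt(2))

grid = [k / 100 for k in range(5, 601)]          # s from 0.05 to 6.00
d = [0.5 * math.exp(-2 * s * s) + G(2 * s) - 2 * G(s) for s in grid]
print(max(d) < 0)  # True: d < 0 at every grid point
```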
|
3,212,488 | <p><span class="math-container">$$\int_{0}^{\infty} \ln{(1+a x)}{x^{-b-1}} dx$$</span><br>
I defined two branch cuts along the real axis: <span class="math-container">$[-\infty ,-\frac{1}{a}]$</span> & <span class="math-container">$[0,\infty]$</span> with the following contour: <a href="https://i.stack.imgur.com/4rd0E.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4rd0E.jpg" alt="contour"></a><br>
I defined the <span class="math-container">$arg{(z)} =0$</span> above the positive branch cut and <span class="math-container">$arg(z)=2\pi$</span> below the positive branch cut. Similarly <span class="math-container">$arg(z)=\pi$</span> above the negative branch cut and <span class="math-container">$arg(z)=-\pi$</span> below the negative branch cut.</p>
<p>Using the triangle inequality for integrals it can easily (but with care) be shown that the integrals along all parts of the circles tend to <span class="math-container">$0$</span> as <span class="math-container">$R \to \infty$</span> (radius of the outer circle) and <span class="math-container">$\epsilon \to 0$</span> (radius of the smaller circles around the singularities).<br>
By doing this we get the necessary bounds for a and b: <span class="math-container">$a>0$</span> and <span class="math-container">$0<b<1$</span></p>
<p>The integrals along the positive branch cut work out nicely and result in: <span class="math-container">$$(1-e^{-2 b \pi i}) \int_{0}^{\infty} \ln{(1+a x)}{x^{-b-1}} dx$$</span><br>
When I work out the integrals along the negative branch cut, I end up with the following for the integral along its top side:
<span class="math-container">$$e^{-b \pi i} \int_{\infty}^{\frac{1}{a}} \ln{(1-a x)}{x^{-b-1}} dx$$</span><br>
After factoring out <span class="math-container">$-1$</span> from the inside of the ln you are left with:
<span class="math-container">$$e^{-b \pi i} \int_{\infty}^{\frac{1}{a}} (\ln{(a x-1)}-i \pi){x^{-b-1}} dx$$</span><br>
Applying the same procedure to the integral along the bottom side of the negative branch cut yields: <span class="math-container">$$e^{b \pi i} \int_{\frac{1}{a}}^{\infty} (\ln{(a x-1)}+i \pi){x^{-b-1}} dx$$</span><br>
This is troublesome because the <span class="math-container">$e^{b \pi i}$</span> stops me from combining the two integrals along the negative branch cut in order to cancel out the integrals involving the <span class="math-container">$\ln$</span><br>
In the following <a href="http://residuetheorem.com/2015/10/27/integral-with-two-branch-cuts-ii/" rel="nofollow noreferrer">article</a> the author magically has <span class="math-container">$e^{-b \pi i}$</span> in front of the integral which allows him to cancel out the parts including the <span class="math-container">$\ln$</span> which simplifies it enormously. </p>
<p>Can someone please explain to me what I am doing wrong with the integrals along the negative branch cut? </p>
<p>Any help is greatly appreciated!</p>
| I was suspended for talking | 474,690 | <p>If <span class="math-container">$P(X=\infty)=1$</span> then <span class="math-container">$E e^{-X} = E[1\{X=\infty\}e^{-X}] = 0$</span>. Conversely, suppose <span class="math-container">$P(X<\infty)>0$</span>. Then <span class="math-container">$E[e^{-X}] \geq E[1\{X<\infty\}e^{-X}] > 0$</span> since <span class="math-container">$e^{-x}>0$</span> for all <span class="math-container">$x$</span> contradicting <span class="math-container">$E e^{-X} = 0$</span>.</p>
|
1,761,273 | <p>Good morning. I have a problem with this:</p>
<p>Find the maximum and minimum distances from the origin to the curve <span class="math-container">$$g\left(x,y\right)=5x^{2}+6xy+5y^{2}=8$$</span></p>
<p>I have done this:</p>
<p>Function to optimize:<span class="math-container">$f\left(x,y\right)=x^{2}+y^{2}$</span></p>
<p>Restriction: <span class="math-container">$g\left(x,y\right)=5x^{2}+6xy+5y^{2}=8
$</span></p>
<p>Applying Lagrange multipliers:
<span class="math-container">$\nabla f\left(x,y\right)=\lambda\nabla g\left(x,y\right)$</span></p>
<p>Then,
<span class="math-container">$\nabla f\left(x,y\right)=2x\hat{i}+2y\hat{j}$</span> and <span class="math-container">$\lambda\nabla g(x,y)=\lambda(2x+\frac{6}{5}y)\hat{i}+\lambda\left(2y+\frac{6}{5}x\right)\hat{j}
$</span></p>
<p>Forming the system of equations:</p>
<p><span class="math-container">$\begin{cases}
2x=(2x+\frac{6}{5}y)\lambda\\
2y=(2y+\frac{6}{5}x)\lambda\\
5x^{2}+6xy+5y^{2}=8
\end{cases}$</span></p>
<p>But I have serious problem solving the system. Any suggestions?</p>
| boojum | 882,145 | <p>It turns out that this system of Lagrange equations doesn't really warrant the use of Mathematica. We are faced with</p>
<p><span class="math-container">$$ \ 2x \ = \ (2x \ + \ \frac{6}{5}y)\lambda \ \ , \ \ 2y \ = \ (2y \ + \ \frac{6}{5}x)\lambda \ \ . $$</span></p>
<p>Since bringing all the terms to one side in each equation won't help us to factor the expressions, we might instead solve each equation for <span class="math-container">$ \ \lambda \ $</span> to obtain</p>
<p><span class="math-container">$$ \lambda \ \ = \ \ \frac{x}{x \ + \ \frac{3}{5} y} \ \ = \ \ \frac{y}{y \ + \ \frac{3}{5} x} \ \ . $$</span></p>
<p>Assuming for the moment that neither denominator is equal to zero (it will turn out that they are not), we can "cross-multiply" the ratios to produce</p>
<p><span class="math-container">$$ xy \ + \ \frac{3}{5} x^2 \ \ = \ \ xy \ + \ \frac{3}{5} y^2 \ \ \Rightarrow \ \ x^2 \ = \ y^2 \ \ \Rightarrow \ \ y \ = \ \pm x \ \ . $$</span></p>
<p>We have two cases now for insertion into the curve equation:</p>
<p><span class="math-container">$$ \mathbf{y = x :} \quad 5x^2 \ + \ 6xy \ + \ 5y^2 \ = \ 8 \ \ \rightarrow \ \ 5x^2 \ + \ 6·x·x \ + \ 5x^2 \ = \ 8 \ \ \Rightarrow \ \ 16x^2 \ = \ 8 $$</span> <span class="math-container">$$ \Rightarrow \ \ x^2 \ = \ \frac{1}{2} \ \ ; $$</span>
<span class="math-container">$$ \mathbf{y = -x :} \quad \quad \quad \rightarrow \ \ 5x^2 \ + \ 6·x·(-x) \ + \ 5x^2 \ = \ 8 \ \ \Rightarrow \ \ 4x^2 \ = \ 8 \ \ \Rightarrow \ \ x^2 \ = \ 2 \ \ . $$</span></p>
<p>If we only need the "distance-squared" from the origin for the extremal points, then the cases are <span class="math-container">$ \ \mathbf{y = x :} \ \ x^2 \ + \ y^2 \ = \ \frac{1}{2} \ + \ \frac{1}{2} \ = \ 1 \ \ $</span> and <span class="math-container">$ \ \ \mathbf{y = -x :} \ \ x^2 \ + \ y^2 \ = \ 2 \ + \ 2 \ = \ 4 \ , $</span> making the extremal distances from the origin <span class="math-container">$ \ 1 \ $</span> and <span class="math-container">$ \ 2 \ . $</span> [The points at these distances are <span class="math-container">$ \ \left( \pm \frac{1}{\sqrt{2}} \ , \ \pm \frac{1}{\sqrt{2}} \right) \ $</span> and <span class="math-container">$ \ \left( \pm \sqrt{2} \ , \ \mp \sqrt{2} \right) \ , $</span> respectively.]</p>
<p>The constraint curve <span class="math-container">$ \ 5x^2 \ + \ 6xy \ + \ 5y^2 \ = \ 8 \ $</span> has symmetry about the origin, so it is reasonable to expect a critical point in each quadrant (with extremal points of the same type in opposite quadrants). The curve is a "rotated ellipse" for which we have found the lengths of its semi-major and semi-minor axes and the locations of their endpoints.</p>
<p><a href="https://i.stack.imgur.com/uYbnm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uYbnm.png" alt="enter image description here" /></a></p>
|
2,823,568 | <p>It is well known that the sum</p>
<p>$$
\sum _{{k=0}}^{\infty }{\frac {x^{k}}{k!}}
$$</p>
<p>converges to $e^{x}$. In particular, for $x=1$ we have $\sum _{{k=0}}^{\infty }{\frac {1}{k!}}=e$. But what about the sum over the reciprocals of primorials, i.e.,</p>
<p>$$
\sum _{{k=0}}^{\infty }{\frac {x^{k}}{k\#}},
$$</p>
<p>where $k\#$ denotes the product of all primes equal to or smaller than $k$. Does the sum $\sum _{{k=0}}^{\infty }{\frac {1}{k\#}}$ converge, like its factorial analogue does?</p>
<p>In the same spirit, it would also be interesting to ask whether the sum</p>
<p>$$
\sum _{{k=0}}^{\infty }{\frac {x^{k}}{p_k\#}}
$$</p>
<p>converges, where $p_k\#$ is the product of the first $k$ primes. Unfortunately, I have found no reference to such sums after a quick search on internet. What can be said about these sums?</p>
| Pagode | 564,189 | <p>More precisely, we use the Chebyshev $\theta$ function:</p>
<p>So </p>
<p>$$ (a_n)_{n \in \mathbb{N}^*} \triangleq\dfrac{1}{n\#}= \dfrac{1}{\prod_{p\leq n} p} = \dfrac{1}{e^{\theta(n)}} $$</p>
<p>Thus (by Hardy's proof in <a href="https://rads.stackoverflow.com/amzn/click/0198531710" rel="nofollow noreferrer">this book</a>)</p>
<p>$$ a_n = \dfrac{1}{e^{n+o(n)}}=e^{-n}e^{-o(n)} $$ </p>
<p>And the radius of convergence is given by </p>
<p>$$ R=\dfrac{1}{\limsup(a_n^{\frac{1}{n}})}$$</p>
<p>i.e. </p>
<p>$$ R= e$$</p>
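<p>A numerical illustration (an addition; the naive trial-division primality test is chosen only for self-containment): since $a_n^{-1/n} = e^{\theta(n)/n}$ and $\theta(n)\sim n$, the computed value should be close to $e$:</p>

```python
# Estimate a_n^(-1/n) = e^(theta(n)/n), which should approach e = 2.71828...
# theta(n) is the sum of log p over primes p <= n, so n# = e^(theta(n)).
import math

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))

N = 20000
theta = sum(math.log(p) for p in range(2, N + 1) if is_prime(p))
print(math.exp(theta / N))  # roughly 2.7, consistent with radius R = e
```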
|
1,831,250 | <p>I have come to the conclusion that the most efficient and thorough way to prove whether or not a limit exists in three dimensions is to use polar coordinates. </p>
<p>$\lim_{(x,y) \to (0,0)} \frac{x^3+y^3}{x^2+y^2}$</p>
<p>Substituting for polar coordinates:</p>
<p>$\lim_{r \to 0^+} \frac{r^3(\cos^3 \Theta + \sin^3 \Theta)}{r^2(\sin^2 \Theta + \cos^2 \Theta)}$</p>
<p>$\lim_{r \to 0^+} r (\cos^3 \Theta + \sin^3 \Theta)$</p>
<p>As we can see, the factor multiplying $r$ is bounded independently of $\Theta$, and therefore the limit exists and is in fact $0$. However, showing that it is independent using notation is a little bit of a grey area. </p>
<p>$\vert \sin^3 \Theta + \cos^3 \Theta \vert \le 1$ (since $\sin$ and $\cos$ are bounded by $1$). Then $\vert r(\sin^3 \Theta + \cos^3 \Theta) \vert \le \vert r \vert \to 0$ as $r \to 0$. Is this logical? To be honest I am not even sure what this is saying: if $\cos$ and $\sin$ are bounded by one, how does this have anything to do with $r$? Also, if I had $2x^3$ in the numerator, how would that change this? I am not even sure how polar coordinates would be substituted with the $2$; I would assume the constant would be pulled outside.</p>
| hmakholm left over Monica | 14,366 | <p>This works in the particular case where the numerator and denominator of your function are both <em>homogeneous</em> polynomials in $x$ and $y$ (and the one in the denominator has no nontrivial roots) -- that is, in every term the sum of the exponents of $x$ and $y$ is the same. In that case you have
$$ \frac{p(x,y)}{q(x,y)} = r^k h(\theta) $$
where $h$ is some <em>continuous</em> function with period $2\pi$.</p>
<p>Then $h$ is necessarily <em>bounded</em>, and therefore you can conclude that $\lim_{x,y\to 0} \frac{p(x,y)}{q(x,y)} = 0$ if $k>0$.</p>
<p><strong>However,</strong> beware that when the function doesn't split as nicely as this, note that it is <em>not enough</em> that the limit under $r\to 0^+$ exists separately for each $\theta$ -- even if the limit is the same for all $\theta$. For example, if
$$ f(x,y) = \begin{cases} \frac{x^2+y^2}{y} & \text{when }y> 0 \\ 0 & \text{when }y\le 0 \end{cases} $$
In this case $\lim_{r\to 0^+} f(r\cos\theta,r\sin\theta)=0$ for each $\theta$, but $\lim_{x,y\to 0} f(x,y)$ nevertheless doesn't exist. The above reasoning breaks down because $h$ is now not continuous when $\theta$ is a multiple of $\pi$.</p>
<p>For another example where the function isn't even defined by cases, see <a href="https://math.stackexchange.com/questions/753381/limit-is-found-using-polar-coordinates-but-it-is-not-supposed-to-exist">Limit is found using polar coordinates but it is not supposed to exist.</a>, which Git Gud pointed out. (In that example the reasoning above breaks down even earlier; the dependence on $r$ and $\theta$ don't decouple at all).</p>
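<p>A small numerical demonstration of the pitfall (an addition; <code>f</code> is the cases function above): along every fixed direction the values tend to $0$, yet along the parabola $y=x^2$ they tend to $1$:</p>

```python
# f -> 0 along every ray theta = const, but f -> 1 along y = x^2,
# so the two-variable limit at the origin does not exist.
import math

def f(x, y):
    return (x * x + y * y) / y if y > 0 else 0.0

theta = 1.0  # any fixed direction with sin(theta) > 0
print([f(r * math.cos(theta), r * math.sin(theta)) for r in (1e-1, 1e-3, 1e-6)])  # -> 0
print([f(x, x * x) for x in (1e-1, 1e-3, 1e-6)])                                  # -> 1
```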
|
597,610 | <p>can someone explain why when a radical equation is solved some solutions don't work? Is there a rule when a certain transformation is performed on the equation it gives extra solutions? thanks very much</p>
| 2'5 9'2 | 11,123 | <p>The steps you may be used to for solving an equation are one-to-one. If all I know about $x$ is that $2x+1=0$, then</p>
<p>$$\begin{align}
&&2x+1&=0\\
&\iff&2x&=-1\\
&\iff&x&=-\frac12
\end{align}$$</p>
<p>Each step is an implication but so too would be the steps in the reverse order.</p>
<p>If all I know about $x$ is that $\sqrt{2-x}=x$ then I might start by squaring both sides:
$$\begin{align}
&&\sqrt{2-x}&=x\\
&\implies&2-x&=x^2\\
\end{align}$$</p>
<p>but already this cannot be reversed. To take the square root of the second line would leave us with $\sqrt{2-x}=|x|$, which is not what we started with. So the one-sided implication symbol can be used to emphasize that there is no going back. Continuing,</p>
<p>$$\begin{align}
&&\sqrt{2-x}&=x\\
&\implies&2-x&=x^2\\
&\iff&0&=(x+2)(x-1)
\end{align}$$</p>
<p>So we've found that if any $x$ at all satisfies $\sqrt{2-x}=x$, then it would have to be $-2$ or $1$. But of course, that has merely narrowed it down from infinitely many possible $x$-values down to two possible $x$-values. It's up to you do a little something more to actually solve the original equation.</p>
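<p>The "little something more" can be done mechanically: square, solve, then test each candidate in the original equation (a short illustrative script):</p>

```python
# Squaring sqrt(2 - x) = x gives x^2 + x - 2 = 0 with candidates -2 and 1;
# substituting back shows only x = 1 solves the original equation.
import math

candidates = [-2, 1]                       # roots of x^2 + x - 2
solutions = [x for x in candidates if math.sqrt(2 - x) == x]
print(solutions)  # [1] : x = -2 is extraneous
```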
|
120,863 | <blockquote>
<p>A fair die is rolled $4$ times. What is the probability that the same number comes up at least $2$ times?</p>
</blockquote>
<p>At first, I applied the binomial law and made all the calculations.</p>
<p>I set up a random variable. The possible values for the variable were $0,1,2,3$ and $4$.
My thought was that any number has a probability of $\frac{1}{6}$ of coming up.</p>
<p>Then I started to think. In the $4$ rolls, two different numbers can come up $2$ times each. So my previous thoughts were wrong.</p>
<p>Can you give me an idea on how to solve this problem?</p>
| Peđa | 15,660 | <p>"Alea iacta est !!"</p>
<p>$$P=\frac{6^4-\frac{6!}{2!}}{6^4} \approx 0.722$$</p>
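<p>The formula can be confirmed by exhaustive enumeration of all $6^4=1296$ outcomes (an added check):</p>

```python
# Count outcomes of 4 rolls in which some value appears at least twice:
# complement counting gives 6^4 - 6*5*4*3 = 1296 - 360 = 936.
from itertools import product

hits = sum(1 for roll in product(range(6), repeat=4) if len(set(roll)) < 4)
print(hits, hits / 6**4)  # 936 0.7222...
```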
|
3,270,504 | <blockquote>
<p>Prove that <span class="math-container">$\log|e^z-z|\leq |z|+1$</span> where <span class="math-container">$z\in\mathbb{C}$</span> with <span class="math-container">$|z|\geq e$</span>.</p>
</blockquote>
<p><strong>Background:</strong></p>
<p>This is from a proof that <span class="math-container">$e^z-z$</span> has infinitely many zeroes. At the present stage we have assumed, for contradiction, that <span class="math-container">$e^z-z$</span> has no zeros.</p>
<p><strong>My attempt:</strong></p>
<p>I assume that the meaning of <span class="math-container">$\log$</span> here is the principal branch of <span class="math-container">$\log$</span>.</p>
<p>We know that <span class="math-container">$|w|\in\mathbb{R} ,\ \forall w\in\mathbb{C}$</span>. Because <span class="math-container">$\log$</span> is increasing in <span class="math-container">$\mathbb{R}^+$</span> and according to the triangle inequality we get <span class="math-container">$$\log|e^z-z|\leq\log(|e^z|+|z|)$$</span> But I'm not sure how to proceed. Thanks.</p>
| Lutz Lehmann | 115,115 | <p>There exists a direct quick solution to the problem behind the question, and a solution using the logarithmic instead of the exponential approach that is probably also easier to handle when using the Rouché theorem.</p>
<h3>Quick solution using the poly-log or Lambert-W functions</h3>
<p>Roots of <span class="math-container">$e^z-z=0$</span> come in conjugate complex pairs. As the equation is equivalent to <span class="math-container">$-ze^{-z}=-1$</span>, these roots can be written with the Lambert-W function as <span class="math-container">$z_k=-W_{k}(-1)$</span>.</p>
<pre><code>from scipy.special import lambertw   # mpmath's lambertw also works

for k in range(-5, 5):
    print(k, -lambertw(-1, k))
>>>>>>
-5 (3.287768611544094 +26.580471499359145j)
-4 (3.020239708164501 +20.272457641615222j)
-3 (2.6531919740386973+13.949208334533214j)
-2 (2.062277729598284 + 7.588631178472513j)
-1 (0.3181315052047642+ 1.3372357014306893j)
0 (0.3181315052047642- 1.3372357014306893j)
1 (2.062277729598284 - 7.588631178472513j)
2 (2.6531919740386973-13.949208334533214j)
3 (3.020239708164501 -20.272457641615222j)
4 (3.287768611544094 -26.580471499359145j)
</code></pre>
<hr />
<h2>Long solution using a fixed-point argument</h2>
<p>Now one could ask for the existence of these infinitely many branches of the Lambert-W function and a more precise location of the roots.</p>
<h3>Approximate root location</h3>
<p>For any root <span class="math-container">$z=x+iy$</span> satisfying <span class="math-container">$z=e^z$</span> we get <span class="math-container">$$x^2+y^2=e^{2x}.$$</span> First this means that <span class="math-container">$e^x>|x|$</span> or <span class="math-container">$x>-W_0(1)=-0.56714329...$</span>. For large <span class="math-container">$x$</span> we have <span class="math-container">$e^x\gg x$</span> so that the value of <span class="math-container">$y$</span> has to dominate, <span class="math-container">$y\sim\pm e^x$</span>. Then the phase of <span class="math-container">$z$</span>, as <span class="math-container">$z$</span> is almost on the imaginary axis, is <span class="math-container">$\pm\frac\pi2$</span>. As that has to be reflected in the imaginary part of the exponent in <span class="math-container">$e^z$</span>, this requires that for the root in the upper half-plane <span class="math-container">$Im(z)\approx y_k= 2\pi k+\frac\pi2$</span>, <span class="math-container">$k>0$</span>.</p>
<p>Then a first approximation for the root is <span class="math-container">$$z\approx \ln(y_k)+iy_k=\ln(2\pi k+\frac\pi2)+i(2\pi k+\frac\pi2).$$</span></p>
<h3>Construction of a fixed-point function</h3>
<p>It appears thus advisable to orient the corrections to this approximation in the same direction, that is, set <span class="math-container">$z=\ln(y_k)+iy_k+iw$</span>, so that
<span class="math-container">$$
-i\ln(y_k)+y_k+w=-iz=-ie^{z}=e^{-i\frac\pi2+z}=y_ke^{iw}\\
w=g(w)=-i\,Ln\left(1+\frac{w-i\ln(y_k)}{y_k}\right)
$$</span></p>
<h3>Contractivity</h3>
<p>The fixed-point map <span class="math-container">$g$</span> has, for <span class="math-container">$k\ge 1$</span>, the derivative bound
<span class="math-container">$$
g'(w)=\frac{-i}{y_k-i(w+\ln(y_k))}\implies |g'(w)|<\frac1{2\pi k} ~~\text{for }|w|\le1
$$</span></p>
<h3>Self-mapping property</h3>
<p>Further, from <span class="math-container">$|Ln(1+v)|\le\frac{|v|}{1-|v|}$</span> is obtained that for <span class="math-container">$|w|\le1$</span> and <span class="math-container">$k\ge 1$</span>
<span class="math-container">$$
|g(w)|\le \frac{1+\ln(y_k)}{y_k-1-\ln(y_k)}<0.64<1.
$$</span></p>
<h3>Conclusion</h3>
<p>This means that the unit disk <span class="math-container">$\Bbb D$</span> is mapped into itself and the mapping <span class="math-container">$g$</span> is contractive on the unit disk, so that there is exactly one fixed-point, meaning exactly one root of the equation in the translated disk <span class="math-container">$(\ln y_k+iy_k)+\Bbb D$</span>.</p>
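<p>The analysis above also suggests a practical numerical check (an addition; Newton's method is used here instead of iterating <span class="math-container">$g$</span>): starting from the approximation <span class="math-container">$\ln(y_k)+iy_k$</span>, the iteration converges to the roots listed in the quick solution:</p>

```python
# Refine the approximate roots ln(y_k) + i*y_k, y_k = 2*pi*k + pi/2,
# with Newton's method on f(z) = e^z - z.
import cmath
import math

for k in range(1, 5):
    y = 2 * math.pi * k + math.pi / 2
    z = complex(math.log(y), y)                       # initial guess
    for _ in range(50):
        z -= (cmath.exp(z) - z) / (cmath.exp(z) - 1)  # Newton step
    assert abs(cmath.exp(z) - z) < 1e-12              # z is a root of e^z - z
    print(k, z)
```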
|
1,647,442 | <p>Prove using the binomial theorem: for all integers $n ≥0$ and for all nonnegative real numbers $x$, $1+nx ≤(1+x)^n$. </p>
<p>I don't have an idea of how to start this one. I don't know how to use mathematical induction yet, so I need an answer without that.</p>
| JP McCarthy | 19,352 | <p>I define $\%=\frac{1}{100}$ so</p>
<p>$$10\%=10\cdot\frac{1}{100}=\frac{1}{10}.$$</p>
|
1,647,442 | <p>Prove using the binomial theorem: for all integers $n ≥0$ and for all nonnegative real numbers $x$, $1+nx ≤(1+x)^n$. </p>
<p>I don't have an idea of how to start this one. I don't know how to use mathematical induction yet, so I need an answer without that.</p>
| Christian Blatter | 1,303 | <p>If a school has to send ${1\over10}$ of its students then strictly speaking that school cannot take part if its total number of students is not divisible by $10$. If the school has to send $10\%$ of its students then everyone accepts that the resulting number is rounded to the nearest integer.</p>
|
4,148,930 | <p>Let's consider the function <span class="math-container">$\log x$</span>; how can I prove that it is transcendental over the field of rational functions, i.e. that there is no nonzero polynomial in two variables <span class="math-container">$p(x,y)$</span> such that <span class="math-container">$p(x,\log x)=0$</span> identically?</p>
<p>I have been trying different approaches: viewing the logarithm as a function on the real numbers, as a formal series, or as a holomorphic function on an open subset of the complex plane, but I was not able to make any of them work. I have also tried looking for the differences with <span class="math-container">$\sqrt{1+x}$</span>, which is algebraic over the rational functions and can likewise be defined on a subset of the reals, as a formal series, or on an open subset of the complex plane.</p>
<p>Could you please help me?</p>
| orangeskid | 168,051 | <p>HINT:</p>
<p>Let us show that we cannot have an equality</p>
<p><span class="math-container">$$\sum_{k=0}^m P_k(x) \exp ( \lambda_k x) = 0$$</span>
where <span class="math-container">$P_k(x)$</span> are non-zero polynomials and <span class="math-container">$\lambda_k$</span> are distinct complex numbers. Otherwise we would have</p>
<p><span class="math-container">$$P_0(x) \exp (\lambda_0 x) = -\sum_{k=1}^m P_k(x) \exp (\lambda_k x) $$</span></p>
<p>Now, writing <span class="math-container">$d_k=\deg P_k$</span>, the right hand side is annihilated by the differential operator
<span class="math-container">$$\prod_{k=1}^m (D - \lambda_k\cdot I)^{d_k+1}$$</span>
while the LHS by <span class="math-container">$(D-\lambda_0)^{d_0+1}$</span>. But the <span class="math-container">$\gcd$</span> of the polynomials <span class="math-container">$\prod_{k=1}^m(T- \lambda_k)^{d_k+1}$</span> and <span class="math-container">$(T-\lambda_0)^{d_0+1}$</span> is <span class="math-container">$1$</span>. It follows that both sides are annihilated by the identity differential operator, so both are <span class="math-container">$0$</span>, contradiction.</p>
|
500,446 | <p>Let $p$ be a prime with $p \geq 5$. Consider the congruence $x^3 \equiv a$ (mod p) with $\gcd(a,p)=1$. Show that the congruence has either no solution or three incongruent solutions modulo $p$ if $p \equiv 1$ (mod 6), and has a unique solution modulo $p$ if $p \equiv 5$ (mod 6).</p>
<p>My attempt: By Lagrange's theorem, the congruence $x^3 \equiv a$ (mod p) has at most $3$ incongruent solutions modulo $p$. Suppose the congruence has a solution $b$, so that $b^3 \equiv a$ (mod p). Then $x^3 \equiv a \equiv b^3$ (mod p) $\Rightarrow x^3 -b^3 \equiv 0$ (mod p). Note that $x^3 -b^3=(x-b)(x^2+bx+b^2)$.</p>
<p>Now I am stuck here. I observe that if $p \equiv 1$ (mod 6), then $(x^2+bx+b^2) \equiv 0$ (mod p) has two incongruent solutions modulo $p$, and if $p \equiv 5$ (mod 6), then $(x^2+bx+b^2) \equiv 0$ (mod p) has no solution modulo $p$.</p>
<p>Can anyone guide me?</p>
| lab bhattacharjee | 33,337 | <p>$$x^3\equiv a\pmod p\ \ \ \ (1)$$</p>
<p>Taking <a href="http://mathworld.wolfram.com/DiscreteLogarithm.html" rel="nofollow">Discrete logarithm</a>, wrt some primitive root $g\pmod p$ as we can <a href="http://planetmath.org/node/38477" rel="nofollow">prove</a> that every prime has at least one primitive root </p>
<p>$3\cdot\text{ind}_gx\equiv \text{ind}_ga\pmod {p-1}\ \ \ \ (2)$</p>
<p>Using <a href="http://www.proofwiki.org/wiki/Solution_of_Linear_Congruence" rel="nofollow">Linear congruence theorem</a>, $(2),$ hence $(1)$ will be solvable if $(3,p-1)|\text{ind}_ga$</p>
<p>If $p=6b+5,$ where integer $b\ge0$,
$\implies (3,p-1)=(3,6b+4)=(3,6b+4-3\cdot 2b)=(3,4)=1$</p>
<p>$\displaystyle (3,p-1)=1$ will always divide $\text{ind}_ga\ \forall a$, and consequently from the Linear congruence theorem we have $(3,p-1)=1$ solution $\forall a$</p>
<p>If $p=6b+1,$ where integer $b\ge0 \implies(3,p-1)=(3,6b)=3$</p>
<p>$(2),$ hence $(1)$ will be solvable if $(3,p-1)=3|\text{ind}_ga$</p>
<p>If so, from the Linear congruence theorem, we have $(3,p-1)=3$ solutions if $3|\text{ind}_ga$, and no solutions otherwise</p>
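<p>A brute-force check of the claimed solution counts (an added illustration):</p>

```python
# For each a coprime to p, count solutions of x^3 = a (mod p):
# counts should be {0, 3} when p = 1 (mod 6) and always 1 when p = 5 (mod 6).
for p in [5, 7, 11, 13, 17, 19, 23, 29, 31]:
    counts = {a: sum(1 for x in range(p) if pow(x, 3, p) == a) for a in range(1, p)}
    expected = {0, 3} if p % 6 == 1 else {1}
    assert set(counts.values()) == expected
print("claim verified for the primes tested")
```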
|
2,859,411 | <p>Let $y(t)$ be a real valued function defined on the real line such that
$y'= y(1 − y)$, with $y(0) \in [0, 1]$. Then $\lim_{t\to\infty} y(t) = 1$.</p>
<p>The solution is given as false, but I have no idea why. I tried some counterexamples but they didn't work.</p>
<p>How can I find the solution within 3 minutes?</p>
| Mohammad Riazi-Kermani | 514,496 | <p>One way is to solve this logistic equation and find the limit, but it might take more than $3$ minutes.</p>
<p>The other way is qualitative analysis.</p>
<p>Find the equilibrium states which are $y=0$ and $y=1$</p>
<p>Note that if $ y_0 \in (0,1) $ then your $y'=y(1-y)>0$ and it stays positive, so your function increases and approaches the equilibrium state of $y=1$.</p>
<p>Note that this argument fails if $y_0 =0$, because then you stay at $y=0$ forever and you do not approach $1$</p>
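<p>For completeness, the equation can also be solved in closed form by separating variables, which makes both cases visible at once (a quick illustration):</p>

```python
# Closed-form logistic solution: y(t) = y0*e^t / (1 - y0 + y0*e^t).
# For y0 in (0, 1] the limit is 1; for y0 = 0 the solution is identically 0.
import math

def y(t, y0):
    return y0 * math.exp(t) / (1 - y0 + y0 * math.exp(t))

print(y(50, 0.01))  # very close to 1
print(y(50, 0.0))   # exactly 0: the counterexample to the stated claim
```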
|
1,863,868 | <p>Coordinates: <span class="math-container">$(0,0), (3,3), (6,4.5), (9, 5.25)$</span></p>
<p>If this is a curve is there a formula for determining the <span class="math-container">$y$</span> value for any given <span class="math-container">$x$</span> within the range <span class="math-container">$0$</span> to <span class="math-container">$9?$</span></p>
| Andrei | 331,661 | <p>This particular curve can also be written as $f(x)=6(1-2^{-x/3})$. Here is how I guessed this function:</p>
<ol>
<li><p>you go in steps of $3$, so you might have something like $x/3$.</p></li>

<li><p>At the first step the function increases by $3$, then at the next step by $3/2$, then $3/4$. From here, I would guess that the formula is $$\sum_{j=0}^{s-1}3\left(\frac{1}{2}\right)^j,$$ where $s=x/3$</p></li>

<li><p>$\sum_{j=0}^{s-1}(1/2)^j=\frac{1-(1/2)^{s}}{1-1/2}$, so the total is $3\cdot 2\left(1-(1/2)^{s}\right)=6(1-2^{-x/3})$</p></li>
</ol>
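<p>A quick check that the guessed formula reproduces all four given points (an addition):</p>

```python
# f(x) = 6*(1 - 2**(-x/3)) should hit (0,0), (3,3), (6,4.5), (9,5.25) exactly.
def f(x):
    return 6 * (1 - 2 ** (-x / 3))

for x, target in [(0, 0), (3, 3), (6, 4.5), (9, 5.25)]:
    assert f(x) == target
print([f(x) for x in (0, 3, 6, 9)])  # [0.0, 3.0, 4.5, 5.25]
```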
|
275,526 | <p>What would be an easy example of a sequence of functions defined on a compact interval so that $f_n$ goes to $f$ pointwise but $\sup f_n$ does not go to $\sup f$?</p>
<p>I thought of the usual example used to show that the limit and the integral can't be interchanged when we only have pointwise convergence. Is this correct?</p>
<p>Does $f(x)=x^n$ work in this context?
Any comments or hints?</p>
| Thomas Andrews | 7,933 | <p>Let $f_n(x)=0$ if $x<n$ and $1$ if $x\geq n$. Then $f_n\to 0$ pointwise, but $\sup f_n = 1$ for all $n$.</p>
<p>You can get continuous examples easily enough.</p>
|
3,068,270 | <p>Given that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are real constants and that the equation <span class="math-container">$x^4+ax^3+2x^2+bx+1=0$</span> has at least one real root, find the minimum possible value of <span class="math-container">$a^2+b^2$</span>.</p>
<p>I began this way: Let the polynomial be factorized as <span class="math-container">$(x^2+\alpha x + 1)(x^2+\beta x +1)$</span>. Then expanding and comparing coefficients we get <span class="math-container">$\alpha\beta=0$</span>, meaning either <span class="math-container">$\alpha=0$</span> or <span class="math-container">$\beta=0$</span>. Suppose <span class="math-container">$\alpha=0$</span>. Then we see that <span class="math-container">$(x^2+\beta x+1)$</span> should have real roots, from which we get <span class="math-container">$\beta^2 \geq 4$</span>. But we get <span class="math-container">$a=b=\beta$</span> from the comparison above. So <span class="math-container">$a^2+b^2 = 2\beta^2 \geq 8$</span>.</p>
<p>Is it correct? Or is there any mistake? Any other solution is also welcome.</p>
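<p>A quick numerical check of the boundary case in the attempt above: at $a=b=2$ we get $a^2+b^2=8$, and $x=-1$ should be a real root of the quartic (plain Python):</p>

```python
# With alpha = 0 and beta = 2, the factorization gives a = b = 2,
# so a^2 + b^2 = 8; the quartic x^4 + 2x^3 + 2x^2 + 2x + 1 then
# factors as (x^2 + 1)(x + 1)^2 and has the real root x = -1.
a = b = 2
value = (-1) ** 4 + a * (-1) ** 3 + 2 * (-1) ** 2 + b * (-1) + 1
assert value == 0
assert a ** 2 + b ** 2 == 8
```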
| dmtri | 482,116 | <p>I would write first <span class="math-container">$(x^2+cx+d) (x^2+ex+f) $</span> and then find <span class="math-container">$c, d, e, f$</span> in terms of <span class="math-container">$a, b$</span>.</p>
|
2,880,271 | <p>Let a, b, c and d be real numbers that are not all zero. Let
ax + by = p
cx + dy = q
be a pair of equations in the variables x and y with p, q ∈ R. </p>
<p>Show this system of equations has a unique solution if and only if ad - bc != 0.</p>
<hr>
<p>From Determinant of coefficient matrix, I know (ad -bc) =0 => no unique solution.
Have tried substitution of one equation into another and replacement.</p>
<p>=> ad - bc != 0 ....show that the solution is unique</p>
<p><= the solution is unique ....show that ad - bc != 0</p>
<p>Pointers?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>Hint: let $b\ne 0$ then we get from the first equation</p>
<p>$y=\frac{p}{b}-\frac{a}{b}x$ plugging this in the second equation we get</p>
<p>$$cx+d\left(\frac{p}{b}-\frac{a}{b}x\right)=q$$ and this is</p>
<p>$$x\left(c-\frac{ad}{b}\right)=q-\frac{pd}{b}$$</p>
<p>multiplying by $b$ we get
$$x(bc-ad)=qb-dp$$</p>
<p>Can you proceed?</p>
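<p>A numerical spot check of the derived identity (plain Python, with arbitrarily chosen coefficients satisfying $ad-bc\neq 0$):</p>

```python
# Solve the 2x2 system a x + b y = p, c x + d y = q using the
# eliminated form x (bc - ad) = qb - dp, then verify both equations.
a, b, c, d = 1.0, 2.0, 3.0, 4.0   # ad - bc = -2 != 0
p, q = 5.0, 6.0

x = (q * b - d * p) / (b * c - a * d)
y = (p - a * x) / b               # back-substitute into the first equation

assert abs(a * x + b * y - p) < 1e-12
assert abs(c * x + d * y - q) < 1e-12
```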
|
1,549,506 | <p>To Prove $$\lim_{x \to 0}\frac{x^2+2\cos x-2}{x\sin^3 x}=\frac{1}{12}$$
I tried with L'Hospital rule but in vain.</p>
| David Holden | 79,543 | <p>from
$$
\cos x = 1 - \frac{x^2}2+\frac{x^4}{24}-\dots
$$
we have
$$
x^2+2\cos x - 2 = \frac{2x^4}{24}-\dots
$$
and
$$
\sin x = x-... \\
x\sin^3 x = x^4-...
$$</p>
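<p>These expansions give numerator $\sim x^4/12$ and denominator $\sim x^4$, so the ratio tends to $1/12$; a quick numerical check (plain Python):</p>

```python
import math

# Evaluate (x^2 + 2cos x - 2) / (x sin^3 x) at small x;
# by the series expansions it should approach 1/12 = 0.0833...
def g(x):
    return (x * x + 2 * math.cos(x) - 2) / (x * math.sin(x) ** 3)

for x in (0.1, 0.01):
    assert abs(g(x) - 1 / 12) < 1e-2
```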
|
1,282,592 | <p>I am often asked to prove properties of regular bipartite graphs, and beyond the two parts having equal size nothing seems obvious. Are these graphs more intuitive than they first seem?</p>
<p>In particular, right now I can't work out why an r-regular bipartite graph is r-edge-colourable.</p>
<p>Thanks</p>
| Salomo | 226,957 | <p>An $r$-regular bipartite graph has not only two parts of equal size, but, by Hall's Marriage Theorem, a perfect matching. Hence you can take out that perfect matching, and the graph becomes an $(r-1)$-regular bipartite graph. Repeating this, we get a partition of the edges into $r$ independent classes (perfect matchings), which is exactly the $r$-edge-colouring we want.</p>
|
423,793 | <p>A group of 3141 students gather together. Some of them have 13 friends in this group, some have 33 friends, and the rest has 37 friends. Prove using graph theory that this group does not exist. Assume that if A is friends with B, then B is friends with A.</p>
| Ross Millikan | 1,827 | <p>The handshaking theorem tells you that the sum of the degrees of a graph is even.</p>
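<p>Spelling the hint out: every allowed degree (13, 33, 37) is odd, and there are an odd number (3141) of vertices, so the degree sum would be odd — but it must equal twice the number of edges. A brute-force check over candidate degree counts (plain Python):</p>

```python
# a, b, c = how many students have 13, 33, 37 friends (a + b + c = 3141).
# The degree sum 13a + 33b + 37c is a sum of 3141 odd numbers, hence odd,
# so it can never be twice an edge count.
n = 3141
for a in range(0, n + 1, 500):
    for b in range(0, n - a + 1, 500):
        c = n - a - b
        assert (13 * a + 33 * b + 37 * c) % 2 == 1
```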
|
15,093 | <p>For example, I am confident that very few students majoring in pure mathematics can write a complete proof to the <a href="https://en.wikipedia.org/wiki/Abel%E2%80%93Ruffini_theorem" rel="noreferrer">Abel–Ruffini theorem</a> (there is no algebraic solution to general polynomial equations of degree five or higher with arbitrary coefficients) by the time of their graduation. I suspect many students with a Master's degree or Doctorate in pure mathematics could not prove this theorem either. They may know the conclusion, but may not be able to sketch an idea of the proof, let alone give a complete proof.</p>
<p>My question is: should we educate pure mathematics major students in such a way that they should know how to prove most of the classical results in mathematics such as the Abel–Ruffini theorem and the <a href="https://en.wikipedia.org/wiki/Fundamental_theorem_of_algebra" rel="noreferrer">Fundamental Theorem of Algebra</a> before getting their bachelor's degree, or at least their master's Degrees?</p>
| Tommi | 2,083 | <p>For the project, you would need to define what are "classical results in mathematics".
I suspect that different people would disagree on the classicalness of various results.</p>
<p>Furthermore, it is doubtful if everyone should learn the same classical results - mathematics is a big field. You can, of course, cut it down by a sufficiently strong definition of "pure mathematics", in which case, maybe everyone should know the same results.</p>
<p>I think it would be more fruitful to ask what results a particular university or study programme should consider as key results, concepts and tools. This nicely sidesteps the logistical issue of coordinating mathematics study programmes worldwide and the potentially ugly definitional issue of who qualifies as a pure mathematician. Furthermore, since it retains the current situation of different mathematicians knowing different tools, it widens the scope of overall knowledge. Collaboration with and research periods at foreign universities, or at least universities with different priorities, is useful partially due to such differences.</p>
<p>There is the further issue of how much one can realistically ask of bachelor (or even master) students. As Steve Gubkin mentioned in a comment, many bachelor students have problems with even simple proofs. Depending on how wide one wishes to cast the net of "classical results", it might be infeasible to teach them to everyone without making "everyone" a much smaller set than nowadays.</p>
|
660,034 | <p>I wondered if all decimal expansions of $\frac{1}{n}$ could be thought of in such a way, but clearly for $n=6$,</p>
<p>$$.12+.0024+.000048+.00000096+.0000000192+...\neq.1\bar{6}$$</p>
<p>Why does it work for 7 but not 6? Is there only one such number per base, <em>i.e.</em> 7 in base 10? If so what is the general formula?</p>
| Daniel Robert-Nicoud | 60,713 | <p>Your expansion comes from the fact that you can write:
$$\frac{1}{7}=\frac{7}{50}\sum_{k=0}^\infty 50^{-k}=0.14(1+2\cdot0.01+4\cdot0.0001+\ldots)$$
You can check the equality by using the standard result that for $|q|<1$ you have $\sum_{k=0}^\infty q^k=\frac{1}{1-q}$.</p>
<p>It doesn't work for $6$ because if you wanted a structurally similar sum, you'd have:
$$\frac{1}{6}=\frac{49}{6\cdot50}\sum_{k=0}^\infty 50^{-k}$$
and the fraction $\frac{49}{300}$ in front of the sum does not have a finite decimal expansion.</p>
|
134,815 | <p>Assume I have a shuffled deck of cards (52 cards, all normal, no jokers). I'd like to record the order in my computer in such a way that the <em>ordering</em> requires the fewest bits (I'm not counting look-up tables etc. as part of the deal, just the ordering itself).</p>
<p>For example, I could record a set of strings in memory:</p>
<p>"eight of clubs", "nine of dimonds" </p>
<p>but that's obviously silly, more sensibly I could give each card an (unsigned) integer and just record that... </p>
<p>17, 9, 51, 33... </p>
<p>which is much better (and I think would be around 6 bits per number times 53 numbers so around 318 bits), but probably still not ideal.. for a start I wouldn't have to record the last card, taking me to 312 bits, and if I know that the penultimate card is one of two choices then I could drop to 306 bits plus one bit that was true if the last card was the highest value of the two remaining cards and false otherwise.... </p>
<p>I could do some other flips and tricks, but I also suspect that this is a branch of maths were there is an elegant answer... </p>
| leonbloy | 312 | <p>As the other answers point out, you can code the permutations in several ways, but you'll need at least 226 bits on average.
To read about concrete encoding schemes, you can start <a href="http://en.wikipedia.org/wiki/Permutation#Numbering_permutations" rel="nofollow">here</a></p>
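<p>For a feel of the numbers, here is the information-theoretic floor together with one concrete scheme, a Lehmer-style factorial-number-system code (a sketch in plain Python; this is one possible encoding, not the only one):</p>

```python
import math
import random

# Information-theoretic floor: you need log2(52!) bits on average.
bits = math.log2(math.factorial(52))
assert 225 < bits < 226          # ~225.58, so 226 whole bits

# Factorial-number-system (Lehmer-style) code: map a permutation of
# 0..n-1 to a single integer in [0, n!) and back.
def encode(perm):
    items, code = list(range(len(perm))), 0
    for p in perm:
        i = items.index(p)               # rank of p among remaining items
        code = code * len(items) + i     # mixed-radix digit
        items.pop(i)
    return code

def decode(code, n):
    items, digits = list(range(n)), []
    for k in range(1, n + 1):            # peel off digits, radix 1, 2, ..., n
        code, i = divmod(code, k)
        digits.append(i)
    return [items.pop(i) for i in reversed(digits)]

deck = list(range(52))
random.shuffle(deck)
assert decode(encode(deck), 52) == deck  # lossless round trip
assert encode(deck) < math.factorial(52)
```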
|
662,637 | <p>I have a quick question</p>
<p>If $S$ is an orthonormal set of vectors in $\mathbb R^n$, then $S$ is a basis for the subspace it spans. I am not sure whether this is true, based on the definition of an orthonormal basis: an orthonormal basis of a subspace $W$ of $\mathbb R^n$ is a basis for $W$ that is also an orthonormal set.</p>
| Community | -1 | <p>Denote the orthonormal vectors by $v_1,\ldots,v_p$ and let $a_1,\ldots, a_p\in\mathbb R$ be such that
$$\sum_{k=1}^pa_k v_k=0$$
then</p>

<p>$$0=\left\langle \sum_{k=1}^pa_k v_k, v_j\right\rangle=a_j,\quad \forall j=1,\ldots,p$$
so the vectors are linearly independent.</p>
|
3,436,804 | <p>Does the following series converge? If yes, what is its value in simplest form?</p>
<p><span class="math-container">$$\left( \frac{1}{1} \right)^2+\left( \frac{1}{2}+\frac{1}{3} \right)^2+\left( \frac{1}{4}+\frac{1}{5}+\frac{1}{6} \right)^2+\left( \frac{1}{7}+\frac{1}{8}+\frac{1}{9}+\frac{1}{10} \right)^2+\dots$$</span></p>
<p>I have no idea how to start. Any hint would be really appreciated. THANKS!</p>
| Jack D'Aurizio | 44,121 | <p>Since for moderately large values of <span class="math-container">$n$</span> we have
<span class="math-container">$$ H_n \approx \log(n)+\gamma+\frac{1}{2n}-\frac{1}{12n^2} $$</span>
we also have
<span class="math-container">$$ H_{n(n+1)/2}-H_{n(n-1)/2} \approx \frac{2}{n}-\frac{4}{3n^3}$$</span>
and
<span class="math-container">$$ \sum_{n\geq 1}\left[H_{n(n+1)/2}-H_{n(n-1)/2}\right]^2 \approx 1+\sum_{n\geq 2}\left(\frac{2}{n}-\frac{4}{3n^3}\right)^2=1+\frac{2 \left(-1890+2835 \pi ^2-252 \pi ^4+8 \pi ^6\right)}{8505} $$</span>
is finite and approximately equal to <span class="math-container">$\color{green}{3.17}151$</span>. I doubt there is a simple closed form.</p>
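<p>The estimate can be sanity-checked by brute force (plain Python; the partial sum over the first $N$ groups should land a little below the quoted value because the tail terms are positive):</p>

```python
# Accumulate the series directly: the n-th term is the square of the
# sum of 1/k over the n-th block of consecutive integers (block sizes
# 1, 2, 3, ...).  The estimate above puts the full sum near 3.17.
N = 1500
cut = 0
total = 0.0
for n in range(1, N + 1):
    chunk = sum(1.0 / k for k in range(cut + 1, cut + n + 1))
    cut += n
    total += chunk * chunk
assert 3.1 < total < 3.2
```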
|
902,407 | <p>Exercise</p>
<p>Let $f:G \to G'$ be an isomorphism and let $H\unlhd G$. If $H'=f(H)$, prove that $G/H \cong G'/H'$.</p>
<p>As I've shown that $H'\unlhd G'$, I thought of defining $$\phi(Ha)=H'f(a)$$I was trying to show that this function is well defined and that it is an isomorphism.</p>
<p>I'll work with right cosets (which, since $H$ and $H'$ are normal, it's the same as working with left cosets). I need to know if what I did is correct and I would appreciate some help to show injectivity (maybe the $\phi$ I've defined is not the correct one).</p>
<p>So, what I mean by well-defined is that if $Ha=Ha'$, then $H'f(a)=H'f(a')$. It will be sufficient to show that $f(a)f(a')^{-1} \in H'$; by hypothesis, we have $aa'^{-1} \in H$, which means $f(aa'^{-1}) \in H'$. But then $f(aa'^{-1})=f(a)f(a'^{-1})=f(a)f(a')^{-1} \in H'$. From here one deduces the well definition.</p>
<p>To check $\phi$ is a morphism, we have to show $\phi((Ha)(Hb))=\phi(Ha)\phi(Hb)$. But $(Ha)(Hb)=H(ab)$, so $\phi(((Ha)(Hb))=\phi(H(ab))=H'f(ab)=H'f(a)f(b)=(H'f(a))(H'f(b))=\phi(Ha)\phi(Hb)$.</p>
<p>Surjectivity is almost immediate, take a right coset $H'y$, since $f$ is isomorphic, there is $g \in G$ such that $f(g)=y$, so $\phi(Hg)=H'f(g)=H'y$</p>
<p>Now for injectivity, suppose $\phi(Ha)=\phi(Ha')$, then $H'f(a)=H'f(a')$, I got stuck there.</p>
<p>Any suggestions would be appreciated. Thanks in advance.</p>
| bof | 111,012 | <p>The set $A$ is the union of the two disjoint sets $A\cap B$ and $A-B$, so $|A|=|A\cap B|+|A-B|$. Now, if only we could show that $|B|=|A\cap B|+|B-A|$ . . .</p>
|
4,024,871 | <p>I'm looking to build a function <span class="math-container">$f:S^2 \to \mathbb R^2$</span> such that <span class="math-container">$f(x)\neq f(−x)$</span> for all <span class="math-container">$x\in S^2$</span>.</p>
<p>By Borsuk-Ulam Theorem, this function must be discontinuous. I was trying to build a not too complicated function, but I always encountered a problem.</p>
<p>I appreciate any help.</p>
| R.V.N. | 730,220 | <p>In that case you could try with: <span class="math-container">$$(x_1,x_2,x_3)\in\mathbb{S}^2\longmapsto f(x_1,x_2,x_3):=\begin{cases}(x_1,x_2) &\text{ if } (x_1,x_2,x_3)\neq (0,0,1) \\ (2,2) &\text{ if } (x_1,x_2,x_3)=(0,0,1) \end{cases}$$</span></p>
<p>Then <span class="math-container">$f(x_1,x_2,x_3)=-f(-(x_1,x_2,x_3))\neq(0,0)$</span> for all <span class="math-container">$(x_1,x_2,x_3)\in\mathbb{S}^2\setminus\{(0,0,1),(0,0,-1)\}$</span>, so <span class="math-container">$f(x)\neq f(-x)$</span> there. Also, <span class="math-container">$f(0,0,1)=(2,2)\neq(0,0)=f(0,0,-1)$</span>.</p>
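<p>A randomized spot check of this construction on the sphere (plain Python; points drawn with the rotation-invariant Gaussian method):</p>

```python
import math
import random

# f projects to the first two coordinates, except at the north pole.
def f(x1, x2, x3):
    if (x1, x2, x3) == (0.0, 0.0, 1.0):
        return (2.0, 2.0)
    return (x1, x2)

def random_sphere_point():
    """Uniform point on S^2: normalize a 3D Gaussian sample."""
    while True:
        v = [random.gauss(0, 1) for _ in range(3)]
        r = math.sqrt(sum(t * t for t in v))
        if r > 1e-9:
            return tuple(t / r for t in v)

# The two special points disagree ...
assert f(0.0, 0.0, 1.0) != f(0.0, 0.0, -1.0)
# ... and so does every sampled antipodal pair.
for _ in range(1000):
    x = random_sphere_point()
    assert f(*x) != f(*tuple(-t for t in x))
```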
|
939,911 | <p>Question Re-phrased:</p>
<hr>
<p>I'm having a lot of trouble wrapping my head around this problem. While I've looked through similar posts, It's difficult understanding the maths because I currently have another approach stuck in my head. Is this valid?</p>
<p>Question:
Prove that an at most countable union of at most countable sets is at most countable.</p>
<p>Proof:
Let <strong>F = {A1, A2, ... , Ak, ...}</strong> be an at most countable family of sets where each <strong>$A_k \in$ F</strong> is also at most countable for <strong>$k \in N$</strong>. Define <strong>S = $ \bigcup_{k} A_k $</strong>.</p>
<p>If every <strong>$A_k \in F $</strong> is finite, then there exists a bijective <strong>$g_k: A_k \rightarrow J_{n_k}$</strong> for every A $\in$ F. Therefore, for some m $\in$ N, S = $J_m$ such that <strong>$m\le\sum_{k=1}^x(n_i) $</strong>. Hence, S is finite and at most countable. However, if at least one $A_k \in F$ exists such that this $A_k$~N ($A_k$ is countable), then S = N because $J_n \subset$ N for all n $\in$ N. Hence, S is countable.</p>
<p>Therefore, S must be at most countable.</p>
<hr>
<p>But this only works if F is finite. Need to rethink this.</p>
| copper.hat | 27,978 | <p>Let $i_k:\mathbb{N} \to A_k$ be a surjection.</p>
<p>Define $W_n = \cup_{k=1}^n i_k(\{ 1,...,n-k+1\})$. Note the $W_n$ are nested, finite and $\cup_{n=1}^\infty W_n = \cup_{k=1}^\infty A_k$.</p>
<p>Let $\Delta_n = W_n \setminus \cup_{k=1}^{n-1} W_k$. By renumbering if necessary, we may assume that all the $\Delta_n$ are non-empty.</p>
<p>Let $d_k = |\Delta_k|$ and
$j_k:\{1,...,d_k\} \to \Delta_k$ be a bijection.</p>
<p>Now let $\delta_n = \min \{ m | n \le d_1+\cdots+d_m \}$ and define the bijection
$\phi:\mathbb{N} \to \cup_{k=1}^\infty A_k$ by
$\phi_n = \begin{cases} j_1(n), & \delta_n = 1 \\j_{\delta_n}(n-(d_1+\cdots+d_{\delta_n-1})) ,& \text{otherwise}\end{cases}$.</p>
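<p>The combinatorial heart of such constructions can be illustrated with the simpler Cantor pairing bijection between $\mathbb N\times\mathbb N$ and $\mathbb N$ (a cousin of the idea, not the same map as above; plain Python):</p>

```python
# Cantor pairing: enumerate N x N along anti-diagonals.
def pair(k, n):
    s = k + n
    return s * (s + 1) // 2 + n

def unpair(z):
    s = 0
    while (s + 1) * (s + 2) // 2 <= z:   # find the anti-diagonal of z
        s += 1
    n = z - s * (s + 1) // 2
    return (s - n, n)

# Bijectivity on an initial grid: round trips work and values are distinct.
seen = set()
for k in range(30):
    for n in range(30):
        z = pair(k, n)
        assert unpair(z) == (k, n)
        seen.add(z)
assert len(seen) == 900
```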
|
532,785 | <p>Let $f:[0, 1] \rightarrow \mathbb{R}$ a continuous function. If $a>0$, show that:</p>
<p>$$ f(0)\ln(\frac{b}{a})=\lim_{\epsilon\rightarrow 0}\int_{\epsilon a}^{\epsilon b} \frac{f(x)}{x}dx$$</p>
<p>Tried using Riemann sum, but did not succeed.</p>
| Mark Bennet | 2,906 | <p>The area of a triangle is half base times height - here $\frac 12 \times 6 \times 8 = 24$</p>
<p>Now join each of the vertices of the triangle to the incentre to get three triangles of height $r$ whose bases are the sides - so the area, as you say, is $\frac 12 \times r \times (6+8+10)=12r$ so that $r=2$.</p>
<p>I can't see the logic of what you are doing in your diagram - you don't seem to be using the sides or angles of the triangle at all. It isn't obvious how you would use the theorem for this - but if you were looking for the circumradius instead of the inradius the sides of the triangle would be chords you could use in the application of the theorem.</p>
|
1,307,460 | <p>Need to integrate this function. Need help with my assignment. Thanks</p>
| Pranav Arora | 117,767 | <p>$$I=\int_0^{\pi/2} \frac{\sin^2x}{\sin x+\cos x}\,dx=\int_0^{\pi/2}\frac{\cos^2x}{\sin x+\cos x}\,dx$$
$$\Rightarrow 2I=\int_0^{\pi/2} \frac{1}{\sin x+\cos x}\,dx=2\int_0^{\pi/4} \frac{1}{\sin x+\cos x}\,dx$$
$$\Rightarrow I=\int_0^{\pi/4} \frac{1}{\sin\left(\frac{\pi}{4}-x\right)+\cos\left(\frac{\pi}{4}-x\right)}\,dx=\frac{1}{\sqrt{2}}\int_0^{\pi/4}\sec x\,dx$$
$$\Rightarrow I=\frac{1}{\sqrt{2}}\left(\ln\left(\sec x+\tan x\right)\right|_0^{\pi/4}=\boxed{\dfrac{1}{\sqrt{2}}\ln\left(\sqrt{2}+1\right)}$$</p>
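<p>A numerical cross-check of the boxed value with a composite trapezoid rule (plain Python):</p>

```python
import math

# Integrate sin^2 x / (sin x + cos x) on [0, pi/2] and compare with
# the closed form (1/sqrt(2)) ln(1 + sqrt(2)) ~ 0.6232.
def integrand(x):
    return math.sin(x) ** 2 / (math.sin(x) + math.cos(x))

n, a, b = 100_000, 0.0, math.pi / 2
h = (b - a) / n
total = 0.5 * (integrand(a) + integrand(b)) + sum(
    integrand(a + i * h) for i in range(1, n)
)
numeric = total * h

closed = math.log(1 + math.sqrt(2)) / math.sqrt(2)
assert abs(numeric - closed) < 1e-8
```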
|
4,644,847 | <p>Is there a formal fallacy to describe lack of an observation is not proof that it does not exist, or lack of an occurrence is not proof that it can never happen?</p>
| ryang | 21,813 | <blockquote>
<p>Is there a formal fallacy to describe that lack of an observation is not proof that it does not exist, or that lack of an occurrence is not proof that it can never happen?</p>
</blockquote>
<p><em>Absence of evidence is not evidence of absence.</em> -Carl Sagan(<a href="https://en.wikipedia.org/wiki/Evidence_of_absence" rel="nofollow noreferrer">?</a>)</p>
<p>You are looking for an insufficient-evidence fallacy; here are two, in the context of your requirement:</p>
<ul>
<li><strong>appeal to ignorance</strong>: something is false because it hasn't been proven true;</li>
<li><strong>hasty generalisation</strong>: a biased/small sample in which something is false is enough to conclude that it is generally false.</li>
</ul>
<p>However, because the above are weak inductive arguments rather than deductive reasoning, they are not <em>formal</em> fallacies, which describe defective logical structures.</p>
|
471,710 | <p>Why do small angle approximations only hold in radians? All the books I have say this is so but don't explain why.</p>
| E.P. | 30,935 | <p>A 'small angle' is equally small whatever system you use to measure it. Thus if an angle is, say, much smaller than 0.1 rad, it will be much smaller than the equivalent in degrees. More typically, saying 'small angle approximation' typically means $\theta\ll1$, where $\theta$ is in radians; this can be rephrased in degrees as $\theta\ll 57^\circ$.</p>
<p>(Switching uses between radians and degrees becomes much simpler if one formally identifies the degree symbol $^\circ$ with the number $\pi/180$, which is what you get from the equation $180^\circ=\pi$. If you're differentiating with respect to the number in degrees, then, you get an ugly constant, as you should: $\frac{d}{dx}\sin(x^\circ)={}^\circ \cos(x^\circ)$.)</p>
<p>In real life, though, you wouldn't usually say 'this angle should be small' without saying what it should be smaller than. If the latter is in degrees then the former should also be in degrees.</p>
<p>That said, though: always work in radians! Physicists tend to use degrees quite often, but there is always the underlying understanding that the angle itself is a quantity in radians and that degrees are just convenient units. Trigonometric functions, in particular, always take their arguments in radians, so that all the math will work well. Always differentiate in radians, always work analytically in radians. And at the end you can plug in the degrees.</p>
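<p>A small numerical illustration in Python, using $5^\circ$ as an example angle:</p>

```python
import math

# sin(x) ~ x only when x is the radian measure.
deg = 5.0
rad = math.radians(deg)          # = deg * pi / 180

# The radian-measure approximation is accurate to well under 1% here...
assert abs(math.sin(rad) - rad) / rad < 0.01
# ...while "sin(x) ~ x with x in degrees" is off by roughly pi/180.
assert abs(math.sin(rad) - deg) / deg > 0.9
```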
|
471,710 | <p>Why do small angle approximations only hold in radians? All the books I have say this is so but don't explain why.</p>
| Norman | 186,107 | <p>Because the sine of an angle is dimensionless (i.e. just a number), the angle itself also has to be dimensionless, and the radian is the dimensionless measure: units on both sides of an equation have to match.</p>
|
2,698,555 | <p><strong>Question</strong></p>
<p>Find all the rational values of <span class="math-container">$x$</span> at which <span class="math-container">$y=\sqrt{x^2+x+3}$</span> is rational.</p>
<p><strong>My attempt</strong></p>
<p>Since we only have to find the rational values of $x$ and $y$, we can assume that
$$ x \in Q$$
$$ y \in Q$$
$$ y-x \in Q $$
Let$$ d = y-x$$
$$d=\sqrt{x^2+x+3}-x$$
$$d+x=\sqrt{x^2+x+3}$$
$$(d+x)^2=(\sqrt{x^2+x+3})^2$$
$$d^2 + x^2 + 2dx =x^2+x+3$$
$$d^2 +2dx = x +3$$
$$x = \frac{3-d^2}{2d-1}$$</p>
<p>$$d \neq \frac{1}{2}$$</p>
<p>So $x$ will be rational as long as $d \neq \frac{1}{2}$.</p>
<p>Now
$$ y = \sqrt{x^2+x+3}$$
$$ y = \sqrt{(\frac{3-d^2}{2d-1})^2 + \frac{3-d^2}{2d-1} + 3}$$
$$ y = \sqrt{\frac{(3-d^2)^2}{(2d-1)^2} + \frac{(3-d^2)(2d-1)}{(2d-1)^2} + 3\frac{(2d-1)^2}{(2d-1)^2}}$$
$$ y = \sqrt{\frac{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}{(2d-1)^2}} $$
$$ y = \frac{\sqrt{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}}{(2d-1)}$$
$$ y = \frac{\sqrt{d^4-2d^3+7d^2-6d+9}}{(2d-1)}$$</p>
<p>I know that again $d \neq \frac{1}{2}$ but I don't know what to do with the numerator. Help</p>
| John | 543,506 | <p>Let $r$ be the radius of the large quarter-circle, which equals the side of the square.</p>
<p>Area of the big quarter-circle: $\pi r^2/4$. Area of the two smaller semi-circles (each of radius $r/2$): also $\pi r^2/4$. These two areas are equal.</p>
<p>Graphically, the area of the big quarter-circle equals the area of the two smaller semi-circles, plus the red shaded area (because it lies outside the smaller circles), minus the blue area (because it is covered by the smaller circles twice).</p>
<p>Cancel the equal circle areas on each side: $0 = \text{blue area} - \text{red area}$. Move the red area to the left side: red area $=$ blue area.</p>
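<p>A quick numeric confirmation of the two equal areas (plain Python; the radius is arbitrary):</p>

```python
import math

# A quarter circle of radius r has the same area as two semicircles
# of radius r/2:  pi r^2 / 4  ==  2 * (1/2) * pi (r/2)^2.
r = 7.3
quarter = math.pi * r ** 2 / 4
two_semis = 2 * (math.pi * (r / 2) ** 2 / 2)
assert abs(quarter - two_semis) < 1e-9
```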
|
2,710,656 | <p>I now there is a continuous surjective map from $\Bbb{R}\to\Bbb{R}^2$ thanks to Peano curve.</p>
<p>My question is simple: does there exist a $C^1$ surjective map from $\Bbb{R}\to\Bbb{R}^2$ ? I think that the answer is no and I have seen this long time ago but I was too young to understand the proof. Unfortunately I cannot think of a proof now.</p>
<p>Any idea ?</p>
| Angina Seng | 436,618 | <p>The image of a $C^1$ map $f$ from $\Bbb R$ to $\Bbb R^2$ has Lebesgue measure zero: on each compact interval $[-n,n]$ the map is Lipschitz, so $f([-n,n])$ has planar measure zero, and the image is the countable union of these sets. Hence $f$ cannot be surjective, nor can its image have non-empty interior.</p>
|
2,983,231 | <p>I'm working out of Salamon's <em>Measure and Integration</em> in preparation for studying probability, and I came across the following exercise.</p>
<blockquote>
<p>Let <span class="math-container">$X$</span> be uncountable and let <span class="math-container">$\mathcal{A} \subset 2^X$</span> be the set of all subsets <span class="math-container">$A \subset X$</span> such that either <span class="math-container">$A$</span> or <span class="math-container">$A^C$</span> is countable. Define:
<span class="math-container">$$ \mu(A) := \begin{cases} 0 &\mbox{if } A \ \text{is countable} \\ 1 &\mbox{if } A^C \ \text{is countable} \end{cases}$$</span>
where <span class="math-container">$A \in \mathcal{A}$</span>. Show that <span class="math-container">$(X, \mathcal{A}. \mu)$</span> is a measure space. Describe the measurable functions and their integrals.</p>
</blockquote>
<p>I was able to attempt to show that <span class="math-container">$\mathcal{A}$</span> was a <span class="math-container">$\sigma$</span>-algebra with the following argument:</p>
<p>(<span class="math-container">$\mathcal{A}$</span> is a <span class="math-container">$\sigma$</span>-algebra.) From <span class="math-container">$X$</span>, we may construct a countable set <span class="math-container">$S = \bigcup_\limits{j \geq 1} S_j$</span> where each <span class="math-container">$S_j$</span> contains <span class="math-container">$j$</span> elements from <span class="math-container">$X$</span>. Thus if <span class="math-container">$X^C = S$</span>, then we may say <span class="math-container">$X \in \mathcal{A},$</span> and <span class="math-container">$\mathcal{A}$</span> is nontrivially nonempty. Let <span class="math-container">$T \in \mathcal{A},$</span> and suppose <span class="math-container">$T$</span> is countable. Then <span class="math-container">$T^C \in \mathcal{A},$</span> as <span class="math-container">$(T^C)^C = T$</span> is countable. Finally, to show closure under countable union, we note that if <span class="math-container">$Y_j$</span> is a countable collection of sets in <span class="math-container">$\mathcal{A}$</span>, then their union must be at most countable, and hence in <span class="math-container">$\mathcal{A}$</span>. Thus, <span class="math-container">$\mathcal{A}$</span> is a <span class="math-container">$\sigma$</span>-algebra.</p>
<p>(The triple is a measure space.) It is seen that all sets have finite measure in this space. It remains to show that this measure is countably additive with respect to disjoint sets. Let <span class="math-container">$A_j$</span> be a countable collection of disjoint sets in <span class="math-container">$\mathcal{A}.$</span> The measure counts the number of sets whose complements are countable.</p>
<p>I was then stuck there. How can I show that <span class="math-container">$\mu$</span> is countable additive? How can I also describe what the measurable functions are?</p>
| Alonso Delfín | 206,395 | <p><strong>(1) <span class="math-container">$\mu$</span> is a measure:</strong>
Since <span class="math-container">$\varnothing$</span> is countable, then <span class="math-container">$\mu(\varnothing)=0$</span>. Let's take <span class="math-container">$E_1,E_2, \cdots \in \mathcal{A}$</span> pairwise disjoint. If every <span class="math-container">$E_n$</span> is countable then
<span class="math-container">$$
\mu\left( \bigcup_{n=1}^{\infty} E_n\right) = 0 = \sum_{n=1}^{\infty} 0 =\sum_{n=1}^{\infty} \mu(E_n)
$$</span>
If there is at least one <span class="math-container">$j \in \mathbb{N}$</span> such that <span class="math-container">$E_j^C$</span> is countable, we claim that this must be the only one with this property. Indeed, if <span class="math-container">$k \neq j$</span> is such that <span class="math-container">$E_k^C$</span> is countable, then <span class="math-container">$(E_k^C) \cup (E_j^C)= (E_k \cap E_j)^C=\emptyset^C= X$</span>, but <span class="math-container">$X$</span> is uncountable by hypothesis and hence it cannot be written as a union of two countable sets. Then <span class="math-container">$E_j$</span> is the only uncountable set in the union <span class="math-container">$\bigcup_{n=1}^{\infty} E_n$</span>, making such union uncountable. Hence
<span class="math-container">$$
\mu\left( \bigcup_{n=1}^{\infty} E_n\right) = 1 = \mu(E_j)=\sum_{n=1}^{\infty} \mu(E_n)
$$</span>
We have that in fact <span class="math-container">$\mu$</span> is a measure in <span class="math-container">$(X, \mathcal{A})$</span>.</p>
<p><strong>(2) what the measurable functions are:</strong> Let <span class="math-container">$\ f : X \to \mathbb{C}$</span> be a function. We claim that <span class="math-container">$f$</span> is measurable if
and only if there is <span class="math-container">$\lambda \in \mathbb{C}$</span> such that <span class="math-container">$f(x) = \lambda$</span> for all but countably many <span class="math-container">$x \in X$</span>.</p>
<p>Considering real and imaginary parts, we see that it suffices to prove the claim for a function <span class="math-container">$ \ f : X \to \mathbb{R}$</span>. For <span class="math-container">$t \in [-\infty, \infty]$</span> set
<span class="math-container">$$
E_t :=\{x \in X : f(x)<t\}
$$</span>
Next, let
<span class="math-container">$$
\lambda := \sup\{t\in \mathbb{R} : E_t \text{ is countable}\}.
$$</span>
If <span class="math-container">$\lambda = -\infty$</span>, then <span class="math-container">$E_t^C$</span> is countable for all <span class="math-container">$t \in \mathbb{R}$</span>. So <span class="math-container">$X = \bigcup_{n=0}^\infty E_{-n}^C$</span>
is countable, a contradiction. </p>
<p>So <span class="math-container">$\lambda \neq -\infty$</span>. Choose a sequence <span class="math-container">$(t_n)_{n=0}^{\infty}$</span>
in <span class="math-container">$\mathbb{R}$</span> such that <span class="math-container">$t_n < \lambda$</span> for all <span class="math-container">$n$</span> and <span class="math-container">$\lim_{n\to \infty} t_n = \lambda$</span>. Then <span class="math-container">$E_λ = \bigcup_{n=0}^\infty E_{t_n}$</span>
is countable. This implies <span class="math-container">$λ \neq \infty$</span>. Therefore there is a sequence <span class="math-container">$(s_n)_{n=0}^{\infty}$</span> in <span class="math-container">$\mathbb{R}$</span> such
that <span class="math-container">$s_n > \lambda$</span> for all <span class="math-container">$n$</span> and <span class="math-container">$\lim_{n\to \infty} s_n = \lambda$</span>. Then <span class="math-container">$E_{s_n}^C$</span> is countable for
all <span class="math-container">$n$</span>. So
<span class="math-container">$$
\{x \in X : f(x) \neq \lambda \} = E_\lambda \cup \{x \in X: f(x)>\lambda\} = E_\lambda \cup \left( \bigcup_{n=0}^\infty
E_{s_n}^C \right) $$</span>
is countable. </p>
|
1,495,885 | <p>I'm trying to integrate this but I think the function can't be integrated? Just wanted to check, and see if anyone is able to find the answer (I used integration by parts but it doesn't work). Thanks in advance; the function I need to integrate is $$\int\frac{x}{x^5+2}dx$$</p>
| Stefan Hante | 42,060 | <p>You can do a partial fraction decomposition of $\frac{x}{x^5+2}$, which yields a very ugly expression of the form</p>
<p>$$
\frac{x}{x^5+2} = \frac{a_1 x + a_2}{a_3x^2 + a_4x + a_5} + \frac{a_6 x + a_7}{a_8x^2 + a_9x + a_{10}} + \frac1{b_1 + xb_2},
$$
where all the $a$ and $b$ are some really ugly constants. Now apply the linearity of the integral. The first two terms can be integrated using standard methods, eventually using $(\arctan x)' = \frac1{1+x^2}$. The last term should be obvious and will contain the natural logarithm.</p>
<p>All in all, as I stated already, the answer will look very ugly.</p>
|
3,080,566 | <p>Please tell me how I can solve the following equation.</p>
<p><span class="math-container">$$z^3+\frac{(\sqrt2+\sqrt2i)^7}{i^{11}(-6+2\sqrt3i)^{13}}=0$$</span></p>
<p>What formula should I use? If possible, tell me how to solve this equation or write where I can find a formula for solving such an equation. I searched for it on the Internet, but could not find anything useful.</p>
| Martin Sleziak | 8,297 | <p>Noticing that <span class="math-container">$\sqrt2+\sqrt2i=2\left(\frac{\sqrt2}2+i\frac{\sqrt2}2\right)$</span> and <span class="math-container">$-6+2\sqrt3i=4\sqrt3\left(-\frac{\sqrt3}2+i\frac12\right)$</span> should help you to express the given complex numbers in the polar form. You can then use <a href="https://en.wikipedia.org/wiki/De_Moivre%27s_formula" rel="nofollow noreferrer">de Moivre's formula</a> to simplify the expression. From there you might be able to continue further.</p>
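<p>A numerical companion to this polar-form route, using Python's <code>cmath</code> (a sketch; the tolerance is arbitrary):</p>

```python
import cmath

# Reduce the constant term w numerically, then extract the three cube
# roots of -w with de Moivre's formula; each must satisfy z^3 + w = 0.
num = (cmath.sqrt(2) + cmath.sqrt(2) * 1j) ** 7
den = (1j ** 11) * (-6 + 2 * cmath.sqrt(3) * 1j) ** 13
w = num / den

r, phi = cmath.polar(-w)                 # -w = r e^{i phi}
for k in range(3):
    z = r ** (1 / 3) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / 3)
    assert abs(z ** 3 + w) < 1e-15       # z solves the original equation
```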
|
273,580 | <p>I notice that the built-in "Coordinates Tool" has a very efficient zoom tooltip displaying an enlarged portion of the image along with coordinates and row/column indices:</p>
<p><a href="https://i.stack.imgur.com/KteX5.png" rel="noreferrer"><img src="https://i.stack.imgur.com/KteX5.png" alt="screenshot" /></a></p>
<p>How to get the source code of this tool? Or <strong>how can we implement such an efficient tooltip ourselves</strong> (I want to replicate all the tooltip functionality)?</p>
| Lukas Lang | 36,508 | <p>Here's a proof-of-concept demonstration of how to build your own tooltip with reasonable performance:</p>
<pre><code>tooltip[img_] :=
EventHandler[
MouseAppearance[
img,
Dynamic@Graphics[
{
Red,
Line@{{{0.05, 0.9}, {0.05, 1}}, {{0, 0.95}, {0.1, 0.95}}},
Inset[
Graphics[
{
Inset[img,
{1, 1},
{
FEPrivate`Ceiling@FEPrivate`Part[FrontEnd`MousePosition["Graphics"], 1],
FEPrivate`Ceiling@FEPrivate`Part[FrontEnd`MousePosition["Graphics"], 2]
},
ImageDimensions@img
],
EdgeForm@Red,
FaceForm@None,
Rectangle[{0, 0}, {1, 1}]
},
PlotRange -> {{-4, 5}, {-4, 5}},
Frame -> True, FrameTicks -> None, FrameStyle -> Thick,
PlotRangeClipping -> True
],
{0.1, 0.9},
Scaled@{0, 1},
Scaled@{0.9, 0.9}
]
},
ImageSize -> 125,
PlotRange -> {{0, 1}, {0, 1}}
],
{0.05, 0.95}
],
{
"MouseClicked" :> Print@MousePosition@"Graphics"
}
]
tooltip@ExampleData[{"TestImage", "House"}]
</code></pre>
<p><a href="https://i.stack.imgur.com/hqS0f.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hqS0f.gif" alt="enter image description here" /></a></p>
<p>Notes:</p>
<ul>
<li>The tooltip is shown by setting the mouse appearance via <code>MouseAppearance</code>. This allows the tooltip to smoothly follow the cursor position. The red cross is aligned such that the alignment point of the cursor (set to <code>{0.05,0.95}</code>) is at its center</li>
<li>The zoomed image displayed is achieved by a series of nested <code>Inset</code>s. This allows us to crop the image and move it around without actually doing any image processing; we are simply changing the shown portion of the image.</li>
<li>The part with <code>FEPrivate`Ceiling@...</code> is effectively computing the inset position in the front end, and rounding it to the appropriate pixel value (to get the pixel snapping behavior).</li>
<li>The coordinates of the innermost image graphics are both set to reflect the <code>ImageDimensions</code>, so 1 unit is 1 pixel. This makes many things a lot easier, especially the control of how many pixels are shown: This is essentially driven by the <code>PlotRange->{{-4,5},{-4,5}}</code> setting.</li>
</ul>
|
2,459,651 | <p>What additional properties must an operation have besides commutativity so that commutativity along with other properties implies associativity?</p>
<p>Where can I read about such structures?</p>
| Community | -1 | <p>Here's why it's not possible, in most cases. Commutativity is about the inputs themselves, while associativity is about where the parentheses are placed; the only thing the two properties share is that both concern repeated applications of the same operation. Conventions such as PEMDAS don't force a connection either; they exist mostly so that people around the world can look at the same arithmetic and get the same answer. So, as far as most can see here, the two properties are essentially independent: you can have associativity without commutativity, and you can have commutativity without associativity. You are asking for a property lying in their intersection, and such a property may not be well defined. </p>
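<p>As a concrete sanity check of the independence claimed above (a sketch added here, not part of the original answer), the operation $a \circ b = |a-b|$ is commutative but not associative:</p>

```python
def op(a, b):
    """A commutative, non-associative operation."""
    return abs(a - b)

# commutative on a small sample
assert all(op(a, b) == op(b, a) for a in range(5) for b in range(5))

# but not associative: op(op(1, 2), 3) = 2 while op(1, op(2, 3)) = 0
assert op(op(1, 2), 3) == 2
assert op(1, op(2, 3)) == 0
```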
|
2,713,555 | <p>I have a question stating</p>
<blockquote>
<p>Let $A= \{ x_1 , x_2 , x_3 , \ldots , x_n \}$ be a set consisting of $n$ distinct elements. How many subsets does it have? How many proper subsets?</p>
</blockquote>
<p>My thought is that there would be subsets with $1$ element, $2$ elements, $3$ element and so on, up to $n$ elements. The number of subsets of each size would be:</p>
<p>$$\begin{array}{c|c}
\text{Subset size} & \text{no. of subsets} \\
\hline
1 & n \\ 2 & n-1 \\ 3 & n-2 \\ \vdots & \vdots \\ n-1 &n-(n-2) \\ n & n- (n-1)
\end{array}$$
From this it seems the number of subsets would be $\displaystyle \sum_{k=0}^{n-1} (n-k) $. And for proper subsets, I would just not include the subsets of size $n$ , so $\displaystyle \sum_{k=0}^{n-2} (n-k) $. Is this correct?</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>It is a well-known fact that the answer is $2^n$. Proof by induction:</p>
<ul>
<li>$A_1 = \{x_1\}$ has two subsets: $\emptyset,A_1$.</li>
<li>Suppose the claim holds for $n$. The subsets of $A_{n+1} = \{x_1,\dots,x_{n+1}\}$ are the subsets of $A_n = \{x_1,\dots,x_n\}$ together with the sets $S \cup \{x_{n+1}\}$ for each subset $S$ of $A_n$, giving $2^n + 2^n = 2^{n+1}$ subsets in total.</li>
</ul>
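<p>The inductive count can be checked by brute enumeration; the following Python sketch (illustrative only) lists every subset and confirms the counts $2^n$ for all subsets and $2^n - 1$ for proper subsets:</p>

```python
from itertools import combinations

def all_subsets(elements):
    """Enumerate every subset of the given collection."""
    elements = list(elements)
    return [set(c) for r in range(len(elements) + 1)
            for c in combinations(elements, r)]

for n in range(7):
    subsets = all_subsets(range(n))
    assert len(subsets) == 2 ** n           # all subsets, including the empty set
    assert len(subsets) - 1 == 2 ** n - 1   # proper subsets: drop the set itself
```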
|
4,419,360 | <p>In chapter 9 of Spivak's <em>Calculus</em>, on derivatives, he mentions the "Leibnizian Notation" for the derivative of a function <span class="math-container">$f$</span>, <span class="math-container">$\frac{df(x)}{dx}$</span>. In a footnote on page 155, he writes</p>
<blockquote>
<p>Leibniz was led to this symbol by his intuitive notion of the
derivative, which he considered to be, not the limit of quotients
<span class="math-container">$\frac{f(x+h)-f(x)}{h}$</span>, but the "value" of this quotient when <span class="math-container">$h$</span> is
an "infinitely small" number. This "infinitely small" quantity was
denoted <span class="math-container">$dx$</span> and the corresponding "infinitely small" difference
<span class="math-container">$f(x+dx)-f(x)$</span> by <span class="math-container">$df(x)$</span>. <strong>Although this point of view is impossible
to reconcile with properties (P1)-(P13) of the real numbers</strong>, some
people find this notion of the derivative congenial.</p>
</blockquote>
<p>The bold section has been highlighted by me. What does he mean with that?</p>
| Joe | 623,665 | <p>The precise meaning of "infinitesimal" depends on context, but it common to define an infinitesimal number as a positive number <span class="math-container">$x$</span> such that <span class="math-container">$x<1/n$</span> for every natural number <span class="math-container">$n$</span>; similarly, an infinite number satisfies <span class="math-container">$x>n$</span> for every natural number <span class="math-container">$n$</span>. With this definition, it can be proven that <span class="math-container">$\mathbb R$</span> does not contain infinite or infinitesimal numbers.</p>
<p>To prove that infinite numbers do not exist in <span class="math-container">$\mathbb R$</span>, assume for the sake of contradiction that there is an infinite number <span class="math-container">$x\in\mathbb R$</span>. Then, <span class="math-container">$x$</span> is an upper bound of <span class="math-container">$\mathbb N$</span>, and so by the axiom of completeness, <span class="math-container">$\mathbb N$</span> has a least upper bound <span class="math-container">$\alpha$</span>. Since <span class="math-container">$\alpha-1<\alpha$</span>, it follows that <span class="math-container">$\alpha-1$</span> is <em>not</em> an upper bound of <span class="math-container">$\mathbb N$</span>; in particular, there is an <span class="math-container">$n_0\in\mathbb N$</span> such that <span class="math-container">$n_0>\alpha-1$</span>. But then <span class="math-container">$n_0+1>\alpha$</span> and <span class="math-container">$n_0+1\in\mathbb N$</span>, contradicting the fact that <span class="math-container">$\alpha$</span> is an upper bound of <span class="math-container">$\mathbb N$</span>. Hence, <span class="math-container">$\mathbb N$</span> is not bounded above in <span class="math-container">$\mathbb R$</span>.</p>
<p>To prove that infinitesimal numbers do not exist in <span class="math-container">$\mathbb R$</span>, assume for the sake of contradiction that there is an infinitesimal <span class="math-container">$y\in\mathbb R$</span>. Then, <span class="math-container">$n<1/y$</span> for every natural number <span class="math-container">$n$</span>, and so <span class="math-container">$1/y$</span> is an upper bound of <span class="math-container">$\mathbb N$</span>, contradicting the fact that <span class="math-container">$\mathbb N$</span> is not bounded above in <span class="math-container">$\mathbb R$</span>.</p>
|
1,105,056 | <p>There's something I've never understood about polynomials.</p>
<p>Suppose $p(x) \in \mathbb{R}[x]$ is a real polynomial. Then obviously,</p>
<p>$$(x-a) \mid p(x)\, \longrightarrow\, p(a) = 0.$$</p>
<p>The converse of this statement was used throughout high school, but I never really understood why it was true. I think <em>maybe</em> a proof was given in 3rd year university algebra, but obviously it went over my head at the time. So anyway:</p>
<blockquote>
<p><strong>Question.</strong> Why does $p(a)=0$ imply $(x-a) \mid p(x)$?</p>
</blockquote>
<p>I'd especially appreciate an answer from a commutative algebra perspective.</p>
| orangeskid | 168,051 | <p>You can also understand division by $(x-a)$ as follows:</p>
<p>Take a polynomial</p>
<p>$$P(x) = \sum c_k x^k$$</p>
<p>Plug in $x+a$ instead of $x$. You get a new polynomial in $x$:
$$P(x+a) = \sum_k c_k (x+a)^k = \sum_k d_k x^k$$</p>
<p>Now in the last equality instead of $x$ plug in $x-a$. You get</p>
<p>$$P(x) = \sum_{k=0}^n d_k (x-a)^k$$</p>
<p>This is important, you can expand $P(x)$ as a sum of powers of $(x-a)$. Now if you plug in the value $x=a$ above you get </p>
<p>$$P(a) = d_0$$</p>
<p>since all the other terms are $0$. So what you have is </p>
<p>$$P(x) = d_0 + (\sum_{k\ge 1} d_k (x-a)^k) = P(a) + (x-a) \cdot (\sum_{k\ge 1} d_k (x-a)^{k-1})$$</p>
<p>Here is your division by $(x-a)$. </p>
<p>Now you see: if $P(a) =0$ then $P(x)$ is indeed divisible by $(x-a)$.</p>
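<p>The shift-and-substitute argument above can be verified mechanically. The following Python sketch (an illustration, with hypothetical helpers <code>shift</code> and <code>peval</code>) computes the coefficients $d_k$ via the binomial theorem and confirms that $d_0 = P(a)$:</p>

```python
from math import comb

def shift(coeffs, a):
    """Coefficients d_k with P(x) = sum_k d_k (x-a)^k, obtained by writing
    x^j = ((x-a)+a)^j and expanding with the binomial theorem.
    `coeffs` lists c_0, c_1, ... (lowest degree first)."""
    n = len(coeffs)
    return [sum(coeffs[j] * comb(j, k) * a ** (j - k) for j in range(k, n))
            for k in range(n)]

def peval(coeffs, x):
    return sum(c * x ** k for k, c in enumerate(coeffs))

P = [6, -5, 1]            # P(x) = x^2 - 5x + 6 = (x-2)(x-3)
d = shift(P, 2)
assert d == [0, -1, 1]    # so P(x) = 0 + (-1)(x-2) + (x-2)^2
assert d[0] == peval(P, 2) == 0   # d_0 = P(a); since a=2 is a root, (x-2) divides P
```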
|
1,613,857 | <p>I'm reading <a href="https://en.wikipedia.org/wiki/The_Music_of_the_Primes" rel="nofollow">The Music of the Primes</a> by du Sautoy and I've come across a section that I'm having difficulty understanding:</p>
<blockquote>
<p>Euler fed imaginary numbers into the function $2^x$. To his surprise, out came waves which corresponded to a particular musical note. Euler showed that the character of each note depended on the coordinates of the corresponding imaginary number. The farther north one is, the higher the pitch. The farther east, the louder the volume.</p>
</blockquote>
<p>My understanding here is that the results are dependent on the sine function and that the real part of the exponent affects the amplitude and the imaginary part of the exponent affects the frequency.</p>
<p>I'd like to understand this more intuitively, which I tend to get through visualization. So I went to Wolfram Alpha and started with graphing $2^{x+iy}$. That wasn't very helpful.</p>
<p><a href="http://www.wolframalpha.com/input/?i=2%5E%28x%2Bi*y%29%20where%20x%20%3D%2010" rel="nofollow">So I tried graphing it with fixed $x$ values</a>, and indeed, I could see the amplitude of the (now 2D) graph <a href="http://www.wolframalpha.com/input/?i=2%5E%28x%2Bi*y%29%20where%20x%20%3D%20100" rel="nofollow">changing</a>. </p>
<p>I also see that $2^{x+iy}$ is also expressed as $2^x \cos(y \log(2))+i 2^x \sin(y \log(2))$ and I think I can see that changing the value of $x$ would affect the amplitude.</p>
<p>I'm unable to demonstrate the frequency changing by setting y to specific values. </p>
<p>What am I missing? (...Other than a semester in a Complex Analysis class!)</p>
<p>edit:</p>
<p>So while reading more online, I came across <a href="http://blog.echen.me/2011/03/14/prime-numbers-and-the-riemann-zeta-function/" rel="nofollow">this blog</a> that makes a similar claim. I suspect the book of oversimplifying, but wonder if this explains what was simplified?</p>
<blockquote>
<p>[...] But $x^{z-1} + x^{\bar{z} - 1}$ is just a wave whose amplitude depends on the real part of $z$ and whose frequency depends on the imaginary part (i.e., if $z=a+biz=a+bi$, then $x^{z-1} + x^{\bar{z}-1} = 2x^{a-1} cos (b \log x)$) [...]</p>
</blockquote>
<p>(I copied this from the blog, but removed some odd \'s ...)</p>
<p>Is it the inclusion of the conjugates that causes this amplitude/frequency?</p>
| pills | 394,220 | <p>The issue is that as far as I know, there is no canonical way of translating a "musical note" into a complex number - however you can translate it into a complex <em>function</em>. As SolUmbrae wrote, a note is simply a sine or cosine function defined by its amplitude, frequency and offset. To keep it simple, let's forget about the offset. A pure note (single frequency) can then be written as $f(t) = A\cdot\cos(\omega\cdot t)$, where $t$ represents time. The higher $A$ is, the louder the note will be; the higher $\omega$ is, the higher the pitch.</p>
<p>You can rewrite $f(t)$ as $f(t) = \Re(A\cdot e^{i\omega t})$ (the real part of the complex function $A\cdot e^{i\omega t}$).</p>
<p>This is where the conjugate comes in in your edit : $z + \overline{z} = 2\cdot\Re(z)$ for any complex number $z$, so adding the conjugate allows you to work with real numbers instead of complex ones (but this is besides the point of your original question).</p>
<p>Let's forget about taking the real part and keep $\psi(t) = A\cdot e^{i\omega t}$ as a simple way to represent a sinusoidal wave function (in other words, a note). </p>
<p>I'm going to make one more simplification and consider $e^z$ rather than $2^z$ as the formula to turn complex numbers into notes (this will get rid of the $\log(2)$ in the formula). We then have $e^z=e^{x+iy}=e^x\cdot e^{iy}$ as a formula for "musical notes". </p>
<p>How can we reconcile our two formulas for $e^z$ and $\psi(t)$? We can see that $A$ and $\omega$ are enough to completely define $\psi(t)$ (if you know those two number, you can reconstruct the function) so, in that sense, $\psi(1) = A\cdot e^{i\omega}$ is enough to define a note. Putting it all together, $\psi(1) = A\cdot e^{i\omega} = e^{\log(A)}\cdot e^{i\omega} = e^{\log(A)+i\omega} = e^z$ where $z = \log(A) + i\omega$. In other words, the real part of $z$ corresponds to the (logarithm of) the amplitude of the note, and the imaginary part corresponds to the pitch. The higher $x$ is, the louder the note; the higher $y$ is, the higher the pitch.</p>
<p>Note : this is by no means a rigorous demonstration; it makes several simplifying assumptions, and it conflates the exponential function and the representation of a complex number as $e^{iz}$. But I hope it's enough to grasp the intuition of representing a sine function as a real and imaginary part.</p>
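<p>The correspondence can also be checked numerically. Here is a small Python sketch (using $e^z$ rather than $2^z$, following the simplification above, with made-up values for $A$ and $\omega$) confirming that $e^z$ with $z=\log(A)+i\omega$ reproduces $\psi(1)$ and that the real part of $z$ controls the amplitude:</p>

```python
import cmath
import math

# Hypothetical amplitude and angular frequency, for illustration only.
A, omega = 2.0, 440.0

# z = log(A) + i*omega encodes the note psi(t) = A*exp(i*omega*t) via psi(1) = e^z.
z = complex(math.log(A), omega)
psi_1 = A * cmath.exp(1j * omega)

assert cmath.isclose(cmath.exp(z), psi_1)
# |e^z| = e^{Re z} = A: moving z "east" (larger real part) raises the volume.
assert math.isclose(abs(cmath.exp(z)), A)
```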
|
3,440,171 | <p>How does one write the Taylor expansion of <span class="math-container">$\log\big(\frac{\sin(x)}x\big)$</span>? It is not defined on <span class="math-container">$x=0$</span>.</p>
| Claude Leibovici | 82,404 | <p>If you are not looking for the infinite series: since <span class="math-container">$\frac{\sin(x)} x <1$</span> for all <span class="math-container">$x \neq 0$</span>, and <span class="math-container">$x=0$</span> is a removable singularity, let <span class="math-container">$a=\frac{\sin(x)} x -1$</span> and write
<span class="math-container">$$\log \left(\frac{\sin(x)}x\right)=\log(1+a)$$</span> Now expand as usual
<span class="math-container">$$\log(1+a)=a-\frac{a^2}{2}+\frac{a^3}{3}-\frac{a^4}{4}+\frac{a^5}{5}-\frac{a^6}{6}+\frac{a^7}{7}+O\left(a^8\right)\tag 1$$</span> and now
<span class="math-container">$$a=\frac{\sin(x)} x -1=-\frac{x^2}{6}+\frac{x^4}{120}-\frac{x^6}{5040}+\frac{x^8}{362880}+O\left(x^{10}\right)$$</span> Now, for the computation of <span class="math-container">$a^n$</span>, use the binomial expansion, stopping as soon as you encounter a power of <span class="math-container">$x$</span> greater than <span class="math-container">$8$</span> (for example). This would give
<span class="math-container">$$a^2=\frac{x^4}{36}-\frac{x^6}{360}+\frac{41 x^8}{302400}+O\left(x^{10}\right)$$</span>
<span class="math-container">$$a^3=-\frac{x^6}{216}+\frac{x^8}{1440}+O\left(x^{10}\right)$$</span> and so on.</p>
<p>Replacing in <span class="math-container">$(1)$</span> this would give
<span class="math-container">$$\log \left(\frac{\sin(x)}x\right)=-\frac{x^2}{6}-\frac{x^4}{180}-\frac{x^6}{2835}-\frac{x^8}{37800}+O\left(x^{10}\right)$$</span></p>
<p>Now, if you are curious, ask OEIS for the sequence <span class="math-container">$\{6,180,2835,37800\}$</span> and you will find sequence <span class="math-container">$A046989$</span>, which will show you the beautiful formula given by @user76284.</p>
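<p>A quick numerical check of the resulting expansion (a sketch, not part of the original answer): the truncated series should differ from $\log(\sin(x)/x)$ by $O(x^{10})$ for small $x$.</p>

```python
import math

def lhs(x):
    return math.log(math.sin(x) / x)

def series(x):
    return -x**2/6 - x**4/180 - x**6/2835 - x**8/37800

for x in (0.1, 0.2, 0.3):
    assert abs(lhs(x) - series(x)) < 10 * x**10   # error is O(x^10)
```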
|
3,440,171 | <p>How does one write the Taylor expansion of <span class="math-container">$\log\big(\frac{\sin(x)}x\big)$</span>? It is not defined on <span class="math-container">$x=0$</span>.</p>
| user90369 | 332,823 | <p>Euler product formula for sine:</p>
<p><span class="math-container">$$\frac{\sin x}{x}=\prod\limits_{k=1}^\infty\left(1-\left(\frac{x}{\pi k}\right)^2\right)$$</span></p>
<p>You see: <span class="math-container">$~\ln\frac{\sin x}{x}~$</span> is defined everywhere in an arbitrary proximity of <span class="math-container">$~x=0~$</span>, and its limit there is <span class="math-container">$~0~$</span>.</p>
<p>With the help of </p>
<p><span class="math-container">$$-\ln\prod\limits_{v=1}^n\left(1-za_v\right)=\sum\limits_{k=1}^\infty\frac{z^k}{k}\sum\limits_{v=1}^n a_v^k$$</span></p>
<p>we can set <span class="math-container">$\displaystyle~a_k:=\frac{1}{(\pi k)^2}~$</span> and <span class="math-container">$\displaystyle~z:=x^2~$</span> and get: </p>
<p><span class="math-container">$$\ln\frac{\sin x}{x}=-\sum\limits_{k=1}^\infty\frac{x^{2k}}{k}\frac{\zeta(2k)}{\pi^{2k}}$$</span></p>
<p><em>Note:</em> <span class="math-container">$~~\zeta(s)~$</span> is called the <a href="https://en.wikipedia.org/wiki/Riemann_zeta_function" rel="nofollow noreferrer"><em>Riemann Zeta function</em></a> . <span class="math-container">$~~~\frac{\zeta(2k)}{\pi^{2k}}~$</span> are rational numbers.</p>
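<p>The closed form can be checked numerically. In the following Python sketch (illustrative only), $\zeta(2k)$ is approximated by a truncated sum with a simple tail correction, and the series is compared against $\ln(\sin x/x)$:</p>

```python
import math

def zeta(s, N=1000):
    """Truncated sum for zeta(s), s > 1, with an integral tail correction."""
    return sum(n ** -s for n in range(1, N)) + N ** (1 - s) / (s - 1) + N ** -s / 2

def log_sinc(x, kmax=30):
    """Right-hand side of the formula above, truncated at kmax terms."""
    return -sum(x ** (2 * k) / k * zeta(2 * k) / math.pi ** (2 * k)
                for k in range(1, kmax + 1))

x = 0.7
assert math.isclose(log_sinc(x), math.log(math.sin(x) / x), rel_tol=1e-8)
```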
|
1,046,792 | <p>Given:
$$\sin(x+iy)=\cos\theta+i\sin\theta$$
To prove:
$$x=\arccos (\sqrt{\sin\theta})$$
How I tried:
$$\begin{align*}
\sin x \cosh y &= \cos\theta \\
\cos x \sinh y &= \sin\theta
\end{align*}$$
Then tried to use logarithm of hyperbolic complex number.</p>
<p>Also various trignometric form manipulation but I can't get the answer.</p>
| Ashish | 196,782 | <p>Given:</p>
<p>$\sin(x+iy)=e^{i\theta}$</p>
<p>$\sin(x)\cosh(y)+i\cos(x)\sinh(y)=\cos(\theta)+i\sin(\theta)$</p>
<p><strong><em>Comparing the real and imaginary parts:</em></strong></p>
<p>$\implies \sin(x)\cosh(y)=\cos(\theta)$</p>
<p>&</p>
<p>$\implies \cos(x)\sinh(y)=\sin(\theta)$</p>
<p><strong><em>Consider the hyperbolic counterpart of</em></strong> $\sin^2y+\cos^2y=1$:</p>
<p>$\cosh^2y-\sinh^2y=1$</p>
<p><strong><em>Substitute the values of</em></strong> $\cosh(y)$ and $\sinh(y)$ <strong><em>into the above equation:</em></strong></p>
<p>$\frac{\cos^2\theta}{\sin^2x}-\frac{\sin^2\theta}{\cos^2x}=1$</p>
<p><strong><em>Cross-multiplying and rearranging:</em></strong></p>
<p>$\cos^2\theta\cos^2x-\sin^2\theta\sin^2x=\sin^2x\cos^2x$</p>
<p>$(1-\sin^2\theta)\cos^2x-\sin^2\theta\sin^2x=\cos^2x(1-\cos^2x)$</p>
<p>$\cos^2x-\sin^2\theta(\cos^2x+\sin^2x)=\cos^2x-\cos^4x$</p>
<p>$-\sin^2\theta=-\cos^4x$</p>
<p>Taking the square root twice:</p>
<p>$\cos x=\sqrt{\sin\theta}$, hence $x=\arccos(\sqrt{\sin\theta})$.</p>
|
199,235 | <blockquote>
<p><strong>Possible Duplicate:</strong><br>
<a href="https://math.stackexchange.com/questions/9505/xy-yx-for-integers-x-and-y">$x^y = y^x$ for integers $x$ and $y$</a> </p>
</blockquote>
<p>Determine the number of solutions of the equation
$n^m = m^n$
where both m and n are integers.</p>
| Ross Millikan | 1,827 | <p>Hint: it might be that $n$ and $m$ are powers of a prime. They would clearly need to be powers of the same prime. Define $n=p^a, m=p^b$ then apply the laws of exponents. Otherwise, they might be a product of primes. Again, they need to be products of the same primes. Again, use the laws of exponents to rule it out.</p>
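<p>A brute-force search (a sketch supporting the hint, not part of the original answer) confirms that, in a small range of positive integers, the only solutions with $n \neq m$ are $(2,4)$ and $(4,2)$; every pair with $n=m$ is trivially a solution:</p>

```python
# Search positive integers n != m with n^m == m^n in a small range.
solutions = [(n, m) for n in range(1, 50) for m in range(1, 50)
             if n != m and n ** m == m ** n]
assert solutions == [(2, 4), (4, 2)]
```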
|
2,334,305 | <blockquote>
<p>Let $F$ be the distribution function of the continuous-type random variable $X$, and assume that $F(x)=0$ for $x \le 0$ and $0 \lt F(x) \lt1$ for $0 \lt x$. Prove that if $P(X \gt x+y |X \gt x)=P(X \gt y),$ then $F(x)=1-e^{- \lambda x}, 0 \lt x.$</p>
</blockquote>
<p>My attempt: </p>
<p>Let $F(x)=P(X \le x)$. Then </p>
<p>$ \frac {P(X \gt x+y)}{P(X \gt x)}=P(X \gt y)$</p>
<p>$\Rightarrow \frac {1-P(X\le x+y)}{1-P(X \le x)}=1-P(X \le y)$</p>
<p>$ \Rightarrow \frac {1-F(x+y)}{1-F(x)}=1-F(y)$</p>
<p>Then I have no idea where to go from here. Any help is appreciated. </p>
| leonbloy | 312 | <p>Hint: let $y \to 0$ and try to get a differential equation for $F(x)$</p>
|
292,159 | <blockquote>
<p>Find the relation between <span class="math-container">$a, b, c, d$</span> if the roots of <span class="math-container">$ax^3+bx^2+cx+d=0$</span> are in geometric progression.</p>
<p>By considering <span class="math-container">$(\alpha+\beta)(\beta+\gamma)(\alpha+\gamma)$</span> show that the above cubic equation has two roots equal in size but opposite in sign if and only if <span class="math-container">$ad=bc$</span>.</p>
</blockquote>
<p>I can do the second part if I am given some hints on the first. I can use <span class="math-container">$\beta$</span> as the middle root and make <span class="math-container">$\alpha=\beta/r$</span> and <span class="math-container">$\gamma=r \times \beta$</span>, but haven't got anywhere so far.</p>
| Hagen von Eitzen | 39,174 | <p>The monic polynomial with roots $\beta/r,\beta, r\beta$ is
$$ x^3-\beta\left(1+r+\frac1r\right) x^2+\beta^2\left(1+r+\frac1r\right)x-\beta^3$$
We find $\beta=-\frac cb$ and $\beta^3=-\frac da$, hence
$$db^3=ac^3$$
as necessary condition. Why is it also sufficient? What additional condition(s) would we need if we wanted the roots to be real?</p>
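<p>The necessary condition can be confirmed numerically. The following Python sketch (illustrative only, using exact rational arithmetic) builds cubics with roots in geometric progression via Vieta's formulas and checks $db^3 = ac^3$:</p>

```python
from fractions import Fraction as F

def cubic_from_gp(beta, r, a=F(1)):
    """Coefficients (a, b, c, d) of a*x^3+b*x^2+c*x+d with roots beta/r, beta, r*beta."""
    roots = [beta / r, beta, beta * r]
    s1 = sum(roots)
    s2 = roots[0] * roots[1] + roots[1] * roots[2] + roots[0] * roots[2]
    s3 = roots[0] * roots[1] * roots[2]
    return a, -a * s1, a * s2, -a * s3   # Vieta's formulas

for beta, r in [(F(2), F(3)), (F(-1, 2), F(5)), (F(7), F(1, 4))]:
    a, b, c, d = cubic_from_gp(beta, r)
    assert d * b ** 3 == a * c ** 3
```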
|
292,159 | <blockquote>
<p>Find the relation between <span class="math-container">$a, b, c, d$</span> if the roots of <span class="math-container">$ax^3+bx^2+cx+d=0$</span> are in geometric progression.</p>
<p>By considering <span class="math-container">$(\alpha+\beta)(\beta+\gamma)(\alpha+\gamma)$</span> show that the above cubic equation has two roots equal in size but opposite in sign if and only if <span class="math-container">$ad=bc$</span>.</p>
</blockquote>
<p>I can do the second part if I am given some hints on the first. I can use <span class="math-container">$\beta$</span> as the middle root and make <span class="math-container">$\alpha=\beta/r$</span> and <span class="math-container">$\gamma=r \times \beta$</span>, but haven't got anywhere so far.</p>
| namasikanam | 482,214 | <p>Suppose $a \not = 0$ , let $p=\frac{b}{a},q=\frac{c}{a},r=\frac{d}{a}$, then the equation is $$x^3+px^2+qx+r=0$$
Using Vieta's Formulas,$$\alpha + \beta + \gamma=-p \\ \alpha \beta + \beta \gamma + \alpha \gamma =q \\ \alpha \beta \gamma = -r$$Suppose $\beta^2=\alpha \gamma$, we get $$q^3=p^3r$$that is $$\left \{ \begin{array}{l} ac^3=b^3d \\ a \not = 0 \\ d \not = 0 \end{array} \right.$$
Now we prove that the condition is sufficient. A direct computation gives
$$q^3-p^3r \\ =(\alpha \beta + \beta \gamma + \alpha \gamma)^3-\alpha \beta \gamma(\alpha + \beta + \gamma)^3 \\ = \alpha^3\beta^3+\alpha^3\gamma^3+\beta^3\gamma^3-\alpha \beta \gamma(\alpha^3+\beta^3+\gamma^3) \\ = (\beta \gamma-\alpha^2)(\alpha \gamma-\beta^2)(\alpha \beta-\gamma^2)$$
If $q^3=p^3r$, this product is $0$, so one of the factors vanishes, say $\beta^2=\alpha\gamma$.
So the roots are in geometric progression.</p>
|
1,226,162 | <p>\begin{align}
x' &= -x^3 + x^5 + (x^4)(y^5)\\[.7em]
y' &= -8y^3 + y^5 - 10(y^4)(x^5)
\end{align}
$(0,0)$ is obviously a critical point of the system, and we are given that it is asymptotically stable, but have to show it. </p>
<p>I have tried to make a Lyapunov function $V(x,y) = ax^2 + cy^2$, with a,c > 0 but I am having trouble to prove that $\frac{d}{dt} V(x,y)$ is negative definite. I get some complicated polynomial I can't use logic to finalize. How can I change the Lyapunov to come up with a meaningful conclusion?</p>
<p>\begin{align}
\frac{d}{dt}V(x,y) = 2ax(-x^3 + x^5 + x^4y^5) + 2cy(-8y^3+y^5-10y^4x^5)\\[.7em]
\end{align}</p>
| Cesareo | 397,348 | <p>This problem can be handled with an optimization procedure, having in mind that generally is a non convex problem. The result depends on the test Lyapunov function used so we will generalize to a quadratic Lyapunov function</p>
<p><span class="math-container">$$
V(p) = p^{\dagger}\cdot M\cdot p = a x^2+2b x y + c y^2,\ \ \ p = (x,y)^{\dagger}
$$</span></p>
<p>and</p>
<p><span class="math-container">$$
f(p) = \{-x^3 + x^5 + x^4 y^5, -8 y^3 + y^5 - 10 x^5 y^4\}
$$</span>
with <span class="math-container">$a>0,c>0, a c-b^2 > 0$</span> to ensure positive definiteness of <span class="math-container">$M$</span>. We will determine a set <span class="math-container">$Q_{\dot V}$</span> containing the origin such that <span class="math-container">$\dot V(Q_{\dot V}) < 0$</span>. The optimization process will be used to guarantee a maximal <span class="math-container">$Q_{\dot V}$</span>.</p>
<p>After determination of <span class="math-container">$\dot V = 2 p^{\dagger}\cdot M\cdot f(p)$</span> we follow with a change of variables</p>
<p><span class="math-container">$$
\cases{
x = r\cos\theta\\
y = r\sin\theta
}
$$</span></p>
<p>so <span class="math-container">$\dot V = \dot L(a,b,c,r,\theta)$</span>. The next step is to make a sweep on <span class="math-container">$\theta$</span> calculating</p>
<p><span class="math-container">$$
S(a,b,c, r)=\{\dot V(a,b,c,r,k\Delta\theta)\},\ \ k = 0,\cdots, \frac{2\pi}{\Delta\theta}
$$</span></p>
<p>and then the optimization formulation follows as</p>
<p><span class="math-container">$$
\max_{a,b,c,r}r\ \ \ \ \text{s. t.}\ \ \ \ a > 0, c> 0, a c -b^2 > 0, \max S(a,b,c,r) \le -\gamma
$$</span></p>
<p>with <span class="math-container">$\gamma > 0$</span> a margin control number.</p>
<p>The following MATHEMATICA script implements this procedure for the present case.</p>
<pre><code>f = {-x^3 + x^5 + x^4 y^5, -8 y^3 + y^5 - 10 x^5 y^4};
V = a x^2 + 2 b x y + c y^2;
dV = Grad[V, {x, y}].f /. {x -> r Cos[t], y -> r Sin[t]};
rest = Max[Table[dV, {t, -Pi, Pi, Pi/30}]] < -0.2;
rests = Join[{rest}, {r > 0, a > 0, c > 0, a c - b^2 > 0}];
sols = NMinimize[Join[{-r}, rests], {a, b, c, r}, Method -> "DifferentialEvolution"]
rest /. sols[[2]]
dV0 = Grad[V, {x, y}].f /. sols[[2]]
V0 = V /. sols[[2]]
r0 = 1.5;
rmax = r /. sols[[2]];
gr0 = StreamPlot[f, {x, -r0, r0}, {y, -r0, r0}];
gr1a = ContourPlot[dV0, {x, -r0, r0}, {y, -r0, r0}, ContourShading -> None, Contours -> 80];
gr1b = ContourPlot[dV0 == 0, {x, -r0, r0}, {y, -r0, r0}, ContourStyle -> Blue];
gr2 = ContourPlot[x^2 + y^2 == rmax^2, {x, -r0, r0}, {y, -r0, r0}, ContourStyle -> {Red, Dashed}];
Show[gr0, gr1a, gr1b, gr2]
</code></pre>
<p>The following plot shows in black the level sets <span class="math-container">$Q_{\dot V}$</span> and in blue the trace of <span class="math-container">$\dot V = 0$</span>. In dashed red is shown the largest circular set (radius <span class="math-container">$\delta = 0.99992$</span>) defining the maximum attraction basin for the given family of test Lyapunov functions.</p>
<p><a href="https://i.stack.imgur.com/VBHTV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VBHTV.jpg" alt="enter image description here" /></a></p>
|
936,830 | <p>I made the differential equation : $$dQ = (-1/100)2Q dt$$ </p>
<p>I separate it and get: $\int (dQ/Q) = \int (-2/100)\,dt$</p>
<p>this leads me to: $\log(|Q|) = (-t/50) + C$</p>
<p>I simplify that to $Q = e^{-t/50}$</p>
<p>My TI-Nspire differential equation solver, however, gives me: $Q = Ce^{-t/50}$</p>
<p>I'm confused as to why the calculator is multiplying my answer by a constant, and which one is the correct answer.</p>
| Dilip Sarwate | 15,941 | <p>It is quite likely that you were told that $X$ and $Y$ are <em>independent</em> random variables, but neglected to pass on this information to us.</p>
<p><em>Assuming</em> that $X$ and $Y$ are independent Poisson random variables, $U = X+Y$ is
also a Poisson random variable with parameter $\theta+\lambda$. Thus, the
probability that $U = m$ ($m$ is a nonnegative integer here) is
$$p_{X+Y}(m) = \frac{e^{-(\theta+\lambda)}(\theta+\lambda)^m}{m!}.$$
Conditioned on $X+Y = m$, the conditional probability that $Y = n$ is $0$ if
$n >m$, while for $0 \leq n \leq m$,
$$\begin{align}p_{Y\mid X+Y=m}(n\mid m)
&= \frac{P\left(\{Y=n\} \cap \{X+Y=m\}\right)}{P\{X+Y=m\}}
&{\scriptstyle{\text{definition of conditional probability}}}\\
&= \frac{P\{X=m-n,Y=n\}}{P\{X+Y=m\}}
&{\scriptstyle{\text{a re-write}}}\\
&= \frac{P\{X=m-n\}P\{Y=n\}}{\frac{e^{-(\theta+\lambda)}(\theta+\lambda)^m}{m!}}
&{\scriptstyle{\text{independence of}~X~\text{and}~Y}}\\
&= \frac{\frac{e^{-(\theta)}(\theta)^{m-n}}{(m-n)!}
\frac{e^{-(\lambda)}(\lambda)^{n}}{n!}}{\frac{e^{-(\theta+\lambda)}(\theta+\lambda)^m}{m!}}\\
&= \binom{m}{n}\left(\frac{\theta}{\theta+\lambda}\right)^{m-n}
\left(\frac{\lambda}{\theta+\lambda}\right)^{n}
\end{align}$$
which shows that, conditioned on the value of $X+Y$, $Y$ is a <em>binomial</em>
random variable.</p>
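<p>The identity can be checked numerically. The following Python sketch (illustrative only, with hypothetical parameter values) compares the conditional probabilities against the binomial pmf:</p>

```python
import math
from math import comb, exp, factorial

# Hypothetical parameter values, for illustration only.
theta, lam, m = 2.0, 3.0, 6

def pois(k, mu):
    return exp(-mu) * mu ** k / factorial(k)

p = lam / (theta + lam)
p_sum = pois(m, theta + lam)   # P(X + Y = m), a Poisson(theta + lam) mass
for n in range(m + 1):
    cond = pois(m - n, theta) * pois(n, lam) / p_sum
    binom = comb(m, n) * p ** n * (1 - p) ** (m - n)
    assert math.isclose(cond, binom)
```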
|
3,672,948 | <p>Consider the series
<span class="math-container">$$
S = \sum_{n=1}^\infty e^{-n^2x}
$$</span>
then I have to argue for that <span class="math-container">$S$</span> is convergent if and only if <span class="math-container">$x>0$</span>.</p>
<p>As this is an "if and only if", I think I first have to assume that S is convergent and show that this implies that <span class="math-container">$x>0$</span>, but I am not sure how to. It is easy for me to see that if <span class="math-container">$x=0$</span> the series is divergent, but if I were to assume that S is convergent and, for a contradiction, that <span class="math-container">$x\leq 0$</span>, how do I proceed? And how the other way around? </p>
<p>Do you mind helping me? </p>
| Derpp | 596,585 | <p>If the series converges <span class="math-container">$\implies \lim_{n\to\infty} e^{-n^{2}x} = 0$</span>. Do you see why <span class="math-container">$x$</span> needs to be positive now?</p>
|
696,370 | <p>Is it possible that $R/I$ is a field when $R$ is non-commutative ring with unit and $I$ is a maximal left ideal of $R$? If it is not, can anyone give an example of such $R$ and $I$? Thanks.</p>
| J.R. | 44,389 | <p>$F$ cannot have characteristic $0$, because it is a finite field.</p>
<p>If $F$ has characteristic $p>0$, then $1$ generates an additive subgroup of order $p$. By Lagrange's theorem, $p$ divides the order of the whole group $F$, which is $2^n$.</p>
<p>Since $p$ is prime, $p=2$.</p>
|
1,330,078 | <p>Given a quadrilateral $MNPQ$ for which $MN=26$, $NP=30$, $PQ=17$, $QM=25$ and $MP=28$ how do I find the length of $NQ$?</p>
| E.H.E | 187,799 | <p>Hint:
use Heron's Formula for the area of a triangle
$$A=\sqrt{s(s-a)(s-b)(s-c)}$$
$$s=\frac{a+b+c}{2}$$
<img src="https://i.stack.imgur.com/tuTXD.png" alt="enter image description here">
Firstly find the total area by using the two triangles $MNP$ and $MQP$; then use the same procedure for triangles $NMQ$ and $NPQ$, but now the total area is known and the length of $NQ$ is the unknown.
<img src="https://i.stack.imgur.com/arKD6.png" alt="enter image description here"></p>
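<p>The first step of the hint can be carried out directly; the following Python sketch (illustrative only) computes the two triangle areas with the given side lengths:</p>

```python
import math

def heron(a, b, c):
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# The diagonal MP = 28 splits MNPQ into triangles MNP and MQP.
area_MNP = heron(26, 30, 28)
area_MQP = heron(25, 17, 28)
total = area_MNP + area_MQP

assert math.isclose(area_MNP, 336)
assert math.isclose(area_MQP, 210)
assert math.isclose(total, 546)
# Equating this total to heron(26, 25, NQ) + heron(30, 17, NQ) gives an
# equation in the single unknown NQ.
```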
|
18,691 | <p>Let's say I have the following set of data:</p>
<ul>
<li>k = 1 : list of values </li>
<li>k = 3 : list of values </li>
<li>k = 10 : list of values</li>
</ul>
<p>I know that to make a <code>BoxWhiskerChart</code> I have to give it a list of lists of these values as data and ks as labels. </p>
<p>How do I force the offset between the boxes for different ks to be proportional to the values of ks?</p>
<p>This is like combining <code>ListPlot</code> and <code>BoxWhiskerChart</code> - list plot gives appropriate position of boxes relative to the x-axis.</p>
| David Park | 5,549 | <p>At the request of one user I included a BoxWhisherDraw statement in the latest release of Presentations (which I sell for $50.). This essentially draws a chart centered at zero and then you can easily Scale/Rotate/Translate it to wherever you wish. So here is a solution to your problem using <code>BoxWiskerDraw</code>:</p>
<pre><code><<Presentations`
data[1] = Array[RandomReal[{0, 1}] &, 10];
data[3] = Array[1 + RandomReal[{0, 3}] &, 10];
data[10] = Array[5 + RandomReal[{0, 10}] &, 10];
xticks = CustomTicks[Identity, databased[{1, 3, 10}]];
Draw2D[
{({BoxWhiskerDraw[data[#],
ChartStyle ->
Directive[FaceForm[Opacity[0.2, Blue]], EdgeForm[Black],
Thin]],
CirclePoint[{0, #}, 3, Black, White] & /@ data[#]} //
ScaleOp[{3, 1}, {0, 0}] // TranslateOp[{#, 0}]) & /@ {1, 3,
10}},
Frame -> True,
FrameTicks -> {{Automatic, Automatic}, {xticks,
xticks // NoTickLabels}},
PlotLabel -> Style["Custom BoxWhisker Chart", 13, Bold],
ImageSize -> 300]
</code></pre>
<p><img src="https://i.stack.imgur.com/ASbJG.png" alt="Mathematica graphics"></p>
<p>I also used CirclePoint to mark the data points for each data set and I used CustomTicks to put ticks at just the k values.</p>
<p><strong>Addition</strong></p>
<p>The poster asked if a log x axis could be used. Yes, very simple. In the above we simply add a <code>Log10</code> as the scaling function in <code>CustomTicks</code> and to the <code>TranslateOp</code> command. And we change the width scaling in <code>ScaleOp</code>.</p>
<pre><code>xticks = CustomTicks[Log10, databased[{1, 3, 10}]];
Draw2D[
{({BoxWhiskerDraw[data[#],
ChartStyle ->
Directive[FaceForm[Opacity[0.2, Blue]], EdgeForm[Black],
Thin]],
CirclePoint[{0, #}, 3, Black, White] & /@ data[#]} //
ScaleOp[{1/2, 1}, {0, 0}] //
TranslateOp[{Log10[#], 0}]) & /@ ({1, 3, 10})},
AspectRatio -> 1,
PlotRange -> {{-0.3, 1.3}, {Automatic, Automatic}},
Frame -> True,
FrameTicks -> {{Automatic, Automatic}, {xticks,
xticks // NoTickLabels}},
PlotLabel -> Style["Custom BoxWhisker Chart", 13, Bold],
ImageSize -> 300]
</code></pre>
<p><img src="https://i.stack.imgur.com/PoczK.png" alt="Mathematica graphics"></p>
|
2,936,994 | <p>There are 7 white balls in a row and a fair die (a cube with numbers 1, 2,..., 6 in its six faces). We roll the die 7 times and paint the ith balls into black if we get either 5 or 6 in the ith roll.</p>
<p>(a) What is the expected number of black balls?</p>
<p>(b) What is the chance that there exist 6 or more consecutive black balls?</p>
<p>(c) What is the chance that there are not 4 or more consecutive white balls?</p>
<p>Our study group has answered (a) and (b) and are confident we got those correct. We are just having trouble with the last one (c). </p>
<p>(a) We reasoned that the over all experiment can be modeled by a Binomial(7,<span class="math-container">$\frac{1}{3}$</span>) where <span class="math-container">$\frac{1}{3}$</span> is the probability of painting a ball black (probability of rolling a 5 or 6). Therefore the expected value is just <span class="math-container">$7 \times \frac{1}{3} \approx 2.3$</span>.</p>
<p>(b) There are only 2 ways to get exactly 6 consecutive black balls so <span class="math-container">$2 \times (\frac{1}{3})^6$</span> and one way to get 7 consecutive black balls <span class="math-container">$(\frac{1}{3})^7$</span>. Therefore P(6 or more consecutive black balls) <span class="math-container">$= 2 \times (\frac{2}{3})(\frac{1}{3})^6 + (\frac{1}{3})^7$</span>.</p>
<p>(c) So we decided to attempt to calculate it as 1 - P(4 or more consecutive white balls). We believe that we could possibly brute force this by literally finding the probability of every combination of 4 or more consecutive white balls. The problem is that we are studying this question to help us prep for a qualification exam and believe that there should be a non-brute force way to solve it for it to be a valid exam question. </p>
| Satish Ramanathan | 99,745 | <p>@lulu</p>
<p>But here is my account for strings of length 6 and 7 with no 4 or more consecutive whites (I find the probability of 4 or more consecutive whites and subtract it from 1).</p>
<p>It absolutely confirms your calculation. Lulu can't be wrong</p>
<p><a href="https://i.stack.imgur.com/gUqP6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gUqP6.png" alt="enter image description here"></a></p>
|
116,392 | <p>As in <a href="https://mathematica.stackexchange.com/questions/110808/a-custom-function-is-too-slow-when-i-first-time-to-run-it">my previous post</a>, I have a custom function that finds all built-in functions containing a certain option, and I have put it in my "init.m" file.</p>
<pre><code>LookupOptionFunction[option_] :=
Select[ToExpression[
Complement[
Select[Names["System`*"],
StringFreeQ["$"]], {"AllowTransliteration", "MyFind"}]],
KeyMemberQ[Options[#1], option] &]
</code></pre>
<p>Usage:</p>
<pre><code>LookupOptionFunction[SelfLoopStyle]
</code></pre>
<blockquote>
<p>{GraphPlot, GraphPlot3D, LayeredGraphPlot, TreeForm, TreePlot}</p>
</blockquote>
<p>But there is a problem:</p>
<p><a href="https://i.stack.imgur.com/QDW9I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QDW9I.png" alt="enter image description here" /></a></p>
<p>As <a href="https://mathematica.stackexchange.com/a/110814/21532">@Bob Hanlon's answer</a> explains, the speed-up is due to the caching produced the first time it is run.</p>
<hr />
<h2>My Question</h2>
<p>How can I save the cache so that the function runs quickly <em><strong>even after we restart Mathematica</strong></em>?</p>
<hr />
<h1>New progress:</h1>
<p>I excluded the entries of <code>Keys@SystemOptions[]</code> one by one and narrowed the list down from 90 to 17. I think the following system options should be able to implement this:</p>
<blockquote>
<p>{"CacheOptions","CatchMachineUnderflow","DataOptions","DefinitionsReordering","DifferentiationOptions","DynamicLibraryOptions","EnforceCallPacket","FileBackedCachingOptions","GlobFileNames","HolonomicOptions","LegacyFrontEnd","LegacyNewlineParsingInStrings","NeedNotReevaluateOptions","PostScriptBufferSize","RestorePackageDependencies","SymbolicProductThreshold","SymbolicSumThreshold"}</p>
</blockquote>
<p>But I'm not familiar with them by name. Maybe one of these is close to what we need.</p>
| Edmund | 19,542 | <p>You may make use of the built-in index caching for <code>Entity</code> objects; <code>"WolframLanguageSymbol"</code>.</p>
<pre><code>lookupOptionFunction[optionName_String] :=
ToExpression /@
EntityValue[EntityList[
Entity["WolframLanguageSymbol",
{EntityProperty["WolframLanguageSymbol", "OptionNames"] -> optionName}]],
   "Name"]
</code></pre>
<p>First run after creating <code>lookupOptionFunction</code>.</p>
<pre><code>lookupOptionFunction["Alignment"] // AbsoluteTiming
(* {26.2167, {ActionMenu, <<>>, Trigger}} *)
</code></pre>
<p>After quitting <em>Mathematica</em>, restarting, and recreating <code>lookupOptionFunction</code> in the notebook.</p>
<pre><code>lookupOptionFunction["Alignment"] // AbsoluteTiming
(* {2.25438, {ActionMenu, <<>>, Trigger}} *)
</code></pre>
<p>Hope this helps.</p>
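<p>The general pattern here — write the expensive result to disk the first time, reload it in later sessions — is independent of the Entity store. A hypothetical illustration in Python (my addition; <code>shelve</code> plays the role of the on-disk cache, and <code>SYMBOL_OPTIONS</code> is toy data standing in for the slow scan over <code>Options</code> of every system symbol):</p>

```python
import os
import shelve
import tempfile

# fresh directory so the demo starts with a cold cache
CACHE_PATH = os.path.join(tempfile.mkdtemp(), "option_lookup_cache")

calls = {"n": 0}  # counts how often the slow body actually runs

# toy data standing in for Options[...] of every system symbol
SYMBOL_OPTIONS = {"GraphPlot": {"SelfLoopStyle"},
                  "TreePlot": {"SelfLoopStyle"},
                  "Plot": {"PlotRange"}}

def lookup_option_function(option_name):
    """Return the cached answer if present; otherwise compute and persist it."""
    with shelve.open(CACHE_PATH) as db:
        if option_name in db:
            return db[option_name]          # served from disk
        calls["n"] += 1
        result = sorted(s for s, opts in SYMBOL_OPTIONS.items()
                        if option_name in opts)
        db[option_name] = result            # persist for later sessions
        return result

first = lookup_option_function("SelfLoopStyle")   # slow path, fills the cache
second = lookup_option_function("SelfLoopStyle")  # fast path, reads the cache
assert first == second == ["GraphPlot", "TreePlot"]
assert calls["n"] == 1
```

Because the cache file outlives the process, a restarted session that opens the same path skips the slow scan entirely.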
|
839,516 | <p>What is the number of binary sequences of length $n$ with no two consecutive zeros, such that a sequence starting with $0$ has to end with $1$?</p>
<p>Would appreciate suggestions and help.</p>
<p>I tried counting the total sequences and then subtracting the ones containing 2 consecutive zeros, and then subtracting the ones starting with zero and ending with zero, but it got messy and confusing.</p>
| Pavan Sangha | 154,686 | <p>Let $B(n)$ be the number of strings starting with a $1$. Further more let $B_{1}(n)$ be the number of strings starting with a $1$ and ending in a $1$ and let $B_{0}(n)$ be the number of strings starting with a $1$ and ending in a $0$, it follows that $B(n)=B_{1}(n)+B_{0}(n)$. With a bit of calculation you can see that $B_{1}(n)=B_{0}(n-1)+B_{1}(n-1)$ and $B_{0}(n)=B_{1}(n-1).$</p>
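<p>The start-with-$0$ case is left to the reader: such a string must begin "01" (to avoid a double zero) and end in $1$, so it contributes $B_1(n-1)$. A Python sketch of the full count using the answer's two recurrences, checked against brute force (the handling of the $0$-prefixed case is my reading, not stated in the answer):</p>

```python
from itertools import product

def counts_starting_with_1(n):
    """(B1(n), B0(n)): length-n strings with no '00' that start with 1
    and end with 1 resp. 0."""
    b1, b0 = 1, 0                       # length 1: just "1"
    for _ in range(n - 1):
        b1, b0 = b1 + b0, b1            # B1(n)=B1(n-1)+B0(n-1), B0(n)=B1(n-1)
    return b1, b0

def count_recurrence(n):
    b1, b0 = counts_starting_with_1(n)
    # a valid string starting with 0 is "0" + (length n-1 string that
    # starts with 1, to avoid "00", and ends with 1), i.e. B1(n-1) choices
    extra = counts_starting_with_1(n - 1)[0] if n > 1 else 0
    return b1 + b0 + extra

def count_brute(n):
    ok = lambda s: "00" not in s and (s[0] != "0" or s[-1] == "1")
    return sum(ok("".join(t)) for t in product("01", repeat=n))

assert all(count_recurrence(n) == count_brute(n) for n in range(1, 12))
```

For $n=3$ the four sequences are 110, 111, 101, 011, matching <code>count_recurrence(3)</code>.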
|
1,000,938 | <p>$\lim_{X \to \infty} \int_0^X f(x)^2\,dx > 2\left(\lim_{X \to \infty}\int_0^X f(x)\,dx\right)^2$</p>
| Sarastro | 31,047 | <p>I assume that $X>0$. Consider
$$f(x) = \begin{cases}
0 & x = 0,\\
1 & x \in \left(\frac{1}{n+1},\frac{1}{n}\right],\ n \text{ odd},\ n\geq N,\\
-1 & x \in \left(\frac{1}{n+1},\frac{1}{n}\right],\ n \text{ even},\ n\geq N,\\
0 & x > \frac{1}{N},
\end{cases}$$</p>
<p>for some fixed $N\in\mathbb{N}$ to be determined. Since $f$ has countably many discontinuities, it is integrable. We need only consider the case $X\in(0,\frac{1}{N}]$. In this case, $\text{LHS}=X$. If $X\in (\frac{1}{n+1},\frac{1}{n}]$ where $n\geq N$, then a crude approximation gives
\begin{align}\left|\int_0^X f(x)dx\right|&\leq \max\left(\left|\int_0^{\frac{1}{n+1}} f(x)dx\right|,\left|\int_0^{\frac{1}{n}} f(x)dx\right|\right)
\\&\leq \frac{1}{n}-\frac{1}{n+1}\\&=\frac{1}{n}\frac{1}{n+1}
\\ &\leq \frac{X^2}{1-X}\end{align}
So $\text{RHS}\leq\frac{2X^4}{(1-X)^2}$, and you can verify that $X>\frac{2X^4}{(1-X)^2}$ for all small enough $X>0$ (or equivalently, large enough $N$). In fact, $N=1$ works, but it's a bit harder to show.</p>
|
293,371 | <p>This is part of a homework assignment for a real analysis course taught out of "Baby Rudin." Just looking for a push in the right direction, not a full-blown solution. We are to suppose that $f(x)f(y)=f(x+y)$ for all real x and y, and that f is continuous and not zero. The first part of this question let me assume differentiability as well, and I was able to compose it with the natural log and take the derivative to prove that $f(x)=e^{cx}$ where c is a real constant. I'm having a little more trouble only assuming continuity; I'm currently trying to prove that f is differentiable at zero, and hence all real numbers. Is this an approach worth taking?</p>
| ibnAbu | 334,224 | <p><span class="math-container">$f(x)= f(\frac{x}{2})f(\frac{x}{2}) $</span> and together with <span class="math-container">$f(x) \neq 0$</span> implies <span class="math-container">$f(x)$</span> is positive (actually if <span class="math-container">$f(x) $</span> is zero anywhere then it is zero everywhere)</p>
<p>Define <span class="math-container">$g(x)= \log f(x)$</span></p>
<p>Then <span class="math-container">$g(x) + g(y) = g(x+y)$</span>, which is the Cauchy functional equation; since <span class="math-container">$g$</span> is continuous, its solution is <span class="math-container">$g(x) = cx$</span> where <span class="math-container">$c$</span> is a constant.</p>
<p>So <span class="math-container">$f(x)= e^{cx}$</span></p>
|
98,317 | <p>This comes from Artin Second Edition, page 219. Artin defined $G = \langle x,y\mid x^3, y^3, yxyxy\rangle$, and uses the Todd-Coxeter Algorithm to show that the subgroup $H = \langle y\rangle$ has index 1, and therefore $G = H$ is the cyclic group of order 3.</p>
<p>That being the case, $x$ cannot be either $y$ or $y^2$, for then the third relation would not be satisfied. So the relation $x=1$ must follow from the given relations. Is there another way of seeing this besides from the Todd-Coxeter algorithm?</p>
| Did | 6,179 | <p>In other words, assuming that $x^3=y^3=yxyxy=e$, the goal is to prove that $x=e$. </p>
<p>Note that $xyx=y^2(yxyxy)y^2=y^4=y$ hence $xy=(xyx)x^2=yx^2$ $(*)$.</p>
<p>Imagine one wants to carry every $y$ in $x=xy^3$ to the leftmost end of the product. Using $(*)$ twice, one first gets
$$
x=xy^3=(xy)y^2=(yx^2)y^2=yx(xy)y=yx(yx^2)y=y(xy)x(xy),
$$
and, again using $(*)$ twice,
$$
x=y(yx^2)x(yx^2)=y^2x^3yx^2=y^3x^2=x^2.
$$
Thus, $x=x^2$ and $x=e$.</p>
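<p>A non-algebraic sanity check: a pair $(x,y)$ of permutations satisfying the three relations determines a homomorphism from $G$ into the symmetric group, so the image of $x$ must be the identity. The Python sketch below (my addition) verifies this exhaustively in $S_4$:</p>

```python
from itertools import permutations, product

def compose(p, q):
    """(p . q)(i) = p[q[i]], permutations stored as tuples."""
    return tuple(p[i] for i in q)

def power(p, k):
    r = tuple(range(len(p)))
    for _ in range(k):
        r = compose(r, p)
    return r

identity = tuple(range(4))
perms = list(permutations(range(4)))

for x, y in product(perms, perms):
    if power(x, 3) == identity and power(y, 3) == identity:
        yxyxy = compose(y, compose(x, compose(y, compose(x, y))))
        if yxyxy == identity:
            # any pair satisfying the relations forces x = e
            assert x == identity, (x, y)
print("relations force x = e in S_4")
```

Replacing 4 by any $n$ checks every representation into $S_n$; the algebraic argument above is the uniform proof.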
|
573,484 | <p>Prove that if $A \setminus B = \emptyset$, then $A \subseteq B$.</p>
<p>The Venn Diagram helped me to visualize what I'm trying to show (thanks @GA316), but the book asks for a written proof (step by step) by contradiction. Sorry if I wasn't more specific at first; it's just that I've had many troubles in the past with proofs: somehow I have many ideas but I can't seem to connect them to get to the final proof. </p>
<p>This is what I have so far:
$P \rightarrow Q$ is equivalent to $\neg Q \rightarrow \neg P$ contraposition (thanks @The Chaz 2.0)</p>
<p>With P: $ A \setminus B = \emptyset$ and Q: $ A \subseteq B$</p>
<p>so $ \neg Q \equiv A \not\subseteq B\ , \exists x \in A : x \notin B $</p>
<p>be $ t: t \in A \wedge t \notin B $ ...is this right?</p>
<p>as this is the definition for $A \setminus B \ne \emptyset$ ...is this right?</p>
<p>$\therefore \neg Q \rightarrow \neg P \equiv A \not\subseteq B\ \rightarrow A \setminus B \ne\emptyset$</p>
<p>I have many concerns regarding if I'm using the correct notation. I am trying to learn this by myself and have nobody else to ask.</p>
<p>Also, sorry if it took me too long to update, I just started learning about this LaTEX notation.</p>
<p>Thank you very much in advance, you guys are so nice and helpful. You made me feel very welcomed and sure I need to read more about the rules and instructions for using this site. </p>
| Brian M. Scott | 12,042 | <p>HINT: Just follow the definitions. In order to show that $A\subseteq B$, you should let $x$ be an arbitrary element of $A$ and somehow use the hypothesis that $A\setminus B=\varnothing$ to show that $x\in B$. What if $x$ were not in $B$? Then you’d have $x\in A$ <strong>and</strong> $x\notin B$, which would tell you that $x$ is in ... what? </p>
|
573,484 | <p>Prove that if $A \setminus B = \emptyset$, then $A \subseteq B$.</p>
<p>The Venn Diagram helped me to visualize what I'm trying to show (thanks @GA316), but the book asks for a written proof (step by step) by contradiction. Sorry if I wasn't more specific at first; it's just that I've had many troubles in the past with proofs: somehow I have many ideas but I can't seem to connect them to get to the final proof. </p>
<p>This is what I have so far:
$P \rightarrow Q$ is equivalent to $\neg Q \rightarrow \neg P$ contraposition (thanks @The Chaz 2.0)</p>
<p>With P: $ A \setminus B = \emptyset$ and Q: $ A \subseteq B$</p>
<p>so $ \neg Q \equiv A \not\subseteq B\ , \exists x \in A : x \notin B $</p>
<p>be $ t: t \in A \wedge t \notin B $ ...is this right?</p>
<p>as this is the definition for $A \setminus B \ne \emptyset$ ...is this right?</p>
<p>$\therefore \neg Q \rightarrow \neg P \equiv A \not\subseteq B\ \rightarrow A \setminus B \ne\emptyset$</p>
<p>I have many concerns regarding if I'm using the correct notation. I am trying to learn this by myself and have nobody else to ask.</p>
<p>Also, sorry if it took me too long to update, I just started learning about this LaTEX notation.</p>
<p>Thank you very much in advance, you guys are so nice and helpful. You made me feel very welcomed and sure I need to read more about the rules and instructions for using this site. </p>
| GA316 | 72,257 | <p>Draw a Venn diagram of $A \setminus B = A \cap B^c$. Can you see when it will be empty?</p>
|
1,756,448 | <p>Suppose the function, $f$, is differentiable at $x = 1$. $$\lim_{h\rightarrow\ 0}\frac{f(1+h)}{h} = 5$$</p>
<p>Find a) $f(1)$ and, b) $f'(1)$. </p>
<p>I know b) (well at least I think it can) can be found by the definition of the derivative, i.e. </p>
<p>$$\lim_{h\rightarrow\ 0}\frac{f(1+h)-f(1)}{h} $$</p>
<p>Therefore, </p>
<p>$$f'(1) =5-\lim_{h\rightarrow\ 0}\frac{f(1)}{h} $$</p>
<p>However, I'm stuck for a). </p>
| DonAntonio | 31,254 | <p>If $\;f\;$ differentiable at $\;x=1\;$ then it is <em>continuous</em> and, of course, defined there. But</p>
<p>$$f(1)\neq 0\implies \lim_{h\to0}\frac{f(1)}h\;\;\text{doesn't exist}$$</p>
|
1,646,460 | <p>I was wondering whether I could solve the following integral without using trigonometric formulas. If there is no other way to solve it, could you please explain why we replace $x$ with $2\sqrt 2 \sin(t)$? I'm really confused about these types of integrals.</p>
<p>$$\int \sqrt{8 - x^2} dx$$</p>
| Ant | 66,711 | <p>The idea is that we want to get rid of the square root. So we use the fact that $1 - \sin^2 = \cos^2$, so that $$\sqrt{1 - \sin ^2 x } = \sqrt{\cos^2 x} = |\cos x|$$</p>
<p>which we then can integrate (of course there will be another factor coming from $dx$ but that works out)</p>
<p>Same idea when you're faced with $\sqrt{1 + x^2}$; this time we use the fact that $1 + \sinh^2 = \cosh^2$ and we get rid of the square root in the same way.</p>
<p>Of course if you have a number $a$ instead of $1$, you need to be able to factor that; so you want to transform $\sqrt{a - x^2}$ in $\sqrt{a - a\sin^2 x} = \sqrt a \sqrt{1 - \sin^2 x} =\sqrt a |\cos x|$</p>
|
1,646,460 | <p>I was wondering whether I could solve the following integral without using trigonometric formulas. If there is no other way to solve it, could you please explain why we replace $x$ with $2\sqrt 2 \sin(t)$? I'm really confused about these types of integrals.</p>
<p>$$\int \sqrt{8 - x^2} dx$$</p>
| Community | -1 | <p>By rescaling the variable, let us replace the constant $8$ by $1$, for convenience.</p>
<p>The equation $y=\sqrt{1-x^2}$ represents the upper-half of the unit circle, and the integral</p>
<p>$$\int_{t=0}^x\sqrt{1-t^2}dt$$ is the area of a vertical "slice" between the abscissas $0$ and $x$. You can compute it as the area of a sector of aperture $\theta$ such that $\sin(\theta)=x$, plus a triangle of base $x$ and height $\sqrt{1-x^2}$.</p>
<p><a href="https://i.stack.imgur.com/zb3yI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zb3yI.png" alt="enter image description here"></a></p>
<p>Hence,</p>
<p>$$A=\frac12\theta+\frac12x\sqrt{1-x^2}=\frac12\arcsin(x)+\frac12x\sqrt{1-x^2}.$$</p>
<p>This is how a trigonometric function appears, and <strong>you can't avoid it</strong> because it belongs to the final solution.</p>
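<p>The sector-plus-triangle formula is easy to sanity-check numerically; the Python sketch below (my addition, midpoint rule) compares the slice area with $\frac12\arcsin(x)+\frac12 x\sqrt{1-x^2}$ at a sample point:</p>

```python
import math

def slice_area(x, steps=200_000):
    """Midpoint-rule value of the integral of sqrt(1 - t^2) over [0, x]."""
    dt = x / steps
    return sum(math.sqrt(1 - ((i + 0.5) * dt) ** 2) for i in range(steps)) * dt

def sector_plus_triangle(x):
    return 0.5 * math.asin(x) + 0.5 * x * math.sqrt(1 - x * x)

x = 0.6
assert abs(slice_area(x) - sector_plus_triangle(x)) < 1e-8
```

At $x=1$ the formula gives $\pi/4$, the area of the quarter disc, as it should.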
<hr>
<p>You also see the connection by taking the derivative</p>
<p>$$(\arcsin(x))'=\frac1{\sqrt{1-x^2}}.$$
The trigonometric function disappears and is replaced by a rational expression.</p>
<p>A similar phenomenon occurs with the logarithm,</p>
<p>$$(\ln(x))'=\frac1x,$$</p>
<p>and this is why you will see logarithms appear now and then in antiderivatives.</p>
|
256,785 | <p>Let $T_0$ be the set theory axiomatized by $ZFC^-$ (that is $ZFC$ without powerset) + every set is countable + $\mathbb{V}=\mathbb{L}$. </p>
<p><strong>Question 1:</strong> Suppose $\phi$ is a sentence of set theory. Must there be a large cardinal axiom $A$ such that $\phi$ is decided by $T_0$ + "there are transitive set models of $A$ of arbitrarily high ordinal height?" </p>
<p><strong>Update:</strong> no.</p>
<p><strong>Question 2:</strong> Is there a set $T$ of $\Pi_2$ sentences, such that $ZFC^- \cup T$ is complete? </p>
<p><strong>Update:</strong> no, essentially. See my answer below.</p>
<p><strong>Questions 3 and 4 after comments.</strong></p>
<p><strong>Comments.</strong></p>
<ul>
<li>Question 1 is related to, but different from several previous questions on Mathoverflow, e.g. <a href="https://mathoverflow.net/questions/118081/nice-algebraic-statements-independent-from-zf-v-l-constructibility">Nice Algebraic Statements Independent from ZF + V=L (constructibility)</a> or <a href="https://mathoverflow.net/questions/11480/on-statements-independent-of-zfc-v-l?noredirect=1&lq=1">On statements independent of ZFC + V=L</a> or <a href="https://mathoverflow.net/questions/81190/natural-statements-independent-from-true-pi0-2-sentences?rq=1">Natural statements independent from true $\Pi^0_2$ sentences</a>. Regarding the first two---Hamkins gives a long list of examples of things independent of $\mathbb{V}= \mathbb{L}$, but it seems that my schema takes care of all of them. (Of course in those questions $ZFC$ was assumed.)</li>
<li>Moreover, Question 1 was partly motivated by Dorais' answer to the second question above, which references a question of Shelah from <a href="http://shelah.logic.at/E16/E16.html" rel="nofollow noreferrer">The Future of Set Theory</a>.</li>
<li>I'm leaving "large cardinal axiom" undefined here (so Question 1 can't be formalized), but obviously we should exclude inconsistent axioms, or things like $ZFC + $ "there are no transitive set models of ZFC."</li>
<li><p>If we pick a definition of ``large cardinal axiom", then "every set is countable" + $\mathbb{V}=\mathbb{L}$ + "there are transitive set models of $A$ of arbitrarily high ordinal height" (for each large cardinal axiom $A$) is a set of $\Pi_2$ sentences. So a negative answer to Question 2 implies a negative answer to Question 1.</p></li>
<li><p>We can ignore (recursive) large cardinal axiom schemas $\Gamma$ because we can just replace them by $ZFC_0$+"there is a transitive set model of $\Gamma$" where $ZFC_0$ is some large enough finite fragment of $ZFC$. In particular we don't have to write "$ZFC + A$". </p></li>
<li><p>$T_0$ + this axiom schema (loosely speaking) is of personal significance to me: in fact I believe it to be true. (Namely, given whatever universe of sets $\mathbb{V}$ in which we are working, it seems reasonable to suppose there is a larger universe of sets $\mathbb{W} \models \mathbb{V}=\mathbb{L}$ in which $\mathbb{V}$ is countable, or such that $\mathbb{W}$ is a model of a given large cardinal axiom $A$. Ergo,...) </p></li>
</ul>
<p>Let $\mathcal{L}_{\mbox{set}}$ be the language of set theory $\{\in\}$ and let $\mathcal{L}_1$ be $\mathcal{L}_{\mbox{set}} \cup \{P\}$, $P$ a new unary relation symbol. Let $T_1$ be $T_0$ + the axioms asserting that $P \subseteq \mbox{ON}$ is stationary (for $\in$-definable classes) and for every $\alpha \in P$, $(\mathbb{V}_\alpha, \in) \preceq (\mathbb{V}, \in)$. We insist that large cardinal axioms $A$ be sentences of $\mathcal{L}_{\mbox{set}}$.</p>
<p><strong>Question 3:</strong> Suppose $\phi$ is a sentence of set theory (i.e. of $\mathcal{L}_{\mbox{set}}$). Must there be a large cardinal axiom $A$ such that $\phi$ is decided by $T_1$ + "there are transitive set models of $A$ of arbitrarily high ordinal height?" </p>
<p><strong>Question 4:</strong> Is there a set $T$ of $\Pi_2$ sentences of set theory, such that $T_1 \cup T$ decides every sentence of set theory?</p>
| Emil Jeřábek | 12,705 | <p>Without a definition of “large cardinal axiom”, I’m going to ignore Q1 and Q3.</p>
<p>The answers to Q2 and Q4 are negative by the following general principle. (For Q4, we take $T_0$ to be the set of $\mathcal L_{\mathrm{set}}$-consequences of $T_1$.)</p>
<blockquote>
<p>$\DeclareMathOperator\Tr{Tr}\DeclareMathOperator\Wit{Wit}\let\eq\leftrightarrow\def\gonu#1{\ulcorner#1\urcorner}\let\ob\overline$<strong>Proposition:</strong> Let $T_0$ be an r.e. theory interpreting Robinson’s arithmetic, and $\Gamma$ a set of sentences for which $T_0$ has a truth predicate $\Tr_\Gamma(x)$, that is,
$$\tag{$*$}T_0\vdash\phi\eq\Tr_\Gamma(\ob{\gonu\phi})$$
for all $\phi\in\Gamma$. Then no extension of $T_0$ by a set of $\Gamma$-sentences is a consistent complete theory.</p>
</blockquote>
<p><strong>Proof:</strong> Let $S\subseteq\Gamma$, and assume for contradiction that $T=T_0+S$ is consistent and complete. The basic idea of the proof is that we can define in $T$ a truth predicate $\Tr(x)$ for <em>all</em> sentences as “$x$ is $T_0$-provable from a set of true $\Gamma$-sentences”, contradicting Tarski’s theorem on the undefinability of truth.</p>
<p>In more detail, we fix an interpretation of, say, $S^1_2$ in $T_0$ so that we have basic coding of sequences of integers, and a (polynomial-time) proof predicate for $T_0$. We define $\Wit(w,x)$ to be the formula</p>
<p>“the sequence $w$ is a $T_0$-proof of a sentence $x$ from extra axioms, each of which is a sentence $a\in\Gamma$ such that $\Tr_\Gamma(a)$,”</p>
<p>and we put</p>
<p>$$\Tr(x)\eq\exists w\,(\Wit(w,x)\land\forall w'<w\,\neg\Wit(w',\gonu{\neg x})).$$</p>
<blockquote>
<p><strong>Claim:</strong> Whenever $T$ proves a sentence $\phi$, it also proves $\Tr(\ob{\gonu\phi})$. Whenever $T$ proves $\neg\phi$, it also proves $\neg\Tr(\ob{\gonu\phi})$.</p>
</blockquote>
<p>Using the Claim, we can easily finish the proof of the Proposition: by Gödel’s diagonal lemma, there is a sentence $\alpha$ such that</p>
<p>$$T_0\vdash\alpha\eq\neg\Tr(\ob{\gonu\alpha}).$$</p>
<p>Since $T$ is complete, it proves $\alpha$ or $\neg\alpha$. If $T\vdash\alpha$, then $T$ proves $\neg\Tr(\ob{\gonu\alpha})$ by the definition of $\alpha$, and $\Tr(\ob{\gonu\alpha})$ by the Claim, hence $T$ is inconsistent, contrary to our assumptions. The case $T\vdash\neg\alpha$ is similar.</p>
<p>Now, to prove the Claim, assume $T\vdash\phi$. We can fix a $T_0$-proof of $\phi$ from some $\psi_1,\dots,\psi_k\in S$, which has a standard Gödel number $n$. </p>
<p>By $\Sigma^0_1$-completeness, $T$ proves that $\ob n$ is a $T_0$-proof of $\ob{\gonu\phi}$ from $\ob{\gonu{\psi_1}},\dots,\ob{\gonu{\psi_k}}$. Moreover, $T$ proves each $\psi_i$, hence also $\Tr_\Gamma(\ob{\gonu{\psi_i}})$ by $(*)$. Thus,</p>
<p>$$T\vdash\Wit(\ob n,\ob{\gonu{\phi}}).$$</p>
<p>On the other hand, let $m<n$, we will show</p>
<p>$$T\vdash\neg\Wit(\ob m,\ob{\gonu{\neg\phi}}).$$</p>
<p>This again follows by $\Sigma^0_1$-completeness unless $m$ is an actual Gödel number of an actual $T_0$-proof of $\neg\phi$ from some sentences $\chi_1,\dots,\chi_l\in\Gamma$. Since $T$ also proves $\phi$, this means</p>
<p>$$T\vdash\neg\chi_1\lor\dots\lor\neg\chi_l,$$</p>
<p>hence</p>
<p>$$T\vdash\neg\Tr_\Gamma(\ob{\gonu{\chi_1}})\lor\dots\lor\neg\Tr_\Gamma(\ob{\gonu{\chi_l}})$$</p>
<p>by $(*)$, hence $T\vdash\neg\Wit(\ob m,\ob{\gonu{\neg\phi}})$ as needed.</p>
<p>Since $T$ knows that the only numbers below $\ob n$ are $\ob0,\dots,\ob{n-1}$, we have established $T\vdash\Tr(\ob{\gonu\phi})$.</p>
<p>The second part of the Claim is similar: assuming $T\vdash\neg\phi$, we fix its standard proof with Gödel number $n$, and we show</p>
<p>$$T\vdash\Wit(\ob n,\ob{\gonu{\neg\phi}})$$</p>
<p>and</p>
<p>$$T\vdash\neg\Wit(\ob m,\ob{\gonu\phi})$$</p>
<p>for all $m\le n$, which implies $T\vdash\neg\Tr(\ob{\gonu\phi})$.</p>
|
2,947,218 | <p>I know that if a curve <span class="math-container">$C$</span> is parametrized as <span class="math-container">$$s(t)=(x(t),y(t)),\quad t\in [a,b]$$</span>
then <span class="math-container">$$\ell(C)=\int ds=\int_a^b \sqrt{\dot x^2(t)+\dot y^2(t)}dt.\tag{*}$$</span></p>
<p>In the <a href="https://fr.wikipedia.org/wiki/Abscisse_curviligne" rel="nofollow noreferrer">French Wikipedia</a>, it's written that <span class="math-container">$(*)$</span> can be written as <span class="math-container">$$ds^2=dx^2+dy^2.$$</span></p>
<p>I really don't understand the logic. Could someone explain? What is the link between <span class="math-container">$(*)$</span> and <span class="math-container">$ds^2=dx^2+dy^2$</span>? I tried <span class="math-container">$$ds^2=dx^2+dy^2\implies ds=\sqrt{dx^2+dy^2}\implies \int ds=\int\sqrt{dx^2+dy^2},$$</span>
but I can't give an interpretation of the RHS... So it should be wrong.</p>
| md2perpe | 168,433 | <p>Formally we have
<span class="math-container">$$
\int \sqrt{\dot x^2(t)+\dot y^2(t)} \, dt
= \int \sqrt{\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2} \, dt \\
= \int \sqrt{\left[\left(\frac{dx}{dt}\right)^2+\left(\frac{dy}{dt}\right)^2\right]dt^2}
= \int \sqrt{dx^2+dy^2}
$$</span>
It's not really stranger than this.</p>
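To see the formal manipulation produce the right number, here is a small numerical check in Python (my own addition): computing the length of a quarter of the unit circle from $\int_a^b \sqrt{\dot x^2(t)+\dot y^2(t)}\,dt$ should give $\pi/2$.

```python
import math

def arc_length(x, y, a, b, steps=100_000):
    """Approximate the integral of sqrt(x'(t)^2 + y'(t)^2) dt over [a, b],
    using central differences for the derivatives and the midpoint rule."""
    dt = (b - a) / steps
    total = 0.0
    for i in range(steps):
        t = a + (i + 0.5) * dt
        h = 1e-6
        dx = (x(t + h) - x(t - h)) / (2 * h)
        dy = (y(t + h) - y(t - h)) / (2 * h)
        total += math.hypot(dx, dy) * dt
    return total

length = arc_length(math.cos, math.sin, 0.0, math.pi / 2)
assert abs(length - math.pi / 2) < 1e-6
```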
|
2,947,218 | <p>I know that if a curve <span class="math-container">$C$</span> is parametrized as <span class="math-container">$$s(t)=(x(t),y(t)),\quad t\in [a,b]$$</span>
then <span class="math-container">$$\ell(C)=\int ds=\int_a^b \sqrt{\dot x^2(t)+\dot y^2(t)}dt.\tag{*}$$</span></p>
<p>In the <a href="https://fr.wikipedia.org/wiki/Abscisse_curviligne" rel="nofollow noreferrer">French Wikipedia</a>, it's written that <span class="math-container">$(*)$</span> can be written as <span class="math-container">$$ds^2=dx^2+dy^2.$$</span></p>
<p>I really don't understand the logic. Could someone explain? What is the link between <span class="math-container">$(*)$</span> and <span class="math-container">$ds^2=dx^2+dy^2$</span>? I tried <span class="math-container">$$ds^2=dx^2+dy^2\implies ds=\sqrt{dx^2+dy^2}\implies \int ds=\int\sqrt{dx^2+dy^2},$$</span>
but I can't give an interpretation of the RHS... So it should be wrong.</p>
| edm | 356,114 | <p>The object <span class="math-container">$ds^2$</span> is a tensor field. It assigns to each point a tensor. The formula <span class="math-container">$ds^2=dx^2+dy^2$</span> tells you how this tensor field is defined.</p>
<p>The terminology looks fancy, but it simply does the following:</p>
<p>Write the canonical inner product on <span class="math-container">$\Bbb R^2$</span> as <span class="math-container">$\langle\cdot,\cdot\rangle:\Bbb R^2\times\Bbb R^2\to\Bbb R$</span>. This function is a bilinear function. A tensor is simply a real-valued function that takes multiple vector as inputs, such that this function is linear in each argument. So inner product is itself a tensor. The tensor field <span class="math-container">$ds^2=dx^2+dy^2$</span> tells you that it assigns to each point the tensor <span class="math-container">$\langle\cdot,\cdot\rangle$</span>, i.e. <span class="math-container">$ds^2$</span> is a function that sends each point <span class="math-container">$p\in\Bbb R^2$</span> to <span class="math-container">$\langle\cdot,\cdot\rangle$</span>. For a point <span class="math-container">$p\in\Bbb R^2$</span>, <span class="math-container">$ds^2(p)$</span> is a function that takes two vector entries (the inner product). So generically you can write <span class="math-container">$ds^2(p)(v,w)=\langle v,w\rangle$</span> for two vectors <span class="math-container">$v,w\in\Bbb R^2$</span>.</p>
<p>What this tensor field does is the following:</p>
<p>A smooth curve <span class="math-container">$$\gamma(t)=(\gamma_1(t),\gamma_2(t)),\quad t\in [a,b]$$</span> is given. (I don't want to use <span class="math-container">$s$</span>, to indicate that <span class="math-container">$ds^2$</span> should not be associated to any curve. The tensor is defined independently from curve.) You can differentiate this curve at a point <span class="math-container">$t$</span>, to obtain a vector <span class="math-container">$\gamma'(t)\in\Bbb R^2$</span>.</p>
<p>We can define a quantity from this curve <span class="math-container">$\gamma$</span> and the tensor field <span class="math-container">$ds^2$</span>, as follows:
<span class="math-container">$$\int_a^b\sqrt{ds^2(\gamma(t))(\gamma'(t),\gamma'(t))}\ dt.$$</span></p>
<p>If you look carefully enough, you would see <span class="math-container">$$ds^2(\gamma(t))(\gamma'(t),\gamma'(t))=\langle\gamma'(t),\gamma'(t)\rangle=\dot\gamma_1^2(t)+\dot\gamma_2^2(t)$$</span></p>
<p>So the above quantity is the same as <span class="math-container">$$\int_a^b \sqrt{\dot \gamma_1^2(t)+\dot \gamma_2^2(t)}\ dt.$$</span></p>
<p>There is one thing I have not quite justified yet, and that is, how <span class="math-container">$ds^2=dx^2+dy^2$</span> tells us the tensor field <span class="math-container">$ds^2$</span> assigns to each point the inner product <span class="math-container">$\langle\cdot,\cdot\rangle$</span> and not something else? Well, for the moment, take the following as a definition:</p>
<p>A general tensor field on <span class="math-container">$\Bbb R^2$</span> that assigns to each point a bilinear function is of the following general form <span class="math-container">$$p\mapsto f(p)dx\otimes dx+g(p)dx\otimes dy+h(p)dy\otimes dx+k(p)dy\otimes dy$$</span> for four real-valued functions <span class="math-container">$f,g,h,k$</span>. The whole thing on the right is a bilinear function. By "definition", it takes two vector inputs <span class="math-container">$$\vec a=\begin{bmatrix}a_1\\a_2\end{bmatrix},\vec b=\begin{bmatrix}b_1\\b_2\end{bmatrix}\in\Bbb R^2$$</span> and outputs the following number: <span class="math-container">$$f(p)dx\otimes dx+g(p)dx\otimes dy+h(p)dy\otimes dx+k(p)dy\otimes dy(\vec a,\vec b)=\begin{bmatrix}a_1&a_2\end{bmatrix}\begin{bmatrix}f(p)&g(p)\\h(p)&k(p)\end{bmatrix}\begin{bmatrix}b_1\\b_2\end{bmatrix}.$$</span></p>
<p>(It is not really a definition, because it can be derived from some more fundamental definitions. I won't go into it here. If you want to know what the symbols <span class="math-container">$dx\otimes dy$</span> etc. mean, learn tensors from a book on analysis on <span class="math-container">$\Bbb R^n$</span> or differential geometry, and ask another question about these symbols.)</p>
<p>Now, <span class="math-container">$dx^2$</span> is a shorthand for <span class="math-container">$dx\otimes dx$</span> and <span class="math-container">$dy^2$</span> is a shorthand for <span class="math-container">$dy\otimes dy$</span>. When the tensor field is <span class="math-container">$dx^2+dy^2$</span> (i.e. when <span class="math-container">$f=k=$</span> constant <span class="math-container">$1$</span> function, <span class="math-container">$g=h=$</span> constant zero function), you should recognise that <span class="math-container">$$\begin{bmatrix}a_1&a_2\end{bmatrix}\begin{bmatrix}1&0\\0&1\end{bmatrix}\begin{bmatrix}b_1\\b_2\end{bmatrix}=\begin{bmatrix}a_1&a_2\end{bmatrix}\begin{bmatrix}b_1\\b_2\end{bmatrix}$$</span> is just the canonical inner product of <span class="math-container">$\vec a$</span> and <span class="math-container">$\vec b$</span>.</p>
<p>Here is my comment on your manipulation of <span class="math-container">$ds^2$</span>:</p>
<p>You should NOT take square root of <span class="math-container">$ds^2$</span> to become <span class="math-container">$ds$</span>. The first reason is that this symbol <span class="math-container">$ds^2$</span> is NOT the square of some other symbol <span class="math-container">$ds$</span>. The notation <span class="math-container">$$\int ds$$</span> is just an abuse of notation, to denote the total length of the curve. There is actually no such object <span class="math-container">$ds$</span>.</p>
|
3,868,571 | <p>Let <span class="math-container">$n\ge 1$</span> and <span class="math-container">$A,B\in\mathrm M_n(\mathbb R)$</span>.</p>
<p>Let's assume that</p>
<p><span class="math-container">$$\forall Q\in\mathrm M_n(\mathbb R), \quad \det\begin{pmatrix} I_n & A \\ Q & B\end{pmatrix}=0$$</span></p>
<p>where <span class="math-container">$I_n$</span> is the identity matrix of <span class="math-container">$\mathrm M_n(\mathbb R)$</span>.</p>
<blockquote>
<p>Can we prove that <span class="math-container">$\mathrm{rank} \begin{pmatrix}A\\ B\end{pmatrix}<n$</span>?</p>
</blockquote>
<hr />
<p>This fact seems quite obvious, but I can't find any straightforward argument to prove it.</p>
<p><em>Some ideas.</em></p>
<p>With <span class="math-container">$Q=0$</span>, we deal with a block-triangular matrix, so we have <span class="math-container">$\det B=0$</span>.</p>
<p>Moreover, with <span class="math-container">$Q=\lambda I_n$</span>, <span class="math-container">$\lambda\in\mathbb R$</span>, since it commutes with <span class="math-container">$B$</span>, we have</p>
<p><span class="math-container">$$\forall \lambda\in\mathbb R,\quad \det(B-\lambda A)=0,$$</span></p>
<p>so if <span class="math-container">$\det(A)\ne 0$</span>, we have</p>
<p><span class="math-container">$$\forall \lambda\in\mathbb R,\quad\det(B-\lambda A)=\det(A)\det(A^{-1}B-\lambda I_n)=0,$$</span></p>
<p>which means that every <span class="math-container">$\lambda\in\mathbb R$</span> is an eigenvalue of <span class="math-container">$A^{-1}B$</span>, which is absurd.</p>
<p>So <span class="math-container">$\det(A)=0$</span> also.</p>
| user1551 | 1,551 | <p>It is true over any field, not just <span class="math-container">$\mathbb R$</span>. Note that <span class="math-container">$\det\pmatrix{I&A\\ Q&B}=\det(B-QA)$</span>. View <span class="math-container">$A$</span> and <span class="math-container">$B$</span> as two linear maps from a vector space <span class="math-container">$V$</span> to another vector space <span class="math-container">$W$</span> of the same dimension. View <span class="math-container">$Q$</span> as a linear operator on <span class="math-container">$W$</span>. By changing the bases of <span class="math-container">$V$</span> and <span class="math-container">$W$</span> separately, we may assume that <span class="math-container">$A=I_r\oplus0$</span> where <span class="math-container">$r=\operatorname{rank}(A)$</span>. Partition <span class="math-container">$B$</span> and <span class="math-container">$Q$</span> accordingly as <span class="math-container">$[B_1|B_2]$</span> and <span class="math-container">$[Q_1|Q_2]$</span>, where <span class="math-container">$B_1$</span> and <span class="math-container">$Q_1$</span> each has <span class="math-container">$r$</span> columns. Then <span class="math-container">$B-QA=[B_1-Q_1|B_2]$</span> is singular for every <span class="math-container">$Q_1$</span>. Hence <span class="math-container">$B_2$</span> has deficient column rank and so does <span class="math-container">$\pmatrix{A\\ B}=\pmatrix{\ast&0\\ \ast&B_2}$</span>. Since the change of bases amounts to a transformation in the form of <span class="math-container">$\pmatrix{A\\ B}\mapsto\pmatrix{U&0\\ 0&U}\pmatrix{A\\ B}V$</span> for some invertible <span class="math-container">$U$</span> and <span class="math-container">$V$</span>, the <span class="math-container">$\pmatrix{A\\ B}$</span> before change also has deficient column rank.</p>
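The key identity <code>det [[I, A], [Q, B]] = det(B - QA)</code> can be sanity-checked numerically. Below is a small pure-Python sketch; the 2×2 blocks are arbitrary sample values, and the determinant helper is a plain Laplace expansion (fine for tiny matrices):

```python
def det(M):
    # Determinant by Laplace expansion along the first row (fine for tiny matrices).
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

# Arbitrary sample 2x2 blocks, assembled into the 4x4 block matrix [[I, A], [Q, B]]
I = [[1, 0], [0, 1]]
A = [[2, 1], [0, 3]]
Q = [[5, -1], [2, 4]]
B = [[1, 2], [3, 7]]

block = [I[i] + A[i] for i in range(2)] + [Q[i] + B[i] for i in range(2)]
QA = matmul(Q, A)
B_minus_QA = [[B[i][j] - QA[i][j] for j in range(2)] for i in range(2)]

print(det(block) == det(B_minus_QA))  # True
```

The identity itself comes from the block row operation (bottom rows) − Q·(top rows), which leaves the determinant unchanged and zeroes out the lower-left block.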
|
889,628 | <p>The question and answer is shown but I don't fully understand the answer for part a. Could someone please explain to me why the integral setup for the marginal density function of y1 is from y1 to 1, and not 0 to 1? And also the same thing for y2. Thank you
<img src="https://i.stack.imgur.com/ECQQ7.png" alt="Question"></p>
<p><img src="https://i.stack.imgur.com/Kg8N1.png" alt="enter image description here"></p>
| user487291 | 487,291 | <p>Infinity and Me is a children's book that literally and figuratively illustrates the concept of infinity, in a way that is accessible to all ages. </p>
<p>(<a href="https://i.stack.imgur.com/o5T6t.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/o5T6t.jpg</a>)</p>
|
1,714,053 | <p>How can I prove that if $\sum\limits_{n\in\Bbb N} a_n^2 $ converges then also $\sum\limits_{n\in\Bbb N} \frac{\lvert a_n\rvert}{n}$ converges?</p>
<p>Thanks ! </p>
| Spenser | 39,285 | <p>By <a href="https://en.wikipedia.org/wiki/Cauchy%E2%80%93Schwarz_inequality">Cauchy–Schwarz inequality</a>,
$$\sum_{n=1}^N\frac{|a_n|}{n}\le \left(\sum_{n=1}^Na_n^2\right)^{1/2}\left(\sum_{n=1}^N\frac{1}{n^2}\right)^{1/2}.$$
Both sums on the right converge, so the partial sums of $\sum|a_n|/n$ are bounded and hence the series converges.</p>
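To see the bound in action numerically, here is a short sketch with the illustrative choice $a_n = 2^{-n}$ (any sequence with a convergent sum of squares would do):

```python
import math

N = 50
a = [2.0 ** (-n) for n in range(1, N + 1)]  # sample sequence; sum of squares converges

lhs = sum(abs(a[n - 1]) / n for n in range(1, N + 1))
rhs = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(1.0 / (n * n) for n in range(1, N + 1)))

print(lhs <= rhs)  # True: the partial sums obey the Cauchy-Schwarz bound
```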
|
1,714,053 | <p>How can I prove that if $\sum\limits_{n\in\Bbb N} a_n^2 $ converges then also $\sum\limits_{n\in\Bbb N} \frac{\lvert a_n\rvert}{n}$ converges?</p>
<p>Thanks ! </p>
| David Mitra | 18,986 | <p>Expanding $(a-b)^2\ge0$ gives the inequality $|ab|\le{1\over2}(a^2+b^2)$. </p>
<p>Apply this with $a=a_n$ and $b={1\over n}$ to obtain ${|a_n|\over n}\le{1\over2}(a_n^2+{1\over n^2})$ for every $n$. Now you're set up nicely to use the Comparison Test.</p>
<p>(Alternatively: ${|a_n|\over n} \le( \max \{|a_n|, 1/n\})^2 \le |a_n|^2+{1\over n^2}$.)</p>
|
2,031,842 | <p>If 3 is the remainder when dividing $P(x)$ with $(x-3)$, and $5$ is the remainder when dividing $P(x)$ with $(x-4)$, what is the remainder when dividing $P(x)$ with $(x-3)(x-4)$?</p>
<p>I'm completely puzzled by this, I'm not sure where to start...</p>
<p>Any hint would be much appreciated. </p>
| Bernard | 202,857 | <p>Euclidean division by $(x-3)(x-4)$ can be written as
$$P(x)=(x-3)(x-4)Q(x)+R(x),\quad\deg R(x)\le1.$$
Instead of writing $R(x)$ with the standard basis $\{1,x\}$, use the basis $\{x-3,x-4\}$. Thus
$$P(x)=(x-3)(x-4)Q(x)+A(x-3)+B(x-4)$$
Setting $x=3$, you get right away $B=-P(3)=-3$. Similarly $A=P(4)=5$, and finally
$$R(x)=5(x-3)-3(x-4)=2x-3.$$</p>
<p>This formula can be generalised: the remainder when dividing a polynomial $P(x)$ by $(x-a)(x-b),\enspace a\ne b$, is
$$R(x)=\frac1{b-a}\Bigl[P(b)(x--a)-P(a)(x-b)\Bigr]=\frac1{b-a}\begin{vmatrix}x-a&x-b\\[1ex]P(a)&P(b)\end{vmatrix}.$$</p>
|
3,996,790 | <p>I just realized that there may be a case where L'Hopital's rule fails, specifically</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which evaluates to an indeterminate form, specifically <span class="math-container">$\frac{\infty}{\infty}$</span>. Sure, we can cancel the <span class="math-container">$e^x$</span>s, but when we use L'Hopital's, we get</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{(e^x)^\prime}{(e^x)^\prime}$$</span></p>
<p>Since the derivative of <span class="math-container">$e^x$</span> is <span class="math-container">$e^x$</span>, we have</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which is our original limit. Therefore, L'Hopital's fails to work in this example.</p>
<p>Question: Does L'Hopital's rule actually fail in this example, or am I understanding it wrong?</p>
<p>Edit: I mean "fails" in which it does not make progress toward a determinate result.</p>
| Spectree | 849,152 | <p>L'Hôpital's Rule says that if <span class="math-container">$\lim_{x\to a}{\frac{f'(x)}{g'(x)}}=L $</span>, then <span class="math-container">$\lim_{x\to a}{\frac{f(x)}{g(x)}}=L$</span>. With <span class="math-container">$f'(x)=e^x$</span> and <span class="math-container">$g'(x)=e^x$</span>, the quotient of derivatives is constantly <span class="math-container">$1$</span>, so the hypothesis holds with <span class="math-container">$L=1$</span> and the rule gives the correct limit. L'Hôpital's Rule is also valid when <span class="math-container">$x\to \infty $</span>.</p>
|
2,339,479 | <p>I am an avid board game player, especially ones with miniatures that duke it out (Risk, Axis and Allies, etc.) and I'm trying to analyze efficiency of different units in one of my more recent pickups. </p>
<p>In the game (Battlelore Second Edition for those who are curious), you get a hit on either a 5 or a 6. This would be simple if you're just rolling one die. However, figures in this game roll up from 1 die, all the way up to 5 dice (maybe more in certain situations) and I'm having a bugger of a time figuring out the probability.
I set up a table in excel that takes the amount of dice being rolled in the column, and checks the probability of getting 1, 2, 3, 4 or 5 hits in each respective row. I'd rather not list out all of the possibilities and count them, so I'm look for an equation of course.</p>
<p>To sum up: <strong>I need an equation(s) that gives Z, the probability of getting X hits with Y dice where hits are on 5's and 6's.</strong></p>
<p>EDIT: I have gotten as far as all of the chances of getting 1 hit with Y dice, as well as the chance of 2 hits with up to 3 dice, but 4 dice with 2 hits and onwards it giving me trouble.</p>
| import random | 462,373 | <p>I'm not particularly familiar with programming in Excel, but I do like Python, so here's some code based on David K's excellent answer and <a href="https://stackoverflow.com/a/26561091/7675174">this</a> binomial coefficient implementation:</p>
<pre><code>from math import factorial as fac
def binomial(x, y):
try:
binom = fac(x) // fac(y) // fac(x - y)
except ValueError:
binom = 0
return binom
def exact_hit_chance(n, k):
"""Return the probability of exactly k hits from n dice"""
# a hit is a 5 or 6, so 1/3 chance.
return binomial(n,k) * (1/3)**k * (2/3)**(n-k)
def hit_chance(n, k):
"""Return the probability of at least k hits from n dice"""
return sum([exact_hit_chance(n, x) for x in range(k,n+1)])
for dice in range(1,6):
for hits in range(1,6):
chance = hit_chance(dice, hits)
if chance > 0:
print(hits, "hit(s) from", dice, "dice: {:>6.2%}".format(chance))
</code></pre>
<p>This produces the following output:</p>
<pre><code>1 hit(s) from 1 dice: 33.33%
1 hit(s) from 2 dice: 55.56%
2 hit(s) from 2 dice: 11.11%
1 hit(s) from 3 dice: 70.37%
2 hit(s) from 3 dice: 25.93%
3 hit(s) from 3 dice: 3.70%
1 hit(s) from 4 dice: 80.25%
2 hit(s) from 4 dice: 40.74%
3 hit(s) from 4 dice: 11.11%
4 hit(s) from 4 dice: 1.23%
1 hit(s) from 5 dice: 86.83%
2 hit(s) from 5 dice: 53.91%
3 hit(s) from 5 dice: 20.99%
4 hit(s) from 5 dice: 4.53%
5 hit(s) from 5 dice: 0.41%
</code></pre>
<p>If you want to poke around with the code or change the number of dice you needn't install Python, you can <a href="https://tio.run/##nVLLasMwELz7KwaXgpQqb5pDoLn1B3ropZSgOHItbEtGVsCm9NtdPdykkNBD96Lx7szurOSmt4VW62HIja5Rc1tA1o02FjnPrDaSV@Ct/0iS5ChyHKTStcuSjqGn2wQurOkj8BEIePIS0lHM5wH1Z9Rh6oSBLrpMNBavvDqJZ2O0ue6yCBkj7MmomIw2ROfs7Qtp91nBVSaIYihHO2mavkSBLQQaow/8ICtpe@hRWPUo4cRuMb@2wlFmwsmC/A7c1yBbBx6hDTYMrcZyvkYcNrsy5S9EsZJiAuJ4dDIpPVwFSNS0pNH2Pwxzi0rw1v7heHTSnmryduti3DPkbo0OUsFw9SFIydTDkr47V77ge11qS7YZfflamHmj5iNOcK/0a5zvxYKKnnky/6HusLjIfTRGKks8nSF1B2lp2DBliJ1Sf2zxud1tZqv7r3TmTLnflMSGlA7DNw" rel="nofollow noreferrer">Try-It-Online</a>.</p>
|
2,153,128 | <p>I have been given an exercise to convert coordinates from cylindrical to rectangular ones. This task is easy, but one of them looks strange. The point in cylindrical coordinates is $(0,45,10)$. This corresponds to $r=0$. What is this point? Is it not the origin? But why then the angle and z-coordinate?
I have one more question: how does one describe the geometric meaning of the following transformation in cylindrical coordinates: $(r,\theta,z)$ to $(-r,\theta-\pi/4,z)$?</p>
<p>$-r$ makes the whole problem here.</p>
| Julio Maldonado Henríquez | 406,412 | <p>With binomials, so $\binom{11}{2}$ or $\binom{11}{3}$ respectively, where in general a subset of $k$ from a group of $n$ is $\binom{n}{k}=\frac{n!}{k!(n-k)!}$</p>
|
3,941,728 | <p>I know that every function from the empty set to any other set is the empty function.</p>
<p>I also know that there is no function to the empty set from any other set.</p>
<p>Now, what if both the domain and codomain are empty? Would such a function exist?</p>
| user1551 | 1,551 | <p><em>Hint.</em> Consider the second-order differences of the rows. When <span class="math-container">$n\ge3$</span>,
<span class="math-container">$$
\pmatrix{1&-2&1\\ &1&-2&1\\ &&\ddots&\ddots&\ddots\\ &&&1&-2&1\\ &&&&1&0\\ &&&&&1}A_n
=\pmatrix{0&2\\ \vdots&0&\ddots\\ \vdots&\ddots&\ddots&\ddots\\ 0&\cdots&\cdots&0&2\\ n-2&n-3&\cdots&1&0&1\\ n-1&n-2&\cdots&\cdots&1&0}.
$$</span></p>
|
552,384 | <blockquote>
<p>Prove that</p>
<p>$$
\int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx = \ln\left(\frac{a}{b}\right)
$$</p>
</blockquote>
<p><strong>My Attempt:</strong></p>
<p>Define the function $I(a,b)$ as</p>
<p>$$
I(a,b) = \int_{0}^{\infty}\frac{e^{-bx}-e^{-ax}}{x}\,dx
$$</p>
<p>Differentiate both side with respect to $a$ to get</p>
<p>$$
\begin{align}
\frac{dI(a,b)}{da} &= \int_{0}^{\infty}\frac{0-e^{-ax}(-x)}{x}\,dx\\
&= \int_{0}^{\infty}e^{-ax}\,dx\\
&= -\frac{1}{a}(0-1)\\
&= \frac{1}{a}
\end{align}
$$</p>
<p>How can I complete the proof from here?</p>
| Sangchul Lee | 9,340 | <p>A problem-specific solution is as follows:</p>
<p>\begin{align*}
\int_{0}^{\infty} \frac{e^{-bx} - e^{-ax}}{x} \, dx
&= - \int_{0}^{\infty} \int_{a}^{b} e^{-xt} dt \, dx \\
&= - \int_{a}^{b} \int_{0}^{\infty} e^{-xt} dx \, dt \\
&= - \int_{a}^{b} \frac{dt}{t}
= - \left[ \log t \right]_{a}^{b} = \log\left(\frac{a}{b}\right).
\end{align*}</p>
<p>Interchanging the order of integration is justified either by Fubini's theorem or Tonelli's theorem.</p>
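One can also confirm the value numerically; below is a rough midpoint-rule check on a truncated interval (the values of $a$ and $b$ are arbitrary samples):

```python
import math

a, b = 5.0, 2.0  # arbitrary sample values with a, b > 0

def f(x):
    return (math.exp(-b * x) - math.exp(-a * x)) / x

# Midpoint rule on (0, 50]; the integrand stays bounded near 0 (its limit there is a - b)
n, upper = 100000, 50.0
h = upper / n
approx = h * sum(f((k + 0.5) * h) for k in range(n))

print(round(approx, 4), round(math.log(a / b), 4))  # both about 0.9163
```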
|
1,940,249 | <p>According to Wikipedia, <a href="https://en.wikipedia.org/wiki/First-order_logic#Completeness_and_undecidability" rel="nofollow">first order logic is complete</a>. What is the proof of this?</p>
<p>(Also, in the same paragraph, it says that its undecidable. Couldn't you just enumerate all possible proofs and disproofs to decide it though?)</p>
| user2902293 | 138,157 | <p>I would recommend Lou van den Dries' lecture notes on first-order logic. He does it in three chapters. Here's a link: <a href="http://www.math.uiuc.edu/~henson/Math570/Fall2009/Math570notes.pdf" rel="nofollow">http://www.math.uiuc.edu/~henson/Math570/Fall2009/Math570notes.pdf</a></p>
|
3,853,896 | <p>Do I consider the probability before drawing both cards or after?</p>
<p><strong>Question more clearly</strong>:
A single card is removed at random from a deck of <span class="math-container">$52$</span> cards. From the remainder we draw <span class="math-container">$2$</span> cards at random and find that they are both spades. What is the probability that the first card removed was also a spade?</p>
| Barry Cipra | 86,747 | <p>The trick here is to realize that the situation described is equivalent to removing two spades from the deck and then drawing a card at random from what remains. If you can calculate the probability of a spade occurring as the third card in that simpler setting, you're good to go.</p>
<p>Many probability problems can be similarly simplified by realizing that the specific order in which certain things are said to happen is irrelevant. That's not to say the order in which things happen is <em>never</em> relevant, just that the meanings of the letter strings (aka words) "first," "second," "third," etc., are sometimes interchangeable.</p>
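An exact computation with fractions confirms both the direct conditional probability and the simplified view (counting ordered draws: card 1 is the removed card, cards 2 and 3 are the observed spades):

```python
from fractions import Fraction

# P(card 1 is a spade AND cards 2, 3 are spades), ordered draws from 52 cards
both = Fraction(13 * 12 * 11, 52 * 51 * 50)
# P(cards 2, 3 are spades), splitting on whether card 1 was a spade or not
given = Fraction(13 * 12 * 11 + 39 * 13 * 12, 52 * 51 * 50)

answer = both / given
print(answer)  # 11/50 -- the same as drawing one card at random from the
               # 50 cards left after two spades have been removed
```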
|
3,831,310 | <p>I try to integrate</p>
<p><span class="math-container">$$\int \frac {dv}{\frac {-c}{m}v^2 - g \sin \theta}$$</span></p>
<p>I substituted <span class="math-container">$u = \frac{c}{m}$</span> and <span class="math-container">$w = g \sin \theta$</span> to get</p>
<p><span class="math-container">$$-\int \frac {dv}{uv^2 + w}$$</span></p>
<p>I'm wondering if I have to do a second substitution. To be honest, I don't know if I can do that or how to do that. Furthermore, maybe I have to rearrange to get <span class="math-container">$\frac1{1+x^2}$</span></p>
| 19aksh | 668,124 | <p><span class="math-container">$$I= \int\dfrac{dv}{-\frac{c}{m}v^2-g\sin\theta} = -\int\dfrac{dv}{\frac{c}{m}v^2+g\sin\theta} = -\int\dfrac{dv}{\left(\sqrt\frac{c}{m}\ v\right)^2+(\sqrt{g\sin\theta})^2} $$</span></p>
<p>(Assuming <span class="math-container">$\sin\theta>0$</span>)</p>
<p><span class="math-container">$$I = -\sqrt{\dfrac{m}{c}}\int\dfrac{\sqrt{\dfrac{c}{m}}dv}{\left(\sqrt\frac{c}{m}\ v\right)^2+(\sqrt{g\sin\theta})^2} = -\sqrt{\dfrac{m}{cg\sin\theta}}\arctan\left(v\sqrt{\dfrac{c}{mg\sin\theta}}\right)+k$$</span></p>
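As a sanity check, differentiating the antiderivative numerically should reproduce the integrand (the parameter values below are arbitrary samples with $\sin\theta>0$):

```python
import math

c, m, g, theta = 2.0, 3.0, 9.8, 0.5  # arbitrary sample parameters

def F(v):
    # The antiderivative above, with the constant k = 0
    s = g * math.sin(theta)
    return -math.sqrt(m / (c * s)) * math.atan(v * math.sqrt(c / (m * s)))

def integrand(v):
    return 1.0 / (-(c / m) * v ** 2 - g * math.sin(theta))

v, h = 1.3, 1e-6
numeric = (F(v + h) - F(v - h)) / (2 * h)  # central-difference estimate of F'(v)
print(abs(numeric - integrand(v)) < 1e-6)  # True
```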
|