qid: int64 (1 – 4.65M)
question: large_string (length 27 – 36.3k)
author: large_string (length 3 – 36)
author_id: int64 (−1 – 1.16M)
answer: large_string (length 18 – 63k)
3,360,694
<p><span class="math-container">$U(n)$</span> is the collection of positive integers which are coprime to n forms a group under multiplication modulo n.</p> <p>What is the order of the element 250 in <span class="math-container">$U(641)$</span>?</p> <p>My attempt: Here 641 is a prime number. So <span class="math-container">$U(641)$</span> is a cyclic group. So this group is Isomorphic to <span class="math-container">$Z/640 Z$</span> under addition modulo 640.</p> <p>I need to find the smallest positive integer n such that <span class="math-container">$250^n$</span> congruent to 1 (mod 641). Any easy way to find this n?</p> <p>Also I found that inverse of 250 in U(641) is 100.</p> <p>Kindly provide some hints to find the required n.</p> <p>Thanks in advance. </p>
Dietrich Burde
83,966
<p>By Lagrange's theorem, the order of an element divides the order of the group. Since <span class="math-container">$640=2^7\cdot 5$</span>, I tried powers of <span class="math-container">$2$</span>. Then we see immediately that <span class="math-container">$250^{16}=1$</span> in <span class="math-container">$U(641)$</span>. Since <span class="math-container">$250^8\neq 1$</span>, we are done.</p>
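The divisor search suggested by Lagrange's theorem is easy to automate. A minimal Python sketch (the function name `element_order` is ours, not from the answer): it tries the divisors of <span class="math-container">$640$</span> in increasing order, which are the only candidates for the order of the element.

```python
# Find the order of 250 in U(641) by testing divisors of |U(641)| = 640.
# By Lagrange's theorem, the order of an element divides the group order.

def element_order(a, n):
    """Smallest k > 0 with a^k = 1 (mod n); valid for prime n, where |U(n)| = n - 1."""
    group_order = n - 1
    for d in sorted(d for d in range(1, group_order + 1) if group_order % d == 0):
        if pow(a, d, n) == 1:  # built-in three-argument pow: a^d mod n
            return d

print(element_order(250, 641))  # -> 16
```

Since the divisors are tried in increasing order, the first hit is the order itself; for large moduli one would only test divisors of the group order obtained from its factorization rather than scanning all of them.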
3,360,694
<p><span class="math-container">$U(n)$</span> is the collection of positive integers which are coprime to n forms a group under multiplication modulo n.</p> <p>What is the order of the element 250 in <span class="math-container">$U(641)$</span>?</p> <p>My attempt: Here 641 is a prime number. So <span class="math-container">$U(641)$</span> is a cyclic group. So this group is Isomorphic to <span class="math-container">$Z/640 Z$</span> under addition modulo 640.</p> <p>I need to find the smallest positive integer n such that <span class="math-container">$250^n$</span> congruent to 1 (mod 641). Any easy way to find this n?</p> <p>Also I found that inverse of 250 in U(641) is 100.</p> <p>Kindly provide some hints to find the required n.</p> <p>Thanks in advance. </p>
Mark Bennet
2,906
<p>We have <span class="math-container">$641=625+16$</span> so that <span class="math-container">$5^4\equiv -2^4$</span> modulo <span class="math-container">$641$</span> </p> <p>and also <span class="math-container">$5\times 128\equiv -1$</span> so that <span class="math-container">$5^4\equiv 5\times 2^{11}$</span> and <span class="math-container">$5^3\equiv 2^{11}$</span> and <span class="math-container">$250\equiv 2^{12}$</span></p> <p>Also <span class="math-container">$5^{24}\equiv 2^{24}$</span> (raising the first equation to the sixth power)and <span class="math-container">$5^{24}\equiv 2^{88}$</span> so that <span class="math-container">$2^{64}\equiv 1$</span></p> <p>Now we have <span class="math-container">$250\equiv (2^4)^3$</span> and raising this to the power <span class="math-container">$16$</span> gives <span class="math-container">$250^{16}\equiv(2^{64})^3$</span></p> <p>This gives us that <span class="math-container">$250^{16}\equiv 1$</span> </p> <p>There is an ad hoc nature about this, but it is easier than working with all the powers directly. </p> <p>We now need to check the status of <span class="math-container">$250^8$</span>. Note that we now know that the value is <span class="math-container">$\pm 1$</span> because <span class="math-container">$641$</span> being prime implies that the only square roots of <span class="math-container">$250^{16}\equiv 1$</span> are <span class="math-container">$\pm 1$</span>. We can therefore proceed confident that there is likely to be a way through. The strategy is to explore which powers of <span class="math-container">$2$</span> are congruent to <span class="math-container">$-1$</span>. 
We begin by eliminating the <span class="math-container">$5$</span> from the existing expression for <span class="math-container">$-1$</span>.</p> <p>So using <span class="math-container">$5\times 2^7\equiv -1$</span> we can cube to obtain <span class="math-container">$5^3\times 2^{21}\equiv 2^{11}\times 2^{21}=2^{32}\equiv -1$</span></p> <p>Hence (cubing again) <span class="math-container">$2^{96}\equiv -1$</span> but then <span class="math-container">$250^{8}\equiv (2^{12})^8=2^{96}\equiv -1$</span></p>
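Each congruence used in the argument above can be machine-checked. A quick Python sanity check (ours, not part of the original answer), all modulo <span class="math-container">$641$</span>:

```python
# Verify the chain of congruences mod 641 used in the argument above.
p = 641

assert pow(5, 4, p) == (-pow(2, 4, p)) % p  # 5^4 = -2^4  (since 641 = 625 + 16)
assert (5 * pow(2, 7, p)) % p == p - 1      # 5 * 128 = 640 = -1
assert 250 % p == pow(2, 12, p)             # 250 = 2 * 5^3 = 2^12
assert pow(2, 64, p) == 1                   # hence 2^64 = 1
assert pow(2, 96, p) == p - 1               # and 2^96 = -1
assert pow(250, 8, p) == p - 1              # so 250^8 = -1 ...
assert pow(250, 16, p) == 1                 # ... and the order of 250 is 16
print("all congruences check out")
```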
85,126
<blockquote> <p>Show that any sequence of positive numbers $(a_n)$ satisfying $$0&lt; \frac{a_{n+1}}{a_n} \leq 1+ \frac{1}{n^2}$$ must converge.</p> </blockquote> <p>I have tried taking the limit of the inequality, which yields $0 \leq \lim \frac{a_{n+1}}{a_n} \leq 1$. If $\lim \frac{a_{n+1}}{a_n} \lt 1$, then by the ratio test $\sum a_n$ converges, thus $a_n \to 0$. In particular, I am trying to show that if $\frac{a_{n+1}}{a_n} \to 1$, then $a_n$ is Cauchy and thus convergent, with which I am having some trouble.</p>
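One standard route (this normalization idea and the code are illustrative additions, not from the post): since $0 &lt; a_{n+1}/a_n \leq 1 + 1/n^2$, the sequence $b_n = a_n / \prod_{k=1}^{n-1}(1+1/k^2)$ is positive and nonincreasing, hence convergent; the product itself converges because $\sum 1/n^2$ does, so $a_n$ converges as the product of two convergent sequences. The convergence of the infinite product is easy to see numerically:

```python
import math

# If 0 < a_{n+1}/a_n <= 1 + 1/n^2, then b_n = a_n / prod_{k<n}(1 + 1/k^2)
# is positive and nonincreasing, hence convergent, and a_n converges because
# the product does.  The partial products approach sinh(pi)/pi.
partial = 1.0
for n in range(1, 100_000):
    partial *= 1 + 1 / n**2

print(partial)                         # approaches sinh(pi)/pi ~ 3.67608
print(math.sinh(math.pi) / math.pi)
```

The closed form $\prod_{n\geq 1}(1+1/n^2) = \sinh(\pi)/\pi$ comes from Euler's product for $\sin(\pi z)$ evaluated at $z=i$; it is quoted here only as a numerical cross-check, the convergence argument needs only $\sum 1/n^2 &lt; \infty$.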
GEdgar
442
<p>Why not reduce to $y^{b/a}+y=1$, then you have only a one-parameter family to solve. </p> <p><img src="https://i.stack.imgur.com/8J2Dj.jpg" alt="graph"></p> <p>The solution for $y^r+y=1$, expanded in a Taylor series near $r=1$ is<br> $$ \frac{1}{2} + \frac{\operatorname{ln} (2)}{4}(r - 1) - \frac{\operatorname{ln} (2)}{8} (r - 1)^{2} - \frac{\operatorname{ln} (2) \bigl(-6 + 2 \operatorname{ln} (2)^{2} - 3 \operatorname{ln} (2)\bigr)}{96} (r - 1)^{3} + \frac{\operatorname{ln} (2) \bigl(2 \operatorname{ln} (2) + 1\bigr) \bigl(\operatorname{ln} (2) - 2\bigr)}{64} (r - 1)^{4} + \operatorname{O} \bigl((r - 1)^{5}\bigr) $$</p>
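The truncated expansion can be compared against a direct numerical root of $y^r+y=1$; a small sketch (the bisection solver and the sample point $r=1.1$ are ours, not from the answer):

```python
import math

def root(r, iters=100):
    """Solve y**r + y = 1 on (0, 1) by bisection; the left side increases in y."""
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid**r + mid < 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

L = math.log(2)

def series(r):
    """Taylor expansion of the root about r = 1, truncated after (r - 1)^4."""
    d = r - 1
    return (0.5 + (L / 4) * d - (L / 8) * d**2
            - (L * (-6 + 2 * L**2 - 3 * L) / 96) * d**3
            + (L * (2 * L + 1) * (L - 2) / 64) * d**4)

print(abs(root(1.1) - series(1.1)))  # small: the truncated series tracks the root
```

At $r=1$ the exact root is $1/2$, matching the constant term; near $r=1$ the discrepancy is of order $(r-1)^5$, consistent with the stated remainder.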
3,416,600
<p>Show that <span class="math-container">$|{\sqrt{a^2+b^2}-\sqrt{a^2+c^2}}|\le|b-c|$</span> where <span class="math-container">$a,b,c\in\mathbb{R}$</span></p> <p>I'd like to get an hint on how to get started. What I thought to do so far is dividing to cases to get rid of the absolute value. <span class="math-container">$(++, +-, -+, --)$</span> but it looks messy. I'm wondering if there is any nicer way to solve it.</p> <p>Would love to hear some ideas.</p> <p>Thanks in advance!</p>
David Diaz
431,789
<p>For all <span class="math-container">$a,b,c \in \mathbb{R}$</span>, <span class="math-container">\begin{align} 0 &amp;\leq (b-c)^2\\ 2bc &amp;\leq b^2 + c^2\\ 2a^2bc &amp;\leq a^2b^2 + a^2c^2\\ a^4 + 2a^2bc + b^2c^2 &amp;\leq a^4 + a^2b^2 + a^2c^2 + b^2c^2\\ a^2 + bc&amp;\leq \sqrt{a^4 + a^2b^2 + a^2c^2 + b^2c^2} \label{1}\tag{1}\\ 2a^2 + 2bc&amp;\leq 2\sqrt{a^4 + a^2b^2 + a^2c^2 + b^2c^2}\\ 2a^2 -2\sqrt{a^4 + a^2b^2 + a^2c^2 + b^2c^2}&amp;\leq -2bc\\ 2a^2 +b^2 + c^2 -2\sqrt{a^4 + a^2b^2 + a^2c^2 + b^2c^2}&amp;\leq b^2 -2bc + c^2\\ (a^2 +b^2) -2\sqrt{(a^2+b^2)(a^2+c^2)} + (a^2 + c^2)&amp;\leq b^2 -2bc + c^2\\ \end{align}</span></p> <p>and then take the square root of both sides.</p> <p>Note that it was not necessary to use absolute values the first time we took a square root <span class="math-container">$(\ref{1})$</span> because the right hand side is understood to be positive.</p> <p><a href="https://i.stack.imgur.com/anotm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/anotm.png" alt="enter image description here"></a></p>
79,726
<p>Let $R$ be a commutative ring with unity. Let $M$ be a free (unital) $R$-module.</p> <p>Define a <em>basis</em> of $M$ as a generating, linearly independent set.</p> <p>Define the <em>rank</em> of $M$ as the cardinality of a basis of $M$ (as we know commutative rings have IBN, so this is well defined).</p> <p>A <em>minimal generating set</em> is a generating set with cardinality $\inf\{\#S:S\subset M, M=\langle S \rangle\}$.</p> <p>Must a minimal generating set have cardinality the rank of $M$?</p>
Georges Elencwajg
3,217
<p>Yes, a generating set of minimal cardinality must have cardinality $r=rank_R(M)$.<br> It suffices to show that for <em>any</em> generating set of $M$ with $s$ elements, we have $s\geq r$ . </p> <p>Assume that $M=R^r$.<br> Our generating set gives rise to a surjective $R$-module morphism $R^s\to R^r\to 0 \quad (\star)$.<br> Let $\mathfrak m\subset R$ be a maximal ideal and tensor $(\star)$ with the <em>field</em> $k=R/\mathfrak m$ .<br> You get a $k$-linear map $k^s\to k^r \to 0 \quad$ which is still surjective by right-exactness of the tensor product.<br> Since $k$ is a field, this implies $s\geq r$.</p> <p><strong>Edit</strong><br> Since Bruno just commented that he is also interested in the case of infinitely many generators, let me reassure him that the above reasoning remains true, with the obvious change from integers to cardinals and minor cosmetic adaptations in notation.<br> More precisely, we assume that $M=R^{(B)}$ where $B$ is a basis of $M$ of cardinality $card (B)=\beth$.<br> If $A$ is a generating set of $M$ of cardinality $card(A)=\aleph $, we have a surjective morphism $R^{(A )}\to R^{(B )} \to 0 \quad (\star)$ yielding once more by tensorization (still right-exact!):<br> $k^{(A )}\to k^{(B )} \to 0 $ and the conclusion is again $\aleph \geq \beth$.</p>
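For $R=\mathbb Z$ the tensoring step is concrete: reducing a generating set of $\mathbb Z^r$ modulo a prime $p$ yields a spanning set of $(\mathbb Z/p)^r$, whose rank over the field $\mathbb Z/p$ must be $r$, forcing $s\geq r$. A small Python sketch (the helper name `rank_mod_p` is hypothetical, written only for illustration):

```python
# Illustrate the residue-field argument for R = Z, m = (p): any generating set
# of Z^r reduces to a spanning set of (Z/p)^r, hence has at least r elements.

def rank_mod_p(vectors, p):
    """Rank over the field Z/p of a list of integer vectors, by row reduction."""
    rows = [[x % p for x in v] for v in vectors]
    rank, ncols = 0, len(rows[0])
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        inv = pow(rows[rank][col], -1, p)  # modular inverse (Python 3.8+)
        rows[rank] = [(inv * x) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                f = rows[i][col]
                rows[i] = [(x - f * y) % p for x, y in zip(rows[i], rows[rank])]
        rank += 1
    return rank

# A redundant generating set of Z^2: its reduction mod 5 still spans (Z/5)^2,
# so the rank is 2 -- consistent with s >= r for any generating set.
gens = [[1, 0], [0, 1], [2, 3]]
print(rank_mod_p(gens, 5))  # -> 2
```

Over a field this is just linear algebra; the point of the answer is that tensoring with $R/\mathfrak m$ reduces the general commutative case to exactly this computation.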
152,405
<p>This question complements a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p> <p>I am looking for a list of</p> <h3>Major theorems in mathematics whose proofs are very hard but were not dramatically improved over the years.</h3> <p>(So a new, dramatically simpler proof may represent a much hoped-for breakthrough.) Cases where the original proof was very hard, dramatic improvements were found, but the proof remained very hard may also be included.</p> <p>To limit the scope of the question:</p> <p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment, but for an answer please wait until 2020.)</p> <p>b) Major results only.</p> <p>c) Results with very hard proofs.</p> <p>As usual, one example (or a few related ones) per post.</p> <p>A similar question was asked before: <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to theorems that are 100 years old or so.)</p> <h2>Answers</h2> <p>(Updated Oct 3 '15)</p> <p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincare, 19th century)</p> <p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p> <p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963)</p> <p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p> <p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model. This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p> <p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p> <p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken)</p> <p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber) (<strong>Update:</strong> A new, simpler proof by de Cataldo and Migliorini is now available)</p> <p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a> (1982, Freedman) (NEW)</p> <p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> (1983, Hatcher)</p> <p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p> <p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p> <p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p> <p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a> (1990, Zelmanov) (NEW)</p> <p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p> <p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in <span class="math-container">$\mathbb{R}^3$</span>, a.k.a. the Kepler Conjecture</a> (1999, Hales)</p> <p><strong>For the following answers some issues were raised in the comments.</strong></p> <p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula - general case</a> (reference to a 1983 proof by Hejhal)</p> <p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p> <p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p> <p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: the 2000 proof by Lacey and Thiele.)</p> <p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic-theoretic proofs; modern proofs based on hypergraph regularity; and the polymath1 proof for density Hales-Jewett.)</p> <p><strong>Additional answer:</strong></p> <p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for the 4CT and Feit-Thompson theorems</a>.</p>
Daniel Moskovich
2,051
<p>The <a href="http://en.wikipedia.org/wiki/Four_color_theorem">Four Colour Theorem</a> might perhaps be a canonical example of a very hard proof of a major result which has improved, but is still very hard- no human-comprehensible proof exists, as far as I know, and all known proofs require computer computations.</p>
152,405
Victor Protsak
5,740
<p><a href="https://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Arnold%E2%80%93Moser_theorem">Kolmogorov-Arnold-Moser</a> (or <strong>KAM</strong>) theorem. </p> <p><strong>KAM theory</strong> gives conditions for persistence of invariant tori under small perturbations of a Liouville-integrable Hamiltonian system. It is one of the most important parts of the applied dynamical systems.</p> <p>Although I am far from an expert, I believe that the original proofs have not been substantially simplified. In fact, later related work by M.Herman and others is likewise quite long and hard.</p>
152,405
ntc2
36,970
<p>Look for theorems that have been, or are currently, the subject of major formalization efforts!</p> <p>The two highest-rated answers as I write this [<a href="https://mathoverflow.net/a/152418/36970">1</a>,<a href="https://mathoverflow.net/a/152412/36970">2</a>] -- concerning the Four-Color and Feit-Thompson theorems -- don't mention a <em>major</em> point in the history of those theorems: proofs of both theorems have been <em>completely formalized in the Coq proof assistant</em> in the last ten years: the Four-Color Theorem in 2005 [<a href="https://research.microsoft.com/en-us/people/gonthier/4colproof.pdf" rel="nofollow noreferrer">3</a>] and the Feit-Thompson Theorem in 2012 [<a href="http://happyproving.blogspot.com/2012/10/georges-gonthier-completes-formal-proof.html" rel="nofollow noreferrer">4</a>], with both developments led by Georges Gonthier [<a href="https://en.wikipedia.org/wiki/Georges_Gonthier" rel="nofollow noreferrer">7</a>] of Microsoft Research, Cambridge. I believe both of these theorems were chosen for formalization efforts precisely because the existing proofs were so large and complicated that it was considered impossible for a single individual to understand them completely and convincingly. <strong>UPDATE</strong>: as pointed out in the comments, I am wrong about the difficulty of the Feit-Thompson theorem.
Rather, its original proof runs "only" about 250 pages and [<a href="https://research.microsoft.com/en-us/news/features/gonthierproof-101112.aspx" rel="nofollow noreferrer">12</a>]:</p> <blockquote> <p>“The Feit-Thompson Theorem,” Gonthier says, “is the first steppingstone in a much larger result, the classification of finite simple groups, which is known as the ‘monster theorem’ because it’s one of those theorems where belief in it resides in the belief of a few selected people who have understanding of it.”</p> </blockquote> <p>This is particularly significant for the Four-Color Theorem: while the theorem reducing the problem to finitely many cases was peer reviewed in the original 1976 computer-assisted proof [<a href="http://projecteuclid.org/DPubS?service=UI&amp;version=1.0&amp;verb=Display&amp;handle=euclid.bams/1183538218" rel="nofollow noreferrer">5</a>], the computer code which checked the finitely many cases in the 1976 proof was not peer reviewed [<a href="http://www.ams.org/notices/200811/tx081101382p.pdf" rel="nofollow noreferrer">6</a>] -- indeed the attempt at peer review was abandoned after much effort, because the code was judged too long and complex [<a href="http://www.ams.org/notices/200811/tx081101382p.pdf" rel="nofollow noreferrer">6</a>]. Contrast this with the 2005 proof: going far beyond peer review, the code has been completely formalized, meaning a specification stating what the code should do has been given -- it should check the finitely many cases correctly -- and they have proven that their code meets that specification. This is an amazing achievement!</p> <p>The AMS Notices article about the formalization of the Four Color Theorem [<a href="http://www.ams.org/notices/200811/tx081101382p.pdf" rel="nofollow noreferrer">6</a>] -- taken from a special issue of the Notices devoted to computer-aided formal proof [<a href="http://www.ams.org/notices/200811/" rel="nofollow noreferrer">9</a>] -- provides a fascinating history of the proof and discussion of the formalization, along with an introduction to computer-aided formal proof for the non-specialist.</p> <p>The Coq proof assistant [<a href="http://coq.inria.fr/a-short-introduction-to-coq" rel="nofollow noreferrer">8</a>,<a href="https://en.wikipedia.org/wiki/Coq" rel="nofollow noreferrer">10</a>] is a system for constructing and checking completely formal proofs on the computer. Another of its major success stories is the formalization of an optimizing C compiler [<a href="http://compcert.inria.fr/" rel="nofollow noreferrer">11</a>].</p>
152,405
user160393
62,818
<p><a href="https://link.springer.com/content/pdf/10.1007/BF01202354.pdf" rel="nofollow noreferrer">Hadwiger's conjecture for <span class="math-container">$K_6$</span>-free graphs</a>.</p> <p>This paper shows the equivalence of Hadwiger's conjecture for graphs with no <span class="math-container">$K_6$</span> minor and the four-colour theorem. The reduction to 4CT (that every minimal counterexample to the main result would be an apex graph) is a tour de force that takes well over 80 pages.</p>
27,490
<h2>Motivation</h2> <p>The common functors from topological spaces to other categories have geometric interpretations. For example, the fundamental group is how loops behave in the space, and higher homotopy groups are how higher dimensional spheres behave (up to homotopy in both cases, of course). Even better, for nice enough spaces the (integral) homology groups count <span class="math-container">$n$</span>-dimensional holes.</p> <hr /> <p>A groupoid is a category where all morphisms are invertible. Given a space <span class="math-container">$X$</span>, the fundamental groupoid of <span class="math-container">$X$</span>, <span class="math-container">$\Pi_1(X)$</span>, is the category whose objects are the points of <span class="math-container">$X$</span> and the morphisms are homotopy classes of maps rel end points. It's clear that <span class="math-container">$\Pi_1(X)$</span> is a groupoid and the group object at <span class="math-container">$x \in X$</span> is simply the fundamental group <span class="math-container">$\pi_1(X,x)$</span>. My question is:</p> <blockquote> <p>Is there a geometric interpretation <span class="math-container">$\Pi_1(X)$</span> analogous to the geometric interpretation of homotopy groups and homology groups explained above?</p> </blockquote>
Ronnie Brown
19,949
<p>Thanks Andrew for the nice comments! </p> <p>In relation to the comment on G-spaces by Donu, I should point out that Chapter 11 of "Topology and groupoids" is on "Orbit spaces, orbit groupoids". But I doubt many topologists are aware of the latter concept! </p> <p>My new jointly authored book `Nonabelian algebraic topology: filtered spaces, crossed complexes, cubical homotopy groupoids' is published in August 2011 by the European Math. Soc. and distributed in the Americas by the AMS from October 2011. It is a kind of sequel to the above book, exploring some major uses of groupoids in higher homotopy theory: a kind of new foundation for algebraic topology at the border between homotopy and homology, and applications of some higher order Seifert-van Kampen Theorems. In particular, the 2-dimensional version allows some quite explicit calculations of homotopy 2-types. </p> <p>See my web site for more information. </p>
2,069,507
<p><a href="https://i.stack.imgur.com/B4b88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4b88.png" alt="The image of parallelogram for help"></a></p> <p>Let's say we have a parallelogram $\text{ABCD}$.</p> <p>$\triangle \text{ADC}$ and $\triangle \text{BCD}$ are on the same base and between two parallel lines $\text{AB}$ and $\text{CD}$, so $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$ Now the things that should be noticed are:</p> <p>In $\triangle \text{ADC}$ and $\triangle \text{BCD}$:</p> <p>$$\text{AD}=\text{BC}$$ $$\text{DC}=\text{DC}$$ $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$</p> <p>Now in two different triangles, two sides are equal and their areas are also equal, so the third side is also equal, i.e. $\text{AC}=\text{BD}$, which makes this parallelogram a rectangle.</p> <p>Isn't this a claim that every parallelogram is a rectangle, or that a parallelogram does not exist?</p>
Vidyanshu Mishra
363,566
<p>Your reasoning is appreciable, but the problem is that it is wrong.</p> <p>While tampering with sides and areas, you forgot about the angles. In this case, it is just a matter of sines and cosines. Let's see how:</p> <p>Suppose $\angle ADC=\theta$ and $\angle BCD=180-\theta$. </p> <p>On using the trigonometric formula for the area of a triangle, you will get that: </p> <p>$$ar\triangle ADC=\frac{1}{2}\times AD\times DC\times \sin\theta$$ </p> <p>And, $$ar\triangle BCD=\frac{1}{2}\times BC\times CD \times \sin(180-\theta)=\frac{1}{2}\times BC\times CD \times \sin\theta$$</p> <p>So, $ar\triangle ADC=ar\triangle BCD$</p> <p>Now, using the Cosine Formula:</p> <p>$$AC^2=AD^2+DC^2-2\times AD\times DC\times \cos \theta$$</p> <p>And, $$BD^2=BC^2+CD^2-2\times BC\times CD \times \cos (180-\theta)=BC^2+CD^2+2BC\times CD\times \cos\theta$$</p> <p>This is enough to show that despite having the same area, $\triangle ADC$ and $\triangle BCD$ need not satisfy $AC=BD$; the two are equal only when $\theta=90$, which is obviously the case of a rectangle.</p> <p>EDIT:</p> <p>After coming back to the site I saw that this question has got much attention, and your comment made me realise that I should improve my post.</p> <p>You discussed that you came to the result <strong>In two different triangles, two sides are equal and their Area is also equal. So, the third side is also equal.</strong> by Heron's Formula. Let's see what happens when we approach this problem with Heron's Formula.</p> <p>For the area of a given triangle, we have $\triangle=\sqrt{s(s-a)(s-b)(s-c)}$ where the terms have their usual meanings.</p> <p>$$\triangle^2=s(s-a)(s-b)(s-c)=\frac{1}{16}(a+b+c)(a+b-c)(a+c-b)(c-a+b)$$</p> <p>$$16\triangle^2=((a+b)^2-c^2)(c^2-(a-b)^2)$$</p> <p>$$-16\triangle^2=c^4-2c^2\times (a^2+b^2)+(a^2-b^2)^2$$</p> <p>$$c^4-2c^2\times (a^2+b^2)+[(a^2-b^2)^2+16\triangle^2]=0$$</p> <p>Notice that this is a polynomial of fourth degree in $c$ (a quadratic in $c^2$), so it is not necessary that the given data determine $c$ uniquely. The answer to your question comes exactly from here.</p>
2,069,507
<p><a href="https://i.stack.imgur.com/B4b88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4b88.png" alt="The image of parallelogram for help"></a></p> <p>Let's say we have a parallelogram $\text{ABCD}$.</p> <p>$\triangle \text{ADC}$ and $\triangle \text{BCD}$ are on the same base and between two parallel lines $\text{AB}$ and $\text{CD}$, so $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$ Now the things that should be noticed are:</p> <p>In $\triangle \text{ADC}$ and $\triangle \text{BCD}$:</p> <p>$$\text{AD}=\text{BC}$$ $$\text{DC}=\text{DC}$$ $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$</p> <p>Now in two different triangles, two sides are equal and their areas are also equal, so the third side is also equal, i.e. $\text{AC}=\text{BD}$, which makes this parallelogram a rectangle.</p> <p>Isn't this a claim that every parallelogram is a rectangle, or that a parallelogram does not exist?</p>
hmakholm left over Monica
14,366
<p>If you know two sides and the area of a triangle, there will generally be two <em>different</em> lengths for the third side that give you that area.</p> <p>Consider, for example: If the two known sides are $3$ and $4$, then the third side is somewhere between $1$ and $7$. A third side of length $1$ gives area $0$, but so does a third side of length $7$. In between, as the third side increases from $1$, the area of the triangle will first <em>increase</em> until it reaches a maximum of $6$ (when the third side is $5$), but then the area <em>decreases</em> towards $0$.</p> <p>Thus every area <em>between</em> $0$ and $6$ will be hit by some third side between $1$ and $5$, <em>and</em> by some third side between $5$ and $7$.</p> <p>If you try to use Heron's formula to derive the third side, you will end up with a <em>quadratic equation</em> (in the square of the third side) with two solutions.</p>
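To make the two-solutions claim concrete, here is a quick Python sketch (the sides $3$, $4$ and target area $3$ are illustrative choices) that solves Heron's formula as a quadratic in the square of the third side:

```python
import math

def herons_area(a, b, c):
    """Area of a triangle with side lengths a, b, c (Heron's formula)."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Known sides 3 and 4; target area 3 (below the maximum possible area of 6).
a, b, area = 3.0, 4.0, 3.0

# 16*area^2 = ((a+b)^2 - c^2) * (c^2 - (a-b)^2) is a quadratic in t = c^2:
#   t^2 - (p + q)*t + (p*q + 16*area^2) = 0, with p = (a+b)^2, q = (a-b)^2.
p, q = (a + b) ** 2, (a - b) ** 2
disc = (p + q) ** 2 - 4 * (p * q + 16 * area ** 2)
c1 = math.sqrt(((p + q) + math.sqrt(disc)) / 2)   # the longer third side
c2 = math.sqrt(((p + q) - math.sqrt(disc)) / 2)   # the shorter third side
```

Both `c1` and `c2` give a triangle with sides $3$, $4$ and area $3$, and they are genuinely different lengths.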
2,069,507
<p><a href="https://i.stack.imgur.com/B4b88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4b88.png" alt="The image of parallelogram for help"></a></p> <p>Let's say we have a parallelogram $\text{ABCD}$.</p> <p>$\triangle \text{ADC}$ and $\triangle \text{BCD}$ are on the same base and between two parallel lines $\text{AB}$ and $\text{CD}$, So, $$ar\triangle \text{ADC}=ar\triangle \text{BCD}$$ Now the things those should be noticed are that:</p> <p>In $\triangle \text{ADC}$ and $\triangle \text{BCD}$:</p> <p>$$\text{AD}=\text{BC}$$ $$\text{DC}=\text{DC}$$ $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$</p> <p>Now in two different triangles, two sides are equal and their areas are also equal, so the third side is also equal or $\text{AC}=\text{BD}$. Which make this parallelogram a rectangle.</p> <p>Isn't it a claim that every parallelogram is a rectangle or a parallelogram does not exist?</p>
G Cab
317,234
<p>The answer by <em>Narasimham</em> is fully right.<br> To visualize it better, consider flipping $\triangle BCD$ about the perpendicular bisector of the common segment $DC$. </p> <p><a href="https://i.stack.imgur.com/Tn7ks.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Tn7ks.png" alt="Quadr_Tr_t"></a></p> <p>The resulting sketch shows that the angles at $D$ (for $\triangle ADC$) and $C$ (for $\triangle BCD$) are supplementary. They are equal only when they are right angles, i.e. when the quadrilateral is a rectangle.</p>
1,674,676
<p>Let $p$ be an odd prime number. I want to show that $\mathbb{F}_{p^2}$ has a primitive 8th root of unity $\zeta$. </p> <ul> <li>I know that $\zeta^8 = 1$. So my idea is to define $f = X^8 - 1$ such that $\zeta$ is a root of $f$. But this is for a field extension of degree 8 and $p^2$ is at least 9.</li> </ul> <p>any hints?</p>
Matt B
111,938
<p>Consider the unit group $\mathbb{F}_{p^2}^{\times}$, which has order $p^2-1$. </p> <p>Finding a primitive eighth root of unity is equivalent to finding an element of order $8$ in this group. Since the unit group of a finite field is known to be cyclic, such an element exists if and only if $8$ divides the order of the group, i.e. $p^2-1 \equiv 0 \mod{8}.$</p> <p>Since $p$ is assumed to be odd, this is true: if $p$ is odd then $p \equiv 1,3,5,7 \mod{8}$, and checking each case manually gives $p^2 \equiv 1 \mod{8}$. Hence we have a primitive eighth root of unity.</p>
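The manual case check can be verified mechanically; here is a small Python sketch confirming that $p^2-1$ is divisible by $8$ for every odd $p$ (checked up to $1000$):

```python
# Empirical check of the key step: p^2 - 1 is divisible by 8 for every odd p,
# so the cyclic group F_{p^2}^x (of order p^2 - 1) has an element of order 8.
odd_ps = range(3, 1000, 2)
residues = sorted({p % 8 for p in odd_ps})          # the odd residues mod 8
all_divisible = all((p * p - 1) % 8 == 0 for p in odd_ps)
```

The residues mod $8$ are exactly $1, 3, 5, 7$, and each squares to $1$ mod $8$, which is the case analysis in the answer.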
1,325,563
<p>Can someone show me:</p> <blockquote> <p>If $x$ is a real number, then $\cos^2(x)+\sin^2(x)= 1$.</p> <p>Is it true that $\cos^2(z)+\sin^2(z)=1$, where $z$ is a complex variable?</p> </blockquote> <p>Note: checking [this] in Wolfram Alpha suggests that it's true.</p> <p>Thank you for your help</p>
milka
539,462
<p>By definition (in complex analysis):</p> <p>$\cos z= \dfrac{e^{iz}+e^{-iz}}{2}$</p> <p>$\sin z= \dfrac{e^{iz}-e^{-iz}}{2i}$</p> <p>Write L.H.S. $=\cos^2 z+\sin^2 z=(\cos z+i\sin z)(\cos z-i\sin z)$.</p> <p>Substitute the definitions above and manipulate algebraically: $\cos z+i\sin z=e^{iz}$ and $\cos z-i\sin z=e^{-iz}$. You should get this at the end: $(e^{iz})(e^{-iz})= e^0=1=$ R.H.S.</p>
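As a numerical sanity check (a sketch only; the identity of course holds for all complex $z$), Python's `cmath` module can spot-check a few points:

```python
import cmath

# Spot-check cos^2 z + sin^2 z = 1 at a few arbitrary complex points.
points = [1 + 2j, -3.5 + 0.25j, 4j, 0.1 - 3j]
errors = [abs(cmath.cos(z) ** 2 + cmath.sin(z) ** 2 - 1) for z in points]
```

All errors are at the level of floating-point roundoff.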
668,664
<p>Solve $\dfrac{\partial u}{\partial t}+u\dfrac{\partial u}{\partial x}=x$ subject to the initial condition $u(x,0)=f(x)$.</p> <p>I let $\dfrac{dt}{ds}=1$ , $\dfrac{dx}{ds}=u$ , $\dfrac{du}{ds}=x$ and the initial conditions become: $t=0$ , $x=\xi$ and $u=f(\xi)$ when $s=0$ .</p> <p>I believe this leads to $t=s$ , but I am unsure how to deal with $\dfrac{dx}{ds}=u$ and $\dfrac{du}{ds}=x$ .</p>
Pragabhava
19,532
<p>Your PDE is quasilinear, meaning it might not have a <em>classic</em> solution for all time $t$. That said, we know that the quasilinear equation $$ a\big(x,t,u(x,t)\big) u_x(x,t) + b\big(x,t,u(x,t)\big)u_t(x,t) = c\big(x,t,u(x,t)\big) $$ where $a,\,b,\,c \in C^1$ with data $\mathcal{C}(\xi) = \big(x(\xi), t(\xi), u(\xi)\big) \in C^1$ and with $$ \begin{vmatrix} \frac{dx}{d\xi} &amp; a \\ \frac{dt}{d\xi} &amp; b\end{vmatrix} \neq 0 $$ has a unique solution near $\mathcal{C}$ given by \begin{align} \frac{d x}{d \eta} &amp;= a &amp; x\big|_{\eta = 0}&amp;= x(\xi)\\ \frac{d t}{d \eta} &amp;= b &amp; t\big|_{\eta = 0}&amp;= t(\xi)\\ \frac{d u}{d \eta} &amp;= c &amp; u\big|_{\eta = 0}&amp;= u(\xi)\\ \end{align}</p> <p>In your case, $\mathcal{C}(\xi) = \big(\xi,0,f(\xi)\big)$. Near $\eta \sim 0$ $$ \begin{vmatrix} \frac{dx}{d\xi} &amp; a \\ \frac{dt}{d\xi} &amp; b\end{vmatrix} = \begin{vmatrix} 1 &amp; u \\ 0 &amp; 1\end{vmatrix} = 1 $$ and the solution is unique near the initial condition. Now,</p> <p>\begin{align} \frac{d x}{d \eta} &amp;= u &amp; x\big|_{\eta = 0}&amp;= \xi\\ \frac{d t}{d \eta} &amp;= 1 &amp; t\big|_{\eta = 0}&amp;= 0\\ \frac{d u}{d \eta} &amp;= x &amp; u\big|_{\eta = 0}&amp;= f(\xi)\\ \end{align} has as solution $$ t = \eta, \quad u = f(\xi)\cosh \eta + \xi \sinh \eta, \quad x =f(\xi) \sinh \eta + \xi \cosh \eta, $$ and the characteristics are given by the equation $$ t= \text{arctanh}\left(\frac{x - \xi}{f(\xi)}\right) $$ <strong>(you have to be careful while inverting $\tanh t$)</strong>. </p> <p>Depending on the behavior of $f$, the characteristics may meet. If they do, there will be a shock and the classical solution will cease to exist. We can see this by studying the transformation $$ (x,t) \longrightarrow (\xi,\eta).
$$ The change of variables will be invertible iff $$ \begin{vmatrix} \partial_\xi x &amp; \partial_\eta x \\ \partial_\xi t &amp; \partial_\eta t \end{vmatrix} = f'(\xi) \sinh\eta + \cosh \eta \neq 0, $$ meaning there is no solution when $$ f'(\xi) \sinh\eta + \cosh \eta = 0 $$ or, inverting the transformation, the solution will develop a shock at time $$ t = -\text{arctanh}\left(\frac{1}{f'(\xi)}\right), $$ and the profile of $f(x)$ is crucial in the shock development. This is why, if you want to understand how this works, you have to study Burgers' equation, as suggested by <a href="https://math.stackexchange.com/questions/668664/method-of-characteristics-inhomogeneous-nonlinear-wave-equation#comment1405597_668664">user88595</a>.</p> <p>In the region where no shock has developed (which might be the whole domain), the solution is given by $$ u(x,t) = f\big(\xi(x,t)\big)\cosh t + \xi(x,t) \sinh t $$ where $\xi(x,t)$ is determined by inverting $$ x = f(\xi) \sinh t + \xi \cosh t, $$ which can be written in implicit form as $$ f\big(x \cosh t - u(x,t) \sinh t\big) = u(x,t) \cosh t - x \sinh t. $$</p>
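The parametric solution of the characteristic system can be checked numerically; here is a small Python sketch (the choice $f(\xi)=\xi^2$ and the sample point are illustrative assumptions, not from the answer) verifying $dx/d\eta = u$ and $du/d\eta = x$ by finite differences:

```python
import math

# Characteristic curves from the solution above, with an illustrative
# choice of initial data f(xi) = xi**2.
def f(xi):
    return xi ** 2

def x_char(xi, eta):
    return f(xi) * math.sinh(eta) + xi * math.cosh(eta)

def u_char(xi, eta):
    return f(xi) * math.cosh(eta) + xi * math.sinh(eta)

# Verify the characteristic system dx/deta = u, du/deta = x numerically
# with a central finite difference at an arbitrary (xi, eta).
xi0, eta0, h = 0.7, 0.3, 1e-6
dx = (x_char(xi0, eta0 + h) - x_char(xi0, eta0 - h)) / (2 * h)
du = (u_char(xi0, eta0 + h) - u_char(xi0, eta0 - h)) / (2 * h)
```

The initial conditions $x(\xi,0)=\xi$ and $u(\xi,0)=f(\xi)$ are also immediate from the formulas.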
2,292,656
<p>Let $L/K$ be a degree $n$ extension of fields, where $K$ has discrete valuation $v$, which can be prolonged to the discrete valuations $w_i$ on $L$. We can therefore define the completion of $K$ w.r.t. $v$ to be $\hat K$, and the completion of $L$ w.r.t. $w_i$ to be $\hat L_i$, then in Theorem II.3.1 of Serre's <em>Local Fields</em>, we have a homomorphism $$\varphi:L\otimes_K\hat K\to\prod_i\hat L_i$$which we then show to be an isomorphism. However, I don't see how this morphism is defined in the first place.</p>
sharding4
254,075
<p>The basic idea is to take $L$ to be $K[\alpha]\cong K[x]/(f(x))$ where $f(x)$ is the minimum polynomial of $\alpha$. Then $L\otimes_K\hat K \cong K[x]/(f(x))\otimes_K\hat K \cong \hat K[x]/(f(x))$, and one factors $f(x)=\prod_i f_i(x)$ into irreducibles in $\hat K[x]$. Since $L/K$ is separable in Serre's setting, the $f_i$ are distinct, so the Chinese remainder theorem gives $$\hat K[x]/(f(x))\cong\prod_i\hat K[x]/(f_i(x)),$$ and each factor $\hat K[x]/(f_i(x))$ is identified with one of the completions $\hat L_i$. Concretely, $\varphi$ sends $l\otimes\lambda$ to the tuple $(\lambda\, l)_i$, using the natural embeddings $L\hookrightarrow\hat L_i$ extended $\hat K$-linearly.</p>
384,318
<p>Let $X$ be a topological space and let $A,B\subseteq X$ be closed in $X$ such that $A\cap B$ and $A\cup B$ are connected (in subspace topology) show that $A,B$ are connected (in subspace topology).</p> <p>I would appreciate a hint towards the solution :)</p>
FiveLemon
76,591
<p>Suppose $A$ were disconnected. Then $A$ is the disjoint union of $A'$ and $A''$, non-empty subsets that are closed in $A$ and hence closed in $X$ (since $A$ is closed in $X$). </p> <p>If $A' \cap B$ and $A'' \cap B$ are both non-empty then $A\cap B$ is disconnected -- a contradiction.</p> <p>If $A' \cap B$ is empty (the case where $A'' \cap B$ is empty is symmetric) then $A'$ and $A'' \cup B$ form a partition of $A \cup B$. Since all the sets in question are closed, this means $A \cup B$ is disconnected -- a contradiction.</p>
1,079,995
<p>I can't understand how: $$ \frac {2\times{^nC_2}}{5} $$</p> <p>Equals:</p> <p>$$ 2\times \frac {^nC_2}{5} $$</p> <p>If we forget the combination and replace it with a $10$, the result is clearly different. $1$ in the first example and and $0.5$ in the second.</p>
JEET TRIVEDI
115,676
<p>$$\frac{a}{b}\times\frac{c}{d}=\frac{a\times c}{b \times d}$$ So, taking your example $$\frac{2\times10}{5\times 1}=\dfrac{2}{5}\times\dfrac{10}{1}=\dfrac{2}{1}\times\frac{10}{5}=\dfrac{10}{5}\times 2=\dfrac{2}{5}\times 10$$ As multiplication is commutative.</p>
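The arithmetic in the question can be re-done exactly with Python's `fractions` module, replacing $^nC_2$ by $10$ as the question does; both groupings give the same value (the "$1$ vs $0.5$" discrepancy was a computational slip):

```python
from fractions import Fraction

# Replace nC2 by 10, as in the question, and compute both groupings exactly.
c = Fraction(10)
left = (2 * c) / 5     # (2 * 10) / 5 = 20 / 5
right = 2 * (c / 5)    # 2 * (10 / 5) = 2 * 2
```

Both equal $4$, as multiplication is commutative and associative with division by a nonzero constant.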
1,463,567
<p>We have the following theorem </p> <p>If |G| = 60 and G has more than one Sylow-5 subgroup, then G is simple.</p> <p>Since order of the rigid motion of the dodecahedron group is 60, so all we have to do is to show that it has more than one sylow-5 subgroup, but I don't know how to do this as I don't know the elements of this group as I have troubles visualizing it.</p>
bof
111,012
<p>The "topologist's sine curve" is a <em>nice</em> example. Wouldn't you rather see a nonconstructive horror?</p> <p>Assuming the axiom of choice, there is a <a href="https://en.wikipedia.org/wiki/Bernstein_set" rel="nofollow">"Bernstein decomposition"</a> of the plane, i.e., $\mathbb R^2$ is the union of two disjoint sets $B_1$ and $B_2,$ each of which has nonempty intersection with every uncountable closed subset of $\mathbb R^2.$ These sets $B_i$ are famously not Lebesgue measurable, but what's pertinent to your question is that they are both connected sets, and that neither of them contains any nonconstant path.</p>
4,646,715
<blockquote> <p>Let <span class="math-container">$A\in \operatorname{Mat}_{2\times 2}(\Bbb{R})$</span> with eigenvalues <span class="math-container">$\lambda\in (1,\infty)$</span> and <span class="math-container">$\mu\in (0,1)$</span>. Define <span class="math-container">$$T:S^1\rightarrow S^1;~~x\mapsto \frac{Ax}{\|Ax\|}$$</span> I need to show that <span class="math-container">$T$</span> has 4 fixed points.</p> </blockquote> <p>My idea was the following. Since <span class="math-container">$\mu, \lambda$</span> are eigenvalues there exists, <span class="math-container">$x,y\neq 0$</span> such that <span class="math-container">$Ax=\mu x$</span> and <span class="math-container">$Ay=\lambda y$</span>.Then I claim that <span class="math-container">$x,y$</span> are fixed points:<span class="math-container">$$Tx=\frac{Ax}{\|Ax\|}=\frac{\mu x}{\|\mu x\|}=\operatorname{sign}(\mu)\frac{x}{\|x\|}=\frac{x}{\|x\|}=x$$</span> similarly one can show that <span class="math-container">$Ty=y$</span>. But then I don't see where the other two fixed points should come from?</p>
Parcly Taxel
357,390
<p>The other two fixed points are <span class="math-container">$-x$</span> and <span class="math-container">$-y$</span>, when <span class="math-container">$x$</span> and <span class="math-container">$y$</span> lie on <span class="math-container">$S^1$</span> (they can always be chosen to lie on <span class="math-container">$S^1$</span> since eigenvectors can be scaled), since <span class="math-container">$S^1$</span> is invariant under negation: <span class="math-container">$$-x\mapsto\frac{-Ax}{\|Ax\|}=\frac{-\mu x}{\|\mu x\|}=-x$$</span> Note that the problem statement still holds, and this proof still works (if &quot;<span class="math-container">$4$</span> fixed points&quot; means <em>at least</em> <span class="math-container">$4$</span>), as long as <span class="math-container">$A$</span> has linearly independent eigenvectors and positive eigenvalues.</p>
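A quick numerical sketch of the four fixed points (the matrix below is an arbitrary illustrative example, not from the question, with eigenvalues $1\pm\tfrac{\sqrt2}{2}$, one in each required interval):

```python
import math

# Illustrative matrix with eigenvalues 1 + sqrt(2)/2 in (1, inf)
# and 1 - sqrt(2)/2 in (0, 1).
A = [[1.5, 0.5], [0.5, 0.5]]

def T(v):
    """The map x -> Ax / ||Ax|| on the unit circle."""
    w = (A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1])
    n = math.hypot(w[0], w[1])
    return (w[0] / n, w[1] / n)

# Eigenvalues of a 2x2 matrix from trace and determinant.
tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
eigenvalues = [(tr + s * math.sqrt(tr * tr - 4 * det)) / 2 for s in (1, -1)]

fixed_points = []
for lam in eigenvalues:
    # (A - lam*I)v = 0 is solved by v = (A01, lam - A00) when A01 != 0.
    v = (A[0][1], lam - A[0][0])
    n = math.hypot(v[0], v[1])
    v = (v[0] / n, v[1] / n)
    fixed_points += [v, (-v[0], -v[1])]   # eigenvector and its antipode
```

All four points (two unit eigenvectors and their antipodes) satisfy $T(v)=v$ up to floating-point error.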
393,293
<p>I need an upper bound for $$\frac{ax}{x-2}$$ I know that $1\leq a&lt; 2$ and $x\geq 0$.</p> <p>This upper bound can include just $a$ and constant numbers not $x$.</p> <p>thanks a lot.</p>
Federica Maggioni
49,358
<p>$$\lim_{x\rightarrow 2^+}\frac{ax}{x-2}=+\infty$$</p>
393,293
<p>I need an upper bound for $$\frac{ax}{x-2}$$ I know that $1\leq a&lt; 2$ and $x\geq 0$.</p> <p>This upper bound can include just $a$ and constant numbers not $x$.</p> <p>thanks a lot.</p>
Inceptio
63,477
<p><strong>Hint:</strong></p> <p>For $2&gt;x \ge 0$, you get a negative value, and for $x&gt;2$ you will see that the value gradually decreases. What happens when $x \to 2$? Consider the left-hand and right-hand limits.</p>
3,001,700
<p>I am trying to find an <span class="math-container">$x$</span> and <span class="math-container">$y$</span> that solve the equation <span class="math-container">$15x - 16y = 10$</span>, usually in this type of question I would use Euclidean Algorithm to find an <span class="math-container">$x$</span> and <span class="math-container">$y$</span> but it doesn't seem to work for this one. Computing the GCD just gives me <span class="math-container">$16 = 15 + 1$</span> and then <span class="math-container">$1 = 16 - 15$</span> which doesn't really help me. I can do this question with trial and error but was wondering if there was a method to it.</p> <p>Thank you</p>
Ethan Bolker
72,858
<p>In this case you don't really need the full power of the Euclidean algorithm. Since you know <span class="math-container">$$ 16 - 15 = 1 $$</span> you can just multiply by <span class="math-container">$10$</span> to conclude that <span class="math-container">$$ 16 \times 10 + 15 \times(-10) = 10. $$</span> Now you have your <span class="math-container">$y$</span> and <span class="math-container">$x$</span>.</p>
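For cases where the coefficients are not so conveniently adjacent, the same scaling trick works on top of the extended Euclidean algorithm; here is a short Python sketch (the recursive helper is a standard implementation, not from the answer):

```python
def extended_gcd(a, b):
    """Return (g, s, t) with g = gcd(a, b) and g = s*a + t*b."""
    if b == 0:
        return a, 1, 0
    g, s, t = extended_gcd(b, a % b)
    return g, t, s - (a // b) * t

# 15*s + 16*t = gcd(15, 16) = 1; scale by 10 and flip the sign on the
# 16-coefficient to match 15x - 16y = 10.
g, s, t = extended_gcd(15, 16)
x, y = 10 * s, -10 * t
```

This reproduces the solution obtained by hand: $x = y = -10$, since $15\cdot(-10)-16\cdot(-10)=10$.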
202,742
<p>Consider a <a href="http://en.wikipedia.org/wiki/Circular_layout" rel="noreferrer">circular drawing</a> of a simple (in particular, loopless) graph $G$ in which edges are drawn as straight lines inside the circle. The <em>crossing graph</em> for such a drawing is the simple graph whose nodes correspond to the edges of $G$ and in which two nodes are adjacent if and only if the corresponding edges cross.</p> <p><strong>Example.</strong> The graph $G$ has four vertices (1–4) and three edges (a–c) where $a = 12$, $b = 13$, $c = 24$. In the circular drawing, $b$ and $c$ cross, so the crossing graph has three nodes and a single edge $bc$.</p> <p><img src="https://i.imgur.com/HvO93fm.jpg?1" alt="Example of a graph drawing and its crossing graph"></p> <p>Here are my questions:</p> <ol> <li><p>Is every simple graph the crossing graph of some circular graph drawing?</p></li> <li><p>If not, how does a counterexample look like?</p></li> <li><p>If yes, how can such a graph drawing be constructed?</p></li> </ol>
Tony Huynh
2,233
<p>The answer to 1 is <strong>no</strong>. To see this, note that every edge-crossing graph is a <a href="http://en.wikipedia.org/wiki/String_graph">string graph</a>. A <em>string graph</em> is a graph which is the intersection graph of arbitrary curves in the plane. However, there are graphs which are not even string graphs. </p> <p>One example of a graph which is not a string graph is $K_5$ with each edge subdivided once. To see this, let $G$ be $K_5$ with each edge subdivided once and suppose that $\mathcal{S}$ is a string representation of $G$. Let $S_1, \dots, S_5$ be the strings in $\mathcal{S}$ that correspond to the degree-4 vertices of $G$. Since the degree-4 vertices are an independent set in $G$, these strings are pairwise disjoint. Since the degree-2 vertices are also an independent set in $G$, by shrinking each of $S_1, \dots, S_5$ down to a point in $\mathcal{S}$, we get a planar drawing of $K_5$, which is a contradiction. </p>
202,742
<p>Consider a <a href="http://en.wikipedia.org/wiki/Circular_layout" rel="noreferrer">circular drawing</a> of a simple (in particular, loopless) graph $G$ in which edges are drawn as straight lines inside the circle. The <em>crossing graph</em> for such a drawing is the simple graph whose nodes correspond to the edges of $G$ and in which two nodes are adjacent if and only if the corresponding edges cross.</p> <p><strong>Example.</strong> The graph $G$ has four vertices (1–4) and three edges (a–c) where $a = 12$, $b = 13$, $c = 24$. In the circular drawing, $b$ and $c$ cross, so the crossing graph has three nodes and a single edge $bc$.</p> <p><img src="https://i.imgur.com/HvO93fm.jpg?1" alt="Example of a graph drawing and its crossing graph"></p> <p>Here are my questions:</p> <ol> <li><p>Is every simple graph the crossing graph of some circular graph drawing?</p></li> <li><p>If not, how does a counterexample look like?</p></li> <li><p>If yes, how can such a graph drawing be constructed?</p></li> </ol>
Brendan McKay
9,025
<p>Such graphs are called "circle graphs" and if you search on that phrase you will find some literature. For example, some is around page 56 in <a href="http://books.google.com.au/books?hl=en&amp;lr=&amp;id=bAGW1L84hRQC&amp;oi=fnd&amp;pg=PR9&amp;dq=crossing%20chord%20graph&amp;ots=R_jz3vps1w&amp;sig=4AltkQYV2XfPqElucxnDO2o1c9E#v=onepage&amp;q=%22circle%20graph%22&amp;f=false" rel="nofollow">this book</a>.</p> <p>Figure 2 in <a href="http://www.csie.ntu.edu.tw/~b91026/2008_02_24_OutOfDate/p435-gabor.pdf" rel="nofollow">this paper</a> shows a 7-vertex graph that is not a circle graph, but according to <a href="http://oeis.org/A156809" rel="nofollow">OEIS:A156809</a> there are two with 6 vertices.</p>
2,296,256
<p>I need help how to mathematically interpret an ODE (Newton's second law). I used to the ODE in this form: $$ m\ddot x(t)=F(t)\tag{1} $$</p> <p>However, in another book they wrote: $$ m\ddot x=F(x,\dot x) \tag{2} $$ where $F: \mathbb{R}^n \times \mathbb{R}^n\rightarrow \mathbb{R}^n$.</p> <p><strong>Questions:</strong></p> <ol> <li><p>I guess $F(x,\dot x)$ is an abbreviation for $F(x(t),\dot x(t))$, is it correct?</p></li> <li><p>What it the difference between writing $F(t)$ or $F(x,\dot x)$?</p></li> <li><p>What is the meaning of the notation $F: \mathbb{R}^n \times \mathbb{R}^n\rightarrow \mathbb{R}^n$?</p></li> </ol> <p>Thanks!</p>
edm
356,114
<p>For $x\lt z$, consider any $y$ in-between, i.e. $x\lt y\lt z$, so that $f(x)\lt g(y)\lt f(z)$, i.e. $f$ is strictly increasing. Similarly, $g$ is strictly increasing.</p> <p>Consider a point $x_0$ at which $f$ is continuous. When a sequence $(x_n)_{n\in\Bbb N}$ of real numbers is decreasing to $x_0$, the sequence $(f(x_n))_{n\in\Bbb N}$ is decreasing to $f(x_0)$ by continuity and monotonicity. Since $x_0\lt x_n$, we also have $g(x_0)\lt f(x_n)$ for each $n\in\Bbb N$. This gives that $g(x_0)\le f(x_0)$. Similarly, consider a sequence $(y_n)_{n\in\Bbb N}$ increasing to $x_0$ and we see that $f(x_0)\le g(x_0)$. Hence, $f(x_0)=g(x_0)$. The same property holds at points of continuity of $g$.</p> <p>By imposing that at least one of $f$ and $g$ is continuous, we have $f(x)=g(x)$ for each $x\in\Bbb R$.</p> <p>If continuity is not imposed, there is a counter-example. Consider $$f(x)=\begin{cases} x+1 &amp;\text{if $x\ge 0$}\\ x-1 &amp;\text{if $x\lt 0$}\\ \end{cases}$$ and $$g(x)=\begin{cases} x+1 &amp;\text{if $x\gt 0$}\\ x-1 &amp;\text{if $x\le 0$}\\ \end{cases}.$$</p>
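The counter-example at the end is easy to probe numerically; this sketch checks on a grid that $f(x) < g(y)$ whenever $x < y$, while $f$ and $g$ still differ at $0$:

```python
# The counter-example functions from the answer.
def f(x):
    return x + 1 if x >= 0 else x - 1

def g(x):
    return x + 1 if x > 0 else x - 1

# For every x < y drawn from a grid, f(x) < g(y) holds ...
grid = [i / 10 for i in range(-30, 31)]
pairs_ok = all(f(x) < g(y) for x in grid for y in grid if x < y)
```

... yet $f(0)=1$ and $g(0)=-1$, so the two functions are not equal without a continuity assumption.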
2,150,552
<p>I'm following a YouTube linear algebra course. (<a href="https://www.youtube.com/watch?v=PFDu9oVAE-g&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&amp;index=14" rel="nofollow noreferrer">https://www.youtube.com/watch?v=PFDu9oVAE-g&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab&amp;index=14</a>)<br> In part 9 there's the following question: <a href="https://i.stack.imgur.com/S3UdB.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/S3UdB.jpg" alt="enter image description here"></a></p> <p>I don't know what the formula is. What I figured out is that there is a link with the fibonacci sequence.</p> <p>I also tried to convert A to eigenbasis. I get this: </p> <p>\begin{bmatrix}\frac{{-\sqrt5 + 1}}{2}&amp;0\\0&amp;\frac{{\sqrt5 + 1}}{2}\\\end{bmatrix}</p> <p>How do I go back to normal basis and what is the formula?</p> <p>This is what I have now: <a href="https://i.stack.imgur.com/wVxPY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wVxPY.jpg" alt="enter image description here"></a></p>
egreg
62,967
<p>Let $R$ be a total order relation on the set $X$ (for instance the usual ordering $\leq$ on $X=\mathbb{N}$), so antisymmetric. Let $$ S=R^{\mathrm{op}}=\{(x,y):(y,x)\in R\} $$ which is obviously antisymmetric as well.</p> <p>Then $R\cup S=X\times X$ which is only antisymmetric if $X$ has at most one element.</p> <p>More generally, if $R$ is antisymmetric and there exist $x,y$ such that $(x,y)\in R$ but $x\ne y$, then $R^{\mathrm{op}}$ is also antisymmetric, but $R\cup R^{\mathrm{op}}$ isn't.</p>
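A concrete finite instance of this construction can be checked in a few lines of Python (using $\leq$ on a three-element set instead of $\mathbb{N}$; the helper function is just a direct encoding of the definition):

```python
# R is the usual order <= on a three-element set; S is its opposite.
X = {0, 1, 2}
R = {(x, y) for x in X for y in X if x <= y}
S = {(y, x) for (x, y) in R}

def is_antisymmetric(rel):
    """(x,y) and (y,x) both in rel forces x == y."""
    return all(x == y for (x, y) in rel if (y, x) in rel)
```

Both `R` and `S` are antisymmetric, but their union is all of $X\times X$ and fails antisymmetry.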
3,189,303
<p>What is the point of constant symbols in a language?</p> <p>For example we take the language of rings <span class="math-container">$(0,1,+,-,\cdot)$</span>. What is so special about <span class="math-container">$0,1$</span> now? What is the difference between 0 and 1 besides some other element of the ring?</p> <p>I am aware, that you want to have some elements, that you call 0 and 1 which have the desired properties, like <span class="math-container">$x+0=0+x=x$</span> or <span class="math-container">$1\cdot x = x\cdot 1=x$</span>.</p> <p>Is there something else, which makes constants 'special'?</p> <p>Other example: Suppose we have the language <span class="math-container">$L=\{c\}$</span> where <span class="math-container">$c$</span> is a constant symbol. Now we observe the L-structure <span class="math-container">$\mathfrak{S}_n$</span> over the set <span class="math-container">$\mathbb{Z}$</span>, where <span class="math-container">$c$</span> gets interpreted by <span class="math-container">$n$</span>.</p> <p>Is there any difference, between <span class="math-container">$c$</span> and <span class="math-container">$n$</span>? Or are they just the same and you can view it as some sort of substitution?</p> <p>For <span class="math-container">$\mathfrak{S}_0$</span> we would understand <span class="math-container">$c$</span> as <span class="math-container">$0$</span>. Since there are no relation- or functionsymbols, we just have the set <span class="math-container">$\mathbb{Z}$</span> and could note them as</p> <p><span class="math-container">$\{\dotso, -1, c, 1, \dotso\}$</span></p> <p>If we take the usual function <span class="math-container">$+$</span> and add it <span class="math-container">$L=\{c,+\}$</span> now <span class="math-container">$\mathfrak{S}_0$</span> has the property, that <span class="math-container">$c+c=c$</span> for example.</p> <p>I hope you understand what I am asking for. 
</p> <p>I think it boils down to:</p> <blockquote> <p>Is there a difference between the structure <span class="math-container">$\mathfrak{S}_n$</span> as L-structure and <span class="math-container">$\mathfrak{S}_n$</span> as <span class="math-container">$L_\emptyset$</span>-structure, where <span class="math-container">$L_\emptyset=\emptyset$</span> (so does not contain a constant symbol).</p> </blockquote> <p>But I want to get as much insight here as possible. So if you do not understand what I am asking for, it might be best, if you just take a guess. :)</p> <p>Thanks in advance.</p>
Clive Newstead
19,542
<p>An <span class="math-container">$L$</span>-structure is not just a set, it is a set <em>together with</em> interpretations of the constant symbols, function symbols and relation symbols in <span class="math-container">$L$</span>. You need to keep track of the interpretations as additional data so that you can do things like define <em>homomorphisms</em> of <span class="math-container">$L$</span>-structures: namely, they're those functions that respect the interpretations of the symbols.</p> <p>For example, "<span class="math-container">$\mathbb{Z}$</span> as a group" and "<span class="math-container">$\mathbb{Z}$</span> as a set" have the same underlying set, but the former additionally has (at least) a binary operation <span class="math-container">$+ : \mathbb{Z} \times \mathbb{Z} \to \mathbb{Z}$</span>, which must be preserved by group homomorphisms.</p> <p>In your example, a homomorphism of <span class="math-container">$L$</span>-structures <span class="math-container">$f : \mathfrak{S}_n \to \mathfrak{S}_m$</span> would be required to satisfy <span class="math-container">$f(n) = m$</span>, since <span class="math-container">$n$</span> and <span class="math-container">$m$</span> are the respective interpretations of the constant <span class="math-container">$c$</span>, but a homomorphism of <span class="math-container">$L_{\varnothing}$</span>-structures would not.</p> <p>So while "<span class="math-container">$\mathfrak{S}_n$</span> as an <span class="math-container">$L$</span>-structure" and "<span class="math-container">$\mathfrak{S}_n$</span> as an <span class="math-container">$L_{\varnothing}$</span>-structure" have the same underlying set, they are not the same object.</p> <p>Fun fact: the assignment from "<span class="math-container">$\mathfrak{S}_n$</span> as an <span class="math-container">$L$</span>-structure" to "<span class="math-container">$\mathfrak{S}_n$</span> as an <span class="math-container">$L_{\varnothing}$</span>-structure" is an example 
of a <a href="https://en.wikipedia.org/wiki/Forgetful_functor" rel="noreferrer">forgetful functor</a>.</p>
3,219,635
<p>Suppose <span class="math-container">$f$</span> is continuous on <span class="math-container">$\Bbb R$</span>, define <span class="math-container">$F(x)=\int_a^bf(x+t)\cos t\,dt,x\in [a,b]$</span>.</p> <p>How to show <span class="math-container">$F(x)$</span> is differentiable on <span class="math-container">$[a,b]$</span>?</p>
fGDu94
658,818
<p>With <span class="math-container">$u=x+t$</span>, we have</p> <p><span class="math-container">$F(x) = \int_{a+x}^{b+x}f(u)\cos(u-x)du$</span>.</p> <p>Now use the Leibniz integral rule:</p> <p><span class="math-container">$F'(x) = f(b+x)\cos(b)-f(a+x)\cos(a)+\int_{a+x}^{b+x}f(u)\sin(u-x)du$</span></p>
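As a quick sanity check (a Python sketch, not part of the original answer), one can compare the formula above against a numerical derivative of <span class="math-container">$F$</span>; the choices <span class="math-container">$f(u)=u^2$</span>, <span class="math-container">$a=0$</span>, <span class="math-container">$b=1$</span> are just sample data:

```python
import math

def simpson(g, lo, hi, n=2000):
    """Composite Simpson's rule on [lo, hi] with n (even) subintervals."""
    h = (hi - lo) / n
    total = g(lo) + g(hi)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(lo + i * h)
    return total * h / 3

# sample data: any continuous f works; these choices are just for the check
f = lambda u: u * u
a, b = 0.0, 1.0

def F(x):
    return simpson(lambda u: f(u) * math.cos(u - x), a + x, b + x)

x = 0.3
lhs = (F(x + 1e-6) - F(x - 1e-6)) / 2e-6          # central-difference F'(x)
rhs = (f(b + x) * math.cos(b) - f(a + x) * math.cos(a)
       + simpson(lambda u: f(u) * math.sin(u - x), a + x, b + x))
assert abs(lhs - rhs) < 1e-5
```

The two sides agree to the accuracy of the finite difference, which is consistent with the Leibniz-rule computation.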
474,048
<p>I am stuck with the following problem from a book.</p> <p>It asks whether or not $f_n \rightarrow f$ converges uniformly on $A$ if for every $[a,b], f_n\rightarrow f$ uniformly on $A\cap [a,b]$.</p> <p>The statement seems false to me (i.e. not necessarily true) because of this intuition I had:</p> <p>If $A$ is not compact then it does not have a finite subcover. I can then construct open intervals which cover $A$. In turn, these open intervals contain closed intervals on which $f_n \rightarrow f$ uniformly on $ A\cap [x,y]$. Consequently because obtaining the maximum of an infinite number of elements is tricky (i.e. no finite subcover), we cannot conclude uniform convergence. [The maxima I am pertaining to is the maxima of $N$ such that $n&gt;N$].</p> <p>How should I proceed? Suggestions very welcome</p>
user71352
71,352
<p>Take $A=\mathbb{R}$ and consider the sequence</p> <p>$f_{n}(x)=\chi_{(n,n+1)}(x)$.</p> <p>Consider any $[a,b]$. Then for all but finitely many $n$ we have $f_{n}=0$ on $A\cap[a,b]$. So $f_{n}$ converges uniformly on $A\cap[a,b]$ to $0$. Since $[a,b]$ was arbitrary, this holds in general. But $f_{n}$ does not converge uniformly on $\mathbb{R}$.</p>
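The two sup-norm claims can be illustrated numerically; in this Python sketch the grid and the window $[0,5]$ are my own illustrative choices, not part of the argument:

```python
# f_n is the indicator of the open interval (n, n+1)
def f(n, x):
    return 1.0 if n < x < n + 1 else 0.0

grid = [i / 100 for i in range(10001)]            # sample points in [0, 100]

# On the fixed window [0, 5] the functions are eventually identically 0,
# so sup |f_n - 0| -> 0 there (uniform convergence on the window).
window = [x for x in grid if x <= 5]
assert all(max(f(n, x) for x in window) == 0.0 for n in range(5, 20))

# But on all of R the sup never drops below 1, so convergence is not uniform.
assert all(max(f(n, x) for x in grid) == 1.0 for n in range(0, 99))
```

The "bump" simply escapes any fixed compact window while always being present somewhere on the line.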
3,991,572
<p>I need to solve the following problem: <span class="math-container">$\lim_{x\to 3}(x-3) \cot{\pi x}$</span>. Can anyone give me a hint? I have no idea.</p>
Rishab Sharma
864,616
<p>In this type of problem, just do telescoping. Rewrite the second term in your partial fraction in powers of n+2 (that is, just divide and multiply by 16 in the second term), then telescope; the possible answer is 33/2.</p>
208,744
<p>I was asked to show that $\frac{d}{dx}\arccos(\cos{x}), x \in R$ is equal to $\frac{\sin{x}}{|\sin{x}|}$. </p> <p>What I was able to show is the following:</p> <p>$\frac{d}{dx}\arccos(\cos(x)) = \frac{\sin(x)}{\sqrt{1 - \cos^2{x}}}$</p> <p>What justifies equating $\sqrt{1 - \cos^2{x}}$ to $|\sin{x}|$?</p> <p>I am aware of the identity $ \sin{x} = \pm\sqrt{1 - \cos^2{x}}$, but I still do not see how that leads to that conclusion.</p>
preferred_anon
27,150
<p>$\sqrt{1-\cos^{2}(x)}=\sqrt{\sin^{2}(x)}$, which is $|\sin(x)|$ by definition.</p>
10,600
<p>As mentioned in <a href="https://matheducators.stackexchange.com/questions/1538/counterintuitive-consequences-of-standard-definitions">this question</a> students sometimes struggle with the fact that continuity is only defined at points of the function's domain. For example the function $f:\mathbb R\setminus\{0\} \to \mathbb R: x \mapsto \tfrac 1x$ is continuous although it has a "jump" at $x=0$ (<a href="https://matheducators.stackexchange.com/a/1686/5097">cf. this answer with more details</a>). So:</p> <p><em>Why is continuity only defined on the function's domain? What's the benefit?</em> How should a lecturer answer to such a question of a student?</p> <hr> <p><strong>My attempt to answer the question:</strong> I would give two arguments:</p> <ul> <li>When we take the <a href="https://en.wikipedia.org/wiki/Continuous_function#Definition_in_terms_of_limits_of_sequences" rel="nofollow noreferrer">sequence limit definition of continuity</a> $\lim_{n\to\infty} f(x_n) = f\left(\lim_{n\to\infty} x_n\right) = f(x_0)$, then this definition makes only sense when $x_0 = \lim_{n\to\infty} x_n$ is in the domain of $f$.</li> <li>The concept students have in mind is "continuous continuation" and not "continuity". Thus, one have to distinguish between both concepts.</li> </ul> <p>What do think about my answer? Have I missed something or are there other good arguments?</p> <hr> <p><strong>Note:</strong> This is another follow up question of <a href="https://matheducators.stackexchange.com/questions/10597/how-can-i-motivate-the-formal-definition-of-continuity">How can I motivate the formal definition of continuity?</a> I hope that's okay since I ask here for another aspect of continuity. I want to write an introductory article for continuity. That's the reason why I ask all these questions here...</p>
Carl Mummert
5,042
<p>I believe it can be easier, at first, to look at the formal definition of continuity in terms of function application commuting with limits, which says that $\lim_{x \to A} f(x)$ is the same as $f(A)$. The explanation is clearer if you also write $f(A)$ as $f(\lim_{x \to A} x)$. </p> <p>Thus:</p> <ul> <li>$f(A) = f(\lim_{x \to A} x)$ is "where you are" at time $A$</li> <li>$\lim_{x \to A} f(x)$ is "where you should be" at time $A$, based on where you were at nearby moments of time.</li> </ul> <p>If someone else naturally followed the function, then at time $A$ they ought to arrive at $\lim_{x \to A} f(x)$. But this means that if $f(A)$ is not the same as $\lim_{x \to A} f(x)$, then the actual function would require "lifting a pen", in some informal sense. </p> <p>Of course, as with any explanation, this requires the students to apply some amount of intuition. And it makes more sense if we already know that the function is continuous everywhere else, so that it is possible to trace "the rest" of the function. But, if students are already able to visualize limits, this helps them visualize continuity. </p> <p>Also, because the limit form of the definition of continuity is vital for applying continuity to compute limits in the context of calculus, emphasizing the link between continuity and limits can have other benefits, compared to the $\epsilon-\delta$ definition. </p>
3,671,223
<p>First and foremost, I have already gone through the following posts:</p> <p><a href="https://math.stackexchange.com/questions/2463561/prove-that-for-all-positive-integers-x-and-y-sqrt-xy-leq-fracx-y">Prove that, for all positive integers $x$ and $y$, $\sqrt{ xy} \leq \frac{x + y}{2}$</a></p> <p><a href="https://math.stackexchange.com/questions/64881/proving-the-am-gm-inequality-for-2-numbers-sqrtxy-le-fracxy2">Proving the AM-GM inequality for 2 numbers $\sqrt{xy}\le\frac{x+y}2$</a></p> <p>The reason why I open a new question is because I do not understand after reading the two posts.</p> <p>Question: Prove that for any two positive numbers x and y, <span class="math-container">$\sqrt{ xy} \leq \frac{x + y}{2}$</span></p> <p>According to my lecturer, he said that the question should begin with <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span>. Lecturer also said that this is from a "well-known" fact. Now, both posts also mentioned this exact same thing in the helpful answers.</p> <p>My question is this - <strong>how</strong> and <strong>why</strong> do I know that I need to use <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span>? What "well-known" fact is this? Can't I simply just subtract <span class="math-container">$\sqrt{xy}$</span> to both side and conclude at <span class="math-container">$0 \leq {(x-y)}^2$</span>? I do not know how this <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span> come back and why it even appear.</p> <p>Thanks in advance.</p> <p>Edit: <strong>I am not looking for the direct answer to this question.</strong> I am looking for an answer on <strong>why</strong> <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span> is even considered in the first place as the first step to this question. Is this from a mathematical theorem or axiom etc?</p>
Community
-1
<p>This proof is hinted by the presence of the square root, which one will tend to remove by squaring. As all numbers are positive</p> <p><span class="math-container">$$\sqrt{xy}\le\frac{x+y}2$$</span> is rewritten</p> <p><span class="math-container">$$xy\le\frac{x^2+2xy+y^2}4,$$</span></p> <p>which is also </p> <p><span class="math-container">$$0\le\frac{x^2-2xy+y^2}4$$</span> and certainly holds (see why ?).</p> <hr> <p>Once you have understood this principle, you can recast the proof as</p> <p><span class="math-container">$$\sqrt x\sqrt y\le\frac{\sqrt x^2+\sqrt y^2}2$$</span> or <span class="math-container">$$0\le(\sqrt x-\sqrt y)^2.$$</span></p> <hr> <p>An alternative way to eliminate the square root is to set, like in Narasimham's answer, <span class="math-container">$x=u^2,y=v^2$</span> and prove</p> <p><span class="math-container">$$uv\le\frac{u^2+v^2}2.$$</span></p>
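A quick numerical check of the inequality (an illustrative Python sketch; the random sample is my own choice and proves nothing by itself, but it makes the statement concrete):

```python
import random

random.seed(0)
for _ in range(1000):
    x = random.uniform(1e-3, 100.0)
    y = random.uniform(1e-3, 100.0)
    assert (x * y) ** 0.5 <= (x + y) / 2 + 1e-12   # sqrt(xy) <= (x+y)/2
# the inequality is strict when x != y ...
assert abs((4.0 * 9.0) ** 0.5 - (4.0 + 9.0) / 2) > 0
# ... and becomes equality when x == y
assert (4.0 * 4.0) ** 0.5 == (4.0 + 4.0) / 2
```

This matches the proof: the gap between the two sides is exactly $\frac{(\sqrt x-\sqrt y)^2}{2}$, which vanishes precisely when $x=y$.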
3,131,516
<p>I would like to know if this differential equation can be transformed into the hypergeometric differential equation</p> <p><span class="math-container">$ 4 (u-1) u \left((u-1) u \text{$\varphi $1}''(u)+(u-2) \text{$\varphi $1}'(u)\right)+\text{$\varphi $1}(u) \left((u-1) u \omega ^2-u (u+4)+8\right)=0$</span></p>
MPW
113,214
<p>You can <em>always</em> use the quadratic equation to factor a quadratic polynomial <span class="math-container">$Q(x)$</span>.</p> <p>If the solutions to <span class="math-container">$Q(x)=0$</span> are <span class="math-container">$x=r$</span> and <span class="math-container">$x=s$</span>, then the polynomial can be factored as <span class="math-container">$$Q(x)=k(x-r)(x-s)$$</span> where <span class="math-container">$k$</span> is the leading coefficient of <span class="math-container">$Q(x)$</span>.</p> <p>In your case, <span class="math-container">$Q(x)=2x^2+3x-2$</span> and the quadratic formula says the solutions to <span class="math-container">$Q(x)=0$</span> are <span class="math-container">$$x=\frac{-3 \pm\sqrt{3^2-4(2)(-2)}}{2(2)}=\frac{-3 \pm 5}{4}$$</span> So the solutions are <span class="math-container">$x=\frac12$</span> and <span class="math-container">$x=-2$</span>.</p> <p>This means <span class="math-container">$Q(x)$</span> factors as <span class="math-container">$2(x-\frac12)(x+2)$</span>. It is convenient to combine the leading constant with the first factor to get rid of the fraction, giving <span class="math-container">$$\boxed{Q(x)=(2x-1)(x+2)}.$$</span> <hr> <strong>Addendum:</strong></p> <p>It may be worth remembering how the quadratic formula is derived in the first place. Believe it or not, it's through factoring (!). Here's how. 
Suppose we have a quadratic equation (assume <span class="math-container">$a\neq0$</span>, otherwise it's really just a linear equation):</p> <p><span class="math-container">$$ax^2+bx+c=0$$</span></p> <p>We work towards the goal of building the constant discriminant <span class="math-container">$b^2-4ac$</span> on the right side: </p> <p><span class="math-container">$$ax^2+bx = \boxed{-c}\tag{subtract $c$ from both sides}$$</span> <span class="math-container">$$4a^2x^2 + 4abx = \boxed{-4ac}\tag{multiply both sides by $4a$}$$</span> <span class="math-container">$$4a^2x^2 + 4abx +b^2= \boxed{b^2-4ac}\tag{add $b^2$ to both sides}$$</span></p> <p>When you do this, the left side is <em>always</em> a perfect square polynomial (the square of a binomial). You can then factor the left side: <span class="math-container">$$\overbrace{(2ax+b)(2ax+b)}^{4a^2x^2 + 4abx +b^2\textrm{ factored}}= \overbrace{b^2-4ac}^{\textrm{constant discriminant}}$$</span> Assuming that <span class="math-container">$b^2-4ac$</span> isn't negative, we can write <span class="math-container">$b^2-4ac=(\sqrt{b^2-4ac})^2$</span> so <span class="math-container">$$(2ax+b)^2 = (\sqrt{b^2-4ac})^2$$</span> <span class="math-container">$$(2ax+b)^2 - (\sqrt{b^2-4ac})^2 = 0$$</span> which can be factored, being a difference of squares, as <span class="math-container">$$(2ax+b - \sqrt{b^2-4ac})(2ax+b + \sqrt{b^2-4ac})=0$$</span> Dividing both sides by <span class="math-container">$2a$</span> twice, and then multiplying by <span class="math-container">$a$</span>, we have <span class="math-container">$$a\left(x+\frac{b- \sqrt{b^2-4ac}}{2a}\right)\left(x+\frac{b+ \sqrt{b^2-4ac}}{2a}\right)=0$$</span> <span class="math-container">$$a\left(x-\left[\frac{-b+ \sqrt{b^2-4ac}}{2a}\right]\right)\left(x-\left[\frac{-b- \sqrt{b^2-4ac}}{2a}\right]\right)=0$$</span> The left side is indeed of the form <span class="math-container">$k(x-r)(x-s)$</span>, where <span class="math-container">$k$</span> is the leading coefficient of 
<span class="math-container">$Q(x)$</span> (that is, <span class="math-container">$a$</span>) and <span class="math-container">$r,s$</span> are the solutions <span class="math-container">$\tfrac{-b\pm \sqrt{b^2-4ac}}{2a}$</span> from the quadratic formula.</p> <p>If the discriminant is negative, the same formula works but the square root will produce an imaginary number. The quadratic will still factor, but the factors will contain complex numbers. In other words, if <span class="math-container">$b^2-4ac&lt;0$</span>, we can write <span class="math-container">$$\pm\sqrt{b^2-4ac} = \pm i\sqrt{-(b^2-4ac)}.$$</span></p>
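As a check of the recipe $Q(x)=k(x-r)(x-s)$, here is a small Python sketch (the helper name <code>factor_quadratic</code> is mine, not standard) that applies the quadratic formula to $Q(x)=2x^2+3x-2$ and spot-checks the factorization at a few points:

```python
import math

def factor_quadratic(a, b, c):
    """Return (k, r, s) with a*x^2 + b*x + c == k*(x - r)*(x - s).

    Sketch only: assumes a != 0 and a non-negative discriminant (real roots).
    """
    disc = b * b - 4 * a * c
    assert disc >= 0, "complex roots not handled in this sketch"
    r = (-b + math.sqrt(disc)) / (2 * a)
    s = (-b - math.sqrt(disc)) / (2 * a)
    return a, r, s

k, r, s = factor_quadratic(2, 3, -2)
assert {r, s} == {0.5, -2.0}          # the roots found in the answer
# spot-check k*(x-r)*(x-s) against 2x^2 + 3x - 2 at a few points
for x in (-3.0, 0.0, 1.5, 7.0):
    assert abs(k * (x - r) * (x - s) - (2 * x * x + 3 * x - 2)) < 1e-9
```

Note that $k(x-r)(x-s)$ with $k=2$, $r=\tfrac12$, $s=-2$ is exactly $(2x-1)(x+2)$ after absorbing $k$ into the first factor.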
3,720,677
<p>There are two vertical lines, <span class="math-container">$l_1$</span> from <span class="math-container">$(0,0)$</span> to <span class="math-container">$(0,n)$</span> and <span class="math-container">$l_2$</span> from <span class="math-container">$(m,0)$</span> to <span class="math-container">$(m, n)$</span>.</p> <p>Prove that the number of north-east lattice paths that start on the line <span class="math-container">$l_1$</span> and end on the line <span class="math-container">$l_2$</span> are: <span class="math-container">$$\binom{n+m+2}{n}$$</span></p> <p>My initial thought is that smallest possible lattice path would just be a set of horizontal steps and would be of length <span class="math-container">$m$</span> and the biggest path would be have <span class="math-container">$n+m$</span> steps in total. I am not quite sure how to frame a summation that would give the above combination equation.</p>
user
293,846
<p>There are <span class="math-container">$$ \binom{m+j-i}{m} $$</span> ways to start at point <span class="math-container">$(0,i)$</span> and finish at point <span class="math-container">$(m,j)$</span>, so that the total number of ways is <span class="math-container">$$ \sum_{0\le i\le j\le n}\binom{m+j-i}{m}=\sum_{k=0}^n(n+1-k)\binom{m+k}m=\binom{m+n+2}n. $$</span></p> <p>The last equality can be proved by induction. Indeed the equality is obviously valid for <span class="math-container">$n=0$</span> and arbitrary <span class="math-container">$m$</span>. Assume it is valid for some <span class="math-container">$n$</span>. Then it is valid for <span class="math-container">$n+1$</span> as well: <span class="math-container">$$\begin{align} \sum_{k=0}^{n+1}(n+2-k)\binom{m+k}m &amp;=\sum_{k=0}^{n+1}(n+1-k)\binom{m+k}m+\sum_{k=0}^{n+1}\binom{m+k}k\\ &amp;\stackrel{I.H.}=\binom{m+n+2}n+\binom{m+n+2}{n+1}\\ &amp;=\binom{m+n+3}{n+1}, \end{align}$$</span> where we used the <a href="https://en.wikipedia.org/wiki/Hockey-stick_identity" rel="nofollow noreferrer">hockey-stick identity</a>.</p>
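Both equalities in the displayed computation are easy to verify for small $m$ and $n$ with a short Python sketch (the ranges are arbitrary test sizes):

```python
from math import comb

# check  sum_{0<=i<=j<=n} C(m+j-i, m)
#      = sum_{k=0}^{n} (n+1-k) C(m+k, m)
#      = C(m+n+2, n)
for m in range(0, 8):
    for n in range(0, 8):
        double_sum = sum(comb(m + j - i, m)
                         for i in range(n + 1) for j in range(i, n + 1))
        single_sum = sum((n + 1 - k) * comb(m + k, m) for k in range(n + 1))
        assert double_sum == single_sum == comb(m + n + 2, n)
```

The middle step just groups the pairs $(i,j)$ by the difference $k=j-i$, which occurs $n+1-k$ times.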
3,720,677
<p>There are two vertical lines, <span class="math-container">$l_1$</span> from <span class="math-container">$(0,0)$</span> to <span class="math-container">$(0,n)$</span> and <span class="math-container">$l_2$</span> from <span class="math-container">$(m,0)$</span> to <span class="math-container">$(m, n)$</span>.</p> <p>Prove that the number of north-east lattice paths that start on the line <span class="math-container">$l_1$</span> and end on the line <span class="math-container">$l_2$</span> are: <span class="math-container">$$\binom{n+m+2}{n}$$</span></p> <p>My initial thought is that smallest possible lattice path would just be a set of horizontal steps and would be of length <span class="math-container">$m$</span> and the biggest path would be have <span class="math-container">$n+m$</span> steps in total. I am not quite sure how to frame a summation that would give the above combination equation.</p>
Brian M. Scott
12,042
<p>Let <span class="math-container">$P$</span> be the set of lattice paths from <span class="math-container">$\ell_1$</span> to <span class="math-container">$\ell_2$</span> using only steps to the north and steps to the east, and let <span class="math-container">$Q$</span> be the set of such paths from <span class="math-container">$\langle -1,0\rangle$</span> to <span class="math-container">$\langle m+1,n\rangle$</span>. A path in <span class="math-container">$Q$</span> must comprise <span class="math-container">$m+2$</span> steps to the east and <span class="math-container">$n$</span> steps to the north; these <span class="math-container">$n+m+2$</span> steps can occur in any order, and any sequence of such steps is a path in <span class="math-container">$Q$</span>, so <span class="math-container">$|Q|=\binom{n+m+2}n$</span>. I claim that there is a bijection between <span class="math-container">$Q$</span> and <span class="math-container">$P$</span>, so that <span class="math-container">$|P|=\binom{n+m+2}n$</span> as well.</p> <p>Suppose that <span class="math-container">$q\in Q$</span>; <span class="math-container">$q$</span> begins with <span class="math-container">$k$</span> steps to the north for some <span class="math-container">$k$</span> such that <span class="math-container">$0\le k\le n$</span>, but eventually it must take a step to the east. At that point it intersects <span class="math-container">$\ell_1$</span> at <span class="math-container">$\langle 0,k\rangle$</span>. It continues until it hits <span class="math-container">$\ell_2$</span> at some point <span class="math-container">$\langle m,\ell\rangle$</span> such that <span class="math-container">$k\le\ell\le n$</span>. 
It may proceed north for a bit on <span class="math-container">$\ell_2$</span>, but at some point <span class="math-container">$\langle m,j\rangle$</span> it must go east to <span class="math-container">$\langle m+1,j\rangle$</span> and continue north to <span class="math-container">$\langle m+1,n\rangle$</span>. Let <span class="math-container">$p_q$</span> be the part of <span class="math-container">$q$</span> from <span class="math-container">$\langle 0,k\rangle$</span> to <span class="math-container">$\langle m,j\rangle$</span>; clearly <span class="math-container">$p_q\in P$</span>. Conversely, if <span class="math-container">$p\in P$</span> starts at <span class="math-container">$\langle 0,k\rangle$</span> and ends at <span class="math-container">$\langle m,j\rangle$</span>, we can extend it to a <span class="math-container">$q\in Q$</span> by adding <span class="math-container">$k$</span> steps to the north followed by one to the east before <span class="math-container">$p$</span> and one to the east followed by <span class="math-container">$n-j$</span> to the north after <span class="math-container">$p$</span>; then clearly <span class="math-container">$p=p_q$</span>. Thus, the correspondence <span class="math-container">$q\leftrightarrow p_q$</span> is a bijection between <span class="math-container">$Q$</span> and <span class="math-container">$P$</span>, and <span class="math-container">$|P|=\binom{n+m+2}n$</span>.</p>
2,397,874
<p>I am new to modulus and inequalities , I came across this problem:</p> <p>$ 2^{\vert x + 1 \vert} - 2^x = \vert 2^x - 1\vert + 1 $ for $ x $</p> <p>How to find $ x $ ?</p>
Raffaele
83,382
<p>$\left(5-2 \sqrt{6}\right) \left(5+2 \sqrt{6}\right)=1$</p> <p>So $5-2 \sqrt{6}=\dfrac{1}{5+2 \sqrt{6}}$</p> <p>substitute $\left(2 \sqrt{6}+5\right)^{x^2-3}=t$ so that $\left(5-2 \sqrt{6}\right)^{x^2-3}=\dfrac{1}{t}$</p> <p>and solve $t+\dfrac{1}{t}=10$ which gives $t_1= 5-2 \sqrt{6},\;t_2=5+2 \sqrt{6}$</p> <p>Substitute back</p> <p>$\left(2 \sqrt{6}+5\right)^{x^2-3}=5-2 \sqrt{6}$</p> <p>$\left(2 \sqrt{6}+5\right)^{x^2-3}=\left(5+2 \sqrt{6}\right)^{-1}$</p> <p>$x^2-3=-1\to x=\pm\sqrt 2$</p> <p>and</p> <p>$\left(2 \sqrt{6}+5\right)^{x^2-3}=5+2 \sqrt{6}$</p> <p>$x^2-3=1\to x=\pm 2$</p>
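The four solutions found above can be verified numerically; this Python sketch checks them against the equation this answer addresses, $\left(5+2\sqrt6\right)^{x^2-3}+\left(5-2\sqrt6\right)^{x^2-3}=10$:

```python
from math import sqrt

def lhs(x):
    # left-hand side of (5+2*sqrt(6))^(x^2-3) + (5-2*sqrt(6))^(x^2-3) = 10
    t = x * x - 3
    return (5 + 2 * sqrt(6)) ** t + (5 - 2 * sqrt(6)) ** t

for x in (sqrt(2), -sqrt(2), 2.0, -2.0):
    assert abs(lhs(x) - 10) < 1e-9
```

For $x=\pm\sqrt2$ the exponent is $-1$ and the two terms are each other's reciprocals summing to $10$; for $x=\pm2$ the exponent is $1$ and the same sum appears directly.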
3,163,342
<p>Find all the ring homomorphisms <span class="math-container">$f$</span> : <span class="math-container">$\mathbb{Z}_6\to\mathbb{Z}_3$</span>.</p> <p>definition of ring homomorphism:</p> <p>The function f: R → S is a ring homomorphism if:</p> <p>1) <span class="math-container">$f(1)$</span> = <span class="math-container">$1$</span></p> <p>2) <span class="math-container">$f(a+b)$</span> = <span class="math-container">$f(a)$</span> + <span class="math-container">$f(b)$</span> for all a,b, in R</p> <p>3) <span class="math-container">$f(ab)$</span> = <span class="math-container">$f(a)$</span> <span class="math-container">$f(b)$</span> for all a,b in R</p> <p>Does it make sense to say that in this case </p> <p><span class="math-container">$f(6) = f(1) + f(1) + f(1) + f(1) +f(1) + f(1) = 1 + 1 + 1 + 1 + 1 + 1 = 0$</span> in <span class="math-container">$\mathbb{Z}_3$</span> </p> <p>Could you explain what do we do to find ring homomorphism in all cases. Not only <span class="math-container">$\mathbb{Z}_m \to\mathbb{Z}_n$</span>, where <span class="math-container">$m&lt;n$</span> . </p>
Alessio Del Vigna
639,470
<p>When you multiply both sides by <span class="math-container">$3$</span>, you made a mistake in the RHS.</p>
204,150
<p>If I had a list of let's say 20 elements, how could I split it into two separate lists that contain every other 5 elements of the initial list?</p> <p>For example:</p> <pre><code>list={1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20} function[list] (* {1,2,3,4,5,11,12,13,14,15} {6,7,8,9,10,16,17,18,19,20} *) </code></pre> <p>Follow-up question:</p> <p>Thanks to the numerous answers! Is there a way to revert this process? Say we start from two lists and I would like to end up with the <code>list</code>above:</p> <pre><code>list1={1,2,3,4,5,11,12,13,14,15} list2={6,7,8,9,10,16,17,18,19,20} function[list1,list2] (* {1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20} *) </code></pre>
kglr
125
<pre><code>ClearAll[f1, f2] f1[lst_, k_] := Join @@ Partition[lst[[# ;;]], k, 2 k, 1, {}] &amp; /@ {1, k + 1} {list1, list2} = f1[list, 5] </code></pre> <blockquote> <p>{{1, 2, 3, 4, 5, 11, 12, 13, 14, 15},<br> {6, 7, 8, 9, 10, 16, 17, 18, 19, 20}}</p> </blockquote> <p>An alternative way using the (still undocumented) 6th argument of <code>Partition</code>:</p> <pre><code>ClearAll[f1, f2] f2[lst_, k_] := Partition[Drop[lst, #], k, 2 k, 1, {}, Sequence] &amp; /@ {0, k} {list1, list2} = f2[list, 5] </code></pre> <blockquote> <p>{{1, 2, 3, 4, 5, 11, 12, 13, 14, 15},<br> {6, 7, 8, 9, 10, 16, 17, 18, 19, 20}}</p> </blockquote> <p><strong>Update:</strong> To revert the process: </p> <pre><code>ClearAll[fb] fb[lst1_, lst2_, k_] := Join @@ Riffle @@ (Partition[#, k, k, 1, {}] &amp; /@ {lst1, lst2}) fb[list1, list2, 5] </code></pre> <blockquote> <p>{1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20}</p> </blockquote>
1,761,668
<p>Wikipedia says about logical consequence:</p> <blockquote> <p>A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.</p> </blockquote> <p>But if φ and ψ are both true under some interpretations, then aren't they on equal footing? Why is one the logical consequence of the other? </p> <p>In extension, if we have a set of expressions $S = \{X_{1}, X_{2}, X_{3}, X_{4}\}$ and this set is satisfied by an interpretation $I$, so that every expression $X$ in $S$ is satisfied by $I$, then couldn't we just choose the subset $S' = \{X_2, X_{3}\}$ and claim that $X_{1}$ and $X_{4}$ are logical consequences of $S'$? It seems to my (naive) eyes that all expressions in X stand in mutual entailment, which somehow seems wrong. </p>
Olivier Oloa
118,798
<p>We have the following inverse image $$ M^c=\|\cdot\|^{-1}\left([1,\infty) \right) $$ the subset $[1,+\infty)$ is closed in $\mathbb{R}$ thus the subset $M^c$ is closed in the metric space $\mathbb{R}^{n+1}$ ($\|\cdot\|$ is a continuous function on $\mathbb{R}^{n+1}$), that is $M=\left(M^c \right)^c$ is an open subset of $\mathbb{R}^{n+1}.$</p>
1,761,668
<p>Wikipedia says about logical consequence:</p> <blockquote> <p>A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.</p> </blockquote> <p>But if φ and ψ are both true under some interpretations, then aren't they on equal footing? Why is one the logical consequence of the other? </p> <p>In extension, if we have a set of expressions $S = \{X_{1}, X_{2}, X_{3}, X_{4}\}$ and this set is satisfied by an interpretation $I$, so that every expression $X$ in $S$ is satisfied by $I$, then couldn't we just choose the subset $S' = \{X_2, X_{3}\}$ and claim that $X_{1}$ and $X_{4}$ are logical consequences of $S'$? It seems to my (naive) eyes that all expressions in X stand in mutual entailment, which somehow seems wrong. </p>
Jean Marie
305,862
<p>I propose a proof in the spirit of what you have attempted. Let us define</p> <p>$$g((x_1, x_2, \dots, x_{n+1})) := x_1^2 + \dots + x_{n+1}^2 -1 $$</p> <p>then </p> <p>$$M=g^{-1}((-1,1))$$</p> <p>Thus $M$ is the preimage of the open interval $(-1,1)$ under the continuous map $g$, therefore an open set of $\mathbb{R}^{n+1}$.</p>
130,502
<p>I obtained a numerical solution from the following code with <code>NDSolve</code></p> <pre><code>L = 20; tmax = 27; \[Sigma] = 2; myfun = First[h /. NDSolve[{D[h[x, y, t], t] + Div[h[x, y, t]^3*Grad[Laplacian[h[x, y, t], {x, y}], {x, y}], {x, y}] + Div[h[x, y, t]^3*Grad[h[x, y, t], {x, y}], {x, y}] == 0, h[x, y, 0] == 1 + 1/(2*\[Pi]*\[Sigma]^2)*Exp[-((x - 10)^2/(2*\[Sigma]^2) + (y - 10)^2/(2*\[Sigma]^2))], h[0, y, t] == h[L, y, t], h[x, 0, t] == h[x, L, t]}, h, {x, 0, L}, {y, 0, L}, {t, 0, tmax}, Method -&gt; {"MethodOfLines", "SpatialDiscretization" -&gt; {"TensorProductGrid", "MinPoints" -&gt; 60, "MaxPoints" -&gt; 60, "DifferenceOrder" -&gt; 4}}, StepMonitor :&gt; Print[t]]] </code></pre> <p>(<em>It took about 7 sec to be solved on my old laptop.</em>)</p> <p>Next, I am trying to make an animation and export a .gif file to present its evolution as follows: (<em>taking about 50 sec</em>)</p> <pre><code>mpl = Table[Plot3D[myfun[x, y, t], {x, 0, L}, {y, 0, L}, PlotRange -&gt; All, PlotPoints -&gt; 40, ImageSize -&gt; 400, PlotLabel -&gt; Style["t = " &lt;&gt; ToString[t], Bold, 18]], {t, 0, 27, 1}]; Export["test.gif", mpl, "DisplayDurations" -&gt; 1, "AnimationRepetitions" -&gt; Infinity] </code></pre> <p><a href="https://i.stack.imgur.com/GAWwG.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/GAWwG.gif" alt="enter image description here"></a></p> <p><strong>Here are my questions:</strong></p> <p>As you may see, during the evolution <strong>(1)</strong> the box (frame) of the animation is shrinking and expanding, though slightly, and <strong>(2)</strong> the growth in amplitude is shown by rescaling the vertical coordinate. If one neglects the scaling of this coordinate and only observes the middle peak, one may not notice its growth. This is a problem when making a presentation.
</p> <p>I don't know the reason for the first observation; as for the second one, I think MMA tries to highlight the surface variation at <em>every</em> instant by scaling the vertical axis synchronously.</p> <p>Can anyone please help me to suppress the oscillation of the 3D box and fix the vertical axis range to that of the final frame at $t_\text{max}=27$ (i.e. about z=6 here), because I want to show the surface evolution from a small fluctuation to the final big amplitude. Thanks!</p>
ubpdqn
1,997
<p>I only post this as a way (without distortion or dealing with multiple scales)to illustrate the initial Gaussian (flat relative to final range) with <code>MeshFunctions</code> and using <code>ColorFunction</code>. I have voted for Nasser's answer.</p> <pre><code>fun[t_] := Legended[Show[ Plot3D[myfun[x, y, t], {x, 0, 20}, {y, 0, 20}, MeshFunctions -&gt; (#3 &amp;), Mesh -&gt; {{0, 1, 1.1, 1.2, 2, 3, 4, 5}}, MeshStyle -&gt; Thick, ColorFunction -&gt; Function[{x, y, z}, ColorData["Rainbow"][z/6]], ColorFunctionScaling -&gt; False, PlotRange -&gt; {0, 6}, PlotPoints -&gt; 40, PerformanceGoal -&gt; "Quality", PlotLabel -&gt; Style[Row[{"t= ", t}], 20, White, Bold]], Background -&gt; Black], BarLegend[{"Rainbow", {0, 6}}]] </code></pre> <p>The gif was exported from <code>fun/@Range[0,27,0.5]</code> at 8 frames per second (using <code>"DisplayDurations"</code>.</p> <p><a href="https://i.stack.imgur.com/JG3jy.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/JG3jy.gif" alt="enter image description here"></a></p>
2,480,528
<blockquote> <p>Find a formula for $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)$ then prove it. </p> </blockquote> <p>I assumed that $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)=\frac{2n}{2n-1}$ after doing a few cases from above then I tried to prove it with induction would this be a fair approach or any other approaches that would work? </p>
Community
-1
<p>Hint: Let $z=a+ib$, then $|\frac{1+z}{1-i\overline{z}}|=\frac{|1+z|}{|1-i\overline{z}|}=\frac{|(1+a)+ib|}{|(1-b)-ia|}=\frac{\sqrt{(1+a)^2+b^2}}{\sqrt{(1-b)^2+a^2}}=\frac{\sqrt{1+a^2+2a+b^2}}{\sqrt{1+b^2-2b+a^2}}=1$ </p> <p>Now this equality is true if and only if $a=-b$ which gives us that if $z=a+ib$ then $iz=ai-b=a-bi=\overline{z}$</p>
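The modulus computation in the hint can be checked numerically; this Python sketch (sample points are my own) confirms the modulus equals $1$ exactly when $a=-b$:

```python
def ratio_modulus(a, b):
    # |(1 + z) / (1 - i * conj(z))| for z = a + ib
    z = complex(a, b)
    return abs((1 + z) / (1 - 1j * z.conjugate()))

# modulus is 1 exactly when a = -b ...
assert abs(ratio_modulus(3.0, -3.0) - 1.0) < 1e-12
assert abs(ratio_modulus(-0.5, 0.5) - 1.0) < 1e-12
# ... and differs from 1 otherwise
assert abs(ratio_modulus(1.0, 1.0) - 1.0) > 1e-3
```

This matches the hint: the numerator and denominator under the square roots differ only in the terms $+2a$ versus $-2b$, so they coincide precisely when $a=-b$.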
3,452,493
<p>I remember hearing / reading about a scenario during WW2 where the US / Western Powers were thinking about attacking Japan via an overland route from India. The problem involved how to leapfrog the supplies from India over to China and there begin the fight with the Japanese. There’s a logic to it as the distance is far less than from California to Japan. But some mathematicians realized that it would take too long to stock up the supplies and so the island hopping strategy was finalized.</p> <p>The problem, in a nutshell, is Plane A can travel X miles with a full load of supplies. But then it needed to have enough gas to fly back to the home base in order to get more supplies. (Or, there would be some combinations where some planes carry only supplies in the cargo areas while others carry extra gas to allow the planes to return to base to restock up supplies.)</p> <p>So the questions are: </p> <ol> <li>What is the name of this math problem (if such a name exists) where one determines how many trips would it require to bring supplies from place A to place B.</li> <li>Has anyone any information if this calculation ever took place. (This second question may have to be asked at history.stackexchange). I’ve made some perfunctory searches and haven’t found anything - which leads me to question if this story is apocryphal. </li> </ol>
Hugo
562,826
<p>By AM-GM with <span class="math-container">$(n-1)$</span> ones and one <span class="math-container">$a$</span>, <span class="math-container">$$ \sqrt[n]{a} \leq \frac{1}{n} [(n-1)+a] \leq 1 + \frac{a}{n}. $$</span></p> <p>Then, by letting <span class="math-container">$a' = 1/a$</span> you get the other inequality!</p>
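Both bounds are easy to check numerically; in this Python sketch the tested values of $a$ and $n$ are arbitrary sample choices:

```python
# check  a^(1/n) <= 1 + a/n  and, via a' = 1/a, the reciprocal bound
for n in range(1, 30):
    for a in (0.01, 0.5, 1.0, 3.0, 100.0):
        assert a ** (1.0 / n) <= 1 + a / n + 1e-12
        assert (1 / a) ** (1.0 / n) <= 1 + 1 / (a * n) + 1e-12
```

Together the two inequalities pin $\sqrt[n]{a}$ between $\left(1+\frac{1}{an}\right)^{-1}$ and $1+\frac{a}{n}$, which is the point of the reciprocal trick.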
3,382,241
<p>I am trying to find the smallest <span class="math-container">$n \in \mathbb{N}\setminus \{ 0 \}$</span>, such that <span class="math-container">$n = 2 x^2 = 3y^3 = 5 z^5$</span>, for <span class="math-container">$x,y,z \in \mathbb{Z}$</span>. Is there a way to prove this by the Chinese Remainder Theorem?</p>
donguri
670,114
<p>Since all <span class="math-container">$25$</span> balls are indistinguishable, we can put a ball in each box. There are <span class="math-container">$20$</span> remaining balls and <span class="math-container">$5$</span> boxes.</p> <p>From here, you calculate using "stars and bars" <span class="math-container">$$\binom{24}{4}=\frac{24\cdot 23\cdot 22\cdot 21}{4\cdot 3\cdot 2\cdot 1}=10626$$</span> <span class="math-container">$\boxed{10626}$</span> is your final answer.</p>
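The stars-and-bars count can be cross-checked by a direct recursive enumeration (a Python sketch; the helper name <code>count_distributions</code> is my own):

```python
from functools import lru_cache
from math import comb

def count_distributions(balls: int, boxes: int) -> int:
    """Count ways to put `balls` identical balls into `boxes` distinct boxes,
    each box receiving at least one ball, by direct recursion."""
    @lru_cache(maxsize=None)
    def go(remaining: int, k: int) -> int:
        if k == 1:
            return 1 if remaining >= 1 else 0
        # choose how many balls the first box gets, recurse on the rest
        return sum(go(remaining - first, k - 1) for first in range(1, remaining))
    return go(balls, boxes)

assert count_distributions(25, 5) == comb(24, 4) == 10626
```

The recursion and the closed form $\binom{24}{4}$ agree, confirming the answer $10626$.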
2,322,294
<p>I am trying to follow K.P. Hart's course <a href="http://fa.its.tudelft.nl/~hart/37/onderwijs/old-courses/settop" rel="nofollow noreferrer">Set-theoretic methods in general topology</a>. In <a href="http://fa.its.tudelft.nl/~hart/37/onderwijs/old-courses/settop/rudin.pdf" rel="nofollow noreferrer">Chapter 6</a>, Rudin's Dowker space $X$ is defined as follows. Let $P=\prod_{n=1}^\infty(\omega_n+1)$ be the box product of the successors of the first $\omega$-many uncountable ordinals, let $X'=\{x\in P:(\forall n)\,\operatorname{cf} x_n&gt;\omega\}$, and let $X=\{x\in X':(\exists i)(\forall n)\ \operatorname{cf}x_n&lt;\omega_i\}$. Exercise 2.8 asks to show that $X$ is <a href="https://en.wikipedia.org/wiki/Collectionwise_normal_space" rel="nofollow noreferrer">collectionwise normal</a>, using the following hint: prove that if $\mathcal{F}$ is a <a href="https://www.encyclopediaofmath.org/index.php/Discrete_family_of_sets" rel="nofollow noreferrer">discrete family</a> of closed subsets of $X$ then $\mathcal{F}'=\{\operatorname{cl}_{X'}F:F\in\mathcal{F}\}$ is a discrete family of closed subsets of $X'$. </p> <p>It was already proved that disjoint closed sets in $X$ have disjoint closures in $X'$. I can prove that the space $X'$ is collectionwise normal, and using the statement of the hint I can also show that $X$ is collectionwise normal. But somehow I am not able to prove the hint.</p> <p><strong>Question:</strong> Is it true that if $Y\subseteq Y'$ are topological spaces, disjoint closed subsets of $Y$ have disjoint closures in $Y'$, and $\mathcal{F}$ is a discrete family of closed subsets of $Y$, then $\mathcal{F}'=\{\operatorname{cl}_{Y'}F:F\in\mathcal{F}\}$ is a discrete family of closed subsets of $Y'$? We can also assume that $Y$ and $Y'$ are normal. Or is that a special property of the above spaces $X$, $X'$?</p>
Mike V.D.C.
114,534
<p>I don't know how to <strong>remove</strong> an answer or mark it as a non-answer (without deleting it)!</p> <p>That $\mathbb{Z}(X)$ is a localization of $\mathbb{Z}[X]$ is absolutely correct. However, the underlying multiplicative set is not $\{X,X^2,\ldots\}$, but $\mathbb{Z}$. </p> <p>When you invert all non-constant polynomials, the ring you would get is $\mathbb{Z}(X)$.</p> <p>E.g., the element $\frac{1}{1+X}$ is in the ring and so is $\frac{1}{2+X}$, but $\frac{1}{2}$ is not there.</p> <p>From a computational point of view, I don't know! Any pointers?</p> <p>Hope this clarifies things!</p> <p>-- Mike</p>
208,830
<p>I have the plot of two surfaces given by</p> <pre><code> pN = ParametricPlot3D[{0,u,v},{u,-0.25,0.25},{v,-0.5,0.5},\ Mesh-&gt;None,PlotStyle-&gt;Directive[Gray,Opacity[0.4]] ]; pS = ParametricPlot3D[{u,v^2,v},{u,-0.25,0.25},{v,-0.5,0.5},\ Mesh-&gt;None,PlotStyle-&gt;Directive[Green,Opacity[0.4]] ]; </code></pre> <p>Then, I combine the two surfaces on the same plot with </p> <pre><code> Show[pN,pS] </code></pre> <p>and I can export my figure.</p> <p>I would like to create a text legend (or an automatic text label) for these two surfaces (maybe a color designation would be easier to begin with).</p> <p>I tried:</p> <pre><code> pN = Legended[\ ParametricPlot3D[{0,u,v},{u,-0.25,0.25},{v,-0.5,0.5}]\ ,"My Text for Surface N"] </code></pre> <p>but we can't determine what surface this piece of text refers to.</p>
Sjoerd Smit
43,522
<p>The easiest way is, I think, to use the definition of a Bézier curve to define a <code>ParametricRegion</code> and then use <code>RegionDistance</code> to find out how well the curve approximates the points. Here's my suggestion. Define the points:</p> <pre><code>n = 5; p = Table[{Cos[\[CurlyPhi]], Sin[\[CurlyPhi]]}, {\[CurlyPhi], Join[{0}, RandomReal[{0, Pi/2}, n], {Pi/2}]}]; </code></pre> <p>Definition of the curve: </p> <pre><code>bezCurve[{pt : {_, _}}] := pt &amp;; bezCurve[ctrlPts_?MatrixQ] := bezCurve[ctrlPts] = With[{b1 = bezCurve[Most[ctrlPts]], b2 = bezCurve[Rest[ctrlPts]]}, (1 - #)*b1[#] + #*b2[#] &amp;]; bezCurve[ctrlPts_?MatrixQ, t_] := Simplify[bezCurve[ctrlPts][t]]; </code></pre> <p>Define the loss function to be minimized:</p> <pre><code>ClearAll[loss]; loss[pt : {__?NumericQ}] := loss[pt] = With[{ regDist = RegionDistance[ ParametricRegion[ bezCurve[{p[[1]], pt, p[[-1]]}, t], {{t, 0, 1}} ] ] }, Total@regDist[p] ] </code></pre> <p>Use <code>FindMinimum</code> to find the solution (will take a while):</p> <pre><code>FindMinimum[{loss[{x, y}], 0 &lt; x &lt; 3 &amp;&amp; 0 &lt; y &lt; 3}, {{x, 0.937}, {y, 0.883}}, EvaluationMonitor :&gt; Print[{{x, y}, loss[{x, y}]}] ] </code></pre>
208,830
<p>I have the plot of two surfaces given by</p> <pre><code> pN = ParametricPlot3D[{0,u,v},{u,-0.25,0.25},{v,-0.5,0.5},\ Mesh-&gt;None,PlotStyle-&gt;Directive[Gray,Opacity[0.4]] ]; pS = ParametricPlot3D[{u,v^2,v},{u,-0.25,0.25},{v,-0.5,0.5},\ Mesh-&gt;None,PlotStyle-&gt;Directive[Green,Opacity[0.4]] ]; </code></pre> <p>Then, I combine the two surfaces on the same plot with </p> <pre><code> Show[pN,pS] </code></pre> <p>and I can export my figure.</p> <p>I would like to create a text legend (or an automatic text label) for these two surfaces (maybe a color designation would be easier to begin with).</p> <p>I tried:</p> <pre><code> pN = Legended[\ ParametricPlot3D[{0,u,v},{u,-0.25,0.25},{v,-0.5,0.5}]\ ,"My Text for Surface N"] </code></pre> <p>but we can't determine what surface this piece of text refers to.</p>
xzczd
1,871
<p>There are two issues here:</p> <ol> <li><p>The evaluation order should be properly controlled.</p></li> <li><p>The argument of <code>BezierFunction</code> should be between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, so we need to add constraints to <code>NMinimize</code>.</p></li> </ol> <p>The following is the fixed code; I've also adjusted the <code>Method</code> option of <code>NMinimize</code> a bit to obtain a better result:</p> <pre><code>SeedRandom[1]; n = 5; p = Table[{Cos[φ], Sin[φ]}, {φ, Join[{0}, RandomReal[{0, Pi/2}, n], {Pi/2}]}]; ui = Table[u[i], {i, 2, Length[p] - 1}]; Clear@bez bez[k : {{_?NumericQ, _?NumericQ} ..}] := BezierFunction[k] Clear@norm; norm[point_List, point2_] := Norm[point - point2] NMinimize[{Sum[ norm[bez[{p[[1]], {kx, ky}, p[[-1]]}][u[i]], p[[i]]], {i, 2, Length[p] - 1}], 0 &lt;= # &lt;= 1 &amp; /@ ui}, Join[{kx, ky}, ui], Method -&gt; "RandomSearch"] // AbsoluteTiming (* {3.00299, {0.00958796, {kx -&gt; 0.946537, ky -&gt; 0.938663, u[2] -&gt; 0.838486, u[3] -&gt; 0.0969485, u[4] -&gt; 0.811814, u[5] -&gt; 0.16807, u[6] -&gt; 0.220674}}} *) </code></pre>
2,800,015
<p>Prove $p(x)=\frac{6}{(\pi x)^2}$ for $x=1,2,...$where $p$ is a probability function. and $E[X]$ doesn't exists.</p> <p><b> My work </b></p> <p>I know $\sum _{n=1}^\infty\frac{1}{n^2}=\frac{\pi^2}{6}$</p> <p>Moreover,</p> <p>$p(1)=\frac{6}{\pi^2}$<br> $p(2)=\frac{6}{\pi^24}$<br> $p(3)=\frac{6}{\pi^29}$<br> $p(4)=\frac{6}{\pi^216}$<br> .<br> .<br> .<br></p> <p>Then, for prove $p$ is a probability function then i need prove</p> <p>$\lim_{x\rightarrow\infty}\frac{6}{(\pi x)^2}=1 $</p> <p>Then, $\lim_{x\rightarrow\infty}\frac{6}{(\pi x)^2}=\lim_{x\rightarrow\infty}\frac{6}{\pi^2 x^2}=\frac{6}{\pi^2}\lim_{x\rightarrow\infty}\frac{1}{x^2}=\frac{6}{\pi^2}\times\frac{\pi^2}{6}=1$</p> <p>In consequence, $p$ is a probability function.</p> <blockquote> <p>Moreover, i need prove $E[X]$ doesn't exist.</p> </blockquote> <p>Here i'm a little stuck. Can someone help me?</p>
Arnaud Mortier
480,423
<p>To have a probability mass function of a discrete RV what you need is not $\lim_{x\to \infty}p(x)=1$ but rather $\sum_{x\in\Bbb R}p(x)=1$, where the sum over real numbers is well defined because $p(x)$ is non-zero only countably many times.</p> <p>Then for the expectation use a comparison with the harmonic series.</p>
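To make both points concrete, here is a small numerical illustration (my addition, not part of the original answer): the total mass $\sum_x p(x)$ approaches $1$, while the partial sums of $\sum_x x\,p(x) = \frac{6}{\pi^2}\sum_x \frac1x$ grow like the harmonic series:

```python
import math

def p(x):
    # candidate pmf: p(x) = 6 / (pi*x)**2 for x = 1, 2, 3, ...
    return 6.0 / (math.pi * x) ** 2

# total mass: (6/pi^2) * sum of 1/x^2, which tends to 1
mass = sum(p(x) for x in range(1, 200001))

# partial sums of x * p(x) = (6/pi^2) * H_N grow without bound,
# so the expectation does not exist (it is infinite)
partial_expectations = [sum(x * p(x) for x in range(1, N + 1))
                        for N in (10, 100, 1000, 10000)]
increasing = all(a < b for a, b in zip(partial_expectations, partial_expectations[1:]))
```

The truncated mass is already within a few millionths of $1$, while the partial expectations keep climbing with $N$.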
1,842,826
<blockquote> <p>Explain why the columns of a $3 \times 4$ matrix are linearly dependent</p> </blockquote> <p>I also am curious what people are talking about when they say "rank"? We haven't touched anything with the word rank in our linear algebra class.</p> <p>Here is what I've came up with as a solution, will this suffice?</p> <p>I know that the columns of a matrix $A$ are <strong>linearly independent</strong> <strong>iff</strong> the equation $Ax = 0$ has <strong>only</strong> the <strong>trivial solution</strong>. $\therefore$ If the equation $Ax= 0$ does <strong>not</strong> have <strong>only</strong> the <strong>trivial solution</strong> $\implies$ that the columns of the matrix $A$ are <strong>linearly dependent</strong>?</p> <p><strong>UPDATE</strong> I don't understand why a $3x4$ matrix is always linearly dependent.. what about $\begin{bmatrix}1&amp;0&amp;0&amp;0\\0&amp;1&amp;0&amp;0\\0&amp;0&amp;1&amp;0\end{bmatrix}$</p> <p>where $x_1 = 0$ now $x_1= x_2 = x_3...$ then we can see that $x_1v_1 + x_2v_2 + x_3.. = 0 $ and we have the trivial solution?</p>
Doug M
317,162
<p>Why are the columns of</p> <p>$\begin{bmatrix}1&amp;0&amp;0&amp;0\\0&amp;1&amp;0&amp;0\\0&amp;0&amp;1&amp;0\end{bmatrix}$ linearly dependent?</p> <p>Because there exists a non-zero $x$ such that </p> <p>$\begin{bmatrix}1&amp;0&amp;0&amp;0\\0&amp;1&amp;0&amp;0\\0&amp;0&amp;1&amp;0\end{bmatrix} x = 0$</p> <p>i.e. </p> <p>$\begin{bmatrix}1&amp;0&amp;0&amp;0\\0&amp;1&amp;0&amp;0\\0&amp;0&amp;1&amp;0\end{bmatrix}\begin{bmatrix} 0\\0\\0\\1\end{bmatrix} = \begin{bmatrix} 0\\0\\0\end{bmatrix}$</p> <p>How do you prove that any $3\times4$ matrix has linearly dependent columns?</p> <p>Suppose the columns of your matrix are $\mathbf v_1,\mathbf v_2,\mathbf v_3,\mathbf v_4.$ And suppose that $\mathbf v_1,\mathbf v_2,\mathbf v_3$ are linearly independent. (If they are not, those three columns already satisfy a non-trivial dependence, and we are done.) Then we want to show that there exist $a,b,c$ such that $a\mathbf v_1 + b\mathbf v_2 + c\mathbf v_3 = \mathbf v_4$</p> <p>How to do that? It might help to show that there exist $a_1,b_1,c_1$ such that:</p> <p>$a_1\mathbf v_1 + b_1\mathbf v_2 + c_1\mathbf v_3 = \begin{bmatrix} 1\\0\\0\end{bmatrix}$</p> <p>and similarly there are $a_2,b_2, c_2$ and $a_3, b_3, c_3$ such that</p> <p>$a_2\mathbf v_1 + b_2\mathbf v_2 + c_2\mathbf v_3 = \begin{bmatrix} 0\\1\\0\end{bmatrix}$ and</p> <p>$a_3\mathbf v_1 + b_3\mathbf v_2 + c_3\mathbf v_3 = \begin{bmatrix} 0\\0\\1\end{bmatrix}$</p> <p>And certainly $\mathbf v_4$ can be composed as a combination of $\begin{bmatrix} 1\\0\\0\end{bmatrix}, \begin{bmatrix} 0\\1\\0\end{bmatrix},\begin{bmatrix} 0\\0\\1\end{bmatrix}$</p>
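The general fact (more columns than rows forces a dependence) can be demonstrated with exact Gaussian elimination. The helper below is my own illustrative sketch, not part of the original answer:

```python
from fractions import Fraction

def nonzero_null_vector(A):
    """For an m x n integer matrix with m < n, return a nonzero rational x
    with A x = 0 (such x exists: there must be at least one free column)."""
    m, n = len(A), len(A[0])
    M = [[Fraction(v) for v in row] for row in A]
    pivots = {}  # pivot column -> its row in the reduced matrix
    r = 0
    for c in range(n):
        pr = next((i for i in range(r, m) if M[i][c] != 0), None)
        if pr is None:
            continue  # column c is free
        M[r], M[pr] = M[pr], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(m):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots[c] = r
        r += 1
        if r == m:
            break  # every remaining column is free
    free = next(c for c in range(n) if c not in pivots)
    x = [Fraction(0)] * n
    x[free] = Fraction(1)
    for c, row in pivots.items():
        x[c] = -M[row][free]  # back-substitute from the reduced rows
    return x

A = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
x = nonzero_null_vector(A)          # recovers (0, 0, 0, 1)

A2 = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]
x2 = nonzero_null_vector(A2)
```

For the example matrix it recovers exactly the vector exhibited above, and it finds a non-trivial dependence for any other 3×4 integer matrix as well.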
306,588
<p>I'll first explain what Mobius inversion says, and then state what I am fairly sure the equivariant version is. I can write out a proof, but I also can't believe this hasn't been done already; this is a request for references to where it has already been done.</p> <p><b>Ordinary Mobius Inversion</b> Let $P$ be a finite poset with minimal element $0$. Let $u$ be a function $P \to \mathbb{Z}$ and define $v: P \to \mathbb{Z}$ by $v(p) = \sum_{q \geq p} u(q)$. Mobius inversion aims to recover $u(0)$ from the values of $v$. It says that $u(0) = \sum_{q \in P} \mu(q) v(q)$. The function $\mu : P \to \mathbb{Z}$ can be described topologically: Let $(0,q)$ be the poset $\{ r \in P : 0 &lt; r &lt; q \}$ and let $\Delta((0,q))$ be the order complex, which is the simplicial complex whose faces are totally ordered subsets of $(0,q)$. Then $\mu(q)$ is the reduced Euler characteristic of $\Delta((0,q))$. </p> <p><b>The equivariant situation</b> Let $P$ be a finite poset with minimal element $0$ and let $G$ be a group acting on $P$. For each $p \in P$, let $U(p)$ be a finite dimensional $\mathbb{C}$-vector space. Define $V(p) : = \bigoplus_{q \geq p} U(q)$, so $V(0) = \bigoplus_p U(p)$. Let $G$ act on $V(0)$, with $g U(p)= U(gp)$. My goal is to recover the class of $U(0)$, in the representation ring $Rep(G)$, from the $V(p)$'s. </p> <p>For $p \in P$, let $G_p$ be the stabilizer of $p$, so $U(p)$ and $V(p)$ are $G_p$-reps. Let $\mu_{eq}(q)$ be the equivariant reduced Euler characteristic of $q$, meaning the sum $\sum (-1)^j [\tilde{H}^j(\Delta((0,q)))]$ computed in the representation ring $Rep(G_q)$. Let $G\backslash P$ be a set of orbit representatives for $G$ acting on $P$. </p> <p>Then I claim that $$[U(0)] = \sum_{q \in G \backslash P} \mathrm{Ind}_{G_q}^G \left[ \mu_{eq}(q) \otimes V(q) \right]$$ in $Rep(G)$.</p> <p>Has anyone seen this before?</p>
David E Speyer
297
<p>Sami Assaf and I prove this in section 5 of our paper <a href="https://arxiv.org/abs/1809.10125" rel="nofollow noreferrer">Specht modules decompose as alternating sums of restrictions of Schur modules</a>. It is surprising that we couldn't find a reference!</p>
246,589
<p>Solve the boundary value problem $$\begin{cases} \displaystyle \frac{\partial u}{\partial t} = 2 \frac{\partial^2 u}{\partial x^2} \\ \ \\u(0,t) = 10 \\ u(3,t) = 40 \\ u(x, 0) = 25 \end{cases}$$</p>
doraemonpaul
30,938
<p>Let $u(x,t)=X(x)T(t)$ ,</p> <p>Then $X(x)T'(t)=2X''(x)T(t)$</p> <p>$\dfrac{T'(t)}{2T(t)}=\dfrac{X''(x)}{X(x)}=-\dfrac{n^2\pi^2}{9}$</p> <p>$\begin{cases}\dfrac{T'(t)}{T(t)}=-\dfrac{2n^2\pi^2}{9}\\X''(x)+\dfrac{n^2\pi^2}{9}X(x)=0\end{cases}$</p> <p>$\begin{cases}T(t)=c_3(n)e^{-\frac{2n^2\pi^2t}{9}}\\X(x)=\begin{cases}c_1(n)\sin\dfrac{n\pi x}{3}+c_2(n)\cos\dfrac{n\pi x}{3}&amp;\text{when}~n\neq0\\c_1x+c_2&amp;\text{when}~n=0\end{cases}\end{cases}$</p> <p>$\therefore u(x,t)=C_1x+C_2+\sum\limits_{n=0}^\infty C_3(n)e^{-\frac{2n^2\pi^2t}{9}}\sin\dfrac{n\pi x}{3}+\sum\limits_{n=0}^\infty C_4(n)e^{-\frac{2n^2\pi^2t}{9}}\cos\dfrac{n\pi x}{3}$</p> <p>$u(0,t)=10$ :</p> <p>$C_2+\sum\limits_{n=0}^\infty C_4(n)e^{-\frac{2n^2\pi^2t}{9}}=10$</p> <p>$\sum\limits_{n=0}^\infty C_4(n)e^{-\frac{2n^2\pi^2t}{9}}=10-C_2$</p> <p>$C_4(n)=\begin{cases}10-C_2&amp;\text{when}~n=0\\0&amp;\text{when}~n\neq0\end{cases}$</p> <p>$\therefore u(x,t)=C_1x+C_2+\sum\limits_{n=0}^\infty C_3(n)e^{-\frac{2n^2\pi^2t}{9}}\sin\dfrac{n\pi x}{3}+10-C_2=C_1x+10+\sum\limits_{n=1}^\infty C_3(n)e^{-\frac{2n^2\pi^2t}{9}}\sin\dfrac{n\pi x}{3}$</p> <p>$u(3,t)=40$ :</p> <p>$3C_1+10=40$</p> <p>$C_1=10$</p> <p>$\therefore u(x,t)=10x+10+\sum\limits_{n=1}^\infty C_3(n)e^{-\frac{2n^2\pi^2t}{9}}\sin\dfrac{n\pi x}{3}$</p> <p>$u(x,0)=25$ :</p> <p>$10x+10+\sum\limits_{n=1}^\infty C_3(n)\sin\dfrac{n\pi x}{3}=25$</p> <p>$\sum\limits_{n=1}^\infty C_3(n)\sin\dfrac{n\pi x}{3}=-10x+15$</p> <p>$C_3(n)=\dfrac{2}{3}\int_{3k}^{3(k+1)}(-10x+15)\sin\dfrac{n\pi x}{3}dx$ , $\forall k\in\mathbb{Z}$ , $x\in(3k,3(k+1))$</p> <p>$C_3(n)=\left[\dfrac{20x-30}{n\pi}\cos\dfrac{n\pi x}{3}-\dfrac{60}{n^2\pi^2}\sin\dfrac{n\pi x}{3}\right]_{3k}^{3(k+1)}$ , $\forall k\in\mathbb{Z}$ , $x\in(3k,3(k+1))$</p> <p>$C_3(n)=\dfrac{(-1)^{n(k+1)}(60k+30)-(-1)^{nk}(60k-30)}{n\pi}$ , $\forall k\in\mathbb{Z}$ , $x\in(3k,3(k+1))$</p> <p>$\therefore 
u(x,t)=10x+10+\sum\limits_{n=1}^\infty\dfrac{(-1)^{n(k+1)}(60k+30)-(-1)^{nk}(60k-30)}{n\pi}e^{-\frac{2n^2\pi^2t}{9}}\sin\dfrac{n\pi x}{3}$ , $\forall k\in\mathbb{Z}$ , $x\in(3k,3(k+1))$</p> <p>Hence $u(x,t)=10x+10+\sum\limits_{k=-\infty}^\infty\sum\limits_{n=1}^\infty\dfrac{(-1)^{n(k+1)}(60k+30)-(-1)^{nk}(60k-30)}{n\pi}\prod_{3k,3(k+1)}(x)e^{-\frac{2n^2\pi^2t}{9}}\sin\dfrac{n\pi x}{3}~,~x\in\mathbb{R}$</p>
3,542,885
<p>Let <span class="math-container">$P(x, y) = ax^2 + bxy + cy^2 + dx + ey + h$</span> and suppose <span class="math-container">$b^2 - 4ac &gt; 0.$</span></p> <p>I know that we can re-write <span class="math-container">$P(x, y)$</span> as a polynomial of <span class="math-container">$x:$</span> <span class="math-container">$$P(x, y) = ax^2 + (by+d)x + (cy^2 + ey + h).$$</span> From here, we can get the discriminant <span class="math-container">$\Delta_x(y)$</span> in terms of <span class="math-container">$y:$</span> <span class="math-container">$$\Delta_x(y) = (by+d)^2 - 4a(cy^2 + ey + h) = (b^2 - 4ac)y^2 + (2bd - 4ae)y + (d^2 - 4ah).$$</span></p> <p>Given the assumptions, I'm supposed to show that one of the following occurs: </p> <p>1) <span class="math-container">$\{ y \mid \Delta_x(y) \geq 0\} = \mathbb{R} \text{ and } \Delta_x(y) \neq 0 $</span> </p> <p>2) <span class="math-container">$\{ y \mid \Delta_x(y) = 0\} = \{y_0\} \text{ and }\{ y \mid\Delta_x(y) &gt; 0\} = \{ y\mid y \neq y_0\}$</span></p> <p>3) there exist real numbers <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, <span class="math-container">$\alpha &lt; \beta$</span>, such that <span class="math-container">$\{y \mid \Delta_x(y)\geq0\} = \{y \mid y \leq \alpha\} \cup \{y \mid y \geq \beta\}.$</span></p> <p>In the first case, we're supposed to get a hyperbola opening left and right. In the second case, we'll get two lines intersecting at a point. In the final case, we'll get a hyperbola opening up and down. However, I can't see how to show these things rigorously. Any insight would be appreciated.</p>
Narasimham
95,860
<p><a href="https://i.stack.imgur.com/njdOd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/njdOd.png" alt="enter image description here"></a></p> <p>From similar triangles <span class="math-container">$ ADE,AOB $</span></p> <p><span class="math-container">$$ \tan A = \frac{x_1}{y_1}, \cot A =\frac{a}{DE}$$</span></p> <p>Multiply and transpose:</p> <p><span class="math-container">$$ DE= a \,x_1/y_1$$</span></p> <p>The same works if the <span class="math-container">$\frac45$</span> segment is taken from the top.</p>
2,099,516
<p>For independent Gamma random variables $G_1, G_2 \sim \Gamma(n,1)$, $\frac{G_1}{G_1+G_2}$ is independent of $G_1+G_2$. Does this imply that $G_1+G_2$ is independent of $G_1-G_2$? Thanks!</p>
madprob
34,305
<p>Let $X_1=G_1+G_2$ and $X_2=G_1-G_2$. \begin{align*} M_{X_1,X_2}(t_1,t_2) &amp;= E[\exp(t_1(G_1+G_2)+t_2(G_1-G_2))] \\ &amp;= E[\exp((t_1+t_2)G_1+(t_1-t_2)G_2)] \\ &amp;= E[\exp((t_1+t_2)G_1)]E[\exp((t_1-t_2)G_2)] \\ &amp;= (1-(t_1+t_2))^{-n}(1-(t_1-t_2))^{-n} \\ &amp;= (1-2t_1+t_1^2-t_2^2)^{-n} \end{align*} Note that $M_{X_1,X_2}(t_1,t_2)$ cannot be factored as $g(t_1)h(t_2)$. Therefore, $X_1$ and $X_2$ are not independent.</p> <p>If you want to be a little more formal, $X_1 \sim \text{Gamma}(2n,1)$ and, therefore, $M_{X_1}(t_1) = (1-t_1)^{-2n}$. Also, \begin{align*} M_{X_2}(t_2) &amp;= E[\exp(t_2(G_1-G_2))] \\ &amp;= M_{G_1}(t_2)M_{G_2}(-t_2) \\ &amp;= (1-t_2)^{-n}(1+t_2)^{-n} \end{align*} Since $M_{X_1,X_2}(t_1,t_2) \neq M_{X_1}(t_1)M_{X_2}(t_2)$, conclude that $X_1$ and $X_2$ are not independent.</p>
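One can corroborate the conclusion without MGFs by comparing exact moments (this check is my addition, not part of the original answer): if $X_1$ and $X_2$ were independent, $E[X_2^2 X_1]$ would factor as $E[X_2^2]\,E[X_1]$, but it does not:

```python
from math import prod

def gamma_moment(n, k):
    # E[G**k] for G ~ Gamma(shape=n, rate=1): n (n+1) ... (n+k-1)
    return prod(n + i for i in range(k))

def factorization_test(n):
    """Compare E[(G1-G2)^2 (G1+G2)] with E[(G1-G2)^2] * E[G1+G2] for
    independent G1, G2 ~ Gamma(n, 1), using exact moments.
    Expanding: (G1-G2)^2 (G1+G2) = G1^3 + G2^3 - G1^2 G2 - G1 G2^2."""
    m1, m2, m3 = (gamma_moment(n, k) for k in (1, 2, 3))
    joint = 2 * m3 - 2 * m1 * m2                   # E[(G1-G2)^2 (G1+G2)]
    product = (2 * m2 - 2 * m1 ** 2) * (2 * m1)    # E[(G1-G2)^2] * E[G1+G2]
    return joint, product

results = {n: factorization_test(n) for n in (1, 2, 5)}
```

The two sides differ (by $4n$, in fact) for every tested shape parameter, so the factorization test for independence fails, matching the MGF argument.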
2,403,608
<p>I was asked to solve for the <span class="math-container">$\theta$</span> shown in the figure below.</p> <p><a href="https://i.stack.imgur.com/3Yxqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Yxqv.png" alt="enter image description here" /></a></p> <p>My work:</p> <p>The <span class="math-container">$\Delta FAB$</span> is an equilateral triangle, having interior angles of <span class="math-container">$60^o.$</span> I don't think <span class="math-container">$\Delta HIG$</span> and <span class="math-container">$\Delta DEC$</span> are right triangles.</p> <p>So far, that's all I know. I'm confused on how to get <span class="math-container">$\theta.$</span> How do you get the <span class="math-container">$\theta$</span> above?</p>
trying
309,917
<p>This answer makes use of analytic geometry, as an alternative to other answers.</p> <p>Setting a cartesian coordinate system with origin in $A$ and $x$-axis parallel to $AB$ and $y$-axis parallel to $AH$ you have:</p> <p>$y_F=\frac{\sqrt{3}}{2}X$</p> <p>$\tan\frac{\theta}{2}=\frac{X/2}{y_H-y_F}=\frac{1}{2}\frac{1}{1-\frac{\sqrt{3}}{2}}=\frac{1}{2-\sqrt{3}}$</p> <p>$\theta=2\arctan\frac{1}{2-\sqrt{3}}=150^\circ$</p>
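A quick numeric confirmation of the final value (my addition, not part of the original answer), using the rationalization $1/(2-\sqrt3)=2+\sqrt3=\tan 75^\circ$:

```python
import math

# theta = 2 * arctan(1 / (2 - sqrt(3))), converted to degrees
theta = 2 * math.degrees(math.atan(1 / (2 - math.sqrt(3))))
rationalized = 2 + math.sqrt(3)   # 1/(2 - sqrt 3) rationalizes to 2 + sqrt 3
```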
1,803,589
<p>I'm stuck at this. How is RHS rearranged? Is it a change of index?</p> <p>$$ \sum_{n=1}^{2N} \frac{1}{n} - \sum_{n=1}^{N} \frac{1}{n} = \sum_{n=N+1}^{2N} \frac{1}{n} $$</p> <p>I'm stuck here:</p> <p>$$ \sum_{n=1}^{2N} \frac{1}{n} = \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{2N} $$ $$ \sum_{n=1}^{N} \frac{1}{n} =\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N} $$ $$ \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{2N}-(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N})= \frac{1}{2N}-\frac{1}{N}=\frac{-1}{2N} $$ Thanks!</p>
SchrodingersCat
278,967
<p>This is what it should be like:</p> <p>$$\sum_{n=1}^{2N} \frac{1}{n}- \sum_{n=1}^{N} \frac{1}{n}$$ $$= \left(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{2N}\right) -\left(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N}\right)$$ $$=\frac{1}{N+1}+\frac{1}{N+2}+\frac{1}{N+3}+\dots+ \frac{1}{2N}$$ $$=\sum_{n=N+1}^{2N} \frac{1}{n}$$</p>
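The cancellation can be verified exactly with rational arithmetic; the snippet below is illustrative, not part of the original answer:

```python
from fractions import Fraction

def harmonic_slice(lo, hi):
    # exact sum of 1/n for n = lo, ..., hi
    return sum(Fraction(1, n) for n in range(lo, hi + 1))

all_equal = all(
    harmonic_slice(1, 2 * N) - harmonic_slice(1, N) == harmonic_slice(N + 1, 2 * N)
    for N in (1, 2, 5, 10, 50)
)
```

Using `Fraction` makes the check exact, so there is no floating-point ambiguity in the comparison.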
1,803,589
<p>I'm stuck at this. How is RHS rearranged? Is it a change of index?</p> <p>$$ \sum_{n=1}^{2N} \frac{1}{n} - \sum_{n=1}^{N} \frac{1}{n} = \sum_{n=N+1}^{2N} \frac{1}{n} $$</p> <p>I'm stuck here:</p> <p>$$ \sum_{n=1}^{2N} \frac{1}{n} = \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{2N} $$ $$ \sum_{n=1}^{N} \frac{1}{n} =\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N} $$ $$ \frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+\frac{1}{2N}-(\frac{1}{1}+\frac{1}{2}+\frac{1}{3}+\dots+ \frac{1}{N})= \frac{1}{2N}-\frac{1}{N}=\frac{-1}{2N} $$ Thanks!</p>
Eff
112,061
<p>You can also simply derive it from the sigma-notation. You have that $$\sum\limits_{n=1}^{2N}\frac1n = \sum\limits_{n=1}^N\frac1n + \sum\limits_{n=N+1}^{2N}\frac1n, $$ and hence $$\require{cancel}\sum\limits_{n=1}^{2N}\frac1n - \sum\limits_{n=1}^{N}\frac1n = \left(\cancel{\sum\limits_{n=1}^N\frac1n} + \sum\limits_{n=N+1}^{2N}\frac1n\right)- \cancel{\sum\limits_{n=1}^{N}\frac1n} = \sum\limits_{n=N+1}^{2N}\frac1n. $$</p>
65,691
<p>The question of generalising circle packing to three dimensions was asked in <a href="https://mathoverflow.net/questions/65677/">65677</a>. There is a clear consensus that there is no obvious three dimensional version of circle packing.</p> <p>However I have seen a comment that circle packing on surfaces and Ricci flow on surfaces are related. The circle packing here is an extension of circle packing to include intersection angles between the circles with a particular choice for these angles. My initial question is to ask for an explanation of this.</p> <p>My real question should now be apparent. There is an extension of Ricci flow to three dimensions: so is there some version of circle packing in three dimensions which can be interpreted as a combinatorial version of Ricci flow?</p>
Roberto Frigerio
6,206
<p>The following paper </p> <p>F. Luo, A combinatorial curvature flow for compact 3-manifolds with boundary,</p> <p><a href="http://arxiv.org/abs/math/0405295" rel="noreferrer">http://arxiv.org/abs/math/0405295</a> (now published in electronic research announcements, AMS, Volume 11, Pages 12--20)</p> <p>provides a combinatorial flow for 3-manifolds with boundary consisting of surfaces with negative Euler characteristic. It deals with the convergence of an initial "piecewise-hyperbolic" metric to an actual hyperbolic metric with geodesic boundary. The analogy with Ricci flow is very very mild, but I hope you may be interested in this reference anyway.</p>
65,691
<p>The question of generalising circle packing to three dimensions was asked in <a href="https://mathoverflow.net/questions/65677/">65677</a>. There is a clear consensus that there is no obvious three dimensional version of circle packing.</p> <p>However I have seen a comment that circle packing on surfaces and Ricci flow on surfaces are related. The circle packing here is an extension of circle packing to include intersection angles between the circles with a particular choice for these angles. My initial question is to ask for an explanation of this.</p> <p>My real question should now be apparent. There is an extension of Ricci flow to three dimensions: so is there some version of circle packing in three dimensions which can be interpreted as a combinatorial version of Ricci flow?</p>
Joseph O'Rourke
6,094
<p>Not yet mentioned is the interesting definition of Ricci curvature by <a href="http://www.yann-ollivier.org/" rel="noreferrer">Yann Ollivier</a>, a definition especially suited to discrete spaces, such as graphs. His definition "can be used to define a notion of 'curvature at a given scale' for metric spaces." For example, he shows how the discrete cube $\{ 0,1 \}^n$ behaves like $\mathbb{S}^n$ in having constant positive curvature, and possessing an analog of the Lévy "<a href="http://en.wikipedia.org/wiki/Concentration_of_measure" rel="noreferrer">concentration of measure</a>" (the mass of $\mathbb{S}^n$ is concentrated about its equator).</p> <p>His definition is used in the recent (April, 2011) paper by Jürgen Jost and Shiping Liu: "<a href="http://arxiv.org/abs/1103.4037" rel="noreferrer">Ollivier's Ricci curvature, local clustering and curvature dimension inequalities on graphs</a>."</p> <p>Here are two primary sources:</p> <blockquote> <p>Y. Ollivier, <a href="http://arxiv.org/abs/math/0701886" rel="noreferrer">Ricci Curvature of Markov Chains on Metric Spaces</a>, <em>J. Funct. Anal.</em> 256 (2009), No. 3, 810-864. </p> <p>Y. Ollivier, <a href="http://www.yann-ollivier.org/rech/publs/surveycurvmarkov.pdf" rel="noreferrer">A survey of Ricci curvature for metric spaces and Markov chains</a>, in <em>Probabilistic approach to geometry</em>, 343-381, <em>Adv. Stud. Pure Math.</em>, 57, Math. Soc. Japan, Tokyo, 2010.</p> </blockquote> <p><hr /> <b>Update</b> (<em>7Feb13</em>). Noticed this recent posting to the arXiv:</p> <blockquote> <p>Warner A. Miller, Jonathan R. McDonald, Paul M. Alsing, David Gu, Shing-Tung Yau, "Simplicial Ricci Flow," <a href="http://arxiv.org/abs/1302.0804" rel="noreferrer">arXiv:1302.0804</a> <code>[math.DG]</code>.</p> </blockquote>
78,311
<p>Let $\mu$ be standard Gaussian measure on $\mathbb{R}^n$, i.e. $d\mu = (2\pi)^{-n/2} e^{-|x|^2/2} dx$, and define the Gaussian Sobolev space $H^1(\mu)$ to be the completion of $C_c^\infty(\mathbb{R}^n)$ under the inner product $$\langle f,g \rangle_{H^1(\mu)} := \int f g\, d\mu + \int \nabla f \cdot \nabla g\, d\mu.$$</p> <p>It is easy to see that polynomials are in $H^1(\mu)$. Do they form a dense set?</p> <p>I am quite sure the answer must be yes, but can't find or construct a proof in general. I do have a proof for $n=1$, which I can post if anyone wants. It may be useful to know that the polynomials are dense in $L^2(\mu)$.</p> <p><strong>Edit</strong>: Here is a proof for $n=1$.</p> <p>It is sufficient to show that any $f \in C^\infty_c(\mathbb{R})$ can be approximated by polynomials. We know polynomials are dense in $L^2(\mu)$, so choose a sequence of polynomials $q_n \to f&#39;$ in $L^2(\mu)$. Set $p_n(x) = \int_0^x q_n(y)\,dy + f(0)$; $p_n$ is also a polynomial. By construction we have $p_n&#39; \to f&#39;$ in $L^2(\mu)$; it remains to show $p_n \to f$ in $L^2(\mu)$. Now we have $$ \begin{align*} \int_0^\infty |p_n(x) - f(x)|^2 e^{-x^2/2} dx &amp;= \int_0^\infty \left(\int_0^x (q_n(y) - f&#39;(y)) dy \right)^2 e^{-x^2/2} dx \\ &amp;\le \int_0^\infty \int_0^x (q_n(y) - f&#39;(y))^2\,dy \,x e^{-x^2/2} dx \\ &amp;= \int_0^\infty (q_n(x) - f&#39;(x))^2 e^{-x^2/2} dx \to 0 \end{align*}$$ where we used Cauchy-Schwarz in the second line and integration by parts in the third. The $\int_{-\infty}^0$ term can be handled the same with appropriate minus signs.</p> <p>The problem with $n &gt; 1$ is I don't see how to use the fundamental theorem of calculus in the same way.</p>
Community
-1
<p>Nate, I once needed this result, so I proved it in <a href="http://www.stat.ualberta.ca/people/schmu/preprints/japonica.pdf"><em>Dirichlet forms with polynomial domain</em></a> (Math. Japonica <strong>37</strong> (1992) 1015-1024). There may be better proofs out there, but you could start with this paper. </p>
1,292,759
<blockquote> <p>Let $a,b,c\in\mathbb{R}^+$ and $abc=1$. Prove that $$\frac{1}{a^3(b+c)}+\frac{1}{b^3(c+a)}+\frac{1}{c^3(a+b)}\ge\frac32$$</p> </blockquote> <p>This isn't a hard problem. I have already solved it in the following way:<br/> Let $x=\frac1a,y=\frac1b,z=\frac1c$, then $xyz=1$. Now, it is enough to prove that $$L\equiv\frac{x^2}{y+z}+\frac{y^2}{z+x}+\frac{z^2}{x+y}\ge\frac32$$ Now, using the Cauchy-Schwarz inequality on the numbers $a_1=\sqrt{y+z},a_2=\sqrt{z+x},a_3=\sqrt{x+y},b_1=\frac{x}{a_1},b_2=\frac{y}{a_2},b_3=\frac{z}{a_3}$ I got $$(x+y+z)^2\le((x+y)+(y+z)+(z+x))\cdot L$$ From this $$L\ge\frac{x+y+z}2\ge\frac32\sqrt[3]{xyz}=\frac32$$ Then I tried to prove it using derivatives. Let $x=a,y=b$ and $$f(x,y)=\frac1{x^3\left({y+\frac1{xy}}\right)}+\frac1{y^3\left({x+\frac1{xy}}\right)}+\frac1{\left({\frac1{xy}}\right)^3(x+y)}$$ So, I need to find the minimum value of this function. At a minimum we must have $$\frac{df}{dx}=0\land\frac{df}{dy}=0$$ After simplifying $\frac{df}{dx}=0$ I got $$\frac{-y(3xy^2+2)}{x^3\left({xy^2+1}\right)^2}+\frac{1-x^2y}{y^2\left({x^2y+1}\right)^2}+\frac{x^2y^3(2x+3y)}{\left({x+y}\right)^2}=0$$ Is there any easy way to write $x$ in terms of $y$ from this equation?</p>
xpaul
66,420
<p>You can do it this way. Your inequality $L\ge \frac32$ is equivalent to $$ 2[x^2(x+y)(x+z)+y^2(x+y)(y+z)+z^2(x+z)(y+z)]\ge 3(x+y)(x+z)(y+z). $$ Let $$ F(x,y,z)=2[x^2(x+y)(x+z)+y^2(x+y)(y+z)+z^2(x+z)(y+z)]-3(x+y)(x+z)(y+z)-\lambda(xyz-1). $$ Then set $$ \frac{\partial F}{\partial x}=0, \frac{\partial F}{\partial y}=0,\frac{\partial F}{\partial z}=0, \frac{\partial F}{\partial \lambda}=0. $$ An easy calculation shows that $\frac{\partial F}{\partial x}=\frac{\partial F}{\partial y}$ gives $(x-y)[3(x+y)+\lambda z]=0.$ So $x=y$. Similarly $x=y=z$. But $xyz=1$ and hence $x=y=z=1$. So $F(x,y,z)$ reaches its minimum $0$ when $x=y=z=1$, that is, $F(x,y,z)\ge0$. Thus $L\ge\frac32$.</p>
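A random search over the constraint surface $xyz=1$ supports the conclusion (a sketch I am adding for illustration, and of course not a proof):

```python
import math
import random

def L(x, y, z):
    return x * x / (y + z) + y * y / (z + x) + z * z / (x + y)

random.seed(0)
violations = 0
for _ in range(10000):
    x = math.exp(random.uniform(-2.0, 2.0))
    y = math.exp(random.uniform(-2.0, 2.0))
    z = 1.0 / (x * y)          # enforce the constraint xyz = 1
    if L(x, y, z) < 1.5 - 1e-9:
        violations += 1

at_one = L(1.0, 1.0, 1.0)      # the claimed minimizer x = y = z = 1
```

No sampled point falls below $3/2$, and the value at the claimed minimizer is exactly $3/2$.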
1,600,054
<p>The graph of $y=x^x$ looks like this:</p> <p><a href="https://i.stack.imgur.com/JdbSv.gif" rel="noreferrer"><img src="https://i.stack.imgur.com/JdbSv.gif" alt="Graph of y=x^x."></a></p> <p>As we can see, the graph has a minimum value at a turning point. According to WolframAlpha, this point is at $x=1/e$.</p> <p>I know that $e$ is the number for exponential growth and $\frac{d}{dx}e^x=e^x$, but these ideas seem unrelated to the fact that the minimum of $x^x$ occurs at $x=1/e$. <strong>Is this just pure coincidence, or could someone provide an intuitive explanation</strong> (i.e. more than just a proof) <strong>of why this is?</strong></p>
Brian Fitzpatrick
56,960
<p>Let $f(x)=x^x$. Note that $f(x)$ is only defined for $x&gt;0$.</p> <p>Then $$ \ln f(x)=x\cdot\ln x\tag{1} $$ Differentiating (1) gives $$ \frac{1}{f(x)}f^\prime(x)=x\frac{1}{x}+\ln x=1+\ln x $$ Note that we have used the chain rule and the product rule.</p> <p>Solving for $f^\prime(x)$ gives $$ f^\prime(x)=f(x)(1+\ln x)=x^x(1+\ln x) $$ Can you use this to locate the critical points of $f(x)$?</p>
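Numerically, the derivative formula above pins down the critical point at $x=1/e$ and confirms it is a minimum; the check below is my addition, not part of the original answer:

```python
import math

def f(x):
    return x ** x

def fprime(x):
    # from logarithmic differentiation: f'(x) = x**x * (1 + ln x)
    return x ** x * (1 + math.log(x))

x_star = 1 / math.e
deriv_at_min = fprime(x_star)                  # ~ 0: the critical point
left, right = fprime(x_star - 1e-3), fprime(x_star + 1e-3)
is_min = (left < 0 < right
          and f(x_star) < f(x_star - 1e-3)
          and f(x_star) < f(x_star + 1e-3))
```

The sign change of $f'$ from negative to positive at $x=1/e$ is exactly the first-derivative test for a minimum.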
365,128
<p>How does one find the Inverse Laplace transform of $$\frac{6s^2 + 4s + 9}{(s^2 - 12s + 52)(s^2 + 36)}$$ where $s &gt; 6$?</p>
Amzoti
38,839
<p>Hints:</p> <ol> <li><p>Write out the partial fraction expansion.</p></li> <li><p>Put the partial fraction into the forms that let you use the inverse table.</p></li> </ol> <p>Why do they put the restriction on s (you'll see it if you do 1 and 2)?</p> <p>Clear?</p> <p><strong>Update</strong></p> <p>We are given and asked to find the Inverse Laplace Transform of:</p> <p>$\displaystyle \frac{6s^2 + 4s + 9}{(s^2 - 12s + 52)(s^2 + 36)}$ where $s &gt; 6$.</p> <p>We have (after some algebra and putting things into forms we can work with):</p> <p>$\displaystyle \frac{6s^2 + 4s + 9}{(s^2 - 12s + 52)(s^2 + 36)} = -\frac{121}{272}\left(\frac{s}{s^2 + 6^2}\right) - \frac{42}{272}\left(\frac{6}{s^2 + 6^2}\right) + \frac{121}{272} \left(\frac{s - 6}{(s-6)^2 + 4^2}\right) + \frac{579}{2 \times 272} \left(\frac{4}{(s-6)^2 + 4^2}\right)$</p> <p>Do you see why the condition $s \gt 6$ was given now?</p> <p>Can you now find the ILT of each of those expressions?</p>
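Hint 1 can be cross-checked mechanically. The sketch below (my addition, not part of the original answer) solves for the four partial-fraction coefficients exactly with `fractions.Fraction` and verifies the resulting decomposition at sample points:

```python
from fractions import Fraction

# Solve 6 s^2 + 4 s + 9 = (A s + B)(s^2 - 12 s + 52) + (C s + D)(s^2 + 36)
# for rational A, B, C, D by matching coefficients of s^3, s^2, s^1, s^0.

def solve_linear(M, b):
    """Gaussian elimination over Fractions for a small nonsingular system."""
    n = len(M)
    A = [[Fraction(v) for v in row] + [Fraction(bi)] for row, bi in zip(M, b)]
    for c in range(n):
        piv = next(r for r in range(c, n) if A[r][c] != 0)
        A[c], A[piv] = A[piv], A[c]
        A[c] = [v / A[c][c] for v in A[c]]
        for r in range(n):
            if r != c and A[r][c] != 0:
                A[r] = [u - A[r][c] * v for u, v in zip(A[r], A[c])]
    return [row[-1] for row in A]

#       A    B    C   D        (one row per power of s)
M = [[  1,   0,   1,  0],      # s^3
     [-12,   1,   0,  1],      # s^2
     [ 52, -12,  36,  0],      # s^1
     [  0,  52,   0, 36]]      # s^0
b = [0, 6, 4, 9]
A_, B_, C_, D_ = solve_linear(M, b)

def lhs(s):
    return Fraction(6 * s * s + 4 * s + 9, (s * s - 12 * s + 52) * (s * s + 36))

def rhs(s):
    return (A_ * s + B_) / (s * s + 36) + (C_ * s + D_) / (s * s - 12 * s + 52)

match = all(lhs(s) == rhs(s) for s in (0, 1, 2, 5, 7))
```

Working over exact rationals means the sample-point comparison either passes identically or exposes a slip in the algebra.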
2,013,115
<p>I tried to do this problem in the following way:</p> <p>As, $x^2+1 + \langle 3 , x^2+1 \rangle= 0 + \langle 3 , x^2+1 \rangle \implies x^2+1 \equiv 0 \implies x^2 \equiv -1.$</p> <p>Also, $3+ \langle 3 , x^2+1 \rangle=0 +\langle 3 , x^2+1 \rangle \implies 3 \equiv 0$.</p> <p>Now, any element of $\mathbb{Z}[x]/\langle 3 , x^2+1 \rangle$ is of form $p(x)+\langle 3 , x^2+1 \rangle$ where $p(x) \in \mathbb{Z}[x]$. So, divide $p(x)$ by $x^2+1$ to get a linear polynomial $ax+b$ as a remainder. So, any element of $\mathbb{Z}[x]/\langle 3 , x^2+1 \rangle$ can be written as $ax+b+ \langle 3 , x^2+1 \rangle$ where $a,b \in \mathbb{Z}/3\mathbb{Z}.$ Let $I=\langle 3, x^2+1\rangle$. So, the elements of the ring are $I, 1+I, 2+I, x+I,(x+1)+I,(x+2)+I,2x+I,(2x+1)+I,(2x+2)+I.$</p> <p>Is my solution correct?</p> <p>Thanks!</p>
iam_agf
196,583
<p>Well, note that all your polynomials have coefficients in $\{0,1,2\}$ and degree at most $1$, because $x^2+1=0$. So the elements without $x$ are $0,1,2$. Then consider the multiples of $x$, which are $x,2x$. Finally, you sum all possible combinations of the first list with the second:</p> <p>$$x+1,x+2,2x+1,2x+2$$</p> <p>Then all the elements of your ring are $0,1,2,x,2x,x+1,x+2,2x+1,2x+2$ (there are $9$ of them), so yes, your answer is right. </p> <p>It is clear that all of them are distinct: </p> <p>If two distinct elements of degree $0$ were equal, then $1$ or $2$ would equal $0$. If an element of degree $0$ were equal to one of degree $1$, then $x$ would equal $1$ or $2$, and all the polynomials in the quotient would have degree $0$. In the same way, no two distinct polynomials of degree $1$ can be equal, or we would be back in the previous cases. </p> <p>(I didn't add the $I$ of the ideal because we can consider all the elements written here as the classes of those elements.)</p>
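A small enumeration sketch (mine, not from the answer; the field remark at the end goes beyond the question, since $x^2+1$ has no root mod $3$ and the quotient is therefore the field with $9$ elements):

```python
from itertools import product

# Represent the class of a*x + b by the pair (a, b) with a, b in Z/3Z.
elements = list(product(range(3), repeat=2))

def mul(u, v):
    # (a1 x + b1)(a2 x + b2), reduced with x^2 = -1 and coefficients mod 3
    a1, b1 = u
    a2, b2 = v
    return ((a1 * b2 + a2 * b1) % 3, (b1 * b2 - a1 * a2) % 3)
```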
1,840,159
<blockquote> <p>Question: Prove that a group of order 12 must have an element of order 2.</p> </blockquote> <p>I believe I've made great stride in my attempt.</p> <p>By corollary to Lagrange's theorem, the order of any element $g$ in a group $G$ divides the order of a group $G$.</p> <p>So, $ \left | g \right | \mid \left | G \right |$. Hence, the possible orders of $g$ is $\left | g \right |=\left \{ 1,2,3,4,6,12 \right \}$</p> <p>Suppose $\left | g \right |=12.$ Then, $g^{12}=\left ( g^{6} \right )^{2}=e.$ So, $\left | g^{6} \right |=2$</p> <p>Using the above same idea and applying it to $\left | g \right |=\left \{ 6,4,2 \right \}$ and $\left | g \right |=1,$ we see that these elements g have order 2.</p> <p>However, for $\left | g^{3} \right |$, the group $G$ does not require an element of order 2.</p> <p>How can I take this attempt further?</p> <p>Thanks in advance. Useful <strong>hints</strong> would be helpful.</p>
awllower
6,792
<p><strong>Hint:</strong><br> Consider the Sylow $2$-subgroups of $G,$ which have order $4.$ </p> <p>Hope this helps.</p>
3,380,998
<p>Is it possible to express the cube root of "i" without using "i" itself?</p> <p>If this is possible can you show me how to arrive at it?</p> <p>thanks</p>
Mohammad Riazi-Kermani
514,496
<p>On the unit circle mark the <span class="math-container">$30$</span> degree, <span class="math-container">$150 $</span> degree and <span class="math-container">$270$</span> degree points. These are the cube roots of <span class="math-container">$i$</span> </p>
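A numerical check that the three marked points really cube to $i$ (the explicit real-imaginary form of the $30$ degree point, $\frac{\sqrt3}{2}+\frac12 i$, is my addition, answering the "without $i$ itself" wish in terms of real coordinates):

```python
import cmath
import math

# the three marked angles, measured from the positive real axis
angles_deg = [30, 150, 270]
roots = [cmath.exp(1j * math.radians(d)) for d in angles_deg]
```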
705,829
<p>I'm trying to solve a problem here.</p> <p>It says: "Prove that a triangle is isosceles if $\large b=2a\sin\left(\frac{\beta}{2}\right)$," where $\beta$ denotes the angle $B$. I've tried to prove it but I can't.</p> <p>Can anyone help me?</p>
DeepSea
101,504
<p>Square both sides and use the law of cosines: $$b^2 = 4a^2\sin^2(B/2) = 2a^2(1 - \cos B) = 2a^2\left(1 - \frac{a^2 + c^2 - b^2}{2ac}\right).$$ Simplifying this equation we get:</p> <p>$$(a - c)(a^2 - ac - b^2) = 0.$$ </p> <p>Case 1: If $a = c$, we're done.</p> <p>Case 2: If $a \lt c$, then $a - c \lt 0$, and $a^2 - ac - b^2 = a(a - c) - b^2 \lt 0$, so $LHS \gt 0 = RHS$. Contradiction.</p> <p>If $a \gt c$, then $a - c \gt 0$, and since $b^2 \lt (a - c)^2$ we have $a^2 - ac - b^2 \gt a^2 - ac - (a - c)^2 = a^2 - ac - (a^2 - 2ac + c^2) = ac - c^2 = c(a - c) \gt 0$, so $LHS \gt 0 = RHS$. Contradiction again.</p> <p>So <em>Case 2</em> can't happen, hence $a = c$ and triangle $\hat{ABC}$ is isosceles.</p>
4,181,524
<p><span class="math-container">$$\text{I need to prove the following lemma : }\frac{\zeta'(s)}{\zeta(s)} = - \sum_{n=1}^{\infty} \frac{\Lambda(n)}{n^s}$$</span></p> <p><strong>My attempt:</strong></p> <p><span class="math-container">$$\text{We know that }\zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s} $$</span><span class="math-container">$$\text{So, }\zeta'(s) = \sum_{n=1}^{\infty}\frac{d}{ds}\frac{1}{n^s} = -\sum_{n=1}^{\infty} \frac{\ln(n)}{n^s}$$</span></p> <p>This gives : <span class="math-container">$$\frac{\zeta'(s)}{\zeta(s)} = -\frac{\sum_{n=1}^{\infty} \frac{\ln(n)}{n^s}}{\sum_{n=1}^{\infty}\frac{1}{n^s}} $$</span></p> <p>How do I proceed ? Please help.</p>
Infinity_hunter
826,797
<p>It is easy to show that <span class="math-container">$\log n = \sum_{d | n } \Lambda(d)$</span>. So by <a href="http://en.wikipedia.org/wiki/M%C3%B6bius_inversion_formula" rel="nofollow noreferrer">Mobius inversion</a> it follows that <span class="math-container">$\Lambda(n) = \sum_{d|n} \mu(d) \log(n/d)$</span>. We know that <span class="math-container">$$\frac{1}{\zeta(s)} = \sum _{n=1}^{\infty} \frac{\mu(n)}{n^s}$$</span> and <span class="math-container">$$\zeta'(s) = -\sum_{n=1}^{\infty} \frac{\log n}{n^s}$$</span>, so by <a href="https://en.wikipedia.org/wiki/Dirichlet_convolution" rel="nofollow noreferrer">Dirichlet convolution</a> we have <span class="math-container">$$\frac{\zeta'(s)}{\zeta(s)} = - \sum_{n=1}^{\infty} \sum_{d|n} \mu(d) \log(n/d) \frac1{n^s} = -\sum_{n =1}^{\infty} \frac{\Lambda(n)}{n^s}$$</span></p>
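The two identities, $\log n = \sum_{d\mid n}\Lambda(d)$ and its Mobius inversion, can be checked numerically with a small self-contained sketch (mine; the implementations use naive trial division):

```python
from math import log

def mangoldt(n):
    # von Mangoldt function: log p if n = p^k for a prime p (k >= 1), else 0
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)  # smallest prime factor
    while n % p == 0:
        n //= p
    return log(p) if n == 1 else 0.0

def mobius(n):
    # Mobius function via trial division
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0  # n is not squarefree
            result = -result
        d += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]
```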
1,238,481
<p>Let $\vec p, \vec q$ and $\vec r$ are three mutually perpendicular vectors of the same magnitude. If a vector $\vec x$ satisfies the equation $\begin{aligned} \vec p \times ((\vec x - \vec q) \times \vec p)+\vec q \times ((\vec x - \vec r) \times \vec q)+\vec r \times ((\vec x - \vec p) \times \vec r)=0\end{aligned}$, then find $\vec x$.</p> <p>I considered $p^2 = q^2 = r^2 = a^2$ (say) and since $\vec p, \vec q$ and $\vec r$ are three mutually perpendicular vectors, therefore $\vec p \cdot \vec q = \vec q \cdot \vec r = \vec p \cdot \vec r = 0$. Then I solved the given equation using these data and got:</p> <p>$3a^2 \vec x +((\vec p\cdot\vec x)- a^2)\vec p +((\vec q\cdot\vec x)- a^2)\vec q+((\vec r\cdot\vec x)- a^2)\vec r= 0$</p> <p>How should I proceed after this to find $\vec x$ ? Thanks.</p>
Sam
221,113
<p>To summarize, and to answer the question, it seems that we have an unknown problem. </p> <p>The reason being: the OP has established "If $ZFC\nvdash$ $\neg$Con$(ZFC)$, then $ZFC\nvdash$ Con$(ZFC)$ $\implies SM$." in the statement of his question. Also, Asaf has established (the almost trivial) "If $ZFC\vdash$ $\neg$Con$(ZFC)$, then $ZFC\vdash$ Con$(ZFC)$ $\implies SM$." </p> <p>Putting these two together, we have established "$ZFC\vdash$ $\neg$Con$(ZFC)$ if and only if $ZFC\vdash$ Con$(ZFC)$ $\implies SM$. </p> <p>(*) So since it is not known whether $ZFC\vdash$ $\neg$Con$(ZFC)$ or not (even under the extra assumption that $ZFC$ is consistent), then it follows from the above paragraph that it is not known whether $ZFC\vdash$ Con$(ZFC)$ $\implies SM$ or not (even under the extra assumption that $ZFC$ is consistent). </p> <p>Regarding (*): It is a big assumption for me to say that something is not known, so the best I can do is to direct you to the discussion about the arithmetic soundness of $ZFC$ over at FOM, where the posters dabble in the possibility of a consistent $ZFC$ such that $ZFC\vdash$ $\neg$Con$(ZFC)$.</p>
2,703,639
<p>a) $f: L^1(0,3) \rightarrow \mathbb{R}$</p> <p>b) $f: C[0,3] \rightarrow \mathbb{R}$</p> <p>for part a I got $\|f\| = 1$ because $\|f(x)\|=|\int_0^2x(t)dt| \leq \int_0^2|x(t)|dt \leq\int_0^3|x(t)|dt = \|x(t)\|_1$ so $\|f\|=1$</p> <p>for b, I think it's similar: $\|f(x)\|=|\int_0^2x(t)dt| \leq \int_0^2|x(t)|dt \leq\int_0^3|x(t)|dt = \|x(t)\|_1$ and I have a theorem that says there is some c>0 s.t. $\|x(t)\|_1 \leq c\|x(t)\|_{max}$ so by taking the infimum of all such c, we get that $\|f\| = 0$.</p> <p>Are these correct? I am particularly worried about my answer for b</p>
user284331
284,331
<p>b) is not correct. There is some $c&gt;0$ such that $\|x\|_{1}\leq c\|x\|_{\infty}$, then $\|f\|\leq c$ for this $c$, it does not mean that for every $c&gt;0$, $\|f\|\leq c$.</p> <p>Rather, the question needs the exact norm of the operator. Actually one has $|f(x)|=\left|\displaystyle\int_{0}^{2}x(t)dt\right|\leq\displaystyle\int_{0}^{2}|x(t)|dt\leq\|x\|_{\infty}\int_{0}^{2}1dt=2\|x\|_{\infty}$.</p> <p>Now take $x(t)=1$ for $0\leq t\leq 2$ and $x(t)=-t+3$ for $2\leq t\leq 3$, then $\|x\|_{\infty}=1$ and hence $|f(x)|=2$, finally $\|f\|=2$. </p>
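An illustration (not a proof) that the piecewise function chosen above has sup-norm $1$ while $f(x)=\int_0^2 x(t)\,dt$ reaches $2$, using a midpoint Riemann sum:

```python
# Riemann-sum check of the extremal example above; N is an arbitrary grid size.
N = 200000

def x_ext(t):
    # x(t) = 1 on [0, 2], then x(t) = -t + 3 on [2, 3]; continuous at t = 2
    return 1.0 if t <= 2 else 3.0 - t

def f_of(x):
    # f(x) = integral of x(t) over [0, 2], midpoint rule
    h = 2.0 / N
    return sum(x((i + 0.5) * h) for i in range(N)) * h

# sup-norm over a grid of [0, 3]
sup_norm = max(abs(x_ext(3.0 * i / N)) for i in range(N + 1))
```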
3,136,568
<p><a href="https://i.stack.imgur.com/STONY.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/STONY.jpg" alt="enter image description here"></a></p> <p>I seem to be struggling with this particular question. It is my understanding that in this situation, where du does not equal dx, that you must manipulate the original problem to accommodate for this? However I admit, I arrived at my answer through use of an online calculator and would like to have a clear understanding of this concept. </p>
Paras Khosla
478,779
<p><strong>Hint</strong>:</p> <p><span class="math-container">$$\text{Let }\begin{bmatrix}u \\ \mathrm du \\ a\end{bmatrix}=\begin{bmatrix}5t \\ \mathrm dt \\ 2\end{bmatrix}$$</span></p>
604,070
<p>While doing the proof of the existence of completion of a metric space, usually books give an idea that the missing limit points are added into the space for obtaining the completion. But I do not understand from the proof where we are using this idea as we just make equivalence classes of asymptotic Cauchy sequences and accordingly define the metric.</p>
rewritten
43,219
<p>For a metric space $\langle T, d\rangle$ to be complete, all Cauchy sequences must have a limit. So we add that limit by defining it to be an "abstract" object, which is defined by "any Cauchy sequence converging to it".</p> <p>We have two cases:</p> <ol> <li><p>The Cauchy sequence already had a limit in $T$. In this case there is no need to add new points, and we identify that abstract object with the already existing point.</p></li> <li><p>The Cauchy sequence did not converge in $T$. Then you add this "object" to your space, and define distance accordingly. You can prove using the triangle inequality that you can choose any "equivalent" Cauchy sequence, and the metric will be the same.</p></li> </ol> <p>The important point is that the points we add are not real points, they are just abstract objects, which have properties that make them behave well under your tools and language.</p>
1,364,430
<p><strong>Problem</strong></p> <p>How many of the numbers in $A=\{1!,2!,...,2015!\}$ are square numbers?</p> <p><strong>My thoughts</strong></p> <p>I have no idea where to begin. I see no immediate connection between a factorial and a possible square. Much less for such ridiculously high numbers as $2015!$.</p> <p>Thus, the only one I can immediately see is $1! = 1^2$, which is trivial to say the least.</p>
Machining Machine
90,278
<p>Of all $n!$ only $0!$ and $1!$ are perfect squares.</p> <p><a href="http://mathforum.org/library/drmath/view/55881.html" rel="nofollow">http://mathforum.org/library/drmath/view/55881.html</a></p> <blockquote> <p>To prove that a factorial bigger than 1 can't be a perfect square, first think about breaking down the factorial into prime factors. For example, 4! = 24, and 24 = 2*2*2*3. In order for any number to be a perfect square, it must contain an even number of each prime factor.<br> Now think about the largest prime less than or equal to n. Call that number p. That number appears only once in the list of factors 1*2*3*4*5*...*p*...*n, so it can't appear an even number of times unless 2*p is also less than n.</p> <p>Now there's a semi-famous theorem called Bertrand's Postulate, which says that there is always a prime number between n and 2n. This is just what you need to show that 2p isn't less than n - because if it were, there would have to be a larger prime than p that is also less than n.</p> </blockquote>
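The quoted argument can be checked by brute force for small $n$ (a sketch of mine; `exponent_in_factorial` uses Legendre's formula for the exponent of a prime in $n!$):

```python
from math import factorial, isqrt

def is_square(n):
    return isqrt(n) ** 2 == n

def largest_prime_upto(n):
    # largest prime p <= n, by trial division
    return max(p for p in range(2, n + 1)
               if all(p % d for d in range(2, isqrt(p) + 1)))

def exponent_in_factorial(p, n):
    # Legendre's formula: exponent of p in n! is sum of floor(n / p^k)
    e, q = 0, p
    while q <= n:
        e += n // q
        q *= p
    return e
```

By Bertrand's Postulate the largest prime $p\le n$ satisfies $2p&gt;n$, so its exponent in $n!$ is exactly $1$, which is what the last assertion probes.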
2,873,520
<p>I want to find out how interference of two sine waves can affect the output-phase of the interfered wave. </p> <p>Consider two waves,</p> <p>$$ E_1 = \sin(x) \\ E_2 = 2 \sin{(x + \delta)} $$</p> <p>First off, I don't know how to prove it, but I can see visually (plotting numerically) that the sum of these waves looks like a new sine wave. </p> <p>I want to find out what the phase of $E_1 + E_2 $ looks like. First I tried finding it using functions like ArcSin() and Ln() but ran into trouble with both methods. For example, when I try ArcSin(Sin[x] + 2*Sin(x - $\delta$)), I get answers that disagree with my numerical answers. </p> <p>Numerically, I solve for zeros and find the one with a positive derivative in a 2-pi region. </p> <p>Now I plot the phase-shift of $E_3$ as a function of $\delta$ (in blue) and compare it to $E_2$ (in purple): </p> <p><a href="https://i.stack.imgur.com/P6Jsc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/P6Jsc.png" alt="enter image description here"></a></p> <p>Is there a "formula" I can use to find this answer without having to trace through one of the zeros? I think the key when using something like ArcSin is using the right normalization (I think it only works for a sine of amplitude 1), but I'm not sure exactly of the proper way to do it. </p>
Mohammad Riazi-Kermani
514,496
<p>Yes, the polynomial associated to $$ ay'' + by' +cy =0$$ is $$P(\lambda )= a \lambda ^2 + b \lambda +c,$$ which is called the characteristic polynomial. </p> <p>This polynomial plays a very important role in finding the solutions to your differential equation. </p> <p>The general solution to the differential equation is $$ y=C_1 e^{\lambda _1 x} +C_2 e^{\lambda _2 x}, $$ where $\lambda _1$ and $\lambda _2$ are the (distinct) roots of $P(\lambda)$. </p>
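A numerical illustration with a concrete example of my choosing (not from the answer): $y''-3y'+2y=0$, whose characteristic polynomial $\lambda^2-3\lambda+2$ has roots $1$ and $2$. Finite differences confirm that $C_1e^{\lambda_1 x}+C_2e^{\lambda_2 x}$ satisfies the equation:

```python
import math

# a y'' + b y' + c y = 0 with a, b, c = 1, -3, 2
a, b, c = 1.0, -3.0, 2.0
disc = math.sqrt(b * b - 4 * a * c)
l1 = (-b + disc) / (2 * a)   # root 2
l2 = (-b - disc) / (2 * a)   # root 1

def y(x, C1=1.0, C2=-2.0):
    # general solution with arbitrarily chosen constants
    return C1 * math.exp(l1 * x) + C2 * math.exp(l2 * x)

def residual(x, h=1e-4):
    # a y'' + b y' + c y, with central finite differences
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return a * ypp + b * yp + c * y(x)
```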
65,480
<p>The example question is </p> <blockquote> <p>Find the remainder when $8x^4+3x-1$ is divided by $2x^2+1$</p> </blockquote> <p>The answer did something like</p> <p>$$8x^4+3x-1=(2x^2+1)(Ax^2+Bx+C)+(Dx+E)$$</p> <p>Where $(Ax^2+Bx+C)$ is the Quotient and $(Dx+E)$ the remainder. I believe the degree of Quotient is derived from degree of $8x^4+3x-1$ - degree of divisor. But for remainder? Would it not be </p>
Américo Tavares
752
<p>EDIT to add these short answers.</p> <blockquote> <p>I believe the degree of Quotient is derived from degree of $8x^4+3x-1$ - degree of divisor.</p> </blockquote> <p>That's right.</p> <blockquote> <p>But for remainder?</p> </blockquote> <p>The degree of the remainder is less than the degree of the divisor, by definition of polynomial division.</p> <hr> <ol> <li>Let me start with <em>this specific case</em>. To find the remainder ($Dx+E$) we can expand the RHS of the identity shown in the question $$8x^4+3x-1=(2x^2+1)(Ax^2+Bx+C)+(Dx+E)\tag{1}$$ and collect the terms of the same degree. We get $$8x^4+3x-1=2Ax^4+2Bx^3+(A+2C)x^2+(B+D)x+C+E.\tag{2}$$ The polynomial of the LHS is equivalent to to polynomial of the RHS if and only if the coefficients of the terms of the same degree are equal. Therefore we must have the following system of $5$ equations $$2A=8,\ 2B=0, \ A+2C=0, \ B+D=3, \ C+E=-1, $$ whose solution is $$A=4, \ B=0, \ C=-2, \ D=3, \ E=1. \ $$ Hence we obtain $$8x^4+3x-1=(2x^2+1)(4x^2-2)+(3x+1),\tag{3}$$ where $3x+1$ is the <em>remainder</em>. The degree of $8x^4+3x+1$ is $4$ and the degree of $2x^2+1$ is $2$. The degree of the quotient $(4x^2-2)$ is $2=4-2$. <strong>The degree of the remainder is $1&lt;2$, which means that it is less than the degree of the divisor $2x^2+1$.</strong> Since $2x^2+1\ne 0$, the algebraic identity $(3)$ is equivalent to $$\frac{8x^4+3x-1}{2x^2+1}=4x^2-2+\frac{3x+1}{2x^2+1}.\tag{4}$$</li> <li>Now the <em>general case</em>. By definition of polynomial division, given polynomials $A(x),B(x)$, where the degree of $B(x)$ is greater than $0$, it is always possible to find a polynomial $Q(x)$, called quotient, such that the <strong>difference $$R(x)=A(x)-B(x)Q(x)\tag{5}$$ is a polynomial whose degree is less than the degree of $B(x)$. 
This polynomial $R(x)$, called remainder, is unique.</strong> The polynomial $A(x)$ is called the dividend and <strong>$B(x)$ the divisor.</strong> Let $m$ be the degree of $A(x)$, $n$ the degree of $B(x)$ and $q$ the degree of $Q(x)$. If $m&lt;n$, $Q(x)=0$ and $R(x)=A(x)$. If $m\ge n$, then $q=m-n$. (<em>Note</em>: if $n=0$, then $R(x)=0$.) The identity $(5)$ is equivalent to $$A(x)=B(x)Q(x)+R(x)\tag{6}$$ and for $B(x)\ne 0$ to $$\frac{A(x)}{B(x)}=Q(x)+\frac{R(x)}{B(x)}.\tag{7}$$</li> <li>Concerning the <em>computation of the quotient and the remainder</em>, in addition to the method detailed above, we can use the <a href="http://en.wikipedia.org/wiki/Polynomial_long_division" rel="nofollow">polynomial long division</a> or the <a href="http://en.wikipedia.org/wiki/Synthetic_division" rel="nofollow">synthetic division</a>. The long division technique applied to the present case, results in $$\begin{matrix} 4x^2 - 2\\ \qquad\qquad\qquad 2x^2+1\ \overline{ )\ 8x^4 \; +0x^3 \; +0x^2 \; + 3x - 1 }\\ \qquad\qquad\qquad \underline{ 8x^4 \; +0x^3 \;+ \;\;4x^2}\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\; -4x^2\; + 3x - 1 \\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\quad\; \underline{4x^2\; + 0x - 2}\\ \qquad\qquad\qquad\qquad\qquad\qquad\quad\qquad\qquad\qquad\qquad\qquad 3x + 1\\ \end{matrix}\tag{8}$$</li> </ol>
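The long division in $(8)$ can also be carried out mechanically. Here is a small exact-arithmetic sketch of mine (coefficients are listed from highest degree down, and the divisor's leading coefficient is assumed nonzero):

```python
from fractions import Fraction as Fr

def poly_divmod(A, B):
    # Polynomial division: returns (quotient, remainder) as coefficient
    # lists, highest degree first. Assumes B[0] != 0.
    A = [Fr(c) for c in A]
    B = [Fr(c) for c in B]
    Q = []
    while len(A) >= len(B):
        coef = A[0] / B[0]        # next quotient coefficient
        Q.append(coef)
        for i in range(len(B)):
            A[i] -= coef * B[i]   # subtract coef * x^shift * B
        A.pop(0)                  # leading coefficient is now exactly zero
    return Q, A                   # remainder has degree < deg B
```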
743,988
<p>If we formally exponentiate the derivative operator $\frac{d}{dx}$ on $\mathbb{R}$, we get</p> <p>$$e^\frac{d}{dx} = I+\frac{d}{dx}+\frac{1}{2!}\frac{d^2}{dx^2}+\frac{1}{3!}\frac{d^3}{dx^3}+ \cdots$$</p> <p>Applying this operator to a real analytic function, we have</p> <p>$$\begin{align*}e^\frac{d}{dx} f(x) &amp;= f(x)+f'(x)+\frac{1}{2!}f''(x)+\cdots\\ &amp;=f(x)+f'(x)((x+1)-x)+\frac{1}{2!}f''(x)((x+1)-x)^2+\cdots\\ &amp;=f(x+1) \end{align*}$$</p> <p>Does anyone have an explanation of why this should "morally" be true? I do not have a very good intuition for the matrix exponential which is probably holding me back here...</p>
Community
-1
<p>You can just see it as an identity: the shift operator can be expressed in terms of a Taylor series, and then we just compute its closed form.</p> <p>There are other visualizations for this, though. You can think of $1 + \frac{d}{dx}$ as an <em>infinitesimal</em> shift operator, and exponentiation accumulates all of the infinitesimal shifts up into an actual shift.</p> <p>In particular, if $E_k$ is the shift-by-$k$ operator, and $\Delta_k = E_k - 1$ then </p> <p>$$ E_1 = (1 + \Delta_{1/n})^{n} $$</p> <p>but we know that for small $k$, $\Delta_k f \approx k \frac{df}{dx}$. Thus,</p> <p>$$ E_1 \approx \left(1 + \frac{1}{n} \frac{d}{dx} \right)^n $$</p> <p>for large $n$. In fact, both of the following equalities turn out to be true:</p> <p>$$ E_1 = \lim_{n \to \infty} \left(1 + \frac{1}{n} \frac{d}{dx} \right)^n = e^{d/dx}$$</p>
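For polynomials the Taylor series terminates, so the identity $e^{d/dx}f(x)=f(x+1)$ can be checked exactly; a small sketch of mine:

```python
from math import factorial

def deriv(coeffs):
    # derivative of a polynomial given lowest-degree-first coefficients
    return [i * c for i, c in enumerate(coeffs)][1:]

def poly_eval(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def shift_by_exp(coeffs, x):
    # e^{d/dx} f at x: sum of f^{(k)}(x) / k!, which terminates for polynomials
    total, d, k = 0, list(coeffs), 0
    while d:
        total += poly_eval(d, x) / factorial(k)
        d = deriv(d)
        k += 1
    return total
```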
3,460,426
<p>I tried to take the <span class="math-container">$Log$</span> of <span class="math-container">$\prod _{m\ge 1} \frac{1+\exp(i2\pi \cdot3^{-m})}{2} = \prod _{m\ge 1} Z_m$</span>, which gives </p> <p><span class="math-container">$$Log \prod_{m\ge 1} Z_m = \sum_{m \ge 1} Log (Z_m) = \sum_{m \ge 1} \ln |Z_m| + i \sum_{m \ge 1} \Theta_m,$$</span></p> <p>where <span class="math-container">$\Theta_m$</span> is the principal argument of <span class="math-container">$Z_m$</span>.</p> <p><span class="math-container">$|Z_m| = \left[\frac{1}{2} \left(1 + \cos(\frac{2\pi}{3^m}\right)\right]^{1/2}$</span> has the range <span class="math-container">$[0,1]$</span>, so <span class="math-container">$\ln |Z_m| \le 0$</span>. And, since there are infinitely many <span class="math-container">$m$</span>'s such that <span class="math-container">$\ln|Z_m| \not = 0$</span>, <span class="math-container">$\sum_{m \ge 1} \ln |Z_m| \to -\infty$</span>.</p> <p>Then, <span class="math-container">$$\exp\left({Log \prod_{m\ge 1} Z_m}\right) = \exp\left(\sum_{m \ge 1} \ln |Z_m| \right)\exp\left(i \sum_{m \ge 1} \Theta_m\right) = 0.$$</span></p> <p>I want to show that <span class="math-container">$\prod _{m\ge 1} \frac{1+\exp(i2\pi \cdot3^{-m})}{2} $</span> is non-zero. What is wrong in the above reasoning?</p>
Henry
6,460
<ul> <li>If there are <span class="math-container">$n$</span> families then there are <span class="math-container">$1.8 n$</span> children</li> <li>If there are <span class="math-container">$n_k$</span> families with <span class="math-container">$k$</span> children then <span class="math-container">$\sum_k n_k =n$</span> and <span class="math-container">$\sum_k k \,n_k =1.8n$</span> and <span class="math-container">$\sum_k k^2 \,n_k =0.36 n +1.8^2n = 3.6 n$</span></li> <li>A child in a size <span class="math-container">$k$</span>-child family has <span class="math-container">$k-1$</span> siblings so there are <span class="math-container">$\sum_k k(k-1) \,n_k = 3.6n-1.8n = 1.8n$</span> siblings to count (including multiple cases when there are <span class="math-container">$3$</span> or more children in a family)</li> <li>So the average number of siblings per child is <span class="math-container">$\frac{1.8n}{1.8n}=1$</span> </li> </ul> <p>In general the average number of siblings per child is <span class="math-container">$\mu-1+\frac{\sigma^2}{\mu}$</span>. If the children were Poisson distributed, this would be <span class="math-container">$\mu$</span>. </p>
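The closed form $\mu-1+\frac{\sigma^2}{\mu}$ can be sanity-checked against a direct count on arbitrary family-size lists (a sketch of mine; it assumes at least one child in total):

```python
def avg_siblings(family_sizes):
    # direct count: a child in a k-child family has k - 1 siblings
    children = sum(family_sizes)
    sibling_count = sum(k * (k - 1) for k in family_sizes)
    return sibling_count / children

def formula(family_sizes):
    # mu - 1 + sigma^2 / mu, with mu and sigma^2 taken over families
    n = len(family_sizes)
    mu = sum(family_sizes) / n
    var = sum((k - mu) ** 2 for k in family_sizes) / n
    return mu - 1 + var / mu
```

With $\mu=1.8$ and $\sigma=0.6$ as above, the formula gives $0.8+0.36/1.8=1$, matching the answer.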
1,456,444
<p>How can I go about solving this Pigeonhole Principle problem? </p> <p>So I think the possible pairs would be: $[3+12], [4+11], [5+10], [6+9], [7+8]$</p> <p>I am trying to put this in words...</p>
Jacob Westman
273,234
<p>If you pick five numbers from the set, you could pick exactly one from every pair you listed (and all numbers in the set are listed in exactly one pair). If you then pick another number then that must be from one of the five pairs you already have a number from. So you are guaranteed to end up with a pair that sums to fifteen from the pigeonhole principle.</p>
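Assuming the underlying set is $\{3,\dots,12\}$ (inferred from the pairs listed in the question), the claim can be verified exhaustively:

```python
from itertools import combinations

NUMBERS = range(3, 13)  # the set {3, ..., 12}
PAIRS = [(a, b) for a, b in combinations(NUMBERS, 2) if a + b == 15]

def has_pair_summing_to_15(subset):
    s = set(subset)
    return any(a in s and b in s for a, b in PAIRS)
```

The five-element set $\{3,4,5,6,7\}$ is the kind of counterexample showing six picks are really needed.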
2,855,339
<p>What would be the complement of...</p> <p>$\{$x:x is a natural number divisible by 3 and 5$\}$</p> <p>I checked it's solution and it kind of stumped me...</p> <p>$\{$x:x is a positive integer which is not divisible by 3 <em>or</em> not divisible by 5$\}$</p> <p>Why the word <em>or</em> has been used in the solution? Why not <em>and</em>?</p>
Oldboy
401,277
<p>(Many thanks to <strong>Empy2</strong> and <strong>gandalf61</strong> for spotting a mistake in the first version! Hopefully they won't find another one :)</p> <p><strong>Not an answer</strong>, just a ballpark estimate:</p> <pre><code>In[141]:= hourAngle[h_, m_, s_] := 2*Pi*(h*3600 + m*60 + s)/43200; minuteAngle[h_, m_, s_] := 2*Pi*(m*60 + s)/3600; secondAngle[h_, m_, s_] := 2*Pi*(s)/60; sameSemiCircle[h_, m_, s_] := Module[ {a1, a2, a3, a1s, a2s, a3s, a1norm, a2norm, a3norm}, a1 = hourAngle[h, m, s]; a2 = minuteAngle[h, m, s]; a3 = secondAngle[h, m, s]; {a1s, a2s, a3s} = Sort[{a1, a2, a3}]; a1norm = 0; a2norm = a2s - a1s; a3norm = a3s - a1s; If[a2norm &gt;= Pi || Not[Pi &lt; a3norm &lt; (a2norm + Pi)], True, False] ]; In[145]:= count = 0; For[i = 0, i &lt;= 11, i++, For[j = 0, j &lt;= 59, j++, For[k = 0, k &lt;= 59, k++, If[sameSemiCircle[i, j, k], count++]; ]; ]; ]; count Out[147]= 32406 </code></pre> <p>Basically, I have checked all times from 00:00:00 to 11:59:59 in one-second steps and checked if the clock hands are in the same semi-circle. I know this is an approximation because my time is discrete, but it should give a quite good estimate. </p> <p>It turns out the hands are in the same semi-circle for 32406 seconds out of 43200, or approx. <strong>75.01%</strong> of the time. This looks suspiciously close to <strong>3/4</strong> and it could very well be the final answer.</p> <p>A word about the script: for any given time it's easy to calculate the angles between noon and the clock hands. I have sorted those angles and normalized them by subtracting the smallest one from the remaining two. Normalized angles are, say, $a_1=0\le a_2 \le a_3 \lt 2\pi$. It's like having one hand fixed at noon position. 
</p> <p><a href="https://i.stack.imgur.com/NO0qh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NO0qh.jpg" alt="enter image description here"></a></p> <p>You have two possible situations: </p> <ol> <li>If $a_2&lt;\pi$, the third angle must be outside of the shaded zone if we want all three hands to be in the same semicircle</li> <li>If $a_2\ge\pi$, all three hands are guaranteed to be in the same semicircle, because $a_3\ge a_2$</li> </ol> <p>That is where the key line of code comes from:</p> <pre><code>If[a2norm &gt;= Pi || Not[Pi &lt; a3norm &lt; (a2norm + Pi)], True, False] </code></pre> <p>A Monte Carlo method with 2 billion randomly chosen (not discrete) times gives <strong>75.003%</strong> up to three decimal places.</p>
1,059,427
<p>What is a good method to count the number of ways to distribute $n=30$ distinct books to $m=6$ students so that each student receives at most $r=7$ books?</p> <p>My observation is: If student $S_i$ receives $n_i$ books, the number of ways is: $\binom{n}{n_1,n_2,\cdots,n_m}$.</p> <p>So the answer is the coefficient of $x^n$ in $n!(1+x+\frac{x^2}{2!}+\cdots+\frac{x^r}{r!})^m$.</p> <p>For this case it means computing the coefficient of $x^{30}$ in $30!(1+x+\frac{x^2}{2!}+\cdots+\frac{x^7}{7!})^6$.</p> <p>However, it's quite annoying to compute this coefficient without the exponential function. Also, if we change $(...)$ into $e^x-(\frac{x^8}{8!}+\frac{x^9}{9!}+\cdots)$, how can we handle the $(\frac{x^8}{8!}+\frac{x^9}{9!}+\cdots)$ term?</p> <p>Is there any good way to handle this term for easier calculation?</p>
hardmath
3,111
<p>Since the Question does not state that each student must receive at least one book, I have included below in the 46 ways (that six nonnegative integers sum to 30) those with parts equal zero, limiting however summands not to exceed 7:</p> <p>$$ 30 = s_1 + s_2 + s_3 + s_4 + s_5 + s_6 $$</p> <p>such that $ 7 \ge s_1 \ge s_2 \ge s_3 \ge s_4 \ge s_5 \ge s_6 \ge 0 $. These solutions were generated by a short Prolog "program" (backtracking predicate):</p> <pre><code>/* genPartitionW(SumTotal,NumberOfParts,MaxPartSize,ListOfParts) */ genPartitionW(N,1,M,[N]) :- !, N &gt;= 0, N =&lt; M. genPartitionW(N,K,M,[P|Ps]) :- Max is min(N,M), for(P,Max,1,-1), ( N &gt; P*K -&gt; fail ; ( Km is K-1, Nm is N-P ) ), genPartitionW(Nm,Km,P,Ps). </code></pre> <p>For each of these I created a row in a spreadsheet, computing in one cell the multinomial coefficient:</p> <p>$$ \frac{30!}{s_1!\cdot s_2!\cdot s_3!\cdot s_4!\cdot s_5!\cdot s_6!} $$</p> <p>and in another cell the multiplier that accounts for how many <em>weak compositions</em> correspond to that summation, which is $6!$ divided by the product of factorials of frequencies of parts (number of books allocated to one student).</p> <p>For example, the first summation in our list is $30=7+7+7+7+2+0$. The multinomial computation gives:</p> <p>$$ \frac{30!}{7!\cdot 7!\cdot 7!\cdot 7!\cdot 2!\cdot 0!} = 205545481187904000 $$</p> <p>and the orbit of weak compositions for that summation (arrangements of parts) has size:</p> <p>$$ \frac{6!}{4!\cdot 1!\cdot 1!} = 30 $$</p> <p>The product of these is $205545481187904000\cdot 30 = 6166364435637120000$.</p> <p>Due to the limited numerical precision of LibreOffice Calc (around 15 digits), I went back to programming (Amzi!
Prolog supports arbitrary precision arithmetic) and got a grand total of 88,115,255,674,831,753,917,120 ways, or approximately 8.8115E+22.</p> <p><strong>Ways to express 30 as sums (up to rearrangement) of 6 integers between 0 and 7</strong></p> <pre><code>7, 7, 7, 7, 2, 0 7, 7, 7, 7, 1, 1 7, 7, 7, 6, 3, 0 7, 7, 7, 6, 2, 1 7, 7, 7, 5, 4, 0 7, 7, 7, 5, 3, 1 7, 7, 7, 5, 2, 2 7, 7, 7, 4, 4, 1 7, 7, 7, 4, 3, 2 7, 7, 7, 3, 3, 3 7, 7, 6, 6, 4, 0 7, 7, 6, 6, 3, 1 7, 7, 6, 6, 2, 2 7, 7, 6, 5, 5, 0 7, 7, 6, 5, 4, 1 7, 7, 6, 5, 3, 2 7, 7, 6, 4, 4, 2 7, 7, 6, 4, 3, 3 7, 7, 5, 5, 5, 1 7, 7, 5, 5, 4, 2 7, 7, 5, 5, 3, 3 7, 7, 5, 4, 4, 3 7, 7, 4, 4, 4, 4 7, 6, 6, 6, 5, 0 7, 6, 6, 6, 4, 1 7, 6, 6, 6, 3, 2 7, 6, 6, 5, 5, 1 7, 6, 6, 5, 4, 2 7, 6, 6, 5, 3, 3 7, 6, 6, 4, 4, 3 7, 6, 5, 5, 5, 2 7, 6, 5, 5, 4, 3 7, 6, 5, 4, 4, 4 7, 5, 5, 5, 5, 3 7, 5, 5, 5, 4, 4 6, 6, 6, 6, 6, 0 6, 6, 6, 6, 5, 1 6, 6, 6, 6, 4, 2 6, 6, 6, 6, 3, 3 6, 6, 6, 5, 5, 2 6, 6, 6, 5, 4, 3 6, 6, 6, 4, 4, 4 6, 6, 5, 5, 5, 3 6, 6, 5, 5, 4, 4 6, 5, 5, 5, 5, 4 5, 5, 5, 5, 5, 5 </code></pre>
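As an independent cross-check of the count (my own dynamic program, not the spreadsheet approach above): process the students one at a time, tracking how many distinct books remain unassigned; a student taking $k$ of the remaining $b$ books contributes a factor $\binom{b}{k}$.

```python
from math import comb

def distributions(books, students, cap):
    # dp maps "books still unassigned" -> number of ways so far
    dp = {books: 1}
    for _ in range(students):
        nxt = {}
        for b, ways in dp.items():
            for k in range(min(cap, b) + 1):
                nxt[b - k] = nxt.get(b - k, 0) + ways * comb(b, k)
        dp = nxt
    return dp.get(0, 0)  # every book must end up assigned
```

Removing the cap must reproduce the free count $6^{30}$ (each book goes to any of the 6 students independently), which is what the second assertion checks.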
849,093
<p>After being introduced to non-elementary functions through an attempt to evaluate $\int x \tan (x)\,dx$, an interesting question occurred to me: Can the non-elementary functions be decomposed into elementary ones? For instance, the logarithm, an elementary function, can be decomposed into multiplication (e.g. $\ln x=y$ says that $x$ is the product of $y$ factors of $e$), another elementary operation. So, is such a decomposition possible, to transform a complicated non-elementary function into an elementary one that can be easily evaluated?</p>
Gina
102,040
<p>No. No matter which logical language you try to use and how you interpret it, the number of functions that can be defined using that language is at most the number of finite strings that can be formed using the symbols of that language. Since the symbol set is finite, the number of possible strings is countable. The number of functions, however, is far from countable: it is actually $c^{c}$ for functions $\mathbb{R}\rightarrow\mathbb{R}$.</p> <p>(In other words, even if you have far more than just the elementary functions, there are functions you simply cannot describe at all.)</p> <p>EDIT: thanks for the comments. In light of these, I will add a few more variants:</p> <p>-If we allow the symbol set to have cardinality $c$ (say, we allow not just elementary functions, but any continuous function), and restrict to only measurable functions, then this is still impossible by the counting argument. Unfortunately, restricting to measurable functions does not decrease the cardinality, and increasing the cardinality of the symbol set only raises the cardinality of the set of possible sentences to $c$.</p> <p>-If we allow the symbol set to have cardinality $\aleph_{0}$, and restrict to continuous functions, then it is still impossible by the counting argument. The continuous functions have cardinality $c$, but the number of possible strings is still $\aleph_{0}$.</p>
819,830
<p>Is the idea of a proof by contradiction to prove that the desired conclusion is both true and false or can it be any derived statement that is true and false (not necessarily relating to the conclusion)? Or can it simply be an absurdity that you know is false but through your derivation comes out true?</p>
Vladhagen
79,934
<p>Evaluating your integral:</p> <p>$$2 \pi \int_0^4 x\sqrt{x} dx = 2 \pi \int_0^4 x^{3/2} dx = 2\pi \frac{2}{5}x^{5/2}\bigg|_0^4 = \frac{128\pi}{5}$$</p> <p>Hopefully this is what you were getting.</p> <p>BUT. If we rotate around the $x$-axis (not the $y$-axis) then we will get $8\pi$ as follows:</p> <p>$$2\pi\int_0^2 y\cdot y^2 dy = 2\pi\int_0^2 y^3 dy = 2 \pi \frac{y^4}{4}\bigg|_0^2 = 8\pi$$</p> <p>So the question now remains as to which axis you really want. </p>
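Both computations can be confirmed with a crude midpoint rule (a check I added; grid size and tolerances are arbitrary):

```python
import math

def midpoint_integral(g, a, b, n=100000):
    # midpoint Riemann sum for the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# shell method around the y-axis, as in the first integral
vol_y = 2 * math.pi * midpoint_integral(lambda x: x * math.sqrt(x), 0.0, 4.0)
# shell method around the x-axis, as in the second integral
vol_x = 2 * math.pi * midpoint_integral(lambda y: y * y ** 2, 0.0, 2.0)
```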
1,303,772
<blockquote> <p>Show that $$-2 \le \cos \theta ~ (\sin \theta +\sqrt{\sin ^2 \theta +3})\le 2$$ for all value of $\theta$.</p> </blockquote> <p>Trial: I know that $0\le \sin^2 \theta \le1 $. So, I have $\sqrt3 \le \sqrt{\sin ^2 \theta +3} \le 2 $. After that I am unable to solve the problem. </p>
math110
58,742
<p>Use this well known inequality $$-\dfrac{a^2+b^2}{2}\le ab\le\dfrac{a^2+b^2}{2},a,b\in R$$ so $$-\dfrac{\cos^2{\theta}+4\sin^2{\theta}}{2}\le\cos{\theta}\cdot 2\sin{\theta}\le\dfrac{\cos^2{\theta}+4\sin^2{\theta}}{2}\tag{1}$$</p> <p>$$-\dfrac{4\cos^2{\theta}+\sin^2{\theta}+3}{2}\le2\cos{\theta}\cdot\sqrt{\sin^2{\theta}+3}\le \dfrac{4\cos^2{\theta}+\sin^2{\theta}+3}{2}\tag{2}$$ then (1)+(2) $$-4\le2\cos{\theta}(\sin{\theta}+\sqrt{\sin^2{\theta}+3})\le 4$$ so $$-2\le\cos{\theta}(\sin{\theta}+\sqrt{\sin^2{\theta}+3})\le 2$$</p>
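A grid check of the bound, not a proof (the grid size is arbitrary; the exact extremum at $\sin\theta=1/\sqrt5$, where the value is exactly $2$, is my addition):

```python
import math

def g(theta):
    s = math.sin(theta)
    return math.cos(theta) * (s + math.sqrt(s * s + 3))

# scan a fine grid over one full period
values = [g(2 * math.pi * i / 100000) for i in range(100000)]
```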
1,303,772
<blockquote> <p>Show that $$-2 \le \cos \theta ~ (\sin \theta +\sqrt{\sin ^2 \theta +3})\le 2$$ for all value of $\theta$.</p> </blockquote> <p>Trial: I know that $0\le \sin^2 \theta \le1 $. So, I have $\sqrt3 \le \sqrt{\sin ^2 \theta +3} \le 2 $. After that I am unable to solve the problem. </p>
juantheron
14,311
<p><strong>My Solution::</strong> Given $$f(\theta) = \cos \theta \left(\sin \theta + \sqrt{\sin^2 \theta + 3}\right)$$</p> <p>Now let $$y=\sin \theta \cdot \cos \theta +\cos \theta \cdot \sqrt{\sin^2 \theta + 3}$$</p> <p>Now using the <strong>Cauchy-Schwarz</strong> inequality, we get</p> <p>$$\left(\sin^2 \theta +\cos^2 \theta \right)\cdot \left\{\cos^2 \theta + \left(\sqrt{\sin^2 \theta + 3}\right)^2\right\}\geq \left\{\sin \theta \cdot \cos \theta +\cos \theta \cdot \sqrt{\sin^2 \theta + 3}\right\}^2$$</p> <p>So we get $$y^2 \leq \left(\sin^2 \theta +\cos^2 \theta \right)\cdot \left\{\cos^2 \theta + \sin^2 \theta + 3\right\}=2^2$$</p> <p>So we get $$-2 \leq y\leq 2\Rightarrow y\in \left[-2,2\right]$$</p>
4,248,766
<p>How many functions <span class="math-container">$f: \{1,...,n_1\} \to \{1,...,n_2\}$</span> are there such that if <span class="math-container">$f(k)=f(l)$</span> for some <span class="math-container">$k,l \in \{1,...,n_1\}$</span>, then <span class="math-container">$k=l$</span>?</p>
Youem
468,504
<p>You are looking for the number of injective functions from <span class="math-container">$\{1,\ldots,n_1\}$</span> to <span class="math-container">$\{1,\ldots,n_2\}$</span>. This is <span class="math-container">$$n_1!\binom{n_2}{n_1}$$</span> if <span class="math-container">$n_1\le n_2$</span> and <span class="math-container">$0$</span> otherwise.</p>
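The formula can be checked against brute-force enumeration on small cases (an aside; note that Python's `math.comb` returns 0 when the lower index exceeds the upper, which handles the $n_1 > n_2$ case automatically):

```python
from itertools import product
from math import comb, factorial

def count_injective(n1, n2):
    # Enumerate every function {1,...,n1} -> {1,...,n2}; keep injective ones.
    return sum(1 for f in product(range(1, n2 + 1), repeat=n1)
               if len(set(f)) == n1)

for n1, n2 in [(2, 4), (3, 3), (3, 5), (4, 3)]:
    assert count_injective(n1, n2) == factorial(n1) * comb(n2, n1)
print("formula matches brute force")
```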
2,500,961
<p>I've been able to find formulas all over the place for the sum and product of roots, but I haven't found anything that explains the significance of what they mean or how to interpret them to further gain understanding of the polynomial under evaluation. Is there any physical meaning? Do the values have any significance?</p> <p>For example, I have a $4$<sup>th</sup> order complex polynomial in $ \mathbb{Z} $, for which I find the real part of the $4$ roots add up to $\frac{\pi}{2}$. I'm wondering what the significance of the sum being $\frac{\pi}{2}$ is? To me it's a "buzz" number.</p>
Ken Wei
243,183
<p>The roots of a polynomial in $x$ are the values you can plug in for $x$ such that the polynomial takes the value $0$. So the first part of your last paragraph doesn't make much sense, since roots are special values of $x$.</p> <p>The sum and product of roots of a quadratic polynomial $ax^2 + bx + c$ are $-b/a$ and $c/a$ respectively. For example, if you were told that the roots of a quadratic polynomial were $x=\frac{1}{2}$ and $x=\frac{1}{4}$, then the sum is $\frac{3}{4}$ and the product is $\frac{1}{8}$, so you can immediately say that one such polynomial is $8x^2 - 6x + 1$ without having to expand $\left(x-\frac{1}{2}\right)\left(x-\frac{1}{4}\right)$.</p> <p>(Note that the roots of a polynomial don't change if you multiply the entire expression by a nonzero constant, which is what I did — by $8$, to clear the fractions.)</p> <p>The more general formula for higher-order polynomials is given by <a href="https://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow noreferrer">Vieta's formulas</a>.</p>
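A quick check of the worked example above (the numbers are the ones from this answer):

```python
# Polynomial 8x^2 - 6x + 1 with roots 1/2 and 1/4.
a, b, c = 8, -6, 1
r1, r2 = 0.5, 0.25

assert abs((r1 + r2) - (-b / a)) < 1e-12  # sum of roots   = -b/a = 3/4
assert abs((r1 * r2) - (c / a)) < 1e-12   # product        =  c/a = 1/8
for r in (r1, r2):                        # and they really are roots
    assert abs(a * r * r + b * r + c) < 1e-12
print("Vieta's formulas check out")
```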
4,548,329
<p>Find the first derivative of <span class="math-container">$$y=\sqrt[3]{\dfrac{1-x^3}{1+x^3}}$$</span></p> <p>The given answer is <span class="math-container">$$\dfrac{2x^2}{x^6-1}\sqrt[3]{\dfrac{1-x^3}{1+x^3}}$$</span> It is nice and neat, but I am really struggling to write the result exactly in this form. We have <span class="math-container">$$y'=\dfrac13\left(\dfrac{1-x^3}{1+x^3}\right)^{-\frac23}\left(\dfrac{1-x^3}{1+x^3}\right)'$$</span> The derivative of the &quot;inner&quot; function (the last term in <span class="math-container">$y'$</span>) is <span class="math-container">$$\dfrac{-3x^2(1+x^3)-3x^2(1-x^3)}{\left(1+x^3\right)^2}=\dfrac{-6x^2}{(1+x^3)^2},$$</span> so for <span class="math-container">$y'$</span> <span class="math-container">$$y'=-\dfrac13\dfrac{6x^2}{(1+x^3)^2}\left(\dfrac{1+x^3}{1-x^3}\right)^\frac23=-\dfrac{2x^2}{(1+x^3)^2}\left(\dfrac{1+x^3}{1-x^3}\right)^\frac23$$</span> Can we actually leave the answer this way?</p>
aarbee
87,430
<p>You got <span class="math-container">$$y'=-\dfrac{2x^2}{(1+x^3)^2}\left(\dfrac{1+x^3}{1-x^3}\right)^\frac23$$</span></p> <p>You can leave it there if you want. If you want to match the given answer, multiply and divide by <span class="math-container">$\left(\dfrac{1+x^3}{1-x^3}\right)^\frac13$</span>, thus,</p> <p><span class="math-container">$$y'=-\dfrac{2x^2}{(1+x^3)^2}\left(\dfrac{1+x^3}{1-x^3}\right)\left(\dfrac{1-x^3}{1+x^3}\right)^\frac13\\=\dfrac{2x^2}{x^6-1}\sqrt[3]{\dfrac{1-x^3}{1+x^3}}$$</span></p> <p>If you want to reach the given answer directly, you can take <span class="math-container">$\log$</span> on the given expression, thus,</p> <p><span class="math-container">$$\log y=\frac13\left(\log(1-x^3)-\log(1+x^3)\right)$$</span></p> <p>Taking derivative,</p> <p><span class="math-container">$$\frac{y'}{y}=\frac13\left(\frac{-3x^2}{1-x^3}-\frac{3x^2}{1+x^3}\right)\\=x^2\left(\frac{-1-x^3-1+x^3}{1-x^6}\right)$$</span></p> <p>Therefore, <span class="math-container">$$y'=\dfrac{2x^2}{x^6-1}\sqrt[3]{\dfrac{1-x^3}{1+x^3}}$$</span></p>
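A finite-difference spot check of the closed form on $(0,1)$, where both cube-root bases are positive (an aside, not part of the derivation):

```python
import math

def y(x):
    return ((1 - x**3) / (1 + x**3)) ** (1 / 3)

def y_prime(x):
    # The closed form obtained above.
    return 2 * x**2 / (x**6 - 1) * ((1 - x**3) / (1 + x**3)) ** (1 / 3)

for x in (0.1, 0.3, 0.5, 0.7):
    h = 1e-6
    numeric = (y(x + h) - y(x - h)) / (2 * h)  # central difference
    assert abs(numeric - y_prime(x)) < 1e-5
print("closed-form derivative agrees with central differences")
```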
253,966
<p>Just took my final exam and I wanted to see if I answered this correctly:</p> <p>If $A$ is a Abelian group generated by $\left\{x,y,z\right\}$ and $\left\{x,y,z\right\}$ have the following relations:</p> <p>$7x +5y +2z=0; \;\;\;\; 3x +3y =0; \;\;\;\; 13x +11y +2z=0$</p> <p>does it follow that $A \cong Z_{3} \times Z_{3} \times Z_{6}$ ?</p> <p>I know if we set $x=(1,0,2)$, $y=(0,1,0)$ and $z=(2,1,5)$ then this is consistent with the relations and with $A \cong Z_{3} \times Z_{3} \times Z_{6}$ </p>
Amr
29,267
<p>Since $7x+5y+2z=0=3x+3y$, therefore $7x+5y+2z+2(3x+3y)=13x+11y+2z=0$, hence the last relation is not important. We also note that $7x+5y+2z-2(3x+3y)=x-y+2z$.</p> <p>Consider the group $G$={$ix+jy+kz|i,j,k\in Z$} (with addition defined as: ($i_1x+j_1y+k_1z)+(i_2x+j_2y+k_2z)=(i_1+i_2)x+(j_1+j_2)y+(k_1+k_2)z$).</p> <p>Now let $N$ be the smallest subgroup of $G$ that contains $x-y+2z,3x+3y$. Thus, $N$={$i(x-y+2z)+j(3x+3y)|i,j\in Z$}={$(3j+i)x+(3j-i)y+2iz|i,j\in Z$}.</p> <p>Thus, $A=G/N$. Now observe that $z$ has infinite order in $A$. Proof: Let $|z|=n&gt;0$, therefore $nz=(3j+i)x+(3j-i)y+2iz$ for some $i,j$. Comparing coefficients, this implies that $3j+i=3j-i=0$ and $n=2i$, hence $i=j=0$. Thus, $n=0$ (contradiction). Thus $A$ can't be isomorphic to the direct sum in your question.</p>
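An independent sanity check (an aside, not part of the argument above): for an abelian group on three generators presented by three relations, the quotient is finite exactly when the square relation matrix has nonzero determinant, in which case its order is $|\det|$. Here the determinant is $0$, consistent with $z$ having infinite order, so $A$ cannot be the finite group $Z_3\times Z_3\times Z_6$ of order 54:

```python
# Rows = relations, columns = coefficients of x, y, z.
M = [[7, 5, 2],
     [3, 3, 0],
     [13, 11, 2]]

def det3(m):
    # Cofactor expansion along the first row.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))  # 0, so the quotient group is infinite
```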
253,966
<p>Just took my final exam and I wanted to see if I answered this correctly:</p> <p>If $A$ is a Abelian group generated by $\left\{x,y,z\right\}$ and $\left\{x,y,z\right\}$ have the following relations:</p> <p>$7x +5y +2z=0; \;\;\;\; 3x +3y =0; \;\;\;\; 13x +11y +2z=0$</p> <p>does it follow that $A \cong Z_{3} \times Z_{3} \times Z_{6}$ ?</p> <p>I know if we set $x=(1,0,2)$, $y=(0,1,0)$ and $z=(2,1,5)$ then this is consistent with the relations and with $A \cong Z_{3} \times Z_{3} \times Z_{6}$ </p>
Gerry Myerson
8,269
<p>A colleague of mine has written some <a href="http://web.science.mq.edu.au/~chris/groups/CHAP10%20Finitely-Generated%20Abelian%20Groups.pdf" rel="nofollow">notes</a> that we use in a course here. They should help you understand how to do this kind of question. </p>
402,802
<p>I have read that $$y=\lvert\sin x\rvert+ \lvert\cos x\rvert $$ is periodic with fundamental period $\frac{\pi}{2}$.</p> <p>But <a href="http://www.wolframalpha.com/input/?i=y%3D%7Csinx%7C%2B%7Ccosx%7C" rel="nofollow">Wolfram</a> says it is periodic with period $\pi$.</p> <p>Please tell what is correct.</p>
David Holden
79,543
<p>Square it and simplify: $y^2=1+|\sin 2x|$, and $|\sin 2x|$ has fundamental period $\frac{\pi}{2}$. Since $y&gt;0$, the function $y=\sqrt{1+|\sin 2x|}$ has the same fundamental period $\frac{\pi}{2}$.</p>
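A numerical confirmation on a sample grid (an aside): $\pi/2$ is a period of $|\sin x|+|\cos x|$, while some smaller candidates fail.

```python
import math

def f(x):
    return abs(math.sin(x)) + abs(math.cos(x))

# pi/2 is a period:
xs = [0.013 + 0.006 * k for k in range(1000)]
assert all(abs(f(x + math.pi / 2) - f(x)) < 1e-12 for x in xs)

# some smaller candidate periods fail:
for T in (math.pi / 4, math.pi / 3, 1.0):
    assert any(abs(f(x + T) - f(x)) > 1e-6 for x in (0.1, 0.2, 0.3))
print("pi/2 is a period; pi/4, pi/3 and 1.0 are not")
```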
397,347
<p>I'm trying to figure out how to evaluate the following: $$ J=\int_{0}^{\infty}\frac{x^3}{e^x-1}\ln(e^x - 1)\,dx $$ I'm tried considering $I(s) = \int_{0}^{\infty}\frac{x^3}{(e^x-1)^s}\,dx\implies J=-I'(1)$, but I couldn't figure out what $I(s)$ was. My other idea was contour integration, but I'm not sure how to deal with the logarithm. Mathematica says that $J\approx24.307$. </p> <p>I've asked a <a href="https://math.stackexchange.com/questions/339711/find-the-value-of-j-int-0-infty-fracx3ex-1-lnx-dx">similar question</a> and the answer involved $\zeta(s)$ so I suspect that this one will as well. </p>
Ron Gordon
53,268
<p>How about pulling factors of $e^{-x}$ from both the denominator and log terms? Then you end up with two separate integrals:</p> <p>$$\int_0^{\infty}dx \frac{x^4 \, e^{-x}}{1-e^{-x}} + \int_0^{\infty}dx \frac{x^3 \, e^{-x}}{1-e^{-x}} \log{(1-e^{-x})}$$</p> <p>In both cases, you Taylor expand the denominator in $e^{-x}$. For the first integral, this results in</p> <p>$$\sum_{k=0}^{\infty} \int_0^{\infty}dx\, x^4 \, e^{-(k+1) x} = 4! \sum_{k=0}^{\infty} \frac{1}{(k+1)^5} = 24 \, \zeta(5) $$</p> <p>For the second integral, you also need to Taylor expand the log term. This results in a double sum:</p> <p>$$\begin{align}\sum_{k=0}^{\infty} \int_0^{\infty}dx\, x^3 \, e^{-(k+1) x} \log{(1-e^{-x})} &amp;= -\sum_{k=1}^{\infty} \sum_{m=1}^{\infty} \frac{1}{m} \int_0^{\infty} dx \, x^3 e^{-(k+m) x}\\ &amp;= - 3! \sum_{m=1}^{\infty} \frac{1}{m} \sum_{k=1}^{\infty} \frac{1}{(k+m)^4}\\ &amp;= -\sum_{m=1}^{\infty} \frac{\psi^{(3)}(m+1)}{m} \end{align}$$</p> <p>where $\psi$ is a <a href="http://en.wikipedia.org/wiki/Polygamma_function">polygamma function</a>.</p>
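Both closed forms can be checked numerically with nothing but the standard library (an aside; the truncation limits below are ad hoc but comfortably sufficient for three decimal places):

```python
import math

# zeta(5) by direct summation (tail beyond N is about N**-4 / 4).
zeta5 = sum(1.0 / n**5 for n in range(1, 100_000))

# T[m] = sum_{j=m+1}^{J} 1/j**4, so psi'''(m+1) = 6 * T[m].
J = 20_000
T = [0.0] * (J + 1)
for j in range(J, 0, -1):
    T[j - 1] = T[j] + 1.0 / j**4

series_value = 24.0 * zeta5 - sum(6.0 * T[m] / m for m in range(1, 2000))

# Direct midpoint quadrature of the original integrand on (0, 40].
def g(x):
    e = math.expm1(x)  # e**x - 1
    return x**3 / e * math.log(e)

n, a, b = 200_000, 1e-6, 40.0
h = (b - a) / n
quad = sum(g(a + (i + 0.5) * h) for i in range(n)) * h

print(series_value, quad)  # both ≈ 24.307, matching the value in the question
```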
2,971,143
<p>Let me choose <span class="math-container">$n=1$</span> for my induction basis: <span class="math-container">$2 &gt; 1$</span>, true.</p> <p>Induction Step : <span class="math-container">$2^n &gt; n^2 \rightarrow 2^{n+1} &gt; (n+1)^2 $</span></p> <p><span class="math-container">$2^{n+1} &gt; (n+1)^2 \iff$</span></p> <p><span class="math-container">$2\cdot 2^n &gt; n^2 + 2n + 1 \iff$</span></p> <p><span class="math-container">$0 &gt; n^2 + 1 + 2n - 2\cdot 2^n \iff$</span></p> <p><span class="math-container">$0 &gt; n^2 -2^n + 1 + 2n - 2^n \iff$</span> IH: <span class="math-container">$0 &gt; n^2 - 2^n$</span></p> <p><span class="math-container">$0 &gt; 1 + 2n - 2^n &gt; n^2 - 2^n + 1 + 2n - 2^n \iff$</span></p> <p><span class="math-container">$2^n &gt; 1 + 2n &gt; n^2$</span>, which can be proved with induction for <span class="math-container">$n \geq 3$</span></p> <p><span class="math-container">$2^n &gt; n^2$</span>, true by assumption</p> <p>I have showed that, based from the induction basis, I can conclude the general statement. But like I have said in the headline the identity is not fulfilled for <span class="math-container">$n=2$</span>, so something must be wrong in the proof. </p>
mathematics2x2life
79,043
<p>You have an error. You had <span class="math-container">$n^2-2^n+1+2n-2^n&lt;0$</span>, and by the induction hypothesis you do know that <span class="math-container">$n^2-2^n&lt;0$</span>. But that does not mean that <span class="math-container">$1+2n-2^n&lt;0$</span>. It could be that <span class="math-container">$1+2n-2^n$</span> is positive, just not as positive as <span class="math-container">$n^2-2^n$</span> is negative, so that their sum is still <span class="math-container">$&lt;0$</span>. The two will only always balance out for <span class="math-container">$n \geq 5$</span>. But you assumed <span class="math-container">$n \geq 1$</span> in the proof. So go back and assume that <span class="math-container">$n \geq 5$</span>, making the base case you need to check <span class="math-container">$n=5$</span>, not <span class="math-container">$n=1$</span>. This then gives the expected result <span class="math-container">$2^n&gt;n^2$</span> for <span class="math-container">$n\geq 5$</span>.</p>
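A quick enumeration confirming exactly where the inequality holds and fails (an aside):

```python
def holds(n):
    return 2**n > n**2

# Among small n, it fails exactly at n = 2, 3, 4:
assert [n for n in range(1, 5) if not holds(n)] == [2, 3, 4]
# And it holds for every n >= 5 (spot-checked up to 1000):
assert all(holds(n) for n in range(5, 1001))
print("2^n > n^2 holds for n = 1 and all n >= 5; fails for n = 2, 3, 4")
```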
4,487,489
<p>I'm currently working on completing their first unit on calculus ab and I've encountered this roadblock. That's probably an exaggeration but I honestly can't figure out what they mean by &quot;for negative numbers&quot;. I did the math and got the right number (at least the right absolute value) but the missing negative sign cost me the question and fair enough but why is there a negative sign that's being added anyway?<span class="math-container">$$\sqrt{x^6}=(x^6)^{1/2}=x^{6\times\frac12}=x^3$$</span>I get that so why the negative?</p> <p><a href="https://i.stack.imgur.com/3YyPW.png" rel="nofollow noreferrer">for context here's their explanation and the problem itself</a></p>
emacs drives me nuts
746,312
<blockquote> <p>I honestly can't figure out what they mean by &quot;for negative numbers&quot;.</p> </blockquote> <p>It means that the equation holds for negative <span class="math-container">$x$</span>, that is if <span class="math-container">$x&lt;0$</span>. Reason is that the real square-root is non-negative by definition:</p> <p><span class="math-container">$$\sqrt{x^2} = |x|$$</span></p> <p>Now if <span class="math-container">$x &lt; 0$</span>, then <span class="math-container">$|x|= -x$</span> and thus <span class="math-container">$\sqrt{x^2} = -x= |x|$</span>. This still holds when we replace <span class="math-container">$x$</span> by <span class="math-container">$x^3$</span> due to <span class="math-container">$x&lt;0 \iff x^3&lt; 0$</span> and therefore:</p> <p><span class="math-container">$$\sqrt{x^6} = -x^3$$</span></p> <p>And BTW it also holds for <span class="math-container">$x=0$</span>.</p>
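A quick numeric illustration of $\sqrt{x^6}=|x|^3$ (an aside):

```python
import math

for x in (-3.0, -1.5, -0.2):        # negative x: sqrt(x^6) = -x^3 = |x|^3
    assert math.isclose(math.sqrt(x**6), -(x**3))
for x in (0.0, 0.7, 2.0):           # nonnegative x: sqrt(x^6) = x^3
    assert math.isclose(math.sqrt(x**6), x**3, abs_tol=1e-12)
print("sqrt(x**6) == abs(x)**3 for all real x")
```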
1,908,844
<p>The following example is taken from the book "Introduction to Probability Models" by Sheldon M. Ross (Chapter 5, example 5.4).</p> <blockquote> <p>The dollar amount of damage involved in an automobile accident is an exponential random variable with mean 1000. Of this, the insurance company only pays that amount exceeding (the deductible amount of) 400. Find the expected value and the standard deviation of the amount the insurance company pays per accident."</p> </blockquote> <p>In the solution, the author states that: </p> <blockquote> <p>By the lack of memory property of the exponential, it follows that if a damage amount exceeds 400, then the amount by which it exceeds it is exponential with mean 1000.</p> </blockquote> <p>After reading several implications of this property, I map this statement to something like: if you have been waiting for 400s without seeing the bus, then the expected time until the next bus is always 1000s. (Please correct me if I'm wrong.)</p> <p>Assuming I've understood that correctly, what confuses me is this next equation:</p> <p>$$ E[Y|I=1] = 1000 $$</p> <p>where:</p> <p>$X$: the dollar amount of damage resulting from an accident</p> <p>$Y=(X-400)^+$: the amount paid by the insurance company (where $a^+$ is $a$ if $a&gt;0$ and 0 if $a&lt;=0$).</p> <p>$I = 1*(X &gt; 400) + 0*(X&lt;=400)$</p> <p>I don't get why that equality holds given the memoryless property. Naively, accounting for the subtraction of 400, I would expect something like $E[Y|I=1] = 1000 - 400 = 600$ (or some other value). Can anyone give me an explanation about this?</p> <p>In case you are not clear about my description, please refer to this <a href="https://books.google.ca/books?id=A3YpAgAAQBAJ&amp;pg=PA281&amp;lpg=PA281&amp;dq=probability%20model%20dollar%20amount%20of%20damage%20exponential&amp;source=bl&amp;ots=CaFTvM6Rtw&amp;sig=t0nrAFc-6hX0ByxD3bAD-E3M7EM&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiA4oaN4enOAhUGfxoKHRZHDEYQ6AEIHDAA#v=onepage&amp;q=probability%20model%20dollar%20amount%20of%20damage%20exponential&amp;f=false" rel="nofollow">link</a> with <strong>example 5.4</strong>.</p>
syusim
138,951
<p>The answer is ${n+m \choose k}$ and the best possible intuition is that both this and the big summation are counting all possible ways of choosing $k$ things from two sets of things, one of size $m$ and one of size $n$.</p> <p>I find it easier to do combinatorial identities by thinking about what the formulas might be counting rather than just doing algebra. This heuristic is really helpful for similar problems.</p> <p>ALSO: If you wanted to follow the hint that person gave you, you can use the binomial theorem to determine what the coefficient of $x^k$ in $(x+1)^n(x+1)^m = (x+1)^{n+m}$ is, and you can also calculate it by multiplying your expansions of $(x+1)^n$ and $(x+1)^m$. That's actually a really cool way to do it!</p>
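The identity behind this — Vandermonde's identity, $\sum_j \binom{m}{j}\binom{n}{k-j}=\binom{n+m}{k}$ — can be verified exhaustively for small parameters (an aside):

```python
from math import comb

# math.comb(n, k) returns 0 when k > n, so out-of-range terms vanish.
for m in range(8):
    for n in range(8):
        for k in range(m + n + 1):
            lhs = sum(comb(m, j) * comb(n, k - j) for j in range(k + 1))
            assert lhs == comb(m + n, k)
print("Vandermonde's identity verified for all m, n < 8")
```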
1,466,198
<p>I was solving some mathematical questions and have come across a situation where I need to divide 3900/139. Here is my question:</p> <p>a. Can I round 139 up to 140 for ease of division?</p> <p>If so, how will I know what percentage of error I am introducing? How can I ensure that I am changing the number only slightly, so that the result is not tremendously affected?</p>
PTDS
277,299
<p>In general, if y = c/x (c is a constant) and you make a small change in x, say by h (> 0), then the following happens:</p> <ol> <li>The exact relative error is (-h/(x+h))</li> <li>This is approximately equal to (-h/x) [If you take the log and then the differential, that will be evident].</li> </ol>
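Concretely, for 3900/139 approximated by 3900/140 (so the denominator is x = 139 and h = 1), the relative error introduced is -h/(x+h) = -1/140 ≈ -0.71%, well approximated by -h/x:

```python
x, h, c = 139, 1, 3900

rel = (c / (x + h) - c / x) / (c / x)     # relative error from using x + h
print(c / x, c / (x + h), rel)            # 28.057..., 27.857..., -0.00714...

assert abs(rel - (-h / (x + h))) < 1e-12  # exact:       -1/140
assert abs(rel - (-h / x)) < 1e-4         # approximate: -1/139
```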
4,220,972
<p>I'm studying for the GRE and a practice test problem is, "For all real numbers x and y, if x#y=x(x-y), then x#(x#y) =?"</p> <p>I do not know what the # sign means. This is apparently some algebraic operation, but I cannot find any explanation of it in several searches. I'm an older student and haven't had basic algebra in over 45 years, and this was certainly not in my recent linear algebra class.</p>
RobertTheTutor
883,326
<p>P(any random variable being within k standard deviations of its mean) <span class="math-container">$\geq 1-1/k^2$</span>.</p> <p>The variance is <span class="math-container">$1/4$</span>, the standard deviation is <span class="math-container">$1/2$</span> and the error is <span class="math-container">$1$</span>, which makes it <span class="math-container">$2$</span> standard deviations out. So <span class="math-container">$k=2$</span> in the formula and you get <span class="math-container">$1-1/2^2 = 3/4$</span>.</p>
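Chebyshev's bound holds for any distribution with finite variance. As an illustration with an arbitrary choice of distribution (not the one in the question): for an exponential with mean and standard deviation 1, the true probability of being within 2 standard deviations is $1-e^{-3}\approx 0.95$, comfortably above the 3/4 bound.

```python
import random

random.seed(0)
N, k = 200_000, 2
mu = sigma = 1.0  # exponential(1) has mean 1 and standard deviation 1
inside = sum(abs(random.expovariate(1.0) - mu) < k * sigma for _ in range(N))
print(inside / N)  # ≈ 0.95, above the Chebyshev bound 1 - 1/k**2 = 0.75
assert inside / N >= 1 - 1 / k**2
```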
3,637,283
<p>How would I find the fourth roots of <span class="math-container">$-81i$</span> in the complex numbers? </p> <p>Here is what I currently have: </p> <p><span class="math-container">$w = -81i$</span> </p> <p><span class="math-container">$r = 9$</span> </p> <p><span class="math-container">$\theta = \arctan (-81)$</span>? </p> <p>Although I am not sure it's correct or if I am on the right track. May I have some help please? </p>
Physical Mathematics
592,278
<p><span class="math-container">$$w:= -81i = 81 e^{-i\pi/2},81 e^{3i\pi/2}, 81e^{7i\pi/2},81 e^{11i\pi/2},$$</span> So the 4th roots of <span class="math-container">$w$</span> are: <span class="math-container">$$\sqrt[4]{81} e^{-i\pi/8}, \sqrt[4]{81}e^{3i\pi/8}, \sqrt[4]{81} e^{7i\pi/8}, \sqrt[4]{81} e^{11i\pi/8},$$</span> Where <span class="math-container">$\sqrt[4]{81}$</span> denotes the unique real positive 4th root of 81. By algebraic considerations, we know there are exactly 4 4th roots. So these are all of them.</p>
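The four roots above can be verified with `cmath` (an aside):

```python
import cmath

w = -81j
# Roots 81^(1/4) * exp(i * (-pi/2 + 2*pi*k) / 4) for k = 0, 1, 2, 3.
roots = [81 ** 0.25 * cmath.exp(1j * (-cmath.pi / 2 + 2 * cmath.pi * k) / 4)
         for k in range(4)]
for r in roots:
    assert abs(r**4 - w) < 1e-9   # each really is a 4th root of -81i
print([(round(r.real, 4), round(r.imag, 4)) for r in roots])
```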
3,637,283
<p>How would I find the fourth roots of <span class="math-container">$-81i$</span> in the complex numbers? </p> <p>Here is what I currently have: </p> <p><span class="math-container">$w = -81i$</span> </p> <p><span class="math-container">$r = 9$</span> </p> <p><span class="math-container">$\theta = \arctan (-81)$</span>? </p> <p>Although I am not sure it's correct or if I am on the right track. May I have some help please? </p>
Bernard
202,857
<p><strong>Hint</strong>:</p> <p>It is much simpler: the (real) fourth root of <span class="math-container">$81$</span> is <span class="math-container">$3$</span>. So you simply have to determine the fourth roots of <span class="math-container">$-i$</span>. For that, use the complex exponential notation: <span class="math-container">$$\mathrm e^{4i\theta}=\mathrm e^{\tfrac{3i\pi}2},\;\text{ so }\;4\theta\equiv \frac{3\pi}2\bmod 2\pi.$$</span> Can you proceed?</p>