Dataset columns:
- qid: int64 (values 1 – 4.65M)
- question: large_string (lengths 27 – 36.3k)
- author: large_string (lengths 3 – 36)
- author_id: int64 (values -1 – 1.16M)
- answer: large_string (lengths 18 – 63k)
2,806,858
<p>There is an equation $$\sin2\theta=\sin\theta$$ We need to find all $\theta\in[0,2\pi]$ for which the left-hand side equals the right-hand side. <hr> Let's rewrite it as $$2\sin\theta\cos\theta=\sin\theta$$ Let's divide both sides by $\sin\theta$ (so $\sin\theta \neq 0$, i.e. $\theta \notin \{0,\pi,2\pi\}$): $$2\cos\theta=1$$ $$\cos\theta=\frac{1}{2}$$ $$\theta\in\left\{\frac{\pi}{3},\frac{5\pi}{3}\right\}$$ <hr> Now, let's try something different. $$2\sin\theta\cos\theta=\sin\theta$$ $$2\sin\theta\cos\theta-\sin\theta=0$$ $$\sin\theta(2\cos\theta-1)=0$$ We get a solution when $\sin\theta=0$: $$\theta \in \left\{0,\pi,2\pi\right\}$$ And when $2\cos\theta-1=0$: $$2\cos\theta=1$$ $$\cos\theta=\frac{1}{2}$$ $$\theta \in \left\{\frac{\pi}{3},\frac{5\pi}{3}\right\}$$ Therefore the whole solution set is $$\theta \in \left\{0,\frac{\pi}{3},\pi,\frac{5\pi}{3},2\pi\right\}$$ <strong>This is the correct solution.</strong> <hr> Why is this happening? In the first approach, the extra solutions given by $\sin\theta=0$ are not merely missing: they are explicitly excluded. Both approaches look valid to me, yet the first one yields fewer solutions than the second. Is the first approach invalid in some cases? This is not the only situation where this happens, so I'd like to know when I need to use the second approach to solve an equation, so that I don't miss any possible solutions.</p>
poyea
498,637
<p>You only consider solutions of $\cos\theta=\frac{1}{2}$ in the first case. In fact, if you perform such a cancellation, you should also consider the solutions given by $\sin\theta=0$.</p> <p>Why? Multiply both sides by zero: $$2\cos\theta=1,\qquad 2\cos\theta\cdot0=1\cdot0$$ Bearing in mind that $\sin\theta$ could be $0$, you have $$2\cos\theta\sin\theta=\sin\theta$$</p> <p><em>Division is valid if $\sin\theta\ne0$; however, the equation also holds if $\sin\theta=0$.</em> This is why you have to consider that case: at the cancellation you assumed $\sin\theta\ne0$, which is correct, but the equation itself carries no such assumption. You have to consider the case when this assumption is waived.</p>
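Neither post includes code, but the two claimed solution sets are easy to sanity-check numerically. The script below (my own illustration, plain Python) verifies every root of $\sin2\theta=\sin\theta$ on $[0,2\pi]$ and confirms that the $\sin\theta=0$ points discarded by the division are genuine solutions:

```python
import math

def f(t):
    # difference of the two sides of sin(2θ) = sin(θ)
    return math.sin(2 * t) - math.sin(t)

# solution set from the factoring approach (the correct one)
full = [0.0, math.pi / 3, math.pi, 5 * math.pi / 3, 2 * math.pi]

# every claimed solution really satisfies the equation
assert all(abs(f(t)) < 1e-12 for t in full)

# the sinθ = 0 points lost by dividing are genuine solutions
assert all(abs(f(t)) < 1e-12 for t in (0.0, math.pi, 2 * math.pi))

# scan [0, 2π) for roots by sign change: only the four roots below 2π appear
count = 0
N = 100_000
for i in range(N):
    a, b = 2 * math.pi * i / N, 2 * math.pi * (i + 1) / N
    if f(a) == 0 or f(a) * f(b) < 0:
        count += 1
assert count == len([t for t in full if t < 2 * math.pi])  # 4 roots
```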
3,891,336
<p>I have a problem with this question:</p> <p>We have a language over the alphabet {a, b, c}. All strings in this language have even length and do not contain the substrings &quot;ab&quot; or &quot;ba&quot;. For example, these strings are acceptable: &quot;accb&quot;, &quot;aa&quot;, &quot;bb&quot;, &quot;bcac&quot;, and these strings are not acceptable: &quot;ccab&quot;, &quot;abca&quot;, &quot;cabc&quot;, and so on. Actually, I think this is not a regular language, but I can't prove it. Can anyone help me by giving a regular expression for this language, or by proving that it is not regular? Or any help so I can figure out how to think about it.</p> <p>I tried <code>(aa)*(cc)*(bb)* + (bb)*(cc)*(aa)* + (ac+ca)* + (bc+cb)*</code> and <code>((a+c)(a+c))* + ((b+c)(b+c))* + (ac+bc)*(ca+cb)* + (ca+cb)*(ac+bc)*</code> but this does not work.</p>
saulspatz
235,128
<p>We need <span class="math-container">$7$</span> non-terminals. <span class="math-container">$S$</span> is the initial state; <span class="math-container">$T,\ U,$</span> and <span class="math-container">$V$</span> all indicate that an odd number of characters have been read, and that the last character read was <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, or <span class="math-container">$c$</span>, respectively. Similarly, <span class="math-container">$X,\ Y$</span>, and <span class="math-container">$Z$</span> indicate that an even number of characters have been read, and that the last character read was <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, or <span class="math-container">$c$</span>, respectively. I use <span class="math-container">$\lambda$</span> for the empty string. <span class="math-container">$S,\ X,\ Y$</span>, and <span class="math-container">$Z$</span> are accepting states, so we have the productions <span class="math-container">$$S\to\lambda\\ X\to\lambda\\ Y\to\lambda\\ Z\to\lambda\\ $$</span> Also, we have <span class="math-container">$$S\to aT\\S\to bU\\S\to cV\\T\to aX\\T\to cZ\\U\to bY\\U\to cZ\\V\to aX\\V\to bY\\V\to cZ\\$$</span></p> <p>I trust you see how I arrived at these. If we get a <span class="math-container">$b$</span> in state <span class="math-container">$T$</span> or an <span class="math-container">$a$</span> in state <span class="math-container">$U$</span>, then we've seen a forbidden substring, so there are no productions corresponding to these possibilities.</p> <p>It remains to determine the productions with <span class="math-container">$X,\ Y$</span>, or <span class="math-container">$Z$</span> on the left-hand side. I leave it to you to supply those.</p> <p><strong>EDIT</strong> Reading this over, I see that I've slipped into the language of automata at times. I hope it's obvious what I mean. If not, ping me.</p> <p><strong>EDIT</strong></p> <p>When I wrote this, I thought you were mainly concerned with whether or not the language is regular, but it's apparent from your comments that you're mostly concerned with getting a regex. Remember that the complement of a regular language is regular. So we want the complement of the union of</p> <ul> <li>the set of strings containing <span class="math-container">$ab$</span>,</li> <li>the set of strings containing <span class="math-container">$ba$</span>,</li> <li>the set of strings with an odd number of characters.</li> </ul> <p>So, if you can make regexes for these three languages, you can combine them to construct the regex you seek.</p> <p><strong>EDIT</strong></p> <p>The previous edit is wrong. The discussion prior to theorem <span class="math-container">$4.5$</span> in <a href="http://ce.sharif.edu/courses/94-95/1/ce414-2/resources/root/Text%20Books/Automata/John%20E.%20Hopcroft,%20Rajeev%20Motwani,%20Jeffrey%20D.%20Ullman-Introduction%20to%20Automata%20Theory,%20Languages,%20and%20Computations-Prentice%20Hall%20(2006).pdf" rel="nofollow noreferrer">Hopcroft, Motwani and Ullman</a> says that you can't do it that way. You have to convert the regex to a DFA, construct a DFA that recognizes the complement, and then convert the new DFA to a regex. I think any regex for this language will be exceedingly long and complicated.</p>
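Not part of the answer, but the productions translate directly into a DFA that can be tested against the question's examples. The transitions out of $X$, $Y$, and $Z$ (left to the reader in the answer) are filled in below by the same pattern, so treat them as my reading rather than the answer's:

```python
# States: S start; T/U/V odd length, last char a/b/c; X/Y/Z even length, last char a/b/c.
# X, Y, Z rows are my own completion of the exercise left in the answer.
DELTA = {
    ('S', 'a'): 'T', ('S', 'b'): 'U', ('S', 'c'): 'V',
    ('T', 'a'): 'X', ('T', 'c'): 'Z',            # 'b' after 'a' is forbidden
    ('U', 'b'): 'Y', ('U', 'c'): 'Z',            # 'a' after 'b' is forbidden
    ('V', 'a'): 'X', ('V', 'b'): 'Y', ('V', 'c'): 'Z',
    ('X', 'a'): 'T', ('X', 'c'): 'V',
    ('Y', 'b'): 'U', ('Y', 'c'): 'V',
    ('Z', 'a'): 'T', ('Z', 'b'): 'U', ('Z', 'c'): 'V',
}
ACCEPT = {'S', 'X', 'Y', 'Z'}   # an even number of characters has been read

def accepts(word):
    state = 'S'
    for ch in word:
        state = DELTA.get((state, ch))
        if state is None:        # dead state: we saw "ab" or "ba"
            return False
    return state in ACCEPT

# examples from the question
assert all(accepts(w) for w in ['accb', 'aa', 'bb', 'bcac', ''])
assert not any(accepts(w) for w in ['ccab', 'abca', 'cabc', 'a'])
```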
1,323,317
<p>I have to compute the following quantity:</p> <p>$$ 1) \sum\limits_{k=0}^{n} \binom{n}{k}k2^{n-k} $$</p> <p>Moreover, I have to give an upper bound for the following quantity:</p> <p>$$ 2) \sum\limits_{k=1}^{n-2} \binom{n}{k}\frac{k}{n-k} $$</p> <p>As regards 1), I see that $\binom{n}{k}k2^{n-k}=\frac{n! (n-k+1)2^{n-k}}{(n-k+1)!(k-1)!}$, i.e. I obtain</p> <p>$$ \sum\limits_{k=0}^{n} \frac{n! (n-k+1)2^{n-k}}{(n-k+1)!(k-1)!}=\sum\limits_{k=0}^{n} \binom{n}{k-1}(n-k+1)2^{n-k} $$</p> <p>But this seems strange! As regards 2), I have no idea.</p>
André Nicolas
6,312
<p>I prefer the combinatorial approach, but we can do it by manipulation. First note that our sum is $$\sum_{k=1}^n k\binom{n}{k}2^{n-k}\tag{1}$$ since the $k=0$ term makes no contribution to the sum. Then use the fact that $\binom{n}{k}=\frac{n}{k}\binom{n-1}{k-1}$ to rewrite (1) as $$n\sum_{k=1}^n \binom{n-1}{k-1}2^{n-k}.\tag{2}$$ Re-index the sum, replacing $k-1$ by $j$. Then (2) becomes $$n\sum_{j=0}^{n-1} \binom{n-1}{j}2^{n-1-j}.\tag{3}$$ We recognize the binomial expansion of $(1+2)^{n-1}$, so (3) is equal to $$n\cdot3^{n-1}.$$</p>
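The closed form $n\cdot3^{n-1}$ is easy to confirm by brute force (my own quick check, not part of the answer):

```python
from math import comb

# check  sum_k k * C(n,k) * 2^(n-k)  ==  n * 3^(n-1)  for small n
for n in range(1, 13):
    s = sum(k * comb(n, k) * 2 ** (n - k) for k in range(n + 1))
    assert s == n * 3 ** (n - 1)
```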
3,416,600
<p>Show that <span class="math-container">$|{\sqrt{a^2+b^2}-\sqrt{a^2+c^2}}|\le|b-c|$</span> where <span class="math-container">$a,b,c\in\mathbb{R}$</span>.</p> <p>I'd like to get a hint on how to get started. What I have thought of doing so far is splitting into cases to get rid of the absolute value, <span class="math-container">$(++, +-, -+, --)$</span>, but it looks messy. I'm wondering if there is a nicer way to solve it.</p> <p>Would love to hear some ideas.</p> <p>Thanks in advance!</p>
Ris
318,407
<p>It will be easy if you think of <span class="math-container">$\sqrt{a^2 + b^2}$</span> as a Euclidean distance. Consider the three points <span class="math-container">$A(a, 0), B(0, b), C(0, c)$</span>. Then the inequality transforms into the triangle inequality <span class="math-container">$\lvert \overline{AB} - \overline{AC} \rvert \le \overline{BC}$</span>.</p>
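A quick random numerical check of the inequality (my own addition, not from the thread), which also illustrates the distance interpretation, since $\overline{BC}$ is exactly $|b-c|$:

```python
import math
import random

random.seed(0)
for _ in range(10_000):
    a, b, c = (random.uniform(-100, 100) for _ in range(3))
    # |AB - AC| with A=(a,0), B=(0,b), C=(0,c)
    lhs = abs(math.sqrt(a * a + b * b) - math.sqrt(a * a + c * c))
    # BC = |b - c|; small tolerance for floating-point noise
    assert lhs <= abs(b - c) + 1e-9
```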
165,900
<p>Let $R=k[u,v,w]$ and $p\in R$ be a cubic form. Let $G$ be the group of graded automorphisms of $R$ which preserve $p$, i.e., $G$ is the subgroup of $GL_3(k)$ consisting of elements $g$ such that $g(p) \in k p$. My question: is $G$ some well known algebraic group? </p>
Jim Humphreys
4,231
<p>The best known situation of this type involving an algebraic group would occur in type $E_6$, where there is a long history and quite a bit of literature. Is this the "well known algebraic group" you have in mind? In case you have access to MathSciNet, you can find a list of literature cited by a fairly recent article:</p> <p>MR2381940 (2008m:20077).<br> Vavilov, N. A.(RS-STPTMM); Luzgarev, A. Yu.(RS-STPTMM), <em>The normalizer of Chevalley groups of type $E_6$.</em> Algebra i Analiz 19 (2007), no. 5, 37-64; translation in St. Petersburg Math. J. 19 (2008), no. 5, 699–718.</p> <p>As Premet notes, there is also more general literature about the classification of cubic forms and related groups, going back to the early work in invariant theory.</p>
165,900
Jérémy Blanc
23,758
<p>If the field is $\mathbb{C}$ (or algebraically closed of characteristic $\not=2,3$), then you can put any smooth cubic into the Hessian form: $$X^3+Y^3+Z^3+\lambda XYZ=0$$ for some $\lambda\in \mathbb{C}$. This corresponds to putting the nine inflection points onto the intersection of the curve with $XYZ=0$.</p> <p>Then the pencil generated by the cubic and $XYZ$ is the Hessian pencil (which corresponds to letting $\lambda$ vary in the equation) and is preserved by the classical Hessian group $G\subset \mathrm{PGL}(3,\mathbb{C})$, of order $216$. The group acts on the set of parameters, parametrised by $\mathbb{P}^1$, and the kernel of this action contains the group $(\mathbb{Z}/3\mathbb{Z})^3\rtimes \mathbb{Z}/2\mathbb{Z}$ generated by the diagonal action $$[X:Y:Z]\mapsto [X:\theta Y:\theta^2Z],$$ where $\theta^3=1$, and the group of permutations. This shows that generically you have this group as a group of automorphisms of $\mathbb{P}^2$ preserving the curve. For some values of $\lambda$, you have other automorphisms. Indeed, each of the elements of the Hessian group has two fixed points on $\mathbb{P}^1$.</p> <p>I suggest reading the nice article "The Hesse pencil of plane cubic curves" by M. Artebani and I. Dolgachev on this subject: <a href="http://arxiv.org/abs/math/0611590" rel="nofollow">http://arxiv.org/abs/math/0611590</a>.</p>
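One small piece of this is mechanically checkable: the diagonal action and the coordinate permutations really do preserve the Hessian form $X^3+Y^3+Z^3+\lambda XYZ$. A numeric sketch (my own, not from the answer):

```python
import cmath
import itertools
import random

theta = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity

def f(x, y, z, lam):
    # the Hessian cubic form
    return x**3 + y**3 + z**3 + lam * x * y * z

random.seed(1)
for _ in range(100):
    x, y, z, lam = (complex(random.uniform(-1, 1), random.uniform(-1, 1))
                    for _ in range(4))
    # diagonal action [X:Y:Z] -> [X:θY:θ²Z] multiplies each term by θ³ = 1
    assert abs(f(x, theta * y, theta**2 * z, lam) - f(x, y, z, lam)) < 1e-9
    # the form is symmetric in the coordinates
    for p in itertools.permutations((x, y, z)):
        assert abs(f(*p, lam) - f(x, y, z, lam)) < 1e-9
```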
4,436,210
<p>I have been given this exercise: Calculate the double integral:</p> <blockquote> <p><span class="math-container">$$\iint_D\frac{\sin(y)}{y}dxdy$$</span> Where <span class="math-container">$D$</span> is the area enclosed by the lines: <span class="math-container">$y=2$</span>, <span class="math-container">$y=1$</span>, <span class="math-container">$y=x$</span>, <span class="math-container">$2y=x$</span> (not <span class="math-container">$y = 2x$</span>).</p> </blockquote> <p>Visualising <span class="math-container">$D$</span> is easy. You can split <span class="math-container">$D$</span> into two sub-areas and get the bounds for the integrals. The problem I face is:</p> <p>Let's split <span class="math-container">$D$</span> into two sub-areas, <span class="math-container">$D_1$</span> and <span class="math-container">$D_2$</span>. <span class="math-container">$D_1$</span> is the left, upright triangle of <span class="math-container">$D$</span> and <span class="math-container">$D_2$</span> is the right, upside-down one.</p> <p>Then <span class="math-container">$D_1$</span> is defined by the lines <span class="math-container">$y=1$</span>, <span class="math-container">$y=x$</span>, and <span class="math-container">$x=2$</span>.</p> <p>You can express the area in a <span class="math-container">$y$</span>-normal form as: <span class="math-container">$$\begin{align} 1 \le y \le 2\\ y \le x \le 2 \end{align}$$</span> then the integral can be written as <span class="math-container">$$ \begin{align} &amp;\int_1^2\int_y^2\frac{\sin(y)}{y}dxdy \\ &amp;=\int_1^2\frac{\sin(y)}{y}[x]^2_y \space dy \\ &amp;=\int_1^2\left(\frac{2\sin(y)}{y} - \sin(y)\right)dy \\ &amp;=2\int_1^2\frac{\sin(y)}{y}dy -\int_1^2 \sin(y)dy \\ \end{align}$$</span></p> <p>The second integral is trivial, but the first one is not. I have tried substituting and integrating by parts, but to no avail. What am I doing wrong?</p> <p>Any answer is really appreciated.</p>
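As a numeric aside (not part of the question), the iterated integral over $D_1$ above can be cross-checked against a midpoint Riemann sum over the triangle $\{1\le y\le 2,\ y\le x\le 2\}$:

```python
import math

N = 1000
h = 1.0 / N

# midpoint Riemann sum of sin(y)/y over the triangle 1 <= y <= 2, y <= x <= 2
double_sum = 0.0
for i in range(N):
    y = 1 + (i + 0.5) * h
    for j in range(N):
        x = 1 + (j + 0.5) * h
        if x >= y:
            double_sum += math.sin(y) / y * h * h

# the iterated form: integral over [1, 2] of (2 - y) sin(y)/y dy
iterated = sum((2 - y) * math.sin(y) / y * h
               for y in (1 + (i + 0.5) * h for i in range(N)))

# agreement up to the staircase error along the diagonal boundary
assert abs(double_sum - iterated) < 5e-3
```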
A. P.
1,027,216
<p>This type of problem is easier to analyse directly from the matrix, avoiding going back to the system of linear equations; this avoids some confusion.</p> <p>The system can be row reduced as <span class="math-container">$$\begin{bmatrix} 1 &amp; 2 &amp; -3 &amp; | &amp; 4\\ 3 &amp; -1 &amp; 5 &amp; | &amp; 2\\ 4 &amp; 1 &amp; (a^{2}-14) &amp; | &amp; a+2\end{bmatrix} \sim \cdots \sim \begin{bmatrix} 1 &amp; 2 &amp; -3 &amp; | &amp; 4\\ 0 &amp; -7 &amp; 14 &amp; | &amp; -10\\ 0 &amp; 0 &amp; a^{2}-16 &amp; | &amp; a-4 \end{bmatrix}$$</span> Now, since <span class="math-container">$a^{2}-16=(a-4)(a+4)$</span> we have</p> <ul> <li>If <span class="math-container">$a^{2}-16=0$</span> and <span class="math-container">$a-4=0$</span>, we have infinitely many solutions, i.e., <span class="math-container">$a=4$</span>.</li> <li>If <span class="math-container">$a^{2}-16=0$</span> and <span class="math-container">$a-4\not=0$</span>, we have no solution, i.e., <span class="math-container">$a=-4$</span>.</li> <li>If <span class="math-container">$a^{2}-16\not=0$</span>, we have exactly one solution, i.e., <span class="math-container">$a\not=\pm4$</span>.</li> </ul> <hr /> <p><strong>More details about the above</strong></p> <p>There is a way to see this a little more quickly than row reducing, in an informal way, by following the scheme below.</p> <ul> <li><p>Infinitely many solutions <span class="math-container">$\to$</span> <span class="math-container">$\sim \cdots \sim \begin{bmatrix} * &amp; * &amp; * &amp; | &amp; *\\ 0 &amp; * &amp;* &amp;|&amp; *\\ 0 &amp; 0 &amp; 0 &amp; | &amp;0 \end{bmatrix}$</span>.</p> </li> <li><p>No solution <span class="math-container">$\to$</span> <span class="math-container">$\sim \cdots \sim \begin{bmatrix} * &amp; * &amp; * &amp; | &amp; *\\ 0 &amp; * &amp;* &amp;|&amp; *\\ 0 &amp; 0 &amp; 0 &amp; | &amp;* \end{bmatrix}$</span>.</p> </li> <li><p>Exactly one solution <span class="math-container">$\to$</span> <span class="math-container">$\sim \cdots \sim \begin{bmatrix} * &amp; * &amp; * &amp; | &amp; *\\ 0 &amp; * &amp;* &amp;|&amp; *\\ 0 &amp; 0 &amp; * &amp; | &amp;* \end{bmatrix}$</span>.</p> </li> </ul> <p>The important part of the above scheme is the last row. We can achieve consistency or inconsistency by making the last entry a <span class="math-container">$0$</span> or a nonzero number <span class="math-container">$*$</span> (whether or not zeros appear in rows one and two does not matter). Another important detail is that we must get the parameter, in this case &quot;<span class="math-container">$a$</span>&quot;, into the last row to apply the scheme. However, do not take this as a general rule; there are cases where additional analysis is important. But in this problem the scheme works well.</p>
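The three cases can be confirmed by comparing the ranks of the coefficient and augmented matrices over the rationals. The sketch below (my own illustration, not part of the answer) uses exact `Fraction` arithmetic:

```python
from fractions import Fraction

def rank(rows):
    """Rank over the rationals by Gaussian elimination."""
    rows = [list(map(Fraction, r)) for r in rows]
    r = 0
    for col in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col] != 0:
                f = rows[i][col] / rows[r][col]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

def solution_count(a):
    A = [[1, 2, -3], [3, -1, 5], [4, 1, a * a - 14]]
    b = [4, 2, a + 2]
    aug = [row + [rhs] for row, rhs in zip(A, b)]
    rA, rAug = rank(A), rank(aug)
    if rA < rAug:
        return 'none'                 # inconsistent last row [0 0 0 | *]
    return 'unique' if rA == 3 else 'infinitely many'

assert solution_count(4) == 'infinitely many'
assert solution_count(-4) == 'none'
assert solution_count(0) == 'unique'
```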
3,722,407
<p>I am struggling with this problem:</p> <blockquote> <p>Let <span class="math-container">$n$</span> be an even number, and denote <span class="math-container">$[n]=\{1,2,...,n\}$</span>. A sequence of sets <span class="math-container">$S_1 , S_2 , \cdots , S_m \subseteq [n]$</span> is considered <em>graceful</em> if:</p> <ol> <li><span class="math-container">$m$</span> is odd.</li> <li><span class="math-container">$S_1 \subset S_2 \subset \cdots \subset S_m \subseteq [n]$</span></li> <li><span class="math-container">$\forall i \in \{1,...,m-1\}: \; |S_{i+1}|=|S_i|+1$</span></li> <li><span class="math-container">$|S_m|+|S_1|=n$</span></li> </ol> <p>Show that it is possible to <em>decompose</em> the <span class="math-container">$2^n$</span> subsets of <span class="math-container">$[n]$</span> using <span class="math-container">$\binom{n}{n/2}$</span> graceful chains. Different chains may be of different lengths. Every subset of <span class="math-container">$[n]$</span> must appear in one, and only one, chain.</p> </blockquote> <p>I have figured that <span class="math-container">$$|S_1|=\frac{n-m+1}{2}, \; |S_i|=\frac{n-m+2i-1}{2}$$</span> for any valid choice of <span class="math-container">$m$</span>. In addition, this forces <span class="math-container">$m\in\{1,3,...,n+1\}$</span>. It is also clear that one of the chains must be of the form <span class="math-container">$$\emptyset\subset\{1\}\subset\{1,2\}\subset\cdots\subset\{1,...,n\}$$</span> where all the inner sets in this chain may be chosen arbitrarily.</p> <p>I would appreciate any help.</p>
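For small $n$ the claim is easy to check concretely. The snippet below (my own illustration, not from the post) verifies a graceful decomposition of the $2^2$ subsets of $[2]$ into $\binom{2}{1}=2$ chains:

```python
from itertools import combinations

n = 2
# a candidate graceful decomposition of the subsets of [2] (my own example)
chains = [
    [frozenset(), frozenset({1}), frozenset({1, 2})],  # m = 3, |S_m|+|S_1| = 2
    [frozenset({2})],                                  # m = 1, |S_m|+|S_1| = 2
]

def is_graceful(chain):
    m = len(chain)
    return (m % 2 == 1                                   # condition 1
            and all(chain[i] < chain[i + 1]              # conditions 2 and 3
                    and len(chain[i + 1]) == len(chain[i]) + 1
                    for i in range(m - 1))
            and len(chain[-1]) + len(chain[0]) == n)     # condition 4

assert all(is_graceful(c) for c in chains)

# the chains partition all 2^n subsets, using C(n, n/2) = 2 chains
all_subsets = {frozenset(s) for k in range(n + 1)
               for s in combinations(range(1, n + 1), k)}
covered = [s for c in chains for s in c]
assert len(covered) == len(set(covered))   # no subset appears twice
assert set(covered) == all_subsets         # every subset appears
assert len(chains) == 2
```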
metamorphy
543,769
<p>Here's an idea to get a <strong>complete</strong> asymptotics (obviously not in <em>fixed</em> powers of <span class="math-container">$n$</span>, too).</p> <p>For a fixed <span class="math-container">$n&gt;1$</span>, the solution <span class="math-container">$w=w_n(z)$</span> of <span class="math-container">$w=1+zw^n$</span> has a <a href="https://math.stackexchange.com/a/3310638">known</a> power series <span class="math-container">$$w_n(z)=\sum_{k=0}^\infty\binom{nk}{k}\frac{z^k}{(n-1)k+1}$$</span> (a way to get it is basically Lagrange's inversion theorem). Thus, if <span class="math-container">$v_n=nu_n$</span> for our <span class="math-container">$u_n$</span>, then <span class="math-container">$$v_n=1+n^{-n}(v_n)^n\implies v_n=w_n(n^{-n})\implies u_n=n^{-1}w_n(n^{-n}).$$</span></p>
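The identity $v_n=w_n(n^{-n})$ can be sanity-checked numerically by comparing the fixed point of $v=1+zv^n$ with the truncated power series. A quick check of my own, e.g. for $n=3$:

```python
from math import comb

n = 3
z = n ** (-n)   # z = n^{-n} = 1/27

# fixed point of v = 1 + z v^n by iteration (a contraction near v = 1 here)
v = 1.0
for _ in range(200):
    v = 1 + z * v ** n

# truncated series  sum_k C(nk, k) z^k / ((n-1)k + 1)
series = sum(comb(n * k, k) * z ** k / ((n - 1) * k + 1) for k in range(60))

assert abs(v - series) < 1e-12
u_n = v / n   # since v_n = n u_n
assert abs(n * u_n - series) < 1e-12
```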
152,405
<p>This question complements a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p> <p>I am looking for a list of</p> <h3>Major theorems in mathematics whose proofs are very hard but were not dramatically improved over the years.</h3> <p>(So a new, dramatically simpler proof may represent a much hoped-for breakthrough.) Cases where the original proof was very hard and dramatic improvements were found, but the proof remained very hard, may also be included.</p> <p>To limit the scope of the question:</p> <p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment, but for an answer please wait until 2020.)</p> <p>b) Major results only.</p> <p>c) Results with very hard proofs.</p> <p>As usual, one example (or a few related ones) per post.</p> <p>A similar question was asked before: <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to theorems that are 100 or so years old.)</p> <h2>Answers</h2> <p>(Updated Oct 3 '15)</p> <p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincaré, 19th century)</p> <p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p> <p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963)</p> <p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (1954, Kolmogorov; 1962, Moser; 1963, Arnold)</p> <p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model. This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p> <p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p> <p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken)</p> <p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber) (<strong>Update:</strong> A new, simpler proof by de Cataldo and Migliorini is now available)</p> <p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincaré conjecture for dimension four</a> (1982, Freedman) (NEW)</p> <p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> (1983, Hatcher)</p> <p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p> <p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p> <p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p> <p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a> (1990, Zelmanov) (NEW)</p> <p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p> <p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in <span class="math-container">$\mathbb{R}^3$</span>, a.k.a. the Kepler Conjecture</a> (1999, Hales)</p> <p><strong>For the following answers some issues were raised in the comments.</strong></p> <p><a href="https://mathoverflow.net/a/152598">The Selberg trace formula, general case</a> (Reference to a 1983 proof by Hejhal)</p> <p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p> <p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p> <p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: the 2000 proof by Lacey and Thiele.)</p> <p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic-theoretic proofs; modern proofs based on hypergraph regularity; and the polymath1 proof for density Hales-Jewett.)</p> <p><strong>Additional answer:</strong></p> <p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for the 4CT and the Feit-Thompson theorem</a>.</p>
Daniel Moskovich
2,051
<p><a href="http://en.wikipedia.org/wiki/Smale_conjecture" rel="nofollow noreferrer">The Smale Conjecture</a>.</p> <hr> <p>This was <a href="http://www.jstor.org/discover/10.2307/2007035?uid=3738992&amp;uid=2129&amp;uid=2&amp;uid=70&amp;uid=4&amp;sid=21103240235653" rel="nofollow noreferrer">proven by Hatcher in 1983</a>. It states that the diffeomorphism group $\mathrm{Diff}(S^3)$ of the $3$-sphere has the homotopy type of the orthogonal group $O(4)$, which in particular implies that $\pi_0\,\mathrm{Diff}(S^3)= \pi_0 (O(4))$, or equivalently that $\Gamma_4=\pi_0\,\mathrm{Diff}(D^3\mathrm{rel}\,\partial)=0$ (this latter result, due originally to Cerf, was simplified <a href="http://arxiv.org/abs/1007.3606" rel="nofollow noreferrer">here</a>). The case of the $2$-sphere is even more famous and much easier, but the Smale Conjecture is a major foundational result, which implies for example that &quot;the space of smooth unknotted curves retracts to the space of great circles, <i>i.e.</i> there exists a way to isotope smooth unknotted curves to round circles that is continuous as a function of the curve&quot; (quoted from <a href="https://mathoverflow.net/a/53486/2051">here</a>).</p> <p>Hatcher's proof is considered to be very hard, and I have heard experts say that there might be only a handful of people in the world who truly understand it. I am not aware of the proof having been substantially simplified.</p>
152,405
Dietrich Burde
32,332
<p>The proof of the <a href="http://en.wikipedia.org/wiki/Oppenheim_conjecture"><em>Oppenheim conjecture</em></a> by G. A. Margulis in $1986$ may qualify. It is a famous result, proved $27$ years ago, with a hard proof that has not been dramatically simplified (if I am not mistaken, not counting the simplification by Dani and Margulis; Ratner's result made it possible to study the quantitative version of the Oppenheim conjecture).</p>
152,405
Alexandre Eremenko
25,510
<p>A major 19th-century result is the general Uniformization theorem: every simply connected Riemann surface is conformally equivalent either to the plane, to the unit disc, or to the sphere. There have been improvements of the proof, and many different proofs, but the simplifications are not "dramatic". It is still difficult to include a complete proof in a graduate course, unless a large part of the course is dedicated to this single theorem.</p> <p>See also this MO question: <a href="https://mathoverflow.net/questions/10516/uniformization-theorem-for-riemann-surfaces">Uniformization theorem for Riemann surfaces</a></p>
152,405
Yiftach Barnea
5,034
<p>The <a href="https://en.wikipedia.org/wiki/Burnside%27s_problem" rel="nofollow noreferrer">Restricted Burnside Problem</a> asked whether there is a bound on the size of a finite group with <span class="math-container">$d$</span> generators and exponent <span class="math-container">$n$</span>. In the 1950s, Kostrikin proved there is a bound for prime <span class="math-container">$n$</span>. The Hall-Higman theorem reduced the problem to prime-power <span class="math-container">$n$</span>. Zelmanov gave a positive answer for prime powers (the odd case appeared in 1990 and the even case in 1991, so we are borderline on the 25-year rule). The proof is very difficult and, as far as I know, it was never simplified (or at least not substantially). </p>
152,405
Lasse Rempe
3,651
<p>The <a href="http://www.jstor.org/stable/2944326?seq=1#page_scan_tab_contents" rel="nofollow">Benedicks-Carleson theorem</a> on the existence of strange attractors for Hénon maps is an example, I would say. Over the years, there have been some attempts to give improved presentations of the proof, but I don't believe there have been any dramatic simplifications. (However, I am not an expert in this precise area of dynamics.)</p> <p>Nb. The paper was published in January 1991, which date misses your 25-year rule by a few months. However, the paper was received by the journal in 1988, and revised in 1989, so I shall invoke those dates to argue for its eligibility. ☺</p>
152,405
Larry B.
141,260
<p>Chazelle's <a href="https://link.springer.com/article/10.1007/BF02574703" rel="noreferrer">linear time algorithm for the triangulation of a polygon</a> has not been improved upon since its creation in 1991.</p> <p>Technically, this is a computer science theorem, but I think it belongs here for a couple reasons. It's complicated. No actual code implementation of the algorithm has ever been made. While it is linear time, the constant factor makes the algorithm useless for any practical purposes, beyond its theoretical use in other papers.</p>
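Since no implementation of Chazelle's algorithm exists, practical code falls back on far simpler methods. As an illustrative contrast (this is emphatically not Chazelle's algorithm, and the function names are my own), here is a minimal O(n²) ear-clipping sketch for a simple polygon given in counter-clockwise order:

```python
# Not Chazelle's algorithm: a minimal O(n^2) ear-clipping sketch, the kind of
# simple method actually used in practice. Assumes a simple polygon whose
# vertices are listed in counter-clockwise order.

def cross(o, p, q):
    # z-component of (p - o) x (q - o)
    return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])

def in_triangle(p, a, b, c):
    # CCW triangle a, b, c; boundary counts as inside
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clip(poly):
    verts = list(range(len(poly)))
    triangles = []
    while len(verts) > 3:
        for i in range(len(verts)):
            prev, cur, nxt = verts[i-1], verts[i], verts[(i+1) % len(verts)]
            a, b, c = poly[prev], poly[cur], poly[nxt]
            if cross(a, b, c) <= 0:       # reflex or degenerate vertex: not an ear
                continue
            others = (poly[v] for v in verts if v not in (prev, cur, nxt))
            if any(in_triangle(p, a, b, c) for p in others):
                continue                   # another vertex lies inside: not an ear
            triangles.append((prev, cur, nxt))
            verts.pop(i)
            break
    triangles.append(tuple(verts))
    return triangles

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
tris = ear_clip(square)
print(tris)  # two triangles covering the square
```

Any simple polygon with n vertices decomposes into n - 2 triangles, so the square yields two.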
2,069,507
<p><a href="https://i.stack.imgur.com/B4b88.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/B4b88.png" alt="The image of parallelogram for help"></a></p> <p>Let's say we have a parallelogram $\text{ABCD}$.</p> <p>$\triangle \text{ADC}$ and $\triangle \text{BCD}$ are on the same base and between the two parallel lines $\text{AB}$ and $\text{CD}$, so $$ar\triangle \text{ADC}=ar\triangle \text{BCD}$$ Now, the things that should be noticed are:</p> <p>In $\triangle \text{ADC}$ and $\triangle \text{BCD}$:</p> <p>$$\text{AD}=\text{BC}$$ $$\text{DC}=\text{DC}$$ $$ar\triangle \text{ADC}=ar\triangle \text{BCD}.$$</p> <p>Now, in two different triangles, two sides are equal and their areas are also equal, so the third sides should also be equal, i.e. $\text{AC}=\text{BD}$, which would make this parallelogram a rectangle.</p> <p>Isn't this a claim that every parallelogram is a rectangle, or that parallelograms do not exist?</p>
MoebiusCorzer
283,812
<p>I want to give a more analytic (and probably less intuitive) way of seeing why this is not true. Let's fix $A$, the area of a triangle. Let $a,b&gt;0$ be the respective lengths of two sides of a triangle. Now, let $x$ be the angle between the side of length $a$ and the one of length $b$.</p> <p>Then, we know that the area is given by:</p> <p>$$A=\frac{ab\sin x}{2}$$</p> <p>which yields $\sin x=\tfrac{2A}{ab}$. Now, we also know that the third side has length $c(x)$:</p> <p>$$c(x)=\sqrt{a^{2}+b^{2}-2ab\cos x}$$</p> <p>The claim of the OP is that $c:(0,\pi)\to\Bbb R_{0}^{+}:x\mapsto c(x)$ is a constant function. Now, just note that if $\pi &gt; x \geq \pi/2$, then $-1&lt;\cos x\le 0$ and in that case </p> <p>$$\cos x=\color{red}{-}\sqrt{1-\sin^{2}x}=-\sqrt{1-\frac{4A^{2}}{a^{2}b^{2}}}=-\sqrt{\frac{a^{2}b^{2}-4A^{2}}{a^{2}b^{2}}}=-\frac{\sqrt{a^{2}b^{2}-4A^{2}}}{ab}$$ </p> <p>by the fundamental trigonometric identity $\sin^{2}x+\cos^{2}x=1$. This yields:</p> <p>$$c(x)=\sqrt{a^{2}+b^{2}+2\sqrt{a^{2}b^{2}-4A^{2}}}$$</p> <p>On the other hand, if $0&lt;x\le\pi/2$, then $0\le\cos x&lt;1$ and:</p> <p>$$\cos x = \color{red}{+}\frac{\sqrt{a^{2}b^{2}-4A^{2}}}{ab}$$</p> <p>which yields</p> <p>$$c(x)=\sqrt{a^{2}+b^{2}-2\sqrt{a^{2}b^{2}-4A^{2}}}$$</p> <p>We see that given a certain area $A$ and two side lengths $a$ and $b$, there are only two possible values of $c$, depending on $x$ being an obtuse or an acute angle. We also see that if the two triangles are right triangles, then their respective third sides must be equal, but this already follows from Pythagoras' theorem.</p>
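To see concretely that the area and two side lengths do not pin down the third side, here is a small numerical check (a sketch; the specific values a = 3, b = 4, A = 5 are arbitrary choices of mine, subject only to 2A ≤ ab):

```python
import math

# Two triangles sharing side lengths a, b and area A, but with different
# included angles (acute vs. obtuse), hence different third sides.
a, b, A = 3.0, 4.0, 5.0              # arbitrary values with 2*A <= a*b
sin_x = 2 * A / (a * b)              # sine of the included angle
x_acute = math.asin(sin_x)
x_obtuse = math.pi - x_acute         # same sine, hence the same area

def third_side(x):
    # law of cosines
    return math.sqrt(a*a + b*b - 2*a*b*math.cos(x))

c1, c2 = third_side(x_acute), third_side(x_obtuse)
print(round(c1, 3), round(c2, 3))    # two different third-side lengths
```

Both triangles have sides a, b and area A, yet the third sides differ, exactly as the two closed-form values of c(x) above predict.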
2,069,507
tim
401,692
<p>A rectangle, by definition, has $4$ right angles (the clue is in "rect" and "angle"). So, no, there is no need for all the clever workings. All <em>rectangles</em> are parallelograms, but not all <em>parallelograms</em> are rectangles: only the ones with all four corners being $90$ degrees each. </p>
1,043,266
<p>Carefully consider this problem (I have solved it on my own; I am only asking about the magical coincidence):</p> <blockquote> <p>A bag contains 6 notes of 100 Rs., 2 notes of 500 Rs. and 3 notes of 1000 Rs. Mr. A draws two notes from the bag, then Mr. B draws two notes from the bag.<br> (i) Find the probability that A has drawn 600 Rs.<br> (ii) Find the probability that B has drawn 600 Rs.<br> (iii) B has drawn 600 Rs.; find the probability that A has also drawn 600 Rs.<br> (iv) A has drawn 600 Rs.; find the probability that B has drawn 600 Rs.<br></p> <hr> <p>(i) $$P=\frac{\binom61\binom21}{\binom{11}2}=\frac{12}{55}$$ (ii) <strong>Total Probability Theorem:</strong> Considering the various cases, depending upon what A chooses<br> (in the order $2H,2F,2T,1H1T,1H1F,1F1T$,<br> where H=100 (<strong>H</strong>undred), F=500 (<strong>F</strong>ive-hundred), T=1000 (<strong>T</strong>housand)), this is: $$P=\frac{\binom41\binom21}{\binom92}\frac{\binom62}{\binom{11}2} +\frac{0}{\binom92}\frac{\binom22}{\binom{11}2} +\frac{\binom61\binom21}{\binom92}\frac{\binom32}{\binom{11}2} +\frac{\binom51\binom21}{\binom92}\frac{\binom61\binom31}{\binom{11}2} +\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2} +\frac{\binom61\binom11}{\binom92}\frac{\binom21\binom31}{\binom{11}2}=\frac{12}{55}$$ <strong>Oh my God! What's happening here?</strong><br> (iii) <strong>Bayes' Theorem:</strong> $$P=\frac{\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2}}{\frac{12}{55}}=\frac5{36}$$ (iv) <strong>Conditional Probability:</strong> $$P(B|A)=\frac{P(AB)}{P(A)}=\frac{\frac{\binom51\binom11}{\binom92}\frac{\binom61\binom21}{\binom{11}2}}{\frac{12}{55}}=\frac5{36}$$ <strong>Not again, you must be joking!</strong></p> </blockquote> <p>Why doesn't it make any difference? Intuitively, if A has taken some money there are fewer notes left, so there should be a difference in probability; why doesn't the order matter here?</p>
turkeyhundt
115,823
<p>In the cases where A takes some of the bills that could make 600, yes, the probability that B can also get 600 goes down; but in the cases where A takes bills that do not contribute to a combination of 600, the probability that B can get 600 goes up! It balances out to be the same.</p> <p>You can also see why the order of A and B does not matter by reversing time: imagine each wrote their name on the bills they chose and put them back in the bag. Regardless of the order in which they picked, two of the bills are for A and two are for B.</p>
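The balancing act can also be checked by brute force. The following enumeration (my own, not part of the answer) lists all equally likely ordered pairs of draws and recovers the values 12/55 and 5/36 computed in the question:

```python
from fractions import Fraction
from itertools import combinations

# Notes in the bag, tagged with an index so identical denominations stay distinct.
bag = list(enumerate([100]*6 + [500]*2 + [1000]*3))

total = a600 = b600 = both = 0
for a_draw in combinations(bag, 2):                    # A's two notes
    rest = [note for note in bag if note not in a_draw]
    for b_draw in combinations(rest, 2):               # B's two notes
        total += 1
        sa = a_draw[0][1] + a_draw[1][1] == 600
        sb = b_draw[0][1] + b_draw[1][1] == 600
        a600 += sa
        b600 += sb
        both += sa and sb

print(Fraction(a600, total), Fraction(b600, total))  # 12/55 12/55
print(Fraction(both, b600))                          # 5/36
```

A and B get the same unconditional probability, and the two conditional probabilities coincide as well, confirming the symmetry argument.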
668,664
<p>Solve $\dfrac{\partial u}{\partial t}+u\dfrac{\partial u}{\partial x}=x$ subject to the initial condition $u(x,0)=f(x)$.</p> <p>I let $\dfrac{dt}{ds}=1$ , $\dfrac{dx}{ds}=u$ , $\dfrac{du}{ds}=x$ and the initial conditions become: $t=0$ , $x=\xi$ and $u=f(\xi)$ when $s=0$ .</p> <p>I believe this leads to $t=s$ , but I am unsure how to deal with $\dfrac{dx}{ds}=u$ and $\dfrac{du}{ds}=x$ .</p>
doraemonpaul
30,938
<p>Follow the method in <a href="http://en.wikipedia.org/wiki/Method_of_characteristics#Example" rel="nofollow">http://en.wikipedia.org/wiki/Method_of_characteristics#Example</a>:</p> <p>$\dfrac{dt}{ds}=1$ , letting $t(0)=0$ , we have $t=s$</p> <p>$\begin{cases}\dfrac{dx}{ds}=u\\\dfrac{du}{ds}=x\end{cases}$</p> <p>$\therefore\dfrac{d^2x}{ds^2}=x$</p> <p>$x=C_1\sinh s+C_2\cosh s$</p> <p>$\therefore u=C_1\cosh s+C_2\sinh s$</p> <p>Hence $\begin{cases}x=C_1\sinh s+C_2\cosh s\\u=C_1\cosh s+C_2\sinh s\end{cases}$</p> <p>$x(0)=x_0$ , $u(0)=F(x_0)$ :</p> <p>$\begin{cases}C_1=F(x_0)\\C_2=x_0\end{cases}$</p> <p>$\therefore\begin{cases}x=F(x_0)\sinh s+x_0\cosh s\\u=F(x_0)\cosh s+x_0\sinh s\end{cases}$</p> <p>$\therefore\begin{cases}x_0=x\cosh s-u\sinh s=x\cosh t-u\sinh t\\F(x_0)=u\cosh s-x\sinh s=u\cosh t-x\sinh t\end{cases}$</p> <p>Hence $u\cosh t-x\sinh t=F(x\cosh t-u\sinh t)$</p> <p>$u(x,0)=f(x)$ :</p> <p>$F(x)=f(x)$</p> <p>$\therefore u\cosh t-x\sinh t=f(x\cosh t-u\sinh t)$</p>
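As a quick sanity check of the implicit solution (my own addition): take the particular initial condition $f(x)=0$. Then $u\cosh t - x\sinh t = 0$ gives $u = x\tanh t$ explicitly, and the PDE can be verified symbolically:

```python
import sympy as sp

x, t = sp.symbols('x t')

# For f = 0 the implicit relation u*cosh(t) - x*sinh(t) = f(x*cosh(t) - u*sinh(t))
# reduces to u*cosh(t) - x*sinh(t) = 0, i.e. u = x*tanh(t).
u = x * sp.tanh(t)

# Check the PDE u_t + u*u_x = x and the initial condition u(x, 0) = 0:
residual = sp.simplify(sp.diff(u, t) + u * sp.diff(u, x) - x)
print(residual, u.subs(t, 0))  # 0 0
```

Indeed $u_t = x(1-\tanh^2 t)$ and $u\,u_x = x\tanh^2 t$, which sum to $x$.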
192,072
<p>Bonjour!<br> I'm trying this number-theory problem, but I don't have any idea how to solve it.<br> Can you give me some hints?</p> <p>Let $n$ be any number in $\mathbb{Z_+}$.<br> Then we must prove that $2 \nmid \sigma(n) \implies n = k^2 \vee n = 2k^2$.<br> Thanks for any help </p>
robjohn
13,854
<p><strong>Hint:</strong> If the prime factorization of $n$ is $$ n=\prod_k p_k^{e_k}\tag{1} $$ then $$ \begin{align} \sigma(n) &amp;=\prod_k\frac{p_k^{e_k+1}-1}{p_k-1}\\ &amp;=\prod_k\left(1+p_k+p_k^2+\dots+p_k^{e_k}\right)\tag{2} \end{align} $$ and count the number of summands in $(2)$.</p>
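Counting the summands as the hint suggests: the factor for $p=2$ is always odd, while the factor for an odd prime $p$ is a sum of $e_k+1$ odd terms, hence odd iff $e_k$ is even; this forces $n = k^2$ or $n = 2k^2$. A quick computational check of the equivalence (a sketch of mine using SymPy's `divisor_sigma`):

```python
from math import isqrt
from sympy import divisor_sigma

def is_square(m):
    r = isqrt(m)
    return r * r == m

# sigma(n) = prod_k (1 + p_k + ... + p_k^{e_k}). The factor for p = 2 is always
# odd; for an odd prime it is a sum of e+1 odd terms, hence odd iff e is even.
# So sigma(n) is odd exactly when n = k^2 or n = 2*k^2.
def check(limit):
    return all(
        (divisor_sigma(n) % 2 == 1)
        == (is_square(n) or (n % 2 == 0 and is_square(n // 2)))
        for n in range(1, limit + 1)
    )

print(check(2000))  # True
```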
3,712,699
<p>Let <span class="math-container">$K$</span> be a number field with ring of integers <span class="math-container">$\mathcal{O}_K$</span> and let <span class="math-container">$p$</span> be a rational prime. Let <span class="math-container">$(p) = \mathfrak{p}_1^{e_1}\ldots\mathfrak{p}_r^{e_r}$</span> be the prime factorisation of (p) over <span class="math-container">$\mathcal{O}_K$</span>, and suppose that <span class="math-container">$\alpha \in \mathfrak{a} = \mathfrak p_1\ldots \mathfrak{p}_r$</span>. Then show that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha) \equiv 0$</span> (mod <span class="math-container">$p$</span>). </p> <p>I'd really appreciate any help in proving this. Thanks for reading! </p> <hr> <p><strong>Special Case</strong></p> <p>I am able to prove the result in the case where <span class="math-container">$K/\mathbb{Q}$</span> is a Galois extension. In that case, each embedding <span class="math-container">$\sigma$</span> of <span class="math-container">$K$</span> is actually a <span class="math-container">$\mathbb{Q}$</span>-automorphism, and <span class="math-container">$\sigma$</span> permutes the <span class="math-container">$\mathfrak{p}_i$</span>, hence <span class="math-container">$\sigma(\alpha) \in \mathfrak{a}$</span>, so clearly <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha) = \sum_\sigma \sigma(\alpha) \in \mathfrak{a} \cap \mathbb{Z} \subseteq p\mathbb{Z}$</span>.</p> <p>However, if <span class="math-container">$K/\mathbb{Q}$</span> is not Galois, the argument fails because the embeddings no longer permute the <span class="math-container">$\mathfrak{p}_i$</span>, so the conjugates of <span class="math-container">$\alpha$</span> are no longer in <span class="math-container">$\mathfrak{a}$</span>. 
</p> <hr> <p><strong>Other Ideas</strong></p> <p>I'm aware that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}$</span> is the trace of the <span class="math-container">$\mathbb{Q}$</span>-linear transformation of <span class="math-container">$K$</span> given by <span class="math-container">$v \mapsto \alpha v$</span>, so I have thought about the matrix of this linear transformation with respect to an arbitrary integral basis, but haven't been able to make much headway with that.</p> <p>We also have that <span class="math-container">$\alpha^e \in (p)$</span>, where <span class="math-container">$e = \max_i \{e_i\}$</span>, so that <span class="math-container">$\text{Tr}_{K/\mathbb{Q}}(\alpha^e)\in (p) \cap \mathbb{Z} = p\mathbb{Z}$</span>, so I've thought about trying to relate the trace of <span class="math-container">$\alpha$</span> to the trace of <span class="math-container">$\alpha^e$</span>, but also to no avail. </p>
GreginGre
447,764
<p>It is known that the different ideal <span class="math-container">$D_K$</span> is divisible by <span class="math-container">$\mathfrak{p}_1^{e_1-1}\cdots \mathfrak{p}_r^{e_r-1}$</span>, hence contained in <span class="math-container">$\mathfrak{p}_1^{e_1-1}\cdots \mathfrak{p}_r^{e_r-1}$</span> . Therefore <span class="math-container">$\mathfrak{p}_1^{1-e_1}\cdots \mathfrak{p}_r^{1-e_r}\subset D_K^{-1}.$</span></p> <p>Now recall that for a fractional ideal <span class="math-container">$I$</span>, we have <span class="math-container">$Tr_{K/\mathbb{Q}}(I)\subset \mathbb{Z}\iff I\subset D_K^{-1}$</span>.</p> <p>Now, <span class="math-container">$p^{-1} \mathfrak{a}=\mathfrak{p}_1^{1-e_1}\cdots \mathfrak{p}_r^{1-e_r}\subset D_K^{-1}$</span>, so <span class="math-container">$Tr_{K/\mathbb{Q}}(p^{-1}\mathfrak{a})\subset \mathbb{Z}$</span>, which is equivalent to <span class="math-container">$Tr_{K/\mathbb{Q}}(\mathfrak{a})\subset p\mathbb{Z}$</span>, as required.</p>
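For a concrete non-Galois illustration (my own example, not part of the answer): take $K=\mathbb{Q}(2^{1/3})$ and $p=2$, which is totally ramified, $(2)=\mathfrak{p}^3$ with $\mathfrak{p}=(c)$, $c^3=2$. Computing the trace as the trace of the multiplication matrix on the integral basis $(1,c,c^2)$ shows every $\alpha\in\mathfrak{p}$ has trace divisible by $2$:

```python
import sympy as sp

u, v, w = sp.symbols('u v w', integer=True)

# K = Q(2^(1/3)) is not Galois over Q, and p = 2 is totally ramified:
# (2) = p^3 with p = (c), where c^3 = 2. Multiplication by c on the
# integral basis (1, c, c^2) is given by the matrix C below.
C = sp.Matrix([[0, 0, 2],
               [1, 0, 0],
               [0, 1, 0]])

# A generic element of p = (c):
#   alpha = c*(w + u*c + v*c^2) = 2*v + w*c + u*c^2,
# whose multiplication matrix is:
alpha = 2*v*sp.eye(3) + w*C + u*C**2
print(alpha.trace())  # 6*v, always divisible by p = 2
```

Here the trace is $6v$ regardless of $u,w$, consistent with the general statement even though no embedding of $K$ permutes the primes above $2$.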
188,900
<p><strong>Bug introduced in 10.0 and persisting through 11.3 or later</strong></p> <hr> <p>In <code>11.3.0 for Microsoft Windows (64-bit) (March 7, 2018)</code>, writing:</p> <pre><code>f[w_, x_, y_, z_] := w*x^2*y^3 - z*(w^2 + x^2 + y^2 - 1)
eqn = {D[f[w, x, y, z], w] == 0, D[f[w, x, y, z], x] == 0,
   D[f[w, x, y, z], y] == 0, D[f[w, x, y, z], z] == 0};
sol = Solve[eqn];
Table[eqn /. sol[[n]], {n, Length[sol]}]
</code></pre> <p>I get:</p> <blockquote> <p>{{True, True, True, True}, {True, True, True, True}, {True, True, True, True}, {True, True, True, True}, {True, True, True, True}, {True, True, True, True}, {True, True, True, True}, {True, True, True, True}, {False, True, True, False}, {True, True, True, True}, {True, True, True, True}, {False, True, True, False}, {True, True, True, True}, {True, True, True, True}, {False, True, True, False}, {True, True, True, True}, {True, True, True, True}, {False, True, True, False}, {True, True, True, True}, {True, True, True, True}}</p> </blockquote> <p>so four of the returned solutions are wrong.</p> <p>Am I wrong, or is this a <code>Solve[]</code> bug?</p> <hr> <p><strong>EDIT</strong>: through the email address <em>support@wolfram.com</em> I contacted <em>Wolfram Technical Support</em>, who in less than three working days confirmed that it is a bug and have already reported it to their developers. </p>
OkkesDulgerci
23,291
<p>You can use <code>Reduce</code>:</p> <pre><code>f[w_, x_, y_, z_] := w*x^2*y^3 - z*(w^2 + x^2 + y^2 - 1)
eqn = {D[f[w, x, y, z], w] == 0, D[f[w, x, y, z], x] == 0,
   D[f[w, x, y, z], y] == 0, D[f[w, x, y, z], z] == 0};
red = Reduce[eqn, Backsubstitution -&gt; True]
</code></pre> <blockquote> <p><span class="math-container">$\left(z=0\land x=0\land w=-\sqrt{1-y^2}\right)\lor \left(z=0\land x=0\land w=\sqrt{1-y^2}\right)\lor \left(z=0\land y=0\land w=-\sqrt{1-x^2}\right)\lor \left(z=0\land y=0\land w=\sqrt{1-x^2}\right)\lor (z=0\land y=-1\land x=0\land w=0)\lor (z=0\land y=0\land x=0\land w=-1)\lor (z=0\land y=0\land x=0\land w=1)\lor (z=0\land y=1\land x=0\land w=0)\lor \left(z=-\frac{1}{4 \sqrt{3}}\land y=-\frac{1}{\sqrt{2}}\land x=-\frac{1}{\sqrt{3}}\land w=\frac{1}{\sqrt{6}}\right)\lor \left(z=-\frac{1}{4 \sqrt{3}}\land y=-\frac{1}{\sqrt{2}}\land x=\frac{1}{\sqrt{3}}\land w=\frac{1}{\sqrt{6}}\right)\lor \left(z=-\frac{1}{4 \sqrt{3}}\land y=\frac{1}{\sqrt{2}}\land x=-\frac{1}{\sqrt{3}}\land w=-\frac{1}{\sqrt{6}}\right)\lor \left(z=-\frac{1}{4 \sqrt{3}}\land y=\frac{1}{\sqrt{2}}\land x=\frac{1}{\sqrt{3}}\land w=-\frac{1}{\sqrt{6}}\right)\lor \left(z=\frac{1}{4 \sqrt{3}}\land y=-\frac{1}{\sqrt{2}}\land x=-\frac{1}{\sqrt{3}}\land w=-\frac{1}{\sqrt{6}}\right)\lor \left(z=\frac{1}{4 \sqrt{3}}\land y=-\frac{1}{\sqrt{2}}\land x=\frac{1}{\sqrt{3}}\land w=-\frac{1}{\sqrt{6}}\right)\lor \left(z=\frac{1}{4 \sqrt{3}}\land y=\frac{1}{\sqrt{2}}\land x=-\frac{1}{\sqrt{3}}\land w=\frac{1}{\sqrt{6}}\right)\\ \lor \left(z=\frac{1}{4 \sqrt{3}}\land y=\frac{1}{\sqrt{2}}\land x=\frac{1}{\sqrt{3}}\land w=\frac{1}{\sqrt{6}}\right)$</span></p> </blockquote> <pre><code>First@eqn //. {ToRules[red]}
</code></pre> <blockquote> <p>{True, True, True, True, True, True, True, True, True, True, True, True, True, True, True, True}</p> </blockquote>
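One can also spot-check the critical points reported by <code>Reduce</code> with an independent tool, e.g. SymPy (a sketch of mine; it verifies a single reported solution rather than re-solving the whole system):

```python
import sympy as sp

w, x, y, z = sp.symbols('w x y z')
f = w * x**2 * y**3 - z * (w**2 + x**2 + y**2 - 1)
grad = [sp.diff(f, var) for var in (w, x, y, z)]

# One of the critical points in Reduce's output:
point = {w: 1/sp.sqrt(6), x: 1/sp.sqrt(3), y: 1/sp.sqrt(2), z: 1/(4*sp.sqrt(3))}
vals = [sp.simplify(g.subs(point)) for g in grad]
print(vals)  # [0, 0, 0, 0]
```

All four gradient components vanish exactly at this point, confirming it is a genuine critical point.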
1,624,221
<p>For the former one, I am aware that if we let $F(x)=\int_a^x f(t)dt$, then it also equals $\int_0^x f(t)dt-\int_0^a f(t)dt$, so $F'(x)= f(x)-0=f(x)$. But who can tell me why the derivative of $\int_0^a f(t)dt$ is $0$?</p>
DeepSea
101,504
<p><strong>hint</strong>: $\dfrac{1}{n^2-1} = \dfrac{1}{2}\cdot \left(\dfrac{1}{n-1}-\dfrac{1}{n+1}\right) = \dfrac{1}{2}\left(\dfrac{1}{n-1}-\dfrac{1}{n}\right) + \dfrac{1}{2}\left(\dfrac{1}{n}-\dfrac{1}{n+1}\right)$. From this you see that there are two sums to calculate, and by telescoping the first sum is $\dfrac{1}{2}$ and the second is $\dfrac{1}{4}$, thus the answer is $\dfrac{3}{4}$ as claimed.</p>
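A quick numerical sanity check of the hint (a hypothetical Python sketch, assuming the sum starts at $n=2$): the partial sums approach $3/4$, and telescoping even gives the exact partial sum.

```python
def partial_sum(N):
    # Partial sum of 1/(n^2 - 1) for n = 2, ..., N
    return sum(1.0 / (n * n - 1) for n in range(2, N + 1))

# Telescoping predicts the exact partial sum 3/4 - (1/2)*(1/N + 1/(N+1))
for N in (10, 1000):
    exact = 0.75 - 0.5 * (1 / N + 1 / (N + 1))
    assert abs(partial_sum(N) - exact) < 1e-12

# ... and hence the limit 3/4
assert abs(partial_sum(10**5) - 0.75) < 1e-4
```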
3,189,303
<p>What is the point of constant symbols in a language?</p> <p>For example, take the language of rings <span class="math-container">$(0,1,+,-,\cdot)$</span>. What is so special about <span class="math-container">$0,1$</span> now? What distinguishes 0 and 1 from any other element of the ring?</p> <p>I am aware that you want to have some elements, called 0 and 1, which have the desired properties, like <span class="math-container">$x+0=0+x=x$</span> or <span class="math-container">$1\cdot x = x\cdot 1=x$</span>.</p> <p>Is there something else which makes constants 'special'?</p> <p>Another example: Suppose we have the language <span class="math-container">$L=\{c\}$</span> where <span class="math-container">$c$</span> is a constant symbol. Now we consider the L-structure <span class="math-container">$\mathfrak{S}_n$</span> over the set <span class="math-container">$\mathbb{Z}$</span>, where <span class="math-container">$c$</span> gets interpreted by <span class="math-container">$n$</span>.</p> <p>Is there any difference between <span class="math-container">$c$</span> and <span class="math-container">$n$</span>? Or are they just the same, so that you can view it as some sort of substitution?</p> <p>For <span class="math-container">$\mathfrak{S}_0$</span> we would understand <span class="math-container">$c$</span> as <span class="math-container">$0$</span>. Since there are no relation or function symbols, we just have the set <span class="math-container">$\mathbb{Z}$</span> and could write it as</p> <p><span class="math-container">$\{\dotso, -1, c, 1, \dotso\}$</span></p> <p>If we take the usual function <span class="math-container">$+$</span> and add it, so that <span class="math-container">$L=\{c,+\}$</span>, then <span class="math-container">$\mathfrak{S}_0$</span> has the property that <span class="math-container">$c+c=c$</span>, for example.</p> <p>I hope you understand what I am asking for.</p> <p>I think it boils down to:</p> <blockquote> <p>Is there a difference between the structure <span class="math-container">$\mathfrak{S}_n$</span> as L-structure and <span class="math-container">$\mathfrak{S}_n$</span> as <span class="math-container">$L_\emptyset$</span>-structure, where <span class="math-container">$L_\emptyset=\emptyset$</span> (so it does not contain a constant symbol)?</p> </blockquote> <p>But I want to get as much insight here as possible. So if you do not understand what I am asking for, it might be best if you just take a guess. :)</p> <p>Thanks in advance.</p>
Mark Kamsma
661,457
<p>Clive's answer is already a good one, I just wanted to add another important point about constants. They give us the power to say infinitely many things about one element.</p> <p>For example, if we consider <a href="https://en.wikipedia.org/wiki/Peano_axioms" rel="noreferrer">Peano Arithmetic</a> then obviously <span class="math-container">$\mathbb N$</span> is a model. Now, add a constant <span class="math-container">$c$</span> to our language and add sentences <span class="math-container">$c &gt; \bar n$</span> for all <span class="math-container">$n \in \mathbb N$</span> (where <span class="math-container">$\bar n$</span> stands for 1 added <span class="math-container">$n$</span> times: <span class="math-container">$1 + 1 + \ldots + 1$</span>). This new theory is consistent by compactness, so it has a model <span class="math-container">$M$</span>. In <span class="math-container">$M$</span> we have an interpretation for <span class="math-container">$c$</span>, which is bigger than all natural numbers. So we obtain a nonstandard model of arithmetic. Something similar can be done to create a model that looks like the reals, but has infinitesimals.</p>
4,032,969
<p>I have an integral that depends on two parameters <span class="math-container">$a\pm\delta a$</span> and <span class="math-container">$b\pm \delta b$</span>. I am evaluating this integral numerically, and I have not found a Python routine that propagates the parameter uncertainties through the integral.</p> <p>So I have calculated the integral for each combination of the min/max values of a and b. As a result I have obtained 4 values:</p> <p><span class="math-container">$$(a + \delta a, b + \delta b) = 13827.450210 \pm 0.000015~~(1)$$</span> <span class="math-container">$$(a + \delta a, b - \delta b) = 13827.354688 \pm 0.000015~~(2)$$</span> <span class="math-container">$$(a - \delta a, b + \delta b) = 13912.521548 \pm 0.000010~~(3)$$</span> <span class="math-container">$$(a - \delta a, b - \delta b) = 13912.425467 \pm 0.000010~~(4)$$</span></p> <p>So it is clear that <span class="math-container">$(2)$</span> gives the min and <span class="math-container">$(3)$</span> gives the max. Let us denote the result of the integral as <span class="math-container">$c \pm \delta c$</span>. So my problem is: what are <span class="math-container">$c$</span> and <span class="math-container">$\delta c$</span> here?</p> <p>The integral is something like this</p> <p><span class="math-container">$$I(a,b,x) =C\int_0^b \frac{dx}{\sqrt{a(1+x)^3 + \eta(1+x)^4 + (\gamma^2 - a - \eta)}}$$</span></p> <p>where <span class="math-container">$\eta$</span> and <span class="math-container">$\gamma$</span> are constants.</p> <p>Note: You can also generalize it by taking <span class="math-container">$\eta \pm \delta \eta$</span>, but it is not necessary for now.</p> <p>I have to take derivatives and integrals numerically. There is no known analytical solution for the integral.</p> <p><span class="math-container">$\eta = 4.177 \times 10^{-5}$</span>, <span class="math-container">$a = 0.1430 \pm 0.0011$</span>, <span class="math-container">$b = 1089.92 \pm 0.25$</span>, <span class="math-container">$\gamma = 0.6736 \pm 0.0054$</span>, <span class="math-container">$C = 2997.92458$</span></p>
Claude Leibovici
82,404
<p>What is inside the square root is <span class="math-container">$$\gamma ^2+ (3 a+4 \eta )x+ 3( a+2 \eta )x^2+ (a+4 \eta )x^3+\eta x^4\tag 1$$</span> Write it as <span class="math-container">$$\eta\, (x-r_1 ) (x-r_2 ) (x-r_3 ) (x-r_4)$$</span> where the <span class="math-container">$r_i$</span> are the roots of the quartic polynomial given in <span class="math-container">$(1)$</span>.</p> <p>So, we need to compute <span class="math-container">$$I(a,b)=\frac C {\sqrt \eta}\,\int_0^b\frac{dx}{\sqrt{(x-r_1 ) (x-r_2) (x-r_3 ) (x-r_4 )}}$$</span> and we have an elliptic integral of the first kind (have a look <a href="https://www.wolframalpha.com/input/?i=Integrate%5B1%2FSqrt%5B%28x-a%29%28x-b%29%28x-c%29%28x-d%29%5D%2Cx%5D" rel="nofollow noreferrer">here</a>).</p> <p>So, now, we can compute all the partial derivatives with respect to <span class="math-container">$(\eta,r_1,r_2,r_3,r_4)$</span> and use the chain rule.</p> <p>So, assuming no cross terms, the final first-order result writes <span class="math-container">$$I = I_0 +\frac {\partial I}{\partial a} ( a-a_0)+\frac {\partial I}{\partial b} (b-b_0)+\frac {\partial I}{\partial \eta} (\eta-\eta_0)+\frac {\partial I}{\partial \gamma} (\gamma-\gamma_0)$$</span> with <span class="math-container">$$I_0=13869.7187382790600280056975524$$</span> <span class="math-container">$$\frac {\partial I}{\partial a}=-38667.5002882782982646434723$$</span> <span class="math-container">$$\frac {\partial I}{\partial b}=0.1916010843310452774261082$$</span> <span class="math-container">$$\frac {\partial I}{\partial \eta}=-1517907.851327789447487779$$</span> <span class="math-container">$$\frac {\partial I}{\partial \gamma}=-3984.5811163972118547061439$$</span></p>
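Since the question mentioned doing everything numerically in Python, here is a rough stdlib-only sketch of the same linearization (hypothetical code, not from the original answer): composite Simpson's rule for the integral, central finite differences for the partials, and first-order propagation assuming uncorrelated uncertainties.

```python
from math import sqrt

C, eta = 2997.92458, 4.177e-5

def integrand(x, a, gamma):
    return C / sqrt(a*(1 + x)**3 + eta*(1 + x)**4 + (gamma**2 - a - eta))

def I(a, b, gamma, n=100_000):
    # Composite Simpson's rule on [0, b]; n must be even.
    h = b / n
    s = integrand(0.0, a, gamma) + integrand(b, a, gamma)
    s += 4 * sum(integrand((2*k - 1)*h, a, gamma) for k in range(1, n//2 + 1))
    s += 2 * sum(integrand(2*k*h, a, gamma) for k in range(1, n//2))
    return s * h / 3

a, da = 0.1430, 0.0011
b, db = 1089.92, 0.25
g, dg = 0.6736, 0.0054

I0 = I(a, b, g)
dIda = (I(a + da, b, g) - I(a - da, b, g)) / (2*da)  # central difference
dIdb = integrand(b, a, g)   # derivative w.r.t. the upper limit is the integrand at b
dIdg = (I(a, b, g + dg) - I(a, b, g - dg)) / (2*dg)

# First-order propagation, assuming uncorrelated uncertainties
dI = sqrt((dIda*da)**2 + (dIdb*db)**2 + (dIdg*dg)**2)
print(f"I = {I0:.2f} +/- {dI:.1f}")
```

With the stated parameter values this reproduces a central value near 13869.7 and an overall uncertainty of roughly 48, dominated by the $a$ and $\gamma$ terms.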
2,648,370
<p>$$\int\frac{x^2}{\sqrt{2x-x^2}}dx$$ This is the farthest I've got: $$=\int\frac{x^2}{\sqrt{1-(x-1)^2}}dx$$</p>
haqnatural
247,767
<p><strong>Hint</strong>: substitute $$x-1= \sin t, \qquad t\in\left[-\tfrac{\pi}{2},\tfrac{\pi}{2}\right],$$ so that $$\\ x=\sin t +1\\ dx = \cos t\,dt \\ \int \frac { x^2 }{ \sqrt { 1-(x-1)^2 } } \, dx=\int \frac { \cos t (\sin t +1)^2 }{ \sqrt { 1-\sin^2{t} } } \, dt=\int (\sin t +1)^2 \, dt \\ $$ On this range $\sqrt{1-\sin^2 t}=\cos t\ge 0$, which justifies the cancellation.</p>
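The substitution can be sanity-checked numerically over a definite range (a hypothetical sketch, not from the original answer; both sides are computed with a simple midpoint rule, with the $t$-limits matching $x\in[0.5,1.5]$):

```python
from math import sqrt, sin, asin

def midpoint(f, a, b, n=10_000):
    # Midpoint-rule quadrature of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# Left side: original integrand in x; right side: transformed integrand in t
lhs = midpoint(lambda x: x**2 / sqrt(2*x - x**2), 0.5, 1.5)
rhs = midpoint(lambda t: (sin(t) + 1)**2, asin(0.5 - 1), asin(1.5 - 1))
assert abs(lhs - rhs) < 1e-6
```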
4,044,654
<p>We say that a continuous function <span class="math-container">$u:\mathbb{R}^d\to \mathbb{R}$</span> is subharmonic if it satisfies the mean value property <span class="math-container">$$u(x)\leq \frac{1}{|\partial B_r(x)|}\int_{\partial B_r(x)}u(y)\,\mathrm{d}y \qquad (\star)$$</span> for any ball <span class="math-container">$B_r(x)\subset \mathbb{R}^d$</span>.</p> <blockquote> <p>Let <span class="math-container">$u:\mathbb{R}^d\to \mathbb{R}$</span> be a convex function (hence, continuous). Is <span class="math-container">$u$</span> subharmonic?</p> </blockquote> <ul> <li><p>If <span class="math-container">$u\in C^2(\mathbb{R}^d)$</span>, this is true. Using a second-order Taylor expansion we have <span class="math-container">\begin{align*}\int_{\partial B_r(x)}(u(y)-u(x))\,\mathrm{d}y&amp;=\int_{\partial B_r(x)}\left(\nabla u(x)\cdot(y-x)+\frac{1}{2}(y-x)D^2u(\xi)(y-x)^t\right)\,\mathrm{d}y.\end{align*}</span> The first term in the above integral vanishes by symmetry, the second is non-negative because <span class="math-container">$D^2u(\xi)$</span> is a positive semi-definite matrix. Therefore, (<span class="math-container">$\star$</span>) is proven.</p> </li> <li><p>If <span class="math-container">$d=1$</span>, the statement is true when <span class="math-container">$u$</span> is continuous, in general. Indeed since balls reduce to intervals, (<span class="math-container">$\star$</span>) is easily shown to be equivalent to <span class="math-container">$u$</span> being midpoint-convex.</p> </li> </ul> <p>I'm not sure how to attack the problem in higher dimensions. Of course <span class="math-container">$(\star)$</span> is true for affine functions in any dimension, and I'd like to use the fact that the graph of a convex function lies below that of an affine function, loosely speaking. 
However, to close the estimate I would need <span class="math-container">$u$</span> to be equal to the affine function at the boundary of the ball, and this is not necessarily possible.</p>
Martin R
42,969
<p>Using that <span class="math-container">$u$</span> is midpoint-convex works in higher dimensions as well.</p> <p><span class="math-container">$y \mapsto x - (y-x) = 2x-y$</span> maps the sphere <span class="math-container">$\partial B_r(x)$</span> bijectively onto itself (each point is mapped to the “opposite” point on the sphere). It follows that <span class="math-container">$$ \int_{\partial B_r(x)} u(y) \, dy = \int_{\partial B_r(x)} u(2x-y) \, dy $$</span> and therefore <span class="math-container">$$ \int_{\partial B_r(x)} u(y) \, dy = \int_{\partial B_r(x)} \frac 12\bigl(u(y) + u(2x-y)\bigr) \, dy \\ \ge \int_{\partial B_r(x)} u\left(\frac{y + (2x-y)}{2}\right) \, dy = \int_{\partial B_r(x)} u(x) \, dy = |\partial B_r(x)| \cdot u(x) \, . $$</span></p>
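A numerical illustration of this antipodal-pairing argument (a hypothetical Python sketch, not part of the original answer): averaging a concrete convex function over equally spaced points of a circle, with an even number of points so that the samples pair up into antipodes exactly.

```python
from math import cos, sin, pi

def u(x, y):
    # A concrete convex (piecewise-linear) function
    return abs(x) + max(y, 2 * y)

def circle_mean(u, x0, y0, r, n=10_000):
    # Average of u over n equally spaced points of the circle of radius r;
    # n is even, so the sample set is symmetric about the center (x0, y0).
    return sum(u(x0 + r * cos(2 * pi * k / n), y0 + r * sin(2 * pi * k / n))
               for k in range(n)) / n

# Mean value over the circle dominates the value at the center
for (x0, y0, r) in [(0.0, 0.0, 1.0), (0.3, -0.2, 2.0), (-1.0, 0.5, 0.7)]:
    assert circle_mean(u, x0, y0, r) >= u(x0, y0) - 1e-9
```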
10,600
<p>As mentioned in <a href="https://matheducators.stackexchange.com/questions/1538/counterintuitive-consequences-of-standard-definitions">this question</a> students sometimes struggle with the fact that continuity is only defined at points of the function's domain. For example the function $f:\mathbb R\setminus\{0\} \to \mathbb R: x \mapsto \tfrac 1x$ is continuous although it has a "jump" at $x=0$ (<a href="https://matheducators.stackexchange.com/a/1686/5097">cf. this answer with more details</a>). So:</p> <p><em>Why is continuity only defined on the function's domain? What's the benefit?</em> How should a lecturer answer such a question from a student?</p> <hr> <p><strong>My attempt to answer the question:</strong> I would give two arguments:</p> <ul> <li>When we take the <a href="https://en.wikipedia.org/wiki/Continuous_function#Definition_in_terms_of_limits_of_sequences" rel="nofollow noreferrer">sequence limit definition of continuity</a> $\lim_{n\to\infty} f(x_n) = f\left(\lim_{n\to\infty} x_n\right) = f(x_0)$, then this definition only makes sense when $x_0 = \lim_{n\to\infty} x_n$ is in the domain of $f$.</li> <li>The concept students have in mind is "continuous continuation" and not "continuity". Thus, one has to distinguish between both concepts.</li> </ul> <p>What do you think about my answer? Have I missed something or are there other good arguments?</p> <hr> <p><strong>Note:</strong> This is another follow-up question to <a href="https://matheducators.stackexchange.com/questions/10597/how-can-i-motivate-the-formal-definition-of-continuity">How can I motivate the formal definition of continuity?</a> I hope that's okay since I am asking here about another aspect of continuity. I want to write an introductory article about continuity. That's the reason why I ask all these questions here...</p>
user52817
1,680
<p>The prototypical way for a function to not be continuous is that of a jump discontinuity. Imagine a jump discontinuity on the order of a few micrometers, like the width of a hair. If you are tracing the graph of the function with an everyday pencil, you would slide right across the discontinuity without even noticing its presence. However, if you shrunk yourself and your pencil to the micrometer scale, you would suddenly notice the discontinuity. So the width of the pencil corresponds to $\epsilon$. The parameter $\delta$ is a localizing parameter that allows one to focus on the region near the discontinuity. There might be another location in the domain where there is a jump discontinuity on the order of a few angstroms, so the localizing parameter $\delta$ might need to be smaller. In order to rule out a jump discontinuity at $x_0$ you have to look at all small scales to make sure you are not overlooking anything, hence <em>for all</em> $\epsilon&gt;0\ldots$. </p> <p>So I think a natural way to develop the informal definition of continuity to the formal definition is to focus on the intuitive <em>negation</em> of continuity in the form of a jump discontinuity. But as you probably know, the formal definition of continuity permits some continuous functions that only baffle the mind when we try to visualize. For example continuous functions that are nowhere differentiable, or a function that is continuous only at one point. </p>
10,600
<p>As mentioned in <a href="https://matheducators.stackexchange.com/questions/1538/counterintuitive-consequences-of-standard-definitions">this question</a> students sometimes struggle with the fact that continuity is only defined at points of the function's domain. For example the function $f:\mathbb R\setminus\{0\} \to \mathbb R: x \mapsto \tfrac 1x$ is continuous although it has a "jump" at $x=0$ (<a href="https://matheducators.stackexchange.com/a/1686/5097">cf. this answer with more details</a>). So:</p> <p><em>Why is continuity only defined on the function's domain? What's the benefit?</em> How should a lecturer answer such a question from a student?</p> <hr> <p><strong>My attempt to answer the question:</strong> I would give two arguments:</p> <ul> <li>When we take the <a href="https://en.wikipedia.org/wiki/Continuous_function#Definition_in_terms_of_limits_of_sequences" rel="nofollow noreferrer">sequence limit definition of continuity</a> $\lim_{n\to\infty} f(x_n) = f\left(\lim_{n\to\infty} x_n\right) = f(x_0)$, then this definition only makes sense when $x_0 = \lim_{n\to\infty} x_n$ is in the domain of $f$.</li> <li>The concept students have in mind is "continuous continuation" and not "continuity". Thus, one has to distinguish between both concepts.</li> </ul> <p>What do you think about my answer? Have I missed something or are there other good arguments?</p> <hr> <p><strong>Note:</strong> This is another follow-up question to <a href="https://matheducators.stackexchange.com/questions/10597/how-can-i-motivate-the-formal-definition-of-continuity">How can I motivate the formal definition of continuity?</a> I hope that's okay since I am asking here about another aspect of continuity. I want to write an introductory article about continuity. That's the reason why I ask all these questions here...</p>
Stephan Kulla
5,097
<p>To give a partial answer to my question: In "A radical approach to real analysis" David Bressoud gives a good explanation of why the intermediate value property (IVP) is not a good substitute for continuity (pp. 91 ff):</p> <ol> <li>The image of a closed interval might not be bounded.</li> <li>The sum of two functions with the IVP might not be a function with the IVP.</li> </ol> <p>For (1) take $f:[0,1]\to\mathbb R: x\mapsto \begin{cases} \tfrac 1x \sin\left(\tfrac 1x\right) &amp; ;x\neq 0 \\ 0 &amp; ;x=0 \end{cases}$</p> <p>For (2) take $f:\mathbb R\to\mathbb R: x\mapsto \begin{cases} \sin^2\left(\tfrac 1x\right) &amp; ;x\neq 0 \\ 0 &amp; ;x=0 \end{cases}$ and $g:\mathbb R\to\mathbb R: x\mapsto \begin{cases} \cos^2\left(\tfrac 1x\right) &amp; ;x\neq 0 \\ 0 &amp; ;x=0 \end{cases}$ so that $f(x)+g(x)=\begin{cases} 1 &amp; ;x\neq 0 \\ 0 &amp; ;x=0 \end{cases}$</p>
10,600
<p>As mentioned in <a href="https://matheducators.stackexchange.com/questions/1538/counterintuitive-consequences-of-standard-definitions">this question</a> students sometimes struggle with the fact that continuity is only defined at points of the function's domain. For example the function $f:\mathbb R\setminus\{0\} \to \mathbb R: x \mapsto \tfrac 1x$ is continuous although it has a "jump" at $x=0$ (<a href="https://matheducators.stackexchange.com/a/1686/5097">cf. this answer with more details</a>). So:</p> <p><em>Why is continuity only defined on the function's domain? What's the benefit?</em> How should a lecturer answer such a question from a student?</p> <hr> <p><strong>My attempt to answer the question:</strong> I would give two arguments:</p> <ul> <li>When we take the <a href="https://en.wikipedia.org/wiki/Continuous_function#Definition_in_terms_of_limits_of_sequences" rel="nofollow noreferrer">sequence limit definition of continuity</a> $\lim_{n\to\infty} f(x_n) = f\left(\lim_{n\to\infty} x_n\right) = f(x_0)$, then this definition only makes sense when $x_0 = \lim_{n\to\infty} x_n$ is in the domain of $f$.</li> <li>The concept students have in mind is "continuous continuation" and not "continuity". Thus, one has to distinguish between both concepts.</li> </ul> <p>What do you think about my answer? Have I missed something or are there other good arguments?</p> <hr> <p><strong>Note:</strong> This is another follow-up question to <a href="https://matheducators.stackexchange.com/questions/10597/how-can-i-motivate-the-formal-definition-of-continuity">How can I motivate the formal definition of continuity?</a> I hope that's okay since I am asking here about another aspect of continuity. I want to write an introductory article about continuity. That's the reason why I ask all these questions here...</p>
John
6,433
<p>There is an entirely different perspective on this entire problem which is revealed by asking: Is continuity what we should teach students? That is, before we think about motivating a formal definition of continuity, we might wish to question whether continuity is the concept that we really need students to know.</p> <p>An alternate perspective is that the concept which we want to motivate is actually uniform continuity, and that we can approach this concept using much more concrete methods than with the current conception of continuity at a point.</p> <p>The motivation for uniform continuity comes from calculating values of functions (like the square root function $\sqrt{x}$) to a certain number of decimal places and finding that if we have an approximation to say 2 decimal places then this approximation is constant over an interval of arguments. Another way of saying this is that a function is uniformly continuous if we expect that its decimal representation "settles down" over a certain interval of arguments. Said yet another way: we do not give a function arguments which are not rational, i.e. we do not give endless decimals to a function when we're calculating; rather we suppose that a function giving an endless decimal is constant up to a certain number of decimal places over some finite interval. There can be some complexity in the fact that an infinite decimal number can have more than one representation (e.g. 1.999... is a synonym for 2.0000...), but these can be addressed just as concretely as the motivation for uniform continuity.</p>
631,214
<p>Two kids start to run from the same point, in the same direction, around a circular track with perimeter 400 m. The velocity of each kid is constant. The first kid runs each lap in 20 sec less than his friend. They meet for the first time 400 sec after the start. Q: Find their velocities.</p> <p>I came up with one equation:</p> <p>400/v1 + 20 = 400/v2 </p> <p>But what is the second equation? ("They met for the first time 400 sec after the start.")</p>
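One reading of the meeting condition (a hypothetical sketch, not an official answer to this question): meeting for the first time after 400 s on a closed track means the faster kid has gained exactly one full lap, so 400·v1 - 400·v2 = 400. Combined with the lap-time equation this pins down the speeds:

```python
from math import sqrt

# Lap times: 400/v1 + 20 = 400/v2, and one extra lap gained in 400 s: v1 - v2 = 1.
# Substituting v1 = v2 + 1 into the first equation gives v2^2 + v2 - 20 = 0.
v2 = (-1 + sqrt(1 + 80)) / 2   # positive root of v2^2 + v2 - 20 = 0
v1 = v2 + 1
assert (v1, v2) == (5.0, 4.0)   # 5 m/s and 4 m/s

# Check both original conditions
assert abs(400/v1 + 20 - 400/v2) < 1e-9
assert abs(400*(v1 - v2) - 400) < 1e-9
```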
Siméon
51,594
<p>Using the integral formula $\ln(x)=\int_1^x \frac{dt}{t}$ and $\ln(y^2)=2\ln(y)$, we have for all $x \in (0,1)$, $$ |x\ln(x)| =2\left|x\ln(x^{1/2})\right| \leq 2\int_{\sqrt{x}}^1\frac{x}{t}\,dt\leq 2 \int_{\sqrt{x}}^1\frac{x}{\sqrt{x}}\,dt \leq 2\sqrt{x}(1-\sqrt{x}). $$ The conclusion follows by squeezing since the limit of the r.h.s. as $x\to 0^+$ is $0$ .</p>
186,890
<p>Working with another program, SolidWorks, I was able to get a plot with a curve very close to my data points:</p> <p><a href="https://i.stack.imgur.com/DooKo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DooKo.png" alt="enter image description here"></a></p> <p>I tried to reproduce this accuracy, but using the <code>Fit</code> function I could not get a plot as accurate.</p> <pre><code>Clear["Global`*"] dados={{0,0},{1,1000},{2,-750},{3,250},{4,-1000},{5,0}}; Plot[Evaluate[Fit[dados,{1,x,x^2,x^3,x^4,x^5,x^6},x]],{x,0,5}] </code></pre> <p><a href="https://i.stack.imgur.com/syQGd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/syQGd.png" alt="enter image description here"></a></p> <p>Is there something I need to modify, or is there a more effective method than this?</p>
Bob Hanlon
9,362
<p>Use <a href="https://reference.wolfram.com/language/ref/Interpolation.html" rel="nofollow noreferrer"><code>Interpolation</code></a></p> <pre><code>Clear["Global`*"] dados = {{0, 0}, {1, 1000}, {2, -750}, {3, 250}, {4, -1000}, {5, 0}}; {xmin, xmax} = MinMax[dados[[All, 1]]] (* {0, 5} *) f = Interpolation[dados, InterpolationOrder -&gt; 5]; Plot[f[x], {x, xmin, xmax}, Epilog -&gt; {Red, AbsolutePointSize[4], Point[dados]}] </code></pre> <p><a href="https://i.stack.imgur.com/AROxd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AROxd.png" alt="enter image description here"></a></p> <p>Alternatively, use <a href="https://reference.wolfram.com/language/ref/InterpolatingPolynomial.html" rel="nofollow noreferrer"><code>InterpolatingPolynomial</code></a></p> <pre><code>g[x_] = InterpolatingPolynomial[dados, x] // Simplify (* 125/6 x (520 - 829 x + 450 x^2 - 101 x^3 + 8 x^4) *) Plot[g[x], {x, xmin, xmax}, Epilog -&gt; {Red, AbsolutePointSize[4], Point[dados]}] </code></pre> <p><a href="https://i.stack.imgur.com/2i6ys.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2i6ys.png" alt="enter image description here"></a></p> <p><strong>EDIT:</strong> Using <a href="https://reference.wolfram.com/language/ref/Fit.html" rel="nofollow noreferrer"><code>Fit</code></a></p> <pre><code>g[x] == Fit[dados, {1, x, x^2, x^3, x^4, x^5}, x] // Rationalize[#, 10^-6] &amp; // Simplify (* True *) </code></pre>
3,628,374
<p>We have that <span class="math-container">$W \in \mathbb{R}^{n \times m}$</span> and we want to find <span class="math-container">$$\text{prox}(W) = \arg\min_Z\Big[\frac{1}{2} \langle W-Z, W-Z \rangle+\lambda ||Z||_* \Big]$$</span></p> <p>Here, <span class="math-container">$||Z||_*$</span> represents the trace norm of <span class="math-container">$Z$</span>.</p> <p>I tried getting the derivative of the whole thing, and to do that I used that the derivative of trace norm is <span class="math-container">$UV^T$</span> (according to <a href="https://math.stackexchange.com/questions/701062/proximal-operator-and-the-derivative-of-the-matrix-nuclear-norm">Proximal Operator and the Derivative of the Matrix Nuclear Norm</a>). However, after this, I don't really know how to proceed. </p>
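For reference, this proximal operator has a classical closed form: singular value soft-thresholding. If $W = U\,\mathrm{diag}(\sigma)\,V^T$ is an SVD, then $\text{prox}(W) = U\,\mathrm{diag}(\max(\sigma_i-\lambda, 0))\,V^T$. A sketch assuming NumPy is available (not a definitive implementation):

```python
import numpy as np

def prox_nuclear(W, lam):
    # Singular value thresholding: shrink each singular value by lam, floor at 0
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

W = np.diag([3.0, 1.0])
assert np.allclose(prox_nuclear(W, 0.0), W)                     # lam = 0: identity
assert np.allclose(prox_nuclear(W, 1.0), np.diag([2.0, 0.0]))   # shrink by 1
```

Intuitively, the Frobenius term is unitarily invariant, so the problem decouples into a per-singular-value soft-thresholding, which is where the subgradient $UV^T$ mentioned in the question enters.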
Community
-1
<p>You have shown that all elements of order <span class="math-container">$p$</span> are in the same conjugacy class. But <span class="math-container">$\begin{bmatrix}1&amp;1\\0&amp;1\end{bmatrix}$</span> has order <span class="math-container">$p$</span>.</p>
3,131,516
<p>I would like to know if this differential equation can be transformed into the hypergeometric differential equation:</p> <p><span class="math-container">$ 4 (u-1) u \left((u-1) u \,\varphi_1''(u)+(u-2) \,\varphi_1'(u)\right)+\varphi_1(u) \left((u-1) u \omega ^2-u (u+4)+8\right)=0$</span></p>
JMoravitz
179,297
<p>My thought process as I go to factor this expression goes something like the following:</p> <p><span class="math-container">$$2x^3+3x^2-2x$$</span></p> <p>&quot;Oh, there is no constant term and everything is a multiple of <span class="math-container">$x$</span>... so that means I can safely factor that out by itself and I can look more closely at what is left&quot;</p> <p><span class="math-container">$$x(2x^2+3x-2)$$</span></p> <p>&quot;Alright, I've got a quadratic left over... oh hey, I was taught about quadratics a really long time ago in great detail. Hmm, which technique do I want to use today. Well, I get confused if the coefficient of the <span class="math-container">$x^2$</span> is not one, so I'll just go and use the quadratic formula&quot;</p> <blockquote> <p><strong>The Quadratic Formula</strong>: (<em>most people are taught to memorize this, but it should be well within your ability to prove</em>) Given a degree two polynomial of the form <span class="math-container">$Ax^2+Bx+C$</span>, it can be factored as</p> <p><span class="math-container">$$A\left(x-\frac{-B+\sqrt{B^2-4AC}}{2A}\right)\left(x-\frac{-B-\sqrt{B^2-4AC}}{2A}\right)$$</span></p> </blockquote> <p>So, looking at <span class="math-container">$2x^2+3x-2$</span> in this light we see that this section factors as</p> <p><span class="math-container">$$2\left(x-\dfrac{-3+\sqrt{3^2-4\cdot 2\cdot (-2)}}{2\cdot 2}\right)\left(x-\dfrac{-3-\sqrt{3^2-4\cdot 2\cdot (-2)}}{2\cdot 2}\right)$$</span></p> <p>which after simplifying all of the arithmetic and including the <span class="math-container">$x$</span> we left to the side earlier becomes</p> <p><span class="math-container">$$2x(x-\frac{1}{2})(x+2)$$</span></p> <p>(<em>or if you prefer moving the <span class="math-container">$2$</span> on the far outside into one of the parentheses and avoiding fractions can be written as <span class="math-container">$x(2x-1)(x+2)$</span></em>)</p> <p>Other techniques exist, especially when you assume the
factorization includes only integers as are common in introductory examples, however the power of the quadratic formula is immense and it will always work even in those situations where other techniques might fail.</p> <p>When the quadratic formula is first taught, it is common in the United States at least for it to be taught via song to make it easier to remember, for example <a href="https://www.youtube.com/watch?v=VOXYMRcWbF8" rel="nofollow noreferrer">here</a>. (<em>This is not the melody I was taught it to, but so long as you can fit it to a melody it shouldn't really matter which melody it is</em>).</p> <blockquote class="spoiler"> <p> It is worth pointing out that there does exist a generalized formula for how to factor an arbitrary cubic equation that <em>some</em> people might have been taught and can memorize but it is a great deal more complicated than the quadratic formula. It is not expected for you to learn this. It is also worth pointing out that there also does exist a generalized formula for how to factor an arbitrary quartic, but this formula takes several pages to even write down. No one in their right minds would bother memorizing the fully generalized formula for the quartic, only maybe some few special cases. Finally, in your senior year of a mathematics degree in undergraduate or in graduate school it is common to reproduce the proof that there <em>does not</em> and <em>cannot</em> exist a fully generalized formula for finding the factorization of a quintic or above.</p> </blockquote>
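The factorization $A(x-r_1)(x-r_2)$ with $r_{1,2}=\frac{-B\pm\sqrt{B^2-4AC}}{2A}$ is easy to check mechanically; a hypothetical stdlib-only Python sketch (assuming a nonnegative discriminant):

```python
from math import sqrt

def quadratic_roots(A, B, C):
    # Roots of A x^2 + B x + C, assuming B^2 - 4AC >= 0
    d = sqrt(B*B - 4*A*C)
    return ((-B + d) / (2*A), (-B - d) / (2*A))

r1, r2 = quadratic_roots(2, 3, -2)
assert sorted((r1, r2)) == [-2.0, 0.5]

# A(x - r1)(x - r2) reproduces the original quadratic at sample points
for x in (-3.0, 0.0, 1.25, 4.0):
    assert abs(2*(x - r1)*(x - r2) - (2*x**2 + 3*x - 2)) < 1e-9
```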
1,512,515
<p>I have tested all the primes up to 50,000,000 and did not find a single prime which satisfies the condition "the sum of the digits of the prime, written in base 7, is divisible by 3". E.g. </p> <ul> <li>13 (Base10) = 16 (Base7) --> 7 (sum of digits in base 7)</li> <li>1021 (Base10) = 2656 (Base7) --> 19</li> <li>823541 (Base10) = 6666665 (Base7) --> 41</li> <li>46941953 (Base10) = 1110000002 (Base7) --> 5</li> </ul> <p>Here you can see the distribution of sums in base 7:</p> <p><a href="http://s12.postimg.org/lcf3tntzx/prime_sum_in_base7_distribution.png" rel="nofollow">http://s12.postimg.org/lcf3tntzx/prime_sum_in_base7_distribution.png</a></p> <ul> <li>COUNT(*) - the number of occurrences</li> <li>SUM7 - sum of digits in base7</li> <li>MIN(PRIME) - minimal prime in base10</li> <li>MAX(PRIME) - maximal prime in base10</li> </ul> <p>As you can see, SUM7 values of 9, 15, 21, 27, 33 are missing in the list, though other valid sums are widely represented. By 'valid sum' I mean that the sum must be odd, because "in an odd base, a number is odd if and only if it has an odd number of odd digits."</p> <p>So what is the least prime whose sum of digits written in base 7 is divisible by 3? Or is it possible to prove that no such prime exists?</p>
Matthias
164,923
<p>You know: a number is divisible by three if and only if the sum of its digits in base ten is divisible by three. Therefore every prime number larger than three has a digit sum which is not divisible by three.</p> <p>So what about base $7$? When switching from base $10$ to base $7$, note that $7 \equiv 1 \pmod 3$, just as $10 \equiv 1 \pmod 3$, so every power $7^i$ exceeds a multiple of $3$ by exactly $1$. Therefore the sum of digits modulo three stays the same, and the statement above holds for base $7$ as well.</p> <p>Alternatively, look at the multiples of three in base $7$:</p> <p>$$3_7, 6_7, 12_7, 15_7, 21_7, 24_7, 30_7, 33_7, 36_7, 42_7,...$$ Each time you "overflow" a digit you have to increase the next digit by $1$ and decrease the current one by $4$, so adding $3$ changes the sum of digits by $+3$ or by $-3$; in any case by a multiple of $3$.</p>
1,512,515
<p>I have tested all the primes up to 50,000,000 and did not find a single prime which satisfies the condition "the sum of the digits of the prime, written in base 7, is divisible by 3". E.g. </p> <ul> <li>13 (Base10) = 16 (Base7) --> 7 (sum of digits in base 7)</li> <li>1021 (Base10) = 2656 (Base7) --> 19</li> <li>823541 (Base10) = 6666665 (Base7) --> 41</li> <li>46941953 (Base10) = 1110000002 (Base7) --> 5</li> </ul> <p>Here you can see the distribution of sums in base 7:</p> <p><a href="http://s12.postimg.org/lcf3tntzx/prime_sum_in_base7_distribution.png" rel="nofollow">http://s12.postimg.org/lcf3tntzx/prime_sum_in_base7_distribution.png</a></p> <ul> <li>COUNT(*) - the number of occurrences</li> <li>SUM7 - sum of digits in base7</li> <li>MIN(PRIME) - minimal prime in base10</li> <li>MAX(PRIME) - maximal prime in base10</li> </ul> <p>As you can see, SUM7 values of 9, 15, 21, 27, 33 are missing in the list, though other valid sums are widely represented. By 'valid sum' I mean that the sum must be odd, because "in an odd base, a number is odd if and only if it has an odd number of odd digits."</p> <p>So what is the least prime whose sum of digits written in base 7 is divisible by 3? Or is it possible to prove that no such prime exists?</p>
Matthias Klupsch
19,700
<p>Let $a = \sum_{i = 0}^n a_i 7^i$ be a number written in base $7$, that is, $0 \leq a_i \leq 6$. Note that $7^i = (1 + 3 \cdot 2)^i = 1 + 3 b_i$ for some $b_i \geq 0$. Hence $a = \sum_{i = 0}^n a_i + 3 \sum_{i = 0}^n a_i b_i$ is divisible by $3$ if and only if its sum of digits is divisible by $3$. Hence the sum of digits in base $7$ of any prime larger than $3$ will not be divisible by $3$.</p>
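The equivalence proved above is easy to spot-check numerically. Here is a sketch in Python (the helper name `digit_sum_base7` is mine, not from the answer):

```python
def digit_sum_base7(n):
    # sum of the digits of n written in base 7
    s = 0
    while n:
        s += n % 7
        n //= 7
    return s

# divisibility by 3 matches divisibility of the base-7 digit sum by 3
for n in range(1, 10_000):
    assert (n % 3 == 0) == (digit_sum_base7(n) % 3 == 0)

# the digit sums reported in the question
assert digit_sum_base7(13) == 7
assert digit_sum_base7(1021) == 19
assert digit_sum_base7(823541) == 41
assert digit_sum_base7(46941953) == 5
```

In particular, a prime larger than 3 is not divisible by 3, so its base-7 digit sum cannot be divisible by 3 either.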
1,049,841
<p>Out of interest:</p> <p>Suppose I have the map $\phi: R \longrightarrow R/I$, where $R$ is a ring and $I$ is a nilpotent ideal.</p> <p>Would I be right in saying that applying this map to the Jacobson radical of $R$ gives the Jacobson radical of $R/I$?</p> <p>i.e. is the following true: $\phi(J(R)) = J(R/I)$?</p> <p>I am guessing this is right but can't be certain.</p> <p>Also, if $\phi$ is surjective with kernel $I$, would this imply that $R$ is artinian, with $R/I$ semisimple?</p> <p>Any help would be great, thank you! </p>
egreg
62,967
<p>A nil ideal (in particular a nilpotent ideal) is contained in every maximal right ideal. Indeed, if $I$ is a nil ideal and $\mathfrak{m}$ is a maximal right ideal with $I\not\subseteq\mathfrak{m}$, we have $r+x=1$ with $r\in I$ and $x\in\mathfrak{m}$. But, as $r$ is nilpotent, say $r^n=0$, we have $$ (1-r)(1+r+r^2+\dots+r^{n-1})=1 $$ so $x\in\mathfrak{m}$ would be invertible: absurd.</p> <p>Therefore your nilpotent ideal $I$ is certainly contained in $J(R)$ and so $$ J(R/I)=J(R)/I. $$</p> <p>The quotient $R/I$ need not be artinian nor semisimple. Just consider $R=\mathbb{Z}$ and $I=\{0\}$.</p>
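The key step — that $1-r$ is invertible when $r$ is nilpotent, via $(1-r)(1+r+\dots+r^{n-1})=1$ — can be checked concretely. A minimal sketch in Python with a nilpotent $2\times 2$ matrix (the matrix and helper are my own illustration):

```python
def mat_mul(A, B):
    # product of 2x2 integer matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
r = [[0, 3], [0, 0]]                 # nilpotent: r^2 = 0
assert mat_mul(r, r) == [[0, 0], [0, 0]]

one_minus_r = [[1, -3], [0, 1]]      # I - r
geom = [[1, 3], [0, 1]]              # I + r (+ r^2 + ..., all zero here)

# (1 - r)(1 + r + ... + r^{n-1}) = 1, so 1 - r is invertible
assert mat_mul(one_minus_r, geom) == I
```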
12,281
<p>In propositional logic, for example: $$\neg p \vee q.$$ </p> <p>If $p$ is true at the outset, does that mean it must be considered false when comparing with q in the disjunction?</p> <p>P.S. I am unsure about tags for this question.</p>
Arturo Magidin
742
<p>If $p$ is true, then $\neg p$ is false. To evaluate $\neg p \vee q$, you must evaluate $\neg p$ and you must evaluate $q$. If either $\neg p$ is true or $q$ is true, then $\neg p\vee q$ is true. </p> <p>In other words, you really need to figure out $(\neg p)\vee q$, performing first the operation inside the parentheses, then the disjunction. </p>
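The evaluation order described here can be made explicit in a few lines of code (a sketch I added; Python's `not`/`or` stand in for $\neg$/$\vee$):

```python
from itertools import product

# evaluate the parenthesized negation first, then the disjunction
for p, q in product([True, False], repeat=2):
    not_p = not p            # step 1: inside the parentheses
    result = not_p or q      # step 2: the disjunction
    print(p, q, result)

# when p is true, (not p) is false, so the formula reduces to q
assert all((not p or q) == q for p in (True,) for q in (True, False))
```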
12,281
<p>In propositional logic, for example: $$\neg p \vee q.$$ </p> <p>If $p$ is true at the outset, does that mean it must be considered false when comparing with q in the disjunction?</p> <p>P.S. I am unsure about tags for this question.</p>
Yuval Filmus
1,277
<p>If $p$ is true, then $\lnot p \lor q \Leftrightarrow q$.</p> <p>In general, $p$ and $\lnot p$ have the opposite value: if one is true then the other is false, and vice versa.</p> <p>You can think of $p$ as some proposition, say "today is Sunday". Then $\lnot p$ stands for "today is <i>not</i> Sunday".</p>
12,281
<p>In propositional logic, for example: $$\neg p \vee q.$$ </p> <p>If $p$ is true at the outset, does that mean it must be considered false when comparing with q in the disjunction?</p> <p>P.S. I am unsure about tags for this question.</p>
Dan Christensen
3,515
<p>I'm not sure I understand your question, but this may help.</p> <p>Truth table for ~p v q:</p> <pre><code>p  q | ~p | ~p v q
T  T |  F |   T
T  F |  F |   F
F  T |  T |   T
F  F |  T |   T
</code></pre> <p>If p is true, and ~p v q is true (first row only), then q is true.</p> <p>Note that ~p v q is logically equivalent to p => q. </p>
1,761,668
<p>Wikipedia says about logical consequence:</p> <blockquote> <p>A formula φ is a logical consequence of a formula ψ if every interpretation that makes ψ true also makes φ true. In this case one says that φ is logically implied by ψ.</p> </blockquote> <p>But if φ and ψ are both true under some interpretations, then aren't they on equal footing? Why is one the logical consequence of the other? </p> <p>In extension, if we have a set of expressions $S = \{X_{1}, X_{2}, X_{3}, X_{4}\}$ and this set is satisfied by an interpretation $I$, so that every expression $X$ in $S$ is satisfied by $I$, then couldn't we just choose the subset $S' = \{X_2, X_{3}\}$ and claim that $X_{1}$ and $X_{4}$ are logical consequences of $S'$? It seems to my (naive) eyes that all expressions in X stand in mutual entailment, which somehow seems wrong. </p>
joy
101,393
<p>Consider the complement of $M$ and take a sequence there to show that it is closed. Since we are working in a metric space, this approach works.</p>
1,910,927
<p>$$x^2y+y^2z+xz^2-yz^2-x^2z-xy^2=(x-y)(x-z)(y-z)$$</p> <p>I would like to know if there is any method by which one can obtain a factorization like this. </p>
Jean Marie
305,862
<p>Let us denote by lower case letters $a,b,c,d,e$ the abscissas of points $A,B,C,D,E$ resp.</p> <p>Two basic observations about this issue:</p> <ul> <li><p>it is "up to a translation", which allows us to take $E$ as the origin, i.e., $e=0$.</p></li> <li><p>it is "up to a symmetry", which allows $D$ to be set at abscissa $d \geq 0$. </p></li> </ul> <p>Consider the two last constraints, the other constraints being <strong>useless</strong>.</p> <p>$$\begin{cases}|c-d|&amp;=&amp;5\\d&amp;=&amp;4\end{cases} \ \ \Rightarrow \ \ |c-4|=5.$$</p> <p>Thus 2 cases:</p> <ul> <li><p>$c=-1$; thus $CE=1$.</p></li> <li><p>$c=9$; thus $CE=9$.</p></li> </ul> <p>Edit: Here is a tree explaining how the other possible positions of points $A,B,C$ can be found (strictly speaking, it is not exactly a tree, because two branches meet in a "leaf"...). Taking a branch in the upwards direction like $(A_3,B_1,C_1,D)$ provides a solution ($E$ being fixed).</p> <p><a href="https://i.stack.imgur.com/08i51.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/08i51.jpg" alt="enter image description here"></a></p>
1,910,927
<p>$$x^2y+y^2z+xz^2-yz^2-x^2z-xy^2=(x-y)(x-z)(y-z)$$</p> <p>I would like to know if there is any method by which one can obtain a factorization like this. </p>
user491617
491,617
<p>Given: equation 1: $b-a = 2$;</p> <p>equation 2: $c-b = 3$;</p> <p>equation 3: $d-c = 5$;</p> <p>equation 4: $e-d = 4$.</p> <p>We need to find $e - c$ and $e - a$.</p> <p>Solution: add equation 3 and equation 4 and you get $e-c = 9$;</p> <p>add all of them and you get $e - a = 14$.</p>
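The same bookkeeping — summing consecutive differences — can be sketched in Python (variable names are mine):

```python
from itertools import accumulate

gaps = [2, 3, 5, 4]                  # b-a, c-b, d-c, e-d

# positions relative to a (take a = 0)
a, b, c, d, e = accumulate([0] + gaps)

assert e - c == 9    # sum of equations 3 and 4
assert e - a == 14   # sum of all four equations
```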
44,552
<p>I was pushing my way through a physics book when the author separated the variables of the Schrödinger equation and I lost the plot:</p> <p>$$\Psi (x, t) = \psi (x) T(t)$$</p> <p>can someone please explain how this technique works and is used? It can be in general maths or in the context of this problem. Thanks </p>
Bill Dubuque
242
<p>Regarding your question about the generality of separation of variables, there is an extremely beautiful Lie-theoretic approach to symmetry, separation of variables and special functions, e.g. see Willard Miller's <a href="http://www.ima.umn.edu/~miller/separationofvariables.html" rel="noreferrer">book [1]</a>. I quote from his introduction:</p> <blockquote> <p>This book is concerned with the relationship between symmetries of a linear second-order partial differential equation of mathematical physics, the coordinate systems in which the equation admits solutions via separation of variables, and the properties of the special functions that arise in this manner. It is an introduction intended for anyone with experience in partial differential equations, special functions, or Lie group theory, such as group theorists, applied mathematicians, theoretical physicists and chemists, and electrical engineers. We will exhibit some modern group-theoretic twists in the ancient method of separation of variables that can be used to provide a foundation for much of special function theory. In particular, we will show explicitly that all special functions that arise via separation of variables in the equations of mathematical physics can be studied using group theory. These include the functions of Lame, Ince, Mathieu, and others, as well as those of hypergeometric type. </p> <p>This is a very critical time in the history of group-theoretic methods in special function theory. The basic relations between Lie groups, special functions, and the method of separation of variables have recently been clarified. One can now construct a group-theoretic machine that, when applied to a given differential equation of mathematical physics, describes in a rational manner the possible coordinate systems in which the equation admits solutions via separation of variables and the various expansion theorems relating the separable (special function) solutions in distinct coordinate systems. 
Indeed for the most important linear equations, the separated solutions are characterized as common eigenfunctions of sets of second-order commuting elements in the universal enveloping algebra of the Lie symmetry algebra corresponding to the equation. The problem of expanding one set of separable solutions in terms of another reduces to a problem in the representation theory of the Lie symmetry algebra.</p> </blockquote> <p>For an example of effective Lie-theoretic algorithms for first-order ODEs see <a href="http://portal.acm.org/citation.cfm?id=806370" rel="noreferrer">Bruce Char's paper[2],</a> from which the following useful tables are extracted.</p> <p><img src="https://i.stack.imgur.com/uS5Te.jpg" alt="enter image description here"> </p> <p><img src="https://i.stack.imgur.com/RLcoN.jpg" alt="enter image description here"></p> <p><a href="http://www.ima.umn.edu/~miller/separationofvariables.html" rel="noreferrer">1</a> Willard Miller. Symmetry and Separation of Variables.<br> Addison-Wesley, Reading, Massachusetts, 1977 (out of print)<br> <a href="http://www.ima.umn.edu/~miller/separationofvariables.html" rel="noreferrer">http://www.ima.umn.edu/~miller/separationofvariables.html</a><br> <a href="http://gigapedia.com/items:links?id=64401" rel="noreferrer">http://gigapedia.com/items:links?id=64401</a></p> <p><a href="http://portal.acm.org/citation.cfm?id=806370" rel="noreferrer">2</a> Bruce Char. Using Lie transformation groups to find closed form solutions to first order ordinary differential equations.<br> SYMSAC '81. Proceedings of the fourth ACM symposium on Symbolic and algebraic computation.<br> <a href="http://portal.acm.org/citation.cfm?id=806370" rel="noreferrer">http://portal.acm.org/citation.cfm?id=806370</a> </p>
2,480,528
<blockquote> <p>Find a formula for $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)$, then prove it. </p> </blockquote> <p>After working out the first few cases I conjectured that $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)=\frac{2n}{2n-1}$, and then I tried to prove it with induction. Would this be a fair approach, or are there other approaches that would work? </p>
Andreas Blass
48,510
<p>You want $|1+z|=|1-i\bar z|$. The left side here is the distance from $z$ to $-1$. The right side equals $|i+\bar z|$ which in turn equals $|-i+z|$, the distance from $z$ to $i$. So $z$ satisfies your equation iff it is equidistant from $-1$ and $i$. These $z$'s form a straight line in the complex plane, whose equation you can find by drawing a picture.</p>
2,480,528
<blockquote> <p>Find a formula for $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)$, then prove it. </p> </blockquote> <p>After working out the first few cases I conjectured that $\prod_{i=1}^{2n-1} \left(1-\frac{(-1)^i}{i}\right)=\frac{2n}{2n-1}$, and then I tried to prove it with induction. Would this be a fair approach, or are there other approaches that would work? </p>
nonuser
463,553
<p>Since $|1+z|=|1-i\overline{z}|$ and $|w| = |\overline{w}|$, you can rewrite like this: $$|z-(-1)|=|1+z|=|1-i\overline{z}| = |\overline{1-i\overline{z}}| =|1+iz| =|i||-i+z| = |z-i| $$</p> <p>So $z$ is equidistant from $-1$ and $i$, i.e. $z$ lies on the perpendicular bisector of the segment between $-1$ and $i$. </p>
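The chain of equalities can also be spot-checked numerically with Python's built-in complex type (a sketch I added):

```python
# |1 - i*conj(z)| = |conj(1 - i*conj(z))| = |1 + i*z| = |z - i| for every z
for z in (2 + 3j, -1.5 + 0.25j, 0j, 4 - 7j):
    assert abs(abs(1 - 1j * z.conjugate()) - abs(z - 1j)) < 1e-12

# a point on the perpendicular bisector of -1 and i satisfies |1+z| = |1-i*conj(z)|
z = (-1 + 1j) / 2                    # midpoint of -1 and i
assert abs(abs(1 + z) - abs(1 - 1j * z.conjugate())) < 1e-12
```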
1,801,112
<p>Find the simplest solution:</p> <p>$y' + 2y = z' + 2z$. I am not sure about the proper notation; $y'$ means the first derivative of $y$, i.e. $\frac{dy}{dt}+ 2y = \frac{dz}{dt} + 2z$.</p> <p>$y(0)=1$</p> <p>I got kind of confused: is $y=z=1$ a proper solution here? Or is it disqualified because a constant does not depend on time, and something like $e^t$ is the simplest solution?</p> <p>You can choose $z$ and $y$ however you like.</p>
SchrodingersCat
278,967
<p>$$\frac{dy}{dt}+ 2y = \frac{dz}{dt} + 2z$$ $$\frac{dy}{dt}-\frac{dz}{dt}=-2(y-z)$$ $$\frac{d(y-z)}{dt}=-2(y-z)$$ $$\frac{d(y-z)}{(y-z)}=-2dt$$</p> <p>Integrating both sides, we get $$\ln|y-z|=-2t+c$$ where $c$ is a constant of integration.</p> <p>Using the given condition, we have $$\ln|1-z(0)|=c$$</p> <p>So we have that $$\ln|y-z|=\ln|1-z(0)|-2t$$ Or even better, we have that $$\ln\left|\frac{y-z}{1-z(0)}\right|=-2t$$</p> <p>Thus we have that $$y(t)=z(t)+[1-z(0)]e^{-2t}$$</p> <p>That's the simplest solution possible.</p>
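A quick numerical sanity check of the final formula (a sketch; the choice $z(t)=\sin t$ is an arbitrary example of mine):

```python
import math

def z(t):
    return math.sin(t)                       # any choice of z works

def y(t):
    # y(t) = z(t) + (1 - z(0)) * e^{-2t}, here with z(0) = 0
    return z(t) + (1 - z(0)) * math.exp(-2 * t)

assert abs(y(0) - 1) < 1e-12                 # initial condition y(0) = 1

# check y' + 2y = z' + 2z via central differences
h = 1e-6
for t in (0.0, 0.5, 1.3, 2.7):
    yp = (y(t + h) - y(t - h)) / (2 * h)
    zp = (z(t + h) - z(t - h)) / (2 * h)
    assert abs((yp + 2 * y(t)) - (zp + 2 * z(t))) < 1e-6
```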
145,286
<p>Yesterday I got into an argument with @UnchartedWorks over <a href="https://mathematica.stackexchange.com/a/145207/26956">in the comment thread here</a>. At first glance, he posted a duplicate of <a href="https://mathematica.stackexchange.com/a/145202/26956">Marius' answer</a>, but with some unnecessary memoization:</p> <pre><code>unitize[x_] := unitize[x] = Unitize[x] pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt] </code></pre> <p>and proposed the following test to justify his claim that his approach is faster:</p> <pre><code>RandomSeed[1]; n = -1; data = RandomChoice[Range[0, 10], {10^8, 3}]; AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length] AbsoluteTiming[pick[data, unitize@data[[All, n]], 1] // Length] (* {7.3081, 90913401} {5.87919, 90913401} *) </code></pre> <p>A significant difference. Naturally, I was skeptical. The evaluation queue for his <code>pick</code> is (I believe) as follows:</p> <ol> <li><code>pick</code> is inert, so evaluate the arguments.</li> <li><code>data</code> is just a list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to a list</li> <li><code>unitize@data[[All, n]]</code> writes a large <code>DownValue</code>...</li> <li>...calling <code>Unitize@data[[All, n]]</code> in the process, returning the unitized list.</li> <li>Another large <code>DownValue</code> of the form <code>pick[data] = *pickedList*</code> is created (<code>data</code> here is, of course, meant in its evaluated form), never to be called again (unless, for some reason, we explicitly type <code>pick[data]</code>).</li> <li>The <code>*pickedList*</code> is returned.</li> </ol> <p>What about the evaluation queue for <code>Pick[data, Unitize@data[[All, n]], 1]</code>?</p> <ol> <li><code>Pick</code> is inert.</li> <li><code>data</code> becomes an inert list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to an inert list.</li> <li>Nothing happens here.</li> <li><code>Unitize@data[[All, n]]</code> 
returns the unitized list.</li> <li>Nothing happens here either.</li> <li>The same step as before is taken to get us the picked list.</li> </ol> <p>So, clearly <code>pick</code> has more things to do than <code>Pick</code>.</p> <p>To test this out I run the following code:</p> <pre><code>Quit[] $HistoryLength = 0; Table[ Clear[pick, unitize, data]; unitize[x_] := unitize[x] = Unitize[x]; pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt]; data = RandomChoice[Range[0, 10], {i*10^7, 3}]; {Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First, pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First}, {i, 5}] </code></pre> <p>Much to my surprise, <code>pick</code> is <em>consistently</em> faster!</p> <blockquote> <pre><code>{{0.482837, 0.456147}, {1.0301, 0.90521}, {1.46596, 1.35519}, {1.95202, 1.8664}, {2.4317, 2.37112}} </code></pre> </blockquote> <p>How can I <s>protect myself from black magic</s> make a representative test? Or <s>should I embrace the black magic</s> is this real and a valid way to speed things up?</p> <p><strong>Update re: answer by Szabolcs</strong></p> <p>Reversing the order of the list like so:</p> <pre><code>{pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First, Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First} </code></pre> <p>gave me the following result:</p> <blockquote> <pre><code>{{0.466251, 0.497084}, {1.18016, 1.17495}, {1.34997, 1.42752}, {1.80211, 1.93181}, {2.25766, 2.39347}} </code></pre> </blockquote> <p>Once again, regardless of order of operations, <code>pick</code> is faster. Caching could be suspect, and as mentioned in the comment thread of the other question, I did try throwing in a <code>ClearSystemCache[]</code> between the <code>pick</code> and <code>Pick</code>, but that didn't change anything.</p> <p>Szabolcs suggested that I throw out the memoization and just use wrapper functions. 
I presume, he meant this:</p> <pre><code>unitize[x_] := Unitize[x]; pick[xs_, sel_, patt_] := Pick[xs, sel, patt]; </code></pre> <p>As before, on a fresh kernel I set history length to 0 and run the <code>Table</code> loop. I get this:</p> <pre><code>{{0.472934, 0.473249}, {0.954632, 0.96373}, {1.42848, 1.43364}, {1.91283, 1.90989}, {2.37743, 2.40031}} </code></pre> <p>i.e. nearly equal results, sometimes one is faster, sometimes the other (left column is <code>pick</code>, right is <code>Pick</code>). The functions perform as well as <code>Pick</code> in a fresh kernel.</p> <p>I try again with the memoization as described towards the beginning of the answer:</p> <pre><code>{{0.454302, 0.473273}, {0.93477, 0.947996}, {1.35026, 1.4196}, {1.79587, 1.90001}, {2.24727, 2.38676}} </code></pre> <p>The memoized <code>pick</code> and <code>unitize</code> perform consistently better out of a fresh kernel. Of course, it uses twice the memory along the way.</p>
webcpu
43,670
<p>What if we let pick and unitize run before Pick and Unitize? pick is still faster than Pick. <strong>In:</strong></p> <pre><code>Clear[unitize, pick, n, data] SeedRandom[1]; n = -1; data = RandomChoice[Range[0, 10], {10^8, 6}]; unitize[x_] := unitize[x] = Unitize[x]; pick[xs_, sel_, patt_] := pick[xs, sel, patt] = Pick[xs, sel, patt] AbsoluteTiming[ pick[data, unitize@data[[All, n]], 1] // Length] AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length] </code></pre> <p><strong>Out:</strong></p> <pre><code>{4.71476, 90911166} {5.14919, 90911166} </code></pre> <p><strong>Memoization</strong></p> <p>This technique is memoization. I learned it from Haskell. </p> <p>Memoization in GHC’s interactive environment (GHC = Glasgow Haskell Compiler): <a href="https://i.stack.imgur.com/1S9sp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1S9sp.png" alt="enter image description here"></a></p> <p>The first execution of add1 x takes 0.73 seconds and uses 676 MB of memory; the second execution takes 0.08 seconds and uses only 86 KB. </p> <p>If you use this technique wisely, it can save a lot of CPU time and memory. In worse scenarios, it might save only CPU time while using too much memory.</p>
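For comparison with the GHCi session above, the same memoization idea in Python uses `functools.lru_cache` (a sketch I added; it is a generic illustration, not tied to the Mathematica question):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # naive recursion, but each value is computed only once
    return n if n < 2 else fib(n - 1) + fib(n - 2)

# without the cache this call tree is exponential; with it, linear
assert fib(20) == 6765
assert fib(50) == 12586269025
```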
145,286
<p>Yesterday I got into an argument with @UnchartedWorks over <a href="https://mathematica.stackexchange.com/a/145207/26956">in the comment thread here</a>. At first glance, he posted a duplicate of <a href="https://mathematica.stackexchange.com/a/145202/26956">Marius' answer</a>, but with some unnecessary memoization:</p> <pre><code>unitize[x_] := unitize[x] = Unitize[x] pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt] </code></pre> <p>and proposed the following test to justify his claim that his approach is faster:</p> <pre><code>RandomSeed[1]; n = -1; data = RandomChoice[Range[0, 10], {10^8, 3}]; AbsoluteTiming[Pick[data, Unitize@data[[All, n]], 1] // Length] AbsoluteTiming[pick[data, unitize@data[[All, n]], 1] // Length] (* {7.3081, 90913401} {5.87919, 90913401} *) </code></pre> <p>A significant difference. Naturally, I was skeptical. The evaluation queue for his <code>pick</code> is (I believe) as follows:</p> <ol> <li><code>pick</code> is inert, so evaluate the arguments.</li> <li><code>data</code> is just a list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to a list</li> <li><code>unitize@data[[All, n]]</code> writes a large <code>DownValue</code>...</li> <li>...calling <code>Unitize@data[[All, n]]</code> in the process, returning the unitized list.</li> <li>Another large <code>DownValue</code> of the form <code>pick[data] = *pickedList*</code> is created (<code>data</code> here is, of course, meant in its evaluated form), never to be called again (unless, for some reason, we explicitly type <code>pick[data]</code>).</li> <li>The <code>*pickedList*</code> is returned.</li> </ol> <p>What about the evaluation queue for <code>Pick[data, Unitize@data[[All, n]], 1]</code>?</p> <ol> <li><code>Pick</code> is inert.</li> <li><code>data</code> becomes an inert list, <code>1</code> is inert, <code>data[[All, n]]</code> quickly evaluates to an inert list.</li> <li>Nothing happens here.</li> <li><code>Unitize@data[[All, n]]</code> 
returns the unitized list.</li> <li>Nothing happens here either.</li> <li>The same step as before is taken to get us the picked list.</li> </ol> <p>So, clearly <code>pick</code> has more things to do than <code>Pick</code>.</p> <p>To test this out I run the following code:</p> <pre><code>Quit[] $HistoryLength = 0; Table[ Clear[pick, unitize, data]; unitize[x_] := unitize[x] = Unitize[x]; pick[xs_, sel_, patt_] := pick[xs] = Pick[xs, sel, patt]; data = RandomChoice[Range[0, 10], {i*10^7, 3}]; {Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First, pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First}, {i, 5}] </code></pre> <p>Much to my surprise, <code>pick</code> is <em>consistently</em> faster!</p> <blockquote> <pre><code>{{0.482837, 0.456147}, {1.0301, 0.90521}, {1.46596, 1.35519}, {1.95202, 1.8664}, {2.4317, 2.37112}} </code></pre> </blockquote> <p>How can I <s>protect myself from black magic</s> make a representative test? Or <s>should I embrace the black magic</s> is this real and a valid way to speed things up?</p> <p><strong>Update re: answer by Szabolcs</strong></p> <p>Reversing the order of the list like so:</p> <pre><code>{pick[data, unitize@data[[All, -1]], 1]; // AbsoluteTiming // First, Pick[data, Unitize@data[[All, -1]], 1]; // AbsoluteTiming // First} </code></pre> <p>gave me the following result:</p> <blockquote> <pre><code>{{0.466251, 0.497084}, {1.18016, 1.17495}, {1.34997, 1.42752}, {1.80211, 1.93181}, {2.25766, 2.39347}} </code></pre> </blockquote> <p>Once again, regardless of order of operations, <code>pick</code> is faster. Caching could be suspect, and as mentioned in the comment thread of the other question, I did try throwing in a <code>ClearSystemCache[]</code> between the <code>pick</code> and <code>Pick</code>, but that didn't change anything.</p> <p>Szabolcs suggested that I throw out the memoization and just use wrapper functions. 
I presume, he meant this:</p> <pre><code>unitize[x_] := Unitize[x]; pick[xs_, sel_, patt_] := Pick[xs, sel, patt]; </code></pre> <p>As before, on a fresh kernel I set history length to 0 and run the <code>Table</code> loop. I get this:</p> <pre><code>{{0.472934, 0.473249}, {0.954632, 0.96373}, {1.42848, 1.43364}, {1.91283, 1.90989}, {2.37743, 2.40031}} </code></pre> <p>i.e. nearly equal results, sometimes one is faster, sometimes the other (left column is <code>pick</code>, right is <code>Pick</code>). The functions perform as well as <code>Pick</code> in a fresh kernel.</p> <p>I try again with the memoization as described towards the beginning of the answer:</p> <pre><code>{{0.454302, 0.473273}, {0.93477, 0.947996}, {1.35026, 1.4196}, {1.79587, 1.90001}, {2.24727, 2.38676}} </code></pre> <p>The memoized <code>pick</code> and <code>unitize</code> perform consistently better out of a fresh kernel. Of course, it uses twice the memory along the way.</p>
Shadowray
47,416
<p><strong>Cause of speed up</strong></p> <p>This is definitely not memoization. The reason for the observed speed up is that for large arrays (e.g. 10^8 elements), the memory clean up operations may take noticeable time. If one doesn't free memory, one can perform some operations a bit faster.</p> <p>Here is a simple example:</p> <p>Let's create a large array, then perform a calculation, and remove the array:</p> <pre><code>AbsoluteTiming[ Total[ConstantArray[0, 10^8]]; ] </code></pre> <blockquote> <p>{0.422509, Null}</p> </blockquote> <p>It takes 0.42 seconds. Let's now do the same thing, but keep the array in memory:</p> <pre><code>AbsoluteTiming[ Total[garbage = ConstantArray[0, 10^8]]; ] </code></pre> <blockquote> <p>{0.366755, Null}</p> </blockquote> <p>This evaluation is a bit faster.</p> <p>Let's check how long does it take to remove the large array:</p> <pre><code>AbsoluteTiming[ Remove[garbage] ] </code></pre> <blockquote> <p>{0.061982, Null}</p> </blockquote> <p>Note that 0.06 seconds is the difference of the calculation times above. This example shows that if we keep the large array instead of removing it, our code can run faster, because we don't need to spent time on freeing memory.</p> <p><strong>Your example</strong></p> <p>In the example you provide, removing the result of <code>Unitize@data[[All, n]]</code> from memory takes some time. If one saves this array in a redundant variable, one avoids immediate memory clean-up and the evaluation seems to be faster. In case of pseudo-memoization the <code>Clear[pick, unitize]</code> command will take extra time to free the memory, but this command is placed <em>outside</em> the <code>AbsoluteTiming[]</code> scope. That is why "memoization" seems to speed up the calculation.</p> <p><strong>How to make a representative test?</strong></p> <p>You should put <code>Clear[pick, unitize]</code> <em>inside</em> your timing function. 
This test will show that the pseudo-memoization technique is actually slower than built-in functions:</p> <pre><code>Table[ Clear[data]; data=RandomInteger[{0,10},{i*10^7,3}]; { Pick[data,Unitize@data[[All,-1]],1]; // AbsoluteTiming // First , Clear[pick,unitize]; unitize[x_]:=unitize[x]=Unitize[x]; pick[xs_,sel_,patt_]:=pick[xs,sel,patt]=Pick[xs,sel,patt]; pick[data,unitize@data[[All,-1]],1]; // AbsoluteTiming // First }, {i,5}] (* {{0.534744, 0.469538}, {1.03776, 1.05842}, {1.58536, 1.65404}, {2.10422, 2.11284}, {2.48129, 2.71405}} *) </code></pre> <p><strong>Technical note:</strong> as noted by Carl Woll in comments, if one wants to measure the symbol-removing-time using the following code:</p> <pre><code>In[1] := garbage = ConstantArray[0, 10^8]; In[2] := AbsoluteTiming[Remove[garbage]] </code></pre> <p>one should set <code>$HistoryLength</code> to zero, otherwise the <code>Out[1]</code> variable will retain the contents of the large array. If <code>Out[1]</code> retains the large data, <code>Remove[garbage]</code> will only delete the reference, but not the data itself. Deletion time of a reference is almost zero, but it doesn't correspond to the deletion time for large data.</p>
3,382,241
<p>I am trying to find the smallest <span class="math-container">$n \in \mathbb{N}\setminus \{ 0 \}$</span>, such that <span class="math-container">$n = 2 x^2 = 3y^3 = 5 z^5$</span>, for <span class="math-container">$x,y,z \in \mathbb{Z}$</span>. Is there a way to prove this by the Chinese Remainder Theorem?</p>
David G. Stork
210,401
<p>There are <span class="math-container">$5$</span> boxes. Put a ball in each. So all that remains to calculate is how to place the <span class="math-container">$20$</span> remaining balls in the <span class="math-container">$5$</span> boxes.</p> <p>Can you take it from here?</p>
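Taking it from there: placing the remaining $20$ balls in $5$ boxes is a stars-and-bars count, $\binom{20+4}{4}$. A quick check (Python; the brute-force helper is mine):

```python
from math import comb

def compositions(total, parts):
    # number of ways to write `total` as an ordered sum of `parts` nonnegative ints
    if parts == 1:
        return 1
    return sum(compositions(total - k, parts - 1) for k in range(total + 1))

# brute force agrees with the stars-and-bars formula on a small instance
assert compositions(6, 3) == comb(6 + 2, 2)

# 20 remaining balls into 5 boxes
assert compositions(20, 5) == comb(20 + 4, 4) == 10626
```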
2,322,294
<p>I am trying to follow K.P. Hart's course <a href="http://fa.its.tudelft.nl/~hart/37/onderwijs/old-courses/settop" rel="nofollow noreferrer">Set-theoretic methods in general topology</a>. In <a href="http://fa.its.tudelft.nl/~hart/37/onderwijs/old-courses/settop/rudin.pdf" rel="nofollow noreferrer">Chapter 6</a>, Rudin's Dowker space $X$ is defined as follows. Let $P=\prod_{n=1}^\infty(\omega_n+1)$ be the box product of the successors of the first $\omega$-many uncountable ordinals, let $X'=\{x\in P:(\forall n)\,\operatorname{cf} x_n&gt;\omega\}$, and let $X=\{x\in X':(\exists i)(\forall n)\ \operatorname{cf}x_n&lt;\omega_i\}$. Exercise 2.8 asks to show that $X$ is <a href="https://en.wikipedia.org/wiki/Collectionwise_normal_space" rel="nofollow noreferrer">collectionwise normal</a>, using the following hint: prove that if $\mathcal{F}$ is a <a href="https://www.encyclopediaofmath.org/index.php/Discrete_family_of_sets" rel="nofollow noreferrer">discrete family</a> of closed subsets of $X$ then $\mathcal{F}'=\{\operatorname{cl}_{X'}F:F\in\mathcal{F}\}$ is a discrete family of closed subsets of $X'$. </p> <p>It was already proved that disjoint closed sets in $X$ have disjoint closures in $X'$. I can prove that the space $X'$ is collectionwise normal, and using the statement of the hint I can also show that $X$ is collectionwise normal. But somehow I am not able to prove the hint.</p> <p><strong>Question:</strong> Is it true that if $Y\subseteq Y'$ are topological spaces, disjoint closed subsets of $Y$ have disjoint closures in $Y'$, and $\mathcal{F}$ is a discrete family of closed subsets of $Y$, then $\mathcal{F}'=\{\operatorname{cl}_{Y'}F:F\in\mathcal{F}\}$ is a discrete family of closed subsets of $Y'$? We can also assume that $Y$ and $Y'$ are normal. Or is that a special property of the above spaces $X$, $X'$?</p>
Mike V.D.C.
114,534
<p>The ring you are looking for is the Laurent Polynomial ring over $\mathbb{Z}$, namely $\mathbb{Z}[x,x^{-1}]$. This can be <strong>constructed</strong> as follows: $$\frac{\mathbb{Z}[x][y]}{\langle xy-1\rangle}$$</p> <p>This you can compute in almost all CASs.</p> <p>Hope this helps.</p> <p>-- Mike</p>
78,143
<p>I don't know the meaning of a geometrically injective morphism $f$ of schemes. </p> <p>What is the definition of "geometrically injective"?</p> <p>I can't find it. I hope for your answer.</p> <p>Thanks.</p>
Sanjay
18,579
<p>I can't find a link to add a comment. You can find the various equivalent conditions for a radicial morphism, with proofs, in "Altman &amp; Kleiman, Introduction to Grothendieck Duality Theory", page 119.</p>
308,117
<p>I have the matrix $$A := \begin{bmatrix}6&amp; 9&amp; 15\\-5&amp; -10&amp; -21\\ 2&amp; 5&amp; 11\end{bmatrix}.$$ Can anyone please tell me how to both find the eigenspaces by hand and also by using the Nullspace command on maple? Thanks.</p>
Mhenni Benghorbal
35,472
<p>Here are the Maple commands:</p> <pre><code>with(LinearAlgebra):
A := Matrix([[6, 9, 15], [-5, -10, -21], [2, 5, 11]]);
NS := NullSpace(A);
ES := Eigenvectors(A);
</code></pre> <p>(Note that <code>Matrix</code> is given the rows of $A$; the column-vector construction <code>&lt;&lt;6,9,15&gt;|&lt;-5,-10,-21&gt;|&lt;2,5,11&gt;&gt;</code> would build the transpose.) Since $A$ is invertible here, <code>NullSpace(A)</code> is trivial; to obtain the eigenspace for an eigenvalue <code>lambda</code> returned by <code>Eigenvectors</code>, apply <code>NullSpace</code> to the shifted matrix:</p> <pre><code>NullSpace(A - lambda*IdentityMatrix(3));
</code></pre>
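As a cross-check of what Maple should return, here is a small stdlib-Python verification (the eigenvalues $2, 2, 3$ and the eigenvectors shown are my own hand computation, not output of the answer's commands):

```python
A = [[6, 9, 15],
     [-5, -10, -21],
     [2, 5, 11]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def det3(M):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def shift(M, lam):
    # M - lam * I
    return [[M[i][j] - (lam if i == j else 0) for j in range(3)] for i in range(3)]

# det(A - lam*I) vanishes exactly at the eigenvalues 2 (double) and 3
assert det3(shift(A, 2)) == 0
assert det3(shift(A, 3)) == 0
assert det3(A) == 12                 # A itself is invertible, so NullSpace(A) is trivial

# vectors in the nullspaces of A - 2I and A - 3I, i.e. eigenvectors
assert matvec(A, [3, -3, 1]) == [2 * 3, 2 * -3, 2 * 1]
assert matvec(A, [1, -2, 1]) == [3 * 1, 3 * -2, 3 * 1]
```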
4,136,248
<p>Let <span class="math-container">$a,b\in\mathbb{R}^+$</span>. Suppose that <span class="math-container">$\{x_n\}_{n=0}^\infty$</span> is a sequence satisfying <span class="math-container">$$|x_n|\leq a|x_{n-1}|+b|x_{n-1}|^2, $$</span> for all <span class="math-container">$n\in\mathbb{N}$</span>. How can we bound <span class="math-container">$|x_n|$</span> with a number <span class="math-container">$M_n$</span> depending on <span class="math-container">$n$</span>, <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$x_0$</span>?</p> <p>That <span class="math-container">$|x_{n-1}|^2$</span> term is rather cumbersome to handle. Is there a combinatorial trick to overcome messy computations?</p> <hr /> <p>To make the problem a bit easier, I am going to assume that <span class="math-container">$$x_n=ax_{n-1}+bx_{n-1}^2.$$</span> This implies the above inequality. Based on the answer of <a href="https://math.stackexchange.com/questions/704350/solve-a-quadratic-map">this</a> question, we can reduce the problem to <span class="math-container">$$\hat x_n=\hat x_{n-1}^2+c,$$</span> where <span class="math-container">$\hat x$</span> is some linear image of <span class="math-container">$x_n$</span> and <span class="math-container">$c$</span> is a constant depending on <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Maybe this is easier to bound <span class="math-container">$\hat x_n$</span>.</p>
user6247850
472,694
<p>Here is a way to bound the <span class="math-container">$x_n$</span> for the sequence <span class="math-container">$x_n = a x_{n-1}+bx_{n-1}^2$</span>, assuming <span class="math-container">$x_n \ge 0$</span> for all <span class="math-container">$n$</span> (which is the case if <span class="math-container">$x_0 \ge 0$</span>). It's a very rapidly growing bound, but I think any bound will have to be.</p> <p>By replacing <span class="math-container">$x_{n-1}$</span> with <span class="math-container">$\max(x_{n-1},1)$</span>, we may assume <span class="math-container">$x_{n-1} \le x_{n-1}^2$</span>, so it is sufficient to bound <span class="math-container">$x_n = (a+b) x_{n-1}^2$</span>. Let <span class="math-container">$y_n := \ln(x_n)$</span>, so <span class="math-container">$y_n = \ln (a+b) + 2 y_{n-1}.$</span> This is a standard linear recurrence: adding <span class="math-container">$\ln(a+b)$</span> to both sides gives <span class="math-container">$y_n + \ln(a+b) = 2\,\bigl(y_{n-1} + \ln(a+b)\bigr)$</span>, hence <span class="math-container">$y_n + \ln(a+b) = 2^n\,\bigl(y_0 + \ln(a+b)\bigr)$</span>. This implies <span class="math-container">$x_n = e^{y_n} = \frac{\bigl((a+b)x_0\bigr)^{2^n}}{a+b}$</span>. Keeping track of the initial replacement of <span class="math-container">$x_0$</span> by <span class="math-container">$\max(x_0,1)$</span>, the bound you want is <span class="math-container">$$x_n \le \max\left(\frac {\bigl((a+b)\max(x_0,1)\bigr)^{2^n}}{a+b},1\right).$$</span></p>
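To make the growth concrete, here is a small numerical check (my addition). It tests an $x_0$-dependent version of the doubly-exponential bound, namely $x_n \le \max\bigl(((a+b)\max(x_0,1))^{2^n}/(a+b),\,1\bigr)$, obtained by the same squaring argument; the parameter values below are arbitrary test values:

```python
def check_bound(a, b, x0, steps=8):
    """Iterate x_n = a*x_{n-1} + b*x_{n-1}^2 and compare each term with the
    closed-form bound max(((a+b)*max(x0,1))**(2**n) / (a+b), 1)."""
    x = x0
    m = max(x0, 1.0)
    for n in range(1, steps + 1):
        x = a * x + b * x * x
        bound = max(((a + b) * m) ** (2 ** n) / (a + b), 1.0)
        assert x <= bound, (n, x, bound)
    return True

# a+b < 1 with large x0, a+b > 1 with small x0, and a shrinking case:
assert check_bound(0.5, 0.3, 2.0)
assert check_bound(1.0, 1.0, 0.5)
assert check_bound(0.2, 0.1, 0.4)
print("bound holds in all test cases")
```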
2,099,516
<p>For independent Gamma random variables $G_1, G_2 \sim \Gamma(n,1)$, $\frac{G_1}{G_1+G_2}$ is independent of $G_1+G_2$. Does this imply that $G_1+G_2$ is independent of $G_1-G_2$? Thanks!</p>
BruceET
221,800
<p>No, not independent. Here is a quick simulation in the R statistical software of 100,000 realizations of $X = G_1 + G_2$ and $Y = G_1 - G_2,$ for the case $n = 4.$ You should be able to turn the central point of it into a proof. (Notice that $G_1$ and $G_2$ take only nonnegative values.)</p> <pre><code>m = 10^5; n = 4 g1 = rgamma(m,n,1); g2 = rgamma(m,n,1) x = g1 + g2; y = g1 - g2 cor(x,y) ## 0.0009158704 # consistent with uncorrelated </code></pre> <p>By symmetry, it is no surprise that $X$ and $Y$ are uncorrelated. But for non-normal data that does not imply independence. </p> <p>So we make a scatterplot of $Y$ against $X,$ from which it is immediately clear that $X$ and $Y$ are not independent. It is clear that $P(X &lt; 5) \approx 0.13 &gt; 0$ and $P(Y &gt; 5) \approx .04 &gt; 0,$ but $P(X &lt; 5, Y &gt; 5) = 0.$ </p> <p>This conclusion of association between $X$ and $Y$ agrees with @madprob's elegant proof using MGFs (+1).</p> <pre><code>mean(x &lt; 5 &amp; y &gt; 5) ## 0 mean(x &lt; 5); mean(y &gt; 5) ## 0.13255 ## 0.03955 </code></pre> <p><a href="https://i.stack.imgur.com/Gv92B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gv92B.png" alt="enter image description here"></a></p>
2,403,608
<p>I was asked to solve for the <span class="math-container">$\theta$</span> shown in the figure below.</p> <p><a href="https://i.stack.imgur.com/3Yxqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3Yxqv.png" alt="enter image description here" /></a></p> <p>My work:</p> <p>The <span class="math-container">$\Delta FAB$</span> is an equilateral triangle, having interior angles of <span class="math-container">$60^o.$</span> I don't think <span class="math-container">$\Delta HIG$</span> and <span class="math-container">$\Delta DEC$</span> are right triangles.</p> <p>So far, that's all I know. I'm confused on how to get <span class="math-container">$\theta.$</span> How do you get the <span class="math-container">$\theta$</span> above?</p>
amWhy
9,003
<p>Hints:</p> <p>Triangles HIG and DCE are isosceles, with $\angle HIG = \angle DCE = 90^\circ - 60^\circ = 30^\circ$. </p> <p>In isosceles triangles, the base angles are congruent, so $\angle IHG = \angle HGI = \dfrac{180^\circ - 30^\circ}{2} = 75^\circ.$</p> <p>Similarly, $\angle CED$ and $\angle CDE$ are also $75^\circ$.</p>
4,398,207
<p>I have the following exercise:</p> <blockquote> <p>Prove or disprove that the functor that sends an abelian group to its <span class="math-container">$n$</span>-torsion subgroup for <span class="math-container">$n\geq 2$</span> is exact.</p> </blockquote> <p>I know that I need to take homomorphisms <span class="math-container">$f\colon M\to N$</span> and <span class="math-container">$g\colon N\to L$</span> in <span class="math-container">$\textrm{Ab}$</span> forming an exact sequence, i.e. <span class="math-container">$\operatorname{im}(f)=\ker(g)$</span>, and then I need to show that for the functor <span class="math-container">$F$</span> also <span class="math-container">$\operatorname{im}(F(f))=\ker(F(g))$</span> holds. Then the functor is exact.</p> <p>But somehow I first don't see how this functor works: it sends an abelian group <span class="math-container">$A$</span> to the set <span class="math-container">$\{g\in A:g^n=e\}$</span>, i.e. all the elements whose order divides <span class="math-container">$n$</span>.</p> <p>Could someone maybe explain this a bit to me?</p> <p>Thanks for your help</p>
Berci
41,488
<p>With Abelian groups one usually applies additive notation.</p> <p>Yes, <span class="math-container">$F(A)=\{a\in A:\,n\cdot a=0\}$</span>.<br> Then, <span class="math-container">$F$</span> sends a homomorphism <span class="math-container">$f:A\to B$</span> to its restriction to <span class="math-container">$F(A)$</span>, i.e. <span class="math-container">$$F(f):=f|_{F(A)}:F(A)\to F(B)$$</span> as it's clear that <span class="math-container">$f(a)\in F(B)$</span> whenever <span class="math-container">$a\in F(A)$</span>.</p> <p>Then, if <span class="math-container">${\rm im}(f)=\ker g$</span> for a <span class="math-container">$g:B\to C$</span>, then on one hand we have <span class="math-container">$g\circ f=0$</span>, and on the other hand, <span class="math-container">$g(b)=0\implies\exists a\in A:\,f(a)=b$</span>.<br> Can you finish from here?</p>
1,292,759
<blockquote> <p>Let $a,b,c\in\mathbb{R}^+$ and $abc=1$. Prove that $$\frac{1}{a^3(b+c)}+\frac{1}{b^3(c+a)}+\frac{1}{c^3(a+b)}\ge\frac32$$</p> </blockquote> <p>This isn't a hard problem. I have already solved it in the following way:<br/> Let $x=\frac1a,y=\frac1b,z=\frac1c$, then $xyz=1$. Now, it is enough to prove that $$L\equiv\frac{x^2}{y+z}+\frac{y^2}{z+x}+\frac{z^2}{x+y}\ge\frac32$$ Now using the Cauchy-Schwarz inequality on the numbers $a_1=\sqrt{y+z},a_2=\sqrt{z+x},a_3=\sqrt{x+y},b_1=\frac{x}{a_1},b_2=\frac{y}{a_2},b_3=\frac{z}{a_3}$ I got $$(x+y+z)^2\le((x+y)+(y+z)+(z+x))\cdot L$$ From this $$L\ge\frac{x+y+z}2\ge\frac32\sqrt[3]{xyz}=\frac32$$ Then I tried to prove it using derivatives. Let $x=a,y=b$ and $$f(x,y)=\frac1{x^3\left({y+\frac1{xy}}\right)}+\frac1{y^3\left({x+\frac1{xy}}\right)}+\frac1{\left({\frac1{xy}}\right)^3(x+y)}$$ So, I need to find the minimum value of this function. It will be true when $$\frac{df}{dx}=0\land\frac{df}{dy}=0$$ After simplifying $\frac{df}{dx}=0$ I got $$\frac{-y(3xy^2+2)}{x^3\left({xy^2+1}\right)^2}+\frac{1-x^2y}{y^2\left({x^2y+1}\right)^2}+\frac{x^2y^3(2x+3y)}{\left({x+y}\right)^2}=0$$ Is there any easy way to write $x$ in terms of $y$ from this equation?</p>
Dr. Sonnhard Graubner
175,066
<p>Why must you use derivatives? The proof is simple with Cauchy-Schwarz: we have $$\frac{1}{a^3(b+c)}=\frac{\frac{1}{a^2}}{a(b+c)}=\frac{\frac{1}{a^2}}{\frac{b+c}{bc}}$$ thus we have $$\frac{1}{a^3(b+c)}+\frac{1}{b^3(c+a)}+\frac{1}{c^3(a+b)}\geq $$ $$\frac{\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\right)^2}{\frac{2}{a}+\frac{2}{b}+\frac{2}{c}}=\frac{\left(ab+bc+ca\right)^2}{2(ab+bc+ca)}\geq \frac{1}{2}3\sqrt[3]{(abc)^2}=\frac{3}{2}$$</p>
2,110,561
<p>So I want to find the volume of the body $D$ defined as the region under a sphere with radius 1 with the center in $(0, 0, 1)$ and above the cone given by $z = \sqrt{x^2+y^2}$. The answer should be $\pi$. A hint is included that you should use spherical coordinates. I've started by making an equation for the sphere, $x^2+y^2+(z-1)^2=1$. I used the transformation $(x, y, z) = (\rho\sin\phi\cos\theta, \rho\sin\phi\sin\theta, \rho\cos\phi+1)$. Now I'm struggling to define the region $D$. I got that $0\leq\theta\leq2\pi$, but I can't find the bounds for $\phi$ and $\rho$.</p>
Martin Argerami
22,857
<p>If you try spherical coordinates, the following happens. For the cone, you have <span class="math-container">$$\rho\cos\phi=z=\sqrt{x^2+y^2}=\sqrt{\rho^2\sin^2\phi\cos^2\theta+\rho^2\sin^2\phi\sin^2\theta}=\rho\sin\phi.$$</span>(this assumes that you chose <span class="math-container">$\phi$</span> so that <span class="math-container">$\sin\phi\geq0$</span>, that is <span class="math-container">$0\leq\phi\leq\pi$</span>). So <span class="math-container">$\cos\phi=\sin\phi$</span>; it follows that the equation of the cone is <span class="math-container">$\phi=\pi/4$</span>.</p> <p>So, to describe the interior of your region, you will have <span class="math-container">$0\leq\phi\leq\pi/4$</span>; simple enough. From the equation of the sphere we get <span class="math-container">$$ \rho^2=2\rho\cos\phi, $$</span> or <span class="math-container">$\rho=2\cos\phi$</span>.</p> <p>The volume is then <span class="math-container">\begin{align} V&amp;=\iiint_E1\,dV=\int_0^{2\pi}\int_0^{\pi/4}\int_0^{2\cos\phi}\rho^2\sin\phi\,d\rho\,d\phi\,d\theta\\ \ \\ &amp;=\frac{16\pi}3\,\int_0^{\pi/4}\cos^3\phi\,\sin\phi\,d\phi =-\frac{16\pi}3\,\left.\left(\frac{\cos^4\phi}4 \right)\right|_0^{\pi/4}\\ \ \\ &amp;=\frac{4\pi}3\left(1-\frac1{4} \right)=\pi. \end{align}</span></p> <hr /> <p>On the other hand, to work in cylindrical coordinates we proceed as follows. If we write <span class="math-container">$s=\sqrt{x^2+y^2}$</span>, the intersection of the sphere and the cone happens when <span class="math-container">$s^2+(s-1)^2=1$</span>, with solutions <span class="math-container">$s=0$</span> and <span class="math-container">$s=1$</span>. As we want to be above the cone, we have to choose <span class="math-container">$s=1$</span>. The equations of the sphere and the cone are, respectively, <span class="math-container">$z=1+\sqrt{1-r^2}$</span> and <span class="math-container">$z=r$</span>. 
Then the volume is <span class="math-container">\begin{align} V&amp;=\int_0^{2\pi}\int_0^1\,(1+\sqrt{1-r^2}-r)\,r\,dr\,d\theta =2\pi\,\int_0^1(r+r\sqrt{1-r^2}-r^2)\,dr\\ \ \\ &amp;=2\pi\,\left.\left(\frac{r^2}2-\frac{(1-r^2)^{3/2}}3-\frac{r^3}3\right)\right|_0^1 =2\pi\,\left(\frac12-0-\frac13+\frac13 \right)=\pi. \end{align}</span></p>
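As a sanity check (my addition, not part of the original answer), a crude Monte-Carlo estimate of the volume — sampling the box $[-1,1]\times[-1,1]\times[0,2]$ and keeping the points inside the sphere and above the cone — lands close to $\pi$:

```python
import math, random

random.seed(42)

def inside(x, y, z):
    """Inside the sphere x^2+y^2+(z-1)^2 <= 1 and above the cone z >= sqrt(x^2+y^2)."""
    return x*x + y*y + (z - 1.0)**2 <= 1.0 and z >= math.hypot(x, y)

n = 200000
hits = sum(inside(random.uniform(-1, 1), random.uniform(-1, 1),
                  random.uniform(0, 2)) for _ in range(n))
volume = 8.0 * hits / n          # the bounding box has volume 2*2*2 = 8
assert abs(volume - math.pi) < 0.06
print(round(volume, 3))
```

With 200,000 samples the standard error of the estimate is below 0.01, so the 0.06 tolerance is comfortable.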
2,518,305
<p><a href="http://www.mit.edu/%7Esame/pdf/qualifying_round_2017_answers.pdf" rel="noreferrer">This is a problem from MIT integration bee 2017.</a></p> <p><span class="math-container">$$\int_0^{\pi/2} \frac 1 {1+\tan^{2017} x} \, dx$$</span></p> <p>I have tried substitution method, multiplying numerator and denominator with <span class="math-container">$\sec^2x$</span>, breaking the numerator in terms of linear combination of the denominator and the derivative of it. None of these methods work.</p> <p>Some hints please?</p>
Guy Fsone
385,707
<p>Setting the change of variable: $u=\frac\pi2-x $ and since, $\tan x =\cot(\frac\pi2 -x)$ we have, \begin{align} &amp; \int_0^{\frac\pi2}\frac{1}{1+\tan^{2017} x} \, dx = \int_0^{\frac\pi2}\frac{1}{1+\tan^{2017} (\frac\pi2-u) } \, du \\[10pt] = {} &amp; \int_0^{\frac\pi2}\frac{1}{1+\cot^{2017}u} \, du = \int_0^{\frac\pi2}\frac{\tan^{2017} u}{1+\tan^{2017} u} \, du \color{red}{= \frac{\pi}{2} -\int_0^{\frac\pi2}\frac{1}{1+\tan^{2017} u} \, du} \end{align}</p> <p>That is $$\int_0^{\frac\pi2}\frac{1}{1+\tan^{2017} x} \, dx =\frac\pi4$$</p>
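The symmetry exploited above, $f(x)+f\left(\frac\pi2-x\right)=1$ for $f(x)=\frac{1}{1+\tan^{2017}x}$, can also be observed numerically (my addition): a midpoint rule on a grid symmetric about $\pi/4$ reproduces $\pi/4$ essentially exactly, because paired sample points sum to $1$.

```python
import math

def integrand(x):
    """1 / (1 + tan(x)**2017), computed via logs to avoid float overflow."""
    t = math.tan(x)
    if t <= 0.0:
        return 1.0            # only x = 0 on this interval, where tan x = 0
    p = 2017.0 * math.log(t)
    if p > 700.0:             # tan(x)**2017 would overflow; integrand ~ 0
        return 0.0
    if p < -700.0:            # tan(x)**2017 underflows to 0; integrand ~ 1
        return 1.0
    return 1.0 / (1.0 + math.exp(p))

n = 100000
h = (math.pi / 2) / n
approx = h * sum(integrand((i + 0.5) * h) for i in range(n))
assert abs(approx - math.pi / 4) < 1e-6
print(round(approx, 8))
```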
260,865
<p>I am fairly new to mathematica and I am trying to plot a 3D curve defined by multiple formulas. I have the curve <span class="math-container">$K$</span> from the point <span class="math-container">$(\frac{1}{2},-\frac{1}{2}\sqrt{3},0)$</span> to <span class="math-container">$(\frac{1}{2},\frac{1}{2}\sqrt{3},2\sqrt{3})$</span> given by, <br /> <span class="math-container">$\begin{cases}x^{2}+y^{2}=1,\\ z=\frac{y}{x}+\sqrt{3}\\ x\geq\frac{1}{2} \end{cases}$</span><br /> I would like to see this curve plotted somehow. I just can't find a function on mathematica which allows this. Does anyone know if this can be done in a simple way?</p>
Ulrich Neumann
53,677
<p>The three conditions define a region</p> <pre><code>reg = ImplicitRegion[{x^2 + y^2 == 1, z == y/x + Sqrt[3], x &gt;= 1/2}, {x, y, z}] </code></pre> <p>which is plotted with <code>Region</code></p> <pre><code>Region[reg, Axes -&gt; True, BoxRatios -&gt; {1, 1, 1}, Boxed -&gt; True] </code></pre> <p><a href="https://i.stack.imgur.com/ff1CE.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ff1CE.png" alt="enter image description here" /></a></p>
365,128
<p>How does one find the Inverse Laplace transform of $$\frac{6s^2 + 4s + 9}{(s^2 - 12s + 52)(s^2 + 36)}$$ where $s &gt; 6$?</p>
Ron Gordon
53,268
<p>You need to find the poles of the expression; in your case, you have poles at $s=6 \pm 4 i$ and $s=\pm 6 i$. You then find what are called the residues of the LT times $e^{s t}$ at the poles. The residue at a pole $s_k$ is </p> <p>$$\lim_{s \rightarrow s_k} \left [ (s-s_k) \frac{6 s^2 + 4 s+9}{(s^2-12 s+52)(s^2+36)} e^{s t} \right ]$$</p> <p>The condition $s&gt;6$ merely specifies the region of convergence of the transform (to the right of the poles of largest real part, at $\operatorname{Re} s = 6$); it plays no role in the inversion.</p> <p>For each pole, the corresponding residues are:</p> <p>$$s_1=6 i \implies \frac{6 (6 i)^2 + 4 (6 i) + 9}{((6 i)^2 - 12 (6 i) + 52) (12 i)} e^{i 6 t}$$</p> <p>$$s_2=-6 i \implies \frac{6 (-6 i)^2 + 4 (-6 i) + 9}{((-6 i)^2 - 12 (-6 i) + 52) (-12 i)} e^{-i 6 t}$$</p> <p>$$s_3=6 + 4 i \implies \frac{6 (6 + 4 i)^2 + 4 (6 + 4 i) + 9}{(8 i) ((6+4 i)^2+36)} e^{(6 + 4 i)t}$$</p> <p>$$s_4=6 - 4 i \implies \frac{6 (6 - 4 i)^2 + 4 (6 - 4 i) + 9}{(-8 i) ((6-4 i)^2+36)} e^{(6 - 4 i)t}$$</p> <p>The ILT is then the sum of these residues. I leave the arithmetic/algebra to you; I get as the ILT</p> <p>$$-\frac{21}{136} \sin (6 t)-\frac{121}{272} \cos (6 t)+e^{6 t} \left(\frac{579}{544} \sin (4 t)+\frac{121}{272} \cos (4 t)\right)$$</p>
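These residues can be checked numerically (my addition, not part of the original answer): at a simple pole $s_k$ the residue equals $N(s_k)/D'(s_k)$, and summing $\frac{N(s_k)}{D'(s_k)}e^{s_k t}$ over the four poles reproduces the closed form above.

```python
import cmath, math

def N(s):       # numerator of the transform
    return 6*s**2 + 4*s + 9

def Dprime(s):  # derivative of (s^2 - 12 s + 52)(s^2 + 36)
    return (2*s - 12)*(s**2 + 36) + (s**2 - 12*s + 52)*2*s

poles = [6j, -6j, 6 + 4j, 6 - 4j]

def f(t):
    """Inverse Laplace transform as a sum of residues over the simple poles."""
    return sum(N(s)/Dprime(s)*cmath.exp(s*t) for s in poles).real

def closed_form(t):
    return (-21/136*math.sin(6*t) - 121/272*math.cos(6*t)
            + math.exp(6*t)*(579/544*math.sin(4*t) + 121/272*math.cos(4*t)))

for t in (0.0, 0.3, 1.0):
    assert abs(f(t) - closed_form(t)) < 1e-9
print("residue sum matches the closed form")
```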
2,970,234
<p>So, I'm given the following query:</p> <p>Write the Taylor series centered at <span class="math-container">$z_0 = 0$</span> for each of the following complex-valued functions:</p> <p><span class="math-container">$$f(z) = z^2\sin(z),\quad g(z) = z\sin(z^2)$$</span></p> <p>Then, use these series to help you compute the following limit:</p> <p><span class="math-container">$$\lim_{z \to 0} \frac{z^2\sin(z)-z\sin(z^2)}{z^5}$$</span></p> <p>So, the first part wasn't so bad. I simply noticed that</p> <p><span class="math-container">$$\sin(z) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{2n+1}}{(2n+1)!}$$</span> and <span class="math-container">$$\sin(z^2) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{4n+2}}{(2n+1)!}$$</span></p> <p>With a little bit of simplifying, I obtained:</p> <p><span class="math-container">$$f(z) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{2n+3}}{(2n+1)!}, \quad g(z) = \sum_{n=0}^{\infty} \frac{(-1)^nz^{4n+3}}{(2n+1)!}$$</span></p> <p>However, I'm not quite sure how to deal with the limit. Does anyone have any advice on how I could approach it? I've tried expressing the entire limit as a series, but this doesn't seem to simplify much, and the limit cares about what happens as <span class="math-container">$z \to 0$</span> instead of as <span class="math-container">$n \to \infty$</span> (as we usually think about with series).</p>
Stockfish
362,664
<p>Note that for every power series <span class="math-container">$P(z)$</span> with constant term <span class="math-container">$c$</span> we have <span class="math-container">$\lim_{z \to 0} P(z) = c$</span>. So the question is: in the entire limit represented as series, do we have a constant term? Do we have a summand with <span class="math-container">$z^n$</span> for some negative <span class="math-container">$n$</span>?</p>
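One can also corroborate this numerically (my addition): if the combined series has no negative powers of $z$, the quotient must approach its constant term as $z\to 0$, and indeed it stabilizes:

```python
import math

def q(z):
    """The quotient (z^2 sin z - z sin z^2) / z^5 from the problem."""
    return (z**2 * math.sin(z) - z * math.sin(z**2)) / z**5

# As z -> 0 the values settle down, hinting that the numerator's series
# starts exactly at the z^5 term (no blow-up, no vanishing).
vals = [q(10.0**-k) for k in (1, 2, 3)]
assert abs(vals[-1] + 1/6) < 1e-4
print(vals)
```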
903,049
<p>I have to find the series expansion and interval of convergence for the function $\ln(1 - x)$.</p> <p>For the expansion, I have gone through the process and obtained the series:</p> <p>$-x - (x^2/2) - (x^3/3) - . . . - (-1)^k((-x)^k)/k$</p> <p>I know that the interval of convergence will be $(-1,1)$, but am having trouble with the ratio test component to achieve this result, i.e. I am having trouble breaking down/simplifying the expression.</p> <p>Thanks very much</p>
cjferes
89,603
<p>You already know that $$\log(1-x)=-\sum_{k=1}^{\infty} \frac{x^k}{k}=\sum_{k=1}^{\infty} a_kx^k$$</p> <p>Then, $$a_k=-\frac{1}{k}$$</p> <p>The ratio test, then, is: $$\biggl|{\frac{a_{k+1}}{a_k}}\biggr|=\frac{\frac{1}{k+1}}{\frac{1}{k}}=\frac{k}{k+1}$$</p> <p>The convergence radius $R$ is given by: $$\lim_{k\rightarrow \infty}\biggl|\frac{a_{k+1}}{a_k}\biggr|=\frac{1}{R}$$</p> <p>So, $$\begin{array}{rcl} \lim_{k\rightarrow \infty}\biggl|\frac{a_{k+1}}{a_k}\biggr|&amp;=&amp;\lim_{k\rightarrow \infty} \frac{k}{k+1}\\ &amp;=&amp;1=\frac{1}{R}\\ \Rightarrow R&amp;=&amp;1 \end{array}$$</p> <p>Finally, check the endpoints separately: at $x=1$ the series is $-\sum_{k=1}^{\infty}\frac{1}{k}$, a negated harmonic series, which diverges; at $x=-1$ it is $-\sum_{k=1}^{\infty}\frac{(-1)^k}{k}$, which converges by the alternating series test. So the interval of convergence is actually $[-1,1)$.</p>
1,840,159
<blockquote> <p>Question: Prove that a group of order 12 must have an element of order 2.</p> </blockquote> <p>I believe I've made great strides in my attempt.</p> <p>By a corollary to Lagrange's theorem, the order of any element $g$ in a group $G$ divides the order of the group $G$.</p> <p>So, $ \left | g \right | \mid \left | G \right |$. Hence, the possible orders of $g$ are $\left | g \right |\in\left \{ 1,2,3,4,6,12 \right \}$</p> <p>Suppose $\left | g \right |=12.$ Then, $g^{12}=\left ( g^{6} \right )^{2}=e.$ So, $\left | g^{6} \right |=2$</p> <p>Using the same idea and applying it to $\left | g \right |\in\left \{ 6,4,2 \right \},$ we see that a suitable power of $g$ has order 2.</p> <p>However, for $\left | g \right |=3$, this argument does not produce an element of order 2.</p> <p>How can I take this attempt further?</p> <p>Thanks in advance. <strong>Hints</strong> would be helpful.</p>
Micapps
300,392
<p>Approach without Sylow's theorem: By what you've shown, all you need to do is discount the possibility that all group elements have order $3$ or $1$. The only element with order $1$ is the identity. What can you say about a group that consists of the identity, and $11$ elements of order $3$?</p> <p>Hint: the elements of order $3$ can be partitioned into pairs $\{g,h\}$ s.t. $h=g^2,g=h^2$.</p>
2,239,192
<p>Let $P_n$ be the polynomials of degree no more than n with basis $Z_n=(1, x, x^2,\dotsc,x^n)$. The derivative transformation $D$ goes from $P_n$ to $P_{n-1}$. Write out the matrix for $D$ from $(P_4, Z_4)$ to $(P_3, Z_3)$.</p> <p>I haven't done a problem similar to this so I'm not sure how to go about doing this. Thanks</p>
Chappers
221,811
<p>The matrix of a linear map $L:(V,B) \to (U,C)$ is found by writing $L(b_j)$ in terms of $c_i$ for each $b_j \in B$. In this case, $$ D(1)=0, \qquad D(x) = 1, \qquad D(x^2)=2x, \qquad D(x^3) = 3x^2, \qquad D(x^4) = 4x^3, $$ so the matrix is $$ \begin{pmatrix} 0 &amp; 1 &amp; 0 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 3 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; 4 \end{pmatrix}. $$ (This takes vectors with five components and gives you ones with four, as it should, and you can check on the general polynomial of degree $4$ that it gives you the right components for the derivative.)</p>
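A quick check (my addition): applying this matrix to the coordinate vector of a degree-4 polynomial in the basis $(1,x,x^2,x^3,x^4)$ returns the coordinates of its derivative in $(1,x,x^2,x^3)$.

```python
# The derivative matrix from the answer, acting on coefficient vectors.
D = [[0, 1, 0, 0, 0],
     [0, 0, 2, 0, 0],
     [0, 0, 0, 3, 0],
     [0, 0, 0, 0, 4]]

def apply(M, v):
    """Matrix-vector product over plain Python lists."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

# p(x) = 3 + 2x + 5x^2 + x^3 + 4x^4, coefficients in the basis (1, x, ..., x^4)
p = [3, 2, 5, 1, 4]
# p'(x) = 2 + 10x + 3x^2 + 16x^3
assert apply(D, p) == [2, 10, 3, 16]
print("matrix reproduces the derivative")
```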
786,655
<p>Say we have two r.v.'s $X$ and $Y$ which are independent and differently distributed (e.g. $X$ follows a bell curve and $Y$ follows an exponential distribution with parameter $\lambda &gt; 0$).</p> <p>What are the different methods to numerically compute the distribution of X+Y, X*Y, X/Y, min(X,Y), etc.?</p> <p>I read about the Mellin transform and Monte-Carlo simulation, but it seemed to me that since these methods have been around for a long time, there must be something that already exists for such operations within a library or a module in a programming language like Matlab or R (or any other platform).</p> <p>Any ideas/suggestions on this matter would be greatly appreciated!</p>
M.X
66,726
<p>You can use Mathematica for such calculations. For example, <code>TransformedDistribution[]</code> can help you solve the problem, if it can be solved in closed form at all.</p>
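If a computer-algebra system is not at hand, plain Monte-Carlo already gives workable numerical answers for such derived distributions. The sketch below (my own addition, using only the Python standard library; the particular distributions, sample size and tolerances are illustrative) builds empirical samples of $X+Y$, $XY$ and $\min(X,Y)$:

```python
import random, statistics

random.seed(1)
n = 100000
# Illustrative choice: X ~ Normal(0, 1), Y ~ Exponential(rate 2), independent
xs = [random.gauss(0.0, 1.0) for _ in range(n)]
ys = [random.expovariate(2.0) for _ in range(n)]

sums  = [x + y for x, y in zip(xs, ys)]
prods = [x * y for x, y in zip(xs, ys)]
mins  = [min(x, y) for x, y in zip(xs, ys)]

# Sanity checks against exact moments: E[X+Y] = 0 + 1/2, E[XY] = E[X]E[Y] = 0
assert abs(statistics.mean(sums) - 0.5) < 0.02
assert abs(statistics.mean(prods)) < 0.02
# Any functional of the empirical sample then approximates the target
# distribution, e.g. the mean (or quantiles, histograms, ...) of min(X, Y):
print(round(statistics.mean(mins), 3))
```

For $X/Y$ one can collect ratios the same way, but since moments of a ratio need not exist, quantiles or histograms are safer summaries than means.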
32,294
<p>I sort of asked a version of this question before and it was unclear; try I will now to make an honest attempt to state everything clerly.</p> <p>I am trying to evaluate the following, namely $\nabla w = \nabla |\vec{a} \times \vec{r}|^n$, where $\vec{a}$ is a constant vector and $\vec{r}$ is the vector $&lt;x_1,x_2,\ldots x_n&gt;$. Now say that I use the chain rule, first say by setting $\vec{u}$ to be equal to the cross product of $\vec{a}$ and $\vec{r}$. </p> <p>Now here's the part that I'm confused. How do we extend the chain rule over when dealing with the gradient? Do I take $\nabla |\vec{a} \times \vec{r}|^n$ to be equal to $\nabla |\vec{u}|^n$ $\times$ $\nabla (\vec{a} \times \vec{r})$, where $\times$ denotes the cross product?</p> <p>The first bit $\nabla |\vec{u}|^n$ is easy, it just evaluates to $n|\vec{a} \times \vec{r}|^{n-2} (\vec{a} \times \vec {r})$, remembering that $\vec{u} = \vec{a} \times \vec{r}$.</p> <p>I am guessing that $\nabla |\vec{a} \times \vec{r}|^n$ $\neq$ $\nabla |\vec{u}|^n$ $\times$ $\nabla (\vec{a} \times \vec{r})$, as to even speak about $\nabla (\vec{a} \times \vec{r})$, i.e. the gradient of a vector we would have to talk about either the cross product or dot product of the gradient and $\nabla (\vec{a} \times \vec{r})$ </p> <p>By the way, I am told the answer given is $\nabla |\vec{a} \times \vec{r}|^n$ = $n|\vec{a} \times \vec{r}|^{n-2} \Big(\vec{a} \times (\vec{r} \times \vec{a})\Big)$.</p> <p>So let's say that I try a component wise approach, i.e. we look first at $\frac{\partial w}{\partial x_1}$. Then is it true (I could be wrong) that:</p> <p>$\frac{\partial w}{\partial x_1} = n|\vec{a} \times \vec{r}|^{n-2} \quad \vec{u_1} \times \frac{\partial}{\partial x_1} \Big(\vec{a} \times \vec{r}\Big) = n|\vec{a} \times \vec{r}|^{n-2} \quad \vec{u_1} \times \Big(\vec{a} \times \frac{\partial\vec{r}}{\partial x_1}\Big)$, as $\vec{a}$ is a constant vector? 
Here $\vec{u_1}$ denotes the first component of the vector $\vec{a} \times \vec{r}$.</p> <p>I would really appreciate an interpretation of this; it is just that I am confused about what to take and the meanings of these operations.</p>
Community
-1
<p>Say I differentiate $|\mathbf{a} \times \mathbf{r}|^2$. Then for the second term it agrees with $\mathbf{a} \times (\mathbf{r} \times \mathbf{a})$, but for the first term there is some confusion. Say I look at the first term of $\nabla |\mathbf{a}|^2|\mathbf{r}|^2$, namely $|\mathbf{a}|^2|\mathbf{r}|^2_{x_1} \mathbf{e_1}$ = $|\mathbf{a}|^2|\mathbf{r}|\mathbf{r}_{x_1} $ $\bullet$ $\mathbf{e_1}$? If it is indeed the dot product then I'm done! Oh by the way $\mathbf{r}_{x_1}$ means the partial derivative of $\mathbf{r}$ with respect to $x_1$.</p> <p>Ben</p>
3,571,047
<p>Here's what I have so far:</p> <p><span class="math-container">$$\frac{\partial f}{\partial y}|_{(a,b)} = \lim\limits_{t\to 0} \frac{\sin(a^2 + b^2 + 2tb + t^2) - \sin(a^2 + b^2)}{t} = \lim\limits_{t\to 0} \frac{\sin(a^2 + b^2)[\cos(2tb + t^2) - 1] + \cos(a^2 + b^2)\sin(2tb + t^2)}{t}$$</span> I can see that the term to the left of the plus sign might go to <span class="math-container">$0$</span>, and the term on the right would probably go to <span class="math-container">$\cos(a^2+b^2)$</span>, but I'm missing a <span class="math-container">$2a$</span> multiplying my solution!</p>
Toby Mak
285,313
<p>You can also proceed with the substitution <span class="math-container">$x = t^2, dx = 2t \ \mathrm{d} t$</span>:</p> <p><span class="math-container">$$ \int \dfrac{t^2}{1+t} \ 2t \ \mathrm{d} t = 2 \int \dfrac{t^3}{1+t} \mathrm{d} t = 2 \int \dfrac{t^2(t+1)-t(t+1)+(t+1)-1}{1+t} \ \mathrm{d} t$$</span> <span class="math-container">$$ = 2 \left(\frac{t^3}{3} -\frac{t^2}{2}+t-\ln|1+t|+ C\right)$$</span> <span class="math-container">$$ = \frac{2x \sqrt{x}}{3} -x+2\sqrt{x}-2\ln|1+\sqrt{x}|+ C_1$$</span></p>
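As a numerical sanity check (my addition; the original integrand $\frac{x}{1+\sqrt{x}}$ is inferred from the substitution shown, so it is an assumption): the central-difference derivative of the antiderivative matches that integrand at a few sample points.

```python
import math

def F(x):
    """Antiderivative obtained above (constant of integration omitted)."""
    s = math.sqrt(x)
    return 2*x*s/3 - x + 2*s - 2*math.log(1 + s)

def f(x):
    """Integrand x / (1 + sqrt(x)), inferred from the substitution x = t^2."""
    return x / (1 + math.sqrt(x))

# Central-difference check that F'(x) ~ f(x)
h = 1e-6
for x in (0.5, 1.0, 4.0, 9.0):
    deriv = (F(x + h) - F(x - h)) / (2*h)
    assert abs(deriv - f(x)) < 1e-6
print("antiderivative differentiates back to the integrand")
```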
4,020,986
<blockquote> <p>For every <span class="math-container">$n \in \mathbb{N}$</span> denote <span class="math-container">$x_n=(n,0) \in \mathbb{R^2}.$</span> Show that the set <span class="math-container">$\mathbb{R^2} \setminus \{x_n \mid n \in \mathbb{N} \}$</span> is an open subset of the plane.</p> </blockquote> <p>The set <span class="math-container">$\mathbb{R^2} \setminus \{x_n \mid n \in \mathbb{N} \}$</span> is the plane excluding the positive <span class="math-container">$x-$</span>axis? It seems that I cannot use the definition of a ball here to conclude that the set would be open. Another definition I know states that a union of open sets is also open, but it doesn't seem applicable here either. What other definitions might I use here?</p>
Actually Fritz
879,727
<p>First of all, the most direct way to proceed in these situations is exactly what hamam_Abdallah proposed in his answer. Recall that closed sets are by definition complements of open sets, so if you manage to do that, it will immediately follow that <span class="math-container">$\mathbb{R}^2 \setminus (x_n)_{n \in \mathbb{N}}$</span> is open. Indeed, let <span class="math-container">$(a_k)_{k \in \mathbb{N}} \subseteq \{x_n: n \in \mathbb{N}\}$</span> be a converging sequence and denote its limit by <span class="math-container">$\lambda.$</span> Since convergence in the topology of <span class="math-container">$\mathbb{R}^2$</span> is the same as convergence on components, we deduce that <span class="math-container">$\lambda = (\mu, 0)$</span> for some <span class="math-container">$\mu \in \mathbb{R}.$</span> You should be familiar with the fact that a converging sequence with elements in <span class="math-container">$\mathbb{Z}$</span> is constant from a certain point on, but for the sake of completeness, I will provide a proof of this fact here. Note that this would complete our endeavour since it would follow that <span class="math-container">$\mu = n$</span> for some <span class="math-container">$n \in \mathbb{N},$</span> which would enable us to conclude that <span class="math-container">$(x_n)_{n \in \mathbb{N}}$</span> is closed (or, equivalently, that its complement is open). Thus, let <span class="math-container">$(y_n)_{n \in \mathbb{N}} \subseteq \mathbb{Z}$</span> be a converging sequence and let <span class="math-container">$\lambda$</span> be its limit. 
By the definition of limit, this means that there is <span class="math-container">$n_0 \in \mathbb{N}$</span> such that <span class="math-container">$y_n \in (\lambda - \frac{1}{2}, \lambda + \frac{1}{2})$</span> for any <span class="math-container">$n \geq n_0.$</span> But any such open interval of length <span class="math-container">$1$</span> contains at most one integer, so this interval contains exactly one integer <span class="math-container">$k$</span> and that integer must in fact be equal to <span class="math-container">$\lambda$</span> seeing as <span class="math-container">$y_n = k$</span> for all <span class="math-container">$n \geq n_0.$</span> Thus, we proved our claim.</p> <p>However, there is another more direct way to approach this problem (i.e. directly from the definition of what being an open set means). Let <span class="math-container">$x \in \mathbb{R}^2 \setminus (x_n)_{n \in \mathbb{N}}$</span> and write <span class="math-container">$x = (x_1, x_2).$</span> Since <span class="math-container">$d(x, (n,0)) \geq |n - x_1| \to \infty$</span> as <span class="math-container">$n \to \infty$</span> (here <span class="math-container">$d$</span> stands for the standard euclidean distance), the infimum <span class="math-container">$\inf_{n \in \mathbb{N}} d(x, (n,0))$</span> is attained at some <span class="math-container">$n$</span>, and it is positive because <span class="math-container">$x$</span> is none of the points <span class="math-container">$x_n.$</span> Setting <span class="math-container">$r := \frac{1}{2} \inf_{n \in \mathbb{N}} d(x, (n,0)) &gt; 0,$</span> the ball <span class="math-container">$B(x, r)$</span> misses every <span class="math-container">$x_n,$</span> so it is included in <span class="math-container">$\mathbb{R}^2 \setminus (x_n)_{n \in \mathbb{N}}.$</span> Thus, the set <span class="math-container">$\mathbb{R}^2 \setminus (x_n)_{n \in \mathbb{N}}$</span> is open.</p>
135,663
<p>It is a problem from Hatcher's book, and it is my homework problem.</p> <p>It is a section 2.2 problem 3, stating:</p> <p>Let $f:S^n\to S^n$ be a map of degree zero. Show that there exist points $x,y \in S^n$ with $f(x)=x$ and $f(y)=-y$. Use this to show that if $F$ is a continuous vector field defined on the unit ball $D^n$ in $\mathbb{R}^n$ such that $F(x) \neq 0$ for all $x$, then there exists a point on the boundary of $D^n $ where $F$ points radially outward and another point on the boundary of $D^n $ where $F$ points radially inward.</p> <p>I could get the first statement by the property of a degree. However, in order to use this fact to derive the second statement, I should know that $F$, restricted to $S^{n-1}$ and normalized so that $\bar F:S^{n-1} \to S^{n-1}$, is of degree zero. If I can conclude that $\bar F$ is not surjective, then it's all done. However, I am not sure how to show that $\bar F$ is of degree zero. </p> <p>Any comment about this would be appreciated! </p>
Community
-1
<p><strong>Hint</strong></p> <ul> <li><p>$A_5$ is simple. </p></li> <li><p>What is the index of such a group? Let $A_5$, a simple group act on left cosets of this proper subgroup? What can you say about the kernel of the homomorphism that comes with this action? </p></li> <li><p>So, now apply first isomorphism theorem; Lagrange's theorem to conclude a result known due to Poincare...</p></li> <li><p>So, what do you conclude?</p></li> </ul> <hr> <p>Perhaps, a more adhoc solution that applies exclusively here, but nonetheless, an important fact would be to prove the following:</p> <ul> <li><p>$A_5$ has no element of order $15$. (Perhaps, you should try to list all those orders that occur in $A_5$.)</p></li> <li><p>A group of order $15$ is cyclic. (Perhaps, I suggest you classify groups of order $pq$ for primes $p$ and $q$. This is a fun exercise and I suggest you'll do this. You'll get comfortable thinking about group actions and Sylow's theorem. ) </p></li> </ul>
135,663
<p>It is a problem from Hatcher's book, and it is my homework problem.</p> <p>It is a section 2.2 problem 3, stating:</p> <p>Let $f:S^n\to S^n$ be a map of degree zero. Show that there exist points $x,y \in S^n$ with $f(x)=x$ and $f(y)=-y$. Use this to show that if $F$ is a continuous vector field defined on the unit ball $D^n$ in $\mathbb{R}^n$ such that $F(x) \neq 0$ for all $x$, then there exists a point on the boundary of $D^n $ where $F$ points radially outward and another point on the boundary of $D^n $ where $F$ points radially inward.</p> <p>I could get the first statement by the property of a degree. However, in order to use this fact to derive the second statement, I should know that $F$, restricted to $S^{n-1}$ and normalized so that $\bar F:S^{n-1} \to S^{n-1}$, is of degree zero. If I can conclude that $\bar F$ is not surjective, then it's all done. However, I am not sure how to show that $\bar F$ is of degree zero. </p> <p>Any comment about this would be appreciated! </p>
Mikko Korhonen
17,384
<p>Show that every group of order $15$ is cyclic. The result follows since there is no element of order $15$ in $A_5$.</p>
57,213
<p>Let <span class="math-container">$A \in \mathbb{Q}^{6 \times 6}$</span> be the block matrix below:</p> <p><span class="math-container">$$A=\left(\begin{array}{rrrr|rr} -3 &amp;3 &amp;2 &amp;2 &amp; 0 &amp; 0\\ -1 &amp;0 &amp;1 &amp;1 &amp; 0 &amp; 0\\ -1&amp;0 &amp;0 &amp;1 &amp; 0 &amp; 0\\ -4&amp;6 &amp;4 &amp;3 &amp; 0 &amp; 0\\ \hline 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp;1 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; -9 &amp;6 \end{array}\right).$$</span></p> <p>I found out that the minimal polynomial of <span class="math-container">$A$</span> is <span class="math-container">$(x-3)^3(x+1)^2$</span>, and now let</p> <p><span class="math-container">$$f(x)=2x^9+x^8+5x^3+x+a$$</span></p> <p>a polynomial, <span class="math-container">$a\in N$</span>. I need to find out for which <span class="math-container">$a$</span> the matrix <span class="math-container">$f(A)$</span> is invertible.</p> <p>It has some similarity to <a href="https://math.stackexchange.com/questions/57123/prove-that-if-gt-is-relatively-prime-to-the-characteristic-polynomial-of-a">to my last question</a>, but I still can't understand and solve it. Thanks again.</p>
Andrea
14,351
<p><strong>Theorem.</strong> Let $V$ be a finite-dimensional $\mathbb{K}$-vector space and let $f \in \mathrm{End}(V)$ be an endomorphism with minimal polynomial $m_f(t) \in \mathbb{K}[t]$. If $a(t) \in \mathbb{K}[t]$, then $a(f) \in \mathrm{GL}(V)$ if and only if $\gcd(a,m_f)=1$.</p> <p><em>Proof.</em> $\Leftarrow$) By Bezout's identity, $1 = \lambda m_f + \mu a$ for some polynomials $\lambda, \mu$. So, evaluating in $f$, one has $\mathrm{id}_V = \mu(f) \circ a(f)$, which proves that $a(f)$ is invertible.</p> <p>$\Rightarrow$) Let $d$ be the greatest common divisor of $a$ and $m_f$. One has $a = \tilde{a} d$ for a polynomial $\tilde{a}$, so $a(f) = \tilde{a}(f) \circ d(f)$ and hence $\ker d(f) \subseteq \ker a(f)$. But $a(f)$ is invertible, so $d(f)$ is injective, and therefore invertible since $V$ is finite-dimensional. One has $m_f = \tilde{m} d$, so $0 = \tilde{m}(f) \circ d(f)$; but $d(f)$ is invertible, so $\tilde{m}(f) = 0$. But $m_f$ is the minimal polynomial, so $m_f = \tilde{m}$ and then $d = 1$. $\square$</p>
57,213
<p>Let <span class="math-container">$A \in \mathbb{Q}^{6 \times 6}$</span> be the block matrix below:</p> <p><span class="math-container">$$A=\left(\begin{array}{rrrr|rr} -3 &amp;3 &amp;2 &amp;2 &amp; 0 &amp; 0\\ -1 &amp;0 &amp;1 &amp;1 &amp; 0 &amp; 0\\ -1&amp;0 &amp;0 &amp;1 &amp; 0 &amp; 0\\ -4&amp;6 &amp;4 &amp;3 &amp; 0 &amp; 0\\ \hline 0 &amp; 0 &amp; 0 &amp; 0 &amp; 0 &amp;1 \\ 0 &amp; 0 &amp; 0 &amp; 0 &amp; -9 &amp;6 \end{array}\right).$$</span></p> <p>I found that the minimal polynomial of <span class="math-container">$A$</span> is <span class="math-container">$(x-3)^3(x+1)^2$</span>. Now let</p> <p><span class="math-container">$$f(x)=2x^9+x^8+5x^3+x+a$$</span></p> <p>be a polynomial with <span class="math-container">$a\in \mathbb{N}$</span>. I need to find for which <span class="math-container">$a$</span> the matrix <span class="math-container">$f(A)$</span> is invertible.</p> <p>It has some similarity to <a href="https://math.stackexchange.com/questions/57123/prove-that-if-gt-is-relatively-prime-to-the-characteristic-polynomial-of-a">my last question</a>, but I still can't understand and solve it. Thanks again.</p>
Did
6,179
<p>Put $A$ in Jordan form. The diagonal is made of $3$s and $-1$s. In this vector basis, $f(A)$ is also upper triangular and its diagonal is made of $f(3)$s and $f(-1)$s. Hence $f(A)$ is invertible if and only if there is no zero on the diagonal of its Jordan form if and only if $f(3)$ and $f(-1)$ are nonzero (and this condition is equivalent to the fact that the gcd of $f$ and the minimal polynomial of $A$ is $1$).</p>
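To make the criterion concrete (an editorial sketch, not part of the original answer): evaluating $f$ at the eigenvalues gives $f(3)=46065+a$ and $f(-1)=a-7$, so among natural numbers the only value for which $f(A)$ fails to be invertible is $a=7$.

```python
def f(x, a):
    # f(x) = 2x^9 + x^8 + 5x^3 + x + a evaluated at an integer x
    return 2 * x**9 + x**8 + 5 * x**3 + x + a

# f(3) = 46065 + a and f(-1) = a - 7, so among natural numbers a
# the only bad value is a = 7 (f(3) = 0 would require a = -46065).
bad = [a for a in range(0, 100) if f(3, a) == 0 or f(-1, a) == 0]
print(bad)  # natural a for which f(A) is singular
```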
2,372,171
<p>Let $T$ be a bounded linear operator on a Hilbert space $H$. I have to show that the following are equivalent:</p> <p>(i) $T$ is unitary</p> <p>(ii) For every orthonormal basis $\{u_{\alpha}:\alpha\in \Lambda\}$, $\{T(u_{\alpha}):\alpha\in \Lambda\}$ is an orthonormal basis.</p> <p>(iii) For some orthonormal basis $\{u_{\alpha}:\alpha\in \Lambda\}$, $\{T(u_{\alpha}):\alpha\in \Lambda\}$ is an orthonormal basis.</p> <p>I have proved that (i)$\implies$ (ii). Also (ii)$\implies$ (iii) is obvious. </p> <p>How can I show that (iii)$\implies$ (i)? Any suggestions would be appreciated.</p>
Nina Simone
467,038
<p>It is enough to compute $T^*T(u_\alpha)$ for all $\alpha$, since the operator is determined by its values on a basis.</p> <p>We can compute the coordinates of this vector in the basis $u_\alpha$:</p> <p>$$T^*T(u_\alpha)\cdot u_{\beta}=T(u_\alpha)\cdot T(u_\beta)=\delta_{\alpha,\beta}$$</p> <p>Therefore $T^*T(u_\alpha)=\sum_\beta(T^*T(u_\alpha)\cdot u_{\beta})u_{\beta}= u_{\alpha}$, from which we get that $T^*T$ is the identity.</p> <p>This is the same computation we did above, but hiding things a little.</p>
81,811
<p>I heard this example was given in Whitehead's paper &quot;A certain open manifold whose group is unity&quot; (<a href="http://qjmath.oxfordjournals.org/content/os-6/1/268.full.pdf" rel="nofollow">http://qjmath.oxfordjournals.org/content/os-6/1/268.full.pdf</a>), but I was confused by his terminology. So I am looking for an explanation of this example in more standard terms.</p> <p>Since my aim is to understand an example of this kind, any alternative example will do as well.</p>
Marco Golla
13,119
<p>It's discussed in Kirby's "The topology of 4-manifolds", around page 80, and at a glance the argument looks "modern".</p>
283,747
<p>Let $BG$ denote the classifying space of a finite group $G$. For which group cohomology classes $c\in H^2(G;\mathbb{Z}/2)$ does there exist a real vector bundle $E$ over $BG$ such that $w_2(E)=c$?</p>
Neil Strickland
10,366
<p>Put $$A=\{c\in H^2(BG;\mathbb{Z}/2): c = w_2(V) \text{ for some } V\}$$ Here are some observations:</p> <ol> <li>Let $V$ be a real representation with determinant $L$, so $w_1(V)=w_1(L)$. Put $W=V\oplus L\oplus L\oplus L$. We find that $\det(W)=1$ and $w_2(W)=w_2(V)$. It follows that $A=\{w_2(W):\det(W)=1\}$. Moreover, if $\det(V)=\det(W)=1$ then $w_2(V\oplus W)=w_2(V)+w_2(W)$. It follows that $A$ is a subgroup of $H^2(BG;\mathbb{Z}/2)$.</li> <li>If $a,b\in H^1(BG;\mathbb{Z}/2)=\text{Hom}(G,O(1))$ then there are essentially unique one-dimensional representations $L,M$ with $w_1(L)=a$ and $w_1(M)=b$, and this gives $w_2(L\oplus M)=ab$ It follows that $A$ contains all decomposable elements of $H^2(BG;\mathbb{Z}/2)$.</li> <li>There is also a well-known isomorphism $H^2(BG;\mathbb{Z})=\text{Hom}(G,S^1)=\text{Hom}(G,SO(2))$. Using this, we see that if $c\in H^2(BG;\mathbb{Z}/2)$ can be lifted to $H^2(BG;\mathbb{Z})$, then it lies in $A$.</li> <li>Let $P\colon H^2(X;\mathbb{Z}/2)\to H^m(X;\mathbb{Z}/2)$ be any natural cohomology operation that annihilates $A$ for all $G$. Using point 2 we see that $P$ is zero when $X=B(C_2^2)=(\mathbb{R}P^\infty)^2=K(\mathbb{Z}/2,1)^2$. However, it is a standard fact that the obvious map $$H^*(K(\mathbb{Z}/2,n);\mathbb{Z}/2)\to H^*(B(C_2^n);\mathbb{Z}/2)$$ is injective for all $n$. It follows from this (taking $n=2$) that $P=0$. This means that there are no primary cohomological tests that we can use to determine whether elements lie in $A$.</li> </ol> <p>UPDATE: As Mark Grant points out, one can also treat this as a problem in obstruction theory. It is probably best to organise this in terms of the Atiyah-Hirzebruch spectral sequence $$ \widetilde{H}^p(BG;kO^q) \Longrightarrow \widetilde{kO}^{p+q}(BG), $$ with differentials $$ d_r\colon E_r^{pq} \to E_r^{p+r,q-r+1}. 
$$ It is a standard fact that \begin{align*} kO^* &amp;= \mathbb{Z}[\eta,\mu,\lambda]/(2\eta,\eta^3,\eta\mu,\mu^2-4\lambda) \\ &amp;= \mathbb{Z}[\lambda]\oplus \mathbb{Z}/2[\lambda]\eta \oplus \mathbb{Z}/2[\lambda]\eta^2\oplus \mathbb{Z}[\lambda]\mu \end{align*} with $|\eta|=-1$ and $|\mu|=-4$ and $|\lambda|=-8$. (In particular, these degrees are all negative, corresponding to the fact that $kO$ is a connective spectrum, so the spectral sequence is concentrated in the fourth quadrant.) A class $c\in H^2(BG;\mathbb{Z}/2)$ gives a class $c\eta^2\in E_2^{2,-2}$. Because we are using the reduced homology version of the spectral sequence we see that $E_r^{pq}=0$ for $p\leq 0$ so all differentials ending at $E_r^{2,-2}$ are zero. The problem is really to understand whether the elements $d_r(c\eta^2)$ are zero or not. Because $kO^{-3}=0$, the first possible differential is $d_3(c\eta^2)\in H^5(BG;\mathbb{Z}).\mu$. The result of Teichner mentioned by Mark must mean that this is $$ d_3(c\eta^2) = \beta Sq^2(c) \mu. $$ As $kO^{-5}=kO^{-6}=kO^{-7}=0$, the next possible differential is $d_7(c\eta^2)$, which lies in $E_7^{9,-8}$, which is a subquotient of $H^9(BG;\mathbb{Z}).\lambda$. One would need quite a bit of detailed information to determine which subquotient. In general we have differentials $d_r(c\eta^2)\in E_r^{2+r,-1-r}$ for $r\in\{3,7,8,9\}\pmod{8}$.</p> <p>For any particular group $G$ it is probably more efficient to just work out the representation theory and calculate the Stiefel-Whitney classes, but the spectral sequence approach sheds some interesting light on the general picture.</p>
65,480
<p>The example question is </p> <blockquote> <p>Find the remainder when $8x^4+3x-1$ is divided by $2x^2+1$</p> </blockquote> <p>The answer did something like</p> <p>$$8x^4+3x-1=(2x^2+1)(Ax^2+Bx+C)+(Dx+E)$$</p> <p>where $(Ax^2+Bx+C)$ is the quotient and $(Dx+E)$ the remainder. I believe the degree of the quotient is the degree of $8x^4+3x-1$ minus the degree of the divisor. But what about the remainder? Would it not be </p>
Peđa
15,660
<p>Polynomial division allows a polynomial to be written in divisor–quotient form:</p> <p>$\frac{P(x)}{D(x)}=Q(x)+\frac{R(x)}{D(x)}$, where degree(D) &lt; degree(P) and degree(R) &lt; degree(D).</p> <p>This rearrangement is known as the division transformation.</p> <p>In this particular case $R(x)=3x+1$, so degree(R) = 1 &lt; degree(D) = 2.</p>
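The division here can be double-checked with a short long-division routine (an added illustration, not part of the original answer); coefficients are listed from the highest degree down.

```python
def polydiv(num, den):
    # Polynomial long division; coefficients from highest degree to lowest.
    num = list(num)
    q = []
    while len(num) >= len(den):
        coeff = num[0] / den[0]
        q.append(coeff)
        for i, d in enumerate(den):
            num[i] -= coeff * d
        num.pop(0)  # leading term is now zero
    return q, num  # quotient, remainder

quotient, remainder = polydiv([8, 0, 0, 3, -1], [2, 0, 1])
print(quotient, remainder)  # 4x^2 - 2 and 3x + 1
```

So $8x^4+3x-1=(2x^2+1)(4x^2-2)+(3x+1)$, which is easy to confirm by expanding.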
3,239,540
<p><span class="math-container">$$S=\sum_{k=2}^{n}\frac{k^{2}-2}{k!},\quad n\geq 2$$</span></p> <p>I got <span class="math-container">$S=\sum_{k=2}^{n}\left(\frac{1}{(k-2)!}+\frac{1}{(k-1)!}-\frac{1}{k!}-\frac{1}{k!}\right)$</span></p> <p>When I substitute values of <span class="math-container">$k$</span>, not all terms cancel. I am left with <span class="math-container">$\frac{1}{1!}+\frac{1}{2!}+...+\frac{1}{(n-2)!}$</span> </p> <p>The sum should be <span class="math-container">$2-e+\frac{1}{1!}+\frac{1}{2!}+...+\frac{1}{(n-2)!}$</span></p>
G Cab
317,234
<p><span class="math-container">$$ \eqalign{ &amp; \sum\limits_{k = 2}^n {\left( {{1 \over {\left( {k - 2} \right)!}} + {1 \over {\left( {k - 1} \right)!}} - {2 \over {k!}}} \right)} = \sum\limits_{k = 0}^{n - 2} {{1 \over {k!}} + \sum\limits_{k = 1}^{n - 1} {{1 \over {k!}}} - \sum\limits_{k = 2}^n {{2 \over {k!}}} } = \cr &amp; = 1 + 1 + \sum\limits_{k = 2}^{n - 2} {{1 \over {k!}}} + 1 + \sum\limits_{k = 2}^{n - 2} {{1 \over {k!}}} + {1 \over {\left( {n - 1} \right)!}} - 2\left( {\sum\limits_{k = 2}^{n - 2} {{1 \over {k!}}} + {1 \over {\left( {n - 1} \right)!}} + {1 \over {n!}}} \right) = \cr &amp; = 3 - {1 \over {\left( {n - 1} \right)!}} - {2 \over {n!}} = 3 - {{n + 2} \over {n!}} \cr} $$</span> and it <strong>checks</strong> with the original sum, which in fact is rational and cannot include <span class="math-container">$e$</span>.<br> So something is wrong somewhere in your notes.</p>
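The closed form $3-\frac{n+2}{n!}$ can be verified with exact rational arithmetic (an added check, not part of the original answer):

```python
from fractions import Fraction
from math import factorial

def partial_sum(n):
    # Sum (k^2 - 2)/k! for k = 2..n using exact fractions.
    return sum(Fraction(k * k - 2, factorial(k)) for k in range(2, n + 1))

checks = [partial_sum(n) == 3 - Fraction(n + 2, factorial(n)) for n in range(2, 12)]
print(all(checks))
```

The exact agreement for each $n$ also confirms that the sum is rational, so no closed form involving $e$ is possible.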
2,627,131
<blockquote> <p>Let <span class="math-container">$f(x)= 1+\sqrt{x+k+1}-\sqrt{x+k}, \ \ k \in \mathbb{R}$</span>. Find the number of solutions of</p> <p><span class="math-container">$$f(x)=f^{-1}(x), \ \ \ \ \text{where } f^{-1}(f(x))=x$$</span></p> </blockquote> <p>My try:</p> <p><span class="math-container">$$y=1+\sqrt{x+k+1}-\sqrt{x+k} \\ (y-1)^2=x+k+1+x+k-2\sqrt{(x+k+1)(x+k)}\\(y-1)^2-2x-2k-1=-2\sqrt{(x+k+1)(x+k)}\\ ((y-1)^2-2x-2k-1)^2=4(x^2+x(2k+1)+k^2+k)$$</span></p> <p>Now what do I do?</p>
prog_SAHIL
307,383
<p><strong>Hint:</strong></p> <p>The points of intersection of $f(x)$ and $f^{-1}(x)$ will be the same as those of $f(x)$ and the line $y=x$.</p>
102,814
<p>Is it possible to construct a nontrivial homomorphism from $C_6$ to $A_3$? I have tried to construct one but failed. Is there a good way to see when there will be a homomorphism?</p>
Clive Newstead
19,542
<p>Obviously you can't construct an isomorphism $C_6 \to A_3$, since $C_6$ has order $6$ and $A_3$ has order $3$. So you'll need to construct a homomorphism $\theta : C_6 \to A_3$ which is not injective. </p> <p>A useful fact to note is that $A_3 \cong C_3$, so write $C_6 = \{e, a, \dots, a^5 \}$ and $A_3 = \{ e, b, b^2 \}$.</p> <p>A good way to do this is to consider the orders of the elements of the two groups you're working with. In $C_6$, you have one element of order $1$ (the identity), one element of order $2$, two elements of order $3$ and two elements of order $6$. In $A_3$ you have one element of order $1$ and two elements of order $3$.</p> <p>Since $C_6$ is generated by either of its elements of order $6$, namely $a$ or $a^5$, it makes sense to choose a sensible element of $A_3$ to map such an element (say $a$) to so that the homomorphism is nontrivial. Then since any homomorphism $\theta$ satisfies $\theta(x^n) = \theta(x)^n$, this must determine the images of the rest of the elements of your group.</p> <p>In this case, any non-identity choice of $\theta(a)$ will work; but this philosophy applies more generally when constructing homomorphisms.</p>
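Modelling $C_6$ as $\mathbb{Z}_6$ and $A_3\cong C_3$ as $\mathbb{Z}_3$, a brute-force search over all $3^6$ maps (an editorial addition, not part of the original answer) confirms that there are exactly three homomorphisms, two of which are nontrivial:

```python
from itertools import product

# A map Z_6 -> Z_3 is a tuple (f(0), ..., f(5)); keep those satisfying
# the homomorphism law f(i + j) = f(i) + f(j) (indices and values mod 6, 3).
homs = [f for f in product(range(3), repeat=6)
        if all(f[(i + j) % 6] == (f[i] + f[j]) % 3
               for i in range(6) for j in range(6))]
print(len(homs))  # trivial homomorphism plus two nontrivial ones
```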
3,579,346
<p>I've been learning some introductory analysis on manifolds and have had a small issue ever since the notion of tangent spaces at points on a differentiable manifold was introduced.</p> <p>In our lectures, we began with the definition using equivalence classes of curves. But it is also possible to define tangent spaces using derivations of smooth functions (and apparently several other ways too, but for now I'm only familiar with these two).</p> <p>It seems intuitively sensible to call both these pictures (the curve and derivative ones) "equivalent": let the point of interest be <span class="math-container">$p$</span> and pick a local chart <span class="math-container">$\phi$</span>. Then we form a quotient of the set of curves through <span class="math-container">$p$</span> (parametrized so that <span class="math-container">$p=\gamma(0)$</span>), declaring <span class="math-container">$\gamma_1\sim\gamma_2$</span> iff <span class="math-container">$(\phi\,\circ\,\gamma_1)'(0)=(\phi\,\circ\,\gamma_2)'(0)$</span>. This is one particular version of a tangent space at <span class="math-container">$p$</span>. But we could also define it as the space of derivations, i.e. 
linear maps from <span class="math-container">$C^\infty(M)$</span> to <span class="math-container">$\mathbb{R}$</span> satisfying the Leibniz rule <span class="math-container">$$D(fg)=D(f)g(p)+f(p)D(g)$$</span> For any equivalence class of curves <span class="math-container">$[\gamma]$</span> at <span class="math-container">$p$</span>, the operator defined on <span class="math-container">$C^\infty(M)$</span> by <span class="math-container">$$ D_{[\gamma]}(f)=(f\circ\gamma)'(0) $$</span> is a derivation; conversely, it is true that every derivation is such a directional derivative (proof: <a href="https://math.stackexchange.com/questions/1146901/equivalence-of-definitions-of-tangent-space">Equivalence of definitions of tangent space</a>).</p> <p>Most of this is a recap of part of <a href="https://en.wikipedia.org/wiki/Tangent_space" rel="noreferrer">Wikipedia</a>. At any rate, both of these notions seem to give in some sense "the same" tangent spaces.</p> <p>Here is my problem: I don't actually understand what precisely it is we are checking for when trying to decide if some two definitions are equivalent; right now, all I would personally try to do is show isomorphism of vector spaces and then try to convince myself that this isomorphism respects some vague notion of direction. But then <span class="math-container">$\mathbb{R}^{\mathrm{dim}(M)}$</span> is certainly isomorphic to any tangent space of the manifold <span class="math-container">$M$</span>, at least as a vector space. Nevertheless, just declaring <span class="math-container">$T_pM=\mathbb{R}^{\mathrm{dim}(M)}$</span> doesn't strike me as a successful construction of a tangent space.</p> <p>Now, there are two levels to my question, ordered by "degree of abstraction", so to speak (presumably they also get harder to answer). I do, however, believe they are connected.</p> <p>First, is there some precise notion of vector space isomorphisms respecting direction on a manifold? 
Specifically, is <span class="math-container">$\mathbb{R}^{\mathrm{dim}(M)}$</span> a valid tangent space or is it not, or do I perhaps have to specify some additional structure on it and then check that the additional structure relates to, say, the curve definition in a correct way? (I suppose this last case would require taking one definition of the tangent space as the absolute foundation and comparing all others to it, which I find somewhat unsatisfying.)</p> <p>Second, is there perhaps an abstract, "external" definition of a tangent space? What I'm talking about could be something like, "Given a smooth manifold <span class="math-container">$M$</span>, a point <span class="math-container">$p\in M$</span> and a vector space <span class="math-container">$V$</span>, this vector space is called a <em>tangent space at <span class="math-container">$p$</span></em> if it satisfies some properties <span class="math-container">$X,Y,Z...$</span>" where these <span class="math-container">$X,Y,Z$</span> don't depend on the type of objects in <span class="math-container">$V$</span> or other particular details specific to <span class="math-container">$V$</span>.</p> <p>The motivation behind asking this is related to the situation with ordered pairs of objects (yes, this is quite a leap): I can use the Kuratowski definition or infinitely many others, and in each case, I will be able to eventually convince myself that, indeed, this thing before me works just as well to encode "ordered-ness" of objects as any other. 
But I don't have to keep referring to one of these specific cases, I just need to describe how pairs should arise and behave in general: there is a two-place function <span class="math-container">$f$</span> that sends two objects <span class="math-container">$x$</span> and <span class="math-container">$y$</span> to <span class="math-container">$(x,y)$</span> and there are two projections <span class="math-container">$\pi_1,\pi_2$</span> that pull <span class="math-container">$x$</span> and <span class="math-container">$y$</span> back out. (For a precise definition see <a href="https://www.logicmatters.net/resources/pdfs/GentleIntro.pdf#chapter.7" rel="noreferrer">this PDF</a>, I summarised the discussion from there. It goes on to define products also within category theory.) Furthermore, I would find it highly suspect if some theorem about ordered pairs referred to the particulars of the Kuratowski definition - all the relevant information about <span class="math-container">$(x,y)$</span> should be recoverable from just the abstract setup described above (or better yet, in the linked PDF). Is there some way of treating tangent spaces in this same spirit?</p> <p>I know this question is vague, but I honestly don't know how better to phrase it, I hope I've at least gotten the mindset across if nothing else.</p>
anomaly
156,999
<p>1) The tangent space <span class="math-container">$T_p M$</span> is a vector space and, as you pointed out, any two vector spaces of the same dimension are isomorphic. There are two issues that make the definitions in the post nontrivial. First, that isomorphism is not canonical; it depends on (e.g.) a choice of basis. Second, and more importantly, the right notion here is that of the tangent <em>bundle</em> versus the tangent <em>space</em>. That is, the tangent bundle <span class="math-container">$TM$</span> is a space together with a (continuous) projection <span class="math-container">$\pi:TM \to M$</span> such that every point <span class="math-container">$p\in M$</span> has a neighborhood <span class="math-container">$U$</span> above which <span class="math-container">$\pi$</span> is just the projection <span class="math-container">$U \times \mathbb{R}^n \to U$</span> for some fixed <span class="math-container">$n$</span>. In this case, <span class="math-container">$TM$</span> is the collection of the <span class="math-container">$T_p M$</span> for <span class="math-container">$p\in M$</span>, topologized in a certain way.</p> <p>2) There's nothing inherently wrong with having a lot of equivalent definitions of the tangent space; consider all the different definitions of an ordinary derivative. Ultimately, all of these definitions come down to the fact that a tangent space is defined locally (i.e., <span class="math-container">$T_p M$</span> only depends on a neighborhood of <span class="math-container">$p$</span>), and points on manifolds have neighborhoods homeomorphic (in whatever category we're considering, and presumably at least <span class="math-container">$C^1$</span> here) to <span class="math-container">$\mathbb{R}^n$</span>. On <span class="math-container">$\mathbb{R}^n$</span>, the idea of a tangent space is simple: it's just <span class="math-container">$\mathbb{R}^n$</span> itself. 
The different definitions are just ways of turning that idea into something that doesn't depend on explicit choices of local charts. For motivation, you might want to consider the case where <span class="math-container">$M$</span> is smoothly embedded in some <span class="math-container">$\mathbb{R}^n$</span>. (By the Whitney embedding theorem, this is a trivial assumption, at least if we're assuming second-countability. The trick is coming up with a definition that's independent of that embedding.)</p> <p>3) As for an abstract or external definition, define the cotangent space <span class="math-container">$T_p^* M$</span> to be the quotient <span class="math-container">$I/I^2$</span>, where <span class="math-container">$I$</span> is the space of smooth maps <span class="math-container">$f:M \to \mathbb{R}$</span> that vanish at <span class="math-container">$p$</span>. (It would probably be cleaner to work with the sheaf of smooth functions defined on a neighborhood of <span class="math-container">$p$</span>, but we can reduce to the case above via a suitable bump function.) The tangent space is then the dual of <span class="math-container">$T_p^* M$</span>, but <span class="math-container">$T_p^* M$</span> itself is useful in, for example, defining differential forms.</p> <p>Beyond that, it sounds like the abstraction you may be looking for (though, unfortunately, it's not particularly category-theoretic) is that of a vector bundle or, more abstractly, a general fiber bundle. The full definition is on (e.g.) wikipedia, but the idea is the same as the one in part (1) above: A bundle with fiber <span class="math-container">$F$</span> over a manifold <span class="math-container">$M$</span> is a space <span class="math-container">$E$</span> along with a continuous surjection <span class="math-container">$\pi:E \to M$</span> that locally looks like the projection <span class="math-container">$U \times F \to U$</span> onto the first coordinate. 
The Moebius strip, for example, is a <span class="math-container">$[0, 1]$</span>-bundle over the circle: It just looks like <span class="math-container">$[0, 1] \times U$</span> around a small neighborhood <span class="math-container">$U$</span> of a point in the central circle, but the whole space isn't just <span class="math-container">$[0, 1]\times S^1$</span>.</p> <p>This turns out to be an extraordinarily useful idea, and it leads to extremely productive ideas about exact sequences in algebraic topology, characteristic classes, classifying spaces, and so on.</p>
1,960,911
<p>I am trying to evaluate this limit for an assignment. $$\lim_{x \to \infty} \sqrt{x^2-6x +1}-x$$</p> <p>I have tried to rationalize the function: $$=\lim_{x \to \infty} \frac{(\sqrt{x^2-6x +1}-x)(\sqrt{x^2-6x +1}+x)}{\sqrt{x^2-6x +1}+x}$$</p> <p>$$=\lim_{x \to \infty} \frac{-6x+1}{\sqrt{x^2-6x +1}+x}$$</p> <p>Then I multiply the function by $$\frac{(\frac{1}{x})}{(\frac{1}{x})}$$</p> <p>Leading to </p> <p>$$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{(\frac{-6}{x})+(\frac{1}{x^2})}+1}$$</p> <p>Taking the limit, I see that all x terms tend to zero, leaving -6 as the answer. But -6 is not the answer. Why is that?</p>
DonAntonio
31,254
<p>You should have gotten, after the last step:</p> <p>$$\lim_{x \to \infty} \frac{-6+\frac1x}{\sqrt{1-\frac6x +\frac1{x^2}}+1}=\frac{-6}{2}=-3$$</p> <p>so in fact you only had a minor, though pretty influential, arithmetical mistake.</p>
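A numerical sanity check (an editorial addition, not part of the original answer); the rationalized form is used in the code to avoid catastrophic cancellation at large $x$:

```python
from math import sqrt

def g(x):
    # Equal to sqrt(x^2 - 6x + 1) - x, written in the rationalized form
    # (-6x + 1)/(sqrt(x^2 - 6x + 1) + x) for numerical stability.
    return (-6 * x + 1) / (sqrt(x * x - 6 * x + 1) + x)

values = [g(10.0**k) for k in (3, 6, 9)]
print(values)  # approaches -3 as x grows
```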
1,960,911
<p>I am trying to evaluate this limit for an assignment. $$\lim_{x \to \infty} \sqrt{x^2-6x +1}-x$$</p> <p>I have tried to rationalize the function: $$=\lim_{x \to \infty} \frac{(\sqrt{x^2-6x +1}-x)(\sqrt{x^2-6x +1}+x)}{\sqrt{x^2-6x +1}+x}$$</p> <p>$$=\lim_{x \to \infty} \frac{-6x+1}{\sqrt{x^2-6x +1}+x}$$</p> <p>Then I multiply the function by $$\frac{(\frac{1}{x})}{(\frac{1}{x})}$$</p> <p>Leading to </p> <p>$$=\lim_{x \to \infty} \frac{-6+(\frac{1}{x})}{\sqrt{(\frac{-6}{x})+(\frac{1}{x^2})}+1}$$</p> <p>Taking the limit, I see that all x terms tend to zero, leaving -6 as the answer. But -6 is not the answer. Why is that?</p>
haqnatural
247,767
<p>It should be $$\lim _{ x\to \infty } \frac { -6x+1 }{ \sqrt { x^{ 2 }-6x+1 } +x } =\lim _{ x\to \infty } \frac { x\left( -6+\frac { 1 }{ x } \right) }{ x\left( \sqrt { 1-\frac { 6 }{ x } +\frac { 1 }{ { x }^{ 2 } } } +1 \right) } =\frac { -6 }{ 2 } =-3$$</p>
1,966,122
<p>$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k} = \sum_{k=n+1}^{2n} \frac{1}{k}$$</p> <p>I am trying to prove this inductively, so I thought that I would expand the right side out of sigma form to get</p> <p>$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k} = \frac{2}{2n(2n+1)} - \frac{1}{n}$$</p> <p>which simplified to</p> <p>$$\sum_{k=1}^{2n}\frac{(-1)^{k+1}}{k} = \frac{-2}{2n+1}$$</p> <p>but apparently that isn't correct, can someone provide some insight into what I am doing wrong?</p>
felasfa
55,243
<p>Good work so far. To complete the problem, the key is to understand the meaning of the matrix of a linear map. Consider a linear map $T:V \rightarrow W$. For simplicity, assume the basis for $V$ is $\{\alpha_{1},\alpha_{2},\alpha_{3}\}$ and the basis for $W$ is $\{\beta_{1},\beta_{2}\}$. The first step, as you did above, is to consider the action of $T$ on the basis vectors in $V$. Say we did that and we have $T(\alpha_1), T(\alpha_2), T(\alpha_3)$. Now, we need to express each of these vectors that lie in $W$ as a linear combination of the basis in $W$, i.e. $\{\beta_{1},\beta_{2}\}$ \begin{align} T(\alpha_1) &amp;= a_{11} \beta_{1} + a_{12} \beta_{2} \\ T(\alpha_2) &amp;= a_{21} \beta_{1} + a_{22} \beta_{2} \\ T(\alpha_3) &amp;= a_{31} \beta_{1} + a_{32} \beta_{2} \end{align} where the $a_{ij}$ are some coefficients you find. As an example, the coordinate representation of $T(\alpha_1)$ is simply $$ T(\alpha_1)_{\beta}= \begin{pmatrix} a_{11}\\ a_{12} \end{pmatrix} $$ where the subscript $\beta$ denotes the coordinate representation with respect to the basis $\beta$. The matrix is nothing but the coordinate representation of the mapped vectors, i.e. $$ \begin{pmatrix} \vdots &amp; \vdots &amp; \vdots \\ T(\alpha_1)_{\beta} &amp; T(\alpha_2)_{\beta} &amp; T(\alpha_3)_{\beta}\\ \vdots &amp; \vdots &amp; \vdots \\ \end{pmatrix} $$ As you can see, its representation depends on the basis we choose. The operator is one and the same, but depending on the basis you choose, you have different matrix representations. The matrix of the linear map is then $$ \begin{pmatrix} a_{11} &amp; a_{21} &amp; a_{31}\\ a_{12} &amp; a_{22} &amp; a_{32} \end{pmatrix} $$ So for your problem, all that remains to be done is to compute the coordinate representation of the mapped vectors in terms of the basis in $\mathbb R^{2}$.</p>
402,802
<p>I have read that $$y=\lvert\sin x\rvert+ \lvert\cos x\rvert $$ is periodic with fundamental period $\frac{\pi}{2}$.</p> <p>But <a href="http://www.wolframalpha.com/input/?i=y%3D%7Csinx%7C%2B%7Ccosx%7C" rel="nofollow">Wolfram</a> says it is periodic with period $\pi$.</p> <p>Please tell what is correct.</p>
Hagen von Eitzen
39,174
<p>Don't trust Wolfram when you also have pen and paper available.</p> <p>Of course, $x\mapsto \sin x$ and $x\mapsto \cos x$ are functions with period $2\pi$. Composing them with some other function (here the absolute value) gives us functions having $2\pi$ as a period as well. But since $\sin(x+\pi)=-\sin x$ (and similarly for cosine), the absolute value in fact introduces the smaller period $\pi$. Finally, adding two functions having $\pi$ as a period gives another function having $\pi$ as a period. But since $|\sin(x+\frac\pi2)|=|\cos x|$ and $|\cos(x+\frac\pi2)|=|\sin x|$, swapping the summands introduces a shorter period again, that is, $\frac\pi2$ is <em>a</em> period of our function. To see that it is fundamental, i.e. that there is no smaller positive number $p$ with $f(x+p)=f(x)$ for all $x$, observe that $f(x)=1$ iff $x=\frac\pi2k$ for some $k\in \mathbb Z$ (why?), or that $f$ fails to be differentiable precisely for $x=\frac\pi2 k$ (why?), or that $f$ is strictly increasing on $[0,\frac\pi4]$ (why? and why does that show that $\frac\pi2$ is minimal?), or look for other distinctive features preventing smaller periods ...</p>
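A numerical spot-check (an editorial addition, not part of the original answer): shifting by $\pi/2$ leaves the function unchanged, while shifting by $\pi/4$ does not.

```python
from math import sin, cos, pi

def f(x):
    return abs(sin(x)) + abs(cos(x))

xs = [0.1 * i for i in range(100)]
# Largest discrepancy over the sample points for each candidate period.
shift_half_pi = max(abs(f(x + pi / 2) - f(x)) for x in xs)
shift_quarter_pi = max(abs(f(x + pi / 4) - f(x)) for x in xs)
print(shift_half_pi, shift_quarter_pi)
```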
3,244,073
<p>Let <span class="math-container">$A = \{1, 3, 5, 9, 11, 13\}$</span> and let <span class="math-container">$\odot$</span> define the binary operation of multiplication modulo <span class="math-container">$14$</span>.</p> <p>Prove that <span class="math-container">$(A, \odot)$</span> is a group. </p> <p>While completing this question I was able to show that the set was closed, and that associative law held, and that the set contained an identity element. However, I was unable to show that the set had inverses.</p> <p>I drew up the following Cayley table for the set:</p> <p><span class="math-container">$$\begin{bmatrix} \odot &amp; 1 &amp; 3 &amp; 5 &amp; 9 &amp; 11 &amp; 13 \\ 1 &amp; 1 &amp; 3 &amp; 5 &amp; 9 &amp; 11 &amp; 13 \\ 3 &amp; 3 &amp; 9 &amp; 1 &amp; 13 &amp; 5 &amp; 11 \\ 5 &amp; 5 &amp; 1 &amp; 11 &amp; 3 &amp; 13 &amp; 9 \\ 9 &amp; 9 &amp; 13 &amp; 3 &amp; 11 &amp; 1 &amp; 5 \\ 11 &amp; 11 &amp; 5 &amp; 13 &amp; 1 &amp; 9 &amp; 3 \\ 13 &amp; 13 &amp; 11 &amp; 9 &amp; 5 &amp; 3 &amp; 1 \\ \end{bmatrix}$$</span></p> <p>Any help with showing that this set has inverses would be much appreciated. Thanks in advance :)</p>
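For the inverse axiom, note that the identity $1$ appears exactly once in every row of your table, which already exhibits an inverse for each element. The pairing can also be computed directly (an added check, not part of the original post):

```python
A = [1, 3, 5, 9, 11, 13]

# For each x in A, find the y in A with x * y congruent to 1 (mod 14).
inverses = {x: next(y for y in A if (x * y) % 14 == 1) for x in A}
print(inverses)
```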
Jack D'Aurizio
44,121
<p>I strongly agree with the comment by J.G.: the improper Riemann integral equals <span class="math-container">$$\int_{-\infty}^{+\infty}\frac{z^2 e^z}{(e^z-1)^2}\,dz = \int_{-\infty}^{+\infty}\left(\frac{z}{2\sinh\frac{z}{2}}\right)^2\,dz\stackrel{sym}{=}4\int_{0}^{+\infty}\frac{u^2}{\sinh^2 u}\,du$$</span> or, by integration by parts, <span class="math-container">$$ 8\int_{0}^{+\infty}u(\coth u-1)\,du=8\int_{0}^{+\infty}\left[u-\log(2\sinh u)\right]\,du = 8\int_{1}^{+\infty}\log\left(\frac{t}{t-1/t}\right)\frac{dt}{t}. $$</span> This can be shown to be equal to <span class="math-container">$4\,\zeta(2)=\frac{2\pi^2}{3}$</span> in many ways, for instance by reducing the last integral (via <span class="math-container">$t\mapsto 1/t$</span>) to a multiple of </p> <p><span class="math-container">$$\int_{0}^{1}\frac{-\log(1-t^2)}{t}\,dt=\sum_{n\geq 1}\frac{1}{n}\int_{0}^{1}t^{2n-1}\,dt=\frac{1}{2}\sum_{n\geq 1}\frac{1}{n^2}=\frac{\pi^2}{12}.$$</span></p>
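A crude numerical quadrature (an added sketch, not from the original answer; a plain composite Simpson rule over a truncated range) agrees with the value $4\zeta(2)=\tfrac{2\pi^2}{3}\approx 6.5797$:

```python
from math import sinh, pi

def integrand(u):
    # (u / sinh u)^2, with the limit value 1 at u = 0.
    r = u / sinh(u) if u > 0 else 1.0
    return r * r

# Composite Simpson's rule for 4 * integral of (u / sinh u)^2 over [0, 40];
# the tail beyond 40 is negligible.
a, b, n = 0.0, 40.0, 20000  # n must be even
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
approx = 4 * s * h / 3
print(approx, 2 * pi**2 / 3)
```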
2,906,917
<p>I have this problem but I don't know how to continue.<br> Here it is: Compute $\int \sin(x) \left( \frac{1}{\cos(x) + \sin(x)} + \frac{1}{\cos(x) - \sin(x)} \right)\,dx.$<br> I know how to antidifferentiate $\sin x$ on its own, but I am unsure where to go from there with the fractions. I don't want to multiply the fractions out to create a big and messy function, and I don't quite understand how to do partial fraction decomposition. I'm guessing I will have to do a substitution?</p> <p>Anyway, thank you for any help!</p>
bjcolby15
122,251
<p>An alternate (but longer) solution:</p> <p>Going back to the beginning - if we have simplified the original to $$\int\sin x\left(\frac{2\cos x}{\cos^2x-\sin^2x}\right)dx,$$ we can convert the denominator to $2 \cos^2 x - 1$ (as it's equivalent to $\cos^2x -\sin^2 x$) and use the substitution $$u = \cos x, \quad du = -\sin x \ dx;$$ this gives us $$-\int \frac{2u}{2u^2-1}\,du.$$ </p> <p>Then we substitute again with $$v = 2u^2 - 1, \quad dv = 4u \ du, \quad \frac {1}{2}\, dv = 2u \ du$$ to get $$-\dfrac {1}{2} \int \frac{dv}{v}.$$ </p> <p>This integral with respect to $v$ is $$-\dfrac {1}{2}\ln|v|+C;$$ now all we need to do is resubstitute $u$ and then $x$, giving us $$-\dfrac {1}{2}\ln|2u^2-1|+C \Rightarrow -\dfrac {1}{2}\ln |2\cos^2 x-1|+C.$$ </p> <p>Using the identity $$2 \cos^2 x -1 = \cos 2x$$ we arrive at $$\bbox[lightgray] {-\dfrac {1}{2} \ln |\cos 2x| + C}$$ where $C$ is an arbitrary constant.</p> <p>Side note: If we DID know the identity $2 \sin x \cos x = \sin 2x$, our substitution would be a slam dunk, i.e. $u = \cos 2x, \ du = -2 \sin 2x \ dx,$ and we would get $-\dfrac {1}{2}$ $\int {du}/{u}$ immediately. </p>
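As a sanity check (an editorial addition), one can differentiate the antiderivative $-\tfrac12\ln|\cos 2x|$ numerically and compare with the original integrand at a few points away from the singularities:

```python
from math import sin, cos, log

def integrand(x):
    return sin(x) * (1 / (cos(x) + sin(x)) + 1 / (cos(x) - sin(x)))

def F(x):
    # Candidate antiderivative.
    return -0.5 * log(abs(cos(2 * x)))

# Central finite difference of F should reproduce the integrand.
h = 1e-6
pts = [0.1, 0.3, 0.5, 0.6]
errs = [abs((F(x + h) - F(x - h)) / (2 * h) - integrand(x)) for x in pts]
print(errs)
```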
2,648,516
<p>I am studying Fourier analysis from the text "Stein and Shakarchi" and there is this thing on the Dirichlet Kernel. It's fine to define it as a trigonometric polynomial of degree $n$, but what is the mathematical intuition behind calling it a Kernel? I have also thought of a kernel as being the set of zeroes of some function. Is there a relation between the two terminologies?</p>
David C. Ullrich
248,223
<p>In general, if you have a linear operator $T$ on some space of functions, defined by an integral $$Tf(x)=\int f(t) K(x,t)\,dt,$$then $K$ is the "kernel". </p>
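<p>For the Dirichlet kernel specifically, the operator in question is the $N$-th partial Fourier sum: $S_N f(x)=\frac{1}{2\pi}\int_{-\pi}^{\pi}f(t)\,D_N(x-t)\,dt$. Here is a minimal numeric illustration in Python (the test function, $N$, and grid size are arbitrary choices; since $f$ below is a trigonometric polynomial of degree $2\le N$, the partial sum should reproduce $f$ exactly):</p>

```python
import math

N = 5  # order of the Dirichlet kernel

def f(t):
    # trigonometric polynomial of degree 2 <= N, so S_N f = f
    return math.cos(t) + 0.5 * math.sin(2 * t)

def dirichlet(u):
    # D_N(u) = sin((N + 1/2) u) / sin(u / 2), with value 2N + 1 where sin(u/2) = 0
    s = math.sin(u / 2)
    if abs(s) < 1e-12:
        return 2 * N + 1
    return math.sin((N + 0.5) * u) / s

def partial_sum(x, M=256):
    # S_N f(x) = (1 / 2π) ∫_{-π}^{π} f(t) D_N(x - t) dt,
    # via the trapezoid rule, which is exact for periodic trig polynomials
    h = 2 * math.pi / M
    acc = 0.0
    for i in range(M):
        t = -math.pi + i * h
        acc += f(t) * dirichlet(x - t)
    return acc * h / (2 * math.pi)

for x in [0.0, 0.7, 2.1]:
    print(x, partial_sum(x), f(x))
```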
397,347
<p>I'm trying to figure out how to evaluate the following: $$ J=\int_{0}^{\infty}\frac{x^3}{e^x-1}\ln(e^x - 1)\,dx $$ I tried considering $I(s) = \int_{0}^{\infty}\frac{x^3}{(e^x-1)^s}\,dx\implies J=-I'(1)$, but I couldn't figure out what $I(s)$ was. My other idea was contour integration, but I'm not sure how to deal with the logarithm. Mathematica says that $J\approx24.307$. </p> <p>I've asked a <a href="https://math.stackexchange.com/questions/339711/find-the-value-of-j-int-0-infty-fracx3ex-1-lnx-dx">similar question</a> and the answer involved $\zeta(s)$ so I suspect that this one will as well. </p>
Start wearing purple
73,025
<p>Mathematica says that the answer is $$\pi^2\zeta(3)+12\zeta(5)$$ I will try to figure out how this can be proven.</p> <hr> <p><strong>Added</strong>: Let me compute the 2nd integral in <strong>Ron Gordon</strong>'s answer: \begin{align}\int_{0}^{\infty}\frac{x^3 e^{-x}}{1-e^{-x}}\ln(1-e^{-x})\,dx &amp;=-\frac32\int_0^{\infty}x^2\ln^2(1-e^{-x})\,dx=\\&amp;=-\frac32\left[\frac{\partial^2}{\partial s^2}\int_0^{\infty}e^{-sx}\ln^2(1-e^{-x})\,dx\right]_{s=0}=\\ &amp;=-\frac32\left[\frac{\partial^2}{\partial s^2}\int_0^{1}t^{s-1}\ln^2(1-t)\,dt\right]_{s=0}=\\ &amp;=-\frac32\left[\frac{\partial^4}{\partial s^2\partial u^2}\int_0^{1}t^{s-1}(1-t)^u\,dt\right]_{s=0,u=0}=\\ &amp;=-\frac32\left[\frac{\partial^4}{\partial s^2\partial u^2}\frac{\Gamma(s)\Gamma(1+u)}{\Gamma(1+s+u)}\right]_{s=0,u=0}=\\ &amp;=-\frac{1}{2}\left(\pi^2\psi^{(2)}(1)-\psi^{(4)}(1)\right). \end{align} To obtain the last expression, one should expand the ratio of gamma functions to 2nd order in $u$, then to expand the corresponding coefficient to 2nd order in $s$.</p> <p>Then we can use that $\psi^{(2)}(1)=-2\zeta(3)$ and $\psi^{(4)}(1)=-24\zeta(5)$ (cf formula (15) <a href="http://mathworld.wolfram.com/PolygammaFunction.html">here</a>) to obtain the quoted result.</p>
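<p>The closed form can be checked against a direct numerical evaluation of the integral (a minimal Python sketch; the series truncation and quadrature parameters are ad hoc choices):</p>

```python
import math

def zeta(s, terms=100000):
    # partial sum of the zeta series; the tail is O(1/terms^(s-1))
    return sum(1.0 / k ** s for k in range(1, terms + 1))

closed_form = math.pi ** 2 * zeta(3) + 12.0 * zeta(5)

def g(x):
    e = math.expm1(x)  # e^x - 1, accurate for small x
    return x ** 3 / e * math.log(e)

# trapezoid rule; the integrand vanishes at both endpoints
a, b, n = 1e-9, 50.0, 200000
h = (b - a) / n
total = 0.5 * (g(a) + g(b))
for i in range(1, n):
    total += g(a + i * h)
numeric = h * total
print(numeric, closed_form)
```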
4,487,489
<p>I'm currently working on completing their first unit on calculus ab and I've encountered this roadblock. That's probably an exaggeration but I honestly can't figure out what they mean by &quot;for negative numbers&quot;. I did the math and got the right number (at least the right absolute value) but the missing negative sign cost me the question and fair enough but why is there a negative sign that's being added anyway?<span class="math-container">$$\sqrt{x^6}=(x^6)^{1/2}=x^{6\times\frac12}=x^3$$</span>I get that so why the negative?</p> <p><a href="https://i.stack.imgur.com/3YyPW.png" rel="nofollow noreferrer">for context here's their explanation and the problem itself</a></p>
Dark Malthorp
532,432
<p>The issue here has to do with non-integer exponents being weird. When we say <span class="math-container">$\sqrt{x}=y$</span>, what we mean is that <span class="math-container">$y$</span> is a number which, when squared, gives <span class="math-container">$x$</span>. However, there are two such numbers (unless <span class="math-container">$x=0$</span> of course). Both <span class="math-container">$y$</span> and <span class="math-container">$-y$</span> give <span class="math-container">$x$</span> when we square them. For example, <span class="math-container">$2^2 = (-2)^2=4$</span>. We take the positive root by convention, but as far as exponentiation is concerned, there's not a natural reason to prefer it.</p> <p>When we're dealing with non-integer exponents, the identity <span class="math-container">$$ (x^a)^b = x^{a b} $$</span> really means that <em>some</em> value we could assign to the RHS is the same as some value we could assign to the LHS. It doesn't mean that the convention of taking positive roots necessarily makes the equation true. For example:<span class="math-container">$$ -1 = (-1)^{\frac12 \cdot 2} \ne ((-1)^2)^{\frac12} = \sqrt{1} = 1 $$</span> The same sort of thing is happening in this example. If we take <span class="math-container">$x&lt;0$</span>, then <span class="math-container">$x^3&lt;0$</span>, but <span class="math-container">$\sqrt{x^6} &gt; 0$</span>. So our convention failed us, and we need to take the other possible square root, i.e. <span class="math-container">$-\sqrt{x^6}$</span>.</p>
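<p>In other words, $\sqrt{x^6}=|x|^3$, which equals $x^3$ for non-negative $x$ but $-x^3$ for negative $x$. A quick numeric check (a minimal Python sketch; the sample values are arbitrary):</p>

```python
import math

# sqrt(x^6) = |x|^3: equal to x^3 for x >= 0, but to -x^3 for x < 0
checks = []
for x in [-3.0, -1.5, 0.0, 2.0]:
    lhs = math.sqrt(x ** 6)
    rhs = x ** 3 if x >= 0 else -x ** 3
    checks.append((x, lhs, rhs))
    print(x, lhs, rhs)
```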
1,984,843
<p>If $\cup$ is finite, say $n$, I came up with the formula</p> <p>$f(x) = n x + i$, where $x \in [\frac{i}{n}, \frac{i+1}{n}]$, $n$ is a non-negative integer, and $i$ ranges from $0$ to $n-1$.<br><br></p> <p>I'm not sure whether it's correct to assume the bijection holds as $n$ approaches infinity.</p>
hmakholm left over Monica
14,366
<p>If you just want to know that a bijection <em>exists</em>, use the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem" rel="nofollow">Cantor-Schröder-Bernstein theorem</a>:</p> <ul> <li><p>The <em>identity function</em> is an injection from $[0,1]$ into $[0,1]\cup[2,3]\cup\cdots$.</p></li> <li><p>$f(x)=\frac{1}{x+1}$ is an injection from $[0,1]\cup[2,3]\cup\cdots$ into $[0,1]$.</p></li> </ul> <p>Since there are injections both ways, there is also a bijection.</p>
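<p>A quick numeric illustration of the second injection (a minimal Python sketch; the sample points are arbitrary): $f(x)=\frac{1}{x+1}$ is strictly decreasing, hence injective, and every value lands in $(0,1]\subseteq[0,1]$.</p>

```python
# f(x) = 1/(x + 1) on [0,1] ∪ [2,3] ∪ [4,5] ∪ ...: strictly decreasing,
# hence injective, and all values land in (0, 1]
def f(x):
    return 1.0 / (x + 1.0)

samples = [0.0, 0.5, 1.0, 2.0, 2.5, 3.0, 4.0, 100.0]  # points of the union
values = [f(x) for x in samples]
print(values)
```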
1,908,844
<p>The following example is taken from the book "Introduction to Probability Models" by Sheldon M. Ross (Chapter 5, example 5.4).</p> <blockquote> <p>The dollar amount of damage involved in an automobile accident is an exponential random variable with mean 1000. Of this, the insurance company only pays that amount exceeding (the deductible amount of) 400. Find the expected value and the standard deviation of the amount the insurance company pays per accident.</p> </blockquote> <p>In the solution, the author states that: </p> <blockquote> <p>By the lack of memory property of the exponential, it follows that if a damage amount exceeds 400, then the amount by which it exceeds it is exponential with mean 1000.</p> </blockquote> <p>After reading several implications of this property, I easily map such a statement to something like: if you have been waiting for 400s without seeing the bus, then the expected time until the next bus is always 1000s. (Please correct me if I'm wrong.)</p> <p>If I've understood correctly, what confuses me is this next equation:</p> <p>$$ E[Y|I=1] = 1000 $$</p> <p>where:</p> <p>$X$: the dollar amount of damage resulting from an accident</p> <p>$Y=(X-400)^+$: the amount paid by the insurance company (where $a^+$ is $a$ if $a&gt;0$ and 0 if $a&lt;=0$).</p> <p>$I = 1*(X &gt; 400) + 0*(X&lt;=400)$</p> <p>I don't get why that equality holds given the memoryless property. Straightforwardly, given the subtraction of 400, I think it should be something like: $E[Y|I] = 1000 - 400 = 600$ (or some other value). 
Can anyone give me an explanation about this?</p> <p>In case you are not clear about my description, please refer to this <a href="https://books.google.ca/books?id=A3YpAgAAQBAJ&amp;pg=PA281&amp;lpg=PA281&amp;dq=probability%20model%20dollar%20amount%20of%20damage%20exponential&amp;source=bl&amp;ots=CaFTvM6Rtw&amp;sig=t0nrAFc-6hX0ByxD3bAD-E3M7EM&amp;hl=en&amp;sa=X&amp;ved=0ahUKEwiA4oaN4enOAhUGfxoKHRZHDEYQ6AEIHDAA#v=onepage&amp;q=probability%20model%20dollar%20amount%20of%20damage%20exponential&amp;f=false" rel="nofollow">link</a> with <strong>example 5.4</strong>.</p>
Paolo Leonetti
45,736
<p>The result is $\binom{n+m}{k}$.</p> <p>This is known as <a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity" rel="nofollow">Vandermonde's identity</a>.</p>
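<p>The identity is easy to spot-check numerically (a minimal Python sketch; the values of $n$, $m$, $k$ are arbitrary):</p>

```python
from math import comb

# Vandermonde's identity: sum_j C(n, j) * C(m, k - j) = C(n + m, k)
n, m, k = 5, 7, 6
lhs = sum(comb(n, j) * comb(m, k - j) for j in range(k + 1))
print(lhs, comb(n + m, k))
```

Note that `math.comb` conveniently returns 0 when the lower index exceeds the upper one, so the out-of-range terms vanish automatically.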
1,574,003
<p>I know that if A and C are finite sets then |AxC|=|A||C|. This makes the problem quite simple, but the sets may not be finite. </p> <p>I am guessing that the concept of cardinality of infinite sets and &#8501;<sub>0</sub> are part of the solution, but those are concepts that my class did not go into much and I do not understand very well.</p> <p>This is my first post to Stack Exchange, so please inform me of any wrongdoing.</p>
cr001
254,175
<p>$|A|=|B|$ means there is a bijection $f:A\rightarrow B$ that $f(a)=b$ for $a\in A, b\in B$.</p> <p>Similarly there is $g:C\rightarrow D$ that $g(c)=d$.</p> <p>Now we can easily show that $h(a,c)=(f(a),g(c))$ is a bijection over $A\times C\rightarrow B\times D$</p>
4,220,972
<p>I'm studying for the GRE and a practice test problem is, &quot;For all real numbers x and y, if x#y=x(x-y), then x#(x#y) =?&quot;</p> <p>I do not know what the # sign means. This is apparently an algebraic operation, but I cannot find any such thing in several searches. I'm an older student and haven't had basic algebra in over 45 years, and this was certainly not in my recent linear algebra class.</p>
Shffl
395,362
<p>I used the form of Chebyshev's inequality found <a href="https://en.wikipedia.org/wiki/Chebyshev%27s_inequality#Statement" rel="nofollow noreferrer">here</a>.</p> <p>Because <span class="math-container">$\operatorname{Var}(X) = \frac{1}{4}$</span>, you have that <span class="math-container">$\sigma = 0.5$</span>. Then it's just an application of the inequality in the case that <span class="math-container">$k = 2$</span>, giving an answer of 3/4.</p>
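<p>A Monte Carlo illustration of the bound (a minimal Python sketch; the problem does not specify the distribution of the random variable, so a normal distribution with $\sigma=0.5$ is assumed here purely for illustration — Chebyshev's bound holds for any distribution with that variance):</p>

```python
import random

# Chebyshev: P(|X - mu| >= k*sigma) <= 1/k^2. With sigma = 0.5 and k = 2
# the bound is 1/4, i.e. P(|X - mu| < 2*sigma) >= 3/4.
random.seed(0)
sigma, k, trials = 0.5, 2, 100000
tail = sum(1 for _ in range(trials)
           if abs(random.gauss(0.0, sigma)) >= k * sigma) / trials
print(tail, 1 / k ** 2)
```

For the normal distribution the observed tail is far below the bound (about 0.046), which shows Chebyshev's inequality is a worst-case guarantee, not a sharp estimate.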
1,179,497
<p>Let $(F,+,\cdot)$ be a field. </p> <p>Then to prove that $(F,+)$ and $(F-\{0\},\cdot)$ are not isomorphic as groups.</p> <p>I am facing difficulty in finding the map to bring a contradiction!!</p>
Michael Hardy
11,667
<p>Quid's answer is in one sense the same as what I was about to post when I read it. But the way I phrased it may make it clear to some people first learning the subject, in a way that quid's might not.</p> <p>Suppose an isomorphism $\varphi$ from the multiplicative group to the additive group exists. In a field in which $-1\ne1$, we have $(-1)^2=1$ and so $\varphi(-1)+\varphi(-1)=\varphi(1)=0$. This is a field in which $2\ne0$, so it is permissible to divide both sides of the equality $2\varphi(-1)=0$ by $2$ and get $\varphi(-1)=0$. That puts $-1$ in the kernel of the homomorphism $\varphi$, which, being an isomorphism, should have only $1$ in its kernel.</p> <p>In a field in which $-1=1$ one uses a different argument.</p>
2,111,402
<p>Simple exercise 6.2 in Hammack's Book of Proof. "Use proof by contradiction to prove"</p> <p>"Suppose $n$ is an integer. If $n^2$ is odd, then $n$ is odd"</p> <p>So my approach was:</p> <p>Suppose instead, IF $n^2$ is odd THEN $n$ is even</p> <p>Alternatively, then you have the contrapositive, IF $n$ is not even ($n$ is odd), then $n^2$ is not odd ($n^2$ is even).</p> <p>$n = 2k+1$ where $k$ is an integer. (definition of odd)</p> <p>$n^2 = (2k+1)^2$</p> <p>$n^2 = 4k^2 + 4k + 1$</p> <p>$n^2 = 2(2k^2 + 2k) + 1$</p> <p>$n^2 = 2q + 1$ where $q = 2k^2 + 2k$</p> <p>therefore $n^2$ is odd by definition of odd.</p> <p>Therefore we have a contradiction. Contradictory contrapositive proposition said $n^2$ is not odd, but the derivation says $n^2$ is odd. Therefore the contradictory contrapositive is false, therefore the original proposition is true.</p> <p>Not sure if this was the efficient/correct way to prove this using Proof-By-Contradiction.</p>
Dylan
409,257
<p>Last night I read this as a perfectly acceptable claim, but, as has been pointed out, your negation was not simply a "harder case" but instead the converse. Apologies! Your proof happened to work here because the stronger relationship (i.e. $\iff$) holds.</p> <p>A couple of things to note though. A proof by contradiction would simply need to state that: there exists some $n^2$ which is odd, which has an even $n$ (perhaps phrased better: Suppose that you have $n^2$ which is odd for a corresponding $n$ even). With this in mind, consider the following direct proof by contradiction.</p> <p>Assume that for some $n^2$ which is odd, we have $n = 2k$.</p> <p>\begin{align} n^2 &amp;= n\cdot n\\ &amp;= (2k)(2k)\\ &amp;= 2(2k^2) \\ 2k^2 &amp;\in \mathbb{Z} \ \ \ \ \text{call it q} \\ n^2 &amp;= 2q \implies 2 | n^2 \end{align}</p> <p>and so we have reached a contradiction, so our assumption must be incorrect!</p> <p>I do want to stress that, as with most propositions, there are multiple ways in which to prove this statement, as has been pointed out using the contrapositive is the most efficient way of proving it, but as the exercise asked to use a contradiction method, the above would work!</p>
2,638,028
<p><strong>Question:</strong></p> <blockquote> <p>If $p,q$ are positive integers, $f$ is a function defined for positive numbers and attains only positive values such that $f(xf(y))=x^py^q$, then prove that $p^2=q$.</p> </blockquote> <p><strong>My solution:</strong></p> <p>Put $x=1$. So, $f(f(y))=y^q$; then evidently, $f(y)=y^{\sqrt{q}}...(1)$ satisfies this.<br> Now, put $x=y=1$, to get $f(1)=1$.<br> Now, put $y=1$. So, $f(x)=x^p...(2)$</p> <p>Equating $f(a)$ for an arbitrary constant $a$ from the two equations $(1)$ and $(2)$, we get: $a^{\sqrt{q}}=a^p$ or $p^2=q$. $\blacksquare$</p> <hr> <p>Is this solution correct? I am particularly worried because I have solved this six-mark question in a four-line solution, which wouldn't make my prof very happy...</p>
Community
-1
<p>There is a major gap in your argument.</p> <p>You've correctly argued that any $f$ satisfying the functional equation $f(x f(y)) = x^p y^q$ must also satisfy the functional equation $f(f(y)) = y^q$.</p> <p>It is correct that $f(t) = t^\sqrt{q}$ is <em>one</em> solution to the functional equation $f(f(y)) = y^q$.</p> <p>However, you have made no attempt to show it is the <em>only</em> solution. In fact, other solutions to this functional equation exist; a simple one is that $f(t) = 17/t$ satisfies $f(f(y)) = y^q$ when $q=1$. More complicated solutions exist too.</p> <p>Thus, you have <strong><em>not</em></strong> shown that any $f$ satisfying $f(x f(y)) = x^p y^q$ must also satisfy $f(t) = t^\sqrt{q}$. Some possible ways to continue are:</p> <ul> <li>Find a proof that doesn't rely on this assumption</li> <li>Find out what the other cases are and prove that they also have $p^2 = q$</li> <li>Prove that functions satisfying $f(x f(y)) = x^p y^q$ must actually be of the form $f(t) = t^\sqrt{q}$.</li> </ul>
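<p>The non-uniqueness is easy to see numerically (a minimal Python sketch of the $q=1$ counterexample mentioned above):</p>

```python
# With q = 1, f(t) = 17 / t satisfies f(f(y)) = y = y^q on the positive reals,
# yet f(t) is not t^sqrt(q) = t — so the equation f(f(y)) = y^q alone
# does not pin down f.
def f(t):
    return 17.0 / t

for y in [0.5, 1.0, 2.0, 10.0]:
    print(y, f(f(y)))
```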
26,083
<p>I have a data set in form of:(this is just an example)</p> <pre><code>1324501020 3241030205 4332020134 </code></pre> <p>the data are stored in a text file (e.g. data.txt) but I need to convert them into a matrix format such that each number be place in a cell like this:</p> <pre><code>1 3 2 4 5 0 1 0 2 0 3 2 4 1 0 3 0 2 0 5 4 3 3 2 0 2 0 1 3 4 </code></pre> <p>or in terms of <code>List</code> in Mathematica, I need to have</p> <p><code>{{1,3,2,4,5,0,1,0,2,0},{3,2,4,1,0,3,0,2,0,5},{4,3,3,2,0,2,0,1,3,4}}</code></p> <p>in other words, the final data set supposed to be a matrix of numbers. Any idea??</p>
HyperGroups
6,648
<p><code>(IntegerDigits@ImportString["1324501020 3241030205 4332020134"]) // Flatten[#, 1] &amp;</code> gives the list data directly; there is no need to handle the spaces separately.</p> <p>If you still want strings, add the spaces back with <code>Riffle[#, " "] &amp; /@ Characters /@ StringSplit["1324501020 3241030205 4332020134"]</code>.</p> <p>If the data are in a txt file, import them as a list first.</p>
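<p>If Mathematica is not at hand, the same conversion can be sketched in Python (the sample string is the one from the question):</p>

```python
# Split on whitespace, then split each number string into its digits.
text = "1324501020 3241030205 4332020134"
matrix = [[int(ch) for ch in token] for token in text.split()]
print(matrix)
```

For a file, read the text first, e.g. `text = open("data.txt").read()`, and apply the same comprehension.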
1,507,710
<p>I'm trying to get my head around group theory as I've never studied it before. As far as the general linear group, I think I've ascertained that it's a group of matrices and so the 4 axioms hold? The question I'm trying to figure out is why $(GL_n(\mathbb{Z}),\cdot)$ does not form a group. I think I read somewhere that it's because it doesn't have an inverse and I understand why this would not be a group, but I don't understand why it wouldn't have an inverse. </p>
C. Falcon
285,416
<p>$GL(n,\mathbb{Z})$ is a group under matrix multiplication. One can show that: $$GL(n,\mathbb{Z})=\{A\in\mathcal{M}(n,\mathbb{Z})\textrm{ s.t. }|\det(A)|=1\}.$$ More generally, whenever $(A,+,\times)$ is a commutative ring with an identity element for $\times$, $GL(n,A)$ is a group.</p>
1,507,710
<p>I'm trying to get my head around group theory as I've never studied it before. As far as the general linear group, I think I've ascertained that it's a group of matrices and so the 4 axioms hold? The question I'm trying to figure out is why $(GL_n(\mathbb{Z}),\cdot)$ does not form a group. I think I read somewhere that it's because it doesn't have an inverse and I understand why this would not be a group, but I don't understand why it wouldn't have an inverse. </p>
mathcounterexamples.net
187,663
<p>In a general way, if you consider a matrix $A \in \mathcal{M}_n(\mathbb Z)$, you have the relation $$A.\mathbf{adj}(A)=\det(A)I_n \tag{1}$$ where $\mathbf{adj}(A)$ stands for the <a href="https://en.m.wikipedia.org/wiki/Adjugate_matrix" rel="nofollow noreferrer">adjugate matrix</a> of $A$. The adjugate matrix $\mathbf{adj}(A)$ also belongs to $\mathcal{M}_n(\mathbb Z)$.</p> <p>The relation (1) allows one to prove that a matrix $A \in \mathcal{M}_n(\mathbb Z)$ is invertible if and only if its determinant is an invertible element of $\mathbb Z$, i.e. is equal to $\pm 1$.</p> <p>This is a proof of the claim in the answer provided by <a href="https://math.stackexchange.com/users/285416/sheol">Sheol</a>.</p>
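<p>A small numeric illustration of relation (1) in the $2\times 2$ case (a minimal Python sketch; the matrices are arbitrary examples):</p>

```python
# Relation (1) for 2x2 integer matrices: A . adj(A) = det(A) * I.
# When det(A) = ±1, the adjugate gives an integer inverse.
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def adj2(A):
    return [[A[1][1], -A[0][1]], [-A[1][0], A[0][0]]]

def mul2(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 1]]      # det(A) = 1, so adj(A) is an integer inverse of A
inv = adj2(A)
print(mul2(A, inv))       # identity matrix
B = [[2, 0], [0, 1]]      # det(B) = 2 is not invertible in Z
print(det2(B))
```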