<p>I found the following question in a test paper:</p> <blockquote> <p>Suppose $G$ is a monoid or a semigroup. $a\in G$ and $a^2=a$. What can we say about $a$?</p> </blockquote> <p>Monoids are associative and have an identity element. Semigroups are just associative. </p> <p>I'm not sure what we can say about $a$ in this case other than that $a$ could be other things apart from the identity. Any idea if there's a definitive answer to this question?</p>
drhab
<p>Let $\Omega=\Omega'$ and let $\mathcal F$ be a proper subcollection of $\mathcal F'$.</p> <p>Then the identity function $\Omega\to\Omega'$ is not measurable.</p>
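This counterexample can be checked mechanically. A minimal sketch in Python; the two-point space $\Omega=\{0,1\}$ with the trivial $\sigma$-algebra $\mathcal F$ versus the power set $\mathcal F'$ is my own choice of illustration, not part of the answer above:

```python
# Omega = Omega' = {0, 1}; F is a proper subcollection of F'.
Omega = frozenset({0, 1})
F_prime = {frozenset(), frozenset({0}), frozenset({1}), Omega}  # power set
F = {frozenset(), Omega}                                        # trivial sigma-algebra

# The identity map (Omega, F) -> (Omega', F') is measurable iff the preimage
# of every set in F' lies in F; for the identity, the preimage of A is A itself.
identity_is_measurable = all(A in F for A in F_prime)
assert not identity_is_measurable
```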
<p>If A and B don't commute, are there counterexamples where AB is diagonalizable but BA is not?</p> <p>I read that if AB=BA then both AB and BA are diagonalizable. </p>
egreg
<p>This is true in the special case when both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are invertible.</p> <p>If we denote by <span class="math-container">$E_C(\lambda)$</span> the eigenspace of the matrix <span class="math-container">$C$</span> relative to the eigenvalue <span class="math-container">$\lambda$</span>, we can see that <span class="math-container">$$ v\mapsto Bv $$</span> induces an injective linear map <span class="math-container">$E_{AB}(\lambda)\to E_{BA}(\lambda)$</span>, so <span class="math-container">$\dim E_{AB}(\lambda)\le\dim E_{BA}(\lambda)$</span>, where <span class="math-container">$\lambda$</span> is any eigenvalue of <span class="math-container">$AB$</span>. By symmetry, the two eigenspaces have the same dimension.</p> <p>Also this proves that <span class="math-container">$AB$</span> and <span class="math-container">$BA$</span> have the same eigenvalues. If <span class="math-container">$AB$</span> is diagonalizable, then the geometric multiplicities of its eigenvalues sum up to <span class="math-container">$n$</span> (the size of the matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span>). Thus the same happens for <span class="math-container">$BA$</span> and <span class="math-container">$BA$</span> is diagonalizable.</p> <p>If <span class="math-container">$A$</span> or <span class="math-container">$B$</span> is not invertible, we can still see that <span class="math-container">$AB$</span> and <span class="math-container">$BA$</span> have the same nonzero eigenvalues. 
Indeed, if <span class="math-container">$\lambda$</span> is a nonzero eigenvalue of <span class="math-container">$AB$</span>, with eigenvector <span class="math-container">$v$</span>, then <span class="math-container">$ABv=\lambda v$</span> (which implies <span class="math-container">$Bv\ne0$</span>), so <span class="math-container">$(BA)(Bv)=\lambda(Bv)$</span> and therefore <span class="math-container">$\lambda$</span> is an eigenvalue of <span class="math-container">$BA$</span>. By symmetry, <span class="math-container">$AB$</span> and <span class="math-container">$BA$</span> share the nonzero eigenvalues. Also, <span class="math-container">$0$</span> is an eigenvalue of both, because neither is invertible. However, in this case we can't control the geometric multiplicities, as Michael Biro's example shows.</p>
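The example referenced (given in another answer, not reproduced here) can be illustrated with a counterexample of the same flavor; the specific $2\times2$ matrices below are my own choice for illustration. $AB$ is nonzero and nilpotent, hence not diagonalizable, while $BA=0$ is trivially diagonalizable:

```python
def matmul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 0]]   # projection onto the first coordinate
B = [[0, 1], [0, 0]]   # nilpotent shift

AB = matmul(A, B)
BA = matmul(B, A)

assert AB == [[0, 1], [0, 0]]               # nonzero ...
assert matmul(AB, AB) == [[0, 0], [0, 0]]   # ... and nilpotent: not diagonalizable
assert BA == [[0, 0], [0, 0]]               # BA = 0 is already diagonal
```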
<p>So I want to find a field extension that has the Galois group $Z_{3} \times Z_{3} \times Z_{3}$. Now if the 3's were changed to 2's, then I guess, for example, $(x^2-2)(x^2-3)(x^2-5)$ would suffice, but I don't see any clever way to do it with $Z_{3}$. I tried a bit with cyclotomic extensions but came up empty-handed. Any hints &amp; help are appreciated, thanks!</p>
Angina Seng
<p>If we take three primes $p\equiv1\pmod3$ then each cyclotomic field $\Bbb Q(\zeta_p)$ has a cyclic cubic subextension (generated by a Gaussian period). Take the compositum of these three fields.</p> <p>To be specific, take $p=7$, $13$ and $19$. Then we can take $\Bbb Q(\gamma_7,\gamma_{13},\gamma_{19})$ where $$\gamma_7=\zeta_7+\zeta_7^6,$$ $$\gamma_{13}=\zeta_{13}+\zeta_{13}^5+\zeta_{13}^8+\zeta_{13}^{12}$$ and $$\gamma_{19}=\zeta_{19}+\zeta_{19}^7+\zeta_{19}^8+\zeta_{19}^{11} +\zeta_{19}^{12}+\zeta_{19}^{18}.$$</p> <p>These Gaussian periods are $\gamma_p=\sum_{c\in C} \zeta_p^c$ where $C$ is the set of nonzero cubes modulo $p$.</p>
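These periods can be sanity-checked numerically. A Python sketch; the minimal polynomial $x^3+x^2-2x-1$ of $\gamma_7$ is a known fact stated here as an assumption, not derived from the answer above:

```python
import cmath

def cube_residues(p):
    """The set of nonzero cubes modulo p."""
    return {pow(c, 3, p) for c in range(1, p)}

def gaussian_period(p):
    """gamma_p = sum of zeta_p^c over the nonzero cubes c mod p."""
    zeta = cmath.exp(2j * cmath.pi / p)
    return sum(zeta ** c for c in cube_residues(p))

# For p = 13 the nonzero cubes are exactly the exponents used in the answer.
assert cube_residues(13) == {1, 5, 8, 12}

# gamma_7 = zeta_7 + zeta_7^6 is real and satisfies x^3 + x^2 - 2x - 1 = 0,
# a cubic irreducible over Q, so [Q(gamma_7) : Q] = 3.
g = gaussian_period(7)
assert abs(g.imag) < 1e-9
x = g.real
assert abs(x**3 + x**2 - 2*x - 1) < 1e-9
```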
<p>Let $f(x)=\chi_{[a,b]}(x)$ be the characteristic function of the interval $[a,b]\subset [-\pi,\pi]$. </p> <p>Show that if $a\neq -\pi$, or $b\neq \pi$ and $a\neq b$, then the Fourier series does not converge absolutely for any $x$. [Hint: It suffices to prove that for many values of $n$ one has $|\sin n\theta_0|\ge c \gt 0$ where $\theta_0=(b-a)/2.$]</p> <p>However, prove that the Fourier series converges at every point $x$.</p> <p>I've computed the Fourier series and got $\frac{b-a}{2\pi}+\sum_{n\neq 0}\frac{e^{-ina}-e^{-inb}}{2\pi in}e^{inx}.$</p> <p>Also, $|e^{-ina}-e^{-inb}|=2|\sin n\theta_0|$, and $\theta_0\in (0,\pi)$, so I can see that for infinitely many values of $n$, we have $|\sin n\theta_0|\ge c \gt 0$. But this does not guarantee $\sum_{n\neq 0}|\frac{e^{-ina}-e^{-inb}}{2\pi in}e^{inx}|\ge \sum \frac{c}{n}$, and in fact we might have this inequality only for the squares of integers, in which case the right hand side converges. So how does the hint solve the problem?</p> <p>Moreover, for the second problem, to show that the Fourier series converges at every point, I think I need to use Dirichlet's test, using $1/n$ as the decreasing sequence to $0$, but how can I show that $\frac{e^{-ina}-e^{-inb}}{2\pi in}e^{inx}$ has bounded partial sums?</p> <p>I would greatly appreciate any help.</p>
fonini
<p>First of all, I like to write the Fourier coefficients of this function as <span class="math-container">$$\widehat{\chi_{\left[a,b\right]}}\left(n\right)=\frac{b-a}{2\pi}\exp\left(-i\frac{a+b}{2}n\right)\mathrm{sinc}\left(\frac{b-a}{2\pi}n\right)$$</span> where <span class="math-container">$$\mathrm{sinc}\,\eta = \frac{\sin \pi\eta}{\pi\eta},\qquad \mathrm{sinc}\,0=1.$$</span></p> <p>It's quite easy to show that <span class="math-container">$\sum\widehat{\chi_{\left[a,b\right]}}\left(n\right)$</span> converges (although not absolutely), using the following lemma:</p> <blockquote> <p><strong>(Dirichlet Test; exercise 7b in chapter 2 of Stein &amp; Shakarchi)</strong> If the partial sums of <span class="math-container">$\sum b_n$</span> are bounded, and <span class="math-container">$a_0 \geqslant a_1 \geqslant\cdots\geqslant a_n \geqslant\cdots\to 0$</span>, then <span class="math-container">$\sum a_n b_n$</span> converges.</p> </blockquote> <p>The proof of Dirichlet's Test is based on <a href="https://en.wikipedia.org/wiki/Summation_by_parts" rel="nofollow noreferrer">summation by parts</a>.</p> <p>To use this test, it's actually easier to write <span class="math-container">$\chi$</span> the way you did: <span class="math-container">$$\chi_{\left[a,b\right]}\left(x\right) \sim \frac{b-a}{2\pi}+\frac{1}{ 2\pi i}\sum_{n\neq 0}\frac{e^{in\left(x-a\right)} - e^{in\left(x-b\right)}}{n}.$$</span></p> <p>This is a series of differences. It's enough to show that each of the series <span class="math-container">$\sum e^{in\left(x-a\right)}/n$</span> and <span class="math-container">$\sum e^{in\left(x-b\right)}/n$</span> converge, because then the Fourier series of <span class="math-container">$\chi$</span> can be constructed by just taking the difference between these two series (if two series converge, their sum-series and difference-series also converge). 
If you've done exercise 8 in this chapter, you'll know that you have to use Dirichlet's test by making <span class="math-container">$a_n=1/n$</span> and <span class="math-container">$b_n=e^{in\theta}$</span>, with <span class="math-container">$\theta=x-a$</span> or <span class="math-container">$x-b$</span>. It's easy to show that the partials of <span class="math-container">$\sum e^{in\theta}$</span> are bounded. Just use the fact that it's a power series: <span class="math-container">$$\left|\sum_{1}^{N-1}e^{inx}\right|=\left|\frac{1-e^{iNx}}{1-e^{ix}}-1\right|=\frac{\left|1-e^{i\left(N-1\right)x}\right|}{\left|1-e^{ix}\right|} \leqslant\frac{2}{\left|1-e^{ix}\right|}$$</span> (This <em>is</em> an acceptable bound because <span class="math-container">$x$</span> is fixed.) (I did it just for <span class="math-container">$n&gt;0$</span>, but you can just do the same for <span class="math-container">$n&lt;0$</span>.)</p> <p>Now, there's actually two senses in which the Fourier series of <span class="math-container">$\chi$</span> could converge absolutely. First, you look at the <em>complex</em> Fourier series, and ask whether the following series converges: <span class="math-container">$$\sum_{n\in\mathbb{Z}}\left|\widehat{\chi_{\left[a,b\right]}}\left(n\right)e^{inx}\right|\to\,?$$</span> Showing that it does not converge is reasonably doable. Now, what's interesting is to ask whether the <em>real</em> Fourier series (that is, the series of sinusoids, instead of the series of complex exponentials) converges absolutely. 
We can re-write it (yes, again) like this: <span class="math-container">$$\frac{b-a}{2\pi}+\frac{2}{\pi}\sum_{n\geqslant1}\frac{1}{n}\left|\sin\left(\frac{b-a}{2}n\right)\right|\cdot\left|\cos\left(\left(x-\frac{a+b}{2}\right)n\right)\right| \to ?$$</span> You can show that it does not converge (with the exception that, if <span class="math-container">$b-a=\pi$</span>, then it converges trivially on <span class="math-container">$x=b$</span> and <span class="math-container">$x=a$</span>). I won't go into all of the details, because I can't seem to find an easy, quick proof (it would actually be quite tedious if I detailed all of it), but I'll do my best.</p> <p>Let <span class="math-container">$\theta_{0}=\left(b-a\right)/2$</span> and <span class="math-container">$\omega=x-\left(a+b\right)/2$</span>. We want to show that <span class="math-container">$\sum_{n\geqslant1}\frac{1}{n}\sin n\theta_{0}\cos n\omega$</span> is not absolutely convergent. The strategy is the obvious one: show that <span class="math-container">$\left|\sin n\theta_0\right|\geqslant c&gt;0$</span> more than 50% of the time, and show the same for <span class="math-container">$\left|\cos n\omega\right|$</span>. Then, by the pigeonhole principle, we'll have <span class="math-container">$$\left|\frac{1}{n}\sin n\theta_{0}\cos n\omega\right|\geqslant\frac{c^2}{n}$$</span> at least once per period, which is enough to compare <span class="math-container">$\sum_{n\geqslant 1}\left|\sin n\theta_{0}\right|\cdot\left|\cos n\omega\right|/n$</span> to the harmonic series.</p> <p>Let's start by getting out of the way the case in which both <span class="math-container">$\theta_0$</span> and <span class="math-container">$\omega$</span> are rational multiples of <span class="math-container">$\pi$</span>. Let <span class="math-container">$\theta_0=\left(p/q\right)2\pi$</span>, with <span class="math-container">$p/q$</span> already in lowest terms.
Then <span class="math-container">$n\theta_{0}$</span> is periodic with period <span class="math-container">$q$</span> (modulo <span class="math-container">$2\pi$</span>) and assumes all values <span class="math-container">$$\frac{0}{q}2\pi,\frac{1}{q}2\pi,\frac{2}{q}2\pi,\cdots,\frac{q-1}{q}2\pi$$</span> in each period. We're interested in the sequence <span class="math-container">$\sin n\theta_{0}$</span>. We know that <span class="math-container">$q\geqslant 3$</span>, since <span class="math-container">$\theta_{0}\in\left]0,\pi\right[$</span>. Also, let <span class="math-container">$\omega=\left(r/s\right)2\pi$</span> with <span class="math-container">$r/s$</span> in lowest terms, so that <span class="math-container">$n\omega$</span> has period <span class="math-container">$s$</span>.</p> <p>If <span class="math-container">$q=3$</span>, then <span class="math-container">$\left|\sin n\theta_{0}\right|$</span> spends <span class="math-container">$2/3$</span> of the time being equal to <span class="math-container">$\sqrt{3}/2$</span>. If <span class="math-container">$q=4$</span>, the sequence <span class="math-container">$\left|\sin n\theta_{0}\right|$</span> is <span class="math-container">$1,0,1,0,\ldots$</span>. If <span class="math-container">$q\geqslant 5$</span>, then <span class="math-container">$\left|\sin n\theta_{0}\right|$</span> has period <span class="math-container">$q$</span> and is zero at most twice per period, and therefore is greater than its minimum more than 50% of the time.
As to <span class="math-container">$\omega$</span>, it's also easy to see that the sequence <span class="math-container">$\left|\cos n\omega\right|$</span> is also <span class="math-container">$\geqslant c&gt;0$</span> more than 50% of the time, unless <span class="math-container">$s=4$</span>.</p> <p>That's enough to show that, unless <span class="math-container">$q=s=4$</span>, the sequence <span class="math-container">$\left|\sin n\theta_{0}\cos n\omega\right|$</span> is periodic with period <span class="math-container">$qs$</span>, and by the pigeonhole principle there's at least one non-zero element per period. Let's say that this non-zero element is the <span class="math-container">$k$</span>-th: <span class="math-container">$$\forall d\geqslant0\text{,} \left|\sin\left(dqs+k\right)\theta_{0}\cos\left(dqs+k\right)\omega\right| =\left|\sin k\theta_{0}\cos k\omega\right|&gt;0\text{.}$$</span> Then: <span class="math-container">$$\sum_{n\geqslant1}\frac{1}{n}\left|\sin n\theta_{0}\cos n\omega\right|\geqslant\left|\sin k\theta_{0}\cos k\omega\right|\sum_{d\geqslant1}\frac{1}{dqs+k}=\infty\text{,}$$</span> and we're good, unless <span class="math-container">$q=s=4$</span>.</p> <p>When <span class="math-container">$q=s=4$</span>, the series is actually absolutely convergent in a trivial fashion: all of its terms are zero, because <span class="math-container">$\left|\sin\right|$</span> will be <span class="math-container">$0,1,0,1,\ldots$</span> and <span class="math-container">$\left|\cos\right|$</span> will be <span class="math-container">$1,0,1,0,\ldots$</span>. 
Well, <span class="math-container">$q=4$</span> means <span class="math-container">$\theta_{0}=\pi/2$</span>, which means <span class="math-container">$b-a=\pi$</span>; and <span class="math-container">$s=4$</span> means <span class="math-container">$\omega=\pm\pi/2$</span>, meaning <span class="math-container">$x=a$</span> or <span class="math-container">$x=b$</span>.</p> <p>Now, the case in which <span class="math-container">$\theta_0$</span> or <span class="math-container">$\omega$</span> are irrational multiples of <span class="math-container">$\pi$</span> is the tedious part. (Actually, we could use the equidistribution theorem here, but that'll only show up in chapter 3 of the book! We'll try to do without it.) Let <span class="math-container">$H$</span> be the set of angles <span class="math-container">$\theta$</span> for which <span class="math-container">$\left|\sin\theta\right|&lt;\sin\left(\pi/50\right)$</span>. Also, let <span class="math-container">$H'$</span> be the set of <span class="math-container">$\theta$</span> such that <span class="math-container">$\left|\cos\theta\right|&lt;\sin\left(\pi/50\right)$</span>. <span class="math-container">$H$</span> and <span class="math-container">$H'$</span> together make up 8% of the whole circle, so we'd expect that any reasonable periodic-ish sequence in the circle ends up outside of <span class="math-container">$H\cup H'$</span> around 92% of the time.
I'll start by showing the following result:</p> <blockquote> <p>Let <span class="math-container">$\theta\in\left]0,\pi/3\right[\setminus\pi\mathbb{Q}$</span>. There exists <span class="math-container">$p\geqslant 1$</span> such that, given any <span class="math-container">$x_{0}\notin\pi\mathbb{Q}$</span>, the finite set <span class="math-container">$$\Theta=\left\{ x_{0},x_{0}+\theta,\ldots,x_{0}+p\theta\right\} $$</span> is rwd.</p> </blockquote> <p>Well, since <span class="math-container">$\theta\in\left]0,\pi/3\right[\setminus\pi\mathbb{Q}$</span>, there is <span class="math-container">$p\in\mathbb{Z},p\geqslant6$</span> such that <span class="math-container">$$\frac{2\pi}{p+1}&lt;\theta&lt;\frac{2\pi}{p},\qquad\text{i.e.,}\qquad p\theta&lt;2\pi&lt;\left(p+1\right)\theta\text{.}$$</span> Let <span class="math-container">$x_0\notin\pi\mathbb Q$</span>. We want to show that, out of all of the <span class="math-container">$p+1$</span> elements in <span class="math-container">$\Theta$</span>, <em>more</em> than <span class="math-container">$\left(p+1\right)/2$</span> are <em>outside</em> <span class="math-container">$H$</span>. 
Well, it's not very hard to show that, since the elements in <span class="math-container">$\Theta$</span> are equally spaced (except for the pair <span class="math-container">$x_0$</span> and <span class="math-container">$x_0+p\theta$</span>, which could be quite close to each other by wrapping around the circle), the number of elements in <span class="math-container">$\Theta\cap H$</span> is: <span class="math-container">$$\#\left(\Theta\cap H\right)\leqslant 2\left\lceil \frac{2\pi/50}{\theta}\right\rceil +1\text{.}$$</span> Now, we plug in <span class="math-container">$\theta&gt;2\pi/\left(p+1\right)$</span> and <span class="math-container">$p\geqslant6$</span> to find that <span class="math-container">$$\#\left(\Theta\cap H\right)&lt;\frac{p+1}{2}=\frac{\#\Theta}{2}\text{.}$$</span> (Yes, that "50" in there was important; I didn't just "make up a huge number" when designing <span class="math-container">$H$</span> and <span class="math-container">$H'$</span>. We need that 50 because it makes the inequality <span class="math-container">$2\left\lceil \frac{p+1}{50}\right\rceil +1&lt;\frac{p+1}{2}$</span> valid for <span class="math-container">$p\geqslant 6$</span>.)</p> <p>Anyway, ok, we showed that for <span class="math-container">$H$</span>, and the proof is exactly the same for <span class="math-container">$H'$</span>, so <span class="math-container">$\Theta$</span> is rwd, as promised.</p> <p>Moving on, we'd like to show the same proposition above without the restriction <span class="math-container">$\theta\in\left]0,\pi/3\right[$</span>. However, we already know that <span class="math-container">$\theta=\pi/2$</span> is a problem, and that will make <span class="math-container">$\theta\approx\pi/2$</span> a problem too.
We'll write <span class="math-container">$\theta=\pi/2-\vartheta$</span>, with <span class="math-container">$\vartheta\in\left]0,\pi/6\right[\setminus\pi\mathbb{Q}$</span> and abuse the formulas: <span class="math-container">$$\left|\sin n\theta\right| =\begin{cases} \left|\sin n\vartheta\right|, &amp; n\text{ even}\\ \left|\cos n\vartheta\right|, &amp; n\text{ odd} \end{cases}\text{; and}$$</span> <span class="math-container">$$\left|\cos n\theta\right| =\begin{cases} \left|\cos n\vartheta\right|, &amp; n\text{ even}\\ \left|\sin n\vartheta\right|, &amp; n\text{ odd} \end{cases}$$</span></p> <blockquote> <p>Let <span class="math-container">$\theta\in\left]\pi/3,\pi/2\right[\setminus\pi\mathbb{Q}$</span>. There exists <span class="math-container">$p\geqslant1$</span> such that, given any <span class="math-container">$N_{0}\in\mathbb{Z}$</span>, the finite set <span class="math-container">$$\Theta=\left\{ N_{0}\theta,\left(N_{0}+2\right)\theta,\ldots,\left(N_{0}+2p\right)\theta\right\}$$</span> is rwd.</p> </blockquote> <p>Ok, let <span class="math-container">$\vartheta=\pi/2-\theta$</span> so that <span class="math-container">$2\vartheta\in\left]0,\pi/3\right[$</span> and using the first proposition above, <span class="math-container">$\left\{ x_{0},x_{0}+2\vartheta,\ldots,x_{0}+2p\vartheta\right\}$</span> is rwd for all <span class="math-container">$x_0$</span>. Letting <span class="math-container">$x_{0}=N_{0}\vartheta,N_0\in\mathbb Z$</span>, we get that <span class="math-container">$\left\{ N_{0}\vartheta,\left(N_{0}+2\right)\vartheta,\ldots,\left(N_{0}+2p\right)\vartheta\right\} $</span> is rwd. 
By considering the cases <span class="math-container">$N_0$</span> even and <span class="math-container">$N_0$</span> odd separately, we show that we can change <span class="math-container">$\vartheta$</span> for <span class="math-container">$\theta$</span> in this set, and the elements inside <span class="math-container">$H$</span> remain inside <span class="math-container">$H$</span> (or, if <span class="math-container">$N_0$</span> is odd, become elements of <span class="math-container">$H'$</span> and vice-versa). Anyway, it's proved.</p> <p>Even though that second proposition above looks like it doesn't help much, it does:</p> <blockquote> <p>Let <span class="math-container">$\theta\in\left]0,\pi/2\right[\setminus\pi\mathbb{Q}$</span>. There exists <span class="math-container">$p\geqslant1$</span> such that, given any <span class="math-container">$N_{0}\in\mathbb{Z}$</span>, the finite set <span class="math-container">$\Theta_{p}\left(N_{0}\right)$</span> is rwd.</p> </blockquote> <p>We just have to show this for <span class="math-container">$\theta\in\left]\pi/3,\pi/2\right[\setminus\pi\mathbb{Q}$</span>, because the other case is already covered by the first proposition above. Applying the second proposition, we get <span class="math-container">$p'\geqslant 1$</span> such that <span class="math-container">$\Theta=\left\{ N_{0}\theta,\left(N_{0}+2\right)\theta,\ldots,\left(N_{0}+2p'\right)\theta\right\}$</span> is rwd for any <span class="math-container">$N_0$</span>. Now, let <span class="math-container">$p=2p'+1$</span>. Given any <span class="math-container">$N_0$</span>, the set <span class="math-container">$\Theta_{p}\left(N_{0}\right)=\left\{ N_{0}\theta,\left(N_{0}+1\right)\theta,\ldots,\left(N_{0}+p\right)\theta\right\}$</span> can be partitioned into two sets of size <span class="math-container">$p'+1$</span>: one containing the terms <span class="math-container">$\left(N_{0}+2k\right)\theta$</span>, and the other containing the terms <span class="math-container">$\left(N_{0}+2k+1\right)\theta$</span>.
Both parts are rwd, and therefore their union also is.</p> <p>Now, we extend this result to <span class="math-container">$\theta\in\left]0,\pi\right[$</span> using the relevant trigonometric identities again, and then extend it, finally, to the whole circle:</p> <blockquote> <p>Let <span class="math-container">$\theta\notin\pi\mathbb{Q}$</span>. There exists <span class="math-container">$p\geqslant1$</span> such that given any <span class="math-container">$N_{0}$</span>, <span class="math-container">$\Theta_{p}\left(N_{0}\right)$</span> is rwd.</p> </blockquote> <p>I'm translating this from my notes in Portuguese from <a href="https://math.stackexchange.com/questions/1833933/sum-left-sin-alpha-n-sin-beta-n-right-n-infty-when-alpha-neq-beta">when I was taking this course</a> (all of that LaTeX was already typed!), and I'm getting tired of writing, so I hope you can see that this last result is enough to tackle the cases in which <span class="math-container">$\theta_0$</span>, <span class="math-container">$\omega$</span>, or both are irrational multiples of <span class="math-container">$\pi$</span>. By the way, those notes were typed using the notation <a href="https://tauday.com/" rel="nofollow noreferrer"><span class="math-container">$\tau=2\pi$</span></a>, and in the process of translating from Portuguese I also translated from tau, and I might have forgotten to double or halve some of the integers above, so double-check everything if you're going to use it for real. </p>
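The two quantitative claims above (the geometric-sum bound on the partial sums, and the harmonic-series-like growth of the absolute series) can be spot-checked numerically; the particular values of $a$, $b$, $x$ below are arbitrary test inputs of my own choosing:

```python
import cmath
import math

# 1) Partial sums of sum e^{inx} are bounded by 2/|1 - e^{ix}| (x fixed,
#    x not a multiple of 2*pi).
x = 1.0
bound = 2 / abs(1 - cmath.exp(1j * x))
s = 0
for n in range(1, 5000):
    s += cmath.exp(1j * n * x)
    assert abs(s) <= bound + 1e-9

# 2) The absolute series sum |sin(n*theta0)| |cos(n*omega)| / n grows like the
#    harmonic series; its partial sums keep climbing (here past 3), instead of
#    leveling off as a convergent series would.
a, b, xx = 0.0, 1.0, 0.3
theta0 = (b - a) / 2
omega = xx - (a + b) / 2
total = sum(abs(math.sin(n * theta0)) * abs(math.cos(n * omega)) / n
            for n in range(1, 200000))
assert total > 3
```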
<p>I've got a problem finding the principal argument of these complex numbers. How can I evaluate these two examples?</p> <p>$$\sin \theta - i\cos \theta$$</p> <p>$$\frac{(1-i\tan \theta)}{1+\tan \theta}$$</p>
robjohn
<p>Since $\arg(zw)=\arg(z)+\arg(w)$, $$ \begin{align} \arg(\sin(\theta)-i\cos(\theta)) &amp;=\arg(-i)+\arg(\cos(\theta)+i\sin(\theta))\\ &amp;=\theta-\frac\pi2 \end{align} $$ and $$ \begin{align} \arg\left(\frac{1-i\tan(\theta)}{1+\tan{\theta}}\right) &amp;=\arg(\cos(-\theta)+i\sin(-\theta))-\arg(\sin(\theta)+\cos(\theta))\\ &amp;=-\theta+\pi[\sin(\theta)+\cos(\theta)\lt0] \end{align} $$ where $[\dots]$ are <a href="https://en.wikipedia.org/wiki/Iverson_bracket" rel="nofollow noreferrer">Iverson Brackets</a>. Note that $\arg(z)$ is determined mod $2\pi$.</p>
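The first identity is easy to verify numerically, comparing arguments modulo $2\pi$; the sample values of $\theta$ in this Python sketch are arbitrary:

```python
import cmath
import math

def args_equal_mod_2pi(u, v, tol=1e-9):
    """True if u = v (mod 2*pi), up to tol."""
    d = (u - v) % (2 * math.pi)
    return min(d, 2 * math.pi - d) < tol

# arg(sin(theta) - i*cos(theta)) = theta - pi/2 (mod 2*pi),
# since sin(theta) - i*cos(theta) = -i * e^{i*theta}.
for theta in (0.3, 1.0, 2.5, 4.0, 5.9):
    z = complex(math.sin(theta), -math.cos(theta))
    assert args_equal_mod_2pi(cmath.phase(z), theta - math.pi / 2)
```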
<p>I am sure there is a general and simplified way to solve this problem; I am just unable to figure out the generalized formula (if there is one). </p> <p>Say we have to write a <strong>code with 4 digits</strong>, where the digits can range from <strong>0</strong> to <strong>9</strong>. </p> <p>All digits in the code <strong>must be unique</strong>.<br> The digits must be <strong>neither increasing nor decreasing</strong>. </p> <p>For example, "1234" is not allowed, and neither is "1289" nor "9821". </p> <p>How many code combinations are there in total?</p>
Robert Z
<p>Hint. Once you choose $4$ distinct digits in $\{0,1,\dots,9\}$ (you can do it in $\binom{10}{4}$ ways) in how many ways can you arrange them in decreasing order? How many in increasing order? Now take them away from the $4!$ possible permutations.</p>
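The hint works out to $\binom{10}{4}(4!-2)=210\cdot22=4620$, which a brute-force count in Python confirms:

```python
from itertools import permutations
from math import comb, factorial

# Count 4-digit codes with distinct digits that are neither strictly
# increasing nor strictly decreasing.
count = 0
for p in permutations(range(10), 4):
    increasing = all(p[i] < p[i + 1] for i in range(3))
    decreasing = all(p[i] > p[i + 1] for i in range(3))
    if not (increasing or decreasing):
        count += 1

assert count == comb(10, 4) * (factorial(4) - 2)  # 210 * 22
assert count == 4620
```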
<blockquote> <p>Evaluate the integral $$\int_0^1\frac{x^7-1}{\log (x)}\,dx $$</p> </blockquote> <p><a href="https://i.stack.imgur.com/lcK2p.jpg" rel="nofollow noreferrer">https://i.stack.imgur.com/lcK2p.jpg</a> I've been trying to solve this definite integral for two hours. Please, I need help with this.</p>
hamam_Abdallah
<p><strong>hint for the convergence</strong></p> <p>$$f : x\mapsto \frac {x^7-1}{\log (x)} $$ is continuous on $(0,1)$, and is locally integrable.</p> <p>Near $0$, $f$ is bounded since $\lim_{0^+}f (x)=0$, thus $$\int_0^\frac 12 f (x)\,dx $$ is convergent.</p> <p>Near $1$, $$\log (x)\sim x-1$$ and $$\lim_{1^-}f (x)=7,$$ thus $\int_\frac 12^1 f (x)\,dx $ converges.</p> <p>Finally, $\int_0^1 f (x)\,dx $ is convergent.</p>
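The two endpoint limits in the hint can be checked numerically; the evaluation points near the endpoints are arbitrary choices:

```python
import math

def f(x):
    return (x**7 - 1) / math.log(x)

# Near 0 the function tends to 0 (slowly, like 1/|log x|);
# near 1 it tends to 7, since x^7 - 1 ~ 7(x - 1) while log(x) ~ x - 1.
assert 0 < f(1e-9) < 0.05
assert abs(f(1 - 1e-6) - 7) < 1e-4
```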
robjohn
<p>This can be written as a <a href="http://mathworld.wolfram.com/FrullanisIntegral.html" rel="nofollow noreferrer">Frullani Integral</a>: $$ \begin{align} \int_0^1\frac{x^7-1}{\log(x)}\,\mathrm{d}x &amp;=\int_0^\infty\frac{e^{-u}-e^{-8u}}{u}\,\mathrm{d}u\tag{1}\\ &amp;=\lim_{\epsilon\to0}\int_\epsilon^\infty\frac{e^{-u}-e^{-8u}}{u}\,\mathrm{d}u\tag{2}\\ &amp;=\lim_{\epsilon\to0}\left[\int_\epsilon^\infty\frac{e^{-u}}{u}\,\mathrm{d}u-\int_{8\epsilon}^\infty\frac{e^{-u}}{u}\,\mathrm{d}u\right]\tag{3}\\ &amp;=\lim_{\epsilon\to0}\int_\epsilon^{8\epsilon}\frac{e^{-u}}{u}\,\mathrm{d}u\tag{4}\\ &amp;=\lim_{\epsilon\to0}\int_\epsilon^{8\epsilon}\frac{1+O(u)}{u}\,\mathrm{d}u\tag{5}\\[3pt] &amp;=\lim_{\epsilon\to0}\left(\log(8)+O(\epsilon)\right)\tag{6}\\[6pt] &amp;=\log(8)\tag{7} \end{align} $$ Explanation:<br> $(1)$: $x=e^{-u}$<br> $(2)$: write as a limit near $0$<br> $(3)$: substitute $u\mapsto\frac{u}8$ in the second integral<br> $(4)$: combine integrals<br> $(5)$: $e^{-u}=1+O(u)$ for $u$ near $0$<br> $(6)$: integrate<br> $(7)$: take the limit</p>
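The value $\log 8\approx2.0794$ can be confirmed with a crude midpoint rule (pure Python, no external libraries; the step count is an arbitrary choice):

```python
import math

def f(x):
    # Integrand (x^7 - 1)/log(x); continuous on (0, 1) with finite endpoint limits.
    return (x**7 - 1) / math.log(x)

# Composite midpoint rule on (0, 1); midpoints never touch the endpoints,
# so the removable singularities cause no trouble.
n = 200_000
h = 1.0 / n
integral = sum(f((k + 0.5) * h) for k in range(n)) * h

assert abs(integral - math.log(8)) < 1e-3
```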
<p>If <span class="math-container">$y=\dfrac {1}{x^x}$</span> then show that <span class="math-container">$y'' (1)=0$</span></p> <p>My Attempt:</p> <p><span class="math-container">$$y=\dfrac {1}{x^x}$$</span> Taking <span class="math-container">$\ln$</span> on both sides, <span class="math-container">$$\ln (y)= \ln \left(\dfrac {1}{x^x}\right)$$</span> <span class="math-container">$$\ln (y)=-x\ln (x)$$</span> Differentiating both sides with respect to <span class="math-container">$x$</span>: <span class="math-container">$$\dfrac {1}{y}\cdot y'=-(1+\ln (x))$$</span></p>
Heatconomics
<p>Let <span class="math-container">$y=\frac{1}{x^x}$</span>, then <span class="math-container">$\ln(y)=-x\ln(x)$</span>, then <span class="math-container">$y'(x)=-(1+\ln(x))y(x)$</span>. Taking a second derivative, we have that <span class="math-container">$y''(x)=-y'(x)(1+\ln(x))-\frac{y(x)}{x}$</span>. Evaluating at <span class="math-container">$x=1$</span>, we have that <span class="math-container">$y(1)=1$</span>, <span class="math-container">$y'(1)=-1$</span>, so that <span class="math-container">$y''(1)=0$</span>.</p>
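A finite-difference sanity check of $y(1)=1$, $y'(1)=-1$, $y''(1)=0$ in Python; the step size $h$ is an arbitrary small value:

```python
def y(x):
    return x ** -x          # y = 1/x^x

h = 1e-5

# Central second difference approximates y''(1); the true value is 0.
second_diff = (y(1 + h) - 2 * y(1) + y(1 - h)) / h**2
assert abs(second_diff) < 1e-3

# Central first difference approximates y'(1) = -1, and y(1) = 1 exactly.
first_diff = (y(1 + h) - y(1 - h)) / (2 * h)
assert abs(y(1) - 1) < 1e-12
assert abs(first_diff + 1) < 1e-6
```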
<p>The following cubics have 3 real roots, but the first has Galois group $C_3$ and the second $S_3$:</p> <ul> <li>$x^3 - 3x + 1$ (red)</li> <li>$x^3 - 4x + 2$ (green)</li> </ul> <p>Is there any geometric way to distinguish between the two cases? Obviously graphing this onto the real line does not help.</p> <p><img src="https://i.stack.imgur.com/y05Wd.png" alt="enter image description here"></p> <p>It is not clear to me why you cannot transpose the red dots but you can transpose the green ones.</p>
Qiaochu Yuan
<p>There's no reason to expect that the set of real points tells you the full story in an arithmetic situation. For example, can you tell that $\pi$ is transcendental but that $\sqrt{10}$ isn't from looking at their relative positions on the number line? </p> <p>One thing you can do which (depending on your tastes) could count as geometric is looking at <a href="http://en.wikipedia.org/wiki/Frobenius_endomorphism" rel="nofollow">Frobenius elements</a>.</p> <p><strong>Proposition:</strong> Let $f(x) \in \mathbb{Z}[x]$ be a monic irreducible polynomial. Suppose that for some prime $p$ not dividing the discriminant, $f$ splits into irreducible factors of degrees $d_1, d_2, ... d_k$. Then there is an element of cycle type $(d_1, d_2, ... d_k)$ in the Galois group of $f$.</p> <p>Hence you can find a transposition in the Galois group (and show that the Galois group is $S_3$) by finding a prime $p$ relative to which $f$ splits as the product of a linear and an irreducible quadratic factor. (This is completely analogous to the situation over $\mathbb{R}$: one might say that complex conjugation is the "Frobenius element at infinity.")</p> <p>Geometrically, instead of looking at the real points of the scheme $\text{Spec } \mathbb{Z}[x]/f(x)$, we look at points over finite fields. But from a scheme-theoretic point of view all of these points are included in the geometry of the "arithmetic curve" $\text{Spec } \mathbb{Z}[x]/f(x)$. </p> <p>The <a href="http://en.wikipedia.org/wiki/Chebotarev%27s_density_theorem" rel="nofollow">Frobenius density theorem</a> even guarantees that the converse holds: for every cycle type in the Galois group there is a prime $p$ realizing that cycle type.</p>
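The proposition can be watched in action by counting roots of the two cubics modulo small primes. Since the Galois group of $x^3-3x+1$ is $C_3$ (no transpositions), its root count modulo any unramified prime is $0$ or $3$; for $x^3-4x+2$ some prime gives exactly one root (linear times irreducible quadratic, i.e. a transposition). A Python sketch, with my own choice of primes:

```python
def roots_mod_p(coeffs, p):
    """Number of roots in Z/pZ of a polynomial given as a coefficient list,
    constant term first."""
    return sum(1 for x in range(p)
               if sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p == 0)

primes = [5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 43, 47]

# x^3 - 3x + 1 has discriminant 81, so we avoid p = 3; its Galois group C_3
# contains no transposition, hence never exactly one root mod p.
c3_counts = {roots_mod_p([1, -3, 0, 1], p) for p in primes}
assert c3_counts <= {0, 3}

# x^3 - 4x + 2 has discriminant 148 = 4 * 37 (we avoid p = 2, 37); some prime
# (e.g. p = 5) gives exactly one root, witnessing a transposition in S_3.
s3_counts = [roots_mod_p([2, -4, 0, 1], p) for p in primes]
assert 1 in s3_counts
```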
Gerry Myerson
<p>Almost all cubics (with integer coefficients and three real roots) have Galois group $S_3$. What exactly is meant by "almost all" is a little technical, but the phrase can be made precise, and the result rigorously proved. One consequence is that if you start with a $C_3$ cubic and perturb the roots the tiniest little bit then with probability $1$ you now have an $S_3$ cubic. So just looking at the red dots can't help you: it's guaranteed that there is a set of green dots so close by that you wouldn't be able to distinguish them with an electron microscope. </p>
4,166,579
<p>I'm trying to solve this proof but I'm not completely sure how to start. Discrete has been pretty rough for me so far so any help would be greatly appreciated!</p>
Peter Szilas
408,605
<p>Option.</p> <p>Assume <span class="math-container">$T \subset S$</span> (strict inclusion).</p> <p>Then there is an <span class="math-container">$s \in S$</span> with <span class="math-container">$s \not \in T$</span>.</p> <p><span class="math-container">$f(s)\not \in f(T)$</span> since <span class="math-container">$f$</span> is injective, a contradiction.</p> <p>Hence <span class="math-container">$S \subseteq T$</span>.</p>
1,703,120
<p>So I have a vector <span class="math-container">$a =( 2 ,2 )$</span> and a vector <span class="math-container">$b =( 0, 1 )$</span>.<br /> As my teacher told me, <span class="math-container">$ab = (-2, -1 )$</span>.</p> <p><span class="math-container">$ab = b-a = ( 0, 1 ) - ( 2, 2 ) = ( 0-2, 1-2 ) = ( -2, -1 )$</span><br /> <span class="math-container">$ab = a-b = ( 2 ,2 ) - ( 0 ,1 ) = ( 2-0,2-1 ) = ( 2 ,1 )$</span></p> <p>Seems like it's the same, but the negative signs are gone.</p> <p>Why do I have to subtract a from b to get ab? Why not a-b or a+b?</p>
Samuel
433,229
<p>( Vector AB ) = ( Vector B ) - ( Vector A )</p> <p>Think of this logically: when you compute 10 - 2 you get 8 (a positive value). However, if you do 2 - 10 you get the same magnitude 8 but the opposite direction: -8.</p> <p>Use this to understand the vectors: since vector AB points from A to B, you want to know whether it is moving in the positive or negative direction.</p> <p>If B had a greater position value than A, then it obviously moved in the positive direction to reach that greater value, and that is why B - A (larger - smaller) has to be positive. If B had a lesser value than A, then it moved in the negative direction, and that is why the value of B - A would then be negative (smaller - larger).</p> <p>And in reality you are subtracting the vectors OB - OA (O being the point of origin), so the difference between these two vectors is a displacement.</p> <p>I hope this helps.</p> <p>:)</p>
2,187,509
<p>We're currently implementing the IBM Model 1 in my course on statistical machine translation and I'm struggling with the following application of the chain rule. When applying the model to the data, we need to compute the probabilities of different alignments given a sentence pair in the data. In other words to compute $\Pr(a\mid e,f)$, the probability of an alignment given the English and foreign sentences.</p> <p>Why do I end up with </p> <p>$$ \Pr(a\mid e,f) = \frac{\Pr( e,a \mid f )}{\Pr( e \mid f )} $$</p> <p>applying the chain rule which would be </p> <p>$$ \Pr(A,B,C) = \Pr(A)\Pr(B \mid A)\Pr (C \mid B,A) $$</p>
skyking
265,767
<p>There's an ad-hoc solution, at least, going something like this:</p> <p>Consider the function $f(x) = x^2e^x-\ln x - 1$. You have:</p> <p>$$f'(x) = (x^2+2x)e^x - 1/x$$ $$f''(x) = (x^2+4x+4)e^x + 1/x^2 = (x+2)^2e^x + 1/x^2$$</p> <p>We see immediately that $f''(x)\ge 0$, so $f$ is convex. By Taylor expansion around any point,</p> <p>$$f(a+h) = f(a) + f'(a)h + f''(\xi)h^2/2 \ge f(a) + f'(a)h,$$</p> <p>so $f$ lies above every one of its tangents. Now one only needs to pick a tangent sufficiently close to the right of the minimum whose values are positive on the relevant range, which will prove that $f(x)$ is positive.</p> <p>The rest is ugly computation. You have $f(1/2) &gt; 1/10$ and $0&lt;f'(1/2)&lt;1/10$ (you can find this by Taylor expansion, or, if you're lazy, by using a calculator). This means that</p> <p>$$f(x) = f(1/2 + x-1/2) \ge f(1/2) + f'(1/2)(x-1/2)$$</p> <p>Which gives (using $f'(1/2)(x-1/2)\ge \frac{1}{10}(x-1/2)$ when $x\le 1/2$):</p> <p>$$f(x)\begin{cases} \ge 1/10 + \frac{1}{10}(x-1/2) = \frac{2x+1}{20} &gt; 0 &amp; \text{when } 0&lt;x\le 1/2 \\ \ge 1/10 &gt; 0 &amp; \text{when } x \ge 1/2 \end{cases}$$</p>
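A numerical sanity check (not a proof, just a sketch sampling a log-spaced grid) that $f(x)=x^2e^x-\ln x-1$ stays positive on $(0,\infty)$, together with the two constants used in the tangent-line argument:

```python
import math

def f(x):
    # f(x) = x^2 e^x - ln x - 1
    return x * x * math.exp(x) - math.log(x) - 1.0

# sample from 1e-4 to 100 on a logarithmic grid
xs = [10 ** (k / 50.0) for k in range(-200, 101)]
assert all(f(x) > 0 for x in xs)

# the two constants used above:
assert f(0.5) > 0.1                                    # f(1/2) > 1/10
assert 0 < (0.25 + 1.0) * math.exp(0.5) - 2.0 < 0.1    # 0 < f'(1/2) < 1/10
```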
4,312,890
<p>I'm working my way through Grimaldi's textbook, and there's one exercise in the supplementary exercises for Chapter 4 that I don't understand how to approach.</p> <p>Here is the problem: if <span class="math-container">$n \in Z^+$</span>, how many possible values are there for <span class="math-container">$gcd(n,n+3000)$</span>?</p> <p>In case the notation <span class="math-container">$gcd(x,y)$</span> is not universal, it refers to the greatest common divisor of <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. I reviewed the teacher's solutions for an explanation for how to solve this problem, but it relies on the fact that <span class="math-container">$gcd(n,n+3000)$</span> is a divisor of <span class="math-container">$3000$</span>, which I don't understand. Any help on either solving the problem or explaining why <span class="math-container">$gcd(n,n+3000)$</span> is a common divisor of <span class="math-container">$3000$</span> this would be greatly appreciated. Thank you!</p>
Lai
732,917
<p>Denote the greatest common divisor of x and y by (x,y). Any common divisor of <span class="math-container">$n$</span> and <span class="math-container">$n+3000$</span> divides their difference <span class="math-container">$3000$</span>, and conversely any common divisor of <span class="math-container">$n$</span> and <span class="math-container">$3000$</span> divides <span class="math-container">$n+3000$</span>; hence <span class="math-container">$$(n,n+3000)=(n,3000)$$</span> Since <span class="math-container">$$ 3000=2^{3} \times 3 \times 5^{3} $$</span> there are <em><strong>at most</strong></em> <span class="math-container">$$(3+1)\times (1+1)\times (3+1)=32$$</span> possible values for <span class="math-container">$gcd(n,n+3000)$</span>. Moreover, every divisor <span class="math-container">$d$</span> of <span class="math-container">$3000$</span> is attained (take <span class="math-container">$n=d$</span>), so there are exactly <span class="math-container">$32$</span> possible values.</p>
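A brute-force sketch confirming that the attainable values of $\gcd(n,n+3000)$ are exactly the $32$ divisors of $3000$:

```python
from math import gcd

# gcd(n, n+3000) = gcd(n, 3000), and n = d realizes the divisor d.
values = {gcd(n, n + 3000) for n in range(1, 3001)}
divisors = {d for d in range(1, 3001) if 3000 % d == 0}

assert values == divisors
assert len(values) == 32        # (3+1) * (1+1) * (3+1)
```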
2,263,230
<p>Let's say I wanted to express sqrt(4i) in a + bi form. A cursory glance at WolframAlpha tells me it has not just a solution of 2e^(i*Pi/4), which I found, but also 2e^(i*(-3Pi/4)).</p> <p>Why do roots of unity exist, and why do they exist in this case? How could I find the second solution? </p>
Adam Hughes
58,831
<p>By definition $\zeta\in\Bbb C$ is a root of unity if there is $n\in\Bbb N$ so that $\zeta^n=1$. Roots of unity exist thanks to $e^{2\pi i}=1$ and the usual fact about exponentials that $(e^a)^b=e^{ab}$, so that $e^{2\pi i/n}$ is always an $n^{th}$ root of unity.</p> <p>To see how you can get them all, just note that $e^{2\pi i k/n}$ is also an $n^{th}$ root of unity for any $0\le k\le n-1$, and since the smallest $x&gt;0$ so that $e^{ix}=1$ is $x=2\pi$, these give $n$ distinct values as $k$ ranges over $0\le k\le n-1$. So this is how you find them all in general.</p>
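For the concrete case in the question, the two square roots of $4i$ can be checked numerically with the standard library's complex math module:

```python
import cmath

# The two square roots of 4i in polar form: 2 e^{i pi/4} and 2 e^{-i 3pi/4}.
r1 = 2 * cmath.exp(1j * cmath.pi / 4)
r2 = 2 * cmath.exp(-1j * 3 * cmath.pi / 4)

assert abs(r1 ** 2 - 4j) < 1e-12        # both square to 4i
assert abs(r2 ** 2 - 4j) < 1e-12
assert abs(r1 + r2) < 1e-12             # they differ by a factor of -1

# a + bi form of the principal root: sqrt(2) + sqrt(2) i
assert abs(r1 - (2 ** 0.5 + 2 ** 0.5 * 1j)) < 1e-12
```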
279,994
<p>Note that S[n] is the sum of the first n terms of the sequence a[n]. It is known that a[1]==1, and that the sequence {S[n]/a[n]} is an arithmetic sequence with common difference 1/3. Find the general term formula of the sequence a[n].</p> <p>Let b[n]==S[n]/a[n]; first work out the general term formula of b[n], and then solve for a[n]:</p> <pre><code>RSolve[{b[n + 1] == b[n] + 1/3, b[1] == 1}, b[n], n] </code></pre>
csn899
68,574
<p>A friend provided me with the answer:</p> <pre><code>ClearAll[&quot;`*&quot;] sol = First@RSolve[{b[n + 1] == b[n] + 1/3, b[1] == 1}, b[n], n] s[n_] = b[n] a[n] /. sol sola = First@ RSolve[{a[n + 1] == s[n + 1] - s[n], a[1] == 1}, a[n], n] Sum[1/a[n] /. sola, {n, 1, n0}] </code></pre>
2,643,705
<p>If $A,B,C,D$ are all matrices and $A=BCD$ (with dimensions such that all matrix multiplications are defined), how does one solve for $C$? </p> <p>In the particular context I'm working in, $B$ and $D$ are both orthogonal, and $C$ is diagonal. I'm not sure if that's necessary to solve for $C$.</p>
Siong Thye Goh
306,553
<p>Hint:</p> <ul> <li>$B^TB=I$</li> <li><p>$DD^T = I$</p></li> <li><p>You might like to premultiply and postmultiply the equation by some matrices to isolate $C$.</p></li> </ul>
idok
514,894
<p>Since $B, D$ are orthogonal,</p> <p>$B^t B = D D^t = I$</p> <p>multiply your equation with $B^t$ from the left and by $D^t$ from the right to get $$C = B^t A D^t$$</p>
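A tiny numeric sketch of the identity $C = B^tAD^t$, using $2\times2$ rotation matrices for $B$ and $D$ (rotations are orthogonal) and a diagonal $C$; the particular angles and diagonal entries are arbitrary test values:

```python
import math

def matmul(X, Y):
    # naive matrix product for lists of rows
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

def rot(t):
    # 2x2 rotation matrix, orthogonal for every angle t
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

B, D = rot(0.7), rot(-1.2)
C = [[3.0, 0.0], [0.0, 0.5]]
A = matmul(matmul(B, C), D)          # A = B C D

C_rec = matmul(matmul(transpose(B), A), transpose(D))   # C = B^t A D^t
assert all(abs(C_rec[i][j] - C[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```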
3,353,826
<p>All the vertices of quadrilateral <span class="math-container">$ABCD$</span> are at the circumference of a circle and its diagonals intersect at point <span class="math-container">$O$</span>. If <span class="math-container">$∠CAB = 40°$</span> and <span class="math-container">$∠DBC = 70°$</span>, <span class="math-container">$AB = BC$</span>, then find <span class="math-container">$∠DCO$</span>.</p>
polettix
264,102
<p>If you are fine going always in one direction for halfway values, you can resort to the programming trick of using <span class="math-container">$\lfloor x + \frac{1}{2} \rfloor$</span> (halfways towards <span class="math-container">$+\infty$</span>) or <span class="math-container">$\lceil x - \frac{1}{2} \rceil$</span> (halfways towards <span class="math-container">$-\infty$</span>).</p>
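In code, the two tricks look like this (a sketch; note that binary floating point can make values that merely print as `.5`, such as `0.49999999999999994`, behave unexpectedly):

```python
import math

def round_half_up(x):
    # halfway values go toward +infinity: floor(x + 1/2)
    return math.floor(x + 0.5)

def round_half_down(x):
    # halfway values go toward -infinity: ceil(x - 1/2)
    return math.ceil(x - 0.5)

assert round_half_up(2.5) == 3 and round_half_down(2.5) == 2
assert round_half_up(-2.5) == -2 and round_half_down(-2.5) == -3
assert round_half_up(2.4) == 2 and round_half_down(2.6) == 3  # non-halfway agree
```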
693,640
<p>Assume $f(x_{1},x_{2},\cdots,x_{n})$ is a real polynomial of degree two in $n(n\ge 2)$ variables. Let $S$ be the set of minimum and maximum points of $f$. In other words: $$S=\{(b_{1},b_{2},\cdots,b_{n})\in R^n| f(x_{1},x_{2},\cdots,x_{n})\le f(b_{1},b_{2},\cdots,b_{n}),\forall (x_{1},x_{2},\cdots,x_{n})\in R^n\}\bigcup \{(b_{1},b_{2},\cdots,b_{n})\in R^n| f(x_{1},x_{2},\cdots,x_{n})\ge f(b_{1},b_{2},\cdots,b_{n}),\forall (x_{1},x_{2},\cdots,x_{n})\in R^n\}$$</p> <p>Assume $f(x_{1},x_{2},\cdots,x_{n})$ is a symmetric polynomial in $x_{1},x_{2},\cdots,x_{n}$, and that $S$ is a finite non-empty set.</p> <p>Show that there exists $b\in R$ such that $$S=\{(b,b,\cdots,b)\}$$</p> <p>My idea: the polynomial $$f(x_{1},x_{2},\cdots,x_{n})=x^2_{1}+x^2_{2}+\cdots+x^2_{n}$$ is symmetric, and for it indeed $$S=\{(0,0,\cdots,0)\}$$</p> <p>But I can't prove the general statement. Thank you.</p>
Christian Blatter
1,303
<p>Any real symmetric polynomial $f$ of degree $\leq2$ in the variables $x_1$, $\ldots$, $x_n$ can be written in the form $$f(x_1,\ldots,x_n)=a_0+ a_1 \sigma_1+a_2\sigma_1^2 + a_3\sigma_2\ ,\tag{1}$$ where $\sigma_1$ and $\sigma_2$ denote the elementary symmetric polynomials of degree $1$ and $2$ in the $x_i$. Since $\sigma_1^2=(x_1^2+\ldots+x_n^2)+2\sigma_2$ we can replace $(1)$ by the more convenient form $$f(x_1,\ldots,x_n)=c_0+ c_1 \sigma_1+c_2\sigma_1^2 + c (x_1^2+\ldots+x_n^2)\ .\tag{2}$$ When $c=c_2=0$ then $f$ is constant or linear, and $S={\mathbb R}^n$ or $=\emptyset$, in accordance with the claim.</p> <p>When $c=0$ and $c_2\ne 0$ then $f$ depends only on $\sigma_1$, and assumes a minimum or a maximum on some hyperplane $\sigma_1={\rm const.}$. This hyperplane certainly contains a point of the form $(b,b,\ldots,b)$.</p> <p>Finally assume $c\ne0$, and let $p\in S$. Then necessarily $${\partial f\over\partial x_i}=c_1+2c_2\sigma_1+2c x_i=0\qquad(1\leq i\leq n)$$ at $p$, which implies that all $p_i$ are equal.</p>
3,977,281
<p>Is it possible for a function to not have a maximum or a minimum? (So that I can't find the decreasing and increasing intervals.) If so, how do we show it mathematically?</p> <p>I was practicing and found these two functions.</p> <p><span class="math-container">$a. f(x) = x+\sqrt{x^2-1} $</span> and <span class="math-container">$b. f(x) = \frac{x^2}{x^2+4} $</span></p> <p><span class="math-container">$a. f'(x) = 1 + \frac{x}{\sqrt{x^2-1}} = 0$</span><br /> However, I can't find the extrema.<br /> I tried to multiply with <span class="math-container">$\frac{\sqrt{x^2-1}}{\sqrt{x^2-1}}$</span> and obtain <span class="math-container">$x\sqrt{x^2-1}=-x^2+1$</span><br /> But I still can't find the extrema. If I only pay attention to <span class="math-container">$\frac{1}{\sqrt{x^2-1}}$</span>, I obtain <span class="math-container">$x=-1$</span> or <span class="math-container">$x=1$</span>.<br /> Thus, <span class="math-container">$f(-1)=-1$</span> would be the minimum value and <span class="math-container">$f(1)=1$</span> the maximum value.<br /> But I think this is wrong.</p> <p><span class="math-container">$b. f'(x) = \frac{2x(x^2+4)-x^2(2x)}{(x^2+4)^2}=0$</span><br /> <span class="math-container">$x=0$</span> <br /> <span class="math-container">$---0+++$</span> <br /> So I know that <span class="math-container">$0$</span> will give us the minimum value. But what about the maximum?<br /> <span class="math-container">$f''(x)=\frac{8(-3x^2+4)}{(x^2+4)^3}$</span> <br /> <span class="math-container">$f''(0)=\frac{1}{2} &gt;0$</span> So <span class="math-container">$0$</span> really gives the minimum.<br /> So, the function decreases when x&lt;0 and increases when x&gt;0.</p>
José Carlos Santos
446,262
<p><em>a.</em> As you wrote, <span class="math-container">$f'(x)=1+\frac x{\sqrt{x^2-1}}$</span>. It turns out that <span class="math-container">$f'(x)&lt;0$</span> if <span class="math-container">$x&lt;-1$</span> and that <span class="math-container">$f'(x)&gt;0$</span> if <span class="math-container">$x&gt;1$</span>. So, <span class="math-container">$f$</span> is strictly decreasing on <span class="math-container">$(-\infty,-1]$</span> and strictly increasing on <span class="math-container">$[1,\infty)$</span>. So, since <span class="math-container">$f(-1)=-1$</span> and since <span class="math-container">$f(1)=1$</span>, the minimum of <span class="math-container">$f$</span> is <span class="math-container">$-1$</span> and it has no maximum.</p> <p><span class="math-container">$b.$</span> Note that<span class="math-container">$$f(x)=1-\frac4{x^2+4}.$$</span>So, since <span class="math-container">$x^2+4$</span> is strictly increasing on <span class="math-container">$[0,\infty)$</span> and strictly decreasing on <span class="math-container">$(-\infty,0]$</span>, <span class="math-container">$f$</span> is strictly decreasing on<span class="math-container">$[0,\infty)$</span> and strictly increasing on <span class="math-container">$(-\infty,0]$</span>. Therefore, its minimum is <span class="math-container">$0(=f(0))$</span> and it has no maximum.</p>
3,423,674
<p>According to my calculus professor and MIT open coursework, taking the derivative of (x^2+4)^-1 is an application of the chain rule, not the power rule. The answer to the question is -(x^2+4)^-2, which makes sense to me, but I just don't understand why this is considered an application of the chain rule rather than the power rule, since the power rule says that d/dx(x^n) = nx^(n-1).</p> <p>Here is a link to the MIT coursework I am talking about: <a href="https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/1.-differentiation/part-a-definition-and-basic-rules/session-11-chain-rule/MIT18_01SCF10_ex11sol.pdf" rel="nofollow noreferrer">https://ocw.mit.edu/courses/mathematics/18-01sc-single-variable-calculus-fall-2010/1.-differentiation/part-a-definition-and-basic-rules/session-11-chain-rule/MIT18_01SCF10_ex11sol.pdf</a></p>
herb steinberg
501,262
<p>The derivative is <span class="math-container">$-2x(x^2+4)^{-2}$</span>, not <span class="math-container">$-(x^2+4)^{-2}$</span>: you missed the factor <span class="math-container">$2x$</span>, the derivative of the inner function <span class="math-container">$x^2+4$</span>. The power rule <span class="math-container">$\frac{d}{dx}x^n=nx^{n-1}$</span> applies when the base is <span class="math-container">$x$</span> itself; here the base is a function of <span class="math-container">$x$</span>, and supplying that inner factor is exactly what the chain rule does.</p>
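A quick central-difference sketch confirming that $\frac{d}{dx}(x^2+4)^{-1}=-2x(x^2+4)^{-2}$, i.e. that the chain-rule factor $-2x$ is really there:

```python
def f(x):
    # f(x) = (x^2 + 4)^(-1)
    return (x * x + 4) ** -1

def fprime(x):
    # chain rule: -(x^2 + 4)^(-2) * 2x
    return -2 * x * (x * x + 4) ** -2

h = 1e-6
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference
    assert abs(numeric - fprime(x)) < 1e-8
```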
24,810
<p>The title says it all. Is there a way to take a poll on Maths Stack Exchange? Is a poll an acceptable question?</p>
Asaf Karagila
622
<p>You can post a question, with several answers asking people to vote accordingly.</p> <p>This is relatively acceptable on meta (although a discussion should be had first). If I would see something like that on the main site, I'd immediately downvote, close and then delete (and maybe flag to get the process done even faster). And I'm guessing I'm not the only one.</p>
4,090,416
<blockquote> <p>Suppose <span class="math-container">$f(x)$</span> be bounded and differentiable over <span class="math-container">$\mathbb R$</span>, and <span class="math-container">$|f'(x)|&lt;1$</span> for any <span class="math-container">$x$</span>. Prove there exists <span class="math-container">$M&lt;1$</span> such that <span class="math-container">$|f(x)-f(0)|\le M|x|$</span> holding for every <span class="math-container">$x \in \mathbb{R}$</span>.</p> </blockquote> <p>Probably, we may consider applying MVT, say <span class="math-container">$$\left|\frac{f(x)-f(0)}{x-0}\right|=|f'(\xi)|&lt;1,$$</span> but which can only imply <span class="math-container">$M\le 1$</span>, not <span class="math-container">$M&lt;1$</span> we wanted. How to improve the inequality?</p>
Umberto P.
67,536
<p>You can assume without loss of generality that <span class="math-container">$f(0) = 0$</span>.</p> <p>Suppose to the contrary that no such <span class="math-container">$M$</span> exists. Then for each <span class="math-container">$k \ge 2$</span> there exists a point <span class="math-container">$x_k$</span> satisfying <span class="math-container">$$|f(x_k)| &gt; \left( 1 - \frac 1k \right) |x_k|.$$</span> Since <span class="math-container">$f$</span> is bounded there is a constant <span class="math-container">$C$</span> with <span class="math-container">$|f(x_k)| \le C$</span> for all <span class="math-container">$k$</span>, implying that <span class="math-container">$|x_k| \le \dfrac{kC}{k-1} \le 2C$</span> for all <span class="math-container">$k$</span>. Thus <span class="math-container">$\{x_k\}$</span> is bounded and has a convergent subsequence <span class="math-container">$\{x_{k_j}\}$</span>. Denote the limit of this subsequence by <span class="math-container">$x$</span>. Since <span class="math-container">$f(x_{k_j}) \to f(x)$</span> and <span class="math-container">$$|f(x_{k_j})| &gt; \left( 1 - \frac 1{k_j} \right) |x_{k_j}|$$</span> you may let <span class="math-container">$j \to \infty$</span> to discover that <span class="math-container">$|f(x)| \ge |x|$</span>. If <span class="math-container">$x \ne 0$</span>, the mean-value theorem gives a point <span class="math-container">$\xi$</span> in between <span class="math-container">$0$</span> and <span class="math-container">$x$</span> with <span class="math-container">$|f'(\xi)| \ge 1$</span>, contrary to hypothesis. (If <span class="math-container">$x = 0$</span>, note that each <span class="math-container">$x_{k_j} \ne 0$</span>, so <span class="math-container">$\dfrac{|f(x_{k_j})|}{|x_{k_j}|} &gt; 1 - \dfrac 1{k_j}$</span>, and letting <span class="math-container">$j \to \infty$</span> gives <span class="math-container">$|f'(0)| \ge 1$</span>, again contrary to hypothesis.)</p>
1,027,330
<p>How does one figure out whether this series: $$\sum_{n=3}^{\infty}(-1)^{n-1}\frac{1}{\ln\ln n}$$ converges or diverges? And, what is the general approach behind solving for convergence/divergence in a series that seems to "oscillate" (thanks to the -1 in this case)? </p> <p>I have so far tried to split the function into two limits, but I am more or less stuck there. </p>
Emanuele Paolini
59,304
<p>Check that you can apply <a href="http://en.wikipedia.org/wiki/Alternating_series_test" rel="nofollow">Leibniz alternating series test</a></p>
user860374
137,485
<p>Let $b_n = \frac{1}{\ln{(\ln{(n)})}}$</p> <p>Since $\ln(n)$ is increasing, we know $\ln{(\ln{(n)})}$ also increases (and $\ln{(\ln{(n)})} &gt; 0$ for $n \ge 3$), thus we have that:</p> <p>$b_n = \frac{1}{\ln{(\ln{(n)})}}$ is monotonically decreasing on $[3,\infty)$ and also $$\lim_{n\to \infty}b_n= \lim_{n\to \infty}\frac{1}{\ln{(\ln{(n)})}} = 0.$$ Thus, from Leibniz's Test for Alternating Series, we know $\displaystyle \sum_{n=3}^\infty \frac{(-1)^{n-1}}{\ln{(\ln{(n)})}}$ converges.</p>
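The alternating-series behaviour is visible numerically: partial sums taken after an odd number of terms decrease, those after an even number increase, and the two families bracket the limit. A sketch (convergence here is extremely slow, so this only illustrates the bracketing, not the value of the sum):

```python
import math

def partial_sums(N):
    # partial sums of sum_{n=3}^{N} (-1)^(n-1) / ln(ln n)
    s, out = 0.0, []
    for n in range(3, N + 1):
        s += (-1) ** (n - 1) / math.log(math.log(n))
        out.append(s)
    return out

ps = partial_sums(2000)
odd = ps[0::2]    # sums of an odd number of terms (1, 3, 5, ... terms)
even = ps[1::2]   # sums of an even number of terms

assert all(a >= b for a, b in zip(odd, odd[1:]))    # decreasing
assert all(a <= b for a, b in zip(even, even[1:]))  # increasing
assert max(even) <= min(odd)                        # they bracket the limit
```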
2,563,303
<blockquote> <p><strong><em>Question:</em></strong> If $z_0$ and $z_1$ are real irrational numbers I write $$q=z_0+z_1\sqrt{-1}$$ Surely $q$ is just a complex number. Under what condition will the <a href="https://en.wikipedia.org/wiki/Absolute_value#Complex_numbers" rel="nofollow noreferrer">number</a> $|q|$ be an $\color{blue}{integer}$ ?</p> </blockquote> <p>I have that $$|q| = \sqrt{z_0^2+z_1^2}$$ Consequently the only exceptions I can think of are the cases where $z_0^2+z_1^2=x^2$ for some positive integer $x$. For example $z_0=z_1=\sqrt{2}$; which is irrational. Then $$|q| = \sqrt{(\sqrt{2})^2+(\sqrt{2})^2}=\sqrt{2+2}=\sqrt{4}=2$$ I denote the set of exceptional pairs $(z_0,z_1)$ by $\mathcal{E}$. I have shown that $(\sqrt{2},\sqrt{2}) \in \mathcal{E}.$ What else is in $\mathcal{E}?$</p>
Fred
380,717
<p>If $(A_n)$ is a positive sequence with $\lim_{n \to \infty}\frac{A_{n+1}}{A_n}= 0$, then there is $N \in \mathbb N$ such that $\frac{A_{n+1}}{A_n} &lt;1$ for $n \ge N$. This gives</p> <p>$A_{n+1}&lt;A_N$ for all $n\ge N$.</p> <p>Therefore $\lim_{n \to \infty}A_n = \infty$ cannot occur.</p>
1,004,303
<p>Let $S=\{(x,0)\} \cup\{(x,1/x):x&gt;0\}$. Prove that $S$ is not a connected space (the topology on $S$ is the subspace topology).</p> <p>My thoughts: in the first set $x$ is any real number, and I can't see that this set is open in $S$. I can't find a suitable intersection anyhow.</p>
copper.hat
27,978
<p>Let $U = \{ (x,y) \mid xy &lt; \tfrac12\}$ and $V = \{ (x,y) \mid xy &gt; \tfrac12\}$; the separating curve is $y = \frac{1}{2x}$. Then $U,V$ are open (preimages of open rays under the continuous map $(x,y) \mapsto xy$) and disjoint. The axis part of $S$ lies in $U$ (there $xy = 0$) and the hyperbola part lies in $V$ (there $xy = 1$), so $S \cap U \neq \emptyset$, $S \cap V \neq \emptyset$ and $S \subset U \cup V$. Hence $S$ is not connected.</p>
1,223,209
<p>As part of another problem I am working on, I have the following product to work out. </p> <p>$\begin{bmatrix} 1 &amp; 2 &amp; 3 \end{bmatrix} \cdot h $</p> <p>where $h$ is a scalar. My question is, if I commute the row vector and the scalar then I can just multiply it through. If I think of the $h$ as a $1 \times 1$ matrix however, it seems that this isn't allowed. </p> <p>I know it sounds simple, but I want to understand the subtlety involved. </p>
Mick A
153,109
<p>Your suspicion is right in that you didn't take the conditional probabilities into account. The numbers of new/old balls in each draw of $3$ balls <em>do</em> have hypergeometric conditional distributions. As you seem to have done, I'll define "success" in this distribution as getting an old ball. If $X_k$ is the number of old balls in the $k^{th}$ draw, then the probability we require is</p> <p>\begin{eqnarray*} P(X_3=1) &amp;=&amp; \sum_{i=0}^{3}{P(X_2=i) P(X_3=1\mid X_2=i)}\qquad\qquad\text{($i=$ #old balls in $2^{nd}$ draw)} \\ &amp;&amp; \\ &amp;=&amp; \sum_{i=0}^{3}{\dfrac{\binom{3}{i} \binom{9}{3-i}}{\binom{12}{3}} \dfrac{\binom{6-i}{1} \binom{6+i}{2}}{\binom{12}{3}}} \\ &amp;&amp; \\ &amp;=&amp; \dfrac{1377}{3025} \approx 0.455. \end{eqnarray*}</p>
1,817,367
<blockquote> <p>Prove that $\forall k = m^2 + 1. \space m \in \mathbb{Z}^+$, if $k$ is divisible by any prime then that prime is congruent to $1, 2 \pmod 4$.</p> </blockquote> <p>I am unable to realize why it can't have $2$ prime factors congruent to $3 \pmod 4$. Can anyone please help me proceed?</p> <p>Thanks.</p>
Ege Erdil
326,053
<p>Another solution: Suppose that $ p $ divides $ m^2 + 1 = (m-i)(m+i) $. Since primes that are 3 mod 4 are inert in $ \mathbb{Z}[i] $, and since $ p $ divides neither of the factors on the right, but divides $ m^2 + 1 $, it cannot be prime in $ \mathbb{Z}[i] $, which means it is 1 or 2 modulo 4. (This is because $ \mathbb{Z}[i] $ is a principal ideal domain.)</p>
awllower
6,792
<p>Another approach that looks similar to the one by @Starfall:<br> First it is easy to show that $p\not\equiv0\pmod4,$ as every number divisible by $4$ is not a prime.<br> Then, as stated in <a href="https://math.stackexchange.com/questions/38431/dedekinds-theorem-on-the-factorisation-of-rational-primes">this question</a>, that $p$ divides $m^2+1$ for some $m\in\mathbb Z$ implies that the ideal $(p)$ becomes a product of two prime ideals $\mathfrak p_1,\mathfrak p_2.$ Now we have $$(p^2)=N(p)=N(\mathfrak p_1)\cdot N(\mathfrak p_2)$$ and neither of $N(\mathfrak p_1)$ and $N(\mathfrak p_2)$ can be $(1),$ as they do not contain units. Thus $N(\mathfrak p_1)=(p).$<br> This shows that $p=x^2+y^2$ for some $x,y\in\mathbb Z.$ This means that $p\equiv1,2\pmod4,$ as $x^2\equiv0,1\pmod4$ for every integer $x,$ and $p\not\equiv0\pmod4.$ </p> <p>Hope this helps.</p>
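An empirical check of the statement (every prime factor of $m^2+1$ is $2$ or congruent to $1 \bmod 4$), using plain trial division:

```python
def prime_factors(n):
    # set of distinct prime factors by trial division
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

for m in range(1, 300):
    for p in prime_factors(m * m + 1):
        assert p % 4 in (1, 2)   # p = 2, or p = 1 mod 4; never 3 mod 4
```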
1,305,935
<p>Let $f(n)$ be a non-negative real valued function defined for each natural number $n$.</p> <p>If $f$ is convex and $\lim_{n\to\infty}f(n)$ exists as a finite number, then can we conclude that $f$ is non-increasing?</p>
m.Just
1,130,811
<p>The margin equals the shortest distance between the points of the two hyperplanes. Let <span class="math-container">$\mathbf{x_1}$</span> be a point of one hyperplane, and <span class="math-container">$\mathbf{x}_2$</span> be a point of the other hyperplane. We want to find the minimal value of <span class="math-container">$\lVert \mathbf{x}_1 - \mathbf{x}_2 \rVert$</span>. Since <span class="math-container">\begin{align} \mathbf{w}\cdot\mathbf{x}_1 - b &amp;= 1,\\ \mathbf{w}\cdot\mathbf{x}_2 - b &amp;= -1, \end{align}</span> we have <span class="math-container">$$\mathbf{w}\cdot(\mathbf{x}_1 - \mathbf{x}_2) = 2.$$</span> By the Cauchy-Schwarz inequality, we have <span class="math-container">$$\lVert \mathbf{w} \rVert \lVert \mathbf{x}_1 - \mathbf{x}_2 \rVert \geq 2,$$</span> and therefore <span class="math-container">$$\lVert \mathbf{x}_1 - \mathbf{x}_2 \rVert \geq \frac{2}{\lVert \mathbf{w} \rVert },$$</span> where equality holds when <span class="math-container">$\mathbf{w}$</span> and <span class="math-container">$\mathbf{x}_1-\mathbf{x}_2$</span> are linearly dependent (which is clearly always possible).</p>
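A numeric sanity check of the margin formula $2/\lVert\mathbf{w}\rVert$; the particular $\mathbf{w}$ and $b$ below are made up for illustration. Points on each hyperplane are parametrized as a base offset along $\mathbf{w}$ plus a multiple of the perpendicular direction:

```python
import math

w, b = (3.0, 4.0), 1.0
norm_w = math.hypot(*w)             # ||w|| = 5

def point_on(level, t):
    """A point x with w.x - b = level: offset along w, plus t along w-perp."""
    base = ((b + level) * w[0] / norm_w ** 2,
            (b + level) * w[1] / norm_w ** 2)
    perp = (-w[1] / norm_w, w[0] / norm_w)
    return (base[0] + t * perp[0], base[1] + t * perp[1])

ts = [k / 10.0 for k in range(-50, 51)]
dmin = min(math.dist(point_on(+1, t1), point_on(-1, t2))
           for t1 in ts for t2 in ts)

assert abs(dmin - 2 / norm_w) < 1e-9    # margin = 2 / ||w|| = 0.4
```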
3,573,575
<p>I'm trying to find the eigenvalues of a matrix <span class="math-container">$$A=\begin{bmatrix}2/3 &amp; -1/4 &amp; -1/4 \\ -1/4 &amp; 2/3 &amp; -1/4 \\ -1/4 &amp; -1/4 &amp; 2/3\end{bmatrix}$$</span></p> <p>The eigenvalues of this matrix, are the roots <span class="math-container">$\lambda$</span> of the equation <span class="math-container">$det(A-\lambda I)=0$</span>. Expanding this determinant with Sarrus's Rule gives a polynomial of a third degree, the solutions can apparently be estimated by iterative methods. Before I start exploring those avenues, however, I'd like to know if there is a more practical method to compute the eigenvalues of this matrix.</p>
user757704
757,704
<p>It's equal to <span class="math-container">$- \frac{1}{4}J + \frac{11}{12} I$</span>, where <span class="math-container">$I$</span> is the identity matrix and <span class="math-container">$J$</span> is the matrix of all <span class="math-container">$1$</span>s. Note that <span class="math-container">$J$</span> has two eigenvalues: <span class="math-container">$0$</span> with multiplicity <span class="math-container">$2$</span> (since it has a rank of <span class="math-container">$1$</span>) and <span class="math-container">$3$</span> with multiplicity <span class="math-container">$1$</span>, with eigenvector <span class="math-container">$(1, 1, 1)$</span>.</p> <p>So, the eigenvalues of <span class="math-container">$- \frac{1}{4}J$</span> are <span class="math-container">$-3/4, 0, 0$</span>, and hence <span class="math-container">$- \frac{1}{4}J + \frac{11}{12} I$</span> has eigenvalues <span class="math-container">$1/6, 11/12, 11/12$</span>.</p>
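The claimed eigenpairs can be checked exactly with rational arithmetic: $(1,1,1)$ is the eigenvector of $J$ for eigenvalue $3$, and any vector with zero coordinate sum is killed by $J$:

```python
from fractions import Fraction as F

A = [[F(2, 3), F(-1, 4), F(-1, 4)],
     [F(-1, 4), F(2, 3), F(-1, 4)],
     [F(-1, 4), F(-1, 4), F(2, 3)]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# eigenvalue 11/12 - 3/4 = 1/6 on the all-ones vector:
assert matvec(A, [1, 1, 1]) == [F(1, 6)] * 3

# eigenvalue 11/12 on the zero-sum vectors:
for v in ([1, -1, 0], [0, 1, -1]):
    assert matvec(A, v) == [F(11, 12) * x for x in v]
```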
2,060,891
<p>Number of solutions of $a^3=e$ in $C_9$</p> <p>The solution goes: $a^3=e$ if and only if $a$ lies in the unique subgroup of $C_9$ of order $3$ thus there are $3$ solutions.</p> <p>I'm questioning why? </p>
Sarvesh Ravichandran Iyer
316,409
<p>It is a misprint. The answer is $\frac 16$ because the cases are as you have given i.e. $(4,6),(5,5),(6,4),(5,6),(6,5),(6,6)$, and these are six cases out of $36$, giving the probability of $ \frac 16$.</p>
4,326,547
<p>I've solved linear ODEs before. This however is something completely new to me. I want to solve it without using approximations or anything.</p> <p><span class="math-container">$s''( t) s( t) =( s'( t))^{2} +B( s( t))^{2} s'( t) -g\cdot s( t) s'( t)$</span></p> <p>These are the equations I started with</p> <p><span class="math-container">$ \begin{array}{l} s'( t) =-Bs( t) i( t)\\ r'( t) =g\cdot i( t)\\ i'( t) =i( t)( Bs( t) -g) \end{array}$</span></p> <p>The reason I'm solving this particular equation is because I want to solve for all of the functions <span class="math-container">$(i(t),r(t),s(t))$</span> from above.</p> <p>I just figured <span class="math-container">$s(t)$</span> might be the place to start. I really don't have any clue where to start here. Any help would be awesome! Thanks!</p> <p>--edit-- I made a mistake above. In order to fix it I changed <span class="math-container">$i'( t) =i( t)( Bs( t) -1)$</span> to this <span class="math-container">$i'( t) =i( t)( Bs( t) -g)$</span>. My bad...</p> <p>---Edit---</p> <p>I might have a step towards the answer.</p> <p><span class="math-container">$ \begin{array}{l} v( s) =s'( t)\\ \Longrightarrow \\ s''( t) =\frac{dv( s)}{dt} =\frac{dv}{ds}\frac{ds}{dt} =v'( s) \cdot \frac{ds}{dt} =v'( s) \cdot v( s)\\ \Longrightarrow \\ v'( s) \cdot v( s) \cdot s=v( s)^{2} +Bs^{2} v( s) -g\cdot s\cdot v( s)\\ \Longrightarrow \\ v'( s) \cdot s=v( s) +Bs^{2} -g\cdot s \end{array}$</span></p> <p>Wolfram Alpha says that the solution to this differential equation is</p> <p><span class="math-container">$v(s)=Bs^2+c_1s-gs\ln(s)$</span></p> <p>So that means that</p> <p><span class="math-container">$s'(t)=Bs(t)^2+c_1s(t)-gs(t)\ln(s(t))$</span></p> <p>This is certainly better than before</p>
Cesareo
397,348
<p>Making <span class="math-container">$y = \int s dt$</span> we have</p> <p><span class="math-container">$$ \frac{y'''}{y''}=\frac{y''}{y'}+B y'-g $$</span></p> <p>then</p> <p><span class="math-container">$$ \ln\left(\frac{y''}{y'}\right)=B y - g t+C $$</span></p> <p>or</p> <p><span class="math-container">$$ y''=y'C_1e^{By-gt} $$</span> This last ODE has an implicit closed solution under Wolfram.</p>
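The closed form $v(s)=Bs^2+c_1s-gs\ln s$ claimed in the question can be checked numerically against the reduced equation $s\,v'(s)=v(s)+Bs^2-gs$; a sketch, with $B$, $c_1$, $g$ as arbitrary test values:

```python
import math

B, c1, g = 0.3, -1.1, 9.8

def v(s):
    # v(s) = B s^2 + c1 s - g s ln s
    return B * s * s + c1 * s - g * s * math.log(s)

def vprime(s):
    # v'(s) = 2 B s + c1 - g (ln s + 1)
    return 2 * B * s + c1 - g * (math.log(s) + 1)

for s in [0.1, 0.5, 1.0, 2.0, 7.3]:
    lhs = s * vprime(s)
    rhs = v(s) + B * s * s - g * s
    assert abs(lhs - rhs) < 1e-9    # s v'(s) = v(s) + B s^2 - g s
```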
3,853,509
<blockquote> <p>Prove that <span class="math-container">$$\sum_{cyc}\frac{a^2}{a+2b^2}\ge 1$$</span> holds for all positive reals <span class="math-container">$a,b,c$</span> when <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span> or <span class="math-container">$ab+bc+ca=3$</span>.</p> </blockquote> <hr /> <p><strong>Background.</strong> Taking <span class="math-container">$\sqrt{a}+\sqrt{b}+\sqrt{c}=3$</span>: this was left as an exercise to the reader in the book 'Secrets in Inequalities'. It comes under the section 'Cauchy reverse technique', i.e. the sum is rewritten as <span class="math-container">$$\sum_{cyc} \frac{a^2}{a+2b^2}=\sum_{cyc}\left(a- \frac{2ab^2}{a+2b^2}\right)\ge \sum_{cyc} a-\frac{2}{3}\sum_{cyc}{(ab)}^{2/3},$$</span> which is true by AM-GM (<span class="math-container">$a+b^2+b^2\ge 3{(a b^4)}^{1/3}$</span>).</p> <p>By the QM-AM inequality, <span class="math-container">$$\sum_{cyc}a\ge \frac{{ \left(\sum \sqrt{a} \right)}^2}{3}=3.$$</span></p> <p>So we are left to prove that <span class="math-container">$$\sum_{cyc}{(ab)}^{2/3}\le 3.$$</span> But I am not able to prove this. Even the case when <span class="math-container">$ab+bc+ca=3$</span> seems difficult to me.</p> <p>Please note I am looking for a solution using this Cauchy reverse technique and AM-GM only.</p>
Fred
380,717
<p>Hint: <span class="math-container">$f(x)= \frac{1}{2} \cdot \frac{1}{1-(-\frac{3}{2}x^2)}.$</span></p> <p>Now, geometric series!</p>
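The original function is not shown here, but the rewritten form suggests $f(x)=\frac{1}{2+3x^2}$ (an assumption); then the geometric series gives $f(x)=\sum_{n\ge0}\frac{1}{2}\left(-\frac{3}{2}\right)^n x^{2n}$ for $|x|<\sqrt{2/3}$. A quick numeric sketch under that assumption:

```python
def f(x):
    # assumed original function: 1 / (2 + 3 x^2)
    return 1.0 / (2.0 + 3.0 * x * x)

def series(x, terms=60):
    # partial sum of (1/2) * sum (-3 x^2 / 2)^n, valid for |x| < sqrt(2/3)
    return sum(0.5 * (-1.5 * x * x) ** n for n in range(terms))

for x in [0.0, 0.2, -0.4, 0.6]:
    assert abs(f(x) - series(x)) < 1e-10
```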
1,464,143
<p>$\lim_{n \to \infty} n\ln\left(1+\frac{1}{n}\right)$: using L'Hôpital's rule, show that this is $1$. Can you even do that here, since there isn't a division? $n$ will obviously tend to infinity and $\ln\left(1+\frac{1}{n}\right)$ will tend to $0$, so this isn't one of the standard indeterminate quotient forms.</p> <p>So I set $u=n $</p> <p>$du=1$</p> <p>$v= \ln\left(1+\frac{1}{n}\right)$</p> <p>$dv= -\frac{1}{n^2+n}$</p> <p>Hence $\ln\left(1+\frac{1}{n}\right) - \frac{1}{n+1}$, in which both of these tend to $0$, so I am completely lost.</p>
Mark Viola
218,419
<p>$$\lim_{n\to \infty}n\log\left(1+\frac1n\right)=\lim_{n\to \infty}\frac{\log\left(1+\frac1n\right)}{1/n}=\lim_{n\to \infty}\frac{\left(\frac{1}{1+1/n}\right)(-1/n^2)}{-1/n^2}=\lim_{n\to \infty}\frac{1}{1+1/n}=1$$</p>
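A quick numeric sketch of the limit; since $\ln(1+x)=x-\frac{x^2}{2}+\dots$, the error behaves like $\frac{1}{2n}$:

```python
import math

for n in [10, 1000, 10 ** 6]:
    val = n * math.log1p(1 / n)     # log1p(x) = ln(1 + x), accurate for small x
    assert abs(val - 1) < 1 / n     # error ~ 1/(2n)
```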
302
<p>I know that the Fibonacci numbers converge to a ratio of .618, and that this ratio is found all throughout nature, etc. I suppose the best way to ask my question is: where was this .618 value first found? And what is the...significance?</p>
Marko Amnell
2,594
<p>The book <em>A Mathematical History of the Golden Number</em> by Roger Herz-Fischler is an exhaustive study of nearly all references to the golden ratio, from the earliest times, and <a href="http://ebookee.org/A-Mathematical-History-of-the-Golden-Number_887455.html" rel="nofollow">is available as a free e-book</a>. As has been pointed out by others, the golden ratio is older than the Fibonacci numbers. <a href="http://tinyurl.com/2vapgm2" rel="nofollow">On page 53</a>, Herz-Fischler notes that a pentagram appears as "a pot mark on a jar" dating from 3100 BC in Egypt.</p>
34,671
<p>Going through some old papers, I came up with a simple-looking problem I thought about 5 years ago or so. </p> <p>MO wants motivation ... Associated to a probability measure on a metric space is something called "quantization dimension" ... this involves defining a function $D \colon (0,\infty) \to (0,\infty)$. Exactly how is not the point here, but see for example </p> <p><a href="http://www.ams.org/mathscinet-getitem?mr=1877974" rel="nofollow">http://www.ams.org/mathscinet-getitem?mr=1877974</a></p> <p>Lindsay, L. J. and Mauldin, R. D. Quantization dimension for conformal iterated function systems. Nonlinearity 15 (2002), no. 1, 189--199. </p> <p>It was observed numerically that $D$ is increasing and concave, but proof was lacking. When we do this for the simplest possible self-similar measure (similarities with ratios $s_1, s_2$ and probabilities $p_1, p_2$) I still did not solve it, even though it looks like an elementary calculus exercise. Here it is.</p> <blockquote> <p>Let $s_1, s_2, p_1, p_2$ be positive real numbers such that $s_1 &lt; 1$, $s_2 &lt; 1$, $p_1+p_2=1$. For $r&gt;0$ define $D = D(r)$ implicitly by $$ \left(p_1 s_1^r\right)^{D/(r+D)} + \left(p_2 s_2^r\right)^{D/(r+D)} = 1. $$ Then:<br> Does it follow that $D'(r) \ge 0$? [YES]<br> Does it follow that $D''(r) \le 0$? [OPEN]</p> </blockquote> <p>At least it was open back then!</p>
Gerald Edgar
454
<p>My response to the answer by fedja, Jun 5, 2011. This should be a comment, but won't fit. </p> <p>It didn't work. Taking values of $A,B,\epsilon$ that satisfy your conditions, then tracing back through using 20-digit arithmetic, I get these values: $s_1=0.34018988053902955186$, $s_2=0.98903555253485545775$, $p_1=0.0000000004309513037$, $p_2=0.99999999956904869628$, $b = 0.050000002052149145975$. And this does what you wanted: function $(p_1 s_1^r)^{(1+b r)/(1+b r+r)}+ (p_2 s_2^r)^{(1+b r)/(1+b r+r)}$ looks like this:<br> <img src="https://i.stack.imgur.com/hLGGw.jpg" alt="alt text"> </p> <p>It crosses the line $y=1$ three times, as required. But the function $D$ defined as specified implicitly, looks like this:</p> <p><a href="https://i.stack.imgur.com/zgVFy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zgVFy.jpg" alt="D"></a><br> It does <em>not</em> cross the line $1+br$ three times. The value $r=0$ is no good for this (because in fact <em>every</em> number $D$ satisfies the equation when $r=0$).</p>
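For readers who want to experiment with the implicit definition of $D(r)$: here is a small bisection sketch (my own code, with arbitrarily chosen parameters, not from either post). For each $r$, the function $g(D)=(p_1 s_1^r)^{D/(r+D)}+(p_2 s_2^r)^{D/(r+D)}-1$ is strictly decreasing in $D$, since each base $p_i s_i^r<1$ and the exponent $D/(r+D)$ increases with $D$, so bisection finds the unique root. This only spot-checks monotonicity of $D$; it proves nothing, and per the post the concavity question was open:

```python
def D_of_r(r, s1, s2, p1, p2, lo=1e-9, hi=1e6, iters=100):
    # g is strictly decreasing in D, so bisection finds the unique root
    def g(D):
        e = D / (r + D)
        return (p1 * s1**r) ** e + (p2 * s2**r) ** e - 1.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

s1, s2, p1, p2 = 0.3, 0.4, 0.5, 0.5   # arbitrary sample parameters
rs = [0.5 + 0.1 * k for k in range(30)]
Ds = [D_of_r(r, s1, s2, p1, p2) for r in rs]
print(all(d2 >= d1 - 1e-9 for d1, d2 in zip(Ds, Ds[1:])))
```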
4,122,732
<p>I was solving exercise 3.125 of Wackerly's Probability book and I did not understand the solution given in the solutions manual.</p> <p>The problem says:</p> <p>Customers arrive at a shop following a Poisson distribution with an average of 7 customers per hour. What is the probability that exactly two customers arrive in 2 hours:</p> <p>(a) between 2:00pm and 4:00pm (a continuous period of two hours)? (b) between 1:00pm and 2:00pm or between 3:00pm and 4:00pm (two periods of 1 hour that in total add up to two hours)?</p> <p>For part (a) I responded correctly. That is <span class="math-container">$P(Y=2) = \frac{14^2 e^{-14} }{2!}$</span>, since initially <span class="math-container">$\lambda=7$</span> (per hour), but because we are considering a continuous period of 2 hours, our random variable <span class="math-container">$Y$</span> has <span class="math-container">$\lambda_1 = 14$</span>.</p> <p>However for part (b) the solutions manual says the result is the same as (a), even though we are considering separate 1-hour periods. I would have said that the answer was <span class="math-container">$2P(Y=2) = 2 \frac{7^2 e^{-7} }{2!}$</span>. Why is this not the answer? Does it have something to do with the &quot;Poisson process&quot;? The book does not dive much into this, and Wikipedia's article was overwhelming.</p> <p>Thank you</p>
Vons
274,987
<p>Simple answer is that for both problems, you have a span of two hours, so the calculation is identical for both of them. On average 7 customers arrive in 1 hour so 14 customers arrive in 2 hours. Using <span class="math-container">$X\sim\text{Poisson}(14)$</span> to be the number of customer arrivals in a 2 hour time interval, we have <span class="math-container">$$\Pr(X=2)=\frac{e^{-14}(14)^2}{2!}$$</span></p> <p>In terms of a Poisson process, if 7 customers arrive per hour, then the Poisson <em>process</em> has intensity/ rate 7 customers per hour. Generally this is given in units of time, which is one hour here, so it's just 7. And then for a two hour period, the number of arrivals would be modeled by the Poisson <em>distribution/ random variable</em> having parameter <span class="math-container">$7(2)$</span>.</p> <p>If we assume the customers arrive according to a Poisson process with rate <span class="math-container">$r$</span>, i.e. for every interval of length <span class="math-container">$t$</span> the distribution of arrivals is <span class="math-container">$\text{Poisson}(rt)$</span> and arrivals in separate intervals are independent of one another, then the answers to the two parts are the same.</p> <p>Probably the problem should have said customers arrive according to a Poisson process.</p>
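To make the "sum of independent Poissons is Poisson" point concrete, here is a short check I added (not from the answer; `pois_pmf` is my own helper). Computing (b) by conditioning on how the two arrivals split across the two disjoint 1-hour windows reproduces exactly the Poisson(14) answer from (a):

```python
import math

def pois_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

p_a = pois_pmf(2, 14)  # one 2-hour window at rate 7 per hour

# (b): independent counts X, Y ~ Poisson(7) in the two disjoint hours;
# P(X + Y = 2) = sum over the ways the 2 arrivals split between them
p_b = sum(pois_pmf(j, 7) * pois_pmf(2 - j, 7) for j in range(3))

print(p_a, p_b)
```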
3,997,632
<p>Use the Chain Rule to prove the following.<br /> (a) The derivative of an even function is an odd function.<br /> (b) The derivative of an odd function is an even function.</p> <p><strong>My attempt:</strong></p> <p>I can easily prove these using the definition of a derivative, but I'm having trouble showing them using the chain rule.</p> <p>(a) <span class="math-container">$f(x)$</span> is even <span class="math-container">$ \therefore f(-x) = f(x)$</span>.<br /> We need to show that <span class="math-container">$f'(-x) = -f'(x)$</span>.</p> <p>Let <span class="math-container">$u = -x$</span>.<br /> My reasoning for the next step is that if we want to find the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$x$</span> from <span class="math-container">$f(u)$</span> we need to use the chain rule and first find the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$u$</span> and multiply that with the derivative of <span class="math-container">$u$</span> with respect to <span class="math-container">$x$</span>.<br /> <span class="math-container">$f'(x) = f'(u) \cdot u' = - f'(u) = -f'(-x)$</span>.<br /> <span class="math-container">$f'(x) = -f'(-x)$</span>.<br /> <span class="math-container">$f'(-x) = -f'(x)$</span>.</p> <p>The problem with this is that I never used the fact that <span class="math-container">$f$</span> is even and I feel like there is a mistake in my approach, most likely where I explained my reasoning. I just can't see it. Can you please point out the mistake and point me to the right direction?</p> <p>Thanks!</p>
peek-a-boo
568,204
<p><span class="math-container">$f$</span> being even means for every <span class="math-container">$x\in\Bbb{R}$</span>, <span class="math-container">$f(x)=f(-x)$</span>. Or, if we define the function <span class="math-container">$u:\Bbb{R}\to\Bbb{R}$</span> as <span class="math-container">$u(x)=-x$</span>, then the condition for being an even function is that <span class="math-container">$f=f\circ u$</span>. So, actually, you did use the evenness assumption when you started calculating derivatives, even if you didn't explain it very explicitly.</p> <p>Now, the chain rule tells us that <span class="math-container">$f'=(f\circ u)' = (f'\circ u)\cdot u'$</span>. Explicitly, this says for every <span class="math-container">$x\in\Bbb{R}$</span>, <span class="math-container">\begin{align} f'(x)&amp;= f'(u(x))\cdot u'(x)=f'(-x)\cdot (-1) = -f'(-x). \end{align}</span> But this is precisely what it means for <span class="math-container">$f'$</span> to be odd. The other question is similar.</p>
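A numerical illustration (mine, not part of the answer): for the even function $f(x)=\cos x + x^4$, a symmetric difference quotient approximating $f'$ is odd to within rounding error:

```python
import math

def f(x):  # an even function: f(-x) = f(x)
    return math.cos(x) + x**4

def fprime(x, h=1e-6):
    # central difference approximation to f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# f'(-x) should equal -f'(x), so these sums should all be ~0
max_err = max(abs(fprime(-x) + fprime(x)) for x in (0.1, 0.5, 1.3, 2.0))
print(max_err)
```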
3,335,060
<blockquote> <p>The number of possible continuous <span class="math-container">$f(x)$</span> defined on <span class="math-container">$[0,1]$</span> for which <span class="math-container">$I_1=\int_0^1 f(x)dx = 1,~I_2=\int_0^1 xf(x)dx = a,~I_3=\int_0^1 x^2f(x)dx = a^2 $</span> is/are</p> <p><span class="math-container">$(\text{A})~~1~~~(\text{B})~~2~~(\text{C})~~\infty~~(\text{D})~~0$</span></p> <p>I have tried the following. Applying ILATE (integration by parts): nothing useful comes up, only further complications like the primitive of the primitive of $f(x)$, and no use of the given information either. Using the rule <span class="math-container">$$ \int_a^b g(x)dx = \int_a^b g(a+b-x)dx$$</span> I solved all three constraints to get <span class="math-container">$$ \int_0^1 x^2f(1-x)dx = (a-1)^2 \\ \text{or} \int_0^1 x^2[f(1-x)+f(x)]dx = (a-1)^2 +a^2 \\$$</span> Then I did the following: if $f(x) + f(1-x)$ is constant, solve with the constraints to find possible solutions. Basically I was looking for any solutions where the function also follows the rule that $f(x) + f(1-x)$ is constant. Solving with the other constraints, I obtained that $f(x)$ will only follow all four constraints if the constant [$= f(x) + f(1-x)$] is 2, and $a$ is <span class="math-container">$\frac{\sqrt{3}\pm1}{2}$</span>.</p> </blockquote>
J.G.
56,861
<p>If you know your powers of <span class="math-container">$3$</span> well, you know <span class="math-container">$2.7^3=19.683$</span>. Since <span class="math-container">$e&gt;2.718=2.7\left(1+\frac{2}{300}\right)$</span>,<span class="math-container">$$e^3&gt;19.683\left(1+\frac{2}{100}\right)=19.683+0.39366&gt;20.$$</span></p>
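The arithmetic in this answer can be checked exactly with rational numbers (a sketch of my own, not part of the answer):

```python
import math
from fractions import Fraction as F

cube = F(27, 10) ** 3   # 2.7^3 = 19.683 exactly
# 2.718 = 2.7 * (1 + 2/300), so e > 2.7 * (1 + 2/300)
assert F(27, 10) * (1 + F(2, 300)) == F(2718, 1000)
# since (1+t)^3 > 1 + 3t for t = 2/300 > 0, e^3 > 19.683 * (1 + 2/100)
bound = cube * (1 + F(2, 100))
print(float(cube), float(bound), bound > 20)
```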
2,777,555
<p>Prove that if $f(0)=0$ and $f'(0)=0$, then $f(x)=0$ for all $x$. </p> <p>Hint: The idea is to multiply both sides of the equation $f''(x)+ f(x) = 0$ by something that makes the left-hand side of the equation into the derivative of something.</p> <p>I'm not sure how to proceed and don't really understand the hint.</p>
Mohammad Riazi-Kermani
514,496
<p>$$f''(x)+ f(x) = 0 $$</p> <p>$$ f'(x) f''(x) +f'(x)f(x) =0 $$</p> <p>$$ (1/2)(f^2 + f'^2 )' =0$$</p> <p>$$f^2 + f'^2=C$$</p> <p>Since $$ (f^2 + f'^2)(0)=0$$</p> <p>We get $C=0$, that is $f(x)=0$ </p>
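As an independent numerical cross-check (my code, not part of the answer): integrating $f''=-f$ with a Runge-Kutta step shows the conserved quantity $f^2+f'^2$ in action. Starting from $(f,f')=(0,0)$ the solution stays identically $0$, while starting from $(1,0)$ (i.e. $f=\cos$) the "energy" stays $1$:

```python
import math

def rk4_step(y, h):
    # y = (f, f'); the system for f'' = -f is (f, f')' = (f', -f)
    def deriv(s):
        return (s[1], -s[0])
    k1 = deriv(y)
    k2 = deriv((y[0] + h/2 * k1[0], y[1] + h/2 * k1[1]))
    k3 = deriv((y[0] + h/2 * k2[0], y[1] + h/2 * k2[1]))
    k4 = deriv((y[0] + h * k3[0], y[1] + h * k3[1]))
    return (y[0] + h/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            y[1] + h/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

def integrate(y0, t_end=2 * math.pi, n=20000):
    y, h = y0, t_end / n
    for _ in range(n):
        y = rk4_step(y, h)
    return y

y_zero = integrate((0.0, 0.0))  # f(0) = f'(0) = 0: stays identically zero
y_one = integrate((1.0, 0.0))   # f = cos: energy f^2 + f'^2 stays 1
print(y_zero, y_one[0]**2 + y_one[1]**2)
```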
264,587
<p><strong>NOTE</strong></p> <p>I'm sorry, my question was not clear. I want to know all the ways to split a list with a given length simply, <strong>rather than split a cyclic substitution</strong>. If a given list has length <span class="math-container">$N$</span> and the rule is <span class="math-container">${m, n, p, ...}$</span>, we should get a list of length <span class="math-container">${}_{N} C_{m} {}_{N-m} C_{n} {}_{N-m-n} C_{p} \dots = \frac{N!}{m! n! p! \cdots}$</span> if all elements of <span class="math-container">$N$</span> are independent.</p> <p>Other examples: <code>Length@partitionList[{a, b, c, d, e, f, g, h}, {2, 2, 4}]</code> returns <span class="math-container">$420 = \frac{8!}{2! 2! 4!}$</span></p> <p><code>Length@partitionList[{a, b, b, c, d, e, e, f}, {2, 2, 4}]</code> returns 173</p> <hr> <p><strong>Question</strong></p> <p>I want to split a given list into sets of lists, whose lengths are given. For example, this means if we split a list <code>{a, b, c, d}</code> (length 4) into two lists with lengths <code>{1, 3}</code> (the sum of lengths should be 4), we obtain</p> <pre><code>{{{a}, {b, c, d}}, {{b}, {c, d, a}}, {{c}, {d, a, b}}, {{d}, {a, b, c}}} </code></pre> <p>(here, we don't care about ordering for elements in each sublist).</p> <p>To achieve this, I prepared the following function:</p> <pre class="lang-Mathematica prettyprint-override"><code>partitionList[l_List, p_List] := DeleteDuplicates@(Module[{$tmp, $deleteList, $lastchoose, l2 = Range[Length@l]}, $tmp = Subsets[l2, {p[[1]]}]; $deleteList = Flatten /@ $tmp; If[Length@p &gt; 1, Do[ $lastchoose = Table[Subsets[ Delete[l2, {#} &amp; /@ $deleteList[[$j]]], {p[[$i]]}], {$j, Length@$deleteList}]; $tmp = Replace[Flatten[ Tuples /@ Transpose[{{#} &amp; /@ $tmp, $lastchoose}], 1], x_ /; Depth@x &gt; 2 :&gt; Sequence @@ x, {2} ]; $deleteList = Flatten /@ $tmp; , {$i, 2, Length@p}] ]; Map[l[[#]] &amp;, $tmp, {2}] ] ) </code></pre> <p>Here, the argument <span 
class="math-container">$l$</span> is a list which we want to split, and <span class="math-container">$p$</span> is a list of lengths of sublists. In the previous example, <span class="math-container">$l$</span> is <code>{a, b, c, d}</code> and <span class="math-container">$p$</span> is <code>{1, 3}</code>.</p> <p>However, since it is based on procedural programming, I believe there are more efficient ways. Could you please suggest such a method?</p>
kglr
125
<pre><code>kSP = ResourceFunction[&quot;KSetPartitions&quot;]; partitionLst[a_, p_] := Select[Sort@Map[Length] @ # == Sort @ p &amp;][ DeleteDuplicates @ Sort @ kSP[a, Length @ p]] partitionLst[{a, b, c, d}, {1, 3}] </code></pre> <blockquote> <pre><code>{{{a}, {b, c, d}}, {{a, b, c}, {d}}, {{a, b, d}, {c}}, {{a, c, d}, {b}}} </code></pre> </blockquote> <pre><code>partitionLst[{a, b, c, d}, {2, 2}] </code></pre> <blockquote> <pre><code>{{{a, b}, {c, d}}, {{a, c}, {b, d}}, {{a, d}, {b, c}}} </code></pre> </blockquote>
3,981,458
<p>A star graph <span class="math-container">$S_{k}$</span> is the complete bipartite graph <span class="math-container">$K_{1,k}$</span>. One bipartition contains 1 vertex and the other bipartition contains <span class="math-container">$k$</span> vertices. <a href="https://en.wikipedia.org/wiki/Star_(graph_theory)" rel="noreferrer">Wikipedia Article</a></p> <p>A graph G is balanced if the average degree of every subgraph H is less than or equal to the average degree of G. In other words <span class="math-container">$\bar{d}(H) \leq \bar{d}(G)$</span>.</p> <p>Show that a star graph is balanced.</p> <p>I have been able to prove this, however in an extremely ugly and long way using many different cases. I was wondering if there is any easy way to prove this. Any ideas?</p>
ArsenBerk
505,611
<p>It is clear that <span class="math-container">$$\bar{d}(S_k) = \frac{k+k\cdot1}{k+1} = \frac{2k}{k+1} = 2-\frac{2}{k+1}$$</span></p> <p>Now, in <span class="math-container">$H$</span>, we can consider only two cases:</p> <p><strong>Case 1:</strong> Center vertex <span class="math-container">$v \notin H$</span> (vertex with degree <span class="math-container">$k$</span> in star graph). Then, <span class="math-container">$H$</span> has no edges so <span class="math-container">$\bar{d}(H) = 0$</span>,</p> <p><strong>Case 2:</strong> Center vertex <span class="math-container">$v \in H$</span>. If <span class="math-container">$H$</span> has all the vertices but missing some of the edges, then clearly <span class="math-container">$\bar{d}(H) &lt; \bar{d}(S_k)$</span> since we are decreasing the total degree while keeping the number of vertices same. So, suppose <span class="math-container">$H$</span> has <span class="math-container">$n$</span> vertices of degree <span class="math-container">$1$</span> with <span class="math-container">$n \le k$</span> (here, note that <span class="math-container">$H$</span> still may be missing some edges but it is enough to check the maximal case, in which we have a star graph <span class="math-container">$S_n$</span>). Then, we have <span class="math-container">$$\bar{d}(H) = \frac{2n}{n+1} = 2 - \frac{2}{n+1}$$</span></p> <p>So, all that's left is to compare <span class="math-container">$2-\dfrac{2}{k+1}$</span> and <span class="math-container">$2-\dfrac{2}{n+1}$</span> where <span class="math-container">$n \le k$</span>. But, it is easy to see that</p> <p><span class="math-container">$$2-\dfrac{2}{k+1} \ge 2-\dfrac{2}{n+1}$$</span></p> <p>So, we are done.</p>
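Here is a brute-force confirmation I added (not from the answer). It checks only induced subgraphs, which suffices: dropping edges on a fixed vertex set can only lower the average degree, as the answer's Case 2 observes.

```python
from itertools import combinations

def is_balanced(n, edges):
    # average degree of the subgraph induced on S is 2*|E(S)|/|S|
    def avg_deg(S):
        e = sum(1 for (u, v) in edges if u in S and v in S)
        return 2 * e / len(S)
    full = avg_deg(set(range(n)))
    return all(avg_deg(set(S)) <= full + 1e-12
               for k in range(1, n + 1)
               for S in combinations(range(n), k))

# star S_k: center 0 joined to leaves 1..k
results = {k: is_balanced(k + 1, [(0, i) for i in range(1, k + 1)])
           for k in range(1, 7)}
print(results)
```

For contrast, a triangle plus an isolated vertex fails the check, since the triangle has average degree $2$ while the whole graph has average degree $3/2$.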
3,981,458
<p>A star graph <span class="math-container">$S_{k}$</span> is the complete bipartite graph <span class="math-container">$K_{1,k}$</span>. One bipartition contains 1 vertex and the other bipartition contains <span class="math-container">$k$</span> vertices. <a href="https://en.wikipedia.org/wiki/Star_(graph_theory)" rel="noreferrer">Wikipedia Article</a></p> <p>A graph G is balanced if the average degree of every subgraph H is less than or equal to the average degree of G. In other words <span class="math-container">$\bar{d}(H) \leq \bar{d}(G)$</span>.</p> <p>Show that a star graph is balanced.</p> <p>I have been able to prove this, however in an extremely ugly and long way using many different cases. I was wondering if there is any easy way to prove this. Any ideas?</p>
Misha Lavrov
383,078
<p>All trees are balanced, so in particular stars are balanced.</p> <p>An <span class="math-container">$n$</span>-vertex tree has average degree <span class="math-container">$2-\frac 2n$</span>. Any subgraph of average degree <span class="math-container">$2$</span> can be reduced to a subgraph of minimum degree <span class="math-container">$2$</span> by removing leaves, but a subgraph of minimum degree <span class="math-container">$2$</span> would contain a cycle. Trees don't have cycles, so any subgraph of a tree has average degree less than <span class="math-container">$2$</span>. However, for a <span class="math-container">$k$</span>-vertex subgraph, the largest possible average degree less than <span class="math-container">$2$</span> is <span class="math-container">$2 - \frac 2k \le 2 - \frac 2n$</span>.</p> <p>(In fact, all trees are strictly balanced, since the only way to get <span class="math-container">$2 - \frac2k = 2 - \frac2n$</span> is to have <span class="math-container">$k=n$</span>, taking the entire tree as a subgraph.)</p>
838,400
<p>One question asking if $\mathbb{Z}^*_{21}$ is cyclic.</p> <p>I know that the cyclic group must have a generator which can generate all of the elements within the group.</p> <p>But does this kind of question requires me to exhaustively find out a generator? Or is there any more efficient method to quickly determine if a group is a cyclic group?</p>
Alex Jordan
157,500
<p>I believe this is the same as what KCd said, but I will be more specific. $Z^*_{21}$ contains two subgroups of order 2, namely $&lt;8&gt;$ and $&lt;13&gt;$. However, for $Z^*_{21}$ to be cyclic, it must have only one subgroup of order 2. This fact comes from the fundamental theorem of cyclic groups:</p> <blockquote> <p>Every subgroup of a cyclic group is cyclic. Moreover, if $|a| = n$, then the order of any subgroup of $&lt;a&gt;$ is a divisor of $n$; and, for each positive divisor $k$ of $n$, the group $&lt;a&gt;$ has exactly one subgroup of order k–namely, $&lt;a ^{n/k}&gt;$. <a href="http://www.cs.earlham.edu/~seth/class/math420/Portfolio.pdf" rel="nofollow">http://www.cs.earlham.edu/~seth/class/math420/Portfolio.pdf</a></p> </blockquote> <p>In the context of your question, we apply this theorem by supposing $Z^*_{21}$ is cyclic. Then let $a\in{Z^*_{21}}$ such that $&lt;a&gt;=Z^*_{21}$. Thus $|a|=12$. Since 2 divides 12, by the fundamental theorem of cyclic groups, $&lt;a&gt;$ has exactly one subgroup of order 2. However we know that $Z^*_{21}$ possesses two subgroups of order 2. So, $Z^*_{21}$ is not cyclic.</p>
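A brute-force confirmation (my code, not the answerer's): listing the orders of all units mod $21$ shows the maximum order is $6 < 12 = \varphi(21)$, and that there are three elements of order $2$ (namely $8$, $13$, $20$), so $\mathbb{Z}^*_{21}$ cannot be cyclic:

```python
from math import gcd

units = [a for a in range(1, 21) if gcd(a, 21) == 1]

def order(a, n=21):
    # multiplicative order of a modulo n
    x, k = a % n, 1
    while x != 1:
        x = x * a % n
        k += 1
    return k

orders = {a: order(a) for a in units}
print(len(units), max(orders.values()),
      sorted(a for a in units if orders[a] == 2))
```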
341,823
<p>Let <span class="math-container">$E\subset B_1(0)\subset \mathbb R^n$</span> be a compact set s.t. <span class="math-container">$\lambda(E)=0$</span>, where <span class="math-container">$\lambda$</span> is the Lebesgue measure, and <span class="math-container">$B_1(0)$</span> is the Euclidean unit ball centered at the origin. Is the following integral finite:</p> <p><span class="math-container">$$\int_{B_1(0)}-\log d(x,E)d\lambda(x)&lt;\infty?$$</span></p> <p>Although this question seems trivial, I have failed to find a reference to it or to variations of it in previous discussions. I was not able to come up with a counterexample or a proof. I also asked a variation of it on Math Stack Exchange, but didn't get a sufficient answer.</p> <p>Thanks in advance</p>
Iosif Pinelis
36,721
<p>If <span class="math-container">$E\ne\emptyset$</span>, then <span class="math-container">$d(x,E)\le2$</span> for all <span class="math-container">$x\in B_1(0)$</span>. So, your integral is <span class="math-container">$\le\lambda(B_1(0))\ln2&lt;\infty$</span>. </p>
90,459
<p>I want to find the degree of $\mathbb{Q}(\sqrt{3+2\sqrt{2}})$ over $\mathbb{Q}$. I observe that $3+2\sqrt{2}=2+2\sqrt{2}+1=(\sqrt{2}+1)^2$ so $$ \mathbb{Q}(\sqrt{3+2\sqrt{2}})=\mathbb{Q}(\sqrt{2}+1)=\mathbb{Q}(\sqrt{2}) $$ so the degree is 2.</p> <p>Is there a more mechanical way to show this without noticing the factorization?</p>
Sam
3,208
<p>We can immediately see that $\sqrt{3+2\sqrt{2}}$ is a root of </p> <p>$$(X^2 -3)^2 -8 = X^4 -6X^2 + 1$$</p> <p>So we can ask whether this polynomial is irreducible or not.</p> <p>The polynomial has no roots in $\mathbb Q$, since for a root $\frac rs \in \mathbb Q$ with $(r,s) = 1$, we would need to have $r|1$, $s|1$, but $\pm1$ is not a root.</p> <p>So let's try to find factors of degree 2:</p> <p>$$ \begin{align} X^4 - 6X^2 + 1 &amp;= (X^2 + aX \pm 1)(X^2 + cX \pm 1) \\ &amp;= X^4 + (a+c)X^3 + (ac\pm 2)X^2 \pm (a+c)X + 1 \end{align} $$</p> <p>so $a = -c$ and $-6 = ac \pm 2 = -a^2 \pm 2$. This implies that we must choose $a = 2$, $c=-2$ and the minus-sign for $\pm 1$. We have thus arrived at the factorization</p> <p>$$X^4 - 6X^2 + 1 = (X^2 + 2X - 1)(X^2 - 2X - 1)$$</p> <p>Both the factors on the right must be irreducible over $\mathbb Q$ (because they can't have a root in $\mathbb Q$) and $\sqrt{3+2\sqrt{2}}$ must be a root of one of them (we need not even figure out which one).</p> <p>From this it follows that $[\mathbb Q(\sqrt{3+2\sqrt{2}}):\mathbb Q] = 2$.</p>
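The factorization and the root claim can be verified mechanically (my own sketch, not part of the answer; `poly_mul` multiplies polynomials given as coefficient lists, lowest degree first):

```python
import math

def poly_mul(p, q):
    # multiply polynomials given as coefficient lists, lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

product = poly_mul([-1, 2, 1], [-1, -2, 1])  # (x^2+2x-1)(x^2-2x-1)
r = math.sqrt(3 + 2 * math.sqrt(2))          # the element in question
print(product, abs(r * r - 2 * r - 1))       # r is a root of x^2 - 2x - 1
```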
363,767
<p>An ellipse is specified $ x^2 + 4y^2 = 4$, and a line is specified $x + y = 4$. I need to find the max/min distances from the ellipse to the line.</p> <p>My idea is to find two points $(x_1, y_1)$ and $(x_2,y_2)$ such that the first point is on the ellipse and the second point is on the line. Furthermore, the line segment formed by these two points should be perpendicular to the line (slope = 1). This gives 3 constraints $g_i$ and an objective $f$. </p> <p>$$ g_1: x_1 ^2 + 4y_1 ^2 - 4 = 0$$ $$ g_2: x_2 + y_2 - 4= 0 $$ $$ g_3:\frac{(y_2 - y_1)}{(x_2 - x_1)} - 1 = 0 $$ $$ f: (y_2 - y_1)^2 + (x_2- x_1)^2 $$ </p> <p>Then I compute $\nabla g_i$ and $\nabla f$:</p> <p>$$ \nabla g_1 = (2x_1, 8y_1, 0, 0) $$ $$ \nabla g_2 = (0, 0, 1, 1)$$ $$ \nabla g_3 = ( (y_2 - y_1)(x_2 - x_1)^{-2}, -(x_2 - x_1)^{-1}, -(y_2 - y_1)(x_2 - x_1)^{-2}, (x_2 - x_1)^{-1} )$$ $$ \nabla f = (-2(x_2-x_1), -2(y_2-y_1), 2(x_2-x_1), 2(y_2 - y_1))$$ </p> <p>At this point I try to solve $\nabla f = \sum\lambda_i\nabla g_i$, which, together with the constraints, gives me 7 equations with 7 variables. I'm not sure how to solve this system. </p> <p>$$\lambda_1 2 x_1 + \lambda_3(y_2-y_1)(x_2-x_1)^{-2} = -2(x_2-x_1)$$ $$\lambda_1 8 y_1 - \lambda_3 (x_2 - x_1)^{-1} = -2 (y_2 - y_1)$$ $$\lambda_2 - \lambda_3 (y_2 - y_1)(x_2 - x_1)^{-2} = 2(x_2 - x1)$$ $$\lambda_2 + \lambda_3 (x_2 - x_1)^{-1} = 2(y_2 -y_1)$$ </p> <p>Is there an easy way to see the solutions of this system in $x_1, y_1, x_2, y_2$? If not, is there an easier formulation of this optimization problem?</p>
lab bhattacharjee
33,337
<p>We can solve this without using the Lagrange multiplier method.</p> <p>We have $$\frac{x^2}4+\frac{y^2}1=1$$</p> <p>So, any point $P$ on the ellipse can be represented as $(2\cos t,\sin t)$.</p> <p>So, the distance of the line $x+y-4=0$ from $P(2\cos t,\sin t)$</p> <p>is $$\frac{|2\cos t+\sin t-4|}{\sqrt{1^2+1^2}}=\frac{\left|\sqrt5\cos\left(t-\arccos \frac2{\sqrt5}\right)-4\right|}{\sqrt2},$$ putting $2=r\cos A,1=r\sin A$ where $r&gt;0$.</p> <p>Squaring and adding we get $r^2=5\implies r=\sqrt 5, A=\arccos \frac2r=\arccos \frac2{\sqrt5}$.</p> <p>As $-1\le \cos\left(t-\arccos \frac2{\sqrt5}\right)\le1, $</p> <p>clearly, the distance will be maximum if $\cos\left(t-\arccos \frac2{\sqrt5}\right)=-1$ and minimum if $\cos\left(t-\arccos \frac2{\sqrt5}\right)=1$.</p>
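A quick numeric confirmation (my code, not the answerer's): sampling the parametrization shows the extreme distances are $(4\mp\sqrt5)/\sqrt2$, as the cosine argument above predicts:

```python
import math

def dist(t):
    # distance from (2 cos t, sin t) on the ellipse to the line x + y - 4 = 0
    return abs(2 * math.cos(t) + math.sin(t) - 4) / math.sqrt(2)

N = 100000
ds = [dist(2 * math.pi * k / N) for k in range(N)]
dmin, dmax = min(ds), max(ds)
print(dmin, dmax)  # close to (4 - sqrt 5)/sqrt 2 and (4 + sqrt 5)/sqrt 2
```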
1,278,329
<p>Solve the recurrence $a_n = 4a_{n−1} − 2 a_{n−2}$.</p> <p>I'm not sure how to solve this recurrence, as I don't know which numbers to input in order to solve it recursively (no initial values are given).</p>
Jared
138,018
<p>Assume the solution is $a_n = Ar^n$ then you get:</p> <p>$$ a_n = Ar^n, a_{n - 1} = A \frac{r^n}{r}, a_{n - 2} = A\frac{r^n}{r^2} $$</p> <p>Plug this into your recursion relation:</p> <p>\begin{align} a_n = 4a_{n - 1} - 2a_{n - 2}\\ Ar^n = 4A \frac{r^n}{r} - 2A\frac{r^n}{r^2} \\ Ar^n \big(1 - \frac{4}{r} + \frac{2}{r^2}\big) = 0 \\ Ar^n\left(\frac{r^2 - 4r + 2}{r^2}\right) = 0 \\ r^2 - 4r + 2 = 0 \\ r = \frac{4 \pm \sqrt{16 - 8}}{2} = \frac{4 \pm 2\sqrt{2}}{2} = 2 \pm \sqrt{2} \end{align}</p> <p>This gives that </p> <p>$$ a_n = A(2 + \sqrt{2})^n + B(2 - \sqrt{2})^n $$</p> <p>It's worth noting that this is identical to solving a homogeneous linear differential equation.</p>
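A numerical check of the closed form (my code, not part of the answer). The problem gives no initial values, so I pick arbitrary ones, fit $A$ and $B$ from the first two terms, and compare the closed form against direct iteration of the recurrence:

```python
import math

r1, r2 = 2 + math.sqrt(2), 2 - math.sqrt(2)  # roots of r^2 - 4r + 2 = 0

a0, a1 = 1.0, 3.0                  # arbitrary initial conditions
A = (a1 - a0 * r2) / (r1 - r2)     # solve A + B = a0, A r1 + B r2 = a1
B = a0 - A

def closed(n):
    return A * r1**n + B * r2**n

seq = [a0, a1]
for _ in range(2, 15):
    seq.append(4 * seq[-1] - 2 * seq[-2])

rel_err = max(abs(closed(n) - seq[n]) / max(1.0, abs(seq[n]))
              for n in range(15))
print(rel_err)
```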
1,262,174
<p>I am currently teaching Physics in an Italian junior high school. Today, while talking about the <a href="http://en.wikipedia.org/wiki/Dipole#/media/File:Dipole_Contour.svg" rel="noreferrer">electric dipole</a> generated by two equal charges in the plane, I was wondering about the following problem:</p> <blockquote> <p>Assume that two equal charges are placed in <span class="math-container">$(-1,0)$</span> and <span class="math-container">$(1,0)$</span>.</p> <p>There is an equipotential curve through the origin, whose equation is given by: <span class="math-container">$$\frac{1}{\sqrt{(x-1)^2+y^2}}+\frac{1}{\sqrt{(x+1)^2+y^2}}=2 $$</span> and whose shape is very <a href="http://en.wikipedia.org/wiki/Lemniscate" rel="noreferrer">lemniscate</a>-like:</p> </blockquote> <p><img src="https://i.stack.imgur.com/oojJw.png" alt="enter image description here" /></p> <blockquote> <p><strong>Is there a fast&amp;tricky way to compute the area enclosed by such a curve?</strong></p> </blockquote> <p>Numerically, it is <span class="math-container">$\approx 3.09404630427286$</span>.</p>
Narasimham
95,860
<p>HINT:</p> <p>The area can almost surely be evaluated between limits $(u_1,u_2),(v_1,v_2)$ for holomorphic functions of a complex variable, via conformal maps.</p> <p>The equipotentials and lines of force form an orthogonal net of curvilinear rectangles (the equipotentials resemble Cassinian ovals, but are different; one of them is shown above), the force lines being concurrent hyperbolas passing through the poles of the same polarity / charge and asymptotic to the y-axis. For these $(u,v)$ parameter plots, which satisfy the Laplace equation, all we need before evaluating the area integral is to identify the limits of $u$ and $v$ for the chosen oval. (I shall upload the equation if it is found in sites, e.g. 2d curves.com.)</p> <p>EDIT1:</p> <p>The closest that I can get to the function is sketched by Mark McClure in another context for $ f(z)= z^2 (z-1)^3 $:</p> <p><a href="https://math.stackexchange.com/questions/607436/what-do-polynomials-look-like-in-the-complex-plane">Orth Traj to Hyperbolae</a></p> <p>I think we should choose another complex-variable part so that our hyperbolae pass through $ (\pm 1,0)$ instead of $ ( 0,0),(1,0)$ as given above. We are interested in a particular orthogonal trajectory doubly intersecting at the origin. Calculation of the area may then be easier.</p>
1,262,174
<p>I am currently teaching Physics in an Italian junior high school. Today, while talking about the <a href="http://en.wikipedia.org/wiki/Dipole#/media/File:Dipole_Contour.svg" rel="noreferrer">electric dipole</a> generated by two equal charges in the plane, I was wondering about the following problem:</p> <blockquote> <p>Assume that two equal charges are placed in <span class="math-container">$(-1,0)$</span> and <span class="math-container">$(1,0)$</span>.</p> <p>There is an equipotential curve through the origin, whose equation is given by: <span class="math-container">$$\frac{1}{\sqrt{(x-1)^2+y^2}}+\frac{1}{\sqrt{(x+1)^2+y^2}}=2 $$</span> and whose shape is very <a href="http://en.wikipedia.org/wiki/Lemniscate" rel="noreferrer">lemniscate</a>-like:</p> </blockquote> <p><img src="https://i.stack.imgur.com/oojJw.png" alt="enter image description here" /></p> <blockquote> <p><strong>Is there a fast&amp;tricky way to compute the area enclosed by such a curve?</strong></p> </blockquote> <p>Numerically, it is <span class="math-container">$\approx 3.09404630427286$</span>.</p>
achille hui
59,379
<p>Consider following parametrization of the first quadrant of the $(x,y)$ plane:</p> <p>$$[1,\infty) \times [0,1] \ni (u,v) \quad\mapsto\quad (x,y) \in [0,\infty)^2 \quad\text{s.t.}\quad \begin{cases} r_1 &amp;= \sqrt{(x+1)^2+y^2} = u+v\\ r_2 &amp;= \sqrt{(x-1)^2+y^2} = u-v \end{cases}$$ In this parametrization, the area element is given by $$dx \wedge dy = -\omega\quad\text{ where }\quad \omega \stackrel{def}{=} \frac{u^2-v^2}{\sqrt{(u^2-1)(1-v^2)}} du \wedge dv$$ Let $D$ be the region in $(u,v)$ plane corresponds to the dipole in first quadrant. Its boundary $\partial D$ consists of 3 pieces</p> <ol> <li>$C_1$ : a curve start at $(1,0)$, end at $(\phi,1)$ where $\phi = \frac{1+\sqrt{5}}{2}$ is the Golden ratio. $$[1,\phi] \in t \quad\mapsto\quad (u,v) = \left(t,\sqrt{t(t-1)}\right) \in [1,\infty) \times [0,1]$$</li> <li>$C_2$ : a line segment from $(\phi,1)$ to $(1,1)$.</li> <li>$C_3$ : a line segment from $(1,1)$ to $(1,0)$.</li> </ol> <p>Notice $\omega = \frac12 d\Omega$ where $$\Omega \stackrel{def}{=} \frac{u (u^2 - 1) dv - v(1-v^2) du}{\sqrt{(u^2-1)(1-v^2)}}$$ The area $\mathcal{A}$ we seek equals to $$\mathcal{A} = 4 \int_D \omega = 2\int_{\partial D}\Omega = 2\left(\int_{C_1} + \int_{C_2} + \int_{C_3}\right) \Omega$$</p> <p>Introduce another parametrization for the region $[1,\infty) \times [0,1]$ in the $(u,v)$ plane:</p> <p>$$[0,\infty) \times [0,\pi] \ni (\rho,\eta) \quad\mapsto\quad (u,v) = (\cosh\rho,\cos\eta) \in [1,\infty) \times [0,1]$$</p> <p>One can use this to verify the line segments $C_2$ and $C_3$ contribute nothing to $\mathcal{A}$. 
As a result,</p> <p>$$\mathcal{A} = 2\int_{C_1} \Omega = 2\int_{C_1} \left(u\sqrt{\frac{u^2-1}{1-v^2}} dv - v\sqrt{\frac{1-v^2}{u^2-1}} du\right)$$</p> <p>Since $v^2 = u(u-1)$ on curve $C_1$, we can transform the $1^{st}$ piece of the integrand as $$\begin{align} u\sqrt{\frac{u^2-1}{1-v^2}} dv &amp;= \sqrt{\frac{u(u+1)}{1-v^2}} vdv = \sqrt{u(u+1)} d(-\sqrt{1-v^2})\\ &amp;= -d \sqrt{u(u+1)(1-v^2)} + \sqrt{1-v^2}d\sqrt{u(u+1)}\\ &amp;= -d\sqrt{u(u+1)(1-v^2)} + \sqrt{\color{red}{\frac{1-v^2}{u(u+1)}}}\left(\color{red}{u} + \frac12\right) du \end{align} $$ Notice the piece in red can be rewritten as $\displaystyle\;\sqrt{\frac{u(1-v^2)}{u+1}} du = v\sqrt{\frac{1-v^2}{u^2-1}} du\;$ which is nothing but the $2^{nd}$ piece. This leads to</p> <p>$$\begin{align} \mathcal{A} &amp;= -2 \left[\sqrt{u(u+1)(1-v^2)}\right]_{(u,v) = (1,0)}^{(\phi,1)} + \int_{C_1} \sqrt{\frac{1-v^2}{u(u+1)}} du\\ &amp;= \sqrt{8} + \int_1^\phi \sqrt{\frac{1+u-u^2}{u(u+1)}} du \tag{*1} \end{align} $$</p> <p>As a double check, one can throw following command to wolfram alpha, </p> <p><code>Sqrt[8]+Integrate[Sqrt[(1+u-u^2)/(u*(u+1))],{u,1,GoldenRatio}]</code></p> <p>to evaluate in $(*1)$ numerically. WA returns $$\mathcal{A} \approx 3.0940463058814386237217800770286020796565427678...$$ a number matching what has been stated on question.</p> <p>This is as far as I can get. I hope someone can further simplify the integral in $(*1)$. Please note that WA do know how to compute the anti-derivative for the integral at $(*1)$. It is a page long expression in terms of elliptic integrals and I won't reproduce it here.</p>
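As a numerical cross-check of $(*1)$ (my own code, assuming only the final formula from this answer): a plain composite Simpson's rule on $[1,\varphi]$, with a guard for the integrand vanishing at $u=\varphi$ since $1+\varphi-\varphi^2=0$, reproduces the stated value of $\mathcal{A}$:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def g(u):
    val = (1 + u - u * u) / (u * (u + 1))
    return math.sqrt(max(val, 0.0))  # val -> 0 as u -> phi; guard rounding

n = 200000                 # even number of Simpson subintervals on [1, phi]
h = (phi - 1) / n
s = g(1.0) + g(phi)
for k in range(1, n):
    s += (4 if k % 2 else 2) * g(1 + k * h)
area = math.sqrt(8) + s * h / 3
print(area)  # compare with 3.09404630588...
```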
4,408,507
<p>We take the definition of a Lebesgue measurable set to be the following:</p> <p>A set <span class="math-container">$A\subset \mathbb R$</span> is called Lebesgue measurable if there exists a Borel set <span class="math-container">$B\subset A$</span> such that <span class="math-container">$|A-B|=0$</span>, where <span class="math-container">$|.|$</span> denotes the Lebesgue outer measure of a set.</p> <p>Then we have theorems like:</p> <p><span class="math-container">$A\subset \mathbb R$</span> is Lebesgue measurable iff</p> <p><span class="math-container">$(1)$</span> Given any <span class="math-container">$\epsilon&gt;0$</span> there exists <span class="math-container">$F\subset A$</span> closed such that <span class="math-container">$|A-F|&lt;\epsilon$</span>.</p> <p><span class="math-container">$(2)$</span> Given any <span class="math-container">$\epsilon&gt;0$</span> there exists <span class="math-container">$G\supset A$</span> open such that <span class="math-container">$|G-A|&lt;\epsilon$</span>.</p> <p>I have two questions here.</p> <p>The first is: what motivates the definition of Lebesgue measurable sets? The second is: why do we approximate Lebesgue measurable sets from below by closed sets and from above by open sets? I am studying measure theory from Sheldon Axler's book, which does not give the motivation behind these definitions and theorems. Can someone give me the motivation behind them?</p>
Esgeriath
1,021,258
<p>You probably switched the inequality sign. If we have a triangle with angles <span class="math-container">$\alpha, \beta, \gamma$</span>, then the inequality <span class="math-container">$$ \sin\left(\frac\alpha 2\right)\sin\left(\frac\beta 2\right)\sin\left(\frac\gamma 2 \right)\geq \frac 1 8 $$</span> need not hold. The product on the LHS can be made arbitrarily small, as we can pick a right triangle with one very pointy corner (i.e. one of the angles can be arbitrarily small). We have <span class="math-container">$$ \sin\left(\frac\alpha 2\right)\sin\left(\frac\beta 2\right)\sin\left(\frac\gamma 2 \right)\leq \sin\left(\frac\alpha 2\right) $$</span> (because all values are positive, and not greater than 1).</p> <hr /> <p>Let's prove that the opposite inequality holds. Define <span class="math-container">$f(x, y) = \sin(x/2)\sin(y/2)\sin((\pi - x - y)/2)$</span>, and choose the domain of <span class="math-container">$f$</span> to be <span class="math-container">$$ D = \left\{ (x, y) \in \left[0, \pi \right]^2 : x+ y \leq \pi\right\}. $$</span> In that way, <span class="math-container">$D$</span> is compact, and every point in its interior describes a triangle with angles <span class="math-container">$x, y, \pi - x - y$</span>. Since <span class="math-container">$f$</span> is continuous, we know that it attains a maximum on <span class="math-container">$D$</span>. On the boundary, <span class="math-container">$f$</span> is equal to <span class="math-container">$0$</span>, and in the interior it is positive, so the maximum lies in the interior. Calculating the gradient gives us <span class="math-container">$$ \nabla f(x, y) = \left(\frac {\sin(\frac y 2) \cos(x + \frac y 2)} 2, \frac { \sin(\frac x 2) \cos(\frac x 2 + y)} 2\right) $$</span> after we use the angle sum formula for sine. The only point in the interior of <span class="math-container">$D$</span> at which the gradient vanishes is <span class="math-container">$(\pi / 3, \pi / 3)$</span>, so the maximum lies there. 
We have <span class="math-container">$$ f(x, y) \leq f\left(\frac \pi 3, \frac \pi 3\right) = \sin^3\left(\frac \pi 6\right) = \frac 1 8.$$</span></p>
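A brute-force numerical check of the bound (my own sketch, not part of the original argument): scan a grid over the triangle domain $D$ and confirm that $f$ never exceeds $1/8$, with the maximum attained near $(\pi/3,\pi/3)$.

```python
import math

def f(x, y):
    return math.sin(x / 2) * math.sin(y / 2) * math.sin((math.pi - x - y) / 2)

# Grid search over D = {(x, y) : x, y >= 0, x + y <= pi}.
n = 400
best, argbest = -1.0, None
for i in range(n + 1):
    for j in range(n + 1 - i):
        x, y = math.pi * i / n, math.pi * j / n
        v = f(x, y)
        if v > best:
            best, argbest = v, (x, y)

print(best)      # ≈ 1/8 = 0.125
print(argbest)   # ≈ (pi/3, pi/3)
```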
268,185
<p><strong>Question:</strong> </p> <ol> <li>Given a PDE, is there a general method to show that it is <em>not solvable</em> using the inverse scattering transform?</li> <li>Specifically, for the perturbed 1D NLS or the 2D cubic NLS, where was it first shown that these equations can not be solved using <em>any</em> form of the inverse scattering transform.</li> </ol> <p><strong>Background and details:</strong> The cubic 1D nonlinear Schrodinger equation (NLS) $$ iu_t + u _{xx} + |u | ^2 u = 0$$ and the KdV equation $$u_t -6uu_x+u_{xxx} = 0$$ are both known to be integrable, and solvable via the <a href="https://en.wikipedia.org/wiki/Inverse_scattering_transform" rel="nofollow noreferrer">inverse scattering transform</a>. So, given the initial condition $u(t=0,x)=u_0 (x)$, one can compute these constants and solve an inverse, linear, auxilary problem to find $u$ for all times $t$. For example, for the cubic 1d NLS this is the Zakharov-Shabat equations, and for the KdV it is the linear, time-independant Schrodinger equation. </p> <p>The 2D cubic NLS, or almost every perturbation of the 1D case, e.g., $$iu_t +u_{xx} + |u|^2 u -\epsilon |u|^4u = 0 \, ,$$ is known to be <em>not</em> solvable using the inverse scattering transform, i.e., not integrable. I didn't find any reference that explains why, however.</p>
mo-user
127,891
<p>Another method for testing integrability is the Painleve test, see e.g. <a href="https://arxiv.org/pdf/solv-int/9804003.pdf" rel="nofollow noreferrer">these</a> lecture notes and references therein. It has some caveats: for example, certain changes of variables do not preserve the Painleve property. Yet another possibility for integrability testing is using higher symmetries, see e.g. <a href="https://mathoverflow.net/questions/302765/how-to-use-these-higher-symmetries-and-conservation-laws/309742#309742">here</a>. </p>
2,159,915
<p>Consider the following system of ODE:</p> <p>$$\begin{array}{ll}\ddot y + y + \ddot x + x = 0 \\ y+\dot x - x = 0 \end{array}$$</p> <p><strong>Question</strong>: How many initial conditions are required to determine a unique solution?</p> <p>A naive reasoning leads to four: $y(0),\dot y(0), x(0)$ and $\dot x(0)$. However, if we write the system in a first-order form:</p> <p>$$\begin{bmatrix} 1 &amp; 0 &amp; 0 &amp; 0 \\ 1 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 0 &amp; 0 \end{bmatrix} \begin{bmatrix} \dot x \\ \ddot x \\ \dot y \\ \ddot y\end{bmatrix} = \begin{bmatrix} 0 &amp; 1 &amp; 0 &amp; 0 \\ -1 &amp; 0 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 1\\ 1 &amp; 0 &amp; -1 &amp; 0 \end{bmatrix}\begin{bmatrix} x \\ \dot x \\ y \\ \dot y\end{bmatrix}$$</p> <p>the left matrix is not of full rank, which means the equations are not all independent. Indeed, by differentiating the second equation: $\dot y + \ddot x -\dot x=0$ which leads to $\ddot y + y + \dot x - \dot y + x =0$, or, in the first-order form: $$ \begin{bmatrix} 1 &amp; 0 &amp; 0 \\ 0 &amp; 1 &amp; 0 \\ 1 &amp; 0 &amp; 1 \end{bmatrix} \begin{bmatrix} \dot x \\ \dot y \\ \ddot y \end{bmatrix} = \begin{bmatrix} 1 &amp; -1 &amp; 0 \\ 0 &amp; 0 &amp; 1 \\ -1 &amp; -1 &amp; 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ \dot y \end{bmatrix}$$</p> <p>With the above form, it shows only three initial conditions are required: $x(0),y(0),\dot y(0)$.</p> <p>And, what if I had:</p> <p>$$\begin{array}{ll}\ddot y + y + x^{(n)} + x = 0 \\ \dot x - x = 0 \end{array}$$</p> <p>for some $n$. Then, I could solve the second equation for some $x(0)$, then differentiate $x$ $n$ times and inject in the first equation, so only $x(0), y(0), \dot y(0)$ are needed. Can this be seen directly, in a robust manner?</p>
Jean Marie
305,862
<p>This problem depends on 3 arbitrary constants. Here is why.</p> <p>Let $$\tag{1}\begin{cases}u&amp;=&amp;x+y\\v&amp;=&amp;x-y\end{cases}$$</p> <p>The given differential system, written under the form:</p> <p>$$\tag{2}\left\{\begin{array}{rclr}\ddot{(x+y)}&amp; = &amp; - (x+y) \ \ \ \ \ &amp; (a) \\ \dot x&amp; = &amp; x-y \ \ \ \ \ &amp; (b)\end{array}\right.$$</p> <p>is equivalent to:</p> <p>$$\tag{3}\begin{cases}\ddot u &amp;=&amp; -u \ \ \ \ \ &amp;(a)\\\dot u +\dot v&amp;=&amp;2v \ \ \ \ \ &amp;(b)\end{cases}$$</p> <p>(equation (3b) comes from the addition of the two equations of (1), $2x=u+v$, then the differentiation of this relationship, and, at last, the use of (2b)).</p> <p>The solution of equation (3a), the harmonic oscillator, depends on 2 arbitrary constants, for example under the form </p> <p>$$u=A \cos(t)+B\sin(t)$$</p> <p>This solution, plugged into equation (b) gives the first order linear differential equation:</p> <p>$$-\dot v + 2v=-A \sin(t)+B \cos(t)$$</p> <p>whose general solution depends on a supplementary constant $C$.</p> <p>Knowing $u$ and $v$, it is immediate to deduce $x$ and $y$.</p>
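To make the three-constant count concrete, here is a verification sketch I am adding (the particular-solution coefficients $a=-(2A+B)/5$, $b=(2B-A)/5$ are worked out from $-\dot v+2v=-A\sin t+B\cos t$, and the tolerances are my choice). It checks that, for arbitrary $A,B,C$, the reconstructed $x=(u+v)/2$, $y=(u-v)/2$ satisfy both original equations.

```python
import math

def make_solution(A, B, C):
    """Solution family of  x'' + x + y'' + y = 0,  y + x' - x = 0."""
    a = -(2 * A + B) / 5          # particular part of v: a sin t + b cos t
    b = (2 * B - A) / 5

    def u(t):   return A * math.cos(t) + B * math.sin(t)
    def du(t):  return -A * math.sin(t) + B * math.cos(t)
    def v(t):   return a * math.sin(t) + b * math.cos(t) + C * math.exp(2 * t)
    def dv(t):  return a * math.cos(t) - b * math.sin(t) + 2 * C * math.exp(2 * t)
    def ddv(t): return -a * math.sin(t) - b * math.cos(t) + 4 * C * math.exp(2 * t)

    x   = lambda t: (u(t) + v(t)) / 2
    y   = lambda t: (u(t) - v(t)) / 2
    dx  = lambda t: (du(t) + dv(t)) / 2
    ddx = lambda t: (-u(t) + ddv(t)) / 2   # uses u'' = -u
    ddy = lambda t: (-u(t) - ddv(t)) / 2
    return x, y, dx, ddx, ddy

x, y, dx, ddx, ddy = make_solution(A=1.0, B=2.0, C=0.5)
for t in [0.0, 0.3, 1.0, 2.5]:
    eq1 = ddx(t) + x(t) + ddy(t) + y(t)
    eq2 = y(t) + dx(t) - x(t)
    assert abs(eq1) < 1e-12 and abs(eq2) < 1e-12
print("both equations hold for arbitrary A, B, C")
```

Any choice of the three constants passes, which is exactly the statement that the problem depends on 3 arbitrary constants.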
2,901,734
<p>As the title says, find the minimum value of $(1+\frac{1}{x})(1+\frac{1}{y})$ given that $x+y=8$, using the AM–GM inequality (covering the Arithmetic Mean, Geometric Mean, and Harmonic Mean).</p>
Dr. Sonnhard Graubner
175,066
<p>Hint: Expanding your term we get $$1+\frac{x+y}{xy}+\frac{1}{xy}=1+\frac{9}{xy}$$ By AM-GM we get</p> <p>$$\frac{x+y}{2}\geq \sqrt{xy}$$ from here we get</p> <p>$$1+\frac{9}{xy}\geq \frac{9}{16}+1$$</p>
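Equality in AM–GM holds at $x=y=4$, so the minimum is $1+\frac{9}{16}=\frac{25}{16}$. A quick brute-force confirmation along the constraint (a sketch I am adding, not part of the hint):

```python
def g(x):
    y = 8 - x
    return (1 + 1 / x) * (1 + 1 / y)

# Sample x + y = 8 with x in (0, 8); the grid passes (essentially) through x = 4.
values = [g(0.01 + k * (7.98 / 10000)) for k in range(10001)]
m = min(values)
print(m)   # ≈ 25/16 = 1.5625, attained at x = y = 4
```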
182,346
<p>Let's call a polygon $P$ <em>shrinkable</em> if any down-scaled (dilated) version of $P$ can be translated into $P$. For example, the following triangle is shrinkable (the original polygon is green, the dilated polygon is blue):</p> <p><img src="https://i.stack.imgur.com/M0LOu.png" alt="enter image description here"></p> <p>But the following U-shape is not shrinkable (the blue polygon cannot be translated into the green one):</p> <p><img src="https://i.stack.imgur.com/S30bD.png" alt="enter image description here"></p> <p>Formally, a compact $\ P\subseteq \mathbb R^n\ $ is called <em>shrinkable</em> iff:</p> <p>$$\forall_{\mu\in [0;1)}\ \exists_{q\in \mathbb R^n}\quad \mu\!\cdot\! P\, +\, q\ \subseteq\ P$$</p> <p>What is the largest group of shrinkable polygons?</p> <p>Currently I have the following sufficient condition: if $P$ is <a href="https://en.wikipedia.org/wiki/Star-shaped_polygon" rel="nofollow noreferrer">star-shaped</a> then it is shrinkable. </p> <p><em>Proof</em>: By definition of a star-shaped polygon, there exists a point $A\in P$ such that for every $B\in P$, the segment $AB$ is entirely contained in $P$. Now, for all $\mu\in [0;1)$, let $\ q := (1-\mu)\cdot A$. This effectively translates the dilated $P'$ such that $A'$ coincides with $A$. Now every point $B'\in P'$ is on a segment between $A$ and $B$, and hence contained in $P$.</p> <p><img src="https://i.stack.imgur.com/sdPRw.png" alt="enter image description here"></p> <p>My questions are:</p> <p>A. Is the condition of being star-shaped also necessary for shrinkability?</p> <p>B. Alternatively, what other condition on $P$ is necessary?</p>
Gabriel C. Drummond-Cole
3,075
<p>Any <s>simply connected</s> polygon must be star-shaped to be shrinkable. I have made minor edits below to treat the more general case.</p> <p>Let $D$ be a polygon with convex hull $H$. Assume we are given a non-trivial shrinking of $D$; view this as a map from $H$ to itself. This map must have a fixed point $x$, either by algebraic topology or an iterative construction. </p> <p>This means it suffices to consider only dilations centered at a point $x$ in $H$, rather than dilations followed by translations.</p> <p>For any $x$, if there is a point $y$ in $D$ so that the segment from $x$ to $y$ is not contained in $D$, then a $(1-\epsilon)$-dilation of $H$ centered at $x$ will not carry $D$ into $D$ for any positive $\epsilon$ smaller than some $\epsilon(x)&gt;0$. If $D$ is not star-shaped, take the minimum $\delta$ of $\epsilon(x)$ over $x\in H$, and then no $(1-\delta)$-dilation of $H$ centered at a point in $H$ carries $D$ into $D$.</p>
3,363,875
<p>When I read a book, it says this is clear:</p> <p>let <span class="math-container">$n$</span> be a positive integer; then we have <span class="math-container">$$(-1)^n(n+1)\equiv n+1\pmod 4$$</span> Why does this not seem right to me?</p>
Tsemo Aristide
280,301
<p>If <span class="math-container">$n$</span> is even <span class="math-container">$(-1)^n=1$</span> and <span class="math-container">$(-1)^n(n+1)=(n+1)$</span></p> <p>if <span class="math-container">$n=2p+1$</span>, <span class="math-container">$(-1)^n(n+1)=-(2p+2)$</span>.</p> <p><span class="math-container">$(-1)^n(n+1)-(n+1)=-(2p+2)-(2p+2)=-4p-4$</span>.</p>
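A throwaway check of the congruence for small $n$ (added for illustration; the case split above explains why the odd case gives a multiple of $4$):

```python
# For odd n the difference is -2(n+1), and n+1 is then even, so 4 divides it.
for n in range(1, 1001):
    assert ((-1) ** n * (n + 1) - (n + 1)) % 4 == 0
print("(-1)^n (n+1) == n+1 (mod 4) for n = 1..1000")
```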
1,515,776
<p>How can I solve something like this?</p> <p>$$3^x+4^x=7^x$$</p> <p>I know that $x=1$, but I don't know how to find it. Thank you!</p>
Paolo Leonetti
45,736
<p>Defining $f(x):=\mathrm{log}_7\left(3^x+4^x\right)$, you want to search for fixed points of $f$. But $$ f^\prime(x)=\frac{1}{\ln 7}\cdot \frac{3^x\ln 3+4^x \ln 4}{3^x+4^x}&gt;0 $$ and $$ f^{\prime\prime}(x)=\frac{1}{\ln 7}\cdot \frac{3^x4^x}{(3^x+4^x)^2} \cdot ((\ln 4)^2+(\ln 3)^2-2\ln 3 \ln 4) $$ which is positive too by arithmetic-geometric mean inequality.</p>
1,515,776
<p>How can I solve something like this?</p> <p>$$3^x+4^x=7^x$$</p> <p>I know that $x=1$, but I don't know how to find it. Thank you!</p>
juantheron
14,311
<p>Here $\displaystyle 3^x+4^x = 7^x\Rightarrow \bf{\underbrace{\left(\frac{3}{6}\right)^x+\left(\frac{4}{6}\right)^x}_{Strictly\ decreasing\; function}} = \underbrace{\left(\frac{7}{6}\right)^x}_{Strictly\; increasing\; function}$</p> <p>So these two curves intersect each other in exactly one point.</p> <p>Since $x=1$ clearly satisfies the equation, it is the only solution.</p>
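The intersection argument is easy to illustrate numerically (a sketch I am adding): $h(x)=(3/6)^x+(4/6)^x-(7/6)^x$ is strictly decreasing (a decreasing function minus an increasing one), so bisection locates its unique zero, which is $x=1$.

```python
def h(x):
    # Strictly decreasing: decreasing LHS minus increasing RHS.
    return (3 / 6) ** x + (4 / 6) ** x - (7 / 6) ** x

lo, hi = 0.0, 2.0            # h(0) = 1 > 0 and h(2) < 0
for _ in range(60):
    mid = (lo + hi) / 2
    if h(mid) > 0:
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2)         # ≈ 1.0, the unique solution
```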
3,944,628
<p>I'm reading a book and, in its section on the definition of a stopping time (continuous), the author declares at the start that for the whole section every filtration will be complete and right-continuous.</p> <p>So, in the definition of a stopping time, how important are these conditions? Why would they matter?</p>
lab bhattacharjee
33,337
<p>Where you have stopped, let <span class="math-container">$$z=-2\sin^2x+1+2\sin x\cos y$$</span></p> <p><span class="math-container">$$\iff2\sin^2x-2\sin x\cos y+z-1=0$$</span></p> <p>As <span class="math-container">$\sin x$</span> is real, the discriminant must be <span class="math-container">$\ge0$</span></p> <p><span class="math-container">$\implies8(z-1)\le(-2\cos y)^2\le2^2$</span></p> <p><span class="math-container">$\implies8z\le4+8=12$</span>, i.e. <span class="math-container">$z\le\dfrac32$</span></p> <p>The equality occurs if <span class="math-container">$\cos^2y=1\iff\sin y=0$</span></p> <p>and consequently <span class="math-container">$\sin x=\dfrac{\cos y}2=\pm\dfrac12$</span></p>
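So the maximum value of $z=-2\sin^2x+1+2\sin x\cos y$ is $\frac{12}{8}=\frac32$. A grid-search confirmation (my own sketch; the grid sizes are arbitrary):

```python
import math

best = -10.0
n = 800
for i in range(n + 1):
    x = -math.pi + 2 * math.pi * i / n
    sx = math.sin(x)
    for j in range(0, n + 1, 4):          # a coarser grid in y is enough
        y = -math.pi + 2 * math.pi * j / n
        best = max(best, -2 * sx * sx + 1 + 2 * sx * math.cos(y))
print(best)   # ≈ 3/2
```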
4,090,408
<p>Show that <span class="math-container">$A$</span> is a whole number: <span class="math-container">$$A=\sqrt{\left|40\sqrt2-57\right|}-\sqrt{\left|40\sqrt2+57\right|}.$$</span> I don't know if this is necessary, but we can compare <span class="math-container">$40\sqrt{2}$</span> and <span class="math-container">$57$</span>: <span class="math-container">$$40\sqrt{2}\Diamond57,\\1600\times2\Diamond 3249,\\3200\Diamond3249,\\3200&lt;3249\Rightarrow 40\sqrt{2}&lt;57.$$</span> Is this actually needed for the solution? So <span class="math-container">$$A=\sqrt{57-40\sqrt2}-\sqrt{40\sqrt2+57}.$$</span> What should I do next?</p>
egreg
62,967
<p>The check you're doing is indeed necessary in order to remove the absolute value.</p> <p>However, when you arrive at <span class="math-container">$3200\mathrel{\Diamond}3249$</span> you can realize that the difference is a square, which is precisely the condition under which a radical of the form <span class="math-container">$$ \sqrt{a\pm\sqrt{b}} $$</span> can be “denested”. Let's see why. Suppose <span class="math-container">$\sqrt{a+\sqrt{b}}=\sqrt{x}+\sqrt{y}$</span>; after squaring we get <span class="math-container">$$ a+\sqrt{b}=x+y+\sqrt{4xy} $$</span> and if we equate the two parts we get <span class="math-container">$$ \begin{cases} x+y=a \\[6px] 4xy=b \end{cases} $$</span> Hence <span class="math-container">$(x-y)^2=(x+y)^2-4xy=a^2-b$</span>. So, in order to find the integers <span class="math-container">$x,y$</span>, we need that <span class="math-container">$a^2-b$</span> is a square. Once we have it, we can determine <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p> <p>Note that the same holds for <span class="math-container">$\sqrt{a-\sqrt{b}}=\sqrt{x}-\sqrt{y}$</span>.</p> <p>In your case <span class="math-container">$a=57,b=3200$</span> and <span class="math-container">$a^2-b=49=7^2$</span>, so you get <span class="math-container">$$ x+y=57,\quad x-y=7 $$</span> and therefore <span class="math-container">$x=32,y=25$</span>, from which <span class="math-container">$$ \sqrt{57\pm40\sqrt{2}}=\sqrt{32}\pm5 $$</span> and you can finish.</p>
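A short numerical confirmation of the denesting, and of the final value of $A$ (added; not part of the original answer):

```python
import math

r2 = math.sqrt(2)
left  = math.sqrt(57 - 40 * r2)   # should equal sqrt(32) - 5
right = math.sqrt(57 + 40 * r2)   # should equal sqrt(32) + 5

assert abs(left - (math.sqrt(32) - 5)) < 1e-12
assert abs(right - (math.sqrt(32) + 5)) < 1e-12
print(left - right)   # ≈ -10, so A is indeed a whole number
```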
85,126
<blockquote> <p>Show that any sequence of positive numbers $(a_n)$ satisfying $$0&lt; \frac{a_{n+1}}{a_n} \leq 1+ \frac{1}{n^2}$$ must converge.</p> </blockquote> <p>I have tried taking the limit of the inequality, which yields that $0 \leq \lim \frac{a_{n+1}}{a_n} \leq 1$. If $\lim \frac{a_{n+1}}{a_n} \lt 1$, then by the ratio test $\sum a_n$ converges, thus $a_n \to 0$. I am in particular trying to show that if $\frac{a_{n+1}} {a_n} \to 1$, then $(a_n)$ is Cauchy and thus convergent, which is where I am having some trouble.</p>
Gerry Myerson
8,269
<p>I agree with the suggestions in the comments that for most values of $a,b$ there will be no algebraic solution. As for a numerical algorithm, are you familiar with <a href="http://en.wikipedia.org/wiki/Newton%27s_method" rel="nofollow">Newton's Method</a> (often called Newton-Raphson)? </p>
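For reference, here is a generic Newton–Raphson sketch (added; since the equation involving $a,b$ from the question is not reproduced here, the target function below is only an illustration):

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton-Raphson: iterate x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("Newton's method did not converge")

# Illustrative target only: solve x**3 - 2 = 0.
root = newton(lambda x: x ** 3 - 2, lambda x: 3 * x ** 2, x0=1.0)
print(root)   # ≈ 2**(1/3) ≈ 1.2599210498948732
```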
716,859
<p>Define the mean of order $p$ of $a$ and $b$ as $s_p(a,b)$ $=$ $({a^p + b^p\over 2})^{1/p}$.</p> <p>I have to find the limit of the sequence $s_n(a,b)$. I already know this sequence is bounded above by $b$ (from a previous question) and if I assume the limit exists I can show it is $b$. What I cannot show is that the sequence is increasing. Could someone assist me or show me how to prove this? </p>
Martín-Blas Pérez Pinilla
98,199
<p>Use the squeeze theorem. Supposing $b&gt;a\ge 0$: $$\frac b{2^{1/p}}=\left(\frac{b^p}2\right)^{1/p}\le\left(\frac{a^p + b^p}2\right)^{1/p} \le\left(\frac{b^p + b^p}2\right)^{1/p}=b $$ and take $\lim_{p\to\infty}$.</p>
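Numerically the squeeze is visible; for example with $a=1$, $b=2$ the values of $s_p$ approach $b=2$ from below (an added sketch, which also happens to show the increase asked about in the question):

```python
def s(p, a, b):
    return ((a ** p + b ** p) / 2) ** (1 / p)

for p in [1, 10, 100, 1000]:
    print(p, s(p, 1.0, 2.0))
# 1    1.5
# 10   1.866...
# 100  1.986...
# 1000 1.998...   -> approaching b = 2 from below
```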
165,900
<p>Let $R=k[u,v,w]$ and $p\in R$ be a cubic form. Let $G$ be the group of graded automorphisms of $R$ which preserve $p$, i.e., $G$ is the subgroup of $GL_3(k)$ consisting of elements $g$ such that $g(p) \in k p$. My question: is $G$ some well-known algebraic group? </p>
Alexander Premet
24,386
<p>I think so. This can be deduced from the classification of cubic forms in 3 variables; see Tadayuki Abiko, Classification of cubic forms with three variables, Hokkaido Math. J. Vol. 10, 1981, 239-248. Google is the quickest way to find this paper, which is freely available via Project Euclid.</p>
3,722,407
<p>I am struggling with this problem:</p> <blockquote> <p>Let <span class="math-container">$n$</span> be an even number, and denote <span class="math-container">$[n]=\{1,2,...,n\}$</span>. A sequence of sets <span class="math-container">$S_1 , S_2 , \cdots , S_m \subseteq [n]$</span> is considered <em>graceful</em> if:</p> <ol> <li><span class="math-container">$m$</span> is odd.</li> <li><span class="math-container">$S_1 \subset S_2 \subset \cdots \subset S_m \subseteq [n]$</span></li> <li><span class="math-container">$\forall i \in \{1,...,m-1\}: \; |S_{i+1}|=|S_i|+1$</span></li> <li><span class="math-container">$|S_m|+|S_1|=n$</span></li> </ol> <p>Show that it is possible to <em>decompose</em> the <span class="math-container">$2^n$</span> subsets of <span class="math-container">$[n]$</span> using <span class="math-container">$\binom{n}{n/2}$</span> graceful chains. Different chains may be of different lengths. Every subset of <span class="math-container">$[n]$</span> must appear in one, and only one, chain.</p> </blockquote> <p>I have figured out that <span class="math-container">$$|S_1|=\frac{n-m+1}{2}, \; |S_i|=\frac{n-m+2i-1}{2}$$</span> for any valid choice of <span class="math-container">$m$</span>. In addition, it suggests <span class="math-container">$m\in\{1,3,...,n+1\}$</span>. It is also clear that one of the chains must be <span class="math-container">$$\emptyset\subset\{1\}\subset\{1,2\}\subset\cdots\subset\{1,...,n\}$$</span> where all the inner sets in this chain may be chosen arbitrarily.</p>
Community
-1
<p><strong>If this can help:</strong></p> <p>A first approximation is <span class="math-container">$x_n=\dfrac1n$</span>. A better approximation can be found in the form <span class="math-container">$\dfrac{1+t}n$</span>. We write</p> <p><span class="math-container">$$\left(\frac{1+t}n\right)^n-n\left(\frac{1+t}n\right)+1=0,$$</span></p> <p><span class="math-container">$$\left(\frac{1+t}n\right)^n\approx\frac{1+nt}{n^n}=t,$$</span></p> <p><span class="math-container">$$t\approx \frac1{n^n-n},$$</span></p> <p>and</p> <p><span class="math-container">$$x_n\approx\frac{1+\dfrac1{n^n-n}}n.$$</span></p> <p>E.g., <span class="math-container">$x_5\approx0.2000641025641$</span> and <span class="math-container">$x_5^5-5x_5+1\approx3.28\cdot10^{-10}$</span></p> <p>That process can be continued.</p> <hr /> <p>Another method is to write the Newton's iterates, starting from the initial approximation. Accuracy is very high, but the expressions quickly grow.</p>
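Reproducing the quoted $n=5$ figures (an added check; here the polynomial is $P_5(x)=x^5-5x+1$, as in the numerics above):

```python
n = 5
x5 = (1 + 1 / (n ** n - n)) / n
print(x5)                      # 0.2000641025641...
print(x5 ** 5 - 5 * x5 + 1)    # ≈ 3.28e-10
```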
3,722,407
<p>I am struggling with this problem:</p> <blockquote> <p>Let <span class="math-container">$n$</span> be an even number, and denote <span class="math-container">$[n]=\{1,2,...,n\}$</span>. A sequence of sets <span class="math-container">$S_1 , S_2 , \cdots , S_m \subseteq [n]$</span> is considered <em>graceful</em> if:</p> <ol> <li><span class="math-container">$m$</span> is odd.</li> <li><span class="math-container">$S_1 \subset S_2 \subset \cdots \subset S_m \subseteq [n]$</span></li> <li><span class="math-container">$\forall i \in \{1,...,m-1\}: \; |S_{i+1}|=|S_i|+1$</span></li> <li><span class="math-container">$|S_m|+|S_1|=n$</span></li> </ol> <p>Show that it is possible to <em>decompose</em> the <span class="math-container">$2^n$</span> subsets of <span class="math-container">$[n]$</span> using <span class="math-container">$\binom{n}{n/2}$</span> graceful chains. Different chains may be of different lengths. Every subset of <span class="math-container">$[n]$</span> must appear in one, and only one, chain.</p> </blockquote> <p>I have figured out that <span class="math-container">$$|S_1|=\frac{n-m+1}{2}, \; |S_i|=\frac{n-m+2i-1}{2}$$</span> for any valid choice of <span class="math-container">$m$</span>. In addition, it suggests <span class="math-container">$m\in\{1,3,...,n+1\}$</span>. It is also clear that one of the chains must be <span class="math-container">$$\emptyset\subset\{1\}\subset\{1,2\}\subset\cdots\subset\{1,...,n\}$$</span> where all the inner sets in this chain may be chosen arbitrarily.</p>
N. S.
9,176
<p>Hopefully I didn't make any mistakes:</p> <p>Note that</p> <p><span class="math-container">$$P_n(\frac{1}{n}+\frac{1}{n^{n+1}})= (\frac{1}{n}+\frac{1}{n^{n+1}})^n - \frac{1}{n^n}&gt;0$$</span></p> <p>Now, let <span class="math-container">$\alpha &gt;1$</span>. We have <span class="math-container">$$P_n(\frac{1}{n}+\frac{\alpha}{n^{n+1}}) &lt;0 \Leftrightarrow \\ \frac{1}{n}+\frac{\alpha}{n^{n+1}} &lt; \frac{\sqrt[n]{\alpha}}{n} \Leftrightarrow \\ n(\sqrt[n]{\alpha} -1) &gt; \frac{\alpha}{n^{n-1}}$$</span></p> <p>Now, <span class="math-container">$$\lim_n n(\sqrt[n]{\alpha} -1) =\lim_n \frac{\alpha^\frac{1}{n}-\alpha^0}{\frac{1}{n}-0}=\ln(\alpha) &gt;0$$</span> and <span class="math-container">$$\lim_n \frac{\alpha}{n^{n-1}}=0$$</span></p> <p>This shows that for all <span class="math-container">$\alpha &gt;1$</span> there exists some <span class="math-container">$N$</span> so that, for all <span class="math-container">$n&gt;N$</span>, we have <span class="math-container">$$P_n(\frac{1}{n}+\frac{\alpha}{n^{n+1}}) &lt;0 $$</span></p> <p>It follows that <strong>asymptotically</strong>, we have <span class="math-container">$$\frac{1}{n}+\frac{1}{n^{n+1}} &lt; u_n &lt; \frac{1}{n}+\frac{\alpha}{n^{n+1}} \qquad \forall \alpha &gt;1$$</span></p> <p>[i.e. For each <span class="math-container">$\alpha &gt;1$</span>, there exists an <span class="math-container">$N$</span> such that the above holds for all <span class="math-container">$n&gt;N$</span>]</p>
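The sandwich can be tested directly by bisecting for the small root of $P_n(x)=x^n-nx+1$ (the polynomial implied by the displayed computations). This is an added verification sketch; $\alpha=1.5$ and the range of $n$ are my own choices, kept small so that the gaps stay above floating-point resolution.

```python
def P(n, x):
    return x ** n - n * x + 1

def small_root(n, iters=200):
    """Bisect for the root of P_n near 1/n (P_n is decreasing on [0, 0.5])."""
    lo, hi = 0.0, 0.5          # P(n, 0) = 1 > 0 > P(n, 0.5) for n >= 3
    for _ in range(iters):
        mid = (lo + hi) / 2
        if P(n, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha = 1.5
for n in range(4, 9):          # kept small: the lower gap shrinks like n^(-2n)
    u = small_root(n)
    assert 1 / n + 1 / n ** (n + 1) < u < 1 / n + alpha / n ** (n + 1)
print("sandwich verified for n = 4..8 with alpha = 1.5")
```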
79,726
<p>Let $R$ be a commutative ring with unity. Let $M$ be a free (unital) $R$-module.</p> <p>Define a <em>basis</em> of $M$ as a generating, linearly independent set.</p> <p>Define the <em>rank</em> of $M$ as the cardinality of a basis of $M$ (as we know commutative rings have IBN, so this is well defined).</p> <p>A <em>minimal generating set</em> is a generating set with cardinality $\inf\{\#S:S\subset M, M=\langle S \rangle\}$.</p> <p>Must a minimal generating set have cardinality the rank of $M$?</p>
Pierre-Yves Gaillard
660
<p>The statement in the Edit of Georges's answer also holds in the non-commutative case:</p> <blockquote> <p>Let $R$ be an associative ring with $1$. If $M$ is an $R$-module, if $B$ is an infinite basis of $M$, and if $S\subset M$ is a generating subset, then we have $$|S|\ge|B|,$$ where, for any set $X$, the symbol $|X|$ denotes the cardinality of $X$. </p> <p>In particular, if $C$ is another basis of $M$, then $|C|=|B|$. </p> </blockquote> <p>Proof: For any $x\in M$ let $B_x$ be the finite set of those $b$ in $B$ such that the $b$-component of $x$ is nonzero. The fact that $S$ generates $M$ implies $$ B=\bigcup_{s\in S}\ B_s, $$ and thus $$ |B|=\left|\ \bigcup_{s\in S}\ B_s\ \right|\ \le \left|\ \coprod_{s\in S}\ B_s\ \right|= \sum_{s\in S}\ \left|B_s\right|=|S|. $$ </p>
152,405
<p>This question complements a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p> <p>I am looking for a list of</p> <h3>Major theorems in mathematics whose proofs are very hard but were not dramatically improved over the years.</h3> <p>(So a new dramatically simpler proof may represent a much hoped-for breakthrough.) Cases where the original proof was very hard, dramatic improvements were found, but the proof remained very hard may also be included.</p> <p>To limit the scope of the question:</p> <p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment, but for an answer please wait until 2020.)</p> <p>b) Major results only.</p> <p>c) Results with very hard proofs.</p> <p>As usual, one example (or a few related ones) per post.</p> <p>A similar question was asked before: <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to theorems roughly 100 years old.)</p> <h2>Answers</h2> <p>(Updated Oct 3 '15)</p> <p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincare, 19th century)</p> <p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p> <p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p> <p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (Kolmogorov 1954; Moser 1962; Arnold 1963)</p> <p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model. 
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p> <p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p> <p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p> <p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p> <p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p> <p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p> <p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p> <p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p> <p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p> <p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p> <p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p> <p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in <span class="math-container">$\mathbb R^3$</span>, a.k.a. 
the Kepler Conjecture</a> (1999, Hales)</p> <p><strong>For the following answers some issues were raised in the comments.</strong></p> <p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula (general case)</a> (Reference to a 1983 proof by Hejhal)</p> <p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p> <p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p> <p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p> <p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and the polymath1 proof for density Hales-Jewett.)</p> <p><strong>Additional answer:</strong></p> <p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for the 4CT and FT theorems</a>.</p>
Victor Protsak
5,740
<p><a href="https://en.wikipedia.org/wiki/Feit%E2%80%93Thompson_theorem">Feit-Thompson theorem</a></p> <hr> <p><strong>Edit</strong> (GK): This would also be my first answer; let me add a few details. The <strong>Feit-Thompson theorem</strong> asserts that every finite group of odd order is solvable. An equivalent formulation is that every finite simple nonabelian group is of even order. The theorem was proved by Feit and Thompson in 1963. It was conjectured by Burnside by 1911. The theorem plays a crucial role in the classification of finite simple groups. Some parts of the proof were simplified over the years but it remained very hard. </p>
152,405
<p>This question complements a previous MO question: <a href="https://mathoverflow.net/questions/95837/examples-of-theorems-with-proofs-that-have-dramatically-improved-over-time">Examples of theorems with proofs that have dramatically improved over time</a>.</p> <p>I am looking for a list of</p> <h3>Major theorems in mathematics whose proofs are very hard but were not dramatically improved over the years.</h3> <p>(So a new dramatically simpler proof may represent a much hoped-for breakthrough.) Cases where the original proof was very hard, dramatic improvements were found, but the proof remained very hard may also be included.</p> <p>To limit the scope of the question:</p> <p>a) Let us consider only theorems proved at least 25 years ago. (So if you have a good example from 1995 you may add a comment, but for an answer please wait until 2020.)</p> <p>b) Major results only.</p> <p>c) Results with very hard proofs.</p> <p>As usual, one example (or a few related ones) per post.</p> <p>A similar question was asked before: <a href="https://mathoverflow.net/questions/44593/still-difficult-after-all-these-years">Still Difficult After All These Years</a>. (That question referred to theorems roughly 100 years old.)</p> <h2>Answers</h2> <p>(Updated Oct 3 '15)</p> <p><strong>1)</strong> <a href="https://mathoverflow.net/a/152457">Uniformization theorem for Riemann surfaces</a> (Koebe and Poincare, 19th century)</p> <p><strong>2)</strong> <a href="https://mathoverflow.net/a/152546">Thue-Siegel-Roth theorem</a> (Thue 1909; Siegel 1921; Roth 1955)</p> <p><strong>3)</strong> <a href="https://mathoverflow.net/a/152412">Feit-Thompson theorem</a> (1963);</p> <p><strong>4)</strong> <a href="https://mathoverflow.net/a/152419">Kolmogorov-Arnold-Moser (or KAM) theorem</a> (Kolmogorov 1954; Moser 1962; Arnold 1963)</p> <p><strong>5)</strong> <a href="https://mathoverflow.net/a/218043/1532">The construction of the <span class="math-container">$\Phi^4_3$</span> quantum field theory model. 
This was done in the early seventies by Glimm, Jaffe, Feldman, Osterwalder, Magnen and Seneor.</a> (NEW)</p> <p><strong>6)</strong> <a href="https://mathoverflow.net/a/152752">Weil conjectures</a> (1974, Deligne)</p> <p><strong>7)</strong> <a href="https://mathoverflow.net/a/152418">The four color theorem</a> (1976, Appel and Haken);</p> <p><strong>8)</strong> <a href="https://mathoverflow.net/a/152613">The decomposition theorem for intersection homology</a> (1982, Beilinson-Bernstein-Deligne-Gabber); (<strong>Update:</strong> A new simpler proof by de Cataldo and Migliorini is now available)</p> <p><strong>9)</strong> <a href="https://mathoverflow.net/a/218061/1532">Poincare conjecture for dimension four</a>, 1982 Freedman (NEW)</p> <p><strong>10)</strong> <a href="https://mathoverflow.net/a/152416">The Smale conjecture</a> 1983, Hatcher;</p> <p><strong>11)</strong> <a href="https://mathoverflow.net/a/152741">The classification of finite simple groups</a> (1983, with some completions later)</p> <p><strong>12)</strong> <a href="https://mathoverflow.net/a/152537">The graph-minor theorem</a> (1984, Robertson and Seymour)</p> <p><strong>13)</strong> <a href="https://mathoverflow.net/a/152424">Gross-Zagier formula</a> (1986)</p> <p><strong>14)</strong> <a href="https://mathoverflow.net/a/218066/1532">Restricted Burnside conjecture</a>, Zelmanov, 1990. (NEW)</p> <p><strong>15)</strong> <a href="https://mathoverflow.net/a/219182/1532">The Benedicks-Carleson theorem</a> (1991)</p> <p><strong>16)</strong> <a href="https://mathoverflow.net/a/152454">Sphere packing problem in R 3 , a.k.a. 
the Kepler Conjecture</a>(1999, Hales)</p> <p><strong>For the following answers some issues were raised in the comments.</strong></p> <p><a href="https://mathoverflow.net/a/152598">The Selberg Trace Formula- general case</a> (Reference to a 1983 proof by Hejhal)</p> <p><a href="https://mathoverflow.net/a/152423">Oppenheim conjecture</a> (1986, Margulis)</p> <p><a href="https://mathoverflow.net/a/152459">Quillen equivalence</a> (late 60s)</p> <p><a href="https://mathoverflow.net/a/152434">Carleson's theorem</a> (1966) (Major simplification: 2000 proof by Lacey and Thiele.)</p> <p><a href="https://mathoverflow.net/a/152432">Szemerédi’s theorem</a> (1974) (Major simplifications: ergodic theoretic proofs; modern proofs based on hypergraph regularity, and polymath1 proof for density Hales Jewett.)</p> <p><strong>Additional answer:</strong></p> <p><a href="https://mathoverflow.net/a/152658">Answer about fully formalized proofs for 4CT and FT theorem</a>.</p>
Allen Knutson
391
<p><a href="http://terrytao.wordpress.com/2012/03/23/some-ingredients-in-szemeredis-proof-of-szemeredis-theorem/" rel="nofollow noreferrer">Szemerédi’s theorem</a>, that inside a positive-density set of naturals there are arbitrarily long arithmetic progressions. To quote Terry Tao, "...the pieces of Szemerédi’s proof are highly interlocking, particularly with regard to all the epsilon-type parameters involved; it takes quite a bit of notational setup and foundational lemmas before the key steps of the proof can even be stated, let alone proved... Many years ago I tried to present the proof, but I was unable to find much of a simplification, and my exposition is probably not that much clearer than the original text."</p>
152,405
Stanley Yao Xiao
10,898
<p>The proof of the <a href="https://en.wikipedia.org/wiki/Thue%E2%80%93Siegel%E2%80%93Roth_theorem" rel="nofollow">Thue-Siegel-Roth theorem</a> is still very difficult, as no substantial improvement to Roth's original argument is known.</p> <p>The Thue-Siegel-Roth Theorem states that for any non-rational algebraic number $\alpha$ and $\epsilon &gt; 0$, there exists a small constant $c &gt; 0$ which depends on $\alpha$ and $\epsilon$ such that $$\displaystyle \left \lvert \alpha - \frac{p}{q} \right \rvert &gt; \frac{c}{q^{2 + \epsilon}}$$ for every rational number $p/q$.</p>
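This is of course not how the theorem is proved, but the shape of the bound can be illustrated numerically for the quadratic irrational $\sqrt 2$ (where the exponent $2$ is already essentially sharp). A small Python sketch of my own, using the convergents of $\sqrt 2$:

```python
import math

# Convergents p/q of sqrt(2) satisfy p^2 - 2*q^2 = +/-1 (Pell's equation), so
# |sqrt(2) - p/q| = 1 / (q^2 * (sqrt(2) + p/q)): even the best rational
# approximations miss by about 0.3536/q^2, consistent with Roth's bound.
p, q = 1, 1
for _ in range(10):
    assert abs(p * p - 2 * q * q) == 1
    err = abs(math.sqrt(2) - p / q)
    print(q, q * q * err)   # second column is bounded below, tending to 1/(2*sqrt(2))
    p, q = p + 2 * q, p + q
```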
152,405
Gil Kalai
1,532
<p><strong>The decomposition theorem for intersection homology</strong></p> <p>The decomposition theorem for (middle perversity) intersection homology (for algebraic varieties) was proved in 1982 by Beilinson-Bernstein-Deligne-Gabber. I don't understand it well enough to describe it (but please replace this sentence with a description if you wish). I am aware of many important applications, also to the combinatorial theory of convex polytopes. Some applications are described in this MO question <a href="https://mathoverflow.net/questions/1959/examples-for-decomposition-theorem/5737#5737">Examples for Decomposition Theorem</a> with a link to the review paper: "<a href="http://arxiv.org/abs/0712.0349" rel="nofollow noreferrer">The Decomposition Theorem and the topology of algebraic maps</a>" by de Cataldo and Migliorini. To the best of my knowledge there is no dramatically simpler proof (but there is another very hard proof by Saito). [Caveat: I am not an expert.] </p> <p><strong>Update (February 2, 2016):</strong> Here is, however, an abstract from the coming March Bourbaki seminar: </p> <blockquote> <p>Geordie WILLIAMSON, The Hodge theory of the Decomposition Theorem, after M. A. de Cataldo and L. Migliorini</p> <p>In its simplest form the Decomposition Theorem asserts that the rational intersection cohomology of a complex projective variety occurs as a summand of the cohomology of any resolution. This deep theorem has found important applications in algebraic geometry, representation theory, number theory and combinatorics. It was originally proved in 1981 by Beilinson, Bernstein, Deligne and Gabber as a consequence of Deligne’s proof of the Weil conjectures. A different proof was given by Saito in 1988, as a consequence of his theory of mixed Hodge modules. More recently, de Cataldo and Migliorini found a much more elementary proof which uses only classical Hodge theory and the theory of perverse sheaves. We present the theorem and outline the main ideas involved in the new proof.</p> </blockquote> <p><strong>More details</strong> (June 2016): The paper with the new proof is <a href="http://arxiv.org/abs/0712.0349" rel="nofollow noreferrer">The decomposition theorem, perverse sheaves and the topology of algebraic maps</a>, by M. de Cataldo and L. Migliorini, Bulletin of the American Mathematical Society 46 (4), 535-633. Williamson’s Bourbaki paper is <a href="http://arxiv.org/abs/1603.09235" rel="nofollow noreferrer">The Hodge theory of the Decomposition Theorem (after de Cataldo and Migliorini)</a>.</p>
152,405
Dylan Thurston
5,010
<p>I'm surprised no one has mentioned <a href="https://en.wikipedia.org/wiki/Michael_Freedman">Freedman's</a> theorem from 1982 yet. Technically this theorem says that <a href="https://en.wikipedia.org/wiki/Casson_handle">Casson handles</a> are standard, but more broadly this completes the classification of (simply connected) topological 4-manifolds, and it is one of the two major theorems going into the proof of the existence of exotic 4-manifolds.</p> <p>(The other half is <a href="https://en.wikipedia.org/wiki/Simon_Donaldson">Donaldson's</a> work on gauge theories. That half has been somewhat simplified with the introduction of Seiberg-Witten theory.)</p> <p>I have never attempted to read his proof, but it at least has a reputation for being hard.</p>
2,115,199
<p>Let $A \in \text{End}(V)$ be an endomorphism of a finite-dimensional vector space $V$, and let $\mathbb Q[A]$ be the subalgebra of $\text{End}(V)$ generated by $A$.</p> <p>Is $\mathbb Q[A]$ always at most $\dim V$-dimensional? How can one prove it?</p>
Adam Hughes
58,831
<p>Recall that the minimal polynomial $m_A(x)$ divides the characteristic polynomial $p_A(x)$, which has degree $\dim V$, and that</p> <blockquote> <p>$$\Bbb Q[A]\cong\Bbb Q[x]/(m_A(x))$$</p> </blockquote> <p>and this is an algebra of dimension $\deg m_A(x)\le \deg p_A(x)=\dim V$.</p>
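A numerical sanity check of the bound (my own sketch, assuming numpy; floating-point matrix rank stands in for the dimension of the span of $I, A, A^2, \dots$):

```python
import numpy as np

def algebra_dim(A, tol=1e-9):
    """Dimension of span{I, A, A^2, ..., A^n} inside End(V), via matrix rank."""
    n = A.shape[0]
    powers = [np.eye(n)]
    for _ in range(n):                # by Cayley-Hamilton, higher powers add nothing
        powers.append(powers[-1] @ A)
    M = np.stack([P.ravel() for P in powers])
    return np.linalg.matrix_rank(M, tol=tol)

A = np.array([[2.0, 1.0],
              [0.0, 2.0]])           # minimal polynomial (x-2)^2, degree 2
assert algebra_dim(A) == 2           # here deg m_A = dim V
assert algebra_dim(np.eye(3)) == 1   # minimal polynomial x-1, degree 1
```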
2,292,656
<p>Let $L/K$ be a degree $n$ extension of fields, where $K$ has discrete valuation $v$, which can be prolonged to the discrete valuations $w_i$ on $L$. We can therefore define the completion of $K$ w.r.t. $v$ to be $\hat K$, and the completion of $L$ w.r.t. $w_i$ to be $\hat L_i$, then in Theorem II.3.1 of Serre's <em>Local Fields</em>, we have a homomorphism $$\varphi:L\otimes_K\hat K\to\prod_i\hat L_i$$which we then show to be an isomorphism. However, I don't see how this morphism is defined in the first place.</p>
Dominic Wynter
275,050
<p>In order to define the canonical morphism $\varphi:L\otimes_K\hat K\to\prod_i\hat L_i$, we will make use of the completion functor $\mathrm{comp}:\mathbf{DVF}\to\mathbf{DVF}$ acting on the category of discretely valued fields, with continuous field embeddings (that is to say, field extensions that prolong the discrete valuation) as morphisms. Then since we have a natural transformation $\mathrm{id}_{\mathbf{DVF}}\to\mathrm{comp}$ given by inclusion in the completion, we obtain the following commutative diagram:</p> <p>$$ \require{AMScd} \begin{CD} (K,v) @&gt;&gt;&gt; (\hat{K},\hat{v})\\ @VVV @VVV \\ (L,w_i) @&gt;&gt;&gt; (\hat{L}_i,\hat{w}_i) \end{CD} $$ which allows us to define a ring morphism $\varphi_i:L\otimes_K \hat{K}\to\hat{L}_i$ for each $i$: by the universal property of the tensor product applied to the $K$-bilinear map $(\ell,\lambda)\mapsto \ell\lambda$ (the product in $\hat L_i$ of the images of $\ell$ and $\lambda$ under the two maps in the diagram), and this morphism is $\hat{K}$-linear by construction. Taking the product over $i$ gives the morphism $$\varphi:L\otimes_K\hat K\to\prod_i\hat L_i.$$</p>
192,072
<p>Hello!<br> I'm trying this number-theory problem, but I don't have any idea how to solve it.<br> Can you give me some hints?</p> <p>Let $n$ be any positive integer ($n\in\mathbb{Z_+}$).<br> Then we must prove that $2 \nmid \sigma(n) \implies n = k^2 \vee n = 2k^2$.<br> Thanks for any help.</p>
Hagen von Eitzen
39,174
<p>If $n$ is odd, then $\sigma(n)$ is the sum of $\tau(n)$ odd divisors. For this sum to be odd, $\tau(n)$ must be odd. But that means that it is not possible to pair off all divisors into pairs $(d, \frac n d)$, i.e. there is one divisor $d$ with $d=\frac nd$ and hence $n=d^2$.</p> <p>If $n=2^rm$ with $m$ odd and $r&gt;0$, then $\sigma(n)=\sigma(2^r)\sigma(m)$ with $\sigma(2^r)=2^{r+1}-1$ odd, hence $\sigma(m)$ is odd and by the preceding paragraph $m=d^2$ for some $d|m$. If $r=2s$ is even, then $n=(2^sd)^2$, and if $r=2s+1$ is odd, then $n=2\cdot(2^sd)^2$.</p>
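The criterion (which is in fact an "iff", by the same parity count) is easy to confirm by brute force; a quick Python sketch of my own, using nothing beyond the definitions:

```python
from math import isqrt

def sigma(n):
    """Sum of all positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def square_or_twice_square(n):
    a = isqrt(n)
    b = isqrt(n // 2) if n % 2 == 0 else 0
    return a * a == n or (n % 2 == 0 and 2 * b * b == n)

# sigma(n) is odd precisely when n = k^2 or n = 2*k^2
for n in range(1, 1000):
    assert (sigma(n) % 2 == 1) == square_or_twice_square(n)
print("checked all n < 1000")
```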
2,437,026
<p>Suppose that:</p> <p>$$ X \sim Bern(p) $$</p> <p>Then, intuitively $X^2 = X \sim Bern(p)$. However, when I try to think of it logically, it doesn't make any sense. </p> <p>As an example, $X$ is $1$ with probability $p$ and $0$ with probability $1-p$. Then, $X^2 = X\cdot X$ is $1$ only when both $X$'s are $1$, which occur with probability $p^2$, and so it doesn't seem like $X^2 = X$. Can someone tell me what is wrong here?</p>
Siong Thye Goh
306,553
<blockquote> <p>$X^2=X\cdot X$ is $1$ only when both $X$'s are $1$, which occurs with probability $\color{blue}p$.</p> </blockquote> <p>Notice that the two $X$'s are identical, hence completely dependent: they refer to the same random variable, so the product rule for independent variables does not apply.</p> <p>Since $X$ takes values in $\{0,1\}$, we indeed have $X^2=X$.</p>
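To see the dependence point concretely, here is a simulation sketch of my own (assuming numpy; sample size is arbitrary): squaring reuses the same draw of $X$, while $p^2$ describes the product of two independent copies.

```python
import numpy as np

rng = np.random.default_rng(1)
p = 0.3
x = rng.binomial(1, p, size=100_000)   # draws of X ~ Bern(p)

# X^2 is computed from the *same* draw, so it equals X pointwise: X^2 ~ Bern(p)
assert np.array_equal(x ** 2, x)

# By contrast, the product of two independent Bern(p) draws is Bern(p^2)
y = rng.binomial(1, p, size=100_000)
print((x * y).mean())                  # ~ p**2 = 0.09
```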
3,111,489
<p>For which <span class="math-container">$p,q$</span> does the integral <span class="math-container">$\int_0^{\infty} \frac{x^p}{\mid{1-x}\mid^q}dx$</span> exist?</p> <p>Can you help me? I have been sitting on this question for hours.</p> <p>I got that it exists for <span class="math-container">$ q&lt;1$</span> and <span class="math-container">$p&gt;q+1$</span>, but I am not sure if that's right.</p>
gt6989b
16,192
<p>I would try to get rid of the absolute value to simplify: <span class="math-container">$$ \int_0^\infty \frac{x^pdx}{|1-x|^q} = \int_0^1 \frac{x^p dx}{(1-x)^q} + \int_1^\infty \frac{x^p dx}{(x-1)^q} $$</span> Now the first integral has a problem at <span class="math-container">$x \to 1^-$</span>, and the second as <span class="math-container">$x \to 1^+$</span>, and also a potential problem at <span class="math-container">$x\to \infty$</span>, and if <span class="math-container">$p &lt; 0$</span> then both of those have a problem at <span class="math-container">$x \to 0$</span>.</p> <p>Can you take it from here?</p>
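Picking up from there, each trouble spot can be compared with a power function; a sketch of my own completion (worth double-checking):

```latex
\begin{aligned}
x\to 0^{+}:&\quad \frac{x^{p}}{|1-x|^{q}}\sim x^{p},
   &&\text{integrable iff } p>-1,\\
x\to 1^{\pm}:&\quad \frac{x^{p}}{|1-x|^{q}}\sim |1-x|^{-q},
   &&\text{integrable iff } q<1,\\
x\to\infty:&\quad \frac{x^{p}}{|1-x|^{q}}\sim x^{p-q},
   &&\text{integrable iff } p-q<-1.
\end{aligned}
```

Under these comparisons the integral exists exactly when $p>-1$, $q<1$ and $q-p>1$, which in particular forces $-1<p<0$.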
1,463,567
<p>We have the following theorem:</p> <p>If |G| = 60 and G has more than one Sylow 5-subgroup, then G is simple.</p> <p>Since the group of rigid motions of the dodecahedron has order 60, all we have to do is show that it has more than one Sylow 5-subgroup. But I don't know how to do this, as I don't know the elements of this group and I have trouble visualizing it.</p>
Najib Idrissi
10,014
<p>Yes, the assumption that the set is open is necessary. The plane $\mathbb{R}^2$ is locally path-connected, and the "topologist's sine curve" $$S = \operatorname{closure} \bigl\{(x,\sin(1/x)) \mid x &gt; 0 \bigr\}$$ is connected (because it's the closure of a connected set), but it's not path-connected (a classical exercise). And since of course $S$ isn't a path component of $\mathbb{R}^2$, you cannot use the theorem you mention to say that $S$ is path-connected (and it isn't).</p>
238,076
<p>I was thinking about this when flying on the plane which was approaching and slowing down.</p> <p>Assume an object is approaching its target which is at a certain initial distance d at time t0.</p> <p>It starts at a speed that will allow it to reach the target in exactly one hour (e.g. d=100km, it starts at 100 km/h).</p> <p>Incrementally, it will slow down so that at every point in time, it will be exactly one hour far from reaching the target (after it had travelled 40 km being 60 km away, it will be travelling at 60 km/h).</p> <p>After reaching a predefined minimum speed (10 km/h), it will keep its velocity constant (and it will need exactly one additional hour to reach the target).</p> <p>How long will it take to reach the target?</p> <p>I somehow assume the answer should be 2 hours, independently of initial distance, but it does not fit (because then it would not matter what the original distance is (which it should not since we are travelling faster in the beginning), but any shorter path is included in the longer path and the resulting equation cannot possibly be true (or can it be?)) </p> <p>I think I am missing the mathematical apparatus that is needed to solve this (is it differential equations?)</p> <p>Can you please advise how to solve this?</p>
ebsddd
36,669
<p>Split the problem into two parts: the time it takes to get from 100 to 10 km, and from 10 to 0 km. The second part is trivial (it takes exactly one hour), so I'll focus on the first.</p> <p>For the first part, let $x(t) = d(t) - 10$, so that when you're at the 10 km mark, $x = 0$. Then, since $v(t)=-d(t)$, $v(t)=-x(t)-10$. As a differential equation: $\frac{dx}{dt}=-x-10$.</p> <p>I don't remember much from my ODE course other than that this is a first-order linear differential equation, but Google led me to <a href="http://en.wikipedia.org/wiki/Integrating_factor#Use_in_solving_first_order_linear_ordinary_differential_equations" rel="nofollow">this</a>. After relabeling the variables, you get $P=1$, $Q=-10$, and an integrating factor of $M(t)=e^t$.</p> <p>This produces $x(t)=\frac{Q M(t) + C}{M(t)}$ (using that $Q$ is constant). Solve for $C$ using the initial condition $x(0)=d(0)-10=90$, which gives you $C=100$ and $x(t)=100e^{-t}-10$.</p> <p>To find the time it takes to travel the first 90 km, solve for $t$ when $x(t)=0$, which gives $t=\ln 10\approx 2.303$ hours; adding the final hour at 10 km/h, the total is $\ln 10 + 1\approx 3.303$ hours.</p>
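A quick numerical check of this closed form (my own sketch; the Euler step size and tolerances are arbitrary choices):

```python
import math

# Closed form from above: x(t) = 100*exp(-t) - 10, i.e. d(t) = 100*exp(-t)
t1 = math.log(10)                   # time to cover the first 90 km (~2.303 h)
assert abs(100 * math.exp(-t1) - 10) < 1e-9   # x(t1) = 0

# Cross-check with a crude Euler simulation of d' = -d down to d = 10 km,
# then one more hour at a constant 10 km/h for the last 10 km.
d, t, dt = 100.0, 0.0, 1e-5
while d > 10.0:
    d -= d * dt
    t += dt
total = t + 1.0                     # ~ ln(10) + 1 ≈ 3.303 hours in all
assert abs(total - (t1 + 1.0)) < 1e-3
print(total)
```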
2,296,256
<p>I need help with how to mathematically interpret an ODE (Newton's second law). I am used to the ODE in this form: $$ m\ddot x(t)=F(t)\tag{1} $$</p> <p>However, in another book they wrote: $$ m\ddot x=F(x,\dot x) \tag{2} $$ where $F: \mathbb{R}^n \times \mathbb{R}^n\rightarrow \mathbb{R}^n$.</p> <p><strong>Questions:</strong></p> <ol> <li><p>I guess $F(x,\dot x)$ is an abbreviation for $F(x(t),\dot x(t))$; is that correct?</p></li> <li><p>What is the difference between writing $F(t)$ or $F(x,\dot x)$?</p></li> <li><p>What is the meaning of the notation $F: \mathbb{R}^n \times \mathbb{R}^n\rightarrow \mathbb{R}^n$?</p></li> </ol> <p>Thanks!</p>
marty cohen
13,079
<p>If $f$ is a function, then, by definition, $i=j$ implies $f(i)=f(j)$.</p> <p>Nothing else is needed.</p>
2,296,256
Mr. Xcoder
435,694
<p>$$\forall\:i,j: i&gt;j \implies f(i) &gt; g(j)\tag1$$ $$\forall\:i,j: i&lt;j \implies f(i) &lt; g(j)\tag2$$</p> <p>Bypassing the redundancy, if you simply suppose that $i\neq j,\: f(i)=g(j)$, there are two cases to handle:</p> <ul> <li><p>$i&gt;j,\text{ and (1)}\implies \underbrace{f(i) &gt; g(j)}_{\text{false, a contradiction}} $</p></li> <li><p>$i&lt;j,\text{ and (2)}\implies \underbrace{f(i)&lt;g(j)}_{\text{false, a contradiction}}$</p></li> </ul> <p>Therefore, $f(i) = g(j)$ forces $i=j$. By definition, a function always returns the same result for the same argument, so $g(j) = g(i)$. Thus:</p> <p>$$f(i) = g(j)\implies i=j\implies f(i) = g(j) = g(i)$$</p>
1,624,221
<p>For the former one, I am aware that if we let $F(x)=\int_a^x f(t)dt$, then it also equals $\int_0^x f(t)dt-\int_0^a f(t)dt$, so $F'(x)= f(x)-0=f(x)$. But can someone tell me why the derivative of $\int_0^a f(t)dt$ is $0$?</p>
Hugh
94,681
<p>Use partial fraction decomposition to represent the sum and notice that it is a telescoping series.</p>
1,380,348
<p>100-sided dice was rolled 98 times, Numbers form 1 to 50 were rolled exactly once, except number 25, which wasn't rolled yet. Number 75 was rolled 49 times You can only bet if the next roll result will be below 51 or above 49.</p> <p>How do you choose ?, how to calculate which bet is better ? ELI5 please</p> <p>And would it be more reasonable to chose below 51, if 75 was rolled only 48 times ?</p> <p>EDIT: yes dice is fair EDIT sorry, below 51 or above 49.</p>
user2566092
87,313
<p>If the die is fair, then the probability the roll will be above $51$ is $49/100$ and the probability the roll is below $49$ will be $48/100$. The previous rolls do not change the fact that the die is fair. So technically, since it's slightly more likely, you should bet that the roll is above $51$.</p> <p>If you do not assume the die is fair, and you know nothing about the die beforehand, then the maximum likelihood estimate for what you should choose would be whichever is greater, the number of rolls less than $49$ or the number of rolls greater than $51$. If you have prior knowledge about how the die might behave, then you can use Bayes rule (you can look it up online) to figure out what to do but the calculations get messier.</p> <p>UPDATE: The question has been changed to "below 51" or "above 49". If the die is fair, then "below 51" has chances $50/100$, and the chances of "above 49" are $51/100$, so you should choose "above 49" if the die is known to be fair. All the other reasoning stands as is (for a possibly unfair die), just substituting in the new different things you bet on.</p>
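For intuition, a quick fair-die simulation (my own sketch, assuming numpy) matching the counting above: 51 of the 100 faces are "above 49" (50 through 100), while 50 faces are "below 51" (1 through 50).

```python
import numpy as np

rng = np.random.default_rng(0)
rolls = rng.integers(1, 101, size=1_000_000)   # fair 100-sided die

print((rolls > 49).mean())   # ~ 0.51
print((rolls < 51).mean())   # ~ 0.50
```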
1,380,348
<p>A 100-sided die was rolled 98 times. Numbers from 1 to 50 were rolled exactly once each, except number 25, which wasn't rolled yet. Number 75 was rolled 49 times. You can only bet on whether the next roll's result will be below 51 or above 49.</p> <p>How do you choose? How do you calculate which bet is better? ELI5 please.</p> <p>And would it be more reasonable to choose below 51 if 75 was rolled only 48 times?</p> <p>EDIT: yes, the die is fair. EDIT: sorry, below 51 or above 49.</p>
hvedrung
245,555
<p>Bet on 75. A fair die can't fall on one outcome in 49 rolls out of 98. The probability is about $10^{-96}$; its reciprocal is greater than the number of atoms in the universe.</p> <p>PS. If this is a study task, then the bets are equal: the next outcome has absolutely no dependence on the previous ones.</p>
3,219,635
<p>Suppose <span class="math-container">$f$</span> is continuous on <span class="math-container">$\Bbb R$</span>, define <span class="math-container">$F(x)=\int_a^bf(x+t)\cos t\,dt,x\in [a,b]$</span>.</p> <p>How to show <span class="math-container">$F(x)$</span> is differentiable on <span class="math-container">$[a,b]$</span>?</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$F(x)=\int_{a+x}^{b+x} f(s) [\cos \,s \cos \,x+\sin \,s \sin \,x]ds$</span>. Split this into two terms and pull out <span class="math-container">$\cos\, x,\sin\,x$</span> from the integral. Use the fact indefinite integrals of continuous functions are differentiable. </p>
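The splitting step can be verified numerically; a hedged sketch in Python, where the particular $f$, $a$, $b$, $x$ are hypothetical choices made only for illustration:

```python
import math

def integral(g, lo, hi, n=20000):
    """Midpoint rule -- accurate enough for these smooth integrands."""
    h = (hi - lo) / n
    return h * sum(g(lo + (k + 0.5) * h) for k in range(n))

f = lambda s: math.exp(-s) * math.sin(3 * s)     # any continuous f works
a, b, x = -1.0, 2.0, 0.7

# F(x) as defined, and F(x) after substituting s = x + t and
# expanding cos(s - x) = cos s cos x + sin s sin x:
direct = integral(lambda t: f(x + t) * math.cos(t), a, b)
c = integral(lambda s: f(s) * math.cos(s), a + x, b + x)
s_ = integral(lambda s: f(s) * math.sin(s), a + x, b + x)
split = math.cos(x) * c + math.sin(x) * s_

print(abs(direct - split))   # ~0, up to quadrature error
```

Each of the two integrals multiplying $\cos x$ and $\sin x$ is an indefinite integral of a continuous function in $x$, hence differentiable, which is the point of the answer.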
4,044,654
<p>We say that a continuous function <span class="math-container">$u:\mathbb{R}^d\to \mathbb{R}$</span> is subharmonic if it satisfies the mean value property <span class="math-container">$$u(x)\leq \frac{1}{|\partial B_r(x)|}\int_{\partial B_r(x)}u(y)\,\mathrm{d}y \qquad (\star)$$</span> for any ball <span class="math-container">$B_r(x)\subset \mathbb{R}^d$</span>.</p> <blockquote> <p>Let <span class="math-container">$u:\mathbb{R}^d\to \mathbb{R}$</span> be a convex function (hence, continuous). Is <span class="math-container">$u$</span> subharmonic?</p> </blockquote> <ul> <li><p>If <span class="math-container">$u\in C^2(\mathbb{R}^d)$</span>, this is true. Using a second-order Taylor expansion we have <span class="math-container">\begin{align*}\int_{\partial B_r(x)}(u(y)-u(x))\,\mathrm{d}y&amp;=\int_{\partial B_r(x)}\left(\nabla u(x)\cdot(y-x)+\frac{1}{2}(y-x)D^2u(\xi)(y-x)^t\right)\,\mathrm{d}y.\end{align*}</span> The first term in the above integral vanishes by symmetry, the second is non-negative because <span class="math-container">$D^2u(\xi)$</span> is a positive semi-definite matrix. Therefore, (<span class="math-container">$\star$</span>) is proven.</p> </li> <li><p>If <span class="math-container">$d=1$</span>, the statement is true when <span class="math-container">$u$</span> is continuous, in general. Indeed since balls reduce to intervals, (<span class="math-container">$\star$</span>) is easily shown to be equivalent to <span class="math-container">$u$</span> being midpoint-convex.</p> </li> </ul> <p>I'm not sure how to attack the problem in higher dimensions. Of course <span class="math-container">$(\star)$</span> is true for affine functions in any dimension, and I'd like to use the fact that the graph of a convex function lies below that of an affine function, loosely speaking. 
However, to close the estimate I would need <span class="math-container">$u$</span> to be equal to the affine function at the boundary of the ball, and this is not necessarily possible.</p>
Peter Morfe
711,689
<p>Martin gives a very reasonable answer extending the question's second argument; here's how to extend the first one.</p> <p>If <span class="math-container">$u: \mathbb{R}^{d} \to \mathbb{R}$</span> is continuous and convex, then mollification of <span class="math-container">$u$</span> gives a family of smooth functions <span class="math-container">$(u^{\epsilon})_{\epsilon &gt; 0}$</span> such that <span class="math-container">$u^{\epsilon} \to u$</span> locally uniformly in <span class="math-container">$\mathbb{R}^{d}$</span>. (Recall that mollification means that we define <span class="math-container">$u^{\epsilon} = \rho^{\epsilon} * u$</span>, where <span class="math-container">$\rho^{\epsilon}(x) = \epsilon^{-d} \rho(\epsilon^{-1} x)$</span> and <span class="math-container">$\rho \in C^{\infty}_{c}(\mathbb{R}^{d})$</span> is a non-negative, even function with <span class="math-container">$\int_{\mathbb{R}^{d}} \rho(x) \, dx = 1$</span>.)</p> <p>It is not hard to show that, for each <span class="math-container">$\epsilon &gt; 0$</span>, <span class="math-container">$u^{\epsilon}$</span> is convex. Indeed, if <span class="math-container">$x, y \in \mathbb{R}^{d}$</span> and <span class="math-container">$\lambda \in [0,1]$</span>, then <span class="math-container">\begin{align*} u^{\epsilon}((1- \lambda)x + \lambda y) &amp;= \int_{\mathbb{R}^{d}} u((1 - \lambda)x + \lambda y - \xi) \rho^{\epsilon}(\xi) \, d \xi \\ &amp;\leq \int_{\mathbb{R}^{d}} ((1 - \lambda) u(x - \xi) + \lambda u(y - \xi)) \rho^{\epsilon}(\xi) \, d \xi \\ &amp;= (1 - \lambda) u^{\epsilon}(x) + \lambda u^{\epsilon}(y). 
\end{align*}</span> Thus, <span class="math-container">$u^{\epsilon}$</span> is subharmonic.</p> <p>Now if <span class="math-container">$x \in \mathbb{R}^{d}$</span> and <span class="math-container">$r &gt; 0$</span>, then <span class="math-container">\begin{equation*} u(x) = \lim_{\epsilon \to 0^{+}} u^{\epsilon}(x) \leq \lim_{\epsilon \to 0^{+}} \frac{1}{|\partial B_{r}(x)|} \int_{\partial B_{r}(x)} u^{\epsilon}(y) \, dy = \frac{1}{|\partial B_{r}(x)|} \int_{\partial B_{r}(x)} u(y) \, dy. \end{equation*}</span> Therefore, <span class="math-container">$u$</span> is also subharmonic.</p>
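As a quick numerical illustration of the conclusion, one can test the sub-mean-value inequality for a convex function that is not $C^2$, e.g. the Euclidean norm on $\mathbb{R}^2$ (a throwaway sketch, not part of the proof):

```python
import math

u = lambda p: math.hypot(p[0], p[1])   # convex, not smooth at the origin

def sphere_mean(u, x0, r, n=4000):
    """Average of u over the circle of radius r centred at x0."""
    total = 0.0
    for k in range(n):
        th = 2 * math.pi * k / n
        total += u((x0[0] + r * math.cos(th), x0[1] + r * math.sin(th)))
    return total / n

# The sub-mean-value inequality holds at smooth and non-smooth points alike:
for x0, r in [((0.0, 0.0), 1.0), ((0.3, -0.2), 1.5)]:
    assert u(x0) <= sphere_mean(u, x0, r)
```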
2,268,299
<p>Let's say I have a series of real values $y_0,y_1,y_2\cdots$. My question is if it's always possible to find (at least one) $C^\infty$ real function such that \begin{equation} f^{(n)}(0)=y_n \end{equation} and in the affirmative case, how. It is a kind of "reverse taylor" problem... any hints?</p>
Community
-1
<p>Yes, it is possible to find a function that satisfies the condition $f^{\left(n\right)}\left(0\right)=y_n$ when the sequence $\left&lt;y_n\right&gt;$ is given, though infinitely many such functions exist.</p> <p>One way of finding a possible function would be to use a polynomial of degree $n$. Consider $$p\left(x\right)=a_0+a_1x+a_2x^2 +\cdots+a_nx^n$$</p> <p>Now, let $$f\left(x\right)=p\left(x\right)e^x$$ Then, we have $$\begin{array}{ccccc} f\left(0\right)&amp;=&amp;a_0&amp;=&amp;y_0\\ f^{(1)}\left(0\right)&amp;=&amp;a_0+a_1&amp;=&amp;y_1\\ f^{(2)}\left(0\right)&amp;=&amp;a_0+2a_1+2a_2&amp;=&amp;y_2\\ &amp;\vdots&amp; \end{array}$$</p> <p>We get $n+1$ equations for the $n+1$ variables $\left&lt;a_0,\,a_1,\,\dots,\,a_n\right&gt;$, and simply solve the equations to get the values of the coefficients and hence the function. A computer algebra system can be used if the value of $n$ is large.</p> <p>But the function so obtained is clearly not the only possible one. To generate more functions, we can simply increase the degree of the polynomial $p$ to an arbitrary value. Since the coefficients of powers of $x$ higher than $n$ do not occur in our set of equations, we may assign any arbitrary values to them. Even then, we cannot cover all possible definitions of $f$.</p> <p>NOTE: For a finite list $y_0,\dots,y_n$ we could even more simply choose $f$ to be the polynomial $\sum_{k=0}^n \frac{y_k}{k!}x^k$. For a full infinite sequence the construction is more delicate, since the naive power series $\sum_{k} \frac{y_k}{k!}x^k$ need not converge anywhere; nevertheless a $C^\infty$ function with the prescribed derivatives always exists, by Borel's theorem.</p>
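The triangular system above can be solved mechanically. Here is a sketch in Python using exact rational arithmetic; by the Leibniz rule, $f^{(k)}(0)=\sum_{j\le k}\binom{k}{j}\,j!\,a_j$, which reproduces the displayed equations:

```python
from fractions import Fraction
from math import comb, factorial

def coeffs(y):
    """Coefficients a_0..a_n of p with f(x) = p(x) e^x and f^(k)(0) = y_k.
    The system is lower triangular, so solve by forward substitution."""
    a = []
    for k in range(len(y)):
        s = sum(Fraction(comb(k, j) * factorial(j)) * a[j] for j in range(k))
        a.append((Fraction(y[k]) - s) / factorial(k))
    return a

# Sanity check: y_k = 1 for every k are the derivatives of e^x itself,
# so p must collapse to the constant polynomial 1.
assert coeffs([1, 1, 1, 1]) == [1, 0, 0, 0]
# And y = (0, 1, 2) gives p(x) = x, i.e. f(x) = x e^x.
assert coeffs([0, 1, 2]) == [0, 1, 0]
```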
208,744
<p>I was asked to show that $\frac{d}{dx}\arccos(\cos{x}), x \in R$ is equal to $\frac{\sin{x}}{|\sin{x}|}$. </p> <p>What I was able to show is the following:</p> <p>$\frac{d}{dx}\arccos(\cos(x)) = \frac{\sin(x)}{\sqrt{1 - \cos^2{x}}}$</p> <p>What justifies equating $\sqrt{1 - \cos^2{x}}$ to $|\sin{x}|$?</p> <p>I am aware of the identity $ \sin{x} = \pm\sqrt{1 - \cos^2{x}}$, but I still do not see how that leads to that conclusion.</p>
Mikasa
8,581
<p>If $y=\arccos(x)$ then $x=\cos(y)$ and so $$\frac{dy}{dx}=\frac{1}{\frac{dx}{dy}}=\frac{-1}{\sin(y)}$$ if $\sin(y)\neq0$. Here, $$0&lt;y=\arccos(x)&lt;\pi,\text{ so }\sin(y)&gt;0$$ and $$\sin(y)=\sqrt{1-\cos^2(y)}=\sqrt{1-x^2}$$ It means that $$\frac{dy}{dx}=\frac{d}{dx}\arccos(x)=\frac{-1}{\sqrt{1-x^2}}$$ where $x\in (-1,1)$.</p>
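A finite-difference sanity check of the resulting formula $\frac{d}{dx}\arccos(\cos x)=\sin x/|\sin x|$, away from the kinks at multiples of $\pi$ (a throwaway sketch):

```python
import math

def deriv(x, h=1e-6):
    """Central difference of arccos(cos x)."""
    return (math.acos(math.cos(x + h)) - math.acos(math.cos(x - h))) / (2 * h)

for x in (0.5, 2.0, -1.3, 4.0):                  # avoid multiples of pi
    expected = math.sin(x) / abs(math.sin(x))    # the sign of sin x
    assert abs(deriv(x) - expected) < 1e-4
```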
10,600
<p>As mentioned in <a href="https://matheducators.stackexchange.com/questions/1538/counterintuitive-consequences-of-standard-definitions">this question</a> students sometimes struggle with the fact that continuity is only defined at points of the function's domain. For example the function $f:\mathbb R\setminus\{0\} \to \mathbb R: x \mapsto \tfrac 1x$ is continuous although it has a "jump" at $x=0$ (<a href="https://matheducators.stackexchange.com/a/1686/5097">cf. this answer with more details</a>). So:</p> <p><em>Why is continuity only defined on the function's domain? What's the benefit?</em> How should a lecturer answer to such a question of a student?</p> <hr> <p><strong>My attempt to answer the question:</strong> I would give two arguments:</p> <ul> <li>When we take the <a href="https://en.wikipedia.org/wiki/Continuous_function#Definition_in_terms_of_limits_of_sequences" rel="nofollow noreferrer">sequence limit definition of continuity</a> $\lim_{n\to\infty} f(x_n) = f\left(\lim_{n\to\infty} x_n\right) = f(x_0)$, then this definition makes only sense when $x_0 = \lim_{n\to\infty} x_n$ is in the domain of $f$.</li> <li>The concept students have in mind is "continuous continuation" and not "continuity". Thus, one have to distinguish between both concepts.</li> </ul> <p>What do think about my answer? Have I missed something or are there other good arguments?</p> <hr> <p><strong>Note:</strong> This is another follow up question of <a href="https://matheducators.stackexchange.com/questions/10597/how-can-i-motivate-the-formal-definition-of-continuity">How can I motivate the formal definition of continuity?</a> I hope that's okay since I ask here for another aspect of continuity. I want to write an introductory article for continuity. That's the reason why I ask all these questions here...</p>
Mikhail Katz
1,385
<p>There is something to be learned from the history of the concept. The modern concept of continuity was introduced by Cauchy in 1821. This concept is probably best explained in two stages:</p> <p>(1) Cauchy's definition of continuity as infinitesimal $x$-increment always producing an infinitesimal change in $y$;</p> <p>(2) once students understand the basic intuition, the epsilon-delta <em>paraphrase</em> can be explained in terms of <em>trough</em> and <em>trap</em>.</p> <p>In more detail, in order for the value of the function to end up in a <em>trap</em> of a certain size, we must "feed" it values from a trough that's sufficiently small.</p> <p>For example, the absolute value function $|\;|$ is continuous at the origin because for every infinitesimal $\Delta x$, the corresponding $\Delta y=|\Delta x|$ is similarly infinitesimal.</p>
10,600
<p>As mentioned in <a href="https://matheducators.stackexchange.com/questions/1538/counterintuitive-consequences-of-standard-definitions">this question</a> students sometimes struggle with the fact that continuity is only defined at points of the function's domain. For example the function $f:\mathbb R\setminus\{0\} \to \mathbb R: x \mapsto \tfrac 1x$ is continuous although it has a "jump" at $x=0$ (<a href="https://matheducators.stackexchange.com/a/1686/5097">cf. this answer with more details</a>). So:</p> <p><em>Why is continuity only defined on the function's domain? What's the benefit?</em> How should a lecturer answer to such a question of a student?</p> <hr> <p><strong>My attempt to answer the question:</strong> I would give two arguments:</p> <ul> <li>When we take the <a href="https://en.wikipedia.org/wiki/Continuous_function#Definition_in_terms_of_limits_of_sequences" rel="nofollow noreferrer">sequence limit definition of continuity</a> $\lim_{n\to\infty} f(x_n) = f\left(\lim_{n\to\infty} x_n\right) = f(x_0)$, then this definition makes only sense when $x_0 = \lim_{n\to\infty} x_n$ is in the domain of $f$.</li> <li>The concept students have in mind is "continuous continuation" and not "continuity". Thus, one have to distinguish between both concepts.</li> </ul> <p>What do think about my answer? Have I missed something or are there other good arguments?</p> <hr> <p><strong>Note:</strong> This is another follow up question of <a href="https://matheducators.stackexchange.com/questions/10597/how-can-i-motivate-the-formal-definition-of-continuity">How can I motivate the formal definition of continuity?</a> I hope that's okay since I ask here for another aspect of continuity. I want to write an introductory article for continuity. That's the reason why I ask all these questions here...</p>
Peter Saveliev
10,168
<p>Property (3), i.e., the $\varepsilon-\delta$ definition of continuity, has numerous motivations/interpretations. For example, continuity can be interpreted as <em>accuracy</em>. Suppose we are shooting a cannon located at the top of a hill. Even when the mathematics is perfect, our limited knowledge of the many parameters that affect the accuracy of our shot will make us under- or overestimate where the cannonball hits the ground. Let's take just one: the accuracy of our measurement of the height of the hill. If the height, $H$, varies, then so will the placement, $D$, of the shot. Of course, the smaller error in the former will lead to a smaller error in the latter and, furthermore, we can achieve any required degree of accuracy, $\varepsilon$, of our shot if we can ensure a sufficient accuracy, $\delta$, of the measurement of the height of the hill. In other words, the dependence of $D$ on $H$ is continuous. But if we replace this target shooting with a game of tennis, we will face discontinuity. <a href="https://i.stack.imgur.com/ETCEf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ETCEf.png" alt="enter image description here"></a></p>
186,890
<p>Working with other software called SolidWorks I was able to get a plot with a curve very close to my data points:</p> <p><a href="https://i.stack.imgur.com/DooKo.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DooKo.png" alt="enter image description here"></a></p> <p>I tried to get a plot as accurate as this, but using the <code>Fit function</code>, I could not get a plot as accurate.</p> <pre><code>Clear["Global`*"] dados={{0,0},{1,1000},{2,-750},{3,250},{4,-1000},{5,0}}; Plot[Evaluate[Fit[dados,{1,x,x^2,x^3,x^4,x^5,x^6},x]],{x,0,5}] </code></pre> <p><a href="https://i.stack.imgur.com/syQGd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/syQGd.png" alt="enter image description here"></a></p> <p>Is there something I need to modify or another method more effective than this?</p>
kglr
125
<p>Using halirutan an J.M.'s <code>IPCUMonotonicInterpolation</code> from <a href="https://mathematica.stackexchange.com/q/14662/125">this q/a</a>:</p> <pre><code>Plot[IPCUMonotonicInterpolation[dados]@t, {t, 0, 5}, Epilog -&gt; {Red, PointSize[Large], Point@dados}, AspectRatio -&gt; 1/GoldenRatio, GridLines -&gt; Transpose[dados]] </code></pre> <p><a href="https://i.stack.imgur.com/eJfEM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eJfEM.png" alt="enter image description here"></a></p> <p><strong>See also:</strong> </p> <ul> <li><p><a href="https://mathematica.stackexchange.com/q/14023/125">Joining and interpolating data points</a></p></li> <li><p><a href="https://mathematica.stackexchange.com/a/5938/125">Ηow to create an interpolated CDF from its samples?</a></p></li> </ul>
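For readers outside Mathematica, SciPy's `PchipInterpolator` (shape-preserving monotone cubic interpolation) gives a similar non-overshooting curve; note this is a comparable method, not the same algorithm as the linked IPCU code:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

dados = np.array([[0, 0], [1, 1000], [2, -750], [3, 250], [4, -1000], [5, 0]])

# PCHIP passes through every data point and does not overshoot
# between them, unlike the degree-6 polynomial fit above.
curve = PchipInterpolator(dados[:, 0], dados[:, 1])

print([float(curve(x)) for x in dados[:, 0]])   # reproduces the data values
```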
3,628,374
<p>We have that <span class="math-container">$W \in \mathbb{R}^{n \times m}$</span> and we want to find <span class="math-container">$$\text{prox}(W) = \arg\min_Z\Big[\frac{1}{2} \langle W-Z, W-Z \rangle+\lambda ||Z||_* \Big]$$</span></p> <p>Here, <span class="math-container">$||Z||_*$</span> represents the trace norm of <span class="math-container">$Z$</span>.</p> <p>I tried getting the derivative of the whole thing, and to do that I used that the derivative of trace norm is <span class="math-container">$UV^T$</span> (according to <a href="https://math.stackexchange.com/questions/701062/proximal-operator-and-the-derivative-of-the-matrix-nuclear-norm">Proximal Operator and the Derivative of the Matrix Nuclear Norm</a>). However, after this, I don't really know how to proceed. </p>
lhf
589
<p>Here is a solution using linear algebra over <span class="math-container">$\mathbb F_p$</span>.</p> <p><span class="math-container">$g$</span> satisfies <span class="math-container">$0=g^p-I=(g-I)^p$</span>. Therefore, <span class="math-container">$1$</span> is the only eigenvalue of <span class="math-container">$g$</span>. Since <span class="math-container">$g$</span> is a <span class="math-container">$2\times 2$</span> matrix, these are the only possible Jordan forms for <span class="math-container">$g$</span>: <span class="math-container">$$ \begin{bmatrix}1&amp;0\\0&amp;1\end{bmatrix} \quad\text{and}\quad \begin{bmatrix}1&amp;1\\0&amp;1\end{bmatrix} $$</span> But <span class="math-container">$I$</span> does not have order <span class="math-container">$p$</span>.</p>
3,628,374
<p>We have that <span class="math-container">$W \in \mathbb{R}^{n \times m}$</span> and we want to find <span class="math-container">$$\text{prox}(W) = \arg\min_Z\Big[\frac{1}{2} \langle W-Z, W-Z \rangle+\lambda ||Z||_* \Big]$$</span></p> <p>Here, <span class="math-container">$||Z||_*$</span> represents the trace norm of <span class="math-container">$Z$</span>.</p> <p>I tried getting the derivative of the whole thing, and to do that I used that the derivative of trace norm is <span class="math-container">$UV^T$</span> (according to <a href="https://math.stackexchange.com/questions/701062/proximal-operator-and-the-derivative-of-the-matrix-nuclear-norm">Proximal Operator and the Derivative of the Matrix Nuclear Norm</a>). However, after this, I don't really know how to proceed. </p>
Jyrki Lahtonen
11,619
<p>It sounds like you want a mixture of group actions and linear algebra.</p> <hr> <p>Let <span class="math-container">$g\in G=GL_2(\Bbb{F}_p)$</span> be an element of order <span class="math-container">$p$</span>. Let's denote <span class="math-container">$H=\langle g\rangle$</span>, and <span class="math-container">$X=\Bbb{F}_p^2$</span> the set of (column) all vectors on which both <span class="math-container">$H$</span> and <span class="math-container">$G$</span> act by matrix multiplication from the left.</p> <ol> <li>By the orbit-stabilizer theorem the orbits of <span class="math-container">$H$</span> have sizes <span class="math-container">$1$</span> and <span class="math-container">$p$</span> only.</li> <li>Because the zero vector forms an <span class="math-container">$H$</span>-orbit of size <span class="math-container">$1$</span> and <span class="math-container">$|X|=p^2$</span>, there must be other <span class="math-container">$H$</span>-orbits of size <span class="math-container">$1$</span>. Let <span class="math-container">$x\in X$</span> form such a singleton orbit of <span class="math-container">$H$</span>.</li> <li>The scalar multiples of <span class="math-container">$x$</span> are then all eigenvectors of <span class="math-container">$g$</span> belonging to eigenvalue <span class="math-container">$\lambda=1$</span>. Let <span class="math-container">$y\in X$</span> be a vector that is linearly independent from <span class="math-container">$x$</span>. 
Then <span class="math-container">$y$</span> cannot belong to the eigenvalue <span class="math-container">$1$</span> for then we would have <span class="math-container">$g=1_G$</span>.</li> <li>The vectors <span class="math-container">$x$</span> and <span class="math-container">$y$</span> form a basis <span class="math-container">$\mathcal{B}$</span> of <span class="math-container">$X$</span> over <span class="math-container">$\Bbb{F}_p$</span>, so we have <span class="math-container">$g\cdot x=x$</span> and <span class="math-container">$g\cdot y=ax+by$</span> for some <span class="math-container">$a,b\in\Bbb{F}_p$</span>. The matrix of <span class="math-container">$g$</span> with respect to <span class="math-container">$\mathcal{B}$</span> looks like <span class="math-container">$$ M_{\mathcal{B}}(g)=\left(\begin{array}{cc} 1&amp;a\\ 0&amp;b \end{array}\right). $$</span></li> <li>We have <span class="math-container">$\det(g)=b$</span>, so <span class="math-container">$1=\det(g^p)=b^p=b$</span>.</li> <li>Because <span class="math-container">$g\neq 1_G$</span> we have <span class="math-container">$a\neq0$</span>.</li> <li>With respect to the basis <span class="math-container">$\mathcal{B}'=\{ax,y\}$</span> the matrix of <span class="math-container">$g$</span> thus looks like <span class="math-container">$$ M_{\mathcal{B}'}(g)=\left(\begin{array}{cc} 1&amp;1\\ 0&amp;1 \end{array}\right). $$</span></li> </ol>
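For small $p$ the conclusion can be brute-forced; a throwaway Python sketch over $\mathbb{F}_3$ confirming that every order-$p$ element is unipotent:

```python
from itertools import product

p = 3
I = ((1, 0), (0, 1))

def mul(A, B):
    """Product of 2x2 matrices over F_p."""
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) % p
                       for j in range(2)) for i in range(2))

def order(g):
    """Multiplicative order of g in GL_2(F_p)."""
    k, h = 1, g
    while h != I:
        h, k = mul(h, g), k + 1
    return k

# Enumerate all invertible matrices and keep those of order exactly p.
order_p = [g for a, b, c, d in product(range(p), repeat=4)
           if (a * d - b * c) % p != 0
           for g in [((a, b), (c, d))]
           if order(g) == p]

# Each one is unipotent (trace 2, det 1), matching the form derived above.
assert all((g[0][0] + g[1][1]) % p == 2 for g in order_p)
assert all((g[0][0] * g[1][1] - g[0][1] * g[1][0]) % p == 1 for g in order_p)
print(len(order_p))   # p^2 - 1 = 8 elements, all conjugate to [[1,1],[0,1]]
```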
3,671,223
<p>First and foremost, I have already gone through the following posts:</p> <p><a href="https://math.stackexchange.com/questions/2463561/prove-that-for-all-positive-integers-x-and-y-sqrt-xy-leq-fracx-y">Prove that, for all positive integers $x$ and $y$, $\sqrt{ xy} \leq \frac{x + y}{2}$</a></p> <p><a href="https://math.stackexchange.com/questions/64881/proving-the-am-gm-inequality-for-2-numbers-sqrtxy-le-fracxy2">Proving the AM-GM inequality for 2 numbers $\sqrt{xy}\le\frac{x+y}2$</a></p> <p>The reason I am opening a new question is that I still do not understand after reading those two posts.</p> <p>Question: Prove that for any two positive numbers x and y, <span class="math-container">$\sqrt{ xy} \leq \frac{x + y}{2}$</span></p> <p>My lecturer said that the question should begin with <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span>. He also said that this is from a "well-known" fact. Now, both posts also mentioned this exact same thing in the helpful answers.</p> <p>My question is this - <strong>how</strong> and <strong>why</strong> do I know that I need to use <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span>? What "well-known" fact is this? Can't I simply just subtract <span class="math-container">$\sqrt{xy}$</span> from both sides and conclude that <span class="math-container">$0 \leq {(x-y)}^2$</span>? I do not know how this <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span> comes up, or why it even appears.</p> <p>Thanks in advance.</p> <p>Edit: <strong>I am not looking for the direct answer to this question.</strong> I am looking for an answer on <strong>why</strong> <span class="math-container">$(\sqrt{x}- \sqrt{y})^2 \geq 0$</span> is even considered in the first place as the first step to this question. Does it come from a mathematical theorem, an axiom, etc.?</p>
saulspatz
235,128
<p>The well-known fact your teacher is talking about is that the square of any real number is nonnegative. That is a theorem. As for why you should start with this particular instance of that theorem, that takes a little insight. </p> <p>If you were trying to prove the theorem yourself, you might start by seeing what happens if it isn't true. <span class="math-container">$$\sqrt{xy}&gt;\frac{x+y}2\\ xy&gt;\frac{x^2+2xy+y^2}4\\ 4xy&gt;x^2+2xy+y^2\\ 0&gt;x^2-2xy+y^2\\ 0&gt;(x-y)^2$$</span><br> contradiction.</p> <p>This is a perfectly good proof, but you might prefer a direct proof. If you try to work backwards, you immediately run into trouble, because taking the square root of both sides may not be valid. It may then occur to you to start with the square roots. </p>
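A numeric sanity check of the rearrangement (a throwaway sketch, not part of the proof):

```python
import math
import random

random.seed(0)
violations = 0
for _ in range(1000):
    x, y = random.uniform(0, 100), random.uniform(0, 100)
    # (sqrt x - sqrt y)^2 = x + y - 2*sqrt(xy) >= 0 rearranges to AM-GM:
    if math.sqrt(x * y) > (x + y) / 2 + 1e-9:
        violations += 1
print(violations)   # 0
```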
2,512,424
<p>It is an easy exercise to show that all finite groups with at least three elements have at least one non-trivial automorphism; in other words, there are - up to isomorphism - only finitely many finite groups $G$ such that $Aut(G)=1$ (to be exact, just two: $1$ and $C_2$).</p> <p>Is an analogous statement true for all finite groups? I.e., given a finite group $A$, are there - again up to isomorphism - only finitely many groups $G$ with $Aut(G)\cong A$?</p> <p>If yes, is there an upper bound on the number of such groups $G$ depending on a property of $A$ (e.g. its order)?</p> <p>And if not, which groups arise as counterexamples?</p> <p>And finally, what does the situation look like for infinite groups $G$ with a given finite automorphism group? And what if infinite automorphism groups $A$ are considered?</p>
Mikko Korhonen
17,384
<p>Ledermann and B.H.Neumann ("On the Order of the Automorphism Group of a Finite Group. I", Proc. Royal Soc. A, 1956) have shown the following:</p> <blockquote> <p><strong>Theorem.</strong> Let $n &gt; 0$. There exists a bound $f(n)$ such that if $G$ is a finite group with $|G| \geq f(n)$, then $|\operatorname{Aut}(G)| \geq n$.</p> </blockquote> <p>An immediate consequence is that up to isomorphism, there are only finitely many finite groups $G$ with $|\operatorname{Aut}(G)| \leq n$. Hence for any finite group $X$, up to isomorphism there are only finitely many finite groups $G$ with $\operatorname{Aut}(G) \cong X$.</p> <p>Among infinite groups this is no longer true, and indeed there are infinitely many groups $G$ with $\operatorname{Aut}(G) \cong \mathbb{Z} / 2 \mathbb{Z}$.</p> <p>Then there is of course the question of determining all finite groups $G$ with given automorphism group $\operatorname{Aut}(G) \cong X$. For this, see for example</p> <blockquote> <p>Iyer, Hariharan K. On solving the equation Aut(X)=G. Rocky Mountain J. Math. 9 (1979), no. 4, 653–670. </p> </blockquote> <p>This paper gives a solution to the problem in some cases, and determines for example all $G$ with $\operatorname{Aut}(G) \cong S_n$. There is also a different proof of the fact that there are only finitely many groups with a given automorphism group (Theorem 3.1 there).</p>
2,512,424
<p>It is an easy exercise to show that all finite groups with at least three elements have at least one non-trivial automorphism; in other words, there are - up to isomorphism - only finitely many finite groups $G$ such that $Aut(G)=1$ (to be exact, just two: $1$ and $C_2$).</p> <p>Is an analogous statement true for all finite groups? I.e., given a finite group $A$, are there - again up to isomorphism - only finitely many groups $G$ with $Aut(G)\cong A$?</p> <p>If yes, is there an upper bound on the number of such groups $G$ depending on a property of $A$ (e.g. its order)?</p> <p>And if not, which groups arise as counterexamples?</p> <p>And finally, what does the situation look like for infinite groups $G$ with a given finite automorphism group? And what if infinite automorphism groups $A$ are considered?</p>
Brauer Suzuki
960,602
<p>A more accessible account of the theorem of Ledermann-Neumann mentioned in the accepted answer can be found here: <a href="https://www.tandfonline.com/doi/full/10.1080/00029890.2020.1803625" rel="nofollow noreferrer">Math. Monthly</a> or <a href="https://arxiv.org/abs/1909.13220" rel="nofollow noreferrer">arxiv</a></p> <p>EDIT: The argument runs as follows. Suppose <span class="math-container">$G$</span> is a finite group with exactly <span class="math-container">$n$</span> automorphisms.</p> <ol> <li>The order of the derived group <span class="math-container">$G'$</span> is bounded in terms of <span class="math-container">$n$</span>. Since the size of the inner automorphism group <span class="math-container">$|G/Z(G)|$</span> is bounded by <span class="math-container">$n$</span>, this follows from a theorem of Schur. While Schur's proof is highly non-trivial, there is an elementary argument by Rosenlicht.</li> <li>Every prime divisor of <span class="math-container">$|G|$</span> is bounded by <span class="math-container">$n+1$</span>. This requires a special case of the Schur-Zassenhaus theorem for central Sylow subgroups.</li> <li>The exponent of <span class="math-container">$G$</span> is bounded in terms of <span class="math-container">$n$</span>. This uses an elementary argument by Nagrebeckiı̆.</li> <li>The size of the center <span class="math-container">$Z(G)$</span> is bounded in terms of <span class="math-container">$n$</span>. Since the exponent is bounded, it suffices to bound the minimal number of generators of <span class="math-container">$Z(G)$</span>. Let <span class="math-container">$g_1,\ldots,g_m$</span> be representatives for the cosets of <span class="math-container">$G$</span> mod <span class="math-container">$Z(G)G'$</span> and consider <span class="math-container">$U:=\langle g_1,\ldots,g_m\rangle G'$</span>. Note that <span class="math-container">$m$</span> is bounded in terms of <span class="math-container">$n$</span>. 
Using another lemma, one finds a decomposition <span class="math-container">$Z(G)=C\times D$</span> such that <span class="math-container">$U\cap Z(G)\le C$</span> and <span class="math-container">$|C|$</span> is bounded. It remains to bound <span class="math-container">$|D|$</span>. The author shows that <span class="math-container">$G=UC\times D$</span>. Now we may assume that <span class="math-container">$G=D$</span> is an abelian <span class="math-container">$p$</span>-group. It is easy to see that the number of automorphisms of <span class="math-container">$G$</span> grows with its rank.</li> </ol>
81,209
<p>I feel a bit ashamed to ask the following question here. </p> <blockquote> <p>What is (actually, is there) Galois theory for polynomials in $n$-variables for $n\geq2$?</p> </blockquote> <p>I am preparing a large audience talk on Lie theory, and decided to start talking about symmetries and take Galois theory as a "baby" example. I know that Lie groups are somehow to differential equations what discrete groups are to algebraic equations. But I nevertheless would expect Lie (or algebraic) groups to appear naturally as higher dimensional analogs of Galois groups. </p> <p>Namely, the Galois group $G_P$ of a polynomial $P(x)$ in one variable can be defined as the symmetry group of the equation $P(x)=0$ (very shortly, the subgroup of permutations of the solutions/roots that preserves any algebraic equation satisfied by them). </p> <p>Then one of the great results of Galois theory is that $P(x)=0$ is solvable by radicals if and only if the group $G_P$ is solvable (meaning that its derived series reaches $\{1\}$). </p> <p>I was wondering what is the analog of the story in higher dimension (i.e. for equations of the form $P(x_1,\dots,x_n)=0$. I would naively expect algebraic group to show up... </p> <hr> <p>I googled the main key words and found <a href="http://www.ucl.ac.uk/~ucahmki/sheffield.pdf">this presentation</a>: on the last slide it is written that </p> <blockquote> <p>the task at hand is to develop a Galois theory of polynomials in two variables</p> </blockquote> <p>This convinced me to anyway ask the question</p> <hr> <p><strong>EDIT: the first "idea" I had</strong></p> <p>I first thought about the following strategy. Consider $P(x,y)=0$ as an polynomial equation in one variable $x$ with coefficients in the field $k(y)$ of rational functions in $y$, and consider its Galois group. But then we could do the opposite...what would happen?</p>
Tim Porter
3,502
<p>This will not answer the question but is more than a comment; in addition, it may be very naive! (This is a hard question, not a soft question!!!)</p> <p>I wonder: given that the Galois group &lt;-> étale fundamental group link works in dimension 1, should there not be a link '2-Galois thingie' &lt;-> étale 2-type, and hence a link with Grothendieck's Pursuing Stacks and his letters to Breen in 1975? The sought-after model might be a profinite (?) crossed module. These can be seen as automorphism 2-groups of groupoids, so although they are automorphism things, there is a gap to bridge before the link would work well. I have also met a similar idea when working with orbifolds and related contexts, but I have no definite reply to the particular question, rather more an addition to it! (I hope this helps... or inspires someone to think 'outside the box'.)</p> <p>There would then be a similar idea for polynomials in $n$ variables and models for $n$-types??? (This may all be rubbish, but it is nice to dream sometimes!)</p>