| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,678,908 | <p>Let</p>
<p><span class="math-container">$$
V = \{ax^3+bx^2+cx+d|a,b,c,d \in Z_7\}
$$</span></p>
<p>Let <span class="math-container">$U$</span> be the subspace of <span class="math-container">$V$</span> given by:</p>
<p><span class="math-container">$$
U = \{p(x) \in V | p(3)=p(5)=0\}
$$</span></p>
<p>The answer says that:</p>
<p><span class="math-container">$$
Dim(U) = 2
$$</span></p>
<p>How did they conclude that the dimension of <span class="math-container">$U$</span> is <span class="math-container">$2$</span>?</p>
| Anton Vrdoljak | 744,799 | <p><strong>Hints:</strong></p>
<p><span class="math-container">$|2x+3|=\begin{cases}2x+3, & 2x+3 \ge 0 \iff x \ge -\frac{3}{2} \\-2x-3, & 2x+3 < 0 \iff x < -\frac{3}{2}\end{cases}$</span></p>
<p><span class="math-container">$|x-1|=\begin{cases}x-1, & x-1 \ge 0 \iff x \ge 1 \\-x+1, & x-1 < 0 \iff x < 1\end{cases} \\$</span></p>
<p>Now consider <strong>each of the cases</strong> (there are 3 in total) and solve the equation in every case...</p>
|
4,142,970 | <p>Find the cardinality of the set <span class="math-container">$\{ X \subset \mathbb{N} : |X| = \aleph_0\} $</span>.</p>
<p>First of all, I could easily prove that the set given above, say <span class="math-container">$S$</span>, is uncountable.</p>
<p>Because <span class="math-container">$P(\mathbb{N})$</span> is uncountable with <span class="math-container">$|P(\mathbb{N})|=c$</span>, and <span class="math-container">$P(\mathbb{N}) = S \cup F$</span>, where <span class="math-container">$F$</span> is the family of all finite subsets of <span class="math-container">$\mathbb{N}$</span>. So if <span class="math-container">$S$</span> were countable, then <span class="math-container">$P(\mathbb{N})$</span> would be countable, as <span class="math-container">$F$</span> is clearly countable, which is a contradiction.</p>
<p>But I can't find the cardinality of <span class="math-container">$S$</span>.</p>
<p>Now,
<span class="math-container">$$S \subset P(\mathbb{N})$$</span>
<span class="math-container">$$\implies |S| \le |P(\mathbb{N})|=c$$</span></p>
<p>And,
<span class="math-container">$$|S| \gt \aleph_0$$</span>
Again, <span class="math-container">$$\aleph_0 \lt c$$</span></p>
<p>But in order to say, <span class="math-container">$$|S|=c$$</span> I have to use <strong>Continuum Hypothesis</strong>.</p>
<p>But can I do that? I know it's a hypothesis, and also that with current mathematical tools it cannot be proved false (I got to know this from the internet).</p>
| TonyK | 1,508 | <p>Your set <span class="math-container">$S$</span> is the power set of <span class="math-container">$\aleph_0$</span> (cardinality <span class="math-container">$2^{\aleph_0}=\mathfrak{c}$</span>) minus the set of finite subsets of <span class="math-container">$\aleph_0$</span> (cardinality <span class="math-container">$\aleph_0$</span>). So the answer is <span class="math-container">$2^{\aleph_0}-\aleph_0$</span>, which is...? (This doesn't require the Continuum Hypothesis.)</p>
<p><strong>Edited to add:</strong> The OP asks in a comment why the Axiom of Choice is not required to show that <span class="math-container">$2^{\aleph_0}-\aleph_0=2^{\aleph_0}$</span>. If we can exhibit injections in both directions between <span class="math-container">$S$</span> and <span class="math-container">$\mathcal{P}(\Bbb N)$</span>, then we can invoke the <a href="https://en.wikipedia.org/wiki/Schr%C3%B6der%E2%80%93Bernstein_theorem" rel="nofollow noreferrer">Schröder–Bernstein theorem</a>.</p>
<p>The injection from <span class="math-container">$S$</span> to <span class="math-container">$\mathcal{P}(\Bbb N)$</span> is just the identity.</p>
<p>To construct an injection from all subsets of <span class="math-container">$\Bbb N$</span> to infinite subsets of <span class="math-container">$\Bbb N$</span>, let <span class="math-container">$A$</span> be the set of infinite subsets of <span class="math-container">$\Bbb N$</span> that don't contain <span class="math-container">$1$</span>, and let <span class="math-container">$B$</span> be the set of infinite subsets of <span class="math-container">$\Bbb N$</span> that do contain <span class="math-container">$1$</span> (I am assuming the convention <span class="math-container">$\Bbb N=\{1,2,3,\ldots\}$</span>). Now we can map the infinite subsets to <span class="math-container">$A$</span> by adding <span class="math-container">$1$</span> to each element; and we can map the finite subsets to <span class="math-container">$B$</span> by first mapping them to <span class="math-container">$A$</span> in the same way, and then taking their complement.</p>
|
3,335,257 | <p>This is the full question:</p>
<blockquote>
<p>show that if <span class="math-container">$p$</span> is an odd prime, then the number of ordered pair solutions of the congruence <span class="math-container">$x^2-y^2 \equiv a \pmod p$</span> is <span class="math-container">$p-1$</span> unless <span class="math-container">$a \equiv 0 \pmod p$</span>, in which case the number of solutions is <span class="math-container">$2p-1$</span>.</p>
</blockquote>
<p>Considering <span class="math-container">$x,y \,\in \mathbb{Z}$</span>. In the second case, since <span class="math-container">$a \equiv 0 \pmod p,$</span> it follows that <span class="math-container">$p\mid (x-y)(x+y)$</span>, but then there will be infinitely many solutions of this congruence relation because there are no bounds mentioned in the question on <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p>
<p>So is the question incomplete? Or is it implicitly stated that <span class="math-container">$0\leq x,y<p$</span>? For this bound, do we get <span class="math-container">$2p-1$</span> solutions?</p>
| Alessandro Cigna | 641,678 | <p>Trying to find integer solutions <span class="math-container">$0\le x,y< p$</span> of <span class="math-container">$x^2-y^2\equiv 0 \pmod p$</span>: as you observed, this implies that <span class="math-container">$p\mid (x-y)(x+y)$</span>, and since <span class="math-container">$p$</span> is prime, it follows that <span class="math-container">$p\mid x+y$</span> or <span class="math-container">$p\mid x-y$</span>. For every <span class="math-container">$1\le x<p$</span> you have the solutions <span class="math-container">$(x,y)=(x,p-x)$</span> and <span class="math-container">$(x,y)=(x,x)$</span>, which gives <span class="math-container">$2p-2$</span> couples in total, and for <span class="math-container">$x=0$</span> the unique solution <span class="math-container">$(x,y)=(0,0)$</span>. That makes <span class="math-container">$2p-1$</span> solutions in total.</p>
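<p>A brute-force check (a sketch in Python, counting ordered pairs <span class="math-container">$0\le x,y<p$</span>; the helper name is mine) confirms both counts for small odd primes:</p>

```python
def count_solutions(p, a):
    # number of ordered pairs (x, y) with 0 <= x, y < p and x^2 - y^2 = a (mod p)
    return sum(1 for x in range(p) for y in range(p) if (x*x - y*y - a) % p == 0)

for p in (3, 5, 7, 11):
    assert count_solutions(p, 0) == 2*p - 1                          # the a = 0 case
    assert all(count_solutions(p, a) == p - 1 for a in range(1, p))  # the a != 0 cases
```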
|
557,146 | <p>I have a clock time in the form HH.MM.SS (hours, minutes, seconds), and I need a way to convert it to one number so that I can determine which time is bigger.
for example
00.00.00 is the smallest
23.59.59 is the biggest</p>
<p>11.03.50 > 11.02.57
etc..</p>
<p>I thought to do the following</p>
<p>hours*10000+minutes*100+sec
but I'm not sure it works for all cases... how can I prove it works?</p>
| Trevor Wilson | 39,378 | <p>If all you need to do is compare two times in HH:MM:SS format, say $(h,m,s)$ and $(h',m',s')$, then you do not need to convert to seconds. You can just use the lexicographical ordering:</p>
<p>\begin{align*}
(h,m,s) <_{\text{lex}} (h',m',s')
\iff &h < h', \text{ or } \\
&h = h' \mathbin{\And} m < m', \text{ or } \\
&h = h' \mathbin{\And} m = m' \mathbin{\And} s < s'.
\end{align*}</p>
<p>This might be more efficient than converting to seconds if each triple $(h,m,s)$ is only involved in one comparison.</p>
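<p>In a language with built-in tuple comparison, the lexicographic order above comes for free. A sketch in Python (the helper name <code>pack</code> is mine, not from the question) checking that it agrees with the asker's packing formula:</p>

```python
import random

def pack(h, m, s):
    # the asker's encoding: valid because minutes and seconds never exceed 99
    return h * 10000 + m * 100 + s

random.seed(0)
for _ in range(1000):
    t1 = (random.randrange(24), random.randrange(60), random.randrange(60))
    t2 = (random.randrange(24), random.randrange(60), random.randrange(60))
    # Python compares tuples lexicographically, exactly the order defined above
    assert (t1 < t2) == (pack(*t1) < pack(*t2))
```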
|
3,294,446 | <p>I'm currently working on a definite integral and am hoping to find alternative methods to evaluate it. Here I will address the integral:
<span class="math-container">\begin{equation}
I_n = \int_0^\frac{\pi}{2}\ln^n\left(\tan(x)\right)\:dx
\end{equation}</span>
Where <span class="math-container">$n \in \mathbb{N}$</span>. We first observe that when <span class="math-container">$n = 2k + 1$</span> (<span class="math-container">$k\in \mathbb{Z}, k \geq 0$</span>) that,
<span class="math-container">\begin{equation}
I_{2k + 1} = \int_0^\frac{\pi}{2}\ln^{2k + 1}\left(\tan(x)\right)\:dx = 0
\end{equation}</span>
This can be easily shown by noticing that the integrand is odd over the region of integration about <span class="math-container">$x = \frac{\pi}{4}$</span>. Thus, we need only resolve the cases when <span class="math-container">$n = 2k$</span>, i.e.
<span class="math-container">\begin{equation}
I_{2k} = \int_0^\frac{\pi}{2}\ln^{2k}\left(\tan(x)\right)\:dx
\end{equation}</span>
Here I have isolated two methods.</p>
<hr>
<p>Method 1:</p>
<p>Let <span class="math-container">$u = \tan(x)$</span>:
<span class="math-container">\begin{equation}
I_{2k} = \int_0^\infty\ln^{2k}\left(u\right) \cdot \frac{1}{u^2 + 1}\:du = \int_0^\infty \frac{\ln^{2k}\left(u\right)}{u^2 + 1}\:du
\end{equation}</span>
We note that:
<span class="math-container">\begin{equation}
\ln^{2k}(u) = \frac{d^{2k}}{dy^{2k}}\big[u^y\big]_{y = 0}
\end{equation}</span>
By Leibniz's Integral Rule:
<span class="math-container">\begin{align}
I_{2k} &= \int_0^\infty \frac{\frac{d^{2k}}{dy^{2k}}\big[u^y\big]_{y = 0}}{u^2 + 1}\:du = \frac{d^{2k}}{dy^{2k}} \left[ \int_0^\infty \frac{u^y}{u^2 + 1}\:du \right]_{y = 0} \nonumber \\
&= \frac{d^{2k}}{dy^{2k}} \left[ \frac{1}{2}B\left(1 - \frac{y + 1}{2}, \frac{y + 1}{2} \right) \right]_{y = 0} =\frac{1}{2}\frac{d^{2k}}{dy^{2k}} \left[ \Gamma\left(1 - \frac{y + 1}{2}\right)\Gamma\left( \frac{y + 1}{2} \right) \right]_{y = 0} \nonumber \\
&=\frac{1}{2}\frac{d^{2k}}{dy^{2k}} \left[ \frac{\pi}{\sin\left(\pi\left(\frac{y + 1}{2}\right)\right)} \right]_{y = 0} = \frac{\pi}{2}\frac{d^{2k}}{dy^{2k}} \left[\operatorname{cosec}\left(\frac{\pi}{2}\left(y + 1\right)\right) \right]_{y = 0}
\end{align}</span></p>
<hr>
<p>Method 2:</p>
<p>We first observe that:
<span class="math-container">\begin{align}
\ln^{2k}\left(\tan(x)\right) &= \big[\ln\left(\sin(x)\right) - \ln\left(\cos(x)\right) \big]^{2k} \nonumber \\
&= \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right)
\end{align}</span>
By the linearity property of proper integrals we observe:
<span class="math-container">\begin{align}
I_{2k} &= \int_0^\frac{\pi}{2} \left[ \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right) \right]\:dx \nonumber \\
&= \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \int_0^\frac{\pi}{2} \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right)\:dx \nonumber \\
& = \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j F_{j,2k - j}(0,0)
\end{align}</span>
Where
<span class="math-container">\begin{equation}
F_{n,m}(a,b) = \int_0^\frac{\pi}{2} \ln^n\left(\cos(x)\right)\ln^{m}\left(\sin(x)\right)\:dx
\end{equation}</span>
Utilising the same identity given before, this becomes:
<span class="math-container">\begin{align}
F_{n,m}(a,b) &= \int_0^\frac{\pi}{2} \frac{d^n}{da^n}\big[\sin^a(x) \big] \cdot \frac{d^m}{db^m}\big[\cos^b(x) \big]\:dx \nonumber \\
&= \frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[ \int_0^\frac{\pi}{2} \sin^a(x)\cos^b(x)\:dx\right] = \frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[\frac{1}{2} B\left(\frac{a + 1}{2}, \frac{b + 1}{2} \right)\right] \nonumber \\
&= \frac{1}{2}\frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[\frac{\Gamma\left(\frac{a + 1}{2}\right)\Gamma\left(\frac{b + 1}{2}\right)}{\Gamma\left(\frac{a + b}{2} + 1\right)}\right]
\end{align}</span>
Thus,
<span class="math-container">\begin{equation}
I_{2k} = \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \frac{1}{2}\frac{\partial^{2k }}{\partial a^j \partial b^{2k - j}}\left[\frac{\Gamma\left(\frac{a + 1}{2}\right)\Gamma\left(\frac{b + 1}{2}\right)}{\Gamma\left(\frac{a + b}{2} + 1\right)}\right]_{(a,b) = (0,0)}
\end{equation}</span></p>
<hr>
<p>So, I'm curious, are there any other Real Based Methods to evaluate this definite integral?</p>
| Stefan Lafon | 582,769 | <p><span class="math-container">$$\begin{split}
I_{2k} &= \int_0^\infty\frac{\ln^{2k}u}{u^2 + 1}du \\
&= \int_0^1\frac{\ln^{2k}u}{u^2 + 1}du +\int_1^{+\infty}\frac{\ln^{2k}\left(u\right)}{u^2 + 1}du \\
&=\int_0^1\frac{\ln^{2k}u}{u^2 + 1}du +\int_0^{1}\frac{\ln^{2k}t}{t^2 + 1}dt \,\,\,\left(\text{by } u\rightarrow \frac
1 t\right)\\
&=2\int_0^1\frac{\ln^{2k}u}{u^2 + 1}du\\
&=2\sum_{n\in\mathbb N}(-1)^n\int_0^1u^{2n}\ln^{2k} (u) du
\end{split}$$</span>
Now, let <span class="math-container">$$J_{p,q}=\int_0^1u^p\ln^q (u)du$$</span>
If <span class="math-container">$q\geq 1$</span>, by integration by parts,
<span class="math-container">$$J_{p, q}=\left. \frac{u^{p+1}}{p+1}\ln^q u\right]_0^1-\int_0^1\frac{u^{p+1}}{p+1}q\ln^{q-1}(u)\frac{du}u=-\frac q{p+1}J_{p,q-1}$$</span>
Consequently, if <span class="math-container">$q\geq 1$</span>, <span class="math-container">$$J_{p,q} = (-1)^{q}\frac{q!}{(p+1)^{q+1}}$$</span>
We conclude that
<span class="math-container">$$I_{2k}=2\cdot (2k)!\sum_{n\in\mathbb N}\frac{(-1)^n}{(2n+1)^{2k+1}}$$</span>
Following Zachy's suggestion, the last sum is known as the <a href="https://en.wikipedia.org/wiki/Dirichlet_beta_function" rel="nofollow noreferrer">Dirichlet Beta function</a>
<span class="math-container">$$I_{2k}=2\cdot (2k)!\beta(2k+1)$$</span>
Finally, values of <span class="math-container">$\beta$</span> at odd numbers are <a href="https://en.wikipedia.org/wiki/Dirichlet_beta_function#Special_values" rel="nofollow noreferrer">known</a> in terms of <a href="https://en.wikipedia.org/wiki/Euler_number" rel="nofollow noreferrer">Euler's numbers</a> and we get
<span class="math-container">$$\boxed{I_{2k}=2\frac{(-1)^kE_{2k}\pi^{2k+1}}{4^{k+1}}}$$</span></p>
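<p>The boxed result is easy to sanity-check numerically; a sketch in Python comparing the series <span class="math-container">$2\,(2k)!\,\beta(2k+1)$</span> against the closed form for <span class="math-container">$k=1$</span> (where <span class="math-container">$E_2=-1$</span>):</p>

```python
import math

def dirichlet_beta(s, terms=100000):
    # partial sum of the alternating series for the Dirichlet beta function (s > 0)
    return sum((-1)**n / (2*n + 1)**s for n in range(terms))

k = 1
series = 2 * math.factorial(2*k) * dirichlet_beta(2*k + 1)
closed = 2 * (-1)**k * (-1) * math.pi**(2*k + 1) / 4**(k + 1)  # E_2 = -1
assert abs(series - closed) < 1e-9  # both equal pi^3 / 8
```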
|
3,294,446 | <p>I'm currently working on a definite integral and am hoping to find alternative methods to evaluate it. Here I will address the integral:
<span class="math-container">\begin{equation}
I_n = \int_0^\frac{\pi}{2}\ln^n\left(\tan(x)\right)\:dx
\end{equation}</span>
Where <span class="math-container">$n \in \mathbb{N}$</span>. We first observe that when <span class="math-container">$n = 2k + 1$</span> (<span class="math-container">$k\in \mathbb{Z}, k \geq 0$</span>) that,
<span class="math-container">\begin{equation}
I_{2k + 1} = \int_0^\frac{\pi}{2}\ln^{2k + 1}\left(\tan(x)\right)\:dx = 0
\end{equation}</span>
This can be easily shown by noticing that the integrand is odd over the region of integration about <span class="math-container">$x = \frac{\pi}{4}$</span>. Thus, we need only resolve the cases when <span class="math-container">$n = 2k$</span>, i.e.
<span class="math-container">\begin{equation}
I_{2k} = \int_0^\frac{\pi}{2}\ln^{2k}\left(\tan(x)\right)\:dx
\end{equation}</span>
Here I have isolated two methods.</p>
<hr>
<p>Method 1:</p>
<p>Let <span class="math-container">$u = \tan(x)$</span>:
<span class="math-container">\begin{equation}
I_{2k} = \int_0^\infty\ln^{2k}\left(u\right) \cdot \frac{1}{u^2 + 1}\:du = \int_0^\infty \frac{\ln^{2k}\left(u\right)}{u^2 + 1}\:du
\end{equation}</span>
We note that:
<span class="math-container">\begin{equation}
\ln^{2k}(u) = \frac{d^{2k}}{dy^{2k}}\big[u^y\big]_{y = 0}
\end{equation}</span>
By Leibniz's Integral Rule:
<span class="math-container">\begin{align}
I_{2k} &= \int_0^\infty \frac{\frac{d^{2k}}{dy^{2k}}\big[u^y\big]_{y = 0}}{u^2 + 1}\:du = \frac{d^{2k}}{dy^{2k}} \left[ \int_0^\infty \frac{u^y}{u^2 + 1}\:du \right]_{y = 0} \nonumber \\
&= \frac{d^{2k}}{dy^{2k}} \left[ \frac{1}{2}B\left(1 - \frac{y + 1}{2}, \frac{y + 1}{2} \right) \right]_{y = 0} =\frac{1}{2}\frac{d^{2k}}{dy^{2k}} \left[ \Gamma\left(1 - \frac{y + 1}{2}\right)\Gamma\left( \frac{y + 1}{2} \right) \right]_{y = 0} \nonumber \\
&=\frac{1}{2}\frac{d^{2k}}{dy^{2k}} \left[ \frac{\pi}{\sin\left(\pi\left(\frac{y + 1}{2}\right)\right)} \right]_{y = 0} = \frac{\pi}{2}\frac{d^{2k}}{dy^{2k}} \left[\operatorname{cosec}\left(\frac{\pi}{2}\left(y + 1\right)\right) \right]_{y = 0}
\end{align}</span></p>
<hr>
<p>Method 2:</p>
<p>We first observe that:
<span class="math-container">\begin{align}
\ln^{2k}\left(\tan(x)\right) &= \big[\ln\left(\sin(x)\right) - \ln\left(\cos(x)\right) \big]^{2k} \nonumber \\
&= \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right)
\end{align}</span>
By the linearity property of proper integrals we observe:
<span class="math-container">\begin{align}
I_{2k} &= \int_0^\frac{\pi}{2} \left[ \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right) \right]\:dx \nonumber \\
&= \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \int_0^\frac{\pi}{2} \ln^j\left(\cos(x)\right)\ln^{2k - j}\left(\sin(x)\right)\:dx \nonumber \\
& = \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j F_{j,2k - j}(0,0)
\end{align}</span>
Where
<span class="math-container">\begin{equation}
F_{n,m}(a,b) = \int_0^\frac{\pi}{2} \ln^n\left(\cos(x)\right)\ln^{m}\left(\sin(x)\right)\:dx
\end{equation}</span>
Utilising the same identity given before, this becomes:
<span class="math-container">\begin{align}
F_{n,m}(a,b) &= \int_0^\frac{\pi}{2} \frac{d^n}{da^n}\big[\sin^a(x) \big] \cdot \frac{d^m}{db^m}\big[\cos^b(x) \big]\:dx \nonumber \\
&= \frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[ \int_0^\frac{\pi}{2} \sin^a(x)\cos^b(x)\:dx\right] = \frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[\frac{1}{2} B\left(\frac{a + 1}{2}, \frac{b + 1}{2} \right)\right] \nonumber \\
&= \frac{1}{2}\frac{\partial^{n + m}}{\partial a^n \partial b^m}\left[\frac{\Gamma\left(\frac{a + 1}{2}\right)\Gamma\left(\frac{b + 1}{2}\right)}{\Gamma\left(\frac{a + b}{2} + 1\right)}\right]
\end{align}</span>
Thus,
<span class="math-container">\begin{equation}
I_{2k} = \sum_{j = 0}^{2k} { 2k \choose j}(-1)^j \frac{1}{2}\frac{\partial^{2k }}{\partial a^j \partial b^{2k - j}}\left[\frac{\Gamma\left(\frac{a + 1}{2}\right)\Gamma\left(\frac{b + 1}{2}\right)}{\Gamma\left(\frac{a + b}{2} + 1\right)}\right]_{(a,b) = (0,0)}
\end{equation}</span></p>
<hr>
<p>So, I'm curious, are there any other Real Based Methods to evaluate this definite integral?</p>
| Lai | 732,917 | <p>Using the reflection property of the Beta function, we have
<span class="math-container">$$
\begin{aligned}
\int_0^{\frac{\pi}{2}} \tan ^a x d x=\int_0^{\frac{\pi}{2}} \sin ^a x \cos ^{-a} x d x =& \frac{1}{2} B\left(\frac{a+1}{2}, \frac{1-a}{2}\right)=\frac{\pi}{2 \sin \left(\frac{a+1}{2} \pi\right)}=\frac{\pi}{2} \sec \frac{a \pi}{2}
\end{aligned}
$$</span>
<span class="math-container">$$
\begin{aligned}
\int_0^{\frac{\pi}{2}} \ln ^n(\tan x) d x =& \frac{\partial^n}{\partial a^n}\left.\left(\frac{\pi}{2} \sec \frac{a \pi}{2}\right) \right|_{a=0} \\
=&\left.\frac{\pi^{n+1}}{2^{n+1}} \frac{\partial^n}{\partial x^n}(\sec x)\right|_{x=0}\\
\end{aligned}
$$</span>
Now we can conclude that</p>
<p><span class="math-container">$$\boxed{\int_0^{\frac{\pi}{2}} \ln ^n(\tan x) d x =\left\{\begin{array}{l}
0 \qquad\qquad \text { if } n \text { is odd } \\
\dfrac{\pi^{n+1}\left|E_{n}\right|}{2^{n+1}} \quad \text { if } n \text { is even }
\end{array}\right.}$$</span></p>
<p>where <span class="math-container">$E_n$</span> is the Euler Number whose formula comes from <a href="http://www.wolframalpha.com/input/?i=nth%20derivative%20of%20sec%20x" rel="nofollow noreferrer">WA</a>.</p>
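<p>This boxed formula and the one in the previous answer agree, since <span class="math-container">$|E_{2k}|=(-1)^kE_{2k}$</span> and <span class="math-container">$4^{k+1}=2\cdot 2^{2k+1}$</span>. A quick numerical cross-check in Python, with the first Euler numbers hard-coded:</p>

```python
import math

E = {2: -1, 4: 5, 6: -61}  # Euler numbers E_2, E_4, E_6

def dirichlet_beta(s, terms=100000):
    # partial sum of the alternating series for the Dirichlet beta function
    return sum((-1)**n / (2*n + 1)**s for n in range(terms))

for n in (2, 4, 6):
    k = n // 2
    lai = math.pi**(n + 1) * abs(E[n]) / 2**(n + 1)              # this answer's form
    stefan = 2 * (-1)**k * E[n] * math.pi**(n + 1) / 4**(k + 1)  # previous answer's form
    series = 2 * math.factorial(n) * dirichlet_beta(n + 1)       # 2 (2k)! beta(2k+1)
    assert abs(lai - stefan) < 1e-9 and abs(lai - series) < 1e-6
```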
|
1,141,412 | <p>Prove that the ring of integers of $\mathbb Q (\sqrt{-5})$ does not have unique factorisation.</p>
<p>Since $-5\equiv 3\pmod 4$, I know that the ring of integers of $\mathbb Q (\sqrt{-5})$ is $\mathbb Z [\sqrt{-5}]$. </p>
<p>I assume the way to prove that this does not have a unique factorisation is to give two different factorisations, but what exactly do I use to factorise? Do I consider the polynomial $x+\sqrt{-5}$?</p>
| Robert Cardona | 29,193 | <p>In a UFD, an element is irreducible if and only if it is prime.</p>
<p>Observe that <span class="math-container">$2$</span> is irreducible in <span class="math-container">$\mathbb Z + \mathbb Z\sqrt{-5}$</span>: Suppose <span class="math-container">$$2 = (a + b \sqrt{-5})(c + d \sqrt{-5}),$$</span> taking the norm of both sides gives us <span class="math-container">$$4 = (a^2 + 5b^2)(c^2 + 5d^2)$$</span> which means <span class="math-container">$a^2 + 5b^2 = 1, 2$</span> or <span class="math-container">$4$</span>. If <span class="math-container">$a^2 + 5b^2 = 1$</span>, then <span class="math-container">$a = \pm 1$</span> and <span class="math-container">$b = 0$</span>, which means <span class="math-container">$a + b \sqrt{-5} = \pm 1$</span> is a unit and we're done. If <span class="math-container">$a^2 + 5b^2 = 4$</span>, then <span class="math-container">$a = \pm 2$</span> and <span class="math-container">$b = 0$</span>, which means <span class="math-container">$$c + d \sqrt{-5} = \frac{2}{a + b\sqrt{-5}} = \frac{2}{\pm 2} = \pm 1,$$</span> a unit, which means we're done. Notice that <span class="math-container">$a^2 + 5b^2 = 2$</span> can never happen: <span class="math-container">$b$</span> will have to be zero, because if it's not, then the sum is at least <span class="math-container">$5$</span>, which means it's greater than <span class="math-container">$2$</span>; but then <span class="math-container">$a^2 = 2$</span>, which only holds when <span class="math-container">$a = \sqrt 2 \notin \mathbb Z$</span>, so this case can't occur. Conclude by definition that <span class="math-container">$2$</span> is irreducible in <span class="math-container">$\mathbb Z + \mathbb Z \sqrt{-5}$</span>.</p>
<p>Observe that <span class="math-container">$2 \mid 6 = (1 + \sqrt{-5})(1 - \sqrt{-5})$</span> but <span class="math-container">$2 \nmid (1 + \sqrt{-5}), (1 - \sqrt{-5})$</span>. </p>
<p>Say, by way of contradiction, that <span class="math-container">$2 \mid 1 + \sqrt{-5}$</span>, then there exist <span class="math-container">$a, b \in \mathbb Z$</span> such that <span class="math-container">$1 + \sqrt{-5} = 2(a + b \sqrt{-5})$</span> which means <span class="math-container">$2a = 1$</span> and <span class="math-container">$2b = 1$</span> which can only happen if <span class="math-container">$a = b = 1/2 \notin \mathbb Z$</span>, a contradiction. Similar reasoning works for <span class="math-container">$1 - \sqrt{-5}$</span>.</p>
<p>Conclude that <span class="math-container">$2$</span> is not prime, but it is irreducible. Hence we're not in a unique factorization domain.</p>
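<p>The norm computations above are easy to confirm by brute force; a sketch in Python checking that no element has norm <span class="math-container">$2$</span> (within a generous search range) and that <span class="math-container">$6$</span> really factors both ways:</p>

```python
import itertools, math

def norm(a, b):
    # N(a + b*sqrt(-5)) = a^2 + 5*b^2
    return a*a + 5*b*b

# no element of small height has norm 2 (indeed a^2 + 5b^2 >= 5 whenever b != 0)
assert all(norm(a, b) != 2 for a, b in itertools.product(range(-20, 21), repeat=2))

# 6 = 2 * 3 = (1 + sqrt(-5)) * (1 - sqrt(-5)), checked with complex arithmetic
s = complex(0, math.sqrt(5))
assert abs((1 + s) * (1 - s) - 6) < 1e-9
assert norm(1, 1) == norm(1, -1) == 6  # N(1 +/- sqrt(-5)) = 6
```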
|
475,221 | <p>I am working on problem #25 of <em>Linear Algebra and its Applications</em> and the question asks:</p>
<blockquote>
<p>Find an equation involving $g$, $h$, and $k$ that makes this augmented matrix correspond to a consistent system:
$$\left(\begin{array}{ccc|c}
1& -4& 7& g \\
0& 3& -5& h \\
-2& 5& -9& k
\end{array}\right).$$
After I do $R_3 \gets 2R_1 + R_3$ and $R_3 \gets R_2 + R_3$
I end up with
$$\left(\begin{array}{ccc|c} 1& -4& 7& g \\ 0& 3& -5& h \\ 0& 0& 0& 2g+k+h
\end{array}\right).$$</p>
</blockquote>
<p>For this to be a consistent system the third row should be $\begin{pmatrix}0& 0& 0& 0\end{pmatrix}$,
so in order for this augmented matrix to be a consistent system then $2g + k + h =0$</p>
<p>The answer in the back of the book is $k - 2g + k = 0$.</p>
<p>Where am I going wrong with my calculation? Or is the book wrong?</p>
| Eric Auld | 76,333 | <p>$$\frac{y^2}{y^2+d^2} = \frac{y^2 + d^2}{y^2+d^2} - \frac{d^2}{y^2+d^2}$$</p>
<p>The latter is an arctangent integral.</p>
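<p>Carrying the hint through gives <span class="math-container">$\int \frac{y^2}{y^2+d^2}\,dy = y - d\arctan(y/d) + C$</span>; a numerical sketch in Python checking this antiderivative by central-difference differentiation:</p>

```python
import math

d = 3.0
F = lambda y: y - d * math.atan(y / d)   # proposed antiderivative
f = lambda y: y * y / (y * y + d * d)    # the integrand

h = 1e-6
for y in (-5.0, -1.0, 0.5, 2.0, 10.0):
    numerical = (F(y + h) - F(y - h)) / (2 * h)  # central difference ~ F'(y)
    assert abs(numerical - f(y)) < 1e-5
```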
|
475,221 | <p>I am working on problem #25 of <em>Linear Algebra and its Applications</em> and the question asks:</p>
<blockquote>
<p>Find an equation involving $g$, $h$, and $k$ that makes this augmented matrix correspond to a consistent system:
$$\left(\begin{array}{ccc|c}
1& -4& 7& g \\
0& 3& -5& h \\
-2& 5& -9& k
\end{array}\right).$$
After I do $R_3 \gets 2R_1 + R_3$ and $R_3 \gets R_2 + R_3$
I end up with
$$\left(\begin{array}{ccc|c} 1& -4& 7& g \\ 0& 3& -5& h \\ 0& 0& 0& 2g+k+h
\end{array}\right).$$</p>
</blockquote>
<p>For this to be a consistent system the third row should be $\begin{pmatrix}0& 0& 0& 0\end{pmatrix}$,
so in order for this augmented matrix to be a consistent system then $2g + k + h =0$</p>
<p>The answer in the back of the book is $k - 2g + k = 0$.</p>
<p>Where am I going wrong with my calculation? Or is the book wrong?</p>
| user64494 | 64,494 | <p>The Maple code <code>with(Student[Calculus1]): IntTutor(y^2/(y^2+d^2), y);</code> produced the output which can be seen <a href="http://rapidshare.com/files/1020936765/screen2.24.08.13.docx" rel="nofollow">here</a>. It coincides with the excellent answer by Eric Auld.</p>
|
2,751,417 | <p>I want to prove the following for real invertible $2 \times 2$ matrices $A$ and $B$:</p>
<p>$\det(ABA^{-1}B^{-1})=1$</p>
<p>I tried to write it out for random numbers, and that seems to work out well. But when I tried to write it out in general, it became far too much paperwork. So I think there must be a simpler/shorter method, but I don't know what.</p>
<p>Thank you in advance for your comments!</p>
| Fred | 380,717 | <p>Hints: $\det(XY)=\det(X) \det(Y)$, hence, if $X$ is invertible: $1=\det(X) \det(X^{-1})$.</p>
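<p>By multiplicativity, $\det(ABA^{-1}B^{-1})=\det(A)\det(B)\det(A)^{-1}\det(B)^{-1}=1$. A brute-force sketch in Python with explicit $2\times 2$ arithmetic (helper names are mine):</p>

```python
import random

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

random.seed(1)
for _ in range(100):
    A = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(2)]
    B = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(2)]
    if abs(det2(A)) < 1.0 or abs(det2(B)) < 1.0:
        continue  # skip nearly singular matrices to keep the float check tight
    C = matmul(matmul(A, B), matmul(inv2(A), inv2(B)))
    assert abs(det2(C) - 1.0) < 1e-4
```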
|
2,546,792 | <blockquote>
<p><strong>Definition.</strong> A metric space $M$ is connected if there are no disjoint open sets $A$ and $B$ in $M$ with $M=A\cup B$ other than the pair consisting of the empty set and the total space $M$. A subset $C\subset M$ is connected if the subspace $C$ is connected.</p>
<p><strong>Definition</strong>. A connected component of $x\in M$ in a metric space $M$ is the union $C_x$ of all connected subsets of $M$ that contain the point $x$.</p>
</blockquote>
<p>I want to know how many connected components the following set has:
$$
\{(x,y)\in\mathbb{R}^2:(xy)^2=xy\}
$$
I know it's the union of the axes $x = 0$ and $y = 0$ and the graph of the function $f(x) = 1/x$ (for $x\ne 0$), but I don't know what to do with the definitions. Can someone help?</p>
| MathsLearner | 451,337 | <p>We divide the shaded portion into six equal parts by three lines, each of them joining the centroid and one vertex.</p>
<p><a href="https://i.stack.imgur.com/dpxlG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dpxlG.jpg" alt="enter image description here"></a></p>
<p>So the area of the shaded region (A) = 6 * the area of one segment (S)</p>
<p>$$\tan(\phi) = \frac{CD}{DG} = \frac{\sqrt{6}/2}{1/\sqrt{2}} = \sqrt{3}$$
So, $$\phi = 60^\circ$$
The orange lines form part of a regular hexagon inscribed in a circle with centre O.</p>
<p>The area of one segment (S) = (area of sector OCPGO - area of triangle OCG)
$= \frac{\pi}{6}(\sqrt{2})^2 - \frac{\sqrt{3}}{4}(\sqrt{2})^2$</p>
<p>So, the total area of the shaded region is $6S = 2\pi - 3\sqrt{3}$.</p>
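<p>The last two steps are pure arithmetic and check out numerically; a short sanity sketch in Python (radius $\sqrt{2}$, six $60^\circ$ segments):</p>

```python
import math

r = math.sqrt(2)
sector = math.pi / 6 * r**2          # 60-degree sector of a circle of radius sqrt(2)
triangle = math.sqrt(3) / 4 * r**2   # equilateral triangle with side sqrt(2)
total = 6 * (sector - triangle)
assert abs(total - (2 * math.pi - 3 * math.sqrt(3))) < 1e-9
```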
|
820,490 | <p>I am studying pre-calculus mathematics at the moment, and I need help verifying that $\sin (\theta)$ and $\cos (\theta)$ are functions. I want to demonstrate that for any angle $\theta$ there is only one associated value of $\sin (\theta)$ and $\cos (\theta)$. How do I go about showing this?</p>
| Fly by Night | 38,495 | <p>The proof is based simply on similar triangles. If a right-angled triangle has an angle $\theta$ then the other two angles are $90^{\circ}$ and $(90-\theta)^{\circ}$. If two triangles have the same angles then they are similar.</p>
<p>My picture shows two similar triangles: $\triangle OAB$ and $\triangle OA'B'$. </p>
<p>Since $\theta = \angle AOB$ then, by definition
$$\sin\theta = \frac{\|AB\|}{\|OB\|}$$</p>
<p>Since $\theta = \angle A'OB'$ then, by definition
$$\sin\theta = \frac{\|A'B'\|}{\|OB'\|}$$</p>
<p>We can show that $\sin \theta$ has a single, unique value if we can show that the two ratios agree.</p>
<p>Let $T$ be the linear transformation given by an enlargement, centre $O$, with scale factor $\lambda$, such that $T(A) = A'$ and $T(B) = B'$. We have $\|A'B'\|=\lambda\|AB\|$ and $\|OB'\| = \lambda\|OB\|$, hence
$$\frac{\|A'B'\|}{\|OB'\|} = \frac{\lambda\|AB\|}{\lambda\|OB\|}=\frac{\|AB\|}{\|OB\|}$$</p>
<p>This shows that the ratio of the opposite side to the hypotenuse is the same for any two similar, right-angled triangles with angle $\theta$. That means that $\sin\theta$ is uniquely, well-defined.</p>
<p>(N.B. Similarity allows rotation and reflection as well as enlargement. However, rotations and reflections preserve lengths and so preserve ratios of lengths.)</p>
<p><img src="https://i.stack.imgur.com/CXmZ5.png" alt="enter image description here"></p>
|
820,490 | <p>I am studying pre-calculus mathematics at the moment, and I need help verifying that $\sin (\theta)$ and $\cos (\theta)$ are functions. I want to demonstrate that for any angle $\theta$ there is only one associated value of $\sin (\theta)$ and $\cos (\theta)$. How do I go about showing this?</p>
| mursalin | 154,755 | <p>If you want to verify this intuitively for yourself, then this answer could prove to be helpful. But remember, <em>this is not a proof</em>.</p>
<p>First try to visualize what <span class="math-container">$\sin\theta$</span> looks like. If you need some help, this is what it looks like.</p>
<p><a href="https://i.stack.imgur.com/vLA0e.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vLA0e.gif" alt="Alt text" /></a></p>
<p>Now remember that if <span class="math-container">$\sin\theta$</span> is a function, then for a certain <span class="math-container">$\theta$</span>, there can be only <em>one</em> value of <span class="math-container">$\sin\theta$</span>.</p>
<p>Now pick a random point on the <span class="math-container">$x-$</span>axis. This is going to be your value for <span class="math-container">$\theta$</span>. Now draw a line perpendicular to the <span class="math-container">$x-$</span>axis through that point. Now <em>is it possible for that line to intersect the sine curve at more than one point?</em> Try it out for different values of <span class="math-container">$x$</span>. No, it is completely impossible for that line to intersect that curve at more than one point.</p>
<p>This means that for a certain <span class="math-container">$\theta$</span>, there is only one associated value of <span class="math-container">$\sin\theta$</span>.</p>
<p><span class="math-container">$\sin\theta$</span> and <span class="math-container">$\cos\theta$</span> are pretty much the same. Notice that <span class="math-container">$\cos\theta=\sin(\theta +\frac{\pi}{2})$</span>. So if you move the sine curve by <span class="math-container">$\frac{\pi}{2}$</span> to the left, you're going to get the curve for <span class="math-container">$\cos\theta$</span>. And <span class="math-container">$\sin\theta$</span> and <span class="math-container">$\cos\theta$</span> <em>look</em> exactly the same. So you can use the same argument.</p>
<p>Finally, I repeat that this is not a proof. But I hope this helps you see it intuitively.</p>
<p>The figure is taken from <a href="http://jwilson.coe.uga.edu/emat6680/dunbar/Assignment1/sine_curves_KD.html" rel="nofollow noreferrer">here</a>.</p>
|
99,750 | <p>Let $G$ be a reductive group, $F$ a Frobenius morphism, and $B$ an $F$-stable Borel subgroup, and consider the finite groups $G^F$ and $U^F$, where $U$ is the unipotent radical of $B=UT$ ($T$ a torus).</p>
<p>I would like a reference for the description of the algebra $End_{G^F}( \mathbb{C}[G^F/U^F] )$. More precisely, I'd like to relate it with a structure of Hecke algebra, which is usually defined as $End_{G^F}( \mathbb{C}[G^F/B^F] ) := End_{G^F} ( Ind_{B^F}^{G^F} 1 )$. I hope to find that the endomorphism algebra is isomorphic to some kind of extension of the Hecke algebra by the torus $T$.</p>
<p>Thank you!</p>
| Robert Bryant | 13,972 | <p>It is not always clear what one means by 'the simplest description' of one of the exceptional Lie groups. In the examples you've given above, you quote descriptions of these groups as automorphisms of algebraic structures, and that's certainly a good way to do it, but that's not the only way, and one can argue that they are not the simplest in terms of a very natural criterion, which I'll now describe:</p>
<p>Say that you want to describe a subgroup $G\subset \text{GL}(V)$ where $V$ is a vector space (let's not worry too much about the ground field, but, if you like, take it to be $\mathbb{R}$ or $\mathbb{C}$ for the purposes of this discussion). One would like to be able to describe $G$ as the stabilizer of some element $\Phi\in\text{T}(V{\oplus}V^\ast)$, where $\mathsf{T}(W)$ is the tensor algebra of $W$. The tensor algebra $\mathsf{T}(V{\oplus}V^\ast)$ is reducible under $\text{GL}(V)$, of course, and, ideally, one would like to be able to chose a 'simple' defining $\Phi$, i.e., one that lies in some $\text{GL}(V)$-irreducible submodule $\mathsf{S}(V)\subset\mathsf{T}(V{\oplus}V^\ast)$.</p>
<p>Now, all of the classical groups are defined in this way, and, in some sense, these descriptions are as simple as possible. For example, if $V$ with $\dim V = 2m$ has a symplectic structure $\omega\in \Lambda^2(V^\ast)$, then the classical group $\text{Sp}(\omega)\subset\text{GL}(V)$ has codimension $m(2m{-}1)$ in $\text{GL}(V)$, which is exactly the dimension of the space $\Lambda^2(V^\ast)$. Thus, the condition of stabilizing $\omega$ provides exactly the number of equations one needs to cut out $\text{Sp}(\omega)$ in $\text{GL}(V)$. Similarly, the standard definitions of the other classical groups as subgroups of linear transformations that stabilize an element in a $\text{GL}(V)$-irreducible subspace of $\mathsf{T}(V{\oplus}V^\ast)$ are as 'efficient' as possible.</p>
<p>In another direction, if $V$ has the structure of an algebra, one can regard the multiplication as an element $\mu\in \text{Hom}\bigl(V\otimes V,V\bigr)= V^\ast\otimes V^\ast \otimes V$, and the automorphisms of the algebra $A = (V,\mu)$ are, by definition, the elements of $\text{GL}(V)$ whose extensions to $V^\ast\otimes V^\ast \otimes V$ fix the element $\mu$. Sometimes, if one knows that the multiplication is symmetric or skew-symmetric and/or traceless, one can regard $\mu$ as an element of a smaller vector space, such as $\Lambda^2(V^\ast)\otimes V$ or even the $\text{GL}(V)$-irreducible module $\bigl[\Lambda^2(V^\ast)\otimes V\bigr]_0$, i.e., the kernel of the natural contraction mapping $\Lambda^2(V^\ast)\otimes V\to V^\ast$.</p>
<p>This is the now-traditional definition of $G_2$, the simple Lie group of dimension $14$: One takes $V = \text{Im}\mathbb{O}\simeq \mathbb{R}^7$ and defines $G_2\subset \text{GL}(V)$ as the stabilizer of the vector cross-product $\mu\in \bigl[\Lambda^2(V^\ast)\otimes V\bigr]_0\simeq \mathbb{R}^{140}$. Note that the condition of stabilizing $\mu$ is essentially $140$ equations on elements of $\text{GL}(V)$ (which has dimension $49$), so this is many more equations than one would really need. (If you don't throw away the subspace defined by the identity element in $\mathbb{O}$, the excess of equations needed to define $G_2$ as a subgroup of $\text{GL}(\mathbb{O})$ is even greater.)</p>
<p>However, as was discovered by Engel and Reichel more than 100 years ago, one can define $G_2$ over $\mathbb{R}$ much more efficiently: Taking $V$ to have dimension $7$, there is an element $\phi\in \Lambda^3(V^\ast)$ such that $G_2$ is the stabilizer of $\phi$. In fact, since $G_2$ has codimension $35$ in $\text{GL}(V)$, which is exactly the dimension of $\Lambda^3(V^\ast)$, one sees that this definition of $G_2$ is the most efficient that it can possibly be. (Over $\mathbb{C}$, the stabilizer of the generic element of $\Lambda^3(V^\ast)$ turns out to be $G_2$ crossed with the cube roots of unity, so the identity component is still the right group; you just have to require in addition that it fix a volume form on $V$, so that you wind up with $36$ equations to define the subgroup of codimension $35$.)</p>
<p>For the other exceptional groups, there are similarly more efficient descriptions than as automorphisms of algebras. Cartan himself described $F_4$, $E_6$, and $E_7$ in their representations of minimal dimension as stabilizers of homogeneous polynomials (which he wrote down explicitly) on vector spaces of dimension $26$, $27$, and $56$ of degrees $3$, $3$, and $4$, respectively. There is no doubt that, in the case of $F_4$, this is much more efficient (in the above sense) than the traditional definition as automorphisms of the exceptional Jordan algebra. In the $E_6$ case, this <em>is</em> the standard definition. I think that, even in the $E_7$ case, it's better than the one provided by the 'magic square' construction.</p>
<p>In the case of $E_8\subset\text{GL}(248)$, it turns out that $E_8$ is the stabilizer of a certain element $\mu\in \Lambda^3\bigl((\mathbb{R}^{248})^\ast\bigr)$, which is essentially the Cartan $3$-form on the Lie algebra of $E_8$. I have a feeling that this is the most 'efficient' description of $E_8$ there is (in the above sense).</p>
<p>This last remark is a special case of a more general phenomenon that seems to have been observed by many different people, but I don't know where it is explicitly written down in the literature: If $G$ is a simple Lie group of dimension bigger than $3$, then $G\subset\text{GL}({\frak{g}})$ is the identity component of the stabilizer of the Cartan $3$-form $\mu_{\frak{g}}\in\Lambda^3({\frak{g}}^\ast)$. Thus, you can recover the Lie algebra of $G$ from knowledge of its Cartan $3$-form alone.</p>
<p><strong>On 'rolling distributions':</strong> You mentioned the description of $G_2$ in terms of 'rolling distributions', which is, of course, the very first description (1894), by Cartan and Engel (independently), of this group. They show that the Lie algebra of vector fields in dimension $5$ whose flows preserve the $2$-plane field defined by
$$
dx_1 - x_2\ dx_0 = dx_2 - x_3\ dx_0 = dx_4 - {x_3}^2\ dx_0 = 0
$$
is a $14$-dimensional Lie algebra of type $G_2$. (If the coefficients are $\mathbb{R}$, this is the split $G_2$.) It is hard to imagine a simpler definition than this. However, I'm inclined not to regard it as all that 'simple', just because it's not so easy to get the defining equations from this and, moreover, the vector fields aren't complete. In order to get complete vector fields, you have to take this $5$-dimensional affine space as a chart on a $5$-dimensional compact manifold. (Cartan actually did this step in 1894, as well, but that would take a bit more description.) Since $G_2$ does not have any homogeneous spaces of dimension less than $5$, there is, in some sense, no 'simpler' way for $G_2$ to appear.</p>
<p>What doesn't seem to be often mentioned is that Cartan also described the other exceptional groups as automorphisms of plane fields in this way as well. For example, he shows that the Lie algebra of $F_4$ is realized as the vector fields whose flows preserve a certain 8-plane field in 15-dimensional space. There are corresponding descriptions of the other exceptional algebras as stabilizers of plane fields in other dimensions. K. Yamaguchi has classified these examples and, in each case, writing down explicit formulae turns out to be not difficult at all. Certainly, in each case, writing down the defining equations in this way takes less time and space than any of the algebraic methods known.</p>
<p><em>Further remark:</em> Just so this won't seem too mysterious, let me describe how this goes in general: Let $G$ be a simple Lie group, and let $P\subset G$ be a parabolic subgroup. Let $M = G/P$. Then the action of $P$ on the tangent space of $M$ at $[e] = eP\in M$ will generally preserve a filtration
$$
(0) = V_0 \subset V_1\subset V_2\subset \cdots \subset V_{k-1} \subset V_k = T_{[e]}M
$$
such that each of the quotients $V_{i+1}/V_i$ is an irreducible representation of $P$. Corresponding to this will be a set of $G$-invariant plane fields $D_i\subset TM$ with the property that $D_i\bigl([e]\bigr) = V_i$. What Yamaguchi shows is that, in many cases (he determines the exact conditions, which I won't write down here), the group of diffeomorphisms of $M$ that preserve $D_1$ is $G$ or else has $G$ as its identity component.</p>
<p>What Cartan does is choose $P$ carefully so that the dimension of $G/P$ is minimal among those that satisfy these conditions to have a nontrivial $D_1$. He then takes a nilpotent subgroup $N\subset G$ such that $T_eG = T_eP \oplus T_eN$ and uses the natural immersion $N\to G/P$ to pull back the plane field $D_1$ to be a left-invariant plane field on $N$ that can be described very simply in terms of the multiplication in the nilpotent group $N$ (which is diffeomorphic to some $\mathbb{R}^n$). Then he verifies that the Lie algebra of vector fields on $N$ that preserve this left-invariant plane field is isomorphic to the Lie algebra of $G$. This plane field on $N$ is bracket generating, i.e., 'non-holonomic' in the classical terminology. This is why it gets called a 'rolling distribution' in some literature. In the case of the exceptional groups $G_2$ and $F_4$, the parabolic $P$ is of maximal dimension, but this is not so in the case of $E_6$, $E_7$, and $E_8$, if I remember correctly.</p>
|
99,750 | <p>Let $G$ be a reductive group, $F$ a Frobenius morphism, $B$ an $F$-stable Borel subgroup, and consider the finite groups $G^F$ and $U^F$, where $U$ is the unipotent radical of $B=UT$ ($T$ a torus).</p>
<p>I would like a reference for the description of the algebra $End_{G^F}( \mathbb{C}[G^F/U^F] )$. More precisely, I'd like to relate it with a structure of Hecke algebra, which is usually defined as $End_{G^F}( \mathbb{C}[G^F/B^F] ) := End_{G^F} ( Ind_{B^F}^{G^F} 1 )$. I hope to find that the endomorphism algebra is isomorphic to some kind of extension of the Hecke algebra by the torus $T$.</p>
<p>Thank you!</p>
| Skip | 6,486 | <p>Here is a description that is new and you can judge whether it is beautiful. <em>Given any simple complex Lie group $G$ and almost any irreducible representation $V$, the stabilizer of almost any $G$-invariant polynomial $f$ on $V$ has identity component $G$.</em></p>
<p>Cartan's examples </p>
<ul>
<li>$G = E_6$, $V$ of dimension 27, and $f$ cubic; or</li>
<li>$G = E_7$, $V$ of dimension 56, and $f$ quartic</li>
</ul>
<p>are very special cases of this general principle. (They are very special because in these cases the ring of $G$-invariant polynomials on $V$ is generated by $f$.)</p>
<p>In the case of the group $E_8$, you can take $V$ to be the Lie algebra $\mathfrak{e}_8$. Then the ring of invariant polynomial functions is a polynomial ring with generators of degree 2 (the Killing quadratic form), 8, 12, 14, 18, 20, 24, 30. The new result says: If you take $f$ to be any of the generators besides the Killing form, then $E_8$ is the identity component of the stabilizer of $f$.</p>
<p>This is a very concrete description of $E_8$, because an explicit formula for the degree 8 polynomial is already in the literature (<a href="http://aip.scitation.org/doi/10.1063/1.2748615" rel="noreferrer">Cederwall and Palmkvist - The octic $E_8$ invariant</a> (<a href="https://arxiv.org/abs/hep-th/0702024" rel="noreferrer">arXiv</a>)).</p>
<p>Alternatively, there is a commutative, nonassociative, and $E_8$-invariant product on its 3875-dimensional irreducible representation, and the automorphism group of this nonassociative ring is $E_8$.</p>
<p>There is also a variation on the result I mentioned at the beginning that may be worth mentioning: you can also realize each simple complex Lie group $G$, up to isogeny, as the stabilizer of a <em>cubic</em> form on some representation. For $E_8$, you can take the cubic form to be the one defining the multiplication on the 3875-dimensional representation.</p>
<p>The new results mentioned here are from <a href="https://www.cambridge.org/core/journals/forum-of-mathematics-pi/article/simple-groups-stabilizing-polynomials/1FEB791152D2B7FA44679B22D8F8A8EB" rel="noreferrer">Garibaldi and Guralnick - Simple groups stabilizing polynomials</a> (<a href="http://www.ams.org/mathscinet-getitem?mr=3406824" rel="noreferrer">MSN</a>, <a href="http://arxiv.org/abs/1309.6611" rel="noreferrer">arXiv</a>).</p>
|
1,336,209 | <p>I need to understand very well the properties of this formula</p>
<p>$\frac{4}{\pi} = \frac{5}{4} + \sum_{N \geq 1} \left[ 2^{-12N + 1} \times(42N + 5)\times {\binom {2N-1} {N}}^3 \right] $</p>
<p>Taken from the paper "Radian Reduction for Trigonometric Function" (the Payne-Hanek algorithm)</p>
<p>Some remarkable properties are stated, specifically these four ones</p>
<ol>
<li>The $k^{th}$ term of the formula is exactly representable in $6k$ bits;</li>
<li>The first $n$ terms of the sum can be represented exactly in $12n$ bits;</li>
<li>The most significant bit of the $k^{th}$ term has weight at most $2^{1-6k}$ and hence each successive term increases the number of valid bits in the sum by at least $6$;</li>
<li>If $12k < m + 1 \leq 12(k+1)$, then the $m^{th}$ bit of $\frac{4}{\pi}$ may be computed using only terms beyond the $k^{th}$.</li>
</ol>
<p>My questions are:</p>
<ol>
<li>How to prove the formula?</li>
<li>How to prove the properties stated above?</li>
</ol>
<p>PS. I guess that by "terms" the paper means the generic term $a_N$ of the sum... </p>
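<p>As a numerical sanity check (not a proof), the series can be summed directly; a short sketch using Python's exact integer binomials:</p>

```python
from math import comb, pi

def four_over_pi(terms):
    """Partial sum of 5/4 + sum_{N>=1} 2^(1-12N) * (42N+5) * C(2N-1,N)^3."""
    total = 5 / 4
    for n in range(1, terms + 1):
        total += 2.0 ** (1 - 12 * n) * (42 * n + 5) * comb(2 * n - 1, n) ** 3
    return total

print(four_over_pi(10))  # ~1.2732395447, i.e. 4/pi
```

<p>The terms decay very quickly (roughly a factor of $64$ each), which is consistent with property 3 above: each term contributes at least six more valid bits.</p>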
| math110 | 58,742 | <p>Hint: since
$$\lim_{n\to\infty}\dfrac{1^4+2^4+\cdots+n^4}{n^5}=\int_{0}^{1}x^4dx=\dfrac{1}{5}$$</p>
|
2,829,362 | <p>Suppose the random variable $T$ which represents the time needed for one person to travel from city A to city B ( in minutes). $T$ is normally distributed with mean $60$ minutes and variance $20$ minutes. Also, suppose $600$ people depart at the exact same time with each of their travel time being independent from one another.</p>
<p>Now the question is, what is the probability that less than $80$ people will need to travel more than $1$ hour ?</p>
<p>How I tried to do this is by using the binomial probability distribution to calculate the probability of $i$ people being late out of the 600. Then I summed $i$ from $0$ to $79$ because these are disjoint sets of events. But first I needed to know the probability that a random person will be late. This is simply equal to $1/2$ because $T$ is normally distributed with mean 60. So we get for $X$ the amount of people being late:</p>
<p>$$P(X < 80) = \sum\limits_{i=0}^{79} \frac{600!}{i!(600-i)!}
\left(\frac{1}{2}\right)^i\left(\frac{1}{2}\right)^{600-i} =\sum\limits_{i=0}^{79} \frac{600!}{i!(600-i)!}
\left(\frac{1}{2}\right)^{600} \approx 2.8\times10^{-80} $$</p>
<p>But this probability is practically $0$, which seems to go against my intuition (it seems reasonably possible for fewer than $80$ people to be late). So where did I go wrong in my reasoning? Also, why did they give the variance, which I didn't use (this was an exam question, by the way)? Does this maybe have something to do with the CLT (central limit theorem)?</p>
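<p>For what it's worth, the tiny value is not a computational artifact: with $p=1/2$ the expected number of people needing more than an hour is $300$, and $80$ is about $18$ standard deviations below that, so the event really is essentially impossible. The exact binomial tail can be checked with Python's arbitrary-precision integers (a sketch, not part of the original question):</p>

```python
from math import comb

# P(X < 80) for X ~ Binomial(600, 1/2), computed exactly with big integers
numerator = sum(comb(600, i) for i in range(80))
prob = numerator / 2**600
print(prob)  # astronomically small (far below 1e-60)
```
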
| José Carlos Santos | 446,262 | <p>This is an improper integral. It is defined as$$\lim_{\varepsilon\to0^+}\int_\varepsilon^1\frac1x\,\mathrm dx+\lim_{\varepsilon\to0^-}\int_{-1}^\varepsilon\frac1x\,\mathrm dx$$if both limits exist. In this case, none of the limits exist (in $\mathbb R$).</p>
|
3,752,676 | <p>What is <span class="math-container">$P(P(P(333^{333})))$</span>, where $P$ is the digit sum of a number? For example, <span class="math-container">$P(35)=3+5=8$</span>.</p>
<p>a)18</p>
<p>b)9</p>
<p>c)33</p>
<p>d)333</p>
<p>f)5</p>
<p>I tried to find this but I couldn't. I started looking for a pattern; for example, the first few powers of <span class="math-container">$333$</span> are:</p>
<p><span class="math-container">$A=333*333=110889 \; \; \; \; \; \; P(A)=3^{3}=27$</span></p>
<p><span class="math-container">$B=110889*333= 36926037 \; \; \; \; \; \; P(B)=36$</span></p>
<p><span class="math-container">$C=36926037*333=12296370321 \; \; \; \; \; \; P(C)=36 $</span></p>
<p><span class="math-container">$D=12296370321*333=4094691316893 \; \; \; \; \; \; P(D)=63$</span></p>
<p>Can I say it is always a multiple of $9$, so that <span class="math-container">$P(P(P(333^{333})))=9$</span>?</p>
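<p>For what it's worth, the value can be computed directly with arbitrary-precision integers. Since <span class="math-container">$333 \equiv 0 \pmod 9$</span>, every iterated digit sum of <span class="math-container">$333^{333}$</span> is a multiple of $9$, and since the number has fewer than $900$ digits, three applications of $P$ are enough to reach a single digit:</p>

```python
def P(n):
    """Digit sum of a non-negative integer."""
    return sum(int(d) for d in str(n))

x = 333 ** 333
print(P(x), P(P(x)), P(P(P(x))))  # the last value is 9
```
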
| Ayman Hourieh | 4,583 | <p>One option is to use <a href="https://en.wikipedia.org/wiki/Tietze_transformations" rel="nofollow noreferrer">Tietze transformations</a>.</p>
<p>Let's start with the presentation found in Batominovski's answer,
<span class="math-container">$$
A = \langle \alpha, \beta \mid 4\alpha, 4\beta, 2(\alpha + \beta) \rangle.
$$</span></p>
<p>Let <span class="math-container">$\gamma = \alpha + \beta$</span>. Using this substitution, we get:
<span class="math-container">$$
A = \langle \alpha, \gamma \mid 4\alpha, 4(\gamma - \alpha), 2\gamma \rangle.
$$</span></p>
<p>The element <span class="math-container">$4(\gamma - \alpha)$</span> is superfluous; it follows that
<span class="math-container">$$
A = \langle \alpha, \gamma \mid 4\alpha, 2\gamma \rangle.
$$</span></p>
<p>We conclude that your original group is isomorphic to <span class="math-container">$\mathbb Z/(4) \times \mathbb Z/(2)$</span>.</p>
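<p>As a sanity check (not part of the original answer), the presentation can be verified by brute force: the group <span class="math-container">$\langle \alpha, \beta \mid 4\alpha, 4\beta, 2(\alpha+\beta) \rangle$</span> is the quotient of <span class="math-container">$\mathbb Z/4 \times \mathbb Z/4$</span> by the subgroup generated by <span class="math-container">$(2,2)$</span>, and one can enumerate the cosets directly:</p>

```python
from itertools import product

H = [(0, 0), (2, 2)]  # the subgroup of Z/4 x Z/4 generated by (2, 2)

def coset(g):
    return frozenset(((g[0] + h[0]) % 4, (g[1] + h[1]) % 4) for h in H)

quotient = {coset(g) for g in product(range(4), repeat=2)}

def order(g):
    """Order of the coset of g in the quotient group."""
    x, k = g, 1
    while coset(x) != coset((0, 0)):
        x, k = ((x[0] + g[0]) % 4, (x[1] + g[1]) % 4), k + 1
    return k

print(len(quotient), max(order(g) for g in product(range(4), repeat=2)))
# 8 elements, maximal order 4: the only such abelian group is Z/4 x Z/2
```
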
|
3,752,676 | <p>What is <span class="math-container">$P(P(P(333^{333})))$</span>, where $P$ is the digit sum of a number? For example, <span class="math-container">$P(35)=3+5=8$</span>.</p>
<p>a)18</p>
<p>b)9</p>
<p>c)33</p>
<p>d)333</p>
<p>f)5</p>
<p>I tried to find this but I couldn't. I started looking for a pattern; for example, the first few powers of <span class="math-container">$333$</span> are:</p>
<p><span class="math-container">$A=333*333=110889 \; \; \; \; \; \; P(A)=3^{3}=27$</span></p>
<p><span class="math-container">$B=110889*333= 36926037 \; \; \; \; \; \; P(B)=36$</span></p>
<p><span class="math-container">$C=36926037*333=12296370321 \; \; \; \; \; \; P(C)=36 $</span></p>
<p><span class="math-container">$D=12296370321*333=4094691316893 \; \; \; \; \; \; P(D)=63$</span></p>
<p>Can I say it is always a multiple of $9$, so that <span class="math-container">$P(P(P(333^{333})))=9$</span>?</p>
| jjuma1992 | 715,368 | <p>Let <span class="math-container">$G=Z_4 \oplus Z_{12}$</span> and <span class="math-container">$M=\langle (2,2) \rangle.$</span> You can actually consider the matrix <span class="math-container">$[2\,\,\, 2]$</span> and reduce this matrix to smith normal form to get <span class="math-container">$[0 \,\,\, 2].$</span> So invariant factors are <span class="math-container">$0$</span> and <span class="math-container">$2$</span>. Hence it follows that
<span class="math-container">$$G/M \cong Z_4/0Z \oplus Z_{12}/2Z \cong Z_4 \oplus Z_2$$</span></p>
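<p>If SymPy is available, the conclusion can be cross-checked by putting the <em>full</em> relation matrix into Smith normal form, including the ambient relations <span class="math-container">$4a=0$</span> and <span class="math-container">$12b=0$</span> alongside the row <span class="math-container">$[2\ 2]$</span> (a sketch, not part of the original answer):</p>

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

# Z^2 modulo the rows (4,0), (0,12), (2,2) is exactly G/M above
rels = Matrix([[4, 0], [0, 12], [2, 2]])
snf = smith_normal_form(rels, domain=ZZ)
print(snf)  # diagonal entries 2 and 4, so G/M is Z/2 + Z/4 (= Z/4 + Z/2)
```
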
|
308,251 | <p>I asked this question on Mathematics Stackexchange (<a href="https://math.stackexchange.com/q/2863312/660">link</a>), but got no answer.</p>
<p>Let $K$ be a field, let $x_1,x_2,\dots$ be indeterminates, and form the $K$-algebra $A:=K[[x_1,x_2,\dots]]$. </p>
<p>Recall that $A$ can be defined as the set of expressions of the form $\sum_ua_uu$, where $u$ runs over the set of monomials in $x_1,x_2,\dots$, and each $a_u$ is in $K$, the addition and multiplication being the obvious ones. </p>
<p>Then $A$ is a local domain, its maximal ideal $\mathfrak m$ is defined by the condition $a_1=0$, and it seems natural to ask</p>
<blockquote>
<p>Is $K[[x_1,x_2,\dots]]$ an $\mathfrak m$-adically complete ring?</p>
</blockquote>
<p>I suspect that the answer is No, and that the series $\sum_{n\ge1}x_n^n$, which is clearly Cauchy, does <em>not</em> converge $\mathfrak m$-adically.</p>
| Uriya First | 86,006 | <p>[Edit: The lemma was revised and proved, changed the point of view from series to sequences.]</p>
<p>[2nd Edit: The proof of the lemma was improved, and now the argument can show that Cauchy series such as $x_1+(x_{1^3+1}^2+\dots x_{2^3}^2)+(x_{2^3+1}^{3}+\dots+x_{3^3}^3)+\dots$, diverge in the $\mathfrak{m}$-adic topology.]</p>
<p>Here is a construction of a Cauchy sequence which does not converge. It is based on the following lemma, a proof of which is given at the end.</p>
<blockquote>
<p><strong>Lemma.</strong> Let $\mathfrak{m}_c$ denote the maximal ideal of $K[[x_1,\dots,x_c]]$.
Then there exist sequences of natural numbers $(r_n)_{n\in\mathbb{N}}$, $(c_n)_{n\in\mathbb{N}}$ and a sequence of elements $(p_n\in(\mathfrak{m}_{c_n})^n)_{n\in\mathbb{N}}$ such that:</p>
<ul>
<li>$\limsup r_n=\infty$ and </li>
<li>$p_n$ cannot be written as a sum of $r_n$ terms $\sum_{i=1}^{r_n} a_{i} b_i$ with $a_{i},b_i\in\mathfrak{m}_{c_n}$.</li>
</ul>
<p>In fact, one can take $c_n=n^2$, $p_n=x_1^{n}+\dots+x_{n^2}^{n}$ and let $r_n=\lceil \frac{n^2}{2(n-1)}\rceil-1$ if $\mathrm{char}\,K\nmid n$ and $r_n=1$ when $\mathrm{char}\,K\mid n$. </p>
</blockquote>
<p>It would be more convenient to replace $K[[x_1,x_2,\dots]]$ with the isomorphic ring $$S:=K[[y_{ij}\,|\,i,j\in \mathbb{N}]].$$
For every $n$, there is a ring homomorphism $\phi_n:S\to K[[x_1,\dots,x_{c_n}]]$ specializing $y_{n1},\dots,y_{n{c_n}}$ to $x_1,\dots,x_{c_n}$ and the rest of the variables to $0$.</p>
<p>Let $f_n=p_n(y_{n1},\dots,y_{nc_n})$ and define in $S$ the (formal) partial sums
$$
g_t:=\sum_{n=t}^\infty f_n.
$$
Then, by construction, $(g_t)_{t\in\mathbb{N}}$ is a Cauchy sequence relative to the $\mathfrak{m}$-adic topology, but it does not converge.</p>
<p>Indeed, if $(g_t)_t$ converges in the $\mathfrak{m}$-adic topology, then it also converges relative to the (coarser) grading topology of $S$, and so must converge to $0$. However, if this is so, there exists $t\in\mathbb{N}$ such that $g_t=\sum_{n=t}^\infty f_n\in \mathfrak{m}^2$. In particular, we can write $\sum_{n=t}^\infty f_n=\sum_{i=1}^u a_{i}b_i$ with $a_{i},b_i\in \mathfrak{m}$.
Choose $n\geq t$ sufficiently large to have $r_n\geq u$. Applying $\phi_n$ to both sides of the last equality gives $p_n=\sum_{i=1}^u (\phi a_{i})(\phi b_{i})$ with $\phi a_{i},\phi b_i\in\mathfrak{m}_{c_n}$, which is impossible by the way we chose $p_n$.</p>
<p><strong>Back to the lemma:</strong> Given $f\in K[x_1,\dots,x_c]$, let $D_if$ denote its (formal) derivative relative to $x_i$. The lemma follows from the following general proposition.</p>
<blockquote>
<p><strong>Proposition.</strong> Let $f\in K[[x_1,\dots,x_c]]$ be a homogeneous polynomial of degree $n$, and let $r\in\mathbb{N}$ denote the minimal integer such that $f$ can be written as $\sum_{i=1}^ra_ib_i$ with $a_i,b_i\in \mathfrak{m}_c$. Suppose that $D_1f,\dots,D_cf$ have no common zero besides the zero vector over the algebraic closure of $K$. Then $r\geq \frac{c}{2(n-1)}$.</p>
<p><strong>Proof.</strong> Suppose otherwise, namely, that $f=\sum_{i=1}^r a_ib_i$ with $2r(n-1)<c$. Notice that $a_i,b_i$ are a priori <em>not polynomials</em> --- rather, they are power series. To fix that, we write each $a_i$ and $b_i$ as a sum of their homogeneous components and rewrite the degree-$n$ homogeneous component of the product $a_ib_i$ as a sum of
the relevant components of $a_i$ and $b_i$. By doing this, we see that $f$ can be written as $\sum_{i=1}^{r(n-1)}a'_ib'_i$ with $a'_i,b'_i$ being homogeneous <em>polynomials</em> in $\mathfrak{m}_c$.</p>
<p>Let $V$ denote the affine subvariety of $\mathbb{A}^c_{\overline{K}}$
determined by the $2r(n-1)$ equations $a'_1=b'_1=a'_2=b'_2=\dots=0$. It is well-known (see Harshorne's "Algebraic Geometry", p. 48) that every irreducible component of $V$ has dimension at least $c-2r(n-1)>0$. Furthermore, $V$ is nonempty because it contains the zero vector (because $a'_1,b'_1,a'_2,b'_2,\dots\in \mathfrak{m}_c$). Thus, there exists a nonzero $v\in \overline{K}^c$ annihilating $a'_1,b'_1,a'_2,b'_2,\dots$.</p>
<p>Now, by Leibniz's rule, we have $$D_jf=\sum_i D_j(a'_ib'_i)=\sum_i(D_ja'_i\cdot b'_i + a'_i\cdot D_j b'_i).$$
It follows that $v$ above annihilates all the derivatives $D_1f,\dots,D_cf$, a contradiction! $\square$</p>
</blockquote>
<p>If it weren't for the passage from power series to polynomials, the proof would work for non-homogenous polynomials and also give the better bound $r\geq \frac{c}{2}$. [Edit: This is in fact possible, see the comments.]
Such a bound would suffice to prove that the Cauchy series $x_1+x_2^2+x_3^3+\dots$ suggested in the question diverges in the $\mathfrak{m}$-adic topology.</p>
|
2,815 | <p>This is a final exam question in my algorithms class:</p>
<p>$k$ is a taxicab number if $k = a^3+b^3=c^3+d^3$, and $a,b,c,d$ are distinct positive integers. Find all taxicab numbers $k$ such that $a,b,c,d < n$ in $O(n)$ time.</p>
<p>I don't know if the problem had a typo or not, because $O(n^3)$ seems more reasonable. The best I can come up with is $O(n^2 \log n)$, and that's the best anyone I know can come up with. </p>
<p>The $O(n^2 \log n)$ algorithm: </p>
<ol>
<li><p>Try all possible $a^3+b^3=k$ pairs; for each $k$, store $(k,1)$ into a binary tree (indexed by $k$) if $(k,i)$ doesn't exist; if $(k,i)$ exists, replace $(k,i)$ with $(k,i+1)$</p></li>

<li><p>Traverse the binary tree and output all $(k,i)$ where $i\geq 2$</p></li>
</ol>
<p>Are there any faster methods? This should be the best possible method without using any number-theoretic result, because the program might output $O(n^2)$ taxicab numbers. </p>
<p>Is $O(n)$ even possible? One has to prove there are only $O(n)$ taxicab numbers less than $2n^3$ in order to prove there exists an $O(n)$ algorithm.</p>
<p><strong>Edit</strong>: The professor admitted it was a typo; it should have been $O(n^3)$. I'm happy he made the typo, since the answer Tomer Vromen suggested is amazing.</p>
| rgrig | 512 | <p>For $O(n^2)$ (randomized) time, you can also use a hashtable of size $\Theta(n^2)$. Looking up will be constant time because the number of taxicab numbers is $O(n^2)$.</p>
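<p>A minimal sketch of this idea in Python, with a <code>dict</code> standing in for the hash table (expected $O(n^2)$ time and space):</p>

```python
from collections import defaultdict

def taxicab_numbers(n):
    """All k = a^3 + b^3 with 0 < a < b < n achievable in two or more ways."""
    counts = defaultdict(int)
    for a in range(1, n):
        for b in range(a + 1, n):
            counts[a**3 + b**3] += 1
    return sorted(k for k, c in counts.items() if c >= 2)

print(taxicab_numbers(20))  # [1729, 4104]
```

<p>Restricting to $a &lt; b$ automatically keeps the four numbers distinct within each pair; lookups are constant time in expectation, giving the stated $O(n^2)$ total.</p>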
|
2,815 | <p>This is a final exam question in my algorithms class:</p>
<p>$k$ is a taxicab number if $k = a^3+b^3=c^3+d^3$, and $a,b,c,d$ are distinct positive integers. Find all taxicab numbers $k$ such that $a,b,c,d < n$ in $O(n)$ time.</p>
<p>I don't know if the problem had a typo or not, because $O(n^3)$ seems more reasonable. The best I can come up with is $O(n^2 \log n)$, and that's the best anyone I know can come up with. </p>
<p>The $O(n^2 \log n)$ algorithm: </p>
<ol>
<li><p>Try all possible $a^3+b^3=k$ pairs; for each $k$, store $(k,1)$ into a binary tree (indexed by $k$) if $(k,i)$ doesn't exist; if $(k,i)$ exists, replace $(k,i)$ with $(k,i+1)$</p></li>

<li><p>Traverse the binary tree and output all $(k,i)$ where $i\geq 2$</p></li>
</ol>
<p>Are there any faster methods? This should be the best possible method without using any number-theoretic result, because the program might output $O(n^2)$ taxicab numbers. </p>
<p>Is $O(n)$ even possible? One has to prove there are only $O(n)$ taxicab numbers less than $2n^3$ in order to prove there exists an $O(n)$ algorithm.</p>
<p><strong>Edit</strong>: The professor admitted it was a typo; it should have been $O(n^3)$. I'm happy he made the typo, since the answer Tomer Vromen suggested is amazing.</p>
| phil_20686 | 150,039 | <p>I think you can do better: imagine the (a,b) pairs as forming a matrix filled in with a^3+b^3 in the upper right triangle, e.g. (shown with squares instead of cubes, due to limited mental arithmetic):</p>
<pre><code>2, 5, 10, 17, 26
- 8, 13, 20, 29
- -, 18, 25, 34
- - - 32, 41
- - - - 50
</code></pre>
<p>So it should be clear from this that it is actually not hard to generate the numbers almost in order. I start by computing the top left, and I go down columns to the end unless the top of the next column is larger. So an implementation would be something like:</p>
<p>(1) Work out all the elements in the top row and put them in the bottom row of an [N][2] array. So we would have the first row:</p>
<pre><code>[1,1,1,1,1]
[2,5,10,17,26]
</code></pre>
<p>So I try to go "down" the first row, but it's exhausted, so I put 2 in an int which stores my last value, and I replace it with a 0 to indicate that column is exhausted.</p>
<pre><code>[0,1,1,1,1]
[0,5,10,17,26]
lastVal=2
</code></pre>
<p>So now I see that 5 is the smallest value, so I take 5 and I increment that column. As long as that is less than the head of the next column it's all fine, so the first interesting case is</p>
<pre><code>[0,0,2,1,1]
[0,0,13,17,26]
</code></pre>
<p>So when I take 13 for my last value, I find that the next value is 18, which is larger than 17, so I must next take 17 and increment that column. And I keep moving along, incrementing until the list is once again sorted left to right.</p>
<p>Since I am generating them in order, I find all pairs immediately (value = last value), and I never need to hold more than N values in memory. In practice this is probably very close to n^2 in time, as it will not often have to make more than one comparison per number, since at the end of every step it's sorted left to right. Working out its actual complexity is too hard for me though. :)</p>
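<p>One way to realize this column sweep (for cubes, as in the original problem) is the standard k-way merge with a min-heap: one candidate per row, so only O(n) values are held at once. A sketch, not the exact bookkeeping described above:</p>

```python
import heapq

def taxicab_in_order(n):
    """Yield (k, pairs) whenever k = a^3 + b^3 (a <= b < n) occurs more than once."""
    heap = [(2 * a**3, a, a) for a in range(1, n)]  # one entry per row; b starts at a
    heapq.heapify(heap)
    prev_k, run = None, []
    while heap:
        k, a, b = heapq.heappop(heap)
        if k == prev_k:
            run.append((a, b))
        else:
            if len(run) > 1:
                yield prev_k, run
            prev_k, run = k, [(a, b)]
        if b + 1 < n:
            heapq.heappush(heap, (a**3 + (b + 1)**3, a, b + 1))
    if len(run) > 1:
        yield prev_k, run

print(list(taxicab_in_order(20)))
# [(1729, [(1, 12), (9, 10)]), (4104, [(2, 16), (9, 15)])]
```

<p>Because the heap pops sums in nondecreasing order, equal values come out consecutively, so duplicates are detected without storing all O(n^2) sums.</p>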
|
2,353,062 | <p>Is there a simple and vivid example of a paradoxical decomposition, specifically for the Banach-Tarski paradox? Is there math software that can be used as an application of the paradox?</p>
| Alex Ortiz | 305,215 | <p>To negate the definition properly, we must show that <em>there exists an $\epsilon>0$</em> such that <em>for all $\delta>0$</em>, <em>there exists an $x$</em> such that $0<|x-2|<\delta$ but $|f(x)-8|\ge \epsilon$. Pay attention to the order of the quantifiers here: the limit definition is of the form
$$
\forall\epsilon\ \exists\delta\ \forall x: 0<|x-2|<\delta\implies|f(x)-8|< \epsilon
$$
so its negation should be
$$
\exists\epsilon\ \forall\delta\ \exists x: 0<|x-2|<\delta,\ \text{but}\ |f(x)-8|\ge \epsilon
$$</p>
<p><em>Hint:</em> Try showing that if $\epsilon = 1/2$ then no matter what $\delta>0$ you pick, you can get $0<|x-2|<\delta$ but you will still have $|f(x)-8|\ge 1/2$.</p>
|
2,353,062 | <p>Is there a simple and vivid example of a paradoxical decomposition, specifically for the Banach-Tarski paradox? Is there math software that can be used as an application of the paradox?</p>
| Siong Thye Goh | 306,553 | <p>To prove that the statement is false, I just have to pick a particular $\epsilon$ value, let me pick $\epsilon = 1$.</p>
<p>Suppose on the contrary that there is a $\delta > 0$ such that $|x-2| < \delta \implies |2x-3|<1$.</p>
<p>This is equivalent to $2-\delta < x < 2+\delta \implies 1 < x < 2$ which is not true.</p>
<p>In particular, $2+\frac{\delta}{2} \in (2-\delta, 2+\delta)$ but $2+\frac{\delta}{2} > 2$.</p>
<p>Remark:</p>
<p>Notice that while $|x-2| < \frac{\epsilon+1}{2} \implies \epsilon > |2x-4|-1$ and from triangle inequality we have $|2x-4-(-1)|\geq|2x-4|-1$, we cannot conclude that $\epsilon >|2x-4-(-1)|$</p>
|
2,459,323 | <p>Q) I have to show that if every vertex in graph $G$ has an even degree, then
$G$ has no bridge. <br>
So, by contradiction,<br> if every vertex in graph $G$ has an even degree, then
$G$ has a bridge, which means $G - e$ will result in a disconnected graph.<br>
We know that in this graph, 1 vertex is adjacent to at least 2 other vertices. <br>
So if we remove 1 edge from this graph, the vertices will still be connected, resulting in a connected graph, and $G$ will have at least 1 vertex with an odd degree. (For example, for a graph with 4 vertices, each having an even degree, if we remove an edge, $G$ will still be connected.)<br>
This makes my contradiction false.<br>
<br>Is this enough to prove this question, or am I missing something?
<br>Thank you.</p>
| JMoravitz | 179,297 | <p>When beginning your proof by contradiction, what you should have rather than "If every vertex has even degree <strong>then</strong> $G$ has a bridge" is:</p>
<blockquote>
<p>Suppose every vertex in $G$ has even degree <strong>and</strong> $G$ has a bridge.</p>
</blockquote>
<p>We then look at what happens to the graph $G$ after having removed that bridge like you suggest. Unlike how you suggest though, removing the bridge will not have implied our graph will still be connected, it will be a disconnected graph and only a disconnected graph. Just because the ends of the removed bridge will be adjacent to <em>something</em> doesn't mean that the part of the graph on the one end of the bridge will be connected to the other part of the graph. We will in fact be guaranteed to be left with at least two separate connected components.</p>
<p>To continue, recognize that each connected component that used to be adjacent to the bridge should be able to form its own graph by itself.</p>
<p>Can you describe the degrees of the vertices in each of those connected components? For the one connected component, how many vertices of odd degree are there? Have you heard of something called the handshaking lemma? How might that be useful here?</p>
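<p>As a numerical illustration of the handshaking lemma hinted at above (my own Python sketch, not part of the original answer): in every finite graph the sum of degrees is twice the number of edges, so the number of odd-degree vertices is always even. This is the fact that makes the degree counts in the detached components work out.</p>

```python
from itertools import combinations
import random

def degrees(n, edges):
    """Degree of each vertex in a simple graph on vertices 0..n-1."""
    deg = [0] * n
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def num_odd_degree_vertices(n, edges):
    return sum(d % 2 for d in degrees(n, edges))

# Corollary of the handshaking lemma: sum(deg) = 2 * |E|,
# so the number of odd-degree vertices is always even.
random.seed(0)
for _ in range(100):
    n = random.randint(2, 8)
    all_pairs = list(combinations(range(n), 2))
    edges = random.sample(all_pairs, random.randint(0, len(all_pairs)))
    assert num_odd_degree_vertices(n, edges) % 2 == 0
print("odd-degree vertex count is even in every sampled graph")
```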
|
646,596 | <p>I need to calculate the circle circumscribed about a square, given only the length of the side ($a$).</p>
<p><img src="https://i.stack.imgur.com/VZ8Yn.png" alt="enter image description here"></p>
<p>I need to calculate the area of the circle circumscribed about the square. How exactly is this done with only the side length given?</p>
| amWhy | 9,003 | <p>Use the Pythagorean Theorem to find the length of the diagonal $2R$ ($2R$ is the length of the diagonal of the square and the circle), then solve for the radius of the circle: $R$: $$a^2 + a^2 = (2R)^2\iff R = \frac{a}{\sqrt 2}$$</p>
<p>Then use the formula for the area of a circle: $A = \pi R^2$.</p>
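<p>A small Python sketch of this computation (my own addition, assuming the circle is circumscribed about a square of side $a$):</p>

```python
import math

def circumscribed_circle_area(a: float) -> float:
    """Area of the circle circumscribed about a square of side a.

    The diagonal of the square, a*sqrt(2), is a diameter of the
    circle, so R = a / sqrt(2) and the area is pi * a**2 / 2.
    """
    R = a / math.sqrt(2)
    return math.pi * R ** 2

# For a unit square the circumscribed circle has area pi/2.
assert math.isclose(circumscribed_circle_area(1.0), math.pi / 2)
print(circumscribed_circle_area(2.0))
```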
|
530,920 | <p>I want to know if there is a way to find for example $\ln(2)$, without using the calculator ?</p>
<p>Thanks </p>
| Michael Biro | 29,356 | <p>How precise do you need the calculation to be?</p>
<p>As a quick and dirty approximation, we know that $2^3 = 8$ and $e^2 \approx 2.7^2 = 7.29$, and so $\ln(2)$ should be just over $\frac{2}{3} \approx 0.67$. Continuing to match powers, we find $2^{10} = 1024$, and
$e^7 \approx (2.7)^7 = (3 - 0.3)^7 = 3^7 -7(3)^6(.3) + 21(3)^5(.3)^2 - 35(3)^4(.3)^3 \dots$ $= 3^7 (1 - .7 + .21 - .035 \dots)$ $\approx 2187(.475) = 1038.825$. Therefore, $e^7 \approx 2^{10}$ and so $\ln(2)$ should be just under $0.7$.</p>
|
530,920 | <p>I want to know if there is a way to find for example $\ln(2)$, without using the calculator ?</p>
<p>Thanks </p>
| Jaume Oliver Lafont | 134,791 | <p>$$\log2=\frac{2}{3}\left(1+\frac{1}{27}+\frac{1}{405}+\frac{1}{5103}+\frac{1}{59049}+\frac{1}{649539}+...\right)$$</p>
<p>The denominator is $(2k+1)9^k$.</p>
<p><a href="http://oeis.org/A155988" rel="nofollow">http://oeis.org/A155988</a></p>
<p>Gourdon and Sebah discuss the efficiency of this formula in
<a href="http://plouffe.fr/simon/articles/log2.pdf" rel="nofollow">http://plouffe.fr/simon/articles/log2.pdf</a> (page 11)</p>
<p>A "little more effort" is required to compute $log(2)$ using this formula than to compute $\pi$ using Machin's relation.</p>
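<p>A short Python check of this series against <code>math.log(2)</code> (my own sketch, not taken from the linked paper):</p>

```python
import math

def log2_series(terms: int) -> float:
    """Partial sum of log(2) = (2/3) * sum 1/((2k+1) * 9**k),
    i.e. 2 * artanh(1/3) written out term by term."""
    return (2.0 / 3.0) * sum(1.0 / ((2 * k + 1) * 9 ** k) for k in range(terms))

# The series converges quickly: a handful of terms already gives
# near machine precision.
approx = log2_series(20)
assert abs(approx - math.log(2)) < 1e-12
print(approx)
```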
|
544,673 | <p>Let $\alpha$ be a cycle of length $s$, say $\alpha = (a_1, a_2, \ldots, a_s)$</p>
<p>Prove $\alpha^2$ is a cycle if and only if $s$ is odd.</p>
<p>Let me start off by saying I am in my 5th week of Group Theory. I often have trouble getting these problems started. This is my first proof based course. </p>
<p>I believe $\alpha^2 = (a_2, a_3, \ldots, a_1)$</p>
<p>Any tips on where to go from here would be great. Also, if there are any tips for starting proofs like these in general, I could really use them! My teacher teaches as if a proof-writing class were a prerequisite, which it was not.</p>
| Brian M. Scott | 12,042 | <p>HINT: Look at a couple of examples: $(1234)$ sends $1$ to $2$ and $2$ to $3$, so $(1234)^2$ sends $1$ to $3$. $(1234)$ sends $2$ to $3$ and $3$ to $4$, so $(1234)^2$ sends $2$ to $4$. $(1234)$ sends $3$ to $4$ and $4$ to $1$, so $(1234)^2$ sends $3$ to $1$. Finally, $(1234)$ sends $4$ to $1$ and $1$ to $2$, so $(1234)^2$ sends $4$ to $2$. Put all the pieces together, and you find that $(1234)^2=(13)(24)$, which is not a cycle. Do the same thing with $(12345)$, however, and you find that $(12345)^2=(13524)$, which is a cycle. Work that one through in detail to be sure, and then try to generalize these ideas to the cases $s$ even and $s$ odd.</p>
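<p>The pattern in the hint can also be checked mechanically. The following Python sketch (my own addition, not part of the hint) squares the standard $s$-cycle $i \mapsto i+1 \pmod s$ and counts the orbits of the result; $\alpha^2$ is a single cycle exactly when there is one orbit:</p>

```python
def orbits_of_squared_cycle(s: int) -> int:
    """Square the s-cycle i -> (i+1) % s and count the orbits
    (cycles) of the resulting permutation i -> (i+2) % s."""
    seen = [False] * s
    count = 0
    for start in range(s):
        if not seen[start]:
            count += 1
            i = start
            while not seen[i]:
                seen[i] = True
                i = (i + 2) % s
    return count

# alpha^2 is one cycle exactly when s is odd; for even s it
# splits into two orbits of length s/2 (cf. (1234)^2 = (13)(24)).
for s in range(2, 12):
    expected = 1 if s % 2 == 1 else 2
    assert orbits_of_squared_cycle(s) == expected
print("alpha^2 is a single cycle iff s is odd (checked s = 2..11)")
```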
|
3,671,675 | <blockquote>
<p>Question: Suppose <span class="math-container">$f:\mathbb{R}\to\mathbb{R}$</span> is a twice differentiable function with <span class="math-container">$f(0)=1$</span>, <span class="math-container">$f'(0)=0$</span> and satisfies <span class="math-container">$f''(x)-5f'(x)+6f(x)\ge 0$</span> for every <span class="math-container">$x\ge 0$</span>. Prove that <span class="math-container">$f(x)\ge 3e^{2x}-2e^{3x}$</span> for every <span class="math-container">$x\ge 0$</span>. </p>
</blockquote>
<p>My approach: Let <span class="math-container">$h:\mathbb{R}\to\mathbb{R}$</span> be such that <span class="math-container">$h(x)=f(x)-3e^{2x}+2e^{3x}, \forall x\in\mathbb{R}.$</span> Thus <span class="math-container">$h$</span> is also a twice differentiable function with <span class="math-container">$$h'(x)=f'(x)-6e^{2x}+6e^{3x}, \forall x\in\mathbb{R}, \text{ and }\\h''(x)=f''(x)-12e^{2x}+18e^{3x}, \forall x\in\mathbb{R}.$$</span></p>
<p>Also observe that <span class="math-container">$$h''(x)-5h'(x)+6h(x)=f''(x)-5f'(x)+6f(x), \forall x\in\mathbb{R}.$$</span> </p>
<p>Thus we have <span class="math-container">$$h''(x)-5h'(x)+6h(x)\ge 0, \forall x\ge 0.$$</span></p>
<p>Now for the sake of contradiction, let us assume that <span class="math-container">$\exists a>0,$</span> such that <span class="math-container">$$f(a)<3e^{2a}-2e^{3a}\implies f(a)-3e^{2a}+2e^{3a}<0\implies h(a)<0.$$</span> </p>
<p>Note that <span class="math-container">$h(0)=0$</span>. Thus, by applying MVT to the function <span class="math-container">$h$</span> on the interval <span class="math-container">$[0,a]$</span>, we can conclude that <span class="math-container">$\exists c\in(0,a)$</span>, such that <span class="math-container">$$h'(c)=\frac{h(a)-h(0)}{a-0}=\frac{h(a)}{a}\implies h'(c)<0.$$</span></p>
<p>Again, note that <span class="math-container">$h'(0)=0$</span>. Thus by applying MVT to the function <span class="math-container">$h'$</span> on the interval <span class="math-container">$[0,c]$</span>, we can conclude that <span class="math-container">$\exists c_1\in(0,c)$</span>, such that <span class="math-container">$$h''(c_1)=\frac{h'(c)-h'(0)}{c-0}=\frac{h'(c)}{c}\implies h''(c_1)<0.$$</span></p>
<p>Thus we have <span class="math-container">$f''(c_1)-12e^{2c_1}+18e^{3c_1}<0\implies f''(c_1)<12e^{2c_1}-18e^{3c_1}<0.$</span> </p>
<p>As one can see, I am trying to prove it using "proof by contradiction". So, is there any way to proceed on these lines, or is there some alternative way to prove?</p>
| jvc | 686,748 | <p>When you want to compare two "solutions" of a second-order ODE, it is often useful to consider the Wronskian. To avoid problems with division by zero, let's consider:
<span class="math-container">$$u = f + 2 e^{3x}$$</span>
and
<span class="math-container">$$v = 3 e^{2x}$$</span>
Define the Wronskian of <span class="math-container">$u$</span> and <span class="math-container">$v$</span> by:
<span class="math-container">$$W = uv' - vu'$$</span>
By assumption, <span class="math-container">$u'' \geq 5u' - 6u$</span>, while <span class="math-container">$v'' = 5v' - 6v$</span> and <span class="math-container">$v > 0$</span>, so:
<span class="math-container">$$W' = uv'' - vu'' \leq u (5v' - 6v) - v (5u' - 6u) = 5 W$$</span>
But
<span class="math-container">$$W(0) = 0$$</span>
Multiplying by <span class="math-container">$e^{-5x}$</span>, so that <span class="math-container">$(We^{-5x})' \leq 0$</span>, and integrating from <span class="math-container">$0$</span>, we obtain:
<span class="math-container">$$W \leq 0$$</span>
which implies:
<span class="math-container">$$\frac{uv' - vu'}{v^2} = -\left(\frac{u}{v}\right)' \leq 0$$</span>
so <span class="math-container">$u/v$</span> is nondecreasing on <span class="math-container">$[0,\infty)$</span>, which gives the result:
<span class="math-container">$$u \geq \frac{u(0)}{v(0)}v = v$$</span></p>
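<p>A numerical sanity check of this bound (my own Python sketch, not part of the answer; the forcing term $g(x)=1$ and the RK4 integrator are my choices): integrate $f''=5f'-6f+g$ with $f(0)=1$, $f'(0)=0$ and verify $f(x)\geq 3e^{2x}-2e^{3x}$ on $[0,1]$.</p>

```python
import math

def rk4_step(f, t, y, h):
    """One classical Runge-Kutta step for y' = f(t, y), y a tuple."""
    def add(a, b, c):  # componentwise a + c*b
        return tuple(ai + c * bi for ai, bi in zip(a, b))
    k1 = f(t, y)
    k2 = f(t + h / 2, add(y, k1, h / 2))
    k3 = f(t + h / 2, add(y, k2, h / 2))
    k4 = f(t + h, add(y, k3, h))
    return tuple(yi + h / 6 * (a + 2 * b + 2 * c + d)
                 for yi, a, b, c, d in zip(y, k1, k2, k3, k4))

def rhs(x, y):
    # f'' = 5 f' - 6 f + g(x) with the sample choice g(x) = 1 >= 0
    f, fp = y
    return (fp, 5 * fp - 6 * f + 1.0)

x, y, h = 0.0, (1.0, 0.0), 1e-3
while x < 1.0 - 1e-12:
    y = rk4_step(rhs, x, y, h)
    x += h
    bound = 3 * math.exp(2 * x) - 2 * math.exp(3 * x)
    assert y[0] >= bound - 1e-6, (x, y[0], bound)
print("f(x) >= 3 e^{2x} - 2 e^{3x} held on [0, 1] for this sample")
```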
|
2,428,881 | <p><img src="https://i.stack.imgur.com/0STsn.png" alt="Trig Limits"></p>
<blockquote>
<p>I am confused by the $\sin^2$ part. I am not sure if there is a trig identity that simplifies it. I was thinking of rearranging the expression so that it would be similar to $\sin(x)/x$ to solve it. Which direction do I go, since I have hit roadblocks with both?</p>
</blockquote>
| Community | -1 | <p>Drop the cosine and use a change of variable to absorb the $3$ under the sine:</p>
<p>$$\lim_{x\to0}\frac{\sin^2(3x)}{x^2\cos x}=\lim_{u\to0}\frac{\sin^2(u)}{\left(\dfrac u3\right)^2}=9\left(\lim_{u\to0}\frac{\sin u}u\right)^2.$$</p>
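<p>A quick numerical check of this limit (my own Python sketch, not part of the answer):</p>

```python
import math

def g(x: float) -> float:
    return math.sin(3 * x) ** 2 / (x ** 2 * math.cos(x))

# As x -> 0 the values approach 9 = 3**2, matching the
# substitution u = 3x and the standard limit sin(u)/u -> 1.
for x in (1e-2, 1e-3, 1e-4):
    print(x, g(x))
assert abs(g(1e-4) - 9.0) < 1e-6
```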
|
849,336 | <p>$$\eqalign{\tan^2\theta-\sec^2\theta &=\tan^2\theta-\dfrac1{\cos^2\theta}\\&=\dfrac{\sin^2\theta}{\cos^2\theta}-\dfrac1{\cos^2\theta}\\&=-\dfrac{\cos^2\theta}{\cos^2\theta}\\&=-1.}$$
Note: $\theta \neq (2k+1)\frac{\pi}{2}$, where $k$ is an integer.</p>
<p>May I know if my answer is correct or not?</p>
| John Joy | 140,156 | <p>You seem to have a penchant for expressing everything in terms of sines and cosines when simplifying trigonometric expressions. I used to do the same (because that's the way that Teacher did it), but I soon found it to be too cumbersome.</p>
<p>After a while, I started using 3 different triangles to help me out. Please note that I didn't draw the 3 triangles to scale. Also, I always expressed "$1$" as "$1^2$" to remind me that I'm using the Pythagorean theorem.</p>
<p><img src="https://i.stack.imgur.com/2DA6X.png" alt="enter image description here"></p>
<p>By using the second triangle we have
$$\tan^2\theta - \sec^2\theta = (\sec^2\theta - 1^2) - \sec^2\theta = -1^2 = -1$$</p>
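<p>A numerical spot-check of the identity (my own Python sketch, not part of the answer):</p>

```python
import math

# Spot-check tan^2(t) - sec^2(t) = -1 at angles where cos(t) != 0.
for t in (0.1, 0.7, 1.2, 2.5, -0.9):
    value = math.tan(t) ** 2 - 1.0 / math.cos(t) ** 2
    assert abs(value - (-1.0)) < 1e-9, (t, value)
print("tan^2 - sec^2 == -1 at all sampled angles")
```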
|
1,771,961 | <p>Suppose $U = \{(x,x,y,y) \in \mathbb{F}^4 : x, y \in \mathbb{F}\}$ </p>
<p>Find a subspace W of $\mathbb{F}^4$ such that $\mathbb{F}^4 = U\oplus W$</p>
<p>Attempt: Now from what I understand I would think that an element of $\mathbb{F}^4$ would look like $$(w,x,y,z) \quad\mbox{such that}\quad
w,x,y,z \in \mathbb{F}$$</p>
<p>with that being the case I would use a subspace of the form: $$W = (w-x, 0, 0, z-y) \in \mathbb{F}^4 \quad\mbox{such that}\quad w,x,y,z \in \mathbb{F}$$</p>
<p>But as a solution it was given that $$ W = (0,x,y,0). $$</p>
<p>Explanation?</p>
<p>I think I am not fully grasping how the direct sum sets are formed, but I got the idea that it was using an element from each subspace.</p>
| Solumilkyu | 297,490 | <p>Let $W=\{(0,w,z,0)\in\mathbb{F}^4:w,z\in\mathbb{F}\}$, then we check that such $W$ is the desired subspace, as the following two steps.</p>
<ol>
<li>Given $(a,b,c,d)\in\mathbb{F}^4$, it is easy to decompose the vector
as below.
\begin{align}
(a,b,c,d)
&=(a,b-a+a,c-d+d,d)\\
&=(a,a,d,d)+(0,b-a,c-d,0),
\end{align}
where $(a,a,d,d)\in U$ and $(0,b-a,c-d,0)\in W$. Hence $\mathbb{F}^4=U+W$.</li>
<li>If $(e,f,g,h)\in U\cap W$, then we have
$e=f$, $g=h$, $e=0$, and $h=0$. So
$$(e,f,g,h)=(0,0,0,0)$$
and hence $U\cap W=\{(0,0,0,0)\}$.</li>
</ol>
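<p>The decomposition in step 1 can be checked mechanically. A small Python sketch (my own addition, using rationals as a stand-in for a general field $\mathbb{F}$):</p>

```python
from fractions import Fraction

def decompose(v):
    """Split v = (a, b, c, d) in F^4 as u + w with
    u = (a, a, d, d) in U and w = (0, b - a, c - d, 0) in W."""
    a, b, c, d = v
    u = (a, a, d, d)
    w = (0, b - a, c - d, 0)
    return u, w

v = tuple(Fraction(s) for s in (3, 7, -2, 5))
u, w = decompose(v)
assert tuple(ui + wi for ui, wi in zip(u, w)) == v
assert u[0] == u[1] and u[2] == u[3]   # u lies in U
assert w[0] == 0 and w[3] == 0         # w lies in W
print(u, w)
```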
|
24,671 | <p>Adding to the <a href="https://math.stackexchange.com/search?q=%22for+dummies%22">for dummies</a>.</p>
<p><a href="http://www.paulsprojects.net/opengl/sh/technical.html" rel="noreferrer">The real spherical harmonics are orthonormal basis functions on the surface of a sphere.</a></p>
<p>I'd like to fully understand that sentence and what it means.</p>
<p>Still grappling with </p>
<ul>
<li>Orthonormal basis functions (I believe this is like Fourier Transform's basis functions are sines and cosines, and sin is orthogonal to cos, and so the components can have a zero inner product..)</li>
<li>".. are orthonormal basis functions ..<strong>on the surface of a sphere</strong>".
<ul>
<li><strong>What sphere</strong>? Where does the sphere come from? Do you mean for each position on the sphere, we have a value? Is the periodicity in space on the sphere exploited? Is that how we get the higher order terms?</li>
</ul></li>
</ul>
| Ross Millikan | 1,827 | <p>$\theta$ and $\phi$ the coordinates of a spherical surface. They are similar to latitude ($\theta$) and longitude ($\phi$) except that $\theta$ goes from $0$ to $\pi$ and $\phi$ goes from $0$ to $2\pi$. Each harmonic has a value at every point, for example $Y_1^{-1}(\theta,\phi)=\frac{1}{2}\sqrt{\frac{3}{2\pi}}\sin(\theta)e^{-i\phi}$. Given the coordinates you can calculate the value. The orthogonality is because if you integrate the product of any two different harmonics over the surface of the sphere, you get $0$.</p>
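<p>A numerical check of these claims (my own Python sketch, not part of the answer), using the $Y_1^{-1}$ formula above together with the constant harmonic $Y_0^0 = \frac{1}{2\sqrt{\pi}}$: the product of two different harmonics integrates to $0$ over the sphere, while $|Y_1^{-1}|^2$ integrates to $1$.</p>

```python
import cmath
import math

def Y_1_m1(theta, phi):
    """Y_1^{-1} as given in the answer above."""
    return 0.5 * math.sqrt(3 / (2 * math.pi)) * math.sin(theta) * cmath.exp(-1j * phi)

def Y_0_0(theta, phi):
    return 1.0 / (2.0 * math.sqrt(math.pi))

def sphere_integral(f, n=200):
    """Midpoint rule over the sphere with area element sin(theta) dtheta dphi."""
    total = 0.0 + 0.0j
    ht, hp = math.pi / n, 2 * math.pi / n
    for i in range(n):
        theta = (i + 0.5) * ht
        for j in range(n):
            phi = (j + 0.5) * hp
            total += f(theta, phi) * math.sin(theta) * ht * hp
    return total

# Orthogonality: <Y_1^{-1}, Y_0^0> integrates to 0.
ortho = sphere_integral(lambda t, p: Y_1_m1(t, p) * Y_0_0(t, p))
assert abs(ortho) < 1e-10
# Normalization: <Y_1^{-1}, Y_1^{-1}> integrates to 1.
norm = sphere_integral(lambda t, p: Y_1_m1(t, p) * Y_1_m1(t, p).conjugate())
assert abs(norm - 1.0) < 1e-3
print(abs(ortho), norm.real)
```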
|
24,671 | <p>Adding to the <a href="https://math.stackexchange.com/search?q=%22for+dummies%22">for dummies</a>.</p>
<p><a href="http://www.paulsprojects.net/opengl/sh/technical.html" rel="noreferrer">The real spherical harmonics are orthonormal basis functions on the surface of a sphere.</a></p>
<p>I'd like to fully understand that sentence and what it means.</p>
<p>Still grappling with </p>
<ul>
<li>Orthonormal basis functions (I believe this is like Fourier Transform's basis functions are sines and cosines, and sin is orthogonal to cos, and so the components can have a zero inner product..)</li>
<li>".. are orthonormal basis functions ..<strong>on the surface of a sphere</strong>".
<ul>
<li><strong>What sphere</strong>? Where does the sphere come from? Do you mean for each position on the sphere, we have a value? Is the periodicity in space on the sphere exploited? Is that how we get the higher order terms?</li>
</ul></li>
</ul>
| bobobobo | 1,069 | <p>I think the point that was confusing me/missing link was that spherical harmonics functions <strong>are the solution of</strong> the Laplace's differential equation:</p>
<p>$$\frac{\partial^2u}{\partial x^2}+\frac{\partial^2u}{\partial y^2}+\frac{\partial^2u}{\partial z^2}=0$$</p>
<p>Orthogonal means the functions "pull in different directions". Like in linear algebra, orthogonal vectors "pull" in completely "distinct" directions in n-space, it turns out that orthogonal functions "help you reach completely distinct values", where the resultant value (sum of functions) is again a function.</p>
<p>SH are <strong>based</strong> on the <a href="http://en.wikipedia.org/wiki/Associated_Legendre_polynomial" rel="noreferrer">associated Legendre polynomials</a>, (which are a tad more funky than <a href="http://en.wikipedia.org/wiki/Legendre_polynomials" rel="noreferrer">Legendre polynomials</a>, namely each band has more distinct functions defined for it for the associated ones.)</p>
<p>The Legendre polynomials themselves, like SH, are orthogonal functions. So if you take any 2 functions from the Legendre polynomial set, they're going to be orthogonal to each other (integral on $[-1,1]$ is $0$), and if you add scaled copies of one to the other, you're going to be able to reach <em>an entirely distinct set of functions/values</em> than you could with just one of those basis functions alone.</p>
<p>Now the <strong>sphere</strong> comes from the idea that SH functions <strong>use</strong> the Legendre polynomials (but Legendre polynomials are 1D functions), and the <strong>specification</strong> of spherical harmonics is <strong>a function value for every $(\theta, \phi)$</strong>. There is no "sphere" per se; it's like saying "there is a value for every point on the unit circle": you trace a circle around the origin and give each point a value.</p>
<p>What is meant is <strong>every point on a unit sphere has a numeric value</strong>. If we associate a <strong>color</strong> to every point on the sphere, you get a visualization like this:</p>
<p><img src="https://i.stack.imgur.com/JglQ4.png" alt="viz sh colors"></p>
<p><a href="http://people.csail.mit.edu/sparis/sh/index.php?img=64" rel="noreferrer">This page</a> shows a visualization where the values of the SH function are used to MORPH THE SPHERE (which is part of what was confusing me earlier). But just because a function has values <strong>for every point on the sphere</strong> doesn't mean <em>there is a sphere</em>.</p>
|
2,895,343 | <blockquote>
<p>Differentiate $\tan^3(x^2)$</p>
</blockquote>
<p>I first applied the chain rule and made $u=x^2$ and $g=\tan^3u$. I then calculated the derivative of $u$, which is $$u'=2x$$ and the derivative of $g$, which is
$$g'=3\tan^2u$$</p>
<p>I then applied the chain rule and multiplied them together, which gave me </p>
<p>$$f'(x)=2x\cdot3\tan^2(x^2)$$</p>
<p>Is this correct? If not, any hints as to how to get the correct answer?</p>
| salvarico | 379,130 | <p>Your answer is not correct because $g' \neq 3\tan^2u$. For getting $g'$ you need to apply the chain rule once more.
$$g(u) = f(h(u)) = f(\tan u) = \tan^3u$$
where $f(x) = x^3$ and $h(u) = \tan u$.</p>
<p>So $g'(u) = f'(h(u))*h'(u) = 3(h(u))^2*\sec^2 u = 3\tan^2u * \sec^2 u$</p>
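<p>Combining $g'(u)=3\tan^2 u\,\sec^2 u$ with $u'=2x$ from the question gives $f'(x)=6x\tan^2(x^2)\sec^2(x^2)$. A finite-difference check of this (my own Python sketch, not part of the answer):</p>

```python
import math

def f(x: float) -> float:
    return math.tan(x ** 2) ** 3

def f_prime(x: float) -> float:
    # chain rule twice: d/dx tan^3(x^2) = 3 tan^2(x^2) * sec^2(x^2) * 2x
    return 6 * x * math.tan(x ** 2) ** 2 / math.cos(x ** 2) ** 2

# Central-difference check of the derivative at a sample point.
x, h = 0.3, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)
assert abs(numeric - f_prime(x)) < 1e-6
print(numeric, f_prime(x))
```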
|
360,889 | <p>What can be said about publishing mathematical papers on e.g. viXra if the motivation is its low barriers and a lack of experience with publishing papers, and the idea behind it is to build up a reputation, provided the content of the publication suffices for that purpose?</p>
<p>Can that way of getting a foot into the door of publishing be recommended or would it be better to resort to polishing door knobs at arXiv to get an endorsement? </p>
<p>Personal experience or that of someone you know would of course also be interesting to me. </p>
| Alexandre Eremenko | 25,510 | <p>If you want to build a reputation as a mathematician, post your preprint on the arXiv (this is a preprint server, not counted as publication, the posts are not refereed, and there are essentially no "barriers").</p>
<p>Then send your paper to a mainstream mathematical journal. Avoid those journals which are not reviewed in Mathscinet.
(Recently, many journals proliferated which do not really referee papers, but charge the authors for publication. Avoid them, if you want to build a reputation. The main criterion of a mainstream journal is that all its papers are reviewed in Mathscinet.) Mathscinet also publishes a rating of journals according to their citation rates. Publication in a highly rated journal (say of the first 100) will probably be good for your reputation,
but the "barriers" there are also high.</p>
<p>Still I believe that one's reputation is based more on the quality of papers, rather than the rating of the journals where they are published.
For example, the reputation of G. Perelman is mainly based on a few preprints
which he posted on the arXiv. He did not care to send them to journals.<br>
This is an extreme example, but it is not unique. My third most cited paper is published in a volume of conference proceedings which is not even a journal (the arXiv did not exist yet). My two most cited papers are in journals which are not among the top 100 journals according to the MSN rating (though I have also published in top journals). I conclude that there is very little correlation between the rating of an individual paper and the rating of the journal. </p>
|
76,457 | <p>I have an ellipse centered at $(h,k)$, with semi-major axis $r_x$, semi-minor axis $r_y$, both aligned with the Cartesian plane.</p>
<p>How do I determine if a point $(x,y)$ is within the area bounded by the ellipse? </p>
| marty cohen | 13,079 | <p>Another way uses the definition of the ellipse
as the points whose sum of distances to the foci is constant.</p>
<p>Get the foci at $(h+f, k)$ and $(h-f, k)$, where $f = \sqrt{r_x^2 - r_y^2}$ (assuming $r_x \ge r_y$).</p>
<p>The sum of the distances (by looking at the lines from $(h, k+r_y)$ to the foci) is $2\sqrt{f^2 + r_y^2} = 2 r_x$.</p>
<p>So, for any point $(x, y)$, compute $\sqrt{(x-(h+f))^2 + (y-k)^2} + \sqrt{(x-(h-f))^2 + (y-k)^2}$ and compare this with $2 r_x$.</p>
<p>This takes more work, but I like using the geometric definition.</p>
<p>Also, for both methods, if speed is important (i.e., you are doing this for many points), you can immediately reject any point $(x, y)$ for which $|x-h| > r_x$ or $|y-k| > r_y$.</p>
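<p>Both membership tests side by side, as a Python sketch (my own addition, assuming $r_x \ge r_y$ so the foci lie on the horizontal axis; the quadratic test $((x-h)/r_x)^2+((y-k)/r_y)^2\le 1$ is the "other method" alluded to):</p>

```python
import math

def inside_ellipse_quadratic(x, y, h, k, rx, ry):
    """Standard check: ((x-h)/rx)^2 + ((y-k)/ry)^2 <= 1."""
    return ((x - h) / rx) ** 2 + ((y - k) / ry) ** 2 <= 1.0

def inside_ellipse_foci(x, y, h, k, rx, ry):
    """Geometric check via the foci, assuming rx >= ry:
    sum of distances to the foci is at most 2*rx."""
    f = math.sqrt(rx ** 2 - ry ** 2)
    d1 = math.hypot(x - (h + f), y - k)
    d2 = math.hypot(x - (h - f), y - k)
    return d1 + d2 <= 2 * rx

# The two tests agree on a grid of sample points (offset slightly
# so no sample lands exactly on the boundary).
h, k, rx, ry = 1.0, -2.0, 3.0, 2.0
for i in range(-40, 41):
    for j in range(-40, 41):
        x, y = i / 8.0 + 0.0137, j / 8.0 + 0.0071
        assert inside_ellipse_quadratic(x, y, h, k, rx, ry) == \
               inside_ellipse_foci(x, y, h, k, rx, ry)
print("both membership tests agree on the sample grid")
```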
|
211,123 | <p>I am writing some tests and would like to introduce large chunks of random alphabetic text (just characters a-z) to the input. The way I am generating the text now is like this:</p>
<pre><code>RandomString[length_Integer] :=
StringJoin[
Table[
FromLetterNumber[RandomInteger[{1, 26}]],
length
]
]
</code></pre>
<p>This works great but is slow: <code>AbsoluteTiming[RandomString[100000]]</code> shows it running in 6.69763 seconds, which is too slow to run on lots of tests.</p>
<p>Does anyone know of a faster way to generate random text?</p>
| Alucard | 18,859 | <p>Just in case someone is not interested in real words and wants something faster:</p>
<pre><code>StringJoin@RandomChoice[Flatten@{Alphabet[], " "}, 100000]
</code></pre>
|
1,116,202 | <p>I am taking a class in which knowledge of gradients is a prerequisite. I am familiar with gradients but don't have too much experience, so I am having trouble understanding the following example.</p>
<p>$\theta, x \in \mathbb R^d$.</p>
<p>Define $\nabla J(\theta) = \begin{pmatrix} \frac{\partial}{\partial \theta_1} J(\theta) \\ \frac{\partial}{\partial \theta_2} J(\theta) \\ \vdots \\ \frac{\partial}{\partial \theta_d} J(\theta) \end{pmatrix}$ .</p>
<p>Then, I'm having trouble understanding why the following is true:
$\nabla (\theta \cdot x) = \nabla \left( \sum_{i=1}^{d} \theta_i x_i \right) = \begin{pmatrix} \frac{\partial}{\partial \theta_1} \theta \cdot x \\ \frac{\partial}{\partial \theta_2} \theta \cdot x \\ \vdots \\ \frac{\partial}{\partial \theta_d} \theta \cdot x \end{pmatrix} = \begin{pmatrix} x_1 \\ x_2 \\ \vdots \\ x_d \end{pmatrix} = x$ </p>
<p>My two questions are:</p>
<ul>
<li>Why is the third equality true? That is, why is $\frac{\partial}{\partial \theta_i} (\theta \cdot x) = x_i$?</li>
<li>The first equality expresses the result as the gradient of a scalar: $\nabla \left( \sum_{i=1}^{d} \theta_i x_i \right)$. How is it possible this is equal to a vector $x$ in the last equality?</li>
</ul>
| velut luna | 139,981 | <ol>
<li><p>$$\frac{\partial}{\partial\theta_i}(\theta \cdot x)=\frac{\partial}{\partial\theta_i}(\theta_1x_1+\theta_2x_2+\cdots+\theta_ix_i+\cdots)=x_i$$</p></li>
<li><p>Gradient of a scalar $\it{is}$ a vector.</p></li>
</ol>
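<p>A finite-difference check of $\nabla_\theta(\theta\cdot x) = x$ (my own Python sketch, not part of the answer; the sample vectors are arbitrary):</p>

```python
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def numeric_gradient(J, theta, h=1e-6):
    """Central-difference gradient of J at theta."""
    grad = []
    for i in range(len(theta)):
        up = list(theta); up[i] += h
        dn = list(theta); dn[i] -= h
        grad.append((J(up) - J(dn)) / (2 * h))
    return grad

x = [2.0, -1.0, 0.5, 3.0]
theta = [0.3, 0.7, -0.2, 1.1]
grad = numeric_gradient(lambda t: dot(t, x), theta)
# The gradient of theta . x with respect to theta is x itself.
assert all(abs(g - xi) < 1e-6 for g, xi in zip(grad, x))
print(grad)
```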
|
3,304,800 | <p>Given that <span class="math-container">$x^2-4cx+b^2 \gt 0$</span> <span class="math-container">$\:$</span> <span class="math-container">$\forall$</span> <span class="math-container">$x \in \mathbb{R}$</span> and <span class="math-container">$a^2+c^2-ab \lt 0$</span></p>
<p>Then find the Range of <span class="math-container">$$y=\frac{x+a}{x^2+bx+c^2}$$</span></p>
<p>My try:</p>
<p>Since <span class="math-container">$$x^2-4cx+b^2 \gt 0$$</span> holds for all real <span class="math-container">$x$</span>, the discriminant satisfies <span class="math-container">$$D \lt 0 \implies b^2-4c^2 \gt 0.$$</span></p>
<p>Also</p>
<p><span class="math-container">$$x^2+bx+c^2=(x+a)^2+(b-2a)(x+a)+a^2+c^2-ab$$</span></p>
<p>Hence</p>
<p><span class="math-container">$$y=\frac{1}{(x+a)+b-2a+\frac{a^2+c^2-ab}{x+a}}$$</span></p>
<p>But since <span class="math-container">$a^2+c^2-ab \lt 0$</span>, we can't use the AM-GM inequality.</p>
<p>Any way to proceed?</p>
| Sen47 | 684,888 | <p>This is not a complete solution, just a sketch of a possible approach.</p>
<p>Cross multiplying and subtracting <span class="math-container">$x+a$</span> gives an equation:</p>
<p><span class="math-container">$x^2y+x(by-1)+(yc^2-a)=0$</span></p>
<p>The discriminant turns out to be:
<span class="math-container">$y^2(b^2-4c^2)+y(4a-2b)+1$</span></p>
<p>Further, the discriminant of the above expression, viewed as a quadratic in <span class="math-container">$y$</span>, is:</p>
<p><span class="math-container">$16(a^2+c^2-ab)$</span></p>
<p>Which is certainly less than 0 according to info provided. I believe in a similar manner the range shall pop up. But certainly, I might be wrong...</p>
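<p>A numerical check along these lines (my own Python sketch, not part of the answer; the sample values $a=1$, $b=3$, $c=1$ are my choice and satisfy both hypotheses). The discriminant in $x$ stays positive for every sampled $y$, which supports the conclusion that every $y$ is attained, i.e. the range is all of $\mathbb{R}$:</p>

```python
# Sample values (my own choice, not from the problem): a=1, b=3, c=1
a, b, c = 1.0, 3.0, 1.0
assert b * b - 4 * c * c > 0       # ensures x^2 - 4cx + b^2 > 0 for all x
assert a * a + c * c - a * b < 0   # second hypothesis

# For y to be attained, x^2 y + x(b y - 1) + (y c^2 - a) = 0 must
# have a real root, i.e. its discriminant in x must be >= 0.
for k in range(-2000, 2001):
    y = k / 100.0
    disc = (b * y - 1) ** 2 - 4 * y * (y * c * c - a)
    assert disc > 0, y
print("discriminant positive for all sampled y in [-20, 20]")
```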
|
1,110,122 | <p>Prove $$\large\int_{-\pi}^{\pi}\sin (\sin x) \,dx =0$$ without using the fact that $\sin(x)$ is odd.</p>
<p>Computing this in wolfram says that it is uncomputable, which leads me to believe that the only way to find this would be methods for solving definite integrals. I am wondering if it is possible with any other techniques such as DUIS or residues?</p>
| mrf | 19,440 | <p>Another silly answer, using complex analytic methods (similar to sos440's answer, but avoiding use of Bessel functions):</p>
<p>Rewrite the integrand using Euler's formulas and put $z = e^{ix}$, thus mapping $[-\pi,\pi]$ to the unit circle (some algebra omitted):
$$
\int_{-\pi}^\pi \sin \sin x \, dx =
-\frac1{2} \int_{|z|=1} \frac{\exp\Big( \frac12 ( z - \frac1z ) \Big) - \exp\Big( \frac12 ( -z + \frac1z ) \Big)}{z}\,dz.
$$</p>
<p>The integrand has an essential singularity at $z=0$, but we can still compute the relevant residue. Thanks to the $z$ in the denominator, we only have to compute the $0$:th terms of the Laurent series for the numerator.</p>
<p>We have
\begin{align}
\exp\Big( \frac12 ( z - \frac1z ) \Big) &=
e^{z/2} \cdot e^{-1/(2z)} \\
&= \Big( 1 + \frac{1}{1!} \big( \frac{z}{2} \big) + \frac{1}{2!} \big( \frac{z}{2} \big)^2 + \cdots \Big)
\Big( 1 - \frac{1}{1!} \big( \frac{1}{2z} \big) + \frac{1}{2!} \big( \frac{1}{2z} \big)^2 - \cdots \Big)
\end{align}
Hence, the $0$:th term will be
$$
1 - \frac{1}{(1!)^2} \frac{1}{2^2} + \frac{1}{(2!)^2} \frac{1}{2^4} - \frac{1}{(3!)^2} \frac{1}{2^6} + \cdots
$$</p>
<p>Similarly
\begin{align}
\exp\Big( \frac12 (-z + \frac1z ) \Big) &=
e^{-z/2} \cdot e^{1/(2z)} \\
&= \Big( 1 - \frac{1}{1!} \big( \frac{z}{2} \big) + \frac{1}{2!} \big( \frac{z}{2} \big)^2 - \cdots \Big)
\Big( 1 + \frac{1}{1!} \big( \frac{1}{2z} \big) + \frac{1}{2!} \big( \frac{1}{2z} \big)^2 + \cdots \Big)
\end{align}
And again, the $0$:th term will be
$$
1 - \frac{1}{(1!)^2} \frac{1}{2^2} + \frac{1}{(2!)^2} \frac{1}{2^4} - \frac{1}{(3!)^2} \frac{1}{2^6} + \cdots
$$</p>
<p>Summing up, the two factors have identical $0$:th terms, so the $0$:th term in the Laurent series for the numerator (their difference) vanishes, and by the residue theorem, the integral, unsurprisingly, is $0$.</p>
<p>Of course, since I bothered to write out a number of unnecessary calculations, this approach also uses, albeit implicitly, the fact that the integrand is odd.</p>
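<p>A numerical confirmation (my own Python sketch, not part of the answer) that the integral really vanishes:</p>

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson rule (n even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

value = simpson(lambda x: math.sin(math.sin(x)), -math.pi, math.pi)
# The residue computation predicts the integral is exactly 0.
assert abs(value) < 1e-12
print(value)
```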
|
1,616,460 | <blockquote>
<p>Let $p(x)$ be a quadratic polynomial with real coefficients satisfying $$x^2-2x+2\leq p(x)\leq 2x^2-4x+3\;\forall x\in \mathbb{R}$$ and $p(11) = 181.$ Then $p(16)= $ </p>
</blockquote>
<p>$\bf{My\; Try::}$ We can Write Inequality as $$(x-1)^2+1\leq p(x)\leq 2(x-1)^2+1$$</p>
<p>Here, from the above inequality, we see that the vertex of $p(x)$ must lie in the $\bf{1^{st}}$ quadrant and that the parabola opens upward.</p>
<p>Now I do not understand how to proceed from here.</p>
<p>Help me, Thanks</p>
| Eric Wofsey | 86,856 | <p>The second sentence of the proof of Corollary 9 is kind of poorly phrased. Here's a clearer phrasing:</p>
<blockquote>
<p>Suppose now that $L$ is an $S$-module, $K\subseteq N$ is a submodule, and $\psi:N/K\to L$ is an injective $R$-module homomorphism. Write $\varphi:N\to L$ for the composition of $\psi$ with the quotient homomorphism $N\to N/K$, so $\ker(\varphi)=K$.</p>
</blockquote>
<p>That is, the map $\varphi$ is exactly a map satisfying the hypotheses of Theorem 8, and $K$ is its kernel (where what we know about $K$ is that the quotient $N/K$ is a quotient that embeds into an $S$-module). So let $\Phi:S\otimes_R N\to L$ be the map provided by Theorem 8 using the map $\varphi$. Then for each $n\in N$, $\varphi(n)=\Phi(\iota(n))$. In particular, if $n\in \ker(\iota)$, then $\varphi(n)=\Phi(0)=0$, so $n\in\ker(\varphi)$.</p>
|
690,822 | <p>Let <span class="math-container">$G$</span> be a graph of order <span class="math-container">$n$</span> and let <span class="math-container">$k$</span> be an integer with <span class="math-container">$1\leq k\leq n-1$</span>. Prove that if <span class="math-container">$\delta(G)\geq (n+k-2)/2$</span>, then <span class="math-container">$G$</span> is <span class="math-container">$k$</span>-connected.</p>
<p>where <span class="math-container">$\delta(G)$</span> denotes the minimum degree of a vertex in <span class="math-container">$G$</span>.</p>
| bof | 111,012 | <p>Assume for a contradiction that $G$ is not $k$-connected. Since $n\gt k$, this means there is a set $S\subseteq V(G)$ such that $|S|=k-1$ and $G-S$ is disconnected. Then $G-S$ has $n-k+1$ vertices, and so the smallest component of $G-S$, call it $H$, has at most $\frac{n-k+1}2$ vertices. Let $v$ be any vertex in $H$; then $v$ is joined only to other vertices in $H$ and vertices in $S$, so
$$\deg v\le\frac{n-k+1}2-1+(k-1)=\frac{n+k-3}2\lt\frac{n+k-2}2\le\delta(G),$$
a contradiction.</p>
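<p>A brute-force check of the statement on a small example (my own Python sketch, not part of the proof; it tests every $(k-1)$-subset of vertices, which is only feasible for tiny graphs):</p>

```python
from itertools import combinations

def is_connected(vertices, edges):
    """DFS connectivity check on the induced subgraph."""
    vs = set(vertices)
    if not vs:
        return True
    adj = {v: set() for v in vs}
    for u, v in edges:
        if u in vs and v in vs:
            adj[u].add(v)
            adj[v].add(u)
    start = next(iter(vs))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == vs

def is_k_connected(n, edges, k):
    """Brute force: n > k and no (k-1)-subset of vertices disconnects G."""
    if n <= k:
        return False
    for cut in combinations(range(n), k - 1):
        if not is_connected(set(range(n)) - set(cut), edges):
            return False
    return True

# K5 minus one edge: n = 5, minimum degree 3 >= (5 + 2 - 2)/2,
# so the statement predicts 2-connectivity.
edges = [e for e in combinations(range(5), 2) if e != (0, 1)]
assert is_k_connected(5, edges, 2)
print("K5 minus an edge is 2-connected, as predicted")
```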
|
1,393,311 | <blockquote>
<p>Let $f:[0,1]\to[0,1]$ be a continuous function. Define $h:(0,1)\to[0,1]$ such that, $$h(x)=f(x)-\left\lfloor f(x)\right\rfloor$$Is $h$ continuous? Here $\left\lfloor x\right\rfloor$ is the <a href="https://www.google.co.in/url?sa=t&rct=j&q=&esrc=s&source=web&cd=7&cad=rja&uact=8&sqi=2&ved=0CEQQFjAGahUKEwiHtt2er6HHAhVBBo4KHTHGAGU&url=http%3A%2F%2Fmathworld.wolfram.com%2FFloorFunction.html&ei=8RfKVcf5GMGMuASxjIOoBg&usg=AFQjCNHELl1VhlrQZPnRxx_Yduw2VQQfCA&bvm=bv.99804247,d.c2E" rel="nofollow">floor function</a>.</p>
</blockquote>
<p>This problem arose due to solving another problem in Real Analysis. Intuitively, it seems that $h$ is continuous but I can neither prove or disprove it. Any help is appreciated.</p>
| DanielWainfleet | 254,665 | <p>This is Dominik's answer,with details: The set A is a compact subspace of some space S. The space A is compact iff any non-empty family F of closed subsets of A with the F.I.P. ...( Finite Intersection Property: i.e. if $G \subset F$ and $\phi \not =G$, then $\bigcap G \not = \phi$.) ... satisfies $\bigcap F \not = \phi$. From the definition of sub-space topology, a closed subspace C of the compact space A is compact (Observe that any closed subset of the space C is closed in A, so a family of closed subsets of C is a family of closed subsets of A.)</p>
|
251,799 | <p>I have a PDE in the form of
$$
\frac{\partial u}{\partial t} + \frac{\partial u}{\partial x} + u = \delta(x-1),
$$
with initial condition $u(x,0)=100$. I'm trying to solve it numerically, but I have no idea which method I should use. Most of the examples that I refer to have no delta function in the PDE.</p>
<p>Can anyone guide me on what should I do?</p>
<p>Thank you very much. </p>
| doraemonpaul | 30,938 | <p>Note that this PDE just belongs to the PDE of the form <a href="http://eqworld.ipmnet.ru/en/solutions/fpde/fpde1302.pdf" rel="nofollow">http://eqworld.ipmnet.ru/en/solutions/fpde/fpde1302.pdf</a> so that exact solution can find easily, so you have not necessarily to solve it numerically.</p>
<p>The general solution is $u(x,t)=e^{-x}C(x-t)+e^{-x}\int\delta(x-1)e^x~dx=e^{-x}C(x-t)+e^{1-x}H(x-1)$</p>
<p>$u(x,0)=100$ :</p>
<p>$e^{-x}C(x)+e^{1-x}H(x-1)=100$</p>
<p>$C(x)=100e^x-eH(x-1)$</p>
<p>$\therefore u(x,t)=e^{-x}(100e^{x-t}-eH(x-t-1))+e^{1-x}H(x-1)=100e^{-t}-e^{1-x}H(x-t-1)+e^{1-x}H(x-1)$</p>
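<p>A numerical check of this closed form (my own Python sketch, not part of the answer): away from the jump lines $x=1$ and $x=t+1$ the solution is smooth, so a central-difference residual of $u_t+u_x+u$ should vanish (the delta source lives only on $x=1$), and $u(x,0)=100$.</p>

```python
import math

def H(s: float) -> float:
    """Heaviside step."""
    return 1.0 if s > 0 else 0.0

def u(x: float, t: float) -> float:
    return (100 * math.exp(-t)
            - math.exp(1 - x) * H(x - t - 1)
            + math.exp(1 - x) * H(x - 1))

h = 1e-5
for x, t in [(0.5, 0.2), (1.5, 1.0), (2.0, 0.5), (0.3, 2.0)]:
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_x = (u(x + h, t) - u(x - h, t)) / (2 * h)
    residual = u_t + u_x + u(x, t)
    assert abs(residual) < 1e-6, (x, t, residual)
# Initial condition:
assert u(0.5, 0.0) == 100.0
print("PDE residual ~ 0 at sampled smooth points; u(x,0) = 100")
```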
|
1,595,909 | <p>I am studying Measure Theory and I am stuck in some concepts about continuity of a measure.</p>
<p>Let $(S_{1}, \Sigma,\mu)$ be a measure space, where $\mu$ is a probability measure such that $\mu (S_{1}) = 1.$ Let also $S_{\delta} \in \Sigma, 0\leq \delta \leq 1,$ be a family of sets such that $S_{\delta_{1}} \subseteq S_{\delta_{2}}$ for $0 \leq \delta_{1}\leq \delta_{2} \leq 1.$</p>
<p><strong>Theorem-</strong> <em>Continuity from below:</em>
If $E_{j} \subset E_{j+1}$ is an increasing sequence of mesurable sets, then
$$
\mu\left( \bigcup_{j=1}^{\infty} E_j \right)
= \lim_{j\rightarrow \infty}\mu(E_{j}).
$$ </p>
<p>Define $E_{j} = S_{1-1/j}$, for $j \in\{1,2,\ldots\}$. My question is the following:</p>
<p>If $\mu(S_{\delta}) = c$, with $c < 1$, for $0 \leq \delta < 1,$ how does the theorem hold at $\delta =1$ (i.e. as $ j\rightarrow \infty$)?</p>
<p>I think I missed something. Thanks in advance!</p>
| Jendrik Stelzner | 300,783 | <p>I see no problem in this.</p>
<p>By the theorem you have $\mu(\bigcup_{j=1}^\infty E_j) = c$ with $\bigcup_{j=1}^\infty E_j = \bigcup_{j=1}^\infty S_{1-1/j} \subseteq S_1$. That $c < 1 = \mu(S_1)$ just tells us that $\bigcup_{0 \leq \delta < 1} S_\delta = \bigcup_{j=1}^\infty S_{1-1/j} \subsetneq S_1$ is a proper subset. (This is what you missed.)</p>
<p>Take for example the unit interval $S_1 = [0,1]$ with the Lebesgue measure $\lambda$ and set $S_\delta = \emptyset$ for every $0 \leq \delta < 1$ (i.e. $c = 0$).</p>
|
516,501 | <p>Find limit: $x_n=\dfrac{1+\frac12+...+\frac1{2^n}}{1+\frac14+...+\frac1{4^n}}$</p>
<p>as $n \rightarrow \infty$</p>
<p>My "intuition" says that it should be $\frac34$ but I don't know how to prove it with rigour.</p>
| Zarrax | 3,035 | <p>By induction you can prove the following formula for the sum of terms of a geometric progression:
$$a + ar + ar^2 + ... + ar^n = {a - ar^{n+1} \over 1 - r}$$
So you can apply this in the numerator with $a = 1, r = {1 \over 2}$, and in the denominator with $a = 1, r = {1 \over 4}$. You obtain
$$x_n = {1 - {1 \over 2^{n+1}} \over 1 - {1 \over 2}} \times {1 - {1 \over 4} \over 1 - {1 \over 4^{n+1}}}$$
$$= {3 \over 2} {1 - {1 \over 2^{n+1}} \over 1 - {1 \over 4^{n+1}}}$$
Now take limits as $n \rightarrow \infty$.</p>
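A quick numerical check of the closed form above (plain Python, just to watch the limit emerge):

```python
# Evaluate x_n = (1 + 1/2 + ... + 1/2^n) / (1 + 1/4 + ... + 1/4^n) directly
# and watch it approach (3/2) * (1 - 0) / (1 - 0) = 3/2.

def x(n):
    num = sum(0.5 ** j for j in range(n + 1))
    den = sum(0.25 ** j for j in range(n + 1))
    return num / den

print(x(5), x(50))  # the second value is essentially 1.5
```

This also confirms that the limit is $\frac32$, not the $\frac34$ guessed in the question.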
|
516,501 | <p>Find limit: $x_n=\dfrac{1+\frac12+...+\frac1{2^n}}{1+\frac14+...+\frac1{4^n}}$</p>
<p>as $n \rightarrow \infty$</p>
<p>My "intuition" says that it should be $\frac34$ but I don't know how to proove it with rigour.</p>
| DonAntonio | 31,254 | <p>Geometric series</p>
<p>$$\sum_{n=k}^m r^n=\frac{r^k-r^{m+1}}{1-r}$$</p>
<p>and then a little arithmetic of limits:</p>
<p>$$x_n=\frac{\frac{1-\frac1{2^{n+1}}}{1-\frac12}}{\frac{1-\frac1{4^{n+1}}}{1-\frac14}}=\frac{2-\frac{1}{2^{n}}}{\frac{4-\frac{1}{4^{n}}}{3}}\xrightarrow[n\to\infty]{}\frac{2-0}{\frac{4-0}3}=\frac32$$</p>
|
618,288 | <p>I know it's been answered before (at least to the case with $n$ different eigenvalues) but I didn't find a proof for the general case, and I would like some help with this question.</p>
<p>We are given linear transforms $S,T: V\to V$ where $V$ is some vector space.</p>
<p>We are given that $S$ and $T$ commute, $ST=TS$, and that they are diagonalizable:</p>
<p>$T=PD_1P^{-1}$ and $S=KD_2K^{-1}$, where $D_1, D_2$ are diagonal and $K,P$ are invertible.</p>
<p>We are asked to show that $S$ and $T$ have a common eigenspace.</p>
<p><strong>My solution</strong></p>
<p>Maybe I understood the question wrong, but what I tried to do is show that if $v$ is an eigenvector of $S$ then it is also an eigenvector of $T$.</p>
<p>let $Sv=\lambda v$.</p>
<p>$STv=TSv=T\lambda v=\lambda Tv$ which implies that $Tv$ is an eigenvector of $S$ with eigenvalue $\lambda$.</p>
<p>Why does that mean that $v$ is an eigenvector of $T$?</p>
<p>Another possible way to solve this question is write:</p>
<p>$PD_1P^{-1}KD_2K^{-1} = KD_2K^{-1}PD_1P^{-1}$ and get that $P=K$ but I don't know how to do that either.</p>
| user44197 | 117,158 | <p>If $S$ and $T$ commute and both are diagonalizable then there is a common transform that will <em>simultaneously</em> diagonalize both.</p>
<p>First find a transform $M$ so that $M^{-1}SM$ is diagonal. Suppose that the eigenvalues of $S$ are sorted so repeated eigenvalues of $S$ are grouped together. </p>
<p>Let us suppose that $\lambda_1$ has multiplicity $m$. Since the eigenspace of $S$ is invariant under $T$ we should have
$$\begin{align}
M^{-1} S M &=
\begin{pmatrix}\lambda I_m & 0 \\ 0 & \hat{S}_{22}\end{pmatrix} \\
M^{-1} T M &=
\begin{pmatrix}\hat{T}_{11} & 0 \\ 0 & \hat{T}_{22}\end{pmatrix}
\end{align}$$
where $I_m$ is $m \times m$ identity matrix.
Let $P$ diagonalize $\hat{T}_{11}$
i.e.
$$ P^{-1} \hat{T}_{11} P = D $$ where $D$ is diagonal.</p>
<p>Now let
$$
R= M \begin{pmatrix} P & 0 \\0 & I\end{pmatrix}
$$
Then
$$
R^{-1} S R =
\begin{pmatrix} P^{-1} & 0 \\0 & I\end{pmatrix}
\begin{pmatrix}\lambda I_m & 0 \\ 0 & \hat{S}_{22}\end{pmatrix}
\begin{pmatrix} P & 0 \\0 & I\end{pmatrix} =
\begin{pmatrix}\lambda I_m & 0 \\ 0 & \hat{S}_{22}\end{pmatrix}
$$
$$
R^{-1} T R =
\begin{pmatrix} P^{-1} & 0 \\0 & I\end{pmatrix}
\begin{pmatrix}\hat{T}_{11} & 0 \\ 0 & \hat{T}_{22}\end{pmatrix}
\begin{pmatrix} P & 0 \\0 & I\end{pmatrix} =
\begin{pmatrix}D & 0 \\ 0 & \hat{T}_{22}\end{pmatrix}
$$
Thus the $m\times m$ block is now simultaneously diagonalized.</p>
<p>Proceed as before for other eigenvalues of $S$.</p>
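A small numerical illustration of the conclusion, with example matrices chosen for this sketch (they are not taken from the answer): two commuting diagonalizable $2\times 2$ matrices that share an eigenbasis.

```python
# S and T below commute and are both diagonalizable; the eigenvectors of T,
# v1 = (1, 1) and v2 = (1, -1), turn out to diagonalize S as well.

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matvec(A, v):
    n = len(A)
    return [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]

S = [[2, 1], [1, 2]]
T = [[0, 1], [1, 0]]

assert matmul(S, T) == matmul(T, S)  # S and T commute

# each v is a common eigenvector, with eigenvalues lam_S and lam_T:
for v, lam_S, lam_T in [([1, 1], 3, 1), ([1, -1], 1, -1)]:
    assert matvec(S, v) == [lam_S * c for c in v]
    assert matvec(T, v) == [lam_T * c for c in v]
print("common eigenbasis verified")
```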
|
123,502 | <p>Suppose $A \subseteq \{1,\dots,n\}$ does not contain any arithmetic progressions of length $k+1$. What is the largest number of $k$-term arithmetic progressions that $A$ can have? (One may also wish to put some lower or upper bound on the size of $A$.) We can work over $\mathbb{Z}_p$ if it makes the answer any easier. The "degenerate" case $k=2$ asks for the largest size of a set without arithmetic progressions, and it is known that there exist $A$'s with this property of almost linear size.</p>
| Gerhard Paseman | 3,528 | <p>I can offer some general common sense that might apply; it is derived from life
experience and not from the specific setting of my mentoring undergraduates
or being mentored as an undergraduate.</p>
<p>I find that practice is one way to develop ability in a particular skill set. I note
that many of the more respected answers on this forum are not just those that
are clear examples of communication: they have specific references and show
quality of research and scholarship. Precision and clarity are important, but
providing the links to the existing and relevant literature so that others can
follow, repeat, and confirm or correct the argument presented is a hallmark
of decent research; high school is not too early to start practicing such skills,
even for those not destined to a profession in the sciences, engineering, or
education. Even documenting and keeping journals on small projects is
good practice for those aiming to produce good research. Mentors should
do what they can to encourage such practice.</p>
<p>Gerhard "Aiming To Produce Good Research" Paseman, 2013.03.03</p>
|
3,170,742 | <p>I have a system of recurrence relations in the following form:</p>
<p><span class="math-container">$$
\begin{pmatrix}
f(n+1)\\
g(n+1)\\
\end{pmatrix}
= \textbf{A}
\begin{pmatrix}
f(n)\\
g(n)\\
\end{pmatrix} +\vec{b}
$$</span></p>
<p>which hold for all <span class="math-container">$n \in \{ 0,1,2,...,N\}.$</span> I also have the conditions: <span class="math-container">$f(0) = g(N+1) = 0.$</span> I've been trying to find a way to solve this but I'm really not sure how to proceed. Any help would be appreciated.</p>
| Vasili | 469,083 | <p>Hint: <span class="math-container">$\sin(\frac{\pi}{3}-x)=\sin \frac{\pi}{3}\cos x-\cos \frac{\pi}{3}\sin x$</span></p>
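A numerical spot-check of the hinted angle-subtraction identity (the sample values of $x$ are arbitrary):

```python
import math

# sin(pi/3 - x) = sin(pi/3)cos(x) - cos(pi/3)sin(x), checked at a few points:
for x in (0.0, 0.7, 2.3, -1.1):
    lhs = math.sin(math.pi / 3 - x)
    rhs = math.sin(math.pi / 3) * math.cos(x) - math.cos(math.pi / 3) * math.sin(x)
    assert abs(lhs - rhs) < 1e-12
print("identity verified at sample points")
```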
|
158,944 | <p>Is there any type of function that when graphed would show a curve which intersects the x axis multiple times, with each point being an arbitrary distance from the last?</p>
<p>I mean, not like a trig function where each intersect is the same distance from the last. (2,0); (4,0); (6,0); (8,0). And not like a spiral where the distance gets bigger and bigger (or smaller) (2,0); (4,0); (8,0); (16,0);</p>
<p>But for example, some curve which intersects x-axis at (2,0); (6,0); (14,0); (15,0); (20,0); (122,0)...</p>
<p>Does that type function exist?</p>
<p>If so, is it possible to solve/get the equation, given only those intercept points?</p>
<p>I wouldn't need the exact equation of any particular curve. Just the equation of any curve that happens to intersect the x-axis at whatever given arbitrary x values. Is that at least possible to do?</p>
| prajwal | 245,977 | <p>You can find such curves using the matrix method: take the powers of $x$ evaluated at the given points as the columns of a matrix, and solve for the coefficient vector so that the matrix product equals the given $y$ values. But which degree of $x$ has to be taken, and on what does that depend?</p>
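The simplest explicit construction answering the original question is a polynomial with the prescribed roots, $p(x) = (x-r_1)\cdots(x-r_k)$; its degree is simply the number of prescribed intercepts. A sketch, using the intercepts listed in the question:

```python
# Build p(x) = prod (x - r) for the prescribed x-intercepts; such a curve
# crosses the x-axis exactly at those (arbitrary) points.

def poly_from_roots(roots):
    """Coefficients [c0, c1, ...] (ascending powers) of prod(x - r)."""
    coeffs = [1]
    for r in roots:
        shifted = [0] + coeffs                   # x * p(x)
        scaled = [-r * c for c in coeffs] + [0]  # -r * p(x)
        coeffs = [a + b for a, b in zip(shifted, scaled)]
    return coeffs

def eval_poly(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

roots = [2, 6, 14, 15, 20, 122]                  # the intercepts from the question
p = poly_from_roots(roots)
assert all(eval_poly(p, r) == 0 for r in roots)  # vanishes exactly there
assert eval_poly(p, 0) != 0                      # nonzero away from the roots
print(p)
```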
|
237,464 | <p>Let $-\frac{1}{2}\le a \le\frac{1}{2}$ and $b\in[0,\infty)$.</p>
<p>Definitions: $$f_k(a;b):=\frac{(2k+\frac{1}{2}+a)^2+b}{(2k+\frac{1}{2}-a)^2+b}(\frac{k}{k+1})^{2a},$$
$$f(a;b):=\prod\limits_{k=1}^\infty f_k(a;b)$$
QUESTIONs: </p>
<p>(1) Does $f(a;b)=1$ have any solution with $a\neq 0$?</p>
<p>(2) If yes: Single points $(a;b)$ or areas ? </p>
<p>Thank you very much !</p>
<p>EDIT: Have changed $(\frac{k}{k+1})^a$ to $(\frac{k}{k+1})^{2a}$. It was a mistake.</p>
<p>2nd EDIT: It seems to be $f(a,b)<\pi^{2a}$ for $a>0$, at least for e.g. $b>2$. Correct? </p>
| Iosif Pinelis | 36,721 | <p>Let us show that for each $a\in[-\frac12,\frac12]\setminus\{0\}$ there is a unique $b\in(0,\infty)$ such that $f(a;b)=1$.
By factoring $(2k+\frac{1}{2}\pm a)^2+b$
as $4 [k + \frac14\, (1 \pm 2 a - 2 i\sqrt b)][k + \frac14\, (1 \pm 2 a + 2 i\sqrt b)]$,
noting that $\prod_{k=1}^n(k+c)=\Gamma(n+1+c)/\Gamma(1+c)$ for complex $c$,
and then using the Stirling formula (see e.g. formula (1) in <strong>[<a href="http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/1363-4.pdf" rel="nofollow noreferrer">Kilbas--Saigo</a>]</strong>), we have this closed-form expression for $f(a;b)$:<br>
\begin{equation}
f(a;b)=\frac{\Gamma(\frac54-\frac a2+\frac{i\sqrt b}2)\Gamma(\frac54-\frac a2-\frac{i\sqrt b}2)}
{\Gamma(\frac54+\frac a2+\frac{i\sqrt b}2)\Gamma(\frac54+\frac a2-\frac{i\sqrt b}2)}
= \bigg(\frac{\big|\Gamma(\frac54-\frac a2+\frac{i\sqrt b}2)\big|}
{\big|\Gamma(\frac54+\frac a2+\frac{i\sqrt b}2)\big|}\bigg)^2 . \tag{1}
\end{equation}
By formula $(1)$,
\begin{equation}
f(a;0)=\Big(\frac{g(a)}{g(-a)}\Big)^2,
\end{equation}
where $g(a):=\Gamma(\frac54-\frac a2)$ is convex in $a$. Since $g'(-\frac4{10})>0$, it follows that $g$ is increasing on $[-\frac4{10},\frac12]$, and so, $f(a;0)>1$ for $a\in[0,\frac4{10}]$. Also, for $a\in(\frac4{10},\frac12]$ one has $g(a)>g(\frac4{10})>g(-\frac12)\vee g(-\frac4{10})\ge g(-a)$, and so, $f(a;0)>1$ for $a\in(\frac4{10},\frac12]$ as well. Thus, $f(a;0)>1$ for all $a\in(0,\frac12]$, which is equivalent to $f(a;0)<1$ for all $a\in[-\frac12,0)$. </p>
<p>Next, using again formula $(1)$ above and formula (3) in <strong>[<a href="http://www.kurims.kyoto-u.ac.jp/~kyodo/kokyuroku/contents/pdf/1363-4.pdf" rel="nofollow noreferrer">Kilbas--Saigo</a>]</strong>, it is not hard to see that $f(a;\infty-)=0<1$ for all $a\in(0,\frac12]$ and $f(a;\infty-)=\infty>1$ for all $a\in[-\frac12,0)$. </p>
<p>Also, $f(a;b)$ is (strictly) increasing in $b\ge0$ for $a\in[-1/2,0)$ and decreasing in $b\ge0$ for $a\in(0,1/2]$ -- because $f_k(a;b)$ has these properties, for each $k$. </p>
<p>It follows that indeed for each $a\in[-\frac12,\frac12]\setminus\{0\}$ there is a unique $b\in(0,\infty)$ such that $f(a;b)=1$.</p>
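As a numerical sanity check of the monotonicity in $b$ used above (partial products only, not a proof; the sample values of $a$, $b$ and the truncation are arbitrary choices):

```python
# Each factor f_k(a;b) = ((2k+1/2+a)**2 + b)/((2k+1/2-a)**2 + b) * (k/(k+1))**(2a)
# is decreasing in b when a > 0 and increasing when a < 0, so the same holds
# for every partial product of f(a;b).

def f_partial(a, b, terms=20000):
    prod = 1.0
    for k in range(1, terms + 1):
        num = (2 * k + 0.5 + a) ** 2 + b
        den = (2 * k + 0.5 - a) ** 2 + b
        prod *= (num / den) * (k / (k + 1)) ** (2 * a)
    return prod

assert f_partial(0.3, 1.0) > f_partial(0.3, 10.0)    # decreasing in b for a > 0
assert f_partial(-0.3, 1.0) < f_partial(-0.3, 10.0)  # increasing in b for a < 0
print(f_partial(0.3, 1.0), f_partial(0.3, 10.0))
```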
<hr>
<p>For an illustration of the above result, here are graphs of $f(a,b)$ and $\text{sgn}(f(a,b)-1)$, suggesting something like a straight line of roots at $b\approx1.6$. </p>
<p><a href="https://i.stack.imgur.com/DLxL8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DLxL8.png" alt="enter image description here"></a></p>
<p>The "straight" line is not quite straight, as the following regional plots of $f(a,b)>1$ suggest: </p>
<p><a href="https://i.stack.imgur.com/gGmK4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gGmK4.png" alt="enter image description here"></a></p>
|
289,757 | <p>I am writing this as I am currently an intern at an aircraft manufacturer. I am studying a mixture of engineering and applied math. During the semester I focussed on numerical courses and my applied field is CFD. Even though every mathematician would say I have not heard a lot of math, for myself I would say that I get the "most amount of math" you can get while not studying math.</p>
<p>In my courses I have done deep theoretical analysis of numerical concepts and their application in CFD. But currently I am starting to wonder how much e.g. the Calculus of Variations course really helps me in my future career. The theory you learn at university seems to get only a little application in the <em>real world</em>. </p>
<p>Example: In my numerics for PDE class I have spent (wasted?) so many hours on trying to figure out the CFL number of certain schemes, but what I am doing right now has nothing to do with that. <em>Oh, your simulation diverges? Well, let's take 2 instead of 4 as our CFL number.</em>
Furthermore, I am not really programming stuff as I hoped I could, but rather scripting. Fact is, 99 out of 100 people are not going to program a CFD solver. You rather use the code and apply it to your needs.</p>
<p>I am aware that university always follows a much more theoretical path than industry, but I am actually disappointed how little math I am really doing. Okay, you might say that's due to the fact that I am an intern, and of course you are right. But I am in the lucky situation that my team comes really close to research. Most of the members hold a PhD and studied engineering or math, and the focus is definitely on research (in this department of the company). But if the amount of math is that small in such an environment, where are you really able to make use of what you have learned at university?</p>
<p>So here comes my question</p>
<blockquote>
<p>How much math are you actually doing at your job?
And I don't mean, how much math is helping you to understand things, but how often does it happen, that you sit down and really <strong>do math</strong> in your non-academic job?</p>
</blockquote>
<p>Personally I get the impression that I could do the exact same work without having taken most of my courses. Don't get me wrong, I really enjoy the theory, but currently I am rather frustrated.</p>
<p>Note: As this is my first question, I hope I did not screw up completely. I did not find similar questions on this site. And feel free to edit or ask questions if things are not clear.</p>
| dtldarek | 26,306 | <p>I've tried to put it into words, but I guess the following picture is much better (of course, please alter the text in your mind so that it fits the current context).</p>
<p><a href="http://www.calamitiesofnature.com/archive/?c=323" rel="noreferrer"><img src="https://i.stack.imgur.com/owX5F.jpg" alt="Calamities of Nature, strip 323."></a></p>
|
289,757 | <p>I am writing this as I am currently an intern at an aircraft manufacturer. I am studying a mixture of engineering and applied math. During the semester I focussed on numerical courses and my applied field is CFD. Even though every mathematician would say I have not heard a lot of math, for myself I would say that I get the "most amount of math" you can get while not studying math.</p>
<p>In my courses I have done deep theoretical analysis of numerical concepts and their application in CFD. But currently I am starting to wonder how much e.g. the Calculus of Variations course really helps me in my future career. The theory you learn at university seems to get only a little application in the <em>real world</em>. </p>
<p>Example: In my numerics for PDE class I have spent (wasted?) so many hours on trying to figure out the CFL number of certain schemes, but what I am doing right now has nothing to do with that. <em>Oh, your simulation diverges? Well, let's take 2 instead of 4 as our CFL number.</em>
Furthermore, I am not really programming stuff as I hoped I could, but rather scripting. Fact is, 99 out of 100 people are not going to program a CFD solver. You rather use the code and apply it to your needs.</p>
<p>I am aware that university always follows a much more theoretical path than industry, but I am actually disappointed how little math I am really doing. Okay, you might say that's due to the fact that I am an intern, and of course you are right. But I am in the lucky situation that my team comes really close to research. Most of the members hold a PhD and studied engineering or math, and the focus is definitely on research (in this department of the company). But if the amount of math is that small in such an environment, where are you really able to make use of what you have learned at university?</p>
<p>So here comes my question</p>
<blockquote>
<p>How much math are you actually doing at your job?
And I don't mean, how much math is helping you to understand things, but how often does it happen, that you sit down and really <strong>do math</strong> in your non-academic job?</p>
</blockquote>
<p>Personally I get the impression that I could do the exact same work without having taken most of my courses. Don't get me wrong, I really enjoy the theory, but currently I am rather frustrated.</p>
<p>Note: As this is my first question, I hope I did not screw up completely. I did not find similar questions on this site. And feel free to edit or ask questions if things are not clear.</p>
| Ganesh | 1,585 | <p>A bit of background first: my area is Computer Science, and I've been a student as well as a professional software engineer. As such, my answer is tailored to this profession. As a CS student I'd learnt a fair amount of mathematics: discrete maths, logic, plus the standard engineering stuff: ODE, PDE, Transforms, Operations Research. I've also picked up quite a bit of undergraduate pure maths by myself. </p>
<p>As a software engineer, the most mathematics I had to do was in the form of sanity checks and unit tests for my software. Arguably, the calculations hardly went beyond basic arithmetic, and sometimes involved a certain amount of algebra. No calculus, ODEs, Transforms, etc. Hardly have I "sat down to do maths"! </p>
<p>In the software industry, the core product development does take a substantial knowledge of relevant mathematics concepts, but the work of a typical engineer may not involve any of this. For example, one of my friends at a database firm told me that their core database team knew a great deal of logic and set theory, which is the maths that gets heavily used in databases. But the typical engineer was involved in the peripheral parts, e.g. designing the GUI, and this involved relatively little mathematics. A company is not necessarily interested in what amount of maths is used, but rather what brings in the maximum revenue. </p>
<p>Also, I've often encountered "maths heavy" (or otherwise arcane) software code wrapped up in easy-to-use programming libraries, e.g. image processing.
In such a case, I'm the user, not the designer, and have to know relatively little maths. Such "packaging" saves the programmer from thorny implementation details and having to reinvent the wheel.</p>
<p>I totally agree with Peter Sheldrick's comment on your question. But it's important to point out that some of the benefits you gain by taking maths-heavy courses may be quite abstract. For example, almost every aspect of Software Engineering involves substantial abstraction (I dare say beyond algebraic geometry!); debugging is sometimes like finding an error in a proof. Mathematics courses encourage you to do both. </p>
<p>I also strongly recommend Keith Devlin's <a href="http://www.maa.org/devlin/devlin_10_00.html"> article </a> on whether a software engineer needs maths, and suggest you ponder over this. HTH, and sorry for the long post!</p>
|
289,757 | <p>I am writing this as I am currently an intern at an aircraft manufacturer. I am studying a mixture of engineering and applied math. During the semester I focussed on numerical courses and my applied field is CFD. Even though every mathematician would say I have not heard a lot of math, for myself I would say that I get the "most amount of math" you can get while not studying math.</p>
<p>In my courses I have done deep theoretical analysis of numerical concepts and their application in CFD. But currently I am starting to wonder how much e.g. the Calculus of Variations course really helps me in my future career. The theory you learn at university seems to get only a little application in the <em>real world</em>. </p>
<p>Example: In my numerics for PDE class I have spent (wasted?) so many hours on trying to figure out the CFL number of certain schemes, but what I am doing right now has nothing to do with that. <em>Oh, your simulation diverges? Well, let's take 2 instead of 4 as our CFL number.</em>
Furthermore, I am not really programming stuff as I hoped I could, but rather scripting. Fact is, 99 out of 100 people are not going to program a CFD solver. You rather use the code and apply it to your needs.</p>
<p>I am aware that university always follows a much more theoretical path than industry, but I am actually disappointed how little math I am really doing. Okay, you might say that's due to the fact that I am an intern, and of course you are right. But I am in the lucky situation that my team comes really close to research. Most of the members hold a PhD and studied engineering or math, and the focus is definitely on research (in this department of the company). But if the amount of math is that small in such an environment, where are you really able to make use of what you have learned at university?</p>
<p>So here comes my question</p>
<blockquote>
<p>How much math are you actually doing at your job?
And I don't mean, how much math is helping you to understand things, but how often does it happen, that you sit down and really <strong>do math</strong> in your non-academic job?</p>
</blockquote>
<p>Personally I get the impression that I could do the exact same work without having taken most of my courses. Don't get me wrong, I really enjoy the theory, but currently I am rather frustrated.</p>
<p>Note: As this is my first question, I hope I did not screw up completely. I did not find similar questions on this site. And feel free to edit or ask questions if things are not clear.</p>
| b.sahu | 60,209 | <p>I have specialized in High Voltage Engineering. I have worked for about 8 years in industry and more than 30 years in academics. I used one percent of whatever maths I have learnt in industry and about ten percent in academics.</p>
<p>I cannot blame the designers of university courses, or the industry, or the government for this mismatch between theory and practice. It seems education is in a process of evolution: it is trying to meet the needs of the practical world. Similarly, the industrial world is trying to get the maximum out of the educational system: they will pay the freshly recruited students the maximum if they can get an equivalent return from them.</p>
<p>I have worked for about ten years in the USA and Europe, and the rest in India.
The situation seems to be the same everywhere. If we apply Darwin's theory of evolution to this problem, we will come to the conclusion that optimal use of theory and practice is a continuous search or evolutionary process.
No one knows the future: everybody is learning to face the uncertainties of the future by a trial and error process. This is what I believe in.</p>
|
289,757 | <p>I am writing this as I am currently an intern at an aircraft manufacturer. I am studying a mixture of engineering and applied math. During the semester I focussed on numerical courses and my applied field is CFD. Even though every mathematician would say I have not heard a lot of math, for myself I would say that I get the "most amount of math" you can get while not studying math.</p>
<p>In my courses I have done deep theoretical analysis of numerical concepts and their application in CFD. But currently I am starting to wonder how much e.g. the Calculus of Variations course really helps me in my future career. The theory you learn at university seems to get only a little application in the <em>real world</em>. </p>
<p>Example: In my numerics for PDE class I have spent (wasted?) so many hours on trying to figure out the CFL number of certain schemes, but what I am doing right now has nothing to do with that. <em>Oh, your simulation diverges? Well, let's take 2 instead of 4 as our CFL number.</em>
Furthermore, I am not really programming stuff as I hoped I could, but rather scripting. Fact is, 99 out of 100 people are not going to program a CFD solver. You rather use the code and apply it to your needs.</p>
<p>I am aware that university always follows a much more theoretical path than industry, but I am actually disappointed how little math I am really doing. Okay, you might say that's due to the fact that I am an intern, and of course you are right. But I am in the lucky situation that my team comes really close to research. Most of the members hold a PhD and studied engineering or math, and the focus is definitely on research (in this department of the company). But if the amount of math is that small in such an environment, where are you really able to make use of what you have learned at university?</p>
<p>So here comes my question</p>
<blockquote>
<p>How much math are you actually doing at your job?
And I don't mean, how much math is helping you to understand things, but how often does it happen, that you sit down and really <strong>do math</strong> in your non-academic job?</p>
</blockquote>
<p>Personally I get the impression that I could do the exact same work without having taken most of my courses. Don't get me wrong, I really enjoy the theory, but currently I am rather frustrated.</p>
<p>Note: As this is my first question, I hope I did not screw up completely. I did not find similar questions on this site. And feel free to edit or ask questions if things are not clear.</p>
| Alex | 38,873 | <p>There's one (excellent!) real-life application of derivatives, the one that should be explained to people as early as in high school, I guess: neural classifiers aka ANNs. </p>
<p>My doctoral degree was, like, 80% mathematical and 20% computational, but my postdoc jobs are the exact opposite (probably even 90% computational). Nevertheless, I work with some real-life software (automatic machine translation) that is comparable in its performance to Google Translate and that uses plenty of computational resources.</p>
<p>The main optimizer is backpropagation, which uses derivatives of the activation function to find the delta-value used to update the weight of the connection between neurons. One such activation function is the hyperbolic tangent: </p>
<p>$$
h(x) = \frac{e^{x} - e^{-x}}{e^{x} + e^{-x}}
$$</p>
<p>Without knowledge of how to take the derivative of this (and other) function you won't be able to develop even a simple neural network, let alone a complicated one! And the area of applications of them is very broad: from banking (classify the applicant as genuine vs fraudulent) to language translation to signal processing. </p>
|
2,466,022 | <p>In Real and Complex Analysis, 3rd Edition, Walter Rudin advances the following:</p>
<p><a href="https://i.stack.imgur.com/XEVwK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XEVwK.png" alt="enter image description here"></a>
<a href="https://i.stack.imgur.com/dTwbg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dTwbg.png" alt="enter image description here"></a></p>
<p>How does $e^z \cdot e^{-z} = 1$ entail $(a)$?</p>
| Thorgott | 422,019 | <p>Hint: $\forall z\in\mathbb{C}:0\cdot z=0$</p>
|
3,203,346 | <p>I know that <span class="math-container">$f(0)=2$</span>, <span class="math-container">$f'(0)=3$</span> and <span class="math-container">$g=f^{-1}$</span>.</p>
<p>But how can I find the value of <span class="math-container">$g'(2)$</span>?</p>
| Olivier Roche | 649,615 | <p>Use the fact that $g \circ f = Id$, and differentiate both sides at $0$; you get:
<span class="math-container">$$
f'(0) \cdot g'(f(0)) = 1 \\
\textrm{thus }g'(f(0)) = \frac{1}{f'(0)}
$$</span>
Hence, <span class="math-container">$g'(2) = \frac{1}{3}$</span> .</p>
<p>Btw, that's how you get the derivative of an inverse function in general.</p>
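A quick numerical check with one concrete choice of $f$ satisfying $f(0)=2$ and $f'(0)=3$ (the quadratic below is a hypothetical example; any such $f$ gives the same answer):

```python
# f(x) = 2 + 3x + x**2 has f(0) = 2 and f'(0) = 3, so g'(2) should be 1/3.

def f(x):
    return 2 + 3 * x + x ** 2

def g(y, lo=-1.0, hi=1.0):
    """Invert f near 0 by bisection (f is increasing on [-1, 1])."""
    for _ in range(80):
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

h = 1e-5
g_prime_at_2 = (g(2 + h) - g(2 - h)) / (2 * h)   # central difference
print(g_prime_at_2)  # approximately 1/3
```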
|
4,370,989 | <p>I have looked in some of my old books and found an exercise that I do not know how to solve. It seems pretty simple though.</p>
<p>The question is as follows:</p>
<pre><code>Which of these integers are prime?:
111
111.1
111.111
111.111.11
111.111.111
111.111.111.1
111.111.111.111
</code></pre>
<p>I remember one rule of thumb saying that if the sum of the digits mod 9 is 0, 3 or 6, then you can know for sure that the number is NOT prime, since it is divisible by 3.</p>
<p>But for numbers such as this, is there any way of checking whether the number is actually prime?</p>
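Reading the dots in the list as digit-group separators (an assumption about the book's notation), the numbers are repunits $11\ldots1$ of lengths 3, 4, 6, 8, 9, 10 and 12, and for numbers of this size plain trial division settles the question:

```python
# Trial-division primality test applied to the repunits in the list.
# None of them is prime: a repunit can be prime only when its length is
# a prime number, and even that is not sufficient (111 = 3 * 37).

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for length in (3, 4, 6, 8, 9, 10, 12):
    repunit = int("1" * length)
    print(length, repunit, is_prime(repunit))   # all composite
```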
| Jamie Radcliffe | 25,795 | <p>The group elements are <strong>not</strong> words in <span class="math-container">$a,b,c, \dots$</span>, but instead equivalence classes of words in <span class="math-container">$a,b,c,\dots,a^{-1}, b^{-1}, c^{-1},\dots$</span>. The equivalence relation is generated by all equivalences <span class="math-container">$$w a a^{-1} w' \sim w w' \sim w a^{-1}a w'$$</span> where <span class="math-container">$w,w$</span> are words and <span class="math-container">$a$</span> can be any of the letters. The group operation is defined by <span class="math-container">$$ [w][w'] = [ww'].$$</span> Of course work is required to prove that this is a group and satisfies the relevant universal property.</p>
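The reduction to a normal form described above can be computed with a single stack pass. The sketch below models a letter as a (symbol, exponent) pair, a representation chosen just for this illustration:

```python
# Reduce words by repeatedly cancelling adjacent pairs a a^-1 and a^-1 a.
# Letters are modeled as (symbol, +1 or -1) pairs; one stack pass suffices.

def reduce_word(word):
    stack = []
    for sym, exp in word:
        if stack and stack[-1] == (sym, -exp):
            stack.pop()          # cancel a a^-1 or a^-1 a
        else:
            stack.append((sym, exp))
    return stack

# (b a a^-1 b^-1 c) reduces to just (c):
w = [("b", 1), ("a", 1), ("a", -1), ("b", -1), ("c", 1)]
assert reduce_word(w) == [("c", 1)]

# The group operation [w][w'] = [ww'] is concatenation followed by reduction:
assert reduce_word([("a", 1)] + [("a", -1)]) == []   # the identity element
print(reduce_word(w))
```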
|
128,651 | <p>Let $M$ be a smooth manifold (maybe compact, if that helps). Denote by $\operatorname{Diff}(M)$ the group of diffeomorphisms $M\to M$ and by $R(M)$ the space of Riemannian metrics on $M$. We obtain a canonical group action
$$ R(M) \times \operatorname{Diff}(M) \to R(M), (g,F) \mapsto F^*g, $$
where $F^*g$ denotes the pullback of $g$ along $F$. Is this action transitive? In other words, is it possible for any two Riemannian metrics $g,h$ on $M$ to find a diffeomorphism $F$ such that $F^*g=h$? Do you know any references for this type of questions?</p>
| Eric O. Korman | 9 | <p>This map will not be transitive in general. For example, if $g$ is a metric and $\phi \in Diff(M)$ then the curvature of $\phi^* g$ is going to be the pullback of the curvature of $g$. So there's no way for a metric with zero curvature to be carried by a diffeomorphism to a metric with non-zero curvature. Or, for example, if $g$ is Einstein ($Ric = \lambda g$) then so is $\phi^* g$. So there are many diffeomorphism invariants of a metric.</p>
<p>Indeed, this should make sense because you can think of a diffeomorphism as passive, i.e. as just a change of coordinates. Then all of the natural things about the Riemannian geometry of a manifold should be coordinate ($\Leftrightarrow$ diffeomorphism) invariant.</p>
|
3,741,122 | <p>Recently I've tried to find the difference between partial differentiation and total differentiation.
I've heard the total derivative is defined on single value functions, while the partial derivative by contrast is defined on multivariate functions.
My problem is, that total differentiation is used on multivariate functions all the time.</p>
<p>Every time I come up with a rigorous definition I arrive at a contradiction.
I will share what I have defined so far, and hopefully you can enlighten me.</p>
<p>Let</p>
<p><span class="math-container">$$f: (x_1, ... , x_n) \rightarrow f(x_1, ..., x_n)$$</span></p>
<p>and it's partial derivative by the difference quotient</p>
<p><span class="math-container">$$\frac{\partial f}{\partial x_i} = \lim_{h \to 0} \frac{f(x_1,..,x_i+h,...x_n)- f(x_1,..., x_n)}{h}$$</span></p>
<p>the total derivative must by contrast account for interdependence between <span class="math-container">$x_k$</span> in the domain of f.</p>
<p><span class="math-container">$$\frac{df}{dx_i}\stackrel{?}{=} \sum_k{\frac{\partial f}{\partial x_k} \frac{\partial x_k}{\partial x_i}}$$</span></p>
<p>This seemed sensible to me, until I realized it simplified to</p>
<p><span class="math-container">$$n \frac{\partial f}{\partial x_i}$$</span></p>
<p>which definitely isn't right.</p>
<p>Can someone tell me where I've made an error? Or provide better definition? This issue really annoys me, since all my research so far didn't answer this question at all.</p>
<p>Edit:
Ok thank you for all the responses!
I'm just writing out the final formula for total derivatives for quick lookup now:
<span class="math-container">$\frac{d}{d x_i}$</span> is defined recursively as
<span class="math-container">$$\frac{df}{dx_i}\stackrel{!}{=} \sum_k{\frac{\partial f}{\partial x_k} \frac{d x_k}{d x_i}}$$</span></p>
<p>until <span class="math-container">$x_k$</span> has a domain without interdependence, in which case <span class="math-container">$\frac{\partial x_j}{\partial x_i}$</span> = <span class="math-container">$\frac{d x_j}{d x_i}$</span> and the entire expression can be calculated by limits.</p>
| 5xum | 112,884 | <p>It simplifies to <span class="math-container">$\frac{\partial f}{\partial x_i}$</span>, because <span class="math-container">$\frac{\partial x_k}{\partial x_i}=0$</span> if <span class="math-container">$i\neq k$</span>.</p>
|
1,515,900 | <p>In many books and papers on analysis I met this equality without proof:</p>
<p>$$\sup \limits_{t\in[a,b]}f(t)-\inf \limits_{t\in[a,b]}f(t)=\sup \limits_{t,s\in[a,b]}|f(t)-f(s)|$$</p>
<p>Can anyone show a rigorous and clean proof of that equality?</p>
<p>I would be really grateful for your help!</p>
| Nicolas | 213,738 | <p>We have
$$\sup_{t\in[a,b]}f(t)\geq f(t)\quad\quad\quad\forall t\in[a,b]$$
and
$$\inf_{t\in[a,b]}f(t)\leq f(t)\quad\quad\quad\forall t\in[a,b]$$
so
$$-\inf_{t\in[a,b]}f(t)\geq -f(t)\quad\quad\quad\forall t\in[a,b]$$
whence
$$\sup_{t\in[a,b]}f(t)-\inf_{t\in[a,b]}f(t)\geq f(t)-f(s)\quad\quad\quad\forall s,t\in[a,b].$$
As
$$\sup_{t\in[a,b]}f(t)\geq\inf_{t\in[a,b]}f(t),$$
we have $$|\sup_{t\in[a,b]}f(t)-\inf_{t\in[a,b]}f(t)|=\sup_{t\in[a,b]}f(t)-\inf_{t\in[a,b]}f(t)$$
and then
$$\sup_{t\in[a,b]}f(t)-\inf_{t\in[a,b]}f(t)\geq |f(t)-f(s)|\quad\quad\quad\forall s,t\in[a,b].$$
We thus have got the inequality
$$\sup_{t\in[a,b]}f(t)-\inf_{t\in[a,b]}f(t)\geq \sup_{s,t\in[a,b]}|f(t)-f(s)|.$$</p>
<p>For the converse, we start with the inequality
$$f(t)-f(s)\leq|f(t)-f(s)|\leq \sup_{s,t\in[a,b]}|f(t)-f(s)|\quad\quad\quad\forall s,t\in[a,b].$$
For all $\varepsilon>0$, there are $s,t\in[a,b]$ such that
$$\sup_{t\in[a,b]}f(t)-\varepsilon/2\leq f(t),\quad\quad\inf_{t\in[a,b]}f(t)+\varepsilon/2\geq f(s),$$
thus
$$\sup_{t\in[a,b]}f(t)-\inf_{t\in[a,b]}f(t)\leq f(t)-f(s)+\varepsilon$$
whence
$$\sup_{t\in[a,b]}f(t)-\inf_{t\in[a,b]}f(t)\leq \sup_{s,t\in[a,b]}|f(t)-f(s)|+\varepsilon$$
and we are done.</p>
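As a quick numerical sanity check of the identity (not a proof), one can sample a continuous function on a fine grid and compare both sides; the function below is an arbitrary hypothetical example.

```python
import numpy as np

# Sample f(t) = sin(t) + t/2 (an arbitrary continuous test function) on [0, 3]
t = np.linspace(0.0, 3.0, 801)
f = np.sin(t) + 0.5 * t

lhs = f.max() - f.min()                       # sup f - inf f over the grid
rhs = np.abs(f[:, None] - f[None, :]).max()   # sup over all pairs of |f(t) - f(s)|
```

Both quantities agree to floating-point precision, matching the identity.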
|
830,599 | <p>The function $f$ is defined as follows:
$$f(x):=\sum_{j=1}^{\infty} \frac{x^j}{j!} e^{-x}$$</p>
<p>It's easy to see that $f(0)=0$. But I am interested in the value
$$\lim_{x \rightarrow 0^+} f(x).$$</p>
<p>Even <a href="https://www.wolframalpha.com/input/?i=lim_%28x-%3E0%29+%28sum_%28i%3D1%29%5Einfinity+x%5Ej%2F%28j%21%29+e%5E%28-x%29%29+" rel="nofollow">Wolfram Alpha</a> does not help here. I tried to plot this function, but this doesn't work neither. And my calculator doesn't give a solution for concrete values of $x$, so I have no idea how to get on here. </p>
| Eric Towers | 123,905 | <p>Wolfram Alpha can <a href="http://www.wolframalpha.com/input/?i=Limit%5BSum%5Bx%5Ej/j!%20E%5E-x,%20%7Bj,%201,%20%5C%5BInfinity%5D%7D%5D,%20x%20-%3E%200,%20Direction%20-%3E%20-1%5D" rel="nofollow">do this limit</a>. "Direction->-1" means to approach the limit from larger values.</p>
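For what it's worth, the series has a simple closed form: <span class="math-container">$\sum_{j\ge 1}x^j/j! = e^x-1$</span>, so <span class="math-container">$f(x)=(e^x-1)e^{-x}=1-e^{-x}$</span>, and the one-sided limit at <span class="math-container">$0$</span> is <span class="math-container">$0$</span>, agreeing with <span class="math-container">$f(0)=0$</span>. A short check of both claims:

```python
import math
import sympy as sp

# Partial sum of sum_{j>=1} x^j/j! * e^{-x} versus the closed form 1 - e^{-x}
x0 = 0.5
partial = sum(x0**j / math.factorial(j) for j in range(1, 30)) * math.exp(-x0)
closed = 1.0 - math.exp(-x0)

# The one-sided limit of the closed form at 0
x = sp.symbols('x')
lim = sp.limit(1 - sp.exp(-x), x, 0, '+')
```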
|
13,829 | <p>I was trying to understand the notion of a connection. I have heard in seminars that a connection is more or less a differential equation. I read the definition of Kozsul connection and I am trying to assimilate it. So far I cannot see why a connection is a differential equation. Please help me with some clarification.</p>
| daniel.jackson | 3,237 | <p>There is a standard technique we learned for dealing with integrals of the form $(*) \int_{a}^{\infty}f(x)\cos (\alpha x)dx$ or $\int_{a}^{\infty}f(x)\sin (\alpha x)dx$ when $f(x)$ is continuous and positive in $[a, \infty)$.</p>
<ol>
<li><p>If $\int_{a}^{\infty}f(x)dx$ converges, then (*) converges absolutely by the comparison test because $|f(x)\sin (\alpha x)|\leq f(x)$ (or $|f(x)\cos (\alpha x)|\leq f(x)$).</p></li>
<li><p>If $\int_{a}^{\infty}f(x)dx$ diverges while $f(x)$ is decreasing and $\lim_{x\to\infty}f(x)=0$ then the integrals (*) converge conditionally by <a href="http://math.nyu.edu/student_resources/wwiki/index.php/Category%3aDirichlet_Test#Dirichlet_Test_for_Integrals#Warning_About_Improper_Integrals" rel="nofollow">Dirichlet's test</a>.</p></li>
</ol>
<p>I'll illustrate the technique on your integral:</p>
<p>First, let's substitute $t=x^2$; then $(**) \int_{0}^{\infty} \cos(x^2) \mathrm dx=\int_{0}^{\infty} \frac{\cos(t)}{2 \sqrt t} \mathrm dt$. </p>
<p>Let $f(x)=\frac{1}{2\sqrt x}$; then $f(x)$ is decreasing and $\lim_{x\to\infty}f(x)=0$. Also $\cos(x)$ has a bounded anti-derivative. Therefore (**) converges by Dirichlet's test.</p>
<p>Now we observe that $|\frac{\cos(x)}{2 \sqrt x}|\geq \frac{\cos^2(x)}{2 \sqrt x}=\frac{1}{4\sqrt x}+\frac{\cos(2x)}{4\sqrt x}$. By the same arguments above $\int_{0}^{\infty}\frac{\cos(2x)}{4\sqrt x}dx$ converges. But then if we assume that $\int_{0}^{\infty}\frac{\cos^2(x)}{2 \sqrt x}$ converges, we get that $\int_{0}^{\infty}\frac{1}{4\sqrt x}$ converges and that's obviously not true. </p>
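A numeric illustration of the conditional convergence (a sketch using a plain trapezoid rule, no special-function library): the partial integrals $\int_0^X\cos(x^2)\,dx$ oscillate but settle toward the known Fresnel value $\sqrt{2\pi}/4\approx 0.6267$, with the deviation bounded by roughly $1/X$.

```python
import numpy as np

def partial_integral(X, n=400_000):
    """Trapezoid-rule approximation of the partial integral of cos(x^2) on [0, X]."""
    x = np.linspace(0.0, X, n + 1)
    y = np.cos(x * x)
    dx = X / n
    return dx * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

target = np.sqrt(2 * np.pi) / 4   # known value of the full integral on [0, oo)
devs = {X: abs(partial_integral(X) - target) for X in (5.0, 10.0, 20.0)}
```

The deviations shrink on the order of $1/X$, consistent with convergence despite the lack of absolute convergence.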
|
2,439,324 | <blockquote>
<p>Does there exist an uncountable set $A \subset \mathbb{R}$, such that for every $a \in A$ and every $\epsilon>0$, $(a-\epsilon,a+\epsilon)\not\subset A$?</p>
</blockquote>
<p>I am not sure what the answer is, but I am having trouble trying to construct such a set. Any hints?</p>
| Levon Haykazyan | 11,753 | <p>Hint: The set of rational numbers is dense and countable.</p>
|
1,371,549 | <p>Differentiate the Function : $y=\log_2(e^{-x} \cos(\pi x))$</p>
<p>Here is my work. What I have I done wrong?
<a href="https://i.stack.imgur.com/w9RSN.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/w9RSN.jpg" alt="enter image description here"></a></p>
| Bort | 256,093 | <p>Your result seems to be okay, you could and should however simplify it. Also, your calculation would have been simpler if you had said something like</p>
<p>$$y=\log_2 e^{-x} \cos\pi x \\
= \frac{-x + \log\cos\pi x}{\log 2} $$</p>
<p>and thus</p>
<p>$$y' \log 2 = -1-\pi\tan\pi x$$</p>
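A quick SymPy check of this simplified result (a sketch; here $\log$ denotes the natural log, since $\log_2 u = \ln u/\ln 2$):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.log(sp.exp(-x) * sp.cos(sp.pi * x), 2)   # log base 2, as in the problem
dy = sp.diff(y, x)

# The claimed simplification: y' * log(2) = -1 - pi*tan(pi*x)
target = (-1 - sp.pi * sp.tan(sp.pi * x)) / sp.log(2)
```

Evaluating the difference at a sample point confirms the two expressions agree.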
|
3,351,012 | <p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span> seems to be leading me in circles. The integral I get when I use integration by parts, <span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span> just leads me back to <span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span>. I am not sure how to solve it.</p>
<p><strong>My Steps:</strong></p>
<p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span></p>
<p>Let <span class="math-container">$u = \sin(3\theta)$</span> and <span class="math-container">$dv=e^{2\theta}d\theta$</span></p>
<p>Then <span class="math-container">$du = 3\cos(3\theta)d\theta$</span> and <span class="math-container">$v = \frac{1}{2}e^{2\theta}$</span>
<span class="math-container">\begin{align*}
\int e^{2\theta} \sin(3 \theta)d\theta &= \frac{1}{2} e^{2\theta}\sin(3\theta) - \int\frac{1}{2}e^{2\theta}3\cos(3\theta)d\theta\\
&=\frac{1}{2}e^{2\theta}\sin(3\theta) - \frac{3}{2}\int e^{2\theta}\cos(3\theta)d\theta\\
\end{align*}</span></p>
<hr>
<p><span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span></p>
<p>Let <span class="math-container">$u = \cos(3\theta)$</span> and <span class="math-container">$dv = e^{2\theta}d\theta$</span></p>
<p>Then <span class="math-container">$du = -3\sin(3\theta)d\theta$</span> and <span class="math-container">$v=\frac{1}{2}e^{2\theta}$</span> </p>
<p><span class="math-container">\begin{align*}
\int e^{2\theta}\cos(3\theta)d\theta &= \frac{1}{2}e^{2\theta}\cos(3\theta)-\int \frac{1}{2}e^{2\theta}\cdot\left(-3\sin(3\theta)\right)d\theta\\
&=\frac{1}{2}e^{2\theta}\cos(3\theta)+ \frac{3}{2} \int e^{2\theta}\sin(3\theta)d\theta
\end{align*}</span></p>
<p>So you can see I just keep going in circles. How can I break out of this loop?</p>
| lab bhattacharjee | 33,337 | <p>Hint</p>
<p>In case integration by parts is not mandatory,</p>
<p>find
<span class="math-container">$$\dfrac{d}{dx}\left[e^{2x}(a\cos3x+b\sin3x)\right]$$</span> and compare with <span class="math-container">$e^{2x}\sin3x$</span> to find the values of <span class="math-container">$a,b$</span></p>
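A sketch of carrying the hint out with SymPy: differentiate the ansatz <span class="math-container">$e^{2\theta}(a\cos 3\theta + b\sin 3\theta)$</span> and match coefficients of <span class="math-container">$\cos 3\theta$</span> and <span class="math-container">$\sin 3\theta$</span>.

```python
import sympy as sp

t, a, b = sp.symbols('theta a b')
ansatz = sp.exp(2 * t) * (a * sp.cos(3 * t) + b * sp.sin(3 * t))

# The derivative of the ansatz minus the target integrand must vanish identically
residual = sp.expand(sp.diff(ansatz, t) - sp.exp(2 * t) * sp.sin(3 * t))
sol = sp.solve(
    [residual.coeff(sp.cos(3 * t)), residual.coeff(sp.sin(3 * t))], [a, b]
)
antiderivative = ansatz.subs(sol)
```

This gives <span class="math-container">$a=-3/13$</span>, <span class="math-container">$b=2/13$</span>, i.e. <span class="math-container">$\int e^{2\theta}\sin 3\theta\,d\theta = \frac{e^{2\theta}}{13}(2\sin 3\theta - 3\cos 3\theta) + C$</span>.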
|
3,351,012 | <p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span> seems to be leading me in circles. The integral I get when I use integration by parts, <span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span> just leads me back to <span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span>. I am not sure how to solve it.</p>
<p><strong>My Steps:</strong></p>
<p><span class="math-container">$\int e^{2\theta}\sin(3\theta)d\theta$</span></p>
<p>Let <span class="math-container">$u = \sin(3\theta)$</span> and <span class="math-container">$dv=e^{2\theta}d\theta$</span></p>
<p>Then <span class="math-container">$du = 3\cos(3\theta)d\theta$</span> and <span class="math-container">$v = \frac{1}{2}e^{2\theta}$</span>
<span class="math-container">\begin{align*}
\int e^{2\theta} \sin(3 \theta)d\theta &= \frac{1}{2} e^{2\theta}\sin(3\theta) - \int\frac{1}{2}e^{2\theta}3\cos(3\theta)d\theta\\
&=\frac{1}{2}e^{2\theta}\sin(3\theta) - \frac{3}{2}\int e^{2\theta}\cos(3\theta)d\theta\\
\end{align*}</span></p>
<hr>
<p><span class="math-container">$\int e^{2\theta}\cos(3\theta)d\theta$</span></p>
<p>Let <span class="math-container">$u = \cos(3\theta)$</span> and <span class="math-container">$dv = e^{2\theta}d\theta$</span></p>
<p>Then <span class="math-container">$du = -3\sin(3\theta)d\theta$</span> and <span class="math-container">$v=\frac{1}{2}e^{2\theta}$</span> </p>
<p><span class="math-container">\begin{align*}
\int e^{2\theta}\cos(3\theta)d\theta &= \frac{1}{2}e^{2\theta}\cos(3\theta)-\int \frac{1}{2}e^{2\theta}\cdot\left(-3\sin(3\theta)\right)d\theta\\
&=\frac{1}{2}e^{2\theta}\cos(3\theta)+ \frac{3}{2} \int e^{2\theta}\sin(3\theta)d\theta
\end{align*}</span></p>
<p>So you can see I just keep going in circles. How can I break out of this loop?</p>
| Claude Leibovici | 82,404 | <p>As lab bhattacharjee answered, <em>in case integration by parts is not mandatory</em>, you can make life easier considering that what you need is the imaginary part of
<span class="math-container">$$I=\int e^{2\theta} e^{3i \theta}\,d\theta=\int e^{(2+3i)\theta}\,d\theta=\frac {e^{(2+3i)\theta}}{(2+3i)}=\frac{2-3i}{13}e^{(2+3i)\theta}$$</span>
<span class="math-container">$$I=\frac{3}{13} e^{2 \theta } \sin (3 \theta )+\frac{2}{13} e^{2 \theta } \cos (3
\theta )+i \left(\frac{2}{13} e^{2 \theta } \sin (3 \theta )-\frac{3}{13} e^{2
\theta } \cos (3 \theta )\right)$$</span></p>
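The same computation, sketched in SymPy: take the imaginary part of <span class="math-container">$e^{(2+3i)\theta}/(2+3i)$</span> and compare it with direct integration.

```python
import sympy as sp

t = sp.symbols('theta', real=True)

# Complex antiderivative of e^{(2+3i)θ}; its imaginary part is ∫ e^{2θ} sin(3θ) dθ
Ic = sp.exp((2 + 3 * sp.I) * t) / (2 + 3 * sp.I)
imag_part = sp.im(sp.expand_complex(Ic))

# Direct symbolic integration for comparison
direct = sp.integrate(sp.exp(2 * t) * sp.sin(3 * t), t)
```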
|
2,423,055 | <p>I am sort of baffled by this: the real numbers already contain everything, so why the concept of $\Bbb R^2$? What does it mean? What is its advantage?</p>
| Dave | 334,366 | <p>$\Bbb R$ is the set of real numbers. That is, $\Bbb R=\{x:\text{$x$ is a real number}\}$.</p>
<p>$\Bbb R^2$ is the set of pairs of real numbers. That is, $\Bbb R^2=\Bbb R\times\Bbb R=\{(x,y):\text{$x$ and $y$ are real numbers}\}$.</p>
|
801,680 | <p>Is it true that every point in a space equipped with a Hausdorff topology is closed?</p>
<p>I am assuming this is true but am having a difficult time proving it. Fix $a \in X$ where $X$ is equipped with a Hausdorff topology. If I can show that $X \setminus \{a\}$ is open then I am finished.</p>
<p>I know that $\forall y \in X \setminus \{a\}$ there exists $U$ and $V$ such that $a \in U$ and $y \in V$ and $U \cap V = \emptyset$. </p>
<p>In general, I have been having a difficult time with these types of proofs. The fact that $U$ and $V$ change based on my choice of $y$ causes some confusion. </p>
| Ferra | 83,471 | <p>More generally, points are closed iff the topology is T1, and Hausdorff implies T1. A topology is T1 if for every pair of points $x,y$ there are open sets $U,V$ s.t. $U$ contains $x$ but not $y$ and $V$ contains $y$ but not $x$.</p>
<p>If points are closed, clearly the topology is T1. Conversely, take a point $x$ of your space $X$. You want to show that $X\setminus \{x\}$ is open. For each $y\in X\setminus \{x\}$, pick an open $V_y$ containing $y$ s.t. $x\notin V_y$. Then $X\setminus \{x\}=\bigcup_{y\in X\setminus\{x\}}V_y$ is a union of open sets and is therefore open.</p>
|
1,743,465 | <p>If $a=(1,2,3,4,5)$ is an example of a vector in $\mathbb R^5$, what could be an example of a vector in $\mathbb C^5$? Is it $a=(1,2,3,4,i5)$?</p>
<p>Also, $x=a+ib$ is $2$-dimensional; can a complex number be one-dimensional, like when $a=0$ or $b=0$? But if $b=0$ then it is a real number, so can we say that all real numbers (scalars) are one-dimensional complex numbers?</p>
| Marc | 132,141 | <p>A vector in $\mathbb{C}^5$ is a vector of 5 elements $(x_1,\ldots,x_5)$, where each $x_i\in\mathbb{C}$. </p>
<p>About the dimensionality. This all depends on the base field you choose. It is true that $\mathbb{C}$ is two dimensional over $\mathbb{R}$, thus $\mathbb{C}^5$ is 10 dimensional over $\mathbb{R}$. However, when you take $\mathbb{C}$ as the base field, then $\mathbb{C}^5$ has dimension 5.</p>
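A small NumPy illustration (a sketch) of both points: a concrete vector in $\mathbb{C}^5$, and the fact that its 5 complex coordinates carry 10 real ones, matching dimension 5 over $\mathbb{C}$ but 10 over $\mathbb{R}$.

```python
import numpy as np

# A vector in C^5 (Python writes the imaginary unit as j, so 5j means 5i)
v = np.array([1, 2, 3, 4, 5j])

# Reinterpreting each complex entry as a (real, imag) pair doubles the length:
# 5 coordinates over C correspond to 10 coordinates over R.
as_real = v.view(np.float64)
```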
|
482,030 | <p>I'm reading a proof of the irrationality of <span class="math-container">$\sqrt 2$</span>. In a step it states that <span class="math-container">$2d^2=n^2$</span> implies that <span class="math-container">$n$</span> is multiple of <span class="math-container">$2$</span>. How?</p>
| DonAntonio | 31,254 | <p>$\;2\;$ is a prime and divides the left side in $\,2d^2=n^2\;$ ,
so by the fundamental theorem of arithmetic it also divides the right hand side...</p>
|
482,030 | <p>I'm reading a proof of the irrationality of <span class="math-container">$\sqrt 2$</span>. In a step it states that <span class="math-container">$2d^2=n^2$</span> implies that <span class="math-container">$n$</span> is multiple of <span class="math-container">$2$</span>. How?</p>
| Glen O | 67,842 | <p>Since $d$ is an integer, $2d^2$ must be even. For any odd integer $n$, $n^2$ must also be odd. Therefore, $n$ must be even, and thus a multiple of 2.</p>
|
482,030 | <p>I'm reading a proof of the irrationality of <span class="math-container">$\sqrt 2$</span>. In a step it states that <span class="math-container">$2d^2=n^2$</span> implies that <span class="math-container">$n$</span> is multiple of <span class="math-container">$2$</span>. How?</p>
| Benjamin Dickman | 37,122 | <p>If $n^2 = 2d^2$, then $n^2$ is a multiple of $2$ hence even.</p>
<p>We now prove: $n^2$ even implies $n$ even.</p>
<p><strong>Proof:</strong> We tackle the contrapositive, i.e., $n$ odd implies $n^2$ odd.</p>
<p>Since $n$ is odd, we can write $n = 2k+1$ for some integer $k$.</p>
<p>Then $n^2 = (2k+1)^2 = 4k^2 + 4k + 1 = 2(2k^2 + 2k) + 1$, which is an odd number.</p>
<p>(It's of the form $2m + 1$ for an integer $m = 2k^2 + 2k$.)</p>
<p>This completes the proof, and the contrapositive is the statement you asked about. <strong>QED</strong></p>
|
1,573,947 | <p>I'm trying to see the relationship between the sample variance equation </p>
<p>$\sum(X_i- \bar X)^2/(n-1)$ and the variance estimate, $\bar X(1-\bar X),$ in case of binary samples. </p>
<p>I wonder if the outputs are the same, or if not, what is the relationship between the two??</p>
<p>I'm trying to prove their relationship but it's quite challenging to me.. </p>
<p>Please help!</p>
<p><a href="https://i.stack.imgur.com/xYZAH.jpg" rel="nofollow noreferrer">Sigma(Xi-Xbar)/(n-1)</a>
<a href="https://i.stack.imgur.com/EMi0G.jpg" rel="nofollow noreferrer">Xbar(1-Xbar)</a></p>
| BruceET | 221,800 | <p>I suppose your question is whether the two formulas give the
same answer for binary data. Here is an example to illustrate
that they are almost the same, but not exactly.</p>
<p>Suppose I have a sample of a thousand zeros and ones in which
there are 283 ones. Then $\bar X = 283/1000 = 0.283.$ Thus,
$\bar X(1-\bar X) = 0.283(1 - 0.283) = 0.202911.$</p>
<p>An alternate general formula for the sample variance
of values $X_i$ is</p>
<p>$$S^2 = \frac{\sum_{i=1}^n X_i^2 - n \bar X^2}{n-1}.$$</p>
<p>In a binary sample $\sum_{i=1}^n X_i^2 = \sum_{i=1}^n X_i$,
because $0^2 = 0$ and $1^2 = 1.$ </p>
<p>Thus, the general formula gives
$S^2 = \frac{283 - 1000(.283)^2}{999} = 0.2031141.$
If (as in the Comment by @A.S) the denominator were $n = 1000$ instead of $n-1=999,$ this
would simplify to $$S^2 = 0.283 - 0.283^2 = 0.283(1 - 0.283) = \bar X(1- \bar X).$$</p>
<p>The formula for the population variance is often written with the population size $n$ in the denominator.</p>
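A NumPy version of the example above ($n = 1000$, 283 ones) showing the exact relationship: with denominator $n$ the variance equals $\bar X(1-\bar X)$ exactly, while the $n-1$ version differs by the factor $n/(n-1)$.

```python
import numpy as np

x = np.zeros(1000)
x[:283] = 1.0                 # 283 ones, as in the example
p = x.mean()                  # sample proportion, 0.283

pop_var = x.var()             # denominator n     -> exactly p*(1-p)
samp_var = x.var(ddof=1)      # denominator n - 1 -> (n/(n-1)) * p*(1-p)
```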
|
1,901,170 | <blockquote>
<p>Consider the region $D$ bounded by the positive $y$-axis, the line $y=8$ and the line $y=x^3$. Evaluate the following integral between $x=0$ and $x=2$.
$$\iint_D \frac{x^3y^2}4\ dx\ dy$$</p>
</blockquote>
<p>What are the limits and what is the solution? I've asked people and they all give conflicting ideas!</p>
| Andres Mejia | 297,998 | <p>This is not formal, but emphasizes a geometric intuition: <a href="http://rads.stackoverflow.com/amzn/click/0393925161" rel="nofollow">Div, Grad, Curl</a>.</p>
<p>Alternatively, there is a somewhat standard book, depending on what level of vector calculus you are looking for:
<a href="http://rads.stackoverflow.com/amzn/click/1429215089" rel="nofollow">vector calculus</a></p>
|
1,558,210 | <p>In a linear algebra book I'm reading now, there was the following exercise:</p>
<blockquote>
<p>Let <span class="math-container">$W\subseteq V$</span> be a subspace of vector space <span class="math-container">$V$</span>. Do there always exist two subspaces <span class="math-container">$W_1,W_2\subseteq V$</span> such that:</p>
<ol>
<li><p><span class="math-container">$W_1+W_2=V$</span></p>
</li>
<li><p><span class="math-container">$W_1\cap W_2=W$</span></p>
</li>
<li><p><span class="math-container">$W_1\neq V,W_2\neq V$</span></p>
</li>
</ol>
</blockquote>
<p>The answer is clearly no if we allow <span class="math-container">$W=V$</span>, but even without it we can find counterexamples, e.g. <span class="math-container">$W=\Bbb R\times\{0\},V=\Bbb R^2$</span>.</p>
<p>Critical property of the example above is that there are no "intermediate" spaces, i.e. if <span class="math-container">$W\subseteq W'\subseteq V$</span>, then <span class="math-container">$W'=W$</span> or <span class="math-container">$W'=V$</span>. I started wondering whether this is an equivalent condition to failure of condition in problem, and I found out it is the case. Below I present a proof of this fact, which however makes heavy use of the axiom of choice (in existence of bases).</p>
<p>My question now is:</p>
<blockquote>
<p>Can the equivalence which I state and prove below be shown without any appeal to axiom of choice?</p>
</blockquote>
<hr />
<p>For <span class="math-container">$W$</span> subspace of <span class="math-container">$V$</span> the following are equivalent:</p>
<ol>
<li><p>There exist subspaces <span class="math-container">$W_1,W_2$</span> which satisfy 1-3 above</p>
</li>
<li><p>There exists a proper subspace of <span class="math-container">$V$</span> properly containing <span class="math-container">$W$</span>.</p>
</li>
</ol>
<p>Proof: 1 <span class="math-container">$\Rightarrow$</span> 2: I claim <span class="math-container">$W_1$</span> is such a proper subspace. Clearly <span class="math-container">$W\subseteq W_1\subsetneq V$</span>. If <span class="math-container">$W_1=W$</span>, then <span class="math-container">$V=W_1+W_2=W+W_2=W_2$</span> as <span class="math-container">$W\subseteq W_2$</span>, but this is a contradiction.</p>
<p>2 <span class="math-container">$\Rightarrow$</span> 1: Let <span class="math-container">$W\subsetneq W_1\subsetneq V$</span>. Let <span class="math-container">$B_1$</span> be any basis of <span class="math-container">$W$</span>, <span class="math-container">$B_2$</span> any basis of <span class="math-container">$W_1$</span> containing <span class="math-container">$B_1$</span> and <span class="math-container">$B_3$</span> any basis of <span class="math-container">$V$</span> containing <span class="math-container">$B_2$</span>. Define <span class="math-container">$W_2=\text{span}((B_3\setminus B_2)\cup B_1)$</span>. It's straightforward to see that <span class="math-container">$W_1,W_2$</span> satisfy the properties we want them to.</p>
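The construction in the proof of 2 <span class="math-container">$\Rightarrow$</span> 1 can be checked numerically in a small example (a sketch with <span class="math-container">$V=\Bbb R^3$</span>, <span class="math-container">$W=\operatorname{span}(e_1)$</span>, <span class="math-container">$W_1=\operatorname{span}(e_1,e_2)$</span>), using the rank identity <span class="math-container">$\dim(W_1\cap W_2)=\dim W_1+\dim W_2-\dim(W_1+W_2)$</span>:

```python
import numpy as np

rank = np.linalg.matrix_rank

B1 = np.array([[1, 0, 0]])                 # basis of W
W1 = np.array([[1, 0, 0], [0, 1, 0]])      # basis of W1, extending B1
W2 = np.array([[0, 0, 1], [1, 0, 0]])      # span((B3 \ B2) ∪ B1)

dim_sum = rank(np.vstack([W1, W2]))        # dim(W1 + W2)
dim_cap = rank(W1) + rank(W2) - dim_sum    # dim(W1 ∩ W2)

# W ⊆ W1 and W ⊆ W2: appending B1 does not raise either rank
w_in_w1 = rank(np.vstack([W1, B1])) == rank(W1)
w_in_w2 = rank(np.vstack([W2, B1])) == rank(W2)
```

Here `dim_sum == 3` (so <span class="math-container">$W_1+W_2=V$</span>) and `dim_cap == 1 == dim W`; since <span class="math-container">$W$</span> sits inside the intersection and the dimensions match, <span class="math-container">$W_1\cap W_2=W$</span>.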
| Daron | 53,993 | <p>Your lemma is true when suitable interpreted. </p>
<p>Of course you need $\phi$ to be a continuous function on $[a,b]$. </p>
<p>Then you define $\displaystyle \lim _{f(x) \to f(t)} \phi(x) = L$ to mean that for any $\epsilon >0$ there is $\delta > 0$ such that $|f(x)-f(t)|< \delta \implies |\phi(x)-L|<\epsilon$.</p>
<p>The fact that this is true relies on $[a,b]$ being closed and bounded, in other words compact. One consequence of this is that the inverse function of $f$ is also continuous. In fact you could replace $[a,b]$ with any closed and bounded subset and the lemma would likely still hold.</p>
|
3,654,315 | <blockquote>
<p>Let <span class="math-container">$m$</span> be an odd positive integer. Prove that</p>
<p><span class="math-container">$$ \dfrac{ \sin (mx) }{\sin x } = (-4)^{\frac{m-1}{2}} \prod_{1 \leq j
\leq \frac{(m-1)}{2} } \left( \sin^2 x - \sin^2 \left( \dfrac{ 2 \pi
j }{m } \right) \right) $$</span></p>
</blockquote>
<h2>Attempt at a proof</h2>
<p>My idea is to use induction on <span class="math-container">$m$</span>. The base case is <span class="math-container">$m=3$</span> and we obtain</p>
<p><span class="math-container">$$ \dfrac{ \sin (3x) }{\sin x } = (-4) ( \sin^2 x - \sin^2 (2 \pi /3 ) ) $$</span></p>
<p>and this holds if one uses the well known <span class="math-container">$\sin (3x) = 3 \sin x - 4 \sin^3 x $</span> identity.</p>
<p>Now, if we assume the result is true for <span class="math-container">$m = 2k-1$</span>, then we prove it holds for <span class="math-container">$m=2k+1$</span>. We have</p>
<p><span class="math-container">$$ \dfrac{ \sin (2k + 1) x }{\sin x } = \dfrac{ \sin [(2k-1 + 2 )x] }{\sin x } = \dfrac{ \sin[(2k-1)x ] \cos (2x) }{\sin x } + \dfrac{ \cos [(2k-1) x ] \sin 2x }{\sin x } $$</span></p>
<p>And this is equivalent to</p>
<p><span class="math-container">$$ \cos(2x) \cdot (-4)^{k-1} \prod_{1 \leq j \leq k-1 }\left( \sin^2 x - \sin^2 \left( \dfrac{ 2 \pi j }{2k-1} \right) \right) + 2 \cos [(2k-1) x ] \cos x $$</span></p>
<p>Here I don't see any way to simplify it further. Am I on the right track?</p>
| dezdichado | 152,744 | <p>Too long for a comment: Look up <a href="https://en.wikipedia.org/wiki/Chebyshev_polynomials" rel="nofollow noreferrer">Chebyshev polynomials of the second kind</a>. They are literally what you are dealing with:
<span class="math-container">$$U_n(\cos x) = \dfrac{\sin((n+1)x)}{\sin x}.$$</span></p>
<p>Your attempt at induction basically reduces it to an equivalent problem that uses the first kind of Chebyshev polynomials, so I would not be fixated on the inductive approach. </p>
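A quick numeric check of this identity via SciPy's Chebyshev-U evaluator (a sketch; `eval_chebyu(n, x)` evaluates <span class="math-container">$U_n$</span>), here for the <span class="math-container">$n=4$</span>, i.e. <span class="math-container">$m=5$</span>, case:

```python
import numpy as np
from scipy.special import eval_chebyu

x = np.linspace(0.1, 3.0, 50)        # stay away from sin(x) = 0
lhs = eval_chebyu(4, np.cos(x))      # U_4(cos x)
rhs = np.sin(5 * x) / np.sin(x)      # the quotient from the identity
```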
|
3,654,315 | <blockquote>
<p>Let <span class="math-container">$m$</span> be an odd positive integer. Prove that</p>
<p><span class="math-container">$$ \dfrac{ \sin (mx) }{\sin x } = (-4)^{\frac{m-1}{2}} \prod_{1 \leq j
\leq \frac{(m-1)}{2} } \left( \sin^2 x - \sin^2 \left( \dfrac{ 2 \pi
j }{m } \right) \right) $$</span></p>
</blockquote>
<h2>Attempt at a proof</h2>
<p>My idea is to use induction on <span class="math-container">$m$</span>. The base case is <span class="math-container">$m=3$</span> and we obtain</p>
<p><span class="math-container">$$ \dfrac{ \sin (3x) }{\sin x } = (-4) ( \sin^2 x - \sin^2 (2 \pi /3 ) ) $$</span></p>
<p>and this holds if one uses the well known <span class="math-container">$\sin (3x) = 3 \sin x - 4 \sin^3 x $</span> identity.</p>
<p>Now, if we assume the result is true for <span class="math-container">$m = 2k-1$</span>, then we prove it holds for <span class="math-container">$m=2k+1$</span>. We have</p>
<p><span class="math-container">$$ \dfrac{ \sin (2k + 1) x }{\sin x } = \dfrac{ \sin [(2k-1 + 2 )x] }{\sin x } = \dfrac{ \sin[(2k-1)x ] \cos (2x) }{\sin x } + \dfrac{ \cos [(2k-1) x ] \sin 2x }{\sin x } $$</span></p>
<p>And this is equivalent to</p>
<p><span class="math-container">$$ \cos(2x) \cdot (-4)^{k-1} \prod_{1 \leq j \leq k-1 }\left( \sin^2 x - \sin^2 \left( \dfrac{ 2 \pi j }{2k-1} \right) \right) + 2 \cos [(2k-1) x ] \cos x $$</span></p>
<p>Here I don't see any way to simplify it further. Am I on the right track?</p>
| epi163sqrt | 132,007 | <p>This answer can be seen as supplement of @Conrads answer providing some more details.</p>
<blockquote>
<p>We start with the right-hand side of OPs identity. Letting <span class="math-container">$m=2k+1$</span> we obtain:
<span class="math-container">\begin{align*}
\color{blue}{(-4)^k}&\color{blue}{\prod_{j=1}^k\left(\sin^2(x)-\sin^2\left(\frac{2j\pi}{2k+1}\right)\right)}\\
&=(-4)^k\prod_{j=1}^k\left[\sin\left(x+\frac{2j\pi}{2k+1}\right)\sin\left(x-\frac{2j\pi}{2k+1}\right)\right]\tag{1}\\
&=4^k\prod_{j=1}^k\left[\sin\left(x+\frac{2j\pi}{2k+1}\right)\sin\left(\left(x-\frac{2j\pi}{2k+1}\right)-\pi\right)\right]\tag{2}\\
&=4^k\left(\prod_{j=1}^k\sin\left(x+\frac{2j\pi}{2k+1}\right)\right)\left(\prod_{j=1}^k\sin\left(x+\frac{(2k+1-2j)\pi}{2k+1}\right)\right)\tag{3}\\
&=4^k\left(\prod_{j=1}^k\sin\left(x+\frac{2j\pi}{2k+1}\right)\right)\left(\prod_{j=1}^k\sin\left(x+\frac{(2j-1)\pi}{2k+1}\right)\right)\tag{4}\\
&=4^k\left(\prod_{{j=1}\atop{j\ even}}^{2k}\sin\left(x+\frac{j\pi}{2k+1}\right)\right)\left(\prod_{{j=1}\atop{j\ odd}}^{2k}\sin\left(x+\frac{j\pi}{2k+1}\right)\right)\tag{5}\\
&\,\,\color{blue}{=4^k\prod_{j=1}^{2k}\sin\left(x+\frac{j\pi }{2k+1}\right)}\tag{6}
\end{align*}</span></p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (1) we recall the trigonometric addition formulas
<span class="math-container">\begin{align*}
\sin(x+y)&=\sin(x)\cos(y)+\cos(x)\sin(y)\\
\sin(x-y)&=\sin(x)\cos(y)-\cos(x)\sin(y)
\end{align*}</span>
and get
<span class="math-container">\begin{align*}
\sin&(x+y)\sin(x-y)\\
&=\left(\sin(x)\cos(y)+\cos(x)\sin(y)\right)\left(\sin(x)\cos(y)-\cos(x)\sin(y)\right)\\
&=\sin^2(x)\cos^2(y)-\cos^2(x)\sin^2(y)\\
&=\sin^2(x)\left(1-\sin^2(y)\right)-\left(1-\sin^2(x)\right)\sin^2(y)\\
&=\sin^2(x)-\sin^2(y)
\end{align*}</span></p>
</li>
<li><p>In (2) we use the identity <span class="math-container">$\sin(x)=\sin(\pi -x)$</span> and factor out <span class="math-container">$(-1)^k$</span> by using <span class="math-container">$\sin(x)=-\sin(-x)$</span>.</p>
</li>
<li><p>In (3) we use <span class="math-container">$\sin(x)=\sin(x+2\pi)$</span> and we split the product as preparation for the next steps.</p>
</li>
<li><p>In (4) we change the order of the multiplication in the right-hand product <span class="math-container">$j\to k-j+1$</span>.</p>
</li>
<li><p>In (5) we do not change anything. We just write the index region somewhat more conveniently to better see the next step, where the products can be merged.</p>
</li>
</ul>
<blockquote>
<p>To evaluate the product in (6), we multiply it by the <span class="math-container">$j=0$</span> factor <span class="math-container">$\sin(x)$</span> (extending the product to <span class="math-container">$j=0,\dots,2k$</span>; dividing by <span class="math-container">$\sin(x)$</span> again at the end gives the claim) and recall Euler's formula <span class="math-container">$e^{ix}=\cos(x)+i\sin(x)$</span>. We obtain
<span class="math-container">\begin{align*}
\color{blue}{4^k}&\color{blue}{\prod_{j=0}^{2k}\sin\left(x+\frac{j\pi }{2k+1}\right)}\\
&=4^k\prod_{j=0}^{2k}\left[\frac{1}{2i}\left(e^{i\left(x+\frac{j\pi }{2k+1}\right)}-e^{-i\left(x+\frac{j\pi }{2k+1}\right)}\right)\right]\tag{7}\\
&=\frac{(-1)^{k+1}}{2i}\prod_{j=0}^{2k}\left[e^{-i\left(x+\frac{j\pi}{2k+1}\right)}\left(1-e^{2i\left(x+\frac{j\pi}{2k+1}\right)}\right)\right]\tag{8}\\
&=\frac{(-1)^{k+1}}{2i}e^{-i(2k+1)x}e^{-\frac{i\pi}{2k+1}\sum_{j=0}^{2k}j}
\prod_{j=0}^{2k}\left(1-e^{2i\left(x+\frac{j\pi}{2k+1}\right)}\right)\tag{9}\\
&=\frac{(-1)^{k+1}}{2i}e^{-i(2k+1)x}e^{-ik\pi}
\prod_{j=0}^{2k}\left(1-\left(e^{\frac{2\pi i}{2k+1}}\right)^j e^{2ix}\right)\tag{10}\\
&=\frac{(-1)}{2i}e^{-i(2k+1)x}
\left(1-\left(e^{2ix}\right)^{2k+1} \right)\tag{11}\\
&=\frac{1}{2i}\left(e^{(2k+1)ix}-e^{-(2k+1)ix}\right)\\
&\,\,\color{blue}{=\sin((2k+1)x)}
\end{align*}</span>
and the claim follows.</p>
</blockquote>
<p><em>Comment:</em></p>
<ul>
<li><p>In (7) we use the identity <span class="math-container">$\sin(x)=\frac{1}{2i}\left(e^{ix}-e^{-ix}\right)$</span>.</p>
</li>
<li><p>In (8) we factor out <span class="math-container">$\left(\frac{1}{2i}\right)^{2k+1}$</span> from the product and within the product <span class="math-container">$e^{-i\left(x+\frac{j\pi}{2k+1}\right)}$</span>.</p>
</li>
<li><p>In (9) we factor out some more terms which do not depend on the index <span class="math-container">$j$</span>.</p>
</li>
<li><p>In (10) we use the summation formula <span class="math-container">$\sum_{j=1}^{2k}j = \frac{1}{2}(2k)(2k+1)$</span> , the identity <span class="math-container">$e^{ik\pi}=(-1)^k$</span> and we write the factor in the product in the form
<span class="math-container">\begin{align*}
1-\omega ^j z
\end{align*}</span>
with <span class="math-container">$\omega=e^{\frac{2\pi i}{2k+1}}$</span> the <span class="math-container">$(2k+1)$</span>-st <em><a href="https://en.wikipedia.org/wiki/Root_of_unity#General_definition" rel="nofollow noreferrer">root of unity</a></em>.</p>
</li>
<li><p>In (11) we use the representation with <span class="math-container">$\omega$</span> the root of unity and <span class="math-container">$z=e^{2ix}$</span>.
<span class="math-container">\begin{align*}
\prod_{j=0}^{2k}\left(1-z\omega^j\right)=\left(1+z+\cdots+z^{2k}\right)(1-z)=1-z^{2k+1}
\end{align*}</span></p>
</li>
</ul>
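The original identity can be spot-checked numerically for several odd <span class="math-container">$m$</span> (a sketch in plain Python):

```python
import math

def lhs(m, x):
    """sin(mx)/sin(x), the left-hand side of the identity."""
    return math.sin(m * x) / math.sin(x)

def rhs(m, x):
    """(-4)^((m-1)/2) * prod_j (sin^2 x - sin^2(2*pi*j/m)), the right-hand side."""
    k = (m - 1) // 2
    prod = 1.0
    for j in range(1, k + 1):
        prod *= math.sin(x) ** 2 - math.sin(2 * math.pi * j / m) ** 2
    return (-4.0) ** k * prod

max_err = max(
    abs(lhs(m, x) - rhs(m, x))
    for m in (3, 5, 7, 9)
    for x in (0.3, 0.7, 1.1, 2.0)
)
```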
|
893,959 | <p>I have a series of problems in inequalities that I cannot solve; please help me if you can.</p>
<p>problem 1 :$a,b,c \geq 0$ such that $\sqrt{a^2+b^2+c^2}=\sqrt[3]{ab+bc+ca} $
prove that $a^2b+b^2c+c^2a+abc \le \frac{4}{27}$</p>
<p>problem 2 : $a,b,c\geq0$ and $a+b+c = 1$ prove that </p>
<p>1, $ \sqrt{a+\frac{(b-c)^2}{4}}+\sqrt{b}+\sqrt{c} \leq \sqrt{3} $</p>
<p>2, $\sqrt{a+\frac{(b-c)^2}{4}} + \sqrt{b+\frac{(a-c)^2}{4}} + \sqrt{c+\frac{(a-b)^2}{4}} \leq2$</p>
<p>problem 3 : $a,b,c \geq0$ and $a+b+c=1$. Find the maximum value of
$M=\frac{1+a^2}{1+b^2}+\frac{1+b^2}{1+c^2}+\frac{1+c^2}{1+a^2}$</p>
| Ahaan S. Rungta | 85,039 | <p><em>Sketch for #3</em></p>
<p>We can use the method of Lagrange multipliers. Let $ f(a,b,c) = a + b + c $. This is our constraint function, where the constraint is $ f(a,b,c) = 1 $. Let $$ g(a,b,c) = \dfrac {1 + a^2}{1 + b^2} + \frac {1 + b^2}{1 + c^2} + \frac {1+c^2}{1 + a^2}. $$Then, we get $$ \nabla f = \left< 1, 1, 1 \right> $$ and $$ \nabla g = \left< \frac {2a \cdot \left( a^4 + 2a^2 - b^2c^2 - b^2 - c^2 \right)}{\left( a^2 + 1 \right)^2 \cdot \left( b^2 + 1 \right)}, - \frac {2b \cdot \left( a^2 + c^2 + a^2 - b^4 - 2b^2 + c^2 \right)}{\left( b^2 + 1 \right)^2 \cdot \left( c^2 + 1 \right)}, - \dfrac {2c \cdot \left( a^2 b^2 + a^2 + b^2 - c^4 - 2c^2 \right)}{\left( c^2 + 1 \right)^2 \cdot \left( a^2 + 1 \right)} \right>. $$Setting $ \nabla f \propto \nabla g $ gives us $$ \frac {2a \cdot \left( a^4 + 2a^2 - b^2c^2 - b^2 - c^2 \right)}{\left( a^2 + 1 \right)^2 \cdot \left( b^2 + 1 \right)} = - \frac {2b \cdot \left( a^2 + c^2 + a^2 - b^4 - 2b^2 + c^2 \right)}{\left( b^2 + 1 \right)^2 \cdot \left( c^2 + 1 \right)} = - \dfrac {2c \cdot \left( a^2 b^2 + a^2 + b^2 - c^4 - 2c^2 \right)}{\left( c^2 + 1 \right)^2 \cdot \left( a^2 + 1 \right)}. $$</p>
<p>Now we can solve and plug back into $ a + b + c = 1 $ and finish. </p>
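As a complement to the symbolic approach, a brute-force grid search over the simplex (a sketch, step size 1/100) suggests the maximum of $M$ is $7/2$, attained at a vertex such as $(a,b,c)=(1,0,0)$:

```python
def M(a, b, c):
    return (1 + a*a) / (1 + b*b) + (1 + b*b) / (1 + c*c) + (1 + c*c) / (1 + a*a)

n = 100
best, arg = -1.0, None
for i in range(n + 1):
    for j in range(n + 1 - i):
        a, b = i / n, j / n
        c = 1.0 - a - b                 # stay on the simplex a + b + c = 1
        v = M(a, b, c)
        if v > best:
            best, arg = v, (a, b, c)
```

This is consistent with a hand argument: each fraction lies in $[1/2, 2]$ and the three fractions multiply to $1$, which caps the sum at $7/2$.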
|
1,657,720 | <blockquote>
<p>Two tennis players, $A$ and $B$, are rendered equal in a game. It
takes two point lead for the winner. A player who has one point is
said to benefit. Assuming that $A$ has a probability $p$ of winning
each point and $B$ a probability $1-p$, independently of all other
points, determine: $(a)$ the probability for $A$ to be declared the
winner; $(b)$ the expected number of times that $A$ will benefit
before the end of the game.</p>
</blockquote>
<p>I have to prepare for an exam in a Stochastic Processes course, and part of the material bothers me enormously; essentially it is problem-solving by the <strong>conditioning method</strong>. Could someone solve my problem and clearly explain each step? I tried, but I cannot get very far in the problem. I should also say that my first probability course was more than three years ago; this may be part of why I have trouble with this kind of question.</p>
<p>Thanks!</p>
| carmichael561 | 314,708 | <p>After two points, either $A$ has won, $B$ has won, or the players are back where they started. These three events have probabilities $p^2,(1-p)^2$, and $2p(1-p)$, respectively.</p>
<p>Let $W$ be the event that $A$ wins, then from the above we have
$$ \mathbb{P}(W)=p^2+2p(1-p)\mathbb{P}(W)$$
hence
$$ \mathbb{P}(W)=\frac{p^2}{1-2p(1-p)}$$</p>
<p>A similar argument should work for the expectation.</p>
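A seeded Monte Carlo simulation (a sketch) agrees with this formula; for example, with $p = 0.6$ the exact value is $0.36/0.52 \approx 0.692$.

```python
import random

def a_wins_from_deuce(p, rng):
    """Play points from the tied position until someone leads by two."""
    lead = 0
    while abs(lead) < 2:
        lead += 1 if rng.random() < p else -1
    return lead == 2          # True if A ends two points ahead

p = 0.6
rng = random.Random(0)        # fixed seed for reproducibility
trials = 20_000
est = sum(a_wins_from_deuce(p, rng) for _ in range(trials)) / trials
exact = p**2 / (1 - 2 * p * (1 - p))
```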
|
1,657,720 | <blockquote>
<p>Two tennis players, $A$ and $B$, are tied in a game. A two-point lead is
required to win. A player who is one point ahead is said to benefit.
Assuming that $A$ has a probability $p$ of winning each point and $B$ a
probability $1-p$, independently of all other points, determine: $(a)$ the
probability that $A$ is declared the winner; $(b)$ the expected number of
times that $A$ will benefit before the end of the game.</p>
</blockquote>
<p>I have to prepare for an exam in a Stochastic Processes course, and part of the material bothers me enormously; essentially it is problem-solving by the <strong>conditioning method</strong>. Could someone solve my problem and clearly explain each step? I tried, but I cannot get very far in the problem. I should also say that my first probability course was more than three years ago; this may be part of why I have trouble with this kind of question.</p>
<p>Thanks!</p>
| chandu1729 | 64,736 | <p>There are 5 possible states for the Markov Chain.</p>
<p>State 1: A and B are equal </p>
<p>State 2 : A benefit</p>
<p>State 3 : B benefit</p>
<p>State 4 : A Win</p>
<p>State 5 : B Win</p>
<p>Let the probability that A wins starting from the state i be $a_i$. It's clear that $a_4 = 1$ and $a_5 = 0$. We have the following equations for the others (Using conditioning)</p>
<p>$$a_2 = a_1 (1-p) + a_4 p $$
$$a_3 = a_1 p + a_5 (1-p) $$
$$a_1 = a_2p + a_3 (1-p) $$</p>
<p>Solving the above equations, we get $a_1 = \frac{p^2}{2p^2-2p+1}$ (This is the required probability as the game begins in State 1)</p>
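<p>The same system can be solved numerically. Below is a small Python sketch (not part of the original answer) that runs Gaussian elimination on the three equations, with $a_4=1$ and $a_5=0$ substituted in, and compares against the closed form:</p>

```python
def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system (stdlib only)
    n = 3
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in reversed(range(n)):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

p = 0.7
# Unknowns (a1, a2, a3); a4 = 1 and a5 = 0 are known:
#   a1 - p*a2 - (1-p)*a3 = 0      (from a1 = p*a2 + (1-p)*a3)
#   -(1-p)*a1 + a2       = p      (from a2 = (1-p)*a1 + p*a4)
#   -p*a1 + a3           = 0      (from a3 = p*a1 + (1-p)*a5)
A = [[1.0, -p, -(1 - p)],
     [-(1 - p), 1.0, 0.0],
     [-p, 0.0, 1.0]]
b = [0.0, p, 0.0]
a1, a2, a3 = solve3(A, b)
closed_form = p**2 / (2 * p**2 - 2 * p + 1)
print(a1, closed_form)
```
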
|
11,435 | <p>I know what they are and how to define them. They have to serve some purpose - else why include them? But I never used them and all examples on this site and the docs never show a practical use of <code>UpValues</code>.</p>
<p>Can someone give me an example to see when they are actually needed?</p>
| Leonid Shifrin | 81 | <p>Oh yes. <code>UpValues</code> are used quite a bit. There are several common uses, and you may have a look at <a href="https://mathematica.stackexchange.com/questions/96/what-is-the-distinction-between-downvalues-upvalues-subvalues-and-ownvalues/">this</a> and especially <a href="https://stackoverflow.com/questions/6648260/what-does-upvalue-mean-in-mathematica-and-when-to-use-them">this</a> question and answers therein to see some sample uses.</p>
<p>As for practical uses: I will just mention a couple of examples for what I consider to be the main practical use: overloading functions (system or user-defined) on custom data types, so that such redefinitions are local (in the sense that they are attached to the heads representing the new data type, rather than to the functions being overloaded).</p>
<p>One example is my implementation of the <a href="https://mathematica.stackexchange.com/questions/36/file-backed-lists-variables-for-handling-large-data/209#209">large data framework</a>, where it would not be an exaggeration to say that <code>UpValues</code> were a crucial element of it. I routinely and automatically create <em>thousands</em> of them during the operation of the framework, and they serve as a powerful encapsulation mechanism, which allowed me to use OOP-style encapsulation very effectively. I will reproduce here the main function using them:</p>
<pre><code>ClearAll[definePartAPI];
definePartAPI[s_Symbol, part_Integer, dir_String] :=
LetL[{sym = Unique[], hash = Hash[sym],
fname = $fileNameFunction[dir, hash]
},
sym := sym = $uncompressFunction@$importFunction[fname];
s /: HoldPattern[Part[s, part]] := sym;
(* Release memory and renew for next reuse *)
s /: releasePart[s, part] :=
Replace[Hold[$uncompressFunction@$importFunction[fname]],
Hold[def_] :> (ClearAll[sym]; sym := sym = def)];
(* Check if on disk *)
s /: savedOnDisk[s, part] := FileExistsQ[fname];
(* remove from disk *)
s /: removePartOnDisk[s, part] := DeleteFile[fname];
(* save new on disk *)
s /: savePartOnDisk[s, part, value_] :=
$exportFunction[fname, $compressFunction @value];
(* Set a given part to a new value *)
If[! TrueQ[setPartDefined[s]],
s /: setPart[s, pt_, value_] :=
Module[{},
savePartOnDisk[s, pt, value];
releasePart[s, pt];
value
];
s /: setPartDefined[s] = True;
];
(* Release the API for this part. Irreversible *)
s /: releaseAPI[s, part] := Remove[sym];
];
</code></pre>
<p>What this does is to define a certain API for a symbol which represents the list in the framework. For lists of thousands parts, many thousands such <code>UpValues</code> are created. They are saved (serialized) when the list representation is saved for a later use, and read back in when this representation is loaded from disk. This is by far the most massive use of <code>UpValues</code> in my practice at least, and <code>UpValues</code> played a major role in my ability to structure the code this way, providing necessary means for encapsulation, instantiation and separation of interface and implementation. You can find more details in the linked discussion of the framework.</p>
<p>Another, somewhat similar, example is that of an implementation of mutable data structures in Mathematica. The way I do it is described <a href="https://stackoverflow.com/questions/6097071/tree-data-structure-in-mathematica/6097444#6097444">here</a>, and I will again use some code from that post to illustrate the point:</p>
<pre><code>Module[{parent, children, value},
children[_] := {};
value[_] := Null;
node /: new[node[]] := node[Unique[]];
node /: node[tag_].getChildren[] := children[tag];
node /: node[tag_].addChild[child_node, index_] :=
children[tag] = Insert[children[tag], child, index];
node /: node[tag_].removeChild[index_] :=
children[tag] = Delete[children[tag], index];
node /: node[tag_].getChild[index_] := children[tag][[index]];
node /: node[tag_].getValue[] := value[tag];
node /: node[tag_].setValue[val_] := value[tag] = val;
];
</code></pre>
<p>The <code>node</code> represents a new data type (tree node), and using <code>UpValues</code> allowed me to use the familiar dotted notation without hard overloading of <code>Dot</code>, and without giving the API symbols (<code>addChild</code>, <code>getChild</code>, etc) any meaning, so they can be reused by other data types.</p>
<p>There are many more uses for <code>UpValues</code>, but what I want to stress is that they are a <em>very</em> practical tool. </p>
|
1,606,023 | <blockquote>
<p>Given that $U_{n+1} = U_n-U_n^2$ and $0 < U_0 < 1$, show that $0 < U_n \leq 1/4$ for all $n \in \mathbb{N}^*$.</p>
</blockquote>
<p>Given that $S_n=\sum_{k=0}^{n} \frac{1}{1-U_k}$,</p>
<p>calculate $S_n$ as a function of $n$.</p>
<p>I see that $\frac{1}{1-U_{n}}=\frac{1}{U_{n+1}}-\frac{1}{U_n}$</p>
<p>How can I calculate the sum ?</p>
| Kim Jong Un | 136,641 | <p>Observe that $\frac{1}{4}-U_{n+1}=(U_n-\frac{1}{2})^2>0$. The inequality is strict because by the induction hypothesis $U_n\in(0,\frac{1}{4}]$ and so $U_n\neq\frac{1}{2}$. Similarly, $U_{n+1}=U_n(1-U_n)>0$ also because $U_n\in(0,\frac{1}{4}]$.</p>
<p>The base case is identical:
$$
\frac{1}{4}-U_1=(U_0-\frac{1}{2})^2\geq 0,\quad U_1=U_0(1-U_0)>0.
$$
For $U_n>U_{n+1}$ observe that $U_{n}-U_{n+1}=U_n^2>0$, for all $n\geq 1$. </p>
|
626,920 | <p>$a_n=\sum_{k=1}^{n} \frac{1}{n+k}=\frac{1}{n+1}+\frac{1}{n+2}+\dots+\frac{1}{2n}$</p>
<p>How to find $\lim a_n$?</p>
| Claude Leibovici | 82,404 | <p><strong>HINT</strong> </p>
<p>As Daniel Fisher suggested, rewrite $\dfrac 1{n+k}$ as $\dfrac {1}{n} \dfrac {1}{1+\frac k n}$ and you are done through integration. I am sure you can take it from here.</p>
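<p>A quick numerical check (Python, not part of the original answer) shows the Riemann sums converging to $\ln 2$:</p>

```python
import math

def a(n):
    # a_n = sum_{k=1}^{n} 1/(n+k), a Riemann sum for the integral of 1/(1+x) on [0,1]
    return sum(1.0 / (n + k) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    print(n, a(n))
print("log 2 =", math.log(2))
```
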
|
3,805,452 | <p>Given a stick and break it randomly at two places, what is the probability that you can form a triangle from the pieces?</p>
<p>Here is my attempt; the answer does not match, and I am confused about what went wrong with this argument.</p>
<p>I first denote the two randomly chosen positions by <span class="math-container">$X$</span> and <span class="math-container">$Y$</span>, and let <span class="math-container">$A=\max(X,Y)$</span>, <span class="math-container">$B=\min(X,Y)$</span>. We are interested in the probability of the event <span class="math-container">$\{A>\frac{1}{2}, B>A-\frac{1}{2}\}$</span>. Thus, we want the joint distribution of <span class="math-container">$A$</span> and <span class="math-container">$B$</span>. To compute that, I computed
<span class="math-container">$$F_{A,B}(w,z)=\mathbb{P}(A\leq w, B\leq z)=\mathbb{P}(A\leq w)-\mathbb{P}(A\leq w, B>z)=\mathbb{P}(X\leq w,Y\leq w)-\mathbb{P}(X\leq w, Y\leq w, X>z, Y>z)$$</span>
Therefore, we have if <span class="math-container">$z\leq w$</span>
<span class="math-container">$$F_{A,B}(w,z)=w^2-(w-z)^2$$</span>
otherwise
<span class="math-container">$$F_{A,B}(w,z)=w^2$$</span>
Then the joint density of <span class="math-container">$A$</span> and <span class="math-container">$B$</span> is
<span class="math-container">$$f_{A,B}(w,z)=\frac{\partial^2 F}{\partial w\partial z}(w,z)=2$$</span>
if <span class="math-container">$z\leq w$</span> and <span class="math-container">$0$</span> otherwise.<br />
Finally
<span class="math-container">$$\mathbb{P}(A>\frac{1}{2},B>A-\frac{1}{2})=\int_{\frac{1}{2}}^1\int_{w-\frac{1}{2}}^w2dzdw=\frac{1}{2}$$</span>
The answer is <span class="math-container">$\frac{1}{4}$</span> instead, but I can't figure out what went wrong with this argument.</p>
| Varun Vejalla | 595,055 | <p>The lengths of the sides of the stick will be <span class="math-container">$1-A, A-B$</span>, and <span class="math-container">$B$</span>. For the stick to form a valid triangle, the following three conditions must hold by the triangle inequality:</p>
<p><span class="math-container">$$\begin{align*}
1-A + A-B>B \to B&<\frac{1}{2} \\
1-A + B>A-B \to B&>A-\frac{1}{2} \\
A-B+B > 1-A \to A&>\frac{1}{2}
\end{align*}$$</span></p>
<p>You did not include the first condition, <span class="math-container">$B<1/2$</span>, which threw off your answer. Other than this, everything else was solved correctly; your answer would have been correct if only the second and third conditions were needed.</p>
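<p>A Monte Carlo check of the corrected event (a Python sketch, not part of the original answer; trial count and seed are arbitrary):</p>

```python
import random

def triangle_prob(trials=200_000, seed=1):
    """Estimate the probability that the three pieces form a triangle."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        x, y = rng.random(), rng.random()
        a, b = max(x, y), min(x, y)   # A = max, B = min, as in the question
        if b < 0.5 and a > 0.5 and b > a - 0.5:
            ok += 1
    return ok / trials

est = triangle_prob()
print(est)  # should be close to 1/4
```
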
|
3,816,140 | <p>The points A, B, C, D have position vectors <strong>a</strong>, <strong>b</strong>, <strong>c</strong> and <strong>d</strong> respectively. Prove that the lines joining the midpoints of opposite edges of the tetrahedron ABCD bisect each other and give the position vector of the point of intersection.</p>
<p>I have started by working out the position vectors of E and F:</p>
<p><strong>b</strong> + <span class="math-container">$\vec BA$</span> = <strong>a</strong></p>
<p>so <span class="math-container">$\vec BA$</span> = <strong>a</strong> - <strong>b</strong> and position vector of E is <span class="math-container">$\frac{(a - b)}{2}$</span></p>
<p>and that for F is <span class="math-container">$\frac{d - c}{2}$</span>.</p>
<p>But I can proceed no further.</p>
<p><a href="https://i.stack.imgur.com/D7zGU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/D7zGU.png" alt="enter image description here" /></a></p>
| Robert Z | 299,698 | <p>Note that by <a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity" rel="nofollow noreferrer">Vandermonde's_identity</a>,
<span class="math-container">\begin{align*}
\sum_{i=1}^{n-3}\frac{\binom{n-3}{i}\binom{n+i-1}{i}}{i+1}\cdot(-1)^{i+1}&=\frac{1}{n-1}\sum_{i=1}^{n-3}\binom{n-3}{i}\binom{n+i-1}{i+1}\cdot(-1)^{i+1}\\
&=\frac{1}{n-1}\sum_{i=1}^{n-3}\binom{n-3}{n-3-i}\binom{1-n}{i+1}\\
&=\frac{1}{n-1}\sum_{k=2}^{n-2}\binom{n-3}{n-2-k}\binom{1-n}{k}\\
&\stackrel{\text{V.I.}}{=}
\frac{1}{n-1}\left(\binom{-2}{n-2}-0-(1-n)\right)\\
&=\frac{1}{n-1}\left((n-1)(-1)^n-(1-n)\right)=1+(-1)^n
\end{align*}</span>
where we used the identity <span class="math-container">$\binom{-a}{b}=\binom{a+b-1}{b}(-1)^b$</span>.</p>
|
764,437 | <p>I am trying to prove that $\mathbb{Z}[i]\cong \mathbb{Z}[x]/(x^2+1)$.</p>
<p>My initial plan was to use the first isomorphism theorem. I showed that there is a map $\phi: \mathbb{Z}[x] \rightarrow \mathbb{Z}[i]$ given by $\phi(f)=f(i).$ This map is onto and a ring homomorphism. The part I have a question on is showing that $\ker(\phi) = (x^{2}+1)$. </p>
<p>One containment is trivial: $(x^2+1)\subset \ker(\phi)$. To show $\ker(\phi)\subset (x^2+1)$, let $f \in \ker(\phi)$; then $f(i)=0$, and since $f$ has integer coefficients, $-i$ is a root as well. So $f=g(x-i)(x+i)=g(x^2+1).$
How can I prove that $f \in \mathbb{Z}[x]$ implies $g \in \mathbb{Z}[x]$? </p>
| Community | -1 | <p>You can define your isomorphism as follows</p>
<p>$\phi:\mathbb{Z}[i]\rightarrow \mathbb{Z}[x]/\langle x^2+1 \rangle$ </p>
<p>$\phi(a+bi)=(a+bx)+\langle x^2+1 \rangle$ </p>
<p>If you compute $[a+bx+\langle x^2+1 \rangle]\times [c+dx+\langle x^2+1 \rangle]$</p>
<p>We get $ac+(ad+bc)x+bdx^2+\langle x^2+1 \rangle$. We reduce by long division to get, </p>
<p>$(ac-bd)+(ad+bc)x+\langle x^2+1 \rangle$.</p>
<p>Therefore,</p>
<p>$\phi([a+bi][c+di])=(ac-bd)+(ad+bc)x+\langle x^2+1 \rangle$.</p>
<p>Which is exactly what you want.</p>
<p>The map is obviously bijective and a ring homomorphism.</p>
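<p>The multiplication rule can also be checked mechanically. Here is a small Python sketch (not part of the original answer) comparing Gaussian-integer multiplication with polynomial multiplication reduced via $x^2\mapsto -1$:</p>

```python
import random

def gauss_mult(z, w):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    (a, b), (c, d) = z, w
    return (a * c - b * d, a * d + b * c)

def poly_mult_mod(p, q):
    # p = a + bx, q = c + dx; multiply, then reduce modulo x^2 + 1 (x^2 -> -1)
    (a, b), (c, d) = p, q
    const, lin, quad = a * c, a * d + b * c, b * d
    return (const - quad, lin)

rng = random.Random(0)
for _ in range(100):
    z = (rng.randint(-50, 50), rng.randint(-50, 50))
    w = (rng.randint(-50, 50), rng.randint(-50, 50))
    assert gauss_mult(z, w) == poly_mult_mod(z, w)
print("multiplication agrees on all sampled pairs")
```
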
|
1,191,740 | <p>$a_{n+1} - a_n = 3n^2 - n$ ;$a_0=3$</p>
<p>I need help finding the particular solution.
Based on a chart in my textbook, if the right-hand side is $n^2$ the particular solution would be
$A_2n^2 + A_1n + A_0$ and $n$ has the particular solution of $A_1n+A_0$.<br>
So given $3n^2 - n$, my first thought was that if the equation was $n^2-n$ you can have something like $An^2 + Bn+C - (Bn + C) = An^2$. </p>
<p>Is this process correct if I simply had $n^2-n$ ? If so how would the $3$ in $3n^2$ affect this step?</p>
| abel | 9,252 | <p>take a particular solution in the form $$a_n = An^3 + Bn^2+Cn,\\
3n^2 - n = a_{n+1} - a_n = A(n+1)^3 - An^3 + B(n+1)^2 - Bn^2+ C\\= 3An^2 + n(3A+2B) + A+B+C$$ equating the coefficients gives $$A = 1, B = -2, C=1. $$</p>
<p>the homogeneous solution is $ a_n = D $ so the general solution is $$a_n = n^3 -2n^2 +n + D, a_0 = 3 \to D = 3. $$</p>
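<p>A quick check in Python (not part of the original answer) that this general solution satisfies both the recurrence and the initial condition:</p>

```python
def a(n):
    # particular solution n^3 - 2n^2 + n plus the homogeneous constant D = 3
    return n**3 - 2 * n**2 + n + 3

assert a(0) == 3
for n in range(100):
    assert a(n + 1) - a(n) == 3 * n**2 - n
print("a_n = n^3 - 2n^2 + n + 3 solves the recurrence with a_0 = 3")
```
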
|
1,291,657 | <p>Is Bezout's lemma enough to confirm the GCD of a number?</p>
<p>So suppose we have <span class="math-container">$$ax+by=z$$</span> does this mean <span class="math-container">$$\gcd(a,b)=z$$</span></p>
| AlexR | 86,940 | <p>I note that $\mathrm{hcf}$ is usually denoted as $\gcd$ (<em>greatest common divisor</em>). Now from
$$ax+by = z$$
we can only see that $\gcd(a,b) \mid z$ (i.e. $z$ is a multiple of $\gcd(a,b)$) and by Bezout's lemma there exist <em>some</em> $(x,y)$ such that $z=\gcd(a,b)$.<br>
You can't infer anything else about $\gcd(a,b)$ from this, not even if you force $\gcd(x,y)=1$. For example $x=y=1$ gives $z=a+b$. There is not much you could say about the $\gcd$ of some numbers only knowing their sum.</p>
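<p>A concrete illustration in Python (not part of the original answer): $ax+by=z$ only forces $z$ to be a multiple of $\gcd(a,b)$, while Bezout's lemma guarantees that <em>some</em> pair of coefficients realizes the gcd itself.</p>

```python
import math

a, b = 6, 10       # gcd(6, 10) = 2
z = a * 1 + b * 1  # x = y = 1 gives z = 16, a multiple of 2 but not the gcd
assert z % math.gcd(a, b) == 0
assert z != math.gcd(a, b)

# Bezout: some coefficients do hit the gcd, e.g. 6*2 + 10*(-1) = 2
assert a * 2 + b * (-1) == math.gcd(a, b)
print(z, math.gcd(a, b))
```
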
|
99,617 | <p>How can I animate a point on a polar curve? I have used <code>Animate</code> and <code>Show</code> together before in order to get the curve and the moving point together on the same plot, but combining the polar plot and the point doesn't seem to be working because point only works with Cartesian coordinates.</p>
<p>Here is the code I used before to animate a point on a parametric curve. For higher values of <code>a</code> and <code>theta</code>, you can see the point moving along the curve better (I was required to animate all three parameters).</p>
<pre><code>Animate[
Show[
ParametricPlot[{a Cos[θ] t, a Sin[θ] t - 4.9 t^2}, {t, 0, 15}, AxesLabel -> {"x", "y"},
PlotRange -> {{0, 50}, {0, 30}}],
Graphics[{Red, PointSize[.05], Point[{a Cos[θ] t, a Sin[θ] t - 4.9 t^2}]}]
],
{t, 0, 5, Appearance -> "Labeled"},
{a, 1, 20, Appearance -> "Labeled"},
{θ, 0, Pi/2, Appearance -> "Labeled"},
AnimationRunning -> False
]
</code></pre>
<p>Here is the code I tried to use to animate a point on a polar curve, but the point does not even show up.</p>
<pre><code>Animate[
Show[
PolarPlot[2 Sin[4*θ], {θ, 0, 2 Pi}],
Graphics[Red, PointSize[Large], Point[{2 Sin[4*θ] Cos[θ], 2 Sin[4*θ] Sin[θ]}]]
],
{θ, 0, 2 Pi},
AnimationRunning -> False
]
</code></pre>
| ubpdqn | 1,997 | <pre><code>Animate[
ParametricPlot[f[a, s, u], {u, 0, 15},
PlotRange -> {{0, 50}, {0, 30}},
Epilog -> {Red, PointSize[0.05], Point[f[a, s, t]]}], {a, 1,
20}, {s, 0, Pi/2}, {t, 0, 5},
Initialization :> (f[a_, s_, t_] :=
a t { Cos[s] , Sin[s]} - {0, 4.9 t^2})]
</code></pre>
<p><a href="https://i.stack.imgur.com/Z4n89.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Z4n89.gif" alt="enter image description here"></a></p>
<p>I have not done all the niceties but suggest this suffices to operationalize.</p>
|
1,404,960 | <p>Say you have a function $f(x)$ and a line $g(x)=ax+b$. How do you reflect $f$ about $g$?</p>
<p>I am apparently supposed to write more text, but the line above is all I am after, hence I wrote this sentence as well.</p>
| user84413 | 84,413 | <p>Represent the graph of f parametrically by $x=t, y=f(t)$. </p>
<p>If we reflect the point $(t, f(t))$ in the line $y=ax+b$ to get the point $(x,y)$, then
$\color{red}{y-f(t)=-\frac{1}{a}(x-t)}$ </p>
<p>since the line and the line segment between $(t, f(t))$ and $(x,y)$ are perpendicular to each other.</p>
<p>We also have that $\frac{y+f(t)}{2}=a\left(\frac{x+t}{2}\right)+b$ and therefore $\color{red}{y+f(t)=a(x+t)+2b}$ </p>
<p>since the midpoint of the line segment lies on the line.</p>
<p>Subtracting these equations gives $2f(t)=ax+at+\frac{1}{a}x-\frac{1}{a}t+2b=\left(\frac{a^2+1}{a}\right)x+\left(\frac{a^2-1}{a}\right)t+2b$,</p>
<p>and solving for x gives $\displaystyle \color{blue}{x=\frac{1-a^2}{1+a^2}t+\frac{2a}{a^2+1}\left(f(t)-b\right)}$.</p>
<p>Solving for x in the first equation gives $-ay+af(t)=x-t$ and so $\color{red}{x=-ay+af(t)+t}$.</p>
<p>Substituting into the second equation gives </p>
<p>$y+f(t)=a\big(-ay+af(t)+2t\big)+2b=-a^2y+a^2f(t)+2at+2b,$ and solving for y gives</p>
<p>$(a^2+1)y=(a^2-1)f(t)+2(at+b)$ so $\displaystyle\color{blue}{y=\frac{a^2-1}{a^2+1}f(t)+\frac{2}{a^2+1}(at+b)}$.</p>
|
2,262,846 | <p>This question is somewhat naive. Please see the following proof before reading the question itself. </p>
<p>Let $L$ be a vector space over a field $\Bbb{F}.$ Assume there is an associative multiplication on $L$ so that $L$ is an associative ring, and denote this multiplication simply as $xy$ for $x,y\in L$. Furthermore, suppose this multiplication to be compatible with scaling, that is $\alpha\in\Bbb{F},$ then $\alpha x = x\alpha\in L$ for all $x\in L$ ($L$ should be a left and right $\Bbb{F}$ vector space). Define $[x,y]=xy-yx$ for all $x,y\in L.$ Then $[-,-]:L\times L \longrightarrow L$ is a Lie bracket. </p>
<p><em>Proof:</em>
It is clear that $[-,-]$ is anti-commutative, and bilinear. We only need to show that it satisfies the Jacobi identity. That is </p>
<p>$$[[x,y],z]+[[y,z],x]+[[z,x],y]=0.$$</p>
<p>One readily sees that this is equivalent to </p>
<p>$$(xy)z-(yx)z-z(xy)+z(yx)+(yz)x-(zy)x-x(yz)+x(zy)+(zx)y-(xz)y-y(zx)+y(xz)=0.$$</p>
<p>The previous equation is certainly true when the multiplication defined earlier is associative. Therefore $L$ is a Lie algebra. $\Box$</p>
<p>So here's the question:</p>
<blockquote>
<p>1) Is it true that an algebra defined in this way is a Lie algebra? </p>
</blockquote>
<p>An example of such an associative algebra is the $M_{n\times n}(\Bbb{F}).$</p>
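<p>(As a quick numerical sanity check, the Jacobi identity for the commutator can be tested on random integer matrices; the following Python sketch is illustrative and not part of the original post.)</p>

```python
import random

def mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def bracket(A, B):
    return sub(mult(A, B), mult(B, A))  # [A, B] = AB - BA

rng = random.Random(42)
def rand_mat(n=3):
    return [[rng.randint(-5, 5) for _ in range(n)] for _ in range(n)]

X, Y, Z = rand_mat(), rand_mat(), rand_mat()
J = add(add(bracket(bracket(X, Y), Z), bracket(bracket(Y, Z), X)),
        bracket(bracket(Z, X), Y))
assert all(v == 0 for row in J for v in row)  # Jacobi identity holds exactly
print("Jacobi identity verified on random 3x3 integer matrices")
```
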
<blockquote>
<p>2) If so, is there any name which separates these from the classical Lie algebras?</p>
</blockquote>
<p>There is rich theory surrounding classical Lie algebras, especially their connection to Lie groups. This motivates the next question.</p>
<blockquote>
<p>3) Is there any significance in viewing an associative algebra in this way?</p>
</blockquote>
| StackTD | 159,845 | <p><strong>Hint</strong>: you could use $\left(x^3-y^3\right)^2 \ge 0 \implies x^6+y^6 \ge 2x^3y^3$, so:
$$\left| \frac{x^3 y^4}{x^6 + y^6}\right| \le \left| \frac{x^3 y^4}{2x^3y^3}\right| = \ldots$$</p>
|
2,592,581 | <p>Let's define the following series
$$S_n =\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)...(i+n)}.$$
I know that $S_0$ does not converge, so suppose $n \in \mathbb{N}$ and define $S_n$ accordingly. We have $S_1=1$, $S_2=\frac1 4$, $S_3=\frac 1 {18}$, $S_4=\frac 1{96}$, $S_5=\frac 1{600}$, etc.<br/><br/>
the numerator in all results is 1<br/>the pattern in the denominators is $[1,4,18,96,600,...]$ and <a href="https://oeis.org/A001563" rel="nofollow noreferrer">can be found here</a>; it is equal to $n\cdot n!$.
Finally I want to prove the general equality:<br/><br/>
$\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)...(i+n)}=\frac 1 {n*n!}$</p>
| Rohan Shinde | 463,895 | <p>$$\frac{1}{i(i+1)(i+2)\cdots(i+n)}$$
$$=\frac{1}{n}\left(\frac{1}{i(i+1)\cdots(i+n-1)}-\frac{1}{(i+1)(i+2)\cdots(i+n)}.\right)$$ </p>
<p>The first term of this series becomes
$$=\frac{1}{n}\left(\frac{1}{n!}-\frac{1}{(n+1)!}\right) $$</p>
<p>The second term becomes
$$=\frac{1}{n}\left(\frac{1}{(n+1)!}-\frac{2}{(n+2)!}\right) $$</p>
<p>The third term becomes
$$=\frac{1}{n}\left(\frac{2}{(n+2)!}-\frac{6}{(n+3)!}\right)$$</p>
<p>Hence we see that the sum telescopes to $\frac {1}{n\cdot n!}$.</p>
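<p>The telescoping can be confirmed with exact rational arithmetic in Python (a sketch, not part of the original answer):</p>

```python
from fractions import Fraction
from math import factorial, prod

def partial_sum(n, N):
    # exact partial sum: sum_{i=1}^{N} 1/(i(i+1)...(i+n))
    return sum(Fraction(1, prod(range(i, i + n + 1))) for i in range(1, N + 1))

for n in range(1, 6):
    N = 500
    # telescoped form: (1/n) * (1/n! - N!/(N+n)!)
    telescoped = Fraction(1, n) * (Fraction(1, factorial(n))
                                   - Fraction(factorial(N), factorial(N + n)))
    assert partial_sum(n, N) == telescoped
    # the tail term vanishes as N grows, leaving 1/(n * n!)
    assert abs(float(partial_sum(n, N)) - 1 / (n * factorial(n))) < 1e-2
print("partial sums match the telescoped closed form")
```
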
|
2,592,581 | <p>Let's define the following series
$$S_n =\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)...(i+n)}.$$
I know that $S_0$ does not converge, so suppose $n \in \mathbb{N}$ and define $S_n$ accordingly. We have $S_1=1$, $S_2=\frac1 4$, $S_3=\frac 1 {18}$, $S_4=\frac 1{96}$, $S_5=\frac 1{600}$, etc.<br/><br/>
the numerator in all results is 1<br/>the pattern in the denominators is $[1,4,18,96,600,...]$ and <a href="https://oeis.org/A001563" rel="nofollow noreferrer">can be found here</a>; it is equal to $n\cdot n!$.
Finally I want to prove the general equality:<br/><br/>
$\sum_{i=1}^{\infty} \frac 1 {i(i+1)(i+2)...(i+n)}=\frac 1 {n*n!}$</p>
| omegadot | 128,913 | <p>If you failed to see the telescoping nature of the series, here is an alternative approach that makes use of the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="noreferrer">Beta function</a>.</p>
<p>\begin{align*}
S_n &= \sum_{i = 1}^\infty \frac{1}{i (i + 1) (i + 2) \cdots (i + n)} \quad n \in \mathbb{N}\\
&= \sum_{i = 1}^\infty \frac{1 \cdot 2 \cdots (i - 1)}{1 \cdot 2 \cdots (i - 1)i(i + 1) \cdots (i + n)}\\
&= \sum_{i = 1}^\infty \frac{(i - 1)!}{(i + n)!}\\
&= \sum_{i = 1}^\infty \frac{\Gamma (i)}{\Gamma (i + n + 1)}\\
&= \frac{1}{\Gamma (n + 1)} \sum_{i = 1}^\infty \frac{\Gamma (i) \Gamma (n + 1)}{\Gamma (i + n + 1)}\\
&=\frac{1}{\Gamma (n + 1)} \sum_{i = 1}^\infty \text{B}(i, n + 1)\\
&=\frac{1}{\Gamma (n + 1)} \sum_{i = 1}^\infty \int_0^1 t^{i - 1} ( 1 - t)^n \, dt\\
&=\frac{1}{\Gamma (n + 1)} \int_0^1 (1 - t)^n \left [\sum_{i = 1}^\infty t^{i - 1} \right ] \, dt\\
&= \frac{1}{\Gamma (n + 1)} \int^1_0 (1 - t)^n \cdot \frac{1}{1 - t} \, dt\\
&= \frac{1}{\Gamma (n + 1)} \int^1_0 (1 - t)^{n - 1} \, dt\\
&= \frac{1}{\Gamma (n + 1)} \left [-\frac{1}{n} (1 - t)^n \right ]^1_0\\
&= \frac{1}{\Gamma (n + 1) n}\\
&= \frac{1}{n \cdot n!},
\end{align*}
as expected.</p>
|
2,300,386 | <blockquote>
<p>Consider the initial value problem:</p>
<p>$$\begin{cases}
u_{tt} &= c^2 u_{xx} \ \ & \text{for} \ -\infty < x < \infty, \ 0 \leq t < \infty\\
u(x,0) &= \phi(x) \ \ & \text{for} \ -\infty < x < \infty\\
u_t(x,0) &= \psi(x) \ \ & \text{for} \ -\infty < x < \infty\\
\end{cases}$$
where $\phi$ has compact support (that is, outside some bounded interval, $\phi$ is zero), and $\psi(x) = 0$. Define the kinetic energy $KE = \frac{1}{2}\int_{-\infty}^{\infty} \rho u_t^{2} dx$ and the potential energy $PE = \frac{1}{2} \int_{-\infty}^{\infty} T u_x^{2} dx$. Show that, for large enough times $t$, each of $KE$ and $PE$ is itself constant, and they are equal to each other. Can you prove the same thing if the initial velocity $\psi$ merely has compact support, instead of being zero?</p>
</blockquote>
<p>I am not sure how to start this, how am I to show that $KE$ and $PE$ are constant? I usually post some work but I am not sure how to start this. Any help would be useful.</p>
| Winther | 147,873 | <p>Define $E_k(t) \equiv \frac{1}{2}\int_{\mathbb{R}}u_t^2(x,t){\rm d}x$ and $E_p(t) \equiv \frac{1}{2}\int_{\mathbb{R}}c^2u_x^2(x,t){\rm d}x$. Then taking the derivative and using integration by parts we get
$$\frac{d}{dt}(E_k(t) + E_p(t)) = \int_{\mathbb{R}}u_t[u_{tt} - c^2u_{xx}]{\rm d}x = 0$$
so the total energy is conserved. To show that $E_p(t) = E_k(t)$ for large $t$ we need an expression for the solution. This is given by d'Alemberts formula</p>
<p>$$u(x,t) = \frac{\phi(x+ct) + \phi(x-ct)}{2} + \frac{1}{2c}\int_{x-ct}^{x+ct}\psi(s){\rm d}s$$</p>
<p>from which you can compute $u_t$, $u_x$ and derive</p>
<p>$$E_k(t) - E_p(t) = \frac{1}{2}\int_{\mathbb{R}}[\psi(x+ct)+c\phi'(x+ct)][\psi(x-ct)-c\phi'(x-ct)]{\rm d}x$$</p>
<p>To make the algebra above simpler, note that $u_t^2 - c^2u_x^2 = (u_t - cu_x)(u_t+cu_x)$. Now since $\phi$ and $\psi$ have compact support, there is an $M>0$ such that $\phi'(x) = \psi(x) = 0$ for all $|x| > M$. If $ct > M$ then, for every $x$, either $|x-ct| > M$ or $|x+ct|>M$ (the two arguments cannot both lie in $[-M,M]$, since their difference is $2ct > 2M$), so the integrand above is identically zero and $E_p = E_k$ follows. Finally, since we know that $E_k + E_p$ is constant, it follows that $E_p$ and $E_k$ have to be constant for large $t$.</p>
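<p>A numerical illustration (Python, not part of the original answer): take $c=1$, $\psi=0$ and $\phi(x)=e^{-x^2}$ (not compactly supported, but negligible outside a bounded interval at double precision). Computing $E_k$ and $E_p$ from the d'Alembert solution by trapezoidal quadrature shows the energy splitting equally once the two travelling bumps separate:</p>

```python
import math

c = 1.0

def phi_prime(s):
    return -2.0 * s * math.exp(-s * s)  # phi(s) = exp(-s^2)

def energies(t, L=60.0, N=60_000):
    # E_k = (1/2)*int u_t^2 dx and E_p = (1/2)*int c^2 u_x^2 dx via the
    # trapezoid rule, with u_t, u_x taken from d'Alembert's formula (psi = 0)
    h = 2.0 * L / N
    ek = ep = 0.0
    for i in range(N + 1):
        x = -L + i * h
        w = h if 0 < i < N else h / 2.0
        ut = 0.5 * c * (phi_prime(x + c * t) - phi_prime(x - c * t))
        ux = 0.5 * (phi_prime(x + c * t) + phi_prime(x - c * t))
        ek += w * 0.5 * ut * ut
        ep += w * 0.5 * c * c * ux * ux
    return ek, ep

ek0, ep0 = energies(0.0)    # initially all energy is potential (psi = 0)
ek1, ep1 = energies(20.0)   # large t: the left- and right-movers no longer overlap
print(ek0, ep0)
print(ek1, ep1)
```

<p>Numerically $E_k$ starts at zero, $E_k+E_p$ stays constant, and $E_k\approx E_p$ at $t=20$.</p>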
|
2,946,408 | <p>Here is the question:
Find the volume of the solid obtained by rotating about the <span class="math-container">$x$</span>-axis the region bounded by the <span class="math-container">$x$</span>-axis, <span class="math-container">$x=4$</span>, and <span class="math-container">$y=\sqrt{x}$</span>.</p>
<p>I understand that we can find the cross sectional area at each point with respect to x and integrate (<span class="math-container">$\int_{0}^{4}(\pi *x)dx$</span>), but I want to solve this question slightly differently. Since the curve goes from 0 to 2 (y-axis), I decided to set up my integral like this: <span class="math-container">$\int_{0}^{2}(\pi*y^2)dy$</span>. Unfortunately, this does not give me the correct answer. What went wrong?</p>
| Phil H | 554,494 | <p>Your <span class="math-container">$\pi x$</span> works because it equals <span class="math-container">$\pi (\sqrt{x})^2$</span>, the disc-method cross-sectional area. Your <span class="math-container">$\pi y^2$</span> does not work: slicing in <span class="math-container">$y$</span> while rotating about the <span class="math-container">$x$</span>-axis calls for the shell method, whose integrand is <span class="math-container">$2\pi y(4 - y^2)$</span>, not <span class="math-container">$\pi y^2$</span>.</p>
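<p>Both set-ups can be checked numerically; this Python sketch (not part of the original answer) evaluates the disc and shell integrals with a midpoint rule, and both give <span class="math-container">$8\pi$</span>:</p>

```python
from math import pi

def midpoint_integral(f, lo, hi, n=100_000):
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) for i in range(n)) * h

disc = midpoint_integral(lambda x: pi * x, 0.0, 4.0)                     # discs along x
shell = midpoint_integral(lambda y: 2 * pi * y * (4 - y * y), 0.0, 2.0)  # shells along y
print(disc, shell, 8 * pi)
```
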
|
2,245,010 | <p>Does there exist a topological group which can be covered by (nontrivial and proper) open subgroups of itself? If so, what are groups of these types called and is this a nice property for a topological group to have? Or is this just impossible?</p>
| Angina Seng | 436,618 | <p>There's $\Bbb Q_p$, the $p$-adic numbers under addition. This is an infinite, locally compact and totally disconnected group. The subgroups
$p^n \Bbb Z_p$ for $n\in\Bbb Z$ cover $\Bbb Q_p$ and are open therein
(and are compact to boot).</p>
|