| qid | question | author | author_id | answer |
|---|---|---|---|---|
1,802,020 | <p>Let $S$ be a set such that if $A,B\in S$ then $A\cap B,A\triangle B\in S,$ where $\triangle$ denotes the symmetric difference operator. I would like to show that if $S$ contains $A$ and $B$, then it also contains $A\cup B, A\setminus B$.</p>
<p>The difference was easy to find, but I am not succeeding with the union. I was able to show that each of the sets $$\emptyset,A\setminus B, B\setminus A,(A\cap B)\cup(A\setminus B),(A\cap B)\cup(B\setminus A),A\cup(A\triangle B),B\cup(A\triangle B),\\(A\cap B)\cup(B\setminus A)\cup(A\triangle B)$$ is an element of $S$. Combining these I could go on producing other elements, and probably I would eventually find $A\cup B$. But I have already spent too long on the problem, there must be a cleverer approach, right?</p>
| Asaf Karagila | 622 | <p>Note that if $A\cap B=\varnothing$, then $A\cup B=A\mathbin\triangle B$. So if you can show that $A\setminus B\in S$, then $A\cup B=(A\setminus B)\mathbin\triangle B$.</p>
<p>Next, observe that $A\cap(A\mathbin\triangle B)=A\cap((A\setminus B)\cup(B\setminus A))=A\setminus B$.</p>
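Both identities can be spot-checked with Python's built-in set operators (`^` is symmetric difference, `&` intersection, `-` difference; the sample sets are arbitrary):

```python
# Finite sanity check of the two identities on sample sets.
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

diff = A & (A ^ B)      # claimed: equals A \ B
union = (A - B) ^ B     # claimed: equals A ∪ B (valid since A \ B and B are disjoint)

assert diff == A - B
assert union == A | B
```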
|
<p>I recently saw the expansion <span class="math-container">$(1+ \frac{1}{x})^n = 1 + \frac{n}{x} + \frac{n(n-1)}{2!x^2} + \frac{n(n-1)(n-2)}{3!x^3}+\cdots$</span> where <span class="math-container">$n \in \mathbb Q$</span></p>
<p>From what I understood, they have taken the Taylor series of <span class="math-container">$(1+x)^n$</span> and put <span class="math-container">$x=\frac{1}{t}$</span>. This doesn't make sense to me because the Taylor series used, uses successive derivatives at zero but derivatives at zero won't be defined for <span class="math-container">$f(x)=\frac{1}{x}$</span>.</p>
<p>How can I directly calculate the Taylor series for <span class="math-container">$(1+ \frac{1}{x})^n$</span>?</p>
| P. Lawrence | 545,558 | <p>Expand <span class="math-container">$(x+1)^n$</span> as a Taylor series. Then divide through by <span class="math-container">$x^n$</span>.</p>
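The resulting series in $1/x$ can be sketched numerically (the exponent $n=1/2$ and evaluation point $x=50$ are arbitrary choices): truncate after a few terms and compare with the exact value.

```python
import math

# Truncated binomial series for (1 + 1/x)^n with rational n, compared with
# the exact value at a sample point.
def series(n, x, terms=6):
    total, coeff = 0.0, 1.0
    for k in range(terms):
        total += coeff / x**k
        coeff *= (n - k) / (k + 1)   # next coefficient n(n-1)...(n-k)/(k+1)!
    return total

n, x = 0.5, 50.0
approx = series(n, x)
exact = (1 + 1 / x) ** n
assert abs(approx - exact) < 1e-9
```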
|
3,415,266 | <p>I cannot find how or why this,</p>
<p><span class="math-container">$$5\sin(3x)−1 = 3$$</span></p>
<p>has, as one of its two (families of) solutions,</p>
<p><span class="math-container">$$42.29^\circ + n \times 120^\circ$$</span></p>
<p>I am lost on how to get this solution. I have found the other one, so I will not mention it.</p>
<p>Can someone help explain this and show how to get this solution? Thank you!</p>
| user | 505,767 | <p>We have</p>
<p><span class="math-container">$$5\sin(3x)−1 = 3 \iff \sin(3x)=\frac45 $$</span></p>
<p>then, since <span class="math-container">$\arcsin y\in [-\pi/2,\pi/2]$</span> and <span class="math-container">$\sin \theta = \sin (\pi-\theta)$</span>, we have the following (set of) solution(s)</p>
<ul>
<li><span class="math-container">$3x=\arcsin\left(\frac45\right)+2k\pi \implies x=\frac13\arcsin\left(\frac45\right)+\frac23k\pi$</span></li>
</ul>
<p>which is the one you probably have already found and also this one</p>
<ul>
<li><span class="math-container">$3x=\pi -\arcsin\left(\frac45\right)+2k\pi\implies x=\frac \pi 3-\frac13\arcsin\left(\frac45\right)+\frac23k\pi$</span></li>
</ul>
<p>The key point is that the <span class="math-container">$\arcsin$</span> function only returns values in <span class="math-container">$[-\pi/2,\pi/2]$</span>; therefore we would be losing a (set of) solution(s) in the interval <span class="math-container">$[\pi/2,3\pi/2]$</span>. For example</p>
<p><span class="math-container">$$\arcsin \frac {\sqrt2} 2=\frac \pi 4$$</span></p>
<p>and indeed <span class="math-container">$\sin \frac \pi 4 =\frac{\sqrt2} 2$</span> but also <span class="math-container">$\sin \left(\pi-\frac \pi 4 \right)=\frac{\sqrt2} 2$</span>.</p>
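Both families can be verified numerically; a small sketch in Python:

```python
import math

# Check both solution families for 5 sin(3x) − 1 = 3, i.e. sin(3x) = 4/5.
def f(x):
    return 5 * math.sin(3 * x) - 1

x1 = math.asin(4 / 5) / 3              # first family (k = 0): ≈ 17.71°
x2 = (math.pi - math.asin(4 / 5)) / 3  # second family (k = 0): ≈ 42.29°
for k in range(-3, 4):
    shift = 2 * k * math.pi / 3        # the period of sin(3x) is 2π/3, i.e. 120°
    assert abs(f(x1 + shift) - 3) < 1e-9
    assert abs(f(x2 + shift) - 3) < 1e-9

deg2 = math.degrees(x2)                # ≈ 42.29, matching the quoted solution
```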
|
<p>Let triangle <span class="math-container">$ABC$</span> be an equilateral triangle. Triangle <span class="math-container">$DEF$</span> is also equilateral and is inscribed in triangle <span class="math-container">$ABC \left(D\in BC,E\in AC,F\in AB\right)$</span>. Find <span class="math-container">$\cos\measuredangle DEC$</span> if <span class="math-container">$AB:DF=8:5$</span>.</p>
<p><a href="https://i.stack.imgur.com/FZkBd.png" rel="noreferrer"><img src="https://i.stack.imgur.com/FZkBd.png" alt="enter image description here" /></a>
Firstly, I would be very grateful if someone can explain to me how I am supposed to draw the diagram. Obviously I have made it by sight.</p>
<p>Let <span class="math-container">$\measuredangle DEC=\alpha$</span>. We can note that <span class="math-container">$\triangle AEF \cong \triangle BFD \cong \triangle CDE$</span>. This is something we can always use in such a configuration. So <span class="math-container">$$AE=BF=CD, $$</span> <span class="math-container">$$AF=BD=CE.$$</span> Let <span class="math-container">$AB=BC=AC=8x$</span> and <span class="math-container">$DF=DE=EF=5x$</span>. If we denote <span class="math-container">$CD=y,$</span> then <span class="math-container">$CE=AC-AE=AC-CD=8x-y$</span>. The cosine rule on <span class="math-container">$CED$</span> gives <span class="math-container">$$25x^2=(8x-y)^2+y^2-2y\cos60^\circ(8x-y)$$</span> which is a homogeneous equation. I got that <span class="math-container">$\dfrac{y}{x}=4\pm\sqrt{3}.$</span> Now using the sine rule on <span class="math-container">$CED$</span> <span class="math-container">$$\dfrac{CD}{DE}=\dfrac{\sin\alpha}{\sin60^\circ}\Rightarrow \sin\alpha=\dfrac{\sqrt{3}}{10}\cdot\dfrac{y}{x}=\dfrac{4\sqrt3\pm3}{10}.$$</span> Now we can use the trig identity <span class="math-container">$\sin^2x+\cos^2x=1$</span> but it doesn't seem very rational. Can you give me a hint? I was able to find <span class="math-container">$\sin\measuredangle DEC$</span> in an acceptable way, but I can't find <span class="math-container">$\cos\measuredangle DEC$</span>...</p>
| Math Lover | 801,574 | <p>WLOG, <span class="math-container">$AB = 8, DF = 5$</span>. Say <span class="math-container">$\angle DEC = \theta$</span></p>
<p>If <span class="math-container">$CD = x$</span> then <span class="math-container">$CE = 8 - x$</span></p>
<p>Applying sine law in <span class="math-container">$\triangle CDE$</span>,</p>
<p><span class="math-container">$\displaystyle \frac{\sin 60^\circ}{5} = \frac{\sin \theta}{x} = \frac{\sin(60^\circ+\theta)}{8-x}$</span></p>
<p>From the first two, we have <span class="math-container">$\sin \theta = \frac{x \sqrt3}{10}$</span></p>
<p>From the first and third, <span class="math-container">$\sin 60^\circ \cos \theta + \cos 60^\circ \sin\theta = \frac{4\sqrt3}{5} - \frac{x \sqrt3}{10}$</span></p>
<p><span class="math-container">$\implies \cos\theta = \frac{8}{5} - \frac{3x}{10}$</span></p>
<p>Applying <span class="math-container">$\sin^2\theta + \cos^2\theta = 1$</span>, we get</p>
<p><span class="math-container">$x^2 - 8x+13 = 0 \implies x = 4 \pm \sqrt3$</span></p>
<p>So, <span class="math-container">$\cos\theta = \frac{4 \mp 3\sqrt3}{10}$</span> (please note <span class="math-container">$\pm$</span> for <span class="math-container">$x$</span> and corresponding <span class="math-container">$\mp$</span> for <span class="math-container">$\cos \theta$</span>)</p>
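A numeric sanity check of the two roots and the two cosines, re-deriving $DE$ and $\sin\angle DEC$ from the laws of cosines and sines:

```python
import math

# For each root x = 4 ± √3, verify triangle CDE with CD = x, CE = 8 − x,
# DE = 5 and angle C = 60°, and that cos(DEC) matches (4 ∓ 3√3)/10.
cases = [(4 + math.sqrt(3), (4 - 3 * math.sqrt(3)) / 10),
         (4 - math.sqrt(3), (4 + 3 * math.sqrt(3)) / 10)]
for x, cos_claim in cases:
    CD, CE, DE = x, 8 - x, 5.0
    # Law of cosines at C must reproduce DE = 5.
    de = math.sqrt(CD**2 + CE**2 - 2 * CD * CE * math.cos(math.pi / 3))
    assert abs(de - DE) < 1e-9
    # Law of sines: sin(DEC) = CD · sin 60° / DE; the larger root gives an
    # obtuse angle, so compare cosines by absolute value.
    sin_dec = CD * math.sin(math.pi / 3) / DE
    assert abs(math.sqrt(1 - sin_dec**2) - abs(cos_claim)) < 1e-9
```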
|
<p>A stone of mass 50 kg starts from rest and is dragged 35 m up a slope inclined at 7 degrees to the horizontal by a rope inclined at 25 degrees to the slope. The tension in the rope is 120 N and the resistance to motion of the stone is 20 N. Calculate the speed of the stone after it has moved 35 m up the slope.</p>
<p>Any help would be appreciated, as I'm not sure whether there is an easy way to approach this question; all the methods I've tried lead to dead ends.</p>
| Aweygan | 234,668 | <p>You're right that $f_n(t)=\sin(nt)$ isn't in $L^2(\mathbb R)$, but multiplying this by a smooth bump function $g$ with $g(t)=1$ on $[0,2\pi]$ and $\text{supp}(g)=[-\varepsilon,2\pi+\varepsilon]$ for some $\varepsilon>0$ does the trick.</p>
|
<p>A stone of mass 50 kg starts from rest and is dragged 35 m up a slope inclined at 7 degrees to the horizontal by a rope inclined at 25 degrees to the slope. The tension in the rope is 120 N and the resistance to motion of the stone is 20 N. Calculate the speed of the stone after it has moved 35 m up the slope.</p>
<p>Any help would be appreciated, as I'm not sure whether there is an easy way to approach this question; all the methods I've tried lead to dead ends.</p>
| zhw. | 228,045 | <p>Choose any smooth <span class="math-container">$f$</span> on <span class="math-container">$\mathbb R$</span> with both <span class="math-container">$\int_{\mathbb R}f^2, \int_{\mathbb R}(f')^2$</span> finite and nonzero. Set <span class="math-container">$f_n(x)=f(nx).$</span> Then</p>
<p><span class="math-container">$$\int_{\mathbb R}f_n^{\,2} = \frac{1}{n}\int_{\mathbb R}f^2 \to 0,\,\,\text{while }\int_{\mathbb R}(f_n\,')^2 = n\int_{\mathbb R}(f\,')^2 \to \infty.$$</span></p>
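The scaling can be illustrated numerically; a sketch using $f(x)=e^{-x^2}$ (an arbitrary choice with both integrals finite and nonzero) and the trapezoid rule:

```python
import math

# f_n(x) = f(nx) with f(x) = exp(−x²): check ∫ f_n² ~ 1/n and ∫ (f_n')² ~ n.
def trapz(vals, dx):
    return dx * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def norms(n, L=20.0, m=40001):
    dx = 2 * L / (m - 1)
    xs = [-L + i * dx for i in range(m)]
    f2 = [math.exp(-2.0 * (n * x) ** 2) for x in xs]                 # f_n(x)²
    d2 = [(2.0 * n * n * x) ** 2 * math.exp(-2.0 * (n * x) ** 2)     # f_n'(x)²
          for x in xs]
    return trapz(f2, dx), trapz(d2, dx)

a1, b1 = norms(1)
a3, b3 = norms(3)
assert abs(a3 - a1 / 3) < 1e-8    # ∫ f_n² = (1/n) ∫ f²
assert abs(b3 - 3 * b1) < 1e-6    # ∫ (f_n')² = n ∫ (f')²
```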
|
<p>I am trying to calculate the average number of turns it will take to win in Catan given a set of hexes.</p>
<p>I am stuck at calculating the probability of an event given n rolls.
Each roll uses two 6-sided dice. You get a resource if a specific number (the sum of the two dice) is rolled.</p>
<p>Say the probability of getting an <strong>ore</strong> on a dice roll is <strong>x/36</strong> and the probability of getting a <strong>wheat</strong> is <strong>y/36</strong>.
You can construct a city if you have accumulated <strong>3 ore</strong> and <strong>2 wheat</strong>.</p>
<p>What is the probability of being able to construct a city after n dice rolls, assuming no loss/discarding of resources?</p>
| JMoravitz | 179,297 | <p>For a direct approach: assuming you only have a single settlement on ore and a single settlement on wheat, and no other source of these (e.g. trading, port, dev cards)... perhaps the easiest approach is via markov chains.</p>
<p>You have <span class="math-container">$12$</span> different possible states you can be in, <span class="math-container">$(0,0), (0,1), (0,2), (1,0), (1,1),\dots (3,2)$</span>. For convenience sake, rearrange the list to put the <span class="math-container">$(3,2)$</span> in the front.</p>
<p>You have the following transition matrix:</p>
<p><span class="math-container">$$A = \begin{bmatrix}1&0&0&0&0&0&0&0&0&\frac{x}{36}&0&\frac{y}{36}\\0&\frac{36-x-y}{36}&0&0&0&0&0&0&0&0&0&0\\0&\frac{y}{36}&\frac{36-x-y}{36}&0&0&0&0&0&0&0&0&0\\0&0&\frac{y}{36}&\frac{36-x}{36}&0&0&0&0&0&0&0&0\\0&\frac{x}{36}&0&0&\frac{36-x-y}{36}&0&0&0&0&0&0&0\\\vdots&&&\vdots&&\vdots&&&&&&\vdots\end{bmatrix}$$</span></p>
<p>where I have left the last several rows for you to complete as it is tedious to write it all down. The <span class="math-container">$r$</span>'th row <span class="math-container">$c$</span>'th column entry corresponds to the probability of moving from the <span class="math-container">$c$</span>'th state to the <span class="math-container">$r$</span>'th state. For instance, to move from the state of having three ore and one wheat to the state of having three ore and two wheat occurs with probability <span class="math-container">$\frac{y}{36}$</span> so the <span class="math-container">$1$</span>'st row <span class="math-container">$12$</span>'th column entry of the matrix is <span class="math-container">$\frac{y}{36}$</span>. Moving from <span class="math-container">$2$</span> wheat to more wheat does not change the fact that we have two wheat; for the purposes of this problem we don't keep track of excess.</p>
<p>Now... that is what the matrix <span class="math-container">$A$</span> is. The matrix <span class="math-container">$A^n$</span> will correspond to the probabilities of moving from a particular state to another <em>after <span class="math-container">$n$</span> rounds</em>.</p>
<p>The probability you are interested in then? It will be the first row second column entry of <span class="math-container">$A^n$</span> (<em>since we begin the game with no resources</em>).</p>
<p>This matrix encodes a great deal more information as well. It is an absorbing transition matrix of the form <span class="math-container">$\left[\begin{array}{c|c}I&S\\\hline 0&R\end{array}\right]$</span>. Its fundamental matrix <span class="math-container">$(I-R)^{-1}$</span> carries information about the expected time until the Markov chain reaches an absorbing state. Again, as we are starting with no resources, the column corresponding to the state <span class="math-container">$(0,0)$</span> is what interests us: its column sum is the expected wait time.</p>
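A concrete sketch of such a chain in plain Python (state order $(0,0)$ through $(3,2)$ rather than the ordering above, and pip counts $x = y = 5$ assumed purely for illustration):

```python
# Markov-chain sketch: states (ore, wheat) with ore capped at 3 and wheat at 2;
# (3, 2) is absorbing (the city can be built). x = y = 5 are assumed pip counts.
x, y = 5, 5
p_ore, p_wheat, p_none = x / 36, y / 36, (36 - x - y) / 36

states = [(o, w) for o in range(4) for w in range(3)]
idx = {s: i for i, s in enumerate(states)}

def step(v):
    """One dice roll: push the distribution v through the transition matrix."""
    out = [0.0] * 12
    for (o, w), i in idx.items():
        if (o, w) == (3, 2):
            out[i] += v[i]                                 # absorbing state
            continue
        out[idx[(min(o + 1, 3), w)]] += v[i] * p_ore       # excess is not tracked
        out[idx[(o, min(w + 1, 2))]] += v[i] * p_wheat
        out[i] += v[i] * p_none
    return out

def prob_city(n):
    """P(at least 3 ore and 2 wheat after n rolls), starting from (0, 0)."""
    v = [0.0] * 12
    v[idx[(0, 0)]] = 1.0
    for _ in range(n):
        v = step(v)
    return v[idx[(3, 2)]]
```

Here `prob_city(4)` is exactly 0, since five favorable rolls cannot occur in four turns, and summing `1 - prob_city(n)` over `n` recovers the expected wait time.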
<hr />
<p>For a less direct approach, try asking the probability of you <em>not</em> having enough resources. This still is rather case heavy, but less so than more direct approaches.</p>
<p>Having too few wheat means you had zero wheat rolls or exactly one wheat roll in your <span class="math-container">$n$</span> rolls. That occurs with probability <span class="math-container">$(1-\frac{y}{36})^n + n(\frac{y}{36})(1-\frac{y}{36})^{n-1}$</span></p>
<p>Having too few ore means you had exactly zero, exactly one, or exactly two ore: <span class="math-container">$(1-\frac{x}{36})^n+n(\frac{x}{36})(1-\frac{x}{36})^{n-1}+\binom{n}{2}(\frac{x}{36})^2(1-\frac{x}{36})^{n-2}$</span></p>
<p>It is possible however that both of these conditions happened simultaneously and so we would need to find the probabilities of exactly no wheat and no ore, exactly one wheat and no ore, exactly no wheat and exactly one ore, exactly one wheat and one ore, etc... and subtract these away as per inclusion exclusion <span class="math-container">$|A\cup B|=|A|+|B|-|A\cap B|$</span>.</p>
<p>Again, this is tedious to write out so I leave it to you to complete.</p>
|
<p>I am trying to calculate the average number of turns it will take to win in Catan given a set of hexes.</p>
<p>I am stuck at calculating the probability of an event given n rolls.
Each roll uses two 6-sided dice. You get a resource if a specific number (the sum of the two dice) is rolled.</p>
<p>Say the probability of getting an <strong>ore</strong> on a dice roll is <strong>x/36</strong> and the probability of getting a <strong>wheat</strong> is <strong>y/36</strong>.
You can construct a city if you have accumulated <strong>3 ore</strong> and <strong>2 wheat</strong>.</p>
<p>What is the probability of being able to construct a city after n dice rolls, assuming no loss/discarding of resources?</p>
| Ishan Chaurasia | 1,007,050 | <p>If <span class="math-container">$n\leqslant 4$</span>, then clearly it's <span class="math-container">$0$</span>. Otherwise, note that
<span class="math-container">$$
\mathbb{P}(\text{exactly i ores and j wheat after n rolls})=\frac{n!}{i!j!(n-i-j)!}\left(\frac{x}{36}\right)^i\left(\frac{y}{36}\right)^j\left(1-\frac{x+y}{36}\right)^{n-i-j}
$$</span>
as you can have different orders of occurrence of the ores/wheat/neither in your <span class="math-container">$n$</span> dice rolls, each of which occurs with probability <span class="math-container">$\left(\frac{x}{36}\right)^i\left(\frac{y}{36}\right)^j\left(1-\frac{x+y}{36}\right)^{n-i-j}$</span>. Now consider when we will be unable to construct a city. If we have <span class="math-container">$0,1,2$</span> ores at the end, then regardless of the number of wheat we can't; if we have <span class="math-container">$3,4,\ldots,n-1$</span> ores, then we are unable only if we have <span class="math-container">$0,1$</span> wheat; if we have exactly <span class="math-container">$n$</span> ores, then we are again unable. Any other combination of wheat/ore amounts will be fine, so the desired probability becomes (unless I have missed something)
<span class="math-container">$$
1-\sum_{i=0}^{2}\sum_{j=0}^{n-i}\mathbb{P}(\text{i ores} \cap \text{j wheat})-\sum_{i=3}^{n-1}\sum_{j=0}^{1}\mathbb{P}(\text{i ores} \cap \text{j wheat})-\mathbb{P}\text{(n ores} \cap \text{0 wheat)}
$$</span>
which is possible to compute given <span class="math-container">$n,x,y$</span>.</p>
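The formula above can be written out directly (a sketch; $x$ and $y$ are the pip counts out of 36, set to $x=y=5$ only in the cross-check at the end):

```python
from math import comb

def p_city(n, x, y):
    """Probability of at least 3 ore and 2 wheat after n rolls (no discarding)."""
    if n <= 4:
        return 0.0
    po, pw, pn = x / 36, y / 36, (36 - x - y) / 36
    def tri(i, j):  # P(exactly i ore rolls and j wheat rolls among the n rolls)
        return comb(n, i) * comb(n - i, j) * po**i * pw**j * pn**(n - i - j)
    bad = sum(tri(i, j) for i in range(3) for j in range(n - i + 1))  # 0-2 ore, any wheat
    bad += sum(tri(i, j) for i in range(3, n) for j in range(2))      # 3+ ore, 0-1 wheat
    bad += tri(n, 0)                                                  # all n rolls are ore
    return 1.0 - bad

# Cross-check against direct enumeration of the success region for n = 6:
direct = sum(comb(6, i) * comb(6 - i, j) * (5/36)**i * (5/36)**j * (26/36)**(6 - i - j)
             for i in range(3, 7) for j in range(2, 7 - i))
assert abs(p_city(6, 5, 5) - direct) < 1e-12
```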
|
4,032,890 | <p>I am asked to calculate the winding number of an ellipse (it's clearly 1 but I need to calculate it)</p>
<p>I tried two different approaches but neither seems to work.</p>
<p>I would like to know why neither of them works (I believe it is because these formulas only work if the curve is parametrized by arc length).</p>
<p>Approach 1:</p>
<p>A valid parametrization : <span class="math-container">$\gamma=(a\cos t,b\sin t)$</span>, with <span class="math-container">$t \in [0,2\pi], \, a,b \in \mathbb{R}$</span></p>
<p><span class="math-container">$\dot{\gamma}(t)=(-a\sin t,b\cos t)$</span>, with <span class="math-container">$t \in [0,2\pi], \, a,b \in \mathbb{R}$</span></p>
<p><span class="math-container">$\ddot{\gamma}(t)=(-a\cos t,-b\sin t)$</span>, with <span class="math-container">$t \in [0,2\pi], \, a,b \in \mathbb{R}$</span></p>
<p><span class="math-container">$\det(\dot{\gamma}(t)|\ddot{\gamma}(t)) = \renewcommand\arraystretch{1.2}\begin{vmatrix}
-a\sin t & -a\cos t \\ b\cos t & -b\sin t \end{vmatrix}=ab \sin^2 t+ab \cos^2 t=ab$</span></p>
<p><span class="math-container">$||\dot{\gamma}(t)||^3=(\displaystyle\sqrt{(-a\sin t)^2+(b\cos t)^2})^3=(\displaystyle\sqrt{a^2\sin^2 t+b^2\cos^2 t})^3=a^3b^3$</span></p>
<p><span class="math-container">$\kappa(t)=\displaystyle\frac{ab}{a^3b^3}=\displaystyle\frac{1}{a^2b^2}$</span></p>
<p><span class="math-container">$\mathcal{K}_\gamma = \displaystyle\int_{0}^{2\pi} \displaystyle\frac{1}{a^2b^2} \ dt= \displaystyle\frac{2\pi}{a^2b^2}$</span>, <span class="math-container">$\mathcal{K}_\gamma$</span> is the total curvature of the curve.</p>
<p><span class="math-container">$i_\gamma=\displaystyle\frac{\displaystyle\frac{2\pi}{a^2b^2}}{2\pi}=\displaystyle\frac{1}{a^2b^2}$</span>...which is not necessarily 1.</p>
<p>Approach 2:</p>
<p>Winding # = <span class="math-container">$\displaystyle\frac{1}{2\pi}\displaystyle\int_{\gamma}\displaystyle\frac{-y}{x^2+y^2}\>dx+\displaystyle\frac{x}{x^2+y^2}\>dy$</span></p>
<p>That gives us <span class="math-container">$\displaystyle\frac{1}{2\pi}\displaystyle\int_{0}^{2\pi}\left( \displaystyle\frac{-b\sin t}{a^2\cos^2 t+b^2\sin^2 t}(-a\sin t)+\displaystyle\frac{a\cos t}{a^2\cos^2 t+b^2\sin^2 t}(b\cos t) \right)\>dt$</span></p>
<p><span class="math-container">$\displaystyle\frac{1}{2\pi}\displaystyle\int_{0}^{2\pi}\left( \displaystyle\frac{ab}{a^2\cos^2 t+b^2\sin^2 t }\right)\>dt$</span>, which I could not evaluate in closed form.</p>
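In fact this last definite integral equals $2\pi$ for every $a,b>0$ (e.g. via the substitution $u=\frac{b}{a}\tan t$ on each quarter-period), so the winding number is indeed 1; a quick numeric check:

```python
import math

# Rectangle-rule check that (1/2π) ∫₀^{2π} ab / (a²cos²t + b²sin²t) dt = 1
# for arbitrary a, b > 0 (the rule is spectrally accurate on smooth periodic
# integrands).
def winding(a, b, m=20000):
    dt = 2 * math.pi / m
    return sum(a * b / ((a * math.cos(i * dt)) ** 2 + (b * math.sin(i * dt)) ** 2)
               for i in range(m)) * dt / (2 * math.pi)

assert abs(winding(3.0, 1.0) - 1.0) < 1e-9
assert abs(winding(2.0, 7.0) - 1.0) < 1e-9
```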
<p>Clearly the second approach works if we are dealing with a circle of radius 1. We can generalize to the ellipse using Green's Theorem. I would also like it if someone could show me this way as well.</p>
<p>Thank you</p>
| roydiptajit | 780,430 | <p>From analyticity of <span class="math-container">$\sum_{n=1}^k \frac{1}{n^z}$</span>, we get that
<span class="math-container">$
\int_{\mathfrak{D}}\sum_{n=1}^k \frac{1}{n^z}=0
$</span>, by Goursat's Theorem. Now, from the locally uniform convergence of the partial sums <span class="math-container">$\sum_{n=1}^k \frac{1}{n^z}$</span> to <span class="math-container">$\zeta(z)$</span> in <span class="math-container">$\operatorname{Re}(z)>1$</span>, we get</p>
<p><span class="math-container">$$
\lim_{k\rightarrow\infty}\int_{\mathfrak{D}}\sum_{n=1}^k \frac{1}{n^z}=\int_{\mathfrak{D}}\zeta(z)=0
$$</span>
Therefore, <span class="math-container">$\zeta$</span> is analytic on this region, by Morera's Theorem.</p>
|
4,032,890 | <p>I am asked to calculate the winding number of an ellipse (it's clearly 1 but I need to calculate it)</p>
<p>I tried two different approaches but neither seems to work.</p>
<p>I would like to know why neither of them works (I believe it is because these formulas only work if the curve is parametrized by arc length).</p>
<p>Approach 1:</p>
<p>A valid parametrization : <span class="math-container">$\gamma=(a\cos t,b\sin t)$</span>, with <span class="math-container">$t \in [0,2\pi], \, a,b \in \mathbb{R}$</span></p>
<p><span class="math-container">$\dot{\gamma}(t)=(-a\sin t,b\cos t)$</span>, with <span class="math-container">$t \in [0,2\pi], \, a,b \in \mathbb{R}$</span></p>
<p><span class="math-container">$\ddot{\gamma}(t)=(-a\cos t,-b\sin t)$</span>, with <span class="math-container">$t \in [0,2\pi], \, a,b \in \mathbb{R}$</span></p>
<p><span class="math-container">$\det(\dot{\gamma}(t)|\ddot{\gamma}(t)) = \renewcommand\arraystretch{1.2}\begin{vmatrix}
-a\sin t & -a\cos t \\ b\cos t & -b\sin t \end{vmatrix}=ab \sin^2 t+ab \cos^2 t=ab$</span></p>
<p><span class="math-container">$||\dot{\gamma}(t)||^3=(\displaystyle\sqrt{(-a\sin t)^2+(b\cos t)^2})^3=(\displaystyle\sqrt{a^2\sin^2 t+b^2\cos^2 t})^3=a^3b^3$</span></p>
<p><span class="math-container">$\kappa(t)=\displaystyle\frac{ab}{a^3b^3}=\displaystyle\frac{1}{a^2b^2}$</span></p>
<p><span class="math-container">$\mathcal{K}_\gamma = \displaystyle\int_{0}^{2\pi} \displaystyle\frac{1}{a^2b^2} \ dt= \displaystyle\frac{2\pi}{a^2b^2}$</span>, <span class="math-container">$\mathcal{K}_\gamma$</span> is the total curvature of the curve.</p>
<p><span class="math-container">$i_\gamma=\displaystyle\frac{\displaystyle\frac{2\pi}{a^2b^2}}{2\pi}=\displaystyle\frac{1}{a^2b^2}$</span>...which is not necessarily 1.</p>
<p>Approach 2:</p>
<p>Winding # = <span class="math-container">$\displaystyle\frac{1}{2\pi}\displaystyle\int_{\gamma}\displaystyle\frac{-y}{x^2+y^2}\>dx+\displaystyle\frac{x}{x^2+y^2}\>dy$</span></p>
<p>That gives us <span class="math-container">$\displaystyle\frac{1}{2\pi}\displaystyle\int_{0}^{2\pi}\left( \displaystyle\frac{-b\sin t}{a^2\cos^2 t+b^2\sin^2 t}(-a\sin t)+\displaystyle\frac{a\cos t}{a^2\cos^2 t+b^2\sin^2 t}(b\cos t) \right)\>dt$</span></p>
<p><span class="math-container">$\displaystyle\frac{1}{2\pi}\displaystyle\int_{0}^{2\pi}\left( \displaystyle\frac{ab}{a^2\cos^2 t+b^2\sin^2 t }\right)\>dt$</span>, which I could not evaluate in closed form.</p>
<p>Clearly the second approach works if we are dealing with a circle of radius 1. We can generalize to the ellipse using Green's Theorem. I would also like it if someone could show me this way as well.</p>
<p>Thank you</p>
| reuns | 276,986 | <p>Otherwise it is immediate that for <span class="math-container">$\Re(c)> 1$</span> and <span class="math-container">$|s-c|<\Re(c)-1$</span>, by absolute convergence
<span class="math-container">$$\sum_{n\ge 1}n^{-s}=\sum_{n\ge 1}n^{-c}\sum_{k\ge 0}\frac{((c-s)\log n)^k}{k!}=\sum_{k\ge 0} (c-s)^k \sum_{n\ge 1}n^{-c} \frac{(\log n)^k}{k!} $$</span></p>
<p>We need the whole analytic continuation of <span class="math-container">$\zeta(s)$</span> and the Cauchy integral formula to prove that the latter series in fact converges for <span class="math-container">$|s-c|<|c-1|$</span>.</p>
|
1,382,479 | <p>I would like to know why $a^p \equiv a \pmod p$ is the same as $a^{p-1} \equiv 1 \pmod p$, and also how Fermat's little theorem can be used to derive Euler's theorem, or vice versa. </p>
<p>Please keep in mind that I have little background in math, and I am trying to understand these theorems to understand the math behind RSA encryption, so I would appreciate it if the explanation could be as simple as possible (i.e. not using too much mathematical notation). I have had trouble finding resources online to explain these theorems that explain it in simple enough terms that I can understand.</p>
| André Nicolas | 6,312 | <p><strong>If</strong> $a^{k-1}\equiv 1\pmod{m}$, <strong>then</strong> $a^k\equiv a\pmod{m}$. For $a^{k-1}\equiv 1\pmod{m}$ says that $m$ divides $a^{k-1}-1$. And if $m$ divides $a^{k-1}-1$, then $m$ divides $a(a^{k-1}-1)$, that is, $m$ divides $a^k-a$, and therefore $a^k\equiv a \pmod{m}$.</p>
<p>However, the converse does not necessarily hold. If $a^k\equiv a\pmod{m}$, we cannot necessarily conclude that $a^{k-1}\equiv 1\pmod{m}$. But everything goes well if $a$ and $m$ are <em>relatively prime</em>. For suppose that $m$ divides $a^k-a$. Then $m$ divides $a(a^{k-1}-1)$. If $m$ and $a$ are relatively prime, then we can conclude that $m$ divides $a^{k-1}-1$.</p>
<p>Even in the special case where $m$ is the prime $p$, and $k=p$, we cannot conclude from $a^{p}\equiv a \pmod{p}$ that $a^{p-1}\equiv 1\pmod{p}$. For let $p=7$ and $a=14$. Then $14^7\equiv 14\equiv 0\pmod{7}$, but $14^{6}$ is not congruent to $1$ modulo $7$. </p>
<p>For primes we can say that <strong>if</strong> $a^p\equiv a\pmod{p}$, <strong>and</strong> $a$ is not divisible by $p$, then $a^{p-1}\equiv 1\pmod{p}$.</p>
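Python's three-argument `pow` (modular exponentiation) makes the distinction easy to see:

```python
# Fermat's little theorem: a^p ≡ a (mod p) always holds, but
# a^(p-1) ≡ 1 (mod p) only when p does not divide a.
p, a = 7, 14
assert pow(a, p, p) == a % p      # 14^7 ≡ 14 ≡ 0 (mod 7)
assert pow(a, p - 1, p) != 1      # 14^6 ≡ 0, not 1, since 7 | 14

for b in range(1, p):             # for gcd(b, p) = 1 both forms hold
    assert pow(b, p, p) == b % p
    assert pow(b, p - 1, p) == 1
```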
|
4,487,494 | <blockquote>
<p><strong>Problem:</strong> Let <span class="math-container">$x$</span> and <span class="math-container">$y$</span> be non-zero vectors in <span class="math-container">$\mathbb{R}^n$</span>.<br>
(a) Suppose that <span class="math-container">$\|x+y\|=\|x−y\|$</span>. Show that <span class="math-container">$x$</span> and <span class="math-container">$y$</span> must be perpendicular.<br>
(b) Suppose that <span class="math-container">$x+y$</span> and <span class="math-container">$x−y$</span> are non-zero and perpendicular. Show that <span class="math-container">$x$</span>
and <span class="math-container">$y$</span> must have the same length.</p>
</blockquote>
<p><strong>Attempt</strong>:</p>
<p>(a)<span class="math-container">\begin{align*}\|x+y\|^2 & =\|x-y\|^2 \\
(x+y)\cdot (x+y) & =(x-y)\cdot (x-y) \\
\|x\|^2+\|y\|^2+2x\cdot y & =\|x\|^2+\|y\|^2-2x\cdot y \\
2x\cdot y & =-2x\cdot y \\
x\cdot y & =0.
\end{align*}</span>(b)<span class="math-container">\begin{align*}(x+y)\cdot (x-y) & =0 \\
\|x\|^2-x\cdot y+x\cdot y-\|y\|^2 & =0 \\
\|x\|^2-\|y\|^2 & =0.
\end{align*}</span>Are these attempts correct?</p>
| Deepak | 151,732 | <p>a) is fine, well done.</p>
<p>For b) you're given that <span class="math-container">$x+y$</span> and <span class="math-container">$x-y$</span> are perpendicular, which implies <span class="math-container">$(x+y) \cdot (x-y) = 0$</span></p>
<p>Now expand and apply the distributive and commutative properties of the dot product. Your end goal is to reach <span class="math-container">$\|x\| = \|y\|$</span></p>
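Both parts can be spot-checked numerically (the sample vectors are arbitrary):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(dot(u, u))

# (a) x ⊥ y  ⇔  ||x + y|| = ||x − y||
x, y = (3.0, 0.0, 4.0), (4.0, 1.0, -3.0)     # dot(x, y) = 12 + 0 − 12 = 0
add = tuple(a + b for a, b in zip(x, y))
sub = tuple(a - b for a, b in zip(x, y))
assert dot(x, y) == 0
assert abs(norm(add) - norm(sub)) < 1e-12

# (b) (x + y) ⊥ (x − y)  ⇔  ||x|| = ||y||
x, y = (1.0, 2.0, 2.0), (2.0, 2.0, -1.0)     # both have length 3
add = tuple(a + b for a, b in zip(x, y))
sub = tuple(a - b for a, b in zip(x, y))
assert dot(add, sub) == 0
assert norm(x) == norm(y) == 3.0
```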
|
549,159 | <p>How to simplify this:</p>
<p>$$(5-\sqrt{3}) \sqrt{7+\frac{5\sqrt{3}}{2}}$$</p>
<p>I don't know how to simplify it to 11.</p>
<p>Thanks in advance!</p>
| Olivier | 45,622 | <p>Try to apply the rules you know for working with roots: $\sqrt{a} \cdot \sqrt{b} = \sqrt{a \cdot b}$ and $\sqrt{a} / \sqrt{b} = \sqrt{\frac{a}{b}}$ and $\left(\sqrt{a}\right)^n = \sqrt{a^n}$.</p>
|
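The rules above suggest squaring the expression: $(5-\sqrt3)^2\left(7+\frac{5\sqrt3}{2}\right)=(28-10\sqrt3)\left(7+\frac{5\sqrt3}{2}\right)=196-75=121=11^2$. A numeric confirmation:

```python
import math

# (5 − √3) · √(7 + 5√3/2) should equal 11, since its square is 121.
val = (5 - math.sqrt(3)) * math.sqrt(7 + 5 * math.sqrt(3) / 2)
assert abs(val - 11) < 1e-9
```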
5,253 | <p>I have an acyclic digraph that I would like to draw in a pleasing way, but I am having trouble finding a suitable algorithm that fits my special case. My problem is that I want to fix the x-coordinate of each vertex (with some vertices having the same x-coordinate), and only vary the y. My aesthetic criteria are (in order of importance):</p>
<ol>
<li>Ensure no two vertices are too close together</li>
<li>Minimize edge crossings and near misses</li>
<li>Make a reasonable use of the entire drawing space</li>
</ol>
<p>I have tried several (modified) force-directed algorithms, but they haven't met my expectations on at least one of these - usually too many edge crossings.</p>
<p>Has anyone come across a problem like this, or can you point me to some good papers that deal with restrictions like this?</p>
| Jay Kominek | 178 | <p>The <a href="http://www.graphviz.org/Documentation.php" rel="nofollow">documentation for GraphViz</a> (a software package that does this sort of thing) has a number of papers on the subject included.</p>
|
2,573,572 | <p>Here is the expression to take the derivative of.
$$C = \frac{1}{2}\sum_j (y_j - a_j^L)^2$$</p>
<p>Here is the result.
$$\frac{\partial C}{\partial a_j^L} = (a_j^L-y_j)$$</p>
<p>Multiplying by 2, then again by the derivative of the inside (-1) seems reasonable, but what happened to the summation?</p>
| Ethan Bolker | 72,858 | <p>The notation is more than a little confusing, but I think this is just computing the derivative with respect to a variable that appears in just one of the summands. See what it says with $j=1$ in the answer, leaving the dummy index $j$ in the summation. Or change the dummy $j$ to $i$.</p>
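A quick numeric check of this (central differences on arbitrary sample vectors) confirms that only the one summand survives, and that the $\frac12$ cancels the factor of 2 so the derivative is $a_j - y_j$:

```python
# Central-difference check that ∂C/∂a_j = a_j − y_j for C = ½ Σ_j (y_j − a_j)²:
# only the j-th summand depends on a_j, so the summation disappears.
def C(a, y):
    return 0.5 * sum((yj - aj) ** 2 for yj, aj in zip(y, a))

a = [0.3, -1.2, 2.0]
y = [1.0, 0.5, -0.7]
h = 1e-6
for j in range(3):
    ap, am = a[:], a[:]
    ap[j] += h
    am[j] -= h
    numeric = (C(ap, y) - C(am, y)) / (2 * h)
    assert abs(numeric - (a[j] - y[j])) < 1e-8
```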
|
<p>It starts with someone asking, as an exercise, whether the negation of</p>
<pre><code>2 is a rational number
</code></pre>
<p>is</p>
<pre><code>2 is an irrational number
</code></pre>
<p>Their argument is that they consider it incorrect if we include complex numbers, because the negation allows 2 to be complex rather than irrational. </p>
<p>My argument is that because the first proposition is always true (2 could not be anything but a rational number), any false sentence is a negation of the first proposition, including the given sentence above. They said what I do is not "negation" in their sense.</p>
<p>So my question is: is it true that every false statement is a negation of a true statement? </p>
| Empy2 | 81,790 | <p>$$f(x)+\log_2(1+x)-x=f(x^2)\quad(0<x<1)\\
f(\exp(y))+\log_2(1+\exp(y))-\exp(y)=f(\exp(2y))\quad(-\infty<y<0)\\
g(y)+\log_2(1+\exp(y))-\exp(y)=g(2y)\quad(-\infty<y<0)\\
g(-2^z)+\log_2(1+\exp(-2^z))-\exp(-2^z)=g(-2^{z+1})\quad(-\infty<z<\infty)\\
h(z)+\log_2(1+\exp(-2^z))-\exp(-2^z)=h(z+1)\quad(-\infty<z<\infty)\\
f(1)-f(0)=\int_{-\infty}^{\infty}dh=\int_{-\infty}^\infty\log_2(1+\exp(-2^z))-\exp(-2^z)\,dz\\
\approx -0.332746$$
By change of variable, it becomes
$$\int_0^1 \frac{x-\log_2(1+x)}{x\log x\log 2}dx$$
which the Inverse Symbolic Calculator gives as
$$\frac12-\frac\gamma{\ln2}\approx -0.3327461772769$$
As pointed out by Somos, I took an approximation when I replaced
$\sum h(z+1)-h(z)$ by $\int dh$. It seems to have variation in the sixth decimal place as $x$ varies from $x_0$ to $x_0^2$.<br>
<a href="https://i.stack.imgur.com/5pVC1.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5pVC1.jpg" alt="enter image description here"></a></p>
|
751,670 | <p>I've just started to learn about Cohen-Macaulay rings. I want to show that the following rings are Cohen-Macaulay:</p>
<p>$k[X,Y,Z]/(XY-Z)$ and $k[X,Y,Z,W]/(XY-ZW)$.</p>
<p>Also I am looking for a ring which is not Cohen-Macaulay.</p>
<p>Can anyone help me?</p>
| Georges Elencwajg | 3,217 | <p>a) The ring $k[X,Y,Z]/(XY-Z)$ is isomorphic to the ring $k[X,Y]$, which is regular.<br>
Since a regular ring is Cohen-Macaulay, the original ring $k[X,Y,Z]/(XY-Z)$ is Cohen-Macaulay. </p>
<p>b) The ring $k[X,Y,Z,W]/(XY-ZW)$ is a complete intersection ring and is consequently Cohen-Macaulay. [By the way, this argument also applies to the ring in a)] </p>
<p>c) The ring $R=k[x,y]:=k[X,Y]/(X^2, XY)$ (the usual suspect !) is not Cohen-Macaulay because its localization $R_\mathfrak m=k[x,y]_\mathfrak m$ at the maximal ideal $\mathfrak m=(x,y)\subset R$ is not Cohen-Macaulay:<br>
Indeed every element $f\in \mathfrak m R_\mathfrak m=(x,y)$ is a zero divisor since $x\neq0$ but $fx=0$ (recall that $x^2=xy=0$).<br>
So there are no regular sequences at all in the local ring $R_\mathfrak m$, which consequently is not Cohen-Macaulay. </p>
<p>c') Another, stunningly geometric, example of a non Cohen-Macaulay ring is the ring $A=k[x,y,z]:=\Gamma(V,\mathcal O_V)$ of global functions on the closed algebraic subset $V\subset \mathbb A^3$ consisting of the union of a plane and a transverse line, say $A=k[X,Y,Z]/(ZX,ZY)$.<br>
The localization $A_\mathfrak m$ of $A$ at the maximal ideal $\mathfrak m=(x,y,z)$ (corresponding to the singularity of $V$) is obviously not equidimensional and thus $A_\mathfrak m$ is not Cohen-Macaulay.<br>
This example is lifted from <a href="http://books.google.fr/books/about/Commutative_Algebra.html?id=Fm_yPgZBucMC&redir_esc=y">Eisenbud</a>, Corollary 18.11 .</p>
|
1,170,088 | <blockquote>
<p>In a group of $10$ people, $60\%$ have brown eyes. Two people are to be selected at random from the group. What is the probability that neither person selected will
have brown eyes?</p>
</blockquote>
<p>How do I do this problem? $6$ people have brown eyes and $4$ people don't. </p>
<p>The possibility of people not having brown eyes is:</p>
<p>$$4 * 3 * 2 * 1 = 24$$ </p>
<p>What to do?</p>
| heropup | 118,193 | <p>Hint: two <em>different</em> people are selected from the group of ten. In how many ways can you select those two people from the four who do not have brown eyes? In other words, suppose the group is labeled</p>
<p>$$\{B1, B2, B3, B4, B5, B6, N1, N2, N3, N4\}.$$ Then how many ways can you choose two different people from the $N$ subgroup?</p>
<p>Next, how many ways can you choose two different people from the <em>entire</em> group?</p>
|
1,170,088 | <blockquote>
<p>In a group of $10$ people, $60\%$ have brown eyes. Two people are to be selected at random from the group. What is the probability that neither person selected will
have brown eyes?</p>
</blockquote>
<p>How do I do this problem? $6$ people have brown eyes and $4$ people don't. </p>
<p>The possibility of people not having brown eyes is:</p>
<p>$$4 * 3 * 2 * 1 = 24$$ </p>
<p>What to do?</p>
| robjohn | 13,854 | <p>On the first pick, there is $\frac4{10}$ chance that the person does not have brown eyes. On the second pick, after picking a person without brown eyes, there is $\frac39$ chance that the person does not have brown eyes.
$$
\frac4{10}\cdot\frac39=\frac2{15}
$$</p>
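Both routes — the sequential picks above and the pair-counting suggested in the other answer — can be confirmed with exact arithmetic; a quick Python sketch:

```python
from fractions import Fraction
from math import comb

# sequential picks without replacement: 4/10, then 3/9
sequential = Fraction(4, 10) * Fraction(3, 9)

# counting: choose 2 of the 4 non-brown-eyed people over all pairs of 10
pairs = Fraction(comb(4, 2), comb(10, 2))

assert sequential == pairs == Fraction(2, 15)
```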
|
2,203,066 | <p>The definition I have is the following:</p>
<blockquote>
<p>A vector space V is said to be <strong>finite-dimensional</strong> if there is a finite set of vectors in V that spans V and is said to be <strong>infinite-dimensional</strong> if no such set exists.</p>
</blockquote>
<p>However, with this definition I can't determine whether the vector space $\mathbb{R}^3$ is finite-dimensional or infinite-dimensional (I am assuming that it is finite since the dimension of $\mathbb{R}^3$ is $3$)</p>
<p>Going with my thought process, though, I know that $(1,0,0),(0,1,0),(0,0,1)$ spans $\mathbb{R}^3$. However we can also check that $(2,0,0),(0,2,0),(0,0,2)$ spans $\mathbb{R}^3$. Also note that $(3,0,0),(0,3,0),(0,0,3)$ spans $\mathbb{R}^3$. This process could be continued over and over to show that there are infinitely many vectors that span $\mathbb{R}^3$. </p>
<p>Wouldn't this mean that $\mathbb{R}^3$ is infinite-dimensional? Because there isn't a finite number of vectors that span $\mathbb{R}^3$. (Again I want to say this isn't the case and that there is something I am overlooking.) </p>
| John Kontol | 412,568 | <p>The dimension of a vector space <strong>V</strong> is the cardinality of a basis of <strong>V</strong> over its base field. By a basis we mean a set <em>B</em> of vectors $b_i$, spanning the entire space, that is also linearly independent. If this basis is finite, the space is called finite-dimensional. Likewise for an infinite set. </p>
|
2,136,791 | <p>I got a following minimization problem</p>
<p>$$\min_{\mathbf{X}^{(1)}, \, \mathbf{X}^{(2)}} \;\left\| \mathbf{B} - \mathbf{A} (\mathbf{X}^{(1)} \odot \mathbf{X}^{(2)}) \right\|^{2}_{F},$$</p>
<p>where the matrices $\mathbf{B}\in \mathbb{R}^{100 \times 3}$, $\mathbf{A}\in \mathbb{R}^{100\times 36}$, $\mathbf{X}^{(1)}\in \mathbb{R}^{9 \times 3}$ and $\mathbf{X}^{(2)}\in \mathbb{R}^{4 \times 3}$. The operation $\odot$ refers to the <a href="https://en.wikipedia.org/wiki/Kronecker_product#Khatri.E2.80.93Rao_product" rel="nofollow noreferrer">Khatri-rao product</a>.</p>
<p>Given matrices $\mathbf{A}$ and $\mathbf{B}$, my problem is to find out matrices $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ such that</p>
<p>$$\mathbb{f} = \left\| \mathbf{B} - \mathbf{A} (\mathbf{X}^{(1)} \odot \mathbf{X}^{(2)}) \right\|^{2}_{F}$$</p>
<p>is minimized.</p>
<p>My idea is to compute the gradient of $\mathbb{f}$ with respect to $\mathbf{X}^{(1)}$ and $\mathbf{X}^{(2)}$ respectively.</p>
<p>My question is, how to do differentiation with respect to a matrix? I have consulted a <a href="http://www4.ncsu.edu/~pfackler/MatCalc.pdf" rel="nofollow noreferrer">reference</a> but the situation seems different because it involves a Khatri-rao product in $\mathbb{f}$. Thanks in advance.</p>
<p>That is, how do I compute $\dfrac{\partial f}{\partial \mathbf{X}^{(1)}}$ and $\dfrac{\partial f}{\partial \mathbf{X}^{(2)}}$?</p>
| Florian | 185,854 | <p>I don't think there is a nice closed-form expression for $\frac{\partial f}{\partial \mathbf X^{(i)}}$, however, I can tell you how you can get to $\frac{\partial f}{\partial \mathbf x^{(i)}}$, where $\mathbf x^{(i)} = {\rm vec}\{\mathbf X^{(i)}\}$. From there, the desired $\frac{\partial f}{\partial \mathbf X^{(i)}}$ is just a reshape.</p>
<p>Essentially, it relies on five ingredients:</p>
<ol>
<li>$\|\mathbf X\|_{\rm F}^2 = \|{\rm vec}\{\mathbf X\}\|_2^2$</li>
<li>${\rm vec}\{\mathbf A \mathbf X \mathbf B\} = (\mathbf B^{\rm T} \otimes \mathbf A)\cdot {\rm vec}\{\mathbf X\}$ where $\otimes$ is the Kronecker product (which implies ${\rm vec}\{\mathbf A \mathbf X \} = (\mathbf I \otimes \mathbf A)\cdot {\rm vec}\{\mathbf X\}$).</li>
<li>${\rm vec}\{\mathbf X_1 \odot \mathbf X_2\} = ([\mathbf I_N \odot \mathbf X_1] \otimes \mathbf I_P)\cdot{\rm vec}\{\mathbf X_2\}$ where $\mathbf X_1$ is $M \times N$ and $\mathbf X_2$ is $P \times N$</li>
<li>${\rm vec}\{\mathbf X_1 \odot \mathbf X_2\} = [\mathbf I_{MN} \odot (\mathbf X_2\cdot [\mathbf I_N \otimes \mathbf 1_{1\times M}])]\cdot{\rm vec}\{\mathbf X_1\}$ where $\mathbf X_1$ is $M \times N$ and $\mathbf X_2$ is $P \times N$</li>
<li>$\frac{\partial \|\mathbf b - \mathbf{A}\cdot\mathbf{x}\|_2^2}{\partial \mathbf{x}} = 2\mathbf{A}^{\rm T}(\mathbf{A}\cdot\mathbf{x}-\mathbf b)$</li>
</ol>
<p>For the proofs:</p>
<ol>
<li>is trivial, both are the sum of all elements squared.</li>
<li>can be found in many textbooks and is not hard to show either, cf., e.g., [*]</li>
<li>when I needed it I didn't find a proof anywhere so I proved it myself. If you don't mind, I'd like to give my <a href="http://theses.eurasip.org/theses/473/advanced-algebraic-concepts-for-efficient-multi/download/" rel="nofollow noreferrer">dissertation</a> as a reference where it is Proposition 3.1.2, the proof is in Appendix B.2. There might be textbooks that contain something similar.</li>
<li>see 3., covered by the same proposition. This may not be the shortest form but it works.</li>
<li>is again straightforward: $\|\mathbf b - \mathbf A \mathbf x\|_2^2 = (\mathbf b^{\rm T} - \mathbf x^{\rm T} \mathbf{A}^T) (\mathbf b - \mathbf{A} \mathbf{x})$, then use the product rule.</li>
</ol>
<p>Now let us put everything together:</p>
<p>$$
f = \left\|\mathbf B - \mathbf A \left( \mathbf X^{(1)} \odot \mathbf X^{(2)} \right) \right\|_{\rm F}^2 \\
= \left\| \mathbf b - \left(\mathbf I_3 \otimes \mathbf{A}\right) {\rm vec}\{\mathbf X^{(1)} \odot \mathbf X^{(2)}\} \right\|_2^2 \\
= \left\| \mathbf b - \mathbf C_1 \cdot{\rm vec}\{\mathbf X_1\} \right\|_2^2 \\
= \left\| \mathbf b - \mathbf C_2 \cdot{\rm vec}\{\mathbf X_2\} \right\|_2^2 \\
$$</p>
<p>where $\mathbf b = {\rm vec}\{\mathbf B\}$, $\mathbf C_1 = \left(\mathbf I_3 \otimes \mathbf{A}\right) \cdot [\mathbf I_{27} \odot (\mathbf X_2\cdot [\mathbf I_3 \otimes \mathbf 1_{1\times 9}])]$ and $\mathbf C_2 = \left(\mathbf I_3 \otimes \mathbf{A}\right) \cdot([\mathbf I_3 \odot \mathbf X_1] \otimes \mathbf I_4)$. The first step uses 1.+2., the third and fourth line use 3. and 4., respectively. </p>
<p>Finally, using 5. we then have</p>
<p>$$ \frac{\partial f}{\partial \mathbf x^{(i)}} = 2 \mathbf C_i^{\rm T}(\mathbf C_i\mathbf x^{(i)} - \mathbf b)$$</p>
<p>for $i=1, 2$ with $\mathbf C_i$ given above. From this expression, the matrix derivative $\frac{\partial f}{\partial \mathbf X^{(i)}}$ you wanted is given by reshaping $\frac{\partial f}{\partial \mathbf x^{(i)}}$ back into a matrix of appropriate size.</p>
<p>[*] H. Neudecker, “Some theorems on matrix differentiation with special reference to Kronecker matrix products,” Journal of the American Statistical Association, vol. 64, pp. 953–963, 1969.</p>
<p><strong>edit</strong>: Since I did a quick test in Matlab to verify it, I thought I might share this bit as well (it uses a function <code>krp</code> to calculate the Khatri-Rao product):</p>
<pre><code>B = randn(100,3);
A = randn(100,36);
X1 = randn(9,3);
X2 = randn(4,3);
f = norm(B - A*krp(X1,X2),'fro')^2;
C1 = kron(eye(3),A)*krp(eye(27),X2*kron(eye(3),ones(1,9)));
C2 = kron(eye(3),A)*kron(krp(eye(3),X1),eye(4));
disp(f - norm(B(:)-C1*X1(:))^2) % it is zero
disp(f - norm(B(:)-C2*X2(:))^2) % it is zero
</code></pre>
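For readers without Matlab, here is a rough NumPy transcription of the same check; `krp` and `vec` are hand-rolled helpers assumed to match Matlab's column-wise Khatri-Rao product and column-major `(:)` vectorization:

```python
import numpy as np

def krp(A, B):
    # column-wise Kronecker (Khatri-Rao) product: column j is kron(A[:, j], B[:, j])
    return np.column_stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])])

def vec(M):
    # column-major vectorization, matching Matlab's M(:)
    return np.asarray(M).flatten(order='F')

rng = np.random.default_rng(0)
B = rng.standard_normal((100, 3))
A = rng.standard_normal((100, 36))
X1 = rng.standard_normal((9, 3))
X2 = rng.standard_normal((4, 3))

f = np.linalg.norm(B - A @ krp(X1, X2), 'fro') ** 2
C1 = np.kron(np.eye(3), A) @ krp(np.eye(27), X2 @ np.kron(np.eye(3), np.ones((1, 9))))
C2 = np.kron(np.eye(3), A) @ np.kron(krp(np.eye(3), X1), np.eye(4))

assert np.isclose(f, np.linalg.norm(vec(B) - C1 @ vec(X1)) ** 2)  # identity 4
assert np.isclose(f, np.linalg.norm(vec(B) - C2 @ vec(X2)) ** 2)  # identity 3
```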
|
632,043 | <p>tl;dr: why is raising by $(p-1)/2$ not always equal to $1$ in $\mathbb{Z}^*_p$?</p>
<p>I was studying the proof of why generators do not have quadratic residues and I stumbled in one step on the proof that I thought might be a good question that might help other people in the future when raising powers modulo $p$.</p>
<p>Let $p$ be prime and as usual, $\mathbb{Z}^*_p$ be the integers mod $p$ with inverses.</p>
<p>Consider raising the generator $g$ to the power of $(p-1)/2$:</p>
<p>$$g^{(p-1)/2}$$</p>
<p>then, I was looking for a somewhat rigorous argument (or very good intuition) on why that was <strong>not always</strong> equal to $1$ by fermat's little theorem (when I say always, I mean, even when you do NOT assume the generator has a quadratic residue).</p>
<p>i.e. why is this logic flawed:</p>
<p>$$ g^{(p-1)/2} = (g^{(p-1)})^{\frac{1}{2}} = (1)^{\frac{1}{2}} \ (mod \ p)$$</p>
<p>to solve the last step find an x such that $1 = x \ (mod \ p)$. $x$ is obviously $1$, which completes the wrong proof that raising anything to $(p-1)/2$ is always equal to $1$. This obviously should not be the case, specially for a generator since the only power that should yield $1$ for a generator is $p-1$, otherwise, it can't generate one of the elements in the cyclic set. </p>
<p>The reason that I thought that this was illegal was because you can only raise to powers of integers mod $p$ and $1/2$ is obviously not valid (since its not an integer). Also, if I recall correctly, not every number in a set has a k-th root, right? And $1/2$ actually just means square rooting...right? Also, maybe it was a notational confusion where to the power of $1/2$ actually just means a function/algorithm that "finds" the inverse such that $z = x^2 \ (mod \ p)$. So is the illegal step claiming that you can separate the powers because at that step, you would be raising to the power of an element not allowed in the set?</p>
| Stefan4024 | 67,746 | <p>Taking roots in modular arithmetics doesn't work.</p>
<p>For example check this:</p>
<p>$9 \equiv 4 \pmod 5$, but taking square roots would give $3 \equiv 2 \pmod 5$, which does not hold.</p>
<p>Now to the problem. If $(g,p) = 1$, then </p>
<p>$$g^{\frac{p-1}{2}} \equiv 1 \pmod p \text { or } g^{\frac{p-1}{2}} \equiv -1 \pmod p$$</p>
<p>This is because:</p>
<p>$$g^{p-1} \equiv 1 \pmod p$$
$$g^{p-1} - 1\equiv 0 \pmod p$$
$$(g^{\frac{p-1}{2}} - 1)(g^{\frac{p-1}{2}} + 1) \equiv 0 \pmod p$$</p>
<p>Obviously only one of the factors can be 0 modulo $p$.</p>
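The dichotomy $g^{(p-1)/2}\equiv\pm1 \pmod p$ is easy to confirm by brute force; a Python sketch (the prime $p=101$ is an arbitrary choice):

```python
p = 101  # any odd prime works here
halves = [pow(g, (p - 1) // 2, p) for g in range(1, p)]

# every value is +1 or -1 modulo p ...
assert all(r in (1, p - 1) for r in halves)

# ... and +1 occurs for exactly half of the nonzero residues (the squares),
# so a generator, which is not a square, must give -1 rather than +1
assert halves.count(1) == (p - 1) // 2
```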
|
1,090,620 | <p>I don't know how to solve this limit</p>
<p>$$ \lim_{y\to0} \frac{x e^ { \frac{-x^2}{y^2}}}{y^2}$$</p>
<p>$\frac{1}{e^ { \frac{x^2}{y^2}}} \to 0$</p>
<p>but $\frac{x}{y^2} \to +\infty$</p>
<p>This limit presents the indeterminate form $0\cdot\infty$?</p>
| Siminore | 29,672 | <p>The limit is a particular case of the limit
$$
\lim_{u \to +\infty} u e^{-\beta u}, \tag{1}
$$
with $\beta >0$. Indeed, just define $u=y^{-2} \to +\infty$ as $y \to 0$. Rewrite (1) as
$$
\lim_{u \to +\infty} \frac{u}{e^{\beta u}}
$$
and apply De l'Hospital's theorem.</p>
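Numerically the exponential decay dominates, as a quick Python sketch with the arbitrary choice $x=2$ shows:

```python
import math

def g(x, y):
    # the original expression x * exp(-x^2 / y^2) / y^2
    return x * math.exp(-(x * x) / (y * y)) / (y * y)

x = 2.0
vals = [g(x, y) for y in (0.5, 0.2, 0.1, 0.05)]

# the exponential crushes the 1/y^2 growth: values shrink rapidly toward 0
assert all(b < a for a, b in zip(vals, vals[1:]))
assert vals[-1] < 1e-100
```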
|
1,396,067 | <p><strong>Question:</strong><br/>
The bacteria in a certain culture double every $7.3$ hours. The culture has $7,500$ bacteria at the start.
How many bacteria will the culture contain after $3$ hours?
<br />
<br />
<strong>Possible Answers:</strong><br/>
a. $9,449$ bacteria<br/>
b. $9,972$ bacteria<br/>
c. $40,510$ bacteria<br/>
d. $8,247$ bacteria<br/>
<br/>
<br/>I got this answer right on my quiz but I want to be sure that I can do it again. Please help me with setting up this problem and getting the correct answer.</p>
| Salvatore | 261,037 | <p>The generator of $m\mathbb Z \cap n\mathbb Z$ is the l.c.m. of $m$ and $n$. Intuitively, take $3\mathbb Z \cap 2\mathbb Z$: it means that you are looking for the smallest common multiple of $2$ and $3$, which is $6$.</p>
|
2,250,733 | <p>I have a general idea to solve the problem, which is to pair up 2s and 5s in the numerator and denominator, cancel those that are common, and the remaining pairs of 2s and 5s are the number of 0s left. Since 130 choose 70 is so large, how do I do this?</p>
| N. F. Taussig | 173,070 | <p>By definition,
$$\binom{n}{k} = \frac{n!}{k!(n - k)!}$$
Hence,
$$\binom{130}{70} = \frac{130!}{70!60!}$$
A number ends in zero if it is divisible by $10 = 2 \cdot 5$. Every even number is divisible by at least one factor of $2$, so the number of zeros with which $n!$ ends is determined by the number of factors of $5$ that divide it. We can calculate the number of factors of $5$ that divide $n!$ by using the formula
$$\left\lfloor \frac{n}{5} \right\rfloor + \left\lfloor \frac{n}{5^2} \right\rfloor + \left\lfloor \frac{n}{5^3} \right\rfloor + \cdots$$
The first term counts the number of factors of $n!$ that are divisible by $5$; the second term counts the number of factors of $n!$ that are divisible by $5^2 = 25$; the third term counts the number of factors of $n!$ that are divisible by $5^3$; and so forth. Consequently, the formula counts each factor of $n!$ that contributes exactly one factor of $5$ once, each factor of $n!$ that contributes exactly two factors of $5$ twice, and so forth.</p>
<p>The number of factors of $5$ that divide $130!$ is
$$\left\lfloor \frac{130}{5} \right\rfloor + \left\lfloor \frac{130}{5^2} \right\rfloor + \left\lfloor \frac{130}{5^3} \right\rfloor + \cdots = 26 + 5 + 1 + 0 + 0 + \cdots = 32$$
Since
$$\left\lfloor \frac{70}{5} \right\rfloor + \left\lfloor \frac{70}{5^2} \right\rfloor + \left\lfloor \frac{70}{5^3} \right\rfloor + \cdots = 14 + 2 + 0 + 0 + \cdots = 16$$
and
$$\left\lfloor \frac{60}{5} \right\rfloor + \left\lfloor \frac{60}{5^2} \right\rfloor + \left\lfloor \frac{60}{5^3} \right\rfloor + \cdots = 12 + 2 + 0 + 0 + \cdots = 14$$
the denominator is divisible by $16 + 14 = 30$ factors of $5$. Therefore,
the quotient $\frac{130!}{70!60!}$ is divisible by $32 - 30 = 2$ factors of $5$, which suggests that $\binom{130}{70}$ ends in two zeros. </p>
<p>As @DavidK has pointed out in the comments, this is true provided that there are not more factors of $2$ in the denominator than there are in the numerator since each such factor would reduce a factor of $10$ in the numerator to a factor of $5$, thereby reducing the number of zeros at the end of the number. </p>
<p>The number of factors of $2$ that divide the numerator is $$\left\lfloor \frac{130}{2} \right\rfloor + \left\lfloor \frac{130}{2^2} \right\rfloor + \left\lfloor \frac{130}{2^3} \right\rfloor + \left\lfloor \frac{130}{2^4} \right\rfloor + \left\lfloor \frac{130}{2^5} \right\rfloor + \left\lfloor \frac{130}{2^6} \right\rfloor + \left\lfloor \frac{130}{2^7} \right\rfloor + \cdots\\ = 65 + 32 + 16 + 8 + 4 + 2 + 1 + 0 + 0 + \cdots = 128$$
The number of factors of $2$ that divide $70!$ is
$$\left\lfloor \frac{70}{2} \right\rfloor + \left\lfloor \frac{70}{2^2} \right\rfloor + \left\lfloor \frac{70}{2^3} \right\rfloor + \left\lfloor \frac{70}{2^4} \right\rfloor + \left\lfloor \frac{70}{2^5} \right\rfloor + \left\lfloor \frac{70}{2^6} \right\rfloor + \left\lfloor \frac{70}{2^7} \right\rfloor + \cdots\\ = 35 + 17 + 8 + 4 + 2 + 1 + 0 + 0 + \cdots = 67$$
and the number of factors of $2$ that divide $60!$ is
$$\left\lfloor \frac{60}{2} \right\rfloor + \left\lfloor \frac{60}{2^2} \right\rfloor + \left\lfloor \frac{60}{2^3} \right\rfloor + \left\lfloor \frac{60}{2^4} \right\rfloor + \left\lfloor \frac{60}{2^5} \right\rfloor + \left\lfloor \frac{60}{2^6} \right\rfloor + \left\lfloor \frac{60}{2^7} \right\rfloor + \cdots\\ = 30 + 15 + 7 + 3 + 1 + 0 + 0 + 0 + \cdots = 56$$
Thus, there are $67 + 56 = 123$ factors of $2$ in the denominator. Since there are fewer factors of $2$ in the denominator than the numerator, the number of zeros at the end of $\binom{130}{70}$ is, in fact, $2$.</p>
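All of these counts, and the conclusion, can be verified directly in Python with Legendre's formula:

```python
from math import comb

def legendre(n, p):
    # exponent of the prime p in n! (Legendre's formula)
    s = 0
    while n:
        n //= p
        s += n
    return s

fives = legendre(130, 5) - legendre(70, 5) - legendre(60, 5)
twos = legendre(130, 2) - legendre(70, 2) - legendre(60, 2)
assert (fives, twos) == (2, 5)  # enough surviving 2s to pair every surviving 5

s = str(comb(130, 70))
assert len(s) - len(s.rstrip('0')) == 2  # C(130, 70) ends in exactly two zeros
```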
|
2,222,966 | <p>Given the three line segments below, of lengths a, b and 1, respectively:<a href="https://i.stack.imgur.com/HWoz8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HWoz8.jpg" alt="enter image description here"></a></p>
<p>construct the following length using a compass and ruler: $$\frac{1}{\sqrt{b+\sqrt{a}}} \ \ \text{and} \ \ \ \sqrt[4]{a} $$</p>
<p>Make sure to draw the appropriate diagram(s) and describe your process in words. We are also to use the following axioms and state where they are used:</p>
<ol>
<li>Any two points can be connected by a line segment, </li>
<li>Any line segment can be extended to a line, </li>
<li>Any point and a line segment define a circle, </li>
<li>Points are born as intersection of lines, circles and lines and circles</li>
</ol>
<p>Can someone please guide me or show me as to how to construct this? I know if we draw a triangle whose base (let's suppose this is $a+1$) is the diameter of a semi-circle, then the line perpendicular to this base leading to the top of the semi-circle will divide the triangle into two smaller triangles with the bases resulting in $a$ and $1$. I don't know how to end up with $\sqrt{a}$ from there. But with it, the process can be repeated to end up with $\sqrt[4]{a}$. Can someone explain or show me? I will then be able to tackle a whole lot of other questions.</p>
| fleablood | 280,126 | <p>The triangles you described will have AB=1. BC=a AC=a+1. and BD a perpendicular altitude of unknown height. Triangle ABD is similar to triangle DBC.</p>
<p>So $\frac {AB}{BD} = \frac {DB}{BC}$</p>
<p>so $\frac {1}{BD} = \frac {BD}{a}$</p>
<p>so $a = BD^2$</p>
<p>so $BD = \sqrt{a}$.</p>
<p>The height is no longer unknown.</p>
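A numeric check of this, placing $A=(0,0)$, $B=(1,0)$, $C=(a+1,0)$ and taking $D$ on the semicircle directly above $B$ (coordinates chosen for convenience):

```python
import math

for a in (0.5, 2.0, 7.0, 100.0):
    # semicircle on diameter AC: center ((a+1)/2, 0), radius (a+1)/2
    c = r = (a + 1) / 2
    # D sits directly above B = (1, 0) on the circle, so BD^2 = r^2 - (1 - c)^2
    bd = math.sqrt(r * r - (1 - c) ** 2)
    assert math.isclose(bd, math.sqrt(a))
```

Algebraically $r^2-(1-c)^2 = \frac{(a+1)^2-(a-1)^2}{4} = a$, which is exactly the similar-triangles conclusion above.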
|
590,219 | <p>Disclaimer: this is a homework question. I'm looking for direction, not an answer.</p>
<blockquote>
<p>Given a field <span class="math-container">$F$</span>, show that <span class="math-container">$F[x,x^{-1}]$</span> is a principal ideal domain.</p>
</blockquote>
<p>I'm unsure how to proceed. Would it be better to prove this directly? (ie, let <span class="math-container">$I$</span> be an ideal, show that <span class="math-container">$I = (f)$</span> for some <span class="math-container">$f \in F[x,x^{-1}]$</span>). The proof that polynomials are a PID would involve (I imagine) division by remainder and use of degree, both of which don't seem to have obvious parallels for Laurent polynomials. Should I try and devise some parallels and mimic the proof for polynomials? Or is this overkill? (or just wrong?)</p>
<p><strong>edit 1</strong>: I guess the approach I mentioned above amounts to showing that Laurent polynomials are a euclidean domain (we know that euclidean domain => principle ideal domain, so this would be sufficient) Are they a euclidean domain though? (it seems like this would have been the question if they were, instead of asking if they were a PID).</p>
<p><strong>edit 2</strong>: I've spat this out, think most parts of it are correct, though it seems kind of ugly/cumbersome (but that might just be me trying to spell things out more than is needed):</p>
<blockquote>
<p>Given a Laurent polynomial <span class="math-container">$f \in F[x,x^{-1}]$</span>, define its "negative
degree" <span class="math-container">$\deg^-(f)$</span> to be the largest power of <span class="math-container">$x^{-1}$</span> that appears
in <span class="math-container">$f$</span>.</p>
<p>Let <span class="math-container">$I$</span> be an ideal of <span class="math-container">$F[x,x^{-1}]$</span>. Note that <span class="math-container">$\{x^{-\deg^-(f)}f \mid f \in I\} \subseteq F[x]$</span>. Let <span class="math-container">$J$</span> be the
ideal in <span class="math-container">$F[x]$</span> generated by this set. <span class="math-container">$F[x]$</span> is a principal ideal
domain, so <span class="math-container">$J$</span> is a principal ideal and we have <span class="math-container">$J = (j)$</span> for some <span class="math-container">$j \in F[x]$</span>.</p>
<p>We claim <span class="math-container">$I = (j)$</span> (now meaning an ideal of <span class="math-container">$F[x,x^{-1}]$</span>).</p>
<p>Let <span class="math-container">$f \in I$</span>. Then <span class="math-container">$x^{-\deg^-(f)}f \in J$</span>, meaning <span class="math-container">$f = x^{\deg^-(f)}g$</span>, where <span class="math-container">$g = x^{-\deg^-(f)}f$</span> is in <span class="math-container">$J$</span>. Because <span class="math-container">$g \in
J = (j)$</span>, there exists <span class="math-container">$g' \in F[x]$</span> such that <span class="math-container">$g = g'j$</span>, and thus <span class="math-container">$f = (x^{\deg^-(f)}g')j$</span> is a multiple of <span class="math-container">$j$</span>, so <span class="math-container">$f \in (j)$</span>.</p>
<p>Let <span class="math-container">$f \in (j)$</span>. Then <span class="math-container">$f = gj$</span> for some <span class="math-container">$g \in F[x,x^{-1}]$</span>. But note
that <span class="math-container">$j = x^{-\deg^-(f')}f'$</span> for some <span class="math-container">$f' \in I$</span>, so <span class="math-container">$f = gx^{-\deg^-(f')}f'$</span>, so <span class="math-container">$f$</span> is a multiple of an element of an ideal
<span class="math-container">$I$</span>, so <span class="math-container">$f$</span> itself is in <span class="math-container">$I$</span>.</p>
<p>This shows <span class="math-container">$I = (j)$</span>. So an arbitrary ideal of <span class="math-container">$F[x,x^{-1}]$</span> is
principal, so <span class="math-container">$F[x,x^{-1}]$</span> is a principal ideal domain.</p>
</blockquote>
<p>I kind of feel like I still don't "get" the proof (I more-or-less see how each part works with the others but I'm having trouble seeing the bigger picture), though this may be due to a poor handle on ideals in general.</p>
| Josué Tonelli-Cueto | 15,330 | <p>1st HINT: Given an ideal $I$ of $F[x,x^{-1}]$, what can you say about $I\cap F[x]$? How is the information you obtain related to $I$?</p>
|
1,413,150 | <p>So for a periodic function <span class="math-container">$f$</span> (of period <span class="math-container">$1$</span>, say), I know the Riemann-Lebesgue Lemma which states that if <span class="math-container">$f$</span> is <span class="math-container">$L^1$</span> then the Fourier coefficients <span class="math-container">$F(n)$</span> go to zero as <span class="math-container">$n$</span> goes to infinity. And as far as I know, the converse of this is not true. My question, then, is this:</p>
<blockquote>
<p>Under what conditions on the Fourier coefficients <span class="math-container">$F(n)$</span> is the function <span class="math-container">$f$</span>, defined pointwise as the Fourier series with <span class="math-container">$F(n)$</span> as coefficients,</p>
<ol>
<li>integrable,</li>
<li>continuous, and</li>
<li>differentiable?</li>
</ol>
</blockquote>
| Jack D'Aurizio | 44,121 | <p>As suggested by Clement C., let:
$$ f(x)=\left(1+\frac{1}{x}\right)^{x}+\frac{1}{x}.\tag{1}$$
Then:
$$ f'(x) = \left(1+\frac{1}{x}\right)^{x}\left(\log\left(1+\frac{1}{x}\right)-\frac{1}{x+1}\right)-\frac{1}{x^2}\tag{2} $$
but, due to convexity:
$$\log\left(1+\frac{1}{x}\right)-\frac{1}{x+1}=-\frac{1}{x+1}+\int_{x}^{x+1}\frac{dt}{t}\geq \frac{1}{2(x+1)^2}\tag{3}$$
hence for any $x\geq 8$:
$$ f'(x)\geq \frac{\left(1+\frac{1}{8}\right)^8}{2(x+1)^2}-\frac{1}{x^2}>0.\tag{4} $$</p>
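Both the convexity bound $(3)$ and the resulting monotonicity of $f$ can be spot-checked numerically; a Python sketch over a few sample points:

```python
import math

def f(x):
    return (1 + 1 / x) ** x + 1 / x

for x in (8, 10, 50, 1000):
    # inequality (3): log(1 + 1/x) - 1/(x+1) >= 1/(2(x+1)^2)
    assert math.log(1 + 1 / x) - 1 / (x + 1) >= 1 / (2 * (x + 1) ** 2)
    # hence (4): f is strictly increasing for x >= 8
    assert f(x + 1) > f(x)
```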
|
45,163 | <p>I would like to get reccomendations for a text on "advanced" vector analysis. By "advanced", I mean that the discussion should take place in the context of Riemannian manifolds and should provide coordinate-free definitions of divergence, curl, etc. I would like something that has rigorous theory but also plenty of concrete examples and a mixture of theoretic/concrete exercises.</p>
<p>The text that I have seen that comes closest to what I'm looking for is Janich's <a href="http://rads.stackoverflow.com/amzn/click/1441931449" rel="noreferrer">Vector Analysis</a>. The Hatcheresque style of writing in this particular text though isn't really suitable for me.</p>
<p>Looking forward to your reccomendations, thanks.</p>
| Tim van Beek | 7,556 | <p>Physicists often have the problem that their theories, like general relativity, are very elegant in a coordinate free formulation; but they still need coordinates all the time because they have to compute concrete solutions to concrete problems. So books about mathematically well defined physical theories that make heavy use of differential geometry are actually a good source for what you are looking for, switching between the coordinate free form and concrete coordinates, with a lot of concrete problems.</p>
<p>Try, for example, the classic:</p>
<ul>
<li>Misner, Thorne, Wheeler: Gravitation.</li>
</ul>
<p>This is a scary 1500 pages tome, but it is that long because it takes a lot of space and time to explain basic mathematical concepts in differential geometry. You don't have to read all the later chapters about special applications to general relativity. Although I'd like to recommend that you do: It is less work than it looks at first sight, because the text is easy to read. </p>
|
588,930 | <p>I want help with this question.</p>
<blockquote>
<p>Show that for all $x>0$, $$ \frac{x}{1+x^2}<\tan^{-1}x<x.$$</p>
</blockquote>
<p>Thank you.</p>
| Berci | 41,488 | <p>Let $t:=\tan^{-1}x$ so that $x=\tan(t)$. Then $1+x^2=\displaystyle\frac{\cos^2(t)+\sin^2(t)}{\cos^2(t)}=\frac1{\cos^2(t)}$, so
$$\frac x{1+x^2}=\tan(t)\cdot\cos^2(t)=\sin(t)\cos(t)=\frac{\sin(2t)}2\,.$$
So this leads to prove $\displaystyle\frac{\sin(2t)}2<t<\tan(t)$ for $t\in (0,\pi/2)$. </p>
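A quick numeric confirmation of the substitution and the reduced inequality (the sample points are arbitrary):

```python
import math

for x in (0.3, 1.0, 4.0, 50.0):
    t = math.atan(x)  # t = arctan(x), so x = tan(t) with t in (0, pi/2)
    # the identity x / (1 + x^2) = sin(2t) / 2
    assert math.isclose(x / (1 + x * x), math.sin(2 * t) / 2)
    # the reduced claim on (0, pi/2)
    assert math.sin(2 * t) / 2 < t < math.tan(t)
```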
|
588,930 | <p>I want help with this question.</p>
<blockquote>
<p>Show that for all $x>0$, $$ \frac{x}{1+x^2}<\tan^{-1}x<x.$$</p>
</blockquote>
<p>Thank you.</p>
| HK Lee | 37,116 | <p>$$ \frac{d}{dx} \tan^{-1} x = \frac{1}{1+x^2} < \frac{d}{dx} x =1$$
and $\tan^{-1}(0)=0$ so that the last inequality is proved. </p>
<p>$$ \frac{d}{dx} \frac{x}{x^2+1} = \frac{(x^2+1) - x(2x) }{(x^2+1)^2} = \frac{1-x^2}{(x^2+1)^2} < \frac{d}{dx} \tan^{-1} x = \frac{1}{1+x^2} $$
and $\frac{x}{1+x^2}(0)=0=\tan^{-1}(0)$ so that the first inequality is proved. </p>
<p><strong>Another way :</strong> </p>
<p>$$\frac{x}{x^2+1}< t\frac{1}{t^2+1}|_{t=0}^{t=x} -
\int_0^x t\frac{-2t}{(t^2+1)^2} = \int_0^x \frac{1}{t^2+1}\ dt =\tan^{-1}x < \int_0^x \frac{1}{0+1}\ dt=x$$</p>
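Both the inequality and the derivative comparisons behind it can be spot-checked numerically; a Python sketch:

```python
import math

for x in (0.1, 1.0, 5.0, 100.0):
    # the inequality itself
    assert x / (1 + x * x) < math.atan(x) < x
    # the derivative comparisons used above: (1 - x^2)/(1 + x^2)^2 < 1/(1 + x^2) < 1
    assert (1 - x * x) / (1 + x * x) ** 2 < 1 / (1 + x * x) < 1
```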
|
1,902,842 | <p>In Bert Mendelson's <em>Introduction to Topology</em>, the first exercise of Ch. 1 Sec. 5 states:</p>
<blockquote>
<p>Let $X\subset A$ and $Y\subset B$. Prove that $$C(X\times Y)=A\times C(Y)\cup C(X) \times B.$$</p>
</blockquote>
<p>I have seen a "proof" of this, but I remain unsatisfied with the result. As support, I offer the following as a counterexample. </p>
<p>Let $A=\{-1,0,1\}=B$. Also let $X=\{0,1\}$ and $Y=\{-1,0\}$. These satisfy the preconditions. Now, is it true that</p>
<p>$$(\{0,1\}\times \{-1,0\})^C=\{-1,0,1\}\times \{-1,0\} \cup \{0,1\}^C\times \{-1,0,1\}.$$</p>
<p>It is easy enough to see that $X\times Y=\{(0,0),(0,-1),(1,0),(1,-1)\}$. The complement* would then be anything not in this set, for example $(2,2)$. However, certainly $(2,2)$ is in neither $\{-1,0,1\}\times \{-1,0\}$ nor $\{0,1\}^C\times \{-1,0,1\}$.</p>
<p>(*Is this definition of complement correct?)</p>
<p>Is there some underlying assumption of which I should be aware? Is staying within the bounds of the parent sets a standard practice? Is my counterexample unreasonable? Please advise.</p>
| Yes | 155,328 | <p>Let me provide another way to prove such a statement, which is more "intuitive" and admits less chance for one to go wrong. This may not be required by you; yes, I know.</p>
<p>The expression $x^{2}+x-2$ is meaningful for all $x \in \mathbb{R}$. If $x \in \mathbb{R}$, then
$$
|x^{2}+x-2 - 4| = |x-2||x+3|.
$$
If $|x-2| < 1$ (the bound "1" is chosen for convenience to bound away $|x+3|$), then $|x|-2 \leq |x-2| < 1$ (this is due to the elementary inequality $||a|-|b|| \leq |a-b|$ in analysis), implying $|x| < 3$, implying
$$
|x+3| \leq |x|+3 < 6,
$$
implying
$$
|x-2||x+3| < 6|x-2|.
$$
Given any $\varepsilon > 0$,
we have
$6|x-2| < \varepsilon$
if in addition we have $|x-2| < \varepsilon/6$.
Now clearly if $|x-2| < \delta := \min \{ 1, \varepsilon/6 \}$, we have $|x^{2}+x-2 -4| < \varepsilon$.</p>
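The choice $\delta=\min\{1,\varepsilon/6\}$ can be exercised numerically by sampling points strictly inside $(2-\delta, 2+\delta)$; a sketch (the sampling grid is an arbitrary choice):

```python
def works(eps):
    delta = min(1.0, eps / 6)
    # sample x strictly inside (2 - delta, 2 + delta)
    xs = [2 - 0.999 * delta + k * (1.998 * delta / 200) for k in range(201)]
    return all(abs(x * x + x - 2 - 4) < eps for x in xs)

assert all(works(eps) for eps in (1.0, 0.1, 0.001))
```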
|
1,904,903 | <p>Taken from Soo T. Tan's Calculus textbook Chapter 9.7 Exercise 27-</p>
<p>Define $$a_n=\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}$$
One needs to prove the convergence or divergence of the series $$\sum_{n=1}^{\infty} a_n$$</p>
<p>upon finding the radius of convergence for $\sum_{n=1}^{\infty}\frac{2\cdot 4\cdot 6\cdot\ldots\cdot 2n}{3\cdot 5\cdot7\cdot\ldots\cdot (2n+1)}\cdot x^{2n+1}$ to be $1$ and checking the endpoints. Also, please use tests and methods that are taught in introductory courses.</p>
<p>Answers show divergence but without explanation. </p>
| Claude Leibovici | 82,404 | <p>Another way to look at the problem could be to consider that $$a_n=\frac {\prod_{i=1}^{n }(2i) }{\prod_{i=1}^{n }(2i+1) }$$ Using the ratio test $$\frac{a_{n+1}}{a_n}=\frac{2 (n+1)}{2 n+3}$$ which is inconclusive.</p>
<p>Using <a href="https://en.wikipedia.org/wiki/Ratio_test#Raabe.27s_test" rel="nofollow">Raabe's test</a>, $$n\left(\frac{a_n}{a_{n+1}} -1\right)=\frac{n}{2 n+2}$$ whose limit is $\frac 12<1$, hence the series diverges.</p>
<p>Another way, using</p>
<p>$$a_n=\frac {\prod_{i=1}^{n }(2i) }{\prod_{i=1}^{n }(2i+1) }=\frac {2^n\,n!}{\frac{2^{-n} (2 n+1)!}{n!}}=\frac{4^n\, (n!)^2}{(2n+1)! }$$ and using, for large values of $n$, Stirling approximation for the factorial leads to $$\log(a_n)=\left(-\frac{1}{2} \log (n)+\log \left(\frac{\sqrt{\pi
}}{2}\right)\right)+O\left(\frac{1}{n}\right)$$ Using the fact that $a_n=e^{\log(a_n)}$, this leads to $$a_n=\frac{1}{2} \sqrt{\pi } \sqrt{\frac{1}{n}}+O\left(\frac{1}{n}\right)$$ which is divergent by comparison to the $p$-series.</p>
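The Stirling-based asymptotic $a_n\sim\frac{\sqrt\pi}{2\sqrt n}$ is easy to confirm numerically; a Python sketch:

```python
import math

def a(n):
    # a_n = (2*4*...*2n) / (3*5*...*(2n+1)), accumulated as floats
    v = 1.0
    for i in range(1, n + 1):
        v *= (2 * i) / (2 * i + 1)
    return v

for n in (10, 100, 1000):
    approx = math.sqrt(math.pi) / (2 * math.sqrt(n))
    # relative error shrinks like 1/n, consistent with a_n ~ sqrt(pi)/(2 sqrt(n))
    assert abs(a(n) / approx - 1) < 1 / n
```

Since $a_n$ behaves like a constant times $n^{-1/2}$, comparison with the $p$-series ($p=\tfrac12$) gives divergence.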
|
2,195,287 | <blockquote>
<p>Knowing that $p$ is prime and $n$ is a natural number show that
$$n^{41}\equiv n\bmod 55$$
using Fermat's little theorem
$$n^p\equiv n\bmod p$$</p>
</blockquote>
<p>If the exercise was to show that
$$n^{41}\equiv n\bmod 11$$ I would just rewrite $n^{41}$ as a power of $11$ and would easily prove that the congruence is true in this case but I cannot apply the same logic when I have $\bmod55$ since $n^{41}$ cannot be written as power of $55$.</p>
<p>Any hint?</p>
| Alberto Andrenucci | 370,680 | <p>You use the Chinese Remainder Theorem:</p>
<p>$$\begin{equation}
\begin{cases}
n^{41}\equiv n \mod(11)\\n^{41}\equiv n \mod(5)
\end{cases}
\end{equation}$$</p>
<p>Now you can apply Fermat's little theorem, using the fact that $n^{\phi(p)}\equiv1 \pmod p$ whenever $\gcd(n,p)=1$ (Euler's Theorem), to obtain:</p>
<p>$$\begin{equation}
\begin{cases}
n^{4\phi(11)+1}\equiv n \mod(11)\\n^{10\phi(5)+1}\equiv n \mod(5)
\end{cases}
\end{equation}$$</p>
<p>$$\begin{equation}
\begin{cases}
n^{4\cdot10+1}\equiv n \mod(11)\\n^{10\cdot4+1}\equiv n \mod(5)
\end{cases}
\end{equation}$$</p>
<p>Which gives you the result.</p>
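Since there are only finitely many residues, the congruence can also be verified exhaustively; a one-line Python check:

```python
# n^41 ≡ n (mod 55) for every residue class — exhaustive check
assert all(pow(n, 41, 55) == n % 55 for n in range(550))
```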
|
553,431 | <p>In the <a href="http://demonstrations.wolfram.com/NoFourInPlaneProblem/" rel="nofollow noreferrer">No-Four-In-Plane problem</a>, points are selected so that no four of them are coplanar.</p>
<p>Eight points can be picked from a <span class="math-container">$3\times3\times3$</span> space in a unique way.</p>
<p>Can 11 points be picked from a <span class="math-container">$4\times4\times4$</span> grid so that no four points are coplanar?</p>
<p>What is the maximal number of points selectable from a <span class="math-container">$5\times5\times5$</span> grid, and beyond?</p>
<p>NEW</p>
<p>There's a <a href="http://azspcs.com/Contest/Tetrahedra" rel="nofollow noreferrer">computer programming contest</a> running through June 4, 2016 that asks the question "What's the most points in an <em>n</em> × <em>n</em> × <em>n</em> grid of which no 4 are coplanar?" for larger values of <em>n</em>.</p>
| Hugo Pfoertner | 686,508 | <p>See <a href="https://oeis.org/A280537" rel="nofollow noreferrer">https://oeis.org/A280537</a> and <a href="https://oeis.org/A280538" rel="nofollow noreferrer">https://oeis.org/A280538</a> for the <span class="math-container">$6\times6\times6$</span> results.
The maximum number of points in the <span class="math-container">$6\times6\times6$</span> case is 16. There are 36 solutions; a list of coordinates is provided in <a href="https://oeis.org/A280538/a280538.txt" rel="nofollow noreferrer">https://oeis.org/A280538/a280538.txt</a>.
None of the solutions is symmetric.</p>
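<p>If you want to verify (or search for) such point sets by computer, coplanarity of four lattice points is an exact integer test: the points are coplanar iff the <span class="math-container">$3\times3$</span> determinant of their difference vectors vanishes. A minimal sketch (the grid search over candidate sets is left out):</p>

```python
from itertools import combinations

def coplanar(p, q, r, s):
    """Four points are coplanar iff det[q-p, r-p, s-p] = 0 (exact integers)."""
    a = tuple(x - y for x, y in zip(q, p))
    b = tuple(x - y for x, y in zip(r, p))
    c = tuple(x - y for x, y in zip(s, p))
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det == 0

def no_four_coplanar(points):
    return not any(coplanar(*quad) for quad in combinations(points, 4))

# Sanity checks on obvious configurations:
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))  # True: all in the plane z = 0
print(coplanar((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))  # False: a tetrahedron
```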
|
255,295 | <p>I just did one exercise stating:
Prove that the linear map $M: X \rightarrow C([0,1])$ is continuous iff for every $t\in[0,1]$, the rule $x\rightarrow (Mx)(t)$ defines a continuous linear functional on $X$.
the next exercise stated:
State, and prove a similar continuity criterion for linear maps $M:X\rightarrow Y$ where Y is an arbitrary Banach space.</p>
<p>Is there some theorem which states that $M$ is continuous iff $x\mapsto \ell(Mx)$ is continuous for all linear functionals $\ell:Y\rightarrow \mathbb{K}$ in $Y'$? Or what does it mean?</p>
<p>I posted a new try to a proof, can someone please confirm it or post another one?</p>
| Thomas | 49,120 | <p>Operators $\ell:\: X \rightarrow Y$ are not linear functionals. Think about how the linear functionals were defined in the example of $Y = C([0,1])$ as a composition of something with $M$, and try to generalize that. Use the representation of the norm in $X$ via linear functionals, and the uniform boundedness principle to prove the statement.</p>
|
3,002,114 | <blockquote>
<p>Prove that
<span class="math-container">$$
\binom{n}{1}^2+2\binom{n}{2}^2+\cdots + n\binom{n}{n}^2
= n \binom{2n-1}{n-1}.
$$</span></p>
</blockquote>
<p>So
<span class="math-container">$$
\sum_{k=1}^n k \binom{n}{k}^2
= \sum_{k=1}^n k \binom{n}{k}\binom{n}{k}
= \sum_{k=1}^n n \binom{n-1}{k-1} \binom{n}{k}
= n \sum_{k=0}^{n-1} \frac{(n-1)!n!}{(n-k-1)!k!(n-k-1)!(k+1)!}
= n^2 \sum_{k=0}^{n-1} \frac{(n-1)!^2}{(n-k-1)!^2k!^2(k+1)}
=n^2 \sum_{k=0}^{n-1} \binom{n-1}{k}^2\frac{1}{k+1}.
$$</span>
I do not know what to do with <span class="math-container">$\frac{1}{k+1}$</span>, how to get rid of that.</p>
| drhab | 75,923 | <p><span class="math-container">$$\sum_{k=1}^{n}k\binom{n}{k}^{2}=n\sum_{k=1}^{n}\binom{n-1}{k-1}\binom{n}{n-k}=n\sum_{k=0}^{n-1}\binom{n-1}{k}\binom{n}{n-k-1}=n\binom{2n-1}{n-1}$$</span></p>
<p>Applying <a href="https://en.wikipedia.org/wiki/Vandermonde%27s_identity" rel="nofollow noreferrer">Vandermonde's identity</a> in third equality.</p>
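<p>Independently of the Vandermonde argument, the identity is easy to sanity-check numerically with exact integer arithmetic (a quick sketch):</p>

```python
from math import comb

# Check sum_{k=1}^n k*C(n,k)^2 == n*C(2n-1, n-1) for small n.
for n in range(1, 30):
    lhs = sum(k * comb(n, k) ** 2 for k in range(1, n + 1))
    rhs = n * comb(2 * n - 1, n - 1)
    assert lhs == rhs
print("identity holds for n = 1..29")
```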
|
2,831,731 | <p>I don't know how I should define a homotopy on a set.
I think $\{\emptyset,\{a,b,c\}\}$ should work, but I don't know how to write the homotopy between the identity map and a constant map.
(So sorry for this basic question.)</p>
| mathcounterexamples.net | 187,663 | <p>So if I understand well... You have a set $X=\{a,b,c\}$ containing only three points. You want to define a topology $\mathcal T$ on $X$ such that $X$ is contractible.</p>
<p>If that is the question, then indeed the trivial topology $\mathcal T =\{\emptyset, \{a,b,c\}\}$ is convenient. Why?</p>
<p>Consider the map defined by
$$H(x,t)=\begin{cases}
x & \text{for } &x \in \{a,b,c\} & \text{ and } 0 \le t <1/2\\
a & \text{for } &x \in \{a,b,c\} & \text{ and } 1/2 \le t \le 1
\end{cases}$$</p>
<p>It is sufficient to prove that $H$ is continuous, as $H$ is then a homotopy between the identity of $X$ and a constant map, namely the one with value $a$.</p>
<p>But this is trivial: the inverse image of the empty set is the empty set, while the inverse image of $X$ is $X \times [0,1]$; both are open sets of $X \times [0,1]$ endowed with the product topology.</p>
|
39,424 | <p>I need to teach an intro course on number theory in 1 month. I was just notified. Since I have never studied it, what are good books to learn it quickly?</p>
| Pete L. Clark | 1,149 | <p>I don't think the OP has provided enough information to get a useful answer to his/her precise question (what text to learn quickly from).</p>
<p>What level is the course being taught at? High school? Undergraduate for non-majors? Undergraduate for majors but without specific knowledge of any other undergraduate math courses beyond calculus? Undergraduate assuming some basic analysis and/or algebra? Graduate level? Something else??</p>
<p>As others have said, a perfectly reasonable thing to do when you are teaching any course for the first time and don't have strong opinions / too much expertise about it is to look at the textbook(s) that others have used who have taught the course recently. Thumb through them a little bit, then ask them how they liked the book and how well it worked for the course. If you found anything confusing or problematic in the book, ask them about that. </p>
<p>I think someone with a PhD in mathematics (for the sake of argument, I'll assume the OP has one) should be able to pick up and read a textbook for any undergraduate class within a month and then be able to teach the class with a reasonable amount of competence. Of course, real insight takes more time than that, and it is not reasonable to expect that someone conscripted into service with one month's worth of notice (why is this, exactly?) will be able to provide that. </p>
|
39,424 | <p>I need to teach an intro course on number theory in 1 month. I was just notified. Since I have never studied it, what are good books to learn it quickly?</p>
| Niemi | 8,590 | <p>What about "The Little Book of Bigger Primes" by Ribenboim (see <a href="http://rads.stackoverflow.com/amzn/click/0387201696" rel="nofollow">1</a> for the Amazon link)? I personally think this is a great introduction to the field of number theory, and I enjoyed it very much a few years ago. It is clear and nicely written.</p>
<p>(Just to be clear: We are not talking about a course that also involves notable parts of algebraic number theory, are we?)</p>
|
2,788,015 | <p>I'm trying to solve an exercise that says</p>
<blockquote>
<p>Show that a locally compact space is $\sigma$-compact if and only if is separable.</p>
</blockquote>
<p>Here locally compact means that it is also Hausdorff. I have shown that separability implies $\sigma$-compactness, but I'm stuck in the other direction.</p>
<p>Assuming that $X$ is $\sigma$-compact it seems enough to show that a compact Hausdorff space is separable. However I don't have a clue about how to do it. </p>
<p>My first thought was to try to show that a compact Hausdorff space is first countable, which would imply that it is second countable, and from there the proof is almost done. However, it seems that my assumption is not true, so I'm back at the starting point.</p>
<p>Some hint will be appreciated, thank you.</p>
<hr>
<p>EDIT: it seems that the exercise is wrong. Searching the web, I found <a href="http://at.yorku.ca/cgi-bin/bbqa?forum=ask_a_topologist_2003&task=show_msg&msg=0014.0001" rel="nofollow noreferrer">a "sketch" of an example</a> of a compact Hausdorff space that is not separable:</p>
<blockquote>
<p>Another natural example: take more than |R| copies of the unit interval
and take their product. This is compact Hausdorff (Tychonov theorem) but
not separable (proof not too hard, but omitted).</p>
<p>Hope this helped,</p>
<p>Henno Brandsma</p>
</blockquote>
<p>My knowledge of topology is limited, and the exercise appears in a book on analysis (it is part of exercise 18 on page 57 of <em>Analysis III</em> by Amann and Escher).</p>
<p>My hope is that @HennoBrandsma (a user of this site) appears and clarifies the question :)</p>
| Henno Brandsma | 4,280 | <p>As I said, you cannot say in general that a locally compact space is separable iff it is $\sigma$-compact. </p>
<p>There are many classic compact spaces that are not separable, e.g. $[0,1]^I$ where $|I| > \mathfrak{c}$, and the lexicographically ordered square $[0,1] \times [0,1]$ in the order topology or the Alexandroff double of $[0,1]$ etc. All such spaces are trivially $\sigma$-compact and locally compact, so they disprove the right to left implication.</p>
<p>But the stated fact <strong>is</strong> true if we restrict ourselves to metric or metrisable spaces, (or in fact any class of spaces where separability is equivalent to Lindelöfness):</p>
<p>Suppose $X$ is separable, then for a metric space this implies that $X$ is Lindelöf and so as $X$ has an open cover of open sets with compact closures (being locally compact) it has a countable such cover as well. Hence $X$ is then $\sigma$-compact. On the other hand, if $X$ is $\sigma$-compact, it’s Lindelöf (this implication holds in general spaces) and hence separable.</p>
|
3,232,341 | <p>How would I show this? I know a directed graph with no cycles has at least one node of outdegree zero (because a graph where every node has outdegree at least one contains a cycle), but I do not know where to go from here.</p>
| hmakholm left over Monica | 14,366 | <p>Just reverse the direction of all edges. This cannot produce a cycle where there wasn't one before, and the indegrees are now outdegrees.</p>
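<p>As a toy illustration of the reversal trick (not part of the proof): a sink of the reversed graph is exactly a node of indegree zero in the original.</p>

```python
# A small DAG: 0 -> 1 -> 3 and 0 -> 2 -> 3.
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 3), (2, 3)]

def zero_outdegree(nodes, edges):
    """Nodes with no outgoing edge; nonempty whenever the digraph is acyclic."""
    tails = {u for u, _ in edges}
    return [v for v in nodes if v not in tails]

sinks = zero_outdegree(nodes, edges)
# Reversing every edge turns indegrees into outdegrees, so a sink of the
# reversed graph is a node of indegree zero in the original.
sources = zero_outdegree(nodes, [(v, u) for u, v in edges])
print(sinks, sources)  # [3] [0]
```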
|
2,098,693 | <p>Full Question: Five balls are randomly chosen, without replacement, from an urn that contains $5$
red, $6$ white, and $7$ blue balls. What is the probability of getting at least one ball of
each colour?</p>
<p>I have been trying to answer this by taking the complement of the event but it is getting quite complex. Any help?</p>
| Community | -1 | <p>Try dividing it into all possible cases.</p>
<p>Case $1$: We take $2$ red, $2$ white and one blue ball. The total ways are : $W_1= \binom {5 }{2}\cdot \binom {6}{2}\cdot \binom {7}{1} $ ways.</p>
<p>Case $2$: We take $1$ red, $2$ white and $2$ blue balls. Here the number of ways are : $W_2 = \binom {5}{1}\cdot \binom {6}{2}\cdot \binom {7}{2} $ ways.</p>
<p>I leave it to you to find $W_3$ for case $3$ and also find $W_4, W_5 , W_6$ for the cases where we can take $3$ balls of one colour and one ball each of the other two. Thus the probability is $$P_{req} =\frac{W_1+W_2+W_3+W_4+W_5+W_6}{\binom{5+6+7}{5}} $$ Hope it helps. </p>
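<p>The case-by-case count can be checked exactly in a few lines; summing over all positive splits $(r,w,b)$ of the five draws gives $6055/\binom{18}{5} = 6055/8568 = 865/1224 \approx 0.707$. A quick sketch:</p>

```python
from fractions import Fraction
from math import comb

red, white, blue, draw = 5, 6, 7, 5
total = comb(red + white + blue, draw)

# Sum over all ways to split the 5 draws into positive counts (r, w, b).
favorable = sum(comb(red, r) * comb(white, w) * comb(blue, draw - r - w)
                for r in range(1, draw)
                for w in range(1, draw - r))

p = Fraction(favorable, total)
print(favorable, total, p, float(p))
```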
|
131,524 | <p>I am an amateur mathematician, and certainly not a set theorist, but there seems to me to be an easy way around the reflexive paradox: Add to set theory the primitive $A(x,y)$, which we may think of as meaning that $x$ is allowed to belong to $y$ and the axiom</p>
<p>$\forall x,\forall y, x\in y \rightarrow A(x,y)$</p>
<p>and modify the axiom schema for abstraction to read, given any wff $\phi(y)$ in which $x$ is not a free variable,</p>
<p>$\exists x,\forall y, y\in x \leftrightarrow A(y,x) \wedge \phi(y)$ </p>
<p>Then if we try to construct the set $B$ of all sets not belonging to themselves, we get</p>
<p>$\forall x, x\in B \leftrightarrow A(x,B) \wedge x\notin x$</p>
<p>Then, instead of the reflexive paradox, we get</p>
<p>$B\in B \leftrightarrow A(B,B) \wedge B\notin B$</p>
<p>which is a consistent statement that implies both $B\notin B$ and $\neg A(B,B)$. Moreover, since $B$ is arbitrary, it follows that no set can be a member of itself.</p>
<p>Now, this all looks correct to me, but I can not believe that such a simple trick has been overlooked for over a century. So I have to believe that either its been done and I am simply unaware of it, or I've made a mistake that is staring me in the face and I just can't see it. Can someone set me straight on this?</p>
| Noah Schweber | 8,133 | <p>As The User says in the comments, you still have a problem, aesthetically at least -- in order to prevent the existence of "silly" models, you need some axiom asserting that $\in^*$ isn't too big. As is, a model in which $\in^*$ always holds between any $x$ and $y$ satisfies your axioms; this means that your separation axiom just asserts the existence of the empty set, and so any collection of sets containing the empty set can form a model of your axioms. In particular, your theory is now certainly consistent, since the structure consisting of a single object, interpreted as the emptyset, is a model.</p>
<p>This is the primary difficulty in creating a useful set theory -- not avoiding the paradoxes, but avoiding them in such a way that the resulting theory has some semantic power, so that models of the theory all share some intuitive properties. Also, we want the theory to be powerful, in the sense that any of its models interpret the rest of mathematics. These two demands are actually tied together, since one of the semantic properties we tend to demand of a set theory is that its models function well as universes for mathematics. In this case, avoiding paradoxes too easily is actually in some sense a <em>bad</em> thing -- having too many models can get in the way of interpretive power. For example, one theorem showing that ZFC is a powerful theory -- the Reflection Theorem, that asserts that for each finite fragment F of ZFC, ZFC proves the consistency of F -- can also be thought of as a near-inconsistency result: ZFC is "as close as possible" to inconsistency, in terms of what it says about its own finite fragments.</p>
<p>(This is not to argue that you should stop thinking about these things! I think coming up with alternate set theories is one of the best things a logician can do with their time; or at least that's how I justify it to my advisor! But it is a good idea to keep all of these things in one's mind. In particular, I recommend at the outset setting down a list of requirements you want your set theory to satisfy: is consistent relative to PA? interprets ZFC? is formulated in seven-valued infinitary logic?* since this will guide your process.)</p>
<hr>
<p>* Nobody said those demands had to be <em>reasonable</em>, after all!</p>
|
4,206,147 | <blockquote>
<p><span class="math-container">$f(f(x))=f(x),$</span> for all <span class="math-container">$x\in\Bbb R$</span> suppose <span class="math-container">$f$</span> is differentiable, show <span class="math-container">$f$</span> is constant or <span class="math-container">$f(x)=x$</span></p>
</blockquote>
<p>Clearly, <span class="math-container">$f'(f(x))f'(x)=f'(x)$</span>. This implies for each <span class="math-container">$x$</span>, <span class="math-container">$f'(f(x))=1$</span>, or <span class="math-container">$f'(x)=0$</span>. But this is not enough.</p>
| Daniel McLaury | 3,296 | <p>I'm not able to follow the argument given in the existing answer. It may be perfectly valid, but I don't understand what's being said at several points. So let me give my own answer.</p>
<p>Suppose <span class="math-container">$f : \mathbb{R} \to \mathbb{R}$</span> is a differentiable function such that
<span class="math-container">$f(f(x)) = f(x).$</span>
Differentiating, we have
<span class="math-container">$$f'(f(x))f'(x) = f'(x)$$</span>
which we can rewrite as
<span class="math-container">$$f'(x)[f'(f(x)) - 1] = 0$$</span></p>
<p>So for each <span class="math-container">$x \in \mathbb{R}$</span> we have either <span class="math-container">$f'(x) = 0$</span> or <span class="math-container">$f'(f(x)) = 1$</span>.</p>
<p>Since <span class="math-container">$f$</span> is differentiable, it is continuous. Since <span class="math-container">$\mathbb{R}$</span> is nonempty and connected, so is its continuous image <span class="math-container">$Y := f(\mathbb{R})$</span>.</p>
<p>Let <span class="math-container">$y \in Y$</span>, say <span class="math-container">$y = f(x)$</span>. We must have either <span class="math-container">$f'(y) = 0$</span> or <span class="math-container">$f'(f(y)) = 1$</span>. But since <span class="math-container">$y = f(x)$</span>,
<span class="math-container">$$f(y) = f(f(x)) = f(x) = y.$$</span>
So in all we have either <span class="math-container">$f'(y) = 0$</span> or <span class="math-container">$f'(y) = 1$</span> for each <span class="math-container">$y \in Y$</span>.</p>
<p>Derivatives of differentiable functions need not be continuous, but they at least satisfy the intermediate value property. If <span class="math-container">$f'(y_1) = 0$</span> and <span class="math-container">$f'(y_2) = 1$</span> for some points <span class="math-container">$y_1, y_2 \in Y$</span>, it would necessarily be the case that there was some point <span class="math-container">$y_3$</span> in between them with <span class="math-container">$f'(y_3) = 1/2$</span>. Since <span class="math-container">$Y$</span> is connected, though, this point <span class="math-container">$y_3$</span> would necessarily be in <span class="math-container">$Y$</span> as well, contradicting the fact that <span class="math-container">$f'$</span> only takes on the values 0 and 1 on <span class="math-container">$Y$</span>.</p>
<p>It follows that either <span class="math-container">$f'(y) = 0$</span> for all <span class="math-container">$y \in Y$</span> or <span class="math-container">$f'(y) = 1$</span> for all <span class="math-container">$y \in Y$</span>.</p>
<p>In the former case, we have <span class="math-container">$f'(f(x)) = 0$</span> for all <span class="math-container">$x$</span>, so in particular <span class="math-container">$f'(f(x)) \neq 1$</span> for all <span class="math-container">$x$</span>, so we must have <span class="math-container">$f'(x) = 0$</span> for all <span class="math-container">$x$</span>. But this means that <span class="math-container">$f$</span> is constant.</p>
<p>In the latter case, we have <span class="math-container">$f'(y) = 1$</span> for all <span class="math-container">$y \in Y$</span>. First, since <span class="math-container">$f'$</span> is nonzero at a point, <span class="math-container">$f$</span> is nonconstant, so <span class="math-container">$Y$</span> must be an interval of positive length. Second, since <span class="math-container">$f'(y) = 1$</span> on this interval, there is some constant <span class="math-container">$C$</span> for which <span class="math-container">$f(y) = y + C$</span> for all <span class="math-container">$y \in Y$</span>. That is, for all <span class="math-container">$x$</span>, <span class="math-container">$f(f(x)) = f(x) + C$</span>. But <span class="math-container">$f(f(x)) = f(x)$</span>, so this says that <span class="math-container">$f(x) = f(x) + C$</span>, i.e. <span class="math-container">$C = 0$</span>. In other words, <span class="math-container">$f(y) = y$</span> for all <span class="math-container">$y \in Y$</span>.</p>
<p>I want to claim in this case that <span class="math-container">$Y$</span> is necessarily all of <span class="math-container">$\mathbb{R}$</span>. Suppose not. Write <span class="math-container">$a = \inf Y$</span> and <span class="math-container">$b = \sup Y$</span>. Since <span class="math-container">$Y$</span> is an interval of positive length, we have in particular <span class="math-container">$a < b$</span>. By assumption, one or both of these must be finite. Assume without loss of generality that <span class="math-container">$a$</span> is not <span class="math-container">$-\infty$</span>.</p>
<p>Since <span class="math-container">$f(y) = y$</span> at least on the interval <span class="math-container">$(a, b)$</span> and <span class="math-container">$f$</span> is continuous, we must have <span class="math-container">$f(a) = a$</span>. For <span class="math-container">$x < a$</span> we have <span class="math-container">$f(x) \geq a$</span> since <span class="math-container">$a$</span> is the infimum of the range of <span class="math-container">$f$</span>. For <span class="math-container">$a \leq x < b$</span> we have <span class="math-container">$f(x) = x$</span>.</p>
<p>But <span class="math-container">$f$</span> is supposed to be differentiable at <span class="math-container">$a$</span>, so let's consider its derivative there,</p>
<p><span class="math-container">$$\lim_{h \to 0} \frac{f(a + h) - f(a)}{h}$$</span></p>
<p>The limit from the right is
<span class="math-container">$$\lim_{h \to 0^+} \frac{f(a + h) - f(a)}{h} = 1$$</span>
since <span class="math-container">$f(a+h) = a+h$</span> for small <span class="math-container">$h &gt; 0$</span>. On the other hand, if we consider the limit from the left,
<span class="math-container">$$\lim_{h \to 0^-} \frac{f(a + h) - f(a)}{h}$$</span>
the numerator is nonnegative (since <span class="math-container">$f(a+h) \geq a = f(a)$</span>) while the denominator is a negative number. So the limit, should it exist, could at most be zero, and is definitely not equal to one.</p>
<p>This is a contradiction, so it must be the case that <span class="math-container">$a = -\infty$</span>. Similarly, <span class="math-container">$b = +\infty$</span>. So the <span class="math-container">$f(y) = y$</span> for all <span class="math-container">$y \in Y$</span>, but <span class="math-container">$Y$</span> is all of <span class="math-container">$\mathbb{R}$</span> so <span class="math-container">$f$</span> is just the identity function.</p>
|
1,345,643 | <p>In an exercise it seems I must use Pascal's triangle to expand $(z^1+z^2+z^3+z^4)^3$. The result would be $z^3 + 3z^4 + 6z^5 + 10z^6 + 12z^7 + 12z^8 + 10z^9 + 6z^{10} + 3z^{11} + z^{12}$. But how do I use the triangle to get to that result? Personally I can only solve things like $(x+y)^2$ and $(x+y)^3$.</p>
<p>Thanks for any tips that may be given.</p>
| Jack D'Aurizio | 44,121 | <p>$$(z+z^2+z^3+z^4)^3 = z^3\cdot\left(\frac{1-z^4}{1-z}\right)^3=z^3(1-3z^4+3z^8-z^{12})\sum_{n\geq 0}\binom{n+2}{2}z^n$$
hence:
$$\begin{eqnarray*}(z+z^2+z^3+z^4)^3&=&(z^3-3z^7+3z^{11}-z^{15})\sum_{n\geq 0}\binom{n+2}{2}z^n\\&=&\sum_{n\geq 3}\binom{n-1}{2}z^n-3\sum_{n\geq 7}\binom{n-5}{2}+3\sum_{n\geq 11}\binom{n-9}{2}z^n-\sum_{n\geq 15}\binom{n-13}{2}z^n\\&=&\sum_{n= 3}^{12}\binom{n-1}{2}z^n-3\sum_{n=7}^{12}\binom{n-5}{2}z^n+3\sum_{n=11}^{12}\binom{n-9}{2}z^n\\&=&\frac{1}{2}\left(\sum_{n= 3}^{6}(n-1)(n-2)z^n-2\sum_{n=7}^{10}(n-4)(n-11)z^n+\sum_{n=11}^{12}(n-13)(n-14)z^n\right)\end{eqnarray*}$$</p>
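<p>The final coefficients (and the expansion stated in the question) can be confirmed by convolving the coefficient lists directly; a quick sketch:</p>

```python
# Coefficients of z + z^2 + z^3 + z^4, indexed by degree 0..4.
p = [0, 1, 1, 1, 1]

def poly_mul(a, b):
    """Multiply two polynomials given as coefficient lists (convolution)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

cube = poly_mul(poly_mul(p, p), p)
print(cube)  # degrees 0..12: [0, 0, 0, 1, 3, 6, 10, 12, 12, 10, 6, 3, 1]
```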
|
179,377 | <p>Consider the $k \times k$ block matrix:</p>
<p>$$C = \left(\begin{array}{ccccc} A & B & B & \cdots & B \\ B & A & B &\cdots & B \\ \vdots & \vdots & \vdots & \ddots &
\vdots \\ B & B & B & \cdots & A
\end{array}\right) = I_k \otimes (A - B) + \mathbb{1}_k \otimes B$$</p>
<p>where $A$ and $B$ are size $n \times n$ and $\mathbb{1}$ is the matrix of all ones.</p>
<p>It would seem that the formula for the determinant of $C$ is simply:</p>
<p>$$\det(C) = \det(A-B)^{k-1} \det(A+(k-1) B)$$</p>
<p>Can anyone explain why this seems to be true or offer a proof or direct me to a proof?</p>
| Rodrigo de Azevedo | 91,764 | <p>Subtracting the last row of blocks from the first $k-1$ rows of blocks, we obtain</p>
<p>$$\begin{bmatrix}A-B & O & O & \dots & O & B-A\\ O & A-B & O & \dots & O & B-A\\ O & O & A-B & \dots & O & B-A\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ O & O & O & \dots & A-B & B-A\\ B & B & B & \dots & B & A\end{bmatrix}$$</p>
<p>which is the first step in Remling's answer. Adding the first $k-1$ columns of blocks to the last column of blocks, we obtain a block lower triangular matrix</p>
<p>$$\begin{bmatrix}A-B & O & O & \dots & O & O\\ O & A-B & O & \dots & O & O\\ O & O & A-B & \dots & O & O\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ O & O & O & \dots & A-B & O\\ B & B & B & \dots & B & M_k (A,B)\end{bmatrix}$$</p>
<p>where $M_k (A,B) = A+(k-1)B$. The determinant of this block lower triangular matrix is</p>
<p>$$(\det(A-B))^{k-1} \det(M_k (A,B)) = (\det(A-B))^{k-1} \det (A + (k-1) B)$$</p>
<p>Thus,</p>
<p>$$\det(C) = (\det(A-B))^{k-1} \det (A + (k-1) B)$$</p>
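<p>Here is a small exact check of the resulting formula, using rational Gaussian elimination so there is no floating-point doubt. The blocks $A$, $B$ below are arbitrarily chosen $2\times2$ integer matrices and $k=3$ (a sketch, not a general-purpose implementation):</p>

```python
from fractions import Fraction

def det(M):
    """Exact determinant by Gaussian elimination over the rationals."""
    M = [[Fraction(x) for x in row] for row in M]
    size, sign, d = len(M), 1, Fraction(1)
    for i in range(size):
        piv = next((r for r in range(i, size) if M[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            sign = -sign
        d *= M[i][i]
        for r in range(i + 1, size):
            f = M[r][i] / M[i][i]
            for c in range(i, size):
                M[r][c] -= f * M[i][c]
    return sign * d

A = [[2, 1], [0, 3]]
B = [[1, 1], [1, 2]]
k, n = 3, 2

# Assemble the k x k block matrix C with A on the diagonal and B elsewhere.
C = [[(A if bi == bj else B)[i][j] for bj in range(k) for j in range(n)]
     for bi in range(k) for i in range(n)]

AmB = [[A[i][j] - B[i][j] for j in range(n)] for i in range(n)]
ApB = [[A[i][j] + (k - 1) * B[i][j] for j in range(n)] for i in range(n)]

lhs = det(C)
rhs = det(AmB) ** (k - 1) * det(ApB)
print(lhs, rhs, lhs == rhs)
```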
|
1,334,527 | <p>The integral in hand is
$$
I(n) = \frac{1}{\pi}\int_{-1}^{1} \frac{(1+2x)^{2n}}{\sqrt{1-x^2}}\, dx
$$
I don't know whether it has a closed form or not, but currently I only want to know its asymptotic behavior. Setting $x=\cos\theta$, we get
$$
I(n) = \frac{1}{\pi}\int_{0}^{\pi/2} \Big[(1+2\cos\theta)^{2n}+(1-2\cos\theta)^{2n}\Big]\, d\theta
$$
The second term can be neglected, therefore
$$
I(n) \sim \frac{1}{\pi}\int_{0}^{\pi/2}(1+2\cos\theta)^{2n}\, d\theta
$$
How can I move on?</p>
| zhw. | 228,045 | <p>Let's look at</p>
<p>$$(1)\,\,\,\,\int_{0}^{\pi/2}(1+2\cos t)^{n}\, dt = 3^n\int_{0}^{\pi/2}(1/3+2(\cos t)/3)^{n}\, dt.$$</p>
<p>For the last integral, we can look at $\int_{0}^{b}(1/3+2\cos t/3)^{n}\, dt$ for any small $b>0,$ the rest of the integral decreasing exponentially as $n\to \infty.$ Now near $0,\cos t \sim 1-t^2/2,$ so let's substitute that in and simplify. We get</p>
<p>$$\int_0^b(1-t^2/3)^n\,dt.$$</p>
<p>Let $t=(\sqrt {3}s)/\sqrt {n}.$ The above becomes </p>
<p>$$\frac{\sqrt {3}}{\sqrt n}\int_0^{b\sqrt n /\sqrt 3}(1-s^2/n)^n\,ds.$$</p>
<p>That last integral $\to \int_0^{\infty}e^{-s^2}ds = \sqrt \pi/2.$ It follows that $(1)$ is asymptotic to</p>
<p>$$3^n\cdot \frac{\sqrt {3}}{\sqrt n}\cdot \frac{\sqrt \pi}{2}.$$ Not a full expansion, certainly some details left out, but this I think this approach gives intuition.</p>
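<p>The conclusion $\int_0^{\pi/2}(1+2\cos t)^n\,dt \sim 3^n\cdot\frac{\sqrt3}{\sqrt n}\cdot\frac{\sqrt\pi}{2}$ is easy to check numerically; working with the normalized integrand $(1/3+2(\cos t)/3)^n$ avoids the $3^n$ overflow. A minimal sketch using composite Simpson integration (the tolerance below is a judgment call; the relative error is $O(1/n)$):</p>

```python
from math import cos, pi, sqrt

def simpson(f, a, b, m=20000):
    """Composite Simpson's rule with 2*m subintervals."""
    h = (b - a) / (2 * m)
    s = (f(a) + f(b)
         + 4 * sum(f(a + (2 * i + 1) * h) for i in range(m))
         + 2 * sum(f(a + 2 * i * h) for i in range(1, m)))
    return s * h / 3

n = 500
integral = simpson(lambda t: (1 / 3 + 2 * cos(t) / 3) ** n, 0, pi / 2)
asymptotic = sqrt(3 * pi) / (2 * sqrt(n))  # the 3^n factor cancels on both sides
ratio = integral / asymptotic
print(ratio)  # close to 1
```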
|
1,658,577 | <p>I'm an electrical/computer engineering student and have taken fair number of engineering math courses. In addition to Calc 1/2/3 (differential, integral and multivariable respectfully), I've also taken a course on linear algebra, basic differential equations, basic complex analysis, probability and signal processing (which was essentially a course on different integral transforms).</p>
<p>I'm really interested in learning rigorous math, however the math courses I've taken so far have been very applied - they've been taught with a focus on solving problems instead of proving theorems. I would have to relearn most of what I've been taught, this time with a focus on proofs. </p>
<p>However, I'm afraid that if I spend a while relearning content I already know, I'll soon become bored and lose motivation. However, I don't think not revisiting topics I already know is a good idea, because it would be next to impossible to learn higher level math without knowing lower level math from a proof based point of view.</p>
| layman | 131,740 | <p>I don't think if you relearn the material you'll get bored because I don't think you would be <em>re</em>learning the material. To relearn something means you've already learned it once, and now you are learning it again. But you said yourself your courses never focused on proving theorems. It's a very different experience to study pure mathematics than to study mathematics as you've already done. In the former, you take a much deeper look at the topics and theorems that you used to solve the problems earlier, and this time you'd be learning the methods of proof that are used to prove the theorems. For homework and exams, you'd most likely be expected to prove given statements on your own. </p>
|
19,261 | <p>Every simple graph $G$ can be represented ("drawn") by numbers in the following way:</p>
<ol>
<li><p>Assign to each vertex $v_i$ a number $n_i$ such that all $n_i$, $n_j$ are coprime whenever $i\neq j$. Let $V$ be the set of numbers thus assigned. <br/></p></li>
<li><p>Assign to each maximal clique $C_j$ a unique prime number $p_j$ which is coprime to every number in $V$.</p></li>
<li><p>Assign to each vertex $v_i$ the product $N_i$ of its number $n_i$ and the prime numbers $p_k$ of the maximal cliques it belongs to.</p></li>
</ol>
<blockquote>
<p>Then $v_i$, $v_j$ are adjacent iff $N_i$
and $N_j$ are not coprime,</p>
</blockquote>
<p>i.e. there is a (maximal) clique they both belong to. <strong>Edit:</strong> It's enough to assign $n_i = 1$ when $v_i$ is not isolated and does not share all of its cliques with another vertex.</p>
<p>Being free in assigning the numbers $n_i$ and $p_j$ gives rise to a lot of possibilities, but also to the following question:</p>
<blockquote>
<p><strong>QUESTION</strong></p>
<p>Can the numbers be assigned <em>systematically</em> such that the greatest $N_i$
is minimal (among all that do the job) — and if so: how?</p>
</blockquote>
<p>It is obvious that the $n_i$ in the first step have to be primes for the greatest $N_i$ to be minimal. I have taken the more general approach for other - partly <a href="https://mathoverflow.net/questions/19076/bringing-number-and-graph-theory-together-a-conjecture-on-prime-numbers/19080#19080">answered </a> - questions like "Can the numbers be assigned such that the set $\lbrace N_i \rbrace_{i=1,..,n}$ fulfills such-and-such conditions?"</p>
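<p>Steps 1–3 are easy to experiment with in code. Here is a minimal sketch on a hand-picked four-vertex graph (a triangle plus a pendant edge); the maximal cliques, the clique primes and the vertex numbers $n_i$ are assigned by hand rather than computed:</p>

```python
from math import gcd
from itertools import combinations

# Tiny example graph: a triangle {0,1,2} plus a pendant edge {2,3}.
edges = {(0, 1), (0, 2), (1, 2), (2, 3)}
# Its maximal cliques, listed by hand for this small example.
cliques = [{0, 1, 2}, {2, 3}]

clique_primes = {0: 2, 1: 3}   # step 2: one prime per maximal clique
n = {0: 5, 1: 7, 2: 1, 3: 1}   # step 1: vertices 0 and 1 share all their cliques,
                               # so they get distinct coprime numbers

# Step 3: N_i = n_i times the primes of the cliques containing v_i.
N = {}
for v in range(4):
    N[v] = n[v]
    for j, c in enumerate(cliques):
        if v in c:
            N[v] *= clique_primes[j]

# Adjacency iff gcd(N_i, N_j) > 1.
recovered = {(i, j) for i, j in combinations(range(4), 2) if gcd(N[i], N[j]) > 1}
print(N, recovered == edges)
```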
| Igor Pak | 4,040 | <p>About the Fary–Milnor theorem. Milnor's original proof is already very nice (see <a href="http://www.jstor.org/stable/1969467" rel="noreferrer">here</a>). I also very much like <a href="http://www.jstor.org/stable/119165" rel="noreferrer">this proof</a> by Alexander & Bishop (see also a version of this proof in <a href="http://www.math.ucla.edu/~pak/book.htm" rel="noreferrer">my book</a>).</p>
|
19,261 | <p>Every simple graph $G$ can be represented ("drawn") by numbers in the following way:</p>
<ol>
<li><p>Assign to each vertex $v_i$ a number $n_i$ such that all $n_i$, $n_j$ are coprime whenever $i\neq j$. Let $V$ be the set of numbers thus assigned. <br/></p></li>
<li><p>Assign to each maximal clique $C_j$ a unique prime number $p_j$ which is coprime to every number in $V$.</p></li>
<li><p>Assign to each vertex $v_i$ the product $N_i$ of its number $n_i$ and the prime numbers $p_k$ of the maximal cliques it belongs to.</p></li>
</ol>
<blockquote>
<p>Then $v_i$, $v_j$ are adjacent iff $N_i$
and $N_j$ are not coprime,</p>
</blockquote>
<p>i.e. there is a (maximal) clique they both belong to. <strong>Edit:</strong> It's enough to assign $n_i = 1$ when $v_i$ is not isolated and does not share all of its cliques with another vertex.</p>
<p>Being free in assigning the numbers $n_i$ and $p_j$ gives rise to a lot of possibilities, but also to the following question:</p>
<blockquote>
<p><strong>QUESTION</strong></p>
<p>Can the numbers be assigned <em>systematically</em> such that the greatest $N_i$
is minimal (among all that do the job) — and if so: how?</p>
</blockquote>
<p>It is obvious that the $n_i$ in the first step have to be primes for the greatest $N_i$ to be minimal. I have taken the more general approach for other - partly <a href="https://mathoverflow.net/questions/19076/bringing-number-and-graph-theory-together-a-conjecture-on-prime-numbers/19080#19080">answered </a> - questions like "Can the numbers be assigned such that the set $\lbrace N_i \rbrace_{i=1,..,n}$ fulfills such-and-such conditions?"</p>
| Joel Fine | 380 | <p>A nice topic to read about is Chern-Weil theory. This is the generalisation of Gauss-Bonnet to higher dimensions and to vector bundles other than the tangent bundle. Put very briefly, topological invariants of a vector bundle over a manifold (its characteristic classes - certain classes in the cohomology of the base) can be computed using the curvature tensor of any choice of connection in the bundle. </p>
<p>The prototype is Gauss-Bonnet in which, as you know, the Euler characteristic of a (compact orientable) surface is equal to a fixed constant times the integral of the scalar curvature of any Riemannian metric on the surface.</p>
|
3,261,846 | <blockquote>
<p>What is the solution to the IVP <span class="math-container">$$y'+y=|x|, \ x \in \mathbb{R}, \ y(-1)=0$$</span></p>
</blockquote>
<p>The general solution of the above problem is <span class="math-container">$y_{g}(x)=ce^{-x}$</span>.</p>
<p>How can I find the particular solution, given that <span class="math-container">$|x|$</span> is not differentiable at the origin? Is there any alternate way to get the solution?</p>
| E.H.E | 187,799 | <p>or use the variation of parameters method
<span class="math-container">$$y=y_c+y_p$$</span>
we know that <span class="math-container">$y_c=ce^{-x}$</span></p>
<p>so
<span class="math-container">$$y_p=u(x)e^{-x}$$</span>
<span class="math-container">$$y'_p=u'(x)e^{-x}-u(x)e^{-x}$$</span>
substitute into the D.E. to get
<span class="math-container">$$u(x)=\int|x|e^{x}\,dx=e^x(x-1)\operatorname{sgn}(x)+\operatorname{sgn}(x)+1$$</span>
(the constants of integration on the two half-lines are chosen so that $u$ is continuous at $x=0$),
so the
<span class="math-container">$$y_p=(x-1)\operatorname{sgn}(x)+e^{-x}[\operatorname{sgn}(x)+1]$$</span></p>
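<p>This particular solution can be sanity-checked numerically: a central-difference derivative of $y_p$ should satisfy $y_p'+y_p=|x|$ away from the origin (a quick sketch, not part of the method):</p>

```python
from math import exp, copysign

def sgn(x):
    return 0.0 if x == 0 else copysign(1.0, x)

def y_p(x):
    return (x - 1) * sgn(x) + exp(-x) * (sgn(x) + 1)

def residual(x, h=1e-6):
    """y_p' + y_p - |x|, with y_p' approximated by a central difference."""
    yp_prime = (y_p(x + h) - y_p(x - h)) / (2 * h)
    return yp_prime + y_p(x) - abs(x)

# Residuals at a few points away from the kink at 0 (should be ~0):
print([round(residual(x), 6) for x in (-2.0, -0.5, 0.5, 2.0)])
```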
|
4,539,739 | <p>Here is the curve <span class="math-container">$y=2^{n-1}\prod\limits_{k=0}^n \left(x-\cos{\frac{k\pi}{n}}\right)$</span>, shown with example <span class="math-container">$n=8$</span>, together with the unit circle centred at the origin.</p>
<p><a href="https://i.stack.imgur.com/mBNbY.png" rel="noreferrer"><img src="https://i.stack.imgur.com/mBNbY.png" alt="enter image description here" /></a></p>
<p>Call the arc lengths between neighboring roots <span class="math-container">$l_1, l_2, l_3, ..., l_n$</span>.</p>
<blockquote>
<p>What is the exact value of <span class="math-container">$L=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n l_k$</span> ?</p>
</blockquote>
<p>Desmos suggests that <span class="math-container">$L$</span> exists and is approximately <span class="math-container">$2.94$</span>. Maybe <span class="math-container">$\frac{8}{e}$</span> ?</p>
<p><strong>Context</strong></p>
<p>I have studied this curve, and found that it has several interesting properties.</p>
<ul>
<li><p>The curve is tangent to the unit circle at <span class="math-container">$n$</span> points, which are uniformly spaced around the circle.</p>
</li>
<li><p>The magnitude of the gradient at each root inside the circle is <span class="math-container">$n$</span>; the magnitude of the gradient at <span class="math-container">$x=\pm1$</span> is <span class="math-container">$2n$</span>.</p>
</li>
<li><p>The total area of the regions enclosed by the curve and the <em>x</em>-axis is <span class="math-container">$1$</span>.</p>
</li>
<li><p>As <span class="math-container">$n\to\infty$</span>, the volume of revolution of those regions about the <em>x</em>-axis approaches <span class="math-container">$\frac{1}{2}$</span> of the volume of the unit sphere, and the volume of revolution of those regions about the <em>y</em>-axis approaches <span class="math-container">$\frac{1}{\pi}$</span> of the volume of the unit sphere.</p>
</li>
<li><p>As <span class="math-container">$n\to\infty$</span>, if the curve is magnified so that the average area of those regions is always <span class="math-container">$2$</span>, then the product of those areas approaches <span class="math-container">$4\cosh^2{\left(\frac{\sqrt{\pi^2-8}}{2}\right)}\approx6.18$</span>, as shown <a href="https://math.stackexchange.com/a/4472892/398708">here</a>.</p>
</li>
</ul>
<p>I recently discovered that the product of arc lengths between neighboring roots seems to converge to a positive number as <span class="math-container">$n\to\infty$</span>. Hence, my question.</p>
<p>(If you know any other interesting properties of this curve, feel free to add them in the comments.)</p>
<p><strong>My attempt</strong></p>
<p>The part of the curve inside the circle can be expressed as <span class="math-container">$y=-\sqrt{1-x^2}\sin{(n\arccos{x})}$</span>. So</p>
<p><span class="math-container">$$L=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n \int_{\cos{\frac{k\pi}{n}}}^{\cos{\frac{(k-1)\pi}{n}}}\sqrt{1+\left(n\cos(n\arccos{x})+\frac{x\sin(n\arccos{x})}{\sqrt{1-x^2}}\right)^2}\,dx$$</span></p>
<p>I do not know how to evaluate this limit. I tried taking the log of the product, without success. I tried to approximate each integral as areas of triangles (hoping that that approximation would become equality with the limit) and a rectangle at the bottom, multiplying each triangle's area by <span class="math-container">$\frac{4}{\pi}$</span> (which is the ratio of areas under sine or cosine to the area of an inscribed triangle), but that resulted in a different limit.</p>
<p>EDIT</p>
<p>Further numerical analysis strongly suggests that <span class="math-container">$L=\frac{8}{e}$</span>. I noticed that when <span class="math-container">$n$</span> doubles, the ratio of the two products is a certain number (which is close to <span class="math-container">$1$</span>), and when <span class="math-container">$n$</span> is doubled again, the ratio's distance to <span class="math-container">$1$</span> is approximately halved. So then I projected that the product indeed approaches <span class="math-container">$\frac{8}{e}$</span>. (I don't have Mathematica; anyone who has it is welcome to confirm this.)</p>
<p>I have simplified the expression of <span class="math-container">$L$</span>. Letting <span class="math-container">$x=\cos{\frac{u}{n}}$</span>, and ignoring the <span class="math-container">$1$</span> in the <span class="math-container">$\sqrt{1+(...)^2}$</span> (I think this is OK since <span class="math-container">$n\to\infty$</span>), we get</p>
<p><span class="math-container">$$L=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n \int_{k\pi}^{(k-1)\pi}\sqrt{\left(n\cos{u}+(\sin{u})\cot{\frac{u}{n}}\right)^2}\left(-\frac{1}{n}\sin{\frac{u}{n}}\right)du$$</span></p>
<p><span class="math-container">$$\space{}=\lim\limits_{n\to\infty}\prod\limits_{k=1}^n \int_{(k-1)\pi}^{k\pi}\left|(\cos{u})\sin{\frac{u}{n}}+\frac{1}{n}(\sin{u})\cos{\frac{u}{n}}\right|du$$</span></p>
<p>So why is this equal to <span class="math-container">$\frac{8}{e}$</span> ?</p>
| Anders Kaseorg | 38,671 | <p>A simple parameterization of the curve is given by</p>
<p><span class="math-container">$$x = \cos \frac tn, \quad y = -{\sin t \sin \frac tn}, \quad 0 < t < nπ.$$</span></p>
<p>If we could approximate the arc length of the <span class="math-container">$k$</span>th lobe <span class="math-container">$(k - 1)π < t < kπ$</span> by twice its height at <span class="math-container">$t = \left(k - \frac12\right)π$</span> (the point of tangency with the circle), we’d get</p>
<p><span class="math-container">$$\prod_{k = 1}^n l_k ≈ \prod_{k = 1}^n 2\sin \frac{(k - \frac12)π}{n}.$$</span></p>
<p>That product equals <span class="math-container">$2$</span>. (Proof: observe that <span class="math-container">$z^n + 1 = \prod_{k = 1}^n (z - e^{2i(k - 1/2)π/n})$</span>, because these monic polynomials have the same roots; substitute <span class="math-container">$z = 1$</span> and take absolute values.)</p>
<p>Unfortunately, as I think you’ve already observed, this approximation isn’t good enough. For <span class="math-container">$k$</span> close to <span class="math-container">$1$</span> or <span class="math-container">$n$</span>, the lobe shoots significantly past the tangency point before turning back toward the <span class="math-container">$x$</span> axis. However, we can try to correct the approximation as follows.</p>
<p>If we translate the <span class="math-container">$k$</span>th lobe by <span class="math-container">$(-1, 0)$</span> and scale it by a factor of <span class="math-container">$n$</span>, we get</p>
<p><span class="math-container">$$x = n\left(\cos \frac tn - 1\right), \quad y = -n \sin t \sin \frac tn.$$</span></p>
<p>As <span class="math-container">$n → ∞$</span>, this approaches</p>
<p><span class="math-container">$$x = 0, \quad y = -t \sin t,$$</span></p>
<p>with the arc length converging as <span class="math-container">$O(n^{-2})$</span>. This limiting arc length is</p>
<p><span class="math-container">$$\max_{(k - 1)π < t < kπ} 2\lvert t \sin t\rvert = 2\lvert r_k\sin r_k\rvert,$$</span></p>
<p>where <span class="math-container">$r_k$</span> is the <span class="math-container">$k$</span>th positive root of <span class="math-container">$r \cos r + \sin r = 0$</span>. But we’ve underestimated it as</p>
<p><span class="math-container">$$2n \sin \frac{\left(k - \frac12\right)π}{n} \to 2\left(k - \frac12\right)π,$$</span></p>
<p>also converging as <span class="math-container">$O{(n^{-2})}$</span>. So we’ll need to correct the approximation by a factor of</p>
<p><span class="math-container">$$\frac{\lvert r_k\sin r_k\rvert}{\left(k - \frac12\right)π}.$$</span></p>
<p>The corrections in lobes <span class="math-container">$k$</span> and <span class="math-container">$n + 1 - k$</span> are the same, so the overall correction tends to</p>
<p><span class="math-container">$$L = 2\prod_{k = 1}^\infty \left(\frac{r_k \sin r_k}{\left(k - \frac12\right)π}\right)^2,$$</span></p>
<p>with overall error <span class="math-container">$O(n^{-1}) → 0$</span>. We can compute <span class="math-container">$L ≈ 2.94303552937$</span>, whose proximity to <span class="math-container">$\frac 8e$</span> is extremely striking, although it’s hard to confirm since <span class="math-container">$r_k$</span> don’t seem to have a closed form.</p>
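<p>(An addition, not part of the original answer.) The product can be checked in plain Python: each $r_k$ is found by bisection in $\left((k-\tfrac12)\pi,\, k\pi\right)$, and since the factors are $1+O(k^{-2})$, truncating after a few thousand of them already reproduces $L \approx 2.94303$:</p>

```python
import math

def root(k, iters=80):
    # k-th positive root of r*cos(r) + sin(r) = 0, located in ((k-1/2)*pi, k*pi)
    f = lambda r: r * math.cos(r) + math.sin(r)
    lo, hi = (k - 0.5) * math.pi + 1e-12, k * math.pi
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

log_L = math.log(2.0)
for k in range(1, 4001):          # factors decay like 1 + O(k^-2)
    rk = root(k)
    log_L += 2.0 * math.log(abs(rk * math.sin(rk)) / ((k - 0.5) * math.pi))
L = math.exp(log_L)               # ~ 2.94303, strikingly close to 8/e
```

With 4000 factors the truncation error is of order $10^{-4}$, well inside the comparison below.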
|
1,443,441 | <blockquote>
<p>If <span class="math-container">$\frac{x^2+y^2}{x+y}=4$</span>,then all possible values of <span class="math-container">$(x-y)$</span> are given by<br></p>
<p><span class="math-container">$(A)\left[-2\sqrt2,2\sqrt2\right]\hspace{1cm}(B)\left\{-4,4\right\}\hspace{1cm}(C)\left[-4,4\right]\hspace{1cm}(D)\left[-2,2\right]$</span><br></p>
</blockquote>
<p>I tried this question.<br></p>
<p><span class="math-container">$\frac{x^2+y^2}{x+y}=4\Rightarrow x+y-\frac{2xy}{x+y}=4\Rightarrow x+y=\frac{2xy}{x+y}+4$</span><br></p>
<p><span class="math-container">$x-y=\sqrt{(\frac{2xy}{x+y}+4)^2-4xy}$</span>, but I am not able to proceed. I am stuck here. Is my method wrong?</p>
| Jack D'Aurizio | 44,121 | <p>$\frac{x^2+y^2}{x+y}=4$ is equivalent to $(x-2)^2+(y-2)^2 = 8$, hence $(x,y)$ lies on a circle centered at $(2,2)$ with radius $2\sqrt{2}$. The tangents at the points $(0,4)$ and $(4,0)$ are parallel to the $y=x$ line, so the right answer is $(C)$.</p>
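<p>(Numerical confirmation, added to the original answer.) Sampling the circle $(x-2)^2+(y-2)^2=8$ shows $x-y$ filling out $[-4,4]$, since $x-y=2\sqrt2(\cos t-\sin t)=4\cos(t+\pi/4)$:</p>

```python
import math

R = 2 * math.sqrt(2)
diffs = []
for i in range(200000):
    t = 2 * math.pi * i / 200000
    x, y = 2 + R * math.cos(t), 2 + R * math.sin(t)
    if abs(x + y) > 1e-9:          # skip the origin, where x + y = 0
        diffs.append(x - y)

lo, hi = min(diffs), max(diffs)    # should be very close to -4 and 4
```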
|
1,443,441 | <blockquote>
<p>If <span class="math-container">$\frac{x^2+y^2}{x+y}=4$</span>,then all possible values of <span class="math-container">$(x-y)$</span> are given by<br></p>
<p><span class="math-container">$(A)\left[-2\sqrt2,2\sqrt2\right]\hspace{1cm}(B)\left\{-4,4\right\}\hspace{1cm}(C)\left[-4,4\right]\hspace{1cm}(D)\left[-2,2\right]$</span><br></p>
</blockquote>
<p>I tried this question.<br></p>
<p><span class="math-container">$\frac{x^2+y^2}{x+y}=4\Rightarrow x+y-\frac{2xy}{x+y}=4\Rightarrow x+y=\frac{2xy}{x+y}+4$</span><br></p>
<p><span class="math-container">$x-y=\sqrt{(\frac{2xy}{x+y}+4)^2-4xy}$</span>, but I am not able to proceed. I am stuck here. Is my method wrong?</p>
| Bart Michels | 43,288 | <p>The condition $\frac{x^2+y^2}{x+y}=4$ is equivalent to $(x+y)^2+(x-y)^2=8(x+y)$. Let $x+y=s$ and $x-y=d$. Note that this induces a bijection from $\Bbb R^2$ to itself, meaning that for every pair $(s,d)$ there exist corresponding $x,y$. We have $0\leq d^2=8s-s^2$. The nonnegative values that $8s-s^2$ can take are $[0,16]$, so $d$ takes values in $[-4,4]$.</p>
|
452,306 | <p>I am trying to find the radius of a cone combined with a cylinder;
see my other question
(Solving for radius of a combined shape of a cone and a cylinder where the cone's base is concentric with the cylinder? part 2).</p>
<p>I have a volume calculation that Has been reduced as far as I know how to.</p>
<p>Known values:</p>
<p>$$v=65712.4$$
$$x=3$$
$$y=2$$
$$\theta=30$$
$$r=\text{unknown}$$</p>
<p>$$v=\pi r^3\left(2y-\frac{2}{3}\tan\theta-\frac{x}{r}\right)$$</p>
<p>Since I haven't solved a quadratic equation in a while, I would appreciate it explained in steps.</p>
<p>Thank You For Your Time.</p>
| Mariano Suárez-Álvarez | 274 | <p>Let me suppose your group is second-countable, so that we need only worry about sequences.</p>
<p>It is enough to show that for every $g\in G$ we have that $g^{-1}$ is an element of the closure of $\{g^i:i\geq0\}$.</p>
<p>So pick any convergent subsequence of $(g^i)_{i\geq0}$, say $(g^{n_i})_{i\geq0}$, and let $h\in G$ be its limit. Then $(g^{n_i-n_{i-1}-1})_{i\geq1}$ is another subsequence. What is its limit?</p>
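<p>(An illustration of mine, not part of the original hint.) Take $G$ to be the circle group and $g$ rotation by an irrational angle: some high power $g^n$ returns arbitrarily close to the identity, and then $g^{n-1}=g^n g^{-1}$ is just as close to $g^{-1}$:</p>

```python
import math

two_pi = 2 * math.pi
theta = math.sqrt(2)          # rotation angle; theta/(2*pi) is irrational

def dist_to_id(a):
    # distance from the rotation by angle a to the identity, on the circle
    a %= two_pi
    return min(a, two_pi - a)

# find n <= 200000 with g^n closest to the identity; by pigeonhole the
# minimum is below 2*pi/200000, so g^(n-1) is that close to g^(-1)
n = min(range(1, 200001), key=lambda i: dist_to_id(i * theta))
assert dist_to_id(n * theta) < 1e-4
# the angle of g^(n-1) differs from the angle -theta of g^(-1) by the same amount
assert dist_to_id((n - 1) * theta - (-theta)) < 1e-4
```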
|
3,997,532 | <p>Please help me with this. I can't prove the result. I tried integration by parts and other approaches, but nothing is working.</p>
<p><span class="math-container">$$\int_{-1}^{1}{\frac{x^2}{e^x+1}}dx$$</span></p>
| Riemann | 27,899 | <p>Let <span class="math-container">$t=-x$</span>, then
<span class="math-container">\begin{align*}
\int_{-1}^{1}{\frac{x^2}{e^x+1}}dx
&=\int_{1}^{-1}{\frac{t^2}{e^{-t}+1}}(-dt)\\
&=\int_{-1}^{1}{\frac{t^2}{e^{-t}+1}}dt\\
&=\int_{-1}^{1}{\frac{x^2e^x}{e^x+1}}dx.
\end{align*}</span>
<span class="math-container">\begin{align*}
\int_{-1}^{1}{\frac{x^2}{e^x+1}}dx
&=\frac{1}{2}\left(\int_{-1}^{1}{\frac{x^2}{e^x+1}}dx
+\int_{-1}^{1}{\frac{x^2e^x}{e^x+1}}dx\right)\\
&=\frac{1}{2}\int_{-1}^{1}x^2dx=\int_{0}^{1}x^2dx\\
&=\frac{1}{3}.
\end{align*}</span></p>
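<p>(A numerical cross-check, not in the original answer.) Composite Simpson's rule on $[-1,1]$ confirms the value $\frac13$:</p>

```python
import math

def f(x):
    return x * x / (math.exp(x) + 1.0)

# composite Simpson's rule on [-1, 1]
n = 2000
h = 2.0 / n
total = f(-1.0) + f(1.0)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(-1.0 + i * h)
integral = total * h / 3.0
assert abs(integral - 1.0 / 3.0) < 1e-10
```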
|
317,175 | <p>What tools would we like to use here? Is there any easy way to establish the limit?</p>
<p>$$\sum_{k=1}^{\infty}{1 \over k^{2}}\,\cot\left(1 \over k\right)$$</p>
<p>Thanks!</p>
<p>Sis!</p>
| Community | -1 | <p>Since $\displaystyle\cot{\frac{1}{x}}=\frac{\cos\frac{1}{x}}{\sin \frac{1}{x}}$, with $\cos\frac{1}{x}\sim_{+\infty}1$ and $\sin\frac{1}{x}\sim_{+\infty}\frac{1}{x}$, we get
$$\frac{1}{k^2}\cot{\frac{1}{k}}\sim_{+\infty}\frac{1}{k}.$$
So the series with positive terms $\displaystyle\sum_k \frac{1}{k^2}\cot{\frac{1}{k}}$ is divergent by comparison with the harmonic series.</p>
<p><a href="http://en.wikipedia.org/wiki/Harmonic_series_(mathematics)" rel="nofollow">http://en.wikipedia.org/wiki/Harmonic_series_(mathematics)</a></p>
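<p>(A small numerical illustration, added by me.) Since $\cot(1/k)=k-\frac{1}{3k}+O(k^{-3})$, the terms behave like $1/k$ and the partial sums grow like $\ln n$, with the offset $s_n-\ln n$ stabilizing:</p>

```python
import math

def term(k):
    return math.cos(1.0 / k) / (k * k * math.sin(1.0 / k))

offsets = []
checkpoints = {10**3, 10**4, 10**5}
s = 0.0
for k in range(1, 10**5 + 1):
    s += term(k)
    if k in checkpoints:
        offsets.append(s - math.log(k))   # stabilizes => partial sums ~ ln n

# consecutive offsets agree ever more closely: the series diverges like ln n
assert abs(offsets[2] - offsets[1]) < abs(offsets[1] - offsets[0])
assert abs(offsets[2] - offsets[1]) < 1e-3
```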
|
1,068,631 | <p>I want to find the solutions of the equation $$\left[z- \left( 4+\frac{1}{2}i\right)\right]^k = 1 $$ </p>
<p>in terms of roots of unity.</p>
<p>When I try to solve this, I get
\begin{align*}z - 4 - \dfrac i2 &= 1\\
z-\dfrac{i}{2}&=5\\
\dfrac{2z-i}2 &= 5\\
z&= 5 + \dfrac i2\end{align*}</p>
<p>Is this the right approach?</p>
<p>I want to do the same for $$\left[z-\left(4+\frac{1}{2}i\right)\right]^k = 2$$</p>
<p>as well.</p>
| Przemysław Scherwentke | 72,361 | <p>HINT: You should find (or only name?) $w_0$, $w_1$, ... $w_{k-1}$, which are $k$-th roots of 1 (2, respectively) and compare them one by one with $z-4+\frac12i$.</p>
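<p>(Concrete illustration of the hint, added by me, with $k=5$ as an example value.) The solutions are $z = 4+\frac{i}{2}+w_j$ with $w_j$ the $k$-th roots of $1$, and $z = 4+\frac{i}{2}+2^{1/k}w_j$ for the right-hand side $2$:</p>

```python
import cmath
import math

k = 5                      # example exponent
base = 4 + 0.5j            # the shift 4 + i/2

# k-th roots of unity, and the solutions of [z - (4 + i/2)]^k = 1
w = [cmath.exp(2j * math.pi * m / k) for m in range(k)]
sols1 = [base + wm for wm in w]
for z in sols1:
    assert abs((z - base) ** k - 1) < 1e-9

# for right-hand side 2, scale each root of unity by 2**(1/k)
sols2 = [base + 2 ** (1 / k) * wm for wm in w]
for z in sols2:
    assert abs((z - base) ** k - 2) < 1e-9
```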
|
2,502,963 | <p>How do you prove that $e=\sum_{n=0}^{\infty}\frac{1}{n!}$? Here I am assuming $e:=\lim_{n\to\infty}(1+\frac{1}{n})^n$. Do you have any good PDF file or booklet available online on this? I do not like how my analysis text handles this...</p>
| 5xum | 112,884 | <p>For me during calculus, the steps were:</p>
<ol>
<li>Define $e$ as $\lim_{n\to\infty}\left(1+\frac1n\right)^n$</li>
<li>Define $\ln x$ as the inverse function to $e^x$.</li>
<li>Prove that $\frac{d}{dx} \ln x = \frac1x$</li>
<li>From 3, prove that $\frac{d}{dx}e^x=e^x$</li>
<li>Prove that, if a function $f$ is infinitly differentiable, then $$f(x)=\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!}x^n$$ plus an error term which goes to $0$ in a lot of cases (including $e^x$).</li>
<li>From $5$, conclude that $$e^{x}=\sum_{n=0}^\infty\frac{x^n}{n!}$$</li>
<li>Plug in $x=1$ into (6).</li>
</ol>
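<p>(A quick check, added to the original answer.) The two definitions of $e$ from steps 1 and 6–7 agree numerically; note how much faster the series converges than the limit:</p>

```python
import math

# partial sum of sum 1/n! -- converges extremely fast (error < 1/20!)
series = sum(1.0 / math.factorial(n) for n in range(20))

# the limit definition, at n = 10**7 -- converges only like O(1/n)
limit_approx = (1.0 + 1.0 / 10**7) ** 10**7

assert abs(series - math.e) < 1e-12
assert abs(limit_approx - math.e) < 1e-6
```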
|
133,711 | <p>I am trying to show that $$\int_{-\pi}^{\pi}e^{\alpha \cos t}\sin(\alpha \sin t)dt=0$$</p>
<p>Where $\alpha$ is a real constant.</p>
<hr>
<p>I found the problem while studying a particular question on this site: <a href="https://math.stackexchange.com/questions/124868/evaluate-int-c-frace-alpha-zzdz-where-alpha-in-mathbb-r-and-c-i">this one.</a> It has become quite challenging for me: I was trying to make life easier, but I got stuck!</p>
<p>EDIT:
The integral is from $-\pi$ to $\pi$</p>
<p>EDIT 2:
I am sorry for this edit, but it was a typo, which I have now fixed. In my question I have $e^{\alpha \cos t}$, not $e^\alpha$ only. I am very sorry. </p>
| N. S. | 9,176 | <p><strong>Hint</strong> Your function is ODD.... </p>
<p>This is one of the pretty standard result in Calculus:</p>
<p>If $f(t)$ is a continuous, odd function, then </p>
<p>$$\int_{-a}^a f(t) dt =0$$</p>
<p>Proof: Substitute $u=-t$. </p>
<p>Second Proof: Think of the integral as a signed area....</p>
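<p>(Numerical illustration, added by me, with an arbitrary value of $\alpha$.) Simpson's rule on $[-\pi,\pi]$ confirms the integral vanishes:</p>

```python
import math

alpha = 1.7    # arbitrary real constant

def f(t):
    # the integrand: odd in t, since cos is even and sin is odd
    return math.exp(alpha * math.cos(t)) * math.sin(alpha * math.sin(t))

# composite Simpson's rule on [-pi, pi]
n = 2000
h = 2.0 * math.pi / n
total = f(-math.pi) + f(math.pi)
for i in range(1, n):
    total += (4 if i % 2 else 2) * f(-math.pi + i * h)
integral = total * h / 3.0
assert abs(integral) < 1e-10
```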
|
1,007,533 | <p>Prove that if $v$ is an eigenvector for the matrix $A$, then $A^2v=c^2v$</p>
<p>Pretty much all I have is:</p>
<p>$Av=cv$ where $v$ is a nonzero vector</p>
| Milly | 182,459 | <p>Apply $A$ to both sides of $Av=cv$ (i.e. $A^2v=Acv$), and use that $A$ commutes with multiplication by constants...</p>
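<p>(A concrete instance, added by me.) For example $A=\begin{pmatrix}2&amp;1\\1&amp;2\end{pmatrix}$ has eigenvector $v=(1,1)$ with $c=3$, and applying $A$ twice scales $v$ by $c^2=9$:</p>

```python
# 2x2 example: A v = 3 v, hence A(Av) = 3*Av = 9 v
A = [[2, 1], [1, 2]]
v = [1, 1]

def matvec(M, u):
    return [M[0][0] * u[0] + M[0][1] * u[1],
            M[1][0] * u[0] + M[1][1] * u[1]]

Av = matvec(A, v)
AAv = matvec(A, Av)
assert Av == [3 * x for x in v]      # A v = c v  with c = 3
assert AAv == [9 * x for x in v]     # A^2 v = c^2 v
```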
|
4,074,718 | <p>The angle bisectors of <span class="math-container">$\angle B$</span> and <span class="math-container">$\angle C_{ex}$</span> intersect at point <span class="math-container">$E$</span>. If <span class="math-container">$\angle A=70^\circ$</span>, what is <span class="math-container">$\angle E$</span> equal to?</p>
<p><a href="https://i.stack.imgur.com/dOPmK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dOPmK.png" alt="enter image description here" /></a></p>
<p>I tried to solve this question as follows:</p>
<p><span class="math-container">$a=\angle EBC$</span> and <span class="math-container">$b=\angle BCA$</span></p>
<p><span class="math-container">$2a+b=110^{\circ}$</span></p>
<p>Also <span class="math-container">$\angle ACE= 90^\circ-\frac{b}{2}$</span></p>
<p>This is as far as I got. I don't know how to work out what <span class="math-container">$\angle E$</span> is equal to. Could you please explain to me how to solve this question?</p>
| MathIsNice1729 | 274,536 | <p><strong>Hint :</strong> Let <span class="math-container">$f$</span> be a polynomial with integer coefficients. Then if <span class="math-container">$a+\sqrt b$</span> is a root of <span class="math-container">$f$</span>, where <span class="math-container">$a, b \in \Bbb Q, \sqrt b \notin \Bbb Q$</span>, then <span class="math-container">$a-\sqrt b$</span> is also a root of <span class="math-container">$f$</span>.</p>
|
4,003,948 | <p>In the Book that I'm reading (Mathematics for Machine Learning), the following para is given, while listing the properties of a matrix determinant:</p>
<blockquote>
<p>Similar matrices (Definition 2.22) possess the same determinant.
Therefore, for a linear mapping <span class="math-container">$Φ : V → V$</span> all transformation matrices
<span class="math-container">$A_Φ$</span> of <span class="math-container">$Φ$</span> have the same determinant. Thus, the determinant is invariant
to the choice of basis of a linear mapping.</p>
</blockquote>
<p>I know that matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are similar if they satisfy <span class="math-container">$B=C^{-1}AC$</span>.
I can prove that determinants of such <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are equal using other properties of a determinant.</p>
<p>But beyond that I don't understand what this paragraph is saying. I can understand all matrices <span class="math-container">$Y$</span> such that <span class="math-container">$Y=X^{-1}AX$</span> have the same determinant as <span class="math-container">$A$</span>, for varying <span class="math-container">$X$</span>s.</p>
<p>But how do I connect this to linear mappings of the form <span class="math-container">$Φ : V → V$</span>. What does <span class="math-container">$Φ : V → V$</span> mean here? Maybe someone can give me an example.</p>
<p>EDIT:
This video is pretty basic, but it helped me understand better
<a href="https://www.youtube.com/watch?v=s4c5LQ5a4ek" rel="nofollow noreferrer">https://www.youtube.com/watch?v=s4c5LQ5a4ek</a></p>
| Community | -1 | <p>Given an abstract finite dimensional (real) vector space <span class="math-container">$V$</span>, a <em><a href="https://en.wikipedia.org/wiki/Linear_map" rel="nofollow noreferrer">linear transformation</a></em> <span class="math-container">$\Phi:V\to V$</span> is a map such that for any <span class="math-container">$\lambda\in\mathbb{R}$</span>, and any <span class="math-container">$v,w\in V$</span>:
<span class="math-container">$$
\Phi(v+w)=\Phi(v)+\Phi(w),\quad \Phi(\lambda v)=\lambda\Phi(v)
$$</span></p>
<p>Note that in this definition, no matrix is mentioned and <span class="math-container">$V$</span> is not necessarily <span class="math-container">$\mathbb{R}^n$</span> (although <span class="math-container">$V$</span> is <em>isomorphic</em> to <span class="math-container">$\mathbb{R}^n$</span>).</p>
<p>If you choose an <a href="https://en.wikipedia.org/wiki/Basis_(linear_algebra)#Ordered_bases_and_coordinates" rel="nofollow noreferrer">ordered basis</a> <span class="math-container">$\beta$</span> for <span class="math-container">$V$</span>, then <span class="math-container">$\Phi$</span> can be represented as an <span class="math-container">$n\times n$</span> matrix <span class="math-container">$B=[\Phi]_{\beta}$</span>. If you choose another ordered basis <span class="math-container">$\alpha$</span> for <span class="math-container">$V$</span>, then <span class="math-container">$\Phi$</span> can be represented as another <span class="math-container">$n\times n$</span> matrix <span class="math-container">$A=[\Phi]_{\alpha}$</span>. In linear algebra, it is known that the two matrices <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are "similar", and thus have the same determinant.</p>
<p>Since the determinant is invariant to the choice of basis, we can talk about the determinant of a linear mapping.</p>
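<p>(An example of mine, not from the original answer.) The invariance can be checked exactly with rational arithmetic: the two matrices representing the same map in different bases differ, but their determinants agree:</p>

```python
from fractions import Fraction as F

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    d = det2(M)
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

A = [[F(2), F(1)], [F(0), F(3)]]     # a matrix of Phi in one basis
C = [[F(1), F(2)], [F(1), F(1)]]     # change-of-basis matrix (det = -1)
B = matmul(matmul(inv2(C), A), C)    # the matrix of Phi in the other basis

assert B != A                        # different representing matrices...
assert det2(B) == det2(A) == F(6)    # ...but the same determinant
```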
|
2,227,047 | <p>For any $x=(x_1, \dotsc, x_n)$, $y=(y_1, \dotsc, y_n)$ in $\mathbf E^n$, define $\|x-y\|=\max_{1 \le k \le n}|x_k-y_k|$. Let $f\colon\mathbf E^n \to \mathbf E^n$ be given by $f(x)=y$, where $y_k= \sum_{i=1}^n a_{ki} x_i + b_k$ for $k =1,2, \dotsc,n$. Under what conditions is $f$ a contraction mapping?</p>
<p>Any hint or solution for this question? I am a beginner in this course and cannot yet understand it clearly.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</p>
<blockquote>
<p>$\ds{\alpha \in \mathbb{C}\setminus\left(-\infty,0\right].\quad}$ Lets
$\ds{\alpha = \verts{\alpha}\exp\pars{\ic\phi}\quad}$ where
$\ds{\quad-\pi < \phi < \pi\quad}$ and $\ds{\quad\alpha \not= 0}$.</p>
</blockquote>
<p>\begin{align}
&\int_{-\infty}^{\infty}{\dd x \over \verts{1 + \alpha x^{2}}} =
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{\root{\verts{\alpha}}\dd x \over \verts{\vphantom{\Large A} \verts{\alpha}x^{2} + \verts{\alpha}/\alpha}}
\\[5mm] \stackrel{\root{\verts{\alpha}}x\ \mapsto\ x}{=}&
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{\dd x \over \verts{\vphantom{\Large A} x^{2} + \bar{\alpha}/\verts{\alpha}}} =
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{\dd x \over \verts{\vphantom{\Large A} x^{2} + \expo{-\ic\phi}}}
\\[5mm] = &\
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{\dd x \over \root{\pars{x^{2} + \expo{-\ic\phi}}\pars{x^{2} + \expo{\ic\phi}}}}
\\[5mm] = &\
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{\dd x \over \root{x^{4} + 2\cos\pars{\phi}x^{2} + 1}}
\\[5mm] = &\
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{1 \over \root{x^{2} + 2\cos\pars{\phi} + 1/x^{2}}}\,{\dd x \over x}
\\[5mm] = &\
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{1 \over \root{\pars{x - 1/x}^{2} + 2 + 2\cos\pars{\phi}}}\,{\dd x \over x}
\\[5mm] = &\
{2 \over \root{\verts{\alpha}}}\int_{0}^{\infty}
{1 \over \root{\pars{x - 1/x}^{2} + 4\cos^{2}\pars{\phi/2}}}\,{\dd x \over x}
\end{align}
With the change of variables
$\ds{t = x - {1 \over x}}$ and $\ds{x = {\root{t^{2} + 4} + t \over 2}}$:
\begin{align}
&\int_{-\infty}^{\infty}{\dd x \over \verts{1 + \alpha x^{2}}} =
{2 \over \root{\verts{\alpha}}}\int_{-\infty}^{\infty}
{\dd t \over \root{t^{2} + 4\cos^{2}\pars{\phi/2}}\root{t^{2} + 4}}
\\[5mm] \stackrel{t\ =\ 2\tan\pars{\theta}}{=}\,\,\,&\
{4 \over \root{\verts{\alpha}}}\int_{0}^{\pi/2}
{2\sec^{2}\pars{\theta} \over
\root{4\tan^{2}\pars{\theta} + 4\cos^{2}\pars{\phi/2}}\bracks{2\sec\pars{\theta}}}\,\dd\theta
\\[5mm] = &\
{2 \over \root{\verts{\alpha}}}\int_{0}^{\pi/2}
{\dd\theta \over
\root{\sin^{2}\pars{\theta} + \cos^{2}\pars{\phi/2}\cos^{2}\pars{\theta}}}
\\[5mm] = &\
{2 \over \root{\verts{\alpha}}}\int_{0}^{\pi/2}
{\dd\theta \over
\root{\cos^{2}\pars{\phi/2} + \sin^{2}\pars{\phi/2}\sin^{2}\pars{\theta}}}
\\[5mm] = &\
\bbx{\ds{{2 \over \root{\verts{\alpha}}}
\,\mrm{K}\pars{\sin^{2}\pars{\phi \over 2}}}}\,;\qquad\alpha \not= 0\,,\quad
\phi = \,\mrm{arg}\pars{\alpha}\,,\quad \phi \in \pars{-\pi,\pi}
\end{align}</p>
<blockquote>
<p>$\ds{\mrm{K}}$ is the <em>Complete Elliptic Integral of the First Kind</em>.</p>
</blockquote>
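<p>(A numerical sanity check of the closed form, added by me; it assumes the parameter convention $\mathrm{K}(m)$ with $m=k^2$, which is consistent with $\phi=0$ giving $\int dx/(1+x^2)=\pi$.) With $\alpha = e^{i\pi/2} = i$ the integral is $\int_{-\infty}^{\infty} dx/\sqrt{1+x^4}$, and $\mathrm{K}(m) = \pi/(2\,\mathrm{agm}(1,\sqrt{1-m}))$ via the arithmetic–geometric mean:</p>

```python
import math

def K(m):
    # complete elliptic integral of the first kind, parameter m = k^2,
    # computed from the arithmetic-geometric mean
    a, g = 1.0, math.sqrt(1.0 - m)
    for _ in range(60):
        a, g = 0.5 * (a + g), math.sqrt(a * g)
    return math.pi / (2.0 * a)

phi = math.pi / 2                  # alpha = exp(i*phi) = i, |alpha| = 1
rhs = 2.0 * K(math.sin(phi / 2) ** 2)

# LHS: integrand 1/|1 + i x^2| = 1/sqrt(1 + x^4); substitute x = tan(u)
def h(u):
    t = math.tan(u)
    return (1.0 + t * t) / math.sqrt(1.0 + t ** 4)

n = 4000
lo, hi = -math.pi / 2 + 1e-9, math.pi / 2 - 1e-9
step = (hi - lo) / n
total = h(lo) + h(hi)
for i in range(1, n):
    total += (4 if i % 2 else 2) * h(lo + i * step)
lhs = total * step / 3.0

assert abs(lhs - rhs) < 1e-5       # both sides ~ 3.70815
```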
|
1,051,372 | <p>If $|z_1|=1,|z_2|=1$, how can one prove $|1+z_1|+|1+z_2|+|1+z_1z_2|\ge2$</p>
| Shivang jindal | 38,505 | <p>$$ \mid 1+z_1 \mid + \mid 1+ z_2 \mid + \mid 1+ z_1z_2 \mid \ \ge \mid 1+z_1 \mid + \mid 1+z_1z_2-1-z_2 \mid$$ [ Using Triangle inequality]
$$ \mid 1+z_1 \mid + \mid z_1z_2-z_2 \mid = \mid 1+z_1 \mid + \mid z_1 -1 \mid \ \ge |1+z_1+z_1-1| = 2 $$ [Again using triangle inequality]</p>
<p>So we are done :)</p>
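<p>(A random spot check of the inequality, added by me; equality is attained, e.g. at $z_1=1$, $z_2=-1$.)</p>

```python
import cmath
import math
import random

random.seed(0)
for _ in range(10000):
    # z1, z2 uniform on the unit circle
    z1 = cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
    z2 = cmath.exp(1j * random.uniform(0.0, 2.0 * math.pi))
    s = abs(1 + z1) + abs(1 + z2) + abs(1 + z1 * z2)
    assert s >= 2.0 - 1e-9
```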
|
3,897,067 | <p>Consider a binary operation <span class="math-container">$*$</span> acting from a set <span class="math-container">$X$</span> to itself. It's useful and standard to work with operations which are associative, such that <span class="math-container">$(a*b)*c = a*(b*c)$</span>. What about operations which are not associative?</p>
<p>Is there any way to characterize all different possible types of such binary operations <span class="math-container">$*$</span> which are not associative? Eg. Can we say that if <span class="math-container">$*$</span> is not associative, then it must instead satisfy one of set of other possible properties, depending on any other additional operations that we have on our set <span class="math-container">$X$</span>?</p>
<p>If we also add some additional structure to our set <span class="math-container">$X$</span> so that we can add elements together and multiply by scalars, it's standard to quantify the amount that two elements of <span class="math-container">$X$</span> commute with each other under <span class="math-container">$*$</span> by calculating the commutator <span class="math-container">$[a,b] = a*b - b*a$</span>.
Is it ever useful to consider an 'associative commutator' <span class="math-container">$[abc] = (a*b)*c - a*(b*c)$</span>, for a given non-associative <span class="math-container">$*$</span>?</p>
<p>Finally, I know from Lie algebras that if <span class="math-container">$*$</span> anticommutes then it can be natural to consider a Jacobi identity</p>
<p><span class="math-container">$(a*b)*c = a*(b*c) - b*(a*c)$</span></p>
<p>Are there other natural extensions of associativity in different settings?
Why do Lie algebras use this Jacobi identity and not for example</p>
<p><span class="math-container">$(a*b)*c = a*(b*c) + k b*(a*c)$</span></p>
<p>Where k is a scalar?</p>
| Dietrich Burde | 83,966 | <p>"What about operations which are not associative?" In many areas we encounter non-associative algebra structures, e.g., in operad theory, homology of partition sets, deformation theory, geometric structures on Lie groups, renormalisation theory in physics and many more.</p>
<p>In a certain sense one can answer your question about what else can happen. One way is to classify all nonassociative algebras defined by the action of invariant subspaces of the symmetric group <span class="math-container">$S_3$</span> on the associator of the considered laws; see for example <a href="https://arxiv.org/abs/math/0309015" rel="nofollow noreferrer">here</a>. But of course these are not all the possibilities.</p>
<p>A well known example of a non-associative algebra structure related to Lie algebras are <em>pre-Lie algebras</em> (also called left-symmetric algebras).
They satisfy the identity
<span class="math-container">$$
(x,y,z)=(y,x,z)
$$</span>
for all <span class="math-container">$x,y,z\in A$</span>, where <span class="math-container">$(x,y,z)$</span> is the associator. In particular, associative algebras are a trivial example where both sides are zero, i.e., with <span class="math-container">$0=0$</span>. Then the commutator
<span class="math-container">$$
[x,y]=xy-yx
$$</span>
is a Lie bracket, see <a href="https://math.stackexchange.com/questions/3895481/is-there-a-relationship-between-associators-and-commutators/3895615#3895615">Is there a relationship between associators and commutators?</a></p>
<p>Pre-Lie algebras arise in algebra, geometry and physics; see my survey article <a href="https://homepage.univie.ac.at/Dietrich.Burde/papers/burde_24_pre_lie.pdf" rel="nofollow noreferrer">here</a>. They play an important role for crystallographic groups, fundamental groups of affinely flat manifolds (Milnor), Gerstenhaber deformation theory, Rota–Baxter operators and Yang–Baxter equations, just to name a few keywords.</p>
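<p>(A sketch of mine, not from the original answer.) Associative algebras are the trivial case of pre-Lie algebras (both sides of the identity vanish), so one can at least spot-check that the commutator then satisfies the Jacobi identity, here with random integer $3\times 3$ matrices:</p>

```python
import random

random.seed(42)
n = 3

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(X, Y):
    return [[X[i][j] + Y[i][j] for j in range(n)] for i in range(n)]

def sub(X, Y):
    return [[X[i][j] - Y[i][j] for j in range(n)] for i in range(n)]

def bracket(X, Y):
    return sub(matmul(X, Y), matmul(Y, X))

def rand_mat():
    return [[random.randint(-5, 5) for _ in range(n)] for _ in range(n)]

zero = [[0] * n for _ in range(n)]
for _ in range(20):
    A, B, C = rand_mat(), rand_mat(), rand_mat()
    jac = add(add(bracket(bracket(A, B), C),
                  bracket(bracket(B, C), A)),
              bracket(bracket(C, A), B))
    assert jac == zero    # [[A,B],C] + [[B,C],A] + [[C,A],B] = 0
```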
|
3,034,421 | <p>Let's say I have 2 multivariate functions:</p>
<pre><code>f(x,y) = x - y
g(x,y) = x + y
</code></pre>
<p>How do I get the composition of these 2 functions <span class="math-container">$g(f(x,y))$</span> ? </p>
| Cesareo | 397,348 | <p>For <span class="math-container">$xy^2(2+3x+4y) =0$</span> we have the set of solutions </p>
<p><span class="math-container">$$S_1 = \{x = 0, y = 0, 2+3x+4y = 0\}$$</span></p>
<p>For <span class="math-container">$2x^2y(1+x+3y) =0$</span> we have the set of solutions </p>
<p><span class="math-container">$$S_2 = \{x = 0, y = 0, 1+x+3y = 0\}$$</span></p>
<p>So for the system of equations we have </p>
<p><span class="math-container">$$S_1 \cap S_2 = \{x = 0, y = 0, (2+3x+4y = 0)\cap(1+x+3y = 0)\} = \{x = 0, y = 0\}\cup \{x=-\frac 25, y = -\frac 15\}$$</span></p>
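<p>(A verification of mine, added to the original answer.) Exact rational arithmetic confirms that the isolated intersection point of the two lines, and the two coordinate-axis families, satisfy both equations:</p>

```python
from fractions import Fraction as F

def f(x, y):
    return x * y**2 * (2 + 3 * x + 4 * y)

def g(x, y):
    return 2 * x**2 * y * (1 + x + 3 * y)

# the isolated solution from intersecting 2+3x+4y = 0 and 1+x+3y = 0
x0, y0 = F(-2, 5), F(-1, 5)
assert f(x0, y0) == 0 and g(x0, y0) == 0

# x = 0 (any y) and y = 0 (any x) also solve both equations
assert f(0, F(7, 3)) == 0 and g(0, F(7, 3)) == 0
assert f(F(7, 3), 0) == 0 and g(F(7, 3), 0) == 0
```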
|
959,393 | <p>Let's use the following example:</p>
<p>$$17! = 16!*17 \approx 2 \cdot 10^{13} * 17 = 3.4 \cdot 10^{14} $$</p>
<p>Are you allowed to do this? I am unsure whether this notation indicates that $17! = 3.4 \cdot 10^{14}$ (which is obviously not true), but I think it doesn't.</p>
| vadim123 | 73,324 | <p>Your example claims that two things are equal (on the left), two things are equal (on the right), and that the left pair are approximately equal to the right pair.</p>
<p>One should be careful with too much use of the ill-defined $\approx$ symbol, or you can get $$1\approx 1.01\approx 1.02\approx \cdots \approx 1.99\approx 2$$</p>
|
1,368,073 | <p>Halmos, in Naive Set Theory, on page 19, provides a definition of intersection restricted to subsets of $E$, where $C$ is the collection of the sets intersected. The point is to allow the case where $C$ is $\emptyset$, which with this definition of intersection gives $E$ as the result. </p>
<blockquote>
<p>$\{x \in E: x \in X$ for every $X$ in $C\}$</p>
</blockquote>
<p>My problem lies in interpreting the sentence. I wanted to read it as:</p>
<blockquote>
<p>"Elements x in E, given that: Element x is in X for every X in C"</p>
</blockquote>
<p>My brain, tuned by a number of popular programming languages, wants to evaluate the terms in the condition reading from left to right. And clearly, no element $x$ will be in any $X$ if $C$ is $\emptyset$, and if the condition is evaluated to false, $E$ will not be the result of the intersection.</p>
<p>After struggling for a while, I figured that I had to read the sentence as:</p>
<blockquote>
<p>"Elements x in E, given that: For all X that are in C, x is in all of them"</p>
</blockquote>
<p>The <em>for</em> part of the condition has to be the pivotal one. It has to be the first term you evaluate. In analogy with common programming languages.</p>
<p>Questions:</p>
<ol>
<li>Is my new reading and conclusion correct?</li>
<li>How does one learn the order of evaluation in set theoretic expressions?</li>
</ol>
<p>Edit: Corrected after discussion with coldnumber.</p>
<p>Edit 2: Upon rereading the previous chapter, I've found that Halmos actually explains his "for every". The condition "$x \in X$ for every $X$ in $C$" actually means "for all $X$ (if $X \in C$, then $x \in X$)" -- which seems to give an unambiguous order of evaluation.</p>
| Mark Fischler | 150,362 | <p>The first highlighted statement is mixing English reading of a statement with mathematical notation. The clue for that is that it had to say "for every..."</p>
<p>When you do this, it becomes a bit less clear in what order you have to "resolve" the instructions. In this case, because the statement is so short and simple, one has hopes of getting it right.</p>
<p>It would be cleaner if you expressed it in math notation in the first place:</p>
<p>$$S(C) =\{ x \in E : \left( \forall X \in C : x \in X \right)\}$$</p>
<p>Now it becomes clear that to test whether $x$ is in $S(C)$ you have to verify that $x\in E$ and then "try" every member $X$ of $C$ and decide if $x$ is in it -- and if any of those fail, $x$ is not in $S(C)$.</p>
<p>Now it is very clear that
$$S(\emptyset) = E$$</p>
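<p>A small illustration (my own addition, in code rather than set-theoretic notation): Python's built-in <code>all()</code> is vacuously true over an empty iterable, which mirrors exactly the reading "for all $X$ (if $X \in C$, then $x \in X$)":</p>

```python
# Illustrative sketch: E and the sample collections below are my own choices.
E = {1, 2, 3}

def S(C):
    # the set {x in E : x in X for every X in C}
    return {x for x in E if all(x in X for X in C)}

# With C empty, the condition holds vacuously, so S(C) = E.
print(S([]))            # the whole set E
print(S([{1, 2}, {2}])) # only elements common to every member of C
```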
|
1,521,124 | <p>What will be the value of $3/1!+5/2!+7/3!+...$?</p>
<p>I'm trying to bring it in terms of $e$.Is it possible?</p>
<p>I used taylor series for e.</p>
| vudu vucu | 215,476 | <p>Hint: $\frac{2k+1}{k!}=2\frac1{(k-1)!}+\frac{1}{k!}$</p>
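<p>Following the hint through (my own continuation, not part of the original answer): the sum splits as $2\sum_{k\ge1}\frac1{(k-1)!}+\sum_{k\ge1}\frac1{k!}=2e+(e-1)=3e-1$, which a quick numerical check confirms:</p>

```python
import math

# Partial sum of 3/1! + 5/2! + 7/3! + ... against the closed form 3e - 1.
partial = sum((2 * k + 1) / math.factorial(k) for k in range(1, 30))
closed_form = 3 * math.e - 1
print(partial, closed_form)
```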
|
7,025 | <p>Many commutative algebra textbooks establish that every ideal of a ring is contained in a maximal ideal by appealing to Zorn's lemma, which I dislike on grounds of non-constructivity. For Noetherian rings I'm told one can replace Zorn's lemma with countable choice, which is nice, but still not nice enough - I'd like to do without choice entirely.</p>
<p>So under what additional hypotheses on a ring $R$ can we exhibit one of its maximal ideals while staying in ZF? (I'd appreciate both hypotheses on the structure of $R$ and hypotheses on what we're given in addition to $R$ itself, e.g. if $R$ is a finitely generated algebra over a field, an explicit choice of generators.)</p>
<p>Edit: I guess it's also relevant to ask whether there are decidability issues here. </p>
| Georges Elencwajg | 450 | <p>Here is a type of example coming from Analytic Geometry (in the sense of the second "GA" in Serre's "GAGA").</p>
<p>Consider a domain $D$ in $\mathbb C$. Then every $\mathbb C$-algebra morphism (aka character) $\chi : \mathcal O (D) \to \mathbb C $ is of the form $ev_d:f \to f(d)$ with $d=\chi (z) \in D$. This is completely elementary: just write $f(z)= f(d)+(z-d)g(z)$ and let $\chi$ act on both sides of the equality.[You have to convince yourself that $d$ is in $D$, not just in $\mathbb C$: else $1=(z-d).(z-d)^{-1}$ would lead to 1=0 by applying $\chi$].</p>
<p>[From this Lipman Bers proved in 1948 that, given two domains $D,D'\subset \mathbb C$, a purely algebraic isomorphism $u:\mathcal O (D) \to\mathcal O (D')$ necessarily comes from an analytic isomorphism $f=u(z): D' \to D$.]</p>
<p>A vast generalization is that for any Stein manifold (or even Stein space) X, all characters $\mathcal O (X) \to \mathbb C $ are evaluations at a point of $X$, and yield maximal ideals $\ker \chi$ of $\mathcal O (X)$.
This is proved in Grauert-Remmert's book "Theory of Stein spaces" and looking rather superficially at the proof I THINK the axiom of choice is not used.
This is certainly not a satisfactory answer to Qiaochu's question (in particular I know nothing of other maximal ideals in $\mathcal O (X)$: do such exist?) but maybe these not so well-known results might interest some reader.</p>
|
7,025 | <p>Many commutative algebra textbooks establish that every ideal of a ring is contained in a maximal ideal by appealing to Zorn's lemma, which I dislike on grounds of non-constructivity. For Noetherian rings I'm told one can replace Zorn's lemma with countable choice, which is nice, but still not nice enough - I'd like to do without choice entirely.</p>
<p>So under what additional hypotheses on a ring $R$ can we exhibit one of its maximal ideals while staying in ZF? (I'd appreciate both hypotheses on the structure of $R$ and hypotheses on what we're given in addition to $R$ itself, e.g. if $R$ is a finitely generated algebra over a field, an explicit choice of generators.)</p>
<p>Edit: I guess it's also relevant to ask whether there are decidability issues here. </p>
| Greg Kuperberg | 1,450 | <p>I suspect that the most general reasonable answer is a ring endowed with a constructive replacement for what the axiom of choice would have given you.</p>
<p>How do you show in practice that a ring is Noetherian? Either explicitly or implicitly, you find an ordinal height for its ideals. Once you do that, an ideal of least height is a maximal ideal. This suffices to show fairly directly that any number field ring has a maximal ideal: The norms of elements serve as a Noetherian height.</p>
<p>The Nullstellensatz implies that any finitely generated ring over a field is constructively Noetherian in this sense.</p>
<p>Any Euclidean domain is also constructively Noetherian, I think. A Euclidean norm is an ordinal height, but not at first glance one with the property that $a|b$ implies that $h(a) \le h(b)$ (with equality only when $a$ and $b$ are associates). However, you can make a new Euclidean height $h'(a)$ of $a$, defined as the minimum of $h(b)$ for all non-zero multiples $b$ of $a$. I think that this gives you a Noetherian height.</p>
<p>I'm not sure that a principal ideal domain is by itself a constructive structure, but again, usually there is an argument based on ordinals that it is a PID.</p>
|
634,890 | <blockquote>
<p><strong>Moderator Notice</strong>: I am unilaterally closing this question for three reasons. </p>
<ol>
<li>The discussion here has turned too chatty and not suitable for the MSE framework. </li>
<li>Given the recent pre-print of <a href="http://arxiv.org/abs/1402.0290" rel="noreferrer">T. Tao</a> (see also the blog-post <a href="http://terrytao.wordpress.com/2014/02/04/finite-time-blowup-for-an-averaged-three-dimensional-navier-stokes-equation/" rel="noreferrer">here</a>), the continued usefulness of this question is diminished.</li>
<li>The final update on <a href="https://math.stackexchange.com/a/649373/1543">this answer</a> is probably as close to an "answer" an we can expect. </li>
</ol>
</blockquote>
<p>Eminent Kazakh mathematician
Mukhtarbay Otelbaev, Prof. Dr. has published a full proof of the Clay Navier-Stokes Millennium Problem.</p>
<p>Is it correct?</p>
<p>See <a href="http://bnews.kz/en/news/post/180213/" rel="noreferrer">http://bnews.kz/en/news/post/180213/</a></p>
<p>A link to the paper (in Russian):
<a href="http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf" rel="noreferrer">http://www.math.kz/images/journal/2013-4/Otelbaev_N-S_21_12_2013.pdf</a></p>
<p>Mukhtarbay Otelbaev has published over 200 papers, had over 70 PhD students, and he is a member of the Kazak Academy of Sciences. He has published papers on Navier-Stokes and Functional Analysis.</p>
<p>please confine answers to any actual mathematical error found!
thanks</p>
| uvs | 121,193 | <p>What do you make of Definition 2? He defines a strong solution so that all terms of NSE are required to live in $L^2$. Global regularity of NSE calls for pressure and velocity fields in $C^\infty$. Is it just me, or are we speaking of a very, very, very weak notion of strong solution, which has nothing to do with the millennium problem?</p>
|
4,588,408 | <p>This is a step in a guided proof that the cyclotomic polynomial <span class="math-container">$\Phi_n$</span> is the minimal polynomial of <span class="math-container">$u$</span>. I already know that <span class="math-container">$\Phi_n(u)=0$</span>, so <span class="math-container">$P$</span> divides <span class="math-container">$\Phi_n$</span>; I need to show the converse. Any hints?</p>
| Ja_1941 | 937,996 | <p>Suppose <span class="math-container">$P$</span> has roots <span class="math-container">$\zeta_1,\zeta_2,...,\zeta_k$</span>. All of them are <span class="math-container">$n$</span>-th root of unity. Without the loss of generality, let <span class="math-container">$P$</span> be monic. Thus,</p>
<p><span class="math-container">$$P(X)=(X-\zeta_1)(X-\zeta_2)...(X-\zeta_k)$$</span></p>
<p><span class="math-container">$$P(X^p)=(X-\zeta_1^p)(X-\zeta_2^p)...(X-\zeta_k^p)$$</span></p>
<p>You can try to expand these two polynomials. Some knowledge of elementary symmetric polynomials and the multinomial theorem will be helpful.</p>
<p>When I learnt about cyclotomic polynomials, I proved they are minimal using Galois theory, so I don't really have an elementary proof like this in my mind. I found this proof here <a href="https://www.lehigh.edu/%7Eshw2/c-poly/several_proofs.pdf" rel="nofollow noreferrer">https://www.lehigh.edu/~shw2/c-poly/several_proofs.pdf</a>.</p>
|
747,949 | <p>There is a complex series: $f(t_n)=\alpha_n+\beta_n i$, for $n = 1,...,N$, where $t_n$, $\alpha_n$ and $\beta_n$ are known. Given that we know $f(t)$ has the following form:
$$f(t)=Ae^{-iBt}$$
with unknown amplitude $A$ and unknown phase $B$, how can one estimate the parameters $A$ and $B$ using a numerical optimization method?</p>
| Martín-Blas Pérez Pinilla | 98,199 | <p>$$f(t)=Ae^{−iBt}=A(\cos(Bt)-i\sin(Bt))$$
$$\alpha_n+\beta_n i=f(t_n)=A(\cos(Bt_n)-i\sin(Bt_n))$$
$$A=|\alpha_n+\beta_n i|=\sqrt{\alpha_n^2+\beta_n^2}$$
$$B= -\frac1{t_n}\arctan\frac{\beta_n}{\alpha_n}$$
$$\cdots$$</p>
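<p>As a complement to the hint above, a minimal numerical sketch (entirely my own, under the assumptions of a noiseless sample and $|Bt_n|<\pi$ so the phase does not wrap): the modulus gives $A$ directly, and <code>atan2</code>-style phase extraction handles the signs that a plain $\arctan$ quotient misses.</p>

```python
import cmath

# Hypothetical true parameters and a single sample of f(t) = A * exp(-i*B*t).
A_true, B_true = 2.5, 0.7
t = 1.3
sample = A_true * cmath.exp(-1j * B_true * t)

A_est = abs(sample)                 # A = |f(t_n)|
B_est = -cmath.phase(sample) / t    # phase(f) = -B*t when there is no wrapping
print(A_est, B_est)
```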
|
159,563 | <p>I have some <a href="https://pastebin.com/MGEzkeC3" rel="nofollow noreferrer">data</a> and want to fit it to Planck's law for black body radiation. The problem is that Mathematica does not give me the correct coefficients.</p>
<p>When I evaluate</p>
<pre><code>dati = Import["https://pastebin.com/raw/MGEzkeC3", "Table"];
h = 6.62607004*10^(-34);
c = 299792458;
kb = 1.38064852*10^(-23);
Planks[l_, T_, A_] := (1/A)*(((2*h*c^2)/l^5)*(1/(Exp[((h*c)/(l*kb*T))] - 1)));
fittesana2 = FindFit[dati, Planks[l, T, A], {T, A } , l];
Show[
Plot[fittesana2[l], {l, 400, 900}, PlotStyle -> Red, PlotRange -> All],
ListPlot[dati], Frame -> True]
Pfit = NonlinearModelFit[dati, Planks[l, T, A], {{A, 1*10^8}, {T, 1700}}, l];
Show[
Plot[Pfit[l], {l, 400, 900}, PlotStyle -> Red, PlotRange -> All],
ListPlot[dati], Frame -> True]
Normal[Pfit]
Pfit["ANOVATable"]
Pfit["ParameterTable"]
Pfit["FitCurvatureTable"]
</code></pre>
<p>I get</p>
<p><a href="https://i.stack.imgur.com/sCkM5.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sCkM5.png" alt="enter image description here" /></a></p>
<p>Sorry for forgetting to write down the constants. My data covers only part of the black-body radiation curve. The unit of the x-axis is nanometers (nm), and of the y-axis uW/cm^2/nm.<br />
<strong>Update:</strong> As suggested by @JimB, I changed my fitting function. I tried to use @JimB's suggested function, but a different one was easier for me, because I need to find the temperature (Te). Here is the code:</p>
<pre><code>h = 6.62607004*10^(-34);
c = 299792458;
kb = 1.38064852*10^(-23);
b = 2*6.62607004*10^(-34)*299792458^2*Pi;
d = (6.62607004*10^(-34)*299792458)/(1.38064852*10^(-23));
dati = ImportString[Import["H2liesma.txt"], "Table"];
Plankulis[la_, Te_, G_, b_, d_] := (1/G)*(b/(la^5*(Exp[d/(la*Te)] - 1)));
Pfit3 = FindFit[dati,
Plankulis[la, Te, G, b, d], {G, 1*10^(9)}, { Te, 1500} , la];
Show[Plot[Pfit3[la], {la, 400, 900}, PlotStyle -> Red,
PlotRange -> All], ListPlot[dati], Frame -> True]
</code></pre>
<p>I get:</p>
<pre><code> FindFit::nonopt: Options expected (instead of la) beyond position 4 in FindFit[{{400.035,-0.00759963},{400.409,0.0136996},{400.783,-0.000465753},{401.157,0.00636862},{401.531,0.0205706},{401.904,0.0257837},{402.278,0.0298773},{402.652,0.00226108},{403.025,0.0188769},{403.399,-0.0230916},{403.772,-0.00365794},{404.146,0.00856837},<<28>>,{414.961,-0.00272152},{415.333,-0.00222349},{415.706,-0.00943255},{416.078,-0.00921836},{416.45,0.00204648},{416.823,-0.0261218},{417.195,-0.00775242},{417.567,0.0140285},{417.939,-0.00992257},{418.311,-0.00711655},<<1408>>},<<24>>/<<1>>,{<<1>>},<<1>>,la]. An option must be a rule or a list of rules. >>
</code></pre>
<p>When I write the analytical solution for my function:</p>
<pre><code>b = 2*6.62607004*10^(-34)*299792458^2*Pi;
d = (6.62607004*10^(-34)*299792458)/(1.38064852*10^(-23));
Plankulis1[la_] := (1/G)*(b/(la^5*(Exp[d/(la*Te)] - 1)));
Te = 1500;
G = 1*10^(9);
Plankulis[G, b, la, d, Te]
Plot[Plankulis1[la], {la, 400*10^(-9),
700*10^(-8)}, {PlotRange -> Full}, Frame -> True]
</code></pre>
<p>I get:
<a href="https://i.stack.imgur.com/QzrRD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QzrRD.png" alt="enter image description here" /></a></p>
<p>I get what I need.<br />
What am I doing wrong? I did not really understand it from the answers. Thank you.</p>
| José Antonio Díaz Navas | 1,309 | <p>There is no need for what JimB did. Just correct the units for $\lambda$, i.e., use meters rather than nanometers. There were also some errors in the syntax when using <code>FindFit</code>. I also add <code>pts</code> as a new dataset with fewer points, suitable for plotting and comparing to the fits.</p>
<p>Here is your code corrected:</p>
<pre><code>dati = Import["https://pastebin.com/raw/MGEzkeC3", "Table"];
pts = dati[[#]] & /@ Table[i, {i, 1, 1458, 50}];
h = 6.62607004*10^(-34);
c = 299792458;
kb = 1.38064852*10^(-23);
Planks[l_, T_, A_] := (2*h*c^2 10^45)/(A l^5 (Exp[(10^9 h*c)/(l*kb*T)] - 1));
fittesana2 = FindFit[dati, Planks[l, T, A], {{A, 1*10^8}, {T, 1700}}, l];
Plot[Planks[l, T, A] /. fittesana2, {l, 400, 900}, PlotStyle -> Red,
PlotRange -> All, Epilog :> {Blue, PointSize[0.015], Point[pts]}]
Pfit = NonlinearModelFit[dati,
Planks[l, T, A], {{A, 1*10^8}, {T, 1700}}, l]
Plot[Pfit[l], {l, 400, 900}, PlotStyle -> Red, PlotRange -> All,
Epilog :> {Blue, PointSize[0.015], Point[pts]}]
Normal[Pfit]
Pfit["ANOVATable"]
Pfit["ParameterTable"]
Pfit["FitCurvatureTable"]
</code></pre>
<p>The results with <code>FindFit</code>:</p>
<p><a href="https://i.stack.imgur.com/3UAzR.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3UAzR.jpg" alt="enter image description here"></a></p>
<p>and with <code>NonLinearModel Fit</code>:</p>
<p><a href="https://i.stack.imgur.com/2abZG.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2abZG.jpg" alt="enter image description here"></a></p>
<p>Adicional data:</p>
<p><a href="https://i.stack.imgur.com/pmxPZ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/pmxPZ.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/bIBu4.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/bIBu4.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/re0Gj.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/re0Gj.jpg" alt="enter image description here"></a></p>
<p><a href="https://i.stack.imgur.com/dzzQO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dzzQO.jpg" alt="enter image description here"></a></p>
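<p>A quick cross-check of the unit issue (my addition, not part of the fit): once a wavelength given in nanometers is converted to meters before the SI constants are applied, the curve peaks where Wien's displacement law predicts. The temperature T = 1700 K below is only an assumption, taken from the starting values used in the question.</p>

```python
import math

h, c, kb = 6.62607004e-34, 299792458.0, 1.38064852e-23
T = 1700.0  # assumed temperature, from the question's starting value

def planck(lam_nm):
    lam = lam_nm * 1e-9  # convert nm -> m before using SI constants
    return 2 * h * c**2 / lam**5 / (math.exp(h * c / (lam * kb * T)) - 1)

# Spectral radiance should peak near Wien's law lambda_max = 2.898e-3 / T.
lam_peak = max(range(500, 5000), key=planck)
print(lam_peak, 2.898e-3 / T * 1e9)  # both near 1705 nm
```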
|
1,027,486 | <p>How do I integrate this?</p>
<p>$$\int_0^{2\pi}\frac{dx}{2+\cos{x}}, x\in\mathbb{R}$$</p>
<p>I know the substitution method from real analysis, $t=\tan{\frac{x}{2}}$, but since this problem is in a set of problems about complex integration, I thought there must be another (easier?) way.</p>
<p>I tried computing the poles in the complex plane and got
$$\text{Re}(z_0)=\pi+2\pi k, k\in\mathbb{Z}; \text{Im}(z_0)=-\log (2\pm\sqrt{3})$$
but what contour of integration should I choose?</p>
| dustin | 78,317 | <p>For completeness, we know that $\cos(x) = \frac{e^{ix}+e^{-ix}}{2}$, and from @danielfischer's comment, with $z=e^{ix}$ we obtain $\cos(x) = \frac{z + 1/z}{2}$. Then $dx =\frac{dz}{iz}$. Therefore, we have the following integral, where $\gamma$ is the circle $|z|=1$ traversed counterclockwise:
$$
\int_{\gamma}\frac{1}{2 + \frac{z + 1/z}{2}}\frac{dz}{iz} = \int_{\gamma}\frac{2z}{4z + z^2 + 1}\frac{dz}{iz}
$$
Then the poles occur at the roots of $z^2 + 4z + 1$, i.e. at $z=-2\pm\sqrt{3}$. However, we only need to consider the pole inside the contour, $z=-2+\sqrt{3}$.</p>
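<p>A quick numerical cross-check (my addition): the residue at the pole inside the unit circle gives $\int_0^{2\pi}\frac{dx}{2+\cos x}=\frac{2\pi}{\sqrt3}\approx 3.6276$, which a Riemann sum over the period reproduces:</p>

```python
import math

# Periodic trapezoidal/left sums on a full period converge very fast,
# so a plain left-endpoint sum is enough here.
n = 100000
h = 2 * math.pi / n
riemann = sum(h / (2 + math.cos(i * h)) for i in range(n))
print(riemann, 2 * math.pi / math.sqrt(3))
```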
|
1,412,594 | <p>I'm a statistics teacher at a college. One day a student came with a doubt about an exercise about probability. The text goes like this:</p>
<blockquote>
<p>A person has two boxes $A$ and $B$. In the first one has $4$ white balls and $5$ black balls and in the second has $5$ white balls and $4$ black balls. This person takes randomly one ball from the first box and put it into the second box. After that he takes a ball from the second box. Find the probability of taking balls of the same color in this process (i.e, the one that is taken from box $A$ to $B$ and the one taken from box $B$).</p>
</blockquote>
<p>The student made the following:
Let $C$ be the event of taking balls of the same color in the process above described.
Let $W$ be the event of taking White balls from both boxes and $Bl$ the event of taking black balls from both boxes. Let $Wb_1$ be the event of taking a white ball from the first box and $Wb_2$ the event of taking a white ball from the second box. The same with $Blb_1$ and $Blb_2$. Then,</p>
<p>\begin{align}
\mathbb P(C)&=\mathbb P(W)+\mathbb P(Bl)\\ &= \mathbb P(Wb_1)\mathbb P(Wb_2)+\mathbb P(Blb_1)\mathbb P(Blb_2)\\&= \frac49 \cdot\frac6{10} + \frac59\cdot\frac5{10}
\end{align}</p>
<p>I told the student the reasoning was wrong because he has to use conditional probability, because the events $Wb_1$ and $Wb_2$, as well as $Blb_1$ and $Blb_2$, are not independent. A probability teacher (his actual teacher) told the student he was right, and that's why I am making this post. Who is right? Thanks!</p>
| barak manos | 131,263 | <p><strong>Split it into disjoint events, and add up their probabilities:</strong></p>
<hr>
<p>The probability of choosing white from the first box and then from the second box is:</p>
<p>$$\frac{4}{9}\cdot\frac{6}{10}=\frac{24}{90}$$</p>
<hr>
<p>The probability of choosing black from the first box and then from the second box is:</p>
<p>$$\frac{5}{9}\cdot\frac{5}{10}=\frac{25}{90}$$</p>
<hr>
<p>So the probability of choosing the same color is:</p>
<p>$$\frac{24}{90}+\frac{25}{90}=\frac{49}{90}$$</p>
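<p>The two disjoint cases can also be checked with exact rational arithmetic (a small sketch of my own, not part of the original answer):</p>

```python
from fractions import Fraction

# Law of total probability over the color of the ball moved from box A to box B.
p_white = Fraction(4, 9) * Fraction(6, 10)  # white moved, then white drawn from B
p_black = Fraction(5, 9) * Fraction(5, 10)  # black moved, then black drawn from B
p_same = p_white + p_black
print(p_same)  # 49/90
```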
|
4,089,800 | <p>I tried to solve this problem in the following way: <span class="math-container">$( x-w)^{5} =\left( w^{2}\right)^{5}$</span>, which gave me the equation <span class="math-container">$x^{5}-5wx^{4}+10w^{2}x^{3}-10w^{3}x^{2}+5w^{4}x=6$</span>, but I don't know how to proceed further. Would someone please help me here?</p>
| Eric Towers | 123,905 | <p><span class="math-container">\begin{align*}
(\omega + \omega^2)^5
&= \omega^5 + 5 \omega^6 + 10 \omega^7 + 10 \omega^8 + 5 \omega^9 + \omega^{10} \\
&= 2 + 10 \omega + 20 \omega^2 + 20 \omega^3 + 10 \omega^4 + 4 \text{,} \\
-10(\omega + \omega^2)^2
&= -10 \omega^2 - 20 \omega^3 - 10 \omega^4 \\
-10(\omega + \omega^2)
&= -10 \omega - 10 \omega^2 \text{.}
\end{align*}</span></p>
|
4,089,800 | <p>I tried to solve this problem in the following way: <span class="math-container">$( x-w)^{5} =\left( w^{2}\right)^{5}$</span>, which gave me the equation <span class="math-container">$x^{5}-5wx^{4}+10w^{2}x^{3}-10w^{3}x^{2}+5w^{4}x=6$</span>, but I don't know how to proceed further. Would someone please help me here?</p>
| Robert Lee | 695,196 | <p>Recalling that since <span class="math-container">$w$</span> is a fifth root of <span class="math-container">$2$</span> we have that <span class="math-container">$w^5 = 2$</span>, we notice that
<span class="math-container">\begin{align*}
x^5 - 10x^2 -10x & =\left(w + w^2\right)^5 - 10\left(w + w^2\right)^2 - 10\left(w + w^2\right) \\
&= \underbrace{w^{10}}_{\color{blue}{4}} + 5 w^9 + 10 w^8 + 10 w^7 + 5 w^6 + \underbrace{w^5}_{\color{blue}{2}} - 10 w^4 - 20 w^3 - 20 w^2 - 10 w\\
&= 5 w^9 + 10 w^8 + 10 w^7 + 5 w^6 - 10 w^4 - 20 w^3 - 20 w^2 - 10 w + \color{blue}{6}
\end{align*}</span>
Now, what do we do with the expression besides the <span class="math-container">$6$</span>? If we factor it we see we can re-write it as
<span class="math-container">$$
5 w(w+1)\left(w^2 + w +1\right)\left(\color{purple}{w^5 -2}\right)+ \color{blue}{6}
$$</span>
But since we know that <span class="math-container">$w^5 =2$</span>, this implies that <span class="math-container">$w^5 -2 = 2-2 = 0$</span>, so in fact, the whole rest of the expression has a <span class="math-container">$0$</span> multiplying it! Hence, everything besides the <span class="math-container">$6$</span> gets canceled and you can conclude that
<span class="math-container">$$
x^5 - 10x^2 -10x = 6
$$</span></p>
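<p>For readers who want a quick numerical sanity check of the identity (my addition, not part of the argument), taking <span class="math-container">$w$</span> to be the real fifth root of <span class="math-container">$2$</span>:</p>

```python
# With w the real fifth root of 2 and x = w + w^2,
# the claimed identity x^5 - 10x^2 - 10x = 6 should hold.
w = 2 ** (1 / 5)
x = w + w * w
print(x ** 5 - 10 * x ** 2 - 10 * x)  # ~ 6
```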
|
4,052,760 | <blockquote>
<p>Prove that <span class="math-container">$\int\limits^{1}_{0} \sqrt{x^2+x}\,\mathrm{d}x < 1$</span></p>
</blockquote>
<p>I'm guessing it would not be too difficult to solve by just calculating the integral, but I'm wondering if there is any other way to prove this, like comparing it with an easy-to-calculate integral. I tried comparing it with <span class="math-container">$\displaystyle\int\limits^{1}_{0} \sqrt{x^2+1}\,\mathrm{d}x$</span>, but this greater than <span class="math-container">$1$</span>, so I'm all out of ideas.</p>
| Martin R | 42,969 | <p>A method which is generally applicable to <em>concave</em> functions:</p>
<p>If <span class="math-container">$f:[a, b] \to \Bbb R$</span> is concave then its graph lies below any tangent line:
<span class="math-container">$$
f(x) \le f(c) + f'(c)(x-c) \, .
$$</span>
If one chooses <span class="math-container">$c =(a+b)/2$</span> then the integral of the second term on the right over the interval <span class="math-container">$[a, b]$</span> is zero, and therefore
<span class="math-container">$$
\int_a^b f(x) \, dx \le (b-a) \cdot f\left( \frac{a+b}{2}\right) \, .
$$</span></p>
<p>In our case this gives
<span class="math-container">$$
\int_0^1 f(x) \, dx \le f\left(\frac 12 \right) = \frac 12 \sqrt 3 < 1 \, .
$$</span>
The estimate <span class="math-container">$\frac 12 \sqrt 3 \approx 0.86603$</span> comes pretty close to the <a href="https://www.wolframalpha.com/input/?i=integral+sqrt%28x%2Bx%5E2%29+from+0+to+1" rel="nofollow noreferrer">exact value</a> of the integral, which is <span class="math-container">$\approx 0.84032$</span>.</p>
|
121,653 | <p>What information is available about the existence of rational points on hyperelliptic curves over finite fields?</p>
| Felipe Voloch | 2,290 | <p>As Mike says, there isn't much beyond the Weil bound. For prime fields, there is a slight improvement due to Stark. If the field has $q=r^2$ elements with $q$ odd, then one can find $a,b$ such that $y^2=ax^{r+1}+b$ has no points, which shows that you cannot improve the Weil bound. For arbitrary $q$ odd, there is this beautiful idea of I. Shparlinski:</p>
<p>For a monic irreducible polynomial $f$ of degree $d$, the vector of values of $f$ modulo squares is one of $2^q$ possibilities. There are about $q^d/d$ choices for $f$ and so if $q^d/d > 2^q$ or thereabouts, you get two monic irreducibles $f,g$ such that $fg$ takes only square values and so, if $c$ is a non-square, $y^2=cf(x)g(x)$ has no points and you can take $d$ about $q/\log q$. I don't know how to do better than this.</p>
|
232,777 | <p>Let $F$ be an ordered field.</p>
<p>What is the least ordinal $\alpha$ such that there is no order-embedding of $\alpha$ into any bounded interval of $F$?</p>
| Panurge | 82,840 | <p>So, Fedor Petrov gave an excellent proof and Thomas Brown's first step towards Frankl's conjecture is proved. In order to prove the first half of the second step, it would be sufficient to prove the following statement : if $r$ and $k'$ are two natural numbers such that $r \leq k'$, the union-closed family generated by ${k'+1 \choose r} + 1$ sets with cardinality $r$ has three members whose intersection is of cardinality at least $k'$.</p>
<p>The proof of this new statement seems more difficult...</p>
<p>(In order to prove that his new statement is sufficient for proving the first half of Thomas Brown's second step, I use the following fact : if a union-closed family has at least $k'+3$ members of cardinality a least $k'$, this family has three members whose intersection is of cardinality at least $k'$.)</p>
<p><b>Edit:</b> I wrote "The proof of this new statement seems more difficult..." Isn't that a mistake? Fedor Petrov's method furnishes an element $v$ and a second family (contained in the first family) of at least ${k' \choose r}$ $r$-sets that don't contain $v$. Again, there is an element $v'$ in the union of these ${k' \choose r}$ $r$-sets and a third family (contained in the second family) of at least ${k'-1 \choose r}$ $r$-sets that don't contain $v'$ and thus contain neither $v$ nor $v'$. The union of the first family, the union of the second family and the union of the third family are three distinct members of the generated family; they have cardinality at least $k'$, and their intersection contains the union of the third family, so the cardinality of the intersection is at least $k'$. So, the first half of Thomas Brown's second step is proved.</p>
|
1,343,722 | <p>Note: I am looking at the sequence itself, not the sequence of partial sums.</p>
<p>Here's my attempt...</p>
<p>Setting up:</p>
<p>$$\left\{\frac{2(n+1)}{2(n+1)-1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>Simplifying:</p>
<p>$$\left\{\frac{2n+2}{2n+1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>$$\frac{(2n+2)(2n-1)-(2n)(2n+1)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{4n^2+2n-2-(4n+2n)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{4n^2-4n-2}{4n^2-1}$$</p>
<p>How should I proceed from this point? I think I need to get rid of the ratio, so that I can judge whether or not it'll be positive or negative. Or can I just judge from this point that it will be a positive value? When I use the $\frac{a_{n+1}}{a_{n}}$ test, I get a result that the sequence is strictly decreasing.</p>
| Jack D'Aurizio | 44,121 | <p>Since $n^2 = 2\binom{n}{2}+\binom{n}{1}$ we have:
$$ S=\sum_{n\geq 1}\frac{n^2}{2^n}=\left.\left(2\frac{x^2}{(1-x)^3}+\frac{x}{(1-x)^2}\right)\right|_{x=\frac{1}{2}}=\color{red}{6}.$$
As an alternative to the negative binomial series, we may also use:
$$ S = 2S-S = \sum_{n\geq 1}\frac{n^2}{2^{n-1}}-\sum_{n\geq 1}\frac{n^2}{2^n}=1+\sum_{n\geq 1}\frac{(n+1)^2-n^2}{2^n}=2+2\sum_{n\geq 1}\frac{n}{2^n}$$
so that, in the same way:
$$ \sum_{n\geq 1}\frac{n}{2^n} = T = 2T-T = \sum_{n\geq 1}\frac{n}{2^{n-1}}-\sum_{n\geq 1}\frac{n}{2^n} = 1+\sum_{n\geq 1}\frac{1}{2^n}=2 $$
and $S=2+2\cdot 2=\color{red}{6}$.</p>
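<p>Either derivation can be checked numerically with a partial sum (my own addition):</p>

```python
# Partial sum of S = sum_{n>=1} n^2 / 2^n; the tail beyond n = 60 is negligible.
S = sum(n * n / 2 ** n for n in range(1, 60))
print(S)  # ~ 6
```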
|
1,343,722 | <p>Note: I am looking at the sequence itself, not the sequence of partial sums.</p>
<p>Here's my attempt...</p>
<p>Setting up:</p>
<p>$$\left\{\frac{2(n+1)}{2(n+1)-1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>Simplifying:</p>
<p>$$\left\{\frac{2n+2}{2n+1}\right\} - \left\{\frac{2n}{2n-1}\right\}$$</p>
<p>$$\frac{(2n+2)(2n-1)-(2n)(2n+1)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{4n^2+2n-2-(4n+2n)}{(2n+1)(2n-1)}$$</p>
<p>$$\frac{4n^2-4n-2}{4n^2-1}$$</p>
<p>How should I proceed from this point? I think I need to get rid of the ratio, so that I can judge whether or not it'll be positive or negative. Or can I just judge from this point that it will be a positive value? When I use the $\frac{a_{n+1}}{a_{n}}$ test, I get a result that the sequence is strictly decreasing.</p>
| Barry Cipra | 86,747 | <p>Here's a bit of a twist on the second approach in Jack D'Aurizio's answer. It's main virtue (if any) is that it avoids explicitly evaluating the auxiliary sum $\sum{n\over2^n}$, getting it to drop out instead.</p>
<p>It's convenient to start the sum at $n=0$ instead of $n=1$. Borrowing Jack's notation, we have</p>
<p>$$S=\sum_{n=0}^\infty{n^2\over2^n},\quad T=\sum_{n=0}^\infty{n\over2^n},\quad\text{and}\quad2=\sum_{n=0}^\infty{1\over2^n}$$</p>
<p>Then</p>
<p>$$\begin{align}
S-2&=\sum_{n=0}^\infty{n^2-1\over2^n}\\
&=\sum_{n=0}^\infty{(n+1)(n-1)\over2^n}\\
&=\sum_{m=1}^\infty{m(m-2)\over2^{m-1}}\\
&=2\sum_{m=1}^\infty{m^2\over2^m}-4\sum_{m=1}^\infty{m\over2^m}\\
&=2S-4T
\end{align}$$</p>
<p>so</p>
<p>$$S=4T-2$$</p>
<p>which will turn out to be better written as</p>
<p>$$4S=16T-8$$</p>
<p>Likewise (and here's where the twist comes in), we have</p>
<p>$$\begin{align}
S-8&=\sum_{n=0}^\infty{n^2-4\over2^n}\\
&=\sum_{n=0}^\infty{(n+2)(n-2)\over2^n}\\
&=\sum_{m=2}^\infty{m(m-4)\over2^{m-2}}\\
&=4\sum_{m=2}^\infty{m^2\over2^m}-16\sum_{m=2}^\infty{m\over2^m}\\
&=4(S-{1\over2})-16(T-{1\over2})\\
&=4S-16T+6
\end{align}$$</p>
<p>so</p>
<p>$$3S=16T-14$$</p>
<p>Subtracting this equation from the cleverly written $4S=16T-8$ leaves</p>
<p>$$S=6$$</p>
<p>As promised, we haven't bothered computing $T$ (although it's clear its value is easily obtained).</p>
|
4,264,496 | <p>So we have Jensen's inequality: <span class="math-container">$$|EX| \leq E|X|$$</span></p>
<p><strong>Is there any bound</strong> on the Jensen gap (upper or lower)? <span class="math-container">$$\text{gap}=E|X| - |EX|$$</span></p>
| GEdgar | 442 | <p><span class="math-container">$0 \le |EX| \leq E|X|$</span> so <span class="math-container">$$\text{gap} \le E|X|.$$</span> Reijo provided an example where <span class="math-container">$\text{gap} = E|X|$</span>.</p>
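<p>A concrete instance (my own sketch; I am assuming the example referred to is the symmetric $\pm1$ variable, for which the bound is attained):</p>

```python
from fractions import Fraction

# X takes values -1 and +1 with probability 1/2 each:
# E X = 0, E|X| = 1, so the gap equals E|X| exactly.
values = [Fraction(-1), Fraction(1)]
probs = [Fraction(1, 2), Fraction(1, 2)]
EX = sum(p * v for p, v in zip(probs, values))
EabsX = sum(p * abs(v) for p, v in zip(probs, values))
gap = EabsX - abs(EX)
print(EX, EabsX, gap)  # 0, 1, 1
```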
|
2,566,803 | <p>Let $A,B,C$ be sets such that $f:A\to B$ is a function.</p>
<p>Let $F: C^B \to C^A$ be a function, such that $F(k)=k\circ f$.</p>
<p>Prove/disprove that if $f$ is surjective then $F$ is surjective.</p>
<p>I tried to prove it: if $f$ is surjective, then for every $b\in B$ there is $a\in A$ with $f(a)=b$ -- but what now?</p>
| ajotatxe | 132,456 | <p>It is false.</p>
<p>Let $A=C=\Bbb R$, $B=[0,\infty)$, $f(x)=x^2$ for $x\in A$.</p>
<p>Let $g:A\to C$ be defined as $g(x)= x^3$. Assume that there exists $h:B\to C$ such that $h\circ f=g$, that is $h(x^2)=x^3$. But this implies</p>
<p>$$h(1)=h(1^2)=1^3=1$$
$$h(1)=h((-1)^2)=(-1)^3=-1$$</p>
<p>But if $f$ is bijective then $F$ is surjective.</p>
|
3,690,185 | <p>By <span class="math-container">$a_n \sim b_n$</span> I mean that <span class="math-container">$\lim_{n \rightarrow \infty} \frac{a_n}{b_n} = 1$</span>.</p>
<p>I don't know how to do this problem. I have tried to apply the binomial theorem and got
<span class="math-container">$$\int_{0}^{1}{(1+x^2)^n dx} = \int_0^1 \sum_{k=0}^n{\binom{n}{k}x^{2k}dx} = \sum_{k=0}^n \int_0^1{ \binom{n}{k}x^{2k}dx} = \sum_{k=0}^n \frac {\binom{n}{k}}{2k+1}$$</span>
But I don't know what I could do with this, nor if it is a correct approach. </p>
| SB1729 | 466,737 | <p>Some partial progress.</p>
<p>We make the substitution $x=\tan(y)$. Then $dx=\sec^2(y)dy$.</p>
<p>Now <span class="math-container">$$\int_{0}^{1}(1+x^2)^ndx=\int_{0}^{\frac{\pi}{4}}\sec^{2n}(y)\sec^2(y)dy=\int_{0}^{\frac{\pi}{4}}\sec^{2(n+1)}(y)dy$$</span></p>
<p>Maybe it will be helpful.</p>
<p>Here is the proof by proceeding in your way.</p>
<p><span class="math-container">$$\sum_{k=0}^{n}\frac{n\binom{n}{k}}{2k+1}>\sum_{k=0}^{n}\frac{n\binom{n}{k}}{2k+2}=\frac{n}{2(n+1)}\sum_{k=0}^{n}\frac{n+1}{k+1}\binom{n}{k}=\frac{n}{2(n+1)}\sum_{k=0}^{n}\binom{n+1}{k+1}=\frac{n}{2(n+1)}(2^{n+1}-1)$$</span></p>
<p>Hence <span class="math-container">$$\lim_{n\rightarrow\infty}\sum_{k=0}^{n}\frac{n\binom{n}{k}}{2^n(2k+1)}\geq\lim_{n\rightarrow\infty}\frac{n}{2(n+1)}\left(\frac{2^{n+1}}{2^n}-\frac{1}{2^n}\right)=1$$</span></p>
<p>Again we see that,</p>
<p><span class="math-container">$$\sum_{k=0}^{n}\frac{\binom{n}{k}}{2k+1}<1+\left(\frac{1}{2}\sum_{k=1}^{n}\frac{1}{k}\binom{n}{k}\right)$$</span></p>
<p>From this you can show that <span class="math-container">$$\lim_{n\rightarrow\infty}\sum_{k=0}^{n}\frac{n\binom{n}{k}}{2^n(2k+1)}\leq1$$</span></p>
<p>Hence the result will follow.</p>
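<p>A numerical check of the squeeze (my own addition): using the OP's identity $\int_0^1(1+x^2)^n\,dx=\sum_k\binom nk/(2k+1)$, the quantity $a_n=\frac{n}{2^n}\sum_k\binom nk/(2k+1)$ should approach $1$.</p>

```python
import math

def a(n):
    # a_n = (n / 2^n) * sum_k C(n,k) / (2k+1)
    return n * sum(math.comb(n, k) / (2 * k + 1) for k in range(n + 1)) / 2 ** n

print(a(100), a(400))  # both close to 1, the second closer
```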
|
3,489,345 | <p>My goal is to find the values of <span class="math-container">$N$</span> such that <span class="math-container">$10N \log N > 2N^2$</span></p>
<p>I know for a fact this question requires discrete math. </p>
<p>I think the problem revolves around manipulating the logarithm. The thing is, I forgot how to manipulate the logarithm using discrete math. </p>
<p>My question is: how do I manipulate this inequality so that I can find the values of $N$ for which it holds? </p>
| ZAF | 609,023 | <p><span class="math-container">$N \in \mathbb{N}$</span> ?</p>
<p><span class="math-container">$10N\log(N)> 2N^2$</span> </p>
<p>If and only if</p>
<p><span class="math-container">$5 \log(N)> N$</span> </p>
<p>If and only if </p>
<p><span class="math-container">$e^{5\log(N)} > e^{N}$</span></p>
<p>If and only if</p>
<p><span class="math-container">$N^5 > e^{N}$</span></p>
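<p>Assuming <span class="math-container">$\log$</span> denotes the natural logarithm (a different base only shifts the cutoff), a brute-force scan confirms the integer solutions:</p>

```python
import math

# 10 N log N > 2 N^2  <=>  N^5 > e^N  (for the natural log)
hits = [N for N in range(1, 100) if 10 * N * math.log(N) > 2 * N ** 2]
assert hits == list(range(2, 13))    # exactly N = 2, 3, ..., 12
# the equivalent form gives the same set
assert hits == [N for N in range(1, 100) if N ** 5 > math.exp(N)]
```

<p>So, over the naturals with the natural logarithm, the inequality holds precisely for <span class="math-container">$2 \le N \le 12$</span>.</p>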
|
3,600,633 | <p>As I was reading <a href="https://math.stackexchange.com/questions/1918673/how-can-i-prove-that-the-finite-extension-field-of-real-number-is-itself-or-the">this question</a>, I saw Ethan's answer. However, perhaps this is very obvious, but why does the degree of the polynomial be at most <span class="math-container">$2$</span>? I get that the polynomial must be irreducible but does that force the degree to be at most <span class="math-container">$2$</span>?</p>
| Gregory Grant | 217,398 | <p>Any polynomial in <span class="math-container">$\Bbb R[x]$</span> factors into linears and quadratics.</p>
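<p>For instance, <span class="math-container">$x^4+1$</span> has no real roots, yet it factors over <span class="math-container">$\Bbb R$</span> as <span class="math-container">$(x^2+\sqrt2\,x+1)(x^2-\sqrt2\,x+1)$</span>: the complex roots come in conjugate pairs, and each pair recombines into a real quadratic factor. A quick numerical spot-check of that identity:</p>

```python
import math

# x^4 + 1 = (x^2 + sqrt(2) x + 1)(x^2 - sqrt(2) x + 1), checked at sample points
s = math.sqrt(2)
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs((x ** 4 + 1) - (x * x + s * x + 1) * (x * x - s * x + 1)) < 1e-9
```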
|
3,905,629 | <p>I need to compute a limit:</p>
<p><span class="math-container">$$\lim_{x \to 0+}(2\sin \sqrt x + \sqrt x \sin \frac{1}{x})^x$$</span></p>
<p>I tried to apply the L'Hôpital rule, but the emerging terms become too complicated and doesn't seem to simplify.</p>
<p><span class="math-container">$$
\lim_{x \to 0+}(2\sin \sqrt x + \sqrt x \sin \frac{1}{x})^x \\
= \exp (\lim_{x \to 0+} x \ln (2\sin \sqrt x + \sqrt x \sin \frac{1}{x})) \\
= \exp (\lim_{x \to 0+} \frac
{\ln (2\sin \sqrt x + \sqrt x \sin \frac{1}{x})}
{\frac 1 x}) \\
= \exp \lim_{x \to 0+} \dfrac
{\dfrac {\cos \sqrt x} {x} + \dfrac {\sin \dfrac 1 x} {2 \sqrt x}
- \dfrac {\cos \dfrac 1 x} {x^{3/2}}}
{- \dfrac {1} {x^2} \left(2\sin \sqrt x + \sqrt x \sin \frac{1}{x} \right)}
$$</span></p>
<p>I've calculated several values of this function, and it seems to have a limit of <span class="math-container">$1$</span>.</p>
| Barry Cipra | 86,747 | <p>For small positive angles, the inequalities <span class="math-container">${3\over4}\theta\le\sin\theta\le\theta$</span> apply. (The fraction <span class="math-container">$3/4$</span> is somewhat arbitrary; any fraction less than <span class="math-container">$1$</span> will do.) Letting <span class="math-container">$\theta=\sqrt x$</span> and noting that <span class="math-container">$-1\le\sin(1/x)\le1$</span> for all <span class="math-container">$x\not=0$</span>, we have</p>
<p><span class="math-container">$${3\over2}\sqrt x-\sqrt x\le2\sin\sqrt x+\sqrt x\sin(1/x)\le2\sqrt x+\sqrt x$$</span></p>
<p>which simplifies to</p>
<p><span class="math-container">$${1\over2}\sqrt x\le2\sin\sqrt x+\sqrt x\sin(1/x)\le3\sqrt x$$</span></p>
<p>If we take for granted the limit <span class="math-container">$x^x\to1$</span> as <span class="math-container">$x\to0$</span> (easily proved with L'Hopital applied to <span class="math-container">$\ln x\over1/x$</span>), we have <span class="math-container">$\lim_{x\to0^+}(\sqrt x/2)^x=\lim_{x\to0^+}(3\sqrt x)^x=1$</span> so the Squeeze Theorem tells us</p>
<p><span class="math-container">$$\lim_{x\to0^+}(2\sin\sqrt x+\sqrt x\sin(1/x))^x=1$$</span></p>
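<p>A quick numerical check of the squeeze (sample points chosen arbitrarily): the base stays between the two bounds, and raising it to the power <span class="math-container">$x$</span> flattens everything toward <span class="math-container">$1$</span>:</p>

```python
import math

def base(x):
    # the base of the original expression: 2 sin(sqrt x) + sqrt(x) sin(1/x)
    return 2 * math.sin(math.sqrt(x)) + math.sqrt(x) * math.sin(1 / x)

for x in (1e-4, 1e-6, 1e-8):
    b = base(x)
    assert math.sqrt(x) / 2 <= b <= 3 * math.sqrt(x)   # the squeeze bounds
    assert abs(b ** x - 1) < 1e-3                      # the power tends to 1
```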
|
2,823,758 | <p>I was learning the definition of continuous as:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p>
</blockquote>
<p>For me this translates to the following implication:</p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p>
</blockquote>
<p>however, I would have expected the definition to be the other way round, i.e. with the 1st implication I defined. The reason for that is that just by looking at the metric space definition of continuous:</p>
<blockquote>
<p>$\exists q = f(p) \in Y, \forall \epsilon>0,\exists \delta >0, \forall x \in X, 0 < d(x,p) < \delta \implies d(f(x),q) < \epsilon$</p>
</blockquote>
<p>seems to be talking about Balls (i.e. open sets) in X and then has a forward arrow for open sets in Y, so it seems natural to expect the direction of the implication to go in that way round. However, it does not. Why does it not go that way? Whats is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p>
<p>I think conceptually I might be even confused why the topological definition of continuous requires to start from things in the target space Y and then require things in the domain. Can't we just say map things from X to Y and have them be close? <strong>Why do we require to posit things about Y first in either definition for the definition of continuous to work properly</strong>?</p>
<hr>
<p>I can't help but point out that this question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems to be similar but perhaps lack the detailed discussion on the direction on the implication for me to really understand why the definition is not reversed or what happens if we do reverse it. The second answer there tries to make an attempt at explaining why we require $f^{-1}$ to preserve the property of openness but its not conceptually obvious to me why thats the case or whats going on. Any help?</p>
<hr>
<p>For whoever suggest to close the question, the question is quite clear:</p>
<blockquote>
<p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p>
</blockquote>
<hr>
<p>As an additional important point I noticed is, pointing out <strong>the difference between open mapping and continuous function would be very useful</strong>.</p>
<hr>
<p>Note: I encountered this in baby Rudin, so thats as far as my background in analysis goes, i.e. metric spaces is my place of understanding. </p>
<hr>
<p>Extra confusion/Appendix:</p>
<p>Conceptually, I think I've managed to nail what my main confusion is. In conceptual terms continuous functions are suppose to map "nearby points to nearby points" so for me its metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" to be the definition of "close by". Balls are open but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological def respecting that conceptual requirement? </p>
| JuliusL33t | 111,167 | <p>The "normal" definition goes like this:</p>
<p>It is claimed that, at fixed point, for any given ball $B_\epsilon$ of radius $\epsilon$ in the image, there exists a ball $B_\delta$, in the preimage, of radius $\delta$ such that $Im (B_\delta) \subset B_\epsilon$. This is the implication $$(...) < \delta \implies (...) < \epsilon $$</p>
<p>Very informally, you could compare the statement, for continuous $f$,</p>
<blockquote>
<p>For any ball $B_\epsilon$ in the image, you can find a ball $B_\delta$
mapping into $B_\epsilon$</p>
</blockquote>
<p>and</p>
<blockquote>
<p>For any ball $B_\epsilon$ in the image, its preimage contains a ball
$B_\delta$</p>
</blockquote>
<p>and</p>
<blockquote>
<p>The preimages of open sets are open.</p>
</blockquote>
<p>In topological spaces, the last one is often taken as a definition.</p>
<hr>
<p>Regarding your interpretation </p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{−1}(U)$ is open</p>
</blockquote>
<p>This is perfectly valid and translates as "IF you give me an $\epsilon$ THEN I can find you a corresponding $\delta$".</p>
<hr>
<p>Regarding the implication, let me explain in this way, to show what happens with that implication:</p>
<p>Let $U \subset Y$ be open, then for this set you can have its preimage, $f^{-1}(U) \subset X$, which is the set that satisfies:
$$x \in f^{-1}(U) \implies f(x) \in U $$
So now you can freely say:</p>
<blockquote>
<p>For any open $U \subset Y$, there is a set $f^{-1}(U) \subset X.$</p>
</blockquote>
<p>If it just so happens that $f^{-1}(U)$ is open for any open $U$, then we call $f$ continuous. Translating, this means that if it just so happens that for any given radius $\epsilon$, we can find a corresponding $\delta$ such that
$$ x\in B_\delta \implies f(x) \in B_\epsilon, $$ then $f$ is continuous.</p>
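<p>A toy numerical illustration of that translation (the example is entirely mine): for $f(x)=x^2$ at $x_0=1$, whatever $\epsilon$ is handed to us, $\delta=\epsilon/3$ answers it, because $|f(x)-f(1)|=|x-1|\,|x+1|\le 3|x-1|$ whenever $|x-1|<1$:</p>

```python
def f(x):
    return x * x

x0 = 1.0
for eps in (1.0, 0.1, 0.01, 0.001):
    delta = eps / 3                     # a delta answering this particular eps
    for k in range(-100, 101):          # sample the whole delta-interval
        x = x0 + delta * k / 100
        assert abs(f(x) - f(x0)) < eps
```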
<hr>
<p>A few more details:</p>
<p>You have be rather careful when you state exactly what you mean with mapping "nearby points to nearby points".</p>
<p>Given a metric, we can always have balls as subsets of that space. The <em>open</em> sets are precisely those that, for each $x$, have some ball around them completely contained in the open set. This is true regardless of whether the open set is a union of open intervals, the whole space, a single interval, or any other open set.</p>
<p>To say that $f$ maps "nearby points to nearby points" means to say that, if you fix a point $x_0$, and look at what happens to points nearby $x_0$, they will all be mapped to points close to $f(x_0)$. The exact meaning of this is that: for each <em>fixed</em> $x\in f^{-1}(U)$, for any ball $B_\epsilon$ around $f(x)$ (and one exists, and satisfies $B_\epsilon \subset U$, by openness), there is a ball $B_\delta$ around the point $x$ that maps into $B_\epsilon$. Since $B_\epsilon \subset U$, we have $B_\delta \subset f^{-1}(U) $, which by definition makes the preimage open. It's a ball around an arbitrary point completely in $f^{-1}(U) $.</p>
<p>Whatever open set you have, all of the points in there will be interior, so continuity (finding matching balls $B_\delta$ and $B_\epsilon$) works at each point at a time, so to speak. And now it almost rolls off the tongue:
$$\forall x \ \forall \epsilon \ \exists \delta \ (...) $$</p>
<p>To me, it is somehow intuitively clear that if you want a statement about how some values of $f(x)$ behave, you would start with something about its target set. Maybe that's just me. You sort of start with the question "How close to $f(x_0)$ do you want the outputs of $f$ to be", which is a question about the target set.</p>
|
2,823,758 | <p>I was learning the definition of continuous as:</p>
<blockquote>
<p>$f\colon X\to Y$ is continuous if $f^{-1}(U)$ is open for every open $U\subseteq Y$</p>
</blockquote>
<p>For me this translates to the following implication:</p>
<blockquote>
<p>IF $U \subseteq Y$ is open THEN $f^{-1}(U)$ is open</p>
</blockquote>
<p>however, I would have expected the definition to be the other way round, i.e. with the 1st implication I defined. The reason for that is that just by looking at the metric space definition of continuous:</p>
<blockquote>
<p>$\exists q = f(p) \in Y, \forall \epsilon>0,\exists \delta >0, \forall x \in X, 0 < d(x,p) < \delta \implies d(f(x),q) < \epsilon$</p>
</blockquote>
<p>seems to be talking about Balls (i.e. open sets) in X and then has a forward arrow for open sets in Y, so it seems natural to expect the direction of the implication to go in that way round. However, it does not. Why does it not go that way? Whats is wrong with the implication going from open in X to open in Y? And of course, why is the current direction the correct one?</p>
<p>I think conceptually I might be even confused why the topological definition of continuous requires to start from things in the target space Y and then require things in the domain. Can't we just say map things from X to Y and have them be close? <strong>Why do we require to posit things about Y first in either definition for the definition of continuous to work properly</strong>?</p>
<hr>
<p>I can't help but point out that this question <a href="https://math.stackexchange.com/questions/323610/the-definition-of-continuous-function-in-topology">The definition of continuous function in topology</a> seems to be similar but perhaps lack the detailed discussion on the direction on the implication for me to really understand why the definition is not reversed or what happens if we do reverse it. The second answer there tries to make an attempt at explaining why we require $f^{-1}$ to preserve the property of openness but its not conceptually obvious to me why thats the case or whats going on. Any help?</p>
<hr>
<p>For whoever suggest to close the question, the question is quite clear:</p>
<blockquote>
<p><strong>why is the reverse implication not the "correct" definition of continuous?</strong></p>
</blockquote>
<hr>
<p>As an additional important point I noticed is, pointing out <strong>the difference between open mapping and continuous function would be very useful</strong>.</p>
<hr>
<p>Note: I encountered this in baby Rudin, so thats as far as my background in analysis goes, i.e. metric spaces is my place of understanding. </p>
<hr>
<p>Extra confusion/Appendix:</p>
<p>Conceptually, I think I've managed to nail what my main confusion is. In conceptual terms continuous functions are suppose to map "nearby points to nearby points" so for me its metric space definition makes sense in that sense. However, that doesn't seem obvious to me unless we equate "open sets" to be the definition of "close by". Balls are open but there are plenty of sets that are open but are not "close by", for example the union of two open balls. I think this is what is confusing me most. How is the topological def respecting that conceptual requirement? </p>
| CopyPasteIt | 432,081 | <p>The notion of topological space and the definition of a continuous function are certainly in the realm of 'abstract' mathematics. To say that a function $f$ is continuous means that if points are close to other points then they don't get 'ripped away' when applying it - they 'follow the action' of $f$.</p>
<p>Now we can also define a topological space using closed sets. Hmm,</p>
<p>$\quad \text{Open sets: Take open subsets of the codomain back (in some way...)}$</p>
<p>so maybe?!?</p>
<p>$\quad \text{Closed sets: Take closed subsets of the domain forward (in some way)...}$</p>
<p>The OP will find this interesting:</p>
<p><a href="https://math.stackexchange.com/q/114462/432081">A map is continuous if and only if for every set, the image of closure is contained in the closure of image</a></p>
<p>$\tag 1 \text{For any } A, \; f(\overline{A})\subseteq \overline{f(A)}$</p>
<p>or intuitively, all points 'close to' $A$ get mapped to points 'close to' $f(A)$.</p>
|
205,671 | <p>How would one go about showing the polar version of the Cauchy Riemann Equations are sufficient to get differentiability of a complex valued function which has continuous partial derivatives? </p>
<p>I haven't found any proof of this online.</p>
<p>One of my ideas was writing out $r$ and $\theta$ in terms of $x$ and $y$, then taking the partial derivatives with respect to $x$ and $y$ and showing the Cauchy Riemann equations in the Cartesian coordinate system are satisfied. A problem with this approach is that derivatives get messy.</p>
<p>What are some other ways to do it?</p>
| syockit | 53,159 | <p>We can derive using purely polar coordinates. Start with
\begin{align}
z(r,\theta) &=r\,\mathrm{e}^{\mathrm{i}\theta} \\
f(z) &= u(r,\theta) + \mathrm{i} v(r,\theta)
\end{align}
We define $f'(z)$ using the limit
$$ f'(z) = \lim_{\Delta z\to 0} \frac{\Delta f}{\Delta z} $$
where
\begin{align}
\Delta f &= \Delta u + \mathrm{i} \Delta v \\
\Delta z &= z(r+\Delta r,\theta + \Delta\theta) - z(r,\theta) \\
&= (r+\Delta r)\,\mathrm{e}^{\mathrm{i}(\theta+\Delta\theta)} - r \,\mathrm{e}^{\mathrm{i}\theta}
\end{align}
Next, we try to first approach from $\Delta\theta\to 0$,
$$ \Delta z = (r+\Delta r)\,\mathrm{e}^{\mathrm{i}\theta} - r \,\mathrm{e}^{\mathrm{i}\theta} = \Delta r\,\mathrm{e}^{\mathrm{i}\theta} $$
Therefore when we take the limit,
$$ f'(z) = \lim_{\Delta r\to 0} \frac{\Delta u + \mathrm{i} \Delta v}{\Delta r \,\mathrm{e}^{i\theta}} = \frac{1}{\mathrm{e}^{\mathrm{i}\theta}}(u_r + \mathrm{i} v_r) \label{a}\tag{1}$$
On the other hand, approaching from $\Delta r\to 0$ first yields
$$\Delta z = r\,\mathrm{e}^{\mathrm{i}(\theta+\Delta\theta)} - r \,\mathrm{e}^{\mathrm{i}\theta}$$
Using the derivative
$$\mathrm{e}^{\mathrm{i}(\theta+\Delta\theta)}-\mathrm{e}^{\mathrm{i}\theta} = \frac{\mathrm{d}\mathrm{e}^{\mathrm{i}\theta}}{\mathrm{d}\theta}\Delta\theta = \mathrm{i}\mathrm{e}^{\mathrm{i}\theta}\Delta\theta$$
we get
$$\Delta z = \mathrm{i}r\,\mathrm{e}^{\mathrm{i}\theta}\Delta\theta$$
Therefore
$$f'(z) = \lim_{\Delta\theta\to 0} \frac{\Delta u + \mathrm{i} \Delta v}{\mathrm{i}r\,\mathrm{e}^{\mathrm{i}\theta}\Delta\theta} = \frac{1}{\mathrm{i}r\,\mathrm{e}^{\mathrm{i}\theta}}(u_\theta + \mathrm{i}v_\theta) \label{b}\tag{2}$$
Finally, comparing the real and imaginary parts of ($\ref{a}$) and ($\ref{b}$) gives us what we want:
$$u_r=\frac{1}{r}v_\theta \quad,\quad v_r = -\frac{1}{r}u_\theta$$</p>
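<p>Both equations are easy to spot-check numerically for a concrete analytic function (here $f(z)=z^3$, an arbitrary choice) using central differences:</p>

```python
import cmath

def f(z):                 # any analytic test function will do
    return z ** 3

def u(r, th):
    return f(cmath.rect(r, th)).real

def v(r, th):
    return f(cmath.rect(r, th)).imag

r0, th0, h = 1.3, 0.7, 1e-6
u_r  = (u(r0 + h, th0) - u(r0 - h, th0)) / (2 * h)
u_th = (u(r0, th0 + h) - u(r0, th0 - h)) / (2 * h)
v_r  = (v(r0 + h, th0) - v(r0 - h, th0)) / (2 * h)
v_th = (v(r0, th0 + h) - v(r0, th0 - h)) / (2 * h)

assert abs(u_r - v_th / r0) < 1e-6    # u_r = (1/r) v_theta
assert abs(v_r + u_th / r0) < 1e-6    # v_r = -(1/r) u_theta
```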
<hr>
<p>Reference: Kwong-tin Tang, <em><a href="http://www.springer.com/physics/theoretical%2C+mathematical+%26+computational+physics/book/978-3-540-30273-5" rel="noreferrer">Mathematical Methods for Engineers and Scientists 1</a></em> (Springer, 2007)</p>
|
109,734 | <p>I am trying to do this homework problem and I have no idea how to approach it. I have tried many methods, all resulting in failure. I went to the book's website and it offers no help. I am trying to find the derivative of the function
$$y=\cot^2(\sin \theta)$$</p>
<p>I could be incorrect but a trig function squared would be the result of the trig function with the angle value and then squared. Not the angle value squared, that would give a different answer. Knowing this I also know that I can not use the table of simple trig derivatives so I know I can't just take the derivative as
$$y=\cot^2(x)$$
$$ x=\sin(\theta)$$ </p>
<p>This does not help because I can't get the derivative of cot squared. What I did try to do was rewrite it as $\frac{\cos x}{\sin x}\frac{\cos x}{\sin x}$ and then find the derivative of that but something went wrong with that and it does not produce an answer that is like the one in the book. In fact the book gets a csc squared in the answer so I know they are doing something very different.</p>
| Peđa | 15,660 | <p>$y'=2\cot(\sin \theta)\cdot\left(-\csc^2(\sin \theta)\right)\cdot(\sin \theta)'=2 \cot(\sin \theta)\left(-1-\cot^2(\sin \theta)\right)\cos \theta$</p>
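<p>A finite-difference check of this formula at an arbitrarily chosen point $\theta=0.8$:</p>

```python
import math

def cot(x):
    return math.cos(x) / math.sin(x)

def y(t):                  # y = cot^2(sin t)
    return cot(math.sin(t)) ** 2

def dy(t):                 # chain rule: 2 cot(sin t) * (-1 - cot^2(sin t)) * cos t
    c = cot(math.sin(t))
    return 2 * c * (-1 - c ** 2) * math.cos(t)

t0, h = 0.8, 1e-6
numeric = (y(t0 + h) - y(t0 - h)) / (2 * h)
assert abs(numeric - dy(t0)) < 1e-4
```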
|
4,268,962 | <blockquote>
<p>Check whether <span class="math-container">$y=\ln (xy)$</span> is a solution of the following differential equation:</p>
<p><span class="math-container">$$(xy-x)y''+xy'^2+yy'-2y'=0$$</span></p>
</blockquote>
<p>First I tried to solve the equation,</p>
<p><span class="math-container">$$x(yy''-y''+y'^2)+yy'-2y'=0$$</span>
<span class="math-container">$$x((yy')'-y'')+(yy')-2y'=0$$</span>
Since I have <span class="math-container">$-y''$</span> in the parenthesis , the substitution <span class="math-container">$z=yy'$</span> doesn't work here but if it was <span class="math-container">$-2y''$</span> instead, I could use the substitution <span class="math-container">$u=yy'-2y'$</span> but it is not the case.</p>
<hr />
<p>My second try was taking derivative of the answer (i.e <span class="math-container">$y=\ln(xy)$</span> ) and plugging it in the D.E,</p>
<p><span class="math-container">$$y'=\frac1x+\frac{y'}y\quad\Rightarrow y'(1-\frac1y)=\frac1x\quad\Rightarrow y'=\frac y{y-1}\times \frac1x$$</span></p>
<p><span class="math-container">$$y''=\frac{-1}{x^2}+\frac{yy''-y'^2}{y^2}\quad\Rightarrow y''=\frac{y}{y-1}\times\left(\frac{-1}{x^2}-\frac{y'^2}{y^2}\right)$$</span>
But it is getting really ugly when I plug <span class="math-container">$y,y',y''$</span> in the original equation.</p>
| Claude Leibovici | 82,404 | <p>Consider the implicit equation
<span class="math-container">$$F(x,y)=y-\log(x y)=0$$</span></p>
<p>So, using the implicit function theorem,
<span class="math-container">$$y'=\frac{y}{x (y-1)}\qquad \text{and} \qquad y''=-\frac{y ((y-2) y+2)}{x^2 (y-1)^3}$$</span></p>
<p>Just replace and simplify.</p>
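<p>Replacing and simplifying does give zero. Here is a numerical confirmation (branch and test point chosen arbitrarily): solve <span class="math-container">$y=\ln(xy)$</span> for <span class="math-container">$y$</span> at <span class="math-container">$x=5$</span> by bisection, then plug both formulas into the differential equation:</p>

```python
import math

def solve_y(x, lo=2.0, hi=3.0):
    # bisection on g(y) = y - ln(x*y); [2, 3] brackets the upper branch at x = 5
    g = lambda y: y - math.log(x * y)
    for _ in range(200):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x = 5.0
y = solve_y(x)
yp  = y / (x * (y - 1))                                  # y' from above
ypp = -y * ((y - 2) * y + 2) / (x ** 2 * (y - 1) ** 3)   # y'' from above
residual = (x * y - x) * ypp + x * yp ** 2 + y * yp - 2 * yp
assert abs(y - math.log(x * y)) < 1e-9   # y really solves y = ln(xy)
assert abs(residual) < 1e-9              # and the ODE is satisfied
```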
|
3,907,928 | <p>Suppose the solutions to a general cubic equation <span class="math-container">$ax^3+bx^2+cx+d=0$</span> are to be found. Then according to Cardano's method, first a variable substitution must be carried out to convert the general cubic to a depressed cubic.
<span class="math-container">$$ax^3+bx^2+cx+d=0\rightarrow t^3+pt+q=0\ \text{where $t=x-\frac{b}{3a}$}$$</span>
This seems pretty clear but what happens next is not so obvious for me.</p>
<p>Now we take <span class="math-container">$t=u+v$</span> to get <span class="math-container">$u^3+v^3+(3uv+p)(u+v)+q=0$</span>. Assuming <span class="math-container">$3uv+p=0$</span>, we get the system
<span class="math-container">$$u^3+v^3=-q$$</span> <span class="math-container">$$uv=\frac{-p}{3}$$</span>. Using Vieta's formula, the quadratic equation with <span class="math-container">$u^3$</span> and <span class="math-container">$v^3$</span> as roots is <span class="math-container">$$x^2+qx-\frac{p^3}{27}=0$$</span>
Now the roots for quadratic can be found by analysing the discriminant.</p>
<p>My doubt is that what was the inspiration behind assuming <span class="math-container">$3uv+p=0$</span>, except helping to form the quadratic equation.</p>
<p><em>Please help</em></p>
<p><em><strong>THANKS</strong></em></p>
| Andy Walls | 441,161 | <p>One source of inspiration may have been looking at the binomial expansion of <span class="math-container">$(u+v)^3$</span> and mapping it to the cubic <span class="math-container">$t^3 + pt +q = 0$</span>.</p>
<p><span class="math-container">$$\begin{align*}(u+v)^3 &= u^3 +3 u^2v +3uv^2+v^3 \\
\\
&= u^3+v^3+3uv(u+v) \\
\\
(u+v)^3 - 3uv(u+v)-(u^3+v^3)&= 0\\
\end{align*}$$</span></p>
<p>Substituting <span class="math-container">$t =u+v$</span> yields</p>
<p><span class="math-container">$$t^3-3uvt-(u^3+v^3) =0$$</span></p>
<p>Which forces the equalities
<span class="math-container">$$ p = -3uv$$</span>
<span class="math-container">$$q= -(u^3+v^3)$$</span>
to complete the mapping of the depressed cubic equation to the equation for the cube of a binomial.</p>
<p>Is this what really happened historically? I don't know. Probably not.</p>
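<p>The recipe turns directly into a procedure. Below is a sketch for the positive-discriminant case (one real root); the test cubic <span class="math-container">$t^3-6t-9=0$</span> is my own choice, and it gives <span class="math-container">$u^3=8$</span>, <span class="math-container">$v^3=1$</span>, hence <span class="math-container">$t=u+v=3$</span>:</p>

```python
import math

def real_cbrt(x):
    # real cube root, valid for negative x as well
    return math.copysign(abs(x) ** (1 / 3), x)

def cardano_real_root(p, q):
    # u^3 and v^3 are the roots of w^2 + q w - p^3/27 = 0 (Vieta)
    disc = (q / 2) ** 2 + (p / 3) ** 3
    assert disc >= 0, "this sketch handles the one-real-root case only"
    u = real_cbrt(-q / 2 + math.sqrt(disc))
    v = real_cbrt(-q / 2 - math.sqrt(disc))
    return u + v

t = cardano_real_root(-6, -9)       # t^3 - 6t - 9 = 0
assert abs(t - 3) < 1e-9
assert abs(t ** 3 - 6 * t - 9) < 1e-9
```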
|
4,487,380 | <p>I was reading my calculus book wherein I came across a note worth attention. It says:</p>
<blockquote>
<p>Integrals in the form of <span class="math-container">$\int P(x)e^{ax}dx$</span> have a special property. After calculating the integral, we obtain a function in the form of <span class="math-container">$Q(x)e^{ax}$</span> where <span class="math-container">$Q(x)$</span> is a polynomial of the same degree as <span class="math-container">$P(x)$</span>. This is called the method of undetermined coefficients.</p>
</blockquote>
<p>Like for example, we do <span class="math-container">$\displaystyle\int (3x^3-17)e^{2x}dx$</span>. We can do it traditionally by integration by parts, but let me show you my, or rather the author's, method:</p>
<p>Let this be equal to <span class="math-container">$(Ax^3+Bx^2+Cx+D)e^{2x}$</span></p>
<p>Now differentiating both sides we get, <span class="math-container">$$(3x^3-17)e^{2x}=2(Ax^3+Bx^2+Cx+D)e^{2x}+e^{2x}(3Ax^2+2Bx+C)$$</span>
Now we will cancel <span class="math-container">$e^{2x}$</span> on both sides and the rest is equating the coefficients.</p>
<p>I have seen that this property is applicable in every question but I don't know the mathematical proof of this. It wasn't even in the book.</p>
<p>Any help regarding the proof is greatly appreciated.</p>
| dxiv | 291,201 | <p>The problem reduces to finding the polynomial <span class="math-container">$Q$</span> of degree <span class="math-container">$n = \deg P$</span> such that <span class="math-container">$P = aQ + Q'$</span>.</p>
<p>Differentiating <span class="math-container">$n$</span> times and eliminating <span class="math-container">$Q', Q'', \dots, Q^{(n)}$</span> between the equations gives:</p>
<p><span class="math-container">$$
\require{cancel}
\begin{align}
P &= aQ + \color{red}{\bcancel{Q'}}
\\ P' &= \color{red}{\bcancel{aQ'}} + \color{blue}{\bcancel{Q''}} \quad &&\big|\;\cdot \color{red}{\frac{-1}{a}}
\\ P'' &= \color{blue}{\bcancel{aQ''}} + \bcancel{Q'''} \quad &&\big|\;\cdot \color{blue}{\frac{+1}{a^2}}
\\ \dots
\\ P^{(n)} &= \bcancel{aQ^{(n)}} + \xcancel{Q^{(n+1)}} \quad &&\big|\;\cdot \frac{(-1)^n}{a^n}
\\ \hline
P - \frac{1}{a} P' + \frac{1}{a^2} P'' - \dots + (-1)^n \frac{1}{a^{n}} P^{(n)} &= aQ
\end{align}
$$</span>
<span class="math-container">$$
\iff\quad Q = \frac{1}{a} P - \frac{1}{a^2} P' + \frac{1}{a^3} P'' - \dots + (-1)^n \frac{1}{a^{n+1}} P^{(n)}
$$</span></p>
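<p>The closed form for <span class="math-container">$Q$</span> is easy to verify mechanically with exact rational arithmetic. For the opening example <span class="math-container">$P=3x^3-17$</span>, <span class="math-container">$a=2$</span>, it yields <span class="math-container">$Q=\tfrac32x^3-\tfrac94x^2+\tfrac94x-\tfrac{77}8$</span>, and indeed <span class="math-container">$aQ+Q'=P$</span>:</p>

```python
from fractions import Fraction as F

def deriv(p):                    # p = [c0, c1, ...] represents c0 + c1 x + ...
    d = [F(k) * c for k, c in enumerate(p)][1:]
    return d or [F(0)]

def undetermined_Q(p, a):
    # Q = P/a - P'/a^2 + P''/a^3 - ... (the alternating sum above)
    q = [F(0)] * len(p)
    term, sign = [F(c) for c in p], 1
    for j in range(len(p)):
        for k, c in enumerate(term):
            q[k] += sign * c / F(a) ** (j + 1)
        term, sign = deriv(term), -sign
    return q

P = [F(-17), F(0), F(0), F(3)]   # 3x^3 - 17
Q = undetermined_Q(P, 2)

lhs = [2 * c for c in Q]         # check a*Q + Q' == P
for k, c in enumerate(deriv(Q)):
    lhs[k] += c
assert lhs == P
```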
|
1,297,319 | <p>integration equation </p>
<p>$$\int_{0}^{1/8} \frac{4}{\sqrt{(1-4x^2)}} \,dx$$</p>
<p>my work </p>
<p>$t= \sqrt{(1-4x^2)} $</p>
<p>$dt = -4x/\sqrt{(1-4x^2)} dx $</p>
<p>stuck here also </p>
| Lucas | 227,060 | <p>Let $u=4x \Rightarrow du=4dx$</p>
<blockquote>
<p>Therefore $\int\frac{4}{\sqrt{1-4x^2}}dx=2\int\frac{du}{\sqrt{4-u^2}}=2\arcsin\left(\frac{u}{2}\right)+C=2\arcsin(2x)+C$</p>
</blockquote>
<p>$\Rightarrow\int_{0}^{1/8}\frac{4}{\sqrt{(1-4x^2)}}dx=2\arcsin\left(\frac{1}{4}\right)$</p>
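<p>A numerical cross-check with composite Simpson's rule (the integrand is smooth on $[0,\frac{1}{8}]$) confirms the value $2\arcsin\frac{1}{4}\approx 0.5054$:</p>

```python
import math

def f(x):
    return 4 / math.sqrt(1 - 4 * x * x)

def simpson(f, a, b, n=1000):    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

val = simpson(f, 0.0, 0.125)
assert abs(val - 2 * math.asin(0.25)) < 1e-9
```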
|