4,253,761
<p><span class="math-container">$$\begin{array}{ll} \text{extremize} &amp; xy+2yz+3zx\\ \text{subject to} &amp; x^2+y^2+z^2=1\end{array}$$</span></p> <p>How do I find the maximum/minimum using Lagrange multipliers?</p> <p>Context: This is not a homework problem; my friend and I often make up problems to challenge each other. We both love Maths and we are both students.</p> <p>I have improved my answer based on user247327's suggestion, and I have found the maximum value of 2.056545. Thank you for contributing ideas to my questions.</p>
Ferris Boyler
637,978
<p>Suppose we want to find <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> such that:</p> <p><span class="math-container">$$a_1x_1+a_2x_2=0$$</span></p> <p>Well, we could pick <span class="math-container">$x_1=x_2=0$</span>: this is called the trivial solution of the linear equation.</p> <hr /> <p>Now, suppose we want to find <span class="math-container">$x_1$</span>,<span class="math-container">$x_2$</span>,...,<span class="math-container">$x_n$</span> such that:</p> <p><span class="math-container">$$a_1x_1+a_2x_2+\cdots+a_nx_n=0$$</span></p> <p>Well, we could pick <span class="math-container">$x_1=x_2=\cdots=x_n=0$</span>: this is called the trivial solution of the linear equation.</p> <hr /> <p>Now, suppose we want to find <span class="math-container">$x_1$</span>,<span class="math-container">$x_2$</span>,...,<span class="math-container">$x_n$</span> such that:</p> <p><span class="math-container">$$a_{1,1}x_1+a_{1,2}x_2+\cdots+a_{1,n}x_n=0$$</span></p> <p><span class="math-container">$$a_{2,1}x_1+a_{2,2}x_2+\cdots+a_{2,n}x_n=0$$</span></p> <p><span class="math-container">$$\vdots$$</span></p> <p><span class="math-container">$$a_{n,1}x_1+a_{n,2}x_2+\cdots+a_{n,n}x_n=0$$</span></p> <p>Well, we could pick <span class="math-container">$x_1=x_2=\cdots=x_n=0$</span>: this is called the trivial solution of the homogeneous system of linear equations (homogeneous because the right side of all the equations is zero).</p>
1,201,904
<p>I have to implement a circuit following the boolean equation A XOR B XOR C, however the XOR gates I am using only have two inputs (I am using the 7486 XOR gate to be exact, in case that makes a difference)... is there a way around this?</p>
David Holden
79,543
<p>use two gates. input $A$ and $B$ to the first gate, then input the output of the first gate and $C$ to the second gate.</p>
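Since XOR is associative, the two-gate cascade computes the same truth table as a direct three-input XOR (output 1 exactly when an odd number of inputs are 1). A quick exhaustive check, sketched in Python with function names of my choosing:

```python
from itertools import product

def xor3_cascaded(a, b, c):
    """Model two 2-input XOR gates (such as the 7486) in cascade:
    the first gate computes a XOR b, the second XORs that with c."""
    first_gate = a ^ b
    return first_gate ^ c

# Compare against the parity of the three inputs over all 8 combinations
for a, b, c in product((0, 1), repeat=3):
    assert xor3_cascaded(a, b, c) == (a + b + c) % 2
```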
4,292,091
<p>I'm reading the definition of <span class="math-container">$\inf\emptyset$</span> and <span class="math-container">$\sup\emptyset$</span>.</p> <p>a) I'm wondering why <span class="math-container">$\inf\emptyset = \infty$</span> and <span class="math-container">$\sup\emptyset = -\infty$</span>. I would have expected both to be undefined.</p> <p>b) In general, can something equal infinity if it's not in the extended real number system? Should I assume they are talking about extended real numbers in these definitions?</p>
Zuy
730,592
<p>You can assume that the author used the extended real number line for these definitions.</p> <p>In fact, here's a motivation for the above definition.</p> <p>If you have two sets <span class="math-container">$A\subseteq B\subseteq\mathbb R$</span>, then you want them to satisfy <span class="math-container">$$\inf A\geq \inf B,\quad \sup A\leq \sup B.$$</span></p> <p>You can check that this always works whenever both <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are non-empty.</p> <p>We want this to remain true even if we allow <span class="math-container">$A=\varnothing$</span>. We then must have <span class="math-container">$$\inf\varnothing\geq \inf B,\quad \sup\varnothing\leq \sup B$$</span> for <em>any</em> set <span class="math-container">$B\subseteq\mathbb R$</span>.</p> <p>Since you can then choose <span class="math-container">$B=\{x\}$</span> for <span class="math-container">$x\in\mathbb R$</span> arbitrarily large (or small), we are forced to define <span class="math-container">$$\inf\varnothing=+\infty,\quad \sup\varnothing=-\infty.$$</span></p>
Vercassivelaunos
803,179
<p>As others have said, this assumes that we are working in the extended reals.</p> <p>Contrary to what the others said, though, <span class="math-container">$\sup\emptyset=-\infty$</span> is not a definition made out of convenience. It is a direct consequence of the normal definition of the supremum: the smallest upper bound of the set. Since everything is an upper bound of the empty set (everything is larger than all its elements), the smallest upper bound is <span class="math-container">$-\infty$</span>.</p> <p>Essentially the same applies to the infimum.</p>
2,819,667
<p>The problem is the same as <a href="https://math.stackexchange.com/questions/14190/average-length-of-the-longest-segment">here</a>. </p> <blockquote> <p>A stick of 1m is divided into three pieces by two random points. Find the average length of the largest segment.</p> </blockquote> <p>I tried solving it in a different way, and the logic seems fine, however I get a different result to $\frac{11}{18}$. </p> <p>Here is my solution. Please let me know what I did wrong. </p> <p>Let $X$ be the length of the stick from the beginning to the first cut. $Y$ be the length of the stick between the first and second cut and $1-X-Y$ the length between the second cut and the end of the stick. </p> <p>We want to find the CDF of the following random variable: $Z=\max(X,Y,1-X-Y)$. (I believe that if anything is wrong, this might be it).</p> <p>$$\begin{split} F_Z(z) = P(Z\leq z) &amp; = P(\max(X,Y,1-X-Y) \leq z)\\ &amp; = P(X\leq z, Y\leq z, 1-X-Y\leq z)\\ &amp;= P(1-Y-z\leq X \leq z, Y\leq z) \end{split} $$</p> <p>Since we have $1-Y-z\leq z$ we deduce that $Y\geq 1-2z$. Hence: $$\begin{split} F_Z(z) &amp;= \int_{1-2z}^z\int_{1-y-z}^z 1 dx dy = \int_{1-2z}^z (z-1+y+z) dy\\ &amp;= (2z-1)(z-1+2z) + \left. \frac{y^2}{2}\right|_{y=1-2z}^{y=z} \\ &amp;=(2z-1)(3z-1) + \frac{1}{2}(z^2- (2z-1)^2) \\ &amp; = (2z-1)(3z-1) +\frac{1}{2}(-3z^2 + 4z -1) \\ &amp; = \frac{1}{2}(3z-1)^2 \end{split} $$ Now, the pdf of $Z$ is : $$f_Z(z) = \frac{d}{dz}F_Z(z) = 9z-3 $$</p> <p>And now, in order to find the expected value of the largest length, we need to integrate over $(\frac{1}{3},1)$ as the largest piece needs to be greater than $\frac{1}{3}$. Hence</p> <p>$$\begin{split} E[Z] = \int_{\frac{1}{3}}^{1} z f_Z(z) dz = \int_{\frac{1}{3}}^{1} z (9z-3) dz = \frac{14}{9} \end{split} $$ The result is obviously wrong as it needs to be something between $0$ and $1$, however after going over the solution multiple times, and checking the calculations with Wolfram, I cannot seem to figure out what went wrong. 
</p>
leonbloy
312
<p>The first integral is wrong, because it assumes that $X,Y$ are uniform and independent on $[0,1]^2$. They are not (for one thing, $X + Y \le 1$).</p>
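leonbloy's point, and the $\frac{11}{18}$ value, can be checked by simulation; a Monte Carlo sketch with parameters of my choosing:

```python
import random

def avg_longest_piece(trials=200_000, seed=1):
    """Break a unit stick at two uniform random points; average the longest piece."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        u, v = rng.random(), rng.random()
        a, b = min(u, v), max(u, v)
        # piece lengths: X = a, Y = b - a, 1 - X - Y = 1 - b
        total += max(a, b - a, 1 - b)
    return total / trials

est = avg_longest_piece()
# 11/18 ≈ 0.6111; the estimate should land close to it
assert abs(est - 11 / 18) < 0.01
```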
478,516
<p>$\lim_{d \to \infty} (1+\frac{w}{d})^{\frac{d}{w} } = e$. But what if the number of bits used to encode $d$ is polynomial in length? In this model, infinity can't be encoded. However, $d$ is polynomially much larger than $w$. Is there any tight lower bound, a closed-form function $f(d)$, such that $$ f(d) \le (1+\frac{w}{d})^{\frac{d}{w} }$$</p>
Antonio Vargas
5,531
<p>By expanding the left-hand side as a power series one can show that</p> <p>$$ \left(1+\frac{1}{x}\right)^x &gt; e - \frac{e}{2x} $$</p> <p>for $x &gt; 0$, where the approximation gets better as $x$ gets larger. By setting $x = d/w$ we get</p> <p>$$ \left(1 + \frac{w}{d}\right)^{d/w} &gt; e - \frac{ew}{2d} $$</p> <p>for $d/w &gt; 0$.</p>
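The claimed bound is easy to probe numerically (an illustrative sketch, not a proof; the helper name is mine):

```python
import math

def lower_bound_gap(x):
    """Gap between (1 + 1/x)^x and the claimed lower bound e - e/(2x)."""
    return (1 + 1 / x) ** x - (math.e - math.e / (2 * x))

# The bound holds (gap > 0) and tightens as x grows,
# consistent with the expansion (1 + 1/x)^x = e*(1 - 1/(2x) + O(1/x^2)).
gaps = [lower_bound_gap(x) for x in (1.0, 2.0, 10.0, 100.0, 1000.0)]
assert all(g > 0 for g in gaps)
assert gaps[-1] < gaps[0]  # gap shrinks for larger x
```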
2,902,768
<p>$f:\mathbb{R}^2 \to \mathbb{R}$</p> <p>$f\Bigg(\begin{matrix}x\\y\end{matrix}\Bigg)=\begin{cases}\frac{xy^2}{x^2+y^2},(x,y)^T \neq(0,0)^T \\0 , (x,y)^T=(0,0)^T\end{cases}$</p> <p>I need to determine all partial derivatives for $(x,y)^T \in \mathbb{R}^2$:</p> <p>$f_x=y^2/(x^2+y^2)-2x^2y^2/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p> <p>$f_y=2xy/(x^2+y^2)-2xy^3/(x^2+y^2)^2$ for $(x,y)^T \neq (0,0)$</p> <p>and $f_x=f_y=0$ for $(x,y)^T = (0,0)$.</p> <p>Then I need to determine $\frac{\partial f}{\partial v}((0,0)^T)$ for all $v=(v_1,v_2)^T \in \mathbb{R}^2$.</p> <p>I tried: $\frac{1}{s} (f(x+sv)-f(x))$ at $x=(0,0)^T$ is equal to $\frac{1}{s} f(sv)$=$\frac{1}{s} f\Big(\begin{matrix}sv_1\\sv_2\end{matrix}\Big)$.</p> <p>Which is either equal to $0$ when the argument is $(0,0)^T$ or it is $\frac{1}{s}\frac{sv_1s^2v_2^2}{s^2v_1^2+s^2v_2^2}$ which converges to $\frac{v_1v_2^2}{v_1^2+v_2^2}$ as $s \to 0$.</p> <p>Is that correct so far?</p> <p>And how do I know if $f$ is continuously partially differentiable on $\mathbb{R}^2$? According to our professor $f$ is not differentiable at $0$. How do I show that? As far as I know it has something to do with the fact that something is not linear, but I don't know what exactly. So I guess it can't be continuously partially differentiable on $\mathbb{R}^2$ either, but I am not sure about that.</p> <p>Thanks for your help!</p>
user
505,767
<p>Recall that when a function is differentiable at a point $(x_0,y_0)$ the following holds</p> <p>$$\frac{\partial f}{\partial v}=\nabla f(x_0,y_0)\cdot v$$</p> <p>that is, the directional derivatives are linear functions of $v$.</p> <p>Therefore that is a necessary condition for the differentiability of $f(x,y)$ at $(x_0,y_0)$.</p> <p>Since at $(0,0)$ the condition is fulfilled, we need to check directly by the <a href="https://en.m.wikipedia.org/wiki/Differentiable_function" rel="nofollow noreferrer">definition</a> whether the following holds</p> <p>$$\lim_{(x,y)\to(0,0)}\frac{f(x,y)-f(0,0)-\nabla f(0,0)\cdot (x,y)}{\sqrt{x^2+y^2}}=\lim_{(x,y)\to(0,0)}\frac{f(x,y)}{\sqrt{x^2+y^2}}=0$$</p>
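The failure of that limit can be seen numerically: along the line $y=x$ the quotient $f(x,y)/\sqrt{x^2+y^2}$ is the constant $1/(2\sqrt2)$, not $0$. An illustrative Python sketch:

```python
import math

def f(x, y):
    """f(x,y) = x*y^2/(x^2+y^2), extended by f(0,0) = 0."""
    if x == 0 and y == 0:
        return 0.0
    return x * y * y / (x * x + y * y)

# If f were differentiable at the origin (with gradient (0,0) there),
# f(x,y)/sqrt(x^2+y^2) would tend to 0 along every path.
# Along y = x it is the constant 1/(2*sqrt(2)) instead.
for t in (1e-2, 1e-4, 1e-6):
    ratio = f(t, t) / math.hypot(t, t)
    assert abs(ratio - 1 / (2 * math.sqrt(2))) < 1e-12
```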
4,349,487
<h2>Problem:</h2> <p>Let <span class="math-container">$a,x&gt;0$</span>, then define:</p> <h2><span class="math-container">$$f\left(x\right)=x^{\frac{x}{x^{2}+1}}$$</span></h2> <p>And:</p> <h2><span class="math-container">$$g\left(x\right)=\sqrt{\frac{x^{a}}{a}}+\sqrt{\frac{a^{x}}{x}}$$</span></h2> <p>Then prove or disprove that:</p> <h2><span class="math-container">$$2^{\frac{1}{2}}\cdot\left(f\left(xa\right)+f\left(\frac{1}{xa}\right)\right)^{\frac{1}{2}}\leq g(x)\tag{E}$$</span></h2> <hr /> <hr /> <h2>My attempt</h2> <p>From <a href="https://math.stackexchange.com/questions/3253006/new-bound-for-am-gm-of-2-variables">New bound for Am-Gm of 2 variables</a> we have:</p> <h2><span class="math-container">$$h(x)=\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{-\frac{x}{2}}\geq x^{\frac{x}{x^{2}+1}}$$</span></h2> <p>So we need to show:</p> <p><span class="math-container">$$\sqrt{2}\sqrt{h\left(xa\right)+h\left(\frac{1}{xa}\right)}\leq g(x)$$</span></p> <p>Now using the binomial theorem (second order at <span class="math-container">$x=1$</span>) for <span class="math-container">$1\leq x\leq 2$</span> and <span class="math-container">$0.5\leq a\leq 1$</span> we need to show:</p> <p><span class="math-container">$$\sqrt{2}\sqrt{r\left(xa\right)+r\left(\frac{1}{xa}\right)}\leq g(x)$$</span></p> <p>Where:</p> <p><span class="math-container">$$r(x)=\left(1+\left(-\frac{1}{x}+\frac{1}{x^{2}}\right)x+\frac{1}{2}\left(-\frac{1}{x}+\frac{1}{x^{2}}\right)^{2}\cdot x\cdot\left(x-1\right)\right)^{-\frac{1}{2}}$$</span></p> <p>I have not tried it, but in <a href="https://math.stackexchange.com/questions/4268913/show-this-inequality-sqrt-fracabb-sqrt-fracbaa-ge-2/4269271#4269271">show this inequality $\sqrt{\frac{a^b}{b}}+\sqrt{\frac{b^a}{a}}\ge 2$</a> user RiverLi provides some lower bound; again I haven't checked, but (perhaps?) 
it works with this inequality. If it doesn't work we need a higher order in the Padé approximation.</p> <p>Edit 06/01/2022:</p> <p>Using the nice solution due to user RiverLi, it seems we have for <span class="math-container">$0.7\leq a \leq 1$</span> and <span class="math-container">$1\leq x\leq 1.5$</span>:</p> <p><span class="math-container">$$\frac{1}{a}\cdot\frac{1+x+(x-1)a^{2}}{1+x-(x-1)a^{2}}+\frac{1}{x}\cdot\frac{1+a+(a-1)x^{2}}{1+a-(a-1)x^{2}}\geq \sqrt{2}\sqrt{r\left(a^{2}x^{2}\right)+r\left(\frac{1}{a^{2}x^{2}}\right)}$$</span></p> <p>If true and proved, it provides a partial solution.</p> <p>Edit 07/01/2022:</p> <p>Define:</p> <p><span class="math-container">$$t\left(x\right)=\left(\ln\left(\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{\frac{x}{2}}+1\right)\right)^{-1}$$</span></p> <p>As an accurate inequality we have for <span class="math-container">$x\geq 1$</span>:</p> <p><span class="math-container">$$\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{-\frac{x}{2}}\leq \left(\ln\left(\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{\frac{x}{2}}+1\right)\right)^{-1}+h(1)-t(1)$$</span></p> <p>Again it seems we have for <span class="math-container">$0&lt;x\leq 1$</span>:</p> <p><span class="math-container">$$\left(\ln\left(\left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{\frac{x}{2}}+1\right)\right)^{-\frac{96}{100}}+h(1)-(t(1))^{\frac{96}{100}}\geq \left(-\frac{1}{x}+\frac{1}{x^{2}}+1\right)^{-\frac{x}{2}}$$</span></p> <p>If true we can use the power series of <span class="math-container">$\ln(e^x+1)$</span> around zero and hope.</p> <p>Last edit 08/01/2022:</p> <p>I found a simpler one; it seems we have firstly:</p> <p>On <span class="math-container">$(0,1]$</span>:</p> <p><span class="math-container">$$\left(1+\frac{1}{x^{2}}-\frac{1}{x}\right)^{-\frac{x}{2}}-\frac{x^{2}}{x^{2}+1}-\left(1-\frac{0.5\cdot1.25x}{x+0.25}\right)\leq 0$$</span></p> <p>And on <span class="math-container">$[1,8]$</span>:</p> <p><span 
class="math-container">$$\left(1+\frac{1}{x^{2}}-\frac{1}{x}\right)^{-\frac{x}{2}}-\frac{x^{2}}{x^{2}+1}-\frac{0.5\cdot1.25x}{x+0.25}\leq 0$$</span></p> <p>Question:</p> <p>How to show or disprove <span class="math-container">$(E)$</span>?</p> <p>Thanks really.</p>
Canis Lupus
22,943
<p><img src="https://i.stack.imgur.com/b8pvI.png" alt="" /></p> <p>It's easy to find <span class="math-container">$\angle ABC=75^\circ$</span>.</p>
62,981
<p><strong>Introduction.</strong> I recently revisited Shelah's model without P-points and I was wondering how "badly" Grigorieff forcing destroys ultrafilters, i.e., what kind of properties can survive the destruction of the "ultra"ness.</p> <p><strong>An example.</strong> Given a free (ultra)filter $F$ on $\omega$, <strong>Grigorieff forcing</strong> is defined as $$ G(F) := \{ f:X \rightarrow 2: \omega \setminus X \in F \},$$ partially ordered by reverse inclusion. A simple density argument shows that <strong>"$G(F)$ destroys $F$"</strong>, i.e., the filter generated by $F$ in a generic extension is <strong>not</strong> an ultrafilter (the generic real being the culprit).</p> <p>Of course, there are many forcing notions that specifically destroy ultrafilters (also, Bartoszynski, Judah and Shelah showed that whenever there's a new real in the extension, some ground model ultrafilter was destroyed).</p> <p>My question is: </p> <p><strong>If $F$ is destroyed, how far away is $F$ from being the ultrafilter it once was?</strong> </p> <p>Maybe a more positive version: <strong>Which properties of $F$ can we destroy while preserving others?</strong> </p> <p>This might seem awfully vague, so before you vote to close let me explain what kind of answers I'm hoping for.</p> <ul> <li><strong>Positive answers.</strong> <ul> <li>If the forcing is $\omega^\omega$-bounding and $F$ is rapid, then $F$ will still be rapid. That's a very clean and simple preservation. </li> <li>In Shelah's model without P-points, all ground model Ramsey ultrafilters stop being P-points but "remain" Q-points.</li> </ul></li> <li><strong>"Minimal" answers.</strong> Is it possible that $F$ together with the generic real generates an ultrafilter, i.e., there are only two ultrafilters extending $F$? For Grigorieff forcing, I'd expect this needs at least a Ramsey ultrafilter. 
But maybe other forcings have this property?</li> <li><strong>Negative answers.</strong> Say $F$ is a P-point; can $F$ still be extended to a P-point? Shelah tells us that forcing with the full product $G(F)^\omega$ denies this. Is it known whether $G(F)$ already denies this? Do other forcing notions allow this?</li> </ul> <p>I know there is a lot of literature on <strong>preserving ultrafilters</strong> (mostly P-points, I think) but I'm more interested in the case where the ultrafilter is actually destroyed. But I'd welcome anything that sheds light on this.</p> <p>PS: community wiki, of course.</p>
Andreas Blass
6,794
<p>Here's a proof that, if $F$ is an ultrafilter and $g$ is $F$-Grigorieff-generic, then $F\cup\{g\}$ does not generate an ultrafilter in the extension. Define a real $x:\omega\to2$ (also viewed as $x\subseteq\omega$ as usual) by letting $x(n)=\sum_{k=0}^ng(k)$ modulo 2. (Technically, I should fix the obvious names for $g$ and $x$, but for simplicity let me omit the resulting dots over the letters.) Suppose, toward a contradiction, that $x$ or its complement is in the filter generated by $F\cup\{g\}$. Then there is a condition $p$ and there is a set $B\in F$ such that either (1) $p$ forces $B\cap g\subseteq x$ or (2) $p$ forces $B\cap g\subseteq\omega-x$. Fix two numbers $a&lt;b$ such that neither of them is in the domain of $p$ and such that $b\in B$. (This can be done because $B$ is in $F$ while the domain of $p$ isn't.) Now form two extensions $q$ and $q'$ of $p$ as follows. Both of them have the value 1 at $b$ (so they force $b\in g$ and therefore $b\in B\cap g$); they are both defined at $a$ but take opposite values there; and they are both defined and equal at all other numbers smaller than $b$. Then one of them forces $b\in x$ and the other forces $b\notin x$. This is absurd, as both extend $p$, which already decided between (1) (which will require $b\in x$) and (2) (which will require $b\notin x$).</p>
8,814
<p>Here is a funny exercise $$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$ (If you prove it don't publish it here please). Do you have similar examples?</p>
Community
-1
<p>The Frobenius automorphism</p> <p>$$(x + y)^p = x^p + y^p$$</p>
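The identity holds modulo a prime $p$ (the "freshman's dream"), and fails for composite moduli; a quick exhaustive check, sketched in Python with a name of my choosing:

```python
def frobenius_holds(p):
    """Check (x + y)^p == x^p + y^p in Z/pZ for all residues x, y."""
    return all(
        pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
        for x in range(p) for y in range(p)
    )

# Holds for primes...
assert all(frobenius_holds(p) for p in (2, 3, 5, 7, 11, 13))
# ...but fails in general for composite moduli, e.g. mod 4: (1+1)^4 = 0, but 1^4 + 1^4 = 2
assert not frobenius_holds(4)
```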
Andrey Rekalo
723
<p>$$\left(\sum\limits_{k=1}^n k\right)^2=\sum\limits_{k=1}^nk^3 .$$</p> <p>The two on the left is <em>not</em> a typo.</p>
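This is Nicomachus's identity, easy to confirm for many $n$ with exact integer arithmetic (an illustrative sketch):

```python
def sum_identity_holds(n):
    """Nicomachus's identity: (1 + 2 + ... + n)^2 == 1^3 + 2^3 + ... + n^3."""
    return sum(range(1, n + 1)) ** 2 == sum(k ** 3 for k in range(1, n + 1))

# Exact check for a range of n
assert all(sum_identity_holds(n) for n in range(1, 200))
```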
Community
-1
<p>$$ \infty! = \sqrt{2 \pi} $$</p> <p>It comes from the zeta function.</p>
user02138
2,720
<p>\begin{eqnarray} \sum_{i_1 = 0}^{n-k} \, \sum_{i_2 = 0}^{n-k-i_1} \cdots \sum_{i_k = 0}^{n-k-i_1 - \cdots - i_{k-1}} 1 = \binom{n}{k} \end{eqnarray}</p>
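The nested sum counts nonnegative tuples $(i_1,\dots,i_k)$ with $i_1+\cdots+i_k \le n-k$, which stars and bars gives as $\binom{n}{k}$. A recursive evaluation compared against `math.comb` (helper names are mine):

```python
from math import comb

def nested_sum(n, k):
    """Evaluate the k-fold nested sum of 1, i.e. count tuples
    (i_1, ..., i_k) of nonnegative integers with sum <= n - k."""
    def count(level, budget):
        if level == 0:
            return 1
        return sum(count(level - 1, budget - i) for i in range(budget + 1))
    return count(k, n - k)

# Stars and bars: the count equals C(n, k)
for n in range(1, 10):
    for k in range(n + 1):
        assert nested_sum(n, k) == comb(n, k)
```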
J. M. ain't a mathematician
498
<p>$$\int_0^1\frac{\mathrm{d}x}{x^x}=\sum_{k=1}^\infty \frac1{k^k}$$</p>
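This is the "sophomore's dream"; both sides can be approximated numerically. A rough midpoint-rule sketch (not a rigorous check):

```python
def integral_x_to_minus_x(steps=100_000):
    """Midpoint-rule estimate of the 'sophomore's dream' integral of x^(-x) over (0, 1)."""
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** (-x)
    return total * h

# The right-hand series converges extremely fast; 20 terms are plenty
series = sum(1.0 / k ** k for k in range(1, 20))
assert abs(integral_x_to_minus_x() - series) < 1e-6
```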
user02138
2,720
<p>Machin's Formula: \begin{eqnarray} \frac{\pi}{4} = 4 \arctan \frac{1}{5} - \arctan \frac{1}{239}. \end{eqnarray}</p>
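The formula, long used for computing digits of $\pi$ by hand, checks out to machine precision (an illustrative sketch):

```python
import math

# Machin's formula: pi/4 = 4*arctan(1/5) - arctan(1/239)
machin = 4 * math.atan(1 / 5) - math.atan(1 / 239)
assert abs(machin - math.pi / 4) < 1e-12
```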
Community
-1
<p>Well, I don't know whether to classify this as funny or surprising, but it's worth posting.</p> <ul> <li>Let $(X,\tau)$ be a topological space and let $A \subset X$. By iteratively applying the operations of closure and complementation, one can produce at most 14 distinct sets. This is known as <a href="http://en.wikipedia.org/wiki/Kuratowski%27s_closure-complement_problem">Kuratowski's closure-complement problem</a>. </li> </ul>
Alon Amit
308
<p>Let $f$ be a symbol with the property that $f^n = n!$. Consider $d_n$, the number of ways of putting $n$ letters in $n$ envelopes so that no letter gets to the right person (aka derangements). Many people initially think that $d_n = (n-1)! = f^{n-1}$ (the first object has $n-1$ legal locations, the second $n-2$, ...). The correct answer isn't that different actually:</p> <p>$d_n = (f-1)^n$. </p>
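Expanding the umbral expression, $(f-1)^n = \sum_k \binom{n}{k}(-1)^{n-k} f^k$ with $f^k \mapsto k!$, reproduces the inclusion-exclusion formula for derangements; a brute-force comparison (illustrative Python, function names are mine):

```python
from itertools import permutations
from math import comb, factorial

def derangements_umbral(n):
    """Expand (f - 1)^n with the umbral rule f^k -> k!:
    d_n = sum_k C(n,k) * (-1)^(n-k) * k!."""
    return sum(comb(n, k) * (-1) ** (n - k) * factorial(k) for k in range(n + 1))

def derangements_brute(n):
    """Count permutations of n elements with no fixed point."""
    return sum(
        all(p[i] != i for i in range(n)) for p in permutations(range(n))
    )

for n in range(8):
    assert derangements_umbral(n) == derangements_brute(n)
```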
Kirthi Raman
25,538
<p>$$\large{1,741,725 = 1^7 + 7^7 + 4^7 + 1^7 + 7^7 + 2^7 + 5^7}$$</p> <p>and</p> <p>$$\large{111,111,111 \times 111,111,111 = 12,345,678,987,654,321}$$</p>
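Both facts are easy to confirm with exact integer arithmetic (an illustrative sketch; `digit_power_sum` is a name of my choosing):

```python
def digit_power_sum(n):
    """Sum of the decimal digits of n, each raised to the number of digits."""
    digits = [int(c) for c in str(n)]
    return sum(d ** len(digits) for d in digits)

# 1,741,725 is a 7-digit number equal to the sum of the 7th powers of its digits
assert digit_power_sum(1_741_725) == 1_741_725
# and the palindromic product is exact (Python ints are arbitrary precision)
assert 111_111_111 * 111_111_111 == 12_345_678_987_654_321
```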
andreasdr
13,591
<p>$$ 10^2+11^2+12^2=13^2+14^2 $$</p> <p>There's a funny Abstruse Goose comic about this, which I can't seem to find at the moment.</p>
Felix Marin
85,343
<p>$$ \begin{array}{rcrcl} \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \vdots \\[1mm] \int{1 \over x^{3}}\,{\rm d}x &amp; = &amp; -\,{1 \over 2}\,{1 \over x^{2}} &amp; \sim &amp; x^{\color{#ff0000}{\large\bf -2}} \\[1mm] \int{1 \over x^{2}}\,{\rm d}x &amp; = &amp; -\,{1 \over x} &amp; \sim &amp; x^{\color{#ff0000}{\large\bf -1}} \\[1mm] \int{1 \over x}\,{\rm d}x &amp; = &amp; \ln\left(x\right) &amp; \sim &amp; x^{\color{#0000ff}{\LARGE\bf 0}} \color{#0000ff}{\LARGE\quad ?} \\[1mm] \int x^{0}\,{\rm d}x &amp; = &amp; x^{1} &amp; \sim &amp; x^{\color{#ff0000}{\large\bf 1}} \\[1mm] \int x\,{\rm d}x &amp; = &amp; {1 \over 2}\,x^{2} &amp; \sim &amp; x^{\color{#ff0000}{\large\bf 2}} \\[1mm] \vdots &amp; \vdots &amp; \vdots &amp; \vdots &amp; \vdots \end{array} $$</p>
Gerry Myerson
8,269
<p>$$2592=2^59^2$$ Found this in one of Dudeney's puzzle books</p>
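The curiosity can be checked, and searched for, by brute force (a small sketch; whether $2592 = 2^5 9^2$ is the only four-digit example is Dudeney's claim, not verified here):

```python
def printers_errors():
    """Search for four-digit numbers 'abcd' with a^b * c^d equal to the
    number itself (0^0 is taken to be 1, as Python's ** does)."""
    hits = []
    for n in range(1000, 10000):
        a, b, c, d = (int(ch) for ch in str(n))
        if a ** b * c ** d == n:
            hits.append(n)
    return hits

# Dudeney's example shows up in the search: 2^5 * 9^2 = 32 * 81 = 2592
assert 2592 in printers_errors()
```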
Theemathas Chirananthavat
66,404
<p>Best near miss</p> <p>$$\int_{0}^{\infty }\cos\left ( 2x \right )\prod_{n=0}^{\infty}\cos\left ( \frac{x}{n} \right )~\mathrm dx\approx \frac{\pi}{8}-7.41\times 10^{-43}$$</p> <p>One can easily be fooled into thinking that it is exactly $\dfrac{\pi}{8}$.</p> <p>References:</p> <ul> <li><a href="https://en.wikipedia.org/wiki/Mathematical_coincidence">Wikipedia</a></li> <li><a href="http://crd-legacy.lbl.gov/~dhbailey/dhbpapers/math-future.pdf">Future Prospects for Computer-Assisted Mathematics</a>, by D.H. Bailey and J.M. Borwein</li> </ul>
Joe
107,639
<p>$$ \sum_{n=1}^{+\infty}\frac{\mu(n)}{n}=1-\frac12-\frac13-\frac15+\frac16-\frac17+\frac1{10}-\frac1{11}-\frac1{13}+\frac1{14}+\frac1{15}-\cdots=0 $$ This relation was discovered by Euler in 1748 (<strong>before</strong> Riemann's studies of the $\zeta$ function as a function of a complex variable, from which this relation becomes much easier!).</p> <p>Then one of the most impressive formulas is the functional equation for the $\zeta$ function, in its asymmetric form: it highlights a very deep and clever connection between the $\Gamma$ and the $\zeta$: $$ \pi^{\frac s2}\Gamma\left(\frac s2\right)\zeta(s)= \pi^{\frac{1-s}2}\Gamma\left(\frac{1-s}2\right)\zeta(1-s)\;\;\;\forall s\in\mathbb C\;. $$</p> <p>Moreover, no one seems to have written down the Basel problem (Euler, 1735): $$ \sum_{n=1}^{+\infty}\frac1{n^2}=\frac{\pi^2}{6}\;\;. $$</p>
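Both series can be probed numerically; the sketch below (a standard Möbius sieve, with names of my choosing) shows the partial sums of $\sum\mu(n)/n$ staying small, and the Basel sum approaching $\pi^2/6$. Convergence of the Möbius sum is notoriously slow, so only smallness is asserted:

```python
import math

def mobius_sieve(limit):
    """Compute the Moebius function mu(1..limit) with a simple sieve."""
    mu = [1] * (limit + 1)
    smaller_prime_divisors = [0] * (limit + 1)
    for p in range(2, limit + 1):
        if smaller_prime_divisors[p] == 0:  # p has no smaller prime divisor: prime
            for m in range(p, limit + 1, p):
                smaller_prime_divisors[m] += 1
                mu[m] *= -1
            for m in range(p * p, limit + 1, p * p):
                mu[m] = 0  # not squarefree
    return mu

N = 100_000
mu = mobius_sieve(N)

# Euler's sum of mu(n)/n tends to 0, but very slowly; the partial sum is merely small
partial = sum(mu[n] / n for n in range(1, N + 1))
assert abs(partial) < 0.05

# The Basel problem: the partial sum of 1/n^2 is within about 1/N of pi^2/6
basel = sum(1.0 / n ** 2 for n in range(1, N + 1))
assert abs(basel - math.pi ** 2 / 6) < 1e-4
```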
2,101,059
<p>Let $d,n~$ be positive integers.</p> <p>$\int{y(1-y^d)^n}dy$</p> <p>Is it possible to solve it? If you know the method, please teach me.</p>
Claude Leibovici
82,404
<p>As Eff answered, the binomial theorem seems to be the way to do it.</p> <p>You will learn sooner or later that $$\int{y(1-y^d)^n}\,dy=\frac{1}{2} y^2 \,\, _2F_1\left(\frac{2}{d},-n;\frac{d+2}{d};y^d\right)$$ where appears the Gaussian or ordinary hypergeometric function (see <a href="https://en.wikipedia.org/wiki/Hypergeometric_function" rel="nofollow noreferrer">here</a>).</p>
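For positive integer $n$ the hypergeometric closed form agrees with the elementary antiderivative from the binomial expansion, $\sum_k \binom{n}{k}(-1)^k \frac{y^{dk+2}}{dk+2}$; a small numeric cross-check (the helper names are mine):

```python
from math import comb

def antiderivative(y, d, n):
    """Term-by-term antiderivative of y*(1 - y^d)^n from the binomial theorem:
    sum over k of C(n,k) * (-1)^k * y^(d*k + 2) / (d*k + 2)."""
    return sum(
        comb(n, k) * (-1) ** k * y ** (d * k + 2) / (d * k + 2)
        for k in range(n + 1)
    )

def numeric_integral(a, b, d, n, steps=20_000):
    """Midpoint-rule estimate of the integral of y*(1 - y^d)^n over [a, b]."""
    h = (b - a) / steps
    total = 0.0
    for i in range(steps):
        y = a + (i + 0.5) * h
        total += y * (1 - y ** d) ** n
    return total * h

for d, n in ((2, 3), (3, 2), (4, 5)):
    exact = antiderivative(0.8, d, n) - antiderivative(0.2, d, n)
    assert abs(exact - numeric_integral(0.2, 0.8, d, n)) < 1e-6
```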
314,239
<blockquote> <p>If <span class="math-container">$P_1, P_2$</span> are two Sylow <span class="math-container">$p$</span>-subgroups of the group <span class="math-container">$G$</span>, prove that <span class="math-container">$P_1 \cap P_2 = \{1\}$</span>.</p> </blockquote> <p>I tried to prove it by induction as follows: I proved it when <span class="math-container">$P_1, P_2$</span> have order <span class="math-container">$p$</span> for some prime <span class="math-container">$p$</span>, then supposed it is true when the Sylow <span class="math-container">$p$</span>-subgroup has order <span class="math-container">$p^n$</span> and supposed that there is some element in the intersection. I let <span class="math-container">$H$</span> be the subgroup generated by this element, say <span class="math-container">$x$</span>.</p> <p>I proved that <span class="math-container">$H$</span> is a normal subgroup of both <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span>, and formed the factor groups <span class="math-container">$Q_1 = P_1/H$</span> and <span class="math-container">$Q_2 = P_2/H$</span>.</p> <p>So by induction, if <span class="math-container">$h$</span> is in the intersection of <span class="math-container">$Q_1, Q_2$</span>, then <span class="math-container">$Q_1 = Q_2$</span>.</p> <p>But I couldn't determine the element in this intersection; I don't know whether this element <span class="math-container">$h$</span> must exist or not.</p> <p>I don't know what the next step is now; I need some hints to prove this statement.</p> <p>I found that the text (Dummit and Foote) uses the fact that the intersection of two Sylow <span class="math-container">$p$</span>-subgroups is the identity element, but it didn't prove this fact, so I am looking for a proof.</p>
Nicky Hekster
9,605
<p>Although, as pointed out above, the statement is not true in general, you might wonder for which groups it is true indeed. Your question can be reformulated as <strong>which p-groups appear as non-normal T.I. Sylow p-subgroups?</strong> Here T.I. stands for "trivial intersection". A trivial intersection set is one that intersects each of its conjugates fully or trivially.</p> <p>These have been extensively studied, since groups that have T.I. sets exhibit some interesting representation-theoretic behavior (see e.g. Chapter 7 of Isaacs's famous book <em>Character Theory of Finite Groups</em>). See further <a href="https://math.stackexchange.com/questions/66843/which-p-groups-can-be-sylow-p-subgroups-with-trivial-intersection?rq=1">Jack Schmidt's post</a> of September 2011 and the discussion following it.</p> <p>Another, somewhat more specialized, angle on this is asking which p-groups can be realized as a <em>Frobenius complement</em>: $G$ is a Frobenius group if and only if $G$ has a proper, non-identity subgroup $H$ (the "Frobenius complement") such that $H \cap H^g = 1$ for every $g \in G \setminus H$. It can be proved that if $H$ is a Sylow p-subgroup of $G$, it must be cyclic or generalized quaternion.</p>
2,505,971
<blockquote> <p>Let $\lim_{x\to \infty }x^n f(x)=0$ for any $n \in \mathbb{N}$. Then find the asymptote of $$g(x)=\dfrac{x^2+f(x)}{x+1+2f(x)}.$$</p> </blockquote> <hr> <p>My try: $$\lim_{x \to \infty }\dfrac{x^2+f(x)}{x+1+2f(x)}=\lim_{x \to \infty }\dfrac{x^{n+2}+x^nf(x)}{x^{n+1}+2x^nf(x)}=\infty?$$ Please help me.</p>
Community
-1
<p>$S(n)-S(n-1)=\sum_{i=1}^{n-1}iS(i)-\sum_{i=1}^{n-2}iS(i)=(n-1)*S(n-1)$</p> <p>$S(n)=nS(n-1)$</p> <p>$S(n)=1*2*\ldots*n=n!$</p>
39,828
<p>Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me do that.</p> <p>So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples:</p> <ul> <li>Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concept of a normal subgroup and, by extension, the concept of a simple group was kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations.</li> <li>The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake.</li> <li>Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own.</li> </ul> <p>The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself.</p> <p>Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. 
You can think of a natural generalisation, which you personally consider interesting.</p> <blockquote> <p>How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you?</p> </blockquote> <p>Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate that you want to study some strange condition on some obscure groups?</p> <p>Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study.</p> <p>I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. 
But one broad question behind my specific one is</p> <blockquote> <p>How much would you subscribe to the statement that EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"?</p> </blockquote> <p>Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO.</p> <hr> <p>Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true.</p> <hr> <p>Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one".</p>
Peter Arndt
733
<p>Hi Alex! </p> <p>About the second question: I think senior mathematicians don't necessarily escape the criterion of general interest, but it can become a self-fulfilling prophecy: The mere fact that a senior mathematician is studying something can raise interest in the object of study among the mathematical community - I guess people more readily grant him that he will see connections or analogies to other areas accepted as interesting. See Minhyong Kim's nice <a href="https://mathoverflow.net/questions/38639/thinking-and-explaining/38694#38694">"money in the bank"</a> comparison.</p> <p>About the first: Of course you want to study this concept you are interested in. So to make it interesting for others you could go for some introspection - what is it that you find intriguing about it? Can you pass it on to others (this is surely easier in talks than in papers)?</p> <p>It does not always have to be a big range of examples that apply to it. Maybe you feel it behaves unexpectedly well in spite of weak axioms. Maybe it clarifies that many of the facts about Y depend only on the fact that it is an X and thus improves the understanding of the well-accepted theory of Y. Maybe you have a single application where it showed up and feel that there it greatly helped to separate the algebraic content of the situation (which is strictly more than the structure of a Y) from the rest. These all seem like potentially good reasons to work on the theory of X.</p> <p>But maybe your fascination comes from the feeling that your X shows unusual behaviour for an algebraic structure. Then, spelling that out, you could find that this just reflects your prejudices about algebraic structures, which others don't have - in that case you could record this as a learning experience and do something else for publishing...</p>
39,828
Ronnie Brown
19,949
<p>It may be helpful to say how I got into groupoids. </p> <p>In the 1960s, I was writing a topology text and wanted to do the fundamental group of a cell complex, which required the van Kampen Theorem (I have now been persuaded to call this the Seifert-van Kampen theorem, as on wikipedia, so I call it SvKT). I was kind of irritated that this did not as then formulated give the fundamental group of the circle, so one had to make a detour and do all or a piece of covering space theory. </p> <p>Then I found a paper by Olum on nonabelian cohomology and van Kampen's theorem which I extended to a Mayer-Vietoris type sequence which did give the fundamental group of the circle. Unfortunately, when written out in full, it was rather boring! I then came across a paper of Philip Higgins which included the notion of free product with amalgamation of groupoids. So I decided to put in an exercise using this notion for the fundamental groupoid of a space. Then I wrote out a solution for this, and it was so much nicer than the nonabelian cohomology stuff that I decided to make the account in terms of groupoids. It still needed the key notion of the fundamental groupoid on a set $C$ of base points, written $\pi_1(X,C)$. For the circle, this needed $C$ to have 2 elements. This result appeared in the first 1968 edition, and in subsequent ones, of the book on topology, but in no other topology text in English since then. </p> <p>In 1967 I met George Mackey who told me of his work on ergodic groupoids. This persuaded me that the idea of groupoid was, or might be, more important than met the eye. </p> <p>On writing out the proof of the SvKT for groupoids maybe 5 times, it occurred to me in 1965 that the proof should generalise to higher dimensions if one had the `right' gadget generalising $\pi_1(X,C)$. This was finally found with Philip Higgins in 1974 as the fundamental double groupoid $\rho_2(X,A,C)$ of a space $X$ with subspace $A$ and set $C$ of base points. 
So we got a SvKT in dimension 2, published in 1978, and had extended this to all dimensions by 1979. Work with Chris Spencer in 1971-2 on double groupoids and crossed modules was essential as a basis for all this. </p> <p>The point I am making is that the initial aim of an improved proof of the fundamental group of the circle was very modest, but based on an aesthetic feeling, and the aim would not have got many marks for a research proposal! But in the end it opened out a new area. </p> <p>One main driving force for the higher dimensional work was the intuitions of subdividing a square into little squares, and getting the inverse to that, i.e. composing the little squares into a big one. Another problem was that of expressing the idea of commutative cubes. </p> <p>Philip Higgins told me of a remark of Philip Hall that one should try to make the algebra model the geometry, and not force it into an already known mold. I think that is what people were doing in avoiding the groupoid concept, despite its obvious nature. Indeed the idea of `change of base point' for the fundamental group is a bit like giving a railway timetable in terms of return journeys and change of start-- i.e. is bizarre. </p> <p>Perhaps the moral is that is good to look for ways of expressing intuitions in a rigorous mathematical form. And if that means building up some maths from scratch, previous to definitions, examples, theorems, proofs, as was needed in the higher dimensional work, then that is a lot of fun! (More fun than doing someone else's problem!) But it may take a long time, need lots of attempts, and searching for related ideas, and as it gets going, hard work, and in our case fruitful collaborations. </p> <p>Research students liked the idea of a big plan (what is or might be `higher dimensional group theory'?) and the attempts to pick from this something that might be doable. </p> <p>I'd better not go on about the opposition! </p> <p>Does that help? </p>
39,828
Vamsi
3,709
<p>This paper has a very nice introduction (it is on &quot;pointless topology&quot;). So apparently, one may come up with very random definitions for their own sake and hope someone &quot;applies&quot; them to more &quot;concrete&quot; problems. <a href="https://projecteuclid.org/journals/bulletin-of-the-american-mathematical-society-new-series/volume-8/issue-1/The-point-of-pointless-topology/bams/1183550014.full" rel="nofollow noreferrer">Link</a></p>
2,454,455
<p>I know this is a soft and opinion-based question and I risk that this question gets closed/downvoted, but I still wanted to know what others who are interested in mathematics think about my question.</p> <p>Whenever people are talking about the most beautiful equation/identity, Euler's identity is cited in this fashion:</p> <p>$$e^{i\pi}+1=0.$$</p> <p>While I would agree that this is a beautiful identity (see my avatar), I personally always wondered why not </p> <p>$$e^{2i\pi}-1 = 0$$</p> <p>is the most beautiful identity. It has $e$, $i$, $\pi$, $0$ and the number $2$ in it. I prefer it because the number $2$ is the first and at the same time the only even prime number. Having the prime numbers, which are in some way the atoms of mathematics, included makes this formula even more pleasant for me. The minus sign seems a little bit "negative", but the good part is that it displays the principle of inversion.</p> <blockquote> <p>So my question is, why is this not the form in which it is most often presented?</p> </blockquote>
Especially Lime
341,019
<p>The main reason is simply that the standard version gives more information. If I know $e^{i\pi}=-1$ then I can deduce $e^{2i\pi}=1$, but not the other way round (knowing $e^{2i\pi}=1$ doesn't tell me whether $e^{i\pi}$ is $+1$ or $-1$).</p>
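As a quick numerical sanity check (my own illustration, not part of the answer), Python's `cmath` confirms both identities and shows the information loss the answer describes:

```python
import cmath

# Both identities hold, up to floating-point rounding:
assert abs(cmath.exp(1j * cmath.pi) + 1) < 1e-12   # e^{i*pi} + 1 = 0
assert abs(cmath.exp(2j * cmath.pi) - 1) < 1e-12   # e^{2i*pi} - 1 = 0

# But z**2 == 1 has two roots, so knowing e^{2i*pi} = 1 alone
# cannot tell you whether e^{i*pi} is +1 or -1:
assert {z for z in (1, -1) if z ** 2 == 1} == {1, -1}
```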
2,125,297
<p><a href="https://i.stack.imgur.com/cBCJe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cBCJe.png" alt="enter image description here"></a></p> <p>I would like to know how exactly this works. I watched a Khan Academy video, "Multiplying matrices", but in that case he computed B*A and the result had 2 columns; why does this one have 3 columns and 3 rows?</p>
amd
265,466
<p>The distance between a vector and a subspace is measured along a direction orthogonal to the subspace, i.e., it’s the length of the orthogonal rejection of the vector from the subspace. So, if $\mathbf\pi_W$ is orthogonal projection onto $W$, then distance to $W$ from $\mathbf v$ is $\|\mathbf v-\mathbf\pi_W\mathbf v\|$. </p> <p>In this case, $W$ is given as the span of a pair of orthogonal vectors $\mathbf w_1=(0,0,1,1)$ and $\mathbf w_2=(1,-1,0,0)$, so $\mathbf\pi_W\mathbf v={\mathbf v\cdot\mathbf w_1\over\mathbf w_1\cdot\mathbf w_1}\mathbf w_1+{\mathbf v\cdot\mathbf w_2\over\mathbf w_2\cdot\mathbf w_2}\mathbf w_2$. I’ll leave it to you work out what $(1,1,1,1)-\mathbf\pi_W(1,1,1,1)$ is.</p>
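Here is a short, self-contained Python sketch of the projection formula above (my own illustration, not part of the answer; note that running it also reveals the final distance the answer leaves as an exercise):

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def distance_to_span(v, basis):
    """Distance from v to the span of a pairwise-orthogonal basis,
    i.e. the norm of the orthogonal rejection v - pi_W(v)."""
    proj = [0.0] * len(v)
    for w in basis:
        c = dot(v, w) / dot(w, w)                 # coefficient (v.w)/(w.w)
        proj = [p + c * wi for p, wi in zip(proj, w)]
    rej = [vi - pi for vi, pi in zip(v, proj)]    # orthogonal rejection
    return dot(rej, rej) ** 0.5

v = (1, 1, 1, 1)
w1 = (0, 0, 1, 1)
w2 = (1, -1, 0, 0)
print(distance_to_span(v, (w1, w2)))   # prints 1.4142135623730951, i.e. sqrt(2)
```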
840,211
<p>I have a line integral of a vector function: $\vec{F}=-e^{-x}\sin y\,\,\vec{i}+e^{-x}\cos y\,\,\vec{j}$. The path is a square in the $xy$ plane with vertices at $(0,0),(1,0),(1,1),(0,1)$.</p> <p>Of course it is a closed line integral, and I know the result should be zero. </p> <p>I am baffled: how can you calculate $\sin y$ or $\cos y$ where $y$ is an actual coordinate?!</p>
Mathmo123
154,802
<p>In the first case, you're being asked to show that if $n = 2^m + 1$ is prime, then $n$ must be a Fermat prime - i.e. $n$ is of the form $n = 2^{2^k} + 1$ for some $k$. So you need to show that if $n$ is prime, then $m$ is a power of 2.</p> <p>In the second case, you're being asked to show that if $n = a^m - 1$ is prime, then $n$ must be a Mersenne prime - i.e. $n$ is of the form $n = 2^{p} - 1$ for some prime $p$. So you need to show that if $n$ is prime, then $a = 2$ and $m$ is prime.</p>
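The divisibility facts behind both claims can be checked mechanically. A small Python verification of my own (not part of the answer): if $m$ has an odd divisor $d>1$ with $m=de$, then $x+1 \mid x^d+1$ with $x=2^e$, so $2^e+1$ properly divides $2^m+1$; and $a-1$ always divides $a^m-1$:

```python
# If m has an odd divisor d > 1, write m = d*e; then 2**e + 1 divides 2**m + 1,
# so 2**m + 1 prime forces m to have no odd divisor, i.e. m is a power of 2.
for m in range(2, 60):
    for d in range(3, m + 1, 2):
        if m % d == 0:
            e = m // d
            assert (2 ** m + 1) % (2 ** e + 1) == 0
            break

# Similarly a - 1 divides a**m - 1 (geometric sum), so a**m - 1 prime
# forces a = 2; and 2**d - 1 divides 2**m - 1 for d | m, so m must be prime.
for a in range(3, 10):
    for m in range(2, 10):
        assert (a ** m - 1) % (a - 1) == 0
```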
698,474
<p>I am trying to find phi(18). Using an online calculator, it says it is 6, but I'm getting four. <br/> The method I am using is to break 18 down into primes and then multiply the phi(primes):</p> <p>$$=\varphi (18)$$ $$=\varphi (3) \cdot \varphi(3) \cdot \varphi(2)$$ $$= 2 \cdot 2 \cdot 1$$ $$= 4$$</p>
Álvaro Lozano-Robledo
14,699
<p>By definition, $\varphi(18)$ is the number of elements in the set $$\{n : 1\leq n \leq 17, \text{ with } \gcd(n,18)=1\}=\{1,5,7,11,13,17\}.$$ Thus, $\varphi(18)=6$. Similarly, $\varphi(9)$ is the number of elements in the set $$\{n : 1\leq n \leq 8, \text{ with } \gcd(n,9)=1\}=\{1,2,4,5,7,8\},$$ so $\varphi(9)=6$, and not $4$.</p>
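The counting definition translates directly into code. A short Python sketch of my own (not part of the answer), which also pinpoints the error in the question: phi is multiplicative only across coprime factors, so you may not split $\varphi(9)$ into $\varphi(3)\cdot\varphi(3)$:

```python
from math import gcd

def phi(n):
    """Euler's totient for n >= 2, straight from the counting definition above."""
    return sum(1 for k in range(1, n) if gcd(k, n) == 1)

assert phi(18) == 6 and phi(9) == 6
# phi(18) = phi(2) * phi(9) because gcd(2, 9) = 1 ...
assert phi(18) == phi(2) * phi(9)
# ... but phi(9) != phi(3) * phi(3), since 3 and 3 are not coprime.
assert phi(9) != phi(3) * phi(3)
```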
105,058
<p>Consider the (cumbersome) statement: "Every integer greater than 1 can be written as a unique product of integers belonging to a certain subset $S$ of integers."</p> <p>When $S$ is the set of primes, this is the Fundamental Theorem of Arithmetic. My question is this: Are there any other sets of numbers for which this is true? </p> <p>EDIT: As the answers show, this obviously cannot be done. What if we relax the integer condition, i.e. can there be <em>any</em> other canonical representation of positive integers using complex numbers?</p>
anon
11,763
<p>If you mean that every positive integer gets a unique multiplicative factorization, then <em>no</em>, there is no other canonical representation. Why? Because then every prime number $p$ can be factorized, but the only way that's possible is if the components of the factorizations include the primes themselves. Furthermore, you can't add any other number to the list, because then the factorization of this number would be non-unique.</p> <p>Alternatively, there are non-multiplicative representations of integers. The $p$-adic representation is just writing $n$ in "base $p$": $n=a_0+a_1p+a_2p^2+\cdots+a_rp^r$. Even though the golden ratio is not a rational number, we can write integers in <a href="http://en.wikipedia.org/wiki/Golden_ratio_base" rel="noreferrer">base golden ratio</a>.</p> <hr> <p>Algebraic number theory studies <a href="http://en.wikipedia.org/wiki/Algebraic_number_field" rel="noreferrer">number fields</a> and <a href="http://en.wikipedia.org/wiki/Ring_of_integers" rel="noreferrer">rings of integers</a> beyond just $\mathbb{Q}$ and $\mathbb{Z}$. Of note, there is not necessarily unique factorization of the elements. For example, in $\mathbb{Z}[\sqrt{-5}]$, we have</p> <p>$$6=2\cdot3=(1+\sqrt{-5})(1-\sqrt{-5}).$$</p> <p>This led to some headaches (I assume anyway), until mathematicians figured out that even though the numbers don't factor uniquely, the <a href="http://en.wikipedia.org/wiki/Ideal_%28ring_theory%29" rel="noreferrer"><em>ideals</em></a> of the integers factor uniquely into products of prime ideals, which has led to other algebraic constructions based off of them, designed ultimately to study the structure of numbers. (If you don't understand this section of my answer, don't worry about it. It's for a later time then.)</p>
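The "base $p$" expansion mentioned above is easy to sketch in Python (my own illustration, not part of the answer; integer bases only, so not the golden-ratio base):

```python
def digits_base(n, p):
    """Digits a_0, a_1, ... of n = a_0 + a_1*p + a_2*p**2 + ...,
    least-significant digit first."""
    assert n >= 0 and p >= 2
    ds = []
    while n:
        n, r = divmod(n, p)
        ds.append(r)
    return ds or [0]

ds = digits_base(100, 7)                              # 100 = 2 + 0*7 + 2*49
assert ds == [2, 0, 2]
# Reconstructing n from its digits recovers the original number:
assert sum(a * 7 ** i for i, a in enumerate(ds)) == 100
```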
3,276,264
<h1>My attempt</h1> <p>Based on the sine rule and the graph of <span class="math-container">$\sin A = k a$</span> (where <span class="math-container">$k$</span> is a constant) in interval <span class="math-container">$(0,\pi)$</span>, increasing <span class="math-container">$a$</span> up to <span class="math-container">$1/k$</span> will either</p> <ul> <li>increase <span class="math-container">$A$</span> up to <span class="math-container">$90^\circ$</span>.</li> <li>decrease <span class="math-container">$A$</span> up to <span class="math-container">$90^\circ$</span>.</li> </ul> <p>So I cannot conclude that increasing <span class="math-container">$a$</span> will increase <span class="math-container">$A$</span>.</p> <p>Now I use the cosine rule (it is promising because the cosine is decreasing in the given interval).</p> <p><span class="math-container">\begin{align} A &amp;= \cos^{-1}\left(\frac{b^2+c^2-a^2}{2bc}\right)\\ B &amp;= \cos^{-1}\left(\frac{a^2+c^2-b^2}{2ac}\right)\\ C &amp;= \cos^{-1}\left(\frac{a^2+b^2-c^2}{2ab}\right)\\ \end{align}</span></p> <p>It is hard to show that <span class="math-container">$0^\circ&lt;A\leq B\leq C&lt;180^\circ$</span> for any <span class="math-container">$\triangle ABC$</span> with <span class="math-container">$0&lt;a\leq b\leq c$</span>. Could you show it?</p> <p>It means that I need to show that </p> <p><span class="math-container">$$ -1&lt;\frac{a^2+b^2-c^2}{2ab}\leq \frac{a^2+c^2-b^2}{2ac} \leq \frac{b^2+c^2-a^2}{2bc}&lt;1 $$</span></p> <p>for <span class="math-container">$0&lt;a\leq b\leq c$</span>.</p>
CY Aries
268,334
<p>We need <span class="math-container">$a\ne0$</span> and <span class="math-container">$b\ne0$</span>. If it holds, then we have</p> <p><span class="math-container">$(a^3+b^3)^2=c^6=(a^2+b^2)^3$</span></p> <p><span class="math-container">$a^6+2a^3b^3+b^6=a^6+3a^4b^2+3a^2b^4+b^6$</span></p> <p><span class="math-container">$2ab=3a^2+3b^2$</span></p> <p><span class="math-container">$(a-b)^2+2a^2+2b^2=0$</span></p> <p><span class="math-container">$a=b=0$</span></p>
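The two algebraic steps here (the expansion and the completed-square form) can be verified mechanically. A small Python check of my own (not part of the answer); since both sides of each identity are polynomials of degree at most 6, agreement on an 11 x 11 grid of integer points forces them to agree identically:

```python
from itertools import product

for a, b in product(range(-5, 6), repeat=2):
    # (a^3 + b^3)^2 - (a^2 + b^2)^3 = a^2 b^2 (2ab - 3a^2 - 3b^2)
    assert (a**3 + b**3)**2 - (a**2 + b**2)**3 == a*a*b*b*(2*a*b - 3*a*a - 3*b*b)
    # 3a^2 - 2ab + 3b^2 = (a - b)^2 + 2a^2 + 2b^2, which is 0 only at a = b = 0
    q = 3*a*a - 2*a*b + 3*b*b
    assert q == (a - b)**2 + 2*a*a + 2*b*b
    assert (q == 0) == (a == 0 and b == 0)
```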
778,294
<p>If $G$ is an open subset of $ \mathbb R$ such that $ 0 \notin G $ , then is it true that $H:=\{ xy \mid x , y \in G \}$ is an open subset of $ \mathbb R$ ?</p>
Nick Peterson
81,839
<p><strong>Hint:</strong></p> <p>There's no need for anything fancy here; you can go straight to the definition.</p> <p>Let $x,y\in G$, and consider $xy$. Because $G$ is open, you can find some $\epsilon$ so that $(y-\epsilon,y+\epsilon)\subseteq G$. Can you use that to show that $H$ contains an interval around $xy$?</p>
520,203
<p>Here's what I'm reading: every regular bipartite graph has a 1-factor.</p> <p>But I understand that not every regular graph has a 1-factor.</p> <p>So I was wondering whether it's possible to find a $k$-regular simple graph without a 1-factor for any fixed $k$.</p>
hardmath
3,111
<p>It is possible to have a $k$-regular (simple) graph with no 1-factor for each $k \gt 1$ (obviously in the trivial case $k=1$ the graph itself is a 1-factor).</p> <p>For $k$ even the complete graph on $k+1$ nodes is an example, since there are an odd number of nodes (and a 1-factor or <em>perfect matching</em> implies an even number of nodes).</p> <p>Petersen showed that any 3-regular graph with no cut-edge has a 1-factor, <a href="http://www.math.uiuc.edu/~west/pubs/ganci.pdf" rel="noreferrer">a result that has been generalized and sharpened</a>. However a 3-regular graph on 16 nodes (connected but not (vertex) 1-connected) is shown in Figure 7.3.1 of <a href="http://www.mathcs.emory.edu/~rg/book/chap7.pdf" rel="noreferrer">this book chapter</a>, about 3/4ths of the way through. <img src="https://i.stack.imgur.com/GII3W.png" alt="Cubic graph without 1-factor"></p> <p>A construction for $k$ odd that generalizes this (connected but not 1-connected) approach is described in <a href="http://mathhelpforum.com/discrete-math/159540-k-regular-simple-graph.html" rel="noreferrer">a MathHelpForum post</a>. A central vertex $v^*$ has $k$ copies of a certain odd-order (number of nodes) graph connected to it, say $G_1,\ldots,G_k$. In any perfect matching, some edge for $v^*$ must be chosen, its removal leaving some components with odd-order (in which a perfect matching becomes impossible).</p>
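A brute-force perfect-matching search (my own sketch, not part of the answer, and only practical for tiny graphs) confirms the even-$k$ example above:

```python
from itertools import combinations

def has_perfect_matching(n, edges):
    """Brute-force test: try to pair up all n vertices along edges."""
    edge_set = set(map(frozenset, edges))
    def extend(free):
        if not free:
            return True
        u = min(free)
        return any(frozenset((u, v)) in edge_set and extend(free - {u, v})
                   for v in free - {u})
    return n % 2 == 0 and extend(frozenset(range(n)))

# For k even, K_{k+1} is k-regular with an odd number of vertices,
# so no perfect matching (1-factor) exists. K_5 is the k = 4 case:
k5 = list(combinations(range(5), 2))
assert not has_perfect_matching(5, k5)

# For contrast, K_4 (3-regular, even order) does have one.
k4 = list(combinations(range(4), 2))
assert has_perfect_matching(4, k4)
```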
3,290,073
<p>I'm trying to help my daughter learn maths. She is struggling with factors, which is to work out what numbers go into a larger number (division).</p> <p>I've already learned that if the digits of a number sum to a multiple of 3, it can divide by 3. I also know the rules for 2, 5, 6, 9 and 10.</p> <p>I'm trying to see if there is a rule for 4. I'm thinking not.</p> <p><a href="https://www.quora.com/Why-does-the-divisibility-rule-for-the-number-4-work" rel="nofollow noreferrer">https://www.quora.com/Why-does-the-divisibility-rule-for-the-number-4-work</a> shows the following</p> <blockquote> <p>The divisibility rule for 4 is: in any large number, if the number formed by the digits in the tens and units places is divisible by 4, then the whole number is divisible by 4.</p> </blockquote> <p>This doesn't make sense. 56 divides by 4. However, the 2 numbers add to 11, and so can't be divided by 4.</p> <p>It may very well get a "no" answer, but is there any pattern/method I can use for determining if a number can be divided by 4 if it is less than 100 (and greater than 4)?</p>
I. Chekhov
684,502
<p>The test for divisibility by <span class="math-container">$4$</span> is: given any integer <span class="math-container">$n$</span>, consider the last two digits; if that two-digit number is divisible by <span class="math-container">$4$</span>, then so is <span class="math-container">$n$</span>.</p> <p>Example.</p> <p>Consider 96. Since <span class="math-container">$96$</span> is divisible by <span class="math-container">$4$</span>, so is <span class="math-container">$196.$</span> </p> <p>Reason: <span class="math-container">$196 = 100 + 96$</span>. The number on the left is a multiple of <span class="math-container">$100$</span> (this will always be the case, even if it is <span class="math-container">$0$</span>) and hence divisible by <span class="math-container">$4$</span>; so it suffices to consider only the number represented by the last two digits of the integer <span class="math-container">$n$</span>. </p> <p>Finally, in regards to your last question, say you had the number 8. Describing <span class="math-container">$8$</span> as <span class="math-container">$08$</span>, the test applies to single digit numbers as well. </p>
3,290,073
Zach Hunter
588,557
<p>The key is that 100 is divisible by 4. So we have:</p> <p><span class="math-container">$$12345678956 = (123456789)(100) + 56 = (123456789)(25)(4) + (14)(4) = ((123456789)(25)+14)(4)$$</span></p> <p>Therefore, if the last two digits form a number divisible by four, the whole thing is divisible by four. In fact, the remainder when the number is divided by four equals the remainder when you divide just the last two digits, because the digits from the hundreds place onward contribute a multiple of four, i.e. remainder zero.</p>
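Since 100 is a multiple of 4, the whole test collapses to checking `n % 100`. A short Python check of my own (not part of the answer):

```python
def divisible_by_4(n):
    """Divisibility-by-4 test via the last two digits (not the digit sum)."""
    return (abs(n) % 100) % 4 == 0   # works because 100 is a multiple of 4

# Agrees with direct division on an exhaustive range:
assert all(divisible_by_4(n) == (n % 4 == 0) for n in range(10000))

# 56: the digit sum 5 + 6 = 11 is irrelevant; 56 = 14 * 4 is what matters.
assert divisible_by_4(56) and divisible_by_4(12345678956)
assert not divisible_by_4(58)
```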
2,348,290
<p>The question and its answer are given in the following pictures:<a href="https://i.stack.imgur.com/xYhqD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xYhqD.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/Gv3IA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gv3IA.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/Lltko.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lltko.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/8Pf1g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Pf1g.png" alt="enter image description here"></a></p> <p>The first line in the solution is not clear to me; my questions are:</p> <p>1. Where is the interval from 0 to 1?</p> <p>2. Why did we change the x inside the floor function to n but not change the x in the power of e?</p> <p>Could anyone illustrate this for me, please? </p>
Verdruss
458,683
<p>For your first question: the first summand $\int_0^1 \lfloor x \rfloor e^{-x}\,dx$ gives $0$, because $\lfloor x \rfloor = 0$ for all $x\in [0,1)$. That's why this first integral vanishes and the sum can start at $n=1$.</p> <p>For your second question: on the interval $[n,n+1)$ we have $\lfloor x \rfloor e^{-x} = n e^{-x}$; that's why only the $x$ that is 'floored' changes to $n$, while the $x$ in the exponent stays.</p>
2,348,290
<p>The question and its answer is given in the following pictures:<a href="https://i.stack.imgur.com/xYhqD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xYhqD.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/Gv3IA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gv3IA.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/Lltko.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lltko.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/8Pf1g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Pf1g.png" alt="enter image description here"></a></p> <p>The first line in the solution is not clear for me, my questions are:</p> <p>1- where is the interval from 0 to 1 ?</p> <p>2- why we changed the x inside the floor function to n and did not change the x in the power of e ?</p> <p>Could anyone illustrate this for me please? </p>
FDP
186,817
<p>The given solution is more complicated than it needs to be.</p> <p>A way to simplify:</p> <p>$\begin{align} J&amp;=\sum_{n=1}^{\infty} \int_n^{n+1} ne^{-x}\,dx\\ &amp;=\sum_{n=1}^{\infty} \Big[-ne^{-x}\Big]_n^{n+1}\\ &amp;=\sum_{n=1}^{\infty}n\left(e^{-n}-e^{-(n+1)}\right)\\ &amp;=\sum_{n=1}^{\infty}ne^{-n}-\sum_{n=1}^{\infty}ne^{-(n+1)}\\ &amp;=\sum_{n=1}^{\infty}ne^{-n}-\sum_{n=1}^{\infty}(n+1)e^{-(n+1)}+\sum_{n=1}^{\infty}e^{-(n+1)}\\ &amp;=\sum_{n=1}^{\infty}ne^{-n}-\sum_{n=2}^{\infty}ne^{-n}+\sum_{n=1}^{\infty}e^{-(n+1)}\\ &amp;=e^{-1}+\sum_{n=1}^{\infty}e^{-(n+1)}\\ &amp;=\sum_{n=0}^{\infty}e^{-(n+1)}\\ &amp;=\frac{1}{\text{e}}\sum_{n=0}^{\infty}\left(e^{-1}\right)^n\\ &amp;=\dfrac{1}{\text{e}}\frac{1}{1-\dfrac{1}{\text{e}}}\\ &amp;=\frac{1}{\text{e}-1} \end{align}$</p>
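As a numerical sanity check (an editorial addition): each term $n\left(e^{-n}-e^{-(n+1)}\right)$ is the exact integral of $\lfloor x\rfloor e^{-x}$ over $[n,n+1]$, so the partial sums should approach $\frac{1}{\mathrm{e}-1}\approx 0.58198$.

```python
import math

def partial_sum(N):
    # sum of the exact per-interval integrals n*(e^(-n) - e^(-(n+1)))
    return sum(n * (math.exp(-n) - math.exp(-(n + 1))) for n in range(1, N + 1))

target = 1 / (math.e - 1)
```

Already `partial_sum(50)` agrees with `target` to machine precision, since the tail decays like $n e^{-n}$.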
2,348,290
<p>The question and its answer is given in the following pictures:<a href="https://i.stack.imgur.com/xYhqD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xYhqD.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/Gv3IA.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Gv3IA.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/Lltko.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lltko.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/8Pf1g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8Pf1g.png" alt="enter image description here"></a></p> <p>The first line in the solution is not clear for me, my questions are:</p> <p>1- where is the interval from 0 to 1 ?</p> <p>2- why we changed the x inside the floor function to n and did not change the x in the power of e ?</p> <p>Could anyone illustrate this for me please? </p>
Siddhartha
427,792
<p>Our task here is nothing but calculating Laplace Transform of $\lfloor t\rfloor$ at $s=1$</p> <blockquote> <p>$$\begin{align}F(s)&amp;=\displaystyle\int_0^\infty \lfloor t \rfloor e^{-st}dt\\&amp;=\displaystyle\int_0^1 \lfloor t \rfloor e^{-st}dt +\displaystyle\int_1^2 \lfloor t \rfloor e^{-st}dt +\displaystyle\int_2^3 \lfloor t \rfloor e^{-st}dt + \cdots\\&amp;=\displaystyle\sum_{n=0}^\infty n\displaystyle\int_{n}^{n+1} e^{-st}dt\\&amp;=\dfrac{1}{s}\displaystyle\sum_{n=0}^\infty n \left[ e^{-sn}-e^{-s(n+1)} \right]\\&amp;=\dfrac{1}{s}\left(1-e^{-s}\right)\displaystyle\sum_{n=0}^\infty n e^{-sn}\\&amp;=\dfrac{1}{s}\left(1-e^{-s}\right)\dfrac{e^{-s}}{\left(1-e^{-s}\right)^{2}}\\&amp;=\dfrac{1}{s\left(e^s-1\right)}\\&amp;=\dfrac{\coth\left(\dfrac{s}{2}\right)-1}{2s}\end{align}\tag*{}$$</p> </blockquote> <p>Now just put $s=1$</p> <p>Note that:</p> <p>$$\displaystyle\sum_{n=0}^{\infty} e^{-n x} = \dfrac{1}{1-e^{-x}}$$ $$\displaystyle\sum_{n=0}^{\infty}n e^{-n x}=- \dfrac{d}{dx}\left(\dfrac{1}{1-e^{-x}}\right)$$</p>
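The closed form can be checked numerically at general $s$ (a sketch of mine, not part of the original answer): truncate $F(s)=\frac1s\sum_{n\ge1} n\left(e^{-sn}-e^{-s(n+1)}\right)$ and compare it with $\frac{1}{s(e^s-1)}$.

```python
import math

def F_series(s, N=200):
    # truncated (1/s) * sum_n n*(e^(-s n) - e^(-s (n+1)))
    return sum(n * (math.exp(-s * n) - math.exp(-s * (n + 1)))
               for n in range(1, N + 1)) / s

def F_closed(s):
    return 1.0 / (s * math.expm1(s))   # 1 / (s (e^s - 1)), computed stably
```

At $s=1$ both give $\frac{1}{e-1}\approx 0.582$, the value of the original integral.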
3,244,361
<p>I need some help on two exercises from Kiselev's geometry, about straight lines.</p> <blockquote> <p>Ex 7: Use a straightedge to draw a line passing through two points given on a sheet of paper. Figure out how to check that the line is really straight. Hint: Flip the straightedge upside down.</p> </blockquote> <p>I would draw the first line, then flip the straightedge and draw the second line over the first. The two lines should coincide nicely iff the straightedge is straight. Because, this shows that there is no "unevenness" or "bumps" on the edge of the straightedge. There would be gaps between the two lines if there are "unevenness/bumps" on the edge of the straightedge.</p> <blockquote> <p>Ex 8: Fold a sheet of paper and, using ex 7, check that the edge is straight. Can you explain why the edge of a folded paper is straight?</p> </blockquote> <p>Ex 8 is marked as more difficult by the author. I'm completely clueless about this exercise.</p> <p>Please provide insights and help me with these two exercises. I'd appreciate if they are more of an "experimental approach" than theoretical because exercises 7 and 8 are arranged in between the introduction and first chapter of the book.</p> <p>Thank you. :)</p>
Jeremy Weissmann
87,206
<p>I’m late to the game but here is my argument for #8. (Incidentally, I don’t think Kiselev is suggesting to use #7 to prove #8. He says use the technique to check that the line is straight, and, separately, asks us to explain why it is true.)</p> <p>When we fold the paper, we identify each point on the bottom 'half' with a unique point on the top 'half'. (You may imagine the paper to expand as much as necessary to make this a reality! Or, you can imagine we are folding a plane.)</p> <p>Lay the paper on top of a plane and fold the paper. The edge creates a curve in the plane. Our goal is to convince ourselves that this curve is straight.</p> <p>Unfold the paper. Now the paper has a crease, which of course is the same curve as the edge. The crease divides the paper into two halves — call them 'left' and 'right'.</p> <p>Now, imagine any two points on this crease, and further imagine them connected by some curve which does <em>not</em> lie on the crease — say, a curve in either the left or right half of the paper. What can we say about such a curve?</p> <p>A curve like this cannot be straight. The reason is, folding the paper would give us a <em>congruent</em> but <em>different</em> curve on the other side of the page, and we cannot have two different straight curves connecting a pair of points.</p> <p>So we have shown that if two points on the crease are connected by a curve which travels off the crease, that curve is not straight. But we know that any two points <em>are</em> connected by a straight line, and hence that straight line lies on the crease. Applying this argument to the endpoints of the curve, we see that the whole crease/edge itself is straight.</p> <p>Obviously I've left off some details and assumptions (for example, a line congruent to a straight line is straight, etc) but I think I've managed to find the thrust of a good argument.</p> <p>It’s a wonderful problem that I’ve thought about for years.</p>
2,941,579
<p>In Taylor's series, to determine the number of terms needed to obtain the desired accuracy, sometimes one needs to solve inequalities of the form <span class="math-container">$$\frac{a^n}{n!}&lt;b,$$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are fixed positive numbers. In most textbooks in calculus, the only introduced method to solve <span class="math-container">$\frac{a^n}{n!}&lt;b$</span> for <span class="math-container">$n$</span> is trial and error. While this method works well in many cases, I feel that it is inefficient when <span class="math-container">$a$</span> is large and <span class="math-container">$b$</span> is small. (For example, how about solving <span class="math-container">$\frac{1000^n}{n!}&lt;0.01$</span>?) </p> <p>My Question: Apart from using brute force, is there another method to solve the inequality <span class="math-container">$\frac{a^n}{n!}&lt;b$</span> for <span class="math-container">$n$</span>?</p>
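One practical route (an editorial sketch, not from the question): compare logarithms, since $\frac{a^n}{n!}<b \iff n\ln a - \ln\Gamma(n+1) < \ln b$, and note that $\frac{a^n}{n!}$ decreases once $n>a$, so a simple scan past the peak finds the threshold without ever forming huge numbers.

```python
import math

def smallest_n_past_peak(a, b):
    """Smallest n > a with a**n / n! < b; the inequality then holds for all
    larger n, since the terms decrease once n > a.  Works in log space."""
    log_b = math.log(b)
    n = max(1, math.ceil(a))
    while n * math.log(a) - math.lgamma(n + 1) >= log_b:
        n += 1
    return n
```

For the example in the question, `smallest_n_past_peak(1000, 0.01)` returns an `n` near $e\cdot 1000\approx 2718$, which would be painful to find by trial multiplication.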
DanielWainfleet
254,665
<p>Let <span class="math-container">$f(x)=4x-5.$</span> The object is to prove that for any <span class="math-container">$\epsilon &gt;0$</span> we can find some <span class="math-container">$\delta &gt;0$</span> such that <span class="math-container">$|f(x)-7|&lt;\epsilon$</span> whenever <span class="math-container">$0&lt;|x-3|&lt;\delta.$</span></p> <p>In general, for a given value of <span class="math-container">$\epsilon,$</span> a value of <span class="math-container">$\delta$</span> that works will depend on <span class="math-container">$\epsilon$</span> and on the nature of the function <span class="math-container">$f.$</span> And we do not need to find the largest possible value of <span class="math-container">$\delta$</span> that will work.</p> <p>The proof shows by elementary algebra that if <span class="math-container">$\epsilon &gt;0$</span> and if <span class="math-container">$\delta=\epsilon /4$</span> then <span class="math-container">$ 0&lt;|x-3|&lt;\delta \implies |f(x)-7|&lt;\epsilon.$</span> This can be discovered, rather than confirmed, by looking at the consequences of <span class="math-container">$|x-3|&lt;\delta$</span> for <span class="math-container">$any$</span> <span class="math-container">$\delta$</span>. We have <span class="math-container">$$|x-3|&lt;\delta \implies |f(x)-7|=|(4x-5)-7|=|4x-12|=4 |x-3|&lt;4\delta.$$</span> So, given <span class="math-container">$\epsilon,$</span> if <span class="math-container">$\delta =\epsilon/4$</span> then <span class="math-container">$0&lt;|x-3|&lt;\delta \implies |f(x)-7|&lt;4\delta =\epsilon.$</span></p> <p>So letting <span class="math-container">$\delta= \epsilon/4$</span> is sufficient. And it happens to be the largest value of <span class="math-container">$\delta$</span> that will work. But we can also say that if <span class="math-container">$\delta'=\epsilon /10^{10}$</span> then <span class="math-container">$0&lt;|x-3|&lt;\delta'\implies |f(x)-7|&lt;\epsilon.$</span></p>
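The choice $\delta=\epsilon/4$ can also be spot-checked numerically (a small sketch of my own; it illustrates, but of course does not replace, the proof):

```python
def f(x):
    return 4 * x - 5

def delta_works(eps, delta, samples=1000):
    # sample points with 0 < |x - 3| < delta on both sides of 3
    for k in range(1, samples + 1):
        offset = delta * k / (samples + 1)
        for x in (3 + offset, 3 - offset):
            if not abs(f(x) - 7) < eps:
                return False
    return True
```

`delta_works(eps, eps / 4)` holds for every tested `eps`, while a careless choice such as `delta = eps` fails.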
3,096,933
<p>In studying the causes of power failures, the following data have been gathered: 10% are due to transformer damage, 75% are due to line damage, 5% involve both problems. Based on these percentages, find the probability that a given power failure involves:</p> <p>a) line damage given that there is transformer damage</p> <p>b) transformer damage given that there is line damage</p> <p>c) transformer damage but not line damage</p> <p>d) transformer damage given that there is no line damage</p> <p>e) transformer damage or line damage.</p> <p>Now, I am familiar with conditional probabilities (to some degree at least) and the first thing that came to my mind for point a) was Bayes' theorem,</p> <p>so</p> <p><span class="math-container">$ T $</span> - it involves transformer damage</p> <p><span class="math-container">$ L $</span> - it involves line damage</p> <p><span class="math-container">$ B $</span> - it involves both</p> <p><span class="math-container">$$ P(L/T)=\frac{P(L)*P(T/L)}{P(T)} $$</span></p> <p>but the problem is that I am stuck at <span class="math-container">$P(T/L)$</span>; I have no idea where to start, and maybe this approach is not even the correct one, so I would appreciate some help, maybe a hint on how to proceed...</p>
Alex
132,353
<p>Going off my hint: recall that for two events <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> (with <span class="math-container">$P(Y) \neq 0$</span>) we have</p> <p><span class="math-container">$$ P(X|Y) = \frac{P(X \cap Y)}{P(Y)} $$</span></p> <p>So in your case, you want</p> <p><span class="math-container">$$ P(L|T) = \frac{P(L \cap T)}{P(T)} $$</span></p> <p>Note that the event <span class="math-container">$L \cap T$</span> is exactly <span class="math-container">$B$</span>. So </p> <p><span class="math-container">$$P(L|T) = \frac{P(B)}{P(T)}$$</span></p>
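Spelled out for all five parts with $P(T)=0.10$, $P(L)=0.75$, $P(B)=0.05$ (an editorial summary; the same identity $P(X\mid Y)=P(X\cap Y)/P(Y)$ plus inclusion–exclusion does all the work):

```python
P_T, P_L, P_B = 0.10, 0.75, 0.05        # transformer, line, both

a = P_B / P_T                # P(L | T)        = 0.5
b = P_B / P_L                # P(T | L)        = 1/15
c = P_T - P_B                # P(T and not L)  = 0.05
d = (P_T - P_B) / (1 - P_L)  # P(T | not L)    = 0.2
e = P_T + P_L - P_B          # P(T or L)       = 0.8
```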
866,921
<p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview:</p> <p>$$3\times 4=8$$ $$4\times 5=50$$ $$5\times 6=30$$ $$6\times 7=49$$ $$7\times 8=?$$</p> <p>We have not managed to solve it so far; all we know is the solution (which was given <strong>after</strong> we had given up):</p> <blockquote class="spoiler"> <p> $224$</p> </blockquote> <p>How do we find this solution?</p>
Kevin
147,209
<p>This is what I have so far, it seems a bit more intuitive than Omran's solution.</p> <p>Based on the flip-flopping numbers, I figured the answer has to rely on the prime factorization of the numbers in question. So in particular, we see:</p> <p>$$3 \times 2^2 \Rightarrow 2$$ $$2^2 \times 5 \Rightarrow 2*5$$ $$5 \times 2 * 3 \Rightarrow 5$$ $$2 * 3 \times 7 \Rightarrow 7$$ $$7 \times 2^3 \Rightarrow 2^2*7$$</p> <p>So my initial hypothesis, which is that you took the highest prime and any primes with power greater than $1$ fails for the first equation. But it does look like a promising lead.</p>
3,534,896
<p>If <span class="math-container">${\sqrt 3} - {\sqrt 2}, 4- {\sqrt 6}, p{\sqrt 3} - q {\sqrt 2}$</span> form a geometric progression, find the values of p and q.</p> <p>So I take the second term <span class="math-container">$4-{\sqrt 6} =( {\sqrt 3} - {\sqrt 2}) (r)$</span> , where r is the common ratio.</p> <p><span class="math-container">$4-{\sqrt 6} =( {\sqrt 3} - {\sqrt 2})( 2{\sqrt3} + {\sqrt2 })$</span></p> <p>And found that the common ratio, r = <span class="math-container">$2{\sqrt3} + {\sqrt2 }$</span></p> <p>To find the third term, I multiplied the second term with the common ratio.</p> <p><span class="math-container">$(4-{\sqrt 6})( 2{\sqrt3} + {\sqrt2 })= p{\sqrt 3} - q {\sqrt 2}$</span> </p> <p><span class="math-container">$8{\sqrt 3} + 4{\sqrt2} - 6 {\sqrt 2} - 2{\sqrt 6} = p{\sqrt 3} - q {\sqrt 2}$</span> </p> <p>I am unable to proceed beyond this step. </p>
Michael Rozenberg
190,319
<p>If it is meant that <span class="math-container">$p$</span> and <span class="math-container">$q$</span> are rational, then use the following:<span class="math-container">$$(\sqrt3-\sqrt2)(p\sqrt3-q\sqrt2)=(4-\sqrt6)^2$$</span> or <span class="math-container">$$3p+2q-(p+q)\sqrt6=22-8\sqrt6.$$</span> I got <span class="math-container">$$(p,q)=(6,2).$$</span></p>
3,534,896
<p>If <span class="math-container">${\sqrt 3} - {\sqrt 2}, 4- {\sqrt 6}, p{\sqrt 3} - q {\sqrt 2}$</span> form a geometric progression, find the values of p and q.</p> <p>So I take the second term <span class="math-container">$4-{\sqrt 6} =( {\sqrt 3} - {\sqrt 2}) (r)$</span> , where r is the common ratio.</p> <p><span class="math-container">$4-{\sqrt 6} =( {\sqrt 3} - {\sqrt 2})( 2{\sqrt3} + {\sqrt2 })$</span></p> <p>And found that the common ratio, r = <span class="math-container">$2{\sqrt3} + {\sqrt2 }$</span></p> <p>To find the third term, I multiplied the second term with the common ratio.</p> <p><span class="math-container">$(4-{\sqrt 6})( 2{\sqrt3} + {\sqrt2 })= p{\sqrt 3} - q {\sqrt 2}$</span> </p> <p><span class="math-container">$8{\sqrt 3} + 4{\sqrt2} - 6 {\sqrt 2} - 2{\sqrt 6} = p{\sqrt 3} - q {\sqrt 2}$</span> </p> <p>I am unable to proceed beyond this step. </p>
Quanto
686,284
<p>Note</p> <p><span class="math-container">$$(4- {\sqrt 6})^2=({\sqrt 3} - {\sqrt 2})( p {\sqrt 3} - q {\sqrt 2})$$</span></p> <p>or,</p> <p><span class="math-container">$$22-8\sqrt6 = 3p +2q -(p+q)\sqrt6$$</span></p> <p>Therefore,</p> <p><span class="math-container">$$22= 3p +2q,\qquad p+q=8$$</span></p> <p>Solve to obtain <span class="math-container">$p=6$</span> and <span class="math-container">$q=2$</span>.</p>
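A quick numeric confirmation (an editorial addition) that $\sqrt3-\sqrt2,\; 4-\sqrt6,\; 6\sqrt3-2\sqrt2$ really is geometric, i.e. the middle term squared equals the product of its neighbours:

```python
import math

a = math.sqrt(3) - math.sqrt(2)
b = 4 - math.sqrt(6)
p, q = 6, 2
c = p * math.sqrt(3) - q * math.sqrt(2)

ratio = b / a            # common ratio, equals 2*sqrt(3) + sqrt(2)
```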
2,172,870
<p>I have two vectors $u$, and $v$, I know that $\mid u \mid$ = 3 and $\mid v \mid$ = 5, and that $u\cdot v = -12$. I need to calculate the length of the vector $(3u+2v) \times (3v-u)$.</p> <p>Because I know the dot product of the vectors I know what cosine of the angle between them is $\cos \theta = -0.8$, and also $\sin \theta = 0.6$ Using this I started calculating the components of the vectors, but got nowhere. Am I missing some sort of fast, clever way of doing this?</p>
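One fast route (an editorial sketch, not part of the original thread): by bilinearity $(3u+2v)\times(3v-u)=9\,u\times v-2\,v\times u=11\,u\times v$ (the $u\times u$ and $v\times v$ terms vanish), so the length is $11\,|u||v|\sin\theta=11\cdot 3\cdot 5\cdot 0.6=99$. Since the answer depends only on $|u|$, $|v|$ and $u\cdot v$, one concrete pair of vectors confirms it:

```python
import math

u = (3.0, 0.0, 0.0)           # |u| = 3
v = (-4.0, 3.0, 0.0)          # |v| = 5 and u.v = -12

def cross(x, y):
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

w1 = tuple(3*a + 2*b for a, b in zip(u, v))   # 3u + 2v
w2 = tuple(3*b - a for a, b in zip(u, v))     # 3v - u
length = math.sqrt(sum(t * t for t in cross(w1, w2)))   # -> 99.0
```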
TonyK
1,508
<p>No closed knight's tour is possible on a board with an odd number of squares, because each move changes the colour of the knight's square. So after an odd number of moves, you can't be back at the starting square, because it's the wrong colour.</p>
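The colour-change argument can be made concrete (an editorial addition): colour a square by the parity of row + column; every one of the eight knight moves displaces the knight by an odd total, so each move flips the colour.

```python
# the eight knight moves; each changes row+column by an odd amount
KNIGHT_MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
                (1, -2), (2, -1), (-1, -2), (-2, -1)]

def colour(square):
    r, c = square
    return (r + c) % 2   # 0 = one colour, 1 = the other

# hence after an odd number of moves the knight sits on the opposite
# colour, and a closed tour on a board with an odd number of squares
# would require exactly that.
```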
3,041,841
<p>Let <span class="math-container">$f(z)$</span> be holomorphic function on an open unit disk such that <span class="math-container">$\lim_{z \to 1} f(z)$</span> doesn't exists. Let <span class="math-container">$f(z) = \Sigma_{i=1}^{\infty} a_iz^i$</span> be its Taylor series around <span class="math-container">$0$</span>. Then the radius of convergence of <span class="math-container">$f(z)$</span> is?</p> <p>Option </p> <p>1) <span class="math-container">$R = 0$</span></p> <p>2) <span class="math-container">$0 &lt; R &lt; 1$</span></p> <p>3) <span class="math-container">$R = 1$</span></p> <p>4) <span class="math-container">$R &gt; 1$</span>.</p> <p>My attempt:</p> <p>I considered the function <span class="math-container">$f(z) = \frac{1}{1 - z} = \Sigma_{i = 0}^{\infty}z^i$</span>. This satisfies the above hypothesis, and its radius of convergence is <span class="math-container">$R &lt; 1$</span>, hence option 2) is correct. But the answer key says option 4) is correct? So can anyone explain me the reason?</p> <p>Reference: CSIR NET DEC 2017 Qno 34, Paper A <a href="http://csirhrdg.res.in/mathA_Dec2017.pdf" rel="nofollow noreferrer">http://csirhrdg.res.in/mathA_Dec2017.pdf</a> <a href="http://csirhrdg.res.in/Mathkey_Dec2017.pdf" rel="nofollow noreferrer">http://csirhrdg.res.in/Mathkey_Dec2017.pdf</a></p>
José Carlos Santos
446,262
<p>Your approach is wrong, because the radius of convergence of <span class="math-container">$\sum_{n=0}^\infty z^n$</span> is <span class="math-container">$1$</span>.</p> <p>The Taylor series about <span class="math-container">$a$</span> of a holomorphic function <span class="math-container">$f$</span> always converges on any disk <span class="math-container">$D(a,r)$</span> contained in the domain of <span class="math-container">$f$</span>. Therefore, the radius of convergence of the Taylor series of <span class="math-container">$f$</span> centered at <span class="math-container">$0$</span> is <em>at least</em> <span class="math-container">$1$</span>. If it were greater than one, we would have<span class="math-container">\begin{align}\sum_{n=0}^\infty a_n&amp;=\lim_{z\to1}\sum_{n=0}^\infty a_nz^n\\&amp;=\lim_{z\to1}f(z).\end{align}</span>But this is impossible, since this last limit doesn't exist. Therefore, the radius of convergence is <span class="math-container">$1$</span>.</p>
3,858,966
<blockquote> <p>Let <span class="math-container">$A= \{(x,y,z) \in \Bbb R^3 \vert x+y&lt;z &lt; x^2+y^2 \}$</span>. Show that <span class="math-container">$A$</span> is an open set in <span class="math-container">$\Bbb R^3$</span> defined by the Euclidean metric.</p> </blockquote> <p>So <span class="math-container">$A$</span> can be written as <span class="math-container">$A = \{(x,y,z) \in \Bbb R^3 \vert x+y-z&lt;0, x^2+y^2-z&gt;0 \} = \{(x,y,z) \in \Bbb R^3 \vert x+y-z&lt;0\} \cap \{(x,y,z) \in \Bbb R^3 \vert x^2+y^2-z&gt;0\}$</span>.</p> <p>Now the book I'm reading solved this by defining <span class="math-container">$f,g : \Bbb R^3 \to \Bbb R$</span>, <span class="math-container">$f(x,y,z) = x+y-z$</span> and <span class="math-container">$g(x,y,z) = x^2+y^2-z$</span>. Showing that these two functions are continuous seemed to imply that <span class="math-container">$A$</span> is open? I'm not yet on the chapter that introduces continuity in metric spaces, so I was wondering if there's any other way to show that <span class="math-container">$A$</span> would be open? I know the definition of continuity in metric spaces, but here they used some projections, etc. which I'm not familiar with yet.</p>
Henno Brandsma
4,280
<p>The method using continuous functions is by far the least messy way to solve such problems.</p> <p>Define <span class="math-container">$f(x,y,z) = x+y-z$</span> and <span class="math-container">$g(x,y,z) = x^2+y^2-z$</span>. Then <span class="math-container">$x+y &lt; z$</span> can be described as <span class="math-container">$f(x,y,z) &lt; 0$</span> and <span class="math-container">$z &lt; x^2+y^2$</span> as <span class="math-container">$0 &lt; g(x,y,z)$</span>. So</p> <p><span class="math-container">$$A = f^{-1}[(\leftarrow, 0)] \cap g^{-1}[(0,\rightarrow)]$$</span> which is the intersection of two open sets (as <span class="math-container">$f,g$</span> are continuous) and thus open (axiom of topology). We do use that projections are continuous, which is pretty easy.</p> <p>You could also show that in any ordered space (like <span class="math-container">$\Bbb R$</span> is) the set <span class="math-container">$\{(x,y,z) \mid f(x,y,z) &lt; g(x,y,z)\}$</span> is open for any <span class="math-container">$f,g: \Bbb R^3 \to \Bbb R$</span> that are continuous, and apply it twice for functions <span class="math-container">$x+y$</span>, <span class="math-container">$z$</span> and <span class="math-container">$x^2+y^2$</span> on <span class="math-container">$\Bbb R^3$</span>. Now we use that <span class="math-container">$\Bbb R$</span> is a group instead. But using that addition and squaring is continuous is the most economical: if you'd give a full <span class="math-container">$\epsilon$</span>-proof from first principles you'd be taking much longer.</p>
3,912,635
<p>I am currently trying to understand the proof of Proposition 4.3.18 in Pedersen's Analysis now, which reads</p> <blockquote> <p>To each Tychonoff space <span class="math-container">$X$</span> there is a Hausdorff compactification <span class="math-container">$\beta(X)$</span>, with the property that every continuous function <span class="math-container">$\Phi: X \to Y$</span>, where <span class="math-container">$Y$</span> is a compact Hausdorff space, extends to a continuous function <span class="math-container">$\beta \Phi: \beta(X) \to Y$</span>.</p> </blockquote> <p>The proof starts by noting that <span class="math-container">$C_b(X)$</span> is a commutative unital C<span class="math-container">$^*$</span>-algebra, and is therefore isometrically isomorphic to a (commutative and unital) C<span class="math-container">$^*$</span>-algebra of the form <span class="math-container">$C(\beta(X))$</span>, where <span class="math-container">$\beta(X)$</span> is a compact Hausdorff space.</p> <p>By the Gelfand duality between the category of commutative and unital C<span class="math-container">$^*$</span>-algebras and the category of compact Hausdorff spaces, we can take <span class="math-container">$\beta(X) = \Omega(C_b(X))$</span>, the space of characters on <span class="math-container">$C_b(X)$</span>.</p> <p>Then we can define a map <span class="math-container">$\iota: X \to \beta(X)$</span>, where <span class="math-container">$\iota(x)(\phi) := \phi(x)$</span> for all <span class="math-container">$x \in X$</span> and <span class="math-container">$\phi \in \beta(X)$</span>.</p> <p>The particular part of the proof that I am struggling to understand is the proof that <span class="math-container">$\iota(X)$</span> is dense in <span class="math-container">$\beta(X)$</span>.</p> <p>He argues that if <span class="math-container">$\iota(X)$</span> is not dense in <span class="math-container">$\beta(X)$</span>, then there is a non-zero continuous map <span 
class="math-container">$f: \beta(X) \to \mathbb{C}$</span> vanishing on <span class="math-container">$\iota(X)$</span>. This I understand. He then says that under the identification <span class="math-container">$C_b(X) = C(\beta(X))$</span>, this is impossible. This is the sentence I am stuck on. Why is it impossible under this identification?</p> <p>We have that <span class="math-container">$C_b(X)$</span> is isometrically isomorphic to <span class="math-container">$C(\Omega(C_b(X)))$</span> via the map <span class="math-container">$\delta: g \mapsto (\delta_g: \Omega(C_b(X)) \to \mathbb{C}, \phi \mapsto \phi(g))$</span>. I am pretty sure what Pedersen is getting at is that the map <span class="math-container">$\delta^{-1}(f)$</span> is zero, but I am not able to show that this is the case. <a href="https://math.stackexchange.com/questions/260794/stone-%C4%8Cech-via-c-bx-cong-c-beta-x">This answer</a> also claims that a similar map is zero.</p> <p>In summary, my question is:</p> <blockquote> <p>Can we show that <span class="math-container">$\iota(X)$</span> is dense in <span class="math-container">$\beta(X)$</span> by showing that <span class="math-container">$\delta^{-1}(f) = 0$</span>? If so, how do we do this?</p> </blockquote>
QuantumSpace
661,543
<p>Recently, I wrote all this out in detail for myself, so here I share my notes with you. Note that the assumption that <span class="math-container">$X$</span> is Tychonoff can be omitted. The construction works for every topological space. The Tychonoff assumption is only there to ensure that the canonical inclusion is injective.</p> <p>Recall that if <span class="math-container">$A$</span> is a commutative <span class="math-container">$C^*$</span>-algebra, then we can consider the space of characters <span class="math-container">$\Omega(A)$</span>. If <span class="math-container">$A$</span> is a unital <span class="math-container">$C^*$</span>-algebra, then this becomes a compact Hausdorff space for the weak<span class="math-container">$^*$</span>-topology. Note that we have a natural map <span class="math-container">$$i_X: X \to \Omega(C_b(X)): x \mapsto \text{ev}_x.$$</span> Clearly this is a continuous map, as an easy argument with nets shows.</p> <p><strong>Lemma</strong>: The map <span class="math-container">$i_X$</span> has dense image.</p> <p><strong>Proof</strong>: Assume to the contrary that <span class="math-container">$\overline{i_X(X)}\subsetneq \Omega(C_b(X))$</span>. Then Urysohn's lemma applied to the compact Hausdorff space <span class="math-container">$\Omega(C_b(X))$</span> gives a non-zero continuous function <span class="math-container">$f: \Omega(C_b(X))\to \mathbb{C}$</span> that is zero on <span class="math-container">$i_X(X)$</span>. Consider the canonical isomorphism <span class="math-container">$$\Psi: C_b(X) \to C(\Omega(C_b(X))): \omega \mapsto \text{ev}_\omega.$$</span> Choose <span class="math-container">$\omega \in C_b(X)$</span> with <span class="math-container">$\text{ev}_\omega = f$</span>. 
Then for all <span class="math-container">$x \in X$</span>, we have <span class="math-container">$$\omega(x) = \text{ev}_x(\omega) = \text{ev}_\omega(\text{ev}_x) = f(i_X(x)) = 0$$</span> so <span class="math-container">$\omega = 0$</span>, which is a contradiction. <span class="math-container">$\quad \square$</span></p> <p><strong>Theorem</strong>: If <span class="math-container">$X$</span> is a topological space, then <span class="math-container">$(\Omega(C_b(X)), i_X)$</span> is a Stone-Čech compactification of <span class="math-container">$X$</span>.</p> <p><strong>Proof</strong>: Let <span class="math-container">$K$</span> be a compact Hausdorff space and let <span class="math-container">$f: X \to K$</span> be a continuous map. This induces a <span class="math-container">$*$</span>-morphism <span class="math-container">$$C(f): C(K) \to C_b(X): g \mapsto g \circ f$$</span> and this then induces a continuous map <span class="math-container">$$\Omega(C(f)): \Omega(C_b(X)) \to \Omega(C(K)): \chi \mapsto \chi \circ C(f)$$</span> Consider the homeomorphism <span class="math-container">$$i_K: K \to \Omega(C(K)): k \mapsto \text{ev}_k.$$</span></p> <p>Then we define the continuous map <span class="math-container">$F:= i_K^{-1}\circ \Omega(C(f)): \Omega(C_b(X)) \to K$</span>. Moreover, we have <span class="math-container">$F\circ i_X= f$</span>. 
Indeed, if <span class="math-container">$x \in X$</span>, then <span class="math-container">$$i_K(F \circ i_X(x)) = i_K (F(\text{ev}_x)) = \Omega(C(f))(\text{ev}_x) = \text{ev}_x \circ C(f)= \text{ev}_{f(x)}= i_K(f(x))$$</span> so that by injectivity of <span class="math-container">$i_K$</span> we obtain <span class="math-container">$F \circ i_X = f$</span>.</p> <p>The condition <span class="math-container">$F \circ i_X = f$</span> determines <span class="math-container">$F$</span> uniquely on <span class="math-container">$i_X(X)$</span>, which is dense in <span class="math-container">$\Omega(C_b(X))$</span> by the preceding lemma. Thus <span class="math-container">$F$</span> is unique. <span class="math-container">$\quad \square$</span></p>
792,813
<p>Let A be a random variable defined as:</p> <ul> <li>With probability $p[i]$, the random variable $B[i]$ is drawn</li> <li>$B[i] \sim N(\mu[i],\sigma[i])$</li> <li>probabilities $p[i]$ sum up to one</li> </ul> <p>I know how to compute the mean, which is given by:</p> <p>$$E[A] = p[1]\,\mu[1] + \dots + p[N]\,\mu[N]$$</p> <p>I would like to know how to compute the variance.</p> <p><img src="https://i.stack.imgur.com/J94Al.png" alt="Tree random variable"></p>
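For reference (an editorial addition): the standard route is the law of total variance. Since $E[A^2]=\sum_i p_i\,(\sigma_i^2+\mu_i^2)$, the variance of the mixture is $\operatorname{Var}[A]=\sum_i p_i(\sigma_i^2+\mu_i^2)-\left(\sum_i p_i\mu_i\right)^2$. A sketch:

```python
def mixture_mean_var(p, mu, sigma):
    """Mean and variance of a mixture: with prob p[i], draw from N(mu[i], sigma[i])."""
    mean = sum(pi * mi for pi, mi in zip(p, mu))
    second_moment = sum(pi * (si ** 2 + mi ** 2) for pi, mi, si in zip(p, mu, sigma))
    return mean, second_moment - mean ** 2
```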
Claude Leibovici
82,404
<p>You can also expand the integrand as a Taylor series built around $x=0$ starting with $$\cos(y)=1-\frac{y^2}{2}+\frac{y^4}{24}+O\left(y^6\right)$$ Now, replace $y$ by $x^2$ and multiply the result by $x^3$. So, the integrand is $$\cos(x^2)\,x^3=x^3-\frac{x^7}{2}+O\left(x^{11}\right)$$ Integrate between $0$ and $z$ to get $$ \int_{0}^{z} \cos(x^2)\,x^3\,dx \simeq \frac{z^4}{4}-\frac{z^8}{16}$$ I'll let you finish.</p>
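To finish and check the computation (an editorial addition): the substitution $u=x^2$ gives the exact value $\int_0^z \cos(x^2)\,x^3\,dx=\tfrac12\left(z^2\sin z^2+\cos z^2-1\right)$, against which the two-term approximation can be compared.

```python
import math

def exact(z):
    # (1/2) * (u sin u + cos u - 1) with u = z^2, from the substitution u = x^2
    u = z * z
    return 0.5 * (u * math.sin(u) + math.cos(u) - 1.0)

def two_terms(z):
    return z ** 4 / 4 - z ** 8 / 16
```

At $z=0.3$ the two agree to roughly eight decimal places; the first neglected term is of order $z^{12}$.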
264,595
<p>I've been trying to find an asymptotic expansion of the following series</p> <p>$$C(x) = \sum\limits_{n=1}^{\infty} \frac{x^{2n+1}}{n!{\sqrt{n}} }$$</p> <p>and</p> <p>$$L(x) = \sum\limits_{n=1}^{\infty} \frac{x^{2n+1}}{n(n!{\sqrt{n}}) }$$</p> <p>around $+\infty$, in the form</p> <p>$$\exp(x^2)\Big(1+\frac{a_1}{x}+\frac{a_2}{x^2} + \dots +\frac{a_k}{x^k}\Big) + O\Big(\frac{\exp(x^2)}{x^{k+1}}\Big)$$</p> <p>where $x$ is a positive real number. As far as I progressed, I obtained only</p> <p>$$C(x) = \exp(x^2) + \frac{\exp(x^2)}{x} + O\Big(\frac{\exp(x^2)}{x}\Big).$$</p> <p>I tried to use ideas from <a href="https://math.stackexchange.com/questions/484367/upper-bound-for-an-infinite-series-with-a-square-root?rq=1">https://math.stackexchange.com/questions/484367/upper-bound-for-an-infinite-series-with-a-square-root?rq=1</a>, <a href="https://math.stackexchange.com/questions/115410/whats-the-sum-of-sum-limits-k-1-infty-fractkkk">https://math.stackexchange.com/questions/115410/whats-the-sum-of-sum-limits-k-1-infty-fractkkk</a>, <a href="https://math.stackexchange.com/questions/378024/infinite-series-involving-sqrtn?noredirect=1&amp;lq=1">https://math.stackexchange.com/questions/378024/infinite-series-involving-sqrtn?noredirect=1&amp;lq=1</a>, but I was unable to make them work in my case. </p> <p>Any suggestions would be greatly appreciated!</p> <p>(If someone is well versed in this kind of thing: are there any specific names for $C(x)$ and $L(x)$?)</p> <p>PS:</p> <p>This question was asked on the math.SE but was closed as a duplicate of <a href="https://math.stackexchange.com/questions/2117742/lim-x-rightarrow-infty-sqrtxe-x-left-sum-k%ef%bc%9d1-infty-fracxk/2123100#2123100">https://math.stackexchange.com/questions/2117742/lim-x-rightarrow-infty-sqrtxe-x-left-sum-k%ef%bc%9d1-infty-fracxk/2123100#2123100</a>. However, the latter question provides only the first term of the asymptotic expansion and does not sufficiently address the problem considered here.</p>
Johannes Trost
37,436
<p>This is another, totally different (and correct!) approach for answering the question. It is simply too long for a comment, so I decided to write it in a new answer. (That might look odd, but my previous answer, though accepted, is wrong.)</p> <p>First define $$ I_{\mu}(y) = y^{1/2} \sum_{n=1}^{\infty} \ \frac{y^{n}}{n! \ n^{\mu}}, $$ and observe that the OP's functions are $$ C(x)= I_{\frac{1}{2}}(x^{2}) $$ and $$ L(x)=I_{\frac{3}{2}}(x^{2}). $$ Let $m$ be the integer part of $\mu$ and $\xi$ the fractional part, $0&lt;\xi&lt;1$.</p> <p>Replace $n^{-\mu}$ in the definition of $I_{\mu}(y)$ by a ratio of $\Gamma$-functions times an asymptotic series in $n$ using <a href="http://dlmf.nist.gov/5.11#E13" rel="nofollow noreferrer">this</a> formula; for the coefficients of the asymptotic series in $n$, use the Norlund polynomials, $B_k^{(\alpha)}(x)$, found <a href="http://dlmf.nist.gov/5.11#E17" rel="nofollow noreferrer">here</a>. They are available in Mathematica via $\mathtt{NorlundB[k,\alpha,x]}$. Concretely, $$ n^{-\xi}\sim \frac{\Gamma(n)}{\Gamma(n+\xi)}\sum_{k=0}^{\infty}n^{-k} {\xi \choose k} B_k^{(1+\xi)}(\xi). $$</p> <p>Now exchange the summation of the asymptotics in $n$ (with, say, summation index $k$) and the summation over $n$, which I light-heartedly assume to be possible. The (now inner) sum over $n$ results in generalized hypergeometric functions of the form $$ _{m+k+2}F_{m+k+2}\left(\left. {1,\dots,1}\atop{2,\dots,2,1+\xi} \right\vert y\right), $$ with $m+k+2$ 1s in the upper line and $k+m+1$ 2s in the lower line. To get there one has to shift the summation index such that summation starts at $n=0$. Then insert $n+1=\frac{(2)_{n}}{(1)_{n}}$, with the usual Pochhammer symbols used. 
The defining formula for generalized hypergeometric functions results.</p> <p>Using the asymptotic expansion of the generalized hypergeometric function for $y\rightarrow \infty$ given in a paper by Volkmer and Wood (downloadable from <a href="https://www.researchgate.net/profile/Hans_Volkmer/publication/263796676_A_note_on_the_asymptotic_expansion_of_generalized_hypergeometric_functions" rel="nofollow noreferrer">here</a>) and after some simplifications one arrives at the asymptotic formula $$ I_{m+\xi}(y)=e^{y}\ y^{\frac{1}{2}-m-\xi}\ \frac{\xi\ \sin \pi\xi}{\pi}\sum_{k=0}^{\infty}(-1)^{k+1} y^{-k} \frac{\Gamma(k-\xi)}{k!}\ B_{k}^{(1+\xi)}(\xi) \\ \sum_{s=0}^{\infty} y^{-s} \left\{ {\sum_{(s_{1},\dots,s_{m+k+1})}} \frac{\Gamma(\xi + s_{m+k+1})}{s_{m+k+1}!}\prod_{j=1}^{m+k+1} \frac{\Gamma\left(j+\sum_{i=1}^{j}s_{i}\right)}{\Gamma\left(j+\sum_{i=1}^{j-1}s_{i}\right)}\right\}. $$ $(s_{1},\dots ,s_{m+k+1})$ under the sum sign indicates summation over all (ordered) partitions of $s$ into $m+k+1$ non-negative integers, $s_{1},\dots ,s_{m+k+1}$. Order matters here, i.e., $(1,0)$ is different from $(0,1)$.</p> <p>Numerical calculations of the coefficients (with highest possible precision on my laptop) show excellent match (5 or more digits) with the formula, even for exotic indices, like $\mu=\pi$ and orders up to $y^{-6}$.</p> <p>For the OP's functions I get: $$ C(x) e^{-x^{2}} = 1 + \frac{3}{8} x^{-2} + \frac{65}{128} x^{-4} + \frac{1225}{1024} x^{-6} + \frac{131691}{32768} x^{-8} + O(x^{-10}) , $$ $$ L(x) x^{2} e^{-x^{2}} = 1 + \frac{15}{8} x^{-2} + \frac{665}{128} x^{-4} + \frac{19845}{1024} x^{-6} + \frac{2989371}{32768} x^{-8} + O(x^{-10}) . $$</p> <p>All calculations were done with Mathematica 11. </p>
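The expansions above can be spot-checked numerically. Below is a sketch using Python's `mpmath` library: it truncates the raw series for $C(x)$ at a finite index (400 terms is far past the peak near $n \approx x^2$ for $x=10$) and compares $C(x)e^{-x^2}$ against the asymptotic coefficients derived in the answer.

```python
from mpmath import mp

mp.dps = 60  # the raw series is of size ~e^{x^2}, so use high precision

def C(x, nmax=400):
    # C(x) = sum_{n>=1} x^{2n+1} / (n! sqrt(n)), truncated at nmax
    return sum(mp.mpf(x)**(2*n + 1) / (mp.factorial(n) * mp.sqrt(n))
               for n in range(1, nmax + 1))

x = mp.mpf(10)
ratio = C(x) * mp.exp(-x**2)

# asymptotic series through x^{-8}, with the coefficients stated above
asym = (1 + mp.mpf(3)/8 / x**2 + mp.mpf(65)/128 / x**4
        + mp.mpf(1225)/1024 / x**6 + mp.mpf(131691)/32768 / x**8)

print(ratio, asym)  # should agree up to the O(x^{-10}) error term
```

At $x=10$ the two values agree to roughly eight digits, consistent with an $O(x^{-10})$ remainder.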
812,263
<p>To further explain the title:</p> <p>Is there a probabilistic reason as to why a 6-sided die has the opposing sides summing to 7?</p> <p>My argument began when a friend decided to use <a href="http://ecx.images-amazon.com/images/I/51OvRGphnOL.jpg" rel="nofollow">this die</a> instead of <a href="http://nerdywithchildren.com/wp-content/uploads/2013/05/d20.jpg" rel="nofollow">this die</a>.</p> <p>I understand that having 20 sides, each is as likely to come up, but does the different pattern affect the subsequent rolls?</p> <p>Thanks in advance!</p>
tpb261
125,795
<p>If the dice are unbiased, there is no reason for choosing any particular arrangement. You could as well pull a card from a deck of six. No difference.</p>
1,621,989
<p>If we have any two co-prime positive integers <em>x</em> and <em>y</em>, does there always exist a positive integer <em>C</em> such that all integers greater than <em>C</em> can be expressed as <em>Ax</em>+<em>By</em> where <em>A</em> and <em>B</em> are also non-negative integers?</p> <p>Do we have a formula to calculate the largest non-expressable integer (i.e. <em>C</em>-<em>1</em>) in such a case?</p> <p>EDIT: <em>A</em> and <em>B</em> are non-negative, not necessarily positive. Either one of them can be 0.</p>
Slade
33,433
<p>This is sometimes called the <a href="https://en.wikipedia.org/wiki/Coin_problem" rel="nofollow">coin problem</a>, and the answer for two coins of relatively prime denominations $a$ and $b$ is $ab-a-b$, when we are allowed to use zero of either coin.</p> <p>To find the answer when the coefficients are required to be positive, we simply subtract one coin of each type, giving an answer of $(xy-x-y)+(x+y)=xy$ for the largest number that cannot be so expressed.</p> <p>For example, if $x=3$, $y=4$, then we can verify that $12$ has no expression as the sum of positive multiples of $3$ and $4$, but $13=3\cdot 3 + 1\cdot 4$, $14=2\cdot 3 + 2\cdot 4$, $15=1\cdot 3 + 3\cdot 4$. Since $13,14,15$ can be expressed in this form, we can get any positive integer $&gt; 12$ by adding multiples of $3$.</p>
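Both bounds are easy to confirm by brute force. This is a sketch: it only searches up to $xy + x + y$, which suffices because everything above the relevant Frobenius number is representable.

```python
def largest_unrepresentable(x, y, positive=False):
    """Largest n (searched up to x*y + x + y) not expressible as A*x + B*y."""
    lo = 1 if positive else 0          # smallest allowed coefficient
    limit = x * y + x + y
    reachable = {a * x + b * y
                 for a in range(lo, limit // x + 1)
                 for b in range(lo, limit // y + 1)}
    return max(n for n in range(limit + 1) if n not in reachable)

print(largest_unrepresentable(3, 4))        # nonnegative coeffs: 3*4 - 3 - 4 = 5
print(largest_unrepresentable(3, 4, True))  # positive coeffs:    3*4 = 12
```

The same function reproduces, e.g., $5\cdot 7 - 5 - 7 = 23$ for the coin pair $(5,7)$.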
1,934,259
<blockquote> <p><strong>Problem.</strong></p> <p>Let <span class="math-container">$\emptyset \subset A\subset X$</span> and <span class="math-container">$\emptyset \subset B\subset Y$</span>. If <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> are connected, show that <span class="math-container">$(X\times Y)\setminus (A\times B)$</span> is also connected by <strong>using the criterion of connectedness that if every continuous function <span class="math-container">$f:X\to \{\pm1\}$</span> is constant, then <span class="math-container">$X$</span> is connected</strong>.</p> </blockquote> <p>I began by assuming that there exists a function <span class="math-container">$f:(X\times Y)\setminus (A\times B)\to\{\pm1\}$</span> which is continuous but not constant, but I couldn't proceed any further than that.</p>
Tsemo Aristide
280,301
<p>Let $p:X\times Y\rightarrow (X\times Y)/(A\times B)$ be the quotient map; it is surjective. Consider a continuous function $f:(X\times Y)/ (A\times B)\to\{\pm1\}$. $f\circ p$ is continuous; since $X\times Y$ is connected, $f\circ p$ is constant. Since $p$ is surjective, $f$ is constant.</p>
688,577
<p>A beautiful polyhedron with 20 hexagons and 60 pentagons can be seen here: <a href="http://robertlovespi.wordpress.com/2013/11/03/a-polyhedron-with-80-faces/" rel="nofollow">http://robertlovespi.wordpress.com/2013/11/03/a-polyhedron-with-80-faces/</a> . Euler formula and the corresponding Diophantine equation give a smaller possible combination: 7 hexagons and 20 pentagons adjacent by two to hexagons' vertices and by five in the pentagonal vertices. Does such a polyhedron really exist? I doubt but my only argument is "I was not able to compose it". At the same time I do know that the non-existence of polyhedra permitted by Euler equation is not elementary (cf. the not-existing polyhedron with 12 pentagons and 1 hexagon only). </p>
Barry Cipra
86,747
<p>Here's a kludgy solution. Start with a "hat box" with pentagons for top and bottom and $5$ square sides. Inside each square side draw a pentagon, and connect four of the vertices of that pentagon to the four corners of the square. In two of those connecting segments from opposite vertices of the square, draw an extra vertex. You'll see that each square now consists of $4$ pentagons and $1$ hexagon. This gives you a total so far of $2+5\cdot4=22$ pentagons and $5\cdot1=5$ hexagons. To get the numbers you want, take any two pentagons that share an edge and draw an extra vertex on that edge.</p>
2,003,660
<p>I got this exercise from the textbook Book of Proof, CH4 E12. I've tackled this problem in the following manner:</p> <p>Suppose $x$ is a real number and $0 &lt; x &lt; 4$, it follows that,</p> <p>\begin{align*} &amp;\Rightarrow 0 - 2 &lt; x - 2 &lt; 4 - 2 \\ &amp;\Rightarrow 4 &lt; (x - 2)^2 &lt; 4\\ &amp;\Rightarrow 0 \leq (x - 2)^2 &lt; 4 \end{align*}</p> <p>Since, $x(4 - x) = 4x - x^2 = 4 - (x - 2)^2$, then</p> <p>$$\dfrac{4}{x(4 - x)} = \dfrac{4}{4 - (x - 2)^2}.$$</p> <p>This expression is greater or equal to $1$ for $0 \leq (x - 2)^2 &lt; 4$. Thus,</p> <p>$$\dfrac{4}{x(4 - x)} \geq 1.$$</p> <p>I'm quite new to proof technique and I'm using this book to self-learn logic and proof writing. My question is: is the solution stated above logically sound? Would my arguments be considered sufficient to prove that $P \Rightarrow Q$?</p>
Benson Lin
371,844
<p>To get from $0−2&lt;x−2&lt;4−2$ to $4&lt;(x−2)^2&lt;4$ you have simply squared the entire inequality. This is invalid since you have not only led to a contradiction ($4 &lt; k &lt; 4$ implies no such $k$ exists), you have also mistakenly thought that if $a$ and $b$ are reals and $a &lt; b$, then $a^2 &lt; b^2$. This is clearly not true if $a$ and $b$ are not restricted in any other way. Since this step leads to a contradiction, the rest is invalid.</p> <p>To create a logical and sound solution, the solution must not lead to any contradiction and cover all possible cases. One way to write a logical proof is to include as much detail as necessary. Try not to skip steps, as doing so might lead to missing some cases or using a "fact" that hasn't been shown to be true yet. When the details are there, make them clear and concise.</p> <p>One possible method I propose for this problem is using the <a href="https://en.wikipedia.org/wiki/Inequality_of_arithmetic_and_geometric_means" rel="nofollow noreferrer">AM-GM inequality</a>. Since $0 &lt; x &lt; 4$, both $x$ and $(4-x)$ are positive. Use it on $x(4-x)$ to derive $\sqrt{x(4-x)} \le \frac{x+(4-x)}{2}$. You may continue from here on.</p>
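The suggested route can be illustrated numerically (a sketch, not the formal proof the exercise asks for): AM-GM gives $\sqrt{x(4-x)} \le \frac{x+(4-x)}{2} = 2$, hence $x(4-x) \le 4$ and $\frac{4}{x(4-x)} \ge 1$ on $(0,4)$.

```python
# spot-check the AM-GM step and the final inequality on a grid in (0, 4)
xs = [k / 100 for k in range(1, 400)]
for x in xs:
    gm = (x * (4 - x)) ** 0.5      # geometric mean of x and 4-x
    am = (x + (4 - x)) / 2         # arithmetic mean, always 2
    assert gm <= am + 1e-12        # AM-GM
    assert 4 / (x * (4 - x)) >= 1 - 1e-12
```

Equality occurs only at $x=2$, where $x(4-x)=4$ and the quotient equals $1$.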
4,331,258
<p>I'm trying to find the median of <span class="math-container">$f(x) = 4xe^{-2x}$</span>.</p> <p>So far, I've tried solving for <span class="math-container">$q_{50}$</span> by plugging it into an integral and setting it equal to 0.5 like so: <span class="math-container">$\int_{0}^{q_{50}} 4xe^{-2x} dx = 0.5$</span>. I eventually get to <span class="math-container">$-2q_{50}e^{-2q_{50}} - e^{-2q_{50}} + 1 = 0.5$</span>. Unfortunately, at this point, I have been unable to solve for <span class="math-container">$q_{50}$</span>.</p> <p>Is there something I've done wrong up to this point or another method that I could be using instead to find the median? Thanks for the help!</p>
Vítězslav Štembera
663,062
<p>Your equation <span class="math-container">\begin{align} -2q_{50}e^{-2q_{50}} - e^{-2q_{50}} + 1 = 0.5 \end{align}</span> i.e., <span class="math-container">\begin{align} (2q_{50}+1)e^{-2q_{50}}= 0.5 \end{align}</span> is correct; however, it is transcendental and must be solved numerically. Using MAPLE, for example, you can find <span class="math-container">$q_{50}\approx 0.839173495$</span>.</p>
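If MAPLE isn't at hand, a plain bisection reproduces the same value, since the CDF is increasing. A minimal sketch (assuming the pdf is supported on $[0,\infty)$, so the CDF is $1-(2q+1)e^{-2q}$):

```python
from math import exp

def cdf_minus_half(q):
    # integral of 4x e^{-2x} from 0 to q equals 1 - (2q+1)e^{-2q}
    return 1 - (2*q + 1) * exp(-2*q) - 0.5

lo, hi = 0.0, 2.0          # cdf_minus_half(0) < 0 < cdf_minus_half(2)
for _ in range(60):
    mid = (lo + hi) / 2
    if cdf_minus_half(mid) > 0:
        hi = mid
    else:
        lo = mid

print(lo)  # ~0.8391734...
```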
457,427
<p>Find the derivative of the functions:<br> $$\int_{x^2}^{\sin(x)}\sqrt{1+t^4}dt$$<br></p> <p>In class we had the following solution:<br> By the fundamental theorem of calculus we know that <br> $$\left(\int_a^xf(t)dt\right)'=f(x)$$ So<br> $$\int_{x^2}^0\sqrt{1+t^4}dt+\int_0^{\sin(x)}\sqrt{1+t^4}dt=$$<br> $$\int_0^{\sin(x)}\sqrt{1+t^4}dt-\int_0^{x^2}\sqrt{1+t^4}dt=$$<br> Letting $g(x)=\sqrt{1+t^4} $<br> $$g(\sin(x))(\sin(x))'-g(x^2)(x^2)'=$$<br> $$\sqrt{1+\sin(x)^4}\cdot \cos(x)-\sqrt{1+x^8} \cdot 2x$$<br></p> <p>However, if we have that $\left(\int_a^xf(t)dt \right)'=f(x)$ wouldn't the answer just be <br> $$\sqrt{1+\sin(x)^4}-\sqrt{1+x^8}?$$</p>
Ron Gordon
53,268
<p>You are forgetting the chain rule; you have to take the derivative of each function in the limits. Thus your derivative is</p> <p>$$\sqrt{1+\sin^4{x}} \frac{d}{dx}\sin{x} - \sqrt{1+x^8} \frac{d}{dx} x^2$$</p> <p>which I believe gives you your answer.</p>
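The Leibniz-rule computation can be confirmed symbolically. This is a sketch using SymPy, whose unevaluated `Integral` object knows how to differentiate with respect to a variable appearing in the limits:

```python
import sympy as sp

x, t = sp.symbols('x t')
# keep the integral unevaluated; diff applies the FTC plus chain rule
F = sp.Integral(sp.sqrt(1 + t**4), (t, x**2, sp.sin(x)))

derivative = F.diff(x)
expected = sp.sqrt(1 + sp.sin(x)**4) * sp.cos(x) - sp.sqrt(1 + x**8) * 2*x

print(sp.simplify(derivative - expected))  # 0
```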
1,283,037
<blockquote> <p>Consider the series: $$ \sum_{i=1}^\infty \frac{i}{(i+1)!} $$ Make a guess for the value of the $n$-th partial sum and use induction to prove that your guess is correct.</p> </blockquote> <p>I understand the basic principles of induction I think I would have to assume the n-1 sum to be true and then use that to prove that the nth sum is true. But I have no idea how to guess what the sum might be? Doing the partial sums indicates that the series converges at possibly 1.</p>
Brian M. Scott
12,042
<p>In order to make a guess, you should begin by calculating some partial sums:</p> <p>$$\begin{align*} \sum_{i=1}^1\frac{i}{(i+1)!}&amp;=\frac12\\\\ \sum_{i=1}^2\frac{i}{(i+1)!}&amp;=\frac12+\frac2{3!}=\frac12+\frac13=\frac56\\\\ \sum_{i=1}^3\frac{i}{(i+1)!}&amp;=\frac56+\frac3{4!}=\frac56+\frac18=\frac{23}{24}\\\\ \sum_{i=1}^4\frac{i}{(i+1)!}&amp;=\frac{23}{24}+\frac4{5!}=\frac{23}{24}+\frac1{30}=\frac{714}{720}=\frac{119}{120} \end{align*}$$</p> <p>Now look at that sequence of partial sums: $$\dfrac12,\dfrac56,\dfrac{23}{24},\dfrac{119}{120}\;.$$ The pattern of the denominators should leap out at you to suggest a conjecture as to the denominator of the $n$-th partial sum, and the likely relationship between the numerator and the denominator is even more apparent. Write down a conjecture of the form</p> <p>$$\sum_{i=1}^n\frac{i}{(i+1)!}=\frac{a_n}{b_n}\;,$$</p> <p>where $a_n$ and $b_n$ are some integer functions of $n$, and try to prove it by induction on $n$. Note that </p> <p>$$\sum_{i=1}^{n+1}\frac{i}{(i+1)!}=\sum_{i=1}^n\frac{i}{(i+1)!}+\frac{n+1}{(n+2)!}\;,$$</p> <p>so this boils down to proving that</p> <p>$$\frac{a_{n+1}}{b_{n+1}}=\frac{a_n}{b_n}+\frac{n+1}{(n+2)!}\;.$$</p>
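Exact arithmetic with Python's `fractions` module makes it painless to generate more data for the conjecture (a small sketch; it simply extends the hand computation above):

```python
from fractions import Fraction
from math import factorial

s = Fraction(0)
for n in range(1, 7):
    s += Fraction(n, factorial(n + 1))
    print(n, s)
# 1 1/2
# 2 5/6
# 3 23/24
# 4 119/120
# 5 719/720
# 6 5039/5040
```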
3,243,243
<p>I am learning category theory using Basic Category Theory by Tom Leinster as my main source. In the chapter on natural transformations he says that isomorphism of categories is unreasonably strict for the notion of the sameness of two categories. Isomorphism would require functors, <span class="math-container">$$ F:A\rightarrow B,G:B\rightarrow A $$</span> such that <span class="math-container">$$ G\circ F=1_A, F\circ G=1_B $$</span></p> <p>Instead he says that for equivalence we loosen the requirement on these functors to be isomorphic, <span class="math-container">$$ G\circ F\cong 1_A,F\circ G\cong 1_B $$</span> Then this is better. This section threw me for a loop. I don't understand the difference between the equivalence and the isomorphism statements. Any help clarifying what is trying to be said here is greatly appreciated.</p>
Chessanator
363,017
<p>Well, the actual difference between the two statements is that for an equivalence of categories, we only require that the composites <span class="math-container">$F \circ G$</span> and <span class="math-container">$G \circ F$</span> are naturally isomorphic to the identity functors rather than exactly equal. That is, there's a collection of isomorphisms <span class="math-container">$\eta_x :GF(x) \rightarrow x$</span> for each object <span class="math-container">$x$</span> of <span class="math-container">$A$</span> such that whenever <span class="math-container">$f: x \rightarrow y$</span> is a morphism in <span class="math-container">$A$</span>, <span class="math-container">$\eta_y GF(f) = f \eta_x$</span>, and a similar natural isomorphism for <span class="math-container">$F \circ G$</span>.</p> <hr> <p>As for why we do this... Imagine we're both doing group theory, so we both get ourselves a category of groups and start doing group theory in that category. But then we compare our categories and they're not the same: your category has one object for each isomorphism class of groups, while the objects of my category are given by a set <span class="math-container">$X$</span> along with a multiplication <span class="math-container">$\otimes: X \times X \rightarrow X$</span> which makes it a group. </p> <p>Our categories aren't isomorphic, not by a long shot: for every object in your category there's a large class of objects in mine. So if we could only use isomorphisms of categories it would look like we're working on entirely different things.</p> <p>Fortunately, our two categories are equivalent: using one functor which sends a set and a multiplication to its isomorphism class, and the other functor which takes each isomorphism class and picks a realisation of that group. Therefore, we're justified in calling both categories 'The Category of Groups' and any result you get in your category will also work in mine.</p>
2,081,001
<p>The number $\frac{22}{7}$ is irrational in our base-$10$ system, but in, say, base-$14$, it is rational (it comes out to $3.2$ in that system).</p> <p>It's easy for fractions that are irrational as decimals, as you can just represent them in a base that's double the denominator of the fraction. However, what if I have a number like $\pi$, or $\log(2)$?</p> <p>For those numbers, it could easily be represented as a rational number if it is in base-($\pi\cdot 2$) or base-($\log(2)\cdot 2$), but is it possible to represent them in any rational-based number system?</p>
mweiss
124,095
<p>The number $22/7$ is <em>not</em> irrational, regardless of the number system one uses to express it. On the other hand $\pi$ <em>is</em> irrational, regardless of the number system one uses to express it. They are different numbers: $22/7$ is a good rational approximation to $\pi$, but they are not equal.</p> <p>It is unfortunate that students are taught to think of "nonrepetition" as the essential quality that distinguishes rational numbers from irrational numbers. It is true that if a number is rational, then its decimal representation will eventually either terminate or repeat, but it might take a very long time, and just looking at a string of digits is not enough evidence to conclude whether or not the string represents the beginning of a repeating decimal or a non-repeating decimal. The real distinction between rational and irrational numbers lies in whether it is possible to express the number as a ratio of integers. If it is possible, then the number is rational; if it is not possible, then the number is irrational. The fact that rational numbers correspond to decimal representations that terminate or repeat is an important and interesting consequence of the definition, but it is not really the essence of the distinction.</p> <p>For more on this, see my answer and the discussion in the comments beneath it at <a href="https://math.stackexchange.com/a/2073186/124095">https://math.stackexchange.com/a/2073186/124095</a>.</p>
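For instance, long division shows the decimal expansion of $22/7$ repeating with period $6$, exactly as a rational number eventually must (a small sketch):

```python
def decimal_digits(p, q, n):
    """First n digits after the decimal point of p/q, by long division."""
    digits, r = [], p % q
    for _ in range(n):
        r *= 10
        digits.append(r // q)
        r %= q
    return digits

print(22 // 7, decimal_digits(22, 7, 12))  # 3 [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
```

The remainders cycle through at most $q-1$ values, which is why repetition is forced; no such argument applies to $\pi$.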
14,765
<p>I like to make the "dominoes" analogy when I teach my students induction.</p> <p>I recently came across the following video:</p> <p><a href="https://www.youtube.com/watch?v=-BTWiZ7CYoI" rel="noreferrer">https://www.youtube.com/watch?v=-BTWiZ7CYoI</a></p> <p>In this video, a sequence of concrete block wall caps are set up like dominoes on the top of a wall. The first wall cap is knocked down, setting off the domino effect. The blocks are spaced so that they are resting on each other when they fall, but just barely. So rather than resting flat each block is supported slightly by its successor. When the last block falls, however, it falls flat (having no subsequent block to rest on). This causes the block behind it to slip off, and lay flat, which causes the brick behind it to slip off and lie flat, until all the blocks are lying flat perfectly end to end.</p> <p>Is there any instance of a similar phenomena occurring in mathematics? I am thinking of a situation in which you want to prove both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span> (say). If you are able to prove: </p> <ol> <li><span class="math-container">$P(1)$</span></li> <li><span class="math-container">$\forall k \in \{1,2,3, \dots, 99\} P(k) \implies P(k+1)$</span></li> <li><span class="math-container">$P(100) \implies Q(100)$</span></li> <li><span class="math-container">$\forall k \in \{ 100, 99, 98, \dots, 3,2\}, Q(k) \implies Q(k-1)$</span></li> </ol> <p>Then it will follow that both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> are true for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span>.</p> <p>If an example is found, it could be a great example for teaching because it would force students to think through the logic of why induction works rather than blindly following a certain form of "an induction proof".</p>
GeauxMath
10,521
<p>I have similar issues, and handle it in the following way:</p> <ol> <li>On the first day of class I do a review of basic algebra, things like: <span class="math-container">$$\sqrt{\frac{a}{b}} = \frac{\sqrt{a}}{\sqrt{b}} $$</span> <span class="math-container">$$\sqrt{a + b} \neq \sqrt{a} + \sqrt{b} $$</span> <span class="math-container">$$\sqrt{ab} = \sqrt{a}\sqrt{b}$$</span> <span class="math-container">$$a(b + c) = ab + ac$$</span></li> <li>Give a list of what I call 'fatal errors', things like what you mention, give demonstrations of why these things are the way they are, e.g.: if <span class="math-container">$n$</span> even: <span class="math-container">$$x^n = b \hspace{.5in} (b &gt; 0)$$</span> has 2 solutions, namely <span class="math-container">$$\pm\sqrt[n]{b} $$</span> if <span class="math-container">$n$</span> odd: <span class="math-container">$$x^n = b \hspace{.5in} (b &gt;0)$$</span> has 1 solution, namely <span class="math-container">$$\sqrt[n]{b}$$</span> and use some numbers that work out 'nicely' as examples, e.g. <span class="math-container">$$(-3)^2 = (-3)(-3) = 9 = (3)(3) = 3^2 $$</span></li> <li>Tell them I will stop grading their solution the moment I see one of these errors.</li> <li>Give them lots of examples to work, circulate to help individual students</li> <li>Work out solutions while being sure to explain my reasoning</li> <li>Make sure to write a specific comment on their exams about having stopped grading because they made one of these fatal errors</li> </ol>
1,574,290
<p>How do I prove this? </p> <p>For the Fibonacci numbers defined by $f_1=1$, $f_2=1$, and $f_n = f_{n-1} + f_{n-2}$ for $n ≥ 3$, prove that $f^2_{n+1} - f_{n+1}f_n - f^2_n = (-1)^n$ for all $n≥ 1$.</p>
Jack D'Aurizio
44,121
<p>A slightly faster proof comes from noticing that </p> <p>$$q(x,y) = x^2-xy-y^2 = (y-x)^2-y(x-y)-y^2 = q(y-x,y)$$ and: $$ q(-x,y) = x^2+xy-y^2 = -q(y,x) $$ hence: $$ q(f_{n+1},f_n) = q(-f_{n-1},f_n) = -q(f_n,f_{n-1}) = \ldots = (-1)^n q(1,0).$$</p>
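A quick computational check of the identity (a sketch; not a substitute for the induction the exercise asks for):

```python
def fib(n):
    # iterative Fibonacci with f(1) = f(2) = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(1, 30):
    f_n, f_n1 = fib(n), fib(n + 1)
    assert f_n1**2 - f_n1 * f_n - f_n**2 == (-1)**n
print("identity holds for n = 1..29")
```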
2,138,963
<p>Finding $\displaystyle \int^{\pi}_{-\frac{\pi}{3}}\bigg[\cot^{-1}\bigg(\frac{1}{2\cos x-1}\bigg)+\cot^{-1}\bigg(\cos x - \frac{1}{2}\bigg)\bigg]dx$</p> <p>Attempt:</p> <p>\begin{align} &amp; \int^{\frac{\pi}{3}}_{-\frac{\pi}{3}}\bigg[\cot^{-1}\bigg(\frac{1}{2\cos x-1}\bigg)+\cot^{-1}\bigg(\cos x - \frac{1}{2}\bigg)\bigg] \, dx \\[10pt] + {} &amp; \int^\pi_{\frac{\pi}{3}}\bigg[\cot^{-1}\bigg(\frac{1}{2\cos x-1}\bigg)+\cot^{-1}\bigg(\cos x - \frac{1}{2}\bigg)\bigg] \, dx \end{align}</p> <p>We split the integral here because $\displaystyle \cos x- \frac 1 2 =0$ at $\displaystyle x= \frac \pi 3$.</p> <p>I wasn't able to go further; could someone help me?</p>
victoria
412,473
<p>I see some tricks in here. Ideas, as yet unfinished; I'm out of time now but will try to come back later.</p> <p>Consider $\cot^{-1}(b) = \theta \leftrightarrow \cot(\theta) = b$,</p> <p>and then $\tan(\theta) = 1/b$, so $\theta = \tan^{-1}(1/b)$.</p> <p>Also, $\cos(x) - \frac{1}{2} = \frac{2\cos(x)-1}{2}$.</p> <p>$$f(x) = \cot^{-1}\left(\frac{1}{2\cos(x) -1}\right) + \cot^{-1}\left(\frac{2 \cos(x) -1}{2}\right)$$</p> <p>If it weren't for that 2 in the second expression's denominator this would be really interesting with a possible simplification (check the original problem again?)</p> <p>$$f(x) = \tan^{-1}(2\cos(x) -1) + \tan^{-1}\left(\frac{2}{2 \cos(x) -1}\right)$$</p> <p>This is reminiscent of the tangent addition formula:</p> <p>If $\tan(\theta_1) = a$ and $\tan(\theta_2) = b$ then</p> <p>$$\tan(\theta_1 + \theta_2)= \frac{a+b}{1-ab}$$</p> <p>Let $\theta_1 = \tan^{-1}(2\cos(x) -1)$ and $\theta_2 = \tan^{-1}\left(\frac{2}{2 \cos(x) -1}\right)$</p> <p>So $a = 2\cos(x) -1$, $b = \frac{2}{2 \cos(x) -1}$, and $ab = 2$, giving $1-ab=-1$:</p> <p>$$\tan(\theta_1 + \theta_2) = -\left((2\cos(x) -1) + \frac{2}{2 \cos(x) -1}\right)$$</p> <p>$$f(x) = \theta_1 + \theta_2 = \tan^{-1}\left(-\left((2\cos(x) -1) + \frac{2}{2 \cos(x) -1}\right)\right) + n\pi$$</p> <p>(the multiple of $\pi$ depends on which branch $\theta_1 + \theta_2$ lands in). This should be able to go somewhere from here -- will try more later.</p>
610,472
<p>What is the value of these limits;</p> <p>$\lim_{x\rightarrow 1^{+}}\frac{\lfloor x\rfloor-1}{\lfloor x\rfloor-x}$</p> <p>$\lim_{x\rightarrow 1^{-}}\frac{\lfloor x\rfloor-1}{\lfloor x\rfloor-x}$</p>
mathlove
78,967
<p>1) We may suppose that $1\lt x\lt 2$. So we have $\lfloor x\rfloor =1$ and $\lfloor x\rfloor=1 \not= x$, so $$\lim_{x\to1+}\frac{\lfloor x\rfloor -1}{\lfloor x\rfloor-x}=\lim_{x\to1+}\frac{0}{1-x}=\lim_{x\to1+}0=0.$$</p> <p>2) We may suppose that $0\lt x\lt1$. So we have $\lfloor x\rfloor=0$, so $$\lim_{x\to1-}\frac{\lfloor x\rfloor -1}{\lfloor x\rfloor-x}=\lim_{x\to1-}\frac{0-1}{0-x}=\lim_{x\to1-}\frac{1}{x}=1.$$</p> <p>Note that $x\to \alpha$ does not mean $x=\alpha$.</p>
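Evaluating the quotient just to either side of $1$ matches both computations (a numeric sketch):

```python
from math import floor

def g(x):
    return (floor(x) - 1) / (floor(x) - x)

print(g(1 + 1e-9))  # right of 1: floor(x) = 1, so the numerator is 0 and the value is 0
print(g(1 - 1e-9))  # left of 1: floor(x) = 0, so g(x) = (0-1)/(0-x) = 1/x, close to 1
```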
3,685,969
<p>I have been investigating the brachistochrone problem with friction and in my derivations, I would like help solving the Euler-Lagrange equation below</p> <p><span class="math-container">$\frac{d}{dx}\frac{\partial F}{\partial y'}=\frac{\partial F}{\partial y}$</span> where <span class="math-container">$F=\sqrt{\frac{1+y'^2}{2g(y-\mu x)}}$</span></p> <p>I can get up to</p> <p><span class="math-container">$\frac{d}{dx}\frac{y'}{\sqrt{2g(y-\mu x)(1+y'^2)}}=-\sqrt{\frac{(1+y'^2)}{2g}}\frac{1}{2(y-\mu x)^\frac32}$</span> </p> <p>But I am unsure how the equation above reduces into <span class="math-container">$(1+y'^2)(1+\mu y')+2(y-\mu x)y''=0$</span>, as seen in equation (29) of <a href="https://mathworld.wolfram.com/BrachistochroneProblem.html" rel="nofollow noreferrer">this Wolfram page</a>.</p> <p>I am quite new to calculus and would appreciate a step by step solution. Thanks in advance!</p>
Quanto
686,284
<p>Let <span class="math-container">$x\to -x$</span> over <span class="math-container">$(-1,0)$</span></p> <p><span class="math-container">\begin{align} \int_{-1}^1\frac{\ln(1-x^2)}{(1-\beta x)^2}dx = &amp; \ 2\int_0^1\ln(1-x^2) \frac{1+\beta^2x^2}{(1-\beta^2x^2)^2} dx\\ = &amp; \ 2\int_0^1 \ln(1-x^2)\ d\left( \frac{x}{1-\beta^2x^2} - \frac{1}{1-\beta^2}\right)\\ \overset{ibp}= &amp; \ \frac4{1-\beta^2}\int_0^1\frac1{1+x}-\frac1{1-\beta^2x^2}\ dx\\ =&amp;\ \frac4{1-\beta^2}\left(\ln2-\frac1{2\beta}\ln \frac{1+\beta}{1-\beta}\right) \end{align}</span></p>
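The closed form can be checked numerically. A sketch with mpmath's quadrature (its tanh-sinh scheme copes with the logarithmic singularities at the endpoints), taking $\beta = 1/2$:

```python
from mpmath import mp

mp.dps = 30
beta = mp.mpf(1) / 2

integrand = lambda x: mp.log(1 - x**2) / (1 - beta * x)**2
numeric = mp.quad(integrand, [-1, 1])

closed = 4 / (1 - beta**2) * (mp.log(2) - mp.log((1 + beta) / (1 - beta)) / (2 * beta))
print(numeric, closed)  # both ~ -2.1625
```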
2,276,429
<p>I've tried multiple variations of polygons but can't find any that work. Do they exist?</p> <p>Is it possible to draw a polygon on a grid paper and divide it into two equal parts by a cut of the shape shown on the Figure (a)? </p> <p>Solve the same problem for a cut shown on Figure (b).</p> <p>Solve the same problem for a cut shown on Figure (c).</p> <p><a href="https://i.stack.imgur.com/iYSlV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iYSlV.jpg" alt="enter image description here"></a></p> <p>(In every problem a cut is inside the polygon, with the ends lying on the boundary. The sides of the polygons and the cuts must lie on the grid lines. The small links of the cuts are twice as short as the large ones)</p>
nickgard
420,432
<p>Here are (a) and (b). The key thing is to repeat the shape of the cut twice in the outline of the polygon.</p> <p><a href="https://i.stack.imgur.com/IpTRX.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IpTRX.png" alt="enter image description here"></a></p> <p><strong>EDIT:</strong> to add a solution for (c) constructed in a similar fashion.</p> <p><a href="https://i.stack.imgur.com/eje5Y.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/eje5Y.png" alt="enter image description here"></a></p>
317,666
<p>What is the difference between $\mathbb{H}$ and $Q_8$? Both are called <em>quaternions.</em></p>
Alex Youcis
16,497
<p>One is the <em><a href="http://en.wikipedia.org/wiki/Quaternion" rel="nofollow">Hamiltonian Quaternions</a></em> and has many descriptions; perhaps the most important (for things that immediately interest me) is that it is the (up to equivalence) only non-trivial central simple algebra over $\mathbb{R}$--it is also an object of fundamental importance in geometry.</p> <p>$Q_8$ is the <em><a href="http://en.wikipedia.org/wiki/Quaternion_group" rel="nofollow">quaternion group</a></em>. It is of great importance for the many weird properties it has that cause it to be a counterexample to many simple group theoretic questions--it is a non-abelian group all of whose subgroups are normal.</p>
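The relationship is easy to see computationally: encode a quaternion as a 4-tuple $(a,b,c,d) = a+bi+cj+dk$ with the Hamilton product, and $Q_8$ is just the eight elements $\{\pm 1, \pm i, \pm j, \pm k\}$ inside the multiplicative group of $\mathbb{H}$ (a sketch):

```python
def qmul(p, q):
    # Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

one, i, j, k = (1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)
neg = lambda q: tuple(-c for c in q)

Q8 = [one, neg(one), i, neg(i), j, neg(j), k, neg(k)]

print(qmul(i, j), qmul(j, i))  # ij = k but ji = -k: non-abelian
# Q8 is closed under multiplication, hence a (finite) subgroup of H^x
assert all(qmul(p, q) in Q8 for p in Q8 for q in Q8)
```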
317,666
<p>What is the difference between $\mathbb{H}$ and $Q_8$? Both are called <em>quaternions.</em></p>
Marc van Leeuwen
18,880
<p>The first is a division ring (also called a skew field, which is obviously infinite, and necessarily so by <a href="http://en.wikipedia.org/wiki/Wedderburn%27s_little_theorem" rel="nofollow">Wedderburn's theorem</a>) and the second is a finite non-abelian group (in fact a subgroup of the multiplicative group of the first). So they are not even the same kind of algebraic structure. Indeed $Q_8$ has only $8$ elements, those unit quaternions that have only one nonzero component. The reason it is called the quaternion group is probably that it captures the essence of the definition of multiplication in $\Bbb H$ (the general case follows by $\Bbb R$-bilinearity), and that the quaternions are in fact the most easily described context in which one comes across $Q_8$ (but one may of course find $Q_8$ in some settings totally unrelated to the quaternions).</p> <p>Incidentally $Q_8$ shows that the theorem saying that finite subgroups of multiplicative groups of fields (and more generally of integral domains) are cyclic fails for skew fields.</p>
2,699,483
<blockquote> <p>The probability that a fair-coin lands on either <strong>Heads</strong> or <strong>Tails</strong> on a certain day of the week is $1/14$. </p> <p><strong>Example: (H, Monday), (H, Tuesday) $...$ (T, Monday), (T, Tuesday) $...$</strong><br> Thus, $(1/2 \cdot 1/7) = 1/14$. There are $14$ such outcomes.</p> <p>In some arbitrary week, Tom flips <strong>two</strong> fair-coins. You don't know if they were flipped on the same day, or on different days. After this arbitrary week, Tom tells you that at least one of the flips was a <strong>Heads</strong> which he flipped on <strong>Saturday</strong>.</p> <p><strong>Determine the probability that Tom flipped two heads in that week.</strong></p> </blockquote> <p>I know that this is a conditional probability problem. </p> <p>The probability of getting two heads is $(1/2)^2 = 1/4$. Call this event <strong>$P$</strong>.</p> <p>I am trying to figure out the probability of Tom flipping at least one head on a Saturday. To get this probability, I know that we must compute the probability of there being no (H, Saturday) which is $1 - 1/14 = 13/14$.</p> <p>But then to get this "at least", we need to do $1 - 13/14$ which gives us $1/14$ again. Call this event $Q$. </p> <p>So is the probability of event $Q = 1/14$? It doesn't sound right to me.</p> <p>Afterwards we must do $Pr(P | Q) = \frac{P(P \cap Q)}{Pr(Q)}$. Now I'm not quite sure what $P \cap Q$ means in this context.</p>
NewGuy
518,506
<p>Corrected for reasons given by jgon.</p> <p>Sample space for a single throw = {MH, MT, TuH, TuT, ..., SaH, SaT}, with $14$ outcomes.</p> <p>Sample space for two throws = $14*14 = 196$</p> <p>M: 2 heads are thrown = $7*7= 49$</p> <p>N: at least one head is thrown on Saturday = {(SaH,?), (?,SaH)} = $2*14$</p> <p>But we have counted {(SaH,SaH)} twice, so one has to be subtracted:</p> <p>= $2*14-1=27$</p> <p>To find P(M|N) = $\frac{P(M\cap N)}{P(N)}$ = $\frac{n(M\cap N)}{n(N)}$</p> <p>$M\cap N$: one head is thrown on Saturday and the other head can be on any day = {(SaH,?H), (?H,SaH)} = $2*7$</p> <p>But double counting also takes place here: $2*7-1=13$</p> <p>P(M|N) = $\frac{13}{27}$</p>
2,699,483
<blockquote> <p>The probability that a fair-coin lands on either <strong>Heads</strong> or <strong>Tails</strong> on a certain day of the week is $1/14$. </p> <p><strong>Example: (H, Monday), (H, Tuesday) $...$ (T, Monday), (T, Tuesday) $...$</strong><br> Thus, $(1/2 \cdot 1/7) = 1/14$. There are $14$ such outcomes.</p> <p>In some arbitrary week, Tom flips <strong>two</strong> fair-coins. You don't know if they were flipped on the same day, or on different days. After this arbitrary week, Tom tells you that at least one of the flips was a <strong>Heads</strong> which he flipped on <strong>Saturday</strong>.</p> <p><strong>Determine the probability that Tom flipped two heads in that week.</strong></p> </blockquote> <p>I know that this is a conditional probability problem. </p> <p>The probability of getting two heads is $(1/2)^2 = 1/4$. Call this event <strong>$P$</strong>.</p> <p>I am trying to figure out the probability of Tom flipping at least one head on a Saturday. To get this probability, I know that we must compute the probability of there being no (H, Saturday) which is $1 - 1/14 = 13/14$.</p> <p>But then to get this "at least", we need to do $1 - 13/14$ which gives us $1/14$ again. Call this event $Q$. </p> <p>So is the probability of event $Q = 1/14$? It doesn't sound right to me.</p> <p>Afterwards we must do $Pr(P | Q) = \frac{P(P \cap Q)}{Pr(Q)}$. Now I'm not quite sure what $P \cap Q$ means in this context.</p>
Remy
325,426
<p>Intuitively, I would think the result would be greater than $\frac{1}{2}$ because of that slight chance we get $2$ heads on Saturday.</p> <p>Let $P$ denote the event that we flip $2$ heads that week.</p> <p>Let $Q$ denote the event that we flip at least one head on Saturday.</p> <p>I find it easier to flip $P(P\mid Q)$ into $P(Q\mid P)$.</p> <p>We have</p> <p>$$\begin{align*} P(P\mid Q) &amp;=\frac{P(P\cap Q)}{P(Q)}\\\\ &amp;=\frac{P(Q\mid P)\cdot P(P)}{P(Q)}\\\\ &amp;=\frac{\left({2 \choose 2}\left(\frac{1}{7}\right)^2+{2 \choose 1}\left(\frac{1}{7}\right)\left(\frac{6}{7}\right)\right)\left(\frac{1}{2}\right)^2}{{2 \choose 2}\left(\frac{1}{14}\right)^2+{2 \choose 1}\left(\frac{1}{14}\right)\left(\frac{13}{14}\right)}\\\\ &amp;=\frac{13}{27} \end{align*}$$</p> <p>where $P(Q\mid P)$ can be thought of as follows: given that we got two heads, what are the chances that at least one of them was from Saturday, each coin landing on Saturday with probability $\frac{1}{7}$?</p> <p><strong>Note:</strong> My answer contradicts my intuition! This serves as further proof that intuition can lead you astray in probability. To see why my intuition was incorrect, see @jgon's answer.</p>
2,699,483
<blockquote> <p>The probability that a fair-coin lands on either <strong>Heads</strong> or <strong>Tails</strong> on a certain day of the week is $1/14$. </p> <p><strong>Example: (H, Monday), (H, Tuesday) $...$ (T, Monday), (T, Tuesday) $...$</strong><br> Thus, $(1/2 \cdot 1/7) = 1/14$. There are $14$ such outcomes.</p> <p>In some arbitrary week, Tom flips <strong>two</strong> fair-coins. You don't know if they were flipped on the same day, or on different days. After this arbitrary week, Tom tells you that at least one of the flips was a <strong>Heads</strong> which he flipped on <strong>Saturday</strong>.</p> <p><strong>Determine the probability that Tom flipped two heads in that week.</strong></p> </blockquote> <p>I know that this is a conditional probability problem. </p> <p>The probability of getting two heads is $(1/2)^2 = 1/4$. Call this event <strong>$P$</strong>.</p> <p>I am trying to figure out the probability of Tom flipping at least one head on a Saturday. To get this probability, I know that we must compute the probability of there being no (H, Saturday) which is $1 - 1/14 = 13/14$.</p> <p>But then to get this "at least", we need to do $1 - 13/14$ which gives us $1/14$ again. Call this event $Q$. </p> <p>So is the probability of event $Q = 1/14$? It doesn't sound right to me.</p> <p>Afterwards we must do $Pr(P | Q) = \frac{P(P \cap Q)}{Pr(Q)}$. Now I'm not quite sure what $P \cap Q$ means in this context.</p>
Boyku
567,523
<p>The result of the two flips may be represented by one mark in a <span class="math-container">$14\times14$</span> grid. There are <span class="math-container">$27$</span> little squares that represent flips containing an HSat, and <span class="math-container">$13$</span> of those squares represent double-head results.</p> <p>The required probability is then <span class="math-container">$13 \over 27$</span></p> <p><a href="https://i.stack.imgur.com/nQstf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nQstf.jpg" alt="14x14 grid" /></a></p>
257,027
<p><strong>Question:</strong> Consider a distribution $D$, and $n$ i.i.d. random variables $X_i$, all distributed according to $D$. Let $p^D_2:=\Pr[X_1=X_2]$. What is a lower bound for $p^D_n:=\Pr[\exists i\neq j. X_i=X_j]$ (as a function of $p^D_2$)?</p> <p><strong>Conjecture:</strong> $p^D_n \geq 1-\bigl(1-p^D_2\bigr)^{n\choose 2}$. [<strong>EDIT:</strong> This particular bound is wrong. Counterexample by Will Perkins: $D(1)=0.8$, $D(2)=0.1$, $D(3)=0.1$, $n=3$.]</p> <p><strong>What bounds would I like:</strong> Tight bounds are preferred, of course. The conjecture above would be sufficient. But any bound that allows me to show the following is fine: For some $n\in O\bigl(\sqrt{1/p_2^D}\bigr)$, we have that $p^D_n\geq\frac12$.</p> <p><strong>Relation to uniform birthday inequality:</strong> If $D$ is the uniform distribution on $N$ elements, then $p^D_2=1/N$, and $p^D_n\leq \bigl(1-\tfrac1N\bigr)^{n\choose 2}$ [1]. Thus the conjecture holds for uniform $D$.</p> <hr> <p><strong>Approaches I tried:</strong></p> <p><strong>Approach 1:</strong> I tried to show that, for fixed $q$, we have that $p_n^D \geq p_n^U$ where $U$ is the uniform distribution on $1/q$ elements. (Assuming that $1/q$ is an integer.) Then I would just have to find a formula for $p_n^U$ which is the uniform birthday inequality. Unfortunately, it turns out that this approach cannot work: Consider the distribution $D$ on three elements with probabilities $2/3,1/6,1/6$. Then $p_2^D=1/2$. And $p_3^D&lt;1$. (Because there is a nonzero chance of picking three different elements.) But for $U$ being the uniform distribution on $2$ elements, we have $p^U_3=1$. Thus $p_n^D \ngeq p_n^U$ for $n=3$.</p> <p><strong>Approach 2:</strong> [<strong>EDIT:</strong> This approach cannot work because it would show the conjecture above which is wrong.] 
Perkins [1] shows implicitly in his introduction that the conjecture above (Definition 1 in [1]) is true for any distribution $D$ that satisfies the "repulsion inequality" (Definition 2 in [1]). This repulsion inequality says, in our special case and our notation: $$ \Pr[X_{N+1}\in\{X_1,\dots,X_N\}|X_1,\dots,X_N\text{ all distinct}] \geq \Pr[X_{N+1}\in\{X_1,\dots,X_N\}]. $$ (Here $X_1,\dots,X_{N+1}$ are i.i.d. according to $D$.) Thus, showing the repulsion property would answer my question. But I have not been able to prove the repulsion property.</p> <p><strong>Related work:</strong> I have found many references considering the Birthday inequality for non-uniform distributions, e.g., [2]. However, in all those cases, it was only shown that $p_n^D\geq p_n^U$ where $U$ is the uniform distribution on the support of $D$ (note that the support of $D$ can be very large if $D$ has a large number of low probability events). Or they contained exact formulas for the probability $p_n^D$ from which I did not manage to derive a bound in terms of $p_2^D$. There is one <a href="https://mathoverflow.net/q/255880/101775">question</a> on MathOverflow that asks for the same thing (in somewhat different words), but it gives much less details and has only an incorrect answer.</p> <p>[1] Will Perkins, Birthday Inequalities, Repulsion, and Hard Spheres, <a href="http://arxiv.org/abs/1506.02700v2" rel="nofollow noreferrer">http://arxiv.org/abs/1506.02700v2</a></p> <p>[2] Clevenson, M. Lawrence, and William Watkins. "Majorization and the birthday inequality." Mathematics Magazine 64.3 (1991): 183-188. <a href="http://www.jstor.org/stable/2691301" rel="nofollow noreferrer">http://www.jstor.org/stable/2691301</a></p>
esg
48,831
<p>I reformulate slightly, please check.</p> <p>You are considering a sequence $X_1,X_2,\ldots$ of (discrete) i.i.d random variables and want an upper bound for the probability $\mathbb{P}(R&gt;n)$ in terms of $\sqrt{\beta}$, where $\beta:= {1 \over \mathbb{P}(X_1=X_2)}$, and $R:=\inf\{ n\geq 2\,:\,X_n\in\{X_1,\ldots,X_{n-1}\}\}$ is the first time a value is repeated.</p> <p>(Note that $\{ R&gt; n\}=\{ X_1,\ldots , X_n \mbox{ are mutually distinct }\}$. Note also that you use the notation $p_n^D$ in opposite ways above: $p_n^D=\mathbb{P}(R\leq n)$ in the question, and (for the uniform distribution) $p_n^D=\mathbb{P}(R&gt;n)$ $=\mathbb{P}(E_n)$ of the paper of Perkins.)</p> <p>This view allows to use Markov's inequality: for $a&gt;0$</p> <p>$$\mathbb{P} (R\geq a)\leq \frac{\mathbb{E}(R)}{a}$$</p> <p><a href="http://eprint.iacr.org/2005/318" rel="nofollow noreferrer">Here</a> (Thm. 4) it is proved that $\mathbb{E}(R)\leq 2\sqrt{\beta}$. Thus for $a&gt;0$ $$\mathbb{P} (R\geq a\sqrt{\beta})\leq \frac{2}{a}$$ entailing the desired claim.</p> <p>Remarks:</p> <p>(1) the inequality for $\mathbb{E}(R)$ can be sharpened,<br> e.g. to $$\sqrt{\frac{\pi}{2}\beta}\leq \mathbb{E}(R)\leq \sqrt{\frac{\pi}{2}\beta} + \max_i( p_i)\, \beta\;\;,$$ but this doesn't improve the bound qualitatively.</p> <p>(2) the bound is far from tight. The possible limiting distributions of ${R_n \over \sqrt{\beta_n}}$ (for a sequence $(R_n)$ with corresponding $\beta_n\longrightarrow \infty$) are <a href="http://projecteuclid.org/euclid.ejp/1457376437" rel="nofollow noreferrer">known</a> - tighter bounds must be compatible with all possible limiting shapes (your conjectured bound isn't).</p>
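The two bounds on $\mathbb{E}(R)$ quoted above can be checked empirically. The following is an illustrative Monte Carlo sketch (the distribution and all names are my own choices, not from the cited papers); it estimates $\mathbb{E}(R)$ for one example distribution and confirms $\sqrt{\frac{\pi}{2}\beta}\leq \mathbb{E}(R)\leq 2\sqrt{\beta}$:

```python
import math
import random

def first_repeat_time(probs, rng):
    """Sample i.i.d. values from probs until one repeats; return that index n."""
    seen = set()
    for n in range(1, len(probs) + 2):  # a repeat must occur by n = len(probs)+1
        x = rng.choices(range(len(probs)), weights=probs)[0]
        if x in seen:
            return n
        seen.add(x)

probs = [0.5, 0.25, 0.125, 0.125]        # an arbitrary example distribution
beta = 1 / sum(p * p for p in probs)     # beta = 1 / P(X1 = X2)
rng = random.Random(0)
trials = 20000
mean_R = sum(first_repeat_time(probs, rng) for _ in range(trials)) / trials

lower = math.sqrt(math.pi / 2 * beta)    # the sharpened lower bound
upper = 2 * math.sqrt(beta)              # the Markov-style upper bound
```

For this distribution $\beta\approx 2.91$, so the bounds are roughly $2.14$ and $3.41$, and the empirical mean lands between them.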
4,510,808
<p>Find the value of <span class="math-container">$p$</span> such that <span class="math-container">$\mathop {\lim }\limits_{x \to \infty } \left( {{x^p}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right)$</span> is a finite, non-zero number.</p> <p>My approach is as follows.</p> <p><span class="math-container">$\mathop {\lim }\limits_{x \to \infty } \left( {{x^p}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right) \Rightarrow \mathop {\lim }\limits_{x \to \infty } \left( {{x^{\frac{{3p}}{3}}}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right)$</span></p> <p><span class="math-container">$\mathop {\lim }\limits_{x \to \infty } \left( {\sqrt[3]{{{x^{3p}}}}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right) \Rightarrow \mathop {\lim }\limits_{x \to \infty } \left( {\sqrt[3]{{{x^{3p + 1}} + {x^{3p}}}} + \sqrt[3]{{{x^{3p + 1}} - {x^{3p}}}} - 2\sqrt[3]{{{x^{3p + 1}}}}} \right)$</span></p> <p>How do we proceed?</p>
Z Ahmed
671,540
<p><span class="math-container">$$L=\mathop {\lim }\limits_{x \to \infty } \left( {{x^p}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right)$$</span> <span class="math-container">$$L=\mathop {\lim }\limits_{x \to \infty } \left( {{x^{p+1/3}}\left( {\sqrt[3]{{1 + 1/x}} + \sqrt[3]{{1 - 1/x}} - 2} \right)} \right)$$</span> Use <span class="math-container">$(1+z)^k=1+kz+\frac{k(k-1)}{2}z^2+O(z^3)$</span> when <span class="math-container">$z$</span> is very small, then let <span class="math-container">$z=1/x$</span>: <span class="math-container">$$L=\lim_{z\rightarrow 0}~ z^{-p-1/3}\left(1+\frac{z}{3}-\frac{z^2}{9}+O(z^3)+1-\frac{z}{3}-\frac{z^2}{9}+O(z^3)-2\right)$$</span> <span class="math-container">$$L=\lim_{z\rightarrow 0}~ z^{-p-1/3}\left(-\frac{2z^2}{9}+O(z^3)\right)$$</span> Let <span class="math-container">$-p-1/3+2=0$</span>, then <span class="math-container">$$L=-\frac{2}{9}$$</span> Hence <span class="math-container">$p=\frac{5}{3}$</span> makes the limit finite and equal to <span class="math-container">$-\frac{2}{9}$</span>.</p>
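Both steps can be verified symbolically and numerically with SymPy (an illustrative sketch; the large evaluation point $x=10^6$ is an arbitrary choice, and high precision is needed because the cube roots nearly cancel):

```python
import sympy as sp

x, z = sp.symbols('x z', positive=True)

# The binomial expansion used above: the constant and z terms cancel,
# leaving -2 z^2 / 9 as the leading term of the bracket.
bracket = (1 + z) ** sp.Rational(1, 3) + (1 - z) ** sp.Rational(1, 3) - 2
leading = sp.series(bracket, z, 0, 3).removeO()

# Numeric check of the limit with p = 5/3 at a large value of x.
expr = x ** sp.Rational(5, 3) * (sp.cbrt(x + 1) + sp.cbrt(x - 1) - 2 * sp.cbrt(x))
approx = sp.N(expr.subs(x, 10 ** 6), 50)
```

The series gives $-\frac{2z^2}{9}$ and the numeric value sits within about $10^{-12}$ of $-\frac{2}{9}$.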
1,812,525
<p>Let $T:H \to H$ be defined as $Tx=\sum_{n=1}^{\infty} \lambda_n \langle x,\varphi _n \rangle \varphi _n$, given that $\{\varphi _n\}_{n=1}^\infty$ is an orthonormal sequence (not necessarily a basis) and $\{\lambda_n\}_{n=1}^\infty$ is a sequence of numbers (which may be complex if the Hilbert space is complex).</p> <p>Show that $\ker (T)=\{\varphi _n\mid\lambda_n\neq 0\}^\perp $.</p> <p>What does this $\{\}^\perp $ notation mean? Do I need to show that $\varphi _n$ are perpendicular to each other? If so how?</p>
egreg
62,967
<p>The notation $S^\perp$ means $\{x\in H\mid \langle x,y\rangle=0,\text{ for all }y\in S\}$.</p> <p>Let $x\in\ker T$; you need to prove that, for every $m$ with $\lambda_m\ne0$, you have $\langle x,\varphi_m\rangle=0$.</p> <p>You know that $\sum_n\lambda_n\langle x,\varphi_n\rangle\varphi_n=0$, so also $$ \Bigl\langle\sum_n\lambda_n\langle x,\varphi_n\rangle\varphi_n,\varphi_m\Bigr\rangle =0 $$ Since the series converges, you can deduce that $$ 0=\sum_n\langle\lambda_n\langle x,\varphi_n\rangle\varphi_n,\varphi_m\rangle =\sum_n\lambda_n\langle x,\varphi_n\rangle\,\langle\varphi_n,\varphi_m\rangle= \lambda_m\langle x,\varphi_m\rangle $$ Since $\lambda_m\ne0$, by assumption, it follows that $\langle x,\varphi_m\rangle=0$.</p> <p>Conversely, you need to show that, if $\langle x,\varphi_m\rangle=0$ whenever $\lambda_m\ne0$, then $x\in\ker T$, meaning that $\sum_n\lambda_n\langle x,\varphi_n\rangle\varphi_n=0$. Can you show it?</p>
4,651,798
<p>This question comes from the province-stage olympiad in my country in order to qualify for the national stage:</p> <blockquote> <p>Given the set <span class="math-container">$S=\{1,2,3,4\}$</span>.The number of non-empty subsets <span class="math-container">$A_1,A_2, ..., A_6$</span> that fulfil these three criteria:</p> <ol> <li><span class="math-container">$A_1\cap A_2=\emptyset$</span>.</li> <li><span class="math-container">$A_1\cup A_2\subseteq A_3$</span>.</li> <li><span class="math-container">$A_3\subseteq A_4\subseteq\dots\subseteq A_6$</span>. is ...</li> </ol> </blockquote> <p>From what I managed to conclude, <span class="math-container">$|A_1|+|A_2|\leq|A_3|$</span> and <span class="math-container">$|A_3|\leq|A_4|\leq|A_5|\leq|A_6|$</span> from the second and third criterion respectively. Then, I thought to divide the first criterion into two cases: <span class="math-container">$|A_1|=|A_2|$</span> and <span class="math-container">$|A_1|\neq|A_2|$</span>. I don't know how to proceed from here. Any help would be appreciated.</p>
Rezha Adrian Tanuharja
751,970
<p>Label each element with <span class="math-container">$0,1,2,3,4,5$</span> or <span class="math-container">$6$</span>. Then put the elements into the six subsets with the following rules:</p> <ul> <li>Those with label <span class="math-container">$0$</span> are not in any subset</li> <li><span class="math-container">$A_{1}$</span> contains those with label <span class="math-container">$1$</span></li> <li><span class="math-container">$A_{2}$</span> contains those with label <span class="math-container">$2$</span></li> <li><span class="math-container">$A_{3}$</span> contains those with label <span class="math-container">$1,2$</span> or <span class="math-container">$3$</span></li> <li><span class="math-container">$A_{4}$</span> contains those with label <span class="math-container">$1,2,3$</span> or <span class="math-container">$4$</span></li> <li><span class="math-container">$A_{5}$</span> contains those with label <span class="math-container">$1,2,3,4$</span> or <span class="math-container">$5$</span></li> <li><span class="math-container">$A_{6}$</span> contains those with label <span class="math-container">$1,2,3,4,5$</span> or <span class="math-container">$6$</span></li> </ul> <p>The question becomes: &quot;How many labellings have at least one <span class="math-container">$1$</span> and one <span class="math-container">$2$</span> label?&quot;</p> <p>Using the principle of inclusion and exclusion (PIE), we have <span class="math-container">$7^{4}-2\cdot 6^{4}+5^{4}=434$</span></p>
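The inclusion–exclusion count can be confirmed by brute force over all $7^4$ labellings (an illustrative Python sketch; note that once labels $1$ and $2$ both occur, $A_3,\dots,A_6$ are automatically non-empty as well):

```python
from itertools import product

# Labels 0..6 as in the answer; A1 and A2 being non-empty means labels
# 1 and 2 must each be used at least once among the 4 elements of S.
count = sum(1 for labels in product(range(7), repeat=4)
            if 1 in labels and 2 in labels)
```

The enumeration returns the same value as the PIE formula.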
15,316
<blockquote> <p>What is the length $f(n)$ of the shortest nontrivial group word $w_n$ in $x_1,\ldots,x_n$ that collapses to $1$ when we substitute $x_i=1$ for any $i$?</p> </blockquote> <p>For example, $f(2)=4$, with the commutator $[x_1,x_2]=x_1 x_2 x_1^{-1} x_2^{-1}$ attaining the bound. </p> <p>For any $m,n \ge 1$, the construction $w_{m+n}(\vec{x},\vec{y}):=[w_m(\vec{x}),w_n(\vec{y})]$ shows that $f(m+n) \le 2 f(m) + 2 f(n)$.</p> <p>Is $f(1),f(2),\ldots$ the same as sequence <a href="http://oeis.org/A073121" rel="nofollow noreferrer" title="A073121">A073121</a>: $$ 1,4,10,16,28,40,52,64,88,112,136,\ldots ?$$</p> <p><strong>Motivation:</strong> Beating the iterated commutator construction would improve the best known bounds in <a href="https://mathoverflow.net/questions/15022/size-of-the-smallest-group-not-satisfying-an-identity/15065#15065">size of the smallest group not satisfying an identity</a>.</p>
Sam Nead
1,650
<p>See the paper "Brunnian links" by Gartside and Greenwood, published in Fundamenta Mathematicae. Theorems 8 and 7 imply that iterated commutators are optimal and the sequence you suggest gives the minimal length. </p>
18,530
<p>Sorry about the title, I have no idea how to describe these types of problems.</p> <p>Problem statement:</p> <p>$A(S)$ is the set of 1-1 mappings of $S$ onto itself. Let $S \supset T$ and consider the subset $U(T) = $ { $f \in A(S)$ | $f(t) \in T$ for every $t \in T$ }. $S$ has $n$ elements and $T$ has $m$ elements. Show that there is a mapping $F:U(T) \rightarrow S_m$ such that $F(fg) = F(f)F(g)$ for $f, g \in U(T)$ and $F$ is onto $S_m$.</p> <p>How do I write up this reasoning: When I look at the sets $S$ = { 1, 2, ..., $n$} and $T$ = { 1, 2, ..., $m$}, I can see that there are a bunch of permutations of the elements of $T$ within $S$. I can see there are $(n - m)!$ members of $S$ for each permutation of $T$'s elements. But there needs to be some way to get a handle on the positions of the elements in $S$ and $T$ in order to compare them to each other. But $S$ isn't any particular set, like a set of integers, so how can I relate the positions of the elements to one another? Or, is this the wrong way to go about it?</p> <p>Example:</p> <p>$U(T_3 \subset S_6) = \left( \begin{array}{cccccc} 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 \\ 1 &amp; 2 &amp; 3 &amp; 4 &amp; 6 &amp; 5 \\ 1 &amp; 2 &amp; 3 &amp; 5 &amp; 4 &amp; 6 \\ 1 &amp; 2 &amp; 3 &amp; 5 &amp; 6 &amp; 4 \\ 1 &amp; 2 &amp; 3 &amp; 6 &amp; 4 &amp; 5 \\ 1 &amp; 2 &amp; 3 &amp; 6 &amp; 5 &amp; 4 \\ 1 &amp; 3 &amp; 2 &amp; 4 &amp; 5 &amp; 6 \\ 1 &amp; 3 &amp; 2 &amp; 4 &amp; 6 &amp; 5 \\ 1 &amp; 3 &amp; 2 &amp; 5 &amp; 4 &amp; 6 \\ 1 &amp; 3 &amp; 2 &amp; 5 &amp; 6 &amp; 4 \\ 1 &amp; 3 &amp; 2 &amp; 6 &amp; 4 &amp; 5 \\ 1 &amp; 3 &amp; 2 &amp; 6 &amp; 5 &amp; 4 \\ 2 &amp; 1 &amp; 3 &amp; 4 &amp; 5 &amp; 6 \\ 2 &amp; 1 &amp; 3 &amp; 4 &amp; 6 &amp; 5 \\ 2 &amp; 1 &amp; 3 &amp; 5 &amp; 4 &amp; 6 \\ 2 &amp; 1 &amp; 3 &amp; 5 &amp; 6 &amp; 4 \\ 2 &amp; 1 &amp; 3 &amp; 6 &amp; 4 &amp; 5 \\ 2 &amp; 1 &amp; 3 &amp; 6 &amp; 5 &amp; 4 \\ 2 &amp; 3 &amp; 1 &amp; 4 &amp; 5 &amp; 6 \\ 2 &amp; 3 &amp; 1 &amp; 4 &amp; 6 
&amp; 5 \\ 2 &amp; 3 &amp; 1 &amp; 5 &amp; 4 &amp; 6 \\ 2 &amp; 3 &amp; 1 &amp; 5 &amp; 6 &amp; 4 \\ 2 &amp; 3 &amp; 1 &amp; 6 &amp; 4 &amp; 5 \\ 2 &amp; 3 &amp; 1 &amp; 6 &amp; 5 &amp; 4 \\ 3 &amp; 1 &amp; 2 &amp; 4 &amp; 5 &amp; 6 \\ 3 &amp; 1 &amp; 2 &amp; 4 &amp; 6 &amp; 5 \\ 3 &amp; 1 &amp; 2 &amp; 5 &amp; 4 &amp; 6 \\ 3 &amp; 1 &amp; 2 &amp; 5 &amp; 6 &amp; 4 \\ 3 &amp; 1 &amp; 2 &amp; 6 &amp; 4 &amp; 5 \\ 3 &amp; 1 &amp; 2 &amp; 6 &amp; 5 &amp; 4 \\ 3 &amp; 2 &amp; 1 &amp; 4 &amp; 5 &amp; 6 \\ 3 &amp; 2 &amp; 1 &amp; 4 &amp; 6 &amp; 5 \\ 3 &amp; 2 &amp; 1 &amp; 5 &amp; 4 &amp; 6 \\ 3 &amp; 2 &amp; 1 &amp; 5 &amp; 6 &amp; 4 \\ 3 &amp; 2 &amp; 1 &amp; 6 &amp; 4 &amp; 5 \\ 3 &amp; 2 &amp; 1 &amp; 6 &amp; 5 &amp; 4 \end{array} \right),A(T_3) = \left( \begin{array}{ccc} 1 &amp; 2 &amp; 3 \\ 1 &amp; 3 &amp; 2 \\ 2 &amp; 1 &amp; 3 \\ 2 &amp; 3 &amp; 1 \\ 3 &amp; 1 &amp; 2 \\ 3 &amp; 2 &amp; 1 \end{array} \right)$</p>
Qiaochu Yuan
232
<p>A basic practical reason to care about logarithms is that there are many numbers in real life which vary greatly in size, and it is both a pain and misleading to compare their sizes directly; one should instead compare the sizes of their <em>logarithms</em>, for various reasons. This is why the <a href="http://en.wikipedia.org/wiki/Richter_magnitude_scale" rel="nofollow noreferrer">Richter scale</a> is logarithmic; see <a href="http://en.wikipedia.org/wiki/Orders_of_magnitude" rel="nofollow noreferrer">these</a> <a href="http://en.wikipedia.org/wiki/Orders_of_magnitude_(length)" rel="nofollow noreferrer">Wikipedia</a> <a href="http://en.wikipedia.org/wiki/Orders_of_magnitude_(time)" rel="nofollow noreferrer">articles</a> for some examples.</p> <p>Logarithms also appear in the basic mathematical description of <a href="http://en.wikipedia.org/wiki/Information_theory" rel="nofollow noreferrer">information</a>. Suppose I send you a message consisting of zeroes and ones. If the message has length $n$, we say that it contains $n$ <em>bits</em> of information. There are $2^n$ possible such messages, which leads to a general principle: whenever you are in a situation where there are $k$ possibilities and you know that one of them happens, you have gained $\log_2 k$ bits of information. </p> <p>Information is a fundamental concept. Consider the following <a href="https://math.stackexchange.com/questions/639/logic-problem-identifying-poisoned-wines-out-of-a-sample-minimizing-test-subjec">puzzle</a>: you have $1000$ bottles of wine, and you know that one of them is poisoned. You have an indeterminate number of rats to which you can feed various wines; if they are poisoned, they will die in $1$ hour. How many rats do you need to figure out which bottle is poisoned in $1$ hour?</p> <p>The answer is $10$. This is because you want to figure out which of $1000$ possibilities happens, so you want to gain $\log_2 1000 \approx 10$ bits of information. 
If you feed $n$ rats some amount of wine, the amount of information you have after $1$ hour is precisely a list of which rats died and which rats didn't - zeroes and ones - so you have gained at most $n$ bits of information. (You might not reach this upper bound if some of the information you gain is redundant.) This requires that $n$ is at least $10$, and in fact this bound can be achieved by the following algorithm:</p> <p>Label the wines $0, 1, 2, ... 999$ and convert the numbers to binary. Each of these numbers has at most $10$ binary digits. Assign each of the rats wines as follows: rat $i$ will drink all the wines with the property that the $i^{th}$ binary digit is $1$. After $1$ hour, the pattern of which rats die spells out the binary expansion of the poisoned wine. </p> <p>I really like this problem because the problem statement does not mention logarithms at all, but it is an inevitable consequence of the particular optimization you are trying to accomplish that logarithms appear in the solution. </p>
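The bit-assignment algorithm in the last paragraph can be sketched in a few lines of Python (the function names are invented for illustration):

```python
def assign_wines(n_bottles, n_rats):
    """Rat i drinks every bottle whose i-th binary digit is 1."""
    return [[b for b in range(n_bottles) if (b >> i) & 1]
            for i in range(n_rats)]

def identify(n_bottles, poisoned):
    """Recover the poisoned bottle's label from the pattern of dead rats."""
    n_rats = max(1, (n_bottles - 1).bit_length())   # ceil(log2(n_bottles))
    drinks = assign_wines(n_bottles, n_rats)
    # After one hour, rat i is dead iff the poisoned bottle was in its share.
    dead = [poisoned in share for share in drinks]
    # The death pattern spells out the poisoned label in binary.
    return sum(1 << i for i, d in enumerate(dead) if d)
```

With $1000$ bottles this uses exactly $10$ rats, matching $\log_2 1000 \approx 10$ bits of information.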
655,005
<p>Show that if $k \in \mathbb{Z}$, then the integers $6k-1$, $6k+1$, $6k+2$, $6k+3$, and $6k+5$ are pairwise relatively prime. I am still new and uncomfortable with proofs. Any help would be great. </p>
hmakholm left over Monica
14,366
<p>The Kleene star produces only <em>finite</em> sequences of the alphabet symbols. The elements in <span class="math-container">$\Sigma^*$</span> for some alphabet <span class="math-container">$\Sigma$</span> can be arbitrary long, but each of them is, individually, finite.</p> <p>Because of this, there are not enough elements in <span class="math-container">$\Sigma^*$</span> to give <em>every</em> real number a representation.</p> <p>You can select <em>some</em> irrational numbers to represent with your strings-that-don't-have-a-meaning-yet, of course -- getting an <strong>injective</strong> mapping from <span class="math-container">$\Sigma^*$</span> to <span class="math-container">$\mathbb R$</span> is no problem, but you can't make it <strong>surjective</strong>. There will always be some reals left over that you're not representing.</p>
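The key point, that every element of $\Sigma^*$ is an individually finite string and that the whole set can be listed in a single sequence (hence is countable, unlike $\mathbb{R}$), can be illustrated with a short Python generator (an illustrative sketch, not part of the original argument):

```python
from itertools import count, islice, product

def kleene_star(alphabet):
    """Yield every element of the Kleene star, each exactly once, in length order.

    Every string appears at some finite position, which is exactly why the
    set of all finite strings is countable.
    """
    yield ""
    for n in count(1):
        for letters in product(alphabet, repeat=n):
            yield "".join(letters)

first = list(islice(kleene_star("ab"), 7))
```

Over the alphabet $\{a,b\}$ the enumeration starts with the empty string, the two length-1 strings, then the four length-2 strings.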
4,473,264
<p>I have part of a circle described by three two dimensional vectors.</p> <ul> <li>start point <code>s1</code></li> <li>center point <code>c1</code></li> <li>end point <code>e</code></li> </ul> <p>I move the start point <code>s1</code> by <code>m1</code>, which is a <strong>known</strong> two dimensional vector. My question is: Can I calculate the new center point <code>c2</code> from the data I have? And if so, how?</p> <p>Problem</p> <p><a href="https://i.stack.imgur.com/3wWOr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3wWOr.jpg" alt="enter image description here" /></a></p> <p>I'm creating a svg-manuipulation-app (drawing-app) in javascript where I want to edit one point of an arc, but keep the shape of the arc intact by appropriately moving the center of the arc.</p> <p>It only looks like I want to keep the <code>x</code> value the same. Small coincidence I didn't realised. The question should cover any vector <code>m1</code>, no matter where the new center <code>c2</code> would end up.</p>
Eugene
726,796
<p>The integral</p> <p><span class="math-container">$$ \int\sqrt{1+\frac{1}{x^4}}dx $$</span></p> <p>can be rewritten as <span class="math-container">$$ \int x^{-2}(1+x^4)^{\frac{1}{2}}dx, $$</span></p> <p>which is an example of <a href="https://planetmath.org/integrationofdifferentialbinomial" rel="nofollow noreferrer">integrating the differential binomial</a>.</p> <p>In this case, the integral (antiderivative) can be expressed in elementary functions only if one of the following holds:</p> <ul> <li><span class="math-container">$\frac{-2 + 1}{4} + \frac{1}{2} \in \mathbb{Z} \Leftrightarrow \frac{1}{4} \in \mathbb{Z}$</span> (FALSE)</li> <li><span class="math-container">$\frac{-2 + 1}{4} \in \mathbb{Z} \Leftrightarrow -\frac{1}{4} \in \mathbb{Z}$</span> (FALSE)</li> <li><span class="math-container">$\frac{1}{2} \in \mathbb{Z}$</span> (FALSE)</li> </ul> <p>None of the above is true, so the integral cannot be expressed in elementary functions.</p>
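The three Chebyshev conditions for $\int x^m(a+bx^n)^p\,dx$ can be checked mechanically with exact rational arithmetic (an illustrative sketch; here $m=-2$, $n=4$, $p=\tfrac12$ for the integrand above):

```python
from fractions import Fraction

# Chebyshev's theorem: the antiderivative of x^m (a + b x^n)^p is
# elementary iff p, (m+1)/n, or (m+1)/n + p is an integer.
m, n, p = Fraction(-2), Fraction(4), Fraction(1, 2)

conds = [(m + 1) / n + p, (m + 1) / n, p]
elementary = any(c.denominator == 1 for c in conds)
```

All three quantities come out as proper fractions, so `elementary` is false, matching the conclusion above.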
24,361
<p>Let $X$ be a topological space and let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves over $X$.</p> <p>Of course, if one has a morphism $f : \mathcal{F} \to \mathcal{G}$ such that for all $x\in X$, $f_x : \mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, then it is known that $f$ itself is an isomorphism.</p> <p>My question is the following: if we don't have such a morphism $f$, but if we know that for all $x\in X$, $\mathcal{F}_x$ and $\mathcal{G}_x$ are isomorphic, is it true that $\mathcal{F}$ and $\mathcal{G}$ are isomorphic ?</p>
user2035
2,035
<p>Actually, the examples given in the answers so far are even counter-examples to the weaker statement that two sheaves $F,G$ for which there exists a covering $U_i$ such that $F|U_i\cong G|U_i$ be isomorphic.</p> <p>For the original question regarding stalks there is a simpler example: Let $X=\{\eta,s\}$ be the topological space having $\{\eta\}$ as the only non-trivial open set. To give an abelian sheaf $F$ on $X$ is equivalent to give two groups $F(X)=F_s$ and $F(\{\eta\})=F_\eta$ and a restriction homomorphism $F_s\to F_\eta$. Taking $F_s=F_\eta=A$ for some abelian group $A\ne0$ and choosing either $\mathrm{id}_A$ or $0$ as restriction defines two non-isomorphic sheaves.</p>
24,361
<p>Let $X$ be a topological space and let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves over $X$.</p> <p>Of course, if one has a morphism $f : \mathcal{F} \to \mathcal{G}$ such that for all $x\in X$, $f_x : \mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, then it is known that $f$ itself is an isomorphism.</p> <p>My question is the following: if we don't have such a morphism $f$, but if we know that for all $x\in X$, $\mathcal{F}_x$ and $\mathcal{G}_x$ are isomorphic, is it true that $\mathcal{F}$ and $\mathcal{G}$ are isomorphic ?</p>
Bugs Bunny
5,301
<p>How about a positive answer for a twist:-))? Pick an open cover $X=\cup_i U_i$ and try to reglue $F|_{U_i}$ into a brand new sheaf. To do that you need to pick automorphisms $\sigma_{i,j}\in Aut(F|_{U_i \cap U_j})$ that agree on triple intersections. This is a Čech cocycle in $Z^1_{U_i} (X, Aut (F))$. Two Čech cocycles will give you the same sheaf if they differ by a coboundary. Hence if $H^1_{U_i} (X, Aut (F))=1$ then any regluing will give the same sheaf.</p> <p>Now you have to take care of all possible covers by going to the limit. Here is your positive answer then: true if and only if $H^1 (X, Aut (F))=1$. </p>
1,348,587
<p>Let $f$ be a real-valued function (a function with target space the set of reals). Let $P(x, M)$ stand for $|f(x)| \leq M $, let $N$ be the set of positive real numbers, and let $\mathbb{R}$ be the set of real numbers.</p> <p>a) Which of the following statements is an accurate translation of "f is bounded"?</p> <p>(i): ($\forall M \in N$)($\exists x \in \mathbb{R}$)($P(x,M)$)</p> <p>(ii): ($\exists M \in N$)($\forall x \in \mathbb{R}$)($P(x,M)$)</p> <p>(iii): ($\forall x \in \mathbb{R}$)($\exists M \in N$)($P(x,M)$)</p> <p>(iv): ($\exists x \in \mathbb{R}$)($\forall M \in N$)($P(x,M)$)</p> <p>I understand that (III) is the answer that defines a bounded function, but I don't understand how it differs from (II). Also, if someone can provide me with a more explicitly method of reading these types of statements that would really help clarify a lot of things.</p>
the_candyman
51,370
<p>(ii) There exists an $M$ such that for each real $x$, $|f(x)| \leq M$.</p> <p>(iii) For each real $x$ there exists an $M$ such that $|f(x)| \leq M$.</p> <p>In the first case, a single $M$ works for all $x$.</p> <p>In the second case, $M$ may depend on $x$. In this sense, we can restate (iii) as follows:</p> <p>(iii) For each real $x$ there exists an $M(x)$ such that $|f(x)| \leq M(x)$.</p> <p>The function is bounded if there is an $M$ such that for all $x$ you have $|f(x)| \leq M$. Then the solution is.... (ii)! </p> <p><strong>Example</strong></p> <p>Consider $f(x) = x^2+1$, which is clearly unbounded. </p> <p>(ii) is not satisfied, while (iii) is satisfied. Indeed, for all $x$, take for example $M(x) = x^2+2$. Then:</p> <p>$$|f(x)| = |x^2 + 1| = x^2 + 1 \leq x^2 +2 = M(x)$$</p>
1,715,032
<p>When given propositions to prove such as the following question: prove that $|z+i| = |z-i|$ if $z \in \mathbb{R}$.</p> <p>Would I have to prove this proposition without substituting $z$ for a complex number?</p>
Bernard
202,857
<p>Quite simple: it is enough to prove $\lvert z+ \mathrm i\rvert^2=\lvert z- \mathrm i\rvert^2$. Now, since $z$ is real, we have $\lvert z+ \mathrm i\rvert^2= (z+ \mathrm i) (\overline{z+\mathrm i})=(z+ \mathrm i)(z-\mathrm i) $, while $\lvert z-\mathrm i\rvert^2=(z-\mathrm i)(z+\mathrm i) $.</p>
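As a quick numeric sanity check of the identity (not needed for the proof, purely illustrative), Python's complex numbers agree: for real $z$, both moduli equal $\sqrt{z^2+1}$.

```python
import random

# For real z, |z + i| and |z - i| are both sqrt(z^2 + 1), so they must match.
rng = random.Random(1)
all_equal = all(abs(complex(z, 1)) == abs(complex(z, -1))
                for z in (rng.uniform(-1e6, 1e6) for _ in range(1000)))
```

The comparison is exact here because `abs` on a complex number is computed from the absolute values of the real and imaginary parts.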
196,002
<p>The paper <a href="http://www.sciencedirect.com/science/article/pii/S002212369690110X" rel="nofollow noreferrer">Lattices of Intermediate Subfactors</a> of Y. Watatani, received on December 1994, finishes by: </p> <blockquote> <p><strong>Prop. 6.2.</strong> $ \ $ Any finite lattice with at most five elements can be realized as an intermediate subfactor lattice.</p> </blockquote> <p>In fact he has investigated all the lattices with at most six elements, and they can be realized as an intermediate subfactor lattice, except the following two lattices for which he didn't know:</p> <p><img src="https://i.stack.imgur.com/RpIjp.png" alt="enter image description here"> </p> <p><strong>Question</strong>: Can any finite lattice with at most six elements be realized as an intermediate subfactor lattice?<br> In others words: Can $L_{19}$ and $L_{20}$ be realized as intermediate subfactor lattices?</p> <p>Today is 20 years after this paper of Y. Watatani, and perhaps subfactors realizing these lattices has been found or perhaps we now know how to prove they don't exist.<br> Of course, if they exist we should ask the same question for seven elements, eight elements... and finally:<br> Can any finite lattice be realized as an intermediate subfactor lattice?<br> We've sketched a planar algebraic approach for this question in the optional part <a href="https://mathoverflow.net/q/195806/34538">here</a>, but we don't know if the skein theory is practicable or not in these cases. </p>
Carlo Beenakker
11,260
<p>A simple upper bound is $(\sqrt{a_{\rm max}}-\sqrt{a_{\rm min}})^2$, with $a_{\rm max}$ and $a_{\rm min}$ the largest and smallest of the $a_i$'s. So for $a_i=a_1+(i-1)\varepsilon$ this would give as upper bound $(\sqrt{a_1+(n-1)\varepsilon}-\sqrt{a_1})^2$.</p> <p>See Theorem 1 of <a href="http://www.ams.org/journals/mcom/1984-42-165/S0025-5718-1984-0725994-5/S0025-5718-1984-0725994-5.pdf" rel="nofollow noreferrer">Some Inequalities for Elementary Mean Values</a>, B. Meyker (1984).</p>
4,216,243
<p>I have to solve the following problem, which seems difficult:</p> <p>Find <span class="math-container">$$ \iint_S \nabla \times F\ dS $$</span> where <span class="math-container">$S$</span> is given by</p> <p><span class="math-container">$$r(t,s)=\left( 9+(\cos t)(\sin s)\left(2+\frac{\sin (5s)}{2}\right), \ \ \ 9+(\cos t)(\cos s)\left(2+\frac{\sin (5s)}{2}\right), \ \ \ 9+\frac{\sin t}{3}\left(2+\frac{\sin (5s)}{2}\right) \right) $$</span></p> <p>where <span class="math-container">$0\leq t\leq 2\pi$</span>, <span class="math-container">$\ \ $</span> <span class="math-container">$0\leq s\leq \pi$</span>, <span class="math-container">$\ \ \ $</span> and <span class="math-container">$F:=(z,0,y)$</span>.</p> <p>I'm not sure how to proceed; any help is appreciated.</p> <p>Should I use the Gauss divergence theorem?</p> <p>When I plotted <span class="math-container">$S$</span> in Wolfram Alpha (not sure why it is different from the answer below): <a href="https://www.wolframalpha.com/input/?i=%289%2B%28cos+t%29%28sin+s%29%282%2Bsin+%285s%29%2F%282%29%29%2C++9%2B%28cos+t%29%28cos+s%29%282%2Bsin+%285s%29%2F2%29%2C+9%2B%28%28sin+t%29%2F3%29%282%2Bsin+%285s%29%2F2%29+%29%2C+0%3C%3D+s%3C%3D+pi%2C+0%3C%3Dt%3C%3D+2pi" rel="nofollow noreferrer">https://www.wolframalpha.com/input/?i=%289%2B%28cos+t%29%28sin+s%29%282%2Bsin+%285s%29%2F%282%29%29%2C++9%2B%28cos+t%29%28cos+s%29%282%2Bsin+%285s%29%2F2%29%2C+9%2B%28%28sin+t%29%2F3%29%282%2Bsin+%285s%29%2F2%29+%29%2C+0%3C%3D+s%3C%3D+pi%2C+0%3C%3Dt%3C%3D+2pi</a></p> <p><a href="https://i.stack.imgur.com/XZFNJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XZFNJ.png" alt="enter image description here" /></a></p>
Michael Seifert
248,639
<p>For future reference, we define <span class="math-container">$$ \vec{r}(t,s) \equiv \left( x(t,s), y(t,s), z(t,s) \right) $$</span></p> <p>By Stokes' Theorem, the given integral is <span class="math-container">$$ \iint_S \nabla \times F\ dS = \oint_{\partial S} F \cdot d\ell $$</span> where <span class="math-container">$\partial S$</span> is the boundary of <span class="math-container">$S$</span>. This boundary can be broken up into four separate parametric curves:</p> <ol> <li><span class="math-container">$\vec{r}(t, 0)$</span>, for <span class="math-container">$t$</span> running from 0 to 2π;</li> <li><span class="math-container">$\vec{r}(2\pi, s)$</span>, for <span class="math-container">$s$</span> running from 0 to π;</li> <li><span class="math-container">$\vec{r}(t,\pi)$</span>, for <span class="math-container">$t$</span> running from 2π to 0; and</li> <li><span class="math-container">$\vec{r}(0,s)$</span>, for <span class="math-container">$s$</span> running from π to 0.</li> </ol> <p>The first integral is (setting <span class="math-container">$s = 0$</span> here) <span class="math-container">\begin{align*} I_1 &amp;= \int_0^{2\pi} \left( F \cdot \frac{\partial \vec{r}}{\partial t}\right) dt \\ &amp;= \int_0^{2 \pi} \left(z \frac{\partial x}{\partial t} + y \frac{\partial z}{\partial t} \right) dt \\ &amp;= \int_0^{2 \pi} \left[ \left(9 + \frac{2}{3}\sin t \right)(0) + \left( 9 + 2 \cos t \right)\left( \frac{2}{3} \cos t \right) \right] \, dt \\ &amp;= \frac{4}{3} \int_0^{2 \pi} \cos^2 t = \frac{4\pi}{3}. 
\end{align*}</span> Meanwhile, the third integral is (setting <span class="math-container">$s = \pi$</span> here) <span class="math-container">\begin{align*} I_3 &amp;= \int^0_{2\pi} \left( F \cdot \frac{\partial \vec{r}}{\partial t}\right) dt \\ &amp;= \int^0_{2\pi} \left(z \frac{\partial x}{\partial t} + y \frac{\partial z}{\partial t} \right) dt \\ &amp;= \int^0_{2\pi} \left[ \left(9 + \frac{2}{3}\sin t \right)(0) + \left( 9 - 2 \cos t \right)\left( \frac{2}{3} \cos t \right) \right] \, dt \\ &amp;= -\frac{4}{3} \int^0_{2\pi} \cos^2 t = \frac{4\pi}{3}. \end{align*}</span> Using similar techniques, one can show that the second and fourth integrals vanish. Thus, the total integral is <span class="math-container">$$ \boxed{ \iint_S \nabla \times F\ dS = \frac{8\pi}{3}.} $$</span></p> <p>But wait! <a href="https://math.stackexchange.com/a/4216304/248639">rebo79's answer</a> seems to show that this is a closed surface, so why is this giving us a non-zero answer? The answer seems to be that the parametrization of the surface leads to an inconsistent orientation along the boundary curves. We can see this by calculating the surface normals <span class="math-container">$$ \hat{n} = \frac{\partial \vec{r}}{\partial t} \times \frac{\partial \vec{r}}{\partial s} $$</span> and plotting them at several points:</p> <p><a href="https://i.stack.imgur.com/C3fiw.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C3fiw.png" alt="enter image description here" /></a></p> <p>We can see that the surface orientation is not consistent, and thus we cannot apply Gauss's Law to get the integral over <span class="math-container">$S$</span>: the normal of <span class="math-container">$S$</span>, as parametrized, is not always the outwards-pointing normal to the region &quot;bounded&quot; by <span class="math-container">$S$</span>. 
The problematic points seem to be when <span class="math-container">$t = \pi/2$</span> and <span class="math-container">$t = 3 \pi/2$</span>; at these points, the surface intersects itself. It can also be shown that curves 1 &amp; 3 are the same ellipse in the <span class="math-container">$yz$</span>-plane <em>traversed in the same direction</em>, so their contributions to the boundary integral reinforce rather than cancel. Curves 2 &amp; 4 are similarly the same arc traversed in the same direction, so their contributions would also reinforce rather than cancel; but both contributions happen to be zero for this particular choice of <span class="math-container">$F$</span>.</p> <p>Finally, note that if the parametrization of the surface had been <span class="math-container">$t \in [-\pi/2, \pi/2]$</span> and <span class="math-container">$s \in [0, 2 \pi]$</span>, this problem would not have arisen. In this case, two of the &quot;boundary curves&quot; would have been closed loops running along line segments parallel to the <span class="math-container">$z$</span>-axis (and so their integrals would have vanished), while the other two would have been curves running along a &quot;line of latitude&quot; in opposite directions (and so their integrals would have cancelled). I suspect that there is either some kind of transcription error involved, or an instructor decided to tweak things around without thinking too carefully about the consequences, because this problem is quite devious as it stands.</p>
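The boundary computation above can be cross-checked numerically. The sketch below is my own verification (not from the original answer): it evaluates the four boundary line integrals with a trapezoidal rule, approximating the derivatives of the parametrization by central differences, and the total should land near $8\pi/3 \approx 8.3776$.

```python
import math

def r(t, s):
    # the given parametrization of S
    b = 2 + math.sin(5 * s) / 2
    return (9 + math.cos(t) * math.sin(s) * b,
            9 + math.cos(t) * math.cos(s) * b,
            9 + math.sin(t) / 3 * b)

def F(p):
    x, y, z = p
    return (z, 0.0, y)

def line_integral(curve, a, b, n=20000):
    """Trapezoidal rule for the integral of F . dr along u -> r(curve(u)), u from a to b."""
    h = (b - a) / n
    eps = 1e-6

    def integrand(u):
        p1 = r(*curve(u - eps))
        p2 = r(*curve(u + eps))
        dr = [(q2 - q1) / (2 * eps) for q1, q2 in zip(p1, p2)]  # central difference
        return sum(fi * di for fi, di in zip(F(r(*curve(u))), dr))

    total = (integrand(a) + integrand(b)) / 2
    total += sum(integrand(a + i * h) for i in range(1, n))
    return total * h

pi = math.pi
total = (line_integral(lambda u: (u, 0.0), 0.0, 2 * pi)    # curve 1: s = 0,  t: 0 -> 2pi
         + line_integral(lambda u: (2 * pi, u), 0.0, pi)   # curve 2: t = 2pi, s: 0 -> pi
         + line_integral(lambda u: (u, pi), 2 * pi, 0.0)   # curve 3: s = pi, t: 2pi -> 0
         + line_integral(lambda u: (0.0, u), pi, 0.0))     # curve 4: t = 0,  s: pi -> 0
print(total, "vs", 8 * pi / 3)
```

Curves 1 and 3 each contribute about $4\pi/3$, and curves 2 and 4 contribute essentially zero, matching the computation above.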
2,812,122
<blockquote> <p>For any real numbers $x$ and $y$ satisfying $x^2y + 6y = xy^3 +5x^2 +2x$, it is known that $$(x^2 + 2xy + 3y^2) \, f(x,y) = (4x^2 + 5xy + 6y^2) \, g(x,y)$$<br> Given that $g(0,0) = 6$, find the value of $f(0,0)$.</p> </blockquote> <p>I have tried expressing $f(x,y)$ in terms of $g(x,y)$, but it seems some further trick is needed to make progress. Can anyone figure out the expression?</p>
Jacob.Lee
142,729
<p>This formula comes from the theory of series multisection. Concretely, a multisection of the series of an analytic function</p> <p><span class="math-container">$$f(z) = \sum_{n=0}^\infty a_n\cdot z^n$$</span></p> <p>has a closed-form expression in terms of the function <span class="math-container">$f(z)$</span>:</p> <p><span class="math-container">$$\sum_{m=0}^\infty a_{qm+p}\cdot z^{qm+p} = \frac{1}{q}\cdot \sum_{k=0}^{q-1} \omega^{-kp}\cdot f(\omega^k\cdot z),$$</span></p> <p>where <span class="math-container">$\omega = e^{\frac{2\pi i}{q}}$</span> is a primitive <span class="math-container">$q$</span>-th root of unity. This technique was first discovered by Thomas Simpson.</p>
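The multisection formula is easy to verify numerically; the snippet below is my own illustration (not from the answer), using $f(z) = e^z$ with $q = 3$, $p = 1$:

```python
import cmath
from math import factorial

def multisection(f, z, q, p):
    # (1/q) * sum_k w^{-k p} * f(w^k z), with w = exp(2*pi*i/q)
    w = cmath.exp(2j * cmath.pi / q)
    return sum(w ** (-k * p) * f(w ** k * z) for k in range(q)) / q

# Compare with the terms of exp(z) whose index is congruent to 1 mod 3.
z, q, p = 0.7, 3, 1
direct = sum(z ** (q * m + p) / factorial(q * m + p) for m in range(20))
via_formula = multisection(cmath.exp, z, q, p)
assert abs(direct - via_formula) < 1e-12
print(via_formula.real)
```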
313,254
<p>I know this question has been asked before on MO and MSE (<a href="https://mathoverflow.net/questions/59605/reference-in-riemann-surfaces">here</a>, <a href="https://math.stackexchange.com/questions/407004/good-book-for-riemann-surfaces">here</a>, <a href="https://math.stackexchange.com/questions/1839673/books-on-riemann-surfaces">here</a>, <a href="https://math.stackexchange.com/questions/200537/complex-analysis-book-with-a-view-toward-riemann-surfaces">here</a>) but the answers that were given were only partially helpful to me, and I suspect that I am not the only one.</p> <p>I am about to teach a first course on Riemann surfaces, and I am trying to get a fairly comprehensive view of the main references, as a support for both myself and students.</p> <p>I compiled a list, here goes in alphabetical order. Of course, it is necessarily subjective. For more detailed entries, I made a bibliography using the bibtex entries from MathSciNet: <a href="https://www.brice.loustau.eu/teaching/RiemannSurfaces2018/References.pdf" rel="noreferrer">click here</a>.</p> <ol> <li>Bobenko. Introduction to compact Riemann surfaces. </li> <li>Bost. Introduction to compact Riemann surfaces, Jacobians, and abelian varieties.</li> <li>de Saint-Gervais. Uniformisation des surfaces de Riemann: retour sur un théorème centenaire.</li> <li>Donaldson. Riemann surfaces.</li> <li>Farkas and Kra. Riemann surfaces.</li> <li>Forster. Lectures on Riemann surfaces.</li> <li>Griffiths. Introduction to algebraic curves.</li> <li>Gunning. Lectures on Riemann surfaces.</li> <li>Jost. Compact Riemann surfaces.</li> <li>Kirwan. Complex algebraic curves.</li> <li>McMullen. Complex analysis on Riemann surfaces.</li> <li>McMullen. Riemann surfaces, dynamics and geometry.</li> <li>Miranda. Algebraic curves and Riemann surfaces.</li> <li>Narasimhan. Compact Riemann surfaces.</li> <li>Narasimhan and Nievergelt. Complex analysis in one variable.</li> <li>Reyssat. 
Quelques aspects des surfaces de Riemann.</li> <li>Springer. Introduction to Riemann surfaces.</li> <li>Varolin. Riemann surfaces by way of complex analytic geometry.</li> <li>Weyl. The concept of a Riemann surface.</li> </ol> <p>Having a good sense of what each of these books does, beyond a superficial first impression, is quite a colossal task (at least for me).</p> <p>What I'm hoping is that if you know very well such or such reference in the list, you can give a short description of it: where it stands in the existing literature, what approach/viewpoint is adopted, what are its benefits and pitfalls. Of course, I am also happy to update the list with new references, especially if I missed some major ones.</p> <p>As an example, for Forster's book (5.) I can just use the accepted answer <a href="https://math.stackexchange.com/questions/407004/good-book-for-riemann-surfaces">there</a>: According to <a href="https://math.stackexchange.com/users/71348/ted-shifrin">Ted Shifrin</a>:</p> <blockquote> <p>It is extremely well-written, but definitely more analytic in flavor. In particular, it includes pretty much all the analysis to prove finite-dimensionality of sheaf cohomology on a compact Riemann surface. It also deals quite a bit with non-compact Riemann surfaces, but does include standard material on Abel's Theorem, the Abel-Jacobi map, etc.</p> </blockquote>
Balazs
6,107
<p>I am copying this here from the official CUP website, so I don't think I am breaching anyone's copyright: a short review of one of my old favourites; seems to address precisely the points you are interested in. There should be a new edition out soon (?). </p> <p>BEARDON, A. J. A primer on Riemann surfaces (London Mathematical Society Lecture Note Series 78, Cambridge University Press, 1984), 188 pp. Graduate or advanced undergraduate students frequently encounter Riemann surfaces as a section in a second course in complex analysis or a chapter in an advanced text in complex analysis. To proceed further, they must then reach for one of a number of advanced texts on Riemann surfaces e.g. those by Ahlfors and Sario, Weyl, Forster, Springer (now sadly out of print), Gunning, Farkas and Kra. The book under review has less grand objectives than these books and aims to fill the gap by providing a leisurely and elementary introduction to Riemann surfaces. Riemann surfaces are introduced initially in the abstract, free from connections with analytic functions. The flavour throughout is geometrical and for example, a chapter is devoted to automorphisms of the disc, plane and Riemann sphere. The connection with analytic functions is later discussed along with details on covering spaces. The penultimate chapter contains a nice introduction to harmonic and subharmonic functions, Dirichlet's problem and Green's functions. This enables the author in the final chapter to achieve his goal of proving the Riemann mapping theorem and the Uniformization theorem and discussing their geometrical significance. The title aptly describes the nature of the book and it will suit those students whose requirements do not extend to the deeper texts on Riemann surfaces. Its only competitor with these limited objectives is perhaps the much-less-widely-available Rice University Notes by B. F. Jones and so it should be a useful addition to this L.M.S. series of notes.</p>
330,526
<p>Let <span class="math-container">$\tau&gt;0$</span>, and let <span class="math-container">$T\in \mathcal{D}'(\mathbb{R})$</span> be a <span class="math-container">$\tau$</span>-periodic distribution (that is, <span class="math-container">$ \langle T, \varphi(\cdot+\tau)\rangle= \langle T,\varphi\rangle $</span> for all <span class="math-container">$\varphi \in \mathcal{D}(\mathbb{R})$</span>). Then <span class="math-container">$$ T=\sum_{n\in \mathbb{Z}} c_n e^{i 2\pi n t/\tau}, $$</span> for some <span class="math-container">$c_n\in \mathbb{C}$</span>, and where the equality means that the symmetric partial sums of the series on the right hand side converge in <span class="math-container">$\mathcal{D}'(\mathbb{R})$</span> to <span class="math-container">$T$</span>. What are the <span class="math-container">$c_n$</span>s in terms of <span class="math-container">$T$</span>? One would think that they are given by <span class="math-container">$c_n=\langle T|_{(0,\tau)}, e^{-i 2\pi n t/\tau}\rangle/\tau$</span>, but <span class="math-container">$e^{-i 2\pi n t/\tau}$</span> is not a test function in <span class="math-container">$\mathcal{D}((0,\tau))$</span>. </p>
user44191
44,191
<p><span class="math-container">$\DeclareMathOperator\deg{deg}\DeclareMathOperator\dim{dim}\DeclareMathOperator\span{span}$</span> I decided to split my answer into a more direct "answer" post and a "tidbits" post. This is the "tidbits" post, where I point out some things I determined about <span class="math-container">$S$</span> that may help with other analysis. </p> <p><span class="math-container">$S$</span> inherits a grading <span class="math-container">$S = \bigoplus_{k = 0}^\infty S^k$</span> from <span class="math-container">$TV$</span>, where <span class="math-container">$\deg(x) = \deg(y) = \deg(z) = 1$</span>, as the quotient of a graded ring by a homogeneous ideal. </p> <blockquote> <p><strong>Claim</strong>: <span class="math-container">$S^k = q(\sum_{\ell = 0}^k \left( T^\ell\span(x, y) z^{k - \ell}\right))$</span>, where <span class="math-container">$\span(x, y) \subseteq V$</span> is the vector space spanned by <span class="math-container">$x$</span>, <span class="math-container">$y$</span>; <span class="math-container">$T \span(x, y)$</span> is its tensor algebra; and <span class="math-container">$T^k$</span> denotes the <span class="math-container">$k$</span>th graded component of the tensor algebra. </p> <p><strong>Conjecture</strong>: Further, <span class="math-container">$q$</span> is an isomorphism. Equivalently, <span class="math-container">$\langle xy + yz + zx, yx + zy + xz\rangle \cap \sum_{\ell = 0}^k \left( T^\ell\span(x, y) z^{k - \ell}\right) = \{0\}$</span>.</p> <p>Less formally, this is saying that every element of <span class="math-container">$S^k$</span> has a "normal representative" in <span class="math-container">$TV$</span> such that each monomial only has <span class="math-container">$z$</span> at the ends, with no <span class="math-container">$x$</span> or <span class="math-container">$y$</span> after a <span class="math-container">$z$</span>. The conjecture is that this representative is unique. 
</p> <p><strong>Proof</strong>: We work by induction. If <span class="math-container">$k = 0$</span>, this is trivial. Otherwise, let <span class="math-container">$s \in S^k$</span>, with some representative <span class="math-container">$t \in T^kV$</span>. By the definition of <span class="math-container">$TV$</span>, we have that <span class="math-container">$t = x C_x + y C_y + z C_z$</span> for some <span class="math-container">$C_x, C_y, C_z \in T^{k - 1}V$</span>. Then <span class="math-container">$s = p(x) p(C_x) + p(y) p(C_y) + p(z) p(C_z)$</span>. By the induction step, <span class="math-container">$p(C_x)$</span>, <span class="math-container">$p(C_y)$</span>, <span class="math-container">$p(C_z)$</span> have such "normal representatives" <span class="math-container">$C'_x$</span>, <span class="math-container">$C'_y$</span>, <span class="math-container">$C'_z$</span>. Write <span class="math-container">$C'_z = x D_x + y D_y + z D_z$</span>. By the definition of "normal representative", we have that <span class="math-container">$D_z$</span> consists only of linear combinations of strings of <span class="math-container">$z$</span>s. Let <span class="math-container">$E_x$</span> be the "normal representative" of <span class="math-container">$z D_x$</span> and <span class="math-container">$E_y$</span> be the "normal representative" of <span class="math-container">$z D_y$</span>. 
Then:</p> <p><span class="math-container">\begin{align*} s &amp;= p(x) p(C_x) + p(y) p(C_y) + p(z) p(C_z) \\ &amp;= p(x) p(C_x) + p(y) p(C_y) + p(z) p(x) p(D_x) + p(z) p(y) p(D_y) + p(z) p(z) p(D_z) \\ &amp;= p(x) p(C_x) + p(y) p(C_y) - p(x) p(y) p(D_x) - p(y) p(z) p(D_x) - p(x) p(z) p(D_y) - p(y) p(x) p(D_y) + p(z) p(z) p(D_z) \\ &amp; = p(x) p(C_x) + p(y) p(C_y) - p(x) p(y) p(D_x) - p(y) p(E_x) - p(x) p(E_y) - p(y) p(x) p(D_y) + p(z) p(z) p(D_z) \end{align*}</span> Clearly, <span class="math-container">$x C_x + y C_y - x y D_x - y E_x - x E_y - y x D_y + z z D_z$</span> is a normal representation of <span class="math-container">$s$</span>. <span class="math-container">$\square$</span></p> <p>Uniqueness shouldn't be too hard to prove by induction, but I haven't worked out the exact "trick". The idea is that if <span class="math-container">$t, t'$</span> are distinct "normal representatives" of <span class="math-container">$s$</span>, then <span class="math-container">$t - t'$</span> is a nonzero "normal representative" of <span class="math-container">$0$</span>, so we can assume WLOG that <span class="math-container">$t'$</span> is the obvious representation of <span class="math-container">$0$</span>. Then <span class="math-container">$t = x C_x + y C_y + z C_z$</span> for some <span class="math-container">$C_x, C_y, C_z \in T^{k - 1}V$</span>, where <span class="math-container">$C_x$</span>, <span class="math-container">$C_y$</span> are "normal representatives" and <span class="math-container">$C_z = a z^{k - 1}$</span> for some scalar <span class="math-container">$a$</span>. But if <span class="math-container">$C_z \neq 0$</span>, then <span class="math-container">$t$</span> can't be in the ideal (as all of the monomials in the ideal contain some non-<span class="math-container">$z$</span> letter), so <span class="math-container">$C_z = 0$</span>.
There should be a relatively simple demonstration then that <span class="math-container">$C_x = C_y = 0$</span>, which would finish the proof.</p> <p><strong>Corollary of conjecture</strong>: <span class="math-container">$\dim(S^k) = 2^{k + 1} - 1$</span>. </p> <p>I've done some minor independent checking (using the ideal and inclusion-exclusion), and this seems to hold at least up to <span class="math-container">$k = 5$</span>, if my calculations are correct.</p> </blockquote> <p>So we have bounded the growth of <span class="math-container">$S^k$</span> (and found its dimension exactly if the conjecture is correct). This should help if there is an analogue of the Peter-Weyl theorem. </p> <hr> <p>Final tidbit: it may be useful to look at <span class="math-container">$S$</span> as being "noncommutatively graded" over <span class="math-container">$S_3$</span>, with <span class="math-container">$\text{"deg"}(x) = (12), \text{"deg"}(y) = (23), \text{"deg"}(z) = (13)$</span>. This can give us some idea of likely "useful elements" to consider: <span class="math-container">$xyxyxy + yzyzyz + zxzxzx$</span> should be interesting. </p>
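The dimension count $2^{k+1}-1$ can be sanity-checked by brute force: the normal monomials of degree $k$ are exactly the words over $\{x,y,z\}$ in which no $x$ or $y$ appears after a $z$. The enumeration below (my own check of the count of normal forms, not of the conjecture itself) confirms the formula for small $k$:

```python
from itertools import product

def is_normal(w):
    # normal form: no x or y may appear after a z
    seen_z = False
    for c in w:
        if c == "z":
            seen_z = True
        elif seen_z:
            return False
    return True

for k in range(9):
    count = sum(is_normal(w) for w in product("xyz", repeat=k))
    assert count == 2 ** (k + 1) - 1, (k, count)
print("normal-word count matches 2^(k+1) - 1 for k = 0..8")
```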
2,460,195
<p>I had the following question:</p> <p>Three actors are to be chosen out of five — Jack, Steve, Elad, Suzy, and Ali. What is the probability that Jack and Steve would be chosen, but Suzy would be left out?</p> <p>The answer given was: Total Number of actors = $5$; Since Jack and Steve need to be in the selection and Suzy is to be left out, only one selection matters. Number of actors apart from Jack, Steve, and Suzy = $2$; Probability of choosing 3 actors including Jack and Steve, but not Suzy = $$\frac{C(2,1)}{C(5,3)} = \frac{1}{5}$$</p> <p>I do not understand the answer. What do they mean by "only one selection matters"? It looks like they are choosing $1$ person from $2$ combinations? Why? Can anyone please explain this.</p> <p>Thanks</p>
Peter
82,961
<p>For $x=0$ , the possible values of $y$ are $0$ to $16$, so $17$ possible values.</p> <p>For $x=1$ , the possible values of $y$ are $0$ to $14$, so $15$ possible values.</p> <p>$\cdots$</p> <p>For $x=8$ , the possible values of $y$ are $0$ to $0$, so $1$ possible value.</p> <p>In total , there are $$1+3+5+7+9+11+13+15+17=\color\red{81}$$ distinct triples.</p>
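The count can be reproduced programmatically. From the per-$x$ ranges listed, the underlying condition appears to be pairs of non-negative integers with $2x + y \le 16$ (an assumption inferred from the answer, since the original problem statement is not shown here):

```python
# Count non-negative integer pairs (x, y) with 2x + y <= 16
# (condition inferred from the per-x ranges in the answer above).
count = sum(1 for x in range(9) for y in range(17 - 2 * x))
assert count == sum(range(1, 18, 2))  # 1 + 3 + 5 + ... + 17
print(count)  # 81
```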
677,708
<p>Haven't done this for a long time, just want to know if this is the right method for a really simple example. Say we have two (obviously equal) sets $$A= \Big\{\begin{bmatrix}a &amp; b\\c &amp; d\end{bmatrix} : a,b,c,d \in \Bbb R, a+b=c \Big\}$$ $$B= \Big\{\begin{bmatrix}e &amp; f\\g &amp; h\end{bmatrix} : e,f,g,h \in \Bbb R, e+f=g \Big\}$$</p> <p>Prove $A\subseteq B$.</p> <p>Let $a\in A$, such that $a=\begin{bmatrix}a &amp; b\\c &amp; d\end{bmatrix}$. Choose $a=e$, $b=f$, $c=g$ and $d=h$. Then $a=\begin{bmatrix}e &amp; f\\g &amp; h\end{bmatrix}$ where $e+f=g$, hence $a\in B$ and $A\subseteq B$. </p> <p>Thanks! </p> <p>Edit: I chose an obviously equal example on purpose. It was a question about the method of proving rather than the actual example. </p>
Daniel Muñoz Parsapoormoghadam
123,645
<p>Since, as you say, it's obvious that $A = B$, it's obvious that $A \subseteq B$.</p>
677,708
<p>Haven't done this for a long time, just want to know if this is the right method for a really simple example. Say we have two (obviously equal) sets $$A= \Big\{\begin{bmatrix}a &amp; b\\c &amp; d\end{bmatrix} : a,b,c,d \in \Bbb R, a+b=c \Big\}$$ $$B= \Big\{\begin{bmatrix}e &amp; f\\g &amp; h\end{bmatrix} : e,f,g,h \in \Bbb R, e+f=g \Big\}$$</p> <p>Prove $A\subseteq B$.</p> <p>Let $a\in A$, such that $a=\begin{bmatrix}a &amp; b\\c &amp; d\end{bmatrix}$. Choose $a=e$, $b=f$, $c=g$ and $d=h$. Then $a=\begin{bmatrix}e &amp; f\\g &amp; h\end{bmatrix}$ where $e+f=g$, hence $a\in B$ and $A\subseteq B$. </p> <p>Thanks! </p> <p>Edit: I chose an obviously equal example on purpose. It was a question about the method of proving rather than the actual example. </p>
Ali Caglayan
87,191
<p>It is by definition of two sets being equal that $$A=B\iff A\subseteq B\land B\subseteq A$$</p> <p>Here is a simple proof for your sake:</p> <p>$$A= \Big\{\begin{bmatrix}a &amp; b\\c &amp; d\end{bmatrix} : a,b,c,d \in \Bbb R, a+b=c \Big\}$$ $$B= \Big\{\begin{bmatrix}e &amp; f\\g &amp; h\end{bmatrix} : e,f,g,h \in \Bbb R, e+f=g \Big\}$$</p> <p>After a change of variables... $$A= \Big\{\begin{bmatrix}\alpha_1 &amp; \alpha_2\\\alpha_3 &amp; \alpha_4\end{bmatrix} : \alpha_n\in \Bbb R, \alpha_1+\alpha_2=\alpha_3 \Big\}$$ $$B= \Big\{\begin{bmatrix}\alpha_1 &amp; \alpha_2\\\alpha_3 &amp; \alpha_4\end{bmatrix} : \alpha_n\in \Bbb R, \alpha_1+\alpha_2=\alpha_3 \Big\}$$ Therefore $A=B\tag*{$\blacksquare$}$</p>
3,938,951
<p>The problem:</p> <p>Given a group <span class="math-container">$G$</span> of order <span class="math-container">$n$</span>, and a Cayley embedding <span class="math-container">$\phi \ :\ G\to S_{n}$</span>. Prove that an element <span class="math-container">$g\in G$</span> has order <span class="math-container">$m$</span> iff <span class="math-container">$\phi ( g)$</span> is a product of <span class="math-container">$\frac{n}{m}$</span> disjoint cycles of length <span class="math-container">$m$</span>.</p> <p>I was able to prove that if <span class="math-container">$\phi ( g) \ $</span> is a product of <span class="math-container">$\displaystyle \frac{n}{m}$</span> disjoint cycles of length <span class="math-container">$m$</span>, <em>then</em> the order of <span class="math-container">$g$</span> is <span class="math-container">$m$</span>. This was fairly straightforward given the fact that:</p> <blockquote> <p>The order of a product of disjoint cycles is the <span class="math-container">$\operatorname{lcm}$</span> of the lengths of the cycles.</p> </blockquote> <p>In our case the <span class="math-container">$\operatorname{lcm}$</span> is obviously <span class="math-container">$m$</span>, so it was easy to prove from here that the order of <span class="math-container">$g$</span> is <span class="math-container">$m$</span>.</p> <p>The other direction, namely that if the order of <span class="math-container">$g$</span> is <span class="math-container">$m$</span> then <span class="math-container">$\phi ( g)$</span> is a product of <span class="math-container">$\frac{n}{m}$</span> disjoint cycles of length <span class="math-container">$m$</span>, is what I'm struggling to prove.</p> <p>I also want to admit that my understanding of Cayley embeddings, and of Cayley's theorem in general, is very poor. What I know is simply that it's an injective homomorphism, not much more.</p> <p>Any help?</p>
Arturo Magidin
742
<p>If <span class="math-container">$g$</span> has order <span class="math-container">$m$</span>, then the orbit of <span class="math-container">$e$</span> under multiplication by <span class="math-container">$g$</span> is <span class="math-container">$$e\mapsto g\mapsto g^2\mapsto g^3\mapsto\cdots\mapsto g^{m-1}\mapsto g^m=e.$$</span> That is, the cycle that contains <span class="math-container">$e$</span> is <span class="math-container">$(e,g,g^2,\ldots,g^{m-1})$</span>.</p> <p>Now, how about any other orbit/cycle? If we have <span class="math-container">$x\in G$</span>, then the orbit of <span class="math-container">$x$</span> is given by <span class="math-container">$x\mapsto gx\mapsto g^2x\mapsto\cdots$</span> etc. What is the first repeat? If <span class="math-container">$g^ix = g^jx$</span>, then <span class="math-container">$g^i=g^j$</span>, so <span class="math-container">$i\equiv j\pmod{m}$</span>. So the first repeat is <span class="math-container">$g^mx=x$</span>. So the cycle/orbit of <span class="math-container">$x$</span> is <span class="math-container">$(x,gx,g^2x,\ldots,g^{m-1}x)$</span>.</p>
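A concrete illustration of this (my own addition): in the additive group $\mathbb{Z}_{12}$, the Cayley permutation of $g$ is $x \mapsto x + g \pmod{12}$, and its cycle type should always be $n/m$ copies of an $m$-cycle, where $m$ is the order of $g$:

```python
def cayley_cycle_lengths(n, g):
    # cycle type of the permutation x -> (x + g) mod n (Cayley embedding of Z_n)
    seen, lengths = set(), []
    for start in range(n):
        if start in seen:
            continue
        x, length = start, 0
        while x not in seen:
            seen.add(x)
            x = (x + g) % n
            length += 1
        lengths.append(length)
    return sorted(lengths)

assert cayley_cycle_lengths(12, 3) == [4, 4, 4]     # g = 3 has order 4: three 4-cycles
assert cayley_cycle_lengths(12, 5) == [12]          # g = 5 has order 12: one 12-cycle
assert cayley_cycle_lengths(12, 8) == [3, 3, 3, 3]  # g = 8 has order 3: four 3-cycles
print("cycle types are n/m copies of length-m cycles, as claimed")
```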
2,965,865
<blockquote> <p>Finding the minimum value of <span class="math-container">$\displaystyle \frac{x^2 +y^2}{y}$</span>, where <span class="math-container">$x,y$</span> are real numbers satisfying <span class="math-container">$7x^2 + 3xy + 3y^2 = 1$</span></p> </blockquote> <p>Try: The equation <span class="math-container">$7x^2+3xy+3y^2=1$</span> represents an ellipse</p> <p>centered at the origin.</p> <p>So substitute <span class="math-container">$x=r\cos \alpha $</span> and <span class="math-container">$y=r\sin \alpha$</span> </p> <p>in <span class="math-container">$7x^2+3xy+3y^2=1$</span>:</p> <p><span class="math-container">$$3r^2+4r^2\cos^2 \alpha+3r^2\sin \alpha \cos \alpha =1$$</span></p> <p><span class="math-container">$$3r^2+2r^2(1+\cos 2 \alpha)+\frac{3r^2}{2}\sin 2 \alpha =1$$</span></p> <p><span class="math-container">$$10r^2+r^2(4\cos 2 \alpha+3\sin 2\alpha)=2$$</span></p> <p>So <span class="math-container">$$r^2=\frac{2}{10+(4\cos 2 \alpha+3\sin 2\alpha)}$$</span></p> <p><span class="math-container">$$\frac{2}{10+5}=\frac{2}{15}\leq r^2\leq \frac{2}{10-5}=\frac{2}{5}$$</span></p> <p>We have to find the minimum of <span class="math-container">$$\frac{x^2+y^2}{y}=\frac{r}{\sin \alpha}$$</span></p> <p>How can I find it from here? Could someone help me?</p>
Cesareo
397,348
<p>Making <span class="math-container">$y = \lambda x$</span> we have</p> <p><span class="math-container">$$ \min f(x,\lambda) \ \ \mbox{s. t. }\ \ g(x,\lambda) = 0 $$</span></p> <p>here </p> <p><span class="math-container">$$ \begin{cases} f(x,\lambda) = \frac{1+\lambda^2}{\lambda}x\\ g(x,\lambda) = x^2(7+3\lambda+3\lambda^2)-1=0 \end{cases} $$</span></p> <p>this minimization problem is equivalent to</p> <p><span class="math-container">$$ \min F(\lambda) = \left(\frac{1+\lambda^2}{\lambda}\right)^2\frac{1}{7+3\lambda+3\lambda^2} $$</span></p> <p>and then</p> <p><span class="math-container">$$ F'(\lambda)= 0\to (1 + \lambda^2) (3 \lambda^3 + 2 \lambda^2- 9 \lambda -14 )=(1 + \lambda^2)(\lambda-2) (7 + 8 \lambda + 3 \lambda^2) = 0 $$</span></p> <p>hence <span class="math-container">$\lambda = 2\to x = \pm \sqrt{\frac{1}{7+3\times 2+3\times 2^2}} = \pm\frac{1}{5}$</span>, and for <span class="math-container">$y&gt;0$</span> the minimum value is <span class="math-container">$\frac{1+2^2}{2}\cdot\frac{1}{5}=\frac{1}{2}$</span>.</p>
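The critical value can be confirmed by a numerical scan (my own check). Note that for $y<0$ the expression can be made arbitrarily negative as $y \to 0^-$ on the constraint, so the scan below restricts to the branch $x>0$, $\lambda>0$ (hence $y>0$), where the minimum should be $\tfrac12$, attained near $\lambda = 2$:

```python
from math import sqrt

def f(lam):
    # value of (x^2 + y^2)/y on the ellipse with y = lam * x, taking x > 0 so y > 0
    x = 1 / sqrt(7 + 3 * lam + 3 * lam ** 2)
    return (1 + lam ** 2) / lam * x

lams = [0.01 + 0.001 * i for i in range(100000)]  # lambda in (0, 100]
best = min(lams, key=f)
print(best, f(best))
```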
4,126,238
<p>Prove <span class="math-container">$f$</span> is uniformly continuous <span class="math-container">$\implies$</span> there exist <span class="math-container">$C, D$</span> such that <span class="math-container">$|f(x)| &lt; C + D|x|$</span>.</p> <p>Proof below. Please verify or critique.</p> <p>By definition of uniform continuity, there exists <span class="math-container">$\delta &gt; 0$</span> such that <span class="math-container">$|x_a - x_b| \leq \delta \implies |f(x_a)- f(x_b)| &lt; 1$</span>. Choose <span class="math-container">$D &gt; 1/\delta$</span> and <span class="math-container">$C &gt; |f(0)| + D + 1$</span>.</p> <p>For any <span class="math-container">$x$</span>, <span class="math-container">$|f(x)| - |f(0)| \leq |f(x) - f(0)| \leq \sum_{0 &lt; j \leq |x/\delta|+1}|f(j\delta) - f((j-1)\delta)| \leq |x/\delta|+1$</span>, so <span class="math-container">$|f(x)| \leq |x/\delta| + 1 + |f(0)| &lt; C + D|x|$</span>.</p>
Gábor Pálovics
922,984
<p>I think what you do is correct. A minor correction: instead of summing over <span class="math-container">$j \in \{i : 0 \le i \le \lfloor|x|/\delta\rfloor+1\}$</span>, you could sum over <span class="math-container">$j \in \{i : 0\le i \cdot \operatorname{sgn}(x) \le \lfloor|x|/\delta\rfloor+1\}$</span>. This way it works for negative <span class="math-container">$x$</span> values as well.</p>
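A small illustration of the bound (my own addition, with constants chosen for a specific example): $f(x)=\sqrt{|x|}$ is uniformly continuous, and $|f(a)-f(b)|\le\sqrt{|a-b|}\le 1$ whenever $|a-b|\le 1$, so one may take $\delta = 1$, then any $D > 1/\delta$ and $C > |f(0)| + D + 1$ as in the proof:

```python
from math import sqrt

# Illustration: f(x) = sqrt(|x|) is uniformly continuous on R; delta = 1
# works for epsilon = 1, so the proof's constants apply with:
f = lambda x: sqrt(abs(x))
delta = 1.0
D = 1.1                        # any D > 1/delta
C = abs(f(0)) + D + 1 + 0.1    # any C > |f(0)| + D + 1

for k in range(-1000, 1001):
    x = k / 7.0
    assert abs(f(x)) < C + D * abs(x)
print("linear bound |f(x)| < C + D|x| holds on all sampled points")
```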
2,442,297
<blockquote> <p>Ryan has been given a salary increase of $7.39\%$. The increase amounts to €$4231$.</p> <p>His salary is now $x$. Solve for $x$.</p> </blockquote> <p>My head is saying </p> <p>$$\begin{align} 4231 / 7.39 &amp;= 572 \\ 572 * 100 &amp;= 57,200 \end{align}$$</p> <p>but I know that is not correct; I am drawing a blank right now.</p> <p>Can anyone help?</p> <p>Thanks.</p>
Théophile
26,091
<p>Suppose that Ryan's initial salary is $s$. Then the value of the salary increase is $0.0739s$, so $0.0739s = 4231$, and we get $s = 57253.04$ (watch out for rounding too soon!).</p> <p>We want to calculate his <em>final</em> salary, which is $$s + 4231.$$</p> <p>In other words, you forgot to add his increase. (And you also rounded off some fifty-three euros.)</p>
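The arithmetic can be written out in a couple of lines (a trivial check of the numbers above):

```python
increase = 4231.0
rate = 0.0739
s = increase / rate      # initial salary: 4231 / 0.0739
final = s + increase     # final salary x = s + 4231
print(round(s, 2), round(final, 2))  # 57253.04 61484.04
```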
754,603
<p>I have two equations:</p> <ol> <li>$x = 2^n (p+i)^{3n}$ </li> <li>$x = 14^n p^{3n} $</li> </ol> <p>Here $n$, $p$, and $i$ are all integers $\geq0$.</p> <p>I worked out (using a spreadsheet) that if $i &gt; 1$ then the value of x in expression 1 is larger than the value of x in expression 2.</p> <p>How can I show this mathematically?</p>
Marc van Leeuwen
18,880
<p>If you look at a row of people lined up in front of a mirror, not only will their mirror images appear in the opposite order (the image of the person closest to the mirror comes first), but also the image of each individual person will be a mirror image (if the person is facing towards the mirror, the mirror image will be facing out of the mirror), just as if the person were alone. The same applies to transposition of a collection of blocks.</p>