4,307,016
<p>Explore the convergence of <span class="math-container">$\sum_{n=3}^{\infty}\frac{1}{n\ln n(\ln \ln n)^\alpha}$</span>.</p> <p>I tried to use the Cauchy integral test, so we need to evaluate</p> <p><span class="math-container">$$\int_{3}^\infty\frac{dx}{x\ln x(\ln \ln x)^\alpha}=\int_{\ln 3}^{\infty}\frac{dz}{z(\ln z)^\alpha}= \int_{\ln (\ln 3)}^{\infty}\frac{du}{u^\alpha}$$</span></p> <p>and I am stuck here. How do I continue?</p> <p>I know that <span class="math-container">$\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^\alpha}$</span> converges when <span class="math-container">$\alpha&gt;1$</span> and diverges when <span class="math-container">$\alpha\leq1$</span>,</p> <p>but here the lower limit of integration is <span class="math-container">$\ln (\ln 3)$</span>, which is not a natural number. Can we draw the same conclusion here, and if so, why?</p>
Mark Viola
218,419
<p><strong>HINT:</strong></p> <p>Note that for <span class="math-container">$\alpha\ne 1$</span> we have</p> <p><span class="math-container">$$\int_3^\infty \frac{1}{x\log(x) \left(\log(\log(x))\right)^\alpha}\,dx=\left.\left(\frac1{(1-\alpha)\left(\log(\log(x))\right)^{\alpha-1}}\right)\right|_3^\infty$$</span></p> <p>If <span class="math-container">$\alpha=1$</span>, then we have</p> <p><span class="math-container">$$\int_3^\infty \frac{1}{x\log(x)\log(\log(x))}\,dx=\Big.\log(\log(\log(x)))\Big|_3^\infty$$</span></p> <p>Can you finish now?</p>
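A quick numeric sanity check of the alpha = 1 case, added here as an illustration (not part of the original hint): by the chain rule, d/dx log(log(log x)) = 1/(x log x log(log x)), so the antiderivative grows without bound and the integral diverges for alpha = 1.

```python
import math

def integrand(x):
    # The alpha = 1 integrand: 1 / (x * log(x) * log(log(x)))
    return 1.0 / (x * math.log(x) * math.log(math.log(x)))

def antiderivative(x):
    # Candidate antiderivative: log(log(log(x)))
    return math.log(math.log(math.log(x)))

# A central finite difference of the antiderivative should match the integrand.
x, h = 10.0, 1e-6
numeric = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
assert abs(numeric - integrand(x)) < 1e-8
print("chain-rule check passed at x =", x)
```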
1,553,354
<p>Help me find an example of a sequence of differentiable functions defined on $[0,1]$ that converges uniformly to a function $f$ on $[0,1]$ for which there exists $x \in (0,1)$ at which $f$ is not differentiable.</p>
Christian Blatter
1,303
<p>Consider the $C^1$-functions $$f_n(x):=\sqrt{{1\over n^2}+x^2}-{1\over n}\qquad(n\geq1)\ .$$ One has $f_n(0)=0$ for all $n$, and $$f_n(x)={x^2\over \sqrt{{1\over n^2}+x^2}+{1\over n}}\to |x|\qquad(n\to\infty)$$ for all $x\ne0$. It follows that $\lim f_n(x)=|x|$ for all $x\in{\mathbb R}$, which is not differentiable at $0$. To get non-differentiability at an interior point of $[0,1]$, as the question asks, simply translate: $g_n(x):=f_n\bigl(x-{1\over2}\bigr)$ converges uniformly to $\bigl|x-{1\over2}\bigr|$, which is not differentiable at $x={1\over2}\in(0,1)$.</p>
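As a numerical illustration (my addition, not part of the answer): since |x| <= sqrt(1/n^2 + x^2) <= |x| + 1/n, one can check that |f_n(x) - |x|| <= 1/n for every x, so the convergence is in fact uniform. A grid check in Python:

```python
import math

def f(n, x):
    # f_n(x) = sqrt(1/n^2 + x^2) - 1/n
    return math.sqrt(1.0 / n**2 + x**2) - 1.0 / n

# The bound |f_n(x) - |x|| <= 1/n holds independently of x, so the
# convergence is uniform. Verify it on a grid in [-1, 1]:
for n in (1, 10, 100):
    grid = [i / 1000.0 for i in range(-1000, 1001)]
    sup_err = max(abs(f(n, x) - abs(x)) for x in grid)
    assert sup_err <= 1.0 / n
print("uniform bound verified for n = 1, 10, 100")
```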
1,162,697
<p>If $f:\mathbb{R} \to \mathbb{R}$ is differentiable with at least two roots, I wish to show that Newton's method will not converge for some $x_0$. </p> <p>I know that $f'(x)$ has a zero, say at $z$. It seems we should choose $x_0$ close to $z$ to ensure that the Newton iterates wander away. But it's hard to say anything more precise without knowing more about $f(x)$...</p>
Understand
214,109
<p>Have a look at this example: $$x^3 - 5x = 0$$ $$x_0=1$$</p>
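To spell out why this example works (my elaboration, not part of the original hint): Newton's method on f(x) = x^3 - 5x started at x_0 = 1 oscillates forever between 1 and -1, so it converges to none of the roots 0, &pm;&radic;5.

```python
def newton_step(x):
    # One Newton iteration for f(x) = x^3 - 5x, with f'(x) = 3x^2 - 5
    return x - (x**3 - 5 * x) / (3 * x**2 - 5)

x = 1.0
orbit = [x]
for _ in range(6):
    x = newton_step(x)
    orbit.append(x)
print(orbit)  # [1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0]
```

The iterates form an exact 2-cycle, so they never approach a root.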
Robert Israel
8,508
<p>If $f'(x_0) = 0$, there is no $x_1$.<br> In the case of a quadratic $f$ with two real roots, that is the only initial point where Newton goes wrong: for all other $x_0$, it converges to one of the roots.</p> <p>EDIT: Somewhat more generally, if $f$ is a convex differentiable function with at least one root, you have convergence to a root from any $x_0$ with $f'(x_0) \ne 0$.</p>
54,506
<p><a href="http://www.hardocp.com/news/2011/07/29/batman_equation/" rel="noreferrer">HardOCP</a> has an image with an equation which apparently draws the Batman logo. Is this for real?</p> <p><img src="https://i.stack.imgur.com/VYKfg.jpg" alt="Batman logo"></p> <p><strong>Batman Equation in text form:</strong> \begin{align} &amp;\left(\left(\frac x7\right)^2\sqrt{\frac{||x|-3|}{|x|-3}}+\left(\frac y3\right)^2\sqrt{\frac{\left|y+\frac{3\sqrt{33}}7\right|}{y+\frac{3\sqrt{33}}7}}-1 \right) \\ &amp;\qquad \qquad \left(\left|\frac x2\right|-\left(\frac{3\sqrt{33}-7}{112}\right)x^2-3+\sqrt{1-(||x|-2|-1)^2}-y \right) \\ &amp;\qquad \qquad \left(3\sqrt{\frac{|(|x|-1)(|x|-.75)|}{(1-|x|)(|x|-.75)}}-8|x|-y\right)\left(3|x|+.75\sqrt{\frac{|(|x|-.75)(|x|-.5)|}{(.75-|x|)(|x|-.5)}}-y \right) \\ &amp;\qquad \qquad \left(2.25\sqrt{\frac{(x-.5)(x+.5)}{(.5-x)(.5+x)}}-y \right) \\ &amp;\qquad \qquad \left(\frac{6\sqrt{10}}7+(1.5-.5|x|)\sqrt{\frac{||x|-1|}{|x|-1}} -\frac{6\sqrt{10}}{14}\sqrt{4-(|x|-1)^2}-y\right)=0 \end{align}</p>
GEdgar
442
<p>Here's what I got from the equation using Maple... </p> <p><img src="https://i.stack.imgur.com/6pXe9.jpg" alt="enter image description here"></p>
Shivam Patel
95,509
<p>Sorry, this is not an answer, but it is too long for a comment. Probably the easiest verification is to type the equation into Google; you'll be surprised. Just Google: 2 sqrt(-abs(abs(x)-1)*abs(3-abs(x))/((abs(x)-1)*(3-abs(x))))(1+abs(abs(x)-3)/(abs(x)-3))sqrt(1-(x/7)^2)+(5+0.97(abs(x-.5)+abs(x+.5))-3(abs(x-.75)+abs(x+.75)))(1+abs(1-abs(x))/(1-abs(x))),-3sqrt(1-(x/7)^2)sqrt(abs(abs(x)-4)/(abs(x)-4)),abs(x/2)-0.0913722(x^2)-3+sqrt(1-(abs(abs(x)-2)-1)^2),(2.71052+(1.5-.5abs(x))-1.35526sqrt(4-(abs(x)-1)^2))sqrt(abs(abs(x)-1)/(abs(x)-1))+0.9</p>
939,509
<p>Is there a proper name for the shape defined by the volume between two concentric spheres? My understanding is that, formally, a "sphere" is strictly a 2D surface, and there's a formal term for the volume contained by that surface -- which I forget.</p> <p>Is there a term that describes the volume between two concentric spheres? That is, colloquially, a "filled-in sphere with a hollow core".</p> <p>Another phrasing would be: what is the name for a three-dimensional annulus? (Or is an annulus not strictly two-dimensional?)</p>
Jonas Meyer
1,424
<p>It's also called an <a href="http://mathworld.wolfram.com/AnnulusTheorem.html" rel="nofollow">annulus</a>.</p>
2,461,615
<p>I am still at college. I need to solve this problem.</p> <p>The total amount to receive in 1 year is 17500 CAD. And the university pays its students each 2 weeks (26 payments per year). </p> <p>How much does a student have to receive for 4 months? I have calculated this in 2 ways (both seem ok) but results are different. Which one is the right one and why? </p> <pre><code>a) 17500CAD / 12 months = 1458.33CAD each month 1458.33CAD x 4 months = 5833 (total amount of money in 4 months) If money has to be given each 2 weeks: 5833 / 8 = 729.125 CAD b) 17500 / 26 = 673.08 each 2 weeks 673.08 x 8 = 5384.62 (total amount of money in 4 months) </code></pre> <p>I think the right one is a), because b) is assuming the student has been receiving money for the whole year (26 payments). But it is not the case.</p> <p>Thank you</p>
Aizzaac
488,697
<p>Okay. This is my solution:</p> <pre><code>1 year = 365 days or 366 days two-week period = 14 days 365 / two-week period = 26 payments per year September to December = 122 days 122 / two-week period = 8.71 payments (17500 / 26) x 8.71 = 5863 CAD </code></pre>
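A short script (my addition, not part of the answer) reproducing this calculation; note that 5863 comes from rounding the payment count to 8.71 first, while keeping full precision gives about 5865 CAD:

```python
total = 17500.0            # CAD per year
per_payment = total / 26   # 26 biweekly payments per year
days = 122                 # September through December
payments = days / 14       # biweekly periods in that span (about 8.71)
amount = per_payment * payments
print(f"{per_payment:.2f} per payment, {payments:.2f} payments, {amount:.0f} CAD")
```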
3,393,244
<p>My homework is to transform this formula </p> <p><span class="math-container">$$(A \wedge \neg B) \wedge (A \vee \neg C)$$</span> into this equivalent form: <span class="math-container">$A \wedge \neg B$</span>. Do you have any ideas?</p>
Bram28
256,001
<p>The 'correct' transformation depends on what rules you have. </p> <p>Here is a transformation that uses pretty elementary equivalence principles:</p> <p><span class="math-container">$$(A \wedge \neg B) \wedge (A \vee \neg C)$$</span></p> <p><span class="math-container">$$\overset{Commutation}{=}$$</span></p> <p><span class="math-container">$$(\neg B \land A) \wedge (A \vee \neg C)$$</span></p> <p><span class="math-container">$$\overset{Association}{=}$$</span></p> <p><span class="math-container">$$\neg B \land (A \wedge (A \vee \neg C))$$</span></p> <p><span class="math-container">$$\overset{Identity}{=}$$</span></p> <p><span class="math-container">$$\neg B \land ((A \lor \bot) \wedge (A \vee \neg C))$$</span></p> <p><span class="math-container">$$\overset{Distribution}{=}$$</span></p> <p><span class="math-container">$$\neg B \land (A \lor (\bot \land \neg C))$$</span></p> <p><span class="math-container">$$\overset{Annihilation}{=}$$</span></p> <p><span class="math-container">$$\neg B \land A$$</span></p> <p><span class="math-container">$$\overset{Commutation}{=}$$</span></p> <p><span class="math-container">$$A \land \neg B$$</span></p> <p>If you are given </p> <p><strong>Absorption</strong></p> <p><span class="math-container">$A \land (A \lor B) = A$</span></p> <p>then you can use that to go from <span class="math-container">$$\neg B \land (A \wedge (A \vee \neg C))$$</span> to <span class="math-container">$$\neg B \land A$$</span> in one step.</p>
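Independently of which rules are allowed, the equivalence itself can be confirmed by brute force over all truth assignments; a small check of mine:

```python
from itertools import product

def lhs(a, b, c):
    # (A and not B) and (A or not C)
    return (a and not b) and (a or not c)

def rhs(a, b, c):
    # A and not B
    return a and not b

# Exhaustively compare the two formulas on all 8 truth assignments.
equivalent = all(lhs(a, b, c) == rhs(a, b, c)
                 for a, b, c in product([False, True], repeat=3))
print(equivalent)  # True
```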
2,216,601
<p>Alright, so I have this transformation that I know isn't one-to-one, but I'm not sure why. </p> <p>The transformation is defined as $f(x,y)=(x+y, 2x+2y)$.</p> <p>Now my knowledge is that you need to fulfill the 2 conditions: additivity and scalar multiplication. I tried both of them and both are met perfectly, so the transformation is linear. </p> <p>However, the transformation is NOT one-to-one. This is because the column vectors of its matrix are linearly dependent. </p> <p>So how am I supposed to relate these 2 seemingly unrelated facts to check whether the transformation is one-to-one? </p>
Jimmy R.
128,037
<p><strong>Hint:</strong> What is $f^{-1}(0,0)$? For example $f(0,0)=(0,0)$, so $(0,0)\in f^{-1}(0,0)$. Can you find (m)any other pair(s) $(x,y)$ such that $f(x,y)=(0,0)$? </p>
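To make the hint concrete (my addition): every point of the form (t, -t) is mapped to (0, 0), so the preimage of (0, 0) is an entire line and f cannot be one-to-one.

```python
def f(x, y):
    return (x + y, 2 * x + 2 * y)

# The whole line y = -x collapses to the origin under f.
kernel_points = [(t, -t) for t in range(-3, 4)]
assert all(f(x, y) == (0, 0) for x, y in kernel_points)
print(kernel_points)  # seven distinct points, all mapped to (0, 0)
```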
777,535
<p>I need to find the full Taylor expansion of $$f(x)=\frac{1+x}{1-2x-x^2}$$</p> <p>Any help would be appreciated. I'd prefer hints/advice before a full answer is given. I have tried partial fractions/reductions. I separated the two in hopes of finding a known geometric sum, but I could not.</p> <p>Edit: I guess you could say that I did not have the... insight to take the path with the partial decomposition mentioned. I have done some work (I had to go to the gym; that is why it took a while).</p> <p>$$\frac{1+x}{1-2x-x^2}=\frac{1}{2(\sqrt{2}-x-1)}-\frac{1}{2(\sqrt{2}+x+1)}$$ I am going to work with this to go further.</p> <p>I got to this:</p> <p>$$\frac{1}{2}\left(\sum_{n=0}^\infty\frac{x^n}{(\sqrt{2}-1)^{n+1}}+\sum_{n=0}^\infty\frac{x^n}{(-\sqrt{2}-1)^{n+1}}\right) $$ But I think this is wrong for some reason.</p> <p>Edit: Figured it out.</p> <p>$$\begin{align*} \implies\frac{1+x}{1-2x-x^2}&amp;=\frac{1}{2(\sqrt{2}-x-1)}-\frac{1}{2(\sqrt{2}+x+1)} \\[2mm] &amp;=\frac{1}{2}\left(\frac{1}{a-x}-\frac{1}{b+x}\right) \mbox{where $a=\sqrt{2}-1$ and $b=\sqrt{2}+1$}. \\[2mm] &amp;=\frac{1}{2}\left(\frac{1}{a} \frac{1}{1-\frac{x}{a}}-\frac{1}{b} \frac{1}{1-\frac{x}{-b}}\right) \\[2mm] &amp;=\frac{1}{2}\left(\frac{1}{a}\sum_{n=0}^\infty \frac{1}{a^n}x^n-\frac{1}{b}\sum_{n=0}^\infty\frac{1}{(-b)^n}x^n\right) \\[2mm] &amp;=\frac{1}{2}\left(\frac{1}{\sqrt{2}-1}\sum_{n=0}^\infty \frac{1}{(\sqrt{2}-1)^n}x^n-\frac{1}{\sqrt{2}+1}\sum_{n=0}^\infty\frac{1}{(-\sqrt{2}-1)^n}x^n\right) \\[2mm] &amp;=\frac{1}{2}\left(\sum_{n=0}^\infty\frac{x^n}{(\sqrt{2}-1)^{n+1}}+\sum_{n=0}^\infty\frac{x^n}{(-\sqrt{2}-1)^{n+1}}\right) \\ &amp;=1+3x+7x^2+17x^3+\ldots \end{align*}$$</p>
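One way to double-check the final coefficients (my addition, not in the original post): multiplying the unknown series by the denominator and matching against 1 + x gives c_0 = 1, c_1 = 3, and c_n = 2c_{n-1} + c_{n-2} for n >= 2.

```python
# Coefficients of the Taylor series of (1+x)/(1-2x-x^2), via the recurrence
# obtained from (1 - 2x - x^2) * sum(c_n x^n) = 1 + x:
#   c_0 = 1, c_1 = 2*c_0 + 1 = 3, c_n = 2*c_{n-1} + c_{n-2} for n >= 2.
c = [1, 3]
for _ in range(2, 8):
    c.append(2 * c[-1] + c[-2])
print(c[:4])  # [1, 3, 7, 17]
```

This matches the expansion 1 + 3x + 7x^2 + 17x^3 + ... obtained above.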
Madavan Viswanathan
547,205
<p>Consider the following, doing the <span class="math-container">$2$</span>-D case, which can be generalized to <span class="math-container">$n$</span>-D.</p> <p>Vector <span class="math-container">$A$</span> with coordinates <span class="math-container">$(x_A,y_A)$</span></p> <p>Vector <span class="math-container">$B$</span> with coordinates <span class="math-container">$(x_B,y_B)$</span></p> <p><a href="https://i.stack.imgur.com/k7d7A.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/k7d7A.png" alt="Vectors A and B" /></a></p> <p>The dot product of those two vectors is : <span class="math-container">\begin{align*} A\cdot B &amp;= AB\cos(\theta) \\ &amp;= AB\cos(\alpha−\beta)\qquad(\mathrm{since}\,\theta=\alpha-\beta) \\ &amp;= AB(\cos(\alpha)\cos(\beta) + \sin(\alpha)\sin(\beta)) \\ &amp;= AB\cos(\alpha)\cos(\beta) + AB\sin(\alpha)\sin(\beta) \\ &amp;= A\cos(\alpha)B\cos(\beta) + A\sin(\alpha)B\sin(\beta) \\ &amp;= x_Ax_B + y_Ay_B \end{align*}</span></p>
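A numeric spot-check of the identity (my addition; the specific vectors are arbitrary examples):

```python
import math

# Compare the coordinate form x_A*x_B + y_A*y_B with |A||B|cos(theta)
# for a pair of example vectors.
xA, yA = 3.0, 4.0
xB, yB = 1.0, 2.0
coord_form = xA * xB + yA * yB
theta = math.atan2(yB, xB) - math.atan2(yA, xA)
angle_form = math.hypot(xA, yA) * math.hypot(xB, yB) * math.cos(theta)
assert abs(coord_form - angle_form) < 1e-9
print(coord_form)  # 11.0
```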
4,394,983
<p>I am tasked with proving that Th((<span class="math-container">$\mathbb{Z}, &lt;))$</span> has continuum many models. For this we are given the following construction.</p> <blockquote> <p>Let <span class="math-container">$\alpha \in \mathcal{C} = \{0,1\}^{\mathbb{N}}$</span>. We define for each <span class="math-container">$\alpha$</span> the set <span class="math-container">$V_{\alpha}$</span>: <span class="math-container">$$V_{\alpha} = \{q \in \mathbb{Q}\mid\exists n[2n \leq q \leq 2n+1]\ \lor\ \exists n[\alpha(n) = 1\ \land\ q = 2n + \frac{3}{2}]\}$$</span> Define for each such <span class="math-container">$V_{\alpha}$</span> the set <span class="math-container">$W_{\alpha} := V_{\alpha}\times \mathbb{Z}$</span>.</p> </blockquote> <p>Now consider the structures <span class="math-container">$(V_{\alpha}, &lt;)$</span> and <span class="math-container">$(W_{\alpha}, &lt;')$</span> with <span class="math-container">$&lt;$</span> the usual ordering on <span class="math-container">$\mathbb{Q}$</span> and <span class="math-container">$&lt;'$</span> the lexicographic ordering on <span class="math-container">$\mathbb{Q}^2$</span>.</p> <p>There are now three things to do:</p> <ol> <li><p>Prove: <span class="math-container">$\forall\alpha\in\mathcal{C}\forall\beta\in\mathcal{C}[\alpha\neq\beta\to (V_{\alpha}, &lt;) \ncong (V_{\beta}, &lt;)]$</span>.</p> </li> <li><p>Prove: <span class="math-container">$(V_{\alpha}, &lt;) \cong (V_{\beta}, &lt;)$</span> if and only if <span class="math-container">$(W_{\alpha}, &lt;') \cong (W_{\beta}, &lt;')$</span>.</p> </li> <li><p>Prove: <span class="math-container">$(W_{\alpha}, &lt;') \equiv (\mathbb{Z}, &lt;)$</span>. That is that both structures are elementary equivalent.</p> </li> </ol> <p><strong>My question</strong></p> <p>I have difficulties proving 3. Thus far I tried using Ehrenfeucht-Fraïssé games but as of now no result yet. 
This is probably because <span class="math-container">$(W_{\alpha}, &lt;')$</span> can be thought of as a plane and <span class="math-container">$(\mathbb{Z}, &lt;)$</span> as a line. Successfully devising a winning strategy basically comes down to making good choices when the first player chooses an element in <span class="math-container">$W_{\alpha}$</span>. I know how to win if the game takes at most 3 moves from each player, yet generalizing this has proven difficult.</p> <p>I also have some trouble with 1. I see why it is true: if <span class="math-container">$\alpha \neq \beta$</span> then either <span class="math-container">$V_{\alpha}$</span> or <span class="math-container">$V_{\beta}$</span> has one more &quot;successor&quot;. But I have yet to translate this into a formal proof.</p> <p>Can I get help with these problems? I thank you in advance.</p>
Primo Petri
137,248
<p>This is not exactly the answer that the OP is looking for, but it may be interesting.</p> <p>First, an answer that assumes the continuum hypothesis.</p> <p>For every <span class="math-container">$\alpha\in\omega_1\smallsetminus\{0\}$</span> there is a model <span class="math-container">$\alpha\times \mathbb Z$</span>. Here the relation <span class="math-container">$&lt;$</span> is interpreted as the lexicographic order. These are <span class="math-container">$\omega_1$</span> non-isomorphic countable models.</p> <p>Now without the continuum hypothesis.</p> <p>For every sequence <span class="math-container">$(n_i)_{i\in\omega}$</span> of positive integers, consider the model obtained by &quot;concatenating&quot; the models <span class="math-container">$n_i\times\mathbb Z$</span>, separated by copies of <span class="math-container">$\mathbb Q\times\mathbb Z$</span>. These are <span class="math-container">$2^\omega$</span> non-isomorphic countable models.</p>
3,424,656
<p>Assume <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous at <span class="math-container">$x=a$</span>. Prove <span class="math-container">$h=\max\{f,g\}$</span> is continuous at <span class="math-container">$x=a$</span>.</p> <p>My solution:</p> <p>When <span class="math-container">$f\ge g\Rightarrow h=\max\{f,g\}=f$</span> and since <span class="math-container">$f$</span> is continuous at <span class="math-container">$x=a$</span> so is <span class="math-container">$h$</span>.</p> <p>When <span class="math-container">$f&lt;g\Rightarrow h=\max\{f,g\}=g$</span> and since <span class="math-container">$g$</span> is continuous at <span class="math-container">$x=a$</span> so is <span class="math-container">$h$</span>.</p> <p>Does this seem sufficient?</p>
José Carlos Santos
446,262
<p>No, not at all. By the same argument, if <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are differentiable, then so is <span class="math-container">$\max\{f,g\}$</span>. However, <span class="math-container">$\max\{x,-x\}=\lvert x\rvert$</span>.</p>
4,569,910
<p><span class="math-container">$ABC$</span> is a right-angled triangle (<span class="math-container">$\measuredangle ACB=90^\circ$</span>). Point <span class="math-container">$O$</span> is inside the triangle such that <span class="math-container">$S_{ABO}=S_{BOC}=S_{AOC}$</span>. If <span class="math-container">$AO^2+BO^2=k^2,k&gt;0$</span>, find <span class="math-container">$CO$</span>. <a href="https://i.stack.imgur.com/YEj0a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YEj0a.png" alt="enter image description here" /></a></p> <p>The most intuitive thing is to note that <span class="math-container">$AO^2+BO^2=k^2$</span> is part of the cosine rule for triangle <span class="math-container">$AOB$</span> and the side <span class="math-container">$AB:$</span> <span class="math-container">$$AB^2=c^2=AO^2+BO^2-2AO.BO\cos\measuredangle AOB\\ =k^2-2AO.BO\cos\measuredangle AOB$$</span> From here, if we can tell what <span class="math-container">$2AO.BO\cos\measuredangle AOB$</span> is in terms of <span class="math-container">$k$</span>, we have found the hypotenuse of the triangle (with the given parameter). I wasn't able to figure out how this can be done.</p> <p>Something else that came into my mind: does the equality of the areas mean that <span class="math-container">$O$</span> is the centroid of the triangle? If so, can someone solve the problem without using that fact?</p>
Sarvesh Ravichandran Iyer
316,409
<p>This is just a more rigorous write-up of things, because there is a more general phenomenon in play here: an exchange argument.</p> <p>Summary:</p> <ul> <li><p>The &quot;success probability&quot; calculation</p> </li> <li><p>The idea of an exchange argument.</p> </li> <li><p>Using the exchange argument to show black-ball exchange optimality.</p> </li> <li><p>Using the exchange argument to restrict the colors of balls in an optimal configuration of boxes.</p> </li> <li><p>Resolving the remaining cases by hand.</p> </li> <li><p>A small addendum on the technique used.</p> </li> </ul> <hr /> <h5>The &quot;success probability&quot;</h5> <p>Suppose that you put <span class="math-container">$a_1,a_2,a_3$</span> white balls in boxes <span class="math-container">$1,2,3$</span> respectively, and <span class="math-container">$b_1,b_2,b_3$</span> black balls in boxes <span class="math-container">$1,2,3$</span> respectively.</p> <p>Then, the probability of getting a white ball when you pick a box at random, and then pick a ball out of it at random, is (by the law of total probability) <span class="math-container">$$ \frac{1}{3}\left[\frac{a_1}{a_1+b_1}+\frac{a_2}{a_2+b_2}+\frac{a_3}{a_3+b_3}\right] $$</span></p> <p>Note that <span class="math-container">$a_1+a_2+a_3 = 23$</span> and <span class="math-container">$b_1+b_2+b_3 = 7$</span>. Therefore, we're basically trying to maximize <span class="math-container">$$ \frac{a_1}{a_1+b_1}+\frac{a_2}{a_2+b_2}+\frac{a_3}{a_3+b_3} $$</span></p> <p>subject to those summation conditions.</p> <hr /> <h5>The &quot;exchange&quot; argument</h5> <p>There is a very common idea in optimal control theory (a branch of mathematics that can be studied from a pure or applied point of view) that's called the exchange argument. It roughly says the following: if a particular arrangement (or control) is not optimal, then it can be made better just by exchanging two &quot;components&quot; of that arrangement (or control).</p> <p>The idea here is that the structure of the &quot;reward&quot; (in this case, the success probability) allows us to execute an exchange argument: a procedure that will always increase the reward, provided that it can be done.</p> <p>For example, suppose that we're given the configuration of <span class="math-container">$a_1,a_2,a_3$</span> white balls and <span class="math-container">$b_1,b_2,b_3$</span> black balls in boxes <span class="math-container">$1,2,3$</span> respectively. Suppose that <span class="math-container">$b_1&gt;0$</span>.</p> <p>Create the alternate configuration by taking a black ball from the first box and putting it in the third box. Now, you have <span class="math-container">$b_1-1,b_2,b_3+1$</span> black balls instead.</p> <p>Compare the success probabilities now. 
Initially, it is <span class="math-container">$$ \frac{1}{3}\left[\frac{a_1}{a_1+b_1}+\frac{a_2}{a_2+b_2}+\frac{a_3}{a_3+b_3} \right] $$</span></p> <p>Following the exchange, it becomes <span class="math-container">$$ \frac{1}{3}\left[\frac{a_1}{a_1+b_1-1}+\frac{a_2}{a_2+b_2}+\frac{a_3}{a_3+b_3+1} \right] $$</span></p> <p>Their difference (times <span class="math-container">$3$</span>, let's avoid the <span class="math-container">$\frac 13$</span> for now) is <span class="math-container">$$ \frac{a_3}{a_3+b_3+1} - \frac{a_3}{a_3+b_3} - \frac{a_1}{a_1+b_1}+\frac{a_1}{a_1+b_1-1} $$</span></p> <p>which equals <span class="math-container">$$ \frac{a_1}{(a_1+b_1)(a_1+b_1-1)}- \frac{a_3}{(a_3+b_3+1)(a_3+b_3)} $$</span></p> <p>This is positive precisely when <span class="math-container">$$ a_3(a_1+b_1)(a_1+b_1-1) &lt; a_1(a_3+b_3)(a_3+b_3+1) $$</span></p> <p>To write that more cleanly, let <span class="math-container">$a_1+b_1 = T_1,a_2+b_2 = T_2,a_3+b_3 = T_3$</span> be the total number of balls in boxes <span class="math-container">$1,2,3$</span> respectively in the initial configuration. Then this condition becomes <span class="math-container">$$ a_3T_1(T_1-1) &lt; a_1T_3(T_3+1) $$</span></p> <p>Thus, if the original configurations satisfy this inequality and if <span class="math-container">$b_1&gt;0$</span>, then the exchange argument shows that a better configuration exists.</p> <p>I will refer to the above condition as an &quot;exchange&quot; inequality, because it provides a criteria for when an exchange leads to a better configuration.</p> <hr /> <h5>A result on &quot;extremal&quot; black ball values being the best</h5> <p>With respect to this exchange, let's make the following observation. Suppose that <span class="math-container">$b_1&gt;0$</span> and <span class="math-container">$a_1T_3(T_3+1) &gt; a_3T_1(T_1-1)$</span>. 
Then, we make the exchange: take a black ball from box <span class="math-container">$1$</span> and put it in box <span class="math-container">$3$</span>.</p> <p>However, if we now look at the number of black and white balls in boxes <span class="math-container">$1$</span> and <span class="math-container">$3$</span>, then the new right-hand side is <span class="math-container">$a_1(T_3+1)(T_3+2)$</span>, and the new left-hand side is <span class="math-container">$a_3(T_1-1)(T_1-2)$</span>. We would still have <span class="math-container">$$ a_1(T_3+1)(T_3+2) &gt; a_1T_3(T_3+1) &gt; a_3T_1(T_1-1)&gt;a_3(T_1-1)(T_1-2) $$</span></p> <p>That is, we have proven the following:</p> <blockquote> <p>If it is better for us to transfer a black ball from box <span class="math-container">$1$</span> to box <span class="math-container">$3$</span>, then (if feasible) it is still better for us to transfer another black ball from box <span class="math-container">$1$</span> to box <span class="math-container">$3$</span>, because this preserves the &quot;exchange&quot; inequality we wrote above.</p> </blockquote> <p>By inductive reasoning, we obtain the following statement:</p> <blockquote> <p>If it is better for us to transfer a black ball from box <span class="math-container">$1$</span> to box <span class="math-container">$3$</span>, then the best possible situation is that all the black balls from box <span class="math-container">$1$</span> are transferred to box <span class="math-container">$3$</span>.</p> </blockquote> <hr /> <h5>A result on the colors of the balls that can be in each box</h5> <p>What we must prove now is that if two boxes each have a positive number of black and white balls, then by exchanging black balls among these boxes we can produce better configurations. 
That is the content of the following lemma:</p> <blockquote> <p>If <span class="math-container">$a_1,a_3,b_1,b_3&gt;0$</span>, then at least one of <span class="math-container">$a_1T_3(T_3-1)&lt;a_3T_1(T_1+1)$</span> or <span class="math-container">$a_3T_1(T_1-1)&lt;a_1T_3(T_3+1)$</span> must be true.</p> </blockquote> <p>Proof: If both are false, then <span class="math-container">$$ a_1T_3(T_3-1) \geq a_3T_1(T_1+1) , \qquad a_3T_1(T_1-1)\geq a_1T_3(T_3+1) $$</span> are both true. However, <span class="math-container">$$ a_3T_1(T_1+1)&gt; a_3T_1(T_1-1) $$</span> because <span class="math-container">$a_3,T_1&gt;0$</span>. Therefore, <span class="math-container">$a_1T_3(T_3-1)&gt; a_1T_3(T_3+1)$</span>. This can't be true because <span class="math-container">$a_1,T_3&gt;0$</span>.</p> <p>Combining this with the previous lemma about black-ball shifting, we have now proven:</p> <blockquote> <p>Any configuration in which there are two boxes, each containing at least one black and one white ball, is strictly inferior to some other configuration.</p> </blockquote> <p>This is because at least one of the two possible exchanges yields a better configuration, and repeated black-ball shifting then produces a configuration, better than both, in which one of the boxes has no black ball.</p> <hr /> <h5>Proving that two boxes must have only white balls</h5> <p>Therefore, we may stick to configurations in which there is at most one box, say Box <span class="math-container">$1$</span>, which contains both black and white balls. Each of the other boxes contains either no white ball or no black ball (but not both: the problem is not well-defined if any <span class="math-container">$T_i=0$</span>).</p> <p>The situation where every box consists of only white or only black balls leads to a success probability of at most <span class="math-container">$\frac 23$</span>, and we know that can be bettered. 
So we will stick to box <span class="math-container">$1$</span> being the box that has both white and black balls.</p> <p>Suppose that box <span class="math-container">$2$</span> contains a black ball. Then, by our earlier assertion, it contains only black balls, which means that <span class="math-container">$a_2 = 0$</span>. Therefore, we trivially have <span class="math-container">$$ a_2T_1(T_1-1) &lt; a_1T_2(T_2+1) $$</span></p> <p>By the exchange inequality, it follows that a better configuration is formed by transferring a black ball from Box <span class="math-container">$1$</span> to Box <span class="math-container">$2$</span>. Then, using the inductive shifting lemma, we eventually arrive at the situation where Box <span class="math-container">$1$</span> has only white balls and Box <span class="math-container">$2$</span> has only black balls. Knowing that Box <span class="math-container">$3$</span> also only has either white or black balls, it follows that this configuration has success probability at most <span class="math-container">$\frac 23$</span>, which we know can be bettered.</p> <p>We have shown that:</p> <blockquote> <p>In any optimal configuration, the two boxes that have balls of only one color in them can consist only of white balls. That is, without loss of generality, <span class="math-container">$b_1=7,b_2=b_3=0$</span>.</p> </blockquote> <hr /> <h5>To finish off</h5> <p>To finish, we must only see which is the best among those configurations with <span class="math-container">$b_1=7,b_2=b_3=0$</span>. In this case, the success probability explicitly equals <span class="math-container">$$ \frac{1}{3}\left[\frac{a_1}{a_1+7}+2\right] $$</span></p> <p>Thus, we must maximize this quantity, subject to <span class="math-container">$a_2,a_3&gt;0$</span> (so that the other boxes are not empty). 
That is easily done: write <span class="math-container">$$ \frac{1}{3}\left[\frac{a_1}{a_1+7}+2\right] = \frac{1}{3}\left[1-\frac{7}{a_1+7}+2\right] $$</span></p> <p>Now, the bigger the value of <span class="math-container">$a_1$</span>, the bigger the above quantity. The biggest possible value of <span class="math-container">$a_1$</span> is <span class="math-container">$21$</span>, with <span class="math-container">$a_2=a_3=1$</span>. It follows that <span class="math-container">$$ a_1=21,a_2=a_3=1, b_1=7,b_2=b_3=0 $$</span></p> <p>is the best configuration.</p> <hr /> <h6>ADDENDUM</h6> <p>The exchange argument is actually a heavily used technique in queueing and scheduling theory. Whenever the &quot;reward&quot; (or &quot;cost&quot;) has a structure that corresponds well with exchanges, one can use these arguments to <em>rigorously</em> prove the optimality of various strategies (typically &quot;greedy&quot; strategies like the one above, where some kind of monotonicity is at play; indeed, monotonicity plays a huge role in optimal control theory).</p>
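The conclusion can also be verified by brute force (my addition, not part of the argument above): enumerate every way to place 23 white and 7 black balls into three non-empty boxes and compare success probabilities.

```python
from itertools import product

def success(a, b):
    # Pick a box uniformly at random, then a ball uniformly from that box.
    return sum(ai / (ai + bi) for ai, bi in zip(a, b)) / 3

best_value, best_config = -1.0, None
for a in product(range(24), repeat=3):         # white balls per box
    if sum(a) != 23:
        continue
    for b in product(range(8), repeat=3):      # black balls per box
        if sum(b) != 7 or any(ai + bi == 0 for ai, bi in zip(a, b)):
            continue                           # every box must be non-empty
        p = success(a, b)
        if p > best_value:
            best_value, best_config = p, (a, b)

print(best_config, best_value)
```

Up to permuting the boxes, the optimum is a = (21, 1, 1), b = (7, 0, 0), with success probability (21/28 + 1 + 1)/3 = 11/12.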
18,511
<p>I have a notebook written in Mathematica 8 in which I imported Tiff images and everything worked fine. Since I installed Mathematica 9, I get the error:</p> <pre><code>In[14]:= Files[[1]][[1]] Import[Files[[1]][[1]],"TIFF"] Out[14]= Growth_1_130124_1353/Growth_1_130124_1353_T0001.tif During evaluation of In[14]:= Image::imgcsmis: The specified color space ColorProfileData[&lt;&gt;,Description-&gt;sRGB IEC61966-2.1,DeviceColorSpace-&gt;RGB,IndependentColorSpace-&gt;XYZ] and the number of channels 1 are not compatible. &gt;&gt; Out[15]= Image[RawArray[Byte,&lt;1024,1360&gt;],Byte,ColorSpace-&gt;ColorProfileData[&lt;&gt;,Description-&gt;sRGB IEC61966-2.1,DeviceColorSpace-&gt;RGB,IndependentColorSpace-&gt;XYZ],Interleaving-&gt;None] </code></pre> <p>How can I solve it? Cheers, Andrea</p>
Sjoerd C. de Vries
57
<p>Photoshop complains that the ICC color profile of this picture is invalid and is ignoring it. So there might be a problem with the picture itself: it is a 1-channel grayscale image, but it reports a 3-channel RGB color space.</p> <p>Mathematica 9 has the new <code>ColorProfileData</code> object which represents this information, and the importer wants to make use of it. Since the information contained in the file header apparently is incorrect, Mathematica 9 has a problem here where previous versions worked.</p> <p>There is a workaround, though: simply overwrite the incorrect information:</p> <pre><code>res = Import["https://dl.dropbox.com/u/7754460/Growth_1_130124_1353_T0001.tif"] </code></pre> <p><img src="https://i.stack.imgur.com/HVKhS.png" alt="Mathematica graphics"></p> <pre><code>res[[3]] = ColorSpace -&gt; "Gray"; res </code></pre> <p><img src="https://i.stack.imgur.com/s0xnn.png" alt="Mathematica graphics"></p>
333,467
<p>I was reading in my analysis textbook that the map $ f: {\mathbf{GL}_{n}}(\mathbb{R}) \to {\mathbf{GL}_{n}}(\mathbb{R}) $ defined by $ f(A) := A^{-1} $ is a continuous map. I also saw that $ {\mathbf{GL}_{n}}(\mathbb{R}) $ is dense in $ {\mathbf{M}_{n}}(\mathbb{R}) $. My question is:</p> <blockquote> <p>What is the unique extension of $ f $ to $ {\mathbf{M}_{n}}(\mathbb{R}) $?</p> </blockquote>
user1551
1,551
<p>As pointed out by the others, you cannot extend $f$ to a <em>continuous</em> function $g:M_n(\mathbb{R})\to M_n(\mathbb{R})$, because there exists a convergent sequence of invertible matrices $X_n$ such that $f(X_n)=X_n^{-1}$ diverges. There does exist, however, a <em>bijective</em> function $g:M_n(\mathbb{R})\to M_n(\mathbb{R})$ such that $g=f$ on $GL_n(\mathbb{R})$, namely, $g(X)=X^+$ is the <a href="http://en.wikipedia.org/wiki/Moore%E2%80%93Penrose_pseudoinverse" rel="nofollow">Moore-Penrose pseudoinverse</a> of $X$.</p>
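<p>For diagonal matrices the pseudoinverse is easy to write down explicitly, which makes the two claims — that $g$ agrees with $f$ on invertible matrices and that $g$ is an involution, hence bijective — concrete. A small Python sketch restricted to the diagonal case (<code>diag_pinv</code> is an illustrative name, not a library function):</p>

```python
def diag_pinv(d):
    """Moore-Penrose pseudoinverse of diag(d): invert each nonzero
    entry and leave the zero entries as zero."""
    return [1.0 / x if x != 0 else 0.0 for x in d]

d = [2.0, 0.0, 0.5]             # a singular diagonal matrix
print(diag_pinv(d))             # [0.5, 0.0, 2.0]
print(diag_pinv(diag_pinv(d)))  # recovers d: the map is an involution
```

<p>On an invertible diagonal matrix (no zero entries) this is the ordinary inverse, so it extends $f$; applying it twice returns the input, so it is a bijection.</p>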
1,677,359
<p>$\sum_{i=0}^n 2^i = 2^{n+1} - 1$</p> <p>I can't seem to find the proof of this. I think it has something to do with combinations and Pascal's triangle. Could someone show me the proof? Thanks</p>
Slade
33,433
<p>Since you asked about Pascal's triangle:</p> <p>Imagine filling in rows $0$ through $n$ of Pascal's triangle. Now change the first position of row $0$ from $1$ to $1+1$.</p> <p>Distribute the two ones to the following row, which should now read $1+1, 1+1$. Distribute again to get $1+1,2+2,1+1$. And so on.</p> <p>When we get to row $n$, we will populate row $n+1$ as usual, and the sum of those numbers will equal the sum of the numbers we started with.</p> <p>Since the sum of the elements in the $i$-th row of Pascal's triangle is $2^i$, we have shown that $1+ \sum_{i=0}^n 2^i = 2^{n+1}$.</p>
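<p>The identity itself is cheap to spot-check by machine before (or after) proving it; e.g. in Python:</p>

```python
# verify sum_{i=0}^{n} 2^i = 2^(n+1) - 1 for a range of n
for n in range(20):
    assert sum(2**i for i in range(n + 1)) == 2**(n + 1) - 1
print("identity holds for n = 0..19")
```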
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
YukiJ
7,608
<p>The <a href="https://en.wikipedia.org/wiki/Minimax_theorem" rel="noreferrer">minimax theorem</a> states the following:</p> <blockquote> <p>Let $X\subset \mathbb{R}^{n}$ and $Y\subset \mathbb {R} ^{m}$ be compact convex sets. If $ f:X\times Y\rightarrow \mathbb {R} $ is a continuous function that is convex-concave, i.e.</p> <p>$$f(\cdot ,y):X\rightarrow \mathbb {R} \text{ is convex for fixed } y, \text{and}$$ $$ f(x,\cdot ):Y\rightarrow \mathbb {R} \text{ is concave for fixed } x.$$</p> <p>Then we have that</p> <p>$$ \min _{x\in X}\max _{y\in Y}f(x,y)=\max _{y\in Y}\min _{x\in X}f(x,y).$$</p> </blockquote> <p>For arbitrary functions $f$ the equality does not hold in general. However, the <a href="https://en.wikipedia.org/wiki/Max%E2%80%93min_inequality" rel="noreferrer">Max-Min-Inequality</a> is always satisfied.</p> <blockquote> <p>For any function $f: Z \times W \to \mathbb{R}$ we have</p> <p>$$ \inf _{w\in W}\sup _{z\in Z}f(z,w) \geq \sup _{z\in Z}\inf_{w\in W}f(z,w) .$$</p> </blockquote> <p>Since this property always holds for arbitrary functions $f$, it is well worth keeping in mind for students. Naturally, many students will tend to confuse the order of the max and min operations as well as the direction of the inequality. In a lecture on nonlinear optimization our professor told us the following mnemonic to remember the property, which, I think, makes it really easy to remember:</p> <blockquote> <p><em>The shortest giant is at least as tall as the tallest dwarf.</em></p> </blockquote> <p>Here, the shortest giant refers to the inf sup on the left hand side, which is at least as tall (i.e. $\geq$) as the tallest dwarf, which corresponds to sup inf. </p> <p>I haven't forgotten the Max-Min-Inequality since I learned the above mnemonic, which is why I posted the question whether you are aware of other such neat mnemonics which make students' learning easier. </p>
14,007
<p>A colleague of mine will be teaching 3 classes (pre-calculus and two sections of calculus, at the university level) next semester with an additional grader in only one of those classes (pre-calculus). With an upper bound of 35 students a class, there is potential for a large amount of grading that needs to happen without a lot of resources available in terms of additional graders.</p> <p>My question essentially boils down to: </p> <blockquote> <p>What are tips/tricks/techniques for creating quiz and exam questions that both</p> <ol> <li>test students at various levels of Bloom's hierarchy and</li> <li>minimize the amount of work for the grader</li> </ol> <p>?</p> </blockquote> <p>(Again, the subject is particularly (pre)calculus, but more general tips/tricks/techniques are great too. I only mention the subjects so that techniques that work particularly well for topics in (pre)calculus are mentioned.)</p> <p>I have some ideas:</p> <ul> <li>Formatting can get rid of a lot of the time spent <em>looking</em> for various components of an answer. E.g. for a question about convergence/divergence of a series, you can label spaces for saying whether the series converges or diverges, for what test they are using, and what work needs to be done in order to use that test.</li> <li>True/False questions in which the student must give a counterexample or an explanation in case of False (and perhaps some explanation in case of True as well).</li> <li>Matching problems for topics in which this makes sense, such as polar/parametric graphs or conics (given Cartesian or polar equations).</li> </ul> <p>I'm curious to hear what other things people have used.</p>
Torsten Schoeneberg
8,931
<p>Last year I heard of </p> <p>$$\text{Lo De Hi Mi Hi De Lo}$$</p> <p>$$\text{(sing: "Low Dee High my High Dee Low!")}$$</p> <p>as a mnemonic for the numerator in the quotient rule:</p> <p>$$\left(\frac{f}{g}\right)' = \frac{g\cdot Df - f \cdot Dg}{g^2}$$</p> <p>(of course Lo(w) = denominator, De = derivative, Hi(gh) = numerator, Mi = minus). Make sure you emphasize you have to divide the whole thing</p> <p>$$\text{over LoLo}$$</p> <p>though.</p>
2,064,095
<p>Can someone please help me understand this problem? Does the limit exist in part (a) and part (b)?</p> <p>A) $$\lim_{(x,y) \to (0,0)} x \sin (\frac{1}{y})$$</p> <p>B) $$\lim_{(x,y) \to (0,0)} \left( x \sin (\frac{1}{y})+y \sin (\frac{1}{x}) \right)$$</p>
Henricus V.
239,207
<p>Problem 1 only holds for finite $A$. Since $x \not\in A \implies \chi_A(x) = 0$, the sum reduces to $\sum_{x \in X} \chi_A(x) = \sum_{x \in A} \chi_A(x) = \sum_{x \in A} 1 = |A|$.</p> <p>Problem 2 is essentially the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle" rel="nofollow noreferrer">Inclusion-Exclusion Principle</a>. For $x \in X$, define the <strong>point mass measure</strong> of $x$ to be $$ \mu_x(S) = \chi_S(x) $$ Evidently $\mu$ is a measure. (Check this) By the Inclusion-Exclusion Principle of measures, $$ \mu_x(A \cup B) + \mu_x(A \cap B) = \mu_x(A) + \mu_x(B) $$ which solves Problem 2.</p>
2,064,095
<p>Can someone please help me understand this problem? Does the limit exist in part (a) and part (b)?</p> <p>A) $$\lim_{(x,y) \to (0,0)} x \sin (\frac{1}{y})$$</p> <p>B) $$\lim_{(x,y) \to (0,0)} \left( x \sin (\frac{1}{y})+y \sin (\frac{1}{x}) \right)$$</p>
Brian M. Scott
12,042
<p>For the first question you have simply</p> <p>$$|A|=\sum_{x\in A}1=\sum_{x\in A}1+\sum_{x\in X\setminus A}0=\sum_{x\in X}\chi_A(x)\;,$$</p> <p>assuming, of course, that $A$ is a finite set.</p> <p>HINT: For the second question just compare the two sides for each $x\in X$. Note that each $x\in X$ is in exactly one of the sets $A\setminus B$, $B\setminus A$, $A\cap B$, and $X\setminus(A\cup B)$.</p>
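<p>Both parts are finite enough to verify by brute force on a small ground set; a quick Python check, for instance:</p>

```python
from itertools import chain, combinations

X = {1, 2, 3, 4}

def chi(S, x):
    """Characteristic (indicator) function of S."""
    return 1 if x in S else 0

def subsets(ground):
    xs = list(ground)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

for A in map(set, subsets(X)):
    # Problem 1: |A| = sum over x in X of chi_A(x)
    assert len(A) == sum(chi(A, x) for x in X)
    for B in map(set, subsets(X)):
        # Problem 2: chi_{A∪B} + chi_{A∩B} = chi_A + chi_B, pointwise
        assert all(chi(A | B, x) + chi(A & B, x) == chi(A, x) + chi(B, x)
                   for x in X)
```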
14,712
<p>I have matrix <code>in</code> as shown, consisting of real numbers and 0. How can I sort it to become <code>out</code> as shown?</p> <pre><code>in ={ {0, 0, 3.411, 0, 1.343}, {0, 0, 4.655, 2.555, 3.676}, {0, 3.888, 0, 3.867, 1.666} }; out ={ {1.343, 3.411, 0, 0, 0}, {2.555, 3.676, 4.655, 0, 0}, {1.666, 3.867, 3.888, 0, 0} }; </code></pre> <p>This is related to a <a href="https://mathematica.stackexchange.com/questions/14663/">question I asked</a>. It is much easier to add the columns by sorting it this way than in previous question, and easier to visualize than trying to take the first non-zero value in a row.</p>
rm -rf
5
<p>The simplest way would be to replace zeros with <code>Null</code>, map <code>Sort</code> onto it and then replace <code>Null</code> with zeros. This works because the default sorting function <code>OrderedQ</code> will place <code>Null</code> at the end, as per your needs.</p> <pre><code>mat = {{0, 0, 3.411, 0, 1.343}, {0, 0, 4.655, 2.555, 3.676}, {0, 3.888, 0, 3.867, 1.666}}; Map[Sort, mat /. (0 | 0.) -&gt; Null] /. Null -&gt; 0 (* 1.343 3.411 0 0 0 2.555 3.676 4.655 0 0 1.666 3.867 3.888 0 0 *) </code></pre>
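<p>For comparison, the same row-wise ordering — nonzero entries ascending, zeros pushed to the end — can be sketched outside Mathematica as well; an equivalent in Python:</p>

```python
mat = [[0, 0, 3.411, 0, 1.343],
       [0, 0, 4.655, 2.555, 3.676],
       [0, 3.888, 0, 3.867, 1.666]]

def zeros_last(row):
    """Sort the nonzero entries ascending and pad with zeros on the right."""
    nonzero = sorted(x for x in row if x != 0)
    return nonzero + [0] * (len(row) - len(nonzero))

out = [zeros_last(row) for row in mat]
# [[1.343, 3.411, 0, 0, 0],
#  [2.555, 3.676, 4.655, 0, 0],
#  [1.666, 3.867, 3.888, 0, 0]]
```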
1,656,136
<p>I'm trying to track down an example of a ring in which there exists an infinite chain of ideals under inclusion. (i.e. $I_1 \subsetneq I_2 \subsetneq I_3 \subsetneq...$)</p>
spec
318,677
<p>Consider the ring $R = C([0,1], \mathbf{R})$ of continuous functions $f \colon [0,1] \to \mathbf{R}$.</p> <p>For $0 \lt t \lt 1$, let $I_t = \{ f \in R \mid f(x) = 0 \text{ for all } 0 \leq x \leq t \}$ be the ideal of functions vanishing on all of $[0,t]$. Then $I_s \supsetneq I_t$ whenever $s \lt t$.</p>
2,905,022
<p>I recently stumbled upon the problem $3\sqrt{x-1}+\sqrt{3x+1}=2$, where I am supposed to solve the equation for $x$. My problem with this equation, though, is that I do not know where to start in order to be able to solve it. Could you please give me a hint (or two) on what I should try first in order to solve this equation?</p> <p><strong>Note</strong> that I only want hints.</p> <p>Thanks for the help!</p>
user
505,767
<p><strong>HINT</strong></p> <p>We have</p> <p>$$\sqrt a + \sqrt b=c \stackrel{both \, terms\, \ge 0}\iff (\sqrt a + \sqrt b)^2=a+2\sqrt{ab}+b=c^2 $$</p> <p>and</p> <p>$$a+2\sqrt{ab}+b=c^2 \color{red}{\implies} (2\sqrt{ab})^2=(c^2-a-b)^2$$</p> <p>for the latter implication we need to check at the end for possible extra solutions.</p>
2,905,022
<p>I recently stumbled upon the problem $3\sqrt{x-1}+\sqrt{3x+1}=2$, where I am supposed to solve the equation for $x$. My problem with this equation, though, is that I do not know where to start in order to be able to solve it. Could you please give me a hint (or two) on what I should try first in order to solve this equation?</p> <p><strong>Note</strong> that I only want hints.</p> <p>Thanks for the help!</p>
mfl
148,513
<p><strong>First step</strong></p> <p>$$3\sqrt{x-1}+\sqrt{3x+1}=2\implies (3\sqrt{x-1}+\sqrt{3x+1})^2=2^2.$$</p> <p><strong>Second step</strong></p> <p>After rearranging you'll get</p> <p>$$6\sqrt{x-1}\sqrt{3x+1}=ax+b.$$ Take squares one more time.</p> <p><strong>Final step</strong></p> <p>Solve the quadratic equation and check that the solutions you get are solutions of the initial equation.</p>
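<p>Carrying these steps out on this particular equation, the double squaring produces the quadratic $x^2-6x+5=0$ (you should verify this yourself), so the candidates are $x=1$ and $x=5$; the final checking step is easy to automate, e.g. in Python:</p>

```python
import math

def lhs(x):
    return 3 * math.sqrt(x - 1) + math.sqrt(3 * x + 1)

# roots of x^2 - 6x + 5 = 0, produced by squaring twice
candidates = [1.0, 5.0]

# keep only the candidates that satisfy the original equation
solutions = [x for x in candidates if math.isclose(lhs(x), 2.0)]
print(solutions)  # x = 5 turns out to be an extraneous root from squaring
```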
206,305
<p>Prove: $s_n \to s \implies \sqrt{s_n} \to \sqrt{s}$ by the definition of the limit. $s \geq 0$ and $s_n$ is a sequence of non-negative real numbers.</p> <p>This is my preliminary computation:</p> <p>$|\sqrt{s_n} - \sqrt{s}| &lt; \epsilon$</p> <p>multiply by the conjugate:</p> <p>$|\dfrac{s_n - s}{\sqrt{s_n}+\sqrt{s}}| &lt; \epsilon$</p> <p>Thus we can use the fact that $|\sqrt{s_n} - \sqrt{s}| &lt; \dfrac{|s_n - s|}{\sqrt{s}} &lt; \epsilon$</p> <p>After this I am lost...</p>
Pragabhava
19,532
<p>If both $s$ and $s_n$ are non-negative</p> <p>$$ |\sqrt{s}-\sqrt{s_n}|^2 \le |\sqrt{s}-\sqrt{s_n}||\sqrt{s} + \sqrt{s_n}|. $$</p> <p><strong>Step by Step :)</strong></p> <p>Since both $s$ and $s_n$ are non-negative</p> <p>$$ |\sqrt{s}-\sqrt{s_n}| \le |\sqrt{s} + \sqrt{s_n}| $$</p> <p>this is clear because the result of subtracting a non-negative number from another is at most the result of adding it, then</p> <p>$$ |\sqrt{s}-\sqrt{s_n}|^2 \le |\sqrt{s}-\sqrt{s_n}| \cdot |\sqrt{s}+\sqrt{s_n}| = |s - s_n| $$</p> <p>and you are done!</p>
99,506
<p>I am trying to show that the binary expansion of a given positive integer is unique.</p> <p>According to this link, <a href="http://www.math.fsu.edu/~pkirby/mad2104/SlideShow/s5_3.pdf" rel="nofollow">http://www.math.fsu.edu/~pkirby/mad2104/SlideShow/s5_3.pdf</a>, all I see is that I can recopy theorem 3-1's proof.</p> <p>Is this polished enough of an argument? Thanks</p>
Bill Dubuque
242
<p><strong>Hint</strong> $\ $ Put $\ b_i = 2\ $ in this sketched proof of the uniqueness of <a href="http://en.wikipedia.org/wiki/Mixed_radix" rel="nofollow">mixed-radix representation</a></p> <p>$$\begin{eqnarray} n &amp;=&amp; d_0 +\ d_1\, b_0 +\ d_2\, b_1\,b_0 +\ d_3\, b_2\,b_1\,b_0 +\ \cdots, \quad 0 \le d_i &lt; b_i,\ \ b_i &gt; 1\\[0.1em] &amp;=&amp;c_0\, +\, c_1\, b_0 +\, c_2\,\, b_1\,b_0 + \, c_3\,\, b_2\,b_1\,b_0 +\ \cdots, \quad 0 \le c_i &lt; b_i\end{eqnarray}$$</p> <p>$\, c_0 = d_0\ $ since $\,{\rm mod}\ b_0\!:\ c_0 \equiv n\equiv d_0\, $ and $\ 0 \le c_0,d_0 &lt; b_0.\, $ Now induct on smaller tails</p> <p>$$\begin{eqnarray} (n-d_0)/b_0 &amp;=&amp; d_1 +\ d_2\, b_1 +\ d_3\, b_2\, b_1 + \ \cdots\\[0.1em] =\, (n-c_0)/b_0 &amp;=&amp; c_1 +\, c_2\,\, b_1 +\ c_3\ b_2\, b_1 + \ \cdots \end{eqnarray}$$</p> <p>$(n-d_0)/b_0 \le\, n/b_0 &lt;\, n\ $ by $\,d_0 \ge 0,\ b_0 &gt; 1,\,$ so by induction $\ c_i = d_i\ $ for $\,i \ge 1.$ </p>
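<p>The mod-and-divide recursion in this sketch is exactly the usual digit-extraction algorithm; a Python version, with a round-trip check against the built-in <code>bin</code> as a sanity test of uniqueness:</p>

```python
def binary_digits(n):
    """Digits d_0, d_1, ... (least significant first) with n = sum d_i 2^i."""
    digits = []
    while n > 0:
        digits.append(n % 2)   # d_0 is determined mod 2, as in the proof
        n //= 2                # then recurse on the smaller tail (n - d_0)/2
    return digits

for n in range(1, 500):
    d = binary_digits(n)
    assert sum(b * 2**i for i, b in enumerate(d)) == n     # it represents n
    assert d == [int(c) for c in reversed(bin(n)[2:])]     # and agrees with bin
```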
981,541
<p>Say I have the number <code>0.73992</code> and I'm rounding to 3 decimal places. My instinct would be to write <code>0.740 (3dp)</code>. But surely that implies that it is <em>exactly</em> <code>0.740</code>. The only other alternatives are to write <code>0.7399 (4dp)</code> or <code>0.74</code>, neither of which are to the requested accuracy.</p> <p>I'm sure there's a nice definitive answer out there, I just can't seem to find it :/</p>
ACupofJoe
185,680
<p>0.740 would be the answer. Since you're adding the 0 at the end, it implies that you have accuracy to the third decimal place.</p>
981,541
<p>Say I have the number <code>0.73992</code> and I'm rounding to 3 decimal places. My instinct would be to write <code>0.740 (3dp)</code>. But surely that implies that it is <em>exactly</em> <code>0.740</code>. The only other alternatives are to write <code>0.7399 (4dp)</code> or <code>0.74</code>, neither of which are to the requested accuracy.</p> <p>I'm sure there's a nice definitive answer out there, I just can't seem to find it :/</p>
Barbosa
185,459
<p>If it were 0.11488 you would round to 0.115, and that does not imply that it is exactly 0.115. I think you were just confused because the rounded number ended in 0. So, in your example, you should write 0.740.</p>
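<p>Fixed-precision formatting in most programming languages follows the same convention, trailing zero included; e.g. in Python:</p>

```python
# rounding to 3 decimal places keeps the trailing zero,
# which is what signals the stated accuracy
print(f"{0.73992:.3f}")  # 0.740
print(f"{0.11488:.3f}")  # 0.115
```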
547,050
<p>Which trigonometric formulas are used for these problems? <img src="https://i.stack.imgur.com/TVBCx.png" alt="enter image description here"></p>
Empy2
81,790
<p>These come from the addition formulas $\cos(a+b)+\cos(a-b)=2\cos a\cos b$ and $\sin(a+b)+\sin(a-b)=2\sin(a)\cos(b)$</p>
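<p>Since both identities are exact, a numeric spot check catches sign or argument slips; for instance in Python:</p>

```python
import math

# check cos(a+b) + cos(a-b) = 2 cos(a) cos(b)
# and   sin(a+b) + sin(a-b) = 2 sin(a) cos(b)
for a, b in [(0.3, 1.1), (2.0, -0.7), (1.5, 0.2)]:
    assert math.isclose(math.cos(a + b) + math.cos(a - b),
                        2 * math.cos(a) * math.cos(b), abs_tol=1e-12)
    assert math.isclose(math.sin(a + b) + math.sin(a - b),
                        2 * math.sin(a) * math.cos(b), abs_tol=1e-12)
print("both product-to-sum identities check out")
```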
831,472
<p>I am learning about Karnaugh maps to simplify boolean algebra expressions. I have this:</p> <p>$$\begin{bmatrix} &amp; bc &amp; b'c &amp; bc' &amp; b'c' \\ a &amp; 0 &amp; 1 &amp; 1 &amp; 0\\ a' &amp; 1 &amp; 1 &amp; 0 &amp; 1 \end{bmatrix}$$</p> <p>There are no groups of four, so I am now looking for groups of two. I have highlighted the groups of two that I chose: $$\begin{bmatrix} &amp; bc &amp; b'c &amp; bc' &amp; b'c' \\ a &amp; 0 &amp; \color{red}1 &amp; 1 &amp; 0\\ a' &amp; \color{blue}1 &amp; \color{red}1 &amp; 0 &amp; \color{blue}1 \end{bmatrix}$$</p> <p>One red, and another blue.</p> <p>Now, there is one $1$ hanging over there. Normally, I would say that it will belong to a third group (of size one) and be done with it.</p> <p>However, I remember the professor doing an example in which he was in a similar situation, but he actually joined the $1$ with another $1$ that was <strong>already</strong> grouped. I cannot recall his reasoning though.</p> <p>What should I do?</p>
skyking
265,767
<p>As pointed out, you need to order the rows and columns properly so that adjacent cells differ in only one variable. If your labeling is correct you need to swap the rightmost two columns for this (then of course it's customary to order the columns 00, 01, 11 and 10, but that's not necessary for the working of the diagram).</p> <p>If it's just a typo in the labeling and the table is indeed correct you will have, e.g.:</p> <p>$\begin{matrix} &amp; BC &amp; C &amp;&amp; B \\ A &amp; 0 &amp; 1 &amp; 1 &amp; 0\\ &amp; 1 &amp; 1 &amp; 0 &amp; 1\\ \end{matrix}$</p> <p>Now you can find four groupings of two cells (which are prime implicants); the terms are $A\overline B+C\overline B+C\overline A+B\overline A$. The next step is to identify the essential implicants, which are $A\overline B$ (because it's the only one covering $A\overline B\overline C$) and $B\overline A$ (because it's the only one covering $\overline AB\overline C$). The other two terms/groups are not essential, but one of them has to be chosen - there's no reason to choose one over the other so you can just pick one of: </p> <p>$A\overline B+C\overline A+B\overline A$</p> <p>$A\overline B+C\overline B+B\overline A$</p> <p>The way you went wrong is first when you went for the red group second. Normally you go for essential prime implicants first (which will mean that they don't have a matching neighbor outside the group). The second error is that you should always aim for the largest possible groups (that is, prime implicants) - even if you allowed the first error to slip you will anyway have to use a group of two for the hanging 1.</p> <p>You will have a hanging one anyway if you do it correctly:</p> <p>$\begin{matrix} &amp; BC &amp; C &amp;&amp; B \\ A &amp; 0 &amp; \color{red}1 &amp; \color{red}1 &amp; 0\\ &amp; \color{green}1 &amp; 1 &amp; 0 &amp; \color{green}1\\ \end{matrix}$</p> <p>now the black one could be combined with either its red or green neighbor, and since it can, it should be combined.</p>
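<p>A brute-force check over all eight inputs confirms that the two essential prime implicants $A\overline B$ and $B\overline A$, together with either optional term ($C\overline A$ or $C\overline B$), reproduce the relabeled map; a Python sketch:</p>

```python
from itertools import product

# truth table read off the relabeled map: (a, b, c) -> output
f = {(1, 1, 1): 0, (1, 0, 1): 1, (1, 0, 0): 1, (1, 1, 0): 0,
     (0, 1, 1): 1, (0, 0, 1): 1, (0, 0, 0): 0, (0, 1, 0): 1}

def cover1(a, b, c):  # A·B' + C·A' + B·A'
    return (a and not b) or (c and not a) or (b and not a)

def cover2(a, b, c):  # A·B' + C·B' + B·A'
    return (a and not b) or (c and not b) or (b and not a)

for a, b, c in product((0, 1), repeat=3):
    assert int(bool(cover1(a, b, c))) == f[(a, b, c)]
    assert int(bool(cover2(a, b, c))) == f[(a, b, c)]
print("both three-term covers match the map")
```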
203,111
<p>Assume $(A_{i})_{i\in\Bbb N}$ to be an infinite sequence of sets of natural numbers, satisfying</p> <p>$$A_{0}\subseteq A_{1}\subseteq A_{2}\subseteq A_{3}\cdots\subseteq\Bbb N\tag{*}$$</p> <p>For each property $p_{i}$ shown below, state whether </p> <p>• the hypothesis (*) is sufficient to conclude that $p_{i}$ holds; or</p> <p>• the hypothesis (*) is sufficient to conclude that $p_{i}$ does not hold; or</p> <p>• the hypothesis (*) is not sufficient to conclude anything about the truth of $p_{i}$ .</p> <p>Justify your answers (briefly).</p> <ol> <li><p>$p_{1}$ : $\forall k\in\Bbb N.\ A_{k}=\bigcup_{i=0}^{k}A_{i}$</p></li> <li><p>$p_{2}$ : for all $i$, if $A_{i}$ is infinite, then $A_{i}=A_{i+1}$</p></li> <li><p>$p_{3}$ : if $\forall i\in\Bbb N.\ A_{i}\neq A_{i+1}$, then $\bigcup_{i=0}^{\infty}A_{i}=\Bbb N$</p></li> <li><p>$p_{4}$ : if $\forall i\in\Bbb N.\ A_{i}$ is finite, then $\bigcup_{i=0}^{\infty}A_{i}$ is finite</p></li> <li><p>$p_{5}$ : if $\forall i\in\Bbb N.\ A_{i}$ is finite, then $\bigcup_{i=0}^{\infty}A_{i}$ is infinite</p></li> <li><p>$p_{6}$ : if $\forall i\in\Bbb N.\ A_{i}$ is infinite, then $\bigcup_{i=0}^{\infty}A_{i}$ is infinite</p></li> </ol>
Berci
41,488
<p>($*$)$\Rightarrow p_1,p_6$. </p> <p>The rest, $p_2, p_3, p_4, p_5$ are not in general true, even if we assume (*). Try to find counterexamples.</p>
498,785
<p>I'm trying to solve this problem, but I'm not even sure how to formulate it in a coherent mathematical manner, or even what branch of mathematics this might fall in to.</p> <p>Basically I have a set of weights, where each weight individually must remain in the range $[0,1]$. I want to change the mean of the weights to some new mean, also in the $[0,1]$ range, by modifying all the weights slightly (that is, I can't add or remove weights; only modify their values).</p> <p>Also, ideally, after changing the mean to a new value, if I do the algorithm again, and try to return to the original mean, I'll get the same original weights. That is, the mapping function can work as its own inverse. Which I think implies certain things about the distribution of the values of the weights before and after the mapping, but I'm not sure how to describe it in mathematical terms.</p> <p>Last, the amount of movement of individual weights should be minimized, probably in a least squares sort of way. That is, I'd prefer to move all the values a slight amount over moving a single value from 0 to 1, for instance.</p> <p>Does anyone know how I might go about this sort of remapping? Basically I have four requirements:</p> <ol> <li>After modifying the original weights, the new values stay within $[0,1]$.</li> <li>The new mean of the modified weights must be the mean I wanted</li> <li>The mapping can be applied again to get back to the original weights.</li> <li>The change in weights is minimized in a least squares-esque manner.</li> </ol>
Sneftel
10,735
<p>First, put aside all the 0s and 1s, which will stay the same. (If you only have 0s and 1s you'll need to use a different strategy, and you won't be able to do #3.) Put the remaining weights through the logit function. Then find a constant which you can add to all the logit-scale weights such that, after putting them back through the logistic function, you get the desired mean. (I don't have a closed form for this offhand, but Newton-Raphson should work fine.) This should accomplish the first three requirements. It obeys the fourth in that it mostly modifies weights which are around 0.5, while applying less of a change to weights that are nearly 0 or nearly 1.</p>
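<p>A minimal Python sketch of this recipe (names are illustrative; it uses bisection instead of Newton-Raphson to find the constant, which works because the resulting mean is strictly increasing in the shift, and it assumes the target mean is actually reachable given the fixed 0s and 1s):</p>

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def remap_mean(weights, target_mean, tol=1e-12):
    """Shift every interior weight by one constant on the logit scale so
    the overall mean equals target_mean; exact 0s and 1s stay untouched."""
    zs = [logit(w) if 0.0 < w < 1.0 else None for w in weights]

    def mean_after(c):
        vals = [w if z is None else logistic(z + c)
                for z, w in zip(zs, weights)]
        return sum(vals) / len(vals)

    lo, hi = -50.0, 50.0          # mean_after is strictly increasing in c
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_after(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    c = 0.5 * (lo + hi)
    return [w if z is None else logistic(z + c)
            for z, w in zip(zs, weights)]
```

<p>Because the mean is strictly monotone in the shift, remapping to a new mean and then back to the old one recovers the original interior weights, which is requirement 3.</p>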
63,633
<p>(This question came up in a conversation with my professor last week.)</p> <p>Let $\langle G,\cdot \rangle$ be a group. Let $x$ be an element of $G$. <br> Is there always an isomorphism $f : G \to G$ such that $f(x) = x^{-1}$ ? <br> What if $G$ is finite?</p>
GH from MO
11,919
<p>The Mathieu group $M_{11}$ does not have this property. A quote from Example 2.16 in <a href="http://arxiv.org/PS_cache/arxiv/pdf/0707/0707.3895v1.pdf">this paper</a>: "Hence there is no automorphism of $M_{11}$ that maps $x$ to $x^{−1}$."</p> <p>Background how I found this quote as I am no group theorist: I used Google on "groups with no outer automorphism" which led me to this <a href="http://en.wikipedia.org/wiki/Outer_automorphism_group">Wikipedia article</a>, and from there I jumped to this other <a href="http://en.wikipedia.org/wiki/Mathieu_group">Wikipedia article</a>. So I learned that $M_{11}$ has no outer automorphism. Then I used Google again on "elements conjugate to their inverse in the mathieu group" which led me to the above mentioned paper. </p> <p><strong>EDIT:</strong> Following Geoff Robinson's comment let me show that any element $x\in M_{11}$ of order 11 has this property, using only basic group theory and the above <a href="http://en.wikipedia.org/wiki/Mathieu_group">Wikipedia article</a>. The article tells us that $M_{11}$ has 7920 elements of which 1440 have order 11. So $M_{11}$ has 1440/10=144 Sylow 11-subgroups, each cyclic of order 11. These subgroups are conjugates to each other by one of the Sylow theorems, so each of them has a normalizer subgroup of order 7920/144=55. In particular, if $x$ and $x^{-1}$ were conjugate to each other, then they were so by an element of odd order. This, however, is impossible as any element of odd order acts trivially on a 2-element set.</p>
63,633
<p>(This question came up in a conversation with my professor last week.)</p> <p>Let $\langle G,\cdot \rangle$ be a group. Let $x$ be an element of $G$. <br> Is there always an isomorphism $f : G \to G$ such that $f(x) = x^{-1}$ ? <br> What if $G$ is finite?</p>
Tim Dokchitser
3,132
<p>No, such an isomorphism does not always exist, and the smallest counterexample is $G=C_5\rtimes C_4$ with $C_4$ acting faithfully. It is not hard to see that the only automorphisms of $G$ are inner, and that they cannot map an element of order 4 to its inverse.</p>
2,512,736
<p>I do not understand how this result is a special case of theorem 9.1, could anyone explain this for me please?</p> <p><a href="https://i.stack.imgur.com/hsgYr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/hsgYr.png" alt="enter image description here"></a></p> <p>This is theorem 9.1:</p> <p><a href="https://i.stack.imgur.com/jUvFU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jUvFU.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/po6xr.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/po6xr.png" alt="enter image description here"></a></p>
greg
357,854
<p>Let a lowercase letter stand for the corresponding vectorized matrix, e.g. $$f={\rm vec}(F), \,\,\, x={\rm vec}(X)$$ Write the function and its differential $$\eqalign{ F &amp;= A^TX + XA - XPX + Q \cr dF &amp;= A^T\,dX + dX\,A - dX\,PX -XP\,dX \cr }$$ and vectorize $$\eqalign{ df &amp;= (I\otimes A^T+A^T\otimes I-I\otimes XP-X^TP^T\otimes I)\,dx \cr J=\frac{\partial f}{\partial x} &amp;= M - (I\otimes XP) - (PX\otimes I)^T \cr }$$ Note that $$M=(I\otimes A^T+A^T\otimes I)$$ is constant between iterations and only needs to be calculated once.</p> <p>Then we have the standard Newton iteration $$\eqalign{ Js &amp;= f \cr x_+ &amp;= x - s \cr }$$ All of the Kronecker products are sparse, so by utilizing the sparse solvers from your matrix library, you can solve moderately-sized systems with this simple method.</p>
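<p>A dense pure-Python sketch of the iteration on a toy $2\times 2$ example (helper names are illustrative; a serious implementation would use a linear-algebra library and the sparse Kronecker structure noted above):</p>

```python
def T(A):                       # transpose
    return [list(r) for r in zip(*A)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def add(A, B, s=1.0):           # A + s*B
    return [[A[i][j] + s * B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

def kron(A, B):                 # Kronecker product of square matrices
    m = len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(len(A) * m)] for i in range(len(A) * m)]

def vec(X):                     # column-stacking vectorization
    n = len(X)
    return [X[i][j] for j in range(n) for i in range(n)]

def unvec(v, n):
    return [[v[i + n * j] for j in range(n)] for i in range(n)]

def solve(J, b):                # Gaussian elimination, partial pivoting
    n = len(b)
    M = [J[i][:] + [b[i]] for i in range(n)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def newton_riccati(A, P, Q, X, iters=30):
    n = len(A)
    I = [[float(i == j) for j in range(n)] for i in range(n)]
    M = add(kron(I, T(A)), kron(T(A), I))   # constant part, built once
    for _ in range(iters):
        XP = mul(X, P)
        F = add(add(mul(T(A), X), mul(X, A)), add(Q, mul(XP, X), s=-1.0))
        J = add(add(M, kron(I, XP), s=-1.0), kron(T(mul(P, X)), I), s=-1.0)
        S = unvec(solve(J, vec(F)), n)
        X = add(X, S, s=-1.0)               # x_+ = x - s
    return X

# toy example: A = -I, P = Q = I, whose solution is X = (sqrt(2) - 1) * I
A = [[-1.0, 0.0], [0.0, -1.0]]
P = [[1.0, 0.0], [0.0, 1.0]]
Q = [[1.0, 0.0], [0.0, 1.0]]
X = newton_riccati(A, P, Q, [[1.0, 0.0], [0.0, 1.0]])
```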
94,440
<p>In Sean Carroll's <em>Spacetime and Geometry</em>, a formula is given as $${\nabla _\mu }{\nabla _\sigma }{K^\rho } = {R^\rho }_{\sigma \mu \nu }{K^\nu },$$</p> <p>where $K^\mu$ is a Killing vector satisfying Killing's equation ${\nabla _\mu }{K_\nu } +{\nabla _\nu }{K_\mu }=0$ and the convention of Riemann curvature tensor is</p> <p>$$\left[\nabla_{\mu},\nabla_{\nu}\right]V^{\rho}={R^\rho}_{\sigma\mu\nu}V^{\sigma}.$$</p> <p>So how to prove the this formula (the connection is Levi-Civita)?</p>
K.defaoite
553,081
<p>A solution that is simpler still, requiring no differentiation or fancy identities other than one of the Bianchi identities, is as follows.</p> <hr /> <p>We know that the Riemann tensor can measure how much the covariant derivatives commute with each other, e.g. <span class="math-container">\begin{equation} [ \nabla _{\mu } ,\nabla _{\nu }] k_{\alpha } =-R^{\beta }{}_{\alpha \mu \nu } k_{\beta } \tag{4} \end{equation}</span> We can shuffle the indices in (4) and use the antisymmetry of the Lie bracket to get the three equations <span class="math-container">\begin{gather} [ \nabla _{\nu } ,\nabla _{\mu }] k_{\alpha } =R^{\beta }{}_{\alpha \mu \nu } k_{\beta } \tag{5A}\\ [ \nabla _{\mu } ,\nabla _{\alpha }] k_{\nu } =R^{\beta }{}_{\nu \alpha \mu } k_{\beta } \tag{5B}\\ [ \nabla _{\alpha } ,\nabla _{\nu }] k_{\mu } =R^{\beta }{}_{\mu \nu \alpha } k_{\beta } \tag{5C} \end{gather}</span> We now look at <span class="math-container">$\displaystyle ( 5\mathrm{C}) -( 5\mathrm{B}) -( 5\mathrm{A})$</span>: <span class="math-container">\begin{equation*} [ \nabla _{\alpha } ,\nabla _{\nu }] k_{\mu } -[ \nabla _{\mu } ,\nabla _{\alpha }] k_{\nu } -[ \nabla _{\nu } ,\nabla _{\mu }] k_{\alpha } =\left( R^{\beta }{}_{\mu \nu \alpha } -R^{\beta }{}_{\nu \alpha \mu } -R^{\beta }{}_{\alpha \mu \nu }\right) k_{\beta } \end{equation*}</span> Applying the first Bianchi identity, <span class="math-container">\begin{equation*} R^{\beta }{}_{\alpha \mu \nu } +R^{\beta }{}_{\mu \nu \alpha } +R^{\beta }{}_{\nu \alpha \mu } =0 \end{equation*}</span> This simplifies to <span class="math-container">\begin{equation*} [ \nabla _{\alpha } ,\nabla _{\nu }] k_{\mu } -[ \nabla _{\mu } ,\nabla _{\alpha }] k_{\nu } -[ \nabla _{\nu } ,\nabla _{\mu }] k_{\alpha } =2R^{\beta }{}_{\mu \nu \alpha } k_{\beta } \end{equation*}</span> Expanding the Lie brackets, <span class="math-container">\begin{gather*} [ \nabla _{\alpha } ,\nabla _{\nu }] k_{\mu } -[ \nabla _{\mu } ,\nabla _{\alpha }] k_{\nu } -[ \nabla _{\nu } ,\nabla 
_{\mu }] k_{\alpha }\\ =( \nabla _{\alpha } \nabla _{\nu } -\nabla _{\nu } \nabla _{\alpha }) k_{\mu } -( \nabla _{\mu } \nabla _{\alpha } -\nabla _{\alpha } \nabla _{\mu }) k_{\nu } -( \nabla _{\nu } \nabla _{\mu } -\nabla _{\mu } \nabla _{\nu }) k_{\alpha }\\ =\nabla _{\alpha } \nabla _{\nu } k_{\mu } -\nabla _{\nu } \nabla _{\alpha } k_{\mu } +\nabla _{\alpha } \nabla _{\mu } k_{\nu } -\nabla _{\mu } \nabla _{\alpha } k_{\nu } +\nabla _{\mu } \nabla _{\nu } k_{\alpha } -\nabla _{\nu } \nabla _{\mu } k_{\alpha } \end{gather*}</span> Using the linearity of <span class="math-container">$\displaystyle \nabla $</span>, <span class="math-container">\begin{gather*} [ \nabla _{\alpha } ,\nabla _{\nu }] k_{\mu } -[ \nabla _{\mu } ,\nabla _{\alpha }] k_{\nu } -[ \nabla _{\nu } ,\nabla _{\mu }] k_{\alpha }\\ =\nabla _{\alpha }( \nabla _{\nu } k_{\mu } +\nabla _{\mu } k_{\nu }) +\nabla _{\mu }( \nabla _{\nu } k_{\alpha } -\nabla _{\alpha } k_{\nu }) -\nabla _{\nu }( \nabla _{\alpha } k_{\mu } +\nabla _{\mu } k_{\alpha }) \end{gather*}</span> Using Killing's equation <span class="math-container">$\displaystyle \nabla _{( \rho } k_{\sigma )} =0$</span> the first and third terms vanish, leaving <span class="math-container">\begin{equation*} [ \nabla _{\alpha } ,\nabla _{\nu }] k_{\mu } -[ \nabla _{\mu } ,\nabla _{\alpha }] k_{\nu } -[ \nabla _{\nu } ,\nabla _{\mu }] k_{\alpha } =\nabla _{\mu }( \nabla _{\nu } k_{\alpha } -\nabla _{\alpha } k_{\nu }) =2\nabla _{\mu } \nabla _{\nu } k_{\alpha } \end{equation*}</span> Hence, <span class="math-container">\begin{equation} \boxed{\nabla _{\mu } \nabla _{\nu } k_{\alpha } =R^{\beta }{}_{\mu \nu \alpha } k_{\beta }} \end{equation}</span></p>
1,677,868
<p>The sequence is:</p> <p>$$a_n = \frac {2^{2n} \cdot1\cdot3\cdot5\cdot...\cdot(2n+1)} {(2n!)\cdot2\cdot4\cdot6\cdot...\cdot(2n)} $$</p>
Claude Leibovici
82,404
<p>In the same spirit as Brian M. Scott's answer, using $$2\cdot 4\cdot 6\cdot\ldots\cdot(2n)=2^nn!$$ and $${1\cdot 3\cdot 5\cdots (2n+1)}=\frac{1\cdot 2\cdot 3\cdot 4\cdot 5\cdots (2n+1)}{2\cdot4\cdot6\cdots(2n)}=\frac{(2n+1)!}{2^n n! }$$ All of this makes $$a_n=\frac{2^{2n}\frac{(2n+1)!}{2^n n!}}{2^n n! (2n)!}=\frac{2n+1}{ (n!)^2}$$ which, following André Nicolas's suggestion, seems to make $$\frac{a_{n+1}}{a_n}=\frac{2 n+3}{(n+1)^2 (2 n+1)}$$</p>
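<p>The closed form is easy to confirm with exact rational arithmetic (treating the denominator's factorial as $(2n)!$, as this answer does); e.g. in Python:</p>

```python
from fractions import Fraction
from math import factorial, prod

def a_product(n):
    """a_n straight from the defining product, as an exact rational."""
    odd = prod(range(1, 2 * n + 2, 2))      # 1*3*5*...*(2n+1)
    even = prod(range(2, 2 * n + 1, 2))     # 2*4*6*...*(2n)
    return Fraction(4**n * odd, factorial(2 * n) * even)

for n in range(1, 10):
    # closed form a_n = (2n+1)/(n!)^2
    assert a_product(n) == Fraction(2 * n + 1, factorial(n) ** 2)
    # ratio test step: a_{n+1}/a_n = (2n+3) / ((n+1)^2 (2n+1))
    assert a_product(n + 1) / a_product(n) == Fraction(
        2 * n + 3, (n + 1)**2 * (2 * n + 1))
```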
365,483
<p>Let <span class="math-container">$f\colon X\to \mathbb{A}^n_{\mathbb{C}}$</span> be a morphism of <span class="math-container">$\mathbb{C}$</span>-schemes. Suppose <span class="math-container">$f$</span> is (a) separated, (b) flat, (c) locally of finite type, (d) all fibers are quasi-compact, is <span class="math-container">$X$</span> necessarily quasi-compact?</p>
R. van Dobben de Bruyn
82,179
<p>Here is a counterexample:</p> <p><strong>Example.</strong> We will define <span class="math-container">$X$</span> as a union of affine varieties <span class="math-container">$$U_0 \subseteq U_1 \subseteq \ldots$$</span> as follows: start with <span class="math-container">$U_0 = \mathbf A^1 \times (\mathbf A^1 \setminus 0) \subseteq \mathbf A^2 = V_0$</span> with its natural projection to <span class="math-container">$\mathbf A^1$</span>, and let <span class="math-container">$Z_0 = \mathbf A^1 \times 0$</span> be the complement of <span class="math-container">$U_0$</span> in <span class="math-container">$V_0$</span>.</p> <p>Choose a sequence of points <span class="math-container">$x_1,x_2,\ldots$</span> on <span class="math-container">$\mathbf A^1$</span>. Define <span class="math-container">$V_i$</span> as the blowup of <span class="math-container">$V_0$</span> in the points <span class="math-container">$(x_1,0), \ldots, (x_i,0)$</span>, so we have maps <span class="math-container">$$\ldots \to V_i \to V_{i-1} \to \ldots \to V_0.$$</span> Let <span class="math-container">$E_i$</span> be the exceptional divisor for <span class="math-container">$V_i \to V_{i-1}$</span>, let <span class="math-container">$Z_i$</span> be the strict transform of <span class="math-container">$Z_0$</span> in <span class="math-container">$V_i$</span>, and let <span class="math-container">$U_i$</span> be its complement in <span class="math-container">$V_i$</span>. 
For each <span class="math-container">$i$</span>, the centre of the blowup <span class="math-container">$V_i \to V_{i-1}$</span> is contained in <span class="math-container">$Z_{i-1}$</span>, giving an isomorphism <span class="math-container">$$V_i\setminus(E_i \cup Z_i) \stackrel\sim\longrightarrow V_{i-1}\setminus Z_{i-1},$$</span> hence an open immersion <span class="math-container">$$U_{i-1} = V_{i-1}\setminus Z_{i-1} \cong V_i\setminus (E_i \cup Z_i) \hookrightarrow V_i \setminus Z_i = U_i.$$</span> Define <span class="math-container">$X$</span> as the union. The maps <span class="math-container">$U_i \to \mathbf A^1$</span> are compatible, so they give a map <span class="math-container">$X \to \mathbf A^1$</span>. It is flat since <span class="math-container">$X$</span> is integral and dominant over the Dedekind scheme <span class="math-container">$\mathbf A^1$</span>. It is separated and locally of finite type since <span class="math-container">$X \to \operatorname{Spec} k$</span> is. Finally, the fibres are quasi-compact: each step <span class="math-container">$U_{i-1} \hookrightarrow U_i$</span> only modifies the fibre over <span class="math-container">$x_i$</span>. But <span class="math-container">$X$</span> itself is not quasi-compact. <span class="math-container">$\square$</span></p>
330,991
<p>Many things in math can be formulated quite differently; see the list of statements equivalent to RH <a href="https://mathoverflow.net/questions/39944/collection-of-equivalent-forms-of-riemann-hypothesis">here</a>, for example, with RH formulated as a bound on lcm of consecutive integers, as an integral equality, etc.</p> <p>I am wondering about equivalent formulations of the P vs. NP problem. Formulations that are very different from questions such as &quot;Is TSP in P?&quot;, formulations that may seem unrelated to complexity theory.</p>
Mohammad Al-Turkistany
8,784
<p>The P vs NP problem can be formulated in terms of <em>incomplete</em> sets in NP. Ladner's theorem can be stated as:</p> <p><span class="math-container">$P \ne NP$</span> if and only if there is an incomplete set in NP.</p> <p>Here an incomplete set is a set in NP that is neither in <span class="math-container">$P$</span> nor complete for <span class="math-container">$NP$</span> under many-one polynomial-time reductions (Karp reductions).</p> <p>Another formulation in terms of sparse sets is Mahaney's theorem:</p> <p>There is no sparse NP-complete set if and only if <span class="math-container">$P \ne NP$</span> (under Karp reductions).</p> <p>Complexity Theory and Cryptology: An Introduction to Cryptocomplexity by Jörg Rothe, page 106.</p>
338,535
<p>Suppose that $f$ is a function defined on the set of natural numbers such that $$f(1)+ 2^2f(2)+ 3^2f(3)+...+n^2f(n) = n^3f(n)$$ for all positive integers $n$. Given that $f(1)= 2013$, find the value of $f(2013)$.</p>
Christian Blatter
1,303
<p>Introduce the auxiliary function $$g(n):=n^2 f(n)\qquad(n\geq1)\ .$$ Then $$n g(n)= g(1)+g(2)+\ldots+g(n)\qquad(n\geq1)$$ and therefore $$(n+1)g(n+1)-n g(n)=g(n+1)\ ,$$ or $g(n+1)=g(n)$ for all $n\geq1$. It follows that $$2013^2 f(2013)= g(2013)= g(1)=1^2 f(1)\ ,$$ whence $f(2013)={1\over2013}$.</p>
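The argument can be double-checked with exact rational arithmetic. A small sketch, using the equivalent recurrence $f(n) = \big(\sum_{k&lt;n} k^2 f(k)\big)/(n^3-n^2)$ obtained by moving $n^2 f(n)$ to the left-hand side:

```python
from fractions import Fraction

f = {1: Fraction(2013)}
partial = Fraction(2013)             # running value of 1^2 f(1) + ... + m^2 f(m)
for n in range(2, 2014):
    # n^3 f(n) = partial + n^2 f(n)  =>  f(n) = partial / (n^3 - n^2)
    f[n] = partial / (n ** 3 - n ** 2)
    partial += n ** 2 * f[n]
```

Exact arithmetic confirms both the final value $f(2013)=\tfrac1{2013}$ and the pattern $f(n)=2013/n^2$ implied by the constant $g$.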
2,208,943
<p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p> <p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p> <p>Is there a book that provides some historical motivation for the rigorous developement of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
mathreadler
213,607
<p>The purpose of rigor is not so much to make sure something is true. It is to make sure we know what we are actually assuming. If one forces specificity about what is assumed, then new ways to define things may also become clearer.</p> <p>The parallel axiom of Euclidean geometry is a good example. By forcing ourselves to try to prove it (which we now know is impossible), we gradually realized that other ways of building the theory are possible. Had we not bothered to try to prove it and just taken it for granted, maybe those other possibilities would never have occurred to us.</p> <p>With each added piece of specificity there is always an "in what <strong>other</strong> ways could this be done?" that has a chance to pop up, leading to new theories.</p>
2,208,943
<p>I am about to finish my first year of studying mathematics at university and have completed the basic linear algebra/calculus sequence. I have started to look at some real analysis and have really enjoyed it so far.</p> <p>One thing I feel I am lacking in is motivation. That is, the difference in rigour between the usual introduction to calculus class and real analysis seems to be quite strong. While I appreciate rigour for aesthetic reasons, I have trouble understanding why the transition from the 18th century Euler style calculus to the rigorous "delta-epsilon" formulation of calculus was necessary. </p> <p>Is there a book that provides some historical motivation for the rigorous developement of calculus? Perhaps something that gives several counterexamples that occur when one is only equipped with a non-rigorous (i.e. first year undergraduate) formulation of calculus. For example, were there results that were thought to be true but turned out to be false when the foundations of calculus were strengthened? I suppose if anyone knows good counterexamples themselves they could list them here as well. </p>
Philip Roe
430,997
<p>@TheGreatDuck This is a fascinating thread. Let me comment on your aeronautical contribution from the viewpoint of an aeronautical engineer.</p> <p>There are times when rigor is important and times when it isn't. For the first situation, consider the design of software to undertake air traffic control. Much attention is being given at the moment to "verifiable" algorithms, where it can be "rigorously" established that no possible situation has been overlooked. I am not sure that the standard of rigor would convince a modern analyst, but there is a recognition that intuition can be misleading and that formal analysis has considerable value.</p> <p>An example where the search for rigor would be misplaced is the calculation of the airflow by solving the Navier-Stokes equations. Insistence on rigor would require waiting around until the Navier-Stokes equations are shown to be well-posed, which is probably not going to happen soon. Until that day comes, designers will rely on wind-tunnel experiments, flight tests, and decades of accumulated experience. For now, this is MUCH safer than attempting to prove theorems. In fact, if I knew that the designers were trying to rely on theorems I would think very seriously before buying an airline ticket.</p> <p>The value of rigor depends entirely on what you are trying to do, and how quickly you need to do it. This is true within mathematics as much as in its applications. Without Euler's gleeful nonchalance the pace of mathematical advance would have been greatly delayed.</p>
3,489,212
<p>Playing around I found a series which looks to converge to the square root function.</p> <p><span class="math-container">$$\sqrt{p^2+q}\overset{?}{=}p\left(1-\sum_{n=1}^{+\infty}\left(-\frac q{2p^2}\right)^n\right)$$</span></p> <p>Is it correct?</p>
kimchi lover
457,779
<p>No, it is not correct. Your right-hand expression is a geometric series for <span class="math-container">$$p\left(1-\frac{-\frac q{2p^2}}{1+\frac q{2p^2}}\right),$$</span> which is a rational expression in <span class="math-container">$p$</span> and <span class="math-container">$q$</span>.</p> <p>The correct answer is given by Newton's <a href="https://en.wikipedia.org/wiki/Binomial_theorem#Newton&#39;s_generalized_binomial_theorem" rel="nofollow noreferrer">"generalized binomial theorem"</a>, in the form <span class="math-container">$$\sqrt{p^2+q}=p\sqrt{1+q/p^2} = p\sum_{k\ge 0} \binom{1/2} k \left(\frac q {p^2}\right)^k,$$</span> which converges for <span class="math-container">$|q/p^2|&lt;1$</span>. The first two values of <span class="math-container">$\binom{1/2} k$</span> are <span class="math-container">$1$</span> and <span class="math-container">$1/2$</span>, which match what you have. But <span class="math-container">$\binom{1/2}{2}=-1/8$</span> which does not match your <span class="math-container">$-1/4$</span>.</p>
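A numeric comparison makes the discrepancy visible (a throwaway sketch; the sample values and tolerances are arbitrary):

```python
from math import sqrt

def geometric_value(p, q):
    # Closed form of the asker's expression, valid for |q/(2 p^2)| < 1
    r = -q / (2 * p * p)
    return p * (1 - r / (1 - r))

def binomial_sqrt(p, q, terms=40):
    # Newton's generalized binomial series: p * sum_k binom(1/2, k) (q/p^2)^k
    x = q / (p * p)
    total, coef = 0.0, 1.0            # coef = binom(1/2, k), starting at k = 0
    for k in range(terms):
        total += coef * x ** k
        coef *= (0.5 - k) / (k + 1)   # binom(1/2, k+1) from binom(1/2, k)
    return p * total

p, q = 1.0, 0.5
exact = sqrt(p * p + q)
```

For $p=1$, $q=0.5$ the geometric expression gives $1.2$, while $\sqrt{1.5}\approx 1.2247$; the binomial series matches the true square root to full precision.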
364,278
<p>Let <span class="math-container">$X$</span> be a variety over a number field <span class="math-container">$K$</span>. Then it is known that for any topological covering <span class="math-container">$X' \to X(\mathbb{C})$</span>, the topological space <span class="math-container">$X'$</span> can be given the structure of a <span class="math-container">$\overline{K}$</span>-variety in such a way so that the morphism <span class="math-container">$f: X' \to X$</span> inducing the topological map is a finite etale morphism over <span class="math-container">$\overline{K}$</span>. However, the variety <span class="math-container">$X'$</span> and the morphism <span class="math-container">$f$</span> may not descend to <span class="math-container">$K$</span>.</p> <p>My question is as follows: does there always exist a further finite etale covering <span class="math-container">$f' : X'' \to X'$</span> such that the composition <span class="math-container">$X'' \to X$</span> may be defined over <span class="math-container">$K$</span>?</p> <p>EDIT: Just to be clear, I'd like all the covers involved to be geometrically connected to avoid trivial solutions.</p>
Will Chen
15,242
<p>Here's a simple argument assuming <span class="math-container">$X$</span> admits a <span class="math-container">$K$</span>-rational point, and that <span class="math-container">$X$</span> has a finitely generated geometric fundamental group. In fact the &quot;further&quot; covering <span class="math-container">$X''$</span> can be chosen to be geometrically Galois over <span class="math-container">$X$</span>.</p> <p>Let <span class="math-container">$\Pi := \pi_1(X_K)$</span>, let <span class="math-container">$\overline{\Pi} := \pi_1(X_{\overline{K}})$</span> (assumed to be topologically finitely generated). Let <span class="math-container">$G_K := \text{Gal}(\overline{K}/K)$</span>.</p> <p>Since we're working over a field, there's a homotopy exact sequence <span class="math-container">$$1\rightarrow \overline{\Pi}\rightarrow\Pi\rightarrow G_K\rightarrow 1$$</span> from which we get a canonical outer action <span class="math-container">$G_K\rightarrow\text{Out}(\overline{\Pi})$</span>.</p> <p>The covering <span class="math-container">$X'$</span> (over <span class="math-container">$\overline{K})$</span> corresponds to a finite index subgroup <span class="math-container">$H \le \overline{\Pi}$</span>. It would suffice to find a finite index normal subgroup <span class="math-container">$\Gamma\lhd \overline{\Pi}$</span> which is stabilized by <span class="math-container">$G_K$</span>. Indeed, using the <span class="math-container">$K$</span>-rational point of <span class="math-container">$X$</span>, the homotopy exact sequence is split, so the outer action of <span class="math-container">$G_K$</span> comes from an honest action, and <span class="math-container">$\Pi = \overline{\Pi}\rtimes G_K$</span> relative to this action. 
If <span class="math-container">$\Gamma\lhd\overline{\Pi}$</span> is stabilized by <span class="math-container">$G_K$</span>, then the subgroup <span class="math-container">$\Gamma\rtimes G_K\le \Pi$</span> visibly corresponds to a geometrically connected finite cover of <span class="math-container">$X_K$</span> (though it may not be normal inside <span class="math-container">$\Pi$</span>).</p> <p>To find this <span class="math-container">$\Gamma$</span>, let <span class="math-container">$N\le H$</span> be the intersection of all the <span class="math-container">$\overline{\Pi}$</span>-conjugates of <span class="math-container">$H$</span>, so <span class="math-container">$N$</span> is normal and of finite index inside <span class="math-container">$\overline{\Pi}$</span>. Let <span class="math-container">$\Gamma$</span> be the intersection of the kernels of all the surjective homomorphisms <span class="math-container">$\overline{\Pi}\rightarrow\overline{\Pi}/N$</span>. Since <span class="math-container">$\overline{\Pi}$</span> is finitely generated, there are only finitely many such homomorphisms, so <span class="math-container">$\Gamma$</span> is also finite index inside <span class="math-container">$\overline{\Pi}$</span>. Moreover, it's easy to check that <span class="math-container">$\Gamma$</span> is <em>characteristic</em> inside <span class="math-container">$\overline{\Pi}$</span>. Thus, <span class="math-container">$G_K$</span> must stabilize <span class="math-container">$\Gamma$</span>, and hence <span class="math-container">$\Gamma\rtimes G_K$</span> will correspond to the desired covering <span class="math-container">$X_K''\rightarrow X_K$</span>, which is moreover geometrically Galois.</p>
3,453,408
<p>I'm reading through some lecture notes and see this in the context of solving ODEs: <span class="math-container">$$\int\frac{dy}{y}=\int\frac{dx}{x} \rightarrow \ln{|y|}=\ln{|x|}+\ln{|C|}$$</span> why is the constant of integration natural logged here?</p>
Fimpellizzeri
173,410
<p>No real reason, from this simple equation, that I can see. It could be any constant. Perhaps the author intended to take the exponential of both sides in the following step, giving <span class="math-container">$|y| = |C|\,|x|$</span>, where writing the constant as <span class="math-container">$\ln|C|$</span> makes the multiplicative constant come out neatly and reminds you that it is positive.</p>
155,547
<p>Given $X_1, \ldots, X_n$ from $\mathcal{N} (\mu, \sigma^2)$.</p> <p>I have to compute the probability: $$P\left(|\bar{X} - \mu| &gt; S\right)$$ where $\bar{X}$ is the sample mean and $S^2$ is the sample variance.</p> <p>I tried to expand: $$P\left(\bar{X}^2 + \mu^2 - \bar{X}\mu &gt; \frac{1}{n}\sum {X_i}^2 + \frac{1}{n}\sum\bar{X} - 2\left(\frac{1}{n}\sum X_i\right) \bar{X} \right) $$ $$P\left( \mu^2 - \bar{X}\mu &gt; \frac{1}{n}\sum {X_i}^2 - 2\bar{X}^2 \right) $$</p> <p>but it does not seem to be helpful.</p> <p>Can someone help me?</p>
Michael Hardy
11,667
<p>$$ \frac{\bar X - \mu}{\sigma/\sqrt{n}} \sim \mathcal{N}(0,1) $$ $$ \frac{\bar X - \mu}{S/\sqrt{n}} \sim T_{n-1} $$ where $T_k$ is Student's t-distribution with $k$ degrees of freedom.</p> <p>So $$ \Pr\left(\left|\frac{\bar X - \mu}{S}\right| &gt; 1\right) = \Pr\left(\left|\frac{\bar X - \mu}{S/\sqrt{n}}\right| &gt; \sqrt{n}\right) = \Pr(|T|&gt;\sqrt{n}) $$ where the distribution of $T$ is Student's t-distribution with $n-1$ degrees of freedom.</p> <p>I don't know any neat expression for this. For any particular value of $n$, you can get a number from standard on-the-shelf software.</p>
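That "on-the-shelf" number is also easy to get with nothing but the standard library. A sketch using Simpson's rule on the $t_{n-1}$ density (something like `2 * scipy.stats.t.sf(sqrt(n), n - 1)` should give the same value; the function names here are mine):

```python
import math

def t_pdf(x, k):
    # Density of Student's t with k degrees of freedom
    c = math.gamma((k + 1) / 2) / (math.sqrt(k * math.pi) * math.gamma(k / 2))
    return c * (1 + x * x / k) ** (-(k + 1) / 2)

def two_sided(n, steps=20000):
    # Pr(|T| > sqrt(n)) for T ~ t_{n-1}, via Simpson's rule on [-sqrt(n), sqrt(n)]
    k, t = n - 1, math.sqrt(n)
    h = 2 * t / steps
    s = t_pdf(-t, k) + t_pdf(t, k)
    for i in range(1, steps):
        s += (4 if i % 2 else 2) * t_pdf(-t + i * h, k)
    return 1 - s * h / 3
```

For $n=2$ the distribution $t_1$ is Cauchy, where there is a closed form $1-\tfrac2\pi\arctan\sqrt2\approx0.392$, which the quadrature reproduces.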
4,021,994
<p>I was taught in high school algebra to translate word problems into algebraic expressions. So when I encountered <a href="https://artofproblemsolving.com/wiki/index.php/2016_AMC_10A_Problems/Problem_3" rel="nofollow noreferrer">this</a> problem I tried to reason out an algebra formula for it</p> <blockquote> <p>For every dollar Ben spent on bagels, David spent 25 cents less. Ben paid $12.50 more than David. How much did they spend in the bagel store together?</p> </blockquote> <p>To solve this I imagined a series of comparisons when Ben spends <span class="math-container">$x$</span>, David spends <span class="math-container">$.75x$</span>. Loop this relationship until <span class="math-container">$x - .75x \approx 12.50$</span>. Good. Done. <span class="math-container">$x = 50$</span>, then add David's for the answer. Coming from computers, I would have set this up in code where a loop (recursion) would increase <span class="math-container">$x$</span> until the condition <span class="math-container">$x - .75x = 12.50$</span> was met, then the &quot;loop counter/accumulator&quot; would be how much Ben spent, i.e., <span class="math-container">$50$</span>, etc.</p> <p>I'm a beginner with math, but it seems like there should be a better approach, something with series and sequences or even calculus derivatives, something better than my brute-force computer algorithm. Can someone enlighten? The &quot;answer&quot; given at the site (see link) is its own brute-force and hardly satisfying. I'm thinking there should be something more formal -- at least for the first part that derives <span class="math-container">$50$</span>.</p> <p><strong>Update</strong></p> <p>I think everyone so far has missed my point. Many of you simply re-did the problem again. I'm wondering if there is a more <em>formal</em> way to do this other than just &quot;figuring it out&quot; (FIO). The whole FIO routine is murky. 
It looks like a limit problem; it looks like a system of equations, but I'm not experienced enough to know exactly. If there isn't, then let's call it a day....</p>
Laowl Lomao
851,165
<p>You can set this up as a single linear equation: let <span class="math-container">$x$</span> be the amount paid by Ben, so David paid <span class="math-container">$0.75x$</span>. Then <span class="math-container">$x - 0.75x = 12.50$</span>, which gives <span class="math-container">$x = 50$</span> and a combined total of <span class="math-container">$x + 0.75x = 87.50$</span>.</p>
4,021,994
<p>I was taught in high school algebra to translate word problems into algebraic expressions. So when I encountered <a href="https://artofproblemsolving.com/wiki/index.php/2016_AMC_10A_Problems/Problem_3" rel="nofollow noreferrer">this</a> problem I tried to reason out an algebra formula for it</p> <blockquote> <p>For every dollar Ben spent on bagels, David spent 25 cents less. Ben paid $12.50 more than David. How much did they spend in the bagel store together?</p> </blockquote> <p>To solve this I imagined a series of comparisons when Ben spends <span class="math-container">$x$</span>, David spends <span class="math-container">$.75x$</span>. Loop this relationship until <span class="math-container">$x - .75x \approx 12.50$</span>. Good. Done. <span class="math-container">$x = 50$</span>, then add David's for the answer. Coming from computers, I would have set this up in code where a loop (recursion) would increase <span class="math-container">$x$</span> until the condition <span class="math-container">$x - .75x = 12.50$</span> was met, then the &quot;loop counter/accumulator&quot; would be how much Ben spent, i.e., <span class="math-container">$50$</span>, etc.</p> <p>I'm a beginner with math, but it seems like there should be a better approach, something with series and sequences or even calculus derivatives, something better than my brute-force computer algorithm. Can someone enlighten? The &quot;answer&quot; given at the site (see link) is its own brute-force and hardly satisfying. I'm thinking there should be something more formal -- at least for the first part that derives <span class="math-container">$50$</span>.</p> <p><strong>Update</strong></p> <p>I think everyone so far has missed my point. Many of you simply re-did the problem again. I'm wondering if there is a more <em>formal</em> way to do this other than just &quot;figuring it out&quot; (FIO). The whole FIO routine is murky. 
It looks like a limit problem; it looks like a system of equations, but I'm not experienced enough to know exactly. If there isn't, then let's call it a day....</p>
poetasis
546,655
<p>Ben pulls ahead of David by <span class="math-container">$\$0.25$</span> for each dollar Ben spends, so a difference of <span class="math-container">$\$12.50$</span> means Ben spent <span class="math-container">$\quad\dfrac{12.50}{0.25}=50\quad$</span> dollars, and David spent another <span class="math-container">$\quad 37.50,\quad$</span> for a total of <span class="math-container">$\quad\$87.50$</span>.</p>
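To the asker's update: the "formal" method here is just solving a one-variable linear equation, and it agrees with the loop described in the question. A minimal sketch (variable names are mine):

```python
# Brute-force search, as described in the question: grow Ben's total in
# quarter-dollar steps until the gap reaches $12.50.
ben = 0.0
while ben - 0.75 * ben < 12.50:
    ben += 0.25
loop_total = ben + 0.75 * ben

# Formal version: x - 0.75x = 12.50  =>  0.25x = 12.50  =>  x = 50.
x = 12.50 / 0.25
closed_total = x + 0.75 * x
```

All the quantities involved are exact binary fractions, so both routes land on exactly $87.50.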
403,631
<p>$a^n \rightarrow 0$ as $n \rightarrow \infty$ for $\left|a\right| &lt; 1 $ <br/> Hint: $u_{2n} = u_{n}^2$</p> <p>I have no idea how to prove this; it looks obvious, but the proof turns out to be really hard... I am taking a real analysis course with a lot of proofs, and I am stuck here. Any advice? Practice makes perfect? </p>
robjohn
13,854
<p>Since $0\le|a|\lt1$, we have $0\le|a|^{n+1}\le|a|^n$. Since $|a|^n$ is a non-increasing sequence, bounded below, $A=\lim\limits_{n\to\infty}|a|^n$ exists. Then, $$ \begin{align} |a|A &amp;=|a|\lim_{n\to\infty}|a|^n\\ &amp;=\lim_{n\to\infty}|a|^n\\ &amp;=A \end{align} $$ Thus, $(|a|-1)A=0\implies A=0$. Therefore, $$ \left|\lim_{n\to\infty}a^n\right|=\lim_{n\to\infty}|a|^n=A=0 $$</p>
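Both the monotonicity used in this argument and the $u_{2n}=u_n^2$ hint from the question can be sanity-checked with exact arithmetic. A throwaway sketch with $a=\tfrac9{10}$:

```python
from fractions import Fraction

a = Fraction(9, 10)
u = [a ** n for n in range(0, 201)]          # u[n] = a^n, exactly

# a^n is non-increasing ...
decreasing = all(u[n + 1] <= u[n] for n in range(200))
# ... and the hint u_{2n} = u_n^2 holds exactly
hint_holds = all(u[2 * n] == u[n] ** 2 for n in range(101))
```

By $n=100$ the value has already dropped below $10^{-4}$, consistent with the limit being $0$.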
319,262
<p>If the first 10 positive integers are placed around a circle, in any order, must there exist 3 integers in consecutive locations around the circle whose sum is greater than or equal to 17? </p> <p>This is from a textbook called "Discrete Mathematics and Its Applications"; however, it does not provide a solution for this question. </p> <p>May I know how to tackle this question? </p> <p>Edit: I looked at the actual question again and realized it asks for a sum greater than or equal to 17. My apologies.</p>
Steve Kass
60,500
<p>Remove the number 1 and unwrap the circle of numbers into a row $a, b, c, d, e, f, g, h, i$, where $\{a, b, c, d, e, f, g, h, i\}=\{2,3,4,5,6,7,8,9,10\}$. Then $(a+b+c)+(d+e+f)+(g+h+i)=\sum_{j=2}^{10}j = 54$, therefore at least one of $(a+b+c), (d+e+f),$ or $(g+h+i)$ must be $\ge {54\over3}=18$. </p>
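The claim (in this stronger form, $\ge 18$) can also be checked empirically. A quick sketch sampling random circular arrangements (an exhaustive run over all $9!$ rotation classes is feasible too, just slower):

```python
import random

random.seed(0)
nums = list(range(1, 11))

def max_triple(arr):
    # Largest sum of three consecutive entries around the circle
    n = len(arr)
    return max(arr[i] + arr[(i + 1) % n] + arr[(i + 2) % n] for i in range(n))

# Smallest "max consecutive-triple sum" seen over 20,000 random arrangements
worst = min(max_triple(random.sample(nums, 10)) for _ in range(20000))
```

Every sampled arrangement has some consecutive triple summing to at least 18, in line with the pigeonhole argument above.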
599,126
<p>The question is to check which of the following holds (only one option is correct) for a continuous bounded function $f:\mathbb{R}\rightarrow \mathbb{R}$.</p> <ul> <li>$f$ has to be uniformly continuous.</li> <li>there exists an $x\in \mathbb{R}$ such that $f(x)=x$.</li> <li>$f$ can not be increasing.</li> <li>$\lim_{x\rightarrow \infty}f(x)$ exists.</li> </ul> <p>What I have done so far:</p> <ul> <li>$f(x)=\sin(x^3)$ is a continuous function, bounded by $1$, which is not uniformly continuous.</li> <li>suppose $f$ is bounded by $M&gt;0$; then restricting to $f: [-M,M]\rightarrow [-M,M]$, this function is bounded and continuous, so it has a fixed point.</li> <li>I could not say much about the third option "$f$ can not be increasing". I think this is also true, since an increasing function $f$ cannot be bounded, but I am not sure.</li> <li>I also believe that $\lim_{x\rightarrow \infty}f(x)$ exists: as $f$ is bounded, it should have a limit at infinity. But then I feel the function could fluctuate so much that the limit need not exist. I am not so sure.</li> </ul> <p>So, I am sure the second option is correct and the fourth option is probably wrong, but I am not so sure about the third option.</p> <p>Please help me clear this up.</p> <p>Thank You. :)</p>
Eric Auld
76,333
<p>$\tan^{-1}x$ is bounded, continuous, and increasing, so the third option fails. $\sin (x^3)$ is bounded and continuous but has no limit at infinity, so the fourth fails as well; as you noted, it also rules out the first. That leaves the second option, for which you already have the fixed-point argument.</p>
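Both counterexamples are easy to probe numerically. A rough sketch (the sampling windows are arbitrary):

```python
import math

# arctan: continuous, bounded by pi/2, and strictly increasing
xs = [x / 10 for x in range(-500, 501)]
atan_vals = [math.atan(x) for x in xs]
atan_increasing = all(a < b for a, b in zip(atan_vals, atan_vals[1:]))
atan_bounded = all(abs(v) < math.pi / 2 for v in atan_vals)

# sin(x^3): still comes close to both +1 and -1 far out, so no limit at infinity
osc = [math.sin((10 + i / 10000) ** 3) for i in range(10001)]
```

On $x\in[10,11]$ the argument $x^3$ sweeps through more than 300 radians, so the dense sample catches values near both extremes.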
4,244,187
<blockquote> <p>Find the equation of the tangent line to <span class="math-container">$\sin^{-1}(x) + \sin^{-1}(y) = \frac{\pi}{6}$</span> at the point <span class="math-container">$(0,\frac{1}{2})$</span></p> </blockquote> <p>This is in the context of learning implicit differentiation.</p> <p>First, I apply <span class="math-container">$\frac{dy}{dx}$</span> operator to both sides of the equation yielding:</p> <p><span class="math-container">$-\sin^{-2}(x) - \sin^{-1}(y)\frac{dy}{dx} = 0$</span></p> <p>Second, I want to solve for <span class="math-container">$\frac{dy}{dx}$</span>.</p> <p><span class="math-container">$\frac{dy}{dx} = -\sin^{-2}(x)\sin(y)$</span>.</p> <p>Third, I substitute the point <span class="math-container">$(0,\frac{1}{2})$</span> into the above equation to find the slope of the tangent line.</p> <p><span class="math-container">$\frac{dy}{dx}\mid_{(0,\frac{1}{2})} = -\sin^{-2}(0)\sin(\frac{1}{2}) = -0.479$</span></p> <p>Finally, I substitute the slope into the point-slope equation of the line to obtain</p> <p><span class="math-container">$y = -0.479x + 0.2395$</span></p> <p>Is this correct?</p>
Stefan
965,450
<p>I have now found a counterexample showing that the considered inequality does not hold in general. Specifically, if <span class="math-container">$B$</span> and <span class="math-container">$C$</span> are generated as <span class="math-container">$B = Q_B Q_B^*$</span> and <span class="math-container">$C = Q_C Q_C^*$</span>, where <span class="math-container">$Q_B$</span> and <span class="math-container">$Q_C$</span> are bases for orthogonal <span class="math-container">$m$</span>-dimensional subspaces, then <span class="math-container">$\mathrm{trace}(B C)$</span> is zero, whereas <span class="math-container">$\mathrm{trace}(A B A^* C)$</span> is generally not zero.</p>
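A minimal numeric instantiation of this construction, with $m=1$ and real $2\times2$ matrices ($A$ is an arbitrarily chosen shear, and all variable names are mine):

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

# Q_B = e1 and Q_C = e2 span orthogonal 1-dimensional subspaces
B = [[1, 0], [0, 0]]            # B = Q_B Q_B^T
C = [[0, 0], [0, 1]]            # C = Q_C Q_C^T
A = [[1, 0], [1, 1]]            # real shear, so A^* = A^T
At = [[A[j][i] for j in range(2)] for i in range(2)]

t_BC = trace(matmul(B, C))
t_ABAC = trace(matmul(matmul(matmul(A, B), At), C))
```

Here trace$(BC)=0$ while trace$(ABA^*C)=1$, matching the claim.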
1,924,033
<blockquote> <p><strong>Question.</strong> Let $\mathfrak{g}$ be a real semisimple Lie algebra admitting an invariant inner-product. Is every connected Lie group with Lie algebra $\mathfrak{g}$ compact?</p> </blockquote> <p>I know that the converse is true: If $G$ is a compact connected Lie group, then the Haar measure may be used to give an invariant inner-product on $\mathrm{Lie}(G)$. Also, semisimplicity is necessary since $\mathrm{Lie}(\mathbb{R})=\mathbb{R}$ trivially admits an invariant inner-product.</p>
Ben
12,885
<p>Here comes a diagram in $\mathrm{(Top)}$ whose colimit is the desired glueing. Given a glueing datum as in the question text, the diagram will consist of </p> <ul> <li>all $U_i$ and all $U_{ij}$ as objects and </li> <li>the inclusion maps $U_{ij}\to U_i$ as well as all $\varphi_{ij}$ as morphisms.</li> </ul> <p>This feels a bit ad-hoc, I have to admit, but it works. If we were to write down a more general/abstract diagram, it would be harder to express that the $U_{ij}$ are actually subspaces of the $U_i$ and not merely spaces mapping to $U_i$, but besides that, it's not too hard. But under weaker assumptions, I think $3)$ might not hold for the colimit.</p> <p>Let me recall (from Lee Mosher's comment) that the following space, with the obvious maps, is a colimit for the above diagram: $$X := \coprod\nolimits_i U_i\,/{\sim},\text{ where }\sim\text{ identifies }U_{ij}\text{ with }U_{ji}\text{ via }\varphi_{ij} = \varphi_{ji}^{-1}.$$ This space is easily seen to fulfil the properties $1), 2), 3)$, so actually, there is nothing more to prove, by uniqueness. Nevertheless, we should look at what's going on in case there are only two spaces $U_1,U_2$ with subspaces $U_{12}\subset U_1$, $U_{21}\subset U_2$, with continuous maps $\varphi_{12}\colon U_{21}\to U_{12}$ and $\varphi_{21}\colon U_{12}\to U_{21}$, being mutually inverse. 
Then the diagram is just $$ \newcommand{\ra}[2]{\!\!\!\!\!\!\!\!\!\!\!\!\mathop{\rightleftarrows}\limits^{#1}_{#2}\quad\!\!\!\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\int}\right.} % \begin{array}{rcl} U_{12} &amp; \ra{\varphi_{21}}{\varphi_{12}} &amp; U_{21} \\ \da{} &amp; &amp; \da{} \\ U_{1} &amp; &amp; U_2 \\ \end{array} $$ A colimit of this diagram boils down to a space $X$ with maps $\psi_i\colon U_i\to X$ such that the following diagram commutes, and the universal property that the continuous maps $f\colon X\to Y$ are, via pull-back to $U_1$ and $U_2$, in bijection with pairs $f_i = f\circ\psi_i\colon U_i\to Y$ such that $(f_1)|_{U_{12}} = f_2\circ\varphi_{21}$ (and then automatically also $(f_2)|_{U_{21}} = f_1\circ\varphi_{12}$).</p> <p>$$ \newcommand{\ra}[2]{\!\!\!\!\!\!\!\!\!\!\!\!\mathop{\rightleftarrows}\limits^{#1}_{#2}\quad\!\!\!\!\!\!\!\!} \newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\int}\right.} % \begin{array}{rcl} U_{12} &amp; \ra{\varphi_{21}}{\varphi_{12}} &amp; U_{21} \\ \da{} &amp;&amp; \da{} \\ U_{1} &amp; &amp; U_2 \\ &amp;\searrow^{\psi_1}\;\;\;\;\swarrow_{\psi_2}&amp;\\ &amp; X &amp; \\ \end{array} $$</p> <p>We must show that this implies $1)$ and $3)$ as in the question. This is where we see what's formal and what's rather special to $(Top)$. The universal property implies that $\coprod_i U_i\xrightarrow{\psi} X$ is an epimorphism and this is enough for the map so be surjective (in $\mathrm{(Top)}$!); therefore, $X$ is covered by $U_1$ and $U_2$ and $1)$ holds. $3)$, however, is more involved. It relies on the fact that (in $\mathrm{(Top)}$!) some push-out squares are also pull-back squares. I'm running out of time for more details right now, so let's just say <em>that $3)$ holds relies very much on the fact that we're talking about $\mathrm{(Top)}$ and on the assumption that the $U_{ij}\subset U_i$ be subspaces</em>. (I think I will only provide more details upon request.)</p>
1,512,171
<p>I want to show that there exists a diffeomorphic $\phi$ such that the following diagram commutes: $$ \require{AMScd} \begin{CD} TS^1 @&gt;{\phi}&gt;&gt; S^1\times\mathbb{R}\\ @V{\pi}VV @V{\pi_1}VV \\ S^1 @&gt;{id_{S^1}}&gt;&gt; S^1 \end{CD}$$ where $\pi$ is the associated projection of $TS^1$, and $\pi_1(x,y)=x$ is the standard projection function in the first component.</p> <p>A hint was given along with the exercise that I should find a nowhere vanishing vector field on $S^1$. However, I don't know how to find one exactly, or what to do subsequent to finding such a vector field. I have seen an analogous example where $\phi$ was given without reason where $S^1$ and $\mathbb{R}$ were both instead $\mathbb{R}^n$. The definition of that $\phi$ was:$$\phi(a^i\frac{\partial}{\partial x^i}(p)) = (p,(a^1,...,a^n)).$$Perhaps the nowhere vanishing vector field on $S^1$ is used in an analogous formula?</p> <p>Could anyone give some additional hints or a sketch of a proof?</p> <p><strong>EDIT:</strong> Thinking about it, if I get the nowhere vanishing vector field, say, $u$, then because $S^1$ is a 1-manifold, I have that $T_pS^1$ is 1-dimensional as well. So that means that $T_pS^1$ is spanned by $u_p$. So I am thinking we use $\forall v_p\in TS^1$ the unique coefficient given by $\alpha\in\mathbb{R}$ such that $v_p = \alpha u_p$. So perhaps:$$\phi(v_p)=(p,\alpha),$$is our diffeomorphism? In that case, is there a condition that is met by $S^1$ such that it has to have a nowhere vanishing vector field (i.e. I don't have to find an exact formula for one)?</p>
Ross Millikan
1,827
<p>It depends on your definition of integral. The Riemann integral, the first one taught in calculus classes, does not have a value because the lower sum is always zero and the upper sum is always one. The <a href="https://en.wikipedia.org/wiki/Lebesgue_integration" rel="nofollow">Lebesgue integral</a> of this function exists and is $1$ as your intuition suggests.</p>
1,341,440
<p>I came across a claim in a paper on branching processes which says that the following is an <em>immediate consequence</em> of the B-C lemmas:</p> <blockquote> <p>Let $X, X_1, X_2, \ldots$ be nonnegative iid random variables. Then $\limsup_{n \to \infty} X_n/n = 0$ if $EX&lt;\infty$, and $\limsup_{n \to \infty} X_n/n = \infty$ if $EX=\infty$.</p> </blockquote> <p>So to apply the BC lemmas to these, I want to essentially show that $$(1) \; \textrm{If } EX&lt;\infty, \textrm{ then } P(\limsup \{X_n/n &gt; \epsilon\}) = 0 \quad \forall \epsilon&gt;0$$ $$(2) \; \textrm{If } EX=\infty, \textrm{ then } P(\limsup \{X_n/n &gt; \delta\}) = 1 \quad \forall \delta&gt;0$$</p> <p>But I keep getting stuck. For example if I want to apply the first BC lemma to (1), then using Markov's inequality only gives $P(X_n &gt; n\epsilon) &lt; EX/n\epsilon$, which isn't summable. Am I missing something right under my nose?</p>
Tobias Kildetoft
2,538
<p>Since $Z(G)$ is not trivial, it has order at least $2$. But the quotient $G/Z(G)$ is not cyclic unless $Z(G) = G$ (the quotient by the center is never non-trivial cyclic), so it must have exponent dividing $2$, which precisely means that for any $x\in G$ we have $x^2\in Z(G)$.</p>
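<p>The question this answer addresses is not shown alongside it in this excerpt, but the conclusion, that every square lands in the center, can be checked concretely on a small non-abelian example. A sketch for the quaternion group $Q_8$; the coefficient-tuple representation and helper names are my own:</p>

```python
def qmul(p, q):
    """Quaternion product on coefficient 4-tuples (a, b, c, d) ~ a+bi+cj+dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

units = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]
Q8 = [tuple(s * x for x in u) for u in units for s in (1, -1)]   # {±1, ±i, ±j, ±k}

assert any(qmul(x, y) != qmul(y, x) for x in Q8 for y in Q8)     # non-abelian

center = [z for z in Q8 if all(qmul(z, g) == qmul(g, z) for g in Q8)]
assert sorted(center) == [(-1,0,0,0), (1,0,0,0)]                 # Z(Q8) = {±1}

assert all(qmul(x, x) in center for x in Q8)                     # x^2 ∈ Z(G)
```

<p>Here $Q_8/Z(Q_8)$ has order $4$ and is non-cyclic ($\cong\mathbb Z_2\times\mathbb Z_2$), matching the situation described in the answer.</p>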
276,329
<p>I have a problem, from Gelfand's "Algebra" textbook, that I've been unable to solve, here it is:</p> <p><strong>Problem 268.</strong> </p> <p>What is the possible number of solutions of the equation $$ax^6+bx^3+c=0\;?$$</p> <p>Thanks in advance.</p>
amWhy
9,003
<p><strong>Hint:</strong> $\quad$Let $y = x^3$:</p> <p>$$ax^6 + bx^3 + c = 0 \quad \iff \quad ay^2 + by + c = 0\tag{1}$$</p> <p>Solve for $y$ ... there will be either two real solutions, one real solution, or no real solutions when solving for $y$ (why? when?). (Examine the <a href="http://en.wikipedia.org/wiki/Discriminant" rel="nofollow">discriminant</a>.) </p> <ul> <li><p>When is $\Delta = b^2 - 4ac \;&lt;\; 0\,$? And what does this mean in terms of the existence (or non-existence) of <em>real</em> solutions in $y$? </p></li> <li><p>When $\Delta = 0$, there is <em>exactly</em> one real-valued solution $y$. </p></li> <li><p>When $\Delta &gt; 0$, there are two unique real-valued solutions $y_1, y_2$.</p></li> </ul> <p>In each case, then, for each (possible) solution $y_i$ of the right-hand equation in $(1)$, what is the number of solutions in $x$ to $y_i = x^3$ for each solution $y_i$? (Note that the degree $3$ is odd in $y = x^3$, so we don't have to worry whether solutions ($y$'s) are positive or negative. If $y_i$ is a solution, then there will exist $x$ such that $y_i = x^3$.) Simply check cases for each possible root $y_i$.</p>
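<p>Following the hint, each real $y$ contributes exactly one real $x=\sqrt[3]{y}$, so counting real solutions of the sextic reduces to counting real roots of the quadratic. A sketch; the sample coefficients are my own choices:</p>

```python
import math

def real_root_count(a, b, c):
    """Distinct real solutions of a*x^6 + b*x^3 + c = 0 (a != 0), via y = x^3."""
    disc = b * b - 4 * a * c                 # discriminant of a*y^2 + b*y + c
    if disc < 0:
        ys = set()                           # no real y
    elif disc == 0:
        ys = {-b / (2 * a)}                  # one real y
    else:
        r = math.sqrt(disc)
        ys = {(-b + r) / (2 * a), (-b - r) / (2 * a)}
    return len(ys)                           # one real cube root per real y

assert real_root_count(1, 0, 1) == 0         # x^6 + 1 = 0
assert real_root_count(1, -2, 1) == 1        # (x^3 - 1)^2 = 0, only x = 1
assert real_root_count(1, -3, 2) == 2        # x^3 in {1, 2}
```

<p>So the possible counts of real solutions are $0$, $1$, or $2$.</p>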
2,574,117
<p>For a matrix $A$, define the operator $\ell_p$-norm of $A$ to be $$ \|A\|_p = \sup_{x \neq 0} \frac{\|Ax\|_p}{\|x\|_p}. $$ Here $\|x\|_p$ denotes the $\ell_p$ norm of the vector $x$.</p> <p>For $1 \le p \le q \le 2$ and $x \in \mathbb{R}^n$, we know that $\|x\|_q \le \|x\|_p \le n^{1 / p - 1 / q} \|x\|_q$. </p> <p>Is there any similar conclusion for the operator $\ell_p$-norm of a given matrix $A \in \mathbb{R}^{n \times m}$?</p> <p>Or a more concrete problem: if we know $\|A\|_1$ and $\|A\|_2$, what is the best possible upper bound we can achieve for $\|A\|_p$ if $1 \le p \le 2$?</p> <p>E.g., $$\|Ax\|_p \le n^{1 / p - 1 /2} \|Ax\|_2 \le n^{1 / p - 1 / 2} \|A\|_2 \cdot \|x\|_2 \le n^{1 / p - 1 / 2} \|A\|_2 \cdot \|x\|_p,$$ which implies $$ \|A\|_p \le n^{1 / p - 1 / 2} \|A\|_2. $$</p> <p>Similarly, $$\|Ax\|_p \le \|Ax\|_1 \le \|A\|_1 \cdot \|x\|_1 \le m^{1 - 1 / p} \|A\|_1 \cdot \|x\|_p,$$ which implies $$ \|A\|_p \le m^{1 - 1 / p} \|A\|_1. $$ Combine them we have $\|A\|_p \le \min\{n^{1 / p - 1 / 2} \|A\|_2, m^{1 - 1 / p} \|A\|_1\}$, which is quite naive. Are there any tighter upper bounds?</p>
Martin Argerami
22,857
<p>Your estimate is not naive in general. With $p=1$, $q=2$, take $$ A=\begin{bmatrix} 1&amp;0&amp;\cdots&amp;0\\ 1&amp;0&amp;\cdots&amp;0\\ \vdots&amp;\vdots&amp;\ddots&amp;\vdots\\ 1&amp;0&amp;\cdots&amp;0\\ \end{bmatrix}, $$ It is well-known that $$\|A\|_1=\max\{\|A_j\|_1:\ j\},\ \ \ \|A\|_2=\|A^*A\|_2^{1/2}=\max\sigma(A^*A)^{1/2},$$ where $A_j$ denotes the $j^{\rm th}$ column of $A$, and $\sigma(B)$ is the <em>spectrum</em> (i.e., the list of eigenvalues). </p> <p>Thus $\|A\|_1=n$, while $$\|A\|_2=\|A^*A\|_2^{1/2}=\left\|\begin{bmatrix} n&amp;0&amp;\cdots&amp;0\\0&amp;0&amp;\cdots&amp;0\\ \vdots&amp;\vdots&amp;\ddots&amp;\vdots\\ 0&amp;0&amp;\cdots&amp;0\end{bmatrix}\right\|^{1/2}_2=n^{1/2}. $$ So $$ \|A\|_1=n^{1/1-1/2}\,\|A\|_2. $$</p>
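<p>A concrete check of this extremal example; the dimensions $n=5$, $m=4$ are an arbitrary choice of mine:</p>

```python
n, m = 5, 4
A = [[1 if j == 0 else 0 for j in range(m)] for _ in range(n)]   # ones in column 0

# operator 1-norm = maximum absolute column sum
norm1 = max(sum(abs(A[i][j]) for i in range(n)) for j in range(m))

# A^T A has a single nonzero entry, n, at (0,0), so ||A||_2 = sqrt(n)
AtA = [[sum(A[i][j] * A[i][k] for i in range(n)) for k in range(m)]
       for j in range(m)]
assert AtA[0][0] == n
assert all(AtA[j][k] == 0 for j in range(m) for k in range(m) if (j, k) != (0, 0))
norm2 = n ** 0.5

assert norm1 == n                                # ||A||_1 = n
assert abs(norm1 - n ** 0.5 * norm2) < 1e-9      # ||A||_1 = n^{1/1 - 1/2} ||A||_2
```

<p>So the naive bound is attained with equality here, as the answer claims.</p>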
3,978,303
<p><strong>Background</strong></p> <p>The following Euler product for the Riemann zeta function is well known.</p> <p><span class="math-container">$$ \sum_n \frac{1}{n^s} = \prod_p (1-\frac{1}{p^s})^{-1} $$</span></p> <p>Here <span class="math-container">$n$</span> ranges over all positive integers, <span class="math-container">$p$</span> over all primes, and real <span class="math-container">$s&gt;1$</span>.</p> <hr /> <p><strong>Common Proof Strategy</strong></p> <p>Many derivations / proofs, found in textbooks and papers, consider the following expression.</p> <p><span class="math-container">$$(1 - \frac{1}{p^s})^{-1} = 1 + \frac{1}{p^s} + \frac{1}{p^{2s}} + \frac{1}{p^{3s}} + \ldots$$</span></p> <p>The LHS is finite for any given <span class="math-container">$p$</span> and the series expansion is valid because <span class="math-container">$\frac{1}{p} &lt; 1$</span>.</p> <p>The following takes the product over all primes.</p> <p><span class="math-container">$$\prod_{p_i} (1-\frac{1}{p_i^s})^{-1} = 1 + \frac{1}{p_1^s} + \frac{1}{p_1^{2s}} + \ldots + \frac{1}{p_1^sp_2^{s}} + \frac{1}{p_1^sp_3^{s}} + \ldots $$</span></p> <p>The LHS is a product of finite and non-zero factors.</p> <p>The RHS has terms of the form <span class="math-container">$\frac{1}{X}$</span> where <span class="math-container">$X$</span> contains all combinations of the primes, and all combinations of powers of the primes.</p> <p>It is common to apply the Fundamental Theorem of Arithmetic to see that there is one term <span class="math-container">$X$</span> for each integer <span class="math-container">$n$</span>, and therefore the RHS is the desired <span class="math-container">$\sum\frac{1}{n^s}$</span>.</p> <hr /> <p><strong>Challenge</strong></p> <p>A challenge (for example <a href="https://math.stackexchange.com/a/3970823/319008">here</a>) that has been raised to this very common proof logic is that there are terms <span class="math-container">$X$</span> with an infinite number of factors in the denominator, for example:</p> <p><span 
class="math-container">$$ \frac{1}{(2^2\cdot3^2\cdot 5^2\cdot 7^2 \cdot\ldots)^s} $$</span></p> <p>or another simpler example:</p> <p><span class="math-container">$$ \frac{1}{(2\cdot 2 \cdot 2\cdot 2\cdot\ldots)^s} $$</span></p> <hr /> <p><strong>Question</strong></p> <p>Is the challenge valid?</p> <p>I am not a trained mathematician, but in my opinion the proof strategy is valid because terms <span class="math-container">$X$</span> with denominators with an infinite number of prime factors are equivalent to zero. That is:</p> <p><span class="math-container">$$ \frac{1}{(2^2\cdot3^2\cdot 5^2\cdot 7^2 \cdot\ldots)^s} = 0$$</span></p> <p>and</p> <p><span class="math-container">$$ \frac{1}{(2\cdot 2 \cdot 2\cdot 2\cdot\ldots)^s} = 0$$</span></p> <p>My assertion is that the proof strategy remains valid because any finite integer <span class="math-container">$n$</span> has a single finite non-zero term <span class="math-container">$X$</span>, and those <span class="math-container">$X$</span> with infinitely long denominators can be discarded because they are zero.</p>
Paul Sinclair
258,282
<p>By definition, where <span class="math-container">$p_i$</span> is the <span class="math-container">$i^{th}$</span> prime, <span class="math-container">$$\prod_{p_i} \left(1-\frac{1}{p_i^s}\right)^{-1}:=\prod_{i=1}^\infty\left(1-\frac{1}{p_i^s}\right)^{-1}:= \lim_N \prod_{i=1}^N\left(1-\frac{1}{p_i^s}\right)^{-1}$$</span></p> <p>Now <span class="math-container">$$\prod_{i=1}^N\left(1-\frac{1}{p_i^s}\right)^{-1} = \prod_{i=1}^N\left(\sum_{e_i=0}^\infty \frac 1{\,p_i^{se_i}\,}\right)$$</span> And <span class="math-container">$$\begin{align}\prod_{i=1}^N\left(\sum_{e_i=0}^\infty \frac 1{\,p_i^{se_i}\,}\right) &amp;= \prod_{i=1}^N \lim_{n_i} \sum_{e_i=0}^{n_i} \frac 1{\,p_i^{se_i}\,}\\ &amp;= \lim_{n_1,\dots,n_N} \sum_{e_1=0}^{n_1}\dots\sum_{e_N=0}^{n_N}\frac 1{(p_1^{e_1}p_2^{e_2}\dots p_N^{e_N})^s}\end{align}$$</span> provided the RHS converges. But since the terms are all positive, the expression under the limit is increasing in each of its indices. And it is a sum of finitely many distinct terms of the known convergent series <span class="math-container">$\sum_k \frac 1{k^s}$</span>, so it is bounded above by that value. Hence the RHS must converge.</p> <p>Further, it includes a term of the form <span class="math-container">$\frac 1{k^s}$</span> for every <span class="math-container">$k &lt; p_{N+1}$</span> (every such integer has all its prime factors among <span class="math-container">$p_1,\dots,p_N$</span>). 
Thus <span class="math-container">$$\sum_{k=1}^{p_{N+1}-1} \dfrac1{k^s} \le \lim_{n_1,\dots,n_N} \sum_{e_1=0}^{n_1}\dots\sum_{e_N=0}^{n_N}\frac 1{(p_1^{e_1}p_2^{e_2}\dots p_N^{e_N})^s} \le \sum_{k=1}^\infty \dfrac1{k^s}$$</span> <span class="math-container">$$\sum_{k=1}^{p_{N+1}-1} \dfrac1{k^s} \le \prod_{i=1}^N\left(1-\frac{1}{p_i^s}\right)^{-1}\le \sum_{k=1}^\infty \dfrac1{k^s}$$</span></p> <p>Since <span class="math-container">$p_{N+1} \to \infty$</span> as <span class="math-container">$N \to \infty$</span>, by the squeeze theorem</p> <p><span class="math-container">$$\sum_{k=1}^\infty \dfrac1{k^s} \le \prod_{i=1}^\infty\left(1-\frac{1}{p_i^s}\right)^{-1} \le \sum_{k=1}^\infty \dfrac1{k^s}$$</span></p> <p>That is, <span class="math-container">$$\prod_{p_i} \left(1-\frac{1}{p_i^s}\right)^{-1} = \sum_{k=1}^\infty \dfrac1{k^s}$$</span></p> <p>Note that at one stage in this proof, it required that <span class="math-container">$$\sum_{k=1}^\infty \dfrac1{k^s}$$</span> converge. I.e., it only works for <span class="math-container">$s &gt; 1$</span>.</p>
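<p>The squeeze in this answer can be watched numerically at $s=2$, where the common limit is $\zeta(2)=\pi^2/6$; the prime sieve below is my own scaffolding:</p>

```python
import math

def primes_up_to(N):
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = [False] * len(sieve[p * p::p])
    return [p for p in range(N + 1) if sieve[p]]

def euler_partial(N, s=2.0):
    """Product of (1 - p^-s)^-1 over primes p <= N."""
    prod = 1.0
    for p in primes_up_to(N):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

def zeta_partial(K, s=2.0):
    return sum(k ** (-s) for k in range(1, K + 1))

zeta2 = math.pi ** 2 / 6
# the squeeze of the proof, for several cutoffs N:
for N in (10, 100, 1000):
    assert zeta_partial(N) <= euler_partial(N) <= zeta2 + 1e-12
# and the finite products converge to zeta(2):
assert abs(euler_partial(10 ** 4) - zeta2) < 1e-4
```

<p>Terms with infinitely many prime factors never enter: every finite partial product only ever produces terms $1/k^s$ with $k$ an ordinary integer.</p>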
2,098,395
<p>Evaluate the following;</p> <p>$$\sum_{r=0}^{50} (r+1) ^{1000-r}C_{50-r}$$</p> <p>Using $^{n}C_{r}=^{n}C_{n-r}$ we get $\sum_{r=0}^{50} (r+1) ^{1000-r}C_{950}$</p> <p>but I am not getting how to solve $\sum_{r=0}^{50} r \cdot \hspace{0.5 mm} ^{1000-r}C_{950}$</p>
lab bhattacharjee
33,337
<p>Set $50-r=u$ $$\sum_{u=0}^{50}(51-u)\binom{950+u}{950}=\sum_{u=0}^{50}\{1002-(951+u)\}\binom{950+u}{950}$$</p> <p>$$=1002\sum_{u=0}^{50}\binom{950+u}{950}-951\sum_{u=0}^{50}\binom{951+u}{951}$$</p> <p>Now $\displaystyle\sum_{u=0}^{50}\binom{950+u}{950}$ is the coefficient of $x^{950}$ in $$\displaystyle\sum_{u=0}^{50}(1+x)^{950+u}$$</p>
2,098,395
<p>Evaluate the following;</p> <p>$$\sum_{r=0}^{50} (r+1) ^{1000-r}C_{50-r}$$</p> <p>Using $^{n}C_{r}=^{n}C_{n-r}$ we get $\sum_{r=0}^{50} (r+1) ^{1000-r}C_{950}$</p> <p>but I am not getting how to solve $\sum_{r=0}^{50} r \cdot \hspace{0.5 mm} ^{1000-r}C_{950}$</p>
Mike Earnest
177,399
<p>This is a hockey stick made of hockey sticks. Expand each term <span class="math-container">$(r+1)\binom{1000-r}{950}$</span> into a column of <span class="math-container">$r+1$</span> copies of <span class="math-container">$\binom{1000-r}{950}$</span>, then add up the rows using the hockey stick identity, then add up the row sums using the hockey stick identity. <span class="math-container">$$\begin{array}{rrrcrl} \displaystyle\binom{1000}{950}&amp;+\displaystyle2\binom{999}{950}&amp;+\displaystyle3\binom{998}{950}&amp;\dots&amp;+\displaystyle51\binom{950}{950}\\\hline\\ =\displaystyle\binom{1000}{950}&amp;+\displaystyle\binom{999}{950}&amp;+\displaystyle\binom{998}{950}&amp;\dots&amp;+\displaystyle\binom{950}{950} &amp;\stackrel{\text{H.S.}}=\displaystyle\binom{1001}{951} \\\\ &amp; +\displaystyle\binom{999}{950}&amp;+\displaystyle\binom{998}{950}&amp;\dots &amp;+\displaystyle\binom{950}{950} &amp;\stackrel{\text{H.S.}}=+\displaystyle\binom{1000}{951} \\\\ &amp;&amp;+\displaystyle\binom{998}{950}&amp;\dots &amp;+\displaystyle\binom{950}{950} &amp;\stackrel{\text{H.S.}}=+\displaystyle\binom{999}{951} \\\\&amp;&amp;&amp;\ddots&amp;\vdots\;\;\;\;&amp;\;\;\;\;\;\;\;\;\;\;\;\vdots \\\\&amp;&amp;&amp;&amp;+\displaystyle\binom{950}{950}&amp;\stackrel{\text{H.S.}}=+\displaystyle\binom{951}{951} \\\\ &amp;&amp;&amp;&amp;&amp;\stackrel{\text{H.S.}}=\displaystyle\boxed{\binom{1002}{952}} \end{array}$$</span></p>
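<p>Both the closed form and the intermediate row sums are cheap to verify with exact integer arithmetic:</p>

```python
from math import comb

# the original sum, using C(1000-r, 50-r) = C(1000-r, 950)
lhs = sum((r + 1) * comb(1000 - r, 50 - r) for r in range(51))

# row i of the double hockey stick sums to C(1002 - i, 951), for i = 1..51
rows = sum(comb(1002 - i, 951) for i in range(1, 52))

assert lhs == rows == comb(1002, 952)
```
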
168,053
<p>If g is a positive, twice differentiable function that is decreasing and has limit zero at infinity, does g have to be convex? I am sure, from drawing a graph of a function which starts off as being concave and then becomes convex from a point on, that g does not have to be convex, but can someone show me an example of an actual functional form that satisfies this property?</p> <p>We know that since g has limit at infinity, g cannot be concave, but I am sure that there is a functional example of a function g:[0,∞)→(0,∞) which is decreasing, has limit zero at infinity, and is not everywhere convex, I just can't come up with it. Any ideas?</p> <p>Thank you!</p>
Brian M. Scott
12,042
<p>Try $$f(x)=\frac{\pi}2-\tan^{-1}(x-1)\;.$$ It is positive, decreasing, and tends to $0$ at infinity, but $$f''(x)=\frac{2(x-1)}{\left(1+(x-1)^2\right)^2}$$ is negative for $x&lt;1$, so $f$ is concave on $[0,1)$ and convex on $(1,\infty)$; in particular it is not convex on $[0,\infty)$. (Without the shift, $\frac{\pi}2-\tan^{-1}x$ is already convex on $[0,\infty)$, though not on all of $\mathbb R$.)</p>
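<p>A numerical check of the sign change of the second derivative; I use the shifted variant $f(x)=\frac\pi2-\tan^{-1}(x-1)$ so that the inflection point falls inside $[0,\infty)$ (the unshifted $\frac\pi2-\tan^{-1}x$ is convex on $[0,\infty)$ itself, though not on all of $\mathbb R$):</p>

```python
import math

def f(x):                    # shifted example: positive, decreasing, -> 0
    return math.pi / 2 - math.atan(x - 1)

def f2(x):                   # exact second derivative: 2(x-1) / (1 + (x-1)^2)^2
    return 2 * (x - 1) / (1 + (x - 1) ** 2) ** 2

assert f(0) > 0 and f(10 ** 6) > 0                      # positive on [0, inf)
assert all(f(x) > f(x + 0.1) for x in (0, 1, 5, 50))    # decreasing
assert f2(0.5) < 0 < f2(2.0)    # concave then convex: not convex on [0, inf)
```
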
139,021
<p>Can you, please, recommend a good text about algebraic operads?</p> <p>I know the main one, namely, <a href="http://www-irma.u-strasbg.fr/~loday/PAPERS/LodayVallette.pdf" rel="nofollow noreferrer">Loday, Vallette "Algebraic operads"</a>. But it is very big and there is no way you can read it fast. Also there are notes by <a href="https://arxiv.org/abs/1202.3245" rel="nofollow noreferrer">Vallette "Algebra+Homotopy=Operad"</a>, but they don't have much information and are too combinatorial. So what I am looking for is a pretty concise introduction to the theory of algebraic operads, that will be more algebraic then combinatorial, and that will give enough information to actually start working with operads.</p> <p>Thank you very much for your help!</p> <p><strong>Edit</strong>: I have also found this interesting paper <a href="http://arxiv.org/pdf/math/9906063v2.pdf" rel="nofollow noreferrer">Modules and Morita Theorem for Operads</a> by Kapranov--Manin. Maybe it's a bit too concise for the first time reading about operads, but it has a lot of really nice examples and theorems.</p> <p>There are also <a href="http://folk.uib.no/nmajv/Operader.ps" rel="nofollow noreferrer">notes</a> by Vatne (only in PostScript).</p>
David White
11,540
<p><a href="http://math.univ-lille1.fr/~fresse/OperadModuleFunctors-Updated.pdf" rel="nofollow">Benoit Fresse's book <em>Modules over Operads and Functors</em></a> is masterful.</p> <p>Additionally, here are a couple of very good survey articles and notes from conferences:</p> <p><a href="http://www.ams.org/notices/200406/what-is.pdf" rel="nofollow">AMS "What is..." article written by Stasheff</a></p> <p><a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.180.1723&amp;rep=rep1&amp;type=pdf" rel="nofollow">Expository article by Shenghao Sun</a></p> <p><a href="http://math.berkeley.edu/~aaron/atf/" rel="nofollow">Notes from Algebra, Topology, and Fjords Conference</a></p>
139,021
<p>Can you, please, recommend a good text about algebraic operads?</p> <p>I know the main one, namely, <a href="http://www-irma.u-strasbg.fr/~loday/PAPERS/LodayVallette.pdf" rel="nofollow noreferrer">Loday, Vallette "Algebraic operads"</a>. But it is very big and there is no way you can read it fast. Also there are notes by <a href="https://arxiv.org/abs/1202.3245" rel="nofollow noreferrer">Vallette "Algebra+Homotopy=Operad"</a>, but they don't have much information and are too combinatorial. So what I am looking for is a pretty concise introduction to the theory of algebraic operads, that will be more algebraic then combinatorial, and that will give enough information to actually start working with operads.</p> <p>Thank you very much for your help!</p> <p><strong>Edit</strong>: I have also found this interesting paper <a href="http://arxiv.org/pdf/math/9906063v2.pdf" rel="nofollow noreferrer">Modules and Morita Theorem for Operads</a> by Kapranov--Manin. Maybe it's a bit too concise for the first time reading about operads, but it has a lot of really nice examples and theorems.</p> <p>There are also <a href="http://folk.uib.no/nmajv/Operader.ps" rel="nofollow noreferrer">notes</a> by Vatne (only in PostScript).</p>
Peter Heinig
108,556
<p>Since both the following references appeared significantly later than the OP, it seems useful to add: </p> <ul> <li><p><a href="http://bookstore.ams.org/gsm-170/" rel="nofollow noreferrer">Donald Yau: <em>Colored Operads</em>. AMS. Graduate Studies in Mathematics Volume 170</a></p></li> <li><p>The review of the above book written by Nick Gurski in the most recent issue (September 2017) of the <a href="https://www.springerprofessional.de/jahresbericht-der-deutschen-mathematiker-vereinigung/4966776" rel="nofollow noreferrer">Jahresbericht der deutschen Mathematiker-Vereinigung</a></p></li> </ul>
2,473,220
<p>From how I understood the question and judging from solutions I've been provided with (see graph below),</p> <p><a href="https://i.stack.imgur.com/73RU3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/73RU3.png" alt="enter image description here"></a></p> <p>$f(x)$ starts from an $x$-position, which should be an integer, and I assume this pattern repeats for all integers out to infinity. </p> <p>I also assume the graph follows the function $f(x)=x$, whereby $0\le y \le 0.5$ to make sure the function returns to the nearest integer. </p> <p>If not, can $f(x)$ be equal to any function as long as it occupies the distance from $x$ to the next integer? For example, $f(x)=2x$ whereby $0\le y \le 1$</p> <p><a href="https://i.stack.imgur.com/cSaYk.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cSaYk.jpg" alt="enter image description here"></a></p> <p>And can we say the critical points are all integers? Or maybe I did not understand the question well.</p>
moqui
988,087
<p>Notice that <span class="math-container">$f$</span> is defined as:</p> <p><span class="math-container">$$f(x):=\begin{cases}n-x\quad \text{if }\ n-\frac{1}{2}\leq x&lt;n\\ x-n\quad \text{if }\ n\leq x&lt;n+\frac{1}{2}\end{cases}$$</span> for every integer <span class="math-container">$n$</span>. Now,</p> <p><span class="math-container">$$\lim_{h\,\to\,0^-}\frac{f(n+h)-f(n)}{h}=\lim_{h\,\to\,0^-}\frac{n-(n+h)}{h}=-1$$</span> and you can show that the right-hand limit is 1. Therefore, derivatives don't exist at <span class="math-container">$n\in\mathbb{Z}$</span> <span class="math-container">$\Big($</span>nor at <span class="math-container">$n+\frac{1}{2}\Big)$</span>. So now everything depends on your definition of critical point.</p>
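<p>The two one-sided difference quotients can be checked numerically; the helper names are mine:</p>

```python
def f(x):
    """Distance from x to the nearest integer."""
    return abs(x - round(x))

def one_sided(x0, side, h=1e-7):
    """Numerical one-sided difference quotient at x0, side '+' or '-'."""
    h = h if side == "+" else -h
    return (f(x0 + h) - f(x0)) / h

for n in (-2, 0, 3):                       # corners at the integers: slopes -1, +1
    assert abs(one_sided(n, "-") + 1) < 1e-6
    assert abs(one_sided(n, "+") - 1) < 1e-6

for half in (0.5, 2.5):                    # corners at the half-integers: +1, -1
    assert abs(one_sided(half, "-") - 1) < 1e-6
    assert abs(one_sided(half, "+") + 1) < 1e-6
```
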
2,049,685
<p>If team 1 has a probability p of winning each game against team 2, what is the probability "formula" that team 1 will win 7 games first? </p> <p>There are no ties and the teams play until one team wins 7 games. </p>
lulu
252,071
<p>Imagine that exactly $13$ games are played out, even though it is likely that the series will have been settled prior to the last game. The advantage here is that we know that exactly one of the teams will have won $7$ or more games and that determines the winner. To finish, we remark that for Team $1$ to win the series, they must win between $7$ and $13$ games. Of course the probability that they win exactly $i$ games out of $13$ is $\binom {13}i p^i(1-p)^{13-i}$. Thus the answer is $$\sum_{i=7}^{13} \binom {13}i p^i(1-p)^{13-i}$$</p>
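<p>The play-all-$13$-games formula agrees with a direct first-to-$7$ recursion; the recursion is my own cross-check, not part of the answer:</p>

```python
from math import comb

def win_formula(p, wins_needed=7):
    """P(team 1 wins >= 7 of 13 imagined games), per the answer."""
    n = 2 * wins_needed - 1                  # 13 games always settle a race to 7
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(wins_needed, n + 1))

def win_recursive(p, a=7, b=7, memo=None):
    """P(team 1 collects a wins before team 2 collects b wins)."""
    memo = {} if memo is None else memo
    if a == 0: return 1.0
    if b == 0: return 0.0
    if (a, b) not in memo:
        memo[(a, b)] = p * win_recursive(p, a - 1, b, memo) \
                       + (1 - p) * win_recursive(p, a, b - 1, memo)
    return memo[(a, b)]

for p in (0.2, 0.5, 0.64):
    assert abs(win_formula(p) - win_recursive(p)) < 1e-12
```
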
1,034,335
<p>I'm preparing for my calculus exam and I'm unsure how to approach the question: "Explain the difference between convergence of a sequence and convergence of a series?" </p> <p>I understand the following:</p> <p>Let the sequence $a_n$ exist such that $a_n =\frac{1}{n^2}$ </p> <p>Then $\lim_{n\to\infty} a_n=\lim_{n\to\infty} \frac{1}{n^2}=0$ therefore $a_n$ converges to $0$.</p> <p>And the series $\sum_{i=1}^{n}a_n=1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + ... +\frac{1}{n^2}$</p> <p>And by the $n$-th term test, this series converges. But, I don't understand <em>why</em> or <em>how</em> the convergence between the series and the sequence is different.</p> <p>I looked online and I find a lot of answers on how to determine convergence or divergence, but the only difference I've found is that you use limits to test sequences and series have more complex testing requirements. Please help!</p>
Emanuele Paolini
59,304
<p>You can identify a series with the sequence of its partial sums: $$ S_n = \sum_{k=1}^n a_k. $$ So everything you know about sequences can be applied to series, and vice-versa.</p> <p>However dealing with series is usually more difficult because, in general, it can be very difficult to find the limit. This is due to the indirect definition of partial sums... So to deal with series and to prove their convergence, one should use methods which do not require the limit (i.e. the sum) of the series to be known in advance.</p>
1,034,335
<p>I'm preparing for my calculus exam and I'm unsure how to approach the question: "Explain the difference between convergence of a sequence and convergence of a series?" </p> <p>I understand the following:</p> <p>Let the sequence $a_n$ exist such that $a_n =\frac{1}{n^2}$ </p> <p>Then $\lim_{n\to\infty} a_n=\lim_{n\to\infty} \frac{1}{n^2}=0$ therefore $a_n$ converges to $0$.</p> <p>And the series $\sum_{i=1}^{n}a_n=1 + \frac{1}{4} + \frac{1}{9} + \frac{1}{16} + ... +\frac{1}{n^2}$</p> <p>And by the $n$-th term test, this series converges. But, I don't understand <em>why</em> or <em>how</em> the convergence between the series and the sequence is different.</p> <p>I looked online and I find a lot of answers on how to determine convergence or divergence, but the only difference I've found is that you use limits to test sequences and series have more complex testing requirements. Please help!</p>
Alasdair
35,574
<p>Series can be baffling things. The trouble is that the convergence of the terms tells you nothing about the convergence of the series. We know that $\lim_{n\to\infty}a_n=0$ is <em>necessary</em> for a series $\sum_{n=0}^\infty a_n$ to converge, but it is not <em>sufficient</em>.</p> <p>For example consider the sequences $a_n=1/n$, $b_n=(-1)^n/n$, $c_n=1/n^2$, $d_n=1/n^3$, $e_n=1/n^5$. All of these sequences converge to zero. But:</p> <ul> <li>$\sum_{n=1}^\infty a_n$ diverges (harmonic series)</li> <li>$\sum_{n=1}^\infty b_n$ converges (alternating series)</li> <li>$\sum_{n=1}^\infty c_n$ converges to $\pi^2/6$</li> <li>$\sum_{n=1}^\infty d_n$ converges, to an irrational number (Ap&eacute;ry's theorem)</li> <li>$\sum_{n=1}^\infty e_n$ converges, but it's not known if the result is rational or irrational.</li> </ul> <p>There is, as far as I know, no complete decision method for convergence of a series. You try a sequence of tests, and each one will return a result of converge, diverge, undecided. If the latter, you try another test.</p> <p>There are certain families of series, such as geometric series and $p$-series, for which convergence or divergence is trivial ($\sum_{k=0}^\infty r^k$ converges if and only if $|r|&lt;1$), but in general, given a new series which doesn't fit into a known class, you're on your own.</p> <p>As far as I know, this problem is unsolved: is there a sequence of rational numbers $a_n$ for which $\lim_{n\to\infty}a_{n+1}/a_n=0$ and $\sum_{n=0}^\infty a_n=\pi$? (Note that if we replace $\pi$ by $e$ then the result is trivial, as we can put $a_n=1/n!$.)</p> <p>As you see, series are tricky.</p>
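<p>The contrast between the first three examples is easy to see numerically; the cutoff $N=10^5$ is an arbitrary choice of mine:</p>

```python
import math

def partial_sum(term, N):
    return sum(term(n) for n in range(1, N + 1))

N = 10 ** 5
# all three terms tend to 0 ...
for term in (lambda n: 1 / n, lambda n: (-1) ** n / n, lambda n: 1 / n ** 2):
    assert abs(term(N)) <= 1 / N

# ... but the partial sums behave very differently:
assert partial_sum(lambda n: 1 / n, N) > 12                  # ~ ln N + gamma, diverges
assert abs(partial_sum(lambda n: 1 / n ** 2, N) - math.pi ** 2 / 6) < 1e-4
assert abs(partial_sum(lambda n: (-1) ** n / n, N) + math.log(2)) < 1e-4
```

<p>(The alternating series converges to $-\ln 2$ with the sign convention $b_n=(-1)^n/n$.)</p>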
3,412,418
<blockquote> <p>You have been chosen to play a game involving a 6-sided die. You get to roll the die once, see the result, and then may choose to either stop or roll again. Your payoff is the sum of your rolls, unless this sum is (strictly) greater than 6. If you go over <span class="math-container">$6$</span>, you get <span class="math-container">$0$</span>. What's the best strategy?</p> </blockquote> <p>I tried to set up equations with expected value. I think that the best strategy is to roll until you get at least some value, call it <span class="math-container">$x$</span>. But I have not been able to make too much progress. Can someone please help me?</p>
Ross Millikan
1,827
<p>If you have <span class="math-container">$x$</span>, you fail and get <span class="math-container">$0$</span> with probability <span class="math-container">$\frac x6$</span>, losing <span class="math-container">$x$</span>. If you succeed, you gain a number between <span class="math-container">$1$</span> and <span class="math-container">$6-x$</span>, so on average gain <span class="math-container">$$\frac {6-x}6\cdot \frac 12(7-x)-\frac x6\cdot x$$</span> Just make a table of this and see where it goes negative. It turns out the gain is <span class="math-container">$\frac 73$</span> at <span class="math-container">$x=1$</span>, <span class="math-container">$1$</span> at <span class="math-container">$x=2$</span> and negative above that, so you should roll again when you have <span class="math-container">$2$</span> or less.</p>
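<p>The suggested table is quick to generate:</p>

```python
def one_step_gain(x):
    """Expected change in payoff from rolling again on a current total x."""
    p_bust = x / 6                              # rolls above 6 - x wipe out x
    safe_part = (6 - x) / 6 * (7 - x) / 2       # P(safe roll) * average safe roll
    return safe_part - p_bust * x

gains = {x: one_step_gain(x) for x in range(1, 7)}
assert abs(gains[1] - 7 / 3) < 1e-12
assert abs(gains[2] - 1.0) < 1e-12
assert all(gains[x] < 0 for x in range(3, 7))   # so: stop once the total is 3+
```
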
2,496,817
<p>My task is to prove the above, with $m,n \in \mathbb{N}$</p> <p>Here is what I have:</p> <p>$7 | (100m + n) \iff (100m +n) \mod 7 = 0$</p> <p>$\iff (100m \mod 7 + n \mod 7) \mod 7 = 0 $</p> <p>$\iff (2m +n) \mod 7 = 0$ </p> <p>That is where I am stuck.</p>
Nosrati
108,128
<p>$$100m+n=7k$$ $$2m+n=7k-(7\times14)m$$ $$4(2m+n)=4(7k-(7\times14)m)$$ $$m+4n=4(7k-7\times14m)-7m=7\ell$$</p>
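<p>The title of the problem is not quoted above, but the last line suggests the claim is $7\mid 100m+n \iff 7\mid m+4n$. The key point, which also gives the reverse direction, is that $4$ is invertible mod $7$; a brute-force sanity check:</p>

```python
# 4*(2m + n) = 8m + 4n ≡ m + 4n (mod 7), and gcd(4, 7) = 1,
# so 7 | 100m + n exactly when 7 | m + 4n:
for m in range(200):
    for n in range(200):
        assert ((100 * m + n) % 7 == 0) == ((m + 4 * n) % 7 == 0)
```
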
10,722
<p>I notice that geometry students frequently have difficulty with representations of 3-dimensional objects in 2 dimensions. Today, we worked with physical manipulatives in order to help visualize where right triangles can occur in 3 dimensions in both pyramids and rectangular prisms (the focus is on fluency with the Pythagorean Theorem and noting its application in many contexts.) I chose to create physical manipulatives instead of finding online 3d manipulatives because <em>I felt as if the physical manipulatives would provide more insight than simply seeing a draggable, yet still 2d, projection.</em> </p> <p>For clarity: </p> <p>Physical manipulative in conjunction with 2d drawing: <a href="https://i.stack.imgur.com/PMWeo.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/PMWeo.jpg" alt="Photo of maipulative"></a></p> <p>Some online examples of "virtual manipulatives": <a href="https://www.learner.org/interactives/geometry/3d_pyramids.html" rel="noreferrer">1</a> <a href="http://www.learnalberta.ca/content/mejhm/index.html?l=0&amp;ID1=AB.MATH.JR.SHAP&amp;ID2=AB.MATH.JR.SHAP.SURF&amp;lesson=html/object_interactives/surfaceArea/explore_it.html" rel="noreferrer">2</a></p> <p>My question is: <em>Is there research supporting my intuition?</em> Are students who have difficulty translating between 2D and 3D more benefited by a physical model than a virtual manipulative, or the other way around?</p>
Joseph O'Rourke
511
<p>This is not what you seek, because it compares two different physical manipulatives, rather than physical vs. virtual. But I find it interesting partly because my own research involves studying nets of polyhedra.</p> <blockquote> <p>Scott, Jacqui, Anton Selvaratnam, and Lynden Rogers. "Using Bendable and Rigid Manipulatives in Primary Mathematics: Is One More Effective Than the Other in Conceptualising 3D Objects from Their 2D Nets?." <em>TEACH Journal of Christian Education</em> 6.1 (2012): 10. (<a href="http://research.avondale.edu.au/cgi/viewcontent.cgi?article=1028&amp;context=teach" rel="noreferrer">Article link</a>)</p> </blockquote> <p><hr /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <a href="https://i.stack.imgur.com/PRory.jpg" rel="noreferrer"><img src="https://i.stack.imgur.com/PRory.jpg" alt="Manipulatives"></a></p> <hr /> <blockquote> <p><strong><em>Abstract excerpts</em></strong>. The purpose of this study was to compare the effectiveness of two different types of manipulatives, bendable and rigid, as aids for the conceptualisation of 3D solids from 2D nets...Contrary to initial expectations, the bendable nets, although more attractive to pupils, did not prove superior to the rigid variety. In fact, the most noticeable advances in conceptualisation followed teaching experiences using the rigid nets.</p> </blockquote> <p>They cite an article I couldn't find:</p> <blockquote> <p>Shaw, J. M. "Manipulatives enhance the learning of mathematics." Houghton Mifflin Mathematics. (2002).</p> </blockquote>
4,203,906
<p>Do there exist real numbers a and b such that</p> <p>(i) <span class="math-container">$a+b$</span> is rational and <span class="math-container">$a^n+b^n$</span> is irrational for each natural <span class="math-container">$n ≥ 2$</span>;</p> <p>(ii) <span class="math-container">$a+b$</span> is irrational and <span class="math-container">$a^n+b^n$</span> is rational for each natural <span class="math-container">$n ≥ 2$</span>?</p> <p>for (i), I tried to prove yes, and I was thinking of some rational <span class="math-container">$x$</span> and irrational <span class="math-container">$z$</span> such that <span class="math-container">$a = x+z, b = x-z$</span>, but I don't quite know how to show <span class="math-container">$a^n + b^n$</span> is always irrational for <span class="math-container">$n \geq 2.$</span> I tried to use induction, but since you can't say irrational + irrational = irrational, I'm at a loss as to what to do.</p> <p>for (ii), I tried to prove no by factorizing some <span class="math-container">$a^n + b^n$</span> for some odd <span class="math-container">$n$</span>, say <span class="math-container">$a^3 + b^3 = (a+b)(a^2 -ab + b^2)$</span>, and somehow proving that <span class="math-container">$\frac{a^n + b^n}{a+b}$</span> is rational for some odd <span class="math-container">$n$</span>, but I don't know what to do next.</p>
zwim
399,263
<h4 id="study-case-i-y6a2">Study case (i)</h4> <p>Let <span class="math-container">$s=a+b$</span> and <span class="math-container">$p=ab$</span>; then <span class="math-container">$a,b$</span> are solutions of <span class="math-container">$x^2-sx+p=0$</span>.</p> <p>Regarding this as the characteristic equation of a linear recurrence relation, this gives <span class="math-container">$$\begin{cases}u_{n+2}=su_{n+1}-pu_n\\u_n=a^n+b^n\end{cases}$$</span></p> <p>Note that <span class="math-container">$u_0=2$</span> and <span class="math-container">$u_1=s$</span> therefore if both <span class="math-container">$s$</span> and <span class="math-container">$p$</span> are rational then by induction every <span class="math-container">$u_n$</span> will be rational.</p> <p>Therefore the only chance at a counterexample is to have <span class="math-container">$p$</span> irrational.</p> <p>This is only a necessary condition.</p> <hr /> <p>Edit 1:</p> <p>We can try to prove it for some particular set of numbers.</p> <p>Let's work in the ring <span class="math-container">$\mathbb Q\Big[\sqrt{k}\Big]$</span> for instance.</p> <p>I claim this works for any integer <span class="math-container">$k\ge 2$</span> with <span class="math-container">$\begin{cases}a=k+\sqrt{k}\\b=1-\sqrt{k}\end{cases}$</span></p> <p><span class="math-container">$$\require{cancel}\begin{cases}s=a+b=k+1\in\mathbb Q\\ p=ab=(k+\sqrt{k})(1-\sqrt{k})=\cancel{k}-k\sqrt{k}+\sqrt{k}-\cancel{k}=(1-k)\sqrt{k}\in\mathbb R\setminus\mathbb Q\end{cases}$$</span></p> <p>Now since <span class="math-container">$u_n=a^n+b^n\in\mathbb Q\Big[\sqrt{k}\Big]$</span> too we can set:</p> <p><span class="math-container">$$u_n=\alpha_n+\beta_n\sqrt{k}$$</span></p> <p>The linear recurrence relation gives (I skip the calculations):</p> <p><span class="math-container">$$\begin{cases} \alpha_0=2,\ \beta_0=0\\ \alpha_1=s,\ \beta_1=0\\ \alpha_{n+2}=s\,\alpha_{n+1}+k(k-1)\,\beta_{n}\\ \beta_{n+2}=s\,\beta_{n+1}+(k-1)\,\alpha_{n}
\end{cases}$$</span></p> <p>Since <span class="math-container">$s&gt;1$</span> and <span class="math-container">$(k-1)&gt;0$</span> and all initial terms are non-negative, we get <span class="math-container">$\alpha_n\nearrow$</span> and <span class="math-container">$\beta_n\nearrow$</span>.</p> <p>In particular <span class="math-container">$\beta_n&gt;0$</span> for every <span class="math-container">$n\ge 2$</span> (since <span class="math-container">$\beta_2=2(k-1)&gt;0$</span>), hence <span class="math-container">$$u_n\in\mathbb R\setminus\mathbb Q\qquad(n\ge 2)$$</span></p> <hr /> <h4 id="study-case-ii-y5ws">Study case (ii)</h4> <p>We still have the relation <span class="math-container">$$p=\frac {s\,u_{n+1}-u_{n+2}}{u_n}$$</span></p> <p>Notice that since <span class="math-container">$s$</span> is irrational and all <span class="math-container">$u_n$</span> with <span class="math-container">$n\ge 2$</span> are rational by hypothesis, this forces <span class="math-container">$p$</span> to be irrational too.</p> <p>Though I suspect this case is not possible, I can't seem to find the decisive blow...</p> <p>Edit 2:</p> <p>See AAA's answer: by exploiting <span class="math-container">${u_n}^2$</span> one can prove that <span class="math-container">$p$</span> has to be rational too, an incompatible conclusion; therefore (ii) is not possible.</p>
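<p>The claim of Edit 1 can be checked with exact arithmetic in $\mathbb Q[\sqrt k]$, here for $k=2$; the pair encoding and helper names are mine:</p>

```python
from fractions import Fraction as F

K = 2                      # work in Q[sqrt(2)]; pairs (x, y) encode x + y*sqrt(K)

def mul(u, v):
    """(a1 + b1*sqrt(K)) * (a2 + b2*sqrt(K)), exactly."""
    a1, b1 = u
    a2, b2 = v
    return (a1 * a2 + K * b1 * b2, a1 * b2 + b1 * a2)

a = (F(K), F(1))           # a = K + sqrt(K)
b = (F(1), F(-1))          # b = 1 - sqrt(K)

assert (a[0] + b[0], a[1] + b[1]) == (F(K + 1), F(0))   # a + b = k + 1, rational

pa, pb = a, b
for n in range(2, 40):
    pa, pb = mul(pa, a), mul(pb, b)
    alpha, beta = pa[0] + pb[0], pa[1] + pb[1]          # u_n = alpha + beta*sqrt(K)
    assert beta > 0        # nonzero sqrt(K)-part: u_n is irrational for n >= 2
```
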
73,410
<p>Gromov proved that if $$ f,g:\left[ {a,b} \right] \to R $$ are integrable functions such that the function $$ t \to \frac{{f\left( t \right)}} {{g\left( t \right)}} $$ is also integrable and decreasing, then the function $$ r \to \frac{{\int\limits_a^r {f\left( t \right)dt} }} {{\int\limits_a^r {g\left( t \right)dt} }} $$ is decreasing. I could not prove it, and I could not find a proof )=</p>
robjohn
13,854
<p><em>Thanks to Mariano Suárez-Alvarez for pointing out a bad assumption I made in my previous attempt</em></p> <p>For all $u\le v$, in $[a,b]$ we have $$ \frac{f(u)}{g(u)}\ge\frac{f(v)}{g(v)} $$ Assuming that $g$ is either non-negative or non-positive on all of [a,b], we get $$ f(u)g(v)\ge f(v)g(u) $$ Let $r\le s$. Then, integrating in $u$ from $a$ to $r$ and then in $v$ from $r$ to $s$, we get $$ \int_a^rf(u)\mathrm{d}u\;\int_r^sg(v)\mathrm{d}v\ge\int_a^rg(u)\mathrm{d}u\;\int_r^sf(v)\mathrm{d}v $$ Then we have $$ \begin{align} &amp;\frac{\int_a^rf(t)\mathrm{d}t}{\int_a^rg(t)\mathrm{d}t}-\frac{\int_a^sf(t)\mathrm{d}t}{\int_a^sg(t)\mathrm{d}t}\\ &amp;=\frac{\int_a^rf(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t-\int_a^rg(t)\mathrm{d}t\;\int_a^sf(t)\mathrm{d}t}{\int_a^rg(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t}\\ &amp;=\frac{\int_a^rf(t)\mathrm{d}t\;(\int_a^rg(t)\mathrm{d}t+\int_r^sg(t)\mathrm{d}t)-\int_a^rg(t)\mathrm{d}t\;(\int_a^rf(t)\mathrm{d}t+\int_r^sf(t)\mathrm{d}t)}{\int_a^rg(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t}\\ &amp;=\frac{\int_a^rf(t)\mathrm{d}t\;\int_r^sg(t)\mathrm{d}t-\int_a^rg(t)\mathrm{d}t\;\int_r^sf(t)\mathrm{d}t}{\int_a^rg(t)\mathrm{d}t\;\int_a^sg(t)\mathrm{d}t}\\ &amp;\ge0 \end{align} $$ <strong>Update:</strong> The requirement that $g$ stay either non-negative or non-positive is reasonable since the result is false for $f(t)=1-t$ and $g(t)=1-t^2$ on $[0,\frac{3}{2}]$. Here is the graph of $\frac{\int_0^x(1-t)\;\mathrm{d}t}{\int_0^x(1-t^2)\;\mathrm{d}t}$: <img src="https://i.stack.imgur.com/WaZUg.gif" alt="integral ratio"></p>
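Both the claim (under robjohn's sign caveat on $g$) and the counterexample are easy to probe numerically. Below is a small Python check using a sample valid pair $f=1$, $g=1+t$ on $[0,2]$ (where $f/g$ decreases and $g>0$), then replaying the counterexample $f=1-t$, $g=1-t^2$; the antiderivatives are written in closed form:

```python
# Valid case: f = 1, g = 1 + t on [0, 2]; f/g decreasing, g > 0.
# R(r) = ∫f / ∫g = r / (r + r^2/2) = 1/(1 + r/2): decreasing.
R = lambda r: r / (r + r * r / 2)
xs = [0.1 + 0.01 * i for i in range(190)]
vals = [R(x) for x in xs]
assert all(v1 >= v2 for v1, v2 in zip(vals, vals[1:]))

# robjohn's counterexample: f = 1 - t, g = 1 - t^2 on [0, 3/2]
# (g changes sign there); the ratio R2 is NOT monotone decreasing.
R2 = lambda r: (r - r * r / 2) / (r - r ** 3 / 3)
vals2 = [R2(0.1 + 0.01 * i) for i in range(130)]
assert any(v1 < v2 for v1, v2 in zip(vals2, vals2[1:]))
```

The second check matches the graph in the answer: the ratio decreases near $0$ but turns upward before $3/2$.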
73,410
<p>Gromov proved that if $$ f,g:\left[ {a,b} \right] \to R $$ are integrable functions such that the function $$ t \to \frac{{f\left( t \right)}} {{g\left( t \right)}} $$ is also integrable and decreasing, then the function $$ r \to \frac{{\int\limits_a^r {f\left( t \right)dt} }} {{\int\limits_a^r {g\left( t \right)dt} }} $$ is decreasing. I could not prove it, and I could not find a proof )=</p>
zyx
14,120
<p>The geometric interpretation of the result is fairly clear if you draw the picture of a particle with velocity vector $(f(t), g(t))$ that at time $t=a$ is at $(0,0) \quad$ (assume $g(t) &gt; 0$ so that the particle moves to the right at all times). Decreasing $f(t)/g(t)$ means the path of the particle is convex, curving downward. This implies the second property if the particle goes through $0$; the slope of the velocity vector when $t&gt;a$ is always less than the slope of the line from the particle to $0$ so that continued motion forces the latter to decrease.</p>
1,440,522
<p>For a function $f:[0,1]\to \mathbb{R}$, let $C$ be the set of points where $f$ is continuous. Prove that $C$ is in the Borel $\sigma$-algebra.</p> <p>I know that $A=\{f(x): f(x)&lt;a\}$ is open for each real number $a$, and since openness is preserved by continuity, the set $f^{-1}(A)\cap C$ should also be open. But I don't know how to write a rigorous proof for it. And I feel I need to write $C$ in such a way that it is clear that it can be written as a union or intersection of open sets.</p>
recmath
102,337
<p>We can actually show that these points form a $G_{\delta}$ set (a countable intersection of open sets). </p> <p>Let </p> <p>$$A_n=\{t \ \mathrm{s.t} \ \ \exists \delta_t&gt;0 \ \mathrm{with} \ |f(y)-f(x)|&lt; \frac{1}{n} \ \mathrm{when} \ x,y \in (t-\delta_t, t+\delta_t)\}$$</p> <p>Each of these $A_n$ is open. The set of points at which $f$ is continuous is $\cap_{i=1}^\infty A_i$.</p>
384,006
<p>Just came across the following question:</p> <blockquote> <p>Let $S=\{2,5,13\}$. Notice that $S$ satisfies the following property: for any $a,b \in S$ and $a \neq b$, $ab-1$ is a perfect square. Show that for any positive integer $d \not\in S$, $S \cup \{d\}$ does not satisfy the above property.</p> </blockquote> <p>This question can be done by considering modulo 4.</p> <p>Here comes my question:</p> <blockquote> <p>What is the greatest value of $|A|$ if all elements in $A$ are different and for any $a,b \in A$ and $a \neq b$, $ab-1$ is a perfect square?</p> </blockquote> <p>Remark: $A$ may not contain any of $\{2,5,13\}$, example: $\{17, 26, 85\}$</p> <p>Edit: From the link <a href="http://web.math.pmf.unizg.hr/~duje/intro.html">here</a>, there are infinitely many 3-element sets satisfying the property. These sets are of the form $\{a, b, a+b+2r\}$ where $r^2 = ab-1$. Are we able to find a 4-element set that satisfies the property?</p>
duje
82,393
<p>By the paper A. Dujella and C. Fuchs, Complete solution of a problem of Diophantus and Euler, J. London Math. Soc. 71 (2005), 33-52 (see <a href="http://web.math.pmf.unizg.hr/~duje/pdf/dioeul2.pdf" rel="nofollow">Theorem 1b</a>), there does not exist a 4-element set with the considered property and with all elements greater than 1. </p> <p>For results on sets containing 1, see e.g. N. C. Bonciocat, M. Cipu, M. Mignotte, On D(-1)-quadruples, Publ. Mat. 56 (2012), 279-304. In particular, if {1,b,c,d} has the considered property, then b&gt;10^{13}.</p>
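These facts are easy to explore by brute force. The Python sketch below (helper names are mine) checks the sample sets from the question, verifies the $\{a,b,a+b+2r\}$ family for one instance, and searches for a fourth element extending $\{2,5,13\}$ up to a small bound:

```python
import math

def is_square(m):
    return m >= 0 and math.isqrt(m) ** 2 == m

def has_property(s):
    """All pairwise products ab - 1 are perfect squares."""
    return all(is_square(a * b - 1)
               for i, a in enumerate(s) for b in s[i + 1:])

assert has_property([2, 5, 13]) and has_property([17, 26, 85])

# Family {a, b, a+b+2r} with r^2 = ab - 1; e.g. a=17, b=26 gives r=21
a, b = 17, 26
r = math.isqrt(a * b - 1)
assert r * r == a * b - 1 and a + b + 2 * r == 85
assert has_property([a, b, a + b + 2 * r])

# No d <= 10000 extends {2, 5, 13} (consistent with the mod-4 argument)
assert not any(has_property([2, 5, 13, d])
               for d in range(1, 10_001) if d not in (2, 5, 13))
```

The search bound is of course only illustrative; the cited theorem is what rules out 4-element sets in general.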
1,998,244
<p>Given the equation of a damped pendulum:</p> <p>$$\frac{d^2\theta}{dt^2}+\frac{1}{2}\left(\frac{d\theta}{dt}\right)^2+\sin\theta=0$$</p> <p>with the pendulum starting with $0$ velocity, apparently we can derive:</p> <p>$$\frac{dt}{d\theta}=\frac{1}{\sqrt{\sqrt2\left[\cos\left(\frac{\pi}{4}+\theta\right)-e^{-(\theta+\phi)}\cos\left(\frac{\pi}{4}-\phi\right)\right]}}$$</p> <p>where $\phi$ is the initial angle from the vertical. How can we derive that? Obviously $\frac{dt}{d\theta}$ is the reciprocal of $\frac{d\theta}{dt}$, but I don't see how to deal with the second derivative.</p> <p>I've found a similar derivation at <a href="https://en.wikipedia.org/wiki/Pendulum_(mathematics)" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Pendulum_(mathematics)</a>, where the formula</p> <p>$${\frac {d\theta }{dt}}={\sqrt {{\frac {2g}{\ell }}(\cos \theta -\cos \theta _{0})}}$$</p> <p>is derived in the "Energy derivation of Eq. 1" section. However, that uses a conservation of energy argument which is not applicable for a damped pendulum.</p> <p>So how can I derive that equation?</p>
fleablood
280,126
<p>There are 4 groups:</p> <p>$A$ = no siblings</p> <p>$B$ = only brothers</p> <p>$C$ = only sisters</p> <p>$D$ = both brothers and sisters.</p> <p>$A = 5$.</p> <p>$D = 6$</p> <p>$17 = B + D$</p> <p>$18 = C + D$</p> <p>So $B = 17 - D =17-6 = 11$. $C = 18 -D = 18-6 =12$</p> <p>So there are <em>exactly</em> (no "at least" about it) $5+6+11+12 = 34$ students.</p> <p>5 have no siblings, 11 have only brothers, 12 have only sisters, 6 have both.</p> <p>So 17 have brothers and maybe or maybe not sisters, and 18 have sisters and maybe or maybe not brothers.</p>
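The four-group bookkeeping above is a tiny inclusion-exclusion computation, easy to script (the survey numbers are the ones used in the answer):

```python
# Class survey bookkeeping: 5 students with no siblings, 17 with
# brothers, 18 with sisters, 6 with both.
no_sib, with_bros, with_sis, both = 5, 17, 18, 6
only_bros = with_bros - both          # B = 17 - D = 11
only_sis = with_sis - both            # C = 18 - D = 12
total = no_sib + only_bros + only_sis + both
assert (only_bros, only_sis, total) == (11, 12, 34)
```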
3,745,273
<p>I am looking for a way to solve :</p> <p><span class="math-container">$$\int_{-\infty}^{\infty} \frac{x\sin(3x)}{x^4+1}\,dx $$</span></p> <p>without making use of complex integration.</p> <p>What I tried was making use of integration by parts, but that didn't reach any conclusive result. (i.e. I integrated <span class="math-container">$\sin(3x)$</span> and differentiated the rest)</p> <p>I can't see a clear starting point to solve this question. Any help appreciated.</p> <p>This problem is posted by Vilakshan Gupta on <a href="https://brilliant.org/problems/integrate-it-8/?ref_id=1591957" rel="nofollow noreferrer">Brilliant</a>.</p>
Riemann'sPointyNose
794,524
<p>@Nanayajitzuki has given you a very nice solution to this problem using Leibniz' integral rule (or Feynman trick if you are a Physicist!) Really, this integral is ridiculously difficult without Complex Analysis. It's doable... but any real method is going to be highly non-trivial.</p> <p>For another solution, we could again parameterize the integral and use the Laplace Transform. Technically, formally inverting the Laplace Transform at the end would require Complex Integration - however, we can figure out the inverses of many standard functions very easily by using properties of the Laplace Transform along with knowing the Laplace Transform of standard functions.</p> <p>Define</p> <p><span class="math-container">$${I(t)=\int_{0}^{\infty}\frac{x\sin(3xt)}{x^4 + 1}dx}$$</span></p> <p>So we have</p> <p><span class="math-container">$${\Rightarrow \int_{0}^{\infty}\int_{0}^{\infty}\frac{x\sin(3xt)}{x^4 + 1}e^{-st}dxdt=\mathcal{L}\{I(t)\}(s)}$$</span></p> <p>Interchanging the integrals gives us</p> <p><span class="math-container">$${\Rightarrow \int_{0}^{\infty}\int_{0}^{\infty}\left(\frac{x}{x^4 + 1}\right)\sin(3xt)e^{-st}dtdx=\int_{0}^{\infty}\left(\frac{x}{x^4 + 1}\right)\left(\mathcal{L}\{\sin(3xt)\}\right)dx}$$</span></p> <p>We can use the well known formula for the Laplace Transform of <span class="math-container">${\sin(ax)}$</span>:</p> <p><span class="math-container">$${\Rightarrow \mathcal{L}\{I(t)\}=\int_{0}^{\infty}\frac{x}{x^4 + 1}\left(\frac{3x}{9x^2 + s^2}\right)dx=\int_{0}^{\infty}\frac{3x^2}{(9x^2 + s^2)(x^4 + 1)}dx}$$</span></p> <p>This is a ridiculous integral to evaluate (you can look at Wolfram alpha as to how big the anti-derivative answer actually is) - but again, completely doable using real methods. 
If you do evaluate it though, the end result is</p> <p><span class="math-container">$${\Rightarrow \frac{3\pi}{2\sqrt{2}\left(s^2 + 3\sqrt{2}s + 9\right)}=\mathcal{L}\{I(t)\}}$$</span></p> <p>(Writing down the complete way of solving that integral would be huge - but essentially you need to do partial fractions and substitutions a bunch. It is possible to evaluate it using real methods though - simply because it's a monster integral, I won't write the steps for it here. But you can give it a go if you are feeling brave :P It shouldn't be hard - just tedious).</p> <p>Now the last part is to invert the Laplace Transform! To do this, notice that</p> <p><span class="math-container">$${\mathcal{L}\{e^{-at}\sin(bt)\}=\frac{b}{(a+s)^2 + b^2}}$$</span></p> <p>(you can also see it as part of a Inverse Laplace table - see: <a href="https://tutorial.math.lamar.edu/classes/de/laplace_table.aspx" rel="nofollow noreferrer">https://tutorial.math.lamar.edu/classes/de/laplace_table.aspx</a>)</p> <p>And notice that</p> <p><span class="math-container">$${\mathcal{L}\{I(t)\}=\frac{3\pi}{2\sqrt{2}}\left(\frac{\sqrt{2}}{3}\right)\left(\frac{\frac{3}{\sqrt{2}}}{\left(s+\frac{3\sqrt{2}}{2}\right)^2 + \left(\frac{3}{\sqrt{2}}\right)^2}\right)=\frac{\pi}{2}\left(\frac{\frac{3}{\sqrt{2}}}{\left(s+\frac{3\sqrt{2}}{2}\right)^2 + \left(\frac{3}{\sqrt{2}}\right)^2}\right)}$$</span></p> <p>And so we get</p> <p><span class="math-container">$${I(t)=\frac{\pi}{2}e^{-\frac{3\sqrt{2}}{2}t}\sin\left(\frac{3}{\sqrt{2}}t\right)}$$</span></p> <p>Giving us overall</p> <p><span class="math-container">$${\Rightarrow \int_{-\infty}^{\infty}\frac{x\sin(3x)}{x^4+1}dx=2I(1)=\pi e^{-\frac{3}{\sqrt{2}}}\sin\left(\frac{3}{\sqrt{2}}\right)}$$</span></p>
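As a sanity check on the final value, one can integrate numerically in pure Python (composite Simpson's rule; the integrand is even and bounded by $1/x^3$ beyond $x=50$, so truncating there is harmless at this tolerance):

```python
import math

def integrand(x):
    return x * math.sin(3 * x) / (x ** 4 + 1)

def simpson(f, a, b, n):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# The integrand is even, so the full integral is twice the [0, 50] piece
numeric = 2 * simpson(integrand, 0.0, 50.0, 100_000)
closed = math.pi * math.exp(-3 / math.sqrt(2)) * math.sin(3 / math.sqrt(2))
assert abs(numeric - closed) < 1e-3
```

Both sides come out to roughly $0.321$, matching the closed form $\pi e^{-3/\sqrt2}\sin(3/\sqrt2)$.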
2,994,900
<p>Prove that <span class="math-container">$$\sum_{d\mid q}\frac{\mu(d)\log d}{d}=-\frac{\phi(q)}{q}\sum_{p\mid q}\frac{\log p}{p-1},$$</span> where <span class="math-container">$\mu$</span> is Möbius function, <span class="math-container">$\phi$</span> is Euler's totient function, and <span class="math-container">$q$</span> is a positive integer.</p> <p>I can get <span class="math-container">\begin{align} \sum_{d\mid q} \frac{\mu(d)\log d}{d}&amp; = \sum_{d\mid q}\frac{\mu(d)}{d}\sum_{p\mid d}\log p \\ &amp; = \sum_{p\mid q} \log p \sum_{\substack{d\mid q \\ p\mid d}} \frac{\mu(d)}{d} = \sum_{p\mid q} \log p \sum_{\substack{d \\ p\mid d \mid q}} \frac{\mu(d)}{d}, \end{align}</span> Let <span class="math-container">$d=pr$</span>, then <span class="math-container">$\mu(d)=\mu(p)\mu(r)=-\mu(r)$</span>, <span class="math-container">$$ \sum_{p\mid q} \log p \sum_{\substack{d \\ p\mid d \mid q}} \frac{\mu(d)}{d}= - \sum_{p\mid q} \frac{\log p}{p} \sum_{\substack{r\mid q \\ p \nmid r}} \frac{\mu(r)}{r}.$$</span> But I don't know why <span class="math-container">$$- \sum_{p\mid q} \frac{\log p}{p} \sum_{\substack{r\mid q \\ p \nmid r}} \frac{\mu(r)}{r}=-\frac{\phi(q)}{q} \sum_{p\mid q} \frac{\log p}{p-1}?$$</span></p> <p>Can you help me?</p>
arithmetic1
558,611
<p>I find a paper "On some identities in multiplicative number theory", Olivier Bordellès and Benoit Cloitre, arXiv:1804.05332v2 <a href="https://arxiv.org/abs/1804.05332v2" rel="nofollow noreferrer">https://arxiv.org/abs/1804.05332v2</a></p> <p>Using Dirichlet convolution <span class="math-container">\begin{eqnarray*} - \frac{\varphi(n)}{n} \sum_{p \mid n} \frac{\log p}{p-1} &amp;=&amp; - \frac{1}{n} \sum_{p \mid n} \varphi \left( \frac{n}{p} \right) \log p \\ &amp;=&amp; - \frac{1}{n} \left( \Lambda \ast \varphi \right) (n) \\ &amp;=&amp; - \frac{1}{n} \left( - \mu \log \ast \mathbf{1} \ast \mu \ast \mathrm{id} \right) (n) \\ &amp;=&amp; \frac{1}{n} \left( \mu \log \ast \mathrm{id} \right) (n) \\ &amp;=&amp; \sum_{d \mid n} \frac{\mu(d) \log d}{d}. \end{eqnarray*}</span></p>
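The identity is also easy to verify numerically for small $q$; the sketch below implements $\mu$, $\varphi$ and both sides from scratch (helper names are mine):

```python
import math

def factorize(n):
    """Prime factorization as a dict {p: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mobius(n):
    f = factorize(n)
    if any(e > 1 for e in f.values()):
        return 0
    return -1 if len(f) % 2 else 1

def totient(n):
    result = n
    for p in factorize(n):
        result -= result // p
    return result

def lhs(q):
    return sum(mobius(d) * math.log(d) / d
               for d in range(1, q + 1) if q % d == 0)

def rhs(q):
    return -totient(q) / q * sum(math.log(p) / (p - 1) for p in factorize(q))

for q in range(2, 200):
    assert math.isclose(lhs(q), rhs(q), rel_tol=1e-9, abs_tol=1e-12)
```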
50,362
<p>I have a question about the basic idea of singular homology. My question is best expressed in context, so consider the 1-dimensional homology group of the real line $H_1(\mathbb{R})$. This group is zero because the real line is homotopy equivalent to a point. The chain group $C_1(\mathbb{R})$ contains all finite formal linear combinations of continuous maps from the interval $[0,1]$ into $\mathbb{R}$. One such map (call it $\mu$) maps the interval along some path that begins and ends at zero. (For my purposes it doesn't matter how exactly.) This map is a cycle, i.e. is contained in the kernel of $\partial_1:C_1 \rightarrow C_0$, because it begins and ends at the same point. It must be that it is also a boundary, i.e. contained in the image of $\partial_2:C_2 \rightarrow C_1$, because otherwise it would represent a nonzero homology class in $H_1$. My question is about exactly how and why it is a boundary.</p> <p>I have an intuitive understanding of why it is a boundary that does not seem to work when I translate it into formal language, and a formal way to show it is a boundary that does not seem to capture the heart of the intuition. My reference on the formal definitions is Allen Hatcher's <i>Algebraic Topology</i>.</p> <p>Intuitively, $\mu$ maps $[0,1]$ to a loop and then smooshes it into the real line (i.e. $\mu$ factors through $S^1$). The map from the loop to the line could be extended to a disc without losing continuity, since the whole thing gets smooshed anyway. A triangle could be mapped homeomorphically to the disc, and this would give us a map $\zeta: \Delta^2 \rightarrow \mathbb{R}$ of which, intuitively anyway, $\mu$ is the boundary. 
However, formally, $\partial_2 (\zeta)$ is the formal sum of the restriction of $\zeta$ to each of its edges; it is thus a formal sum of <i>three</i> maps from the interval to the real line, and thus is not (formally) equal to $\mu$.</p> <p>Formally, I can define a map $\alpha : \Delta^2 \rightarrow \mathbb{R}$ from a triangle to the real line that does have $\mu$ as a boundary, but I am very unsatisfied with this construction because it involves details that feel essentially extrinsic to the intuition above. Let the vertices of $\Delta^2$ be labeled 0, 1, 2. Map $\Delta^2$ to a disc in the following way: map vertex 0 to the center of the disc; the edges $[0,1]$ and $[0,2]$ to a radius in the same way (so that the restrictions of $\alpha$ to the two edges are equal); the edge $[1,2]$ around the circumference; and extend the map to the interior of the triangle in the obvious way. Then map the disc to the real line as above; the restriction to the circumference is $\mu$. Now, the boundary map $\partial_2$ by definition maps $\alpha$ to $\alpha |_{[0,1]} +\alpha |_{[1,2]}-\alpha |_{[0,2]}$. But $\alpha |_{[0,1]}$ and $\alpha |_{[0,2]}$ are equal and $\alpha |_{[1,2]}$ is equal to $\mu$, so $\partial_2(\alpha)=\mu$.</p> <p>My question is this: is it correct that the intuitive construction of $\zeta$ does not provide an element of $C_2$ with $\mu$ as a boundary? Is it correct that in order to get $\mu$ as a boundary one must use a construction like that of $\alpha$ above? If so, is the intuition that $\mu$ is a boundary because it is a loop that can be extended to a disc before smooshing wrong? Does the fact that $\mu$ is a boundary really hang on the sign convention in the definition of $\partial_2$? 
If so, can you give me a reason for why this sign convention works to guarantee that such a construction will always exist when a cycle "seems like it should be" a boundary?</p> <p>EDIT:</p> <p>I should add, after reading a few very helpful but somehow-unsatisfying-to-me answers, that I am not just interested in the one-dimensional case. (See my comment on MartianInvader's answer.)</p> <p>EDIT (7/12):</p> <p>Thanks for all the help everyone. My immediate acute sense of cognitive dissonance has been addressed, so I'm marking the question answered. I have some residual sense of not getting the whole picture, but expect this to resolve itself with slow processing of more theorems (like homotopy invariance of homology, and the Hurewicz map, thank you Matt E and Dan Ramras).</p>
Matt E
221
<p>Your intuition is correct, I think. I also had the experience, when first learning this material, of wanting to understand homologies explicitly in the way that you are trying to, so I encourage you to pursue your attempt to match intuition with formal definitions.</p> <p>The basic problem you observed is that often, at a technical level, one has to produce formal sums of cycles, while when thinking intuitively, one doesn't normally generate these formal sums in one's imagination. The way to reconcile this is to prove the following:</p> <p>If $\alpha:[0,1] \to X$ and $\beta: [0,1] \to X$ are two 1-simplices (in any target space $X$) and $\gamma:[0,1] \to X$ is the <em>sum</em> of $\alpha$ and $\beta$ in the sense of addition in the fundamental group, then there is a homology between $\alpha + \beta$ (formal sum) and $\gamma$. This is easily checked, so I leave it as an exercise. (In a complete treatment of singular homology, it would appear as part of the verification of homotopy invariance, probably in some implicit manner. It is also closely related to Dylan Wilson's suggestion about verifying that homotopic cycles are homologous.) Once you've done this, you'll have more confidence that various intuitive pictures do indeed match with the more formally correct treatment.</p>
2,461,506
<p>I am trying to derive / prove the fourth order accurate formula for the second derivative:</p> <p>$f''(x) = \frac{-f(x + 2h) + 16f(x + h) - 30f(x) + 16f(x - h) - f(x -2h)}{12h^2}$.</p> <p>I know that in order to do this I need to take some linear combination for the Taylor expansions of $f(x + 2h)$, $f(x + h)$, $f(x - h)$, $f(x -2h)$. For example, when deriving the the centered-difference formula for the first derivative, the Taylor expansion of $f(x + h)$ minus $f(x-h)$ can be computed to give the desired result of $f'(x)$, in that case.</p> <p>In what way would I have to combine these Taylor expansions above to obtain the required result?</p>
Donald Splutterwit
404,247
<p>Exactly as Gammatester says, Taylor expand the terms up to order $4$ and verify. \begin{eqnarray*} -f(x+2h) &amp;=&amp; -f(x) &amp;-&amp; 2h f'(x) &amp;-&amp; 2h^2 f''(x) &amp;-&amp; \frac{4}{3} h^3 f'''(x) &amp;-&amp; \frac{2}{3} h^4 f''''(x) &amp;+&amp; O(h^5) \\ 16f(x+h) &amp;=&amp; 16 f(x)&amp;+&amp; 16h f'(x) &amp;+&amp; 8h^2 f''(x) &amp;+&amp; \frac{8}{3} h^3 f'''(x) &amp;+&amp; \frac{2}{3} h^4 f''''(x) &amp;+&amp; O(h^5) \\ -30f(x) &amp;=&amp; -30f(x) &amp; &amp; &amp; &amp; &amp; &amp; &amp; &amp; &amp; &amp; \\ 16f(x-h) &amp;=&amp; 16 f(x)&amp;-&amp; 16h f'(x) &amp;+&amp; 8h^2 f''(x) &amp;-&amp; \frac{8}{3} h^3 f'''(x) &amp;+&amp; \frac{2}{3} h^4 f''''(x) &amp;+&amp; O(h^5) \\ -f(x-2h) &amp;=&amp; -f(x) &amp;+&amp; 2h f'(x) &amp;-&amp; 2h^2 f''(x) &amp;+&amp; \frac{4}{3} h^3 f'''(x) &amp;-&amp; \frac{2}{3} h^4 f''''(x) &amp;+&amp; O(h^5) \\ \end{eqnarray*} Summing the five rows, the $f$, $f'$, $f'''$ and $f''''$ columns all cancel and the $f''$ column contributes $12h^2 f''(x)$; the $h^5$ terms also cancel by symmetry, so the total is $12h^2 f''(x)+O(h^6)$, and dividing by $12h^2$ gives $f''(x)+O(h^4)$.</p>
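One can also confirm the fourth-order accuracy numerically: halving $h$ should divide the error by about $2^4=16$. A minimal Python check on $f=\sin$:

```python
import math

def d2_fourth_order(f, x, h):
    """(-f(x+2h) + 16f(x+h) - 30f(x) + 16f(x-h) - f(x-2h)) / (12h^2)."""
    return (-f(x + 2 * h) + 16 * f(x + h) - 30 * f(x)
            + 16 * f(x - h) - f(x - 2 * h)) / (12 * h * h)

x = 1.0
exact = -math.sin(x)                      # (sin)'' = -sin
e1 = abs(d2_fourth_order(math.sin, x, 0.1) - exact)
e2 = abs(d2_fourth_order(math.sin, x, 0.05) - exact)
assert e1 < 1e-5
assert 12 < e1 / e2 < 20                  # ratio near 2^4 = 16: fourth order
```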
339,142
<p>I'm trying to understand the difference between the sense, orientation, and direction of a vector. According to <a href="http://www.eng.auburn.edu/users/marghdb/MECH2110/c1_2110.pdf">this</a>, sense is specified by two points on a line parallel to a vector. Orientation is specified by the relationship between the vector and given reference lines (which I'm interpreting to be some basis).</p> <p>However, these two definitions seem to be synonymous with direction. How do these 3 terms differ?</p>
Christian Blatter
1,303
<p>For the purposes of this answer two nonzero vectors ${\bf x}$, ${\bf y}\in{\mathbb R}^d$ are considered as <em>equivalent</em> if there is a $\lambda&gt;0$ such that ${\bf y}=\lambda\,{\bf x}$. An equivalence class is called a <em>direction</em>, and two vectors belonging to the same equivalence class are said to <em>point into the same direction</em>. The unit sphere $S^{d-1}\subset{\mathbb R}^d$ is a set of representatives for this equivalence relation.</p> <p>In a one-dimensional setting one has just two directions which then are called <em>senses</em>. They are represented by the two points $1$ and $-1$ making up $S^0\subset{\mathbb R}^1$.</p> <p>The notion of orientation refers to bases of $d$-dimensional real vector spaces $V$. Two bases $(a_i)_{1\leq i\leq d}$ and $(b_i)_{1\leq i\leq d}$ of $V$ are <em>equally oriented</em> when the matrix $T$ relating them has positive determinant. There are exactly two equivalence classes. When there is a distinguished basis of $V$ (e.g. the standard basis $(e_i)_{1\leq i\leq d}$ of ${\mathbb R}^d$) its orientation is usually considered the <em>positive</em> orientation.</p> <p>An example: When a hyperplane $H\subset V$ with $0\in H$ is given, then a chosen positive orientation in $V$ induces an orientation of $H$ only after a positive normal vector ${\bf n}\perp H$ has been selected. A basis $(a_i)_{1\leq i\leq d-1}$ of $H$ is then <em>positively oriented</em> if $({\bf a}_1,\ldots, {\bf a}_{d-1},{\bf n})$ is a positively oriented basis of $V$.</p>
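Both notions are directly computable. A small Python sketch (function names and the floating-point tolerance are mine): direction-equivalence looks for a positive scalar $\lambda$, and equal orientation of two bases of $\mathbb R^2$ compares the signs of their determinants:

```python
def same_direction(x, y, tol=1e-12):
    """Nonzero vectors x, y point in the same direction iff y = lam*x, lam > 0."""
    for xi, yi in zip(x, y):
        if abs(xi) > tol:
            lam = yi / xi          # candidate scalar from a nonzero coordinate
            break
    return lam > 0 and all(abs(yi - lam * xi) < tol for xi, yi in zip(x, y))

def equally_oriented_2d(basis1, basis2):
    """Equal orientation iff the basis determinants have the same sign
    (equivalently, the change-of-basis matrix has positive determinant)."""
    def det(b):
        (a, c), (b_, d) = b
        return a * d - b_ * c
    return det(basis1) * det(basis2) > 0

assert same_direction((1, 2), (3, 6))
assert not same_direction((1, 2), (-1, -2))     # opposite sense
# standard basis vs a rotated basis: equally oriented
assert equally_oriented_2d([(1, 0), (0, 1)], [(0.6, 0.8), (-0.8, 0.6)])
# swapping the basis vectors flips the orientation
assert not equally_oriented_2d([(1, 0), (0, 1)], [(0, 1), (1, 0)])
```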
350,747
<p>Base case: $n=1$. Picking the $2n+1=3$ consecutive numbers $5,6,7$ we get $5+6+7=18$, and $2(1)+1=3$ indeed divides $18$. The base case holds. Now let $k\geq 1$ and assume the claim holds for $n=k$; we want to show it holds for $n=k+1$. So, $2(k+1)+1=(2k+2) +1$....</p> <p>Now I'm stuck. Any ideas?</p>
Community
-1
<p>Let $a$ be the starting number. Then the $2n+1$ consecutive numbers are $$a,a+1,a+2,\ldots,a+2n$$ The sum of these number is $$S(a,n) = (2n+1)a + \dfrac{2n(2n+1)}2 = (2n+1)(a+n)$$ Clearly, $(2n+1) \mid S(a,n)$.</p>
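The closed form $(2n+1)(a+n)$ is easy to spot-check in Python:

```python
def consecutive_sum(a, n):
    """Sum of the 2n+1 consecutive integers a, a+1, ..., a+2n."""
    return sum(range(a, a + 2 * n + 1))

for a in (-7, 0, 5, 123):
    for n in (1, 2, 10):
        s = consecutive_sum(a, n)
        assert s == (2 * n + 1) * (a + n)     # closed form S(a, n)
        assert s % (2 * n + 1) == 0           # the divisibility claim
```

Note that `consecutive_sum(5, 1) == 18`, matching the base case in the question.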
2,928,196
<p>I thought that I could take all points with rational coordinates, but this space is not discrete</p>
bangs
520,024
<p>For each <span class="math-container">$n\in\mathbb{N}$</span>, let <span class="math-container">$$D_n=\{(k/2^n, 1/n)\in \mathbb{R}^2: k\in \mathbb{Z}\}.$$</span> Let <span class="math-container">$D=\cup_{n=1}^\infty D_n$</span>. Then <span class="math-container">$D$</span> is discrete. To see this, note that if <span class="math-container">$x\in D_n$</span>, then no other point of <span class="math-container">$D$</span> is closer than <span class="math-container">$\min\{2^{-n}, \frac{1}{n}-\frac{1}{n+1}\}$</span> to <span class="math-container">$x$</span>. </p> <p>But the closure of <span class="math-container">$D$</span> contains the entire <span class="math-container">$x$</span>-axis. </p>
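Both halves of the argument, the uniform separation around each point and the accumulation of $D$ on the $x$-axis, can be spot-checked numerically (helper name is mine; `max_n` caps how many levels $D_n$ are searched, so the attained distance to an axis point is roughly $1/\texttt{max\_n}$):

```python
import math

def nearest_in_D(x, y, max_n=30):
    """Distance from (x, y) to the nearest point of D, searching the
    levels D_n = {(k/2^n, 1/n)} for n = 1..max_n."""
    best = float("inf")
    for n in range(1, max_n + 1):
        k = round(x * 2 ** n)            # closest grid abscissa on level n
        best = min(best, math.hypot(x - k / 2 ** n, y - 1 / n))
    return best

# Accumulation: the distance from the axis point (0.3, 0) to D keeps
# shrinking as deeper levels are allowed.
dists = [nearest_in_D(0.3, 0.0, max_n=n) for n in (2, 5, 10, 20, 30)]
assert all(d2 <= d1 for d1, d2 in zip(dists, dists[1:]))
assert dists[-1] < 0.04

# Separation: around (1/4, 1/2) (the point k=1 of D_2), no other point of
# D_1..D_6 is closer than min(2^-2, 1/2 - 1/3).
p = (0.25, 0.5)
close = min(math.hypot(p[0] - k / 2 ** n, p[1] - 1 / n)
            for n in range(1, 7) for k in range(-8, 2 ** n * 2 + 8)
            if not (n == 2 and k == 1))
assert close >= min(0.25, 0.5 - 1 / 3) - 1e-12
```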
2,476,865
<p>As the title suggests, I'm trying to establish a good bound on</p> <p>\begin{equation} S(n) = \sum_{k = 2}^n (en)^k k^{-Cn/\log{n} - k - 1/2}, \end{equation}</p> <p>where $C$ is some reasonably large positive constant. In particular I would like to have $S(n) = o(1)$, i.e., </p> <p>\begin{equation} \lim_{n \to \infty} S(n) = 0, \end{equation}</p> <p>which numerical evaluations suggest to be the case.</p> <p>Moreover it appears to be the case that the terms in the series are monotonically decreasing; if this were true my claim follows trivially (replace all terms by the one at $k = 2$ and check) but verifying the ''monotone decrease conjecture'' is again a difficult task in itself.</p> <p>I appreciate any ideas on how to tackle this problem.</p> <p>EDIT: The sequence terms are unfortunately not monotonically decreasing as can be verified by studying the logarithm of the latter but the conjecture about the term at $k = 2$ being largest still stands.</p>
user480881
480,881
<p>This seems to do it:</p> <p>To argue that the second term $s_2$ is indeed largest consider the ratio</p> <p>\begin{equation} r(d) = \frac{s_2}{s_{2 + d}} \end{equation}</p> <p>for some $d \leq n - 2$. Our goal is to establish $r(d) \geq 1$ for $d \geq 1$. We ignore the $-1/2$ contribution in the exponent and find that</p> <p>\begin{equation} r(d) = \left(1 + \frac{d}{2}\right)^{Cn/\log{n} + 2} \left(\frac{2 + d}{en}\right)^d. \end{equation}</p> <p>Observe that</p> <p>\begin{equation} \left(\frac{2 + d}{en}\right)^d \end{equation}</p> <p>is increasing in $d$, so we define</p> <p>\begin{equation} \tilde{r}(d) = \left(1 + \frac{d}{2}\right)^{Cn/\log{n}} \frac{3}{en} \end{equation}</p> <p>and seek to satisfy the stricter requirement $\tilde{r}(d) \geq 1$. Let $f(n)$ denote the first value for which the inequality holds true. We find that</p> <p>\begin{equation} f(n) = 2\left(\left(\frac{en}{3}\right)^{\frac{\log{n}}{Cn}} - 1\right). \end{equation}</p> <p>Since</p> <p>\begin{equation} \lim_{n \to \infty} \left(\frac{en}{3}\right)^{\frac{\log{n}}{Cn}} = 1 \end{equation}</p> <p>we establish the existance of some $n_0$ s.t. $\tilde{r}(d) \geq 1$ for $n \geq n_0$ and hence we indeed find that $r(d) \geq 1$ for large $n$. The final claim ($\lim_{n \to \infty} S_n = 0$) can then be recovered as outlined in the question.</p>
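The two facts being used, that $S(n)\to 0$ and that the $k=2$ term dominates, can at least be checked numerically. The sketch below works with logarithms of the terms to avoid underflow and takes $C=10$ as a sample value for the unspecified constant:

```python
import math

C = 10.0   # sample value for the "reasonably large" constant

def log_term(n, k):
    """log of (en)^k * k^(-Cn/log n - k - 1/2)."""
    L = math.log(n)
    return k * (1 + L) - (C * n / L + k + 0.5) * math.log(k)

def S(n):
    return sum(math.exp(log_term(n, k)) for k in range(2, n + 1))

vals = [S(n) for n in (20, 40, 80, 160)]
assert all(v2 < v1 for v1, v2 in zip(vals, vals[1:]))   # shrinking fast
assert vals[-1] < 1e-85                                  # S(n) -> 0

# The k = 2 term is the largest log-term for each of these n,
# supporting the "term at k = 2 is largest" conjecture
for n in (20, 40, 80, 160):
    assert max(range(2, n + 1), key=lambda k: log_term(n, k)) == 2
```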
1,955,591
<p>I have to prove that '(p ⊃ q) ∨ (q ⊃ p)' is a tautology. I have to start by giving assumptions like a1 ⇒ p ⊃ q and then proceed by eliminating my assumptions, and at the end I should have something like ⇒ (p ⊃ q) ∨ (q ⊃ p), but I could not figure out how to start.</p>
DanielV
97,045
<p>If you are allowed to use the law of the excluded middle, then propositional logic proofs are straightforward. You have 2 variables, that means you have 4 cases. Then you combine the cases using LEM and Or-Elimination. It looks like:</p> <p>Deduction 1: $$\begin{array} {r|l} (11) &amp; p \land q \\ (12) &amp; p \\ (13) &amp; q \\ &amp; \vdots \\ (1.) &amp; p \implies q \\ (1.) &amp; (p \implies q) \lor (q \implies p) \end{array}$$</p> <p>Deduction 2: $$\begin{array} {r|l} (21) &amp; p \land \lnot q \\ (22) &amp; p \\ (23) &amp; \lnot q \\ &amp; \vdots \\ (2.) &amp; q \implies p \\ (2.) &amp; (p \implies q) \lor (q \implies p) \end{array}$$</p> <p>Deduction 3: $$\begin{array} {r|l} (31) &amp; \lnot p \land q \\ &amp; \vdots \\ (3.) &amp; p \implies q \\ (3.) &amp; (p \implies q) \lor (q \implies p) \end{array}$$</p> <p>Deduction 4: $$\begin{array} {r|l} (41) &amp; \lnot p \land \lnot q \\ &amp; \vdots \\ (4.) &amp; p \implies q \quad \text{(You could also establish }q \implies p\text{ here.)} \\ (4.) &amp; (p \implies q) \lor (q \implies p) \end{array}$$</p> <p>Also, you can assume $p \lor \lnot p$ and $q \lor \lnot q$.</p> <p>What's left is to fill in the $\vdots$, organize the deductions, and use Or-Eliminations to suck the $(p \implies q) \lor (q \implies p)$ out of all the deductions.</p>
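Since this is a propositional tautology, a truth-table check over the four cases (exactly the four deductions above) is a few lines of Python:

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# (p ⊃ q) ∨ (q ⊃ p) is true in all four rows of the truth table
rows = list(product((False, True), repeat=2))
assert all(implies(p, q) or implies(q, p) for p, q in rows)
# sanity check: p ⊃ q alone is NOT a tautology
assert not all(implies(p, q) for p, q in rows)
```

This only verifies semantic validity, of course; the deductions above are what turn it into a formal proof.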
140,500
<p>The diagonals of a rectangle are both 10 and intersect at (0,0). Calculate the area of this rectangle, knowing that all of its vertices belong to the curve $y=\frac{12}{x}$.</p> <p>At first I thought it would be easy - a rectangle with vertices of (-a, b), (a, b), (-a, -b) and (a, -b). However, as I spotted no mention of the rectangle's sides being perpendicular to the axes, that is obviously wrong, which caused me to get stuck. I thought that maybe we could move in a similar way - we know that if the rectangle is somehow rotated (and we need to take that into account), the distances from the Y axis of the points symmetric about (0,0) are still just two variables. So we would have: (-b, -12/b), (a, 12/a), (-a, -12/a), (b, 12/b). I then tried to calculate the distance between the first two and between the second and the third, which I could then use along with the Pythagorean theorem and a diagonal. However, the distance between the first two is $\sqrt{(a+b)^2+(\frac{12}{a}+\frac{12}{b})^2}$, which is unfriendly enough to make me think it's the wrong way. Could you please help me?</p>
Isaac
72
<p>As in J.M.'s comment, the diagonals of a rectangle (any parallelogram, in fact) bisect each other, so we're looking for points on $y=\frac{12}{x}$ that are a distance of $5$ from the origin. That is, we want solutions to the system $$\left\{\begin{matrix} y=\frac{12}{x}\\ x^2+y^2=5^2 \end{matrix}\right..$$ By substituting for $y$ in the second equation, $$\begin{align} &amp;&amp;x^2+\left(\frac{12}{x}\right)^2&amp;=25 \\ &amp;\implies&amp;x^2+\frac{144}{x^2}&amp;=25 \\ &amp;\implies&amp;x^4+144&amp;=25x^2 \\ &amp;\implies&amp;x^4-25x^2+144&amp;=0 \\ &amp;\implies&amp;(x^2-9)(x^2-16)&amp;=0 \\ &amp;\implies&amp;(x-3)(x+3)(x-4)(x+4)&amp;=0 \\ &amp;\implies&amp;x=\pm3\text{ or }\pm4 \end{align}$$ Using $y=\frac{12}{x}$ gives the corresponding $y$-coordinates for each of the 4 points, from which the side lengths and area can be computed.</p>
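The computation is easy to finish and verify in Python: the roots give vertices $(\pm3,\pm4)$ and $(\pm4,\pm3)$ on the curve, the diagonal is $10$ as required, and the area comes out to $\sqrt2\cdot 7\sqrt2=14$:

```python
import math

# The quartic x^4 - 25x^2 + 144 = (x-3)(x+3)(x-4)(x+4) gives the x-coords
for x in (-4, -3, 3, 4):
    assert x ** 4 - 25 * x ** 2 + 144 == 0

pts = [(x, 12 / x) for x in (-4, -3, 3, 4)]
for x, y in pts:
    assert math.isclose(x * x + y * y, 25)     # distance 5 from the origin

# Rectangle with consecutive vertices (3,4), (4,3), (-3,-4), (-4,-3)
A, B, C = (3, 4), (4, 3), (-3, -4)
side1, side2 = math.dist(A, B), math.dist(B, C)   # sqrt(2) and 7*sqrt(2)
area = side1 * side2
assert math.isclose(math.dist(A, C), 10)          # diagonal as required
assert math.isclose(area, 14)
```

(`math.dist` needs Python 3.8+.)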
1,297,863
<p>Is it possible to write the following function: $$ f(x) = \begin{cases} \frac{x-\sin x}{1- \cos x}&amp; x\neq 0\\ 0 &amp; x=0 \end{cases} $$ as a composition of elementary functions (including $\mathrm{sinc} (x) = (\sin x) / x)$ so that I do not get large numerical errors for $x$ close to zero?</p> <p>This is the complete list of functions I can use: <a href="http://docs.scipy.org/doc/numpy/reference/routines.math.html" rel="nofollow">http://docs.scipy.org/doc/numpy/reference/routines.math.html</a></p> <p>This formula is used to compute the area of a circular segment with fixed chord length and given angle.</p> <p><strong>addendum</strong></p> <p>I found I can write: $$ f(x) = \frac{\frac{x}{\sin x} - 1}{x} \frac{x}{\sin x}. $$ But this does not settle the issue. Seems to me that the derivative of the $\mathrm{sinc}$ function cannot be explicitly written in terms of the extended elementary functions listed in the link above.</p>
zoli
203,663
<p>If you use the first two members of the Taylor series of the numerator and the denominator then you get</p> <p>$$\frac{x-\sin x}{1- \cos x}\approx \frac{x}{3}.$$</p> <p>The error of this approximation is about $x^3/90$, hence below $1.2\cdot 10^{-8}$ over the interval $(-0.01,0.01).$</p>
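A quick Python check of the approximation and of the leading error term, which a short series expansion puts at $x^3/90$ (sampling only the positive half of the interval; by symmetry the negative half behaves the same):

```python
import math

def f(x):
    return (x - math.sin(x)) / (1 - math.cos(x))

xs = [i * 1e-4 for i in range(1, 101)]            # sample (0, 0.01]
errs = [abs(f(x) - x / 3) for x in xs]
assert max(errs) < 1.2e-8
# the leading error term is x^3/90
assert abs((f(0.01) - 0.01 / 3) / (0.01 ** 3 / 90) - 1) < 1e-2
```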
606,431
<p>Can someone explain to me how to solve this using inverse trig and trig sub?</p> <p>$$\int\frac{x^3}{\sqrt{1+x^2}}\, dx$$</p> <p>Thank you. </p>
Farshad Nahangi
50,728
<p>You can also use integration by parts: let $u=x^2$ and $dv=\frac{x}{\sqrt{1+x^2}}\,dx$; then you will have: \begin{align*} \int udv&amp;=uv-\int vdu\\ &amp;=x^2\sqrt{1+x^2}-\int2x\sqrt{1+x^2}\,dx\\ &amp;=x^2\sqrt{1+x^2}-\frac{2}{3}(1+x^2)^{\frac{3}{2}}+C \end{align*} where the last integral was solved by the substitution $t=1+x^2$</p>
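The antiderivative is easy to validate numerically: compare $F(b)-F(a)$ against Simpson's rule on several intervals (pure Python):

```python
import math

def F(x):
    """Antiderivative found above: x^2*sqrt(1+x^2) - (2/3)(1+x^2)^(3/2)."""
    return x * x * math.sqrt(1 + x * x) - (2 / 3) * (1 + x * x) ** 1.5

def simpson(g, a, b, n=10_000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = g(a) + g(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * g(a + i * h)
    return s * h / 3

integrand = lambda x: x ** 3 / math.sqrt(1 + x * x)
for a, b in [(0, 1), (-2, 3), (1, 5)]:
    assert math.isclose(F(b) - F(a), simpson(integrand, a, b),
                        rel_tol=1e-8, abs_tol=1e-8)
```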
2,839,945
<blockquote> <p>Let $p$ be a prime in $\mathbb{Z}$ of the form $4n + 1, n \in \mathbb{N}$. Show that $\left(\frac{-1}{p}\right) = 1$ (here $\left(\frac{\#}{p}\right)$ is the Legendre symbol). Hence prove that $p$ is not a prime in the ring $\mathbb{Z}[i]$.</p> </blockquote> <p>Here is my solution:</p> <p>Since $p &gt; 2$, we have $\left(\frac{-1}{p}\right) = 1$ if and only if $(-1)^{\frac{p - 1}{2}} \equiv_p 1$ if and only $(-1)^{2n} \equiv_p 1$ which is true.</p> <p>Now suppose $p$ is prime in $\mathbb{Z}[i]$, which means that there exists $x \in \mathbb{Z}$ such that $-1 \equiv_p x^2$, from which $p \mid (x^2 + 1) = (x - i)(x + i)$ and, since $p$ is prime, $p \mid (x - i)$ or $p \mid (x + i)$. In either case we have $m + ni \in \mathbb{Z}[i]$ such that $p(m + ni) = x \pm i$, which implies $pn = x$, that is $p \mid x$, and $x^2 + 1 \equiv_p 1$, which is not congruent to $0$, contradiction.</p> <p>Is it correct? thanks in advance</p> <p>Edit: (I've tried to write it better using Robert Soupe advice)</p> <p>Since $p&gt;2$ we have $(-1)^{(p-1)/2}\equiv_p (-1)^{2n} \equiv_p 1$, that is $\left(\frac{-1}{p} \right)= 1$.</p> <p>Now suppose $p$ is prime in $\mathbb{Z}[i]$, this means that there exists $x \in \mathbb{Z}$ such that $x^2 \equiv_p -1$, hence $p \mid (x^2 + 1) = (x - i)(x + i)$ and, since $p$ is prime, $p \mid (x + i)$. Therefore there exists $m + ni \in \mathbb{Z}[i]$ such that $p(m + ni) = x + i$, but this is absurd because $p$ does not divide $1$. We can conclude that $p$ is not prime in $\mathbb{Z}[i]$.</p>
Don
571,059
<p>For the first part, I understand that you are using the supplementary laws of quadratic reciprocity, and of course, the result is immediate. However, you can also solve the problem without that theorem. With the same notation as in your statement:</p> <p>$\mathbb{F}_p^*$ is cyclic, so there exists $x \in \mathbb{Z}$ such that $\bar{x}$ has order $p-1=4n$ in $\mathbb{F}_p^*$. Hence, $$(\bar{x}^{2n})^2=\bar{x}^{4n}=\bar{1}.$$ Due to the order of $\bar{x}$, it follows that $\bar{x}^{2n}=-\bar{1}$. Thus, if we choose $y:=x^n$, we get that $$y^2 \equiv -1 \mod p;$$ i.e., $$\left(\frac{-1}{p}\right)=1.$$</p> <p>The second part is also correct, but when you reach $p(m+ni)=x\pm i$, I would just say that $pn=\pm 1$, so $p \mid 1$; contradiction.</p>
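A small computational illustration of both facts (this is my own verification code, not from the answer): for primes $p = 4n+1$, Euler's criterion gives $(-1)^{(p-1)/2} \equiv 1 \pmod p$, and a square root of $-1$ modulo $p$ can be exhibited directly.

```python
# Verify Euler's criterion and exhibit x with x^2 = -1 (mod p) for p = 4n+1.
def is_prime(m):
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m**0.5) + 1))

for p in [5, 13, 17, 29, 37, 41, 101]:
    assert is_prime(p) and p % 4 == 1
    # Euler's criterion: (-1)^((p-1)/2) mod p should be 1 (note p-1 = -1 mod p).
    assert pow(p - 1, (p - 1) // 2, p) == 1
    # Find a square root of -1 mod p (it exists precisely because (-1/p) = 1).
    x = next(x for x in range(1, p) if x * x % p == p - 1)
    print(p, x)
```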
96,211
<p>A modulus of continuity for a function $f$ is a continuous increasing function $\alpha$ such that $\alpha(0) = 0$ and $|f(x) - f(y)| &lt; \alpha(|x-y|)$ for all $x$ and $y$. I am trying to prove that an equicontinuous family $\mathcal F$ of functions has a common modulus of continuity. This seems intuitively obvious, but I am having difficulty proving continuity. So far, I have defined </p> <p>$\alpha(\delta) = \sup\{|f(x) - f(y)| : d(x,y) \leq \delta, f\in \mathcal F\} $. </p> <p>I want to show that this function is continuous in $\delta$. Any suggestions?</p>
Alex Becker
8,173
<p>Building on Beni's answer, suppose that this is not right-continuous, i.e. we have some $\delta$ such that $$\sup\{|f(x)-f(y)| : d(x,y)\leq \delta+\epsilon, f\in \mathcal{F}\}-\sup\{|f(x)-f(y)| : d(x,y)\leq \delta, f\in \mathcal{F}\}&gt;z$$ for some fixed $z&gt;0$ and for arbitrarily small $\epsilon&gt;0$. Then for any fixed $\epsilon$ we have some $f_\epsilon\in \mathcal{F}$ and $x',y'\in \mathbb{R}$ such that $d(x',y')\leq \delta+\epsilon$ and $$|f_\epsilon(x')-f_\epsilon(y')|-\sup\{|f_\epsilon(x)-f_\epsilon(y)| : d(x,y)\leq \delta\}&gt;z,$$ meaning that if we let $x'',y''\in\mathbb{R}$ be such that $d(x'',y'')\leq \delta$, $d(x',x'')\leq \epsilon$, $d(y',y'')\leq \epsilon$ (which can always be done) we get $$|f_\epsilon(x'')-f_\epsilon(x')| + |f_\epsilon(y'')-f_\epsilon(y')| \geq |f_\epsilon(x')-f_\epsilon(y')| - |f_\epsilon(x'')-f_\epsilon(y'')|&gt;z,$$ so one of the summands on the left must be at least $z/2$. Hence for sufficiently small $\epsilon&gt;0$ (namely, smaller than $z/2$) and any $\delta&gt;0$ we have some function $f_\delta\in\mathcal{F}$ for which $$d(x,y)&lt;\delta\not\Rightarrow d(f_\delta(x),f_\delta(y))&lt;\epsilon,$$ and so $\mathcal{F}$ is not equicontinuous.</p>
3,075,979
<p>Prove that <span class="math-container">$$\frac{k^7}{7}+\frac{k^5}{5}+\frac{2k^3}{3}-\frac{k}{105}$$</span> is an integer using mathematical induction.</p> <p>I tried using mathematical induction but using binomial formula also it becomes little bit complicated.</p> <p>Please show me your proof.</p> <p>Sorry if this question was already asked. Actually i did not found it. In that case only sharing the link will be enough.</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$ $</span> Note that <span class="math-container">$\ 3\!\cdot\!5\!\cdot\!7\mid \overbrace{3\!\cdot\! 5\, (\color{#c00}{k^7\!-\!k})+ 3\!\cdot\! 7\, (\color{#c00}{k^5\!-\!k})- 5\!\cdot\! 7 (\color{#c00}{k^3\!-\!k})+ 3\!\cdot\! 5\cdot\! 7\, k^3}^{\Large{\rm sum\ = \ this/(3\cdot 5\cdot 7)}}\, $</span> by <span class="math-container">$\,\rm\overbrace{little\ \color{#c00}{Fermat}}^{\Large p\ \mid\ \color{#c00}{k^p-k}}$</span></p> <p><strong>Remark</strong> <span class="math-container">$ $</span> More generally this shows that if <span class="math-container">$\,p,q,r\,$</span> are primes and <span class="math-container">$\,a,b,c,k\,$</span> are integers</p> <p><span class="math-container">$$\quad\ pqr\,\mid\, aqr\,(k^p\!-\!k)+bpr\,(k^q\!-\!k)+cpq\,(k^r\!-\!k)$$</span></p>
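The integrality claim itself is easy to machine-check before proving it (this snippet is mine; the function name `expr` is my own). Exact rational arithmetic via `fractions.Fraction` avoids any floating-point doubt:

```python
# Check that k^7/7 + k^5/5 + 2k^3/3 - k/105 is an integer for many k,
# using exact rational arithmetic (no rounding error possible).
from fractions import Fraction

def expr(k):
    k = Fraction(k)
    return k**7 / 7 + k**5 / 5 + 2 * k**3 / 3 - k / 105

for k in range(-50, 51):
    value = expr(k)
    assert value.denominator == 1, (k, value)
print("integer for all k in [-50, 50]")
```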
660,461
<p>$A = \{a,b,c,d,e\}$</p> <p>$B = \{a,b,c\}$</p> <p>$C = \{0,1,2,3,4,5,6\}$</p> <p>The first few iterations are as follows:</p> <p>$1.$ $a,a,0$</p> <p>$2.$ $b,b,1$</p> <p>$3.$ $c,c,2$</p> <p>$4.$ $d,a,4$</p> <p>$5.$ $e,b,5$</p> <p>$...$</p> <p>I'm trying to figure out at which iterations we will have $x,y,z$ such that $x=y$ and $z=5$. I wrote a program to "solve" this problem for me, and I discovered that this happens at iterations $32,46,60,137,151,165...$. The problem for me is that I don't see how to derive this pattern. In particular, the following are true:</p> <p>$32.$ $c,c,5$</p> <p>$...$</p> <p>$46.$ $b,b,5$</p> <p>$...$</p> <p>$60.$ $a,a,5$</p> <p>$...$</p>
coffeemath
30,316
<p>The least common multiple of the sizes $3$ and $5$ of the sets $B$ and $A$ is $15$, so $x=y$ occurs exactly at the positions $15k+1,\,15k+2,\,15k+3.$ For each of these, set it congruent to $5$ mod $7$ and solve. </p> <p>EDIT: They should be set to $6$ mod $7$ since you start at step 1 with a $0$ in column 3. Thanks to @Casteels for pointing out this offset in a comment.</p> <p>$15k+1\equiv 6 \pmod 7$ has solution $k=7r+5$, and $15k+2\equiv 6 \pmod 7$ has solution $k=7r+4$; finally $15k+3\equiv 6 \pmod 7$ has solution $k=7r+3.$ So the three solution families are $$15(7r+5)+1=105r+76,\\ 15(7r+4)+2=105r+62, \\ 15(7r+3)+3=105r+48.$$</p>
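A brute-force check of this derivation (my own code). I assume each of the three columns simply cycles through its set in order, so step $n$ yields $(A[(n-1)\bmod 5],\, B[(n-1)\bmod 3],\, C[(n-1)\bmod 7])$. Note that the question's reported hits $(32, 46, 60, 137, \dots)$ are each a constant $16$ below these families, which suggests a different indexing convention in the asker's program; the congruence method itself is unaffected.

```python
# Enumerate the steps where x = y and z = 5, under straightforward cycling.
A = ['a', 'b', 'c', 'd', 'e']
B = ['a', 'b', 'c']
C = [0, 1, 2, 3, 4, 5, 6]

hits = [n for n in range(1, 400)
        if A[(n - 1) % 5] == B[(n - 1) % 3] and C[(n - 1) % 7] == 5]

# The three solution families 105r+48, 105r+62, 105r+76, interleaved.
expected = sorted(105 * r + c for r in range(4) for c in (48, 62, 76))
print(hits)
assert hits == [n for n in expected if n < 400]
```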
1,483,802
<p>Take $B(0,1)$ the ball in $\mathbb{R}^2$ with the normalized Lebesgue measure $\lambda$ such that $\int_{B(0,1)} d \lambda=1.$</p> <p>Now, I want to show, or give a counterexample that this is false, that for all $f \in H^1_0(B(0,1))$ we have for fixed constants $a,b&gt;0$ and any(!) $p \in (2,\infty)$ \begin{equation} ||f||_p^2 \le a \left(\int_{B(0,1)}| \nabla f|^2 d\lambda \right) + b ||f||_2^2. \end{equation}</p> <p>Does anybody know how to do this? The normal Sobolev inequality is apparently too weak to show this, as this holds for any $p$ and fixed $a,b$.</p>
Xiao
131,137
<p>Can you use the Sobolev embedding theorem?</p> <p>Since your domain is a ball, $W_0^{1,2}(B) = H_0^{1}(B)$. In the borderline case $p=n=2$ for $W^{1,p}$ on a domain in $\mathbb{R}^n$, we have the continuous embedding $$W^{1,2}_0(B) \hookrightarrow L^q(B) \text{ for all } q\in [1,+\infty),$$ that is, $$\|f\|_q \leq C_q \bigg(\int_B |f|^2 dx +\int_B|\nabla f|^2 dx \bigg)^{1/2},$$ where the constant $C_q$ depends on $q$. (This is w.r.t. the standard Lebesgue measure.)</p>
2,554,448
<p>Beside using l'Hospital 10 times to get $$\lim_{x\to 0} \frac{x(\cosh x - \cos x)}{\sinh x - \sin x} = 3$$ and lots of headaches, what are some elegant ways to calculate the limit?</p> <p>I've tried to write the functions as powers of $e$ or as power series, but I don't see anything which could lead me to the right result.</p>
celtschk
34,930
<p>Using power series: $$\begin{aligned} \frac{x(\cosh x-\cos x)}{\sinh x-\sin x} &amp;= \frac{x\left((1+\tfrac12 x^2 + O(x^4)) - (1-\tfrac12 x^2 + O(x^4)\right)} {(x+\frac16 x^3 + O(x^5)) - (x - \frac16 x^3 + O(x^5))}\\ &amp;= \frac{x\left(x^2 + O(x^4)\right)} {\frac13 x^3 + O(x^5)}\\ &amp;= \frac{1 + O(x^2)}{\tfrac13 + O(x^2)} = 3 + O(x^2) \end{aligned}$$</p>
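The power-series computation is easy to corroborate numerically (this check is mine; the name `ratio` is my own). Evaluating at shrinking $x$, the ratio approaches $3$; I avoid extremely small $x$, where the cancellation in $\sinh x - \sin x$ would degrade floating-point accuracy:

```python
# Numerically confirm that x*(cosh x - cos x)/(sinh x - sin x) -> 3 as x -> 0.
import math

def ratio(x):
    return x * (math.cosh(x) - math.cos(x)) / (math.sinh(x) - math.sin(x))

for x in [0.5, 0.1, 0.01]:
    print(x, ratio(x))

assert abs(ratio(0.01) - 3) < 1e-3
```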
525,326
<p>If all elements of $S$ are irrational and bounded from below by $\sqrt 2$ then $\inf S$ must be irrational .</p> <p>I would say this statement is true since $S=\{ \sqrt 2, \sqrt 3, \sqrt 5,\ldots\}$ the greatest lower bound is $\sqrt 2$ which is irrational and bounded from below the sequence. </p> <p>Is this correct?</p>
Brian M. Scott
12,042
<p>HINT: All elements of $S=[2,3]\setminus\Bbb Q$ are irrational and bounded below by $\sqrt2$, but $\inf S&gt;\sqrt2$; what is $\inf S$?</p>
2,184,776
<p>So there's an almost exact question like this here: </p> <p><a href="https://math.stackexchange.com/questions/576268/use-a-factorial-argument-to-show-that-c2n-n1c2n-n-frac12c2n2-n1#576280">Use a factorial argument to show that $C(2n,n+1)+C(2n,n)=\frac{1}{2}C(2n+2,n+1)$</a></p> <p>However, I'm getting stuck in just figuring out the lcds for the factorials.</p> <p>I end up with this after the <strong>CNR</strong>:</p> <p>$$\frac{(2n)!}{(n-1)!(n+1)!} + \frac{(2n)!}{n!n!}$$</p> <p>When I try to find the common denominator, I do:</p> <p>$$\frac{(2n)!n}{(n-1)!(n+1)n!n} + \frac{(2n)!(n+1)}{n(n-1)!n!(n+1)}$$</p> <p>Putting it together I get:</p> <p>$$\frac{(2n)!(n) + (2n)!(n+1)}{ (n)(n+1)(n-1)!n!}$$</p> <p>Which is wrong because according to the other answer, it should be:</p> <p>$$\frac{(2n+1)!}{n!(n+1)!}$$</p> <p>Not sure how they got there. I guess that's my question, how did they get that?</p> <p>I've been googling for hours on how to find common denominators of factorials but can't seem to find anything. I mean, what happened to the $(n-1)!$ ?</p> <p>Thanks.</p>
Icycarus
409,911
<p>Prove it the combinatoric way:</p> <p>You can rewrite the identity as $C(2n+2,n+1)=2C(2n,n+1)+2C(2n,n)$.</p> <p>For the left-hand side, suppose we have $2n+2$ elements and we want to pick $n+1$ of them. That can be done in $C(2n+2,n+1)$ ways.</p> <p>For the right-hand side, here is another way to count the same selections: separate the set of $2n+2$ elements (assuming they are arranged from smallest to biggest) into two subsets, the first containing the first $2n$ elements and the second containing the last $2$ elements.</p> <p>There are 3 cases for how the $n+1$ chosen elements can be split:</p> <p>Case 1: We pick $n$ elements from the set of $2n$ elements, and $1$ element from the set of $2$ elements. This contributes $C(2n,n)\cdot C(2,1)=2C(2n,n)$.</p> <p>Case 2: We pick $n+1$ elements from the set of $2n$ elements, and $0$ elements from the set of $2$ elements. This contributes $C(2n,n+1)$.</p> <p>Case 3: We pick $n-1$ elements from the set of $2n$ elements, and $2$ elements from the set of $2$ elements. This contributes $C(2n,n-1)\cdot C(2,2)=C(2n,n-1)$. Note that $C(2n,n-1)=C(2n,n+1)$ because $2n-(n+1)=n-1$.</p> <p>Therefore, on the right-hand side, the number of ways to choose $n+1$ elements is $$2C(2n,n)+C(2n,n+1)+C(2n,n-1)=2C(2n,n)+2C(2n,n+1),$$ which is exactly the left-hand side, as was to be shown.</p> <p>I hope this combinatoric argument helps, since questions of this kind can be very difficult to manipulate algebraically!</p>
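The identity, and the three-case split used above, can also be machine-checked for small $n$ (my own snippet; `math.comb` requires Python 3.8+):

```python
# Check C(2n,n+1) + C(2n,n) = C(2n+2,n+1)/2 and the Vandermonde-style split.
from math import comb

for n in range(1, 30):
    lhs = comb(2 * n, n + 1) + comb(2 * n, n)
    # Multiply through by 2 to keep everything in exact integers.
    assert 2 * lhs == comb(2 * n + 2, n + 1), n

    # The case analysis: C(2n,n+1) + 2*C(2n,n) + C(2n,n-1) = C(2n+2,n+1).
    split = comb(2 * n, n + 1) + 2 * comb(2 * n, n) + comb(2 * n, n - 1)
    assert split == comb(2 * n + 2, n + 1), n
print("identity holds for n = 1..29")
```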
32,021
<p>On more than one occasion, always with an explicit disclaimer, I have posted a comment of more than 600 characters as an &quot;answer&quot;. I have done this because I have quite often seen other people do it, and I have never once, in 5 years in Maths.SE, seen anyone object to the practice. But a comment I posted in this way last night has been deleted. The reason given was &quot;low quality&quot;. (I have undeleted it, but I have no idea if that action will remain in effect for long.)</p> <p>Has there been a change in policy?</p> <p>Or is there some other reason why this comment in particular was singled out for deletion? Is it perhaps connected with there having been a truly extraordinary comment thread on the question? The thread - including the first comment, which had nothing to do with the very strange dispute that suddenly erupted - was deleted in its entirety, not even moved to chat (as I had requested, in order to mitigate the extreme distraction from the question that had been asked). That is something else that I have never seen happen in my 5 years in Maths.SE, and this coincidence seems highly unlikely to be accidental.</p> <p><a href="https://math.stackexchange.com/posts/3739469/timeline">Timeline for answer to Is the sequence <span class="math-container">$(B_n)_{n \in \Bbb{N}}$</span> unbounded, where <span class="math-container">$B_n := \sum_{k=1}^n\mathrm{sgn}(\sin(k))$</span>? by Calum Gilhooley - Mathematics Stack Exchange</a>.</p>
Xander Henderson
468,350
<p><strong>The answer box is meant for answers, not comments.</strong></p> <p>The Stack Exchange model is meant to facilitate the construction of a high-quality database of questions and authoritative answers. Stack Exchange is not a <a href="https://meta.stackexchange.com/questions/65261/is-stack-overflow-a-social-networking-site">social networking site</a>, nor is it a site for <a href="https://meta.stackexchange.com/questions/141508/how-can-stack-overflow-be-used-as-a-collaborative-tool">collaboration</a> or <a href="https://meta.stackexchange.com/questions/195954/where-can-i-ask-an-open-discussion-question">discussion</a>. On the Stack Exchange network, the &quot;Answer&quot; box is for answers. It is not meant for long comments.</p> <p>It is suggested elsewhere that if an answer is really just a long comment, then it ought to be <a href="https://math.meta.stackexchange.com/q/4353">marked &quot;Community Wiki&quot;</a>, though it is worth noting that there is an implicit assumption here that &quot;not an answers&quot; are a problem, i.e. they should not be treated as normal answers (they should either be converted to comments, or marked CW). There was also an attempt to <a href="https://math.meta.stackexchange.com/questions/2991/rfc-social-norm-about-not-an-answer-just-too-long-for-a-comment-and-communit">discuss this issue</a>, but little discussion actually followed.</p> <p>It may also be worthwhile to read through my question about whether or not it is acceptable to <a href="https://math.meta.stackexchange.com/questions/28969/is-it-acceptable-to-leave-hints-as-answers">leave hints as answers</a>. The consensus here appears to be that hints are fine, <em>as long as they lead to an answer</em>. 
The extended comments-as-answers under discussion here don't even rise to that level—the answerers themselves don't even know if their comments will lead to an answer.</p> <p>My conclusion is that the consensus leans towards the following statement:</p> <blockquote> <p>If your answer is really a long comment, then it shouldn't be posted as an answer. If you <em>must</em> post your long comment in an answer box, then you should, at the very least, (a) clearly note that the answer is not an answer, and (b) mark your &quot;answer&quot; Community Wiki.</p> </blockquote> <p>I originally said that, in light of the above, it was appropriate to delete <a href="https://math.stackexchange.com/revisions/3739469/1">this answer</a> (note that I am linking to a particular version of that answer). It was remarked by <a href="https://chat.stackexchange.com/transcript/message/54829596#54829596">KReiser</a> that this might have sounded harsh, which was not my intention, so let me try to rephrase: the original version of the answer consisted of a table of data, which was <em>interesting</em>, but didn't really answer the question. It very much <em>was</em> an extended comment. As such, I am sympathetic to the <a href="https://math.stackexchange.com/review/low-quality-posts/1413331">users who voted to delete it</a>. An equally reasonable outcome (in my mind) would have been for the answerer to mark the answer as &quot;Community Wiki&quot;. The best possible outcome would be for the answerer to expand their remarks, as they have done.</p>
32,021
<p>On more than one occasion, always with an explicit disclaimer, I have posted a comment of more than 600 characters as an &quot;answer&quot;. I have done this because I have quite often seen other people do it, and I have never once, in 5 years in Maths.SE, seen anyone object to the practice. But a comment I posted in this way last night has been deleted. The reason given was &quot;low quality&quot;. (I have undeleted it, but I have no idea if that action will remain in effect for long.)</p> <p>Has there been a change in policy?</p> <p>Or is there some other reason why this comment in particular was singled out for deletion? Is it perhaps connected with there having been a truly extraordinary comment thread on the question? The thread - including the first comment, which had nothing to do with the very strange dispute that suddenly erupted - was deleted in its entirety, not even moved to chat (as I had requested, in order to mitigate the extreme distraction from the question that had been asked). That is something else that I have never seen happen in my 5 years in Maths.SE, and this coincidence seems highly unlikely to be accidental.</p> <p><a href="https://math.stackexchange.com/posts/3739469/timeline">Timeline for answer to Is the sequence <span class="math-container">$(B_n)_{n \in \Bbb{N}}$</span> unbounded, where <span class="math-container">$B_n := \sum_{k=1}^n\mathrm{sgn}(\sin(k))$</span>? by Calum Gilhooley - Mathematics Stack Exchange</a>.</p>
user1729
10,513
<p>In the spirit of the question: This started life as more of an extended comment than an answer. I think it has morphed into an answer now though.</p> <p>Math.SE seems to be based around the idea that every problem is soluble by a single person. This is clearly not the case, as, for example, in research mathematics single-authored papers are the exception rather than the norm. Providing an incomplete answer, for example by proving that the result does not hold for a large number of cases, or verifying that it does hold in many cases, should therefore not be discouraged (especially for hard questions); it may lead someone else to solving the problem, which is how collaboration works. This all requires effort and thought, and readers should be allowed to decide, in the usual way, whether or not the author should receive reputation for this partial answer; therefore, such answers should not be automatically made community wikis.</p> <p>I see no reason why providing data, found via computation and confirming the result for a sufficiently large number of cases, does not fall under the above paragraph, provided it comes with supporting explanations. Computational results of this nature can be useful. Indeed, the journal <a href="https://www.tandfonline.com/action/journalInformation?show=aimsScope&amp;journalCode=uexm20" rel="nofollow noreferrer">Experimental Mathematics</a> is essentially devoted to computation-led research. The <a href="https://en.wikipedia.org/wiki/Goldbach%27s_weak_conjecture" rel="nofollow noreferrer">Ternary Goldbach Conjecture</a> is a concrete result which relied on this sort of explicit computation. Its proof proceeded in two steps: 1) prove the result for all numbers bigger than a certain, known number <span class="math-container">$n$</span>, and 2) use a computer to verify the result for all numbers less than <span class="math-container">$n$</span>.</p>
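As a toy illustration of step 2) of that strategy (this sketch is entirely my own, and the bound here is microscopic compared with the actual computation used in the real proof): verify by brute force that every odd number from $7$ up to a modest limit is a sum of three primes.

```python
# Small-scale brute-force verification of the ternary Goldbach statement.
def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(limit + 1) if sieve[p]]

LIMIT = 2000
primes = primes_up_to(LIMIT)
prime_set = set(primes)

for n in range(7, LIMIT + 1, 2):
    # Look for primes p <= q <= r with p + q + r = n (r = n - p - q).
    assert any((n - p - q) in prime_set
               for p in primes
               for q in primes
               if p <= q <= n - p - q), n
print("every odd n with 7 <= n <=", LIMIT, "is a sum of three primes")
```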
2,792,770
<p>I found the following question in a test paper:</p> <blockquote> <p>Suppose $G$ is a monoid or a semigroup. $a\in G$ and $a^2=a$. What can we say about $a$?</p> </blockquote> <p>Monoids are associative and have an identity element. Semigroups are just associative. </p> <p>I'm not sure what we can say about $a$ in this case other than that $a$ could be other things apart from the identity. Any idea if there's a definitive answer to this question?</p>
wayne
557,397
<p>Take $\Omega = \{0,1\}$, $\mathcal{F} = \{\emptyset,\Omega\}$, $\mathbb{P}(\emptyset) = 0$, $\mathbb{P}(\Omega) = 1$, $\Omega'=\Omega$, $\mathcal{F}' = 2^{\Omega}$, and $X(\omega) = \omega$ for every $\omega \in \Omega$. Since $\{1\} \in \mathcal{F}'$ and $X^{-1}(\{1\}) = \{1\} \notin \mathcal{F}$, $X$ is not $\mathcal F/\mathcal F'$-measurable.</p> <p>For another example, consider $\Omega = [0,1]$, $\mathcal{F} = \{\emptyset, \Omega, [0,\frac{1}{2}],(\frac{1}{2},1]\}$, $\mathbb{P}$ the Lebesgue measure on $[0,1]$, $\Omega'=\Omega$, $\mathcal{F}' = \mathcal{B}(\Omega)$, and $X(\omega) = \omega$ for every $\omega \in \Omega$. Since $[1/4,3/4] \in \mathcal{F}'$ and $X^{-1}([1/4,3/4]) = [1/4,3/4] \notin \mathcal{F}$, $X$ is not $\mathcal F/\mathcal F'$-measurable.</p>
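Because the first counterexample lives on a two-point space, it can be checked exhaustively. The following encoding (the names are my own) verifies that the preimages of $\{0\}$ and $\{1\}$ under the identity map fail to lie in the trivial $\sigma$-algebra:

```python
# Finite-set check of the first counterexample: X is not F/F'-measurable.
# Sets are frozensets so they can be members of a sigma-algebra.
Omega = frozenset({0, 1})
F = {frozenset(), Omega}                                     # trivial sigma-algebra
F_prime = {frozenset(s) for s in [(), (0,), (1,), (0, 1)]}   # power set of Omega

def preimage(X, B):
    return frozenset(w for w in Omega if X(w) in B)

X = lambda w: w  # the identity map

bad = [B for B in F_prime if preimage(X, B) not in F]
print(bad)  # the singletons {0} and {1} witness non-measurability
assert frozenset({0}) in bad and frozenset({1}) in bad
```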