66,068
<p>I have a list like this. </p> <pre><code>cdatalist = {{1., 0.898785, Failed, Failed, 50., 25., "serial"}, {1., 1.31175,1., Failed, 50., 25., "serial"}, {1., 18.8025, Failed, 0.490235, 50., 25., "serial"}, {1., 19.6628, 0.990079, Failed, 50., 25., "serial"}, {1., 39.547, Failed, Failed, 50., 25., "serial"}, {1., 39.7503, Failed, 0.482749, 50., 25., "serial"}, {1., 40.2078, Failed, Failed, 50., 25., "serial"}, {1., 40.6208, 0.980588, Failed, 50., 25., "serial"}, {1., 102.588, Failed, Failed, 50., 25., "serial"}, {1., 102.781, Failed, 0.466214, 50., 25., "serial"}, {1., 102.826, Failed, Failed, 50., 25., "serial"}, {1., 102.833, Failed, Failed, 50., 25., "serial"}, {15., 0.89985, Failed, Failed, 50., 25., "serial"}, {15., 1.31344, 1., $Failed, 50., 25., "serial"}} </code></pre> <p>At the end, I want to compile a new list, dropping any row that has "Failed" in its third column. </p> <pre><code>datalistfunc[input_] := Module[{cell, cell2, celltable, celllist}, i = 1; celllist = {}; While[i &lt; Length@cdatalist + 1, cell = Select[cdatalist[[i]][[1 ;; 3]], Head[cdatalist[[i]][[3]]] == Real &amp;]; i = If[i &lt; Length@cdatalist + 1, i + 1, Length@cdatalist + 1]; celllist = AppendTo[celllist, cell2]; Print[cell2] ] ] datalist = datalistfunc[cdata]; </code></pre> <p>My list looks like this after filtering. </p> <pre><code>{{},{}} {{1.,1.31175,1.},{}} {{},{}} {{1.,19.6628,0.990079},{}} {{},{}} {{},{}} {{},{}} {{1.,40.6208,0.980588},{}} {{},{}} {{},{}} {{},{}} {{},{}} {{},{}} {{15.,1.31344,1.},{}} </code></pre> <p>Instead, I want my list to look like this. </p> <pre><code>{{1.,1.31175,1.}, {1.,19.6628,0.990079}, {1.,40.6208,0.980588}, {15.,1.31344,1.}} </code></pre>
Gerli
8,389
<p>Try <a href="http://reference.wolfram.com/language/ref/Cases.html" rel="nofollow">Cases</a>.</p> <p>If you want the third column to be Real:</p> <pre><code>Cases[cdatalist, {_, _, _Real, __}][[All, 1 ;; 3]] </code></pre> <p>or</p> <pre><code>Cases[cdatalist[[All, 1 ;; 3]], {_, _, _Real}] </code></pre> <p>depending on whether you want to shorten the list before or after filtering.</p> <p>A more general approach is to drop every row whose third entry is Failed or $Failed, rather than requiring it to be Real:</p> <pre><code>Cases[cdatalist[[All, 1 ;; 3]], {_, _, Except[$Failed | Failed]}] </code></pre>
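For readers coming from other languages, the same filter can be sketched in plain Python (our own transliteration of a few sample rows, with a string standing in for Mathematica's Failed/$Failed symbols; a row is kept when its third entry is numeric):

```python
# Python analogue of Cases[cdatalist, {_, _, _Real, __}][[All, 1 ;; 3]]:
# keep the first three entries of every row whose third entry is a number.
FAILED = "Failed"  # stand-in for Mathematica's Failed / $Failed symbols

cdatalist = [
    [1.0, 0.898785, FAILED, FAILED, 50.0, 25.0, "serial"],
    [1.0, 1.31175, 1.0, FAILED, 50.0, 25.0, "serial"],
    [1.0, 19.6628, 0.990079, FAILED, 50.0, 25.0, "serial"],
    [15.0, 1.31344, 1.0, FAILED, 50.0, 25.0, "serial"],
]

filtered = [row[:3] for row in cdatalist if isinstance(row[2], float)]
print(filtered)
# [[1.0, 1.31175, 1.0], [1.0, 19.6628, 0.990079], [15.0, 1.31344, 1.0]]
```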
4,059,489
<blockquote> <p>Let <span class="math-container">$ A, B \in M_n (\mathbb{C})$</span> such that <span class="math-container">$(A-B)^2 = A -B$</span>. Then <span class="math-container">$\mathrm{rank}(A^2 - B^2) \geq \mathrm{rank}( AB -BA)$</span>.</p> </blockquote> <p>I tried to apply the basic rank inequalities without success. How should I start? Thank you.</p>
PTDS
277,299
<p><span class="math-container">$$\sqrt{\frac{x-8}{1388}}+\sqrt{\frac{x-7}{1389}}+\sqrt{\frac{x-6}{1390}}=\sqrt{\frac{x-1388}{8}}+\sqrt{\frac{x-1389}{7}}+\sqrt{\frac{x-1390}{6}} \tag {1}$$</span></p> <p>Since <span class="math-container">$x \geq 1390$</span>, let us put <span class="math-container">$x = 1390 + k$</span> where <span class="math-container">$k \geq 0$</span></p> <p><span class="math-container">$(1) \implies$</span></p> <p><span class="math-container">$$\sqrt{\frac{1382+k}{1388}}+\sqrt{\frac{1383+k}{1389}}+\sqrt{\frac{1384+k}{1390}}=\sqrt{\frac{k+2}{8}}+\sqrt{\frac{k+1}{7}}+\sqrt{\frac{k}{6}}$$</span></p> <p>Note that since <span class="math-container">$k \geq 0$</span>,</p> <p>we have <span class="math-container">$$\left(\sqrt{\frac{1382+k}{1388}} &lt; \sqrt{\frac{1383+k}{1389}} &lt; \sqrt{\frac{1384+k}{1390}} &lt; 1 \right) \land \left(\sqrt{\frac{k}{6}} &lt; \sqrt{\frac{k+1}{7}} &lt; \sqrt{\frac{k+2}{8}} &lt; 1 \right)$$</span> for <span class="math-container">$k &lt; 6$</span>,</p> <p><span class="math-container">$$\left(\sqrt{\frac{1382+k}{1388}} &gt; \sqrt{\frac{1383+k}{1389}} &gt; \sqrt{\frac{1384+k}{1390}} &gt; 1 \right) \land \left(\sqrt{\frac{k}{6}} &gt; \sqrt{\frac{k+1}{7}} &gt; \sqrt{\frac{k+2}{8}} &gt; 1 \right)$$</span> for <span class="math-container">$k &gt; 6$</span> and</p> <p><span class="math-container">$$\left(\sqrt{\frac{1382+k}{1388}} = \sqrt{\frac{1383+k}{1389}} = \sqrt{\frac{1384+k}{1390}} = 1 \right) \land \left(\sqrt{\frac{k}{6}} = \sqrt{\frac{k+1}{7}} = \sqrt{\frac{k+2}{8}} = 1 \right)$$</span> for <span class="math-container">$k = 6$</span></p> <p>Can you argue why the first two cases do not lead to any solution but the last case does?</p> <p>In fact, you can show that</p> <p><span class="math-container">$\left(\sqrt{\frac{k}{6}} &lt; \sqrt{\frac{k+1382}{1388}} \right) \land \left(\sqrt{\frac{k+1}{7}} &lt; \sqrt{\frac{k+1383}{1389}} \right) \land \left(\sqrt{\frac{k+2}{8}} &lt; \sqrt{\frac{k+1384}{1390}} \right)$</span> for 
<span class="math-container">$k &lt; 6$</span></p> <p>and</p> <p><span class="math-container">$\left(\sqrt{\frac{k}{6}} &gt; \sqrt{\frac{k+1382}{1388}} \right) \land \left(\sqrt{\frac{k+1}{7}} &gt; \sqrt{\frac{k+1383}{1389}} \right) \land \left(\sqrt{\frac{k+2}{8}} &gt; \sqrt{\frac{k+1384}{1390}} \right)$</span> for <span class="math-container">$k &gt; 6$</span></p>
3,788,298
<p>Let <span class="math-container">$f(x)$</span> be an integrable function on <span class="math-container">$[0,1]$</span> that obeys the property <span class="math-container">$f(x)=x$</span> for <span class="math-container">$x=\frac{n}{2^m}$</span>, where <span class="math-container">$n$</span> is an odd positive integer and <span class="math-container">$m$</span> is a positive integer. Calculate <span class="math-container">$\int_0^1f(t)dt$</span></p> <p><strong>My attempt:-</strong></p> <p>Any positive even number can be written as the sum of two positive odd integers. So, <span class="math-container">$f(x)=x, \forall x\in \{n/2^m:n,m\in \mathbb Z^+\}.$</span> I know the set <span class="math-container">$\{n/2^m:n,m\in \mathbb Z^+\}$</span> is dense in <span class="math-container">$[0,1]$</span>.</p> <p>Define <span class="math-container">$g(x)=f(x)-x$</span>; if <span class="math-container">$f$</span> were continuous, I could say that <span class="math-container">$f(x)=x$</span> using the sequential criterion of limits, and hence <span class="math-container">$\int_0^1f(t)dt=\frac{1}{2}$</span>. How do I proceed for a non-continuous function?</p>
Steven
606,584
<p>If we're using the Lebesgue integral, the value can be anything; simply define <span class="math-container">$f(x) = c$</span> outside the countable number of points you specify. If the Riemann integral is under consideration, the value must be <span class="math-container">$\frac12$</span>, since Riemann integrability implies that <span class="math-container">$f(x)$</span> is continuous almost everywhere, so that <span class="math-container">$f(x)=x$</span> almost everywhere.</p>
1,747,525
<p>Given a number $N$, how can I write down a summation of all odd numbers divisible by 5 which are also less than $N$?</p> <p>For instance, if $N = 27$ then I am looking for a series to generate $5+15+25$.</p> <p>It's pretty clear the series looks like </p> <p>$$\sum_{k=0}^{???} 5(2k+1)$$</p> <p>but I am having trouble with that upper index. It must involve flooring to the nearest integer divisible by 5 (maybe write this as $\lfloor N \rfloor_5$; perhaps there is better notation).</p>
Unit
196,668
<p>You want $5(2k+1) \le N &lt; 5(2(k+1)+1)$, which means $k \le \frac{N/5-1}{2} &lt; k+1$; now take floors.</p>
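A quick sanity check of this upper index (a Python sketch of ours; in exact arithmetic $\lfloor (N/5-1)/2 \rfloor$ equals the integer expression `(N - 5) // 10`):

```python
def sum_odd_multiples_of_5(N):
    """Sum 5(2k+1) for k = 0 .. K, where K = floor((N/5 - 1)/2).
    Integer-safe form: floor((N/5 - 1)/2) == (N - 5) // 10."""
    K = (N - 5) // 10
    return sum(5 * (2 * k + 1) for k in range(K + 1))

def brute_force(N):
    """Directly add the odd multiples of 5 that are <= N."""
    return sum(m for m in range(5, N + 1, 5) if m % 2 == 1)

print(sum_odd_multiples_of_5(27))  # 5 + 15 + 25 = 45
```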
1,747,525
<p>Given a number $N$, how can I write down a summation of all odd numbers divisible by 5 which are also less than $N$?</p> <p>For instance, if $N = 27$ then I am looking for a series to generate $5+15+25$.</p> <p>It's pretty clear the series looks like </p> <p>$$\sum_{k=0}^{???} 5(2k+1)$$</p> <p>but I am having trouble with that upper index. It must involve flooring to the nearest integer divisible by 5 (maybe write this as $\lfloor N \rfloor_5$; perhaps there is better notation).</p>
Anurag A
68,092
<p>You want $5(2k+1) \leq N$. Thus $$k \leq \frac{N}{10}-\frac{1}{2}.$$ Thus $k=\left\lfloor \frac{N}{10}-\frac{1}{2}\right\rfloor$.</p>
505,178
<blockquote> <p>Suppose $k$ is an algebraically closed field, and $f\in k[x, y]$ is an irreducible polynomial in two variables. Furthermore, suppose that $f(u(x), v(y))=f(x, y)$ for every $x, y\in k$, where $u\in k[x]$, $v\in k[y]$ are polynomials of one variable. Can we conclude that either $u(x)=x$ or $v(y)=y$?</p> </blockquote> <p>I think the answer is "yes". I am thinking maybe we can use some sort of degree argument, but there could be some cancellations if we try to expand $f(u(x), v(y))$. </p> <p>This question naturally arose to me when I was reading Section 1.4 "Rational Maps" in <em>Basic Algebraic Geometry</em> by Shafarevich. But, as far as I can tell, it is not directly related to any of the results presented there. </p>
leshik
15,215
<p>The answer is no. Take <span class="math-container">$f(x,y)=x^2-1+y^2$</span> and <span class="math-container">$u(x)=-x,$</span> <span class="math-container">$v(y)=-y.$</span> If you want some more "nontrivial" examples, you can consider symmetries with respect to <span class="math-container">$x\to 1-x$</span> or something like that (instead of the obvious one <span class="math-container">$x\to -x$</span>). </p>
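A quick numeric check of the counterexample (our own Python sketch):

```python
# f(u(x), v(y)) == f(x, y) even though neither u nor v is the identity.
def f(x, y):
    return x**2 - 1 + y**2

u = lambda x: -x
v = lambda y: -y

ok = all(f(u(x), v(y)) == f(x, y)
         for x, y in [(0.0, 1.0), (2.5, -3.0), (1.75, 0.25)])
print(ok)  # True
```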
3,736,580
<p>Show that for <span class="math-container">$n&gt;3$</span>, there is always a <span class="math-container">$2$</span>-regular graph on <span class="math-container">$n$</span> vertices. For what values of <span class="math-container">$n&gt;4$</span> will there be a <span class="math-container">$3$</span>-regular graph on <span class="math-container">$n$</span> vertices?</p> <p>I think this question is slightly beyond me. Can you please help me out with this question?</p> <p>For part two, I think that by handshaking I can exclude all odd numbers of vertices, as <span class="math-container">$3(2n+1)$</span> is not an even number. So what should be the answer? All even numbers of vertices? Does that make sense? And for part 1 it is obviously true, but how can I proceed to the answer? Thanks.</p>
DanielV
97,045
<p>You could define the sequence recursively in terms of the average of the previous terms of the sequence:</p> <p><span class="math-container">$$x_k = \begin{cases} 3 &amp; \text{ if } &amp; a_{k-1} &gt; \pi \\ 4 &amp; \text{ if } &amp; a_{k-1} &lt; \pi \\ \end{cases}$$</span></p> <p>where</p> <p><span class="math-container">$$a_n = \frac {1}{n}\sum_{k=1}^n x_k$$</span></p> <hr /> <p>The convergence of <span class="math-container">$|a_n - \pi| \to 0$</span> follows from</p> <p><span class="math-container">$$- \frac{\pi - 3}{n} &lt; a_n - \pi &lt; \frac{4 - \pi}n$$</span></p> <p>when <span class="math-container">$(x_{n-1}, x_n)$</span> is <span class="math-container">$(3, 4)$</span> or <span class="math-container">$(4, 3)$</span>. Also, <span class="math-container">$|a_n - \pi|$</span> is decreasing in the other cases.</p> <p>In the <span class="math-container">$(3, 4)$</span> case, <span class="math-container">$a_{n-1} &lt; \pi$</span> so <span class="math-container">$$\begin{array} {rcl} a_n &amp;=&amp; (a_{n-1}\cdot(n-1) + 4)/n \\ &amp;&lt;&amp; (\pi \cdot (n-1) + 4)/n \\ &amp;=&amp; \pi + (4 - \pi)/n \end{array}$$</span></p> <p>Similarly for the <span class="math-container">$(4, 3)$</span> case.</p> <hr /> <p>To be pedantically rigorous, it would also need to be pointed out that there is no final time <span class="math-container">$a_n - \pi$</span> changes signs.</p>
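A short simulation of this recursion (Python; we take the unspecified starting average to be $a_0 = 0$, so $x_1 = 4$) illustrates the $1/n$-type error bound:

```python
from math import pi

def averaged_sequence(n_terms):
    """x_k = 3 if the running average a_{k-1} exceeds pi, else 4; return a_n.
    (a_0 is taken as 0 -- an assumption the answer leaves open -- so x_1 = 4.)"""
    total = 0.0
    for k in range(1, n_terms + 1):
        avg = total / (k - 1) if k > 1 else 0.0
        total += 3 if avg > pi else 4
    return total / n_terms

a_n = averaged_sequence(100_000)
print(abs(a_n - pi))  # on the order of (4 - pi)/n
```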
3,736,580
<p>Show that for <span class="math-container">$n&gt;3$</span>, there is always a <span class="math-container">$2$</span>-regular graph on <span class="math-container">$n$</span> vertices. For what values of <span class="math-container">$n&gt;4$</span> will there be a <span class="math-container">$3$</span>-regular graph on <span class="math-container">$n$</span> vertices?</p> <p>I think this question is slightly beyond me. Can you please help me out with this question?</p> <p>For part two, I think that by handshaking I can exclude all odd numbers of vertices, as <span class="math-container">$3(2n+1)$</span> is not an even number. So what should be the answer? All even numbers of vertices? Does that make sense? And for part 1 it is obviously true, but how can I proceed to the answer? Thanks.</p>
fleablood
280,126
<p>Yes. Consider <span class="math-container">$a \le \omega \le b$</span>. (In this specific case <span class="math-container">$a=3; b=4; \omega = \pi$</span>)</p> <p>Define <span class="math-container">$x_1=\begin{cases}b &amp;\omega \le \frac {a+b}2\\a &amp;\omega &gt;\frac{a+b}2\end{cases}$</span></p> <p><span class="math-container">$v_k= average(x_1,....., x_k)=\frac {\sum_{i=1}^k x_i}k$</span>.</p> <p><span class="math-container">$x_{k+1} = \begin{cases}b &amp;\omega \le v_k\\a &amp;\omega &gt; v_k\end{cases}$</span>.</p> <p>It's easy to show algebraically:</p> <blockquote> <p>Claim 1: <span class="math-container">$|v_{k+1} - v_k| \le \frac {b-a}{k+1}$</span></p> </blockquote> <p>And it's easy to use that claim to show by induction that</p> <blockquote> <p>Claim 2: <span class="math-container">$|v_k - \omega| \le \frac {b-a}{k}$</span>.</p> </blockquote> <p>Then using the definition</p> <blockquote> <p>Def: <span class="math-container">$\lim_{n\to \infty} v_n =\omega$</span> if for any <span class="math-container">$\epsilon &gt; 0$</span> there is an <span class="math-container">$N$</span> so that whenever <span class="math-container">$n &gt; N$</span> then <span class="math-container">$|v_n -\omega| &lt; \epsilon$</span>.</p> </blockquote> <p>the result follows:</p> <p>Assuming <span class="math-container">$b &gt; a$</span> (if <span class="math-container">$a=b$</span> then <span class="math-container">$\omega = a =b$</span> and <span class="math-container">$x_k = v_k = \omega = a=b$</span> and there is nothing to prove), given an <span class="math-container">$\epsilon &gt; 0$</span>, if we let <span class="math-container">$n &gt; N \ge \frac {b-a}{\epsilon}$</span> then <span class="math-container">$|v_n - \omega| \le \frac {b-a}{n} &lt; \frac {b-a}{N}\le \epsilon$</span>. So <span class="math-container">$\lim_{n\to \infty} v_n =\omega$</span></p>
2,714,450
<p>Suppose $A$ and $B$ are two square matrices so that $e^{At}=e^{Bt}$ for infinitely many (countably or uncountably many) values of $t$, where $t$ is positive.</p> <p>Do you think that $A$ <strong>has to be equal to</strong> $B$?</p> <p>Thanks, Trung Dung.</p> <hr> <p>Maybe I did not state this clearly or correctly.</p> <p>I mean that the equality holds for all $t\in (0, T)$ where $T&gt;0$ or $T=+\infty$, i.e. for uncountably many $t$. In this case I think some of the counter-examples above do not work, because they only give equality for countably many $t$.</p>
Angina Seng
436,618
<p>Take $A=\pmatrix{0&amp;1\\-1&amp;0}$. Then $\exp(tA)=I$ for $t=2n\pi$ ($n$ integer). That is $\exp(tA)=\exp(tB)$ infinitely often for $B$ the zero matrix.</p>
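Since $A^2=-I$ here, $\exp(tA)=\cos(t)\,I+\sin(t)\,A$, which makes the claim easy to check numerically (a Python sketch of ours using the closed form rather than a matrix-exponential routine):

```python
from math import cos, sin, pi, isclose

def exp_tA(t):
    """exp(t*A) for A = [[0, 1], [-1, 0]]: since A^2 = -I,
    the exponential series sums to cos(t)*I + sin(t)*A."""
    return [[cos(t), sin(t)],
            [-sin(t), cos(t)]]

M = exp_tA(2 * pi)  # should be the identity matrix, up to rounding
is_identity = (isclose(M[0][0], 1.0) and isclose(M[1][1], 1.0)
               and isclose(M[0][1], 0.0, abs_tol=1e-12)
               and isclose(M[1][0], 0.0, abs_tol=1e-12))
print(is_identity)  # True
```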
2,714,450
<p>Suppose $A$ and $B$ are two square matrices so that $e^{At}=e^{Bt}$ for infinitely many (countably or uncountably many) values of $t$, where $t$ is positive.</p> <p>Do you think that $A$ <strong>has to be equal to</strong> $B$?</p> <p>Thanks, Trung Dung.</p> <hr> <p>Maybe I did not state this clearly or correctly.</p> <p>I mean that the equality holds for all $t\in (0, T)$ where $T&gt;0$ or $T=+\infty$, i.e. for uncountably many $t$. In this case I think some of the counter-examples above do not work, because they only give equality for countably many $t$.</p>
Community
-1
<p>$\textbf{Proposition 1.}$ Let $(t_k)$ be a positive sequence that converges to $l$ s.t., for every $k$, $t_k\not= l$. </p> <p>If, (*) for every $k$, $e^{t_kA}=e^{t_kB}$, then $A=B$.</p> <p>$\textbf{Proof.}$ There is $k$ s.t. $t_kA$ is $2i\pi$ congruence free (for every $\lambda,\mu\in spectrum(A)$, $t_k(\lambda-\mu)\notin 2i\pi\mathbb{Z}^*$). From a result by Hille, $t_kA,t_kB$ commute, which implies $AB=BA$. </p> <p>In the sequel, we may assume that $A=\lambda I+N$ where $N$ is nilpotent and (*) holds. Let $\mu\in spectrum(B)$; then, for every $k$, $e^{t_k\mu}=e^{t_k\lambda}$, that is, $t_k(\lambda-\mu)\in 2i\pi\mathbb{Z}$; thus $\lambda=\mu$ and $B=\lambda I+M$ where $M$ is nilpotent and $e^{t_kM}=e^{t_kN}$. Note that the exponential map is injective on the nilpotent matrices; then $t_kM=t_kN$ and $A=B$. $\square$</p> <p>EDIT. In the same way as above, we can prove the following.</p> <p>$\textbf{Proposition 2.}$ Let $(t_k)$ be a generic real sequence; for instance, the $t_k$ are iid and each follows the normal law $N(0,1)$. </p> <p>If, (*) for every $k$, $e^{t_kA}=e^{t_kB}$, then $A=B$ with probability $1$.</p> <p>In other words, what the OP says is true unless we construct an ad hoc sequence for which it fails.</p>
927,188
<p>This question has been on my mind for a very long time, and I thought I'd finally ask it here. </p> <p>When I was 6, my dad pulled me out of school. The classes were too easy; the professors, too dull. My father had been a man of philosophy his entire life (almost got a PhD in it) and regretted not having a more quantitative background. He wanted me to have a different life and taught me math accordingly. When I was 11, I taught myself trig. When I was 12, I started taking calculus at my local university. I continued on this track, and finally got to real analysis and abstract algebra at 15. I loved every math course I ever took and found myself breezing through all that was presented to me (the university was not Princeton after all). However, around this time, I came to the conclusion that math was not for me. I decided to try a different path.</p> <p>Why, you might ask, did I do this? The answer was simple: I didn't believe I could be a great mathematician. While I thrived taking the courses, I never turned math into a lifestyle. I didn't come home and do complex questions on a white board. I didn't read about Euler in my spare time. I also never felt I had a great intuition into problems. Once you showed me how to solve a problem, I was golden. But start from scratch on my own? It seemed like a different story entirely. To make things worse, my sister, who was at Caltech at the time, would call home with stories of all these incredible undergrads who solved the mathematical mysteries of the universe as a hobby. Whenever I mentioned math as a career, she would always issue a strong warning: you're not like these kids who spend all their time doing math. Think about doing something else. </p> <p>Over time, I came to agree with this statement. Coincidentally, I got rejected by MIT and Princeton to continue my undergraduate studies there. This crushed me at the time; my dream of studying math at one of the great institutions had ended. 
Instead, I ended up at Georgia Tech (not terrible by any means, just not what I had envisioned). Being at an engineering school, I thought I'd give aerospace a shot. It had lots of math, right? Not really, or at least not enough for my taste. I went into CS. This was much better, but still didn't feel quite right. At last, as a sophomore, I felt it was time to get back on track: I'm now double majoring in applied math and CS. </p> <p>My question is, how do I know I'm not making a mistake? There seem to be so many people doing math competitions, research, independent studies, etc., while I just started to take some math courses again. What should I do to test myself and see if I can really make math a career? I apologize for the long and possibly quite subjective post. I'd just really like to hear from math people who know their stuff. Thanks a bunch in advance. </p>
paul garrett
12,291
<p>There is a lot of hype surrounding "math competitions", since "winning" and "competition" can easily be understood in broad cultural terms, whether or not we collectively think of these as either highest goals or as legitimate formative principles. Most professional mathematics and its practice do not resemble competition-math at all: specifically, serious projects have substantial background requirements, may take months or years to complete or partially complete, and have meaning beyond "getting the result before anyone else" (although some people've been so indoctrinated in math-as-game that they never recover, and find no other meaning in it).</p> <p>Also, there is much hype about "the other people" being incredible geniuses, while few sane people will look at themselves and see "a genius". :) But this is mostly gossip or mythology, in part generated to create a superficial excitement where otherwise there'd be mostly hard work without pop-style-glamour. :) And, of course, there is a common style of "bluffing" and/or never admitting weakness or ignorance, but this is mostly a facade, whose maintenance is most enthusiastic among those most worried about the "game" aspect of mathematics or anything else. Such stuff should not be unquestioningly believed.</p> <p>To make a living as a mathematician, and to make reasonable contributions, it is not necessary to be a larger-than-life romantic-heroic figure. :) Possibly many of us, occasionally, wish that we were such a figure, but that is more comic-book or video-game reality than any human-lifetime reality.</p> <p>More important than hype is the concrete reality of how one spends one's days: if you like thinking about mathematical things, perhaps teaching mathematical things, then being a mathematician is a happy job, with or without rich-and-famousness. 
The purported glamor of "being a great [whatever]" is not a reliable thing to aim toward, since the "process" of <em>practicing</em> the [whatever] is how one will spend one's days. It is a reasonable analogy, I think, to say that "great musicians" occur by accident among the class of people who really, really enjoy <em>practicing</em> and "jamming" (in whatever genres), rather than people who wish for celebrity but hate practicing. Of course there're the people who <em>pose</em> as never practicing, but reality belies that cute P.R. pose...</p>
3,435,256
<p>The following statement is given in my book under the topic <em>Tangents to an Ellipse</em>:</p> <blockquote> <p>The <a href="http://mathworld.wolfram.com/EccentricAngle.html" rel="nofollow noreferrer">eccentric angles</a> of the points of contact of two parallel tangents differ by <span class="math-container">$\pi$</span></p> </blockquote> <p>In case of a circle, it is easy for me to visualise that two parallel tangents meet the circle at two points which are apart by <span class="math-container">$\pi$</span> radians as they are diametrically opposite. But in case of ellipse, as the eccentric angle is defined with respect to the <a href="http://mathworld.wolfram.com/AuxiliaryCircle.html" rel="nofollow noreferrer">auxiliary circle</a> and not the ellipse, I am unable to understand why two parallel tangents meet the ellipse at points which differ by <span class="math-container">$\pi$</span>. </p> <p>Kindly explain the reason behind this fact.</p>
Quanto
686,284
<p>According to the definition of the eccentric angle for the ellipse <span class="math-container">$\frac{x^2}{a^2}+ \frac{y^2}{b^2}=1$</span>,</p> <p><span class="math-container">$$ t= \tan^{-1} \frac{ay}{bx}$$</span></p> <p>By the tangent subtraction formula,</p> <p><span class="math-container">$$\tan(t_2-t_1)=\frac { \frac{ay_2}{bx_2} - \frac{ay_1}{bx_1} } {1+ \frac{ay_2}{bx_2} \frac{ay_1}{bx_1} }\tag{1}$$</span></p> <p>The slope of the tangent to the ellipse is <span class="math-container">$-\frac{b^2x}{a^2y}$</span>. So, the two parallel tangents satisfy</p> <p><span class="math-container">$$\frac{x_1}{y_1}=\frac{x_2}{y_2}\tag{2}$$</span></p> <p>Plugging (2) into (1),</p> <p><span class="math-container">$$\tan(t_2-t_1)=0$$</span></p> <p>so <span class="math-container">$t_2-t_1$</span> is a multiple of <span class="math-container">$\pi$</span>; since the two points of contact are distinct, the two angles are <span class="math-container">$\pi$</span> apart.</p>
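A numerical spot check that the tangents at eccentric angles $t$ and $t+\pi$ are parallel (Python; the semi-axes $a=2$, $b=1$ and the angle are arbitrary choices of ours):

```python
from math import cos, sin, pi, isclose

a, b = 2.0, 1.0  # arbitrary semi-axes

def tangent_slope(t):
    """Slope dy/dx of the tangent at (a cos t, b sin t),
    from dy/dt = b cos t and dx/dt = -a sin t."""
    return (b * cos(t)) / (-a * sin(t))

t = 0.7
print(tangent_slope(t), tangent_slope(t + pi))  # equal slopes: parallel tangents
```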
2,339,707
<p>Suppose I have five bins into which I want to place 15 balls. The bins have capacities $2$, $2$, $3$, $3$, and $7.$ I place the balls one at a time in the bins, randomly and uniformly amongst the bins that are not full (so for example, if after placing four balls, both of the bins with capacity $2$ are already full, the next ball is placed with probability $1/3$ in each of the remaining three bins). </p> <p>My question is if there is an efficient way to estimate the probability that the bin with capacity $7$ is full at the end of this process (it would be great if the technique generalizes in the obvious way).</p>
Miguel
259,671
<p>Your solution is incorrect because you restrict to a particular value for $p_0$. It is true that if $v$ is an eigenvector of $A$, then you have: $$A^n v=\lambda^n v$$ But you have to compute $A^n p_0$ instead, and you cannot choose $p_0$ because it is part of the problem statement. In other words, you cannot assume $p_0=v$, so the first line of your solution is false.</p>
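A toy illustration of the point, with matrices of our own choosing: $A^n v$ scales by $\lambda^n$ only when $v$ is an eigenvector, while a generic $p_0$ mixes the eigenvalues.

```python
def matvec(A, v):
    """Apply a 2x2 matrix to a vector."""
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

def apply_n(A, v, n):
    """Compute A^n v by repeated application."""
    for _ in range(n):
        v = matvec(A, v)
    return v

A = [[2, 0], [0, 3]]   # eigenvalues 2 and 3
v = [1, 0]             # eigenvector for eigenvalue 2
p0 = [1, 1]            # generic vector, NOT an eigenvector

print(apply_n(A, v, 3))   # [8, 0]  == 2^3 * v
print(apply_n(A, p0, 3))  # [8, 27] != lambda^3 * p0 for any single lambda
```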
1,590,625
<blockquote> <p>If $f(x)=\log \left(\cfrac{1+x}{1-x}\right)$ for $-1 &lt; x &lt; 1$,then find $f \left(\cfrac{3x+x^3}{1+3x^2}\right)$ in terms of $f(x)$.</p> </blockquote> <p><strong>My Attempt</strong> $$f \left(\cfrac{3x+x^3}{1+3x^2}\right)=\log\left(\cfrac{1+\cfrac{3x+x^3}{1+3x^2}}{1-\cfrac{3x+x^3}{1+3x^2}}\right)=\log \left(\cfrac{1+3x^2+3x+x^3}{1+3x^2-3x-x^3}\right)=\\\log(1+3x^2+3x+x^3)-\log(1+3x^2-3x-x^3)$$</p> <p>Now I am kinda clueless about how to express this in terms of $f(x)$.</p> <p>Can you guys help ?</p>
Archis Welankar
275,884
<p>Hint: $$\log\left(\frac{1+x}{1-x}\right)=2\left(x+\frac{x^3}{3}+\frac{x^5}{5}+\cdots\right)$$ for $|x|&lt;1$</p>
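The identity this hint points toward is $f\left(\frac{3x+x^3}{1+3x^2}\right)=3f(x)$, which follows from $1\pm\frac{3x+x^3}{1+3x^2}=\frac{(1\pm x)^3}{1+3x^2}$; it can be spot-checked numerically (our own Python sketch):

```python
from math import log, isclose

def f(x):
    return log((1 + x) / (1 - x))

def g(x):
    return (3 * x + x**3) / (1 + 3 * x**2)

# (1 + g(x)) / (1 - g(x)) simplifies to ((1 + x)/(1 - x))^3, so f(g(x)) = 3 f(x).
ok = all(isclose(f(g(x)), 3 * f(x)) for x in (-0.9, -0.2, 0.3, 0.75))
print(ok)  # True
```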
33,702
<p>The code that I have written has an unintended consequence that I'm not sure how to get around. I want 3 rotation transforms to be applied simultaneously to 1 graphics object. Instead, I get 3 separate copies of the graphics object, one per transformation.</p> <p>The documentation does state that this will be the outcome of using multiple transformations on a graphics object.</p> <blockquote> <p>GeometricTransformation[g, {t1, t2, ...}]<br/> <strong><em>represents multiple copies</em></strong> of g transformed by a collection of transformations.</p> </blockquote> <p>My question is: how is it possible to achieve the outcome that I described instead of getting multiple copies?</p> <p>Here is the code I am executing:</p> <pre><code>Manipulate[ Graphics3D[{ EdgeForm[None], GeometricTransformation[Cylinder[], {RotationTransform[a Pi, {1, 0, 0}], RotationTransform[b Pi, {0, 1, 0}], RotationTransform[c Pi, {0, 0, 1}]}]}], {{a, 0}, -1, 1}, {{b, 0}, -1, 1}, {{c, 0}, -1, 1}, SaveDefinitions -&gt; True] </code></pre> <p>If anyone could show me a way to accomplish this, I'd appreciate it.</p>
halirutan
187
<p>When tracing other plots, I often saw that plotting-related functions compile their arguments when possible. A quick look at the trace of your example suggests the same, because the <code>Exp</code> function is used only for the evaluation of <code>function</code>. This seems to indicate that your second argument <code>Exp[-9 t^2]</code> was already compiled down and doesn't show up when the numeric values for the plot are generated.</p> <p>To support my hypothesis, let us change your example to create a plot that matches completely by compiling <code>function</code> down:</p> <pre><code>With[{cf = Compile[{{t, _Real}}, Exp[-9 t^2]]}, functionC[t_?NumericQ] := cf[t] ]; LogLogPlot[{functionC[t], Exp[-9 t^2]}, {t, .01, 100}] </code></pre> <p>Because <code>LogLogPlot</code> tries to analyze <code>functionC</code>, I wrapped the compiled function with <code>?NumericQ</code>. The result shows two matching graphs:</p> <p><img src="https://i.stack.imgur.com/SLTSi.png" alt="enter image description here"></p> <p>If you now want to gain further insight, you can concentrate on the difference between the compiled function and the evaluation of <code>Exp[-9 t^2]</code>.</p>
700,012
<p>Ok, so the question is to prove by induction that:</p> <p>$${n \choose k} \le n^k$$</p> <p>where $n$ and $k$ are integers, $k \le n$.</p> <p>How do I approach this? Do I choose an $n$ and a $k$ to form my base case?</p>
Stella Biderman
123,230
<p>You need to prove this for all $n$ and all $k$, right? Then, to do it by induction, you must take an arbitrary $n$ and then induct on $k$.</p>
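Before setting up the induction, a brute-force check of the statement on small cases can build confidence (our own Python sketch):

```python
from math import comb

# Check C(n, k) <= n^k for all 0 <= k <= n over a small range of n.
ok = all(comb(n, k) <= n ** k
         for n in range(1, 30)
         for k in range(n + 1))
print(ok)  # True
```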
298,791
<blockquote> <p>If a ring $R$ is commutative, I don't understand why if $A, B \in R^{n \times n}$, $AB=1$ means that $BA=1$, i.e., $R^{n \times n}$ is Dedekind finite.</p> </blockquote> <p>Arguing with the determinant seems to fail: although $\det(AB)=\det(BA)=1$, that doesn't necessarily mean that $BA=1$.</p> <blockquote> <p>And is every left zero divisor also a right zero divisor? </p> </blockquote>
Martin Brandenburg
1,650
<p><strong>Lemma</strong>. Every surjective endomorphism $f : M \to M$ of a finitely generated $R$-module $M$ is an isomorphism.</p> <p>Proof: $M$ becomes an $R[x]$-module, where $x$ acts by $f$. By assumption, $M=xM$. Nakayama's Lemma implies that there is some $p \in R[x]$ such that $(1-px)M=0$. This means $\mathrm{id}=p(f) f$. Hence, $f$ is injective. $\square$</p> <p><strong>Corollary</strong>: If $f,g$ are endomorphisms of a finitely generated $R$-module satisfying $fg=\mathrm{id}$, then also $gf=\mathrm{id}$. </p>
27,455
<p>Let $(f_n)_{n \geq 1}$ be a disjointly supported sequence of functions in $L^\infty(0,1)$. Is the space $\overline{\mathrm{span}(f_n)}$ (the closure of the linear span) complemented in $L^\infty(0,1)$? By complemented we mean that $L^\infty(0,1) = \overline{\mathrm{span}(f_n)} \oplus X$, where $X$ is a subspace of $L^\infty$ and $\oplus$ denotes the direct sum. </p> <p>Equivalently, we can ask if there exists a projection $P\colon L^\infty(0,1) \to \overline{\mathrm{span}(f_n)}$?</p> <p>It is quite easy to prove this in $C[0,1]$. Indeed, let $(f_n)$ be a disjointly supported sequence in $C[0,1]$ and fix $x_n \in \mathrm{supp}(f_n)$, $n \in \mathbb{N}$. Then the space $C[0,1]$ can be written as $$ C[0,1] = \overline{\mathrm{span}(f_n)} \oplus \{f \in C[0,1]\colon f(x_n) = 0, n = 1,2,\dots \}. $$</p>
Philip Brooker
11,532
<p>The answer is <em>no</em>. The closed linear span of such a sequence is separable, and so if $\overline{span(f_n)}$ was complemented in $L^\infty (0, 1)$ then every Banach space isomorphic to $L^\infty (0, 1)$ would contain an infinite dimensional, separable complemented subspace. In particular, since $L^\infty (0, 1)$ is isomorphic to $\ell^\infty$ (this is an old result due to Pelczynski, but a proof is given as Theorem 4.3.10 of Albiac and Kalton's text <em>Topics in Banach space theory</em>), if $\overline{span(f_n)}$ was complemented in $L^\infty (0, 1)$ then $\ell^\infty$ would contain an infinite dimensional, separable complemented subspace... however, this is not true since every infinite dimensional complemented subspace of $\ell^\infty$ is isomorphic to $\ell^\infty$ by a result of J. Lindenstrauss (see, e.g., Lindenstrauss and Tzafriri's book <em>Classical Banach Spaces I</em>, Theorem 2.a.7), hence nonseparable.</p>
27,455
<p>Let $(f_n)_{n \geq 1}$ be a disjointly supported sequence of functions in $L^\infty(0,1)$. Is the space $\overline{\mathrm{span}(f_n)}$ (the closure of the linear span) complemented in $L^\infty(0,1)$? By complemented we mean that $L^\infty(0,1) = \overline{\mathrm{span}(f_n)} \oplus X$, where $X$ is a subspace of $L^\infty$ and $\oplus$ denotes the direct sum. </p> <p>Equivalently, we can ask if there exists a projection $P\colon L^\infty(0,1) \to \overline{\mathrm{span}(f_n)}$?</p> <p>It is quite easy to prove this in $C[0,1]$. Indeed, let $(f_n)$ be a disjointly supported sequence in $C[0,1]$ and fix $x_n \in \mathrm{supp}(f_n)$, $n \in \mathbb{N}$. Then the space $C[0,1]$ can be written as $$ C[0,1] = \overline{\mathrm{span}(f_n)} \oplus \{f \in C[0,1]\colon f(x_n) = 0, n = 1,2,\dots \}. $$</p>
Michael Causey
52,539
<p>If all but finitely many of the functions are the zero function, then the answer is yes, because any finite-dimensional subspace is complemented. But this is the trivial case. </p> <p>In the nontrivial case, just note that if you normalize the nonzero functions in the sequence, they form a basic sequence $1$-equivalent to the unit vector basis of $c_0$. But we know $\overline{\text{span}(f_n)}=c_0$ is not complemented in $L_\infty=\ell_\infty$ (see, for example, Albiac and Kalton's Topics in Banach Space Theory). </p>
1,986,798
<p>The way I solved the problem is to change the equation to $|x+2|=1-|y-3|$, and then square both sides. But I don't think it is the right way to solve the problem. I hope someone can either give me a hint or show me how to solve the problem.</p> <blockquote> <p>$|x+2|+|y-3|=1$ is an equation for a square. How many units are in the lengths of its diagonals?</p> </blockquote>
GFauxPas
173,170
<p>The Jacobian can be a row or column vector, which is to say a matrix with only one row or column. As CoffeeBliss says, the Jacobian of a function $\mathbb R^n \to \mathbb R^m$ is an $m \times n$ matrix.</p> <p>There is an operation called the <strong>gradient</strong> of $f$ and it is defined by:</p> <p>$$\boldsymbol{\nabla} f= \begin{bmatrix}\ \partial_xf \ \ \ \partial_yf \ \ \ \partial_zf \ \end{bmatrix}$$</p> <p>The symbol $\nabla$ is called <strong>nabla</strong>.</p> <p>Note that the gradient as I defined it is a <strong>row</strong> vector and that the Jacobian $\dfrac{\partial f}{\partial(x,y,z)}$ would be a <strong>column</strong> vector. However, it's often convenient to not distinguish between the two, and so it's common to consider the gradient as a specific case of the Jacobian where the codomain is one-dimensional.</p>
1,729,308
<p>The sum of the first $n$ $(n&gt;1)$ terms of the A.P. is $153$ and the common difference is $2$. If the first term is an integer, then the number of possible values of $n$ is </p> <p>$a)$ $3$</p> <p>$b)$ $4$</p> <p>$c)$ $5$</p> <p>$d)$ $6$</p> <p>My approach: I used the formula for the sum of the first $n$ terms of an A.P. to arrive at the following quadratic equation: $n^2 + n(a-1) -153 = 0 $</p> <p>Next up I realised that since we are talking about the number of terms, the possible values which $n$ can take must be whole numbers. That is, the discriminant of the above quadratic should yield a whole number; in other words, </p> <p>$ (a-1)^2 + 612 = y^2 $ for some $y$. </p> <p>However I am stuck at this point, as from here I am unable to figure out the number of such $a$'s (i.e., the initial terms of an A.P.) which will complete the required Pythagorean triplet. The answer mentioned is $5$.</p> <p>Please let me know if I am doing a step wrong somewhere, or if you have a better solution, that will be welcomed too. </p>
User
311,480
<p>One of the possible values of $n$ is $3$.</p> <p><strong>REASON</strong></p> <p>Let the first term of the $AP$ be $a-2$.</p> <p>$AP: a-2, a, a+2,a+4,a+6,a+8,a+10,\cdots$</p> <p>The sum of the first three terms of the $AP$ is $3a=153$, which gives $a=51$ (an integer value).</p> <p>Now generalising this pattern,</p> <p>we get that the other possible values of $n$ are $9,17,51,153$.</p> <p>When we have an odd number of terms, say $2k+1$,</p> <p>we can take the $AP$ to be $a-2k,a-2(k-1),a-2(k-2),\cdots,a,a+2,a+4,\cdots,a+2(k-1),a+2k$</p> <p>(Note: I have mentioned the first $2k+1$ terms here.)</p> <p>On adding all these we get $a(2k+1)$ (as the common-difference terms cancel each other in pairs, by symmetry about $a$).</p> <p>Thus $a(2k+1)=153\Rightarrow a=\frac{153}{2k+1}$; for an integral value of $a$, $2k+1$ has to be a positive factor of $153$, and there are $5$ such factors (excluding $1$): $3, 9, 17, 51, 153$.</p>
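Not part of the original answer, but the count is easy to sanity-check by brute force (using the closed form $S = n(a+n-1)$ for the sum of $n$ terms with first term $a$ and common difference $2$):

```python
# Count the values n > 1 for which an AP with common difference 2 and an
# integer first term a has the sum of its first n terms equal to 153.
# The sum is S = n/2 * (2a + 2(n-1)) = n * (a + n - 1), so n must divide 153.
valid_n = []
for n in range(2, 154):
    if 153 % n == 0:
        a = 153 // n - n + 1           # the corresponding integer first term
        assert n * (a + n - 1) == 153  # sanity check
        valid_n.append(n)

print(valid_n)       # [3, 9, 17, 51, 153]
print(len(valid_n))  # 5
```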
331,654
<p>After having received Brian M. Scott's permission (see comments in the selected answer) I am integrating his suggestions with my own solutions to form a complete answer to the questions appearing below. </p> <blockquote> <p>Let $\mathscr{T}$ be the collection of subsets of $\Bbb R$ consisting of $\emptyset, \Bbb R$ and the rays of the form $(r, \infty)$, where $r \in \Bbb R$.</p> <p>$(a)$ Exhibit that this, indeed, is a topology on $\Bbb R$.</p> </blockquote> <p><em>Proof</em>: Given any finite number of open sets of the form above, $\exists$ a maximal $r_n$. The set $(r_n, \infty)$ is the intersection of these finitely many sets. Let $\{U_i\}_{i \in I}$ be an arbitrary family of open sets. Let $r_{\min} = \inf\{r_i\}$. Then $(r_{\min}, \infty)$ is the union of the $U_i$ and clearly it is of the desired form to be an element of the topology. If $r_{\min} = -\infty$ then the union of the $U_i$ is the entire $\Bbb R$.</p> <blockquote> <p>$(b)$ Show that it fails to be a topology if $r \in \Bbb Q$.</p> </blockquote> <p>I don't have the full answer to this. I think that there should be a problem in the union of arbitrarily many open sets. Any help with a counterexample will be very helpful.</p> <blockquote> <p>Answer the following questions. Is $(\Bbb R, \mathscr{T})$:</p> <p>$(c)$ $T_1$?</p> </blockquote> <p>No. Let $x_1 \neq x_2 \in \Bbb R$ and assume without loss of generality that $x_1 &lt; x_2$. Then any open set containing $x_1$ will be of the form $(x_1 - \epsilon, \infty)$ for some $\epsilon &gt; 0$ and will necessarily contain $x_2$.</p> <blockquote> <p>$(d)$ Hausdorff</p> </blockquote> <p>No. Being Hausdorff (or $T_2$) would imply that it is $T_1$, a contradiction to (c).</p> <blockquote> <p>$(e)$ metrizable</p> </blockquote> <p>No. 
If there were a metric, the metric space would have to be Hausdorff, contradicting (d).</p> <blockquote> <p>$(f)$ second-countable</p> </blockquote> <p>I have no idea on how to go about this one.</p> <blockquote> <p>$(g)$ compact</p> </blockquote> <p>No. There exists no finite cover of this space. Suppose we are given a finite cover of this space. Then there exists a minimal $r$ such that $(r_{\min}, \infty)$ covers the entire space. Of course, this is only possible if $(r_{\min}, \infty) = \Bbb R$.</p> <blockquote> <p>$(h)$ locally compact</p> </blockquote> <p>Yes. We need to exhibit that any point has a compact neighbourhood. To this end, fix $x \in \Bbb R$. Let $(r, \infty)$ be a neighbourhood of $x$ and let $\{U_i\}_{i \in I}$ be an open cover. Then, there must exist $q \in \Bbb R$ such that $q \ge r$ so that $(q, \infty)$ is a set of the open cover. Clearly, taking $(q, \infty)$ as the subcover completes the proof.</p> <blockquote> <p>$(i)$ connected</p> </blockquote> <p>In part $(j)$ we prove that $(\Bbb R, \mathcal{T})$ is path-wise connected, hence connected.</p> <blockquote> <p>$(j)$ path-wise connected</p> </blockquote> <p>$\Bbb R$ is convex. Given $x, y \in \Bbb R$, the path $f: [0,1] \to \Bbb R$ given by $f(t) = (1-t)x + ty$ is continuous and $f(0) = x$ and $f(1) = y$. Since path-wise connected implies connected we also answered $(i)$.</p> <blockquote> <p>What is the closure of $\{1\}$ in $(\Bbb R, \mathcal{T})$?</p> </blockquote> <p><em>Proof</em>: Since the closure is the smallest closed set containing $\{1\}$, it is clear that it is $(-\infty, 1]$.</p> <p>Any suggestions, corrections, hints and any help, in general, will be tremendously appreciated! Any stylistic improvements in the formatting of the question are also greatly encouraged!</p>
anon271828
54,360
<p>Hint: For $(b)$, use the fact that $\Bbb Q$ is dense in $\Bbb R$. Maybe consider $\sqrt{2}+1/n\to\sqrt{2}$ as $n\to\infty$. For $(g)$, I'm not sure why you say there doesn't exist a finite cover; $\Bbb R$ itself is a perfectly good finite open cover. For $(j)$, I think you need more argument. It seems like you are jumping topologies, which is okay, but you have to argue that the function you've defined is actually continuous via the definition for continuous in topology.</p> <p>I'm not sure about $(f)$, so I'll leave that for someone else, but hopefully this gets you started on the other parts. I <em>think</em> everything I didn't mention was okay, though.</p>
2,913,921
<p>As far as I know, the general complexity of matrix inversion is $O(n^3)$, which is a little high. My matrix is $(I + A)$, where $I$ is an $n \times n$ identity matrix and $A$ is a Hermitian matrix whose elements all have very small norm (about $1/100$). Therefore, I think it is a special matrix that is close to the identity matrix. </p> <p>So, is there any effective algorithm that can solve the inversion of this matrix with low complexity? Hopefully it can be reduced to $O(n^2)$. High accuracy is not strictly necessary. </p>
Pushpendre
52,858
<p>$(I+A)(I-A) = I - AA$. If $A$ is small enough, then you may be able to approximate $(I+A)^{-1}$ by $I-A$; the error is of order $\|A\|^2$. </p> <p>If you don't need to compute the inverse but only need to solve $(I+A)^{-1}v$, then you can improve accuracy by taking the Neumann series $I - A + AA - AAA + \cdots$ up to the desired tolerance (it converges whenever $\|A\| &lt; 1$) and then computing the matrix-vector products $v - Av + A(Av) - \cdots$. Since each additional term costs only one matrix-vector product, this keeps the work at $O(n^2)$ per term.</p> <p>Hard to say more without more structure on $A$.</p>
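A rough numerical sketch of the truncated Neumann series (the matrix size, random seed, and truncation length are arbitrary choices of mine, not from the answer); plain lists are used so the $O(n^2)$ cost per matrix-vector product is explicit:

```python
import random

random.seed(0)
n = 60
# Symmetric A with small entries (spectral norm well below 1), as in the question
B = [[random.gauss(0, 1) / 100 for _ in range(n)] for _ in range(n)]
A = [[(B[i][j] + B[j][i]) / 2 for j in range(n)] for i in range(n)]
v = [random.gauss(0, 1) for _ in range(n)]

def matvec(M, w):
    # one O(n^2) matrix-vector product
    return [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]

# Neumann series: (I + A)^{-1} v = v - Av + A(Av) - ...
x = v[:]
term = v[:]
for _ in range(40):
    term = [-t for t in matvec(A, term)]
    x = [xi + ti for xi, ti in zip(x, term)]

# Residual check: (I + A) x should reproduce v.
residual = [xi + axi - vi for xi, axi, vi in zip(x, matvec(A, x), v)]
res_norm = max(abs(r) for r in residual)
print(res_norm)  # tiny, since the entries of A are ~ 1/100
```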
3,089,493
<p>Calculate the volume between <span class="math-container">$x^2+y^2+z^2=8$</span> and <span class="math-container">$x^2+y^2-2z=0$</span>. I don't know how to approach this but I still tried something:</p> <p>I rewrote the second equation as: <span class="math-container">$x^2+y^2+(z-1)^2=z^2+1$</span> and then combined it with the first one and got <span class="math-container">$2(x^2+y^2)+(z-1)^2=9$</span> and then parametrized this with the regular spherical parametrization, which is:</p> <p><span class="math-container">$$x=\frac {1}{\sqrt{2}}r\sin \theta \cos \phi$$</span> <span class="math-container">$$y=\frac 1{\sqrt{2}}r\sin\theta\sin\phi$$</span> <span class="math-container">$$z=r\cos\theta + 1$$</span></p> <p>And of course the volume formula:</p> <p><span class="math-container">$$V(\Omega)=\int\int\int_{\Omega} dxdydz$$</span></p> <p>But that led me to a wrong answer.. what should I do?</p> <p>Alternatively, I tried parametrizing like this: <span class="math-container">$x=r\cos t$</span>, <span class="math-container">$y=r\sin t$</span>. Then <span class="math-container">$r^2+z^2=8$</span> and <span class="math-container">$r^2-2z=0$</span>, giving the only 'good' solution <span class="math-container">$r=2, z=2$</span>; then <span class="math-container">$r\in[0,2]$</span> and <span class="math-container">$z\in[\frac {r^2}2,\sqrt{8-r^2}]$</span> (positive root, because the surfaces intersect in the plane <span class="math-container">$z=2$</span>),</p> <p>giving <span class="math-container">$\int_{0}^{2\pi}\int_0^2\int_{\frac {r^2}2}^{\sqrt{8-r^2}}rdzdrdt.$</span> But I still got the wrong answer..</p>
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span></p> <p><span class="math-container">\begin{align} V &amp; \stackrel{\mrm{def.}}{\equiv} \iiint_{\mathbb{R}^{\large 3}}\bracks{x^{2} + y^{2} + z^{2} &lt; 8} \bracks{z &gt; {x^{2} + y^{2} \over 2}}\dd^{3}\vec{r} \\[5mm] &amp; \underline{\mbox{Use Cylindrical Coordinates:}} \\ V &amp; = \int_{0}^{2\pi} \int_{-\infty}^{\infty}\int_{0}^{\infty}\bracks{\rho^{2} + z^{2} &lt; 8} \bracks{z &gt; {\rho^{2} \over 2}}\rho\,\dd\rho\,\dd z\,\dd\phi \\[5mm] &amp; = \pi\int_{0}^{\infty}\int_{0}^{\infty}\bracks{\rho + z^{2} &lt; 8} \bracks{z &gt; {\rho \over 2}}\,\dd\rho\,\dd z \\[5mm] &amp; = \pi\int_{0}^{\infty}\int_{0}^{\infty} \bracks{{1 \over 2}\,\rho &lt; z &lt; \root{8 - \rho}}\,\dd z\,\dd\rho \\[5mm] &amp; = \pi\int_{0}^{\infty} \bracks{{1 \over 2}\,\rho &lt; \root{8 - \rho}} \int_{\rho/2}^{\root{8 - \rho}}\,\dd z\,\dd\rho \\[5mm] &amp; = \pi\int_{0}^{\color{red}{4}} \pars{\root{8 - \rho} - {1 \over 2}\,\rho}\,\dd\rho = \bbx{{4 \over 3}\pars{8\root{2} - 7}\,\pi} \approx 18.0692 \end{align}</span></p> <blockquote> <p>Note that <span class="math-container">$\ds{0 &lt; \rho/2 &lt; \root{8 - \rho} \implies \rho &lt; \color{red}{4}}$</span>.</p> </blockquote>
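As a numerical sanity check (my addition, not part of the answer), the final one-variable integral $\pi\int_0^4\left(\sqrt{8-\rho}-\frac{\rho}{2}\right)d\rho$ can be evaluated with a simple midpoint rule and compared against the closed form:

```python
import math

# V = pi * integral_0^4 (sqrt(8 - rho) - rho/2) d rho, per the reduction above
N = 200000
h = 4.0 / N
# midpoint rule (the integrand is smooth on [0, 4])
integral = sum(math.sqrt(8 - (i + 0.5) * h) - (i + 0.5) * h / 2 for i in range(N)) * h
V_numeric = math.pi * integral

V_closed = (4.0 / 3.0) * (8 * math.sqrt(2) - 7) * math.pi
print(V_numeric, V_closed)  # both about 18.0692
```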
1,516,925
<p>Let $x,y,z$ be 3 non-zero integers such that: </p> <p>$$(x+y)(x^2-xy+y^2)=z^3$$</p> <p>Let us assume that $(x+y)$ and $(x^2-xy+y^2)$ are coprime, and set $x+y=r^3$ and $x^2-xy+y^2=s^3$.</p> <p>Can one write that $z=rs$ where $r,s$ are 2 integers? I am not seeing why not but I want to be sure.</p>
André Nicolas
6,312
<p>Yes, there exist such integers $r$ and $s$. It is simplest to use the Fundamental Theorem of Arithmetic (Unique Factorization Theorem). The result is easy to prove for negative $z$ if we know the result holds for positive $z$. Also, the result is clear for $z=1$. So we may assume that $z\gt 1$.</p> <p>Let $z=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$, where the $p_i$ are distinct primes. Then $z^3=p_1^{3a_1}\cdots p_k^{3a_k}$.</p> <p>Because by assumption $x+y$ and $x^2-xy+y^2$ are relatively prime, the primes in the factorization of $z^3$ must split into two sets, the ones that "belong to" $x+y$ and the ones that belong to $x^2-xy+y^2$. Because any prime has exponent divisible by $3$, each of $x+y$ and $x^2-xy+y^2$ is a perfect cube.</p> <p>The rest is easy. From $z^3=r^3s^3$, it immediately follows that $z=rs$.</p> <p><strong>Remark:</strong> Note that $x$ and $y$ relatively prime does not imply $x+y$ and $x^2-xy+y^2$ are relatively prime. They could be both divisible by $3$.</p>
131,051
<p>So we want to find a $u$ such that $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$. I obtained that $u$ works if it is of the following form: $$u=\sqrt[6]{2^a5^b}$$ where $a\equiv 1\pmod{2}$, $a\equiv 0\pmod{3}$, $b\equiv 0\pmod{2}$ and $b\equiv 1\pmod{3}$. This works since $$u^3=\sqrt{2^a5^b}=2^{\frac{a-1}{2}}5^{\frac{b}{2}}\sqrt{2}$$and also, $$u^2=\sqrt[3]{2^a5^b}=2^{\frac{a}{3}}5^{\frac{b-1}{3}}\sqrt[3]{5}$$Thus we have that $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})\subseteq \mathbb{Q}(u)$. Note that $\sqrt{2}$ has degree $2$ (i.e., $[\mathbb{Q}(\sqrt{2}):\mathbb{Q}]=2$) and also that $\sqrt[3]{5}$ has degree $3$. As $\gcd(2,3)=1$, we have that $[\mathbb{Q}(\sqrt{2},\sqrt[3]{5}):\mathbb{Q}]=6$. Note that this is also the degree of the extension of $u$, since one could check that the set $\{1,u,...,u^5\}$ is $\mathbb{Q}$-independent. Ergo, we must have equality. That is, $\mathbb{Q}(u)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$.</p> <p>My question is: How can I find all $w$ such that $\mathbb{Q}(w)=\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$? This is homework, so I would rather have hints than a spoiler answer. I believe that they are all of the form described above, but a priori I do not know how to prove this is true. </p> <p>My idea was the following: since $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})$ has degree $6$, if $w$ is such that the desired equality is satisfied, then $w$ is a root of an irreducible polynomial of degree $6$; moreover, we ought to be able to find rational numbers so that $$\sqrt{2}=\sum_{i=0}^5q_iw^i$$ and $$\sqrt[3]{5}=\sum_{i=0}^5p_iw^i$$But from here I do not know how to show that the $u$'s described above are the only ones with this property (it might be false, a priori I don't really know). </p>
Community
-1
<p>If we take $u = \sqrt{2} + \sqrt[3]{5}$, such a $u$ <em>almost always</em> turns out to work. In fact let's check whether a rational linear combination of $\sqrt{2}$ and $\sqrt[3]{5}$ will work. Let us now write $u$ as $u = a\sqrt{2} + b\sqrt[3]{5}$ for rationals $a$ and $b$.</p> <p>Clearly we have that $\Bbb{Q}(u)\subseteq \Bbb{Q}(\sqrt{2},\sqrt[3]{5})$. To show the other inclusion, we just need to show that, say, $\sqrt{2} \in \Bbb{Q}(u)$, for then $\sqrt[3]{5} = \frac{a\sqrt{2} + b\sqrt[3]{5} - a\sqrt{2}}{b}$ will be in $\Bbb{Q}(u)$. Here is a quick and easy way of doing this:</p> <p>Write $u = a\sqrt{2} + b\sqrt[3]{5}$ so that $(\frac{u - a\sqrt{2}}{b})^3 = 5$, i.e. $(u - a\sqrt{2})^3 = 5b^3$. We may assume that $a$ and $b$ are both nonzero, for otherwise the claim is trivial. Then expanding the left hand side by the binomial theorem we get that</p> <p>$$ u^3 - 3\sqrt{2}u^2a + 6ua^2 - 2a^3\sqrt{2} = 5b^3.$$</p> <p>Rearranging, we get that </p> <p>$$\sqrt{2} = \frac{u^3 + 6ua^2 -5b^3}{ 3u^2a + 2a^3 }.$$</p> <p>Since $\Bbb{Q}(u)$ is a field and the denominator $3u^2a + 2a^3 = a(3u^2+2)$ is nonzero, the right hand side is in $\Bbb{Q}(u)$, so that $\sqrt{2}$ is in here. Done!</p>
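A quick numerical check of the displayed formula $\sqrt{2} = \frac{u^3 + 6ua^2 - 5b^3}{3u^2a + 2a^3}$, for a few arbitrarily chosen nonzero rational values of $a$ and $b$ (my addition):

```python
import math

r2 = math.sqrt(2)
c5 = 5 ** (1.0 / 3.0)

recovered = []
for a, b in [(1, 1), (2, 3), (0.5, 1)]:
    u = a * r2 + b * c5
    # sqrt(2) recovered as a rational expression in u
    s2 = (u**3 + 6 * u * a**2 - 5 * b**3) / (3 * u**2 * a + 2 * a**3)
    recovered.append(s2)

print(recovered)  # each entry approximately 1.41421356...
```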
2,242,846
<p>I will try to prove the theorem in the title:</p> <blockquote> <p>Suppose $S$ is closed and non-empty; then if $b = \sup\{x: x \in S\}$ (the least upper bound) exists, $b \in S$.</p> </blockquote> <p>I also use the following <strong>Theorem</strong> which we proved in class: $S$ is closed iff every Cauchy sequence in $S$ converges to a point in $S$. </p> <p>Either $b \in S$ and we are done, or $b \notin S$, and so for each $\epsilon_n = \frac{1}{n}$ with $n \in \mathbb{N}$, $\exists x_n \in (b - \epsilon_n, b] \cap S$, because otherwise $b - \epsilon_n$ would be an upper bound smaller than $b$. So, clearly $x_n \rightarrow b$ as $n \rightarrow \infty.$ Since $x_n$ converges, $x_n$ is Cauchy and in $S$, so it must converge to some point in $S$, let's say $x_o$, by the above theorem. However, limits are unique, so $b = x_o$, but $b \notin S$ and $x_o \in S$. Thus we have arrived at a contradiction and so $b \in S$.</p>
Bram28
256,001
<p>Given your EE rule, you need to prove $\forall x (\neg p(x) \rightarrow \neg \forall x \ P(x))$, which you do by a universal introduction on $\neg p(x) \rightarrow \forall x \ p(x)$, which on its turn you prove by a conditional Introduction on a subproof that assumes $\neg p(x)$ and derives $\neg \forall x \ p(x)$ .. and the latter you get by a proof by contradiction as you yourself surmised. So:</p> <ol> <li><p>$\exists x \neg p(x)$ Premise</p></li> <li><p>$\quad \neg p(x)$ Assumption</p></li> <li><p>$\quad \quad \forall x \ p(x)$ Assumption</p></li> <li><p>$\quad \quad p(x)$ $\forall$ Elim 3</p></li> <li><p>$\quad \forall x \ p(x) \rightarrow p(x)$ $\rightarrow$ Intro 3-4</p></li> <li><p>$\quad \quad \forall x \ p(x)$ Assumption</p></li> <li><p>$\quad \quad \neg p(x)$ Reiteration 2</p></li> <li><p>$\quad \forall x \ p(x) \rightarrow \neg p(x)$ $\rightarrow$ Intro 6-7</p></li> <li><p>$\quad \neg \forall x \ p(x)$ $\neg$ Intro 5,8</p></li> <li><p>$\neg p(x) \rightarrow \neg \forall x \ p(x)$ $\rightarrow$ Intro 2-9</p></li> <li><p>$\forall x (\neg p(x) \rightarrow \neg \forall x \ p(x))$ $\forall$ Intro 10</p></li> <li><p>$\neg \forall x \ p(x)$ $\exists$ Elim 1, 11</p></li> </ol>
4,602,596
<p>In an exercise, my teacher asked us to prove that <span class="math-container">$\ell^1$</span> is a Banach space. I was able to do so, but there are two steps in my proof that I'm not quite sure are correct. This is what I came up with:</p> <hr /> <p>Let <span class="math-container">$x^n=(x^n_1, x^n_2, ...)$</span> be a Cauchy sequence of points in <span class="math-container">$\ell ^1$</span>. Then, for all <span class="math-container">$\varepsilon&gt;0$</span> there exists a natural number <span class="math-container">$N$</span> such that:</p> <p><span class="math-container">$$n,m\geq N \implies\sum_{i=1}^\infty|x^n_i-x^m_i| \leq \varepsilon$$</span> <span class="math-container">$$\implies |x^n_i-x^m_i| \leq \varepsilon$$</span></p> <p>So, for every <span class="math-container">$i$</span> the sequence <span class="math-container">$(x^n_i)_{n\in \mathbb N}$</span> is a Cauchy sequence on <span class="math-container">$\mathbb C$</span> and therefore it converges; let us call its limit <span class="math-container">$y_i \in \mathbb C$</span>.</p> <p>We want to prove that <span class="math-container">$x^n \to y = (y_1,y_2,...)$</span></p> <hr /> <p>This is the first step of the proof that I think isn't correct. I tried but failed to prove that <span class="math-container">$y$</span> is indeed a point in <span class="math-container">$\ell^1$</span>, that is: <span class="math-container">$$\sum_{i=1}^\infty |y_i|&lt;\infty$$</span></p> <hr /> <p>To show that <span class="math-container">$\lim x^n = y$</span>, let <span class="math-container">$\varepsilon&gt;0$</span>. 
Because <span class="math-container">$\lim_nx^n_i=y_i$</span>, for all <span class="math-container">$i$</span> we have that there is a natural number <span class="math-container">$N_i$</span> such that: <span class="math-container">$$n\geq N_i \implies |x_i^n - y_i|\leq \varepsilon\frac{1}{2^i}$$</span></p> <p>so <span class="math-container">$$\sum_{i=1}^k|x_i^n - y_i|\leq\varepsilon\sum_{i=1}^k \frac{1}{2^i}\leq \varepsilon$$</span> for <span class="math-container">$n \geq \max\{N_1,...,N_k\}.$</span></p> <p>Because this is a bounded increasing sum, we have that it converges and <span class="math-container">$$||x^n - y|| = \lim_{k\to\infty}\sum_{i=1}^k|x_i^n - y_i|\leq \varepsilon$$</span></p> <p>So <span class="math-container">$\lim x^n = y$</span></p> <hr /> <p>This last part is the one I'm most unsure of. While I had a finite sum, the condition <span class="math-container">$$\sum_{i=1}^k|x_i^n - y_i|\leq \varepsilon$$</span></p> <p>Was true for <span class="math-container">$n \geq \max\{N_1,...,N_k\}$</span> but I don't know if this is the case when <span class="math-container">$k\to\infty$</span>.</p> <p>I thought about changing the definition of <span class="math-container">$n$</span> to <span class="math-container">$n \geq \sup\{N_1,...,N_k\}$</span> to accommodate the case where <span class="math-container">$k\to\infty$</span>, but this solves nothing because either the set is unbounded, or it eventually stays the same at some point for large values of <span class="math-container">$k$</span>.</p> <p>How can I fix these two issues with my proof?</p>
Lucas Henrique
274,595
<p>You already figured out that the nonzero solutions of the homogeneous equation are unbounded. Suppose that there are two bounded solutions <span class="math-container">$y_1, y_2$</span>. Then <span class="math-container">$\tilde y:=y_1 - y_2$</span> is bounded and satisfies the associated homogeneous equation. But if <span class="math-container">$\tilde y\not \equiv 0$</span>, then it is unbounded by your remark (every solution of the homogeneous equation is <span class="math-container">$c e^{-t}$</span> for some <span class="math-container">$c$</span>). Thus <span class="math-container">$\tilde y \equiv 0 \implies y_1 \equiv y_2$</span>, proving uniqueness.</p> <p>To show that there is a bounded solution, use the usual, explicit method to find a solution and try to choose an initial value good enough such that <span class="math-container">$|f(t)| \le M$</span> actually gives you boundedness. You can multiply by an integrating factor <span class="math-container">$\mu = e^t$</span>, so <span class="math-container">$e^t y' + e^t y = e^t f \iff (e^t y)' = e^t f \iff y(t) = (c + \int_{-\infty}^te^\tau f(\tau) d\tau)e^{-t}$</span>. In particular, <span class="math-container">$|y(t)| \le |c|e^{-t} + e^{-t}\int_{-\infty}^t e^\tau |f(\tau)| d\tau \le |c|e^{-t} + Me^{-t}\int_{-\infty}^t e^\tau d\tau = |c|e^{-t} + M$</span>, so <span class="math-container">$c = 0$</span> in fact gives boundedness for both positive and negative <span class="math-container">$t$</span>; i.e.,</p> <p><span class="math-container">$$y(t) = e^{-t}\int_{-\infty}^t e^\tau f(\tau)d\tau$$</span> is our bounded solution.</p>
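A numerical sketch (the test function $f(t)=\cos t$, the truncation length of the improper integral, and the grid size are my own choices): following the integrating-factor computation above with $c=0$, the bounded solution is $y(t)=e^{-t}\int_{-\infty}^t e^\tau f(\tau)\,d\tau$, which for $f=\cos$ works out to $(\sin t + \cos t)/2$, indeed bounded by $M=1$:

```python
import math

def y(t, tail=40.0, n=8000):
    # y(t) = integral_{t-tail}^{t} e^{tau - t} cos(tau) d tau via the midpoint
    # rule; e^{-40} is ~4e-18, so truncating the tail is negligible.
    a = t - tail
    h = tail / n
    return sum(math.exp((a + (i + 0.5) * h) - t) * math.cos(a + (i + 0.5) * h)
               for i in range(n)) * h

# compare against the closed form (sin t + cos t)/2 at a few points
checks = [(t, y(t), (math.sin(t) + math.cos(t)) / 2) for t in (-3.0, 0.0, 1.5, 7.0)]
for t, num, closed in checks:
    print(t, num, closed)  # agree to ~1e-5, and |y| <= sqrt(2)/2 <= M = 1
```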
1,561,563
<p>Two circles $\Gamma_1,\Gamma_2$ have centers $O_1,O_2$. Let $\Gamma_1\cap\Gamma_2=A,B$, with $A\neq B$. An arbitrary line through $B$ intersects $\Gamma_1$ at $C$ and $\Gamma_2$ at $D$. The tangents to $\Gamma_1$ at $C$ and to $\Gamma_2$ at $D$ intersect at $M$. Let $N=AM\cap CD$. Let $l$ be a line through $N$ parallel to $CM$, and let $l\cap AC=K$. Prove that $BK$ is tangent to $\Gamma_2$.</p> <hr> <p>$\qquad\quad$ <img src="https://i.stack.imgur.com/mTLgz.png" alt=""></p> <hr> <p>Here is some progress I have:</p> <p>We are looking to prove $\angle O_2BP=90^{\circ}$, and since $\angle O_2DP=90^{\circ}$, if we could prove $BP=PD$, we would be done by congruent triangles. So we are looking to prove $\dfrac{\sin \angle BDP}{\sin \angle DBP}=1$. Let $AM\cap BK=l$. We have $\angle BDP=\angle QBN$. By the law of sines on $\triangle DNM,\triangle BNQ$, we have $\sin \angle BDP=\sin \angle NDM=\dfrac{NM}{DM}\sin \angle DNM$ and $\sin\angle QBN=\dfrac {NQ}{BQ}\sin\angle BNQ$. Dividing, the sines cancel (since they are supplementary), and we are left with $\dfrac{NM\times BQ}{DM\times NQ}$, so it remains to prove $\dfrac{NM}{DM}=\dfrac{NQ}{BQ}$.</p> <p>I'm not sure what to do from here. We would be done if we could prove $\triangle NBQ\sim\triangle NDM$, but this would imply $\angle QNB=\angle MND=90^{\circ}$, but from drawing multiple diagrams, it looks like this is not always the case. </p> <p>As always, any ideas are appreciated!</p>
Kay K.
292,333
<p>Another simple solution:</p> <p>\begin{align} &amp;x\mapsto\sin u\\ &amp;I=\int_0^{\pi/2} \frac{\cos u}{\sin u + \cos u}du\\ &amp;u\mapsto \frac{\pi}2-v\\ &amp;I=\int_0^{\pi/2} \frac{\sin v}{\sin v + \cos v}dv\\ &amp;\therefore 2I=\int_0^{\pi/2} \frac{\sin u + \cos u}{\sin u + \cos u}du=\frac{\pi}{2}\\ &amp;\therefore I=\frac{\pi}{4} \end{align}</p>
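A quick numerical confirmation of the value of $I$ (my addition; the grid size is arbitrary):

```python
import math

# Midpoint-rule check that I = integral_0^{pi/2} cos u / (sin u + cos u) du = pi/4.
# The denominator is bounded below by 1 on [0, pi/2], so the integrand is smooth.
n = 100000
h = (math.pi / 2) / n
I = sum(math.cos((i + 0.5) * h) / (math.sin((i + 0.5) * h) + math.cos((i + 0.5) * h))
        for i in range(n)) * h
print(I, math.pi / 4)  # both approximately 0.7853981...
```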
139,385
<p>Can anyone help me prove that if $n \in \mathbb{N}$ and $p$ is a prime such that $p\mid(n!)^2+1$, then $(p-1)/2$ is even?</p> <p>I'm attempting to use Fermat's little theorem; so far I have only shown that $p$ is odd.</p> <p>I want to show that $p \equiv 1 \pmod 4$.</p>
Ragib Zaman
14,657
<p><strong>Theorem</strong>: Let $p$ be an odd prime. Then $-1$ is a square modulo $p$ if and only if $ p \equiv 1 \pmod 4.$</p> <p>Proof: For any $x\neq 0$ in $\mathbb{Z}_p$ call the following the bundle generated by $x$: $$ B(x) = \{ x, x^{-1}, -x, -x^{-1} \}. $$</p> <p>Check $B(x) = B(x^{-1}) = B(-x) = B(-x^{-1}) .$ The distinct bundles partition $\mathbb{Z}_p\setminus\{0\} .$ Since $p\neq 2$, $x\neq -x$ for all non-zero $x\in \mathbb{Z}_p$, so every bundle has $4$ elements unless $x= x^{-1} $ or $x=-x^{-1}.$ If $x=x^{-1}$ then $x=\pm 1$ and $B(x) =\{1,-1\}$ has $2$ elements. $x=-x^{-1}$ if and only if $x^2=-1$ and if so, $B(x) = B(-x) = \{x,-x\} $ has two elements. </p> <p>If $-1$ is a square then $\mathbb{Z}_p\setminus\{0\} $ is partitioned into $2$ sets of size $2$ and the rest of size $4.$ In that case, $p-1 \equiv 0 \pmod 4.$ If $-1$ is not a square then $\mathbb{Z}_p\setminus\{0\} $ is partitioned into a set with size $2$ and the rest with size $4$, i.e. $p-1\equiv 2 \pmod 4.$ </p>
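The theorem is easy to confirm by brute force for small primes (my addition, not part of the proof):

```python
# For every odd prime p < 200, check that -1 is a square mod p
# exactly when p is congruent to 1 mod 4.
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

results = [(p, any(x * x % p == p - 1 for x in range(1, p)), p % 4 == 1)
           for p in range(3, 200) if is_prime(p)]

print(all(has_root == cong for _, has_root, cong in results))  # True
```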
1,499,949
<p>Prove that for all events $A,B$</p> <p>$P(A\cap B)+P(A\cap \bar B)=P(A)$</p> <p><strong>My attempt:</strong></p> <p>Formula: $\color{blue}{P(A\cap B)=P(A)+P(B)-P(A\cup B)}$</p> <p>$=\overbrace {P(A)+P(B)-P(A\cup B)}^{=P(A\cap B)}+\overbrace {P(A)+P(\bar B)-P(A\cup \bar B)}^{=P(A\cap \bar B)}$</p> <p>$=2P(A)+\underbrace{P(B)+P(\bar B)}_{=1}-P(A\cup B)-P(A\cup \bar B)$</p> <p>$=1+2P(A)-P(A\cup B)-P(A\cup \bar B)$</p>
Michael Medvinsky
269,041
<p>@Angelo Mark In your comment to @Elekko you say</p> <p>$$x^2=9 \Rightarrow x=9^{\frac{1}{2}}= \sqrt{9}=3$$ however, $x^2=9$ implies neither $x=9^{\frac{1}{2}}$ nor $x= \sqrt{9}$, although they usually mean exactly the same thing.</p> <p>The main question is not whether $x^{1/n}=\sqrt[n]x$ or not but what is an inverse function of $x^n$. The minimal requirement for a function $f(x)$ to be invertible in some given domain is to be injective (or one-to-one, i.e. $f(x)=f(y)$ if and only if $x=y$) in that domain. </p> <p>A function $f(x) =x^n$ is injective for all real numbers, i.e. $\forall x\in\mathbb{R}$, when $n$ is odd, i.e. $n=2m-1,m\in \mathbb{N}$, and therefore $f(x)=x^{2m-1}$ has an inverse function $f^{-1}(x) =x^{1\over{2m-1}}=\sqrt[2m-1]{x},\ \forall x\in\mathbb{R}$. </p> <p>However, for an even $n$, i.e. $n=2m,m\in \mathbb{N}$, the function $f(x) =x^n$ isn't injective in the domain of all real numbers, since $f(x)=f(-x)$, and therefore there is no inverse function. Fortunately, if we take only a half of the domain, either all positive or all negative real numbers, the function $f(x) =x^{2m}$ will be injective in such a half domain and therefore has an inverse. In the negative half domain, the inverse will be $f^{-1}(x)=-x^{1\over 2m}=-\sqrt[2m]x$ and in the positive half domain $f^{-1}(x)=x^{1\over 2m}=\sqrt[2m]x$. </p> <p>Getting back to your example, when you write $x^2=9 \Rightarrow $ either $x=9^{\frac{1}{2}}$ or $x= \sqrt{9}$, you automatically assume that $x^2$ has an inverse function in the domain of all real numbers, which is wrong. The problem is easily seen if you write it as follows: $x^2=(\pm x)^2=9\Rightarrow \pm x=\sqrt 9=9^{1/2}$, i.e. $x=\pm\sqrt 9=\pm 9^{1/2}$. </p> <hr> <p>@kilimanjaro Both $\sqrt{x}$ and $x^{1/2}$ are functions. A function associates one, and only one, output to any particular input.</p> <p>Multi-valued functions are not functions in the regular sense and only add confusion for this sort of discussion. 
Also, neither root notation is commonly considered multi-valued. You can define one or the other to be multi-valued for a specific, very abstract discussion where you need to distinguish between them, but outside of that abstract discussion (i.e. when you speak to "normal people") it is not so.</p>
2,249,020
<p><a href="https://i.stack.imgur.com/L7PXf.jpg" rel="nofollow noreferrer">The Math Problem</a></p> <p>I have issues with finding the Local Max and Min, and Abs Max and Min, after I find the Critical Point. How do I do this problem in its entirety? </p>
angryavian
43,949
<p>If you have two vectors $v=(v_1,v_2)$ and $w=(w_1,w_2)$, then their dot product is $$v \cdot w = v_1 w_1 + v_2 w_2.$$ Additionally, we also have $$v \cdot w = \|v\| \|w\| \cos \theta$$ where $\|v\|=\sqrt{v_1^2+v_2^2}$, $\|w\|=\sqrt{w_1^2+w_2^2}$, and $\theta$ is the angle between $v$ and $w$. <a href="https://proofwiki.org/wiki/Cosine_Formula_for_Dot_Product" rel="nofollow noreferrer">Proof here</a>.</p> <p>Now, consider $v=(\cos \alpha, \sin \alpha)$ and $w=(\cos \beta, \sin \beta)$. Do you now see why the formula for $\cos(\alpha-\beta)$ then follows? Hint: we have $\|v\|=\|w\|=1$ and $\theta=\alpha-\beta$.</p> <hr> <p>For your second question,note $\cos(\alpha+\beta)=\cos(\alpha-(-\beta))$, then apply your formula for the cosine of a difference of angles, and then use the fact that $\sin$ is an odd function, and $\cos$ is an even function. This has been done for you in the comments.</p>
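A small numerical check of the argument (my addition; the angles are arbitrary): with $v=(\cos\alpha,\sin\alpha)$ and $w=(\cos\beta,\sin\beta)$, both $\|v\|=\|w\|=1$, so the dot product should equal $\cos(\alpha-\beta)$ directly:

```python
import math

# v . w = cos(a)cos(b) + sin(a)sin(b) should equal cos(a - b)
pairs = [(0.3, 1.2), (2.0, -0.7), (5.0, 3.3)]
checks = [(math.cos(a) * math.cos(b) + math.sin(a) * math.sin(b), math.cos(a - b))
          for a, b in pairs]
for dot, cab in checks:
    print(dot, cab)  # identical up to rounding
```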
2,655,018
<p>I have a quick question regarding a little issue.</p> <p>So I'm given a problem that says "$\tan \left(\frac{9\pi}{8}\right)$" and I'm supposed to find the exact value using half angle identities. I know what these identities are for $\sin$, $\cos$, and $\tan$. So, I use the tangent half-angle identity and plug $\theta = \frac{9\pi}{8}$ in for $\frac{\theta}{2}$. I got $\frac{9\pi}{4}$ and plugged values into the formula based on this answer. However, I checked my work with slader.com and it said I was wrong. It said I should take the value I found, $\frac{9\pi}{4}$, and plug it back into $\frac{\theta}{2}$. Wouldn't that be re-plugging in the value for no reason? Very confused.</p>
Rohan Shinde
463,895
<p>Let $\tan \frac {9\pi}{8}= \tan \frac {\theta }{2}=a$, so that $\theta = \frac{9\pi}{4}$.</p> <p>By the half-angle (equivalently, double-angle) formula $$\tan \theta=\frac {2\tan \frac {\theta }{2}}{1-\tan ^2\frac {\theta }{2}}$$ Since $\tan \theta = \tan \frac{9\pi}{4} = \tan \frac{\pi}{4} = 1$, we get $$1=\frac {2a}{1-a^2}$$ Hence we get $a^2+2a-1=0$.</p> <p>Solve the quadratic to get the answer.</p> <p>Note: By using the quadratic formula we get $\tan \frac {9\pi}{8}= \sqrt 2 -1$.</p> <p>The other solution, i.e. $-\sqrt 2-1$, gets rejected because $\frac {9\pi}{8}$ lies in the third quadrant, where $\tan $ must be positive.</p>
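A quick numerical check (my addition) that the positive root of the quadratic really is $\tan\frac{9\pi}{8}$:

```python
import math

# Roots of a^2 + 2a - 1 = 0 are -1 +/- sqrt(2); take the positive one,
# since 9*pi/8 is in the third quadrant where tan is positive.
roots = [-1 + math.sqrt(2), -1 - math.sqrt(2)]
a = max(roots)
print(a, math.tan(9 * math.pi / 8))  # both approximately 0.4142135...
```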
1,473,862
<p>Let $X$ be a random variable with mean $\mu$ and variance $\sigma^2$. Let $a &lt; \mu$ and consider the probability $$ F_X(a) = \mathbb{P}(X \leq a) = \mathbb{P}(X - \mu \leq a - \mu). $$ If $a &gt; \mu$, Cantelli's inequality (see here <a href="https://en.wikipedia.org/wiki/Cantelli%27s_inequality" rel="nofollow">https://en.wikipedia.org/wiki/Cantelli%27s_inequality</a>) gives a good lower bound for $F_X(a)$. However is there a good lower bound we can use (in terms of moments) in the case when $a &lt; \mu$? Thanks.</p> <p>If it is of help note we can also write, for any $\lambda &gt; 0$, $$ \mathbb{P}(X \leq \mu - \lambda) = \mathbb{P}(X -\mu &lt; \lambda) - \mathbb{P}(-\lambda &lt; X - \mu &lt; \lambda). $$ To get a lower bound for $\mathbb{P}(X \leq \mu - \lambda)$ we can use Cantelli's lower bound on the first term. But we still need an upper bound for the term $\mathbb{P}(-\lambda &lt; X - \mu &lt; \lambda)$. </p>
Clement C.
75,808
<p>If you do not have any further assumption on the values $X$ can take (e.g., is it lower bounded a.s.?), then you cannot get any meaningful lower bound. For any $\varepsilon\in[0,1]$ (and wlog the case $\mu=0$), consider the random variable defined by $$ X = \begin{cases} -x\frac{1-\varepsilon}{\varepsilon} &amp; \text{ w.p. } \varepsilon \\ x &amp; \text{ w.p. } 1-\varepsilon \end{cases} $$ where $x = \sigma\sqrt{\frac{\varepsilon}{1-\varepsilon}}$.</p> <p>You have $\mathbb{E} X = 0 = \mu$, and $\operatorname{Var} X = \sigma^2$; yet $\mathbb{P}\{ X &lt; \mu\} = \varepsilon$ can be arbitrarily small.</p> <hr> <p>Assuming $X \geq 0$ a.s. (as suggested in a comment below). Even then, one cannot get a non-trivial bound. Namely, </p> <blockquote> <p>Fix any $\mu&gt; 0$, $\sigma^2\geq 0$. For any $a\in[0,\mu)$, there exists a random variable $X\in L^2$ with mean $\mu$ and variance $\sigma^2$ such that $X\geq 0$ a.s., satisfying $$\mathbb{P}\{ X \leq a\} = 0.$$</p> </blockquote> <p>Note that up to renormalization by $\mu$ (of the standard deviation and $a$), we can wlog assume $\mu = 1$. For fixed $\sigma,a$ as above, define $\alpha \stackrel{\rm def}{=}\frac{a+1}{2}\in(a,1)$, and let $\beta$ be the solution of the equation $$ \sigma^2 + 1 = \alpha + \beta - \alpha\beta $$ i.e. $\beta = 1+\frac{\sigma^2}{1-\alpha} &gt; 1$.</p> <p>Let $X$ be the random variable taking values in $\{\alpha,\beta\}$, defined as $$ X = \begin{cases} \alpha &amp;\text{ w.p. } \frac{\beta-1}{\beta-\alpha} \\ \beta &amp;\text{ w.p. } \frac{1-\alpha}{\beta-\alpha} \\ \end{cases} $$ so that indeed $$ \begin{align} \mathbb{E} X &amp;= 1 \\ \operatorname{Var} X &amp;= \mathbb{E}[X^2]- (\mathbb{E} X)^2 \\ &amp;= \frac{1}{\beta-\alpha}\left(\alpha^2(\beta-1) + \beta^2(1-\alpha)\right)- 1 = \alpha+\beta-\alpha\beta -1 \\ &amp;= \sigma^2. \end{align} $$ $X$ satisfies all the assumptions, and yet $$ \mathbb{P}\{X \leq a\} = \mathbb{P}\{X &lt; \alpha\} = 0. 
$$</p> <p>At that point, it looks to me that one would need the assumption that $X$ be <em>bounded</em> to get some interesting lower bound.</p>
1,473,862
<p>Let $X$ be a random variable with mean $\mu$ and variance $\sigma^2$. Let $a &lt; \mu$ and consider the probability $$ F_X(a) = \mathbb{P}(X \leq a) = \mathbb{P}(X - \mu \leq a - \mu). $$ If $a &gt; \mu$, Cantelli's inequality (see here <a href="https://en.wikipedia.org/wiki/Cantelli%27s_inequality" rel="nofollow">https://en.wikipedia.org/wiki/Cantelli%27s_inequality</a>) gives a good lower bound for $F_X(a)$. However is there a good lower bound we can use (in terms of moments) in the case when $a &lt; \mu$? Thanks.</p> <p>If it is of help note we can also write, for any $\lambda &gt; 0$, $$ \mathbb{P}(X \leq \mu - \lambda) = \mathbb{P}(X -\mu &lt; \lambda) - \mathbb{P}(-\lambda &lt; X - \mu &lt; \lambda). $$ To get a lower bound for $\mathbb{P}(X \leq \mu - \lambda)$ we can use Cantelli's lower bound on the first term. But we still need an upper bound for the term $\mathbb{P}(-\lambda &lt; X - \mu &lt; \lambda)$. </p>
soakley
84,631
<p>There is a one-sided Chebyshev's inequality. For $k \gt 0,$ $$P \ [ \ X \leq \mu - k \sigma \ ] \leq { {1} \over {k^2 + 1} } $$</p>
3,371,638
<p>Measure space <span class="math-container">$(X, \mathcal{A}, ν)$</span> has <span class="math-container">$ν(X) = 1$</span>. Let <span class="math-container">$A_n \in \mathcal{A} $</span> and denote </p> <p><span class="math-container">$B := \{x : x ∈ A_n$</span> for infinitely many <span class="math-container">$n$</span>}.</p> <p>I want to prove that if <span class="math-container">$ν(A_n) \geq \epsilon &gt; 0$</span> for all n, then <span class="math-container">$ν(B) ≥ \epsilon$</span>.</p> <p><span class="math-container">$\textbf{My attempt}:$</span></p> <p><span class="math-container">$$B = \text{limsup} A_n = \bigcap_{n=1}^{\infty}\bigcup_{k=n}^{\infty}A_k$$</span> Taking complements and then measures of both sides, with <span class="math-container">$B_n := \big(\bigcup_{k=n}^{\infty}A_k\big)^c$</span>: <span class="math-container">$$\nu(X)-\nu(B) = \nu\bigg[\bigcup_{n=1}^{\infty} \bigg(\bigcup_{k=n}^{\infty}A_k\bigg)^c\bigg] = \nu\bigg(\bigcup_{n=1}^{\infty} B_n\bigg)\leq \sum_{n=1}^\infty\nu(B_n)$$</span> <span class="math-container">$$\nu(B) \geq 1 - \sum_{n=1}^\infty\nu(B_n) $$</span> </p> <p><span class="math-container">$B_n$</span> is an increasing sequence (i.e. <span class="math-container">$B_n \subset B_{n+1}$</span>), right? </p> <p><span class="math-container">$\sum_{n=1}^\infty\nu(B_n) = \lim_{n \to \infty}\nu(B_n)$</span> so there exists N such that <span class="math-container">$\nu(B_n) \leq \frac{1-\epsilon}{2^n}$</span> thus;</p> <p><span class="math-container">$$\nu(B) \geq 1 - \sum_{n=1}^\infty\nu(B_n) \leq 1-(1-\epsilon)=\epsilon $$</span> </p>
copper.hat
27,978
<p>Presumably you are using the Euclidean norm. (The unit ball with the <span class="math-container">$l_1, l_\infty$</span> norms <strong>are</strong> polyhedral.)</p> <p>The Euclidean norm is strictly convex, so if <span class="math-container">$a \neq b$</span> then <span class="math-container">$\|ta+(1-t)b\|^2 &lt; t \|a\|^2+(1-t)\|b\|^2$</span> for <span class="math-container">$t \in (0,1)$</span>.</p> <p>In particular, if <span class="math-container">$\|a\| \le 1, \|b\| \le 1$</span> and <span class="math-container">$a \neq b$</span>, then <span class="math-container">$\|ta+(1-t)b\|^2 &lt; 1$</span> for <span class="math-container">$t\in (0,1)$</span>.</p> <p>Suppose <span class="math-container">$P$</span> is a polyhedron and <span class="math-container">$P \subset \bar{B}$</span>, the closed unit ball. Then <span class="math-container">$\bar{B} \setminus P$</span> is non empty and so we cannot express <span class="math-container">$\bar{B}$</span> as a polyhedron. To see this, consider the following:</p> <p>Any compact polyhedron can be written as the convex hull of a finite number of points, so we can write <span class="math-container">$P = \operatorname{co} \{ x_1, \cdots , x_m\}$</span>. We can assume that none of the points can be written as a convex combination of the remaining points. 
We must have <span class="math-container">$\|x_k\| \le 1$</span>.</p> <p>Any point <span class="math-container">$x \in P \setminus \{ x_1, \cdots , x_m\}$</span> can be written as <span class="math-container">$x = \sum_k t_k x_k$</span>, with <span class="math-container">$t_k &lt; 1$</span> and <span class="math-container">$\sum_k t_k = 1$</span>.</p> <p>Then <span class="math-container">$x = t_1 x_1 + (1-t_1) \sum_{k \neq 1} {t_k \over 1-t_1} x_k$</span> and since <span class="math-container">$\|x_1\| \le 1$</span> and <span class="math-container">$\| \sum_{k \neq 1} {t_k \over 1-t_1} x_k \| \le 1$</span>, we see that <span class="math-container">$\|x\| &lt; 1$</span>.</p> <p>In particular, the <strong>only</strong> points in <span class="math-container">$P$</span> that can have norm one are among <span class="math-container">$x_1,\dots,x_m$</span>. Hence <span class="math-container">$P$</span> cannot equal <span class="math-container">$\bar{B}$</span>.</p>
2,203,988
<p>I'm reading the book <i>Heat Transfer</i> by J.P. Holman. In the chapter on unsteady-state conduction, page 140, the author remarks:</p> <blockquote> <p>The final series solution is therefore: $${\theta(x,t) \over \theta_i} = {4\over \pi} \sum^{\infty}_{\substack{n=1 \\ n \text{ odd}}} {1\over n} e^{-\left({n\pi/L}\right)^2\alpha \,t}\sin{n\pi x \over L}$$ We note, of course, that at $t=0$ the series on the right side of the equation must converge to unity for all values of $x$.</p> </blockquote> <p>In this equation $0 &lt; x &lt; L$, and $\alpha$ is a finite constant. My question is, how can I prove that</p> <p>$${4 \over \pi}\sum^{\infty}_{\substack{n=1 \\ n \text{ odd}}} {1\over n} \sin{n\pi x \over L} = 1$$</p> <p>Additional information: The solution presented above solves the PDE: $${\partial^2 \theta(x,t) \over \partial x^2} = {1\over \alpha}{\partial \theta(x,t) \over \partial t} $$ with initial and boundary conditions: \begin{align} \theta(x,0) &amp;= \theta_i \qquad &amp;0\leq x \leq L\\ \theta(0,t) &amp;=0 \qquad &amp; t &gt; 0 \\ \theta(L,t) &amp;=0 \qquad &amp; t &gt; 0 \end{align}</p>
JMP
210,189
<p>When order doesn't matter, each ring can be placed on one of $3$ fingers. This results in a unique string; for example $12323113$ means finger $1$ has rings $1,6,7$, finger $2$ has rings $2,4$ and finger $3$ has rings $3,5,8$.</p> <p>This clearly groups each finger's rings as distinct, and gives $3^8$ strings.</p> <p>($8^3$ would be three rings onto $8$ fingers.)</p> <p>If order does matter, using $3^8\cdot8!$ doesn't work; in the above case, for example, there are only $3!2!3!=72$ permutations.</p> <p>Instead, use stars and bars to give the number of available patterns as $\dbinom{10}{2}=45$.</p> <p>To feed the fingers their rings, first arrange the rings into one of the $8!$ permutations, and feed from finger $1$ through to finger $3$. This ensures uniqueness, and gives the result as:</p> <p>$$\binom{10}{2}\cdot8!=45\cdot40320=1814400$$</p>
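Both counts are easy to cross-check by brute force; a Python sketch:

```python
from math import comb, factorial
from itertools import product

# order doesn't matter: each length-8 string over {1,2,3} is one assignment
unordered = sum(1 for _ in product(range(3), repeat=8))
print(unordered)              # 6561 = 3^8

# stars and bars: weak compositions of 8 into 3 parts (third part is forced)
patterns = sum(1 for a in range(9) for b in range(9 - a))
print(patterns, comb(10, 2))  # both 45

# ordered case: patterns times the 8! ring orders
print(patterns * factorial(8))  # 1814400
```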
3,125,093
<p>Let us recall the conditions for applying L'Hôpital's Rule:</p> <p>Suppose:</p> <p><span class="math-container">$f(x)$</span> and <span class="math-container">$g(x)$</span> are real and differentiable for all <span class="math-container">$x\in (a,b)$</span> </p> <p>1-) <span class="math-container">$ \lim_{x\to c}{f(x)} = \lim_{x\to c}g(x) = 0$</span></p> <p>2-) <span class="math-container">$g'(x)\neq 0$</span> on some deleted neighborhood of <span class="math-container">$c$</span>.</p> <p>3-) <span class="math-container">$\lim_{x\to c}{\frac{f'(x)}{g'(x)}} = L$</span>. Then</p> <p>4-) <span class="math-container">$\lim_{x\to c}{\frac{f(x)}{g(x)}}=L$</span>, so we can write:</p> <p><span class="math-container">$$ \lim_{x\to c}{\frac{f(x)}{g(x)}}=\lim_{x\to c}{\frac{f'(x)}{g'(x)}} = L $$</span></p> <p><strong>Note 1:</strong> I have written all of the above just as a reminder of L'Hôpital's Rule; it is not directly related to my question. If I wrote something incorrect, feel free to correct it inside the question. </p> <p><span class="math-container">$$-----------$$</span> Anyway, let us suppose that the following example satisfies the conditions for applying L'Hôpital's Rule stated above, where</p> <p><span class="math-container">$\lim_{u\to \infty}\int_0^u F(t,a)dt=0$</span> and <span class="math-container">$\lim_{u\to \infty}\int_0^u G(t,a)dt=0$</span>:</p> <p><span class="math-container">$$\lim_{u \to \infty} \frac{ \int_0^u F(t,a) \, dt}{\int_0^u G(t,a) \, dt} $$</span> </p> <p><strong>The question:</strong> before applying L'Hôpital's Rule with respect to the parameter <span class="math-container">$u$</span> to the expression above, do we also need to prove the following?</p> <p>There exists a number <span class="math-container">$M&gt;0$</span> such that the denominator <span class="math-container">$\int_0^u G(t,a)dt$</span> does <strong>not</strong> equal zero for any <span class="math-container">$u &gt; M$</span>.</p> <p><strong>Note 2:</strong> Please be aware that I am <strong>not asking</strong> anything about <span class="math-container">$ \frac{d}{du} {\int_0^u G(t,a) \, dt}$</span> </p>
J.G.
56,861
<p>Repeatedly using the existence of inverses in groups gives <span class="math-container">$$ijk=k^2\implies ij=k,\,ijk=i^2\implies jk=i,\,kijk=k^3\implies kij=k^2=j^2\implies ki=j.$$</span>Define <span class="math-container">$m:=k^{-1}ji=i^2$</span>; we would normally call this <span class="math-container">$-1$</span>. Since <span class="math-container">$m=i^2=j^2=k^2$</span>, <span class="math-container">$m$</span> commutes with everything so <span class="math-container">$$ji=mk,\,ik=j^{-1}mk^2=mj,\,kj=k^2mi^{-1}=mi.$$</span>Finally, <span class="math-container">$m^2=k^{-1}mkk^{-1}ji=k^{-1}ij$</span> is the identity.</p>
209,842
<p>I was looking for a general way of formulating solutions for work and time problems.</p> <p>For example,</p> <p>30 soldiers can dig 10 trenches of size 8*3*3 ft in half a day working 8 hours per day. How many hours will 20 soldiers take to dig 18 trenches of size 6*2*2 ft working 10 hours per day?</p> <p>Now I know that Work = Efficiency * Time, but I sometimes get confused when selecting which factor in the given problem contributes directly to the work, i.e. increases it, and which factors result in the work being done faster.</p> <p>I've seen the method in which one uses a table to write all the parameters given in the problem, e.g. making columns titled work, number of soldiers, volume of trench, number of days required, number of hours per day, efficiency, wages etc., and uses direct and inverse proportionality to write the equation for solving a given unknown. However I face the same problem, i.e. finding out which factors are directly related and which are in an inverse relation.</p> <p>Is there a simple way of solving these general problems?</p>
lab bhattacharjee
33,337
<p>$30$ soldiers can dig $10$ trenches of size $8\cdot 3\cdot 3$ cubic feet in $4$ hours.</p> <p>$1$ soldier can dig $10$ trenches of size $8\cdot 3\cdot 3$ cubic feet in $4\cdot 30$ hours.</p> <p>$1$ soldier can dig $1$ trench of size $8\cdot 3\cdot 3$ cubic feet in $\frac{4\cdot 30}{10}$ hours.</p> <p>$1$ soldier can dig $1$ trench of size $1\cdot 1\cdot 1$ cubic feet in $\frac{4\cdot 30}{10\cdot 8\cdot 3\cdot 3}$ hours.</p> <p>$20$ soldiers can dig $18$ trenches of size $6\cdot 2\cdot 2$ cubic feet in $$\frac{4\cdot 30\cdot 18\cdot 6\cdot 2\cdot 2}{10\cdot 8\cdot 3\cdot 3\cdot 20}=\frac{36}{10}=3.6$$ hours, which is clearly $&lt;10$ hours.</p> <p>The number of trenches and the size of the trenches are directly proportional to the time, but the number of soldiers is inversely proportional to the time: the more soldiers, the less time needed.</p>
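The whole chain of proportions collapses into a single computation, e.g. in Python:

```python
# known scenario: half a day at 8 hours/day
base_hours = 0.5 * 8   # 4 hours

# time scales directly with the number of trenches and their volume,
# and inversely with the number of soldiers
hours = base_hours * (18 / 10) * ((6 * 2 * 2) / (8 * 3 * 3)) / (20 / 30)
print(hours)   # ≈ 3.6
```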
293,026
<p>The question is to show that $A\sin(x + B)$ can be written as $a\sin x + b\cos x$ for suitable a and b.</p> <p>Also, could somebody please show me how $f(x)=A\sin(x+B)$ satisfies $f + f ''=0$?</p>
Michael Hardy
11,667
<p>If $$ f(x) = A\sin(x+B) $$ then $$ f'(x) = A\cos(x+B)\cdot\frac{d}{dx}(x+B) = A\cos(x+B)\cdot1, $$ and $$ f''(x) = -A\sin(x+B)\cdot\frac{d}{dx}(x+B) = -A\sin(x+B). $$ So $$ f''(x)+f(x) = -A\sin(x+B)+A\sin(x+B) = 0. $$</p> <p>For the initial question, the standard trigonometric identity $$ \sin(x+B) = \sin x\cos B+ \cos x\sin B $$ is most of what you need to know. Then you have $$ A\sin(x+B) = A\Big( \sin x\cos B+ \cos x\sin B \Big) $$ $$ = \Big(A\cos B\Big) \sin x + \Big( A\sin B\Big) \cos x $$ $$ =a\sin x+b\cos x. $$</p>
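A numerical spot-check of both facts, with illustrative values $A = 2$, $B = 0.7$ of my own choosing:

```python
import math

# illustrative values (the problem leaves A, B general)
A, B = 2.0, 0.7
a, b = A * math.cos(B), A * math.sin(B)

def f(x):
    return A * math.sin(x + B)

for x in [0.0, 0.3, 1.1, 2.5]:
    # angle-addition identity: A sin(x+B) = a sin x + b cos x
    assert abs(f(x) - (a * math.sin(x) + b * math.cos(x))) < 1e-12
    # f'' by central differences; f'' + f should vanish
    h = 1e-4
    f2 = (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2
    assert abs(f2 + f(x)) < 1e-5
print("checks passed")
```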
1,575,397
<p>I need help calculating $$\lim_{n\to\infty}\left(\frac{1}{n^{2}}+\frac{2}{n^{2}}+...+\frac{n}{n^{2}}\right) = ?$$</p>
Adhvaitha
228,265
<p>We have $$1+2+\cdots+n = \dfrac{n(n+1)}2$$ Hence, we need $$\lim_{n \to \infty}\left(\dfrac1{n^2} + \dfrac2{n^2} + \cdots + \dfrac{n}{n^2}\right) = \lim_{n \to \infty}\left(\dfrac{n(n+1)}{2n^2}\right) = \dfrac12$$</p>
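Numerically, the partial sums approach $1/2$ quickly; a quick Python check:

```python
def s(n):
    # partial expression 1/n^2 + 2/n^2 + ... + n/n^2 = n(n+1)/(2n^2)
    return sum(k / n**2 for k in range(1, n + 1))

for n in (10, 1000, 10**6):
    print(n, s(n), (n + 1) / (2 * n))   # the two agree; both tend to 1/2
```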
1,991,238
<p>How can I integrate this? $\int_{0}^{1}\frac{\ln(x)}{x+1} dx $</p> <p>I've seen <a href="https://math.stackexchange.com/questions/108248/prove-int-01-frac-ln-x-x-1-d-x-sum-1-infty-frac1n2">this</a> but I failed to apply it to my problem.</p> <p>Could you give me a hint?</p> <p>EDIT : From the hint of @H.H.Rugh, I've got $\sum_{n=1}^{\infty} \frac{(-1)^{n}}{n^2}$, since $\int_{0}^{1}x^{n}\ln(x)dx = (-1)\frac{1}{(n+1)^2}$. How can I proceed with this calculation from here?</p>
grand_chat
215,011
<p>Way late to the party, but here's a general result, and an elementary derivation:</p> <hr /> <p><strong>Claim:</strong> Let <span class="math-container">$(a_n)$</span> and <span class="math-container">$(b_n)$</span> be sequences of positive integers with <span class="math-container">$a_n\to\infty$</span> and <span class="math-container">$\lim_{n\to\infty}\frac{b_n}{a_n}=c$</span>. Then <span class="math-container">$$ \lim_{n\to\infty}\sum_{k=a_n}^{b_n}\frac1k=\log c.$$</span></p> <p><strong>Proof:</strong> Start with <a href="https://math.stackexchange.com/a/3253689/215011">the inequalities</a> <span class="math-container">$$\frac{x-1}x\le\log x\le x-1.$$</span> Substitute <span class="math-container">$x=(k+1)/k$</span> into the right inequality and <span class="math-container">$x=k/(k-1)$</span> into the left, obtaining <span class="math-container">$$\log(k+1)-\log k\le\frac1k\le\log k-\log(k-1).$$</span> Sum from <span class="math-container">$k=a_n$</span> to <span class="math-container">$k=b_n$</span>, and use telescoping to find <span class="math-container">$$ \log\frac{b_n+1}{a_n}\le\sum_{a_n}^{b_n}\frac1k\le\log\frac{b_n}{a_n-1}.$$</span> Finally, take the limit as <span class="math-container">$n\to\infty$</span>.</p> <hr /> <p>Now apply this result with <span class="math-container">$a_n=F_n$</span> and <span class="math-container">$b_n=F_{n+1}$</span> and use the fact that <a href="https://math.stackexchange.com/q/739229/215011"><span class="math-container">$F_{n+1}/F_n$</span> tends to the golden ratio <span class="math-container">$\phi$</span></a>.</p>
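A quick numerical illustration of the Fibonacci application (the particular indices $F_{30} = 832040$, $F_{31} = 1346269$ are my own choice, with $F_1 = F_2 = 1$):

```python
import math

# consecutive Fibonacci numbers F_30 and F_31
a, b = 832040, 1346269

s = sum(1.0 / k for k in range(a, b + 1))
phi = (1 + math.sqrt(5)) / 2

print(s, math.log(phi))   # both approximately 0.48121
```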
4,292,618
<p>I have the following function <span class="math-container">$$\frac{1}{1+2x}-\frac{1-x}{1+x} $$</span> How can I find an equivalent way to compute it when <span class="math-container">$x$</span> is much smaller than 1? I assume the problem here is with <span class="math-container">$1+x$</span>, since it would effectively round to 1. I don't know if multiplying by <span class="math-container">$(1-x)$</span> would be helpful, as it would give <span class="math-container">$$ \frac{1-x}{1+x-2x^2}-\frac{(1-x)^2}{1-x^2} $$</span> so there's still a term <span class="math-container">$1+x$</span>.</p>
njuffa
114,200
<p>The formula as written has numerically acceptable behavior in the vicinity of the singularities. Outside this region it can be transformed algebraically into the rational function <span class="math-container">$\frac{2x^{2}}{2x^{2}+3x+1}$</span>. But in this form, there is an issue with premature overflow in intermediate computation for <span class="math-container">$x$</span> large in magnitude. This can be avoided by dividing the numerator and denominator by <span class="math-container">$2x$</span>. The most advantageous switchover points between the two computations can be estimated, supported by a few experiments. In summary:</p> <p>For <span class="math-container">$x$</span> in <span class="math-container">$[-\frac{3}{2}, -\frac{11}{32}]$</span>: <span class="math-container">$$f(x) := \frac{1}{2x+1} - \frac{1-x}{1+x}$$</span> else: <span class="math-container">$$f(x) := \frac{x}{(\frac{3}{2} + \frac{1}{2x}) + x}$$</span></p> <p>Note evaluation order of additions in the denominator is indicated by parenthesis. When evaluated with IEEE-754 floating-point arithmetic, using fairly extensive testing, the maximum error in the positive half-plane was found to be less than 3 <a href="https://en.wikipedia.org/wiki/Unit_in_the_last_place" rel="nofollow noreferrer">ulp</a>, while the maximum error in the negative half-plane was found to be less than 5 ulp.</p>
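A direct transcription of this scheme in Python (a sketch; the special case at $x = 0$ is my addition, since $1/(2x)$ is undefined there although the limit of the expression is $0$), compared against the exact rational function evaluated with exact arithmetic:

```python
from fractions import Fraction

def f(x):
    if x == 0.0:
        return 0.0                         # limit value; avoids 1/(2x) below
    if -1.5 <= x <= -11.0 / 32.0:
        return 1.0 / (2.0 * x + 1.0) - (1.0 - x) / (1.0 + x)
    return x / ((1.5 + 1.0 / (2.0 * x)) + x)

def exact(x):
    q = Fraction(x)                        # exact rational reference value
    return float(2 * q**2 / (2 * q**2 + 3 * q + 1))

# sample points away from the poles at x = -1 and x = -1/2,
# including huge |x| to confirm there is no premature overflow
for x in [1e-8, 1e-3, 0.5, 3.0, -0.1, -0.4, -2.0, 1e150, -1e150]:
    assert abs(f(x) - exact(x)) <= 1e-14 * max(1.0, abs(exact(x)))
print("max-error check passed")
```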
3,352,834
<p><span class="math-container">$A^2 + A - 6I = 0$</span></p> <p>A= <span class="math-container">$\begin{bmatrix}a &amp; b\\c &amp; d\end{bmatrix}$</span></p> <p>I was asked to find <span class="math-container">$a + d$</span> and <span class="math-container">$ad - bc$</span>, given <span class="math-container">$a+d&gt;0$</span>.</p> <p>What I get is <span class="math-container">$A^2$</span> = <span class="math-container">$\begin{bmatrix}a^2+bc &amp; ab + bd\\ac+dc &amp; bc+d^2\end{bmatrix}$</span></p> <p>I get <span class="math-container">$a^2+bc+a=6, $</span></p> <p><span class="math-container">$ab+bd+b = 0,$</span></p> <p><span class="math-container">$bc+d^2+d=6, $</span></p> <p><span class="math-container">$ac+dc+c=0$</span></p> <p>I get <span class="math-container">$a=-1/2$</span>, <span class="math-container">$d=-1/2$</span>.</p> <p>Why did I get the wrong answer? Please help me.</p>
Peter Foreman
631,494
<p>You have that <span class="math-container">$$(A+3I)(A-2I)=0$$</span> so the matrix <span class="math-container">$$A=2I=\pmatrix{2&amp;0\\0&amp;2}$$</span> is a valid such <span class="math-container">$A$</span>, giving <span class="math-container">$a+d=4&gt;0$</span> and <span class="math-container">$ad-bc=4$</span>.</p>
3,352,834
<p><span class="math-container">$A^2 + A - 6I = 0$</span></p> <p>A= <span class="math-container">$\begin{bmatrix}a &amp; b\\c &amp; d\end{bmatrix}$</span></p> <p>I was asked to find <span class="math-container">$a + d$</span> and <span class="math-container">$ad - bc$</span>, given <span class="math-container">$a+d&gt;0$</span>.</p> <p>What I get is <span class="math-container">$A^2$</span> = <span class="math-container">$\begin{bmatrix}a^2+bc &amp; ab + bd\\ac+dc &amp; bc+d^2\end{bmatrix}$</span></p> <p>I get <span class="math-container">$a^2+bc+a=6, $</span></p> <p><span class="math-container">$ab+bd+b = 0,$</span></p> <p><span class="math-container">$bc+d^2+d=6, $</span></p> <p><span class="math-container">$ac+dc+c=0$</span></p> <p>I get <span class="math-container">$a=-1/2$</span>, <span class="math-container">$d=-1/2$</span>.</p> <p>Why did I get the wrong answer? Please help me.</p>
Mostafa Ayaz
518,023
<p>Actually, in the characteristic equation of a matrix <span class="math-container">$A$</span>, say<span class="math-container">$$|\lambda I-A|=\lambda^n+a_{n-1}\lambda^{n-1}+\cdots +a_1\lambda+a_0=0,$$</span>we have <span class="math-container">$$a_0=|-A|=(-1)^n |A|\\a_{n-1}=-\text{tr}(A)$$</span>so here (<span class="math-container">$n=2$</span>) we have three possibilities for the characteristic equation of <span class="math-container">$A$</span>: <span class="math-container">$$\lambda^2+\lambda -6=0\\(\lambda-2)^2=0\\(\lambda+3)^2=0$$</span>which lead to <span class="math-container">$$|A|=ad-bc=-6\quad,\quad \text{tr}(A)=a+d=-1\\|A|=4\quad,\quad \text{tr}(A)=4\\|A|=9\quad,\quad \text{tr}(A)=-6$$</span>respectively. Since <span class="math-container">$a+d&gt;0$</span>, only the second case survives: <span class="math-container">$a+d=4$</span> and <span class="math-container">$ad-bc=4$</span>.</p>
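A concrete check with $A = 2I$ (a matrix realising the case with positive trace), using plain $2\times 2$ arithmetic:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 0], [0, 2]]
A2 = matmul(A, A)

# A^2 + A - 6I should vanish entrywise
residue = [[A2[i][j] + A[i][j] - (6 if i == j else 0) for j in range(2)]
           for i in range(2)]

trace = A[0][0] + A[1][1]                        # a + d
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]      # ad - bc
print(residue, trace, det)   # [[0, 0], [0, 0]] 4 4
```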
11,994
<p>Now that we get to see the SE-network wide list of "hot" questions, I am just shaking my head in disbelief. At the time I am writing this, the two hot questions from Math.SE are titled (get a barf-bag, quick)</p> <ul> <li><a href="https://math.stackexchange.com/q/599520/8348">https://math.stackexchange.com/q/599520/8348</a></li> <li><a href="https://math.stackexchange.com/q/600373/8348">$1/i=i$. I must be wrong but why?</a></li> </ul> <p>Who gets to select these questions? How? Irrespective of how this is done, this is ridiculous, as neither question has any even remotely serious content (the latter one is more or less a common <em>fake-proof</em>). </p> <p>My proposal:</p> <blockquote> <p>The representatives of Math.SE on this list should be based only on the votes of people who are active on Math.SE. Not just all voting members (suspecting/pointing finger at SOers, who get the right to vote from association bonus alone).</p> </blockquote> <p>The exact rep limit (if any) is open to debate, may be 1000? Probably shouldn't put the bar too high, for that would introduce different kind of problems. But something that ensures a valued history of contributions on this site - not elsewhere on the SE network.</p>
Post No Bulls
111,742
<p>There is at least something that can be done to reduce the degree of embarrassment that &quot;hot&quot; questions bring to Math.SE: <strong>improve their titles</strong>. The titles are what ~3 million daily visitors to SE actually see; relatively few will click through to the question (although the absolute number of visits may still be high). Misspelled words, poor grammar, multiple question marks, lack of actual information in the title... these things tell the rest of the network and its visitors that Math.SE does not care about the quality of its content. Current example:</p> <p><em>How to show some function is constant ??</em></p> <p>Yes, removing one ? and the space between text and ? looks like a small edit; but when the text gets shown to millions of people, maybe the fix isn't so minor after all. No other SE site contributes similarly embarrassing titles to Hot questions. (Arqade is often on the quirky side, but this is different.)</p> <p><strong>Proposal</strong>: whenever you see a less-than-stellar title in the sidebar (and have 2000 points in the bank), just go ahead and edit it. If applicable, make it more specific: this will reduce the number of passers-by who click the title just to find out there's nothing for them there. On the scale of Math.SE traffic, one edit bump is nothing; especially since hot questions get bumped by answers anyway.</p>
3,091,353
<p>There are 2 definitions of <strong><em>Connected Space</em></strong> in my lecture notes, I understand the first one but not the second. The first one is:</p> <blockquote> <p>A topological space <span class="math-container">$(X,\mathcal{T})$</span> is connected if there does not exist <span class="math-container">$U,V\in\mathcal{T}$</span> such that <span class="math-container">$U\neq\emptyset$</span>, <span class="math-container">$V\neq\emptyset$</span>, <span class="math-container">$U\cap V=\emptyset$</span> and <span class="math-container">$X=U\cup V$</span></p> </blockquote> <p>which makes sense. It is saying that connected spaces can't be cut up into parts that have nothing to do with eachother.</p> <p>The second definition is: </p> <blockquote> <p>A topological space <span class="math-container">$(X,\mathcal{T})$</span> is connected if <span class="math-container">$\emptyset$</span> and <span class="math-container">$X$</span> are the only subsets of <span class="math-container">$X$</span> which are closed and open</p> </blockquote> <p>which makes no intuitive sense to me, especially as a definition of connectedness. </p> <p>Any intuitive explanation behind this second definition?</p>
Hagen von Eitzen
39,174
<p>If <span class="math-container">$U$</span> is closed and open, then so is <span class="math-container">$V:=U^\complement$</span>. So if such <span class="math-container">$U$</span> exists that is neither empty nor the whole space, we have <span class="math-container">$X=U\cup V$</span> with <span class="math-container">$U\cap V=\emptyset$</span>, <span class="math-container">$U\ne\emptyset$</span>, <span class="math-container">$V\ne\emptyset$</span>.</p>
749,714
<p>Does anyone know how to show this, preferably <strong>without</strong> using modular arithmetic?</p> <p>For any prime $p&gt;3$, show that 3 divides $2p^2+1$.</p>
lab bhattacharjee
33,337
<p><strong>Generalization</strong>:</p> <p>If $q$ is a prime not dividing the integer $m$, i.e. $(m,q)=1$, then $\displaystyle m^{q-1}\equiv1\pmod q$ (by Fermat's Little Theorem).</p> <p>Hence $\displaystyle(q-1)m^{q-1}\equiv (-1)\cdot1\equiv-1\pmod q\iff (q-1)m^{q-1}+1\equiv0\pmod q$</p> <p>Here $q=3$ and $m=p$; since the prime $p&gt;3$ is not divisible by $3$, the result follows.</p>
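A brute-force check of the conclusion for small primes:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

for p in range(5, 500):
    if is_prime(p):
        assert (2 * p * p + 1) % 3 == 0   # 3 divides 2p^2 + 1
print("verified for all primes 3 < p < 500")
```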
3,717,506
<p>I am reading some text about even functions and found this snippet:</p> <blockquote> <p>Let <span class="math-container">$f(x)$</span> be an integrable even function. Then,</p> <p><span class="math-container">$$\int_{-a}^0f(x)dx = \int_0^af(x)dx, \forall a \in \mathbb{R}$$</span></p> <p>and therefore,</p> <p><span class="math-container">$$\int_{-a}^af(x)dx = 2\int_0^af(x), \forall a \in \mathbb{R}$$</span></p> </blockquote> <p>Why does <span class="math-container">$dx$</span> disappear from <span class="math-container">$2\int_0^af(x)$</span>? Is it just a notation convention?</p>
Travis Willse
155,629
<p>This answer expands on A. Kriegman's and folds in some of my comments thereunder.</p> <p>Let <span class="math-container">$P_n(k)$</span> denote the fraction of values of <span class="math-container">$n$</span>-term sequences with value <span class="math-container">$k$</span>, which we can interpret as the probability that the value of a uniformly randomly selected <span class="math-container">$n$</span>-term sequence has value <span class="math-container">$k$</span>.</p> <p>The limiting probabilities <span class="math-container">$p_k := \lim_{n \to \infty} P_n(k)$</span> are stable under the application of a uniformly selected die roll, giving an infinite set of equalities: <span class="math-container">$$\begin{array}{rcll} p_k &amp;=&amp; \frac{1}{6}(p_{k - 3} + p_{k - 2} + p_{k - 1} + p_{k + 1} + p_{k + 2}), &amp; k \neq 0 \\ p_0 &amp;=&amp; \frac{1}{6}(p_{- 3} + p_{- 2} + p_{- 1} + p_{1} + p_{2} + 1) . \\ \end{array}\qquad (\ast)$$</span></p> <p>The first equation defines a linear recurrence with characteristic polynomial <span class="math-container">$$p(r) = r^5 + r^4 - 6 r^3 + r^2 + r + 1,$$</span> and so the half-infinite sequences <span class="math-container">$\{p_k\}_{k \leq 0}$</span> and <span class="math-container">$\{p_k\}_{k \geq 0}$</span> can be given as linear combinations of powers <span class="math-container">$\alpha_i^k$</span> of the roots <span class="math-container">$\alpha_i$</span> of <span class="math-container">$p$</span> (possibly with different coefficients for <span class="math-container">$k &gt; 0$</span> and <span class="math-container">$k &lt; 0$</span>).</p> <p>The roots of <span class="math-container">$p$</span> are : <span class="math-container">$$ \alpha = 0.82140\ldots, \quad \beta = -0.27496\ldots+i 0.38561 \ldots, \quad \bar\beta, \quad \gamma = 1.77912\ldots, \quad \delta = -3.05060\ldots . 
$$</span> Since <span class="math-container">$0 \leq p_k \leq 1$</span> for all <span class="math-container">$k$</span>, the coefficients of <span class="math-container">$\gamma, \delta$</span> (whose moduli exceed <span class="math-container">$1$</span>) must be zero for the sequence <span class="math-container">$\{p_k\}_{k \geq 0}$</span>, and the coefficients of <span class="math-container">$\alpha, \beta, \bar\beta$</span> (whose moduli are less than <span class="math-container">$1$</span>) must be zero for <span class="math-container">$\{p_k\}_{k \leq 0}$</span>, and so <span class="math-container">$$\boxed{\begin{array}{rcll} p_k &amp;=&amp; A \alpha^k + B (\beta^k + \bar\beta^k), &amp;k \geq 0 \\ p_k &amp;=&amp; C \gamma^k + D \delta^k , &amp;k \leq 0 \end{array}\qquad (\ast\ast)}$$</span> for some constants <span class="math-container">$A, B, C, D$</span>. (NB we can rewrite <span class="math-container">$\beta^k + \bar\beta^k$</span> as a manifestly real expression, namely, as <span class="math-container">$2 |\beta|^k \cos (k \arg \beta)$</span>.) We can find those constants by producing an independent linear system in those variables and solving; one option is to substitute the expressions <span class="math-container">$(\ast\ast)$</span>, <span class="math-container">$k = -1,0,1$</span> in <span class="math-container">$(\ast)$</span>. We get one equation each from substituting the first and second equations in <span class="math-container">$(\ast\ast)$</span> in <span class="math-container">$(\ast)$</span>, or we can replace one of those two equations with the condition <span class="math-container">$A + 2 B = C + D$</span> given by substituting <span class="math-container">$k = 0$</span> in both of the equations in <span class="math-container">$(\ast\ast)$</span>.</p> <p>Appealing to a C.A.S. 
produces explicit formulae for <span class="math-container">$A, B, C, D$</span> as rational polynomials in <span class="math-container">$\alpha, \beta, \gamma, \delta$</span>, but the expressions are unwieldy (hundreds of thousands of characters among them), and it is not evident that they can be simplified further. Their numerical values are: <span class="math-container">$$\boxed{\begin{align*} A &amp;= 0.13210\ldots\\ B &amp;= 0.04359\ldots\\ C &amp;= 0.15602\ldots\\ D &amp;= 0.06328\ldots . \end{align*}}$$</span> In particular, <span class="math-container">$p_0 = 0.21930\ldots$</span>.</p> <p>Since <span class="math-container">$A, C \neq 0$</span>, the limiting behaviors of <span class="math-container">$p_k$</span> are <span class="math-container">\begin{align*} p_k \sim A \alpha^k ,&amp;\quad k \to \phantom{-}\infty \\ p_k \sim C \gamma^k ,&amp;\quad k \to -\infty . \end{align*}</span></p> <p><strong>Remark</strong> One might ask whether we can produce exact expressions for the roots <span class="math-container">$\alpha, \beta, \ldots$</span> of the (quintic) polynomial <span class="math-container">$p$</span>. If we restrict ourselves to algebraic expressions, we cannot: By reducing modulo <span class="math-container">$2$</span> we can efficiently deduce that <span class="math-container">$p$</span> is irreducible over <span class="math-container">$\Bbb Q$</span>, so its Galois group contains a <span class="math-container">$5$</span>-cycle. On the other hand, we've seen that <span class="math-container">$p$</span> has exactly <span class="math-container">$2$</span> nonreal roots, and hence the complex conjugation map is a transposition in the Galois group of <span class="math-container">$p$</span>. But a transposition and a <span class="math-container">$5$</span>-cycle generate all of <span class="math-container">$S_5$</span>, which is hence the Galois group. 
In particular, it is not solvable, so the roots <span class="math-container">$\alpha, \beta, \ldots$</span> are not expressible in terms of radicals.</p>
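The group-theoretic step (a transposition together with a 5-cycle generates all of $S_5$, valid because $5$ is prime) is easy to spot-check by brute force. The sketch below is my own, not part of the answer: it computes the closure of the standard 5-cycle with each of the ten transpositions and confirms the generated group always has order $120 = |S_5|$.

```python
from itertools import combinations

def compose(p, q):
    # (p ∘ q)(i) = p[q[i]] for permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(p)))

def closure(gens):
    # breadth-first closure of the subgroup generated by `gens`
    group = set(gens)
    frontier = set(gens)
    while frontier:
        new = {compose(a, b) for a in frontier for b in group} | \
              {compose(b, a) for a in frontier for b in group}
        frontier = new - group
        group |= frontier
    return group

cycle = (1, 2, 3, 4, 0)                            # the 5-cycle (0 1 2 3 4)
for i, j in combinations(range(5), 2):
    t = list(range(5)); t[i], t[j] = t[j], t[i]    # transposition (i j)
    assert len(closure([cycle, tuple(t)])) == 120  # |S_5| = 5! = 120
```

Since every transposition pairs with the 5-cycle to give the full symmetric group, the Galois group argument goes through for whichever transposition complex conjugation happens to be.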
2,247,968
<blockquote> <p>$a,b$ are elements in a group $G$. Let $o(a)=m$ which means that $a^m=e$, $\gcd(m,n)=1$ and $(a^n)*b=b*(a^n)$. Prove that $a*b=b*a$.</p> </blockquote> <p><em>Hint: try to solve for $m=5,n=3$.</em></p> <p>I am stuck in this question and can't find an answer to it, can anyone give me some hints?</p> <p>Thanks.</p>
egreg
62,967
<p>The constant function $f(x)=1$ is a counterexample for (a) and (d).</p> <p>Since $f(0)=1$, (c) is obviously false.</p> <p>It remains to show (b) is true. Suppose $f(x)&lt;0$ for some $x\in[0,2]$. Then the minimum of $f$ is negative. What's the derivative at a point of minimum, if this point is in $(0,2)$? Can you do the case when the minimum is at $2$?</p>
2,258,697
<p>I recently encountered this question and have been stuck for a while. Any help would be appreciated!</p> <p>Q: Given that $$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{1}{5} \tag{1} \label{eq:1}$$ $$abc = 5 \tag{2} \label{eq:2}$$ Find $a^3 + b^3 + c^3$. It wasn't specified in the question but I think it can be assumed that $a, b, c$ are real numbers.</p> <p>My approach: $$ ab + ac + bc = \frac{1}{5} abc = 1 $$ $$ a^3 + b^3 + c^3 = (a+b+c)^3 - 3[(a + b + c)(ab + ac + bc) - abc] $$ $$ a^3 + b^3 + c^3 = (a+b+c)^3 - 3(a+b+c) + 15 $$ From there, I'm not sure how to go about solving for $a + b + c$. Something else I tried was letting $x = \frac{1}{a}, y = \frac{1}{b}, z = \frac{1}{c}$, so we get $$ xyz = x + y + z = \frac{1}{5} $$Similarly, I'm not sure how to continue from there. </p>
amd
265,466
<p>The first two columns are obviously linearly independent, while the last two columns are duplicates of the first, so the nullity of this matrix is 2, which means that it has $0$ as an eigenvalue of multiplicity two. The row sums all equal $2$, so that’s another eigenvalue with associated eigenvector $(1,1,1,1)^T$ (right-multiplying a matrix by a vector of all 1’s sums its rows). The last eigenvalue can always be found “for free:” the trace of a matrix is equal to the sum of its eigenvalues. The trace of this matrix is equal to $4$, therefore the fourth eigenvalue is $4-0-0-2=2$.</p>
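For a concrete instance consistent with that description (my own hypothetical reconstruction, since the matrix itself is not reproduced here), the stated facts about the nullity, the row sums and the trace can all be verified directly:

```python
# hypothetical 4×4 matrix matching the description: the last two columns
# duplicate the first two, every row sums to 2, and the trace is 4
A = [[1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1]]

matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(4)) for i in range(4)]

# nullity 2: two independent null vectors, so 0 is an eigenvalue twice
assert matvec(A, [1, 0, -1, 0]) == [0, 0, 0, 0]
assert matvec(A, [0, 1, 0, -1]) == [0, 0, 0, 0]

# row sums are 2, so (1,1,1,1) is an eigenvector with eigenvalue 2
assert matvec(A, [1, 1, 1, 1]) == [2, 2, 2, 2]

# trace = sum of eigenvalues, so the remaining eigenvalue is 4 − 0 − 0 − 2 = 2
assert sum(A[i][i] for i in range(4)) == 4
```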
1,262,036
<p>In complex analysis, this seems to be a really helpful way to avoid having to expand out Laurent series. I am unclear, however, when it is appropriate to use this property.</p> <p>In specific, I'm worried I CAN'T use this method on the following:</p> <p>$$\frac{e^z}{z^3 \sin(z)}$$ at the origin. This looks really messy, because using Laurent series, I'll have to divide series. Can I use the property stated above? If not, is there a more efficient way I can approach this problem?</p>
zhw.
228,045
<p>Expanding the series is not so bad really. Rewrite the thing as</p> <p>$$\frac{1}{z^4}\cdot \frac{e^z}{(\sin z)/z}.$$</p> <p>We want the coefficient of $z^3$ in the expansion of the second quotient. Now $(\sin z)/z = 1 - z^2/6 + O(z^4),$ so its reciprocal is $1+z^2/6 + O(z^4).$ So we are looking at</p> <p>$$(1+z+z^2/2 + z^3/6 + O(z^4))(1+z^2/6 + O(z^4)).$$</p> <p>Finding the coefficient of $z^3$ in the above is easy (it's $1/6 + 1/6 = 1/3$), and that is your residue.</p>
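As a numerical sanity check (mine, not part of the answer): by the residue theorem, with $z = re^{i\theta}$ we get $\frac{1}{2\pi i}\oint f\,dz = \frac{1}{2\pi}\int_0^{2\pi} f(z)\,z\,d\theta$, i.e. the residue is the mean of $f(z)\,z$ over a small circle, and the trapezoidal rule is extremely accurate for periodic integrands.

```python
import cmath

def residue_at_origin(f, r=0.5, n=4000):
    # (1/2πi) ∮ f(z) dz over |z| = r: with z = r e^{iθ}, dz = i z dθ,
    # so the contour integral is just the average of f(z)·z over the circle
    total = 0j
    for k in range(n):
        z = r * cmath.exp(2j * cmath.pi * k / n)
        total += f(z) * z
    return total / n

f = lambda z: cmath.exp(z) / (z**3 * cmath.sin(z))
res = residue_at_origin(f)
assert abs(res - 1/3) < 1e-9   # matches the series computation above
```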
15,205
<p>I'm a young math student. And I live with the effort of always wanting to understand everything I study, in mathematics. This means that for every thing I face I must always understand every single demonstration, studying the basics every time if I don't remember them. And this makes it impossible for me to prepare the exams, because I can't go on, I fix myself on wanting to derive by myself a theorem and I lose days in it. And so I ask mathematicians if it is always necessary to be able to prove everything, or we must accept what the theorems say and give it for good. If possible I also ask you some advices to help me study, knowing my problem.</p>
kcrisman
1,608
<p>I don't have any specific resources, but I suggest that you might find a little success by finding some physics applications where there really is a difference between the "algebra-based" and "calc-based" physics that use e.g. multivariate calc or integration by parts or something.</p> <p>As an example, I think that there are some problems involving work (e.g. pumping water out of a conical tank?) that probably aren't exactly solvable without integrals, and which might require even trig substitution or worse. If you have access to a solution manual for the "real" calc-based physics course, you can just browse it for problems where those specific techniques come up, and then just do the problem up to the point where calculus comes in. Give them two minutes to talk to a neighbor to say "now what do I do", and assuming they have no idea, now you review the relevant calc technique.</p> <p>This is probably not really going to help them internalize new calculus they need, but then again a 1-credit hour class probably is going to have pretty minimal expectations so at least it will connect a bit.</p>
1,549,138
<p>I have a problem with this exercise:</p> <p>Prove that if $R$ is a reflexive and transitive relation then $R^n=R$ for each $n \ge 1$ (where $R^n \equiv \underbrace {R \times R \times R \times \cdots \times R} _{n \ \text{times}}$).</p> <p>This exercise comes from my logic exercise book. The problem is that I've proven $R^n=R$ is false for $n=2$ and non-empty $R$.</p> <p>Here is how I've done it:</p> <p>Let's take $n=2$. $R$ is a relation so it's a set. $R^2$ is, by definition, a set of ordered pairs where both of their elements belong to $R$. But $R$ is a set of elements that belong to $R$ - I mean it's not the set of pairs of elements from $R$. So $R^2\neq R$.</p> <p>Please tell me something about my proof and this exercise. How would you solve the problem?</p>
Leox
97,339
<p>It is enough to prove that if a relation $R$ is transitive and reflexive then $R^2=R.$</p> <p>By definition of a transitive relation we have that $R^2 \subseteq R.$ Let us prove that $R \subseteq R^2.$ Let $(a,b) \in R$. Since $R$ is reflexive, $(b,b) \in R$. Then by definition of composition we get that $(a,b) \in R^2.$ Thus $R^2=R.$</p>
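Note that the exponent here denotes composition of relations, $R^2 = R \circ R$, not a Cartesian product. A quick machine check of mine, using the reflexive, transitive relation $\le$ on $\{1,2,3\}$:

```python
def compose(R, S):
    # R ∘ S as a set of pairs: (a, c) whenever (a, b) ∈ R and (b, c) ∈ S
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

# "less than or equal" on {1, 2, 3} is reflexive and transitive
R = {(a, b) for a in range(1, 4) for b in range(1, 4) if a <= b}

assert compose(R, R) == R                 # R² = R
assert compose(compose(R, R), R) == R     # hence R³ = R as well
```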
53,188
<p>Recently I read the chapter "Doctrines in Categorical Logic" by Kock, and Reyes in the Handbook of Mathematical Logic. And I was quite impressed with the entire chapter. However it is very short, and considering that this copy was published in 1977, possibly a bit out of date. </p> <p>My curiosity has been sparked (especially given the possibilities described in the aforementioned chapter) and would like to know of some more modern texts, and articles written in this same style (that is to say coming from a logic point of view with a strong emphasis on analogies with normal mathematical logic.)</p> <p>However, that being said, <em>I am stubborn as hell, and game for anything.</em></p> <p>So, Recommendations?</p> <p>Side Note: More interested in the preservation of structure, and the production of models than with any sort of proposed foundational paradigm </p>
Buschi Sergio
6,262
<p>For an introduction:</p> <p>1) Notes on Logic and Set Theory by P. T. Johnstone (Chap. 1, Chap. 3).</p> <p>2) Locally Presentable and Accessible Categories by J. Adamek and J. Rosicky (Chap. 3 &amp; Chap. 5).</p> <p>For a comprehensive view:</p> <p>1) Sketches of an Elephant: A Topos Theory Compendium (Vol. 2, Chap. D1).</p> <p>2) B. Jacobs, Categorical Logic and Type Theory, Studies in Logic and the Foundations of Mathematics 141.</p>
984,915
<blockquote> <p>If $A=\{a_1,...,a_n\}$ and $B=\{b_1,...,b_n\}$ are two bases of a vector space $V$, there exists a unique matrix $M$ such that for any $f\in V$, $[f]_A=M[f]_B$.</p> </blockquote> <p>My textbook uses this theorem without a proof, so I'm trying to show that it's true myself. Consider $[f]_A = (c_1,...,c_n)^T$ and $[f]_B=(d_1,...,d_n)^T$. How is it possible that just one, unique matrix exists that takes $[f]_B$ to $[f]_A$? Every $f$ will have a different coordinate vector under $A$ and $B$. I was thinking that it had something to do with $|A|=|B|$, but I can't justify how the matrix $M$ would look.</p>
hickslebummbumm
168,882
<p>Note that $\frac{-1}{n} &lt; \frac{-n}{n^2+1} \leq \frac{(-1)^n n}{n^2 +1} \leq \frac{n}{n^2+1} &lt; \frac{1}{n}$. The outer two sequences both converge to $0$, so it follows from the sandwich theorem that $\lim_{n \rightarrow \infty}\frac{(-1)^n n}{n^2 + 1} = 0$, too.</p>
984,915
<blockquote> <p>If $A=\{a_1,...,a_n\}$ and $B=\{b_1,...,b_n\}$ are two bases of a vector space $V$, there exists a unique matrix $M$ such that for any $f\in V$, $[f]_A=M[f]_B$.</p> </blockquote> <p>My textbook uses this theorem without a proof, so I'm trying to show that it's true myself. Consider $[f]_A = (c_1,...,c_n)^T$ and $[f]_B=(d_1,...,d_n)^T$. How is it possible that just one, unique matrix exists that takes $[f]_B$ to $[f]_A$? Every $f$ will have a different coordinate vector under $A$ and $B$. I was thinking that it had something to do with $|A|=|B|$, but I can't justify how the matrix $M$ would look.</p>
Bumblebee
156,886
<p>HINT:$$\dfrac{n}{n^2+1}=\dfrac{1}{n+\dfrac1n}\to 0\,\,\,\,\, \text{as}\,\,\,\,\,n\to\infty$$</p>
3,050,497
<p>The operator is given by <span class="math-container">$$A=\begin{pmatrix} 1 &amp; 0 &amp; 0\\ 1 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 4 \end{pmatrix}$$</span> I have to write down the operator <span class="math-container">$$B=\tan(\frac{\pi} {4}A)$$</span> I calculate <span class="math-container">$$\mathcal{R} (z) =\frac{1}{z\mathbb{1}-A}=\begin{pmatrix} \frac{1}{z-1} &amp; 0 &amp; 0\\ \frac{1}{(z-1)^2} &amp; \frac{1}{z-1} &amp; 0\\ 0 &amp; 0 &amp; \frac{1}{z-4}\end{pmatrix} $$</span></p> <p>Now the B operator is given by: <span class="math-container">$$B=\begin{pmatrix} Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{z-1} &amp; 0 &amp; 0\\ Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{(z-1)^2} &amp; Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{z-1} &amp; 0\\ 0 &amp; 0 &amp; Res_{z=4}\frac{\tan(\frac{\pi}{4}z)}{z-4} \end{pmatrix} $$</span></p> <p>For me the result should be <span class="math-container">$$ B=\begin{pmatrix} 1 &amp; 0 &amp; 0\\ \frac{\pi}{2} &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 0\end{pmatrix}$$</span></p> <p>But the exercise gives as solution: <span class="math-container">$$ B=\begin{pmatrix} 1 &amp; 0 &amp; 0\\ \frac{\pi}{4} &amp; 1 &amp; 0\\ 0 &amp; 0 &amp; 1\end{pmatrix}$$</span></p> <p>Where is the error? Thank you and sorry for bad English </p>
Ankit Kumar
595,608
<p>Note that <span class="math-container">$$1+a+b+c+ab+bc+ca+abc=(1+a)(1+b)(1+c)$$</span> <span class="math-container">$$(1+a)(1+b)(1+c)=1623=1\cdot3\cdot541$$</span> <span class="math-container">$$\implies a=0,\ b=2,\ c=540.$$</span> Note that <span class="math-container">$0\leq a&lt;b&lt;c$</span>, so <span class="math-container">$1+a&lt;1+b&lt;1+c$</span>. Since <span class="math-container">$541$</span> is prime, <span class="math-container">$1\cdot 3\cdot 541$</span> is the only factorisation of <span class="math-container">$1623$</span> into three increasing factors, hence the only one satisfying this criterion.</p>
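The uniqueness claim can also be confirmed by exhaustive search (my own check, not part of the answer): among all $0 \le a &lt; b &lt; c$, only $(0, 2, 540)$ satisfies $(1+a)(1+b)(1+c) = 1623$.

```python
N = 1623
solutions = []
for a in range(0, N):
    if (1 + a) ** 3 > N:                 # need (1+a) < (1+b) < (1+c)
        break
    for b in range(a + 1, N):
        if (1 + a) * (1 + b) ** 2 > N:
            break
        rest, rem = divmod(N, (1 + a) * (1 + b))
        c = rest - 1
        if rem == 0 and c > b:
            solutions.append((a, b, c))

print(solutions)                          # [(0, 2, 540)]
assert solutions == [(0, 2, 540)]
```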
977,446
<p>Prove that $A\cap B = \emptyset$ iff $A\subset B^C$. I figured I could start by letting $x$ be an element of the universe and that $x$ is an element of $A$ and not an element of $B$. </p>
ajotatxe
132,456
<p>You must prove both implications, that is: if $A\cap B=\emptyset$ then $A\subset B^c$ and conversely: if $A\subset B^c$ then $A\cap B=\emptyset$.</p> <p><strong>For the first:</strong> A good way to prove that some set is a subset of another one is supposing that $x$ is in the subset and proving that $x$ is in the superset: if $x\in A$, then it must not be in $B$, because $A$ and $B$ have no common elements. Then $x$ is in $B^c$.</p> <p><strong>For the second:</strong> A good way to prove that a set is empty is supposing that $x$ belongs to it and deriving a contradiction: if $x\in A\cap B$ then $x\in B$ and $x\notin B$.</p>
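Both directions can also be machine-checked exhaustively over a small universe (my own sanity check, with universe $\{0,1,2\}$; of course this illustrates rather than proves the general statement):

```python
from itertools import chain, combinations

U = {0, 1, 2}

def subsets(s):
    # all subsets of s, via combinations of every size
    s = list(s)
    return [set(c) for c in chain.from_iterable(
        combinations(s, r) for r in range(len(s) + 1))]

for A in subsets(U):
    for B in subsets(U):
        complement = U - B
        # A ∩ B = ∅  ⟺  A ⊆ Bᶜ
        assert (A & B == set()) == (A <= complement)
```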
614,749
<p><strong>The game:</strong></p> <p>Given $S = \{ a_1,..., a_n \}$ of positive integers ($n \ge 2$). The game is played by two people. At each of their turns, the player chooses two <strong>different</strong> non-zero numbers and subtracts $1$ from each of them. The winner is the last player able to perform this task.</p> <p><strong>The problem:</strong></p> <p>Suppose that the game is played by $\text{A}$ and herself.</p> <p>$\text{a)}$ Find the necessary and sufficient conditions on $S$ (called $\mathbb{W}$), if there are any, under which $\text{A}$ always clears the set regardless of how she plays.</p> <p>$\text{b)}$ Also, find the necessary and sufficient conditions on $S$ (called $\mathbb{L}$) under which $\text{A}$ is always unable to clear the set regardless of how she plays.</p> <p>$\text{c)}$ Then, find the strategies/algorithm by which $\text{A}$ can clear the set for $S$ that doesn't satisfy $\mathbb{L} \vee \mathbb{W}$.</p> <p>Next, suppose that the game is played by $\text{A}$ and $\text{B}$ respectively, with $S$ that doesn't satisfy $\mathbb{W}$.</p> <p>$\text{d)}$ Does either of them have a strategy/algorithm to win the game? If so, who is she, and what is her winning strategy? (It's possible to suppose that $\text{A}$ and $\text{B}$ play the game optimally.)</p> <p>$\;$</p> <p><em>Note:</em></p> <p>$\text{1)}$ This is not an assignment. I have just created this out of a familiar thing in my life, so I don't know whether there is existing research on, or even a name for, the game. If there is, I'd appreciate it if you shared it.</p> <p>$\text{2)}$ The case of $n = 2$ is so obvious that we can eliminate it from consideration.
We can do the same thing to an obvious condition in $\mathbb{W}$ (if $\mathbb{W} \neq \varnothing$): $\left ( \sum_{i \in S} i \right ) \; \vdots \; 2$.</p> <p>Thanks in advance.</p> <p>${}$</p> <p><strong>Update 1:</strong> To clear up many people's misunderstanding and to avoid it for new ones, I emphasize the word "different" above. And by "different", I mean different indices of numbers, not their values. If this is still not clear, I think we should consider $S$ as a finite natural sequence ($a_1$ to $a_n$) and not delete any of them once they become $0$.</p> <p><strong>Update 2:</strong> (d) has been renewed a little, thanks to Greg Martin.</p>
Greg Martin
16,078
<p>I'm more interested in the competitive part of the question, so my answer deals with the following modification to (d): The starting position can be any sequence of $n\ge3$ positive integers, regardless of whether the total sum is even or odd (although that overall parity cannot change during the game, of course). The first player to be unable to move loses, regardless of whether all the numbers are $0$ yet. For this game, here are some empirical observations.</p> <p>If $n=3$ and the parity of the total is even: the losing positions are precisely those where all three numbers are individually even. So a winning strategy, once I make a move that results in all three numbers being even, is to subtract from the same two numbers my opponent does.</p> <p>If $n=4$ and the parity of the total is even: the losing positions are precisely those where all four numbers are even or all four numbers are odd. One winning strategy (there are others) is to always subtract from the two odd numbers.</p> <p>If $n=5$ and the parity of the total is even: the losing positions are precisely those where all five numbers are even, or the smallest number is even and all other numbers are odd. Winning strategy: if my opponent leaves two odd numbers, subtract from them; otherwise subtract from the smallest number (which will be odd) and the unique even number.</p> <p>When the parity of the total is odd, already the game seems more complicated.
When $n=3$, for example, losing positions include the rows of \begin{array}{ccc} 1 &amp; 2 &amp; 2 \\ 1 &amp; 4 &amp; 4 \\ 1 &amp; 6 &amp; 6 \\ 1 &amp; 8 &amp; 8 \\ 1 &amp; 10 &amp; 10 \\ 2 &amp; 2 &amp; 5 \\ 2 &amp; 2 &amp; 7 \\ 2 &amp; 2 &amp; 9 \\ 2 &amp; 3 &amp; 4 \\ 2 &amp; 4 &amp; 7 \\ 2 &amp; 4 &amp; 9 \\ 2 &amp; 5 &amp; 6 \\ 2 &amp; 6 &amp; 9 \\ 2 &amp; 7 &amp; 8 \\ 2 &amp; 9 &amp; 10 \\ 3 &amp; 3 &amp; 3 \\ 3 &amp; 4 &amp; 6 \\ 3 &amp; 5 &amp; 5 \\ 3 &amp; 6 &amp; 8 \\ 3 &amp; 7 &amp; 7 \\ 3 &amp; 8 &amp; 10 \\ 3 &amp; 9 &amp; 9 \\ 4 &amp; 4 &amp; 5 \\ 4 &amp; 4 &amp; 9 \\ 4 &amp; 5 &amp; 8 \\ 4 &amp; 6 &amp; 7 \\ 4 &amp; 7 &amp; 10 \\ 4 &amp; 8 &amp; 9 \\ 5 &amp; 5 &amp; 7 \\ 5 &amp; 6 &amp; 6 \\ 5 &amp; 6 &amp; 10 \\ 5 &amp; 7 &amp; 9 \\ 5 &amp; 8 &amp; 8 \\ 5 &amp; 10 &amp; 10 \\ 6 &amp; 6 &amp; 9 \\ 6 &amp; 7 &amp; 8 \\ 6 &amp; 9 &amp; 10 \\ 7 &amp; 7 &amp; 7 \\ 7 &amp; 8 &amp; 10 \\ 7 &amp; 9 &amp; 9 \\ 8 &amp; 8 &amp; 9 \\ 9 &amp; 10 &amp; 10 \end{array} while winning positions include the rows of \begin{array}{ccc} 1 &amp; 1 &amp; 1 \\ 1 &amp; 1 &amp; 3 \\ 1 &amp; 1 &amp; 5 \\ 1 &amp; 1 &amp; 7 \\ 1 &amp; 1 &amp; 9 \\ 1 &amp; 2 &amp; 4 \\ 1 &amp; 2 &amp; 6 \\ 1 &amp; 2 &amp; 8 \\ 1 &amp; 2 &amp; 10 \\ 1 &amp; 3 &amp; 3 \\ 1 &amp; 3 &amp; 5 \\ 1 &amp; 3 &amp; 7 \\ 1 &amp; 3 &amp; 9 \\ 1 &amp; 4 &amp; 6 \\ 1 &amp; 4 &amp; 8 \\ 1 &amp; 4 &amp; 10 \\ 1 &amp; 5 &amp; 5 \\ 1 &amp; 5 &amp; 7 \\ 1 &amp; 5 &amp; 9 \\ 1 &amp; 6 &amp; 8 \\ 1 &amp; 6 &amp; 10 \\ 1 &amp; 7 &amp; 7 \\ 1 &amp; 7 &amp; 9 \\ 1 &amp; 8 &amp; 10 \\ 1 &amp; 9 &amp; 9 \\ 2 &amp; 2 &amp; 3 \\ 2 &amp; 3 &amp; 6 \\ 2 &amp; 3 &amp; 8 \\ 2 &amp; 3 &amp; 10 \\ 2 &amp; 4 &amp; 5 \\ 2 &amp; 5 &amp; 8 \\ 2 &amp; 5 &amp; 10 \\ 2 &amp; 6 &amp; 7 \\ 2 &amp; 7 &amp; 10 \\ 2 &amp; 8 &amp; 9 \\ 3 &amp; 3 &amp; 5 \\ 3 &amp; 3 &amp; 7 \\ 3 &amp; 3 &amp; 9 \\ 3 &amp; 4 &amp; 4 \\ 3 &amp; 4 &amp; 8 \\ 3 &amp; 4 &amp; 10 \\ 3 &amp; 5 &amp; 7 \\ 3 &amp; 5 &amp; 9 \\ 3 &amp; 6 &amp; 6 \\ 3 &amp; 6 &amp; 10 
\\ 3 &amp; 7 &amp; 9 \\ 3 &amp; 8 &amp; 8 \\ 3 &amp; 10 &amp; 10 \\ 4 &amp; 4 &amp; 7 \\ 4 &amp; 5 &amp; 6 \\ 4 &amp; 5 &amp; 10 \\ 4 &amp; 6 &amp; 9 \\ 4 &amp; 7 &amp; 8 \\ 4 &amp; 9 &amp; 10 \\ 5 &amp; 5 &amp; 5 \\ 5 &amp; 5 &amp; 9 \\ 5 &amp; 6 &amp; 8 \\ 5 &amp; 7 &amp; 7 \\ 5 &amp; 8 &amp; 10 \\ 5 &amp; 9 &amp; 9 \\ 6 &amp; 6 &amp; 7 \\ 6 &amp; 7 &amp; 10 \\ 6 &amp; 8 &amp; 9 \\ 7 &amp; 7 &amp; 9 \\ 7 &amp; 8 &amp; 8 \\ 7 &amp; 10 &amp; 10 \\ 8 &amp; 9 &amp; 10 \\ 9 &amp; 9 &amp; 9. \end{array}</p>
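These tables are easy to regenerate mechanically. Below is a short memoized game search of my own (positions are stored as sorted tuples, a move subtracts $1$ from two nonzero entries, and a player who cannot move loses); it reproduces a few rows of the tables above, plus the all-even characterization for $n=3$ with even total.

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def wins(pos):
    # True iff the player to move from `pos` (a sorted tuple) can force a win
    moves = [(i, j) for i, j in combinations(range(len(pos)), 2)
             if pos[i] > 0 and pos[j] > 0]
    if not moves:
        return False              # cannot move: this player loses
    for i, j in moves:
        nxt = list(pos)
        nxt[i] -= 1
        nxt[j] -= 1
        if not wins(tuple(sorted(nxt))):
            return True           # some move leaves the opponent losing
    return False

# spot-check rows of the tables above (odd total, n = 3)
assert not wins((1, 2, 2)) and not wins((2, 2, 5)) and not wins((3, 3, 3))
assert wins((1, 1, 1)) and wins((2, 2, 3)) and wins((9, 9, 9))

# n = 3, even total: all-even triples are losing, as claimed
for pos in [(2, 2, 2), (2, 4, 6), (4, 4, 8)]:
    assert not wins(pos)
```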
614,749
<p><strong>The game:</strong></p> <p>Given $S = \{ a_1,..., a_n \}$ of positive integers ($n \ge 2$). The game is played by two people. At each of their turns, the player chooses two <strong>different</strong> non-zero numbers and subtracts $1$ from each of them. The winner is the last player able to perform this task.</p> <p><strong>The problem:</strong></p> <p>Suppose that the game is played by $\text{A}$ and herself.</p> <p>$\text{a)}$ Find the necessary and sufficient conditions on $S$ (called $\mathbb{W}$), if there are any, under which $\text{A}$ always clears the set regardless of how she plays.</p> <p>$\text{b)}$ Also, find the necessary and sufficient conditions on $S$ (called $\mathbb{L}$) under which $\text{A}$ is always unable to clear the set regardless of how she plays.</p> <p>$\text{c)}$ Then, find the strategies/algorithm by which $\text{A}$ can clear the set for $S$ that doesn't satisfy $\mathbb{L} \vee \mathbb{W}$.</p> <p>Next, suppose that the game is played by $\text{A}$ and $\text{B}$ respectively, with $S$ that doesn't satisfy $\mathbb{W}$.</p> <p>$\text{d)}$ Does either of them have a strategy/algorithm to win the game? If so, who is she, and what is her winning strategy? (It's possible to suppose that $\text{A}$ and $\text{B}$ play the game optimally.)</p> <p>$\;$</p> <p><em>Note:</em></p> <p>$\text{1)}$ This is not an assignment. I have just created this out of a familiar thing in my life, so I don't know whether there is existing research on, or even a name for, the game. If there is, I'd appreciate it if you shared it.</p> <p>$\text{2)}$ The case of $n = 2$ is so obvious that we can eliminate it from consideration.
We can do the same thing to an obvious condition in $\mathbb{W}$ (if $\mathbb{W} \neq \varnothing$): $\left ( \sum_{i \in S} i \right ) \; \vdots \; 2$.</p> <p>Thanks in advance.</p> <p>${}$</p> <p><strong>Update 1:</strong> To clear up many people's misunderstanding and to avoid it for new ones, I emphasize the word "different" above. And by "different", I mean different indices of numbers, not their values. If this is still not clear, I think we should consider $S$ as a finite natural sequence ($a_1$ to $a_n$) and not delete any of them once they become $0$.</p> <p><strong>Update 2:</strong> (d) has been renewed a little, thanks to Greg Martin.</p>
Barry Cipra
86,747
<p>This answers parts a)-c).</p> <p>The set $\mathbb{W}$ is fairly small: It consists of sets $S=\{a,a\}$ and $S=\{1,\ldots,1\}$ with an even number of $1$'s. Every other set can, by (in)judicious play, lead to a losing position of the form $S=\{a\}$. This can be proved by induction on the total of the integers in $S$: If $S$ is not one of the two specified types (and not yet a single number), it can be made smaller, yet still not of either type in $\mathbb{W}$, by subtracting $1$ from each of the two smallest numbers, unless $S=\{1,1,a,a\}$ with $a\gt1$, in which case subtracting $1$ from $1$ and $a$, which leaves $\{1,a-1,a\}$, does the trick.</p> <p>The set $\mathbb{L}$ consists of sets $S$ for which the total of the integers is odd and sets for which the largest integer is greater than the sum of the others. This is again proved by induction, based on the fact that whatever you do, the smaller set after a play has been made will still either have an odd sum or its largest number will still exceed the sum of the other numbers.</p> <p>As for a solitaire strategy for clearing the set (when possible), it simply amounts to always subtracting $1$ from the largest element; the other $1$ can be subtracted from anything. </p>
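For a concrete sanity check of the clearing strategy (my own sketch, not from the answer): here I always take the two largest entries, one specific instance of "subtract from the largest element, and take the other $1$ from anything". On all small triples it clears exactly the sets outside $\mathbb{L}$:

```python
from itertools import combinations_with_replacement

def clears(nums):
    # greedily subtract 1 from the two largest entries until done or stuck
    nums = sorted(nums)
    while sum(nums) > 0:
        if len([x for x in nums if x > 0]) < 2:
            return False          # stuck with a single nonzero pile left
        nums.sort()
        nums[-1] -= 1             # the largest element
        nums[-2] -= 1             # "anything" = the next largest
    return True

def in_L(nums):
    # the characterisation above: odd total, or largest > sum of the others
    return sum(nums) % 2 == 1 or max(nums) > sum(nums) - max(nums)

for nums in combinations_with_replacement(range(1, 8), 3):
    assert clears(list(nums)) == (not in_L(nums))
```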
253,152
<p>So I was given $f(x)$ continuous and positive on $[0,\infty)$, and need to show that $g(x)$ is increasing on $(0,\infty)$.</p> <p>And $g(x)={\int_0^xtf(t)dt\over \int_0^xf(t)dt} $</p> <p>So my approach is that I want to show $g'(x)&gt;0$, so I used the FTC and the quotient rule to compute the derivative $g'(x)$, but then I got stuck midway because I cannot simplify it. </p>
Nameless
28,087
<p>You want to show that for $x&gt;0$, \begin{equation}g^{\prime}(x)=\frac{xf(x)\int_0^xf(t)dt-f(x)\int_0^xtf(t)dt}{(\int_0^xf(t)dt)^2} &gt;0.\end{equation} Since $f(x)&gt;0$, this is equivalent to $0&lt;x\int_0^xf(t)dt-\int_0^xtf(t)dt=\int_0^x(x-t)f(t)dt,$ which is true since $x-t&gt;0$ for $0&lt;t&lt;x$ and $f$ is positive. </p>
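As a numerical sanity check (mine, with one arbitrary concrete positive $f$, here $f(t) = e^{-t} + \tfrac1{10}$), $g$ is indeed increasing; intuitively $g(x)$ is the $f$-weighted mean of $t$ over $[0,x]$.

```python
import math

def g(x, f, n=2000):
    # midpoint-rule approximations of ∫₀ˣ t f(t) dt and ∫₀ˣ f(t) dt
    h = x / n
    num = sum((i + 0.5) * h * f((i + 0.5) * h) for i in range(n)) * h
    den = sum(f((i + 0.5) * h) for i in range(n)) * h
    return num / den

f = lambda t: math.exp(-t) + 0.1       # continuous and positive on [0, ∞)
values = [g(x, f) for x in (0.5, 1.0, 2.0, 4.0, 8.0)]
assert all(a < b for a, b in zip(values, values[1:]))   # g is increasing
```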
3,893,440
<p>Suppose we have <span class="math-container">$4$</span> books on Math, <span class="math-container">$5$</span> books on English and <span class="math-container">$6$</span> books on History. In how many ways can you put them on your bookshelf if you want: <br/> <span class="math-container">$1)$</span> The first book is a math book. <br/> <span class="math-container">$2)$</span> All math books are at the beginning. <br/> <span class="math-container">$3)$</span> Math and English books will stay together. <br/> <span class="math-container">$4)$</span> The first book and the last book are both Math books.</p> <p>(I'm completely lost with this problem. If someone can give me an explanation that would be great.)</p> <p>I know that we have <span class="math-container">$3$</span> subjects so there are <span class="math-container">$3!$</span> or <span class="math-container">$6$</span> possibilities, and for <span class="math-container">$4$</span> math books we have <span class="math-container">$4!$</span> or <span class="math-container">$24$</span> ways to order. When it is <span class="math-container">$5$</span> English books we have <span class="math-container">$5!$</span> or <span class="math-container">$120$</span> ways to order. If it is <span class="math-container">$6$</span> history books we have <span class="math-container">$6!$</span> or <span class="math-container">$720$</span> ways to order. So I think the answer is <span class="math-container">$6 \times 24 \times 120 \times 720=12,441,600$</span> ways to order the books.</p>
fleablood
280,126
<ol> <li>The first book must be a math book:</li> </ol> <p>There are <span class="math-container">$4$</span> choices for that book.</p> <p>Once you choose that math book, there are <span class="math-container">$14$</span> books remaining. They can go in any order, so there are <span class="math-container">$14!$</span> ways to do that.</p> <p>So there are <span class="math-container">$4 \times 14!$</span> ways to do problem 1).</p> <ol start="2"> <li>Math books must go first.</li> </ol> <p>There are <span class="math-container">$4$</span> math books that must go, in any order, in the first <span class="math-container">$4$</span> positions. There are <span class="math-container">$4!$</span> ways to do that.</p> <p>There are <span class="math-container">$11$</span> books remaining, so there are <span class="math-container">$11!$</span> ways to order them.</p> <p>So there are <span class="math-container">$4!\times 11!$</span> ways to do that.</p> <p>....</p> <p>Can you continue?</p>
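The multiplication-principle counts are easy to spot-check by brute force on a scaled-down shelf, say 2 math, 1 English and 1 History book (my own toy instance, not the original problem's sizes):

```python
from itertools import permutations
from math import factorial

books = ["M1", "M2", "E1", "H1"]           # 2 math, 1 English, 1 History

# 1) first book is a math book: (#math) × (remaining)!
count1 = sum(1 for p in permutations(books) if p[0].startswith("M"))
assert count1 == 2 * factorial(3)          # 2 × 3! = 12

# 2) all math books at the beginning: (#math)! × (rest)!
count2 = sum(1 for p in permutations(books)
             if all(b.startswith("M") for b in p[:2]))
assert count2 == factorial(2) * factorial(2)   # 2! × 2! = 4
```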
1,067,051
<p>How can I find the point of intersection of <span class="math-container">$y=e^{-x}$</span> and <span class="math-container">$y=x$</span> ?</p> <p><a href="https://i.stack.imgur.com/VoX32.png" rel="nofollow noreferrer">Here's the graph</a></p>
George V. Williams
54,806
<p>The solution to this equation can be expressed in terms of the Lambert-W function.</p> <p>$$ e^{-x} = x $$ $$ 1 = xe^x $$ $$ x = W(1) \approx 0.567$$</p> <p>Note that the last step is by definition.</p>
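Numerically (a quick check of my own), bisection on $h(x) = e^{-x} - x$ recovers $W(1)$, the omega constant:

```python
import math

def bisect(h, lo, hi, iters=60):
    # assumes h(lo) > 0 > h(hi); halve the bracket until it is tiny
    for _ in range(iters):
        mid = (lo + hi) / 2
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

h = lambda x: math.exp(-x) - x    # h(0) = 1 > 0 and h(1) = e⁻¹ − 1 < 0
root = bisect(h, 0.0, 1.0)
assert abs(root - 0.5671432904) < 1e-8     # W(1) ≈ 0.567143…
assert abs(math.exp(-root) - root) < 1e-12 # it really solves e^{-x} = x
```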
3,409,598
<p>Given three equations:</p> <p><span class="math-container">$$\log{(2xy)} = (\log{(x)})(\log{(y)})$$</span> <span class="math-container">$$\log{(yz)} = (\log{(y)})(\log{(z)})$$</span> <span class="math-container">$$\log{(2zx)} = (\log{(z)})(\log{(x)})$$</span></p> <p>Find the real solutions (x, y, z).</p> <p>What should I do to get the answer? I don't think x = y = z gives a solution, and I have no idea what method to use. Please show me a hint.</p>
Milten
620,957
<p>You are sort of close. But you treat all the <span class="math-container">$x_i$</span> as if they were <span class="math-container">$x$</span>. Also, the answer should be an <span class="math-container">$n$</span>-dimensional vector.</p> <p>The gradient is <span class="math-container">$\nabla f = (\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}, \ldots, \frac{\partial f}{\partial x_n})$</span>, so we need to work out <span class="math-container">$\frac{\partial f}{\partial x_i}$</span>, i.e. the partial derivatives. Using the chain rule: <span class="math-container">$$ \frac{\partial f}{\partial x_i} = \frac{\partial}{\partial x_i} \left(\sum_{i=1}^n x_i^2\right)^\frac12 = \frac12 \left(\sum_{i=1}^n x_i^2\right)^{-\frac12} \cdot \left(\frac{\partial}{\partial x_i} \sum_{i=1}^n x_i^2\right) = \frac12 \left(\sum_{i=1}^n x_i^2\right)^{-\frac12}\cdot 2x_i = \frac{x_i}{\Vert x\Vert} $$</span> (Do you see why the third equality is right?) Therefore, the gradient is <span class="math-container">$$ \nabla f = \left(\frac{x_1}{\Vert x\Vert},\ldots,\frac{x_n}{\Vert x\Vert}\right) = \frac{x}{\Vert x\Vert} $$</span> There are other ways to show the result (probably the most elegant using <span class="math-container">$\Vert x\Vert^2 = x\cdot x$</span> with the product rule), but this was the most basic way.</p> <p>Hopefully you can go on and find the Hessian now?</p> <p>[Hessian hint]: For the Hessian, you need to work out all possible double derivatives: <span class="math-container">$$ \frac{\partial^2f}{\partial x_j\partial x_i} = \frac{\partial}{\partial x_j} \frac{\partial f}{\partial x_i} = \frac{\partial}{\partial x_j} \frac {x_i}{\Vert x\Vert} $$</span> The result will depend on whether <span class="math-container">$j=i$</span> or <span class="math-container">$j\ne i$</span>. In the case <span class="math-container">$j=i$</span>, you need to use the quotient rule.
Also, remember that we already know what <span class="math-container">$\frac{\partial}{\partial x_j}\Vert x\Vert$</span> is, which you will need. </p> <p>If this is too complicated, try the simple case of two dimensions first: <span class="math-container">$\Vert (x,y)\Vert = (x^2+y^2)^\frac 12$</span>, and find the Hessian for this function, and then try to generalise. </p>
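A finite-difference check of $\nabla f = x/\Vert x\Vert$ (my own verification, at an arbitrary point):

```python
import math

def norm(x):
    return math.sqrt(sum(t * t for t in x))

def numeric_grad(f, x, h=1e-6):
    # central differences, one coordinate at a time
    g = []
    for i in range(len(x)):
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

x = [1.0, -2.0, 3.0, 0.5]
analytic = [t / norm(x) for t in x]        # the formula x / ‖x‖
numeric = numeric_grad(norm, x)
assert all(abs(a - b) < 1e-6 for a, b in zip(analytic, numeric))
```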
1,365,489
<p>What is the value of the following expression?</p> <p>$$\sqrt[3]{\ 17\sqrt{5}+38} - \sqrt[3]{17\sqrt{5}-38}$$</p>
g.kov
122,782
<p>\begin{align} x&amp;=\sqrt[3]{17\sqrt{5}+38} - \sqrt[3]{17\sqrt{5}-38} \end{align}</p> <p>Note that, since $(17\sqrt{5})^2-38^2=1445-1444=1$, \begin{align} 17\sqrt{5}-38&amp;=\frac{1}{17\sqrt{5}+38}. \end{align}</p> <p>Let $a=\sqrt[3]{17\sqrt{5}+38}$. Then we have \begin{align} x^3&amp;=\left(a-\frac1a\right)^3 \\ x^3&amp;= a^3-3a+\frac3a-\frac{1}{a^3} \\ x^3+3\left(a-\frac1a\right)&amp;= a^3-\frac{1}{a^3}=\left(17\sqrt{5}+38\right)-\left(17\sqrt{5}-38\right)=76 \\ x^3+3x-76&amp;=0 \\ (x-4)(x^2+4x+19)&amp;=0. \end{align} The only real solution, $x=4$, is the answer (the quadratic factor has negative discriminant).</p>
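A quick numeric confirmation of my own; in fact $a = \sqrt5 + 2$ and $1/a = \sqrt5 - 2$, since $(\sqrt5+2)^3 = 5\sqrt5 + 30 + 12\sqrt5 + 8 = 17\sqrt5 + 38$.

```python
a = (17 * 5 ** 0.5 + 38) ** (1 / 3)
b = (17 * 5 ** 0.5 - 38) ** (1 / 3)
assert abs(a - b - 4) < 1e-9              # the expression equals 4
assert abs(a - (5 ** 0.5 + 2)) < 1e-9     # indeed a = √5 + 2
assert abs(b - (5 ** 0.5 - 2)) < 1e-9     # and b = √5 − 2
```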
126,739
<p><strong>I changed the title and added revisions and left the original untouched</strong> </p> <p>For this post, $k$ is defined to be the square root of some $n\geq k^{2}$. Out of curiosity, I took the sum of one of the factorials in the denominator of the binomial theorem; $$\sum _{k=1}^{\infty } \frac{1}{k!} \equiv e-1$$ <a href="http://oeis.org/A091131" rel="nofollow">OEIS A091131</a></p> <p>Because I need to show that only the contiguous non-overlapping sequences of size $k$ up to $k^{2}+2k$ are valid for my purpose, I took the same sum with the denominator multiplied by $k+2$: $$\sum _{k=1}^{\infty } \frac{1}{(k+m) k!} \equiv \frac{1}{2}\text{ for $m=2$ }$$ <a href="http://oeis.org/A020761" rel="nofollow">OEIS A020761</a></p> <p>This is not a sum that I expected.</p> <p>When $m\neq2$ the convergence returns alternating values like $\frac{1}{k}(-x+y e)$ and $\frac{1}{k}(x^{\prime}-y^{\prime} e)$, so $\frac{1}{2}$ seems to be the only value constructed out of integers.</p> <p>Two questions:</p> <p>$1)$ Is there a proof technique that can use this specific convergence to show that $k+2$ is the natural limit to my sequences? And that those specific non-overlapping sequences are the only ones that apply?</p> <p>$2)$ Is this convergence interesting enough to put into OEIS?</p> <p>I need some hints for my next step.</p> <p><strong>Edit</strong><br> Q1 is answered. I have enough info to keep me going for a few months.<br> Q2: if you look at the OEIS entries for constants like $\pi$ and $e$, you will see dozens of identities. The entry for $\frac{1}{2}$ has only two identities. I feel it should have many more. But just because I find this series interesting doesn't mean others do; hence the question. </p> <p>My motivation is to prove <a href="http://en.wikipedia.org/wiki/Oppermann%27s_conjecture" rel="nofollow">Oppermann's conjecture</a>.
Thanks for the great answers and comments, and your patience.</p> <p><strong>Revised</strong></p> <p>Original post revised to use $k=0$ as starting index. And we show an example of the underlying pattern. </p> <p>$ e= \sum_{k=0}^{\infty} 1/k!\textit{ Revised }$ </p> <p>$ e-1= \sum_{k=0}^{\infty} 1/((k+m)k!)\text{ for }m=1$ </p> <p>$ 1= \sum_{k=0}^{\infty} 1/((k+m)k!)\text{ for }m=2$ </p> <p>$\sum_{k=0}^{\infty} 1/((k+m)k!)\not \in \textbf{Q} \text{ for }m&gt;2$ </p> <p>Example of underlying pattern for (say) $k=3$: </p> <p>$(1, 2, 3), (4, 5, 6), (7, 8, 9), (10, 11, 12), (13, 14, 15)$<br> $(1, 2, 3), (1, 2, 3), (1, 2, 3), (1, 2, 3), (1, 2, 3)$<br> $(1, 2, 3), (2, 1, 2), (1, 2, 3), (2, 1, 2), (1, 2, 3)$ </p> <p>Top: Number line partitioned into $k+2$ non-overlapping ordered lists<br> Middle: Equivalence classes $n-1 \mod k +1$<br> Bottom: Least divisors. $1= p_{x}$ </p> <p>What is it about these patterns that causes the convergence result for $m=2$ to be $\in \textbf{Q}$?</p> <p><strong>Coda</strong></p> <p>Removed the identities as not quite in step. Below I show the summand of my function on left, the summand of an 'instep' identity, and a variation of the identity.</p> <p>$$\frac{1}{(k+2)k!} \equiv \frac{1}{(k+1)!+k!} \equiv \frac{1}{\Gamma(k+2)+k!}$$ </p> <p>So, $\frac{1}{(k+2)k!}$ sums two consecutive factorials. Why? </p> <p><strong>New</strong> This ratio equals $(e-1)^{-1}$ as shown <a href="http://mathworld.wolfram.com/ContinuedFraction.html" rel="nofollow">here</a>,</p> <p>$$ \frac{\sum _{k=0}^{\infty } \frac{1}{(k+2) k!}}{\sum _{m=0}^{\infty } \left(\sum _{k=m}^{\infty } \frac{1}{(k+2) k!}\right)}=\frac{1}{1+\frac{2}{2+\frac{3}{3+\frac{4}{4+\frac{5}{5+\frac{6}{6+\frac{7}{7+\frac{8}{8+\frac{9}{9+\frac{10}{10+11}}}}}}}}}} $$</p> <p><strong>Another interesting pattern for the series:</strong><br> $$ 11_2,22_3,33_4,44_5,55_6,66_7,77_8,88_9,99_{10},\text{AA}_{11},\text{BB}_{12},\text{CC}_{13}{}{}{} $$</p>
Tibor Pogany
113,270
<p>\begin{align*} S_m &amp;= \sum_{k \geq 0} \frac1{(k+m)k!} = \sum_{k \geq 0} \frac{\Gamma(m+k)}{\Gamma(m+k+1) k!} \\ &amp;= \frac1m \sum_{k \geq 0} \frac{(m)_k}{(m+1)_k} \frac1{k!} = \frac1m {}_1F_1(m; m+1; 1)\,. \end{align*} The desired value is $S_2 = 1$, since the confluent hypergeometric function satisfies $$ {}_1F_1(2; 3; 1) = 2.$$</p>
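A quick numerical sanity check of the two values discussed above (my addition, not part of the original answer; plain floating-point partial sums with 25 terms, whose tail is far below machine precision):

```python
import math

# Partial sums of sum_{k>=0} 1/((k+m) k!) for m = 1 and m = 2.
# The answer's claim: S_1 = e - 1 and S_2 = 1.  Starting the sum at
# k = 1 instead, as the question does, drops the k = 0 term 1/2 from
# S_2, which is where the post's value 1/2 comes from.
def S(m, terms=25):
    return sum(1.0 / ((k + m) * math.factorial(k)) for k in range(terms))

S1, S2 = S(1), S(2)
```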
1,231,772
<p>Motivated by Baby Rudin Exercise 6.9</p> <p>I need to show that $\int_0^\infty \frac{|\cos x|}{1+x} \, dx$ diverges.</p> <p>My attempt: </p> <p>$\frac{|\cos x|}{1+x} \geq \frac{\cos^2 x}{1+x}$, and then $\int_0^\infty \frac{\cos^2 x}{1+x} \, dx + \int_0^\infty \frac{\sin^2 x}{1+x} \, dx = \int_0^\infty \frac{1}{1+x} \, dx$. </p> <p>Since the right integral diverges, either or both of the integrals on the left most diverge. Since both diverge (at least I'm inclined to believe), now if I show that $\int_0^\infty \cos^2 x / (1+x) \, dx \geq \int_0^\infty \sin^2 x / (1+x) \, dx $ we'll be done. Here's where I am stuck.</p>
wlad
228,274
<p>$\int_{\pi/2}^{\infty} \frac{|\cos x|}{1+x}\,dx = \sum_{r=0}^{\infty} \int_{\pi/2 + \pi r}^{\pi/2 + \pi (r+1)} \frac{|\cos x|}{1+x}dx \geq \sum_{r=0}^{\infty} \int_{\pi/2 + \pi r}^{\pi/2 + \pi (r+1)} \frac{|\cos x|}{1+(\pi/2 + \pi (r+1))}dx = \sum_{r=0}^{\infty} \frac{1}{1+(\pi/2 + \pi (r+1))} \int_{\pi/2 + \pi r}^{\pi/2 + \pi (r+1)} |\cos x| dx = \int_{\pi/2}^{\pi/2 + \pi} |\cos x| dx \sum_{r=0}^{\infty} \frac{1}{1+(\pi/2 + \pi (r+1))} = \infty$</p> <p>Basically, split the integral up into regions from $[\pi/2, 3\pi/2], [3\pi/2, 5\pi/2] \dotso $ and consider the lowest value of $\frac{1}{1+x}$ in that region. Use the fact that the Harmonic series diverges.</p> <p>[edit] Fixed.</p>
1,231,772
<p>Motivated by Baby Rudin Exercise 6.9</p> <p>I need to show that $\int_0^\infty \frac{|\cos x|}{1+x} \, dx$ diverges.</p> <p>My attempt: </p> <p>$\frac{|\cos x|}{1+x} \geq \frac{\cos^2 x}{1+x}$, and then $\int_0^\infty \frac{\cos^2 x}{1+x} \, dx + \int_0^\infty \frac{\sin^2 x}{1+x} \, dx = \int_0^\infty \frac{1}{1+x} \, dx$. </p> <p>Since the right integral diverges, either or both of the integrals on the left most diverge. Since both diverge (at least I'm inclined to believe), now if I show that $\int_0^\infty \cos^2 x / (1+x) \, dx \geq \int_0^\infty \sin^2 x / (1+x) \, dx $ we'll be done. Here's where I am stuck.</p>
Adhvaitha
228,265
<p>We have \begin{align} I_n &amp; = \int_{n\pi}^{(n+1)\pi} \dfrac{\vert \cos(x) \vert}{1+x}dx \geq \int_{n\pi}^{n\pi+\pi/6} \dfrac{\vert \cos(x) \vert}{1+x}dx+\int_{(n+1)\pi-\pi/6}^{(n+1)\pi} \dfrac{\vert \cos(x) \vert}{1+x}dx\\ &amp; \geq \frac12\int_{n\pi}^{n\pi+\pi/6} \dfrac{dx}{1+x}+\frac12\int_{(n+1)\pi-\pi/6}^{(n+1)\pi} \dfrac{dx}{1+x} &gt; \frac12\cdot\dfrac{\pi/3}{(n+2)\pi} = \dfrac1{6(n+2)}, \end{align} since $\vert\cos(x)\vert\geq\cos(\pi/6)&gt;\frac12$ on both pieces (each lies within $\pi/6$ of a multiple of $\pi$) and $1+x &lt; (n+2)\pi$ on the whole interval. Hence, your integral is bounded below by $\sum_{n=0}^{\infty} I_n$, which clearly diverges.</p>
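To make the divergence concrete, here is a small numeric illustration (my addition, not from the answer; the midpoint-rule step count is an arbitrary choice): each piece $I_n$ really does exceed the lower bound $\frac1{6(n+2)}$, and summing those bounds gives a divergent harmonic-type series.

```python
import math

# Midpoint-rule approximation of I_n = ∫_{nπ}^{(n+1)π} |cos x|/(1+x) dx.
def I(n, steps=20000):
    a = n * math.pi
    h = math.pi / steps
    return sum(abs(math.cos(a + (j + 0.5) * h)) / (1 + a + (j + 0.5) * h) * h
               for j in range(steps))

# Compare each I_n with the answer's lower bound 1/(6(n+2)).
checks = [(I(n), 1 / (6 * (n + 2))) for n in range(10)]
```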
755,571
<p>$$a_n=3a_{n-1}+1; a_0=1$$</p> <p>The book has the answer as: $$\frac{3^{n+1}-1}{2}$$</p> <p>However, I have the answer as: $$\frac{3^{n}-1}{2}$$</p> <p>Based on:</p> <p><img src="https://i.stack.imgur.com/4vJrQ.png" alt="enter image description here"></p> <p>Which one is correct?</p> <p>Using backwards substitution iteration, the end of this will be $$3^{n-1}a_0+3^{n-2}+3^{n-3}+...+3+1$$</p> <p>which is $$=3^{n-1}+3^{n-2}+3^{n-3}+...+3+1=\sum_{i=0}^{n-1}3^i$$</p> <p>Which according to the theorem should be $$\frac{3^{(n-1)+1}-1}{(3-1)}=\frac{3^{n}-1}{2}$$</p>
lhf
589
<p>Write $b_n= a_n +\alpha$ and find $\alpha$ such that the recurrence reduces to $b_{n+1}=3b_n$. You'll find that $\alpha=1/2$ works. Then of course $b_n=3^nb_0=3^n(a_0+\alpha)$ and $a_n=b_n-\alpha=3^na_0+(3^n-1)\alpha$. With $a_0=1$ and $\alpha=1/2$ this gives $a_n=\frac{3^{n+1}-1}{2}$, so the book's answer is correct; your iteration stopped one step early (it should end at $3^{n}a_0+3^{n-1}+\cdots+1$).</p>
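The substitution above is easy to check mechanically; a short script (my addition) confirms that the closed form $(3^{n+1}-1)/2$ reproduces the iterated recurrence $a_n = 3a_{n-1}+1$, $a_0 = 1$:

```python
# Closed form from the answer: a_n = (3^(n+1) - 1) / 2 (exact integer arithmetic).
def closed(n):
    return (3 ** (n + 1) - 1) // 2

# Generate the sequence directly from the recurrence.
a = 1
seq = [a]
for _ in range(20):
    a = 3 * a + 1
    seq.append(a)

ok = all(closed(n) == seq[n] for n in range(21))
```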
2,887,880
<p>I read this <a href="https://www.reddit.com/r/math/comments/8frbe2/what_is_a_natural_way_to_represent_nonlinear/" rel="nofollow noreferrer">reddit</a> post and this <a href="https://math.stackexchange.com/q/1388566/553404">SE thread</a> discussing how to represent nonlinear/linear transforms in matrix notation, but they were not sufficient.</p> <p>In quantum mechanics, scientists use infinite matrices to represent operators. Should the operators be linear to be represented as matrices? If so, should the operators be linear <strong>even</strong> to be represented as <strong>infinite</strong> matrices? Or can infinite matrices represent nonlinear operators too?</p>
Bananach
70,687
<p>In some sense everything you can possibly define can be encoded in numbers, and you are free to write those numbers in matrix form to get a linear operator. Thus the answer to your question is trivially yes.</p> <p>However, the answer is also nontrivially yes. Such a yes depends on what connections between the linear and the nonlinear operations you consider meaningful. One way to make such a meaningful connection is the use of Koopman operators in dynamical systems: <a href="https://faculty.missouri.edu/~liyan/Coll.pdf" rel="nofollow noreferrer">https://faculty.missouri.edu/~liyan/Coll.pdf</a></p>
1,190,759
<p>I was trying to show the following $\int_{-\infty}^{\infty} x^{2n}e^{-x^2}dx = (2n)!\sqrt{\pi}/(4^n n!)$ by using $\int_{-\infty}^{\infty} e^{-tx^2}dx = \sqrt{\pi/t}$, thus</p> <p>I differentiated this exponential integral $n$ times to get the following: </p> <p>$\int_{-\infty}^{\infty} \frac{d^ne^{-tx^2}}{dt^n}dx =\frac{2^{n}\times \sqrt{\pi}t^{\frac{2n-1}{2}}} {1\times 3\times 5 \times ... \times (2n-1)}$. After applying the limit $t\rightarrow 1$ I am not getting the desired result. Where am I going wrong? </p> <p>Thanks </p>
Alijah Ahmed
124,032
<p>The first thing is that your answer should be $$\int_{-\infty}^{\infty} x^{2n}e^{-tx^2}dx=(-1)^n\int_{-\infty}^{\infty} \frac{d^ne^{-tx^2}}{dt^n}dx=\frac{1\times 3\times 5 \times ... \times (2n-1)\times \sqrt{\pi}\,t^{-\frac{2n+1}{2}}}{2^{n}} $$ based on the repeated differentiation of $t^{-1/2}\sqrt{\pi}$ (each derivative with respect to $t$ brings down a factor $-x^2$ on the left, and a factor of the form $-(2j-1)/2$ times $t^{-1}$ on the right).</p> <p>Then, if you multiply both numerator and denominator by $2\times4\times\cdots\times(2n)=2^n n!$, you will end up with $$\int_{-\infty}^{\infty} x^{2n}e^{-tx^2}dx=\frac{(2n)!\times \sqrt{\pi}\,t^{-\frac{2n+1}{2}}}{2^{n}\times2^n\times n!}=\frac{(2n)!\times \sqrt{\pi}\,t^{-\frac{2n+1}{2}}}{4^{n}\times n!}$$ Setting $t=1$ will give you the desired answer.</p>
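The end result can be cross-checked against the Gamma function, since $\int_{-\infty}^{\infty} x^{2n}e^{-x^2}\,dx=\Gamma(n+\tfrac12)$ (my addition, not part of the answer):

```python
import math

# The closed form claimed in the question/answer: (2n)!·√π / (4^n · n!).
def closed_form(n):
    return math.factorial(2 * n) * math.sqrt(math.pi) / (4 ** n * math.factorial(n))

# Compare with Γ(n + 1/2), the known value of the even Gaussian moment.
ratios = [closed_form(n) / math.gamma(n + 0.5) for n in range(11)]
```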
949,664
<p>Consider the Laplace transform $\int_{0}^{\infty} e^{-px}f(x)\,dx$ <br/> Assume $f(x)=1$ , then the Laplace transform is $\frac {1}{p}$. <br/> Assume $f(x)=x$ , then the Laplace transform is $\frac {1}{p^2}$.<br/> The question is, what will happen to the $f(x)$ after getting transformed?<br/> Why should the function be transformed and what aspect of initial function will remain in the Laplace transform that makes it so important?<br/> If someone can give geometric intuition of it, it will be a plus!<br/> Thanks</p>
Christian Blatter
1,303
<p>The Laplace transform ${\cal L}$ is applied solely to known or unknown functions which are defined explicitly or implicitly in terms of <em>"analytic formulas"</em>. What makes ${\cal L}$ useful is solely its <em>formal</em> algebraic properties, encoded in certain rules of term manipulation. A central ingredient of the Laplace philosophy is <em>Lerch's theorem</em>, which says that ${\cal L}$ is injective. So, when you have found a solution $s\mapsto Y(s)$ in transform space it suffices to look up the unique function $t\mapsto y(t)$ whose transform is $Y$ in a suitable catalogue.</p> <p>Don't hope for an "intuitive content" encoded in ${\cal L}f$. Nobody has ever looked at the graph of an ${\cal L}f$, or has computed ${\cal L}f$ for an $f$ which is only defined by a data set. This is in sharp contrast to the Fourier transform: Of course we work with it all the time in theoretical discussions, but apart from that the Fourier transform $\hat f$ of a time signal $f$ conveys interesting "intuitive information" about $f$, and people are Fourier-transforming discretely sampled time signals all the time.</p>
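Even if ${\cal L}f$ carries no intuition, the catalogue entries themselves are easy to verify numerically; a sketch (my addition; the cutoff $20$ and the step count are arbitrary choices) for $f(x)=x$ at $p=2$, where the transform should equal $1/p^2$:

```python
import math

# Trapezoidal approximation of ∫_0^∞ x e^{-px} dx, truncated at x = 20
# (the tail beyond 20 is of order e^{-40} and negligible).
p, upper, steps = 2.0, 20.0, 200_000
h = upper / steps
f = lambda x: x * math.exp(-p * x)
approx = h * (0.5 * f(0.0) + sum(f(j * h) for j in range(1, steps)) + 0.5 * f(upper))
```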
1,955,393
<p>I have been trying to evaluate this limit:</p> <p>$$\lim_{n\to\infty}{\sqrt[n]{4^n + 5^n}}$$</p> <p>What methods should I try in order to proceed?</p> <p>I was advised to use "Limit Chain Rule", but I believe there is a different approach.</p>
E.H.E
187,799
<p>Hint: $$\lim_{n\to\infty}{\sqrt[n]{4^n + 5^n}}=\lim_{n\to\infty}5\sqrt[n]{\left(\frac{4}{5}\right)^n + 1}$$</p>
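Following the hint numerically (my addition): $(4/5)^n\to 0$, so the quantity under the root tends to $1$, its $n$-th root tends to $1$ even faster, and the whole expression decreases to $5$.

```python
# Evaluate 5·((4/5)^n + 1)^(1/n), the factored form from the hint, for growing n.
vals = [5 * ((4 / 5) ** n + 1) ** (1 / n) for n in (1, 10, 100, 1000)]
```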
990,512
<p>Suppose that the probability of $x=0$ is $p$, and the probability of $x=1$ is $1-p=q$. Consider the random sequence $X=\{X_i\}_{i=1}^{\infty}$. We map this sequence by $C$ to a point in the interval $[0,1]$ as below:</p> <p>$1)$ We look at the first random variable. If it is $0$, then we update the interval to $I_1=[0,p)$, else update it to $I_1=[p,1)$.</p> <p>$2)$ Let $I_k=[a,b)$. Look at the $(k+1)^{th}$ random variable. If it is $0$, then we update the interval to $I_{k+1}=[a,a+p(b-a))$, else we update it to $I_{k+1}=[b-q(b-a),b)$.</p> <p>We continue this process until the intervals shrink to a point as the length of the random process goes to infinity. As an example, if the first $2$ random variables are $01$, then we have:</p> <p>$I_1=[0,p)$</p> <p>$I_2=[p-qp,p)$.</p> <p>Find the pdf of $C(X)$. </p>
Did
6,179
<p>At the $n$th step, $I_n=J$, where $J$ is one of $2^n$ disjoint intervals whose union is $(0,1)$. Each of these intervals $J$ has length $p^kq^{n-k}$ for some $0\leqslant k\leqslant n$ and the probability that $I_n=J$ is $p^kq^{n-k}$. This holds for every $n$ hence $C(X)$ is uniformly distributed in $(0,1)$.</p>
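A simulation (my addition; the parameters $p=0.3$, 40 bits per sequence and 20000 samples are arbitrary choices) supports the uniform-distribution conclusion:

```python
import random

random.seed(0)
p = 0.3  # P(bit = 0); the construction should give Uniform(0,1) for any p

def point(bits):
    """Map a finite bit string onto [0,1) by the interval construction,
    returning the midpoint of the final (tiny) interval."""
    a, b = 0.0, 1.0
    for x in bits:
        a, b = (a, a + p * (b - a)) if x == 0 else (b - (1 - p) * (b - a), b)
    return (a + b) / 2

samples = [point([0 if random.random() < p else 1 for _ in range(40)])
           for _ in range(20000)]
mean = sum(samples) / len(samples)
frac_below_quarter = sum(s < 0.25 for s in samples) / len(samples)
```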
898,543
<p>I have the random vector $(X,Y)$ with density function $8x^{2}y$ for $0 &lt; x &lt; 1$, $0 &lt; y &lt; \sqrt{x}$. I am trying to find the marginal distributions of $X$ and $Y$. For $X$ this seems to be simply the integral $\int_{0}^{\sqrt{x}}8x^{2}y\,dy = 4x^{3}$, which is also the given solution, and follows the general formula I've gotten, where you find marginal distributions of a variable by integrating the joint PDF of all other variables over their supports. However, this seems to fail in the case of $Y$, where I try the integral $\int_{0}^{1}8x^{2}y\,dx = \frac{8y}{3}$, conflicting with the given answer of $\frac{8y}{3}(1-y^{6})$. What am I misunderstanding here? This seems painfully simple, and I have never had issues finding a marginal distribution like this before.</p>
Hypergeometricx
168,053
<p>Draw a Venn diagram!</p> <p>Let probabilities be as follows:</p> <p>$a$=Blue only</p> <p>$b$=Blue and Left</p> <p>$c$=Left only</p> <p>$d$=None</p> <p>From probabilities given, </p> <p>$$\begin{align} \frac b{a+b}=\frac 17 \quad \Rightarrow a&amp;=6b\\ \frac b{b+c}=\frac 13 \quad \Rightarrow c&amp;=2b\\ d&amp;=\frac 45\\ \end{align}$$ As the probabilites sum to 1, </p> <p>$$\begin{align} a+b+c+d&amp;=1\\ 6b+b+2b+\frac 45&amp;=1\\ 9b&amp;=\frac 15\\ b&amp;=\frac 1{45}\end{align}$$</p>
1,687,336
<p>I've been searching through the internet and through SE to find something to help me understand generating functions, but I haven't found anything that would solve my problem with them.</p> <p>I understand that </p> <p>$$\frac1{1-x}=\sum_{n\ge 0}x^n\;,\tag{1}$$</p> <p>gives the sequence $(1, 1, 1, 1,...) $ because $$\frac1{1-x}=\sum_{n\ge 0}x^n\;,\tag{1}$$ </p> <p>is just another way of writing the sum $1+x+x^2+x^3+x^4+...$ and the coefficients of each term are 1, and thus the sequence is $1, 1, 1, 1,...$.</p> <p>What I don't get is how does </p> <p>$$\begin{align*} \frac4{1-x^3}&amp;=\sum_{n\ge 0}4x^{3n} \end{align*}\tag{3}$$ equal</p> <p>$$4x^0+0x^1+0x^2+4x^3+0x^4+0x^5+4x^6+0x^7+0x^8+\ldots$$</p> <p>and thus the sequence $(4, 0, 0, 4, 0, 0, 4, 0, 0,...)$.</p> <p>I would say that $$\begin{align*} \frac4{1-x^3}&amp;=\sum_{n\ge 0}4x^{3n} \end{align*}\tag{3}$$</p> <p>is equal to $$4x^{3\times0}+4x^{3\times1}+4x^{3\times2}+4x^{3\times3}...=4+4x^3+4x^6+4x^9\ldots$$</p> <p>There is something that I'm completely not understanding and I would like to know what that something is.</p>
Community
-1
<p>Take $\dfrac1{1-x}=1+x+x^2+x^3+x^4+...$, multiply by $4$ and replace $x$ by $x^3$ to get $\dfrac4{1-x^3}=4+4x^3+4x^6+4x^9+4x^{12}+...$. Is this unclear ?</p>
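The coefficient pattern can also be generated mechanically from $(1-x^3)S(x)=4$, which forces $c_0=4$, $c_1=c_2=0$ and $c_k=c_{k-3}$ for $k\ge3$ (my addition):

```python
# Coefficients of 4/(1 - x^3) from the relation (1 - x^3)·S(x) = 4.
N = 12
c = [0] * N
for k in range(N):
    if k == 0:
        c[k] = 4          # constant term of the right-hand side
    elif k >= 3:
        c[k] = c[k - 3]   # all other coefficients repeat with period 3
```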
2,762,953
<p>I've studied Markov processes with 2x2 matrices. From the linear algebra and calculus procedures it is clear to me how a Markov chain works.</p> <p>However, I'm still not able to grasp the intuitive and immediate meaning of a Markov chain. Why, intuitively, is the state of the system for $n\rightarrow +\infty $ independent of both the initial state and the states reached during the process?</p>
amd
265,466
<p>Suppose that your process with matrix $P$ has a unique stationary distribution $\mathbf\pi_\infty$. This vector is a left eigenvector of $P$ with eigenvalue $1$, since $\mathbf\pi_\infty P=\mathbf\pi_\infty$; suppose moreover that the other eigenvalue $\lambda$ of $P$ has absolute value less than one. Every state vector $\mathbf\pi$ can be decomposed into the sum of the stationary distribution $\mathbf\pi_\infty$ and an eigenvector $\mathbf v$ of $\lambda$. We then have $$\mathbf\pi P^n = \mathbf\pi_\infty P^n+\mathbf v P^n = \mathbf\pi_\infty+\lambda^n\mathbf v.$$ Since $|\lambda|\lt1$, the second term will get smaller and smaller with increasing $n$ and vanish in the limit: intuitively, the influence on the process of the difference between the initial state and the stationary distribution wanes over time. </p> <p>Graphically, the situation looks something like this:</p> <p><a href="https://i.stack.imgur.com/VAdDY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VAdDY.png" alt="![enter image description here"></a></p> <p>Every state vector lies on the line $x+y=1$. With each iteration, the component of the state vector parallel to this line gets shorter and shorter, bringing the state vector closer and closer to $\mathbf\pi_\infty$.</p>
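A concrete $2\times2$ illustration (my addition; the matrix is made up): here the eigenvalues are $1$ and $0.4$, and both extreme starting distributions collapse onto the same stationary vector $(\frac56,\frac16)$, with the initial-state term decaying like $0.4^n$.

```python
# A made-up 2x2 stochastic matrix with eigenvalues 1 and 0.4.
P = [[0.9, 0.1], [0.5, 0.5]]

def step(pi):
    """One step of the chain: row vector times matrix, pi -> pi P."""
    return [pi[0] * P[0][0] + pi[1] * P[1][0],
            pi[0] * P[0][1] + pi[1] * P[1][1]]

pi1, pi2 = [1.0, 0.0], [0.0, 1.0]   # two opposite initial states
for _ in range(100):
    pi1, pi2 = step(pi1), step(pi2)
```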
1,448,363
<p>I have gotten to the next stage where you write it as $\frac{1}{\left(\frac 34\right)}$ to the power of $3$, now I am stuck</p> <p>I've got it now, thanks everyone.</p>
jameselmore
86,570
<p>Hint: $$\left(\frac{b}{a}\right)^{-n} = \left(\frac{a}{b}\right)^n = \frac{a^n}{b^n}$$</p>
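The hint can be verified exactly with rational arithmetic (my addition):

```python
from fractions import Fraction

# (3/4)^(-3) should equal (4/3)^3 = 64/27, exactly.
lhs = Fraction(3, 4) ** -3
rhs = Fraction(4, 3) ** 3
```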
2,617,235
<p>Given a triangle $\Delta$ABC, how to draw any inscribed equilateral triangle whose vertices lie on different sides of $\Delta$ABC?</p>
g.kov
122,782
<p>Another way is to draw a bisector $AD$ of $\angle CAB$, $D\in BC$, find the intersection of the sides $AC$ and $AB$ with the line $AD$, rotated $\pm30^\circ$ around $D$:</p> <p><a href="https://i.stack.imgur.com/Nq1A2.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Nq1A2.png" alt="enter image description here"></a></p> <p><strong>Edit</strong></p> <p>\begin{align} D&amp;=\frac{b\cdot B+c\cdot C}{b+c} ,\\ |AF|=|AE|&amp;=\frac{|AD|}{2\,\sin(30^\circ+\tfrac\alpha2)} . \end{align} </p>
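A numeric check of the construction (my addition; the right triangle $A=(0,0)$, $B=(4,0)$, $C=(0,3)$ is an arbitrary example): compute $D$ from the bisector formula, place $E$ on $AC$ and $F$ on $AB$ at distance $|AD|/(2\sin(30^\circ+\alpha/2))$ from $A$, and confirm that $DEF$ is equilateral.

```python
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

b, c = dist(C, A), dist(A, B)                     # sides adjacent to vertex A
D = ((b * B[0] + c * C[0]) / (b + c),             # foot of the bisector on BC
     (b * B[1] + c * C[1]) / (b + c))
cos_alpha = ((B[0] - A[0]) * (C[0] - A[0]) +
             (B[1] - A[1]) * (C[1] - A[1])) / (b * c)
L = dist(A, D) / (2 * math.sin(math.pi / 6 + math.acos(cos_alpha) / 2))
F = (A[0] + L * (B[0] - A[0]) / c, A[1] + L * (B[1] - A[1]) / c)  # on AB
E = (A[0] + L * (C[0] - A[0]) / b, A[1] + L * (C[1] - A[1]) / b)  # on AC
sides = (dist(D, E), dist(E, F), dist(F, D))
```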
1,708,996
<p>If $x = a( \theta +\sin \theta)$ and $y = a(1-\cos \theta)$ then $\frac{dy}{dx}$ will be equal to : </p> <p>$a) \sin \frac{\theta}{2}$</p> <p>$b) \cos \frac{\theta}{2}$</p> <p>$c) \tan \frac{\theta}{2}$</p> <p>$d) \cot \frac{\theta}{2}$</p> <p>I have solved till : $\frac{dy}{dx} = \frac{\sin \theta}{1 + \cos \theta}$ using $\frac{dy}{dx} = \frac{dy}{d \theta} . \frac{d \theta}{dx}$. </p> <p>How do I reduce to the option's forms?</p>
Archis Welankar
275,884
<p>Hint $$\sin(x)=2\sin(x/2)\cos(x/2)$$ and $$1+\cos(x)=2\cos^2(x/2)$$</p>
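Substituting the two formulas and cancelling $2\cos(\theta/2)$ gives $\frac{\sin\theta}{1+\cos\theta}=\tan\frac\theta2$, option $c)$; a grid check confirms this (my addition):

```python
import math

# Check sin θ / (1 + cos θ) = tan(θ/2) on a grid inside (0, π),
# avoiding θ = π where 1 + cos θ vanishes.
thetas = [0.1 * k for k in range(1, 30)]
errs = [abs(math.sin(t) / (1 + math.cos(t)) - math.tan(t / 2)) for t in thetas]
```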
3,581,724
<p>I suspect a simple wooden toy "lead screw" was made by advancing a cylindrical rotary cutting tool (<em>Cylindrical End Mill Cutter</em>) along the surface of the rotating wooden dowel (base cylinder), resulting in a helical cut (the axes of the cylinders are orthogonal (<em>skew</em>)).</p> <p><a href="https://i.stack.imgur.com/VVTRg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VVTRg.png" alt="enter image description here"></a></p> <p>Videos of the manufacturing process close to what I suspect:</p> <ul> <li><a href="https://youtu.be/5U9lJAgU1oE?t=31" rel="nofollow noreferrer">https://youtu.be/5U9lJAgU1oE?t=31</a> (but: spherical cutter. radial, not tangent end mill)</li> <li><a href="https://youtu.be/y5DOQWiexOQ?t=314" rel="nofollow noreferrer">https://youtu.be/y5DOQWiexOQ?t=314</a> (cutter radial)</li> <li><a href="https://youtu.be/pbaRRsG3BN4?t=9" rel="nofollow noreferrer">https://youtu.be/pbaRRsG3BN4?t=9</a> (not radial, but "tilted")</li> </ul> <p>I have tried to visualise/emulate the resulting geometry using multiple <code>difference</code> operations for cylinder primitives in <a href="https://openjscad.org/" rel="nofollow noreferrer">(Open)JSCAD</a> (see code at end of post) and adjusted the view manually:</p> <p><a href="https://i.stack.imgur.com/PQIUU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PQIUU.png" alt="image of helical track approximated by multiple cylinder (*End mill*)-cuts"></a></p> <p>What is the equivalent (elliptical?) shape that is the cross-section of the helical path?</p> <p><hr/> And: what is the contact surface/line/point of another, slightly smaller cylinder that is used as "lead screw nut" (having the same orientation as the cutting cylinder, i.e. 
orthogonal to the base cylinder) - a point contact on one of the helical edges?</p> <p>Code for JSCAD</p> <pre><code>function main () { let main = cylinder({r: 3, h:10, center: true, fn: 64 }); for (let i=0; i&lt;36; i++) { let cut = cylinder({r: 0.2, h:10, center: true}); cut = translate([0,-3,0],cut); cut = rotate([0,90,i*3],cut); cut = translate([0,0,i*0.1],cut); main = difference(main, cut); } return main; } </code></pre> <p><s>I think the underlying question may be about the surface created by a straight line moved along a spiral ( or helix):</p> <p><a href="https://i.stack.imgur.com/4R595.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4R595.png" alt="blender screenshot"></a> (created with Blender: a mesh edge with Screw modifier)</p> <p>Or the surface created by a helix that has been rotated (spin):</s></p> <p><a href="https://i.stack.imgur.com/wJwjf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wJwjf.png" alt="enter image description here"></a></p> <p>The cross-section of the "cutting" cylinder (<em>End mill</em>) is a circle of course, <s>which is what an infinite number of cuts "converge" to (a cylinder with zero length).</p> <p>Then the cross-section along the helix should be an ellipse (intersection of the hypothetical "cutting" cylinder (<em>End mill cutter</em>) and the plane orthogonal to the helix).</s></p> <hr/> <p>It's not the same as moving a circle along the helix; to illustrate, I've reduced the cylinder's length: <a href="https://i.stack.imgur.com/2BRYO.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2BRYO.gif" alt="enter image description here"></a></p> <p>My "straight line" theory does not apply either, I think these "lines" might be helices created by the intersection of the translated and rotated "cutting" cylinders.</p> <p>So it seems this might be much more involved than I anticipated -- please don't spend too much time on this on my account. 
I was just curious to see whether the "cut" could be better created in 3D by "lofting" the equivalent cross-section along a helix.</p>
Community
-1
<p>If I am right, the cutting surface is described by a revolving vertical circle, the center of which describes a helix of vertical axis.</p> <p>Parametrically:</p> <p><span class="math-container">$$\begin{cases}x=(R+r\cos u)\cos t,\\y=(R+r\cos u)\sin t,\\z=r\sin u+at.\end{cases}$$</span></p> <p>At time <span class="math-container">$0$</span>, the plane normal to the helix is normal to the tangent vector <span class="math-container">$(0,R,a)$</span> and has the equation</p> <p><span class="math-container">$$Ry+az=0.$$</span></p> <p>Hence the equation of the intersection of the cutting surface and the normal plane is given by the condition</p> <p><span class="math-container">$$R(R+r\cos u)\sin t+ar\sin u+a^2t=0.$$</span></p> <p>This equation is transcendental in <span class="math-container">$t$</span>, but can be solved for <span class="math-container">$u$</span>.</p> <p><span class="math-container">$$(Rr\sin t)\cos u+(ar)\sin u=-(a^2t+R^2\sin t)$$</span></p> <p>gives</p> <p><span class="math-container">$$r\sin u=\frac{-a(a^2t+R^2\sin t)\pm R\sin t\sqrt{(Rr\sin t)^2+(ar)^2-(a^2t+R^2\sin t)^2}}{R^2\sin^2t+a^2}$$</span></p> <p>and <span class="math-container">$$r\cos u=\frac{-R\sin t(a^2t+R^2\sin t)\mp a\sqrt{(Rr\sin t)^2+(ar)^2-(a^2t+R^2\sin t)^2}}{R^2\sin^2t+a^2}.$$</span></p> <p>Finally, the curve is given by the planar coordinates <span class="math-container">$\left(x,\dfrac{-ay+Rz}{\sqrt{a^2+R^2}}\right)$</span> obtained by rotating the coordinate frame. The final equation is terrible. It does not describe an ellipse.</p> <p><span class="math-container">$$\begin{cases}x=\left(R+\dfrac{-R\sin t(a^2t+R^2\sin t)\mp a\sqrt{(Rr\sin t)^2+(ar)^2-(a^2t+R^2\sin t)^2}}{R^2\sin^2t+a^2}\right)\cos t, \\y'=\\\dfrac1{\sqrt{a^2+R^2}}\left(-a\left(R+\dfrac{-R\sin t(a^2t+R^2\sin t)\mp a\sqrt{(Rr\sin t)^2+(ar)^2-(a^2t+R^2\sin t)^2}}{R^2\sin^2t+a^2}\right)\sin t+R\left(\dfrac{-a(a^2t+R^2\sin t)\pm R\sin t\sqrt{(Rr\sin t)^2+(ar)^2-(a^2t+R^2\sin t)^2}}{R^2\sin^2t+a^2}+at\right)\right). 
\end{cases}$$</span></p> <p>I leave the study of this curve to the future generations.</p> <hr> <p><strong>Resolution of the trigonometric equation:</strong></p> <p><span class="math-container">$$a\cos u+b\sin u=c\implies a^2(1-\sin^2u)=c^2-2bc\sin u+b^2\sin^2u$$</span></p> <p>gives </p> <p><span class="math-container">$$\sin u=\frac{bc\pm a\sqrt{a^2+b^2-c^2}}{a^2+b^2}$$</span></p> <p>and by symmetry</p> <p><span class="math-container">$$\cos u=\frac{ac\mp b\sqrt{a^2+b^2-c^2}}{a^2+b^2}.$$</span></p> <hr> <p>Disclaimer: done by hand, typos are not excluded.</p>
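The generic resolution at the end is easy to test numerically (my addition; the sample coefficients are arbitrary, chosen with $a^2+b^2\ge c^2$): both sign branches of the closed-form solution satisfy $a\cos u+b\sin u=c$ and $\sin^2u+\cos^2u=1$.

```python
import math

a, b, c = 1.3, 0.7, 0.9                      # arbitrary sample, a²+b² ≥ c²
D = math.sqrt(a * a + b * b - c * c)
residuals = []
for sign in (1, -1):
    # Closed-form solution of a·cos u + b·sin u = c from the answer.
    sin_u = (b * c + sign * a * D) / (a * a + b * b)
    cos_u = (a * c - sign * b * D) / (a * a + b * b)
    residuals.append(abs(a * cos_u + b * sin_u - c))   # equation residual
    residuals.append(abs(sin_u ** 2 + cos_u ** 2 - 1)) # Pythagorean residual
```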
3,581,724
<p>I suspect a simple wooden toy "lead screw" was made by advancing a cylindrical rotary cutting tool (<em>Cylindrical End Mill Cutter</em>) along the surface of the rotating wooden dowel (base cylinder), resulting in a helical cut (the axes of the cylinders are orthogonal (<em>skew</em>)).</p> <p><a href="https://i.stack.imgur.com/VVTRg.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VVTRg.png" alt="enter image description here"></a></p> <p>Videos of the manufacturing process close to what I suspect:</p> <ul> <li><a href="https://youtu.be/5U9lJAgU1oE?t=31" rel="nofollow noreferrer">https://youtu.be/5U9lJAgU1oE?t=31</a> (but: spherical cutter. radial, not tangent end mill)</li> <li><a href="https://youtu.be/y5DOQWiexOQ?t=314" rel="nofollow noreferrer">https://youtu.be/y5DOQWiexOQ?t=314</a> (cutter radial)</li> <li><a href="https://youtu.be/pbaRRsG3BN4?t=9" rel="nofollow noreferrer">https://youtu.be/pbaRRsG3BN4?t=9</a> (not radial, but "tilted")</li> </ul> <p>I have tried to visualise/emulate the resulting geometry using multiple <code>difference</code> operations for cylinder primitives in <a href="https://openjscad.org/" rel="nofollow noreferrer">(Open)JSCAD</a> (see code at end of post) and adjusted the view manually:</p> <p><a href="https://i.stack.imgur.com/PQIUU.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PQIUU.png" alt="image of helical track approximated by multiple cylinder (*End mill*)-cuts"></a></p> <p>What is the equivalent (elliptical?) shape that is the cross-section of the helical path?</p> <p><hr/> And: what is the contact surface/line/point of another, slightly smaller cylinder that is used as "lead screw nut" (having the same orientation as the cutting cylinder, i.e. 
orthogonal to the base cylinder) - a point contact on one of the helical edges?</p> <p>Code for JSCAD</p> <pre><code>function main () { let main = cylinder({r: 3, h:10, center: true, fn: 64 }); for (let i=0; i&lt;36; i++) { let cut = cylinder({r: 0.2, h:10, center: true}); cut = translate([0,-3,0],cut); cut = rotate([0,90,i*3],cut); cut = translate([0,0,i*0.1],cut); main = difference(main, cut); } return main; } </code></pre> <p><s>I think the underlying question may be about the surface created by a straight line moved along a spiral ( or helix):</p> <p><a href="https://i.stack.imgur.com/4R595.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/4R595.png" alt="blender screenshot"></a> (created with Blender: a mesh edge with Screw modifier)</p> <p>Or the surface created by a helix that has been rotated (spin):</s></p> <p><a href="https://i.stack.imgur.com/wJwjf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/wJwjf.png" alt="enter image description here"></a></p> <p>The cross-section of the "cutting" cylinder (<em>End mill</em>) is a circle of course, <s>which is what an infinite number of cuts "converge" to (a cylinder with zero length).</p> <p>Then the cross-section along the helix should be an ellipse (intersection of the hypothetical "cutting" cylinder (<em>End mill cutter</em>) and the plane orthogonal to the helix).</s></p> <hr/> <p>It's not the same as moving a circle along the helix; to illustrate, I've reduced the cylinder's length: <a href="https://i.stack.imgur.com/2BRYO.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2BRYO.gif" alt="enter image description here"></a></p> <p>My "straight line" theory does not apply either, I think these "lines" might be helices created by the intersection of the translated and rotated "cutting" cylinders.</p> <p>So it seems this might be much more involved than I anticipated -- please don't spend too much time on this on my account. 
I was just curious to see whether the "cut" could be better created in 3D by "lofting" the equivalent cross-section along a helix.</p>
Narasimham
95,860
<p>With videos the question is more clear. We can consider the simplest router section to be a rectangular groove like a channel in a flat plate.</p> <p>When wrapped around a cylinder isometrically we have a helical strip as channel bottom flanked by two developable helicoids on either side.</p> <p>The cross section in general can be chosen arbitrary to suit any router shape that removes (wood) material by radial plunge and helical motion driven by lead screw.</p> <p>The groove widths are <span class="math-container">$( w/\cos \alpha,w/\sin\alpha)$</span> along (circumferential,axial ) directions respectively. </p> <p>If instead of a square router a ball end mill diameter <span class="math-container">$ 2w$</span> is chosen then we have the same radial dimension of groove but ellipses of axial dimension major axes <span class="math-container">$( 2w/\cos \alpha, 2w/\sin\alpha)$</span>.</p> <p><a href="https://i.stack.imgur.com/UqabC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UqabC.png" alt="enter image description here"></a></p> <p>( to be continued)</p>
2,553,175
<p>How can I verify that $$1-2\sin^2x=2\cos^2x-1$$ Is true for all $x$?</p> <p>It can be proved through a couple of messy steps using the fact that $\sin^2x+\cos^2x=1$, solving for one of the trigonemtric functions and then substituting, but the way I did it gets very messy very quickly and you end up with a bunch of factoring, etc.</p> <p>What's the simplest way to solve this?</p>
A Piercing Arrow
502,334
<p>First, move the trigonometric functions to one side and the integers to the other: $1-2\sin^2 x=2\cos^2 x-1$ is equivalent to $2\sin^2 x+2\cos^2 x = 2$. Then divide by $2$ to get $\sin^2 x+\cos^2 x = 1$. Since we know this to be true for all $x$ (it is the Pythagorean identity, which follows from the unit circle), we have hereby finished our proof!</p>
2,067,794
<p>Let $A$ be a ring and $u,v \in A^\times$. When do we have that $u + v \in A^\times$?</p> <p>I think that $A$ needs to be an integral domain. For example, consider $\mathbb{Z}/6$. Both $1$ and $5$ are units, but their sum $1+5=0$ is not a unit.</p>
B. Goddard
362,009
<p>In any field, the sum of two units is a unit, unless they're additive inverses. For a non-field example, let $\zeta$ be a primitive sixth-root of unity and let $A$ be the ring of integers in $\mathbb{Q}[\zeta]$. Then note that $\zeta^2+1 = \zeta$. (Because $0 = \zeta^6 - 1 = (\zeta^3-1)(\zeta^3+1) = (\zeta-1)(\zeta^2+\zeta+1)(\zeta+1)(\zeta^2-\zeta+1)$ and we can set this last factor equal to $0$.) </p>
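The sixth-root example can be checked with complex floating point (my addition):

```python
import cmath, math

# ζ = e^{iπ/3} is a primitive sixth root of unity; the answer claims ζ² + 1 = ζ,
# i.e. the two units ζ² and 1 sum to the unit ζ.
zeta = cmath.exp(1j * math.pi / 3)
err_sum = abs(zeta ** 2 + 1 - zeta)
err_order = abs(zeta ** 6 - 1)
```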
2,067,794
<p>Let $A$ be a ring and $u,v \in A^\times$. When do we have that $u + v \in A^\times$?</p> <p>I think that $A$ needs to be an integral domain. For example, consider $\mathbb{Z}/6$. Both $1$ and $5$ are units, but their sum $1+5=0$ is not a unit.</p>
ToThichToan
389,952
<p>I'm not sure about an answer to your question. However, there is a nice result of F. Beukers and H. P. Schlickewei about the number of solutions of the unit equation $x+y=1$ in a finitely generated subgroup. </p>
67,985
<p>Consider $X_1,X_2$ i.i.d. standard normal random variables(mean 0, variance 1). Are the random variables $Y=X_1+X_2$ and $Z=X_1-X_2$ dependent? I am not sure how to prove this one way or the other.</p>
Sasha
11,069
<p>If you are familiar with the concept of characteristic function, it is easiest to compute characteristic function for $(Y, Z)$. For independent variables, the characteristic function would factor into a product:</p> <p>$$ \begin{eqnarray} \mathbb{E}\left( \exp( i t_1 Y + i t_2 Z ) \right) &amp;=&amp; \mathbb{E}\left( \exp( i (t_1+t_2) X_1 + i (t_1-t_2) X_2 ) \right) \\ &amp; = &amp; \exp\left( -\frac{1}{2} \left(t_1+t_2\right)^2 \right) \cdot \exp\left( -\frac{1}{2} \left(t_1-t_2\right)^2 \right) \\ &amp;=&amp; \exp\left( -t_1^2 \right) \cdot \exp \left(-t_2^2 \right) \end{eqnarray} $$</p> <p>Hence the $Y$ and $Z$ are independent normal with mean 0 and standard deviation of $\sqrt{2}$.</p>
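A Monte Carlo cross-check (my addition; the sample size is arbitrary): the sample covariance of $Y$ and $Z$ is near $0$ and their variances near $2$ — which, for jointly Gaussian variables as here, is exactly what independence requires.

```python
import random

random.seed(1)
n = 100_000
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
Y = [x1 + x2 for x1, x2 in pairs]
Z = [x1 - x2 for x1, x2 in pairs]

my, mz = sum(Y) / n, sum(Z) / n
cov = sum((y - my) * (z - mz) for y, z in zip(Y, Z)) / n
var_y = sum((y - my) ** 2 for y in Y) / n
```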
67,985
<p>Consider $X_1,X_2$ i.i.d. standard normal random variables(mean 0, variance 1). Are the random variables $Y=X_1+X_2$ and $Z=X_1-X_2$ dependent? I am not sure how to prove this one way or the other.</p>
Michael Lugo
173
<p>$X_1$ and $X_2$ are independent standard normals, so $(X_1, X_2)$ has rotationally symmetric density, namely $$ {1 \over 2\pi} \exp(-(x_1^2 + x_2^2)/2). $$ If you change coordinates with $u = (x_1 + x_2)/\sqrt{2}, v = (x_1 - x_2)/\sqrt{2}$ (so the change from $(x_1, x_2)$ to $(u,v)$ is area-preserving) then this becomes $$ {1 \over 2\pi} \exp(-(u^2+v^2)/2). $$ That is, the random variables $U = (X_1 + X_2)/\sqrt{2}$ and $V = (X_1 - X_2)/\sqrt{2}$ are also independent standard normals. Your random variables are $Y = U \sqrt{2}$ and $Z = V \sqrt{2}$, so they're independent normals with mean 0 and SD $\sqrt{2}$.</p>
362,716
<p>Let <span class="math-container">$E$</span> be a separable <span class="math-container">$\mathbb R$</span>-Banach space, <span class="math-container">$\rho_r$</span> be a metric on <span class="math-container">$E$</span> for <span class="math-container">$r\in(0,1]$</span> with <span class="math-container">$\rho_r\le\rho_s$</span> for all <span class="math-container">$0&lt;r\le s\le1$</span>, <span class="math-container">$\rho:=\rho_1$</span>, <span class="math-container">$$d_{r,\:\delta,\:\beta}:=1\wedge\frac{\rho_r}\delta+\beta\rho\;\;\;\text{for }(r,\delta,\beta)\in[0,1]\times(0,\infty)\times[0,\infty)$$</span> and <span class="math-container">$(\kappa_t)_{t\ge0}$</span> be a Markov semigroup on <span class="math-container">$(E,\mathcal B(E))$</span>.</p> <blockquote> <p>Assume we are able to show that for all <span class="math-container">$n\in\mathbb N$</span> there are an <span class="math-container">$\alpha\in[0,1)$</span> and <span class="math-container">$(r,\delta,\beta)\in[0,1]\times(0,\infty)\times(0,1)$</span> with<span class="math-container">$^1$</span> <span class="math-container">$$\operatorname W_{d_{r,\:\delta,\:\beta}}\left(\delta_x\kappa_n,\delta_y\kappa_n\right)\le\alpha\operatorname W_{d_{r,\:\delta,\:\beta}}\left(\delta_x,\delta_y\right)\tag1$$</span> for all <span class="math-container">$x,y\in E$</span>, where <span class="math-container">$\delta_x$</span> denotes the Dirac measure on <span class="math-container">$(E,\mathcal B(E))$</span> at <span class="math-container">$x\in E$</span>. 
Why are we able to conclude that there is a <span class="math-container">$(c,\lambda)\in[0,\infty)^2$</span> with <span class="math-container">$$\operatorname W_\rho\left(\nu_1\kappa_t,\nu_2\kappa_t\right)\le ce^{-\lambda t}\operatorname W_\rho\left(\nu_1,\nu_2\right)\tag2$$</span> for all <span class="math-container">$\nu_1,\nu_2\in\mathcal M_1(E)$</span> and <span class="math-container">$t\ge0$</span>?</p> </blockquote> <p>It's clear to me that if <span class="math-container">$\kappa$</span> is any Markov kernel on <span class="math-container">$(E,\mathcal B(E))$</span> and <span class="math-container">$d$</span> is any metric on <span class="math-container">$E$</span> such that there is an <span class="math-container">$\alpha\ge0$</span> with <span class="math-container">$\operatorname W_d\left(\delta_x\kappa,\delta_y\kappa\right)\le\alpha\operatorname W_d\left(\delta_x,\delta_y\right)$</span> for all <span class="math-container">$x,y\in E$</span>, then this extends to <span class="math-container">$\operatorname W_d(\mu\kappa,\nu\kappa)\le\alpha\operatorname W_d(\mu,\nu)$</span> for all <span class="math-container">$\mu,\nu\in\mathcal M_1(E)$</span>. 
Moreover, it's clear that <span class="math-container">$\operatorname W_d\left(\delta_x,\delta_y\right)=d(x,y)$</span>.</p> <p>Note that for any choice of <span class="math-container">$(r,\delta,\beta)\in[0,1]\times(0,\infty)\times[0,\infty)$</span>, we have <span class="math-container">$$\beta\rho\le d_{r,\:\delta,\:\beta}\le\left(\frac1\delta+\beta\right)\rho.\tag3$$</span></p> <p><em>Remark</em>: The desired claim seems to be used in the proof of Theorem 3.4 in <a href="https://arxiv.org/pdf/math/0602479.pdf" rel="nofollow noreferrer">https://arxiv.org/pdf/math/0602479.pdf</a>.</p> <hr> <p><span class="math-container">$^1$</span> If <span class="math-container">$(E,d)$</span> is a complete separable metric space and <span class="math-container">$\mathcal M_1(E)$</span> is the space of probability measures on <span class="math-container">$\mathcal B(E)$</span>, then the Wasserstein metric <span class="math-container">$\operatorname W_d$</span> on <span class="math-container">$\mathcal M_1(E)$</span> satisfies the identity <span class="math-container">$$\operatorname W_d(\mu,\nu)=\sup_{\substack{f\::\:E\:\to\:\mathbb R\\|f|_{\operatorname{Lip}(d)}\:\le\:1}}(\mu-\nu)f\;\;\;\text{for all }\mu,\nu\in\mathcal M_1(E),$$</span> where <span class="math-container">$$|f|_{\operatorname{Lip}(d)}:=\sup_{\substack{x,\:y\:\in\:E\\x\:\ne\:y}}\frac{|f(x)-f(y)|}{d(x,y)}\;\;\;\text{for }f:E\to\mathbb R$$</span> and <span class="math-container">$\mu f:=\int f\:{\rm d}\mu$</span> for <span class="math-container">$\mu$</span>-integrable <span class="math-container">$f:E\to\mathbb R$</span>.</p>
Benoît Kloeckner
4,961
<p>I can answer assuming some regularity on the Markov semigroup, which I would expect to be satisfied in most cases. Specifically, assume local (in time) Lipschitz continuity on your Markov semigroup, i.e. <span class="math-container">$$\forall s_0&gt;0, \exists C&gt;0, \forall s\in[0,s_0], \forall \mu_1,\mu_2 : \mathrm{W}(\mu_1\kappa_s,\mu_2\kappa_s)\le C\mathrm{W}(\mu_1,\mu_2)$$</span> (I do not specify which metric, since the two metrics under consideration are Lipschitz-equivalent, and so only the constant <span class="math-container">$C$</span> would change when passing from one to the other.)</p> <p>Using convexity of the Wasserstein distance, every Lipschitz/contraction bound we have on Dirac masses is also true for arbitrary measures (I guess that is what you mean at the end of your question, although an <span class="math-container">$\alpha$</span> appears to be missing).</p> <p>For any <span class="math-container">$t_0$</span>, using (1) with <span class="math-container">$n=1$</span> iteratively and the double inequality (3): <span class="math-container">\begin{align*} \mathrm{W}_\rho(\delta_x\kappa_{t_0},\delta_y\kappa_{t_0}) &amp;\le \frac1\beta \mathrm{W}_{d_{r,\delta,\beta}}(\delta_x\kappa_{t_0},\delta_y\kappa_{t_0}) \\ &amp;\le \frac{\alpha^{t_0}}{\beta} \mathrm{W}_{d_{r,\delta,\beta}}(\delta_x,\delta_y) \\ &amp;\le \alpha^{t_0}\Big(\frac{1}{\beta\delta}+1\Big) \mathrm{W}_\rho(\delta_x,\delta_y) \end{align*}</span> Since <span class="math-container">$\alpha\in(0,1)$</span>, this is what you needed.</p> <p>(Side note: this kind of computation shows that any decay of the form <span class="math-container">$$ d(T^n(x),T^n(y)) \le f(n) d(x,y)$$</span> where <span class="math-container">$d$</span> is any metric, <span class="math-container">$T$</span> is any Lipschitz dynamical system, and <span class="math-container">$f(n) \to 0$</span> as <span class="math-container">$n\to \infty$</span> (or even <span class="math-container">$f(n)&lt;1$</span> for some 
<span class="math-container">$n$</span>), actually implies exponential decay. This is pretty basic, but seems to be sometimes overlooked.)</p>
206,227
<p>I was given the following problem:</p> <p>Let $V_1, V_2, \dots$ be an infinite sequence of Boolean variables. For each natural number $n$, define a proposition $F_n$ according to the following rules: </p> <p>$$\begin{align*} F_0 &amp;= \text{False}\\ F_n &amp;= (F_{n-1} \ne V_n)\;. \end{align*}$$</p> <p>Use induction to prove that for all $n$, $F_n$ is $\text{True}$ if and only if an odd number of the variables $V_k \;( k \le n)$ are $\text{True}$.</p> <p>Can anyone help me out with at least beginning this problem? I'm not even entirely sure what it is asking.</p>
Berci
41,488
<p>So, given $V_i$ Boolean variables, i.e. all of them are either true or false, and you define 'propositions' $F_0,F_1,\dots$, they will be also <strong>evaluated</strong> as <em>true</em> or <em>false</em> (we can call it just another set of Boolean variables, built up using $V_i$), such that $$F_0:=\text{false}, \ F_1:=(F_0\ne V_1), \ F_2:=(F_1\ne V_2), \ \dots$$ Now you have to prove by induction that $$F_n=(\text{odd number of }V_k\text{'s are true, }k\le n)$$ It starts by observing that, as no $V_k$'s are given for $k\le 0$, the statement on the RHS is false ($0$ of $V_k$'s are true, but $0$ is even). Then assume the induction hypothesis for $n$ and try to deduce it for $n+1$.</p>
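<p>Not part of the original answer: a quick brute-force check of the claim in Python, which also makes the induction transparent, since <span class="math-container">$F_n$</span> is just an iterated XOR of the <span class="math-container">$V_k$</span>.</p>

```python
from itertools import product

def F(vs):
    # F_0 = False; F_n = (F_{n-1} != V_n), i.e. XOR of all the V_k so far
    f = False
    for v in vs:
        f = (f != v)
    return f

# verify: F_n is True iff an odd number of V_1..V_n are True,
# for every truth assignment with n <= 8
for n in range(9):
    for vs in product([False, True], repeat=n):
        assert F(vs) == (sum(vs) % 2 == 1)
```

<p>This confirms (for small <span class="math-container">$n$</span>) exactly the statement proved by induction: appending <span class="math-container">$V_{n+1}$</span> flips <span class="math-container">$F_n$</span> precisely when <span class="math-container">$V_{n+1}$</span> is true, which flips the parity of the count of true variables in lockstep.</p>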
2,764,073
<p>I recently was working on a question posted in an AP calculus BC multiple choice sheet which asked:</p> <p>Let f(x) be a positive, continuous decreasing function. If $\int_1^∞ f(x)dx$ = 5, then which of the following statements must be true about the series $\sum_1^∞f(n)$?</p> <p>(a) $\sum_1^∞f(n)$ = 0</p> <p>(b) $\sum_1^∞f(n)$ converges, and $\sum_1^∞f(n)$ &lt; 5</p> <p>(c) $\sum_1^∞f(n)$ = 5</p> <p>(d) $\sum_1^∞f(n)$ converges, and $\sum_1^∞f(n)$ > 5</p> <p>(e) $\sum_1^∞f(n)$ diverges.</p> <p>I assumed, due to the integral test, the answer would be (b). However, the answer sheet claimed (d). I thought that this was merely a mistake; however, I tried to find a function that met these conditions ( f(x) is strictly positive, f '(x) is negative, f(x) is continuous $\forall$x $\in$ [1,∞) and $\int_1^∞ f(x)dx$ = 5) and seemed to have found one (where the conditions were confirmed via online sources), and (d) seems to be true. Thus I'm asking where this function fails to meet the above conditions, or how (d) can be true given that the integral test (from my understanding) contradicts it.</p> <p>If F(x) = $\int f(x)dx$, then $\lim_{x\to ∞}$F(x) - F(1) must equal five. Let F(x) = (x - 1)$\biggr(\frac{5}{x+2}-\frac{1}{(x+4)^2}\biggr)$</p> <p>Then f(x), which we'll let equal $\dfrac{d}{dx}$F(x), equals $\frac{5}{(x+2)}$ - $\frac{1}{(x+4)^2}$ + (x - 1)$\biggr(\frac{-5}{(x+2)^2} +\frac{2}{(x+4)^3}\biggr)$. By plugging f(x) and its derivative into an online graphing calculator, I find that f(x) is strictly positive and its derivative strictly negative. 
Thus $\sum_1^∞f(n)$ &lt; $\int_1^∞f(x)dx$, but when I use an online source to find these values I find the opposite result:</p> <p>f(x) in a graphing calculator (Desmos) (note only from 1 to a large number) <a href="https://www.desmos.com/calculator/kfeqcphona" rel="nofollow noreferrer">https://www.desmos.com/calculator/kfeqcphona</a></p> <p>f'(x) in a graphing calculator (Desmos) (I just put f'(x) in a form where f(x) is derived term by term) <a href="https://www.desmos.com/calculator/fnroc3w9rm" rel="nofollow noreferrer">https://www.desmos.com/calculator/fnroc3w9rm</a></p> <p>$\sum_1^∞f(n)$ = $\sum_1^∞\frac{5}{n+2}-\frac{1}{(n+4)^2}+(n-1)(\frac{-5}{(n+2)^2}+\frac{2}{(n+4)^3})$ = $-10\zeta(3)-\frac{7255}{864}+\frac{8\pi^2}{3}\approx 5.90138529723494$ </p> <p>According to <a href="https://www.emathhelp.net/calculators/calculus-2/series-calculator/" rel="nofollow noreferrer">https://www.emathhelp.net/calculators/calculus-2/series-calculator/</a>.</p> <p>Where am I going wrong?</p>
José Carlos Santos
446,262
<p>Since:</p> <ul> <li>$\displaystyle\int_1^2f(x)\,\mathrm dx&lt;f(1)$</li> <li>$\displaystyle\int_2^3f(x)\,\mathrm dx&lt;f(2)$</li> <li>$\displaystyle\int_3^4f(x)\,\mathrm dx&lt;f(3)$</li> </ul> <p>and so on, you have$$\int_1^{+\infty}f(x)\,\mathrm dx&lt;\sum_{n=1}^\infty f(n).$$</p>
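<p>A numeric illustration of this chain of inequalities (my sketch, not part of the answer), using the hypothetical choice $f(x)=5/x^2$, for which $\int_1^\infty f(x)\,dx = 5$ exactly:</p>

```python
import math

def f(x):
    return 5.0 / x**2        # positive, continuous, decreasing on [1, ∞)

integral = 5.0               # ∫_1^∞ 5/x² dx = [-5/x]_1^∞ = 5
partial_sum = sum(f(n) for n in range(1, 10**5))

# the series dominates the integral, as the answer argues
assert partial_sum > integral
# the full sum is 5·π²/6 ≈ 8.2247, comfortably above 5
assert abs(partial_sum - 5 * math.pi**2 / 6) < 1e-3
```

<p>So choice (d) in the question is consistent: the sum converges, but it exceeds the integral.</p>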
3,091,090
<p>I came across this question the other day and have been trying to solve it by using some simple algebraic manipulation without really delving into L'Hospital's Rule or the Power Series as I have just started learning limit calculations. We needed to find : <span class="math-container">$$\lim_{x \to 0} \frac {x\cos x - \sin x}{x^2\sin x}$$</span> I approached this problem in two different ways and know what the flaw is, however I have been unable to justify why this is so.</p> <p>Let <span class="math-container">$$f(x) = \frac {x\cos x - \sin x}{x^2\sin x}$$</span> Therefore, dividing by <span class="math-container">$x$</span>, <span class="math-container">$$f(x) = \frac {\cos x - \frac{\sin x}{x}}{x\sin x}$$</span> Using standard limit properties, <span class="math-container">$$\lim_{x \to 0}f(x) = \frac{\lim_{x \to 0}\cos x - \lim_{x \to 0}\frac{\sin x}{x}}{\lim_{x \to 0}x\sin x}$$</span> Since <span class="math-container">$$\lim_{x \to 0} \frac {\sin x}{x}=1$$</span> <span class="math-container">$$\lim_{x \to 0}f(x)= \frac{\lim_{x \to 0}\cos x-1}{\lim_{x \to 0}x\sin x}$$</span></p> <p>Rewriting the above as <span class="math-container">$$\lim_{x \to 0}\frac{(\cos x -1)x}{x^2\sin x}$$</span> and using the fact that <span class="math-container">$\lim_{x \to 0} \frac {\sin x}{x}=1$</span> and <span class="math-container">$\lim_{x \to 0}\frac{\cos x -1}{x^2}= -\frac{1}{2}$</span>, we get <span class="math-container">$$\lim_{x \to 0}f(x)=-\frac{1}{2}$$</span></p> <p>I know that the answer is wrong although I am not able to understand why. I believe it is because I cannot combine the numerator and denominator into a single limit function. Using a similar trick, I also obtained the limit to be <span class="math-container">$-\frac{3}{8}$</span>.</p> <p>Questions:</p> <p>1) Could someone please explain why combining the numerator and denominator into a single limit is wrong? 
(The reason I even went ahead with such a manipulation was, we are allowed to separate the numerator and denominator while expanding the limit of a rational function so I felt that the reverse should also work). </p> <p>2) As you can notice, I have not used L'Hospital's Rules or Power Series expansion of <span class="math-container">$\sin x $</span>and <span class="math-container">$\cos x$</span>. When I used L'Hospital's Rule, I noticed that I needed to go upto the third or fourth derivative to get rid of the <span class="math-container">$\frac{0}{0}$</span> indeterminate form. So would there be a better way of approaching such limits? </p> <p>Thank You.</p>
Mefitico
534,516
<p>1) If I understand your question, you are conjecturing that:</p> <p><span class="math-container">$$ \lim_{x \to a} \frac{f(x)}{g(x)}= \frac{\lim_{x \to a} f(x)}{\lim_{x \to a} g(x)} $$</span></p> <p>This is wrong. You may try to prove it, but a simple counterexample should suffice:</p> <p>Let <span class="math-container">$f(x)=x$</span> and <span class="math-container">$g(x) =x$</span>. Then: <span class="math-container">$$ \lim_{x \to a} \frac{f(x)}{g(x)}= \lim_{x \to a} \frac{x}{x} = 1 $$</span> And the result holds even for <span class="math-container">$a=0$</span>. But: <span class="math-container">$$ \frac{\lim_{x \to a} f(x)}{\lim_{x \to a} g(x)}= \frac{a}{a} $$</span> And the last one is indeterminate for <span class="math-container">$a =0$</span>, since <span class="math-container">$a/a$</span> is not within a limit expression anymore. The rule would actually hold if <span class="math-container">${\lim_{x \to a} g(x)}\neq 0$</span>, but that exception is precisely the case you had in hand.</p> <p>2) You have a rather complicated expression there. I don't see any elegant/short solution to it, but using power series seems pretty fair and straightforward.</p> <p>There could be some property related to limits and some known transform, such as the initial value theorem from Laplace's transform, but that wouldn't enter into my book as a "better" solution.</p>
2,785,993
<p>Let</p> <ul> <li><p><span class="math-container">$k(0)=11$</span></p> </li> <li><p><span class="math-container">$k(1)=1101$</span></p> </li> <li><p><span class="math-container">$k(2)=1101001$</span></p> </li> <li><p><span class="math-container">$k(3)=11010010001$</span></p> </li> <li><p><span class="math-container">$k(4)=1101001000100001$</span></p> </li> <li><p>And so on....</p> <p>I've checked it up to <span class="math-container">$k(120)$</span>, and I didn't find any more primes of this form. Are there any more prime numbers of that form? (I just realized that only <span class="math-container">$k(6n+5)$</span> could be a prime (?))</p> </li> </ul>
Cesareo
397,348
<p>The formation law is clearly</p> <p>$$ n_k = 2^k n_{k-1}+1 $$</p> <p>with $n_1=3$</p> <p><code>n0 = 3; For[i = 2, i &lt; 50, i++, n1 = 2^i n0 + 1; If[PrimeQ[n1], Print[n1, " ", IntegerString[n1, 2]]]; n0 = n1]</code></p> <p>obtaining</p> <p>n = 13 -- 1101</p> <p>n = 271302750695377321080849818469209754627603342031510693802940799730825845099036699701989532948734015220469369753358523432961 -- 11010010001000010000010000001000000010000000010000000001000000000010000000000010000000000001000000000000010000000000000010000000000000001000000000000000010000000000000000010000000000000000001000000000000000000010000000000000000000010000000000000000000001000000000000000000000010000000000000000000000010000000000000000000000001000000000000000000000000010000000000000000000000000010000000000000000000000000001</p> <p>If the number is considered in basis $10$ then the procedure is analogous. In this case we have $n_1 = 11$ and the recurrence equation is $n_k = 10^k n_{k-1}+1$ giving <code> n = 1101001000100001000001000000100000001000000001000000000100000000001000000000001000000000000100000000000001000000000000001000000000000000100000000000000001000000000000000001000000000000000000100000000000000000001000000000000000000001000000000000000000000100000000000000000000001000000000000000000000001000000000000000000000000100000000000000000000000001000000000000000000000000001000000000000000000000000000100000000000000000000000000001000000000000000000000000000001000000000000000000000000000000100000000000000000000000000000001000000000000000000000000000000001000000000000000000000000000000000100000000000000000000000000000000001000000000000000000000000000000000001 -- 
1101001000100001000001000000100000001000000001000000000100000000001000000000001000000000000100000000000001000000000000001000000000000000100000000000000001000000000000000001000000000000000000100000000000000000001000000000000000000001000000000000000000000100000000000000000000001000000000000000000000001000000000000000000000000100000000000000000000000001000000000000000000000000001000000000000000000000000000100000000000000000000000000001000000000000000000000000000001000000000000000000000000000000100000000000000000000000000000001000000000000000000000000000000001000000000000000000000000000000000100000000000000000000000000000000001000000000000000000000000000000000001</code></p>
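<p>A Python rendering of the same recurrence (my sketch, not part of the answer), checking that it reproduces the binary patterns <span class="math-container">$11, 1101, 1101001, \dots$</span> from the question:</p>

```python
def k_numbers(count):
    """n_1 = 3, and n_k = 2^k * n_{k-1} + 1, as in the answer's formation law."""
    out = []
    n = 3
    for i in range(1, count + 1):
        if i > 1:
            n = 2**i * n + 1
        out.append(n)
    return out

ns = k_numbers(4)
# binary expansions match the question's k(0)..k(3)
assert [bin(n)[2:] for n in ns] == ["11", "1101", "1101001", "11010010001"]
```

<p>From here one can plug each term into any primality test to reproduce the search; among the first few terms only <span class="math-container">$3$</span> and <span class="math-container">$13$</span> are prime (<span class="math-container">$105 = 3\cdot5\cdot7$</span> and <span class="math-container">$1681 = 41^2$</span> are not), in line with the answer's output.</p>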
54,486
<p>Many colour schemes and colour functions can be accessed using <a href="http://reference.wolfram.com/mathematica/ref/ColorData.html"><code>ColorData</code></a>.</p> <p>Version 10 introduced new default colour schemes, and a new customization option using <a href="http://reference.wolfram.com/mathematica/ref/PlotTheme.html"><code>PlotTheme</code></a>. The colour themes accessible with <code>PlotTheme</code> have both discrete colour schemes and gradients.</p> <p>Is there a standard way to access these? I.e. get a colour function that takes a real argument in $[0,1]$ and returns a shade, or one that takes an integer argument and returns a colour, as with <code>ColorData</code>.</p>
Mr.Wizard
121
<p>For ease of direct access I have found through digging the following relationships for indexed colors:</p> <pre><code>map = {"Default" -&gt; 97, "Earth" -&gt; 98, "Garnet" -&gt; 99, "Opal" -&gt; 100, "Sapphire" -&gt; 101, "Steel" -&gt; 102, "Sunrise" -&gt; 103, "Textbook" -&gt; 104, "Water" -&gt; 105, "BoldColor" -&gt; 106, "CoolColor" -&gt; 107, "DarkColor" -&gt; 108, "MarketingColor" -&gt; 109, "NeonColor" -&gt; 109, "PastelColor" -&gt; 110, "RoyalColor" -&gt; 111, "VibrantColor" -&gt; 112, "WarmColor" -&gt; 113}; </code></pre> <p>For example:</p> <pre><code>ColorData["Sunrise" /. map, "ColorList"] </code></pre> <p><img src="https://i.stack.imgur.com/Lq8V0.png" alt="enter image description here"></p> <p>Visually:</p> <p><img src="https://i.stack.imgur.com/qna7b.png" alt="enter image description here"></p> <p>(<code>"Default"</code> was added manually; it is not listed with the others that I can find.)</p> <p>In addition to indexed colors each of these PlotThemes has rules for gradient colors, Financial plots, Wavelet plots etc. For example <code>"MarketingColor"</code> and <code>"NeonColor"</code> use the same indexed color scheme but other details are different.</p> <p>This additional data may be conveniently found as follows. 
(You will need <a href="https://mathematica.stackexchange.com/a/1447/121"><code>step</code></a>.)</p> <pre><code>Plot[x, {x, 0, 1}]; (* preload PlotThemes system *) System`PlotThemeDump`resolvePlotTheme["BoldColor", ""] // step System`PlotThemeDump`resolvePlotTheme["Sunrise", ""] // step </code></pre> <p><img src="https://i.stack.imgur.com/vQBO4.png" alt="enter image description here"></p> <p>You can copy values directly or let the <code>Switch</code> statement assign the value to the appropriate Symbol.<br> For example, to find the gradient colors used by:</p> <pre><code>BarChart[{Range@98}, PlotTheme -&gt; "DarkColor"] </code></pre> <p><img src="https://i.stack.imgur.com/g8OFV.png" alt="enter image description here"></p> <p>We can use this:</p> <pre><code>System`PlotThemeDump`resolvePlotTheme["DarkColor", ""]; System`PlotThemeDump`$ThemeColorGradient </code></pre> <p><img src="https://i.stack.imgur.com/S8yYb.png" alt="enter image description here"></p> <p>You can of course view all definitions at once, but be warned that it is long:</p> <pre><code>?? System`PlotThemeDump`resolvePlotTheme </code></pre> <p>This Symbol can also be used to customize plot themes, as I described in the second half of my answer to:</p> <ul> <li><a href="https://mathematica.stackexchange.com/questions/54545/is-it-possible-to-define-new-plottheme/54637#54637">Is it possible to define a new PlotTheme?</a></li> </ul>
2,452,084
<ul> <li><em>I'm having trouble understanding why the arbitrariness of $\epsilon$ allows us to conclude that $d(p,p')=0$. It seems we could likely conclude a value such as $\frac {\epsilon}{100}$, couldn't we? The other idea that would normally work is the limit (as $n$ approaches $\infty$, $p$ approaches $p'$), but that would mean we are further in another sequence, which would have the same problem. Thanks in advance.</em></li> </ul> <p>$\ $ $\ $ $\ $ </p> <p>Definition of Convergence</p> <blockquote> <p>A sequence $\{ p_n \}$ in a metric space $X$ is said to converge if there is a point $p \in X$ with the following property: For every $ \epsilon&gt;0$ there is an integer $N$ such that $n \ge N$ implies $d(p_n, p) &lt; \epsilon.$</p> </blockquote> <p><a href="https://i.stack.imgur.com/tDMkx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tDMkx.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/TIFil.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TIFil.png" alt="enter image description here"></a></p>
Paramanand Singh
72,031
<p>The crucial thing to note here is that $p, p'$ are fixed points in $X$ (i.e. they don't depend on $n$) and hence $d(p, p')$ is a specific non-negative real number; let's denote this specific non-negative number by $A$. It should now be clear that in the inequality $A&lt;\epsilon$ the number $A$ is fixed and $\epsilon$ is arbitrary. Since $A$ is fixed, it can not be chosen to be something like $\epsilon/100$. The implication $$\forall \epsilon&gt;0,0\leq A&lt;\epsilon\implies A=0$$ is just a fancy way (like legalese) to state the following obvious/trivial fact:</p> <blockquote> <p><strong>The</strong> only non-negative real number less than <strong>any/every</strong> positive real number is $0$.</p> </blockquote> <p>To convince yourself that the above is trivial, note that the statement remains true even if the word "real" is replaced by "rational", and this property of rationals is an immediate consequence of the following:</p> <blockquote> <p>Given any positive rational number we can find a smaller positive rational number. </p> </blockquote> <p>And the above follows from the even more obvious fact:</p> <blockquote> <p>Given any natural number we can find a greater one, namely its successor. </p> </blockquote> <p>It is a trademark of authors like Rudin to convert such trivialities into seemingly difficult results via excessive use of symbolism in the name of "unambiguous / precise writing".</p> <p>If these statements do not look trivial, then it is time to revisit the way inequalities work in these number systems. </p>
2,452,172
<p>Let $(a_n)_{n \geq 1}$ be a decreasing sequence of positive reals. Let $s_n = a_1 + a_2 + ... + a_n$ and </p> <p>\begin{align} b_n = \frac{1}{a_{n+1}} - \frac{1}{a_n}, n \geq 1 \end{align}</p> <p>Prove that if $(s_n)_{n \geq 1}$ is convergent, then $(b_n)_{n \geq 1}$ is unbounded.</p> <p>My attempt: If $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} \neq 1 $ or DNE, then we can bound $\frac{a_{n+1}}{a_{n}}$ by $\frac{a_{n+1}}{a_{n}} &lt; \frac{1}{k}$ for some $ k&gt;1$</p> <p>Hence $b_n = \frac{1}{a_{n+1}} - \frac{1}{a_n} &gt; \frac{k-1}{a_n} \to \infty$</p> <p>On the other hand, if $\lim_{n \to \infty} \frac{a_{n+1}}{a_n} = 1$ I'm not sure how to prove for this case. </p> <p>I tried $\forall \epsilon &gt;0 , \exists N&gt;0, n&gt;N \implies |\frac{a_{n+1}}{a_n} - 1| &lt; \epsilon \implies |\frac{1}{a_{n+1}} - \frac{1}{a_n}| &lt; \frac{\epsilon}{1-\epsilon} \frac{1}{a_n}$.</p> <p>But this doesn't help me as I am looking for a lower bound.</p> <p>Any hints are appreciated</p>
alphacapture
334,625
<p>Hint: Suppose for the sake of contradiction that the $b_n$ were bounded. Then show that $\sum{a_n}$ is divergent.</p>
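<p>To see why the hint is plausible, here is a numeric sketch (mine, not alphacapture's) with the convergent choice $a_n = 1/n^2$: the corresponding $b_n = (n+1)^2 - n^2 = 2n+1$ is unbounded, as the problem statement predicts.</p>

```python
def a(n):
    return 1.0 / n**2                    # positive, decreasing, Σ a_n converges

def b(n):
    return 1.0 / a(n + 1) - 1.0 / a(n)   # b_n = (n+1)² - n² = 2n + 1

# b_n matches 2n+1 and grows without bound
assert all(abs(b(n) - (2 * n + 1)) < 1e-6 for n in range(1, 1000))
assert b(10**6) > 10**6
```

<p>Conversely, if the $b_n$ were bounded by $M$, then telescoping gives $1/a_n \le 1/a_1 + M(n-1)$, so $a_n$ is bounded below by a constant multiple of $1/n$ and $\sum a_n$ diverges by comparison with the harmonic series — the contradiction the hint is after.</p>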
646,779
<p>Prove that if $p$ and $q$ are polynomials over the field $F$, then the degree of their sum is less than or equal to whichever polynomial's degree is larger</p> <p>$$\deg(p+q)\leq \max \left\{\deg(p),\deg(q) \right\}$$</p> <p>Currently, I am taking it case by case, but I was curious if there was a way to do a proof by contradiction. What would it mean if I could add $2$ polynomials the result would be of larger degree than either of them.</p>
Bill Dubuque
242
<p>Here's one way. Let $\, \rm d := \deg.\,$ Suppose for contradiction $\, {\rm d}(f+g) &gt; {\rm d}(f),\, {\rm d}(g).\,$ Choose such a counterexample of minimal degree $\,d = {\rm d}(f+g).\,$ Necessarily $\,d &gt; 0\,$ since it is true for constants. Since $(f+g)(0) = f(0)+g(0)\,$ subtracting the constant terms from $f,g$ then cancelling $x$ from both yields a counterexample of smaller degree, contra the minimality hypothesis.</p>
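<p>A small computational illustration (my addition, not Bill Dubuque's) of the degree inequality, including the case where it is strict because leading coefficients cancel:</p>

```python
def deg(coeffs):
    """Degree of a polynomial given as [c0, c1, c2, ...]; here deg(0) := -1 by convention."""
    for i in range(len(coeffs) - 1, -1, -1):
        if coeffs[i] != 0:
            return i
    return -1

def poly_add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [x + y for x, y in zip(p, q)]

p = [1, 2, 3]     # 3x² + 2x + 1
q = [0, 5, -3]    # -3x² + 5x
s = poly_add(p, q)
assert deg(s) <= max(deg(p), deg(q))
assert deg(s) == 1   # strict drop: the x² terms cancel
```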
346,198
<p>Recently I was playing around with some numbers and I stumbled across the following formal power series:</p> <p><span class="math-container">$$\sum_{k=0}^\infty\frac{x^{ak}}{(ak)!}\biggl(\sum_{l=0}^k\binom{ak}{al}\biggr)$$</span></p> <p>I was able to "simplify" the above expression for <span class="math-container">$a=1$</span>:</p> <p><span class="math-container">$$\sum_{k=0}^\infty\frac{x^k}{k!}\cdot2^k=e^{2x}$$</span> I also managed to simplify the expression for <span class="math-container">$a=2$</span> with the identity <span class="math-container">$\sum_{k=0}^\infty\frac{x^{2k}}{(2k)!}=\cosh(x)$</span>:</p> <p><span class="math-container">$$\sum_{k=0}^\infty\frac{x^{2k}}{(2k)!}\biggl(\sum_{l=0}^k\binom{2k}{2l}\biggr)=\mathbf[\cdots\mathbf]=\frac{1}{4}\cdot(e^{2x}+e^{-2x})+\frac{1}{2}=\frac{1}{2}\cdot(\cosh(2x)+1)$$</span></p> <p>However, I couldn't come up with a general method for all <span class="math-container">$a\in\Bbb{N}$</span>. I would be very thankful if someone could either guide me towards simplifying this expression or post their solution here. </p>
RobPratt
141,766
<p>You might be able to use the fact that <span class="math-container">$$\sum_{k=0}^\infty b_{ak}=\sum_{k=0}^\infty \left(\frac{1}{a}\sum_{j=0}^{a-1} \exp\left(2\pi ijk/a\right)\right)b_k.$$</span> For example, when <span class="math-container">$a=1$</span>, taking <span class="math-container">$b_k = \frac{x^k}{k!}\sum_{\ell \ge 0} \binom{k}{\ell}$</span> yields <span class="math-container">$$\sum_{k=0}^\infty b_{k}=\sum_{k=0}^\infty \frac{x^k}{k!}\sum_{\ell \ge 0} \binom{k}{\ell}=\sum_{k=0}^\infty \frac{x^k}{k!}2^k=\exp(2x),$$</span> as you already obtained. For <span class="math-container">$a=2$</span>, first note that <span class="math-container">\begin{align} \sum_{\ell \ge 0}\binom{k}{2 \ell} &amp;= \sum_{\ell\ge 0} \left(\frac{1}{2}\sum_{j=0}^1 \exp\left(2\pi ij\ell/2\right)\right)\binom{k}{\ell}\\ &amp;= \sum_{\ell\ge 0} \frac{1+(-1)^\ell}{2}\binom{k}{\ell}\\ &amp;= \frac{1}{2}\sum_{\ell\ge 0} \binom{k}{\ell}+ \frac{1}{2}\sum_{\ell\ge 0} (-1)^\ell\binom{k}{\ell}\\ &amp;= \frac{2^k+0^k}{2}. \end{align}</span> Now taking <span class="math-container">$b_k = \frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{2 \ell}$</span> yields <span class="math-container">\begin{align}\sum_{k=0}^\infty b_{2k}&amp;=\sum_{k=0}^\infty \left( \frac{1}{2}\sum_{j=0}^1 \exp\left(\pi ijk\right)\right)\frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{2 \ell}\\ &amp;=\frac{1}{2}\sum_{k=0}^\infty \left(1+(-1)^k\right)\frac{x^k}{k!}\left(2^{k-1}+\frac{1}{2}[k=0]\right)\\ &amp;=\frac{1}{4}\sum_{k=0}^\infty \frac{x^k}{k!}2^k+\frac{1}{4}\sum_{k=0}^\infty (-1)^k\frac{x^k}{k!}2^k+\frac{1}{2}\\ &amp;=\frac{\exp(2x)+\exp(-2x)}{4} +\frac{1}{2}\\ &amp;=\cosh^2(x), \end{align}</span> again matching your result. 
For <span class="math-container">$a=3$</span>, first note that <span class="math-container">\begin{align} \sum_{\ell \ge 0}\binom{k}{3 \ell} &amp;= \sum_{\ell\ge 0} \left(\frac{1}{3}\sum_{j=0}^2 \exp\left(2\pi ij\ell/3\right)\right)\binom{k}{\ell}\\ &amp;= \sum_{\ell\ge 0} \frac{1+\exp(2\pi i\ell/3)+\exp(4\pi i\ell/3)}{3}\binom{k}{\ell}\\ &amp;= \frac{1}{3}\sum_{\ell\ge 0} \binom{k}{\ell}+ \frac{1}{3}\sum_{\ell\ge 0} \exp(2\pi i/3)^\ell\binom{k}{\ell}+ \frac{1}{3}\sum_{\ell\ge 0} \exp(4\pi i/3)^\ell\binom{k}{\ell}\\ &amp;= \frac{2^k+(1+\exp(2\pi i/3))^k+(1+\exp(4\pi i/3))^k}{3}\\ &amp;= \frac{2^k+\exp(\pi i/3)^k+\exp(-\pi i/3)^k}{3}. \end{align}</span> Now taking <span class="math-container">$b_k = \frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{3 \ell}$</span> yields <span class="math-container">\begin{align}\sum_{k=0}^\infty b_{3k}&amp;=\sum_{k=0}^\infty \left( \frac{1}{3}\sum_{j=0}^2 \exp\left(2\pi ijk/3\right)\right)\frac{x^k}{k!}\sum_{\ell \ge 0}\binom{k}{3 \ell}\\ &amp;=\frac{1}{3}\sum_{k=0}^\infty \left(1+\exp(2\pi ik/3)+\exp(4\pi ik/3)\right)\frac{x^k}{k!}\frac{2^k+\exp(\pi i/3)^k+\exp(-\pi i/3)^k}{3}\\ &amp;=\frac{1}{9}\sum_{k=0}^\infty (1+\exp(2\pi i/3)^k+\exp(4\pi i/3)^k)(2^k+\exp(\pi i/3)^k+\exp(-\pi i/3)^k)\frac{x^k}{k!}. 
\end{align}</span> Now expand the product of trinomials to obtain 9 sums that reduce to <span class="math-container">$\exp(cx)$</span> for various constants <span class="math-container">$c$</span>.</p> <p>Alternatively, note that: <span class="math-container">$$\sum_{k=0}^\infty \frac{x^{ak}}{(ak)!}\sum_{\ell \ge 0}\binom{ak}{a\ell} = \left(\sum_{k=0}^\infty \frac{x^{ak}}{(ak)!}\right)^2,$$</span> so you might as well just compute <span class="math-container">\begin{align} \sum_{k=0}^\infty \frac{x^{ak}}{(ak)!} &amp;= \sum_{k=0}^\infty \left( \frac{1}{a}\sum_{j=0}^{a-1} \exp\left(2\pi ijk/a\right)\right)\frac{x^k}{k!} \\ &amp;= \frac{1}{a}\sum_{j=0}^{a-1} \sum_{k=0}^\infty \frac{(\exp(2\pi ij/a)x)^k}{k!} \\ &amp;= \frac{1}{a}\sum_{j=0}^{a-1} \exp(\exp(2\pi ij/a)x), \end{align}</span> and then square the result.</p>
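<p>The roots-of-unity filter that drives all of these computations is easy to check numerically (my sketch, not part of the answer): <span class="math-container">$\frac1a\sum_{j=0}^{a-1}\exp(2\pi ijk/a)$</span> equals <span class="math-container">$1$</span> when <span class="math-container">$a \mid k$</span> and <span class="math-container">$0$</span> otherwise.</p>

```python
import cmath

def filter_weight(a, k):
    """(1/a) · Σ_{j=0}^{a-1} exp(2πi jk/a): ≈ 1 if a divides k, ≈ 0 otherwise."""
    return sum(cmath.exp(2j * cmath.pi * j * k / a) for j in range(a)) / a

for a in range(1, 7):
    for k in range(30):
        expected = 1.0 if k % a == 0 else 0.0
        assert abs(filter_weight(a, k) - expected) < 1e-9
```

<p>Applied termwise to a power series <span class="math-container">$\sum_k b_k$</span>, this weight keeps exactly the terms with <span class="math-container">$a \mid k$</span>, which is the identity the answer starts from.</p>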
346,198
<p>Recently I was playing around with some numbers and I stumbled across the following formal power series:</p> <p><span class="math-container">$$\sum_{k=0}^\infty\frac{x^{ak}}{(ak)!}\biggl(\sum_{l=0}^k\binom{ak}{al}\biggr)$$</span></p> <p>I was able to "simplify" the above expression for <span class="math-container">$a=1$</span>:</p> <p><span class="math-container">$$\sum_{k=0}^\infty\frac{x^k}{k!}\cdot2^k=e^{2x}$$</span> I also managed to simplify the expression for <span class="math-container">$a=2$</span> with the identity <span class="math-container">$\sum_{k=0}^\infty\frac{x^{2k}}{(2k)!}=\cosh(x)$</span>:</p> <p><span class="math-container">$$\sum_{k=0}^\infty\frac{x^{2k}}{(2k)!}\biggl(\sum_{l=0}^k\binom{2k}{2l}\biggr)=\mathbf[\cdots\mathbf]=\frac{1}{4}\cdot(e^{2x}+e^{-2x})+\frac{1}{2}=\frac{1}{2}\cdot(\cosh(2x)+1)$$</span></p> <p>However, I couldn't come up with a general method for all <span class="math-container">$a\in\Bbb{N}$</span>. I would be very thankful if someone could either guide me towards simplifying this expression or post their solution here. </p>
AccidentalFourierTransform
106,114
<p>The explicit formula is as follows: <span class="math-container">$$ S_a=\frac{1}{a^2}\left(\sum_{z^a=2^a}+2\sum_{p_a(z)=0}\right)e^{az} $$</span> where the polynomials <span class="math-container">$p_a$</span> are given by <a href="http://oeis.org/A244608" rel="noreferrer">A244608</a>. For example, <span class="math-container">\begin{align} p_9(z)&amp;= 1 - 13604 z^9 - 13359 z^{18} + 247 z^{27} + z^{36}\\ p_{10}(z)&amp;=3125-383750 z^{10}-73749 z^{20}+502 z^{30}+z^{40} \end{align}</span> whose roots in the complex plane look like follows:</p> <p><a href="https://i.stack.imgur.com/lm3aM.png" rel="noreferrer"><img src="https://i.stack.imgur.com/lm3aM.png" alt="enter image description here"></a></p> <p><a href="https://i.stack.imgur.com/B4vj7.png" rel="noreferrer"><img src="https://i.stack.imgur.com/B4vj7.png" alt="enter image description here"></a></p> <p>The first few solutions are <span class="math-container">\begin{align} S_1&amp;=e^{2 x}\\ S_2&amp;=\frac{e^{-2 x}}{4}+\frac{e^{2 x}}{4}\\ S_3&amp;=\frac{2 e^{-x}}{9}+\frac{e^{2 x}}{9}+\frac{2}{9} e^{\frac{x}{2}-\frac{1}{2} i \sqrt{3} x}+\frac{2}{9} e^{\frac{x}{2}+\frac{1}{2} i \sqrt{3} x}+\frac{1}{9} e^{-x-i \sqrt{3} x}+\frac{1}{9} e^{-x+i \sqrt{3} x}\\ S_4&amp;=\frac{e^{-2 x}}{16}+\frac{1}{8} e^{(-1-i) x}+\frac{1}{8} e^{(-1+i) x}+\frac{1}{16} e^{-2 i x}+\frac{1}{16} e^{2 i x}+\frac{1}{8} e^{(1-i) x}+\frac{1}{8} e^{(1+i) x}+\frac{e^{2 x}}{16} \end{align}</span> as given by <span class="math-container">\begin{align} p_1(z)&amp;=0\\ p_2(z)&amp;=0\\ p_3(z)&amp;=1+z^3\\ p_4(z)&amp;=4+z^4\\ p_5(z)&amp;=-1+11z^5+z^{10}\\ p_6(z)&amp;=-27+26z^6+z^{12} \end{align}</span> etc. Quoting the OEIS entry, the coefficients are found as follows:</p> <blockquote> <p>Let <span class="math-container">$\omega$</span> be a primitive <span class="math-container">$j$</span>-th root of unity. 
Let <span class="math-container">$L(k)=\sum_{p=0}^{j-1} c(p)\omega^{kp}$</span> with <span class="math-container">$c(0)=2$</span> and <span class="math-container">$c(i)=C(j,i)$</span> if <span class="math-container">$i&gt;0$</span>. Then <span class="math-container">$p(j,X)=(X-L(1))(X-L(2))\dots(X-L([(n-1)/2]))$</span>.</p> </blockquote>
217,429
<blockquote> <p>Let $A\subset\mathbb R$. Show for each of the following statements that it is either true or false.</p> <ol> <li>If $\min A$ and $\max A$ exist then $A$ is finite.</li> <li>If $\max A$ exists then $A$ is infinite.</li> <li>If $A$ is finite then $\min A$ and $\max A$ do exist.</li> <li>If $A$ is infinite then $\min A$ does not exist.</li> </ol> </blockquote> <hr> <p>My attempts so far:</p> <ol> <li><p>This statement is wrong. Let $A=[a;b]\cap\mathbb Q\subset\mathbb R$ with $a&lt;b$. It is obvious that $\min A=a$ and $\max A=b$. Assume now that $A$ is finite; then we would be able to enumerate every element in $A$. However, $\mathbb Q$ is dense in $\mathbb R$ and therefore we can find for each $x_k,y_k\in A$ an $r_k\in A$ in $[x_k;y_k]$ such that $x_k&lt;r_k&lt;y_k$. E.g. this can be achieved with the arithmetic mean $r_k=(x_k+y_k)/2$. Starting with $[x_1=a;y_1=b]$ one could create infinitely many nested intervals $[x_k;r_k]$ and can therefore find infinitely many elements, in contrast to the assumption that $A$ is finite.</p></li> <li><p>This statement is wrong. Let $A=(0;n]\cap\mathbb N\subset\mathbb R$ with $n\in\mathbb N$; therefore $A$ is bounded by $n$ with $\max A=n$. Assume $A$ were infinite; since $|A|=n-1$, it follows that $A$ must be finite, contrary to the assumption that it is infinite.</p></li> <li><p>???</p></li> <li><p>This statement is wrong. Let $A=\mathbb N\subset\mathbb R$, and we know that $A$ is infinite. However, $A$ has a lower bound and even a minimal element, $\min A=1$, in contrast to the assumption that it does not exist.</p></li> </ol> <hr> <p>I would like to know whether my attempts for (1), (2) and (4) are plausible or incomplete. Furthermore I need some hints on how to prove (3).</p>
Brian M. Scott
12,042
<p>HINTS: You’re given that $P$ is the point $\left(x,-\frac7{25}\right)$ on the unit circle in the fourth quadrant. The fact that $P$ is on the unit circle tells you that $$x^2+\left(-\frac7{25}\right)^2=1\;;$$ why? And what are the values of $x$ making this true?</p> <p>The fact that $P$ is in the fourth quadrant tells you the algebraic sign of $x$; is $x$ positive, or is it negative?</p>
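<p>Carrying the hint through exactly (my addition, not part of the answer; Python's <code>fractions</code> keeps the arithmetic exact): $x^2 = 1 - (7/25)^2 = 576/625$, so $x = \pm 24/25$, and the fourth-quadrant condition selects the positive sign.</p>

```python
from fractions import Fraction

y = Fraction(-7, 25)
x_squared = 1 - y**2          # from the unit-circle equation x² + y² = 1
assert x_squared == Fraction(576, 625)

x = Fraction(24, 25)          # positive root, since P lies in quadrant IV
assert x**2 + y**2 == 1
```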