qid: int64 (1 to 4.65M)
question: string (lengths 27 to 36.3k)
author: string (lengths 3 to 36)
author_id: int64 (-1 to 1.16M)
answer: string (lengths 18 to 63k)
1,000,547
<p>So the system would look something like this.</p> <pre><code>74" &lt; 12.5x + 7.75y &lt; 84" 60" &lt; 12.5w + 7.75z &lt; 74" y + z = x + w where x, y, w, z are natural numbers </code></pre> <p>Is this right? I wrote a computer program and was able to solve the question by iterating through all the numbers; there are three answers. I am wondering if there is a mathematical way of solving this without using a computer program, and how you would do it. Is there a way, like pivoting or Gaussian reduction? I'm looking for the most efficient way of cutting, so I need to compare answers. Thank you.</p>
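The enumeration described in the question can be sketched in a few lines of Python; assuming "natural numbers" here means each variable is at least 1 (the reading that reproduces the question's count of three answers):

```python
# Enumerate all (x, y, w, z) satisfying the system in the question,
# assuming "natural numbers" means each variable is at least 1.
def enumerate_cuts():
    solutions = []
    for x in range(1, 7):            # 12.5 * 7 > 84, so x <= 6
        for y in range(1, 11):       # 7.75 * 11 > 84, so y <= 10
            if not (74 < 12.5 * x + 7.75 * y < 84):
                continue
            for w in range(1, 6):    # 12.5 * 6 > 74, so w <= 5
                for z in range(1, 10):
                    if 60 < 12.5 * w + 7.75 * z < 74 and y + z == x + w:
                        solutions.append((x, y, w, z))
    return solutions

print(enumerate_cuts())   # three solutions
```

Allowing zero values would admit two further solutions, which is why the reading of "natural numbers" matters.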
Mark Bennet
2,906
<p>Well we could work in units of one quarter of an inch. The pieces we have are then $336$ units and $296$ units long respectively, a total of $632$ units.</p> <p>The pieces we want to cut are $50$ units and $31$ units.</p> <p>Now, we want equal numbers of the two sizes - so each pair comes to $81$ units.</p> <p>$632=7\times 81 +65$. If we can, therefore, we want seven pieces of each length.</p> <p>To get a solution, note that $336=6\times 50 + 31 +5$ gives seven pieces with little waste. We then have $296=50+6\times 31 +60$.</p> <p>I did this by trying to cut as many long pieces as possible from the longest piece of wood. But there is enough spare here to allow for lots of other possibilities. There is room for seven short pieces from the second piece. Then try five $50$s from the first piece etc.</p>
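The unit conversion and the cutting identities in this answer can be checked mechanically (a throwaway sketch):

```python
# All lengths in quarter-inch units: 84" -> 336, 74" -> 296,
# 12.5" -> 50, 7.75" -> 31.
long_board, short_board = 4 * 84, 4 * 74
long_piece, short_piece = 50, 31

assert (long_board, short_board) == (336, 296)
assert long_board + short_board == 632
assert 632 == 7 * (long_piece + short_piece) + 65        # seven pairs, 65 units spare

# The proposed cutting plan and its leftovers:
assert long_board == 6 * long_piece + short_piece + 5    # 5 units = 1.25" waste
assert short_board == long_piece + 6 * short_piece + 60  # 60 units = 15" spare
```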
callculus42
144,421
<p>The model that is usually used is the following:</p> <p>First you have to find which combinations of small pieces (12.5 inches and 7.75 inches) can be cut from the big pieces (84, 74):</p> <p><strong>84 Inches</strong></p> <p>$$\begin{array}{|m{cm}|m{1cm}|} \hline \text{combination} &amp; 12.5 &amp;7.75 &amp; \text{remaining wood} &amp; \text{length} &amp;\text{variable} \\ \hline \hline \hline1&amp; 0&amp;10&amp;6.5 &amp; 84 &amp;x_1\\ \hline 2&amp; 1&amp;9&amp;1.75 &amp;84 &amp; x_2 \\ \hline 3&amp; 2&amp;7&amp; 4.75&amp;84 &amp;x_3 \\ \hline 4&amp; 3&amp;6&amp;0 &amp;84 &amp;x_4 \\ \hline 5&amp;4&amp;4&amp;3&amp;84&amp;x_5 \\ \hline 6&amp; 5&amp;2&amp;6 &amp; 84 &amp; x_6 \\ \hline \hline 7&amp; 6&amp;1&amp;1.25 &amp; 84 &amp; x_7 \\ \hline \end{array}$$ </p> <p>Example for calculating the remaining wood (combination 3): $84-2\cdot 12.5-7\cdot 7.75=4.75$. There is not enough wood left to cut an additional 7.75-inch piece.</p> <p><strong>74 Inches</strong></p> <p>$$\begin{array}{|m{cm}|m{1cm}|} \hline \text{combination} &amp; 12.5 &amp;7.75 &amp; \text{remaining wood} &amp; \text{length} &amp;\text{variable} \\ \hline \hline \hline1&amp; 5&amp;1&amp;3.75 &amp; 74 &amp;y_1\\ \hline 2&amp; 4&amp;3&amp;0.75 &amp;74 &amp; y_2 \\ \hline 3&amp; 3&amp;4&amp; 5.5&amp;74 &amp;y_3 \\ \hline 4&amp; 2&amp;6&amp;2.5 &amp;74 &amp;y_4 \\ \hline 5&amp;1&amp;7&amp;7.25&amp;74&amp;y_5 \\ \hline 6&amp;0&amp;9&amp;4.25&amp;74&amp;y_6 \\ \hline \end{array}$$ </p> <p><strong>MODEL</strong></p> <p><strong>variables</strong></p> <p>$x_i$: number of 84-inch boards cut with combination $i$</p> <p>$y_i$: number of 74-inch boards cut with combination $i$</p> <p><strong>objective function</strong></p> <p>The remaining wood has to be minimized. 
Thus the objective function is</p> <p>$\text{Min} \ \ 6.5x_1+1.75x_2+4.75x_3+0x_4+3x_5+6x_6+1.25x_7+3.75y_1+0.75y_2+5.5y_3+2.5y_4+7.25y_5+4.25y_6$</p> <p><strong>restrictions</strong></p> <p>I assume that we need 50 pieces with a length of 12.5 inches and 50 pieces with a length of 7.75 inches.</p> <p>$$0x_1+x_2+2x_3+3x_4+4x_5+5x_6+6x_7+5y_1+4y_2+3y_3+2y_4+y_5+0y_6\geq 50$$</p> <p>$$10x_1+9x_2+7x_3+6x_4+4x_5+2x_6+1x_7+1y_1+3y_2+4y_3+6y_4+7y_5+9y_6\geq 50$$</p> <p>$$x_i\in \mathbb N \ \forall \ i=1..7$$</p> <p>$$y_i\in \mathbb N \ \forall \ i=1..6$$</p> <p>This problem can be solved by applying the branch and bound algorithm.</p>
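The two combination tables can be generated programmatically. A sketch that enumerates, for each board, every maximal pattern (fix the number of 12.5" pieces, then fit as many 7.75" pieces as possible), working in quarter-inch units to avoid floating point:

```python
def patterns(board, long=50, short=31):
    """Maximal (long, short, remainder) cutting patterns, in 1/4" units."""
    result = []
    for a in range(board // long + 1):
        rest = board - a * long
        b = rest // short              # fill the rest with short pieces
        result.append((a, b, rest - b * short))
    return result

print(patterns(4 * 84))   # 7 patterns, matching the first table
print(patterns(4 * 74))   # 6 patterns, matching the second table
```

For instance `(3, 6, 0)` in the output for the 84-inch board is combination 4 above: three long pieces, six short pieces, no waste.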
3,868,571
<p>Let <span class="math-container">$n\ge 1$</span> and <span class="math-container">$A,B\in\mathrm M_n(\mathbb R)$</span>.</p> <p>Let's assume that</p> <p><span class="math-container">$$\forall Q\in\mathrm M_n(\mathbb R), \quad \det\begin{pmatrix} I_n &amp; A \\ Q &amp; B\end{pmatrix}=0$$</span></p> <p>where <span class="math-container">$I_n$</span> is the identity matrix of <span class="math-container">$\mathrm M_n(\mathbb R)$</span>.</p> <blockquote> <p>Can we prove that <span class="math-container">$\mathrm{rank} \begin{pmatrix}A\\ B\end{pmatrix}&lt;n$</span>?</p> </blockquote> <hr /> <p>This fact seems quite obvious, but I can't find any straightforward argument to prove it.</p> <p><em>Some ideas.</em></p> <p>With <span class="math-container">$Q=0$</span>, we deal with a block-triangular matrix, so we have <span class="math-container">$\det B=0$</span>.</p> <p>Moreover, with <span class="math-container">$Q=\lambda I_n$</span>, <span class="math-container">$\lambda\in\mathbb R$</span>, since it commutes with <span class="math-container">$B$</span>, we have</p> <p><span class="math-container">$$\forall \lambda\in\mathbb R,\quad \det(B-\lambda A)=0,$$</span></p> <p>so if <span class="math-container">$\det(A)\ne 0$</span>, we have</p> <p><span class="math-container">$$\forall \lambda\in\mathbb R,\quad\det((BA-\lambda I_n)A^{-1})=\det(BA-\lambda I_n)\det(A)^{-1}=0,$$</span></p> <p>which means that every <span class="math-container">$\lambda\in\mathbb R$</span> is an eigenvalue of <span class="math-container">$BA$</span> (since for all <span class="math-container">$\lambda\in\mathbb R$</span>, <span class="math-container">$\det(BA-\lambda I_n)=0$</span>), which is absurd.</p> <p>So <span class="math-container">$\det(A)=0$</span> also.</p>
Ben Grossmann
81,360
<p>Denote <span class="math-container">$$M_Q = \begin{pmatrix} I_n &amp; A \\ Q &amp; B\end{pmatrix}, \quad \operatorname{col}(A,B) = \pmatrix{A\\B}. $$</span></p> <hr /> <p>The statement does hold with the additional assumption that <span class="math-container">$\ker(A) \subseteq \ker B$</span>, i.e. the row-space of <span class="math-container">$A$</span> contains that of <span class="math-container">$B$</span>.</p> <p>Suppose for contradiction that <span class="math-container">$\operatorname{col}(A,B)$</span> has full rank. Let <span class="math-container">$U$</span> denote the column space of <span class="math-container">$\operatorname{col}(A,B)$</span>. Let <span class="math-container">$P$</span> denote a matrix whose columns form a basis of <span class="math-container">$U^\perp$</span>. By using column operations on <span class="math-container">$P$</span>, we can bring <span class="math-container">$P$</span> to its column-echelon form, which is <span class="math-container">$$ P = \pmatrix{I_n\\ Q_*} $$</span> for some matrix <span class="math-container">$Q_*$</span>. Because the columns of <span class="math-container">$P$</span> form a basis of <span class="math-container">$U^\perp$</span> and the columns of <span class="math-container">$\operatorname{col}(A,B)$</span> form a basis of <span class="math-container">$U$</span>, we conclude that the columns of <span class="math-container">$M_{Q_*}$</span> form a basis of <span class="math-container">$\Bbb R^{2n}$</span>, which means that <span class="math-container">$M_{Q_*}$</span> is invertible and <span class="math-container">$\det(M_{Q_*}) \neq 0$</span>.</p> <p>Thus, <span class="math-container">$\operatorname{col}(A,B)$</span> indeed fails to have full rank if <span class="math-container">$\det(M_Q) = 0$</span> for all <span class="math-container">$Q$</span>.</p>
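Since the top-left block of $M_Q$ is $I_n$, the Schur complement gives $\det M_Q = \det(B - QA)$; in particular, if some nonzero $v$ satisfies $Av = Bv = 0$, then $(B - QA)v = 0$ for every $Q$. A small numeric sanity check (hand-rolled determinant; the matrices are my own illustrative choices):

```python
from itertools import product

def det(m):
    """Cofactor-expansion determinant of a square matrix (list of rows)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def block_matrix(A, B, Q):
    """M_Q = [[I, A], [Q, B]] built from n x n blocks."""
    n = len(A)
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    return [I[i] + A[i] for i in range(n)] + [Q[i] + B[i] for i in range(n)]

# A and B share the kernel vector (1, 1), so col(A, B) has rank < 2,
# and det(M_Q) vanishes for every Q we try:
A = [[1, -1], [2, -2]]
B = [[3, -3], [0, 0]]
for q in product(range(-2, 3), repeat=4):
    Q = [list(q[0:2]), list(q[2:4])]
    assert det(block_matrix(A, B, Q)) == 0

# Whereas for A = I, B = 0 (full-rank stack), Q = I gives det != 0:
assert det(block_matrix([[1, 0], [0, 1]], [[0, 0], [0, 0]],
                        [[1, 0], [0, 1]])) != 0
```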
318,299
<blockquote> <p>Let <span class="math-container">$U$</span> be an open set in <span class="math-container">$\mathbb R$</span>. Then <span class="math-container">$U$</span> is a countable union of disjoint intervals. </p> </blockquote> <p>This question has probably been asked. However, I am not interested in just getting the answer to it. Rather, I am interested in collecting as many different proofs of it which are as diverse as possible. A professor told me that there are many. So, I invite everyone who has seen proofs of this fact to share them with the community. I think it is a result worth knowing how to prove in many different ways and having a post that combines as many of them as possible will, no doubt, be quite useful. After two days, I will place a bounty on this question to attract as many people as possible. Of course, any comments, corrections, suggestions, links to papers/notes etc. are more than welcome.</p>
Stromael
73,823
<p>These answers all seem to be variations on one another, but I've found each one so far to be at least a little cryptic. Here's my version/adaptation.</p> <p>Let <span class="math-container">$U \subseteq \mathbb{R}$</span> be open and let <span class="math-container">$x \in U$</span>. Either <span class="math-container">$x$</span> is rational or irrational. If <span class="math-container">$x$</span> is rational, define <span class="math-container">\begin{align}I_x = \bigcup\limits_{\substack{I\text{ an open interval} \\ x~\in~I~\subseteq~U}} I,\end{align}</span> which, as a union of non-disjoint open intervals (each <span class="math-container">$I$</span> contains <span class="math-container">$x$</span>), is an open interval subset to <span class="math-container">$U$</span>. If <span class="math-container">$x$</span> is irrational, by openness of <span class="math-container">$U$</span> there is <span class="math-container">$\varepsilon &gt; 0$</span> such that <span class="math-container">$(x - \varepsilon, x + \varepsilon) \subseteq U$</span>, and there exists rational <span class="math-container">$y \in (x - \varepsilon, x + \varepsilon) \subseteq I_y$</span> (by the definition of <span class="math-container">$I_y$</span>). Hence <span class="math-container">$x \in I_y$</span>. So any <span class="math-container">$x \in U$</span> is in <span class="math-container">$I_q$</span> for some <span class="math-container">$q \in U \cap \mathbb{Q}$</span>, and so <span class="math-container">\begin{align}U \subseteq \bigcup\limits_{q~\in~U \cap~\mathbb{Q}} I_q.\end{align}</span> But <span class="math-container">$I_q \subseteq U$</span> for each <span class="math-container">$q \in U \cap \mathbb{Q}$</span>; thus <span class="math-container">\begin{align}U = \bigcup\limits_{q~\in~U \cap~\mathbb{Q}} I_q, \end{align}</span> which is a countable union of open intervals.</p>
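Computationally, the same idea (each point's maximal containing interval) amounts to merging overlapping intervals. A sketch for the special case where the open set is given as a finite union of open intervals; the representation is my own assumption, not part of the proof:

```python
def components(intervals):
    """Disjoint open intervals whose union equals the union of the input
    open intervals. Touching intervals like (0,1) and (1,2) stay separate,
    since their union omits the shared endpoint."""
    out = []
    for a, b in sorted(intervals):
        if out and a < out[-1][1]:   # overlaps the last component: merge
            out[-1] = (out[-1][0], max(out[-1][1], b))
        else:
            out.append((a, b))
    return out

print(components([(0, 2), (1, 3), (2.5, 2.9), (5, 6)]))   # [(0, 3), (5, 6)]
```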
RJM
376,273
<p>More of a question than an answer. I am a chemist turned pharmacist who wishes he had studied mathematics. I am trying to work through Rudin's Principles of Mathematical Analysis. I envy you all who are involved in math for a career. </p> <p>Can someone give me feedback on my attempt at a proof in $\mathbb R$? Completely novice and not at all pretty, but is it sound?</p> <p>segment := open interval in R. </p> <p>Lemma: disjoint segments in R are separated (proof not shown).</p> <p>It follows from the lemma that an open connected subset of R cannot be the union of disjoint segments.</p> <p>Let E be an open subset of R. Since R is separable by Rudin prob 2.22, there exists a subset, D, of R that is countable and dense in R. Assume E is connected, which includes E=R; then E is the union of an at most countable collection of open segments, namely E itself.</p> <p>Suppose E is separated. Then E is the union of a collection of disjoint segments, including the possibility of segments unbounded above or below.</p> <p>If the collection is finite, then it is at most countable. </p> <p>Assume the collection of segments is infinite. Because D is dense in R and E is contained in R, every open subset of E contains a point of D. Then each of the infinitely many disjoint segments contains a point of D contained in no other segment. Since D is countable, this sets up a one-to-one correspondence between a set of points of D and the segments, which implies that there are countably many disjoint segments in the collection. </p> <p>Therefore E is the union of finitely many or countably many, hence at most countably many, disjoint segments. </p>
2,056,209
<p>We know that the union of countably many countable sets is countable. What can we say about the union of infinitely many countable sets and its cardinality? Thanks :)</p>
Jacob Wakem
117,290
<p>The set of real numbers is the union of the singletons drawn from the real numbers. These are countable sets, but the union is uncountable. In general, the disjoint union of k nonempty countable sets, for k greater than aleph null, has cardinality k.</p>
1,553,530
<p>I have a small question about how to finish the proof in the title. The main idea seems to be make an assumption of ∀x∀y (Px→(Py→x=y)) and to derive a contradiction between Raa and ¬Raa from that, which then proves the conclusion:</p> <p>So</p> <p>1.∀x∀y (Px→(Py→x=y))</p> <ol start="2"> <li>∀y (Pa→(Py→a=y))</li> </ol> <p>3.(Pa→(Pb→a=b))</p> <ol start="4"> <li>Assume Pa, then by modus ponens</li> </ol> <p>5.Pb→a=b</p> <p>Pb can be derived from the premiss ∀x∃y(Rxy∧Py), which gives</p> <p>6.∃y(Ray∧Py)</p> <ol start="7"> <li><p>Existential elimination gives the assumption Rab∧Pb</p> </li> <li><p>Pb by conjunction elimination</p> </li> <li><p>Plugging Pb into 5. gives a=b by modus ponens</p> </li> <li><p>Use the assumption Rab∧Pb again</p> </li> <li><p>Use conjunction elimination to get Rab</p> </li> <li><p>Then combine 9. and 11. to get Raa</p> </li> <li><p>Then take the premiss ∀x¬Rxx</p> </li> <li><p>Use universal quantifier elimination to get ¬Raa</p> </li> <li><p>From the contradiction between 14. and 12. you can prove ¬Pa (Pa was the assumption in 4.)</p> <p>But where do I go from here? I need to get another contradiction in order to discharge my assumption ∀x∀y (Px→(Py→x=y)) and prove the conclusion...but anything I assume seems impossible to discharge again!</p> </li> </ol> <p>Thanks for your help and sorry for the long explanation!</p>
BrianO
277,043
<p>You don't know that $Pa$, hence the difficulty you're having. I'll sketch the deduction:</p> <ol> <li>Assume $\forall x \forall y\,(Px\to (Py\to x=y))$.</li> <li>Let $a$ be any individual. By hypothesis, we get</li> <li>$\exists y\,(Ray \land Py)$. (This is your 6.) </li> <li>$Rab \land Pb$ (existential elimination, giving your 7.)</li> <li>Similarly to 3., $\exists y\,(Rby \land Py)$, so</li> <li>$Rbc \land Pc$ for some $c$ (existential elimination).</li> <li>$Pb\to (Pc\to b=c)$, using the assumption.</li> <li>From 4., $Pb$, and from 6., $Pc$.</li> <li>From 8. and 7. by modus ponens, $b=c$.</li> <li>From 6. and 9., $Rbb$.</li> <li>$\neg Rbb$, by instantiating $\forall x\,\neg Rxx$ with $b$.</li> <li>From 10. and 11., contradiction; so conclude that the assumption is false.</li> </ol>
Mario Carneiro
50,776
<p>My first thought when approaching this type of problem is to read out the statements and come up with a natural language proof. For convenience I will replace $Rxy$ with $x\prec y$ and read it as "$y$ is bigger than $x$" simply because we have a lot of convenient adjectives for that relation. $\forall x\exists y(x\prec y\land Py)$ says that for every $x$ there is a $y$ bigger than $x$ which has property $P$, and $\forall x,x\not\prec x$ means that no element is bigger than itself, while $\lnot\forall x\forall y(Px\to(Py\to x=y))$ means that it is not the case that there is at most one element $x$ with property $P$, i.e. there are at least two distinct elements with property $P$.</p> <p>From this, the answer is (hopefully) already clear: We assume there is at least one element $a$, and then there is a $b$ bigger than $a$ with property $P$, and there is also a $c$ bigger than $b$ with property $P$ (and $b\ne c$ because $c$ is bigger). If we knew that $\prec$ was transitive, we could even continue to get an infinite number of distinct elements with property $P$, but with only irreflexivity we can only get two.</p> <p>Now to turn this into a rigorous proof.</p> <ul> <li>1: Assume $\forall x\exists y(x\prec y\land Py)$</li> <li>2: Assume $\forall x,x\not\prec x$</li> <li>3: Assume $\forall x\forall y(Px\to(Py\to x=y))$</li> </ul> <p>We're going to prove a contradiction from these three, thus proving the negation of (3).</p> <ul> <li>4: $\exists y(a\prec y\land Py)$ by universal instantiation $x:=a$ on (1) </li> <li>5: Take $b$ from (4) with $a\prec b\land Pb$</li> <li>6: $\exists y(b\prec y\land Py)$ by universal instantiation $x:=b$ on (1)</li> <li>7: Take $c$ from (6) with $b\prec c\land Pc$</li> <li>8: $Pb$ from and elimination on (5)</li> <li>9: $Pc$ from and elimination on (7)</li> <li>10: $Pb\to(Pc\to b=c)$ from universal instantiation $x:=b,y:=c$ on (3)</li> <li>11: $Pc\to b=c$ from modus ponens on (8,10)</li> <li>12: $b=c$ from modus ponens on (9,11)</li> 
<li>13: $b\prec c$ from and elimination on (7)</li> <li>14: $b\prec b$ from equality substitution on (12,13)</li> <li>15: $b\not\prec b$ from universal instantiation $x:=b$ on (2)</li> <li>16: contradiction from (14,15)</li> </ul> <p>This is nothing more than an explication of the same natural language proof: get $b$ and $c$, and note that they are distinct and both have property $P$.</p>
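A complementary semantic check: by soundness and completeness, the entailment holds iff no model satisfies the two premises together with "at most one element has property $P$". A brute-force search over all small finite models (a spot-check over domains of size 1 to 3, not a proof):

```python
from itertools import product

def satisfies(dom, R, P):
    """The two premises plus 'at most one element has property P'."""
    serial_into_P = all(any((x, y) in R and y in P for y in dom) for x in dom)
    irreflexive = all((x, x) not in R for x in dom)
    return serial_into_P and irreflexive and len(P) <= 1

# No model with 1, 2 or 3 elements satisfies all three statements.
for n in range(1, 4):
    dom = list(range(n))
    pairs = [(x, y) for x in dom for y in dom]
    for r_bits in product([0, 1], repeat=len(pairs)):
        R = {p for p, bit in zip(pairs, r_bits) if bit}
        for p_bits in product([0, 1], repeat=n):
            P = {x for x, bit in zip(dom, p_bits) if bit}
            assert not satisfies(dom, R, P)
```

The failure mode mirrors the proof above: with $P = \{p\}$, every element (including $p$ itself) would have to relate to $p$, contradicting irreflexivity.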
64,905
<p>Let's see if we could use MO to put some pressure on certain publishers...</p> <p>Although it is wonderful that it has been put <a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih would be (re)written in the latex typesetting (well, if I could understand its content..).</p> <p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
known google
392
<p>EGA, with hyperlinks for easy navigation.</p>
hce
2,781
<p>Chern - Complex manifolds without potential theory</p>
Ricky
7,845
<p>All the SGA's. Note that SGA 1 and 2 already exist in TeX, and there is something for SGA 3 and 4.</p>
hce
2,781
<p>Stong - Notes on cobordism theory</p>
79,658
<p>My knowledge is very limited for complex geometry. I have the following question:</p> <p>If we have two complex vector bundles $E\to X$ and $F\to X$ such that we have an isomorphism $\mathcal O\left(E\right) \cong \mathcal O\left(F\right)$ between the sheaf of holomorphic sections, do we have an isomorphism $E \cong F$ ?</p>
DamienC
7,031
<p>Yes, this is true as soon as you assume that the isomorphism of sheaves $\mathcal O(E)\cong\mathcal O(F)$ is $\mathcal O_X$-linear. </p> <p>To prove it, pick a cover $(U_i)_{i\in I}$ of $X$ by opens which are trivializing both for $E$ and $F$. Now if you have an isomorphism of sheaves $\phi:\mathcal O(E)\to\mathcal O(F)$ then you can restrict it to each of these opens and get holomorphic functions $\phi_i:U_i\to GL_n(\mathbb{C})$. If you write out the gluing property carefully you'll get something like: on intersections, $f_{ij}\phi_i=\phi_je_{ij}$, where $e_{ij}$ and $f_{ij}$ are transition functions for $E$ and $F$. </p> <p>This defines a bundle map. </p>
2,962,377
<p>Rewrite <span class="math-container">$f(x,y) = 1-x^2y^2$</span> as a product <span class="math-container">$g(x) \cdot h(y)$</span> (both arbitrary functions).</p> <p>To make clearer what I'm talking about, I will give an example.</p> <p>Rewrite <span class="math-container">$f(x,y) = 1+x-y-xy$</span> as <span class="math-container">$g(x)h(y)$</span>.</p> <p>If we choose <span class="math-container">$g(x) = (1+x)$</span> and <span class="math-container">$h(y) = (1-y)$</span> we have</p> <p><span class="math-container">$f(x,y) = g(x) h(y) \implies (1+x-y-xy) = (1+x)(1-y)$</span></p> <p>I'm trying to do the same with <span class="math-container">$f(x,y) = 1-x^2y^2 = (1-xy)(1+xy)$</span>.</p> <p>New question:</p> <blockquote> <p>Is there also a contradiction for <span class="math-container">$f(x,y) = \frac{xy}{1-x^2y^2}$</span>? Or is it possible to write <span class="math-container">$f(x,y)$</span> as <span class="math-container">$g(x)h(y)$</span>?</p> </blockquote>
Batominovski
72,152
<p>For <span class="math-container">$f(x,y)=\dfrac{xy}{1-x^2y^2}$</span>, the answer is also no. Suppose on the contrary that there exist nonempty open intervals <span class="math-container">$U$</span> and <span class="math-container">$V$</span> such that <span class="math-container">$f|_{U\times V}$</span> can be factorized as <span class="math-container">$$f(x,y)=g(x)\,h(y)\text{ for all }x\in U\text{ and }y\in V\,.$$</span> Fix <span class="math-container">$a\in U$</span> and <span class="math-container">$b\in V$</span>. Without loss of generality, we may assume that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are both nonzero. Then, <span class="math-container">$$\frac{bx}{1-b^2x^2}=f(x,b)=g(x)\,h(b)$$</span> for all <span class="math-container">$x\in U$</span>. This shows that <span class="math-container">$h(b)\neq 0$</span> and so there exists <span class="math-container">$\alpha \neq 0$</span> such that <span class="math-container">$$g(x)=\frac{\alpha x}{1-b^2x^2}\text{ for }x\in U\,.$$</span> Similarly, <span class="math-container">$$h(y)=\frac{\beta y}{1-a^2y^2}\text{ for }y\in V\,.$$</span> That is, <span class="math-container">$$\frac{xy}{1-x^2y^2}=f(x,y)=g(x)\,h(y)=\frac{\alpha \beta\, xy}{\left(1-b^2x^2\right)\,\left(1-a^2y^2\right)}$$</span> for all <span class="math-container">$x\in U$</span> and <span class="math-container">$y\in V$</span>. In other words, <span class="math-container">$$\left(1-b^2x^2\right)\,\left(1-a^2y^2\right)=\alpha\beta\,\left(1-x^2y^2\right)$$</span> for all <span class="math-container">$x\in U\setminus\{0\}$</span> and <span class="math-container">$y\in V\setminus\{0\}$</span>. 
The two bivariate polynomials <span class="math-container">$\left(1-b^2x^2\right)\,\left(1-a^2y^2\right)$</span> and <span class="math-container">$\alpha\beta\,\left(1-x^2y^2\right)$</span> must then be equal (not just as functions, but as polynomials in <span class="math-container">$\mathbb{R}[x,y]$</span>), but the one on the left has a term <span class="math-container">$x^2$</span> with nonzero coefficient, while the one on the right does not have such a term.</p>
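A quick numerical way to see non-separability: any $f(x,y)=g(x)h(y)$ satisfies $f(a,c)\,f(b,d)=f(a,d)\,f(b,c)$, since both sides equal $g(a)g(b)h(c)h(d)$. The function here visibly fails this test (sample points are arbitrary):

```python
def f(x, y):
    return x * y / (1 - x**2 * y**2)

# For a separable f these two products would coincide.
a, b, c, d = 0.5, 0.9, 0.5, 0.9
lhs = f(a, c) * f(b, d)
rhs = f(a, d) * f(b, c)
assert abs(lhs - rhs) > 0.1   # they differ, so f is not g(x) * h(y)
```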
2,075,712
<p>I want to show that if $A=[0,1)$ then its interior is $(0,1)$. I know that $int(A) \subset A$, and that $\forall x \in int(A)$ $\exists R&gt;0 $ such that $B(x,R) \subset A$. Thus immediately we see that $0 \notin int(A)$ because $\not \exists R&gt;0$ such that $B(0,R)\subset A$.</p> <p>What I struggle to do is to show that the final set is equal to $(0,1)$.</p>
Community
-1
<p>We know $$r = 4R\sin \frac{A}{2}\sin \frac{B}{2}\sin \frac{C}{2}$$ Thus, $$r_1-r = 4R\sin \frac{A}{2}\cos \frac{B}{2}\cos \frac{C}{2} -4R\sin \frac{A}{2}\sin \frac{B}{2}\sin \frac{C}{2} = 4R\sin ^2 \frac{A}{2}$$ Similarly it can be shown that: $$r_2-r = 4R\sin ^2 \frac{B}{2} \text{ and that } r_3-r = 4R\sin ^2\frac{C}{2}$$ Thus, $$(r_1-r)(r_2-r)(r_3-r) = 64R^3\sin^2\frac{A}{2}\sin ^2\frac{B}{2}\sin ^2\frac{C}{2} = 4R(4R\sin \frac{A}{2}\sin \frac{B}{2}\sin \frac{C}{2})^2 = 4Rr^2$$ Hope it helps.</p>
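As a numerical spot-check, the identity $(r_1-r)(r_2-r)(r_3-r)=4Rr^2$ can be verified on the 3-4-5 right triangle, using the standard formulas $r=K/s$, $r_a=K/(s-a)$, $R=abc/4K$ with $K$ the area:

```python
import math

a, b, c = 3.0, 4.0, 5.0
s = (a + b + c) / 2                                 # semiperimeter
K = math.sqrt(s * (s - a) * (s - b) * (s - c))      # Heron's formula
r, R = K / s, a * b * c / (4 * K)                   # inradius, circumradius
r1, r2, r3 = K / (s - a), K / (s - b), K / (s - c)  # exradii

lhs = (r1 - r) * (r2 - r) * (r3 - r)
assert math.isclose(lhs, 4 * R * r ** 2)            # both sides equal 10 here
```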
1,940,249
<p>According to Wikipedia, <a href="https://en.wikipedia.org/wiki/First-order_logic#Completeness_and_undecidability" rel="nofollow">first order logic is complete</a>. What is the proof of this?</p> <p>(Also, in the same paragraph, it says that it's undecidable. Couldn't you just enumerate all possible proofs and disproofs to decide it, though?)</p>
Reese Johnston
351,805
<p>I won't cover why first order logic is complete - an adequate textbook on mathematical logic will provide the proof. I recommend <em>Mathematical Logic</em> by Kunen, but it may be a bit more extensive than your needs.</p> <p>As for decidability: Every necessarily true statement is provable, and every necessarily false statement has a proof of its negation (that's what it means to be "complete") but there are statements which are neither. For example, $(\forall x)P(x)$ may be either true or false depending on the universe of discourse and the definition of $P$. On such a statement, your proposed algorithm would never halt, because it would never find a proof or a disproof; but at no point could we be sure that no proof or disproof exists.</p>
3,061,277
<p>I have asked a few people about this concept, but none of them were able to help me understand it, so I hope there's a hero who can save me from this problem!</p> <p>My question occurs during the substitution process. For example, sometimes we let <span class="math-container">$x = π - u$</span>. Then, after some manipulation, we let <span class="math-container">$x = u$</span> and integrate the thing that we want to integrate. That doesn't seem intuitive to me; aren't we changing the definition of <span class="math-container">$x$</span> and ignoring the rules of arithmetic? <span class="math-container">$x = u \implies x = π - x$</span>. Why doesn't that operation affect the integration result?</p> <p>Here is a concrete example illustrating my question:</p> <p><span class="math-container">$$ \int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx $$</span> then we let <span class="math-container">$x = π - u \implies dx = -du$</span> <span class="math-container">$$= \int^{π}_{0} \frac{(π - u)\sin(u)}{(1+\cos^2(u))} du$$</span> <span class="math-container">$$= \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du - \int^{π}_{0} \frac{u\sin(u)}{(1+\cos^2(u))} du$$</span> <strong>And here (below) is the part that I don't understand: the <span class="math-container">$x$</span> in <span class="math-container">$x\sin(x)$</span> on the RHS.</strong></p> <p>we then let <span class="math-container">$x = u$</span> <span class="math-container">$$\int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx = \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du - \int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx$$</span></p> <p>Move the rightmost term to the LHS and integrate the RHS to solve the problem.</p> <p><span class="math-container">$$2\int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx = \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du$$</span></p> <p>Why can we let <span class="math-container">$x = u$</span>? Isn't it the case that we gave it the value <span class="math-container">$π - u$</span> in the beginning?</p>
Doug M
317,162
<p>We have:</p> <p><span class="math-container">$\int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx = \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du - \int^{π}_{0} \frac{u\sin(u)}{(1+\cos^2(u))} du$</span></p> <p>Rather than say "let <span class="math-container">$u = x$</span>" I think it would be cleaner to say </p> <p><span class="math-container">$\int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx = \int^{π}_{0} \frac{u\sin(u)}{(1+\cos^2(u))} du$</span></p> <p>And then bring it over to the other side </p> <p>or </p> <p><span class="math-container">$I = \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du - I\\ 2I = \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du\\ I = \frac 12 \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du$</span></p>
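As a numeric cross-check of the whole manipulation: since $\int_0^\pi \frac{\sin u}{1+\cos^2 u}\,du = \big[-\arctan(\cos u)\big]_0^\pi = \frac\pi2$, the original integral should equal $\pi^2/4$. A Simpson's-rule sketch confirms it:

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h)
                          for i in range(1, n))
    return s * h / 3

integrand = lambda x: x * math.sin(x) / (1 + math.cos(x) ** 2)
I = simpson(integrand, 0.0, math.pi)
assert math.isclose(I, math.pi ** 2 / 4, rel_tol=1e-6)
```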
556,664
<p>For example, if $f$ is integrable on $[0,3]$, is it also integrable on $[1,2]$? I tried thinking of a counterexample but couldn't, since I've only learned what implies integrability but not what integrability implies.</p>
Haha
94,689
<p>If $f$ is integrable on $[0,3]$ then there is an $F:[0,3]\to \Bbb R$ with $F'=f$ on $[0,3]$. The restriction of $F$ to $[1,2]$, call it $G$, satisfies $G'=f$ on $[1,2]$. So if $f$ is integrable on $[0,3]$, it is integrable on every connected subset (here: every subinterval) of $[0,3]$.</p>
556,664
<p>For example, if $f$ is integrable on $[0,3]$, is it also integrable on $[1,2]$? I tried thinking of a counterexample but couldn't, since I've only learned what implies integrability but not what integrability implies.</p>
bof
111,012
<p>A function $f:[a,b]\to\mathbb R$ is Riemann-integrable if and only if $f$ is bounded and its set of points of discontinuity has Lebesgue measure zero. It clearly follow that, if $f$ is Riemann-integrable on an interval, then its restriction to a subinterval is also Riemann-integrable.</p>
1,407,169
<p>Let $A$ and $B$ be two matrices in $M_n$. Is the following true:</p> <p>$A$ and $B$ are similar $\iff$ $A$ and $B$ have the same Jordan canonical form.</p> <p>Could someone explain?</p>
Patrick Stevens
259,262
<p>It's true. Let $A$ have the same JCF $J$ as $B$ does; say $A = PJP^{-1}$ and $B = Q J Q^{-1}$. Then $J = P^{-1} A P$, so $B = Q P^{-1} A P Q^{-1}$, and so $B$ is similar to $A$.</p> <p>Conversely, suppose $A$ and $B$ are similar, but for contradiction have different JCFs: $A$ reduces to JCF $J$, and $B$ to JCF $K$. Since similarity is transitive, $J$ and $K$ would then be similar; but by the uniqueness of the Jordan canonical form, two Jordan matrices that differ by more than a reordering of the blocks are never similar - a contradiction.</p>
2,338,199
<p>Let $G$ and $H$ be two groups. Suppose that $M$ and $N$ be two normal subgroups of $G$ and $H$ respectively such that we have the following,</p> <ul> <li><p>$G$ and $H$ are isomorphic.</p></li> <li><p>$M$ and $N$ are isomorphic.</p></li> </ul> <p>Then we know that $G/M$ and $H/N$ are isomorphic if for an isomorphism $\varphi:G\to H$ we have $\varphi(M)=N$.</p> <p>My Question is, </p> <blockquote> <p>If $G$ and $H$ be two groups and $M$ and $N$ are two normal subgroups of $G$ and $H$ respectively such that $G/M$ and $H/N$ are isomorphic then is it true that both $G$ and $H$ are isomorphic <strong>or</strong> $M$ and $N$ are isomorphic?</p> </blockquote> <p>I don't know how to approach this problem. Can anyone help me?</p>
Ken Duna
318,831
<p>This is false. </p> <p>Let $$G = \mathbb{Z}_2 \times \mathbb{Z}_2 = \textrm{ The Klein Four Group }$$ and $$M = \textrm{ trivial subgroup}.$$ Now let $$H = D_4 = \textrm{Dihedral Group of Order 8}$$ and $$N = Z(D_4) = \textrm{ The Center of } D_4.$$</p> <p>You can show that $G/M \cong H/N$ (both quotients are isomorphic to the Klein four-group). Yet $|G| = 4 \neq 8 = |H|$ and $|M| = 1 \neq 2 = |N|$, so neither $G \cong H$ nor $M \cong N$.</p>
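<p>This counterexample can be machine-checked (my addition, a sketch only): model $D_4$ as permutations of the square's vertices, compute its center, and verify that the quotient has order $4$ with every element squaring to the identity, which characterizes the Klein four-group.</p>

```python
from itertools import product

def compose(p, q):
    """(p . q)(i) = p[q[i]] for permutations given as tuples."""
    return tuple(p[q[i]] for i in range(len(q)))

# D4 acting on the square's vertices 0,1,2,3: generated by the
# rotation r = (0 1 2 3) and the reflection s = (0 2).
e = (0, 1, 2, 3)
r, s = (1, 2, 3, 0), (2, 1, 0, 3)

# Close {e, r, s} under composition to get all 8 elements of D4.
D4 = {e}
frontier = {r, s}
while frontier:
    D4 |= frontier
    frontier = {compose(a, b) for a, b in product(D4, D4)} - D4
assert len(D4) == 8

# Center: elements commuting with everything.
Z = {z for z in D4 if all(compose(z, g) == compose(g, z) for g in D4)}
assert len(Z) == 2

# Cosets of Z form the quotient D4/Z (Z is central, so this is a group).
cosets = {frozenset(compose(g, z) for z in Z) for g in D4}
identity_coset = frozenset(Z)
# Every coset squares to the identity coset => quotient is the Klein four-group.
squares = {frozenset(compose(compose(g, g), z) for z in Z) for g in D4}
print(len(cosets), squares == {identity_coset})   # 4 True
```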
3,764,342
<p>Here, since <span class="math-container">$\lim \frac{a_n}{a_{n+1}}=1,$</span> no definite conclusion can be made about the nature of the sequence <span class="math-container">$\langle a_n\rangle$</span>.</p> <p>So how can I proceed to find the value of <span class="math-container">$a_1$</span> from the relation <span class="math-container">$a_ka_{k+1}=k,$</span> for any <span class="math-container">$k\in\mathbb N$</span>?</p> <p>Please suggest something.</p>
Gerry Myerson
785,985
<p>When I was in high school a friend showed me a way to sum the infinite arithmetico-geometric series, made a big impression on me at the time: <span class="math-container">$$\matrix{1&amp;+&amp;x&amp;+&amp;x^2&amp;+&amp;x^3&amp;+&amp;x^4&amp;+&amp;\cdots\cr&amp;&amp;x&amp;+&amp;x^2&amp;+&amp;x^3&amp;+&amp;x^4&amp;+&amp;\cdots\cr&amp;&amp;&amp;&amp;x^2&amp;+&amp;x^3&amp;+&amp;x^4&amp;+&amp;\cdots\cr&amp;&amp;&amp;&amp;&amp;&amp;\vdots&amp;&amp;\vdots&amp;&amp;\ddots}$$</span> If you sum the columns, you get <span class="math-container">$1+2x+3x^2+4x^3+\cdots$</span>. If you sum the rows, you get <span class="math-container">$${1\over1-x}+{x\over1-x}+{x^2\over1-x}+\cdots$$</span> a geometric series with sum <span class="math-container">${1/(1-x)\over1-x}=(1-x)^{-2}$</span>, and we're done. OK, we should insist on <span class="math-container">$|x|&lt;1$</span> to guarantee convergence and justify the manipulations, and we should realize that all that's going on here is interchange of summations, but it's still pretty neat.</p>
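<p>The row-summing trick above is easy to corroborate numerically (my addition; the sample values of $x$ are arbitrary): partial sums of $1+2x+3x^2+\cdots$ should approach $(1-x)^{-2}$.</p>

```python
def sum_agp(x, terms=500):
    """Partial sum of 1 + 2x + 3x^2 + ... up to the given number of terms."""
    return sum((n + 1) * x**n for n in range(terms))

for x in (0.5, -0.3, 0.9):
    print(x, sum_agp(x), (1 - x) ** -2)   # the two columns should agree
```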
30,586
<p>There are many ways to type a pipe. You could use \$|\$ (<span class="math-container">$|$</span>), \$\vert\$ (<span class="math-container">$\vert$</span>), \$\mid\$ (<span class="math-container">$\mid$</span>), or just a plain | (not surrounded by dollar signs). You could also use vmatrix to indicate matrix determinants.</p> <p>I wanted to know when it is appropriate to use each type of pipe on Mathematics Stack Exchange. For example, pipes can be used in the following cases:</p> <ul> <li>To indicate that one integer is a factor (or divisor) of another (e.g. <span class="math-container">$2|4$</span>)</li> <li>To indicate conditions in set notation (e.g. <span class="math-container">$Dom(\sqrt{x}) = \{x \in \mathbb{R} \mid x \ge 0\}$</span>)</li> <li>To indicate absolute value (e.g. <span class="math-container">$|-2019| = |2019| = 2019$</span>)</li> <li>To indicate the cardinality of a set (e.g. <span class="math-container">$|\emptyset|=0$</span>)</li> <li>To indicate the order of an element of a group (e.g. <span class="math-container">$\forall x \in K_4 ((x=e) \lor (|x|=2))$</span>, where <span class="math-container">$K_4$</span> is the Klein four-group)</li> <li>To indicate the determinant of a square matrix (e.g. <span class="math-container">$\begin{vmatrix} 2 &amp; 3\\5 &amp; 7 \end{vmatrix}=-1$</span>)</li> </ul> <p>There is also of course the double pipe symbol (<span class="math-container">$||$</span>), which is used for logical or in programming, concatenation, and parallel lines; and should not be confused with the number eleven.</p>
Martin Sleziak
8,297
<p>Since you asked about some specific meanings when the vertical line appears:</p> <ul> <li>For divisibility, you use <code>\mid</code> (and <code>\nmid</code> for "does not divide"). For example, <code>$a\mid b$</code> <span class="math-container">$a\mid b$</span>, <code>$4\mid8$</code> <span class="math-container">$4\mid8$</span>, <code>$4\nmid7$</code> <span class="math-container">$4\nmid7$</span>. The advantage over typing just <code>$4|8$</code> <span class="math-container">$4|8$</span> is that <code>\mid</code> yields extra spacing (but some authors prefer <span class="math-container">$4|8$</span>). </li> <li>For conditions in set notation, use again <code>\mid</code>. (However, other symbols are also used for this purpose, not just the vertical bar.)</li> <li>For absolute value, you can simply use <code>|</code>, for example, <code>$|x+2|+|x-2|=4$</code> <span class="math-container">$|x+2|+|x-2|=4$</span>. Sometimes, if the expression inside an absolute value has bigger height, you might combine this with <code>\left</code> and <code>\right</code>. For example, <code>$\left|\frac{x+1}2-1\right|$</code> <span class="math-container">$\left|\frac{x+1}2-1\right|$</span> looks better than just <code>$|\frac{x+1}2-1|$</code> <span class="math-container">$|\frac{x+1}2-1|$</span>. You can also use <code>\lvert</code> and <code>\rvert</code>. <code>$\lvert x+2 \rvert + \lvert x-2 \rvert = 4$</code> <span class="math-container">$\lvert x+2 \rvert + \lvert x-2 \rvert = 4$</span> or <code>$\left\lvert\frac{x+1}2-1\right\rvert$</code> <span class="math-container">$\left\lvert\frac{x+1}2-1\right\rvert$</span>. You can treat the order of a group or the cardinality of a set in the same way.</li> <li><p>For determinants, you can use the <code>vmatrix</code> environment. 
For example, <span class="math-container">$$\begin{vmatrix} a_{11} &amp; a_{12} &amp; a_{13} \\ a_{21} &amp; a_{22} &amp; a_{23} \\ a_{31} &amp; a_{32} &amp; a_{33} \end{vmatrix}$$</span> is obtained using</p> <p><code>$$\begin{vmatrix} a_{11} &amp; a_{12} &amp; a_{13} \\ a_{21} &amp; a_{22} &amp; a_{23} \\ a_{31} &amp; a_{32} &amp; a_{33} \end{vmatrix}$$</code></p></li> </ul> <p>Although MathJax is different from LaTeX, many things which can be used in LaTeX apply also in MathJax. So if you find some advice on math mode in LaTeX, it is reasonable to try them also here.</p> <p>See also: </p> <ul> <li>The section on matrices in <a href="https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference/5023#5023">MathJax basic tutorial and quick reference</a>.</li> <li><a href="https://tex.stackexchange.com/q/43008">Absolute Value Symbols</a> on TeX Stack Exchange.</li> <li><a href="https://tex.stackexchange.com/q/498">\mid, | (vertical bar), \vert, \lvert, \rvert, \divides</a> on Tex Stack Exchange (this was already linked in <a href="https://math.meta.stackexchange.com/questions/30586/when-should-each-type-of-pipe-be-used/30588#30588">Jack's answer</a> and <a href="https://math.meta.stackexchange.com/questions/30586/when-should-each-type-of-pipe-be-used/30588#comment132454_30586">quid's comment</a>).</li> <li><a href="https://tex.stackexchange.com/q/187162">vertical bar for absolute value and conditional expectation</a> on TeX Stack Exchange; <a href="https://tex.stackexchange.com/q/277833">Absolute value size! (\lvert doesn't scale)</a> on TeX Stack Exchange</li> <li><a href="https://tex.stackexchange.com/q/116580">a ∣ b and a ∤ b in formulas</a> on TeX Stack Exchange</li> <li><a href="https://tex.stackexchange.com/q/448">How to automatically resize the vertical bar in a set comprehension?</a> on TeX Stack Exchange.</li> </ul>
3,831,310
<p>I am trying to integrate</p> <p><span class="math-container">$$\int \frac {dv}{\frac {-c}{m}v^2 - g \sin \theta}$$</span></p> <p>I substituted <span class="math-container">$u = \frac{c}{m}$</span> and <span class="math-container">$w = g \sin \theta$</span> to get</p> <p><span class="math-container">$$-\int \frac {dv}{uv^2 + w}$$</span></p> <p>I'm wondering if I have to do a second substitution. To be honest, I don't know whether I can do that or how to do it. Perhaps I have to rearrange to get something of the form <span class="math-container">$\frac1{1+x^2}$</span>.</p>
DMcMor
155,622
<p>You've already rewritten the integral to make it easier to work with. The next step, as you've considered, is to try to get it into the form <span class="math-container">$\int \frac{dx}{1+x^2}$</span> so you can get an antiderivative in terms of arctangent. To do that: <span class="math-container">\begin{align*} -\int \frac{dv}{uv^{2} + w} &amp;= -\frac{1}{w}\int\frac{dv}{\frac{u}{w}v^2 + 1}\\ &amp;=-\frac{1}{w}\int \frac{dv}{\left(\sqrt{\frac{u}{w}}v\right)^{2}+1}. \end{align*}</span></p> <p>If we use the substitution <span class="math-container">$x = \sqrt{\frac{u}{w}}v$</span> and <span class="math-container">$dx = \sqrt{\frac{u}{w}}\,dv$</span> we get</p> <p><span class="math-container">\begin{align*} -\frac{1}{w}\int \frac{dv}{\left(\sqrt{\frac{u}{w}}v\right)^{2}+1} &amp;= -\sqrt{\frac{w}{u}}\frac{1}{w}\int\frac{dx}{x^{2} + 1}\\ &amp;=-\frac{1}{\sqrt{uw}}\arctan(x) + C\\ &amp;= -\frac{1}{\sqrt{uw}}\arctan\left(\sqrt{\frac{u}{w}}v\right) + C. \end{align*}</span></p> <p>At this point you just need to undo your original substitutions.</p>
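<p>One way to double-check an antiderivative like this (my addition; the sample values $u=2$, $w=3$ are arbitrary positive constants) is to differentiate $F(v)=-\frac{1}{\sqrt{uw}}\arctan\big(\sqrt{u/w}\,v\big)$ numerically and compare against the integrand $-\frac{1}{uv^2+w}$.</p>

```python
import math

u, w = 2.0, 3.0          # arbitrary positive sample values

def F(v):
    """Candidate antiderivative from the answer above."""
    return -1 / math.sqrt(u * w) * math.atan(math.sqrt(u / w) * v)

def integrand(v):
    return -1 / (u * v**2 + w)

h = 1e-6
for v in (-2.0, 0.3, 1.5):
    deriv = (F(v + h) - F(v - h)) / (2 * h)   # central difference
    print(v, deriv, integrand(v))             # the last two columns agree
```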
3,696,265
<p>This is from Vakil's FOAG: exercise 2.5 C, part b. I understand how objects in the extension sheaf from a sheaf on a base <span class="math-container">$\mathcal B$</span> of a topology are created, but I am having trouble understanding how to produce a morphism of sheaves given a morphism of sheaves on a base. </p> <p>Assume we have topological space <span class="math-container">$X$</span>. Supposing we have two sheaves on our base <span class="math-container">$\mathcal B$</span>, say <span class="math-container">$F$</span> and <span class="math-container">$G$</span>, and maps <span class="math-container">$F(B_i) \to G(B_i)$</span> for all <span class="math-container">$B_i \in \mathcal B$</span>, these induce maps between any stalks <span class="math-container">$F_x \to G_x$</span> we like, and we know also for any <span class="math-container">$x \in X$</span>, that <span class="math-container">$F_x \simeq F^{ext}_x$</span>, where <span class="math-container">$F^{ext}$</span> is our extended sheaf (likewise for <span class="math-container">$G^{ext}$</span>). After this, I do not know how to proceed, nor do I know if I needed all of that information.</p>
Marko Riedel
44,883
<p>We seek to verify that</p> <p><span class="math-container">$$\sum_{k=0}^{\min(p,q)} {p\choose k} (-1)^k {p+q-k\choose p} =1.$$</span></p> <p>Re-write as</p> <p><span class="math-container">$$\sum_{k=0}^{\min(p,q)} {p\choose k} (-1)^k {p+q-k\choose q-k} \\ = [z^q] (1+z)^{p+q} \sum_{k=0}^{\min(p,q)} {p\choose k} (-1)^k \frac{z^k}{(1+z)^k}.$$</span></p> <p>Now when <span class="math-container">$k\gt q$</span> the coefficient extractor makes for a zero contribution. With <span class="math-container">$p\ge 0$</span> we have <span class="math-container">$p^{\underline{k}} = 0$</span> when <span class="math-container">$k\gt p.$</span> The upper limit is enforced and we may continue with</p> <p><span class="math-container">$$[z^q] (1+z)^{p+q} \sum_{k\ge 0} {p\choose k} (-1)^k \frac{z^k}{(1+z)^k} \\ = [z^q] (1+z)^{p+q} \left(1-\frac{z}{1+z}\right)^p = [z^q] (1+z)^{p+q} (1+z)^{-p} = [z^q] (1+z)^q = 1.$$</span></p>
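<p>Brute force confirms the identity for small parameters (my addition):</p>

```python
from math import comb

def lhs(p, q):
    """Left-hand side of the identity, summed directly."""
    return sum((-1)**k * comb(p, k) * comb(p + q - k, p)
               for k in range(min(p, q) + 1))

print(all(lhs(p, q) == 1 for p in range(12) for q in range(12)))   # True
```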
2,368,771
<blockquote> <p>How can one prove that the function $$ f(z)=\exp\Big(\frac{z}{1-\cos z}\Big)$$ has an essential singularity at $z=0$?</p> </blockquote> <p>It's actually hard to write down the Laurent series of $f(z)$ around $0$, because the exponent $\frac{z}{1-\cos z}$ is itself already in series form (since $\cos z$ appears there and has a series expansion), and $e^{z/(1-\cos z)}$ again has a series form. </p> <p>Edit 1: I have already seen <a href="https://math.stackexchange.com/questions/407318/at-z-0-the-function-fz-expz-over-1-cos-z-has">this</a>, but it does not give information about the Laurent expansion of $f(z)$. </p> <p>Edit 2: How should I proceed? Or can anyone explain why the limit of $e^{z/(1-\cos z)}$ at $0$ does not exist?</p>
smb3
467,465
<p>We wish to show that the function's Laurent series has infinitely many negative terms. As you said, it may be difficult to express the entire Laurent series, but there is a formula for each of the Laurent coefficients:</p> <p>$$a_n = \frac{1}{2\pi i} \int_\gamma \frac{f(z)}{z^{n + 1}}dz$$</p> <p>where $\gamma$ is a curve around $0$ within an annulus on which $f$ is holomorphic. Perhaps you could try to show that there is no $n &lt; 0$ such that for all $k &lt; n$, $a_k = 0$.</p>
743,911
<p>Consider $\frac{1}{T}\sum_{t=1}^{T}\max\{ 0,a_t\}$. Can we say whether this is greater than or equal to $\max\{ 0,\frac{1}{T}\sum_{t=1}^{T}a_t\}$?</p>
Hayden
27,496
<p>Since $a_t\leq \max\{0,a_t\}$ for all $t$, it follows that $\frac{1}{T}\sum_{t=1}^T{a_t}\leq \frac{1}{T}\sum_{t=1}^T{\max\{0,a_t\}}$. Now, the RHS is always non-negative; if the LHS is non-negative, then it is equal to the max of $0$ and itself, and otherwise it is less than 0. In either case, $$\max\left\{0,\frac{1}{T}\sum_{t=1}^T{a_t}\right\}\leq \frac{1}{T}\sum_{t=1}^T{\max\{0,a_t\}}$$</p>
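<p>The inequality is easy to spot-check on random data (my addition; the seed, ranges, and sample sizes are arbitrary):</p>

```python
import random

random.seed(0)
violations = 0
for _ in range(1000):
    a = [random.uniform(-5, 5) for _ in range(random.randint(1, 20))]
    avg_of_max = sum(max(0.0, x) for x in a) / len(a)   # average of max{0, a_t}
    max_of_avg = max(0.0, sum(a) / len(a))              # max{0, average}
    if max_of_avg > avg_of_max + 1e-12:                 # tolerance for rounding
        violations += 1
print("violations found:", violations)   # expect 0
```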
2,511,061
<p>I will prove $0=1$. We know from the definition of the factorial that zero factorial equals one and one factorial equals one, so $0!=1!$. Cancelling the factorial on both sides, we get $0=1$. Is this right?</p>
gt6989b
16,192
<p>What gives you the right to cancel out the factorial? By the same token I can claim that if $x^2 = y^2$ then $x=y$, which implies that since $1^2 = 1 = (-1)^2$, we must have $1 = -1$, just as absurd as your conclusion...</p>
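<p>To see the flaw concretely: equal outputs of a non-injective function never force equal inputs. A two-line illustration (my addition):</p>

```python
from math import factorial

# The factorial takes the same value at two different inputs ...
print(factorial(0), factorial(1))
# ... just as squaring does:
print((-1) ** 2, 1 ** 2)
# Equal outputs therefore do not let us conclude that the inputs are equal.
```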
2,511,061
<p>I will prove $0=1$. We know from the definition of the factorial that zero factorial equals one and one factorial equals one, so $0!=1!$. Cancelling the factorial on both sides, we get $0=1$. Is this right?</p>
Community
-1
<p>Two things:</p> <ol> <li>&quot;!&quot; is not a variable; rather, it is an operator (a function applied to the number).</li> <li>The factorial function (whose domain is the whole numbers) is not injective, so it cannot be cancelled from both sides.</li> </ol>
1,740,032
<p>In layman's terms, why would anyone ever want to change basis? Do eigenvalues have to do with changing basis? </p>
mathreadler
213,607
<p><strong>Practical, speed-related aspects</strong> Matrix multiplication, division, and equation-system solving can become much faster in some bases than in others. Often one can cut the algorithmic complexity so much that it makes the difference between a computation being practically useful and being far too slow to be of use. The reason for this is that the underlying field of elements provides $1$ and $0$ elements which by definition make multiplication and addition simpler. $$\text{field properties for the elements 1 and 0 utilizable for saving calculations}\begin{cases}a \cdot 0 &amp;= 0\\ a + 0 &amp;= a\\a\cdot 1 &amp;= a\end{cases}$$</p> <p>The first two together let us skip entire parts of scalar products involving zero elements, which is the main gain. The third lets us avoid multiplications altogether, which is also welcome: in computer hardware, addition is less expensive than multiplication, so this saves chip area or clock cycles.</p> <p>$${\bf D} = \left(\begin{array}{ccccc} d_1&amp;0&amp;0&amp;\cdots&amp;0\\0&amp; d_2 &amp; 0 &amp;\cdots &amp; 0\\0&amp;0&amp;\ddots&amp; \ddots &amp;0\\0&amp;0&amp;\ddots&amp;d_{n-1}&amp;0\\0&amp;0&amp;\cdots&amp;0&amp;d_n\end{array}\right) \hspace{1cm} {\bf C} = \left(\begin{array}{ccccc} {\bf C_1}&amp;\bf 0&amp;\bf 0&amp;\cdots&amp; \bf 0\\\bf 0&amp; \bf C_2 &amp; \bf 0 &amp;\cdots &amp; \bf 0\\ \bf \bf 0&amp;\bf 0&amp;\ddots&amp; \ddots &amp;\bf 0\\\bf 0&amp;\bf 0&amp;\ddots&amp; \bf C_{n-1}&amp;\bf 0\\\bf 0&amp;\bf 0&amp;\cdots&amp;\bf 0&amp;\bf C_n\end{array}\right) \hspace{1cm}$$ On the left is a diagonal matrix, with only $n$ non-zero values - the ones on the diagonal. On the right is a block-diagonal matrix. The $\bf C_k$ matrices on the diagonal are square and, by the properties of matrix multiplication, will multiply onto their own position only. 
The larger we can make these $\bf 0$ blocks, the more time we can save when doing computations.</p> <p><strong>Theory understanding aspects</strong> Basis change is the basis for understanding more advanced algebra. Eigenvalues, eigenvectors, and matrix diagonalization are the first "steps" toward it. Then come <em>canonical</em> bases of various kinds, and matrix representations of groups and fields with simultaneous block-diagonalizations (meaning that a set of matrices which multiply onto each other can be put on the same block-diagonal form). These can be used to build an understanding of, and a framework for systematic work on, these more advanced algebras.</p>
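<p>The computational point can be made concrete (my sketch, not part of the original answer): in a basis where the matrix is diagonal, solving $\mathbf{D}x=b$ takes just $n$ divisions, whereas a general dense solve costs $O(n^3)$.</p>

```python
def solve_diagonal(d, b):
    """Solve D x = b where D = diag(d): one division per unknown, O(n)."""
    return [bi / di for di, bi in zip(d, b)]

d = [2.0, 5.0, 0.5]       # diagonal entries of D
b = [4.0, 10.0, 2.0]
x = solve_diagonal(d, b)
# Verify by multiplying back: D x should reproduce b.
print(x, [di * xi for di, xi in zip(d, x)])
```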
638,906
<p>Find the coefficient of $x$ in the expansion of $$\left(1-2x^3+3x^5\right)\left(1+\frac{1}{x}\right)^5.$$ The given answer is $154$, but how?</p>
colormegone
71,645
<p>An alternate approach is to consider, without writing out the coefficients, that $ \ (1 + x^{-1})^5 \ $ will, from the binomial theorem, produce a polynomial of the form $ \ 1 + Ax^{-1} + Bx^{-2} + Cx^{-3} + Dx^{-4} + x^{-5} \ . $ When we multiply this by $ \ 1 - 2x^3 + 3x^5 \ , $ we see that multiplication by 1 cannot produce any terms in the binomial product containing $ \ x^1 \ $ . The <em>only</em> products that can do so are $ \ -2x^3 \ \cdot Bx^{-2} \ $ and $ \ 3x^5 \ \cdot \ Dx^{-4} \ . $ </p> <p>Thus, the term in the polynomial product containing $ \ x \ $ has the coefficient $ \ -2B \ + \ 3D \ . $ The binomial theorem tells us that $ B \ = \ \binom53 \cdot 1^3 \cdot 1^2 \ = \ 10 \ $ and $ D \ = \ \binom51 \cdot 1^1 \cdot 1^4 \ = \ 5 $ , agreeing with <strong>lab bhattacharjee</strong>'s result; the coefficient is then -5 ... This leads me to believe that there is some error in the problem statement or the edit, because no term in the expansion has a coefficient as big as 154 (as verified by WolframAlpha).</p>
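<p>The whole expansion is small enough to grind out directly (my addition), treating a Laurent polynomial as a dict from exponent to coefficient:</p>

```python
from math import comb
from collections import defaultdict

# (1 + 1/x)^5 as {exponent: coefficient}, from the binomial theorem
binom_part = {-k: comb(5, k) for k in range(6)}
other_part = {0: 1, 3: -2, 5: 3}          # 1 - 2x^3 + 3x^5

product = defaultdict(int)
for e1, c1 in other_part.items():
    for e2, c2 in binom_part.items():
        product[e1 + e2] += c1 * c2

print(product[1])   # coefficient of x: -5, not 154
```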
2,386,471
<p>I have a solved question from Ross as stated below.</p> <blockquote> <p>Q : Suppose that each of three men at a party throws his hat into the center of the room. The hats are first mixed up and then each man randomly selects a hat. What is the probability that none of the three men selects his own hat?</p> <p>Sol: We shall solve this by first calculating the complementary probability that at least one man selects his own hat......</p> </blockquote> <p>I want to start with the basics and identify the sample space first.</p> <p>$$\mathrm{Space} = \{h_1p_1, h_1p_2, h_1p_3, h_2p_1, h_2p_2, h_2p_3, h_3p_1, h_3p_2, h_3p_3\}$$</p> <p>where $h_ip_j$ is the event that person $j$ picks up the hat of person $i$.</p> <p>To satisfy the condition that nobody picks his own hat, $i$ should not be equal to $j$.</p> <p>Why is the complement "at least one man selects his own hat" and not "all men select their own hats"? </p>
Community
-1
<p>For $n=1$, $\boldsymbol{c}=\boldsymbol0$ hence $\boldsymbol{a} = \boldsymbol0$.</p> <p>For $n=2$, have you tried $$\boldsymbol C = \begin{pmatrix} 1 &amp; 0 \\ 0 &amp; 0 \end{pmatrix},\begin{pmatrix} 0 &amp; 1 \\ 0 &amp; 0 \end{pmatrix},\begin{pmatrix} 0 &amp; 0 \\ 1 &amp; 0 \end{pmatrix},\begin{pmatrix} 0 &amp; 0 \\ 0 &amp; 1 \end{pmatrix}?$$ Can we generalize this to arbitrary $n$?</p>
2,386,471
<p>I have a solved question from Ross as stated below.</p> <blockquote> <p>Q : Suppose that each of three men at a party throws his hat into the center of the room. The hats are first mixed up and then each man randomly selects a hat. What is the probability that none of the three men selects his own hat?</p> <p>Sol: We shall solve this by first calculating the complementary probability that at least one man selects his own hat......</p> </blockquote> <p>I want to start with the basics and identify the sample space first.</p> <p>$$\mathrm{Space} = \{h_1p_1, h_1p_2, h_1p_3, h_2p_1, h_2p_2, h_2p_3, h_3p_1, h_3p_2, h_3p_3\}$$</p> <p>where $h_ip_j$ is the event that person $j$ picks up the hat of person $i$.</p> <p>To satisfy the condition that nobody picks his own hat, $i$ should not be equal to $j$.</p> <p>Why is the complement "at least one man selects his own hat" and not "all men select their own hats"? </p>
levap
32,262
<p>If $n = 1$, the only singular matrix is $C = 0$ but then any matrix $A$ will satisfy $AC = 0$ and not only $A = 0$.</p> <p>For $n &gt; 1$, denote by $e_1,\dots,e_n$ the standard basis of $\mathbb{F}^n$. Choose $1 \leq i \leq n$ and consider a matrix $C$ whose columns are the same and equal to $e_i$. This is a singular matrix (it has rank $1 &lt; n$) and $AC$ is a matrix whose columns are $Ae_i, \dots, Ae_i$. By assumption, $Ae_i = 0$ and since this is true for all $1 \leq i \leq n$, we get $A = 0$. Note that we haven't used the fact that $A$ is symmetric.</p>
365,166
<p>I have trouble coming up with combinatorial proofs. How would you justify this equality?</p> <p><span class="math-container">$$ n\binom {n-1}{k-1} = k \binom nk $$</span> where <span class="math-container">$n$</span> is a positive integer and <span class="math-container">$k$</span> is an integer.</p>
Community
-1
<p>Say we want to choose a baseball team of $k$ players from $n$ players and we want to choose a captain for the baseball team. We can count the above in two different ways.</p> <p>$1$. Choose the captain first. This can be done in $n$ ways. Now choose the rest of the team, i.e., we need to choose $k-1$ people from the remaining $n-1$ people, which can be done in $\dbinom{n-1}{k-1}$ ways.</p> <p>$2$. Choose the team first, i.e., choose $k$ players from $n$ players. This can be done in $\dbinom{n}k$ ways. Now once we have these $k$ players, the captain can be chosen in $k$ ways.</p>
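<p>The double count above can be carried out literally for small values (my addition): enumerate every (team, captain) pair and compare against both closed forms.</p>

```python
from itertools import combinations
from math import comb

def count_captained_teams(n, k):
    """Directly enumerate every (k-person team, captain) pair."""
    return sum(1 for team in combinations(range(n), k) for captain in team)

for n in range(1, 8):
    for k in range(1, n + 1):
        # Count 1 (captain first) and count 2 (team first) both match.
        assert count_captained_teams(n, k) == n * comb(n - 1, k - 1) == k * comb(n, k)
print("identity verified for all n < 8")
```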
86,277
<p>I have a question: does the Heine-Borel theorem hold for the space $\mathbb{R}^\omega$ (where $\mathbb{R}^\omega$ is the space of countable sequences of real numbers with the product topology). That is, prove that a subspace of $\mathbb{R}^\omega$ is compact if and only if it is the product of closed and bounded subspaces of $\mathbb{R}$ - or provide a counterexample.</p> <p>I think it does not hold. But I can't come up with a counterexample! Could anyone please help me with this? Thank you in advance. </p>
Nate Eldredge
822
<p>One thing you can say is that a subset of $\mathbb{R}^\omega$ is compact iff it is closed and <em>contained in</em> a product of bounded sets. I'll leave the proof as an exercise.</p> <p>More generally, let $X_i$ be any family of Hausdorff spaces (and assume the axiom of choice). Then a subset $A$ of $X = \prod_i X_i$ is compact iff $A$ is closed and contained in a product of compact sets.</p>
287,405
<p>Hi, I have this series:</p> <p>$$\sum\limits_{n=1}^\infty \frac{(-1)^n3^{n-2}}{4^n}$$</p> <p>I understand that this is a <em>geometric series</em>, so this is what I've done to get the sum. $$\sum\limits_{n=1}^\infty (-1)^n\frac{3^{n}\cdot 3^{-2}}{4^n}$$ $$\sum\limits_{n=1}^\infty (-1)^n\cdot 3^{-2}{(\frac{3}{4})}^n$$</p> <p>So $a= (-1)^n\cdot 3^{-2}$ and $r=\frac{3}{4}$, and the sum is given by $$(-1)^n\cdot 3^{-2}\cdot \frac{1}{1-\frac{3}{4}}$$</p> <p>Solving this I'm getting the result $\frac{4}{9}$, which I know is incorrect because WolframAlpha is giving me another result.</p> <p>So where am I making the mistake?</p>
amWhy
9,003
<p>The objective here is to transform your sum into a sum of the form: </p> <p>$$\sum_{n=1}^\infty ar^{n-1}$$</p> <p>$$\text{Transformation: }\quad\quad\sum\limits_{n=1}^\infty \frac{(-1)^n3^{n-2}}{4^n} = \sum_{n=1}^{\infty} \frac{-1}{4\cdot 3}\frac{(-3)^{n-1}}{4^{n-1}} = \sum_{n=1}^{\infty} \frac{-1}{4\cdot 3}\left(\frac{-3}{4}\right)^{n-1}$$</p> <p>Hence $a = -\dfrac{1}{12}$ and $r = -\dfrac{3}{4}.\quad$ <strong><em>Now</em></strong> <em>use the fact</em> that </p> <p>$$\sum_{n=1}^\infty ar^{n-1} = \dfrac{a}{1 - r} = -\left(\frac{1}{12}\right)\cdot \left(\frac{1}{1 - (-\frac{3}{4})}\right)$$</p> <p>Simplify, and then you are done!</p>
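<p>Numerically, the partial sums settle at $-\frac{1}{21}\approx-0.047619$ (my addition), matching the closed form from the geometric-series formula:</p>

```python
# Partial sum of the original series, terms n = 1, ..., 59
partial = sum((-1)**n * 3**(n - 2) / 4**n for n in range(1, 60))
# Closed form a / (1 - r) with a = -1/12, r = -3/4
closed_form = (-1 / 12) / (1 - (-3 / 4))
print(partial, closed_form)   # both are -1/21 = -0.047619...
```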
3,611,845
<p>For the Stable Matching algorithm by Gale-Shapley, how do I prove that at most one man will get his worst choice? </p> <p>My intuition is that I have to use contradiction. Assume that there are two men who will get their worst preferences: <span class="math-container">$M_1$</span> with <span class="math-container">$W_1$</span> and <span class="math-container">$M_2$</span> with <span class="math-container">$W_2$</span>. I have to prove <span class="math-container">$M_2$</span> and <span class="math-container">$W_2$</span> are unstable. However, I can't think of anything. Can anyone help me with the proof? </p> <p>Thanks! </p>
Parcly Taxel
357,390
<p>Suppose, for contradiction, that <span class="math-container">$M_1$</span> ends up with his last choice <span class="math-container">$W_1$</span> and <span class="math-container">$M_2$</span> with his last choice <span class="math-container">$W_2$</span>, and say (without loss of generality) that <span class="math-container">$M_1$</span> is the later of the two to propose to his last choice. In the Gale&ndash;Shapley algorithm a man proposes to his last choice only after having proposed to every other woman. So just before <span class="math-container">$M_1$</span> proposes to <span class="math-container">$W_1$</span>, he has already proposed to every woman except <span class="math-container">$W_1$</span>, while <span class="math-container">$M_2$</span> has already proposed to every woman, including <span class="math-container">$W_1$</span>. Thus every woman has received a proposal, and a woman who has received a proposal stays engaged from then on. So all the women, and hence all the men, are engaged, and the algorithm has already stopped, contradicting the assumption that <span class="math-container">$M_1$</span> still has a proposal to make. Hence at most one man can get his worst choice.</p>
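<p>The algorithm is short enough to experiment with. Here is a minimal man-proposing sketch (my code, not from the original answer; the seed and instance size are arbitrary), which lets one observe the claim empirically: no run ends with two men holding their last choices.</p>

```python
import random

def gale_shapley(men_prefs, women_prefs):
    """Man-proposing Gale-Shapley; returns a dict man -> woman."""
    n = len(men_prefs)
    rank = [[0] * n for _ in range(n)]        # rank[w][m]: how w ranks m
    for w in range(n):
        for i, m in enumerate(women_prefs[w]):
            rank[w][m] = i
    next_choice = [0] * n                     # next index each man proposes to
    fiance = [None] * n                       # fiance[w]: w's current partner
    free = list(range(n))
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if fiance[w] is None:
            fiance[w] = m                     # w accepts her first proposal
        elif rank[w][m] < rank[w][fiance[w]]:
            free.append(fiance[w])            # w trades up, freeing her fiance
            fiance[w] = m
        else:
            free.append(m)                    # w rejects m
    return {fiance[w]: w for w in range(n)}

random.seed(1)
worst_seen = 0
for _ in range(200):
    n = 6
    men = [random.sample(range(n), n) for _ in range(n)]
    women = [random.sample(range(n), n) for _ in range(n)]
    match = gale_shapley(men, women)
    got_last = sum(1 for m, w in match.items() if w == men[m][-1])
    worst_seen = max(worst_seen, got_last)
print("most men stuck with their last choice in any run:", worst_seen)
```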
2,708,902
<p>I am looking for an inductive proof of the following.</p> <blockquote> <p>Let $\alpha \in S_n$ be a cycle of length $p$. Then $\alpha^p =1$.</p> </blockquote> <p>For example, if $\alpha = (123)$ then $\alpha^3 = 1$, as is easy to verify.</p> <p><em>My attempt:</em> Induction on $p$: The base case is true for $p=1$. For the inductive case, assume the statement is true for $k$, i.e. $\alpha^k = 1$. We want to prove it for $\alpha^{k+1}$. Perhaps $\alpha^{k+1}$ can be rewritten as a product of two cycles of length less than $p$?</p> <p>Note: I know that this question is already answered elsewhere on the site; <strong>however, I want to solve the above problem using induction</strong>.</p>
K B Dave
534,616
<p>I think the answer is <em>no</em> for the same reason that $x^p-x$ is not said to be the zero polynomial in $\mathbb{F}_p[x]$.</p> <p>One "reason" that $x^p-x$ is not the zero polynomial in $\mathbb{F}_p[x]$ is that, even though it evaluates to zero in $\mathbb{F}_p$, it does not evaluate to zero in every $\mathbb{F}_p$-algebra—in particular, it's not zero in the tautological evaluation $x\mapsto x$.</p> <p>Similarly, even though the leading term of your polynomial evaluates to zero in $R$, it doesn't evaluate to zero in every $R$-algebra.</p>
77,246
<p>What is the "right" analog in the orbifold case of the singular homology of a topological space? We cannot just take the homology of the underlying space, because it does not contain much information. For example, is there any kind of homology for orbifolds such that the first homology group is the abelianization of the fundamental group of the orbifold? And such that an $n$-dimensional orbifold has all homology groups above dimension $n$ equal to zero? And if there is such a homology, will there be Poincaré duality in the orbifold case?</p> <p>Thanks very much! </p>
Angelo
4,790
<p>One can define singular homology directly for an orbifold, mimicking the standard constructions, but I don't know if this has been written up. The alternative is to take a space that is homotopy equivalent to your orbifold, and take the homology of that. This has been worked out by Behrang Noohi in <a href="https://arxiv.org/abs/0808.3799" rel="nofollow noreferrer">https://arxiv.org/abs/0808.3799</a>.</p> <p>If <span class="math-container">$G$</span> is a group, then the homology of the classifying orbifold of <span class="math-container">$G$</span> (the stack quotient of a point by the trivial action of <span class="math-container">$G$</span>) is the group homology of <span class="math-container">$G$</span>; so you see that it does not vanish above the dimension of the orbifold, which is 0, except in the trivial case.</p>
322,745
<p>I'm faced with the following problem on primes. Does someone have any clue? Is it (a reformulation of) an open problem?</p> <p>Let <span class="math-container">$d$</span> be a positive integer, <span class="math-container">$d\geq 2$</span>. By Dirichlet's theorem, there is an infinite set <span class="math-container">$\mathcal{P}$</span> of primes congruent to <span class="math-container">$1$</span> modulo <span class="math-container">$d$</span>. </p> <p>Consider the set <span class="math-container">$N$</span> of integers <span class="math-container">$n=1+kd$</span> where all prime divisors of <span class="math-container">$k$</span> belong to <span class="math-container">$\mathcal{P}$</span>. </p> <p>Could we expect that <span class="math-container">$N$</span> contains infinitely many primes? (we need at least <span class="math-container">$d$</span> to be even).</p>
Greg Martin
5,091
<p>Chen's theorem says that there are infinitely many numbers <span class="math-container">$k$</span> such that <span class="math-container">$k-2$</span> is prime and <span class="math-container">$k$</span> is either prime or the product of two primes ("<span class="math-container">$k$</span> is a <span class="math-container">$P_2$</span> number").</p> <p>This theorem can be modified relatively easily to prove that for any fixed <span class="math-container">$d$</span>, there are infinitely many numbers <span class="math-container">$k$</span> such that <span class="math-container">$kd+1$</span> is prime and <span class="math-container">$k$</span> is a <span class="math-container">$P_2$</span> number.</p> <p>I suspect that the proof of Chen's theorem could be modified, without too much trouble, to prove that there are infinitely many numbers <span class="math-container">$k\equiv1\pmod d$</span> such that <span class="math-container">$k-2$</span> is prime and <span class="math-container">$k$</span> is either prime or the product of two primes that are both <span class="math-container">$\equiv1\pmod d$</span>.</p> <p>Combining these two modifications together would yield an unconditional proof that your <span class="math-container">$N$</span> contains infinitely many primes.</p>
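<p>Small cases are easy to probe empirically (my addition; an illustration only, it proves nothing): take $d=4$, so $\mathcal P$ consists of the primes $\equiv 1 \pmod 4$, and search for primes $n = 1+4k$ where every prime factor of $k$ lies in $\mathcal P$ (restricting to $k \ge 2$ to avoid the vacuous case $k=1$).</p>

```python
def is_prime(m):
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def prime_factors(k):
    """Set of distinct prime factors of k, by trial division."""
    out, i = set(), 2
    while i * i <= k:
        if k % i == 0:
            out.add(i)
            while k % i == 0:
                k //= i
        i += 1
    if k > 1:
        out.add(k)
    return out

d = 4
hits = [1 + d * k for k in range(2, 2000)
        if all(p % d == 1 for p in prime_factors(k)) and is_prime(1 + d * k)]
print(hits[:6])   # starts 53 (k = 13), 101 (k = 25), ...
```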
1,673,277
<p>In what scenarios is Goldbach's conjecture, that every even number greater than 2 is the sum of two prime numbers, a natural conjecture to research?</p>
Patrick Da Silva
10,704
<p>The point of such conjectures is often not the result itself; more often than not, knowing whether such a statement is true is in itself rather uninteresting. The interest lies in the methods of proof developed to attack such statements; they often lead to new theories and inspire mathematicians to create new kinds of arguments, which can then be applied in other situations where they may prove useful. Think of it as a training ground for mathematicians. </p> <p>Hope that helps,</p>
1,209,233
<p>How can I find all of the transitive groups of degree $4$ (i.e. the subgroups $H$ of $S_4$, such that for every $1 \leq i, j \leq 4$ there is $\sigma \in H$, such that $\sigma(i) = j$)? I know that one way of doing this is by brute force, but is there a more clever approach? Thanks in advance!</p>
pjs36
120,540
<p>We know subgroups of $S_4$ come in only a few different orders: $1,2,3,4,6,8,12,$ and $24$.</p> <p>If we use the orbit-stabilizer theorem, we have that $|H| = |\operatorname{Orb}_H(x)| \cdot |\operatorname{Stab}_H(x)|$; this limits us to only four possible subgroup orders, as $|\operatorname{Orb}_H(x)|$ can only be $4$ for a transitive action.</p> <p>Among those four orders, we can find $5$ non-isomorphic groups fulfilling the criteria. I don't know off-hand how many copies of such groups $S_4$ has, but it shouldn't be too hard to pin that down.</p> <p>Note that just because two subgroups of $S_4$ are isomorphic, it doesn't mean that they all (do or don't) act transitively on $\{1,2,3,4\}$. For example, I can only think of one particular Klein four-group in $S_4$ that does, although there are several isomorphic subgroups in $S_4$.</p>
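The brute force mentioned in the question is tiny at this scale and settles the count exactly. A sketch in Python (it relies on the standard fact that every subgroup of $S_4$ can be generated by at most two elements):

```python
from itertools import combinations, permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)); permutations of {0,1,2,3} stored as tuples
    return tuple(p[q[i]] for i in range(4))

def closure(gens):
    # the subgroup generated by gens: a finite set closed under
    # composition inside a group is automatically a subgroup
    elems = {tuple(range(4))} | set(gens)
    while True:
        new = {compose(a, b) for a in elems for b in elems} - elems
        if not new:
            return frozenset(elems)
        elems |= new

perms = list(permutations(range(4)))
subgroups = {closure(g) for k in (1, 2) for g in combinations(perms, k)}

def is_transitive(H):
    # H is transitive iff the orbit of the point 0 is all of {0,1,2,3}
    return {h[0] for h in H} == {0, 1, 2, 3}

found = sorted((H for H in subgroups if is_transitive(H)), key=len)
print(len(found), [len(H) for H in found])
```

It reports nine transitive subgroups: three copies of $C_4$, the normal Klein four-group, three copies of $D_4$, plus $A_4$ and $S_4$, matching the five isomorphism types above.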
701,681
<p>It seems intuitive that, if a function differentiable on [a,b] is such that f'(a) &lt; 0 &lt; f'(b) then there exists some c in the open interval (a,b) such that f'(c)=0, but I can't prove it rigourously...</p> <p>I expect to need to use Fermat's Theorem on stationary points but I would then need to prove that c is a local extremum.</p> <p>Am I on the right path or totally wrong? Can you hint me? Thank you</p>
DonAntonio
31,254
<p>Hints:</p> <p>$$f'(a)&lt;0&lt;f'(b)\implies \;\exists\,c,d\in (a,b)\;\;s.t.\;\;f\;\;\text{is decreasing on}\;\;[a,c)$$</p> <p>$$\text{and increasing on}\;\;(d,b]$$</p> <p>By continuity, there's some point in between (between <em>what</em>?) at which $\;f\;$ must have an extremum (in fact, a local minimum)</p>
269,696
<p>I am given that $\sum\limits_{n=1}^\infty a_n$ is convergent. </p> <p>I need to determine whether $\sum\limits_{n=1}^\infty (a_n)^\frac{1}{3}\;$ and $\;\sum\limits_{n=1}^\infty (a_n)^2\;$ are also convergent.</p> <p>Imagine that $a_n = \dfrac{1}{n^4}.\;$ I believe that this is convergent because it's converging to $0$.</p> <p>Following the same thought, if $\displaystyle a_n = \left(\frac{1}{n^4}\right)^2,\;$ it's convergent because it's converging to $0$.</p> <p>Am I doing this correctly or there is some other way to prove this?</p>
amWhy
9,003
<p><strong>Task</strong>: You need to establish whether the statements are true <em>for all</em> $(a_n)$. </p> <ul> <li>To prove such a statement is <em>true</em>, you need to show it <em>always holds</em> for any convergent $\sum a_n$ (not just that it holds for <em>some</em> $a_n$). </li> <li>But to conclude that a given statement is false, you can simply find a <em>single</em> $a_n$ that serves as a <em>counterexample.</em></li> </ul> <p><strong>Clarification</strong>: Recall that IF a series $\sum a_n$ converges, THEN $\lim a_n = 0.\quad(1)$ </p> <p>The converse of (1) <strong><em>does not hold</em></strong>: if $\lim_{n\to \infty} b_n = 0$, it doesn't necessarily follow that $\sum b_n$ converges.</p> <hr> <p>You can use the <strong><em>p-series test</em></strong> to find an $a_n$ such that $\sum_{n=1}^\infty a_n$ converges, but such that $\sum_{n=1}^\infty (a_n)^{1/3}$ diverges.</p> <p><strong>p-series</strong> test: Recall that for $a_n = \dfrac{1}{n^p}, \;$ $\displaystyle \sum_{n=1}^\infty \frac{1}{n^p}\;$ converges if $p &gt; 1$, and diverges if $p \le 1$.</p> <p>E.g. $p = 3$, $a_n = \dfrac{1}{n^3}\implies \sum_{n=1}^\infty \dfrac{1}{n^3}$ converges.</p> <p>Then $(a_n)^{1/3}$ gives $\sum_{n=1}^\infty \left(\dfrac{1}{n^3}\right)^{1/3} =\;\; \sum_{n=1}^\infty \dfrac{1}{n},\;\;$ which diverges.</p> <hr> <p>Now we need to check whether the fact that $\sum_{n=1}^{\infty}a_n$ converges implies that $\sum_{n=1}^{\infty}a_n^2$ converges. This is more of a challenge, since at first it seems that, yes, it must.</p> <p>Here, we have to get creative to find a counterexample, if one exists: </p> <p>If we choose $\sum a_n$ to be an <strong><em>alternating series</em></strong> (the sign of the terms alternates with $n$) such that $\sum_{n=1}^{\infty}a_n$ converges <strong><em>but not</em></strong> $\sum_{n=1}^{\infty}a_n^2$, we are in luck. 
</p> <p>Let $ a_n=\dfrac{(-1)^n}{\sqrt{n}}$ (the sum of which converges by the alternating series test, though only <em>non-absolutely</em>). </p> <p>Then $(a_n)^2 = \left(\dfrac{(-1)^n}{n^{1/2}}\right)^2 = \dfrac{1}{n}$; this sum, with all positive terms, you will recognize to be the harmonic series, which you know diverges.</p>
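A quick numerical sanity check of both counterexamples, using only the standard library (the cutoffs $N$ are arbitrary):

```python
import math

def alt_partial(N):
    # partial sum of sum_{n=1}^{N} (-1)^n / sqrt(n): converges (alternating series test)
    return sum((-1) ** n / math.sqrt(n) for n in range(1, N + 1))

def harmonic(N):
    # partial sum of sum_{n=1}^{N} 1/n = sum ((-1)^n / sqrt(n))^2: diverges
    return sum(1.0 / n for n in range(1, N + 1))

# consecutive partial sums of the alternating series are nearly equal ...
print(alt_partial(10**4), alt_partial(2 * 10**4))
# ... while the squared series keeps growing like log N
print(harmonic(10**4), harmonic(10**6))
```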
1,080,746
<p>I want to calculate the limit of this sum :</p> <p>$$\lim\limits_{x \to 1} {\left(x - x^2 + x^4 - x^8 + {x^{16}}-\dotsb\right)}$$</p> <p>My efforts to solve the problem are described in the <a href="https://math.stackexchange.com/a/1308281/202081">self-answer below</a>.</p>
Abdou Abdou
202,081
<p>We can define a turning point as a point where the curve switches direction from increasing to decreasing ordinates as $x$ advances.</p> <p><img src="https://i.stack.imgur.com/YlfWR.gif" alt="curve"></p> <p>So to locate those points on our polynomial, the derivative of the function must be $0$ at that abscissa:</p> <p>$$f_n'(x)=1-2x+4x^3-\cdots+(-1)^n\,2^n x^{2^n-1}$$</p> <p>This polynomial has $(n-1)$ factors, which means that $f_n'(x)=0$ has $(n-1)$ solutions, which implies that there are $(n-1)$ turning points in our graph.</p> <p>When $n$ tends to infinity we have limitless turning points, as shown in this graph:</p> <p><img src="https://i.stack.imgur.com/bQF5b.png" alt="enter image description here"></p> <p>Consequently, our limit hovers endlessly between $0.5+\lambda$ and $0.5-\lambda$ with $\lambda$ extremely small -- it may even be zero!</p> <p>Finally, we can say that the function keeps oscillating, so the limit doesn't exist.</p>
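Since the series satisfies $f(x) = x - f(x^2)$, its value near $x=1$ can be probed without summing many terms; a small sketch (the cutoff `tol` is an arbitrary choice below which the remaining terms are negligible):

```python
def f(x, tol=1e-30):
    # x - x^2 + x^4 - x^8 + ...  via the functional equation f(x) = x - f(x^2);
    # below tol the remaining tail is numerically zero
    return x if x < tol else x - f(x * x)

for x in (0.9, 0.99, 0.999, 0.9999, 0.99999):
    print(x, f(x))   # the values hover near 1/2 as x approaches 1
```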
2,649,557
<p>I've been studying topology recently, and I've gotten to the part of the book that deals with quotient spaces. For the most part, it's fairly clear, but one thing that has been confusing me a bit is how the unit circle is represented.</p> <p>Sometimes $\mathbf S^1$ is denoted as $\{(x,y)\in\mathbb R^2|(x-a)^2+(y-b)^2=1\}$ while other times it's denoted as $\{z\in\mathbb C| \; |z-w|=1\}$. I know these are both representations of the same thing, but I'm not sure whether to consider $\mathbf S^1$ as a subset of $\mathbb R^2$ or as a subset of $\mathbb C$, or if it even matters.</p>
Berci
41,488
<p>These are different only on the level of <em>algebra</em>. <br> From a <em>geometric</em> point of view, they are exactly the same (with the correspondence $w=a+bi$), via the natural homeomorphism $\Bbb R^2\cong\Bbb C,\ \ (x,y)\mapsto x+yi$.</p>
2,649,557
<p>I've been studying topology recently, and I've gotten to the part of the book that deals with quotient spaces. For the most part, it's fairly clear, but one thing that has been confusing me a bit is how the unit circle is represented.</p> <p>Sometimes $\mathbf S^1$ is denoted as $\{(x,y)\in\mathbb R^2|(x-a)^2+(y-b)^2=1\}$ while other times it's denoted as $\{z\in\mathbb C| \; |z-w|=1\}$. I know these are both representations of the same thing, but I'm not sure whether to consider $\mathbf S^1$ as a subset of $\mathbb R^2$ or as a subset of $\mathbb C$, or if it even matters.</p>
William Elliot
426,203
<p>$S^1= \{ (x,y) \in R^2 : x^2 + y^2 = 1 \}.$<br> All those other things are just homeomorphs, isomorphs even.<br> Whether it is a subset of the real plane or the complex plane depends upon the context in use.</p>
294,895
<p>Let $T: X \longrightarrow Y$ be a continuous linear map between two Banach spaces.</p> <p>When is $\operatorname{Ran}(T)$ a closed subspace?</p> <p>What theorems are there? </p> <p>Thanks :) </p>
Community
-1
<p>A case in which it <em>is</em> true that $R(T)$ is closed is if $T$ is bounded from below, i.e., there exists $c &gt; 0$ such that $\|Tx\| \geq c\|x\|$ for all $x \in X$. To prove this, notice first that since $$Tx = 0 \implies 0 = \|Tx\| \geq c\|x\| \geq 0 \implies x= 0$$ $T$ must be injective. Now let $x_n \in R(T)$ be such that $x_n \to x$. Then we have that $$ \|x_n - x_m\| = \|TT^{-1}x_n - TT^{-1}x_m\| \geq c\|T^{-1}x_n - T^{-1}x_m\|$$ so that $T^{-1}x_n$ is a Cauchy sequence which must converge to some $z \in X$. But we have $$T(z) = T(\lim_{n\to\infty} T^{-1}x_n) = \lim_{n\to\infty} TT^{-1}x_n = \lim_{n\to\infty} x_n = x$$ which means that $x \in R(T)$ and thus $R(T)$ is closed.</p> <p>In fact, if $T$ is assumed injective then this is a characterization of $T$ having closed range. For if $R(T)$ is closed in $Y$ then $R(T)$ is a Banach space under $\|\cdot\|_Y$ in its own right and the open mapping theorem shows that $T^{-1}:R(T) \to X$ is continuous which means there exists $C &gt; 0$ such that $$\|T^{-1}x\| \le C\|x\|$$ or $$\|Tx\| \geq \frac{1}{C}\|x\|$$ for all $x \in X$.</p>
4,383,098
<p>This was a question I had ever since I started studying formal mathematics. Take ZFC for example: its axioms give us 'tests' to check whether something is a set or not, and describe how the objects, if they are sets, behave under the operations defined on sets.</p> <p>My question is: how exactly do we find objects that fulfill these axioms? Is there some formal procedure for it, or is it just guesswork?</p>
DanielWainfleet
254,665
<p>Convert to Boolean algebra: Let <span class="math-container">$I$</span> be any set such that <span class="math-container">$A\cup B\cup C\subseteq I.$</span> For any <span class="math-container">$S, T\subseteq I$</span> let <span class="math-container">$S+T=S\cup T$</span> and let <span class="math-container">$ST=S\times T=S\cap T.$</span> Then</p> <p>(i). <span class="math-container">$+$</span> and <span class="math-container">$\times$</span> are each commutative and associative,</p> <p>(ii). <span class="math-container">$\times$</span> is distributive over <span class="math-container">$+$</span>,</p> <p>(iii). <span class="math-container">$S+S=SS=SI=S$</span> for any <span class="math-container">$S\subseteq I$</span>,</p> <p>(iv). <span class="math-container">$S+I=I$</span> for any <span class="math-container">$S\subseteq I.$</span></p> <p>The RHS is <span class="math-container">$(A\cap B )\cup (B\cap C )\cup (C\cap A)=AB+ BC+CA.$</span></p> <p>The LHS is <span class="math-container">$$(A\cup B )\cap (B\cup C )\cap (C\cup A)=(A+B) (B+C) (C+A)=$$</span> <span class="math-container">$$= ABC+ABA+ACC+ACA+BBC+BBA+BCC+BCA=$$</span> <span class="math-container">$$(ABC+BCA)+(AAB+ABB)+(BBC+BCC)+(CCA+CAA)=$$</span> <span class="math-container">$$=(ABC+ABC)+(AB+AB)+(BC+BC)+(CA+CA)=$$</span> <span class="math-container">$$=ABC+AB+BC+CA=$$</span> <span class="math-container">$$=ABC+ABI+BC+CA=$$</span> <span class="math-container">$$=(AB)(C+I)+BC+CA=$$</span> <span class="math-container">$$=(AB)(I)+BC+CA=$$</span> <span class="math-container">$$=AB+BC+CA.$$</span></p>
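The identity being manipulated here is also easy to confirm by machine: membership of any element in either side depends only on which of $A,B,C$ contain it, so an exhaustive check over a small universe suffices.

```python
from itertools import chain, combinations

U = (0, 1, 2)
# all 8 subsets of the universe
subsets = [set(s) for s in chain.from_iterable(combinations(U, r) for r in range(len(U) + 1))]

# (A ∪ B) ∩ (B ∪ C) ∩ (C ∪ A) == (A ∩ B) ∪ (B ∩ C) ∪ (C ∩ A) for all 8^3 triples
ok = all(
    (A | B) & (B | C) & (C | A) == (A & B) | (B & C) | (C & A)
    for A in subsets for B in subsets for C in subsets
)
print(ok)
```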
1,958,616
<p>Question: Let $z=x+iy $. Using complex notation, find an equation of a circle with radius $5$ and center $(3,-6) $.</p> <p>The answer seems simple to derive, but I'm curious as to whether or not there is more that must be done to establish the equation than what I have accomplished.</p> <p>The circle with the above information can be constructed from the equation</p> <p>$$(x-3)^2+(y+6)^2=25$$</p> <p>which implies that </p> <p>$$\sqrt {(x-3)^2+(y+6)^2}=5$$</p> <p>Therefore from the definition of the modulus of a complex number, we have</p> <p>$$z=(x-3)+i (y+6)$$</p> <p>However, I fail to see how we might use the $z $ in the first sentence of this post.</p>
fleablood
280,126
<p>Definition of circle with radius $r$ centered at $(w,u)$: All points $z$ of the complex plane where the distance from $z$ to $w + iu$ equals $r$. Or in other words: all $z$ where $|z - (w + ui)| = r$.</p> <p>So the solution is $|z - (3 - 6i)| = 5$. That's all there is to it.</p> <p>Now if $z = x+iy$ then $|z - (3-6i)| = 5 \iff (x-3)^2 + (y+6)^2 = 25 \iff x^2 - 6x + y^2 + 12y + 20 = 0$ but those are three ways of saying the same thing.</p> <p>So you can say:</p> <p>Circle $=\{z \in \mathbb C| |z - (3-6i)| = 5\}$</p> <p>$= \{z = x+yi \in \mathbb C| (x-3)^2 + (y+6)^2 = 25\}$</p> <p>$= \{z = x+yi \in \mathbb C|x^2 - 6x + y^2 + 12y + 20 = 0\}$</p> <p>I honestly have no idea which solution your text wants. If it were my class I'd simply want $\{z \in \mathbb C| |z - (3-6i)| = 5\}$. </p> <p>I'd really like to get my students out of the mindset that complex numbers are some kind of game where we mix around the "real" real numbers. Complex numbers <em>are</em> numbers and the circle <em>is</em> just the numbers that are 5 away from $3 - 6i$.</p> <p>Then again, maybe the point is to give the students practice in calculating $|z| = \sqrt{re(z)^2 + im(z)^2}$ in which case... I don't know what your text wants.</p> <p>====</p> <p>Maybe ... your text wants:</p> <p>$(x-3)^2 + (y+6)^2 = 25$</p> <p>$(y+6)^2= 16 - x^2 + 6x$</p> <p>$y = -6 \pm \sqrt{16 -x^2 + 6x}$</p> <p>So $z = x + (-6 \pm \sqrt{16 -x^2 + 6x})i$ where $-2 \le x \le 8$. </p> <p>That might be what your text wants but ... I'm not sure.</p>
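A throwaway Python check that the three descriptions really are the same circle, sampling points $z = (3-6i) + 5e^{i\theta}$:

```python
import cmath
import math

w = 3 - 6j                      # the center as a complex number
for k in range(12):
    z = w + 5 * cmath.exp(1j * 2 * math.pi * k / 12)   # a point on the circle
    x, y = z.real, z.imag
    assert abs(abs(z - w) - 5) < 1e-9                  # |z - (3 - 6i)| = 5
    assert abs((x - 3) ** 2 + (y + 6) ** 2 - 25) < 1e-9
    assert abs(x * x - 6 * x + y * y + 12 * y + 20) < 1e-9
print("all three forms agree on all sample points")
```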
384,471
<p>I'm attempting to evaluate the limit</p> <p>$\lim_{x\rightarrow\infty}\frac{1}{\sqrt{x^{2}-4x+1}-x+2}$</p> <p>I got it reduced to the following</p> <p>$\lim_{x\rightarrow\infty}\frac{\sqrt{\frac{1}{\left(x-2\right)^{2}}-\frac{3}{\left(x-2\right)^{4}}}+1}{1-\frac{3}{\left(x-2\right)^{2}}-1}$</p> <p>But putting in $\infty$ I get $\frac{1}{0}$ and, what's worse, Mathematica tells me the limit is equal to $-\infty$. Where am I going wrong?</p>
Easy
60,079
<p>\begin{align*} &amp;\lim_{x\rightarrow\infty}\frac{1}{\sqrt{x^{2}-4x+1}-x+2}\\ =&amp;\lim_{x\rightarrow\infty}\frac{1}{\sqrt{(x-2)^2-3}-(x-2)}\\ =&amp;\lim_{x-2\rightarrow\infty}\frac{1}{\sqrt{(x-2)^2-3}-(x-2)}\\ =&amp;\lim_{z\rightarrow\infty}\frac{1}{\sqrt{z^2-3}-z}\\ =&amp;\lim_{z\rightarrow\infty}\frac{\frac{1}{z}}{\sqrt{1-\frac{3}{z^2}}-1} \end{align*} Note that the absolute value of the last expression is $\infty$ and the denominator is negative as $\sqrt{1-\frac{3}{z^2}}&lt;1$.</p>
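The limit can also be checked numerically. Rationalizing the denominator (multiply top and bottom by $\sqrt{z^2-3}+z$ with $z=x-2$) both confirms the sign analysis and avoids the catastrophic cancellation of the naive form (the sample points are arbitrary):

```python
import math

def direct(x):
    # naive evaluation: suffers cancellation for large x
    return 1 / (math.sqrt(x * x - 4 * x + 1) - x + 2)

def rationalized(x):
    # 1 / (sqrt(z^2 - 3) - z) = -(sqrt(z^2 - 3) + z) / 3,  z = x - 2
    z = x - 2
    return -(math.sqrt(z * z - 3) + z) / 3

for x in (10.0, 1e3, 1e6):
    print(x, direct(x), rationalized(x))   # both march off to -infinity
```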
1,915,257
<p>$$\sum_{n=2}^\infty\frac{\cos\ln\ln n}{\ln n}$$ My idea is $$-\frac1{\ln n}\le\frac{\cos\ln\ln n}{\ln n}\le\frac1{\ln n}$$ But I don't know if $\sum\frac1{\ln n}$ converges.</p>
Mark Viola
218,419
<p>The first term in the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula#The_formula" rel="nofollow">Euler-Maclaurin Summation Formula</a> is </p> <p>$$\int_1^N \frac{\cos(\log\log x)}{\log(x)}\,dx=\int_{-\infty}^{\log\log N} e^{e^x}\cos(x)\,dx$$</p> <p>which diverges as $N\to \infty$. Therefore, the series of interest diverges.</p>
176,054
<p>I have a friend who wants to study something applied to neurosciences. He is going to begin his grad studies in mathematics. He asked me which areas of mathematics could be applied to neurosciences. Since I don't know the answer, I thought mathoverflow would be the right place to ask. I mean, there are many areas of mathematics that could be applied to neurosciences. But the question is the following: which are the fields that have already been applied to neurosciences? Are there areas related to dynamical systems, stochastic process, probability, topology, analysis, PDE or algebra applied to neurosciences? Articles are welcome. </p> <p>Thank you in advance</p>
Manfred Weis
31,310
<p>Non-negative Matrix Factorization (cf e.g. <a href="http://en.wikipedia.org/wiki/Non-negative_matrix_factorization" rel="nofollow">http://en.wikipedia.org/wiki/Non-negative_matrix_factorization</a>) plays a role in knowledge representation and extraction.</p>
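To make the link concrete, here is a minimal NumPy sketch of the classic Lee-Seung multiplicative updates for NMF with the Frobenius objective (the matrix sizes, rank, and iteration count are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((20, 30))           # non-negative data, e.g. neurons x time bins
r = 4                              # number of latent parts to extract
W = rng.random((20, r)) + 0.1      # parts (strictly positive init)
H = rng.random((r, 30)) + 0.1      # activations
eps = 1e-12                        # guard against division by zero

def err(V, W, H):
    return np.linalg.norm(V - W @ H)

before = err(V, W, H)
for _ in range(200):               # multiplicative updates keep W, H non-negative
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)
print(before, err(V, W, H))        # reconstruction error drops
```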
6,661
<p>Is there a simple explanation of what the Laplace transformations do exactly and how they work? Reading my math book has left me in a foggy haze of proofs that I don't completely understand. I'm looking for an explanation in layman's terms so that I understand what it is doing as I make these seemingly magical transformations.</p> <p>I searched the site and closest to an answer was <a href="https://math.stackexchange.com/questions/954/inverse-of-laplace-transform">this</a>. However, it is too complicated for me.</p>
NereusF
114,646
<p>Refer to <a href="http://www.dspguide.com/CH32.PDF" rel="noreferrer">http://www.dspguide.com/CH32.PDF</a> for an excellent explanation of Laplace Transforms in the Electrical Domain.</p>
142,734
<p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane &amp; solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
Amir Asghari
29,316
<p><a href="http://books.google.com/books/about/Mathematics_A_Very_Short_Introduction.html?id=DBxSM7TIq48C">Mathematics: A Very Short Introduction</a> by Timothy Gowers. It is very short and indeed very insightful. It is not a textbook, but includes some school-mathematics topics. From the cover: </p> <blockquote> <p>The aim of this book is to explain, carefully but not technically, the differences between advanced, research-level mathematics, and the sort of mathematics we learn at school.</p> </blockquote>
142,734
<p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane &amp; solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
Gerry Myerson
3,684
<p>Walter Prenowitz and Meyer Jordan, <em>Basic Concepts of Geometry</em>. </p>
1,974,939
<p>I don't understand the red-underlined parts. I don't know why the assumption that the list in (1) contains every real number leads to the conclusion that the intersection $\bigcap_{n=1}^\infty I_n$ is the empty set. Please explain why; I will really appreciate it. </p> <p><a href="https://i.stack.imgur.com/UtJHG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UtJHG.png" alt="enter image description here"></a> <a href="https://i.stack.imgur.com/ges2a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ges2a.png" alt="enter image description here"></a></p>
angryavian
43,949
<p>Right before the part that you underlined, they show that if $x$ is in that list (1), then it is not in the infinite intersection $\bigcap_{n=1}^\infty I_n$. By construction, the list (1) contains all real numbers, so all real numbers are not in $\bigcap_{n=1}^\infty I_n$, or in other words, this infinite intersection is empty.</p>
1,858,218
<p>I often come across students who are confused by the idea that the complex unit, $i$, is defined as $i^2 = -1$. Since we are using the complex numbers in an engineering course, we use the complex numbers to encode information about a sinusoid of a given frequency, where the argument is the phase of the sinusoid with respect to a reference and the magnitude is the amplitude. I've started to explain the need for the complex numbers by re-defining the complex unit as: $i^5 = i$, which helps to explain that the complex numbers inherently encode periodic information, hence why they were so useful for encoding sinusoids.</p> <p>My question is this: conceptually this helps the students to better understand why we use the complex numbers (they tend to hate the fact that we use the term "imaginary number"). However, this isn't the original definition of $i$. Is there anything seriously wrong with approaching the concepts with my definition? Is there anything special about the fact that $i^2 = -1$ <em>that is not also present in my definition</em>?</p>
Community
-1
<p>An alternative to the use of the embarrassing $i=\sqrt{-1}$ is to work with coordinate couples $(a,b)$ and introduce the product rule</p> <p>$$(a,b)\cdot(c,d):=(ac-bd,ad+bc).$$</p> <p>Then the constant $i$ is the innocuous $(0,1)$.</p> <hr> <p>An important issue is to relate the complex product to rotations. It is easy to make the connection with</p> <p>$$(\cos\theta,\sin\theta)\cdot(\cos\phi,\sin\phi)=(\cos\theta\cos\phi-\sin\theta\sin\phi,\sin\theta\cos\phi+\cos\theta\sin\phi)\\ =(\cos(\theta+\phi),\sin(\theta+\phi)).$$</p>
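The pair formulation is directly executable; a tiny sketch (the function names are mine):

```python
import math

def cmul(p, q):
    # (a, b) . (c, d) = (ac - bd, ad + bc)
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

i = (0.0, 1.0)
print(cmul(i, i))              # "i^2 = -1" with no square roots in sight

def unit(theta):               # the point (cos theta, sin theta)
    return (math.cos(theta), math.sin(theta))

p = cmul(unit(0.3), unit(0.5)) # multiplying unit vectors ...
q = unit(0.8)                  # ... adds their angles: a rotation
print(p, q)
```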
11,368
<p>I am interested in improving this plot:</p> <p><img src="https://i.stack.imgur.com/qvPAJ.png" alt="contour plot"></p> <p>which I produced with the following command:</p> <pre><code>dat // ListContourPlot[#, ContourShading -&gt; False, ContourStyle -&gt; ColorData[10] /@ Range[10], Contours -&gt; Range[21]/8, ContourLabels -&gt; None, FrameLabel -&gt; {x, y}, DataRange -&gt; {{-7, 2}, {-15, 15}}, PlotLabel -&gt; "Contour plot of ϕ(x,y)"] &amp; </code></pre> <p>The actual <code>dat</code> is available <a href="http://justpaste.it/1d9v" rel="noreferrer">here</a>; the edge corresponds to zero values (though, it could be set to something else). </p> <p><strong>Question</strong></p> <blockquote> <p>I would like <em>Mathematica</em> to identify the edge of the contours and do a region plot to avoid the ragged line, or alternatively, to draw a thick line over it to make it publication-friendly.</p> </blockquote> <p><strong>Attempt</strong></p> <p>I guess I know how to find the edge:</p> <pre><code>dat2 = dat // Image // EdgeDetect // ImageData // Position[#, 1] &amp; // Sort; </code></pre> <p>On the other hand, these points are not sorted correctly:</p> <p><img src="https://i.stack.imgur.com/oODeH.png" alt="badly-sorted points"></p> <p>I supposed I could use 2D neighbours as a criterion for sorting, but I feel there is a smarter way to achieve my overall goal(?)</p>
rm -rf
5
<p>Here's my solution. Step-by-step explanation follows.</p> <p><img src="https://i.stack.imgur.com/D900n.png" alt="final solution"></p> <ul> <li><p>First, import the data</p> <pre><code>dat = ToExpression@Import["http://pastebin.com/raw.php?i=XWyb7jFJ"]; </code></pre></li> <li><p>Instead of using <code>EdgeDetect</code>, I'll just use the outermost contour (on a fine mesh) as the "edge":</p> <pre><code>contour = dat // ListContourPlot[#, ContourShading -&gt; False, ContourStyle -&gt; ColorData[10] /@ Range[10], Contours -&gt; {1}/20, ContourLabels -&gt; None, Frame -&gt; False, InterpolationOrder -&gt; 1, FrameLabel :&gt; {x, y}, DataRange -&gt; {{-7, 2}, {-15, 15}}, PlotLabel -&gt; "Contour plot of ϕ(x,y)"] &amp; </code></pre> <p><img src="https://i.stack.imgur.com/xlJ7O.png" alt="contour"></p></li> <li><p>Next, I use the image processing functions to close the gap, get rid of the egg-shaped contour and thin the edge.</p> <pre><code>edge = contour // Image // Binarize // ColorNegate // SelectComponents[#, "Elongation", 0.5 &lt; # &amp;] &amp; // DeleteSmallComponents // Closing[#, BoxMatrix[6]] &amp; // Thinning </code></pre> <p><img src="https://i.stack.imgur.com/cUKTj.png" alt="edge"></p></li> <li><p>Get the positions of the 1s (this forms the curve in the binary image) and then use <code>FindCurvePath</code> to get the proper ordering for the curve</p> <pre><code>pos = Position[ImageData@edge, 1 | 1.]; curve = FindCurvePath@pos; </code></pre></li> <li><p>Next, convert the positions and ordering from filled curve to <code>ListPlot</code> coordinates. </p> <pre><code>pts = N@With[{rescale = Rescale[#, Through[{Min, Max}@#], #2] &amp;}, {rescale[#2, {-7, 2.2}], rescale[-#1, {-14.5, 12.5}]} &amp; @@ Transpose[pos[[curve[[1]]]]] // Transpose]; </code></pre> <p>Here, I've eyeballed the rescaling from the extents of the original image. 
It is possible to get these programmatically, but from my trials, some amount of fine tuning is eventually necessary to get a nice fit.</p></li> <li><p>Finally, downsample the points and form a <code>BSplineCurve</code> and overlay on the original image</p> <pre><code>dat // ListContourPlot[#, ContourShading -&gt; False, ContourStyle -&gt; ColorData[10] /@ Range[10], Contours -&gt; Range[20]/8, ContourLabels -&gt; None, InterpolationOrder -&gt; 1, FrameLabel -&gt; {"x", "y"}, DataRange -&gt; {{-7, 2}, {-15, 15}}, PlotLabel -&gt; "Contour plot of ϕ(x,y)", Epilog -&gt; First@Graphics[{AbsoluteThickness[3], BSplineCurve[pts[[1 ;; ;; 40]] ~Join~ {Last[pts]}]}]] &amp; </code></pre></li> </ul>
4,412,175
<p>I have wondered and tried to google this, but I am not sure what to search for: how do you solve nonlinear systems where the equations are set equal to each other? I am able to write a specific algorithm for two equations, but not dynamically for N equations. I will show an example with three (approximately how my equations look):</p> <p><a href="https://i.stack.imgur.com/NbloI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NbloI.png" alt="enter image description here" /></a></p> <p>C1, C2, C3, X are unknowns, but in the end I do not need the value of X.</p> <p>It can be interpreted like this (the last equation, C1 + C2 + C3 = 1, is not included here):</p> <p><a href="https://i.stack.imgur.com/OAJz8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OAJz8.png" alt="enter image description here" /></a></p> <p>Please don't try to solve this; I am not sure these equations even have solutions. I just typed random coefficients, but this is what my equations look like, only with different coefficients. I tried to solve it with only two unknowns and got a quadratic equation in the end, so with three unknowns there will be a cubic, and with N unknowns a polynomial equation of degree N. Also, I should say the result does not have to be 100% accurate; I am not sure if that helps or not.</p> <p>I found on Google that an iterative method might help. I looked at a few iterative methods, but I am still not sure how to apply them to this kind of problem. I also found that nonlinear equations can be linearized. Maybe that would be an option, but I am not sure how to do it here.</p>
Jean Marie
305,862
<p>A much more direct way is to observe that this kind of system is composed of an eigensystem condition plus a normalizing condition (the last equation).</p> <p>Let us take your example written under the form of a <em>slightly simplified</em> system</p> <p><span class="math-container">$$\begin{cases}2C_1+4C_2+8C_3&amp;=&amp;X C_1\\ 3 C_1+9C_2+27C_3&amp;=&amp;X C_2\\ 4C_1+16C_2+64C_3&amp;=&amp;X C_3\end{cases}$$</span></p> <p>giving the matrix-vector eigen-equation (of the form <span class="math-container">$MV=XV$</span>)</p> <p><span class="math-container">$$\underbrace{\begin{pmatrix}2&amp;4&amp;8\\ 3&amp;9&amp;27\\ 4&amp;16&amp;64\end{pmatrix}}_M\underbrace{\begin{pmatrix}C_1\\ C_2\\ C_3\end{pmatrix}}_V=X \underbrace{\begin{pmatrix}C_1\\ C_2\\ C_3\end{pmatrix}}_V$$</span></p> <p>so there are only 3 possibilities for <span class="math-container">$X$</span> and <span class="math-container">$(C_1,C_2,C_3)$</span>, to be chosen among the eigenvalues and associated eigenvectors with normalizing condition <span class="math-container">$C_1+C_2+C_3=1$</span>, giving (for example using the Matlab program below):</p> <p><span class="math-container">$$X = 71.5723 \ or \ X=3.2194 \ or \ X=0.2083$$</span></p> <p>associated respectively with the normalized vectors:</p> <p><span class="math-container">$$\begin{pmatrix}C_1\\ C_2\\ C_3\end{pmatrix}= \begin{pmatrix}0.0888\\ 0.2776\\ 0.6335\end{pmatrix} \ \ or \ \ \begin{pmatrix}0.6198\\ 0.5713\\ -0.1912\end{pmatrix} \ \ or \ \ \begin{pmatrix}2.2101\\ -1.4302\\ 0.2201\end{pmatrix}$$</span></p> <p>Matlab program:</p> <pre><code>M=[2, 4, 8
3, 9, 27
4,16, 64];
eig(M), % list of eigenvalues
[P,~]=eig(M); % P is a matrix whose columns are eigenvectors
for k=1:3
V=P(:,k);V=V/sum(V), % normalization of eigenvectors
end;
</code></pre>
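For readers without Matlab, a NumPy rendering of the same computation (same matrix, same sum-to-one normalization); as a sanity check, the three eigenvalues must add up to the trace, 75:

```python
import numpy as np

M = np.array([[2.0, 4.0, 8.0],
              [3.0, 9.0, 27.0],
              [4.0, 16.0, 64.0]])

vals, vecs = np.linalg.eig(M)
for k in range(3):
    C = vecs[:, k] / vecs[:, k].sum()   # enforce C1 + C2 + C3 = 1
    print(vals[k], C)
```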
4,038,802
<p>Say I have <span class="math-container">$2^n +1 &lt; n! -n$</span> for all <span class="math-container">$n \ge 4$</span>, where <span class="math-container">$n$</span> is an integer.</p> <p>My inductive step says: consider an arbitrary integer <span class="math-container">$k \ge 4$</span> and assume <span class="math-container">$P(k)$</span>.</p> <p>Thus, <span class="math-container">$$\begin{align} 2^{k+1} +1 = 2^k \cdot 2 +1 \\ 2^k\cdot 2+1\overset{\mathrm{IH}}{&lt;} 2(k!-k) &lt; ((k+1)!-k) \\\text{Therefore, we have proven this by mathematical induction.} \end{align}$$</span></p> <p>I am not too sure how to use &lt; in mathematical induction. I based this off an example I saw, and I am wondering if it is valid; maybe someone can give some clarification on why this works (if it does)...</p>
Community
-1
<p>You can get all the transpositions from those three. There are <span class="math-container">${4\choose2}=6$</span> transpositions total, so there are essentially <span class="math-container">$3$</span> more to check.</p> <p>For instance, <span class="math-container">$(12)(23)(12)=(13)$</span>.</p> <p>(One more hint: the conjugation action is transitive; that is, all the transpositions are conjugate.)</p> <p>So it remains to find <span class="math-container">$(14)$</span> and <span class="math-container">$(24)$</span>.</p>
2,960,614
<p>I am going through Tenenbaum and Pollard's book on differential equations and they define the differential <span class="math-container">$dy$</span> of a function <span class="math-container">$y = f(x)$</span> to be the function <span class="math-container">$$ (dy)(x,\Delta x) = f'(x) \cdot (d\hat{x})(x, \Delta x) $$</span> where</p> <ul> <li><span class="math-container">$\Delta x$</span> is a variable denoting an increment along the <span class="math-container">$x$</span>-coordinate </li> <li><span class="math-container">$\hat{x}$</span> denotes the function <span class="math-container">$\hat{x}(x) = x$</span>, and</li> <li><span class="math-container">$d\hat{x}$</span> is the differential of the function <span class="math-container">$\hat{x}$</span>.</li> </ul> <p>I've never seen differentials crisply defined this way. They're usually described as "small quantities" or just avoided in favor of definitions of the derivative in terms of limits. Anyway, this definition makes good sense to me. Is this the accepted way to think of them -- i.e. as functions?</p>
Hir
506,981
<p>That holds. You have to be careful about where the outer measure is defined. We start with a measurable space (X, F), let Γ be an outer measure on it and let μ be a measure on it. Let's go through what we've learned.</p> <p>Γ: 2^X → [0, +∞] satisfies Γ(∪i E_i) ≦ Σi Γ(E_i) (subadditivity) for E_i ∈ F, i = 1, 2, ..., regardless of whether E_i and E_i' (i ≠ i') are disjoint or not.</p> <p>μ: F → [0, +∞] satisfies μ(∪i E_i) = Σi μ(E_i) (σ-additivity) for E_i ∈ F, i = 1, 2, ..., but only when E_i and E_i' (i ≠ i') are disjoint.</p> <p>There are many constructions of measures, but one defines the measure μ as the restriction of the outer measure Γ from 2^X to F. As we have already seen, Γ has subadditivity, while the measure μ, being the restriction of the outer measure Γ, has σ-additivity. So as long as an outer measure Γ satisfies finite additivity and we pick disjoint sets E_i and E_i' (i ≠ i') in F, the sets E_i are measurable.</p> <p>Sorry for being hard to read; I am new here.</p>
2,395,611
<p>This is probably an easy question, hanging on some minor detail, but I cannot find a proof of it. I am working through T.A. Springer's book "Linear algebraic groups" and I got stuck at remark 7.1.4. (I know there is already a question asked about the same remark, but concerning a different statement made in it). Specifically, I am interested in proving the following: </p> <p>Let $G$ be a connected linear algebraic group, $T\subset G$ a maximal subtorus and $S \subset C(G)$ another torus, lying in the center of the group $G$. Then it holds that: $$ W(G,T) \cong W(G/S,T/S). $$</p> <p>So what I do is:<br> For $n\in N_G(T)$, one already has $nTn^{-1}=T$, hence also modulo $S$, which gives a function $$ \phi : N_G(T) \to \frac{N_{G/S}(T/S)}{Z_{G/S}(T/S)} \\ n \mapsto n ~\mathrm{mod}~S~, $$ which is surjective, because $S\subset T$ (I could prove that). </p> <p>What I need is a hint on how to prove that $\ker(\phi) = Z_G(T)$ holds. Thanks in advance!</p>
D_S
28,556
<p>Check that $N_{G/S}(T/S) = N_G(T)/S$ and $Z_{G/S}(T/S) = Z_G(T)/S$ and use the third isomorphism theorem.</p> <p>I'll write out the details of why $Z_{G/S}(T/S) = Z_G(T)/S$. The inclusion '$\supseteq$' is clear. Conversely, let $gS \in G/S$, and suppose $gS$ commutes with every element of $T/S$. This means that for all $t \in T$, $gtg^{-1} \in S$. Then $gtg^{-1}$ commutes with everything in $G$, so </p> <p>$$gt = (gtg^{-1})g = g(gtg^{-1})$$</p> <p>which implies $t = gtg^{-1}$. Thus $g \in Z_G(T)$, and $gS \in Z_G(T)/S$.</p>
1,249,638
<p>Update: This is false. See the answers for a counterexample.</p> <blockquote> <p>Let $C\ge 1$ be a constant. Fix $f\in L^p(\mathbb R)$ for $p\ge C$. Show that $$p\rightarrow \left( \int |f|^p \right)^{1/p}$$ is non-decreasing.</p> </blockquote> <p>Comments: I'm posting this because there is (surprisingly) no good reference for this fact on the internet. If I recall correctly, differentiating with respect to $p$ will do the trick. </p> <p>For the same problem on a finite measure space, see <a href="https://math.stackexchange.com/questions/1156977/another-property-of-the-function-phi-p-mapsto-int-fp-d-mu">here</a>.</p>
Ragnar
232,420
<p>Jensen:</p> <p>Take $q&gt;p$ and examine $$L(q,p) = \frac {(\int |f|^q)^{\frac 1q}}{(\int |f|^p)^{\frac 1p}}.$$</p> <p>By Jensen's inequality, $(\int |f|^p)^{\frac qp} \le \int |f|^q$, because $\frac qp \ge 1$ $\Rightarrow$ $g(x) = x^{\frac qp}$ is convex.</p> <p>Thus $L(q,p)^q \ge \frac {\int |f|^q}{\int |f|^q} = 1$ $\Rightarrow$ $L(q,p) \ge 1$ $\Rightarrow$ $p \mapsto (\int |f|^p)^{\frac 1p}$ is non-decreasing.</p>
1,249,638
<p>Update: This is false. See the answers for a counterexample.</p> <blockquote> <p>Let $C\ge 1$ be a constant. Fix $f\in L^p(\mathbb R)$ for $p\ge C$. Show that $$p\rightarrow \left( \int |f|^p \right)^{1/p}$$ is non-decreasing.</p> </blockquote> <p>Comments: I'm posting this because there is (surprisingly) no good reference for this fact on the internet. If I recall correctly, differentiating with respect to $p$ will do the trick. </p> <p>For the same problem on a finite measure space, see <a href="https://math.stackexchange.com/questions/1156977/another-property-of-the-function-phi-p-mapsto-int-fp-d-mu">here</a>.</p>
Jose27
4,829
<p>This is not true: Consider the function $f(x)=\chi_{\mathbb{R}\setminus(-1,1)}(x)|x|^{-1/2}$. Then for $p\leq 2$ we have $\| f\|_{p}=\infty$ and for $p&gt;2$ we get $$ \| f\|_p = \left(\frac{4}{p-2}\right)^{1/p}. $$</p>
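A quick numerical illustration of the counterexample (a Python sketch; the helper name `lp_norm` is mine): sampling the closed form $\| f\|_p = (4/(p-2))^{1/p}$ at a few exponents shows the norm strictly decreasing, so $p \mapsto \|f\|_p$ is certainly not non-decreasing here.

```python
# Closed form ||f||_p = (4/(p-2))^(1/p), valid for p > 2, taken from
# the counterexample f(x) = |x|^(-1/2) on |x| >= 1.
def lp_norm(p):
    return (4.0 / (p - 2.0)) ** (1.0 / p)

# Sample at increasing exponents: the values strictly decrease.
values = [lp_norm(p) for p in (3, 4, 6, 10)]
decreasing = all(a > b for a, b in zip(values, values[1:]))
```
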
1,176,016
<p>How do you show that if you have sets $B_1, B_2, \cdots ,B_n$ and a set $C$, then $$(B_1\cap B_2, \cap \cdots B_n)\cup C= (B_1\cup C)\cap(B_2\cup C) \cap \cdots \cap (B_n\cup C)\,?$$</p> <p>Thanks</p>
kingW3
130,953
<p>Logically, the derivative represents the rate of change of the function, and the rate of change doesn't take into consideration the value from which the function starts changing at that rate.</p> <p>A simple example is the pair $2x+15$ and $2x$.</p> <p>Now the other way around: if two functions take the same value at one point, they don't have to have the same rate of change there. You can also think of it geometrically: they don't have to approach that point along the same path. A simple example is $x$ and $-x$; the first one approaches $0$ from below and the second one from above.</p> <p>And an algebraic approach that was intuitive for me: if $f(x)=g(x)$ for all $x\in(c-\epsilon,c+\epsilon)$ for some $\epsilon&gt;0$, then $f'(c)=g'(c)$. It seemed clear to me that in most cases we don't have $f(x)=g(x)$ for all $x$ in such an interval, so in those cases $f'(c)\not=g'(c)$ can easily happen.</p>
3,066,991
<p>I'm working through some functional analysis problems and am having trouble with the following. Let <span class="math-container">$f(z)=\sum_{n=0}^\infty a_n z^n$</span> be a power series with radius of convergence <span class="math-container">$R&gt;0$</span>. Let <span class="math-container">$A\in\mathcal{B}(\mathcal{H})$</span> with <span class="math-container">$||A||&lt;R.$</span> Show the following: </p> <p>(a) There is a <span class="math-container">$T\in\mathcal{B}(\mathcal{H})$</span> with <span class="math-container">$\langle Tx, y\rangle=\sum_{n=0}^\infty a_n\langle A^nx,y\rangle$</span> for <span class="math-container">$x,y\in\mathcal{H}.$</span> Here we denote <span class="math-container">$T$</span> by <span class="math-container">$f(A)$</span>. </p> <p>(b) That the partial sums of <span class="math-container">$f(A)$</span> converge to <span class="math-container">$T$</span> in the operator norm. </p> <p>(c) That <span class="math-container">$f(A)B=Bf(A)$</span> whenever <span class="math-container">$B\in\mathcal{B}(\mathcal{H})$</span> with <span class="math-container">$BA=AB.$</span> </p> <p>(d) For <span class="math-container">$f(z)=e^z$</span> for <span class="math-container">$A$</span> self-adjoint we have <span class="math-container">$f(iA)$</span> is unitary.</p> <p>Solution attempts:</p> <p>(a) I think <span class="math-container">$T = \sum_{n=0}^\infty a_n A^n$</span>, but am having trouble showing this is within <span class="math-container">$\mathcal{B}(\mathcal{H})$</span>.
In particular, I have for <span class="math-container">$h\in\mathcal{H}$</span> with <span class="math-container">$||h||=1,$</span></p> <p><span class="math-container">$|| \sum_{n=0}^\infty a_nA^n h||\leq |a_0|||h||+|a_1|||Ah||+|a_2|||A^2h||+\cdots$</span></p> <p>My thought was to continue this chain of inequalites until I got a series that converged, but since I am taking the absolute values of the coefficients I don't think that this will work.</p> <p>(b) I think I could get this part if I saw how part (a) is done.</p> <p>(c) <span class="math-container">$$ f(A)B = (\sum a_n A^n ) B = (a_0+a_1A+a_2A^2+\cdots)B\\ =a_0B+a_1AB+a_2AAB+\cdots = Ba_0+Ba_1A+Ba_2AA+\cdots \\ =B(\sum a_n A^n) = Bf(A) $$</span></p> <p>(d) <span class="math-container">$f(iA)=\sum \frac{1}{n!} (iA)^n\Rightarrow f(iA)^*= (\sum \frac{1}{n!}(iA)^n)^*=\sum \frac{1}{n!}(i^n)^*(A^n)^*=\sum \frac{1}{n!}(i^*)^n(A^*)^n=\sum \frac{1}{n!}(-i)^nA^n =f(-iA)$</span>. Since <span class="math-container">$A$</span> commutes with itself, we have <span class="math-container">$f(iA)f(-iA)=f(-iA)f(iA)=e^{iA}e^{-iA}=e^{0}=I$</span></p>
Martin Argerami
22,857
<p>In part a), your inequality is too crude. And it wouldn't prove much, because what you need to show is that the series exists. You have <span class="math-container">$$ \left\|\sum_{n=m}^ka_nA^n\right\|\leq\sum_{n=m}^k|a_n|\,\|A^n\|\leq\sum_{n=m}^k|a_n|\,\|A\|^n. $$</span> Since <span class="math-container">$\|A\|&lt;R$</span>, the series <span class="math-container">$\sum_n |a_n|\,\|A\|^n$</span> converges, and so the last term above can be made arbitrarily small if <span class="math-container">$m$</span> and <span class="math-container">$k$</span> are big enough. This shows that the partial sums <span class="math-container">$\sum_{n=1}^ka_nA^n$</span> are Cauchy, and so the series converges since <span class="math-container">$B(H)$</span> is complete. </p> <p>Now, since the inner product is continuous (in the sense that <span class="math-container">$\langle T_nx,y\rangle\to\langle Tx,y\rangle$</span> if <span class="math-container">$T_n\to T$</span>), we have <span class="math-container">\begin{align} \langle Tx,y\rangle&amp;=\left\langle \left(\lim_{k\to\infty}\sum_{n=1}^ka_nA^n\right)x,y\right\rangle=\lim_{k\to\infty}\left\langle \left(\sum_{n=1}^ka_nA^n\right)x,y\right\rangle\\ \ \\ &amp;=\lim_{k\to\infty}\left\langle \sum_{n=1}^ka_nA^nx,y\right\rangle=\lim_{k\to\infty}\sum_{n=1}^ka_n\left\langle A^nx,y\right\rangle\\ \ \\ &amp;=\sum_{n=1}^\infty a_n\left\langle A^nx,y\right\rangle, \end{align}</span> where the last series can be shown to be convergent as in the first part. </p> <p>The rest of your arguments are ok. Just never forget that a series is a <strong>limit of sums</strong> and not a sum. Maybe I'm wrong, but the way you wrote part c) suggests to me that such idea is not very clear in your mind. </p>
3,066,991
<p>I'm working through some functional analysis problems and am having trouble with the following. Let <span class="math-container">$f(z)=\sum_{n=0}^\infty a_n z^n$</span> be a power series with radius of convergence <span class="math-container">$R&gt;0$</span>. Let <span class="math-container">$A\in\mathcal{B}(\mathcal{H})$</span> with <span class="math-container">$||A||&lt;R.$</span> Show the following: </p> <p>(a) There is a <span class="math-container">$T\in\mathcal{B}(\mathcal{H})$</span> with <span class="math-container">$\langle Tx, y\rangle=\sum_{n=0}^\infty a_n\langle A^nx,y\rangle$</span> for <span class="math-container">$x,y\in\mathcal{H}.$</span> Here we denote <span class="math-container">$T$</span> by <span class="math-container">$f(A)$</span>. </p> <p>(b) That the partial sums of <span class="math-container">$f(A)$</span> converge to <span class="math-container">$T$</span> in the operator norm. </p> <p>(c) That <span class="math-container">$f(A)B=Bf(A)$</span> whenever <span class="math-container">$B\in\mathcal{B}(\mathcal{H})$</span> with <span class="math-container">$BA=AB.$</span> </p> <p>(d) For <span class="math-container">$f(z)=e^z$</span> for <span class="math-container">$A$</span> self-adjoint we have <span class="math-container">$f(iA)$</span> is unitary.</p> <p>Solution attempts:</p> <p>(a) I think <span class="math-container">$T = \sum_{n=0}^\infty a_n A^n$</span>, but am having trouble showing this is within <span class="math-container">$\mathcal{B}(\mathcal{H})$</span>.
In particular, I have for <span class="math-container">$h\in\mathcal{H}$</span> with <span class="math-container">$||h||=1,$</span></p> <p><span class="math-container">$|| \sum_{n=0}^\infty a_nA^n h||\leq |a_0|||h||+|a_1|||Ah||+|a_2|||A^2h||+\cdots$</span></p> <p>My thought was to continue this chain of inequalites until I got a series that converged, but since I am taking the absolute values of the coefficients I don't think that this will work.</p> <p>(b) I think I could get this part if I saw how part (a) is done.</p> <p>(c) <span class="math-container">$$ f(A)B = (\sum a_n A^n ) B = (a_0+a_1A+a_2A^2+\cdots)B\\ =a_0B+a_1AB+a_2AAB+\cdots = Ba_0+Ba_1A+Ba_2AA+\cdots \\ =B(\sum a_n A^n) = Bf(A) $$</span></p> <p>(d) <span class="math-container">$f(iA)=\sum \frac{1}{n!} (iA)^n\Rightarrow f(iA)^*= (\sum \frac{1}{n!}(iA)^n)^*=\sum \frac{1}{n!}(i^n)^*(A^n)^*=\sum \frac{1}{n!}(i^*)^n(A^*)^n=\sum \frac{1}{n!}(-i)^nA^n =f(-iA)$</span>. Since <span class="math-container">$A$</span> commutes with itself, we have <span class="math-container">$f(iA)f(-iA)=f(-iA)f(iA)=e^{iA}e^{-iA}=e^{0}=I$</span></p>
mechanodroid
144,766
<p>Define <span class="math-container">$T = f(A)$</span> with <span class="math-container">$f(A) = \sum_{n=0}^\infty a_nA^n$</span>. This series converges absolutely since <span class="math-container">$$\sum_{n=0}^\infty |a_n|\|A^n\| \le \sum_{n=0}^\infty |a_n|\|A\|^n &lt; +\infty$$</span> because <span class="math-container">$\|A\| &lt; R$</span> and the power series <span class="math-container">$\sum_{n=0}^\infty a_nz^n$</span> converges absolutely on <span class="math-container">$B(0,R) \subseteq \mathbb{C}$</span>.</p> <p>Since <span class="math-container">$\mathcal{H}$</span> is complete, so is <span class="math-container">$\mathcal{B}(\mathcal{H})$</span>, and this implies that the series <span class="math-container">$\sum_{n=0}^\infty a_nA^n$</span> converges, so <span class="math-container">$f(A)$</span> is well-defined.</p> <p><span class="math-container">$(a)$</span> now follows from the continuity of the inner product.</p> <p><span class="math-container">$(c)$</span> and <span class="math-container">$(d)$</span> are correct provided you know that <span class="math-container">$A \mapsto A^*$</span> and <span class="math-container">$A \mapsto AB$</span>, <span class="math-container">$A \mapsto BA$</span> for fixed <span class="math-container">$B$</span> are continuous with respect to the norm topology.</p>
2,727,598
<p>Given p is a prime number greater than 2, and</p> <p>$ 1 + \frac{1}{2} + \frac{1}{3} + ... \frac{1}{p-1} = \frac{N}{(p-1)!}$ </p> <p>how do I show that $ p \mid N $?</p> <p>The previous part of this question had me factor $ x^{p-1} -1$ mod $p$. Which I think is just plainly $(x-1) ... (x-(p-1))$ </p>
Piquito
219,998
<p>HINT.-$x\to\dfrac 1x$ is a bijection of $\mathbb F^*_p$ so $$\sum_1^{p-1} \frac 1x\equiv\sum_1^{p-1} x\equiv 0\pmod p$$</p>
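To make the hint concrete, here is a small Python check (helper names are mine): the sum of the modular inverses of $1,\dots,p-1$ vanishes mod $p$, and consequently the integer numerator $N=(p-1)!\left(1+\tfrac12+\cdots+\tfrac1{p-1}\right)$, taken over the common denominator $(p-1)!$, is divisible by $p$ for the first few odd primes.

```python
from fractions import Fraction
from math import factorial

def harmonic_numerator(p):
    # N with 1 + 1/2 + ... + 1/(p-1) = N / (p-1)!
    N = sum(Fraction(1, k) for k in range(1, p)) * factorial(p - 1)
    assert N.denominator == 1   # each summand (p-1)!/k is an integer
    return N.numerator

primes = (5, 7, 11, 13)
# sum of the inverses of 1..p-1 in F_p (pow(k, -1, p) needs Python 3.8+)
inverse_sums = [sum(pow(k, -1, p) for k in range(1, p)) % p for p in primes]
divisible = [harmonic_numerator(p) % p == 0 for p in primes]
```
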
476,955
<p>Let $X$ be a set with 4 elements. Is it possible to have a topology on $X$ with 14 open sets?</p>
Ittay Weiss
30,953
<p>Hints:</p> <p>1) If each singleton is open, then the topology is the discrete one. So assume not every singleton is open. </p> <p>2) Since we are allowed to miss exactly two subsets from the list of open sets, and there are four singletons, without loss of generality, $\{1\}$ and $\{2\}$ are open.</p> <p>3) Of the remaining $\{3\}$ and $\{4\}$ at least one must not be open. Assume neither is open. But then, to have fourteen elements in the topology, the topology must be $\mathcal P (X)-\{\{3\},\{4\}\}$. But this is not a topology (find two sets in it that intersect to give $\{3\}$).</p> <p>4) So, without loss of generality, $\{3\}$ is also open and thus $\{4\}$ is not open. But note that you can find three subsets such that any two intersect to give $\{4\}$, and since you are only allowed to miss one of those three, two of them are open and their intersection $\{4\}$ would have to be open. So you can't obtain a topology with 14 elements on a four element set. </p>
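The conclusion can also be brute-forced. Here is a Python sketch (encoding subsets of the 4-point set as bitmasks; all names are mine) that enumerates every family of subsets, keeps the ones that are topologies, and records the possible numbers of open sets:

```python
# Subsets of X = {0,1,2,3} encoded as 4-bit masks 0..15.  A topology is
# a family containing 0 (the empty set) and 15 (all of X) that is
# closed under pairwise union (|) and intersection (&).
def is_topology(fam):
    if 0 not in fam or 15 not in fam:
        return False
    return all((a | b) in fam and (a & b) in fam for a in fam for b in fam)

sizes = set()
for bits in range(1 << 16):          # each 16-bit number picks a family
    fam = {m for m in range(16) if bits >> m & 1}
    if is_topology(fam):
        sizes.add(len(fam))
```

The check confirms that 14 (and also 15) never occurs as the size of a topology on four points, while 16 (the discrete topology) does.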
606,380
<blockquote> <p>If <span class="math-container">$abc=1$</span> and <span class="math-container">$a,b,c$</span> are positive real numbers, prove that <span class="math-container">$${1 \over a+b+1} + {1 \over b+c+1} + {1 \over c+a+1} \le 1\,.$$</span></p> </blockquote> <p>The whole problem is in the title. If you wanna hear what I've tried, well, I've tried multiplying both sides by 3 and then using the homogenic mean. <span class="math-container">$${3 \over a+b+1} \le \sqrt[3]{{1\over ab}} = \sqrt[3]{c}$$</span> By adding the inequalities I get <span class="math-container">$$ {3 \over a+b+1} + {3 \over b+c+1} + {3 \over c+a+1} \le \sqrt[3]a + \sqrt[3]b + \sqrt[3]c$$</span> And then if I proof that that is less or equal to 3, then I've solved the problem. But the thing is, it's not less or equal to 3 (obviously, because you can think of a situation like <span class="math-container">$a=354$</span>, <span class="math-container">$b={1\over 354}$</span> and <span class="math-container">$c=1$</span>. Then the sum is a lot bigger than 3).</p> <p>So everything that I try doesn't work. I'd like to get some ideas. Thanks.</p>
math110
58,742
<p>let $$a=x^3,b=y^3,c=z^3\Longrightarrow xyz=1$$ since $$y^3+z^3\ge y^2z+yz^2$$ so $$\dfrac{1}{1+b+c}=\dfrac{xyz}{xyz+y^3+z^3}\le\dfrac{xyz}{xyz+y^2z+yz^2}=\dfrac{x}{x+y+z}$$ so $$\sum_{cyc}\dfrac{1}{1+b+c}\le\sum_{cyc}\dfrac{x}{x+y+z}=1$$</p>
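As a numerical spot-check of the inequality (not a proof; names are illustrative), one can sample random positive $a,b$ and set $c=1/(ab)$ so that $abc=1$:

```python
import random

random.seed(42)

def lhs(a, b, c):
    return 1 / (a + b + 1) + 1 / (b + c + 1) + 1 / (c + a + 1)

samples = []
for _ in range(2000):
    a = random.uniform(1e-2, 1e2)
    b = random.uniform(1e-2, 1e2)
    samples.append(lhs(a, b, 1 / (a * b)))   # forces abc = 1

max_value = max(samples)
equality_case = lhs(1, 1, 1)   # a = b = c = 1 gives exactly 1
```
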
2,177,619
<p>Find a solution to the boundary value problem \begin{align}y''+ 4y &amp;= 0 \\ y\left(\frac{\pi}{8}\right) &amp;=0\\ y\left(\frac{\pi}{6}\right) &amp;= 1\end{align} if the general solution to the differential equation is $y(x) = C_1 \sin(2x) + C_2 \cos (2x)$.</p> <p>I was able to compute the following equations: \begin{align}C_1 \left(\frac 12\right)\sqrt2 + C_2 \left(\frac 12\right)\sqrt2 &amp;= 0\\ C_1 \left(\frac 12\right)\sqrt3 + C_2 \left(\frac 12\right) &amp;= 1\end{align}</p> <p>However I am unable to solve the system of equations. The books says the answer is $\frac{2}{\sqrt3 -1}$for $C_1$ and $-C_2$. I am not sure how to go about manipulating the equations to get on variable. </p>
Bob
154,155
<p><span class="math-container">\begin{align*} \frac{\sqrt 2 C_1}{2} + \frac{\sqrt 2 C_2}{2} &amp;= 0 \\ C_1 + C_2 &amp;= 0 \\ C_2 &amp;= -C_1 \\ \frac{\sqrt{3}C_1}{2} + \frac{C_2}{2} &amp;= 1 \\ \sqrt{3}C_1 + C_2 &amp;= 2 \\ \sqrt{3}C_1 - C_1 &amp;= 2 \\ (\sqrt{3} - 1)C_1 &amp;= 2 \\ C_1 &amp;= \frac{2}{(\sqrt{3} - 1)} \\ C_2 &amp;= -\frac{2}{(\sqrt{3} - 1)} \\ \end{align*}</span></p> <p>Which is what the book got.</p>
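A quick numerical check (a Python sketch, not part of the book's solution) that these constants satisfy both boundary conditions:

```python
import math

C1 = 2 / (math.sqrt(3) - 1)
C2 = -C1

def y(x):
    # general solution y(x) = C1 sin(2x) + C2 cos(2x)
    return C1 * math.sin(2 * x) + C2 * math.cos(2 * x)

bc1 = y(math.pi / 8)   # should be 0
bc2 = y(math.pi / 6)   # should be 1
```
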
3,645,742
<p>The number <span class="math-container">$2012 \cdot 2013 \cdot 2014 + 2013$</span> is the cube of </p> <p>a) <span class="math-container">$2012$</span><br> b) <span class="math-container">$2013$</span><br> c) <span class="math-container">$2014$</span><br> d) <span class="math-container">$2112$</span><br> e) <span class="math-container">$2113$</span></p>
qwr
122,489
<p>You can also guess the answer by estimation and the process of elimination.</p> <p><span class="math-container">$2012^3$</span> is too small because <span class="math-container">$2012 \times 2013 \times 2014$</span> is already larger.</p> <p><span class="math-container">$2014^3$</span> is too large because <span class="math-container">$2014^3$</span> is larger than <span class="math-container">$2012 \times 2013 \times 2014$</span>. <span class="math-container">$2014^3 - 2014 (2012) (2013) = 2014(2014^2 - 2012 \times 2013)$</span>, and <span class="math-container">$(2014^2 - 2012 \times 2013) &gt; 1$</span> so the difference is bigger than <span class="math-container">$2014$</span>.</p> <p>The other answers are much too large.</p>
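The estimate can also be confirmed exactly: writing $n = 2013$, we have $(n-1)n(n+1) + n = n(n^2 - 1) + n = n^3$. A short Python check (illustrative):

```python
n = 2013
value = (n - 1) * n * (n + 1) + n   # 2012 * 2013 * 2014 + 2013
# (n-1)(n+1) + 1 = n^2, so value = n * n^2 = n^3
is_cube_of_2013 = value == n ** 3
```
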
57,508
<p>While learning commutative algebra and basic algebraic geometry and trying to understand the structure of results (i.e. what should be proven first and what next) I came to the following question: </p> <p>Is it possible to prove that $\mathbb A^2-point$ is not an affine variety, if you don't know that the polynomial ring is a unique factorisation domain?</p> <p>It seems to me, that this question has some meaning, since when we define affine variety, we don't need to use the fact that the polynomial ring is an UFD. Don't we?</p>
Donu Arapura
4,144
<p>Here's a short argument over $\mathbb{C}$.</p> <p>If $\mathbb{A}^2-\{0\}$ were affine, then a standard application of Morse theory [Milnor, chap 1, sect 7] would show that $H^i(\mathbb{A}^2-\{0\},\mathbb{Z})=0$ for $i&gt;2$. But it's homotopic to $S^3$.</p>
57,508
<p>While learning commutative algebra and basic algebraic geometry and trying to understand the structure of results (i.e. what should be proven first and what next) I came to the following question: </p> <p>Is it possible to prove that $\mathbb A^2-point$ is not an affine variety, if you don't know that the polynomial ring is a unique factorisation domain?</p> <p>It seems to me, that this question has some meaning, since when we define affine variety, we don't need to use the fact that the polynomial ring is an UFD. Don't we?</p>
Brian
9,035
<p>We can easily see that the function field of $\mathbb{A}^2_k-(0,0)$ is still $k(x,y)$. So any regular function on it is of the form $f/g$ where $f$ and $g$ are polynomials. But any nonconstant polynomial in 2 variables vanishes on a codimension-1 subvariety, i.e. it cannot vanish at exactly 1 point. This is the Krull dimension theorem. But if you think this is too much: from the fact that $k$ is algebraically closed, you can see directly that $g$ must vanish at more than 1 point if it vanishes at all, since for each $x$ you can solve for $y$. So $g$ vanishes nowhere, and the ring of functions on $\mathbb{A}^2_k-(0,0)$ is $k[x,y]$. Thus, if it were affine, it would be isomorphic to $\mathbb{A}^2_k$ through the identity map. But it's not. So we are done.</p> <p>Another way, which uses cohomology, is as follows: using $\check{\mathrm{C}}\mathrm{ech}$ cohomology, we can show that $H^1(\mathbb{A}_k^2-\{0\}, \mathcal{O}_X)$ is infinite dimensional. But if our space were in fact affine, then this would vanish, due to Serre's criterion for affineness.</p>
2,939,058
<p>The question to be solved is:</p> <p><span class="math-container">$$ \lim_{n \to \infty} \left( \ \sum_{k=10}^{n+9} \frac{2^{11(k-9)/n}}{\log_2 e^{n/11}} \ - \sum_{k=0}^{n-1} \frac{58}{\pi\sqrt{(n-k)(n+k)}} \ \right)$$</span></p> <p>The first thing that occured to me was to transform the limits into definite integrals using the limit definition of integrals, so it'll become easier to evaluate.</p> <p>However, I have no clue how to convert them into definite integrals. Could anyone please shed some light on how to proceed? Or is there a better way to solve this problem?</p>
Jose Arnaldo Bebita Dris
28,816
<p><strong>MY ATTEMPT</strong></p> <p>For even perfect numbers <span class="math-container">$2^{p-1}(2^p - 1)$</span>, I get <span class="math-container">$$D(2^p - 1)D(2^{p-1}) = (2(2^p - 1) - (2^p))(1) = 2^{p+1} - 2 - 2^p = 2^p - 2$$</span> <span class="math-container">$$2s(2^p - 1)s(2^{p-1}) = 2(2^p - (2^p - 1))(2^p - 1 - 2^{p-1}) = 2(1)(2^{p-1} - 1) = 2^p - 2.$$</span></p> <p>Thus, the equation <span class="math-container">$$D(2^p - 1)D(2^{p-1}) = 2s(2^p - 1)s(2^{p-1}) = 2^p - 2$$</span> is true.</p> <p>Therefore, the required relationship <span class="math-container">$$D(q^k)D(n^2) = 2s(q^k)s(n^2)$$</span> holds for both even and odd perfect numbers.</p> <p>Here is my question:</p> <blockquote> <p>Does this proof suffice?</p> </blockquote> <p><strong>Added October 02 2018</strong></p> <p>Note that it would appear as though we have the corresponding equation <span class="math-container">$$D(q^k)D(n^2) = 2s(q^k)s(n^2) = q^k - 1$$</span> for odd perfect numbers. We show here that this assumption is false.</p> <p>Assuming that <span class="math-container">$D(q^k)D(n^2) = 2s(q^k)s(n^2) = q^k - 1$</span>, we get <span class="math-container">$$\frac{2(q^k - 1)}{(q - 1)}s(n^2) = q^k - 1,$$</span> since <span class="math-container">$s(q^k) = \sigma(q^k) - q^k = \sigma(q^{k-1})$</span>. 
This simplifies to <span class="math-container">$$s(n^2) = \frac{q-1}{2}$$</span> or <span class="math-container">$$\frac{\sigma(n^2)}{n^2} - 1 = \frac{s(n^2)}{n^2} = \frac{q-1}{2n^2}.$$</span> Using the following results from this <a href="https://cs.uwaterloo.ca/journals/JIS/VOL15/Dris/dris8.pdf" rel="nofollow noreferrer">paper</a>: <span class="math-container">$$\frac{8}{5} &lt; \frac{\sigma(n^2)}{n^2}$$</span> and <span class="math-container">$$\frac{q}{n^2} \leq \frac{q^k}{n^2} &lt; \frac{2}{3},$$</span> we get a contradiction, as follows: <span class="math-container">$$\frac{3}{5} = \frac{8}{5} - 1 &lt; \frac{\sigma(n^2)}{n^2} - 1 = \frac{s(n^2)}{n^2} = \frac{q-1}{2n^2} &lt; \frac{q}{2n^2} \leq \frac{q^k}{2n^2} &lt; \frac{1}{2}\cdot\frac{2}{3} = \frac{1}{3}.$$</span></p>
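The even-perfect computation above is easy to verify numerically. In the Python sketch below (helper names are mine), `sigma` is the divisor sum, `D(n) = 2n - sigma(n)` is the deficiency and `s(n) = sigma(n) - n` is the aliquot sum, matching the notation of the post:

```python
def sigma(n):
    return sum(d for d in range(1, n + 1) if n % d == 0)

def D(n):                 # deficiency
    return 2 * n - sigma(n)

def s(n):                 # aliquot sum
    return sigma(n) - n

# p with 2^p - 1 prime, so 2^(p-1) * (2^p - 1) is an even perfect number
checks = []
for p in (2, 3, 5, 7, 13):
    q, m = 2 ** p - 1, 2 ** (p - 1)
    checks.append(D(q) * D(m) == 2 * s(q) * s(m) == 2 ** p - 2)
```
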
1,106,320
<p>I know the chance of hitting either red or black on the first roll is about 48% (there is a 1/37 chance of landing on the European 0). But shouldn't the chance of getting it right on either the first or the second roll be higher,</p> <p>like 66% or 75%?</p> <p>So to bet 25 on the first roll and 50 on the second roll should give about a 75% chance of a win?</p>
user208259
208,259
<p>If the chances of winning on one roll are $\frac{18}{37}$, then the probability of winning <em>at least once</em> in two rolls is $$1 - (1 - 18/37)^2 \approx 74\%.$$</p>
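Spelled out in Python (a sketch; the exact value is $1008/1369 \approx 73.6\%$, which the answer rounds to $74\%$):

```python
p_single = 18 / 37                         # P(red) on a European wheel
p_at_least_one = 1 - (1 - p_single) ** 2   # at least one hit in two spins
```
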
1,968,267
<p>When doing induction should you always try to put your final answer as the "<em>desired</em> " form? For example if: $$\sum^{n}_{k=1}(k+2)(k+4) = \frac{2n^{3} + 21n^{2} + 67n}{6}$$ we ought to give the final answer as $$\frac{2(k+1)^{3} + 21(k+1)^{2} + 67(k+1)}{6}?$$</p> <p>I just expanded both the $\text{LHS}_{k+1}$ and the $\text{RHS}_{k+1}$ to show they were equal after the induction. Like this: </p> <hr> <p>Show that $$\sum^{n}_{k=1}(k+2)(k+4) = \frac{2n^{3} + 21n^{2} + 67n}{6}$$ for all integers $n \geq 1$.</p> <p>For $n = 1$,</p> <p>$$\sum^{1}_{k=1}(k+2)(k+4) = 15$$</p> <p>and</p> <p>$$\frac{2(1)^{3} + 21(1)^{2} + 67(1)}{6} = 15$$</p> <p>Assume that it is true for some integer $n = k$, thus $$\sum^{k}_{k=1}(k+2)(k+4) = \frac{2k^{3} + 21k^{2} + 67k}{6}$$ so the $\text{LHS}_{k+1}$ $$\sum^{k+1}_{k=1}(k+2)(k+4) = \sum^{k}_{k=1}(k+2)(k+4) + (k+3)(k+5)$$ $$= \frac{2k^{3} + 21k^{2} + 67k}{6} + \frac{6(k+3)(k+5)}{6}$$ $$=\frac{2k^{3} + 27k^{2} + 115k + 90}{6}$$ Now the $\text{RHS}_{k+1}$ $$\frac{2(k+1)^{3} + 21(k+1)^{2}+ 67(k+1)}{6} = \frac{2k^{3} + 27k^{2} + 115k + 90}{6}$$ Thus $\text{LHS}_{k+1} = \text{RHS}_{k+1}$ Q.E.D.</p>
ypercubeᵀᴹ
7,367
<p>You could also use that: </p> <p>$$(k+2)(k+4) = (k+3)^2 - 1$$</p> <p>and the known sum:</p> <p>$$ \sum^{n}_{k=1}{k^2} = \frac{n(n+1)(2n+1)}{6} $$</p> <p>$$ \sum^{n}_{k=1}{(k+2)(k+4)} = \sum^{n}_{k=1}{((k+3)^2 - 1)} = \sum^{n}_{k=1}{(k+3)^2} - \sum^{n}_{k=1}{1} $$ $$ = \sum^{n+3}_{k=4}{k^2} - n = \sum^{n+3}_{k=1}{k^2} - \sum^{3}_{k=1}{k^2} - n $$ $$ = \frac{(n+3)(n+4)(2n+7)}{6} - 14 - n $$ The rest is simple.</p>
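Both the closed form from the question and the shifted-sum expression at the end of this answer can be checked against the raw sum (a Python sketch; helper names are mine):

```python
def direct(n):
    return sum((k + 2) * (k + 4) for k in range(1, n + 1))

def closed_form(n):                      # formula from the question
    return (2 * n**3 + 21 * n**2 + 67 * n) // 6

def shifted_form(n):                     # final expression in this answer
    return (n + 3) * (n + 4) * (2 * n + 7) // 6 - 14 - n

agree = all(direct(n) == closed_form(n) == shifted_form(n)
            for n in range(1, 101))
```
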
2,381,496
<p>I am trying to write the explicit formula of all solutions of a linear system in the form :</p> <p>$Ax=b$ where $A$ is an $m \times n$ matrix ($n$ different from $m$), $x$ is $n$-dimensional vector and $b$ is $m$-dimensional vector.</p>
platty
468,546
<p>Rewrite the first equation as $c = \frac{ab}{4} - a - b$. Square it to get $$c^2 = a^2 + b^2 + \frac{a^2b^2}{16} - \frac{a^2b}{2} - \frac{ab^2}{2} + 2ab$$ Now using the other equation, we see that $$\frac{a^2b^2}{16} - \frac{a^2b}{2} - \frac{ab^2}{2} + 2ab = 0$$ Since $a,b &gt; 0$ divide by $ab$ and multiply by $16$ to get $$ ab - 8a - 8b + 32 = 0$$ Use Simon's Favorite Factoring Trick to get $(a-8)(b-8) = 32$.</p> <p>Now note that $a$ and $b$ are integers, so $(a-8)$ and $(b-8)$ must be factors of $32$. But factoring $32$ into anything except for $\{1,32\}$ gives you two even numbers - these can't be the legs of a primitive Pythagorean triple. Thus, we must have $a=9,b=40$, giving us the $(9,40,41)$ triangle.</p>
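A quick Python verification of the final triangle (the condition encoded by the original system is that the area equals twice the perimeter, since $c = ab/4 - a - b$ is $ab/2 = 2(a+b+c)$ rearranged; names are illustrative):

```python
import math

a, b, c = 9, 40, 41
right_triangle = a**2 + b**2 == c**2
primitive = math.gcd(a, b) == 1
area_is_twice_perimeter = a * b / 2 == 2 * (a + b + c)
factored = (a - 8) * (b - 8) == 32
```
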
2,381,496
<p>I am trying to write the explicit formula of all solutions of a linear system in the form :</p> <p>$Ax=b$ where $A$ is an $m \times n$ matrix ($n$ different from $m$), $x$ is $n$-dimensional vector and $b$ is $m$-dimensional vector.</p>
fleablood
280,126
<p>Hint: the formula for primitive triples is </p> <p>$a = mn; b = \frac {m^2 -n^2}2; c = \frac {m^2 + n^2}2$ for $\gcd(m,n) = 1$. And as $m^2 - n^2$ must be even, $m$ and $n$ must both be odd.</p> <p>So we want $\frac {mn(m^2 -n^2)}4 = 2(mn + \frac {m^2 - n^2}2 + \frac {m^2 + n^2} 2) = 2(mn + m^2)$</p> <p>So $mn(m-n)(m+n) = 8m(m+n)$</p> <p>So $n(m-n)=8$</p> <p>For $n=1,2,4,8$ we have $m = 9,6,6,9$. Only $n = 1$, $m = 9$ gives $\gcd(m,n) = 1$ with $m^2 - n^2$ even.</p> <p>So the solution is $a = 9; b = 40; c = 41$ </p> <p>==== earlier answer with an easier but not exhaustive formula for a triple ====</p> <p>==== it worked out nicely, but it doesn't rule out that there are others; nor did it guarantee I'd find a solution ====</p> <p>Hint: Formula for primitive triples is $k, \frac {k^2 - 1}2, \frac {k^2+1}2$ where $k$ is odd. Ex. $3,4,5; 5,12,13; 7,24,25$ etc. </p> <p>So we need to solve $\frac {ab}2 = \frac {k (k^2 -1)}4 = 2(a+b+c) = 2(k + \frac {k^2 - 1}2 + \frac {k^2 + 1}2) = 2(k + k^2)$.</p> <p>It has a very nice solution!</p>
202,530
<p>I'd like to check if all elements of a list are numbers. I've tried </p> <pre><code>t = {5/4, 12} MatrixQ[t, NumberQ] MemberQ[t, NumberQ] And @@ Table[NumberQ[t[[i]]], {i, 1, Length[t]}] </code></pre> <p>but only the last one yields the desired result. Is there a better way to check?</p>
mgamer
19,726
<p>Given a list:</p> <pre><code>list = {Pi, 0.4, 1, 2/2, 1./3} </code></pre> <p>you can do:</p> <pre><code>And @@ (Head[#] === Real &amp; /@ list) (* False *) </code></pre>
1,321,704
<p>For example: $3^\sqrt5$ versus $5^\sqrt3$</p> <p>I tried to write numbers as this:</p> <p>$3^{5^{\frac{1}{2}}}$ and then as $3^{\frac{1}{2}^5}$</p> <p>But this method gives the wrong answer because $a^{(b^c)} \ne a^{bc}$</p>
Empy2
81,790
<p>$$5^\sqrt{3}&gt;5^{5/3}=\sqrt[3]{3125}&gt;\sqrt[3]{2187}=3^{7/3}&gt;3^{\sqrt{5}}$$<br> Another option is to take roots, and compare $3^{1/\sqrt{3}}$ with $5^{1/\sqrt{5}}$. This function increases for $1&lt;x&lt;7.39$, then decreases.</p>
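The chain of inequalities is easy to confirm numerically (a Python sketch; variable names are mine):

```python
import math

a = 3 ** math.sqrt(5)       # ~ 11.66
b = 5 ** math.sqrt(3)       # ~ 16.24
mid_high = 5 ** (5 / 3)     # cbrt(3125) ~ 14.62
mid_low = 3 ** (7 / 3)      # cbrt(2187) ~ 12.98
chain_holds = b > mid_high > mid_low > a
```
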
972,385
<p>Could someone please explain this question to me? I know that such integers do NOT exist, but I could not prove it. "Either solve it or give a brief explanation as to why it is impossible." Thank you!</p> <p>Find integers $s$ and $t$ such that $15s + 11t = 1$.</p>
Lucian
93,448
<p>$$15s+11t=1\iff11(s+t)+4s=1\iff4~\Big[3(s+t)+s\Big]-(s+t)=1=4-3.$$ Now let $s+t=3$ and $4s+3t=1$, which is a system of two equations in two unknowns, yielding $s=-8$ and $t=11$. Then it is trivial to show that all integers of the form $s+11n$ and $t-15n$ are also solutions to the initial equation.</p>
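A quick check of the particular solution and the full solution family (a Python sketch):

```python
s, t = -8, 11
particular = 15 * s + 11 * t                      # should equal 1

# every pair (s + 11n, t - 15n) is also a solution
family = all(15 * (s + 11 * n) + 11 * (t - 15 * n) == 1
             for n in range(-10, 11))
```
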
1,839,458
<blockquote> <p>For any $n\ge2, n \in \mathbb N$ prove that $$\sqrt{\frac{1}n}-\sqrt{\frac{2}n}+\sqrt{\frac{3}n}-\cdots+\sqrt{\frac{4n-3}n}-\sqrt{\frac{4n-2}n}+\sqrt{\frac{4n-1}n}&gt;1$$</p> </blockquote> <h3>My work so far:</h3> <p>1) $$\sqrt{n+1}-\sqrt{n}&gt;\frac1{2\sqrt{n+0.5}}$$</p> <p>2) $$\sqrt{n+1}-\sqrt{n}&lt;\frac1{2\sqrt{n+0.375}}$$</p>
Jack D'Aurizio
44,121
<p>We have to prove that: $$ \sum_{k=1}^{2n}\sqrt{2k-1}-\sum_{k=1}^{2n}\sqrt{2k-2} &gt; \sqrt{n} \tag{1}$$ hence it looks like a good idea to apply creative telescoping and approximate: $$\sqrt{2k-1}-\sqrt{2k-2}\geq\frac{\sqrt{k-1/4}-\sqrt{k-5/4}}{\sqrt{2}}-\frac{1}{128\sqrt{2}}\left(\frac{1}{(k-5/4)^{3/2}}-\frac{1}{(k-1/4)^{3/2}}\right)\tag{2} $$ I found $(2)$ by playing a bit with the Laurent expansion of the LHS in a neighbourhood of $+\infty$.<br> It is an algebraic inequality not terribly difficult to prove once established, and the RHS is a telescopic term, so, by summing it over $k=2,\ldots,2n$, we get an inequality actually (slightly) stronger than the wanted one.</p> <p>A simpler approach may be to show that $A_n$ defined through $$ A_n = \sum_{k=1}^{2n}\left(\sqrt{2k-1}-\sqrt{2k-2}\right) $$ fulfils $A_n^2 \geq 1+A_{n-1}^2$ by induction.</p>
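Before attempting either proof, the original alternating-sum inequality itself is easy to test numerically (a Python sketch; the helper name is mine):

```python
import math

def alternating_sum(n):
    # sqrt(1/n) - sqrt(2/n) + ... + sqrt((4n-1)/n), signs alternating
    return sum((-1) ** (j + 1) * math.sqrt(j / n) for j in range(1, 4 * n))

holds = all(alternating_sum(n) > 1 for n in range(2, 201))
```
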
3,161,584
<p>Consider the following matrix</p> <p><span class="math-container">$A=\begin{bmatrix}0&amp;0&amp;0&amp;0\\a&amp;0&amp;0&amp;0\\0&amp;b&amp;0&amp;0\\0&amp;0&amp;c&amp;0\end{bmatrix}$</span></p> <p>where <span class="math-container">$a,b,c$</span> are real numbers.</p> <p>What conditions on <span class="math-container">$a,b,c$</span> are required so that <span class="math-container">$\mathbb{R}^4$</span> has a basis made of eigenvectors of <span class="math-container">$A$</span>?</p>
coding1101
633,698
<p>Thanks @Blue.</p> <p>I solved this problem in a very simple way, thanks to you.</p> <p>1) Solved by contradiction.</p> <p>If <span class="math-container">$|M| \ge 4$</span>, the lengths of the edges of the quadrangles are all different.</p> <p>Let the four lengths be <span class="math-container">$a, b, c, d$</span>.</p> <p>Then we get a contradiction, because</p> <p><a href="https://i.stack.imgur.com/I41J4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I41J4.png" alt="contradiction"></a></p> <p>2) It is similar to the above.</p> <p>Let the four lengths be <span class="math-container">$a, b, a, c$</span>.</p> <p>Then we get a contradiction, because</p> <p><a href="https://i.stack.imgur.com/yb2u8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yb2u8.png" alt="contradiction"></a></p>
1,809,500
<p>Prove that $k!&gt;(\frac{k}{e})^{k}$. </p> <p>It is known that $e^{k}&gt;(1+k)$. So if we multiply $k!$ on both sides, we get $k!e^{k}&gt;(k+1)!$. Also $k^k&gt;k!$. Now how to proceed ?</p>
Chris Sanders
309,566
<p>I take it that $k$ is a positive integer. Then the question is equivalent to</p> <p>$e^k&gt;\frac{k^k}{k!}$</p> <p>But $e^k=\displaystyle \sum_{n=0}^\infty \frac{k^n}{n!}$ and one of the summands (all of them are positive) is itself $\frac{k^k}{k!}$</p>
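The key step, that $e^k$ dominates the single summand $k^k/k!$ of its own series, and the resulting inequality are both easy to check numerically (a Python sketch):

```python
import math

# k! > (k/e)^k for the first positive integers
inequality = all(math.factorial(k) > (k / math.e) ** k
                 for k in range(1, 150))

# the single-summand bound e^k > k^k / k! for small k
summand = all(math.exp(k) > k ** k / math.factorial(k) for k in range(1, 30))
```
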
1,876,590
<p>In "<a href="https://web.archive.org/web/20151219180208/http://apollonius.math.nthu.edu.tw/d1/ne01/jyt/linkjstor/regular/7.pdf" rel="nofollow">Angle Trisection, the Heptagon, and the Triskaidecagon</a>", published in the <em>American Mathematical Monthly</em> in March 1988, Andrew Gleason discusses what regular polygons can be constructed with compass, straightedge and angle trisector. At the end of that article he notes that the angle <em>p</em>-sectors required for a regular <em>n</em>-gon are the odd primes <em>p</em> dividing $\varphi(n)$.</p> <p>For the heptagon, which only requires an angle trisector, he gives the minimal polynomial of $2\cos(2\pi/7)$ $$x^3+x^2-2x-1$$ and transforms it into the Chebyshev polynomial expression $$7\sqrt{28}(4\cos^3\theta-3\cos\theta)=7$$ leading to the final identity $$\sqrt{28}\cos\left(\frac{\cos^{-1}(1/\sqrt{28})}{3}\right)=1+6\cos(2\pi/7).$$</p> <p>I am interested in the hendecagon (11 sides), which requires an angle quinsector (that splits an angle into five equal parts).</p> <blockquote> <p>Is there a similar transformation between the minimal polynomial for $2\cos(2\pi/11)$ $$x^5+x^4-4x^3-3x^2+3x+1$$ and the relevant Chebyshev polynomial $$\cos 5\theta=16\cos^5\theta-20\cos^3\theta+5\cos\theta$$ and how do I find it? If I had such a transformation, I could construct an exact hendecagon with the quinsector.</p> </blockquote> <p>I have tried Tschirnhaus transforms on the former polynomial, without success.</p>
Oscar Lanzi
248,217
<p>If the object is to construct a regular hendecagon, it can be done more simply without going through an angle quinsection. Benjamin and Snyder proved the existence of a construction using a marked ruler and compasses in 2014 (Benjamin, Elliot; Snyder, C., Mathematical Proceedings of the Cambridge Philosophical Society 156.3 (May 2014): 409-424; <a href="http://dx.doi.org/10.1017/S0305004113000753" rel="nofollow">http://dx.doi.org/10.1017/S0305004113000753</a>).</p> <p>The basic premises of the construction are as follows:</p> <p>1) It's based on the properties of "conchoid-circle" constructions where we position the marked straightedge to pass through a fixed point $P$, with one mark on a line $l$ and the other on a circle $K$.</p> <p>2) With this type of construction, we define a "signed distance" $z$. This is the distance between $P$ and the mark on $K$, with a negative sign if that mark lies between $P$ and the other mark (which is on $l$), and a positive sign otherwise.</p> <p>3) Then $z$ satisfies a sextic equation whose coefficients satisfy certain relationships called the "verging theorem".</p> <p>4) The quintic equation given here for the hendecagonal cosines is converted to a sextic equation that satisfies the verging theorem by (4.1) defining $z=ux$ for some scale factor $u$, and (4.2) introducing an additional root $\eta$ with the appropriate value relative to $u$. Then all the geometric parameters needed to determine $l$ and $K$ may be expressed and constructed in terms of this scale factor $u$.</p> <p>5) Now to the heart of the matter. It looks like we have to solve a seventh-degree equation for that parameter $u$. But "a miracle occurs" (the authors' own words): the equation for $u$ is reducible, and all we are left with is a cubic factor (with integer coefficients), which can be solved by an auxiliary marked-ruler construction.</p> <p>6) So $z=ux$ has a construction with a marked ruler and compasses because it solves a sextic equation that satisfies the verging theorem, and $u$ has such a construction as well because it is obtained from a cubic equation in $\mathbb Z[u]$; and so $x=2\cos(2\pi m/11)$ has one too.</p> <p>7) Now for the parameters. For $u$, choose the one real root of $u^3+2u^2+2u+2=0$. For the construction of $z=ux$: $P=(0,0)$; $l$ is the line $x=-u-1$, where the length unit is the distance between the marks (conventional in this type of construction); $K$ is centered at $(u(u-1)/2,-(u^2+3u+1)/2)$ and passes through $(-u-2,0)$. One orientation of the ruler is along the $x$ axis; that is the "extra" root of the sextic. The other positions of the mark on $K$, with the proper distance signs (see (2)), give the roots for $z$. Note that the authors do not give the formulas this way; I did some algebra of my own to get everything in terms of $u$.</p>
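<p>The algebraic facts quoted above are easy to verify numerically. The sketch below (my own illustration, not part of the construction) checks that $x=2\cos(2\pi/11)$ satisfies the quintic from the question and locates the one real root of $u^3+2u^2+2u+2=0$ by bisection:</p>

```python
import math

def quintic(x):
    # minimal polynomial of 2*cos(2*pi/11) from the question
    return x**5 + x**4 - 4*x**3 - 3*x**2 + 3*x + 1

x = 2 * math.cos(2 * math.pi / 11)
residual = abs(quintic(x))

def cubic(u):
    # scale-factor equation u^3 + 2u^2 + 2u + 2 = 0
    return u**3 + 2*u**2 + 2*u + 2

# cubic(-2) < 0 < cubic(0), so bisection finds the one real root
lo, hi = -2.0, 0.0
for _ in range(100):
    mid = (lo + hi) / 2
    if cubic(lo) * cubic(mid) <= 0:
        hi = mid
    else:
        lo = mid
u = (lo + hi) / 2
```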
637,699
<p>We have a triangle $ABC$. Whats the value of angle $C$?</p> <p>$$\sin^2(A)+\sin^2(B)-\sin^2(C)=1$$</p> <p>I made a small java program and it gave me an answer. I want to know how to make it through other ways.</p>
JB King
8,950
<p>An isosceles right triangle with angles of $90^\circ, 45^\circ, 45^\circ$ comes to mind as a rather simple solution. Let $A$ or $B$ be the $90^\circ$ angle; since $\sin^2(90^\circ)=1$, the other two terms cancel out. To check whether there are other solutions, one may substitute using $A+B+C=180^\circ$.</p>
637,699
<p>We have a triangle $ABC$. Whats the value of angle $C$?</p> <p>$$\sin^2(A)+\sin^2(B)-\sin^2(C)=1$$</p> <p>I made a small java program and it gave me an answer. I want to know how to make it through other ways.</p>
Blue
409
<p>As @lab shows in a comment, $$\sin^2 A + \sin^2 B - \sin^2 C = 1 \quad\implies\quad 2\sin A \sin B \cos C = 1$$ so we can write $$1 = 2 \sin A \sin(A+C) \cos C = 2 \sin A \cos C( \sin A \cos C+ \cos A \sin C)$$ Thus, $$\begin{align} 1 - 2 \sin^2 A \cos^2 C &amp;= 2 \sin A \cos A \sin C \cos C \\ \implies \quad \left( 1 - 2 \sin^2 A \cos^2 C \right)^2 &amp;= 4 \sin^2 A \cos^2 A \left( 1 - \cos^2 C \right) \cos^2 C \\ \implies \quad 4 a^2 c^2 ( 2 - a^2 - c^2 ) &amp;= 1 \qquad (\star) \end{align}$$ where $c := \cos C$ and $a := \sin A$. Equation $(\star)$ is a quadratic in $c^2$ (and $a^2$), yielding valid values for $C$ across a range of values of $A$ (and vice-versa); values for which $0 &lt; A+C &lt; \pi$, making for valid triangles. Without further restrictions, there's no single value for $C$.</p>
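<p>To illustrate the "range of values" claim, here is a rough numerical sketch (my own check, not from the derivation above): for several choices of $A$ it solves the original condition for $C$ by bisection and then verifies that equation $(\star)$ holds:</p>

```python
import math

def star_error(A_deg):
    """Solve sin^2 A + sin^2 B - sin^2 C = 1 (with B = pi - A - C)
    for C by bisection, then evaluate equation (star)."""
    A = math.radians(A_deg)

    def h(C):
        B = math.pi - A - C
        return math.sin(A)**2 + math.sin(B)**2 - math.sin(C)**2 - 1

    # h > 0 near C = 0 and h < 0 near the upper end, so a root exists
    lo = 1e-9
    hi = min(math.pi / 2, math.pi - A) - 1e-9
    for _ in range(200):
        mid = (lo + hi) / 2
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    C = (lo + hi) / 2
    a, c = math.sin(A), math.cos(C)
    # equation (star): 4 a^2 c^2 (2 - a^2 - c^2) = 1
    return abs(4 * a**2 * c**2 * (2 - a**2 - c**2) - 1)

errors = [star_error(A_deg) for A_deg in (60, 75, 90, 105)]
```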
50,406
<p>This is a question about support of modules under extension of scalars.</p> <p>Let $f \colon A \to B$ be a homomorphism of commutative rings (with unity), and let $M$ be a finitely generated $A$-module. Recall that the <em>support</em> of $M$ is the set of prime ideals $\mathfrak{p}$ of $A$ such that the localization $M_{\mathfrak{p}}$ is nonzero. Then $\mathrm{Supp} _B(B \otimes_A M) = f^{*-1}(\mathrm{Supp}_A(M))$, the set of prime ideals of $B$ whose contractions are in the support of $M$. </p> <p>The $\subseteq$ containment is true for any $M$. What's an obvious example of a non-finitely generated module where the other containment doesn't hold?</p>
jdc
5,792
<p>Okay, a further question. Despite its simplicity, it is not homework.</p> <p>Atiyah and Macdonald (Ch. 2, Ex. 19.iv)) claim that if $M$ is a sum of submodules $M_i$, then $\text{Supp}(M) = \bigcup \text{Supp}(M_i)$, not specifying what size of set the indices $i$ are to run over. I assumed initially the index set was to be assumed arbitrary, and came up with an argument, which it now occurs to me may be wrong, as your example makes me wonder if the sum should really be finite. (Or I'm possibly just (continuing to be) stupid.)</p> <p>Continue with your example, so $M = \mathbb Q$ is an $A =\mathbb Z$-module, and $B = \mathbb Z / p \mathbb Z$. Now $\mathbb Q$ is generated over $\mathbb Z$ by the elements $\frac 1 n \in \mathbb Q$, so it is a sum of the submodules $\frac 1 n \mathbb Z$. Thus the elements $z_n = 1 \otimes \frac 1 n$ generate $B \otimes_A M = \mathbb Z / p \mathbb Z \ \otimes_{\mathbb Z} \mathbb Q$; in fact all are zero, since the generator $z_n = p \otimes \frac 1 {pn}$ is zero already in $\mathbb Z / p \mathbb Z \ \otimes_{\mathbb Z} \frac{1}{pn}\mathbb Z$.</p> <p>So $\mathbb Z / p \mathbb Z \ \otimes_{\mathbb Z} \mathbb Q = 0$ has empty support, even though the $\mathbb Z / p \mathbb Z$-submodules $\mathbb Z / p \mathbb Z \ \otimes_{\mathbb Z} \frac 1{n} \mathbb Z \cong \frac 1{n} \mathbb Z / \frac p{n} \mathbb Z \cong \mathbb Z / p \mathbb Z$ all have $(0)$ in their support. </p> <p>Is this right, or am I making an obvious mistake? Should the sum of modules be assumed finite for the proposition to hold?</p>
4,218,944
<p>A very simple question. Saw this formula many places earlier, but how do we prove it? <span class="math-container">$$ax^2+bx+c=a(x-r_1)(x-r_2)$$</span> Where <span class="math-container">$r_1$</span> and <span class="math-container">$r_2$</span> are the roots of the quadratic.</p> <p>P.S.- I have seen a &quot;proof&quot; using Vieta's formulas, but Vieta's formula itself requires this fact in its proof.</p>
Deepak
151,732
<p>What do you define as &quot;roots&quot; of the quadratic? If they are precisely the two (not necessarily distinct) values that make the quadratic zero, then it's as trivial as setting the factorised expression to zero and deducing that one or the other linear factor has to be zero. The two roots of the LHS correspond in some order to the two roots of the RHS, giving you the proof.</p> <p>As to the fact that there are two roots (including the case of a repeated root) of a quadratic, you have to use the Fundamental Theorem of Algebra for that, and that is a fairly deep result.</p>
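<p>For a concrete numerical illustration (with sample coefficients of my own choosing), one can compute the two roots with the quadratic formula and check that $a(x-r_1)(x-r_2)$ agrees with $ax^2+bx+c$ pointwise:</p>

```python
import cmath

def roots(a, b, c):
    # quadratic formula; cmath handles a negative discriminant too
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

def max_error(a, b, c, samples):
    r1, r2 = roots(a, b, c)
    return max(abs(a * (x - r1) * (x - r2) - (a * x * x + b * x + c))
               for x in samples)

# 2x^2 - 3x + 5 has complex roots; the identity still holds.
err = max_error(2, -3, 5, [-2, -1, 0, 1, 2, 3])
```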
3,623,856
<p>Use the definition of a Cauchy sequence to prove that the sequence defined by <span class="math-container">$x_n = \left (\frac{3}{2}\right )^n$</span> is a Cauchy sequence in <span class="math-container">$\mathbb{R}$</span>. </p>
Saaqib Mahmood
59,734
<p>Your sequence is <em>not</em> a Cauchy sequence. </p> <p>In order to show this, let <span class="math-container">$m$</span> and <span class="math-container">$n$</span> be any natural numbers such that <span class="math-container">$m &lt;n$</span>. Then we note that <span class="math-container">$$ \begin{align} \left\lvert x_m - x_n \right\rvert &amp;= \left\lvert \left( \frac{3}{2} \right)^m - \left( \frac{3}{2} \right)^n \right\rvert \\ &amp;= \left( \frac{3}{2} \right)^n - \left( \frac{3}{2} \right)^m \\ &amp;= \left( \frac{3}{2} \right)^m \left[ \left( \frac{3}{2} \right)^{n-m} - 1 \right] \\ &amp;\geq \left( \frac{3}{2} \right) \left[ \left( \frac{3}{2} \right) - 1 \right] \\ &amp;= \left( \frac{3}{2} \right) \left( \frac{1}{2} \right) \\ &amp;= \frac{3}{4} \end{align} $$</span></p> <p>Thus if we take a real number <span class="math-container">$\varepsilon$</span> such that <span class="math-container">$$ 0 &lt; \varepsilon &lt; \frac{3}{4}, $$</span> then for all <span class="math-container">$m, n \in \mathbb{N}$</span> such that <span class="math-container">$m \neq n$</span>, we would get <span class="math-container">$$ \left\lvert x_m - x_n \right\rvert &gt; \varepsilon, $$</span> and therefore for any <span class="math-container">$N \in \mathbb{N}$</span> we can find some <span class="math-container">$m, n \in \mathbb{N}$</span> such that <span class="math-container">$m &gt; N$</span> and <span class="math-container">$n &gt; N$</span>, but <span class="math-container">$$ \left\lvert x_m - x_n \right\rvert \not\leq \varepsilon. $$</span></p>
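<p>A small numerical check of the key estimate above (the lower bound $3/4$ on the gaps), over pairs $m \neq n$ up to an arbitrary cutoff:</p>

```python
def x(n):
    return (3 / 2) ** n

# Smallest gap over all pairs m != n with 1 <= m, n <= 20;
# the bound above says it is (3/2)^2 - (3/2) = 3/4.
min_gap = min(abs(x(m) - x(n))
              for m in range(1, 21)
              for n in range(1, 21) if m != n)
```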
2,075
<p>I've come across so many posts here and on the "main" math-SE site that voice complaints, frustrations, pet-peeves, grievances, or else are critical of another post/question, user, OP, etc. It is really an energy sapper! Certainly not a boost for morale.</p> <p>Since I'm pretty new here, and feeling a bit ambivalent about the community here, or lack thereof, I'd really like to know: what keeps others here? Given all the frustrations and pet peeves, what keeps you coming back, logging in, participating, contributing?</p> <p>I really am serious: I'd really like to know, plus I think shifting gears for a moment might help balance the (recent?) discord/tension. I'm not in a position to know whether what I perceive as tension and impatience, bordering on intolerance, is a "fact of life" here/"the norm"...or if it cycles, like all growing communities do, between "better times" and "worse times"...slanting toward unity, then tilting towards discord... and individually, between feeling exhilarated and feeling near-burn-out.</p> <p>Just thought I'd ask. It is very likely that people here are happier than they may appear. After all, I think humans are more wired to notice what's amiss and what's gone wrong than to note what's going well!</p> <p>Edit: (Addendum) I am reluctant to accept a single answer; the answers and comments have been overwhelmingly supportive and informative. With respect to the "post a question"/"accept an answer" norm for math.SE, is that also the norm here on meta.SE? I sought out input from all interested users regarding the subject line of this thread; everyone is unique, and so I wouldn't even think of establishing criteria with which to evaluate one user's input/answer/comment against another. I did make a point of "upvoting" a good number of contributions, however. Thanks to all who have "chimed in," and any additional answers and comments are most certainly welcome.</p>
Yuval Filmus
1,277
<p>The most interesting aspect for me is the "elementary puzzle questions", questions which are natural, require some thought, and are elementary in nature. In that respect, the advantage of Math.SE over MathOverflow is that the questions are actually elementary, whereas in MO they usually involved subjects I know nothing about.</p> <p>The next best thing is "elementary nuggets", which are similar to the previous but are easier. Good examples for me are questions of the sort "find a combinatorial proof of this nice identity". These are also enjoyable, and their advantage is that they don't take up as much time.</p> <p>What I like least are questions of the sort "how do I simplify $\sqrt{27}$?", which I don't usually bother to answer. Fortunately, there are lots of questions whose level of difficulty lies between "trivialities" and "nuggets", which are interesting enough to spare a thought while being easy enough to not distract me from "real" work.</p>
18,421
<p>I am teaching 4th-grade kids. The topic is Fraction. Basic understanding of a fraction as a part of the whole and as part of the collection is clear to the kids. Several concrete ways exist to teach this basic concept. But when it comes to fraction addition/subtraction I could not find a way that teaches it concretely.<br> Of course, teaching fraction addition &amp; subtraction of the form 3/2 + 1/2 is easy. But what about 3/2+ 4/3?<br> It is where we start talking about the algorithm (using LCM), which makes the matter less intuitive and more abstract which I am trying to avoid in the beginning. I believe all abstract concepts should come after the concrete experience. </p> <p>So teachers do you have any suggestions? </p>
Andrew Chin
12,720
<p>Use a <em>language</em> approach.</p> <p>This is not meant to be a snide remark; I came across a <a href="https://www.youtube.com/watch?v=V6yixyiJcos" rel="nofollow noreferrer">particular TED talk</a> a while ago that illustrated how math is just another language and we must first learn the proper syntax.</p> <p>Of course, we know that adding fractions requires the use of common denominators, so I would first introduce the topic of equivalent fractions. If the concept of common denominators is new to the student, perhaps using something concrete like denominations of coins can help bridge that gap.</p>
804,310
<p>The probability distribution of a discrete random variable $x$ is $$f(x)= \binom{3}{x} (1/4)^x (3/4)^{3-x}.$$ Find the mean value of $x$ and construct a cumulative distribution function for $f(x)$. I found $$P(X=0) = 0.421875$$ $$P(X=1) = 0.421875$$ $$P(X=2) = 0.140625$$ $$P(X=3) = 0.015625$$ by plugging the values into the binomial distribution.</p>
Bowen
101,918
<p>This discrete random variable follows the Binomial distribution, i.e. $$f(x)=\binom{n}{x}p^x(1-p)^{n-x}$$ where $n$ is the number of trials and $p$ is the success probability. What this distribution means is that when there are $n$ independent trials, each with success probability $p$, the probability that exactly $x$ successes occur is $f(x)$. For the Binomial distribution, the expected (mean) value and variance are $$E(X)=np$$ $$Var(X)=np(1-p)$$ respectively.</p> <hr> <p>From the above, you can calculate the mean value. The cumulative distribution function is just the sum of the probabilities of the first $k$ outcomes, which can be expressed as $$F(k;n,p)=P(X\le \lfloor k\rfloor)=\sum_{i=0}^{\lfloor k \rfloor} \binom{n}{i}p^i(1-p)^{n-i}$$ where $\lfloor k\rfloor$ is the largest integer less than or equal to $k$. For this problem, this form is enough because $n$ is small ($n=3$).</p>
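<p>A short Python sketch (using the question's values $n=3$, $p=1/4$) that reproduces the four probabilities, the mean $np$, and the cumulative distribution function:</p>

```python
from math import comb

n, p = 3, 1 / 4

def pmf(x):
    # f(x) = C(n, x) p^x (1-p)^(n-x)
    return comb(n, x) * p**x * (1 - p)**(n - x)

probs = [pmf(x) for x in range(n + 1)]            # f(0), ..., f(3)
mean = sum(x * pmf(x) for x in range(n + 1))      # should equal n*p = 0.75
cdf = [sum(probs[:k + 1]) for k in range(n + 1)]  # F(0), ..., F(3)
```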
1,087,910
<p>Erdős proved that if $f(n)$ is a monotone increasing function from the natural numbers to the positive reals, and $f(n)$ is completely multiplicative, then there exists some constant $C$ such that $f(n)=n^C$ for all $n$.</p> <p>I have not been able to find a nice proof of this online and was hoping someone could provide a proof.</p>
Huy
3,787
<p>Using partial fractions, you'll find</p> <p>$$\frac{1}{x^2(x-1)^3} = -\frac{1}{x^2} - \frac{3}{x} + \frac{3}{x-1} - \frac{2}{(x-1)^2} + \frac{1}{(x-1)^3}.$$</p> <p><strong>EDIT:</strong> To integrate the fractions in your original post you claim to have trouble with, remember that</p> <p>$$\left(\frac{1}{x} \right)' = - \frac{1}{x^2} \text{ and } \left(\frac{1}{x^2} \right)' = - \frac{2}{x^3}.$$</p>
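<p>The decomposition can be sanity-checked numerically by comparing both sides at a few points away from the poles $x=0$ and $x=1$ (a quick verification sketch, not a derivation; the sample points are arbitrary):</p>

```python
def original(x):
    return 1 / (x**2 * (x - 1)**3)

def decomposed(x):
    # the partial-fraction decomposition given above
    return (-1 / x**2 - 3 / x + 3 / (x - 1)
            - 2 / (x - 1)**2 + 1 / (x - 1)**3)

# sample points avoiding the poles at x = 0 and x = 1
err = max(abs(original(x) - decomposed(x))
          for x in (-2.5, -1.0, 0.5, 2.0, 3.5))
```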
3,369,129
<p>I have the following formula: </p> <p><span class="math-container">$$S = 4 \times \tan\left(\frac{180^\circ-a}{2}\right) - \frac{π}{90^\circ} \times (180^\circ-a)$$</span></p> <p>where:</p> <p><span class="math-container">$$S \gt 0$$</span></p> <p><span class="math-container">$$0^\circ \lt a \lt 180^\circ$$</span></p> <p>and I want to solve it for <strong>α</strong>, but I don't know how to take <strong>α</strong> out of the <strong>tan()</strong>.<br> I tried using arctan(), but with no luck.</p> <p>Could someone help me out with it?</p>
Lutz Lehmann
115,115
<p>You also have, in the relation of degree and radian units, <span class="math-container">$\pi=180^∘$</span>, <span class="math-container">$90^∘=\frac\pi2$</span> etc. so that the equation reduces to <span class="math-container">$$ \frac{S}4=\tan(x)-x $$</span> where <span class="math-container">$x=\frac{180^∘−α}2$</span>. </p> <p>Now apply Newton's method or apply a fixed-point method. For the Newton method consider <span class="math-container">$f(x)=\arctan(x+\frac S4)-x$</span>. For small <span class="math-container">$S$</span>, <span class="math-container">$x$</span> dominates the sum with <span class="math-container">$S/4$</span>,see below. Thus <span class="math-container">$f(x)\sim x^3$</span>. To remove this multiple root, divide by <span class="math-container">$x^2$</span> which gives a Newton iteration <span class="math-container">$N(x)=x-\frac{xf(x)}{xf'(x)-2f(x)}$</span> which converges very rapidly</p> <p>A suitable fixed-point equation that works without trying too many alternative formulations is <span class="math-container">$x=g(x)=\arctan(x+\frac S4)$</span>. It converges reliably, however slowly.</p> <p><a href="https://i.stack.imgur.com/3qUDx.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3qUDx.png" alt="plot of the fixed-point iteration"></a> <sup><em>Iteration <span class="math-container">$x_{k+1}=g(x_k)$</span>, <span class="math-container">$x_0=0$</span>, over <span class="math-container">$S$</span></em></sup></p> <p>Finally, convert back <span class="math-container">$α=180^∘-360^∘\frac{x}\pi$</span>.</p> <hr> <p>For small values of <span class="math-container">$S$</span> and thus small <span class="math-container">$x$</span> this gives approximately <span class="math-container">$$ \frac S4\approx \frac{x^3}3\implies x\approx\sqrt[3]{\frac34S}. 
$$</span> For larger <span class="math-container">$S$</span> you can use the approximation <span class="math-container">$\frac{x}{1-4x^2/\pi^2}$</span> of the tangent function, so that an approximation is obtained as the smaller solution of a quadratic equation. <span class="math-container">$$ 0=\left(\frac{2x}\pi\right)^2+2\frac{2x}{\pi}\frac{\pi}{S}-1 =\left(\frac{2x}\pi+\frac{\pi}{S}\right)^2-1-\frac{\pi^2}{S^2} \\ \implies x=\frac\pi2\left(\sqrt{1+\frac{\pi^2}{S^2}}-\frac{\pi}{S}\right) =\frac{\pi S}{2(\sqrt{S^2+\pi^2}+\pi)} $$</span></p>
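<p>The fixed-point iteration described above is easy to run in code. This sketch (the iteration count and the sample value $S=1$ are arbitrary choices of mine) solves $\frac S4=\tan x - x$ via $x_{k+1}=\arctan(x_k+\frac S4)$ and converts back to the angle:</p>

```python
import math

def solve_alpha(S, iterations=200):
    # fixed point of g(x) = arctan(x + S/4); |g'| < 1, so this converges
    x = 0.0
    for _ in range(iterations):
        x = math.atan(x + S / 4)
    alpha = 180.0 - math.degrees(2 * x)  # alpha = 180 - 360*x/pi
    return alpha, x

alpha, x = solve_alpha(1.0)
residual = abs(math.tan(x) - x - 1.0 / 4)  # check S/4 = tan(x) - x
```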
358,328
<p>How do I go about doing this? I'm clueless.. Thank you.</p> <p>My attempt:</p> <p>Using the product rule and making:</p> <p>$\eqalign{ &amp; u = u \cr &amp; v = {v^{ - 1}} \cr} $</p> <p>so:</p> <p>${{du} \over {dx}} = 1$ and ${{dv} \over {dx}} = - {v^{ - 2}}$ so:</p> <p>$\eqalign{ &amp; {{dy} \over {dx}} = u{{dv} \over {dx}} + v{{du} \over {dx}} \cr &amp; {{dy} \over {dx}} = u( - {v^{ - 2}}) + {v^{ - 1}}(1) \cr &amp; {{dy} \over {dx}} = {v^{ - 2}}(v - u) \cr} $</p> <p>Where do I go from here?</p>
Community
-1
<p>We have $$\left(\frac{1}{v}\right)'=-\frac{v'}{v^2}$$ and $$(ut)'=u't+ut'$$ then we take $t=\frac{1}{v}$ and we find $$\left(\frac{u}{v}\right)'=\frac{u'}{v}-\frac{uv'}{v^2}=\frac{u'v-uv'}{v^2}$$</p>
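<p>A quick numerical cross-check of the resulting quotient rule, using sample functions $u=\sin x$, $v=x^2+1$ of my own choosing and a central finite difference:</p>

```python
import math

def u(x): return math.sin(x)
def du(x): return math.cos(x)
def v(x): return x * x + 1
def dv(x): return 2 * x

def quotient_rule(x):
    # (u/v)' = (u'v - uv') / v^2
    return (du(x) * v(x) - u(x) * dv(x)) / v(x)**2

def numeric_derivative(f, x, h=1e-6):
    # central difference approximation
    return (f(x + h) - f(x - h)) / (2 * h)

x0 = 0.7
err = abs(quotient_rule(x0) - numeric_derivative(lambda t: u(t) / v(t), x0))
```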
358,328
<p>How do I go about doing this? I'm clueless.. Thank you.</p> <p>My attempt:</p> <p>Using the product rule and making:</p> <p>$\eqalign{ &amp; u = u \cr &amp; v = {v^{ - 1}} \cr} $</p> <p>so:</p> <p>${{du} \over {dx}} = 1$ and ${{dv} \over {dx}} = - {v^{ - 2}}$ so:</p> <p>$\eqalign{ &amp; {{dy} \over {dx}} = u{{dv} \over {dx}} + v{{du} \over {dx}} \cr &amp; {{dy} \over {dx}} = u( - {v^{ - 2}}) + {v^{ - 1}}(1) \cr &amp; {{dy} \over {dx}} = {v^{ - 2}}(v - u) \cr} $</p> <p>Where do I go from here?</p>
DonAntonio
31,254
<p>Hint:</p> <p>$$(uv^{-1})'=u'v^{-1}+u(v^{-1})'$$</p> <p><strong>But</strong></p> <p>$$(v^{-1})'=-\frac{v'}{v^2}\;,\;\;\text{so}\ldots$$</p>
2,640,909
<p>I encountered a problem with 4 variables and I was wondering if anyone knows how to solve this: This is what is known:</p> <p>$$ \left\lbrace \begin{align} a+b &amp;= 1800 \\ c+d &amp;= 12 \\ a/c &amp;= 100 \\ b/d &amp;= 250 \\ (a+b)/(c+d) &amp;= 150 \end{align} \right.$$</p> <p>Below is a screenshot from a spreadsheet. The red numbers are the 4 unknowns that I'm trying to figure out how to solve for (I happen to know them, but would love to understand how to solve for them when I do not know them). <a href="https://i.stack.imgur.com/rlb4A.png" rel="nofollow noreferrer">screenshot</a> Any help would be greatly appreciated! Thank you!</p>
Donald Splutterwit
404,247
<p>Linear algebra will do ... we have \begin{eqnarray*} a+b=1800 \\ c+d=12 \end{eqnarray*} and $a=100c$, $b=250d$; sub these into the first equation &amp; cancel a factor of $50$. (Note that the fifth equation is satisfied by virtue of the first two.) \begin{eqnarray*} 2c+5d=36 \\ c+d=12 \end{eqnarray*} Should be a doddle from here?</p>
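<p>The substitution above can be mirrored in a few lines of Python (a sketch of the arithmetic, nothing more):</p>

```python
# a + b = 1800, c + d = 12, a = 100c, b = 250d
# substituting gives 100c + 250d = 1800, i.e. 2c + 5d = 36
d = (36 - 2 * 12) / (5 - 2)  # eliminate c using c = 12 - d
c = 12 - d
a = 100 * c
b = 250 * d
ratio = (a + b) / (c + d)    # the fifth equation; should be 150
```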