881,520
<p>Take half a square with side length $1$. The resulting right-angled triangle ABC has two angles of $45^\circ$. By Pythagoras’ theorem, the hypotenuse AC has length $\sqrt{2}$. Applying the definitions on the previous page gives the values in the table below, including that $\sin 30^\circ= \frac{1}{2}$.</p> <p>Sorry, I cannot provide the diagram, but from my understanding $\sin =$ opposite / hypotenuse. How is the value $0.5$ derived, then? No combination of these sides gives it. What point of reference should I be looking from?</p>
5xum
112,884
<p>The right angled triangle you describe has angles of $45$ degrees, not $30$ degrees. What you proved is $\sin 45=\frac{1}{\sqrt2}$, which is correct.</p>
881,520
Fermat
83,272
<p>First draw an equilateral triangle and find the sine and cosine of the $60^\circ$ angle. Then, for the $30^\circ$ angle, apply the identity $$\cos x=\sin (90^\circ -x)$$</p>
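A quick numerical sanity check of the hint (my addition, not part of the original answer), using only standard-library trigonometry:

```python
import math

# Bisecting an equilateral triangle of side 2 gives a right triangle with
# hypotenuse 2 and shortest side 1, so sin 30° = 1/2.
sin30 = math.sin(math.radians(30))
# The identity cos x = sin(90° - x) links the 60° and 30° angles:
cos60 = math.cos(math.radians(60))

assert abs(sin30 - 0.5) < 1e-12
assert abs(cos60 - sin30) < 1e-12
```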
881,520
John Joy
140,156
<p><img src="https://i.stack.imgur.com/zO5p0.png" alt="enter image description here"></p> <p>The trigonometric function values for all the special angles are derived from the first three regular polygons (equilateral triangle, square, and regular pentagon). The trigonometric function values for the first two shapes make use of the Pythagorean theorem. The values for the pentagon make use of similar triangles.</p>
1,610,700
<blockquote> <p>$$\int \frac{x-3}{\sqrt{1-x^2}} \mathrm dx$$</p> </blockquote> <p>I know that $\int \frac{1}{\sqrt{1-x^2}}\mathrm dx=\arcsin(x)+C$, but how can I continue from here? </p>
Error 404
206,726
<p>Write $\int \frac {x-3}{\sqrt {1-x^2}}\,\mathrm dx = \int \frac {x}{\sqrt {1-x^2}}\,\mathrm dx-3\int \frac {\mathrm dx}{\sqrt {1-x^2}}$.</p> <p>Use the substitution $1-x^2=u$ in the first integral of the RHS. You already know the second integral of the RHS, as per your post.</p>
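A numerical sketch of the suggested split (my addition; the antiderivatives below follow from the substitution $u=1-x^2$ and the arcsine integral):

```python
import math

def integrand(x):
    return (x - 3) / math.sqrt(1 - x * x)

def antiderivative(x):
    # From the split: ∫ x/√(1-x²) dx = -√(1-x²) via u = 1-x²,
    # and ∫ dx/√(1-x²) = arcsin(x).
    return -math.sqrt(1 - x * x) - 3 * math.asin(x)

# Midpoint-rule check on [0, 0.5]:
a, b, n = 0.0, 0.5, 100000
h = (b - a) / n
approx = sum(integrand(a + (i + 0.5) * h) for i in range(n)) * h
exact = antiderivative(b) - antiderivative(a)
assert abs(approx - exact) < 1e-6
```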
2,753,548
<p>Let $F$ be a field with $7^5$ elements and $$X=\{a^7-b^7 \mid a,b \in F\}.$$ Determine $X$. I have no idea how to solve this. Please help me.</p>
Antoine Giard
554,642
<p><strong>Hint:</strong> Prove that \begin{align*} \phi \ \colon \ &amp;F \to F\\ &amp;x \mapsto x^7, \end{align*} is an isomorphism.</p>
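The hint relies on the Frobenius map $x \mapsto x^p$ being additive in characteristic $p$. A small sanity check in the prime field $\mathbb{F}_7$ (my addition; checking $\mathbb{F}_{7^5}$ itself would need polynomial arithmetic, but the same argument applies there):

```python
# In characteristic 7 the binomial coefficients C(7, k) for 0 < k < 7 are
# divisible by 7, so (a + b)^7 = a^7 + b^7. Check additivity and
# injectivity of x -> x^7 in GF(7).
p = 7
frobenius_additive = all(
    pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
    for a in range(p) for b in range(p)
)
frobenius_injective = len({pow(x, p, p) for x in range(p)}) == p
assert frobenius_additive and frobenius_injective
```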
2,974,747
<p><strong>Q</strong>: Solve the equation <span class="math-container">$x^4+x^3-9x^2+11x-4=0$</span>, which has multiple roots.<br><strong>My approach</strong>: Let <span class="math-container">$f(x)=x^4+x^3-9x^2+11x-4=0$</span>. I know that if the equation has multiple roots, then there must exist an H.C.F. (highest common factor) of <span class="math-container">$f'(x)$</span> and <span class="math-container">$f(x)$</span>, or an H.C.F. of <span class="math-container">$f''(x)$</span> and <span class="math-container">$f(x)$</span>. But I don't know how to find the H.C.F. of two polynomials by synthetic division (my book calls it that and just writes down the H.C.F.). My book states that the H.C.F. of <span class="math-container">$f(x),f'(x)$</span> and <span class="math-container">$f''(x)$</span> is <span class="math-container">$(x-1)$</span>.<br> So <span class="math-container">$(x-1)^3$</span> is a factor of <span class="math-container">$f(x)$</span>. Hence <span class="math-container">$f(x)=(x-1)^3(x+4)=0$</span>.<br>Now my <strong>question</strong> is: how did they find the H.C.F. without the division method? Is there a general process to get the H.C.F. of polynomials without the division method, or some easier process? Any solution will be appreciated.<br>Thanks in advance.</p>
Community
-1
<p>There is no need to apply the Euclidean algorithm to solve this problem. Note that the sum of coefficients of <span class="math-container">$$f(x)=x^4+x^3−9x^2+11x−4$$</span> is <span class="math-container">$0$</span>, so <span class="math-container">$f(1)=0$</span>. Next, <span class="math-container">$$f'(x)=4x^3+3x^2-18x+11$$</span> also has zero coefficient sum. So, <span class="math-container">$f'(1)=0$</span>. Then, <span class="math-container">$$f''(x)=12x^2+6x-18$$</span> also has zero coefficient sum. This gives <span class="math-container">$f''(1)=0$</span>. Finally, <span class="math-container">$$f'''(x)=24x+6$$</span> does not satisfy <span class="math-container">$f'''(1)=0$</span>. Thus, <span class="math-container">$1$</span> is a root of <span class="math-container">$f$</span> with multiplicity <span class="math-container">$3$</span>. That is, <span class="math-container">$$f(x)=(x-1)^3g(x)$$</span> for some polynomial <span class="math-container">$g$</span>. Clearly, <span class="math-container">$g$</span> is linear and monic, with constant term <span class="math-container">$4$</span> (as <span class="math-container">$(-1)^3c=-4$</span> implies <span class="math-container">$c=4$</span>). So, <span class="math-container">$g(x)=x+4$</span>.</p>
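The zero-coefficient-sum argument is easy to check numerically (my addition, a plain-Python sketch):

```python
# f and its first two derivatives vanish at 1, the third does not,
# so 1 is a root of multiplicity exactly 3; then f = (x-1)^3 (x+4).
def f(x):  return x**4 + x**3 - 9*x**2 + 11*x - 4
def f1(x): return 4*x**3 + 3*x**2 - 18*x + 11
def f2(x): return 12*x**2 + 6*x - 18
def f3(x): return 24*x + 6

triple_root = f(1) == f1(1) == f2(1) == 0 and f3(1) != 0
factors_match = all(f(x) == (x - 1)**3 * (x + 4) for x in range(-10, 11))
assert triple_root and factors_match
```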
2,974,747
Will Jagy
10,400
<p>I quite like the Euclidean algorithm. The Extended part can be done as a continued fraction, same steps as for integers.</p> <p><span class="math-container">$$ \left( x^{4} + x^{3} - 9 x^{2} + 11 x - 4 \right) $$</span></p> <p><span class="math-container">$$ \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) $$</span></p> <p><span class="math-container">$$ \left( x^{4} + x^{3} - 9 x^{2} + 11 x - 4 \right) = \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) \cdot \color{magenta}{ \left( \frac{ 4 x + 1 }{ 16 } \right) } + \left( \frac{ - 75 x^{2} + 150 x - 75 }{ 16 } \right) $$</span> <span class="math-container">$$ \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) = \left( \frac{ - 75 x^{2} + 150 x - 75 }{ 16 } \right) \cdot \color{magenta}{ \left( \frac{ - 64 x - 176 }{ 75 } \right) } + \left( 0 \right) $$</span> <span class="math-container">$$ \frac{ 0}{1} $$</span> <span class="math-container">$$ \frac{ 1}{0} $$</span> <span class="math-container">$$ \color{magenta}{ \left( \frac{ 4 x + 1 }{ 16 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ 4 x + 1 }{ 16 } \right) }{ \left( 1 \right) } $$</span> <span class="math-container">$$ \color{magenta}{ \left( \frac{ - 64 x - 176 }{ 75 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ - 16 x^{2} - 48 x + 64 }{ 75 } \right) }{ \left( \frac{ - 64 x - 176 }{ 75 } \right) } $$</span> <span class="math-container">$$ \left( x^{2} + 3 x - 4 \right) \left( \frac{ 16}{75 } \right) - \left( 4 x + 11 \right) \left( \frac{ 4 x + 1 }{ 75 } \right) = \left( -1 \right) $$</span> <span class="math-container">$$ \left( x^{4} + x^{3} - 9 x^{2} + 11 x - 4 \right) = \left( x^{2} + 3 x - 4 \right) \cdot \color{magenta}{ \left( x^{2} - 2 x + 1 \right) } + \left( 0 \right) $$</span> <span class="math-container">$$ \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) = \left( 4 x + 11 \right) \cdot \color{magenta}{ \left( x^{2} - 2 x + 1 \right) } + \left( 0 \right) $$</span> <span class="math-container">$$ \mbox{GCD} = 
\color{magenta}{ \left( x^{2} - 2 x + 1 \right) } $$</span> <span class="math-container">$$ \left( x^{4} + x^{3} - 9 x^{2} + 11 x - 4 \right) \left( \frac{ 16}{75 } \right) - \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) \left( \frac{ 4 x + 1 }{ 75 } \right) = \left( - x^{2} + 2 x - 1 \right) $$</span></p> <p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p> <p>=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=</p> <p><span class="math-container">$$ \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc \bigcirc $$</span></p> <p><span class="math-container">$$ \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) $$</span></p> <p><span class="math-container">$$ \left( 2 x^{2} + x - 3 \right) $$</span></p> <p><span class="math-container">$$ \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) = \left( 2 x^{2} + x - 3 \right) \cdot \color{magenta}{ \left( \frac{ 4 x + 1 }{ 2 } \right) } + \left( \frac{ - 25 x + 25 }{ 2 } \right) $$</span> <span class="math-container">$$ \left( 2 x^{2} + x - 3 \right) = \left( \frac{ - 25 x + 25 }{ 2 } \right) \cdot \color{magenta}{ \left( \frac{ - 4 x - 6 }{ 25 } \right) } + \left( 0 \right) $$</span> <span class="math-container">$$ \frac{ 0}{1} $$</span> <span class="math-container">$$ \frac{ 1}{0} $$</span> <span class="math-container">$$ \color{magenta}{ \left( \frac{ 4 x + 1 }{ 2 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ 4 x + 1 }{ 2 } \right) }{ \left( 1 \right) } $$</span> <span class="math-container">$$ \color{magenta}{ \left( \frac{ - 4 x - 6 }{ 25 } \right) } \Longrightarrow \Longrightarrow \frac{ \left( \frac{ - 8 x^{2} - 14 x + 22 }{ 25 } 
\right) }{ \left( \frac{ - 4 x - 6 }{ 25 } \right) } $$</span> <span class="math-container">$$ \left( 4 x^{2} + 7 x - 11 \right) \left( \frac{ 2}{25 } \right) - \left( 2 x + 3 \right) \left( \frac{ 4 x + 1 }{ 25 } \right) = \left( -1 \right) $$</span> <span class="math-container">$$ \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) = \left( 4 x^{2} + 7 x - 11 \right) \cdot \color{magenta}{ \left( x - 1 \right) } + \left( 0 \right) $$</span> <span class="math-container">$$ \left( 2 x^{2} + x - 3 \right) = \left( 2 x + 3 \right) \cdot \color{magenta}{ \left( x - 1 \right) } + \left( 0 \right) $$</span> <span class="math-container">$$ \mbox{GCD} = \color{magenta}{ \left( x - 1 \right) } $$</span> <span class="math-container">$$ \left( 4 x^{3} + 3 x^{2} - 18 x + 11 \right) \left( \frac{ 2}{25 } \right) - \left( 2 x^{2} + x - 3 \right) \left( \frac{ 4 x + 1 }{ 25 } \right) = \left( - x + 1 \right) $$</span></p>
3,969,598
<p>Let <span class="math-container">$f: \mathbb{R} \rightarrow \mathbb{R}$</span> be a monotonically increasing function and <span class="math-container">$A \subset \mathbb{R}$</span>, where <span class="math-container">$A \neq \emptyset$</span> and <span class="math-container">$A$</span> is bounded.</p> <p>i) If <span class="math-container">$f$</span> is continuous, prove that <span class="math-container">$f(\sup (A))= \sup (f(A))$</span>.</p> <p>ii) Find a function <span class="math-container">$f: \mathbb{R} \rightarrow \mathbb{R}$</span> which does not fulfil i).</p> <p>For i), I thought that because <span class="math-container">$A$</span> is bounded it has a supremum,<br /> <span class="math-container">$$ x \le \sup A$$</span> On the other hand, because <span class="math-container">$f$</span> is monotonically increasing, if <span class="math-container">$x \le \sup A$</span> then <span class="math-container">$f(x) \le f(\sup A)$</span> for <span class="math-container">$x \in A$</span>.</p> <p>As <span class="math-container">$f$</span> is continuous and monotonically increasing on a bounded set, the set <span class="math-container">$f(A)$</span> has a supremum, i.e. <span class="math-container">$f(y) \le \sup f(A)$</span> for <span class="math-container">$y \in A$</span>, so <span class="math-container">$\sup f(A)$</span> and <span class="math-container">$f(\sup A)$</span> are both upper bounds.</p> <p>But I don't know what else I can do.</p>
Henry
6,460
<p><span class="math-container">$$s = r - \sqrt{r^2 - y^2}$$</span> can be rearranged to <span class="math-container">$$\sqrt{r^2 - y^2} = r - s$$</span> and squaring both sides (possibly introducing a spurious root) <span class="math-container">$${r^2 - y^2} = r ^2 -2rs +s^2$$</span> and simplifying <span class="math-container">$$2rs = y^2+s^2$$</span> and thus <span class="math-container">$$r = \frac{y^2+s^2}{2s}$$</span> and you can check this works correctly by substitution</p>
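The rearrangement is easy to verify numerically (my addition): pick sample values, compute $s$ from $r$ and $y$, then recover $r$ from the derived formula.

```python
import math

def recover_r(y, s):
    # The rearranged formula: r = (y^2 + s^2) / (2 s).
    return (y * y + s * s) / (2 * s)

# Sample (r, y) pairs with 0 < y < r; compute s = r - sqrt(r^2 - y^2),
# then check that the formula gives r back.
samples = [(5.0, 3.0), (2.0, 1.0), (10.0, 7.5)]
recovered_ok = all(
    abs(recover_r(y, r - math.sqrt(r * r - y * y)) - r) < 1e-9
    for r, y in samples
)
assert recovered_ok
```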
271,105
<p><strong>tl;dr</strong> What are some good workflows for developing and running data processing pipelines with Mathematica?</p> <hr /> <p>I sometimes develop data processing pipelines with Mathematica. I load some data, transform it, and derive some summary results. I tend to experiment quite a bit when doing this, checking the output after each step. The notebook environment is very convenient for this.</p> <p>When I'm done, I sometimes need to run this pipeline on multiple datasets. The notebook is not convenient for this, unfortunately. It's better to wrap up the sequence of operations into a function and call that function with several pieces of data.</p> <p>The problems with this approach are:</p> <ul> <li>Collecting all the steps into a function takes some work, and is error-prone.</li> <li>If I need to modify the pipeline later, it is very inconvenient to work with a single large function. It does not make it easy to look at partial results.</li> </ul> <p>How do people deal with this situation? What workflow have you found most convenient?</p>
Lukas Lang
36,508
<p><em>Disclaimer: I have faced similar issues as well, but I have not &quot;field-tested&quot; the solution I present below, so I can't speak about its issues and limitations.</em></p> <p>The idea is the following: We use <a href="https://reference.wolfram.com/language/ref/AutoGeneratedPackage.html" rel="noreferrer"><code>AutoGeneratedPackage</code></a> to convert the pipeline notebook into a package file. We then do minor post-processing on the package file, and package it into a function. Compared to trying to run the notebook itself via <code>NotebookEvaluate</code> or similar, this approach should be significantly more robust and performant.</p> <p>There are two files involved:</p> <ul> <li><p>The pipeline notebook, here <code>PipelineDemo.nb</code> (this file has <code>AutoGeneratedPackage-&gt;True</code> set):</p> <pre><code>input = FindFile@&quot;ExampleData/turtle.jpg&quot; (* &quot;D:\\Program Files\\Wolfram \ Research\\Mathematica\\12.3\\Documentation\\English\\System\\\ ExampleData\\turtle.jpg&quot; *) data = Import@input output1 = ColorNegate@data output2 = Colorize@data (* this will be the final output of the pipeline *) {output1, output2} </code></pre> <p><a href="https://i.stack.imgur.com/N9pST.png" rel="noreferrer"><img src="https://i.stack.imgur.com/N9pST.png" alt="enter image description here" /></a></p> <p>Note how everything except the input parameters is marked as initialization cell, such that it is exported to the package file. It is of course easy to exclude tests/notes from being exported simply by not marking the relevant cells as initialization cell. The last line will be what's returned from the pipeline function.</p> </li> <li><p>The file calling the pipeline, <code>PipelineUsageDemp.nb</code>. 
This could also be an actual package file to nicely wrap up the pipeline, but for simplicity, I just put the code in a notebook:</p> <pre><code>BeginPackage[&quot;PipelineDemo`&quot;]; pipeline; Begin[&quot;`Private`&quot;]; (* new cell *) Join @@ Import[FileNameJoin@{NotebookDirectory[],&quot;PipelineDemo.m&quot;}, &quot;HeldExpressions&quot;] /. HoldComplete[expr__] :&gt; ( SetDelayed @@ Hold[pipeline[input_], CompoundExpression[expr]] ) End[]; EndPackage[]; pipeline[FindFile@&quot;ExampleData/ocelot.jpg&quot;] </code></pre> <p><a href="https://i.stack.imgur.com/1nNCw.png" rel="noreferrer"><img src="https://i.stack.imgur.com/1nNCw.png" alt="enter image description here" /></a></p> <p>As you can see, we import the auto-generated <code>PipelineDemo.m</code> file as <code>&quot;HeldExpressions&quot;</code>, and post-process it into the body of the <code>pipeline</code> function. The symbol <code>input</code> (whose corresponding line was not exported to the package file) is now a parameter of the <code>pipeline</code> function.</p> </li> </ul> <h3>Alternative</h3> <p>An even simpler (but less clean) approach would be to have the same <code>PipelineDemo.nb</code> file, and then simply set the global variable <code>input</code> before <code>Get</code>ting the file.</p>
2,114,636
<p>Let $V$ be a finite-dimensional vector space over a field $\mathbb F.$ Let $\dim (V)=n&gt;0$ and let $\mathcal {B}=\{v_1,\ldots,v_n\}$ be a basis of $V.$ We know the dimension of $V \otimes_\mathbb {F}V$ is $n^2$, as $V \otimes_\mathbb {F}V \cong \mathbb {F^{n^2}}.$ Since the set $\mathcal {A}=\{v_i\otimes v_j:1 \leq i,j\leq n\}$ spans $V \otimes_\mathbb {F}V$ and the number of elements in $\mathcal {A}$ is $n^2$, $\mathcal {A}$ forms a basis for $V \otimes_\mathbb {F}V$ as a vector space over $\mathbb F.$ In particular, every element of $\mathcal {A}$ is non-zero.</p> <blockquote> <p>My question: if I don't want to use the above argument, how can I show from the construction of the tensor product that for any $i$ and $j$ the element $v_i\otimes v_j$ is non-zero in $V \otimes_\mathbb {F}V$?</p> </blockquote> <p>Many thanks.</p>
JWL
161,058
<p>Whatever your definition of tensor is, it should be true that any bilinear function $$ V\times V\longrightarrow \mathbb{F} $$ must factor uniquely through $$ V\times V\longrightarrow V\otimes_\mathbb{F} V\longrightarrow \mathbb{F} $$ Now take any linear functional $\ell :V\to\mathbb{F}$ such that $\ell(v_i)\neq0 $ and $\ell(v_j)\neq 0$. Define a bilinear function $f:V\times V\to \mathbb{F}$ as $$ f(u,v) = \ell(u)\ell(v) $$ Since $f$ is clearly bilinear, $f$ must factor through a homomorphism $g:V\otimes_\mathbb{F} V\to \mathbb{F}$, i.e. $g(u\otimes v) = f(u,v)$. Since $g(v_i\otimes v_j) = f(v_i,v_j) = \ell(v_i)\ell(v_j)\neq 0$, $v_i\otimes v_j$ is nonzero.</p>
2,114,636
Community
-1
<p>If $v_i$ and $v_j$ are non-zero, there is a bilinear map $\beta : V \times V \to k$ with $\beta(v_i,v_j) = 1$, and by the universal property of the tensor product we get a map $b : V \otimes V \to k $ with $b(v_i \otimes v_j) = 1$, so $v_i \otimes v_j$ cannot be zero. </p>
1,290,363
<p>So I already proved Closure and Associativity, now I'm trying to find the identity element of this operation defined as: $$ a * b = a + b - ab $$</p> <p>But my identity element gets cancelled...</p> <p>(The set defined in this exercise is the real numbers.)</p> <p><img src="https://i.stack.imgur.com/ZchjC.jpg" alt="enter image description here"></p>
Matt Samuel
187,867
<p>As stated in the other answers, the identity element is $0$. If the goal was to prove or disprove that this is a group, you checked the axioms in an unfortunate order, because inverses don't exist. In particular, $1$ does not have an inverse, because $a\ast 1=1$ for all $a$.</p>
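A quick check of the claims (my addition): $0$ is the identity for $a*b=a+b-ab$, and $1$ is absorbing, so it cannot have an inverse.

```python
def star(a, b):
    return a + b - a * b

# 0 is a two-sided identity: a * 0 = a + 0 - 0 = a.
identity_ok = all(star(a, 0) == a == star(0, a) for a in range(-10, 11))
# 1 absorbs everything: a * 1 = a + 1 - a = 1, so nothing sends 1 back to 0.
absorbing_ok = all(star(a, 1) == 1 == star(1, a) for a in range(-10, 11))
assert identity_ok and absorbing_ok
```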
3,093,660
<p>This is an introductory task from an exam. </p> <p><strong>If</strong> <span class="math-container">$z = -2(\cos{5} - i\sin{5})$</span>, <strong>then what are:</strong></p> <p><span class="math-container">$Re(z), Im(z), arg(z)$</span> and <span class="math-container">$ |z|$</span>?</p> <p>First of all, how is it possible that the modulus is negative, <span class="math-container">$|z|=-2$</span>? Or is the modulus actually <span class="math-container">$|z|= 2$</span>, and the minus is kind of in front of everything, and that's why the sign inside of the brackets is changed as well? That would make some sense.</p> <p>I assume <span class="math-container">$arg(z) = 5$</span>. How do I calculate <span class="math-container">$Re(z) $</span> and <span class="math-container">$Im(z)$</span>? Something like this should do the job?</p> <p><span class="math-container">$$arg(z) = \frac{Re(z)}{|z|}$$</span></p> <p><span class="math-container">$$5 = \frac{Re(z)}{2}$$</span></p> <p><span class="math-container">$$10 = Re(z)$$</span></p> <p>And analogously with <span class="math-container">$Im(z):$</span></p> <p><span class="math-container">$$arg(z) = \frac{Im(z)}{|z|}$$</span></p> <p><span class="math-container">$$5 = \frac{Im(z)}{2} \Rightarrow Im(z) = Re(z) = 10$$</span></p> <p>I'm sure I'm confusing something here, probably some wrong <span class="math-container">$\pm$</span> signs somewhere.</p> <p><strong>And finally:</strong> is there a good calculator for complex numbers? Say I have a polar form and I want to find <span class="math-container">$Re(z), Im(z)$</span> and such. WolframAlpha doesn't seem to work well for that.</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$z=a+bi$</span>, with <span class="math-container">$a,b\in\mathbb R$</span>, then <span class="math-container">$\lvert z\rvert=\sqrt{a^2+b^2}$</span>; in particular, <span class="math-container">$\lvert z\rvert\geqslant0$</span> for any <span class="math-container">$z\in\mathbb C$</span>.</p> <p>Actually, <span class="math-container">$\bigl\lvert-2(\cos5-i\sin5)\bigr\rvert=2$</span>, <span class="math-container">$\operatorname{Re}\bigl(-2(\cos5-i\sin5)\bigr)=-2\cos5$</span>, and <span class="math-container">$\operatorname{Im}\bigl(-2(\cos5-i\sin5)\bigr)=2\sin5$</span>.</p>
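The answer's values are easy to confirm with complex arithmetic (my addition; angles in radians):

```python
import math

# z = -2 (cos 5 - i sin 5); distribute the -2 and read off Re, Im, |z|.
z = -2 * (math.cos(5) - 1j * math.sin(5))

assert abs(abs(z) - 2) < 1e-12                   # modulus is 2, not -2
assert abs(z.real - (-2 * math.cos(5))) < 1e-12  # Re(z) = -2 cos 5
assert abs(z.imag - 2 * math.sin(5)) < 1e-12     # Im(z) = 2 sin 5
```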
575,597
<p>I don't understand how to solve $3^{1/4} \cdot 9^{-5/8}$. Help please?</p> <p>I have tried many different things, but they're not working. Once I plug the problem into a math equation solver, the answer $1/3$ appears, but I don't understand how they got that. </p>
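Writing $9=3^2$ reduces the product to a single power of $3$: $3^{1/4}\cdot 9^{-5/8}=3^{1/4}\cdot 3^{-5/4}=3^{-1}=\frac13$. A numeric check of this simplification (my addition):

```python
# 9^(-5/8) = (3^2)^(-5/8) = 3^(-5/4); the exponents add: 1/4 - 5/4 = -1.
value = 3 ** (1 / 4) * 9 ** (-5 / 8)
assert abs(value - 1 / 3) < 1e-12
```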
Newb
98,587
<p>We have your equivalence relation <strong>R</strong> as $a \sim b$ if $3|(a^2 - b^2)$.</p> <p>To find the class of elements equivalent to $0$, we need to set one of the elements to $0$ (just one because of reflexivity, symmetry, and transitivity, as this is an equivalence relation): suppose $a \sim 0$, i.e. $3|(a^2 - 0^2) \Longrightarrow 3|a^2$. Then by <a href="http://en.wikipedia.org/wiki/Euclid%27s_lemma" rel="nofollow">Euclid's Lemma</a>, $3|a$. So the equivalence class of $0$ is the set of all integers that we can divide by $3$, i.e. that are multiples of $3: \{\ldots, -6,-3,0,3,6, \ldots\}$.</p>
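A brute-force check of the equivalence class (my addition): under $a \sim b \iff 3\mid(a^2-b^2)$, the integers related to $0$ are exactly the multiples of $3$.

```python
# a ~ 0 means 3 | a^2; by Euclid's lemma that forces 3 | a.
related_to_zero = [a for a in range(-9, 10) if (a * a) % 3 == 0]
multiples_of_3 = [a for a in range(-9, 10) if a % 3 == 0]
assert related_to_zero == multiples_of_3
```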
575,597
Cory Crowley
397,663
<p>The set has the following equivalence relations. <a href="https://i.stack.imgur.com/xm0Vi.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/xm0Vi.png" alt="[0] equivalence relation on set"></a></p>
3,692,877
<p>Consider the following expression in three variables, <span class="math-container">$0 \leq p,s \leq 1$</span> and <span class="math-container">$n &gt;0$</span></p> <p><span class="math-container">$$S_{n, p, s} = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} e^{-s(k - np)^2}$$</span></p> <p>If <span class="math-container">$s = 0$</span> then <span class="math-container">$S_{n, p, 0} = 1$</span>.</p> <blockquote> <p>Is there a closed form for the sum for <span class="math-container">$ s &gt; 0$</span>? If not, can it be approximated if we assume that <span class="math-container">$n$</span> is large?</p> </blockquote>
SchrodingersCat
278,967
<p>Just an attempt in trying to simplify the sum (without any idea to reach a closed form):</p> <p><span class="math-container">$$S_{n, p, s} = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} e^{-s(k - np)^2}$$</span> <span class="math-container">$$ = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} \sum_{m=0}^\infty (-1)^m \frac{\left[s(k - np)^2\right]^m}{m!}$$</span> <span class="math-container">$$ = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} \sum_{m=0}^\infty \frac{(-1)^m}{m!} s^m \sum_{r=0}^{2m} k^r (-np)^{2m-r} $$</span> <span class="math-container">$$ = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} \sum_{m=0}^\infty \frac{(-1)^{m}}{m!} s^m \sum_{r=0}^{2m} (-k)^r (np)^{2m-r} $$</span> <span class="math-container">$$ = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} \sum_{m=0}^\infty \frac{(-1)^{m}}{m!} (sn^2p^2)^m \sum_{r=0}^{2m} \left(-\frac{k}{np}\right)^r $$</span> <span class="math-container">$$ = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} \sum_{m=0}^\infty \frac{(-1)^{m}}{m!} (sn^2p^2)^m \left[ \frac{\left(-\frac{k}{np}\right)^{2m+1}-1}{\left(-\frac{k}{np}\right)-1} \right] $$</span> <span class="math-container">$$ = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} \left[ \frac{\exp(-sk^2) \left(\frac{k}{np}\right)+\exp(-sn^2p^2)}{\left(\frac{k}{np}\right)+1} \right] $$</span> <span class="math-container">$$ = \sum_{k=0}^n {n \choose k} p^k (1-p)^{n-k} \left[ \frac{k\exp(-sk^2)+np\exp(-sn^2p^2)}{k+np} \right] $$</span></p> <p>See if this helps in leading to a closed form at certain approximations of <span class="math-container">$n$</span>, as well as any condition on <span class="math-container">$np$</span>.</p>
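Independently of the rearrangement above, the defining sum is cheap to evaluate directly, which is handy for testing any candidate closed form (my addition; `math.comb` requires Python 3.8+):

```python
import math

def S(n, p, s):
    # Direct evaluation of the sum from the question.
    return sum(
        math.comb(n, k) * p**k * (1 - p)**(n - k)
        * math.exp(-s * (k - n * p) ** 2)
        for k in range(n + 1)
    )

base_case = abs(S(20, 0.3, 0.0) - 1.0) < 1e-12  # S = 1 when s = 0
damped = 0.0 < S(20, 0.3, 0.5) < 1.0            # s > 0 shrinks the sum
assert base_case and damped
```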
3,191,345
<p>Evaluate the following definite integral: <span class="math-container">$$\int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}}\,dx \qquad \qquad \qquad (1)$$</span> </p> <p><span class="math-container">\begin{align} &amp; = \int_0^{\pi/2}\cfrac{\cos x}{\sqrt{1-\sin x}}\,dx \quad \text{(using } u=1-\sin x \text{ and } dx= \cfrac{-du}{\cos x}\text{)} \\ &amp; = -\int_1^0\cfrac{du}{\sqrt u} \\ &amp; = \int_0^1\cfrac{du}{\sqrt u} \\ &amp; = 2\sqrt u \,\Big|_0^1 \\ &amp; = 2-0 =2 \\ \end{align}</span> But Symbolab says it is 0. What have I done wrong in (1)?</p>
fGDu94
658,818
<p>Symbolab is wrong here: <span class="math-container">$\cos(x) \geq 0$</span> on the interval and <span class="math-container">$\sqrt{1-\sin(x)} \geq 0$</span> on the interval, with regions where both are strictly positive, so the integral must be positive.</p>
3,191,345
Dr. Sonnhard Graubner
175,066
<p>I would substitute <span class="math-container">$$t=\sqrt{1-\sin(x)}.$$</span> Then <span class="math-container">$$dt=-\frac{\cos(x)}{2\sqrt{1-\sin(x)}}\,dx,$$</span> so <span class="math-container">$\cos(x)\,dx=-2t\,dt$</span> and the integral becomes <span class="math-container">$$\int_1^0(-2)\,dt=2.$$</span></p>
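A numeric check that the value is indeed $2$ (my addition; on $[0,\pi/2]$ the integrand equals $\sqrt{1+\sin x}$, so it is smooth and a midpoint rule is safe):

```python
import math

def g(x):
    return math.cos(x) / math.sqrt(1 - math.sin(x))

# Midpoint rule avoids the endpoint x = π/2, where the formula is 0/0
# (the limit there is finite, namely sqrt(2)).
a, b, n = 0.0, math.pi / 2, 100000
h = (b - a) / n
approx = sum(g(a + (i + 0.5) * h) for i in range(n)) * h
assert abs(approx - 2.0) < 1e-4
```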
672,736
<p>Let $A = \begin{bmatrix}1&amp;2&amp;1\\0&amp;1&amp;0\\1&amp;3&amp;1\end{bmatrix}$. Find the eigenvalues of $A$.</p> <p>I think my overall approach is on steady ground; I just have some difficulty getting the right answer.</p> <p>What I have done so far:</p> <p>$P(\lambda) = \det(A - \lambda I)$</p> <p>$\det\begin{bmatrix}1-\lambda&amp;2&amp;1\\0&amp;1-\lambda&amp;0\\1&amp;3&amp;1-\lambda\end{bmatrix} = 0$</p> <p>$=(1-\lambda)(1-\lambda)^2 - 2(0) + 1(1-\lambda) = 0$</p> <p>$= (1- \lambda) ^3 +(1-\lambda) = 0$</p> <p>But I'm not getting the right eigenvalues. The above gives me only the eigenvalue $1$,</p> <p>but the right answer is $2, 1, 0$.</p>
Eleven-Eleven
61,030
<p>Your expansion is wrong. Since the middle row has two zeros, you only have to expand along the middle row's cofactor: $$\det(A-\lambda I)=(1-\lambda)\left((1-\lambda)^2-1\right)=(1-\lambda)(1-2\lambda+\lambda^2-1)=(1-\lambda)(\lambda^2-2\lambda)$$ $$=\lambda(1-\lambda)(\lambda-2)$$</p>
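A quick verification (my addition) that the characteristic polynomial vanishes exactly at $\lambda = 0, 1, 2$, using a hand-rolled $3\times 3$ determinant:

```python
def det3(m):
    # Cofactor expansion along the first row of a 3x3 matrix.
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def char_poly(lam):
    A = [[1, 2, 1], [0, 1, 0], [1, 3, 1]]
    return det3([[A[r][c] - (lam if r == c else 0) for c in range(3)]
                 for r in range(3)])

eigen_ok = all(char_poly(lam) == 0 for lam in (0, 1, 2))
assert eigen_ok
assert char_poly(3) != 0  # degree 3, so these three roots are all of them
```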
3,243,733
<p><strong>Use induction to show that the Fibonacci numbers satisfy F(n) <span class="math-container">$\ge$</span> <span class="math-container">$(2 ^ {(n-1) / 2})$</span> for all n <span class="math-container">$\ge$</span> 3</strong></p> <p>My work thus far:</p> <blockquote> <p>Base Case: F(3) <span class="math-container">$\ge$</span> <span class="math-container">$2 ^ {(3-1) / 2}$</span> => F(3) <span class="math-container">$\ge$</span> <span class="math-container">$2 ^ {1}$</span></p> <p>Induction Hypothesis: Assume F(n) is true for all 3 &lt; n &lt; k</p> <p>Inductive step: for (k + 1), F(k + 1) <span class="math-container">$\ge$</span> <span class="math-container">$2 ^ {(k + 1 - 1) / 2}$</span> => F(3) <span class="math-container">$\ge$</span> <span class="math-container">$2 ^ {k / 2}$</span></p> </blockquote> <p>I'm not sure where to go from here.</p>
YiFan
496,634
<p>You seem to be misunderstanding how a proof by induction works. Say you have a proposition <span class="math-container">$P(n)$</span> to be verified for all natural numbers <span class="math-container">$n=0,1,2,\dots$</span>. The <strong>base case</strong> is to show that <span class="math-container">$P(0)$</span> is true, which is usually the easiest. The <strong>inductive hypothesis</strong> is that for some <span class="math-container">$k$</span>, <span class="math-container">$P(n)$</span> is true for all <span class="math-container">$n=0,1,2,\dots,k$</span>. You seem to get the idea until now. The <strong>inductive step</strong> is to show, assuming <span class="math-container">$P(0),P(1),\dots,P(k)$</span>, that <span class="math-container">$P(k+1)$</span> is also true. This is what allows the domino reaction to occur: once <span class="math-container">$P(k+1)$</span> is true, so is <span class="math-container">$P(k+2)$</span>, and so on for every natural number.</p> <p>In this case, <span class="math-container">$F(k+1)\geq2^{(k+1-1)/2}$</span> is what you <strong>want to show</strong>. You do not a priori know it to be true. Thus, it makes no sense to start by assuming it, as you seem to have done. Instead, start with the assumptions that <span class="math-container">$$ F(k-1)\geq 2^{(k-2)/2}\quad\text{and}\quad F(k)\geq 2^{(k-1)/2}.$$</span> Note that by definition, <span class="math-container">$F(k+1)=F(k)+F(k-1)$</span>. Summing the above two inequalities, <span class="math-container">$$F(k+1)=F(k)+F(k-1)\geq 2^{(k-2)/2}+2^{(k-1)/2}.$$</span> The big question, now, is whether it is true that <span class="math-container">$2^{(k-2)/2}+2^{(k-1)/2}$</span> is greater than or equal to <span class="math-container">$2^{k/2}$</span>. It turns out that it is true. To show this is the inductive step that you have to make and the conclusion that would complete the proof. Can you finish now?</p> <hr> <p>Try completing the inductive step on your own first. 
If you need a hint, look below:</p> <blockquote class="spoiler"> <p> Note that <span class="math-container">$2^{(k-1)/2}&gt;2^{(k-2)/2}$</span>, so <span class="math-container">$2^{(k-1)/2}+2^{(k-2)/2}&gt;2\cdot2^{(k-2)/2}=2^{(k-2)/2+1}=2^{k/2}$</span>.</p> </blockquote>
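<p>A quick numerical sanity check of the bound (a sketch; it assumes the indexing convention <span class="math-container">$F(0)=F(1)=1$</span>, so that <span class="math-container">$F(2)=2$</span> — with the convention <span class="math-container">$F(1)=F(2)=1$</span> the inequality would already fail at <span class="math-container">$n=2$</span>):</p>

```python
# Check F(n) >= 2^((n-1)/2) for small n, assuming F(0) = F(1) = 1
# (so F(2) = 2, F(3) = 3, F(4) = 5, ...).
def fib(n):
    a, b = 1, 1  # F(0), F(1)
    for _ in range(n):
        a, b = b, a + b
    return a

for n in range(40):
    assert fib(n) >= 2 ** ((n - 1) / 2), n
print("bound holds for n = 0..39")
```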
1,700,689
<p>Let $A, B$, and $C$ be sets. If $A\backslash B$ is a subset of $C$, then $A\backslash C$ is a subset of $B$. Is this a direct proof where I let $x$ be an element of $A$ and then work from there? I can't seem to figure out all of the cases. Thanks for help in advance.</p>
martini
15,379
<p>As you want to prove $A \setminus C \subseteq B$, start with an element $x \in A \setminus C$, that is $x \in A$, $x \not\in C$. Then either $x \in B$, or $x \not\in B$. If $x \not\in B$, then $x \in A \setminus B$, hence $x \in C$, which is not possible. If $x \in B$, we are done.</p> <p>Hence $A \setminus C \subseteq B$.</p>
1,700,689
<p>Let $A, B$, and $C$ be sets. If $A\backslash B$ is a subset of $C$, then $A\backslash C$ is a subset of $B$. Is this a direct proof where I let $x$ be an element of $A$ and then work from there? I can't seem to figure out all of the cases. Thanks for help in advance.</p>
johnnycrab
171,304
<p>Let $x \in A \setminus C$, then $x \in A$, but $x \notin C$. If $x \notin B$, then $x \in C$ (because $A \setminus B \subset C$), which is a contradiction. Thus $x \in B$ and $A \setminus C \subset B$.</p>
272,846
<p>Suppose I have a List of numbers:</p> <pre><code>num = Range[5] </code></pre> <p>I want to combine the second and the third element into a sublist to get the result as {1,{2,3},4,5}.<br /> I tried using this:</p> <pre><code>MapAt[List, num, {{2}, {3}}] </code></pre> <p>which is not giving me the desired result. What changes are needed to be made?<br /> Can the same changes be applied to this code:</p> <pre><code>music = SoundNote[&quot;CSharp&quot;, 0.1, 0.2, &quot;Violin&quot;] </code></pre> <p>to get the result as SoundNote[CSharp,{0.1,0.2},Violin]?</p>
Artes
184
<p>I guess this is the simplest approach:</p> <pre><code>num /. {a_, b_, c_, d___} :&gt; {a, {b, c}, d} </code></pre> <blockquote> <pre><code> {1, {2, 3}, 4, 5} </code></pre> </blockquote> <p>At the last position we have <code>BlankNullSequence</code> to make it work.</p>
3,014,453
<p>If there is a number somewhere between 0 and 100 and you have to find it with the least attempts possible. Every attempt consists of you checking if the number is smaller (or bigger) than a number in the said interval (0 to 100). My guess would be you start with the half way point.</p> <p>Is it smaller than 50? yes --> is it smaller than 25---> yes ---> is it smaller than 25 ---> no ---> is it smaller than 37.5 ---> yes...etc </p> <p>If this is indeed the faster method, what would be the formula that expresses it? If this isn't the fastest method, what is it and how is it expressed mathematically and verbally? Thanks.</p>
badjohn
332,763
<p>The other answers have confirmed that this is the best method but did not mention how you can see that you cannot get much better. With <span class="math-container">$7$</span> tests and only <span class="math-container">$2$</span> possibilities each, there are <span class="math-container">$2 ^ 7 = 128$</span> possible result sets so there is some chance of distinguishing <span class="math-container">$101$</span> cases if the tests are chosen well. With only <span class="math-container">$6$</span> tests, there would be at most <span class="math-container">$2 ^ 6 = 64$</span> possible result sets and hence no hope of distinguishing <span class="math-container">$101$</span> cases. </p> <p>I say, "cannot get much better". As I just said, you won't be able to get the worst case below <span class="math-container">$7$</span> but, with some tweaking, you might get the average a little lower. </p> <p>This type of analysis may prove that you cannot do better than a certain number of tests but does not prove that it is possible. For example, suppose there were <span class="math-container">$3$</span> possible results to each test <span class="math-container">$&lt;$</span>, <span class="math-container">$=$</span>, and <span class="math-container">$&gt;$</span> then since <span class="math-container">$3 ^ 5 &gt; 101$</span> you might hope that <span class="math-container">$5$</span> tests would be sufficient but this alone would not prove it. You would need to find an algorithm with a worst case of <span class="math-container">$5$</span> steps or prove it in some other way. </p>
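<p>The halving strategy and the counting argument above can be simulated directly. The sketch below (illustrative code, phrasing each question as "is the number at most m?") confirms that the worst case over all 101 possible secrets is exactly 7 questions:</p>

```python
# Simulate the halving strategy for a secret integer in 0..100.
# Each question "is the number <= m?" splits the candidate interval.
def questions_needed(secret, lo=0, hi=100):
    count = 0
    while lo < hi:
        mid = (lo + hi) // 2
        count += 1
        if secret <= mid:   # answer "yes"
            hi = mid
        else:               # answer "no"
            lo = mid + 1
    return count

worst = max(questions_needed(s) for s in range(101))
print(worst)  # 7: consistent with 2**7 = 128 >= 101 > 64 = 2**6
```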
95,598
<p>I have a wavefunction $\psi(x,t)=Ae^{i(kx-\omega t)}+ Be^{-i(kx+\omega t)}$. $A$ and $B$ are complex constants.</p> <p>I am trying to find the probability density, so I need to find the product of $\psi$ with it's complex conjugate. The problem is, im not sure what is it's complex conjugate, I know the complex conjugate of $5+4i$ is $5-4i$, but what would be the complex conjugate of $\psi$? Is it just $-Ae^{i(kx-\omega t)}-Be^{-i(kx+\omega t)}$?</p>
MrOperator
21,887
<p>Complex conjugation distributes over sums and products, so you can take the complex conjugate of the terms with $A$ and $B$ separately. The constants $A$ and $B$ pose no problem; they conjugate according to the usual rules. This leaves factors of the form $e^{a+bi}$. Now note that $e^{a+bi}= e^a(\cos(b)+i \sin(b))$. Taking the complex conjugate and using $\cos(-b)=\cos(b)$ and $-\sin(b)=\sin(-b)$, you find the complex conjugate $e^{a+i(-b)}$.</p> <p>This means: $\bar{\psi} = \bar{A} \mathrm{e}^{-i (k x - \omega t)} + \bar{B} \mathrm{e}^{i (k x + \omega t)}$</p> <p>Note the use of the minus sign to compactly write the complex conjugate of $e^{a+ib}$. In computations this is what you write, but you may want to keep the explanation with $\cos$ and $\sin$ in the back of your head.</p>
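<p>A quick numerical check of this conjugation rule, with made-up sample values for $A$, $B$, $k$, $\omega$, $x$ and $t$:</p>

```python
# Numerically verify conj(psi) = conj(A)e^{-i(kx-wt)} + conj(B)e^{i(kx+wt)}
# at arbitrary sample values (the constants below are just test inputs).
from cmath import exp

A, B = 2 + 3j, 1 - 4j
k, w, x, t = 1.3, 0.7, 0.5, 2.0

psi = A * exp(1j * (k * x - w * t)) + B * exp(-1j * (k * x + w * t))
psi_bar = A.conjugate() * exp(-1j * (k * x - w * t)) \
        + B.conjugate() * exp(1j * (k * x + w * t))

assert abs(psi.conjugate() - psi_bar) < 1e-12
print(abs(psi) ** 2)  # the probability density psi * conj(psi) at this point
```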
2,890,625
<p>Suppose $f(x)$ is differentiable on $[0,1]$, and $f(0)=0$, $f(x)\ne 0,\forall x\in(0,1)$ , Prove for every $n,m\in\mathbb{N^+}$, there exists $\xi=\xi_{n,m}\in(0,1)$ such that $$n\cdot\frac{f'(\xi)}{f(\xi)}=m\cdot\frac{f'(1-\xi)}{f(1-\xi)}$$</p>
xbh
514,490
<p>MVT method:</p> <p>Consider the auxiliary function $$ F(x) =f(x)^n f(1-x)^m, \quad x \in [0,1]. $$ Since $f(0)=0$, we have $F(0)=F(1)=0$, so Rolle's theorem yields $\xi \in (0,1)$ with $F'(\xi)=0$, that is, $$ n f'(\xi)f(\xi)^{n-1}f(1-\xi)^m = m f(\xi)^n f'(1-\xi)f(1-\xi)^{m-1}. $$ Dividing both sides by $f(\xi)^n f(1-\xi)^m \neq 0$ achieves the goal.</p>
4,313,593
<p>I am asked to determine the derivative of the function <span class="math-container">\begin{align*} f(\textbf{x}) = \left\|\textbf{A}\textbf{x}-\textbf{y}\right\|_{2} ^{2}+\alpha \textbf{x}^{\mathsf{T}}\textbf{M}\textbf{x} \end{align*}</span> with <span class="math-container">$\textbf{A} \in \mathbb{R}^{m, n}, \ \textbf{x}, \textbf{y} \in \mathbb{R}^{n} , \ \textbf{M}\in \mathbb{R}^{m, n} , \ \alpha\in \mathbb{R}^{} $</span>. I know that we can differentiate termwise and for the first I got: <span class="math-container">\begin{align*} \frac{\partial }{\partial x_{i}} \left\|\textbf{A}\textbf{x}-\textbf{y}\right\|_{2} &amp;= \frac{\partial }{\partial x_{i}} \sum_{k=1}^{m} (\textbf{A}\textbf{x}-\textbf{y})_{k}^{2} =\frac{\partial }{\partial x_{i}} \sum_{k=1}^{n} \left(\sum_{j=1}^{n} a_{k, j}x_{j} - y_{k}\right)^{2} \\ &amp;= \frac{\partial }{\partial x_{i}} \sum_{k=1}^{n} (a_{k, i}\cdot x_{i}-y_{k})^{2} = 2\sum_{k=1}^{n} a_{k, i}^{2}\cdot x_{i} - 2 \sum_{k=1}^{n} a_{k, i}\cdot x_{i}\cdot y_{k} .\end{align*}</span></p> <p>Is my calculation above correct? Is there an easier way to determine the derivative of this expression? Thanks in advance!</p>
Will Jagy
10,400
<p>ADDED next morning: took forever, I finally confirmed that my <span class="math-container">$A$</span> satisfies <span class="math-container">$31A^4 - 22 A^2 + 8A - 1 = 0,$</span> which reduces, eventually, to <span class="math-container">$$ 31 A^3 - 31 A^2 + 9A - 1 = 0$$</span> Meanwhile, the dependence is <span class="math-container">$A = \frac{\alpha^3}{\alpha^3 + 2}$</span> with <span class="math-container">$$ \alpha \approx 1.465571231876768026656731225 $$</span> <span class="math-container">$$ A \approx 0.6114919919508125184143170109 $$</span> An intermediate step was confirming that <span class="math-container">$\alpha^{12}-3\alpha^9 - \alpha^6 + 2 \alpha^3 - 1=0.$</span> From</p> <p><span class="math-container">$$ x^{12} - 3 x^9 - x^6 + 2 x^3 - 1 = $$</span> <span class="math-container">$$ (x+1)(x^2-x+1)(x^3-x^2-1)(x^6+x^5+x^4-2x^3-x^2+1) $$</span></p> <p>Then use <span class="math-container">$\alpha^3 = \frac{2A}{1-A}$</span>. That is, <span class="math-container">$\delta = \alpha^3$</span> is a root of <span class="math-container">$\delta^4 - 3 \delta^3 - \delta^2 + 2 \delta - 1 = (\delta+1)(\delta^3-4\delta^2 + 3 \delta -1) $</span></p> <p>ORIGINAL: the set of sequences <span class="math-container">$x_n$</span> with <span class="math-container">$ x_{n+3} - x_{n+2} - x_n = 0$</span> is a vector space over the complexes. 
A basis is <span class="math-container">$$ \alpha^n \; \; , \; \; \beta^n \; \; , \; \; \bar{\beta}^n \; \; , \; \; $$</span> where <span class="math-container">$ \alpha \approx 1.465571231876768026656731225 $</span> and <span class="math-container">$ \beta \approx -0.2327856159383840133283656126 + 0.7925519925154478483258983007i$</span></p> <p>Note that <span class="math-container">$\beta$</span> and <span class="math-container">$\bar{\beta}$</span> have norm smaller than <span class="math-container">$1,$</span> so that <span class="math-container">$\beta^n$</span> and <span class="math-container">$\bar{\beta}^n$</span> approach <span class="math-container">$0$</span> fairly quickly.</p> <p>I see, you are calling my <span class="math-container">$\alpha = C$</span></p> <p>Any complex series can be written using the basis. If all elements of the sequence are real, the coefficients come out <span class="math-container">$$ x_n = A \alpha^n + B \beta^n + \bar{B} \bar{\beta}^n $$</span></p> <p>Once more, you have my <span class="math-container">$A = d.$</span> From the comment about the norms going to zero, we know that ( because your sequence is always integers) <span class="math-container">$x_n$</span> really is the closest integer to <span class="math-container">$A \alpha^n.$</span> Rounding a positive real <span class="math-container">$t$</span> to the nearest integer can be done with <span class="math-container">$ \left\lfloor t + \frac{1}{2} \right\rfloor $</span></p>
4,313,593
<p>I am asked to determine the derivative of the function <span class="math-container">\begin{align*} f(\textbf{x}) = \left\|\textbf{A}\textbf{x}-\textbf{y}\right\|_{2} ^{2}+\alpha \textbf{x}^{\mathsf{T}}\textbf{M}\textbf{x} \end{align*}</span> with <span class="math-container">$\textbf{A} \in \mathbb{R}^{m, n}, \ \textbf{x}, \textbf{y} \in \mathbb{R}^{n} , \ \textbf{M}\in \mathbb{R}^{m, n} , \ \alpha\in \mathbb{R}^{} $</span>. I know that we can differentiate termwise and for the first I got: <span class="math-container">\begin{align*} \frac{\partial }{\partial x_{i}} \left\|\textbf{A}\textbf{x}-\textbf{y}\right\|_{2} &amp;= \frac{\partial }{\partial x_{i}} \sum_{k=1}^{m} (\textbf{A}\textbf{x}-\textbf{y})_{k}^{2} =\frac{\partial }{\partial x_{i}} \sum_{k=1}^{n} \left(\sum_{j=1}^{n} a_{k, j}x_{j} - y_{k}\right)^{2} \\ &amp;= \frac{\partial }{\partial x_{i}} \sum_{k=1}^{n} (a_{k, i}\cdot x_{i}-y_{k})^{2} = 2\sum_{k=1}^{n} a_{k, i}^{2}\cdot x_{i} - 2 \sum_{k=1}^{n} a_{k, i}\cdot x_{i}\cdot y_{k} .\end{align*}</span></p> <p>Is my calculation above correct? Is there an easier way to determine the derivative of this expression? Thanks in advance!</p>
Claude Leibovici
82,404
<p>Just in case you want the exact formulae</p> <p><span class="math-container">$$c_1=\frac{1}{3} \left(1+2 \cosh \left(\frac{1}{3} \cosh ^{-1}\left(\frac{29}{2}\right)\right)\right)$$</span></p> <p>and</p> <p><span class="math-container">$$d_1=\frac 13 \Bigg[1+\frac{4}{\sqrt{31}}\cosh \left(\frac{1}{3} \cosh ^{-1}\left(\frac{\sqrt{31}}{2}\right)\right) \Bigg]$$</span></p>
1,107,317
<p>I've got this hypergeometric series</p> <p>$_2F_1 \left[ \begin{array}{ll} a &amp;-n \\ -a-n+1 &amp; \end{array} ; 1\right]$</p> <p>where $a,n&gt;0$ and $a,n\in \mathbb{N}$</p> <p>The problem is that $-a-n+1$ is negative in this case. So when I try to use Gauss's identity</p> <p>$_2F_1 \left[ \begin{array}{ll} a &amp; b \\ c &amp; \end{array} ; 1\right] = \dfrac{\Gamma(c-a-b)\Gamma(c)}{\Gamma(c-a)\Gamma(c-b)}$</p> <p>I give negative parameters to the $\Gamma$ function.</p> <p>What other identity can I use?</p> <p>I'm trying to find a closed form to this: $\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i}$</p> <p>Wolfram Mathematica answered this as a closed form: $\frac{2^{-2 a} \Gamma \left(\frac{1}{2} (1-2 a)\right) \binom{a+n-1}{n} \Gamma (-a-n+1)}{\sqrt{\pi } \Gamma (-2 a-n+1)}$</p> <p>But I would like to have a manual solution with proof.</p> <p>This is how I got that hypergeometric series:</p> <p>$\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i}$</p> <p>$\dfrac{t_{i+1}}{t_{i}} = \frac{\binom{a+i+1-1}{i+1} \binom{a-i+n-2}{-i+n-1}}{\binom{a+i-1}{i} \binom{a-i+n-1}{n-i}} = \frac{(a+i) (n-i)}{(i+1) (a-i+n-1)} = \frac{(a+i) (i-n)}{ (i-a-n+1)(i+1)}$</p> <p>$\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i} = _2F_1 \left[ \begin{array}{ll} a &amp;-n \\ -a-n+1 &amp; \end{array} ; 1\right]$</p> <p><strong>UPDATE</strong></p> <p>Thanks to David H, I got closer to the solution.</p> <p>$\sum _{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i} = _2F_1 \left[ \begin{array}{ll} a &amp;-n \\ -a-n+1 &amp; \end{array} ; 1\right]$</p> <p>$\lim\limits_{\epsilon \to0} \frac{\Gamma (-2 a-2 \epsilon +1) \Gamma (-a-n-\epsilon +1)}{\Gamma (-a-\epsilon +1) \Gamma (-2 a-n-2 \epsilon +1)} = \frac{4^{-a} \Gamma \left(\frac{1}{2}-a\right) \Gamma (-a-n+1)}{\sqrt{\pi } \Gamma (-2 a-n+1)}$</p> <p>As you can see this result is close to the expected $\frac{2^{-2 a} \Gamma \left(\frac{1}{2} (1-2 a)\right) \binom{a+n-1}{n} \Gamma (-a-n+1)}{\sqrt{\pi } \Gamma (-2 a-n+1)}$ formula. 
But the $\binom{a+n-1}{n}$ factor is still missing and I don't really understand why.</p>
balping
208,441
<p>Here I post the full solution</p> <p>The problem: we are looking for the closed form of this sum:</p> <p>$\sum\limits_{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i}$</p> <p>The first term of the sum for $i=0$ is $\binom{a+n-1}{n}$</p> <p>The ratio two consecutive terms:</p> <p>$\dfrac{t_{i+1}}{t_i} = \dfrac{P(i)}{Q(i)} = \dfrac{\binom{a+i}{i+1} \binom{a-i+n-2}{-i+n-1}}{\binom{a+i-1}{i} \binom{a-i+n-1}{n-i}} = \dfrac{(a+i) (n-i)}{(i+1) (a-i+n-1)} = \dfrac{(i+a)(i-n)}{(i-a-n+1)(i+1)}$</p> <p>So our hypergeometric series can be described this way:</p> <p>$\, _2F_1(a,-n;-a-n+1;1)$</p> <p>Using <a href="http://functions.wolfram.com/HypergeometricFunctions/Hypergeometric2F1/03/06/05/0016/" rel="nofollow">this</a> and <a href="http://functions.wolfram.com/HypergeometricFunctions/GegenbauerC3General/03/01/01/0002/" rel="nofollow">this</a> identities:</p> <p>$\begin{align} {_2F_1}{\left(a,-n;1-a-n;1\right)} &amp;=\frac{n!}{(a)_{n}}C_{n}^{a}{\left(1\right)}\\ &amp;=\frac{\Gamma{\left(n+1\right)}\,\Gamma{\left(a\right)}}{\Gamma{\left(a+n\right)}}\cdot\frac{\Gamma{\left(2a+n\right)}}{\Gamma{\left(2a\right)}\,\Gamma{\left(n+1\right)}}\\ &amp;=\frac{\Gamma{\left(a\right)}\,\Gamma{\left(2a+n\right)}}{\Gamma{\left(a+n\right)}\,\Gamma{\left(2a\right)}}\\ &amp;=\frac{\binom{2a+n-1}{2a-1}}{\binom{a+n-1}{a-1}}.\\ \end{align}$</p> <p>The final solution:</p> <p>$\begin{align}\sum\limits_{i=0}^n \binom{a+i-1}{i} \binom{a-i+n-1}{n-i} &amp;= \binom{a+n-1}{n} \cdot \, _2F_1(a,-n;-a-n+1;1)\\ &amp;= \binom{a+n-1}{n} \cdot \dfrac{\binom{2a+n-1}{2a-1}}{\binom{a+n-1}{a-1}}\\ &amp;=\binom{2a+n-1}{n}\end{align}$</p>
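<p>The closed form $\binom{2a+n-1}{n}$ can be spot-checked numerically for small positive integers $a$ and $n$ (a sketch using Python's <code>math.comb</code>):</p>

```python
# Check sum_i C(a+i-1, i) * C(a-i+n-1, n-i) == C(2a+n-1, n)
# for small positive integers a and n.
from math import comb

for a in range(1, 8):
    for n in range(0, 8):
        lhs = sum(comb(a + i - 1, i) * comb(a - i + n - 1, n - i)
                  for i in range(n + 1))
        assert lhs == comb(2 * a + n - 1, n), (a, n)
print("identity verified for 1 <= a <= 7, 0 <= n <= 7")
```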
1,523,287
<p>We choose a random number from 1 to 10. We ask someone to find what number it is by asking yes or no questions. Calculate the expected number of guesses if the person asks "is it $x$?" until they get it right.</p> <p>I know the answer is around 5 but I can't find how to get there.</p> <p>I tried $\frac{1}{10}(1)+\frac{1}{9}(2)+\frac{1}{8}(3)+\frac{1}{7}(4)+\frac{1}{6}(5)+\frac{1}{5}(6)+\frac{1}{4}(7)+\frac{1}{3}(8)+\frac{1}{2}(9)$</p> <p>but it doesn't work. Any help to point me in the right direction would be greatly appreciated.</p> <p>Thank you</p>
Bernard
202,857
<p>We have the obvious Bézout identity: $\;3\cdot 5-2\cdot 7 =1$. The solution of the system of congruences $$\begin{cases}n\equiv \color{cyan}1\mod5\\n\equiv \color{red}3\mod 7\end{cases}\quad \text{is}\enspace n\equiv \color{red}3\cdot3\cdot 5-\color{cyan}1\cdot2\cdot 7\equiv 31\mod 35 $$</p>
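<p>A brute-force cross-check of the result (sketch):</p>

```python
# Brute-force check: n = 31 is the unique residue mod 35 satisfying
# n = 1 (mod 5) and n = 3 (mod 7).
sols = [n for n in range(35) if n % 5 == 1 and n % 7 == 3]
assert sols == [31]
# The Bezout-based formula from the answer:
assert (3 * 3 * 5 - 1 * 2 * 7) % 35 == 31
print(sols)
```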
1,344,161
<p>Suppose $k\geq 2$ is an integer. I want to show $$\frac{1+k+k(k-2)}{1+\frac{k-1}{k}+\frac{(-1-\sqrt{k-1} )^2}{k(k-2)}}$$ is not an integer. It is equal to $$\frac{(k-2) k (k^2-k+1)}{2 (k^2-2 k+\sqrt{k-1}+1)}.$$</p> <p>If I can show this then I will be able to finish my proof of the <a href="https://en.wikipedia.org/wiki/Friendship_graph#Friendship_theorem" rel="nofollow">Friendship Theorem</a>. We may assume $k$ is even if that helps any.</p>
Jyrki Lahtonen
11,619
<p>Hint: For that number to be rational it is necessary that the square root is rational, which happens only when $k=n^2+1$ for some integer $n$. Show that then the denominator (of the latter formula) is divisible by $n$ but the numerator <strike>is not</strike> leaves remainder $-1$ when divided by $n$. Therefore the factors of $n$ cannot cancel. Check $n=1$ separately.</p>
3,436,219
<p><img src="https://i.stack.imgur.com/VplT3.jpg" alt="enter image description here"></p> <p>I could use gaussian elimination if I make some assumptions or does any one have another suggestion?</p>
Glowing0v3rlord
725,413
<p>Say the first number is <em>x</em>, the second is <em>y</em>, and the answer is <em>A</em>.</p> <p>I believe the formula is <em>A</em> = <em>x</em>-(2+<em>y</em>).</p> <p>Another that works is <em>A</em> = <em>x</em>-<em>y</em>-2</p> <p>According to this logic:</p> <p>a) 9 ● 2=5</p> <p>b) ● = -2-</p> <p>I hope that answers your question.</p>
687,352
<p>How many experiments should we conduct so that we could state that with more than $0.9$ probability the event occurs at least once. The probability that the event occurs is $0.7$. </p> <p>I have tried the following:</p> <p>Let's say the number of experiments is equal to $n$. The opposite of 'occurs at least once' is that the event occurs in <strong>all</strong> experiments and the probability of this be $1-0.9=0.1.$ </p> <p>So I need the following $(0.7)^n=0.1$</p> <p>Solving this does not give me the right answer, which is <strong>more than $2$</strong>. </p> <p>Anyone could help?</p>
NovaDenizen
109,816
<p>First, for <span class="math-container">$0 &lt; x &lt; \frac{\pi}{2}$</span>, <span class="math-container">$\sin x \ge \dfrac{2x}{\pi}$</span>. Likewise on any interval <span class="math-container">$[k\pi, (k + \frac12)\pi]$</span> (with integer <span class="math-container">$k \ge 0$</span>), <span class="math-container">$|\sin{x}| \ge \dfrac{2(x-k\pi)}{\pi}$</span>.</p> <p>On each interval <span class="math-container">$[k\pi, (k + \frac12)\pi]$</span> we also have <span class="math-container">$x \le (k+1)\pi$</span>, so <span class="math-container">$\dfrac{|\sin x|}{x} \ge \dfrac{2(x - k\pi)}{\pi x} \ge \dfrac{2(x-k\pi)}{(k+1)\pi^2} = f_k(x)$</span>.</p> <p><span class="math-container">$f_k$</span> over the <span class="math-container">$k$</span>th interval forms a triangle with base <span class="math-container">$\dfrac{\pi}2$</span> and height <span class="math-container">$\dfrac{2((k+\frac12)\pi - k\pi)}{(k+1)\pi^2} = \dfrac{1}{(k+1)\pi}$</span>, and area <span class="math-container">$\dfrac{1}{4(k+1)} = g_k$</span></p> <p>The <span class="math-container">$f_k$</span> cover only the left half of each "bump" in <span class="math-container">$\dfrac{|\sin x|}{x}$</span>. This isn't a problem because the integrand is non-negative on all of <span class="math-container">$[0,\infty)$</span>. The partial sums <span class="math-container">$\sum_{k=0}^{n} g_k$</span> form a quarter of the harmonic series, so they grow like <span class="math-container">$\frac14 \log n$</span> and diverge. Since <span class="math-container">$\dfrac{|\sin x|}{x} \ge f_k(x)$</span> over every respective interval, the integral must diverge also.</p>
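<p>A numerical illustration (sketch, using the triangle bound $f_k(x)=\frac{2(x-k\pi)}{(k+1)\pi^2}$, which holds pointwise because $x \le (k+1)\pi$ on each interval $[k\pi,(k+\frac12)\pi]$; the corresponding triangle areas $\frac{1}{4(k+1)}$ form a harmonic-type series whose partial sums grow roughly like $\frac14\log K$):</p>

```python
# Spot-check the pointwise bound |sin x|/x >= f_k(x) on the intervals
# [k*pi, (k+1/2)*pi], with f_k(x) = 2(x - k*pi) / ((k+1)*pi^2),
# and show that the triangle areas 1/(4(k+1)) sum without bound.
from math import pi, sin

for k in range(50):
    for j in range(1, 11):
        x = k * pi + j * (pi / 2) / 10      # samples in (k*pi, (k+1/2)*pi]
        f_k = 2 * (x - k * pi) / ((k + 1) * pi ** 2)
        assert abs(sin(x)) / x >= f_k - 1e-12, (k, x)

def lower_bound(K):
    return sum(1 / (4 * (k + 1)) for k in range(K))

print(lower_bound(100), lower_bound(10**6))  # ~ (1/4) log K: unbounded
```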
2,448,349
<p>Would the cubic equation that is the best approximation for $e^x$ just be the first 4 terms of its Taylor Series expansion? $$1 + x + \frac{x^2}{2!} + \frac{x^3}{3!}$$ I guess it would depend on the bounds you are inspecting though? I'm not well-versed in dealing with infinities. Is there a cubic equation that is the best approximation on $[-\infty,\infty]?$</p>
zwim
399,263
<p>It depends on what you are trying to minimize.</p> <p>The Taylor expansion is the one that optimizes the fit of the polynomial to the exponential, in the $||\cdot||_{\infty}$ sense, in a neighbourhood of $0$.</p> <p>But you can try to minimize other things, by using other norms and/or by minimizing locally or globally.</p> <p>Here is an example of a linear approximation minimizing $||\cdot||_1$ on the interval $[-1,1]$:</p> <p><a href="https://math.stackexchange.com/questions/2344512/linear-approximation-of-exponential-for-the-l1-norm-find-a-b-that-minimize/2346260#2346260">Linear approximation of exponential for the $L^1$ norm. Find $a,b$ that minimize $||ax+b-e^x||_{1}$</a></p> <p>This is $l(x)=2x\sinh(\frac 12)+\cosh(\frac 12)$.</p> <p>It is different from the Taylor expansion because we minimized a global property (the $L^1$ error over the whole interval).</p>
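<p>A numerical comparison (sketch) of the $L^1$ errors on $[-1,1]$ of the Taylor line $1+x$ and of $l(x)=2x\sinh(\frac12)+\cosh(\frac12)$, via a midpoint Riemann sum:</p>

```python
# Compare L1 errors on [-1, 1]: the Taylor line 1 + x versus the line
# l(x) = 2 sinh(1/2) x + cosh(1/2), using a midpoint Riemann sum.
from math import exp, sinh, cosh

def l1_error(f, N=100_000):
    h = 2.0 / N
    total = 0.0
    for i in range(N):
        x = -1.0 + (i + 0.5) * h
        total += abs(exp(x) - f(x)) * h
    return total

taylor = l1_error(lambda x: 1.0 + x)
best = l1_error(lambda x: 2.0 * sinh(0.5) * x + cosh(0.5))
print(taylor, best)   # the second error is strictly smaller
assert best < taylor
```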
3,047,241
<blockquote> <p>Let <span class="math-container">$X_1, X_2, \cdots, X_n$</span> be i.i.d. <span class="math-container">$\sim \text{Bernoulli}(p)$</span>. Then <span class="math-container">$\bar{x}$</span> is an unbiased estimator of <span class="math-container">$p$</span>.</p> </blockquote> <p>How should I approach for this types of problems. Some hint will also help me.</p>
Vishaal Sudarsan
414,699
<p>Use the fact that the <span class="math-container">$X_i$</span> are identically distributed, together with the linearity of expectation:</p> <p><span class="math-container">$E(\bar{X})=\frac{1}{n}\sum_{i=1}^{n}E(X_i)=E(X_1)=p$</span></p>
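<p>Unbiasedness can also be checked exactly for a small case by enumerating all outcomes with their probabilities (an illustrative sketch with $n=4$ and $p=0.3$):</p>

```python
# Exact check of unbiasedness for n = 4 Bernoulli(p) trials by
# enumerating all 2^4 outcomes together with their probabilities.
from itertools import product

p, n = 0.3, 4
expected_mean = 0.0
for outcome in product((0, 1), repeat=n):
    k = sum(outcome)
    prob = p ** k * (1 - p) ** (n - k)
    expected_mean += prob * (k / n)

print(expected_mean)  # equals p, up to float round-off
assert abs(expected_mean - p) < 1e-12
```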
83,965
<p>When students learn multivariable calculus they're typically barraged with a collection of examples of the type "given surface X with boundary curve Y, evaluate the line integral of a vector field Y by evaluating the surface integral of the curl of the vector field over the surface X" or vice versa. The trouble is that the vector fields, curves and surfaces are pretty much arbitrary except for being chosen so that one or both of the integrals are computationally tractable.</p> <p>One more interesting application of the classical Stokes theorem is that it allows one to interpret the curl of a vector field as a measure of swirling about an axis. But aside from that I'm unable to recall any other applications which are especially surprising, deep or interesting. </p> <p>I would like use Stokes theorem show my multivariable calculus students something that they enjoyable. Any suggestions?</p>
Joseph O'Rourke
6,094
<p>If you don't mind specializing Stokes theorem to Green's theorem, then one of the most practical applications is computation of the area of a region by integrating around its contour. I am old enough to have used a <a href="http://en.wikipedia.org/wiki/Planimeter" rel="nofollow noreferrer">planimeter</a>, a delightful physical embodiment of Green's theorem: <br /> &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; <img src="https://upload.wikimedia.org/wikipedia/commons/thumb/5/58/Polarplanimeter_01.JPG/250px-Polarplanimeter_01.JPG" alt="Planimeter"><br /> One can also derive an (otherwise non-obvious) formula for the area of a planar polygon via Green's theorem: $A = \frac{1}{2} \sum_{i=0}^{n-1} x_i y_{i+1} - x_{i+1} y_i$.</p> <p>Sorry&mdash;no swirling vector fields in these examples!</p>
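<p>The polygon-area formula is easy to try out (a sketch; the vertex lists below are made-up examples):</p>

```python
# Shoelace formula from Green's theorem:
# A = (1/2) * sum_i (x_i * y_{i+1} - x_{i+1} * y_i)
def polygon_area(pts):
    n = len(pts)
    s = 0.0
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return abs(s) / 2.0  # abs() in case the vertices are listed clockwise

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
triangle = [(0, 0), (4, 0), (0, 3)]
print(polygon_area(unit_square), polygon_area(triangle))  # 1.0 6.0
```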
83,965
<p>When students learn multivariable calculus they're typically barraged with a collection of examples of the type "given surface X with boundary curve Y, evaluate the line integral of a vector field Y by evaluating the surface integral of the curl of the vector field over the surface X" or vice versa. The trouble is that the vector fields, curves and surfaces are pretty much arbitrary except for being chosen so that one or both of the integrals are computationally tractable.</p> <p>One more interesting application of the classical Stokes theorem is that it allows one to interpret the curl of a vector field as a measure of swirling about an axis. But aside from that I'm unable to recall any other applications which are especially surprising, deep or interesting. </p> <p>I would like use Stokes theorem show my multivariable calculus students something that they enjoyable. Any suggestions?</p>
orbifold
19,007
<p>A nice application in fluid mechanics is <a href="http://en.wikipedia.org/wiki/Kelvin%27s_circulation_theorem" rel="nofollow">Kelvin's circulation theorem</a>. You could also discuss how it fails to hold if there are obstacles in the fluid flow. In the same spirit, Stokes' theorem is applied in the canonical formalism of classical mechanics to find the Poincaré-Cartan integral invariants.</p>
2,746,222
<p>Problem:</p> <p>A boat's speed is <strong>1.70 m/s</strong> in still water.<br /> It must cross a river with a width of <strong>260 m</strong>.<br /> The boat's starting point is the <strong>origin of the xy-axis</strong> (on the shore).<br /> It has to dock <strong>110 m</strong> to the right (in the positive x-direction) of the point opposite the starting point on the other shore (i.e. the point directly across from the starting point, plus 110 m).<br /> The boat must sail at a <strong>45°</strong> angle relative to the shore (x-axis) to arrive at that point.</p> <p>What is the speed of the water current (the water flows in the negative x-direction)?</p> <p><img src="https://i.stack.imgur.com/EVFzH.jpg" alt="Picture of the problem" /></p> <p>What I have done:</p> <p>It seems to be a pretty simple vector problem.<br /> Just subtract the vector of the boat in moving water from the vector of the boat in still water (direct route) to get the vector of the water flow.</p> <p>I did this and got a nonzero y component of the water flow, which can't be true. How can it even be zero, given that only $\sin(0°+180°\cdot n)= 0$ and the y components of the vectors aren't equal?</p> <p>Thank you for your help</p>
CiaPan
152,299
<p>Sailing on a still 'river', the boat would arrive at the opposite side at the point $260\,m$ across the river <em>and</em> $260\,m$ along the river, due to the angle of $45^\circ$.</p> <p>What is the length of this route? What time would it take the boat to sail it?</p> <p>You know the boat actually ends its travel $110\,m$ along the river. That means the water shifted the boat by $260-110=150$ meters during the journey (or $260+110=370$ meters, depending on the direction of the $45^\circ$ angle). Divide it by the travel time and you'll get the river speed.</p>
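<p>Plugging in the numbers from the problem (a sketch following the reasoning above, taking the drift to be $260-110=150$ m):</p>

```python
# Numbers from the problem: boat speed 1.70 m/s, river width 260 m,
# heading 45 degrees, landing 110 m downstream of the opposite point.
from math import sqrt

width = 260.0          # m
boat_speed = 1.70      # m/s relative to the water
still_route = width * sqrt(2)   # 45 degrees: 260 m across, 260 m along
travel_time = still_route / boat_speed
drift = 260.0 - 110.0           # the water pushed the boat back 150 m
current = drift / travel_time
print(round(travel_time, 1), round(current, 3))  # ~216.3 s, ~0.694 m/s
```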
2,704,394
<p>Here is the formal statement:</p> <blockquote> <p>Let $\lambda_1, \lambda_2, \lambda_3$ be distinct eigenvalues of $n\times n$ matrix $A$. Let $S=\{v_1, v_2, v_3\}$, where $Av_i = \lambda_i v_i$ for $1\leq i\leq 3$. Prove $S$ is linearly independent. </p> </blockquote> <p>Many resources online state the general proof or the proof for two eigenvectors. What is the proof for specifically 3? I tried to derive the 3 eigenvector proof from the 2 eigenvector proofs, but failed. </p>
copper.hat
27,978
<p>Suppose $v=\sum_k \alpha_k v_k = 0$.</p> <p>$(A-\lambda_1 I)v = \sum_{k&gt;1} \alpha_k (\lambda_k-\lambda_1)v_k = 0$.</p> <p>$(A-\lambda_2 I)(A-\lambda_1 I)v = \sum_{k&gt;2} \alpha_k (\lambda_k-\lambda_2) (\lambda_k-\lambda_1)v_k = 0$. $$\vdots$$ $\prod_{i=1}^{n-1} (A-\lambda_i I) v = \alpha_n \prod_{i=1}^{n-1}(\lambda_n-\lambda_i) v_n = 0$.</p> <p>Hence $\alpha_n = 0$. Then the previous equation gives $\alpha_{n-1} = 0$, etc, etc.</p>
3,999,699
<p>I want to show that <span class="math-container">$a_{n} = \sqrt{n}$</span> is not a bounded sequence.</p> <p>Definition: We say that a sequence is bounded if it is bounded above and below. A sequence <span class="math-container">$a_{n}$</span> is bounded above if there exists <span class="math-container">$C$</span> such that, for all <span class="math-container">$n$</span>, <span class="math-container">$a_{n} \leq C$</span>. A sequence <span class="math-container">$a_{n}$</span> is bounded below if there exists <span class="math-container">$C$</span> such that, for all <span class="math-container">$n$</span>, <span class="math-container">$a_{n} \geq C$</span>.</p> <p>Well clearly, our sequence is bounded below by <span class="math-container">$0$</span>. So we need to show it is not bounded above.</p> <p>Proof:</p> <p>We need to show that for every <span class="math-container">$C$</span>, there exists an <span class="math-container">$n$</span> such that <span class="math-container">$a_{n} &gt; C$</span>. Observe that for every <span class="math-container">$C \geq 0$</span>, <span class="math-container">$\sqrt{C^{2}} = C$</span>. Choose <span class="math-container">$n &gt; C^{2}$</span>; then <span class="math-container">$\sqrt{n} &gt; \sqrt{C^{2}} = C$</span>.</p>
Joe
623,665
<p>Ted Shifrin has already explained the problem with your derivation, but if you're looking for a 'safer' way of finding the derivative then consider this: if <span class="math-container">$\DeclareMathOperator{\arcsec}{arcsec} y = \arcsec x$</span>, then using the identity <span class="math-container">$\arcsec x = \arccos 1/x$</span>, we see that <span class="math-container">\begin{align} \frac{dy}{dx}=\frac{d}{dx}\left(\arccos 1/x\right) &amp;= -\frac{1}{\sqrt{1-\left(\frac{1}{x}\right)^2}} \cdot -\frac{1}{x^2} \\[4pt] &amp;= \frac{1}{x^2\sqrt{1-\frac{1}{x^2}}} \, . \end{align}</span></p>
1,171,492
<p>I was trying to obtain the square root of $-5-12i$ by the formula for square root (given below) and also by De Moivre's theorem and verify that both give the same result. But the two results are somehow not matching for this complex number. I am writing my solution below in two cases for each method:</p> <p>Case - I:</p> <p>As given on pg - 3 of <em>Complex Analysis</em> - Newman and Bak, the equation $(x+iy)^2 = a+ib$ has the solution: $x=\pm\sqrt{\frac{a+\sqrt{a^2+b^2}}{2}}$ and $y=\pm\sqrt{\frac{-a+\sqrt{a^2+b^2}}{2}}.($sign $b)$</p> <p>Putting $a=-5, b=-12$ &amp; $($sign $b) = -ve$ in the above formula for $x$ &amp; $y$, we get that the square roots of $-5-12i$ are $2-3i$ &amp; $-2+3i$</p> <p>Case - II:</p> <p>By De Moivre's theorem, we know that given $z = r(cos \theta + i sin \theta)$; its $n$th root $z_k$ is given by $z_k = r^{1/n}(cos (\frac{\theta + 2k \pi}{n}) + i sin (\frac{\theta + 2k \pi}{n}))$, where $k=0,1,...,n-1$</p> <p>Here, $z=-5-12 i = r(cos \theta + i sin \theta)$. Thus, $r=13$ and $\theta = atan(\frac{-12}{-5}) = 1.176005207$ (in radian)</p> <p>Hence, $z_k = \sqrt{13}(cos (\frac{\theta + 2k \pi}{2}) + i sin (\frac{\theta + 2k \pi}{2}))$, where $k=0,1$</p> <p>For $k=0$, $z_0 = \sqrt{13} (cos (\frac{\theta}{2}) + i sin (\frac{\theta}{2})) = \sqrt{13} (0.8320502943 + i 0.5547001962) = 3 + 2i$</p> <p>For $k=1$, $z_1=\sqrt{13} (cos (\frac{\theta + 2 \pi}{2}) + i sin (\frac{\theta + 2 \pi}{2})) = \sqrt{13} (cos (\pi + \frac{\theta}{2}) + i sin (\pi + \frac{\theta}{2})) = - \sqrt{13} (cos (\frac{\theta}{2}) + i sin (\frac{\theta}{2})) = - \sqrt{13} (0.8320502943 + i 0.5547001962) = -3 - 2i$</p> <p>Here, the square roots of $-5-12i$ are $3+2i$ and $-3-2i$</p> <p>I think there must be some error in the solution because the square roots are coming out to be different. Thanks...</p>
Toby Mak
285,313
<p>There is a way without using $\arctan$.</p> <p>Let the square root be $a+bi$. Then, $a^2+2abi-b^2 = -5-12i$, so $a^2-b^2 = -5 \text{(eq. 1)}$ and $2abi = -12i \ \text{(eq. 2)}; ab = -6 \ \text{(eq. 3)}$.</p> <p>From equation $3$, $a = -\frac{6}{b}$. Substituting into equation $1$, $(-\frac{6}{b})^2-b^2+5=0$, and so:</p> <p>$$\frac{36}{b^2} - b^2 + 5 = 0$$ $$\Rightarrow b^4 - 5b^2 - 36 = 0$$ $$\Rightarrow (b^2-9)(b^2+4) = 0$$ $$b^2 = 9; b^2 = -4$$ $$b = ±3$$</p> <p>Substituting $b = 3$ into equation $3$, we get $a = -2$; and substituting $b = -3$ gives $a = 2$. Therefore, the two square roots are $-2 + 3i$ and $2 - 3i$.</p>
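<p>A quick numerical cross-check of which candidates actually square to $-5-12i$ (sketch using Python's <code>cmath</code>):</p>

```python
# Check the candidate square roots of -5 - 12i directly.
import cmath

z = -5 - 12j
r = cmath.sqrt(z)
print(r)  # the principal square root, the one with non-negative real part
assert abs(r - (2 - 3j)) < 1e-12
assert abs((2 - 3j) ** 2 - z) < 1e-12
assert abs((-2 + 3j) ** 2 - z) < 1e-12
# (2 + 3j) is NOT a square root of z: it squares to -5 + 12j instead.
assert abs((2 + 3j) ** 2 - (-5 + 12j)) < 1e-12
```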
1,317,143
<blockquote> <p><em>Notation</em>: $\log:=\log_{10}$</p> </blockquote> <p>$\log x+\log_x 10$</p> <p>$=\log x+ \frac{1}{\log x}$ </p> <p>$=\log(x \cdot \frac{1}{x})$ </p> <p>$=\log 1$ </p> <p>$=0$ </p> <p>Is the process correct? I doubt this is wrong. Please help. Thanks.</p>
user26486
107,671
<p>I assume $\log_{10} x&gt;0$ (iff $x&gt;1$), because without constraints arbitrary small values can be gained.</p> <p>$$\log_{10}x+\log_x 10=\log_{10}x+\frac{1}{\log_{10} x}\ge 2$$ </p> <p>with equality iff $x=10$, because $a+\frac{1}{a}\ge 2$ for $a&gt;0$ with equality iff $a=1$. </p> <p>To prove it, $$a+\frac{1}{a}\ge 2\iff \left(\sqrt{a}-\sqrt{\frac{1}{a}}\right)^2\ge 0$$ with equality iff $\sqrt{a}=\sqrt{\frac{1}{a}}\iff a=1$.</p>
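<p>A quick numerical illustration of the inequality (sketch; samples a few values of $x&gt;1$ and checks that the minimum $2$ is attained at $x=10$):</p>

```python
# Sample log10(x) + log_x(10) for several x > 1; the minimum value 2
# is attained exactly at x = 10 (where both terms equal 1).
from math import log10, log

for x in (1.5, 2, 5, 9, 10, 11, 100, 1000):
    val = log10(x) + log(10, x)   # log(10, x) is log base x of 10
    assert val >= 2 - 1e-12, x

assert abs(log10(10) + log(10, 10) - 2) < 1e-12
print("minimum 2 attained at x = 10")
```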
1,201,002
<p>I'm trying to find a vector $\vec{c}$ which is orthogonal to the vectors $\vec{a}$ and $\vec{b}$.</p> <p>As far as I understand, I have to show that:</p> <p>$$\langle a,c\rangle=0 $$ $$\langle b,c\rangle=0 $$ </p> <p>So if I would like to determine an orthogonal vector regarding: \begin{bmatrix}-1\\1\end{bmatrix} I just intuitively used: $$\langle v,w\rangle=1 \cdot(-1)+1\cdot 1=0 $$ in order to arrive at \begin{bmatrix}1\\1\end{bmatrix} My problem is that I just don't know a mechanical way to solve for an orthogonal vector. It was more an educated guess.</p> <p>For example, given: $\vec{a} = \begin{bmatrix}-1\\1\\1\end{bmatrix}$ and $\vec{b} = \begin{bmatrix}\sqrt{2}\\1\\-1\end{bmatrix}$ how do I find an orthogonal vector?</p> <p>Thank you in advance.</p>
Nick Lim
500,921
<p>I discovered a "quick" <span class="math-container">$(O(d^2))$</span> algorithm to generate <span class="math-container">$d-1$</span> mutually orthogonal vectors that are perpendicular to <span class="math-container">$\vec{x}$</span>, where <span class="math-container">$d$</span> is the size of <span class="math-container">$\vec{x}$</span>, while working on a semi-related problem (I needed to generate <span class="math-container">$d-2$</span> mutually orthogonal vectors that are perpendicular to both <span class="math-container">$\vec{u}$</span> and <span class="math-container">$\vec{v}$</span>, where <span class="math-container">$\vec{u}$</span> and <span class="math-container">$\vec{v}$</span> are perpendicular; unfortunately the concept of a vector cross product does not exist when d is not 3). The alternative would be to apply Gram-Schmidt, which would take <span class="math-container">$O(d^3)$</span>. </p> <p>I made use of something called a Householder Transform (<span class="math-container">$H= I - 2nn^T$</span> (with <span class="math-container">$||n||=1$</span>)) which conceptually works like a reflection on a d-dimensional hyperplane through the origin, with <span class="math-container">$n$</span> the vector normal to the hyperplane. 
The general idea is to reflect a set of d orthogonal basis vectors (an easy basis to use would be the usual basis, aka the d-size identity matrix <span class="math-container">$I_d$</span>), such that one of the column vectors coincides with x.</p> <p>Step 1: normalize x (take x and divide it by ||x||)</p> <p>Step 2: let <span class="math-container">$n_1= \sqrt{\frac{1-x_1}{2}}$</span>, and <span class="math-container">$n_j= \frac{-x_j}{\sqrt{2(1-x_1)}} $</span> with <span class="math-container">$j \in [2..d]$</span></p> <p>Step 3: calculate <span class="math-container">$H = I - 2nn^T$</span>.</p> <p>Step 4: Columns 2 to d are orthogonal to <span class="math-container">$x$</span> </p> <p>Proof: Since the column vectors of <span class="math-container">$I_d$</span> are mutually orthogonal, it follows that the column vectors of the reflection of <span class="math-container">$I_d$</span> would also be mutually orthogonal. Hence all we need to do is to design <span class="math-container">$H$</span> such that we have the following property <span class="math-container">$HI = H = [x \text{ | } V_{2..d} ] $</span>, giving us <span class="math-container">$V_2, V_3 ... V_d$</span> mutually orthogonal to <span class="math-container">$x$</span>.</p> <p>To construct <span class="math-container">$H$</span>, we have <span class="math-container">$x_i = I_{i,1} - 2n_in_1$</span>; simplifying,</p> <p><span class="math-container">$x_1 = 1 - 2n_1^2$</span>,</p> <p>Solving for <span class="math-container">$n_1$</span>, we get <span class="math-container">$n_1= \sqrt{\frac{1-x_1}{2}}$</span>.</p> <p>Similarly,<br> <span class="math-container">$x_j = -2n_1n_j$</span> for <span class="math-container">$j \in [2.. d]$</span>,</p> <p>Solving for <span class="math-container">$n_j$</span> we get <span class="math-container">$n_j= \frac{-x_j}{\sqrt{2(1-x_1)}}$</span></p> <p>Finally, proof that <span class="math-container">$V_2 ... V_d$</span> are mutually orthogonal to <span class="math-container">$x$</span></p> <p><span class="math-container">$ V_{i,j} = I_{i,j} -2n_jn_i $</span></p> <p>when <span class="math-container">$i=1$</span>, <span class="math-container">$ V_{1,j} = -2n_jn_1 = -2 \frac{-x_j}{2n_1}n_1 = x_j$</span></p> <p>when <span class="math-container">$i=j$</span>, <span class="math-container">$ V_{j,j} = 1-2n_jn_j = 1-2 \frac{x_j^2}{2(1-x_1)} = \frac{1 -x_1 - x_j^2}{1-x_1}$</span></p> <p>when <span class="math-container">$i \neq j, i \neq 1$</span>, <span class="math-container">$ V_{i,j} = -2n_in_j = -2 \frac{x_jx_i}{2(1-x_1)} = \frac{-x_ix_j}{1-x_1}$</span></p> <p>To show <span class="math-container">$V_j$</span> is orthogonal to <span class="math-container">$x$</span>, </p> <p><span class="math-container">$x^TV_j = \sum_{i=1}^d V_{i,j}x_i$</span></p> <p><span class="math-container">$= x_jx_1 + \dfrac{x_j -x_jx_1- x_j^3}{1-x_1}+\sum_{i\neq j, i\geq 2}^d \dfrac{-x_jx_i^2}{1-x_1}$</span> </p> <p><span class="math-container">$=\dfrac{x_jx_1-x_jx_1^2}{1-x_1} + \dfrac{x_j -x_jx_1- x_j^3}{1-x_1}-\sum_{i\neq j, i\geq 2}^d \dfrac{x_jx_i^2}{1-x_1} $</span></p> <p><span class="math-container">$= \dfrac{x_j}{1-x_1}(1-\sum_{i=1}^dx_i^2)=0$</span></p> <p>Since <span class="math-container">$\|x\| = 1$</span> through step one, implying <span class="math-container">$\sum_{i=1}^d x_i^2 = 1 $</span> </p> <p>(or we can just exploit the fact that by construction, <span class="math-container">$HH^T=H^TH=I$</span> implying the column vectors are mutually orthogonal to each other... )</p>
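A small pure-Python sketch of the construction in this answer (an illustration under the assumption that, after normalization, $x_1 \neq 1$, so the divisions are defined):

```python
import math

def householder_basis(x):
    """Return d-1 mutually orthogonal vectors perpendicular to x,
    as the last d-1 columns of H = I - 2 n n^T. Assumes the
    normalized x has x[0] != 1."""
    d = len(x)
    norm = math.sqrt(sum(c * c for c in x))
    x = [c / norm for c in x]                       # step 1: normalize
    n = [0.0] * d
    n[0] = math.sqrt((1 - x[0]) / 2)                # step 2
    for j in range(1, d):
        n[j] = -x[j] / math.sqrt(2 * (1 - x[0]))
    # step 3: H = I - 2 n n^T; its first column is x itself
    H = [[(1.0 if i == j else 0.0) - 2 * n[i] * n[j] for j in range(d)]
         for i in range(d)]
    # step 4: columns 2..d are the desired vectors
    return [[H[i][j] for i in range(d)] for j in range(1, d)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [-1.0, 1.0, 1.0, 2.0]
basis = householder_basis(x)
assert all(abs(dot(x, v)) < 1e-12 for v in basis)       # perpendicular to x
assert all(abs(dot(basis[i], basis[j])) < 1e-12          # and to each other
           for i in range(len(basis)) for j in range(i + 1, len(basis)))
```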
2,634,701
<p>Let $f: \mathbb{R}^n \rightarrow \mathbb{R}$ be continuous and let $a$ and $b$ be points in $\mathbb{R}^n$. Let the function $g: \mathbb{R} \rightarrow \mathbb{R}$ be defined as: $$ g(t) = f(ta+(1-t)b) $$ Show that $g$ is continuous.</p> <p>If I define a function $h(t)=ta+(1-t)b$, then I have that $g(t)=f(h(t))$. I know that $f$ is continuous, so it remains to prove that $h(t)$ is continuous, since the composition of two continuous functions is continuous. </p> <p>How do I prove that $h: \mathbb{R} \to \mathbb{R}^n$ is continuous? </p>
José Carlos Santos
446,262
<p>If $t_1,t_2\in\mathbb R$, then\begin{align}\bigl\|h(t_2)-h(t_1)\bigr\|&amp;=\bigl\|t_2a+(1-t_2)b-t_1a-(1-t_1)b\bigr\|\\&amp;=\bigl\|(t_2-t_1)a-(t_2-t_1)b\bigr\|\\&amp;=|t_2-t_1|\cdot\|a-b\|.\end{align}If $a=b$, $h$ is a constant function and therefore it is continuous. Otherwise, if $\varepsilon&gt;0$ then take $\delta=\frac{\varepsilon}{\|a-b\|}$. Then$$|t_2-t_1|&lt;\delta\implies\bigl\|h(t_2)-h(t_1)\bigr\|&lt;\varepsilon.$$</p>
1,727,339
<p>What am I doing wrong?</p> <p>I've been learning how to put matrices into Jordan canonical form and it was going fine until I encountered this $4 \times 4$ matrix:</p> <p>$A=\begin{bmatrix} 2 &amp; 2 &amp; 0 &amp; -1 \\ 0 &amp; 0 &amp; 0 &amp; 1 \\ 1 &amp; 5 &amp; 2 &amp; -1 \\ 0 &amp; -4 &amp; 0 &amp; 4 \\ \end{bmatrix} $</p> <p>Which has as only eigenvalue $\lambda_1=\lambda_2=\lambda_3=\lambda_4=2$ with 2 corresponding eigenvectors, which I will for now call $v_1$ and $v_2$:</p> <p>$v_1 = \pmatrix{0\\0\\1\\0}, v_2=\pmatrix{-3 \\ 1 \\ 0 \\ 2} $ </p> <p>2 eigenvectors means 2 Jordan blocks so I have 2 possibilities:</p> <p>$J= \pmatrix{2 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 1 &amp; 0 \\ 0 &amp; 0 &amp; 2 &amp; 0 \\ 0 &amp; 0 &amp; 0 &amp; 2} $ or $ J= \pmatrix{2 &amp; 1 &amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 &amp; 0 \\ 0 &amp; 0 &amp; 2 &amp; 1 \\ 0 &amp; 0 &amp; 0 &amp; 2} $</p> <p>I consider the first possibility. This gives me the relations:</p> <p>$Ax_1=2x_1 \\ Ax_2=x_1+2x_2 \\ Ax_3=2x_3+x_2 \\ Ax_4=2x_4 \\ $</p> <p>where $x_1$ and $x_4$ should be $v_1$ and $v_2$. From the second relation $(A-2I)x_2=x_1$ I see</p> <p>$\pmatrix{0 &amp; 2 &amp; 0 &amp; -1 \\ 0 &amp; -2 &amp; 0 &amp; 1 \\ 1 &amp; 5 &amp; 0 &amp; -1 \\ 0 &amp; -4 &amp; 0 &amp; 2} \pmatrix{a \\ b \\ c \\ d} =\pmatrix{0 \\ 0 \\ 1 \\ 0} $ </p> <p>( $v_2= \pmatrix{ -3 \\ 1 \\ 0 \\ 2} $ will give an inconsistent system)</p> <p>Now I get that $x_2 = \pmatrix{-2 \\ 1 \\ 0 \\ 2} $</p> <p>From the third relation $(A-2I)x_3=x_2$:</p> <p>$\pmatrix{0 &amp; 2 &amp; 0 &amp; -1 \\ 0 &amp; -2 &amp; 0 &amp; 1 \\ 1 &amp; 5 &amp; 0 &amp; -1 \\ 0 &amp; -4 &amp; 0 &amp; 2} \pmatrix{e \\ f \\ g \\ h} =\pmatrix{-2 \\ 1 \\ 0 \\ 2} $ </p> <p>But this system is inconsistent as well! No matter which vectors I try in which places, when I try to generalize eigenvectors I seem to always end up with some inconsistency.</p> <p>Is there something staring me in the face that I am overlooking? 
Or am I doing it completely wrong (even though this method worked fine for me before)?</p> <p>Sorry for the lengthiness and thank you in advance.</p>
Isaac mathsgod
675,881
<ol> <li><p>Look at the image space (column-wise) of <span class="math-container">$S = ( A - 2I )$</span>, which yields two vectors <span class="math-container">$S(v), v = (0,0,2,0) , (-1,1,-1,2)$</span> [SPECIFICALLY IN THIS ORDER]</p></li> <li><p>Now find <span class="math-container">$u$</span> such that <span class="math-container">$S(u) = v$</span> say <span class="math-container">$u = (0,0,0,1)$</span></p></li> <li><p>Now add any vector <span class="math-container">$w$</span> such that <span class="math-container">$w$</span> is in <span class="math-container">$Ker(S)$</span> and altogether completes a basis for <span class="math-container">$\Bbb R^4$</span>; clearly <span class="math-container">$w = (-3,1,0,2)$</span></p></li> </ol> <p>You can check that <span class="math-container">$\mathcal B = \{ (0,0,2,0), (-1,1,-1,2), (0,0,0,1), (-3,1,0,2)\}$</span>does the job for JCF transform.</p> <p>P.s not experienced in this typefont, but the maths works!:)</p>
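The chain can be verified without computing any inverse: taking the columns of $P$ to be the chain above followed by the kernel vector, the single identity $AP = PJ$ (equivalent to $P^{-1}AP = J$, since $P$ is invertible) checks everything at once. A small pure-Python sketch:

```python
def matmul(A, B):
    # plain triple-loop matrix product, exact on integer entries
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2,  2, 0, -1],
     [0,  0, 0,  1],
     [1,  5, 2, -1],
     [0, -4, 0,  4]]

# columns of P: the Jordan chain (0,0,2,0) -> (-1,1,-1,2) -> (0,0,0,1),
# then the extra kernel vector (-3,1,0,2)
P = [[0, -1, 0, -3],
     [0,  1, 0,  1],
     [2, -1, 0,  0],
     [0,  2, 1,  2]]

J = [[2, 1, 0, 0],
     [0, 2, 1, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 2]]

# A P = P J, i.e. P^{-1} A P = J
assert matmul(A, P) == matmul(P, J)
```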
148,807
<p>I'm not sure if these types of questions are accepted here or not (I'm very sorry if it's not), but it would be great if anyone could explain me this.</p> <blockquote> <p><strong>Question:</strong> Using his bike, Daniel can complete a paper route in 20 minutes. Francisco, who walks the route, can complete it in 30 minutes. How long will it take the two boys to complete the route if they work together, one starting at each end of the route?</p> </blockquote> <p>I have the answer: 12 minutes</p> <p>But I don't understand the solution given in the book.</p> <p>Can any of you explain how to solve this? Your help is highly appreciated.</p>
André Nicolas
6,312
<p>Suppose that $N$ newspapers have to be delivered. Since Daniel can do the job in $20$ minutes, he distributes $\frac{N}{20}$ newspapers per minute.</p> <p>Similarly, Francisco delivers $\frac{N}{30}$ newspapers per minute.</p> <p>So if they work together as described, they deliver a total of $\frac{N}{20}+\frac{N}{30}$ newspapers per minute. In other words, their combined delivery rate is $\frac{N}{20}+\frac{N}{30}$ newspapers per minute.</p> <p>Thus the total time that they take is the total number of newspapers to be delivered, divided by their combined rate. This is $$\frac{N}{\frac{N}{20}+\frac{N}{30}}.\tag{$1$}$$ Now we need to do some algebra. The denominator in the above expression is $\frac{N}{20}+\frac{N}{30}$. Bring this expression to the common denominator $60$. We have $\frac{N}{20}+\frac{N}{30}=\frac{3N}{60}+\frac{2N}{60}=\frac{5N}{60}=\frac{N}{12}$. So the expression $(1)$ simplifies to $$\frac{N}{\frac{N}{12}},$$ which simplifies to $12$.</p> <p><strong>Remark:</strong> The above calculation has an abstract character. To make it very concrete, decide arbitrarily on the number of newspapers to be delivered. It is convenient to assume there are $60$ papers, because $60$ is divisible by both $20$ and $30$.</p> <p>If there are $60$ papers to be delivered, then Daniel delivers $60/20$ newspapers per minute, and Francisco delivers $60/30$ newspapers per minute. So their combined delivery rate is $5$ papers per minute. Since there are $60$ papers, it takes $60/5=12$ minutes for the two people to deliver them all.</p> <p>The first calculation that we made uses the general "$N$" instead of the specific (and possibly wrong) $60$. Apart from that, it is exactly the same as our concrete calculation with $N=60$. It is very useful to go through the calculation with concrete numbers, to see what's <em>really</em> going on. </p>
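The same computation, done exactly with Python's `fractions` module (a sketch of the rate argument above):

```python
from fractions import Fraction

# combined rate: N/20 + N/30 papers per minute, for any route size N
for n in (1, 60, 437):
    N = Fraction(n)
    combined_rate = N / 20 + N / 30
    time_taken = N / combined_rate
    assert time_taken == 12        # independent of the route size
```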
31,767
<p>I am looking for "low-complexity" indexing methods to enumerate binary sequences of a given length and a given weight. </p> <p>Formally, let $T_k^n = \{x_1^n \in \{0,1\}^n: \sum_{i=1}^n x_i = k\}$. How to construct a bijective mapping $f: T_k^n \to \{1, 2, \ldots, \binom{n}{k}\}$ such that computing each $f(x_1^n)$ needs small number of operations?</p> <p>For example, one could do <em>lexicographical ordering</em>, that is, e.g., $0110 &lt; 1010$. Then this gives the following scheme:</p> <p>$f(x_1^n) = \sum_{k=1}^n x_k \binom{n-k}{w_k}$</p> <p>where $w_k=\sum_{i=k}^n x_i$. Computing $n$ binomial coefficients can be quite demanding. Any other ideas? Or is it impossible to avoid?</p>
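For what it's worth, the formula can be checked by brute force for small parameters (with the usual convention $\binom{m}{j}=0$ for $j&gt;m$; note that as written it ranks onto $\{0,\dots,\binom{n}{k}-1\}$ rather than $\{1,\dots,\binom{n}{k}\}$):

```python
from itertools import combinations
from math import comb

def lex_rank(x):
    # f(x) = sum_k x_k * C(n-k, w_k), with w_k the weight of the suffix x_k..x_n
    n = len(x)
    w = 0
    rank = 0
    for k in range(n, 0, -1):        # accumulate suffix weights on the fly
        w += x[k - 1]
        if x[k - 1]:
            rank += comb(n - k, w)   # math.comb returns 0 when w > n - k
    return rank

n, k = 7, 3
seqs = [[1 if i in ones else 0 for i in range(n)]
        for ones in combinations(range(n), k)]
ranks = sorted(lex_rank(s) for s in seqs)
assert ranks == list(range(comb(n, k)))   # a bijection onto {0, ..., C(n,k)-1}
```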
Anweshi
2,938
<p>Harald says:</p> <blockquote> <p>My basic question is how to go about this.</p> </blockquote> <p>Answer:</p> <ol> <li><p>If you want to have an intensive discussion with someone over this, through the internet: For communication that may happen burst-by-burst, leading to something definite later, google wave is an idea. </p></li> <li><p>For a collaborative effort allowing anyone to contribute: Since you might perhaps want anyone to be able to contribute, starting a wiki is a good option. There are many sites allowing you to create wikis for free. Adding latex support also will be quite easy. If you want a ready-made place listing tricks, methods and their usage, maybe you can create a few pages at Tim Gowers' site "tricki" for various excerpts from Vinogradov's book. </p></li> </ol>
1,117,592
<blockquote> <p>Let <span class="math-container">$k$</span> be a finite field and <span class="math-container">$V$</span> a finite-dimensional vector space over <span class="math-container">$k$</span>.</p> <p>Let <span class="math-container">$d$</span> be the dimension of <span class="math-container">$V$</span> and <span class="math-container">$q$</span> the cardinal of <span class="math-container">$k$</span>.</p> <p>Construct <span class="math-container">$q+1$</span> hyperplanes <span class="math-container">$V_1,\ldots,V_{q+1}$</span> such that <span class="math-container">$V=\bigcup_{i=1}^{q+1}V_i$</span></p> </blockquote> <p>I tried induction on <span class="math-container">$d$</span>, to no avail.</p> <p>There has been discussion about this topic (<a href="http://alpha.math.uga.edu/%7Epete/coveringnumbersv2.pdf" rel="nofollow noreferrer">here</a>), but it doesn't answer my question.</p>
quid
85,306
<p>Note that it suffices to do this for a vector-space of dimension $2$. You can then simply "blow up" each line to a hyperplane in a larger space. </p> <p>For dimension two, check that $q+1$ is in fact equal to the number of hyperplanes, that is lines in this case. </p>
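A brute-force check of the two-dimensional case (a sketch, taking $q = 5$, a prime, so that the integers mod $q$ form a field): the $q+1$ lines through the origin already cover $\mathbb{F}_q^2$.

```python
q = 5  # a prime, so arithmetic mod q is a field

# the q+1 one-dimensional subspaces of F_q^2:
# spans of (1, c) for c in F_q, plus the span of (0, 1)
directions = [(1, c) for c in range(q)] + [(0, 1)]
assert len(directions) == q + 1

covered = set()
for (a, b) in directions:
    for t in range(q):
        covered.add((t * a % q, t * b % q))

# the union of the q+1 lines is the whole plane F_q^2
assert covered == {(x, y) for x in range(q) for y in range(q)}
```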
3,397,548
<p>For a sequence <span class="math-container">$\{x_n\}_{n=1}^{\infty}$</span>, define <span class="math-container">$$\Delta x_n:=x_{n+1}-x_n,~\Delta^2 x_n:=\Delta x_{n+1}-\Delta x_n,~(n=1,2,\ldots)$$</span> which are called the <strong>first-order</strong> and <strong>second-order differences</strong>, respectively. </p> <p>The problem is stated as follows:</p> <blockquote> <p>Let <span class="math-container">$\{x_n\}_{n=1}^{\infty}$</span> be <strong>bounded</strong>, and satisfy <span class="math-container">$\lim\limits_{n \to \infty}\Delta^2 x_n=0$</span>. Prove or disprove <span class="math-container">$\lim\limits_{n \to \infty}\Delta x_n=0.$</span></p> </blockquote> <p>By intuition, the conclusion is likely to be true. Since <span class="math-container">$\lim\limits_{n \to \infty}\Delta^2 x_n=0,$</span> the consecutive differences <span class="math-container">$\Delta x_n$</span> are nearly equal for large <span class="math-container">$n$</span>. Thus, <span class="math-container">$\{x_n\}$</span> looks like an <strong>arithmetic sequence</strong>. If <span class="math-container">$\lim\limits_{n \to \infty}\Delta x_n \neq 0$</span>, then <span class="math-container">$\{x_n\}$</span> cannot be bounded.</p> <p>But how to prove it rigorously?</p>
hskimse
691,294
<p>I wonder if this proof is correct.</p> <p>Assume <span class="math-container">$\Delta x_n$</span> does not converge to <span class="math-container">$0$</span>; then there is some <span class="math-container">$c&gt;0$</span> such that infinitely many <span class="math-container">$n$</span> satisfy <span class="math-container">$\Delta x_n&gt;c$</span> (or <span class="math-container">$\Delta x_n&lt;-c$</span>). As <span class="math-container">$\Delta^2x_n \to 0$</span>, for every <span class="math-container">$\epsilon&gt;0$</span> there is <span class="math-container">$N&gt;0$</span> such that <span class="math-container">$|\Delta x_{n+1}-\Delta x_n|&lt;\epsilon$</span> for all <span class="math-container">$n&gt;N$</span>. Since there are infinitely many <span class="math-container">$n$</span> that satisfy <span class="math-container">$\Delta x_n&gt;c$</span>, for such an <span class="math-container">$n$</span> with <span class="math-container">$n&gt;N$</span> we can write <span class="math-container">$\Delta x_{n+1}&gt;c-\epsilon$</span>. And for sufficiently large <span class="math-container">$m$</span>, we can say that <span class="math-container">$\sup_{m}x_m-\inf_{m}x_m\geq(c-\epsilon)M$</span>, where <span class="math-container">$M$</span> is the number of <span class="math-container">$m$</span>'s which match the two conditions; and this gives a contradiction, since <span class="math-container">$M$</span> can be made arbitrarily large while <span class="math-container">$\{x_n\}$</span> stays bounded.</p>
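A numerical illustration of the statement itself (not a proof): the bounded sequence $x_n=\sin\sqrt{n}$ has second differences tending to $0$, and its first differences visibly tend to $0$ as well.

```python
import math

x = [math.sin(math.sqrt(n)) for n in range(1, 200001)]
d1 = [x[i + 1] - x[i] for i in range(len(x) - 1)]      # first differences
d2 = [d1[i + 1] - d1[i] for i in range(len(d1) - 1)]   # second differences

assert max(abs(v) for v in x) <= 1.0       # the sequence is bounded
tail1 = max(abs(v) for v in d1[-1000:])
tail2 = max(abs(v) for v in d2[-1000:])
assert tail2 < 1e-5                         # second differences are tiny...
assert tail1 < 1e-2                         # ...and so are first differences
```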
3,033,943
<blockquote> <p><span class="math-container">$\textbf{Problem}$</span> Let <span class="math-container">$\Omega$</span> be an open, bounded and connected subset of <span class="math-container">$\mathbb{R}^n$</span>. Suppose that <span class="math-container">$\partial \Omega$</span> is <span class="math-container">$C^{\infty}$</span>. Consider an eigenvalue problem <span class="math-container">\begin{align*} \begin{cases} -\Delta u=\lambda u &amp; \textrm{ in } \; \Omega \\ \frac{\partial u}{\partial \nu}=-u &amp; \textrm{ on } \partial \Omega \end{cases} \end{align*}</span> Define a bilinear form <span class="math-container">$(\cdot,\cdot)_{H^1}$</span> by <span class="math-container">\begin{align*} (u,v)_{H^1}:=\int_{\Omega} \nabla u \cdot \nabla v \;dx + \int_{\partial \Omega} uv \; d\sigma \end{align*}</span> Show that there exists a constant <span class="math-container">$\theta&gt;0$</span> independent of <span class="math-container">$u,v$</span> such that <span class="math-container">\begin{align*} (u,u)_{H^1} \geq \theta \Vert u \Vert _{H^1(\Omega)}^2 \end{align*}</span></p> </blockquote> <p><span class="math-container">$\textbf{Attempt}$</span> </p> <p><span class="math-container">\begin{align*} (u,u)_{H^1}&amp;=\int_{\Omega} \nabla u \cdot \nabla u \;dx + \int_{\partial \Omega} u^2 \; d\sigma \\ &amp;=\int_{\Omega} \nabla \cdot(u\nabla u)-u\Delta u \; dx +\int_{\partial \Omega} u^2 \; d\sigma \\ &amp;=\int_{\partial \Omega} u \frac{\partial u}{\partial \nu} \; d\sigma +\int_{\Omega} \lambda u^2 dx +\int_{\partial \Omega} u^2 \; d\sigma \\ &amp;=-\int_{\partial \Omega} u^2 \; d\sigma +\int_{\Omega} \lambda u^2 dx +\int_{\partial \Omega} u^2 \; d\sigma\\ &amp;=\lambda \Vert u \Vert _{L^2(\Omega)}^2 \end{align*}</span> I don't know how to get <span class="math-container">$\lambda \Vert u \Vert_{L^2(\Omega)}^2 \geq \theta \Vert u \Vert_{H^1(\Omega)}^2$</span>...</p> <p>Any help is appreciated.</p> <p>Thank you!</p>
user9077
9,077
<p>Assume <span class="math-container">$T$</span> is continuous at <span class="math-container">$0$</span>. It means given <span class="math-container">$\epsilon &gt;0$</span> there exists <span class="math-container">$\delta&gt;0$</span> such that for all <span class="math-container">$|z|&lt;\delta$</span> one has <span class="math-container">$\|T(z)\|'&lt;\epsilon$</span>.</p> <p>Now suppose we want to show that <span class="math-container">$T$</span> is continuous at <span class="math-container">$v\in V$</span>. Given <span class="math-container">$\epsilon &gt;0$</span>, take the <span class="math-container">$\delta &gt;0$</span> above in continuity at <span class="math-container">$0$</span>. Now for every <span class="math-container">$x\in V$</span> such that <span class="math-container">$\|x-v\|&lt;\delta$</span> we have <span class="math-container">$$\|T(x)-T(v)\|'=\|T(x-v)\|'=\|T(z)\|'&lt;\epsilon$$</span> where <span class="math-container">$z=x-v$</span>.</p>
487,123
<p>How to evaluate the following limit? $$\lim_{n\to\infty}\dfrac{1!+2!+\cdots+n!}{n!}$$</p> <p>For this problem I have two methods. But I'd like to know if there are better methods.</p> <p><strong>My solution 1:</strong></p> <p>Using Stolz-Cesaro Theorem, we have $$\lim_{n\to\infty}\dfrac{1!+2!+\cdots+n!}{n!}=\lim_{n\to\infty}\dfrac{n!}{n!-(n-1)!}=\lim_{n\to\infty}\dfrac{n}{n-1}=1$$</p> <p><strong>My solution 2:</strong></p> <p>$$1=\dfrac{n!}{n!}&lt;\dfrac{1!+2!+\cdots+n!}{n!}&lt;\dfrac{(n-2)(n-2)!+(n-1)!+n!}{n!}=\dfrac{n-2}{n(n-1)}+\dfrac{1}{n}+1$$</p>
wendy.krieger
78,024
<p>You can write this as a kind of 'added fraction', or fraction of continued numerator. Such fractions were used, for example, by Fibonacci in <em>Liber Abaci</em></p> <p>Thus $1 \frac {a+}A \frac{b+}B \dots = 1 \frac{a+\frac{b+ \dots}B}A$ For example, one might regard decimals, as a series of added tenths, as $1m \frac{dm+}{10} \frac{cm+}{10} \frac{mm}{10}$ The $+$ serves to show it's the numerator continued. </p> <p>It gives $A = 1 \frac {1+}n \frac {1+}{n-1} \frac {1+}{n-2} \dots$. This is identical to writing it in a base, where the size of the base gets smaller as one goes along. So, for example, when $n=10$ it gives $1 \frac {1+}{10} \frac {1+}9 \frac {1+}8$</p> <p>As n goes large, one sees that the limiting factor (which is less than the sum), is $1 \frac {1+}n \frac {1+}n \frac {1+}n \dots$, which is $A_2 = \frac n{n-1} = 1 \frac 1{n-1}$. In fact, the first two fractions of the number add to this. </p> <p>The sum of the first three fractions, then goes $A_3 = 1 \frac {1+}{n} \frac{1}{n-2}$. This is $1 \frac{n-1}{n^2-2n}$.</p> <p>The value between $A-A_3$ is much less than between $A-A_2$, to the extent that it is admissible to suppose an upper limit of $1 \frac{n-1}{n^2-2n-1}$ be larger than $A$ by the same order which $A_2$ is less, and that this difference is nearly $n (A-A_3)$.</p> <p>The limit as n goes large is $1$, since the fractional part approaches zero, but for those of us who track the disappearance direction, it is in the order of $1 \frac 1n$.</p>
487,123
<p>How to evaluate the following limit? $$\lim_{n\to\infty}\dfrac{1!+2!+\cdots+n!}{n!}$$</p> <p>For this problem I have two methods. But I'd like to know if there are better methods.</p> <p><strong>My solution 1:</strong></p> <p>Using Stolz-Cesaro Theorem, we have $$\lim_{n\to\infty}\dfrac{1!+2!+\cdots+n!}{n!}=\lim_{n\to\infty}\dfrac{n!}{n!-(n-1)!}=\lim_{n\to\infty}\dfrac{n}{n-1}=1$$</p> <p><strong>My solution 2:</strong></p> <p>$$1=\dfrac{n!}{n!}&lt;\dfrac{1!+2!+\cdots+n!}{n!}&lt;\dfrac{(n-2)(n-2)!+(n-1)!+n!}{n!}=\dfrac{n-2}{n(n-1)}+\dfrac{1}{n}+1$$</p>
Barry Cipra
86,747
<p>Let </p> <p>$$b_n={1!+2!+\cdots+(n-1)!\over n!}$$ </p> <p>It suffices to show that $\lim_{n\rightarrow\infty}b_n=0$.</p> <p>Note that $0\lt b_n\lt1$ for all $n\gt1$. (There are fewer than $n$ terms in the numerator, none larger than $(n-1)!$.) This implies</p> <p>$$0\lt b_n={1\over n}\left({1!+\cdots+(n-2)!+(n-1)!\over(n-1)! }\right)={1\over n}(b_{n-1}+1)\lt{2\over n}$$</p> <p>so the limit $0$ follows.</p>
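A quick exact computation with `fractions` (avoiding floating-point error) illustrating both the limit and the bound $b_n < 2/n$:

```python
from fractions import Fraction
from math import factorial

def ratio(n):
    # (1! + 2! + ... + n!) / n! ; note ratio(n) - 1 = b_n
    return Fraction(sum(factorial(k) for k in range(1, n + 1)), factorial(n))

vals = {n: ratio(n) for n in (5, 10, 20, 50)}
assert all(v > 1 for v in vals.values())
assert all(vals[n] - 1 < Fraction(2, n) for n in vals)   # b_n < 2/n
```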
543,712
<p>I am stuck on the following problem that says:</p> <blockquote> <p>Let <span class="math-container">$p,q$</span> be 2 complex numbers with <span class="math-container">$|p|&lt;|q|$</span>. Let <span class="math-container">$$f(z)=\sum\{3p^n-5q^n\}z^n$$</span> Then the radius of convergence of <span class="math-container">$f(z)$</span> is :</p> <ol> <li><p><span class="math-container">$|q|$</span></p> </li> <li><p><span class="math-container">$|p|$</span></p> </li> <li><p>At least <span class="math-container">$\frac{1}{|q|}$</span></p> </li> <li><p>At most <span class="math-container">$\frac{1}{|q|}$</span></p> </li> </ol> </blockquote> <p><em>My Attempt</em>: <span class="math-container">$f(z)=\sum(3p^n-5q^n)z^n=\sum\{3p^n\}z^n-\sum\{5q^n\}z^n=3\sum(pz)^n-5\sum(qz)^n$</span>. Now for convergence,we must have <span class="math-container">$|pz|&lt;1 \implies |z|&lt;\frac{1}{|p|}$</span> and <span class="math-container">$|qz|&lt;1 \implies |z|&lt;\frac{1}{|q|}$</span>. Also we are given that <span class="math-container">$|p|&lt;|q| \implies \frac{1}{|q|} &lt;\frac{1}{|p|}$</span>.</p> <p>Now,I am bit confused. Can someone help? Thanks in advance for your time.</p>
Kirk Fogg
83,162
<p>Permutations are used when one is concerned with order. For example, if you wanted to choose how many ways are there to arrange five people in a line, the answer will be different. In your case, a person first in line is the same as a person fourth in line, i.e., order does not matter. </p> <p>So combinations are used when the problem does not concern the ordering of objects, rather strictly taking combinations of objects. Hope this helps!</p>
543,712
<p>I am stuck on the following problem that says:</p> <blockquote> <p>Let <span class="math-container">$p,q$</span> be 2 complex numbers with <span class="math-container">$|p|&lt;|q|$</span>. Let <span class="math-container">$$f(z)=\sum\{3p^n-5q^n\}z^n$$</span> Then the radius of convergence of <span class="math-container">$f(z)$</span> is :</p> <ol> <li><p><span class="math-container">$|q|$</span></p> </li> <li><p><span class="math-container">$|p|$</span></p> </li> <li><p>At least <span class="math-container">$\frac{1}{|q|}$</span></p> </li> <li><p>At most <span class="math-container">$\frac{1}{|q|}$</span></p> </li> </ol> </blockquote> <p><em>My Attempt</em>: <span class="math-container">$f(z)=\sum(3p^n-5q^n)z^n=\sum\{3p^n\}z^n-\sum\{5q^n\}z^n=3\sum(pz)^n-5\sum(qz)^n$</span>. Now for convergence,we must have <span class="math-container">$|pz|&lt;1 \implies |z|&lt;\frac{1}{|p|}$</span> and <span class="math-container">$|qz|&lt;1 \implies |z|&lt;\frac{1}{|q|}$</span>. Also we are given that <span class="math-container">$|p|&lt;|q| \implies \frac{1}{|q|} &lt;\frac{1}{|p|}$</span>.</p> <p>Now,I am bit confused. Can someone help? Thanks in advance for your time.</p>
Ross Millikan
1,827
<p>It is a combination problem because you don't care the order the people are selected. A committee of ABCDE is the same as one of EBADC. Note that $C(13,5)=\frac {13!}{5!(13-5)!}$ so your $3!$ in the denominator should be $5!$</p>
892,114
<p>I have three numbers 1 2 3, which will always be in this order {123}. I want to find out the number of cases that can be made, like {1},{2},{23},{13},{12},{123},{3},{}. But each number has two states, "a" and "b", i.e., each one becomes a different entity, like 2a, 2b, 3a, 3b, 1a, with the only exception that 1 has just the one state 1a.</p> <p>Please tell me step-wise, using formulas, so that I can understand; any link will also be helpful. Yours sincerely</p>
Vikram
11,309
<p>Same as others but with some colors</p> <p>$(x-2)^2=\color{red}{(x-2)}\color{blue}{(x-2)}$</p> <p>$\color{red}{(x-2)}\color{blue}{(x-2)}=\color{red}x\color{blue}{(x-2)}\color{red}{-2}\color{blue}{(x-2)}$</p> <p>$\hspace{65 pt}=\underbrace{\color{red}x\times \color{blue}x}\hspace{5 pt}+\underbrace{\color{red}x \times \color{blue}{-2}}\hspace{5 pt}\underbrace{\color{red}{-2} \times \color{blue}x}\hspace{5 pt}\underbrace{ \color{red}{-2} \times \color{blue}{-2}}$ $\hspace{70 pt}=x^2 \hspace{25 pt}-2x\hspace{15 pt}-2x\hspace{15 pt}+4$</p> <p>$\hspace{65 pt}=x^2-4x+4$</p>
331,962
<p>We have a first-order ODE: </p> <p>Equation 1: $y' + y = x$. We can view the left-hand side as an operator acting on $y$. </p> <p>In that case $L=(d/dx + 1)$ </p> <p>$L(y_1) = x$<br> $L(y_2)=x$<br> $L(y_1+y_2)=x$<br> So, clearly $L(y_1+y_2) = x \neq L(y_1)+L(y_2) = 2x$ </p> <p>So why is $y'+y=x$ a linear ODE?</p>
Ross Millikan
1,827
<p>The equation is considered a linear differential equation because the operator $1+\frac {d}{dx}$ is linear. In this light $\frac {dy}{dx}+y=f(x)$ is linear regardless of the right-hand side $f(x)$. This allows us to say that if we find any solution to the homogeneous part, the equation without the $f(x)$, we can add it to a solution of the whole equation and get another solution.</p>
1,474,867
<p>I was trying to prove </p> <p>$$\left|\int_{0}^{a}{\frac{1-\cos{x}}{x^2}}dx-\frac{\pi}{2}\right|\leq \frac{3}{a}$$ or $\leq \frac{2}{a}$. My work: I would like to use Fubini's theorem to prove it. </p> <p>I notice that $\frac{1}{x^2}=\int^{\infty}_{0}{ue^{-xu}}du$. </p> <p>Then, I got $\int_{0}^{a}{\frac{1-\cos{x}}{x^2}}dx=\int_{0}^{\infty}u\int_{0}^{a}{(1-\cos{x})e^{-xu}}dxdu$. </p> <p>Then, I got $\int_{0}^{a}{(1-\cos{x})e^{-xu}}dx=-e^{-au}u+\frac{1}{u+u^3}+e^{-au}\frac{u^2\cos{a}-u\sin{a}}{u+u^3}$.</p> <p>Then, $\int_{0}^{a}{\frac{1-\cos{x}}{x^2}}dx=\int_0^{\infty}u(\frac{e^{au}-1}{u}+\frac{u-e^{au}(u\cos{a}+\sin{a})}{1+u^2})du\\=\int_0^{\infty}({e^{au}+\frac{-ue^{au}(u\cos{a}+\sin{a}-2)}{1+u^2}})du+\frac{\pi}{2}.$ </p> <p>I was trying to show $|\int_0^{\infty}({e^{au}+\frac{-ue^{au}(u\cos{a}+\sin{a}-2)}{1+u^2}})du|\leq\frac{3}{a}$ or $\frac{2}{a}$. </p> <p>But I do not have a clue. Can some give me hints?</p>
tired
101,233
<p>To circumvent possible divergence issues at the origin, </p> <p>write $\int_0^af\,dx=\int_0^{\infty}f\,dx-\int_a^{\infty}f\,dx$</p> <p>because the first integral is just $\frac{\pi}{2}$ as @Julian Rosen pointed out, we have to inspect</p> <p>$$ J(a)=\int_a^{\infty}\frac{1-\cos(x)}{x^2}dx=2\int_a^{\infty}\frac{\sin^2(x/2)}{x^2}dx= \int_{a/2}^{\infty}\frac{\sin^2(y)}{y^2}dy $$</p> <p>We used the trigonometric identity $1-\cos(x)=2\sin^2(x/2)$</p> <p>this integral is now easily bounded (use $\sin^2(y)\leq1$)</p> <p>$$ J(a)&lt;\int_{a/2}^{\infty}\frac{dy}{y^2}=\frac{2}{a} $$</p> <p>which is equivalent to the original claim</p>
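The bound can be sanity-checked numerically (an illustration only; `simpson` below is a hand-rolled composite Simpson rule, and the integrand is extended by its limit $1/2$ at the origin):

```python
import math

def integrand(x):
    if x == 0.0:
        return 0.5  # limiting value of (1 - cos x)/x^2 as x -> 0
    return (1.0 - math.cos(x)) / (x * x)

def simpson(f, lo, hi, steps=50000):
    # composite Simpson rule; steps must be even
    h = (hi - lo) / steps
    total = f(lo) + f(hi)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * f(lo + i * h)
    return total * h / 3.0

errs = {}
for a in (10.0, 50.0, 200.0):
    errs[a] = abs(simpson(integrand, 0.0, a) - math.pi / 2)
    assert errs[a] < 2.0 / a        # | integral - pi/2 | < 2/a
```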
1,108,832
<p>Q: A team of $11$ is to be chosen out of $15$ cricketers of whom $5$ are bowlers and $2$ others are wicket keepers. In how many ways can this be done so that the team contains at least $4$ bowlers and at least $1$ wicket keeper?</p>
barak manos
131,263
<p>The number of ways to choose exactly $4$ bowlers and exactly $1$ keeper:</p> <p>$$\binom{5}{4}\cdot\binom{2}{1}\cdot\binom{15-5-2}{11-4-1}=280$$</p> <p>The number of ways to choose exactly $5$ bowlers and exactly $1$ keeper:</p> <p>$$\binom{5}{5}\cdot\binom{2}{1}\cdot\binom{15-5-2}{11-5-1}=112$$</p> <p>The number of ways to choose exactly $4$ bowlers and exactly $2$ keepers:</p> <p>$$\binom{5}{4}\cdot\binom{2}{2}\cdot\binom{15-5-2}{11-4-2}=280$$</p> <p>The number of ways to choose exactly $5$ bowlers and exactly $2$ keepers:</p> <p>$$\binom{5}{5}\cdot\binom{2}{2}\cdot\binom{15-5-2}{11-5-2}=70$$</p> <p>The number of ways to choose at least $4$ bowlers and at least $1$ keeper:</p> <p>$$280+112+280+70=742$$</p>
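The case analysis can be reproduced with `math.comb` (the remaining 8 players are neither bowlers nor wicket keepers):

```python
from math import comb

others = 15 - 5 - 2          # 8 players who are neither bowlers nor keepers
cases = {(b, w): comb(5, b) * comb(2, w) * comb(others, 11 - b - w)
         for b in (4, 5) for w in (1, 2)}

assert cases[(4, 1)] == 280
assert cases[(5, 1)] == 112
assert cases[(4, 2)] == 280
assert cases[(5, 2)] == 70

total = sum(cases.values())
assert total == 742
```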
439,302
<p>@HansEngler left the following response to <a href="https://math.stackexchange.com/questions/260656/cant-argue-with-success-looking-for-bad-math-that-gets-away-with-it">this question</a> regarding "bad math" that works,</p> <blockquote> <p>Here's another classical freshman calculus example: </p> <p><strong>Find $\frac{d}{dx}x^x$.</strong> </p> <p>Alice says "this is like $\frac{d}{dx}x^n = nx^{n-1}$, so the answer is $x x^{x-1} = x^x$." Bob says "no, this is like $\frac{d}{dx}a^x = \log a \cdot a^x$, so the answer is $\log x \cdot x^x$." Charlie says "if you're not sure, just add the two terms, so you'll get partial credit".</p> <p>The answer $\frac{d}{dx}x^x = (1 + \log x)x^x $ turns out to be correct.</p> </blockquote> <p>In <a href="https://math.stackexchange.com/questions/260656/cant-argue-with-success-looking-for-bad-math-that-gets-away-with-it#comment571410_261057">this comment</a>, @joriki asserts that this is not "bad math" but rather a legitimate technique,</p> <blockquote> <p>You get the derivative of any expression with respect to $x$ as the sums of all the derivatives with respect to the individual instances of $x$ while holding other instances constant.</p> </blockquote> <p>I had never previously seen such a technique so naturally I tested it on a few examples, including $\frac{d}{dx} \left( x^{ \sin x}\right)$, etc. and it provided the correct result. The following questions arose,</p> <p>$ \ \ $ <strong>1. What is the proof of its validity?</strong> <br> $ \ \ $ <strong>2. Are there any examples where this technique outshines standard methods?</strong></p>
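For what it's worth, joriki's rule is easy to test numerically on this very example (a quick sketch; `numeric_derivative` is just a central-difference approximation, not a library function):

```python
import math

def f(x):
    return x ** x

def numeric_derivative(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# differentiate each instance of x separately, then add the two results
for x0 in (0.5, 1.0, 2.0, 3.0):
    alice = x0 * x0 ** (x0 - 1)             # power-rule instance
    bob = math.log(x0) * x0 ** x0           # exponential-rule instance
    assert abs(alice + bob - numeric_derivative(f, x0)) < 1e-4
```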
Michael Hardy
11,667
<p>I relied on something similar to this in a published paper. There's an identity that, in the concrete instance where the number of independent variables is $3$, says \begin{align} &amp; \phantom{{}=} \frac{\partial^3}{\partial x_1\,\partial x_2\,\partial x_3} e^y \\[10pt] &amp; = e^y\left(\frac{\partial^3 y}{\partial x_1\,\partial x_2\,\partial x_3} + \frac{\partial^2 y}{\partial x_1\,\partial x_2}\cdot\frac{\partial y}{\partial x_3} + \frac{\partial^2 y}{\partial x_1\,\partial x_3}\cdot\frac{\partial y}{\partial x_2} + {}\right. \\[10pt] &amp; \left.\phantom{{}= e^y\quad{}} + \frac{\partial^2 y}{\partial x_2\,\partial x_3}\cdot\frac{\partial y}{\partial x_1} + \frac{\partial y}{\partial x_1}\cdot\frac{\partial y}{\partial x_2}\cdot\frac{\partial y}{\partial x_3} \right) \end{align}</p> <p>The point is there's one term for each partition of the set of variables. Having proved this, one can go on to say that $$ \frac{d^3}{dx^3} e^y = e^y\left( \frac{d^3 y}{dx^3} + 3\frac{d^2y}{dx^2}\cdot\frac{dy}{dx} + \left(\frac{dy}{dx}\right)^3 \right), $$ simply by saying that's the special case in which all three variables are the same. The proof is the same, but it's clearer when one first treats the variables as distinguishable. 
When it's written in that form, one can see that there's just one term for each set partition, and all the coefficients are $1$, so that the coefficients in the form with indistinguishable terms have a combinatorial interpretation as the number of set partitions corresponding to a given integer partition.</p> <p>Similarly \begin{align} &amp; {}\qquad\frac{\partial^3}{\partial x_1\,\partial x_2\,\partial x_3} (uv) \\[10pt] &amp; = u\frac{\partial^3 v}{\partial x_1\,\partial x_2\,\partial x_3} + \frac{\partial u}{\partial x_1}\cdot\frac{\partial^2 v}{\partial x_2\,\partial x_3} + \frac{\partial u}{\partial x_2}\cdot\frac{\partial^2 v}{\partial x_1\,\partial x_3} + \frac{\partial u}{\partial x_3}\cdot\frac{\partial^2 v}{\partial x_1\,\partial x_2} \\[10pt] &amp; \phantom{{}=} + \frac{\partial^2 u}{\partial x_1\,\partial x_2}\cdot\frac{\partial v}{\partial x_3} + \frac{\partial^2 u}{\partial x_1\,\partial x_3}\cdot\frac{\partial v}{\partial x_2} + \frac{\partial^2 u}{\partial x_2\,\partial x_3}\cdot\frac{\partial v}{\partial x_1} + \frac{\partial^3 u}{\partial x_1\,\partial x_2\,\partial x_3}\cdot v \end{align} This time, there is one term for each <em>subset</em> of the set of variables. Each coefficient is $1$. Then one can make all variables indistinguishable, and collect like terms, and then each coefficient is the number of subsets of a specified size.</p>
439,302
<p>@HansEngler left the following response to <a href="https://math.stackexchange.com/questions/260656/cant-argue-with-success-looking-for-bad-math-that-gets-away-with-it">this question</a> regarding "bad math" that works,</p> <blockquote> <p>Here's another classical freshman calculus example: </p> <p><strong>Find $\frac{d}{dx}x^x$.</strong> </p> <p>Alice says "this is like $\frac{d}{dx}x^n = nx^{n-1}$, so the answer is $x x^{x-1} = x^x$." Bob says "no, this is like $\frac{d}{dx}a^x = \log a \cdot a^x$, so the answer is $\log x \cdot x^x$." Charlie says "if you're not sure, just add the two terms, so you'll get partial credit".</p> <p>The answer $\frac{d}{dx}x^x = (1 + \log x)x^x $ turns out to be correct.</p> </blockquote> <p>In <a href="https://math.stackexchange.com/questions/260656/cant-argue-with-success-looking-for-bad-math-that-gets-away-with-it#comment571410_261057">this comment</a>, @joriki asserts that this is not "bad math" but rather a legitimate technique,</p> <blockquote> <p>You get the derivative of any expression with respect to $x$ as the sums of all the derivatives with respect to the individual instances of $x$ while holding other instances constant.</p> </blockquote> <p>I had never previously seen such a technique so naturally I tested it on a few examples, including $\frac{d}{dx} \left( x^{ \sin x}\right)$, etc., and it provided the correct result. The following two questions arose,</p> <p>$ \ \ $ <strong>1. What is the proof of its validity?</strong> <br> $ \ \ $ <strong>2. Are there any examples where this technique outshines standard methods?</strong></p>
Fabian
458,126
<p>I beg to differ with the suggested differential of $x$ to the power of $x$. The result would be best expressed using $\ln x$, i.e. the log to base $e$ of $x$:</p> <p>$\frac{d}{dx}(x^x) = x^x (\ln x + 1)$</p> <p>When using $\log$ (here taken to be base 10), the result will be expressed as</p> <p>$\frac{d}{dx}(x^x) = x^x \ln 10\,(\log x + \log e)$</p> <p>Problems of this form can all be solved by taking the $\log$ or $\ln$ of both sides and implicitly differentiating with respect to $x$.</p>
2,756,798
<p>Consider the sequence space $l^2:=\{(x_n)_n\mid \sum^\infty_{n=0}|x_n|^2&lt;\infty\}$ together with the norm $$ ||(x_n)_n||=(\sum^\infty_{n=0}|x_n|^2)^{1/2} $$ How can I show that the triangle inequality holds for $||\cdot||$?</p>
Clement C.
75,808
<p>Let $x,y\in\ell^2(\mathbb{N})$. Then we have $$\begin{align} \lVert x+y\rVert^2 &amp;= \sum_{n=0}^\infty \lvert x_n+y_n\rvert^2 \leq \sum_{n=0}^\infty (\lvert x_n\rvert+\lvert y_n\rvert)^2 \tag{Triangle}\\ &amp;= \lVert x\rVert^2+\lVert y\rVert^2 + 2\sum_{n=0}^\infty \lvert x_n\rvert\lvert y_n\rvert\\ &amp;\leq \lVert x\rVert^2+\lVert y\rVert^2 + 2\left(\sum_{n=0}^\infty \lvert x_n\rvert^2\right)^{1/2}\left(\sum_{n=0}^\infty\lvert y_n\rvert^2\right)^{1/2} \tag{Cauchy-Schwarz}\\ &amp;= \lVert x\rVert^2+\lVert y\rVert^2 + 2\lVert x\rVert\lVert y\rVert\\ &amp;= \left(\lVert x\rVert+\lVert y\rVert\right)^2 \end{align}$$ (where the first inequality is the triangle inequality in $\mathbb{R}$); so that $$ \lVert x+y\rVert \leq \lVert x\rVert+\lVert y\rVert\,. $$</p>
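As a numerical sanity check (not a substitute for the proof), one can test the inequality on truncated sequences; the particular sequences below are arbitrary square-summable examples:

```python
from math import sqrt

N = 1000
x = [1 / (n + 1) for n in range(N)]               # square-summable
y = [(-1) ** n / (n + 1) ** 2 for n in range(N)]  # square-summable

def norm(v):
    # finite-dimensional version of the l^2 norm
    return sqrt(sum(t * t for t in v))

lhs = norm([a + b for a, b in zip(x, y)])
rhs = norm(x) + norm(y)
assert lhs <= rhs
```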
742,216
<p>$a$, $b$, $c$, $d$ are rational numbers and all $&gt; 0$.</p> <p>$\max \left\{\dfrac{a}{b} , \dfrac{c}{d}\right\} \geq \dfrac{a+c}{b+d}\geq \min \left\{\dfrac{a}{b} , \dfrac{c}{d}\right\}$</p> <p>Hope someone can help me with this one. How would you go on proving the validity? Thanks in advance.</p>
lab bhattacharjee
33,337
<p>$$\frac{a+c}{b+d}-\frac ab=\frac{b(a+c)-a(b+d)}{(b+d)b}=\frac{bc-ad}{b(b+d)}$$</p> <p>Similarly, $$\frac{a+c}{b+d}-\frac cd=\cdots=\frac{ad-bc}{(b+d)d}$$</p> <p>Observe that the signs of the terms are opposite as $a,b,c,d&gt;0$</p>
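Since all quantities are rational, the mediant inequality can also be checked exactly for small positive integers with Python's `fractions` module (a quick sanity check, not a proof):

```python
from fractions import Fraction

# Exhaustive check over small positive integers, in exact arithmetic.
for a in range(1, 6):
    for b in range(1, 6):
        for c in range(1, 6):
            for d in range(1, 6):
                m = Fraction(a + c, b + d)           # the mediant
                lo = min(Fraction(a, b), Fraction(c, d))
                hi = max(Fraction(a, b), Fraction(c, d))
                assert lo <= m <= hi
```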
759,087
<p>I'm busy writing my thesis, and I'm looking for some concise notation to denote the supremum of the matrix entries of, say $A \in M_n(\mathbb{R})$. How should I do this? </p> <p>Looking for something like $$\sup_{a_{i,j} \in A}|a_{i,j}|$$ but the notation $a_{i,j} \in A$ in reality doesn't make much sense in my opinion. What else can I do?</p> <p>EDIT: Even more ideally I want to denote $\sup_{a_{i,j}\in (A-B)}|A - B|$, but I might just introduce general notation for the "norm" to simplify this.</p>
user1551
1,551
<p>Although not really <strong>a</strong> notation but a combination of notations, the quantity in question is $\|\operatorname{vec}(A)\|_\infty$.</p>
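A small illustration (the $2\times 2$ matrix is a hypothetical example): the supremum of the absolute entries of $A$ is exactly the $\infty$-norm of the column-stacked vector $\operatorname{vec}(A)$:

```python
A = [[3.0, -7.5], [0.25, 2.0]]

# vec(A): stack the columns of A into one vector (column-major order)
vec_A = [A[i][j] for j in range(2) for i in range(2)]

sup_entries = max(abs(a) for row in A for a in row)
inf_norm_vec = max(abs(t) for t in vec_A)
assert sup_entries == inf_norm_vec == 7.5
```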
1,754,931
<p>If a sequence has a pattern where +2 is the pattern at the start, but 1 is added each time, like the sequence below, is there a formula to find the 125th number in this sequence? It would also need to work with patterns similar to this. For example if the pattern started as +4, and 5 was added each time.</p> <blockquote> <p>2, 4, 7, 11, 16, 22 ...</p> </blockquote>
Jeevan Devaranjan
220,567
<p>Let $a_1 = 2$. From the way you defined the sequence you can see that $a_n - a_{n-1} = n$. We can use this to find \begin{align} a_n &amp;= a_{n-1} + n\\ &amp;= a_{n-2} + (n-1) + n\\ &amp;= a_{n-3} + (n-2) + (n-1) + n\\ &amp;\vdots \\ &amp;= a_1 + 2 + \cdots + (n - 2) + (n-1) + n \end{align} which is just the sum of the natural numbers up to $n$ with $1$ omitted (recall $1 + 2 + \cdots + n = \frac{n(n+1)}{2}$). So \begin{equation} a_n = a_1 + \frac{n(n+1)}{2} - 1 = 2 - 1 + \frac{n(n+1)}{2} = \frac{n^2 + n + 2}{2} \end{equation} where $a_1$ is the starting number (in this case 2). This sequence is a quadratic sequence, as it exhibits constant second differences (the difference of the differences is constant). </p>
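For the 125th term asked about, the closed form gives $a_{125} = 7876$; a short script (not in the original answer) confirms the formula against the recurrence:

```python
def a(n):
    # closed form derived above: a_n = (n^2 + n + 2) / 2
    return (n * n + n + 2) // 2

# cross-check against the recurrence a_n = a_{n-1} + n with a_1 = 2
seq = [2]
for n in range(2, 126):
    seq.append(seq[-1] + n)

assert seq[:6] == [2, 4, 7, 11, 16, 22]
assert a(125) == seq[124] == 7876
```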
670,813
<p>Let $Y$ be a closed subspace of a compact space $X$. Let $i:Y \to X$ be the inclusion and $r:X \to Y$ a retraction ($r \circ i = Id_Y$). I have to prove that there exists a short exact sequence $$ 0 \to K(X,Y) \to K(X) \to K(Y) \to 0.$$ Then I have to verify that $K(X) \simeq K(X,Y) \oplus K(Y)$. How can I do it? I think that $K(X,Y) = \tilde{K}(X/Y).$ Thank you very much.</p>
Thomas Rot
5,882
<p>For the exactness at the left, you can use the long exact sequence in $K$-theory. Then every map involving (suspensions of) $i$ will split (why?). Hence the long exact sequence breaks up into short exact sequences.</p>
95,819
<p>I think I have solved a problem in <em>Topology</em> by Munkres, but there is a small detail that is bugging me. The problem is stated in this question's title. I will write down the proof and will highlight what is troubling me.</p> <p>We prove by contradiction: Assume $X$ is not Hausdorff. Then there exist points $x,y$ where $x$ is different from $y$ such that no neighbourhoods $U$, $V$ about $x$ and $y$ respectively have trivial intersection. Now consider the point $(x,y)$ that is in the complement of $\Delta$. Now let $U \times V$ be any basis element that contains $(x,y)$ (such an element exists by definition of the product topology being generated by the basis $\mathcal{B}$ consisting of elements of the form $W \times Z$, where $W$ is open in $X$ and $Z$ is open in $Y$). Consider $(U \times V ) \cap \Delta$, <strong>which I claim to be $(U \cap X) \times (V \cap X)$</strong>.</p> <p>By our choice of $x$ and $y$ there is $z \in U \cap V$, implying that the intersection $(U \times V ) \cap \Delta$ is not trivial.</p> <p>Since $U \times V$ was any basis element containing $(x,y)$, this means that $(x,y) \in \overline{\Delta}$, which means that there exists a limit point of $\Delta$ that is not in it, contradicting $\Delta$ being closed.</p> <p>The problem is in the way I have decomposed $\Delta$; the way I have put it seems to say that $\Delta$ <em>is equal to $X \times X$</em>, which is not the case. How can I get round this?</p> <p>Thanks.</p> <p><strong>Edit:</strong> Martin Sleziak has pointed out some mistakes, $(U \times V ) \cap \Delta$ should be $\{ (x,x) : x \in U \cap V\}$ and not as claimed.</p>
Clive Newstead
19,542
<p>The definition of Hausdorff is that for all distinct pairs $x,y \in X$ there exist disjoint open $U \ni x$ and $V \ni y$. Hence the negative is that there <em>exist</em> $x,y \in X$ each of whose open neighbourhoods $U \ni x$, $V \ni y$ have nonempty intersection. You seem to have said that this is the case for <em>all</em> $x,y \in X$, which isn't true.</p> <p>Also your claim $(U \times V) \cap \Delta = (U \cap X) \times (V \cap X)$ seems to assume $\Delta = X \times X$, which could be the cause of your problems.</p>
95,819
<p>I think I have solved a problem in <em>Topology</em> by Munkres, but there is a small detail that is bugging me. The problem is stated in this question's title. I will write down the proof and will highlight what is troubling me.</p> <p>We prove by contradiction: Assume $X$ is not Hausdorff. Then there exist points $x,y$ where $x$ is different from $y$ such that no neighbourhoods $U$, $V$ about $x$ and $y$ respectively have trivial intersection. Now consider the point $(x,y)$ that is in the complement of $\Delta$. Now let $U \times V$ be any basis element that contains $(x,y)$ (such an element exists by definition of the product topology being generated by the basis $\mathcal{B}$ consisting of elements of the form $W \times Z$, where $W$ is open in $X$ and $Z$ is open in $Y$). Consider $(U \times V ) \cap \Delta$, <strong>which I claim to be $(U \cap X) \times (V \cap X)$</strong>.</p> <p>By our choice of $x$ and $y$ there is $z \in U \cap V$, implying that the intersection $(U \times V ) \cap \Delta$ is not trivial.</p> <p>Since $U \times V$ was any basis element containing $(x,y)$, this means that $(x,y) \in \overline{\Delta}$, which means that there exists a limit point of $\Delta$ that is not in it, contradicting $\Delta$ being closed.</p> <p>The problem is in the way I have decomposed $\Delta$; the way I have put it seems to say that $\Delta$ <em>is equal to $X \times X$</em>, which is not the case. How can I get round this?</p> <p>Thanks.</p> <p><strong>Edit:</strong> Martin Sleziak has pointed out some mistakes, $(U \times V ) \cap \Delta$ should be $\{ (x,x) : x \in U \cap V\}$ and not as claimed.</p>
Zarrax
3,035
<p>You made the statement "Since $X$ is not Hausdorff, for any neighbourhoods $U$ and $V$ about $x,y$ respectively the intersection of $U$ and $V$ is not trivial." This actually has to only be true for a single $(x,y)$, not all $x$ and $y$.</p> <p>It's probably best to not use proof by contradiction... If $\Delta$ is closed, then for any $(x,y) \notin \Delta$, there are open $U$ and $V$ such that $(x,y) \in U \times V$ and $U \times V$ is disjoint from $\Delta$. If you translate this correctly, you're done.</p>
3,288,010
<p>The following snippet is from Adamek, Rosicky: <em>Algebra and local presentability: how algebraic are they?</em></p> <p>The end of Example 5.1 is unclear to me:</p> <p>Since <span class="math-container">$e$</span> is the coequalizer of <span class="math-container">$\bar{u}_1,\bar{u}_2$</span> in <span class="math-container">$\mathbf{Pos}$</span>, we conclude that <span class="math-container">$W$</span> does not preserve <span class="math-container">$W-$</span>split coequalizers.</p> <p>Why?</p> <p>The snippet:</p> <p><a href="https://i.stack.imgur.com/ElaYF.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ElaYF.jpg" alt="enter image description here"></a></p>
Julian Mejia
452,658
<p>From <span class="math-container">$x^2\equiv 1\pmod{2}$</span>. You got that <span class="math-container">$x$</span> has to be odd. You have two options:</p> <p><span class="math-container">$x,y$</span> odd. Then <span class="math-container">$x^2+2y^2\equiv 1+2=3\pmod{8}$</span>.</p> <p><span class="math-container">$x$</span> odd, <span class="math-container">$y$</span> even. Then <span class="math-container">$x^2+2y^2\equiv 1+0=1\pmod{8}$</span>.</p> <p>And you are done.</p>
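The two cases can be confirmed mechanically by checking all residues mod 8 (a quick sanity check added here, not part of the original answer):

```python
# Exhaustive check: for odd x, x^2 + 2y^2 mod 8 is 3 when y is odd
# and 1 when y is even.
for x in range(8):
    for y in range(8):
        if x % 2 == 1:                       # x must be odd
            r = (x * x + 2 * y * y) % 8
            expected = 3 if y % 2 == 1 else 1
            assert r == expected
```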
7,575
<p>How could I display text that flashed red for a half second or so and then reverted to black? (Or was put in bold and reverted to normal, etc.)</p>
ragfield
15
<p>Use Dynamic and Refresh:</p> <pre><code>Dynamic[Refresh[ Style["text", FontColor -&gt; If[Mod[Round[AbsoluteTime[]], 2] == 0, Red, Black]], UpdateInterval -&gt; 1]] </code></pre>
7,575
<p>How could I display text that flashed red for a half second or so and then reverted to black? (Or was put in bold and reverted to normal, etc.)</p>
Rojo
109
<pre><code>Style["TESTESTEST", FontColor -&gt; Dynamic[If[Clock[] 2 - 1. &gt; 0, Red, Black]]] </code></pre> <p>If you want it to flash only once</p> <pre><code>Style["TESTESTEST", FontColor -&gt; Dynamic[If[Clock[{-1, 1, 1}, 1, 1] &gt; 0, Black, Red]]] </code></pre>
18
<p>Some teachers make memorizing formulas, definitions and others things obligatory, and forbid "aids" in any form during tests and exams. Other allow for writing down more complicated expressions, sometimes anything on paper (books, tables, solutions to previously solved problems) and in yet another setting students are expected to take questions home, study the problems in any way they want and then submit solutions a few days later.</p> <p>Naturally, the memory-oriented problem sets are relatively easier (modulo time limit), encourage less understanding and more proficiency (in the sense that the student has to be efficient in his approach). As the mathematics is in big part thinking, I think that it is beneficial to students to let them focus on problem solving rather than recalling and calculating (i.e. designing a solution rather than modifying a known one). There is a huge difference between work in time-constrained environment (e.g. medical teams, lawyers during trials, etc.) where the cost of "external knowledge" is much higher and good memory is essential. However, math is, in general (things like high-frequency trading are only a small part math-related professions), slow.</p> <p>On the other hand, memory-oriented teaching is far from being a relic of the past. Why is this so? As this is a broad topic, I will make it more specific:</p> <p><strong>What are the advantages of memory-oriented teaching?</strong></p> <p><strong>What are the disadvantages of allowing aids during tests/exams?</strong></p>
André Nicolas
256
<p>One disadvantage of allowing "memory aids" is that there will then be little or no credit given for the appropriate formula. This makes it more difficult for a student with only a modest level of understanding to get a C, or even to pass.</p>
2,926,270
<p>The base step is pretty obvious: <span class="math-container">$1 \geq \frac{2}{3}$</span>.</p> <p>Then we assume that <span class="math-container">$P(k)$</span> is true for some <span class="math-container">$k \in \mathbb{Z}^{+}$</span> and try to prove <span class="math-container">$P(k+1)$</span>. So I have</p> <p><span class="math-container">$ \sqrt{1}+\sqrt{2}+...+\sqrt{k} + \sqrt{k+1} \geq \frac{2}{3}k\sqrt{k}+\sqrt{k+1}$</span> </p> <p>by the induction hypothesis. But I'm not too sure how to proceed to prove that this is also greater than <span class="math-container">$\frac{2}{3}(k+1)\sqrt{k+1}$</span>.</p> <p>Would appreciate any help!</p>
trancelocation
467,003
<p>If you can drop "by induction" there is another way to show the inequality.</p> <p>At least, it shows how "others" may invent such inequalities:</p> <p><span class="math-container">$$\frac{2}{3}n\sqrt{n} \leq \sum_{i=1}^n \sqrt{i} \Longleftrightarrow \color{blue}{\frac{2}{3} \leq} \frac{1}{n\sqrt{n}}\sum_{i=1}^n \sqrt{i} = \color{blue}{\sum_{i=1}^n \sqrt{\frac{i}{n}}\cdot \frac{1}{n}}$$</span></p> <p>The <span class="math-container">$\color{blue}{\mbox{blue}}$</span> sum is a Riemann sum for <span class="math-container">$\int_0^1 \sqrt{x}\; dx$</span> which can be estimated using the fact that <span class="math-container">$\sqrt{x}$</span> is strictly increasing:</p> <ul> <li><span class="math-container">$\int_{\frac{i-1}{n}}^{\frac{i}{n}} \sqrt{x}\; dx &lt; \int_{\frac{i-1}{n}}^{\frac{i}{n}} \sqrt{\frac{i}{n}}\; dx = \sqrt{\frac{i}{n}}\cdot \frac{1}{n}$</span> for <span class="math-container">$i=1, \ldots , n$</span></li> </ul> <p><span class="math-container">$$\color{blue}{\sum_{i=1}^n \sqrt{\frac{i}{n}}\cdot \frac{1}{n} &gt;} \sum_{i=1}^n \int_{\frac{i-1}{n}}^{\frac{i}{n}} \sqrt{x}\; dx = \int_0^1 \sqrt{x}\; dx = \color{blue}{\frac{2}{3}}$$</span></p>
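A quick numerical check of the inequality itself (not needed for the argument, just a sanity test over the first couple of thousand values of $n$):

```python
from math import sqrt

# Verify sum_{i<=n} sqrt(i) >= (2/3) n sqrt(n) for n = 1, ..., 2000.
s = 0.0
for n in range(1, 2001):
    s += sqrt(n)
    assert s >= (2 / 3) * n * sqrt(n)
```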
2,926,270
<p>The base step is pretty obvious: <span class="math-container">$1 \geq \frac{2}{3}$</span>.</p> <p>Then we assume that <span class="math-container">$P(k)$</span> is true for some <span class="math-container">$k \in \mathbb{Z}^{+}$</span> and try to prove <span class="math-container">$P(k+1)$</span>. So I have</p> <p><span class="math-container">$ \sqrt{1}+\sqrt{2}+...+\sqrt{k} + \sqrt{k+1} \geq \frac{2}{3}k\sqrt{k}+\sqrt{k+1}$</span> </p> <p>by the induction hypothesis. But I'm not too sure how to proceed to prove that this is also greater than <span class="math-container">$\frac{2}{3}(k+1)\sqrt{k+1}$</span>.</p> <p>Would appreciate any help!</p>
Peter Szilas
408,605
<p>Need to show:</p> <p>$(2/3)k\sqrt{k}+\sqrt{k+1} \ge (2/3)(k+1)\sqrt{k+1}$,</p> <p>or equivalently</p> <p>$\sqrt{k+1} \ge (2/3)[(k+1)^{3/2}-k^{3/2}]$.</p> <p>Since $x^{1/2} \lt (k+1)^{1/2}$ on $(k,k+1)$,</p> <p>$\displaystyle {\int_{k}^{k+1}} x^{1/2}\,dx \lt (k+1)^{1/2}\cdot 1,$</p> <p>while</p> <p>$\displaystyle {\int_{k}^{k+1}}x^{1/2}\,dx = (2/3)[(k+1)^{3/2}-k^{3/2}]$.</p> <p>Hence</p> <p>$(k+1)^{1/2} \gt \displaystyle {\int_{k}^{k+1}} x^{1/2}\,dx = (2/3)[(k+1)^{3/2}-k^{3/2}],$</p> <p>which is what was needed.</p>
541,644
<p>I want to know why $p \leftrightarrow q$ is equivalent to $(p \wedge q) \vee (\neg p \wedge \neg q)$? Without using the truth table.</p> <p>Thanks all</p>
mathematics2x2life
79,043
<p>Just think about the statement. </p> <p>$p \leftrightarrow q$: This says that $p$ occurs only if $q$ occurs and that $q$ happens only if $p$ does. Meaning, that either they both happen or nothing happens at all.</p> <p>But look at my last sentence. They BOTH happen OR NEITHER happens. They both happen is $p \wedge q$. They both DON'T happen is $\neg p \wedge \neg q$. So either they both happen or they both don't is $$(p \wedge q) \vee (\neg p \wedge \neg q)$$</p>
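The question asks for an argument rather than a truth table, but as a purely mechanical sanity check the equivalence can be verified over all four valuations:

```python
# p <-> q is the same as (p and q) or (not p and not q)
for p in (False, True):
    for q in (False, True):
        iff = (p == q)                                 # p <-> q
        dnf = (p and q) or ((not p) and (not q))
        assert iff == dnf
```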
2,654,538
<p>If $2\tan^2x - 5\sec x = 1$ has exactly $7$ distinct solutions for $x\in[0,\frac{n\pi}{2}]$, $n\in N$, then the greatest value of $n$ is?</p> <p>My attempt:</p> <p>Solving the above quadratic equation, we get $\cos x = \frac{1}{3}$</p> <p>The general solution of the equation is given by $\cos x = 2n\pi \pm \cos^{-1}\frac{1}{3}$</p> <p>For having $7$ distinct solutions, $n$ can have value = 0,1,2,3</p> <p>So, from here we can conclude that $n$ is anything but greater than $6$. So, according to the options given in the questions, the greatest value of $n$ should be $13$. But the answer given is $14$. Can anyone justify?</p>
user
505,767
<p>I agree with your answer, indeed note that</p> <p>$$2\tan^2x - 5\sec x = 1\iff2(1-\cos^2x)-5\cos x=\cos^2x\\\iff3\cos^2x+5\cos x-2=0$$</p> <p>and</p> <p>$$3t^2+5t-2=0\implies t=\frac{-5\pm\sqrt{25+24}}{6}=\frac{-5\pm 7}{6}\implies t=\frac13$$</p> <p>(the root $t=-2$ is rejected since $|\cos x|\le 1$), thus we have 2 solutions on the interval $[0,2\pi]$ and notably one in the interval $[0,\pi/2]$ and the other in $[3\pi/2,2\pi]$.</p> <p>Therefore, since the function is periodic with period $2\pi$ we have that $n=13$.</p> <p>Note that for $x\in\left[0,\frac{13\pi}{2}\right]$ the expression is not defined when $x=\frac{\pi}2+k\pi$, thus these points should be excluded from the solution.</p>
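To double-check the count, the snippet below (an editorial addition) enumerates the solutions of $\cos x = \frac13$ in $\left[0,\frac{13\pi}{2}\right]$ and finds exactly seven:

```python
from math import acos, pi

a = acos(1 / 3)          # principal solution of cos x = 1/3
n = 13
upper = n * pi / 2

# All solutions have the form 2k*pi +/- a; collect those in [0, upper].
sols = []
k = 0
while 2 * k * pi - a <= upper:
    for x in (2 * k * pi - a, 2 * k * pi + a):
        if 0 <= x <= upper:
            sols.append(x)
    k += 1

assert len(sols) == 7
```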
2,299,768
<p><em>It may have been already done, but I have found the answer nowhere...</em></p> <hr> <p><strong>Context.</strong></p> <p>We already know by Stirling's formula that </p> <p>$$n!\sim \sqrt{2\pi n}\left(\frac ne\right)^n.$$</p> <p>We can deduce from this that</p> <p>$$\log(n!)\sim n\log n.$$</p> <p><strong>The question.</strong></p> <p>But what about $$P_n:=\prod_{k=2}^n \log k.$$</p> <p>Is possible to conduct an asymptotical analysis for $P_n$, and to find a simpler expression of its growth?</p>
Angina Seng
436,618
<p>The natural approach is to consider $$\log P_n=\sum_{k=2}^n\log\log k$$ and apply the <a href="https://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler-Maclaurin summation method</a>.</p>
2,299,768
<p><em>It may have been already done, but I have found the answer nowhere...</em></p> <hr> <p><strong>Context.</strong></p> <p>We already know by Stirling's formula that </p> <p>$$n!\sim \sqrt{2\pi n}\left(\frac ne\right)^n.$$</p> <p>We can deduce from this that</p> <p>$$\log(n!)\sim n\log n.$$</p> <p><strong>The question.</strong></p> <p>But what about $$P_n:=\prod_{k=2}^n \log k.$$</p> <p>Is possible to conduct an asymptotical analysis for $P_n$, and to find a simpler expression of its growth?</p>
Jack D'Aurizio
44,121
<p>The natural approach is to consider $$ \log P_n = \sum_{k=2}^{n}\log\log k $$ then notice that $\log\log k$ is approximately constant on short intervals.<br> By applying <a href="https://en.wikipedia.org/wiki/Abel%27s_summation_formula" rel="nofollow noreferrer">Abel's summation formula</a> we get:</p> <p>$$\begin{eqnarray*} \log P_n &amp;=&amp; (n-1)\log\log n-\int_{2}^{n}\frac{\left\lfloor x\right\rfloor-1}{x\log x}\,dx\\&amp;=&amp; O(1)+n\log\log n-\int_{2}^{n}\frac{dx}{\log x}+\int_{2}^{n}\frac{\{x\}}{x\log x}\,dx\\&amp;=&amp;n\log\log n-\frac{n}{\log n}+O\left(\frac{n}{\log^2 n}\right). \end{eqnarray*}$$</p>
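A numerical check of the leading terms (an editorial addition; the tolerance below is deliberately loose because of the $O\!\left(n/\log^2 n\right)$ error term):

```python
from math import log

n = 200_000
s = sum(log(log(k)) for k in range(2, n + 1))       # log P_n
approx = n * log(log(n)) - n / log(n)               # first two terms
assert abs(s / approx - 1) < 0.02
```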
1,115,117
<p>Consider the initial value problem $$y'=ty(4-y)/(1+t)$$ $$y(0)=y_{0}&gt;0$$</p> <p>(a)Determine how the solution behaves as $t$ tends to infinity.</p> <p>(b)If $y_{0}=2$,find the time $T$ at which the solution first reaches the value of 3.99</p> <p>(c)Find the range of initial values for which the solution lies in the interval $3.99&lt;y&lt;4.01$ by the time $t=2$.</p> <p>What i tried</p> <p>(a)I first solve the following IVP to get a general solution of $$y=\frac{4A}{(1+t)^{4}e^{-4t}+A} $$ And from here, as $t$ tends to infinity, it can be seen that the solution becomes $4$</p> <p>(b)For part (b) by substituting $y_{0}=2$ into the general solution, the expression becomes $$y=\frac{4}{(1+t)^{4}e^{-4t}+1} $$. I then let $y=3.99$ to get a value of $t$. Im stuck from here onwards,however, as it dont understand the rationale of the question. Espicallly for part (c). I know that as the solution becomes $4$, $T$ will tends to infinity. But what if the solution becomes $3.99$ instead. And for part (c) i dont really get what the question means when it ask to find the range of initial values for which the solution lies in the interval $3.99&lt;y&lt;4.01$.Could anyone explain. Thanks</p>
voldemort
118,052
<p>Half angle formula: We will manipulate the RHS-1.</p> <p>RHS= $2\cos^2(\theta/2)-1+i2\cos(\theta/2)\sin(\theta/2)=\cos(\theta)+i\sin(\theta)=z$</p>
1,115,117
<p>Consider the initial value problem $$y'=ty(4-y)/(1+t)$$ $$y(0)=y_{0}&gt;0$$</p> <p>(a)Determine how the solution behaves as $t$ tends to infinity.</p> <p>(b)If $y_{0}=2$,find the time $T$ at which the solution first reaches the value of 3.99</p> <p>(c)Find the range of initial values for which the solution lies in the interval $3.99&lt;y&lt;4.01$ by the time $t=2$.</p> <p>What i tried</p> <p>(a)I first solve the following IVP to get a general solution of $$y=\frac{4A}{(1+t)^{4}e^{-4t}+A} $$ And from here, as $t$ tends to infinity, it can be seen that the solution becomes $4$</p> <p>(b)For part (b) by substituting $y_{0}=2$ into the general solution, the expression becomes $$y=\frac{4}{(1+t)^{4}e^{-4t}+1} $$. I then let $y=3.99$ to get a value of $t$. Im stuck from here onwards,however, as it dont understand the rationale of the question. Espicallly for part (c). I know that as the solution becomes $4$, $T$ will tends to infinity. But what if the solution becomes $3.99$ instead. And for part (c) i dont really get what the question means when it ask to find the range of initial values for which the solution lies in the interval $3.99&lt;y&lt;4.01$.Could anyone explain. Thanks</p>
Autolatry
25,097
<p>Two relations may help</p> <p>\begin{eqnarray} \sin \theta \cos \phi &amp;=&amp; \frac{\sin(\theta + \phi)+\sin(\theta-\phi)}{2} \\ \cos^{2} \frac{\theta}{2} &amp;=&amp; \frac{1+\cos \theta}{2} \end{eqnarray}</p>
121,924
<p>Why is <span class="math-container">$\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right)$</span> nonzero?</p> <p>Context: This is problem <span class="math-container">$2.25 (iii)$</span> of page <span class="math-container">$69$</span> Rotman's Introduction to Homological Algebra:</p> <blockquote> <p>Prove that <span class="math-container">$$\mathrm{Hom}_{\mathbb{Z}}\left(\prod_{n \geq 2}\mathbb{Z}_{n},\mathbb{Q}\right) \ncong \prod_{n \geq 2}\mathrm{Hom}_{\mathbb{Z}}(\mathbb{Z}_{n},\mathbb{Q}).$$</span></p> </blockquote> <p>The right hand side is <span class="math-container">$0$</span> because <span class="math-container">$\mathbb{Z}_{n}$</span> is torsion and <span class="math-container">$\mathbb{Q}$</span> is not.</p>
Mariano Suárez-Álvarez
274
<p>Let $G=\prod_{n\geq2}\mathbb Z_n$ and let $t(G)$ be the torsion subgroup, which is properly contained in $G$ (the element $(1,1,1,\dots)$ is not in $t(G)$, for example) Then $G/t(G)$ is a torsion-free abelian group, which therefore embeds into its localization $(G/t(G))\otimes_{\mathbb Z}\mathbb Q$, which is a non-zero rational vector space, and in fact generates it as a vector space. There is a non-zero $\mathbb Q$-linear map $(G/t(G))\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q$ (here the Choice Police will observe that we are using the axiom of choice, of course...). Composing, we get a non-zero morphism $$G\to G/t(G)\to (G/t(G))\otimes_{\mathbb Z}\mathbb Q\to\mathbb Q.$$</p> <p><em>Remark.</em> If $H$ is a torsion-free abelian group, its finitely generated subgroups are free, so flat. Since $H$ is the colimit of its finitely generated subgroups, it is itself flat, and tensoring the exact sequence $0\to\mathbb Z\to\mathbb Q$ with $H$ gives an exact sequence $0\to H\otimes_{\mathbb Z}\mathbb Z=H\to H\otimes_{\mathbb Z}\mathbb Q$. Doing this for $H=G/t(G)$ shows $G/t(G)$ embeds in $(G/t(G))\otimes_{\mathbb Z}\mathbb Q$, as claimed above.</p>
1,130,142
<p><img src="https://i.stack.imgur.com/NXr1V.png" alt="enter image description here"></p> <p>This is how I solved this problem but I have some reservations regarding my answer.</p> <p>1st house = x ; 2nd house = 3x ; 3rd house = [3x + x] - 2610</p> <p>12(x) + 12(3x) + 12(4x - 2610) = 186,390</p> <p>96x = 155,070</p> <p>x = 1615.3125</p> <p>__</p> <p>4(1615.3125) - 2610 = 3,851.25</p> <p>I answered 'none of the above'. Is my solution correct? How about my answer? Did I miss something? If there is some kind of shortcut in answering this problem, please let me know.</p> <p>PS I am a college student having troubles with word problems.</p>
coolcheetah
195,418
<p>Let the rent for each house be A, B and C for Houses 1, 2 and 3 respectively. Therefore B = 3A and C = A + B - 2610.</p> <p>It is also given that only 6 months' rent is collected from the tenant of House 1. Therefore</p> <p>186390 = 12[A + B - 2610] + 36A + 6A = 90A - 31320,</p> <p>so A = 2419 and B = 7257.</p> <p>Therefore C = 7066 (which would correspond to "none of the above").</p>
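The arithmetic can be verified directly from the relations used in this answer (the original problem statement is in an image, so the relations themselves are taken from the answer above):

```python
A = 2419            # monthly rent of House 1
B = 3 * A           # House 2 rents for three times House 1
C = A + B - 2610    # House 3: 2610 less than Houses 1 and 2 combined

# House 1's tenant paid only 6 months; the other two paid all 12.
assert B == 7257 and C == 7066
assert 6 * A + 12 * B + 12 * C == 186390
```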
1,753,620
<p>How do I find the matrix exponential $e^{tA}$ with </p> <p>$$A = \left(\begin{matrix} 2 &amp; 8 \\ 0 &amp; 2\end{matrix}\right)$$</p> <p>The eigenvalue is 2 with multiplicity 2, but it yields only 1 eigenvector {${1, 0}$}, so the matrix isn't diagonalizable. I'm confused what to do. One option is to convert it into a Jordan form, but how do I do that?</p> <p>Any help is appreciated. Seems like a simple problem, but it's been bugging me for a while. </p>
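One concrete route (added here as a sketch, not necessarily what the course intends): write $A = 2I + N$ with $N=\begin{pmatrix}0&8\\0&0\end{pmatrix}$ nilpotent. Since $2I$ commutes with $N$ and $N^2=0$, $e^{tA} = e^{2t}(I + tN) = e^{2t}\begin{pmatrix}1&8t\\0&1\end{pmatrix}$. A crude Taylor-series evaluation confirms this numerically:

```python
from math import exp

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matexp(M, terms=40):
    # plain Taylor series sum_k M^k / k!  (adequate for small matrices)
    result = [[1.0, 0.0], [0.0, 1.0]]
    power = [[1.0, 0.0], [0.0, 1.0]]
    fact = 1.0
    for k in range(1, terms):
        power = matmul(power, M)
        fact *= k
        for i in range(2):
            for j in range(2):
                result[i][j] += power[i][j] / fact
    return result

t = 0.5
A = [[2.0, 8.0], [0.0, 2.0]]
tA = [[t * A[i][j] for j in range(2)] for i in range(2)]
numeric = matexp(tA)
closed = [[exp(2 * t), 8 * t * exp(2 * t)], [0.0, exp(2 * t)]]
for i in range(2):
    for j in range(2):
        assert abs(numeric[i][j] - closed[i][j]) < 1e-9
```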
Edward Pickman Derby
305,007
<p>a) Note: $|f_n(x)| = \Bigg| \frac{\sin(nx)}{1+n^3}\Bigg| \leq \Bigg|\frac{1} {1+n^3} \Bigg|$ for all $n \in \mathbb{N}$ and for all $x \in \mathbb{R}$. Observe also that $\sum_{n=1}^{\infty} \frac{1} {1+n^3} &lt; \infty$. </p> <p>So by the Weierstrass M-Test, $\sum_{n=1}^{\infty} \frac{\sin(nx)}{1+n^3}$ converges uniformly on $\mathbb{R}$. </p> <p>Note also that $f_n = \frac{\sin(nx)}{1+n^3}$ is uniformly continuous for all $n \in \mathbb{N}$. Then it follows from the <strong>Term-by-term Continuity Theorem</strong> (Abbott 2nd edition page 188) that $f(x)$ is continuous on $\mathbb{R}$. </p> <p>b) A uniform limit on a set $I$ of uniformly continuous functions on $I$ is uniformly continuous on $I$.</p>
2,426,361
<p>What would be the best mathematical tool/concept to measure how far a matrix is from being singular? Could it be the condition number?</p>
user1551
1,551
<p>Given a matrix norm induced by a vector norm of your choice, the distance of an invertible matrix $A$ to its nearest singular matrix, i.e. $\min\{\|A-B\|:\ B \text{ is singular}\}$, is known to be $\|A^{-1}\|^{-1}=\|A\|/\kappa(A)$.</p> <p>Note that this is a concept different from (but closely related to) the condition number $\kappa(A)=\|A\|\|A^{-1}\|$. What the condition number measures is not how "singular" a matrix is in terms of its nearness to singular matrices, but how singular it is in terms of its effect on the relative error in the solution $x$ of $Ax=b$ (relative to the relative error in the coefficient vector $b$ ). For most purposes, what people concern is the condition number rather than the distance to the nearest singular matrix.</p>
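For the spectral norm this has a concrete reading: $\|A^{-1}\|^{-1}$ is the smallest singular value of $A$, which is exactly the 2-norm distance to the nearest singular matrix (by the Eckart-Young theorem). A hand-rolled $2\times 2$ check, where the matrix is an arbitrary example:

```python
from math import sqrt

def singular_values_2x2(M):
    # singular values = square roots of the eigenvalues of M^T M
    (a, b), (c, d) = M
    p, q, r = a * a + c * c, a * b + c * d, b * b + d * d  # M^T M entries
    tr, det = p + r, p * r - q * q
    disc = sqrt(tr * tr - 4 * det)
    return sqrt((tr + disc) / 2), sqrt((tr - disc) / 2)

A = [[2.0, 1.0], [0.0, 2.0]]
A_inv = [[0.5, -0.25], [0.0, 0.5]]       # inverse of A (det A = 4)

smax_inv, _ = singular_values_2x2(A_inv)
dist_to_singular = 1 / smax_inv          # = ||A^{-1}||_2^{-1}
_, smin_A = singular_values_2x2(A)       # smallest singular value of A
assert abs(dist_to_singular - smin_A) < 1e-9
```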
3,443,137
<p>Find the radius of the circle tangent to <span class="math-container">$3$</span> other circles <span class="math-container">$O_1$</span>, <span class="math-container">$O_2$</span> and <span class="math-container">$O_3$</span>, which have radii <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span>.</p> <p>The Wikipedia page about the Problem of Apollonius can be found <a href="https://en.wikipedia.org/wiki/Problem_of_Apollonius" rel="nofollow noreferrer">here</a>.</p> <p>I can't provide you an exact image because the construction is too complex. What I have done is draw tons of lines in an attempt to use Pythagoras's theorem and trig to solve it.</p>
achille hui
59,379
<p>I recently needed to compute something like this and I finally used the <a href="https://en.wikipedia.org/wiki/Cayley%E2%80%93Menger_determinant" rel="nofollow noreferrer">Cayley-Menger determinant</a> to find the radius.</p> <p>Let <span class="math-container">$ABCD$</span> be a tetrahedron. Let <span class="math-container">$a,b,c$</span> be the sides of base triangle <span class="math-container">$ABC$</span> and <span class="math-container">$a_1,b_1,c_1$</span> be the distances between apex <span class="math-container">$D$</span> and vertices <span class="math-container">$A,B,C$</span> respectively. It is known that the volume <span class="math-container">$V$</span> of the tetrahedron can be computed using the CM determinant:</p> <p><span class="math-container">$$288V^2 = \mathcal{CM}(a,b,c,a_1,b_1,c_1) \stackrel{def}{=} \left| \begin{matrix} 0 &amp; 1 &amp; 1 &amp; 1 &amp; 1\\ 1 &amp; 0 &amp; a_1^2 &amp; b_1^2 &amp; c_1^2 \\ 1 &amp; a_1^2 &amp; 0 &amp; c^2 &amp; b^2\\ 1 &amp; b_1^2 &amp; c^2 &amp; 0 &amp; a^2\\ 1 &amp; c_1^2 &amp; b^2 &amp; a^2 &amp; 0 \end{matrix}\right| $$</span></p> <p>If you have three circles centered at <span class="math-container">$A,B,C$</span> with radii <span class="math-container">$r_a, r_b, r_c$</span> and you want to find a circle with center <span class="math-container">$D$</span>, radius <span class="math-container">$r$</span> touching these <span class="math-container">$3$</span> circles, then <span class="math-container">$ABCD$</span> will form a degenerate tetrahedron with volume <span class="math-container">$V = 0$</span>. This means the radius <span class="math-container">$r$</span> satisfies an equation of the form:</p> <p><span class="math-container">$$f(r) = 0\quad\text{ where }\quad f(r) \stackrel{def}{=} \mathcal{CM}(a,b,c,r_a + r, r_b + r, r_c + r)$$</span></p> <p>This looks horrible at first glance. 
However, if you expand this out, you will find <span class="math-container">$f(r)$</span> is a quadratic polynomial in <span class="math-container">$r$</span>. </p> <p>It is a little bit complicated to write down all the coefficients of <span class="math-container">$f(r)$</span>. In my code, I just implement a function to compute the CM determinant. I use the function to compute the values of <span class="math-container">$f(r)$</span> at <span class="math-container">$r = 0,\pm 1$</span>, back out the coefficients of <span class="math-container">$f(r)$</span> and finally solve the quadratic equation.</p> <p>As pointed out in another answer, there are in general <span class="math-container">$8$</span> circles touching the <span class="math-container">$3$</span> given circles. In a typical configuration where the interiors of the three circles are disjoint from each other, the absolute values of the two roots of <span class="math-container">$f(r)$</span> are the radii of the innermost and outermost Apollonius' circles. The radii of the other <span class="math-container">$6$</span> circles can be obtained by flipping the signs of <span class="math-container">$r_a, r_b, r_c$</span> in the definition of <span class="math-container">$f(r)$</span>.</p> <p>Finally, for completeness, the formula for the CM determinant is the following horrible mess: <span class="math-container">$$\mathcal{CM}(a,b,c,a_1,b_1,c_1) = 2 \times \begin{cases} &amp; a^2(a_1^2(b^2+c^2-a^2) - (a_1^2-b_1^2)(a_1^2-c_1^2))\\ + &amp; b^2(b_1^2(c^2+a^2-b^2) - (b_1^2-c_1^2)(b_1^2-a_1^2))\\ + &amp; c^2(c_1^2(a^2+b^2-c^2) - (c_1^2-a_1^2)(c_1^2-b_1^2))\\ - &amp; (abc)^2\\ \end{cases}$$</span></p>
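A sketch of that procedure (my reconstruction, not the author's actual code): it uses the closed-form expression for $\mathcal{CM}$ given at the end of the answer, recovers the quadratic $f(r)$ from its values at $r=0,\pm1$, and is exercised on three mutually tangent unit circles, where Descartes' circle theorem gives inner and outer tangent radii $(2\sqrt3\mp3)/3$.

```python
import math

def cm(a, b, c, a1, b1, c1):
    # Expanded Cayley-Menger determinant (equals 288 V^2 for the tetrahedron)
    A, B, C = a*a, b*b, c*c
    A1, B1, C1 = a1*a1, b1*b1, c1*c1
    return 2 * (A*(A1*(B + C - A) - (A1 - B1)*(A1 - C1))
              + B*(B1*(C + A - B) - (B1 - C1)*(B1 - A1))
              + C*(C1*(A + B - C) - (C1 - A1)*(C1 - B1))
              - A*B*C)

def apollonius_radii(a, b, c, ra, rb, rc):
    # f(r) = cm(a, b, c, ra+r, rb+r, rc+r) is quadratic in r; recover its
    # coefficients from f(0), f(1), f(-1) and solve.
    f0 = cm(a, b, c, ra, rb, rc)
    f1 = cm(a, b, c, ra + 1, rb + 1, rc + 1)
    fm1 = cm(a, b, c, ra - 1, rb - 1, rc - 1)
    p2 = (f1 + fm1 - 2*f0) / 2
    p1 = (f1 - fm1) / 2
    disc = math.sqrt(p1*p1 - 4*p2*f0)
    return (-p1 - disc) / (2*p2), (-p1 + disc) / (2*p2)

# Three mutually tangent unit circles: centers form an equilateral triangle
# of side 2, so a = b = c = 2 and ra = rb = rc = 1.
r1, r2 = apollonius_radii(2, 2, 2, 1, 1, 1)
```

As described above, the two roots have the absolute values of the innermost and outermost tangent radii, here $\approx 0.1547$ and $\approx 2.1547$.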
728,495
<p>I'm taking Discrete Math this semester. While I understand the mechanics of proofs, I find that I must refine my understanding of how to work them. To that end, I'm working through some extra problems on spring break. Please read over this proof I did from an exercise from the book. I apologize in advance for poor formatting. I just couldn't figure out how to make this one big block of LaTeX commands. I'm still learning.</p> <p>Let A, B, C be subsets of a Universal set U.</p> <p>Given $A \cap B \subseteq C \wedge A^c \cap B \subseteq C \Rightarrow B \subseteq C$</p> <p>Proof: Part 1:</p> <p>$$ \begin{array}{rcl} B &amp; = &amp; (A \cap B) \cup (A^c \cap B) \\ &amp; = &amp; (A \cap B) \cup B , \text{definition of intersection} \\ &amp; = &amp; B \, \square \end{array} $$</p> <p>Part 2: Part 1 states $B = (A \cap B) \cup (A^c \cap B)$.<br> By assumption, $(A \cap B) \subseteq C \wedge (A^c \cap B) \subseteq C$<br> $\therefore B \subseteq C$</p> <p>Thanks<br> Andy</p>
MarnixKlooster ReinstateMonica
11,994
<p>Here is an alternative proof which goes back to the definitions and solves the problem using the rules of logic: \begin{align} &amp; A \cap B \subseteq C \;\land\; A^c \cap B \subseteq C \\ \equiv &amp; \qquad \text{"definition of $\;\subseteq\;$, twice"} \\ &amp; \langle \forall x :: x \in A \cap B \Rightarrow x \in C \rangle \;\land\; \langle \forall x :: x \in A^c \cap B \Rightarrow x \in C \rangle \\ \equiv &amp; \qquad \text{"logic: simplify: $\;\forall\;$ distributes over $\;\land\;$"} \\ &amp; \langle \forall x :: (x \in A \cap B \Rightarrow x \in C) \;\land\; (x \in A^c \cap B \Rightarrow x \in C) \rangle \\ \equiv &amp; \qquad \text{"logic: simplify by merging consequents"} \\ &amp; \langle \forall x :: x \in A \cap B \;\lor\; x \in A^c \cap B \;\Rightarrow\; x \in C \rangle \\ \equiv &amp; \qquad \text{"definition of $\;\cap\;$, twice; definition of $\;^c\;$"} \\ &amp; \langle \forall x :: (x \in A \;\land\; x \in B) \;\lor\; (x \;\not\in\; A \;\land\; x \in B) \;\Rightarrow\; x \in C \rangle \\ \equiv &amp; \qquad \text{"logic: simplify: $\;\land\;$ distributes over $\;\lor\;$"} \\ &amp; \langle \forall x :: (x \in A \;\lor\; x \not\in A) \;\land\; x \in B \;\Rightarrow\; x \in C \rangle \\ \equiv &amp; \qquad \text{"logic: excluded middle; simplify"} \\ &amp; \langle \forall x :: x \in B \;\Rightarrow\; x \in C \rangle \\ \equiv &amp; \qquad \text{"definition of $\;\subseteq\;$"} \\ &amp; B \;\subseteq\; C \\ \end{align}</p> <p>Note how every step is actually an equivalence, so this also proves the other direction.</p>
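As a sanity check (my addition, not part of the proof), the equivalence can be brute-forced over all $8^3$ choices of subsets of a three-element universe:

```python
from itertools import chain, combinations

U = {0, 1, 2}
subsets = [set(s) for s in chain.from_iterable(
    combinations(sorted(U), k) for k in range(len(U) + 1))]

# (A n B subset C  and  A^c n B subset C)  iff  (B subset C), for all A, B, C in P(U)
all_match = all(
    (((A & B) <= C) and (((U - A) & B) <= C)) == (B <= C)
    for A in subsets for B in subsets for C in subsets
)
```

Every combination matches, which also confirms the remark that the argument runs in both directions.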
3,501,879
<p>I have been stuck on this problem for some time now. I'd really appreciate your help. Thanks.</p> <p><span class="math-container">$$2\sin^2(x)+6\cos^2(\frac x4)=5-2k$$</span></p>
Claude Leibovici
82,404
<p><em>Too long for a comment but I cannot resist when I see an equation!</em></p> <p>As @lhf answered, we are looking for the maximum of <span class="math-container">$$f(x)=2 \sin ^2(x)+6 \cos ^2\left(\frac{x}{4}\right)$$</span> so for the zeros of <span class="math-container">$$f'(x)=2 \sin (2 x)-\frac{3}{2} \sin \left(\frac{x}{2}\right)$$</span> Let <span class="math-container">$x=2 \sin ^{-1}(t)$</span> to get as a new equation <span class="math-container">$$8 t \left(1-2 t^2\right) \sqrt{1-t^2}-\frac{3 t}{2}=0$$</span> Discarding the trivial <span class="math-container">$t=0$</span> and squaring <span class="math-container">$$1024 t^6-2048 t^4+1280 t^2-247=0$$</span> which is a cubic in <span class="math-container">$t^2$</span> and, using <span class="math-container">$z=t^2$</span>, the solutions are given by <span class="math-container">$$z_k=\frac{2}{3}+\frac{1}{3} \cos \left(\frac{2 \pi k}{3}-\frac{1}{3} \cos ^{-1}\left(\frac{13}{256}\right)\right) \qquad \text{with} \qquad k=0,1,2$$</span> Back to <span class="math-container">$t$</span> and <span class="math-container">$x$</span>, the final solution is <span class="math-container">$$x_*=2 \sin ^{-1}\left(\sqrt{\frac{2}{3}-\frac{1}{3} \sin \left(\frac{\pi }{6}+\frac{1}{3} \cos^{-1}\left(\frac{13}{256}\right)\right)}\right)\approx 1.33019$$</span> and then <span class="math-container">$$f(x_*)=4+\sqrt{3\left(1+\cos \left(\frac{\pi }{6}+\frac{1}{3} \sin ^{-1}\left(\frac{13}{256}\right)\right) \right) }+$$</span> <span class="math-container">$$\cos \left(2 \sin ^{-1}\left(-\frac 13+\frac 23\sin \left(\frac{\pi }{6}+\frac{1}{3} \cos ^{-1}\left(\frac{13}{256}\right)\right)\right)\right)\approx 7.24701$$</span></p> <p><strong>Edit</strong></p> <p>This is exactly the same result as the elegant one provided by @bjorn93 (which, I must confess, I did not pay attention to when I started working the problem).</p>
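A quick numerical cross-check of these constants (my addition): since $f(x)=f(4\pi-x)$ and $f$ has period $4\pi$, a grid search over $[0,2\pi]$ suffices.

```python
import math

def f(x):
    return 2 * math.sin(x)**2 + 6 * math.cos(x / 4)**2

# f(x) = f(4*pi - x) and f has period 4*pi, so [0, 2*pi] captures the maximum
step = 1e-4
xs = [k * step for k in range(int(2 * math.pi / step) + 1)]
x_best = max(xs, key=f)
f_best = f(x_best)
```

The grid search lands on the stationary point and maximum value quoted above, $x_*\approx 1.33019$ and $f(x_*)\approx 7.24701$.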
3,862,408
<p>This is the second example of 1. in <a href="http://www-personal.umich.edu/%7Ebhattb/teaching/mat679w17/lectures.pdf" rel="nofollow noreferrer">Ex. 2.0.3</a> of Bhatt's notes on perfectoid spaces.</p> <p>We define <span class="math-container">$R^{perf}:= \varprojlim ( \cdots R \xrightarrow{\phi} R)$</span> where <span class="math-container">$\phi$</span> is the Frobenius map.</p> <p>He claims that for <span class="math-container">$R=\Bbb F_p[t]$</span> we have <span class="math-container">$R^{perf}=\Bbb F_p$</span>.</p> <p><strong>I don't see why.</strong></p> <hr /> <p>My thoughts: From the map <span class="math-container">$\Bbb F_p[t] \rightarrow \Bbb F_p$</span>, <span class="math-container">$t \mapsto 0$</span>, we get an induced map on limits. Since <span class="math-container">$\Bbb F_p$</span> is perfect, <span class="math-container">$$g:\varprojlim \Bbb F_p[t] \rightarrow \Bbb F_p$$</span></p> <p>There is also a canonical <span class="math-container">$\Bbb F_p \rightarrow \Bbb F_p[t]$</span>, inducing <span class="math-container">$$h:\Bbb F_p \rightarrow \varprojlim \Bbb F_p[t]$$</span></p> <p>I'm guessing that these two maps are inverses. It's clear that <span class="math-container">$hg=id$</span>. However, it is less clear to me why <span class="math-container">$gh=id$</span>.</p>
Arturo Magidin
742
<p>If <span class="math-container">$g\in N_G(H)$</span>, <span class="math-container">$h\in H$</span>, and <span class="math-container">$x\in X^H$</span>, then you know that <span class="math-container">$g^{-1}hg(x) = x$</span> (since <span class="math-container">$g^{-1}hg\in H$</span>). That means that <span class="math-container">$h(gx) = gx$</span>, and hence that <span class="math-container">$gx$</span> is also fixed by <span class="math-container">$h$</span>; as <span class="math-container">$h$</span> is arbitrary, this shows that <span class="math-container">$gx\in X^H$</span>. This holds for all <span class="math-container">$g\in N_G(H)$</span>, so <span class="math-container">$N_G(H)$</span> sends <span class="math-container">$X^H$</span> to itself. The action of <span class="math-container">$H$</span> on that set is trivial, so the action of <span class="math-container">$N_G(H)$</span> on <span class="math-container">$X^H$</span> factors through <span class="math-container">$N_G(H)/H$</span> (which makes sense since <span class="math-container">$H\triangleleft N_G(H)$</span>). The action is that of <span class="math-container">$G$</span>, restricted to <span class="math-container">$X^H$</span>.</p>
408,717
<p>Let $n\in \mathbb N$ and $A_1,A_2,\ldots,A_n$ be arbitrary sets. Now define $X=[x_{ij}]_{n \times n}$ where $$x_{ij}= \begin{cases} 1, &amp; A_i \subsetneq A_j \\ 0, &amp; \text{otherwise} \end{cases}$$ How do you prove $X^n=0$?</p> <p>Thanks in advance.</p>
Julien
38,053
<p><strong>Hint:</strong> by iteration of the formula $(XY)_{ij}=\sum_{k=1}^nx_{ik}y_{kj}$ for the coefficients of a matrix product, we have $$ (X^n)_{ij}=\sum_{1\leq i_1,\ldots,i_{n-1}\leq n}x_{ii_1}x_{i_1i_2}\cdots x_{i_{n-1}j}. $$</p> <blockquote class="spoiler"> <p>For a term of this sum to be nonzero, we need $x_{ii_1}=x_{i_1i_2}=\ldots=x_{i_{n-1}j}=1$, i.e. $A_{i}\subsetneq A_{i_1}\subsetneq \ldots\subsetneq A_{i_{n-1}}\subsetneq A_j$. Can this happen? How many sets in the chain?</p> </blockquote>
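To see the hint in action (my addition; the four sets are an arbitrary example), one can build $X$ for a concrete family of sets and verify $X^n=0$:

```python
# Arbitrary example with n = 4 sets
sets = [{1}, {1, 2}, {2}, {1, 2, 3}]
n = len(sets)

# x_ij = 1 iff A_i is a *proper* subset of A_j
X = [[1 if sets[i] < sets[j] else 0 for j in range(n)] for i in range(n)]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Xn = X
for _ in range(n - 1):   # compute X^n
    Xn = matmul(Xn, X)

is_zero = all(v == 0 for row in Xn for v in row)
```

A nonzero entry of $X^n$ would require a strictly increasing chain of $n+1$ sets drawn from only $n$ sets, which is exactly the contradiction the hint points at.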
19,815
<p>Problem:</p> <blockquote> <p>Prove that if gcd( a, b ) = 1, then gcd( a - b, a + b ) is either 1 or 2.</p> </blockquote> <p>From Bezout's Theorem, I see that am + bn = 1, and a, b are relative primes. However, I could not find a way to link this idea to a - b and a + b. I realized that in order to have gcd( a, b ) = 1, they must not be both even. I played around with some examples (13, 17), ...and I saw it's actually true :( ! Any idea?</p>
Arturo Magidin
742
<p>The gcd of $x$ and $y$ divides any linear combination of $x$ and $y$. And any number that divides $r$ and $s$ divides the gcd of $r$ and $s$.</p> <p>If you add $a+b$ and $a-b$, you get <code>&lt;blank&gt;</code>, so $\mathrm{gcd}(a+b,a-b)$ divides <code>&lt;blank&gt;</code>.</p> <p>If you subtract $a-b$ from $a+b$, you get <code>&lt;blankity&gt;</code>, so $\mathrm{gcd}(a+b,a-b)$ divides <code>&lt;blankity&gt;</code>.</p> <p>So $\mathrm{gcd}(a+b,a-b)$ divides $\mathrm{gcd}($<code>&lt;blank&gt;,&lt;blankity&gt;</code>$) = $<code>&lt;blankety-blank&gt;</code>.</p> <p>(For good measure, assuming the result is true you'll want to come up with examples where you get $1$ and examples where you get $2$, just to convince yourself that the statement you are trying to prove is the best you can do).</p>
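A brute-force check of the claim (my addition, which necessarily gives away the blanks):

```python
from math import gcd

# Collect gcd(a+b, a-b) over coprime pairs a > b >= 1
values = set()
for a in range(2, 200):
    for b in range(1, a):
        if gcd(a, b) == 1:
            values.add(gcd(a + b, a - b))
```

Both values occur: e.g. $a=3, b=2$ gives $\gcd(5,1)=1$, while $a=3, b=1$ gives $\gcd(4,2)=2$.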
1,757,260
<p>A little box contains $40$ smarties: $16$ yellow, $14$ red and $10$ orange.</p> <p>You draw $3$ smarties at random (without replacement) from the box.</p> <p>What is the probability (as a percentage) that you get $2$ smarties of one color and a third smartie of a different color?</p> <p>Round your answer to the nearest integer.</p> <p>The answer given is $67$. I don't get it. Is it not: $$\left(\frac{16}{40} \times \frac{15}{39} \times\frac{24}{38}\right) + \left(\frac{14}{40} \times\frac{13}{39} \times\frac{26}{38}\right) +\left(\frac{10}{40} \times\frac{9}{39} \times\frac{30}{38}\right)= 22?$$</p>
M47145
188,658
<p>Your options are "Exactly two yellow smarties, exactly two red smarties, or exactly two orange smarties."</p> <p>If $P$ represents your final probability, you need to add up the following probabilities:</p> <p>$P($exactly two yellows$) \, + \, P($exactly two reds$) \, + \,P($exactly two orange$)$</p> <p>The probability of getting exactly two yellow smarties is $3\cdot \frac{16}{40}\cdot \frac{15}{39}\cdot \frac{24}{38}$. The reason we multiply by $3$ is that there are three different ways to choose two yellow smarties, i.e. $\binom{3}{2}=3$. The first two draws can be yellow, the first and the last can be yellow, or the last two draws can be yellow.</p> <p>Similarly we can find $P($exactly two reds$)$ and $P($exactly two orange$)$. </p> <p>Thus $P=3\left( \frac{16}{40}\cdot \frac{15}{39}\cdot \frac{24}{38}\right)+ 3\left( \frac{14}{40}\cdot \frac{13}{39}\cdot \frac{26}{38}\right)+3\left( \frac{10}{40}\cdot \frac{9}{39}\cdot \frac{30}{38}\right)\approx 67$%.</p>
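The computation can be confirmed in exact arithmetic (my addition), both in the sequential-draw form used above and in the equivalent counting form:

```python
from fractions import Fraction as F
from math import comb

# Sequential-draw form, as in the answer (factor 3 = number of positions
# for the odd-one-out draw)
p_seq = 3 * (F(16, 40) * F(15, 39) * F(24, 38)
           + F(14, 40) * F(13, 39) * F(26, 38)
           + F(10, 40) * F(9, 39) * F(30, 38))

# Equivalent counting form: choose 2 of one colour, then 1 of another colour
p_cnt = F(comb(16, 2) * 24 + comb(14, 2) * 26 + comb(10, 2) * 30, comb(40, 3))
```

Both forms reduce to the same fraction, which rounds to $67\%$.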
1,375,085
<p>It is the first time I met such a question:</p> <blockquote> <p>Which is greater as $n$ gets larger, $f(n)=2^{2^{2^n}}$ or $g(n)=100^{100^n}$?</p> </blockquote> <p>Intuitively I think $f(n)$ would gradually become larger as $n$ gets larger, but I find it hard to produce an argument. Is there any trick to use for this type of question?</p>
Jonathan Aronson
83,164
<p>Try taking the logs. Log is a monotonic transformation.</p>
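To elaborate on the hint (my addition): taking $\log_2$ twice, which is valid since both quantities exceed $2$, reduces the comparison to $2^n$ versus $n\log_2 100+\log_2\log_2 100$, and the crossover can be located numerically.

```python
import math

def loglog_f(n):
    # log2(log2(2**(2**(2**n)))) = log2(2**(2**n)) = 2**n
    return 2.0 ** n

def loglog_g(n):
    # log2(log2(100**(100**n))) = log2(100**n * log2(100))
    return n * math.log2(100) + math.log2(math.log2(100))

# Smallest n with f(n) > g(n): exponential 2**n versus a linear function of n
crossover = next(n for n in range(1, 50) if loglog_f(n) > loglog_g(n))
```

So $g(n)$ is larger for small $n$ (e.g. $f(1)=16$ while $g(1)=100^{100}$), but $f(n)$ dominates from $n=6$ onward.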
2,699,621
<p>To show $1 + \frac12 x - \frac18 x^2 &lt; \sqrt{1+x}$, is it enough to note that the Taylor series expansion of $\sqrt{1+x}$ around $0$ starts with these terms and that the next term is positive?</p>
user284331
284,331
<p>Let $\varphi(x)=\sqrt{1+x}-1-\dfrac{1}{2}x+\dfrac{1}{8}x^{2}$, $x\geq 0$, then for $x&gt;0$, \begin{align*} \varphi(x)&amp;=\varphi(x)-\varphi(0)\\ &amp;=\varphi'(\xi)x\\ &amp;=\left(\dfrac{1}{2}(1+\xi)^{-1/2}-\dfrac{1}{2}+\dfrac{1}{4}\xi\right)x. \end{align*} Now let $\eta(x)=\dfrac{1}{2}(1+x)^{-1/2}-\dfrac{1}{2}+\dfrac{1}{4}x$, $x\geq 0$, then for $x&gt;0$, \begin{align*} \eta(x)&amp;=\eta(x)-\eta(0)\\ &amp;=\eta'(\omega)x\\ &amp;=\left(-\dfrac{1}{4}(1+\omega)^{-3/2}+\dfrac{1}{4}\right)x\\ &amp;=\dfrac{1}{4}\left(1-\dfrac{1}{(1+\omega)^{3/2}}\right)x\\ &amp;&gt;0, \end{align*} so in particular, $\eta(\xi)&gt;0$ because $0&lt;\xi&lt;x$, and hence $\varphi(x)&gt;0$ for $x&gt;0$.</p>
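A numerical sanity check of the inequality for positive $x$ (my addition; the grid is arbitrary):

```python
import math

def gap(x):
    # sqrt(1+x) minus the quadratic Taylor polynomial
    return math.sqrt(1 + x) - (1 + x/2 - x*x/8)

xs = [k / 1000 for k in range(1, 50001)]   # x in (0, 50]
min_gap = min(gap(x) for x in xs)
```

The gap stays positive everywhere on the grid, but is tiny near $0$ (of order $x^3/16$), consistent with the Taylor-series intuition in the question.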
2,669,277
<p>In his textbook <em>Calculus</em>, Spivak presents integration by parts as follows: </p> <p>If $f'$ and $g'$ are continuous then \begin{align*} \int fg'&amp;=fg-\int f'g\\ \int f(x)g'(x)\,dx&amp;=f(x)g(x)-\int f'(x)g(x)\,dx\\ \int_a^b f(x)g'(x)\,dx&amp;=f(x)g(x)\bigg|_a^b-\int_a^b f'(x)g(x)\,dx\\ \end{align*} I understand that without the continuity requirement, $fg'$ and $gf'$ may not be integrable, but why isn't it enough to have $f'$ and $g'$ be integrable functions? Isn't the product of two Riemann-integrable functions necessarily Riemann-integrable?</p>
ryang
21,813
<blockquote> <p>why isn't it enough to have <span class="math-container">$f'$</span> and <span class="math-container">$g'$</span> be integrable functions? <span class="math-container">$$\int_a^b f(x)g'(x)\,dx =f(x)g(x)\bigg|_a^b-\int_a^b f'(x)g(x)\,dx$$</span></p> </blockquote> <p>Yes, <span class="math-container">$f′$</span> and <span class="math-container">$g′$</span> being integrable on <span class="math-container">$[a,b]$</span> is sufficient.</p> <blockquote> <p><span class="math-container">$$\int fg'=fg-\int f'g$$</span></p> </blockquote> <p>For <strong><em>indefinite</em> integration by parts</strong>, the only requirement is that on the intersection of their domains, <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are differentiable and <em>one</em> of them has an antiderivative.</p> <p><strong>Proof</strong></p> <p>Without loss of generality suppose that <span class="math-container">$g$</span> has an antiderivative. Then <span class="math-container">$f'g$</span> has an antiderivative; denote it by <span class="math-container">$H.$</span> Then <span class="math-container">$(fg-H)$</span> is an antiderivative of <span class="math-container">$fg'$</span>.</p> <p>Now, by the product rule, <span class="math-container">$(fg)'=f'g+fg'.$</span> So, <span class="math-container">$fg=\int(f'g+fg').$</span></p> <ul> <li>Therefore, <span class="math-container">$fg=\left(\int f'g\right)+\left(\int fg'\right),$</span> so <span class="math-container">$$\int fg'=fg-\int f'g.$$</span></li> <li>By the Fundamental Theorem of Calculus, <span class="math-container">$\int_a^b(f'g+fg')=fg\bigg|_a^b\ ,$</span> i.e., <span class="math-container">$$\int_a^b fg'=fg\bigg|_a^b-\int_a^b f'g.$$</span></li> </ul>
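For a concrete instance of the definite formula (my addition; the choices $f(x)=x$, $g(x)=\sin x$ on $[0,1]$ are arbitrary), both sides can be compared by numerical integration:

```python
import math

f = lambda x: x
fp = lambda x: 1.0     # f'
g = math.sin
gp = math.cos          # g'

def trapezoid(h, a, b, n=100_000):
    # Composite trapezoid rule for the integral of h on [a, b]
    w = (b - a) / n
    return w * ((h(a) + h(b)) / 2 + sum(h(a + i * w) for i in range(1, n)))

lhs = trapezoid(lambda x: f(x) * gp(x), 0.0, 1.0)   # integral of f g' on [0, 1]
rhs = f(1.0) * g(1.0) - f(0.0) * g(0.0) \
      - trapezoid(lambda x: fp(x) * g(x), 0.0, 1.0)  # fg|_0^1 minus integral of f' g
```

Both sides agree with the closed form $\int_0^1 x\cos x\,dx=\sin 1+\cos 1-1$ to within the quadrature error.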
363,391
<p>In <a href="https://math.stackexchange.com/q/2602271/682690">this MathSE question</a>, classification of finite simple groups with Abelian Sylow 2-subgroups, credit is rightly given to John Walter. But in the introduction to his paper, Walter explicitly states that &quot;It seems to be a very difficult problem to show that these are the only examples.&quot; Is there a later reference, perhaps earlier than the complete classification theorem, that states that Walter, et al. found them all? Thanks for your help.</p>
Derek Holt
35,840
<p>The remark of Walter in his paper is referring specifically to the groups of Type (3) in his classification, that is, simple groups <span class="math-container">$S$</span> such that, for each involution <span class="math-container">$\tau \in S$</span>, we have <span class="math-container">$C_S(\tau) \cong \langle \tau \rangle \times {\rm PSL}(2,q)$</span> with <span class="math-container">$q \equiv \pm 3 \bmod 8$</span>.</p> <p>These include the first Janko group <span class="math-container">$J_1$</span> (with <span class="math-container">$q=5$</span>) and the groups of Ree type <span class="math-container">$^2G_2(q)$</span> with <span class="math-container">$q=3^k$</span> and <span class="math-container">$k$</span> odd.</p> <p>It was quickly proved that any unknown simple group of this type must have similar properties to the groups of Ree type. John Thompson devoted a lot of time trying to prove that there were no further groups of this type, and he eventually reduced it to a problem in algebraic geometry, which was finally settled by Bombieri in 1980 in the paper:</p> <p>Bombieri, Enrico (1980), appendices by Andrew Odlyzko and D. Hunt, &quot;Thompson's problem (<span class="math-container">$\sigma^2=3$</span>)&quot;, Inventiones Mathematicae, 58 (1): 77–100, doi:10.1007/BF01402275, ISSN 0020-9910, MR 0570875.</p> <p>Of course &quot;Thompson's Problem (<span class="math-container">$\sigma^2=3$</span>)&quot; is a strange title for a mathematical paper, but it was solving an important problem! I think Bombieri proved it for sufficiently large <span class="math-container">$q$</span>, and the appendices of the paper describe computer calculations to settle the remaining small values.</p> <p>So yes, this was resolved before the complete classification, but not so long before. I remember at the time that people were speculating that this problem might turn out to be the last one to be resolved.</p>
825,318
<p>Can someone please help me with these True and False questions? I've tried them myself, but I'm not very good at discrete math... Thank you in advance!</p> <ol> <li><p>For any sets $A$ and $B$ with $B\subseteq A$, if $f: B \to A$ is $1$-$1$ and onto, then $B = A$</p> <p>False?</p></li> <li><p>Let $A$ and $B$ be nonempty sets and $f:A \to B$ be a $1$-$1$ function. Then $f(X\cap Y) = f(X)\cap f(Y)$ for all nonempty subsets $X$ and $Y$ of $A$</p> <p>True?</p></li> <li><p>Let $A$ and $B$ be nonempty sets and $f:A \to B$ be a function. Then if $f(X\cap Y) = f(X)\cap f(Y)$ for all nonempty subsets $X$ and $Y$ of $A$, then $f$ must be $1$-$1$.</p> <p>False?</p></li> <li><p>There is no one-to-one correspondence between the set of all positive integers and the set of all odd positive integers because the second set is a proper subset of the first.</p> <p>False?</p></li> <li><p>If $(A \cup B\subset A \cup C)$ then $B\subset C$</p> <p>False?</p></li> <li><p>If $A$, $B$, and $C$ are three sets, then the only way that $A \cup C$ can equal $B \cup C$ is $A = B$.</p> <p>False?</p></li> <li><p>If the product $A \times B$ of two sets $A$ and $B$ is the empty set, then both $A$ and $B$ have to be the empty set.</p> <p>False?</p></li> </ol>
heropup
118,193
<p>The variance of the <strong>sum</strong> of the two measurements $X_1$ and $X_2$ is equal to the sum of the variances of each individual measurement; i.e., $${\rm Var}[X_1 + X_2] = {\rm Var}[X_1] + {\rm Var}[X_2].$$ Thus, the variance of the <strong>mean</strong> of the measurements is $${\rm Var}[\bar X] = {\rm Var}\left[\frac{X_1+X_2}{2}\right] = \frac{{\rm Var}[X_1] + {\rm Var}[X_2]}{4}.$$ Since the standard deviations are given as $$\sigma_1 = 0.0056h, \quad \sigma_2 = 0.0044h,$$ it follows that $$\begin{align*} {\rm Var}[X_1] &amp;= \sigma_1^2 = 0.00003136h^2, \\ {\rm Var}[X_2] &amp;= \sigma_2^2 = 0.00001936h^2. \end{align*}$$ This then easily gives us the variance of the mean of the measurements, and from that, it is easy to calculate the probability $$\Pr\left[|\bar X - \mu| &lt; .005h\right].$$</p>
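Carrying the calculation through (my addition; this assumes the two measurement errors are independent and normally distributed, and works in units of $h$):

```python
import math

h = 1.0                               # measure everything in units of h
sd1, sd2 = 0.0056 * h, 0.0044 * h     # standard deviations of the two measurements

var_mean = (sd1**2 + sd2**2) / 4      # Var[(X1 + X2) / 2]
sd_mean = math.sqrt(var_mean)

# For a normal Xbar: P(|Xbar - mu| < 0.005 h) = 2*Phi(z) - 1 = erf(z / sqrt(2))
z = 0.005 * h / sd_mean
prob = math.erf(z / math.sqrt(2))
```

The standard deviation of the mean comes out to about $0.00356h$, giving a probability of roughly $0.84$.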
50,227
<p>The problem I'm having is mapping a 3D triangle into 2 dimensions. I have three points in $(x,y,z)$ form, and want to map them onto the plane described by the normal of the triangle, such that I end up with three points in $(x,y)$ form.</p> <p>My guess would be it'd assign an arbitrary up vector and then doing something? Finding the distance traveled along the plane from one vertex to another? What do I do, and how do I do it?</p>
Ralph Dratman
120,442
<p>Suppose the three vertices of the 3D triangle are given by three coordinate triples <span class="math-container">$a, b, c$</span>. For example, <span class="math-container">$b = \{x_b,y_b,z_b\}$</span>. In Mathematica 10, a 2D triangle congruent to this 3D triangle is</p> <pre><code>SSSTriangle[Norm[b-a], Norm[c-b], Norm[a-c]] </code></pre> <p>I do not see an obvious way to use the original 3D coordinates to position the new triangle in 2D space. SSSTriangle uses an arbitrary but consistent placement.</p>
50,227
<p>The problem I'm having is mapping a 3D triangle into 2 dimensions. I have three points in $(x,y,z)$ form, and want to map them onto the plane described by the normal of the triangle, such that I end up with three points in $(x,y)$ form.</p> <p>My guess would be it'd assign an arbitrary up vector and then doing something? Finding the distance traveled along the plane from one vertex to another? What do I do, and how do I do it?</p>
Radim Cernej
161,576
<p>If you only care about the shape of the triangle, not about its orientation, then it is reasonably simple -just calculate the distances between the points. I believe that is what Ralph Dratman is doing above.</p> <p>Let us have the three 3D points</p> <pre><code>x1, y1, z1 x2, y2, z2 x3, y3, z3 </code></pre> <p>Let us call the sides of the triangle a, b and c, and its vertices A, B, C.</p> <pre><code>a = sqrt((x2-x1)^2 + (y2-y1)^2 + (z2-z1)^2) b = sqrt((x3-x2)^2 + (y3-y2)^2 + (z3-z2)^2) c = sqrt((x1-x3)^2 + (y1-y3)^2 + (z1-z3)^2) </code></pre> <p>To draw this triangle, you can choose any orientation and position. One choice is</p> <pre><code>A: [0,0] B: [a,0] </code></pre> <p>The position of C would now need to be calculated using the above calculated sides a, b, c.</p> <pre><code>C: [xc, yc] </code></pre> <p>According to Wolfram Alpha (after label adjustment):</p> <pre><code>yc = sqrt((a + b - c) (a - b + c) (-a + b + c) (a + b + c))/(2 a) xc = sqrt(c^2 - yc^2) </code></pre>
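Putting the recipe into code (my sketch; the vertex data are arbitrary). Instead of the Heron-style square root for $x_c$, the signed form $x_c=(a^2+c^2-b^2)/(2a)$ from the law of cosines is used; it agrees with $\sqrt{c^2-y_c^2}$ when the angle at $A$ is acute, and also handles the obtuse case:

```python
import math

def flatten_triangle(p1, p2, p3):
    """Map a 3D triangle to 2D with side lengths preserved (orientation is arbitrary)."""
    a = math.dist(p1, p2)
    b = math.dist(p2, p3)
    c = math.dist(p3, p1)
    # Signed abscissa of C via the law of cosines; max(..., 0.0) guards
    # against tiny negative values from rounding.
    xc = (a * a + c * c - b * b) / (2 * a)
    yc = math.sqrt(max(c * c - xc * xc, 0.0))
    return (0.0, 0.0), (a, 0.0), (xc, yc)

# Right triangle with legs 3 and 4 in the z = 0 plane
A2, B2, C2 = flatten_triangle((0, 0, 0), (3, 0, 0), (0, 4, 0))
# A second, tilted triangle, used to check that distances are preserved
Q1, Q2, Q3 = flatten_triangle((1, 2, 3), (4, 6, 3), (1, 2, 8))
```

Like the pseudocode above, this fixes the placement as $A'=(0,0)$, $B'=(a,0)$; only the shape, not the original orientation, is recovered.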
232,930
<p>Let $f(n)$ denote the number of integer solutions of the equation $$3x^2+2xy+3y^2=n $$</p> <p>How can one evaluate the limit $$\lim_{n\rightarrow\infty}\frac{f(1)+...f(n)}{n}$$</p> <p>Thanks</p>
EuYu
9,246
<p>The ellipse $Ax^2 + Bxy + Cy^2 = n$ with discriminant $\Delta=-B^2 + 4AC &gt; 0$ has an area of $$\rm{Area} = \frac{2\pi n}{\sqrt{\Delta}}$$ In this case we have $A=3$, $B=2$ and $C=3$ for an area of $\frac{\pi n}{2\sqrt{2}}$. It is rather well known that the number of lattice points inside an ellipse is given by $$N = \mathrm{Area} + \mathcal{O}\left(\sqrt{n}\right)$$ So your limit is $$\lim_{n\rightarrow \infty}\frac{\frac{\pi n}{2\sqrt{2}} + \mathcal{O}(\sqrt{n})}{n} = \lim_{n\rightarrow \infty}\frac{\pi}{2\sqrt{2}} + \mathcal{O}\left(n^{-\frac{1}{2}}\right) = \frac{\pi}{2\sqrt{2}}$$ For a reference to the above results if unfamiliar, see Advanced Number Theory by Cohn, page 160-161.</p>
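A numerical check of the limit (my addition): count lattice points with $1\le 3x^2+2xy+3y^2\le n$ for a moderate $n$ and compare the ratio with $\pi/(2\sqrt2)\approx1.1107$. Since $3x^2+2xy+3y^2=2(x^2+y^2)+(x+y)^2\ge 2(x^2+y^2)$, it suffices to scan $|x|,|y|\le\sqrt{n/2}$.

```python
import math

def Q(x, y):
    return 3*x*x + 2*x*y + 3*y*y

n = 20000
R = math.isqrt(n // 2) + 1          # |x|, |y| <= sqrt(n/2) suffices
count = sum(1 for x in range(-R, R + 1)
              for y in range(-R, R + 1)
              if 1 <= Q(x, y) <= n)

ratio = count / n                   # f(1) + ... + f(n), divided by n
limit = math.pi / (2 * math.sqrt(2))
```

The deviation from the limit is of the boundary order $\mathcal{O}(n^{-1/2})$, a couple of percent at this $n$.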
1,814,823
<p>In this question (sketching the graph of $y=\frac{x}{1+|x|}$), multiple concepts of graphical transformations are involved. I am facing problems in applying all of them in a single question.</p>
Roman83
309,360
<p>$$|a|=\begin{cases}a, &amp; a\ge 0\\ -a, &amp; a&lt;0\end{cases}$$</p> <p>If $x\ge0$ then $$y=\frac x{1+x}=\frac {1+x-1}{1+x}=1-\frac 1{1+x},$$ which is a hyperbola.</p> <p>If $x&lt;0$ then $$y=\frac x{1-x}=-\frac {1-x-1}{1-x}=-1+\frac 1{1-x},$$ which is also a hyperbola.</p> <p><a href="https://i.stack.imgur.com/WGXxn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WGXxn.png" alt="enter image description here"></a></p>
3,363,944
<p>A group consisting of <span class="math-container">$3$</span> men and <span class="math-container">$6$</span> women attends a prizegiving ceremony. If <span class="math-container">$5$</span> prizes are awarded at random to members of the group, find the probability that exactly <span class="math-container">$3$</span> of the prizes are awarded to women if<br> a) There is a restriction of at most one prize per person<br> b) There is no restriction on the number of prizes per person</p> <p>I did part a) and got the same result as the solution, but I failed at getting the same answer for part b). When I looked at the worked solutions of both parts, I noticed a significant difference in the ways the two parts are solved. </p> <p>This is the worked solution for part a) (which matches my own working): <span class="math-container">$\frac{\binom{6}{3}\times \binom{3}{2}}{\binom{9}{5}} = \frac{10}{21}$</span></p> <p>And this is the worked solution for part b): <span class="math-container">$\binom{5}{3} \times \left(\frac{3}{9}\right)^{2} \times \left(\frac{6}{9}\right)^{3} = \frac{80}{243}$</span></p> <p>I'm confused about why part b) is done in such a different way from part a), and as a student, how can I know when to consider the numerator and denominator separately as in part a), and when to find the probability of each component and multiply them all together as in part b)? Also, can we solve part b) in a similar way to part a)? Does anyone have any tips on how to distinguish these sorts of methods? </p> <p>Thank you very much for helping.</p>
drhab
75,923
<p>b) There are <span class="math-container">$5$</span> <em>independent</em> events in the form of prizes that are awarded, each of which can succeed (i.e. the prize is awarded to a woman) with (the same) probability <span class="math-container">$\frac69$</span>, or fail (i.e. the prize is not awarded to a woman). </p> <p>So evidently we are dealing with the <a href="https://en.wikipedia.org/wiki/Binomial_distribution#Specification" rel="nofollow noreferrer">binomial distribution</a> here, equipped with parameters <span class="math-container">$n=5$</span> and <span class="math-container">$p=\frac69$</span>.</p> <p>Essential difference: in a) the events are not independent. If e.g. the first prize is handed over to Bob then for the further process Bob is put aside because he cannot be awarded another prize. </p> <p>Actually I would say that b) is easier to solve than a), where we are dealing with the <a href="https://en.wikipedia.org/wiki/Hypergeometric_distribution#Definition" rel="nofollow noreferrer">hypergeometric distribution</a>.</p>
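Both parts in exact arithmetic (my addition): the hypergeometric count for a) and the binomial formula for b), with success meaning "the prize goes to a woman":

```python
from fractions import Fraction as F
from math import comb

# a) at most one prize per person: hypergeometric count
p_a = F(comb(6, 3) * comb(3, 2), comb(9, 5))

# b) no restriction: binomial with n = 5 independent prizes, each awarded
# to a woman with probability p = 6/9
p_b = comb(5, 3) * F(6, 9)**3 * F(3, 9)**2
```

These reproduce the two results from the question, $\frac{10}{21}$ and $\frac{80}{243}$.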