Dataset schema:
  qid        int64         (values 1 to 4.65M)
  question   large_string  (lengths 27 to 36.3k)
  author     large_string  (lengths 3 to 36)
  author_id  int64         (values -1 to 1.16M)
  answer     large_string  (lengths 18 to 63k)
3,104,890
<p>I'm trying to solve the following problem:</p> <blockquote> <p>Ten people are sitting around a round table. Three of them are chosen at random to give a presentation. What is the probability that the three chosen people were sitting in consecutive seats?</p> </blockquote> <p>I got the wrong answer but cannot see the error in my reasoning. This is how I see it:</p> <p>1) the selection of the first person is unconstrained.</p> <p>2) the next person must be selected from the 2 spots adjacent to the first. So this choice is limited to <code>2/9</code> of the possible choices.</p> <p>3) the third choice must be taken from the one free spot next to the first person chosen, or the one free spot next to the 2nd person chosen. So this choice is limited to <code>2/8</code> of the possible choices.</p> <p>4) multiplying these we get:</p> <pre><code>2/9 * 2/8 = 1/18 </code></pre> <p>However, the official answer is:</p> <blockquote> <p>Let's count as our outcomes the ways to select 3 people without regard to order. There are <span class="math-container">$\binom{10}{3} = 120$</span> ways to select any 3 people. The number of successful outcomes is the number of ways to select 3 consecutive people. There are only 10 ways to do this -- think of first selecting the middle person, then we take his or her two neighbors. Therefore, the probability is <span class="math-container">$\frac{10}{120} = \boxed{\frac{1}{12}}$</span>.</p> </blockquote>
Arthur
15,500
<p>You forgot the possibility that the second person can be chosen to sit <em>two</em> seats away from the first, and then the third person is chosen to be the one between the first and the second. This gives an additional <span class="math-container">$\frac29\cdot \frac18 = \frac1{36}$</span>, bringing the total up to <span class="math-container">$\frac1{18}+\frac1{36} = \frac1{12}$</span>.</p>
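The $\frac1{12}$ answer above is easy to sanity-check by exhaustive enumeration (an illustrative script added by the editor, not part of either answer):

```python
from itertools import combinations
from fractions import Fraction

# 10 seats around a round table, labeled 0..9.
n = 10
consecutive = 0
total = 0
for triple in combinations(range(n), 3):
    total += 1
    # A triple is consecutive iff it equals {s, s+1, s+2} mod n for some s.
    if any(set(triple) == {(s + k) % n for k in range(3)} for s in range(n)):
        consecutive += 1

prob = Fraction(consecutive, total)
print(consecutive, total, prob)  # 10 of the 120 triples are consecutive: 1/12
```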
2,090,315
<blockquote> <ol> <li>Let $\Omega \subset \mathbb{C}$ be open and let $f$ be holomorphic on $\Omega$. How do I prove that if $f(\Omega) \subseteq \text{ a line }$ then $f$ is constant?</li> <li>How do I prove that if $f$ is holomorphic in $\mathbb{C}$ and there exists $r&gt;0$ such that $f(\mathbb{C})\subset \mathbb{C}-B(0,r)$, then $f$ is constant?</li> </ol> </blockquote>
Learnmore
294,365
<ol> <li>By the Open Mapping Theorem, a non-constant holomorphic function maps open sets to open sets. A line contains no nonempty open subset of $\Bbb C$, so $f(\Omega)\subset \text{line}$ forces $f$ to be constant.</li> <li>Use Picard's little theorem: if an entire function omits more than one point of $\Bbb C$, then it is constant. Here $f$ omits the ball $B(0,r)$, which is uncountably many points.</li> </ol>
2,090,315
<blockquote> <ol> <li>Let $\Omega \subset \mathbb{C}$ be open and let $f$ be holomorphic on $\Omega$. How do I prove that if $f(\Omega) \subseteq \text{ a line }$ then $f$ is constant?</li> <li>How do I prove that if $f$ is holomorphic in $\mathbb{C}$ and there exists $r&gt;0$ such that $f(\mathbb{C})\subset \mathbb{C}-B(0,r)$, then $f$ is constant?</li> </ol> </blockquote>
Fred
380,717
<p>A proof of 2. without Picard:</p> <p>We have $|f(z)| \ge r$ for all $z \in \mathbb C$. Let $g=1/f$. Then $g$ is a bounded entire function. By Liouville, $g$ is constant, hence $f$ is constant.</p>
1,115,117
<p>Consider the initial value problem $$y'=ty(4-y)/(1+t)$$ $$y(0)=y_{0}&gt;0$$</p> <p>(a) Determine how the solution behaves as $t$ tends to infinity.</p> <p>(b) If $y_{0}=2$, find the time $T$ at which the solution first reaches the value of 3.99.</p> <p>(c) Find the range of initial values for which the solution lies in the interval $3.99&lt;y&lt;4.01$ by the time $t=2$.</p> <p>What I tried:</p> <p>(a) I first solved the IVP to get the general solution $$y=\frac{4A}{(1+t)^{4}e^{-4t}+A} $$ And from here, as $t$ tends to infinity, it can be seen that the solution tends to $4$.</p> <p>(b) For part (b), by substituting $y_{0}=2$ into the general solution, the expression becomes $$y=\frac{4}{(1+t)^{4}e^{-4t}+1} $$ I then let $y=3.99$ to get a value of $t$. I'm stuck from here onwards, however, as I don't understand the rationale of the question, especially for part (c). I know that as the solution tends to $4$, $T$ will tend to infinity. But what if the solution becomes $3.99$ instead? And for part (c) I don't really understand what the question means when it asks to find the range of initial values for which the solution lies in the interval $3.99&lt;y&lt;4.01$. Could anyone explain? Thanks.</p>
Chinny84
92,628
<p>$$ \cos\frac{1}{2}\theta + i\sin\frac{1}{2}\theta = \mathrm{e}^{i\frac{1}{2}\theta} $$ using $$ \frac{\mathrm{e}^{i\frac{1}{2}\theta} + \mathrm{e}^{-i\frac{1}{2}\theta}}{2} = \cos \frac{1}{2}\theta $$ we get $$ \left(\mathrm{e}^{i\frac{1}{2}\theta} + \mathrm{e}^{-i\frac{1}{2}\theta}\right)\mathrm{e}^{i\frac{1}{2}\theta} = \mathrm{e}^{i\theta} + 1 = z+1 $$</p>
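For part (b) of the initial value problem question above, the time $T$ with $y(T)=3.99$ can be found numerically from the asker's own solution $y=\frac{4}{(1+t)^{4}e^{-4t}+1}$. A rough bisection sketch (an editor's illustrative check, assuming that solution formula is correct):

```python
import math

def y(t):
    # Solution of the IVP with y(0) = 2, as derived in the question.
    return 4.0 / ((1 + t) ** 4 * math.exp(-4 * t) + 1.0)

# (1+t)^4 e^{-4t} is decreasing for t > 0, so y is increasing toward 4.
# Bisect y(T) = 3.99 on [1, 10], where y(1) < 3.99 < y(10).
lo, hi = 1.0, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if y(mid) < 3.99:
        lo = mid
    else:
        hi = mid
T = (lo + hi) / 2
print(T)  # between 2.8 and 2.9
```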
4,590,677
<p>This is perhaps a silly question related to calculating with surds. I was working out the area of a regular pentagon ABCDE of side length 1 today and I ended up with the following expression :</p> <p><span class="math-container">$$\frac{\sqrt{5+2\sqrt5}+\sqrt{10+2\sqrt{5}}}{4}$$</span></p> <p>obtained by summing the areas of the triangles ABC, ACD and ADE.</p> <p>I checked my solution with Wolfram Alpha which gave me the following equivalent expression :</p> <p><span class="math-container">$$\frac{\sqrt{25+10\sqrt{5}}}{4}$$</span></p> <p>I was able to show that these two expressions are equivalent by squaring the numerator in my expression, which gave me</p> <p><span class="math-container">$$15+4\sqrt5+2\sqrt{70+30\sqrt5},$$</span></p> <p>and then &quot;noticing&quot; that</p> <p><span class="math-container">$$\sqrt{70+30\sqrt5}=\sqrt{25+30\sqrt5+45}=5+3\sqrt5.$$</span> My question is the following : how could I have known beforehand that my sum of surds could be expressed as a single surd, and is there a way to systematize this type of calculation ? I would have liked to find the final, simplest expression on my own without the help of a computer.</p> <p>Thanks in advance !</p>
Will Jagy
10,400
<p><span class="math-container">$10 + 2 \sqrt 5$</span> has norm <span class="math-container">$100 - 5 \cdot 4 = 80.$</span> <span class="math-container">$5 + 2 \sqrt 5$</span> has norm <span class="math-container">$25 - 5 \cdot 4 = 5.$</span> The ratio of the norms is <span class="math-container">$\frac{80}{5} = 16,$</span> which is an integer and a square, so the ratio might be very nice.</p> <p><span class="math-container">$$ \frac{10+2 \sqrt 5}{5 + 2 \sqrt 5} \cdot \frac{5-2 \sqrt 5}{5 - 2 \sqrt 5} = \frac{30-10 \sqrt 5}{5 } = 6 - 2 \sqrt 5 $$</span></p> <p>Next, <span class="math-container">$36 - 5 \cdot 4 = 16$</span> so <span class="math-container">$ 6 - 2 \sqrt 5 $</span> might be a square. Indeed, by inspection it is <span class="math-container">$\left( 1 - \sqrt 5 \right)^2 = \left( \sqrt 5 - 1 \right)^2$</span> So <span class="math-container">$10 + 2 \sqrt 5 = \left( \sqrt 5 - 1 \right)^2 \left( 5 + 2 \sqrt 5 \right) $</span> and</p> <p><span class="math-container">$ \sqrt{10 + 2 \sqrt 5} = \left( \sqrt 5 - 1 \right) \sqrt { 5 + 2 \sqrt 5 } $</span> Thus</p> <p><span class="math-container">$$ \sqrt{10 + 2 \sqrt 5} + \sqrt { 5 + 2 \sqrt 5 } = \left( \sqrt 5 \right) \sqrt { 5 + 2 \sqrt 5 } = \sqrt { 25 + 10 \sqrt 5 }$$</span></p> <p><span class="math-container">$$ \color{red}{ \sqrt{10 + 2 \sqrt 5} + \sqrt { 5 + 2 \sqrt 5 } = \sqrt { 25 + 10 \sqrt 5 } } $$</span></p>
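The boxed identity, and the key factoring step, are easy to confirm numerically (an illustrative check added by the editor, not part of the answer):

```python
from math import sqrt, isclose

lhs = sqrt(10 + 2 * sqrt(5)) + sqrt(5 + 2 * sqrt(5))
rhs = sqrt(25 + 10 * sqrt(5))
print(lhs, rhs)  # both about 6.8819

# Key intermediate step: 10 + 2*sqrt(5) = (sqrt(5) - 1)^2 * (5 + 2*sqrt(5))
assert isclose(lhs, rhs)
assert isclose(10 + 2 * sqrt(5), (sqrt(5) - 1) ** 2 * (5 + 2 * sqrt(5)))
```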
3,443,137
<p>Find the radius of the circle tangent to <span class="math-container">$3$</span> other circles <span class="math-container">$O_1$</span>, <span class="math-container">$O_2$</span> and <span class="math-container">$O_3$</span>, which have radii <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span>.</p> <p>The Wikipedia page about the Problem of Apollonius can be found <a href="https://en.wikipedia.org/wiki/Problem_of_Apollonius" rel="nofollow noreferrer">here</a>.</p> <p>I can't provide an exact image because the construction is too complex. What I have done is draw tons of lines in an attempt to use Pythagoras's theorem and trigonometry to solve it.</p>
Blue
409
<p>If you can endure the algebraic slog (a computer algebra system really helps), you can find some structure that simplifies the solution.</p> <hr> <p>Let the given circles have radii <span class="math-container">$r_A$</span>, <span class="math-container">$r_B$</span>, <span class="math-container">$r_C$</span>, and (non-collinear) centers <span class="math-container">$A:=(x_A,y_A)$</span>, <span class="math-container">$B:=(x_B,y_B)$</span>, <span class="math-container">$C:=(x_C,y_C)$</span>; I specifically used <span class="math-container">$$A = (0,0) \qquad B = (c,0) \qquad C = (b\cos A, b \sin A)$$</span></p> <p>As Wikipedia's <a href="https://en.wikipedia.org/wiki/Problem_of_Apollonius" rel="nofollow noreferrer">Problem of Apollonius</a> entry notes, there are eight solutions that depend on whether each circle's tangency with the target circle is <em>external</em> or <em>internal</em>. The entry's coordinate discussion accommodates this by attaching sign variables to the radii; here, we can just assert that <em>the radii <span class="math-container">$r_A$</span>, <span class="math-container">$r_B$</span>, <span class="math-container">$r_C$</span> are signed values</em>. </p> <p>Let the target circle have center <span class="math-container">$P:=(x_P,y_P)$</span> and radius <span class="math-container">$r$</span>. 
We can express <span class="math-container">$P$</span> in barycentric coordinates: <span class="math-container">$$P = \frac{\alpha A+\beta B+\gamma C}{\alpha+\beta+\gamma} \tag{1}$$</span></p> <p>Now, we'll follow Wikipedia's lead and consider the equations <span class="math-container">$$\begin{align} (x_P-x_A)^2 + (y_P-y_A)^2=(r-r_A)^2 \\ (x_P-x_B)^2 + (y_P-y_B)^2=(r-r_B)^2 \\ (x_P-x_C)^2 + (y_P-y_C)^2=(r-r_C)^2 \end{align} \tag{2}$$</span> Upon subtracting the latter two equations from the first, we eliminate troublesome quadratic terms, leaving a linear homogeneous system of two equations in the three unknowns <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>, <span class="math-container">$\gamma$</span>. The reader can verify that we can write the solution as <span class="math-container">$$\begin{align} \alpha &amp;= a^2 b c \cos A - a^2 r_A (r_A - 2 r) + a b \cos C\,r_B (r_B - 2 r) + c a \cos B\,r_C (r_C - 2 r) \\[4pt] \beta &amp;= a b^2 c \cos B - b^2 r_B (r_B - 2 r) + b c \cos A\,r_C (r_C - 2 r) + a b \cos C\,r_A (r_A - 2 r) \\[4pt] \gamma &amp;= a b c^2 \cos C - c^2 r_C (r_C - 2 r) + c a \cos B\,r_A (r_A - 2 r) + b c \cos A\,r_B (r_B - 2 r) \end{align} \tag{3}$$</span> where <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, <span class="math-container">$c$</span> are the <em>sides</em> of <span class="math-container">$\triangle ABC$</span>. 
Conveniently, <span class="math-container">$\alpha+\beta+\gamma = 16 |\triangle ABC|^2$</span>, by <a href="https://en.wikipedia.org/wiki/Heron%27s_formula" rel="nofollow noreferrer">Heron's Formula</a>.</p> <p>Substituting <span class="math-container">$P$</span> with these <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>, <span class="math-container">$\gamma$</span> values back into the first equation of <span class="math-container">$(2)$</span>, we get this quadratic in <span class="math-container">$r$</span>:</p> <p><span class="math-container">$$\begin{align} 0 &amp;= 4 r^2 \left(\;-4|\triangle ABC|^2 + \sum_{cyc} \left(a^2 r_A^2 - 2 b c \cos A\,r_B r_C \right) \;\right) \\ &amp;-4r \left(\; \sum_{cyc}\left( a^2 r_A^3 - b c \cos A (a^2 r_A + r_B^2 r_C + r_B r_C^2) \right) \;\right) \\[4pt] &amp;+a^2 b^2 c^2 + \sum_{cyc} \left( a^2 r_A^4 - 2 b c \cos A (a^2 r_A^2 + r_B^2 r_C^2)\right) \end{align} \tag{8}$$</span></p> <p>Somewhat surprisingly, the discriminant simplifies nicely: <span class="math-container">$$\triangle = 64 |\triangle ABC|^2 \left(a^2 - (r_B - r_C)^2\right)\left( b^2 - (r_C - r_A)^2\right) \left(c^2 - (r_A - r_B)^2\right) \tag{9}$$</span></p> <p>The last three factors invite defining, say, <span class="math-container">$d$</span>, <span class="math-container">$e$</span>, <span class="math-container">$f$</span> such that <span class="math-container">$$d^2 = a^2 - (r_B - r_C)^2 \qquad e^2 = b^2 - (r_C - r_A)^2 \qquad f^2 = c^2 - (r_A - r_B)^2 \tag{10}$$</span> Since the differences could be negative, we allow that <span class="math-container">$d$</span>, <span class="math-container">$e$</span>, <span class="math-container">$f$</span> could be imaginary; but <span class="math-container">$d^2e^2f^2$</span> must be non-negative (so, <span class="math-container">$def$</span> is real). 
As it turns out, it's helpful to define (possibly-imaginary) "angles" <span class="math-container">$D$</span>, <span class="math-container">$E$</span>, <span class="math-container">$F$</span> with <span class="math-container">$$\cos D = \frac{-d^2+e^2+f^2}{2e f} \qquad \cos E = \frac{-e^2+f^2+d^2}{2fd} \qquad \cos F = \frac{-f^2+d^2+e^2}{2de} \tag{11}$$</span> and (possibly-imaginary) "area" <span class="math-container">$|\triangle DEF|$</span> with <span class="math-container">$$|\triangle DEF|^2 = \frac1{16}(d+e+f)(-d+e+f)(d-e+f)(d+e-f) \tag{12}$$</span></p> <p>How helpful? Well, helpful enough that <span class="math-container">$(8)$</span> shrinks to <span class="math-container">$$\begin{align} 0 &amp;= 16 r^2 |\triangle DEF|^2 - 4r d e f ( d r_A\cos D + er_B \cos E + fr_C \cos F) \\[4pt] &amp;-\left(d^2 e^2 f^2 + d^4 r_A^2 + e^4 r_B^2 + f^4 r_C^2 - 2 e^2 f^2 r_B r_C - 2 f^2 d^2 r_C r_A - 2 d^2 e^2 r_A r_B \right) \end{align} \tag{8'}$$</span></p> <p>(If we defined possibly imaginary <span class="math-container">$s_A$</span>, <span class="math-container">$s_B$</span>, <span class="math-container">$s_C$</span> with <span class="math-container">$s_A^2 = d^2 r_A$</span>, etc, the constant term could be written <span class="math-container">$(s_A+s_B+s_C)(-s_A+s_B+s_C)(s_A-s_B+s_C)(s_A+s_B-s_C)-d^2 e^2 f^2$</span>, but that isn't particularly helpful going forward.)</p> <p>Solving <span class="math-container">$(8')$</span> yields</p> <blockquote> <p><span class="math-container">$$r = \frac{def}{8|\triangle DEF|^2} \left(\; dr_A \cos D + e r_B \cos E + f r_C \cos F \pm 2|\triangle ABC| \;\right) \tag{13}$$</span></p> </blockquote> <p>Note that, if we were to multiply-through the <span class="math-container">$def$</span> factor, and expand the cosines via <span class="math-container">$(11)$</span>, the first three terms would only involve even powers <span class="math-container">$d$</span>, <span class="math-container">$e$</span>, <span 
class="math-container">$f$</span>, making for a real (possibly-negative) value. We know that <span class="math-container">$def$</span> is real, so the <span class="math-container">$\pm 2def\,|\triangle ABC|$</span> term is also real. This guarantees that <span class="math-container">$r$</span> itself is real.</p> <p>Since the signs of <span class="math-container">$r_A$</span>, <span class="math-container">$r_B$</span>, <span class="math-container">$r_C$</span> account for the eight possible tangency configurations, presumably one choice of "<span class="math-container">$\pm$</span>" is always extraneous. Can we decide which one ahead of time? Maybe, but I'm running out of steam ...</p> <p>Substituting <span class="math-container">$(13)$</span> into a <span class="math-container">$def$</span> version of <span class="math-container">$(7)$</span>, and dividing-through by a common factor, yields a final-ish form for parameter <span class="math-container">$\alpha$</span>:</p> <blockquote> <p><span class="math-container">$$\begin{align} \alpha = d \left( \begin{array}{l} \phantom{+} \cos D \left( 4 |\triangle DEF|^2 + \sum_{cyc} \left( d^2 r_A^2 - 2 e f r_B r_C \cos D \right) \right) \\ \pm 2 |\triangle ABC|\; (d r_A - e r_B \cos F - f r_C \cos E) \end{array} \right) \end{align} \tag{14}$$</span></p> </blockquote> <p>If we define <span class="math-container">$u := d r_A$</span>, <span class="math-container">$v := e r_B$</span>, <span class="math-container">$w := f r_C$</span>, we reduce a bit of clutter in <span class="math-container">$(13)$</span> and <span class="math-container">$(14)$</span>: <span class="math-container">$$\begin{align} r &amp;= \frac{def}{8|\triangle DEF|^2} \left(\;u \cos D + v \cos E + w \cos F\pm 2|\triangle ABC|\;\right) \\ \alpha &amp;= d \left( \begin{array}{l} \phantom{+} \cos D \left( 4 |\triangle DEF|^2 + u^2 + v^2 + w^2 - 2 v w \cos D - 2 w u \cos E - 2 u v \cos F \right) \\[6pt] \pm 2 |\triangle ABC|\; (u - v \cos F - w \cos E) \end{array} 
\right) \end{align} \tag{15}$$</span></p> <hr> <p>As a final note, I'll mention that the values <span class="math-container">$d$</span>, <span class="math-container">$e$</span>, <span class="math-container">$f$</span> have geometric significance related to the power of certain points with respect to the given circles. It's a little tricky to describe, and I don't (yet) see how it might contribute to the discussion, so I won't bother including it.</p>
3,501,879
<p>I have been stuck at this problem for some time now. I'd really appreciate your help. Thanks.</p> <p><span class="math-container">$$2\sin^2(x)+6\cos^2(\frac x4)=5-2k$$</span></p>
lhf
589
<p><em>Hint:</em> The problem is about finding the minimum and maximum values of <span class="math-container">$ 2\sin^2(x)+6\cos^2\left(\frac x4\right)$</span>.</p> <p>The minimum value is easy: it's <span class="math-container">$0$</span>, taken at <span class="math-container">$x=10\pi$</span> for instance.</p> <p>The maximum value is <a href="https://www.wolframalpha.com/input/?i=max+2sin%5E2%28x%29%2B6cos%5E2%28x%2F4%29" rel="nofollow noreferrer">not easy</a>: it's approximately <span class="math-container">$M=7.24701$</span>.</p> <p>This gives <span class="math-container">$0 \le 5-2k \le M$</span>, or <span class="math-container">$(5 - M)/2 \le k \le 5/2$</span>.</p>
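The approximate maximum $M\approx 7.24701$ quoted in the hint can be reproduced with a crude grid search (an editor's illustrative check; note $\cos^2(x/4)=(1+\cos(x/2))/2$, so the expression has period $4\pi$):

```python
import math

def f(x):
    return 2 * math.sin(x) ** 2 + 6 * math.cos(x / 4) ** 2

# f has period 4*pi, so scan one full period on a fine grid.
N = 400_000
M = max(f(4 * math.pi * k / N) for k in range(N))
m = min(f(4 * math.pi * k / N) for k in range(N))
print(m, M)  # minimum near 0 (at x = 2*pi), maximum near 7.247
```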
408,717
<p>Let $n\in \mathbb N$ and $A_1,A_2,\dots,A_n$ be arbitrary sets. Now define $X=[x_{ij}]_{n \times n}$ where $$x_{ij}= \begin{cases} 1, &amp; A_i \subsetneq A_j \\ 0, &amp; \text{otherwise} \end{cases}.$$ How do you prove $X^n=0$?</p> <p>Thanks in advance.</p>
N. S.
9,176
<p>If you know some graph theory:</p> <p>Define a digraph the following way: the vertices are $A_1,A_2,\dots,A_n$, and there is an arc $A_i \rightarrow A_j$ if and only if $A_i\subsetneq A_j$.</p> <p>Your matrix $X$ is exactly the adjacency matrix of this digraph. Since $\subsetneq$ is a strict partial order, the digraph contains no directed cycles, so every directed walk visits distinct vertices and hence has length at most $n-1$. The $(i,j)$ entry of $X^n$ counts directed walks of length $n$ from $A_i$ to $A_j$, so $X^n=0$.</p>
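A small sanity check of the claim (an editor's illustrative script using random subsets, not part of the answer): build the matrix $X$ from the strict-subset relation and verify $X^n = 0$.

```python
import random

def strict_subset_matrix(sets):
    # x_ij = 1 iff sets[i] is a proper subset of sets[j].
    n = len(sets)
    return [[1 if sets[i] < sets[j] else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
n = 6
sets = [set(random.sample(range(10), random.randint(0, 10))) for _ in range(n)]
X = strict_subset_matrix(sets)

P = X
for _ in range(n - 1):
    P = matmul(P, X)  # P ends up equal to X^n
assert all(all(v == 0 for v in row) for row in P)
print("X^%d = 0 verified" % n)
```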
4,595,208
<p>I'm having trouble calculating this limit directly :</p> <p><span class="math-container">$$\lim_{n\to\infty}\frac{(2n+1)(2n+3)\cdots(4n+1)}{(2n)(2n+2)\cdots(4n)}$$</span></p> <p>It can be calculated using the inventory method and the result is:<br /> <span class="math-container">$$\lim_{n\to\infty}\frac{(2n+1)(2n+3)\cdots(4n+1)}{(2n)(2n+2)\cdots(4n)} =\sqrt {2}$$</span></p> <p>But the limit method is not smart, I need help doing this in a direct way.</p> <p>This is the first time I am asking a question on this site. I apologize if it is not a good question.</p>
John Douma
69,810
<p>If the incident angle is <span class="math-container">$\beta$</span> then the final angle must be <span class="math-container">$\frac{\pi}{2} -\beta$</span>.</p> <p>To see this, imagine a beam going to the right. Its perpendicular direction is straight up which is <span class="math-container">$\frac{\pi}{2}$</span> so if we rotate the beam and this direction clockwise by <span class="math-container">$\beta$</span> we get that the perpendicular direction goes from <span class="math-container">$\frac{\pi}{2}$</span> to <span class="math-container">$\frac{\pi}{2}-\beta$</span>.</p> <p>The angle <span class="math-container">$\gamma$</span> is the difference between <span class="math-container">$\frac{\pi}{2}-\beta$</span> and <span class="math-container">$\pi-\alpha$</span> which is <span class="math-container">$\alpha -\beta -\frac{\pi}{2}$</span></p> <p>The angle <span class="math-container">$\gamma$</span> is also the incident angle on the second mirror and must be supplementary to <span class="math-container">$\alpha +\beta$</span>. You can see this by looking at the triangle consisting of <span class="math-container">$\alpha$</span> and the two incident points on the mirrors.</p> <p>Therefore, <span class="math-container">$$\alpha +\beta +\alpha -\beta -\frac{\pi}{2}=\pi\implies 2\alpha=\frac{3\pi}{2}\implies\alpha =\frac{3\pi}{4}$$</span></p>
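The limit asked about in the question above can be checked numerically (an editor's illustrative script, not a proof):

```python
import math

def ratio(n):
    # Product over k = 0..n of (2n + 2k + 1) / (2n + 2k), i.e.
    # (2n+1)(2n+3)...(4n+1) / ((2n)(2n+2)...(4n)), computed via logs.
    return math.exp(sum(math.log((2 * n + 2 * k + 1) / (2 * n + 2 * k))
                        for k in range(n + 1)))

for n in (10, 1000, 100000):
    print(n, ratio(n))
print(math.sqrt(2))  # the ratios approach 1.41421...
```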
1,757,260
<p>A little box contains $40$ smarties: $16$ yellow, $14$ red and $10$ orange.</p> <p>You draw $3$ smarties at random (without replacement) from the box.</p> <p>What is the probability (in percentage) that you get $2$ smarties of one color and a third smartie of a different color?</p> <p>Round your answer to the nearest integer.</p> <p>Answer given is $67$. I don't get it. Is it not: $$\left(\frac{16}{40} \times \frac{15}{39} \times\frac{24}{38}\right) + \left(\frac{14}{40} \times\frac{13}{39} \times\frac{26}{38}\right) +\left(\frac{10}{40} \times\frac{9}{39} \times\frac{30}{38}\right)= 22?$$</p>
true blue anil
22,388
<p><strong>Whenever the sequence is unspecified</strong> in drawing w/o replacement,<br> I much prefer using combinations, to avoid getting into an unnecessary tangle by oversight.</p> <p>$\dfrac{\left[\binom{16}{2}\binom{24}1 + \binom{14}{2}\binom{26}1 + \binom{10}{2}\binom{30}1\right]}{ \binom{40}{3}} = 67\%$ </p>
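Both the combination count in the answer and the rounded percentage can be verified by brute force over all $\binom{40}{3}$ unordered draws (an editor's illustrative check):

```python
from itertools import combinations
from math import comb

box = ["Y"] * 16 + ["R"] * 14 + ["O"] * 10  # 40 smarties

# "2 of one color and 1 of another" = exactly two distinct colors drawn.
favorable = sum(1 for draw in combinations(range(40), 3)
                if len({box[i] for i in draw}) == 2)
total = comb(40, 3)

print(favorable, total, round(100 * favorable / total))  # 6596 9880 67
```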
3,043,846
<p>I want to rewrite a question not so well written on this site and clarified by Mr. Lahtonen (thank you again).</p> <p>So here is the question:</p> <blockquote> <p>Let the extension <span class="math-container">$GF(p^m) \supset GF(p)$</span> contain the roots of <span class="math-container">$p(x)=x^{p^{m}}-1$</span>. Show that those roots are distinct and that they form a field.</p> </blockquote> <p>I know that the roots of <span class="math-container">$p(x)=x^{p}-1$</span> are contained among those of <span class="math-container">$p(x)=x^{p^{m}}-1$</span>, but then?</p> <p>edit: probably the correct exercise was <span class="math-container">$p(x)=x^{p^{m}-1}$</span></p>
Dietrich Burde
83,966
<p>Besides your example, there is even an example with <em>finite</em> groups, as <span class="math-container">$$ {\rm Aut}(S_3)\cong {\rm Aut}(C_2\times C_2)\cong S_3, $$</span> but <span class="math-container">$S_3$</span> is of course not isomorphic to <span class="math-container">$C_2\times C_2$</span>.</p>
697,668
<p>How many arrangements are there of the letters in MISSISSIPPI, requiring that no two S's are adjacent? </p> <p>I found there are 34650 arrangements without restriction. </p> <p>How should I approach this question?</p>
2012ssohn
103,274
<p>We know that the string will take the form of</p> <p>$$*S█S█S█S*$$</p> <p>where $█$ MUST have at least one character and $*$ can be of any length (even 0). I would suggest the following steps:</p> <ol> <li>Find the number of ways you can put the $S$s (they can be in positions $(1,3,5,7)$, $(2,5,8,11)$, $(1,4,6,9)$, etc.)</li> <li>Find the number of different strings you can make with $MIIIPPI$ (that's $MISSISSIPPI$ without the $S$s)</li> <li>Multiply the two.</li> </ol> <p>I leave the math for you to do.</p>
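The three steps above can be carried out by a short script (an editor's illustrative check; step 1 is confirmed by a direct scan over the position choices for the S's):

```python
from itertools import combinations
from math import factorial

# Step 1: choose 4 of the 11 positions (0-indexed here) for the S's,
# no two adjacent.
s_placements = sum(1 for pos in combinations(range(11), 4)
                   if all(b - a >= 2 for a, b in zip(pos, pos[1:])))

# Step 2: distinct arrangements of MIIIPPI (7 letters: 1 M, 4 I, 2 P).
rest = factorial(7) // (factorial(4) * factorial(2))

# Step 3: multiply the two.
print(s_placements, rest, s_placements * rest)  # 70 * 105 = 7350
```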
254,253
<blockquote> <p>The only contents of a container are 10 disks that are each numbered with a different positive integer from 1 through 10, inclusive. If 4 disks are to be selected one after the other, with each disk selected at random and without replacement, what is the probability that the range of the numbers on the disks selected is 7?</p> </blockquote> <p>So I don't understand why my solution doesn't work:</p> <p>I figured if there are 4 draws, then to pick, say, $1,8$, and two numbers between $1$ and $8$, the probability would be $(1/10)*(1/9)*(6/8)*(5/7)$. You have a $1/10$ chance to pick $1$. Since there's no replacement, you have a $1/9$ chance to pick $8$. Then $6/8$ for integers $2,3,4,5,6,$ and $7$. Then $5/7$ for another one.</p> <p>Then just multiply by three. But apparently I don't get anything close to the solution. Could someone please explain why this is? </p> <p><strong>Update</strong>: I finally get it. Thank you all for the responses!</p>
Gerry Myerson
8,269
<p>tofu, you have found the probability that you pick $1$, then $8$, then two others between $1$ and $8$. But you could also have picked $8$, then $1$, then the other two. Or one of the other two, then $1$, then $8$, then the other of the other two. So what you need to figure out is how many orderings there are of one-eight-other-other, and multiply by that (and then by $3$). </p>
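Gerry's point is easy to confirm by enumerating unordered selections directly (an editor's illustrative check):

```python
from itertools import combinations
from math import comb
from fractions import Fraction

# Count 4-element subsets of {1,...,10} whose range (max - min) is 7.
favorable = sum(1 for s in combinations(range(1, 11), 4)
                if max(s) - min(s) == 7)
total = comb(10, 4)
print(favorable, total, Fraction(favorable, total))  # 45 out of 210 = 3/14
```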
4,131,747
<p>I am having trouble with this problem in my Linear Algebra review:</p> <blockquote> <p>Find an equation for the plane parallel to <span class="math-container">$2x-y+2z=4 $</span> such that the point <span class="math-container">$(3,2,-1) $</span> is equidistant from both planes.</p> </blockquote> <p>The answer is <span class="math-container">$2x-y+2z=0$</span>. How would you go about finding the <span class="math-container">$0$</span>?</p>
Toby Mak
285,313
<p>Since both planes are parallel, the <a href="https://math.stackexchange.com/questions/2472153/normal-vector-to-plane">normal vector</a> to both planes is <span class="math-container">$(2, -1, 2)$</span>, so the required plane has the form <span class="math-container">$2x - y + 2z = d$</span>.</p> <p>Evaluating the left-hand side at <span class="math-container">$(3,2,-1)$</span> gives <span class="math-container">$2(3) - 2 + 2(-1) = 2$</span>, so the parallel plane through the point itself would have <span class="math-container">$d = 2$</span>. The given plane has <span class="math-container">$d = 4$</span>, which is <span class="math-container">$2$</span> units above this value, so the equidistant plane on the other side must lie <span class="math-container">$2$</span> units below it, giving <span class="math-container">$d = 2 - 2 = 0$</span>. Hence the required plane is <span class="math-container">$2x - y + 2z = 0$</span>.</p>
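A quick numerical check of the result (an editor's illustrative script): the point $(3,2,-1)$ is at equal distance from $2x-y+2z=4$ and $2x-y+2z=0$.

```python
import math

def dist(point, a, b, c, d):
    # Distance from point to the plane ax + by + cz = d.
    x, y, z = point
    return abs(a * x + b * y + c * z - d) / math.sqrt(a * a + b * b + c * c)

p = (3, 2, -1)
d1 = dist(p, 2, -1, 2, 4)  # original plane
d2 = dist(p, 2, -1, 2, 0)  # answer plane
print(d1, d2)  # both 2/3
```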
320,937
<p>It is easy to think of $\mathbb{C}^2$ as an ordered pair. I just wonder if it is possible to illustrate $\mathbb{C}^2$, since $\mathbb{C}$ has already taken the role of the two-dimensional Euclidean plane.</p>
Brian M. Scott
12,042
<p>It’s basically the same argument. For $n\in\Bbb Z^+$ let $L_n$ be the $L$-shaped piece described in the question. Let $B=\{n\in\Bbb Z^+:L_n\text{ cannot be tiled}\}$. You want to show that $B=\varnothing$. Assume, to get a contradiction, that $B\ne\varnothing$. $\Bbb Z^+$ is well-ordered, so $B$ has a least element $n$. Use the basis step of your induction argument to show that $n&gt;1$. Then use the induction step of your induction argument to show that in fact $L_n$ <strong>can</strong> be tiled, thereby getting a contradiction.</p>
1,859,719
<blockquote> <p>Let $U (x,y) = x^\alpha y^\beta$. Find the maximum of the function $U(x,y)$ subject to the equality constraint $I = px + qy$.</p> </blockquote> <p>I have tried to use the Lagrangian function to find the solution for the problem, with the equation</p> <p>$$\nabla\mathscr{L}=\vec{0}$$</p> <p>where $\mathscr{L}$ is the Lagrangian function. Using this method I have a system of $3$ equations in $3$ variables, but I can't simplify this system:</p> <p>$$\alpha x^{\alpha-1}y^\beta-p\lambda=0$$ $$\beta y^{\beta-1}x^\alpha-q\lambda=0$$ $$I=px+qy$$</p>
Rodrigo de Azevedo
339,790
<p>We want to solve</p> <p>$$\begin{array}{lc} \text{maximize} &amp; x^\alpha y^\beta\\ \text{subject to} &amp; p x + q y = r\end{array}$$</p> <p>We define the Lagrangian</p> <p>$$\mathcal{L} (x,y,\lambda) := x^\alpha y^\beta - \lambda (p x + q y - r)$$</p> <p>Taking the partial derivatives and finding where they vanish,</p> <p>$$\alpha x^{\alpha-1} y^\beta = \lambda p \qquad \qquad \beta x^\alpha y^{\beta-1} = \lambda q \qquad \qquad p x + q y = r$$</p> <p>Multiplying the first equation by $x$ and the second by $y$,</p> <p>$$\alpha x^\alpha y^\beta = \lambda p x \qquad \qquad \beta x^\alpha y^\beta = \lambda q y \qquad \qquad p x + q y = r$$</p> <p>Adding the first two equations and using the third,</p> <p>$$\lambda = \left(\frac{\alpha + \beta}{r}\right) x^\alpha y^\beta$$</p> <p>and, thus,</p> <p>$$\left( \alpha r - (\alpha + \beta) p x \right) x^\alpha y^\beta = 0 \qquad \qquad \left( \beta r - (\alpha + \beta) q y \right) x^\alpha y^\beta = 0$$</p> <p>Since $x^\alpha y^\beta$ is not the zero function, we then can compute $x$ and $y$ easily.</p>
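From the last two displayed equations, $x = \frac{\alpha r}{(\alpha+\beta)p}$ and $y = \frac{\beta r}{(\alpha+\beta)q}$. A numerical spot-check against a grid search along the constraint (an editor's illustrative sketch, with arbitrarily chosen parameter values):

```python
# Hypothetical parameter values chosen only for this check.
alpha, beta, p, q, r = 2.0, 3.0, 1.0, 2.0, 10.0

def U(x, y):
    return x ** alpha * y ** beta

# Closed-form stationary point from the Lagrange conditions.
x_star = alpha * r / ((alpha + beta) * p)   # 4.0
y_star = beta * r / ((alpha + beta) * q)    # 3.0

# Grid search along the constraint p*x + q*y = r, with x in (0, r/p).
best = max((U(x, (r - p * x) / q), x)
           for x in (r / p * k / 10000 for k in range(1, 10000)))
print(x_star, y_star, best[1])  # the grid maximizer agrees with x_star
```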
195,168
<pre><code>f[2 x_] := f[x] f[1] := 3 f[0] := 0 f[2 x_ + 1] := f[x] + f[x + 1] a[x_] := f[x]/f[x + 1] </code></pre> <p>Will this work as a recursive function? I think there's something wrong with this, because every integer gets an output of 3.</p> <p>Any help would be appreciated, thank you so much.</p>
Bill
18,890
<p>Perhaps try this</p> <pre><code>f[0] := 0; f[1] := 3; f[x_/;EvenQ[x]] := f[x/2]; f[x_/;OddQ[x]] := f[(x-1)/2] + f[(x-1)/2 + 1]; a[x_] := f[x]/f[x + 1]; Table[{i,a[i]},{i,0,6}] </code></pre> <p>with output</p> <pre><code>{{0, 0}, {1, 1}, {2, 1/2}, {3, 2}, {4, 1/3}, {5, 3/2}, {6, 2/3}} </code></pre> <p>Please check this carefully to make certain I haven't made any mistakes</p>
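For readers more at home in Python, here is a hypothetical translation of the recursion above (an editor's illustrative sketch, not part of the Mathematica answer); it reproduces the same table of values:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def f(x):
    if x == 0:
        return 0
    if x == 1:
        return 3
    if x % 2 == 0:
        return f(x // 2)
    return f((x - 1) // 2) + f((x - 1) // 2 + 1)

def a(x):
    return Fraction(f(x), f(x + 1))

print([(i, a(i)) for i in range(7)])  # 0, 1, 1/2, 2, 1/3, 3/2, 2/3
```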
195,168
<pre><code>f[2 x_] := f[x] f[1] := 3 f[0] := 0 f[2 x_ + 1] := f[x] + f[x + 1] a[x_] := f[x]/f[x + 1] </code></pre> <p>Will this work as a recursive function? I think there's something wrong with this, because every integer gets an output of 3.</p> <p>Any help would be appreciated, thank you so much.</p>
Roman
26,598
<p>If you want to go to large values of <span class="math-container">$x$</span>, then some <a href="https://reference.wolfram.com/language/tutorial/FunctionsThatRememberValuesTheyHaveFound.html" rel="nofollow noreferrer">memoization</a> will speed up your recursion dramatically:</p> <pre><code>f[0] = 0; f[1] = 3; f[x_?EvenQ] := f[x] = f[x/2]; f[x_?OddQ] := f[x] = f[(x - 1)/2] + f[(x + 1)/2]; a[x_] := f[x]/f[x + 1] </code></pre> <p>try it out:</p> <pre><code>f[873813] // AbsoluteTiming </code></pre> <blockquote> <p>{0.000245, 32838}</p> </blockquote>
3,810,733
<p><span class="math-container">$$y''' + y' = 2x^2 + 4\sin(x)$$</span></p> <p>Find the general solution of the differential equation by using the Method of Undetermined Coefficients.</p>
Christian Blatter
1,303
<p>We are given the vector <span class="math-container">${\bf a}:=(3,-2,4,-1)$</span> and are looking for the unit vector <span class="math-container">${\bf u}\in{\mathbb R}^4$</span> for which the scalar product <span class="math-container">$${\bf a}\cdot{\bf u}$$</span> is maximal. By Schwarz' inequality we have <span class="math-container">${\bf a}\cdot{\bf u}\leq|{\bf a}|$</span> for all <span class="math-container">${\bf u}\in S^3$</span>. Maybe you can find a <span class="math-container">${\bf u}\in S^3$</span> for which we have equality here.</p>
108,253
<p>I would like to assign 'x' individuals to 'y' groups, randomly. For example, I would like to divide 50 individuals into 100 groups randomly. Of course, with more groups than individuals many of the groups will have zero individuals, while some groups will have multiple individuals. That is fine. With random assignment, the distribution of the number of individuals per group should fit a Poisson distribution.</p> <p>I feel like there should be a simple function in Mathematica for partitioning X things into Y groups randomly. I have searched and haven't found anything to do this. Please help!</p>
Dr. belisarius
193
<p>it looks like a Poisson:)</p> <pre><code>t = Tally[Flatten@RandomChoice[Range@100, {100000, 50}]][[All, 2]]; fdu = FindDistributionParameters[t, PoissonDistribution[u]]; Show[SmoothHistogram[t, PlotStyle -&gt; Red], Plot[PDF[PoissonDistribution[u /. fdu]][x], {x, Min@t, Max@t}, PlotStyle -&gt; Blue]] </code></pre> <p><img src="https://i.stack.imgur.com/lvqBi.png" alt="Mathematica graphics"></p>
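A similar sanity check can be sketched in Python with only the standard library (an editor's illustrative script): for 50 individuals in 100 groups, the per-group counts are approximately Poisson with mean $0.5$, so roughly $e^{-0.5}\approx 0.607$ of the groups should be empty.

```python
import math
import random
from collections import Counter

random.seed(1)
x, y, trials = 50, 100, 2000

empty_fraction = 0.0
for _ in range(trials):
    # Assign each of x individuals to one of y groups uniformly at random.
    counts = Counter(random.randrange(y) for _ in range(x))
    empty_fraction += (y - len(counts)) / y
empty_fraction /= trials

print(empty_fraction, math.exp(-0.5))  # both about 0.6
```

(The exact empty-group probability is $(1-1/100)^{50}\approx 0.605$; the Poisson value $e^{-0.5}\approx 0.607$ is the large-$y$ approximation.)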
108,253
<p>I would like to assign 'x' individuals to 'y' groups, randomly. For example, I would like to divide 50 individuals into 100 groups randomly. Of course, with more groups than individuals many of the groups will have zero individuals, while some groups will have multiple individuals. That is fine. With random assignment, the distribution of the number of individuals per group should fit a Poisson distribution.</p> <p>I feel like there should be a simple function in Mathematica for partitioning X things into Y groups randomly. I have searched and haven't found anything to do this. Please help!</p>
Eric Towers
16,237
<p>This function is "stable" in the sense that elements will appear in sublists in the same order they appear in the supplied list.</p> <pre><code>Clear[randomSetPartition]; randomSetPartition[n_ /; IntegerQ[n], parts_] := randomSetPartition[Range[1, n], parts] randomSetPartition[individuals_List, parts_] := Module[ { retVal, targets }, retVal = Table[{}, {parts}]; targets = RandomInteger[{1, parts}, {Length[individuals]}]; MapThread[AppendTo[retVal[[#1]], #2] &amp;, {targets, individuals}]; retVal ] </code></pre> <p>This clears previous definitions, sets up a convenience function so you can use the small integers as a list of individuals rather than providing a specific list of individuals, then defines the workhorse. That function creates a list of empty bins, then successively inserts each individual into a uniformly randomly selected bin. Then it outputs the resulting set of bins.</p> <p>Example usage: </p> <pre><code>randomSetPartition[{a, b, c}, 10] randomSetPartition[3, 10] (* {{}, {}, {}, {}, {}, {c}, {b}, {}, {}, {a}} {{}, {3}, {}, {2}, {}, {1}, {}, {}, {}, {}} *) randomSetPartition[Range[50], 100] (* {{}, {}, {28}, {38}, {37}, {21}, {}, {}, {}, {}, {}, {}, {}, {48}, {41, 45}, {}, {}, {22}, {14}, {}, {}, {}, {}, {}, {8}, {44}, {29}, {}, {}, {}, {}, {11, 32, 40, 47}, {}, {10}, {}, {}, {}, {17}, {}, {2}, {25}, {}, {}, {}, {23}, {49}, {}, {}, {4, 7, 35}, {}, {}, {}, {31}, {18}, {}, {}, {}, {27}, {6}, {19}, {}, {}, {24, 46}, {3}, {}, {}, {20}, {34}, {}, {12, 50}, {}, {5}, {39}, {}, {30}, {}, {13, 43}, {}, {33}, {}, {1, 42}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {}, {15}, {9, 16, 26}, {}, {}, {36}, {} } *) </code></pre> <p>Possible bug detected: The lines "<code>target = ...</code>" and "<code>MapThread ...</code>" were originally the one line </p> <pre><code>AppendTo[retVal[[Evaluate[RandomInteger[{1, parts}]]]], #] &amp; /@ individuals; </code></pre> <p>However, this line produces incorrect results. 
Since <code>AppendTo[]</code> is <code>HoldFirst</code>, the random integer is generated twice: once to figure out which bin to read for appending and another time to assign a bin back into the list of bins. This led to crazy things, like </p> <pre><code>randomSetPartition[{"a", "b", "c"}, 10] (* {{}, {"a"}, {"c"}, {}, {}, {}, {}, {"a", "b"}, {}, {}} *) (* Note: ^ ^ *) </code></pre> <p>which happened when the first random number said "append <code>"b"</code> to bin 2" and the second random number said "but overwrite bin 8".</p> <p>There may be some convoluted way to get that to not happen with <code>Evaluate[]</code> and <code>ReleaseHold[]</code> magic -- but my fu is weak today. Instead, one could realize the random numbers into an array, <code>targets</code>, then use that array.</p> <p>Possible directions of extension:</p> <ul> <li>We could accept a list of weights to allow nonuniform assignment to bins.</li> <li>This could be slow. I've never looked at the timing for inserting into depth 2 lists. It could be faster (for large inputs) to either make a list of {bin, individual} pairs, sort those, then construct the result iteratively. Maybe it would be better to make a map of bin -> individuals and read that out. Much testing could ensue.</li> <li>For very sparse outputs, there have to be better ways to generate and return all the empties. For instance, use a method to determine how many bins will be nonempty, partition into that many bins, then randomly permute with the right number of empties. Some sort of sparse representation of the non-empties could be a good idea for speed and memory.</li> <li>We could specify a sort order for the output bins, possibly different sorts for different bins.</li> </ul>
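<p>For comparison, the same uniform binning is a few lines outside Mathematica as well; a rough Python sketch (names are mine, not from the answer). Python has its own version of the aliasing trap discussed above: <code>[[]] * parts</code> would make every bin the <em>same</em> list, so independent bins must be built explicitly.</p>

```python
import random

def random_set_partition(individuals, parts, rng=random):
    """Drop each individual into one of `parts` bins, chosen uniformly at random."""
    # [[] for _ in ...] builds independent bins; [[]] * parts would alias them all
    bins = [[] for _ in range(parts)]
    for ind in individuals:
        bins[rng.randrange(parts)].append(ind)
    return bins

groups = random_set_partition(range(50), 100, random.Random(0))
print(sum(1 for g in groups if g))  # number of non-empty groups
```

<p>Like the Mathematica version, this is stable: elements appear in each bin in the order they occur in the input.</p>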
3,306,571
<p>I know that the function <span class="math-container">$f(x) = \frac{x}{x}$</span> is not differentiable at <span class="math-container">$x = 0$</span>, but according to the definition of differentiable functions:</p> <blockquote> <p>A differentiable function of one real variable is a function whose derivative exists at each point in its domain</p> </blockquote> <p>since <span class="math-container">$x = 0$</span> is not in the domain of <span class="math-container">$f$</span>, it doesn't have to be differentiable at that point for the function to be differentiable. This suggests that <span class="math-container">$f$</span> is differentiable as every other points in the domain has a derivative of <span class="math-container">$0$</span>.</p> <p>However, some say that a function must be continuous if it's differentiable. This disproves the fact that <span class="math-container">$f$</span> is differentiable since it's not a continuous function.</p> <p>Then is it really a differentiable function?</p>
Ross Millikan
1,827
<p><span class="math-container">$\frac xx$</span> is continuous at all points in its domain. It has derivative <span class="math-container">$0$</span> at all points of <span class="math-container">$\Bbb R$</span> except <span class="math-container">$0$</span>.</p>
336,196
<p>Can anyone help me with the following SDE?</p> <p>Solve the following stochastic differential equation: $$dY_t=aY_tdt+(b(t)+cY_t)dB_t$$ with $Y_0=0$.</p> <p>Hint: Try a solution of the form $Z_tH_t$ where $Z_t = exp(cB_t+(a-\frac{1}{2}c^2t))$ and $dH_t=F(t)dt+G(t)dB_t$ for some adapted process F and G which need to be determined.</p> <p>Thanks gt6989b for your input!</p> <p>Please correct me if I'm wrong anywhere.</p> <p>Since, $$Z_t = exp(cB_t+(a-\frac{1}{2}c^2t))$$ Hence we get: $dZ_t = Z_t(cdB_t-\frac{1}{2}c^2)dt$</p> <p>and we also have $dH_t=F(t)dt+G(t)dB_t$</p> <p>Then we apply Ito's Lemma on $Z_tH_t$, which yields \begin{eqnarray*} d(Z_tH_t) &amp;=&amp; Z_tdH_t+H_tdZ_t+dH_tdZ_t \\ &amp;=&amp; Z_t(F(t)dt+G(t)dB_t)+Z_tH_t(cdB_t-\frac{1}{2}c^2dt)+Z_tcG(t)dt \\ &amp;=&amp; Z_t(F(t)-\frac{1}{2}c^2H_t+cG(t))dt+(Z_tG(t)+cZ_tH_t)dB_t \end{eqnarray*}</p> <p>By letting $Y_t = Z_tH_t$, we compare between the expressions $dY_t$ and $d(Z_tH_t)$ in the $dt$ and $dB_t$ terms respectively. And we get the following:</p> <p>\begin{eqnarray} Y_t &amp;=&amp; Z_tH_t \\ G(t) &amp;=&amp; \frac{Z_t}{b(t)} \\ F(t) &amp;=&amp; (a+\frac{1}{2}c^2)H_t - c\frac{Z_t}{b(t)} \end{eqnarray}</p> <p>Note: $F(t) = P$ and $G(t) = Q$ for your P and Q respectively.</p> <p>But now, I don't really understand how would this result help me in solving the original $dY_t$ SDE. </p>
Anders Muszta
294,222
<p>A linear SDE $$\text{d}Y_{t} = (\alpha(t)+\beta(t)Y_{t})\,\text{d}t+(\gamma(t)+\delta(t)Y_{t})\,\text{d}W_{t}$$ has an explicit (strong) solution which can be found on <a href="https://en.wikipedia.org/wiki/Stochastic_differential_equation#Linear_SDE:_general_case" rel="nofollow">Wikipedia</a> . Here $\alpha$, $\beta$, $\gamma$ and $\delta$ denote deterministic functions and $W$ a one-dimensional Wiener process. The solution, $Y\ ,$ is expressed in terms of the stochastic exponential, $X\ ,$ which is the strong solution to the linear SDE $$\text{d}X_{t} = \beta(t)Y_{t}\,\text{d}t+\delta(t)Y_{t}\,\text{d}W_{t}\ .$$</p>
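<p>For convenience, the explicit solution referred to, transcribed here from memory of the standard result (so double-check it against the linked page), reads</p>

```latex
Y_t = \Phi_t\!\left( Y_0
  + \int_0^t \Phi_s^{-1}\bigl(\alpha(s)-\delta(s)\gamma(s)\bigr)\,\text{d}s
  + \int_0^t \Phi_s^{-1}\,\gamma(s)\,\text{d}W_s \right),
\qquad
\Phi_t = \exp\!\left( \int_0^t \Bigl(\beta(s)-\tfrac{1}{2}\delta(s)^2\Bigr)\text{d}s
  + \int_0^t \delta(s)\,\text{d}W_s \right),
```

<p>where <span class="math-container">$\Phi$</span> is the stochastic exponential mentioned above; it plays the role of the <span class="math-container">$Z_t$</span> in the question's hint.</p>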
510,814
<p>I've seen in several places without further comment that if an equalizer is epic, it's an isomorphism. I've only proved one half of this:</p> <p>Suppose $e:X \rightarrow A$ is an epimorphism and an equalizer for $f$ and $g$. Then $f \circ e = g \circ e \implies f = g$. Then any function $e': X' \rightarrow A$ trivially equalizes $f$ and $g$, so take $id_A: A \rightarrow A$. $e$ is an equalizer, so there exists a unique $k: A \rightarrow X$ such that $e \circ k = id_A$.</p> <p>That gets me one side of the inverse, but how do I prove that $k \circ e = id_X$?</p>
Community
-1
<p>Suppose $B$ is the target of $f$ and $g$.</p> <p>Since $e : X \to A$ is an equalizer of $f$ and $g$ and $X\xrightarrow{e} A \xrightarrow{f} B = X \xrightarrow{e} A \xrightarrow{g} B$ , hence there exists a unique $\ell : X \to X$ such that $X\xrightarrow{\ell} X \xrightarrow{e} A = X \xrightarrow{e} A$. </p> <p>On one hand $X\xrightarrow{\mathrm{id}_X} X \xrightarrow{e} A = X \xrightarrow{e} A$. Hence, by the uniqueness, $$ \ell \text{ is } X\xrightarrow{\mathrm{id}_X} X $$</p> <p>On the other hand, since you have already proved that $A \xrightarrow{k} X \xrightarrow{ e } A = id_A$, hence $X \xrightarrow{e} A \xrightarrow{k} X \xrightarrow{e} A = X \xrightarrow{e} A$. Hence, by the uniqueness, $$ \ell \text{ is } X \xrightarrow{e} A \xrightarrow{k} X $$</p> <p>Therefore, comparing the two displayed equations, the result that $ X \xrightarrow{e} A \xrightarrow{k} X = X\xrightarrow{\mathrm{id}_X} X $ follows.</p>
2,567,332
<p>A Greek urn contains a red, blue, yellow, and orange ball. A ball is drawn from the urn at random and then replaced. If one does this $4$ times, what is the probability that all $4$ colors were selected?</p> <p>I approached this question by doing $(1/4)^4$ because there's always a $1/4$ chance of selecting a specific color ball if it's replaced. I also tried the probability that the correct ball was not selected, so I did $(3/4)^4$, but that didn't work either. What am I doing wrong?</p>
Puffy
513,387
<p>The first ball drawn can be any colour, so that draw succeeds with probability $\frac{4}{4}.$</p> <p>Since the first ball is replaced, there is a $\frac{1}{4}$ chance that the second draw repeats the first colour, so the chance of drawing a new colour is $\frac {3}{4}$.</p> <p>On the third draw there is a $\frac{2}{4}$ chance of repeating one of the two colours already seen, so the chance of a new colour is $\frac{2}{4}$.</p> <p>Finally, with three colours already drawn, only one colour remains, so the last draw produces it with probability $\frac {1}{4}$.</p> <p>$$\frac{4}{4} \times\frac{3}{4}\times\frac {2}{4}\times \frac {1}{4}=\frac{3}{32} =0.09375$$</p> <p>Conclusion: There is a $\frac{3}{32} $ chance of you getting all four colours.</p>
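<p>A quick Monte Carlo spot-check of the $\frac{3}{32}$ figure (my sketch, not part of the original answer):</p>

```python
import random

def all_four_colours(trials, rng):
    """Estimate P(all 4 colours appear in 4 draws with replacement)."""
    hits = 0
    for _ in range(trials):
        # one trial: 4 draws from 4 equally likely colours, with replacement
        if len({rng.randrange(4) for _ in range(4)}) == 4:
            hits += 1
    return hits / trials

est = all_four_colours(200_000, random.Random(0))
print(est)  # close to 3/32 = 0.09375
```

<p>Equivalently, the exact count is $4!/4^4 = 24/256 = 3/32$: there are $4!$ favourable ordered outcomes out of $4^4$.</p>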
2,413,891
<blockquote> <p><strong>Question :</strong> Evaluate - $$\int_{0}^{1}2^{x^2+x}\mathrm dx$$</p> </blockquote> <p><strong>My Attempt :</strong> First I tried to evaluate the indefinite integral of $2^{x^2+x}$ in order to put the limits $0$ and $1$ later on, but couldn't integrate it. Then I checked on WA and came to know that its elementary antiderivative doesn't exist. Now I moved on to using properties of definite integration such as $$\int_a^b f(x) \mathrm dx=\int_a^b f(a+b-x) \mathrm dx$$</p> <p>But it couldn't help either. Can you please give me a hint to proceed on this question?</p> <p>P.S. - This is a high school level problem and therefore its solution shouldn't involve any special functions, such as the Gaussian Integral etc.</p> <p><strong>Edit</strong> : I asked my teacher this question and basically this was an approximation based question. It was an MCQ-type question with an option "None of the above", and that was the correct answer, since the other options were made in such a way that they can be rejected by bounding this integral between 2 functions. For example we can use $$2^{x^2+x}&lt;2^{2x} ~; ~x\in (0,1)$$ and thus can be sure that this integral is less than $3/\ln(4)$.</p>
Satish Ramanathan
99,745
<p>Hint:</p> <p>Put <span class="math-container">$y = 2^{x^2+x}$</span></p> <p>Since <span class="math-container">$dy = \ln(2)\,(2x+1)\,y\,dx$</span> and <span class="math-container">$2x+1 = \sqrt{1+\frac{4}{\ln(2)}\ln(y)}$</span>, the integral becomes <span class="math-container">$\frac{1}{\ln(2)}\int_{1}^{4} \frac{1}{\sqrt{1+ \frac{4}{\ln(2)}\ln(y)}}dy$</span></p> <p>Again Put <span class="math-container">$\sqrt{1+ \frac{4}{\ln(2)}\ln(y)}= u$</span></p> <p>Integral becomes <span class="math-container">$\int_{1}^{3}\frac{1}{2e^a} e^{au^2} du$</span></p> <p>where <span class="math-container">$a = \frac{\ln(2)}{4}$</span></p> <p>It resembles the standard integral <span class="math-container">$\int_{1}^{3} e^{au^2}du$</span></p> <p><span class="math-container">$$\int e^{au^2}du = \frac{-i\sqrt{\pi}}{2\sqrt{a}} \text{erf}\left(iu\sqrt{a}\right)$$</span></p> <p>I hope you can take it from there</p> <p>I am attaching the table of standard integrals for your reference</p> <p><a href="http://integral-table.com/downloads/integral-table.pdf" rel="nofollow noreferrer">http://integral-table.com/downloads/integral-table.pdf</a></p> <p>see page 7, integral number 67</p> <p>Good luck</p>
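<p>The bound mentioned in the question's edit, $\int_0^1 2^{x^2+x}\,dx &lt; 3/\ln 4 \approx 2.164$, is easy to verify numerically; a rough sketch with a hand-rolled Simpson rule (mine, not part of the answer):</p>

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    total += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    total += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return total * h / 3

val = simpson(lambda x: 2 ** (x * x + x), 0.0, 1.0)
print(val, "<", 3 / math.log(4))  # integral is roughly 1.94, bound is about 2.16
```

<p>So the integral sits comfortably below the $3/\ln(4)$ bound, consistent with the "None of the above" elimination argument.</p>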
1,566,471
<p>Hi can someone please help?</p> <p>I need to evaluate this indefinite integral:</p> <p>$$\int \frac{(\ln x)^5}x dx$$</p> <p>I know I need to use substitution, so if I let <em>u= x</em> but I can't figure out the antiderivative for the top portion.</p> <p>Thank you!</p>
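<p>For the record, the substitution that works here is $u = \ln x$ (not $u = x$), since then $du = \frac{dx}{x}$ absorbs the denominator:</p>

```latex
\int \frac{(\ln x)^5}{x}\,dx
\;=\; \int u^5\,du
\;=\; \frac{u^6}{6} + C
\;=\; \frac{(\ln x)^6}{6} + C.
```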
Peter
82,961
<p>The definition is correct in the case $P(A)=0$ (or $P(B)=0$) only if the event $A$ (or $B$) is impossible.</p> <p>As you have shown, the definition breaks down for events with $P(A)=0$, which can occur.</p>
2,110,286
<p>Show that if $A$ and $B$ are subsets of a set $S$, then $\overline{A \cap B}=\overline{A}\cup \overline{B}$.</p> <p>I tried to prove that $A \cap B=A \cup B$ because I didn't realize that the overline meant to prove it for the <em>closure</em> of the sets.</p> <p>So, now I am confused about how to prove for closure. I cannot find it in my textbook, and by some "similar" proofs online led me to conclude that $\overline{A \cap B}=\overline{A \cup B}$ but I somehow don't know if this is true, or how to prove it exactly. So, now I am not sure if I understand this principle at all.</p>
Kanwaljit Singh
401,635
<p>Let M = (A ∩ B)' and N = A' ∪ B'.</p> <p>Let x be an arbitrary element of M. Then x ∈ M ⇒ x ∈ (A ∩ B)'</p> <p>⇒ x ∉ (A ∩ B)</p> <p>⇒ x ∉ A or x ∉ B</p> <p>⇒ x ∈ A' or x ∈ B'</p> <p>⇒ x ∈ A' ∪ B'</p> <p>⇒ x ∈ N</p> <p>Therefore, M ⊂ N …………….. (i)</p> <p>Again, let y be an arbitrary element of N. Then y ∈ N ⇒ y ∈ A' ∪ B'</p> <p>⇒ y ∈ A' or y ∈ B'</p> <p>⇒ y ∉ A or y ∉ B</p> <p>⇒ y ∉ (A ∩ B)</p> <p>⇒ y ∈ (A ∩ B)'</p> <p>⇒ y ∈ M</p> <p>Therefore, N ⊂ M …………….. (ii)</p> <p>Now combining (i) and (ii), we get</p> <p>M = N </p> <p>(A ∩ B)' = A' ∪ B'</p>
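<p>The identity (and its dual) can be spot-checked on concrete finite sets; a small sketch (the example sets are arbitrary, not from the proof):</p>

```python
S = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(X):
    # complement relative to the universe S
    return S - X

# De Morgan: (A ∩ B)' = A' ∪ B'
assert complement(A & B) == complement(A) | complement(B)
# dual law: (A ∪ B)' = A' ∩ B'
assert complement(A | B) == complement(A) & complement(B)
print("both identities hold on this example")
```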
874,300
<p>I'm having trouble grasping how to set these types of problems. There are a lot of related questions but it's difficult to abstract a general procedure on finding constants that give the given function bounding constraints to make it big-theta(general function). </p> <p>so $\frac{x^4 +7x^3+5}{4x+1}$ is $ \Theta (x^3) $</p> <p>to show this, we need to find constants such that.</p> <p>$$ |c_1|(x^3) \leq \frac{x^4 +7x^3+5}{4x+1} \leq |c_2|(x^3)$$ In addition, there also has to be a $k$ such that for all values $x &gt;k $ the argument holds.</p> <p>start with one inequality $$ |c_1|(x^3) \leq \frac{x^4 +7x^3+5}{4x+1}$$ $$ = |c_1| \leq \frac{x^4 +7x^3+5}{4x^4+x^3}$$ $$ = |c_1| \leq \frac{x^4}{x^3(4x+1)} + \frac{7x^3}{x^3(4x+1)} + \frac{5}{x^3(4x+1)}$$ so basically for $x &gt; 0$, $$ |c_1| \leq \frac{1}{4} + 0 + 0$$ I'm assuming after I take the limit as x goes to infinity, i could choose any $c_1$ less than or equal to $\frac{1}{4}$? The other way would then have the same procedure? What would I set $k$ to?</p>
Mary Star
80,708
<p>Yes, you could take $c_1=\frac{1}{4}$; then you have the following:</p> <p>$$ \frac{1}{4}x^3 \leq \frac{x^4 +7x^3+5}{4x+1} \Rightarrow \frac{4x^4+x^3}{4} \leq x^4+7x^3+5 \Rightarrow x^4+\frac{1}{4}x^3 \leq x^4+7x^3+5 \\ \Rightarrow \frac{27}{4}x^3 \geq -5 \Rightarrow x^3 \geq -\frac{20}{27} \Rightarrow x \geq - \sqrt[3]{\frac{20}{27}}$$</p> <p>Since every $x&gt;0$ automatically satisfies $x \geq - \sqrt[3]{\frac{20}{27}}$, the inequality holds for all $x&gt;0$, so we can take $k_1=1$.</p> <p>From the other inequality you will find $k_2$, such that $\frac{x^4 +7x^3+5}{4x+1} \leq |c_2|(x^3), \forall x \geq k_2$.</p> <p>Then $k=\max \{k_1, k_2 \}$.</p>
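<p>The constants can also be sanity-checked numerically; here is a rough sketch (the upper-bound pair $c_2=2$, $k=2$ is my own choice, not derived in the answer — one checks $x^4+7x^3+5 \leq 2x^3(4x+1)$ for $x \geq 2$):</p>

```python
def f(x):
    return (x**4 + 7 * x**3 + 5) / (4 * x + 1)

c1, c2, k = 1 / 4, 2.0, 2  # c2 and k are my own choices for the upper bound
for x in range(k, 5000):
    assert c1 * x**3 <= f(x) <= c2 * x**3

print(f(10**6) / 10**18)  # the ratio f(x)/x^3 tends to 1/4
```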
373,958
<p>Is $\sum_{n=1}^\infty(2^{\frac1{n}}-1)$ convergent or divergent? $$\lim_{n\to\infty}(2^{\frac1{n}}-1) = 0$$ I can't think of anything to compare it against. The integral looks too hard: $$\int_1^\infty(2^{\frac1{n}}-1)dn = ?$$ Root test seems useless as $\left(2^{\frac1{n}}\right)^{\frac1{n}}$ is probably even harder to find a limit for. Ratio test also seems useless because $2^{\frac1{n+1}}$ can't cancel out with ${2^{\frac1{n}}}$. It seems like the best bet is comparison/limit comparison, but what can it be compared against?</p>
John Dawkins
68,151
<p>Try the Comparison Test, using the elementary inequality $$ 2^{1/n} -1 &gt; {\log 2\over n} $$ for $n=1,2,\ldots$.</p>
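<p>To unpack the hint: the inequality follows from $e^t &gt; 1+t$ for $t&gt;0$, applied with $t = (\ln 2)/n$:</p>

```latex
2^{1/n} - 1 \;=\; e^{(\ln 2)/n} - 1 \;>\; \frac{\ln 2}{n},
\qquad\text{while}\qquad
\sum_{n=1}^{\infty} \frac{\ln 2}{n} = \infty,
```

<p>so $\sum_{n=1}^\infty(2^{1/n}-1)$ diverges by comparison with a constant multiple of the harmonic series.</p>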
2,282,818
<p>I'm getting $f(x)=2x+f(0)$ and $f(x)=f(0)-2x$ by setting $y=0$, but I'd like to verify. Am I right?</p>
Umberto P.
67,536
<p>The function $f$ is clearly continuous and one-to-one, since it satisfies the Lipschitz condition and $f(x) = f(y)$ implies $x=y$. Thus $f$ is monotone, and consequently either increasing or decreasing. </p> <p>If $f$ is increasing, the condition $|f(x) - f(0)| = 2|x|$ leads to $$x &gt; 0 \implies f(x) &gt; f(0) \implies f(x) - f(0) = 2x$$ and $$x &lt; 0 \implies f(x) &lt; f(0) \implies f(0) - f(x) = -2x$$ so that $f(x) = 2x + f(0)$ for all $x \in \mathbb R$.</p> <p>Likewise, if $f$ is decreasing then $f(x) = -2x + f(0)$ for all $x \in \mathbb R$.</p> <p>This means these are the only two functions satisfying the stated condition.</p>
4,265,001
<p>I'm a bit confused about expanding out the notation of product of matrices, in the context of quadratic forms.</p> <p>If <span class="math-container">$x \in \mathbb{R}^n, \, \, A \in \mathbb{R}^{n \times n}$</span> then</p> <p><span class="math-container">$$x^TAx = \sum_{i,j=1}^na_{ij}x_ix_j$$</span></p> <p>But then if I consider a <strong>matrix</strong> <span class="math-container">$X \in \mathbb{R}^{n \times n}$</span> how should I write the expanded form of</p> <p><span class="math-container">$$X^TAX = \, \,...\, \, ?$$</span></p> <p>This time the result will be a <strong>matrix</strong>... will it be something like</p> <p><span class="math-container">$$X^TAX = \sum_{i,j=1}^n a_{ij}x_ix_j^T$$</span></p> <p>and if yes, why?</p> <p>Sorry if this is pretty straightforward, but I always get a little bit stuck with matrix notation.</p> <p>Many thanks,</p> <p>James</p>
Jean-Claude Arbaut
43,608
<p>It's easy to see why your attempt is wrong by considering a matrix <span class="math-container">$X$</span> of dimension <span class="math-container">$n\times p$</span>. Then <span class="math-container">$X^TAX$</span> has dimension <span class="math-container">$p\times p$</span>, while your formula yields a <span class="math-container">$n\times n$</span> matrix.</p> <p>You can work out the contribution of <span class="math-container">$a_{i,j}$</span> to the product by writing the product:</p> <p><span class="math-container">$$X^TAX=\sum_{i,j,k,l} x_{i,k}a_{i,j}x_{j,l}e_{k,l}$$</span></p> <p>Where <span class="math-container">$e_{k,l}$</span> are elementary matrices, and the sum runs on all valid indices. All elements of the matrix <span class="math-container">$e_{k,l}$</span> are <span class="math-container">$0$</span> except at index <span class="math-container">$(k,l)$</span> where it's <span class="math-container">$1$</span>.</p> <p>Then:</p> <p><span class="math-container">$$X^TAX=\sum_{i,j} a_{i,j} \left(\sum_{k,l} x_{i,k}x_{j,l}e_{k,l}\right)$$</span></p> <p>And inside the parentheses we recognize <span class="math-container">$x_i^Tx_j$</span>, where <span class="math-container">$x_i$</span> is the <span class="math-container">$i$</span>th <strong>row</strong>.</p> <p>If it's difficult to recognize, consider this: the element at index <span class="math-container">$(k,l)$</span> is <span class="math-container">$x_{i,k}x_{j,l}$</span>. That is, the matrix <span class="math-container">$\sum_{k,l} x_{i,k}x_{j,l}e_{k,l}$</span> is an outer product <span class="math-container">$uv^T$</span> where <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are column vectors. 
The <span class="math-container">$k$</span>th element of <span class="math-container">$u$</span> must be <span class="math-container">$x_{i,k}$</span>, that is <span class="math-container">$u$</span> has the same elements as the row <span class="math-container">$x_i$</span>, except it's a column vector, so <span class="math-container">$u=x_i^T$</span>. And the <span class="math-container">$l$</span>th element of <span class="math-container">$v$</span> is <span class="math-container">$x_{j,l}$</span>, and in the product you get <span class="math-container">$v^T$</span>, which is a row vector, hence <span class="math-container">$v^T=x_j$</span>. Therefore <span class="math-container">$\sum_{k,l} x_{i,k}x_{j,l}e_{k,l}=x_i^Tx_j$</span>.</p>
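<p>A small numeric spot-check of the resulting identity <span class="math-container">$X^TAX=\sum_{i,j}a_{i,j}\,x_i^Tx_j$</span>, with <span class="math-container">$x_i$</span> the rows of <span class="math-container">$X$</span> (the example matrices are arbitrary; integers keep the comparison exact):</p>

```python
n, p = 3, 2
A = [[1, 2, 0],
     [0, 1, 3],
     [4, 0, 1]]          # n x n
X = [[1, 2],
     [3, 4],
     [5, 6]]             # n x p

def matmul(P, Q):
    rows, inner, cols = len(P), len(Q), len(Q[0])
    return [[sum(P[r][t] * Q[t][c] for t in range(inner)) for c in range(cols)]
            for r in range(rows)]

XT = [[X[i][k] for i in range(n)] for k in range(p)]  # transpose of X
lhs = matmul(matmul(XT, A), X)                        # X^T A X computed directly

# sum over i, j of a_ij times the outer product x_i^T x_j (x_i = i-th ROW of X)
rhs = [[0] * p for _ in range(p)]
for i in range(n):
    for j in range(n):
        for k in range(p):
            for l in range(p):
                rhs[k][l] += A[i][j] * X[i][k] * X[j][l]

assert lhs == rhs
print(lhs)
```

<p>Note the result is <span class="math-container">$p\times p$</span>, as the dimension count at the start of the answer requires.</p>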
3,086,024
<p>I am trying to solve an exercise from the book "Theory of Numbers" by B.M.Stewart. The exercise is the following one:</p> <blockquote> <p>Let <span class="math-container">$T=2^ap_1^{a_1}p_2^{a_2} \dots p_n^{a_n}$</span>, where <span class="math-container">$a \ge0, n\ge0, 2&lt;p_1&lt;p_2&lt;\dots p_n, p_j$</span> odd prime numbers <span class="math-container">$ \forall j=1 \dots n,a_j \ge1$</span> and let <span class="math-container">$S(T)$</span> indicate the number of primitive Pythagorean triplets of side <span class="math-container">$T$</span>. Show that <span class="math-container">$S(T) = 2^{n-1}$</span> if <span class="math-container">$a=0$</span>. </p> </blockquote> <p>The primitive Pythagorean triplets are the solutions of <span class="math-container">$x^2+y^2=z^2$</span> where <span class="math-container">$\gcd(x,y,z)=1$</span> and they are given by <span class="math-container">$$\cases {x=2uv \\y=u^2-v^2\\z=u^2+v^2\\u&gt;v&gt;0\\ \gcd(u,v)=1\\ u \not\equiv v \pmod2} $$</span></p> <p>If <span class="math-container">$a=0$</span> then <span class="math-container">$T=p_1^{a_1}p_2^{a_2} \dots p_n^{a_n}$</span>, so <span class="math-container">$T \ne 2uv$</span>. Then <span class="math-container">$T=y$</span> or <span class="math-container">$T=z$</span>. </p> <p>If <span class="math-container">$T=y$</span>, then <span class="math-container">$$T=u^2-v^2=(u+v)(u-v)=p_1^{a_1}p_2^{a_2} \dots p_n^{a_n}.$$</span> So for <span class="math-container">$(u+v)$</span> I have <span class="math-container">$2^n$</span> possibilities because I can count the number of prime factors which is <span class="math-container">$\tau(r)$</span> with <span class="math-container">$r=p_1p_2 \dots p_n$</span>.</p> <p>From there I don't know how to proceed. Have you any idea?</p>
saulspatz
235,128
<p>You're awfully close, if you accept my comment that only the <span class="math-container">$T=y$</span> case needs to be considered. It follows from <span class="math-container">$\gcd(u,v)=1, u \not\equiv v \pmod2$</span> that <span class="math-container">$\gcd(u+v,u-v)=1.$</span> There are <span class="math-container">$2^n$</span> ways to split <span class="math-container">$T$</span> up into a pair of relatively prime factors, and the larger one must be <span class="math-container">$u+v,$</span> so you just solve two linear equations for <span class="math-container">$u$</span> and <span class="math-container">$v$</span>.</p> <p><strong>EDIT</strong> Suppose we had <span class="math-container">$T=3^2\cdot5\cdot7.$</span> We could split this into the two factors <span class="math-container">$9$</span> and <span class="math-container">$35$</span>. Then we would have to solve <span class="math-container">$$\begin{align}u+v &amp;=35\\u-v&amp;=9\end{align}$$</span> so <span class="math-container">$u=22,v=13.$</span> </p> <p>We have to divide the number of splittings by <span class="math-container">$2$</span> because <span class="math-container">$u+v$</span> must always be the larger factor, so half the ordered choices are inadmissible, leaving <span class="math-container">$2^{n-1}$</span>.</p>
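<p>The count <span class="math-container">$S(T)=2^{n-1}$</span> can be brute-force checked for a small case, say <span class="math-container">$T=3^2\cdot5\cdot7=315$</span> (so <span class="math-container">$n=3$</span> and <span class="math-container">$2^{n-1}=4$</span>); a sketch, enumerating generator pairs directly:</p>

```python
from math import gcd

def primitive_triples_with_side(T):
    """All primitive Pythagorean triples having T among the three sides,
    generated from u > v > 0, gcd(u, v) = 1, u and v of opposite parity."""
    found = set()
    for u in range(2, T):        # u < T suffices: each side grows at least like u
        for v in range(1, u):
            if gcd(u, v) != 1 or (u - v) % 2 == 0:
                continue
            x, y, z = 2 * u * v, u * u - v * v, u * u + v * v
            if T in (x, y, z):
                found.add((min(x, y), max(x, y), z))
    return found

print(len(primitive_triples_with_side(315)))  # 4 == 2^(n-1) for n = 3 odd primes
```

<p>The four coprime splittings <span class="math-container">$\{315,1\},\{63,5\},\{45,7\},\{35,9\}$</span> correspond exactly to the four <span class="math-container">$(u,v)$</span> pairs found.</p>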
503,589
<ol> <li><p>Let $\epsilon&gt;0$. Prove that the set of those $x\in [0,1]$ such that there exist infinitely many fractions $p/q$, with relatively prime integers $p$ and $q$ such that $$\bigg |x-\frac{p}{q}\bigg|\leq \frac{1}{q^{2+\epsilon}}$$ is a set of measure zero.</p></li> <li><p>Let $(a_n)$ be a sequence of real numbers, and let $(\alpha_n)$ be a sequence of positive numbers such that $\sum_n \sqrt {\alpha_n}&lt;\infty$. Prove that there exists a measurable set $A$ with $\lambda(A^c)=0$ (Lebesgue measure) such that $$\forall x\in A, \sum_n \frac{\alpha_n}{|x-a_n|}&lt;\infty.$$</p></li> </ol>
Etienne
80,469
<p>(1) First observe that if $x\in [0,1]$ and $\left\vert x-\frac{p}q\right\vert\leq \frac{1}{q^{2+\varepsilon}}$ for some $p,q$, then $\frac pq\leq x+ \frac{1}{q^{2+\varepsilon}}\leq 2$ and hence $p\leq 2q$. Consequently, if you put $$A_q=\left\{ x\in [0,1];\; \exists p\leq 2q\;:\; \left\vert x-\frac{p}q\right\vert\leq \frac{1}{q^{2+\varepsilon}}\right\},$$ then the set you're looking at is exactly $A=\limsup_q A_q$.</p> <p>Now, for each fixed $q$ the set $A_q$ is contained in the union of $2q+1$ intervals of length $\frac2{q^{2+\varepsilon}}$; so $\lambda(A_q)\leq \frac{2\times(2q+1)}{q^{2+\varepsilon}}\leq \frac{6}{q^{1+\varepsilon}}$ and hence $\sum_{q\geq 1} \lambda(A_q)&lt;\infty$. By Borel-Cantelli, it follows that $\lambda(A)=0$. </p> <p>(2) Consider the set $$ A=\{ x\in\mathbb R;\; \exists N\;\forall n\geq N\;:\; \vert x-a_n\vert\geq \sqrt{\alpha_n}\}\, .$$ Note that if $x\in A$, then $\frac{\alpha_n}{\vert x-a_n\vert}\leq\sqrt{\alpha_n}$ for all but finitely many $n$ and hence $\sum_1^\infty \frac{\alpha_n}{\vert x-a_n\vert}&lt;\infty$. So it is enough to show that $\lambda (A^c)=0$. Now, by the definition of $A$ we have</p> <p>$$A^c=\limsup\, I_n \,,$$ where $I_n$ is the interval $[a_n-\sqrt{\alpha_n}, a_n+\sqrt{\alpha_n}]$. Since $\sum_1^\infty \sqrt{\alpha_n}&lt;\infty$, one can apply Borel-Cantelli again.</p>
64,544
<blockquote> <p>Please let me know what is the standard notation for group action.</p> </blockquote> <p>I saw the following three notations for group action. (All the images obtained as <code>G\acts X</code> for different deinitions of <code>\acts</code>.) </p> <p>(1) <img src="https://lh5.googleusercontent.com/_7jyZyirE1is/TcM6Q736oVI/AAAAAAAAACU/7li7VA1-FTc/s144/B.png" alt="alt text"> </p> <p>I saw this one most, but only in handwriting and I like it. But I did not find a better way to write it in LaTeX.</p> <pre><code>\usepackage{mathabx,epsfig} \def\acts{\mathrel{\reflectbox{$\righttoleftarrow$}}} </code></pre> <p>(2) <img src="https://lh5.googleusercontent.com/_7jyZyirE1is/TcM6XymimxI/AAAAAAAAACY/byg0FDFv7wM/s144/C.png" alt="alt text"> </p> <p>It is almost as good as 1, but in handwriting this arrow can be taken as $G$.</p> <pre><code>\usepackage{mathabx} \def\acts{\lefttorightarrow} </code></pre> <p>(3) <img src="https://lh6.googleusercontent.com/_7jyZyirE1is/TcM6JeflvjI/AAAAAAAAACQ/n_dfYCfeQEc/s800/A.png" alt="alt text"> </p> <p>I saw this one in print, I guess it is used since there is no better symbol in "amssymb".</p> <pre><code>\usepackage{amssymb} \def\acts{\curvearrowright} </code></pre>
Rob Harron
1,021
<p>As this was the first hit on google for "latex action arrow", but didn't contain what I wanted, let me post what I figured out. But to address other people's issues with the original question: while I agree that in a sentence one should simply say "Let $G$ act on $X$...", I was interested in drawing a (what some people just generically call "commutative") diagram in order to visually display the relationships between several objects (as you do).</p> <p>Anyway, there is a symbol $\circlearrowright$, which is almost what I wanted. If you do this:</p> <blockquote> <p>\usepackage{graphicx}<br> G\ \rotatebox[origin=c]{-90}{\$\circlearrowright\$}\ X</p> </blockquote> <p>you will get an arrow whose beginning and end are right next to the X.</p>
3,115,168
<p>I've converted <span class="math-container">$\cos^3(x)$</span> into <span class="math-container">$\cos^2(x)\cos(x)$</span> but still have not gotten the answer. </p> <p>The answer is <span class="math-container">$\dfrac{\sin(x)(3\cos^2x + 2\sin^2x)}{3}$</span></p> <p>My answer was the same except I did not have a $3$ in front of $x$ and my $2\sin^2x$ was not squared.</p> <p>Help! </p>
Nikunj
287,774
<p>If you wish to do this using integration by parts, use <span class="math-container">$\cos^2x$</span> as the first function (to differentiate) and integrate <span class="math-container">$\cos x$</span> to get: <span class="math-container">$$\int \cos^2 x \cos x dx = \cos^2 x \sin x + 2\int \sin^2 x \cos {x}dx$$</span> Now use <span class="math-container">$\sin x = t$</span> to get <span class="math-container">$dt = \cos x dx$</span> and <span class="math-container">$$\int \cos^2 x \cos x dx = \cos^2 x \sin x + \frac{2\sin^3 x}{3} + C$$</span></p>
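<p>One way to sanity-check the antiderivative is to differentiate it numerically and compare against <span class="math-container">$\cos^3 x$</span>; a rough sketch with a central difference (mine, not part of the answer):</p>

```python
import math

def F(x):
    # the antiderivative obtained above (integration constant dropped)
    return math.cos(x) ** 2 * math.sin(x) + 2 * math.sin(x) ** 3 / 3

def dF(x, h=1e-6):
    # central-difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

for x in (0.0, 0.7, 1.3, 2.5):
    print(x, dF(x), math.cos(x) ** 3)  # the last two columns should agree
```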
316,865
<p>How do you find this limit?</p> <p>$$\lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x$$</p> <p>I was given a clue to use L'Hospital's rule.</p> <p>I did it this way:</p> <p><strong>UPDATE 1:</strong> $$ \begin{align*} \lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x &amp;= \lim_{x \rightarrow \infty} x\begin{pmatrix}\sqrt[5]{1-\frac 1 x} -1\end{pmatrix}\\ &amp;= \lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x} \end{align*} $$</p> <p>Applying L' Hospital's, $$ \begin{align*} \lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x}&amp;= \lim_{x \rightarrow \infty} \frac{0.2\begin{pmatrix}1-\frac 1 x\end{pmatrix}^{-0.8}\begin{pmatrix}-x^{-2}\end{pmatrix}(-1)} {\begin{pmatrix}-x^{-2}\end{pmatrix}}\\ &amp;= -0.2 \end{align*} $$</p> <p>However the answer is $0.2$, so I would like to clarify the correct use of L'Hospital's</p>
ferson2020
59,689
<p>Yes, you are using L'Hopital's rule correctly, and the answer is $-\frac{1}{5}$. The steps are correct, and I double-checked the answer: <a href="http://www.wolframalpha.com/share/clip?f=d41d8cd98f00b204e9800998ecf8427epe5q2eamff" rel="nofollow">http://www.wolframalpha.com/share/clip?f=d41d8cd98f00b204e9800998ecf8427epe5q2eamff</a></p> <p>Not a big deal, but please note 'L'Hopital' does not have an 's' in it.</p>
316,865
<p>How do you find this limit?</p> <p>$$\lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x$$</p> <p>I was given a clue to use L'Hospital's rule.</p> <p>I did it this way:</p> <p><strong>UPDATE 1:</strong> $$ \begin{align*} \lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x &amp;= \lim_{x \rightarrow \infty} x\begin{pmatrix}\sqrt[5]{1-\frac 1 x} -1\end{pmatrix}\\ &amp;= \lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x} \end{align*} $$</p> <p>Applying L' Hospital's, $$ \begin{align*} \lim_{x \rightarrow \infty} \frac{\sqrt[5]{1-\frac 1 x} -1}{\frac1x}&amp;= \lim_{x \rightarrow \infty} \frac{0.2\begin{pmatrix}1-\frac 1 x\end{pmatrix}^{-0.8}\begin{pmatrix}-x^{-2}\end{pmatrix}(-1)} {\begin{pmatrix}-x^{-2}\end{pmatrix}}\\ &amp;= -0.2 \end{align*} $$</p> <p>However the answer is $0.2$, so I would like to clarify the correct use of L'Hospital's</p>
lab bhattacharjee
33,337
<p>If we are not compelled to use L'Hospital's Rule,</p> <p>$$\lim_{x \rightarrow \infty} \sqrt[5]{x^5-x^4} -x$$ $$=\lim_{y\to0}\frac{(1-y)^\frac15-1}y$$</p> <p>$$=\lim_{y\to0}\frac{(1-y)-1}{y\{(1-y)^\frac45+(1-y)^\frac35+(1-y)^\frac25+(1-y)^\frac15+1\}}$$ as $ a^n-1=(a-1)(a^{n-1}+a^{n-2}+\cdots+a+1)$</p> <p>$$=\frac{-1}{1+1+1+1+1}\text { as } y\to0\implies y\ne0$$</p>
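<p>A quick numeric check (a sketch) agrees with the value $-\frac15$:</p>

```python
# f(x) = (x^5 - x^4)^(1/5) - x, evaluated at increasingly large x
for x in [1e2, 1e3, 1e4, 1e5, 1e6]:
    print(x, (x**5 - x**4) ** 0.2 - x)  # the values approach -0.2
```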
2,889,835
<p>If I have random lag times from <code>a=0.1</code> to <code>b=0.3</code> and a time to live (TTL) of <code>x=0.25</code>, what would be the packet loss in per cent?</p> <p>Ok so basically I have packets that arrive in <code>Random [a,b]</code> time, if that random value is greater than <code>x</code> the packet gets lost and doesn't arrive.</p> <p>What is the probability of a packet to arrive?</p>
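<p>For what it's worth: if the lag is uniform on <code>[0.1, 0.3]</code> and a packet is lost when its lag exceeds <code>0.25</code>, the arrival probability is $\frac{0.25-0.1}{0.3-0.1}=0.75$, i.e. a $25\%$ packet loss. A quick Monte Carlo sketch agrees:</p>

```python
import random

rng = random.Random(1)
trials = 200_000
# a packet arrives when its uniformly distributed lag is within the TTL
arrivals = sum(rng.uniform(0.1, 0.3) <= 0.25 for _ in range(trials))
print(arrivals / trials)  # close to 0.75
```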
Sarvesh Ravichandran Iyer
316,409
<p>The issue is the following: $f$ continuous means that if $x_n$ is a <em>convergent</em> sequence with limit $x$, then $f(x_n)$ converges to $f(x)$.</p> <p>Note that every convergent sequence is Cauchy. However, the converse is true (in $\mathbb R$) only in a closed set. Note that $(0,1)$ is not a closed set. Therefore, there are Cauchy sequences in $(0,1)$ that are not convergent, and the problem is that the continuity of $f$ does not imply anything at all, since it comes into play only for convergent sequences.</p> <p>The best example of such a function is $f(x) = \frac 1x$ on $(0,1)$, with the sequence $\frac 1n$: clearly, $x_n$ is Cauchy while $f(x_n) = n$ is not. What is happening here is that $x_n$ is not convergent, so the continuity of $f$ does not help.</p> <hr> <p>The "theory" part of things is this:</p> <p>Suppose $f$ is continuous. Now, let $x_n$ be any Cauchy sequence in $(0,1)$. Note that Cauchy sequences in a subset of $\mathbb R$ converge to a point in the closure of that set, in this case $[0,1]$. Therefore, two things can happen:</p> <ul> <li>Suppose that $x_n$ converges to $x$, where $x \in (0,1)$. Then by the definition of continuity, since $x_n$ is a <em>convergent</em> sequence, we have that $f(x_n)$ converges to $f(x)$.</li> </ul> <blockquote> <p>But let us look deeper, and see "why". We are actually using continuity <em>at the point $x$</em> for this, because: given $\epsilon &gt; 0$, first we know there exists $\delta &gt; 0$ such that $|y - x| &lt; \delta$ implies $|f(y) - f(x)| &lt; \epsilon$. Now we can find $N$ such that $$n &gt; N \implies |x_n - x| &lt; \delta \implies |f(x_n) - f(x)| &lt; \epsilon,$$ so $f(x_n) \to f(x)$ by the definition of convergence of a sequence of points.</p> </blockquote> <ul> <li>Suppose that $x_n$ converges to either $0$ or $1$. Then $x_n$ is only <em>Cauchy</em>, not <em>convergent</em> in $(0,1)$. 
By the previous post, we saw that $f(x_n)$, if it were a convergent sequence, must go to $f(x)$, and the proof depended crucially on <em>continuity at the point $x$</em>, which was the limit of the $x_n$. But here, the limit $x$ does not belong to the set $(0,1)$, and therefore $f$ is not even defined on $x$. If it were possible to define $f$ there, we could speak of extension, which could be continuous if $f$ was continuous at $x$. </li> </ul> <hr> <p>Now for the more subtler point : continuity in $(0,1)$, is equivalent to continuity at each point of $(0,1)$. That is, continuity in $(0,1)$ can be written as follows :</p> <blockquote> <p>Given any point $x \in (0,1)$, and given any $\epsilon &gt; 0$, there exists a $\delta &gt; 0$, <em>depending on $x$ and $\epsilon$</em>, such that $|y-x| &lt; \delta \implies |f(y) - f(x)| &lt; \epsilon$. </p> </blockquote> <p>To put this in plain English : if we fix a point, then if another point is sufficiently close to that point, the function values of the two points are sufficiently close.</p> <p>Now, let us write down what it means for $f$ to take Cauchy sequences to Cauchy sequences : it means that if a sequence of points get eventually close to each other, so must the function values of the points. There is no fixing of a point here, rather a sequence is fixed,so the definition of pointwise continuity cannot be used.</p> <p>Why would we intuitively think this is true if $f$ is continuous? Let's put it this way : if points are getting close, then fixing any two of them which are "sufficiently" close to each other, their function values are "sufficiently" close, by using continuity at either point. </p> <hr> <p>Alas, this is not true. Why? Because, given $\epsilon &gt; 0$, the $\delta$ <em>depends on $x$</em>, that is why. Suppose I fix an $\epsilon &gt; 0$ for trying to show that $f(x_n)$ is Cauchy. I do know that there exists $N$ such that $m,n &gt; N \implies |x_m - x_n| &lt; \epsilon$, because $x_n$ is Cauchy. 
Now, let us fix $P,Q &gt; N$, and consider the points $x_P$ and $x_Q$. By continuity at $x_P$, there is $\delta_P &gt; 0$ <em>depending on $x_P$</em> such that if $|y - x_P| &lt; \delta_P$ then $|f(y) - f(x_P)| &lt; \epsilon$. Similarly $\delta_Q$ for $x_Q$.</p> <p>The inevitable question arises : what if $|x_P - x_Q|$ is greater than both $\delta_P$ and $\delta_Q$? Then, $|f(x_Q) - f(x_P)|$ <em>cannot be controlled</em>, because we require the points to be close enough to apply the continuity criterion.</p> <p>Thus, the complete answer to the question is this :</p> <blockquote> <p>Fix $\epsilon &gt; 0$. Even if the points $x_n$ are getting closer and closer to each other, the $\delta$ corresponding to each point of the sequence and the given $\epsilon$, could possibly <em>reduce more rapidly</em> than the distances between the points. Thus, even though the distances between points is decreasing, for example $|x_P - x_Q|$ is really small, yet since $|x_P - x_Q| &gt; \delta_P$ and $\delta_Q$, neither of these points fall into the other's "range of continuity" (i.e where the definition of "continuity at a point" can applied for that point) for the given $\epsilon$, and since continuity says nothing about what happens if $|x-y| &gt; \delta$, this allows the function to behave unwantedly at these points i.e. $|f(x_P) - f(x_Q)|$ may be very large even though $|x_P - x_Q|$ is not.</p> </blockquote> <p>Back to our example : what serves as $\delta_x$ for a point $x$? See that $ \left|\frac{1}{y} - \frac 1x\right| = \frac{|x-y|}{|xy|}$, so given $\epsilon &gt; 0$ , you can check that $\delta = \frac{\epsilon x^2}{1 + \epsilon x}$ does the job (and no higher $\delta_x$ does the job at each $x$). Now, $|\frac 1n - \frac 1m| = \frac{|m-n|}{mn}$, and $\delta_{\frac 1l} = \frac{\epsilon}{l(l+\epsilon)}$. So, you can check that if $\epsilon = \frac 12$ then $\delta _{\frac 1n},\delta_{\frac 1m}$ both fall short of $|\frac 1n -\frac 1m|$. 
So here, for a fixed $\epsilon &gt; 0$, the $\delta$s are decreasing much faster than $\frac 1n - \frac 1m$, and continuity of $f$ would help only if $\frac 1n - \frac 1m$ were small enough to lie in one of the $\delta$ neighbourhoods.</p> <hr> <p>How do we tackle this situation i.e. what condition do we put on $f$ so that $x_n$ Cauchy implies $f(x_n)$ Cauchy? There are a few ways around it.</p> <ul> <li><p>If $x_n$ is a convergent sequence to $x$, then even though $f(x_n)$ and $f(x_m)$ cannot <em>directly</em> be shown to be close to each other, we can show they are both close to $f(x)$ via continuity, and then the triangle inequality shows they are close to each other. This is the condition that the domain be closed, or that the domain of $f$ be continuously extendable to a closed domain.</p></li> <li><p>Don't let the $\delta$s fall so fast. Maybe, they should depend <em>only on $\epsilon$</em>. Such a function is called a uniformly continuous function. For example, $f(x) = x$ on $(0,1)$ is uniformly continuous. (Can you find the $\delta$ for each $\epsilon$? It should not be difficult!)</p></li> </ul> <p>This should be a comprehensive discussion on the desired topic.</p>
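The answer's running example can be made concrete in a few lines of Python: the gaps of the Cauchy sequence $x_n = \frac1n$ shrink, while the gaps of the image sequence $f(x_n) = n$ blow up.

```python
# The answer's example, executed: x_n = 1/n is Cauchy in (0,1),
# but f(x) = 1/x maps it to f(x_n) = n, which is not Cauchy.

def x(n: int) -> float:
    return 1.0 / n

def f(t: float) -> float:
    return 1.0 / t

m, n = 1000, 2000
gap_in = abs(x(m) - x(n))         # tiny: 1/1000 - 1/2000 = 5e-4
gap_out = abs(f(x(m)) - f(x(n)))  # huge: |1000 - 2000| = 1000
print(gap_in, gap_out)
```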
4,640,732
<p>Find roots of: <span class="math-container">$$x^{6}\ -\ \left(x-1\right)^{6}=0 \tag {1}$$</span></p> <p>I know this equation has <span class="math-container">$4$</span> complex roots and exactly one real root, of value <span class="math-container">$0.5$</span>.</p> <p>However, my first instinct was to do this: <span class="math-container">$$x^{6}\ =\ \left(x-1\right)^{6} \tag{2}$$</span> "take the sixth root of both sides" to get: <span class="math-container">$$x=x-1\tag{3}$$</span></p> <p>Which has no real solution. I see that this is wrong. How do I avoid this error? Thanks.</p> <p>Inspired by watching <a href="https://www.youtube.com/watch?v=2jENiRqdk3s" rel="nofollow noreferrer">this youtube video</a></p> <p><strong>Edit:</strong></p> <p>I am not asking how to solve the problem. I want to know what I did wrong from an algebraic standpoint. Maybe the root-taking step? What is wrong with that?</p>
Henry
6,460
<p>If you had started with <span class="math-container">$x^2 - (x-1)^2=0$</span> then you can expand it to <span class="math-container">$x^2-x^2+2x-1=0$</span> and so <span class="math-container">$x=\frac12$</span>.</p> <p>But let's take a version of your approach. You might say <span class="math-container">$a^2=b^2$</span> clearly implies <span class="math-container">$a=b$</span> or <span class="math-container">$a=-b$</span>; let's write this as <span class="math-container">$a=\omega b$</span> where <span class="math-container">$\omega^2=1$</span>. We expect there to be be two possible values for <span class="math-container">$\omega$</span>, namely <span class="math-container">$+1$</span> and <span class="math-container">$-1$</span>. But if <span class="math-container">$a=x$</span> and <span class="math-container">$b=x-1$</span> you get <span class="math-container">$x=\omega(x-1)$</span> and so <span class="math-container">$x= \frac{-\omega}{1-\omega}= 1 - \frac{1}{1-\omega}$</span>; when <span class="math-container">$\omega=-1$</span> it gives <span class="math-container">$x=\frac12$</span> as expected, while if <span class="math-container">$\omega=1$</span> it tries to give <span class="math-container">$1- \frac{1}{0}$</span> which is not a finite number, reflecting the fact there is no finite solution to <span class="math-container">$x=x-1$</span>.</p> <p>Translate this to your <span class="math-container">$6$</span>th powers in <span class="math-container">$(2)$</span>:</p> <ul> <li><span class="math-container">$a^6=b^6$</span> has solutions of the form <span class="math-container">$a=\omega b$</span> where <span class="math-container">$\omega^6=1$</span>;</li> <li>there are potentially six such <span class="math-container">$\omega$</span> (some complex) of which one is <span class="math-container">$\omega=1$</span></li> <li>If <span class="math-container">$a=x$</span> and <span class="math-container">$b=x-1$</span> you get <span 
class="math-container">$x=\omega(x-1)$</span> and again <span class="math-container">$x= 1 - \frac{1}{1-\omega}$</span></li> <li>So each of the six <span class="math-container">$\omega$</span> will give you a different solution for <span class="math-container">$x$</span>, except <span class="math-container">$\omega=1$</span> which tries to give <span class="math-container">$1-\frac{1}{0}$</span> (still not a finite number).</li> </ul>
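A small numeric companion to this answer (the setup is assumed, using Python's complex arithmetic): each sixth root of unity $\omega \ne 1$ produces a finite root $x = 1 - \frac{1}{1-\omega}$ of $x^6 = (x-1)^6$, and $\omega = 1$ produces none.

```python
# For each sixth root of unity w != 1, x = 1 - 1/(1 - w) should
# satisfy x^6 - (x - 1)^6 = 0 (up to rounding). w = 1 is excluded,
# matching the "1 - 1/0" degeneracy discussed above.
import cmath

roots = []
for k in range(1, 6):                   # skip k = 0, i.e. w = 1
    w = cmath.exp(2j * cmath.pi * k / 6)
    x = 1 - 1 / (1 - w)
    roots.append(x)
    print(x, abs(x**6 - (x - 1)**6))    # residual should be ~0
```

The choice $\omega = -1$ (here $k = 3$) recovers the unique real root $x = \frac12$.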
3,599,008
<p>I want to see if the set:</p> <p><span class="math-container">$$X =\{f\in{}C[0,1]:|f(t)|&lt;1\text{ for all } t\in[0,1]\}$$</span> is open or closed.</p> <p>If it is open at each function <span class="math-container">$g\in{}X$</span> there is an epsilon sized open ball around it contained by <span class="math-container">$X$</span>. I can't seem to show that its open but can't think of a counter example to show its not.</p>
Umberto P.
67,536
<p>Supposing the norm on <span class="math-container">$C[0,1]$</span> is <span class="math-container">$\|f\| = \max_{t \in [0,1]} |f(t)|$</span>, the corresponding metric on <span class="math-container">$C[0,1]$</span> is <span class="math-container">$d(f,g) = \|f-g\|$</span>.</p> <p>Select an element <span class="math-container">$f \in X$</span>. Since <span class="math-container">$|f|$</span> attains its maximum at some point <span class="math-container">$t \in [0,1]$</span>, and <span class="math-container">$|f(t)| &lt; 1$</span> there, you have <span class="math-container">$\|f\| &lt; 1$</span>. Let <span class="math-container">$\epsilon = 1 - \|f\|$</span>. Then, if <span class="math-container">$g \in B(f,\epsilon)$</span> you have <span class="math-container">$$\|g\| \le \|f-g\| + \|f\| &lt; \epsilon + \|f\| = 1$$</span> so that <span class="math-container">$g \in X$</span> too.</p>
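A finite-dimensional sketch of this ball argument, with functions sampled on a grid (the particular $f$ and the perturbation are made up for illustration, not taken from the answer):

```python
# Sample an f with sup|f| < 1 on a grid, set eps = 1 - ||f||,
# and check that a perturbation g within distance eps stays in X.
import math

ts = [i / 100 for i in range(101)]
f = [0.5 * math.sin(6 * t) for t in ts]       # some f with sup|f| < 1

norm_f = max(abs(v) for v in f)
eps = 1 - norm_f                              # radius from the answer

g = [v + 0.99 * eps * math.cos(9 * t) for v, t in zip(f, ts)]
norm_diff = max(abs(a - b) for a, b in zip(f, g))  # ||f - g|| < eps
norm_g = max(abs(v) for v in g)                    # should be < 1
print(norm_f, eps, norm_g)
```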
3,599,008
<p>I want to see if the set:</p> <p><span class="math-container">$$X =\{f\in{}C[0,1]:|f(t)|&lt;1\text{ for all } t\in[0,1]\}$$</span> is open or closed.</p> <p>If it is open at each function <span class="math-container">$g\in{}X$</span> there is an epsilon sized open ball around it contained by <span class="math-container">$X$</span>. I can't seem to show that its open but can't think of a counter example to show its not.</p>
José Carlos Santos
446,262
<p>Assuming that you are working with the metric <span class="math-container">$(f,g)\mapsto\sup\lvert f-g\rvert$</span>, consider the map<span class="math-container">$$\begin{array}{rccc}\psi\colon&amp;C[0,1]&amp;\longrightarrow&amp;\mathbb R\\&amp;f&amp;\mapsto&amp;\sup\lvert f\rvert.\end{array}$$</span>Then <span class="math-container">$\psi$</span> is continuous and <span class="math-container">$X=\psi^{-1}\bigl((-\infty,1)\bigr)$</span>. Since the set <span class="math-container">$(-\infty,1)$</span> is an open subset of <span class="math-container">$\mathbb R$</span>, <span class="math-container">$X$</span> is an open subset of <span class="math-container">$C[0,1]$</span>.</p>
1,376,159
<p>A friend of mine shared this problem with me. As he was told, this integral can be evaluated in a closed form (the result may involve polylogarithms). Despite all our efforts, so far we have not achieved anything, so I decided to ask for your advice. $$\int_0^1\log(x)\,\log(2+x)\,\log(1+x)\,\log\left(1+x^{-1}\right)dx$$</p> <p>I found some similar questions here on MSE: <a href="https://math.stackexchange.com/q/316745/76878">(1)</a>, <a href="https://math.stackexchange.com/q/465444/76878">(2)</a>, <a href="https://math.stackexchange.com/q/503405/76878">(3)</a>, <a href="https://math.stackexchange.com/q/524358/76878">(4)</a>, <a href="https://math.stackexchange.com/q/761930/76878">(5)</a>, <a href="https://math.stackexchange.com/q/795867/76878">(6)</a>, <a href="https://math.stackexchange.com/q/908108/76878">(7)</a>, <a href="https://math.stackexchange.com/q/915083/76878">(8)</a>, <a href="https://math.stackexchange.com/q/933977/76878">(9)</a>, <a href="https://math.stackexchange.com/q/972775/76878">(10)</a>, <a href="https://math.stackexchange.com/q/1043771/76878">(11)</a>, <a href="https://math.stackexchange.com/q/1046519/76878">(12)</a>, <a href="https://math.stackexchange.com/q/1096557/76878">(13)</a>, <a href="https://math.stackexchange.com/q/1341254/76878">(14)</a>.</p>
Cleo
97,378
<p>\begin{align} &amp; \int_0^1\ln(2+x)\,\ln(1+x)\,\ln\left(1+x^{-1}\right)\ln x\,dx\\ &amp; \quad=\frac{71}{36}\,\ln^42+2\ln^32\cdot\ln3+4\ln2\cdot\ln^33-7\ln^22\cdot\ln^23-\frac23\,\ln^32-\frac23\,\ln^33-\ln^22\cdot\ln3\\ &amp; \quad \quad +6\ln^22+3\ln^23-12\ln2-\frac{\pi^4}{216}+\pi^2\!\left(\frac{49}{36}\,\ln^22-2\ln2\cdot\ln3-\frac{\ln2}3+\frac{\ln3}3-\frac16\right)\\ &amp; \quad \quad+\left(6-2\ln2-2\ln^22\right)\operatorname{Li}_2\!\left(\tfrac13\right)+(2-12\ln2)\left[\operatorname{Li}_3\!\left(\tfrac13\right)+\operatorname{Li}_3\!\left(\tfrac23\right)\right]-\frac23\,\operatorname{Li}_4\!\left(\tfrac12\right)+3\operatorname{Li}_4\!\left(\tfrac14\right)\\ &amp; \quad \quad +\left(\frac54+\frac{221}{12}\ln2\right)\zeta(3). \end{align}</p>
2,037,704
<p>What symmetry property in complex space is related to the fact that the absolute value of numbers $|a+ib| = |b+ia|$ are equals?</p>
levap
32,262
<p>If you identify the complex space $\mathbb{C}$ with $\mathbb{R}^2$ by sending $z = a + ib$ to $(a,b)$, the norm of a complex number $z$ is the same as the (regular, Euclidean) norm of the vector $(a,b) \in \mathbb{R}^2$. Consider the following two maps:</p> <ol> <li>The conjugation map $T \colon \mathbb{C} \rightarrow \mathbb{C}$ given by $T(a + ib) = T(z) = \overline{z} = a - ib$.</li> <li>The multiplication by $-i$ map $S \colon \mathbb{C} \rightarrow \mathbb{C}$ given by $S(z) = S(a + ib) = (-i)z = (-i)(a + ib) = b - ia$.</li> </ol> <p>The map $T$ is not a $\mathbb{C}$-linear map but under the identification of $\mathbb{C}$ with $\mathbb{R}^2$, it corresponds to an $\mathbb{R}$-linear reflection map across the $x$-axis which preserves the norms of vectors. The map $S$ is a $\mathbb{C}$-linear map that corresponds under the identification of $\mathbb{C}$ with $\mathbb{R}^2$ to a rotation by $\frac{\pi}{2}$ radians ($90$ degrees) clockwise, which also preserves norms of vectors.</p> <p>Hence, the composition $T \circ S$ also preserves the norms of vectors and </p> <p>$$ (T \circ S)(a + ib) = T(b - ia) = b + ia $$</p> <p>which implies that $|a + ib| = |b + ia|$ (which of course can be verified directly using the definition of the norm).</p>
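The composition $T \circ S$ can be checked directly with Python's built-in complex type:

```python
# T is conjugation, S is multiplication by -i; their composition
# swaps real and imaginary parts and preserves the norm.

def T(z: complex) -> complex:
    return z.conjugate()

def S(z: complex) -> complex:
    return -1j * z

z = 3 + 4j
w = T(S(z))                 # b + ia = 4 + 3j
print(w, abs(z), abs(w))    # both norms equal 5
```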
1,969,169
<p>We have to evaluate the following integral: $$\int_1^{\frac{1+\sqrt{5}}{2}}\frac{x^2+1}{x^4-x^2+1}\ln\left(1+x-\frac{1}{x}\right)dx$$ I tried it a lot. I substituted $t=1+x-(1/x)$, so $dt=\left(1+\frac{1}{x^2}\right)dx$.</p> <p>But then I got stuck at $$\int\limits_{1}^{2} \frac{\ln(t)}{(t-1)^{2} + 1} \mathrm{d}t$$</p> <p>How should I proceed from here?</p>
David H
55,051
<hr> <p>Let $I$ denote the integral</p> <p>$$I:=\int_{1}^{\phi}\frac{x^{2}+1}{x^{4}-x^{2}+1}\ln{\left(1+x-\frac{1}{x}\right)}\,\mathrm{d}x,$$</p> <p>with $\phi$ of course being the golden ratio, $\phi=\frac{1+\sqrt{5}}{2}$.</p> <p>As a numerical approximation, we find</p> <p>$$I\approx0.272198.$$</p> <hr> <p>Try substituting instead</p> <p>$$\frac12\left(x-\frac{1}{x}\right)=t,$$</p> <p>$$\implies\frac12\left(1+\frac{1}{x^{2}}\right)\,\mathrm{d}x=\mathrm{d}t.$$</p> <p>Taking the square, we have</p> <p>$$\frac14\left(x^{2}-2+\frac{1}{x^{2}}\right)=t^{2}.$$</p> <p>Then,</p> <p>$$\begin{align} I &amp;=\int_{1}^{\phi}\frac{x^{2}+1}{x^{4}-x^{2}+1}\ln{\left(1+x-\frac{1}{x}\right)}\,\mathrm{d}x\\ &amp;=\int_{1}^{\phi}\frac{2x^{2}}{x^{4}-x^{2}+1}\ln{\left(1+x-\frac{1}{x}\right)}\,\frac{x^{2}+1}{2x^{2}}\,\mathrm{d}x\\ &amp;=\int_{1}^{\phi}\frac{2}{\left(x^{2}-2+x^{-2}\right)+1}\ln{\left(1+x-\frac{1}{x}\right)}\,\frac12\left(1+\frac{1}{x^{2}}\right)\,\mathrm{d}x\\ &amp;=\int_{0}^{\frac12}\frac{2}{4t^{2}+1}\ln{\left(1+2t\right)}\,\mathrm{d}t;~~~\small{\left[\frac12\left(x-\frac{1}{x}\right)=t\right]}\\ &amp;=\int_{0}^{1}\frac{\ln{\left(1+u\right)}}{u^{2}+1}\,\mathrm{d}u;~~~\small{\left[2t=u\right]}.\\ \end{align}$$</p> <hr> <p>Expressing the logarithmic term as an integral, $I$ becomes a double integral. 
Changing the order of integration, we obtain</p> <p>$$\begin{align} I &amp;=\int_{0}^{1}\frac{\ln{\left(1+x\right)}}{1+x^{2}}\,\mathrm{d}x\\ &amp;=\int_{0}^{1}\frac{\mathrm{d}x}{1+x^{2}}\int_{0}^{1}\mathrm{d}t\,\frac{x}{1+xt}\\ &amp;=\int_{0}^{1}\mathrm{d}x\int_{0}^{1}\mathrm{d}t\,\frac{x}{\left(1+x^{2}\right)\left(1+xt\right)}\\ &amp;=\int_{0}^{1}\mathrm{d}t\int_{0}^{1}\mathrm{d}x\,\frac{x}{\left(1+x^{2}\right)\left(1+tx\right)}\\ &amp;=\int_{0}^{1}\mathrm{d}t\int_{0}^{1}\mathrm{d}x\,\left[\frac{t+x}{\left(1+t^{2}\right)\left(1+x^{2}\right)}-\frac{t}{\left(1+t^{2}\right)\left(1+tx\right)}\right];~~~\small{P.F.D.}\\ &amp;=\int_{0}^{1}\frac{\mathrm{d}t}{\left(1+t^{2}\right)}\int_{0}^{1}\mathrm{d}x\,\left[\frac{t+x}{\left(1+x^{2}\right)}-\frac{t}{\left(1+tx\right)}\right]\\ &amp;=\int_{0}^{1}\frac{\mathrm{d}t}{\left(1+t^{2}\right)}\left[\int_{0}^{1}\mathrm{d}x\,\frac{t}{\left(1+x^{2}\right)}+\int_{0}^{1}\mathrm{d}x\,\frac{x}{\left(1+x^{2}\right)}-\int_{0}^{1}\mathrm{d}x\,\frac{t}{\left(1+tx\right)}\right]\\ &amp;=\int_{0}^{1}\frac{\mathrm{d}t}{\left(1+t^{2}\right)}\left[\frac{\pi}{4}t+\frac12\ln{\left(2\right)}-\ln{\left(1+t\right)}\right]\\ &amp;=\int_{0}^{1}\frac{\mathrm{d}t}{\left(1+t^{2}\right)}\left[\frac{\pi}{4}t+\frac12\ln{\left(2\right)}\right]-\int_{0}^{1}\frac{\ln{\left(1+t\right)}}{1+t^{2}}\,\mathrm{d}t\\ &amp;=\frac{\pi}{4}\ln{\left(2\right)}-I.\\ \end{align}$$</p> <p>Thus,</p> <p>$$I=\frac{\pi}{8}\ln{\left(2\right)}.\blacksquare$$</p>
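The final value can be confirmed numerically: a composite Simpson rule on the reduced integral $\int_0^1 \frac{\ln(1+u)}{1+u^2}\,du$ reproduces $\frac{\pi}{8}\ln 2 \approx 0.272198$, matching the approximation quoted at the start of the answer.

```python
# Simpson's rule check of I = (pi/8) ln 2.
import math

def integrand(u: float) -> float:
    return math.log(1 + u) / (1 + u * u)

def simpson(f, a, b, n=1000):          # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

approx = simpson(integrand, 0.0, 1.0)
closed = math.pi / 8 * math.log(2)
print(approx, closed)
```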
248,710
<p>The organizers of a cycling competition know that about 8% of the racers use steroids. They decided to employ a test that will help them identify steroid-users. The following is known about the test: When a person uses steroids, the person will test positive 96% of the time; on the other hand, when a person does not use steroids, the person will test positive only 9% of the time. The test seems reasonable enough to the organizers. The one last thing they want to find out is this: Suppose a cyclist does test positive; what is the probability that the cyclist is really a steroid-user?</p> <p>Let S be the event that a randomly selected cyclist is a steroid-user and P be the event that a randomly selected cyclist tests positive.</p> <p><strong>My question is:</strong> can someone please translate and explain P(P|S) and P(S|P)?</p>
broccoli
50,577
<p><a href="http://bayesianthink.blogspot.com/2012/08/understanding-bayesian-inference.html" rel="nofollow">Here</a> is a relatively plain English explanation of Bayesian statistics.</p> <p>Here, the evidence is that a user tested positive, and the hypothesis you want to test is whether steroids were used. In the question's notation, P(P|S) is the probability of a positive test given that the cyclist uses steroids (the 96% figure), while P(S|P) is the reverse: the probability that the cyclist uses steroids given a positive test, which is the quantity the organizers want.</p> <p>P(H | E) = P(E|H) x P(H) / [ P(E|H) x P(H) + P(E|~H) x P(~H) ]</p> <p>From your numbers, P(H) = 8% so P(~H) = 92%, P(E|H) = 96% and P(E|~H) = 9%. Plug in all these numbers and you get about 48%, which is quite low.</p>
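Plugging the numbers from this answer into Bayes' theorem takes a few lines of Python:

```python
# Bayes' theorem with the steroid-test numbers from the question.
p_h = 0.08             # P(S): prior probability of steroid use
p_e_h = 0.96           # P(P|S): positive given use
p_e_nh = 0.09          # P(P|~S): false positive rate

posterior = p_e_h * p_h / (p_e_h * p_h + p_e_nh * (1 - p_h))
print(posterior)       # about 0.481
```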
1,981,360
<blockquote> <p>Given function $f:\mathbb{R}_0^+ \to \mathbb{R},~f(x) = x^2 + 4x + 4$ prove that it is injective.</p> </blockquote> <p>Using definition of injectivity $(\forall x_1, x_2 \in \mathbb{R}_0^+)(x_1 \neq x_2 \implies f(x_1) \neq f(x_2))$ I'm doing the following:</p> <p>$$x_1^2 + 4x_1 + 4 = x_2^2 + 4x_2 + 4$$ $$x_1^2 - x_2^2 = -4(x_1 - x_2)$$ $$x_1 + x_2 = -4$$ $$x_1 = -4 - x_2.$$ Since domain is $\mathbb{R}_0^+$ it is apparent that $x_1 \neq -4 - x_2$ and hence function is not injective.</p> <hr> <p>Is my final argument correct? In cases like that, shall I use definition instead of counterpositive?</p>
Edouard L.
378,837
<p>Your final argument is not correct.<br> Instead, from your calculation you have:<br> since $x_2 \geq 0$, the equation $x_1 = -4 - x_2$ would imply $x_1 \leq -4 &lt; 0$, which is impossible (it lies outside the domain); therefore $f$ is injective on the domain.<br> It would also be better to state your assumptions: suppose there exist $x_1 \neq x_2$ such that $f(x_1)=f(x_2)$...</p>
1,981,360
<blockquote> <p>Given function $f:\mathbb{R}_0^+ \to \mathbb{R},~f(x) = x^2 + 4x + 4$ prove that it is injective.</p> </blockquote> <p>Using definition of injectivity $(\forall x_1, x_2 \in \mathbb{R}_0^+)(x_1 \neq x_2 \implies f(x_1) \neq f(x_2))$ I'm doing the following:</p> <p>$$x_1^2 + 4x_1 + 4 = x_2^2 + 4x_2 + 4$$ $$x_1^2 - x_2^2 = -4(x_1 - x_2)$$ $$x_1 + x_2 = -4$$ $$x_1 = -4 - x_2.$$ Since domain is $\mathbb{R}_0^+$ it is apparent that $x_1 \neq -4 - x_2$ and hence function is not injective.</p> <hr> <p>Is my final argument correct? In cases like that, shall I use definition instead of counterpositive?</p>
egreg
62,967
<p>Your argument is fine, but you can end a step earlier: $x_1\ge0$ and $x_2\ge0$ imply $x_1+x_2\ge0$, so $x_1+x_2=-4$ is a contradiction.</p> <p>However, you had better make evident where you're using the hypothesis $x_1\ne x_2$.</p> <hr> <p>Proofs of injectivity are often simpler with the contrapositive: “if $f(x_1)=f(x_2)$ then $x_1=x_2$”.</p> <p>Suppose $f(x_1)=f(x_2)$; then, as in your steps, $$ (x_1^2-x_2^2)+4(x_1-x_2)=0 $$ so $$ (x_1-x_2)(x_1+x_2+4)=0 $$ Since $x_1+x_2+4\ge 4&gt;0$, we conclude $x_1-x_2=0$.</p>
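A quick numeric spot-check of the key facts in the answers above: $f(x) = x^2+4x+4 = (x+2)^2$ is strictly increasing on $[0,\infty)$ (hence injective there), and the factorization used in the contrapositive argument is an identity.

```python
# f is strictly increasing on sampled nonnegative inputs, and
# (x1^2 - x2^2) + 4(x1 - x2) == (x1 - x2)(x1 + x2 + 4).

def f(x: float) -> float:
    return x * x + 4 * x + 4

xs = [0.0, 0.5, 1.0, 2.0, 3.5, 10.0]
values = [f(x) for x in xs]
print(values)

x1, x2 = 1.7, 0.3
lhs = (x1**2 - x2**2) + 4 * (x1 - x2)
rhs = (x1 - x2) * (x1 + x2 + 4)
print(lhs, rhs)
```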
191,548
<p>Say I have a list:</p> <pre><code>{{Line[{{-Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0, 1}}], Line[{{Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0,1}}]}, {Line[{{-Sqrt[5/8 + Sqrt[5]/8],1/4 (-1 + Sqrt[5])}, {Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}}], Line[{{Sqrt[5/8 - Sqrt[5]/8], 1/4 (-1 - Sqrt[5])}, {0, 1}}]}} </code></pre> <p>So each sublist of that list consists of, in this case, two lines. All the points that determine the position of each line appear in another list; call this list <code>points</code>. Now, I want to extract the positions of all the points from the big list above within that <code>points</code> list. I'm aware of the <code>Position</code> function but I'm not sure how to apply it effectively to my big list in order to get the list of positions. I'd very much appreciate some help. </p>
WReach
142
<p>To make things easier to read, let's use these lines (with the same nested structure as in the question):</p> <pre><code>lines = { { Line[{{60, 50}, {60, 70}}] , Line[{{40, 30}, {40, 50}}] } , { Line[{{20, 10}, {20, 30}}] } }; </code></pre> <p>Furthermore, let's define the list <code>points</code> as this:</p> <pre><code>points = {{20, 10}, {20, 30}, {40, 30}, {40, 50}, {60, 50}, {60, 70}}; </code></pre> <p>We can get the positions of each point in this list as follows:</p> <pre><code>pointPosition = points // PositionIndex // Map[First] (* &lt;|{20, 10} -&gt; 1, {20, 30} -&gt; 2, {40, 30} -&gt; 3, {40, 50} -&gt; 4, {60, 50} -&gt; 5, {60, 70} -&gt; 6|&gt; *) </code></pre> <p>This association can tell us, for example, that the point <code>{40, 30}</code> appears at position <code>3</code> in <code>points</code>.</p> <pre><code>pointPosition[{40, 30}] (* 3 *) </code></pre> <p>We can now convert the line list into a similarly structured list where each point has been replaced by a pair of corresponding positions:</p> <pre><code>lines /. Line[pts__] :&gt; pointPosition /@ pts (* {{{5, 6}, {3, 4}}, {{1, 2}}} *) </code></pre> <p>Or, should we desire, we could retain the <code>List</code> heads in the result:</p> <pre><code>lines /. Line[pts__] :&gt; Line[pointPosition /@ pts] (* {{Line[{5, 6}], Line[{3, 4}]}, {Line[{1, 2}]}} *) </code></pre> <p>This latter form has the advantage that we can still draw the resultant lines by means of <code>GraphicsComplex</code>:</p> <pre><code>newLines = lines /. Line[pts__] :&gt; Line[pointPosition /@ pts]; Graphics[GraphicsComplex[points, newLines]] </code></pre> <p><a href="https://i.stack.imgur.com/I9ScE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/I9ScE.png" alt="line graphics"></a></p>
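For readers outside the Wolfram Language, here is a rough Python analogue of the same idea (the data mirrors the answer's example; a plain dictionary plays the role of `PositionIndex`):

```python
# Build a point -> position map, then rewrite each line as a pair
# of 1-based indices into `points`, as in the Mathematica answer.
points = [(20, 10), (20, 30), (40, 30), (40, 50), (60, 50), (60, 70)]
point_position = {p: i + 1 for i, p in enumerate(points)}  # 1-based

lines = [
    [[(60, 50), (60, 70)], [(40, 30), (40, 50)]],
    [[(20, 10), (20, 30)]],
]

indexed = [[[point_position[p] for p in line] for line in group]
           for group in lines]
print(indexed)   # [[[5, 6], [3, 4]], [[1, 2]]]
```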
393,580
<p>Show that $-Z$ is also a standard normal random variable; that is, show that $P[-Z &lt; x] = P[Z &lt; x] \,\forall x.$</p>
response
76,635
<p>Hint: The standard normal is a symmetric distribution.</p>
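The hint can be checked numerically: writing $\Phi(x) = P[Z < x]$ in terms of the error function, symmetry gives $P[-Z < x] = P[Z > -x] = 1 - \Phi(-x) = \Phi(x)$.

```python
# Standard normal CDF via math.erf; symmetry check of
# P[-Z < x] = 1 - Phi(-x) against Phi(x).
import math

def Phi(x: float) -> float:
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    print(x, Phi(x), 1 - Phi(-x))   # the two columns agree
```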
507,467
<p>How do you factor $x^3 + x - 2$?</p> <p>Hint: Write it as $(x^3-x^2+x^2-x+2x-2)$ to get $(x-1)(x^2+x+2)$</p> <p>Note the factored form <a href="http://www.wolframalpha.com/input/?i=x%5E3+%2B+x+-+2" rel="nofollow noreferrer">here</a>. Thanks!</p>
Community
-1
<p>By inspection, we see that $1$ is a root of $x^3 + x - 2$, so it is divisible by $x - 1$; alternatively, the rational roots theorem would suggest this too.</p> <p>Now $x^2 + x + 2 = x^2 + x + \frac{1}{4} + \frac{7}{4} = (x + \frac{1}{2})^2 + \frac{7}{4}$ has no real roots, and is irreducible. If you're factoring over $\Bbb{C}$, then it's got roots at $\pm \sqrt{\frac 7 4}i - \frac{1}{2}$; denoting these as $\alpha_+$ and $\alpha_-$, the original polynomial then splits as</p> <p>$$(x - 1)(x - \alpha_+)(x - \alpha_-)$$</p>
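A numeric check of the roots found above:

```python
# x^3 + x - 2 should vanish at 1 and at -1/2 ± (sqrt(7)/2) i.
import math

def p(x: complex) -> complex:
    return x**3 + x - 2

roots = [
    1,
    complex(-0.5, math.sqrt(7) / 2),
    complex(-0.5, -math.sqrt(7) / 2),
]
for r in roots:
    print(r, abs(p(r)))   # residuals ~0
```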
2,256,973
<p>I'm writing a computer algorithm to do binomial expansion in C#. You can view the code <a href="https://gist.github.com/jamesqo/01015428601641347e436129c1ae0079#file-multinomial-cs-L29-L39" rel="nofollow noreferrer">here</a>; I am using the following identity to do the computation:</p> <p>$$ \dbinom n k = \frac n k \dbinom {n - 1} {k - 1} $$</p> <p>Currently, the algorithm computes $\dbinom {n - 1} {k - 1}$, multiplies it by $n$, then divides by $k$. Since signed 32-bit integers are capped at $2^{31} - 1$, which can be a very small limit for large $n$, I want to change the algorithm to divide by $k$ before mixing in $n$. However, computer integer division floors the result if the dividend isn't a multiple of the divisor; for example, $11 / 5 = 2$. So I want to come up with a criterion for whether $k$ divides $\dbinom {n - 1} {k - 1}$ that can be cheaply checked.</p> <p>Here's my work so far: assuming $n &gt; k$, then $n - 1 \geq k$. Then $(n - 1)!$ will have at least 1 factor of $k$, and $k \mid \dbinom {n - 1} {k - 1}$ if the $(k - 1)!(n - k)!$ factor does not cancel those factors out. If $k$ is prime, then $(k - 1)!$ will have no factors of $k$. $(n - k)!$ will often have fewer factors of $k$ than $(n - 1)!$, but that is not the case if $n \equiv -1 \mod k$. For example, $5 \not\mid \dbinom 9 4$, because both the numerator and denominator have exactly one factor of $5$.</p> <p>So I figured out that $n \not\equiv -1 \mod k$ must be true for primes, but I've no idea how to deal with composites. I played around a bit with them, though, and $4$ is another value of $k$ for which $4 \not\mid \dbinom{7}{3} = 35$, so clearly $k$ doesn't have to be prime. Any tips?</p>
Robert Israel
8,508
<p>If division by $k$ would give you a non-integer, multiplication by $n$ would give you back an integer. So if you reduce $n/k$ to lowest terms, $n/k = a/b$ with $\gcd(a,b) = 1$, then you can safely divide by $b$ and then multiply by $a$.</p> <p>EDIT: Another method: Let $q$ be the result of integer division of $N = {n-1 \choose k-1}$ by $k$, and $r = N - q k$ (so $0 \le r &lt; k$ and $N \equiv r \mod k$). Then $\frac{n N}{k} = n q + (n r)/k$, where the multiplication $n r$ is much less likely to produce an overflow.</p>
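The question is about C#, but the arithmetic of the EDIT method can be sketched in Python (Python integers don't overflow, so this only demonstrates that $nq + \lfloor nr/k \rfloor$ with $N = qk + r$ really reproduces $\binom{n}{k}$; the division $(nr)/k$ is exact because $k \mid nN$):

```python
# C(n, k) built from N = C(n-1, k-1) via the identity
# n*N/k = n*q + (n*r)/k, where N = q*k + r keeps the
# intermediate products small relative to n*N.
import math

def binom(n: int, k: int) -> int:
    if k == 0:
        return 1
    n_prev = binom(n - 1, k - 1)       # N = C(n-1, k-1)
    q, r = divmod(n_prev, k)
    return n * q + (n * r) // k        # exact: k divides n*r

# cross-check against the library implementation
print(binom(10, 3), math.comb(10, 3))  # both 120
```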
2,412,454
<p>I was obviously not clear enough in my first question, so I will reformulate. I have the following equation $$ A=\frac{B\sin 2\theta}{C+D\cos 2\theta} $$ where $A,B,C,D$ are variables. I need to solve or rewrite the equation to easily obtain $\theta$ (or $2\theta$), given known values for $A, B, C, D$. Thanks for any help.</p>
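One standard way to isolate $\theta$ (a sketch; the helper name and the branch choice are mine, not from the thread) is the harmonic-addition identity: rearranging gives $B\sin 2\theta - AD\cos 2\theta = AC$, i.e. $R\sin(2\theta - \varphi) = AC$ with $R = \sqrt{B^2 + A^2D^2}$ and $\varphi = \operatorname{atan2}(AD, B)$, so $2\theta = \varphi + \arcsin(AC/R)$ on one branch (this requires $|AC| \le R$).

```python
# Solve A = B sin(2t) / (C + D cos(2t)) for t via harmonic addition.
import math

def solve_theta(A, B, C, D):
    R = math.hypot(B, A * D)
    phi = math.atan2(A * D, B)
    return 0.5 * (phi + math.asin(A * C / R))   # one branch

# round-trip check with made-up values
theta0 = 0.3
B, C, D = 2.0, 3.0, 1.0
A = B * math.sin(2 * theta0) / (C + D * math.cos(2 * theta0))
theta = solve_theta(A, B, C, D)
A_back = B * math.sin(2 * theta) / (C + D * math.cos(2 * theta))
print(theta0, theta, A, A_back)
```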
Noah Schweber
28,111
<p>This is indeed somewhat confusing; let me break down what they're talking about.</p> <p>The source of your confusion is, I think, the use of the word "in." In this context, it does <strong>not</strong> mean "$\in$." When Hrbacek/Jech say "Is $E$ a [something] relation in $X$?", it might be better for them to use the word "on" - what they mean is that $X$ is the underlying set, and $E$ is the relation. <strong>I'll be using their meaning of the word "relation in $X$" throughout this answer, even though I disapprove</strong>.</p> <p>Now what about the specific problem(s) here? Well, these exercises are designed to do two things. The first is to show you that the choice of the underlying set can matter: e.g. the relation $E=\{(a, a)\}$ is reflexive in $\{a\}$ (since every element of $\{a\}$ is related to itself) but not in $\{a, b\}$ (since $b$ is not related to itself: $(b, b)\not\in E$).</p> <p>The second point is to give you some practice reasoning in a nonintuitive context - specifically, about the empty set. $\emptyset$, as usual, is a confusing object. Remember that "for all" statements about the empty set are vacuously true - for instance, $\emptyset$ is a set of elephants, since every element of $\emptyset$ is an elephant. More relevantly here, <strong>$\emptyset$ is a set of ordered pairs</strong>, hence a relation (it's the relation which never holds).</p> <p>Now as long as our domain isn't stupid, $\emptyset$ clearly won't be reflexive: no element will be related to itself, since no element will be related to <em>anything</em> (our relation never holds). But what if we take our underlying set to <em>also</em> be $\emptyset$? Then <strong>every element of $\emptyset$ is related to itself</strong> - remember that "for all" statements about $\emptyset$ are vacuously true!</p> <p>So $\emptyset$ is a reflexive relation if our domain is $\emptyset$. And it's clearly transitive and symmetric no matter what the domain is (exercise). 
So, to use the phrasing of the book, $\emptyset$ is an equivalence relation in $\emptyset$.</p> <hr> <p>Re: the end of your question, you say that</p> <blockquote> <p>The book also claims that $\emptyset$ is an equivalence relation in a nonempty set $A$.</p> </blockquote> <p>This is <strong>false</strong> - $\emptyset$ isn't reflexive in any nonempty set $A$ (why?). Where does the book claim this?</p>
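The vacuous-truth point is directly executable: Python's `all()` over an empty iterable returns `True`, so the empty relation on the empty set passes all three tests, while reflexivity fails on a nonempty set.

```python
# The empty relation E on the empty set X: every "for all" test
# over X or E is vacuously true.
X = set()            # empty underlying set
E = set()            # empty relation (a set of ordered pairs)

reflexive = all((x, x) in E for x in X)
symmetric = all((b, a) in E for (a, b) in E)
transitive = all((a, d) in E
                 for (a, b) in E for (c, d) in E if b == c)
print(reflexive, symmetric, transitive)   # True True True

# ...but on a nonempty underlying set, reflexivity fails:
reflexive_on_ab = all((x, x) in E for x in {"a", "b"})
print(reflexive_on_ab)                    # False
```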
2,781,017
<p>I know that $\sum a_i b_i \leq \sum a_i \sum b_i$ for $a_i$, $b_i &gt; 0$. It seems this inequality will also hold true when $a_i$, $b_i \in (0,1)$. However, I am unable to find out if</p> <p>$\sum \frac{a_i}{b_i} \leq \frac{\sum a_i}{\sum b_i}$ </p> <p>holds true for $a_i$, $b_i \in (0,1)$.</p>
Fly by Night
38,495
<p>Is it true that $\frac{2}{1}+\frac{3}{1} \le \frac{2+3}{1+1}?$</p> <p>If you want numbers between $0$ and $1$, then consider $$\frac{0.2}{0.1}+\frac{0.3}{0.1} \ \ \le \ \ \frac{0.2+0.3}{0.1+0.1}$$</p>
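A quick sanity check of this counterexample, done in exact arithmetic to avoid floating-point noise (a Python sketch using the stdlib fractions module):

```python
from fractions import Fraction as F

a1, a2 = F(2, 10), F(3, 10)   # 0.2 and 0.3
b1, b2 = F(1, 10), F(1, 10)   # 0.1 and 0.1

lhs = a1 / b1 + a2 / b2        # 2 + 3 = 5
rhs = (a1 + a2) / (b1 + b2)    # (1/2) / (1/5) = 5/2
print(lhs, rhs)                # 5 5/2
assert lhs > rhs               # the proposed inequality fails
```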
1,182,523
<p>I have stumbled across this question: Let $a,b,c$ be integers, not all $0$, such that $\max(|a|,|b|,|c|)&lt;10^6$. Prove that $|a+b \sqrt{2} + c \sqrt{3}| &gt; 10^{-21}$. </p> <p>Could anybody help by solving this? An elementary solution is preferred.</p>
achille hui
59,379
<p>Let <span class="math-container">$0 &lt; \epsilon \ll 1$</span> be any small and <span class="math-container">$M \gg 1$</span> be any large positive numbers. Let <span class="math-container">$\lambda$</span> be a number of the form <span class="math-container">$a + b\sqrt{2} + c\sqrt{3}$</span> where <span class="math-container">$a, b, c \in \mathbb{Z}$</span>, not all zero such that</p> <p><span class="math-container">$$|a|, |b|, |c| &lt; M \quad\text{ and }\quad |\lambda| = |a + b\sqrt{2}+c\sqrt{3}| &lt; \epsilon $$</span></p> <p>It is easy to see <span class="math-container">$\lambda$</span> is a root of the polynomial</p> <p><span class="math-container">$$\begin{align} &amp; \left((x-a - b\sqrt{2})^2 - 3c^2\right)\left((x-a + b\sqrt{2})^2 - 3c^2\right)\\ = &amp; \left((x-a)^2 +2b^2 - 3c^2\right)^2 - 8b^2(x-a)^2\\ = &amp; (x-a)^4 - 2(2b^2+3c^2)(x-a)^2 +(2b^2-3c^2)^2\\ \end{align}$$</span> Applying the MVT for <span class="math-container">$x$</span> between <span class="math-container">$0$</span> and <span class="math-container">$\lambda$</span>, we can find a <span class="math-container">$\xi \in (0,1)$</span> such that</p> <p><span class="math-container">$$4((\xi \lambda - a)^2 - 2b^2 - 3c^2)(\xi \lambda - a)\lambda = -\left[a^4 - 2(2b^2+3c^2)a^2+(2b^2-3c^2)^2\right]$$</span> It is clear we can bound the absolute value of LHS from above by </p> <p><span class="math-container">$$4\max( (M+\epsilon)^2, 5 M^2 ) (M+\epsilon)\epsilon = 20 M^2 (M+\epsilon)\epsilon $$</span></p> <p>For <span class="math-container">$M = 10^6$</span> and <span class="math-container">$\epsilon = 10^{-20}$</span>, this bound is about <span class="math-container">$0.2$</span>. Since RHS is an integer, this forces it to be zero. 
i.e.</p> <p><span class="math-container">$$a^4 - 2(2b^2+3c^2)a^2+(2b^2-3c^2)^2 = 0 \quad\iff\quad (a^2 - 2b^2 -3c^2)^2 = 24b^2c^2 $$</span> Since <span class="math-container">$24$</span> is not a square, we have</p> <p><span class="math-container">$$\begin{align} a^2 - 2b^2 - 3c^2 = bc = 0 \implies &amp; ( b = 0 \land a^2 = 3c^2 ) \lor ( c = 0 \land a^2 = 2b^2 )\\ \implies &amp; a = b = c = 0 \end{align} $$</span> This contradicts our assumption that <span class="math-container">$a, b, c$</span> are not all zero.</p> <p><strong>Conclusion</strong>: If <span class="math-container">$|a|, |b|, |c| \le 10^6$</span> and not all zero, then <span class="math-container">$| a + b\sqrt{2} + c\sqrt{3} | \ge 10^{-20} &gt; 10^{-21}$</span>.</p> <hr> <p><strong>Random Notes</strong></p> <p>Please note that what was derived above is a very poor estimate of the actual lower bound.<br> If I didn't make any mistake, for <span class="math-container">$|a|, |b|, |c| &lt; M = 10^6$</span>, not all zero, we have</p> <p><span class="math-container">$$| a + b\sqrt{2}+c\sqrt{3} | \ge |376852-24672\sqrt{2}-197431\sqrt{3}| \approx 1.39824525 \times 10^{-11}\\ \color{red}{\text{this is wrong, see update below}} $$</span></p> <p>It seems that in general, the greatest lower bound is of the order <span class="math-container">$O(M^{-2})$</span> instead of <span class="math-container">$O(M^{-3})$</span>. In view of <a href="http://en.wikipedia.org/wiki/Thue%E2%80%93Siegel%E2%80%93Roth_theorem" rel="nofollow noreferrer">Roth's Theorem</a>, this is what one should expect. Unfortunately, I have no idea how to prove this speculation rigorously!</p> <p><strong>Update 2019/09/30</strong></p> <p>As pointed out by O.S. Dawg, the above lower bound is incorrect.<br> With <span class="math-container">$(a,b,c) = (96051,-616920,448258)$</span>, one has</p> <p><span class="math-container">$$a+b\sqrt{2}+c\sqrt{3} \sim 3.352882344 \times 10^{-13}$$</span> beating the lower bound I had before. 
Please note that to get this number, I need to crank up the precision in the computation (the new number is computed with 100 digits accuracy).</p>
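The near-cancellation for the triple $(96051,-616920,448258)$ quoted in the update is easy to double-check with high-precision arithmetic; a sketch using Python's stdlib decimal module (the precision of 60 digits is an arbitrary choice, well beyond double precision):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # enough digits that the cancellation is resolved exactly

a, b, c = 96051, -616920, 448258
sqrt2 = Decimal(2).sqrt()
sqrt3 = Decimal(3).sqrt()

val = a + b * sqrt2 + c * sqrt3
print(val)  # about 3.35e-13, per the answer's 100-digit computation
```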
41,718
<p>Currently I am working on creating a package in Mathematica version 9 on Windows 7. Here is my code: </p> <pre><code>BeginPackage["mypackage`"] Begin["`Private`"] getColumn[data_,branch_List]:= Module[{pos}, pos = Position[data,#][[1,2]]&amp;/@branch; data[[All,pos]] ]; RemoveMissing[data_]:=DeleteCases[data,{date_,val_}/;!NumberQ[val]]; End[] EndPackage[] </code></pre> <p>As you can see from the above, it contains two defined functions. I saved it as mypackage.m in my working directory.</p> <p>To make sure my working directory is the one that contains mypackage.m, I put <code>SetDirectory[NotebookDirectory[]]</code> in my notebook. Then, I call the package using </p> <pre><code>Needs["mypackage`"] </code></pre> <p>After evaluating, it seems not to load: when I call the function <code>getColumn</code>, the letters should turn black if the package is loaded, but they are still blue. Also, I tried other ways </p> <pre><code>`Needs["mypackage`","mypackage.m"]` </code></pre> <p>and</p> <pre><code>&lt;&lt;mypackage` </code></pre> <p>Both are still not working.</p> <p>I have two questions:</p> <ol> <li><p>When we save the .m file, is it necessary for it to have the same name as we defined in BeginPackage[""]?</p></li> <li><p>Is there anything missing so my package does not work? Any suggestions?</p></li> </ol>
Szabolcs
12
<p>Notice how the active context changes as the package gets loaded line by line:</p> <pre><code>BeginPackage["mypackage`"] (* the active context is mypackage` *) (* whatever gets mentioned for the first time within this section becomes part of the mypackage` context *) (* any symbol meant to be public, i.e. usable when the package is loaded, should be mentioned here *) Begin["`Private`"] (* the active context is mypackage`Private` *) (* whatever gets mentioned here first becomes part of mypackage`Private` *) End[] EndPackage[] </code></pre> <p>You did not mention these symbols in the public section of the package, so they won't be accessible without explicitly writing their context when you load the package.</p> <p>The key here is to mention every symbol that needs to be "exported" in the correct section so it becomes part of the <code>mypackage`</code> context. This context gets added to the context path when you load the package, so any symbol it contains can be referenced by its short name (without explicitly prepending the context) when you use the package.</p> <p>For more details please read: <a href="https://mathematica.stackexchange.com/questions/29324/creating-mathematica-packages">Creating Mathematica packages</a></p>
1,181,269
<p>I have a function $f(x)=x+\sin x$ and I want to prove that it is strictly increasing. A natural thing to do would be to examine $f(x+\epsilon)$ for $\epsilon &gt; 0$, and it is equal to $(x+\epsilon)+\sin(x+\epsilon)=x+\epsilon+\sin x\cos \epsilon + \sin \epsilon \cos x$.</p> <p>Now all I need to prove is that $x+\epsilon+\sin x\cos \epsilon + \sin \epsilon \cos x - \sin x - x$ is always greater than $0$, but it's a dead end for me as I don't know how to proceed. Any hints?</p>
Jack D'Aurizio
44,121
<p>Let $f(x)=x+\sin x$. Then $f'(x)=1+\cos x\geq 0$ and:</p> <p>$$ f(x+h)-f(x) = h\, f'(\xi),\quad \xi\in(x,x+h) $$ by Lagrange's theorem, hence $f(x+h)-f(x)\geq 0$. </p> <p>In order to prove that the inequality is strict, we can notice that: $$ f(x+h)-f(x-h) = 2h + 2\cos x \sin h $$ can be zero only if $\cos x=-1$ and $\sin h=h$, i.e. for $h=0$.</p>
95,126
<p>Consider the finite sum</p> <pre><code>rs[x_, n_] := x/n Sum[n^2/(i + (n - i) x)^2, {i, 1, n}] </code></pre> <p>Is there a way to get <em>Mathematica</em> to calculate the limit for <code>n -&gt; ∞</code>?</p> <p>I have tried <code>Limit[]</code> as well as <code>NLimit[]</code> without success.</p>
Community
-1
<p>It's not the solution you are looking for and surely you have tried the same.</p> <pre><code>rs[x_, n_] := x/n Sum[n^2/(i + (n - i) x)^2, {i, 1, n}] rs[1., \[Infinity]] </code></pre> <blockquote> <p>Infinity::indet: Indeterminate expression 0 [Infinity] encountered.</p> <p>Sum::div: Sum does not converge.</p> </blockquote> <pre><code>Plot[rs[2., n], {n, 1, 100000}, PlotPoints -&gt; 100] </code></pre> <p><a href="https://i.stack.imgur.com/Ywe4y.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ywe4y.gif" alt="enter image description here"></a></p> <p><strong>Addition</strong></p> <pre><code>&lt;&lt; NumericalCalculus` NLimit[rs[2, n], n -&gt; 3] </code></pre> <blockquote> <p>Infinity::indet: Indeterminate expression ComplexInfinity+ComplexInfinity encountered. >></p> <p>NLimit::notnum: The expression Indeterminate is not numerical at the point n == 4.`. >></p> <p>NLimit[2 n (PolyGamma[1, 1 - 2 n] - PolyGamma[1, 1 - n]), n -> 3]</p> </blockquote> <pre><code>Plot[PolyGamma[1, 1 - 2 n] - PolyGamma[1, 1 - n], {n, 1, 3}] </code></pre> <p><a href="https://i.stack.imgur.com/dqMt5.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/dqMt5.gif" alt="enter image description here"></a></p> <pre><code>NLimit[PolyGamma[1, 1 - 2 n] - PolyGamma[1, 1 - n], n -&gt; 3] </code></pre> <blockquote> <p>Infinity::indet: Indeterminate expression ComplexInfinity+ComplexInfinity encountered. >></p> <p>NLimit::notnum: The expression Indeterminate is not numerical at the point n == 4.`. >></p> </blockquote> <p>I would be surprised if there exists a limit.</p> <p>We know that with Limit[] and NLimit[] we cannot obtain a solution. With DiscretePlot we can get an idea, but that is not a proof. 
</p> <pre><code>Table[DiscretePlot[rs[x, k], {k, 1000}, PlotRange -&gt; All, AxesOrigin -&gt; {0, 0}], {x, {0.1, 1., 2., 5.}}] </code></pre> <p><a href="https://i.stack.imgur.com/nwrkF.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/nwrkF.gif" alt="enter image description here"></a></p> <p>I'm looking forward to the limiting procedure.</p>
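Outside Mathematica, a direct numeric probe of the partial sums is cheap. The Python sketch below suggests the limit exists and equals 1 for each tested x &gt; 0, which matches reading the sum as a right-endpoint Riemann sum of $x\int_0^1 (t+(1-t)x)^{-2}\,dt = 1$; like the plots, though, this is evidence rather than a proof:

```python
def rs(x, n):
    # x/n * Sum[n^2/(i + (n - i) x)^2, {i, 1, n}], as in the question
    return x / n * sum(n * n / (i + (n - i) * x) ** 2 for i in range(1, n + 1))

for x in (0.5, 2.0, 5.0):
    print(x, [round(rs(x, n), 4) for n in (10, 1000, 100000)])
    # the values drift toward 1 as n grows, for each tested x
```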
2,302,966
<p>Translate the following English statements into predicate logic formulae. The domain is the set of integers. Use the following predicate symbols, function symbols and constant symbols.</p> <ul> <li>Prime(x) iff x is prime</li> <li>Greater(x, y) iff x > y</li> <li>Even(x) iff x is even</li> <li>Equals(x,y) iff x=y</li> <li>sum(x, y), a function that returns x + y</li> <li>0, 1, 2, the constant symbols with their usual meanings</li> </ul> <p>I tried them, but don't know if they're correct.</p> <p><strong>(a) The relation Greater is transitive.</strong></p> <p>(∀x(∃y(∃z (Greater(x,y) ∧ Greater(y,z) -> Greater(x,z))))</p> <p><strong>(b) Every even number is the sum of two odd numbers. (Use (¬Even(x)) to express that x is an odd number.)</strong></p> <p><strong>(c) All primes except 2 are odd.</strong></p> <p>(∀x(Prime(x) ∧ ¬(Equals(x,2) -> ¬Even(x))</p> <p><strong>(d) There are an infinite number of primes that differ by 2. (The Prime Pair Conjecture)</strong></p> <p>(∀x(∃y (Prime(x) ∧ Equals(y,sum(x,2)) ∧ Prime(y)))) From what I remember, we aren't supposed to put the function symbol (sum(x,2)) inside Equals. How do I do this?</p> <p><strong>(e) For every positive integer, n, there is a prime number between n and 2n. (Bertrand's Postulate) (Note that you do not have multiplication, but you can get by without it.)</strong></p> <p>(∀x(∃y (Greater(x,1) -> (Greater(x,y) ∧ Prime(y) ^ Greater(y, Sum(x,x))))) (Same problem as in (d).)</p>
Stefan Dawydiak
98,106
<p>Here is a whole family of examples: The matrix groups $\mathrm{SL}(n,\mathbb{C})$, $\mathrm{GL}(n,\mathbb{C})$, $\mathrm{SO}(n,\mathbb{C})$, and $\mathrm{Sp}(n,\mathbb{C})$ are all real Lie groups. They are not compact (I think this is pretty easy to see). Each has a maximal compact subgroup though. In order, they are $\mathrm{SU}(n)$, the special unitary matrices, $\mathrm{U}(n)$, the unitary matrices, $\mathrm{SO}(n)$, the special unitary matrices with real entries, and $\mathrm{Sp}(n)$, which is $\mathrm{Sp}(n,\mathbb{C})\cap\mathrm{U}(2n)$. </p>
1,371,649
<p>The question is:</p> <blockquote> <p>What does the following iteration formula do?: <span class="math-container">$$x_{k+1}=2x_k-cx_{k}^2.$$</span></p> </blockquote> <p>I tried to identify this with Newton's method. I.e. I tried to bring that into the form <span class="math-container">$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}$</span>, which leads to: <span class="math-container">$$(cx_k^2-x_k)f'(x_k)=f(x_k).$$</span> But then <span class="math-container">$f(x)$</span> is something like <span class="math-container">$e^a$</span> but such functions don't have any roots... Is this still correct, so that I must note that this iteration formula does not converge, or are there any other functions satisfying this equality?</p>
Lutz Lehmann
115,115
<p>You could also see the partial square of a binomial on the right side, and complete it as <span class="math-container">$$ 1-cx_{k+1}=1-2cx_k+(cx_k)^2=(1-cx_k)^2. $$</span> The sequence <span class="math-container">$y_k=1-cx_k$</span> is then a subsequence of the geometric sequence, and from <span class="math-container">$y_k=y_0^{2^k}$</span> follows the quadratic convergence.</p>
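As for what the iteration "does": $x_{k+1}=2x_k-cx_k^2$ is exactly Newton's method applied to $f(x)=1/x-c$, so it converges quadratically to $1/c$ whenever $|1-cx_0|&lt;1$; this division-free reciprocal iteration is a classic trick (used, e.g., for hardware reciprocal approximation). A short numeric Python sketch:

```python
def reciprocal(c, x0, steps):
    # x_{k+1} = 2 x_k - c x_k^2, Newton's method for f(x) = 1/x - c
    x = x0
    for _ in range(steps):
        x = 2 * x - c * x * x
    return x

# needs |1 - c*x0| < 1 to converge; here y_0 = 1 - 3*0.3 = 0.1,
# and the error y_k = y_0^(2^k) squares at every step
approx = reciprocal(3.0, 0.3, 6)
print(approx, 1 / 3)
```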
2,910,301
<p>Most textbooks state that, given a field $F$, a nonzero polynomial $f\in F[x]$ of degree $n$ has at most $n$ <em>distinct</em> roots. I am wondering whether the word "distinct" can be removed. I guess the answer is yes, but I cannot come up with a nice proof. </p> <p>Sorry for the confusion. The question above can be rephrased as "if $f$ has roots $\alpha_i$ of multiplicity $n_i$, then is it possible that $\sum n_i&gt;n$?"</p> <p>Here "$\alpha$ is of multiplicity $k$" means $(x-\alpha)^k\mid f$ but $(x-\alpha)^{k+1}\nmid f$.</p> <p>For a commutative ring this is possible, e.g., $x^2-x\in\mathbb{Z}_6[x]$ has roots $\overline{0},\overline{1},\overline{3},\overline{4}$.</p>
Mark Bennet
2,906
<p>The key thing you need is that there are no non-trivial zero divisors, so that if $ab=0$ then either $a=0$ or $b=0$. So the formula for the maximum number of roots works if you are working within a domain (and not just a field - the integers will do too).</p> <p>Then you can use the polynomial division algorithm with $x-a$ to give, for any polynomial $p(x)$, that $$p(x)=(x-a)q(x)+r$$If you now make $a$ a root of $p$ and set $x=a$ you get $r=p(a)=0$ and $p(x)=(x-a)q(x)$. Then if $q(x)$ has a root $b$ you can take out a factor $x-b$ so you get $$p(x)=(x-a)^h(x-b)^t \dots (x-f)^u s(x)$$ where $s$ has no roots, and may be a scalar.</p> <p>Now, because there are no zero divisors, if $p(y)=0$ one of the factors must be zero. It can't be $s(y)$ ($s$ has no roots) so it must be one of the others, and so $y$ must be one of the roots you already know. Now equate degrees (or work the other way and use induction on degree).</p> <p>In the case of integers modulo $6$ you have $2\times 3 \equiv 0$ so that $x(x-1)\equiv 0$ can be true even if $x$ is not $0$ or $1$.</p> <p>Indeed we also have $x(x-1)\equiv (x-4)(x-3)$, which shows that factorisation is not unique.</p>
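A brute-force illustration of this contrast (a Python sketch, searching all residues) counts the roots of $x^2-x$ over $\mathbb{Z}_6$, which has zero divisors, versus $\mathbb{Z}_7$, which is a field:

```python
def roots(n):
    # roots of x^2 - x in Z_n, by exhaustive search over residues
    return [x for x in range(n) if (x * x - x) % n == 0]

print(roots(6))  # [0, 1, 3, 4] -- four roots, more than the degree
print(roots(7))  # [0, 1]       -- a field: at most deg = 2 roots
```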
481,764
<p>What type of symmetry does the function $y=\frac{1}{|x|}$ have? Specify the intervals over which the function is increasing and the intervals where it is decreasing.</p>
tylerc0816
53,243
<p>Plotting the function is a good way to guess at where the function is going to be increasing or decreasing. Then you can think about why.</p> <p>As you increase $x$, what is going to happen to $\frac{1}{|x|}$? What happens when $x$ approaches (but does not equal!) 0?</p>
229,703
<p>Given the function <span class="math-container">\begin{align*} f \colon \mathbb{R}^n &amp;\to \mathbb{R}^n\\ v&amp;\mapsto \dfrac{v}{\|v\|}, \end{align*}</span> I would like to compute the derivative of <span class="math-container">$f$</span>, that is <span class="math-container">$df(v)$</span>. It is possible to derive it by hand, which leads to</p> <p><span class="math-container">$$df(v)=\dfrac{1}{\| v\|}\Big(I_n - \dfrac{v}{\|v\|}\otimes \dfrac{v}{\|v\|}\Big)$$</span></p> <p>where <span class="math-container">$I_n$</span> is the identity second-order matrix.</p> <p>I believe <em>Mathematica</em> cannot find that using simple built-in functions (without explicitly defining <code>v = {v1, v2, v3}</code> if <span class="math-container">$n=3$</span> for instance). Some packages are dedicated to differential geometry (see <a href="https://mathematica.stackexchange.com/questions/60243/coordinate-free-differential-forms-package">Coordinate free differential forms package</a> or <a href="https://mathematica.stackexchange.com/questions/2620/differential-geometry-add-ons-for-mathematica">Differential geometry add-ons for Mathematica</a>) but I failed to achieve the above calculation. Any hint would be appreciated.</p> <hr /> <p><strong>Edit</strong> For those of you who are interested in how to find the above formula, you can define <span class="math-container">$g(t)=f(v(t))=\big(v(t)\cdot v(t)\big)^{-1/2}v(t)$</span> and compute <span class="math-container">$g'(t)$</span> with the chain rule. 
<span class="math-container">$g'(t)$</span> is a linear function of <span class="math-container">$v'(t)$</span> because:</p> <p><span class="math-container">$$g'(t)=\dfrac{df}{dv}(v(t)) v'(t)$$</span></p> <p>Taking the coefficient in front of <span class="math-container">$v'(t)$</span> gives the above expression.</p> <p>Now, the naive implementation of this approach below fails because it does not capture the multi-dimensionality of <code>f</code>:</p> <pre><code>f[v_] = v/Norm[v] h[t_] = D[f[v[t]], t]/v'[t] // Simplify h[t] /. Norm'[v[t]] -&gt; v[t]/Norm[v[t]] // Simplify (* (Norm[v[t]]^2 - v[t]^2)/Norm[v[t]]^3 *) </code></pre>
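Independently of any CAS, the hand-derived formula $\frac{1}{\|v\|}\big(I_n-\hat v\otimes\hat v\big)$ can be sanity-checked numerically; a small Python sketch (helper names are mine) compares it against central finite differences at a random point:

```python
import math, random

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def jacobian_formula(v):
    # (1/||v||) * (I - v_hat (x) v_hat), the claimed derivative of v -> v/||v||
    n = math.sqrt(sum(x * x for x in v))
    vh = [x / n for x in v]
    return [[(1.0 if i == j else 0.0) / n - vh[i] * vh[j] / n
             for j in range(len(v))] for i in range(len(v))]

def jacobian_fd(v, h=1e-6):
    # central finite differences of f(v) = v/||v||
    d = len(v)
    J = [[0.0] * d for _ in range(d)]
    for j in range(d):
        vp = list(v); vp[j] += h
        vm = list(v); vm[j] -= h
        fp, fm = normalize(vp), normalize(vm)
        for i in range(d):
            J[i][j] = (fp[i] - fm[i]) / (2 * h)
    return J

random.seed(0)
v = [random.uniform(-1, 1) for _ in range(3)]
A, B = jacobian_formula(v), jacobian_fd(v)
err = max(abs(A[i][j] - B[i][j]) for i in range(3) for j in range(3))
print(err)  # tiny: only finite-difference truncation error remains
```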
vsht
20,240
<p>A solution using <a href="https://feyncalc.github.io/" rel="noreferrer">FeynCalc</a> would be to write</p> <pre><code>ex = CVD[v, i]/Sqrt[CSPD[v, v]] </code></pre> <p>which corresponds to <span class="math-container">$ \frac{v^i}{\sqrt{v^2}} $</span> (<code>CVD</code> denotes a <span class="math-container">$D-1$</span> dimensional Cartesian vector, <code>CSPD</code> is a Cartesian scalar product in <span class="math-container">$D-1$</span> dimensions). Then using the routine <code>ThreeDivergence</code> (<span class="math-container">$\nabla^j$</span>)</p> <pre><code>ThreeDivergence[ex, CVD[v, j]] </code></pre> <p>we find <span class="math-container">$ \frac{\delta ^{i j}}{\sqrt{v^2}}-\frac{v^i v^j}{\left(v^2\right)^{3/2}}. $</span></p> <p>Of course, FeynCalc is not a tool for doing differential geometry. The tensor routines only cover what one usually needs in Feynman diagram calculations. So I suppose that for more serious tasks the OP would still need to become familiar with dedicated tensor algebra packages.</p>
229,703
<p>Given the function <span class="math-container">\begin{align*} f \colon \mathbb{R}^n &amp;\to \mathbb{R}^n\\ v&amp;\mapsto \dfrac{v}{\|v\|}, \end{align*}</span> I would like to compute the derivative of <span class="math-container">$f$</span>, that is <span class="math-container">$df(v)$</span>. It is possible to derive it by hand, which leads to</p> <p><span class="math-container">$$df(v)=\dfrac{1}{\| v\|}\Big(I_n - \dfrac{v}{\|v\|}\otimes \dfrac{v}{\|v\|}\Big)$$</span></p> <p>where <span class="math-container">$I_n$</span> is the identity second-order matrix.</p> <p>I believe <em>Mathematica</em> cannot find that using simple built-in functions (without explicitly defining <code>v = {v1, v2, v3}</code> if <span class="math-container">$n=3$</span> for instance). Some packages are dedicated to differential geometry (see <a href="https://mathematica.stackexchange.com/questions/60243/coordinate-free-differential-forms-package">Coordinate free differential forms package</a> or <a href="https://mathematica.stackexchange.com/questions/2620/differential-geometry-add-ons-for-mathematica">Differential geometry add-ons for Mathematica</a>) but I failed to achieve the above calculation. Any hint would be appreciated.</p> <hr /> <p><strong>Edit</strong> For those of you who are interested in how to find the above formula, you can define <span class="math-container">$g(t)=f(v(t))=\big(v(t)\cdot v(t)\big)^{-1/2}v(t)$</span> and compute <span class="math-container">$g'(t)$</span> with the chain rule. 
<span class="math-container">$g'(t)$</span> is a linear function of <span class="math-container">$v'(t)$</span> because:</p> <p><span class="math-container">$$g'(t)=\dfrac{df}{dv}(v(t)) v'(t)$$</span></p> <p>Taking the coefficient in front of <span class="math-container">$v'(t)$</span> gives the above expression.</p> <p>Now, the naive implementation of this approach below fails because it does not capture the multi-dimensionality of <code>f</code>:</p> <pre><code>f[v_] = v/Norm[v] h[t_] = D[f[v[t]], t]/v'[t] // Simplify h[t] /. Norm'[v[t]] -&gt; v[t]/Norm[v[t]] // Simplify (* (Norm[v[t]]^2 - v[t]^2)/Norm[v[t]]^3 *) </code></pre>
Carl Woll
45,431
<p>Maybe you could use the following approach:</p> <pre><code>Clear[VectorD] VectorD[e_, v_] := ReplaceAll[ D[e, VectorD, NonConstants-&gt;{v}], s_Dot:&gt;TensorReduce[s,Assumptions-&gt;v ∈ Vectors[d]] ] VectorD /: D[s_. v_,VectorD,NonConstants-&gt;{v_}] := s IdentityMatrix[d] + TensorProduct[v, D[s, VectorD, NonConstants-&gt;{v}]] VectorD /: D[Transpose[f_], VectorD, NonConstants-&gt;{x_}] := Transpose[ D[f, VectorD, NonConstants-&gt;{x}] ] VectorD /: D[a_Dot|a_Times|a_TensorProduct, VectorD, NonConstants-&gt;{x_}] := Sum[ MapAt[D[#, VectorD, NonConstants-&gt;{x}]&amp;, a, i], {i,Length[a]} ] </code></pre> <p>Then:</p> <pre><code>VectorD[v/Sqrt[v.v], v] </code></pre> <blockquote> <p>IdentityMatrix[d]/Sqrt[v.v] - TensorProduct[v, v]/(v.v)^(3/2)</p> </blockquote>
3,696,371
<p>I was rolling stats for a set of characters (main + backup) with my DM, and he told me I could choose between 3 sets of two rolls. One rolled by him, one rolled by another player, and one rolled by me. He and the other player use physical dice, rolling three dice, then rerolling the lowest value twice. I myself roll electronic dice, and I used a <code>5d6d2</code> command.</p> <p>When the three of us came up with sets of rolls, the results were very different, to the point that you could easily guess which one was mine (the only one done using electronic dice):</p> <p>DM's rolls: {14, 17, 14, 15, 15, 18} &amp; {17, 16, 16, 16, 16, 14}</p> <p>Player's rolls: {15, 17, 17, 16, 18, 17} &amp; {17, 15, 16, 16, 18, 16}</p> <p>My rolls: {7, 12, 15, 13, 16, 11} &amp; {16, 16, 10, 14, 14, 17}</p> <p>This isn't that surprising: in my experience, physical dice have a tendency to roll higher (just by comparing what I roll against them, this is far from the first time this has happened). However, we started a friendly argument regarding the statistics we were using for the rolls, and whether we were actually rolling the same/equivalent thing.</p> <p>My belief is that the two different roll scenarios are equivalent, while my DM believes it's a situation similar to the <a href="https://en.wikipedia.org/wiki/Monty_Hall_problem" rel="nofollow noreferrer">three doors problem</a>, as he calls it.</p> <hr> <h2>My scenario/rolls</h2> <p>Rolling 5 six-sided dice and taking the three largest values. This is the <code>5d6D2</code> roll; using <a href="http://rumkin.com/reference/dnd/diestats.php" rel="nofollow noreferrer">this page</a> we can get a neat graph of the possibilities for each roll.</p> <p><strong>Example roll:</strong> {3, 2, 2, 1, 1} Result: 7</p> <h2>My DM's/fellow player's scenario/rolls</h2> <p>Rolling 3 six-sided dice, then rerolling the lowest value twice.</p> <p><strong>Example roll:</strong> Roll: {6, 4, 1} Result: 11</p> <p>Reroll the 1. 
Roll: {4} Result: 4</p> <p>Reroll either 4. Roll: {1} Result: 1</p> <p>Final roll: {6, 4, 4} Result: 14</p> <hr> <p>Is there a statistical distribution difference between these two scenarios?</p>
Xander Henderson
468,350
<p>You are both wrong:</p> <ul> <li><p>You are wrong because the two rolling procedures do not produce the same distributions of probability.</p></li> <li><p>Your DM is wrong, because this has nothing to do with the Monty Hall (three doors) problem.</p></li> </ul> <p>Tackling these in reverse order: in the Monty Hall problem, the host of the game knows where the goats and prize are. He cannot reveal the prize, so when he reveals a goat, you gain extra information. That is, there is a dependence in between the sequence of events which leads to an outcome (winning or losing the game).</p> <p>In the dice scenario, the die rolls are independent. There is no difference between rolling three dice and rerolling the lowest, and rolling four dice and throwing away the lowest of the first three. You don't get any extra information after rolling three dice, as the fourth roll is independent of the previous three rolls.</p> <p>Therefore your DM's explanation is incorrect.</p> <p>Your error is, perhaps, a bit more subtle. Since the question asks <em>whether</em> the probability distributions are different, and not for the probability distributions <em>themselves</em>, let's consider a much simpler case: </p> <ul> <li><p><strong>Procedure 1:</strong> Flip a coin (a d2, if you will) twice&mdash;say tails is <span class="math-container">$0$</span> and heads is <span class="math-container">$1$</span>. Reflip the lowest result. Equivalently, flip three coins, then throw away the lowest result <em>among the first two tosses</em>.</p></li> <li><p><strong>Procedure 2:</strong> Flip a coin three times, and throw away the lowest result.</p></li> </ul> <p>In either case, you flip the coin <span class="math-container">$3$</span> times, leading to <span class="math-container">$2^3=8$</span> different events, each corresponding to a sequence of flips. For example, possible events are <span class="math-container">$$ HHT, \qquad TTT, \qquad\text{and}\qquad HTH. 
$$</span> With respect to either procedure, these <span class="math-container">$8$</span> events correspond to just three outcomes: you get a sum of <span class="math-container">$0$</span>, <span class="math-container">$1$</span>, or <span class="math-container">$2$</span> (the total number of heads which "count"). However, the probabilities of these outcomes are not the same. I have chosen this example to be small and tractable, so we can actually show every possible event and outcome, as summarized below:</p> <p><span class="math-container">\begin{array}{ccc} \text{Event} &amp; \text{Proc 1} &amp; \text{Proc 2} \\\hline TTT &amp; 0 &amp; 0 \\ TTH &amp; 1 &amp; 1 \\ THT &amp; 1 &amp; 1 \\ THH &amp; 2 &amp; 2 \\ HTT &amp; 1 &amp; 1 \\ HTH &amp; 2 &amp; 2 \\ \color{red}{HHT} &amp; \color{red}{1} &amp; \color{red}{2} \\ HHH &amp; 2 &amp; 2 \end{array}</span></p> <p>Notice that these two procedures give very similar results, but that Procedure 2 is, on average, slightly better. The difference occurs in the line which I have colored red&mdash;under Procedure 1, you get two heads, but then replace one of those heads with tails on your next toss. Under Procedure 2, you get to keep both heads, so you are better off.</p> <p>The question regarding dice follows a similar pattern. Rolling three dice, rerolling the lowest, then rerolling the lowest again (or, equivalently, rolling five dice, then dropping the lowest two from the first four) is akin to Procedure 1. Rolling five dice and dropping the lowest two is akin to Procedure 2. Procedure 2 is always going to win out&mdash;heuristically, this is because you never replace a high roll with a lower roll.</p>
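The same conclusion shows up immediately in simulation. A Monte Carlo sketch in Python (sample size and seed are arbitrary choices of mine; the reroll procedure replaces the lowest die with the new roll, as in the analysis above):

```python
import random

def proc_reroll(rng):
    # roll 3d6, reroll the lowest, then reroll the lowest again
    dice = [rng.randint(1, 6) for _ in range(3)]
    for _ in range(2):
        dice.remove(min(dice))
        dice.append(rng.randint(1, 6))
    return sum(dice)

def proc_drop(rng):
    # roll 5d6 and keep the three highest (the 5d6d2 command)
    return sum(sorted(rng.randint(1, 6) for _ in range(5))[2:])

rng = random.Random(42)
N = 100_000
m1 = sum(proc_reroll(rng) for _ in range(N)) / N
m2 = sum(proc_drop(rng) for _ in range(N)) / N
print(m1, m2)  # the drop-lowest procedure comes out noticeably higher on average
```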
2,669,524
<p>I am reading <strong>Algebraic Geometry</strong>, Vol 1, <em>Kenji Ueno</em>. My problem is that $$k\left[ x,y,t\right]/\left(xy-t\right)\otimes_{k\left[t\right]}k\left[t\right]/\left(t-a\right) \simeq k\left[x,y\right]/\left(xy-a\right) $$ where $k$ is a field and $a$ is an element in $k$. I don't understand how it works. I tried to use the result $$R\otimes_A A/I =R/IR,$$ where $A$ and $R$ are commutative rings and $I$ is an ideal of $A$. Then I obtain $$LHS=\left[k\left[ x,y,t\right]/\left(xy-t\right)\right]/\left[\left(t-a\right)k\left[ x,y,t\right]/\left(xy-t\right)\right].$$ But now it seems hard to manage. Please help me. Thank you very much for your help.</p>
user
505,767
<p>Note that $dV$ in the integral is just a symbol to identify the variable with respect to which we are integrating, and $\Delta V$ in this case just indicates the difference $V(2)-V(1)$, according to the fundamental theorem of calculus.</p>
976,910
<p>I'm having a small issue with a certain question. </p> <p>Given a parametric equation of a plane $x=5-2a-3b$, $y=3-4a+2b$, $z=7-6a-2b$, find a point $P$ on the plane so that the position vector of $P$ is perpendicular to the plane.</p> <p>How would you go about this for a parametric equation? I think I could convert this to a Cartesian equation and dissect an answer that way, but how can I do this without having to convert it?</p> <p>The hint it gives on the page is that $P$ has the vector $\overrightarrow{OP}$, so I'd imagine the first thing I would do is use the dot product with dummy variables for the $i$, $j$ and $k$ values of $P$. Am I on the right track?</p> <p>Thanks in advance.</p>
copper.hat
27,978
<p>Suppose $x \notin K$. If $B(x,{1 \over n})$ intersects $K$ for all $n$, then $x$ is a limit point of $K$, which is a contradiction. Hence for some $n_x$ we have $K \cap B(x, {1 \over n_x}) = \emptyset$. Hence $K^c$ is open.</p>
4,004,978
<blockquote> <p>For all <span class="math-container">$a, b, c, d &gt; 0$</span>, prove that <span class="math-container">$$2\sqrt{a+b+c+d} ≥ \sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d}$$</span></p> </blockquote> <p>The idea would be to use AM-GM, but <span class="math-container">$\sqrt{a} + \sqrt{b} + \sqrt{c} + \sqrt{d}$</span> is hard to expand. I also tried squaring both sides, but that hasn't worked either. Using two terms at a time doesn't really work as well. How can I solve this question? Any help is appreciated.</p>
Neat Math
843,178
<p>How about Jensen's inequality? <span class="math-container">$\sqrt x$</span> is concave so</p> <p><span class="math-container">$$\sqrt{\frac{a+b+c+d}{4}} \ge \frac 14 \left( \sqrt a +\sqrt b +\sqrt c +\sqrt d \right)$$</span></p>
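A randomized sanity check of the inequality (a Python sketch; trial count and seed are arbitrary), consistent with the Jensen argument:

```python
import random

def check(trials=10_000, seed=1):
    # sample positive a, b, c, d and verify 2*sqrt(a+b+c+d) >= sum of sqrts
    rng = random.Random(seed)
    ok = True
    for _ in range(trials):
        a, b, c, d = (rng.uniform(1e-9, 100.0) for _ in range(4))
        lhs = 2 * (a + b + c + d) ** 0.5
        rhs = a ** 0.5 + b ** 0.5 + c ** 0.5 + d ** 0.5
        ok = ok and (lhs >= rhs - 1e-9)  # small slack for float rounding
    return ok

print(check())  # True: no counterexample found
```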
1,929,445
<blockquote> <p>Is there a solution to the problem $$\left\{\begin{matrix} y'=y+y^4\\ y(x_0)=y_0 \end{matrix}\right.$$ which is defined on $\mathbb{R}$? ($x_0,y_0$ might be any real numbers)</p> </blockquote> <p>It's easy to prove that for all $(x_0,y_0)\in\mathbb{R}^2$ there exists an open interval $I$ (with $x_0\in I$) where the problem has a unique solution. However, is the maximal interval always $(-\infty,\infty)$? I know that the answer is no, but that's just because I found a solution for particular values of $x_0$ and $y_0$ and checked its domain. But is there a way to prove that $I$ need not be $I=(-\infty,\infty)$ without actually solving the problem for a certain initial condition? In other words, how can I prove that the unique solution need not be defined on $\mathbb{R}$?</p>
Futurologist
357,211
<p>Actually, there is a whole family of solutions of this equation that are defined on the whole real line. Look carefully at the equation itself $$\frac{dy}{dx} = y + y^4$$ and write it in the form $$\frac{dy}{dx} = y(1 + y^3)$$ As the right-hand side is polynomial, the equation is locally Lipschitz everywhere, so locally everywhere there exists a unique solution and two solutions cannot cross. Now, observe that the constant functions $y(x) \equiv 0$ and $y(x) \equiv -1$ are actually the unique solutions to the initial value problems<br> \begin{align} \frac{dy}{dx} &amp;= y + y^4\\ y(0) &amp;= 0 \end{align} and \begin{align} \frac{dy}{dx} &amp;= y + y^4\\ y(0) &amp;= -1 \end{align} respectively. Since two different solutions of the equations cannot cross unless they coincide everywhere, any solution that satisfies the initial value problem \begin{align} \frac{dy}{dx} &amp;= y + y^4\\ y(x_0) &amp;= y_0 \end{align} for $(x_0, y_0) \in \mathbb{R} \times [-1, 0]$ will be trapped in the region $\mathbb{R} \times [-1, 0]$ for all $x$ in its maximal interval of definition and therefore that maximal interval is the whole $\mathbb{R}$. </p> <p>In conclusion, all solutions $y=y(x)$ of the differential equation $$\frac{dy}{dx} = y + y^4$$ such that $-1 \leq y(x_0) \leq 0$ for some $x_0 \in \mathbb{R}$ are defined as differentiable functions $y : \mathbb{R} \to \mathbb{R}$ with the property that $-1 \leq y(x) \leq 0$ for all $x \in \mathbb{R}$. </p>
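The trapping behaviour described above can be illustrated numerically; a rough RK4 sketch in Python (step size and time horizon are arbitrary choices of mine) integrates $y'=y+y^4$ from $y(0)=-0.5$ and shows the trajectory staying inside $[-1,0]$ while drifting toward the equilibrium $y=-1$:

```python
def f(y):
    return y + y ** 4

def rk4(y0, t_end, h=0.01):
    # classical fourth-order Runge-Kutta for the autonomous ODE y' = f(y)
    y, traj = y0, [y0]
    for _ in range(round(t_end / h)):
        k1 = f(y)
        k2 = f(y + h * k1 / 2)
        k3 = f(y + h * k2 / 2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        traj.append(y)
    return traj

traj = rk4(-0.5, 20.0)
print(min(traj), max(traj))  # the trajectory never leaves [-1, 0]
```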
2,429,164
<p>I'd like to prove $$\lim_{n\rightarrow+\infty} \int_0^\pi n\sqrt{n^2+x^2-2nx\cos\theta}-n^2 \mathrm{d}\theta = \frac{\pi}{4}x^2$$ for $x\in[0,\infty)$. I checked it numerically and derived it with the Matlab Symbolic Toolbox, but cannot prove it by calculus.</p> <p>I cannot use the Dominated Convergence Theorem since I could not find any $L^1[0,\pi]$ function of $\theta$ dominating $n\sqrt{n^2+x^2-2nx\cos\theta}-n^2$. I also tried integration by parts, but it didn't work.</p> <p>Does anyone have an idea? (A change of variable to get rid of $\cos\theta$, perhaps?) Thanks </p> <hr> <p><strong>Update, following Claude's solution</strong></p> <p>I'm trying to follow Claude's solution, but I've encountered a problem. I had tried before to transform my integral into the complete elliptic integral of the second kind.</p> <p>Following your notation, \begin{equation*} \begin{array}{rcl} \displaystyle{\int_0^\pi \sqrt{1-k\cos\theta}~d\theta} &amp;=&amp; \displaystyle{\int_0^\pi \sqrt{1-k\left(1-2\sin^2\frac{\theta}{2}\right)}}~d\theta \\ &amp;=&amp; 2\,\displaystyle{\int_0^\frac{\pi}{2} \sqrt{1-k+2k\sin^2\theta}}~d\theta \\ &amp;=&amp; 2 \sqrt{1-k}\displaystyle{\int_0^\frac{\pi}{2}}\sqrt{1+\frac{2k}{1-k}\sin^2\theta}~d\theta \end{array} \end{equation*} But then I got stuck, because the complete elliptic integral of the second kind is of the form $$\int_0^\frac{\pi}{2} \sqrt{1-a^2\sin^2\theta}~d\theta$$ with $a\in[0,1]$, right? Where did you get your 2nd equation from?</p>
zhw.
228,045
<p>The integrand equals</p> <p>$$\tag 1 n^2(1+x^2/n^2 -(2x\cos \theta)/n)^{1/2} - n^2.$$</p> <p>From Taylor we have $(1+u)^{1/2} = 1+u/2 -u^2/8 +O(u^3)$ as $u\to 0.$ Apply this with $u = x^2/n^2 -(2x\cos \theta)/n$ to see $(1)$ equals</p> <p>$$\frac{n^2}{2} (x^2/n^2 -(2x\cos \theta)/n) - \frac{n^2}{8}\frac{4x^2\cos^2 \theta}{n^2} + O(\frac{1}{n}).$$</p> <p>If you now integrate on $[0,\pi],$ you get $x^2\pi/4 + O(1/n) \to x^2\pi/4.$</p>
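As a quick numeric check of this expansion (my own sketch, not part of the original answer, using a plain midpoint rule): for large $n$ the integral should indeed approach $\pi x^2/4$.

```python
import math

def integral(n, x, m=20000):
    """Midpoint rule for ∫₀^π ( n·sqrt(n² + x² - 2nx·cosθ) - n² ) dθ."""
    h = math.pi / m
    return h * sum(
        n * math.sqrt(n * n + x * x - 2 * n * x * math.cos((i + 0.5) * h)) - n * n
        for i in range(m)
    )

x = 2.0
# Predicted limit: π x² / 4.
assert abs(integral(1000, x) - math.pi * x * x / 4) < 1e-2
```

The convergence is in fact faster than the $O(1/n)$ bound above, since the odd powers of $\cos\theta$ integrate to zero over $[0,\pi]$.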
3,063,053
<p>I'm a Calculus I student and my teacher has given me a set of problems to solve with L'Hoptial's rule. Most of them have been pretty easy, but this one has me stumped. <br /></p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>You'll notice that using L'Hopital's rule flips the value of the top to the bottom. For example, using it once returns: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{\sqrt{x^2 + 1}}{x}$$</span> </p> <p>And doing it again returns you to the beginning: </p> <p><span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$$</span> </p> <p>I of course plugged it into my calculator to find the limit to evaluate to 1, but I was wondering if there was a better way to do this algebraically?</p>
Noble Mushtak
307,483
<p>By your own reasoning, you have the following: <span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}=\lim\limits_{x\to \infty} \frac{\sqrt{x^2 + 1}}{x}$$</span></p> <p>Now, the left side is clearly the reciprocal of the right side, so we have: <span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}=\frac{1}{\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}}$$</span></p> <p>(Note that doing this manipulation assumes that <span class="math-container">$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}$</span> converges to a real number. However, you can use the first derivative to show this is an always increasing function and then use basic algebra to show that <span class="math-container">$\frac{x}{\sqrt{x^2 + 1}} &lt; 1$</span> for all <span class="math-container">$x\in\Bbb{R}$</span>. Thus, because this is a bounded, always increasing function, the limit as <span class="math-container">$x\to \infty$</span> must converge to some real number, so our assumption in this manipulation is valid.)</p> <p>Cross-multiply: <span class="math-container">$$\left(\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}\right)^2=1$$</span></p> <p>Take the square root: <span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}=\pm 1$$</span></p> <p>However, it is easy to show that <span class="math-container">$\frac{x}{\sqrt{x^2 + 1}}&gt;0$</span> for all <span class="math-container">$x &gt; 0$</span>. Therefore, there's no way the limit can be a negative number like <span class="math-container">$-1$</span>. Thus, the only possibility we have left is <span class="math-container">$+1$</span>, so: <span class="math-container">$$\lim\limits_{x\to \infty} \frac{x}{\sqrt{x^2 + 1}}=1$$</span></p>
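The two facts used in the parenthetical — the function is increasing and bounded above by $1$ — and the limit itself are easy to corroborate numerically (a small sketch of my own, not part of the original answer):

```python
import math

def f(x):
    return x / math.sqrt(x * x + 1)

values = [f(10.0 ** k) for k in range(7)]
assert all(v < 1 for v in values)                      # bounded above by 1
assert all(a < b for a, b in zip(values, values[1:]))  # increasing
assert abs(values[-1] - 1) < 1e-10                     # limit is 1
```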
621,409
<p>I need some help with the following question:</p> <p>We have $H$ acting by automorphisms on $N$, and let $\rho:H\to Aut(N)$ the associated representation by automorphisms.</p> <p>Suppose that $G=H[N]_{\rho}$ is a semidirect product, and $K=\ker(\rho)$.</p> <p>Prove that $K\unlhd G$ and that $G/K$ is also a semidirect product. </p> <p>Thanks a lot in advance for any help!</p> <hr> <p>Edit: I deleted the part that was unclear (in fact bad formulated). The answers of DonAntonio and user finally solved the question.</p>
user1337
62,839
<p>Note that $$\frac{1-\cos(x^2+y^2)}{(x^2+y^2)^2}=\frac{2 \sin^2 \frac{x^2+y^2}{2}}{(x^2+y^2)^2}, $$ as well as that for small $\theta$ $$\sin \theta \sim \theta. $$</p>
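If this hint is being used to evaluate $\lim_{(x,y)\to(0,0)}\frac{1-\cos(x^2+y^2)}{(x^2+y^2)^2}$, then writing $t=x^2+y^2$ the quotient becomes $2\sin^2(t/2)/t^2 \to 1/2$ as $t\to 0$. A quick numeric sketch of my own (not part of the original answer):

```python
import math

def q(t):
    """(1 - cos t) / t², rewritten with the half-angle identity above."""
    return 2 * math.sin(t / 2) ** 2 / t ** 2

for t in (1e-1, 1e-2, 1e-3):
    assert abs(q(t) - 0.5) < t  # the actual error is about t²/24
```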
748,325
<p>In order to prove non-uniqueness of singular vectors when a repeated singular value is present, the book (Trefethen) argues as follows: Let $\sigma$ be the first singular value of $A$, and $v_{1}$ the corresponding singular vector. Let $w$ be another linearly independent vector such that $||Aw||=\sigma$, and construct a third vector $v_{2}$ belonging to the span of $v_{1}$ and $w$, and orthogonal to $v_{1}$. All three vectors are unit vectors, so $w=av_{1}+bv_{2}$ with $|a|^2+|b|^2=1$, and $v_{2}$ is constructed (Gram-Schmidt style) as follows:</p> <p>$$ {v}_{2}= \dfrac{{w}-({v}_{1}^{T} w ){v}_{1}}{|| {w}-({v}_{1}^{T} {w} ){v}_{1} ||_{2}}$$</p> <p>Now, Trefethen says, $||A||=\sigma$, so $||Av_{2}||\le \sigma$ but this must be an equality (and so $v_{2}$ is another singular vector relative to $\sigma$), since otherwise we would have $||Aw||&lt;\sigma$, in contrast with the hypothesis.</p> <p>How is that? I cannot see any elementary application of the triangle inequality or the Schwarz inequality to prove this claim.</p> <p>I am pretty well convinced of partial non-uniqueness of the SVD in certain situations. Other proofs are feasible, but I wish to understand this specific algebraic step of this specific proof.</p> <p>Thanks.</p>
Dubious
32,119
<p>This is not an answer but too long for a comment. Suppose that $||Av_2||=\sigma$, then:</p> <p>$$\sigma^2=||A(w)||^2=\left&lt;A(w),A(w)\right&gt;=\left&lt;aA(v_1)+bA(v_2),aA(v_1)+bA(v_2)\right&gt;=$$ $$=a^2||A(v_1)||^2+b^2||A(v_2)||^2+2ab\left&lt;A(v_1),A(v_2)\right&gt;=\sigma^2+2ab\left&lt;A(v_1),A(v_2)\right&gt;$$</p> <p>But this is true when $A(v_1)$ and $A(v_2)$ are orthogonal; does this fact help?</p>
2,900,294
<p>I tried this and I only got $\sin( 53^\circ)= \sin( 127^\circ).$ How do I find the equal value in cosine or tangent? Please help me out. Thank you!</p>
Jason Kim
570,340
<p>We can use the identity $\sin x^\circ = \cos {(90^\circ-x^\circ)}$ so $\sin{53^\circ}=\sin{127^\circ}=\cos{(127^\circ-90^\circ)}=\cos{37^\circ}.$</p> <p>I can't think of a simple way to express it in terms of tangent.</p>
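These identities are easy to confirm numerically (a small sketch of my own, not part of the original answer):

```python
import math

s53 = math.sin(math.radians(53))
assert math.isclose(s53, math.sin(math.radians(127)))  # sin 53° = sin 127°
assert math.isclose(s53, math.cos(math.radians(37)))   # sin 53° = cos 37°
```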
864,237
<p>Let's take a short exact sequence of groups $$1\rightarrow A\rightarrow B\rightarrow C\rightarrow 1$$ I understand what it says: the image of each homomorphism is the kernel of the next one, so the one between $A$ and $B$ is injective and the one between $B$ and $C$ is surjective. I get it. But other than being a sort of curiosity, what is it really telling me?</p>
jxnh
132,834
<p>As Jessica said in a previous answer, the set up initially tells us that in some sense <span class="math-container">$C = B/A$</span>. To see this, since <span class="math-container">$A \to B$</span> is injective, and has image precisely the kernel of some map, we may identify <span class="math-container">$A \unlhd B$</span> as a normal subgroup. And this is precisely the kernel of a surjective map <span class="math-container">$B \to C$</span> so by the first isomorphism theorem, <span class="math-container">$C = B/A$</span>.</p> <p>However, the situation quickly becomes more interesting than this. For example, this situation clearly arises when we have a direct product of groups <span class="math-container">$B = A \times C$</span>. Motivated by the simplicity of this case, we say that the sequence splits if <span class="math-container">$B \cong A \times C$</span>. For a non-abelian group, this happens if and only if we can find a map <span class="math-container">$B \to A$</span> (think projection) such that the composition <span class="math-container">$A \to B \to A$</span> is the identity automomorphism on <span class="math-container">$A$</span>. For an abelian group, we can say even more that the sequence also splits if we can find a map <span class="math-container">$C \to B$</span> such that <span class="math-container">$C \to B \to C$</span> is the identity. This happens more generally in abelian categories like the modules over a given ring. <a href="http://en.wikipedia.org/wiki/Splitting_lemma" rel="nofollow noreferrer">This Wikipedia page has more info...</a></p> <p>In general we also care about such questions because in algebra and related topics we often find ourselves dealing with so-called long exact sequences of forms</p> <p><span class="math-container">$$... \to A_{-1} \to A_0 \to A_1 \to A_2 \to... $$</span></p> <p>Where we know some of the elements of the sequence and some of the maps and would like to identify what the others are. 
In general any such sequence may be broken down into a number of short exact sequences, and in the best case we may split these sequences to nicely describe an unknown term in terms of the kernel and image of the surrounding maps. The first example that most see in my experience is the <a href="http://en.wikipedia.org/wiki/Mayer-Vietoris_sequence" rel="nofollow noreferrer">Mayer Vietoris sequence</a> which arises in algebraic topology as a way to compute algebraic invariants of a space in terms of those of simpler subspaces.</p>
3,682,661
<p>I need help with Part (A) without using L'Hôpital's rule, because that approach gets too lengthy. Can someone help me obtain a solution with series, without using L'Hôpital's rule?</p> <p>I'm trying something out with series. </p> <blockquote> <p><a href="https://i.stack.imgur.com/FC6Xs.jpg" rel="nofollow noreferrer">Question Image here</a></p> </blockquote> <p><a href="https://i.stack.imgur.com/qjBQ0.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qjBQ0.jpg" alt="enter image description here"></a></p>
Phicar
78,870
<p>Let <span class="math-container">$\begin{bmatrix}x_1 &amp; x_2\\x_3 &amp; x_4\end{bmatrix}\in SU(2),$</span> then <span class="math-container">$x_1\cdot x_4-x_3\cdot x_2=1$</span> because the determinant is <span class="math-container">$1$</span> (this you did not mention) and <span class="math-container">$$\begin{bmatrix}\overline{x_1} &amp; \overline{x_3}\\\overline{x_2} &amp; \overline{x_4}\end{bmatrix}=\begin{bmatrix}x_4 &amp; -x_2\\-x_3 &amp; x_1\end{bmatrix}$$</span> so <span class="math-container">$x_1=\overline{x_4}$</span> and <span class="math-container">$\overline{x_2}=-x_3.$</span> Notice that the other <span class="math-container">$2$</span> equations are just manipulations of these two. Check that the left-hand side is the adjoint of the matrix and the right-hand side is its inverse.</p>
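These relations pin down the general element of $SU(2)$ as $\begin{bmatrix}x_1 &amp; x_2\\ -\overline{x_2} &amp; \overline{x_1}\end{bmatrix}$ with $|x_1|^2+|x_2|^2=1$. A small numeric sketch of my own (with arbitrary sample values, not part of the original answer) confirming that any such matrix is unitary with determinant $1$:

```python
import cmath
import math

# Pick any x1, x2 with |x1|² + |x2|² = 1 (hypothetical sample values).
r = 0.3
x1 = math.sqrt(1 - r) * cmath.exp(0.7j)
x2 = math.sqrt(r) * cmath.exp(-1.2j)
U = [[x1, x2], [-x2.conjugate(), x1.conjugate()]]

# Determinant is 1:
det = U[0][0] * U[1][1] - U[0][1] * U[1][0]
assert abs(det - 1) < 1e-12

# Adjoint equals inverse, i.e. U†U = I:
for i in range(2):
    for j in range(2):
        entry = sum(U[k][i].conjugate() * U[k][j] for k in range(2))
        assert abs(entry - (1 if i == j else 0)) < 1e-12
```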
1,725,084
<p>I am currently trying to practice the technique of transfinite induction with the following problem: </p> <p>Suppose that $X$ is a non-empty subset of an ordinal $\alpha$, so that $X$ is well-ordered by $\in$. Show that $\text{type}(X; \in) \leq \alpha$. </p> <hr> <p>My approach thus far: </p> <p>Let $\beta = \text{type}(X; \in)$ and $f: X \rightarrow \beta$ be an order-preserving isomorphism. Now we show that $f(\xi) \leq \xi$ for all $\xi \in X$ by transfinite induction. </p> <p>Base Case: Let $\xi_{0} \in X$ be minimal with respect to $\in$ in $X$. As $f$ preserves order, it must be the case that $f(\xi_{0}) = \emptyset$ and so $f(\xi_{0}) \leq \xi_{0}$. </p> <p>Inductive Step: Suppose that $f(\xi) \leq \xi$ for all $\xi &lt; \gamma$ for some $\gamma$. Now we deduce that $f(\gamma) \leq \gamma$. </p> <blockquote> <p>My question is how to prove this crucial step $f(\gamma) \leq \gamma$. </p> </blockquote> <p>After proving this, then by transfinite induction we have that $f(\xi) \leq \xi$ for all $\xi \in X$ and so $\beta = f(X) \subseteq \alpha$ and so $\text{type}(X; \in) = \beta \leq \alpha$ as desired. </p>
Brian M. Scott
12,042
<p>It’s easier if you replace $f$ by its inverse and work with an order-isomorphism $f:\beta\to X$. Then $f$ is an order-isomorphism of $\beta$ into $\alpha$, and you want to show that $\xi\le f(\xi)$ for each $\xi\in\beta$. If not, let $\gamma\in\beta$ be minimal such that $f(\gamma)&lt;\gamma$. Then by the minimality of $\gamma$ and the fact that $f$ is order-preserving you have</p> <p>$$f(\gamma)\le f\big(f(\gamma)\big)&lt;f(\gamma)\;,$$</p> <p>which is absurd. Hence $\xi\le f(\xi)$ for all $\xi\in\beta$.</p>
1,676,505
<p>Let $f:[0,1]\times[0,1]\to \mathbb R$, $$f(x,y)= \begin{cases} \frac1q+\frac1n, &amp; \text{if $(x,y)=(\frac mn,\frac pq) \in \Bbb Q\times\Bbb Q,$ $ (m,n)=1=(p,q)$ } \\ 0, &amp; \text{if $x$ or $y$ irrational$ $ or $0,1$} \end{cases} $$</p> <p>Prove that f is integrable over $R=[0,1]\times[0,1]$ and find the value of the integral (I know its value is zero, because every lower sum is zero).</p> <p>I'm trying to find the set of discontinuities of $f$ over $R$ and prove that it has measure zero, so that $f$ is integrable.</p> <p>I remember doing this for the one dimensional case (Thomae´s function), proving that $f$ was continuous over the irrationals and discontinuous over the rationals, but I can't prove it this time, so I need some help, it will be really appreciated.</p>
Julián Aguirre
4,791
<p>If <span class="math-container">$a,b\in[0,1]\setminus\mathbb{Q}$</span>, then <span class="math-container">$f$</span> is continuous in <span class="math-container">$(a,b)$</span>. Given <span class="math-container">$\epsilon&gt;0$</span>, the number of rationals in <span class="math-container">$[0,1]$</span> with denominator (when written as <span class="math-container">$p/q$</span> an irreducible fraction) at most <span class="math-container">$1/\epsilon$</span> is finite. Then there exists <span class="math-container">$\delta&gt;0$</span> such that if <span class="math-container">$|x-a|,|y-b|&lt;\delta$</span>, then <span class="math-container">$f(x,y)=0$</span> (if one of <span class="math-container">$x$</span> or <span class="math-container">$y$</span> is irrational) or less than <span class="math-container">$2\,\epsilon$</span> (if both <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are rational). In any case, <span class="math-container">$|f(x,y)-f(a,b)|\le2\,\epsilon$</span>. The set of discontinuities of <span class="math-container">$f$</span> is contained in <span class="math-container">$$ D=\bigcup_{r\in\mathbb Q}\Bigl(\{r\}\times[0,1]\cup[0,1]\times\{r\}\Bigr). $$</span> <span class="math-container">$D$</span> is the countable union of sets of measure <span class="math-container">$0$</span>, so it also has measure <span class="math-container">$0$</span>.</p>
2,476,688
<p>The Homework Exercise I am working on, is:</p> <blockquote> <p>Let $\overrightarrow{a}, \overrightarrow{b}$ be vectors. Show that $\overrightarrow{a} \cdot \overrightarrow{b}=\frac{1}{4}\left(\Vert{\overrightarrow{a}+\overrightarrow{b}\Vert^2}-\Vert{\overrightarrow{a}-\overrightarrow{b}\Vert^2}\right)$.</p> </blockquote> <p>Things I tried:</p> <ol> <li>Using the property $\Vert \overrightarrow{u}\Vert^2=\overrightarrow{u} \cdot \overrightarrow{u}$, I tried to make $\Vert{\overrightarrow{a}+\overrightarrow{b}\Vert^2} =\left(\overrightarrow{a}+\overrightarrow{b} \right) \cdot \left(\overrightarrow{a}+\overrightarrow{b} \right)$ and $\Vert{\overrightarrow{a}-\overrightarrow{b}\Vert^2} =\left(\overrightarrow{a}-\overrightarrow{b} \right) \cdot \left(\overrightarrow{a}-\overrightarrow{b} \right)$</li> </ol> <p>I'm not sure if that was a correct step, but then I substituted into the main equation to get $$ \overrightarrow{a} \cdot \overrightarrow{b} = \frac{1}{4}\left( (\overrightarrow{a}+\overrightarrow{b} ) \cdot (\overrightarrow{a}+\overrightarrow{b} ) - (\overrightarrow{a}-\overrightarrow{b} ) \cdot (\overrightarrow{a}-\overrightarrow{b} ) \right) $$</p> <p>Am not sure where to go from here, or if I'm even heading in the right direction...</p>
5xum
112,884
<p>Use the fact that $$\vec a\cdot (\vec b + \vec c) = \vec a \cdot \vec b + \vec a\cdot \vec c$$</p> <p>and write it all out. A lot of things should cancel out.</p>
2,476,688
Bernard
202,857
<p>Just expand the dot products by distributivity in the right-hand side: \begin{align} (\vec a+\vec b)\cdot(\vec a+\vec b)-(\vec a-\vec b)\cdot(\vec a-\vec b)&amp;=\begin{aligned}[t]\vec a\cdot\vec a&amp;+\vec a\cdot\vec b+\vec b\cdot\vec a+\vec b\cdot\vec b\\ &amp;-\vec a\cdot\vec a+\vec a\cdot\vec b+\vec b\cdot\vec a-\vec b\cdot\vec b \end{aligned}\\ &amp;=2\vec a\cdot\vec b+2\vec b\cdot\vec a=4\vec a\cdot\vec b \end{align}</p>
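The cancellation can also be seen numerically; a small sketch of my own (with arbitrary sample vectors, not part of the original answer) checking the polarization identity:

```python
# Arbitrary sample vectors (dyadic entries, so the check is exact in floats).
a = [1.5, -2.0, 3.25]
b = [0.5, 4.0, -1.75]

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

plus = [x + y for x, y in zip(a, b)]
minus = [x - y for x, y in zip(a, b)]

lhs = dot(a, b)
rhs = 0.25 * (dot(plus, plus) - dot(minus, minus))
assert abs(lhs - rhs) < 1e-12
```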
2,267,935
<p>There is a fibration $SO(n-1) \mapsto SO(n) \mapsto S^{n-1}$, from basically taking the first column of the matrix in $\mathbb{R}^n$. Is this fibration trivializable? </p>
Tsemo Aristide
280,301
<p>If $n=3$, the Hopf fibration is the composition $S^3=Spin(3)\rightarrow SO(3)\rightarrow S^2$, so if $SO(3)\rightarrow S^2$ were trivial, the Hopf fibration would be trivial as well, and this is not true.</p> <p><a href="https://en.wikipedia.org/wiki/Hopf_fibration#Geometric_interpretation_using_rotations" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Hopf_fibration#Geometric_interpretation_using_rotations</a></p>
1,370,576
<p>I am working on a trigonometry question at the moment and am very stuck. I have looked through all the tips to solving it and I cant seem to come up with the right answer. The problem is </p> <blockquote> <p>What is exact value of<br> $$\cot \left(\frac{7\pi}{6}\right)? $$</p> </blockquote>
Zain Patel
161,779
<p>We have $$\cot \left(\frac{7\pi}{6}\right) = \frac{1}{\tan \left(\frac{7\pi}{6}\right)} = \frac{\cos \left(\frac{7\pi}{6}\right)}{\sin \left(\frac{7\pi}{6}\right)} \equiv \frac{-\cos \left(\frac{\pi}{6}\right)}{-\sin \left(\frac{\pi}{6}\right)} = \sqrt{3}$$</p> <p>You can easily see this using a "CAST" diagram to reduce $\cos \left(\frac{7\pi}{6}\right)$and $\sin \left(\frac{7\pi}{6}\right)$ to standard results.</p> <hr> <p>As per @Scientifica's comment, an easier method would be to simply note that $\tan$ is $\pi$-periodic so that shifting its argument by $\pi$ will yield no change to it's value, or: $$\cot \left(\frac{7\pi}{6}\right) = \frac{1}{\tan \left(\frac{7\pi}{6}\right)} = \frac{1}{\tan \left(\frac{7\pi}{6} - \pi\right)} = \frac{1}{\tan \left(\frac{\pi}{6}\right)} = \sqrt{3}$$</p>
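Both routes are easy to confirm numerically (a small sketch of my own, not part of the original answer):

```python
import math

# Direct evaluation of cot(7π/6):
assert math.isclose(1 / math.tan(7 * math.pi / 6), math.sqrt(3))

# Using the π-periodicity of tan, as in the second method:
assert math.isclose(1 / math.tan(7 * math.pi / 6 - math.pi), math.sqrt(3))
```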
149,830
<p>As we know, the eigenvalues of a real symmetric matrix are always real numbers (and its eigenvectors can be chosen real). I am trying to use <em>Mathematica</em> to verify this. Suppose I have a matrix <code>A</code></p> <pre><code>A={{6585, 7579, 6717}, {7579, 11002, 12324}, {6717, 12324, 17030}} Eigenvalues[A] </code></pre> <blockquote> <p><code>{Root[-13826467396+117514474 #1-34617 #1^2+#1^3&amp;,3],Root[-13826467396+117514474 #1-34617 #1^2+#1^3&amp;,2],Root[-13826467396+117514474 #1-34617 #1^2+#1^3&amp;,1]}</code></p> </blockquote> <p>But I really don't like those <code>Root</code> objects, so I try to simplify them to radical expressions.</p> <pre><code>Simplify /@ ToRadicals /@ Eigenvalues[A] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/CbKYv.png" alt=""></p> </blockquote> <p>The result includes <code>I</code>. Since the eigenvalues are real numbers, could we express them by radical expressions without <code>I</code>?</p>
eldo
14,254
<p>In other situations you might have a look at <code>Map</code> (<code>/@</code>)</p> <p>Vector of value pairs</p> <pre><code>m1 = RandomInteger[{1, 10}, {10, 2}]; </code></pre> <p>Matrix of value pairs</p> <pre><code>m2 = Partition[m1, 5]; </code></pre> <p>Some function</p> <pre><code>f[{x_, y_}] := x*y </code></pre> <p>Plot vector</p> <pre><code>ListLogPlot[f /@ m1, Joined -&gt; True, AxesOrigin -&gt; {1, 0}] </code></pre> <p><a href="https://i.stack.imgur.com/RDh7f.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RDh7f.jpg" alt="enter image description here"></a></p> <p>Plot matrix</p> <pre><code>ListLogPlot[Map[f, m2, {2}], Joined -&gt; True, AxesOrigin -&gt; {1, 0}] </code></pre> <p><a href="https://i.stack.imgur.com/feWDt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/feWDt.jpg" alt="enter image description here"></a></p>
2,837,172
<p>In complex analysis, sometimes we need to use some theorems which are results of measure theory. However, I know very very little about measure theory. So</p> <blockquote> <p>What are some very basic results of measure theory on complex functions/complex plane/complex calculus?</p> </blockquote> <p>I expect the answers to be like:</p> <ol> <li>... is always measurable.</li> <li>... is always integrable.</li> <li>For theorems considering a measure space, we can choose it to be ...</li> <li>...(anything that is worth mentioning)</li> </ol> <p>For instance, I always suspected all harmonic functions are measurable, but don’t know how to prove it.</p> <p>Another example is, I failed to apply Dominated convergence theorem rigorously. On the Wikipedia page, we need to consider $\{f_n\}$ a sequence of measurable real functions on a measure space $(S,\Sigma,\mu)$. What $\{f_n\}$ can I choose if the function is complex? What measure space should I consider?</p> <p>I hope I have provided enough context and my question is not too broad.</p> <p>Thanks for any help in advance.</p>
saulspatz
235,128
<p>Continuous functions are measurable. All the single-valued functions you'll see in complex variables are measurable. In particular, harmonic functions are measurable. </p> <p>As for convergence theorems, I can't think of any but the dominated convergence theorem that are likely to apply. (Way back when I took complex variables, we used Ahlfors, which doesn't use measure theory, so I'm guessing, but I think this is right.)</p> <p>As for what sequences of functions to pick, you probably want a sequence of analytic functions. To apply the dominated convergence theorem, it would be enough that they are bounded in modulus by a fixed integrable function: $|f_n(z)|\le g(z)$ where $f_n\to f$ pointwise, and $g$ is integrable on the domain in question. Or you could apply the theorem to the real and imaginary parts separately.</p> <p>The measure space will be the domain of the functions, with Lebesgue measure. You don't have to worry about that too much. </p>
3,454,682
<p>I know that <span class="math-container">${\langle x, y \rangle}$</span> means the inner product but I've stumbled upon the notation <span class="math-container">${\langle x, y \rangle}_a$</span> with <span class="math-container">$a \in \mathbb{R}$</span> and I can't figure out what it means. Usually what's in the subscript isn't a number, but the denotion of some vector space, e.g. <span class="math-container">$V$</span>.</p> <p>The context is this problem from an exam in the introductory course in linear algebra at our university:</p> <p>Let <span class="math-container">${\langle,\rangle}_1$</span> and <span class="math-container">${\langle,\rangle}_2$</span> be two inner product structures on a finite-dimensional vector space <span class="math-container">$V$</span>. Show that there is a linear map <span class="math-container">$T: V \to V$</span> such that</p> <p><span class="math-container">${\langle x,y \rangle}_1$</span> = <span class="math-container">${\langle T(x), y \rangle}_2$</span></p> <p>for all <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$V$</span>.</p>
J.G.
56,861
<p>The answer @Masacroso gave is probably right. I should mention subscripts on either side may instead be labels for the vectors themselves. However, this is usually used only in <a href="https://en.wikipedia.org/wiki/Bra%E2%80%93ket_notation" rel="nofollow noreferrer">bra–ket notation</a>, where we replace the comma with a pipe. In other words, <span class="math-container">$\langle x|y\rangle_a$</span> could be <span class="math-container">$\langle x|z\rangle$</span> with <span class="math-container">$|y\rangle_a:=|z\rangle$</span>.</p>
156,376
<p>I understand that when we are doing indefinite integrals on the real line, we have $\int f(x) dx = g(x) + C$, where $C$ is some constant of integration. </p> <p>If I do an integral from $\int f(x) dx$ on $[0,x]$, then is this considered a definite integral? Can I just leave out the constant of integration now? I am skeptical of the fact that this is a definite integral, because our value $x$ is still a variable. </p>
robjohn
13,854
<p>$\int f(x)\,\mathrm{d}x$ is an <a href="http://en.wikipedia.org/wiki/Antiderivative">antiderivative</a>. It represents any function whose derivative with respect to $x$ is $f(x)$.</p> <p>$\int_0^af(x)\,\mathrm{d}x$ is a <a href="http://en.wikipedia.org/wiki/Integral#Terminology_and_notation">definite integral</a>, and for any of the antiderivatives $g(x)=\int f(x)\,\mathrm{d}x$ (which incorporate a constant of integration), $$ \int_0^af(x)\,\mathrm{d}x=g(a)-g(0) $$ For example, $$ \int x^3\,\mathrm{d}x=\frac14x^4+C $$ for some constant $C$, and $$ \begin{align} \int_0^ax^3\,\mathrm{d}x &amp;=\left(\frac14a^4+C\right)-\left(\frac140^4+C\right)\\ &amp;=\frac14a^4 \end{align} $$ no matter which $C$ is chosen.</p> <p>In the case above, $\int_0^xf(x)\,\mathrm{d}x$, there is confusion because the same variable is used inside the integration as in the bounds. The <a href="http://en.wikipedia.org/wiki/Bound_variable">bound variable</a> $x$ inside the integral is not the same as the <a href="http://en.wikipedia.org/wiki/Free_variables_and_bound_variables">free variable</a> $x$ in the limit. To reduce the confusion, your integral can also be written as $\int_0^xf(t)\,\mathrm{d}t$ by renaming the bound variable. In any case, this is a definite integral.</p>
1,714,278
<p>Given the sequence $ y_{k}=2^k\tan(\frac{\pi}{2^k})$ for k=2,3,.. prove that $ y_{k} $ is recursively produced by the algorithm: $$ y_{k+1}=2^{2k+1}\frac{\sqrt{1+(2^{-k}y_{k})^2}-1}{y_{k}} $$ for k=2,3,...</p> <p>I used the identity $ {\tan^2({a})}=\frac{1-\cos{(2a)}}{1+\cos{(2a)}}$ but I couldn't get it right. Any help appreciated.</p>
Noble Mushtak
307,483
<p>Let's try plugging in the explicit formula for $y_k$ into the recursive formula and seeing if we get the explicit formula for $y_{k+1}$ back.</p> <p>$$y_{k+1}=2^{2k+1}\frac{\sqrt{1+\left(2^{-k}2^k\tan\left(\frac{\pi}{2^k}\right)\right)^2}-1}{2^k\tan\left(\frac{\pi}{2^k}\right)}$$</p> <p>Cancel out the $2^{2k+1}$ out front and the $2^k$ in the denominator. Also, get rid of the $2^{-k}2^k$ under the radical in the numerator.</p> <p>$$y_{k+1}=2^{k+1}\frac{\sqrt{1+\left(\tan\left(\frac{\pi}{2^k}\right)\right)^2}-1}{\tan\left(\frac{\pi}{2^k}\right)}$$</p> <p>Let $\alpha=\frac{\pi}{2^k}$. It is well-known that $1+\tan(\alpha)^2=\frac{1}{\cos(\alpha)^2}$, so by taking the square root of both sides, we get $\sqrt{1+\tan(\alpha)^2}=\left|\frac{1}{\cos(\alpha)}\right|$. However, since $0 &lt; \alpha &lt; \frac{\pi}{2}$, $\cos(\alpha)$ is positive and the absolute value signs are unnecessary. Substitute.</p> <p>$$y_{k+1}=2^{k+1}\frac{\frac{1}{\cos\left(\frac{\pi}{2^k}\right)}-1}{\tan\left(\frac{\pi}{2^k}\right)}$$</p> <p>Multiply both the numerator and denominator by $\cos\left(\frac{\pi}{2^k}\right)$.</p> <p>$$y_{k+1}=2^{k+1}\frac{1-\cos\left(\frac{\pi}{2^k}\right)}{\sin\left(\frac{\pi}{2^k}\right)}$$</p> <p>Now, let $\beta=\frac{\pi}{2^{k+1}}$ so that $\alpha=2\beta$. It is well-known that $\frac{1-\cos(2\beta)}{\sin(2\beta)}=\tan(\beta)$, so substitute.</p> <p>$$y_{k+1}=2^{k+1}\tan\left(\frac{\pi}{2^{k+1}}\right)$$</p> <p>Thus, we have simplified the recursive formula into the explicit formula for $y_{k+1}$, concluding the proof.</p>
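The proven recursion can be exercised numerically; here is a sketch of my own (not part of the original answer; note that the cancellation in $\sqrt{1+t^2}-1$ limits the usable range of $k$ in floating point). Incidentally, since $y_k = 2^k\tan(\pi/2^k) \to \pi$, this is a variant of Archimedes' algorithm for $\pi$:

```python
import math

def explicit(k):
    return 2 ** k * math.tan(math.pi / 2 ** k)

def step(k, yk):
    return 2 ** (2 * k + 1) * (math.sqrt(1 + (2.0 ** -k * yk) ** 2) - 1) / yk

y = explicit(2)  # y₂ = 4·tan(π/4) = 4
for k in range(2, 15):
    y = step(k, y)
    assert abs(y - explicit(k + 1)) < 1e-6  # recursion matches the formula

# The sequence decreases to π.
assert abs(explicit(25) - math.pi) < 1e-8
```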
4,004,827
<p>I need to calculate: <span class="math-container">$$\displaystyle \lim_{x \to 0^+} \frac{3x + \sqrt{x}}{\sqrt{1- e^{-2x}}}$$</span></p> <p>It looks like I need to use the common limit: <span class="math-container">$$\displaystyle \lim_{x \to 0} \frac{e^x-1}{x} = 1$$</span></p> <p>So I take the following steps:</p> <p><span class="math-container">$$\displaystyle \lim_{x \to 0^+} \frac{3x + \sqrt{x}}{\sqrt{1- e^{-2x}}} = \displaystyle \lim_{x \to 0^+} \frac{-3x - \sqrt{x}}{\sqrt{e^{-2x} - 1}}$$</span></p> <p>And I need to eliminate the root in the denominator and make the numerator equal to $-2x$, but I don't know how.</p>
Raffaele
83,382
<p><span class="math-container">$$\sqrt{1-e^{-2 x}}\sim \sqrt{2} \sqrt{x};\;\text{as }x\to 0$$</span></p> <p><span class="math-container">$$\lim_{x \to 0^+} \frac{3x + \sqrt{x}}{\sqrt{2} \sqrt{x}}=\lim_{x \to 0^+}\left(\frac{3}{\sqrt 2}\sqrt x +\frac{1}{\sqrt 2} \right)=\frac{1}{\sqrt 2} $$</span></p>
1,073,628
<p>I am trying to find generating functions which will give me a power of a logarithm. </p> <p>I am trying to find generating sums of the form</p> <p>$$\sum_{n=1}^{\infty} a_n\,x^n = -\frac{\log^2(1-x)}{1-x}$$</p> <p>or </p> <p>$$\sum_{n=1}^{\infty} a_n\,x^n = \frac{\log^2(x)}{x}.$$</p> <p>Something which will return $\log^3$ in the end. </p> <p>Help is required! </p> <p>Thanks</p>
Felix Marin
85,343
<p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\ceil}[1]{\,\left\lceil\, #1 \,\right\rceil\,} \newcommand{\dd}{{\rm d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\dsc}[1]{\displaystyle{\color{red}{#1}}} \newcommand{\expo}[1]{\,{\rm e}^{#1}\,} \newcommand{\fermi}{\,{\rm f}} \newcommand{\floor}[1]{\,\left\lfloor #1 \right\rfloor\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{{\rm i}} \newcommand{\iff}{\Longleftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\Li}[1]{\,{\rm Li}_{#1}} \newcommand{\norm}[1]{\left\vert\left\vert\, #1\,\right\vert\right\vert} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\pp}{{\cal P}} \newcommand{\root}[2][]{\,\sqrt[#1]{\vphantom{\large A}\,#2\,}\,} \newcommand{\sech}{\,{\rm sech}} \newcommand{\sgn}{\,{\rm sgn}} \newcommand{\totald}[3][]{\frac{{\rm d}^{#1} #2}{{\rm d} #3^{#1}}} \newcommand{\ul}[1]{\underline{#1}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ With $\ds{\verts{x}\ &lt;\ 1}$:</p> <blockquote> <p>\begin{align} \pars{1 - x}^{\mu}&amp;=\sum_{n\ =\ 0}^{\infty}{\mu \choose n}\pars{-x}^{n}\quad\imp\quad \left\{\begin{array}{lcl} \pars{1 - x}^{\mu}\ln^{2}\pars{1 - x} &amp;=&amp;\sum_{n\ =\ 0}^{\infty}\partiald[2]{{\mu \choose n}}{\mu}\,\pars{-x}^{n} \\ \pars{1 - x}^{\mu}\ln^{3}\pars{1 - x} &amp;=&amp;\sum_{n\ =\ 0}^{\infty}\partiald[3]{{\mu \choose n}}{\mu}\,\pars{-x}^{n} \end{array}\right. 
\end{align}</p> </blockquote> <p>It leads to: \begin{align} {\ln^{2}\pars{1 - x} \over 1 - x} &amp;=\sum_{n\ =\ 0}^{\infty}\bracks{\pars{-1}^{n}\lim_{\mu\ \to\ -1}\partiald[2]{{\mu \choose n}}{\mu}}x^{n} =\sum_{n\ =\ 0}^{\infty}\bracks{\color{#00f}{\lim_{\mu\ \to\ 0} \partiald[2]{{\mu + n\choose n}}{\mu}}}x^{n} \\[5mm] \ln^{3}\pars{1 - x} &amp;=\sum_{n\ =\ 0}^{\infty}\bracks{\dsc{\pars{-1}^{n}\lim_{\mu\ \to\ 0}\partiald[3]{{\mu \choose n}}{\mu}}}x^{n} \end{align}</p> <blockquote> <p>Limits can be expressed in terms of ${\sf Gamma}$ and ${\sf PolyGamma}$ functions.</p> </blockquote>
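The coefficient extraction can be cross-checked in exact rational arithmetic: the $n$-th Taylor coefficient of $\ln^2(1-x)/(1-x)$ should equal $\lim_{\mu\to 0}\partial_\mu^2\binom{\mu+n}{n}$, and since $\binom{\mu+n}{n}=(\mu+1)\cdots(\mu+n)/n!$ is a polynomial in $\mu$, the derivative is just twice its $\mu^2$ coefficient. A rough Python sketch (the truncation order $N$ is an arbitrary choice):

```python
from fractions import Fraction
from math import factorial

N = 6
# series of log(1-x) = -sum_{k>=1} x^k/k, truncated at x^N
log1mx = [Fraction(0)] + [Fraction(-1, k) for k in range(1, N + 1)]
# square it (Cauchy product), then multiply by 1/(1-x) via running sums
sq = [sum(log1mx[i] * log1mx[n - i] for i in range(n + 1)) for n in range(N + 1)]
coeffs, run = [], Fraction(0)
for c in sq:
    run += c
    coeffs.append(run)          # coeffs[n] = [x^n] log(1-x)^2 / (1-x)

def d2_binom(n):
    """Second mu-derivative at mu = 0 of C(mu+n, n) = (mu+1)...(mu+n)/n!."""
    poly = [Fraction(1)]        # polynomial in mu, constant term first
    for k in range(1, n + 1):   # multiply by (mu + k)
        nxt = [Fraction(0)] * (len(poly) + 1)
        for i, c in enumerate(poly):
            nxt[i] += k * c
            nxt[i + 1] += c
        poly = nxt
    mu2 = poly[2] if len(poly) > 2 else Fraction(0)
    return 2 * mu2 / factorial(n)   # f''(0) = 2 * [mu^2] f
```

For instance the $x^4$ coefficient on both sides is $35/12$.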
789,407
<p>If the roots of the equation $$ax^2-bx+c=0$$ lie in the interval $(0,1)$, find the minimum possible value of $abc$. </p> <p><strong>Edit:</strong> I forgot to mention in the question that $a$, $b$, and $c$ are natural numbers. Sorry for the inconvenience.<br> <strong>Edit 2:</strong> As Hagen von Eitzen said about the double roots not allowed, I forgot to mention that too. Extremely sorry :(</p> <blockquote> <p>I tried to use $D &gt; 0$, where $D$ is the discriminant but I can't further analyze in terms of the coefficients. Thanks in advance!</p> </blockquote>
MathGod
101,387
<p><strong>Given:</strong> Roots lie in $(0,1).$ </p> <p>Let $f(x)=ax^2-bx+c$ and its roots be $\alpha$ and $\beta$. </p> <p>$\implies f(0) \times f(1) &gt; 0$ (Can be easily verified from the parabolic graph of $f(x)$)</p> <p>or $c(a-b+c)&gt;0$ </p> <p>$\implies \frac{c}{a}(1-\frac{b}{a}+\frac{c}{a})&gt;0$ </p> <p>$\implies \alpha \beta (1-(\alpha + \beta) + \alpha\beta) &gt; 0$ (Using <strong>Vieta's Formula</strong>) </p> <p>$\implies \alpha\beta(1-\alpha)(1-\beta) &gt; 0$ </p> <p><strong>Now</strong>, </p> <p>Consider $\alpha(1-\alpha)$. By the <strong>AM-GM</strong> inequality, </p> <p>$\alpha(1-\alpha)\le\frac{1}{4}$ </p> <p>Similarly, </p> <p>$\beta(1-\beta)\le \frac{1}{4}$ </p> <p>Multiplying the above two inequalities (the product is strict, since the roots are distinct and so $\alpha$ and $\beta$ cannot both equal $\frac12$), we get </p> <p>$\alpha\beta(1-\alpha)(1-\beta)&lt;\frac{1}{16}$ </p> <p>$\implies \frac{c}{a}(1-\frac{b}{a}+\frac{c}{a})&lt;\frac{1}{16}$ </p> <p>$\implies c(a-b+c)&lt;\frac{a^2}{16}$ </p> <p>Since $a$, $b$, $c$ are natural numbers and $c(a-b+c)&gt;0$, we have $c \ge 1$ and $a-b+c \ge 1$, hence $c(a-b+c) \ge 1$. Therefore </p> <p>$\frac{a^2}{16}&gt;1$ </p> <p>$a&gt;4$, so the minimum possible value is $a=5$ </p> <p>Since $D &gt; 0$, we need $b^2-4ac &gt; 0$ (where $D$ is the discriminant of $f(x)=0$); with $a=5$ and $c=1$ this forces $b^2 &gt; 20$, i.e. $b \ge 5$ </p> <p>For $a=b=5$ and $c=1$ the polynomial $f(x)=5x^2-5x+1$ has $D=5&gt;0$ and roots $\frac{5\pm\sqrt5}{10}$, both in $(0,1)$; thus at $a=b=5$ and $c=1$ the <strong>minimum value of $abc=25$</strong> is achieved.</p>
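A quick computational sanity check of the extremal case, plus a brute-force search over small natural numbers (a rough sketch; the search bound suffices because any candidate with $abc$ smaller than $25$ must have each of $a,b,c$ smaller than $25$):

```python
import math

a, b, c = 5, 5, 1
disc = b * b - 4 * a * c               # = 5 > 0: two distinct real roots
r1 = (b - math.sqrt(disc)) / (2 * a)
r2 = (b + math.sqrt(disc)) / (2 * a)   # roots (5 -/+ sqrt 5)/10, both in (0, 1)

# brute force over natural numbers: record the smallest product abc for which
# ax^2 - bx + c has two distinct roots strictly inside (0, 1)
best = None
for aa in range(1, 25):
    for bb in range(1, 25):
        for cc in range(1, 25):
            d = bb * bb - 4 * aa * cc
            if d <= 0:
                continue
            lo = (bb - math.sqrt(d)) / (2 * aa)
            hi = (bb + math.sqrt(d)) / (2 * aa)
            if 0 < lo and hi < 1 and (best is None or aa * bb * cc < best):
                best = aa * bb * cc
```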
3,272,738
<p>I've been trying to make sense of these two integrals; somehow the result seems intuitive, yet very hard to compute. We define</p> <p><span class="math-container">$$ f(x)=\frac{1}{4\pi}\delta(|x|-R)$$</span> and then note that <span class="math-container">$$ -\frac{1}{2}\int\!\!\int\frac{f(x)f(y)}{|x-y|}\,dx\,dy=-\frac{1}{2R}$$</span> and </p> <p><span class="math-container">$$\int \frac{f(x)}{|x-y|}\,dx=\frac{1}{|y|}\quad \text{if }|y|\geq R$$</span></p> <p>the integration is over <span class="math-container">$\mathbb{R}^3$</span>, and <span class="math-container">$\delta$</span> is the Dirac delta function. I'd appreciate any help with this.</p>
callculus42
144,421
<p>Your calculation is right. <span class="math-container">$\mathbb E(X\cdot Y)=\mathbb E_X(X\cdot \mathbb E_{Y|X}[Y])=7.63$</span></p> <p>You're right as well that <span class="math-container">$\mathbb E(X)\cdot \mathbb E(Y)=7.63$</span>. That means that <span class="math-container">$Cov(X,Y)=0$</span>.</p> <p>This is a nice example that a covariance of <span class="math-container">$0$</span> does <span class="math-container">$\texttt{not}$</span> necessarily imply independence.</p>
4,515,517
<p>Suppose that <span class="math-container">$E$</span> is a measurable set and <span class="math-container">$f: E \rightarrow [0, \infty]$</span> is a non-negative function with <span class="math-container">$\int_E f(x)^n dx = \int_E f(x) dx &lt; \infty$</span> for all positive integers <span class="math-container">$n$</span>. Show that there exists a measurable set <span class="math-container">$A \subseteq E$</span> such that <span class="math-container">$f = \chi_A$</span> a.e.</p> <p><em>My Attempt</em></p> <p>Define the measurable set <span class="math-container">$A = \{x: \liminf_{n \rightarrow \infty} f(x)^n \text{ exists } \} $</span>. Define <span class="math-container">$g(x) = \liminf_n f(x)^n$</span>; by Fatou's Lemma:</p> <p><span class="math-container">$$ \int_A \liminf_{n \rightarrow \infty} f(x)^n dx \leq \liminf_{n \rightarrow \infty} \int_A f(x)^n dx = \int_A f(x)$$</span>. Thus</p> <p><span class="math-container">$$ \int_A (f-g) \leq 0 \implies f = g \text{ a.e. } $$</span></p> <p>Comparing the limit <span class="math-container">$g(x) = \liminf_n f(x)^n = f(x)$</span> for the cases <span class="math-container">$f(x) &gt; 1$</span> and <span class="math-container">$f(x) \leq 1$</span> gives <span class="math-container">$f(x) = 1 \text{ or }0$</span>.</p> <p>I know there is an error in the proof. Guidance would be appreciated.</p>
geetha290krm
1,064,504
<p><span class="math-container">$\lim t^{n}$</span> exists for every <span class="math-container">$t \geq 0$</span>. (It may be <span class="math-container">$+\infty$</span>, of course).</p> <p>The hypothesis can be weakened to <span class="math-container">$\int_E f =\lim \int_E f^n$</span>.</p> <p>Note that <span class="math-container">$\int_E f =\lim \int_E f^n=\lim \inf \int_E f^n \geq \int_{E\cap \{f&gt;1\}} \lim \inf f^n\geq \int_{E\cap \{f&gt;1\}} \infty$</span> (by Fatou's Lemma). This proves that <span class="math-container">$\mu ( E\cap \{f&gt;1\})=0$</span>. In other words, <span class="math-container">$f \leq 1$</span> a.e. on <span class="math-container">$E$</span>. Since <span class="math-container">$f^{n} \leq f$</span> a.e. we can apply DCT to get <span class="math-container">$\int_E f=\lim \int_Ef^{n}=\int_E \lim f^{n}$</span>. Noting that <span class="math-container">$f^{n}(x) \to 0$</span> if <span class="math-container">$f(x) &lt;1$</span> we get <span class="math-container">$\int_E f=\int_{E\cap \{f=1\}} f$</span> or <span class="math-container">$\int_{E\cap \{f \neq 1\}} f=0$</span>. Thus <span class="math-container">$f=0$</span> a.e on the set <span class="math-container">$E\cap \{f \neq 1\}$</span>. In other words <span class="math-container">$f(x)\in \{0,1\}$</span> for almost all <span class="math-container">$x$</span>. I will let you check that <span class="math-container">$f=\chi_A$</span> a.e. where <span class="math-container">$A=\{x: f(x)=1\}$</span>.</p>
4,033,831
<p>Example:</p> <p><img src="https://i.stack.imgur.com/nPzJb.png" alt="Notation" /></p> <p>From this we can tell no negative real number can be the image of any element of the domain. Thus it is not surjective, because the range is not equal to the codomain, which means the function associates any real number to a positive real number only. Is it better to write it this way <span class="math-container">$f : \Bbb{R} \to \Bbb{R}^+$</span> then?</p>
user10354138
592,552
<p>It depends on how you define what a function is (i.e., whether you take the set of ordered-pairs alone a la Bourbaki, or use something like the source-target predicate).</p> <p>If you use the Bourbaki definition (i.e., <span class="math-container">$f\colon A\to B$</span> is <span class="math-container">$\{(a,f(a))\mid a\in A\}\subseteq A\times B$</span> and the Kuratowski ordered pair <span class="math-container">$(a,b)=\{\{a\},\{a,b\}\}$</span>), then the codomain <span class="math-container">$B$</span> cannot be recovered from <span class="math-container">$f$</span>.</p> <p>On the other hand, if you build the codomain into the function as in source-target predicate, then you get the absolute value <span class="math-container">$\mathbb{R}\to\mathbb{R}$</span> and <span class="math-container">$\mathbb{R}\to\mathbb{R}^+$</span> are two different functions (which are related by the inclusion <span class="math-container">$\mathbb{R}^+\to\mathbb{R}$</span>), which may or may not be what you want.</p>
849,433
<blockquote> <p>We have subspaces in $\mathbb R^4: $ </p> <p>$w_1= \operatorname{sp} \left\{ \begin{pmatrix} 1\\ 1 \\ 0 \\1 \end{pmatrix} , \begin{pmatrix} 1\\ 0 \\ 2 \\0 \end{pmatrix}, \begin{pmatrix} 0\\ 2 \\ 1 \\1 \end{pmatrix} \right\}$, $w_2= \operatorname{sp} \left\{ \begin{pmatrix} 1\\ 1 \\ 1 \\1 \end{pmatrix} , \begin{pmatrix} 3\\ 2 \\ 3 \\2 \end{pmatrix}, \begin{pmatrix} 2\\ -1 \\ 2 \\0 \end{pmatrix} \right\}$</p> <p>Find the basis of $w_1+w_2$ and the basis of $w_1\cap w_2$.</p> </blockquote> <p>So in order to find the basis for $w_1+w_2$, I need to make a $4\times 6$ matrix of all the six vectors, bring it to RREF and see which vector is LD and the basis would be the LI vectors. </p> <p>But the intersection of these 2 spans seems empty, or are they the LD vectors that I should've found before ?</p> <p>In general, how is the intersection of subspaces defined ?</p>
James S. Cook
36,530
<p>If you consider $\text{rref}(B_1|B_2)$ then you can easily ascertain which vectors in $B_2$ fall in $w_1=\text{span}(B_1)$. Likewise, $\text{rref}(B_2|B_1)$ will tell you which vectors in $B_1$ fall in $w_2=\text{span}(B_2)$. Better yet, the column relations in the reduced matrix hand you a basis for $w_1+w_2$ and, with a little rearranging, a basis for $w_1\cap w_2$ as well. </p> <p>(<em>the thought above is the general strategy as I see it; now for the particulars of your problem</em>)</p> <p>In particular, let $B_1 = \{ u_1,u_2,u_3 \}$ and $B_2 = \{v_1,v_2,v_3 \}$ then we can calculate: $$ \text{rref}[u_1,u_2,u_3|v_1,v_2,v_3] = \left[ \begin{array}{ccc|ccc} 1 &amp; 0 &amp; 0 &amp; 0 &amp; 1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 1 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; -1 \\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; 1 &amp; 1 \end{array}\right]$$ There are four pivot columns, so the six vectors span a four-dimensional space: $w_1+w_2 = \mathbb{R}^4$, and by the column correspondence theorem $\{u_1,u_2,u_3,v_1\}$ is a basis for $w_1+w_2$. The same theorem applied to the non-pivot columns gives $$ v_2 = u_1+u_2+v_1 \qquad \text{and} \qquad v_3 = u_2-u_3+v_1. $$ Rearranging, $$ v_2-v_1 = u_1+u_2 \qquad \text{and} \qquad v_3-v_1 = u_2-u_3, $$ and each of these vectors belongs to $w_1$ <strong>and</strong> to $w_2$. So $w_1 \cap w_2 \ne \{0\}$; indeed, the intersection of subspaces is itself a subspace (it always contains $0$, so it is never empty), and here $$ \dim(w_1\cap w_2) = \dim w_1 + \dim w_2 - \dim(w_1+w_2) = 3+3-4 = 2, $$ so the two (clearly independent) vectors above form a basis: $$ w_1 \cap w_2 = \text{span}\left\{ \left[ \begin{array}{c} 2 \\1 \\ 2 \\ 1 \end{array}\right], \left[ \begin{array}{c} 1 \\-2 \\ 1 \\ -1 \end{array}\right] \right\}. $$ I think it may be instructive to consider the other half of my initial suggestion. $$ \text{rref}[v_1,v_2,v_3|u_1,u_2,u_3] = \left[ \begin{array}{ccc|ccc} 1 &amp; 0 &amp; 0 &amp; 0 &amp; -1 &amp; 0 \\ 0 &amp; 1 &amp; 0 &amp; 0 &amp; 1 &amp; 1 \\ 0 &amp; 0 &amp; 1 &amp; 0 &amp; 0 &amp; -1 \\ 0 &amp; 0 &amp; 0 &amp; 1 &amp; -1 &amp; -1 \end{array}\right]$$ This calculation shows no <em>single</em> vector in $B_1$ is in the span of $B_2$ (and the first calculation shows no single vector of $B_2$ lies in $w_1$); the nontrivial intersection is built from combinations. For example, we can read that: $$ u_1 = -v_1+v_2-u_2 $$ Why? 
Simply this, whatever linear combination we see between the <strong>columns</strong> of the reduced row echelon form of a matrix must also be present in the initial unreduced matrix. I invite the reader to verify the linear combination above is easy to see in the second rref calculation. Just to be explicit, $$ u_1 = \left[ \begin{array}{c} 1 \\1 \\ 0 \\ 1 \end{array}\right] = -\left[ \begin{array}{c} 1 \\1 \\ 1 \\ 1 \end{array}\right] +\left[ \begin{array}{c} 3 \\2 \\ 3 \\ 2 \end{array}\right] -\left[ \begin{array}{c} 1 \\0 \\ 2 \\ 0 \end{array}\right]= -v_1+v_2-u_2,$$ which is just the relation $u_1+u_2 = v_2-v_1$ again.</p>
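The rank bookkeeping can be cross-checked in exact arithmetic (a rough sketch; the helper `rank` is hand-rolled, not from any library). The dimension formula gives $\dim(w_1\cap w_2)=\dim w_1+\dim w_2-\dim(w_1+w_2)=3+3-4=2$, and the vectors $u_1+u_2=v_2-v_1$ and $u_2-u_3=v_3-v_1$ lie in both subspaces:

```python
from fractions import Fraction

def rank(rows):
    """Exact matrix rank via Gaussian elimination over the rationals."""
    m = [[Fraction(v) for v in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

u1, u2, u3 = [1, 1, 0, 1], [1, 0, 2, 0], [0, 2, 1, 1]
v1, v2, v3 = [1, 1, 1, 1], [3, 2, 3, 2], [2, -1, 2, 0]
M = [list(row) for row in zip(u1, u2, u3, v1, v2, v3)]   # 4x6, columns u1..v3

w_a = [p + q for p, q in zip(u1, u2)]   # u1 + u2 = (2, 1, 2, 1)
w_b = [p - q for p, q in zip(u2, u3)]   # u2 - u3 = (1, -2, 1, -1)
```

Appending `w_a` or `w_b` as an extra column to either basis leaves the rank at 3, confirming membership in both spans.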
750,751
<p>If $V$ is a finite-dimensional vector space and $t \in \mathcal L (V,V)$ is such that $t^2 = id_V$, prove that the sum of the eigenvalues of $t$ is an integer.</p> <p>I started the proof as such:</p> <p>Let $\lambda_1 ,...,\lambda_n $ be eigenvalues of $t$. </p> <p>So $\lambda_1^2 ,... \lambda_n^2$ will be the eigenvalues for $t^2 = id_V$</p> <p>I don't know how to continue. Any suggestions?</p>
Oliver
84,451
<p>The eigenvalues are 1 or -1, hence the sum is an integer. The fact that the eigenvalues are $\pm1$ can be proved as follows: if $\lambda$ is an eigenvalue and $v$ an eigenvector corresponding to $\lambda$, then $v=t^2(v)=\lambda^2 v$, hence $\lambda=\pm1$.</p>
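A small numerical illustration with the coordinate-swap reflection of the plane (a rough sketch; any involution works), computing the eigenvalues of the $2\times 2$ matrix from its trace and determinant:

```python
import math

# an involution of R^2: the reflection that swaps coordinates
T = [[0, 1], [1, 0]]
TT = [[sum(T[i][k] * T[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# eigenvalues of a 2x2 matrix from trace and determinant
tr = T[0][0] + T[1][1]
det = T[0][0] * T[1][1] - T[0][1] * T[1][0]
lam1 = (tr + math.sqrt(tr * tr - 4 * det)) / 2
lam2 = (tr - math.sqrt(tr * tr - 4 * det)) / 2
# T o T = id forces lam^2 = 1, so each eigenvalue is +1 or -1 and their sum is an integer
```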
806,476
<p>In Milnor's book <em>Topology from the Differentiable Viewpoint</em> there's the following problem:</p> <p><strong>Problem $6$</strong> (Brouwer). Show that any map $S^n\to S^n$ with degree different from $(-1)^{n+1}$ must have a fixed point.</p> <p><strong>My solution:</strong> Assume that the map $f:S^n\to S^n$ has no fixed points. Let $a:S^n\to S^n$ denote the antipodal map $a(x)=-x$. Then the map $a\circ f$ is homotopic to the identity as follows:</p> <p>Since $x$ is never mapped to $-x$ (by assumption), there exists a unique shortest great circle arc from $a(f(x))$ to $x$, so simply take the straight-line homotopy flowing along such arcs to get a homotopy $a\circ f\simeq\operatorname{id}$.</p> <p>Now we have: $$1=\deg(\operatorname{id})=\deg(a\circ f)=\deg(a)\deg(f)=(-1)^{n+1}\deg(f)$$ and thus $\deg(f)=(-1)^{n+1}$.</p> <p>My questions:</p> <ol> <li>Is this solution correct?</li> <li>How to show that the homotopy is continuous? It seems intuitively true to me, but I'm not sure about how to proceed to show it. Maybe saying that it is the flow of some good tangent field?</li> <li>Are there other (elegant/short/interesting) proofs for this fact?</li> </ol>
Moishe Kohan
84,907
<p>I think what you gave is (a sketch of) the simplest proof, unless you know about the <a href="http://en.wikipedia.org/wiki/Lefschetz_fixed-point_theorem" rel="noreferrer">Lefschetz fixed point theorem</a>. Using the latter, the argument goes like this: $H_k(S^n)$ is nonzero only in dimensions $0$ and $n$ and $H_0(S^n)\cong H_n(S^n)\cong \mathbb{Z}$. The action of $f$ on $H_0$ is trivial and the action on $H_n$ is by multiplication by $d=\deg(f)$. The Lefschetz number of $f$ then equals $$ \Lambda_f= (-1)^0 + (-1)^n (d)= 1+ d(-1)^n. $$ This number is nonzero unless $$ d= (-1)^{n+1} $$ as required. If $\Lambda_f\ne 0$ then $f$ has a fixed point (this is the Lefschetz fixed point theorem). </p> <p>As for your solution, do not bother with geodesic flow, that's unnecessarily complicated; you can use linear algebra instead. For $t\in [0,1]$ and $x, y\in S^n$ non-antipodal, define $$ z_t= \frac{tx+(1-t)y}{|tx+(1-t)y|}\in S^n. $$ This is a point on the shortest arc of the great circle connecting $x$ and $y$. The rest you can figure out yourself. </p>
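The degree bookkeeping here is simple enough to script (a rough sketch; `lefschetz` just encodes the formula $\Lambda_f = 1+(-1)^n d$ above):

```python
def lefschetz(n, d):
    """Lefschetz number of a degree-d self-map of S^n (d on H_n, identity on H_0)."""
    return 1 + (-1) ** n * d

# the Lefschetz number vanishes exactly when d = (-1)**(n+1);
# any other degree gives a nonzero Lefschetz number, hence a fixed point
vanishing = {n: [d for d in range(-3, 4) if lefschetz(n, d) == 0] for n in range(5)}
```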
315,551
<p>So I'm going over my practice midterms (which all seem to have solutions like this one), </p> <p><img src="https://i.stack.imgur.com/fC8Gu.png" alt="Image"></p> <p>Can anyone help clarify this for me? I understand that you multiply by the reciprocal to get to line two. But after that I'm completely lost; I don't understand how:</p> <p>$$x^{2} + 1 - [(x + h)^{2} + 1]$$</p> <p>can become:</p> <p>$$(x-(x+h))(x+x+h)$$</p> <p>and so forth. I'm sorry if this is a stupid question; the solution doesn't seem to explain it very well.</p>
Cousin
54,755
<p>Well, $x^2+1-[(x+h)^2+1]=x^2-(x+h)^2=(x-(x+h))(x+(x+h))$. The last equality comes from the difference of two squares. </p>
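A brute-force sanity check of the identity over a grid of integers (a rough sketch), which also shows the whole expression collapses to $-h(2x+h)$:

```python
# check x^2 + 1 - ((x+h)^2 + 1) == (x - (x+h)) * (x + (x+h)) on a grid of integers
for x in range(-5, 6):
    for h in range(-5, 6):
        lhs = (x * x + 1) - ((x + h) ** 2 + 1)
        rhs = (x - (x + h)) * (x + (x + h))
        assert lhs == rhs == -h * (2 * x + h)
```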
257,978
<p>Is there any non-monoid ring which has no maximal ideal?</p> <p>We know that every commutative monoid ring has at least one maximal ideal; this is an easy theorem from Advanced Algebra 1, proved when we study modules.</p> <p>We say a ring $R$ is monoid if it has a multiplicative identity element; that is, if we denote this element by $1_{R}$, we should have $\forall r\in R;\: r.1_{R}=1_{R}.r=r$</p>
rschwieb
29,335
<p>If $D$ is a valuation domain with unique maximal ideal $M$, then there are some conditions where $M$ is an example of a commutative rng with no maximal ideals.</p> <p>As I remember it, one can choose a domain with a value group within $\Bbb{R}$ such that the group has no least positive element. Then, one can argue that the maximal ideal of that valuation domain $\{r\mid \nu(r)&gt;0 \textrm{ or } r=0\}$ is a rng without maximal ideals.</p> <p>Scanning the web, I think <a href="http://sierra.nmsu.edu/morandi/notes/NoMaxIdeals.pdf" rel="nofollow">this pdf</a> contains an argument of that sort.</p>
159,529
<p>The category of representations $\text{Rep}(D(G))$ of the quantum double of a finite group is well-known to be a modular tensor category. Can these modular tensor categories also be obtained as representation categories of vertex operator algebras?</p>
Marcel Bischoff
10,718
<p>This answer is related to my answer here: <a href="https://mathoverflow.net/questions/153264/duality-between-orbifold-and-quasi-hopf-algebra-twisted-quantum-doubles/153270#153270">Duality between orbifold and quasi-Hopf algebra (twisted quantum doubles)</a> and the comment by Scott.</p> <p>Every finite group $G$ can be embedded in some symmetric group $S_n$ and then you can take for example the $n$-fold product of the $E_8$ lattice VOA, for which the representation category is trivial. Then I guess the $G$ orbifold of this VOA should have $\mathrm{Rep}(D^\omega(G))$ as a representation category. This would be true in the setting of conformal nets, see my answer here: <a href="https://mathoverflow.net/questions/153264/duality-between-orbifold-and-quasi-hopf-algebra-twisted-quantum-doubles/153270#153270">Duality between orbifold and quasi-Hopf algebra (twisted quantum doubles)</a> but I am not sure how firmly a result like "an extension of a rational VOA is given by a special symmetric Frobenius algebra $A$ in the category of representations, and the representation category of the extension is given by the subcategory of dyslexic (local) modules of $A$" is established for VOAs.</p> <p>In this case the results mentioned in <a href="http://arxiv.org/abs/0909.2537v1" rel="nofollow noreferrer">http://arxiv.org/abs/0909.2537v1</a> also show that every VOA with $\mathrm{Rep}(D^\omega(G))$ as representation category is a $G$ orbifold of a holomorphic VOA.</p> <p><strong>Update</strong> I could still not find an exact statement for VOAs, but Alexei Davydov wrote in <a href="http://arxiv.org/abs/1312.7466" rel="nofollow noreferrer">http://arxiv.org/abs/1312.7466</a> (p. 2):</p> <blockquote> <p>Our examples come from permutation orbifolds of holomorphic conformal field theories (CFTs whose state space is an irreducible module over the chiral algebras).
It is argued in [30] (see also [35]) that the modular category of the $G$-orbifold of a holomorphic conformal field theory is the so-called Drinfeld (or monoidal) centre $\mathcal Z(G,\alpha)$, where $\alpha$ is a 3-cocycle of the group $G$. It is also known that the cocycle $\alpha$ is trivial for permutation orbifolds (orbifolds where the group $G$ is a subgroup of the symmetric group permuting copies in a tensor power of a holomorphic theory). The assumption crucial for the arguments of [30] is the existence of twisted sectors. This assumption is known to be true for permutation orbifolds [1].</p> </blockquote> <p>This statement (if true, I did not yet check [1] (<a href="http://link.springer.com/article/10.1007/s002200200633" rel="nofollow noreferrer">http://link.springer.com/article/10.1007/s002200200633</a>)) gives exactly the answer to the OP's question, as I guessed above, namely:</p> <blockquote> <p>Let $G\subset S_n$ and let $(V^{\otimes n})^G$ be the $G$-permutation orbifold of $V^{\otimes n}$, where $V$ is a holomorphic VOA; then $\mathrm{Rep}((V^{\otimes n})^G) \cong \mathrm{Rep}(D(G))$.</p> </blockquote> <p><strong>Update:</strong> The statement "an extension of a rational VOA is given by a commutative algebra $A$ (Thm 3.2) in the category of representations, and the representation category of the extension is given by the subcategory of dyslexic (local) modules of $A$ (Thm 3.4)" has now appeared in <a href="http://arxiv.org/abs/1406.3420" rel="nofollow noreferrer">http://arxiv.org/abs/1406.3420</a> </p>
1,093,717
<p>Let $\xi_1, \xi_2, \ldots \xi_n, \ldots$ be independent random variables having exponential distribution $p_{\xi_i} (x) = \lambda e^{- \lambda x}, \; x \ge 0$ and $p_{\xi_i} (x) = 0, \; x &lt; 0$. Let $\nu = \min \{n \ge 1 : \xi_n &gt; 1\}$. We need to find the distribution function of the random variable $g = \xi_1 + \xi_2 + \ldots + \xi_{\nu}$, that is, find the probability $\mathbb{P}(g &lt; x) = \mathbb{P} (\xi_1 + \xi_2 + \ldots + \xi_{\nu} &lt; x)$.</p> <p>I made the following calculations:</p> <p>$\mathbb{P} (\xi_1 + \xi_2 + \ldots + \xi_{\nu} &lt; x) = \sum_{k = 1}^{\infty} \mathbb{P} (\xi_1 + \xi_2 + \ldots + \xi_k &lt; x, \nu = k) = \sum_{k = 1}^{\infty} \mathbb{P} (\xi_1 + \xi_2 + \ldots + \xi_k &lt; x, \xi_1 \le 1, \ldots \xi_{k-1} \le 1, \xi_k &gt; 1)$.</p> <p>The probability of the sum can be represented as an integral:</p> <p>$\mathbb{P} (\xi_1 + \xi_2 + \ldots + \xi_k &lt; x, \xi_1 \le 1, \ldots \xi_{k-1} \le 1, \xi_k &gt; 1) = \int\limits_D \lambda^k e^{- \lambda u_1} e^{- \lambda u_2} \ldots e^{- \lambda u_k} \,{d}u_1 \ldots {d}u_k$, where $D = \{ u_1 + \ldots + u_k &lt; x, u_1 \le 1, \ldots u_{k-1} \le 1, u_k &gt; 1\}$. I'm afraid that this integral cannot be calculated in closed form.</p> <p>Is it somehow easier to find the distribution function $\mathbb{P} (g &lt; x)$?</p>
wolfies
74,360
<p>Let:</p> <ul> <li><p><span class="math-container">$X \sim \text{Exponential}(\lambda)$</span> with pdf <span class="math-container">$p(x) = \lambda e^{-\lambda x}$</span>, for <span class="math-container">$x&gt;0$</span>. </p></li> <li><p><span class="math-container">$(X_1, X_2, \dots)$</span> denote successive random draws on <span class="math-container">$X$</span>, where the sample <span class="math-container">$(X_1, X_2, \dots, X_n)$</span> is terminated as soon as <span class="math-container">$X_n &gt; 1$</span> is attained. </p></li> <li><p><span class="math-container">$Y = \sum_{i=1}^n X_i$</span></p></li> </ul> <p><strong>The stopped-sum constraint</strong></p> <p>The problem has two complications: The first is that the number of draws <span class="math-container">$N=n$</span> is itself a random variable. The second is that we are not just seeking the sum of Exponentials ... but rather that <span class="math-container">$Y$</span> is the sum of:</p> <ul> <li><span class="math-container">$(n-1)$</span> 'truncated above' Exponentials (each conditional on <span class="math-container">$X_i &lt;1$</span>), PLUS:</li> <li>the final term <span class="math-container">$X_n$</span> which is a 'truncated below' Exponential (conditional on <span class="math-container">$X_n &gt;1$</span>). </li> </ul> <p>This sum appears unlikely to have a tractable closed-form solution.</p> <p><strong>From theory to approximation</strong></p> <p>However, one can obtain very good approximations for small <span class="math-container">$\lambda$</span> (e.g. <span class="math-container">$\lambda &lt;\frac15$</span>) and large <span class="math-container">$\lambda$</span> (e.g. <span class="math-container">$\lambda &gt; 5$</span>). 
To see why, consider a plot of the Exponential pdf given different values of parameter <span class="math-container">$\lambda$</span>:</p> <p><a href="https://i.stack.imgur.com/YE27J.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YE27J.png" alt=""></a></p> <p>Consider the two extremes:</p> <hr> <h1>Case 1: <span class="math-container">$\lambda$</span> small</h1> <p>As <span class="math-container">$\lambda \rightarrow 0$</span> , <span class="math-container">$P(X_i&gt;1) \rightarrow 1$</span>, so in the extreme, the model should simplify to the pdf of <span class="math-container">$(X \big| X&gt;1)$</span>, which is a 'truncated below' Exponential with pdf:</p> <p><span class="math-container">$$\phi_{n=1}(y) = \frac{\lambda e^{-\lambda y}}{P(X&gt;1)} = \frac{\lambda e^{-\lambda y}}{e^{- \lambda}} \quad \text{for } \quad y&gt;1$$</span></p> <p>How does the approximate solution do? Here is a comparison when <span class="math-container">$\lambda= \frac15$</span> of:</p> <ul> <li>true Monte Carlo empirical pdf of <span class="math-container">$Y$</span> (the squiggly BLUE curve), versus</li> <li><span class="math-container">$\phi_{n=1}(y)$</span> - the truncated Exponential pdf (RED DASHED curve)</li> </ul> <p><a href="https://i.stack.imgur.com/2i5Hc.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/2i5Hc.png" alt=""></a></p> <p>The fit appears excellent, even here where <span class="math-container">$\lambda = \frac15$</span> is not particularly small. The only substantive deviation is over <span class="math-container">$y \in (1,2)$</span>, where the correct solution (Blue curve) appears to be approximately Uniform. 
(The roughly Uniform behaviour occurs for all values of <span class="math-container">$\lambda$</span>, for <span class="math-container">$1&lt;y&lt;\approx 2$</span>.)</p> <p>Smaller values of <span class="math-container">$\lambda$</span> will yield an even better fit.</p> <p>The solution thus far is:</p> <ul> <li>If <span class="math-container">$n = 1 \text{:} \quad Y_1 = (X_1 \big| X_1&gt;1)\quad \text{ with pdf } \phi_{n=1}(y)$</span></li> </ul> <p>For a better approximation, add more terms:</p> <ul> <li>If <span class="math-container">$n = 2 \text{:} \quad Y_2 = (X_1 \big| X_1 &lt;1) + (X_2 \big| X_2 &gt;1) \quad \text{ which has pdf: } $</span></li> </ul> <p><a href="https://i.stack.imgur.com/vuDjN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vuDjN.png" alt=""></a></p> <p>Then, the order 2 approximation is:</p> <p><span class="math-container">$$P(N=1) \phi_{n=1}(y) + P(N&gt;1) \phi_{n=2}(y)$$</span></p> <p>where <span class="math-container">$P(N=1) = P(X&gt;1) = e^{-\lambda }$</span>. The small discrepancy over <span class="math-container">$y \in (1,2)$</span> is now resolved:</p> <p><a href="https://i.stack.imgur.com/AUIcn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AUIcn.png" alt=""></a></p> <p>... as a zoomed in plot over that region of interest illustrates:</p> <p><a href="https://i.stack.imgur.com/ZDeFP.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZDeFP.png" alt=""></a></p> <p>One can continue to add more terms as desired (I did <span class="math-container">$n = 3$</span> for fun) ... it might get a bit messy algebraically, but certainly possible with a computer algebra system. 
Fortunately, one does not need to add too many terms, because large <span class="math-container">$n$</span> is only needed for large <span class="math-container">$\lambda$</span>, and for large <span class="math-container">$\lambda$</span>, a simpler method exists ...</p> <hr> <h1>Case 2: <span class="math-container">$\lambda$</span> large</h1> <p>If <span class="math-container">$\lambda$</span> is large, <span class="math-container">$P(X_i&lt; 1) \approx 1$</span>. In the extreme, the problem effectively reduces to finding the sum of <span class="math-container">$n$</span> iid Exponentials, which is well-known to be Gamma; in particular, that <span class="math-container">$Y \sim \text{Gamma}(n, \frac{1}{\lambda})$</span> (also known as the Erlang distribution when <span class="math-container">$n$</span> is an integer, as in our case), with pdf, say <span class="math-container">$f(y)$</span>:</p> <p><a href="https://i.stack.imgur.com/vE9A3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vE9A3.png" alt=""></a></p> <p>We then need the pmf of <span class="math-container">$N=n$</span>:</p> <p><span class="math-container">$$\begin{align*}\displaystyle P(N=1) \quad &amp;= \quad P(X_1&gt;1) \\P(N=2)\quad &amp;= \quad P(X_1 \leq 1) P(X_2&gt;1) \\ P(N=3) \quad &amp;= \quad P(X_1 \leq 1) P(X_2 \leq 1) P(X_3&gt;1) \\ \dots \quad &amp;= \quad \dots\\P(N=n) \quad &amp;= \quad P(X \leq 1)^{n-1} P(X&gt;1) = \left(1-e^{-\lambda }\right)^{n-1} e^{-\lambda } = \frac{\left(1-e^{-\lambda }\right)^n}{e^{\lambda }-1} \end{align*} $$</span></p> <p>Let <span class="math-container">$g(n)$</span> denote the pmf of <span class="math-container">$N$</span>:</p> <p><a href="https://i.stack.imgur.com/NOpcT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NOpcT.png" alt=""></a></p> <p><em>Parameter-mix distribution</em>: To solve, we require the expectation <span class="math-container">$E_N[ Gamma(N,\frac{1}{\lambda})]$</span> where <span 
class="math-container">$N$</span> has pmf <span class="math-container">$g(n)$</span>. The pdf of the parameter-mix distribution is simply:</p> <p><a href="https://i.stack.imgur.com/RaCc3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RaCc3.png" alt=""></a></p> <p>with domain of support on <span class="math-container">$y&gt;0$</span>. Finally, conditioning the latter on <span class="math-container">$Y&gt;1$</span>, we obtain the approximate pdf of <span class="math-container">$Y$</span>, for large <span class="math-container">$\lambda$</span>, as:</p> <p><span class="math-container">$$\phi(y) = \frac{\lambda}{\exp\left(-\lambda e^{-\lambda }\right)} \exp \left(-\lambda \left(1+ e^{-\lambda } y\right)\right) \quad \quad \text{ for} \quad y&gt;1$$</span></p> <p>How does the approximate solution do? Here is a comparison when <span class="math-container">$\lambda=5$</span> of:</p> <ul> <li>true Monte Carlo empirical pdf of <span class="math-container">$Y$</span> (the squiggly BLUE curve), versus</li> <li><span class="math-container">$\phi(y)$</span> - the Gamma parameter-mix pdf (RED DASHED curve)</li> </ul> <p><a href="https://i.stack.imgur.com/SLqwB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/SLqwB.png" alt=""></a></p> <p>... and a 'zoomed-in' version of the same:</p> <p><a href="https://i.stack.imgur.com/LJtO1.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/LJtO1.png" alt=""></a></p> <p>Again, the fit appears excellent, even here where <span class="math-container">$\lambda = 5$</span> is not particularly large. Larger values of <span class="math-container">$\lambda$</span> will yield an even better fit.</p> <p><strong>Summary</strong></p> <p>If <span class="math-container">$\lambda$</span> is small (e.g. <span class="math-container">$\frac15$</span> or smaller) or large (e.g. 5 or bigger), then approximate limit solutions that have fairly simple forms appear to yield good solutions.
For mid-range values of <span class="math-container">$\lambda$</span> in-between, a suitable weighted average of the two limit solutions may yield reasonable results, or more terms <span class="math-container">$\phi_{n=3}(y)$</span> and <span class="math-container">$\phi_{n=4}(y)$</span> etc can be added.</p> <p><strong>Notes</strong></p> <ul> <li>Monte Carlo simulation of the stopped-sum process: </li> </ul> <p>One standard approach would be to write a recursive function such as:</p> <pre><code> Func := (rr = MrRandom; AppendTo[xvals, rr]; If[ rr &gt; 1, xvals, Func]) </code></pre> <p>where </p> <pre><code>MrRandom := RandomReal[ExponentialDistribution[2]] </code></pre> <p>and then call 6 runs with, say:</p> <pre><code>Table[xvals = {}; Func, {6}] </code></pre> <p>However, for large values of <span class="math-container">$\lambda$</span>, the number of pseudorandom drawings required to attain an <span class="math-container">$x_i &gt; 1$</span> can be very large indeed, which may cause problems with iteration limits and recursive limits etc. It is also a slow way to proceed. A much nicer way, here using <em>Mathematica</em>, is to generate the samples <span class="math-container">$(x_1, \dots, x_n)$</span> as a one-liner:</p> <pre><code>Split[ RandomReal[ExponentialDistribution[.1], 10^7], # &lt; 1 &amp;] </code></pre> <p>... which generates 10 million Exponential pseudorandom drawings (in one go), and then splits them into separate samples whenever a value greater than 1 is attained.</p> <ul> <li>The <code>Expect</code> function used above is from the <em>mathStatica</em> package for <em>Mathematica</em>. As disclosure, I should add that I am one of the authors.</li> </ul>
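For readers without <em>Mathematica</em>, a rough Python analogue of the stopped-sum sampler (a sketch; the seed, sample size, and tolerance are arbitrary choices). For $\lambda = 0.2$ the exact mean of $Y$ works out to about $6.11$:

```python
import random

def sample_Y(lam, rng):
    """Draw X_i ~ Exponential(rate=lam) until the first X_i > 1; return the running sum."""
    total = 0.0
    while True:
        x = rng.expovariate(lam)
        total += x
        if x > 1.0:
            return total

rng = random.Random(12345)          # fixed seed for reproducibility
lam = 0.2
ys = [sample_Y(lam, rng) for _ in range(100_000)]
mean_y = sum(ys) / len(ys)
# every stopped sum exceeds 1 (the final draw alone does); for lam = 0.2 the
# exact mean E[N-1]*E[X | X < 1] + 1 + 1/lam is about 6.11
```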