Dataset schema: qid (int64, 1 to 4.65M) · question (large_string, lengths 27 to 36.3k) · author (large_string, lengths 3 to 36) · author_id (int64, -1 to 1.16M) · answer (large_string, lengths 18 to 63k)
971,997
<p>How do we find $$\Re\left[\int_0^{\large\frac{\pi}{2}} e^{\Large e^{i\theta}}d\theta\right]$$</p> <p>In the shortest and easiest possible manner?</p> <p>I cannot think of anything good.</p>
Anastasiya-Romanova 秀
133,248
<p>Here is a Feynman-style answer: we evaluate a more general integral using differentiation under the integral sign. Consider \begin{equation} I(\alpha)=\Re\left[\int_0^{\large\frac{\pi}{2}} e^{\Large \alpha e^{i\theta}}d\theta\right]=\int_0^{\large\frac{\pi}{2}} e^{\alpha\cos\theta}\cos(\alpha\sin\theta)\ d\theta\quad\Rightarrow\quad I(0)=\frac{\pi}{2} \end{equation} This integral has been evaluated in this earlier question: <a href="https://math.stackexchange.com/q/409171/133248">How to evaluate $\displaystyle\int_{0}^{2\pi}e^{\cos \theta}\cos( \sin \theta)\, d\theta$</a>, where Mr. john and Tunk-Fey have posted brilliant answers; you may refer there for the details. Rephrasing the final step of their answers, we have</p> <blockquote> <p>\begin{equation} I(\alpha)=\Im\bigg[\,{\rm{Ei}}(i\alpha)\bigg]={\rm{Si}}(\alpha)+\frac{\pi}{2} \end{equation}</p> </blockquote> <p>Therefore</p> <blockquote> <p>\begin{equation} I(1)=\Re\left[\int_0^{\large\frac{\pi}{2}} e^{\Large e^{i\theta}}d\theta\right]=\int_0^{\large\frac{\pi}{2}} e^{\cos\theta}\cos(\sin\theta)\ d\theta=\Im\bigg[\,{\rm{Ei}}(i)\bigg]={\rm{Si}}(1)+\frac{\pi}{2} \end{equation}</p> </blockquote>
2,477,730
<p>How would one show (preferably using congruences) that $$37\not\mid n^{9^9}+4 $$ for any $n \in \mathbb Z$?</p>
Oscar Lanzi
248,217
<p>For the divisibility to hold we must have $n$ coprime to $37$. Then $n^{36}\equiv 1 \bmod 37$. Now, $4\cdot 9^9$ is a multiple of $36$, so $n^{4\cdot 9^9}=(n^{9^9})^4\equiv 1$. Alas, $(-4)^4\equiv 256\equiv 34$, and $1\equiv 34$ won't work $\bmod 37$.</p>
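A brute-force check of this argument, as a short Python sketch (not part of the original answer): since every integer is congruent to some $n\in\{0,\dots,36\}$ mod $37$, checking a full set of residues settles the claim.

```python
# Check that 37 never divides n^(9^9) + 4, over a full set of residues mod 37.
# Three-argument pow does fast modular exponentiation, so e = 9^9 is no problem.
e = 9**9  # 387420489
assert all((pow(n, e, 37) + 4) % 37 != 0 for n in range(37))

# The two congruences the answer leans on:
assert (4 * 9**9) % 36 == 0   # 4*9^9 is a multiple of 36
assert pow(-4, 4, 37) == 34   # (-4)^4 = 256 is 34, not 1, mod 37
```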
2,477,730
<p>How would one show (preferably using congruences) that $$37\not\mid n^{9^9}+4 $$ for any $n \in \mathbb Z$?</p>
fleablood
280,126
<p>If $n\equiv 0 \mod 37$ then $n^{9^9}+4\equiv 4\mod 37$, so the divisibility certainly fails there.</p> <p>If $n\not\equiv 0\mod 37... $</p> <p>$37$ is prime. So $n^{36}\equiv 1\mod 37$. So if $9^9\equiv k \mod 36$ then $n^{9^9}\equiv n^k \mod 37$.</p> <p>Hmmm... $4\cdot 9=36$, so $9\cdot 9=81\equiv 9\mod 36$, i.e. $9^2\equiv 9\mod 36$, and inductively $9^k\equiv 9\mod 36$.</p> <p>So $n^{9^9}\equiv n^9 \mod 37$.</p> <p>And $(n^9)^4=n^{36}\equiv 1\mod 37$.</p> <p>So if $n^9\equiv -4 \mod 37$ then it must be true that $(-4)^4=4^4\equiv 1\mod 37$.</p> <p>But $4^4=256\not \equiv 1\mod 37$.</p> <p>So ... that's that then.</p>
1,255,629
<p>Show that </p> <p>$$\sin\left(\frac\pi3(x-2)\right)$$ </p> <p>is equal to </p> <p>$$\cos\left(\frac\pi3(x-7/2)\right)$$</p> <p>I know that $\cos(x + \frac\pi2) = −\sin(x)$, but I'm not sure how I can apply it to this question.</p>
Oscar
215,894
<p>The two angles $\frac{\pi}{3}(x-2)$ and $\frac{\pi}{3}(x-7/2)$ differ by $\pi/2$. In fact, $$ \frac{\pi}{3}(x-2) - \frac{\pi}{3}\left(x-\frac{7}{2}\right) = \frac{\pi}{3}x- \frac{2}{3}\pi - \frac{\pi}{3}x + \frac{7}{6}\pi = \frac{7-4}{6}\pi = \frac{\pi}{2}\, , $$ thus $\sin(\frac{\pi}{3}(x-2)) = \sin(\frac{\pi}{3}(x-7/2) + \frac{\pi}{2}) = \cos(\frac{\pi}{3}(x-7/2))$, since $\sin(\theta+\frac{\pi}{2})=\cos\theta$.</p>
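The identity can be spot-checked numerically; a Python sketch (the sample points are arbitrary choices):

```python
import math

# sin(pi/3*(x-2)) should equal cos(pi/3*(x-7/2)) for every x,
# since the two angles differ by exactly pi/2.
for x in [-3.0, -0.5, 0.0, 1.0, 2.5, 7.0, 10.25]:
    lhs = math.sin(math.pi / 3 * (x - 2))
    rhs = math.cos(math.pi / 3 * (x - 7 / 2))
    assert math.isclose(lhs, rhs, abs_tol=1e-12), (x, lhs, rhs)
```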
1,030,327
<p>I'm trying to do an exercise from Fulton's book <em>Algebraic Curves</em> (Exercise $\:6.26^{*}$).</p> <p>It says:</p> <p>Let $f:X\rightarrow Y$ be a morphism of affine varieties. Show that $f(X)$ is dense in $Y$ if and only if the homomorphism $\tilde{f}:\Gamma(Y)\rightarrow\Gamma(X)$ is one-to-one.</p>
Alex Youcis
16,497
<p>This is not true. The statement is that $f:\text{Spec}(B)\to\text{Spec}(A)$ is dominant (has dense image) if and only if $\ker(A\to B)$ is contained in $\text{nil}(A)$, the nilradical of $A$. Indeed, consider the map $\text{Spec}(k)\to\text{Spec}(k[x]/(x^2))$ coming from the quotient $k[x]/(x^2)\to k$.</p> <p>Here's an outline:</p> <ol> <li>Show that the closure of the image of $\text{Spec}(B)\to\text{Spec}(A)$ is $V(\ker(A\to B))$. This should just be chasing definitions.</li> <li>Show that $V(I)=\text{Spec}(A)$ if and only if $I\subseteq \text{nil}(A)$. This should also be apparent by definitions.</li> </ol> <p>Let me know if you have trouble!</p>
147,441
<p>I've just been looking through my Linear Algebra notes recently, and while revising the topic of change of basis matrices I've been trying something:</p> <p>"Suppose that our coordinates are $x$ in the standard basis and $y$ in a different basis, so that $x = Fy$, where $F$ is our change of basis matrix, then any matrix $A$ acting on the $x$ variables by taking $x$ to $Ax$ is represented in $y$ variables as: $F^{-1}AF$ "</p> <p>Now, I've attempted to prove the above, is my intuition right?</p> <p>Proof: We want to write the matrix $A$ in terms of $y$ co-ordinates.</p> <p>a) $Fy$ turns our y co-ordinates into $x$ co-ordinates.</p> <p>b) pre multiply by $A$, resulting in $AFy$, which is performing our transformation on $x$ co-ordinates</p> <p>c) Now, to convert back into $y$ co-ordinates, pre multiply by $F^{-1}$, resulting in $F^{-1}AFy$</p> <p>d) We see that when we multiply $y$ by $F^{-1}AF$ we perform the equivalent of multiplying $A$ by $x$ to obtain $Ax$, thus proved.</p> <p>Also, just to check, are the entries <em>in</em> the matrix $F^{-1}AF$ still written in terms of the standard basis?</p> <p>Thanks.</p>
Thomas
26,188
<p>Your approach seems correct.</p> <p>I don't know if the following helps, but anyway: you have a vector space $V$ (over, say, the complex numbers) and two bases $E$ and $D$. </p> <p>That $F$ is a change of basis matrix means that if $y = (y_i)$ is a column vector written with respect to the basis $E$, then you get the coordinates with respect to $D$ by $x = Fy$.</p> <p>Now you have a linear transformation $T: V \to V$. With respect to each basis, this transformation is given by a matrix, say $A_E$ resp. $A_D$. So if $y = (y_i)$ (wrt. the basis $E$) then $Ty = A_Ey$, and the result will be the coordinates in the basis $E$ (and likewise for the basis $D$ using $A_D$). </p> <p>So given a vector $y = (y_i)$ written in the basis $E$, you can first transform the coordinates to the basis $D$, then apply the matrix $A_D$, and then transform the coordinates back to the basis $E$. So you get $A_Ey = F^{-1}A_DFy$.</p> <p>One can actually write all of this out with coordinates (if you have never done so, I recommend that you do). You would start with the vector $v$ and write it as a linear combination of the basis $E$: $v = y_1e_1 + \dots + y_ne_n$, and continue from there...</p>
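A tiny numerical illustration of the conjugation formula $F^{-1}AF$, as a Python sketch; the particular matrices $F$, $A$ and vector $y$ below are arbitrary made-up choices, not taken from the posts above.

```python
# Minimal 2x2 linear algebra helpers (pure Python, no libraries).
def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

F = [[1.0, 2.0], [0.0, 1.0]]  # change of basis: x = F y
A = [[3.0, 1.0], [4.0, 2.0]]  # the transformation, in x-coordinates
y = [5.0, -7.0]

# Route 1: convert y to x-coordinates, apply A, convert the result back.
step_by_step = matvec(inv2(F), matvec(A, matvec(F, y)))

# Route 2: apply the single matrix F^{-1} A F directly to y.
B = matmul(inv2(F), matmul(A, F))
direct = matvec(B, y)

# Both routes give the same y-coordinates.
assert all(abs(s - d) < 1e-9 for s, d in zip(step_by_step, direct))
```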
4,004,157
<p>A firm wants to know how many of its employees have drug problems. Realizing the sensitivity of this issue, the personnel director decides to use a randomized response survey.</p> <p>Each employee is asked to flip a fair coin:</p> <p>If head (H), answer the question “Do you carpool to work?”</p> <p>If tail (T), answer the question “Have you used illegal drugs within the last month?”</p> <p>Out of 8000 responses, 1420 answered “YES” (assuming honesty)</p> <p>The company knows that 35% of its employees carpool to work. What is the probability that an employee (chosen at random) used illegal drugs within the last month?</p> <p>I think the probability that I am trying to figure out is <span class="math-container">$\mathbb{P}(yes|T)$</span>. From the problem, I was able to figure out that <span class="math-container">$\mathbb{P}(yes)=1420/8000$</span>, <span class="math-container">$\mathbb{P}(T)=50\%$</span> (because it's a fair coin) and that <span class="math-container">$\mathbb{P}(yes|H)=35\%$</span>. But for Bayes' theorem, I need to find <span class="math-container">$\mathbb{P}(T|yes)$</span>, and that is where I am stuck.</p> <p>I realized that I did not need Bayes' theorem as that would have made it more difficult.</p>
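For what it's worth, the computation the asker settles on is just the law of total probability, rearranged; a Python sketch:

```python
# P(yes) = P(H) P(yes | H) + P(T) P(yes | T); solve for P(yes | T).
p_yes = 1420 / 8000         # observed "yes" rate
p_heads = p_tails = 0.5     # fair coin
p_yes_given_heads = 0.35    # known carpool rate

p_yes_given_tails = (p_yes - p_heads * p_yes_given_heads) / p_tails
print(p_yes_given_tails)    # about 0.005, i.e. 0.5% of employees
```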
Mark Bennet
2,906
<p>Let's look at the concrete example, to see what is going on, and assume we are not in characteristic <span class="math-container">$2$</span>.</p> <p>Then, using the normal quadratic formula if <span class="math-container">$$x^4-10x^2+1=0$$</span> then <span class="math-container">$$x^2=\frac {10\pm \sqrt{100-4}}{2}=5\pm 2\sqrt 6$$</span> and if <span class="math-container">$6$</span> has a square root in your finite field, then the original equation is reducible.</p> <p>The roots in a splitting field are <span class="math-container">$\pm \sqrt 2\pm \sqrt 3$</span> and the factorisation already obtained corresponds to one pairing of the roots: <span class="math-container">$$(x^2-5-2\sqrt 6)(x^2-5+2\sqrt 6)=$$</span><span class="math-container">$$=(x-\sqrt 2-\sqrt 3)(x+\sqrt 2+\sqrt 3)\cdot(x+\sqrt 2-\sqrt 3)(x-\sqrt 2+\sqrt 3)$$</span></p> <p>If we were instead to choose other pairings we'd get <span class="math-container">$$(x-\sqrt 2+\sqrt 3)(x+\sqrt 2+\sqrt 3)\cdot(x-\sqrt 2-\sqrt 3)(x+\sqrt 2-\sqrt 3)=$$</span><span class="math-container">$$=(x^2+2\sqrt 3 x+1)(x^2-2\sqrt 3x+1)$$</span> or <span class="math-container">$$(x-\sqrt 2+\sqrt 3)(x-\sqrt 2-\sqrt 3)\cdot(x+\sqrt 2-\sqrt 3)(x+\sqrt 2+\sqrt 3)=$$</span><span class="math-container">$$=(x^2-2\sqrt 2 x-1)(x^2+2\sqrt 2x-1)$$</span> and these give factorisations in the case that <span class="math-container">$\sqrt 3$</span> or <span class="math-container">$\sqrt 2$</span> exist in your finite field - ie that <span class="math-container">$3$</span> or <span class="math-container">$2$</span> are squares.</p> <p>Now, you may also know that the product of two non quadratic residues modulo <span class="math-container">$p$</span> is a quadratic residue. From this it is easy to deduce that at least one of <span class="math-container">$2,3, 6$</span> has a square root in a field of characteristic <span class="math-container">$p$</span>. So the polynomial is reducible. 
If two of the three have square roots then the third does, and the polynomial splits into linear factors.</p> <p>Modulo <span class="math-container">$5$</span> we have that <span class="math-container">$6$</span> is a square while <span class="math-container">$2$</span> and <span class="math-container">$3$</span> are not. Modulo <span class="math-container">$7$</span> we have that <span class="math-container">$2$</span> is a square while <span class="math-container">$3$</span> and <span class="math-container">$6$</span> are not. Modulo <span class="math-container">$11$</span> we have that <span class="math-container">$3$</span> is a square and <span class="math-container">$2$</span> and <span class="math-container">$6$</span> are not. Modulo <span class="math-container">$23$</span> we have that <span class="math-container">$2, 3, 6$</span> are all squares. So all four possibilities occur.</p> <p>Finally it is easy to show that in characteristic <span class="math-container">$2$</span> we have <span class="math-container">$x^4-10x^2+1=x^4+1=(x+1)^4$</span>.</p> <p>So I've worked this in some detail in case it helps you to see what is going on. You can generalise to <span class="math-container">$a$</span> and <span class="math-container">$b$</span>. Obviously there are more efficient expositions, but sometimes longhand helps.</p>
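The residue pattern above can be verified with Euler's criterion ($a$ coprime to an odd prime $p$ is a square mod $p$ iff $a^{(p-1)/2}\equiv 1$); a Python sketch, not part of the original answer:

```python
def is_qr(a, p):
    # Euler's criterion, for odd prime p and a coprime to p.
    return pow(a, (p - 1) // 2, p) == 1

# The four cases worked out above:
assert [is_qr(a, 5)  for a in (2, 3, 6)] == [False, False, True]
assert [is_qr(a, 7)  for a in (2, 3, 6)] == [True, False, False]
assert [is_qr(a, 11) for a in (2, 3, 6)] == [False, True, False]
assert [is_qr(a, 23) for a in (2, 3, 6)] == [True, True, True]

# At least one of 2, 3, 6 is a square mod every prime p >= 5,
# because the product of two non-residues is a residue.
primes = [p for p in range(5, 200)
          if all(p % d for d in range(2, int(p**0.5) + 1))]
assert all(any(is_qr(a, p) for a in (2, 3, 6)) for p in primes)
```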
4,179,720
<p>I'm starting to study triple integrals. In general, I have been doing problems which require me to sketch the projection on the <span class="math-container">$xy$</span> plane so I can figure out the boundaries for <span class="math-container">$x$</span> and <span class="math-container">$y$</span>. For example, I had an exercise where I had to calculate the volume bounded by the planes <span class="math-container">$x=0$</span>, <span class="math-container">$y=0$</span>, <span class="math-container">$z=0$</span>, <span class="math-container">$x+y+z=1$</span>, which was easy. For the projection on the <span class="math-container">$xy$</span> plane, I set <span class="math-container">$z=0$</span>, then I got <span class="math-container">$x+y=1$</span> which is a line.</p> <p>However, now I have the following problem:</p> <p>Calculate the volume bounded by:</p> <p><span class="math-container">$$z=xy$$</span></p> <p><span class="math-container">$$x+y+z=1$$</span></p> <p><span class="math-container">$$z=0$$</span></p> <p>Now I know that if I put <span class="math-container">$z=0$</span> into the second equation I get the equation <span class="math-container">$y=1-x$</span> which is a line, but I also know that <span class="math-container">$z=xy$</span> has to play a role in the projection. If I put <span class="math-container">$xy=0$</span> I don't get anything useful. Can someone help me understand how these projections work and how I can apply it here?</p>
quasi
400,434
<p>Label the chairs<span class="math-container">$\;1,...,8\;$</span>in clockwise order.</p> <p> There are <span class="math-container">$4$</span> pairs of opposite chairs, namely <span class="math-container">$(1,5),(2,6),(3,7),(4,8)$</span>. <ul> <li>For the first couple (couple #<span class="math-container">$1$</span>), there are <span class="math-container">$4$</span> choices for their pair of opposite chairs, and then <span class="math-container">$2$</span> ways for the couple to choose their seats.<span class="math-container">$\\[4pt]$</span> <li>For the next couple (couple #<span class="math-container">$2$</span>), there are <span class="math-container">$3$</span> choices for their pair of opposite chairs, and then <span class="math-container">$2$</span> ways for the couple to choose their seats.<span class="math-container">$\\[4pt]$</span> <li>For the next couple (couple #<span class="math-container">$3$</span>), there are <span class="math-container">$2$</span> choices for their pair of opposite chairs, and then <span class="math-container">$2$</span> ways for the couple to choose their seats.<span class="math-container">$\\[4pt]$</span> <li>For the last couple (couple #<span class="math-container">$4$</span>), there is only <span class="math-container">$1$</span> choice for their pair of opposite chairs, and then <span class="math-container">$2$</span> ways for the couple to choose their seats.<span class="math-container">$\\[4pt]$</span> </ul> <p>hence there are <span class="math-container">$$ (4{\,\cdot\,}2) (3{\,\cdot\,}2) (2{\,\cdot\,}2) (1{\,\cdot\,}2) =2^4{\,\cdot\,}4! $$</span> acceptable seatings out of <span class="math-container">$8!$</span> possible seatings.</p> <p> Thus the probability of an acceptable seating is <span class="math-container">$$ \frac{2^4{\,\cdot\,}4!}{8!}=\frac{1}{105} $$</span>
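A brute-force enumeration over all $8!$ seatings agrees; a Python sketch in which (as a modeling convention, not from the original answer) persons $2k$ and $2k+1$ form couple $k$, and chair $i$ is opposite chair $i+4$:

```python
from itertools import permutations
from fractions import Fraction

# seating[i] is the person in chair i; couples are {0,1}, {2,3}, {4,5}, {6,7}.
# A seating is acceptable when every pair of opposite chairs holds a couple.
acceptable = sum(
    all(seating[i] // 2 == seating[i + 4] // 2 for i in range(4))
    for seating in permutations(range(8))
)

print(acceptable)                    # 384 = 2^4 * 4!
print(Fraction(acceptable, 40320))   # 1/105
```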
810,110
<p>How to prove that the following holds? $$\frac{d}{dx}a^x=(\ln a)a^x.$$ Just a hint will do it.</p>
Hakim
85,969
<p><strong>Hint:</strong> $$\dfrac{\mathrm d}{\mathrm dx}a^x=\dfrac{\mathrm d}{\mathrm dx}e^{\ln a\cdot x}.\tag{$a&gt;0$}$$ Now use the <a href="http://quantizd.blogspot.com/p/blog-page_7049.html" rel="nofollow">chain rule</a> knowing that: $$\dfrac{\mathrm d}{\mathrm dx}e^x=e^x.$$</p>
810,110
<p>How to prove that the following holds? $$\frac{d}{dx}a^x=(\ln a)a^x.$$ Just a hint will do it.</p>
Jika
143,855
<p>For $a&gt;0$, we know that $a^x=e^{x\log a}$. Hence to calculate the derivative of $a^x$, you have to calculate the derivative of $e^{g(x)}$ where $g(x)=x\log a$; by the chain rule this is $g'(x)e^{g(x)}=(\log a)\,a^x$.</p> <p>For $a\le0$, I do not think such a function exists (at least not as a real-valued function defined for all real $x$).</p>
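Both hints can be sanity-checked numerically with a symmetric difference quotient; a Python sketch:

```python
import math

def numeric_deriv(f, x, h=1e-6):
    # Symmetric difference quotient, accurate to O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

# Compare the numerical derivative of a^x against (ln a) a^x.
for a in (0.5, 2.0, math.e, 10.0):
    for x in (-1.0, 0.0, 1.7):
        exact = math.log(a) * a**x
        approx = numeric_deriv(lambda t: a**t, x)
        assert math.isclose(exact, approx, rel_tol=1e-6), (a, x)
```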
2,711,189
<p>Is there any way to integrate a wedge of a cylinder (picture below), not by adding up slices cut along parallel lines, but by adding up slices cut along lines pivoting about the center of the half circle?</p> <p><a href="https://i.stack.imgur.com/BrucD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/BrucD.png" alt="enter image description here"></a></p>
Matthew Leingang
2,785
<p>The standard multivariable integration method can integrate along radial lines. If the base disk of the cylinder is in the $(x,y)$-plane, and the wedge runs along the $x$-axis, then the height function for the wedge can be represented as $z=y\tan\alpha$. In cylindrical coordinates, the wedge can be described as $$ \left\{(r,\theta,z)\mid 0 \leq r \leq a,\ 0 \leq \theta \leq \pi,\ 0 \leq z \leq r \sin \theta \tan\alpha\right\} $$ The volume can be computed with an iterated integral: \begin{align*} V &amp;= \int_0^\pi \int_0^a r \sin(\theta) \tan(\alpha)\, r\,dr \,d\theta \\ &amp;= \int_0^\pi \frac{1}{3}a^3 \tan(\alpha) \sin(\theta)\,d\theta \\ &amp;= \frac{2}{3}a^3 \tan(\alpha) \end{align*}</p>
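A midpoint Riemann sum over the $(r,\theta)$ grid reproduces the closed form; a Python sketch with the arbitrary test values $a=1$, $\alpha=\pi/4$:

```python
import math

a, alpha = 1.0, math.pi / 4
n = 400  # grid resolution in each variable

# Midpoint rule for V = integral over [0,pi]x[0,a] of (r sin(theta) tan(alpha)) r dr dtheta.
dr, dth = a / n, math.pi / n
V = sum(
    ((i + 0.5) * dr) ** 2 * math.sin((j + 0.5) * dth) * math.tan(alpha) * dr * dth
    for i in range(n) for j in range(n)
)

exact = (2 / 3) * a**3 * math.tan(alpha)
assert abs(V - exact) < 1e-4
```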
2,338,508
<p>Given this matrix that stretches to infinity to the right and up: $$ \begin{matrix} ...&amp;...&amp;...\\ \frac{1}{4}&amp; \frac{1}{8}&amp; \frac{1}{16}&amp;... \\ \frac{1}{2} &amp; \frac{1}{4}&amp; \frac{1}{8}&amp;... \\ 1 &amp; \frac{1}{2}&amp; \frac{1}{4}&amp;... \\ \end{matrix} $$</p> <p>I was trying to find the total sum of this matrix. I know the answer should be $4$. I came up with a different solution and a different answer. What is wrong with that solution? Here it is:</p> <p>The first row sums to $2$. The second row to $2-1$. The third row to $2-1-\frac{1}{2}$ etc... So we get:</p> <p>$$ \begin{matrix} 2&amp;-1&amp;-\frac{1}{2}&amp;-\frac{1}{4}&amp;-\frac{1}{8}&amp;-\frac{1}{16}\\ 2&amp;-1&amp;-\frac{1}{2}&amp;-\frac{1}{4}&amp;-\frac{1}{8}\\ 2&amp;-1&amp;-\frac{1}{2}&amp;-\frac{1}{4}\\ 2&amp;-1&amp;-\frac{1}{2}\\ 2&amp;-1 \\ 2 \\ \end{matrix} $$</p> <p>Now for each "$2$" there is a diagonal that gives the sequence $2-1-\frac{1}{2}-\frac{1}{4}...=0$ (since the matrix goes on forever) Therefore, the sum of the matrix must be $0$!</p> <p>Apparently that's wrong; but why? Thanks!</p> <p>EDIT: I am looking for an answer to the question what is fundamentally <strong>wrong</strong> with my method plus an explanation for why that is wrong. I am not looking for an explanation of the correct method.</p>
Emilio Novati
187,568
<p>The ''triangular array'' is not really an array but a column of values:</p> <p>$ 2 $</p> <p>$2-1=1$</p> <p>$2-1-\frac{1}{2}=\frac{1}{2}$</p> <p>$2-1-\frac{1}{2}-\frac{1}{4}=\frac{1}{4}$</p> <p>$2-1-\frac{1}{2}-\frac{1}{4}-\frac{1}{8}=\frac{1}{8}$</p> <p>$\cdots$</p> <p>so <strong>there is no diagonal</strong>, and the sum of these values is clearly $=4$</p> <hr> <p>In other words, this is not a matrix, but the ''infinite sum'' </p> <p>$$ 2+\left(2-1\right)+\left(2-1-\frac{1}{2}\right)+\left(2-1-\frac{1}{2}-\frac{1}{4}\right)+ \cdots +\left(2-\sum_{i=0}^{n-1}\frac{1}{2^{i}}\right)+ \cdots $$ and we cannot rearrange or associate the terms of the series in a different order, such as adding them ''by diagonals''. </p>
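Indeed the $k$-th value in the column is $2-\sum_{i=0}^{k-1}2^{-i}=2^{1-k}$, a geometric sequence whose sum is $4$; a quick Python sketch:

```python
# k-th value of the column: 2 minus the first k terms of 1 + 1/2 + 1/4 + ...
def term(k):
    return 2 - sum(0.5**i for i in range(k))

values = [term(k) for k in range(60)]
# Each value equals 2^(1-k): the column is a geometric sequence.
assert all(abs(values[k] - 2 ** (1 - k)) < 1e-12 for k in range(60))

# The partial sums of the column converge to 4, the row-by-row total.
partial = sum(values)
assert abs(partial - 4) < 1e-12
```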
3,215,553
<p>Recently I have been reading a lot about <span class="math-container">$\mathbb{Z}_2$</span>-actions on topological spaces. Mainly I was focused on surfaces such as the sphere, torus and Klein bottle and here the existence of a nontrivial <span class="math-container">$\mathbb{Z}_2$</span>-action is rather simple. But I was wondering if a general topological space always admits a nontrivial continuous <span class="math-container">$\mathbb{Z}_2$</span>-action? If not, then more specific, does a manifold always admit a nontrivial continuous <span class="math-container">$\mathbb{Z}_2$</span>-action? </p> <p>For a manifold <span class="math-container">$M$</span> I was thinking about the fact that we can embed <span class="math-container">$M$</span> into <span class="math-container">$\mathbb{R}^N$</span> for some <span class="math-container">$N &gt;0$</span> and then <span class="math-container">$M$</span> can inherit a <span class="math-container">$\mathbb{Z}_2$</span>-action from <span class="math-container">$\mathbb{R}^N$</span> but then when one looks at the spiral in <span class="math-container">$\mathbb{R}^2$</span> we see that this spiral does not inherit for example the antipodality of <span class="math-container">$\mathbb{R}^2$</span>.</p> <p>Extra: I was also wondering that if there are spaces that do admit a nontrivial continuous <span class="math-container">$\mathbb{Z}_2$</span>-action, do these space then also admit a <strong>free</strong> <span class="math-container">$\mathbb{Z}_2$</span>-action? By free I mean that the action is fixed point free.</p> <p>If anyone knows some basic examples that do not admit a continuous (free) <span class="math-container">$\mathbb{Z}_2$</span>-action. Please do share. I seem to be unable to find one.</p> <p>Thank you in advance! </p>
PR_
667,990
<p>Consider <span class="math-container">$X=[0,1)$</span>. By connectedness arguments it follows that <span class="math-container">$0$</span> is fixed by every homeomorphism, and as a consequence every <span class="math-container">$\mathbb{Z}_2$</span>-action on <span class="math-container">$X$</span> is trivial (to verify this, show that every subset of the form <span class="math-container">$[0,\varepsilon]$</span> gets fixed).</p> <p>If we restrict to manifolds then I can't give you examples (other than the one-point space) that do not admit a non-trivial <span class="math-container">$\mathbb{Z}_2$</span>-action. However, if we consider free actions then the quotient is a manifold and we obtain a covering. It is a consequence of the Lefschetz fixed point theorem that the only groups that can act freely on <span class="math-container">$S^{2n}$</span> are the trivial group and <span class="math-container">$\mathbb{Z}_2$</span>. But if <span class="math-container">$\mathbb{R}P^{2n}$</span> had a free <span class="math-container">$\mathbb{Z}_2$</span>-action, then the quotient would have a fundamental group of order <span class="math-container">$4$</span> with universal covering <span class="math-container">$S^{2n}$</span>, which is not possible.</p>
3,659,942
<p>I have the following problem:</p> <p>Find all <span class="math-container">$n$</span> that are natural numbers such that <span class="math-container">$\sqrt{n}+\sqrt{n+m}\in\mathbb{N}$</span>, where <span class="math-container">$m$</span> is a positive natural number that is the product of two primes that are not equal to each other.</p> <p>I thought I could solve it when the numbers under the square roots are squares of integers, but I have no idea how to prove that or how to continue.</p>
Brian M. Scott
12,042
<p>SKETCH: Let <span class="math-container">$B'$</span> be a subset of <span class="math-container">$A$</span> order-isomorphic to <span class="math-container">$B$</span>. Recursively construct an order-isomorphism from <span class="math-container">$B'$</span> onto an initial segment, not necessarily proper, of <span class="math-container">$A$</span>. If this order-isomorphism is onto <span class="math-container">$A$</span>, compose it with the order-isomorphism from <span class="math-container">$B$</span> into <span class="math-container">$A$</span> to get the desired order-isomorphism from <span class="math-container">$B$</span> onto <span class="math-container">$A$</span>. If not, get a contradiction by using the order-isomorphism of <span class="math-container">$A$</span> to a subset of <span class="math-container">$B$</span> to get an order-isomorphism of <span class="math-container">$A$</span> into a proper initial segment of itself, from which you can recursively construct an order-isomorphism of <span class="math-container">$A$</span> onto a proper initial segment of itself.</p>
3,414,072
<blockquote> <p>Let <span class="math-container">$U \subset \mathbb{R}^N$</span> be an open set, let <span class="math-container">$f : U \times [a, b] \to \mathbb{R}$</span> be a continuous function. Consider the function <span class="math-container">$$g(x):= \int_a^b f(x,y) \,dy$$</span> with <span class="math-container">$x \in U$</span>.</p> <p>i) Prove that <span class="math-container">$g$</span> is continuous in <span class="math-container">$U$</span>.</p> <p>ii) consider the function <span class="math-container">$f : [−1, 1] \times [−1, 1] \to \mathbb{R}$</span> defined by <span class="math-container">$$f (x, y) :=\begin{cases} \frac {|y|−|x|} {y^2}&amp;\text{ if $|x|&lt;|y|$}\\ 0 &amp;\text{ if $|x|\geq |y|$} \end{cases}$$</span> Let <span class="math-container">$g(y):= \int_{-1}^1 f(x,y) \,dx$</span> for <span class="math-container">$y \in [-1,1]$</span>. Study the continuity of <span class="math-container">$g$</span>.</p> </blockquote> <p>So I am a bit stuck on how to prove the continuity from the basics: I know how to prove that if <span class="math-container">$f$</span> is continuous on <span class="math-container">$[a,b]$</span>, then <span class="math-container">$g=\int f$</span> is continuous on <span class="math-container">$[a,b]$</span>, but I assume that because of the different notation and dimension here, I have to prove it a different way? In addition, how would I study the continuity?</p>
Masacroso
173,262
<p>First note that continuity is a <strong>local property</strong>. You can exploit this together with the assumption that <span class="math-container">$U$</span> is open in <span class="math-container">$\Bbb R ^n$</span>. That is, we can reduce the exercise to showing that <span class="math-container">$g$</span> is continuous on every <strong>compact</strong> subset of <span class="math-container">$U$</span>, because every point of an open subset of <span class="math-container">$\Bbb R ^n$</span> has a compact neighborhood contained in it.</p> <p>Using compact sets is convenient because compact sets behave, in many ways, as if they were finite sets. In particular we will use the fact that any continuous function on a compact set is <strong>uniformly continuous</strong>; that is, if <span class="math-container">$C\subset U$</span> is compact then <span class="math-container">$f$</span> is uniformly continuous on <span class="math-container">$C\times [a,b]$</span>, because a Cartesian product of compact sets is compact in the product topology.</p> <p>Now note that <span class="math-container">$$ \begin{align*} |g(x_0)-g(x)|&amp;\leqslant \int_{a}^b|f(x_0,y)-f(x,y)|\,\mathrm d y\\ &amp;\leqslant (b-a)\sup_{y\in[a,b]}|f(x_0,y)-f(x,y)| \end{align*}\tag1 $$</span> Because <span class="math-container">$f$</span> is uniformly continuous on <span class="math-container">$C\times[a,b]$</span>, for any <span class="math-container">$\epsilon &gt;0$</span> there is a <span class="math-container">$\delta &gt;0$</span> such that if <span class="math-container">$\|(x_0,y_0)-(x,y)\|&lt;\delta $</span> then <span class="math-container">$|f(x_0,y_0)-f(x,y)|&lt;\epsilon $</span> for all <span class="math-container">$(x_0,y_0),(x,y)\in C\times [a,b]$</span>, where <span class="math-container">$\|{\cdot}\|$</span> is any norm on <span class="math-container">$\Bbb R ^n$</span> (all norms are equivalent on finite-dimensional vector spaces, so you can choose whichever norm you like, for example the maximum norm).</p> <p>Then from <span class="math-container">$\rm (1)$</span> we have that if <span class="math-container">$\|(x_0,y)-(x,y)\|_\infty=|x_0-x|&lt;\delta $</span> then <span class="math-container">$|g(x_0)-g(x)|&lt;(b-a)\epsilon $</span> for all <span class="math-container">$x_0,x\in C$</span>, which finishes the proof.</p>
758,158
<p>I am trying to use <span class="math-container">$f(x)=x^3$</span> as a counterexample to the following statement. </p> <p>If <span class="math-container">$f(x)$</span> is strictly increasing over <span class="math-container">$[a,b]$</span> then for any <span class="math-container">$x\in (a,b), f'(x)&gt;0$</span>. </p> <p>But how can I show that <span class="math-container">$f(x)=x^3$</span> is strictly increasing?</p>
binkyhorse
18,357
<p>Here is an intuitive proof that could be made rigorous using Lebesgue measure. </p> <p>Start with $x&gt;0$. Place a cube with edge length $x$ so that it has three faces on the coordinate planes in three-dimensional space and two corners at $(0,0,0)$ and at $(x,x,x)$. Then $f(x)=x^3$ is the volume of the cube. </p> <p>Strict monotonicity of $f$ is a consequence of the volume being strictly bigger for a strictly bigger cube. To incorporate negative values of $x$, argue with the fact that $f$ is odd.</p>
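A purely algebraic route also works: $b^3-a^3=(b-a)(a^2+ab+b^2)$ and $a^2+ab+b^2=\left(a+\frac b2\right)^2+\frac34 b^2>0$ unless $a=b=0$, so $a<b$ implies $a^3<b^3$. A Python spot-check of both facts (not part of the original answer):

```python
import random

random.seed(0)
for _ in range(1000):
    a = random.uniform(-100.0, 100.0)
    b = a + random.uniform(0.001, 100.0)  # guarantee a < b
    # strict monotonicity
    assert a**3 < b**3
    # the factorisation identity b^3 - a^3 = (b - a)(a^2 + ab + b^2)
    lhs = b**3 - a**3
    rhs = (b - a) * (a * a + a * b + b * b)
    assert abs(lhs - rhs) <= 1e-6 * max(1.0, abs(lhs))
```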
725,241
<p>Where s = circumcenter, H = orthocenter, and A' = midpoint of one side of the triangle. <img src="https://i.stack.imgur.com/Pg6rv.png" alt="enter image description here"></p> <p>How can I determine the location of the three vertices of the triangle?</p>
Marc van Leeuwen
18,880
<p>It becomes easier if you first recognise that you may rename $k-1$ to $k'$ and $i-1$ to $i'$, so that you need to prove $$ \binom{n+k'}{n-1}=\sum_{i'=0}^{k'}\binom{k'}{i'}\binom n{i'+1}, \qquad\text{for $0\leq k'&lt;n$} $$ then apply symmetry on the last binomial coefficient, drop the primes, and write $n-1=m$ $$ \binom{k+n}m=\sum_{i=0}^k\binom ki\binom n{m-i}. $$ Now you see it is the <a href="https://en.wikipedia.org/wiki/Vandermonde_identity" rel="nofollow">Vandermonde identity</a>. Both sides express the coefficient of $X^m$ in $(1+X)^{k+n}=(1+X)^k(1+X)^n$. Or the number of ways to select $m$ elements out of $k+n$ that happen to be coloured so that $k$ are blue and $n$ are red. The relations between $k,n,m$ are unimportant.</p>
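The Vandermonde identity, and the sum in its original form, can be verified exhaustively for small parameters with `math.comb`; a Python sketch:

```python
from math import comb

# Vandermonde: C(k+n, m) = sum_i C(k, i) C(n, m-i).
# math.comb(n, k) conveniently returns 0 when k > n.
for k in range(8):
    for n in range(8):
        for m in range(k + n + 1):
            assert comb(k + n, m) == sum(comb(k, i) * comb(n, m - i)
                                         for i in range(m + 1))

# The identity as originally posed: C(n+k', n-1) = sum_{i'} C(k', i') C(n, i'+1),
# for 0 <= k' < n.
for n in range(1, 8):
    for kp in range(n):
        assert comb(n + kp, n - 1) == sum(comb(kp, ip) * comb(n, ip + 1)
                                          for ip in range(kp + 1))
```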
725,241
<p>Where s = circumcenter, H = orthocenter, and A' = midpoint of one side of the triangle. <img src="https://i.stack.imgur.com/Pg6rv.png" alt="enter image description here"></p> <p>How can I determine the location of the three vertices of the triangle?</p>
mathse
136,490
<p>Yes, both answers are perfect, but to add to Marc van Leeuwen's answer - the formula is merely a simple application of the Vandermonde convolution, which states that</p> <p>$\binom{n+m}{r} = \sum_{i+j=r} \binom{n}{i}\binom{m}{j}$</p> <p>(with $m=k-1$ and $r=n-1$). So, from this you see that many other 'fancy' formulas also hold, such as </p> <p>$\binom{n+k-1}{n-1} = \sum_{i=1}^{k-4}\binom{k-5}{i-1}\binom{n+4}{n-i}$,</p> <p>etc.</p>
696,900
<p>This should be trivial, but for some reason I cannot think of a formula that enlarges one number in proportion as another number decreases in the negative direction.</p> <p>Example:</p> <p>if value 1 = $-0.1$, value 2 should be set to $0.9$<br> if value 1 = $-0.2$, value 2 should be set to $0.8$<br> if value 1 = $-0.4$, value 2 should be set to $0.6$</p> <p>and so on...</p> <p>Thanks for any help</p>
Nathan Smith
690,955
<p>In your description you mention that one should be increasing while the other is decreasing. In your examples, both value1 and value2 are getting smaller.</p>
1,708,072
<p>This is Exercise 2 of section 1.8 in Humphreys' "Reflection groups and Coxeter groups", p.16.</p> <p>The longest word $w_0$ in a finite reflection group $W$ acting on a Euclidean space (with a specified basis/system of simple roots $\Delta$) is defined as the unique element of $W$ that maps the fundamental chamber to its opposite. (This makes sense because $W$ acts simply transitively on the chambers.) I am trying to show that, in every reduced expression of $w_0$, every simple reflection occurs at least once.</p> <p>I am trying to proceed by contradiction: suppose that $w_0$ admits the reduced expression $w_0 = s_1\cdots s_r$, where $s_i = s_{\alpha_i}$ for some simple roots $\alpha_i$, and that there exists a missing simple root $\beta \neq \alpha_i \ \forall i$. Intuitively I want to show that $w_0s_\beta$ is a longer word than $w_0$, contradicting the fundamental property of the latter, but I find myself unable to do so. Any hint would be greatly appreciated!</p>
anon
194,722
<p>I believe I have found an easy way to proceed. We know that $w_0$ maps every positive root to a negative root; in particular, it maps $\beta$ to a negative root. But by Proposition 1.4 of the book, for any simple root $\alpha ­\in \Delta$, the simple reflection $s_\alpha$ permutes all the positive roots that are distinct from $\alpha$. Therefore, if $\beta \neq \alpha_i$ for all $i$, the word $w_0 = s_{\alpha_1}\cdots s_{\alpha_r}$ sends $\beta$ to another positive root, a contradiction. (First $s_{\alpha_r}$ sends $\beta$ to a positive root; then $s_{\alpha_{r-1}}$ sends that positive root to another positive root; and so on.)</p> <p>EDIT: Wait, this doesn't necessarily work. There is no reason why $s_{\alpha_{r-1}}$ should send $s_{\alpha_r}(\beta)$ to another positive root unless we know that $s_{\alpha_r}(\beta) \neq \alpha_{r-1}$...</p>
1,708,072
<p>This is Exercise 2 of section 1.8 in Humphreys' "Reflection groups and Coxeter groups", p.16.</p> <p>The longest word $w_0$ in a finite reflection group $W$ acting on a Euclidean space (with a specified basis/system of simple roots $\Delta$) is defined as the unique element of $W$ that maps the fundamental chamber to its inverse. (This makes sense because $W$ acts simply transitively on the chambers.) I am trying to show that, in every reduced expression of $w_0$, every simple reflection occurs at least once.</p> <p>I am trying to proceed by contradiction: suppose that $w_0$ admits the reduced expression $w_0 = s_1\cdots s_r$, where $s_i = s_{\alpha_i}$ for some simple roots $\alpha_i$, and that there exists a missing simple root $\beta \neq \alpha_i \ \forall i$. Intuitively I want to show that $w_0s_\beta$ is a longer word than $w_0$, contradicting the fundamental property of the latter, but I find myself unable to do so. Any hint would be greatly appreciated!</p>
Ben West
37,097
<p>I believe your argument still works. Suppose $w_0=s_{\alpha_1}\cdots s_{\alpha_r}$ is a reduced expression in terms of various simple reflections, and suppose $\beta$ is a simple root distinct from all these $\alpha_i$. </p> <p>I assume there is some positive definite bilinear form $(-,-)$ on the Euclidean space, so the simple reflections are defined by the formula $$ s_\alpha(\beta)=\beta-2\frac{(\beta,\alpha)}{(\alpha,\alpha)}\alpha. $$</p> <p>The idea is that when you apply these simple reflections, they just add some multiple of their corresponding simple root to the root you plug in, but they'll never add a multiple of $\beta$, since they're all distinct from $\beta$. So the simple reflections in the reduced expression of $w_0$ can't change the coefficient of $1$ for $\beta$, as the simple roots are linearly independent.</p> <p>That is, we must have $w_0(\beta)=\beta+\sum_{\alpha_i\in\Delta\setminus\{\beta\}}c_i\alpha_i$ for some coefficients $c_i$. This image is necessarily a root, and since the simple roots are linearly independent, we are ensured a positive coefficient of $\beta$, hence $w_0(\beta)\in\Phi^+$.</p>
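A small finite sanity check of this statement (a Python sketch, not part of the original answer), using the standard identification of the simple reflections of $S_3$ with adjacent transpositions: the longest element reverses $(1,2,3)$, and no word in a single generator reaches it, while the reduced word $s_1 s_2 s_1$ does.

```python
# Simple reflections of S_3 acting on tuples as adjacent transpositions.
def s1(p):
    return (p[1], p[0], p[2])

def s2(p):
    return (p[0], p[2], p[1])

def apply_word(word, p):
    for s in word:
        p = s(p)
    return p

identity = (1, 2, 3)
w0 = (3, 2, 1)  # the longest element reverses the order

# Images of all words in a single generator (lengths 0..5).
single_gen_images = set()
for gen in (s1, s2):
    for k in range(6):
        single_gen_images.add(apply_word([gen] * k, identity))

reaches_w0_with_one_generator = w0 in single_gen_images
reduced_word_image = apply_word([s1, s2, s1], identity)
```

Since each $s_i$ is an involution, longer single-generator words add nothing, so the length cap of 5 is harmless here.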
1,506,805
<p>$\lim _{x\to 0}\left(\frac{\cos\left(x+\frac{\pi }{2}\right)}{x}\right)\:$</p>
John Joy
140,156
<p><a href="https://i.stack.imgur.com/5OBMN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5OBMN.png" alt="enter image description here" /></a></p> <p>For acute angles <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span>, use the law of cosines to show that <span class="math-container">$$\cos(\alpha+\beta)=\cos\alpha\cos\beta - \sin\alpha\sin\beta$$</span></p> <p>Then, use this identity to show that <span class="math-container">$$\cos\bigg(x+\frac{\pi}{2}\bigg) = -\sin x$$</span> and, finally evaluate the limit <span class="math-container">$$\lim_{x\to 0}\frac{\cos(x+\frac{\pi}{2})}{x} = \lim_{x\to 0}\frac{-\sin x}{x} = \dots$$</span></p>
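A quick numeric check of the conclusion (a Python sketch, not part of the original answer): since $\cos(x+\tfrac{\pi}{2})=-\sin x$, the quotient should approach $-1$ as $x \to 0$.

```python
import math

# cos(x + pi/2) = -sin(x), so cos(x + pi/2)/x -> -1 as x -> 0.
def quotient(x):
    return math.cos(x + math.pi / 2) / x

values = [quotient(10.0 ** -k) for k in range(1, 8)]
```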
3,758
<p>When implicitly finding the derivative of: </p> <blockquote> <p>$xy^3 - xy^3\sin(x) = 1$</p> </blockquote> <p>How do you find the implicit derivative of:</p> <blockquote> <p>$xy^3\sin(x)$</p> </blockquote> <p>Is it using a <em>triple</em> product rule of sorts?</p>
Bill Dubuque
242
<p><strong>REMARK</strong> $\ $ Pierre's answer is a generating function approach:</p> <p>$\displaystyle\rm\quad\quad\quad\;\; f(x+t) \;=\; \sum_{n=0}^\infty \;\; f^{(n)}(x) \; \frac{t^n}{n!}$</p> <p>$\rm\quad (fgh)(x+t) \;=\; f(x+t)\: g(x+t)\: h(x+t)$</p> <p>$\rm\quad\quad\quad\quad\quad\quad\quad\quad\;\: \;=\; (f + f' t +\:\cdots) (g + g' t + \:\cdots) (h + h' t + \:\cdots) $</p> <p>$\rm\quad\quad\quad\quad\quad\quad\quad\quad\;\: \;=\; fgh + (f'gh + fg'h + fgh')\: t + \:\cdots $</p> <p>$\rm\quad (fgh)(x+t) \;=\; fgh + (fgh)' t + \:\cdots $</p> <p>Thus we conclude $\:\rm(fgh)' = f'gh + fg'h + fgh'$</p> <p>This generalizes to n-th order derivatives, as Pierre mentioned, e.g.</p> <p>$\quad\displaystyle\rm\sum_{n=0}^\infty \: (fgh)^{(n)} \frac{t^n}{n!} \ =\ \sum_{n=0}^\infty \:\bigg(\ \sum_{i+j+k=n} \frac{f^{(i)}g^{(j)}h^{(k)}}{i!\ j!\ k!}\bigg) \frac{t^n}{n!} $</p>
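A finite-difference spot check of the conclusion $(fgh)' = f'gh + fg'h + fgh'$ (a Python sketch; the sample functions and evaluation point are arbitrary choices of mine):

```python
import math

f, fp = math.sin, math.cos               # f and its derivative
g, gp = math.exp, math.exp               # g and its derivative
h, hp = (lambda x: x ** 2), (lambda x: 2 * x)  # h and its derivative

def product(x):
    return f(x) * g(x) * h(x)

def product_rule(x):
    return fp(x) * g(x) * h(x) + f(x) * gp(x) * h(x) + f(x) * g(x) * hp(x)

x0, eps = 0.7, 1e-6
numeric = (product(x0 + eps) - product(x0 - eps)) / (2 * eps)  # central difference
symbolic = product_rule(x0)
```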
3,218,200
<p>Let <span class="math-container">$\Bbbk$</span> be a field. A well-known result (see <a href="https://www.jstor.org/stable/2373270?origin=crossref&amp;seq=1#metadata_info_tab_contents" rel="nofollow noreferrer">Larson, Sweedler, <em>An Associative Orthogonal Bilinear Form for Hopf Algebras</em></a> and <a href="https://www.sciencedirect.com/science/article/pii/0021869371901414?via%3Dihub" rel="nofollow noreferrer">Pareigis, <em>When Hopf algebras are Frobenius algebras</em></a>) states that a bialgebra <span class="math-container">$B$</span> over <span class="math-container">$\Bbbk$</span> is a finite-dimensional Hopf algebra if and only if it is Frobenius as an algebra and the Frobenius morphism <span class="math-container">$\psi:B\to\Bbbk$</span> is a (left) integral on <span class="math-container">$B$</span> (i.e. <span class="math-container">$\sum b_1\psi(b_2)=\psi(b)1_B$</span> for all <span class="math-container">$b\in B$</span>).</p> <p>For didactical reasons, I would like to have an example of a (necessarily finite-dimensional) bialgebra <span class="math-container">$B$</span> which is Frobenius as an algebra but that is not a Hopf algebra. However, I didn't manage to construct one. Does anybody know one? Even a non-elementary one would be fine.</p> <p>Alternatively, if somebody knows a proof of the fact that a Frobenius bialgebra is automatically a Hopf algebra, without further assumptions on the Frobenius homomorphism, then I would be very glad to see it.</p>
Marco Farinati
664,350
<p>Here is an example. Take k a field and S the multiplicative semigroup {0,1}. For notational convenience denote it S={1,x}. Consider B=k[S] the semigroup algebra.</p> <p>B is a bialgebra and not Hopf because S is a semigroup and not a group.</p> <p><span class="math-container">$B\cong k[x]/(x^2-x)\cong k\times k$</span> is semisimple. In particular it is a Frobenius algebra.</p>
1,799,559
<blockquote> <p>Let $f:[a,b]\to\mathbb{R}$ be a continuous function. Show that if $$\int_a^b f(x)g(x)dx=0$$ for all continuous functions $g:[a,b]\to\mathbb{R}$ with $g(a)=g(b)=0$, then $f(x)=0$ $\forall x\in[a,b]$</p> </blockquote> <p>I have difficulty proving this. Consider for example $g(x)=0$ $\forall x\in[a,b]$: the assumptions still hold, but $f(x)$ can be anything (e.g. $f(x)=1\neq0$). Could anyone tell me if this reasoning is correct? </p>
levap
32,262
<p>Note that if $f$ satisfied $f(a) = f(b) = 0$, you could take $g = f$ and get $\int_a^b f^2(x) \, dx = 0$, and since $f^2(x)$ is continuous and non-negative, you would immediately get $f(x) \equiv 0$. In general, you can modify $f$ continuously so that it stays the same on $[a + \frac{1}{n}, b - \frac{1}{n}]$ but drops down to zero afterwards. Define</p> <p>$$ g_n(x) = \begin{cases} f \left( a + \frac{1}{n} \right) n(x-a) &amp; a \leq x \leq a + \frac{1}{n}, \\ f(x) &amp; a + \frac{1}{n} \leq x \leq b - \frac{1}{n}, \\ -f \left( b - \frac{1}{n} \right)n(x - b) &amp; b - \frac{1}{n} \leq x \leq b. \end{cases} $$</p> <p>Then $g_n(a) = g_n(b) = 0$, the function $g_n$ is continuous, and</p> <p>$$ 0 = \int_a^b f(x)g_n(x) \, dx = \int_{a+\frac{1}{n}}^{b-\frac{1}{n}} f^2(x) \, dx + \operatorname{Junk}_n$$</p> <p>where $\operatorname{Junk}_n \to 0$ as $n \to \infty$ (show this by showing that $\operatorname{Junk}_n \leq \frac{2M}{n}$ where $|f| \leq M$). Taking the limit, you obtain the required result.</p>
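A numeric illustration of this construction (a Python sketch with an arbitrary sample $f$ of my choosing, not from the original answer): $\int_a^b f\,g_n$ approaches $\int_a^b f^2$ as $n$ grows, consistent with the $2M/n$ bound on the junk term.

```python
def f(x):
    return x + 0.5  # sample continuous f, positive on [0, 1]

a, b = 0.0, 1.0

# The piecewise modification g_n from the answer.
def g(x, n):
    if x <= a + 1.0 / n:
        return f(a + 1.0 / n) * n * (x - a)
    if x >= b - 1.0 / n:
        return -f(b - 1.0 / n) * n * (x - b)
    return f(x)

def midpoint_integral(func, lo, hi, steps=20000):
    h = (hi - lo) / steps
    return sum(func(lo + (i + 0.5) * h) for i in range(steps)) * h

# Exact value of the integral of f^2 = (x + 1/2)^2 over [0, 1].
exact_f_squared = ((b + 0.5) ** 3 - (a + 0.5) ** 3) / 3.0

err = {n: abs(midpoint_integral(lambda x: f(x) * g(x, n), a, b)
              - exact_f_squared)
       for n in (10, 100, 1000)}
```

The error shrinks roughly like $1/n$, as the bound predicts.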
84,897
<pre><code>Integrate[a/(Sin[t]^2 + a^2), {t, 0, 2 Pi}] </code></pre> <p>$$\int_0^{2 \pi } \frac{a}{a^2+\sin ^2(t)} \, dt$$</p> <p>gives $0$</p> <p>This cannot be true. What is going on?</p> <p>If I insert a number into <code>a</code>, it gives a reasonable result:</p> <pre><code>NIntegrate[2/(Sin[t]^2 + 4), {t, 0, 2 Pi}] </code></pre> <p>give <code>2.80993</code></p>
Dr. Wolfgang Hintze
16,361
<pre><code>$Version (* Out[228]= "8.0 for Microsoft Windows (64-bit) (October 7, 2011)" *) </code></pre> <p>The result provided by <em>Mathematica</em> is correct:</p> <pre><code>Integrate[a/(a^2 + Sin[t]^2), {t, 0, 2 π}] (* Out[213]= (2 π)/(Sqrt[1 + 1/a^2] a) *) </code></pre> <p>Now the same procedure as "always" which "explains" the zero result. The indefinite integral is</p> <pre><code>Integrate[a/(a^2 + Sin[t]^2), t] (* Out[214]= ArcTan[(Sqrt[1 + a^2] Tan[t])/a]/Sqrt[1 + a^2] *) </code></pre> <p>This result is not continuous for real <code>a</code>. In this graph we can see the jumps as well as the necessary additional terms to restore continuity.</p> <p>The jump must be determined by taking (directed) limits. Assuming first $a&gt;0$ gives</p> <pre><code>Simplify[Limit[ArcTan[(Sqrt[1 + a^2] Tan[t])/a]/Sqrt[1 + a^2], t -&gt; π/2, Direction -&gt; +1], a &gt; 0] - Simplify[Limit[ArcTan[(Sqrt[1 + a^2] Tan[t])/a]/Sqrt[1 + a^2], t -&gt; π/2, Direction -&gt; -1], a &gt; 0] (* Out[408]= π/Sqrt[1 + a^2] *) </code></pre> <p>and therefore</p> <pre><code>With[{a = 1/2}, Plot[{ArcTan[(Sqrt[1 + a^2] Tan[t])/a]/Sqrt[1 + a^2], ArcTan[(Sqrt[1 + a^2] Tan[t])/a]/Sqrt[1 + a^2] + π/Sqrt[1 + a^2], ArcTan[(Sqrt[1 + a^2] Tan[t])/a]/Sqrt[1 + a^2] + (2 π)/Sqrt[ 1 + a^2]}, {t, 0, 2 π}]] </code></pre> <p><img src="https://i.stack.imgur.com/hGRfC.jpg" alt="150601_plot_int.jpg"></p> <p>The blue (lower) curve is the original antiderivative, the red (middle) curve has <code>π/Sqrt[1+a^2]</code> added and is the continuous continuation between <code>π/2</code> and <code>3π/2</code>; finally, the brown (upper) curve does it for the rest, adding again the same amount <code>π/Sqrt[1+a^2]</code>.</p> <p>Hence taking the difference of the antiderivative at the endpoints according to the fundamental theorem of calculus would lead to zero on the blue curve, and is only correct for the continuous version, leading to the correct result given in the beginning.</p> <p><strong>EDIT #1</strong></p> <p>The
case $a&lt;0$ need not be discussed separately because of the (anti)symmetry of the integral.</p> <p>Note that because the antiderivative vanishes at both ends the integral is equal to the "total" jump, which here is twice the amount of the jump at $\pi/2$.</p> <p>This and other examples point to the (tentative) rule: if the result of <code>Integrate[]</code> is zero despite the fact that the integrand is positive check for jumps and calculate them using the directed Limits. The integral will then be the sum of the jumps over the whole interval.</p>
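A numeric cross-check of the closed form (a Python sketch, not part of the original answer): a midpoint Riemann sum over the smooth periodic integrand reproduces $2\pi/(\sqrt{1+1/a^2}\,a)$, and for $a=2$ it matches the value 2.80993 quoted in the question.

```python
import math

def closed_form(a):
    # The correct definite integral: 2*pi / (sqrt(1 + 1/a^2) * a)
    return 2 * math.pi / (math.sqrt(1 + 1 / a ** 2) * a)

def riemann(a, steps=20000):
    # Midpoint rule; for smooth periodic integrands over a full period
    # this converges extremely fast.
    h = 2 * math.pi / steps
    return sum(a / (a ** 2 + math.sin((i + 0.5) * h) ** 2)
               for i in range(steps)) * h

checks = {a: abs(riemann(a) - closed_form(a)) for a in (0.5, 1.0, 2.0)}
value_at_2 = riemann(2.0)
```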
1,614,899
<p>Construct Context-free Grammar for integers. Integer can begin with + or - and after that we have non-empty string of digits. Integer must not contain unnecessary leading zeros and zero should not be preceded by + or -. For example: 0; 123; -15; +9999 are correct, but +0; 01; +-3; +09; + are incorrect.</p> <p>I have something like this:</p> <p>(number) ::= (unsigned number) | (sign)(unsigned number)</p> <p>(sign) ::= + | – </p> <p>(unsigned number) ::= (digit) | (unsigned number)(digit) </p> <p>(digit) ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |</p> <p>| - or</p> <p>Is it okay? ;)</p>
David K
139,123
<p>In general, when you take a union of the sets in a collection of sets, you can easily end up with a new set that is not in the original collection.</p> <p>I'll assume that when we take $\bigcup_{\gamma\in A} \gamma$, where each $\gamma$ is a "cut", we are actually referring to a union of the lower sets in each cut; that is, each $\gamma$ is a set of rationals that does not contain its own supremum.</p> <p>Let $A$ be the countably infinite collection of Dedekind cuts named $1/2, 2/3, 3/4, 4/5, \ldots$. That is, $A = \left\{ 1 - \frac1n \mid n \in \mathbb N \right\}$. Now let $q$ be any rational number less than $1$; for a large enough $n$, $q &lt; 1 - \frac1n$, so $q$ is a member of the cut named $1 - \frac1n$, and therefore $q \in \bigcup_{\gamma\in A} \gamma$. That is, $\bigcup_{\gamma\in A} \gamma$ is the set of all rational numbers less than $1$, and therefore $\bigcup_{\gamma\in A} \gamma$ is just exactly the Dedekind cut for the real number $1$.</p> <p>In short, $\sup A = 1$ and $\bigcup_{\gamma\in A} \gamma = 1$, but $1 \not\in A$.</p>
4,490,117
<blockquote> <p>(a) Let <span class="math-container">$f : [a, b] \to \mathbb{R}$</span> be continuous and suppose that <span class="math-container">$f(x) \gt 0$</span> for all <span class="math-container">$x$</span>. Show that there is some <span class="math-container">$L\gt 0$</span> such that <span class="math-container">$f(x) \ge L$</span> for all <span class="math-container">$ x \in [a, b]$</span>.</p> </blockquote> <blockquote> <p>(b) Give an example of a continuous function <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> satisfying <span class="math-container">$f(x) \gt 0$</span> for all <span class="math-container">$x$</span>, such that no <span class="math-container">$L\gt 0$</span> satisfies <span class="math-container">$f(x) \ge L$</span> for all <span class="math-container">$x$</span>.</p> </blockquote> <p>I was given these two problems together. For the first one I could solve it easily by using the property that <span class="math-container">$f$</span> will attain its bounds in the given closed interval, and hence the minimum value will do the trick.</p> <p>But I can't prove (b) analytically.</p> <p>I thought of <span class="math-container">$f(x) = e^x$</span> and I know it will work, but I can't prove it using any contradiction.</p> <p>Can I get some help please?</p>
user1035015
1,035,015
<p>I would check out Set Theory: The Structure of Arithmetic by Hamilton and Landin. I love this book. They define addition formally and prove the addition tables and all the usual properties of addition. Unfortunately they do not go into algorithms, but I think you will find the content very enlightening and interesting. I believe it’s very close to what you are looking for.</p>
668,451
<p>A wizard has commanded you, the master architect, to build towers by stacking stone blocks.</p> <p>You have at your disposal five stone masons of limited intellect. Each mason is capable of making only a single type of cylindrical block, the height of which must be a positive integer, and the diameter of which is irrelevant. The masons have sufficient time to build as many copies of their block type as will be needed.</p> <p>The wizard, being capricious, will demand a tower with an integer height between 1 and 300 units. There is no way to know in advance what height will be required; it will be selected from a uniform distribution. The wizard has also decided that each tower be made of no more than five blocks. Using multiple copies of the same block size is acceptable. The blocks can't be stacked sideways.</p> <p>What five integers do you assign to your masons to maximize the probability that you will be able to build a tower of the specified height, using no more than five blocks total? What is that probability (i.e., the probability that you can meet the wizard's demands and escape an untimely death)? Are there multiple solutions with equally good probability? What is the best way to get good solutions quickly?</p> <p>To restate more formally:</p> <p>Select the set of five positive integers {a, b, c, d, e} which maximize the probability that</p> <pre><code>n1*a + n2*b + n3*c + n4*d + n5*e == RandomInteger[{1,300}] n1 + n2 + n3 + n4 + n5 &lt;= 5 </code></pre> <p>has at least one solution, where n1 through n5 are integers >= 0.</p> <p>The only strategy I could think of was randomly modifying an existing solution and checking if it was better than the current solution. This is hardly efficient, and is likely to get stuck in some local maximum. Maybe there's something more elegant. 
Here's the Mathematica code:</p> <pre><code>score[{a_, b_, c_, d_, e_}] := Length[Flatten[FindInstance[{ n1*a + n2*b + n3*c + n4*d + n5*e == #, n1 &gt;= 0, n2 &gt;= 0, n3 &gt;= 0, n4 &gt;= 0, n5 &gt;= 0, n1 + n2 + n3 + n4 + n5 &lt;= 5 }, {n1, n2, n3, n4, n5}, Integers, 1] &amp; /@ Range[300], 1]] best = {5, 10, 15, 20, 25}; bestScore = score[best]; nRounds = 1000; Do[ new = best + RandomInteger[{-2, 3}, 5]; newScore = score[new]; If[newScore &gt; bestScore, bestScore = newScore; best = Sort[new]]; If[Mod[r, 10] == 0, Print[r, " ", best, " ", bestScore]], {r, 1, nRounds} ] </code></pre>
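As a cross-check of the score function above, here is a hypothetical brute-force Python equivalent (a sketch; the helper name and the small test cases below are mine, not from the post):

```python
from itertools import combinations_with_replacement

def score(blocks, max_blocks=5, max_height=300):
    """Number of heights in 1..max_height buildable from <= max_blocks blocks."""
    reachable = set()
    for k in range(1, max_blocks + 1):
        for combo in combinations_with_replacement(blocks, k):
            s = sum(combo)
            if 1 <= s <= max_height:
                reachable.add(s)
    return len(reachable)

# With block heights 1..5, every height 1..25 is buildable and nothing higher,
# so the score should be exactly 25.
small_case = score((1, 2, 3, 4, 5))
```

For five block sizes this enumerates at most 251 multisets, so it is much cheaper than calling an integer-programming solver 300 times.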
Bob Brooks
244,123
<p>Solve[a n1 + b n2 + c n3 + d n4 + e n5 == RandomInteger[{1, 300}], {a, b, c, d, e, n1, n2, n3, n4, n5}]</p> <blockquote> <p>Just guessing.</p> </blockquote>
668,451
<p>A wizard has commanded you, the master architect, to build towers by stacking stone blocks.</p> <p>You have at your disposal five stone masons of limited intellect. Each mason is capable of making only a single type of cylindrical block, the height of which must be a positive integer, and the diameter of which is irrelevant. The masons have sufficient time to build as many copies of their block type as will be needed.</p> <p>The wizard, being capricious, will demand a tower with an integer height between 1 and 300 units. There is no way to know in advance what height will be required; it will be selected from a uniform distribution. The wizard has also decided that each tower be made of no more than five blocks. Using multiple copies of the same block size is acceptable. The blocks can't be stacked sideways.</p> <p>What five integers do you assign to your masons to maximize the probability that you will be able to build a tower of the specified height, using no more than five blocks total? What is that probability (i.e., the probability that you can meet the wizard's demands and escape an untimely death)? Are there multiple solutions with equally good probability? What is the best way to get good solutions quickly?</p> <p>To restate more formally:</p> <p>Select the set of five positive integers {a, b, c, d, e} which maximize the probability that</p> <pre><code>n1*a + n2*b + n3*c + n4*d + n5*e == RandomInteger[{1,300}] n1 + n2 + n3 + n4 + n5 &lt;= 5 </code></pre> <p>has at least one solution, where n1 through n5 are integers >= 0.</p> <p>The only strategy I could think of was randomly modifying an existing solution and checking if it was better than the current solution. This is hardly efficient, and is likely to get stuck in some local maximum. Maybe there's something more elegant. 
Here's the Mathematica code:</p> <pre><code>score[{a_, b_, c_, d_, e_}] := Length[Flatten[FindInstance[{ n1*a + n2*b + n3*c + n4*d + n5*e == #, n1 &gt;= 0, n2 &gt;= 0, n3 &gt;= 0, n4 &gt;= 0, n5 &gt;= 0, n1 + n2 + n3 + n4 + n5 &lt;= 5 }, {n1, n2, n3, n4, n5}, Integers, 1] &amp; /@ Range[300], 1]] best = {5, 10, 15, 20, 25}; bestScore = score[best]; nRounds = 1000; Do[ new = best + RandomInteger[{-2, 3}, 5]; newScore = score[new]; If[newScore &gt; bestScore, bestScore = newScore; best = Sort[new]]; If[Mod[r, 10] == 0, Print[r, " ", best, " ", bestScore]], {r, 1, nRounds} ] </code></pre>
Mr.Wizard
9,909
<p>I don't know how to answer this directly, but I think I can help you approach it. We can show that the upper limit to the best solution is 251 different heights, as that is the maximum number of unique sums that can be produced. Using <em>Mathematica</em>:</p> <pre><code>p = {a, b, c, d, e}; Array[Tr /@ Tuples[p, #] &amp;, 5, 1, Union] // Length </code></pre> <blockquote> <pre><code>251 </code></pre> </blockquote> <p>You could also express this as the coefficients of the expansion of: $\left(1+x^a+x^b+x^c+x^d+x^e\right)^5$ where $(a, b, c, d, e)$ are your block heights.</p> <hr> <p>After a randomized search the best values I found were:</p> <p>208 : {1,5,26,60,74}<br> 208 : {1,8,41,53,71}<br> 209 : {1,7,48,58,73}<br> 209 : {1,5,26,60,74}<br> 210 : {2,7,39,57,72} </p> <p>I used this <em>Mathematica</em> code:</p> <pre><code>count[p_] /; Min[p] &lt; 1 = 0; mem : count[p_] := mem = -1 + Tr @ Unitize @ PadRight[ CoefficientList[(1 + Tr["x"^p])^5, "x"], 301] try = best = Sort @ RandomInteger[{1, 150}, 5] Do[ If[count[try] &gt; count[best], Print[count[best], " : ", best = try]]; try = Sort[best + Round @ RandomReal[NormalDistribution[0, 6], 5]], {100000} ] </code></pre>
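The 251 bound is just the number of multisets of size 1 through 5 drawn from 5 block heights; a quick combinatorial cross-check (a Python sketch, not part of the original answer):

```python
from math import comb

# Multisets of size k from 5 distinct block heights: C(k + 4, 4).
# Summing over tower sizes k = 1..5 gives the maximum number of distinct sums.
upper_bound = sum(comb(k + 4, 4) for k in range(1, 6))
```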
2,862,166
<p>In a paper I've been studying it says:</p> <blockquote> <p>Let $x$ in the cone $\mathbb R_+^n$ of all vectors in $\mathbb R^n$ with nonnegative components ($n\in\mathbb N$)</p> </blockquote> <p>Could somebody tell me what this means, please? $\mathbb R_+^n$ should be $[0,\infty)\times\dots\times [0,\infty)$ ($n$ times), but I don't understand why it is called <em>the cone $\mathbb R_+^n$</em>. Maybe <em>the cone $\mathbb R_+^n$</em> is different from $[0,\infty)\times\dots\times [0,\infty)$?</p>
Fred
380,717
<p>Yes, we have $\mathbb R_+^n= [0,\infty)\times\dots\times [0,\infty)$.</p> <p>If $C \ne \emptyset$ is a subset of $\mathbb R^n$, then $C$ is called a cone if $x \in C$ and $t \ge 0$ imply that $tx \in C$.</p> <p>Hence $\mathbb R_+^n$ is a cone in $\mathbb R^n$.</p>
1,637,237
<p>I'm studying differential calculus, but one of the questions involves solving an inequality:</p> <p>$$(x-2)e^x &lt; 0$$</p> <p>I intend to go deeper into solving inequalities later, but I just want to understand how the teacher got the following solution in order to advance in these lectures: $$x-2 &lt; 0$$ $$x &lt; 2$$</p> <p>Where did the $e^x$ go? Is there some rule for solving these inequalities involving $e$?</p>
BCLC
140,308
<p>Recall that for $a &gt; 0$</p> <p>$$ab &lt; ac \iff b &lt; c$$</p> <p>Note that $e^x &gt; 0 \ \forall x \in \mathbb R$</p>
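A tiny numeric spot check (a Python sketch, not part of the original answer): since $e^x > 0$ everywhere, the sign of $(x-2)e^x$ is just the sign of $x-2$.

```python
import math

def lhs(x):
    return (x - 2) * math.exp(x)

below = [lhs(x) for x in (-3.0, 0.0, 1.9)]  # x < 2: should be negative
above = [lhs(x) for x in (2.1, 5.0)]        # x > 2: should be positive
```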
118,680
<p>The question is motivated by this one <a href="https://mathoverflow.net/questions/118626/real-symmetric-matrix-has-real-eigenvalues-elementary-proof">real symmetric matrix has real eigenvalues - elementary proof</a>: </p> <p>Are there other fields $F$ than $\mathbb{R}$ (maybe some valued fields or real closed fields) with the property that every symmetric matrix in $M_n(F)$ is diagonalizable ? </p>
Name
30,062
<p>This is a generalization of the idea of Will Sawin. The Stufe of such a field should be infinite. In fact if $-1$ is a sum of squares, i.e., $-1=a_1^2+\cdots+a_{n-1}^2$, then $$A:= \begin{pmatrix} 1 &amp; a_1&amp;\cdots &amp; a_{n-1} \\ a_1 &amp; a_1^2 &amp;\cdots &amp; a_1a_{n-1} \\ \vdots &amp; \vdots&amp;\ddots &amp;\vdots \\ a_{n-1} &amp; a_{n-1}a_1&amp;\cdots &amp; a_{n-1}^2\\ \end{pmatrix} $$ would be a symmetric matrix with $A^2=0$ and is not diagonalizable. So the base field should be a formally real field.</p> <p>A complete characterization is given in the following article (a necessary and sufficient condition is that such a field should be an intersection of real closed fields):</p> <p><a href="http://www.ams.org/mathscinet-getitem?mr=1237224">MR1237224</a> D. Mornhinweg, D. B. Shapiro and K. G. Valente, <a href="http://www.jstor.org/stable/2324781">The Principal Axis Theorem Over Arbitrary Fields</a> (The American Mathematical Monthly, Vol. 100, No. 8 (Oct., 1993), pp. 749-754).</p>
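A concrete instance of this construction (a Python sketch, not in the original answer): over F_5 we have -1 ≡ 2², so with a_1 = 2 the matrix with rows (1, 2) and (2, 4) is symmetric, nonzero, and squares to zero mod 5, hence cannot be diagonalizable.

```python
P = 5   # -1 is a square mod 5: 2^2 = 4 ≡ -1 (mod 5)
a1 = 2

A = [[1, a1],
     [a1, a1 * a1 % P]]

def matmul_mod(X, Y, p):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % p
             for j in range(n)] for i in range(n)]

A_squared = matmul_mod(A, A, P)
is_symmetric = A[0][1] == A[1][0]
is_zero_matrix = all(v == 0 for row in A_squared for v in row)
```

A nonzero nilpotent matrix has only the eigenvalue 0, so diagonalizability would force it to be the zero matrix.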
3,239,689
<p><span class="math-container">$$n\in\mathbb{N}^{*}; S_{n}=\sum_{k=1}^{n}k!(k^2+1)$$</span></p> <p>I need to find <span class="math-container">$S_n$</span>.</p> <p>I started like this: <span class="math-container">$S_{n}=\sum_{k=1}^{n}\left((k+2)!-3(k+1)!+2k!\right)$</span></p> <p>How do I continue? I tried plugging in values of k, but the terms don't cancel.</p>
AccidentalFourierTransform
289,977
<p><span class="math-container">$$ \sum_{k=1}^{n}\bigg((k+2)!-3(k+1)!+2k!\bigg)=\sum_{k=1}^{n}\bigg(\bigg[(k+2)!-(k+1)!\bigg]-2\bigg[(k+1)!-k!\bigg]\bigg) $$</span> and telescope.</p>
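Carrying the telescoping through suggests the closed form $S_n = \big((n+2)! - 2!\big) - 2\big((n+1)! - 1!\big) = (n+2)! - 2\,(n+1)!$ (my computation, not stated in the answer); a quick Python check of both the rewriting and the closed form:

```python
from math import factorial

def S_direct(n):
    return sum(factorial(k) * (k * k + 1) for k in range(1, n + 1))

def S_closed(n):
    return factorial(n + 2) - 2 * factorial(n + 1)

# The rewriting used in the question: k!(k^2 + 1) = (k+2)! - 3(k+1)! + 2*k!
identity_holds = all(
    factorial(k) * (k * k + 1)
    == factorial(k + 2) - 3 * factorial(k + 1) + 2 * factorial(k)
    for k in range(1, 20)
)
closed_form_holds = all(S_direct(n) == S_closed(n) for n in range(1, 15))
```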
2,996,640
<p>Maybe trivial for number theorists, but not for me: is the title meaningful to ask for (<span class="math-container">$p$</span> is a prime)? If so, what's the answer? Thanks</p>
Ekesh Kumar
614,428
<p>Note that <span class="math-container">$x &gt; 0$</span> and <span class="math-container">$y &lt; 1$</span> can only satisfy <span class="math-container">$x &gt; y$</span> on the interval <span class="math-container">$(0, 1)$</span>. </p> <p>We can choose <span class="math-container">$x$</span> arbitrarily on the interval <span class="math-container">$(0, 1)$</span>.</p> <p>We must have <span class="math-container">$y$</span> varying from <span class="math-container">$0$</span> to <span class="math-container">$x$</span> (so that the inequality <span class="math-container">$x &gt; y$</span> is satisfied!).</p> <p>Thus,</p> <p><span class="math-container">$$P(X &gt; Y) = \int_{0}^{x}\int_{0}^{1} f(x, y) \mathop{dx} \mathop{dy}. $$</span></p> <p>Alternatively, we can have <span class="math-container">$y$</span> vary arbitrarily, and we can have <span class="math-container">$x$</span> vary from <span class="math-container">$y$</span> to <span class="math-container">$1$</span>. </p> <p>Therefore,</p> <p><span class="math-container">$$P(X &gt;Y) = \int_{y}^{1} \int_{0}^{1} f(x, y) \mathop{dy} \mathop{dx} $$</span></p>
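A sanity check of the region of integration (a Python sketch with an assumed density, not from the answer): for the special case of $X, Y$ independent and uniform on $(0,1)$ we have $f(x,y)=1$, and integrating over $0 < y < x < 1$ should give $P(X>Y)=\tfrac12$.

```python
def double_integral(f, steps=400):
    # Midpoint rule over the triangle 0 < y < x < 1.
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):           # x direction
        x = (i + 0.5) * h
        for j in range(steps):       # y direction
            y = (j + 0.5) * h
            if y < x:
                total += f(x, y)
    return total * h * h

p = double_integral(lambda x, y: 1.0)
```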
256,415
<p>I need to store values of a function of x and y defined as <code>f[x_, y_] := Sin[Pi*x/3]*Sin[Pi*y/3]</code> at the following points <code>xval = {0, 0.5, 1, 1.5, 2, 2.5, 3, 3.5, 4, 4.5, 5, 5.5, 6, 6.5, 7, 7.5, 8, 8.5, 9, 9.5, 10, 10.5, 11, 11.5, 12, 12.5, 13, 13.5, 14, 14.5, 15};</code>, same for y.</p> <p>I need it to be a matrix of dimensions <code>{961,3}</code>, being the first two columns the values of X and Y and the third, the value of f(x,y) at said points.</p> <p>I have tried to use <a href="https://reference.wolfram.com/language/ref/Outer.html" rel="nofollow noreferrer">outer</a> as <code>Outer[f, xval, yval];</code>, but it provides a result whose dimensions are <code>{31, 31}</code>. Is there a way to reshape the dimensions or an alternative to <code>Outer</code> which allows for the data to be stored as I need?</p>
Henrik Schumacher
38,178
<p>Maybe this is what you want:</p> <pre><code>Map[ X \[Function] {Indexed[X, 1], Indexed[X, 2], f[Indexed[X, 1], Indexed[X, 2]]}, Tuples[{xval, yval}] ] </code></pre>
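For comparison, a hypothetical Python equivalent of the same reshaping (a sketch mirroring the grid and the function from the question): build all 31×31 = 961 pairs and attach the function value as a third column.

```python
import math
from itertools import product

def f(x, y):
    return math.sin(math.pi * x / 3) * math.sin(math.pi * y / 3)

vals = [i * 0.5 for i in range(31)]   # 0, 0.5, ..., 15
table = [[x, y, f(x, y)] for x, y in product(vals, vals)]
```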
4,500,550
<p>I have a very basic question about cohomology of sheaves. Suppose <span class="math-container">$\mathcal{F}$</span> is a sheaf of abelian groups over a topological space <span class="math-container">$X$</span>. Then <span class="math-container">$\mathcal{F}$</span> itself is a topological space with a continuous map <span class="math-container">$\sigma : \mathcal{F} \rightarrow X$</span>.</p> <p>How are the sheaf cohomology groups <span class="math-container">$H^i(X,\mathcal{F})$</span> of <span class="math-container">$X$</span> related to <span class="math-container">$H^i(\mathcal{F},\mathbb{Z})$</span>, the singular cohomology groups of the space <span class="math-container">$\mathcal{F}$</span>?</p> <p>Can we also consider &quot;relative&quot; cohomology groups <span class="math-container">$H^i(\mathcal{F}, X)$</span> using the zero section and relate it to the other cohomology groups, perhaps with a spectral sequence?</p>
Qiaochu Yuan
232
<p>They have nothing much to do with each other. Take the special case that <span class="math-container">$X$</span> is discrete, so a sheaf <span class="math-container">$F$</span> of abelian groups is just a collection <span class="math-container">$F_x, x \in X$</span> of abelian groups indexed by <span class="math-container">$X$</span>. The étale space, which I'll denote <span class="math-container">$Y$</span>, is the disjoint union <span class="math-container">$\bigsqcup_{x \in X} F_x$</span>, so it is again a discrete space, so it satisfies <span class="math-container">$H^0(Y, \mathbb{Z}) \cong \mathbb{Z}^Y$</span> and higher cohomology vanishes. On the other hand the sheaf cohomology is <span class="math-container">$H^0(X, F) \cong \prod_{x \in X} F_x$</span> and higher cohomology vanishes.</p> <p>So the cohomology of <span class="math-container">$Y$</span> is much bigger and not particularly interesting; in particular it is completely insensitive to the <em>group structure</em> on each stalk <span class="math-container">$F_x$</span> (and this generalizes to the general case, since the étale space is only sensitive to the underlying sheaf of sets).</p>
3,572,721
<p>A finite sum of cosine functions weighted with different amplitudes and phases, but with a fixed frequency: <span class="math-container">$$f(x) = \sum_{n=1}^{N}A_{n}\cos(x+\phi_{n})$$</span> The question is: if I were to fit <span class="math-container">$f(x)$</span> with <span class="math-container">$\cos(x)$</span>, what would the amplitude and phase offset be?</p>
mdnestor
519,413
<p>We have: <span class="math-container">$$ \sum_{n=1}^{N}A_n\cos(x+\phi_n)=A\cos(x+\phi)$$</span> Use the angle-sum formula <span class="math-container">$\cos(x+\phi)=\cos x\cos\phi-\sin x\sin\phi$</span>: <span class="math-container">$$ \sum_{n=1}^{N}A_n(\cos x\cos\phi_n-\sin x\sin\phi_n)=A(\cos x\cos\phi-\sin x\sin\phi) $$</span> <span class="math-container">$$ \iff \cos x\sum_{n=1}^{N}A_n\cos\phi_n-\sin x\sum_{n=1}^{N}A_n\sin\phi_n=\cos x(A\cos\phi)-\sin x(A\sin\phi) $$</span> By equating the coefficients on <span class="math-container">$\cos x$</span> and <span class="math-container">$\sin x$</span> we get: <span class="math-container">$$ A\cos\phi = \sum_{n=1}^{N}A_n\cos\phi_n $$</span> <span class="math-container">$$ A\sin\phi = \sum_{n=1}^{N}A_n\sin\phi_n $$</span> Square both equations and add: <span class="math-container">$$ A^2 = \left( \sum_{n=1}^{N}A_n\cos\phi_n \right)^2 + \left( \sum_{n=1}^{N}A_n\sin\phi_n \right)^2 $$</span> <span class="math-container">$$ \implies A = \sqrt{\left( \sum_{n=1}^{N}A_n\cos\phi_n \right)^2 + \left( \sum_{n=1}^{N}A_n\sin\phi_n \right)^2} $$</span> To find the angle, divide <span class="math-container">$A\sin\phi$</span> by <span class="math-container">$A\cos\phi$</span>: <span class="math-container">$$ \tan\phi = \frac{A\sin\phi}{A\cos\phi} = \frac{\sum_{n=1}^{N}A_n\sin\phi_n}{\sum_{n=1}^{N}A_n\cos\phi_n } $$</span> To recover the angle without using a piecewise definition of <span class="math-container">$\arctan$</span> I recommend using the <span class="math-container">$\mbox{atan}2$</span> function, so that <span class="math-container">$$\phi=\mbox{atan}2\left(\sum_{n=1}^{N}A_n\sin\phi_n,\sum_{n=1}^{N}A_n\cos\phi_n\right)$$</span></p>
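A numeric verification of these formulas (a Python sketch with arbitrary sample amplitudes and phases of my choosing): the combined $A\cos(x+\phi)$ should agree with the direct sum at every sample point.

```python
import math

A_n = [1.0, 2.0, 0.5]
phi_n = [0.3, 1.1, -2.0]

C = sum(a * math.cos(p) for a, p in zip(A_n, phi_n))  # A cos(phi)
S = sum(a * math.sin(p) for a, p in zip(A_n, phi_n))  # A sin(phi)
A = math.hypot(C, S)           # sqrt(C^2 + S^2)
phi = math.atan2(S, C)         # quadrant-correct angle

def direct(x):
    return sum(a * math.cos(x + p) for a, p in zip(A_n, phi_n))

def combined(x):
    return A * math.cos(x + phi)

max_err = max(abs(direct(x) - combined(x))
              for x in [k * 0.1 for k in range(100)])
```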
459,067
<p>In set theory, numbers are often constructed, e.g. from nestings of sets which eventually contain the empty set. The operations are defined in terms of taking unions, etc.</p> <p>The extended real number line has plus and minus infinity $\infty,-\infty$ in addition to the reals, and they fulfill <a href="https://en.wikipedia.org/wiki/Extended_real_number_line#Arithmetic_operations" rel="noreferrer">certain axioms</a>.</p> <blockquote> <p>How are the two objects "$\infty$" and "$-\infty$" of the <a href="https://en.wikipedia.org/wiki/Extended_real_number_line" rel="noreferrer">extended real number line</a> $\mathbb R\cup\{-\infty,\infty\}$ modeled in rigorous set theory? </p> </blockquote> <p>I figure it might be related to ordinal numbers. But these are <em>two</em> things and they behave somewhat similarly. So I wonder what they are made of, set theoretically.</p>
The_Sympathizer
11,172
<p>One way to do this is using the Dedekind cut construction of the real numbers from the rationals. If one relaxes the requirement that the left and right sets of a Dedekind cut must be nonempty, one has the two cuts $(\emptyset, \mathbb{Q})$ and $(\mathbb{Q}, \emptyset)$. These can be identified with $-\infty$ and $\infty$, respectively.</p> <p>Then we have, for example, the following as a natural result:</p> <ul> <li>$\infty + \infty = \infty$ (add the elements of the left set of $\infty$, i.e. $\mathbb{Q}$ (that is, form the set $\{ x + y : x \in \mathbb{Q}, y \in \mathbb{Q} \}$, to those of itself and you get $\mathbb{Q}$ again, same goes for the right set)</li> <li>$-\infty + -\infty = -\infty$ (add the elements of the left set of $-\infty$, i.e. $\emptyset$, to those of itself and you get $\emptyset$ again, same goes for the right set)</li> <li>$a + \infty = \infty$ (for a non-infinity cut $a$, note that if you add the elements of the left set of $a$ to $\mathbb{Q}$, you just get $\mathbb{Q}$ again)</li> </ul> <p>all from the usual definitions of addition of Dedekind cuts. Note that $-\infty + \infty$ doesn't work, though, since if we try that, both the left and right set collapse, which is not a valid cut. But we expect that since we usually consider $-\infty + \infty = \infty - \infty$ undefined.</p>
215,227
<p>I'm looking at a past homework solution and there is a part of it I don't understand. Specifically I'm talking about number 2 in <a href="http://www.math.cmu.edu/~af1p/Teaching/Combinatorics/F08/hw5a.pdf" rel="nofollow">this problem set</a>:</p> <blockquote> <p>Let $m=\lfloor(8/7)^{n/3}\rfloor$. Show that there exist distinct sets $A_1,A_2,\dots,A_m\subseteq[n]$ such that for all distinct $i,j,k\in[m]$ we have $A_i\cap A_j\nsubseteq A_k$.</p> </blockquote> <p>Given random subsets $A_1, A_2, A_3,\dots, A_m$ of $[n]$, can someone explain in detail why the probability that $A_i \cap A_j \subseteq A_k$ is equal to $(\frac7 8)^{n}$?</p>
Brian M. Scott
12,042
<p>Suppose that $A_i\cap A_j\subseteq A_k$. Then for each $r\in[n]$ one of the following must be true:</p> <ol> <li>$r\in A_i$ and $r\in A_j$ and $r\in A_k$. </li> <li>$r\in A_i$ and $r\notin A_j$, in which case it doesn’t matter whether $r\in A_k$. </li> <li>$r\notin A_i$ and $r\in A_j$, in which case it doesn’t matter whether $r\in A_k$. </li> <li>$r\notin A_i$ and $r\notin A_j$, in which case it doesn’t matter whether $r\in A_k$. </li> </ol> <p>Since each $r\in[n]$ is in any given $A_s$ with probability $\frac12$, the probability of (1) is $\frac12\cdot\frac12\cdot\frac12=\frac18$. The probability of (2) is $\frac12\cdot\frac12\cdot1=\frac14$, and exactly the same calculation gives the probability of (3) and the probability of (4). These four possibilities are mutually exclusive, so the probability that one of them is true is $\frac18+\frac14+\frac14+\frac14=\frac78$. </p> <p>In other words, for each $r\in[n]$ the probability that $r$ behaves in one of the ways consistent with $A_i\cap A_j\subseteq A_k$ is $\frac78$. This is the case independently for each $r\in[n]$, so the probability that <strong>all</strong> of them behave properly is $\left(\frac78\right)^n$.</p>
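To double-check the $\frac78$ by brute force (my own addition, not part of the original answer), one can enumerate the eight equally likely membership patterns of a single element:

```python
from itertools import product

# Membership pattern of one element r in (A_i, A_j, A_k): 2^3 equally likely cases.
# The only pattern that violates A_i ∩ A_j ⊆ A_k is (r in A_i, r in A_j, r not in A_k).
consistent = sum(
    1
    for in_i, in_j, in_k in product([False, True], repeat=3)
    if not (in_i and in_j and not in_k)
)
p_single = consistent / 8
print(consistent, p_single)   # 7 of the 8 cases are consistent, so p = 7/8

n = 10
print(p_single ** n)          # probability that all n elements behave properly
```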
41,940
<p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p> <p>Using the Wolfram Alpha site, this input gave an almost-square: <code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p> <p>This input gave an almost-octagon: <code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p> <p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p> <p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p> <p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n &lt; 7$ or something), can those equations be provided?</p>
Joel Cohen
10,553
<p>Any polygon (regular or not) can be described by an equation involving only absolute values and polynomials. Here is a small explanation of how to do that.</p> <p>Let's say that a curve $C$ is given by the equation $f$ if we have $C = \{(x,y) \in \mathbb{R}^2, \, f(x,y) = 0\}$.</p> <ul> <li><p>If $C_1$ and $C_2$ are given by $f_1$ and $f_2$ respectively, then $C_1 \cup C_2$ is given by $f_1 . f_2$ and $C_1 \cap C_2$ is given by $f_1^2 + f_2^2$ (or $|f_1| + |f_2|$). So if $C_1$ and $C_2$ can be described by an equation involving absolute values and polynomials, then so do $C_1 \cup C_2$ and $C_1 \cap C_2$.</p></li> <li><p>If $C = \{(x,y) \in \mathbb{R}^2, \, f(x,y) \ge 0\}$, then $C$ is given by the equation $|f|-f$.</p></li> </ul> <p>Now, any segment $S$ can be described as $S = \{(x,y) \in \mathbb{R}^2, \, a x + b y = c, \, x_0 \le x \le x_1, \, y_0 \le y \le y_1\}$, which is given by a single equation by the above principles. And since union of segments also are given by an equation, you get the result.</p> <p>EDIT : For the specific case of the octagon of radius $r$, if you denote $s = \sin(\pi/8)$, $c = \cos(\pi/8)$, then one segment is given by $|y| \le rs$ and $x = rc$, for which an equation is</p> <p>$$f(x, y) = \left||rs - |y|| - (rs - |y|)\right| + |x-rc| = 0$$</p> <p>So I think the octagon is given by</p> <p>$$f(|x|,|y|) \ f(|y|,|x|) \ f\left(\frac{|x|+|y|}{\sqrt{2}}, \frac{|x|-|y|}{\sqrt{2}}\right) = 0$$ </p> <p>To get a general formula for a regular polygon of radius $r$ with $n$ sides, denote $c_n = \cos(\pi/n)$, $s_n = \sin(\pi/n)$ and</p> <p>$$f_n(x+iy) = \left||rs_n - |y|| - (rs_n - |y|)\right| + |x-rc_n|$$</p> <p>then your polygon is given by</p> <p>$$\prod_{k = 0}^{n-1} f_n\left(e^{-\frac{2 i k \pi}{n}} (x+iy)\right) = 0$$</p> <p>Depending on $n$, you can use symmetries to lower the degree a bit (as was done with $n = 8$).</p>
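To see these closure principles in action numerically (my own sketch appended to this answer; the helper names are invented), here is the vertical segment $x = 1$, $-1 \le y \le 1$ realized as the zero set of a single expression built from absolute values and polynomials:

```python
def ge_zero(f):
    """Equation for the region f >= 0: returns g with g(x, y) = 0 iff f(x, y) >= 0."""
    return lambda x, y: abs(f(x, y)) - f(x, y)

def intersection(f1, f2):
    """Equation for {f1 = 0} ∩ {f2 = 0}."""
    return lambda x, y: f1(x, y) ** 2 + f2(x, y) ** 2

def union(f1, f2):
    """Equation for {f1 = 0} ∪ {f2 = 0}."""
    return lambda x, y: f1(x, y) * f2(x, y)

# The vertical segment x = 1, -1 <= y <= 1, as a single equation:
line = lambda x, y: x - 1                 # the line x = 1
band = ge_zero(lambda x, y: 1 - y * y)    # the strip |y| <= 1
segment = intersection(line, band)

print(segment(1.0, 0.5))   # on the segment: 0.0
print(segment(1.0, 2.0))   # on the line but outside the strip: nonzero
print(segment(0.0, 0.5))   # off the line: nonzero
```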
41,940
<p>For example, the square can be described with the equation $|x| + |y| = 1$. So is there a general equation that can describe a regular polygon (in the 2D Cartesian plane?), given the number of sides required?</p> <p>Using the Wolfram Alpha site, this input gave an almost-square: <code>PolarPlot(0.75 + ArcSin(Sin(2x+Pi/2))/(Sin(2x+Pi/2)*(Pi/4))) (x from 0 to 2Pi)</code></p> <p>This input gave an almost-octagon: <code>PolarPlot(0.75 + ArcSin(Sin(4x+Pi/2))/(Sin(4x+Pi/2)*Pi^2)) (x from 0 to 2Pi)</code></p> <p>The idea is that as the number of sides in a regular polygon goes to infinity, the regular polygon approaches a circle. Since a circle can be described by an equation, can a regular polygon be described by one too? For our purposes, this is a regular convex polygon (triangle, square, pentagon, hexagon and so on).</p> <p>It can be assumed that the centre of the regular polygon is at the origin $(0,0)$, and the radius is $1$ unit.</p> <p>If there's no such equation, can the non-existence be proven? If there <em>are</em> equations, but only for certain polygons (for example, only for $n &lt; 7$ or something), can those equations be provided?</p>
Hans-Peter Stricker
1,792
<p>In the context of <a href="https://math.stackexchange.com/questions/3174514/fourier-series-of-regular-polygons">this question</a> you may define for a given number <span class="math-container">$n$</span> some (Fourier) coefficients by these sensible equations (as you have asked for):</p> <p><span class="math-container">$a^{(n)}_k \sim \begin{cases} +k^{-2} &amp; \text{ for } k \equiv 1 \pmod n\\ +k^{-2} &amp; \text{ for } k \equiv (n-1) \pmod n\\ 0 &amp; \text{ otherwise } \end{cases}$</span></p> <p><span class="math-container">$b^{(n)}_k \sim \begin{cases} +k^{-2} &amp; \text{ for } k \equiv 1 \pmod n\\ -k^{-2} &amp; \text{ for } k \equiv (n-1) \pmod n\\ 0 &amp; \text{ otherwise } \end{cases}$</span></p> <p>Then you calculate the functions <span class="math-container">$a^{(n)}(t)$</span> and <span class="math-container">$b^{(n)}(t)$</span> like this:</p> <p><span class="math-container">$a^{(n)}(t) \sim \sum_{k=0}^\infty a^{(n)}_k\cos(kt)$</span></p> <p><span class="math-container">$b^{(n)}(t) \sim \sum_{k=0}^\infty b^{(n)}_k\sin(kt)$</span></p> <p>Finally you draw the curve <span class="math-container">$t \mapsto a^{(n)}(t) + ib^{(n)}(t)$</span> in the complex plane &ndash; and get your desired regular <span class="math-container">$n$</span>-gon.</p> <p>As an example for <span class="math-container">$n=4$</span>:</p> <p><a href="https://i.stack.imgur.com/8TRof.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8TRof.png" alt="enter image description here"></a></p> <p>See more examples in the gallery <a href="https://math.stackexchange.com/questions/3174514/fourier-series-of-regular-polygons">here</a>.</p>
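A quick numerical sanity check (my addition, taking the coefficients to be exactly $k^{-2}$ and truncating the series at an arbitrary point): since every nonzero frequency is $\equiv \pm 1 \pmod n$, even the partial sums satisfy the exact $n$-fold rotational symmetry $z(t+2\pi/n) = e^{2\pi i/n}\, z(t)$ expected of a regular $n$-gon:

```python
import cmath
import math

def polygon_point(n, t, K=2000):
    """Truncated Fourier sum a(t) + i*b(t) with the coefficients defined above."""
    a = b = 0.0
    for k in range(1, K):
        if k % n == 1:
            a += math.cos(k * t) / k**2
            b += math.sin(k * t) / k**2
        elif k % n == n - 1:
            a += math.cos(k * t) / k**2
            b -= math.sin(k * t) / k**2
    return complex(a, b)

n = 4
t = 0.3
z1 = polygon_point(n, t + 2 * math.pi / n)
z2 = cmath.exp(2j * math.pi / n) * polygon_point(n, t)
print(abs(z1 - z2))   # ~0: the n-fold symmetry holds term by term
```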
4,370,760
<p>The two statements are:</p> <p><span class="math-container">$\forall a\in A, \exists b\in B, M(a)\implies N(b)$</span></p> <p><span class="math-container">$(\forall a\in A, M(a))\implies (\exists b\in B, N(b))$</span></p> <p>I am not very good at logic. I feel the second statement implies the first one, and since the existence of <span class="math-container">$b$</span> does not depend on <span class="math-container">$a$</span>, I think the two statements are the same.</p> <p>Am I correct? If not, can anyone give a counterexample with concrete sets and predicates? Thank you.</p>
Arthur
15,500
<p>They are not the same. Consider the following statements:</p> <ul> <li><span class="math-container">$M(a)$</span>: Person <span class="math-container">$a$</span> has a cat</li> <li><span class="math-container">$N(b)$</span>: Person <span class="math-container">$b$</span> has a dog</li> </ul> <p>Then the first statement says that for any person, if that person has a cat, then there is a person with a dog. In other words, if there is a cat, then there is a dog.</p> <p>The second sentence says that if every person has a cat, then someone has a dog.</p> <p>They are seen to be inequivalent if some but not all people have cats, and no one has a dog.</p>
3,910,015
<p>My impression has been that <span class="math-container">$\mathbb{Z_n}$</span> is the set <span class="math-container">$\{0,1,...,n-1\}$</span> under the binary operation of addition modulo <span class="math-container">$n$</span>. However I'm also coming across the notion that <span class="math-container">$\mathbb{Z_n}$</span> is actually the set of equivalence classes of the equivalence relation <span class="math-container">$x\sim y \iff x \equiv y$</span> mod <span class="math-container">$n$</span>, and the addition here is actually addition of equivalence classes rather than simply addition of integers modulo <span class="math-container">$n$</span>. Is this correct? So would it be correct to say <span class="math-container">$\mathbb{Z_n} = \{n,n+1,...,2n-1\}$</span> if we are considering these elements as equivalence classes?</p>
Bill Dubuque
242
<p>Below I explain the general idea, which works not only for rings but also for any algebraic structure definable by <span class="math-container">$\rm\color{#c00}{universal}$</span> equational axioms (&quot;identities&quot;) such as <span class="math-container">$\,\color{#c00}{\forall x,y,z\!:}\,x(y+z) = xy+xz\,$</span>.</p> <p><span class="math-container">$a\equiv b\pmod{\! n}\iff n\mid a-b\,$</span> is a <a href="https://en.wikipedia.org/wiki/Congruence_relation" rel="nofollow noreferrer">congruence relation</a> (on a <a href="https://en.wikipedia.org/wiki/Ring_(mathematics)" rel="nofollow noreferrer">ring</a>) i.e. it is an equivalence relation that is furthermore compatible with all of the ring operations addition and multiplication, i.e. congruence satisfies the following <a href="https://math.stackexchange.com/a/879262/242">Congruence Sum &amp; Product Rules</a></p> <p><span class="math-container">$$\begin{align} a_1\equiv b_1\\ a_2\equiv b_2\end{align}\ \Rightarrow\ \begin{array}{}a_1 + a_2\equiv b_1+b_2\\ a_1 \times a_2\equiv b_1 \times b_2 \end{array}\qquad$$</span></p> <p>This implies that the ring operations descend to well-defined operations on the equivalence classes <span class="math-container">$\,[a] = a+n\Bbb Z\,$</span> via <span class="math-container">$\,[a]+[b] := [a+b],\ [a]\times [b] := [a\times b],\,$</span> and the map <span class="math-container">$\, a\mapsto [a]\,$</span> is a surjective ring hom, which <a href="https://math.stackexchange.com/a/3563794/242">immediately implies</a> that all the ring laws persist in the image <span class="math-container">$\,\Bbb Z_n,\,$</span> so <span class="math-container">$\,\Bbb Z_n\,$</span> has associative and commutative addition and multiplication, connected via the distributive law, so arithmetic in <span class="math-container">$\,\Bbb Z_n\,$</span> is essentially the same as in <span class="math-container">$\,\Bbb Z,\,$</span> except that some elements are forced equal.</p> <p>For computational 
purposes it is often convenient to map the classes to normal (canonical) representatives <span class="math-container">$\,h\,:\, [a]\mapsto \bar a.\,$</span> The most common choice is its least nonnegative element <span class="math-container">$\,\bar a := a\bmod n\,$</span> (remainder reps), but also convenient are least magnitude reps, e.g. <span class="math-container">$\,0,\pm1,\pm2\pmod{\!5}.\,$</span> Generally we can use any complete system of residues, i.e. any <span class="math-container">$\,n\,$</span> integers such that every integer is congruent to exactly one in our set of complete residue reps.</p> <p>Then we can <a href="https://math.stackexchange.com/search?tab=votes&amp;q=user%3a242%20transport%20structure">transport the ring structure</a> to our normal forms by pulling (back) the ring operations along <span class="math-container">$h$</span> to obtain the induced ring operations on the normal forms as follows:</p> <p><span class="math-container">$$\color{#c00}{\bar a} + \bar b = h([a])+h([b]) = h([a]+[b]) = h([a+b]) = \color{#0a0}{\overline{a+b}}$$</span></p> <p>e.g. this becomes <span class="math-container">$\ \color{#c00}{a\bmod n} + b\bmod n\, = \, \color{#0a0}{(a+b)\bmod n}\ $</span> using common normal forms.</p> <p>Said equivalently, to perform an operation on normal forms <span class="math-container">$\,\bar a,\,\bar b,\,$</span> we apply <span class="math-container">$h^{-1}$</span> to map them to their associated classes <span class="math-container">$\,[a],[b],\,$</span> then we perform the operation on the classes yielding <span class="math-container">$\,[a+b],\,$</span> then finally we apply <span class="math-container">$h$</span> to map that result to its normal form <span class="math-container">$\,\overline{a+b}.\,$</span> So the normal forms are essentially canonical &quot;labels&quot; or &quot;names&quot; for their congruence classes. 
We could instead use any set of <span class="math-container">$\,n\,$</span> elements as labels, but using elements from the original ring <span class="math-container">$\,\Bbb Z\,$</span> makes it more intuitive how the normal rep corresponds to the class.</p> <p>As above, in Euclidean domains like <span class="math-container">$\,\Bbb Z\,$</span> and <span class="math-container">$\,F[x]\,$</span>, which enjoy Euclidean division with smaller remainder, it is convenient to use the the remainder (&quot;least Euclidean size&quot;) as the normal rep, which is discussed further <a href="https://math.stackexchange.com/a/646600/242">here</a>, showing how <a href="https://math.stackexchange.com/a/2658/242">Hamilton's pair representation</a> for complex numbers is just a special case of this, viz. <span class="math-container">$\Bbb R[x]\bmod x^2\!+\!1\cong \Bbb R[i]\cong \Bbb C$</span>.</p>
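As a concrete footnote (my addition, not Dubuque's), the compatibility that makes the induced operations on normal forms well defined is easy to machine-check, using Python's nonnegative remainder `a % n` as the normal representative $a \bmod n$:

```python
n = 37

def add_bar(a_bar, b_bar):
    # induced addition on normal forms: (a mod n) + (b mod n) -> (a + b) mod n
    return (a_bar + b_bar) % n

def mul_bar(a_bar, b_bar):
    # induced multiplication on normal forms
    return (a_bar * b_bar) % n

# Congruence sum and product rules: reducing first and operating on
# representatives agrees with operating in Z and then reducing.
for a in range(-50, 50, 7):
    for b in range(-50, 50, 11):
        assert add_bar(a % n, b % n) == (a + b) % n
        assert mul_bar(a % n, b % n) == (a * b) % n
print("transport of structure verified for n =", n)
```

(Python's `%` with a positive modulus always returns the least nonnegative residue, even for negative inputs, which is exactly the remainder normal form used above.)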
79,040
<p>Let <span class="math-container">$X$</span> be a compact Riemann surface, and <span class="math-container">$f$</span> a meromorphic function on X. There's a theorem telling us that <span class="math-container">$\deg(\mathrm{div}(f)) = 0$</span>.</p> <p>But is the inverse statement also true? I mean, is it true that:</p> <blockquote> <p>if <span class="math-container">$D$</span> is a divisor on <span class="math-container">$X$</span> with <span class="math-container">$\deg(D) = 0$</span>, then exists a meromorphic function <span class="math-container">$f$</span> on <span class="math-container">$X$</span> such that <span class="math-container">$D = \mathrm{div}(f)$</span>?</p> </blockquote> <p>Thanks!</p>
rla
18,373
<p>That is not true. Here is a counterexample: consider a Riemann surface $X$ of genus $g \geq 1$. Fix distinct points $p, q \in X$ and consider the divisor $D = p - q$. This divisor has degree $0$, but it is not principal: otherwise there would be a holomorphic map $f: X \rightarrow \overline{\mathbb{C}}$ of degree $1$ (since it has a single simple zero and a single simple pole), and it is well known that such a map is an isomorphism. That is a contradiction, since $g(\overline{\mathbb{C}}) = 0$. Look up the Abel–Jacobi theorem for necessary and sufficient conditions for a divisor to be principal.</p>
1,352,170
<p>I have a homework question that I wasn't positive about. This is the first probability course I have taken and the class is only taught using excel so I apologize for the lack of formulas in my reasoning. </p> <p>The size of a fish in a lake follows a normal distribution with a mean = 1 lb 4 oz and standard deviation = 3 oz. Fish that weigh less than 1 lb 9 oz must be released back into the lake. Bill wants to reach his limit of 5 'keepers'. What is the probability that Bill must catch at least 40 fish in order to reach the limit?</p> <p>To find p I used norm.dist(25,20,3,1) = 0.190787. Then p(x=5) = binom.dist(35,40,p,0) = 0.09530982. Does this seem like the right way to do a problem like this? I'm not sure of how I would do it by hand as my professor does not teach us any formulas other than the excel ones. Any help is very much appreciated!!</p>
Ross Millikan
1,827
<p>Given the clarification that the teams may split up, it takes some work. Let the heads label the new teams. Assign three people to the first head. They might be all from the same team, so you have a similar problem for four teams (but there are fewer constraints). They might be from three separate teams, or two might come from the same team. Each possibility sprouts a different tree. Having chosen the pattern for the first team (which you can compute the number of ways to make that pattern) I would note that $12!=479,001,600$ is a reasonable number of iterations and just check each one.</p>
1,352,170
<p>I have a homework question that I wasn't positive about. This is the first probability course I have taken and the class is only taught using excel so I apologize for the lack of formulas in my reasoning. </p> <p>The size of a fish in a lake follows a normal distribution with a mean = 1 lb 4 oz and standard deviation = 3 oz. Fish that weigh less than 1 lb 9 oz must be released back into the lake. Bill wants to reach his limit of 5 'keepers'. What is the probability that Bill must catch at least 40 fish in order to reach the limit?</p> <p>To find p I used norm.dist(25,20,3,1) = 0.190787. Then p(x=5) = binom.dist(35,40,p,0) = 0.09530982. Does this seem like the right way to do a problem like this? I'm not sure of how I would do it by hand as my professor does not teach us any formulas other than the excel ones. Any help is very much appreciated!!</p>
mjqxxxx
5,546
<p>The Python code below counts the number of possible assignments for $m$ teams of size $n$. It recursively assigns staff members to new teams, keeping track of how many empty slots and unassigned members there are for each team, and aggressively caching partial results.</p> <p>For $n=1$, this is just the number of derangements of $m$ elements. For $n&gt;1$, many of these sequences appear to be in the OEIS as "card-matching numbers" or "dinner-diner matching numbers". For instance, your case (teams of size $3$) appears as <a href="http://oeis.org/A059073" rel="nofollow">OEIS:A059703</a>. The answer to the original question ($5$ teams of size $3$) is $6699824$.</p> <p>Note that a decent approximation can be obtained by counting the number of ways to assign the staff to the $5$ teams in sets of $3$ (this is $15! / (3!)^5$), and then multiplying this by a rough probability that no one is assigned to his old team (each person is on a new team with probability $4/5$, so we can try $(4/5)^{15}$ for the overall probability). 
This gives $$ \frac{15!}{(3!)^5}\left(\frac{4}{5}\right)^{15} \approx 5900000, $$ or about $10\%$ less than the correct figure.</p> <hr> <pre><code>def assignOne(curr, i, j):
    '''Assign one of old team i to new team j.'''
    asLists = list(map(list, curr))
    (src, dst) = (asLists[i], asLists[j])
    if src[0] == 0 or dst[1] == 0:
        return None
    src[0] -= 1
    dst[1] -= 1
    asLists.sort()
    return tuple(map(tuple, asLists))

def count(curr, cache={}):
    if max(map(max, curr)) == 0:
        return 1
    if curr in cache:
        return cache[curr]
    allNext = {}
    nn = len(curr)
    for j in range(nn - 1):
        key = assignOne(curr, nn - 1, j)
        if key:
            allNext[key] = allNext.get(key, 0) + 1
    ret = sum(allNext[nxt] * count(nxt, cache) for nxt in allNext)
    cache[curr] = ret
    return ret

def countWays(numTeams, teamSize):
    curr = [(teamSize, teamSize)] * numTeams
    curr = tuple(map(tuple, curr))
    return count(curr)

&gt;&gt;&gt; [countWays(x, 1) for x in range(2, 12)]
[1, 2, 9, 44, 265, 1854, 14833, 133496, 1334961, 14684570]
&gt;&gt;&gt; [countWays(x, 2) for x in range(2, 9)]
[1, 10, 297, 13756, 925705, 85394646, 10351036465]
&gt;&gt;&gt; [countWays(x, 3) for x in range(2, 9)]
[1, 56, 13833, 6699824, 5691917785, 7785547001784, 16086070907249329]
</code></pre>
281,387
<p>I've been reading Girard et al's 'Proofs and Types', which in Chapter 6 presents a proof of strong normalisation for the simply typed lambda calculus with products and base types. The proof is based on Tait's method of defining a set of 'reducible terms for type $T$' by induction on the type $T$. For example a term $t$ is reducible for type $A\to B$ if for all reducible $u$ for type $A$, $t\,u$ is reducible for type $B$.</p> <p>However I don't see how to extend this proof to (binary) sums, which are covered in a later chapter, because the elimination for sums involves a 'parasitic' type which prevents one from defining reducible terms of type $A+B$ via the elimination by induction on $A+B$.</p> <p>How (if at all) does this proof style extend to sums? I can't find a good reference that presents this.</p>
Damiano Mazza
45,027
<p>Let me add to the above (perfectly correct) answers that there is a more general perspective, which works for all positive connectives, of which sums are a special case.</p> <p>The fundamental insight is that reducibility actually arises from a duality between terms and contexts, which is itself parametric in a set of terms closed under certain operations (such as $\beta$-expansion, but I think something trickier must be used for strong normalization), called a <em>pole</em>. Given a pole $P$, one defines</p> <p>$$\mathsf C\perp t\quad\text{just if}\quad\mathsf C[t]\in P$$</p> <p>where $\mathsf C$ is a context (a term with a hole denoted by $\bullet$) and $t$ a term. The idea is that $P$ is a set of terms which "behave well" (e.g. they have a normal form), so $\mathsf C\perp t$ means that the context $\mathsf C$ and the term $t$ "interact well" with respect to the property expressed by $P$.</p> <p>Given a set $S$ of contexts, one may define a set of terms $$S^\bot:=\{t\mathrm{|}\forall\mathsf C\in S,\,\mathsf C\perp t\}.$$ A similar definition induces a set of contexts from a set of terms $T$, which we still denote by $T^\bot$.</p> <p>We may then define the interpretation of a type $A$, which is a set of contexts denoted by $|A|$, by induction on $A$: $$ \begin{array}{rcl} |X| &amp;:=&amp; \{\bullet\} \\ |A\to B| &amp;:=&amp; \{\mathsf C[\bullet\,u]\mathrel{|} u\in|A|^\bot,\ \mathsf C\in|B|\} \\ |A\times B| &amp;:=&amp; \{\mathsf C[\pi_1\bullet]\mathrel{|}\mathsf C\in|A|\}\cup\{\mathsf C[\pi_2\bullet]\mathrel{|}\mathsf C\in|B|\}\\ |A+B| &amp;:=&amp; \{\mathsf C\mathrel{|}\mathsf C[\mathsf{inl}\,\bullet]\in |A|^{\bot\bot}\text{ and }\mathsf C[\mathsf{inr}\,\bullet]\in|B|^{\bot\bot}\} \end{array} $$ Finally, we let $$\|A\|:=|A|^\bot,$$ which is a set of terms. 
This is the "reducibility predicate" on $A$.</p> <p>One then goes on to prove the so-called <em>adequacy theorem</em>: if $$x_1:C_1,\ldots,x_n:C_n\vdash t:A$$ is derivable, then for all $u_1,\ldots,u_n$, $u_i\in\|C_i\|$ implies $t[u_1/x_1,\ldots,u_n/x_n]\in\|A\|$. This is proved by induction on the last rule of the derivation, using the properties of the pole (closure under $\beta$-expansion or something like that).</p> <p>Now, if things are setup correctly, one also has $x\in\|A\|$ as well as $\|A\|\subseteq P$ for every variable $x$ and every type $A$, which implies that every term belongs to the pole by adequacy, i.e., every term "behaves well".</p> <p>I am sure that this works for <em>weak</em> normalization, i.e., one may take $P$ to be the set of weakly normalizing terms (which is closed under $\beta$-expansion) and immediately obtain weak normalization of the simply-typed $\lambda$-calculus with sums. For strong normalization, some kind of hack is needed, because $\beta$-expanding a strongly normalizable term does not necessarily yield a strongly normalizable term, but I'm sure that a suitable definition will make things work.</p> <p>This "reducibility by duality" may be found already in Girard's <em>Linear logic</em> paper (1987), although I'm not sure he is the one who introduced it. It has been used by many authors since then and of course it easily extends to higher order quantification. In particular, the above presentation is adapted from Krivine, who uses it as the basis of his classical realizablity theory (see his many papers on the subject, all available on his web page).</p>
98
<p>I am looking for a reference (book or article) that poses a problem that seems to be a classic, in that I've heard it posed many times, but that I've never seen written anywhere: that of the possibility of a man in a circular pen with a lion, each with some maximum speed, avoiding capture by that lion. </p> <p>References to pursuit problems in general would also be appreciated, and the original source of this problem.</p>
BBischof
16
<p>Here is a book on this type of problem</p> <ul> <li>Paul J. Nahin, <a href="http://books.google.com/books?id=HeESjfM2geUC" rel="nofollow noreferrer">Chases and escapes: the mathematics of pursuit and evasion</a>, Princeton University Press, 2007.</li> </ul> <p>it is also briefly mentioned in his other book &quot;Euler's Fabulous Formula&quot;.</p>
98
<p>I am looking for a reference (book or article) that poses a problem that seems to be a classic, in that I've heard it posed many times, but that I've never seen written anywhere: that of the possibility of a man in a circular pen with a lion, each with some maximum speed, avoiding capture by that lion. </p> <p>References to pursuit problems in general would also be appreciated, and the original source of this problem.</p>
Aidan
2,506
<p>The book is <em>The Art of Mathematics: Coffee Time in Memphis</em> by Bollobás. It's the first problem, and there are loads more:</p> <p><a href="http://rads.stackoverflow.com/amzn/click/0521693950" rel="noreferrer">http://www.amazon.com/Art-Mathematics-Coffee-Time-Memphis/dp/0521693950</a></p>
2,480,493
<p>I have a problem with what looks like a very easy equation to solve: $\left|\frac{1 + z}{1- i\overline z}\right| = 1$. ($z$ is a complex number, $\overline z$ is the conjugate of $z$.) I got stuck at the point where, after replacing $\overline z = a-bi $ and $z = a +bi$ and getting rid of the absolute value, I end up with $a^2 +a-b =0$. I have no idea how to follow this up or whether I should take a totally different approach from the beginning. I'd be very grateful if someone could guide me toward the right solution.</p>
Andreas Blass
48,510
<p>You want $|1+z|=|1-i\bar z|$. The left side here is the distance from $z$ to $-1$. The right side equals $|i+\bar z|$ which in turn equals $|-i+z|$, the distance from $z$ to $i$. So $z$ satisfies your equation iff it is equidistant from $-1$ and $i$. These $z$'s form a straight line in the complex plane, whose equation you can find by drawing a picture.</p>
2,480,493
<p>I have a problem with what looks like a very easy equation to solve: $\left|\frac{1 + z}{1- i\overline z}\right| = 1$. ($z$ is a complex number, $\overline z$ is the conjugate of $z$.) I got stuck at the point where, after replacing $\overline z = a-bi $ and $z = a +bi$ and getting rid of the absolute value, I end up with $a^2 +a-b =0$. I have no idea how to follow this up or whether I should take a totally different approach from the beginning. I'd be very grateful if someone could guide me toward the right solution.</p>
Community
-1
<p>Hint: Let $z=a+ib$, then $|\frac{1+z}{1-i\overline{z}}|=\frac{|1+z|}{|1-i\overline{z}|}=\frac{|(1+a)+ib|}{|(1-b)-ia|}=\frac{\sqrt{(1+a)^2+b^2}}{\sqrt{(1-b)^2+a^2}}=\frac{\sqrt{1+a^2+2a+b^2}}{\sqrt{1+b^2-2b+a^2}}=1$ </p> <p>Now this equality holds if and only if $2a=-2b$, i.e. $a=-b$. In that case $iz=ai-b=a-bi=\overline{z}$, so the solutions are exactly the $z=a+ib$ on the line $b=-a$, equivalently those satisfying $iz=\overline{z}$.</p>
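A quick numerical confirmation of this characterization (my addition; `modulus` is just a helper name):

```python
def modulus(z):
    # |(1 + z) / (1 - i * conj(z))|
    return abs((1 + z) / (1 - 1j * z.conjugate()))

# points with a = -b lie on the solution line ...
for a in (0.5, -2.0, 3.25):
    print(modulus(complex(a, -a)))     # 1.0 (up to rounding) each time

# ... while a generic point does not
print(modulus(complex(1.0, 1.0)))      # sqrt(5), not 1
```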
3,515,656
<p>Given <span class="math-container">$V(x) = -5x_1^2 (x_1^2 - 1) - 4x_1x_2$</span>. Show that there exist <span class="math-container">$x_0 \in \mathbb{R}^2$</span> arbitrarily close to <span class="math-container">$x = 0$</span> such that <span class="math-container">$V(x_0) &gt; 0$</span>. I have no idea how to prove this. I was thinking of plugging in some small numbers <span class="math-container">$x_1, x_2$</span> that satisfy the condition, but this is not a formal proof. Can you help me to solve this? Thank you.</p>
Siong Thye Goh
306,553
<p>Let <span class="math-container">$0&lt;\epsilon &lt; 1$</span> and <span class="math-container">$(x_1, x_2) = (\epsilon, 0)$</span>, then we have </p> <p><span class="math-container">$$V(x)=-5\epsilon^2(\epsilon^2-1)&gt;0$$</span></p>
3,515,656
<p>Given <span class="math-container">$V(x) = -5x_1^2 (x_1^2 - 1) - 4x_1x_2$</span>. Show that there exist <span class="math-container">$x_0 \in \mathbb{R}^2$</span> arbitrarily close to <span class="math-container">$x = 0$</span> such that <span class="math-container">$V(x_0) &gt; 0$</span>. I have no idea how to prove this. I was thinking of plugging in some small numbers <span class="math-container">$x_1, x_2$</span> that satisfy the condition, but this is not a formal proof. Can you help me to solve this? Thank you.</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$V(\frac 1 n , \frac 1 n)=-\frac 5 {n^{4}} +\frac 5 {n^{2}}-\frac 4 {n^{2}} &gt;0$</span> for all <span class="math-container">$n \geq 3$</span>. Every neighborhood of the origin contains the points <span class="math-container">$(\frac 1 n , \frac 1 n)$</span> for infinitely many values of <span class="math-container">$n$</span>. Hence we can take <span class="math-container">$x_0=(\frac 1 n , \frac 1 n)$</span> with <span class="math-container">$n$</span> large. </p>
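For illustration (my addition), a direct check that <span class="math-container">$V(\frac 1 n, \frac 1 n) &gt; 0$</span> for all <span class="math-container">$n \geq 3$</span>:

```python
def V(x1, x2):
    return -5 * x1**2 * (x1**2 - 1) - 4 * x1 * x2

# V(1/n, 1/n) = -5/n^4 + 5/n^2 - 4/n^2 = (n^2 - 5)/n^4 > 0 for n >= 3
for n in range(3, 100):
    assert V(1 / n, 1 / n) > 0
print(V(1 / 3, 1 / 3))   # 4/81, the smallest of these values
```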
4,435,945
<p>Near point <span class="math-container">$(0,0)$</span> find taylor formula for <span class="math-container">$f(x,y)=\ln(1+x+y)$</span></p> <p><span class="math-container">$f(x)=f(x_0)+\sum_{k=1}^n \frac{1}{k!}d^kf(x_0)+o(|x-x_0|^n)$</span></p> <p><span class="math-container">$$d^kf(0,0)=\sum_{n=1}^k\binom{k}{n}\frac{\partial^kf}{\partial x^n\partial y^{k-n}}(0,0)x^ny^{k-n}$$</span></p> <p><span class="math-container">$\frac{\partial^kf}{\partial x^n \partial y^{k-n}}=\frac{(-1)^k k!}{(1+x+y)^k}$</span></p> <p><span class="math-container">$d^kf(0,0)=\sum_{n=0}^k \binom{k}{n} (-1)^{k-1}k!x^n y^{k-n} = (-1)^{k-1} k!(x+y)^n$</span></p> <p>Plugging in the formula I don't get the answer. Answer in the book is <span class="math-container">$\sum_{n=1} ^m \frac{(-1)^{n-1}(x+y)^n}{n} + o((x^2+y^2)^\frac{m}{2})$</span></p>
trancelocation
467,003
<p>A bit of a late answer, but I think it's worth mentioning because it doesn't need any Gram-Schmidt:</p> <p>Note that the direction vector <span class="math-container">$\begin{pmatrix} 0 \\ 2 \\ 1 \end{pmatrix}$</span> of <span class="math-container">$X_2$</span> is also a spanning vector of <span class="math-container">$P$</span>. Hence, the line <span class="math-container">$X_2$</span> is parallel to <span class="math-container">$P$</span>.</p> <p>The line <span class="math-container">$X_2$</span> passes through point <span class="math-container">$(0 , -2 , 5)$</span>. So, you need to find the orthogonal projection of this point onto <span class="math-container">$P$</span>.</p> <p>Since <span class="math-container">$P$</span> contains point <span class="math-container">$(0,0,0)$</span>, you only need to project <span class="math-container">$\begin{pmatrix} 0 \\ -2 \\ 5\ \end{pmatrix}$</span> onto a normal vector of <span class="math-container">$P$</span> in order to find the component of <span class="math-container">$\begin{pmatrix} 0 \\ -2 \\ 5\ \end{pmatrix}$</span> which is normal to <span class="math-container">$P$</span>.</p> <p>A normal vector of <span class="math-container">$P$</span> is quickly found to be, for example, <span class="math-container">$\begin{pmatrix} -1 \\ -1 \\ 2\ \end{pmatrix}$</span>. Hence, the position vector of the point you are looking for is</p> <p><span class="math-container">$$\begin{pmatrix} 0 \\ -2 \\ 5\ \end{pmatrix}- \frac{\begin{pmatrix} 0 \\ -2 \\ 5\ \end{pmatrix}\cdot\begin{pmatrix} -1 \\ -1 \\ 2\ \end{pmatrix}}{\left|\begin{pmatrix} -1 \\ -1 \\ 2\ \end{pmatrix}\right|^2}\begin{pmatrix} -1 \\ -1 \\ 2\ \end{pmatrix} = \begin{pmatrix} 2 \\ 0 \\ 1\ \end{pmatrix}$$</span></p>
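The arithmetic in the final display can be verified mechanically (a check added for illustration, using plain Python tuples for the vectors):

```python
p = (0.0, -2.0, 5.0)    # point on the line X2
n = (-1.0, -1.0, 2.0)   # a normal vector of the plane P

dot = lambda u, v: sum(a * b for a, b in zip(u, v))

t = dot(p, n) / dot(n, n)                       # 12 / 6 = 2
foot = tuple(pi - t * ni for pi, ni in zip(p, n))
print(foot)   # (2.0, 0.0, 1.0), the orthogonal projection of p onto P
```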
621
<p>The OP of this question is asking if <a href="https://mathematica.stackexchange.com/questions/9639">https://mathematica.stackexchange.com/questions/9639</a> could be reopened. I did not want to act unilaterally, so I want to ask the rest of you whether you think the question should be reopened. Plead your case for or against reopening that question here.</p> <hr> <p>The intent with the poll answers seems to have been misunderstood, so: <em>they're not supposed to be downvoted</em>. It's a poll; vote up whichever of the two you agree with. Otherwise, this confounds the vote counting.</p>
J. M.'s persistent exhaustion
50
<p>For those who'd rather vote than write an answer:</p> <p>Vote this answer up if you think <a href="https://mathematica.stackexchange.com/questions/9639">https://mathematica.stackexchange.com/questions/9639</a> should be reopened.</p>
1,609,854
<p>If a polyhedron is made only of pentagons and hexagons, how many pentagons can it contain? With the assumption of three polygons per vertex, one can prove there are 12 pentagons.</p> <p>Let's not make that assumption, and only use pentagons. </p> <p>12 pentagons: dodecahedron and <a href="https://en.wikipedia.org/wiki/Dodecahedron#Tetartoid" rel="nofollow noreferrer">tetartoid</a>.<br> 24 pentagons: <a href="https://en.wikipedia.org/wiki/Pentagonal_icositetrahedron" rel="nofollow noreferrer">pentagonal icositetrahedron</a>.<br> 60 pentagons: <a href="https://en.wikipedia.org/wiki/Pentagonal_hexecontahedron" rel="nofollow noreferrer">pentagonal hexecontahedron</a>.<br> 72 pentagons: <a href="http://dmccooey.com/polyhedra/DualSnubTruncatedOctahedron.html" rel="nofollow noreferrer">dual snub truncated octahedron</a>.<br> 132 pentagons: <a href="http://dmccooey.com/polyhedra/132Pentagons.html" rel="nofollow noreferrer">132-pentagon polyhedron</a>.<br> 180 pentagons: <a href="http://dmccooey.com/polyhedra/DualSnubTruncatedIcosahedron.html" rel="nofollow noreferrer">dual snub truncated icosahedron</a>. </p> <p>Here's what the 132 looks like. </p> <p><a href="https://i.stack.imgur.com/TW9h8.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TW9h8.gif" alt="132 pentagon polyhedron"></a></p> <p>In that range of 12 to 180, what values are missing? 
For values missing here where an all-pentagon polyhedron exists, what is the most symmetrical polyhedron for that value?</p> <p>Edit: According to <a href="http://www.emis.de/journals/JGAA/accepted/2011/HasheminezhadMcKayReeves2011.15.3.pdf" rel="nofollow noreferrer">Hasheminezhad, McKay, and Reeves</a>, there are planar graphs that lead to 16, 18, 20, and 22 pentagonal faces, but I've never seen these polyhedra.</p> <p>16 would be the dual of the <a href="https://en.wikipedia.org/wiki/Snub_square_antiprism" rel="nofollow noreferrer">snub square antiprism</a>.<br> 20 would be the dual of this graph:<br> <a href="https://i.stack.imgur.com/GDnez.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GDnez.gif" alt="quintic 20,1"></a></p>
user159517
159,517
<p>Hint: Find the Laurent expansion of $1/z$ and differentiate.</p>
804,673
<p>EDIT: Small Typo Fixed now, Thanks to Sir Chen Wang!</p> <p>Hi I am trying to prove this result without using a series approach $$ \int_0^1\log(1+x)\frac{1+x^2}{(1+x)^4}dx=-\frac{\log 2}{3}+\frac{23}{72}. $$I know we can just solve it by writing $$ \sum_{n=1}^\infty \frac{(-1)^{n+1}}{n}\left( \int_0^1\frac{x^n}{(1+x)^4}dx+\int_0^1 \frac{x^{2+n}}{(1+x)^4}dx\right), $$ which leads to summing a bunch of Harmonic numbers which is not so easy.<br> This method is brute force but I often have trouble summing harmonic numbers, can we prove it another way?</p> <p>Thanks</p>
Ben Grossmann
81,360
<p>Not a solution, but a possible step in the right direction (too long for a comment):</p> <p>Following Cramer's rule, we note that the solution $\vec y = (y_1,\dots,y_{2014})^T$ to $$X \,\vec y = \pmatrix{ \frac{2^2}{4} &amp; \frac{3^2}{5} &amp; \dfrac{4^2}{6} &amp; \cdots &amp; \dfrac{2014^2}{2016} }^T$$ will satisfy $$ y_1 = \det(Y^T)/\det(X) = [\det(XY^{-1})]^{-1} $$ With that in mind: if there is another way to solve this equation, it will probably be easier than actually computing the determinant.</p> <p>Note also that $X$ is a <a href="https://en.wikipedia.org/wiki/Vandermonde_matrix" rel="nofollow">Vandermonde</a> matrix.</p>
Will Orrick
3,736
<p>Consider something a bit more general. Let $$X=\left[ {\begin{array}{cc} 1 &amp; x_1 &amp; x_1^2 &amp; \cdots &amp; x_1^{n-1} \\ 1 &amp; x_2 &amp; x_2^2 &amp; \cdots &amp; x_2^{n-1} \\ 1 &amp; x_3 &amp; x_3^2 &amp; \cdots &amp; x_3^{n-1} \\ \vdots &amp; \vdots &amp; \vdots &amp; \cdots &amp; \vdots \\ 1 &amp; x_n &amp; x_n^2 &amp; \cdots &amp; x_n^{n-1} \\ \end{array} } \right], $$ and $$Y=\left[ {\begin{array}{cc}\frac{x_1^2}{x_1+r} &amp; \frac{x_2^2}{x_2+r} &amp; \dfrac{x_3^2}{x_3+r} &amp; \cdots &amp; \dfrac{x_n^2}{x_n+r} \\ x_1 &amp; x_2 &amp; x_3 &amp; \cdots &amp; x_n \\ x_1^2 &amp; x_2^2 &amp; x_3^2 &amp; \cdots &amp; x_n^2 \\ \vdots &amp; \vdots &amp; \vdots &amp; \cdots &amp; \vdots \\ x_1^{n-1} &amp; x_2^{n-1} &amp; x_3^{n-1} &amp; \cdots &amp; x_n^{n-1} \\ \end{array} } \right]. $$</p> <p>Your problem corresponds to $x_i=i+1,$ $n=2013,$ $r=2.$</p> <p>Let $Y'$ be the matrix obtained by rescaling the columns of $Y$ by multiplying column $i$ by $(x_i+r)/x_i.$ So $$\det Y'=\det Y\prod_{i=1}^n\frac{x_i+r}{x_i} $$ and $$Y'=\left[ {\begin{array}{cc}x_1 &amp; x_2 &amp; x_3 &amp; \cdots &amp; x_n \\ x_1+r &amp; x_2+r &amp; x_3+r &amp; \cdots &amp; x_n+r \\ x_1(x_1+r) &amp; x_2(x_2+r) &amp; x_3(x_3+r) &amp; \cdots &amp; x_n(x_n+r) \\ \vdots &amp; \vdots &amp; \vdots &amp; \cdots &amp; \vdots \\ x_1^{n-2}(x_1+r) &amp; x_2^{n-2}(x_2+r) &amp; x_3^{n-2}(x_3+r) &amp; \cdots &amp; x_n^{n-2}(x_n+r) \\ \end{array} } \right]. $$</p> <p>Now show that $$\det Y'=-r\det X,$$ from which your determinant can easily be evaluated. This is done by a series of row operations on $Y'.$ First subtract row $1$ from row $2.$ Then swap the first two rows. Then divide row $1$ by $r$. 
(These steps account for the factor $-r.$) We now have the matrix $$\left[ {\begin{array}{cc}1 &amp; 1 &amp; 1 &amp; \cdots &amp; 1 \\ x_1 &amp; x_2 &amp; x_3 &amp; \cdots &amp; x_n \\ x_1(x_1+r) &amp; x_2(x_2+r) &amp; x_3(x_3+r) &amp; \cdots &amp; x_n(x_n+r) \\ \vdots &amp; \vdots &amp; \vdots &amp; \cdots &amp; \vdots \\ x_1^{n-2}(x_1+r) &amp; x_2^{n-2}(x_2+r) &amp; x_3^{n-2}(x_3+r) &amp; \cdots &amp; x_n^{n-2}(x_n+r) \\ \end{array} } \right]. $$ Now subtract $r$ times row $2$ from row $3$. Then subtract $r$ times row $3$ from row $4.$ Continue in this way, finally subtracting $r$ times row $n-1$ from row $n.$ The resulting matrix will be $X^T.$</p>
2,713,853
<p>This is the question: "Let X and Y be independent random variables representing the lifetime (in 100 hours) of Type A and Type B light bulbs, respectively. Both variables have exponential distributions, and the mean of X is 2 and the mean of Y is 3. What is the variance of the total lifetime of two Type A bulbs and one Type B bulb? "</p> <p>This is the answer: $$\operatorname{Var}(2X+Y)=2^2 \operatorname{Var}(X)+\operatorname{Var}(Y)=4\times 4+9=25$$</p> <p>I understand that $\operatorname{Var}(2X)=2^2 \operatorname{Var}(X)$, but why are $\operatorname{Var}(X) = 4$ and $\operatorname{Var}(Y)=9$? I also know that $\operatorname{Var}(X) = \sigma^2$. However, I also observed that $\operatorname{Var}(X) = \mu^2$ in this case.</p>
V. Vancak
230,329
<p>Note that if $X\sim \mathcal{E}xp(\lambda)$, then $\mathbb{E}[X] = 1/\lambda$. Hence, if $A$'s mean lifetime is $2$, that is $\mathbb{E}[X] = 2 = 1/\lambda \to \lambda =1/2.$ As such, $\operatorname{var}(X) = 1/\lambda^2 = 1/(1/2)^2 = 4$. The same is true for type $B$. </p>
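As a sanity check, a short numpy simulation (an illustration, not part of the original answer) reproduces these variances and the book's value of $25$ for $\operatorname{Var}(2X+Y)$:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 10**6

# X ~ Exp(mean 2) => lambda = 1/2, Var(X) = 1/lambda^2 = 4
# Y ~ Exp(mean 3) => lambda = 1/3, Var(Y) = 9
X = rng.exponential(scale=2.0, size=m)
Y = rng.exponential(scale=3.0, size=m)

print(X.var(), Y.var())       # approximately 4 and 9
print((2 * X + Y).var())      # approximately 4*4 + 9 = 25

assert abs(X.var() - 4) < 0.1
assert abs(Y.var() - 9) < 0.2
assert abs((2 * X + Y).var() - 25) < 0.5
```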
679,341
<p>The 4 bytes used in CDROM sectors are just a usual 32-bit CRC. It uses the polynomial </p> <pre><code>P(x) = (x^16 + x^15 + x^2 + 1).(x^16 + x^2 + x + 1) </code></pre> <p>which expands to </p> <pre><code>x^32 + x^31 + 2x^18 + 2x^17 + 3x^16 + x^15 + x^4 + x^3 + 2x^2 + x + 1 </code></pre> <p>The CRC process reverses the bits of the input bytes and the final CRC value. It is stored in big endian format in the sector.</p> <p>So I have to pass this polynomial into the crc32 algorithm.</p> <p>How do I convert this polynomial expression into binary form? Any explanation will be appreciated.</p>
Hagen von Eitzen
39,174
<p>This boils down to a sequence of <em>shift</em> and <em>xor</em> operations; see for example <a href="http://en.wikipedia.org/wiki/Cyclic_redundancy_check" rel="nofollow">here</a> for an explanation, including an example calculation.</p>
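Concretely, the coefficients live in GF(2), so the $2$'s and $3$'s in the integer expansion reduce mod $2$, leaving $x^{32}+x^{31}+x^{16}+x^{15}+x^4+x^3+x+1$. A short Python sketch (an illustration) performs the GF(2) multiplication directly with shifts and xors; dropping the implicit leading $x^{32}$ bit gives the 32-bit constant commonly quoted for this CRC, <code>0x8001801B</code>:

```python
# Carry-less multiplication in GF(2)[x]: coefficients are bits, addition is XOR.
def gf2_mul(a: int, b: int) -> int:
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

p1 = (1 << 16) | (1 << 15) | (1 << 2) | 1   # x^16 + x^15 + x^2 + 1
p2 = (1 << 16) | (1 << 2) | (1 << 1) | 1    # x^16 + x^2 + x + 1

p = gf2_mul(p1, p2)
print(hex(p))                # 0x18001801b : x^32+x^31+x^16+x^15+x^4+x^3+x+1
print(hex(p & 0xFFFFFFFF))   # 0x8001801b  : same with the implicit x^32 dropped
```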
427,663
<p>If $|z|=|w|=1$, and $1+zw \neq 0$, then $ {{z+w} \over {1+zw}} \in \Bbb R $</p> <p>i found one link that had a similar problem. <a href="https://math.stackexchange.com/questions/343982/prove-if-z-1-and-w-1-then-1-zw-neq-0-and-z-w-over-1">Prove if $|z| &lt; 1$ and $ |w| &lt; 1$, then $|1-zw^*| \neq 0$ and $| {{z-w} \over {1-zw^*}}| &lt; 1$</a></p>
Berci
41,488
<p><strong>Hint:</strong> For complex numbers $a,b$ we have $a/b\in\Bbb R\iff a\bar b\in\Bbb R$.</p>
1,769,596
<p><strong>Question</strong> Calculate the critical point(s) for the following function $$f =\sin(x)+\cos(y) +\cos(x-y)$$</p> <p><em>Hint:</em> Use the trigonometric identity $$\sin(\alpha) +\sin(\beta) = 2\sin(\frac{\alpha+\beta}{2})\cos(\frac{\alpha -\beta}{2})$$</p> <p><strong>My attempt:</strong></p> <p>$$f_x=\cos(x) -\sin(x-y)=0$$ $$f_y= -\sin(y)+\sin(x-y)=0$$</p> <p>However, I can't seem to get this into a form in which I can calculate the critical points or even use the identity given.</p>
Christian Blatter
1,303
<p>From the two equations concerning $f_x$, $f_y$ you obtained it immediately follows that $$\sin y=\sin(x-y)=\cos x\ .$$ Now $\sin y=\cos x$ is equivalent with $${\rm (i)} \quad y={\pi\over2}+x\qquad\vee\qquad{\rm (ii)} \quad y={\pi\over2}-x$$ (all angles are modulo $2\pi$).</p> <p>In case (i) we then obtain $\cos x=\sin(x-y)=-1$, with consequences.</p> <p>In case (ii) we obtain $$\cos x=\sin(x-y)=\sin\bigl(2x-{\pi\over2}\bigr)=-\cos(2x)=1-2\cos^2 x\ ,$$ with consequences.</p>
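The two cases can be verified numerically; this short Python check (an illustration, not part of the original answer) confirms one critical point from each case:

```python
from math import cos, sin, pi, isclose

def fx(x, y):  # partial derivative f_x
    return cos(x) - sin(x - y)

def fy(x, y):  # partial derivative f_y
    return -sin(y) + sin(x - y)

# Case (i):  y = pi/2 + x with cos x = -1, e.g. (x, y) = (pi, 3*pi/2).
# Case (ii): y = pi/2 - x with cos x = 1 - 2*cos(x)^2, i.e. cos x in {1/2, -1};
#            cos x = 1/2 gives (x, y) = (pi/3, pi/6).
candidates = [(pi, 3 * pi / 2), (pi / 3, pi / 6)]

for x0, y0 in candidates:
    assert isclose(fx(x0, y0), 0.0, abs_tol=1e-12)
    assert isclose(fy(x0, y0), 0.0, abs_tol=1e-12)
    print(f"({x0:.4f}, {y0:.4f}) is a critical point")
```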
2,835,177
<p>For example, take 6:</p> <p>converted to base 2, it is 110;</p> <p>the number of 1's is 2 (prime), and the number of 0's is 1 (prime - 1).</p> <p>Or 496 = (111110000)_2:</p> <p>5 (prime) ones and 4 zeros.</p> <p>Is this always correct?</p>
BruceET
221,800
<p><strong>Extended Comment:</strong> As indicated in the Comment by @HagenvonEitzen, one way to work the initial problem (on the probability D6 shows a larger value than D10) is to enumerate cases. In particular, you might make a $10 \times 6$ array of possible pairs of outcomes and highlight the pairs that satisfy your condition. When I did that, it was pretty clear that there are 15 favorable outcomes out of 60, so the probability $P(\text{D6 &gt; D10}) = 1/4.$ </p> <pre><code>D10\D6 1 2 3 4 5 6 1 11 12* 13* 14* 15* 16* 2 21 22 23* 24* 25* 26* 3 31 32 33 34* 35* 36* 4 41 42 43 44 45* 46* 5 51 52 53 54 55 56* 6 61 62 63 64 65 66 ... </code></pre> <p>A brief simulation can sometimes help to provide insurance against miscounting outcomes. In a simulation of a million pairs of dice rolls (D6 and D10), results ought to give two place accuracy, and that was the result. (The code is for R statistical software: <code>event</code> is a logical vector with a million <code>TRUE</code>s and <code>FALSE</code>s, and its <code>mean</code> is its proportion of <code>TRUE</code>s.)</p> <pre><code>m = 10^6 event = replicate(m, sample(1:6, 1) &gt; sample(1:10, 1)) mean(event) [1] 0.250383 # aprx 15/60 = 1/4 </code></pre> <p>Using the array method might suggest a way to generalize to cases in which the two dice have <em>consecutively</em> numbered faces, such as in the 'generalization' of your first paragraph. I will leave it to you to figure that out. Or maybe you can see how to generalize Hagen von Eitzen's computation.</p> <p>My guess is that it will be considerably messier to solve the 'bonus' problem.</p>
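For comparison, the exact count can also be done with a one-line enumeration (in Python here, as an illustration):

```python
from fractions import Fraction

# All 60 equally likely (D10, D6) pairs; count those with D6 > D10.
favorable = sum(1 for d10 in range(1, 11) for d6 in range(1, 7) if d6 > d10)
total = 10 * 6

print(favorable, total)              # 15 60
print(Fraction(favorable, total))    # 1/4

assert favorable == 15
assert Fraction(favorable, total) == Fraction(1, 4)
```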
3,974,200
<p>Let <span class="math-container">$X$</span> be an <span class="math-container">$n\times n$</span> negative definite matrix such that <span class="math-container">$X_{ii}&lt;0$</span> for all <span class="math-container">$i$</span> and <span class="math-container">$X_{ij}\geq 0$</span> for <span class="math-container">$i\neq j$</span>. Denote by <span class="math-container">$Y=X^{-1}$</span> the inverse of <span class="math-container">$X$</span>. I would like to find bounds on the diagonal elements of <span class="math-container">$Y$</span>. Can we find <span class="math-container">$\underline{Y}$</span> and <span class="math-container">$\overline{Y}$</span> such that <span class="math-container">$\underline{Y}\leq Y_{ii} \leq \overline{Y}$</span> for all <span class="math-container">$i$</span>?</p> <p>Note that <span class="math-container">$-X$</span> satisfies the definition of an <a href="https://en.wikipedia.org/wiki/M-matrix" rel="nofollow noreferrer"><span class="math-container">$M$</span>-matrix</a>, and so all the elements of its inverse are nonnegative. This implies that all the elements of <span class="math-container">$Y$</span> are nonpositive, and so one potential upper bound is <span class="math-container">$\overline{Y}=0$</span>, but I would like to find a tighter bound if possible.</p> <p>Also, in the special case in which <span class="math-container">$X$</span> is diagonal, it is easy to show that <span class="math-container">$$ \left(\max_i X_{ii}\right)^{-1}\leq Y_{ii} \leq \left(\min_i X_{ii}\right)^{-1}. $$</span> I am hoping that a result like this extends to nondiagonal <span class="math-container">$X$</span>'s.</p>
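A small numerical illustration (not a proof of any bound) of the sign facts stated above, using a hypothetical $2\times 2$ example with the required pattern:

```python
import numpy as np

# Hypothetical 2x2 example: negative diagonal, nonnegative off-diagonal,
# and negative definite (both eigenvalues are negative).
X = np.array([[-2.0, 0.5],
              [0.5, -3.0]])
assert (np.linalg.eigvalsh(X) < 0).all()      # negative definite

Y = np.linalg.inv(X)
print(np.diag(Y))                             # diagonal entries of Y = X^{-1}

# -X is an M-matrix, so every entry of its inverse is nonnegative;
# equivalently, every entry of Y is nonpositive.
assert (Y <= 0).all()
```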
thedude
297,270
<p>By the <a href="https://en.wikipedia.org/wiki/Stationary_phase_approximation" rel="nofollow noreferrer">stationary phase method</a>, the integral is approximately given by <span class="math-container">$$ \int_0^1 e^{inp(x)}dx= \sum_{x_0} \sqrt{\frac{2\pi}{np''(x_0)}}e^{inp(x_0)+\mathrm{sign}(p''(x_0))i\pi/4}+o(1/\sqrt{n}),$$</span> where the sum is over the stationary points of <span class="math-container">$p(x)$</span> in the integration interval (roots of <span class="math-container">$p'(x)$</span>).</p> <p>So it decays like <span class="math-container">$\sim 1/\sqrt{n}$</span> for large <span class="math-container">$n$</span>, provided there are no double roots of <span class="math-container">$p'(x)$</span>. If there are double roots, it will decay slower, but will still decay.</p>
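A quick numerical illustration of the predicted $1/\sqrt{n}$ decay, using the hypothetical test phase $p(x)=(x-1/2)^2$ (chosen for illustration; it has a single nondegenerate interior stationary point at $x_0=1/2$ with $p''(x_0)=2$, so the formula above predicts $|I(n)|\sim\sqrt{\pi/n}$):

```python
import numpy as np

def I(n, m=200_001):
    """Trapezoidal approximation of int_0^1 exp(i*n*(x-1/2)^2) dx."""
    x = np.linspace(0.0, 1.0, m)
    f = np.exp(1j * n * (x - 0.5) ** 2)
    h = x[1] - x[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

for n in (100, 400, 1600):
    scaled = abs(I(n)) * np.sqrt(n)
    print(n, scaled)   # hovers near sqrt(pi) ~ 1.7725 as n grows
    assert abs(scaled - np.sqrt(np.pi)) < 0.3
```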
341,946
<p>Using the $n$-th Taylor polynomial for $f_1(x)=\frac{1}{1-x}$ with center $0$, find the $4$-th derivative of $f_2(x)=\frac{1+x+x^2}{1-x+x^2}$ at the point $0$ without calculating its $1$st, $2$nd or $3$rd derivatives. </p> <p>I'm looking for hints; it's homework. I've tried using the Taylor expansion with the Lagrange remainder, but it won't work because of the restriction on calculating the first three derivatives. Also, I don't see the connection between $f_1$ and $f_2$.</p> <p>Thanks in advance for help!</p>
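Not the intended pen-and-paper route, but the connection and the final value can be checked with sympy: multiplying numerator and denominator by $(1+x)$ gives $f_2(x)=(1+2x+2x^2+x^3)\,f_1(-x^3)$, whose series follows from the geometric series for $f_1$ (this reformulation is an illustration, not taken from the question):

```python
import sympy as sp

x = sp.symbols('x')
f1 = 1 / (1 - x)
f2 = (1 + x + x**2) / (1 - x + x**2)

# Connection to f1: multiplying top and bottom by (1 + x) gives
# f2 = (1 + 2x + 2x^2 + x^3) / (1 + x^3) = (1 + 2x + 2x^2 + x^3) * f1(-x^3).
g = (1 + 2*x + 2*x**2 + x**3) * f1.subs(x, -x**3)
assert sp.simplify(f2 - g) == 0

# Series of f2 at 0 is 1 + 2x + 2x^2 + 0*x^3 - 2x^4 + ...
print(sp.series(f2, x, 0, 5))

# Hence f2''''(0) = 4! * (-2) = -48, confirmed by direct differentiation:
assert sp.diff(f2, x, 4).subs(x, 0) == -48
```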
Ian Coley
60,524
<p>Let $(f,g)=\int_{-1}^1f(x)g(x)\,dx$ denote the $L^2$ inner product.</p> <p>We seed the process with $P_0(x)=1,P_1(x)=x$. We'll prove this by induction and try to first prove $n=2$. We know $P_2(x)$ satisfies $(P_2,P_0)=\int_{-1}^1 P_2(x)\cdot 1=0$ and $(P_2,P_1)=\int_{-1}^1 P_2(x)\cdot x=0$. We don't actually have a normal condition on this basis, though Gram-Schmidt has built into it a normalising process after every step.</p> <p>So instead we're just going to prove that this recursive relation starts building up an orthogonal basis. We have $P_2(x)=x^2-\frac{(x^2,1)}{(1,1)}1-\frac{(x^2,x)}{(x,x)}x$. This gives us $$ \frac{(x^2,1)}{(1,1)}=\frac{\int_{-1}^1 x^2\,dx}{\int_{-1}^1 1\,dx}=\frac{2/3}{2}=\frac{1}{3},\quad \frac{(x^2,x)}{(x,x)}=\frac{\int_{-1}^1 x^3\,dx}{\int_{-1}^1 x^2\,dx}=0 \\ P_2(x)=x^2-\frac{1}{3}\cdot1-0\cdot x=x^2-\frac{1}{3}. $$</p> <p>But this doesn't agree with the second Legendre polynomial. Maybe I messed up in here, but at least I'll post this here for you to ponder.</p>
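The discrepancy at the end is only one of normalization: Gram-Schmidt as run above produces a polynomial proportional to the Legendre polynomial $P_2$, which is conventionally scaled so that $P_2(1)=1$. A short sympy check (an illustration):

```python
import sympy as sp

x = sp.symbols('x')
P2 = x**2 - sp.Rational(1, 3)   # result of the Gram-Schmidt step above

# Orthogonal to 1 and x in the L^2 inner product on [-1, 1], as required:
assert sp.integrate(P2 * 1, (x, -1, 1)) == 0
assert sp.integrate(P2 * x, (x, -1, 1)) == 0

# It differs from the classical Legendre polynomial only by a constant factor:
# legendre(2, x) = (3x^2 - 1)/2, and (2/3) * legendre(2, x) = x^2 - 1/3.
assert sp.expand(sp.Rational(2, 3) * sp.legendre(2, x) - P2) == 0
```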
1,623,796
<p>In the structure $\langle \mathbb{Q}, &lt; \rangle$, which of the following axioms hold? How about when we use the weak versions of the axioms (all $\leftrightarrow$ replaced with $\rightarrow$)? </p> <p>Extensionality, Empty Set, Power Set, Infinity</p> <p>Find an instance of Separation that is true in $\langle \mathbb{Q}, &lt; \rangle$, and one that is false. </p> <p>What I'm struggling with is: What does the $\in$ in the axioms mean in $\langle \mathbb{Q}, &lt; \rangle$? I assume it has to mean $&lt;$.</p> <p>In which case, for the strong axioms, I'd say Extensionality, Power Set and Infinity hold and Empty Set doesn't, as there's no minimal element; is that right? For the weak versions, just the same ones?</p> <p>And what is meant by "An instance of Separation"?</p>
Asaf Karagila
622
<p>One easy way to see this is false is that the $\sf ZF$ axioms prove that $\in$ is not transitive on every set. Namely, there exist $a,b,c$ such that $a\in b$ and $b\in c$, but $a\notin c$.</p> <p>For example, $a=\varnothing, b=\{\varnothing\}, c=\{\{\varnothing\}\}$.</p> <p>On the other hand, $&lt;$ is transitive on $\Bbb Q$. </p> <p>(Another way, using far heavier guns, would be to point out that the first order theory of $(\Bbb Q, &lt;)$ is decidable, consistent and complete, whereas $\sf ZF$ has no extension which satisfies all those three properties, by Gödel's second incompleteness theorem.)</p>
262,601
<p>What does the following statement mean, that Nature likes to minimize things (like energy) and this equation describes one particular minimization problem? </p> <p>What is the Variational Principle?</p>
Marcel Rubió
53,712
<p>The variational principle is, in fact, a physical principle. To get an idea, the easiest example of a variational principle is Fermat's principle in non-relativistic optics (the one taught at school via geometric means). The idea to use a variational principle is that at the end we get an equation, or even a set of equations, that will define the dynamics of the system for all times.</p> <p>I am not sure if I understand your question, but if you mean to ask 'how do we come up with variational principles', I would guess by experiments. By experiments, we can notice that light tends to travel in a straight line (in non-relativistic experiments, that is!), so then, with this idea we can derive some equations. The same happens for the principle of least action, first you build some intuition about the behaviour of objects in dynamics, and then you find some equations which satisfy the physical model using those constraints.</p> <p>If you check <a href="http://en.wikipedia.org/wiki/Course_of_Theoretical_Physics" rel="nofollow">Landau-Lifshitz's book</a> for classical mechanics (a fantastic read, by the way), you will see that the authors just go straight ahead and integrate the action, thus using a variational principle. Then they get the Euler-Lagrange equations.</p>
2,755
<p>As Akhil had great success with his <a href="https://mathoverflow.net/questions/1291/a-learning-roadmap-for-algebraic-geometry">question</a>, I'm going to ask one in a similar vein. So representation theory has kind of an intimidating feel to it for an outsider. Say someone is familiar with algebraic geometry enough to care about things like G-bundles, and wants to talk about vector bundles with structure group G, and so needs to know representation theory, but wants to do it as geometrically as possible.</p> <p>So, in addition to the algebraic geometry, lets assume some familiarity with representations of finite groups (particularly symmetric groups) going forward. What path should be taken to learn some serious representation theory?</p>
Jon Yard
434
<p>My favorite book right now on representation theory is Claudio Procesi's <a href="http://books.google.com/books?id=Sl8OAGYRz_AC&amp;dq=Procesi+Lie+groups&amp;printsec=frontcover&amp;source=bl&amp;ots=8LwROHsMyF&amp;sig=HUTjse0dj6r2K-CcAFjteYyTwD0&amp;hl=en&amp;ei=uabnSuPNF4H2sgOd96ybBQ&amp;sa=X&amp;oi=book_result&amp;ct=result&amp;resnum=1&amp;ved=0CA0Q6AEwAA#v=onepage&amp;q=&amp;f=false">Lie groups: an approach through invariants and representations</a>. It is one of those rare books that manages to be just about as formal as needed without being overburdened by excessive pedantry. He gives a rather complete picture of both compact and algebraic groups and how they interplay, while doing a nice job of explaining the necessary background in algebraic geometry and functional analysis. He covers all the "standard" material on Young symmetrizers, Schur duality, representations of GL_n, semisimple Lie groups &amp; algebras, as well as more advanced stuff like Schubert calculus and some basic geometric invariant theory. This book was the first place I started to feel like I was "getting" the big picture, after picking up bits and pieces from different places.</p> <p>If your institution has a subscription to SpringerLink, you can probably download this book for free (legally) and purchase an on-demand print version for around $25 USD. </p> <p>Since this question was about a "learning roadmap," and not just for a single textbook, let me mention my favorite book that fits in your back pocket: "Lectures on Lie groups and Lie algebras" by Carter, Segal and Macdonald. The section by Segal is especially nice.</p>
1,856,225
<p>Right, so as the final step of my project draws near, and after having previously posted a badly laid-out question, I am posting a new one that is clear and unambiguous. I need to find this specific definite integral, which Mathematica could not solve:</p> <p>$$ \int_{x=0}^\pi \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \sin \left( \frac{1+A}{\sqrt{A}} \omega \tanh^{-1} \frac{\cos \frac{x}{2}}{\sqrt{1 + A \sin^2 \frac{x}{2}}} \right) \, dx.$$</p> <p>where $ 0&lt;A&lt;1 $ and $ \omega &gt; 0 $ are parameters of the problem. I tried to use a substitution of the argument of the inverse hyperbolic tangent, but that seemed to make it worse. I am posting here in the hope of receiving help: perhaps someone could tell me if it is at all possible to solve it analytically via a trick of sorts or a clever substitution, or maybe it is an elliptic integral in disguise. I thank all helpers.</p> <p>** My question on the Melnikov integral can be found <a href="https://math.stackexchange.com/questions/1853921/finding-a-specific-improper-integral-on-a-solution-path-to-a-2-dimensional-syste">here</a>; I just used trig identities to soften it up.</p>
achille hui
59,379
<p>Let $\mathcal{I}$ be the integral at hand and $B = \frac{1+A}{\sqrt{A}}\omega$. Introduce variables $y, t, \theta, z$ such that</p> <p>$$y = \sin\frac{x}{2},\quad t = \tanh\theta = \frac{\cos\frac{x}{2}}{\sqrt{1+A\sin^2\frac{x}{2}}}\quad\text{ and }\quad z = e^\theta$$</p> <p>Noticing $$t = \sqrt{\frac{1-y^2}{1+Ay^2}} \implies y = \sqrt{\frac{1-t^2}{1+At^2}} \implies \frac{dy}{\sqrt{1+Ay^2}} = - \frac{t\sqrt{A+1}dt}{\sqrt{1-t^2}(1+At^2)} $$ and $dt = (1-t^2)d\theta$, we have $$\begin{align} \mathcal{I} &amp;= 2\int_0^1 \sin(B\theta)\frac{dy}{\sqrt{1+Ay^2}} = 2\sqrt{A+1}\int_0^1\sin(B\theta)\frac{t\sqrt{1-t^2}}{1+At^2}\frac{dt}{1-t^2}\\ &amp;= 2\sqrt{A+1}\int_0^\infty \frac{\sin(B\theta)\sinh\theta}{\cosh^2\theta + A\sinh^2\theta} d\theta = -i\sqrt{A+1}\int_{-\infty}^\infty \frac{e^{iB\theta}\sinh\theta}{\cosh^2\theta + A\sinh^2\theta} d\theta\\ &amp;= -i2\sqrt{A+1}\int_0^\infty \frac{z^{iB}(z^2-1)}{(z^2+1)^2 + A(z^2-1)^2} dz \end{align} $$ Let $\phi = \tan^{-1}\sqrt{A}$, this can be simplified as $$ \mathcal{I} = -\frac{2i}{\sqrt{A+1}}\int_0^\infty \frac{z^{iB}(z^2-1)}{z^4 + 2\left(\frac{1-A}{1+A}\right) z^2 + 1} dz = -2i\cos\phi\int_0^\infty \frac{z^{iB}(z^2-1)}{(z^2+e^{2i\phi})(z^2+e^{-2i\phi})} dz$$ Consider the following contour integral $$\mathcal{J}(\epsilon,R) \stackrel{def}{=} \oint_{C(\epsilon,R)} \frac{(-z)^{iB}(z^2-1)}{(z^2+e^{2i\phi})(z^2+e^{-2i\phi})} dz \tag{*1}$$ where </p> <ul> <li>$\arg(-z) = 0$ on the negative real axis.</li> <li>$(-z)^{iB}$ has a branch cut along the positive real axis.</li> <li><p>$C(\epsilon,R)$ is the contour consisting of </p> <ul> <li>$C_1$ : line segment from $\epsilon \to R$ above the positive real axis.</li> <li>$C_2$ : circular arc $Re^{iu}$ for $u$ from $0 \to 2\pi$.</li> <li>$C_3$ : line segment $R \to \epsilon$ below the positive real axis.</li> <li>$C_4$ : circular arc $\epsilon e^{iu}$ for $u$ from $2\pi \to 0$.</li> </ul></li> </ul> <p>It is easy to see in $\mathcal{J}(\epsilon,R)$, </p> <ul> <li>the contribution from $C_1$ and
$C_3$ adds up to $$(e^{\pi B} - e^{-\pi B})\int_\epsilon^R \frac{z^{iB}(z^2-1)}{(z^2+e^{2i\phi})(z^2+e^{-2i\phi})} dz$$</li> <li>the contribution from $C_4$ vanishes as $\epsilon \to 0$.</li> <li>Since $B &gt; 0$ is a real number, $|(-z)^{iB}|$ is bounded from above by $e^{\pi B}$ and the contribution from $C_2$ behaves as $O(R^{-1})$ as $R \to \infty$.</li> </ul> <p>Combine these, we find</p> <p>$$\mathcal{I} = -i\frac{\cos\phi}{\sinh(\pi B)}\lim_{\epsilon \to 0,R \to \infty} \mathcal{J}(\epsilon,R)$$</p> <p>The integrand in $(*1)$ has 4 poles inside the contour: $\; e^{i(\frac{\pi}{2} \pm \phi)}\;$ and $\;e^{i(\frac{3\pi}{2} \pm \phi)}$.</p> <ul> <li><p>The residues at $e^{i(\frac{\pi}{2} \pm \phi)}$ are $\displaystyle\;(e^{-(-\frac{\pi}{2} \pm \phi)B})\frac{-e^{\pm 2i\phi} - 1}{4i e^{\pm i\phi }(-e^{\pm 2i \phi} + \cos(2\phi))} = \mp\frac{e^{(\frac{\pi}{2}\mp \phi)B}}{4\sin\phi}$</p></li> <li><p>The residues at $e^{i(\frac{3\pi}{2} \pm \phi)}$ are $\displaystyle\;(e^{-(\frac{\pi}{2} \pm \phi)B})\frac{-e^{\pm 2i\phi} - 1}{-4i e^{\pm i\phi }(-e^{\pm 2i \phi} + \cos(2\phi))} = \pm\frac{e^{(-\frac{\pi}{2}\mp \phi)B}}{4\sin\phi}$</p></li> </ul> <p>This implies</p> <p>$$\begin{align} \mathcal{I} &amp;= \left(-i \frac{\cos\phi}{\sinh\pi B}\right)\left(\frac{2\pi i}{4\sin\phi}\right)\left[ -e^{(\frac{\pi}{2}-\phi)B} +e^{(\frac{\pi}{2}+\phi)B} +e^{(-\frac{\pi}{2}-\phi)B} -e^{(-\frac{\pi}{2}+\phi)B} \right]\\ &amp;= \left(\frac{\cos\phi}{\sinh\pi B}\right)\left(\frac{2\pi}{4\sin\phi}\right) (e^{\frac{\pi}{2}B} - e^{-\frac{\pi}{2}B}) (e^{\phi B} - e^{-\phi B}) = \frac{\pi}{\tan\phi}\frac{\sinh(B\phi)}{\cosh\left(\frac{\pi}{2}B\right)}\\ &amp;= \frac{\pi}{\sqrt{A}}\frac{\sinh(B\tan^{-1}\sqrt{A})}{\cosh\left(\frac{\pi}{2}B\right)} \end{align} $$ Treating $A, B$ as two independent parameters, in the limiting case $A \to 0$, we find $$\lim_{A\to 0} \mathcal{I} = \frac{\pi B}{\cosh\left(\frac{\pi}{2}B\right)}$$</p> <p>This matches what first pointed out by @nospoon in the 
comments.</p>
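The closed form can be checked numerically against the intermediate real integral from the derivation (which avoids the wild oscillation of the original integrand near $x=0$); an illustration with the hypothetical parameter choice $A=1/2$, $\omega=1$:

```python
import numpy as np

A, omega = 0.5, 1.0
B = (1 + A) / np.sqrt(A) * omega
phi = np.arctan(np.sqrt(A))

# Final closed form derived above.
closed = np.pi / np.sqrt(A) * np.sinh(B * phi) / np.cosh(np.pi * B / 2)

# Intermediate real form from the derivation:
#   I = 2*sqrt(A+1) * int_0^oo sin(B t) sinh(t) / (cosh(t)^2 + A sinh(t)^2) dt.
# The integrand decays like e^{-t}, so truncating at t = 40 is harmless.
t = np.linspace(0.0, 40.0, 400_001)
f = np.sin(B * t) * np.sinh(t) / (np.cosh(t) ** 2 + A * np.sinh(t) ** 2)
h = t[1] - t[0]
numeric = 2 * np.sqrt(A + 1) * h * (f.sum() - 0.5 * (f[0] + f[-1]))

print(closed, numeric)
assert abs(closed - numeric) < 1e-5 * abs(closed)
```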
2,167,598
<p>I've seen the answer for gcd$(a,b)$ but never for the lcm$(a,b)$?</p>
Carl Schildkraut
253,966
<p>For lcm$(a,b)$, it's a multiple of $a$, so we can take $x=\frac{lcm(a,b)}{a}, y=0$. The gcd case, as you can see, is more interesting. </p>
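In code (a small illustration with hypothetical values $a=12$, $b=18$):

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

a, b = 12, 18
L = lcm(a, b)          # 36

# lcm(a, b) is a multiple of a, so x = lcm(a, b) // a and y = 0 work.
x, y = L // a, 0
assert a * x + b * y == L
print(f"{L} = {a}*{x} + {b}*{y}")   # 36 = 12*3 + 18*0
```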
2,412,839
<p>I figured I could use the binomial formula to solve this problem but then remembered it only works when the order does matter (repeated combinations are included). Is there a formula that does not account for the repeated combinations and that could help me solve this problem?</p>
Graham Kemp
135,106
<p>The trick with "order doesn't matter" is to ask "to whom doesn't it matter and why?" </p> <blockquote> <p>A Die is Rolled 3 times, what is the probability that a six is rolled at least once when the order doesn't matter?</p> </blockquote> <p>Why doesn't order matter? &nbsp; Because, the observer loses track of which die is which and only keeps <em>count</em> of the faces that occur. &nbsp; However, the <em>dice</em> roll the same whether we keep track of them or not. &nbsp; They don't care whether or no order matters to the observer.</p> <p>You can use the same sample space of $6^3$ <em>outcomes</em>; it is only the description of the <em>events</em> (and how we measure them) that is affected by "order doesn't matter."</p> <p>Thus $~\mathsf P(\text{at least one six})~{~=~1-\mathsf P(\text{no sixes on the three dice}) \\ ~=~1- 5^3/6^3} $</p>
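The computation can be confirmed by brute-force enumeration (a small Python illustration):

```python
from fractions import Fraction
from itertools import product

# All 6^3 equally likely *ordered* outcomes; losing track of order changes
# only the description of events, not the underlying sample space.
rolls = list(product(range(1, 7), repeat=3))
favorable = sum(1 for r in rolls if 6 in r)

prob = Fraction(favorable, len(rolls))
print(prob)                               # 91/216

assert len(rolls) == 216
assert prob == 1 - Fraction(5, 6) ** 3    # matches 1 - 5^3/6^3
```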
677,574
<p>Do there exist examples of non-empty, infinite spaces <span class="math-container">$X$</span> not equipped with the discrete topology for which <span class="math-container">$X \cong X \times X$</span>?</p>
omar
12,273
<p>Take for example $X=\cup_{n\ge1}\mathbb R^n$ with the topology defined by: $U\subset X$ is open if and only if $U\cap\mathbb R^n $ is open for all $n$. It is then easy to check that the map $\phi:X\times X\to X$ defined by $\phi((x_i),(y_i))=(x_1,y_1,x_2,y_2,x_3,\dots)$ is a homeomorphism; here an element of $X$ is represented by a sequence $(x_i)$ of real numbers which is nonzero for only finitely many $i$. (Also $X=\Pi_{n\ge1}\mathbb R$ with the product topology would work with the same map defined as above.)</p> <p>For another example, we have a theorem in functional analysis which says that every separable infinite-dimensional Hilbert space is isomorphic to $l_2$. Since $l_2\times l_2$ is a separable Hilbert space, by the theorem it is isomorphic to $l_2$ (in particular it is homeomorphic to it).</p>
Henno Brandsma
4,280
<p>Elementary examples are often zero-dimensional: $\mathbb{Q}$, $\mathbb{P}$ (= the irrationals as a subset of $\mathbb{R}$) and the Cantor set $C \subset [0,1]$ all are homeomorphic to their squares (so even every finite power of itself; even countable power, for the irrationals and $C$.). A trivial example is an infinite space in the indiscrete topology.</p> <p>Many infinite-dimensional spaces also obey this: $R^\omega$ in the product topology, or any higher power as well, or the Hilbert space $\ell_2$ (not really different, as $R^\omega$ is homeomorphic (as topological spaces) to $\ell_2$). </p> <p>A nice one-dimensional space due to Erdős: take all points of $\ell_2$ where all coordinates are rational. Erdős showed this space is one-dimensional and it's quite clearly homeomorphic to its square: the even and odd coordinates form copies of the space itself.</p>
325,236
<blockquote> <p>Is there a bijection between $\mathbb N$ and $\mathbb N^2$?</p> </blockquote> <p>If I can show $\mathbb N^2$ is equipotent to $\mathbb N$, I can show that $\mathbb Q$ is countable. Please help. Thanks,</p>
André Nicolas
6,312
<p>It is easier to write an answer than to find one of its several occurrences on this site. </p> <p>Let $n$ be a positive integer. Then there exist uniquely determined positive integers $u$ and $v$ such that $n=2^{u-1}(2v-1)$. This is because every positive integer $n$ can be written in one and only one way as a power of $2$ (possibly $2^0$) and an odd number.</p> <p>Define $g$ by $g(n)=(u,v)$. Then $g$ is a bijection from $\mathbb{N}$ to $\mathbb{N}^2$. </p> <p>For a different bijection, search for the <em>Cantor Pairing Function</em>.</p>
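A short script (an illustration) confirms that $g$ is well defined and invertible on an initial segment of $\mathbb{N}$:

```python
def g(n: int):
    """Map n = 2^(u-1) * (2v - 1) to the pair (u, v)."""
    u = 1
    while n % 2 == 0:
        n //= 2
        u += 1
    v = (n + 1) // 2
    return (u, v)

N = 1000
pairs = [g(n) for n in range(1, N + 1)]

# Injective on 1..N:
assert len(set(pairs)) == N

# The stated factorization inverts g, so g is a bijection onto its image:
for n, (u, v) in zip(range(1, N + 1), pairs):
    assert 2 ** (u - 1) * (2 * v - 1) == n

print(pairs[:6])   # [(1, 1), (2, 1), (1, 2), (3, 1), (1, 3), (2, 2)]
```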
Julien
38,053
<p>Here is a non constructive proof.</p> <p>Of course, there is an injection of $\mathbb{N}$ into $\mathbb{N}\times\mathbb{N}$.</p> <p>Now consider the map $$ (x,y)\longmapsto 2^x\cdot 3^y $$ By the fundamental theorem of arithmetic, this is an injection of $\mathbb{N}\times\mathbb{N}$ into $\mathbb{N}$.</p> <p>By <a href="http://en.wikipedia.org/wiki/Cantor%E2%80%93Bernstein%E2%80%93Schroeder_theorem" rel="noreferrer">Cantor-Bernstein-Schroeder</a>, there exists a bijection between $\mathbb{N}$ and $\mathbb{N}\times\mathbb{N}$.</p> <p>Edit: I added this answer because the only constructive proof I knew was the zigzag along the diagonals, which I find slightly tedious to write down. But thanks to Andre Nicolas, now I know a straightforward and easy-to-write-down bijection. Which is actually a little step away from the injection I use above.</p>
574,220
<p><strong>Question 1:</strong></p> <p>Let the polynomial <span class="math-container">$f(x)=\displaystyle\sum_{i=0}^{3}a_{i}x^i$</span> have three real roots, where <span class="math-container">$a_{i}&gt;0,i=1,2,3$</span>.</p> <blockquote> <p>Show that <span class="math-container">$$g(x)=\sum_{i=0}^{3}a^m_{i}x^i$$</span> has only real roots, where <span class="math-container">$m\in \mathbb R,m\ge 1$</span>.</p> </blockquote> <p><strong>My try:</strong></p> <p><strong>Case (1):</strong></p> <p>Suppose that <span class="math-container">$f(x)$</span> has a zero of multiplicity 3; then we may assume</p> <blockquote> <p><span class="math-container">$$f(x)=(x+p)^3=x^3+3x^2p+3xp^2+p^3$$</span> and then <span class="math-container">$$g(x)=x^3+(3p)^mx^2+(3p^2)^mx+(p^3)^{m}=(x+p^m)[x^2+(3^mp^m-p^m)x+p^{2m}]$$</span></p> </blockquote> <p>so with</p> <blockquote> <p><span class="math-container">$$h(x)=x^2+p^m(3^m-1)x+p^{2m}\Longrightarrow \Delta =(p^m(3^m-1))^2-4p^{2m}\ge 0$$</span> (with equality only when <span class="math-container">$m=1$</span>), so in this case <span class="math-container">$g(x)$</span> has only real roots.</p> </blockquote> <p><strong>For case (2):</strong></p> <p>let <span class="math-container">$f(x)=(x+p)^2(x+q)$</span>;</p> <p>I can't prove it,</p> <p><strong>and for case (3):</strong></p> <p><span class="math-container">$$f(x)=(x+p)(x+q)(x+r)$$</span> I can't prove this case either.</p> <p><strong>I hope someone can help solve this nice problem;</strong></p> <p>Thank you very much!</p>
zy_
27,648
<p>It is really a tough question. A generalization of this problem follows from a theorem in the famous book by Polya and Szego (Book II, Chapter 5, Problem 155), which states:</p> <p>If</p> <p>$$a_0+a_1 x+\cdots+a_n x^n$$</p> <p>and</p> <p>$$b_0+b_1 x+\cdots+b_n x^n$$</p> <p>have only real zeros, while all the zeros of the latter have the same sign, then</p> <p>$$a_0 b_0+a_1 b_1 x+\cdots+a_n b_n x^n$$</p> <p>has real zeros only.</p> <p>The proof is complicated, so borrowing a copy from a library is recommended.</p>
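The theorem can be sanity-checked numerically. The pure-Python sketch below (my own illustration, not from the book) forms the coefficientwise product of $(x-1)(x-2)(x-3)$ and $(x+1)(x+2)(x+3)$, whose zeros all share a sign, and counts sign changes on a grid to confirm that the resulting cubic has three real zeros.

```python
def polymul(a, b):
    # multiply coefficient lists (highest-degree coefficient first)
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def evalpoly(c, x):
    # Horner evaluation
    r = 0
    for ci in c:
        r = r * x + ci
    return r

p = polymul(polymul([1, -1], [1, -2]), [1, -3])  # (x-1)(x-2)(x-3)
q = polymul(polymul([1, 1], [1, 2]), [1, 3])     # (x+1)(x+2)(x+3)
h = [pi * qi for pi, qi in zip(p, q)]            # coefficientwise product

# count sign changes of h on a fine grid that covers all of its zeros
xs = [i / 100 for i in range(-200, 4000)]
signs = [evalpoly(h, x) > 0 for x in xs]
real_roots = sum(s0 != s1 for s0, s1 in zip(signs, signs[1:]))
assert real_roots == 3   # the cubic h = x^3 - 36x^2 + 121x - 36 has only real zeros
```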
3,789,429
<blockquote> <p><strong>Problem</strong>: Can a function <span class="math-container">$f$</span> be created where <span class="math-container">$$f\colon\mathbb Q_{+}^{*}\to \mathbb Q_{+}^{*},$$</span> i.e. the function is defined on the set of strictly positive rational numbers and satisfies <span class="math-container">$\forall(x,y)\in \mathbb Q_{+}^{*}\times\mathbb Q_{+}^{*},f(xf(y))=\frac{f(f(x))}{y}$</span>?</p> </blockquote> <p>This question is similar to an Olympiad question that I was very passionate about. I used several ideas to solve this problem, but I did not arrive at any result. One of them uses the fundamental theorem of arithmetic, which states that there is a bijective correspondence between <span class="math-container">$(\mathbb Q_{+}^{*})$</span> and <span class="math-container">$(\mathbb Z^{\mathbb N})$</span>, where <span class="math-container">$$\left\{\mathbb Z^{\mathbb N} =\text{ the set of stable (eventually zero) sequences with values in } \mathbb Z\right\}$$</span> This map is defined as follows: <span class="math-container">$$\varphi\colon\mathbb Z^{\mathbb N}\to \mathbb Q_{+}^{*} ,(\alpha_n)_{n\in\mathbb N}\longmapsto \prod_{n\in\mathbb N} P_n^{\alpha_n}$$</span> where <span class="math-container">$$\mathbb P=\left\{P_k:k\in\mathbb N\right\}\text{ is the set of prime numbers.} $$</span> Put <span class="math-container">$x=\prod_{n\in\mathbb N}P_n^{\alpha_n},\quad y=\prod_{n\in\mathbb N }P_n^{\beta_n},\text{and}\quad $</span> <span class="math-container">$$f(\prod_{n\in\mathbb N}P_n^{\alpha_n})=\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}}\right)$$</span></p> <p>From the latter I found <span class="math-container">$$f(xf(y))=\frac{\left(\prod_{n\in\mathbb N}P_{2n}^{\alpha_{2n+1}}\right)\left(\prod_{n\in\mathbb N}P_{2n+1}^{-\alpha_{2n}}\right)}{\left(\prod_{n\in\mathbb N}P_n^{\beta_{2n}}\right)}=\frac{f(x)}{y}.$$</span> However, this did not help me construct such a function.</p> <p>I need an 
idea or suggestion to solve this problem, if possible. Thank you for your help.</p> <blockquote> <p>Note: <span class="math-container">$(\alpha_n)_{n\in\mathbb N}\quad \text{is a stable sequence}\leftrightarrow \exists n_0\in\mathbb N ,\forall n\in\mathbb N :\left( n\geq n_0 \Rightarrow \alpha_{n}=0\right) $</span></p> </blockquote>
Marktmeister
762,010
<p>There is no such function.</p> <p><em>Proof:</em> Specifically for <span class="math-container">$x = 1$</span>, we obtain that <span class="math-container">$f(f(y)) = \frac{f(f(1))}{y}$</span> for all <span class="math-container">$y$</span>.</p> <p>Now, let <span class="math-container">$a \in \mathbb Q$</span>, <span class="math-container">$a &gt; 0$</span> and write <span class="math-container">$y = \frac{y}{f(a)} \cdot f(a)$</span>. We then obtain (using the first equality)</p> <p><span class="math-container">$$f(y) = f\left(\frac{y}{f(a)} \cdot f(a)\right) = \frac{f(f(\frac{y}{f(a)}))}a = \frac{f(f(1)) \cdot f(a)}{a \cdot y}.$$</span></p> <p>Thus we have a formula for <span class="math-container">$f(y)$</span>, which is independent of <span class="math-container">$a$</span>. Specifically for <span class="math-container">$a = y$</span>, one obtains</p> <p><span class="math-container">$$f(y) = \frac{f(f(1)) \cdot f(y)}{y^2},$$</span></p> <p>hence <span class="math-container">$y^2 = f(f(1))$</span> for all <span class="math-container">$y$</span>, a contradiction.</p>
2,491,560
<p>Given a sequence $(s_n)$ in $\mathbb R$ such that $$\lim \limits_{n \to \infty}( s_{n+1}-s_n)=0,$$ I am asked to prove $(s_n)$ converges. </p> <p>I know all Cauchy sequences converge in $\mathbb R^k$. So I want to prove that $(s_n)$ is Cauchy. </p> <p>I am stuck as to how to show the given sequence is Cauchy. Thank you.</p>
egreg
62,967
<p>It’s not surprising you’re stuck, because the claim is false.</p> <p>Consider $s_n=\sqrt{n}$. Then $$ \lim_{n\to\infty}(s_{n+1}-s_n)= \lim_{n\to\infty}(\sqrt{n+1}-\sqrt{n})= \lim_{n\to\infty}\frac{1}{\sqrt{n+1}+\sqrt{n}}=0 $$</p>
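A quick numeric illustration of this counterexample in Python: the consecutive differences of $s_n=\sqrt{n}$ shrink toward $0$ while the sequence itself grows without bound, so $(s_n)$ cannot be Cauchy.

```python
import math

s = [math.sqrt(n) for n in range(1, 100001)]
diffs = [s[i + 1] - s[i] for i in range(len(s) - 1)]

# consecutive differences go to 0 ...
assert diffs[-1] < 0.01
# ... yet s_n = sqrt(n) is unbounded, so the sequence is not Cauchy
assert s[-1] > 316
```

The last difference is close to $1/(2\sqrt{n})$, matching the closed form $\sqrt{n+1}-\sqrt{n}=1/(\sqrt{n+1}+\sqrt{n})$ from the answer.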
2,071,054
<p>Let $f(x)$ be continuous in $[a,b]$</p> <p>Let $A$ be the set defined as: $$A = \{ x \in [a,b] \mid f(x) = f(a) \}$$</p> <ol> <li><p>Does $A$ have a maximum? I guess it is the largest value in $[a,b]$ which $f$ sends to $f(a)$, but I don't know how to prove it.</p></li> <li><p>Would it still have a maximum if $[a,b]$ turned into $[a,b)$?</p></li> </ol>
Christian Blatter
1,303
<p>You can avoid inclusion-exclusion by considering $S$ as a nested sum: $$S=6\sum_{i=0}^\infty 3^{-i}\sum_{j=i+1}^\infty 3^{-j}\sum_{k=j+1}^\infty 3^{-k}\ ,$$ whereby the factor $6$ compensates for the assumption $i&lt;j&lt;k$. The innermost sum has the value $$3^{-j}\sum_{k'=1}^\infty 3^{-k'}={1\over2} 3^{-j}\ .$$ The next-inner sum then becomes $${1\over2}\sum_{j=i+1}^\infty3^{-2j}={1\over2}3^{-2i}\sum_{j'=1}^\infty 3^{-2j'}={1\over2}\cdot{1\over8}\&gt;3^{-2i}\ .$$ In this way we finally obtain $$S=6\cdot{1\over2}\cdot{1\over8}\sum_{i=0}^\infty 3^{-3i}=6\cdot{1\over2}\cdot{1\over8}\cdot{27\over26}={81\over208}\ .$$</p>
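The value $81/208$ can be double-checked with exact rational arithmetic in Python; this truncated version of the triple sum (truncation level $N$ chosen by me) undershoots the limit by less than $10^{-12}$:

```python
from fractions import Fraction

N = 30  # truncation level; the omitted tail is smaller than 1e-12
S = Fraction(0)
for i in range(N):
    for j in range(i + 1, N):      # enforce i < j < k, compensated by
        for k in range(j + 1, N):  # the factor 6 below
            S += Fraction(1, 3 ** (i + j + k))
S *= 6

# all omitted terms are positive, so S sits just below 81/208
assert 0 < Fraction(81, 208) - S < Fraction(1, 10 ** 12)
```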
1,061,325
<p>I need to show that for $x&gt;0$, $1+\frac{x}{2} \ge \sqrt{1+x} \ge 1+\frac{x}{2} - \frac{x^2}{8} $.</p> <p>I used the geometric/arithmetic mean inequality to show that $1+\frac{x}{2} \ge \sqrt{1+x}$ is indeed true. My issue lies in the second part of this.</p> <p>In trying to show that $\sqrt{1+x} \ge 1+\frac{x}{2} - \frac{x^2}{8} $, I determined that this is a partial Taylor series expansion, and I attempted to play around with both inequalities to show that this is true, but to no avail.</p> <p>I would appreciate any help in solving this. Ideally I would like to do this analytically, but any solutions using more complicated theorems are welcome.</p>
turkeyhundt
115,823
<p>Yes. Different structures. Trig/Unit Circle starts with zero at "East" and increases in a counterclockwise direction. Bearings and directions use vertical-up as zero degrees.</p> <p><img src="https://i.stack.imgur.com/I3zN7.png" alt="enter image description here"></p>
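In code the two conventions are related by the self-inverse map $\theta \mapsto (90-\theta) \bmod 360$; a small Python sketch (the function names are mine):

```python
def bearing_to_math(bearing_deg):
    # compass bearing: 0 = North, increasing clockwise;
    # unit-circle angle: 0 = East, increasing counterclockwise
    return (90 - bearing_deg) % 360

def math_to_bearing(theta_deg):
    # the very same formula inverts itself
    return (90 - theta_deg) % 360

assert bearing_to_math(0) == 90     # North
assert bearing_to_math(90) == 0     # East
assert bearing_to_math(180) == 270  # South
assert bearing_to_math(270) == 180  # West
```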
1,379,878
<blockquote> <p>Let $M_n(\mathbb{C})$ denote the vector space over $\mathbb{C}$ of all $n\times n$ complex matrices. Prove that if $M$ is a complex $n\times n$ matrix then $C(M)=\{A\in M_n(\mathbb{C}) \mid AM=MA\}$ is a subspace of dimension at least $n$.</p> </blockquote> <p>My Try:</p> <p>I proved that $C(M)$ is a subspace. But how can I show that it is of dimension at least $n$? I have no idea how to do it. I found similar questions posted in MSE but could not find a clear answer. So, please do not mark this as duplicate.</p> <p>Can somebody please help me find this? </p> <p>EDIT: None of the given answers were clear to me. I would appreciate it if somebody checked my try below:</p> <p>If $J$ is a Jordan canonical form of $M$, then they are similar. Similar matrices have the same rank. $J$ has dimension at least $n$. So does $M$. Am I correct?</p>
Sungjin Kim
67,070
<p>Your approach is correct. Let $PMP^{-1}=J$ with invertible $P$ and Jordan form $J$. Then $$ \begin{align} AM=MA &amp;\Longleftrightarrow PAMP^{-1}=PMAP^{-1} \\ &amp;\Longleftrightarrow PAP^{-1} PMP^{-1} = PMP^{-1} PAP^{-1}\\ &amp;\Longleftrightarrow PAP^{-1} J= J PAP^{-1}. \end{align} $$ Thus, we have $$ \phi_P : C(M) \rightarrow C(J) $$ given by $\phi_P(A) = PAP^{-1}$, is an invertible linear transformation with $\phi_P^{-1}(B)= P^{-1}BP$. So, $C(M)$ and $C(J)$ are isomorphic via the isomorphism $\phi_P$. Therefore, $C(M)$ and $C(J)$ have the same dimensions over $\mathbb{C}$. </p> <p>Then if you prove that $C(J)$ has dimension at least $n$, then the same is true for $C(M)$ too. </p> <p>Alternatively, as I commented last year, you can also use the general formula given in <a href="https://mathoverflow.net/questions/105040/centralizer-of-a-matrix-over-a-finite-field">Centralizer of a Matrix</a>. This gives the following info:</p> <p>Let $\mathbb{F}$ be a field and $M\in M_n(\mathbb{F})$. Denote by $C(M)=\{A\in M_n(\mathbb{F}) | AM=MA\}$ the centralizer of $M$. The dimension of $C(M)$ over $\mathbb{F}$ is given by $$ \mathrm{dim}_{\mathbb{F}} C(M) = \sum_p (\mathrm{deg}(p))\sum_{i,j} \min (\lambda_{p,i}, \lambda_{p,j}), $$ where $p$ is any irreducible polynomial that divides the characteristic polynomial of $M$, and $\lambda_p= \sum \lambda_{p,i}$ is the exact power of $p$ in the characteristic polynomial of $M$. Here, $\lambda_{p,i}$ are the powers of $p$ in the primary decomposition of the $\mathbb{F}[x]$-module $\mathbb{F}^n$ in which $x$ acts by $M$-multiplication on the left. </p> <p>By taking $i=j$ only in the double sum, we obtain that $$ \mathrm{dim}_{\mathbb{F}}C(M) \geq \sum_p (\mathrm{deg}(p)) \sum_i \lambda_{p,i} = n. $$</p> <p>The equality occurs if and only if the minimal polynomial and the characteristic polynomial of $M$ coincide. 
Moreover, we have in this case, $$ C(M)=\{f(M) | f\in \mathbb{F}[x]\} = \mathrm{span}_{\mathbb{F}} \{ I, A, \ldots , A^{n-1}\}. $$</p>
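This lower bound can also be checked computationally: since $\operatorname{vec}(AM-MA)=(M^T\otimes I - I\otimes M)\operatorname{vec}(A)$, the dimension of $C(M)$ is the nullity of that $n^2\times n^2$ matrix. A pure-Python sketch with exact rational elimination (all function names are mine; the sample matrix is $J_2(2)\oplus J_1(5)$, for which the general formula predicts dimension $\min(2,2)+\min(1,1)=3$):

```python
from fractions import Fraction

def kron(A, B):
    # Kronecker product of square matrices given as lists of rows
    m = len(B)
    N = len(A) * m
    return [[A[i // m][j // m] * B[i % m][j % m] for j in range(N)]
            for i in range(N)]

def rank(rows):
    # Gaussian elimination over exact rationals
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

n = 3
M = [[2, 1, 0], [0, 2, 0], [0, 0, 5]]   # Jordan form J_2(2) + J_1(5)
I = [[1 if i == j else 0 for j in range(n)] for i in range(n)]
Mt = [list(col) for col in zip(*M)]

# vec(AM - MA) = (M^T kron I - I kron M) vec(A)
K = [[a - b for a, b in zip(r1, r2)]
     for r1, r2 in zip(kron(Mt, I), kron(I, M))]
dim_centralizer = n * n - rank(K)       # nullity of K

assert dim_centralizer >= n
assert dim_centralizer == 3             # min(2,2) + min(1,1)
```

Here the minimal and characteristic polynomials of $M$ coincide, so the equality case $\dim C(M)=n$ is attained, as stated above.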
1,685,621
<p>I know that we can define two vectors to be orthogonal only if they are elements of a vector space with an inner product. </p> <p>So, if $\vec x$ and $\vec y$ are elements of $\mathbb{R}^n$ (as a real vector space), we can say that they are orthogonal iff $\langle \vec x,\vec y\rangle=0$, where $\langle \vec x,\vec y\rangle $ is an inner product.</p> <p>Usually the inner product is defined with respect to the standard basis $E=\{\hat e_1,\hat e_2 \}$ (for $n=2$, to simplify notation); the standard definition is: $$ \langle \vec x,\vec y\rangle_E=x_1y_1+x_2y_2 $$ where $$ \begin{bmatrix} x_1\\x_2 \end{bmatrix} =[\vec x]_E \qquad \begin{bmatrix} y_1\\y_2 \end{bmatrix} =[\vec y]_E $$ are the components of the two vectors in the standard basis and, by definition of the inner product, $\hat e_1$ and $\hat e_2$ are orthonormal. </p> <p>Now, if $\vec v_1$ and $\vec v_2$ are linearly independent, the set $V=\{\vec v_1,\vec v_2\}$ is a basis and we can express any vector in this basis with a pair of components: $$ \begin{bmatrix} x'_1\\x'_2 \end{bmatrix} =[\vec x]_V \qquad \begin{bmatrix} y'_1\\y'_2 \end{bmatrix} =[\vec y]_V $$ from which we can define an inner product: $$ \langle \vec x,\vec y\rangle_V=x'_1y'_1+x'_2y'_2 $$</p> <p>Obviously we have: $$ [\vec v_1]_V= \begin{bmatrix} 1\\0 \end{bmatrix} \qquad [\vec v_2]_V= \begin{bmatrix} 0\\1 \end{bmatrix} $$ and $\{\vec v_1,\vec v_2\}$ are orthogonal (and normal) for the inner product $\langle \cdot,\cdot\rangle_V$.</p> <p>This means that any two linearly independent vectors are orthogonal with respect to a suitable inner product defined by a suitable basis. So orthogonality seems a ''coordinate dependent'' concept. </p> <p>The question is: is my reasoning correct? And, if yes, what makes the usual standard basis so special that we choose such a basis for the usual definition of orthogonality? 
</p> <hr> <p>I add something to better illustrate my question.</p> <p>If my reasoning is correct then, for any basis in a vector space, there is an inner product such that the vectors of the basis are orthogonal. If we think of vectors as oriented segments (in the purely geometric sense), this seems to contradict our intuition of what ''orthogonal'' means, and also a geometric definition of orthogonality. So why does what we call a ''standard basis'' seem to be in accord with intuition, while other bases are not? </p>
Theodor Johnson
320,421
<p>The unit base vectors $e_1$,$e_2$: form an orthonormal basis.</p> <p>Let us take a look at what this requires:</p> <ul> <li>all elements in $M:=\{e_1,e_2,...,e_n \}$ are linearly independent</li> <li>the vectors $e_1$,$e_2$,...,$e_n$ are pairwise orthogonal, so their dot product $\langle e_i,e_k \rangle=0 $, $\forall i,k \leq n$ and $i\neq k$</li> <li>they are normalized, so $||e_i||=1$</li> </ul> <p>You are right, there are more bases that fulfill these criteria. In fact, there is an uncountably infinite number of them. But what makes the standard basis so appealing? </p> <ol> <li><strong>They are easy to calculate</strong></li> </ol> <p>For any vector $e_i$ you set all but one position to zero. The only requirement on the nonzero entry is that its position is unique compared to the other unit vectors.</p> <p>If you tried to create a basis starting from a non-unit vector $v$, you would have to ensure that the resulting vectors are linearly independent, pairwise orthogonal and normalized.</p> <ol start="2"> <li><strong>They have good numerical properties</strong> You will not run into rounding errors while trying to calculate $||e_i||$. This is a mighty advantage.</li> </ol> <p>Furthermore you ask whether the dot product is a <em>coordinate dependent concept</em>. Remember that a vector $v=\left( \begin{array}{c} a\\ b\\ \end{array} \right)$ has a magnitude and a direction. So the entries $a,b$ have a direct influence on the direction the vector $v$ will be facing. So in a sense, yes, the dot product is as coordinate dependent as the overall appearance of the vector is.</p> <p>But what happens if you convert your vector $v$ to polar coordinates, such that $v=(\text{angle}, \text{length})$? What would you mean by coordinate dependent now?</p> <p>So instead of saying the dot product is coordinate dependent, say it is magnitude and angle dependent. That is even how the dot product is defined, because </p> <p>$\langle v,u \rangle=||u||\,||v||\cos(\alpha)$</p>
5,998
<p>I have restricted the context of my notebooks to each individual notebook. So variables in each notebook are local and are not seen in another notebook.</p> <p>But in two of my notebooks I have two functions that I want to plot simultaneously with the command <code>Show</code>. Please note that I want variables of each notebook to be local except for those two functions. So I need to define two global variables and then plot that two functions in <code>Show</code>. How can I define global variables in Mathematica?</p>
celtschk
129
<p>You can explicitly define variables in the global context by prefixing their name with <code>Global`</code>, for example, <code>Global`i = 3</code> or <code>Global`f[x_]:=x^2</code>. However if you have set the notebook to have private context, you don't have <code>Global`</code> in your <code>$ContextPath</code> (in order to prevent interference from other notebooks with non-private context). Therefore in your notebooks the context <code>Global`</code> isn't really special, you can use <em>any</em> context, like <code>myShared`</code>. Indeed, that's desirable because that way you know that a third notebook will be unlikely to interfere.</p> <p>So you would write in your first notebook</p> <pre><code>myShared`function[x_]:=x^2 </code></pre> <p>and then could access that function from the other notebook as</p> <pre><code>myShared`function[3] (* ==&gt; 9 *) </code></pre> <p>Note that you can add your context to the context path by using</p> <pre><code>AppendTo[$ContextPath, "myShared`"] </code></pre> <p>This then allows you to refer to the function above without the prefix, e.g.</p> <pre><code>function[3] (* ==&gt; 9 *) </code></pre> <p>However note that with this, your local symbols may shadow the shared ones (however access as <code>myShared`function</code> still works even then), so you have to make sure that you don't use the unprefixed symbol <em>in any way</em> before the prefixed one was created.</p> <p>You might consider avoiding the issue by using <code>PrependTo</code> instead of <code>AppendTo</code>, but then you are vulnerable to variable injection (including accidental one) from the other notebook. 
For example, imagine that you have the definitions</p> <pre><code>a = 3; PrependTo[$ContextPath, "myShared`"] </code></pre> <p>and then you do in your other notebook</p> <pre><code>Begin["myShared`"] function[a_] := a^2 other[a_] := 5 End[] </code></pre> <p>Let's assume the symbol <code>a</code> had not yet been used in that notebook, then it is created, together with the symbols <code>function</code> and <code>other</code>, in the context <code>myShared`</code>, and therefore in your first notebook now <em>hides</em> the local definition of <code>a</code>. That is, if you now evaluate <code>a</code> in your first notebook, the kernel will find <code>myShared`a</code> (without a value) first, and therefore use <em>that</em> instead of the local <code>a</code>; of course it won't evaluate to 3.</p> <p>Yet another way to access the symbol without prefix would be the definition</p> <pre><code>function := myShared`function </code></pre> <p>which however only works in a context with evaluation. Especially it should not be used for shared variables because after</p> <pre><code>variable := myShared`variable </code></pre> <p>a subsequent assignment like</p> <pre><code>variable = 42 </code></pre> <p>does <em>not</em> change the shared variable, but only the local one (which no longer refers to the shared one).</p> <p>Therefore I think it is a better idea to not do either, but always use the prefixed version.</p> <p>Note that the <code>PrependTo</code> scenario is also an argument against using <code>Begin["myShared`"]</code> … <code>End[]</code> to simplify definitions in this case: It's too easy to accidentally introduce new symbols in that context which were not intended to be there.</p> <p>Note that another way of sharing functions is to make them into a package and use that package from both notebooks. Ultimately this also boils down to having a shared context, as rcollyer noted in the comments. 
However a properly written package protects against or at least warns about most problems with hiding. Of course, writing a package might be overkill for your specific situation.</p>
2,030,063
<p>Let $A$ and $B$ be two disjoint subsets of $\mathbb{R}^2$ such that $A \cup B$ is open and disconnected in $\mathbb{R}^2$. Does it follow that both $A$ and $B$ are open? </p> <p>If $A$ and $B$ are both open and nonempty, since $A$ and $B$ are disjoint by hypothesis, then by definition $A \cup B$ is disconnected. But I guess the converse is not true. However, I cannot find any counterexample for this.</p> <p>So I will be happy if someone helps me with this.</p> <p>The same question already exists here, but there is no proper counterexample given there.</p>
Andrew D. Hwang
86,418
<p>You've got multiple counterexamples under the hypotheses of the question.</p> <p>If you assume $A$ and $B$ are <em>connected</em>, however, then "$A \cup B$ open and disconnected" implies $A$ and $B$ are open.</p> <p>To prove this, note that $A \cap B = \varnothing$: If a family of connected sets has an element in common, then the union is connected.</p> <p>Let $\{U, V\}$ be a separation of $A \cup B$. Each of the sets $U_{0} := (A \cup B) \cap U$ and $V_{0} := (A \cup B) \cap V$ is open and non-empty as an intersection of finitely many open sets.</p> <p>Since $A$ is connected, one of the sets $A \cap U$ or $A \cap V$ is empty, say $A \cap V$ without loss of generality. This means $A \subset A \cap U$, so $A \subset U$ and therefore $A = A \cap U = (A \cup B) \cap U = U_{0}$ is open. A similar argument shows $B$ is open.</p>
2,030,063
<p>Let $A$ and $B$ be two disjoint subsets of $\mathbb{R}^2$ such that $A \cup B$ is open and disconnected in $\mathbb{R}^2$. Does it follow that both $A$ and $B$ are open? </p> <p>If $A$ and $B$ are both open and nonempty, since $A$ and $B$ are disjoint by hypothesis, then by definition $A \cup B$ is disconnected. But I guess the converse is not true. However, I cannot find any counterexample for this.</p> <p>So I will be happy if someone helps me with this.</p> <p>The same question already exists here, but there is no proper counterexample given there.</p>
Unknown x
753,217
<p>For a counterexample, take <span class="math-container">$A=[(0,1)\times (0,1)] \cap [\mathbb Q \times \mathbb R] $</span> and <span class="math-container">$B=[(0,1)\times (0,1)] \cap [\mathbb Q^c \times \mathbb R] $</span></p>
2,476,267
<p>Here's a problem presented in my proofs class that I cannot for the life of me figure out. Apparently the answer is P2 according to the answer book, but I have no idea how. Help is much appreciated; thanks in advance!</p> <p>Anna wrote down the following 3 statements, denoted P1, P2, P3, on a blank piece of paper:</p> <p>P1 : There is exactly one FALSE statement written on this piece of paper.</p> <p>P2 : There are exactly two FALSE statements written on this piece of paper.</p> <p>P3 : There are exactly three FALSE statements written on this piece of paper.</p> <p>One of the above statements is TRUE. What statement is TRUE?</p>
zipirovich
127,842
<p>You're told that one of these statement &mdash; which means <strong>precisely</strong> one of them &mdash; is TRUE. So you can simply consider all possible cases in a very straightforward manner:</p> <ul> <li>maybe P1 is TRUE and the other two are FALSE;</li> <li>or maybe P2 is TRUE and the other two are FALSE;</li> <li>or maybe P3 is TRUE and the other two are FALSE.</li> </ul> <p>When you consider these three possible cases, you'll see that only one of them works, while the other two lead to a contradiction.</p> <p><strong>EDIT.</strong> Note that we don't even need to be told that one of them is true. Even without this additional information there's only one non-contradictory way (the same way) to assign TRUE/FALSE values to these three statements.</p>
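The three-case check is small enough to brute-force; a Python sketch (written by me) over all $2^3$ truth assignments:

```python
from itertools import product

def consistent(assignment):
    # assignment[k] is the proposed truth value of statement P(k+1)
    false_count = sum(1 for v in assignment if not v)
    # P(k+1) asserts: "there are exactly k+1 FALSE statements here"
    return all(v == (false_count == k + 1) for k, v in enumerate(assignment))

solutions = [a for a in product([True, False], repeat=3) if consistent(a)]
# only one assignment is free of contradiction: P2 true, P1 and P3 false
assert solutions == [(False, True, False)]
```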
2,476,267
<p>Here's a problem presented in my proofs class that I cannot for the life of me figure out. Apparently the answer is P2 according to the answer book, but I have no idea how. Help is much appreciated; thanks in advance!</p> <p>Anna wrote down the following 3 statements, denoted P1, P2, P3, on a blank piece of paper:</p> <p>P1 : There is exactly one FALSE statement written on this piece of paper.</p> <p>P2 : There are exactly two FALSE statements written on this piece of paper.</p> <p>P3 : There are exactly three FALSE statements written on this piece of paper.</p> <p>One of the above statements is TRUE. What statement is TRUE?</p>
Marcelo
487,716
<p>Try it case by case and see that only one can happen.</p> <p>If P1 is true, then there is exactly one false statement, so we have two options: P2-T and P3-F, or P2-F and P3-T; note that both are impossible. So P1 must be false (because we showed that every case assuming P1 true leads to a contradiction).</p> <p>Now we have P1-F. Assume that P2 is true; then there must be exactly two falses. Since we already have P1-F, this leads us to P3-F (P1-F, P2-T, P3-F). This configuration is possible! And we'll see that it's the only one.</p> <p>Still assuming P1-F, suppose we had P2-F too. Then either P3-T or P3-F; note that both cases lead to a contradiction: (P1-F, P2-F, P3-T, but there are two falses, not three) and (P1-F, P2-F, P3-F, but then there really are EXACTLY three falses, so P3 would be true). So neither can happen.</p> <p>Now the cases are exhausted and the only possibility is (P1-F, P2-T, P3-F).</p>
398,535
<p>I understand that "recursive" sets are those that can be completely decided by an algorithm, while "recursively enumerable" sets can be listed by an algorithm (but not necessarily decided). I am curious why the word "recursive" appears in their name. What does the concept of decidability/recognizability have to do with functions that call themselves?</p>
Michael Hardy
11,667
<p>Quoting from a comment by the original poster: "Is 'recursion' a word in computer science that has two independent meanings ("computable" and "function that calls itself")? Or are those meanings somehow related?"</p> <p><b>Yes, they're related,</b> but you could reasonably dislike the way in which they're related.</p> <p>Which functions are computable?</p> <p>Among those are</p> <ul> <li>the zero function;</li> <li>projection functions: $p_k(x_1,\ldots,x_k,\ldots,x_n) = x_k$;</li> <li>the successor function: $s(x)=x+1$ (if words in an alphabet, rather than numbers, are the arguments to these functions, then there is one successor function for each letter of the alphabet --- e.g. the one corresponding to "c" appends that letter to the end of the input word);</li> <li>compositions of computable functions;</li> <li>functions defined by recursion (calling themselves): $f(x_1,\ldots,x_k+1,\ldots,x_n) = \text{some expression involving }f(x_1,\ldots,x_k,\ldots,x_n)$;</li> <li>functions defined from other functions by minimalization: $f(x_1,\ldots,x_n) = \min\{y : g(y,x_1,\ldots,x_n)=0\}$.</li> </ul> <p>. . . and no others, if the Church-Turing thesis is right. Thus functions defined by recursion are precisely the functions that are computable.</p> <p>Minimalization means some computable functions are not everywhere defined, and you don't know whether the algorithm will run forever because the function is undefined. You may be searching for the smallest even number $&gt;2$ that is not the sum of two primes, or the smallest number $&gt;1$ that appears at least nine times in Pascal's triangle (existence of either is an open question). Without including minimalization you get a class of functions that can be shown by a diagonal argument to fail to include all computable functions.</p> <p>There are other seemingly quite different characterizations of computability. Every general-purpose programming language is such a characterization. 
But <em>simple</em> characterizations like the one above, which is in effect a simple programming language, are useful, not for writing programs, but for exploring the boundary between what is and what is not computable.</p>
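The minimalization scheme can be sketched directly in Python (the names <code>mu</code> and <code>isqrt</code> are mine); the possibility that the search loop never halts is exactly how partial computable functions arise:

```python
def mu(g):
    """Unbounded minimalization: f(args) = min{ y : g(y, *args) == 0 }.

    If no such y exists, the loop below never halts -- this is the
    source of partially-defined computable functions.
    """
    def f(*args):
        y = 0
        while g(y, *args) != 0:
            y += 1
        return y
    return f

# example: floor(sqrt(n)) as the least y with (y + 1)^2 > n
isqrt = mu(lambda y, n: 0 if (y + 1) ** 2 > n else 1)
assert [isqrt(n) for n in range(5)] == [0, 1, 1, 1, 2]
```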
121,453
<p>I wanted to ask for a summary of known results and references about the homotopy type of the mapping space $\mathrm{Map}(BG,BK)$ (and especially its connected components) between the classifying spaces when $G$ and $K$ are general topological groups. Thank you. </p>
Peter May
14,447
<p>The Sullivan conjecture, a beautiful theorem proved by Haynes Miller (The Sullivan conjecture on maps from classifying spaces) has striking consequences about maps between classifying spaces when the target is completed at a prime $p$. Some references are Dwyer and Zabrodsky (Maps between classifying spaces), Jackowski, McClure, and Oliver Homotopy classification of self-maps of $BG$ via $G$-actions. I, II; Self-homotopy equivalences of classifying spaces of compact connected Lie groups); Notbohm (Maps between classifying spaces). </p> <p>The classical work of J.F. Adams on this topic is also well worth remembering: Maps between classifying spaces (I), II, III and Maps between $p$-completed classifying spaces.</p>
18,174
<p>I am an undergraduate secondary math education major. In <span class="math-container">$2$</span> weeks I have to give a <a href="https://www.mathmammoth.com/lessons/number_talks.php" rel="nofollow noreferrer">Number Talk</a> in my math ed class on the problem "<span class="math-container">$3.9$</span> times <span class="math-container">$7.5$</span>". I need to come up with as many different solution methods as possible. </p> <p>Here is what I have come up with so far:</p> <ol> <li><p>The most common way: multiply the two numbers "vertically", ignoring the decimal, to get <span class="math-container">$2925$</span>: <span class="math-container">\begin{array} {}\hfill {}^6{}^439\\ \hfill \times\ 75 \\\hline \hfill {}^1 195 \\ \hfill +\ 273\phantom{0} \\\hline \hfill 2925 \end{array}</span> Since there are two numbers that are to the right of the decimal, place the decimal after the <span class="math-container">$9$</span> to get the answer <span class="math-container">$29.25$</span>.</p></li> <li><p>Write both numbers as improper fractions: <span class="math-container">$$3.9= \dfrac{39}{10}$$</span> and <span class="math-container">$$7.5=\dfrac{75}{10}$$</span>Then multiply <span class="math-container">$$\dfrac{39}{10}\cdot\dfrac{75}{10}$$</span> to get <span class="math-container">$\dfrac{2925}{100}$</span> which simplifies to 29.25.</p></li> <li><p>Use lattice multiplication. 
This is a very uncommon method that I doubt the students will use, and I need to review it myself before I consider it.</p></li> <li><p>Since <span class="math-container">$3.9$</span> is very close to <span class="math-container">$4$</span>, we could instead do <span class="math-container">$4\cdot7.5=30$</span> and then subtract <span class="math-container">$0.1\cdot7.5=0.75$</span> to get <span class="math-container">$30 - 0.75=29.25$</span></p></li> <li><p>Similarly, since <span class="math-container">$7.5$</span> rounds up to <span class="math-container">$8$</span>, we can do <span class="math-container">$3.9\cdot 8=31.2$</span> and then subtract .<span class="math-container">$5\cdot 3.9=1.95$</span> to get <span class="math-container">$31.2-1.95=29.25$</span> </p></li> </ol> <p>Are there any other possible methods the students might use? (<strong>Note:</strong> they are junior college math ed students.) Thanks!</p>
Community
-1
<p>Estimate the result. Multiply the numbers without paying attention to the decimal point. Place the decimal point so that the result is close to the estimate.</p>
18,174
<p>I am an undergraduate secondary math education major. In <span class="math-container">$2$</span> weeks I have to give a <a href="https://www.mathmammoth.com/lessons/number_talks.php" rel="nofollow noreferrer">Number Talk</a> in my math ed class on the problem "<span class="math-container">$3.9$</span> times <span class="math-container">$7.5$</span>". I need to come up with as many different solution methods as possible. </p> <p>Here is what I have come up with so far:</p> <ol> <li><p>The most common way: multiply the two numbers "vertically", ignoring the decimal, to get <span class="math-container">$2925$</span>: <span class="math-container">\begin{array} {}\hfill {}^6{}^439\\ \hfill \times\ 75 \\\hline \hfill {}^1 195 \\ \hfill +\ 273\phantom{0} \\\hline \hfill 2925 \end{array}</span> Since there are two numbers that are to the right of the decimal, place the decimal after the <span class="math-container">$9$</span> to get the answer <span class="math-container">$29.25$</span>.</p></li> <li><p>Write both numbers as improper fractions: <span class="math-container">$$3.9= \dfrac{39}{10}$$</span> and <span class="math-container">$$7.5=\dfrac{75}{10}$$</span>Then multiply <span class="math-container">$$\dfrac{39}{10}\cdot\dfrac{75}{10}$$</span> to get <span class="math-container">$\dfrac{2925}{100}$</span> which simplifies to 29.25.</p></li> <li><p>Use lattice multiplication. 
This is a very uncommon method that I doubt the students will use, and I need to review it myself before I consider it.</p></li> <li><p>Since <span class="math-container">$3.9$</span> is very close to <span class="math-container">$4$</span>, we could instead do <span class="math-container">$4\cdot7.5=30$</span> and then subtract <span class="math-container">$0.1\cdot7.5=0.75$</span> to get <span class="math-container">$30 - 0.75=29.25$</span></p></li> <li><p>Similarly, since <span class="math-container">$7.5$</span> rounds up to <span class="math-container">$8$</span>, we can do <span class="math-container">$3.9\cdot 8=31.2$</span> and then subtract <span class="math-container">$0.5\cdot 3.9=1.95$</span> to get <span class="math-container">$31.2-1.95=29.25$</span> </p></li> </ol> <p>Are there any other possible methods the students might use? (<strong>Note:</strong> they are junior college math ed students.) Thanks!</p>
Kevin
13,109
<p><strong>Do it in slices.</strong></p> <p>We teach young kids that multiplication is just repeated addition. What's 5x4? It's just 5, added 4 times.</p> <p>So do the same with the decimal numbers. 3.9 x 7.5. First, let's use 3.9 added 7 times. You can't add it an eighth time, because that's over 7.5 - we only need to add it 0.5 times now. So how would you add it 0.5 times? Divide it by 10, and add it five times. So you'd get:</p> <p>3.9 + 3.9 + 3.9 + 3.9 + 3.9 + 3.9 + 3.9 + 0.39 + 0.39 + 0.39 + 0.39 + 0.39</p> <p><strong>Express Each Term As A Rational Fraction</strong></p> <p>Every terminating decimal value can be expressed as a rational fraction. 0.153258 is just 153258/1000000. And you can multiply two rational fractions without worrying about decimal places. Then, finally at the end, you can reduce it back into decimal form.</p> <p>3.9 is 39/10. 7.5 is 15/2 (or 75/10, if you want.) Multiply those fractions together, and you get (39x15)/(10x2) - or reduced down 117/4.</p> <p><strong>Jokingly ask them to convert it to binary and let them discover the joy of floating point errors.</strong></p> <p>... hey, you asked for as many ways as possible, right?</p>
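Both ideas above can be checked with exact rational arithmetic; here is a small sketch (my own) using Python's `fractions` module, with the slice sum and the fraction product computed side by side.

```python
from fractions import Fraction

# "Slices": 3.9 added 7 whole times, plus 3.9/10 added five times for the 0.5.
slices = Fraction(39, 10) * 7 + Fraction(39, 100) * 5

# "Rational fractions": 39/10 times 15/2.
product = Fraction(39, 10) * Fraction(15, 2)

assert slices == product == Fraction(117, 4)
print(float(product))  # 29.25
```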
18,174
<p>I am an undergraduate secondary math education major. In <span class="math-container">$2$</span> weeks I have to give a <a href="https://www.mathmammoth.com/lessons/number_talks.php" rel="nofollow noreferrer">Number Talk</a> in my math ed class on the problem "<span class="math-container">$3.9$</span> times <span class="math-container">$7.5$</span>". I need to come up with as many different solution methods as possible. </p> <p>Here is what I have come up with so far:</p> <ol> <li><p>The most common way: multiply the two numbers "vertically", ignoring the decimal, to get <span class="math-container">$2925$</span>: <span class="math-container">\begin{array} {}\hfill {}^6{}^439\\ \hfill \times\ 75 \\\hline \hfill {}^1 195 \\ \hfill +\ 273\phantom{0} \\\hline \hfill 2925 \end{array}</span> Since there are two numbers that are to the right of the decimal, place the decimal after the <span class="math-container">$9$</span> to get the answer <span class="math-container">$29.25$</span>.</p></li> <li><p>Write both numbers as improper fractions: <span class="math-container">$$3.9= \dfrac{39}{10}$$</span> and <span class="math-container">$$7.5=\dfrac{75}{10}$$</span>Then multiply <span class="math-container">$$\dfrac{39}{10}\cdot\dfrac{75}{10}$$</span> to get <span class="math-container">$\dfrac{2925}{100}$</span> which simplifies to 29.25.</p></li> <li><p>Use lattice multiplication. 
This is a very uncommon method that I doubt the students will use, and I need to review it myself before I consider it.</p></li> <li><p>Since <span class="math-container">$3.9$</span> is very close to <span class="math-container">$4$</span>, we could instead do <span class="math-container">$4\cdot7.5=30$</span> and then subtract <span class="math-container">$0.1\cdot7.5=0.75$</span> to get <span class="math-container">$30 - 0.75=29.25$</span></p></li> <li><p>Similarly, since <span class="math-container">$7.5$</span> rounds up to <span class="math-container">$8$</span>, we can do <span class="math-container">$3.9\cdot 8=31.2$</span> and then subtract <span class="math-container">$0.5\cdot 3.9=1.95$</span> to get <span class="math-container">$31.2-1.95=29.25$</span> </p></li> </ol> <p>Are there any other possible methods the students might use? (<strong>Note:</strong> they are junior college math ed students.) Thanks!</p>
Galactic
13,851
<p><span class="math-container">$$\left\lfloor \frac{\left(x+y\right)^2}{4} \right\rfloor - \left\lfloor \frac{\left(x-y\right)^2}{4} \right\rfloor = xy.$$</span></p> <p>This answer uses the floor function (which rounds its argument down to the nearest integer). The identity holds for integer <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, so for <span class="math-container">$3.9\times 7.5$</span> first compute <span class="math-container">$39\times 75$</span> this way and shift the decimal point back two places. You can then quickly consult a table of quarter squares 0.25*x^2. </p> <pre><code>X, 0.25*x^2 1, 0.25 2, 1 3, (9/4) 4, 4 5, (25/4) 6, 9 7, (49/4) 8, 16 9, (81/4) 10, 25 11, (121/4) 12, 36 13, (169/4) 14, 49 15, (225/4) </code></pre> <p>Source: <a href="https://en.wikipedia.org/wiki/Multiplication_algorithm#Quarter_square_multiplication" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Multiplication_algorithm#Quarter_square_multiplication</a></p>
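A quick Python sketch of quarter-square multiplication (my own illustration; the scaling to whole numbers is needed because the floor identity is exact for integers):

```python
def quarter_square_mul(x: int, y: int) -> int:
    # floor((x+y)^2/4) - floor((x-y)^2/4) equals x*y for integers x, y.
    return (x + y) ** 2 // 4 - (x - y) ** 2 // 4

# Scale 3.9 and 7.5 up to the integers 39 and 75, multiply,
# then shift the decimal point back two places.
print(quarter_square_mul(39, 75) / 100)  # 29.25
```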
2,535,347
<p>Okay so I'm required to find if the given function is <strong>one-one/many-one and onto/into</strong>.</p> <p>The function is: $\mathbb R \to \mathbb R$ and $$f(x) = x\left(\frac{2^x - 2^{-x}}{2^x + 2^{-x}}\right)$$</p> <p>So as this function is even, it can't be one-one. But I'm facing difficulty in deducing if it's onto or not. My book says <em>"The function is continuous and any even-continuous function cannot have range ‘R’. Hence function is many one into."</em></p> <p>Can someone explain how I can prove it continuous just by observation? Or is there any other way I can show it to be an onto function?</p>
Burrrrb
322,248
<p>Your function is not onto, because it is non-negative. To see this, examine the cases $x\ge0$ and $x\le0$ separately: the denominator is always positive, while the numerator is positive when $x$ is positive and negative when $x$ is negative. So the function is always non-negative.</p> <p>Also note that $ \lim_{x\rightarrow \infty} f(x) = \lim_{x\rightarrow -\infty} f(x) = \infty$, so by continuity $f$ maps $\mathbb{R}$ onto $[0,\infty)$.</p> <p>The statement in your book that an even, continuous function can't map $\mathbb{R}$ onto $\mathbb{R}$ is incorrect: the function $f(x)=x \sin(x)$ is even, continuous and maps $\mathbb{R}$ onto $\mathbb{R}$. To see this, it suffices to note that $f(\frac{\pi}{2} +2n\pi) = \frac{\pi}{2} +2n\pi$ and $f(\frac{3\pi}{2} +2n\pi) = -(\frac{3\pi}{2} +2n\pi)$, and then use the Intermediate Value Theorem to conclude that the function attains all the values in between.</p> <p>It's interesting to note that $f(x) = x\left(\frac{2^x - 2^{-x}}{2^x + 2^{-x}}\right) = x \tanh(x\log2 )$.</p>
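A numeric spot check (illustration only, not a proof) of the claims above: evenness, non-negativity, and the $x\tanh(x\log 2)$ form.

```python
import math

def f(x: float) -> float:
    return x * (2**x - 2**-x) / (2**x + 2**-x)

for x in (-3.0, -0.5, 0.0, 0.5, 3.0):
    assert f(x) >= 0                       # never negative
    assert math.isclose(f(x), f(-x))       # even
    assert math.isclose(f(x), x * math.tanh(x * math.log(2)))
print("checks passed")
```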
260,240
<p><img src="https://i.stack.imgur.com/3WeQw.png" alt="enter image description here"></p> <p>In the last bullet, it says l must be even and provides an explanation. I don't understand the explanation, however. Why does it have to be even?</p>
Billy
39,970
<p>I don't really follow the argument given. Here's a correct argument...</p> <p>If $a^2 = 2l$, then $a^2$ is divisible by 2. But 2 is prime, so $a$ must be divisible by 2 too - say $a = 2b$. Then $(2b)^2 = 2l$, i.e. $2b^2 = l$, and so $l$ is divisible by 2.</p>
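The parity argument can also be spot-checked by brute force for small values (an illustration of the claim, not a substitute for the proof):

```python
# If a^2 = 2*l then a^2 is even; the argument above says a, and hence l,
# must then be even. Check this exhaustively for small a.
for a in range(1000):
    if (a * a) % 2 == 0:          # here a^2 = 2*l with l as below
        l = a * a // 2
        assert a % 2 == 0 and l % 2 == 0
    else:
        assert a % 2 == 1         # odd a never gives an even square
```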
3,700,575
<p>I have to find Taylor series at <span class="math-container">$a=1$</span> for </p> <p><span class="math-container">$ f(x)=\begin{cases} \frac{e^{x}-e}{x-1},\quad &amp;\text{if } x\ne1\\ e,\quad &amp;\text{if } x=1\\ \end{cases} $</span></p> <p>I haven't found Taylor series for such functions before and I also don't know how to find n-th derivative of this function. Can anyone help me with this?</p>
Community
-1
<p>With <span class="math-container">$y:=x-1$</span>,<span class="math-container">$$\frac{e^{x-1+1}-e}{x-1}=e\frac{e^y-1}y=e\sum_{k=1}^\infty \frac{y^k}{k!y}=e\sum_{k=1}^\infty \frac{(x-1)^{k-1}}{k!}.$$</span></p>
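The series can be checked numerically; here is a small sketch comparing partial sums against the original function:

```python
import math

def f(x: float) -> float:
    return math.e if x == 1 else (math.exp(x) - math.e) / (x - 1)

def taylor(x: float, terms: int = 30) -> float:
    # e * sum_{k>=1} (x-1)^(k-1) / k!
    return math.e * sum((x - 1) ** (k - 1) / math.factorial(k)
                        for k in range(1, terms + 1))

for x in (0.0, 0.5, 1.0, 2.0, 3.0):
    assert math.isclose(f(x), taylor(x))
```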
1,735,653
<p>So I need to use the fact that: $$\cos(4x) + i\sin(4x) = \left(\cos(x) + i\sin(x)\right)^4$$ to derive identities for $\cos(4x)$ and $\sin(4x)$ in terms of $\cos(x)$ and $\sin(x)$. I'm not sure how to go about this, could I please get some help.</p>
Roman83
309,360
<p>$$(\cos x +i \sin x)^4=\cos 4x+i \sin 4x$$ $$(\cos x +i \sin x)^4=\cos^4x+4\cos ^3x \cdot (i \sin x)+6\cos ^2x \cdot (i \sin x)^2+4\cos x \cdot (i \sin x)^3+(i \sin x)^4=$$ $$=\cos^4x-6 \cos ^2x \sin^2x+\sin^4x+$$ $$+i(4 \cos ^3 x \sin x-4\cos x\sin^3x)$$ Then $$\cos 4x=\cos^4x-6 \cos ^2x \sin^2x+\sin^4x$$ and $$\sin 4x=4 \cos ^3 x \sin x-4\cos x\sin^3x$$</p>
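A quick numeric verification of both identities (illustration only):

```python
import math

for x in (0.3, 1.1, 2.5):
    c, s = math.cos(x), math.sin(x)
    assert math.isclose(math.cos(4 * x), c**4 - 6 * c**2 * s**2 + s**4)
    assert math.isclose(math.sin(4 * x), 4 * c**3 * s - 4 * c * s**3)
```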
3,068,381
<p>I calculated, using Mathematica, that for <span class="math-container">$4\leq k \leq 100$</span>, <span class="math-container">$$ \sum_{j=k}^{2k} \sum_{i=j+1-k}^j (-1)^j 2^{j-i} \binom{2k}{j} S(j,i) s(i,j+1-k) = 0,$$</span> where <span class="math-container">$s(i,j)$</span> and <span class="math-container">$S(i,j)$</span> are Stirling numbers of the first and second kinds, respectively.</p> <p>Here the code:</p> <blockquote> <p>F[k_] := Sum[(-1)^j 2^(j - i) Binomial[2 k, j] StirlingS2[j, i] (StirlingS1[i, j + 1 - k]), {j, k, 2 k}, {i, j - k + 1, j}];</p> <p>Table[F[k], {k, 4, 100}]</p> </blockquote> <p>How do I prove it holds for all <span class="math-container">$k \geq 4$</span> ?</p>
Marko Riedel
44,883
<p>Working with the notation by @GCab we seek to evaluate</p> <p><span class="math-container">$$S_n = \sum_{j=n}^{2n} \sum_{k=j+1-n}^j (-1)^j 2^{j-k} {2n\choose j} {j\brace k} {k\brack j+1-n}.$$</span></p> <p>With the usual EGFs we get</p> <p><span class="math-container">$$\sum_{j=n}^{2n} \sum_{k=j+1-n}^j (-1)^j 2^{j-k} {2n\choose j} j! [z^j] \frac{(\exp(z)-1)^k}{k!} \\ \times k! [w^k] \frac{1}{(j+1-n)!} \left(\log\frac{1}{1-w}\right)^{j+1-n}.$$</span></p> <p>Now we have</p> <p><span class="math-container">$${2n\choose j} j! \frac{1}{(j+1-n)!} = \frac{(2n)!}{(2n-j)! \times (j+1-n)!} = \frac{(2n)!}{(n+1)!} {n+1\choose j+1-n}.$$</span></p> <p>This yields for the sum</p> <p><span class="math-container">$$\frac{(2n)!}{(n+1)!} \sum_{j=n}^{2n} {n+1\choose j+1-n} (-1)^j 2^j \\ \times [z^j] \sum_{k=j+1-n}^j 2^{-k} (\exp(z)-1)^k [w^k] \left(\log\frac{1}{1-w}\right)^{j+1-n} \\ = \frac{(2n)!}{(n+1)!} (-1)^n 2^n \sum_{j=0}^{n} {n+1\choose j+1} (-1)^j 2^j \\ \times [z^{n+j}] \sum_{k=j+1}^{j+n} 2^{-k} (\exp(z)-1)^k [w^k] \left(\log\frac{1}{1-w}\right)^{j+1}.$$</span></p> <p>Observe that <span class="math-container">$(\exp(z)-1)^k = z^k + \cdots$</span> and hence we may extend the inner sum beyond <span class="math-container">$j+n$</span> due to the coefficient extractor <span class="math-container">$[z^{n+j}].$</span> We find</p> <p><span class="math-container">$$\frac{(2n)!}{(n+1)!} (-1)^n 2^n \sum_{j=0}^{n} {n+1\choose j+1} (-1)^j 2^j \\ \times [z^{n+j}] \sum_{k\ge j+1} 2^{-k} (\exp(z)-1)^k [w^k] \left(\log\frac{1}{1-w}\right)^{j+1}.$$</span></p> <p>Furthermore note that <span class="math-container">$\left(\log\frac{1}{1-w}\right)^{j+1} = w^{j+1} +\cdots$</span> so that the coefficient extractor <span class="math-container">$[w^k]$</span> covers the entire series, producing</p> <p><span class="math-container">$$\frac{(2n)!}{(n+1)!} (-1)^n 2^n \sum_{j=0}^{n} {n+1\choose j+1} (-1)^j 2^j [z^{n+j}] \left(\log\frac{1}{1-(\exp(z)-1)/2}\right)^{j+1}.$$</span></p> <p>Working with 
formal power series we are justified in writing</p> <p><span class="math-container">$$[z^{n+j}] \left(\log\frac{1}{1-(\exp(z)-1)/2}\right)^{j+1} = [z^{n-1}] \frac{1}{z^{j+1}} \left(\log\frac{1}{1-(\exp(z)-1)/2}\right)^{j+1}$$</span></p> <p>because the logarithmic term starts at <span class="math-container">$z^{j+1}/2^{j+1}.$</span> To see this write</p> <p><span class="math-container">$$\frac{\exp(z)-1}{2} + \frac{1}{2} \frac{(\exp(z)-1)^2}{2^2} + \frac{1}{3} \frac{(\exp(z)-1)^3}{2^3} + \cdots$$</span></p> <p>We continue</p> <p><span class="math-container">$$\frac{(2n)!}{(n+1)!} (-1)^{n-1} 2^{n-1} \\ \times [z^{n-1}] \sum_{j=0}^{n} {n+1\choose j+1} (-1)^{j+1} 2^{j+1} \frac{1}{z^{j+1}} \left(\log\frac{1}{1-(\exp(z)-1)/2}\right)^{j+1} \\ = \frac{(2n)!}{(n+1)!} (-1)^{n-1} 2^{n-1} \\ \times [z^{n-1}] \sum_{j=1}^{n+1} {n+1\choose j} (-1)^{j} 2^{j} \frac{1}{z^{j}} \left(\log\frac{1}{1-(\exp(z)-1)/2}\right)^{j}.$$</span></p> <p>The term for <span class="math-container">$j=0$</span> in the sum is one and hence only contributes to <span class="math-container">$n=1$</span> so that we may write</p> <p><span class="math-container">$$-[[n=1]] + \frac{(2n)!}{(n+1)!} (-1)^{n-1} 2^{n-1} \\ \times [z^{n-1}] \sum_{j=0}^{n+1} {n+1\choose j} (-1)^{j} 2^{j} \frac{1}{z^{j}} \left(\log\frac{1}{1-(\exp(z)-1)/2}\right)^{j} \\ = -[[n=1]] + \frac{(2n)!}{(n+1)!} (-1)^{n-1} 2^{n-1} \\ \times [z^{n-1}] \left(1-\frac{2}{z} \log\frac{1}{1-(\exp(z)-1)/2}\right)^{n+1}.$$</span></p> <p>Finally observe that</p> <p><span class="math-container">$$\left(1-\frac{2}{z} \log\frac{1}{1-(\exp(z)-1)/2}\right)^{n+1} \\ = \left(1-\frac{2}{z} \left( \frac{\exp(z)-1}{2} + \frac{1}{2} \frac{(\exp(z)-1)^2}{2^2} + \frac{1}{3} \frac{(\exp(z)-1)^3}{2^3} + \cdots \right)\right)^{n+1} \\ = \left( -\frac{3}{4} z - \cdots \right)^{n+1}$$</span></p> <p>and furthermore</p> <p><span class="math-container">$$[z^{n-1}] \left((-1)^{n+1} \frac{3^{n+1}}{4^{n+1}} z^{n+1} + \cdots \right) = 0$$</span></p> <p>which is the claim.</p>
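Independently of the generating-function proof above, the vanishing of the sum can be re-checked in Python with the standard Stirling recurrences. This mirrors the Mathematica computation quoted in the question (a numeric check, not part of the proof):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def s1(n, k):  # signed Stirling numbers of the first kind
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return s1(n - 1, k - 1) - (n - 1) * s1(n - 1, k)

@lru_cache(maxsize=None)
def s2(n, k):  # Stirling numbers of the second kind
    if n == k == 0:
        return 1
    if n == 0 or k == 0 or k > n:
        return 0
    return k * s2(n - 1, k) + s2(n - 1, k - 1)

def total(n):
    # S_n = sum_{j=n}^{2n} sum_{k=j+1-n}^{j} (-1)^j 2^(j-k) C(2n,j) S2(j,k) s1(k, j+1-n)
    return sum((-1)**j * 2**(j - k) * comb(2 * n, j) * s2(j, k) * s1(k, j + 1 - n)
               for j in range(n, 2 * n + 1)
               for k in range(j + 1 - n, j + 1))

assert all(total(n) == 0 for n in range(4, 10))
```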
2,328,677
<p>I wish to find the area between the curves:</p> <p>$y=\sqrt{x}$</p> <p>and</p> <p>$y=x^{2}-3x-2$</p> <p>and between $x=1, x=9$.</p> <p>Now, I think that the first thing to do is to find the intersection between the two curves, and then to find which curve is the upper one before and after the intersection point. At the end, an integral will follow.</p> <p>The problem is, how do you find the intersection point? If you compare the curves, you get an equation which I find hard to solve. Is there a trick I am missing? How do you solve it when you have a square root and a power of 2?</p> <p>Thank you!</p>
Dr. Sonnhard Graubner
175,066
<p>You must solve the equation $$\sqrt{x}=x^2-3x-2$$ After squaring and factorizing you will get $$(x-4) \left(x^3-2 x^2-3 x-1\right)=0$$ so $x=4$ is an intersection point. The real root of the cubic factor, $$x\approx 3.07959562349143878601,$$ is extraneous: at that point $x^2-3x-2$ is negative, so it satisfies $\sqrt{x}=-(x^2-3x-2)$ rather than the original equation. Hence the only intersection of the two curves is at $x=4$.</p>
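A numerical sanity check of the two candidate roots (squaring an equation can introduce extraneous solutions, and only $x=4$ survives):

```python
import math

f = lambda x: math.sqrt(x)          # y = sqrt(x)
g = lambda x: x**2 - 3*x - 2        # y = x^2 - 3x - 2

# x = 4 is a genuine intersection:
assert math.isclose(f(4), g(4))     # both sides equal 2

# The cubic's real root solves only the squared equation:
r = 3.07959562349143878601
assert abs(r**3 - 2*r**2 - 3*r - 1) < 1e-12   # root of the cubic factor
assert g(r) < 0                                # right-hand side is negative here
assert math.isclose(f(r), -g(r))               # so this root is extraneous
```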
1,281,950
<p>Is there a way to express the integral</p> <p>$I(x_{0}, t) = \int_{x_{0}}^{\infty} \frac{1}{x} \, \cos (x t) \, \text{e}^{-x^{2}} \, dx$,</p> <p>where $x_{0} \neq 0$ and $t \ge 0$, in terms of more well-known functions? In particular, I am interested in a form where $t$ appears in the limits and not in the integrand, analogous to how something like the integral of $sin (x t) \, \text{e}^{-x^{2}}$ may be expressed in terms of error functions. My attempts thus far have only yielded</p> <p>$I(x_{0}, t) = \text{e}^{-t^{2} / 4} \Big(\int_{x_{0} - i t/2}^{\infty} \frac{\text{e}^{-x^{2}} (z - i t / 2)}{z^{2} + t^{2} / 4} \, dz + \int_{x_{0} + i t/2}^{\infty} \frac{\text{e}^{-x^{2}} (z + i t / 2)}{z^{2} + t^{2} / 4} \, dz \Big)$,</p> <p>from which I cannot find a way to proceed further. My next step would be to split the limits, introducing integrals to and from $0$, although this introduces singularities which naturally cancel but are nonetheless undesirable.</p> <p>Any analyses of the integral are welcome.</p> <p>For completeness (although not relevant to the actual question itself), I have an incrementally increasing variable $t$ with fixed $x_{0}$, for which I intend to compute this integral. For large $t$, it becomes highly oscillatory.</p>
Daniel
150,142
<p>A matrix $M_{n\times n}$ is invertible if (by definition) there exists a matrix $N_{n\times n}$ such that $MN=NM=I$ ($I$: the $n\times n$ identity).</p> <p>Here you have $AB=I$; can you prove $BA=I$?</p> <p><em>NOTE:</em> This is how I interpreted your question; asking for uniqueness of the inverse and showing that $A$ is the inverse of $B$ are different things.</p>
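As a concrete illustration (not a proof) of the fact the exercise asks for, here is a tiny integer example where $AB=I$ and, indeed, $BA=I$ as well:

```python
def matmul(A, B):
    # Plain list-of-lists matrix product for square matrices.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[2, 1], [1, 1]]
B = [[1, -1], [-1, 2]]   # chosen so that AB is the identity
I = [[1, 0], [0, 1]]

assert matmul(A, B) == I
assert matmul(B, A) == I   # ...and BA = I holds too
```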