| qid | question | author | author_id | answer |
|---|---|---|---|---|
3,279,878 | <p>I got this equation while I was trying to solve a certain math Olympiad problem. I tried modulus and whatnot, but I haven't got anywhere. Is there a way to prove this?</p>
| Robert Israel | 8,508 | <p>Show that all solutions mod <span class="math-container">$16$</span> have <span class="math-container">$x,y,z,a$</span> all even, and use infinite descent.</p>
|
2,779,083 | <p>Given a polynomial $f(z)\in\mathbb{C}[z]$, $\exists$ only finitely many $c$ s.t. $f(z)-c=0$ has repeated roots?
Is the above true in general?
Is it true for polynomials of the form $f(z) = (z-z_1)\cdot ... \cdot (z - z_n)$ where $z_1, ... , z_n \in \mathbb{C}$ are distinct?</p>
| Angina Seng | 436,618 | <p>A polynomial has repeated roots iff its discriminant is zero. The discriminant is a polynomial in its coefficients. So the discriminant
of $f(z)-c$ is a polynomial, say $D(c)$, in $c$. Either $D(c)$
has finitely many zeros, which is what we want, or it is identically
zero. In that case $f(z)=c$ has repeated root for all $c$. But that's
not true. If $f(z)$ has leading term $z^n$, then when $|c|$ is large,
the roots of $f(z)=c$ approximate those of $z^n=c$ which are distinct.</p>
|
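The discriminant argument above can be checked concretely. A minimal sketch (the polynomial $f(z)=z^3-3z$ and the depressed-cubic discriminant formula are my own illustrative choices, not from the thread): for $f(z)-c = z^3-3z-c$, the discriminant of $z^3+pz+q$ is $-4p^3-27q^2$, so $D(c)=108-27c^2$, which vanishes only at $c=\pm 2$.

```python
# Discriminant of the depressed cubic z^3 + p z + q is -4 p^3 - 27 q^2.
def disc(p, q):
    return -4 * p**3 - 27 * q**2

# For f(z) = z^3 - 3z, f(z) - c = z^3 - 3z - c, so p = -3 and q = -c.
def D(c):
    return disc(-3, -c)

# D(c) = 108 - 27 c^2 vanishes only at c = 2 and c = -2: finitely many values.
roots_of_D = [c for c in range(-10, 11) if D(c) == 0]

# Sanity check: at c = -2, f(z) - c = z^3 - 3z + 2 = (z - 1)^2 (z + 2) has z = 1
# as a repeated root: the polynomial and its derivative both vanish there.
p_val = 1**3 - 3 * 1 + 2           # value of z^3 - 3z + 2 at z = 1
dp_val = 3 * 1**2 - 3              # derivative 3z^2 - 3 at z = 1
```

As the answer says, $D(c)$ is a nonzero polynomial in $c$, so the "bad" values of $c$ form a finite set.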
1,611,052 | <p>Let $H$ be an infinite-dimensional Hilbert space over $\mathbb{R}$.</p>
<p>If $x_1, x_2, \ldots x_n \in H$, how to prove: </p>
<p>$\sum_{1\leq i,j\leq n} {\lvert\lvert x_i - x_j \rvert\rvert}^2 \leq \sum_{1\leq i,j\leq n} ({\lvert\lvert x_i \rvert\rvert}^2 + {\lvert\lvert x_j \rvert\rvert}^2)$</p>
| gerw | 58,577 | <p>We have
\begin{align*}\sum_{1\le i,j\le n} \|x_i - x_j\|^2
&=\sum_{1\le i,j\le n} \left\{\|x_i\|^2 + \|x_j\|^2 - 2 \, (x_i,x_j)\right\}
\\&= \sum_{1\le i,j\le n} \left\{\|x_i\|^2 + \|x_j\|^2\right\} - 2 \, (\sum_{i=1}^n x_i,\sum_{j=1}^n x_j)
\\&\le \sum_{1\le i,j\le n} \left\{\|x_i\|^2 + \|x_j\|^2\right\}\end{align*}</p>
|
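The computation above shows the two sides differ by exactly $2\,\|\sum_i x_i\|^2 \ge 0$, and this can be verified numerically. A sketch with random vectors in $\mathbb{R}^5$ (finite dimension suffices to illustrate the inner-product identity; the dimension and vector count are arbitrary choices):

```python
import random

random.seed(0)
d, n = 5, 8
xs = [[random.uniform(-1, 1) for _ in range(d)] for _ in range(n)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm_sq(u):
    return dot(u, u)

# Left side: sum over all pairs of ||x_i - x_j||^2.
lhs = sum(norm_sq([a - b for a, b in zip(xs[i], xs[j])])
          for i in range(n) for j in range(n))

# Right side: sum over all pairs of ||x_i||^2 + ||x_j||^2.
rhs = sum(norm_sq(xs[i]) + norm_sq(xs[j])
          for i in range(n) for j in range(n))

# By the identity in the answer, rhs - lhs = 2 * ||x_1 + ... + x_n||^2.
s = [sum(x[k] for x in xs) for k in range(d)]
gap = rhs - lhs
```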
700,004 | <p>I have been working on this proof for a few hours and I cannot make it work out.</p>
<p>$$\sum_{i=1}^{n}\frac{1}{i(i+1)}=1-\frac{1}{(n+1)}$$</p>
<p>I need to get to $1-\frac{1}{k+2}$.</p>
<p>I get as far as $$1-\frac{1}{k+1}+\frac{1}{(k+1)(k+2)}$$
then I have tried $1-\frac{(k+2)+1}{(k+1)(k+2)}$ which got me exactly nowhere. </p>
| Jose Antonio | 84,164 | <p>Claim: $\sum_{n=1}^N \frac{1}{n(n+1)}=1-\frac{1}{N+1}$</p>
<p>If $N=0$, then $\sum_{n=1}^0 \left(1/n-1/(n+1)\right)=0$ and $1-1/(N+1)=0$. Suppose we have proven the assertion for some $N\ge0$. Then </p>
<p>\begin{align}\sum_{n=1}^{N+1} 1/n-1/(n+1)=1/(N+1)-1/(N+2)+\sum_{n=1}^N 1/n-1/(n+1)\\
=1/(N+1)-1/(N+2)+1-1/(N+1)\\
=1-1/(N+2)\end{align}</p>
|
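The telescoping identity proved by induction above can also be checked exactly with rational arithmetic (a sketch, not part of the original answer):

```python
from fractions import Fraction

def partial_sum(N):
    """Exact value of sum_{i=1}^{N} 1/(i(i+1))."""
    return sum(Fraction(1, i * (i + 1)) for i in range(1, N + 1))

# The telescoping closed form: 1 - 1/(N+1), verified for N = 1..50.
checks = all(partial_sum(N) == 1 - Fraction(1, N + 1) for N in range(1, 51))
```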
308,856 | <p>A set $E\subseteq \mathbb{R}^d$ is said to be Jordan measurable if its inner measure $m_{*}(E)$ and outer measure $m^{*}(E)$ are equal. However, Lebesgue measure theory is developed with only outer measure. </p>
<p>A function is Riemann integrable iff its upper integral and lower integral are equal. However, in Lebesgue integration theory, we rarely use the upper Lebesgue integral.</p>
<p>Why are outer measure and lower integral more important than inner measure and upper integral?</p>
| arsmath | 3,711 | <p>I have a (possibly idiosyncratic) view that the natural form of measure theory is for finite measure spaces and bounded functions. Other cases are obviously very important, but we have to work harder to get them. You can see this is many of the proofs, where the finite case is easier, and we have to work a bit more to generalize it. For example, the usual proof of the Radon-Nikodym theorem works that way.</p>
<p>In the finite measure space case, with bounded functions, everything can be made symmetric. The symmetry is broken in the general case, because allowing infinity breaks it. In integration, this asymmetry shows up in the way we have to have separate theorems for the non-negative measurable functions and the integrable functions. For bounded functions on finite measure spaces you don't need to impose any extra conditions.</p>
|
308,856 | <p>A set $E\subseteq \mathbb{R}^d$ is said to be Jordan measurable if its inner measure $m_{*}(E)$ and outer measure $m^{*}(E)$ are equal. However, Lebesgue measure theory is developed with only outer measure. </p>
<p>A function is Riemann integrable iff its upper integral and lower integral are equal. However, in Lebesgue integration theory, we rarely use the upper Lebesgue integral.</p>
<p>Why are outer measure and lower integral more important than inner measure and upper integral?</p>
| Gerald Edgar | 454 | <p>I think your statement about Jordan is actually wrong. If $m_*(E) = \infty$ and $m^*(E) = \infty$, then $E$ need not be Jordan measurable. If you talk only about bounded sets $E$, then your characterization is correct. But it is also correct for Lebesgue measure (using Lebesgue inner and outer measure).</p>
<p>The reason for Caratheodory's criterion is to define measurability when even bounded sets could have infinite measure, so that restricting to bounded sets no longer helps. One of Caratheodory's examples was an "arc length" measure for sets in <span class="math-container">$\mathbb R^n$</span>. In that case, there is no obvious way to define inner measure. But we still can define outer measure. And then we need a criterion for measurability that uses only outer measure. </p>
<p>More recent mathematicians have developed a way to start only with an "inner measure" and go from there.</p>
|
2,446,282 | <p>The maximum value of the function $f(x)= ax^2+bx+c$ is 10. Given that $f(3)=f(-1)=2$, find $f(2)$</p>
<p>The answer is $f(2)=8$</p>
<p>I thought that by maximum value it meant that $c=10$, but the equation I got gave as a result $f(2)=10$.</p>
<p>Any hint on how to solve it?</p>
| David K | 139,123 | <p>You are confusing real numbers with their representations.</p>
<p>You write <span class="math-container">$(1 + 0.002)_3 \stackrel?= (1.01)_3,$</span>
which is an abuse of notation to begin with;
the left side is not <em>equal</em> to the right, it <em>rounds</em> to the right-hand side
when <span class="math-container">$p = 3.$</span></p>
<p>The question Heath appears to be trying to answer is,
"What is the largest relative error due to a single rounding off?"
For <span class="math-container">$\beta = 10$</span> and <span class="math-container">$p = 3$</span> we can use <span class="math-container">$1.005$</span> as a test case because that is the very smallest number between <span class="math-container">$1.00$</span> and <span class="math-container">$1.01$</span> that rounds up instead of down.</p>
<p>But when <span class="math-container">$\beta = 3$</span> and <span class="math-container">$p = 3,$</span>
lots of other numbers that are less than <span class="math-container">$1.002_3$</span> also round off to <span class="math-container">$1.01_3$</span>, for example <span class="math-container">$(1 + 0.001112)_3.$</span></p>
<p>In fact, the very smallest number between <span class="math-container">$1$</span> and <span class="math-container">$1.01_3$</span> that rounds up to <span class="math-container">$1.01_3$</span>
is
<span class="math-container">$$1.001111\ldots_3 = 1 + \frac1{18}.$$</span></p>
|
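The value $1.001111\ldots_3 = 1 + \frac{1}{18}$ quoted above can be verified exactly: the repeating digits contribute the geometric series $\sum_{k\ge 3} 3^{-k} = \frac{3^{-3}}{1-1/3} = \frac{1}{18}$, which is also the midpoint between $1.00_3 = 1$ and $1.01_3 = 1+\frac19$, i.e. exactly the round-to-nearest threshold. A quick check (sketch):

```python
from fractions import Fraction

# Geometric series: sum_{k=3}^infinity 3^(-k) = (1/27) / (1 - 1/3) = 1/18.
tail = Fraction(1, 27) / (1 - Fraction(1, 3))

# The two neighbouring 3-significant-digit base-3 values around it.
lo = Fraction(1)                     # 1.00_3
hi = Fraction(1) + Fraction(1, 9)    # 1.01_3

# Under round-to-nearest, the threshold between them is the midpoint.
midpoint = (lo + hi) / 2

value = 1 + tail                     # 1.001111..._3
```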
3,461,531 | <p>I have to determine differentiability at <span class="math-container">$(0,1)$</span> of the following function:
<span class="math-container">$$f(x,y)=\frac{|x| y \sin(\frac{\pi x}{2})}{x^2+y^2}$$</span>
The partial derivatives both have value <span class="math-container">$0$</span> at <span class="math-container">$(0,1),$</span> and both are continuous at that point (I think I've got this part right), so the function must be differentiable at <span class="math-container">$(0,1).$</span> But when I checked for differentiability using the definition, the limit that should be <span class="math-container">$0$</span> doesn't exist, so I assume I'm doing something wrong when computing the limit. The following limit has to be <span class="math-container">$0$</span> if the function is differentiable at that point
<span class="math-container">$$\lim_{(x,y)\to(0,1)} \frac{|f(x,y)|}{\|(x,y)-(0,1)\|}$$</span></p>
<p>Doing the change <span class="math-container">$w=y-1$</span> we have:
<span class="math-container">$$\lim_{(x,w)\to(0,0)} \frac{|x (w+1)\sin(\frac{\pi x}{2})|}{(x^2+(w+1)^2)\sqrt{x^2+w^2}}$$</span> and then computing the limit along the line <span class="math-container">$x=w,$</span> it has the value <span class="math-container">$\pi /2\sqrt{2}$</span>, which contradicts that the limit is <span class="math-container">$0.$</span></p>
<p>What am I doing wrong?</p>
| José Carlos Santos | 446,262 | <p><strong>Hint:</strong> <span class="math-container">$\mathbb Z\setminus\mathbb Q=\emptyset$</span> and <span class="math-container">$\emptyset\subset\mathbb N$</span>.</p>
|
2,705,794 | <p>I ran across this problem on a practice Putnam worksheet. Completely stumped.</p>
<p>Is $$\large \frac{m^{6} + 3m^{4} + 12m^{3} + 8m^{2}}{24}$$ an integer for all $m \in \mathbb{N}$?</p>
<p>I suspect it is an integer for any $m$. It checks out for small cases.</p>
<p>Any hints for proving the general case?</p>
| Marko Riedel | 44,883 | <p>With this being contest math I suspect the contestant is supposed to recognize the substituted cycle index of the face permutations of the cube under rotations, which is</p>
<p>$$Z(F) = \frac{1}{24}
\left(a_1^6 + 6a_1^2a_4 + 3a_1^2a_2^2 + 8a_3^2 + 6a_2^3\right).$$</p>
<p>Hence the formula counts the number of colorings of the faces of the cube with at most $m$ colors and must therefore be an integer.</p>
<p>This cycle index has appeared at MSE several times, consult e.g. this <a href="https://math.stackexchange.com/questions/460735/">MSE link</a>.</p>
|
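The counting interpretation above immediately explains integrality, and it can be brute-checked (sketch; the values $10$ and $57$ for two and three colors are the classical counts of face colorings of the cube up to rotation):

```python
def face_colorings(m):
    """Substituted cycle index of the cube's face rotations, evaluated at m colors."""
    num = m**6 + 3 * m**4 + 12 * m**3 + 8 * m**2
    q, r = divmod(num, 24)
    assert r == 0, f"not divisible by 24 for m = {m}"
    return q

# Integrality for many m (the divmod assertion above would fail otherwise).
values = [face_colorings(m) for m in range(1, 201)]
```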
761,616 | <p>How do you integrate $\sqrt{(x^4 + x^2)}$? </p>
| David | 119,775 | <p>As long as we are considering positive values of $x$, we have
$$\int\sqrt{x^4+x^2}\,dx=\int x\sqrt{x^2+1}\,dx\ ,$$
and this is easy to integrate.</p>
|
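For positive $x$ the substitution $u=x^2+1$ gives $\int x\sqrt{x^2+1}\,dx = \tfrac13(x^2+1)^{3/2} + C$. A numeric sanity check on $[0,2]$ (a sketch; the interval and step count are arbitrary choices):

```python
import math

def f(x):
    return math.sqrt(x**4 + x**2)

def F(x):
    # Antiderivative of x*sqrt(x^2+1); valid for x >= 0, where sqrt(x^4+x^2) = x*sqrt(x^2+1).
    return (x**2 + 1) ** 1.5 / 3

a, b, n = 0.0, 2.0, 200_000
h = (b - a) / n
riemann = h * sum(f(a + (i + 0.5) * h) for i in range(n))  # midpoint rule
exact = F(b) - F(a)
```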
527,576 | <blockquote>
<p>Three men (out of 7) and three women (out of 6) will be chosen to
serve on a 7 member committee. In how many ways can the committee be
formed?</p>
</blockquote>
<p>I did $\binom{7}{3} = 35$ for the men.</p>
<p>Then I did $\binom{6}{3} = 20$ for the women.</p>
<p>Then I decided to add $20 + 35$ and get $55$, but it is suggested I have to multiply $35$ and $20$ instead. I want to know why we multiply $35$ and $20$ instead of adding them up.</p>
| mathematics2x2life | 79,043 | <p>You could also think about it this way. It doesn't matter in which order you choose men and women, so you might as well choose all the men that you want on the committee, then all the women. There are 7 total spots, _ _ _ _ _ _ _. Let's fill the first spot with a man. How many ways can we do this? 7. So we have 7 _ _ _ _ _ _. Let's fill the next spot with a man; there are 6 left. Continuing, this gives 7 6 5 _ _ _ _. Now we fill in the women in the same way. This gives a total of $(7\times6\times5)\times(6\times5\times4\times3)$ ordered selections; dividing out the $3!$ orderings of the men and the $4!$ orderings of the women, you should recognize this as $\binom{7}{3} \binom{6}{4}$. </p>
<p>In general, just think about the choices you want. I want this guy out of the 7 here AND this guy out of the 6 left here AND ..... AND this final women out of the three left here. 'AND' logic means multiplication in probability. So whenever you have a situation where you can find yourself able to make statements like that you're most likely going to have some sort of multiplication involved. </p>
|
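The multiplication rule in the answer above can be confirmed by brute force, enumerating every committee of 3 of the 7 men together with 3 of the 6 women, as in the question's suggested product (a sketch):

```python
from itertools import combinations
from math import comb

men = range(7)          # labels 0..6
women = range(7, 13)    # labels 7..12

# Every committee is one men-choice paired with one women-choice.
committees = [(ms, ws)
              for ms in combinations(men, 3)
              for ws in combinations(women, 3)]

count = len(committees)
```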
4,644,186 | <p>Let $n$ be a positive integer. Find the value of <span class="math-container">$$\sum_{k=0}^n \frac{{n\choose k }}{k+1}$$</span>. Leave your answer in terms of $n$ where appropriate.</p>
<p>Remark. There is an alternative method for computing the sums described here: make use of integration.</p>
<p>I can only list out the terms
<span class="math-container">$$\sum_{k=0}^n \frac{{n\choose k }}{k+1}=1+\frac{\binom{n}{1}}{2}+\frac{\binom{n}{2}}{3}+...+\frac{1}{n+1}$$</span>
I can't think of how to simplify them to get the answer.</p>
<p>Also, the question said I can use integration to solve it, but I have no idea how to start. I would greatly appreciate it if someone could show how to solve this.</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$$\sum_{k=0}^{n}\frac{{n\choose k}}{k+1}=\frac{1}{n+1}\sum_{k=0}^{n}{n+1\choose k+1}=\frac{1}{n+1}\sum_{j=1}^{n+1} {n+1 \choose j}=\frac{2^{n+1}-1}{n+1}.$$</span></p>
|
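The chain of identities above checks out exactly; a sketch with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    """sum_{k=0}^{n} C(n,k) / (k+1), computed exactly."""
    return sum(Fraction(comb(n, k), k + 1) for k in range(n + 1))

def rhs(n):
    """Closed form (2^{n+1} - 1) / (n+1)."""
    return Fraction(2 ** (n + 1) - 1, n + 1)

ok = all(lhs(n) == rhs(n) for n in range(0, 31))
```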
3,016,386 | <p>Hi, I am struggling with this exercise, which may be perceived as simple, so I was trying to write the tangent as follows:</p>
<p><span class="math-container">$$\tan(z)=-i\frac{e^{iz}-e^{-iz}}{e^{iz}+e^{-iz}}$$</span> and then <span class="math-container">$$z=a+bi$$</span>, which led me to <span class="math-container">$$ \tan z=-i\frac{\cos a(e^{-b}-e^{b})+i\sin a(e^{-b}+e^{b})}{\cos a(e^{-b}+e^{b})+i\sin a(e^{-b}-e^{b})}$$</span>, so I guess here I can multiply the denominator by its conjugate, but this is really a complicated computation for an exam... help appreciated</p>
| saulspatz | 235,128 | <p>It's easier if you don't use sines and cosines. We have
<span class="math-container">$$\overline{e^{iz}+e^{-iz}}=\overline{e^{iz}}+\overline{e^{-iz}},$$</span> and, for example, <span class="math-container">$$\overline{e^{iz}}=e^{\overline{iz}}=e^{\overline{ix-y}}=e^{-y}e^{-ix}$$</span> </p>
|
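The exponential form of the tangent used above, and the conjugation rule from the answer, can be sanity-checked against the library implementation (a sketch; the sample points are arbitrary, chosen away from poles):

```python
import cmath

def tan_via_exp(z):
    """tan z = -i (e^{iz} - e^{-iz}) / (e^{iz} + e^{-iz})."""
    e_plus = cmath.exp(1j * z)
    e_minus = cmath.exp(-1j * z)
    return -1j * (e_plus - e_minus) / (e_plus + e_minus)

samples = [0.3 + 0.7j, -1.2 + 0.4j, 2.0 - 0.9j]

# Agreement with cmath.tan.
errors = [abs(tan_via_exp(z) - cmath.tan(z)) for z in samples]

# Conjugation rule exploited in the answer: conj(tan z) = tan(conj z).
conj_errors = [abs(tan_via_exp(z).conjugate() - tan_via_exp(z.conjugate()))
               for z in samples]
```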
779,509 | <p>I know there is a nice way of getting the continued fraction expansion of quadratic irrationals mainly because they recur after a point, and if they recur after a point they are quadratic irrationals. When constructing the expansion you can multiply by conjugates (kind of), e.g. </p>
<p>$\sqrt 3 =1+\sqrt 3 -1 = 1+\frac {1}{\frac {\sqrt 3 +1}{2}} $</p>
<p>Where you use $(\sqrt 3 - 1)(\sqrt 3 +1)=2$.</p>
<p>Are there identities that would help with the construction for $ \sqrt[3]{2} $?</p>
<p>One I thought was useful in the first step to get [1; 3,...] was </p>
<p>$ (\sqrt[3]{2}-1)( \sqrt[3]{4} + \sqrt[3]{2}+1 )=1$,</p>
<p>So you get:</p>
<p>$ \sqrt[3]{2}=1+( \sqrt[3]{2}-1 )=1+\frac {1}{ \sqrt[3]{4} + \sqrt[3]{2}+1 }= 1+\frac {1}{3+ (\sqrt[3]{4} + \sqrt[3]{2}-2)} $</p>
<p>Thanks for the help.</p>
| Fabio Lucchini | 54,738 | <p>Starting from the column vector <span class="math-container">$(1,0,0,-2)$</span>, consider the following steps:</p>
<p><em>Step a)</em> Repeat multiplication by the matrix <span class="math-container">$A$</span>
<span class="math-container">$$A=\begin{bmatrix}
1&0&0&0\\
3&1&0&0\\
3&2&1&0\\
1&1&1&1
\end{bmatrix}$$</span>
while the coefficients of the resulting vector have different signs.</p>
<p><em>Step b)</em> Reverse the coefficients of the vector, or equivalently multiply by
<span class="math-container">$$B=\begin{bmatrix}
0&0&0&1\\
0&0&1&0\\
0&1&0&0\\
1&0&0&0
\end{bmatrix}$$</span></p>
<p><em>Then the number of times you multiply by <span class="math-container">$A$</span> in step a gives the partial quotients of continued fraction of <span class="math-container">$\sqrt[3]{2}$</span>.</em></p>
<hr>
<p>For, starting from <span class="math-container">$(1,0,0,-2)$</span>, successive multiplication by <span class="math-container">$A$</span> gives:
<span class="math-container">\begin{align}
(1,0,0,-2)
&\xrightarrow A\color{red}{(1,3,3,-1)}\\
&\xrightarrow A(1,6,12,6)
\end{align}</span>
hence in <em>step a</em> we multiply by <span class="math-container">$A$</span> one time only, because <span class="math-container">$(1,6,12,6)$</span> has positive coefficients only, hence the first partial quotient is <span class="math-container">$1$</span>:
<span class="math-container">$$\sqrt[3]{2}=1+\cdots$$</span></p>
<p>Applying <em>step b</em> to <span class="math-container">$(1,3,3,-1)$</span>, we get <span class="math-container">$(-1,3,3,1)$</span>.
Then applying <em>step a</em> to <span class="math-container">$(-1,3,3,1)$</span>, successive multiplication by <span class="math-container">$A$</span> gives:
<span class="math-container">\begin{align}
(-1,3,3,1)
&\xrightarrow A(-1,0,6,6)\\
&\xrightarrow A(-1,-3,3,11)\\
&\xrightarrow A\color{red}{(-1,-6,-6,10)}\\
&\xrightarrow A(-1,-9,-21,-3)\\
\end{align}</span>
hence the second partial quotient is <span class="math-container">$3$</span>:
<span class="math-container">$$\sqrt[3]{2}=1+\frac 1{3+}\cdots$$</span>
and so on...</p>
<hr>
<p>This algorithm holds for every algebraic number of third degree which is the only positive root of its minimal polynomial.
For higher degree the matrix <span class="math-container">$A$</span> is enlarged as in the Tartaglia-Pascal triangle; for example for fourth degree:
<span class="math-container">$$A=\begin{bmatrix}
1&0&0&0&0\\
4&1&0&0&0\\
6&3&1&0&0\\
4&3&2&1&0\\
1&1&1&1&1
\end{bmatrix}$$</span></p>
<hr>
<p>For the intuition behind this algorithm:
the vector <span class="math-container">$(1,0,0,-2)$</span> corresponds to the polynomial <span class="math-container">$p(x)=x^3-2$</span>.
Multiplication by <span class="math-container">$A$</span> corresponds to <span class="math-container">$p(x)\mapsto p(x+1)$</span>, while reversing in step b corresponds to <span class="math-container">$p(x)\mapsto x^3p(1/x)$</span>.
Finally, Descartes' rule of signs provides the stopping criterion in <em>step a</em>.</p>
|
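The matrix algorithm above is easy to implement: multiplying by $A$ is the shift $p(x)\mapsto p(x+1)$ on coefficient vectors, step b reverses the vector, and a partial quotient is the number of shifts that still leave mixed signs. A sketch (the helper names are mine), reproducing the partial quotients $1, 3, 1, 5, 1$ of $\sqrt[3]{2}$:

```python
from math import comb

def shift(c):
    """p(x) -> p(x+1) for coefficients c, highest degree first (multiplication by A)."""
    n = len(c) - 1
    out = [0] * (n + 1)
    for i, a in enumerate(c):
        d = n - i                      # this term is a * x^d
        for k in range(d + 1):         # a * (x+1)^d = a * sum_k C(d,k) x^k
            out[n - k] += a * comb(d, k)
    return out

def mixed_signs(c):
    nz = [a for a in c if a != 0]
    return any(a > 0 for a in nz) and any(a < 0 for a in nz)

def partial_quotients(c, count):
    """Step a (count shifts while signs stay mixed), alternating with step b (reverse)."""
    qs = []
    while len(qs) < count:
        q = 0
        while True:
            nxt = shift(c)
            if mixed_signs(nxt):
                q += 1
                c = nxt
            else:
                break
        qs.append(q)
        c = c[::-1]                    # step b
    return qs

quotients = partial_quotients([1, 0, 0, -2], 5)   # x^3 - 2
```

The first two quotients $1$ and $3$ reproduce the worked example in the answer; the next three continue the expansion.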
3,154,332 | <p>I have a calculus question which I will display here as an image:
<a href="https://i.stack.imgur.com/8xN3P.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8xN3P.png" alt="enter image description here"></a></p>
<p>I am interested in understanding part (b) of this question.
I actually got the answer, but I feel I need to understand better how to determine the maximum just via the second derivative.
If one takes the function P(t) and graphs it, you can see that the largest value of the derivative happens to be at the maxima of the graph of the derivative.
Now if one takes the derivative of this graph, then you get that the maximum rate of change happens when the second derivative is zero, which means it's at the points of inflection.
So it seems that for trig functions the maximum rate of change happens at the points of inflection.
This is how I analyzed it, but I feel there is a better way to explain this.</p>
<p>Hope to get further insight from others here on the forum.</p>
| B. Goddard | 362,009 | <p>When I think about the 2nd derivative, I imagine the tangent line to the curve at a point <span class="math-container">$x$</span> and let <span class="math-container">$x$</span> increase. The 2nd derivative tells you about the change in the slope of the tangent line. So if the 2nd derivative is positive, the slope is increasing and so the tangent line is rotating counter-clockwise (as <span class="math-container">$x$</span> increases.) Likewise a negative 2nd derivative shows that the tangent line is rotating clockwise.</p>
<p>An inflection point is a point where the tangent line crosses the graph, and it's also a point where the rotation changes from clockwise to counter-clockwise, or vice versa. </p>
<p>So you get a maximum slope exactly when the rotation changes from CCW to CW, which is exactly at the inflection points. </p>
|
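The observation can be illustrated numerically with a concrete trig function (the original P(t) is only in the image, so $f(x)=\sin x$ is my stand-in): the derivative $\cos x$ is largest at $x=0$, exactly where $f''(x)=-\sin x$ vanishes, i.e. at an inflection point of $f$. A sketch:

```python
import math

xs = [i * 0.001 - 2.0 for i in range(4001)]   # grid on [-2, 2]

fprime = [math.cos(x) for x in xs]            # derivative of sin
fsecond = [-math.sin(x) for x in xs]          # second derivative of sin

# Locate the grid point where the slope f' is maximal.
i_max = max(range(len(xs)), key=lambda i: fprime[i])
x_at_max_slope = xs[i_max]
second_deriv_there = fsecond[i_max]
```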
574,614 | <p>if $\gamma:[0,2\pi]\mapsto\Bbb C,\quad \gamma(t)=1+e^{it}$ then show that $|\int_\gamma\frac{dz}{z-\frac{3}{2}}|\le4\pi$ (without computing)</p>
<p>I tried :
$ |\int_\gamma\frac{dz}{z-\frac{3}{2}}| \le \int_\gamma|\frac{1}{z-\frac{3}{2}}|dz$ and $ |z-\frac{3}{2}|\ge||z|-\frac{3}{2}|$ we should find its max value$ .\quad$
$z=\gamma(t)=1+e^{it} $ then we can say $x(t)=1+\cos t\quad \text{and}\quad y(t)=\sin t$ so we have the circle $
(x-1)^2+y^2=1 $. How can we continue? $\max{|z|}=1$ so $||z|-\frac{3}{2}|\le |1-\frac{3}{2}| \quad =\frac{1}{2}? $</p>
<p>edit :
$||z|-\frac{3}{2}|\le |1-\frac{3}{2}| =\frac{1}{2} \Rightarrow \frac{1}{|z-\frac{3}{2}|}\le2\Rightarrow \int_c\frac{1}{|z-\frac{3}{2}|}dz\le\int_c2dz=4\pi \Rightarrow |\int_\gamma\frac{dz}{z-\frac{3}{2}}|\le4\pi $</p>
| AD - Stop Putin - | 1,154 | <p>Do this and you will find your way:</p>
<ol>
<li><p>Your $z$ in the integral varies on $\gamma$. </p></li>
<li><p>$\gamma$ is a circle of radius 1, with center $z=1$.</p></li>
<li><p>Find the point on the circle $\gamma$, that is closest to $z=3/2$. Note that that point maximize $$|f(z)|= \frac{1}{|z-3/2|}$$ </p></li>
</ol>
|
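Numerically, the integral itself has modulus $2\pi$ (the pole $z=\tfrac32$ lies inside $\gamma$, so the integral equals $2\pi i$), comfortably below the $4\pi$ bound obtained from $|f|\le 2$ and contour length $2\pi$. A sketch (Riemann sum over the parametrization; the step count is an arbitrary choice):

```python
import cmath
import math

N = 20000
h = 2 * math.pi / N
total = 0j
for k in range(N):
    t = k * h
    z = 1 + cmath.exp(1j * t)          # gamma(t)
    dz = 1j * cmath.exp(1j * t)        # gamma'(t)
    total += dz / (z - 1.5)
integral = total * h                    # approximates the contour integral

modulus = abs(integral)
bound = 4 * math.pi
```

For a smooth periodic integrand this equally spaced sum converges very quickly, so the approximation is essentially exact here.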
574,614 | <p>if $\gamma:[0,2\pi]\mapsto\Bbb C,\quad \gamma(t)=1+e^{it}$ then show that $|\int_\gamma\frac{dz}{z-\frac{3}{2}}|\le4\pi$ (without computing)</p>
<p>I tried :
$ |\int_\gamma\frac{dz}{z-\frac{3}{2}}| \le \int_\gamma|\frac{1}{z-\frac{3}{2}}|dz$ and $ |z-\frac{3}{2}|\ge||z|-\frac{3}{2}|$ we should find its max value$ .\quad$
$z=\gamma(t)=1+e^{it} $ then we can say $x(t)=1+\cos t\quad \text{and}\quad y(t)=\sin t$ so we have the circle $
(x-1)^2+y^2=1 $. How can we continue? $\max{|z|}=1$ so $||z|-\frac{3}{2}|\le |1-\frac{3}{2}| \quad =\frac{1}{2}? $</p>
<p>edit :
$||z|-\frac{3}{2}|\le |1-\frac{3}{2}| =\frac{1}{2} \Rightarrow \frac{1}{|z-\frac{3}{2}|}\le2\Rightarrow \int_c\frac{1}{|z-\frac{3}{2}|}dz\le\int_c2dz=4\pi \Rightarrow |\int_\gamma\frac{dz}{z-\frac{3}{2}}|\le4\pi $</p>
| AD - Stop Putin - | 1,154 | <p>Another short answer goes "By the Argument principle we have...".</p>
|
3,059,571 | <p><span class="math-container">$$\lim_{x\to \frac\pi2} \frac{(1-\tan(\frac x2))(1-\sin(x))}{(1+\tan(\frac x2))(\pi-2x)^3}$$</span></p>
<p>I only know of L'hopital method but that is very long. Is there a shorter method to solve this?</p>
| Ankit Kumar | 595,608 | <p>Note that <span class="math-container">$$\frac{1-\text{tan}\frac{x}{2}}{1+\text{tan}\frac{x}{2}}=\text{tan}(\frac{\pi}{4}-\frac{x}{2})$$</span>
So, the given limit is the same as
<span class="math-container">$$\text{lim}_{x\to\pi/2}\frac{\text{tan}(\pi/4-x/2)(1-\text{sin}x)}{(\pi-2x)^3}$$</span>
Can you do it now using <span class="math-container">$\lim_{t\to0}\frac{\text{sin}t}{t}=1$</span> and <span class="math-container">$\lim_{t\to0}\frac{\text{tan}t}{t}=1$</span>?</p>
<p>You'll finally get the limit to be <span class="math-container">$1/32$</span>.</p>
|
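A quick numeric check of the value $1/32$ (a sketch, evaluating the rewritten expression close to $\pi/2$):

```python
import math

def g(x):
    """tan(pi/4 - x/2) * (1 - sin x) / (pi - 2x)^3, the rewritten limit expression."""
    return (math.tan(math.pi / 4 - x / 2) * (1 - math.sin(x))
            / (math.pi - 2 * x) ** 3)

vals = [g(math.pi / 2 - t) for t in (1e-2, 1e-3)]
```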
1,775,965 | <p>I'm learning about measure theory, specifically the Lebesgue integral of nonnegative functions, and need help with the following problem. </p>
<blockquote>
<p>Let $f:\mathbb{R}\to[0,\infty)$ be measurable and $f\in L^1$. Show that $F(x)=\int_{-\infty}^x f$ is continuous.</p>
</blockquote>
<p>I know it isn't much, but the only thing I could think of is that given $x, y \in \mathbb{R}$ with $x < y$ we note that $F(x) \leq F(y)$, i.e. $F$ is increasing. So maybe we can apply one of the convergence theorems of Lebesgue integration theory.</p>
<hr>
<p>I was also wondering if this problem can be solved using only Riemann integration theory. </p>
| Behnam Esmayli | 283,487 | <p>For $x<y$ since integration (in any reasonable type of integration) is additive on union of disjoint sets,</p>
<p>$$ F(x) + \int_{x}^y f=\int_{(-\infty,x)} f + \int_{(x,y)} f =\int_{(-\infty,x) \cup (x,y)} f=F(y)$$
$$\implies F(y)-F(x)=\int_{x}^y f$$
$$\implies |F(y)-F(x)|=|\int_{x}^y f| \leq \int_{x}^y |f|$$</p>
<p>LEMMA (a key lemma, very basic and crucial): If $f$ is integrable, i.e. in $L^1$, then for any $\epsilon > 0$ given, there is a $\delta >0$ such that
$$ \forall A, \ \ \ \ |A|<\delta \implies \int_{A} |f| < \epsilon.$$</p>
<p>The important thing is that we don't care what set, and where, $A$ is!</p>
<p>In light of this lemma, if $x$ is $\delta$-close to $y$, we'll have</p>
<p>$$|F(y)-F(x)| < \epsilon .$$</p>
<p>So, we even proved more: $F$ is uniformly continuous, i.e. the same $\delta$ works for all points in the definition of continuity.</p>
|
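The uniform-continuity conclusion can be illustrated with a concrete integrable function (my choice, not from the answer): for $f(t)=\frac{1}{1+t^2}\in L^1$ we have $F(x)=\arctan x + \pi/2$, and since $f\le 1$, $|F(y)-F(x)|\le\int_x^y f\le|y-x|$, so $\delta=\epsilon$ works everywhere. A sketch:

```python
import math
import random

def F(x):
    # F(x) = integral of 1/(1+t^2) from -infinity to x = arctan(x) + pi/2.
    return math.atan(x) + math.pi / 2

random.seed(1)
pairs = [(random.uniform(-50, 50), random.uniform(-50, 50)) for _ in range(1000)]

# The same delta = epsilon works at every point: |F(y) - F(x)| <= |y - x|.
worst = max(abs(F(y) - F(x)) / abs(y - x) for x, y in pairs if x != y)
```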
21,156 | <p>The title says it all, is there a way to get in contact which users who consistently post answers without using <span class="math-container">$\LaTeX$</span>? I've come across a user who does that and (as I had some free time) edited about 10-15 of his posts, some of his answers were barely readable; on each post I left a comment including a link to the MathJax tutorial. He still keeps posting answers without using <span class="math-container">$\LaTeX$</span>, so is there anything else besides editing and commenting one can do?</p>
| Ashutosh Gupta | 215,160 | <p>A constructive suggestion to the Stack Exchange features:</p>
<p>a) Why not implement $\LaTeX$ auto completion/suggestion for the $\LaTeX$ users?</p>
<p>b) If somebody is posting in plain text, suggest (and possibly auto-insert) $\LaTeX$ tags. This is very much possible and would be a very good feature to see in the Stack Exchange community.</p>
|
1,154,763 | <p>I'm given this function:</p>
<p>$$
u(x,y) =
\begin{cases}
\dfrac{(x^3 - 3xy^2)}{(x^2 + y^2)}\quad& \text{if}\quad (x,y)\neq(0,0)\\
0\quad& \text{if} \quad (x,y)=(0,0).
\end{cases}
$$</p>
<p>It seems like L'Hôpital's rule has been used, but I'm confused because</p>
<ol>
<li>there is no limit here it's just straight up $x$ and $y$ equals zero.</li>
<li>if I have to invoke a limit here to use L'Hôpital's rule, there are two variables $x$ and $y$. How do I take the limit on both of them?</li>
</ol>
| Alex Zorn | 73,104 | <p>Here's one option. Write $(x,y)$ in polar form: $x = r\cos(\theta)$, $y = r\sin(\theta)$. You get:</p>
<p>$$u(r,\theta) = \frac{r^3\cos^{3}(\theta) - 3r^3\cos(\theta)\sin^{2}(\theta)}{r^2}$$</p>
<p>$$u(r,\theta) = r[\cos^{3}(\theta) - 3\cos(\theta)\sin^{2}(\theta)]$$</p>
<p>Since $\cos^{3}(\theta) - 3\cos(\theta)\sin^{2}(\theta)$ is continuous and periodic, it is bounded, meaning there exists some $M > 0$ such that $-M < \cos^{3}(\theta) - 3\cos(\theta)\sin^{2}(\theta) < M$. </p>
<p>Then you have $-Mr < u(r,\theta) < Mr$. By the squeeze theorem:</p>
<p>$$\lim_{(x,y) \rightarrow (0,0)}u(x,y) = \lim_{r \rightarrow 0^{+}}u(r,\theta) = 0$$</p>
|
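In fact $\cos^3\theta - 3\cos\theta\sin^2\theta = \cos 3\theta$, so one may take $M=1$ and get $|u|\le r$ exactly. A numeric sketch of the squeeze (sampling circles of shrinking radius; this cosmetic identity and the sampling setup are mine, not the answer's):

```python
import math

def u(x, y):
    if (x, y) == (0.0, 0.0):
        return 0.0
    return (x**3 - 3 * x * y**2) / (x**2 + y**2)

def max_on_circle(r, samples=2000):
    """Largest |u| over a sampled circle of radius r."""
    best = 0.0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        best = max(best, abs(u(r * math.cos(t), r * math.sin(t))))
    return best

maxima = [max_on_circle(r) for r in (1.0, 0.1, 0.01)]
```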
1,720,053 | <p>The PDF describes the probability of a random variable to take on a given value:</p>
<p>$f(x)=P(X=x)$</p>
<p>My question is whether this value can become greater than $1$?</p>
<p>Quote from wikipedia:</p>
<p>"Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac12]$ has probability density $f(x) = 2$ for $0 \leq x \leq \frac12$ and $f(x) = 0$ elsewhere."</p>
<p>This wasn't clear to me, unfortunately. The question has been asked/answered here before, yet used the same example. Would anyone be able to explain it in a simple manner (using a real-life example, etc)?</p>
<p>Original question:</p>
<p>"$X$ is a continuous random variable with probability density function $f$. Answer with either True or False.</p>
<ul>
<li>$f(x)$ can never exceed $1$."</li>
</ul>
<p>Thank you!</p>
<p>EDIT: Resolved.</p>
| Wouter | 89,671 | <p>Probability density functions are not probabilities, but , if $f(x)$ is a probability density function, then $P=\int_{x_0}^{x_1} f(x) dx$ is a probability and thus $\int_{x_0}^{x_1} f(x) dx \leq 1$ for all $x_0,x_1$ ($x_0\leq x_1$).</p>
|
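The uniform distribution on $[0,\frac12]$ from the question makes this concrete: its density is the constant $2$ (greater than $1$), yet every probability it produces is at most $1$. A sketch with exact arithmetic:

```python
from fractions import Fraction

density = Fraction(2)            # f(x) = 2 on [0, 1/2], 0 elsewhere

def prob(a, b):
    """P(a <= X <= b) for the uniform distribution on [0, 1/2]."""
    lo = max(Fraction(0), Fraction(a))
    hi = min(Fraction(1, 2), Fraction(b))
    return density * (hi - lo) if hi > lo else Fraction(0)

total = prob(0, Fraction(1, 2))  # integral over the whole support: exactly 1
half = prob(0, Fraction(1, 4))   # P(X <= 1/4) = 1/2
```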
1,720,053 | <p>The PDF describes the probability of a random variable to take on a given value:</p>
<p>$f(x)=P(X=x)$</p>
<p>My question is whether this value can become greater than $1$?</p>
<p>Quote from wikipedia:</p>
<p>"Unlike a probability, a probability density function can take on values greater than one; for example, the uniform distribution on the interval $[0, \frac12]$ has probability density $f(x) = 2$ for $0 \leq x \leq \frac12$ and $f(x) = 0$ elsewhere."</p>
<p>This wasn't clear to me, unfortunately. The question has been asked/answered here before, yet used the same example. Would anyone be able to explain it in a simple manner (using a real-life example, etc)?</p>
<p>Original question:</p>
<p>"$X$ is a continuous random variable with probability density function $f$. Answer with either True or False.</p>
<ul>
<li>$f(x)$ can never exceed $1$."</li>
</ul>
<p>Thank you!</p>
<p>EDIT: Resolved.</p>
| holala | 806,768 | <p>To add to the already good existing answers, it is easy to understand it by way of their definitions.</p>
<p>For a discrete random variable, the probability mass function (pmf), denoted <span class="math-container">$p_{_X}(x)$</span>, gives us the probability that <span class="math-container">$X$</span> takes a discrete value <span class="math-container">$x$</span>, which is always between 0 and 1.</p>
<p>For a continuous random variable, the probability density function (pdf), denoted <span class="math-container">$f_{_X}(x)$</span>, shows us the "nonnegative behavior" of <span class="math-container">$X$</span> near a value <span class="math-container">$x$</span>; this is not a probability! But when we integrate it over the support set of <span class="math-container">$x$</span> it should be 1.</p>
<p>I guess the confusion usually arises when we assign a probability mass function to discrete random variables and a probability density function to the continuous counterpart and think that they are all probabilities, when in fact one is and the other is not. Another confusion also comes from an abuse of notation that I have seen many times: <span class="math-container">$p_{_X}(x) = f_{_X}(x) = \mathbb{P} (X=x)$</span>; you may think that the two equalities are true for both discrete and continuous random variables, which is not the case. This is only true for the discrete case.</p>
<p>Therefore, avoid abusing notation: strictly use <span class="math-container">$p_{_X}(x)=\mathbb{P} (X=x)\in[0,1]$</span> for discrete random variables and strictly use <span class="math-container">$f_{_X}(x)\ge 0$</span> for continuous random variables.</p>
|
376,517 | <p>Let <span class="math-container">$U$</span> be a smooth variety, and <span class="math-container">$U\hookrightarrow X$</span> an smooth compactification with snc boundary <span class="math-container">$D=X\setminus U$</span>. Suppose that <span class="math-container">$\omega\in H^0(U,\Omega^n_U)$</span> is global algebraic <span class="math-container">$n$</span>-form on <span class="math-container">$U$</span>. It defines a class in <span class="math-container">$H^n(U,\mathbb{C})=\mathbb{H}^n(X,\Omega_X^\bullet(\log D))$</span>.</p>
<p>The form <span class="math-container">$\omega$</span> extends to a meromorphic form on <span class="math-container">$X$</span>, denote it by <span class="math-container">$\tilde{\omega}$</span>. This is not necessarily an element of <span class="math-container">$H^0(X,\Omega_X^n(\log D))$</span>, since <span class="math-container">$\tilde{\omega}$</span> can have poles of higher order. Is there an element <span class="math-container">$\omega'\in H^0(X,\Omega_X^n(\log D))$</span> such that <span class="math-container">$\omega'|_U$</span> defines the same cohomology class as <span class="math-container">$\omega$</span> in <span class="math-container">$H^n(U,\mathbb{C})$</span>?</p>
<p>Here are my thoughts: the Hodge spectral sequence degenerates, and so we have
<span class="math-container">$$Gr_F^i(H^n(U,\mathbb{C}))=H^{n-i}(X,\Omega^i(\log D)),$$</span>
and so non-canonically (I believe)
<span class="math-container">$$H^n(U,\mathbb{C})=\bigoplus_i H^{n-i}(X,\Omega^i(\log D)).$$</span>
Now it seems that the class defined by <span class="math-container">$\omega$</span> should be contributed by the summand <span class="math-container">$H^{0}(X,\Omega^n(\log D))$</span>, which would imply the claim. However, to prove this I think one would need something analogous to what Peters and Steenbrink call a "Hodge decomposition in the strong sense" (page 45). However, I do not know if this kind of result exists for non-compact <span class="math-container">$U$</span>?</p>
| abx | 40,297 | <p>This is not true. Take for <span class="math-container">$X$</span> an elliptic curve, for <span class="math-container">$D$</span> a point <span class="math-container">$p\in X$</span>. The restriction map <span class="math-container">$H^1(X,\mathbb{C})\rightarrow H^1(U,\mathbb{C})$</span> is an isomorphism, and <span class="math-container">$H^0(X,\Omega ^1_X(\log D))=H^0(X,\Omega ^1_X)$</span>. There is a form <span class="math-container">$\tilde{\omega } $</span> with a pole of order 2 at <span class="math-container">$p$</span>; its restriction <span class="math-container">$\omega $</span> to <span class="math-container">$U$</span> is not cohomologous to the restriction of a holomorphic form on <span class="math-container">$X$</span>.</p>
|
979,267 | <p>Let $a_n$ be the $n$th term of the sequence 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, 5, 5, 5, 5, 5, ... constructed by including the integer $k$ exactly $k$ times. Show that $a_n = \left\lfloor \frac12 + \sqrt{2n+\frac14} \right\rfloor$.</p>
<p>Let $\lvert r\rvert < 1$ be a real number. Evaluate $\sum_{i=0}^\infty ir^i. $</p>
| Hagen von Eitzen | 39,174 | <p>Yes. Use SSS to construct the triangle with sides $\frac1{h_a}$, $\frac1{h_b}$, $\frac1{h_c}$. Then in this triangle the heights are <em>proportinal</em> to the given heights. Scale accordingly.</p>
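<p>This construction is easy to sanity-check numerically. The following Python sketch (the target heights $3, 4, 5$ are an arbitrary illustrative choice) builds the auxiliary triangle with sides $\frac1{h_a}, \frac1{h_b}, \frac1{h_c}$, computes the scale factor, and checks that the scaled triangle has exactly the prescribed heights:</p>

```python
from math import sqrt

def heron_area(a, b, c):
    # Heron's formula for the area of a triangle with sides a, b, c.
    s = (a + b + c) / 2
    return sqrt(s * (s - a) * (s - b) * (s - c))

def triangle_from_heights(ha, hb, hc):
    # Since side = 2*Area/height, the sides are proportional to 1/height.
    a, b, c = 1 / ha, 1 / hb, 1 / hc
    area = heron_area(a, b, c)
    # Height onto side a of the auxiliary triangle:
    h_aux = 2 * area / a
    # Scale every side so this height becomes exactly ha; by
    # proportionality the other two heights then match hb and hc.
    k = ha / h_aux
    return a * k, b * k, c * k

a, b, c = triangle_from_heights(3, 4, 5)
area = heron_area(a, b, c)
heights = (2 * area / a, 2 * area / b, 2 * area / c)
```

<p>The recovered heights agree with $(3,4,5)$ to machine precision, confirming that scaling the SSS triangle built from the reciprocal heights works.</p>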
|
1,285,443 | <blockquote>
<p>Let us denote solution to the equation</p>
<p>$$(x+a)^{x+a}=x^{x+2a}$$</p>
<p>with $X_a$.</p>
<p>($a$ is a non-zero real number)</p>
<p>Prove that:</p>
<p>$$\lim_ {a \to 0} X_a = e$$</p>
</blockquote>
<p>This is something that I noticed while making numerical experiments for another problem. The statement looks interesting, I couldn't find anything close to it on the internet. I don't have the idea how to prove it, but numerical methods confirm the statement.</p>
| Peter Franek | 62,009 | <p>Rewriting the equation to $(1+\frac{a}{x})^{x+a}=x^a$ and taking $\ln$, we have $(x+a) \ln(1+\frac{a}{x})=a\ln x$. Assuming that $X_a$ is bounded for $a$ from some neighborhood of $0$, we take the Taylor approximation for small $a$ of the left-hand side:
$$
(X_a+a) (\frac{a}{X_a}+o(a))=a\ln X_a
$$
and
$$
a+o(a) = a\ln X_a
$$
which yields $\ln X_a=1$.</p>
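<p>A quick numerical check of the limit (a Python sketch; the bracketing interval $[2,4]$ is an ad-hoc choice that happens to contain the root for small $|a|$): after taking logarithms, $X_a$ is the root of $(x+a)\ln(x+a) - (x+2a)\ln x$, which bisection locates easily.</p>

```python
from math import log, e

def X(a, lo=2.0, hi=4.0, iters=80):
    # Root of (x+a)*ln(x+a) - (x+2a)*ln(x) = 0 by bisection;
    # f changes sign across [lo, hi] for small nonzero a.
    f = lambda x: (x + a) * log(x + a) - (x + 2 * a) * log(x)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2
```

<p>One finds $X_{0.01} \approx e + 0.005$ and $X_{0.001} \approx e + 0.0005$, consistent with the expansion above (pushing it one order further gives $X_a \approx e + a/2$).</p>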
|
3,433,492 | <p>I know that a function can admit multiple series representations (according to Eugene Catalan), but I wonder if there is a proof of the fact that each analytic function has only one unique Taylor series representation. I know that Taylor series are defined by derivatives of increasing order. A function has one and only one derivative. So can this fact be employed to prove that each function has only one Taylor series representation?</p>
| freakish | 340,986 | <p>The <a href="https://en.wikipedia.org/wiki/Taylor_series" rel="nofollow noreferrer">Taylor series</a> is indeed uniquely defined for any smooth function, regardless of whether it is convergent or not and whether it coincides with the function when convergent. And so asking about uniqueness is a bit pointless. It's like asking about uniqueness of the derivative. However, the question can be turned into a sensible one, if we ask whether <span class="math-container">$f(x)$</span> can be represented as a power series uniquely, i.e. if <span class="math-container">$\sum a_n(x-x_0)^n$</span> and <span class="math-container">$\sum b_n(x-x_0)^n$</span> are both convergent and equal over some open interval, then does it follow that <span class="math-container">$a_n=b_n$</span> for any <span class="math-container">$n$</span>?</p>
<p>This can be reduced (by subtracting) to the question that if <span class="math-container">$\sum c_n(x-x_0)^n=0$</span> over some open interval then does <span class="math-container">$c_n=0$</span> follow?</p>
<p>Now assume that <span class="math-container">$\sum c_n(x-x_0)^n=0$</span> over some open interval <span class="math-container">$(a, b)$</span> with <span class="math-container">$x_0\in (a,b)$</span>. Since every power series evaluates to <span class="math-container">$c_0$</span> at <span class="math-container">$x=x_0$</span> then we conclude that <span class="math-container">$c_0=0$</span>. Thus we can write our equation as</p>
<p><span class="math-container">$$(x-x_0)\cdot\big(c_1+c_2(x-x_0)+c_3(x-x_0)^2+\cdots+c_n(x-x_0)^{n-1}+\cdots\big)=0$$</span></p>
<p>It is tempting to multiply both sides by <span class="math-container">$(x-x_0)^{-1}$</span> and conclude that <span class="math-container">$c_1=0$</span> (and so by induction <span class="math-container">$c_n=0$</span>) but we cannot do that for <span class="math-container">$x=x_0$</span>. And actually we are only interested in <span class="math-container">$x=x_0$</span> case. Nevertheless we can do that for <span class="math-container">$x\neq x_0$</span>. And so we conclude that</p>
<p><span class="math-container">$$c_1+c_2(x-x_0)+c_3(x-x_0)^2+\cdots+c_n(x-x_0)^{n-1}+\cdots=0$$</span></p>
<p>for any <span class="math-container">$x\in (a,b)\backslash\{x_0\}$</span>. Of course every power series is convergent at <span class="math-container">$x=x_0$</span>; the question is whether it is <span class="math-container">$0$</span> there. And it is, because every power series is continuous (as a function of <span class="math-container">$x$</span>) wherever it is convergent (<a href="https://math.stackexchange.com/questions/153841/continuity-of-power-series">see this</a>). This implies that <span class="math-container">$c_1+c_2(x-x_0)+c_3(x-x_0)^2+\cdots=0$</span> for <span class="math-container">$x=x_0$</span> as well. And therefore <span class="math-container">$c_1=0$</span> by evaluating at <span class="math-container">$x=x_0$</span>.</p>
<p>Now we repeat this process and by simple induction we conclude that <span class="math-container">$c_n=0$</span> for any <span class="math-container">$n\in\mathbb{N}$</span>.</p>
|
50,736 | <p>Hi guys,</p>
<p>I have recently started looking at polynomials $q_n$ generated by initial choices $q_0=1$, $q_1=x$ with, for $n\geq 0$, some recurrence formula</p>
<p>$$q_{n+2}=xq_{n+1}+c_n q_n$$</p>
<p>where $c_n$ is some function in $n$. The first few of these are</p>
<p>$$q_2=x^2+c_0$$
$$q_3=x^3+(c_0+c_1)x$$
$$q_4=x^4+(c_0+c_1+c_2)x^2+c_2c_0$$
$$q_5=x^5+(c_0+c_1+c_2+c_3)x^3+(c_0c_2+c_0c_3+c_1c_3)x$$
$$q_6=x^6+(c_0+c_1+c_2+c_3+c_4)x^4+(c_0c_2+c_0c_3+c_0c_4+c_1c_3+c_1c_4+c_2c_4)x^2$$$$+c_0c_2c_4$$</p>
<p>My question is whether there is a name for the coefficients of the powers of $x$. I realise that they can be written as certain formulations of elementary symmetric polynomials but I am ideally looking for a reference where the specific expressions are studied </p>
<p>Any help would be great :)</p>
| user91132 | 6,827 | <p>Let's treat the $c_i$ as formal indeterminates.</p>
<p>Let $S(n,m)$ be the set of increasing functions $i:\{1,\ldots, m\}\to \{0,\ldots, n-2\}$, written $j \mapsto i_j$, such that $i_{j+1} > i_j + 1$ for all $j=1,\ldots, m-1$. So $S(n,m)$ is in bijection with the set of subsets of $\{0, \ldots, n-2\}$ of size $m$ which contain no adjacent pair $(k, k+1)$.</p>
<p>Then the coefficient of $x^{n - 2m}$ in $q_n$ is </p>
<p>$\sum_{i \in S(n,m)} c_{i_1} c_{i_2} \cdots c_{i_m}$</p>
<p>and all other coefficients are zero:</p>
<p>$q_n = \sum_{m = 0}^{[n/2]} \left( \sum_{i \in S(n,m)} c_{i_1} c_{i_2} \cdots c_{i_m} \right) x^{n - 2m}$.</p>
<p>Proof: induction on $n$.</p>
<p>Possibly considering the generating function $F = \sum_{n=0}^\infty q_nt^n$ may be helpful?</p>
<p>Note that these coefficients are not elementary symmetric polynomials in the $c_i$, since for example already $c_2c_0$ isn't invariant under all permutations of $\{0,1,2\}$. I just thought I'd spell out the symmetry that is involved here, and perhaps someone else knows a name for these coefficients.</p>
|
2,551,233 | <p>There are 4 fair coins and 1 unfair coin that has only heads. We choose a coin and flip it three times. The result is HHH. What is the probability that the fourth flip is H? </p>
| nicola | 251,928 | <p>First, use Bayes' rule to determine the probability of having selected the unfair coin. Call $UC$ the event "we selected the unfair coin" and $FC $ "we selected a fair coin".</p>
<p>$$P(HHH) = P(HHH|UC)P_0(UC) + P(HHH|FC)P_0(FC) = 1\times\frac{1}{5}+\frac{1}{8}\times\frac{4}{5} = \frac{3}{10}$$
$$P(UC|HHH) = \frac{P(HHH|UC) P_0(UC)}{P(HHH)} = \frac{1}{5}\times\frac{10}{3} = \frac{2}{3}$$
$$P(FC|HHH) = 1-P(UC|HHH)=\frac{1}{3}$$</p>
<p>So we have:</p>
<p>$$P(H|HHH) = P(H|HHH,UC)P(UC) + P(H|HHH,FC)P(FC) = \frac{2}{3}\times1 +\frac{1}{3}\times\frac{1}{2} = \frac{5}{6} $$</p>
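<p>The computation can be reproduced with exact rational arithmetic in Python, which confirms each intermediate value:</p>

```python
from fractions import Fraction as F

p_unfair, p_fair = F(1, 5), F(4, 5)     # prior on the selected coin
lik_uc, lik_fc = F(1), F(1, 2) ** 3     # P(HHH | coin)

p_hhh = lik_uc * p_unfair + lik_fc * p_fair      # = 3/10
post_uc = lik_uc * p_unfair / p_hhh              # = 2/3
post_fc = 1 - post_uc                            # = 1/3

p_next_h = post_uc * 1 + post_fc * F(1, 2)       # = 5/6
```

<p>All three quantities match the fractions derived above exactly.</p>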
|
37,900 | <p>I use the following code to find out the number of consecutive prime numbers using a formula $n^2+n+i$ found out by Euler (starting from n=0):</p>
<pre><code>Nbs = {};
Do[Nbs = Union[Nbs,
Select[Range[5000], (PrimeQ[#^2 + # + i] == False &), 1]], {i, 1,
5000}];
Nbs
</code></pre>
<p>How can I also get, in the output list, the value of the <code>Do</code> iterator $i$ corresponding to each number of consecutive primes?</p>
<p>I would like to get something like this:</p>
<pre><code>{{1,2},...,{40,41}}
</code></pre>
| Kevin | 10,860 | <p>I don't fully understand your question, but I believe you want either Nbs2 or Nbs3 (same data, sorted differently) in this code:</p>
<pre><code>Nbs = {};
Do[AppendTo[Nbs,
Append[Select[Range[5000], (PrimeQ[#^2 + # + i] == False &), 1],
i]], {i, 1, 5000}];
Nbs2 = DeleteDuplicates[Nbs, (#1[[1]] == #2[[1]]) &];
Nbs3 = Sort[Nbs2];
</code></pre>
<p>Where <code>Nbs2</code>:</p>
<pre><code>{{4, 1}, {1, 2}, {2, 3}, {10, 11}, {16, 17}, {40, 41}, {3, 65}, {6, 77},
{12, 221}, {5, 347}, {7, 437}}
</code></pre>
<p>and <code>Nbs3</code>:</p>
<pre><code>{{1, 2}, {2, 3}, {3, 65}, {4, 1}, {5, 347}, {6, 77}, {7, 437},
{10, 11}, {12, 221}, {16, 17}, {40, 41}}
</code></pre>
<p><code>Nbs</code> holds the whole data since I am not using <code>Union</code> there anymore so that I can keep the iterator as you needed.</p>
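<p>For what it's worth, the same {run length, <code>i</code>} pairs can be cross-checked outside <em>Mathematica</em>; here is a small Python sketch (counting from $n=0$, so Euler's polynomial $n^2+n+41$ yields the pair <code>{40, 41}</code>):</p>

```python
def is_prime(n):
    # Trial division; fine for the small values involved here.
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_composite(i, limit=5000):
    # Smallest n >= 0 with n^2 + n + i composite, i.e. the number of
    # consecutive primes produced starting from n = 0.
    for n in range(limit):
        if not is_prime(n * n + n + i):
            return n
    return None

runs = {}
for i in range(1, 100):
    n = first_composite(i)
    runs.setdefault(n, i)        # smallest i achieving each run length
pairs = sorted(runs.items())     # contains e.g. (1, 2) and (40, 41)
```

<p>Note this counts from $n=0$ while the <code>Select[Range[5000], ...]</code> code above starts at $n=1$, so the indices can differ slightly from <code>Nbs2</code>.</p>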
|
120,687 | <p>Consider the following code</p>
<pre><code>styles = {Red, Blue, {Red, Dashed}, {Blue, Dashed}}
pt1 = Plot[{x^2, 2 x^2, 1/x^2, 2/x^2}, {x, 0, 3}, Frame -> True,
PlotStyle -> styles, PlotLegends -> {"1", "2", "1", "2"}]
</code></pre>
<p>I would like the two red lines to carry the same label "1" and the two blue lines the same label "2". That is, in the legend I would like a red line and a red-dashed line below each other and then one label right of it. Similarly for the blue lines. Does anybody know how to do this?</p>
| Eric Towers | 16,237 | <p>Testing my comment does indicate that one can cut the number of evaluations of <code>f[]</code> in half easily.</p>
<pre><code>f[x_, y_, z_] := Module[{},
totCalls++;
Exp[Sin[x]] + Cos[y + z]
]
totCalls = 0;
NIntegrate[
{f[x, y, z], Sqrt[f[x, y, z]] + x},
{x, 0, 10}, {y, 0, 10}, {z, 0, 10},
Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0},
PrecisionGoal -> 7] // RepeatedTiming
totCalls
(* {7.23, {1429.54, 6081.95 + 59.0571 I}} *)
(* 16 *)
totCalls = 0;
NIntegrate[
{#, Sqrt[#] + x} &[f[x, y, z]],
{x, 0, 10}, {y, 0, 10}, {z, 0, 10},
Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0},
PrecisionGoal -> 7] // RepeatedTiming
totCalls
(* {7.281, {1429.54, 6081.95 + 59.0571 I}} *)
(* 8 *)
</code></pre>
<p>This tells me one very important thing: Your slow numerical evaluation is not caused by calling <code>f[]</code> often. Instrumenting <code>f[]</code> a little more turns up some <em>very odd</em> behaviour.</p>
<pre><code>f[x_, y_, z_] := Module[{},
totCalls++;
Print[{x, y, z, Exp[Sin[x]] + Cos[y + z]} // RepeatedTiming];
Exp[Sin[x]] + Cos[y + z]
]
totCalls = 0;
NIntegrate[
{f[x, y, z], Sqrt[f[x, y, z]] + x},
{x, 0, 10}, {y, 0, 10}, {z, 0, 10},
Method -> {"LocalAdaptive", "SymbolicProcessing" -> 0},
PrecisionGoal -> 7] // RepeatedTiming
totCalls
(* {1.9*10^-6 , {10, 10, 10, E^Sin[10]+Cos[20]}} *)
(* {1.35*10^-6, {x , y , z , E^Sin[x]+Cos[y+z]}} *)
(* ... essentially repeat those two lines seven more times ... *)
(* {7.34, {1429.54, 6081.95 + 59.0571 I}} *)
(* 16 *)
</code></pre>
<p>So, adding a bunch of overhead to calls to <code>f[]</code> doesn't much change the total calculation time. Also, the function only ever seems to get evaluated at <code>{x,y,z} = {10,10,10}</code>, which is very odd.</p>
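<p>The underlying trick — evaluating the shared expensive subexpression once per sample point instead of once per component — is language-independent. A hedged Python sketch of the same call-counting experiment (the sample points are arbitrary, chosen so the argument of <code>sqrt</code> stays positive):</p>

```python
from math import exp, sin, cos, sqrt

calls = 0
def f(x, y, z):
    global calls
    calls += 1
    return exp(sin(x)) + cos(y + z)

def integrands_naive(x, y, z):
    return (f(x, y, z), sqrt(f(x, y, z)) + x)   # two calls per point

def integrands_shared(x, y, z):
    v = f(x, y, z)                              # one call per point
    return (v, sqrt(v) + x)

points = [(0.1 * k, 0.1 * k, 0.1 * k) for k in range(10)]
naive = [integrands_naive(*p) for p in points]
naive_calls = calls
calls = 0
shared = [integrands_shared(*p) for p in points]
shared_calls = calls
```

<p><code>naive_calls</code> comes out at twice <code>shared_calls</code> while the returned values are identical — the same halving that <code>{#, Sqrt[#] + x} &[f[x, y, z]]</code> achieved above.</p>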
|
1,533,362 | <p>I need to prove this identity:</p>
<p>$\sum_{k=0}^n \frac{1}{k+1}{2k \choose k}{2n-2k \choose n-k}={2n+1 \choose n}$</p>
<p>without using the identity:</p>
<p>$C_{n+1}=\sum_{k=0}^n C_kC_{n-k}$.</p>
<p>Can't figure out how to.</p>
| Ojas | 154,392 | <p>Let's try to solve the following problem: given an $(n+1)\times n$ grid, count the number of ways to move from the lower left corner to the upper right corner while moving only right or up in one step.</p>
<p>One way to do this is straightforward. You have a total of $2n + 1$ moves, out of which $n$ moves should be <em>up</em> moves and the rest should be <em>right</em> moves. The number of sequences of length $2n+1$ with $n$ <em>up</em> moves and $n+1$ <em>right</em> moves is $\binom{2n+1}{n}$.</p>
<p>Another way to do this is as follows :
Consider the diagonal of the rectangle from the vertex $(0,0)$ to $(n,n)$. Now, let's count the number of paths from lower left to upper right which go below this diagonal for the first time at the coordinate $(k,k)$, i.e., the path lies above the diagonal till it reaches $(k,k)$, then it moves to $(k+1,k)$, and then continues from there. The number of such paths is $C_k \binom{2n-2k}{n-k}$, where $C_k$ is the $k^{th}$ Catalan number. (The number of paths from $(0,0)$ to $(k,k)$ that lie above the diagonal is $C_k$, and the total number of paths from $(k+1,k)$ to $(n+1,n)$ is $\binom{2n-2k}{n-k}$.)
Taking the sum of this expression from $k= 0$ to $n$ gives the total number of paths from the lower left corner of the grid to the upper right corner. Thus, another expression for the same is $\sum_{k=0}^nC_k\binom{2n-2k}{n-k} = \sum_{k=0}^n\frac{1}{k+1}\binom{2k}{k}\binom{2n-2k}{n-k}$.</p>
<p>$$ \therefore \binom{2n+1}{n}= \sum_{k=0}^n\frac{1}{k+1}\binom{2k}{k}\binom{2n-2k}{n-k}$$.</p>
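<p>The identity is also easy to confirm by direct computation for small $n$, e.g. with this Python sketch:</p>

```python
from math import comb

def catalan(k):
    return comb(2 * k, k) // (k + 1)

# Compare both sides of the identity for n = 0, ..., 14.
checks = [(sum(catalan(k) * comb(2 * (n - k), n - k) for k in range(n + 1)),
           comb(2 * n + 1, n))
          for n in range(15)]
```

<p>For every $n$ up to $14$ the two sides agree; for instance $n=2$ gives $6+2+2=10=\binom{5}{2}$.</p>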
|
3,851,609 | <p>I need to show that if <span class="math-container">$X_n \rightarrow X$</span> and <span class="math-container">$X_n \rightarrow Y$</span>, then <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span> for convergence in probability, convergence almost surely, as well as for convergence in mean and quadratic mean (<span class="math-container">$\mathcal L^1$</span> and <span class="math-container">$\mathcal L^2$</span> convergence).</p>
<p><strong>Convergence in Probability:</strong></p>
<p>For any <span class="math-container">$\epsilon>0$</span> and for any <span class="math-container">$n\in\mathbb N$</span> we have</p>
<p><span class="math-container">$$\begin{align}
\mathbb P(|X-Y|\geq\epsilon)
&\leq\mathbb P(|X-X_n|+|X_n-Y|\geq\epsilon)\\\\
&\leq\mathbb P\left((|X-X_n|\geq\epsilon/2)\cup(|X_n-Y|\geq\epsilon/2)\right)\\\\
&\leq\mathbb P(|X-X_n|\geq\epsilon/2)+\mathbb P(|X_n-Y|\geq\epsilon/2)
\end{align}$$</span></p>
<p>so that</p>
<p><span class="math-container">$$\mathbb P(|X-Y|\geq\epsilon)\leq\lim_{n\rightarrow\infty}\mathbb P(|X-X_n|\geq\epsilon/2)+\mathbb P(|X_n-Y|\geq\epsilon/2)=0$$</span></p>
<p>Since <span class="math-container">$$\{|X-Y|>0\}=\underbrace{\bigcup_{n=1}^\infty \underbrace{\left\{|X-Y|>\frac{1}{n}\right\}}_{=\emptyset}}_{=\emptyset}=\emptyset$$</span></p>
<p>we have that <span class="math-container">$\mathbb P\{|X-Y|>0\}=0$</span> and so <span class="math-container">$\mathbb P(X\ne Y)=0$</span>. Hence <span class="math-container">$\mathbb P(X= Y)=1$</span> which means that <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span>.</p>
<p><strong>Convergence Almost Surely:</strong></p>
<p>Since almost sure convergence implies convergence in probability, the result follows immediately from the last part. However, I'd like to show this without making use of that result. Since <span class="math-container">$X_n$</span> converges almost surely to both <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> then <span class="math-container">$\mathbb P(\lim_{n\rightarrow\infty}X_n=X)=1$</span> and <span class="math-container">$\mathbb P(\lim_{n\rightarrow\infty}X_n=Y)=1$</span>. From here it seems obvious to me that <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span> but I'm not sure how to show this formally.</p>
<p><strong>Convergence in Mean:</strong></p>
<p><span class="math-container">$$\begin{align}
\mathbb E(|X-Y|)
&\leq\mathbb E\left(|X-X_n|+|X_n-Y|\right)\\\\
&=\mathbb E\left(|X-X_n|)+\mathbb E(|X_n-Y|\right)
\end{align}$$</span></p>
<p>so</p>
<p><span class="math-container">$$\mathbb E(|X-Y|)\leq\lim_{n\rightarrow\infty}\mathbb E(|X-X_n|)+\mathbb E(|X_n-Y|)=0$$</span></p>
<p>so <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span></p>
<p><strong>Convergence in Quadratic Mean:</strong></p>
<p>I tried continuing with the same logic but it's not the case that</p>
<p><span class="math-container">$$
\mathbb E(|X-Y|^2)\leq\mathbb E\left(|X-X_n|^2+|X_n-Y|^2\right)$$</span></p>
<p>so I'm not sure how to proceed.</p>
<p>Is my reasoning correct for the first and third? How can I proceed with the other two?</p>
| Ninad Munshi | 698,724 | <p>If <span class="math-container">$f(y)$</span> is continuous, then it has an antiderivative, i.e. some function <span class="math-container">$g(y)$</span> s.t. <span class="math-container">$g'(y) = f(y)$</span>. This gives us</p>
<p><span class="math-container">$$\int_0^1 \int_x^{1-x} g'(y)\:dydx = \int_0^1 g(1-x)-g(x)\:dx$$</span></p>
<p>Then split apart the integrals and use the variable interchange <span class="math-container">$x\leftrightarrow 1-x$</span> on the first integral</p>
<p><span class="math-container">$$\int_0^1g(1-x)dx-\int_0^1g(x)dx = \int_0^1g(x)dx-\int_0^1g(x)dx = 0$$</span></p>
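<p>The conclusion can be checked numerically — a Python sketch using a midpoint rule (the test integrands are arbitrary choices):</p>

```python
from math import exp

def signed_integral(f, a, b, n=500):
    # Midpoint rule; when a > b the step h is negative, so the
    # orientation of the bounds is respected automatically.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

def bowtie_integral(f, n=400):
    # \int_0^1 \int_x^{1-x} f(y) dy dx; the inner bounds cross at x = 1/2.
    h = 1.0 / n
    return sum(h * signed_integral(f, x, 1 - x)
               for x in ((i + 0.5) * h for i in range(n)))
```

<p>Both <code>bowtie_integral(exp)</code> and <code>bowtie_integral(lambda y: y**3 + 1)</code> come out at zero up to quadrature error, as the symmetry argument predicts.</p>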
|
3,851,609 | <p>I need to show that if <span class="math-container">$X_n \rightarrow X$</span> and <span class="math-container">$X_n \rightarrow Y$</span>, then <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span> for convergence in probability, convergence almost surely, as well as for convergence in mean and quadratic mean (<span class="math-container">$\mathcal L^1$</span> and <span class="math-container">$\mathcal L^2$</span> convergence).</p>
<p><strong>Convergence in Probability:</strong></p>
<p>For any <span class="math-container">$\epsilon>0$</span> and for any <span class="math-container">$n\in\mathbb N$</span> we have</p>
<p><span class="math-container">$$\begin{align}
\mathbb P(|X-Y|\geq\epsilon)
&\leq\mathbb P(|X-X_n|+|X_n-Y|\geq\epsilon)\\\\
&\leq\mathbb P\left((|X-X_n|\geq\epsilon/2)\cup(|X_n-Y|\geq\epsilon/2)\right)\\\\
&\leq\mathbb P(|X-X_n|\geq\epsilon/2)+\mathbb P(|X_n-Y|\geq\epsilon/2)
\end{align}$$</span></p>
<p>so that</p>
<p><span class="math-container">$$\mathbb P(|X-Y|\geq\epsilon)\leq\lim_{n\rightarrow\infty}\mathbb P(|X-X_n|\geq\epsilon/2)+\mathbb P(|X_n-Y|\geq\epsilon/2)=0$$</span></p>
<p>Since <span class="math-container">$$\{|X-Y|>0\}=\underbrace{\bigcup_{n=1}^\infty \underbrace{\left\{|X-Y|>\frac{1}{n}\right\}}_{=\emptyset}}_{=\emptyset}=\emptyset$$</span></p>
<p>we have that <span class="math-container">$\mathbb P\{|X-Y|>0\}=0$</span> and so <span class="math-container">$\mathbb P(X\ne Y)=0$</span>. Hence <span class="math-container">$\mathbb P(X= Y)=1$</span> which means that <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span>.</p>
<p><strong>Convergence Almost Surely:</strong></p>
<p>Since almost sure convergence implies convergence in probability, the result follows immediately from the last part. However, I'd like to show this without making use of that result. Since <span class="math-container">$X_n$</span> converges almost surely to both <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> then <span class="math-container">$\mathbb P(\lim_{n\rightarrow\infty}X_n=X)=1$</span> and <span class="math-container">$\mathbb P(\lim_{n\rightarrow\infty}X_n=Y)=1$</span>. From here it seems obvious to me that <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span> but I'm not sure how to show this formally.</p>
<p><strong>Convergence in Mean:</strong></p>
<p><span class="math-container">$$\begin{align}
\mathbb E(|X-Y|)
&\leq\mathbb E\left(|X-X_n|+|X_n-Y|\right)\\\\
&=\mathbb E\left(|X-X_n|)+\mathbb E(|X_n-Y|\right)
\end{align}$$</span></p>
<p>so</p>
<p><span class="math-container">$$\mathbb E(|X-Y|)\leq\lim_{n\rightarrow\infty}\mathbb E(|X-X_n|)+\mathbb E(|X_n-Y|)=0$$</span></p>
<p>so <span class="math-container">$X\overset{\text{a.s.}}{=}Y$</span></p>
<p><strong>Convergence in Quadratic Mean:</strong></p>
<p>I tried continuing with the same logic but it's not the case that</p>
<p><span class="math-container">$$
\mathbb E(|X-Y|^2)\leq\mathbb E\left(|X-X_n|^2+|X_n-Y|^2\right)$$</span></p>
<p>so I'm not sure how to proceed.</p>
<p>Is my reasoning correct for the first and third? How can I proceed with the other two?</p>
| Tryst with Freedom | 688,539 | <p><a href="https://i.stack.imgur.com/F19zS.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F19zS.jpg" alt="enter image description here" /></a></p>
<p>So, I've drawn the region of integration in the <span class="math-container">$x$</span>-<span class="math-container">$y$</span> plane. The bounds of the inner integral, <span class="math-container">$x$</span> and <span class="math-container">$1-x$</span>, correspond to the two lines <span class="math-container">$y=x$</span> and <span class="math-container">$y=1-x$</span>. If we imagine <span class="math-container">$f(y)$</span> as some density function (e.g. mass density), then integrating it from <span class="math-container">$x$</span> to <span class="math-container">$1-x$</span> can be thought of as finding the mass of a vertical strip between those lines.</p>
<p>The density along the strip varies as a function of <span class="math-container">$y$</span>; that is the black curve connected to the green inner integral in my picture. We can think of the mass of the strip accumulating as we move up vertically along it.</p>
<p>Once we do the inner integral, we get this expression as @Ninad Munshi has shown:</p>
<p><span class="math-container">$$ \int_{0}^1 g(1-x) - g(x) dx$$</span></p>
<p>So, we can think of this expression as finding the mass of the vertical strip from the line <span class="math-container">$y=1-x$</span> down to the x-axis, and chopping off the mass from the line <span class="math-container">$y=x$</span> down to the x-axis. The outer integral can be thought of as adding up the vertical strips horizontally, to get the mass of the bow-tie, as Ninad put it.</p>
<p><span class="math-container">$$ \int_0^1 g(1-x) dx - \int_0^1 g(x) dx$$</span></p>
<p>For the first integral, we can do a trick. Instead of adding up the mass of the vertical bar from left to right along the x-axis, we can add them from the right to left ( starting from the end). This gives:</p>
<p><span class="math-container">$$ \int_0^1 g(x) dx - \int_0^1 g(x) dx$$</span></p>
<p>But now we see that the mass of the region is zero: the mass between the <span class="math-container">$x$</span>-axis and the line <span class="math-container">$y=x$</span> is the negative of the mass between the line <span class="math-container">$y=1-x$</span> and the line <span class="math-container">$y=x$</span>.</p>
|
257,623 | <p>Consider the following ellipse, generated by the bounding region of the following points</p>
<pre><code>ps = {{-11, 5}, {-12, 4}, {-10, 4}, {-9, 5}, {-10, 6}};
rec = N@BoundingRegion[ps, "FastEllipse"];
Graphics[{rec, Red, Point@ps}]
</code></pre>
<p><a href="https://i.stack.imgur.com/gvtUB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gvtUB.png" alt="enter image description here" /></a></p>
<p>We have that the ellipse 'rec' is given in the form</p>
<pre><code>Ellipsoid[{-10.4, 4.8}, {{2.77333, 0.853333}, {0.853333, 1.49333}}]
</code></pre>
<p>How can I retrieve the lengths of the two main axes of such ellipsoid? Following <a href="https://en.wikipedia.org/wiki/Ellipsoid#As_a_quadric" rel="nofollow noreferrer">this representation</a> and Mathematica's general definition of <code>Ellipsoid</code></p>
<p><a href="https://i.stack.imgur.com/vaCtl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/vaCtl.png" alt="enter image description here" /></a></p>
<p>I tried the following, using the eigenvalues of <code>rec[[2]]</code></p>
<pre><code>eigs = Eigenvectors[Inverse[rec[[2]]]]
eigv = Eigenvalues[Inverse[rec[[2]]]];
lens = 2/Sqrt[eigv]
Out[]= {2.06559, 3.57771}
</code></pre>
<p>where the <code>2</code> factor comes from the fact that what I retrieve from the eigenvalues is actually half the length of the main axis. Indeed we get</p>
<pre><code>Graphics[{rec, Red, Point@ps,
Blue, Line[
RegionCentroid@rec + # & /@ {-(lens[[1]] eigs[[1]])/
2, (lens[[1]] eigs[[1]])/2}],
Line[RegionCentroid@rec + # & /@ {-(lens[[2]] eigs[[2]])/
2, (lens[[2]] eigs[[2]])/2}]}]
</code></pre>
<p><a href="https://i.stack.imgur.com/1N5gL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1N5gL.png" alt="enter image description here" /></a></p>
<p>Is this correct? Is there a quicker way of doing this?</p>
| Daniel Huber | 46,318 | <p>I think the help of Ellipsoid is incomplete, because it does not explain the input <code>Ellipsoid[p, \[CapitalSigma]]</code>, where the second argument is called the "weight matrix".</p>
<p>You will remember that an ellipse (for simplicity I am explaining the 2D case, nD is similar; assume the ellipse is centered at the origin) can be written as:</p>
<pre><code>x^2/rx^2 + y^2/ry^2 == 1
</code></pre>
<p>We may write this as:</p>
<pre><code>{x,y}.{{1/rx^2,0},{0,1/ry^2}}.{x,y} == r.mat.r == 1
</code></pre>
<p>Note that the inverse Sqrt of the eigenvalues of m0 are the half axes of the ellipse.</p>
<p>If we now rotate the coordinate system by a rotation matrix: rot (r'=rot.r where r' are the new coordinates) the ellipse will be rotated (in the inverse sense) in the new coordinates:</p>
<pre><code>r'. Transpose[rot].mat. rot .r' == r' . mat' . r' == 1
</code></pre>
<p>Therefore a rotated ellipse may be represented by a symmetric (positive definite) matrix mat'. This is called the "weight matrix" in the help.</p>
<p>Note that the eigenvalues of the matrix are not changed by a rotation. The eigenvectors point in the directions of the half axes.</p>
<p>Here is an example: Let</p>
<pre><code>rx=2;
ry=1;
m0=DiagonalMatrix[{1/rx^2,1/ry^2}]
{x,y}.m0.{x,y}==1
</code></pre>
<p>This represents an axis-aligned ellipse with half axes rx and ry:</p>
<pre><code>Region[ImplicitRegion[{x, y} . m0 . {x, y} == 1, {x, y}],
Axes -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/WAGza.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/WAGza.png" alt="enter image description here" /></a></p>
<p>If we now rotate the matrix m0:</p>
<pre><code>rot = RotationMatrix[-Pi/4];
m= Transpose[rot].m0.rot;
</code></pre>
<p>we get a rotated ellipse:</p>
<pre><code>Region[ImplicitRegion[{x, y} . m . {x, y} == 1, {x, y}], Axes -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/EZNzG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/EZNzG.png" alt="enter image description here" /></a></p>
<p>Therefore, the half axes are obtained by the inverse Sqrt of the eigenvalues of the weight matrix. And the directions of the axes are given by the eigenvectors.</p>
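<p>For the 2D case the eigenvalue computation is simple enough to do by hand in any language; here is a Python sketch of the example above (half axes $r_x=2$, $r_y=1$, rotation by $-\pi/4$):</p>

```python
from math import sqrt, pi, cos, sin

def symm2x2_eigs(a, b, c):
    # Eigenvalues of the symmetric matrix [[a, b], [b, c]].
    mean = (a + c) / 2
    disc = sqrt(((a - c) / 2) ** 2 + b ** 2)
    return mean + disc, mean - disc

def half_axes(weight):
    # Ellipse {r : r . weight . r == 1}; half axes are 1/sqrt(eigenvalue).
    (a, b), (_, c) = weight
    lam1, lam2 = symm2x2_eigs(a, b, c)
    return 1 / sqrt(lam1), 1 / sqrt(lam2)

rx, ry = 2.0, 1.0
m0 = [[1 / rx**2, 0.0], [0.0, 1 / ry**2]]

t = -pi / 4
R = [[cos(t), -sin(t)], [sin(t), cos(t)]]
# m = Transpose[R] . m0 . R
m = [[sum(R[k][i] * m0[k][l] * R[l][j] for k in range(2) for l in range(2))
      for j in range(2)] for i in range(2)]
```

<p>Both <code>half_axes(m0)</code> and <code>half_axes(m)</code> return $\{1, 2\}$ (up to ordering): the rotation changes the eigenvectors but not the eigenvalues, hence not the axis lengths.</p>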
|
7,981 | <p>I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?</p>
| Sundaramurthy | 69,788 | <p>It is straightforward. We have the zeta function, 'analytically continued', which satisfies</p>
<p>$\zeta(s)[1-2/2^s] = 1 - 1/2^s + 1/3^s - 1/4^s + \cdots$ Here $s$ is a complex variable; write $s=\sigma + i\omega$, where $\sigma = \Re(s)$ is the real part and $\omega = \Im(s)$ is the imaginary part. The above series converges in the region of our interest, which is $0 < \sigma < 1.$</p>
<p>To find the zeroes of $\zeta(s)$ we set $\zeta(s) = 0$ and solve for $s$, i.e. for the sigmas and omegas.</p>
<p>That is </p>
<p>Solve $0= 1 - 1/2^s + 1/3^s -1/4^s + ...$, for sigmas and omegas. Riemann hypothesized that the zeros will have their sigmas equal to 1/2 while the omegas are distinct. To this date, after 150 years, no one has any clue why sigma takes a single value of 1/2 in the critical strip $0 < \sigma < 1.$ Apart from the consequences I hope I explained it well. Wikipedia on Riemann Hypothesis is a good source for reading up.</p>
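<p>As an aside, the series relation itself is easy to test numerically at a point where $\zeta$ has a closed form, e.g. $s=2$ with $\zeta(2)=\pi^2/6$ — a Python sketch:</p>

```python
from math import pi

def eta(s, terms=100_000):
    # Partial sum of the alternating series 1 - 1/2^s + 1/3^s - ...
    return sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))

lhs = eta(2)
rhs = (1 - 2 / 2 ** 2) * (pi ** 2 / 6)   # zeta(s)*(1 - 2/2^s) at s = 2
```

<p>The two sides agree to high precision (for an alternating series with decreasing terms, the truncation error is below the first omitted term). Of course this only checks the stated series identity; locating the nontrivial zeros on $\sigma = 1/2$ is the hard part.</p>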
|
1,941,583 | <p>I tried to solve the Millikan equation, but I did not succeed. The initial equation is:
$$m\frac{dv}{dt}= \frac{4}{3}\pi a^3 \rho g - qE - 6\pi \eta av$$
and I must show how to obtain this:
$$v= \frac{\frac{4}{3}\pi a^3 \rho g - qE} {6\pi \eta a}\left(1-e^{-6\pi \eta a t/m}\right) $$</p>
<p>*** Just to make sure: all the "Greek variables" are constants in this equation; the only variables are $v$ and $t$.</p>
<p>%%%________________________________________________________%%%</p>
<p>Those are my steps:</p>
<p>$$\int\frac{dv}{4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v} = \int\frac{dt}{m}$$</p>
<hr>
<p>My substitution:
$$u=4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v$$
$$du= -6\pi \eta a (dv)$$
$$dv= \frac{-du}{6\pi \eta a}$$</p>
<hr>
<p>$$-\int\frac{du}{6\pi \eta a (u)} = \frac{t}{m} +c$$
$$\frac{-1}{6\pi \eta a} \ln \left\lvert 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v\right\rvert = \frac{t}{m} +c$$</p>
<p>$$ c=\frac{-1}{6\pi \eta a} \ln \left\lvert 4/3 * \pi a^3 \rho g - qE\right\rvert$$ We know that $t$ and $v$ are equal to 0.</p>
<p>$$-m \ln \left\lvert 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v\right\rvert= 6\pi \eta avt*\ln \left\lvert 4/3 * \pi a^3 \rho g - qE\right\rvert$$</p>
<p>$$ \ln \left\lvert \left( 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v \right)^{-m}\right\rvert= 6\pi \eta avt*\ln \left\lvert 4/3 * \pi a^3 \rho g - qE\right\rvert$$</p>
<p>$$\ln \left\lvert \frac{4/3 * \pi a^3 \rho g - qE}{\left( 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v \right)^{m}}\right\rvert = 6\pi \eta at$$</p>
<p>This is my last step; I don't know how to solve for $v$. Just in case, could you make sure that I did not make any mistakes so far, and help me solve for $v$? This should be the final answer:
$$v= \frac{\frac{4}{3}\pi a^3 \rho g - qE} {6\pi \eta a}\left(1-e^{-6\pi \eta a t/m}\right) $$</p>
| snulty | 128,967 | <p>Writing the equation as $$m\frac{dv}{dt}=A-Bv$$
one can solve by <a href="https://en.wikipedia.org/wiki/Separation_of_variables" rel="nofollow">separation of variables</a>. Re-write as:</p>
<p>$$\frac{m}{A-Bv}\frac{dv}{dt}=1$$</p>
<p>Integrating $dt$ and making the substitution $u=A-Bv$:</p>
<p>$$\frac{-m}{B}\int\frac{du}{u}=t+\text{constant}$$</p>
<p>Continuing:</p>
<p>$$\ln(A-Bv)=-\frac{Bt}{m}+\text{constant}$$</p>
<p>$$v=\frac{1}{B}\left(A-ce^{-Bt/m}\right)$$</p>
<p>Since you have mentioned that $v(0)=0$, then this implies that $c=A$ and thus:</p>
<p>$$v=\frac{A}{B}\left(1-e^{-Bt/m}\right)$$</p>
<p>Finally note that $A=\frac{4}{3}\pi a^3\rho g-qE$, and $B=6\pi\eta a$.</p>
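<p>As a sanity check, one can verify numerically that this closed form solves the original ODE — a Python sketch with arbitrary illustrative constants standing in for $A=\frac43\pi a^3\rho g - qE$, $B=6\pi\eta a$:</p>

```python
from math import exp

A, B, m = 2.0, 3.0, 0.5   # arbitrary stand-ins for the physical constants

def v_exact(t):
    return (A / B) * (1 - exp(-B * t / m))

# Explicit Euler integration of m v' = A - B v, starting from v(0) = 0.
dt, t, v, max_err = 1e-4, 0.0, 0.0, 0.0
for _ in range(20_000):
    v += dt * (A - B * v) / m
    t += dt
    max_err = max(max_err, abs(v - v_exact(t)))
```

<p>The Euler trajectory tracks the closed form to within the expected $O(\Delta t)$ error, and $v(t)\to A/B$ (the terminal velocity) as $t\to\infty$.</p>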
|
1,941,583 | <p>I tried to solve the Millikan equation, but I did not succeed. The initial equation is:
$$m\frac{dv}{dt}= \frac{4}{3}\pi a^3 \rho g - qE - 6\pi \eta av$$
and I must show how to obtain this:
$$v= \frac{\frac{4}{3}\pi a^3 \rho g - qE} {6\pi \eta a}\left(1-e^{-6\pi \eta a t/m}\right) $$</p>
<p>*** Just to make sure: all the "Greek variables" are constants in this equation; the only variables are $v$ and $t$.</p>
<p>%%%________________________________________________________%%%</p>
<p>Those are my steps:</p>
<p>$$\int\frac{dv}{4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v} = \int\frac{dt}{m}$$</p>
<hr>
<p>My substitution:
$$u=4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v$$
$$du= -6\pi \eta a (dv)$$
$$dv= \frac{-du}{6\pi \eta a}$$</p>
<hr>
<p>$$-\int\frac{du}{6\pi \eta a (u)} = \frac{t}{m} +c$$
$$\frac{-1}{6\pi \eta a} \ln \left\lvert 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v\right\rvert = \frac{t}{m} +c$$</p>
<p>$$ c=\frac{-1}{6\pi \eta a} \ln \left\lvert 4/3 * \pi a^3 \rho g - qE\right\rvert$$ We know that $t$ and $v$ are equal to 0.</p>
<p>$$-m \ln \left\lvert 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v\right\rvert= 6\pi \eta avt*\ln \left\lvert 4/3 * \pi a^3 \rho g - qE\right\rvert$$</p>
<p>$$ \ln \left\lvert \left( 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v \right)^{-m}\right\rvert= 6\pi \eta avt*\ln \left\lvert 4/3 * \pi a^3 \rho g - qE\right\rvert$$</p>
<p>$$\ln \left\lvert \frac{4/3 * \pi a^3 \rho g - qE}{\left( 4/3 * \pi a^3 \rho g - qE - 6\pi \eta a v \right)^{m}}\right\rvert = 6\pi \eta at$$</p>
<p>This is my last step; I don't know how to solve for $v$. Just in case, could you make sure that I did not make any mistakes so far, and help me solve for $v$? This should be the final answer:
$$v= \frac{\frac{4}{3}\pi a^3 \rho g - qE} {6\pi \eta a}\left(1-e^\left(\frac{-6\pi \eta at}{m}\right)\right) $$</p>
| Felix Marin | 85,343 | <p>$\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
m\,\totald{\mrm{v}\pars{t}}{t} & = {4 \over 3}\,\pi a^{3}\rho g - qE -
6\pi\eta a\mrm{v}\pars{t}\implies
\pars{\totald{}{t} + {1 \over \tau}}\mrm{v}\pars{t} = {\tilde{v} \over \tau}
\\[5mm]
\mbox{where}\quad &
\left\{\begin{array}{rcl}
\ds{\tau} & \ds{\equiv} & \ds{m \over 6\pi\eta a}
\\[2mm]
\ds{\tilde{v}} & \ds{\equiv} &
\ds{4\pi a^{3}\rho g/3 - qE \over 6\pi\eta a}
\end{array}\right.
\\[5mm]
\implies &\ \totald{\bracks{\expo{t/\tau}\mrm{v}\pars{t}}}{t} =
{\tilde{v} \over \tau}\,\expo{t/\tau}
\implies\expo{t/\tau}\mrm{v}\pars{t} - \expo{t_{0}/\tau}v\pars{t_{0}} =
{\tilde{v} \over \tau}\,\int_{t_{0}}^{t}\expo{t'/\tau}\,\dd t'
\end{align}
<hr>
\begin{align}
\color{#f00}{\mrm{v}\pars{t}} & =
v\pars{t_{0}}\expo{-\pars{t - t_{0}}/\tau} +
{\tilde{v} \over \tau}\,\int_{t_{0}}^{t}\expo{-\pars{t - t'}/\tau}\,\dd t' =
\color{#f00}{v\pars{t_{0}}\expo{-\pars{t - t_{0}}/\tau} +
\tilde{v}\bracks{1 - \expo{-\pars{t - t_{0}}/\tau}}}
\end{align}</p>
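<p>As a numerical sanity check of this result, here is a quick Python sketch (the constants $A$, $B$, $m$ below are arbitrary demo values; in the physical problem $A = \frac{4}{3}\pi a^{3}\rho g - qE$ and $B = 6\pi\eta a$, so $\tau = m/B$):</p>

```python
import math

# Check that v(t) = (A/B) * (1 - exp(-B t / m)) satisfies m v' = A - B v and v(0) = 0.
# A, B, m are arbitrary demo constants, not physical values.
A, B, m = 2.0, 0.5, 1.5

def v(t):
    return (A / B) * (1.0 - math.exp(-B * t / m))

assert v(0.0) == 0.0

h = 1e-6  # step for a central-difference approximation of v'(t)
for t in (0.1, 1.0, 5.0):
    lhs = m * (v(t + h) - v(t - h)) / (2.0 * h)  # m v'(t)
    rhs = A - B * v(t)                           # right-hand side of the ODE
    assert abs(lhs - rhs) < 1e-6
```

<p>Note also that $v(t)\to A/B = \tilde{v}$ as $t\to\infty$, the terminal velocity.</p>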
|
9,508 | <p>I need to write a coupon code system but I do not want to save each coupon code in the database. (For performance and design reasons.) Rather I would like to generate codes subsequent that are watermarked with another code.</p>
<p>They should look kind of fancy and random. Currently they look like this:</p>
<p>1: AKFCU2, 2: AKFDU2, 3: AKFDW2, 4: AKHCU2, 5: AKHCW2, ..., 200: CLFSU2, 201: CLFSW2, ...</p>
<p>It's obvious that subsequent codes look very similar, as I just converted my code ingredients (the watermark and the integer in front) to binary and permuted the order by a fixed scheme. But to prevent people from easily guessing another valid code, or even accidentally entering another valid code (thus making that code invalid by using it), I would prefer something more chaotic, like this:</p>
<p>1: FIOJ32, 2: X9NIU2, 3: SIUUN7, 4: XTVV4S, ...</p>
<p>In the end the problem is to find a bijective, discrete function on the domain {0,1}^27 (or alternatively {0,1,2,3,4, ..., [10^(8.5)]}) that is far away from being continuous. Also it should be as simple as possible to implement. (EDIT: I also need to implement the reverse function.)</p>
<p>Any suggestions for such a function?</p>
| comonad | 3,226 | <p>You could make something inspired by RSA, where you do not need two keys. Like this:</p>
<p>Calculate everything modulo $p$, where $p$ is prime. That will then be the GF[$p$] <a href="http://en.wikipedia.org/wiki/Finite_field" rel="nofollow">(Galois field)</a>.
Now, instead of finding a generator $g$ for that GF[$p$] and calculating everything in powers of $g$, you could use any large numbers for multiplication and addition. This alone will be very unsafe, but it will be very chaotic in combination with some bit-operations.</p>
<p>The encoding and decoding functions can be combined out of simple unsafe functions. The combination makes them more unpredictable, especially with the bit-operations.</p>
<p>(Calculate once the reciprocal ($m^{-1}\equiv m^{p-2}$) of every $m$ that you use to multiply; you'll need it for decryption.)</p>
<hr>
<p>Let's define functions enc1 and dec1 based on multiplication:</p>
<p>$enc1_m(x)=x·m$ (mod p)</p>
<p>$dec1_m(y)=y·m^{-1}$ (mod p)</p>
<p>..................................................</p>
<p>And another pair based on addition:</p>
<p>$enc2_a(x)=x+a$ (mod p)</p>
<p>$dec2_a(y)=y-a$ (mod p)</p>
<p>..................................................</p>
<p>And another pair based on bit-shuffling, which needs to be invertible and must never generate a value $\geq p$:</p>
<p>Let $(\odot,\oplus,\otimes,\neg)$ be the bitwise (and, or, xor, not) operations.</p>
<p>Let $permutateAndXorLowestNBits_{s}(x,n)$ change the lowest $n$ bits of $x$ in an invertible way; in other words, this condition holds:
$$\forall x,s,n: permutateAndXorLowestNBits_{s}^{-1}(permutateAndXorLowestNBits_{s}(x,n),n)=x$$</p>
<p>Let $N(x)$ be the number of lowest bits of $x$ that can be "safely" changed, so that after changing those bits $N$ still yields the same number and the value never exceeds $p$; in other words, these two conditions hold: $$x \oplus (2^{N(x)}-1) < p$$ $$\forall y. x \odot \neg(2^{N(x)}-1) \leq y \leq x \oplus (2^{N(x)}-1) \Rightarrow N(y)=N(x)$$</p>
<p>$enc3_s(x) = permutateAndXorLowestNBits_{s}(x,N(x))$</p>
<p>$dec3_s(y) = permutateAndXorLowestNBits_{s}^{-1}(y,N(y))$</p>
<p>..................................................</p>
<p>Now, the resulting key will be $(m,a,s)$ and the functions:</p>
<p>$enc_{m,a,s}(x)=enc1_m(enc2_a(enc3_s(x)))$</p>
<p>$dec_{m,a,s}(y)=dec3_s(dec2_a(dec1_m(y)))$</p>
<p>But any other, more complex combination would be valid, too. Remember that any combination of several enc1s and enc2s can be expressed as one single combination of enc1 and enc2; so alternate them with enc3.</p>
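<p>To make the affine layer concrete, here is a minimal Python sketch of enc1 and enc2 composed (the bit-shuffling enc3 is omitted; the values of $p$, $m$, $a$ are arbitrary demo choices, not recommendations — and remember that this layer alone is linear and therefore easy to break):</p>

```python
# Affine layer only: enc(x) = (x*M + A) mod P, dec(y) = (y - A) * M^(-1) mod P.
# P, M, A are arbitrary demo values; without an enc3-style bit permutation
# this layer is linear and easy to break.
P = 2_147_483_647             # a prime (2^31 - 1)
M = 48_271                    # multiplicative key, 0 < M < P
A = 1_234_567                 # additive key

M_INV = pow(M, P - 2, P)      # reciprocal via Fermat: M^(-1) = M^(P-2) mod P

def enc(x: int) -> int:
    return (x * M + A) % P

def dec(y: int) -> int:
    return ((y - A) * M_INV) % P

# Consecutive inputs map to widely scattered outputs, and dec inverts enc:
for x in range(10):
    assert dec(enc(x)) == x
```
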
<hr>
<p>I do not know how hard this will be to break if only $p$ is known. Without the bit-operations, a linear equation would break it, but with those bit-operations it will be very chaotic. Does anyone know? </p>
<p>Otherwise, you could still make something like a <a href="http://en.wikipedia.org/wiki/Feistel_cipher" rel="nofollow">Feistel cipher</a>, but I do not know whether it would be more secure or how to measure that.</p>
|
148,374 | <p>I have checked all Mathematica color schemes, and I think "Hue" is the most vibrant, beautiful one. However, it has one issue: both ends of the spectrum are red (though different reds). I would like a spectrum from, say, red to blue. Is it possible to manipulate Hue and remove the pink and the second red? </p>
<p>Consider the following:</p>
<pre><code>DensityPlot[Sin[x y], {x, 0.1, 1}, {y, 0.1, 1}, PlotLegends -> Automatic, Frame -> True, ColorFunction -> Hue]
</code></pre>
<p>The output is
<a href="https://i.stack.imgur.com/NKaue.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NKaue.jpg" alt="enter image description here"></a></p>
<p>As you see, the two extremes are red. </p>
| Alexey Popkov | 280 | <p>Your question looks almost a duplicate of this one:</p>
<ul>
<li><a href="https://mathematica.stackexchange.com/q/101268/280">How to customize color scheme to mimic that in Origin?</a></li>
</ul>
<p>Using the formulation from the <a href="https://mathematica.stackexchange.com/a/101933/280">answer</a> by <a href="https://mathematica.stackexchange.com/users/50/j-m">J.M.</a> we get:</p>
<pre><code>DensityPlot[Sin[x y], {x, 0.1, 1}, {y, 0.1, 1}, PlotLegends -> Automatic, Frame -> True,
ColorFunction -> (Hue[2 (1 - #)/3] &)]
</code></pre>
<blockquote>
<p><a href="https://i.stack.imgur.com/C7n0v.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/C7n0v.png" alt="plot"></a></p>
</blockquote>
|
259,083 | <p>There was a question asked: <a href="https://math.stackexchange.com/q/136204/8348">An open subset $U\subseteq R^n$ is the countable union of increasing compact sets.</a> There Davide gave an <a href="https://math.stackexchange.com/a/136209/">answer</a>. Can anyone tell me how the
equality holds, and the motivation behind this construction?</p>
| Noix07 | 92,038 | <p>(answer for the "motivation" part of the question)</p>
<p>I stumbled on two applications of that property, although I'm not well-versed enough to elaborate, so I'll just cite "Topological spaces, distributions and kernels", François Treves (Lemma 10.1, p. 87):</p>
<ol>
<li>The property/lemma is used to prove that <span class="math-container">$C^k(U)$</span>, with a certain topology defined on p. 86, is metrizable.</li>
<li>In Example II: Spaces of test functions, pp. 131-133, it is used to prove that <span class="math-container">$C^k(U)$</span> (and other spaces) are inductive limits of the <span class="math-container">$C^k(X_k)$</span> (<span class="math-container">$X_k$</span> such that <span class="math-container">$\bigcup X_k= U$</span>).</li>
</ol>
<p>Edit: now that I see this question several years later, I would say that the motivation is tautologically to "approximate" the open subset <span class="math-container">$U$</span> with countably many compacts. As a consequence,</p>
<ol start="3">
<li>In integration theory this can be used to show density of <span class="math-container">$\mathcal{C}(\mathbb{R}^n)$</span> in <span class="math-container">$L^1(\mathbb{R}^n)$</span> in the <span class="math-container">$L^1$</span> norm. I don't know the exact reason why, but there is a notion of inner and outer regularity of the Lebesgue measure, and one of the two involves compact subsets. One of the possible proofs uses the fact that characteristic functions of compact subsets can be approximated by a sequence of continuous functions; one then uses these characteristic functions to approximate more general functions, ultimately approximating any <span class="math-container">$f\in L^1$</span>.</li>
<li>And for topological manifolds (ultimately this will also be related to integration, more precisely to the existence of partitions of unity), there is a notion of paracompactness which is surprisingly related to the notion of <span class="math-container">$\sigma$</span>-compactness (it has never been clear to me whether this is equivalent to being countable at infinity, but...), cf. Topology and Geometry, Glen E. Bredon, Thm 12.11, p. 38, or Topologie Générale, N. Bourbaki, Théorème 5 ("p. 82 of the pdf file" or p. I.70).</li>
</ol>
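<p>As for the "how the equality holds" part of the question: one standard construction (possibly not the exact one in the linked answer) takes, for an open <span class="math-container">$U\subseteq\mathbb{R}^n$</span>,
<span class="math-container">$$K_m=\left\{x\in U : \|x\|\le m \ \text{ and } \ \operatorname{dist}\left(x,\mathbb{R}^n\setminus U\right)\ge \tfrac{1}{m}\right\}.$$</span>
Each <span class="math-container">$K_m$</span> is closed and bounded, hence compact; <span class="math-container">$K_m\subseteq K_{m+1}$</span>; and <span class="math-container">$\bigcup_m K_m=U$</span>, since every <span class="math-container">$x\in U$</span> has finite norm and positive distance to the closed complement.</p>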
|
2,137,332 | <p>On the MathWorld page: </p>
<p><a href="http://mathworld.wolfram.com/FermatPseudoprime.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/FermatPseudoprime.html</a></p>
<p>in the first table, I expect to see $561$ on every line, but it is not on the line for base $3$.</p>
<p>When you click on the link to the OEIS page, it also is missing from the list. Since $561$ is a Carmichael number, I expected it to be there. Is this a typo (and if so, how do I report it)? If not, what am I missing? Certainly $3^{561} \equiv 3 \pmod{561}$; is there a different definition of "Fermat pseudoprime" that leaves $561$ out?</p>
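<p>For what it's worth, here is a quick numerical check of both conditions (a Python sketch; I am only verifying the congruences, not claiming which definition MathWorld or the OEIS actually uses):</p>

```python
# 561 = 3 * 11 * 17 is a Carmichael number, so a^561 ≡ a (mod 561) for every a:
assert 561 == 3 * 11 * 17
assert all(pow(a, 561, 561) == a for a in range(561))

# In particular 3^561 ≡ 3 (mod 561), as stated above:
assert pow(3, 561, 561) == 3

# But the condition a^(n-1) ≡ 1 (mod n) fails for a = 3, since gcd(3, 561) = 3 > 1:
assert pow(3, 560, 561) != 1
```
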
| mjw | 655,367 | <p>The triangle consists of three segments:
<span class="math-container">$\{1 \leftrightarrow i,\ i \leftrightarrow -1,\ -1 \leftrightarrow 1\}$</span>, call them <span class="math-container">$\{a,b,c\}$</span>. Segment <span class="math-container">$a$</span> maps to a parabola in the upper half plane with <span class="math-container">$1\mapsto 1$</span>, <span class="math-container">$(1+i)/2\mapsto i/2$</span> (the vertex of the parabola), <span class="math-container">$i\mapsto -1$</span>. Segment <span class="math-container">$b$</span> maps to a parabola in the lower half plane. The vertex is <span class="math-container">$(-1+i)/2\mapsto -i/2$</span>. Segment <span class="math-container">$c$</span> maps to the positive real axis between <span class="math-container">$0$</span> and <span class="math-container">$1$</span>. The interior of the triangle in the <span class="math-container">$z$</span>-plane maps to the interior of the transformed region in the <span class="math-container">$w$</span>-plane (minus the segment from 0 to 1, the image of <span class="math-container">$c$</span>). </p>
<p>Here is the triangle and its interior in the <span class="math-container">$z$</span>-plane:</p>
<p><a href="https://i.stack.imgur.com/5PoSE.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5PoSE.jpg" alt="enter image description here"></a></p>
<p>Here is the image of the triangle and its interior in the <span class="math-container">$w$</span>-plane:</p>
<p><a href="https://i.stack.imgur.com/Ogfv6.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Ogfv6.jpg" alt="enter image description here"></a></p>
<p>It is relatively straightforward to calculate the equations of the parabolas. One of the other answers has a nice start.</p>
<p>For example: <span class="math-container">$w=f(z) = f(x+iy) = x^2-y^2 +2 x y i$</span>. Substitute <span class="math-container">$y=1-x$</span> (segment a) and we obtain <span class="math-container">$\xi=\Re\{w\}$</span> and <span class="math-container">$\eta=\Im\{w\}$</span>. We can then find the relation between <span class="math-container">$\xi$</span> and <span class="math-container">$\eta$</span> (the equation of the parabola in the <span class="math-container">$\xi,\eta$</span>-plane). Similarly for the image of segment <span class="math-container">$b$</span>. Each half of segment <span class="math-container">$c$</span> (from -1 to 0 and from 0 to 1) trivially maps into the segment between 0 and 1 in the <span class="math-container">$w$</span>-plane.</p>
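<p>Carrying that computation through for segment <span class="math-container">$a$</span> as a quick check: with <span class="math-container">$y=1-x$</span>,
<span class="math-container">$$\xi = x^2-(1-x)^2 = 2x-1, \qquad \eta = 2x(1-x),$$</span>
so substituting <span class="math-container">$x=(\xi+1)/2$</span> gives
<span class="math-container">$$\eta = \frac{1-\xi^2}{2},$$</span>
a downward-opening parabola with vertex at <span class="math-container">$(0,\tfrac{1}{2})$</span>, consistent with <span class="math-container">$(1+i)/2\mapsto i/2$</span>.</p>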
<p>One interesting point to notice: A mapping such as this (by a holomorphic function) is conformal. Angles are preserved. Notice that the angle of each of the vertices of the triangle maps into an equal angle in the <span class="math-container">$w$</span>-plane.</p>
|
4,092,877 | <p>I'm trying to find the solution of the following differential equation; however, I'm not sure how to derive the answer, so I would really appreciate some support!</p>
<p><span class="math-container">$y'' - y' = x^2$</span></p>
<p>I have tried a particular solution in the form of a quadratic polynomial: $Ax^2 + Bx + C$</p>
<p>Then taking its derivative:</p>
<p><span class="math-container">$$y'' - y' = x^2 \implies 2Ax + 2A + B =x^2 $$</span></p>
<p>This is the case when <span class="math-container">$A = \frac{1}{2}x$</span> and <span class="math-container">$B = -x$</span></p>
<p><span class="math-container">$y_1(x) = x^2 + \frac{1}{2}x-x$</span></p>
<p>Though this is not the solution, because when I place this back into the equation I do not get the right answer.</p>
<p>I thought the solution would be: <span class="math-container">$y=c_1\cos(x) + c_2\sin(x) + x^2+\frac{1}{2}x-x$</span></p>
<p>My expectation is: <span class="math-container">$y=c_1e^x+c_2-2x-x^2-\frac{1}{3}x^3$</span></p>
| David | 911,796 | <p>(When an answer can be checked, there is no danger in using a formalism.) <span class="math-container">$D^2-D=D(D-1),$</span> where <span class="math-container">$D$</span> is <span class="math-container">$d\over dx$</span>, so that
<span class="math-container">$$y_H=c_1+c_2e^x.$$</span>
Next,<br />
<span class="math-container">$$y_P=(D^2-D)^{-1}x^2={-1\over 1-D}x^2-{1\over D}x^2$$</span>
<span class="math-container">$$=-(1+D+D^2+D^3+\cdots)x^2-{1\over 3}x^3$$</span> <span class="math-container">$$=-(x^2+2x+2)-{1\over 3}x^3$$</span>
Therefore,
<span class="math-container">$$y=c_1+c_2e^x-{1\over 3}x^3-x^2-2x.$$</span></p>
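<p>As a quick sanity check of the particular solution, a Python sketch using exact rational arithmetic (the coefficient-list representation, ordered by ascending power of <span class="math-container">$x$</span>, is mine):</p>

```python
from fractions import Fraction

def deriv(coeffs):
    """Differentiate a polynomial given as [c0, c1, c2, ...], where c_k multiplies x^k."""
    return [Fraction(k) * coeffs[k] for k in range(1, len(coeffs))]

# Particular solution y_P = -2x - x^2 - x^3/3, lowest degree first:
yp = [Fraction(0), Fraction(-2), Fraction(-1), Fraction(-1, 3)]

yp1 = deriv(yp)    # y'  = -2 - 2x - x^2
yp2 = deriv(yp1)   # y'' = -2 - 2x

# y'' - y' should equal x^2, i.e. have coefficients [0, 0, 1]:
n = max(len(yp1), len(yp2))
pad = lambda p: p + [Fraction(0)] * (n - len(p))
diff = [a - b for a, b in zip(pad(yp2), pad(yp1))]
assert diff == [0, 0, 1]
```

<p>The homogeneous part <span class="math-container">$c_1+c_2e^x$</span> is annihilated by <span class="math-container">$D^2-D$</span>, so it does not affect the check.</p>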
|
23,911 | <p>I am teaching a course on Riemann Surfaces next term, and would <strong>like a list of facts illustrating the difference between the theory of real (differentiable) manifolds and the theory of non-singular varieties</strong> (over, say, $\mathbb{C}$). I am looking for examples that would be meaningful to 2nd year US graduate students who have taken 1 year of topology and 1 semester of complex analysis.</p>
<p>Here are some examples that I thought of:</p>
<p><strong>1.</strong> Every $n$-dimensional real manifold embeds in $\mathbb{R}^{2n}$. By contrast, a projective variety does not embed in $\mathbb{A}^n$ for any $n$. Every $n$-dimensional non-singular, projective variety embeds in $\mathbb{P}^{2n+1}$, but there are non-singular, proper varieties that do not embed in any projective space.</p>
<p><strong>2.</strong> Suppose that $X$ is a real manifold and $f$ is a smooth function on an open subset $U$. Given $V \subset U$ compactly contained in $U$, there exists a global function $\tilde{g}$ that
agrees with $f$ on $V$ and is identically zero outside of $U$.</p>
<p>By contrast, consider the same set-up when $X$ is a non-singular variety and $f$ is a regular function. It may be impossible to find a global regular function $g$ that agrees with $f$ on $V$. When $g$ exists, it is unique and (when $f$ is non-zero) is not identically zero outside of $U$.</p>
<p><strong>3.</strong> If $X$ is a real manifold and $p \in X$ is a point, then the ring of germs at $p$ is non-noetherian. The local ring of a variety at a point is always noetherian. </p>
<p><em><strong>What are some more examples?</strong></em></p>
<p>Answers illustrating the difference between real manifolds and complex manifolds are also welcome.</p>
| Felipe Voloch | 2,290 | <p>A connected real manifold can be disconnected by the removal of a submanifold but the complement of a subvariety on an irreducible variety is still connected.</p>
|
683,513 | <p>There is much discussion both in the education community and the mathematics community concerning the challenge of (epsilon, delta) type definitions in real analysis and the student reception of it. My impression has been that the mathematical community often holds an upbeat opinion on the success of student reception of this, whereas the education community often stresses difficulties and their "baffling" and "inhibitive" effect (see below). A typical educational perspective on this was recently expressed by Paul Dawkins in the following terms: </p>
<p><em>2.3. Student difficulties with real analysis definitions. The concepts of limit and continuity have posed well-documented difficulties for students both at the calculus and analysis level of instructions (e.g. Cornu, 1991; Cottrill et al., 1996; Ferrini-Mundy & Graham, 1994; Tall & Vinner, 1981; Williams, 1991). Researchers identified difficulties stemming from a number of issues: the language of limits (Cornu, 1991; Williams, 1991), multiple quantification in the formal definition (Dubinsky, Elderman, & Gong, 1988; Dubinsky & Yiparaki, 2000; Swinyard & Lockwood, 2007), implicit dependencies among quantities in the definition (Roh & Lee, 2011a, 2011b), and persistent notions pertaining to the existence of infinitesimal quantities (Ely, 2010). Limits and continuity are often couched as formalizations of approaching and connectedness respectively. However, the standard, formal definitions display much more subtlety and complexity. That complexity often baffles students who cannot perceive the necessity for so many moving parts. Thus learning the concepts and formal definitions in real analysis are fraught both with need to acquire proficiency with conceptual tools such as quantification and to help students perceive conceptual necessity for these tools. This means students often cannot coordinate their concept image with the concept definition, inhibiting their acculturation to advanced mathematical practice, which emphasizes concept definitions.</em> </p>
<p>See <a href="http://dx.doi.org/10.1016/j.jmathb.2013.10.002" rel="nofollow noreferrer">http://dx.doi.org/10.1016/j.jmathb.2013.10.002</a> for the entire article (note that the online article provides links to the papers cited above).</p>
<p>To summarize, in the field of education, researchers decidedly have <em>not</em> come to the conclusion that epsilon, delta definitions are either "simple", "clear", or "common sense". Meanwhile, mathematicians often express contrary sentiments. Two examples are given below. </p>
<p><em>...one cannot teach the concept of limit without using the epsilon-delta definition. Teaching such ideas intuitively does not make it easier for the student it makes it harder to understand. Bertrand Russell has called the rigorous definition of limit and convergence the greatest achievement of the human intellect in 2000 years! The Greeks were puzzled by paradoxes involving motion; now they all become clear, because we have complete understanding of limits and convergence. Without the proper definition, things are difficult. With the definition, they are simple and clear.</em> (see Kleinfeld, Margaret; Calculus: Reformed or Deformed? Amer. Math. Monthly 103 (1996), no. 3, 230-232.) </p>
<p><em>I always tell my calculus students that mathematics is not esoteric: It is common sense. (Even the notorious epsilon, delta definition of limit is common sense, and moreover is central to the important practical problems of approximation and estimation.)</em> (see Bishop, Errett; Book Review: Elementary calculus. Bull. Amer. Math. Soc. 83 (1977), no. 2, 205--208.)</p>
<p>When one compares the upbeat assessment common in the mathematics community and the somber assessments common in the education community, sometimes one wonders whether they are talking about the same thing. How does one bridge the gap between the two assessments? Are they perhaps dealing with distinct student populations? Are there perhaps education studies providing more upbeat assessments than Dawkins' article would suggest? </p>
<p>Note 1. See also <a href="https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions">https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions</a></p>
<p>Note 2. Two approaches have been proposed to account for this difference of perception between the education community and the math community: (a) sample bias: mathematicians tend to base their appraisal of the effectiveness of these definitions in terms of the most active students in their classes, which are often the best students; (b) student/professor gap: mathematicians base their appraisal on their own scientific appreciation of these definitions as the "right" ones, arrived at after a considerable investment of time and removed from the original experience of actually learning those definitions. Both of these sound plausible, but it would be instructive to have field research in support of these approaches.</p>
<p>We recently published <a href="http://dx.doi.org/10.5642/jhummath.201701.07" rel="nofollow noreferrer">an article</a> reporting the result of student polling concerning the comparative educational merits of epsilon-delta definitions and infinitesimal definitions of key concepts like continuity and convergence, with students favoring the infinitesimal definitions by large margins.</p>
| user4894 | 118,194 | <p>My feeling is that the biggest problem with the epsilon-delta definition is that this is the first time students have ever seen the universal and existential quantifiers. By the time you say, "For every epsilon there exists a delta," you have already lost 95% of your audience before you even get to the business end of the proposition.</p>
<p>And of course the other problem is with the lower-case Greek letters. Students have been seeing x, y, z, and t all their lives; and out of nowhere you show them epsilon and delta.</p>
<p>In other words it's the basic form of the definition that's intimidating and confusing to students; not so much the actual idea, which is simply that you can <em>arbitrarily constrain the output by suitably constraining the input</em>. </p>
<p>Perhaps if instructors started with the conceptual understanding and then spent time explaining "for all" and "there exists" and giving them a gentle introduction to Greek letters used as variables, things would get better.</p>
|
683,513 | <p>There is much discussion both in the education community and the mathematics community concerning the challenge of (epsilon, delta) type definitions in real analysis and the student reception of it. My impression has been that the mathematical community often holds an upbeat opinion on the success of student reception of this, whereas the education community often stresses difficulties and their "baffling" and "inhibitive" effect (see below). A typical educational perspective on this was recently expressed by Paul Dawkins in the following terms: </p>
<p><em>2.3. Student difficulties with real analysis definitions. The concepts of limit and continuity have posed well-documented difficulties for students both at the calculus and analysis level of instructions (e.g. Cornu, 1991; Cottrill et al., 1996; Ferrini-Mundy & Graham, 1994; Tall & Vinner, 1981; Williams, 1991). Researchers identified difficulties stemming from a number of issues: the language of limits (Cornu, 1991; Williams, 1991), multiple quantification in the formal definition (Dubinsky, Elderman, & Gong, 1988; Dubinsky & Yiparaki, 2000; Swinyard & Lockwood, 2007), implicit dependencies among quantities in the definition (Roh & Lee, 2011a, 2011b), and persistent notions pertaining to the existence of infinitesimal quantities (Ely, 2010). Limits and continuity are often couched as formalizations of approaching and connectedness respectively. However, the standard, formal definitions display much more subtlety and complexity. That complexity often baffles students who cannot perceive the necessity for so many moving parts. Thus learning the concepts and formal definitions in real analysis are fraught both with need to acquire proficiency with conceptual tools such as quantification and to help students perceive conceptual necessity for these tools. This means students often cannot coordinate their concept image with the concept definition, inhibiting their acculturation to advanced mathematical practice, which emphasizes concept definitions.</em> </p>
<p>See <a href="http://dx.doi.org/10.1016/j.jmathb.2013.10.002" rel="nofollow noreferrer">http://dx.doi.org/10.1016/j.jmathb.2013.10.002</a> for the entire article (note that the online article provides links to the papers cited above).</p>
<p>To summarize, in the field of education, researchers decidedly have <em>not</em> come to the conclusion that epsilon, delta definitions are either "simple", "clear", or "common sense". Meanwhile, mathematicians often express contrary sentiments. Two examples are given below. </p>
<p><em>...one cannot teach the concept of limit without using the epsilon-delta definition. Teaching such ideas intuitively does not make it easier for the student it makes it harder to understand. Bertrand Russell has called the rigorous definition of limit and convergence the greatest achievement of the human intellect in 2000 years! The Greeks were puzzled by paradoxes involving motion; now they all become clear, because we have complete understanding of limits and convergence. Without the proper definition, things are difficult. With the definition, they are simple and clear.</em> (see Kleinfeld, Margaret; Calculus: Reformed or Deformed? Amer. Math. Monthly 103 (1996), no. 3, 230-232.) </p>
<p><em>I always tell my calculus students that mathematics is not esoteric: It is common sense. (Even the notorious epsilon, delta definition of limit is common sense, and moreover is central to the important practical problems of approximation and estimation.)</em> (see Bishop, Errett; Book Review: Elementary calculus. Bull. Amer. Math. Soc. 83 (1977), no. 2, 205--208.)</p>
<p>When one compares the upbeat assessment common in the mathematics community and the somber assessments common in the education community, sometimes one wonders whether they are talking about the same thing. How does one bridge the gap between the two assessments? Are they perhaps dealing with distinct student populations? Are there perhaps education studies providing more upbeat assessments than Dawkins' article would suggest? </p>
<p>Note 1. See also <a href="https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions">https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions</a></p>
<p>Note 2. Two approaches have been proposed to account for this difference of perception between the education community and the math community: (a) sample bias: mathematicians tend to base their appraisal of the effectiveness of these definitions in terms of the most active students in their classes, which are often the best students; (b) student/professor gap: mathematicians base their appraisal on their own scientific appreciation of these definitions as the "right" ones, arrived at after a considerable investment of time and removed from the original experience of actually learning those definitions. Both of these sound plausible, but it would be instructive to have field research in support of these approaches.</p>
<p>We recently published <a href="http://dx.doi.org/10.5642/jhummath.201701.07" rel="nofollow noreferrer">an article</a> reporting the result of student polling concerning the comparative educational merits of epsilon-delta definitions and infinitesimal definitions of key concepts like continuity and convergence, with students favoring the infinitesimal definitions by large margins.</p>
| String | 94,971 | <p>In his answer, <em>Paramanand Singh</em> suggests that freshman students are unfamiliar with certain <em>concepts</em> and <em>methods</em> that are prerequisites for understanding $\varepsilon-\delta$. On the other hand, Singh suggests that once these <em>concepts</em> and <em>methods</em> have been successfully placed in someone's mind, they become part of that person's intuition on the subject. Here <em>intuition</em> is a word I substituted for Singh's use of the word <em>natural</em>. I hope this is a fair account!</p>
<p>This suggestion, perhaps, fits very well with the perspective suggested in the article <em>"On The Dual Nature of Mathematical Conceptions"</em> by Anna Sfard (published in Educational Studies in Mathematics 22, 1-36, 1991). She argues that the <strong>process</strong> of doing algorithmic operations leads through stages of gradually maturing perceptions, ultimately identifying new <strong>objects</strong>. Maybe freshmen regard $\varepsilon,\delta$ as a heavy algorithmic <strong>process</strong>, whereas the matured view is to see it as a <strong>whole concept</strong>, an <strong>object</strong>.</p>
<p>In her article, A. Sfard is also referring to <em>Miller, G. A.: 1956, "The magic number seven plus minus two"</em>, suggesting that one can only juggle about seven chunks of information in "working memory" at a time. So for the trained $\varepsilon,\delta$-scholar the <strong>concept</strong> of $\varepsilon,\delta$ is just one <strong>object</strong>, one chunk of information, whereas for the untrained person each symbol, each quantifier, occupies space in the "working memory", thus rendering the understanding nearly impossible at that stage?</p>
|
683,513 | <p>There is much discussion both in the education community and the mathematics community concerning the challenge of (epsilon, delta) type definitions in real analysis and the student reception of it. My impression has been that the mathematical community often holds an upbeat opinion on the success of student reception of this, whereas the education community often stresses difficulties and their "baffling" and "inhibitive" effect (see below). A typical educational perspective on this was recently expressed by Paul Dawkins in the following terms: </p>
<p><em>2.3. Student difficulties with real analysis definitions. The concepts of limit and continuity have posed well-documented difficulties for students both at the calculus and analysis level of instructions (e.g. Cornu, 1991; Cottrill et al., 1996; Ferrini-Mundy & Graham, 1994; Tall & Vinner, 1981; Williams, 1991). Researchers identified difficulties stemming from a number of issues: the language of limits (Cornu, 1991; Williams, 1991), multiple quantification in the formal definition (Dubinsky, Elderman, & Gong, 1988; Dubinsky & Yiparaki, 2000; Swinyard & Lockwood, 2007), implicit dependencies among quantities in the definition (Roh & Lee, 2011a, 2011b), and persistent notions pertaining to the existence of infinitesimal quantities (Ely, 2010). Limits and continuity are often couched as formalizations of approaching and connectedness respectively. However, the standard, formal definitions display much more subtlety and complexity. That complexity often baffles students who cannot perceive the necessity for so many moving parts. Thus learning the concepts and formal definitions in real analysis are fraught both with need to acquire proficiency with conceptual tools such as quantification and to help students perceive conceptual necessity for these tools. This means students often cannot coordinate their concept image with the concept definition, inhibiting their acculturation to advanced mathematical practice, which emphasizes concept definitions.</em> </p>
<p>See <a href="http://dx.doi.org/10.1016/j.jmathb.2013.10.002" rel="nofollow noreferrer">http://dx.doi.org/10.1016/j.jmathb.2013.10.002</a> for the entire article (note that the online article provides links to the papers cited above).</p>
<p>To summarize, in the field of education, researchers decidedly have <em>not</em> come to the conclusion that epsilon, delta definitions are either "simple", "clear", or "common sense". Meanwhile, mathematicians often express contrary sentiments. Two examples are given below. </p>
<p><em>...one cannot teach the concept of limit without using the epsilon-delta definition. Teaching such ideas intuitively does not make it easier for the student it makes it harder to understand. Bertrand Russell has called the rigorous definition of limit and convergence the greatest achievement of the human intellect in 2000 years! The Greeks were puzzled by paradoxes involving motion; now they all become clear, because we have complete understanding of limits and convergence. Without the proper definition, things are difficult. With the definition, they are simple and clear.</em> (see Kleinfeld, Margaret; Calculus: Reformed or Deformed? Amer. Math. Monthly 103 (1996), no. 3, 230-232.) </p>
<p><em>I always tell my calculus students that mathematics is not esoteric: It is common sense. (Even the notorious epsilon, delta definition of limit is common sense, and moreover is central to the important practical problems of approximation and estimation.)</em> (see Bishop, Errett; Book Review: Elementary calculus. Bull. Amer. Math. Soc. 83 (1977), no. 2, 205--208.)</p>
<p>When one compares the upbeat assessment common in the mathematics community and the somber assessments common in the education community, sometimes one wonders whether they are talking about the same thing. How does one bridge the gap between the two assessments? Are they perhaps dealing with distinct student populations? Are there perhaps education studies providing more upbeat assessments than Dawkins' article would suggest? </p>
<p>Note 1. See also <a href="https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions">https://mathoverflow.net/questions/158145/assessing-effectiveness-of-epsilon-delta-definitions</a></p>
<p>Note 2. Two approaches have been proposed to account for this difference of perception between the education community and the math community: (a) sample bias: mathematicians tend to base their appraisal of the effectiveness of these definitions in terms of the most active students in their classes, which are often the best students; (b) student/professor gap: mathematicians base their appraisal on their own scientific appreciation of these definitions as the "right" ones, arrived at after a considerable investment of time and removed from the original experience of actually learning those definitions. Both of these sound plausible, but it would be instructive to have field research in support of these approaches.</p>
<p>We recently published <a href="http://dx.doi.org/10.5642/jhummath.201701.07" rel="nofollow noreferrer">an article</a> reporting the result of student polling concerning the comparative educational merits of epsilon-delta definitions and infinitesimal definitions of key concepts like continuity and convergence, with students favoring the infinitesimal definitions by large margins.</p>
| Ben Blum-Smith | 13,120 | <p><a href="https://matheducators.stackexchange.com/a/16857/140">[Crossposted</a> from matheducators.SE]</p>
<p>The apparent conflict between points of view expressed in the OP is illusory. There is no real conflict. The mathematics education researcher quoted in the OP is arguing that students find the definition difficult to appreciate and master. The mathematicians quoted in the OP are arguing that, <em>once mastered</em>, it provides clarity.</p>
<p>I have questions about the representativeness of the quotes of Kleinfeld and of Bishop among mathematicians: see the postscript, for one contrasting data point; also, both pieces were written in explicit reaction against directions in college-level math education that were being championed by other mathematicians. But putting that aside, not even these quotes themselves are asserting that students <em>find the epsilon-delta proof easy</em>. More specifically:</p>
<p>Kleinfeld is asserting not that the definition is easy to master, but that once mastered, it provides clarity that is otherwise unavailable. "Without the proper definition, things are difficult. With the definition, they are simple and clear." This sentence asserts that the definition, once it has been fully understood, clarifies and illuminates the matters that the notion of "limit" is intended to deal with. This does not imply that the definition itself is easy. Indeed, its difficulty is implicit in her celebration of it, quoting Russell, as "the greatest achievement of the human intellect in 2000 years."</p>
<p>The quote from Bishop acknowledges the definition as "notorious". More importantly, it is taken out of context. Here it is with the 6 prior and 5 following words as well:</p>
<blockquote>
<p>Although it seems to be futile, I always tell my calculus students that mathematics is not esoteric: It is common sense. (Even the notorious epsilon, delta definition of limit is common sense, and moreover is central to the important practical problems of approximation and estimation.) They do not believe me. </p>
</blockquote>
<p>(<a href="https://www.ams.org/journals/bull/1977-83-02/S0002-9904-1977-14264-X/S0002-9904-1977-14264-X.pdf" rel="nofollow noreferrer">link</a>)</p>
<p>There is no assertion that students are readily assimilating to Bishop's point of view.</p>
<p><strong>Postscript:</strong> <a href="https://flm-journal.org/Articles/7FC3DCB00614F668235473F38508F6.pdf" rel="nofollow noreferrer">Here</a> is David Tall, writing in 1981:</p>
<blockquote>
<p>During the last century the epsilon-delta method has led to fruitful advances in analysis and a clarification of the meaning of certain concepts for the professional mathematician, but it is too complex and intractable for the beginning student in calculus.</p>
</blockquote>
<p>In 1981, Tall was making a transition into mathematics education research. Still, he was by then established as a mathematician. I think the quote nicely illustrates how one can simultaneously appreciate the definition's clarificatory power and the difficulty it presents to students.</p>
<p><strong>Post-postscript:</strong> Tall's quote includes something that goes beyond an assessment either of the definition's ease of mastery or its clarificatory value -- namely, an assessment regarding its appropriate place in the curriculum. Kleinfeld also makes such an assessment -- presumably a contradictory one, although she doesn't specify whether she's talking about the "beginning student" of Tall's quote. Dawkins makes no such assessment (see <a href="https://www.sciencedirect.com/science/article/pii/S0732312313000813" rel="nofollow noreferrer">here</a>) nor, as far as I can tell, does any of the research he cites on the difficulty of delta-epsilon proofs (see <a href="http://u.cs.biu.ac.il/~katzmik/sgtdirectory/ely10.pdf" rel="nofollow noreferrer">here</a>, <a href="https://reader.elsevier.com/reader/sd/pii/S0732312396900152?token=A277E5207A7C01B95BCD202F1922004C72675D658F720257E2313BCC703E1752F950ED966AB194497AEFB3EBED42D16A" rel="nofollow noreferrer">here</a>, <a href="https://staff-old.najah.edu/sites/default/files/Concept%20Image%20and%20Concept%20Definition%20in%20mathematics.pdf" rel="nofollow noreferrer">here</a>, or <a href="https://www.jstor.org/stable/749075?seq=1&cid=pdf-reference#page_scan_tab_contents" rel="nofollow noreferrer">here</a>).</p>
|
1,424,273 | <p>Let $(a_n)$ be a convergent sequence of positive real numbers. Why is the limit nonnegative?</p>
<p>My try: For all $\epsilon >0$ there is a $N\in \mathbb{N}$ such that $|a_n-L|<\epsilon$ for all $n\ge N$. And we know $0< a_n$ for all $n\in \mathbb{N}$, particularly $0<a_n$ for all $n\ge N$. Maybe by contradiction: suppose that $L<0$, then $L<0<a_n$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. Then $0<-L<a_n-L$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. It follows: for all $\epsilon >0$, there is a $N\in \mathbb{N}$ such that $0<|-L|=-L<|a_n-L|<\epsilon$ for all $n\ge N$, which can't be true.</p>
<p>Is my proof ok?</p>
| Luis Mendo | 91,216 | <p>If the limit were negative, say $\ell<0$, there would be at least one term of the sequence (in fact, infinitely many terms) smaller than $\ell/2$, and such a term would be negative, which is impossible.</p>
|
1,424,273 | <p>Let $(a_n)$ be a convergent sequence of positive real numbers. Why is the limit nonnegative?</p>
<p>My try: For all $\epsilon >0$ there is a $N\in \mathbb{N}$ such that $|a_n-L|<\epsilon$ for all $n\ge N$. And we know $0< a_n$ for all $n\in \mathbb{N}$, particularly $0<a_n$ for all $n\ge N$. Maybe by contradiction: suppose that $L<0$, then $L<0<a_n$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. Then $0<-L<a_n-L$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. It follows: for all $\epsilon >0$, there is a $N\in \mathbb{N}$ such that $0<|-L|=-L<|a_n-L|<\epsilon$ for all $n\ge N$, which can't be true.</p>
<p>Is my proof ok?</p>
| Surb | 154,545 | <p>Your proof is a bit confused at the end. But it seems that you would conclude $0<|L|<\epsilon$ for every $\epsilon>0$ and you can a get a contradiction by choosing $\epsilon = |L|/2$.</p>
<p>I nevertheless propose the following formulation:</p>
<p>Suppose by contradiction that $a_n\geq 0$ for every $n$, $\lim\limits_{n\to\infty} a_n=L$ and $L<0$.<br>
Let $\epsilon = |L|/2>0$, by definition of the limit, there exists $N$ such that $|a_n-L|<\epsilon= |L|/2$ for every $n\geq N$. In particular, this implies that
$$a_N-L<|L|/2=-L/2 \implies a_N<L-L/2=L/2 <0.$$
A contradiction to $a_n\geq 0$ for every $n$.</p>
|
1,424,273 | <p>Let $(a_n)$ be a convergent sequence of positive real numbers. Why is the limit nonnegative?</p>
<p>My try: For all $\epsilon >0$ there is a $N\in \mathbb{N}$ such that $|a_n-L|<\epsilon$ for all $n\ge N$. And we know $0< a_n$ for all $n\in \mathbb{N}$, particularly $0<a_n$ for all $n\ge N$. Maybe by contradiction: suppose that $L<0$, then $L<0<a_n$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. Then $0<-L<a_n-L$ for all $n\in \mathbb{N}$, particularly for all $n\ge N$. It follows: for all $\epsilon >0$, there is a $N\in \mathbb{N}$ such that $0<|-L|=-L<|a_n-L|<\epsilon$ for all $n\ge N$, which can't be true.</p>
<p>Is my proof ok?</p>
| robb | 346,801 | <p>Here is a proof that I believe is more succinct than all of the above. </p>
<p>Suppose $\lim a_n = a <0$ . </p>
<p>Let $\epsilon=-a>0$ .</p>
<p>By hypothesis, there exists an $n$ large enough such that $\left|a_n-a\right|<\epsilon =-a$ </p>
<p>$\Rightarrow a_n-a<-a $</p>
<p>$\Rightarrow a_n <0$ </p>
<p>Contradiction. </p>
|
256,322 | <p>Let $A$ be an abelian group of order $n = p_1^{\alpha_1} \cdot \ldots \cdot p_k^{\alpha_k}$ (i.e., $n$'s unique prime factorization). The Primary Decomposition Theorem states that $A \cong \mathbb{Z}_{p_1^{\alpha_1}} \times \ldots \times \mathbb{Z}_{p_k^{\alpha_k}}$. On the other hand, the Fundamental Theorem of Finitely Generated Abelian Groups states that $A \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{n_j}$ for some $\{n_j\}$ s.t. $n = n_1 \cdot \ldots \cdot n_j$ and $n_{i+1}\,|\,n_i$ for all $1 \le i \le j-1$. Now I'm confused because it initially seems to me that both of these statements cannot be true at once. </p>
<p>For example, suppose that the order of $A$ gives rise to at least <em>two</em> unique isomorphism types given by the Fundamental Theorem of Finitely Generated Abelian Groups. That is, suppose that $|A_1| = |A_2| = n = p_1^{\alpha_1} \cdot \ldots \cdot p_k^{\alpha_k}$ whereby $A_1 \not\cong A_2$ so that by the Fundamental Theorem of Finitely Generated Groups we have</p>
<p>$$
A_1 \cong \mathbb{Z}_{n_1} \times \ldots \times \mathbb{Z}_{n_j}
$$</p>
<p>and</p>
<p>$$
A_2 \cong \mathbb{Z}_{m_1} \times \ldots \times \mathbb{Z}_{m_k}
$$</p>
<p>with $\{n_i\} \ne \{m_k\}$. But we know no matter what that by the Primary Decomposition Theorem we have that $A_1 \cong \mathbb{Z}_{p_1^{\alpha_1}} \times \ldots \times \mathbb{Z}_{p_k^{\alpha_k}} \cong A_2$, a contradiction.</p>
<p>What am I missing?</p>
| Marc van Leeuwen | 18,880 | <p>It is not true that a finite Abelian group is determined up to isomorphism by its order $n$ alone, and that is what your formulation of the Primary Decomposition Theorem would imply. So that is what is wrong; please look up the correct formulation of this theorem which apparently you know about (I don't recall any such theorem, other than the Fundamental Theorem of Finitely Generated Abelian Groups). For instance, a group of order $9$ could be either $\Bbb Z_9$ or $\Bbb Z_3\times\Bbb Z_3$, and these are not isomorphic.</p>
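<p>A short Python sketch makes this concrete by comparing element orders in $\Bbb Z_9$ and $\Bbb Z_3\times\Bbb Z_3$ (a minimal added illustration; here the multiset of element orders already distinguishes the two groups):</p>

```python
from math import gcd

def order_mod(k, n):
    # order of k in the additive cyclic group Z_n: n / gcd(k, n)
    return n // gcd(k, n)

def lcm(a, b):
    return a * b // gcd(a, b)

# element orders in Z_9
orders_z9 = sorted(order_mod(k, 9) for k in range(9))

# element orders in Z_3 x Z_3: the lcm of the component orders
orders_z3z3 = sorted(lcm(order_mod(a, 3), order_mod(b, 3))
                     for a in range(3) for b in range(3))

print(orders_z9)    # [1, 3, 3, 9, 9, 9, 9, 9, 9]
print(orders_z3z3)  # [1, 3, 3, 3, 3, 3, 3, 3, 3]

# Z_9 has elements of order 9 while Z_3 x Z_3 does not,
# so the two groups of order 9 are not isomorphic.
assert max(orders_z9) == 9 and max(orders_z3z3) == 3
```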
|
3,066,446 | <p>Let <span class="math-container">$\overline{X}$</span> be the average of a sample of <span class="math-container">$16$</span> independent normal random variables with mean <span class="math-container">$0$</span> and variance <span class="math-container">$1$</span>. Determine c such that
<span class="math-container">$P(| \overline{X} | < c) = .5$</span></p>
<p>I am having a lot of trouble with this question. I know it is related to chi-square but I don't know how to even start. </p>
| Silent | 94,817 | <p>You should check whether $f_n$ is an odd or an even function. As it turns out, $f_n$ is even for $n$ odd and vice versa. Also, note that for <span class="math-container">$0<x<\pi$</span>, <span class="math-container">$f_n(x)>0$</span> for any <span class="math-container">$n$</span>, and <span class="math-container">$f_n(0)=0$</span>. Combining this with the continuity of <span class="math-container">$f_n$</span>, we see that <span class="math-container">$f_n$</span> has a local minimum at <span class="math-container">$0$</span> for <span class="math-container">$n$</span> odd, and a saddle point at <span class="math-container">$0$</span> for <span class="math-container">$n$</span> even.</p>
<p>Calculating the second derivative will not help, since in either case it is zero at the point <span class="math-container">$0$</span>.</p>
|
171,690 | <p>I am trying to make a projection on the <em>xy-plane</em> of the intersection of the surfaces from the functions: <code>1 + x^2 - y^2</code>, <code>3 Log[1 + x^2]</code>.</p>
<p><a href="https://i.stack.imgur.com/XqC1g.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XqC1g.png" alt="Intersection of surfaces"></a></p>
<p>Thanks.</p>
| MarcoB | 27,951 | <p>Using geometric region functions:</p>
<pre><code>RegionPlot@
ImplicitRegion[
1 + x^2 - y^2 == 3 Log[1 + x^2],
{{x, -1.5, 1.5}, {y, -1.5, 1.5}}
]
</code></pre>
<p><img src="https://i.stack.imgur.com/fRB0C.png" alt="Mathematica graphics"></p>
<p>See also: <a href="https://mathematica.stackexchange.com/questions/5968/plotting-implicitly-defined-space-curves">Plotting implicitly-defined space curves</a> for other interesting approaches.</p>
|
3,543,150 | <p>My question: given two indefinite integrals of a function, how does one express one indefinite integral in terms of the other? </p>
<p><a href="https://i.stack.imgur.com/VkMzJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VkMzJ.png" alt="enter image description here"></a></p>
| mjw | 655,367 | <p><span class="math-container">$$\displaystyle (-1)^{4/3}=\left[e^{(2k-1)\pi i}\right]^{4/3}, \quad k\in \{0,1,2\}$$</span></p>
<p><span class="math-container">$$\displaystyle (-1)^{4/3} \in \left\{ e^{-\frac{4\pi i}{3}}, e^\frac{4\pi i}{3}, e^{4\pi i} \right\}$$</span></p>
<p>We can write this in rectangular coordinates:</p>
<p><span class="math-container">$$\displaystyle (-1)^{4/3} \in \left\{- \frac{1}{2}+\frac{\sqrt{3}}{2}i,\ -\frac{1}{2}-\frac{\sqrt{3}}{2}i,\ 1 \right\}$$</span></p>
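<p>As a quick numerical sanity check (a minimal sketch added here, not part of the derivation above), each of the three values is indeed a cube root of $(-1)^4=1$:</p>

```python
import cmath

# the three candidate values of (-1)^(4/3)
values = [cmath.exp(-4j * cmath.pi / 3),
          cmath.exp(4j * cmath.pi / 3),
          cmath.exp(4j * cmath.pi)]

# each one, cubed, should give (-1)^4 = 1
for z in values:
    assert abs(z**3 - 1) < 1e-12

# rectangular forms: -1/2 + (sqrt(3)/2) i, -1/2 - (sqrt(3)/2) i, and 1
expected = [(-0.5, 3**0.5 / 2), (-0.5, -(3**0.5) / 2), (1.0, 0.0)]
for z, (re, im) in zip(values, expected):
    assert abs(z.real - re) < 1e-12 and abs(z.imag - im) < 1e-12
```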
|
3,246,244 | <p>Consider the action of <span class="math-container">$G$</span> on <span class="math-container">$X$</span>.</p>
<p>Let it be a property of <span class="math-container">$G,X$</span> that <span class="math-container">$\forall x,y,\exists g:g\cdot x=g\cdot y$</span>. This is not quite a transitive action - it describes for example a sequence of inclusions. <strong>What is the name for this type of action?</strong> I can't pair it with an appropriate definition from <a href="https://en.wikipedia.org/wiki/Group_action_(mathematics)#Types_of_actions" rel="nofollow noreferrer">here</a>.</p>
<p>My attempt? There seem to be several things going on here, none of which I can associate with documented group theory at the moment.</p>
<p><span class="math-container">$G$</span> seems to define a "contracting epimorphism"</p>
<p><span class="math-container">$G$</span> seems to define the identity function on the trivial group having the powerset of <span class="math-container">$X$</span> as its element.</p>
| Wuestenfux | 417,848 | <p>Well, notice that <span class="math-container">$g^{-1}\cdot (g\cdot x) = (g^{-1}g)\cdot x = 1\cdot x = x$</span>.</p>
<p>Thus <span class="math-container">$g\cdot x = g\cdot y$</span> implies by applying <span class="math-container">$g^{-1}$</span> that <span class="math-container">$x=y$</span>.</p>
|
302,005 | <p>Show that:
$$\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$$
I evaluated this by some Fourier series. Is there any other method?
Start with substitution of $$u=\arcsin x$$
Then, integrating by parts, we have to integrate $$\int_0^{\frac{\pi}{2}}\frac{u^3\cos u}{\sin^2 u}\,\text{d}u=\Big[-u^3\csc u\Big]_0^{\frac{\pi}{2}}+3\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u=-\frac{\pi^3}{8}+3\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u$$
Since $$\int\csc u\,\text{d}u=\ln (\csc u-\cot u)=\ln \left(\frac{1-\cos u}{\sin u}\right)=\ln 2+2\ln \left(\sin \frac{u}{2}\right)-\ln \sin u$$
Thus $$\int_0^{\frac{\pi}{2}}u^2\csc u\,\text{d}u=\int_0^{\frac{\pi}{2}}u^2\,\text{d}\left(2\ln \sin \frac{u}{2}-\ln \sin u\right)$$
$$=-\frac{\pi^2}{4}\ln 2-2\int_0^{\frac{\pi}{2}}u\left(2\ln \sin \frac{u}{2}-\ln \sin u\right)\text{d}u$$
$$=-\frac{\pi^2}{4}\ln 2-4\int_0^{\frac{\pi}{2}}u\ln \sin \frac{u}{2}\text{d}u+2\int_0^{\frac{\pi}{2}}u\ln \sin u\text{d}u$$
$$=-\frac{\pi^2}{4}\ln 2+4\int_0^{\frac{\pi}{2}}u\left[\ln 2+\sum_{n=1}^{\infty}\frac{\cos nu}{n}\right]\text{d}u-\int_0^{\frac{\pi}{2}}u^2\cot u\text{d}u$$
$$=\frac{\pi^2}{4}\ln 2+4\sum_{n=1}^{\infty}\frac{1}{n}\int_0^{\frac{\pi}{2}}u\cos nu\text{d}u-\frac{\pi^2}{4}\ln 2+\frac78\zeta(3)$$
$$=\frac78\zeta(3)+2\sum_{n=1}^{\infty}\frac{-2+2\cos \frac{n\pi}{2}+n\pi \sin \frac{n\pi}{2}}{n^3}$$
$$=\frac78\zeta(3)-4\zeta(3)-\frac38\zeta(3)+2\pi G=2\pi G-\frac72\zeta(3)$$
Combine these gives $$\int_0^1\frac{\arcsin ^3x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$$</p>
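<p>As a numerical sanity check of the stated identity (a sketch added here, not part of the derivation), the substitution $x=\sin t$ turns the integral into $\int_0^{\pi/2}t^3\cos t/\sin^2 t\,\text{d}t$, whose integrand extends smoothly to $[0,\pi/2]$, so a composite Simpson rule converges quickly:</p>

```python
from math import pi, sin, cos

def integrand(t):
    # after x = sin t: t^3 cos t / sin^2 t, with the removable value 0 at t = 0
    return 0.0 if t == 0.0 else t**3 * cos(t) / sin(t)**2

def simpson(f, a, b, n=20000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

G = 0.9159655941772190      # Catalan's constant
zeta3 = 1.2020569031595943  # zeta(3)

lhs = simpson(integrand, 0.0, pi / 2)
rhs = 6 * pi * G - pi**3 / 8 - 21 * zeta3 / 2
print(lhs, rhs)             # both ≈ 0.768163
assert abs(lhs - rhs) < 1e-8
```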
| Christopher A. Wong | 22,059 | <p>Here's a suggestion that might be fruitful. It's not a full solution but it seems too messy to put into the comments. If somebody else completes the solution, feel free to add it to this.</p>
<p>You can use the substitution $\sin{t} = x$ to convert the integral to</p>
<p>$$ \int_0^{\pi/2} t^3 \csc{t} \cot{t} \, dt$$
and by applying integration by parts yields
$$ \left[ - t^3 \csc{t} \right]_0^{\pi/2} + \int_0^{\pi/2} 3t^2 \csc{t} \, dt = - \frac{\pi^3}{8} + \int_0^{\pi/2} 3t^2 \csc{t} \, dt$$
This produces one of the three special terms that you want. From here, I am not sure. We could calculate the power series of $\csc{t}$, from which the remaining integral can be readily expressed as an infinite series. Mathematica tells me that
$$ \int_0^{\pi/2} \log| \csc{t} + \cot{t} | \, dt = 2G,$$
and that integral can fall out by integration by parts once more, but that seems a little too difficult to form a power series from.</p>
|
302,005 | <p>Show that:
$$\int_0^1\frac{\arcsin^3 x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$$
I evaluated this by some Fourier series. Is there any other method?
Start with substitution of $$u=\arcsin x$$
Then we have to integrate $$\int_0^{\frac{\pi}{2}}\frac{u^3\cos u}{\sin^2 u}\text{d}u=-\int_0^{\frac{\pi}{2}}u^3\csc u\text{d}u$$
Since $$\int\csc u\text{d}u=\ln (\csc u-\cot u)=\ln \left(\frac{1-\cos x}{\sin x}\right)=\ln 2+2\ln \left(\sin \frac{x}{2}\right)-\ln \sin x$$
Thus $$\int_0^{\frac{\pi}{2}}u^2\csc u\text{d}u=\int_0^{\frac{\pi}{2}}u^2\text{d}\left(2\ln \frac{\sin u}{2}-\ln \sin u\right)$$
$$=-\frac{\pi^2}{4}\ln 2-2\int_0^{\frac{\pi}{2}}u\left(2\ln \sin \frac{u}{2}-\ln \sin u\right)$$
$$=-\frac{\pi^2}{4}\ln 2-4\int_0^{\frac{\pi}{2}}u\ln \sin \frac{u}{2}\text{d}u+2\int_0^{\frac{\pi}{2}}u\ln \sin u\text{d}u$$
$$=-\frac{\pi^2}{4}\ln 2+4\int_0^{\frac{\pi}{2}}u\left[\ln 2+\sum_{n=1}^{\infty}\frac{\cos nu}{n}\right]\text{d}u-\int_0^{\frac{\pi}{2}}u^2\cot u\text{d}u$$
$$=\frac{\pi^2}{4}\ln 2+4\sum_{n=1}^{\infty}\frac{1}{n}\int_0^{\frac{\pi}{2}}u\cos nu\text{d}u-\frac{\pi^2}{4}\ln 2+\frac78\zeta(3)$$
$$=\frac78\zeta(3)+2\sum_{n=1}^{\infty}\frac{-2+2\cos \frac{n\pi}{2}+n\pi \sin \frac{n\pi}{2}}{n^3}$$
$$=\frac78\zeta(3)-4\zeta(3)-\frac38\zeta(3)+2\pi G=2\pi G-\frac72\zeta(3)$$
Combine these gives $$\int_0^1\frac{\arcsin ^3x}{x^2}\text{d}x=6\pi G-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)$$</p>
| robjohn | 13,854 | <p>Integrating by parts twice, we get
$$
\int_0^{\pi/2}x^2\,e^{ikx}\,\mathrm{d}x
=i^{k-1}\frac{\pi^2}{4k}+i^k\frac\pi{k^2}+\frac2{k^3}\left(i^{k+1}-i\right)
$$
Therefore, using $\sin(x)=\dfrac{e^{ix}-e^{-ix}}{2i}$,
$$
\begin{align}
&\int_0^1\frac{\arcsin^3(x)}{x^2}\mathrm{d}x\\
&=\int_0^{\pi/2}\frac{x^3}{\sin^2(x)}\mathrm{d}\sin(x)\\
&=-x^3\csc(x)\Big]_0^{\pi/2}+3\int_0^{\pi/2}\csc(x)\,x^2\,\mathrm{d}x\\
&=-\frac{\pi^3}{8}+3\int_0^{\pi/2}x^2\,\frac{2ie^{-ix}\,\mathrm{d}x}{1-e^{-2ix}}\tag{$\lozenge$}\\
&=-\frac{\pi^3}{8}+6i\sum_{k=0}^\infty\int_0^{\pi/2}x^2\,e^{-(2k+1)ix}\,\mathrm{d}x\\
&=-\frac{\pi^3}{8}+6i\sum_{k=0}^\infty\left((-1)^k\frac{\pi^2}{8k+4}-i(-1)^k\frac\pi{(2k+1)^2}-\left((-1)^k-i\right)\frac2{(2k+1)^3}\right)\\
&=-\frac{\pi^3}{8}+6\sum_{k=0}^\infty\left(\frac{(-1)^k\pi}{(2k+1)^2}-\frac2{(2k+1)^3}\right)\tag{$\ast$}\\
&=-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)+6\pi\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)^2}\\
&=-\frac{\pi^3}{8}-\frac{21}{2}\zeta(3)+6\pi G
\end{align}
$$
where $G$ is <a href="http://en.wikipedia.org/wiki/Catalan%27s_constant">Catalan's Constant</a>. In $(\ast)$, we drop the imaginary part (which should be $0$).</p>
<p>There is a question about the convergence in $(\lozenge)$. To handle this, we can consider
$$
x^2\frac{2ie^{-ix}}{1-e^{-2ix}}=\lim_{r\to1^-}x^2\frac{2ire^{-ix}}{1-r^2e^{-2ix}}
$$
and the convergence in the sums is uniform as $r\to1^-$.</p>
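<p>The boundary-term formula for $\int_0^{\pi/2}x^2\,e^{ikx}\,\mathrm{d}x$ can be checked numerically; the following Python sketch (added as an illustration) compares a composite Simpson evaluation against the closed form for several integer $k$, including the negative odd values that occur in the sum:</p>

```python
from cmath import exp
from math import pi

def closed_form(k):
    # i^(k-1) pi^2/(4k) + i^k pi/k^2 + (2/k^3)(i^(k+1) - i), k a nonzero integer
    i = 1j
    return i**(k - 1) * pi**2 / (4 * k) + i**k * pi / k**2 + (2 / k**3) * (i**(k + 1) - i)

def numeric(k, n=4000):
    # composite Simpson rule for the complex integral over [0, pi/2]; n even
    h = (pi / 2) / n
    f = lambda x: x * x * exp(1j * k * x)
    s = f(0.0) + f(pi / 2) + sum((4 if m % 2 else 2) * f(m * h) for m in range(1, n))
    return s * h / 3

for k in (1, 2, 3, -1, -3, -5):
    assert abs(numeric(k) - closed_form(k)) < 1e-9
```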
|
3,053,975 | <p><span class="math-container">$3^6-3^3 +1$</span> factors?, 37 and 19, but how to do it using factoring, <span class="math-container">$3^3(3^3-1)+1$</span>, can't somehow put the 1 inside </p>
| Clive Newstead | 19,542 | <p>Viewing this as a quadratic in <span class="math-container">$3^3$</span> leads you to try to factorise <span class="math-container">$x^2-x+1$</span> or <span class="math-container">$x^6-x^3+1$</span>, neither of which splits into anything useful. However, you can pull out a factor of <span class="math-container">$9$</span> from <span class="math-container">$3^6$</span> and a factor of <span class="math-container">$3$</span> from <span class="math-container">$3^3$</span> to obtain
<span class="math-container">$$3^6-3^3+1 = 9 \cdot 3^4 - 3 \cdot 3^2 + 1$$</span>
And the quartic <span class="math-container">$9x^4-3x^2+1$</span> does factorise into two quadratic factors:
<span class="math-container">$$9x^4-3x^2+1 = (3x^2+3x+1)(3x^2-3x+1)$$</span>
Now set <span class="math-container">$x=3$</span> and voilà!</p>
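<p>A quick computational check of the factorisation (an added sketch):</p>

```python
def p(x):
    # the quartic 9x^4 - 3x^2 + 1
    return 9 * x**4 - 3 * x**2 + 1

def q(x):
    # the proposed factorisation
    return (3 * x**2 + 3 * x + 1) * (3 * x**2 - 3 * x + 1)

# the identity holds for all x; checking more than 4 points pins down a quartic
for x in range(-5, 6):
    assert p(x) == q(x)

# at x = 3 this recovers the original number and its factors
assert p(3) == 3**6 - 3**3 + 1 == 703
assert (3 * 9 + 3 * 3 + 1) * (3 * 9 - 3 * 3 + 1) == 37 * 19 == 703
```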
|
275,430 | <p>I'm trying to give an $\epsilon$-$\delta$ proof that the following function $f$ is continuous for $x\notin\mathbb Q$ but isn't for $x\in\mathbb Q$. </p>
<p>Let $f:\mathbb{A\subset R\to R}, \mathbb{A=\{x\in R| x>0\}}$ be given by:
$$
f(x) = \begin{cases}
1/n,&x=m/n\in\mathbb Q
\\
0,&x\notin\mathbb Q
\end{cases}
$$</p>
<p>where $m/n$ is in the lowest terms.</p>
<p>Can anyone help me with this proof (I'd prefer an answer with an $\epsilon$-$\delta$ proof).</p>
<p>Thank you very much!</p>
| Pedro | 23,350 | <p>We prove that for every $a\in(0,1)$ we have $$\lim_{x\to a}f(x)=0$$</p>
<p>from where we'll see it is only continuous at the irrational points. So, let's pick any $a\in(0,1)$, and let us be given $\epsilon >0$. Choose $n$ so that $1/n\leq \epsilon$. </p>
<p>First, we note that the only points where it might be <strong>false</strong> that $|f(x)-0|<\epsilon$ are $$\frac{1}{2};\frac{1}{3},\frac{2}{3};\frac{1}{4},\frac{3}{4};\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}; \cdots ;\frac{1}{n}, \cdots ,\frac{{n - 1}}{n}$$ If $a$ is rational, then $a$ could be one of those numbers. But, however many there are, there are only <strong>finitely</strong> many of these numbers. Thus, for some $p/q$ in that list, the number $$|a-p/q|$$ is least. (If $a$ is itself one of these numbers, take the minimum over $p/q\neq a$.) Then take $\delta$ to be this least distance. It will then be the case that if $0<|x-a|<\delta$, then $x$ will be <strong>none</strong> of the numbers</p>
<p>$$\frac{1}{2};\frac{1}{3},\frac{2}{3};\frac{1}{4},\frac{3}{4};\frac{1}{5},\frac{2}{5},\frac{3}{5},\frac{4}{5}; \cdots ;\frac{1}{n}, \cdots ,\frac{{n - 1}}{n}$$</p>
<p>thus it will be <strong>true</strong> that $|f(x)-0|<\epsilon$. </p>
<p>You can find another interesting example <a href="https://math.stackexchange.com/questions/195951/what-exactly-is-going-on-when-were-finding-a-limit/195969#195969">here</a>.</p>
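<p>The choice of $\delta$ above can be made concrete with a short Python sketch (added here as an illustration; the point $a$ is a high-precision rational stand-in for an irrational number such as $1/\sqrt2$):</p>

```python
from fractions import Fraction
from math import ceil, sqrt

def delta_for(a, eps):
    """With n = ceil(1/eps), the only points of (0,1) where f(x) >= eps are
    fractions p/q with q <= n; take delta to be the distance from a to the
    nearest of these finitely many points."""
    n = ceil(1 / eps)
    bad = [Fraction(p, q) for q in range(2, n + 1) for p in range(1, q)]
    return min(abs(a - r) for r in bad)

# a rational stand-in for the irrational point 1/sqrt(2)
a = Fraction(sqrt(2) / 2).limit_denominator(10**12)

for eps in (Fraction(1, 4), Fraction(1, 10), Fraction(1, 50)):
    d = delta_for(a, eps)
    assert d > 0
    # any reduced fraction p/q inside (a - d, a + d) has q > 1/eps, so
    # f(p/q) = 1/q < eps; spot-check all denominators up to 200
    for q in range(2, 201):
        for p_ in range(1, q):
            r = Fraction(p_, q)
            if abs(a - r) < d:
                assert Fraction(1, r.denominator) < eps
```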
|
1,804,042 | <p><strong>Edit:</strong> Here is the original problem; it is possible that my recurrence for the stationary distribution $\pi$ is incorrect.</p>
<blockquote>
<p>Consider a single server queue where customers arrive according to a Poisson process with intensity $\lambda$ and request i.i.d. $\mathsf{Exp}(\mu)$ service times. The server is subject to failures and repairs. The lifetime of a working server is an $\mathsf{Exp}(\theta)$ random variable, while the repair time is an $\mathsf{Exp}(\alpha)$ random variable. Successive lifetimes and repair times are independent, and are independent of the number of customers in the queue. When the server fails, all the customers in the queue are forced to leave, and while the server is under repair no new customers are allowed to join.</p>
</blockquote>
<p><strong>Edit:</strong> I have revised the recurrence.</p>
<p>In a problem on queueing theory I've derived the following recurrence:
\begin{align}
\pi_1 &=\left(\frac{\lambda+\theta}\mu\right)\pi_0 - \frac{\alpha\theta}{\mu(\alpha+\theta)}\\
\pi_{n+1} &= \left(1+\frac{\lambda+\theta}\mu\right)\pi_n - \frac\lambda\mu\pi_{n-1},\ n\geqslant1.
\end{align}
where $\lambda$, $\mu$, $\theta$, and $\alpha$ are positive constants and $$\sum_{i=0}^\infty \pi_i = \frac\alpha{\alpha+\theta}. $$ </p>
<p>After a lot of tedious algebra, I found that
$$\scriptsize\pi_n = \left(\frac{\alpha \theta \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\theta +\lambda +\mu+ \sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}\right)^n}{(\alpha +\theta ) (2 \mu )^n \sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}}\right)(1+\pi_0) $$ for $n\geqslant 1$. To save space, let $$\mathcal C:=\sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}. $$</p>
<p>Summing over $n$ and solving for $\pi_0$, I found
$$\pi_0 =\frac{\alpha \mu \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\lambda -\mu-\theta-\mathcal C \right)}{2 \theta (\alpha +\theta ) \mathcal C}, $$</p>
<p>and so
$$
\pi_n=\left(\frac{ \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\lambda -\mu-\theta-\mathcal C \right)+2 \theta (\alpha +\theta ) \mathcal C }{2(\alpha +\theta )^2\mathcal C^2\left(\alpha^2\mu \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right)\right)^{-1}} \right)\left(\frac{\lambda+\mu+\theta+\mathcal C }{2\mu}\right)^n.
$$</p>
<p>If you see any errors let me know...</p>
<p>I'm also wondering what conditions on $\lambda,\mu,\theta$, and $\alpha$ are necessary for $\sum_{i=0}^\infty \pi_i$ to converge. For context, this is a $M/M/1$ queue with arrival rate $\lambda$, service rate $\mu$, but with an added state $D$ with transitions of rate $\theta$ from each state $n$ to $D$ and a transition of rate $\alpha$ from $D$ to $0$.</p>
| Brent Kerby | 218,224 | <p>Hint: By defining $\phi_n = \sum_{i=0}^n \pi_i$, we may express $\pi_{n+1}$ and $\phi_{n+1}$ as linear combinations of $\pi_n$ and $\phi_n$ plus constants, giving a first-order linear matrix difference equation.</p>
<p><strong>EDIT</strong>: The equation we end up with is of the form $$x_{(n+1)} = Ax_{(n)}+b$$ where the vectors $x_{(n)}=\begin{pmatrix}\pi_n \\ \phi_n\end{pmatrix}\in\mathbb R^2$ are to be solved for, and $A$ is a known $2\times 2$ matrix and $b\in\mathbb R^2$ a known vector. If $A$ can be diagonalized, then, writing $A=SDS^{-1}$ and setting $y_{(n)}=S^{-1}x_{(n)}$, the system becomes
$$y_{(n+1)} = Dy_{(n)}+S^{-1}b$$
In this case, the system decomposes into univariate difference equations, which can be solved separately, and then one uses $x_{(n)} = Sy_{(n)}$ to solve for the original unknowns.</p>
|
1,977,306 | <p>This is from a math competition, so it should not require anything really long.
If a parabola touches the lines $y=x$ and $y=-x$ at $A(3,3)$ and $B(1,-1)$ respectively, then </p>
<p>(A) equation of axis of parabola is $2x+y=0$ </p>
<p>(B)slope of tangent at vertex is $1/2$</p>
<p>(C) Focus is $(6/5,-3/5)$</p>
<p>(D) Directrix passes through $(1,-2)$ </p>
<p>I thought the axis would be the angle bisector of the tangents passing through the focus but it turns out that is not the case in a parabola so how can I find anything..</p>
| Emilio Novati | 187,568 | <p>This figure illustrate the situation.</p>
<p><a href="https://i.stack.imgur.com/cyXYM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cyXYM.jpg" alt="enter image description here"></a></p>
<p>Using the fact that:</p>
<blockquote>
<p>Tangents drawn at the endpoints of a focal chord of a parabola intersect at right angles (on the directrix).</p>
</blockquote>
<p>we can say that the focal chord has equation: $y=2x-3$, so we can test that the answer (C): $F=(6/5,-3/5)$ is correct. So we have the focus.</p>
<p>We can also use the fact that:</p>
<blockquote>
<p>The tangent at any point of the parabola is equally inclined to the focal distance and the axis of the parabola.</p>
</blockquote>
<p>In the figure this means that the two angles at $A$ and $D$ are equal, and from this we can find the axis.</p>
<p>Finally, from the focus and the axis we can find the directrix as the line orthogonal to the axis that passes through the common point of the two tangents: $C$.</p>
<p>If you do this, you can verify that answer (D) is also correct.</p>
<p>You can find the properties of the tangent used in this answer at: <a href="http://www.nabla.hr/CS-ParabolaAndLine2.htm" rel="nofollow noreferrer">http://www.nabla.hr/CS-ParabolaAndLine2.htm</a>.</p>
|
1,977,306 | <p>This is from a math competition, so it should not require anything really long.
If a parabola touches the lines $y=x$ and $y=-x$ at $A(3,3)$ and $B(1,-1)$ respectively, then </p>
<p>(A) equation of axis of parabola is $2x+y=0$ </p>
<p>(B)slope of tangent at vertex is $1/2$</p>
<p>(C) Focus is $(6/5,-3/5)$</p>
<p>(D) Directrix passes through $(1,-2)$ </p>
<p>I thought the axis would be the angle bisector of the tangents passing through the focus but it turns out that is not the case in a parabola so how can I find anything..</p>
| Jean Marie | 305,862 | <p>(see figure below) </p>
<p>Parametric equations for the parabola are readily obtained, assuming a certain knowledge of quadratic Bezier curves : see for example the paragraph "Second order curve is a parabolic segment" in (<a href="https://en.wikipedia.org/wiki/B%C3%A9zier_curve" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/B%C3%A9zier_curve</a>)).</p>
<p>The tangents' intersection is clearly $O(0,0)$. Thus the parabola is nothing but the Bezier curve with endpoints $A$ and $B$ and "directing" point in $O$: $M=A(1-t)^2 + 2t(1-t)O + B t^2$, otherwise said:</p>
<p>$$\tag{1}\pmatrix{x\\y}=(1-t)^2\pmatrix{3\\3}+2t(1-t)\pmatrix{0\\0}+t^2\pmatrix{1\\-1}$$</p>
<p>$$\tag{2}\Leftrightarrow \ \ \ \ \cases{x=3(1-t)^2+t^2\\y=3(1-t)^2-t^2}$$</p>
<p>which constitutes a parametric description of the curve. </p>
<p>Eliminating $t$ between the two equations (2), one gets an implicit equation that can be written under the form :</p>
<p>$$\tag{3} \left(x-\frac{6}{5}\right)^2+\left(y+\frac{3}{5}\right)^2=\frac{(y+2x)^2}{5}$$</p>
<p>Taking square roots on both sides:</p>
<p>$$\tag{4} \sqrt{\left(x-\frac{6}{5}\right)^2+\left(y+\frac{3}{5}\right)^2}=\frac{|y+2x|}{\sqrt{5}}$$</p>
<p>(4) expresses the fact that the distance of $M(x,y)$ (the current point on the curve) to point $F(\frac{6}{5},-\frac{3}{5})$ is equal to the distance of $M$ to straight line $(D)$ with equation $y+2x=0$ (with slope $-2$). For distance formula see: (<a href="http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/Point-LineDistance2-Dimensional.html</a>). Thus $F$ is the focus of the parabola and $(D)$ its directrix. </p>
<p>Therefore, the parabola's axis, passing through $F$ and orthogonal to $(D)$ (thus with slope $\frac{1}{2}$) has the following equation: $y+\frac{3}{5}=\frac{1}{2}(x-\frac{6}{5})$, i.e., $y=\frac{1}{2}x-\frac{6}{5}.$</p>
<p>Conclusion:</p>
<ul>
<li><p>propositions (A), (B) are false. For (B), it's not necessary to compute the tangent at the vertex because the slope of this tangent is the same as the slope of the directrix, which is $-2$, not $\frac{1}{2}$.</p></li>
<li><p>propositions (C), (D) are exact.</p></li>
</ul>
<p><a href="https://i.stack.imgur.com/Lf1cM.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Lf1cM.jpg" alt="enter image description here"></a></p>
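<p>Equations (2) and (4) can also be verified numerically: every point produced by the parametrization (2) should be equidistant from the focus $F(6/5,-3/5)$ and from the directrix $y+2x=0$. A minimal sketch in Python (using only the formulas above):</p>

```python
import math

F = (6 / 5, -3 / 5)  # focus from equation (4)

for i in range(21):
    t = i / 20  # sample t in [0, 1]
    # parametric point from equation (2)
    x = 3 * (1 - t) ** 2 + t ** 2
    y = 3 * (1 - t) ** 2 - t ** 2
    dist_focus = math.hypot(x - F[0], y - F[1])
    dist_directrix = abs(y + 2 * x) / math.sqrt(5)
    assert abs(dist_focus - dist_directrix) < 1e-9
print("all sampled points are equidistant from focus and directrix")
```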
|
129 | <p>Is there some criterion for whether a space has the homotopy type of a closed manifold (smooth or topological)? Poincare duality is an obvious necessary condition, but it's almost certainly not sufficient. Are there any other special homotopical properties of manifolds?</p>
| Andrew Ranicki | 732 | <p>The main result of the Browder-Novikov-Sullivan-Wall surgery theory (1962-1969) is that for $n>4$ a space $X$ is homotopy equivalent to a compact n-dimensional topological (resp. differentiable) manifold if and only if $X$ is homotopy equivalent to a finite $CW$ complex $M$ with $n$-dimensional Poincaré duality, and there is a topological (resp. vector) bundle over $M$ for which the corresponding normal map $(f,b):N\to M$ from an $n$-dimensional manifold $N$ has zero surgery obstruction in the Wall $L$-group of quadratic forms over $Z[\pi_1(X)]$. Thus there are two obstructions, a primary topological $K$-theory obstruction to the existence of a bundle, and (depending on the vanishing of the primary one, and a choice of reason) a secondary obstruction in algebraic $L$-theory. The original theory was for differentiable manifolds: the extension to topological manifolds due to Kirby and Siebenmann (1970) remains a major success of surgery theory. All this is explained (at some length) in Wall's own book <a href="http://www.maths.ed.ac.uk/~aar/books/scm.pdf" rel="noreferrer">Surgery on compact manifolds</a> (1970/1998) and also in my own books <a href="http://www.maths.ed.ac.uk/~aar/books/topman.pdf" rel="noreferrer">Algebraic L-theory and topological manifolds</a> (1992) and <a href="http://www.maths.ed.ac.uk/~aar/books/surgery.pdf" rel="noreferrer">Algebraic and geometric surgery</a> (2002), as well as many other references (such as Wolfgang Lück's notes listed in a previous post). I have made available a large number of surgery-related resources on my <a href="http://www.maths.ed.ac.uk/~aar/surgery" rel="noreferrer">website</a>.</p>
|
204,365 | <p>Consider a positive matrix <code>M</code> and a positive vector <code>b</code>, e.g.</p>
<pre><code>nn = 1000;
M = Table[RandomReal[{0, 100}], {i, 1, nn}, {j, 1, nn}];
b = Table[RandomReal[{0, 100}], {i, 1, nn}];
</code></pre>
<p>I would like to find a positive vector <code>X</code></p>
<pre><code>X = Array[x,nn];
</code></pre>
<p>(each <code>x[i]>0</code>) such that given</p>
<pre><code>expr = M.X-b;
</code></pre>
<p>the quantity <code>expr.expr</code> is minimized. Is it possible to do that in Mathematica efficiently (so that it finishes within a few seconds/minutes)?</p>
| Roman | 26,598 | <pre><code>Minimize[{expr.expr, Thread[X > 0]}, X]
</code></pre>
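<p>Two caveats: with the strict constraint <code>X > 0</code> the infimum may sit on the boundary and not be attained, and for <code>nn = 1000</code> an exact <code>Minimize</code> can be very slow. The underlying problem is non-negative least squares, which can be attacked iteratively. Below is a rough sketch of projected gradient descent (in Python, on a tiny made-up problem; the matrix, step size and iteration count are illustrative assumptions, not the Mathematica API):</p>

```python
# Non-negative least squares by projected gradient descent:
# minimize |M.X - b|^2 subject to X >= 0.
# Tiny illustrative problem with known answer X = (1, 3).
M = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 3.0]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def nnls_pgd(M, b, eta=0.1, iters=2000):
    n = len(M[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [mi - bi for mi, bi in zip(matvec(M, x), b)]  # residual M.x - b
        # gradient of |M.x - b|^2 is 2 M^T r
        g = matvec(list(zip(*M)), r)
        # gradient step, then project onto the positive orthant
        x = [max(0.0, xi - 2 * eta * gi) for xi, gi in zip(x, g)]
    return x

x = nnls_pgd(M, b)
print(x)  # close to [1.0, 3.0]
```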
|
2,304,379 | <p>My textbook gives the following definition.</p>
<blockquote>
<p>Let $G$ be any topological group. A representation of $G$ on a nonzero complex Hilbert space $V$ is a group homomorphism $\phi$ of $G$ into the group of bounded linear operators on $V$ with bounded inverses, such that the resulting map $ G\times V\to V$ is continuous. </p>
</blockquote>
<p>And it says that to have the continuity property it is enough to have that the map $g\mapsto \phi(g)v$ from $G$ to $V$ is continuous at $g=1$ for all $v\in V$ and a uniform bound for $\|\phi(g)\|$ in some neighbourhood of $1$.</p>
<p>I want to show that the map $G\times V\to V$ given by $(g,v)\mapsto \phi(g)v$ is continuous under these assumptions.</p>
<p><strong>My attempt</strong></p>
<p>Let $(g_0,v_0)\in G\times V$. Let $\epsilon>0$. We need to find an open neighbourhood $U\subseteq G\times V$ of $(g_0,v_0)$ such that $\|\phi(g)v-\phi(g_0)v_0\|<\epsilon$ for all $(g,v)\in U$. By the continuity of the map $G\to V$ by $g\mapsto \phi(g)v_0$ at $g=1$, there is an open neighbourhood $N$ of $1$ such that $\|\phi(g)v_0-v_0\|<\epsilon$ for all $g\in N$. On the other hand, for any $g\in N$ we have
\begin{align*}
\|\phi(g)v-\phi(g_0)v_0\|&=\|\phi(g_0^{-1}g)v-v_0\| \qquad \text{by the unitarity} \\&\leq \|\phi(g_0^{-1}g)v-\phi(g_0^{-1}g)v_0\|+\|\phi(g_0^{-1}g)v_0-v_0\| \\& \leq \|\phi(g_0^{-1}g)\|_{op} \|v-v_0\|+\underbrace{\|\phi(g_0^{-1}g)v_0-v_0\|}_\text{$(*)$}
\end{align*}</p>
<p>Here I could not find an upper bound for the term $(*)$. Could anyone help me? Thanks.</p>
<p>Update: I have realized that we have no unitarity assumption. So I will think about this. </p>
| Arpan1729 | 444,208 | <p>In any Hausdorff space, any finite set is closed.</p>
<p>So in any connected Hausdorff space (with more than one point) a singleton does not contain any non-empty open set.</p>
<p>(I mentioned connectedness as then the singleton cannot be open since it is already closed.)</p>
|
1,102,638 | <p>Let $n\in \mathbb{N}$. Can someone help me prove this by induction:</p>
<p>$$\sum _{i=0}^{n}{i} =\frac { n\left( n+1 \right) }{ 2 } .$$</p>
| Krish | 177,430 | <p><em>Hint:</em> $\dfrac{\sqrt{1 + x+ x^2} -1}{x} = \dfrac{x + x^2}{x (\sqrt{1 + x+ x^2} + 1)} = \dfrac{1+x}{\sqrt{1 + x+ x^2} + 1}$</p>
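<p>Assuming the limit in question is $x \to 0$, the rationalized form makes the answer $\frac{1}{2}$ immediate, and this is easy to confirm numerically:</p>

```python
import math

def f(x):
    return (math.sqrt(1 + x + x * x) - 1) / x

# as x -> 0, the rationalized form (1 + x) / (sqrt(1 + x + x^2) + 1) -> 1/2
for x in (1e-2, 1e-4, 1e-6):
    print(x, f(x))
assert abs(f(1e-2) - 0.5) < 1e-2
assert abs(f(1e-6) - 0.5) < 1e-5
```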
|
3,857,494 | <p>I have the following sequence given recursively by:</p>
<p><span class="math-container">$$A_n - 2A_{n-1} - 4A_{n-2} = 0$$</span></p>
<p>Where:</p>
<p><span class="math-container">$$A_0 = 1,\quad A_1 = 3,\quad A_2 = 10,\quad A_3 = 32,\ \dots$$</span></p>
<p>To find the generating function, I have done the following:</p>
<p><span class="math-container">$$\begin{aligned} A &= 1 + 3x + 10x^2 + 32x^3 + \dots
\\ -2xA &= 0 - 2x - 6x^2 - 20x^3 + \dots
\\ -4x^2 A &= 0 - 0 - 4x^2 - 12x^3 + \dots \end{aligned}$$</span></p>
<p>[NOTE: The <span class="math-container">$0s$</span> are there for formatting purposes, they're not part of the expressions]</p>
<p>Adding these together:</p>
<p><span class="math-container">$$(1 - 2x - 4x^2)A = 1 + x + 0$$</span></p>
<p><span class="math-container">$$A = \frac{1+x}{1 - 2x - 4x^2}$$</span></p>
<p>Which, I'm guessing, gives me the generating function.</p>
<p>My question is, how do I know if this is correct? What is this generating function supposed to tell me?</p>
<p>If I substitute certain values into the generating function, will I get the initial sequence given recursively or will I get the function, <span class="math-container">$A = 1 + 3x + 10x^2 + 32x^3 + ...$</span>?</p>
| awkward | 76,172 | <p>You can do polynomial long division to grind out as many terms as you like. (Sorry for the image, trying to figure out how to do this with MathJax made me grow faint.)</p>
<p><a href="https://i.stack.imgur.com/oKNYB.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oKNYB.png" alt="polynomial long division" /></a></p>
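<p>The same long division can be automated: divide the power series of $1+x$ by $1-2x-4x^2$ term by term and compare with the recurrence $A_n = 2A_{n-1} + 4A_{n-2}$. A quick sketch in Python (the helper name is made up):</p>

```python
# Power-series long division: coefficients q_n of num/den with den[0] = 1,
# via q_n = num_n - sum_{j >= 1} den_j * q_{n-j}.
num = [1, 1]       # 1 + x
den = [1, -2, -4]  # 1 - 2x - 4x^2

def series_div(num, den, terms):
    q = []
    for n in range(terms):
        t = num[n] if n < len(num) else 0
        for j in range(1, min(n, len(den) - 1) + 1):
            t -= den[j] * q[n - j]
        q.append(t // den[0])
    return q

coeffs = series_div(num, den, 6)
print(coeffs)  # [1, 3, 10, 32, 104, 336]

# agrees with the recurrence A_n = 2 A_{n-1} + 4 A_{n-2}
A = [1, 3]
for n in range(2, 6):
    A.append(2 * A[-1] + 4 * A[-2])
assert coeffs == A
```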
|
1,902,188 | <p>$k\in\mathbb{N}$ </p>
<p>The inverse of the sum $$b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j} j^{\,k} a_j$$ is obviously
$$a_k=\sum\limits_{j=1}^k \binom{k-1}{j-1}\frac{b_j}{k^j}$$ . </p>
<p>How can one prove it (in a clear manner)? </p>
<p>Thanks in advance.</p>
<hr>
<p>Background of the question: </p>
<p>It’s $$\sum\limits_{k=1}^\infty \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\sum\limits_{k=1}^\infty \frac{a_k}{k}$$ with $\,\displaystyle b_k:=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k}a_j $. </p>
<p>Note:</p>
<p>A special case is $\displaystyle a_k:=\frac{1}{k^n}$ with $n\in\mathbb{N}$ and therefore $\,\displaystyle b_k=\sum\limits_{j=1}^k (-1)^{k-j}\binom{k}{j}j^{\,k-n}$ (see Stirling numbers of the second kind)
$$\sum\limits_{k=1}^n \frac{b_k}{k!}\int\limits_0^\infty \left(\frac{t}{e^t-1}\right)^k dt =\zeta(n+1)$$ and the invers equation can be found in <a href="https://math.stackexchange.com/questions/1873839/a-formula-for-int-limits-0-infty-fracxex-1n-dx">A formula for $\int\limits_0^\infty (\frac{x}{e^x-1})^n dx$</a> . </p>
| Marko Riedel | 44,883 | <p>Suppose we seek to show that if</p>
<p>$$b_n = \sum_{q=1}^n (-1)^{n-q} {n\choose q} q^n a_q$$</p>
<p>then</p>
<p>$$a_n = \sum_{q=1}^n {n-1\choose q-1} n^{-q} b_q.$$</p>
<p>This is</p>
<p>$$a_n = \sum_{q=1}^n {n-1\choose q-1} n^{-q}
\sum_{p=1}^q (-1)^{q-p} {q\choose p} p^q a_p.$$</p>
<p>Re-indexing we find</p>
<p>$$\sum_{p=1}^n a_p
\sum_{q=p}^n {n-1\choose q-1} n^{-q} (-1)^{q-p}
{q\choose p} p^q
\\ = \sum_{p=1}^n (-1)^p a_p
\sum_{q=p}^n {n-1\choose q-1} (p/n)^q (-1)^q
{q\choose p}.$$</p>
<p>The inner sum is</p>
<p>$$\sum_{q=p}^n \frac{q}{n} {n\choose q} (p/n)^q (-1)^q
{q\choose p}.$$</p>
<p>Note that</p>
<p>$${n\choose q} {q\choose p}
= \frac{n!}{(n-q)! p! (q-p)!}
= {n\choose p} {n-p\choose n-q}$$</p>
<p>and we obtain for the sum term</p>
<p>$$\frac{1}{n} {n\choose p} \sum_{q=p}^n {n-p\choose n-q}
\times q \times (-1)^q \times (p/n)^q
\\ = \frac{1}{n} {n\choose p} \sum_{q=0}^{n-p} {n-p\choose q}
\times (n-q) \times (-1)^{n-q} \times (p/n)^{n-q}.$$</p>
<p>We now have two cases, case A when $n\gt p$ and case B when $n=p.$ In
case A we split the sum into two pieces. The first piece here is</p>
<p>$$\frac{1}{n} {n\choose p} \times (-1)^n (p/n)^n \times n
\left(1-\frac{n}{p}\right)^{n-p}.$$</p>
<p>The second is</p>
<p>$$-\frac{1}{n} {n\choose p} \times (-1)^n (p/n)^n \times
\sum_{q=1}^{n-p} {n-p-1\choose q-1} (n-p) (-1)^q (p/n)^{-q}
\\ = -\frac{1}{n} {n\choose p} \times (-1)^n (p/n)^n \times
(-n/p) (n-p) \left(1-\frac{n}{p}\right)^{n-p-1}.$$</p>
<p>Adding the two components yields</p>
<p>$$\frac{1}{n} {n\choose p} \times (-1)^n (p/n)^n
\\ \times \left( n \left(1-\frac{n}{p}\right)^{n-p}
+ \frac{n}{p} (n-p) \left(1-\frac{n}{p}\right)^{n-p-1}\right)
\\ = \frac{1}{n} {n\choose p} \times (-1)^n (p/n)^n
\\ \times \left( n \left(1-\frac{n}{p}\right)^{n-p}
+ n \left(\frac{n}{p} - 1\right)
\left(1-\frac{n}{p}\right)^{n-p-1}\right) = 0.$$</p>
<p>For case $B$ when $n=p$ we do not split the sum and simply evaluate
the single term that appears, which is</p>
<p>$$\frac{1}{n} {n\choose n} \times
{0\choose 0} n (-1)^n 1^n = (-1)^n.$$</p>
<p>Returning to the target sum we see that we have</p>
<p>$$\sum_{p=1}^n (-1)^p a_p \times (-1)^p [[n=p]] = a_n$$</p>
<p>as claimed.</p>
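<p>The inversion pair can also be confirmed numerically for small $n$: start from arbitrary values $a_q$, form the $b_q$, and recover the $a_n$ exactly. A short sketch (Python, exact rational arithmetic):</p>

```python
from fractions import Fraction
from math import comb

def b_from_a(a, n):
    # b_n = sum_{q=1}^n (-1)^{n-q} C(n,q) q^n a_q
    return sum((-1) ** (n - q) * comb(n, q) * Fraction(q) ** n * a[q]
               for q in range(1, n + 1))

def a_from_b(b, n):
    # a_n = sum_{q=1}^n C(n-1, q-1) n^{-q} b_q
    return sum(comb(n - 1, q - 1) * b[q] / Fraction(n) ** q
               for q in range(1, n + 1))

N = 6
a = {q: Fraction(1, q * q + 1) for q in range(1, N + 1)}  # arbitrary test values
b = {n: b_from_a(a, n) for n in range(1, N + 1)}
for n in range(1, N + 1):
    assert a_from_b(b, n) == a[n]
print("inversion verified for n = 1 ..", N)
```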
|
638,875 | <p>Let $P$ be a $p$-group and let $A$ be maximal among abelian normal subgroups of $P$. Show that $A=C_P(A)$.</p>
<p>This is the second part of a problem in which I successfully proved the following:
Let $P$ be a finite $p$-group and let $U<V$ be normal subgroups of $P$. Show that there exists $W \triangleleft P$ with $U<W \le V$ and $|W:U|=p$.</p>
<p>I did this by observing that since $U<V$ are normal in $P$, $(V/U) \triangleleft P/U$ and so $(V/U) \cap Z(P/U)$ is nontrivial. Now suppose that $|V/U|=p$. Then it easily follows that $U \triangleleft V \triangleleft P$ and $|V:U|=p$. Now suppose that $|V/U|>p$. Then choose a subgroup of $(V/U) \cap Z(P/U)$ of order $p$, which is normal (since it is central) in $P/U$. This subgroup is of the form $W/U$ for some $W<P$. Then by the Correspondence Theorem, we have $U \triangleleft W \triangleleft P$ and $|W:U|=p$.</p>
<p>I have been told to apply the first part with $U=A$ and $V=C_P(A)$ and show that $W$ is abelian. I tried using the same strategy as above, i.e. choosing $W$ from $Z(P/U)$. However, abelian-ness isn't necessarily preserved under the canonical homomorphism from $P$ to $P/U$. Even if I could obtain such a $W$ I don't see how $W$ abelian implies that $A=C_P(A)$.</p>
<p>I would appreciate a hint to point me in the right direction with this. Thanks.</p>
| Jack Schmidt | 583 | <p>This answer is meant to answer Alex's questions rather than the homework question:</p>
<p>Suppose $U$ is abelian, $V=C_P(U)$ and $U<W \leq V$ with $[W:U]=p$ and $W \unlhd V$. You ask how to show $W$ can be chosen to be abelian.</p>
<p>In fact $W$ is always abelian in this case: Since $[W:U]=p$, there is some $w \in W$ with $W=\langle w, U \rangle$. Since $w \in V=C_P(U)$, one has that $w$ commutes with both itself and $U$; similarly $U$ commutes with both itself and $w$. An element that commutes with all generators lies in the center, so $\langle w, U \rangle \leq Z(\langle w, U\rangle)$, that is, $W$ is abelian.</p>
<p>You also asked how $W$ being abelian implies $C_P(U)=U$. The overall structure of the proof is by contradiction: Let $U$ be maximal amongst abelian normal subgroups. Suppose, by way of contradiction, that $U \neq C_P(U)$. Since $U$ is abelian, $U \leq V=C_P(U)$ and since it is not equal we get $U < V$. Now you find the $W$ as above, so that $W$ is abelian and $W \unlhd P$ and $U < W$. However that directly contradicts the hypothesis on $U$: it is not maximal amongst normal abelian subgroups if $W$ is a normal abelian subgroup strictly containing it!</p>
|
1,917,313 | <p>I am to find a combinatorial argument for the following identity:</p>
<p>$$\sum_k \binom {2r} {2k-1}\binom{k-1}{s-1} = 2^{2r-2s+1}\binom{2r-s}{s-1}$$</p>
<p>For the right hand side, I was thinking that it would just be the number of ways to choose at least $s-1$ elements out of a $[2r-s]$ set. However, for the left hand side, I don't really know what it is representing. </p>
<p>Any help would be greatly appreciated!</p>
| robjohn | 13,854 | <p><span class="math-container">$$
\begin{align}
\sum_k\binom{2r}{2k-1}\binom{k-1}{s-1}
&=\sum_k\binom{2r}{2r-2k+1}\binom{k-1}{k-s}\tag1\\
&=\sum_k(-1)^{k-s}\binom{2r}{2r-2k+1}\binom{-s}{k-s}\tag2\\[3pt]
&=\left[x^{2r-2s+1}\right](1+x)^{2r}\left(1-x^2\right)^{-s}\tag3\\[12pt]
&=\left[x^{2r-2s+1}\right](1+x)^{2r-s}(1-x)^{-s}\tag4\\[9pt]
&=\sum_k(-1)^k\binom{2r-s}{2r-2s-k+1}\binom{-s}{k}\tag5\\
&=\sum_k\binom{2r-s}{2r-2s-k+1}\binom{k+s-1}{k}\tag6\\
&=\sum_k\binom{2r-s}{k+s-1}\binom{k+s-1}{s-1}\tag7\\
&=\binom{2r-s}{s-1}\sum_k\binom{2r-2s+1}{k}\tag8\\
&=2^{2r-2s+1}\binom{2r-s}{s-1}\tag9
\end{align}
$$</span>
Explanation:<br>
<span class="math-container">$(1)$</span>: symmetry of <a href="https://en.wikipedia.org/wiki/Pascal%27s_triangle" rel="nofollow noreferrer">Pascal's triangle</a><br>
<span class="math-container">$(2)$</span>: apply <a href="https://math.stackexchange.com/a/217647">negative binomial coefficient</a><br>
<span class="math-container">$(3)$</span>: write the sum as the coefficient in a product<br>
<span class="math-container">$(4)$</span>: cancel factors<br>
<span class="math-container">$(5)$</span>: write the coefficient in a product as a sum<br>
<span class="math-container">$(6)$</span>: apply negative binomial coefficient<br>
<span class="math-container">$(7)$</span>: symmetry of Pascal's triangle<br>
<span class="math-container">$(8)$</span>: <span class="math-container">$\binom{n}{m}\binom{m}{k}=\binom{n}{k}\binom{n-k}{m-k}$</span><br>
<span class="math-container">$(9)$</span>: evaluate the sum</p>
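<p>Steps $(1)$–$(9)$ can be double-checked numerically over a range of $r,s$ (a quick sketch in Python):</p>

```python
from math import comb

def lhs(r, s):
    # sum_k C(2r, 2k-1) C(k-1, s-1); math.comb returns 0 when k > n
    return sum(comb(2 * r, 2 * k - 1) * comb(k - 1, s - 1)
               for k in range(1, r + 1))

def rhs(r, s):
    return 2 ** (2 * r - 2 * s + 1) * comb(2 * r - s, s - 1)

for r in range(1, 12):
    for s in range(1, r + 1):
        assert lhs(r, s) == rhs(r, s), (r, s)
print("identity holds for all tested r, s")
```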
|
352,849 | <p>I have to show that $\lim \limits_{n\rightarrow\infty}\frac{n!}{(2n)!}=0$ </p>
<hr>
<p>I am not sure if it is correct, but I did it like this:
$(2n)!=(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))\cdot (n!)$ so I have $$\displaystyle \frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}$$ and $$\lim \limits_{n\rightarrow \infty}\frac{1}{(2n)\cdot(2n-1)\cdot(2n-2)\cdot ...\cdot(2n-(n-1))}=0.$$ Is this correct? If not, why?</p>
| Mikasa | 8,581 | <p>Another hint, based on series: if the series $$\sum_0^{\infty}u_n$$ is convergent, then $u_n\to 0$. </p>
|
2,596,213 | <p>I'm having huge troubles with problems like this. I know the following:</p>
<p>$$\frac{\sin{x}}{x}=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)$$</p>
<p>and </p>
<p>$$\ln{(1+t)}=t-\frac{t^2}{2}+\frac{t^3}{3}+O(t^4)$$</p>
<p>So</p>
<p>$$\ln{\left(1+\left(-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right)\right)}=\\\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]-\frac{\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]^2}{2}+\frac{\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]^3}{3}+O(x^8).$$</p>
<p>But how on earth would one simplify this? Obviously I should not need to manually expand something of the form $(a+b+c+d+e)^n$. I seriously don't understand what is happening here.</p>
<p>Also, how should I know to what order $O(x^k)$ I should expand the initial functions? </p>
| marty cohen | 13,079 | <p>If
$f(x)
= \ln \sin(x)
$,
then
$f'(x)
=\dfrac{\sin'(x)}{\sin(x)}
=\dfrac{\cos(x)}{\sin(x)}
=\cot(x)
$.</p>
<p>From
<a href="https://en.wikipedia.org/wiki/Trigonometric_functions#Series_definitions" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Trigonometric_functions#Series_definitions</a>,
we have
$\cot(x)
=\sum_{n=0}^{\infty} \dfrac{(-1)^n2^{2n}B_{2n}x^{2n-1}}{(2n)!}
=\dfrac1{x}-\dfrac{x}{3}-\dfrac{x^3}{45}-\dfrac{2x^5}{945}-...
$
where the
$B_{2n}$
are the Bernoulli numbers.</p>
<p>Integrating term-by-term,
and ignoring the constant,
$f(x)
=\ln(x)-\dfrac{x^2}{6}-\dfrac{x^4}{180}-\dfrac{x^6}{2835}-...
$.</p>
<p>So
$\ln\frac{\sin(x)}{x}
=\ln\sin(x)-\ln(x)
=-\dfrac{x^2}{6}-\dfrac{x^4}{180}-\dfrac{x^6}{2835}-...
$</p>
<p>To get the complete power series,</p>
<p>$\begin{array}\\
\ln \sin(x)
&= \int \cot(x) dx\\
&=\int \dfrac{dx}{x}+\sum_{n=1}^{\infty} \int dx\dfrac{(-1)^n2^{2n}B_{2n}x^{2n-1}}{(2n)!}\\
&=\ln(x)+\sum_{n=1}^{\infty} \dfrac{(-1)^n2^{2n}B_{2n}x^{2n}}{(2n)(2n)!}\\
\text{so}\\
\ln (\sin(x)/x)
&=\sum_{n=1}^{\infty} \dfrac{(-1)^n2^{2n}B_{2n}x^{2n}}{(2n)(2n)!}\\
\end{array}
$</p>
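<p>The truncated series can be compared against the exact value numerically; for small $x$ the agreement is already excellent with three terms (a quick Python sketch):</p>

```python
import math

def series(x, terms=3):
    # -x^2/6 - x^4/180 - x^6/2835 - ...
    coeffs = [-1 / 6, -1 / 180, -1 / 2835]
    return sum(c * x ** (2 * (k + 1)) for k, c in enumerate(coeffs[:terms]))

x = 0.1
exact = math.log(math.sin(x) / x)
approx = series(x)
print(exact, approx)
assert abs(exact - approx) < 1e-11
```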
|
109,298 | <p>I'm an applied model theorist, and open image theorems are important in the mathematical structures I study (they limit the number of types of elements being realised, and therefore keep things model theoretically nice e.g. stable). </p>
<p>So I have some idea as to why these open image theorems should hold from a model theoretic viewpoint, and I know that these are regarded as important theorems, but I don't think I've ever come across a diophantine application of an open image theorem in the literature and I'd like to see one.</p>
<p>I'm most familiar with Serre's open image theorem for elliptic curves so an example in this context would be ideal.</p>
| Joe Silverman | 11,926 | <p>Here's an application to independence of Heegner points. (But if you search on MathSciNet for papers that reference Serre's two results, I expect you'll find a very large number of applications.)</p>
<p>Let $E/\mathbb{Q}$ be an elliptic curve with no CM, and let $\Phi:X_0(N)\to E$ be a modular parametrization. (Wiles et.al. show that $\Phi$ exists for all such $E$.) The modular curve $X_0(N)$ has special points called <em>Heegner points</em> associated to pairs $(C,\Gamma)$, where $C$ is a CM elliptic curve and $\Gamma\subset C$ is a cyclic subgroup of order $N$. More precisely, we can associate to each imaginary quadratic field $K$ (satisfying some conditions) a Heegner point $x_K\in X_0(\overline{\mathbb{Q}})$ associated to the maximal order in $K$.</p>
<p><strong>Theorem</strong> [1] Let $K_1,\ldots,K_r$ be distinct imaginary quadratic fields such that the odd parts of their class numbers are sufficiently large. Then the points $\Phi(x_{K_1}),\ldots,\Phi(x_{K_r})$ are linearly independent in the group $E(\overline{\mathbb{Q}})$.</p>
<p>The proof uses Serre's image of Galois theorem in a crucial way. Not simply that the image of Galois is open in each $\hbox{Aut}(T_\ell(E))$, but also that it is surjective for almost all $\ell$.</p>
<p>[1] M. Rosen, JH Silverman, On the independence of Heegner points associated to
distinct quadratic imaginary fields, <em>Journal of Number Theory</em>
<strong>127</strong> (2007), 10-36.</p>
|
1,380,402 | <p>I'm developing a C++ program and I need to find a formula that, given a number to reduce and a limit number, gives a value between 0 and the limit number.</p>
<p>I don't know if it is allow to put C++ code here, but I want to show you my function:</p>
<pre><code>double Utils::reduceNumber(double numberToReduce, double limitNumber)
{
    double factor = 0.0;
    double result = 0.0;

    factor = std::abs(numberToReduce / limitNumber);
    if (factor != (int)factor)
        factor = (int)factor + 1;

    // 'byNumber' in the original snippet was undefined; 'limitNumber' is meant
    if (numberToReduce > 0)
        result = numberToReduce - (limitNumber * factor);
    else
        result = numberToReduce + (limitNumber * factor);

    return result;
}
</code></pre>
<p>For example, if I want to reduce −465.986246 to a value between 0 and 24, I have to do this:</p>
<pre><code>−465.986246 + (24 x 20) = 14.013754
</code></pre>
<p>What is the formula to obtain that 20?</p>
| VansFannel | 193,243 | <p>Using math love's solution, this is my function now:</p>
<pre><code>double Utils::reduceNumber(double numberToReduce, double limitNumber)
{
    float factor = 0.0;
    double result = 0.0;

    factor = (limitNumber - numberToReduce) / limitNumber;
    if (factor < 0.0)
    {
        factor -= 1;
    }
    else
    {
        if ((std::abs(factor) - (int)std::abs(factor)) > 0.5)
        {
            factor += 1;
        }
    }

    // 'byNumber' was undefined; 'limitNumber' is meant
    result = numberToReduce + (limitNumber * (int)factor);

    return result;
}
</code></pre>
<p>Now it works with 28.668119326367531 and with −465.986246 numbers.</p>
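<p>For what it's worth, assuming the goal is a value in <code>[0, limit)</code>, the whole <code>factor</code> computation collapses to the standard floor-mod formula <code>result = value - limit * floor(value / limit)</code>. A language-agnostic sketch in Python (the function name is made up; in C++ the same thing is <code>value - limit * std::floor(value / limit)</code>), using the example values from the question:</p>

```python
import math

def reduce_number(value, limit):
    # wrap value into [0, limit) using floor division,
    # equivalent to C++ `value - limit * std::floor(value / limit)`
    return value - limit * math.floor(value / limit)

print(reduce_number(-465.986246, 24))         # ~ 14.013754  (the "20" is -floor(v/limit))
print(reduce_number(28.668119326367531, 24))  # ~ 4.668119326367531
assert abs(reduce_number(-465.986246, 24) - 14.013754) < 1e-9
```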
|
3,729,851 | <p>So I have the following question here.</p>
<blockquote>
<p>Suppose that <span class="math-container">$y_1$</span> solves <span class="math-container">$2y''+y'+3x^2y=0$</span> and <span class="math-container">$y_2$</span> solves <span class="math-container">$2y''+y'+3x^2y=e^x$</span>. Which of the following is a solution of <span class="math-container">$2y''+y'+3x^2y=-2e^x$</span>?</p>
</blockquote>
<blockquote>
<p>(A) <span class="math-container">$3y_1-2y_2$</span></p>
</blockquote>
<blockquote>
<p>(B) <span class="math-container">$y_1+2y_2$</span></p>
</blockquote>
<blockquote>
<p>(C) <span class="math-container">$2y_1-y_2$</span></p>
</blockquote>
<blockquote>
<p>(D) <span class="math-container">$y_1+2y_2-2e^x$</span></p>
</blockquote>
<blockquote>
<p>(E) None of the above.</p>
</blockquote>
<p>The answer is supposed to be <span class="math-container">$A$</span>. But I am not really sure how that happened.</p>
<p>I know that for <span class="math-container">$2y''+y'+3x^2y=-2e^x$</span> the solution is always the homogeneous part and the particular part added together.</p>
<p>Furthermore, I know that the homogeneous part is given as <span class="math-container">$y_1$</span>.</p>
<p>I then know that for <span class="math-container">$2y''+y'+3x^2y=e^x$</span> the solution for that one is composed of the homogeneous part and the particular part and that I can also write the ODE as <span class="math-container">$-4y''-2y'-6x^2y=-2e^x$</span>. So this implies that the homogeneous portion is just <span class="math-container">$-2y_1$</span>.</p>
<p>I can't get further than that though. Is my thought process right so far? If not, what more can I do and how can I proceed from here? I can't even solve the first two equations since they have no analytic solution.</p>
<p>This is from an old exam I was looking at, and not an assignment, so feel free to show work.</p>
| Nico De Tullio | 788,836 | <p>You are on the right track. Just notice that <span class="math-container">$ay_1$</span> is a solution to the homogeneous for any constant a. So, when you say that "the solution to the homogeneous is just <span class="math-container">$-2y_1$</span>", you could also say that <span class="math-container">$3y_1$</span> is also a solution. Then substitute <span class="math-container">$-2y_2$</span> in the LHS, and use the information you have about <span class="math-container">$y_2$</span>, what do you get?</p>
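<p>The underlying reason answer (A) works is linearity of the operator $L[y] = 2y'' + y' + 3x^2y$: indeed $L[3y_1-2y_2] = 3L[y_1]-2L[y_2] = 3\cdot 0 - 2e^x = -2e^x$. Since $y_1,y_2$ have no closed form, here is a quick numerical illustration of that linearity using finite differences and arbitrary smooth test functions (a sketch, not the actual solutions):</p>

```python
import math

h = 1e-3

def L(f, x):
    # finite-difference approximation of 2 f'' + f' + 3 x^2 f
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
    return 2 * d2 + d1 + 3 * x * x * f(x)

u, v = math.sin, math.exp          # arbitrary smooth test functions
w = lambda x: 3 * u(x) - 2 * v(x)  # the combination from answer (A)

for x in (0.0, 0.7, 1.3):
    # linearity: L[3u - 2v] = 3 L[u] - 2 L[v], up to rounding
    assert abs(L(w, x) - (3 * L(u, x) - 2 * L(v, x))) < 1e-6
print("L is linear on the tested points")
```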
|
4,530,792 | <p>I have the following sequence <span class="math-container">$\left \{k \sin \left(\frac{1}{k}\right) \right\}^{\infty}_{1}$</span>. I don't know how to show that this is monotonically increasing.</p>
<p>I tried taking the derivative of the corresponding function <span class="math-container">$f(x) = x \sin \left(\frac{1}{x}\right)$</span>, and showing that <span class="math-container">$f^{\prime} \geq 0$</span> for <span class="math-container">$x \geq 1$</span>, but the derivative is kind of messy. The problem comes down to showing that <span class="math-container">$$\sin(\frac{1}{x}) - \frac{\cos{\frac{1}{x}}}{x} \geq 0.$$</span></p>
<p>I am open to other approaches too, maybe some outside-the-box approach that doesn't even need the first derivative. But it feels like one should be able to show that the inequality holds.
Thank you!</p>
| Claude Leibovici | 82,404 | <p><em>Without derivatives</em></p>
<p><span class="math-container">$$a_k=k \sin \left(\frac{1}{k}\right) \quad \implies \quad \frac{a_{k+1}}{a_k}=
\frac{k+1}{k} \sin \left(\frac{1}{k+1}\right) \csc \left(\frac{1}{k}\right)$$</span> If <span class="math-container">$k$</span> is large, by Taylor</p>
<p><span class="math-container">$$\frac{a_{k+1}}{a_k}=1+\frac{1}{3 k^3}+O\left(\frac{1}{k^4}\right)$$</span></p>
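<p>Both the monotonicity and the ratio expansion are easy to confirm numerically (a quick Python sketch):</p>

```python
import math

# a_k = k sin(1/k) for k = 1 .. 101
a = [k * math.sin(1 / k) for k in range(1, 102)]

# strictly increasing, with limit 1 from below
assert all(a[i] < a[i + 1] for i in range(len(a) - 1))
assert a[-1] < 1

# ratio a_{k+1}/a_k is close to 1 + 1/(3k^3) for large k (here k = 100)
ratio = a[100] / a[99]
assert abs((ratio - 1) - 1 / (3 * 100 ** 3)) < 1e-7
print("sequence is increasing; ratio matches 1 + 1/(3k^3)")
```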
|
2,012,947 | <p>I'm trying to prove that if f,g are continuous functions, and if E is a dense subset of X $(\text{or } Cl(E) = X)$ and if $f(x)=g(x) \forall x \in E$ then $f(x)=g(x) \forall x \in X$. </p>
<p>I understand that if f,g are continuous, then:</p>
<blockquote>
<p>$\exists \delta_1, \delta_2$ such that $\forall x \in E$ with $d(x,p)< \delta_1$, $|f(x) - f(p)| < \epsilon$ and similarly $\forall x \in E$ with $d(x,p)< \delta_2$, $|g(x) - g(p)| < \epsilon$</p>
</blockquote>
<p>And by definition of closure, I know that:</p>
<blockquote>
<p>$Cl(E) = E \cup E'$ where $E'$ is the set of accumulation points of $E$, where $p$ is an accumulation point if $\forall r>0, (E\cap N_r(p)) \backslash \{p\} \neq \emptyset $</p>
</blockquote>
<p>I have zero clue on how to approach this problem. If $f(x) = g(x)$, then I'm guessing it implies that $|f(x) - f(p)| = |g(x) - g(p)|$. And so I'm guessing that $\delta_1 = \delta_2$. </p>
<p>Help would be very much appreciated. </p>
| mildboson | 669,255 | <p>A little bit late, but I decided to give an alternate answer without using contradiction or the sequence definition of continuity. Let <span class="math-container">$p \in X$</span> and <span class="math-container">$\epsilon > 0$</span> be given. If <span class="math-container">$p \in E$</span> we are done, otherwise <span class="math-container">$p$</span> is a limit point of <span class="math-container">$E$</span>. By continuity of <span class="math-container">$f, g$</span>, there exists <span class="math-container">$\delta_1, \delta_2 >0$</span> s.t. <span class="math-container">$\forall x \in X$</span> we have <span class="math-container">$d(p,x) < \delta_1 \implies d(f(x), f(p)) < \epsilon/2$</span> and <span class="math-container">$d(p, x) < \delta_2 \implies d(g(p), g(x)) < \epsilon/2$</span>. Set <span class="math-container">$\delta = \min\{\delta_1, \delta_2\}$</span>. Since <span class="math-container">$E$</span> is dense in <span class="math-container">$X$</span>, there is some <span class="math-container">$q \in E \cap B_{\delta}(p) \backslash \{p\}$</span>. Thus, we have that <span class="math-container">$d(f(p), g(p)) \leq d(f(p), f(q)) + d(f(q), g(q)) + d(g(q),g(p)) < \epsilon$</span>. Finally, since <span class="math-container">$\epsilon$</span> was arbitrary, we conclude that <span class="math-container">$d(f(p), g(p)) = 0 \implies f(p) = g(p)$</span> for all <span class="math-container">$p \in X$</span>.</p>
<p>Note: <span class="math-container">$d(f(q), g(q)) = 0$</span> since <span class="math-container">$f,g$</span> agree on <span class="math-container">$E$</span> by hypothesis.</p>
|
205,926 | <p>I'm trying to understand a proof about density of a subset $X$ in its one-point compactification $Y$.</p>
<p>We can do this proof by contradiction, suppose we don't have $\operatorname{cl}(X) = Y$.
This implies that $\operatorname{cl}(X) = X$. </p>
<p>Why? Can anyone help me?</p>
<p>Thanks</p>
| Brian M. Scott | 12,042 | <p>You’re making it much harder than it really is. $Y=X\cup\{p\}$, where $p$ is the new point. To show that $X$ is dense in $Y$, you need only show that every open neighborhood of $p$ has non-empty intersection with $X$. Go back to the definition of the one-point compactification and see why this is true: what are the open neighborhoods of $p$? Why must each of them have non-empty intersection with $X$? It has to do with the fact that $X$ is not compact.</p>
|
4,540,637 | <blockquote>
<p>Given a line <span class="math-container">$y=kx$</span> on a Cartesian coordinate, I want to find an equation of a parabola, whose base is on that line at point <span class="math-container">$(x_1,y_1)$</span> and passes through point <span class="math-container">$(x_2,y_2)$</span>.</p>
</blockquote>
<p>Then, an equation of a red line that is perpendicular to line <span class="math-container">$y=kx$</span> at point <span class="math-container">$(x_1,y_1)$</span> is <span class="math-container">$y=-\frac{1}{k}x+y_1+\frac{1}{k}x_1$</span>. I tried to assume that <span class="math-container">$y=kx$</span> line is my new <span class="math-container">$x$</span> axis, and <span class="math-container">$y=-\frac{1}{k}x+y_1+\frac{1}{k}x_1$</span> is my new <span class="math-container">$y$</span> axis and do calculations, but the computation became messy and I couldn't finish. Is there simple way to solve this problem?</p>
<p><a href="https://i.stack.imgur.com/PuEIL.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/PuEIL.jpg" alt="enter image description here" /></a></p>
| insipidintegrator | 1,062,486 | <p><a href="https://math.stackexchange.com/questions/4180898/axes-based-equations-of-conics">From this post</a>, we can write the equation of a parabola as
<span class="math-container">$$d_{\text{Axis}}^2=4a\cdot d_{\text{Tangent at vertex}}.$$</span>
Here <span class="math-container">$d_{\text{line}}$</span> represents the algebraic distance of any arbitrary point on the parabola from the said line. <span class="math-container">$**$</span>(<em>More intuitive explanation below</em>).</p>
<hr />
<p>Here, tangent at vertex is <span class="math-container">$y=kx$</span>.
<br> The axis (straight line through the vertex (that you referred to as the base) and perpendicular to tangent at vertex) is <span class="math-container">$y+\dfrac xk=y_1+\dfrac{x_1}{k}$</span>.</p>
<p>Thus the equation of parabola is <span class="math-container">$$\bigg(\frac{ky+x-(ky_1+x_1)}{\sqrt{1+k^2}}\bigg)^2=4a\cdot\dfrac{y-kx}{\sqrt{1+k^2}}$$</span></p>
<p>Put <span class="math-container">$(x_2, y_2)$</span> into the equation and find <span class="math-container">$a$</span>. Done :)</p>
<hr />
<hr />
<p><span class="math-container">$**$</span> Intuitive explanation: Consider the standard parabola <span class="math-container">$y^2=4ax$</span>. Here <span class="math-container">$y$</span>= Distance from X axis (<span class="math-container">$\equiv$</span> Axis) and <span class="math-container">$x$</span>= Distance from Y axis (<span class="math-container">$\equiv $</span> Tangent at vertex)</p>
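<p>The recipe "put $(x_2,y_2)$ into the equation and find $a$" is easy to run numerically; the sample values of $k$, the vertex and the target point below are made up for illustration:</p>

```python
import math

k = 2.0
x1, y1 = 1.0, 2.0   # vertex, chosen on y = kx
x2, y2 = 3.0, 10.0  # target point (sample values)

s = math.sqrt(1 + k * k)

def axis_dist(x, y):     # signed distance from the axis
    return (k * y + x - (k * y1 + x1)) / s

def tangent_dist(x, y):  # signed distance from the tangent at the vertex
    return (y - k * x) / s

# solve axis_dist^2 = 4a * tangent_dist for 4a using the target point
four_a = axis_dist(x2, y2) ** 2 / tangent_dist(x2, y2)

def on_parabola(x, y):
    return abs(axis_dist(x, y) ** 2 - four_a * tangent_dist(x, y)) < 1e-9

assert on_parabola(x1, y1)  # the vertex (both sides are 0 there)
assert on_parabola(x2, y2)  # the target point, by construction
print("4a =", four_a)
```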
|
691,734 | <p>Consider the sequence defined recursively by $x_1=\sqrt2$ and where $x_n=\sqrt2 + x_{n-1}$. </p>
<p>Find an explicit formula for the $n^{th}$ term.</p>
<p>I considered using the general equation for the $n^{th}$ term of an arithmetic sequence, $a_n = a_1 + d(n-1)$, but I came to no conclusion that helped my argument. </p>
<p>Am I using the correct method? </p>
| naslundx | 130,817 | <p>You are right in that it is an arithmetic series. A good strategy is to write up the first terms, simplify, and try to find a pattern:</p>
<p>$$a_1 = \sqrt{2}$$
$$a_2 = \sqrt{2} + a_1 = 2\sqrt{2}$$
$$a_3 = \sqrt{2} + a_2 = \sqrt{2} + 2\sqrt{2} = 3\sqrt{2}$$
$$a_4 = \sqrt{2} + a_3 = \sqrt{2} + 3\sqrt{2} = 4\sqrt{2}$$</p>
<p>And so on. Hence we see the pattern:</p>
<p>$$a_n = n \cdot \sqrt{2}$$</p>
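<p>A trivial numerical check of the pattern (Python sketch):</p>

```python
import math

x = math.sqrt(2)                              # x_1
for n in range(1, 50):
    assert abs(x - n * math.sqrt(2)) < 1e-9   # closed form a_n = n*sqrt(2)
    x += math.sqrt(2)                         # recursion x_n = sqrt(2) + x_{n-1}
```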
|
617,927 | <p>Find the taylor expansion of $\sin(x+1)\sin(x+2)$ at $x_0=-1$, up to order $5$.</p>
<p><strong>Taylor Series</strong></p>
<p>$$f(x)=f(a)+(x-a)f'(a)+\frac{(x-a)^2}{2!}f''(a)+...+\frac{(x-a)^r}{r!}f^{(r)}(a)+...$$</p>
<p>I've got my first term...</p>
<p>$f(a) = \sin(-1+1)\sin(-1+2)=\sin(0)\sin(1)=0$</p>
<p>Now, I've calculated $f'(x)=\sin(x+1)\cos(x+2)+\sin(x+2)\cos(x+1)$</p>
<p>So that $f'(-1) = \sin(1) = 0.8414709848$</p>
<p>This means my second term would be $(x+1)(0.8414709848).$</p>
<p>But, this doesn't seem to be nice and neat like the other expansions I have done and I can't figure out what I've done wrong.</p>
<p>Merry Christmas and thanks in advance.</p>
| nadia-liza | 113,971 | <p>Use this formula:
$\sin(a)\sin(b)=\frac{1}{2}\big(\cos(a-b)-\cos(a+b)\big)$</p>
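<p>A quick numerical spot-check of the identity, and of the closed form it gives here, $f(x)=\sin(x+1)\sin(x+2)=\frac12\big(\cos 1-\cos(2x+3)\big)$ (Python sketch):</p>

```python
import math, random

random.seed(0)
for _ in range(100):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = math.sin(a) * math.sin(b)
    rhs = 0.5 * (math.cos(a - b) - math.cos(a + b))
    assert abs(lhs - rhs) < 1e-12

# With a = x+1, b = x+2 the identity gives
# f(x) = (cos 1 - cos(2x+3)) / 2, so only cos(2x+3) needs expanding about x = -1.
f = lambda x: 0.5 * (math.cos(1) - math.cos(2 * x + 3))
for x in (-1.0, -0.9, -1.3, 0.2):
    assert abs(f(x) - math.sin(x + 1) * math.sin(x + 2)) < 1e-12
```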
|
661,269 | <p>Check if $\mathbb{Z}_5/x^2 + 3x + 1$ is a field. Is $(x+2)$ a unit, if so calculate its inverse. </p>
<p>I would say that this quotient ring is not a field, because $<x^2 + 3x + 1>$ is not a maximal ideal, since $x^2 + 3x + 1 = (x+4)^2$ is not irreducible. </p>
<p>However, the result should still be a ring, right? How do I check if $(x+2)$ is a unit in that ring. Should I just "try" to invert it, or is there a better way.</p>
<p>Thanks </p>
| voldemort | 118,052 | <p><strong>V.X</strong> will be Normal, as linear combination of normal rvs is again normal. Check <a href="http://www.statlect.com/normal_distribution_linear_combinations.htm" rel="nofollow">this</a>.</p>
|
1,376,651 | <p>To be specific here is the system:</p>
<p>$$x-2y=0 \tag{1}$$
$$x-2(k+2)y=0 \tag{2}$$
$$x-(k+3)y=-k \tag{3}$$ </p>
<p>I have already solved it for equations $(1)$ and $(2)$... what should I do with the 3rd equation?</p>
<p>Just to make sure everything goes well here is my method:</p>
<p>$D=-2(k+2)$ and $D_x=D_y=0$</p>
<p>If $k=-2$ then $D=0$ so there are indefinite solutions.
If $k\not=-2$ then $D\not=0$ so the solution is $(0,0)$</p>
| MJD | 25,554 | <p>The answer is no; John can't even fill up the topmost $7\times 11\times 1$ slice of the $7\times 11\times 9$ box. Consider just the top $7\times 11$ face of this box; look just at this face and ignore the rest of the box. A solution to the problem would fill up this $7\times 11$ rectangle with large $3\times3$ rectangles and small $3\times 1$ rectangles. But $7\times 11$ is not a multiple of $3$.</p>
|
988,628 | <p>Problem : </p>
<p>For the series $$S = 1+ \frac{1}{(1+3)}(1+2)^2+\frac{1}{(1+3+5)}(1+2+3)^2+\frac{1}{(1+3+5+7)}(1+2+3+4)^2+\cdots $$ Find the nth term of the series. </p>
<p>We know that nth can term of the series can be find by using $T_n = S_n -S_{n-1}$ </p>
<p>$$S_n =1+ \sum \frac{(\frac{n(n+1)}{2})^2}{(2n-1)^2}$$ </p>
<p>$$\Rightarrow S_n =\frac{n^4+5n^2+2n^3-4n+1}{(2n-1)^2}$$ </p>
<p>But I think this is wrong, please suggest how to proceed thanks..</p>
| Bumblebee | 156,886 | <p>First we should note that $$1+2+3+...+n=\dfrac{n(n+1)}{2}$$ and $$1+3+5+...+(2n-1)=n^2.$$ Therefore the general term of your series becomes $$T_n=\dfrac{(1+2+3+...+n)^2}{1+3+5+...+(2n-1)}=\dfrac{(n+1)^2}{4}$$ $$S_n=\sum_{k=1}^nT_k\\=\sum_{k=1}^n\dfrac{(k+1)^2}{4}\\=\sum_{k=1}^{n+1}\dfrac{k^2}{4}-\dfrac{1}{4}\\=\dfrac{(n+1)(n+2)(2n+3)}{24}-\dfrac{1}{4}$$</p>
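<p>These formulas are easy to confirm with exact rational arithmetic; a small Python sketch using the standard <code>fractions</code> module:</p>

```python
from fractions import Fraction

def S_direct(n):
    # S_n = sum of T_k = (k+1)^2 / 4 for k = 1..n
    return sum(Fraction((k + 1) ** 2, 4) for k in range(1, n + 1))

def S_closed(n):
    return Fraction((n + 1) * (n + 2) * (2 * n + 3), 24) - Fraction(1, 4)

for n in range(1, 30):
    assert S_direct(n) == S_closed(n)
```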
|
433,816 | <p>While I was studying the measurements of pressure at earth's atmosphere,I found the barometric formula which is more complex equation ($P'=Pe^{-mgh/kT}$) than what I used so far ($p=h\rho g$).</p>
<p>So I want to know how this complex formula build up? I could reach at the point of
$${dP \over dh}=-{mgP \over kT}$$
From this how can I obtain that equation. Please give me a Mathematical explanation.</p>
| nbubis | 28,743 | <p>This type of problem is known as a differential equation. In this particular case the solution can be guessed, since you have that the derivative of the function $P$ is just a constant times $P$. The only function satisfying this condition is the exponential, since:
$${d(A e^{a x}) \over dx} = a A e^{a x}$$
Thus, if $${dP\over dh} = \left( \frac {-mg}{kT}\right) P$$
Then $P$ must be:
$$P(h) = A\exp\left(\frac {-mg}{kT} \cdot h\right)$$
Where $A$ is a constant. In particular, if the pressure $P_0$ at $h=0$ is known, then by substitution we have:
$$P(h) = P_0\exp\left(\frac {-mg}{kT} \cdot h\right)$$</p>
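<p>The closed-form solution can also be checked against a direct numerical integration of $\frac{dP}{dh}=-\frac{mg}{kT}P$. In the Python sketch below, the constant $c = mg/kT$ and $P_0$ are arbitrary illustrative values, and a classical RK4 stepper is used:</p>

```python
import math

c, P0 = 0.12, 101325.0        # arbitrary mg/kT and sea-level pressure
f = lambda P: -c * P          # dP/dh = -(mg/kT) P

P, dh, steps = P0, 0.01, 1000
for _ in range(steps):        # classical RK4 integration up to h = 10
    k1 = f(P)
    k2 = f(P + 0.5 * dh * k1)
    k3 = f(P + 0.5 * dh * k2)
    k4 = f(P + dh * k3)
    P += dh * (k1 + 2 * k2 + 2 * k3 + k4) / 6
h = steps * dh

exact = P0 * math.exp(-c * h)
assert abs(P - exact) / exact < 1e-8   # agrees with P0 * exp(-c h)
```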
|
361,201 | <p>Let $\left\{ x_\alpha : \alpha \in \mathscr{A}\right\} \subset (0, + \infty ) $ be a set of positive real numbers such that for every countable subcollection $ \left\{ x_{\alpha_n} \right\} $ of distinct points it holds $ x_{\alpha_n} \rightarrow 0 $. Then $ \mathscr{A} $ is a countable set. \</p>
<p>I think that this statement is true. How can i prove it? (if it is true)</p>
<p>Thanks</p>
| xyzzyz | 23,439 | <p>Hint: for each $n$, there are only finitely many $\alpha \in \mathscr{A}$ such that $x_\alpha \geq \frac{1}{n}$ -- otherwise you can easily find a sequence of distinct points that does not converge to 0.</p>
|
1,585,408 | <p>I have the equation $(1-x^2)u'' - xu' + ku = 0$,
where $'$ represents differentiation with respect to $x$ and $k$ is a constant.</p>
<p>I am asked to show that $\cos(k^{1/2}\cos^{-1}x)$ is a solution to this equation.</p>
<p>I assumed that to show this you need to set $u=\cos(k^{1/2}\cos^{-1}x)$, substitute it into the Tchebycheff equation, and show that it equals zero. However, when doing this I get quite a lot of messy differentiation. </p>
<p>The question before this asked to put the equation into Sturm Liouville form, so I am unsure if that is meant to be used to solve the question.</p>
<p>Any hints on how to advance would be greatly appreciated.</p>
| JJacquelin | 108,514 | <p>$$(1-x^2)\frac{d^2u}{dx^2}-x\frac{du}{dx}+ku=0$$
Let $x=\cos(t)$, hence $dx=-\sin(t)\,dt$, then $\frac{dt}{dx}=-\frac{1}{\sin(t)}$</p>
<p>$\frac{du}{dx}=\frac{du}{dt}\frac{dt}{dx}= -\frac{1}{\sin(t)}\frac{du}{dt}$</p>
<p>$\frac{d^2u}{dx^2}=\frac{d\left( -\frac{1}{\sin(t)}\frac{du}{dt}\right)}{dx}=
\frac{d\left( -\frac{1}{\sin(t)}\frac{du}{dt}\right)}{dt}\frac{dt}{dx}=\frac{d\left( -\frac{1}{\sin(t)}\frac{du}{dt}\right)}{dt}\left(-\frac{1}{\sin(t)}\right)=\frac{1}{\sin^2(t)}\frac{d^2u}{dt^2}-\frac{\cos(t)}{\sin^3(t)}\frac{du}{dt}$</p>
<p>$(1-x^2)\frac{d^2u}{dx^2}-x\frac{du}{dx}+ku=\sin^2(t)\left(\frac{1}{\sin^2(t)}\frac{d^2u}{dt^2}-\frac{\cos(t)}{\sin^3(t)}\frac{du}{dt} \right) - \cos(t)\left(-\frac{1}{\sin(t)}\frac{du}{dt} \right)+ku=0$</p>
<p>after simplification :</p>
<p>$$\frac{d^2u}{dt^2}+ku=0$$</p>
<p>$$u=c_1\cos(\sqrt k \:t)+c_2\sin(\sqrt k \:t)$$
$t=\cos^{-1}(x)$</p>
<p>$$u=c_1\cos\left(\sqrt k \:\cos^{-1}(x)\right)+c_2\sin\left(\sqrt k \:\cos^{-1}(x)\right)$$</p>
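<p>As a sanity check, one can verify numerically (finite differences in Python) that $u(x)=\cos\left(\sqrt k \:\cos^{-1}(x)\right)$ satisfies the original equation; for $k=4$ this is exactly the Chebyshev polynomial $T_2(x)=2x^2-1$:</p>

```python
import math

def u(k, x):
    return math.cos(math.sqrt(k) * math.acos(x))

def residual(k, x, h=1e-5):
    # central finite differences for u' and u''
    up = (u(k, x + h) - u(k, x - h)) / (2 * h)
    upp = (u(k, x + h) - 2 * u(k, x) + u(k, x - h)) / (h * h)
    return (1 - x * x) * upp - x * up + k * u(k, x)

for k in (1.0, 2.0, 4.0, 7.3):          # 7.3: k need not be an integer
    for x in (-0.8, -0.3, 0.1, 0.6, 0.9):
        assert abs(residual(k, x)) < 1e-4

# k = 4 gives T_2(x) = 2x^2 - 1 exactly
assert abs(u(4.0, 0.3) - (2 * 0.3 ** 2 - 1)) < 1e-12
```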
|
2,296,544 | <p>Let $\{F_n\}, n\in \mathbb{N}$ be the sequence of Fibonacci numbers such that $F_1=1$, $F_2=1$ and $F_n=F_{n-1}+F_{n-2}$ $\forall n\geq2$.</p>
<p>Define a new sequence $\{S_n\}$ such that $S_n=F_n+1$ $\forall n\in \mathbb{N}$.</p>
<p>Now the question is: For every prime $p$, does there exist an $N\in \mathbb{N}$, such that $p|S_N$ ?</p>
| Angina Seng | 436,618 | <p>Well $S_{-2}=0$, but $-2\notin\Bbb N$. Never mind!</p>
|
2,953,837 | <p>Given <span class="math-container">$n_1$</span> number of a's, <span class="math-container">$n_2$</span> number of b's, <span class="math-container">$n_3$</span> number of c's.</p>
<p>They form a sequence using all these characters such that no two a's and no two b's are adjacent.</p>
<p>(a and b can be adjacent, but two a's or b's cannot be adjacent. c has no restrictions.)</p>
<p>for example : acbccab is valid for <span class="math-container">$n_1=2$</span>, <span class="math-container">$n_2=2$</span>, <span class="math-container">$n_3=3$</span></p>
<p>but,</p>
<p>cbcbcaa is not valid as two a's are adjacent.</p>
<p>I have tried lots of things but nothing worked.</p>
<p>Can somebody tell me how to solve this problem?</p>
| epi163sqrt | 132,007 | <p>This answer is based upon generating functions of <strong>Smirnov words</strong>. These are words with no equal consecutive characters. (See example III.24 <em>Smirnov words</em> from <em><a href="http://algo.inria.fr/flajolet/Publications/books.html" rel="nofollow noreferrer">Analytic Combinatorics</a></em> by Philippe Flajolet and Robert Sedgewick for more information.) </p>
<blockquote>
<p>We have <span class="math-container">$n_1$</span> <span class="math-container">$a$</span>'s, <span class="math-container">$n_2$</span> <span class="math-container">$b$</span>'s and <span class="math-container">$n_3$</span> <span class="math-container">$c$</span>'s and are looking for Smirnov words of length <span class="math-container">$n_1+n_2+n_3$</span>.</p>
<p>A generating function for the number of Smirnov words over the three letter alphabet <span class="math-container">$V=\{a,b,c\}$</span> is given by
<span class="math-container">\begin{align*}
\left(1-\frac{a}{1+a}-\frac{b}{1+b}-\frac{c}{1+c}\right)^{-1}\tag{1}
\end{align*}</span></p>
</blockquote>
<p>In fact there is no restriction given for the character <span class="math-container">$c$</span>. We respect this by substituting
<span class="math-container">\begin{align*}
c\to c+c^2+c^3+\cdots=\frac{c}{1-c}
\end{align*}</span></p>
<blockquote>
<p>We so obtain from (1) the generating function
<span class="math-container">\begin{align*}
\left(1-\frac{a}{1+a}-\frac{b}{1+b}-\frac{\frac{c}{1-c}}{1+\frac{c}{1-c}}\right)^{-1}
=\color{blue}{\frac{(1+a)(1+b)}{1-c-ab-ac-bc-abc}}\tag{2}
\end{align*}</span></p>
</blockquote>
<p>We use the <em>coefficient of</em> operator <span class="math-container">$[z^n]$</span> to denote the coefficient of <span class="math-container">$z^n$</span> in a series <span class="math-container">$A(z)$</span>.</p>
<blockquote>
<p>The number of wanted words of length <span class="math-container">$n_1+n_2+n_3$</span> is therefore
<span class="math-container">\begin{align*}
\left[a^{n_1}b^{n_2}c^{n_3}\right]&\left(1-\frac{a}{1+a}-\frac{b}{1+b}-\frac{\frac{c}{1-c}}{1+\frac{c}{1-c}}\right)^{-1}\\
&\color{blue}{=\left[a^{n_1}b^{n_2}c^{n_3}\right]\frac{(1+a)(1+b)}{1-c-ab-ac-bc-abc}}\tag{3}
\end{align*}</span></p>
<p><em>Example:</em></p>
<p>We consider the example of OP setting <span class="math-container">$(n_1,n_2,n_3)=(2,2,3)$</span> and we obtain from (3) with some help of Wolfram Alpha
<span class="math-container">\begin{align*}
\left[a^2b^2c^3\right]\frac{(1+a)(1+b)}{1-c-ab-ac-bc-abc}\color{blue}{=110}
\end{align*}</span></p>
</blockquote>
<p>The <span class="math-container">$110$</span> valid words are listed below with OPs stated word marked in <span class="math-container">$\color{blue}{\text{blue}}$</span>.</p>
<p><span class="math-container">\begin{array}{cccccc}
\mathrm{ababccc}&\mathrm{acbacbc}&\mathrm{bacaccb}&\mathrm{bcbacca}&\mathrm{cacbcab}&\mathrm{ccabcab}\\
\mathrm{abacbcc}&\mathrm{acbaccb}&\mathrm{bacbacc}&\mathrm{bcbcaca}&\mathrm{cacbcba}&\mathrm{ccabcba}\\
\mathrm{abaccbc}&\mathrm{acbcabc}&\mathrm{bacbcac}&\mathrm{bccabac}&\mathrm{caccbab}&\mathrm{ccacbab}\\
\mathrm{abacccb}&\mathrm{acbcacb}&\mathrm{bacbcca}&\mathrm{bccabca}&\mathrm{cbabacc}&\mathrm{ccbabac}\\
\mathrm{abcabcc}&\mathrm{acbcbac}&\mathrm{baccabc}&\mathrm{bccacab}&\mathrm{cbabcac}&\mathrm{ccbabca}\\
\mathrm{abcacbc}&\mathrm{acbcbca}&\mathrm{baccacb}&\mathrm{bccacba}&\mathrm{cbabcca}&\mathrm{ccbacab}\\
\mathrm{abcaccb}&\color{blue}{\mathrm{acbccab}}&\mathrm{baccbac}&\mathrm{bccbaca}&\mathrm{cbacabc}&\mathrm{ccbacba}\\
\mathrm{abcbacc}&\mathrm{acbccba}&\mathrm{baccbca}&\mathrm{bcccaba}&\mathrm{cbacacb}&\mathrm{ccbcaba}\\
\mathrm{abcbcac}&\mathrm{accabcb}&\mathrm{bacccab}&\mathrm{cababcc}&\mathrm{cbacbac}&\mathrm{cccabab}\\
\mathrm{abcbcca}&\mathrm{accbabc}&\mathrm{bacccba}&\mathrm{cabacbc}&\mathrm{cbacbca}&\mathrm{cccbaba}\\
\mathrm{abccabc}&\mathrm{accbacb}&\mathrm{bcabacc}&\mathrm{cabaccb}&\mathrm{cbaccab}&\\
\mathrm{abccacb}&\mathrm{accbcab}&\mathrm{bcabcac}&\mathrm{cabcabc}&\mathrm{cbaccba}&\\
\mathrm{abccbac}&\mathrm{accbcba}&\mathrm{bcabcca}&\mathrm{cabcacb}&\mathrm{cbcabac}&\\
\mathrm{abccbca}&\mathrm{acccbab}&\mathrm{bcacabc}&\mathrm{cabcbac}&\mathrm{cbcabca}&\\
\mathrm{acbcabc}&\mathrm{babaccc}&\mathrm{bcacacb}&\mathrm{cabcbca}&\mathrm{cbcacab}&\\
\mathrm{acbcacb}&\mathrm{babcacc}&\mathrm{bcacbac}&\mathrm{cabccab}&\mathrm{cbcacba}&\\
\mathrm{acabcbc}&\mathrm{babccac}&\mathrm{bcacbca}&\mathrm{cabccba}&\mathrm{cbcbaca}&\\
\mathrm{acabccb}&\mathrm{babccca}&\mathrm{bcaccab}&\mathrm{cacabcb}&\mathrm{cbccaba}&\\
\mathrm{acacbcb}&\mathrm{bacabcc}&\mathrm{bcaccba}&\mathrm{cacbabc}&\mathrm{ccababc}&\\
\mathrm{acbabcc}&\mathrm{bacacbc}&\mathrm{bcbacac}&\mathrm{cacbacb}&\mathrm{ccabacb}&\\
\end{array}</span></p>
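<p>For small <span class="math-container">$n_i$</span> the coefficient extraction can be cross-checked by brute force; this Python sketch enumerates all distinct arrangements and filters out those with an adjacent pair of equal <span class="math-container">$a$</span>'s or <span class="math-container">$b$</span>'s:</p>

```python
from itertools import permutations

def count_valid(n1, n2, n3):
    """Words with n1 a's, n2 b's, n3 c's and no 'aa' or 'bb' factor."""
    letters = 'a' * n1 + 'b' * n2 + 'c' * n3
    words = set(permutations(letters))          # distinct arrangements
    ok = lambda w: all(not (x == y and x in 'ab')
                       for x, y in zip(w, w[1:]))
    return sum(1 for w in words if ok(w))

assert count_valid(2, 2, 3) == 110              # matches [a^2 b^2 c^3] above
```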
|
514 | <p>I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.</p>
<p>I'm sure that everyone here is familiar with it; it describes an operation on a natural number – <span class="math-container">$n/2$</span> if it is even, <span class="math-container">$3n+1$</span> if it is odd.</p>
<p>The conjecture states that if this operation is repeated, all numbers will eventually wind up at <span class="math-container">$1$</span> (or rather, in an infinite loop of <span class="math-container">$1-4-2-1-4-2-1$</span>).</p>
<p>I fired up Python and ran a quick test on this for all numbers up to <span class="math-container">$5.76 \times 10^{18}$</span> (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at <span class="math-container">$1$</span>.</p>
<p>Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.)</p>
<p>I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"</p>
<p>To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"</p>
<p>And he said, "It is my conjecture that there are none! (and if any, they are rare)".</p>
<p>Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?</p>
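<p>(For reference, the brute-force check looks roughly like this; a minimal Python version, without the memoization I mentioned, and over a far smaller range:)</p>

```python
def reaches_one(n, limit=10**6):
    """Iterate the Collatz map; return True once the orbit hits 1."""
    for _ in range(limit):
        if n == 1:
            return True
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return False  # give up after `limit` steps

assert all(reaches_one(n) for n in range(1, 10 ** 4))
```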
| Chain Markov | 407,165 | <p>In combinatorics there are quite a few such disproven conjectures. The most famous of them are:</p>
<p>1) Tait conjecture:</p>
<blockquote>
<p>Any 3-vertex-connected planar cubic graph is Hamiltonian</p>
</blockquote>
<p>The first counterexample found has 46 vertices. The "least" counterexample known has 38 vertices.</p>
<p>2) Tutte conjecture:</p>
<blockquote>
<p>Any 3-connected bipartite cubic graph is Hamiltonian</p>
</blockquote>
<p>The first counterexample found has 96 vertices. The "least" counterexample known has 54 vertices.</p>
<p>3) Hedetniemi conjecture:</p>
<blockquote>
<p>The chromatic number of the tensor product of two finite undirected simple graphs is equal to the smaller of the chromatic numbers of those graphs.</p>
</blockquote>
<p>The first known counterexample to this conjecture has more than <span class="math-container">$4^{10000}$</span> vertices</p>
<p>4) Thom conjecture</p>
<blockquote>
<p>If two finite undirected simple graphs have conjugate adjacency matrices over <span class="math-container">$\mathbb{Z}$</span>, then they are isomorphic. </p>
</blockquote>
<p>The least known counterexample pair is formed by two trees with 11 vertices.</p>
<p>5) Borsuk conjecture:</p>
<blockquote>
<p>Every bounded subset <span class="math-container">$E$</span> of <span class="math-container">$\mathbb{R}^n$</span> can be partitioned into <span class="math-container">$n+1$</span> sets, each of which has a smaller diameter than <span class="math-container">$E$</span></p>
</blockquote>
<p>In the first counterexample found <span class="math-container">$n = 1325$</span>. In the "least" counterexample known <span class="math-container">$n = 64$</span>.</p>
<p>6) Danzer-Gruenbaum conjecture:</p>
<blockquote>
<p>If <span class="math-container">$A \subset \mathbb{R}^n$</span> and <span class="math-container">$\forall u, v, w \in A$</span> <span class="math-container">$(u - w, v - w) > 0,$</span> then <span class="math-container">$|A| \leq 2n - 1$</span></p>
</blockquote>
<p>This statement is not true for any <span class="math-container">$n \geq 35$</span></p>
<p>7) The Boolean Pythagorean Triple Conjecture:</p>
<blockquote>
<p>There exists <span class="math-container">$S \subset \mathbb{N}$</span>, such that neither <span class="math-container">$S$</span>, nor <span class="math-container">$\mathbb{N} \setminus S$</span> contain Pythagorean triples</p>
</blockquote>
<p>This conjecture was disproved by M. Heule, O. Kullman and V. Marek. They proved, that there do exist such <span class="math-container">$S \subset \{n \in \mathbb{N}| n \leq k\}$</span>, such that neither <span class="math-container">$S$</span>, nor <span class="math-container">$\{n \in \mathbb{N}| n \leq k\} \setminus S$</span> contain Pythagorean triples, for all <span class="math-container">$k \leq 7824$</span>, but not for <span class="math-container">$k = 7825$</span></p>
<p>8) Burnside conjecture:</p>
<blockquote>
<p>Every finitely generated group with period n is finite</p>
</blockquote>
<p>This statement is not true for any odd <span class="math-container">$n \geq 665$</span> (proved by Adyan and Novikov). </p>
<p>9) Otto Schmidt conjecture:</p>
<blockquote>
<p>If all proper subgroups of a group <span class="math-container">$G$</span> are isomorphic to <span class="math-container">$C_p$</span>, where <span class="math-container">$p$</span> is a fixed prime number, then <span class="math-container">$G$</span> is finite.</p>
</blockquote>
<p>Alexander Olshanskii proved that there are continuum many non-isomorphic counterexamples to this conjecture for any <span class="math-container">$p > 10^{75}$</span>.</p>
<p>10) Von Neumann conjecture</p>
<blockquote>
<p>Any non-amenable group has a free subgroup of rank 2</p>
</blockquote>
<p>The least known finitely presented counterexample has 3 generators and 9 relators</p>
<p>11) Word problem conjecture:</p>
<blockquote>
<p>The word problem is solvable for any finitely presented group</p>
</blockquote>
<p>The "least" counterexample known has 12 generators.</p>
<p>12) Leinster conjecture:</p>
<blockquote>
<p>Any Leinster group has even order</p>
</blockquote>
<p>The least counterexample known has order 355433039577.</p>
<p>13) Rotman conjecture:</p>
<blockquote>
<p>Automorphism groups of all finite groups not isomorphic to <span class="math-container">$C_2$</span> have even order</p>
</blockquote>
<p>The first counterexample found has order 78125. The least counterexample has order 2187. It is the automorphism group of a group with order 729.</p>
<p>14) Rose conjecture:</p>
<blockquote>
<p>Any nontrivial complete finite group has even order</p>
</blockquote>
<p>The least counterexample known has order 788953370457.</p>
<p>15) Hilton conjecture</p>
<blockquote>
<p>Automorphism group of a non-abelian group is non-abelian</p>
</blockquote>
<p>The least counterexample known has order 64.</p>
<p>16) Hughes conjecture:</p>
<blockquote>
<p>Suppose <span class="math-container">$G$</span> is a finite group and <span class="math-container">$p$</span> is a prime number. Then <span class="math-container">$[G : \langle\{g \in G| g^p \neq e\}\rangle] \in \{1, p, |G|\}$</span></p>
</blockquote>
<p>The least known counterexample has order 142108547152020037174224853515625.</p>
<p>17) <span class="math-container">$\frac{p-1}{p^2}$</span>-conjecture:</p>
<blockquote>
<p>Suppose <span class="math-container">$p$</span> is a prime. Then, any finite group <span class="math-container">$G$</span> with more than <span class="math-container">$\frac{p-1}{p^2}|G|$</span> elements of order <span class="math-container">$p$</span> has exponent <span class="math-container">$p$</span>.</p>
</blockquote>
<p>The least counterexample known has order 142108547152020037174224853515625 and <span class="math-container">$p = 5$</span>. It is the same group that serves counterexample to the Hughes conjecture. Note, that for <span class="math-container">$p = 2$</span> and <span class="math-container">$p = 3$</span> the statement was proved to be true.</p>
<p>18) Moreto conjecture:</p>
<blockquote>
<p>Let <span class="math-container">$S$</span> be a finite simple group and <span class="math-container">$p$</span> the largest prime divisor of <span class="math-container">$|S|$</span>. If <span class="math-container">$G$</span> is a finite group with the same number of elements of order <span class="math-container">$p$</span> as <span class="math-container">$S$</span> and <span class="math-container">$|G| = |S|$</span>, then <span class="math-container">$G \cong S$</span></p>
</blockquote>
<p>The first counterexample pair constructed is formed by groups of order 20160 (those groups are <span class="math-container">$A_8$</span> and <span class="math-container">$L_3(4)$</span>)</p>
<p>19) This false statement is not a conjecture, but rather a popular mistake made by many people who have just started learning group theory:</p>
<blockquote>
<p>All elements of the commutant of any finite group are commutators</p>
</blockquote>
<p>The least counterexample has order 96.</p>
<p>If the numbers mentioned in this text do not impress you, please, do not feel disappointed: there are complex combinatorial objects "hidden" behind them.</p>
|
256,649 | <p>I am trying to plot phase diagram and poincare map. But I cannot get the poincare map as shown in the image below</p>
<p>Phase Space</p>
<pre><code>sol = NDSolve[{v'[t] ==
0.320 x[t] - 1.65 x[t]^3 - 0.005*v[t] + 0.855 Cos[1.2*t],
x'[t] == v[t], x[0] == 0, v[0] == 0}, {x, v}, {t, 0, 1500}];
ParametricPlot[{x[t], v[t]} /. sol, {t, 200, 1000},
AxesLabel -> {"x", "v"}, PlotRange -> Full, PlotStyle -> LightGray,
Axes -> False, Frame -> True,
FrameTicksStyle -> Directive[Black, 20], ImageSize -> {700, 350},
AspectRatio -> Full]
</code></pre>
<p>Poincare Map</p>
<pre><code>poincare[A_, gamma_, omega_, ndrop_, nplot_,
psize_] := (T = 2*Pi/omega;
g[{xold_, vold_}] := {x[T], v[T]} /.
NDSolve[{v'[t] ==
0.320 x[t] - 1.65 x[t]^3 - gamma*v[t] + A*Cos[omega*t],
x'[t] == v[t], x[0] == xold, v[0] == vold}, {x, v}, {t, 0,
T}][[1]];
lp = ListPlot[Drop[NestList[g, {0, 0}, nplot + ndrop], ndrop],
PlotStyle -> {PointSize[psize], Black}, Axes -> False,
Frame -> True, FrameTicksStyle -> Directive[Black, 20],
PlotRange -> All, AxesLabel -> {"x", "v"},
ImageSize -> {700, 350}, AspectRatio -> Full])
poincare[0.855, 0.005, 1.2, 1000, 200, 0.01]
</code></pre>
<p>I want a diagram as shown in the image below (the phase diagram will be different for my code).</p>
<p><a href="https://i.stack.imgur.com/RYPDV.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RYPDV.png" alt="Phase Diagram I want" /></a></p>
| Akku14 | 34,287 | <p>Not an answer, but an <strong>essential warning</strong> !</p>
<p>For t -> 1500 you have more than 300 oscillations of x[t]. Even at the best WorkingPrecision, numerical NDSolve will give totally wrong results for larger t. See the examples. A workaround could be a solution with DSolve (but my MMA version doesn't find any).</p>
<pre><code>sol1 = NDSolve[{v'[t] ==
0.320 x[t] - 1.65 x[t]^3 - 0.005*v[t] + 0.855 Cos[1.2*t],
x'[t] == v[t], x[0] == 0, v[0] == 0}, {x, v}, {t, 0, 1500},
MaxSteps -> 10^5];
sol2 = NDSolve[{v'[t] ==
0.320 x[t] - 1.65 x[t]^3 - 0.005*v[t] + 0.855 Cos[1.2*t],
x'[t] == v[t], x[0] == 0, v[0] == 0} // Rationalize[#, 0] &, {x,
v}, {t, 0, 1500}, MaxSteps -> 10^5, WorkingPrecision -> 25];
sol3 = NDSolve[{v'[t] ==
0.320 x[t] - 1.65 x[t]^3 - 0.005*v[t] + 0.855 Cos[1.2*t],
x'[t] == v[t], x[0] == 0, v[0] == 0} // Rationalize[#, 0] &, {x,
v}, {t, 0, 1500}, MaxSteps -> 10^6, WorkingPrecision -> 50]
{xsol1, vsol1} = {x, v} /. First@sol1;
{xsol2, vsol2} = {x, v} /. First@sol2;
{xsol3, vsol3} = {x, v} /. First@sol3;
Plot[{xsol1[t], xsol2[t], xsol3[t]}, {t, 300, 400},
PlotStyle -> {Blue, Red, Green}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ltyyX.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ltyyX.jpg" alt="enter image description here" /></a></p>
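<p>For comparison outside Mathematica, the stroboscopic (Poincare) sampling can be sketched in plain Python with a hand-rolled RK4. The step size below is an arbitrary choice and, per the warning above, individual points at long times should not be trusted, only the overall picture:</p>

```python
import math

A, gamma, omega = 0.855, 0.005, 1.2          # parameters from the question
T = 2 * math.pi / omega                      # forcing period

def accel(x, v, t):
    return 0.320 * x - 1.65 * x ** 3 - gamma * v + A * math.cos(omega * t)

def rk4_step(x, v, t, h):
    # classical RK4 for the system x' = v, v' = accel(x, v, t)
    k1x, k1v = v, accel(x, v, t)
    k2x, k2v = v + h/2*k1v, accel(x + h/2*k1x, v + h/2*k1v, t + h/2)
    k3x, k3v = v + h/2*k2v, accel(x + h/2*k2x, v + h/2*k2v, t + h/2)
    k4x, k4v = v + h*k3v, accel(x + h*k3x, v + h*k3v, t + h)
    return (x + h*(k1x + 2*k2x + 2*k3x + k4x)/6,
            v + h*(k1v + 2*k2v + 2*k3v + k4v)/6)

def poincare(ndrop=20, nplot=50, steps_per_period=400):
    x, v, t = 0.0, 0.0, 0.0
    h = T / steps_per_period
    points = []
    for period in range(ndrop + nplot):
        for _ in range(steps_per_period):
            x, v = rk4_step(x, v, t, h)
            t += h
        if period >= ndrop:
            points.append((x, v))            # sample once per forcing period
    return points

pts = poincare()
assert len(pts) == 50
assert all(math.isfinite(x) and math.isfinite(v) for x, v in pts)
```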
|
26,451 | <p>I am trying to solve the following:</p>
<p>$\begin{align*}
&X \sim N(1,1)\\
&\mathrm{cov}(X, X^3) = \text{?}
\end{align*}$</p>
<p>where $\mathrm{cov}$ is the covariance.</p>
<p>How would you do this in <em>Mathematica</em>?</p>
<p>I have tried</p>
<pre><code>X = NormalDistribution[1, 1]
cov[x_, y_] := Mean[TransformedDistribution[a*b,
{a \[Distributed] x, b \[Distributed] y}]] - Mean[x] Mean[y]
cov[X, TransformedDistribution[a^3, a \[Distributed] X]]
</code></pre>
<p>But this doesn't seem to work.</p>
| J. M.'s persistent exhaustion | 50 | <p>I guess something like this:</p>
<pre><code>d1 = NormalDistribution[1, 1];
xa = Mean[d1]; xa3 = Mean[TransformedDistribution[u^3, u \[Distributed] d1]];
Mean[TransformedDistribution[(x - xa) (x^3 - xa3), x \[Distributed] d1]]
6
</code></pre>
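<p>The value 6 agrees with a hand computation from the raw moments of $N(\mu,\sigma^2)=N(1,1)$, using $E[X^3]=\mu^3+3\mu\sigma^2$ and $E[X^4]=\mu^4+6\mu^2\sigma^2+3\sigma^4$; a short Python sketch of that check:</p>

```python
mu, s2 = 1, 1                       # mean and variance of N(1, 1)

EX  = mu
EX3 = mu**3 + 3*mu*s2               # third raw moment of a normal
EX4 = mu**4 + 6*mu**2*s2 + 3*s2**2  # fourth raw moment of a normal

cov = EX4 - EX * EX3                # cov(X, X^3) = E[X^4] - E[X] E[X^3]
assert cov == 6
```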
|
40,116 | <p>I'm building a program that calculates the cost of an item based on its size (let's say a bamboo pole). As the customer requests a longer pole, it gets harder to find a suitable bamboo, and it requires more resources to grow; therefore, I would want to charge more per inch for the piece of bamboo as its length approaches a particular length. Then after that point the cost per inch would really escalate.</p>
<p>I believe that log would be the function that I need to use, but I just can't figure out how to make my formula. I've tried log(-x), log(x-1), log(y-x), I can't figure out how to get the log to shoot up to infinity, nor target a specific point.</p>
<p>Referencing the example above, I would want the cost/inch of the bamboo to stay reasonable up to 72", but after that, the cost/inch should rapidly increase, until it gets to 100", where it would become ridiculously expensive. Before 72", it should rise in cost/inch, but at a slow rate (it costs a little bit more per inch to grow a 72" stick than a 6" stick). I'm looking for a uniform growth, not a split formula. No f(x) where x<72, g(x) where x>72.</p>
<p>I'm not necessarily looking for the formula to solve the above question. I am looking for the HOW to research and solve the above question.</p>
<p>Many Thanks,
Matt</p>
| soandos | 10,921 | <p>You are looking for functions that go to infinity, preferably at a finite point (a vertical asymptote) or something similar. You then construct a scale to get to your infinity point at the number of inches you want (say 100). So, for tan(x) as an example, pi/2 is infinity, so map 0-100 to pi/4-pi/2, and you have something that works (you might want to add a constant to the formula to help you on the low end).</p>
<p>A possibility: f(x) = Tan(x/200*pi/2+pi/4)+5</p>
<p>f(10) = 6.2</p>
<p>f(72) = 9.5</p>
<p>f(90) = 17.7</p>
<p>f(95) = 30.45</p>
<p>f(98) = 68.7</p>
<p>Feel free to massage the constants, or pick a different function to get the behavior that you want.</p>
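<p>In Python, the suggested pricing function looks like this (the 200 and the +5 offset are the arbitrary constants mentioned above; the vertical asymptote sits at 100 inches):</p>

```python
import math

def price(x):
    # maps 0..100 onto pi/4..pi/2, where tan blows up
    return math.tan(x / 200 * math.pi / 2 + math.pi / 4) + 5

assert round(price(10), 1) == 6.2
assert round(price(72), 1) == 9.5
assert round(price(98), 1) == 68.7
assert price(99.9) > 1000      # effectively unobtainable near 100 inches
```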
|
443,578 | <blockquote>
<p>Is the limit
$$
e^{-x}\sum_{n=0}^N \frac{(-1)^n}{n!}x^n\to e^{-2x} \quad \text{as } \ N\to\infty \tag1
$$
uniform on $[0,+\infty)$? </p>
</blockquote>
<p>Numerically this appears to be true: see the difference of two sides in (1) for $N=10$ and $N=100$ plotted below. But the convergence is very slow (<strike>logarithmic</strike> error $\approx N^{-1/2}$ as shown by Antonio Vargas in his answer). In particular, putting $e^{-0.9x}$ and $e^{-1.9x}$ in (1) clearly makes convergence non-uniform. </p>
<p>One difficulty here is that the Taylor remainder formula is effective only up to $x\approx N/e$, and the maximum of the difference is at $x\approx N$.</p>
<p><img src="https://i.stack.imgur.com/Vuxmg.png" alt="N=10"></p>
<p><img src="https://i.stack.imgur.com/d0LHA.png" alt="enter image description here"></p>
<p>The question is inspired by an attempt to find an alternative proof of <a href="https://math.stackexchange.com/q/386807/">$\epsilon>0$ there is a polynomial $p$ such that $|f(x)-e^{-x}p|<\epsilon\forall x\in[0,\infty)$</a>. </p>
| Pedro | 23,350 | <p>I'd like to provide another solution which is a mixture of Antonio's and Landscape's. One can also write $$\left|r_n(x)\right|=\int_0^x {e^{-t}}\frac{(x-t)^n}{n!}dt$$</p>
<p>by virtue of Taylor's theorem with the integral remainder. But then again $$\left| {{r_n}(x)} \right| \leqslant \int_0^x {\frac{{{{(x - t)}^n}}}{{n!}}dt} = \int_0^x {\frac{{{u^n}}}{{n!}}du} = \frac{x^{n + 1}}{{\left( {n + 1} \right)!}}$$</p>
<p>and Stirling's formula does the job.</p>
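<p>For what it's worth, the bound $\left| {{r_n}(x)} \right| \leqslant \frac{x^{n+1}}{(n+1)!}$, i.e. $\left|\sum_{k=0}^n \frac{(-x)^k}{k!} - e^{-x}\right| \leqslant \frac{x^{n+1}}{(n+1)!}$, is easy to spot-check numerically (Python sketch; the small slack factor only absorbs floating-point roundoff):</p>

```python
import math

def partial(n, x):
    # Taylor partial sum of e^{-x} up to order n
    return sum((-x) ** k / math.factorial(k) for k in range(n + 1))

for n in (3, 5, 8, 10):
    for x in (1.0, 2.0, 3.5, 5.0):
        r = abs(partial(n, x) - math.exp(-x))
        bound = x ** (n + 1) / math.factorial(n + 1)
        assert r <= bound * (1 + 1e-9)   # tiny slack for roundoff
```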
|
2,309,123 | <p>This is a 2 part question.</p>
<ol>
<li><p>I have been studying a particular matrix group $G \le GL(n,\mathbb R)$ with $n \ge 3$ and I managed to show elements of my group $A \in G$ have the block structure
$$
A = \left(
\begin{array}{cc}
O(3) & 0 \\
A_{21} & A_{22}
\end{array}
\right)
$$
Now $A_{22}$ must be invertible since I started with $GL(n, \mathbb R)$. <strong>So is it true that this matrix is an element of the group $G = O(3) \times GL(n-3,\mathbb R)$?</strong> I'm not sure how to account for the "extra" elements $A_{21}$ when writing $G$ as a direct product of groups.</p></li>
<li><p>I have a function
$$
f:\mathbb R^n \rightarrow \mathbb R
$$
which happens to be invariant under the action of my group $G$. In other words, $f(Ax) = f(x)$ for each $A \in G$. There is a comment in a thread <a href="https://mathoverflow.net/questions/166197/is-group-theory-useful-in-any-way-to-optimization">here</a> which says if I want to minimize my function, I only need to search for solutions in the quotient space $\mathbb R^n / G$. I am having trouble understanding what this quotient space looks like - can anyone provide some intuition how to go about visualizing this quotient space?</p></li>
</ol>
| Angina Seng | 436,618 | <p>For the first part, the answer is no. The direct product is the group
of block matrices
$$\pmatrix{A_{11}&0\\0&A_{22}}$$
with $A_{11}\in O(3)$ and $A_{22}\in GL(n-3,\Bbb R)$.</p>
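<p>Concretely: matrices of the block-triangular shape in the question are closed under multiplication, but the lower-left block of a product mixes both factors, which is exactly what prevents a direct-product description. A small Python sketch; for brevity a $2\times2$ rotation stands in for the orthogonal block and a $1\times1$ block for the general linear part (an illustrative assumption, not the $n\ge3$ case of the question):</p>

```python
import math

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def block(theta, a21, a22):
    """3x3 example: 2x2 rotation block, zero upper-right, arbitrary bottom row."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0],
            [s,  c, 0.0],
            [a21[0], a21[1], a22]]

M1 = block(0.7, [1.0, 2.0], 3.0)
M2 = block(-0.2, [4.0, -1.0], 0.5)
P = matmul(M1, M2)

# closure: the product keeps the same block shape ...
assert P[0][2] == 0.0 and P[1][2] == 0.0
# ... its top-left block is again a rotation (angle 0.7 + (-0.2) = 0.5)
assert abs(P[0][0] - math.cos(0.5)) < 1e-12
# ... but the bottom-left block is NOT simply M1's or M2's:
assert P[2][0] != M1[2][0] and P[2][0] != M2[2][0]
```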
|
1,055,091 | <p>I've been asked to estimate a y coordinate by using differentials. This normally isn't overly difficult; however, I'm not sure what to do in a case like this when y cannot be separated and used as a function of x. Can anyone point me in the right direction? I suspect I'll have to use implicit differentiation but I can't quite formulate how I'd approach it.</p>
<p>Solve for the y coordinate of point P near (1,2) on the curve $2x^3 + 2y^3 = 9xy$, given that the x-coordinate of P is 1.1</p>
| FundThmCalculus | 153,550 | <p>Make the assumption that $x=x(t)$ and $y=y(t)$. We don't care what those functions are, it just allows us to differentiate with respect to $t$. We do this because of this definition:
$$\frac{dy}{dx}=\frac{\frac{dy}{dt}}{\frac{dx}{dt}}$$
So your equation looks like this:
$$2x(t)^3+2y(t)^3=9x(t)\cdot y(t)$$
Now differentiate both sides with respect to $t$.
$$\frac{d}{dt} \left(2x(t)^3+2y(t)^3 \right)=\frac{d}{dt} \left(9x(t)\cdot y(t) \right)$$
Apply power and product rules. I am dropping the $(t)$ for clarity:
$$6x^2 \cdot \frac{dx}{dt}+6y^2 \cdot \frac{dy}{dt}=9 \left(\frac{dx}{dt}\cdot y + x \cdot \frac{dy}{dt} \right)$$
Notice how we have a $\frac{dx}{dt}$ or $\frac{dy}{dt}$ on every term. We now can separate $\frac{dx}{dt}$ to one side, and $\frac{dy}{dt}$ to the other:
$$6x^2 \cdot \frac{dx}{dt}-9\frac{dx}{dt}\cdot y= 9x \cdot \frac{dy}{dt}-6y^2 \cdot \frac{dy}{dt}$$
Now take the ratio to obtain the slope.
$$\frac{\frac{dy}{dt}}{\frac{dx}{dt}}=\frac{6x^2-9y}{9x-6y^2}=\frac{dy}{dx}$$
Now plug in the point $(1,2)$ to obtain the slope.
$$\frac{dy}{dx}=\frac{6\cdot 1^2-9\cdot 2}{9\cdot 1-6\cdot 2^2}=\frac{-12}{-15}=\frac{4}{5}$$
As you know, the slope-intercept form of the tangent line is:
$$y=mx+b$$
In this case, $m=\frac{dy}{dx}=\frac{4}{5}$. Take the given point to obtain the constant $b$.
$$2=\frac{4}{5}\cdot 1+b \rightarrow b=\frac{6}{5}$$
Use the final equation to estimate the value:
$$y=\frac{4}{5}x+\frac{6}{5}$$
With $x=1.1$, we have
$$y=2.08$$
If I use a numerical solver routine, the root of $2(1.1)^3+2y^3=9(1.1)y$ near the given point is $y \approx 2.076$ (the cubic has two other roots, roughly $0.27$ and $-2.35$, that lie on other branches of the curve). So the relative error is about 0.2%. That's really quite good! :)</p>
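<p>As an independent sanity check (a sketch assuming numpy; the branch-picking logic is my own), one can recompute the implicit-derivative slope at $(1,2)$, form the linear estimate at $x=1.1$, and compare it with the root of the cubic $2(1.1)^3+2y^3=9(1.1)y$ that lies near $y=2$:</p>

```python
import numpy as np

x0, y0, dx = 1.0, 2.0, 0.1

# Implicit differentiation of 2x^3 + 2y^3 = 9xy gives
#   dy/dx = (9y - 6x^2) / (6y^2 - 9x)
slope = (9 * y0 - 6 * x0**2) / (6 * y0**2 - 9 * x0)  # = 4/5 at (1, 2)
y_estimate = y0 + slope * dx                          # differential estimate at x = 1.1

# The exact y at x = 1.1 solves 2y^3 - 9(1.1)y + 2(1.1)^3 = 0.
x1 = x0 + dx
roots = np.roots([2.0, 0.0, -9.0 * x1, 2.0 * x1**3])
real_roots = [r.real for r in roots if abs(r.imag) < 1e-9]
y_exact = min(real_roots, key=lambda r: abs(r - y0))  # branch nearest y = 2
rel_error = abs(y_estimate - y_exact) / abs(y_exact)
```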
|
3,753,474 | <p><strong>Question:</strong></p>
<blockquote>
<p>If <span class="math-container">$\alpha,\beta,\gamma$</span> are the roots of the equation, <span class="math-container">$x^3+x+1=0$</span>, then find the equation whose roots are: <span class="math-container">$({\alpha}-{\beta})^2,({\beta}-{\gamma})^2,({\gamma}-{\alpha})^2$</span></p>
</blockquote>
<p>Now, the normal way to solve this question would be to use the theory of equations and find the sum of roots taken one at a time, two at a time and three at a time. Using this approach, we get the answer as <span class="math-container">$(x+1)^3+3(x+1)^2+27=0$</span>. However, I feel that this is a very lengthy approach to this problem. Is there an easier way of doing it?</p>
| farruhota | 425,072 | <p>The standard method:
<span class="math-container">$$a+b+c=0; ab+bc+ca=1; abc=-1;\\
a^2+b^2+c^2=-2;a^2b^2+b^2c^2+c^2a^2=1;a^4+b^4+c^4=2;\\
a^3=-a-1.$$</span>
Sum of the new roots:
<span class="math-container">$$(a-b)^2+(b-c)^2+(c-a)^2=\\2(a^2+b^2+c^2)-2(ab+bc+ca)=-6$$</span>
Sum of pairwise products of the new roots:
<span class="math-container">$$(a-b)^2(b-c)^2+(b-c)^2(c-a)^2+(c-a)^2(a-b)^2=\\
a^4+b^4+c^4+3(a^2b^2+b^2c^2+c^2a^2)-\\
2(a^3(\underbrace{b+c}_{-a})+b^3(\underbrace{c+a}_{-b})+c^3(\underbrace{a+b}_{-c}))=\\
3(a^2b^2+b^2c^2+c^2a^2+a^4+b^4+c^4)=9$$</span>
Product of the new roots:
<span class="math-container">$$(a-b)^2(b-c)^2(c-a)^2=\\
\small{(a^2+b^2-2ab)(b^2+c^2-2bc)(c^2+a^2-2ac)=\\
(c^2-4ab)(a^2-4bc)(b^2-4ac)=\\
16 a^4 b c - 4 a^3 b^3 - 4 a^3 c^3 - 63 a^2 b^2 c^2 + 16 a b^4 c + 16 a b c^4 - 4 b^3 c^3=\\
-16(a^3+b^3+c^3)-4(a^3b^3+b^3c^3+c^3a^3)-63=\\
-16(-a-b-c-3)-4((-a-1)(-b-1)+\\
(-b-1)(-c-1)+(-c-1)(-a-1))-63=}\\
48-4(1+3)-63=-31.$$</span>
Hence, by Vieta's formulas, the equation is: <span class="math-container">$x^3+6x^2+9x+31=0$</span>.</p>
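<p>Not in the original answer, but the result is easy to verify numerically (a sketch assuming numpy): compute the roots of $x^3+x+1$, form the squared pairwise differences, and check the three symmetric functions and the final cubic.</p>

```python
import numpy as np

# Roots of x^3 + x + 1 = 0 (one real, two complex conjugate)
a, b, c = np.roots([1, 0, 1, 1])

# The new roots: squared pairwise differences
u = [(a - b) ** 2, (b - c) ** 2, (c - a) ** 2]

s1 = u[0] + u[1] + u[2]                       # sum, expected -6
s2 = u[0] * u[1] + u[1] * u[2] + u[2] * u[0]  # pair sum, expected 9
s3 = u[0] * u[1] * u[2]                       # product = discriminant, expected -31

# Each new root should satisfy x^3 + 6x^2 + 9x + 31 = 0
max_residual = max(abs(np.polyval([1, 6, 9, 31], z)) for z in u)
```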
|
3,653,212 | <p>Due to Covid-19, quizzes at our university are held online and it's hard to ask questions.</p>
<p>Three days back, this question was asked in my Combinatorics quiz, and I am stuck on it. I couldn't solve it in the time allotted and struggled to find a proper strategy.</p>
<p>The question: Determine the number of non-equivalent colourings of the corners of a regular tetrahedron with <span class="math-container">$k$</span> different colours.</p>
<blockquote>
<p>My attempt: I am trying to solve it by Burnside's theorem: the number of non-equivalent colourings in <span class="math-container">$C$</span> is given by <span class="math-container">$N(G, C) = \frac{1}{|G|}\sum_{f \in G } | C(f) |$</span>, where <span class="math-container">$C(f)$</span> is the set of all colourings in <span class="math-container">$C$</span> that are fixed by <span class="math-container">$f$</span>.</p>
</blockquote>
<p>The group of permutations is <span class="math-container">$S_4$</span>, and all <span class="math-container">$k^4$</span> colourings are fixed by the identity. But I cannot see how to count the colourings fixed by each permutation arising from rotations and reflections. I have done it for a pentagon, which was easy.</p>
<blockquote>
<p>Can someone please suggest a way to efficiently and elegantly compute the value of <span class="math-container">$|C(f)|$</span> for rotations and reflections?</p>
</blockquote>
<p>I will be really thankful for any ideas.</p>
| Hagen von Eitzen | 39,174 | <p>As the symmetry group of the tetrahedron is <span class="math-container">$S_4$</span>, colourings are already equivalent if they use the same colours the same number of times. Thus we have</p>
<ul>
<li><span class="math-container">$k$</span> colourings of type <span class="math-container">$(a,a,a,a)$</span>,</li>
<li><span class="math-container">$k\cdot(k-1)$</span> colourings of type <span class="math-container">$(a,a,a,b)$</span>,</li>
<li><span class="math-container">$k\choose 2$</span> colourings of type <span class="math-container">$(a,a,b,b)$</span>,</li>
<li><span class="math-container">$k\cdot{k-1\choose 2}$</span> colourings of type <span class="math-container">$(a,a,b,c)$</span>,</li>
<li><span class="math-container">$k\choose 4$</span> colourings of type <span class="math-container">$(a,b,c,d){}$</span>.</li>
</ul>
<p>Summing these gives <span class="math-container">${k+3 \choose 4}$</span>, the number of size-<span class="math-container">$4$</span> multisets of <span class="math-container">$k$</span> colours.</p>
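<p>This case count can be cross-checked against a brute-force Burnside computation (a sketch in Python; the function names are my own): averaging, over all of $S_4$, the number of colourings fixed by each permutation gives the same totals.</p>

```python
from itertools import permutations, product
from math import comb

def burnside_count(k):
    """Burnside average over S4: a colouring fixed by g is constant on g's cycles."""
    perms = list(permutations(range(4)))
    total = sum(
        sum(1 for col in product(range(k), repeat=4)
            if all(col[i] == col[g[i]] for i in range(4)))
        for g in perms
    )
    return total // len(perms)

def case_count(k):
    """The case-by-case count from the answer above."""
    return k + k * (k - 1) + comb(k, 2) + k * comb(k - 1, 2) + comb(k, 4)

# Both agree with the number of size-4 multisets of k colours, C(k+3, 4).
agree = all(burnside_count(k) == case_count(k) == comb(k + 3, 4)
            for k in range(1, 5))
```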
|
971,139 | <p>$\max\{a,b\} = \frac12(a+b+|a-b|)$ and $\min\{a,b\} = \frac12(a+b-|a-b|)$</p>
<p>How would you go about proving this?</p>
<p>I started by supposing $a \leq b$.</p>
<p>Also, show $\min\{a,b,c\} = \min\{\min\{a,b\},c\}$.</p>
<p>How would I go about showing that?</p>
| Paul | 17,980 | <p>Suppose that $a\le b$.</p>
<p>$$\frac{a+b+|a-b|}{2}=\frac{a+b+b-a}{2}=b=\max \{a,b\}$$</p>
<p>And </p>
<p>$$\frac{a+b-|a-b|}{2}=\frac{a+b-(b-a)}{2}=a=\min \{a,b\}$$</p>
<p>The case $a > b$ is symmetric: just swap the roles of $a$ and $b$.</p>
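<p>Not part of the original answer, but both identities (and the nested-min identity from the second part of the question) are easy to spot-check numerically (a Python sketch; the helper names are my own):</p>

```python
import random

def max2(a, b):
    # max via the averaging identity (a + b + |a - b|) / 2
    return (a + b + abs(a - b)) / 2

def min2(a, b):
    # min via the averaging identity (a + b - |a - b|) / 2
    return (a + b - abs(a - b)) / 2

random.seed(0)
ok = True
for _ in range(1000):
    a, b, c = (random.randint(-100, 100) for _ in range(3))
    ok = ok and max2(a, b) == max(a, b)
    ok = ok and min2(a, b) == min(a, b)
    ok = ok and min2(min2(a, b), c) == min(a, b, c)  # min{a,b,c} = min{min{a,b},c}
```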
|
971,139 | <p>$\max\{a,b\} = \frac12(a+b+|a-b|)$ and $\min\{a,b\} = \frac12(a+b-|a-b|)$</p>
<p>How would you go about proving this?</p>
<p>I started by supposing $a \leq b$.</p>
<p>Also, show $\min\{a,b,c\} = \min\{\min\{a,b\},c\}$.</p>
<p>How would I go about showing that?</p>
| Mariano Suárez-Álvarez | 274 | <p>You can use the fact that $$\min\{x,y\}=-\max\{-x,-y\}$$ to get the result about mins from the one about maxes.</p>
|