| qid | question | author | author_id | answer |
|---|---|---|---|---|
494,227 | <blockquote>
<p>If I had $6$ feet of fencing could I fence a region that has area $3$
square feet?</p>
</blockquote>
<p>So, I must show whether there is a curve in the plane, representing my fencing, that has length $6$ feet and bounds a region of area $3$ square feet. </p>
<p>How can I prove this? </p>
| Brian M. Scott | 12,042 | <p>HINT: For a given perimeter, a disk has the largest area. Does a disk with a circumference of $6$ have area $\ge 3$, or not?</p>
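A quick numeric sanity check of the hint (a Python sketch added here; it is not part of the original answer): the disk with circumference $6$ has radius $3/\pi$, and its area $9/\pi \approx 2.86$ falls short of $3$.

```python
import math

circumference = 6.0
r = circumference / (2 * math.pi)  # radius of the disk with perimeter 6
area = math.pi * r ** 2            # equals 9/pi

print(round(area, 4))              # about 2.8648, which is less than 3
```

By the isoperimetric inequality no other curve of length $6$ does better, so the answer to the question is no.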
|
138,091 | <p>I am trying to compute an explicit formula using Mathematica for the following multinomial expression:</p>
<blockquote>
<p>\begin{equation} \sum_{n_{1}+n_{2}+...+n_{M}=N}^{M} {N \choose
n_{1},n_{2},...,n_{M }} \cdot n_{i} = ? \end{equation}</p>
</blockquote>
<p>where $i \in \{1,2,...,M\}$ and using </p>
<pre><code>multinomial[n__] := (Plus @@ {n})!/Times @@ (#! & /@ {n})
</code></pre>
<p>but I don't know how to perform the summation over all the indices $n_{k}$.</p>
<p>In fact I know the following:</p>
<blockquote>
<p>\begin{equation} \sum_{n_{1}+n_{2}+...+n_{M}=N}^{M} {N \choose
n_{1},n_{2},...,n_{M }} = M^{N} \end{equation}</p>
</blockquote>
<p>but I think that this previous result cannot be used to obtain the result in the first equation, which is why I am asking for code in <em>Mathematica</em> that tries to compute this in an analytical way.</p>
<p>example:</p>
<p>Taking for example $M=N=2$ and $i=1$, I should obtain:</p>
<blockquote>
<p>\begin{equation} \sum_{n_{1}+n_{2}=2}^{2} {2 \choose n_{1},n_{2}} \cdot n_{1} = {2 \choose 2,0} \cdot 2+{2 \choose 0,2} \cdot 0+{2 \choose 1,1} \cdot 1\end{equation}</p>
</blockquote>
<p>In fact, if I take $i=2$ I would obtain the same:</p>
<blockquote>
<p>\begin{equation} \sum_{n_{1}+n_{2}=2}^{2} {2 \choose n_{1},n_{2}} \cdot n_{2} = {2 \choose 2,0} \cdot 0+{2 \choose 0,2} \cdot 2+{2 \choose 1,1} \cdot 1\end{equation}</p>
</blockquote>
| JimB | 19,758 | <p>Given that it doesn't matter which index ($i$) one picks, here's a brute force algebraic approach:</p>
<p>\begin{align*}
\sum_{n_1+n_2+\cdots+n_M=N}n_1\binom{N}{n_1,n_2,\cdots,n_M}&=\sum_{n_1=0}^N n_1 \sum_{n_2+n_3+\cdots+n_M=N-n_1}\binom{N}{n_1,n_2,\cdots,n_M}\\
&=\sum_{n_1=0}^N n_1 \sum_{n_2+n_3+\cdots+n_M=N-n_1}\frac{N!}{n_1! n_2!\cdots n_M !}\\
&=\sum_{n_1=0}^N n_1 \sum_{n_2+n_3+\cdots+n_M=N-n_1}\frac{N!}{n_1! n_2!\cdots n_M !}\frac{(N-n_1)!}{(N-n_1)!}\\
&=\sum_{n_1=0}^N n_1 \binom{N}{n_1} \sum_{n_2+n_3+\cdots+n_M=N-n_1}\frac{(N-n_1)!}{n_2!\cdots n_M !}\\
&=\sum_{n_1=0}^N n_1 \binom{N}{n_1}(M-1)^{N-n_1}\\
&=\sum_{n_1=0}^N n_1 \binom{N}{n_1}1^{n_1}(M-1)^{N-n_1}\\
&=N M^{N-1}
\end{align*}</p>
<p>And, yes, I know, this doesn't use <em>Mathematica</em>.</p>
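For completeness, the closed form $N M^{N-1}$ is easy to confirm by brute force; the following Python sketch (my addition, with a hypothetical helper name) enumerates all compositions $n_1+\cdots+n_M=N$:

```python
from itertools import product
from math import factorial, prod

def weighted_multinomial_sum(M, N, i=0):
    """Sum of n_i * multinomial(N; n_1,...,n_M) over all n_1+...+n_M = N."""
    total = 0
    for ns in product(range(N + 1), repeat=M):
        if sum(ns) == N:
            coeff = factorial(N) // prod(factorial(n) for n in ns)
            total += ns[i] * coeff
    return total

# The result matches N * M**(N-1) and is the same for every index i.
print(weighted_multinomial_sum(2, 2))       # 4, which equals 2 * 2**1
print(weighted_multinomial_sum(3, 4))       # 108, which equals 4 * 3**3
print(weighted_multinomial_sum(3, 4, i=2))  # 108 again
```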
|
<p>In Sheldon Axler's <em>Linear Algebra Done Right</em>, page 35, he gives a proof of "length of linearly independent list $\le$ length of spanning list" (see reference below) using a process that adds a vector from the independent list to the spanning list and then removes a vector from the spanning list to form a new spanning list.</p>
<p>My question falls into the <em>Linear Dependence Lemma</em> part. The <em>Linear Dependence Lemma</em> only tells us that we can remove some $v_j$ from the dependent list, but the proof he gives in the first picture says we can remove one of the $w$'s. Why not one of the $u$'s? How can he be so sure about that?</p>
<blockquote>
<p>My current understanding: </p>
<p>Step 1: By the definition of <em>spanning list</em>, it's
easy to obtain that $u_1$ can be written as a linear combination of
the spanning list $w_1,...,w_n$ as $u_1=a_1w_1+...+a_nw_n$. Meanwhile,
the coefficients $a_1,...,a_n$ cannot all equal $0$, or else
$u_1=0$, contradicting the assumption that $u_1,...,u_m$ is linearly
independent. Let $a_j\neq 0$; then we can rewrite
$u_1=a_1w_1+...+a_nw_n$ as
$w_j=-\frac{a_1}{a_j}w_1-...-\frac{a_{j-1}}{a_j}w_{j-1}-\frac{a_{j+1}}{a_j}w_{j+1}-...-\frac{a_n}{a_j}w_n+\frac{1}{a_j}u_1$,
implying $w_j\in \operatorname{span}\{w_1,...,w_{j-1},w_{j+1},...,w_n,u_1\}$, which is
why we can remove one of the $w$'s.</p>
<p>In Step $j$ (for $j \ge 2$), the coefficients $a_1,...,a_{n-j+1}$
in $u_j=a_1w_1+...+a_{n-j+1}w_{n-j+1}+b_1u_1+...+b_{j-1}u_{j-1}$
likewise cannot all equal zero, or else $u_j$ would be a linear combination of
$u_1,...,u_{j-1}$, contradicting the assumption that the $u$'s are linearly
independent. This is why we can always remove one of the $w$'s at each step.</p>
</blockquote>
<p>Please verify my understanding; this did not seem obvious to me, and it took me some time to figure out. However, I'm not so sure that I'm right, so please point out any mistakes and replace my understanding of the question with a better explanation. Thanks in advance.</p>
<p>Reference 1:<a href="https://i.stack.imgur.com/38TVT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/38TVT.png" alt="Proof that the length of a linearly independent list is at most the length of a spanning list"></a>
Reference 2: <em>Linear Dependence Lemma</em><a href="https://i.stack.imgur.com/5KRBK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5KRBK.png" alt="Linear Dependence Lemma"></a></p>
| Community | -1 | <p>I found the above answer by @Tengu to be correct, but unsatisfying; it seemed like just a cheap trick to ensure that the <span class="math-container">$u_i$</span> came first, and therefore we must remove one of the <span class="math-container">$w_i$</span>.</p>
<p>Allow me to present a different perspective. Since <span class="math-container">$u_1, w_1, \ldots, w_n$</span> is linearly dependent, some linear combination <span class="math-container">$a_0u_1 + a_1w_1 + \ldots + a_nw_n = 0$</span> has a coefficient <span class="math-container">$a_i \neq 0$</span>. I claim that in fact the coefficient of some <span class="math-container">$w_i$</span> is nonzero. To see this, if <span class="math-container">$a_0$</span> (the coefficient of <span class="math-container">$u_1$</span>) is nonzero, then certainly the coefficient of one of the <span class="math-container">$w_i$</span> must also be nonzero, else the combination would reduce to <span class="math-container">$a_0u_1 = 0$</span> and would not sum to <span class="math-container">$0$</span>. (You might ask: what if <span class="math-container">$u_1$</span> is <span class="math-container">$0$</span>? That cannot happen, since we assumed <span class="math-container">$u_1, \ldots, u_m$</span> is linearly independent.) On the other hand, if <span class="math-container">$a_0$</span> is zero, then the nonzero coefficient must belong to one of the <span class="math-container">$w_i$</span>, so we are done.</p>
<p>Now it just suffices to check that <span class="math-container">$u_1, w_1, \ldots, w_{i-1}, w_{i+1}, \ldots, w_n$</span> is a spanning list; that exercise is left to the reader.</p>
|
3,793,581 | <p>I can solve this integral in a certain way but I'd like to know of other, simpler, techniques to attack it:</p>
<p><span class="math-container">\begin{align*}
\int _0^{\frac{\pi }{2}}\frac{\ln \left(\sin \left(x\right)\right)\ln \left(\cos \left(x\right)\right)}{\tan \left(x\right)}\:\mathrm{d}x&\overset{ t=\sin\left(x\right)}=\int _0^{\frac{\pi }{2}}\frac{\ln \left(\sin \left(x\right)\right)\ln \left(\cos \left(x\right)\right)}{\sin \left(x\right)}\cos \left(x\right)\:\mathrm{d}x\\[2mm]
&=\int _0^1\frac{\ln \left(t\right)\ln \left(\cos \left(\arcsin \left(t\right)\right)\right)}{t}\cos \left(\arcsin \left(t\right)\right)\:\frac{1}{\sqrt{1-t^2}}\:\mathrm{d}t\\[2mm]
&=\int _0^1\frac{\ln \left(t\right)\ln \left(\sqrt{1-t^2}\right)}{t}\sqrt{1-t^2}\:\frac{1}{\sqrt{1-t^2}}\:\mathrm{d}t\\[2mm]
&=\frac{1}{2}\int _0^1\frac{\ln \left(t\right)\ln \left(1-t^2\right)}{t}\:\mathrm{d}t=-\frac{1}{2}\sum _{n=1}^{\infty }\frac{1}{n}\int _0^1t^{2n-1}\ln \left(t\right)\:\mathrm{d}t\\[2mm]
&=\frac{1}{8}\sum _{n=1}^{\infty }\frac{1}{n^3}\\[2mm]
\int _0^{\frac{\pi }{2}}\frac{\ln \left(\sin \left(x\right)\right)\ln \left(\cos \left(x\right)\right)}{\tan \left(x\right)}\:\mathrm{d}x&=\frac{1}{8}\zeta (3)
\end{align*}</span></p>
| Z Ahmed | 671,540 | <p>Let
<span class="math-container">$$I=\int _0^{\frac{\pi }{2}}\frac{\ln (\sin \left(x\right))\ln (\cos \left(x\right))}{\tan \left(x\right)}\:\mathrm{d}x$$</span>
Let <span class="math-container">$\ln \sin x=-t$</span> (so that <span class="math-container">$\mathrm{d}t=-\cot x \,\mathrm{d}x$</span> and <span class="math-container">$\ln \cos x=\tfrac{1}{2}\ln\left(1-e^{-2t}\right)$</span>); then we'll have:
<span class="math-container">$$I=-\frac{1}{2}\int_{0}^{\infty} t \ln(1-e^{-2t}) dt$$</span>
Use <span class="math-container">$\ln(1-z)=-\sum_{k=1}^{\infty} \frac{z^k}{k} $</span>
<span class="math-container">$$\implies I=\frac{1}{2}\sum_{k=1}^{\infty} \int_{0}^{\infty}\frac {t e^{-2kt}}{k} dt$$</span>
<span class="math-container">$$I=\frac{1}{2} \sum_{k=1}^{\infty} \frac{1}{4k^3}=\frac{\zeta(3)}{8}$$</span></p>
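As a numerical cross-check (a Python sketch added here, not part of the original answer), one can integrate the $t$-form directly with Simpson's rule and compare against $\zeta(3)/8 \approx 0.150257$:

```python
import math

def f(t):
    # integrand after the substitution ln(sin x) = -t
    return -0.5 * t * math.log(1.0 - math.exp(-2.0 * t)) if t > 0 else 0.0

# composite Simpson's rule on [0, 30]; the integrand vanishes at both ends
a, b, n = 0.0, 30.0, 100_000  # n must be even
h = (b - a) / n
s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
integral = s * h / 3

zeta3 = sum(1.0 / k**3 for k in range(1, 200_000))
print(integral, zeta3 / 8)  # both come out near 0.150257
```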
|
903,656 | <p>An urn has $2$ balls and each ball could be green, red or black. We draw a ball and it was green, then it was returned it to the urn. What is the probability that the next ball is red? </p>
<p>My attempt: I think it is just a probability of $1/4$, because we have 4 colors in total; but on the other hand I think I need to use conditional probability:</p>
<p>$$P(R|V)= {P(R\bigcap V)\over P(V)}$$</p>
<p>where $P(V)$ is the probability of drawing a green ball and $P(R)$ is the probability of drawing a red ball, but I am not sure which would be the correct approach to the problem.</p>
<p>I would really appreciate your help :)</p>
| spunkets | 170,680 | <p>The probability of drawing any one particular color in the original draw is 1/3. In that situation there are 2 copies of the same 3 color possibility. In that problem, there is a 1/2 probability of choosing one of the 2 sets of 3 colors. The probably is 2 * 1/2 * 1/3 =1/3. </p>
<p>In the second situation there is one green ball and another set of 3 colors. In that situation there is not 2 copies of 3 colors, but 3 different probabilities for the drawing of any color from a single draw: red=1/4; black=1/4 and green=2/4=1/2. Note that the possibility of drawing a green ball must be higher in the 2nd case than it was in the first, becasue of the new inofrmation. That means the probability of drawing the red ball in the second case must be lower according to the new information, that there is now 3 colors + another green--4 possible color outcomes per one draw. </p>
<p>A Baysian update must conclude the same probability. The probability is improved from the original p(grn)=p(red)=p(blk)=1/3 to 1/4 by the new information as:</p>
<p>p(red|grn)=(p(grn|red)*p(red)) / (p(grn|red)*p(red)+p(grn|blk)*p(blk)+p(grn|grn)*p(grn)
=(1/4 * 1/3) / (1/4*1/3 + 1/4*1/3 + 1/2*1/3) = 1/4</p>
<p>Scott Stillwell</p>
|
894,159 | <p>I was assigned the following problem: find the value of $$\sum_{k=1}^{n} k \binom {n} {k}$$ by using the derivative of $(1+x)^n$, but I'm basically clueless. Can anyone give me a hint?</p>
| André Nicolas | 6,312 | <p>Imagine tossing a fair coin $n$ times. Then the mean number of heads is
$$\sum_0^n k\binom{n}{k}\frac{1}{2^n}.\tag{1}$$
We compute the mean another way. Let random variable $X_i$ be $1$ if we get a head on the $i$-th toss, and $0$ otherwise. Then the number $Y$ of heads is given by
$$Y=X_1+X_2+\cdots +X_n,$$
and therefore by the linearity of expectation we have
$$E(Y)=E(X_1)+E(X_2)+\cdots +E(X_n).$$
Since $X_i=1$ with probability $\frac{1}{2}$, we have
$$E(Y)=\frac{n}{2}.\tag{2}$$
Now comparison of (1) and (2) gives the result.</p>
<p><strong>Remark:</strong> This is an example of a <em>Mean Proof</em>. </p>
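The identity the problem is after, $\sum_{k=1}^{n} k \binom{n}{k} = n\,2^{n-1}$, is exactly what the mean computation yields; a short Python check (my addition) confirms it for small $n$:

```python
from math import comb

def lhs(n):
    # left-hand side of the identity: sum of k * C(n, k)
    return sum(k * comb(n, k) for k in range(n + 1))

for n in range(1, 12):
    assert lhs(n) == n * 2 ** (n - 1)
print("verified for n = 1..11")
```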
|
2,303,106 | <p>I was looking at this question posted here some time ago.
<a href="https://math.stackexchange.com/questions/1353893/how-to-prove-plancherels-formula">How to Prove Plancherel's Formula?</a></p>
<p>I get it until in the third line he practically says that $\int _{- \infty}^{+\infty} e^{i(\omega - \omega')t} dt= 2 \pi \delta(\omega - \omega')$.</p>
<p>I mean, I would understand if we were integrating over a period of length $2 \pi$, but here the integration is over $\mathbb{R}$. </p>
<p>P.S. I would have asked this directly to the author of the post, but it's been over a year since he last logged in.</p>
| user121330 | 178,570 | <p>Supposing that $\omega = \omega'$, it's clear that we don't expect the integral to converge - the Dirac's Delta function is infinite at zero:</p>
<p>\begin{equation}\tag{1}
\int_{-\infty}^\infty e^{i ( \omega - \omega )t} dt=\int_{-\infty}^\infty 1\, dt = \infty
\end{equation}</p>
<p>In the case that $\omega \neq \omega'$, neither the positive nor the negative sides converge separately, but they don't have to. </p>
<p>$$\begin{eqnarray}
\int_{-\infty}^\infty e^{i u} du &=& i \int_{-\infty}^\infty \sin(u) du + \int_{-\infty}^\infty \cos(u) du \\
\int_{-\infty}^\infty \sin(u)\, du &=& \lim_{a \rightarrow \infty} \int_{-a}^a \sin(u)\, du \\
&=& \lim_{a \rightarrow \infty} - \cos(a) + \cos(-a)\\
&=& \lim_{a \rightarrow \infty} 0 \\
&=& 0 \\
\int_{-\infty}^\infty \cos(u)\, du &=& \int_{-\infty + \frac{\pi}{2}}^{\infty + \frac{\pi}{2}} \cos \left(x - \frac{\pi}{2} \right) dx \\
&=& \int_{-\infty}^{\infty} \sin(x) dx \\
&=& 0 \\
\int_{-\infty}^\infty e^{i u} du &=& 0 + i 0 = 0
\end{eqnarray}$$</p>
<p>This may feel uncomfortable but it's as rigorous as $a - a + b - b = 0$.</p>
<p>The final ambiguity is the value of $\infty$ from Equation 1. Without getting into the weeds of how one defines frequency (which moves that $2\pi$ all over the place), we know from the definition of the Fourier Transform that both:
$$\begin{eqnarray}
\mathcal{F}(g(t)) &=& G(\omega) \\
\mathcal{F}(G(\omega)) &=& g(t)
\end{eqnarray}$$</p>
<p>Since $\mathcal{F}(\mathcal{F}(g(t))) = \mathcal{F}(G(\omega)) = g(t)$, we may equivalently show that</p>
<p>$$\mathcal{F}(2 \pi \delta(\omega - \omega')) = \int_{-\infty}^{\infty} e^{i \omega t} \delta(\omega - \omega') d\omega = e^{-i \omega' t} = g(t)$$</p>
<p>by the <a href="http://mathworld.wolfram.com/DeltaFunction.html" rel="nofollow noreferrer">sifting property</a> which is what we sought.</p>
|
2,809,686 | <p>Let $S=\{1,2,3,\ldots,20\}$. Find the probability that a subset of three numbers chosen from the set $S$ contains no two consecutive numbers.</p>
<p>I am having trouble counting the required number of subsets.</p>
| true blue anil | 22,388 | <p>$\underline{Another\; approach}$</p>
<p>For any chosen subset of $3$, there will be $17$ left unchosen, and the three chosen must have come from any $3$ of $18$ gaps (including ends) marked with an uparrow,</p>
<p>$\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\bullet\uparrow\quad$thus $\dfrac{\binom{18}{3}}{\binom{20}{3}}$</p>
|
409,220 | <p>$$f(x,y)=6x^3y^2-x^4y^2-x^3y^3$$
$$\frac{\partial f}{\partial x}=18x^2y^2-4x^3y^2-3x^2y^3$$
$$\frac{\partial f}{\partial y}=12x^3y-2x^4y-3x^3y^2$$
The points at which both partial derivatives are equal to $0$ are $(3,2)$, $(x,0)$, and $(0,y)$, where $x,y$ are any real numbers. Now I find the second derivatives
$$\Delta_1=\frac{\partial^2 f}{\partial x^2}=36xy^2-12x^2y^2-6xy^3$$
$$\frac{\partial^2 f}{\partial y^2}=12x^3-2x^4-6x^3y$$
$$\frac{\partial^2 f}{\partial x \partial y}=\frac{\partial^2 f}{\partial y \partial x} = 36x^2y-8x^3y-9x^2y^2$$
$$\Delta_2=\begin{vmatrix}\frac{\partial^2 f}{\partial x^2}&\frac{\partial^2 f}{\partial y\partial x}\\\frac{\partial^2 f}{\partial x\partial y}& \frac{\partial^2 f}{\partial y^2} \end{vmatrix}$$
After plugging in the point $(3,2)$ we get $\Delta_1<0$ and $\Delta_2>0$, so $(3,2)$ is a local maximum. But when I plug in $(x,0)$ and $(0,y)$ I obviously get $\Delta_1=0$ and $\Delta_2=0$, and I can't tell, using Sylvester's criterion, whether those points are minima, maxima, or neither. What should I do?</p>
| colormegone | 71,645 | <p>If we go back to the function and its first and second derivatives, we see that they can be factored as</p>
<p>$$ f(x, \ y) \ = \ x^3 \ y^2 \ ( \ 6 \ - \ x \ - \ y \ ) \ \ , $$</p>
<p>$$ f_x \ = \ x^2 \ y^2 \ ( \ 18 \ - \ 4x \ - \ 3y \ ) \ \ , \ \ f_y \ = \ x^3 \ y \ ( \ 12 \ - \ 2x \ - \ 3y \ ) \ \ , $$</p>
<p>$$ f_{xx} \ = \ 6 \ x \ y^2 \ ( \ 6 \ - \ 2x \ - \ y \ ) \ \ , \ \ f_{yy} \ = \ 2 \ x^3 \ ( \ 6 \ - \ x \ - \ 3y \ ) \ \ , \ \ \text{and} $$</p>
<p>$$ f_{xy} \ = \ x^2 \ y \ ( \ 36 \ - \ 8x \ - \ 9y \ ) \ \ . $$</p>
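The factored forms above are easy to sanity-check; this Python sketch (my addition) compares one of them against a central finite difference of $f$ at an arbitrary point and then runs the second-derivative test at $(3,2)$:

```python
def f(x, y):
    return 6 * x**3 * y**2 - x**4 * y**2 - x**3 * y**3

def fxx(x, y):
    return 6 * x * y**2 * (6 - 2 * x - y)

def fyy(x, y):
    return 2 * x**3 * (6 - x - 3 * y)

def fxy(x, y):
    return x**2 * y * (36 - 8 * x - 9 * y)

# check the factored f_xx against a central finite difference
h, x0, y0 = 1e-4, 1.3, 0.7
num_fxx = (f(x0 + h, y0) - 2 * f(x0, y0) + f(x0 - h, y0)) / h**2
assert abs(num_fxx - fxx(x0, y0)) < 1e-3

# second-derivative test at the critical point (3, 2)
d1 = fxx(3, 2)
d2 = fxx(3, 2) * fyy(3, 2) - fxy(3, 2) ** 2
print(d1, d2)  # -144 and 11664: d1 < 0 and d2 > 0, so a local maximum
```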
<p>Because we are dealing with factors in the function which are power functions with exponents of three and higher, we can expect that "second derivative tests" may not be much help where the base of the power function becomes zero. So we will need to look to the "behavior" of the function (and possibly its derivatives) in the vicinity, which is something of what <strong>john</strong> suggests.</p>
<p>If we look along the $ \ x-$ axis, that is, the set of points $ \ (c, \ 0) \ $ , we can investigate the behavior of the function $ \ f(c, \ y) \ = \ c^3 \ y^2 \ ( \ 6 \ - \ c \ - \ y \ ) \ = \ (6 - c) \ c^3 \ y^2 \ - \ c^3 \ y^3 \ $ . Close in to that axis, the quadratic term is dominant, so the function "behaves like" $ \ (6 - c) \ c^3 \ y^2 \ $ . Hence, the surface described by $ \ f(x, \ y) \ $ has an "upward-opening" parabolic "cross-section" for $ \ 0 \ < \ x \ < \ 6 \ $ , and "downward-opening" for $ \ x \ > \ 6 \ $ and about the negative $ \ x-$ axis (the change at $ \ x \ = \ 6 \ $ comes of "crossing" the line $ \ x \ + \ y \ = \ 6 \ $ , on which the function is also zero) . The graph below gives an idea of what this looks like.</p>
<p><img src="https://i.stack.imgur.com/W4N52.png" alt="enter image description here"></p>
<p><em>views of the surface $ \ f(x, \ y) \ $ , "sighting" down the positive $ \ x-$ axis "short of" $ \ x \ = \ 6 \ $ , and a side-view of the negative $ \ x-$ axis portion</em></p>
<p>So the points on the $ \ x-$ axis other than $ \ (0, \ 0) \ $ and $ \ (6, \ 0) \ $ are either relative maxima or relative minima.</p>
<p>Performing a similar analysis near the $ \ y-$ axis, we have </p>
<p>$$ f(x, \ c) \ = \ x^3 \ c^2 \ ( \ 6 \ - \ x \ - \ c \ ) \ = \ (6 - c) \ c^2 \ x^3 \ - \ c^2 \ x^4 \ \sim \ (6 - c) \ c^2 \ x^3 \ \ , $$</p>
<p>where the cubic term now dominates. So the surface clearly has (close to) odd symmetry immediately about the $ \ y-$ axis everywhere except $ \ (0, \ 0) \ $ and $ \ (0, \ 6) \ $ , making all of those locations saddle points.</p>
<p><img src="https://i.stack.imgur.com/E5Dsa.png" alt="enter image description here"></p>
<p><em>the view along the $ \ y-$ axis "short of" $ \ y \ = \ 6 \ $ , beyond which the direction of the "cubic cross-section" would be reversed</em></p>
<p>We have passed over the character of the origin and the points $ \ ( 0, \ 6) \ $ and $ \ (6, \ 0 ) \ $ , but we now have enough information to describe them as well. For a surface representing a function of two variables, if the symmetry about a point is <em>even</em> and the "direction of concavity" is <em>the same</em> in both perpendicular directions, then the point will be a relative extremum; otherwise, it is a saddle point. (This can be extended to functions with more variables.)</p>
<p>We have found that the conditions for relative extrema are not met, so these three points are all saddle points. A graph of the surface near the origin illustrates the situation:</p>
<p><img src="https://i.stack.imgur.com/QJkHO.png" alt="enter image description here"></p>
<p>We can make this a bit more evident by considering cross-sections through the surface along the "vertical" planes $ \ y \ = \ x \ $ and $ \ y \ = \ -x \ $ . In these planes, we have the functions</p>
<p>$$ f(x, \ x) \ = \ 6 x^5 \ - \ 2 x^6 \ \ , \ \ f(x, \ -x) \ = \ 6 x^5 \ \ . $$</p>
<p>Both of these functions have odd (or nearly so) symmetry close to the origin, so the direction of concavity reverses on passing through the origin for both, producing a saddle point there. With somewhat more work, we determine that there is similar behavior about $ \ ( 0, \ 6) \ $ and $ \ (6, \ 0 ) \ $ .</p>
<p>[Note that the function $ f(x, \ x) \ = \ 6 x^5 \ - \ 2 x^6 \ $ is zero at $ \ (3, \ 3) \ $ , which is a point on the "zero-line" $ \ x \ + \ y \ = \ 6 \ $ . It has a maximum at $ \ (\frac{5}{2}, \ \frac{5}{2}) \ $ , along a "ridge" formed in the surface, which includes the relative maximum you found nearby at $ \ (3, \ 2) \ $ . ]</p>
|
3,154,407 | <p>I would like to define a function whose domain is any multiset of real numbers and image is a real number.</p>
<p>To my understanding, the domain of a function that can be applied to any set of real numbers is the power set <span class="math-container">$\mathcal{P}(\mathbb{R})$</span>. Is that correct? If yes, is there an equivalent "power multiset", and what is its notation?</p>
| benlaug | 655,808 | <p>In the article <em>Mathematics of Multisets</em> by A. Syropoulos, this is defined as follows:</p>
<blockquote>
<p>Assume that <span class="math-container">$A$</span> is a set, then <span class="math-container">$\mathcal{P}^A$</span> is the set of all multisets which have <span class="math-container">$A$</span> as their support set.</p>
</blockquote>
|
2,619,131 | <p>How one can prove the following inequality?</p>
<p>$$58x^{10}-42x^9+11x^8+42x^7+53x^6-160x^5+118x^4+22x^3-56x^2-20x+74\geq 0$$ </p>
<p>I plotted the graph on Wolfram Alpha and found that the inequality seems to hold. I was unable to represent the polynomial as a sum of squares. </p>
<p>It seems tedious to locate the zeros of the derivative numerically and check that the values near the local minima are positive in order to show that the inequality really holds everywhere.</p>
| Michael Rozenberg | 190,319 | <p>For $x<0$ it's obvious.</p>
<p>But for $x\geq0$ we obtain:
$$58x^{10}-42x^9+11x^8+42x^7+53x^6-160x^5+118x^4+22x^3-56x^2-20x+74=$$
$$=(x^3-x^2-x+1)(58x^7+16x^6+85x^5+85x^4+207x^3+47x^2)+$$
$$+287x^4-138x^3-103x^2-20x+74>0,$$
where $$287x^4-138x^3-103x^2-20x+74=$$
$$=(16x^2-4x-5)^2+(31x^4-10x^3+x^2)+(40x^2-60x+49)>0.$$</p>
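Both algebraic identities can be verified mechanically: each side of the first identity is a polynomial of degree $10$, so agreement at more than $10$ points proves it. A Python sketch (my addition) using exact integer arithmetic:

```python
def p(x):
    return (58*x**10 - 42*x**9 + 11*x**8 + 42*x**7 + 53*x**6
            - 160*x**5 + 118*x**4 + 22*x**3 - 56*x**2 - 20*x + 74)

def quartic(x):
    return 287*x**4 - 138*x**3 - 103*x**2 - 20*x + 74

def rhs(x):
    factor1 = x**3 - x**2 - x + 1  # = (x-1)^2 (x+1), nonnegative for x >= 0
    factor2 = (58*x**7 + 16*x**6 + 85*x**5 + 85*x**4
               + 207*x**3 + 47*x**2)
    return factor1 * factor2 + quartic(x)

def quartic_as_squares(x):
    return ((16*x**2 - 4*x - 5)**2
            + (31*x**4 - 10*x**3 + x**2)
            + (40*x**2 - 60*x + 49))

for x in range(12):  # 12 points suffice for a degree-10 identity
    assert p(x) == rhs(x)
    assert quartic_as_squares(x) == quartic(x)
print("identities verified")
```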
|
760,767 | <p>I don't understand the last part of this proof:</p>
<p><a href="http://www.proofwiki.org/wiki/Intersection_of_Normal_Subgroup_with_Sylow_P-Subgroup" rel="nofollow">http://www.proofwiki.org/wiki/Intersection_of_Normal_Subgroup_with_Sylow_P-Subgroup</a></p>
<p>where they say: $p \nmid \left[{N : P \cap N}\right]$, thus $P \cap N$ is a Sylow $p$-subgroup of $N$. I don't see why this implication is true. On the other hand, I understand that $P$ being a Sylow $p$-subgroup of $G$ implies that $p \nmid [G : P]$, since $[G:P]=[G:N_G(P)][N_G(P):P]$ and $p$ divides neither of these two factors. So what I don't understand is why the converse implication holds, that is, why $p \nmid [G : P]$ implies that $P$ is a Sylow $p$-subgroup of $G$.</p>
| user141421 | 141,421 | <p>Your first formula is incorrect. For instance, consider the line $y=x$ and you want to get the length of the line segment from $x=0$ to $x=1$. The length is $\sqrt2$, and the equation in polar coordinates is $\theta=\dfrac{\pi}4$. If we use the first formula, we get the length to be $0$.</p>
<p>In terms of $x$, $y$, we have
$$S = \int \sqrt{(dx)^2+(dy)^2} = \int \sqrt{1+y'^2} dx$$
Setting $x=r \cos(t)$ and $y=r\sin(t)$, we get
$$dx = dr \cos(t) - r \sin(t)dt$$
and
$$dy = dr \sin(t) + r \cos(t)dt$$
Hence,
$$(dx)^2 + (dy)^2 = (dr)^2 + (rdt)^2$$</p>
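The identity $(dx)^2 + (dy)^2 = (dr)^2 + (r\,dt)^2$ gives the polar arc-length formula $S=\int\sqrt{r'(t)^2+r(t)^2}\,dt$; the following Python sketch (my addition, with hypothetical helper names) compares it against a direct Cartesian chord-length computation on one turn of the spiral $r=t$:

```python
import math

def polar_length(r, dr, t0, t1, n=50_000):
    # S = integral of sqrt(r'(t)^2 + r(t)^2) dt, midpoint rule
    h = (t1 - t0) / n
    total = 0.0
    for i in range(n):
        t = t0 + (i + 0.5) * h
        total += math.sqrt(dr(t) ** 2 + r(t) ** 2)
    return total * h

def cartesian_length(r, t0, t1, n=50_000):
    # sum of chord lengths of the curve (r(t) cos t, r(t) sin t)
    h = (t1 - t0) / n
    pts = [(r(t0 + i * h) * math.cos(t0 + i * h),
            r(t0 + i * h) * math.sin(t0 + i * h)) for i in range(n + 1)]
    return sum(math.hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

r, dr = (lambda t: t), (lambda t: 1.0)
a = polar_length(r, dr, 0.0, 2 * math.pi)
b = cartesian_length(r, 0.0, 2 * math.pi)
print(a, b)  # both near 21.256, the length of one turn of r = t
```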
|
2,498,628 | <p>This was a question in our exam and I did not know which change of variables or trick to apply</p>
<p><strong>How can one show by inspection (a change of variables or some other trick) that</strong></p>
<p><span class="math-container">$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx \tag{I} $$</span></p>
<p>Computing the values of these integrals is routine, and the equality follows from their common value. But can we show the equality beforehand?</p>
<blockquote>
<p><strong>Note</strong>: I am not asking for computation since it can be found <a href="https://math.stackexchange.com/questions/187729/evaluating-int-0-infty-sin-x2-dx-with-real-methods?noredirect=1&lq=1.">here</a>
and we have as well that,
<span class="math-container">$$ \int_0^\infty \cos(x^2) dx = \int_0^\infty \sin(x^2) dx =\sqrt{\frac{\pi}{8}}$$</span>
and the result can be recovered here: <a href="https://math.stackexchange.com/questions/187729/evaluating-int-0-infty-sin-x2-dx-with-real-methods?noredirect=1&lq=1">Evaluating $\int_0^\infty \sin x^2\, dx$ with real methods?</a>.</p>
</blockquote>
<p>Is there any trick to prove the equality in (I) without computing the exact values of these integrals beforehand?</p>
| Guy Fsone | 385,707 | <h2><Here is what I found</h2>
<p>Employing the change of variables <span class="math-container">$2u =x^2$</span> (and renaming the new variable back to <span class="math-container">$x$</span>), we get <span class="math-container">$$I=\int_0^\infty \cos(x^2) dx =\frac{1}{\sqrt{2}}\int^\infty_0\frac{\cos(2x)}{\sqrt{x}}\,dx$$</span> <span class="math-container">$$ J=\int_0^\infty \sin(x^2) dx=\frac{1}{\sqrt{2}}\int^\infty_0\frac{\sin(2x)}{\sqrt{x}}\,dx $$</span></p>
<blockquote>
<p><strong>Summary:</strong> We will prove that <span class="math-container">$J> 0$</span> and <span class="math-container">$I> 0$</span>, so that proving <span class="math-container">$I=J$</span> is equivalent to proving <span class="math-container">$$ \color{blue}{0= (I+J)(I-J)=I^2 -J^2 =\lim_{t \to 0}\left(I_t^2-J^2_t\right)}$$</span>
Where, <span class="math-container">$$I_t = \int_0^\infty e^{-tx^2}\cos(x^2) dx~~~~\text{and}~~~ J_t = \int_0^\infty e^{-tx^2}\sin(x^2) dx$$</span>
<span class="math-container">$t\mapsto I_t$</span> and <span class="math-container">$t\mapsto J_t$</span> are clearly continuous due to the present of the integrand factor <span class="math-container">$e^{-tx^2}$</span>.</p>
</blockquote>
<p>However, By Fubini we have,</p>
<p><span class="math-container">\begin{split}
I_t^2-J^2_t&=& \left(\int_0^\infty e^{-tx^2}\cos(x^2) dx\right) \left(\int_0^\infty e^{-ty^2}\cos(y^2) dy\right) \\&-& \left(\int_0^\infty e^{-tx^2}\sin(x^2) dx\right) \left(\int_0^\infty e^{-ty^2}\sin(y^2) dy\right) \\
&=& \int_0^\infty \int_0^\infty e^{-t(x^2+y^2)}\cos(x^2+y^2)dxdy\\
&=&\int_0^{\frac\pi2}\int_0^\infty re^{-tr^2}\cos r^2 drd\theta\\&=&\frac\pi4 Re\left( \int_0^\infty \left[\frac{1}{i-t}e^{(i-t)r^2}\right]' dr\right)\\
&=&\color{blue}{\frac\pi4\frac{t}{1+t^2}\to 0~~as ~~~t\to 0}
\end{split}</span></p>
<p><strong>To end the proof:</strong> Let us show that <span class="math-container">$I> 0$</span> and <span class="math-container">$J> 0$</span>. Performing an integration by part we obtain
<span class="math-container">$$J = \frac{1}{\sqrt{2}} \int^\infty_0\frac{\sin(2x)}{x^{1/2}}\,dx=\frac{1}{\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{2\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{3/2}}\,dx\color{red}{>0}$$</span>
Given that <span class="math-container">$\color{red}{\sin 2x= 2\sin x\cos x =(\sin^2x)'}$</span>. Similarly we have,
<span class="math-container">$$I = \frac{1}{\sqrt{2}}\int^\infty_0\frac{\cos(2x)}{\sqrt{x}}\,dx=\frac{1}{2\sqrt{2}}\underbrace{\left[\frac{\sin 2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{1}{4\sqrt{2}} \int^\infty_0\frac{\sin 2 x}{x^{3/2}}\,dx\\=
\frac{1}{4\sqrt{2}}\underbrace{\left[\frac{\sin^2 x}{x^{1/2}}\right]_0^\infty}_{=0} +\frac{3}{8\sqrt{2}} \int^\infty_0\frac{\sin^2 x}{x^{5/2}}\,dx\color{red}{>0}$$</span></p>
<blockquote>
<p><strong>Conclusion:</strong> <span class="math-container">$~~~I^2-J^2 =0$</span>, <span class="math-container">$I>0$</span> and <span class="math-container">$J>0$</span> impliy <span class="math-container">$I=J$</span>. Note that we did not attempt to compute neither the value of <span class="math-container">$~~I$</span> nor <span class="math-container">$J$</span>.</p>
</blockquote>
<p><strong>Extra to the answer:</strong> Using the same technique as in the proof above, one easily arrives at <span class="math-container">$$\color{blue}{I_tJ_t = \frac\pi8\frac{1}{t^2+1}},$$</span> from which one gets the explicit value <span class="math-container">$$\color{red}{I^2=J^2= IJ = \lim_{t\to 0}I_tJ_t =\frac{\pi}{8}}.$$</span></p>
<p>See also <a href="https://www.maa.org/sites/default/files/pdf/cms_upload/Chen-CMJ0926332.pdf" rel="nofollow noreferrer">here</a> for more on (The Fresnel Integrals Revisited)</p>
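Both boxed formulas can be tested numerically at, say, $t=1$, where the damped integrands decay fast; a Python sketch (my addition, with a hypothetical helper name):

```python
import math

def damped(t, trig, upper=12.0, n=120_000):
    # midpoint rule for the integral of e^{-t x^2} * trig(x^2) over [0, upper]
    h = upper / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += math.exp(-t * x * x) * trig(x * x)
    return total * h

t = 1.0
I_t = damped(t, math.cos)
J_t = damped(t, math.sin)

print(I_t**2 - J_t**2, math.pi / 4 * t / (1 + t * t))  # both near 0.3927
print(I_t * J_t, math.pi / 8 / (1 + t * t))            # both near 0.1963
```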
|
3,792,954 | <p>For vector space <span class="math-container">$V$</span> and <span class="math-container">$v \in V$</span>, there is a natural identification <span class="math-container">$T_vV \cong V$</span> where <span class="math-container">$T_vV$</span> is the tangent space of <span class="math-container">$V$</span> at <span class="math-container">$v$</span>. Can somebody explain this identification to me? Thank you</p>
| Ivan | 157,467 | <p>One proof uses the addition formula for the hyperbolic tangent function:</p>
<p><span class="math-container">$$\tanh(a+b)=\frac{\tanh(a)+\tanh(b)}{1+ \tanh(a)\tanh(b)}$$</span></p>
<p>where <span class="math-container">$a,b \in \mathbb{R}$</span>.</p>
<p><span class="math-container">$\tanh$</span> maps <span class="math-container">$\mathbb{R}$</span> surjectively onto <span class="math-container">$(-1,1)$</span>. Hence for any <span class="math-container">$x,y \in (-1,1)$</span> there exist <span class="math-container">$a,b \in \mathbb{R}$</span> with <span class="math-container">$x=\tanh(a)$</span> and <span class="math-container">$y=\tanh(b)$</span>, which gives</p>
<p><span class="math-container">$$\frac{x+y}{xy+1}=\frac{\tanh(a)+\tanh(b)}{1+ \tanh(a)\tanh(b)}=\tanh(a+b)\in(-1,1).$$</span></p>
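The closure property behind this argument, that $(x,y)\mapsto\frac{x+y}{1+xy}$ maps $(-1,1)^2$ into $(-1,1)$, can be spot-checked together with the addition formula; a small Python sketch (my addition):

```python
import math
import random

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    x, y = math.tanh(a), math.tanh(b)
    z = (x + y) / (1 + x * y)
    # the addition formula says z = tanh(a + b), so z lies in (-1, 1)
    assert abs(z - math.tanh(a + b)) < 1e-9
    assert -1 < z < 1
print("checked 1000 random pairs")
```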
|
2,022,566 | <p>How can I calculate the below limit?
$$
\lim\limits_{x\to \infty} \left( \mathrm{e}^{\sqrt{x+1}} - \mathrm{e}^{\sqrt{x}} \right)
$$
In fact I know I should use L'Hospital's rule, but I do not know how to apply it here.</p>
| haqnatural | 247,767 | <p>Using the fact that $\lim _{ x\rightarrow 0 }{ \frac { { e }^{ x }-1 }{ x } } =1$, we can write </p>
<p>$$\lim _{ x\rightarrow \infty }{ \left( e^{ \sqrt { x+1 } }-e^{ \sqrt { x } } \right) } =\lim _{ x\rightarrow \infty }{ { e }^{ \sqrt { x } }\left( e^{ \sqrt { x+1 } -\sqrt { x } }-1 \right) } =\lim _{ x\rightarrow \infty }{ { e }^{ \sqrt { x } }\left( \frac { e^{ \frac { 1 }{ \sqrt { x+1 } +\sqrt { x } } }-1 }{ \frac { 1 }{ \sqrt { x+1 } +\sqrt { x } } } \right) } \cdot \frac { 1 }{ \sqrt { x+1 } +\sqrt { x } } =\\ =\lim _{ x\rightarrow \infty }{ \frac { { e }^{ \sqrt { x } } }{ \sqrt { x+1 } +\sqrt { x } } } =+\infty $$</p>
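Numerically the difference indeed tracks $\frac{e^{\sqrt x}}{2\sqrt x}$, which blows up; a quick Python check (my addition):

```python
import math

def diff(x):
    return math.exp(math.sqrt(x + 1)) - math.exp(math.sqrt(x))

for x in (100.0, 2_500.0, 10_000.0):
    approx = math.exp(math.sqrt(x)) / (2 * math.sqrt(x))
    print(x, diff(x), diff(x) / approx)  # the ratio tends to 1 as x grows
```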
|
64,925 | <p>Suppose $G$ is a group and $V$ an irreducible representation of $G$. One has that $V\otimes V\cong \Lambda^2(V)\oplus Sym^2(V)$. It is well-known that if the trivial representation appears as a subrepresentation of $\Lambda^2(V)$ then $V$ is of quaternionic type; while if the trivial representation appears as a subrepresentation of $Sym^2(V)$ then $V$ is a of real type. From this approach, it is clear that the trivial representation cannot appear in both $\Lambda^2(V)$ and $Sym^2(V)$.</p>
<p>What I am curious about is as follows:</p>
<blockquote>
<blockquote>
<p><b>Question:</b> Is there is some (relatively easy) way to see why the trivial representation cannot appear in both $\Lambda^2(V)$ and $Sym^2(V)$ without introducing the machinery of real/quaternionic types? </p>
</blockquote>
</blockquote>
<p>As a bit of motivation, if one looks at other subrepresentations, then for example if $G = G_2$ and $V_n$ is an $n$-dimensional irreducible representation of $G_2$, then $V_{64}$ appears as a subrepresentation of both $\Lambda^2(V_{27})$ and $Sym^2(V_{27})$. In particular it is possible for the intertwining number of $\Lambda^2(V)$ and $Sym^2(V)$ to be nonzero, but by the real vs. quaternionic characterization, the trivial representation is somehow special in that it cannot contribute to the intertwining number.</p>
| darij grinberg | 2,530 | <p>The trivial representation appears in $\wedge^2 V$ if and only if the representation $V^{\ast}$ has a $G$-invariant alternating bilinear form (because $\wedge^2 V\cong\wedge^2\left(\left(V^{\ast}\right)^{\ast}\right)$ is isomorphic to the $G$-module of all alternating bilinear forms on $V^{\ast}$, and $G$-invariant forms correspond to $G$-fixed elements).</p>
<p><em>ctrl+c & ctrl+v</em>:</p>
<p>The trivial representation appears in $\mathrm{Sym}^2 V$ if and only if the representation $V^{\ast}$ has a $G$-invariant symmetric bilinear form (because $\mathrm{Sym}^2 V\cong\mathrm{Sym}^2\left(\left(V^{\ast}\right)^{\ast}\right)$ is isomorphic to the $G$-module of all symmetric bilinear forms on $V^{\ast}$, and $G$-invariant forms correspond to $G$-fixed elements).</p>
<p>So we have to prove that for an irreducible representation $V$, the representation $V^{\ast}$ cannot have both a nontrivial $G$-invariant symmetric bilinear form and a nontrivial $G$-invariant alternating bilinear form. More generally, an irreducible representation $W$ of $G$ cannot have two linearly independent $G$-invariant bilinear forms. In fact, a bilinear form on the $k$-vector space $W$ can be seen as a homomorphism $W\to W^{\ast}$, and the bilinear form is $G$-invariant if and only if this homomorphism is $G$-equivariant. But Schur's lemma yields that there cannot be two linearly independent $G$-equivariant homomorphisms from $W$ to $W^{\ast}$, since both $W$ and $W^{\ast}$ are irreducible representations.</p>
<p>So much for the case when the ground field is algebraically closed (which is probably your case). In the general case, I think the assertion is not true, though I don't know a counterexample right out of my head.</p>
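Not part of the answer, but the dichotomy can be checked numerically through the Frobenius–Schur indicator $\nu(\chi)=\frac{1}{|G|}\sum_g \chi(g^2)$, which equals the multiplicity of the trivial representation in $Sym^2 V$ minus that in $\Lambda^2 V$; a small sketch (my example) for the 2-dimensional irreducible representation of the quaternion group $Q_8$:

```python
import numpy as np

# Quaternion units as 2x2 complex matrices; the eight matrices
# {±1, ±i, ±j, ±k} form the quaternion group Q8 acting irreducibly on C^2.
one = np.eye(2, dtype=complex)
qi = np.array([[1j, 0], [0, -1j]])
qj = np.array([[0, 1], [-1, 0]], dtype=complex)
qk = qi @ qj
Q8 = [s * g for s in (1, -1) for g in (one, qi, qj, qk)]

# Frobenius-Schur indicator nu = (1/|G|) sum_g chi(g^2), with chi = trace.
# nu equals (multiplicity of trivial in Sym^2 V) minus (in Lambda^2 V),
# and for an irreducible V it is 1, -1 or 0 -- never "both at once".
nu = sum(np.trace(g @ g) for g in Q8).real / len(Q8)
print(nu)  # -1.0: the 2-dimensional representation of Q8 is quaternionic
```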
|
114,909 | <p>What is known about normal subgroups of $SL_2(\mathbb{C}[X])$? Can one hope for a congruence subgroup property, i.e. that every (non-central) normal subgroup contains the kernel of the reduction modulo some ideal of $\mathbb{C}[X]$?</p>
| Jim Humphreys | 4,231 | <p>[EDIT] These groups have been studied for a long time from various viewpoints, so there is a long paper-trail. I'd emphasize however that working over the complex numbers is usually similar to working over an arbitrary infinite field.
Finite fields on the other hand occur more often in arithmetic contexts. </p>
<p>Concerning normal subgroups, the problem looks essentially hopeless, just as in the closely parallel situation of $\mathrm{SL}_2(\mathbb{Z})$ (or its quotient the modular group) first studied over a century ago. There the congruence subgroup problem has a strongly negative answer, in a sense eventually made precise by Serre. In the modern study of the congruence subgroup problem (Bass-Milnor-Serre and beyond), the most manageable situation involves higher rank algebraic groups. But Serre did obtain almost definitive results for rings of integers (and closely related rings of arithmetic interest). His paper is now available online through JSTOR: <em>Le probleme des groupes de congruence pour SL2.</em> Ann. of Math. (2) 92 1970 489–527.</p>
<p>While the congruence subgroup problem is studied mainly in the arithmetic setting, the normal subgroup problem is complicated in a similar way because the matrix group (in your case over a polynomial ring in one variable) has a huge number of such subgroups. In the case of the modular group, these arise from the group-theoretic structure as an amalgamated free product of two small cyclic groups. The analogue in your question is worked out in the paper by Hirosi Nagao, <em>On $\mathrm{GL}(2, K[x])$</em>, J. Inst. Polytech. Osaka City Univ. Ser. A 10 (1959) 117–121. In most of the literature, attention is focused instead on the somewhat better behaved classical (or Chevalley) groups of higher Lie rank than 1. </p>
<p>While the cumulative literature on matrix groups over rings of various types is enormous and spreads out into algebraic $K$-theory, I'll mention a few of the many people involved over the years: E. Abe, W. Klingenberg, A.W. Mason, A.A. Suslin, L.N. Vaserstein, N.A. Vavilov. Here is a randomly chosen paper: A.W. Mason, <em>Anomalous normal subgroups of $\mathrm{SL}_2(K[x])$.</em> Quart. J. Math. Oxford Ser. (2) 36 (1985), no. 143, 345–358. Vavilov has written many papers on general structure theory, in both Russian and English, including some long surveys. </p>
<p>P.S. Though I'm not a specialist on structure of linear groups over rings, all evidence I've seen confirms that determining the normal subgroups of groups $\mathrm{SL}_2(K[x])$ (or $\mathrm{GL}_2(K[x])$) when $K$ is a field is basically hopeless. Even though the ideal structure of the ring itself is reasonable (as for $\mathbb{Z}$), Nagao's old work already shows how far that is from locating all normal subgroups. Much of the literature is covered in the 1989 Springer book by Hahn-O'Meara <em>The Classical Groups and $K$-Theory</em> (Chapter 4). The papers by Costa-Keller get some mileage in special cases from the refined notion of "radix", but for the rings here every radix is actually an ideal unless $K$ has characteristic 2. </p>
|
2,330,514 | <p>Prove if n is a perfect square, n+2 is not a perfect square</p>
<blockquote>
<p>Assume n is a perfect square and n+2 is a perfect square (proof by
contradiction)</p>
<p>There exists positive integers a and b such that $n = a^2$ and $n + 2= b^2$</p>
<p>Then $a^2 + 2 = b^2$</p>
<p>Then $2 = b^2-a^2$</p>
<p>Then $2 = (b-a)(b+a)$</p>
<p>Then $b+a = 2$ and $b-a=1$ <strong><em>where does this come from?</em></strong></p>
<p>Then $a = 1/2$ and $b=3/2$</p>
<p>This is a contradiction because a and b should be positive integers.
Therefore if n is a perfect square, n+2 is not a perfect square.</p>
</blockquote>
<p>Where does the $b+a = 2$ and $b-a=1$ come from?</p>
| dxiv | 291,201 | <p>Alternative direct proof: if $n=k^2$ for $k \ge 1$ then $(k+1)^2 = k^2 + 2k + 1\gt k^2 + 2 = n+2 \gt k^2\,$ so $n+2$ lies strictly between squares of consecutive numbers, thus cannot be a perfect square.</p>
|
44,771 | <p>A capital delta ($\Delta$) is commonly used to indicate a difference (especially an incremental difference). For example, $\Delta x = x_1 - x_0$</p>
<p><strong>My question is: is there an analogue of this notation for ratios?</strong></p>
<p>In other words, what's the best symbol to use for $[?]$ in $[?]x = \dfrac{x_1}{x_0}$?</p>
| Luboš Motl | 10,599 | <p>The best symbol to use is $\exp\Delta\log$:
$$\exp\Delta\log x = \exp(\log x_1-\log x_0) = \frac{x_1}{x_0}.$$
The point is that this operation isn't "qualitatively different" from $\Delta$, so it may be reduced to $\Delta$. So far, I haven't used any new symbols but if you want some multiplicative new creative symbols, see e-percentages and units of evidence:</p>
<blockquote>
<p><a href="http://motls.blogspot.com/2010/01/exponential-percentages-useful-proposed.html">http://motls.blogspot.com/2010/01/exponential-percentages-useful-proposed.html</a><br>
<a href="http://motls.blogspot.com/2010/08/units-of-evidence.html">http://motls.blogspot.com/2010/08/units-of-evidence.html</a></p>
</blockquote>
<p>There is no compact symbol for $\exp\Delta\log$. If you want an ally who would endorse the idea to introduce such a symbol, you may count on me. What about $\Delta^\times$?</p>
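A two-line sketch (mine, not from the answer) of the identity above:

```python
import math

def mult_delta(x1: float, x0: float) -> float:
    """exp(Delta log): exp(log x1 - log x0), defined for positive inputs."""
    return math.exp(math.log(x1) - math.log(x0))

# The multiplicative analogue of Delta is exactly the ratio x1/x0.
print(mult_delta(6.0, 2.0))  # 3.0 up to floating-point rounding
```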
|
3,372 | <p>Have any questions first proposed on Mathoverflow attracted enough interest from experts in their field that solving them would be considered a significant advance?</p>
<p>I don't want to count problems that are known (or strongly suspected) to be at least as hard as some previously described problem, unless the version original to Mathoverflow is believed by experts to be a better formulation of the problem.</p>
<p>Of course, as Mathoverflow has been around for less than 10 years, no problem original to Mathoverflow can possibly be a long-standing open problem in its field. But I don't see any reason a question asked on MO can't be among the most interesting questions asked in the last ten years.</p>
| Joseph O'Rourke | 6,094 | <p>Here is a snapshot of the site analytics to which Martin refers:
<hr />
<a href="https://i.stack.imgur.com/zbrKj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/zbrKj.png" alt="Traffic"></a>
<hr />
So, roughly $40,000$ page views per day (in the summer).
Might be interesting to know what happened on May 14th and on August 8th...</p>
|
1,053,506 | <p>I had thought that the ultra-metric property was just a rule that someone made up, that if applied shows some bizarre behavior. I however came across these notes: <a href="http://www.math.harvard.edu/~thorne/all.pdf" rel="nofollow">Lecture notes</a> and it seems that the ultra-metric property is actually derived from a p-adic norm. I however don't understand what they are doing (Note Theorem 0.4 which leads to Definition 0.5).</p>
<p>So it seems as if the ultra-metric or non-Archimedean metric is actually derived from the Archimedean metric as applied to p-adic numbers?</p>
<p>Thanks,</p>
<p>Brian</p>
| David Holden | 79,543 | <p>you write:</p>
<p><b> it seems as if the ultra-metric or non-Archimedean metric is actually derived from the Archimedean metric as applied to p-adic numbers?</b></p>
<p>it would be more appropriate to say that the ultra-metric on $\mathbb{Q}$ is derived from <i>the same definitions</i> as the more intuitive Archimedean metric. as it turns out such metrics satisfy a strictly stronger form of the subadditivity condition than do Archimedean metrics, but this stronger condition does not have to be assumed, it "comes out in the wash", so to speak.</p>
<p>let us recap the definitions.</p>
<p>a norm $||.||$ defined on a field is a function whose image lies in the non-negative reals. it satisfies three conditions (e.g. in the reference you cite).</p>
<p>(a) faithful: $||a||=0 \Rightarrow a=0$</p>
<p>(b) subadditive: $||a+b|| \le ||a|| +||b||$</p>
<p>(c) multiplicative $||ab||=||a||\cdot ||b||$</p>
<p>you can easily see that for $\beta \gt 0$, if $||.||$ is a norm and $||.||^{\beta}$ is subadditive, then $||.||^{\beta}$ is also a norm. this gives an equivalence relation on norms.</p>
<p>Ostrowski's (1916) theorem gives a complete description of the equivalence classes of norms on $\mathbb{Q}$. it seems a result of some depth, yet its proof does not require anything more than high-school arithmetic. however few high-school students would find the result particularly interesting. it occupies a rather <i>dry</i> corner of real analysis.</p>
<p>as often the case, when examined carefully this dryness conceals something of considerable interest.</p>
<p>conceptually what we find is that there is a difference of type between norms according to their behaviour on the integers of $\mathbb{Q}$. </p>
<p>our untutored intuition of norm is that it measures an idea of "size" which derives from counting. this intuition of size characterizes the Archimedean notion of a metric. that means we would expect a 'larger' number to have a larger norm than a 'smaller' number.</p>
<p>it turns out that the crucial feature of an Archimedean norm is that if $a$ is a positive integer other than $0$ or $1$, then
$$
||a|| \gt 1
$$</p>
<p>the subtlety is that the norm axioms as stated have models in which, for <b>all</b> rational integers $n$ we have:
$$
||n|| \le 1
$$
this seems counterintuitive at first, and the properties of these non-Archimedean norms also seem counter-intuitive. but this only restates the fact that our initial ideas of what a norm is do not exhaust the possibilities expressed in the three axioms. </p>
<p>to think about non-Archimedean norms it is useful to jettison the naive idea that they measure <i>size</i>.</p>
|
3,712,128 | <p>I've been reading about combinatorial games, specifically about how positions in such games can be classified as either winning or losing positions. However, what I'm not sure about now is how I can represent draws using this: situations where neither player wins or loses. Do I use a winning position or a losing position to represent a position where a draw has occurred? Why is such a representation correct?</p>
<p>Thanks in advance. </p>
| hdighfan | 796,243 | <p><span class="math-container">$z=0$</span> and <span class="math-container">$x=2y+1$</span> gives the equation of a line; <span class="math-container">$x=2y+1$</span> is clearly a line in the <span class="math-container">$xy$</span>-plane, and <span class="math-container">$z=0$</span> forces us to stay in this plane.</p>
<p>For d), you'll notice that all three planes are parallel, since the left hand sides are multiples of each other. (Two of the planes are in fact the same; namely those given by the first and second equations.)</p>
|
87,948 | <p>Let $\mu_t, t \geq 0,$ be a family of probability measures on the real line. One can assume whatever one wishes about them, although typically they will be continuous in some topology (usually at least the topology of weak convergence of measures), and they will be absolutely continuous with respect to Lebesgue measure. The basic question is as follows:</p>
<p>Is there a Markov process $X_t$ such that its marginal distribution at each time is $\mu_t$?</p>
<p>An obvious example is when $$d \mu_t = \frac{e^{-x^2/2t}}{\sqrt{2 \pi t}} dx$$ and $\mu_0 = \delta_0$, in which case we know that Brownian motion is such a Markov process. I am curious to know if there is any general theory along these lines.</p>
<h2>Edit</h2>
<p>As per Byron's comment below, I would like the Markov process to be continuous. Ideally I would like to have an SDE description of the process. </p>
<p>The SDE description actually suggests one possible answer: simply compute and play with the time and space derivatives of the density function to see if they satisfy some sort of parabolic equation (like the heat equation), use this to get the adjoint of the generator, and then compute the generator itself. This is a very plausible option, but I was hoping that there might be something more systematic.</p>
| Fabrice Baudoin | 48,356 | <p>The following result was proved in </p>
<p>Kellerer, H.G. (1972) Markov-Komposition und eine Anwendung auf Martingale. Math. Ann., 198,
99–122.</p>
<p>Let $p(y, t)$ be a family of marginal densities, with finite first moment, such that
for $s < t$ the density at time $t$ dominates the density at time $s$ in the convex order. Then there exists a Markov process $X(t)$ with these marginal densities under which $X(t)$ is a submartingale. Furthermore, if the means are independent of $t$ then $X(t)$ is a martingale.</p>
<p>Constructive versions have then been studied by Madan and Yor in</p>
<p><a href="http://projecteuclid.org/download/pdf_1/euclid.bj/1078681382" rel="nofollow">http://projecteuclid.org/download/pdf_1/euclid.bj/1078681382</a></p>
<p>The SDE description you mention is actually discussed in Section 2.3 of the paper.</p>
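As a sanity check on the Brownian example from the question (my sketch, not from the answer): simulating $dX = dW$ with Euler–Maruyama and comparing the sample variance of the marginals against the $N(0,t)$ prediction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, T = 50_000, 100, 1.0
dt = T / n_steps

# Euler-Maruyama for dX = dW with X_0 = 0: each increment is N(0, dt),
# so X at step m is the sum of the first m increments.
X = np.cumsum(rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps)), axis=1)

# The marginal at time t should be N(0, t); compare sample variances.
var_half = X[:, n_steps // 2 - 1].var()   # time t = 0.5
var_one = X[:, -1].var()                  # time t = 1.0
print(var_half, var_one)  # close to 0.5 and 1.0
```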
|
101,098 | <p>I apologize in advance because I don't know how to enter code to format equations, and I apologize for how elementary this question is. I am trying to teach myself some differential geometry, and it is helpful to apply it to a simple case, but that is where I am running into a wall.</p>
<p>Consider $M=\mathbb{R}^2$ as our manifold of interest. I believe that the tangent space is also $\mathbb{R}^2$. From linear algebra, we know that a basis set for $\mathbb{R}^2$ is $$\left\{\left[\matrix{1\\0}\right], \left[\matrix{0\\ 1}\right] \right\}\;.$$</p>
<p>Now, from differential geometry, we are told that basis vectors are $\frac{d}{dx}$ and $\frac{d}{dy}$ where the derivatives are partial derivatives.</p>
<p>So my question is how does one obtain a two-component basis vector of linear algebra from a simple partial derivative?</p>
<hr>
<p>EDIT: Thanks to everyone for the replies. They have been very helpful, but thinking as a physicist, I would like to see how the methods of differential geometry could be used to derive the standard basis of linear algebra. It seems that there must be more to it than saying that there is an isomorphism between the space of derivatives at a point and $\mathbb{R}^n$ which sets up a natural correspondence between the basis vectors.</p>
<p>I may be completely off-the-wall wrong, but somehow I think that the answer involves partial derivatives of a local orthogonal coordinate system at a point $p$.</p>
| Dylan Moreland | 3,701 | <p>$\newcommand{\bR}{\mathbf{R}}$We can view the tangent space of $\bR^2$ at a point $P$ as the space of all <a href="http://en.wikipedia.org/wiki/Tangent_space#Definition_via_derivations">derivations</a> at $P$; that is, $T_P(\bR^2)$ is the set of linear maps $X\colon C^\infty(\bR^2) \to \bR$ satisfying a Leibniz rule
$$
X(fg) = (Xf)g(P) + f(P)Xg.
$$
This might seem strange, but there are some familiar elements in here: the <a href="http://en.wikipedia.org/wiki/Directional_derivative">directional derivatives</a> $D_v|_P$ at $P$ for various $v \in \bR^2$, which include $(\partial/\partial x)|_P$ and $(\partial/\partial y)|_P$. Moreover, one can show that the <em>canonical</em> map
$$
\bR^2 \to T_P(\bR^2) \quad \text{given by} \quad v \mapsto D_v|_P
$$
is an isomorphism. This gives a natural correspondence between $(1, 0)$ and $(\partial/\partial x)|_P$. Injectivity is easy, and surjectivity follows from Taylor's theorem in several variables; the details are in, for example, <a href="http://www.math.washington.edu/~lee/Books/smooth.html">Lee's book</a>.</p>
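Not from the answer: the correspondence $v \mapsto D_v|_P$ can be illustrated numerically with finite differences, here for the arbitrarily chosen function $f(x,y) = x^2 y$:

```python
import numpy as np

def directional_derivative(f, P, v, h=1e-6):
    """Central finite-difference approximation of (D_v f)(P)."""
    P, v = np.asarray(P, float), np.asarray(v, float)
    return (f(*(P + h * v)) - f(*(P - h * v))) / (2 * h)

f = lambda x, y: x**2 * y
P = (2.0, 3.0)

# D_{(1,0)} recovers d/dx (= 2xy = 12 at P), D_{(0,1)} recovers d/dy (= x^2 = 4),
# and v -> D_v is linear: D_{(1,1)} = D_{(1,0)} + D_{(0,1)}.
dx = directional_derivative(f, P, (1, 0))
dy = directional_derivative(f, P, (0, 1))
dxy = directional_derivative(f, P, (1, 1))
print(dx, dy, dxy)  # approximately 12, 4, 16
```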
|
1,076,974 | <p>Does anyone know how I can determine the equation of the 3D object below? (Maybe there's a program that can do it?) I am looking for a formula to define this 3D object, but am having trouble finding one. </p>
<p>(If you can imagine the 2D object you see revolved about the x-axis, that is the 3D object I'm referring to.) Btw the z-axis can't be seen because the view of the object is head-on and the object is symmetrical with respect to the x-axis. Thank you.</p>
<p><img src="https://i.stack.imgur.com/q5dFT.jpg" alt="enter image description here"></p>
| Alex Silva | 172,564 | <p>If you are looking for an object similar to that in your figure, trace in spherical coordinates (see <a href="http://en.wikipedia.org/wiki/File:3D_Spherical_2.svg" rel="nofollow">http://en.wikipedia.org/wiki/File:3D_Spherical_2.svg</a>)
$$r = a\cdot \sin(\phi)\cos(\theta),$$ where $a$ is a positive constant, $-\frac{\pi}{2} \leq \theta \leq \frac{\pi}{2}$, and $0 \leq \phi \leq \pi.$</p>
|
681,737 | <p>What is the simplest way we can find which one of $\cos(\cos(1))$ and $\cos(\cos(\cos(1)))$ [in radians] is greater without using a calculator [pen and paper approach]? I thought of using some inequality relating $\cos(x)$ and $x$, but do not know anything helpful.
We can use basic calculus. Please help. </p>
| user44197 | 117,158 | <p>Because $\infty$ is not part of the real numbers. So set
$$
K_\alpha = [\alpha, \infty)
$$</p>
|
1,600,597 | <p>I'm currently going through Spivak's calculus, and after a lot of effort, I still can't seem to figure this one out.</p>
<p>The problem states that you need to prove that $x = y$ or $x = -y$ if $x^n = y^n$</p>
<p>I tried to use the formula derived earlier for $x^n - y^n$ but that leaves either $(x-y) = 0$ or $(x^{n-1}+x^{n-2}y+\cdots+xy^{n-2}+y^{n-1}) = 0$, and I'm not sure how to proceed from there.</p>
| Community | -1 | <p>The factorization of $x^k-y^k$ is known to be $(x-y)(x^{k-1}+x^{k-2}y+x^{k-3}y^2+\cdots+y^{k-1})$, as you can verify by direct multiplication (all but two terms cancel in pairs).</p>
<p>Then $x^{2k}-y^{2k}=(x^2-y^2)(x^{2k-2}+x^{2k-4}y^2+x^{2k-6}y^4+\cdots+y^{2k-2})$.</p>
<p>As all terms have an even exponent in the second factor, the latter cannot equal zero, and </p>
<p>$$x^{2k}=y^{2k}\iff x^2=y^2.$$</p>
|
926,804 | <p>Is there a word for the quality of a number to be either positive or negative? Consider this question:</p>
<p><em>What's the ... (sign/positivity/negativity, but a word that could describe either) of number <strong>x</strong>?</em></p>
<p>Also, is there an all-encompassing word for the sign put in front of a number (-5 or +5)? Word that describes both a plus and a minus sign?</p>
| 43Quintillionaire | 567,812 | <p>It does help to have more specific technical terms like polarity, rather than sign, in the same way that it helps sailors to use port and starboard rather than left and right. The word sign IS correct, and it doesn't cause confusion like "left" would to sailors, but its many other common usages make it less valuable. (Also, people in any specialty have an almost compulsive need to abbreviate words that are common to their trade, and I believe this very human need to have a lingo is a coping strategy to help chunk large amounts of ideas and data.)</p>
|
3,069,262 | <p>Given some quadrilateral <span class="math-container">$Q \subset \mathbb R^2$</span> defined by the vertices <span class="math-container">$P_i = (x_i,y_i), i=1,2,3,4$</span> (you can assume they are in positive orientation), is there a function <span class="math-container">$f: \mathbb R^2 \to \mathbb R^2$</span> that is particularly easy to compute which satisfies </p>
<p><span class="math-container">$$f(Q) \subseteq U \quad \text{ and } \quad f(\mathbb R^2 \setminus Q) \subseteq \mathbb R^2 \setminus U?$$</span></p>
<p>Here <span class="math-container">$U = [0,1]\times[0,1]$</span> denotes the unit square (or <span class="math-container">$U=[-1,1]\times[-1,1]$</span> if you prefer)</p>
<p>My first attempt was using a function $f(x,y) := (a + bx + cy +dxy, a' + b'x+c'y +d'xy)$ (known as <s>the 2d perspective transformation</s> bilinear interpolation), but determining the coefficients $a,b,c,\ldots, d'$ requires inverting a $4\times 4$ matrix to solve two linear systems of equations.</p>
<p>EDIT: The actual 2d perspective transformation as described <a href="https://math.stackexchange.com/a/339033/109451">here</a> only produces the desired result if $Q$ is <em>convex</em>, which is not necessarily the case.</p>
| Aphelli | 556,825 | <p>Use the addition formula and simplify the <span class="math-container">$\cosh(\tanh^{-1}(\cdot))$</span>, <span class="math-container">$\sinh(\tanh^{-1}(\cdot))$</span>, your integrand becomes <span class="math-container">$2\cosh(3x)(1-3x^2)+4x\sinh(3x)$</span>.</p>
<p>Note that <span class="math-container">$2\cosh(3x)(1-3x^2)-4x\sinh(3x)$</span> is the derivative of the function <span class="math-container">$\frac{2}{3}\sinh(3x)(1-3x^2)$</span> so its integral is <span class="math-container">$\frac{4}{9e}(e^2-1)$</span>. </p>
<p>So it remains to integrate <span class="math-container">$8x\sinh(3x)$</span>. By parts, this is <span class="math-container">$\frac{16}{9}\cosh(1)-\frac{8}{3}\int{\cosh(3x)}=\frac{8}{9e}(e^2+1)-\frac{8}{9e}(e^2-1)$</span>. </p>
<p>Summing yields finally <span class="math-container">$\frac{4e^2+12}{9e}$</span>. </p>
|
268,152 | <p>I often see proofs that claim to be <em>by induction</em>, but where the variable we induct on <em>doesn't</em> take values in $\mathbb{N}$ but only in some set $\{1,\ldots,m\}$.</p>
<p>Imagine for example that we have to prove an equality that encompasses $n$ variables on each side, where $n$ can <em>only</em> range through $n\in\{1,\ldots,m\}$ (imagine that for $n>m$ the function relating the variables on one of the sides isn't well-defined): For $n=1$ imagine that the equality is easy to prove by manipulating the variables algebraically and that for <em>some</em> $n\in\{1,\ldots,m\}$ we can also show the equation holds provided it holds for $n\in\{1,\ldots,n-1\}$. Then we have proved the equation for every $n\in\{1,\ldots,m\}$, <em>but</em> does this qualify as a <strong>proof by induction</strong>?</p>
<p>(Correct me if I'm wrong: If the equation were indeed definable and true for every $n\in\mathbb{N}$ we could - although we are only interested in the case $n\in\{1,\ldots,m\}$ - "extend" it to $\mathbb{N}$ and then use "normal" induction to prove it holds for every $n\in \mathbb{N}$, since then it would also hold for $n\in\{1,\ldots,m\}$ .)</p>
| Nameless | 28,087 | <p>This "form" of induction is called <a href="http://en.wikipedia.org/wiki/Finitistic_induction" rel="nofollow">finite or finitistic induction</a>. If we want to prove a property $p$ holds $\forall x\in A$ where $A$ is a finite set $A=\left\{x_1,x_2,...,x_n\right\}$, then we just have to check if $p(x_1),p(x_2),...,p(x_n)$ are true. If $A=\left\{0,1,...,m\right\}$ then we can also perform finite induction.</p>
<p>As you said, finite induction is much weaker and more restricted than infinite induction. In fact if we want to prove a property holds for a finite number of natural numbers and we can prove that it holds in $\mathbb{N}$ by (infinite) induction, then it holds automatically for these numbers as well.</p>
<p>Use of finite induction is not necessary to prove a property holds for a finite set, as we can check if every element of a finite set has a certain property. Infinite induction, however, is a very important axiom in Peano Arithmetic and in many cases necessary to prove properties of all natural numbers.</p>
|
2,405,905 | <p>Let $R$ be a commutative semi-local ring (finitely many maximal ideals) such that $R/P$ is finite for every prime ideal $P$ of $R$; then is it true that $R$ is an Artinian ring? From the assumed condition, we get that $R$ has Krull dimension $0$; so it is enough to ask: Is $R$ a Noetherian ring? From the semi-local and $0$ Krull dimension conditions, it also follows that $R$ has finite spectrum. But I am unable to say whether all this really implies $R$ is Noetherian or not.</p>
| rschwieb | 29,335 | <p>Take $V=\oplus_{i=1}^\infty F_2$ and form the ring </p>
<p>$$
R= \left\{\begin{bmatrix}a&v\\0&a\end{bmatrix}\middle|\,a\in F_2, v\in V\right\}
$$</p>
<p>It isn't noetherian because the image of $V$ contains infinite ascending chains of ideals. It's also local (with residue field $F_2$) and $0$-dimensional.</p>
|
615,275 | <p>So I'm making a starship bridge game where the game is rendered using a 2-D Cartesian grid for positioning logic. The player has only the attributes of position and an arbitrary look-at angle (currently in degrees). A "view-port" determines whether a planet is within an angular difference of $45^\circ$ so that it can render the planet. My problem is finding the formula for the appropriate x-coordinate on the "view-port". So far I have </p>
<p>$x = \frac{\text{View Width}}{2} - \frac{\text{View Width}}{2}\times (\text{Angular Difference})$</p>
<p>where the angular difference is converted to a number between 0.0 and 1.0 in magnitude, and can be negative or positive</p>
<p><img src="https://i.stack.imgur.com/oBgfJ.jpg" alt="enter image description here"></p>
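The formula above can be sketched in code (the names `view_width` and `angular_diff_deg`, and the clamping to the $45^\circ$ half-cone, are my own assumptions about the setup):

```python
def viewport_x(view_width, angular_diff_deg, half_fov_deg=45.0):
    """Map an angular difference (in degrees) to an x coordinate on the
    view-port, following the formula in the question: the difference is
    normalised by the half field of view to lie in [-1, 1], then mapped
    linearly so 0 degrees lands at the centre.  Which screen edge each
    sign maps to depends on your angle orientation -- an assumption here.
    Returns None when the planet is outside the viewing cone."""
    if abs(angular_diff_deg) > half_fov_deg:
        return None
    t = angular_diff_deg / half_fov_deg
    return view_width / 2 - (view_width / 2) * t

print(viewport_x(800, 0.0))    # 400.0 (dead centre)
print(viewport_x(800, 45.0))   # 0.0   (one edge)
print(viewport_x(800, -45.0))  # 800.0 (other edge)
print(viewport_x(800, 60.0))   # None  (outside the 45-degree cone)
```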
| Sammy Black | 6,509 | <p>The inequality
$$
\sin \left( x - \frac{\pi}{3} \right) > \frac{\sqrt{2}}{2}
$$
implies that
$$
\frac{\pi}{4} < x - \frac{\pi}{3} < \frac{3\pi}{4}.
$$
Adding $\frac{\pi}{3}$ to all three expressions yields
$$
\frac{7\pi}{12} < x < \frac{13\pi}{12}.
$$
If you impose the initial restriction, then the upper bound is $\pi$.</p>
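A numerical spot-check of the solution interval $\left(\frac{7\pi}{12}, \frac{13\pi}{12}\right)$ (sampling, not a proof; my sketch):

```python
import math

def holds(x):
    return math.sin(x - math.pi / 3) > math.sqrt(2) / 2

lo, hi = 7 * math.pi / 12, 13 * math.pi / 12

# Every interior sample satisfies the inequality; points just outside fail.
inside = all(holds(lo + (hi - lo) * i / 1000) for i in range(1, 1000))
outside = holds(lo - 0.01) or holds(hi + 0.01)
print(inside, outside)  # True False
```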
|
398,176 | <p>I had a calculus course this semester in which I was taught that integrating the cross-sectional area gives the volume:</p>
<p>$$V = \int\limits_a^b {A(x)dx}$$</p>
<p>But this doesn't seem to work with the square. Since the area of the square is $x^2$, we have $A(x) = {x^2}$, and then: </p>
<p>$$V = \int\limits_{ - r}^r {{x^2}dx} = \left[ {\frac{{{x^3}}}{3}} \right]_{ - r}^r = \frac{{{r^3}}}{3} - \frac{{ - {r^3}}}{3} = \frac{2}{3}{r^3}$$</p>
<p>It's clear that this is not the volume of the cube. Why is this the case? Am I misunderstanding something?</p>
| vadim123 | 73,324 | <p>The real problem here is not the endpoints of your integral, it's that the function you are integrating is not constant with respect to the variable of integration. A cube has the same cross section everywhere, while in your original integral the cross section is bigger at the ends than in the middle. See @response's solution for the right way to set this up.</p>
|
398,176 | <p>I had a calculus course this semester in which I was taught that integrating the cross-sectional area gives the volume:</p>
<p>$$V = \int\limits_a^b {A(x)dx}$$</p>
<p>But this doesn't seem to work with the square. Since the area of the square is $x^2$, we have $A(x) = {x^2}$, and then: </p>
<p>$$V = \int\limits_{ - r}^r {{x^2}dx} = \left[ {\frac{{{x^3}}}{3}} \right]_{ - r}^r = \frac{{{r^3}}}{3} - \frac{{ - {r^3}}}{3} = \frac{2}{3}{r^3}$$</p>
<p>It's clear that this is not the volume of the cube. Why is this the case? Am I misunderstanding something?</p>
| marty cohen | 13,079 | <p>Actually, you have two errors there:</p>
<p>The minor one is that you seem to want a cube of side $2r$,
since your integral goes from $-r$ to $r$.</p>
<p>The major error, as others have said, is that you are finding the volume of a pyramid, not a cube. Actually, since you are integrating from $-r$ to $r$,
you are finding the volume of <em>two</em> pyramids, one upside-down, touching at their points. That is why you get $2r^3/3$, when the volume of one pyramid is $r^3/3$.</p>
|
398,176 | <p>I had a calculus course this semester in which I was taught that integrating the cross-sectional area gives the volume:</p>
<p>$$V = \int\limits_a^b {A(x)dx}$$</p>
<p>But this doesn't seem to work with the square. Since the area of the square is $x^2$, we have $A(x) = {x^2}$, and then: </p>
<p>$$V = \int\limits_{ - r}^r {{x^2}dx} = \left[ {\frac{{{x^3}}}{3}} \right]_{ - r}^r = \frac{{{r^3}}}{3} - \frac{{ - {r^3}}}{3} = \frac{2}{3}{r^3}$$</p>
<p>It's clear that this is not the volume of the cube. Why is this the case? Am I misunderstanding something?</p>
| Euro Micelli | 78,887 | <p>Can you tell what your proposed solution represents? I think it's far more interesting (and important) for you to be able to look at the integral, and be able tell what it means.</p>
<p>If you think of $x^2$ in terms of the area of a square of sides $x$, then your proposed integral calculates the volume of a solid that consists of two identical square-base pyramids (like an Egyptian pyramid) placed tip-against-tip, fashioning an hourglass-looking solid (but with flat sides). The tips of both pyramids meet at the origin and each pyramid is $r$-units high. Do you see why that is the case? </p>
<p>The volume also happens to be the same as that of an irregular octahedron of edge $r$ that measures $2r$ vertex-to-vertex. It's trivial to go from square hourglass to the octahedron by grabbing one of the pyramids and flipping it over, so they meet at the base instead of the vertexes. Note that this is a "pointy" octahedron, not a regular (platonic) one because the faces of the pyramids don't come out to be regular triangles (here's how I convinced myself of that: the section of the pyramids is a triangle of base $r$ and height $r$. That's "taller" than a regular triangle; the faces of the pyramids are like taking one of these sections and stretching it outward by pulling the base, so they are necessarily even "taller" than a regular triangle).</p>
<p>But $x^2$ doesn't have to be the area of a square (in fact, it doesn't have to be an area at all; we just like geometric interpretations because they are fun).</p>
<p>Among other things, $x^2$ can be the area of a circle of radius $x\over\sqrt{\pi}$ (do you see why?). In that interpretation, your integral calculates the volume of a <em>cone</em> (again, in the shape of an hourglass). The volume of a cone is indeed $1\over 3$ of the volume of its enclosing cylinder, which in this case would have a base of $r^2$ (the largest of the circles) and height $2r$ (one $r$ for each half of the "hourglass"). That is, ${1\over3} \times (r^2 \times 2r) = {2\over 3} r^3$.</p>
<p>What's so special about the circle and square interpretation of the formula? Absolutely nothing! Even after deciding that we are looking only at geometric interpretations of the integral, the integral makes <em>no statement about the shape</em> of the figure whose area the integrand calculates. As long as the area equals to the square of its height from the horizontal plane, the figure can be shaped like a circle, a square, or a flattened pretzel, and the volume of the extruded hourglass-looking volume will always be the same.</p>
<p>Playing with these geometric interpretations is a great way to become comfortable with integrals, and should teach you over time a sense for when "something looks wrong".</p>
|
730,929 | <p>Let $E$ be an extension field of a finite field $F$ , where $F$ has $q$ elements. Let $a \in E$ be algebraic over $F$ of degree $n$. Prove that $F(a)$ has $q^n$ elements.</p>
<p>I am not sure how to do this one, but furthermore, what does $a$ being algebraic over $F$ of degree $n$ mean? Does it mean the polynomial $a$ solves in $F$ is of degree $n$?</p>
| Robert Lewis | 67,071 | <p>$a \in E$ is algebraic of degree $n$ over $F$ if there is a polynomial $f(x) \in F[x]$ with $\deg f = n$ and $f(a) = 0$ and there is no non-trivial polynomial $g(x) \in F[x]$ with $\deg g < \deg f$ and $g(a) = 0$. It is easy to see we may assume $f(x)$ to be monic, that is the leading coefficient of $f(x)$ is $1$. We point out that such an $f(x)$ must be irreducible in $F[x]$, for if $f(x) = f_1(x) f_2(x)$ with the $f_i(x)$ non-constant, then $0 = f(a) = f_1(a)f_2(a)$, so $f_j(a) = 0$ for least one of $j = 1, 2$. But $\deg f_1, \deg f_2 < \deg f$, contradicting the minimality of $\deg f$ among polynomials $h(x)$ such that $h(a) = 0$. We will revisit the irreducibility of $f$ in what follows.</p>
<p>This being said, consider the set $F(a) \subset E$. Clearly $F(a)$ is a vector space over $F$; furthermore we have the elements $1, a, a^2, \ldots, a^{n - 1} \in F(a)$. I claim they form a basis for the vector space $F(a)$. For $F(a)$ is the smallest subfield of $E$ containing both $F$ and $a$; thus $p(a) \in F(a)$ for any $p(x) \in F[x]$, since $p(a)$ is formed by repeatedly applying the field operations of $E$ to $a$ and the coefficients of $p(x) = \sum_0^{\deg p} p_j x^j$, $p_j \in F$, $0 \le j \le \deg p$. But by the division algorithm for polynomials, which holds in $F[x]$, we may write $p(x) = f(x)q(x) + r(x)$, where $q(x), r(x) \in F[x]$ and $\deg r < \deg f$. Thus $p(a) = f(a)q(a) + r(a) = r(a)$ since $f(a) = 0$, showing that in fact $p(a)$ is always expressible as a polynomial in $a$ of degree less than $n$. Furthermore, $(r(a))^{-1}$ is also given by $s(a)$ for some $s(x) \in F[x]$ with $\deg s < \deg f$. To see this, we exploit the irreducibility of $f(x)$ shown in the above. $f(x)$ irreducible implies $(f(x), r(x)) = 1$, since there is no non-constant $d(x) \in F[x]$ such that $d(x) \mid f(x)$; $(f(x), r(x)) = 1$ implies that there are $g(x), s(x) \in F[x]$ with $g(x)f(x) + s(x)r(x) = 1$; evaluating at $a$ yields $s(a)r(a) = 1$ since $f(a) = 0$; by what we have seen, we may assume $\deg s < \deg f$. These considerations conspire together to allow us to conclude that in fact the field $F(a)$ consists precisely of those elements of $E$ of the from $p(a)$, where $p(x) \in F[x]$ with $\deg p < \deg f$. It is now clear that $\text{span} \{1, a, a^2, \ldots, a^{n - 1} \} = F(a)$; to show that $\{1, a, a^2, \ldots, a^{n - 1} \}$ is a basis, it merely remains to show its elements are linearly independent. 
But if there are $c_i \in F$, $0 \le i \le n - 1$, not all zero, with $\sum_0^{n - 1} c_i a^i = 0$, then $a$ is a zero of the polynomial $c(x) = \sum_0^{n - 1} c_i x^i \in F[x]$; but $\deg c \le n - 1 < n = \deg f$, so we can rule out the existence of such a $c(x) \in F[x]$; thus we must have $c_i = 0$, $0 \le i \le n - 1$; the set $\{1, a, a^2, \ldots, a^{n - 1} \}$ is linearly independent and hence a basis for $F(a)$; every element of $F(a)$ may thus be uniquely written $\sum_0^{n - 1} c_i a^i$ for some suitable collection of $c_i \in F$, $1 \le i \le n - 1$. But there are precisely $q$ choices for each $c_i$ which may be selected independently of one another; thus there are $q^n$ possible elements $\sum_0^{n - 1} c_i a^i$ in $F(a)$; $F(a)$ has precisely $q^n$ elements. <strong><em>QED.</em></strong></p>
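<p>A tiny enumeration makes the count concrete. This is my own toy example, not part of the proof: take $F=\mathbb{F}_3$ and let $a$ be a root of the irreducible $f(x)=x^2+1$, so $q=3$, $n=2$, and $F(a)$ should have $3^2=9$ elements:</p>

```python
from itertools import product

p = 3   # work in GF(3)[x] / (x^2 + 1); x^2 + 1 has no root mod 3, hence irreducible

def mul(u, v):
    # (u0 + u1*a)(v0 + v1*a) with a^2 = -1, coefficients mod p
    return ((u[0]*v[0] - u[1]*v[1]) % p, (u[0]*v[1] + u[1]*v[0]) % p)

elements = list(product(range(p), repeat=2))   # all c0 + c1*a
print(len(elements))                           # → 9 = q^n

# sanity check that this really is a field: every nonzero element is invertible
nonzero = [e for e in elements if e != (0, 0)]
print(all(any(mul(u, v) == (1, 0) for v in nonzero) for u in nonzero))  # → True
```
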
<p>Hope this helps. Cheerio,</p>
<p>and as always,</p>
<p><em><strong>Fiat Lux!!!</strong></em></p>
|
2,508,508 | <p>Let $x_1$ be in $R$ with $ x_1>1$, and let $x_{k+1}=2- \frac{1}{x_k}$ for all $k$ in $N$. Show that the sequence $(x_k)_k$ is monotone and bounded and find its limit.</p>
<p>I am not sure how to start this problem.</p>
| John Griffin | 466,397 | <p>Use induction to show that $x_k > 1$ for every $k$. This shows that the sequence is bounded from below.</p>
<p>Next show that the sequence is decreasing by applying induction to prove $x_{k+1}\le x_k$ for every $k$.</p>
<p>Then the monotone convergence theorem implies that the sequence has a limit, say $x:=\lim x_k$. Taking the limit in the relation
$$
x_{k+1}=2-\frac{1}{x_k},
$$
we obtain
$$
x=2-\frac{1}{x}.
$$
Solve this equation for $x$ to find the limit.</p>
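<p>A quick numerical sketch of all three claims (the starting value $x_1=5$ is an arbitrary choice with $x_1>1$):</p>

```python
xs = [5.0]                      # any x_1 > 1 works
for _ in range(200):
    xs.append(2 - 1/xs[-1])

print(all(x > 1 for x in xs))                    # → True: bounded below by 1
print(all(b <= a for a, b in zip(xs, xs[1:])))   # → True: monotone decreasing
print(abs(xs[-1] - 1) < 1e-2)                    # → True: approaching the limit x = 1
```

<p>Note the convergence is slow, since $x=1$ is a double root of $x=2-\frac1x$.</p>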
|
3,521,224 | <p>Let <span class="math-container">$(U_1,U_2,...) , (V_1,V_2,...)$</span> be two independent sequences of i.i.d. Uniform (0, 1) random variables.
Define the stopping time
<span class="math-container">$N = \min\{n\geqslant 1 : U_n \leqslant V^2_n\}$</span>.</p>
<p>Obtain <span class="math-container">$P(N = n)$</span> for <span class="math-container">$n = 1,2,\ldots$</span> and <span class="math-container">$P(V_N \leqslant v)$</span> for <span class="math-container">$0 \leqslant v \leqslant 1$</span>.</p>
<p>I know that I should use conditioning in order to get the probability. </p>
<p>I also know that I have to check: if <span class="math-container">$U_1 \leqslant V_1^2$</span> then <span class="math-container">$N=1$</span>.</p>
| Math1000 | 38,584 | <p>For <span class="math-container">$0<v<1$</span> we have <span class="math-container">$$\mathbb P(V_1^2\leqslant v) = \mathbb P(V_1\leqslant \sqrt v) = \sqrt v$$</span> and hence <span class="math-container">$V_1^2$</span> has density <span class="math-container">$f_{V_1^2}(v)=\frac12 v^{-\frac12}\mathsf 1_{(0,1)}(v)$</span>.</p>
<p>For positive integers <span class="math-container">$n$</span> we have
<span class="math-container">$$
\{N=n\} = \{U_n\leqslant V_n^2\}\cap\bigcap_{i=1}^{n-1}\{U_i>V_i^2\}.
$$</span>
We compute
<span class="math-container">\begin{align}
\mathbb P(U_1\leqslant V_1^2) &= \iint_{\{u\leqslant v\}} f_{U_1,V_1^2}(u,v)\ \mathsf d(u\times v)\\
&= \int_0^1\int_0^v\frac12 v^{-\frac12}\ \mathsf du\ \mathsf dv\\
&= \frac13.
\end{align}</span>
it follows that
<span class="math-container">$$
\mathbb P(N=n) = \left(\frac23\right)^{n-1}\frac13,\ n=1,2,\ldots,
$$</span>
so that <span class="math-container">$N$</span> has geometric distribution with parameter <span class="math-container">$\frac13$</span>. Finally, for <span class="math-container">$0\leqslant v\leqslant 1$</span>, <span class="math-container">$$\mathbb P(V_N\leqslant v)=\mathbb P(V_1\leqslant v\mid U_1\leqslant V_1^2)=\frac{\int_0^v t^2\,\mathsf dt}{1/3}=v^3,$$</span> so <span class="math-container">$V_N$</span> has density <span class="math-container">$3v^2$</span> on <span class="math-container">$(0,1)$</span>. (Conditioning on <span class="math-container">$\{U_n\leqslant V_n^2\}$</span> biases <span class="math-container">$V_N$</span> toward larger values, so it does <em>not</em> have the same distribution as <span class="math-container">$V_1$</span>.)</p>
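<p>A short Monte Carlo check (the sample size and seed below are arbitrary choices). Conditioning on <span class="math-container">$\{U_1\leqslant V_1^2\}$</span> gives <span class="math-container">$\mathbb P(V_N\leqslant v)=\frac{\int_0^v t^2\,\mathsf dt}{1/3}=v^3$</span>, hence <span class="math-container">$\mathbb E[V_N]=\int_0^1 v\cdot 3v^2\,\mathsf dv=\frac34$</span>:</p>

```python
import random
random.seed(0)

trials = 200_000
counts_N1, v_sum = 0, 0.0
for _ in range(trials):
    n = 1
    while True:
        u, v = random.random(), random.random()
        if u <= v*v:                # stop: U_n ≤ V_n^2
            break
        n += 1
    counts_N1 += (n == 1)
    v_sum += v                      # v is V_N at the stopping time

print(abs(counts_N1/trials - 1/3) < 0.01)   # → True: P(N = 1) = 1/3
print(abs(v_sum/trials - 3/4) < 0.01)       # → True: E[V_N] = 3/4
```
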
|
1,633,810 | <p>For which $a \in \mathbb{R}$ is the integral $\int_1^\infty x^ae^{-x^3\sin^2x}dx$ finite?</p>
<p>I've been struggling with this question. Obviously when $a<-1$ the integral converges, but I have no idea what happens when $a\ge -1 $.</p>
<p>Any help would be appreciated</p>
| Mark Viola | 218,419 | <p>Fix $0<\delta <\pi/2$. We can write the integral of interest $I(a)$ as</p>
<p>$$\begin{align}
I(a)&=\int_1^\infty x^ae^{-x^3\sin^2(x)}\,dx\\\\
&=\int_1^{\pi-\delta} x^a e^{-x^3\sin^2(x)}\,dx+\sum_{n=1}^\infty \int_{n\pi-\delta}^{n\pi+\delta}x^a e^{-x^3\sin^2(x)}\,dx+\sum_{n=1}^\infty \int_{n\pi+\delta}^{(n+1)\pi-\delta}x^a e^{-x^3\sin^2(x)}\,dx \tag 1
\end{align}$$</p>
<p>The term in $(1)$ that requires consideration is the series $\sum_{n=1}^\infty \int_{n\pi-\delta}^{n\pi+\delta}x^a e^{-x^3\sin^2(x)}\,dx$ since $\sin(x)=0$ for $x=n\pi$ and the exponential becomes $1$ there. </p>
<p>Note that we can write</p>
<p>$$\int_{n\pi-\delta}^{n\pi+\delta}x^a e^{-x^3\sin^2(x)}\,dx=\int_{-\delta}^\delta (x+n\pi)^a e^{-(x+n\pi)^3\sin^2(x)}\,dx \tag 2$$</p>
<p>Heuristically, for "small" $\delta$, we can approximate the integral in $(2)$ as </p>
<p>$$\begin{align}
\int_{-\delta}^\delta (x+n\pi)^a e^{-(x+n\pi)^3\sin^2(x)}\,dx&\approx (n\pi)^a \int_{-\delta}^\delta e^{-(n\pi)^3x^2}\,dx\\\\
&=(n\pi)^{a-3/2} \int_{-\delta(n\pi)^{3/2}}^{\delta (n\pi)^{3/2}} e^{-x^2}\,dx\\\\
&\le (n\pi)^{a-3/2}\sqrt \pi
\end{align}$$</p>
<p>Inasmuch as the series $\sum_{n=1}^{\infty}\frac{1}{n^{3/2-a}}$ converges precisely when $3/2-a>1$, i.e. when $a<1/2$, we conclude that the integral of interest $I(a)$ converges for $a<1/2$.</p>
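<p>The heuristic can be sanity-checked numerically: here is a plain-Python midpoint rule for a single bump, compared against the $(n\pi)^{a-3/2}\sqrt\pi$ estimate (the choices $n=20$, $a=0$ and window half-width $\delta=0.05$ are arbitrary):</p>

```python
import math

def bump(n, a, delta=0.05, steps=20_000):
    # midpoint rule for the bump integral in (2) over [-delta, delta]
    h = 2*delta / steps
    total = 0.0
    for k in range(steps):
        x = -delta + (k + 0.5)*h
        total += (x + n*math.pi)**a * math.exp(-(x + n*math.pi)**3 * math.sin(x)**2) * h
    return total

n, a = 20, 0.0
estimate = math.sqrt(math.pi) * (n*math.pi)**(a - 1.5)   # the Laplace-type estimate
print(abs(bump(n, a)/estimate - 1) < 0.05)               # → True
```
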
|
1,927,394 | <blockquote>
<p>Find the number of all positive continuous functions <span class="math-container">$f(x)$</span> on <span class="math-container">$\left[0,1\right]$</span> which satisfy <span class="math-container">$\displaystyle \int^{1}_{0}f(x)dx=1$</span>, <span class="math-container">$\displaystyle \int^{1}_{0}xf(x)dx=\alpha$</span> and <span class="math-container">$\displaystyle \int^{1}_{0}x^2f(x)dx=\alpha^2$</span>,</p>
<p>where <span class="math-container">$\alpha$</span> is a given real number.</p>
</blockquote>
<p><span class="math-container">$\bf{My\; Try:}$</span> Adding <span class="math-container">$(1)$</span> and <span class="math-container">$(3)$</span> and subtracting <span class="math-container">$2\times (2),$</span> we get <span class="math-container">$$\displaystyle \int^{1}_{0}(x-1)^2f(x)dx=(\alpha-1)^2.$$</span> Now how can I solve it after that? Thanks.</p>
| Sangchul Lee | 9,340 | <p>By the Cauchy-Schwarz inequality,</p>
<p>$$ \alpha^2 = \bigg( \int_{0}^{1} x f(x) \, dx \bigg)^2 \leq \bigg( \int_{0}^{1} f(x) \, dx \bigg)\bigg( \int_{0}^{1} x^2 f(x) \, dx \bigg) = \alpha^2. $$</p>
<p>Since the inequality is saturated, equality must hold in Cauchy-Schwarz. Equality forces the functions $\sqrt{f}$ and $x\sqrt{f}$ to be linearly dependent, i.e. $x\sqrt{f(x)}=\lambda\sqrt{f(x)}$ on $[0,1]$ for some constant $\lambda$; since $f>0$ this would mean $x\equiv\lambda$, which is impossible. Therefore no such function $f$ exists.</p>
<p><em>Remark.</em> Positivity of $f$ is essential in proving non-existence of such $f$. If the positivity of $f$ is dropped, then there are infinitely many solutions. You can even find a quadratic solution!</p>
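<p>Concretely, $\int_0^1 x^2f\,dx-\big(\int_0^1 xf\,dx\big)^2$ is the variance of a random variable with density $f$, and it is strictly positive when $f>0$; so $\int x^2 f = \alpha^2 = \big(\int xf\big)^2$ can never hold. A quick numerical sketch with the uniform density $f\equiv 1$ (plain-Python midpoint sums):</p>

```python
n = 100_000
h = 1.0 / n
xs = [(k + 0.5) * h for k in range(n)]   # midpoints of a uniform grid on [0, 1]

m1 = sum(x * h for x in xs)       # ∫ x f dx   with f ≡ 1, equals 1/2
m2 = sum(x*x * h for x in xs)     # ∫ x^2 f dx with f ≡ 1, equals 1/3
print(round(m2 - m1**2, 4))       # → 0.0833, the variance 1/12 > 0
```
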
|
2,645,948 | <p>I was studying neighbourhood methods from Overholt's book on analytic number theory (p. 42). There, to estimate $Q(x)=\sum_{n \leq x}\mu^2(n)$, they have used the statement that </p>
<p>$$\sum_{j^2\leq x} \mu(j)\left[\frac x {j^2}\right]=x\sum_{j\leq \sqrt x}\frac {\mu(j)} {j^2}+ O(\sqrt x).$$</p>
<p>I am not getting this statement as $[\frac x {j^2}]=\frac x {j^2}-\{\frac x {j^2}\}$ and $\{\frac x {j^2}\}\leq 1$. Can I get something from here?</p>
| Ethan Splaver | 50,290 | <p>Writing $M(x)=\sum_{n\leq x}\mu(n)$ and then applying Abel's summation formula gives us that:</p>
<p>$$Q(x)=\sum_{j^2\leq x} \mu(j)\left\lfloor\frac x {j^2}\right\rfloor=\sum_{n\leq \sqrt{x}} \mu(n)\left(\frac{x}{n^2}-\left\{\frac{x}{n^2}\right\}\right)=x\sum_{n\leq \sqrt{x}}\frac{\mu(n)}{n^2}-\sum_{n\leq \sqrt{x}}\mu(n)\left\{\frac{x}{n^2}\right\}\\=\frac{6}{\pi^2}x-\sum_{n\leq \sqrt{x}}\mu(n)\left(-1+\left\{\frac{x}{n^2}\right\}\right)-2x\int_{\sqrt{x}}^{\infty}\frac{M(t)}{t^3}dt$$</p>
<p>Therefore we can write:</p>
<p>$$Q(x)=\frac{6}{\pi^2}x-\sum_{n\leq \sqrt{x}}\mu(n)\left(\left\{\frac{x}{n^2}\right\}-1\right)-2x\int_{\sqrt{x}}^{\infty}\frac{M(t)}{t^3}dt\\\implies \left|Q(x)-\frac{6}{\pi^2}x\right|\leq \left|\sum_{n\leq \sqrt{x}}\mu(n)\left(\left\{\frac{x}{n^2}\right\}-1\right)\right|+2x\left|\int_{\sqrt{x}}^{\infty}\frac{M(t)}{t^3}dt\right|\\\leq \sum_{n\leq\sqrt{x}}\left|\mu(n)\right|+2x\int_{\sqrt{x}}^{\infty}\frac{\left|M(t)\right|}{t^3}dt\leq \sqrt{x}+2x\int_{\sqrt{x}}^{\infty}\frac{dt}{t^2}= 3\sqrt{x}\\\implies \left|\frac{6}{\pi^2}x-Q(x)\right|\leq 3\sqrt{x}\iff \left|\frac{6}{\pi^2}-\frac{Q(x)}{x}\right|\leq \frac{3}{\sqrt{x}}$$ where we used the trivial bound $|M(t)|\leq t$.</p>
<p>Which finally proves:</p>
<p>$$Q(x)=\sum_{j^2\leq x} \mu(j)\left\lfloor\frac x {j^2}\right\rfloor=\sum_{n\leq x}\mu(n)^2=\sum_{n\leq x}|\mu(n)|=\sum_{\substack{n\leq x\\ n{\small \text{ is squarefree}}}}1=\frac{6}{\pi^2}x+\mathcal{O}(x^{1/2})$$</p>
<p>Intuitively this makes sense from a probabilistic perspective: $Q(x)$ counts the square-free natural numbers $n\leq x$, and the natural density of positive integers not divisible by the square of a given prime $p$ is $\left(1-\frac{1}{p^2}\right)$. Since a natural number is square-free iff it is not divisible by the square of any prime, the density of square-free positive integers should roughly be $\prod_{p}\left(1-\frac{1}{p^2}\right)=\frac{6}{\pi^2}$, which aligns with the bound above, as we showed $Q(x)/x\sim \frac{6}{\pi^2}$. Also, for future reference, one typically writes $\lfloor x\rfloor$ instead of $\left[x\right]$ for the floor function applied to $x$, since the notation $\left[x\right]$ is occasionally used for the ceiling function $\lceil x\rceil$, which can cause ambiguity if you're not careful. </p>
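<p>The asymptotic $Q(x)=\frac{6}{\pi^2}x+\mathcal{O}(x^{1/2})$ is easy to check with a small sieve; a plain-Python sketch (the cutoff $x=10^6$ is arbitrary):</p>

```python
import math

def squarefree_count(x):
    sf = [True] * (x + 1)                 # sf[n] will mean "n is squarefree"
    for j in range(2, math.isqrt(x) + 1):
        for m in range(j*j, x + 1, j*j):  # strike out every multiple of a square
            sf[m] = False
    return sum(sf[1:])

x = 10**6
Q = squarefree_count(x)
print(round(Q / x, 4))                              # → 0.6079, compare 6/pi^2
print(abs(Q - 6*x/math.pi**2) <= 3*math.sqrt(x))    # → True: within the 3√x bound
```
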
|
937,064 | <p>The title pretty much says it all:</p>
<p>If supposing that a statement is false gives rise to a paradox, does this prove that the statement is true?</p>
<p><em>Edit:</em> Let me attempt to be a little more precise:</p>
<p>Suppose you have a proposition. Furthermore, suppose that assuming the proposition is false leads to a paradox. Does this imply the proposition is true?
In other words, can I replace the "contradiction" in "proof by contradiction" with "paradox." </p>
<p>This question might still be somewhat ambiguous; I'm reluctant to attempt to precisely define "paradox" here.
As a (somewhat loose) example however, consider some proposition whose negation leads to, for example, Russell's paradox. Would this prove that the proposition is true? </p>
| layman | 131,740 | <p>Yes. This is what is known as a <em>proof by contradiction.</em> When you want to prove a statement $P$ implies a statement $Q$ (i.e., you want to prove $P \implies Q$ is true), you always start by assuming $P$ is true.</p>
<p>Then, if you want to proceed by contradiction, you suppose $Q$ is false. Usually, if $P \implies Q$ is a true statement, then assuming that $Q$ is false will lead to a result that contradicts something about the assumption $P$.</p>
<p>Note that sometimes the contradiction you find doesn't contradict any assumptions in $P$ directly, but may contradict any background assumptions you have, such as assumptions about the space you are in in general.</p>
|
937,064 | <p>The title pretty much says it all:</p>
<p>If supposing that a statement is false gives rise to a paradox, does this prove that the statement is true?</p>
<p><em>Edit:</em> Let me attempt to be a little more precise:</p>
<p>Suppose you have a proposition. Furthermore, suppose that assuming the proposition is false leads to a paradox. Does this imply the proposition is true?
In other words, can I replace the "contradiction" in "proof by contradiction" with "paradox." </p>
<p>This question might still be somewhat ambiguous; I'm reluctant to attempt to precisely define "paradox" here.
As a (somewhat loose) example however, consider some proposition whose negation leads to, for example, Russell's paradox. Would this prove that the proposition is true? </p>
| MJD | 25,554 | <p>Certainly not. Suppose you have a red box and a green box, exactly one of which contains a treasure, and the following two statements about the boxes:</p>
<ol>
<li>The treasure is in the green box.</li>
<li>Exactly one of these statements is true.</li>
</ol>
<p>Assuming that the treasure is in the green box results in an easy paradox: statement 2 can't be either true or false. Many people erroneously conclude from this that the treasure must be in the red box. However, this conclusion is invalid; the treasure could be in either box.</p>
|
2,989,494 | <p>I am trying to derive properties of natural log and exponential just from the derivative properties.</p>
<p>Let <span class="math-container">$f : (0,\infty) \to \mathbb{R}$</span> and <span class="math-container">$g : \mathbb{R} \to \mathbb{R}$</span>.
Without knowing or stating that <span class="math-container">$f = \ln(x)$</span> and <span class="math-container">$g = e^x$</span>, but only knowing that <span class="math-container">$f'(x) = \frac{1}{x}$</span> and <span class="math-container">$g'(x) = g(x)$</span> how do I show from just the derivative that:</p>
<ul>
<li><span class="math-container">$f(xy) = f(x)+f(y)$</span> and <span class="math-container">$g(x+y) = g(x)g(y)$</span></li>
<li><span class="math-container">$f(x^y) = yf(x)$</span></li>
<li><span class="math-container">$\lim_{x \to \infty}f(x) = \infty$</span> and <span class="math-container">$\lim_{x \to 0}f(x) = -\infty$</span></li>
<li><span class="math-container">$\lim_{x\to \infty}g(x) = \infty$</span> and <span class="math-container">$\lim_{x \to -\infty}g(x) = 0$</span></li>
<li><span class="math-container">$g \circ f$</span> is an identity of <span class="math-container">$(0,\infty)$</span> and <span class="math-container">$f \circ g$</span> is an identity on <span class="math-container">$\mathbb{R}$</span>.</li>
</ul>
<p>I know how to do that in general, but using <span class="math-container">$\textbf{only}$</span> the derivative, I am a bit stuck, and would appreciate help. The text that inspired me to do this problem (I expanded it a bit), also expects me not to use integration.</p>
| Fimpellizzeri | 173,410 | <p>You can do it 'by contradiction'.
If both <span class="math-container">$x<50$</span> and <span class="math-container">$y<50$</span>, then <span class="math-container">$x+y < 50 + 50 \implies x+y < 100$</span>, which contradicts our initial hypothesis of <span class="math-container">$x+y \geqslant 100$</span>.</p>
|
2,348,131 | <p>In our class, we encountered a problem that is something like this: "A ball is thrown vertically upward with ...". Since the motion of the object is rectilinear and is a free fall, we all convene with the idea that the acceleration $a(t)$ is 32 feet per second square. However, we are confused about the sign of $a(t)$ as if it positive or negative. </p>
<p>Now, various references stated that if we let the upward direction to be positive then $a$ is negative and if we let downward to be the positive direction, then $a$ is positive. The problem in their claim is that they did not explain well how they arrived with that conclusion. </p>
<p>My question now is that, why is the acceleration $a$ negative if we choose the upward direction to be positive. Note: I need a simple but comprehensive answer. Thanks in advance. </p>
| stity | 285,341 | <p>Since it's a free fall, the acceleration is:
$$\vec{a}(t) = \vec{g}$$</p>
<p>Since it is rectilinear you get :</p>
<p>$$a(t) = \vec{a}(t)\cdot\vec{z} =\vec{g}\cdot\vec{z}$$</p>
<p>So if $\vec{g}$ and $\vec{z}$ point in the same direction, i.e. both downward, you get $a(t) = g$.</p>

<p>And if $\vec{g}$ and $\vec{z}$ point in opposite directions, i.e. if $\vec{z}$ is upward ($\vec{g}$ always points downward), you get $a(t) = - g$.</p>
<p>It is more natural to take $\vec{z}$ upward because you will have positive $z(t)$ when the ball is up.</p>
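<p>To see the sign convention in action, here is a small numerical sketch with my own example values ($v_0=48\ \mathrm{ft/s}$, $g=32\ \mathrm{ft/s^2}$, upward taken as positive so $a=-g$):</p>

```python
g, v0, dt = 32.0, 48.0, 0.001   # ft/s^2, ft/s; upward is positive, so a(t) = -g

t, y, v, peak = 0.0, 0.0, v0, 0.0
while y >= 0:
    v += -g * dt        # the minus sign is the point: gravity pulls downward
    y += v * dt
    t += dt
    peak = max(peak, y)

print(round(peak))   # → 36, matches v0**2 / (2*g) = 48**2 / 64 ft
print(round(t))      # → 3,  matches the flight time 2*v0 / g seconds
```
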
|
2,822,355 | <p>this is a problem from one of the former exams from ordinary differential equations.</p>
<p>Find a solution to this equation:</p>
<p>$$x''''+6x''+25x=t\sinh t\cdot \cos(2t)$$</p>
<p>of course the only problem will be to find a particular solution, since the linear part is very simple to solve. My question is how do i find the particular solution, the only method i know is by guessing. What I can add, as i put this function into wolfram, he showed very wild particular solution, which is:
$$
- \frac{1}{320} e^{-t} t^2 \sin(2 t) + \frac{1}{320} e^{t} t^2 \sin(2 t)
- \frac{1}{160} e^{-t} t^2 \cos(2 t) - \frac{1}{160} e^{t} t^2 \cos(2 t) \\
+ \frac{1}{320} e^{-t} t \sin(2 t) + \frac{1}{320} e^{t} t \sin(2 t)
- \frac{13}{3200} e^{-t} t \sin(2 t) \sin(4 t) \\
+ \frac{13}{3200} e^{t} t \sin(2 t) \sin(4 t)
- \frac{21}{32000} e^{-t} \sin(2 t) \sin(4 t) \\
- \frac{21}{32000} e^{t} \sin(2 t) \sin(4 t)
- \frac{1}{160} e^{-t} t \cos(2 t) \\
+ \frac{1}{160} e^{t} t \cos(2 t)
- \frac{13}{3200} e^{-t} t \cos(2 t) \cos(4 t) \\
+ \frac{13}{3200} e^{t} t \cos(2 t) \cos(4 t)
- \frac{21}{32000} e^{-t} \cos(2 t) \cos(4 t) \\
- \frac{21}{32000} e^{t} \cos(2 t) \cos(4 t)
+ \frac{1}{800} e^{-t} t \sin(2 t) \cos(4 t) \\
+ \frac{1}{800} e^{t} t \sin(2 t) \cos(4 t)
- \frac{1}{800} e^{-t} t \sin(4 t) \cos(2 t) - \frac{1}{800} e^{t} t \sin(4 t) \cos(2 t) \\
- \frac{69}{64000} e^{-t} \sin(2 t) \cos(4 t)
+ \frac{69}{64000} e^{t} \sin(2 t) \cos(4 t) \\
+ \frac{69}{64000} e^{-t} \sin(4 t) \cos(2 t)
- \frac{69}{64000} e^{t} \sin(4 t) \cos(2 t)
$$
How does one find something like this during an exam? Is there a trick I am not aware of?</p>
| Aleksas Domarkas | 562,074 | <p>With the free CAS Maxima, the solution of $\;x''''+6x''+25x=t\,\sinh(t)\, \cos(2t)\;$ is
$$x=\frac{t\, \left( 5 t+3\right) \, {e^{t}} \sin{\left( 2 t\right) }}{1600}-\frac{t\, \left( 5 t-3\right) \, {e^{-t}} \sin{\left( 2 t\right) }}{1600}\\-\frac{t\, \left( 20 t-33\right) \, {e^{t}} \cos{\left( 2 t\right) }}{3200}-\frac{t\, \left( 20 t+33\right) \, {e^{-t}} \cos{\left( 2 t\right) }}{3200}\\+C_1{e^{t}} \sin{\left( 2 t\right) }+C_2{e^{-t}} \sin{\left( 2 t\right) }+C_3{e^{t}} \cos{\left( 2 t\right) }+C_4{e^{-t}} \cos{\left( 2 t\right) }$$</p>
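<p>One way to sanity-check this without symbolic software (and the "trick" one would use by hand) is to write the forcing as $t\sinh t\cos 2t=\operatorname{Re}\big[\tfrac t2 e^{\lambda t}\big]-\operatorname{Re}\big[\tfrac t2 e^{\mu t}\big]$ with $\lambda=1+2i$ and $\mu=-1+2i$, which are simple roots of the characteristic polynomial $p(r)=r^4+6r^2+25$. Writing $x_p=\operatorname{Re}[(at^2+bt)e^{\lambda t}]+\operatorname{Re}[(ct^2+dt)e^{\mu t}]$, undetermined coefficients require $2a\,p'(\lambda)=\tfrac12$ and $b\,p'(\lambda)+a\,p''(\lambda)=0$ (and the analogues at $\mu$). The complex amplitudes read off the Maxima answer above satisfy these exactly:</p>

```python
# Characteristic polynomial of x'''' + 6x'' + 25x = 0 and its first two derivatives
p   = lambda r: r**4 + 6*r**2 + 25
dp  = lambda r: 4*r**3 + 12*r
d2p = lambda r: 12*r**2 + 12

lam, mu = 1 + 2j, -1 + 2j   # t*sinh(t)*cos(2t) = Re[(t/2)e^{lam t}] - Re[(t/2)e^{mu t}]

# complex amplitudes read off the Maxima answer:
#   x_p = Re[(a t^2 + b t) e^{lam t}] + Re[(c t^2 + d t) e^{mu t}]
a, b = (-2 - 1j) / 320, (33 - 6j) / 3200
c, d = (-2 + 1j) / 320, (-33 - 6j) / 3200

checks = [
    p(lam), p(mu),                 # lam, mu are (simple) characteristic roots
    2*a*dp(lam) - 0.5,             # matches the (t/2) e^{lam t} part of the forcing
    b*dp(lam) + a*d2p(lam),        # no leftover e^{lam t} term
    2*c*dp(mu) + 0.5,              # matches the -(t/2) e^{mu t} part of the forcing
    d*dp(mu) + c*d2p(mu),
]
print(all(abs(z) < 1e-12 for z in checks))  # → True
```
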
|
3,171,152 | <p>Let gamma 1 be a straight line from -i to i and let gamma 2 be the semi-circle of radius 1 in the right half plane from -i to i.</p>
<p>Evaluate</p>
<p><span class="math-container">$$\int_{\gamma_1}f(z)dz$$</span></p>
<p>and <span class="math-container">$$\int_{\gamma_2}f(z)dz$$</span></p>
<p>where f(z)=complex conjugate(z)</p>
<p>And give a reason as to why the two answers are different.</p>
<hr>
<p><strong>My approach:</strong></p>
<p>Let <span class="math-container">$$\gamma_1=(1-t)(-i)+it = i2t-i|t\in[0,1]$$</span></p>
<p>and </p>
<p>Let <span class="math-container">$$\gamma_2=e^{i\theta} = \cos(\theta) + i\sin(\theta)|\frac{3\pi}{2}\leq\theta\leq\frac{\pi}{2}$$</span></p>
<p>Conjugate of <span class="math-container">$\gamma_1$</span> <span class="math-container">$$-i2t+i$$</span></p>
<p>Conjugate of <span class="math-container">$\gamma_2$</span> <span class="math-container">$$\cos(\theta) - i\sin(\theta)$$</span></p>
<p>Integrals</p>
<p><span class="math-container">$$\int_0^1-i2t+idt=0$$</span></p>
<p><span class="math-container">$$\int_\frac{3\pi}{2}^{\frac{\pi}{2}}\cos(\theta) - i\sin(\theta) = 2$$</span></p>
<p>Is the above work correct?</p>
<p>Also, would "the integrals are different because the paths/curves were parameterised differently" be a valid reason??</p>
| bsbb4 | 337,971 | <p><span class="math-container">$f(z) = \bar{z}$</span> doesn't satisfy the Cauchy-Riemann equations, so it's not holomorphic. Therefore we can't expect integrals along different paths to give the same value: Cauchy's integral theorem fails.</p>

<p>Note also that your computations leave out the factor <span class="math-container">$\gamma'(t)$</span>: by definition <span class="math-container">$\int_\gamma f(z)\,dz=\int f(\gamma(t))\,\gamma'(t)\,dt$</span>. With <span class="math-container">$\gamma_1(t)=-i+2it$</span> (so <span class="math-container">$\gamma_1'(t)=2i$</span>) one gets <span class="math-container">$\int_{\gamma_1}\bar z\,dz=\int_0^1(i-2it)\,2i\,dt=\int_0^1(4t-2)\,dt=0$</span>, and with <span class="math-container">$\gamma_2(\theta)=e^{i\theta}$</span> for <span class="math-container">$-\frac{\pi}{2}\leq\theta\leq\frac{\pi}{2}$</span> (so <span class="math-container">$\gamma_2'(\theta)=ie^{i\theta}$</span>) one gets <span class="math-container">$\int_{\gamma_2}\bar z\,dz=\int_{-\pi/2}^{\pi/2}e^{-i\theta}\,ie^{i\theta}\,d\theta=i\pi$</span>, not <span class="math-container">$2$</span>.</p>
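<p>For completeness, both integrals are easy to check numerically once the $\gamma'(t)$ factor in $\int_\gamma f(z)\,dz=\int_0^1 f(\gamma(t))\,\gamma'(t)\,dt$ is included (a plain-Python midpoint rule; the step count is arbitrary):</p>

```python
import cmath, math

def contour_integral(z, steps=20000):
    # midpoint rule for ∫_0^1 conj(z(t)) z'(t) dt, with z'(t) by centered difference
    h = 1.0 / steps
    total = 0j
    for k in range(steps):
        t = (k + 0.5) * h
        dz = (z(t + h/2) - z(t - h/2)) / h
        total += z(t).conjugate() * dz * h
    return total

g1 = lambda t: -1j + 2j*t                        # straight segment from -i to i
g2 = lambda t: cmath.exp(1j*math.pi*(t - 0.5))   # right half-circle from -i to i

print(abs(contour_integral(g1)) < 1e-9)                # → True: the integral is 0
print(abs(contour_integral(g2) - math.pi*1j) < 1e-6)   # → True: the integral is iπ
```
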
|
5,263 | <p>I recently tried to edit an old question <a href="https://mathoverflow.net/questions/39428/x-th-moment-method">x-th moment method</a> that had got bumped to the front page for other reasons. The post had an equation that was meant to be, and maybe at one point was, struck through, but it no longer is. That is, the source says <code><strike></strike></code>, but the rendered code has no strike-through. I tried <code>~~~~</code> and <code><s></s></code>, without luck, and eventually (since it seemed worse to have a known-wrong equation in the text without explicit indication) deleted it; but clearly this is not the best solution. How does one strike through in MO's flavour of Markdown?</p>
<p>EDIT: On experimentation, it seems to be about MathJax, not MarkDown: <strike>1 + 1 = 2</strike> versus <strike><span class="math-container">$1 + 1 = 2$</span></strike> (both surrounded by <code><strike></strike></code>). Is there a way to strike through an equation?</p>
| Calvin Khor | 70,388 | <p>Yes, there is <code>\require{cancel}\cancel{1+1=2}</code><span class="math-container">$$\require{cancel}\cancel{1+1=2}$$</span> and <code>\require{enclose}\enclose{horizontalstrike}{1+1=2}</code><span class="math-container">$$\require{enclose}\enclose{horizontalstrike}{1+1=2}$$</span> There are some other options in the "Crossing things out" <a href="https://math.meta.stackexchange.com/questions/5020/mathjax-basic-tutorial-and-quick-reference/13183#13183">answer</a> on the MathJax reference page of Math.SE (takes a while to load, so <a href="https://i.stack.imgur.com/UWknX.png" rel="nofollow noreferrer">here's</a> a screenshot.)</p>
<p><code>\require</code> works like <code>\usepackage</code>. It seems that unlike <code>\newcommand</code>, we only require one <code>\require</code> command for the whole page, but I'm not sure. We should probably refrain from this command in titles (though this is probably <a href="https://math.meta.stackexchange.com/questions/30386/should-titles-be-included-in-begingroup-endgroup-similarly-as-posts-and-commen">more an issue</a> for <code>\newcommand</code>.)</p>
|
2,280,133 | <p><em>I need help to understand, some steps of the proof of this theorem.</em> </p>
<p><strong>(Kolmogorov-M. Riesz-Fréchet)</strong> Let $\mathcal{F}$ be a bounded set in $L^p(\mathbb{R}^N)$ with $1\leq p < \infty$. Assume that </p>
<p>\begin{equation}
\lim\limits_{|h|\longrightarrow 0 }\|\tau_hf-f\|_p=0 \quad \text{uniformly in } f \in \mathcal{F},
\end{equation}</p>
<p>i.e.,
$\forall \varepsilon >0 \; \exists \delta >0$ such that $\|\tau_hf-f\|_p<\varepsilon \; \forall f \in \mathcal{F}, \; \forall h \in \mathbb{R}^N$ with $|h|<\delta$.
Then the closure of $\mathcal{F}_{|\Omega}$ in $L^p(\Omega)$ is compact for any measurable set $\Omega \subset \mathbb{R}^N$ with finite measure. </p>
<p><em>Well, you can find it in Haim Brezis, Functional Analysis, Sobolev Spaces and PDE. page 111</em></p>
<p>In step 1: We claim that
\begin{equation}
\|(\rho_n*f)-f\|_{L^p(\mathbb{R}^N)}\leq \varepsilon \quad \forall f \in \mathcal{F}, \ \forall n > 1/\delta.
\end{equation}</p>
<p>And we have,
\begin{equation*}
\begin{split}
|(\rho_n*f)(x)-f(x)| &\leq \int |f(x-y)-f(x)|\,\rho_n(y)\,dy\\
&\leq \left[ \int |f(x-y)-f(x)|^p\,\rho_n(y)\,dy \right]^{1/p}
\end{split}
\end{equation*}
by Hölder's inequality.</p>
<p><em>So, I can't understand how to use Hölder's inequality in the last step. Please help me!</em></p>
| MathRock | 417,393 | <p>$$\int|f(x-y)-f(x)|\rho_n(y)\,dy=\int|f(x-y)-f(x)|\,{(\rho_n(y))}^{1/p}\,{(\rho_n(y))^{1-1/p}}\,dy.$$ Now apply Hölder's inequality with the conjugate exponents $p$ and $p'=\frac{p}{p-1}$ to the two factors: $$\int|f(x-y)-f(x)|\,\rho_n^{1/p}\,\rho_n^{1/p'}\,dy\leq\left[\int|f(x-y)-f(x)|^p\rho_n(y)\,dy\right]^{1/p}\left[\int\rho_n(y)\,dy\right]^{1/p'}.$$ Since $\int \rho_n=1$, the second factor equals $1$, and you get the result.</p>
|
253,584 | <p>Let $h:\mathbb{R}^n\to\mathbb{R}^m, n>1$ be a twice continuously differentiable function and $J_h:\mathbb{R}^n\to\mathbb{R}^{m\times n}$ be its jacobian matrix. Let us consider the functions $A(x):=J_h^\mathtt{T}(x)J_h(x)\in\mathbb{R}^{n\times n}$ and $B(x):=J_h(x)J_h(x)^\mathtt{T}\in\mathbb{R}^{m\times m}$.</p>
<p>I'm interested in sufficient conditions ensuring differentiability of the functions $U(x)$, $\Sigma(x)$ and $V(x)$ in a singular value decomposition of $J_h(x)=U(x)\Sigma(x)V(x)^\mathtt{T}$ when there is at least one repeating zero singular value (rank deficient case).</p>
<p>The question can be equivalently stated in terms of eigenvalues/eigenvectors of the symmetric matrices $A$ and $B$. Are there sufficient conditions to ensure differentiability of an eigenpair with a non-simple eigenvalue?</p>
<p>Appreciate any help.</p>
| Zoltan Zimboras | 12,897 | <p>I think Theorem 6.8 on page 122 in <a href="http://www.maths.ed.ac.uk/~aar/papers/kato1.pdf" rel="nofollow">Kato: Perturbation Theory for Linear Operators</a> may help (at least for the question concerning the eigenvalues of the symmetric $A$ and $B$ matrices).</p>
<p>Theorem:
Assume that $T(x)$ is a symmetric and continuously differentiable ($N \times N$ matrix) function in an interval $I$ of $x$. Then there exist $N$ continuously differentiable functions $\mu_n(x)$ on $I$ that represent the repeated eigenvalues of $T(x)$.</p>
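<p>A toy illustration of why the labelling matters (my own example, not from Kato): the symmetric analytic family $T(t)=\begin{pmatrix}1&t\\t&1\end{pmatrix}$ has eigenvalues $1\pm t$. The <em>sorted</em> eigenvalue curves $1\pm|t|$ have a kink at the crossing $t=0$, while the relabelled branches $1+t$ and $1-t$ are smooth:</p>

```python
# Eigenvalues of the symmetric 2x2 family T(t) = [[1, t], [t, 1]], returned sorted
def sorted_eigs(t):
    return (1 - abs(t), 1 + abs(t))   # closed form: 1 ± |t|

h = 1e-3
low = lambda t: sorted_eigs(t)[0]        # the smaller eigenvalue, 1 - |t|
slope_left  = (low(0.0) - low(-h)) / h   # ≈ +1
slope_right = (low(h) - low(0.0)) / h    # ≈ -1 : one-sided slopes disagree at t = 0
print(slope_left, slope_right)           # so the sorted branch is not differentiable

# Kato/Rellich: relabel as mu1(t) = 1 + t, mu2(t) = 1 - t; these are analytic
# and still enumerate the spectrum of T(t) for every t.
t0 = 0.3
print(sorted([1 + t0, 1 - t0]) == sorted(sorted_eigs(t0)))   # → True
```
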
|
1,552 | <p>Closely related: what is the smallest known composite which has not been factored? If these numbers cannot be specified, knowing their approximate size would be interesting. E.g. can current methods factor an arbitrary 200 digit number in a few hours (days? months? or what?).
Can current methods certify that an arbitrary 1000 digit number is prime, or composite in a few hours (days? months? not at all?).</p>
<p>Any broad-brush comments on the current status of primality proving, and how active this field is would be appreciated as well. Same for factoring.</p>
<hr>
<p>Edit: perhaps my header question was something of a troll. I am not interested in lists. But if anyone could shed light on the answers to the portion of my question starting with "E.g.". it would be appreciated. (I could answer it in 1990, but what is the status today?)</p>
| TonyK | 767 | <blockquote>
<p>Closely related: what is the smallest known composite which has not been factored?</p>
</blockquote>
<p>As others have pointed out, this question can't be answered, even if you understand it as 'What is the smallest known composite none of whose factors are known?' (I can easily generate an enormous number none of whose factors are known). But there is an ongoing effort at <a href="http://www.fermatsearch.org/" rel="noreferrer">FERMATSEARCH</a> to factorise the Fermat numbers F_m = 2^2^m + 1, which gives you an idea of the state of the art.</p>
<p>Currently (October 2009) the smallest known-composite cofactor of a Fermat number which has no known factors is the 1187-digit number C1187, given by</p>
<p>F_12 = 114689 · 26017793 · 63766529 · 190274191361 · 1256132134125569 · C1187</p>
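<p>The listed prime factors are easy to verify with modular exponentiation, since $p \mid F_{12}=2^{2^{12}}+1=2^{4096}+1$ exactly when $2^{4096}\equiv -1 \pmod p$:</p>

```python
# p divides F_12 = 2**4096 + 1 exactly when 2**4096 ≡ -1 (mod p)
factors = [114689, 26017793, 63766529, 190274191361, 1256132134125569]
print(all(pow(2, 4096, p) == p - 1 for p in factors))  # → True
```
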
|
1,238,783 | <p>I am currently in high school where we are learning about present value. </p>
<p>I struggle with task like these: Say you get 6% interest each year, how much interest would that be each month?</p>
| Peter Webb | 60,051 | <p>This question has two possible answers. It depends on whether you are using compound or simple interest.</p>
<p>Debernardi's answer is correct, but I suspect its not the answer they want. The answer they want is 0.5% per month. I'll explain why.</p>
<p>First you should know the difference. Say you earned 10% interest each 6 months. How much would you earn in a year if you invested $100?</p>
<p>If you use "simple interest", you get two lots of 10%, which is 20%. But you could also do it like this. After 6 months you have 10% extra, which is $110. Then after another 6 months this has increased by 10% again, and $110 × 1.1 = $121. The difference is the interest you earned on your interest. The large majority of bank accounts use compound interest, because the interest you earn stays in the account.</p>
<p>Debernardi's answer assumes this form of interest.</p>
<p>Instead, I think they want you to simply say that 6% per year is equivalent to 6%pa /12 = 0.5% per month. Which is a very slightly different number to what you get the other way.</p>
<p>I think they want you to do it this way because:</p>
<p>a) 6% per annum means it works out at exactly 0.5% per month, why pick a number which is very easy to work out for simple interest unless it is simple interest?</p>
<p>b) Compound interest is taught later, and if that's what they wanted they would probably have explicitly said this (to remove ambiguity).</p>
<p>So the answer they want is almost certainly 0.5% per month, being one twelfth of 6% per year.</p>
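<p>For the record, the two readings give slightly different monthly rates, which a few lines of Python make explicit:</p>

```python
annual = 0.06

simple_monthly = annual / 12                  # "simple" reading: exactly 0.5% per month
compound_monthly = (1 + annual)**(1/12) - 1   # rate that compounds to exactly 6% per year

print(round(simple_monthly * 100, 4))    # → 0.5     (percent per month)
print(round(compound_monthly * 100, 4))  # → 0.4868  (percent per month)
```

<p>Compounding the smaller rate twelve times recovers the full 6% for the year.</p>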
|
534,500 | <p>Is it true that if a sequence of random matrices $\{X_n\}$ converges in probability to a random matrix, $X_n\overset{P}{\to}X$ as $n\to\infty$, then the elements satisfy $X_n^{(i,j)}\overset{P}{\to} X^{(i,j)}$ $\forall i,j$ as well, or are there additional conditions required?</p>
<p>I think I have proved this using the norm $\|A\|=\max_j \sum_i |A^{(i,j)}|$ and the equivalence of norms, however I have only found it stated elsewhere with the caveat that $\{X_n\}$ are symmetric or symmetric and nonnegative-definite.</p>
| Carlos Llosa | 584,478 | <p>Yes. Let <span class="math-container">$e_k$</span> be the unit basis vector with zeroes everywhere except at the $k$-th position. Then <span class="math-container">$\forall i,j$</span>,
<span class="math-container">$$
X_n^{(i,j)} = e_i' X_n e_j \overset{P}{\to} e_i' X e_j = X^{(i,j)},
$$</span>
since <span class="math-container">$|X_n^{(i,j)}-X^{(i,j)}| = |e_i'(X_n-X)e_j| \leq \|X_n-X\|$</span> (for instance in your norm <span class="math-container">$\|A\|=\max_j \sum_i |A^{(i,j)}|$</span>), so that <span class="math-container">$P(|X_n^{(i,j)}-X^{(i,j)}|>\varepsilon)\leq P(\|X_n-X\|>\varepsilon)\to 0$</span>. No symmetry or nonnegative-definiteness assumptions are needed.</p>
|
58,947 | <p>Let $X$ be a non-compact holomorphic manifold of dimension $1$. Is there a compact Riemann surface $\bar{X}$ such that $X$ is biholomorphic to an open subset of $\bar{X}$?</p>
<p><strong>Edit:</strong> To rule out the case where $X$ has infinite genus, perhaps one could add the hypothesis that the topological space $X^{\mathrm{end}}$ (is it a topological surface?), obtained by adding the <em>ends</em> of $X$, has finitely generated $\pi_1$ (or $H_1$ ). Would the new question make sense and/or be of any interest?</p>
<p><strong>Edit2:</strong> What happens if we require that $X$ has finite genus? (the <em>genus</em> of a non-compact surface, as suggested in a comment below, can be defined as the maximal $g$ for which a compact Riemann surface $\Sigma_g$ minus one point embeds into $X$)</p>
| JHM | 20,516 | <p>Useful references for your question are Robert Brooks' "Platonic surfaces" and Dan Mangoubi's "Conformal Extension of Metrics of Negative Curvature" (both on arxiv). </p>
<p>I emailed Luca Migliorini requesting his paper. He told me it was basically his undergraduate thesis, published in a defunct Italian journal, and that no copy of it remains. In other words, utterly useless. </p>
<p>The basic fact on compactifying a Riemann surface is this: if $S$ is a finite-area Riemann surface, then there exists a compact Riemann surface $S^c$ and a finite set of points $p_1, \ldots, p_k$ on $S^c$ such that $S^c \setminus \{p_1, \ldots, p_k\}$ is conformally equivalent to $S$. </p>
<p>In Brooks' paper, he states that this Riemann surface $S^c$ is unique. However, I'll admit to not being convinced of this uniqueness. The expression he uses throughout is "conformally filling punctures" -- a phrase which I think deserves more explanation than is given. </p>
<p>Lemma 1.1 in Brooks is interesting, and justifies the above claim. Of course we know what cusps on Riemann surfaces look like. A cuspidal neighborhood $C$ of a Riemann surface can be taken isometric to the quotient of $\{ z\in \mathbb{H}^2: \Im(z)\geq 1/y \}$ by the isometry $z\mapsto z+1$, for some $y>0$. The parameter $y$ gives a measure of the size of the cusp, i.e. gives a geodesic loop homotopic to the puncture with hyperbolic length $y$. So the cusp $C$ is really isometric to the punctured ball of Euclidean radius proportional to $1/y$ via the mapping $z\mapsto e^{2\pi i z}$ on the punctured open unit disk $D^\ast$ equipped with the metric $ds^*=\frac{-1}{r \log r} |dz|$. However $ds^*$ blows up as $r\to 0$ like $1/r$. </p>
<p>Brooks (and afterwards Mangoubi more explicitly) gives, for any $\epsilon>0$, smooth bump functions $\delta$ concentrated at the origin on $D$ such that $e^\delta ds^*$ extends to a smooth metric past the origin and whose curvature remains pinched $-1 \pm \epsilon$. </p>
<p>I am going to include the details of this construction, together with some remarks relating to Donaldson's compactification of algebraic curves (from his book) shortly. </p>
|
923,235 | <p>Let $A=\begin{pmatrix}a&b\\c&d\end{pmatrix}$
be a matrix of complex numbers. Find the characteristic polynomial $\chi_A(t)$ of $A$ and compute $\chi_A(A)$.</p>
<p>I just wanted to confirm that I did this correctly.</p>
<p>The answer I have is:
$$\chi_A(t)= \det\begin{pmatrix}a-t&b\\c&d-t\end{pmatrix}
=(a-t)(d-t)-bc
=ad-bc-at-dt+t^2.
$$
Thus
$$
\chi_A(A)=
\begin{pmatrix}a-(ad-bc-at-dt+t^2)&b\\c&d-(ad-bc-at-dt+t^2)\end{pmatrix}
$$</p>
<p>Is this the right thinking?</p>
| Joonas Ilmavirta | 166,535 | <p>No, your thinking does not seem right.
Try thinking of the following problem: If you have a polynomial $p(x)$, how do you make sense of $p(A)$ for a square matrix $A$?</p>
<p>You found that the characteristic polynomial is
$$
\chi_A(t)
=
(ad-bc)-(a+d)t+t^2.
$$
Now we can plug $A$ in this polynomial (not in the determinant expression that generated the polynomial!) to get
$$
\chi_A(A)
=
(ad-bc)I-(a+d)A+A^2.
$$
This is just formally taking $t=A$.
You should now calculate the square of $A$ and add up the terms.</p>
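<p>As an optional numeric check (my addition, with arbitrary sample matrices), plugging a concrete $A$ into the polynomial above should give the zero matrix; this is the $2\times 2$ case of the Cayley-Hamilton theorem:</p>

```python
def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][t] * Y[t][j] for t in range(2)) for j in range(2)]
            for i in range(2)]

def chi_of_A(A):
    """Evaluate chi_A(A) = (ad - bc) I - (a + d) A + A^2."""
    a, b = A[0]
    c, d = A[1]
    det, tr = a * d - b * c, a + d
    A2 = mat_mul(A, A)
    # (i == j) plays the role of the identity matrix entry.
    return [[det * (i == j) - tr * A[i][j] + A2[i][j] for j in range(2)]
            for i in range(2)]

print(chi_of_A([[2, 3], [5, 7]]))   # [[0, 0], [0, 0]]
```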
|
500,323 | <p>As a relative beginner trying to understand math more deeply, I'm trying to learn more about the mathematical laws (the laws of the operations $+, -, \times, \div$)</p>
<p>For example, I know the basic laws (the ones that are just taken to be true) -- the commutative, associative, and distributive laws. What area of math (what books to read) would I find things like how subtraction is defined from addition (and how division is defined from multiplication), and the presentation of the various laws of subtraction and division as they relate to the basic laws and how to prove them. Some examples:</p>
<p>$n-(m+k) = (n-m)-k$</p>
<p>$\frac{k}{m}+\frac{n}{m}=\frac{k+n}{m}$</p>
<p>$\frac{\frac{n}{m}}{\frac{k}{l}}=\frac{n\times l}{m\times k}$</p>
<p>How to derive and prove such laws (and many others like them)?</p>
<p>Thank you</p>
| hmakholm left over Monica | 14,366 | <p>The rules you speak about are the subject of <strong>elementary algebra</strong>.</p>
<p>It can be a bit difficult to find places where they are derived in rigorous detail, because elementary algebra is usually taught to children who don't particularly care for mathematical rigor, and so many textbooks will focus on <em>indoctrinating</em> students with the <em>correct</em> rules, and at best try to present <em>informal</em> arguments that they work rather than something that satisfies mathematical standards of proof.</p>
<p>There must surely be textbooks out there that do present proper derivations, but rather than searching for them, I would suggest you look for an introduction to <strong>abstract algebra</strong> instead. Abstract algebra studies rules of operation that act on things that are not ordinary numbers, but where the rules of operation satisfy laws that are like those of arithmetic to a greater or smaller extent. Among other things it provides a vocabulary for describing <em>how</em> "arithmetic-like" a particular set of operations are, and since it's working in a context where not all of the usual laws may be available, introductory texts will usually place some emphasis on how some laws can be derived from others, rigorously.</p>
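<p>As a concrete supplement (my own illustration, not part of the answer), the specific laws listed in the question can at least be spot-checked with exact rational arithmetic, for example with Python's <code>fractions</code> module:</p>

```python
from fractions import Fraction

# Sample nonzero rationals; the identities should hold for any valid choice.
n, m, k, l = Fraction(7), Fraction(3), Fraction(5), Fraction(2)

assert n - (m + k) == (n - m) - k              # n - (m + k) = (n - m) - k
assert k / m + n / m == (k + n) / m            # common-denominator addition
assert (n / m) / (k / l) == (n * l) / (m * k)  # division of fractions

print("all three identities hold for these sample values")
```

<p>This of course only tests particular values; the derivations themselves come from the field axioms together with the definitions $a-b := a+(-b)$ and $a/b := a\cdot b^{-1}$.</p>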
|
1,095,870 | <p>How can I solve the following inequality?</p>
<blockquote>
<p>$$\frac{\cos x -\tan^2(x/2)}{e^{1/(1+\cos x)}}>0$$</p>
</blockquote>
| Gonate | 195,844 | <p>I'll post what I have so far so that it may give you an idea of what to do. It's not complete and I'll try to finish it off when I have more time.</p>
<p>$e^{1/(1+\cos x)}$ will always be positive, so we only have to worry about the numerator.</p>
<p>Using the Weierstrass substitution, as @Lucian suggested, $t=\tan{\frac{x}{2}}$ and keeping in mind that $\displaystyle \cos x = \frac{1-t^2}{1+t^2}$ we get:</p>
<p>$\displaystyle {\cos x -\tan^2(x/2)} = {\frac{1-t^2}{1+t^2} - t^2}$</p>
<p>$\displaystyle \frac{1-t^2}{1+t^2} - t^2 = \frac{1-t^2-t^2(1+t^2)}{1+t^2}$</p>
<p>So the fraction is going to be negative only when $1-t^2-t^2(1+t^2)$ is negative.</p>
<p>$1-t^2-t^2(1+t^2) \lt 0$; I'll do another substitution to ease the calculations:</p>
<p>$u=t^2, 1-u-u(1+u)= -u^2-2u+1$</p>
<p>The roots are $u=-1 \pm \sqrt2$, and because $-u^2-2u+1$ is concave downwards, it's positive in $[-1 - \sqrt2,-1 + \sqrt2]$.</p>
<p>Substituting back (and I'm assuming you are working in $\mathbb R$), $t=\pm \sqrt{-1 + \sqrt2}$.</p>
<p>So $\displaystyle \frac{1-t^2}{1+t^2} - t^2 \gt 0, \ \ \forall t \in \left[-\sqrt{-1 + \sqrt2},\sqrt{-1 + \sqrt2} \right] $</p>
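<p>A numeric sanity check of this conclusion (my addition; I restrict to $x \in [0, \pi)$, where the sign of the numerator should flip exactly at $x^* = 2\arctan\sqrt{\sqrt2 - 1}$):</p>

```python
import math

def numerator(x):
    """The numerator cos(x) - tan^2(x/2); the denominator is always positive."""
    return math.cos(x) - math.tan(x / 2) ** 2

# Predicted sign change where tan^2(x/2) = sqrt(2) - 1:
x_star = 2 * math.atan(math.sqrt(math.sqrt(2) - 1))
print(x_star)  # about 1.1437

for x in [0.0, 0.3, 0.8, x_star - 1e-3]:
    assert numerator(x) > 0          # inside the interval: positive
for x in [x_star + 1e-3, 1.5, 2.5, 3.0]:
    assert numerator(x) < 0          # outside: negative
```

<p>Since the numerator is an even, $2\pi$-periodic function of $x$, the same boundary points recur on the other intervals.</p>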
|
2,831,130 | <p>Cauchy's induction principle states that:</p>
<blockquote>
<p>The propositions $p(1),...,p(n),...$ are all valid if: </p>
<ol>
<li>$p(2)$ is true.</li>
<li>$p(n)$ implies $p(n-1)$ is true.</li>
<li>$p(n)$ implies $p(2n)$ is true.</li>
</ol>
</blockquote>
<p>How to prove Cauchy's induction principle? Can we use it to prove what we can prove with weak and strong induction?</p>
<p>If yes, how would one prove, using Cauchy's induction principle, that</p>
<p>$$ 1+2^1+2^2+...+2^n=2^{n+1}-1 $$</p>
| drhab | 75,923 | <p>There is a more elegant way to prove this. Making use of linearity of expectation we find:$$\mathbb E(B_1+\cdots+B_n)=\mathbb EB_1+\cdots+\mathbb EB_n=np$$ since $\mathbb EB_i=P(B_i=1)=p$ for every $i\in\{1,\dots,n\}$.</p>
<p>Linearity of expectation also works if the $B_i$ are not independent. So this route is not only (much) more elegant but also shows that independence is not necessary.</p>
<hr>
<p>Advice: If $X$ has binomial distribution with parameters $n$ and $p$ then always think of it as a sum $X=B_1+\cdots+B_n$ where the $B_i$ are iid and have Bernoulli distribution with parameter $p$. In lots of situations that makes things easier. This is one of these situations.</p>
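<p>As a small illustration of this advice (my addition), the identity $\mathbb EX = np$ can be confirmed by brute force against the definition $\mathbb EX = \sum_k k\,P(X=k)$:</p>

```python
from math import comb

def binomial_mean(n, p):
    """E[X] computed directly from the pmf of a Binomial(n, p) variable."""
    return sum(k * comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n + 1))

for n, p in [(1, 0.3), (10, 0.5), (25, 0.17)]:
    assert abs(binomial_mean(n, p) - n * p) < 1e-12
print("E[X] = n*p confirmed for the sample parameters")
```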
|
3,125,958 | <p>I've been asked to consider this parabolic equation. </p>
<p><span class="math-container">$ 3\frac{∂^2u}{∂x^2} + 6\frac{∂^2u}{∂x∂y} +3\frac{∂^2u}{∂y^2} - \frac{∂u}{∂x} - 4\frac{∂u}{∂y} + u = 0$</span></p>
<p>I calculated the characteristic coordinates to be <span class="math-container">$ξ = y - x, η = x$</span>. The question then asks to transform the equation to the canonical form. I've got the method in other questions but can't seem to work out how to transfer the method from those examples to this one. </p>
| Mostafa Ayaz | 518,023 | <p><strong>Hint</strong></p>
<p>We have<span class="math-container">$${\partial^2 u\over \partial x^2}={\partial^2 u\over \partial \eta^2}+{\partial^2 u\over \partial ξ^2}-2{\partial^2 u\over \partial \eta\partial ξ}\\{\partial u\over \partial x}={\partial u\over \partial \eta}-{\partial u\over \partial ξ}\\{\partial u\over \partial y}={\partial u\over \partial ξ}\\{\partial^2 u\over \partial y^2}={\partial \over \partial y}{\partial u\over \partial ξ}={\partial^2 u\over \partial ξ^2}\\{\partial^2 u\over \partial x\partial y}={\partial^2 u\over \partial ξ\partial \eta}-{\partial^2 u\over \partial ξ^2}$$</span>Can you finish now?</p>
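<p>For reference, here is one way to finish (my own continuation of the hint; worth double-checking against your conventions). Substituting the identities above into the equation makes the second-order terms collapse:</p>

```latex
3u_{xx} + 6u_{xy} + 3u_{yy}
  = 3\bigl(u_{\eta\eta} + u_{\xi\xi} - 2u_{\xi\eta}\bigr)
  + 6\bigl(u_{\xi\eta} - u_{\xi\xi}\bigr)
  + 3u_{\xi\xi}
  = 3u_{\eta\eta},
\qquad
-u_x - 4u_y = -(u_\eta - u_\xi) - 4u_\xi = -u_\eta - 3u_\xi .
```

<p>So the canonical form is <span class="math-container">$3u_{\eta\eta} - 3u_{\xi} - u_{\eta} + u = 0$</span>.</p>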
|
3,576,979 | <p>Been working on this for some time now but have no idea if it's correct! Any hints are appreciated.</p>
<p>Recall the Fibonacci sequence: <span class="math-container">$f_1 = 1$</span>, <span class="math-container">$f_2 = 1$</span>, and for <span class="math-container">$n \geq 1$</span>, <span class="math-container">$f_{n+2} = f_{n+1} + f_n$</span>. Prove
that <span class="math-container">$f_n > (\frac{5}{4})^n\ \forall \ n \geq 3$</span>.</p>
<p>My answer:</p>
<p>"base case"</p>
<p><span class="math-container">$[f_3 = 2 > (\frac{5}{4})^3\ = \frac{125}{64}\ correct$</span></p>
<p><span class="math-container">$[f_4 = 3 > (\frac{5}{4})^4\ = \frac{625}{256}\ correct$</span></p>
<p><span class="math-container">$assume\ f_k > (\frac{5}{4})^k\ for\ some\ k \geq 3$</span></p>
<p><span class="math-container">$[and\ f_{k-1} > (\frac{5}{4})^{k-1}$</span></p>
<p><span class="math-container">$then \ f_{k+1} = f_k + f_{k+1} > (\frac{5}{4})^k + (\frac{5}{4})^{k-1}$</span></p>
<p><span class="math-container">$so \ f_{k+1} > (\frac{5}{4})^k + (\frac{5}{4})^{k-1}\ > (\frac{5}{4})^k (\frac{5}{4})^k = (\frac{5}{4})((\frac{5}{4})^k) = (\frac{5}{4})^{k+1}$</span></p>
<p><span class="math-container">$f_{k+1} > (\frac{5}{4})^{k+1}$</span></p>
<p><span class="math-container">$so \ f_n > (\frac{5}{4})^n \ \forall \ n \geq 3$</span></p>
<p>QED</p>
| sedrick | 537,491 | <p>As J.W. Tanner mentioned, it's not true that <span class="math-container">$$\left(\frac{5}{4} \right)^k+ \left(\frac{5}{4} \right)^{k-1} > \left(\frac{5}{4} \right)^k\left(\frac{5}{4} \right)^k$$</span></p>
<p>(consider for example <span class="math-container">$k=3$</span>: then <span class="math-container">$\frac{125}{64} + \frac{25}{16} < \frac{125}{64} \times\frac{125}{64}$</span>).</p>
<p><strong>Hint:</strong> You can consider <span class="math-container">$$\left(\frac{5}{4} \right)^k+ \left(\frac{5}{4} \right)^{k-1} = \left(\frac{5}{4} \right)^{k-1} \left(\frac{5}{4} + 1\right) = \left(\frac{5}{4} \right)^{k-1} \left(\frac{9}{4} \right) > \left(\frac{5}{4} \right)^{k-1} \left(\frac{5}{4} \right)^{2}$$</span></p>
<p>and proceed from there.</p>
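<p>A quick numeric check (my addition) of both the statement and the corrected key inequality <span class="math-container">$(\frac{5}{4})^k + (\frac{5}{4})^{k-1} > (\frac{5}{4})^{k+1}$</span>:</p>

```python
def fib(n):
    """Return the n-th Fibonacci number with f_1 = f_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

r = 5 / 4
assert all(fib(n) > r ** n for n in range(3, 40))

# The hint's step: r**k + r**(k-1) = r**(k-1) * (9/4) > r**(k-1) * (5/4)**2
assert all(r ** k + r ** (k - 1) > r ** (k + 1) for k in range(1, 40))
print("inequality confirmed for n = 3, ..., 39")
```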
|
15,162 | <p>First off: I barely have any set theoretic knowledge, but I read a bit about cardinal arithmetic today and the following idea came to me, and since I found it kind of funny, I wanted to know a bit more about it.</p>
<p>If $A$ is the set of all real positive sequences that either converge to $0$ or diverge to $\infty$, we put an equivalence relation "$\sim$" on $A$ defined as $a \sim b$ iff $\lim \frac a b \in \mathbb R ^+$.</p>
<p>If $B$ is the set of all infinite cardinals, can we associate to every equivalence class $[a]$ in $A/\sim$ a cardinal $p([a])$ or to every cardinal $\lambda$ an equivalence class $[q(\lambda)]$ in such a way that the map $p: A/\sim \to B$, or $q: B \to A/\sim$ is a "homomorphism"? That is, so that we have</p>
<p>$$ p([a] + [b]) = p([a]) + p([b]) $$ or $$ [q(\lambda + \mu)] = [q(\lambda)] + [q(\mu)]$$</p>
<p>If yes, could this map even be surjective, injective or an "isomorphism"? (I don't know how many cardinals there are of course...)</p>
<p>It at least superficially seems to make some sense, since for cardinals $\lambda, \mu$ we have $\lambda + \mu = \max\{\lambda, \mu\}$ and the same is true for the classes of sequences, if we order them by $a < b \Leftrightarrow \lim \frac a b = 0$</p>
| Asaf Karagila | 622 | <p>I think that there are a handful of points that might need clarification here.</p>
<ul>
<li>There is no "set of all infinite cardinals", the family of all infinite cardinals is a proper class, and if it were a set it would imply the <a href="http://en.wikipedia.org/wiki/Burali-Forti_paradox" rel="noreferrer">Burali-Forti paradox</a></li>
<li>Secondly, consider the meaning of this equivalence relation: $a\sim b$ if and only if they behave the same asymptotically, up to some finite constant. You now want to take this and say "cardinals and sequences behave similarly" in the manner that the "stronger" wins. This intuition is not too bad, although I'd have taken multiplication and not addition. However, when you want some structure to be preserved, it might take a bit more than just "behaving the same", because a certain interpretation of the same idea could have very different properties, as in my next point.</li>
<li>Cardinals are well ordered. There is a minimal cardinal, and there is a next one after that, and a next one after that. Among your equivalence classes take $n^{-x}$ and $n^{-y}$: for different choices of $x,y\in\mathbb{R}^+$ these give you different equivalence classes, which means you have continuum-many equivalence classes; in terms of ordering you're a lot like the order of $\mathbb{R}^{\ge 0}$, which is dense. This makes the idea of taking a cardinal for each equivalence class (even if you only limit yourself to the first continuum-many cardinals) a tad problematic.</li>
<li>And another point would be infinite summations. Cardinal arithmetic is slightly different when it comes to infinite sums: for example, take the first $\aleph$ number which has an uncountable index (denoted by $\aleph_{\omega_1}$); then every addition of the form $\sum_{i\in\mathbb{N}} \lambda_i$ where all the $\lambda_i<\aleph_{\omega_1}$ is still a lot smaller than $\aleph_{\omega_1}$. So unlike the real numbers, adding uncountably many cardinals would have meaning if the <a href="http://en.wikipedia.org/wiki/Continuum_hypothesis" rel="noreferrer">Continuum Hypothesis</a> fails.</li>
<li>If, however, CH holds and you only speak of the first $\omega_1$ cardinals, then summation has a different trick up its sleeve: $\sum_{i\in\mathbb{N}}\lambda_i$ is just the biggest $\lambda_i$, or the first one which is bigger than all of them. I'm not quite certain how that would be handled by the equivalence classes.</li>
</ul>
<p>That being said, "homomorphism" does not deal with infinite summations, so the two points about that might be redundant; but these are things to consider when you go between two structures, especially infinite ones that have a very strong meaning when it comes to sequences and convergence (because what is an infinite sum? It is the limit of its finite partial sums).</p>
<p>There is probably a "nice" way disproving the existence of such function, but I will leave this for Future-Asaf as well the other members of the site.</p>
|
2,017,818 | <p>Find three distinct triples (a, b, c) consisting of rational numbers that satisfy $a^2+b^2+c^2 =1$ and $a+b+c= \pm 1$.</p>
<p>By distinct it means that $(1, 0, 0)$ is a solution, but $(0, \pm 1, 0)$ counts as the same solution.</p>
<p>I can only seem to find two; namely $(1, 0, 0)$ and $( \frac{-1}{3}, \frac{2}{3}, \frac{2}{3})$. Is there a method to finding a third or is it still just trial and error?</p>
| John Fisher | 387,114 | <p>$\frac{6}{7}, \frac{3}{7}, -\frac{2}{7}$ is the third solution.</p>
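<p>All three triples can be verified with exact rational arithmetic (an illustration added here, not part of the original answer):</p>

```python
from fractions import Fraction as F

triples = [
    (F(1), F(0), F(0)),            # from the question
    (F(-1, 3), F(2, 3), F(2, 3)),  # from the question
    (F(6, 7), F(3, 7), F(-2, 7)),  # the third solution above
]

for a, b, c in triples:
    assert a * a + b * b + c * c == 1
    assert a + b + c in (1, -1)

print("all three triples satisfy a^2+b^2+c^2 = 1 and a+b+c = +/-1")
```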
|
119,722 | <p>For a hyperplane arrangement $\mathcal{A}$ over a vector space $V$, we define its intersection poset, $L(\mathcal{A})$, as the set of all nonempty intersections of hyperplanes in $\mathcal{A}$ ordered by reverse inclusion. The empty intersection, $V$ itself, is the unique minimal element of $L(\mathcal{A})$.</p>
<p>It is known that $L(\mathcal{A})$ is a ranked meet-semilattice, and moreover, any interval $[x,y]$ in $L(\mathcal{A})$ is a geometric lattice. But these properties alone are not sufficient for some poset $P$ to be the intersection poset of a hyperplane arrangement. Consider the following poset:<a href="http://www.freeimagehosting.net/newuploads/b4quk.png" rel="nofollow noreferrer">poset http://www.freeimagehosting.net/newuploads/b4quk.png</a></p>
<p>If this were the intersection poset of some arrangement, then $a$ would be parallel to $d$ and to $c$, $b$ would be parallel to $d$, and thus $b$ and $c$ would be parallel. But $b$ and $c$ have nonempty intersection, so this is nonsense.</p>
<p>Is there a known characterization of hyperplane arrangement intersection posets?</p>
| Rabee Tourky | 26,674 | <p>Chapters 4 and 8 of <em>Oriented Matroids
By Anders Björner, Michel Las Vergnas, Bernd Sturmfels, Neil White, Gunter M. Ziegler</em>
review the big face lattice of oriented matroids, and when that is realizable as a hyperplane arrangement. Chapter 4 is self-contained and I think you can skip to chapter 8 from there
(unfortunately no e-book available) </p>
|
38,252 | <p>I have a quadrilateral ABCD.
I want to find all the points x inside ABCD such that
$$\angle(A,x,B)=\angle(C,x,D)$$</p>
<p>Is there a known formula that gives these points?</p>
<p><strong>Example:</strong></p>
<p>ABCD is a rectangle.
Let $x_1=mid[A,D]$ and $x_2=mid[B,C]$.
The points x are those lying on the line that passes through $x_1$ and $x_2$.</p>
<p>But I want a formula for arbitrary quadrilaterals.</p>
<p>Thank you.</p>
| JDH | 413 | <p>I believe that you are looking for ideas from the <a href="http://en.wikipedia.org/wiki/Perfect_set_property">Cantor-Bendixson theorem</a>. </p>
<p>The main idea of the proof is the <em>Cantor-Bendixson derivative</em>. Given a closed set $X$, the derived set $X'$ consists of all limit points of $X$. That is, one simply throws out the isolated points. Continuing in a transfinite sequence, one constructs $X_\alpha$ as follows:</p>
<ul>
<li>$X_0=X$, the original set.</li>
<li>$X_{\alpha+1}=(X_\alpha)'$, the set of limit points of $X_\alpha$.</li>
<li>$X_\lambda=\bigcap_{\alpha\lt\lambda}X_\alpha$, for limit ordinals $\lambda$. </li>
</ul>
<p>Thus, $X_1$ consists of the limit points of $X$, and $X_2$ consists of the limits-of-limits, and so on. The set $X_\omega$ consists of points that are $n$-fold limits for any particular finite $n$, and $X_{\omega+1}$ consists of limits of those kinds of points, and so on. The process continues transfinitely until a set is reached which has no isolated points; that is, until a perfect set is reached. The Cantor-Bendixson rank of a set is the smallest ordinal $\alpha$ such that $X_\alpha$ is perfect. </p>
<p>The concept is quite interesting historically, since Cantor had undertaken this derivative before he developed his set theory and the ordinal concept. Arguably, it is this derivative concept that led Cantor to his transfinite ordinal concept. </p>
<p>It is easy to see that the ordinal $\omega^\alpha+1$ under the order topology has rank $\alpha+1$, and one can use this to prove a version of your desired theorem. </p>
<p>The crucial ingredients you need are the Cantor-Bendixson rank of your space and the number of elements in the last nonempty derived set. From this, you can construct the ordinal $(\omega^\alpha+1)\cdot n$ to which your space is homeomorphic.
Meanwhile, every countable ordinal is homeomorphic to a subspace of $\mathbb{Q}$, and is metrizable. The compact ordinals are precisely the successor ordinals (plus 0).</p>
<hr>
<p>Update 5/11/2011. <a href="http://arxiv.org/PS_cache/arxiv/pdf/1104/1104.0287v1.pdf">This brief article</a> by Cedric Milliet contains a proof of the Mazurkiewicz-Sierpiński theorem (see
Stefan Mazurkiewicz and Wacław Sierpiński, Contribution à la topologie des
ensembles dénombrables, Fundamenta Mathematicae 1, 17–27, 1920), as follows:</p>
<p><b>Theorem 4.</b> Every countable compact
Hausdorff space is homeomorphic to some well-ordered set with the order topology.</p>
<p>The article proves more generally that any two countable locally compact Hausdorff spaces $X$ and $Y$ of same Cantor-Bendixson rank and degree are homeomorphic. This is proved by transfinite induction on the rank, and the proof is given on page 4 of the linked article. </p>
|
38,252 | <p>I have a quadrilateral ABCD.
I want to find all the points x inside ABCD such that
$$\angle(A,x,B)=\angle(C,x,D)$$</p>
<p>Is there a known formula that gives these points?</p>
<p><strong>Example:</strong></p>
<p>ABCD is a rectangle.
Let $x_1=mid[A,D]$ and $x_2=mid[B,C]$.
The points x are those lying on the line that passes through $x_1$ and $x_2$.</p>
<p>But I want a formula for arbitrary quadrilaterals.</p>
<p>Thank you.</p>
| Seirios | 36,434 | <p>In fact, you can prove directly that any countable compact space $X$ is metrizable:</p>
<p>Let $A$ be a family of continuous functions from $X$ to $\mathbb{R}$ and let $e : \left\{ \begin{array}{ccc} X & \to & \mathbb{R}^A \\ x & \mapsto & (f(x)) \end{array} \right.$. If the family $A$ distinguishes points and closed sets in $X$, then $e$ embeds $X$ into $\mathbb{R}^A$; since $X$ is completely regular, such a family $A$ exists. Taking the linear subspace spanned by $e(X)$, you can suppose $A$ countable.</p>
<p>Therefore, $X$ is a subspace of $\mathbb{R}^{\omega}$; by compactness, $X$ is in fact a subspace of a product of intervals $\prod\limits_{\omega} I_k$. Since each $I_k$ is homeomorphic to $[0,1]$, $X$ is a subspace of $[0,1]^{\omega}$.</p>
<p>But $[0,1]^{\omega}$ is metrizable: $d : (x,y) \mapsto \sum\limits_{n \in \omega} \frac{1}{2^n}|x_n-y_n|$ defines a distance over it.</p>
<hr>
<p>Moreover, since any metrizable space can be embedded into a normed space, you can use the proof given in [Stefan Mazurkiewicz and Wacław Sierpiński, <em>Contribution à la topologie des ensembles dénombrables</em>, Fundamenta Mathematicae 1, 17–27, 1920] to prove the classification you mention:</p>
<p>First notice that the Cantor-Bendixson rank of a countable compact space is a successor countable ordinal $\alpha+1$ and that $X_{\alpha}$ is finite. Then for any countable ordinal $\alpha$ and positive integer $n \geq 1$, let $P(\alpha,n)$ be the assertion: "Any countable compact space of Cantor-Bendixson rank $\alpha+1$ such that $\text{card}(X_{\alpha})=n$ is homeomorphic to $\omega^{\alpha} \cdot n+1$."</p>
<p>Clearly, $P(1,1)$ is true (here $X$ is just a converging sequence). </p>
<p><strong>Step 1:</strong> If $P(\alpha,1)$ is true then $P(\alpha,n)$ is true.</p>
<p>Let $X_{\alpha}=\{p_1, \dots,p_n\}$. Then, viewing $X$ as a subspace of a normed space $Y$, there exist $n-1$ parallel hyperplanes $P_1$, ..., $P_{n-1}$ such that $Y \backslash \bigcup\limits_{i=1}^{n-1} P_i$ has $n$ connected components $D_1$, ..., $D_n$ with $p_k \in D_k$.</p>
<p>Because $P(\alpha,1)$ is true, each $X_k:= X \cap D_k$ is homeomorphic to $\omega^{\alpha}+1$. Therefore, $X$ is homeomorphic to $(\omega^{\alpha}+1) \cdot n= \omega^{\alpha} \cdot n+1$.</p>
<p><strong>Step 2:</strong> If $P(\alpha,n)$ is true for any $\alpha<\alpha_0$ and $n \geq 1$, then $P(\alpha_0,1)$ is true.</p>
<p>Let $X_{\alpha_0}=\{p\}$ and let $(p_k)$ be a sequence in $X'$ converging to $p$ (without loss of generality, we suppose $\alpha_0 \geq 2$ so that $p \in X''$). Then, viewing $X$ as a subspace of a normed space $Y$, there exists a sequence of positive real numbers $(r_k)$ converging to zero such that the family of spheres $S(p,r_k)$ does not meet $X$ and $Y \backslash \bigcup\limits_{k \geq 1} S(p,r_k)$ has infinitely many connected components $D_1$, $D_2$, ... with $p_k \in D_k$. </p>
<p>By assumption, each $X_k:= X \cap D_k$ is homeomorphic to some $\omega^{\alpha_k} \cdot n_k+1$ with $\alpha_k <\alpha_0$ and $n_k \geq 1$. Therefore, $X$ is homeomorphic to $$\tau=[(\omega^{\alpha_1} \cdot n_1+1)+(\omega^{\alpha_2} \cdot n_2+1)+ \dots ]+1$$</p>
<p>But $\tau \leq \omega^{\alpha_0} +1$. If $\tau < \omega^{\alpha_0}+1$ then $\tau< \omega^{\alpha_0}$ because $\tau$ is compact, hence $X_{\alpha_0}= \emptyset$: a contradiction. Therefore, $\tau = \omega^{\alpha_0}+1$ and $P(\alpha_0,1)$ is true.</p>
<p><strong>Step 3:</strong> We conclude by transfinite induction.</p>
|
136,340 | <p>I defined the following functions</p>
<pre><code>CreatorQ[_] := False;
AnnihilatorQ[_] := False;
CreatorQ[q] := True;
AnnihilatorQ[p] := True;
CreatorQ[J[n_]] /; n < 0 := True;
AnnihilatorQ[J[n_]] /; n > 0 := True;
</code></pre>
<p>and when I ask for</p>
<pre><code>Assuming[r < 0, CreatorQ[J[r]]]
</code></pre>
<p>I get <code>False</code> instead of <code>True</code>. I know that probably it's because Mathematica doesn't evaluate the r, but I have no idea how to change the code in order to get the correct answer.
Thanks</p>
| Pillsy | 531 | <p>The problem is that the pattern-matching in <code>CreatorQ</code> doesn't have any sort of knowledge of the assumption you're making about <code>r</code>, so the <code>Condition</code> won't fire. You can, as you commented, get around this by just redefining <code>CreatorQ</code> to return the inequality, which will remain unevaluated if <code>n</code> doesn't have a value that can be compared to 0:</p>
<pre><code>ClearAll[CreatorQ];
CreatorQ[_] = False;
CreatorQ[q] := True;
CreatorQ[J[n_]] := n < 0;
</code></pre>
<p>Now, as in Vaghan Tumanyan's comment, you need to use a function that will make use of the assumptions introduced by <code>Assuming</code>. In this case, <code>Simplify</code> is completely adequate:</p>
<pre><code>Assuming[r < 0, Simplify@CreatorQ[J[r]]]
(* True *)
</code></pre>
|
348,614 | <p>Is the following claim true: Let <span class="math-container">$\zeta(s)$</span> be the Riemann zeta function. I observed that for large <span class="math-container">$n$</span>, as <span class="math-container">$s$</span> increased, </p>
<p><span class="math-container">$$
\frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)}{\text{lcm}(k,i)}\bigg)^s \approx \zeta(s+1)
$$</span></p>
<p>or equivalently</p>
<p><span class="math-container">$$
\frac{1}{n}\sum_{k = 1}^n\sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)^2}{ki}\bigg)^s \approx \zeta(s+1)
$$</span></p>
<p>A few values of <span class="math-container">$s$</span>, LHS and the RHS are given below</p>
<p><span class="math-container">$$(3,1.221,1.202)$$</span>
<span class="math-container">$$(4,1.084,1.0823)$$</span>
<span class="math-container">$$(5,1.0372,1.0369)$$</span>
<span class="math-container">$$(6,1.01737,1.01734)$$</span>
<span class="math-container">$$(7,1.00835,1.00834)$$</span>
<span class="math-container">$$(9,1.00494,1.00494)$$</span>
<span class="math-container">$$(19,1.0000009539,1.0000009539)$$</span></p>
<p><strong>Note</strong>: <a href="https://math.stackexchange.com/questions/3293112/relationship-between-gcd-lcm-and-the-riemann-zeta-function">This question was posted in MSE</a>, but it did not receive the right answer.</p>
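<p>For reproducibility, here is a small script (not part of the original post; the cutoff <span class="math-container">$n$</span>, the number of zeta terms, and the tolerance are arbitrary choices of mine) that recomputes the left-hand side for <span class="math-container">$s=4$</span> and compares it with <span class="math-container">$\zeta(5)$</span>:</p>

```python
from math import gcd

def lhs(n, s):
    """(1/n) * sum over k <= n, i <= k of (gcd(k,i)**2 / (k*i))**s."""
    total = 0.0
    for k in range(1, n + 1):
        for i in range(1, k + 1):
            g = gcd(k, i)
            total += (g * g / (k * i)) ** s
    return total / n

def zeta(s, terms=100000):
    """Crude partial-sum approximation of the Riemann zeta function."""
    return sum(m ** -s for m in range(1, terms + 1))

s = 4
approx, target = lhs(1500, s), zeta(s + 1)
print(approx, target)   # both are close to 1.037
assert abs(approx - target) < 0.01
```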
| JS Music | 150,164 | <p>Your summand is symmetric with respect to <span class="math-container">$k$</span> and <span class="math-container">$i$</span>:</p>
<p><span class="math-container">$$f(n,s) = \frac{1}{n}\sum_{k = 1}^n \sum_{i = 1}^{k} \bigg(\frac{\gcd(k,i)}{\operatorname{lcm}(k,i)}\bigg)^s$$</span></p>
<p>We can sum along skew diagonals to evaluate the sum. That is, we can convert <span class="math-container">$(k,i)$</span> to polar form <span class="math-container">$(\sqrt{k^2 + i^2}, \tan^{-1}\frac{i}{k})$</span>. By symmetry of the summand, the rays from the origin have the same value.</p>
<p>That is, when <span class="math-container">$\theta = \tan^{-1}\frac{i}{k}$</span>, <span class="math-container">$i = k \tan\theta$</span>. We can vary <span class="math-container">$\theta$</span> over <span class="math-container">$[0, \frac{\pi}{4}]$</span>. The <span class="math-container">$\gcd(j,j\tan \theta)$</span> is independent of <span class="math-container">$j$</span> but depends on <span class="math-container">$\theta$</span>; we can therefore use <span class="math-container">$n$</span>:</p>
<p><span class="math-container">$$\frac{1}{n}\int_{0}^{\frac{\pi}{4}} \sum_{j=0}^{n} \left[\frac{\gcd(j,j\tan\theta)}{\operatorname{lcm}(j,j\tan\theta)}\right]^{s} d\theta = \int_{0}^{\frac{\pi}{4}} \left[\frac{\gcd(n,n\tan\theta)}{\operatorname{lcm}(n,n\tan\theta)}\right]^{s} d\theta = \sum_{k=0}^{n} \frac{1}{k^{s+1}} \rightarrow \zeta(s+1)$$</span></p>
<p>The polar integral (the integral is not continuous in <span class="math-container">$\theta$</span>, but is taken over the irrational values of <span class="math-container">$\theta$</span> that correspond to the rays; <span class="math-container">$\tan\theta = \frac{i}{k}$</span> implies <span class="math-container">$\theta$</span> is irrational) reduces to the sum easily enough because the values along each ray are constant w.r.t. <span class="math-container">$j$</span>. Every ray corresponds to one of the values of <span class="math-container">$\frac{1}{k^s}$</span>, but they have a weight of <span class="math-container">$\frac{1}{k}$</span>.</p>
<p>E.g., the ray that accumulates <span class="math-container">$1$</span> has <span class="math-container">$\theta = \tan^{-1}(1) = \frac{\pi}{4}$</span>, for <span class="math-container">$\frac{1}{2^s}$</span> it is <span class="math-container">$\theta = \tan^{-1}(2)$</span> but it has <span class="math-container">$\frac{1}{2}$</span> the density of <span class="math-container">$1$</span>. Similarly for all the others. </p>
<p>If you are having trouble following this, simply look at <span class="math-container">$\frac{\operatorname{lcm}(k,i)}{\gcd(k,i)}$</span> in "polar" form; I'll make it easy for you (the text format obscures the patterns, but they are there):</p>
<p><span class="math-container">\begin{matrix}
\color{green}1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & \\
\color{blue}2 & \color{green}1 & 6 & 2 & 10 & 3 & 14 & 4 & 18 & 5 & \\
\color{red}3 & 6 & \color{green}1 & 12 & 15 & 2 & 21 & 24 & 3 & 30 & \\
4 & \color{blue}2 & 12 & \color{green}1 & 20 & 6 & 28 & 2 & 36 & 10 & \\
5 & 10 & 15 & 20 & \color{green}1 & 30 & 35 & 40 & 45 & 2 & \\
6 & \color{red}3 & \color{blue}2 & 6 & 30 & \color{green}1 & 42 & 12 & 6 & 15 & \\
7 & 14 & 21 & 28 & 35 & 42 & \color{green}1 & 56 & 63 & 70 & \\
8 & 4 & 24 & \color{blue}2 & 40 & 12 & 56 & \color{green}1 & 72 & 20 & \\
9 & 18 & \color{red}3 & 36 & 45 & 6 & 63 & 72 & \color{green}1 & 90 & \\
10 & 5 & 30 & 10 & \color{blue}2 & 15 & 70 & 20 & 90 & \color{green}1 & \\
\end{matrix}</span></p>
<p>If you look you can see the <span class="math-container">$k$</span>th ray, which has the constant value <span class="math-container">$\frac{\color{green}1}{k^s}$</span> (displayed as just <span class="math-container">$k$</span> in the table), but its entries repeat at a rate of <span class="math-container">$\frac{\color{green}1}{k}$</span> along the ray.</p>
<p>Alternatively, if we write the table in polar coordinates (we are rotating coordinate space by 45 degrees) we get
<span class="math-container">\begin{matrix}
\color{green}1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
\color{green}1 & \color{blue}2 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
\color{green}1 & 0 & \color{red}3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \\
\color{green}1 & \color{blue}2 & 0 & 4 & 0 & 0 & 0 & 0 & 0 & 0 & \\
\color{green}1 & 0 & 0 & 0 & 5 & 0 & 0 & 0 & 0 & 0 & \\
\color{green}1 & \color{blue}2 & \color{red}3 & 0 & 0 & 6 & 0 & 0 & 0 & 0 & \\
\color{green}1 & 0 & 0 & 0 & 0 & 0 & 7 & 0 & 0 & 0 & \\
\color{green}1 & \color{blue}2 & 0 & 4 & 0 & 0 & 0 & 8 & 0 & 0 & \\
\color{green}1 & 0 & \color{red}3 & 0 & 0 & 0 & 0 & 0 & 9 & 0 & \\
\color{green}1 & \color{blue}2 & 0 & 0 & 5 & 0 & 0 & 0 & 0 & \color{green}10 & \\
\end{matrix}</span></p>
<p>Where now the <span class="math-container">$k$</span>th column is the "<span class="math-container">$k$</span>th" ray. I.e., the first column in the above table corresponds to the diagonals/rays in the table above it.</p>
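<p>If you want to regenerate the two tables yourself, here is a small Python sketch (the code and variable names are mine, not part of the argument); it reproduces the <span class="math-container">$\frac{\operatorname{lcm}(k,i)}{\gcd(k,i)}$</span> grid above:</p>

```python
from math import gcd

# Build the 10x10 grid of lcm(k, i) / gcd(k, i) displayed above.
# Note lcm(k, i) / gcd(k, i) = k * i / gcd(k, i)^2, always an integer.
def ray_value(k, i):
    g = gcd(k, i)
    return (k * i // g) // g

table = [[ray_value(k, i) for i in range(1, 11)] for k in range(1, 11)]

diagonal = [table[j][j] for j in range(10)]      # the green "1" ray
ray2 = [table[2 * j + 1][j] for j in range(5)]   # the blue "2" ray: k = 2i
```

<p>The diagonal is all <span class="math-container">$1$</span>s, and the entries with <span class="math-container">$k = 2i$</span> are all <span class="math-container">$2$</span>s, matching the colored rays in the table.</p>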
|
2,881,020 | <p>We're trying to figure out something, and things aren't adding up. The scenario is that you make $\$ 50,000$ a year. Every year you get a $15\%$ bonus of that income, which then gets added to your next year's income. So, the first year, you get $\$ 7,500$ which then makes your base $\$ 57,500$ the next year. When we try to figure out what the bonus will be past $6$ or $7$ years the numbers just sort of plateau at around $\$ 8,823$. All we were doing was adding the amount of the bonus to $\$ 50,000$ and then finding $15\%$ of the resulting number, and repeating. Is this correct? Why are the numbers plateauing? What would the bonus actually be after $6$ or $7$ years? I'm sorry if this is a simple question for this forum, it was recommended to me. Thanks so much in advance.</p>
| callculus42 | 144,421 | <p>You start with a bonus of 7500. Then each year the new increment is only $15\%$ of the previous one (a decrease of $85\%$), but you carry over the increments of the previous years. That means that in the first year we have $b_1=7500$. And in the second year $b_2=7500+0.15\cdot 7500=8625$ and so on:</p>
<p>$b_3=7500+0.15\cdot 7500+0.15^2\cdot 7500$</p>
<p>...</p>
<p>$b_n=7500\sum_{i=1}^{n} 0.15^{i-1}$</p>
<p>We can use the formula for the partial sum of a <a href="https://en.wikipedia.org/wiki/Geometric_series#Formula" rel="nofollow noreferrer">geometric series</a> to calculate $b_n$</p>
<p>$$b_n=\frac{7500}{0.15}\sum_{i=1}^{n} 0.15^{i}=\frac{7500}{0.15}\cdot 0.15\cdot \frac{1-0.15^n}{1-0.15}=7500\cdot \frac{1-0.15^n}{1-0.15}$$</p>
<p>And for $$ n \to \infty, b_n=\frac{7500}{1-0.15}\approx 8823.53$$</p>
<p>This is the upper bound of $b_n$.</p>
<p>The graph below shows that the increase of the bonus is large in the first $3$ years and it is very close to the upper bound in year 4: $b_4=8819.06$</p>
<p><a href="https://i.stack.imgur.com/G6hSK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/G6hSK.png" alt="enter image description here"></a></p>
|
2,176,081 | <p>I am trying to compute </p>
<blockquote>
<p>$$ \int_0^\infty \frac{\ln x}{x^2 +4}\,dx,$$</p>
</blockquote>
<p>which I find <a href="https://math.stackexchange.com/questions/2173289/integrating-int-0-infty-frac-ln-xx24-dx-with-residue-theorem/2173342">here</a>, without complex analysis. I am consistently getting the wrong answer and am hoping someone can spot the error. </p>
<p>First, denote the integral by $I$, and take $x = \frac{1}{t}$. Hence, the integral becomes </p>
<p>$$I = \int_\infty^0 \frac{\ln(1/t)}{1/t^2 + 4} \left(-\frac{1}{t^2} dt \right) = \int_0^\infty \frac{\ln(1)}{1+4t^2} dt - \int_0^\infty \frac{\ln(t)}{1+4t^2} dt$$</p>
<p>Note that the leftmost integral on the right-hand side is zero. Now, letting $u = 2t$, we get </p>
<p>$$I = -\frac{1}{2} \int_0^\infty \frac{\ln(2u)}{1+u^2}\,du = - \frac{1}{2} \int_0^\infty \frac{\ln(2)}{1+u^2}\,du - \frac{1}{2} \int_0^\infty \frac{\ln(u)}{1+u^2}\,du = - \frac{1}{2} \int_0^\infty \frac{\ln(2)}{1+u^2} - \frac{I}{2}$$</p>
<p>and therefore $I = - \frac{\ln(2)}{3} \int_0^\infty \frac{1}{1+u^2}du = - \frac{\pi \ln(2)}{6}$. This, however, is not the right answer. So, where did I go wrong?</p>
| Olivier Oloa | 118,798 | <p>There is a mistake when writing, with $u=2t$, that
$$
- \int_0^\infty \frac{\ln(t)}{1+4t^2} dt=-\frac{1}{2} \int_0^\infty \frac{\ln(2u)}{1+u^2}\:du
$$ since $t=u/2$, <strong>we don't have</strong> $\ln (t)=\ln (2u).$ </p>
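<p>For reference, the correct value of the original integral is $\frac{\pi \ln 2}{4}$ (from the standard result $\int_0^\infty \frac{\ln x}{x^2+a^2}\,dx=\frac{\pi \ln a}{2a}$ with $a=2$). A crude midpoint-rule check (my own sketch), using the substitution $x = 2\tan t$, which turns the integral into $\frac12\int_0^{\pi/2}\ln(2\tan t)\,dt$:</p>

```python
from math import log, tan, pi

# Midpoint rule on (1/2) * log(2 * tan(t)) over (0, pi/2); the endpoint
# singularities are only logarithmic, so this converges well enough.
N = 200000
h = (pi / 2) / N
approx = h * sum(0.5 * log(2.0 * tan((j + 0.5) * h)) for j in range(N))
exact = pi * log(2) / 4
```
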
|
162,655 | <p>Does there exist a Ricci-flat Riemannian or Lorentzian manifold which is geodesically complete but not flat? And is there any theorem about manifolds that are Ricci-flat but not flat? </p>
<p>I am especially interested in the case of a Lorentzian manifold whose signature is $(-,+,+,+)$. Of course, the example is not restricted to the Lorentzian case.</p>
<p>I know there are many Ricci-flat cases in General Relativity, namely the vacuum solutions to Einstein's equation. But the ones I know, such as the Kerr solution, are all geodesically incomplete. So I want a geodesically complete example and a reference. Thanks!</p>
| José Figueroa-O'Farrill | 394 | <p>All riemannian manifolds with holonomy contained in $SU(n) \subset SO(2n)$, $Sp(n) \subset SO(4n)$, $G_2 \subset SO(7)$ and $Spin(7) \subset SO(8)$ are Ricci-flat. There are plenty of non-flat examples; e.g., those with holonomy <em>precisely</em> those groups.</p>
<p>In the Lorentzian setting, you could consider a subclass of lorentzian symmetric spaces with solvable transvection group, the so-called <strong>Cahen-Wallach</strong> spacetimes. They are pp-waves with metric given in local coordinates by
$$ 2 dudv + \sum_{i=1}^{n-2} dx_i^2 + \sum_{i,j=1}^{n-2} A_{ij} x^i x^j du^2 $$
where $A_{ij}$ are the entries of a symmetric matrix. If $A$ is traceless, the metric is Ricci-flat, but if $A \neq 0$ then it is non-flat.</p>
<p><strong>Added</strong> (in response to the comment)</p>
<p>The riemannian result is classical. I learnt this from a book by Simon Salamon, “Riemannian Geometry and Holonomy Groups”, but there are some more recent lecture notes you might find useful: <a href="http://arxiv.org/abs/1206.3170" rel="noreferrer">http://arxiv.org/abs/1206.3170</a>. Concerning the lorentzian result, it is a simple calculation to determine the Riemann and Ricci tensors of the metric I wrote down. The metrics were introduced in the paper “Lorentzian symmetric spaces” by Michel Cahen and Nolan Wallach, Bull. Am. Math. Soc. 76 (1970), 585–591.</p>
|
162,655 | <p>Does there exist a Ricci-flat Riemannian or Lorentzian manifold which is geodesically complete but not flat? And is there any theorem about manifolds that are Ricci-flat but not flat? </p>
<p>I am especially interested in the case of a Lorentzian manifold whose signature is $(-,+,+,+)$. Of course, the example is not restricted to the Lorentzian case.</p>
<p>I know there are many Ricci-flat cases in General Relativity, namely the vacuum solutions to Einstein's equation. But the ones I know, such as the Kerr solution, are all geodesically incomplete. So I want a geodesically complete example and a reference. Thanks!</p>
| Igor Khavkine | 2,622 | <p>Here's another explicit reference, on top of Ben Crowell's more general comment. The following paper discusses explicit examples of "pp-wave" spacetimes (Lorentzian, solving vacuum Einstein equations) that are geodesically complete: <em>Causal structures of pp-waves</em> by Veronika E. Hubeny and Mukund Rangamani (<a href="http://arxiv.org/abs/hep-th/0211195">arXiv:hep-th/0211195</a>).</p>
|
73,383 | <p>The problem is:
$$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2}.$$</p>
<p>The tutor guessed it didn't exist, and he was correct. However, I'd like to understand why it doesn't exist.</p>
<p>I think I have to turn it into spherical coordinates and then see if the end result depends on an angle, like I've done for two variables with polar coordinates. I don't know how though.</p>
<p>I know $\rho = \sqrt{x^2+y^2+z^2}$ and $\theta = \arctan \left(\frac{y}{x} \right)$ and $\phi = \arccos \left( \frac{z}{\rho} \right)$, but how on earth do I break this thing up?</p>
| Community | -1 | <p>The limit for problems like these does not exist since the limit depends on the direction you approach. For the problem you have mentioned, say you approach $(0,0,0)$ along the direction $y = m_y x$ and $z = m_z x$, where $m_y$, $m_z$ are some constants; then we get $$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2} = \lim_{x \rightarrow 0} \frac{m_y x^2+2m_y m_z x^2 + 3m_z x^2}{x^2+4m_y^2 x^2+9 m_z^2 x^2}$$
Hence,
$$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2} = \lim_{x \rightarrow 0} \frac{m_y + 2m_y m_z + 3m_z}{1 + 4 m_y^2 + 9 m_z^2} = \frac{m_y + 2m_y m_z + 3m_z}{1 + 4 m_y^2 + 9 m_z^2}$$
You can key in different values for $m_y$ and $m_z$ and you will get different values.</p>
<p>Below are some instances of the different limits you get by approaching in different directions.</p>
<p>If you tend to zero along the direction $(1,0,0)$ i.e. $m_y = m_z = 0$, then we get $$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2} = 0.$$</p>
<p>If you tend to zero along the direction $(1,1,0)$ i.e. $m_y = 1$ and $m_z = 0$, then we get $$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2} = \frac15.$$</p>
<p>If you tend to zero along the direction $(1,0,1)$ i.e. $m_y = 0$ and $m_z = 1$, then we get $$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2} = \frac3{10}.$$</p>
<p>If you tend to zero along the direction $(1,1,1)$ i.e. $m_y = 1$ and $m_z = 1$, then we get $$\displaystyle \lim_{(x,y,z) \rightarrow (0,0,0)} \frac{xy+2yz+3xz}{x^2+4y^2+9z^2} = \frac37.$$</p>
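<p>You can also see the direction dependence numerically; this Python sketch (my own code, with hypothetical variable names) evaluates $f$ along the four directions used above:</p>

```python
# Evaluate f at points (x, m_y * x, m_z * x) for tiny x; since f is
# homogeneous of degree 0, its value along a line through 0 is constant in x.
def f(x, y, z):
    return (x * y + 2 * y * z + 3 * x * z) / (x ** 2 + 4 * y ** 2 + 9 * z ** 2)

x = 1e-8
along = {(my, mz): f(x, my * x, mz * x)
         for (my, mz) in [(0, 0), (1, 0), (0, 1), (1, 1)]}
```

<p>The four values $0$, $\frac15$, $\frac3{10}$, $\frac37$ come out as claimed, so no single limit can exist.</p>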
|
1,342,069 | <p>In the <a href="https://en.wikipedia.org/wiki/Forgetful_functor" rel="nofollow">forgetful functor Wikipedia article</a> I read that </p>
<blockquote>
<p>"[Forgetful] Functors that forget the extra sets need not be faithful; distinct morphisms respecting the structure of those extra sets may be indistinguishable on the underlying set." </p>
</blockquote>
<p>Can anyone give me an example of a forgetful functor that is not faithful?</p>
| Martin Brandenburg | 1,650 | <p>If $\mathcal{C},\mathcal{D}$ are categories, then the projection functor $\mathcal{C} \times \mathcal{D} \to \mathcal{C}$ (which "forgets" the second coordinate) is not faithful (unless $\mathcal{D}$ is thin or $\mathcal{C}$ is empty).</p>
|
733,280 | <p>I cannot understand why $\log_{49}(\sqrt{ 7})= \frac{1}{4}$. If I take the $4$th root of $49$, I don't get $7$.</p>
<p>What I am not comprehending? </p>
| MPW | 113,214 | <p>No, you get $\sqrt 7$ as you should.</p>
<p>$$\log_{49}\sqrt 7 =\log_{49}7^{\frac12}=\frac12\log_{49}7=\frac12\cdot\frac12=\frac14$$</p>
|
1,779,088 | <blockquote>
<p>Prove
$$\sum_{i=1}^n i^{k+1}=(n+1)\sum_{i=1}^n i^k-\sum_{p=1}^n\sum_{i=1}^p i^k \tag1$$
for every integer $k\ge0$. </p>
</blockquote>
<p>By principle of induction,</p>
<p>$$\sum_{i=1}^n i = n(n+1)- \sum_{p=1}^n p$$
$$2\sum_{i=1}^n i = n(n+1)$$
$$\sum_{i=1}^n i = \frac{n(n+1)}{2}$$
$\implies$$(1)$ is true for k equals to zero.</p>
<p>Assume $(1)$ is true.
$$\sum_{i=1}^n i^{k+2}=(n+1)\sum_{i=1}^n i^{k+1}-\sum_{p=1}^n\sum_{i=1}^p i^{k+1}\tag2$$
We prove $(2)$ is true.</p>
<p>From $(2)$,</p>
<p>$$\begin{align}
RHS & = (n+1)\left[(n+1)\sum_{i=1}^n i^k-\sum_{p=1}^n\sum_{i=1}^p i^k \right]-\sum_{p=1}^n\sum_{i=1}^p i^{k+1}\\
& = (n+1)^2\sum_{p=1}^n p^k-(n+1)\sum_{p=1}^n\sum_{i=1}^p i^k-\sum_{p=1}^n\sum_{i=1}^p i^{k+1}\\
& = \sum_{p=1}^n\left[(n+1)^2p^k-(n+1)\sum_{i=1}^p i^k-\sum_{i=1}^p i^{k+1}\right]\\
& = \sum_{p=1}^n\left[(n+1)^2p^k-\sum_{i=1}^p (n+1+i)i^k\right]\\
\end{align}$$</p>
<p>By examining $(2)$,</p>
<p>$$\sum_{p=1}^n\left[(n+1)^2p^k-\sum_{i=1}^p (n+1+i)i^k\right]=\sum_{p=1}^n p^{k+2}=LHS\tag3$$</p>
<p>We should be able to get $(3)$.</p>
<p>Anyone knows how to prove $(3)$?</p>
| Community | -1 | <p>In the RHS, a term $j^k$ is taken $n+1$ times in the first sum and $n-j+1$ times in the second, hence it contributes $(n+1-n+j-1)j^k=j^{k+1}$ to the total.</p>
<hr>
<p>$$\left|\begin{matrix}
1\cdot1^k\\
2\cdot2^k\\
3\cdot3^k\\
4\cdot4^k
\end{matrix}\right|=\left|\begin{matrix}
1^k&1^k&1^k&1^k&1^k\\
2^k&2^k&2^k&2^k&2^k\\
3^k&3^k&3^k&3^k&3^k\\
4^k&4^k&4^k&4^k&4^k
\end{matrix}\right|-\left|\begin{matrix}
1^k&1^k&1^k&1^k\\
2^k&2^k&2^k\\
3^k&3^k\\
4^k
\end{matrix}\right|$$</p>
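<p>It may also help to confirm identity $(1)$ by brute force before proving $(3)$; here is a short Python check (my own sketch) over small $n$ and $k$:</p>

```python
# Verify: sum i^(k+1) = (n+1) * sum i^k - sum_{p=1}^n sum_{i=1}^p i^k
def lhs(n, k):
    return sum(i ** (k + 1) for i in range(1, n + 1))

def rhs(n, k):
    first = (n + 1) * sum(i ** k for i in range(1, n + 1))
    second = sum(i ** k for p in range(1, n + 1) for i in range(1, p + 1))
    return first - second

ok = all(lhs(n, k) == rhs(n, k) for n in range(1, 15) for k in range(6))
```
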
|
57,988 | <p>I am a programmer/analyst with limited (and pretty rusty) knowledge of math.</p>
<p>"Just for the heck of it" I have decided to try my hand at <a href="http://spectrum.ieee.org/automaton/robotics/artificial-intelligence/you-you-can-take-stanfords-intro-to-ai-course-next-quarter-for-free" rel="nofollow">Stanford's introductory course on Artificial Intelligence</a> and according to their course description:</p>
<blockquote>
<p><strong>Prerequisites:</strong></p>
<p>A solid understanding of probability and linear algebra
will be required.</p>
</blockquote>
<p>Can someone please point me to concise introductory texts on these two topics? I don't know if what I'd like even exists (i.e. is there anything like "Linear Algebra for dummies"?) but my main requisites would be:</p>
<ul>
<li>Concise, as in "I doubt I can go through a 500 pages book"</li>
<li>Easy to approach (If the book itself has its own list of prerequisites, I doubt I can make much use of it, either).</li>
</ul>
<p>I can't be much more specific than this (the course introduction doesn't tell much more, unless they have recently updated it). </p>
<p>Thanks in advance!</p>
| Mathemagician1234 | 7,012 | <p>The best books I know that fit your criteria Marino are: </p>
<p><em>Introduction To Probability Theory</em> by Hoel, Port and Stone (make sure you get the 1971 edition; the later editions are terrible). This is the text I learned probability from under the very sure hand of Stefan Ralescu. I think you'll find it's exactly what you're looking for, but you really need to have a good grasp of calculus to read it. But if you've got that, you'll find it a treasure. </p>
<p><em>Linear Algebra</em>.4th edition by Charles Curtis: Theoretical and concise, but with a lot of applications. And the best discussion you'll find of the Jordan form anywhere. </p>
<p>That should get you started. Good luck! </p>
|
341,648 | <p>I'm trying to understand what a tableaux ring is (it's not clear to me reading Young Tableaux by Fulton).</p>
<p>I studied what a monoid ring is in Serge Lang's Algebra, and then I read about modules and module homomorphisms. I'm trying to prove what is stated on page 121 (S. Lang, Algebra) while talking about algebras: "we note that the group ring $A[G]$ (or monoid ring when $G$ is a monoid) is an $A$-algebra, also called group (or monoid) algebra."</p>
<p>Correct me if I'm wrong: I believe that I should start by proving that the monoid ring $A[G]$ (here $G$ is a monoid) is an $A$-module. Well, I can't figure out how it could be! This is what I've tried:</p>
<p>notation: $A$ ring, $G$ monoid, $a \in A, x \in G$.
$a \cdot x$ is the map $G \rightarrow A$ such that $a \cdot x (x) = a$ and $a \cdot x (y) = 0$ for every $y \neq x$</p>
<p>$(a+b) \cdot x = a \cdot x + b\cdot x$ for every $a,b \in A$ and every $x \in G$ follows from the definition of $a \cdot x$. I don't know how to show $a \cdot (x+y) = a \cdot x + a \cdot y$; honestly, I'm beginning to think it is not true.</p>
<p>Thanks in advance,</p>
<p>sciamp</p>
| Community | -1 | <p>Consider $$f(z) = \log(1-e^{2 \pi zi }) = \log(e^{\pi zi}(e^{-\pi zi}-e^{\pi zi})) = \log(-2i) + \pi zi + \log(\sin(\pi z))$$
Then we have
\begin{align}
\int_0^1 f(z) dz & = \log(-2i) + \dfrac{i \pi}2 + \int_0^1 \log(\sin(\pi z))dz\\
& = \int_0^1 \log(\sin(\pi z))dz + \log(-2i) + \log(i)\\
& = \log(2) + \int_0^1 \log(\sin(\pi z))dz
\end{align}
Now it suffices to show that $\displaystyle \int_0^1 f(z) dz = 0$. Consider the contour $C(\epsilon,R)$ (which is the contour given in your question) given by the following.</p>
<p>$1$. $C_1(\epsilon,R)$: The vertical line along the imaginary axis from $iR$ to $i \epsilon$.</p>
<p>$2$. $C_2(\epsilon)$: The quarter turn of radius $\epsilon$ about $0$.</p>
<p>$3$. $C_3(\epsilon)$: Along the real axis from $\epsilon$ to $1-\epsilon$.</p>
<p>$4$. $C_4(\epsilon)$: The quarter turn of radius $\epsilon$ about $1$.</p>
<p>$5$. $C_5(\epsilon,R)$: The vertical line from $1+i\epsilon$ to $1 + iR$.</p>
<p>$6$. $C_6(R)$: The horizontal line from $1+iR$ to $iR$.</p>
<p>$f(z)$ is analytic inside the contour $C$ and hence $\displaystyle \oint_C f(z) = 0$. This gives us
$$\int_{C_1(\epsilon,R)} f dz + \int_{C_2(\epsilon)} f dz + \int_{C_3(\epsilon)} f dz + \int_{C_4(\epsilon)} f dz + \int_{C_5(\epsilon,R)} f dz + \int_{C_6(R)} f dz = 0$$</p>
<p>Now the integral along $1$ cancels with the integral along $5$ due to symmetry. Integrals along $2$ and $4$ scale as $\epsilon \log(\epsilon)$. Integral along $6$ goes to $0$ as $R \to \infty$. This gives us $$\lim_{\epsilon \to 0} \int_{C_3(\epsilon)} f dz = 0$$ which is what we need.</p>
<hr>
<p><strong>EDIT</strong></p>
<p>@Did has given the standard way to evaluate this integral using real analysis techniques. Here is another way to prove it.</p>
<p>From <a href="https://math.stackexchange.com/questions/20397/striking-applications-of-integration-by-parts/20481#20481">integration by parts</a>/ other techniques, we have that $$\int_0^{\pi/2} \sin^{2k}(x) dx = \frac{2k-1}{2k}\frac{2k-3}{2k-2} \cdots \frac{1}{2} \frac{\pi}{2} = \dfrac{(2k)!}{4^k (k!)^2} \dfrac{\pi}2 = \dfrac{\Gamma(2k+1)}{4^k \Gamma^2(k+1)} \dfrac{\pi}2$$</p>
<p>Hence, the analytic extension of $\displaystyle \int_0^{\pi/2} \sin^{2z}(x) dx $ is $\dfrac{\Gamma(2z+1)}{4^z \Gamma^2(z+1)} \dfrac{\pi}2$. (This needs to be justified)</p>
<p>Now differentiate both sides with respect to $z$, and set $z=0$, to get $$2 \int_0^{\pi/2} \log(\sin(x))\,dx = -\dfrac{\pi}2 \log(4)$$
Hence, we get that $$\int_0^{\pi/2} \log(\sin(x)) dx = -\dfrac{\pi}2 \log(2)$$
This also provides you a way to evaluate $\displaystyle \int_0^{\pi/2} \sin^{n}(x) \log(\sin(x)) dx$.</p>
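<p>A midpoint-rule sanity check (my own sketch, not part of the proof) of the closed form $\int_0^{\pi/2}\log(\sin x)\,dx=-\frac{\pi}{2}\log 2$:</p>

```python
from math import log, sin, pi

# The singularity of log(sin x) at 0 is only logarithmic, so a plain
# midpoint rule converges to the exact value -(pi/2) * log(2).
N = 200000
h = (pi / 2) / N
approx = h * sum(log(sin((j + 0.5) * h)) for j in range(N))
exact = -(pi / 2) * log(2)
```
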
|
18,686 | <p>Let us define the following "dimension" of a Borel subet $B \subset \mathbb{R}^k$:</p>
<p>$\dim(B) = \min\{n \in \mathbb{N}: \exists K \subset \mathbb{R}^n, ~{\rm s.t.} ~ B \sim K\}$,</p>
<p>where $\sim$ denotes "homeomorphic to". Obviously, $0 \leq \dim(B) \leq k$.</p>
<p>I have three questions: Given a $B \subset \mathbb{R}$,<br>
1) As $k \to \infty$, how slow can $\dim(B^k)$ grow? Can we choose some $B$ such that $\dim(B^k) = o(k)$ or even $O(1)$?<br>
2) Will it make a difference if we drop the Borel measurability of $B$ or add the condition that $B$ has positive Lebesgue measure?<br>
3) Does this dimension-like notion have a name? The dimension concepts I usually see are Lebesgue's covering dimension, inductive dimension, Hausdorff dimension, Minkowski dimension, etc. I do not think the quantity defined above coincides with any of these, but of course bounds exist.</p>
<p>Thanks!</p>
| François G. Dorais | 2,000 | <p>The Cantor set satisfies $\dim(C^k) = 1$ for all $k$. You can easily find homeomorphic copies of the Cantor set with positive measure (e.g. at the $n$-th step remove the middle $1/3^n$ part of each interval instead of the middle third).</p>
|
1,305,257 | <p>I do not understand how to use the following information: If $f$ is entire, then </p>
<p>$$\lim _{|z| \rightarrow \infty} \frac{f(z)}{z^2}=2i.$$</p>
<p>So if $f$ is entire, it has a power series around $z_0=0$, so $f(z)=\Sigma_{n=0}^\infty a_nz^n$, and then we get </p>
<p>$$\lim _{|z| \rightarrow \infty} \frac{\Sigma_{n=0}^\infty a_nz^n}{z^2}=2i.$$ </p>
<p>How do I continue from here? </p>
<p>It is a part of a question. I just want to know how can I use this info. I don't know how I can manipulate summations, and since it's $|z| \rightarrow \infty$ and not $z \rightarrow \infty$ (which is meaningless), I don't really know what I can do here. </p>
<p>Maybe </p>
<p>$$\lim _{|z| \rightarrow \infty} \Sigma_{n=0}^\infty a_nz^{n-2}=2i,$$ but then what?</p>
<p>Thanks in advance for your assistance! </p>
| P Vanchinathan | 28,915 | <p>This is true for any polynomial $f(x)$ of even degree with a positive coefficient for the highest-degree term. Check that as $x\to\pm \infty$ the polynomial values $f(x)\to +\infty$. That means there exists an $N>0$ such that
for $a\in [-N, N]$ and $b\not\in [-N,N]$ we have $f(a)<f(b)$. Now the (global) minimum for $f(x)$ is attained in $[-N,N]$, a compact set.</p>
|
1,305,257 | <p>I do not understand how to use the following information: If $f$ is entire, then </p>
<p>$$\lim _{|z| \rightarrow \infty} \frac{f(z)}{z^2}=2i.$$</p>
<p>So if $f$ is entire, it has a power series around $z_0=0$, so $f(z)=\Sigma_{n=0}^\infty a_nz^n$, and then we get </p>
<p>$$\lim _{|z| \rightarrow \infty} \frac{\Sigma_{n=0}^\infty a_nz^n}{z^2}=2i.$$ </p>
<p>How do I continue from here? </p>
<p>It is a part of a question. I just want to know how can I use this info. I don't know how I can manipulate summations, and since it's $|z| \rightarrow \infty$ and not $z \rightarrow \infty$ (which is meaningless), I don't really know what I can do here. </p>
<p>Maybe </p>
<p>$$\lim _{|z| \rightarrow \infty} \Sigma_{n=0}^\infty a_nz^{n-2}=2i,$$ but then what?</p>
<p>Thanks in advance for your assistance! </p>
| AlexR | 86,940 | <p>You can't just write $f(x) = x^4 g(x)$ because $f(0)$ is defined and $g(0)$ isn't. Instead, you can simply prove
$$\lim_{x\to\pm\infty} f(x) = \infty$$
(because the $x^4$ term dominates all others in the limit)<br>
With that settled, by the definition of these limits you can find some $R>0$ such that
$$f(x) > a_0 \qquad \forall x : |x|> R$$
(the $>a_0$ is arbitrary here, any constant works as long as $f$ attains it inside $[-R,R]$; also, $R$ need not be the optimal choice)<br>
Now look at $f|_{[-R,R]}$. This is a continuous function on a closed and bounded set. You should have a theorem giving you that $f|_{[-R,R]}$ attains a minimum.<br>
Since $0\in[-R,R]$, we know that this minimum is at most $f(0) = a_0$, so $f$ doesn't attain smaller values outside of $[-R,R]$ by construction of $R$.<br>
This proves
$$\min_{x\in\mathbb R} f(x) = \min_{x\in[-R,R]} f(x) \in \mathbb R$$
exists.</p>
|
977,956 | <p>Can you help me solve this problem?</p>
<blockquote>
<p>Simplify: $\sin \dfrac{2\pi}{n} +\sin \dfrac{4\pi}{n} +\ldots +\sin \dfrac{2\pi(n-1)}{n}$.</p>
</blockquote>
| Yiyuan Lee | 104,919 | <p>The sum of all $n$th roots of unity (for $n > 1$) is zero. See <a href="http://en.wikipedia.org/wiki/Root_of_unity#Summation" rel="nofollow">here</a> for the proof. Its imaginary part is also zero. That is,</p>
<p>$$\sum_{k = 0}^{n - 1} \sin \frac{2\pi k}{n} = 0$$</p>
<p>Now simply subtract $\sin 0 = 0 $ from both sides to get</p>
<p>$$\sum_{k = 1}^{n - 1} \sin \frac{2\pi k}{n} = 0$$</p>
|
3,287,424 | <p>I have a function <span class="math-container">$$f(z)=\begin{cases}
e^{-z^{-4}} & z\neq0 \\
0 & z=0
\end{cases}$$</span></p>
<p>I have to show cauchy riemann equation is satisfied everywhere. I have shown that it isn't differentiable at <span class="math-container">$z=0$</span>. </p>
<p>Usually I will have to convert it in <span class="math-container">$$f(z)=u+iv$$</span> which seems very tedious. Is there some way to do this while keeping it in <span class="math-container">$f(z)$</span> form. </p>
| Nitin Uniyal | 246,221 | <p>As the question asks to use the Cauchy–Riemann equations, you either convert to get <span class="math-container">$u$</span> and <span class="math-container">$v$</span> in <span class="math-container">$x$</span> and <span class="math-container">$y$</span>, or use polar coordinates <span class="math-container">$r$</span>, <span class="math-container">$\theta$</span> via <span class="math-container">$z=re^{i\theta}$</span>. Since <span class="math-container">$-z^{-4}=-\frac{\cos 4\theta}{r^4}+i\,\frac{\sin 4\theta}{r^4}$</span>, we get</p>
<p><span class="math-container">$f(re^{i\theta})=e^{-\frac{\cos 4\theta}{r^4}}\cos\left(\frac{\sin 4\theta}{r^4}\right)+i\,e^{-\frac{\cos 4\theta}{r^4}}\sin\left(\frac{\sin 4\theta}{r^4}\right)$</span>. </p>
<p>Then the Cauchy–Riemann equations in polar form are <span class="math-container">$\dfrac{\partial u}{\partial r}=\dfrac{1}{r}\dfrac{\partial v}{\partial \theta},\quad \dfrac{\partial v}{\partial r}=-\dfrac{1}{r}\dfrac{\partial u}{\partial \theta}$</span>.</p>
|
134,673 | <p>I need to show that an automorphism of $S_n$ which takes transpositions to transpositions is an inner automorphism.</p>
<p>I thought it could be done by showing that such automorphisms form a subgroups $H\le Aut(S_n)$, that $Inn(S_n)\subset H$ and that they have the same number of elements. The number of inner automorphisms is $n!$ because $S_n$ has a trivial center (at least when it is not abelian) and therefore is isomorphic to $Inn(S_n)$. However I have no idea how I could count the number of elements in $H$.</p>
<p>Is there a way to do it, or should I change the approach altogether?</p>
<p>Thank you.</p>
| Mark Bennet | 2,906 | <p>I think there is an easier way of looking at this (now edited to get the argument correct - and also simpler, thanks to Jyrki's comment). An inner automorphism of $S_n$ is equivalent to a permutation of the underlying set on which $S_n$ acts. Let's choose our generators to be a permutation $a=(1 2)$ and the n-cycle $b=(1 2 3 ... n)$.</p>
<p>Then we know that the automorphism sends $a$ to another transposition $a'$. </p>
<p>Consider the cycle decomposition of $b'$, the image of $b$. The two objects permuted by $a'$ must be adjacent within the same cycle in $b'$ (else $b'^{-1}a'b'$ will be found to be a transposition moving different objects from $a'$ and will commute with it, and this is not true for $a, b$). </p>
<p>Suppose the objects moved by $a'$ are adjacent in a cycle of length $m<n$ in $b'$. Then $b'^m$ will commute with $a'$ (it will not move the objects moved by $a'$). This is not true for $a, b$, so we must have $m=n$ i.e. $b'$ is an $n$-cycle with the elements moved by $a'$ adjacent. Since this is the same cycle pattern as for $a, b$ there will be a permutation of the underlying objects which does the job, and this is the required inner automorphism.</p>
|
3,844,256 | <p>How can one prove the following deduction? Assume we know the following result.</p>
<p><span class="math-container">$$ \frac{1}{2}\arctan\left( \frac{y}{x+1} \right) + \frac{1}{2}\arctan\left( \frac{y}{x-1} \right) - \arctan\left( \frac{y}{x} \right) = c$$</span></p>
<p>Then, it is claimed that this is equivalent to the following. I am not able to figure out why.</p>
<p><span class="math-container">$$ \frac{(x^2+y^2)^2+y^2-x^2}{xy} = k$$</span></p>
<p>I am aware of the formula for addition <span class="math-container">$\arctan(x) + \arctan(y)$</span> but I am not sure how to deal with prefactors of <span class="math-container">$1/2$</span>.</p>
| Daron | 53,993 | <p>Here are some hints. First prove that for any connected subset <span class="math-container">$C \subset X$</span> the closure of <span class="math-container">$C$</span> is also connected. Then show <span class="math-container">$A \cup \{p\}$</span> is the closure of <span class="math-container">$A$</span>. To do so, consider whether <span class="math-container">$A,B$</span> are open or closed or neither or both.</p>
|
143,274 | <p>I am trying to find the derivative of $\sqrt{9-x}$ using the definition of a derivative </p>
<p>$$\lim_{h\to 0} \frac {f(a+h)-f(a)}{h} $$</p>
<p>$$\lim_{h\to 0} \frac {\sqrt{9-(a+h)}-\sqrt{9-a}}{h} $$</p>
<p>So to simplify I multiply by the conjugate</p>
<p>$$\lim_{h\to0} \frac {\sqrt{9-(a+h)}-\sqrt{9-a}}{h}\cdot \frac{ \sqrt{9-(a+h)}+ \sqrt{9-a}}{\sqrt{9-(a+h)}+\sqrt{9-a}}$$</p>
<p>which gives me </p>
<p>$$\frac {-2a-h}{h(\sqrt{9-(a+h)}+\sqrt{9-a})}$$</p>
<p>I have no idea what to do from here, obviously I can easily get the derivative using other methods but with this one I have no idea how to proceed.</p>
| David Mitra | 18,986 | <p>You made a mistake when doing the multiplication upstairs:</p>
<p>When multiplying
$$
\Bigl( \color{maroon}{\sqrt{9-(a+h)} }- \color{darkgreen}{\sqrt {9-a}}\ \Bigr)\Bigl(\color{maroon}{\sqrt{9-(a+h)} }+\color{darkgreen}{ \sqrt {9-a}}\ \Bigr),
$$
you are using the rule
$$
(\color{maroon}a-\color{darkgreen}b)(\color{maroon}a+\color{darkgreen}b)
=\color{maroon}a^2-\color{darkgreen}b^2
$$
So you obtain
$$
\Bigl(\color{maroon}{\sqrt{9-(a+h)}}\ \Bigr)^2 - \Bigl(\color{darkgreen}{\sqrt {9-a}}\ \Bigr)^2= \bigl(9-(a+h)\bigr) - (9-a) = \color{teal}9\color{purple}{-a}-h\color{teal}{-9}+\color{purple} a= -h.
$$</p>
<p><br></p>
<p>Then, to find your derivative, you have to compute
$$\eqalign{
f'(a)=
\lim_{h\rightarrow 0} { -h\over h\bigl( \sqrt{9-(a-h) }+\sqrt{9-a}\ \bigr ) }
&=\lim_{h\rightarrow 0} { -1\over \sqrt{9-(a-h) }+\sqrt{9-a} }\cr
&= { -1\over \sqrt{9-(a-0) }+\sqrt{9-a} }\cr
&= { -1\over \sqrt{9-a }+\sqrt{9-a} }\cr
&= { -1\over 2\sqrt{9-a} }.
}
$$</p>
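<p>A quick numerical sanity check (my own sketch): the difference quotient approaches $-\frac{1}{2\sqrt{9-a}}$, e.g. $-\frac14$ at $a=5$:</p>

```python
from math import sqrt

def f(x):
    return sqrt(9 - x)

a = 5.0
exact = -1 / (2 * sqrt(9 - a))              # -1/4 at a = 5
quotients = [(f(a + h) - f(a)) / h for h in (1e-3, 1e-5, 1e-7)]
```
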
|
666,461 | <p>The function $f(x)=x+\log x$ has only one root on $(0,\infty)$ which is in $(0,1)$.</p>
<p>Using the Intermediate Value Theorem: $f$ is continuous on $(0,\infty)$, $f(x)\to-\infty$ as $x\to 0^+$, and $f(1)=1+\log(1)=1>0$. So there exists an $x$ such that $f(x)=0$.</p>
<p>But how to show that this $x$ is the only root? </p>
| guest | 125,454 | <p>$f(x)=x+\log(x)=\log(e^x x)$, so finding the roots of $f(x)$ is the same as finding the solutions of the equation $xe^x=1$. Let $g(x)=xe^x-1$ and suppose $g(x)$ has two roots in $(0,1)$, say $a$ and $b$. Since $g(a)=g(b)=0$, $g$ is continuous on $[a,b]$ and differentiable on $(a,b)$, we can apply Rolle's theorem: there must be a $c\in(a,b)$ such that $g'(c)=0$, which requires $e^c(c+1)=0$. But this equation is impossible for any $c\in(a,b)$ with $0<a,b<1$. So our assumption is false, $g$ has only one root in $(0,1)$, and so does $f$.</p>
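<p>Since $f'(x)=1+\frac1x>0$, $f$ is strictly increasing on $(0,\infty)$, and bisection locates the unique root (it is $W(1)\approx 0.5671$, the Omega constant). A small sketch (my own code):</p>

```python
from math import log

def f(x):
    return x + log(x)

# f is continuous and strictly increasing on (0, inf), and
# f(0.1) < 0 < f(1), so bisection converges to the unique root.
lo, hi = 0.1, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
```
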
|
1,663,838 | <p>Show that a positive integer $n \in \mathbb{N}$ is prime if and only if $\gcd(n,m)=1$ for all $0<m<n$.</p>
<p>I know that I can write $n=km+r$ for some $k,r \in \mathbb{Z}$ since $n>m$</p>
<p>and also that $1=an+bm$. for some $a,b \in \mathbb{Z}$</p>
<p>Further, I know that $n>1$ if I'm to show $n$ is prime.</p>
<p>I'm not sure how I would go about showing this in both directions though.</p>
| fleablood | 280,126 | <p>This should be trivial.</p>
<p>If n is prime it has no factors but 1 and n. So gcd (n,m) can only equal 1 or n. If gcd(n,m) = n then that means n|m. So m $\ge$ n. So if m < n then gcd (n,m)=1.</p>
<p>That's one way.</p>
<p>If gcd(n,m)=1 for all m < n, then no number less than n divides n (other than 1). As no number <em>larger</em> than n divides n, n has no divisors except itself and 1. So n is prime.</p>
<p>That's the other way.</p>
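<p>For small $n$ the equivalence is easy to cross-check by brute force (my own sketch; trial division stands in for "no divisors except $1$ and $n$"):</p>

```python
from math import gcd

def is_prime(n):
    # textbook definition via trial division (n > 1 required)
    return n > 1 and all(n % d != 0 for d in range(2, n))

def coprime_to_all_below(n):
    # the gcd condition from the problem statement
    return n > 1 and all(gcd(n, m) == 1 for m in range(1, n))

agree = all(is_prime(n) == coprime_to_all_below(n) for n in range(2, 200))
```
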
|
<p>I have points and limits of a function, and even the shape of the function, and I'm looking for the function itself. Something I find very interesting is: how can I control the curve of the function?</p>
<p>(1) <span class="math-container">$\lim\limits_{x \to \infty} f(x) = 1 $</span></p>
<p>(2) <span class="math-container">$f(\frac{1}{c}) = 1 $</span></p>
<p>(3) <span class="math-container">$0\lt x$</span></p>
<p>(4) <span class="math-container">$0\lt c \leq 1$</span></p>
<p><a href="https://i.stack.imgur.com/DqYRT.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DqYRT.jpg" alt="Shape of function"></a></p>
| DSaad | 169,718 | <p>I found two answers after a lot of experimenting with variables: </p>
<p><span class="math-container">$\left(\frac{\left(e^{2\cdot e\cdot c\cdot x}\right)-1}{\left(e^{2\cdot e\cdot c\cdot x}\right)+1}\right)$</span></p>
<p><span class="math-container">$\left(\frac{1-\left(e^{-2\cdot e\cdot c\cdot x}\right)}{1+\left(e^{-2\cdot e\cdot c\cdot x}\right)}\right)$</span></p>
<p>You can see the plots and play with c here: <a href="https://www.desmos.com/calculator/0tqfumlp4x" rel="nofollow noreferrer">link</a></p>
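<p>Both expressions are algebraic rewritings of $\tanh(e\,c\,x)$, since $\frac{e^{2a}-1}{e^{2a}+1}=\frac{1-e^{-2a}}{1+e^{-2a}}=\tanh(a)$ with $a = e\,c\,x$. Note that condition (2) then holds only approximately: $f(1/c)=\tanh(e)\approx 0.991$. A quick check (my own sketch):</p>

```python
from math import e, exp, tanh

def form1(c, x):
    t = 2 * e * c * x
    return (exp(t) - 1) / (exp(t) + 1)

def form2(c, x):
    t = 2 * e * c * x
    return (1 - exp(-t)) / (1 + exp(-t))

c = 0.5
triples = [(form1(c, x), form2(c, x), tanh(e * c * x)) for x in (0.1, 1 / c, 5.0)]
at_inv_c = tanh(e)   # f(1/c) ~ 0.9913: close to, but not exactly, 1
```
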
|
2,418,171 | <p>I would like to see an explicit example of two smooth isometric embeddings (in the Riemannian sense) $i_1,i_2:\mathbb{R}^2 \to \mathbb{R}^3$ such that there is no isometry $\varphi:\mathbb{R}^3\to \mathbb{R}^3$ suct that $i_2=\varphi\circ i_1$.</p>
<p>(I take here $\mathbb{R}^2,\mathbb{R}^3$ to be endowed with the standard Euclidean metrics).</p>
<p>Also, does there exist a pair of embeddings $i_1,i_2$ such that their <em>images</em> are not equivalent, i.e there is no isometry $\varphi:\mathbb{R}^3\to \mathbb{R}^3$ such that $\text{Image}(i_2)=\varphi (\text{Image}(i_1))$?</p>
| Chris Culter | 87,023 | <p>Sure, how about taking one image to be $\{(x,y,0)\}$ and the other $\{(x,y,\sin x)\}$?</p>
|
2,418,171 | <p>I would like to see an explicit example of two smooth isometric embeddings (in the Riemannian sense) $i_1,i_2:\mathbb{R}^2 \to \mathbb{R}^3$ such that there is no isometry $\varphi:\mathbb{R}^3\to \mathbb{R}^3$ such that $i_2=\varphi\circ i_1$.</p>
<p>(I take here $\mathbb{R}^2,\mathbb{R}^3$ to be endowed with the standard Euclidean metrics).</p>
<p>Also, does there exist a pair of embeddings $i_1,i_2$ such that their <em>images</em> are not equivalent, i.e there is no isometry $\varphi:\mathbb{R}^3\to \mathbb{R}^3$ such that $\text{Image}(i_2)=\varphi (\text{Image}(i_1))$?</p>
| levap | 32,262 | <p>Take a unit speed curve $\alpha \colon \mathbb{R} \rightarrow \mathbb{R}^3$ whose image lies in the $xy$ plane and consider the map $\varphi \colon \mathbb{R}^2 \rightarrow \mathbb{R}^3$ given by</p>
<p>$$ \varphi(s,t) = \alpha(s) + te_3 $$</p>
<p>where $e_3 = (0,0,1)$. The image of $\varphi$ is a cylinder over $\alpha$ with axis $e_3$. The differential of $\varphi$ is $(\alpha'(s), e_3)$ and since $\alpha'(s),e_3$ are orthogonal to each other and of unit length, this is an isometric immersion. If $\alpha$ is an embedding, you'll get an embedding. By choosing $\alpha$ appropriately, it is clear you can get an embedding whose image is not congruent to a plane (which itself can be considered as a cylinder over a line, with axis that lies in the plane and is perpendicular to that line).</p>
|
905,685 | <p>Let the balls be labelled $1,2,3,..n$ and the boxes be labelled $1,2,3,..,n$. </p>
<p>Now I want to find, </p>
<ul>
<li><p>What is the expected value of the minimum value of the label among the boxes which are non-empty </p></li>
<li><p>What is the expected number of boxes with exactly one ball in them? </p></li>
</ul>
<hr>
<p>Whatever way I am thinking of it, I am getting complicated summation form of answers and not any exact closed form! </p>
| Graham Kemp | 135,106 | <h2>A</h2>
<p>To find the expectation of the minimum used label, $K$, we first measure the probability that all the balls land among the top $n-k$ boxes. That is, that the minimum label will be greater than some value $k$.</p>
<p>In the total space each of $n$ balls has a choice of $n$ boxes ($n^n$). In the restricted space each of $n$ balls has a choice of $n-k$ boxes $(n-k)^n$. So then:
$$\begin{align}
\Pr(K > k) & = \frac{(n-k)^n}{n^n}
\\[1ex] \Pr(K > k-1) & = \frac{(n-k+1)^n}{n^n}
\\[1ex] \Pr(K=k) & = \Pr(K>k-1)-\Pr(K>k)
\\ & = \frac{(n-k+1)^n-(n-k)^n}{n^n}
\\[2ex]
E(K) & = \sum_{k=1}^{n} k \Pr(K=k)
\\ & = \frac 1 {n^n}\sum_{k=1}^n (k(n-k+1)^n-k(n-k)^n)
\\ & = \frac 1 {n^n}\left(\sum_{k=0}^{n-1} (k+1)(n-k)^n-\sum_{k=1}^{n} k(n-k)^n\right)
\\ & = \frac 1 {n^n}\left( n^n + \sum_{k=1}^{n-1} (n-k)^n\right)
\\ & = \frac 1 {n^n}\left( n^n + \sum_{k=1}^{n-1} k^n\right)
\\ & = 1 + n^{-n}\underbrace{\sum_{k=1}^{n-1} k^n}_{\text{a generalised Harmonic series?}}
\end{align}$$</p>
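<p>The closed form above can be checked by brute-force enumeration for small $n$; the sketch below (function names are mine, not part of the answer) compares it against a direct average over all $n^n$ assignments:</p>

```python
from itertools import product
from fractions import Fraction

def exact_expected_min_label(n):
    # Brute force: enumerate all n^n ways to drop n balls into boxes
    # 1..n and average the smallest occupied box label.
    total = Fraction(0)
    for assignment in product(range(1, n + 1), repeat=n):
        total += min(assignment)
    return total / n**n

def formula(n):
    # The closed form derived above: E(K) = 1 + n^{-n} * sum_{k=1}^{n-1} k^n.
    return 1 + Fraction(sum(k**n for k in range(1, n)), n**n)

for n in range(1, 6):
    assert exact_expected_min_label(n) == formula(n)

print(formula(4))  # → 177/128
```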
<hr>
<h2>B</h2>
<p>Let $B_i$ be the Bernoulli indicator that box $i$ contains <em>exactly</em> one ball. We then use the <strong>linearity of expectation</strong> to find the expected value of $B:=\sum_{i=1}^n B_i$.</p>
<p>$$\begin{align}
\forall i\in \{1..n\}:\Pr(B_i=1) & = {n\choose 1}(n-1)^{n-1}/n^n
\\[2ex] \mathbb{E}[B] & = \mathbb{E}[\sum_{i=1}^n B_i]
\\ & = \sum_{i=1}^n \mathbb{E}[B_i]
\\ & = n\times (0\times \Pr(B_\ast=0)+1\times \Pr(B_\ast=1))
\\[1ex]\therefore \mathbb{E}[B] & = \frac{(n-1)^{n-1} }{ n^{n-2}}
\end{align}$$</p>
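<p>Part B's closed form can be verified the same way by exhaustive enumeration for small $n$ (again a sketch, with names of my choosing):</p>

```python
from itertools import product
from fractions import Fraction

def exact_expected_singletons(n):
    # Brute force: average, over all n^n assignments, of the number of
    # boxes containing exactly one ball.
    total = 0
    for assignment in product(range(n), repeat=n):
        total += sum(1 for box in range(n) if assignment.count(box) == 1)
    return Fraction(total, n**n)

def formula(n):
    # The closed form derived above: E[B] = (n-1)^(n-1) / n^(n-2).
    return Fraction((n - 1)**(n - 1), n**(n - 2))

for n in range(2, 6):
    assert exact_expected_singletons(n) == formula(n)
```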
|
1,599,467 | <p>Here $f$ is a non-zero linear functional on a vector space $X$. I can show this true for one direction, </p>
<blockquote>
<p>Let $x_1, x_2 \in x + N(f)$</p>
<p>$\implies x_1 = x + y_1, \quad x_2 = x + y_2$, where $y_1, y_2 \in N(f).$</p>
<p>Then $f(x_1) = f(x) + f(y_1) = f(x) = f(x) + f(y_2) = f(x_2)$.</p>
</blockquote>
<p>I'm not sure how to prove the converse holds true too.</p>
| symplectomorphic | 23,611 | <p>Might as well make my comment an answer:</p>
<p>If $f(x)=f(y)$, then $0=f(x)-f(y)=f(x-y)$ by linearity, so $x-y$ is in the null space. Hence $x$ and $y$ are mapped to the same element of the quotient space.</p>
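<p>A concrete numeric illustration of this equivalence, using an arbitrarily chosen functional on $\mathbb{R}^3$ (the choice $f(v)=v_1-2v_2+3v_3$ is mine, for the example):</p>

```python
import random

# A concrete linear functional on R^3:
# f(v) = v1 - 2*v2 + 3*v3, so N(f) = {v : v1 - 2*v2 + 3*v3 = 0}.
def f(v):
    return v[0] - 2 * v[1] + 3 * v[2]

random.seed(1)
for _ in range(1000):
    x = tuple(random.randint(-5, 5) for _ in range(3))
    y = tuple(random.randint(-5, 5) for _ in range(3))
    diff = tuple(a - b for a, b in zip(x, y))
    # f(x) = f(y) exactly when x - y lies in the null space of f.
    assert (f(x) == f(y)) == (f(diff) == 0)
```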
|
95,314 | <p>To evaluate this type of limits, how can I do, considering $f$ differentiable, and $ f (x_0)> 0 $</p>
<p>$$\lim_{x\to x_0} \biggl(\frac{f(x)}{f(x_0)}\biggr)^{\frac{1}{\ln x -\ln x_0 }},\quad\quad x_0>0,$$</p>
<p>$$\lim_{x\to x_0} \frac{x_0^n f(x)-x^n f(x_0)}{x-x_0},\quad\quad n\in\mathbb{N}.$$</p>
| Davide Giraudo | 9,849 | <p>For the first: for $x\neq x_0$, since $f(x)>0$ in a neighborhood of $x_0$
\begin{align*}
\left(\frac{f(x)}{f(x_0)}\right)^{\frac 1{\ln x-\ln x_0}}&=\exp\left(\frac {\ln f(x)-\ln f(x_0)}{\ln x-\ln x_0}\right)\\
&=\exp\left(\frac {\ln f(x)-\ln f(x_0)}{x-x_0}\frac{x-x_0}{\ln x-\ln x_0}\right),
\end{align*}
and taking the limit $x\to x_0$, since $\ln f$ is differentiable, we get
$$\lim_{x\to x_0}\left(\frac{f(x)}{f(x_0)}\right)^{\frac 1{\ln x-\ln x_0}}=\exp\left(\frac{f'(x_0)}{f(x_0)}\frac 1{\frac 1{x_0}}\right)=\exp\left(x_0\frac{f'(x_0)}{f(x_0)}\right).$$
For the second question
\begin{align*}\frac{x_0^nf(x)-x^nf(x_0)}{x-x_0}&=x_0^n\frac{f(x)-f(x_0)}{x-x_0}+f(x_0)\frac{x_0^n-x^n}{x-x_0}\\
&=x_0^n\frac{f(x)-f(x_0)}{x-x_0}-f(x_0)\sum_{k=0}^{n-1}x^kx_0^{n-k-1},
\end{align*}
and taking the limit $x\to x_0$ we get
$$\lim_{x\to x_0}\frac{x_0^nf(x)-x^nf(x_0)}{x-x_0}=x_0^nf'(x_0)-nf(x_0)x_0^{n-1}.$$</p>
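<p>Both results can be sanity-checked numerically for a concrete choice of $f$; here $f=\exp$, $x_0=2$, $n=3$ are arbitrary illustrative picks of mine, not values from the question:</p>

```python
import math

# Arbitrary illustrative choices: f = exp, x0 = 2, n = 3.
f, fprime = math.exp, math.exp
x0, n = 2.0, 3

h = 1e-6
x = x0 + h

# First limit vs its closed form exp(x0 * f'(x0)/f(x0)).
limit1 = (f(x) / f(x0)) ** (1 / (math.log(x) - math.log(x0)))
predicted1 = math.exp(x0 * fprime(x0) / f(x0))

# Second limit vs its closed form x0^n f'(x0) - n f(x0) x0^(n-1).
limit2 = (x0**n * f(x) - x**n * f(x0)) / (x - x0)
predicted2 = x0**n * fprime(x0) - n * f(x0) * x0**(n - 1)

assert abs(limit1 - predicted1) < 1e-3
assert abs(limit2 - predicted2) < 1e-2
```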
|
1,336,344 | <p>Given the adjacency matrix $A$ of a $k$-regular strongly regular graph $G$ (srg($n,k,\lambda,\mu$); $\lambda ,\mu >0$; $k>3$). The matrix $A$ can be divided into 4 sub-matrices based on adjacency to a vertex $x \in G$.
$A_x$ is the symmetric matrix of the graph $(G-x)$, where $C$ is the symmetric matrix of the graph created by vertices of $(G-x)$ which are not adjacent to $x$ and $D$ is the symmetric matrix of the graph created by vertices of $(G-x)$ which are adjacent to $x$. </p>
<p>$$ A_x = \left(\begin{array}{cccccc|ccc|c}
0&1&0&0&1&0&1&0&0&0\\
1&0&1&0&0&0&0&1&0&0\\
0&1&0&1&0&0&0&0&1&0\\
0&0&1&0&0&1&1&0&0&0\\
1&0&0&0&0&1&0&0&1&0\\
0&0&0&1&1&0&0&1&0&0\\
\hline
1&0&0&1&0&0&0&0&0&1\\
0&1&0&0&0&1&0&0&0&1\\
0&0&1&0&1&0&0&0&0&1\\
\hline
0&0&0&0&0&0&1&1&1&0\\
\end{array}\right)
=
\left( \begin{array}{ccc}
C & E & 0 \\
E^{T} & D & 1\\
0 & 1 & 0\\
\end{array} \right)
$$</p>
<p>It should be noted that</p>
<ol>
<li><p>Interchanging/swapping any two rows (or columns) of $C$ does not affect matrix $D$ (and vice versa).</p></li>
<li><p>Any change in $C$ or $D$ or both $C$ and $D$ changes matrix $E$.</p></li>
</ol>
<p><strong>Problem:</strong> If some vertices of $G$ are rearranged (i.e., permuted), $A$ will be different; say, <strong><em>this new matrix is $B$. Again, matrix $B$ can be divided into 4 sub-matrices based on adjacency of vertex $x \in G$, and $ B_x$ can be obtained.</em></strong></p>
<p>Assume:</p>
<ol>
<li>$C$ is always the adjacency matrix of a regular graph and bigger than $D$.</li>
<li>There exists an algorithm that always order $D$ (for a vertex $x \in G$) takes $O(K)$ time (assumed to be polynomial/exponential, does not matter). </li>
<li><p>Each row of $E$ has a different permutation, i.e., rows might have the same number of 1's but a different arrangement of them.</p>
<p>For $n$ vertices there will be a total of $n$ pairs $C,D$, each of which will take $O(K)$ (assumed to be polynomial) time to sort. If each $C$ takes $O(f(n))$ time to sort, then the total complexity will be $O(n \cdot K \cdot f(n))$. </p></li>
</ol>
<p><strong>Question:</strong> According to the three assumptions above, does there exist a polynomial time algorithm to sort $C$ so that $B=A$? I.e., is there a polynomial $f$ ?</p>
<p>The problem is connected to <a href="https://math.stackexchange.com/questions/1240637/counting-problem-of-combinations-of-symmetric-matrix">this question</a>.</p>
<p><strong><em>If anything is unclear, please ask a specific question, I will try my best to answer.</em></strong> </p>
| Michael | 179,940 | <p><strong>Proposed Solution:</strong></p>
<p>Arrangement of $G$: $A$ is the matrix of graph $G$, where $|G|=|A|=n$. Based on the adjacency of the last ($n$th) vertex, $A$ can be divided into 4 sub-matrices, of which 2 are symmetric matrices ($C,D$) and 2 are non-square matrices ($E^{T}, E$). One symmetric matrix, say $D$, has size $<n/2$. Repeat the whole procedure for matrix $D$ and the $k$th vertex, $0<k<n$.
This procedure can be done at most $\log_2(n)$ times.<br>
<br><br></p>
<p>Description of the Algorithm: <strong><em>$A,B$ are the symmetric matrices of graphs $G,H$ respectively.</em></strong> For $x \in G$, consider the graph $(G-x)$; it has vertices adjacent to $x$ and vertices non-adjacent to $x$.
<br><br></p>
<p>$ A_x$ is the symmetric matrix of the graph $(G-x)$, where $C$ is the symmetric matrix of the graph created by vertices of $(G-x)$ which are adjacent to $x$ and $D$ is the symmetric matrix of the graph created by vertices of $(G-x)$ which are not adjacent to $x$. </p>
<p>$ A_x = \left( \begin{array}{cc}
C & E \\
E^{T} & D\\
\end{array} \right)$ </p>
<p>It should be noted that:</p>
<ol>
<li>Interchanging/swapping any two rows (or columns) of $C$ does not affect matrix $D$ (and vice versa).</li>
<li>Any change in $C$ or $D$ or both $C$ and $D$ changes matrix $E$.</li>
</ol>
<p>If $G \simeq H$ then there exists $u \in H$ such that $(G-x)\simeq (H-u)$; the matrix of $(H-u)$ is $B_u$, where $ B_u= \left( \begin{array}{cc}
S & R \\
R^{T} & Q\\
\end{array} \right) $</p>
<p>Consider the situation where $S \neq C$ and $D \neq Q$. Since $(G-x)\simeq (H-u)$, there exists a permutation matrix $P$ such that $PQ=D$.
Once $Q$ is arranged exactly like $D$, then $C$ can be arranged in non-exponential time <strong>using the arrangement of the rows of $E$, because after $Q=D$ the rows of $E$ and $R$ are the same but arranged in a different order. For example, the 1st row may be placed in the 4th row, but the 1st row will look the same in the 4th row; only the order is changed (1st to 4th).</strong></p>
<p>[ Once Q is arranged exactly like D, you can not change / reorder/ permute vertices which made Q, because if you change / reorder / permute vertices of Q, then Q will not be equal to D.If you can not change Q(=D), then there will be no column change in R(which is expected equal to E) as changing any column in R will change Q( but not S). So , after, Q is arranged exactly like D, change can be done in S matrix, if you change S, then only rows of E will change their order].</p>
<p>So, arrange the order of rows of $R$ like $E$, it will make $S = C$.</p>
|
1,810,729 | <blockquote>
<p>Let $G$ be a group generated by $x,y$ with the relations $x^3=y^2=(xy)^2=1$. Then show that the order of $G$ is 6.</p>
</blockquote>
<p><strong>My attempt:</strong> So writing down the elements of $G$ we have $\{1,x,x^2,y,\}$. Other elements include $\{xy, xy^2, x^2y\}$ it seems I am counting more than $6$. Are some of these equal? how do I prove that?</p>
| Kushal Bhuyan | 259,670 | <p>Since $y^2=1$, we have $xy^2=x$, so only one of those two counts; this leaves the $6$ elements $\{1,x,x^2,y,xy,x^2y\}$. The given relations basically resemble the structure of $D_3$, the dihedral group of order $6$.</p>
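<p>To see that the six listed elements are genuinely distinct (so the order is exactly $6$ and not smaller), one can realize the presentation in $S_3$ with $x$ a 3-cycle and $y$ a transposition. A small sketch of that check, added here for illustration — it shows a homomorphic image of $G$ has $6$ elements, so $|G|\ge 6$, and the relation-counting above gives $|G|\le 6$:</p>

```python
# Realize <x, y | x^3 = y^2 = (xy)^2 = 1> inside S_3,
# with x a 3-cycle and y a transposition.
def compose(p, q):
    # (p ∘ q)(i) = p[q[i]], permutations stored as tuples on {0, 1, 2}
    return tuple(p[i] for i in q)

e = (0, 1, 2)
x = (1, 2, 0)        # the 3-cycle 0 -> 1 -> 2 -> 0
y = (1, 0, 2)        # the transposition swapping 0 and 1

xy = compose(x, y)
assert compose(compose(x, x), x) == e        # x^3 = 1
assert compose(y, y) == e                    # y^2 = 1
assert compose(xy, xy) == e                  # (xy)^2 = 1

# Enumerate everything reachable from the identity by multiplying by x, y.
reached, frontier = {e}, {e}
while frontier:
    frontier = {compose(g, s) for g in frontier for s in (x, y)} - reached
    reached |= frontier
assert len(reached) == 6                     # at least 6 distinct elements
```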
|
4,212,181 | <p>A uniform cable that is 2 pounds per feet and is 100 feet long hangs vertically from a pulley system at the top of a building (and the building is also 100 feet tall).</p>
<p>How much work is required to lift the cable until the bottom end of the cable is 20 feet below the top of the building?</p>
<p><span class="math-container">$W=FD$</span></p>
<p><span class="math-container">$F=2\Delta y$</span></p>
<p><span class="math-container">$D=80-y$</span> ??</p>
<p><span class="math-container">$\int_{0}^{100}2(80-y)dy$</span></p>
<p>Or am I mixing up the limits of integration and the distance? Or am I completely wrong? Thanks for the help. I appreciate it.</p>
| Oğuzhan Kılıç | 481,167 | <p>There are many ways to solve this problem; let's first do it your way. Note that I'm not bothering with units (feet and pounds are not pleasant to work with). Also, let me note that we must keep track of <span class="math-container">$g$</span> (the gravitational constant): if we said nothing about it, the formulas would suggest that no work is required. Below, <span class="math-container">$g$</span> is taken as <span class="math-container">$1$</span>, since in pound units the weight density <span class="math-container">$2$</span> lb/ft already accounts for gravity.</p>
<p>After the cable has been lifted a distance <span class="math-container">$y$</span>, the hanging part has length <span class="math-container">$100-y$</span>, so its mass is <span class="math-container">$2(100-y)$</span> and the force acting on it is <span class="math-container">$g\cdot 2(100-y)=gm$</span>. Work is given by <span class="math-container">$\text{Force}\times\text{Distance}$</span>. Then the integral becomes</p>
<p><span class="math-container">$$ \int_0^{80}2g(100-y)\ dy = 9600$$</span></p>
<p>Another way to solve is to use energy which can be calculated easily, initial potential energy is</p>
<p><span class="math-container">$$ \int_0^{100} 2gy\ dy=10000 $$</span></p>
<p>Final potential energy</p>
<p><span class="math-container">$$\int_{80}^{100} 2gy \ dy + \int_0^{80} 200g dy=3600+16000 $$</span></p>
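<p>Both computations can be checked numerically; the sketch below (my addition) takes $g=1$, consistent with the $9600$ figure above:</p>

```python
# Simple trapezoid rule; it is exact (up to rounding) for the linear
# integrands appearing in this problem.
def trapezoid(f, a, b, steps=10_000):
    h = (b - a) / steps
    return h * (f(a) / 2 + sum(f(a + i * h) for i in range(1, steps)) + f(b) / 2)

# Work done lifting the cable 80 ft, with g = 1.
work = trapezoid(lambda y: 2 * (100 - y), 0, 80)

# Energy bookkeeping: initial vs final potential energy.
initial_pe = trapezoid(lambda y: 2 * y, 0, 100)
final_pe = trapezoid(lambda y: 2 * y, 80, 100) + 2 * 80 * 100

assert abs(work - 9600) < 1e-4
assert abs((final_pe - initial_pe) - work) < 1e-4
```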
|
4,389,441 | <p>I'm dealing with a sample problem where I want to work out the probability of a fair coin toss landing <em>heads</em> and a fair die roll landing <em>6</em>. We are then told that <strong>at least</strong> one of those events has happened.</p>
<p>Why is the probability of this not as simple as <em>P(C)P(D) = 0.083</em>? How would one work out the correct probability using the Bayes theorem?</p>
<p>Thanks</p>
| FOE | 1,018,997 | <p>It is clear that <span class="math-container">$P(C \cap D)= P(C)P(D)$</span>. However, that is not the question you are asking in this case; your question here is
<span class="math-container">$P(C \cap D \mid \text{at least one of the events happened})$</span>. In this new probability space the events are not independent:
<span class="math-container">$$P(C \cap D \mid \text{at least one of the events happened} )=P(C \cap D \cap \{\text{at least one of the events happened}\} ) \cdot \frac{1 }{P(\text{at least one of the events happened}) }$$</span>
The first term is <span class="math-container">$P(C\cap D)=\frac{1}{12}\approx 0.083$</span> (note that <span class="math-container">$C\cap D$</span> is contained in the event that at least one of them happened); the second term is the reciprocal of
<span class="math-container">$$ P(C \cup D)=P(C)+P(D)-P(C \cap D)=\frac{1}{2} + \frac{1}{6}- \frac{1}{12}=\frac{7}{12},$$</span> so the answer is <span class="math-container">$\frac{1/12}{7/12}=\frac{1}{7}$</span>.</p>
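<p>The value $\frac{1/12}{7/12}=\frac17$ can be confirmed both by exact enumeration of the $12$ equally likely (coin, die) outcomes and by simulation; the sketch below is my addition:</p>

```python
import random
from fractions import Fraction

# Exact answer by enumerating the 12 equally likely (coin, die) outcomes.
outcomes = [(c, d) for c in ("H", "T") for d in range(1, 7)]
at_least_one = [o for o in outcomes if o[0] == "H" or o[1] == 6]
both = [o for o in outcomes if o[0] == "H" and o[1] == 6]
exact = Fraction(len(both), len(at_least_one))   # 1/7

# Monte Carlo sanity check of the same conditional probability.
random.seed(0)
hits = trials = 0
for _ in range(200_000):
    c, d = random.choice("HT"), random.randint(1, 6)
    if c == "H" or d == 6:
        trials += 1
        hits += (c == "H" and d == 6)
```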
|
3,421,858 | <p><span class="math-container">$\sqrt{2}$</span> is irrational using proof by contradiction.</p>
<p>say <span class="math-container">$\sqrt{2}$</span> = <span class="math-container">$\frac{a}{b}$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are positive integers. </p>
<p><span class="math-container">$b\sqrt{2}$</span> is an integer. ----[Understood]</p>
<p>Let <span class="math-container">$b$</span> denote the smallest such positive integer.----[My understanding of this is </p>
<p>that were are going to assume b is the smallest possible integer such that <span class="math-container">$\sqrt{2}$</span> = <span class="math-container">$\frac{a}{b}$</span>, ... Understood]</p>
<p>Then <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span>----[I'm not sure I understand the point that's being made here,</p>
<p>from creating a variable <span class="math-container">$b^{*} = a - b$</span> ]</p>
<p>Next, <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span> is a positive integer such that <span class="math-container">$b^{*}\sqrt{2}$</span> is an integer.----[ I get that (<span class="math-container">$a - b$</span>) has to</p>
<p>be a positive integer, why does it follow that then <span class="math-container">$b^{*}\sqrt{2}$</span> is an integer?]</p>
<p>Lastly, <span class="math-container">$b^{*}<b$</span>, which is a contradiction.----[I can see that given <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span>, we then have </p>
<p><span class="math-container">$b^{*}<b$</span>, I don't get how that creates a contradiction]</p>
<p>Any help is appreciated, thank you. </p>
| Randall | 464,495 | <p><span class="math-container">$b$</span> is selected to be the smallest positive integer so that <span class="math-container">$b\sqrt{2}$</span> is an integer. One such <span class="math-container">$b$</span> exists by assumption, so just pick the smallest.</p>
<p>Next, <span class="math-container">$b^*\sqrt{2} = b(\sqrt{2}-1)\sqrt{2} = 2b-b\sqrt{2}$</span> is a difference of two integers, so it is an integer. Now clearly <span class="math-container">$b^* < b$</span>, yet <span class="math-container">$b$</span> was supposed to be the smallest <span class="math-container">$x$</span> with <span class="math-container">$x\sqrt{2}$</span> an integer, and the construction of <span class="math-container">$b^*$</span> contradicts this.</p>
|
3,421,858 | <p><span class="math-container">$\sqrt{2}$</span> is irrational using proof by contradiction.</p>
<p>say <span class="math-container">$\sqrt{2}$</span> = <span class="math-container">$\frac{a}{b}$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are positive integers. </p>
<p><span class="math-container">$b\sqrt{2}$</span> is an integer. ----[Understood]</p>
<p>Let <span class="math-container">$b$</span> denote the smallest such positive integer.----[My understanding of this is </p>
<p>that were are going to assume b is the smallest possible integer such that <span class="math-container">$\sqrt{2}$</span> = <span class="math-container">$\frac{a}{b}$</span>, ... Understood]</p>
<p>Then <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span>----[I'm not sure I understand the point that's being made here,</p>
<p>from creating a variable <span class="math-container">$b^{*} = a - b$</span> ]</p>
<p>Next, <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span> is a positive integer such that <span class="math-container">$b^{*}\sqrt{2}$</span> is an integer.----[ I get that (<span class="math-container">$a - b$</span>) has to</p>
<p>be a positive integer, why does it follow that then <span class="math-container">$b^{*}\sqrt{2}$</span> is an integer?]</p>
<p>Lastly, <span class="math-container">$b^{*}<b$</span>, which is a contradiction.----[I can see that given <span class="math-container">$b^{*}$</span> := <span class="math-container">$b(\sqrt{2} - 1)$</span>, we then have </p>
<p><span class="math-container">$b^{*}<b$</span>, I don't get how that creates a contradiction]</p>
<p>Any help is appreciated, thank you. </p>
| Alberto Saracco | 715,058 | <p>Usually the proof goes along this lines.</p>
<p>Suppose by contradiction <span class="math-container">$\sqrt{2}=\frac{a}{b}$</span> with <span class="math-container">$a,b\in\mathbb N$</span>. We can assume the fraction to be reduced, in particular <span class="math-container">$a,b$</span> are not both even. </p>
<p>Then, by squaring the identity:
<span class="math-container">$$2b^2=a^2$$</span></p>
<p>Thus <span class="math-container">$a^2$</span> is even (and so is <span class="math-container">$a$</span>). Let <span class="math-container">$a=2c$</span> with <span class="math-container">$c\in\mathbb N$</span>, and substitute into the previous.
<span class="math-container">$$2b^2=4c^2$$</span>
As before, this implies <span class="math-container">$b^2$</span> (and <span class="math-container">$b$</span>) is even. Contradiction.</p>
<p>This seems a more straightforward proof of the irrationality of <span class="math-container">$\sqrt2$</span>.</p>
<hr>
<p>As for the proof you have, you know <span class="math-container">$b^\ast$</span> to be a natural number since it is <span class="math-container">$a-b$</span>. On the other hand <span class="math-container">$\sqrt{2}b^\ast=b(2-\sqrt{2})=2b-\sqrt{2}b=2b-a$</span>.</p>
|
703,031 | <p>In a sequence of integers, $A(n)=A(n-1)-A(n-2)$, where $A(n)$ is the $n$th term in the sequence, $n$ is an integer and $n\ge3$,$A(1)=1$,$A(2)=1$, calculate $S(1000)$, where $S(1000)$ is the sum of the first $1000$ terms.</p>
<p>How to approach these type of questions? Which topics should I study?</p>
| Barry Cipra | 86,747 | <p>A generally good way to approach problems like this is to "experiment" with the formulas to see what they say. That is, plug in numbers and calculate:</p>
<p>$$\begin{align}
A(1)&=1 & S(1)&=A(1)=1\\
A(2)&=1 & S(2)&=S(1)+A(2)=1+1=2\\
A(3)&=A(2)-A(1)=1-1=0 & S(3)&=S(2)+A(3)=2+0=2\\
A(4)&=A(3)-A(2)=0-1=-1 & S(4)&=S(3)+A(4)=2-1=1\\
A(5)&=A(4)-A(3)=-1-0=-1 & S(5)&=S(4)+A(5)=1-1=0\\
A(6)&=A(5)-A(4)=-1-(-1)=0 & S(6)&=S(5)+A(6)=0+0=0\\
A(7)&=A(6)-A(5)=0-(-1)=1 & S(7)&=S(6)+A(7)=0+1=1\\
A(8)&=A(7)-A(6)=1-0=1 & S(8)&=S(7)+A(8)=1+1=2\\
\end{align}$$</p>
<p>Now if you need to, keep going with this, computing $A(9)$ and $S(9)$, $A(10)$ and $S(10)$, and so forth. If you do, you'll see that the numbers you get simply repeat what you've already calculated. That is, the sequences of $A$'s is simply $1,1,0,-1,-1,0,1,1,0,-1,-1,0,1,1,\ldots$, and the $S$'s are $1,2,2,1,0,0,1,2,2,1,0,0,1,2,\ldots$. </p>
<p>If you count, you see everything repeats every $6$ steps. Consequently,</p>
<p>$$S(1000)=S(994)=S(988)=S(982)=\cdots=S(4)=1$$</p>
<p>(It stops at $4$ because when you divide $1000$ by $6$ you get a remainder of $4$. That is, $1000-6\cdot166=4$.)</p>
<p>There are fancier ways to do all this (in particular, as 5xum pointed out, the sequence of $S$'s is a telescoping sum, so one can see that $S(1000)=A(999)+A(2)$, thus reducing the problem to simply seeing the pattern of repetitions in the $A$'s), but when you're just getting started, it's always a good idea to look for patterns in the number themselves.</p>
|