qid (int64) | question (string) | author (string) | author_id (int64) | answer (string) |
|---|---|---|---|---|
2,962,377 | <p>Rewrite <span class="math-container">$f(x,y) = 1-x^2y^2$</span> as a product <span class="math-container">$g(x) \cdot h(y)$</span> (both arbitrary functions)</p>
<p>To make clearer what I'm talking about, I will give an example.</p>
<p>Rewrite <span class="math-container">$f(x,y) = 1+x-y-xy$</span> as <span class="math-container">$g(x)h(y)$</span></p>
<p>If we choose <span class="math-container">$g(x) = (1+x)$</span> and <span class="math-container">$h(y) = (1-y)$</span> we have</p>
<p><span class="math-container">$f(x,y) = g(x) h(y) \implies (1+x-y-xy) = (1+x)(1-y)$</span></p>
<p>I'm trying to do the same with <span class="math-container">$f(x,y) = 1-x^2y^2 = (1-xy)(1+xy)$</span>.</p>
<p>New question:</p>
<blockquote>
<p>Is there also a contradiction for <span class="math-container">$f(x,y) = \frac{xy}{1-x^2y^2}$</span> ? Or it's possible to write <span class="math-container">$f(x,y) $</span> as <span class="math-container">$g(x)h(y)$</span> ?</p>
</blockquote>
| Jonathan | 606,065 | <p>Restricting attention to a quadratic ansatz, let <span class="math-container">$f(x) = ax^2 + b$</span> and <span class="math-container">$g(y) = cy^2 + d$</span>. Now
<span class="math-container">$$
1-x^2y^2 = f(x)g(y) = acx^2y^2 + adx^2 + cby^2 + bd
$$</span>
By comparing terms we obtain:
<span class="math-container">\begin{align}
ad&=0\\
cb&=0\\
ac&=-1\\
bd&=1
\end{align}</span>
By <span class="math-container">$bd=1$</span> we have <span class="math-container">$b,d\neq 0$</span>, so <span class="math-container">$ad=0$</span> and <span class="math-container">$cb=0$</span> force <span class="math-container">$a=c=0$</span>, contradicting <span class="math-container">$ac=-1$</span>. Hence this problem has no solution.</p>
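The impossibility is not limited to the quadratic ansatz: any separable $f(x,y)=g(x)h(y)$ must satisfy $f(x_1,y_1)f(x_2,y_2)=f(x_1,y_2)f(x_2,y_1)$ at every pair of points. A small Python check (my own sketch, not part of the original answer) shows that $1-x^2y^2$ violates this criterion, while the worked example $1+x-y-xy$ passes it at the same points:

```python
# Separability criterion: if f(x, y) = g(x) * h(y), then for any two points
# f(x1, y1) * f(x2, y2) == f(x1, y2) * f(x2, y1).
def separable_at(f, x1, y1, x2, y2):
    return f(x1, y1) * f(x2, y2) == f(x1, y2) * f(x2, y1)

f1 = lambda x, y: 1 + x - y - x * y   # equals (1 + x)(1 - y), separable
f2 = lambda x, y: 1 - x**2 * y**2     # claimed non-separable

print(separable_at(f1, 0, 0, 2, 3))   # True
print(separable_at(f2, 0, 0, 2, 3))   # False: 1*(-35) != 1*1
```

A single violating pair of points already rules out every possible choice of $g$ and $h$, not just polynomial ones.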
|
2,747,074 | <p>I've seen multiple ways on how to solve it online (and most likely the majority are wrong), but I don't know how to solve this fully so that I could get the full grade during my exam.</p>
<p>The question is as follows:</p>
<blockquote>
<p>Prove that if $\sum a_n$ is convergent with $a_n > 0$ for all $n$,
then $\sum a^2_n$ is also convergent.</p>
</blockquote>
<p>I understand that the starting point to prove this is that $\lim a_n =0$, but after that I really don't know what I'm supposed to say.</p>
| J.G. | 56,861 | <p>For any $\epsilon >0$, there exists an $N$ such that all $n>N$ satisfy $|a_n|<\epsilon$. Hence $|a_n|^2<\epsilon |a_n|$ for $n>N$, so $\sum a_n^2$ converges by comparison with $\epsilon\sum a_n$.</p>
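As a numerical illustration (my own sketch, assuming the concrete choice $a_n = 1/n^2$), the termwise-squared series indeed converges, to the known value $\sum 1/n^4 = \pi^4/90$:

```python
import math

# Take the convergent positive series a_n = 1/n^2; its termwise square
# sum(1/n^4) should converge, here to the known value pi^4 / 90.
N = 10_000
partial = sum(1.0 / n**4 for n in range(1, N + 1))
print(partial)   # close to pi^4 / 90 ≈ 1.0823232
```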
|
2,075,712 | <p>I want to show that if $A=[0,1)$ then its interior is $(0,1)$. I know that $int(A) \subset A$, and that $\forall x \in int(A)$ $\exists R>0 $ such that $B(x,R) \subset A$. Thus immediately we see that $0 \notin int(A)$ because $\not \exists R>0$ such that $B(0,R)\subset A$.</p>
<p>What I struggle to do is to show that the final set is equal to $(0,1)$.</p>
| Fawad | 369,983 | <p>We have
\begin{align}
(r_1-r)(r_2-r)(r_3-r)&= \left(\frac{\Delta}{s-a} - \frac{\Delta}{s}\right)\left(\frac{\Delta}{s-b} - \frac{\Delta}{s}\right)\left(\frac{\Delta}{s-c}-\frac{\Delta}{s}\right)\\
&={\Delta}^3 \left(\frac{ abc}{s^3(s-a)(s-b)(s-c)}\right)\\
&=\frac{abc{\Delta}^2}{s^2\Delta}\\
&=4R{\left(\frac{\Delta}{s}\right)}^2\\
&=4Rr^2.
\end{align}</p>
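The identity $(r_1-r)(r_2-r)(r_3-r)=4Rr^2$ can be verified exactly on a concrete triangle, e.g. the 3-4-5 right triangle (a sketch of mine using exact rational arithmetic):

```python
from fractions import Fraction as F

# 3-4-5 right triangle: s = 6, area (Heron) Delta = 6.
a, b, c = F(3), F(4), F(5)
s = (a + b + c) / 2
delta = F(6)                                   # area; Heron gives Delta^2 = 36
assert delta**2 == s * (s - a) * (s - b) * (s - c)

r  = delta / s            # inradius
r1 = delta / (s - a)      # exradii
r2 = delta / (s - b)
r3 = delta / (s - c)
R  = a * b * c / (4 * delta)   # circumradius

lhs = (r1 - r) * (r2 - r) * (r3 - r)
rhs = 4 * R * r**2
print(lhs, rhs)   # both equal 10
```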
|
3,996,790 | <p>I just realized that there may be a case where L'Hopital's rule fails, specifically</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which evaluates to an indeterminate form, specifically <span class="math-container">$\frac{\infty}{\infty}$</span>. Sure, we can cancel the <span class="math-container">$e^x$</span>s, but when we use L'Hopital's, we get</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{(e^x)^\prime}{(e^x)^\prime}$$</span></p>
<p>Since the derivative of <span class="math-container">$e^x$</span> is <span class="math-container">$e^x$</span>, we have</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which is our original limit. Therefore, L'Hopital's fails to work in this example.</p>
<p>Question: Does L'Hopital's rule actually fail in this example, or am I understanding it wrong?</p>
<p>Edit: I mean "fails" in which it does not make progress toward a determinate result.</p>
| Community | -1 | <p>No. L'Hopital does not fail.</p>
<p>The L'Hopital rule says that (under some assumptions) if <span class="math-container">$\lim_{x\to\infty}\frac{f'(x)}{g'(x)}=L$</span> then
<span class="math-container">$$
\lim_{x\to\infty}\frac{f(x)}{g(x)}=L
$$</span></p>
<p>In your example,
<span class="math-container">$$
\lim_{x\to\infty}\frac{e^x}{e^x}=1
$$</span></p>
<p>One should not blame L'Hopital if one does not know how to calculate <span class="math-container">$\lim_{x\to\infty}\frac{f'(x)}{g'(x)}$</span> when it does exist.</p>
|
3,996,790 | <p>I just realized that there may be a case where L'Hopital's rule fails, specifically</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which evaluates to an indeterminate form, specifically <span class="math-container">$\frac{\infty}{\infty}$</span>. Sure, we can cancel the <span class="math-container">$e^x$</span>s, but when we use L'Hopital's, we get</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{(e^x)^\prime}{(e^x)^\prime}$$</span></p>
<p>Since the derivative of <span class="math-container">$e^x$</span> is <span class="math-container">$e^x$</span>, we have</p>
<p><span class="math-container">$$ \lim_{x \to \infty} \frac{e^x}{e^x}$$</span></p>
<p>which is our original limit. Therefore, L'Hopital's fails to work in this example.</p>
<p>Question: Does L'Hopital's rule actually fail in this example, or am I understanding it wrong?</p>
<p>Edit: I mean "fails" in which it does not make progress toward a determinate result.</p>
| J.G. | 56,861 | <p>The rule lets us write <span class="math-container">$\lim_{x\to a}\frac{f}{g}=\lim_{x\to a}\frac{hf}{hg}=\lim_{x\to a}\frac{(hf)^\prime}{(hg)^\prime}$</span>, so sometimes you need a non-constant <span class="math-container">$h$</span> to make progress.</p>
<p>With examples of the form <span class="math-container">$f/f$</span>, we can take <span class="math-container">$h=x^k/f$</span>, where the choice of <span class="math-container">$k$</span> is a context-dependent exercise.</p>
<p>More interesting is when we need to transform. For example, consider <span class="math-container">$\lim_{x\to0^+}\frac{e^{-1/x}}{x}$</span>, which after <span class="math-container">$n$</span> iterations gives <span class="math-container">$\lim_{x\to0^+}\frac{e^{-1/x}}{n!x^{n+1}}$</span>, not helpful at all. But <span class="math-container">$y:=\frac1x$</span> converts it to <span class="math-container">$\lim_{y\to\infty}\frac{y}{e^y}=\lim_{y\to\infty}\frac{1}{e^y}=0$</span>.</p>
<p>Since such a transformation changes the variable with respect to which we differentiate, by the chain rule it uses the more general <span class="math-container">$\lim_{x\to a}\frac{f}{g}=\lim_{x\to a}\frac{j(hf)^\prime}{j(hg)^\prime}$</span>.</p>
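The value $\lim_{x\to0^+} e^{-1/x}/x = 0$ claimed above (where naive repeated L'Hopital stalls) is easy to corroborate numerically; a quick sketch of mine:

```python
import math

# e^{-1/x} decays far faster than any power of x as x -> 0+,
# so e^{-1/x}/x -> 0 even though naive L'Hopital iterations make no progress.
for x in (0.5, 0.1, 0.05, 0.02):
    print(x, math.exp(-1.0 / x) / x)

# At x = 0.02 the ratio is e^{-50}/0.02, roughly 1e-20: effectively zero.
```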
|
3,941,728 | <p>I know that every function from the empty set to any other set is the empty function.</p>
<p>I also know that there is no function to the empty set from any other set.</p>
<p>Now, what if both the domain and codomain are empty? Would such a function exist?</p>
| NirF | 782,124 | <p>You can take this approach: <br>
<span class="math-container">$$A_n=\begin{bmatrix}
0&1&2& .&.&. &n-1\\
1&0&1&2& .&.&n-2 \\
2&1&0&1&.&.&. \\
.&.&.&.&.&.&. \\
.&.&.&.&.&.&2 \\
.&.&.&.&.&.&1 \\
n-1&n-2&.&.&2&1&0
\end{bmatrix} $$</span>
Now, add the last column to the first one, notice it will always be equal to <span class="math-container">$n-1$</span>. <span class="math-container">$(C_1 = C_1+C_n)$</span></p>
<p><span class="math-container">$$\begin{bmatrix}
n-1&1&2& .&.&. &n-1\\
n-1&0&1&2& .&.&n-2 \\
n-1&1&0&1&.&.&. \\
.&.&.&.&.&.&. \\
.&.&.&.&.&.&2 \\
.&.&.&.&.&.&1 \\
n-1&n-2&.&.&2&1&0
\end{bmatrix} =(n-1)\begin{bmatrix}
1&1&2& .&.&. &n-1\\
1&0&1&2& .&.&n-2 \\
1&1&0&1&.&.&. \\
.&.&.&.&.&.&. \\
.&.&.&.&.&.&2 \\
.&.&.&.&.&.&1 \\
1&n-2&.&.&2&1&0
\end{bmatrix}$$</span>
From here, we can do as follows: <br>
Working from the last row up to the second, subtract from each row the row above it
(<span class="math-container">$\forall i \neq 1, R_i=R_i-R_{i-1}$</span>, starting with <span class="math-container">$i=n$</span>, then <span class="math-container">$i=n-1, \ldots, i=2$</span>)
<span class="math-container">$$ = (n-1)\begin{bmatrix}
1&1&2& .&.&. &n-1\\
0&-1&-1&-1& .&.&-1 \\
0&1&-1&-1&.&.&. \\
.&.&.&.&.&.&. \\
.&.&.&.&.&.&-1 \\
.&.&.&.&.&.&-1 \\
0&1&.&.&1&1&-1
\end{bmatrix}$$</span>
Expand <span class="math-container">$C_1$</span>:
<span class="math-container">$$ = (n-1)\begin{bmatrix}
-1&-1&-1& .&.&-1 \\
1&-1&-1&.&.&. \\
.&.&.&.&.&. \\
.&.&.&.&.&-1 \\
.&.&.&.&.&-1 \\
1&.&.&1&1&-1
\end{bmatrix}$$</span>
Now Add the first row to all of the other rows (<span class="math-container">$\forall i \neq 1, R_i = R_i + R_1$</span>)
<span class="math-container">$$ = (n-1)\begin{bmatrix}
-1&-1&-1& .&.&-1 \\
0&-2&-2&.&.&. \\
.&.&-2&.&.&. \\
.&.&.&.&.&-2 \\
.&.&.&.&.&-2 \\
0&.&.&0&0&-2
\end{bmatrix} = (n-1)\cdot(-1)\cdot(-2)^{n-2}$$</span></p>
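The closed form $\det A_n = (n-1)\cdot(-1)\cdot(-2)^{n-2}$ can be checked against a direct computation (a sketch of mine using exact fraction arithmetic; for $n=2$ the formula reads $1\cdot(-1)\cdot 1 = -1$):

```python
from fractions import Fraction

def det(M):
    """Determinant by fraction-exact Gaussian elimination with pivoting."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        prod *= M[col][col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= factor * M[col][c]
    return sign * prod

# A_n is the "distance matrix" with entries |i - j|.
for n in range(2, 8):
    A = [[abs(i - j) for j in range(n)] for i in range(n)]
    assert det(A) == (n - 1) * (-1) * (-2) ** (n - 2)
print("formula verified for n = 2..7")
```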
|
1,013,776 | <p>I am looking at the following <strong>Theorem</strong>:</p>
<p>Let $\phi$ be a formula. Suppose there is a set $Y$ such that $\forall x(\phi(x) \to x \in Y)$. Then the set $\{ x: \phi(x) \}$ exists.</p>
<p>and I try to understand its proof.</p>
<p>From the axiom shema of specification, there is the set $V=\{ x \in Y: \phi(x) \}$</p>
<p>$$x \in V \leftrightarrow (x \in Y \wedge \phi(x))$$</p>
<p>How can we continue, in order to show that $\{ x: \phi(x) \}$ is a set?</p>
| ajotatxe | 132,456 | <p>$\{x:\phi(x)\}=V$, and you have proved that $V$ is a set.</p>
|
3,061,277 | <p>This concept I have asked a few people, but none of them are able to help me understand, so hope that there's a hero can save me from this problem!!!</p>
<p>My question occurs during the substitution process: for example, sometimes we let <span class="math-container">$x = π - u$</span>. Then, after some manipulation, we let <span class="math-container">$x = u$</span> and integrate the expression we want to integrate. That doesn't seem intuitive to me; isn't it that we are changing the definition of x by omitting the rules of arithmetic? <span class="math-container">$x = u \implies x = π - x$</span>. Why doesn't that operation affect the integration result?</p>
<p>Here is a concrete example illustrate my question</p>
<p><span class="math-container">$$ \int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx $$</span>
then we let <span class="math-container">$x = π - u \implies dx = -du$</span>
<span class="math-container">$$= \int^{π}_{0} \frac{(π - u)\sin(u)}{(1+\cos^2(u))} du$$</span>
<span class="math-container">$$= \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du - \int^{π}_{0} \frac{u\sin(u)}{(1+\cos^2(u))} du$$</span>
<strong>And here (downward) is the part that I don't understand! (the <span class="math-container">$x$</span> in <span class="math-container">$x\sin(x)$</span> on the RHS)</strong></p>
<p>we then let <span class="math-container">$x = u$</span>
<span class="math-container">$$\int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx = \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du - \int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx$$</span></p>
<p>move the rightmost guy to LHS and integrate RHS, solve the problem.</p>
<p><span class="math-container">$$2\int^{π}_{0} \frac{x\sin(x)}{(1+\cos^2(x))} dx = \int^{π}_{0} \frac{π\sin(u)}{(1+\cos^2(u))} du$$</span></p>
<p>Why can we let <span class="math-container">$x = u$</span>? Isn't it that we gave it the value <span class="math-container">$\pi - u$</span> in the beginning?</p>
| Aniruddha Deshmukh | 438,367 | <p>This is called, not in the most humble manner, the "abuse" of notations. Let us go step by step in understanding what happened.</p>
<p>First, for the integral <span class="math-container">$\int\limits_{0}^{\pi} \dfrac{x \sin x}{1 + \cos^2 x} \ dx$</span>, we put <span class="math-container">$x = \pi - u$</span>, where since <span class="math-container">$x$</span> varies from <span class="math-container">$0$</span> to <span class="math-container">$\pi$</span>, <span class="math-container">$u$</span> varies from <span class="math-container">$\pi$</span> to <span class="math-container">$0$</span>. And from the exact differential term <span class="math-container">$dx = -du$</span>, we can reverse the limits of integration so that it becomes <span class="math-container">$\int\limits_{0}^{\pi} \dfrac{\left( \pi - u \right) \sin u}{1 + \cos^2 u} \ du$</span> (Here, some trigonometric formulae are also used). After this we split the numerator to obtain the two integrals.</p>
<p>Well, this was pretty easy! Now comes the step where the problem arises. One of the two integrals is <span class="math-container">$\int\limits_{0}^{\pi} \dfrac{u \sin u}{1 + \cos^2 u} \ du$</span>. Now, let us look at what we have in this integral. First, the limits are from <span class="math-container">$0$</span> to <span class="math-container">$\pi$</span>, just as in the original integral. Next, the integrand is the same as in the original integral except that we now have the variable <span class="math-container">$u$</span> instead of <span class="math-container">$x$</span>.</p>
<p>But, when it comes to integration, it does not really matter what the variable's "name" is. What matters is that integrand and the limits of integrations. So, the integral will be the same if we call the variable as <span class="math-container">$u$</span> or <span class="math-container">$x$</span>, or for that matter any other name!</p>
<p>Hence, we get <span class="math-container">$\int\limits_{0}^{\pi} \dfrac{u \sin u}{1 + \cos^2 u} \ du = \int\limits_{0}^{\pi} \dfrac{x \sin x}{1 + \cos^2 x} \ dx$</span>.</p>
<p>In other words, we DO NOT "let <span class="math-container">$x = u$</span>" in that step. Rather, we use facts from calculus to conclude the above mentioned equality. Since these two integrals are the same, it does not matter what variable we use and we rather replace <span class="math-container">$u$</span> by <span class="math-container">$x$</span> in the second integral to get the answer.</p>
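As a concrete payoff of this dummy-variable argument, the question's manipulation gives $2I = \pi\int_0^\pi \frac{\sin u}{1+\cos^2 u}\,du = \pi\cdot\frac{\pi}{2}$, so $I = \pi^2/4$. A quick midpoint-rule sanity check (my own sketch):

```python
import math

def midpoint(f, a, b, n=200_000):
    """Composite midpoint rule for a smooth integrand on [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

f = lambda x: x * math.sin(x) / (1 + math.cos(x) ** 2)
I = midpoint(f, 0.0, math.pi)
print(I, math.pi**2 / 4)   # both ≈ 2.4674
```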
|
3,831,310 | <p>I try to integrate</p>
<p><span class="math-container">$$\int \frac {dv}{\frac {-c}{m}v^2 - g \sin \theta}$$</span></p>
<p>I did substituted <span class="math-container">$u = \frac{c}{m}$</span> and <span class="math-container">$w = g \sin \theta$</span> to get</p>
<p><span class="math-container">$$-\int \frac {dv}{uv^2 + w}$$</span></p>
<p>I'm wondering if I have to do a second substitution. To be honest, I don't know if I can do that or how to do that. Furthermore, maybe I have to rearrange to get <span class="math-container">$\frac1{1+x^2}$</span></p>
| Alessio K | 702,692 | <p>You have <span class="math-container">$$-\int \frac{dv}{w+uv^2}=-\frac{1}{w}\int\frac{dv}{1+(v\sqrt{\frac{u}{w}})^2}$$</span></p>
<p>Now consider the substitution <span class="math-container">$v\sqrt{\frac{u}{w}}=\tan(x)$</span>.</p>
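Carrying the hint to completion (my own working, not part of the answer), the antiderivative of $\frac{1}{w+uv^2}$ is $\frac{1}{\sqrt{uw}}\arctan\!\big(v\sqrt{u/w}\big)$. A numerical derivative check at sample constants:

```python
import math

u, w = 2.0, 3.0                       # sample positive constants
F = lambda v: math.atan(v * math.sqrt(u / w)) / math.sqrt(u * w)
f = lambda v: 1.0 / (w + u * v * v)   # integrand of ∫ dv / (w + u v^2)

# Central-difference derivative of F should reproduce the integrand.
h = 1e-6
for v in (-1.5, -0.3, 0.0, 0.7, 2.0):
    dF = (F(v + h) - F(v - h)) / (2 * h)
    assert abs(dF - f(v)) < 1e-8
print("F'(v) matches 1/(w + u v^2) at all sample points")
```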
|
1,915,455 | <p>First of all they are all even... So it is at least $2$. But is it bigger than that? How do I find out?</p>
| SchrodingersCat | 278,967 | <p>Consider that all possible divisors of $n$ can be created by choosing from $p_1,p_2, \ldots, p_k$ in appropriate numbers.</p>
<p>So, for creating a particular divisor, we can choose $1$ $p_1$ or $2$ $p_1$'s or $3$ $p_1$'s and so on till a choice of all $a$ $p_1$'s. Then again we have the choice of not choosing any $p_1$ at all. So this amounts to $a+1$ choices.</p>
<p>Similarly for $p_2$, we have $b+1$ choices and in this way we can finally conclude that, for any $p_r$, $r=1,2,3 \ldots k$, we have $t'+1$ choices where $p_r$ is raised to the power $t'$ in the prime factorisation of $n$.</p>
<p>Finally since all $p_i$'s are distinct, we need to multiply the choices to get $d(n)$.</p>
<p>So total number of divisors possible $=d(n)=(a+1)(b+1)\ldots (m+1)$</p>
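The formula $d(n)=(a+1)(b+1)\cdots(m+1)$ is easy to cross-check against a brute-force divisor count (a small sketch of mine):

```python
from collections import Counter

def divisor_count_brute(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def divisor_count_formula(n):
    # Factor n by trial division, then multiply (exponent + 1) over the primes.
    exps = Counter()
    d, m = 2, n
    while d * d <= m:
        while m % d == 0:
            exps[d] += 1
            m //= d
        d += 1
    if m > 1:
        exps[m] += 1
    result = 1
    for e in exps.values():
        result *= e + 1
    return result

for n in (12, 36, 97, 360, 1001):
    assert divisor_count_brute(n) == divisor_count_formula(n)
print(divisor_count_formula(360))   # 360 = 2^3 * 3^2 * 5, so 4*3*2 = 24
```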
|
3,696,265 | <p>This is from Vakil's FOAG: exercise 2.5 C, part b. I understand how objects in the extension sheaf from a sheaf on a base <span class="math-container">$\mathcal B$</span> of a topology are created, but I am having trouble understanding how to produce a morphism of sheaves given a morphism of sheaves on a base. </p>
<p>Assume we have topological space <span class="math-container">$X$</span>. Supposing we have two sheaves on our base <span class="math-container">$\mathcal B$</span>, say <span class="math-container">$F$</span> and <span class="math-container">$G$</span>, and maps <span class="math-container">$F(B_i) \to G(B_i)$</span> for all <span class="math-container">$B_i \in \mathcal B$</span>, these induce maps between any stalks <span class="math-container">$F_x \to G_x$</span> we like, and we know also for any <span class="math-container">$x \in X$</span>, that <span class="math-container">$F_x \simeq F^{ext}_x$</span>, where <span class="math-container">$F^{ext}$</span> is our extended sheaf (likewise for <span class="math-container">$G^{ext}$</span>). After this, I do not know how to proceed, nor do I know if I needed all of that information.</p>
| Felix Marin | 85,343 | <p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span>
<span class="math-container">\begin{align}
&\bbox[10px,#ffd]{\sum_{k = 0}^{\min\braces{i,j}}{i \choose k}\pars{-1}^{k}{i + j - k \choose i}} =
\sum_{k = 0}^{\min\braces{i,j}}{i \choose k}\pars{-1}^{k}
{i + j - k \choose j - k}
\\[5mm] = &\
\sum_{k = 0}^{\min\braces{i,j}}{i \choose k}\pars{-1}^{k}
{-i - 1 \choose j - k}\pars{-1}^{j - k} =
\pars{-1}^{j}\sum_{k = 0}^{\min\braces{i,j}}{i \choose k}
\bracks{z^{j - k}}\pars{1 + z}^{-i - 1}
\\[5mm] = &\
\pars{-1}^{j}\bracks{z^{j}}\pars{1 + z}^{-i - 1}
\sum_{k = 0}^{\min\braces{i,j}}{i \choose k}z^{k}
\\[5mm] = &\
\pars{-1}^{j}\bracks{z^{j}}\pars{1 + z}^{-i - 1}
\\[2mm] &\ \times
\braces{\bracks{i \leq j}\sum_{k = 0}^{i}{i \choose k}z^{k} +
\bracks{i > j}\bracks{\sum_{k = 0}^{i}
{i \choose k}z^{k} - \sum_{k = j + 1}^{i}{i \choose k}z^{k}}}
\\[5mm] = &\
\pars{-1}^{j}\
\underbrace{\bracks{z^{j}}\pars{1 + z}^{-i - 1}
\overbrace{\sum_{k = 0}^{i}{i \choose k}z^{k}}^{\ds{\pars{1 + z}^{i}}}}_{\ds{\pars{-1}^{j}}}\ -\
\underbrace{\bracks{i > j}\pars{-1}^{j}\color{red}{\bracks{z^{j}}z^{j + 1}}
\pars{1 + z}^{-i - 1}\sum_{k = 0}^{i - j + 1}{i \choose k}z^{k}}
_{\ds{\begin{array}{c}{\Large = 0} \\ \mbox{See the}\ \color{red}{red}\ \mbox{detail} \end{array}}}
\\[5mm] = \bbox[10px,#ffd,border:1px groove navy]{\large 1} \\ &\
\end{align}</span></p>
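The identity derived above, $\sum_{k=0}^{\min\{i,j\}}\binom{i}{k}(-1)^k\binom{i+j-k}{i}=1$, can be spot-checked directly (my own sketch):

```python
from math import comb

def alternating_sum(i, j):
    return sum(comb(i, k) * (-1) ** k * comb(i + j - k, i)
               for k in range(min(i, j) + 1))

# Exhaustively verify the identity on a small grid of (i, j).
for i in range(8):
    for j in range(8):
        assert alternating_sum(i, j) == 1
print("identity holds for all 0 <= i, j < 8")
```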
|
77,642 | <p>$$\frac{1}{\sin(z)} = \cot (z) + \tan (\tfrac{z}{2})$$</p>
<p>I did this: </p>
<p><strong>First attempt</strong>: $$\displaystyle{\frac{1}{\sin (z)} = \frac{\cos (z)}{\sin (z)} + \frac{\sin (\frac{z}{2})}{ \cos (\frac{z}{2})} = \frac{\cos (z) }{\sin (z)} + \frac{2\sin(\frac{z}{4})\cos(\frac{z}{4})}{\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4})}} = $$
$$\frac{\cos (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))+2\sin z \sin(\frac{z}{4})\cos(\frac{z}{4})}{\sin (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))}$$</p>
<p>Stuck.</p>
<p><strong>Second attempt</strong>: </p>
<p>$$\displaystyle{\frac{1}{\sin z} = \left(\frac{1}{2i}(e^{iz}-e^{-iz})\right)^{-1} = 2i\left(\frac{1}{e^{iz}-e^{-iz}}\right)}$$</p>
<p>Stuck.</p>
<p>Does anybody see a way to continue?</p>
| Sasha | 11,069 | <p>Let $w = \frac{z}{2}$. Then
$$
\cot(2w) + \tan(w) = \frac{\cos^2(w)-\sin^2(w)}{2 \sin(w) \cos(w)} + \frac{\sin(w)}{\cos(w)} = \frac{1}{\cos(w)} \left( \frac{\cos^2(w)-\sin^2(w) + 2 \sin^2(w)}{2 \sin(w)} \right)
$$
The numerator becomes 1, and we arrive at the result $\frac{1}{2 \sin(w) \cos(w)} = \frac{1}{\sin(2w)} = \frac{1}{\sin(z)}$.</p>
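A quick numeric confirmation of the target identity $\frac{1}{\sin z} = \cot z + \tan\frac{z}{2}$ at a few sample points (my own sketch):

```python
import math

# Check 1/sin(z) == cot(z) + tan(z/2) numerically on (0, pi).
for z in (0.3, 1.0, 2.0, 2.9):
    lhs = 1.0 / math.sin(z)
    rhs = math.cos(z) / math.sin(z) + math.tan(z / 2)
    assert abs(lhs - rhs) < 1e-12
print("1/sin(z) == cot(z) + tan(z/2) at all sample points")
```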
|
77,642 | <p>$$\frac{1}{\sin(z)} = \cot (z) + \tan (\tfrac{z}{2})$$</p>
<p>I did this: </p>
<p><strong>First attempt</strong>: $$\displaystyle{\frac{1}{\sin (z)} = \frac{\cos (z)}{\sin (z)} + \frac{\sin (\frac{z}{2})}{ \cos (\frac{z}{2})} = \frac{\cos (z) }{\sin (z)} + \frac{2\sin(\frac{z}{4})\cos(\frac{z}{4})}{\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4})}} = $$
$$\frac{\cos (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))+2\sin z \sin(\frac{z}{4})\cos(\frac{z}{4})}{\sin (z)(\cos^{2}(\frac{z}{4})-\sin^{2}(\frac{z}{4}))}$$</p>
<p>Stuck.</p>
<p><strong>Second attempt</strong>: </p>
<p>$$\displaystyle{\frac{1}{\sin z} = \left(\frac{1}{2i}(e^{iz}-e^{-iz})\right)^{-1} = 2i\left(\frac{1}{e^{iz}-e^{-iz}}\right)}$$</p>
<p>Stuck.</p>
<p>Does anybody see a way to continue?</p>
| N. S. | 9,176 | <p>$$ \frac{\cos (z)}{\sin (z)} + \frac{\sin (\frac{z}{2})}{ \cos (\frac{z}{2})} =\frac{\cos (z)\cos (\frac{z}{2})+ \sin(z)\sin (\frac{z}{2}) }{\sin (z)\cos (\frac{z}{2})} =\frac{\cos (z-\frac{z}{2})}{\sin (z)\cos (\frac{z}{2})}=\frac{\cos (\frac{z}{2})}{\sin (z)\cos (\frac{z}{2})}=\frac{1}{\sin (z)}$$</p>
|
255,230 | <p>A <em>complete linear hypergraph</em> is a <a href="https://en.wikipedia.org/wiki/Hypergraph" rel="nofollow noreferrer">hypergraph</a> $H=(V,E)$ such that </p>
<ol>
<li>$|e|\geq 2$ for all $e\in E$,</li>
<li>$|e_1\cap e_2|=1$ for all $e_1, e_2\in E$ with $e_1\neq e_2$, and</li>
<li>for all $v\in V$ we have $|\{e\in E:v\in e\}| \geq 2.$</li>
</ol>
<p>For $n>2$ set $\mathbb{N}_n =\{1,\ldots,n\}$ and $$\ell(n)=\min\{|E|: E\subseteq{\cal P}(\mathbb{N}_n) \text{ and }(\mathbb{N}_n, E) \text{ is complete linear}\},$$ so $\ell(n)$ is the <em>greatest lower bound</em> for the number of edges on a complete linear hypergraph on $n$ points.</p>
<p><strong>Question</strong>: What is the value of $\ell(n)$, depending on $n$?</p>
<hr>
<p>(Note concerning least upper bounds: If $H=(V,E)$ is a complete linear hypergraph with $|V|=n>2$, then $|E| \leq n$ by the <a href="https://en.wikipedia.org/wiki/De_Bruijn%E2%80%93Erd%C5%91s_theorem_(incidence_geometry)" rel="nofollow noreferrer">theorem of DeBruijn-Erdos</a>, and we can reach $|E| = n$ with the so-called "near pencil", see the same link.)</p>
| domotorp | 955 | <p>It is about $\sqrt n$.
If each edge has size at most $\sqrt n$, then you need at least $2n/\sqrt n=2\sqrt n$ edges to cover everything twice.
If there's an edge of size at least $\sqrt n$, then you need at least that many other edges to cover each of its points at least twice.
With some more involved argument of this kind, you might even get asymptotically $2\sqrt n$.</p>
|
2,511,061 | <p>I will prove 0=1.
We know, from the definition of the factorial, that
zero factorial is equal to one and
one factorial is equal to one.
So, 0! = 1!.
The factorial gets cancelled on both sides,
and we get 0 = 1.
Is this right?</p>
| marty cohen | 13,079 | <p>Factorial ("!") is not an expression - it is an operator.</p>
<p>For example,
$\sin(0) = \sin(\pi) = 0$
but you can't cancel "sin"
to get
$0 = \pi$.</p>
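The underlying point is that an operation can only be "cancelled" when it is injective, and neither the factorial nor $\sin$ is; a tiny illustration:

```python
import math

# Factorial is not injective: 0! == 1! == 1, yet 0 != 1.
assert math.factorial(0) == math.factorial(1) == 1

# Likewise sin is not injective: sin(0) == sin(pi) == 0, yet 0 != pi.
assert math.sin(0) == 0 and abs(math.sin(math.pi)) < 1e-15
print("equal outputs do not imply equal inputs for non-injective maps")
```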
|
638,906 | <p>Find the coefficient of $x$ in the expansion of $$\left(1-2x^3+3x^5\right)\left(1+\frac{1}{x}\right)^8.$$ Answer is $154$, but how?</p>
| lab bhattacharjee | 33,337 | <p>$$(1-2x^3+3x^5)\left(1+\frac1x\right)^8=\frac{(1-2x^3+3x^5)(1+x)^8}{x^8}$$</p>
<p>We need the coefficient of $x^9$ in $(1-2x^3+3x^5)(1+x)^8$, which is $-2\cdot\binom86+3\binom84=-56+210=154$</p>
|
2,625,719 | <p>I'm getting an equation $$(\ln y - x)\frac{dy}{dx} - y\ln y = 0$$</p>
<p>Which I try to factorize and bring over to get:
$$\frac{dy}{dx} = \frac{y\ln y}{(\ln y - x)}$$</p>
<p>But this cannot be factorized further. I intend to use either the substitution $u=y/x$ or an integrating factor, but this form makes it hard for me to try either method. How should I proceed?</p>
| David Quinn | 187,299 | <p>Hint...substitute $u=\ln y$ and you will arrive at a linear differential equation requiring an integrating factor.</p>
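Following the hint through (my own working, not part of the answer): viewing $x$ as a function of $y$, the equation becomes the linear ODE $\frac{dx}{dy}+\frac{x}{y\ln y}=\frac{1}{y}$, whose integrating factor $\ln y$ yields $x=\frac{\ln y}{2}+\frac{C}{\ln y}$. A numerical spot check that this satisfies the original equation:

```python
import math

C = 0.7   # arbitrary constant of integration

def x_of_y(y):
    u = math.log(y)
    return u / 2 + C / u

# The ODE (ln y - x) dy/dx = y ln y is equivalent to
# dx/dy = (ln y - x) / (y ln y); check via central differences.
h = 1e-6
for y in (1.5, 2.0, 3.0, 5.0):
    dx_dy = (x_of_y(y + h) - x_of_y(y - h)) / (2 * h)
    rhs = (math.log(y) - x_of_y(y)) / (y * math.log(y))
    assert abs(dx_dy - rhs) < 1e-7
print("x = ln(y)/2 + C/ln(y) satisfies the ODE at all sample points")
```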
|
1,910,030 | <p>Let $V$ be a vector space and $U,W$ be its subspaces. Prove that if the union $U\cup W$ is a subspace of $V$, then $W \subseteq U$ or $U \subseteq W$.</p>
<p>I'm not sure where to begin at all really.</p>
| true blue anil | 22,388 | <ul>
<li><p>Re why multiply, you better go through an elementary presentation on <a href="https://people.richland.edu/james/lecture/m170/ch05-rul.html" rel="nofollow">probability rules.</a></p></li>
<li><p>Regarding $44$, do you understand if it is written as $\binom{11}1\binom41$ ?</p></li>
</ul>
<hr>
<p><em>Added explanation</em></p>
<p>For why multiply, you could look at the $13$ ranks (ace through K) as types of bread (say) and the $4$ suits as the spread (butter, cheese, etc) on them !</p>
<p>The two pairs can have any two from the $13$ ranks: $\binom{13}2$</p>
<p><em>Each</em> of the two chosen ranks can be associated with any of the $4$ suits: $\binom42\binom42$</p>
<p>The fifth card can be from any of the remaining $11$ ranks, and any of the $4$ suits: $\binom{11}1\binom41$ </p>
<p>It would help you if you studied computations of other types of <a href="https://en.wikipedia.org/wiki/Poker_probability" rel="nofollow">poker</a> hands to get a hang of it.</p>
<hr>
<p><em>Why do we multiply ?</em></p>
<p>Say you want to know the number of ways <strong>one</strong> pair can be obtained. There are $13$ ranks, (A through K), so obviously we could choose any one rank from $13\;i.e.\;\binom{13}1$ ways. Suppose we choose K. We are not done yet. We also need to specify which two of the four suits they belong to, which is $\binom42 = 6$ ways, viz$\;$ KS-KH, $\;$ KS-KD, $\;$ KS-KC, $\;$ KH-KD, $\;$ KH-KC $\;$ and$\;$ KD-KC.</p>
<p>So the # of ways of forming a pair from <strong>any</strong> rank is $\binom{13}1\times\binom42$</p>
<p>You should be able to build on this, just remember that once you have chosen two pairs, only $11$ ranks are available from which the <em>single</em> can be chosen.</p>
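Putting the pieces together, the two-pair count is $\binom{13}{2}\binom{4}{2}\binom{4}{2}\binom{11}{1}\binom{4}{1}$; a quick check of mine against the standard total of 5-card hands:

```python
from math import comb

two_pair = comb(13, 2) * comb(4, 2) * comb(4, 2) * comb(11, 1) * comb(4, 1)
total = comb(52, 5)
print(two_pair)          # 123552
print(total)             # 2598960
print(two_pair / total)  # ≈ 0.0475, the familiar two-pair probability
```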
|
86,277 | <p>I have a question: does the Heine-Borel theorem hold for the space $\mathbb{R}^\omega$ (where $\mathbb{R}^\omega$ is the space of countable sequences of real numbers with the product topology). That is, prove that a subspace of $\mathbb{R}^\omega$ is compact if and only if it is the product of closed and bounded subspaces of $\mathbb{R}$ - or provide a counterexample.</p>
<p>I think it does not hold. But I can't come up with a counterexample!
Could anyone please help me with this? Thank you in advance. </p>
| Cheerful Parsnip | 2,941 | <p>The statement that a subspace of $\mathbb R^\omega$ is compact if and only if it is the product of closed and bounded subspaces of $\mathbb R$ is false even for $\mathbb R^2$. Take the "plus sign" subset $(\{0\}\times [-1,1])\cup ([-1,1]\times\{0\})$. It is compact but not a product of subsets of $\mathbb R$. This can be easily generalized to $\mathbb R^\omega$ via the inclusion $\mathbb R^2\hookrightarrow\mathbb R^\omega$ given by $(x,y)\mapsto (x,y,0,0,0,\ldots)$.</p>
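The "plus sign" counterexample can be made concrete: both coordinate projections are $[-1,1]$, so the only candidate product is $[-1,1]^2$, which strictly contains it. A minimal membership check (my own sketch):

```python
# The "plus sign" P = ({0} x [-1,1]) U ([-1,1] x {0}) is compact but not
# a product: both projections contain 1, yet (1, 1) is not in P.
def in_plus(x, y):
    return (x == 0 and -1 <= y <= 1) or (y == 0 and -1 <= x <= 1)

assert in_plus(0, 1) and in_plus(1, 0)   # both projections reach 1
assert not in_plus(1, 1)                 # but the product point (1, 1) is absent
print("P is strictly smaller than the product of its projections")
```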
|
1,686,455 | <p>For $a_1, \ldots , a_n > 0$, I need to prove that
$( \sum_{i=1}^{n} a_i)( \sum_{i=1}^{n} \frac{1}{a_i})\geq n^2 $.</p>
<p>When do we have equality?</p>
<p>Should I do it by induction? Any hints?</p>
| Paolo Leonetti | 45,736 | <p>Equivalently, you have to prove that
$$
\sum_{1\le i ,j\le n: i\neq j}\frac{a_i}{a_j} \ge n^2-n.
$$
Now, $x+\frac{1}{x}\ge 2$ by am-gm, therefore
$$
\sum_{1\le i <j\le n}\frac{a_i}{a_j}+\frac{a_j}{a_i}\ge 2\binom{n}{2}=n^2-n.
$$</p>
|
1,686,455 | <p>For $a_1, \ldots , a_n > 0$, I need to prove that
$( \sum_{i=1}^{n} a_i)( \sum_{i=1}^{n} \frac{1}{a_i})\geq n^2 $.</p>
<p>When do we have equality?</p>
<p>Should I do it by induction? Any hints?</p>
| DeepSea | 101,504 | <p>Apply AM-GM inequality twice: $$LHS \geq n\sqrt[n]{a_1a_2\cdots a_n}\cdot n\sqrt[n]{\dfrac{1}{a_1a_2\cdots a_n}}=n^2=RHS$$, and equality occurs when $a_1=a_2=\cdots = a_n$.</p>
|
2,344,931 | <p>It is given that $a,b$ are roots of $3x^2+2x+1$; find the value of:
$$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3$$</p>
<p>I thought to proceed in this manner:</p>
<p>We know $a+b=\frac{-2}{3}$ and $ab=\frac{1}{3}$. Using this I tried to convert everything to <em>sum and product of roots</em> form, but this way is too complicated! </p>
<p>Please suggest a simpler process.</p>
| Khosrotash | 104,171 | <p><span class="math-container">$$3a^2+2a+1=0 \to 3a^2+3a+1=a\\3b^2+2b+1=0\to 3b^2+3b+1=b\\a-1=3a(a+1)\\b-1=3b(b+1)$$</span>
so <span class="math-container">$$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=\\
\left(\dfrac{-3a(a+1)}{1+a}\right)^3+\left(\dfrac{-3b(b+1)}{1+b}\right)^3\\=-27(a^3+b^3)=-27(s^3-3ps)\\=-27\left(\left(\frac{-2}{3}\right)^3-3\left(\frac{1}{3}\times\left(\frac{-2}{3}\right)\right)\right)\\=+8-18\\=-10$$</span>where <span class="math-container">$$s=a+b\\p=ab$$</span></p>
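Since the roots are complex ($a,b = \frac{-1\pm i\sqrt2}{3}$), the value $-10$ can also be confirmed directly with complex arithmetic (my own sketch):

```python
import cmath

# Roots of 3x^2 + 2x + 1 = 0 via the quadratic formula.
disc = cmath.sqrt(2**2 - 4 * 3 * 1)   # sqrt(-8)
a = (-2 + disc) / 6
b = (-2 - disc) / 6

value = ((1 - a) / (1 + a)) ** 3 + ((1 - b) / (1 + b)) ** 3
print(value)   # ≈ (-10+0j)
```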
|
2,344,931 | <p>It is given that $a,b$ are roots of $3x^2+2x+1$; find the value of:
$$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3$$</p>
<p>I thought to proceed in this manner:</p>
<p>We know $a+b=\frac{-2}{3}$ and $ab=\frac{1}{3}$. Using this I tried to convert everything to <em>sum and product of roots</em> form, but this way is too complicated! </p>
<p>Please suggest a simpler process.</p>
| Raffaele | 83,382 | <p>Plug $x=\frac{1-y}{1+y}$ in the given equation. We get:
$$\frac{3 (1-y)^2}{(y+1)^2}+\frac{2 (1-y)}{y+1}+1=0$$
Expanding and collecting, we have:
$$y^2-2 y+3=0$$
whose solutions are $$y_1=\frac{1-a}{1+a};\;y_2=\frac{1-b}{1+b}$$
We also know that sum of roots is $s=y_1+y_2=2$ and product is $p=y_1y_2=3$.</p>
<p>The sum of cubes can be written as follows
$$y_1^3+y_2^3=\left(y_1+y_2\right)^3-3y_1y_2(y_1+y_2)=s^3-3ps=8-18=-10$$
so we have
$$\left(\dfrac{1-a}{1+a}\right)^3+\left(\dfrac{1-b}{1+b}\right)^3=-10$$</p>
|
2,130,823 | <blockquote>
<p>Is it possible for some integer $n>1$ that $2^n-1\mid 3^n-1$ ?</p>
</blockquote>
<p>I have tried many things, but nothing worked.</p>
| WafflesTasty | 70,877 | <p>I was looking for this as well, and eventually figured it out myself. So here's my solution for future reference. The short answer is, <span class="math-container">$2^n - 1$</span> never divides <span class="math-container">$3^n - 1$</span>. Here's the proof, making use of the Jacobi symbol.</p>
<p>Assume <span class="math-container">$2^n - 1 \mid 3^n - 1$</span>. If <span class="math-container">$n = 2k$</span> is even, then <span class="math-container">$2^n - 1 = 4^k - 1 \equiv 0 \bmod 3$</span>. Consequently, <span class="math-container">$3$</span> must also divide <span class="math-container">$3^n - 1$</span>, which is a contradiction. At the very least, we can already assume <span class="math-container">$n = 2k + 1$</span> is odd. Next, since <span class="math-container">$3^n \equiv 1 \bmod 2^n - 1$</span>, from the properties of the Jacobi-symbol it follows that </p>
<p><span class="math-container">\begin{equation}
1 = (\frac{1}{2^n - 1}) = (\frac{3^n}{2^n - 1}) = (\frac{3^{2k}}{2^n - 1}) \cdot (\frac{3}{2^n - 1}) = (\frac{3}{2^n - 1})
\end{equation}</span></p>
<p>However, using Jacobi's law of reciprocity we also know</p>
<p><span class="math-container">\begin{equation}
(\frac{2^n - 1}{3}) = (\frac{3}{2^n - 1}) \cdot (\frac{2^n - 1}{3}) = (-1)^{\frac{3 - 1}{2}\frac{2^n - 2}{2}} = (-1)^{2^{n - 1} - 1} = -1
\end{equation}</span></p>
<p>The only quadratic non-residue <span class="math-container">$\bmod 3$</span> is <span class="math-container">$2$</span>, therefore <span class="math-container">$2^n - 1 \equiv 2 \bmod 3$</span> or alternatively <span class="math-container">$2^n \equiv 0 \bmod 3$</span>. Since this implies <span class="math-container">$3$</span> divides <span class="math-container">$2^n$</span>, we again arrive at a contradiction.</p>
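The conclusion is cheap to corroborate by brute force for small $n$ (a sketch of mine; the proof above of course covers all $n > 1$):

```python
# Check directly that 2^n - 1 never divides 3^n - 1 for small n > 1.
# (n = 1 is excluded: 2^1 - 1 = 1 divides everything.)
for n in range(2, 200):
    assert (3**n - 1) % (2**n - 1) != 0
print("2^n - 1 does not divide 3^n - 1 for any 2 <= n < 200")
```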
|
197,421 | <p>Consider the quotient group $\mathbb{Q}/\mathbb{Z}$ of the additive group of rational numbers. Then how to find the order of the element $2/3 + \mathbb{Z} $ in $\mathbb{Q}/\mathbb{Z}$.</p>
| Makoto Kato | 28,422 | <p>Let $a = 2/3 + \mathbb{Z}$.
Since $3a = 0$ and $a \neq 0$, the order of $a$ is $3$.</p>
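<p>A quick check with exact rational arithmetic (my addition):</p>

```python
from fractions import Fraction

a = Fraction(2, 3)
# The order of a + Z is the least positive k with k*a an integer.
order = next(k for k in range(1, 100) if (k * a).denominator == 1)
print(order)  # -> 3
```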
|
197,421 | <p>Consider the quotient group $\mathbb{Q}/\mathbb{Z}$ of the additive group of rational numbers. Then how to find the order of the element $2/3 + \mathbb{Z} $ in $\mathbb{Q}/\mathbb{Z}$.</p>
| Bill | 418,485 | <p>The order of a coset $a + \mathbb Z$ in $\mathbb{Q}/\mathbb{Z}$ is the smallest positive integer $b$ such that $b(a+\mathbb Z)= \mathbb Z$, i.e. such that $ba \in \mathbb Z$. For $a = 2/3$ the smallest such $b$ is $3$.</p>
|
4,293,199 | <p>let me preface by saying this <em>is</em> from a homework question, but the question is not to plot the decision boundary, just to train the model and do some predictions. I have already done that and my predictions <em>seem</em> to be correct.</p>
<p>But I would like to verify my results by plotting the decision boundary. This is not part of the homework.</p>
<p>The question was to take a simple dataset <span class="math-container">$$ X = \begin{bmatrix}
-1&0&2&0&1&2\\
-1&1&0&-2&0&-1
\end{bmatrix} $$</span></p>
<p><span class="math-container">$$ y = \begin{bmatrix}
1&1&1&-1&-1&-1
\end{bmatrix}
$$</span></p>
<p>Given this, convert the input to non-linear functions:
<span class="math-container">$$
z = \begin{bmatrix}
x_1\\x_2\\x_1^2\\x_1x_2\\x_2^2
\end{bmatrix}
$$</span></p>
<p>Then train the binary logistic regression model to determine parameters <span class="math-container">$\hat{w} = \begin{bmatrix} w\\b \end{bmatrix}$</span> using <span class="math-container">$\hat{z} = \begin{bmatrix} z\\1 \end{bmatrix}$</span></p>
<p>So, now assume that the model is trained and I have <span class="math-container">$\hat{w}^*$</span> and would like to plot my decision boundary <span class="math-container">$\hat{w}^{*T}\hat{z} = 0$</span></p>
<p>Currently to scatter the matrix I have</p>
<pre><code>scatter(X(1,:), X(2,:))
axis([-1.5 2.5 -2.5 1.5])
hold on
% what do I do to plot the decision boundary?
</code></pre>
<p>Not sure where to go from here. I have tried using symbolic functions, but <code>fplot</code> doesn't like using 2 variables.</p>
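<p>(Added note, not part of the original question: before worrying about the plot, it can help to sanity-check the pipeline in a quick script. Below is a minimal pure-Python sketch — Python rather than MATLAB — that trains the logistic model on the quadratic features by plain gradient descent; the optimizer, learning rate, and iteration count are my own assumptions, not the homework's prescribed method. With a trained $\hat{w}$, the boundary is the zero level set of $s(x_1,x_2)=\hat{w}^T\hat{z}$, which MATLAB can draw by evaluating $s$ on a <code>meshgrid</code> and calling <code>contour(X1, X2, S, [0 0])</code>.)</p>

```python
import math

# Data from the question: columns of X with labels y in {+1, -1}.
X = [(-1, -1), (0, 1), (2, 0), (0, -2), (1, 0), (2, -1)]
y = [1, 1, 1, -1, -1, -1]

def feat(x1, x2):
    # z-hat = (x1, x2, x1^2, x1*x2, x2^2, 1)
    return (x1, x2, x1 * x1, x1 * x2, x2 * x2, 1.0)

Z = [feat(*p) for p in X]
w = [0.0] * 6  # w-hat, initialized at zero

# Full-batch gradient descent on the average logistic loss log(1 + exp(-t*s));
# learning rate and iteration count are assumptions, not the assignment's.
for _ in range(10000):
    grad = [0.0] * 6
    for z, t in zip(Z, y):
        s = sum(wi * zi for wi, zi in zip(w, z))
        g = -t / (1.0 + math.exp(t * s))  # d/ds of log(1 + exp(-t*s))
        for i in range(6):
            grad[i] += g * z[i]
    w = [wi - 0.2 * gi / len(Z) for wi, gi in zip(w, grad)]

preds = [1 if sum(wi * zi for wi, zi in zip(w, feat(*p))) > 0 else -1 for p in X]
print(preds)  # should reproduce y on this separable data
```

<p>For the boundary itself, evaluate $s$ on a grid of $(x_1,x_2)$ values and trace where its sign changes — that is exactly what a contour plot of $s$ at level $0$ does.</p>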
| lab bhattacharjee | 33,337 | <p>Clearly the roots of <span class="math-container">$$4c^3-3c-\cos3x=0$$</span> are <span class="math-container">$c_r=\cos(x+2r\pi/3),\ r=0,1,2$</span> (by the identity <span class="math-container">$\cos3\theta=4\cos^3\theta-3\cos\theta$</span>)</p>
<p>Let <span class="math-container">$p=c_0,\ q=c_1,\ r=c_2$</span></p>
<p><span class="math-container">$p+q+r=0\implies p^3+q^3+r^3=3pqr$</span></p>
<p><span class="math-container">$pq+qr+rp=-3/4\implies p^2+q^2+r^2=0^2-2(-3/4)=3/2$</span></p>
<p><span class="math-container">$pqr=\dfrac{\cos3x}4$</span></p>
<p>As <span class="math-container">$4c^5=3c^3+c^2\cos3x,$</span></p>
<p><span class="math-container">$4(p^5+q^5+r^5)$</span></p>
<p><span class="math-container">$=3(p^3+q^3+r^3)+(p^2+q^2+r^2)\cos3x$</span></p>
<p>Replace the values of <span class="math-container">$p^3+q^3+r^3, p^2+q^2+r^2$</span></p>
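<p>A quick numeric sanity check of the resulting identity (a verification sketch I'm adding, not part of the original answer), at a few arbitrary values of $x$:</p>

```python
import math

# Check 4(p^5 + q^5 + r^5) = 3(p^3 + q^3 + r^3) + (p^2 + q^2 + r^2) cos(3x)
# and p^2 + q^2 + r^2 = 3/2, for p, q, r = cos(x + 2k*pi/3), k = 0, 1, 2.
for x in (0.3, 1.1, 2.7):
    p, q, r = (math.cos(x + 2 * k * math.pi / 3) for k in range(3))
    lhs = 4 * (p**5 + q**5 + r**5)
    rhs = 3 * (p**3 + q**3 + r**3) + (p**2 + q**2 + r**2) * math.cos(3 * x)
    assert abs(lhs - rhs) < 1e-12
    assert abs(p**2 + q**2 + r**2 - 1.5) < 1e-12
```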
|
4,435,032 | <p>I have equations of two conic sections in general form. Is it possible to find minimal distance between them (if they are not intercross)?</p>
<p>I need it to calculate is two spacecrafts on two orbits (2d case) can collide or not. If minimal distance bigger than sum of radiuses of bounding circles I don't need to care about collision.</p>
| Andreas Cap | 202,204 | <p>You are right, the tensor fields are indeed parallel and this is independent of torsion-freeness.
In fact, if you take any <span class="math-container">$G$</span>-invariant tensor <span class="math-container">$T_0\in\otimes^k\mathbb R^{n*}\otimes\otimes^\ell\mathbb R^n$</span>, this gives rise to a <span class="math-container">$\binom\ell k$</span>-tensor field <span class="math-container">$T$</span> on any manifold <span class="math-container">$M$</span> endowed with a <span class="math-container">$G$</span>-structure <span class="math-container">$P$</span> by "pulling back along the frames in <span class="math-container">$P$</span>".
Now you can view the tensor bundle <span class="math-container">$\otimes^k T^*M\otimes\otimes^\ell TM$</span> as the associated bundle <span class="math-container">$P\times_G(\otimes^k\mathbb R^{n*}\otimes\otimes^\ell\mathbb R^n)$</span> and correspondingly, <span class="math-container">$\binom\ell k$</span>-tensor fields on <span class="math-container">$M$</span> are in bijective correspondence with <span class="math-container">$G$</span>-equivariant smooth functions <span class="math-container">$P\to \otimes^k\mathbb R^{n*}\otimes\otimes^\ell\mathbb R^n$</span>. By construction, in this picture <span class="math-container">$T$</span> corresponds to the constant function <span class="math-container">$T_0$</span>.
But for any principal connection <span class="math-container">$\gamma$</span> on <span class="math-container">$P$</span> the induced connection <span class="math-container">$\nabla$</span> on <span class="math-container">$\binom\ell k$</span>-tensor fields has the property that for a vector field <span class="math-container">$\xi\in\mathfrak X(M)$</span> the equivariant function corresponding to <span class="math-container">$\nabla_\xi T$</span> is obtained by differentiating the equivariant function corresponding to <span class="math-container">$T$</span> with respect to the horizontal lift of <span class="math-container">$\xi$</span> to <span class="math-container">$P$</span>.
But of course, if you differentiate a constant function, you always get <span class="math-container">$0$</span>.</p>
|
1,132,187 | <p>$x^2 = y^2 + xy + 5$, where $x$ and $y$ are natural numbers.</p>
<p>Here is what I have so far:</p>
<ol>
<li><p>$x \neq y$ (from the equation).</p></li>
<li><p>$x$ is always odd (using the equation and assuming $2$ cases - $y$ is odd or $y$ is even).</p></li>
<li><p>Solving for the equation as a quadratic in $y$, $5x^2 - 20 \geq 0$ and a perfect square.</p></li>
</ol>
<p>I feel I am missing a crucial point which will guide me towards a solution.</p>
<p>Hint please!</p>
| Pp.. | 203,995 | <p>Multiply by $4$.
$$4x^2=4y^2+4xy+20$$</p>
<p>$$5x^2=(2y+x)^2+20$$</p>
<p>So, try to solve $$5x^2=z^2+20$$</p>
<p>$z$ must be multiple of $5$. So put $z=5a$ to get $$x^2-5a^2=4$$</p>
<p>This is a <a href="http://en.wikipedia.org/wiki/Pell%27s_equation" rel="nofollow">Pell's equation</a> with a solution $x=3, a=1$. From this and a minimal solution of $$A^2-5B^2=1,$$</p>
<p>say $A=9, B=4$, you can generate all solutions, and return to the original variables to get the solutions for the original equation.</p>
|
2,391,505 | <p>Most ODE textbooks provide the following steps to the solution of a separable differential equation (here the exponential equation is used as an example):</p>
<p>$$\frac{dN}{dt}=-\lambda N(t) \Rightarrow \frac{dN}{N(t)}=-\lambda dt\Rightarrow \int\frac{1}{N}dN=-\lambda\int dt \Rightarrow \ln\mid N\mid = -\lambda t+C\Rightarrow \mid N(t) \mid=e^{-\lambda t +C}=e^Ce^{-\lambda t}\Rightarrow N(t)=e^Ce^{-\lambda t} \text{ if N $>0$ and }N(t)=-e^Ce^{-\lambda t} \text{ if N < 0}.$$</p>
<p>Ultimately this can be simplified to $N(t)=Ae^{-\lambda t}$ where $A=e^C$ is positive or negative accordingly. </p>
<p>I find this demonstration unintuitive. Doesn't the author know that math students have just spent 3 semesters of Calculus having instructors insist that the Leibniz derivative operator is not a fraction, that these infinitesimals are objects that do not really exist on the real number line and which require great mathematical maturity to comprehend? Now, can we try to make this demonstration in a manner that respects our understanding of the Leibniz derivative operator as a symbol that cannot be broken apart?</p>
<p>EDIT: Questions similar to this have been asked all over this forum, few have satisfactory answers, however I have ran into this one with some great posts: <a href="https://math.stackexchange.com/questions/2142783/separable-differential-equations-detaching-dy-dx">Separable differential equations: detaching dy/dx</a> </p>
| Gribouillis | 398,505 | <p>I would say that</p>
<p>$$\frac{d(N(t)e^{\lambda t})}{dt} = \frac{d N(t)}{dt} e^{\lambda t} + N(t)\frac{d(e^{\lambda t})}{dt} = -\lambda N(t) e^{\lambda t}+ \lambda N(t) e^{\lambda t}=0$$
hence $N(t) e^{\lambda t}$ is a constant $A$ and $N(t) = A e^{-\lambda t}$.</p>
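<p>As a hedged numerical illustration (not part of the answer itself): a crude forward-Euler integration of $dN/dt=-\lambda N$, with arbitrary choices of $\lambda$ and $A$, reproduces $N(t)=Ae^{-\lambda t}$:</p>

```python
import math

# Forward-Euler integration of dN/dt = -lam * N compared with N(t) = A*exp(-lam*t);
# lam, A, and the step size are arbitrary choices for illustration.
lam, A, dt = 0.7, 2.0, 1e-5
N, t = A, 0.0
while t < 1.0:
    N += -lam * N * dt
    t += dt
exact = A * math.exp(-lam * t)
print(N, exact)  # the two agree closely
```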
|
2,044,451 | <p>I'm trying to prove the following inequality:
$$- \ln (x) \leq (x)^{-\frac{1}{e}} $$</p>
<p>over $[0, 1]$. I'm not sure how to move forward. I know that the equality is at $x = e^{-e}$. Any help is greatly appreciated.</p>
| xpaul | 66,420 | <p>Let $f(x)=-\ln x-x^{-\frac1e}$. Then
$$ f'(x)=\frac1{ex}(-e+x^{-\frac1e})$$
and hence $f(x)$ is decreasing if $x>e^{-e}$ and increasing if $0<x<e^{-e}$. So $f$ attains its maximum at $x=e^{-e}$, where $f(e^{-e})=e-e=0$. Thus $f(x)\le 0$ on $(0,1]$, i.e. $-\ln x\le x^{-\frac1e}$, with equality exactly at $x=e^{-e}$.</p>
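<p>A quick numerical check of the inequality (an illustration I'm adding, on an arbitrary grid):</p>

```python
import math

# Grid check of -ln(x) <= x**(-1/e) on (0, 1], with equality at x = e**(-e),
# where both sides equal e.
xs = [k / 10000 for k in range(1, 10001)]
assert all(-math.log(x) <= x ** (-1 / math.e) + 1e-12 for x in xs)
x0 = math.exp(-math.e)
print(-math.log(x0), x0 ** (-1 / math.e))  # both are e = 2.71828...
```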
|
4,219,689 | <p>I was trying to derive the perimeter of a circle using vector algebra and came across this.....</p>
<p>Taking a circle of radius <span class="math-container">$r$</span> with center at the origin, the position vector of a particular point on the circle making an angle <span class="math-container">$t$</span> with the positive x-axis is given by, <span class="math-container">$$\vec r_o = r\cos t \hat i + r\sin t \hat j $$</span>
If we move this point by an infinitesimally small amount such that the new position vector of the point subtends an angle of <span class="math-container">$t+dt$</span> with the positive x-axis, then the new position vector is given by <span class="math-container">$$\vec r = r\cos (t+dt)\hat i + r\sin (t +dt)\hat j$$</span>
The small displacement vector <span class="math-container">$dl$</span> will be given by, <span class="math-container">$$\vec {dl} = \vec r - \vec r_o$$</span>
After substitution you get, <span class="math-container">$$\vec {dl} = r [\cos (t+dt)-\cos t ]\hat i + r[\sin(t+dt)-\sin t]\hat j$$</span>
The magnitude of this infinitesimal displacement would be, <span class="math-container">$$||\vec {dl}|| = dl = \sqrt {r^2 [\cos (t+dt)-\cos t ]^2+r^2[\sin(t+dt)-\sin t]^2}$$</span>
After further simplification; <span class="math-container">$$dl = 2r\sin {\frac{dt}{2}}$$</span>
To get the arclength (say, <span class="math-container">$da$</span>); <span class="math-container">$$da = dl = 2r\sin{\frac{dt}{2}}$$</span>
Usually we approximate <span class="math-container">$\sin{\frac{dt}{2}}\approx\frac{dt}{2}$</span> as <span class="math-container">$dt\to0$</span> and we end up with <span class="math-container">$$dl = 2r \frac{dt}{2} = rdt$$</span>
and integrate this equation to get the perimeter of the circle. Instead I tried applying the Taylor expansion of <span class="math-container">$\sin$</span> <span class="math-container">$$\sin x = x - \frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}+ ....$$</span>
<span class="math-container">$$dl = 2r[\frac{dt}{2}-\frac{(\frac{dt}{2})^3}{3!}+\frac{(\frac{dt}{2})^5}{5!}-\frac{(\frac{dt}{2})^7}{7!}+....]$$</span>
Integrating the above, <span class="math-container">$$\int{dl}=2r[\int{\frac{dt}{2}}-\int{\frac{(\frac{dt}{2})^3}{3!}}+\int{\frac{(\frac{dt}{2})^5}{5!}}-\int{\frac{(\frac{dt}{2})^7}{7!}}+....]$$</span>I'm stuck at this point and don't know how to carry forward, so I need some help.</p>
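<p>(Added note: whatever one does with the series, the underlying limit can be checked numerically — with $dt = 2\pi/n$, summing the $n$ chord lengths $2r\sin(\pi/n)$ approaches $2\pi r$. A small Python sketch:)</p>

```python
import math

# With dt = 2*pi/n, the n chords of length 2*r*sin(pi/n) sum to a perimeter
# estimate that approaches 2*pi*r as n grows.
r = 3.0
for n in (10, 100, 10000):
    perimeter = n * 2 * r * math.sin(math.pi / n)
    print(n, perimeter)  # tends to 2*pi*r = 18.8495...
```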
| Narasimham | 95,860 | <p>Due to vector symmetry, for any regular <span class="math-container">$n$</span>-sided polygon the average of the sum of projections is taken from the center of the polygon:</p>
<p><span class="math-container">$$ \bar p =\dfrac{\Sigma p}{n} =\dfrac{18}{6} =3.$$</span></p>
<p>Proof : Vector sum of sides =0, the above is a general relation.</p>
|
375,910 | <p>Okay so basically I want to know if you can solve this log equation without the use of u substitution: </p>
<p>$${\log_4{\log_3{x}}} = 1$$</p>
<p>I believe that u substitution is the only way to solve this problem, but please prove me wrong if theres another way to do so.</p>
| André Nicolas | 6,312 | <p>Since I do not know what $u$-substitution means, it will have to be done another way. To say that $\log_4$ of something is $1$ means that the something is $4^1$. So $\log_3 x=4$. And this means that $x=3^4$.</p>
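<p>A one-line numerical confirmation (my addition):</p>

```python
import math

x = 3 ** 4  # = 81, the claimed solution
value = math.log(math.log(x, 3), 4)
print(value)  # approximately 1
```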
|
1,804,333 | <blockquote>
<p>How many divisors does $25^2+98^2$ have?</p>
</blockquote>
<hr>
<p>My Attempt:</p>
<p>A calculator is not allowed, but using one I found $193\times53$, which means $8$ divisors in total, $4$ of them positive.</p>
| barak manos | 131,263 | <p>HINT:</p>
<p>$a^2+b^2=(a+b)^2-2ab$</p>
<hr>
<p>Hence:</p>
<p>$25^2+98^2=$</p>
<p>$(25+98)^2-2\cdot25\cdot98=$</p>
<p>$123^2-4900=$</p>
<p>$123^2-70^2=$</p>
<p>$(123-70)\cdot(123+70)=$</p>
<p>$53\cdot193$</p>
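<p>A brute-force check of the factorization and divisor count (an added sketch, not part of the original answer):</p>

```python
# Verify 25^2 + 98^2 = 53 * 193 and list all positive divisors.
n = 25**2 + 98**2
divisors = [d for d in range(1, n + 1) if n % d == 0]
print(n, 53 * 193, divisors)  # 10229, 10229, [1, 53, 193, 10229]
```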
|
384,471 | <p>I'm attempting to evaluate the limit</p>
<p>$\lim_{x\rightarrow\infty}\frac{1}{\sqrt{x^{2}-4x+1}-x+2}$</p>
<p>I got it reduced to the following</p>
<p>$\lim_{x\rightarrow\infty}\frac{\sqrt{\frac{1}{\left(x-2\right)^{2}}-\frac{3}{\left(x-2\right)^{4}}}+1}{1-\frac{3}{\left(x-2\right)^{2}}-1}$</p>
<p>But putting in $\infty$ I get $\frac{1}{0}$ and, what's worse, Mathematica tells me the limit is equal to $-\infty$. Where am I going wrong?</p>
| vadim123 | 73,324 | <p>Hint: multiply top and bottom by $\sqrt{x^2-4x+1}+x-2$.</p>
<p>Additional hint upon request: $\lim_{x\rightarrow \infty}\sqrt{x^2-4x+1}=\infty$ and $\lim_{x\rightarrow \infty} x-2=\infty$.</p>
|
384,471 | <p>I'm attempting to evaluate the limit</p>
<p>$\lim_{x\rightarrow\infty}\frac{1}{\sqrt{x^{2}-4x+1}-x+2}$</p>
<p>I got it reduced to the following</p>
<p>$\lim_{x\rightarrow\infty}\frac{\sqrt{\frac{1}{\left(x-2\right)^{2}}-\frac{3}{\left(x-2\right)^{4}}}+1}{1-\frac{3}{\left(x-2\right)^{2}}-1}$</p>
<p>But putting in $\infty$ I get $\frac{1}{0}$ and, what's worse, Mathematica tells me the limit is equal to $-\infty$. Where am I going wrong?</p>
| DonAntonio | 31,254 | <p>Hint: multiply numerator and denominator by the denominator's conjugate:</p>
<p>$$\lim_{x\rightarrow\infty}\frac{1}{\sqrt{x^{2}-4x+1}-x+2}=\frac{\sqrt{x^2-4x+1}+x-2}{x^2-4x+1-x^2+4x-4}=\;\ldots$$</p>
<p><strong>Added under request :</strong> </p>
<p>$$\left(\sqrt{x^2-4x+1}+x-2\right)\frac{\frac1x}{\frac1x}=\frac{\sqrt{1-\frac4x+\frac1{x^2}}+1-\frac2x}{\frac1x}\xrightarrow[x\to\infty]{}\ldots$$</p>
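<p>Numerically, the rationalized form and the divergence to $-\infty$ can be checked (an added sketch; note the raw form suffers cancellation for very large $x$, which is exactly why the conjugate trick helps analytically):</p>

```python
import math

def f(x):
    # the original expression
    return 1 / (math.sqrt(x * x - 4 * x + 1) - x + 2)

def g(x):
    # rationalized form: multiply through by the conjugate; denominator becomes -3
    return (math.sqrt(x * x - 4 * x + 1) + x - 2) / (-3)

for x in (1e3, 1e5, 1e7):
    print(x, f(x), g(x))  # both decrease without bound toward -infinity
```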
|
6,661 | <p>Is there a simple explanation of what the Laplace transformations do exactly and how they work? Reading my math book has left me in a foggy haze of proofs that I don't completely understand. I'm looking for an explanation in layman's terms so that I understand what it is doing as I make these seemingly magical transformations.</p>
<p>I searched the site and closest to an answer was <a href="https://math.stackexchange.com/questions/954/inverse-of-laplace-transform">this</a>. However, it is too complicated for me.</p>
| JP McCarthy | 19,352 | <p>I have been teaching the Laplace Transform to a night degree class (mature) of civil engineers. They are good students but not great mathematicians. They couldn't follow the method of how we use the Laplace Transform to solve differential equations until I told them this story:</p>
<blockquote>
<p>Suppose that you come across a poem written in English of whose meaning you don't understand. However suppose that you know a French-speaking gentleman who is a master of interpreting poems. So you translate the poem into French and send it to the French gentleman. The French gentleman writes a perfectly good interpretation of the poem in French and sends this back to you where you translate it back into English and you have the meaning of the poem.</p>
</blockquote>
<p>Obviously these are simple difficulties that these students are having but I still think it's a nice story.</p>
<p>D'accord:</p>
<p>Poem in English = Differential Equation. Interpretation in English = Solution of Differential Equation. Translate to French = Take Laplace Transform. Poem in French (better interpreter) = Algebraic Equation (easier to solve). Interpretation in French = Laplace Transform of Solution of Differential Equation. Translate back into English = Inverse Laplace Transform</p>
|
2,852,832 | <blockquote>
<p>How can we prove the following?</p>
<p>$-1+\frac1{12}{\pi }^{2}-\frac12\sum\limits_{n=2}^{\infty }\Gamma \left( n+1 \right) \sum\limits_{k=0}^{n+2}
\,{\frac {\zeta \left( k \right) \left( -
1 \right) ^{-2+k} \left( {2}^{2-k}-2 \right) }{\Gamma \left( -1+k
\right) \left( n+1-k \right) !}} =\sum\limits_{n=1}^{\infty }n
\sum\limits_{j=2}^{\infty }{\frac { \left( -1 \right) ^{j-1}}{{j}^{
2}} \left( 1-{j}^{-1} \right) ^{n-1}} =-\frac12$</p>
</blockquote>
<p>We also have
$\chi(n)=-\sum\limits_{k=0}^{n+2}\zeta \left( k
\right) \left( k-n-2 \right) {n\choose k-2} \left( \left( -1
\right) ^{1+k}{2}^{1-k}+ \left( -1 \right) ^{k} \right) $ by replacing the $\Gamma$ factors with the corresponding binomial coefficients</p>
<p>Related question at <a href="https://math.stackexchange.com/questions/2851837/references-for-chin-sum-j-2-infty-frac-left-1-right-j-1#2851837">References for $ \chi(n)=n\sum\limits_{j=2}^\infty\frac {(-1)^{j-1}}{j^2}\left(1-j^{-1}\right)^{n-1}$ in $\zeta$ expansion?</a></p>
<blockquote>Digits := 37; flist(proc (n) options operator, arrow; evalf(coolchi(n)) end proc, 0 .. 75);
Digits := 37
[
[0., -0.1775329665758867817637924166769874055,
-0.158151287891164991627192075621149797,
-0.100756475454096876860689273411921431,
-0.052729122560581908477394880842614692,
-0.022173447806981260740844120495648785,
-0.005724758139923986695354623118140584,
0.001711181525440404872086057781767772,
0.004163504754435619487164038876686477,
0.004224146796392587120306729731994248,
0.003361436685455899377754627000698078,
0.00231507684250186191279422185478205,
0.00140312527387483007618661260684936,
0.00072272177218128957979732200349640,
0.00026840905319980399461855630763639,
-0.00000386786514313326830151239928860,
-0.00014529652172375174501064342971419,
-0.00020079200685610596363763783920190,
-0.00020486397550091241285507339640177,
-0.00018177251376674866978557090651028,
-0.00014732720568299891079533324844990,
-0.00011103259250611536839455447354313,
-0.00007801852557445637063802766476294,
-0.00005057002459089850935720449502326,
-0.00002924737982070438622370684257219,
-0.0000136577182218588710540784871811,
-0.0000029575122625340335255970335431,
0.0000038395199899767419714034429197,
0.0000076881193734840299247444522077,
0.0000094245398419219327153653943207,
0.0000097337529690318816412614692587,
0.0000091480274459270312096627770217,
0.0000080619145517535381759258176097,
0.0000067542476407336638941225910227,
0.000005411593443449269790648097261,
0.000004150125437131325581547336324,
0.000003034497471576312110001308212,
0.000002093267538135562252003012925,
0.000001330969825279401198332450463,
7.37209507955248062779620826e-07,
2.93263622398134958434524015e-07,
-2.3317072362848976102839973e-08,
-2.35650675570634484892471134e-07,
-3.65620346716670870284369471e-07,
-4.32803484845776951918534983e-07,
-4.53980382773092409894967670e-07,
-4.43028271969806636513667532e-07,
-4.11056601063391173974634568e-07,
-3.66679744724574133777868780e-07,
-3.16354929829636515323370167e-07,
-2.64737199886948816611138728e-07,
-2.15021028592691266141174466e-07,
-1.69250991341964200913477377e-07,
-1.28592821342563655928047463e-07,
-9.3562155186891501184884726e-08,
-6.4212079021714840683989162e-08,
-4.0282829593958497425360774e-08,
-2.1318155241615525408999560e-08,
-6.753256088468931634905523e-09,
4.020837075521115896921341e-09,
1.1611930078301706186481030e-08,
1.6596359914684719233773543e-08,
1.9500255425207775038798882e-08,
2.0789463805547333601557044e-08,
2.0865770984321044922048033e-08,
2.0067518073257429000271847e-08,
1.8673131375018625836228485e-08,
1.6906432963871255429917949e-08,
1.4942881292166557781340237e-08,
1.2916159703946332890495351e-08,
1.0924629207316840757383290e-08,
9.037454965393441382004053e-09,
7.300036840122634764357642e-09,
5.739399681529520904444054e-09,
4.367571268906899802263293e-09, 3.185931532251771457815357e-09
]
]
</blockquote>
<blockquote> listsum((20))= -0.4999999932944069400650356605249833795</blockquote>
<p>Note: when evaluating this sum, it appears at least 28 digits of precision are required for accurate evaluation. If you use fewer than this (Maple defaults to 10), then the result will diverge quite wildly to an inaccurate answer.</p>
<p>This table is the partial sums of the terms, so I think the equality =-1/2 is correct</p>
<blockquote>[0., -.1775329665758867817637924166769874055, -.3356842544670517733909844922981372025, -.4364407299211486502516737657100586335, -.4891698524817305587290686465526733255, -.5113433002887118194699127670483221105, -.5170680584286358061652673901664626945, -.5153568769031954012931813323846949225, -.5111933721487597818060172935080084455, -.5069692253523671946857105637760141975, -.5036077886669112953079559367753161195, -.5012927118244094333951617149205340695, -.4998895865505346033189751023136847095, -.4991668647783533137391777803101883095, -.4988984557251535097445592240025519195, -.4989023235902966430128607364018405195, -.4990476201120203947578713798315547095, -.4992484121188765007215090176707566095, -.4994532760943774131343640910671583795, -.4996350486081441618041496619736686595, -.4997823758138271607149449952221185595, -.4998934084063332760833395496956616895, -.4999714269319077324539775773604246295, -.5000219969564986309633347818554478895, -.5000512443363193353495584886980200795, -.5000649020545411942206125671852011795, -.5000678595668037282541381642187442795, -.5000640200468137515121667607758245795, -.5000563319274402674822420163236168795, -.5000469073875983455495266509292961795, -.5000371736346293136678853894600374795, -.5000280256071833866366757266830157795, -.5000199636926316330984998008654060795, -.5000132094449908994346056782743833795, -.5000077978515474501648150301771223795, -.5000036477261103188392334828407983795, -.5000006132286387425271234815325863795, -.4999985199611006069648714785196613795, -.4999971889912753275636731460691983795, -.4999964517817673723156103664483723795, -.4999961585181449741806519319243573795, -.4999961818352173370296280347643303795, -.4999964174858929076641129272354643795, -.4999967831062396243349832116049353795, -.4999972159097244701119351301399183795, -.4999976698901072432043450251075883795, -.4999981129183792130109815387751203795, -.4999985239749802764021555134096883795, -.4999988906547250009762892912784683795, 
-.4999992070096548306128046146486353795, -.4999994717468547175616212257873633795, -.4999996867678833102528873669618293795, -.4999998560188746522170882804392063795, -.4999999846116959947807442084866693795, -.5000000781738511816722453933713953795, -.5000001423859302033870860773605573795, -.5000001826687597973455835027213313795, -.5000002039869150389611089117208913795, -.5000002107401711274300405466264143795, -.5000002067193340519089246497050733795, -.5000001951074039736072184632240433795, -.5000001785110440589224992294505003795, -.5000001590107886337147241906516183795, -.5000001382213248281673905890945743795, -.5000001173555538438463456670465413795, -.5000000972880357705889166667746943795, -.5000000786149043955702908305462093795, -.5000000617084714316990354006282603795, -.5000000467655901395324776192880233795, -.5000000338494304355861447287926723795, -.5000000229248012282693039714093823795, -.5000000138873462628758625894053293795, -.5000000065873094227532278250476873795, -.5000000008479097412237069206036333795, -.4999999964803384723168071183403403795, -.4999999932944069400650356605249833795]
</blockquote>
| skbmoore | 321,120 | <p>In the following assume $|z|<1$ to begin with and I'll let $z \to 1$ to get the OP's left hand side formula:
$$ S(z):=\sum_{n=1}^\infty z^n\,n \sum_{j=2}^\infty\, \frac{(-1)^{j-1}}{j^2}\big(1-\frac{1}{j}\big)^{n-1} .$$
$S(z)$ is solvable in closed-form in the following manner: Interchange summations and to eliminate the $n$ series use the derivative of the geometric series to obtain</p>
<p>$$S(z)= z\sum_{j=2}^\infty\, \frac{(-1)^{j-1}}{j^2(1-z(1-1/j))^2}=\frac{z}{(1-z)^2}\sum_{j=2}^\infty\, \frac{(-1)^{j-1}}{ (j+z/(1-z))^2 }.$$
This sum is solvable in the terms of the trigamma function, once a $j=1$ term is added and subtracted. Thus</p>
<p>$$S(z)= \frac{z/4}{(1-z)^2}\Big( \psi\,'\big(\tfrac{3}{2}+\tfrac{z}{2(1-z)}\big) - \psi\,' \big( 1+\tfrac{z}{2(1-z)} \big) \Big) .$$</p>
<p>To find the limit $z \to 1,$ it is seen that it is equivalent to </p>
<p>$$S(1) = \lim_{y \to 0} \frac{1}{4y^2} \Big( \psi\,'\big(\tfrac{3}{2}+\tfrac{1}{2y}\big) - \psi\,' \big(1+\tfrac{1}{2y} \big) \Big) =
\lim_{x \to \infty} \frac{x^2}{4} \Big( \psi\,'\big(\tfrac{3}{2}+\tfrac{x}{2}\big) - \psi\,' \big( 1+\tfrac{x}{2} \big) \Big)$$
$$= \lim_{x \to \infty} \frac{x^2}{4} \Big(\frac{2}{x}-\frac{4}{x^2}+\mathit{O}(x^{-3}) - (\frac{2}{x}-\frac{2}{x^2}+{\mathit{O}}(x^{-3})) \Big)= -\frac{1}{2}$$
where the asymptotic formula for the trigamma function has been used.</p>
|
2,852,832 | <blockquote>
<p>How can we prove the following?</p>
<p>$-1+\frac1{12}{\pi }^{2}-\frac12\sum\limits_{n=2}^{\infty }\Gamma \left( n+1 \right) \sum\limits_{k=0}^{n+2}
\,{\frac {\zeta \left( k \right) \left( -
1 \right) ^{-2+k} \left( {2}^{2-k}-2 \right) }{\Gamma \left( -1+k
\right) \left( n+1-k \right) !}} =\sum\limits_{n=1}^{\infty }n
\sum\limits_{j=2}^{\infty }{\frac { \left( -1 \right) ^{j-1}}{{j}^{
2}} \left( 1-{j}^{-1} \right) ^{n-1}} =-\frac12$</p>
</blockquote>
<p>We also have
$\chi(n)=-\sum\limits_{k=0}^{n+2}\zeta \left( k
\right) \left( k-n-2 \right) {n\choose k-2} \left( \left( -1
\right) ^{1+k}{2}^{1-k}+ \left( -1 \right) ^{k} \right) $ by replacing the $\Gamma$ factors with the corresponding binomial coefficients</p>
<p>Related question at <a href="https://math.stackexchange.com/questions/2851837/references-for-chin-sum-j-2-infty-frac-left-1-right-j-1#2851837">References for $ \chi(n)=n\sum\limits_{j=2}^\infty\frac {(-1)^{j-1}}{j^2}\left(1-j^{-1}\right)^{n-1}$ in $\zeta$ expansion?</a></p>
<blockquote>Digits := 37; flist(proc (n) options operator, arrow; evalf(coolchi(n)) end proc, 0 .. 75);
Digits := 37
[
[0., -0.1775329665758867817637924166769874055,
-0.158151287891164991627192075621149797,
-0.100756475454096876860689273411921431,
-0.052729122560581908477394880842614692,
-0.022173447806981260740844120495648785,
-0.005724758139923986695354623118140584,
0.001711181525440404872086057781767772,
0.004163504754435619487164038876686477,
0.004224146796392587120306729731994248,
0.003361436685455899377754627000698078,
0.00231507684250186191279422185478205,
0.00140312527387483007618661260684936,
0.00072272177218128957979732200349640,
0.00026840905319980399461855630763639,
-0.00000386786514313326830151239928860,
-0.00014529652172375174501064342971419,
-0.00020079200685610596363763783920190,
-0.00020486397550091241285507339640177,
-0.00018177251376674866978557090651028,
-0.00014732720568299891079533324844990,
-0.00011103259250611536839455447354313,
-0.00007801852557445637063802766476294,
-0.00005057002459089850935720449502326,
-0.00002924737982070438622370684257219,
-0.0000136577182218588710540784871811,
-0.0000029575122625340335255970335431,
0.0000038395199899767419714034429197,
0.0000076881193734840299247444522077,
0.0000094245398419219327153653943207,
0.0000097337529690318816412614692587,
0.0000091480274459270312096627770217,
0.0000080619145517535381759258176097,
0.0000067542476407336638941225910227,
0.000005411593443449269790648097261,
0.000004150125437131325581547336324,
0.000003034497471576312110001308212,
0.000002093267538135562252003012925,
0.000001330969825279401198332450463,
7.37209507955248062779620826e-07,
2.93263622398134958434524015e-07,
-2.3317072362848976102839973e-08,
-2.35650675570634484892471134e-07,
-3.65620346716670870284369471e-07,
-4.32803484845776951918534983e-07,
-4.53980382773092409894967670e-07,
-4.43028271969806636513667532e-07,
-4.11056601063391173974634568e-07,
-3.66679744724574133777868780e-07,
-3.16354929829636515323370167e-07,
-2.64737199886948816611138728e-07,
-2.15021028592691266141174466e-07,
-1.69250991341964200913477377e-07,
-1.28592821342563655928047463e-07,
-9.3562155186891501184884726e-08,
-6.4212079021714840683989162e-08,
-4.0282829593958497425360774e-08,
-2.1318155241615525408999560e-08,
-6.753256088468931634905523e-09,
4.020837075521115896921341e-09,
1.1611930078301706186481030e-08,
1.6596359914684719233773543e-08,
1.9500255425207775038798882e-08,
2.0789463805547333601557044e-08,
2.0865770984321044922048033e-08,
2.0067518073257429000271847e-08,
1.8673131375018625836228485e-08,
1.6906432963871255429917949e-08,
1.4942881292166557781340237e-08,
1.2916159703946332890495351e-08,
1.0924629207316840757383290e-08,
9.037454965393441382004053e-09,
7.300036840122634764357642e-09,
5.739399681529520904444054e-09,
4.367571268906899802263293e-09, 3.185931532251771457815357e-09
]
]
</blockquote>
<blockquote> listsum((20))= -0.4999999932944069400650356605249833795</blockquote>
<p>Note: when evaluating this sum, it appears at least 28 digits of precision are required for accurate evaluation. If you use fewer than this (Maple defaults to 10), then the result will diverge quite wildly to an inaccurate answer.</p>
<p>This table is the partial sums of the terms, so I think the equality =-1/2 is correct</p>
<blockquote>[0., -.1775329665758867817637924166769874055, -.3356842544670517733909844922981372025, -.4364407299211486502516737657100586335, -.4891698524817305587290686465526733255, …, -.4999999964803384723168071183403403795, -.4999999932944069400650356605249833795] (a long machine-generated list of partial sums, truncated here; the values oscillate and appear to converge to -0.5)</blockquote>
| Sangchul Lee | 9,340 | <p>Write $\chi(n) = n \sum _{j=2}^{\infty} \frac{(-1)^{j-1}}{j^2} \left( 1-\frac{1}{j} \right)^{n-1}$ using OP's notation. Its partial sum up to the $(N+1)$-th term can be simplified by interchanging the order of summation:</p>
<p>\begin{align*}
\sum_{n=1}^{N+1} \chi(n)
&= \sum _{j=2}^{\infty} \frac{(-1)^{j-1}}{j^2} \sum_{n=1}^{N+1} n \left( 1-\frac{1}{j} \right)^{n-1} \\
&= \sum_{j=2}^{\infty} (-1)^j \underbrace{ \left[ \left(1 + \frac{N+1}{j}\right)\left(1 - \frac{1}{j}\right)^{N+1} - 1 \right] }_{=: f_N(j)},
\tag{1}
\end{align*}</p>
<p>Since $f_N(j) = \mathcal{O}(j^{-2})$ as $j\to\infty$ for each fixed $N$, $\text{(1)}$ is indeed a convergent series. Now grouping successive terms, $\text{(1)}$ simplifies to</p>
<p>\begin{align*}
\sum_{n=1}^{N+1} \chi(n)
&= - \sum_{k=1}^{\infty} (f_N(2k+1) - f_N(2k)) \\
&= - \sum_{k=1}^{\infty} \int_{2k}^{2k+1} f_N'(x) \, dx \\
&= - \sum_{k=1}^{\infty} (N+1)(N+2) \int_{2k}^{2k+1} \frac{\left(1 - \frac{1}{x}\right)^N}{x^3} \, dx \\
&= - \sum_{k=1}^{\infty} \frac{(N+1)(N+2)}{N^2} \int_{\frac{2k}{N}}^{\frac{2k+1}{N}} \frac{\left(1 - \frac{1}{Nu}\right)^N}{u^3} \, du \tag{$x=Nu$}
\end{align*}</p>
<p>Then Fubini-Tonelli's theorem gives</p>
<p>$$
\sum_{n=1}^{N+1} \chi(n)
= - \frac{(N+1)(N+2)}{N^2} \int_{0}^{\infty} \frac{\left(1 - \frac{1}{Nu}\right)^N}{u^3} \left( \sum_{k=1}^{\infty} \mathbf{1}_{\left[\frac{2k}{N}, \frac{2k+1}{N}\right]}(u) \right) \, du.
$$</p>
<p>Since the integrand is bounded by the integrable dominating function $e^{-1/u}/u^3$, and the sum of indicators converges weakly to the constant $\frac{1}{2}$ (it occupies half of each period of length $\frac{2}{N}$), as $N\to\infty$ we have</p>
<p>$$
\lim_{N\to\infty} \sum_{n=1}^{N+1} \chi(n)
= - \int_{0}^{\infty} \frac{e^{-1/u}}{u^3} \cdot \frac{1}{2} \, du
= - \frac{1}{2}.
$$</p>
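<p><em>(An added numerical sanity check, not part of the original argument:)</em> truncating the inner alternating series at a cutoff $J$ — by the alternating-series estimate the remainder is below $n/J^2$ — the partial sums of $\chi(n)$ indeed settle near $-\frac12$. The function name <code>chi</code> is mine:</p>

```python
def chi(n, J=10000):
    # chi(n) = n * sum_{j>=2} (-1)^(j-1)/j^2 * (1 - 1/j)^(n-1), truncated at j = J;
    # the alternating-series truncation error is below n / J^2
    return n * sum((-1) ** (j - 1) / j ** 2 * (1 - 1 / j) ** (n - 1)
                   for j in range(2, J + 1))

partial = sum(chi(n) for n in range(1, 120))  # settles near -0.5
```

<p>Note that $\chi(1)=\frac{\pi^2}{12}-1\approx-0.17753$, matching the first listed partial sum.</p>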
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| Terry Tao | 766 | <p>If first-order logic counts as "elementary mathematics", then I would like to suggest (the relevant chapters of) "<a href="http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach">Godel, Escher, Bach</a>", by Douglas Hofstadter. (As an aside: Hofstadter's puzzle of encoding "n is a power of 10" as a predicate in Peano arithmetic is a wonderful one, quite tough even for professional mathematicians, especially if one is to avoid any form of the Godel numbering trick.)</p>
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| Renaud Dreyer | 35,306 | <p><a href="http://en.wikipedia.org/wiki/Mathematics_Made_Difficult" rel="noreferrer"><em>Mathematics Made Difficult</em></a> by Carl E. Linderholm.</p>
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| UwF | 36,090 | <p>How about Lawvere&Schanuel's "Conceptual Mathematics: A First Introduction to Categories"? See, e.g., <a href="http://books.google.fr/books?id=h0zOGPlFmcQC&lpg=PP1&dq=lawvere&pg=PP1#v=onepage&q=lawvere&f=false" rel="nofollow">http://books.google.fr/books?id=h0zOGPlFmcQC&lpg=PP1&dq=lawvere&pg=PP1#v=onepage&q=lawvere&f=false</a></p>
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| Michael Greinecker | 35,357 | <p>While it contains much beyond school mathematics, a lot of school mathematics is treated in a beautiful way in <a href="https://en.wikipedia.org/wiki/Mathematics,_Form_and_Function" rel="nofollow noreferrer">Mathematics, Form and Function</a> by Saunders Mac Lane.</p>
|
142,734 | <p>What are some books that discuss elementary mathematical topics ('school mathematics'), like arithmetic, basic non-abstract algebra, plane & solid geometry, trigonometry, etc, in an insightful way? I'm thinking of books like Klein's <em>Elementary Mathematics from an Advanced Standpoint</em> and the books of the Gelfand Correspondence School - school-level books with a university ethos.</p>
| Martin Peters | 60,435 | <p>Two great ones are:</p>
<ul>
<li>Fuchs, Tabachnikov: <em>Mathematical Omnibus</em> and </li>
<li>Arnold: <em>Lectures and Problems: A Gift to Young Mathematicians</em>.</li>
</ul>
|
375,016 | <p>I just started reading the ABC of category theory using the appendix of a text, the first chapter of a text that I have never read, and above all (I found out now that they handle well the theory) the wikipedia pages. I want to know only this: in all three sources are given the definition of the category, and in all three I noticed that it's not possible to treat categories in set theory. The reason strikes the eye immediately when it is assumed that the objects are contained in classes, in particular the example of the category of sets. I ask then how it is treated the theory because from what I'm reading I do not understand (indeed in the second source it seems that develops internally to set theory). I think it needs some extension of set theory for treating them.</p>
| Martin Brandenburg | 1,650 | <p>Both category theory and set theory can be seen as formal theories in the general sense of mathematical logic. It is well-known that category theory can be developed inside set theory (perhaps including Grothendieck's axiom of universes). But Lawvere has also shown the converse, via the <a href="http://ncatlab.org/nlab/show/ETCS" rel="nofollow">ETCS</a>. More generally one can quote Grothendieck's topos theory, which is actually a categorical refinement of set theory (which is just the topos corresponding to a point). So in some sense category theory and set theory contain each other.</p>
|
3,247,982 | <p>If <span class="math-container">$f:X \to Y$</span> is a function and <span class="math-container">$U$</span> and <span class="math-container">$V$</span> are subsets of <span class="math-container">$X$</span>, then <span class="math-container">$f(U \cap V) = f(U) \cap f(V)$</span>.</p>
<p>I am a little lost on this proof. I believe it to be true, but I am uncertain as to where to start. Any solutions would be appreciated. I have many similar proofs to prove and I would love a complete one to base my further proofs on.</p>
| José Carlos Santos | 446,262 | <p>You can't prove it, since it is false. Take a set <span class="math-container">$X$</span> with more than one point, take <span class="math-container">$x,x'\in X$</span> with <span class="math-container">$x\neq x'$</span> and let <span class="math-container">$f$</span> be a constant function. Then<span class="math-container">$$f\bigl(\{x\}\cap\{x’\}\bigr)=f(\emptyset)=\emptyset\neq f\bigl(\{x\}\bigr)\cap f\bigl(\{x'\}\bigr).$$</span></p>
<p>Actually, the assertion that you want to prove is equivalent to the assertion that <span class="math-container">$f$</span> is injective.</p>
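<p><em>(Added illustration:)</em> both the counterexample and the injective case can be checked mechanically with finite sets. The helper <code>image</code> and the sample maps below are mine:</p>

```python
def image(f, S):
    # the image f(S) = { f(s) : s in S }
    return {f(s) for s in S}

const = lambda t: 0                       # a constant (hence non-injective) map
U, V = {1}, {2}
lhs = image(const, U & V)                 # f(U ∩ V) = f(∅) = ∅
rhs = image(const, U) & image(const, V)   # f(U) ∩ f(V) = {0}

inj = lambda t: t + 10                    # an injective map: equality does hold
lhs2 = image(inj, {1, 2, 3} & {2, 3, 4})
rhs2 = image(inj, {1, 2, 3}) & image(inj, {2, 3, 4})
```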
|
1,275,810 | <p>Let <span class="math-container">$ c_0 = \{ x = (x_n)_{n \in \mathbb N} \in l^\infty : \lim_{n \to \infty} x_n = 0\}$</span>. Show that <span class="math-container">$c_0$</span> is a Banach space with the norm <span class="math-container">$\rVert \cdot \lVert_\infty$</span></p>
<p>I am capable of showing that the space where the limit of <span class="math-container">$x_n$</span> exists is a normed linear space, but am having trouble with showing that the limit of Cauchy sequences must converge to 0. </p>
<p>Let <span class="math-container">$(x^{(n)})_{n \in \mathbb N}$</span> be a Cauchy sequence in <span class="math-container">$c_0$</span> such that <span class="math-container">$x^{n} = (x^n_1, x^n_2,...)$</span>.
Fix <span class="math-container">$k \in \mathbb N$</span> consider the sequence <span class="math-container">$(x^n_k)_{n \in \mathbb N}$</span> in <span class="math-container">$\mathbb F$</span>. For any <span class="math-container">$n,m \in \mathbb N$</span></p>
<p><span class="math-container">$\lvert x^n_k - x^m_k \rvert \le \sup_{k \in \mathbb N} \lvert x^n_k - x^m_k \rvert = \lVert x^n - x^m \rVert_\infty \lt \epsilon $</span> (1)</p>
<p>Thus <span class="math-container">$x^n_k$</span> is Cauchy in <span class="math-container">$\mathbb F$</span> and so has limit <span class="math-container">$y_k$</span> such that <span class="math-container">$y = (y_1,y_2,...)$</span> and y is the limit of <span class="math-container">$x^n$</span></p>
<p>To show that such a y exists we look at the value of <span class="math-container">$\lvert y_n - y_m \rvert \le \lvert y_n - x^N_n \rvert + \lvert x^N_n - x^N_m \rvert + \lvert x^N_m - y_m \rvert \lt \epsilon$</span> for all <span class="math-container">$n,m \ge N$</span> (2)</p>
<p>The middle expression on RHS of (2) is <span class="math-container">$\lt \epsilon/3$</span> by (1)</p>
<p>The other two are also <span class="math-container">$\lt \epsilon/3$</span> follow from <span class="math-container">$x^N_k$</span> being Cauchy and converging to <span class="math-container">$y_k$</span></p>
<p>This shows that <span class="math-container">$\lim_{n \to \infty} y_n$</span> exists but we still have not shown that <span class="math-container">$y \in c_0$</span>.</p>
<p>I know that to show y tends to 0 I should show that <span class="math-container">$\lvert y_k \rvert \lt \epsilon$</span> for <span class="math-container">$k \ge N$</span></p>
<p>This is where I am stuck. Perhaps
<span class="math-container">$\lvert y_k \rvert = \lvert \lim_{n \to \infty} x^n_k \rvert$</span> and then we can take the limit function outside the absolute value sign by continuity?
Then we might say due to it being a Cauchy sequence <span class="math-container">$x^n_k \lt \epsilon$</span>. I know this last bit isn't at all convincing so I could do with some help.</p>
| Calvin Khor | 80,734 | <p>We note that
$$ |y_k| \leq |y_k - x^n_k| + |x^n_k| = \lim_{m→∞}|x^m_k - x^n_k| + |x^n_k| \leq\lim_{m→∞} ‖x^m - x^n‖ + |x^n_k|$$
which holds for arbitrary $n$. (the superscript is which sequence, the subscript is the index of the sequence) As $x^n∈ c_0$, choose $K_n$ large such that $|x^n_k |<ε$ for $k>K_n$.</p>
<p>Pick $N(ε)$ large such that $\|x^m - x^n\|<ε$ for $n,m>N$. Taking limits ($\star$) gives
$$ \lim_{m→∞}‖x^m - x^n‖ \leq ε \quad ∀ n \geq N$$</p>
<p>So putting the two bounds together: With $ε>0$ given, there is $K = K_{N(ε)}$ such that
$$ |y_k| \leq \lim_{m→∞} ‖x^m - x^{N(ε)}‖ + |x^{N(ε)}_k| \leq 2ε $$
for all $k > K$.</p>
<hr>
<p>The step marked $(\star)$ is easy to see if you notationally suppress the $n$:
$$ a_m < ε ∀ m > N \implies \lim a_m \leq ε $$
as (you have shown) this limit will exist.</p>
|
696,145 | <blockquote>
<p>For each positive integer <span class="math-container">$n$</span>, define <span class="math-container">$f(n)$</span> such that <span class="math-container">$f(n+1) > f(n)$</span> and <span class="math-container">$f(f(n))=3n$</span>. What is the value of <span class="math-container">$f(10)?$</span></p>
</blockquote>
<p>This question was really hard for me. Since <span class="math-container">$n$</span> is a positive integer and <span class="math-container">$f(f(n)) = 3n$</span>, I deduced that <span class="math-container">$1<f(1)<f(f(1))$</span> so <span class="math-container">$f(1) = 2$</span> and I couldn't manage to carry on because if i used <span class="math-container">$2<f(2)<f(f(2))$</span>, I would get <span class="math-container">$f(f(2)) = 6$</span> but I wouldn't know how to work out <span class="math-container">$f(2)$</span>.</p>
<p>If someone could please show me step by step how to get the answer, I would appreciate it as I would like to know how to get the answer, thanks.</p>
| Zoltan Zimboras | 7,317 | <p>You already deduced that $f(1) = 2$. From the $f(f(n))=3n$ condition we get $f(f(1))=3$, and since $f(1)=2$ this means $f(2)=3$. Similarly, $f(3)=f(f(2))=3\cdot 2=6$, $f(6)=f(f(3))=3\cdot 3=9$, and $f(9)=f(f(6))=3 \cdot 6=18$. So far we have that </p>
<p>$$f(1)=2\, , \; f(2)=3 \, , \; f(3)=6 \, ,\; f(6)=9 \, , \; f(9)=18 \, .$$ </p>
<p>Since $f(n)< f(n+1)$, we have that $f(3)=6 < f(4) < f(5) < 9=f(6)$. As $f(n)$ is integer, the only solution is $f(4)=7$ and $f(5)=8$. Now we have that $f(7)=f(f(4))= 3\cdot 4=12$ and $f(12)=f(f(7))=3\cdot 7=21$. Finally, we can write $f(9)=18 < f(10) < f(11) < 21=f(12)$. So now you can deduce what $f(10)$ is.</p>
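<p><em>(Added check:)</em> the chain of deductions above can be encoded directly; the monotonicity gaps then force $f(10)=19$. This is only a verification of the argument, not a new derivation:</p>

```python
f = {1: 2}             # deduced from 1 < f(1) < f(f(1)) = 3
f[2] = 3 * 1           # f(f(1)) = f(2) = 3
f[3] = 3 * 2           # f(f(2)) = f(3) = 6
f[6] = 3 * 3           # f(f(3)) = f(6) = 9
f[9] = 3 * 6           # f(f(6)) = f(9) = 18
f[4], f[5] = 7, 8      # forced by f(3) = 6 < f(4) < f(5) < 9 = f(6)
f[7] = 3 * 4           # f(f(4)) = f(7) = 12
f[12] = 3 * 7          # f(f(7)) = f(12) = 21
f[10], f[11] = 19, 20  # forced by f(9) = 18 < f(10) < f(11) < 21 = f(12)

# consistency of the two defining conditions on the values found so far
consistent = all(f[f[n]] == 3 * n for n in f if f[n] in f)
increasing = all(f[a] < f[b] for a in f for b in f if a < b)
```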
|
2,960,614 | <p>I am going through Tenenbaum and Pollard's book on differential equations and they define the differential <span class="math-container">$dy$</span> of a function <span class="math-container">$y = f(x)$</span> to be the function <span class="math-container">$$
(dy)(x,\Delta x) = f'(x) \cdot (d\hat{x})(x, \Delta x)
$$</span>
where</p>
<ul>
<li><span class="math-container">$\Delta x$</span> is a variable denoting an increment along the <span class="math-container">$x$</span>-coordinate </li>
<li><span class="math-container">$\hat{x}$</span> denotes the function <span class="math-container">$\hat{x}(x) = x$</span>, and</li>
<li><span class="math-container">$d\hat{x}$</span> is the differential of the function <span class="math-container">$\hat{x}$</span>.</li>
</ul>
<p>I've never seen differentials crisply defined this way. They're usually described as "small quantities" or just avoided in favor of definitions of the derivative in terms of limits. Anyway, this definition makes good sense to me. Is this the accepted way to think of them -- i.e. as functions?</p>
| Santana Afton | 274,352 | <p>For <strong>every</strong> pair of disjoint sets <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, assume the following holds:</p>
<blockquote>
<p><span class="math-container">$m^*(A\cup B) = m^*(A) + m^*(B).$</span></p>
</blockquote>
<p>Now, let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be arbitrary sets. Then <span class="math-container">$A = (A \cap B) \cup (A\setminus B)$</span> where <span class="math-container">$A\cap B$</span> is disjoint from <span class="math-container">$A\setminus B$</span>. </p>
<p>By our assumption, this means that <span class="math-container">$m^*(A) = m^*(A\cap B) + m^*(A\setminus B)$</span>, and so by Caratheodory we have that <span class="math-container">$B$</span> is measurable since <span class="math-container">$A$</span> was arbitrary. Unfortunately, our choice of <span class="math-container">$B$</span> was <strong>also</strong> arbitrary, so every set is measurable!</p>
|
1,176,016 | <p>How do you show that if you have sets $B_1, B_2, \cdots ,B_n$ and a set $C$, then
$$(B_1\cap B_2 \cap \cdots \cap B_n)\cup C= (B_1\cup C)\cap(B_2\cup C) \cap \cdots
\cap (B_n\cup C)\,?$$</p>
<p>Thanks</p>
| MonkeysUncle | 217,283 | <p>For the equations you write to be true, $x$ has to take on some discrete value. The equality is not true in general. For the equals sign to hold, both sides of the equation need to be constants because it's only an equality when x takes on specific values. This means you can't just take the derivative or integral of both sides because you're changing the nature of the function.</p>
<p>Take a simple example, $x^2=2x$. This has solutions $x=0$ and $x=2$. Take the derivative of both sides, and you get $2x=2$. The solutions to the second equation have nothing to do with the solutions to the first equation, so taking a derivative is not a valid approach in general when only specific solutions exist.</p>
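<p><em>(Added illustration:)</em> the example's two solution sets can be compared mechanically over a small integer range:</p>

```python
# solutions of the original equation x^2 = 2x among small integers
orig = [x for x in range(-10, 11) if x * x == 2 * x]
# solutions of the differentiated equation 2x = 2
diffd = [x for x in range(-10, 11) if 2 * x == 2]
# the two solution sets are unrelated, as claimed
```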
|
4,335,896 | <blockquote>
<p><span class="math-container">$$\frac{1}{2\cdot4}+\frac{1\cdot3}{2\cdot4\cdot6}+\frac{1\cdot3\cdot5}{2\cdot4\cdot6\cdot8}+\frac{1\cdot3\cdot5\cdot7}{2\cdot4\cdot6\cdot8\cdot10}+\cdots$$</span>
is equal to?</p>
</blockquote>
<p><strong>My approach:</strong></p>
<p>We can see that the <span class="math-container">$n^{th}$</span> term is <span class="math-container">\begin{align}a_n&=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\\&=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}\color{red}{[(2n+2)-(2n+1)]}\\&=\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)}-\frac{1\cdot3\cdot5\cdot\space\dots\space\cdot(2n-3)\cdot(2n-1)\cdot(2n+1)}{2\cdot4\cdot6\cdot\space\dots\space\cdot(2n)\cdot(2n+2)}
\end{align}</span></p>
<p>From here I just have a telescopic series to solve, which gave me <span class="math-container">$$\sum_{n=1}^{\infty}a_n=0.5$$</span></p>
<p><strong>Another approach :</strong> <em>note :</em> <span class="math-container">$$\frac{(2n)!}{2^nn!}=(2n-1)!!$$</span></p>
<p>Which gives <span class="math-container">$$a_n=\frac{1}{2}\left(\frac{(2n)!\left(\frac{1}{4}\right)^n}{n!(n+1)!}\right)$$</span></p>
<p>So basically I need to compute
<span class="math-container">$$\frac{1}{2}\sum_{n=1}^{\infty}\left(\frac{(2n)!\left(\frac{1}{4}\right)^n}{n!(n+1)!}\right) \tag{*}$$</span></p>
<p>I'm not able to determine the binomial expression of <span class="math-container">$(*)$</span> (if it exists) or else you can just provide me the value of the sum</p>
<p>Any hints will be appreciated, and you can provide different approaches to the problem too</p>
| Mr.Gandalf Sauron | 683,801 | <p>If you look at the Binomial expansion of</p>
<p><span class="math-container">$$(1-x)^{-\frac{1}{2}}$$</span> you get :-</p>
<p><span class="math-container">$$\sum_{r=0}^{\infty}\frac{\binom{2r}{r}x^{r}}{4^{r}}$$</span></p>
<p>So <span class="math-container">$$\int_{0}^{1}(1-x)^{-\frac{1}{2}}dx=\sum_{r=0}^{\infty}\frac{\binom{2r}{r}}{4^{r}(r+1)}$$</span></p>
<p>So you get <span class="math-container">$$\frac{1}{2}\sum_{r=0}^{\infty}\frac{\binom{2r}{r}}{4^{r}(r+1)}=\frac{1}{2}\sum_{r=0}^{\infty}\frac{(2r)!}{4^{r}r!(r+1)!}=\frac{1}{2}\int_{0}^{1}(1-x)^{-\frac{1}{2}}dx=1$$</span></p>
<p>So <span class="math-container">$$\frac{1}{2}\sum_{r=1}^{\infty}\frac{(2r)!}{4^{r}r!(r+1)!}=\frac{1}{2}\sum_{r=0}^{\infty}\frac{(2r)!}{4^{r}r!(r+1)!}-\frac{1}{2}(1)=1-\frac{1}{2}=\frac{1}{2}$$</span></p>
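<p><em>(Added numerical check:)</em> accumulating $t_r=\binom{2r}{r}/4^r$ via the term ratio $t_r/t_{r-1}=\frac{2r-1}{2r}$ avoids huge factorials; the partial sums of $\frac{1}{2}\sum_{r\ge1}t_r/(r+1)$ increase towards $\frac{1}{2}$:</p>

```python
s, t = 0.0, 1.0                  # t holds C(2r, r) / 4^r, starting at r = 0
for r in range(1, 1_000_001):
    t *= (2 * r - 1) / (2 * r)   # ratio of consecutive central binomial terms
    s += t / (r + 1)
half_sum = 0.5 * s               # = (1/2) * sum_{r>=1} (2r)! / (4^r r! (r+1)!)
```

<p>Since all terms are positive, the partial sums approach $\frac12$ from below.</p>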
|
3,064,943 | <p>Consider the sequence of iid r.v. <span class="math-container">$(Y_k)_{k\geq1}$</span> such that <span class="math-container">$\mathbb{P}(Y_k=1)=\mathbb{P}(Y_k=-1)=\frac{1}{2}$</span> and then consider the process <span class="math-container">$X=(X_n)_{n\geq1}$</span> such that <span class="math-container">$X_n=\sum_{k=1}^n\frac{Y_k}{k}$</span>. It's very easy to see that <span class="math-container">$X$</span> is a martingale w.r.t. the filtration that it generates. However, I tried proving that it is almost surely convergent using the convergence theorem for martingales but failed. Is this really almost surely convergent? If yes, is it possible to prove this using martingale theory, or is it better to use other techniques like the Borel-Cantelli lemmas?</p>
| Aphelli | 556,825 | <p>For a « martingale » proof, the convergence theorem for martingales states that if <span class="math-container">$(X_n)$</span> is a martingale wrt some filtration, and <span class="math-container">$\mathbb{E}[|X_n|]$</span> is bounded, then <span class="math-container">$X_n$</span> converges pointwise. </p>
<p>Now note that <span class="math-container">$\mathbb{E}[|X_n|^2]$</span> is clearly bounded and is not lower than <span class="math-container">$\mathbb{E}[|X_n|]^2$</span>. </p>
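<p><em>(Added detail:)</em> the bound is explicit: by independence and $\mathbb{E}[Y_k^2]=1$ we get $\mathbb{E}[X_n^2]=\sum_{k\le n}1/k^2<\pi^2/6$, hence $\mathbb{E}[|X_n|]\le\sqrt{\mathbb{E}[X_n^2]}<\pi/\sqrt6$ uniformly in $n$. A quick numeric confirmation of the second-moment bound:</p>

```python
import math

# E[X_n^2] = sum_{k<=n} 1/k^2 by independence (E[Y_k] = 0, E[Y_k^2] = 1)
second_moment = sum(1.0 / k ** 2 for k in range(1, 1_000_001))
l1_bound = math.sqrt(second_moment)  # Cauchy–Schwarz: E|X_n| <= sqrt(E[X_n^2])
```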
|
3,855,521 | <p>Suppose <span class="math-container">$f:A\bigoplus M \to A\bigoplus N$</span> and <span class="math-container">$g: A \to A$</span>, where <span class="math-container">$g = \pi_A\circ f\circ l_A$</span>, are isomorphisms of <span class="math-container">$R$</span>-module. Prove that <span class="math-container">$M\cong N$</span>.</p>
| Narasimham | 95,860 | <p>HINT</p>
<p><span class="math-container">$y=\pm 1$</span> is clearly a tendency around <span class="math-container">$x=0$</span> and the</p>
<p><span class="math-container">$y=\log[(4/\pi)^2 x^{m}] $</span> tends to pass through <span class="math-container">$(x=1, x=-1)$</span> as <span class="math-container">$y\rightarrow 0$</span></p>
|
606,380 | <blockquote>
<p>If <span class="math-container">$abc=1$</span> and <span class="math-container">$a,b,c$</span> are positive real numbers, prove that <span class="math-container">$${1 \over a+b+1} + {1 \over b+c+1} + {1 \over c+a+1} \le 1\,.$$</span></p>
</blockquote>
<p>The whole problem is in the title. If you wanna hear what I've tried, well, I've tried multiplying both sides by 3 and then using the AM–GM inequality. <span class="math-container">$${3 \over a+b+1} \le \sqrt[3]{{1\over ab}} = \sqrt[3]{c}$$</span> By adding the inequalities I get <span class="math-container">$$ {3 \over a+b+1} + {3 \over b+c+1} + {3 \over c+a+1} \le \sqrt[3]a + \sqrt[3]b + \sqrt[3]c$$</span> And then if I prove that that is less than or equal to 3, then I've solved the problem. But the thing is, it's not less than or equal to 3 (obviously, because you can think of a situation like <span class="math-container">$a=354$</span>, <span class="math-container">$b={1\over 354}$</span> and <span class="math-container">$c=1$</span>. Then the sum is a lot bigger than 3).</p>
<p>So everything that I try doesn't work. I'd like to get some ideas. Thanks.</p>
| Michael Rozenberg | 190,319 | <p>Another way.</p>
<p>After fully expanding, we need to prove that:
<span class="math-container">$$\sum_{cyc}(a^2b+a^2c)\geq2(a+b+c)=2\sum_{cyc}a^{\frac{5}{3}}b^{\frac{2}{3}}c^{\frac{2}{3}},$$</span> which is true by Muirhead because <span class="math-container">$$(2,1,0)\succ\left(\frac{5}{3},\frac{2}{3},\frac{2}{3}\right)$$</span> or by AM-GM:
<span class="math-container">$$\sum_{cyc}(a^2b+a^2c)\geq2\sum_{cyc}\sqrt{a^2b\cdot a^2c}=2(a+b+c).$$</span></p>
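<p><em>(Added sanity check, deterministic seed:)</em> sampling random positive $a,b$ and setting $c=1/(ab)$, the original inequality $\sum 1/(a+b+1)\le1$ never fails, and $a=b=c=1$ attains equality:</p>

```python
import random

random.seed(1)

def total(a, b, c):
    return 1 / (a + b + 1) + 1 / (b + c + 1) + 1 / (c + a + 1)

violations = 0
for _ in range(10_000):
    a = random.uniform(0.01, 100.0)
    b = random.uniform(0.01, 100.0)
    c = 1.0 / (a * b)                 # enforce the constraint abc = 1
    if total(a, b, c) > 1 + 1e-12:
        violations += 1

equality_case = total(1.0, 1.0, 1.0)  # a = b = c = 1 attains the bound
```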
|
1,678,687 | <p>Find the point $(x_0, y_0)$ on the line $ax + by = c$ that is closest to the origin. </p>
<p>According to this <a href="http://www.intmath.com/plane-analytic-geometry/perpendicular-distance-point-line.php" rel="nofollow noreferrer">source</a>, I thought that $\left( -\frac{ac}{a^{2}+b^{2}}, -\frac{bc}{a^{2}+b^{2}} \right)$ was the point but it doesn't seem to be correct. Thanks for any help.</p>
| lab bhattacharjee | 33,337 | <p>$$(h-0)^2+\left(\dfrac{c-ah}b\right)^2=\dfrac{(a^2+b^2)h^2-2cah+c^2}{b^2}$$</p>
<p>Now $(a^2+b^2)h^2-2cah=\left(h\sqrt{a^2+b^2}-\dfrac{ca}{\sqrt{a^2+b^2}}\right)^2-\dfrac{(ca)^2}{a^2+b^2}\ge-\dfrac{(ca)^2}{a^2+b^2}$
so the minimum value occurs exactly when equality holds, i.e. when $$h\sqrt{a^2+b^2}-\dfrac{ca}{\sqrt{a^2+b^2}}=0\iff h=?$$</p>
<p>Can you take it from here?</p>
|
1,678,687 | <p>Find the point $(x_0, y_0)$ on the line $ax + by = c$ that is closest to the origin. </p>
<p>According to this <a href="http://www.intmath.com/plane-analytic-geometry/perpendicular-distance-point-line.php" rel="nofollow noreferrer">source</a>, I thought that $\left( -\frac{ac}{a^{2}+b^{2}}, -\frac{bc}{a^{2}+b^{2}} \right)$ was the point but it doesn't seem to be correct. Thanks for any help.</p>
| GoodDeeds | 307,825 | <p>$$ax_0+by_0=c$$
Thus,
$$y_0=\frac{c-ax_0}{b}$$
The distance of $(x_0,y_0)$ from $(0,0)$ is, $d=\sqrt{x_0^2+y_0^2}$</p>
<p>Thus,
$$d=\sqrt{x_0^2+\left(\frac{c-ax_0}{b}\right)^2}=\frac{\sqrt{b^2x_0^2+c^2+a^2x_0^2-2acx_0}}{b}$$
To minimize $d$, taking the derivative with respect to $x_0$,
$$\frac{d}{dx_0}(d)=\frac{2(a^2+b^2)x_0-2ac}{2b\sqrt{b^2x_0^2+c^2+a^2x_0^2-2acx_0}}=0$$
Thus,
$$x_0=\frac{ac}{a^2+b^2}$$
Correspondingly,
$$y_0=\frac{bc}{a^2+b^2}$$</p>
<p>It can be verified that this is indeed the point of minimum. Intuitively, it is clear that it must be so as a point with minimum distance must exist while a point with maximum distance from the origin does not exist.</p>
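<p><em>(Added verification for sample coefficients:)</em> the point lies on the line, its squared distance is $c^2/(a^2+b^2)$, and no nearby point of the line does better — here the direction $(b,-a)$ parametrizes the line:</p>

```python
def closest(a, b, c):
    d = a * a + b * b
    return a * c / d, b * c / d

a, b, c = 3.0, 4.0, 10.0
x0, y0 = closest(a, b, c)

on_line = abs(a * x0 + b * y0 - c) < 1e-9
best = x0 * x0 + y0 * y0                      # equals c^2/(a^2+b^2) = 4 here
# other points of the line: (x0 + t*b, y0 - t*a) stays on ax + by = c
no_better = all(best <= (x0 + t * b) ** 2 + (y0 - t * a) ** 2 + 1e-12
                for t in (k / 100 for k in range(-100, 101)))
```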
|
3,827,422 | <p>Given a finite group <span class="math-container">$G$</span>, how do we know that there exists a map <span class="math-container">$\rho: G \rightarrow GL(V)$</span> such that <span class="math-container">$\rho(g_1\circ g_2) = \rho(g_1).\rho(g_2)$</span> for any <span class="math-container">$g_1, g_2\in G$</span>?</p>
<p>Intuitively, why does matrix multiplication always capture the properties of a group?</p>
| CyclotomicField | 464,974 | <p>Permutation matrices fully capture the essential features of the symmetric groups. So for <span class="math-container">$S_n$</span> I would take all of the <span class="math-container">$n!$</span> permutation matrices of size <span class="math-container">$n \times n$</span> and this allows me to ensure any finite group can be represented by matrices this way. Since all groups are subgroups of symmetric groups this allows us to realize the group structure for an arbitrary finite group.</p>
<p>This is however almost never the minimal representation, so while it ensures existence, we're typically interested in finding the lowest-dimensional representation, not an arbitrary one. You can see this immediately: <span class="math-container">$S_3$</span> only requires a two-dimensional faithful representation, since it is isomorphic to the dihedral group <span class="math-container">$D_6$</span>, the symmetry group of the equilateral triangle.</p>
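<p><em>(Added illustration in pure Python:)</em> building the $3\times3$ permutation matrices of $S_3$ and verifying the homomorphism property $\rho(\sigma\tau)=\rho(\sigma)\rho(\tau)$, with the convention $M_\sigma[i][j]=1$ iff $\sigma(j)=i$, so that the matrix product matches composition:</p>

```python
from itertools import permutations

def perm_matrix(p):
    # M[i][j] = 1 exactly when p(j) = i
    n = len(p)
    return tuple(tuple(1 if p[j] == i else 0 for j in range(n)) for i in range(n))

def matmul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

def compose(p, q):
    # (p ∘ q)(j) = p(q(j))
    return tuple(p[q[j]] for j in range(len(q)))

S3 = list(permutations(range(3)))
homomorphism = all(matmul(perm_matrix(p), perm_matrix(q)) == perm_matrix(compose(p, q))
                   for p in S3 for q in S3)
```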
|
904,074 | <p>Is this convergent or divergent?</p>
<p>$$\int_0^1 \frac{1}{\sin(x)}\mathrm dx $$</p>
<p>So little background to see if I am solid on this topic otherwise correct me please :) </p>
<p>To check for convergence I can look for a "bigger" function and check whether that is convergent; if yes, then for sure the one in question is too. But if the "bigger" function is not convergent, can we conclude that the function in question is divergent, or do we have to check for divergence too, i.e. find a "smaller" function which is divergent?</p>
<p>And for this question I have no Idea WHAT kind of function to compare with:/ </p>
| Mikasa | 8,581 | <p>You could use the following facts to find proper values of <span class="math-container">$A$</span> as well. Maybe it helps you for other integrands:</p>
<blockquote>
<p>Let <span class="math-container">$\lim_{x\to a^+}~(x-a)^pf(x)=A$</span>. Then:</p>
<ul>
<li><p>If <span class="math-container">$p<1$</span> and <span class="math-container">$A$</span> is finite then <span class="math-container">$\int_a^bfdx$</span> converges.</p>
</li>
<li><p>If <span class="math-container">$p\ge1$</span> and <span class="math-container">$A\neq0$</span> or <span class="math-container">$A=\infty$</span> then <span class="math-container">$\int_a^bfdx$</span> diverges.</p>
</li>
</ul>
</blockquote>
<p>Now you can see why @Lucian's short hint can guide you. Note that in the points above, <span class="math-container">$f(x)$</span> is unbounded only at the lower bound <span class="math-container">$a$</span>.</p>
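<p><em>(Added application to this integrand:)</em> with <span class="math-container">$a=0$</span>, <span class="math-container">$p=1$</span> one gets <span class="math-container">$A=\lim_{x\to0^+}x/\sin x=1\neq0$</span>, so the second point gives divergence. Numerically, the limit is visible and <span class="math-container">$\int_\varepsilon^1 dx/\sin x$</span> grows like <span class="math-container">$-\log\varepsilon$</span> (halving <span class="math-container">$\varepsilon$</span> adds about <span class="math-container">$\log 2$</span>):</p>

```python
import math

ratio = 1e-6 / math.sin(1e-6)  # x / sin x near 0+: tends to A = 1

def tail_integral(eps, steps=200_000):
    # midpoint rule for the proper integral from eps to 1 of dx / sin x
    h = (1.0 - eps) / steps
    return sum(h / math.sin(eps + (i + 0.5) * h) for i in range(steps))

growth = tail_integral(0.0005) - tail_integral(0.001)  # ≈ log 2 ≈ 0.693
```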
|
2,845,451 | <p>I was reading a number theory book and it was stated that $d^k+(a-d)^k=a[d^{k-1}-d^{k-2}(a-d)+ . . .+(a-d)^{k-1}]$ for $k$ odd. How did they arrive at this factorization? Is there an easy way to see it?</p>
| Ethan Bolker | 72,858 | <p>Start by understanding how to factor
$$
x^n - y^n .
$$</p>
<p>Presumably you know how to do that when $n=2$. For $n=3$ you can check that
$$
x^3 - y^3 = (x-y)(x^2 + xy + y^2)
$$</p>
<p>Now guess for higher powers.</p>
<p>Then see what happens if $n$ is odd and you replace $y$ by $(-y)$.</p>
|
2,845,451 | <p>I was reading a number theory book and it was stated that $d^k+(a-d)^k=a[d^{k-1}-d^{k-2}(a-d)+ . . .+(a-d)^{k-1}]$ for $k$ odd. How did they arrive at this factorization? Is there an easy way to see it?</p>
| asdf | 436,163 | <p>They used that</p>
<p>$$x^k+y^k=(x+y)(x^{k-1}-x^{k-2}y- \dots +xy^{k-2}+y^{k-1})$$</p>
<p>holds for $k$ odd, and have plugged in $x=d$, $y=a-d$.</p>
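<p><em>(Added spot check:)</em> the resulting identity $d^k+(a-d)^k=a\sum_{i=0}^{k-1}(-1)^i d^{\,k-1-i}(a-d)^i$ can be verified exactly over integers for several odd $k$:</p>

```python
def alternating_factor(a, d, k):
    # d^(k-1) - d^(k-2)(a-d) + ... + (a-d)^(k-1)
    return sum((-1) ** i * d ** (k - 1 - i) * (a - d) ** i for i in range(k))

holds = all(d ** k + (a - d) ** k == a * alternating_factor(a, d, k)
            for k in (1, 3, 5, 7)
            for a in range(-5, 6)
            for d in range(-5, 6))
```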
|
3,772,193 | <p>If I put the two vectors 1,0,0 and 0,1,0 next to each other</p>
<p><span class="math-container">$$\begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0 \end{pmatrix}$$</span></p>
<p>I can see that they are independent since I cannot write <span class="math-container">$(1,0,0)^t$</span> as <span class="math-container">$(0,1,0)^t$</span>.</p>
<p>But I've heard that "If you have a row of <span class="math-container">$0$</span>'s in your matrix, it's linearly dependent"</p>
<p>But the matrix above has a row of zeroes but is still linearly independent. Which means I'm understanding something wrong. What am I missing?</p>
| Arctic Char | 629,362 | <p>The correct statement is probably this one: If you have <span class="math-container">$n$</span> column vectors in the <span class="math-container">$n$</span>-dimensional space, so that when put one next to each other there is a row of <span class="math-container">$0$</span>'s in your matrix, then it's linearly dependent.</p>
<p>There are several ways to see why this is true. Let <span class="math-container">$A$</span> be the square matrix formed. First it is very easy to show that</p>
<p>"If <span class="math-container">$A$</span> has a column of zero's then the vectors are linearly dependent"</p>
<p>This is indeed easy: the condition implies that one of your vector is the zero vector.</p>
<p>To go back to the original statement, we use:</p>
<p>(1) The vectors are linearly dependent if and only if <span class="math-container">$\det A = 0$</span>, and</p>
<p>(2) <span class="math-container">$\det A = \det A^t$</span>.</p>
<p>Then if <span class="math-container">$A$</span> has a row of zeros, <span class="math-container">$A^t$</span> has a column of zeros, so the vectors formed by the columns of <span class="math-container">$A^t$</span> are linearly dependent. Then <span class="math-container">$\det A^t =0$</span> by (1). By (2), <span class="math-container">$\det A = 0$</span>, so the column vectors of <span class="math-container">$A$</span> are linearly dependent.</p>
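<p><em>(Added concrete instance:)</em> a hand-rolled $3\times3$ determinant confirms both steps — a row of zeros in $A$ becomes a column of zeros in $A^t$, and both determinants vanish:</p>

```python
def det3(M):
    # cofactor expansion along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def transpose(M):
    return [list(col) for col in zip(*M)]

A = [[1, 2, 3],
     [0, 0, 0],   # a row of zeros in A ...
     [4, 5, 6]]
dA, dAt = det3(A), det3(transpose(A))  # ... is a column of zeros in A^t
```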
|
2,048,962 | <blockquote>
<p><strong>Question:</strong> How would you solve this sinusoidal equation:</p>
<blockquote>
<p>Solve $5\cos(6x)+6=9$. Assume $n$ is an integer and the answers are in degrees.</p>
<ul>
<li><p>$-8.86+n\cdot 60$</p></li>
<li><p>$-3.54+n\cdot 60$</p></li>
<li><p>$3.54+n\cdot 60$</p></li>
<li><p>$8.86+n\cdot 60$</p></li>
<li><p>$15.13+ n\cdot 360$</p></li>
<li><p>$126.87+n\cdot 360$</p></li>
</ul>
</blockquote>
</blockquote>
<p>I'm sort of new to this. But I have tried to isolate the trigonometric parts, and I get$$\cos(6x)=\frac 35\tag{1}$$
But after this, I'm not sure what to do. Do I take the $\arccos$ of both sides? If so, what will $\arccos\frac 35$ evaluate to? I don't think it's going to be a "perfect" number such as $\dfrac \pi 3$.</p>
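<p><em>(An added numeric note on the step above:)</em> yes, taking $\arccos$ is the right move — it just doesn't give a "perfect" angle. In degrees, $\arccos\frac35\approx53.13°$, so $6x=\pm53.13°+360°n$, i.e. $x\approx\pm8.86°+60°n$, matching two of the listed options:</p>

```python
import math

angle = math.degrees(math.acos(3 / 5))  # ≈ 53.13°, not a special angle
x = angle / 6                           # ≈ 8.855°; the period is 360°/6 = 60°
```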
| Lalala | 1,022,719 | <p>Using the cosine law you need to set the values of b and c (we are just going to have 1 variable because b and c will be equal, so we will only use b.)(A would be 6x)<br>
<span class="math-container">$$b = (180 - 6x)/2$$</span>
and so the formula would be (if b and c are equal): <br>
<span class="math-container">$$(2(b^2)-36(x^2))/2(b^2)$$</span>
and according to wolfram-alpha, this cancels out to: <br>
<span class="math-container">$$1-(36(x^2)/2(b^2))$$</span>
and inputting the value for b (using wolfram-alpha again) we get : <br>
<span class="math-container">$$9x^2-540x+8100$$</span>
which is kind of a quadratic expression except it doesn't equal 0. That is the formula for <span class="math-container">$\cos(6x)$</span>. If you want the formula for <span class="math-container">$\cos(x)$</span> you divide that by 6. <br>
So you want that sort-of quadratic expression to equal 3/5.<br>
Guess what? You can turn it into a quadratic equation!: <br>
<span class="math-container">$$9x^2-540x+8099.4=0$$</span> <br>
So using the quadratic formula we get: <br>
<span class="math-container">$$\frac{540 \pm \sqrt{291600-291578.4}}{18}$$</span>
<span class="math-container">$$\frac{540 \pm \sqrt{21.6}}{18}$$</span>
<span class="math-container">$$\frac{540 \pm 4.64758...}{18}$$</span>
<span class="math-container">$$30 \pm 0.25819889$$</span>
so x is either 30.25819889 or 29.74180111.</p>
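<p>For comparison, here is a direct numeric check of the original equation (a hedged sketch that bypasses the triangle construction above): isolating the cosine gives $\cos(6x)=3/5$, so $x=\pm\tfrac16\arccos\tfrac35+n\cdot 60^\circ$, matching the $\pm 8.86+n\cdot 60$ answer choices.</p>

```python
import math

# Solve 5·cos(6x) + 6 = 9  =>  cos(6x) = 3/5.
# General solution: 6x = ±arccos(3/5) + 360°·n, i.e. x = ±arccos(3/5)/6 + 60°·n.
x = math.degrees(math.acos(3 / 5)) / 6
assert abs(x - 8.86) < 0.01  # agrees with the ±8.86 + n·60 options

# Every x of the form ±8.855...° + n·60° satisfies the original equation:
for n in range(-3, 4):
    for sign in (1, -1):
        xn = sign * x + 60 * n
        assert abs(5 * math.cos(math.radians(6 * xn)) + 6 - 9) < 1e-9
```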
|
2,439,069 | <p>$$\int_0^{\infty}\frac{dx}{2+\cosh (x)}$$</p>
<p>I'm not sure how to approach this problem. I tried substituting $\cosh (x) = \frac{e^x + e^{-x}}{2}$, and following the substitution $y = e^x$, I ended up with the integral </p>
<p>$$2 \cdot \int_1^{\infty} \frac{1}{(y+2)^2-3} dy$$</p>
<p>Which I wasn't able to evaluate, nor am I sure it is the best form to proceed. </p>
<p>Thanks!</p>
| George Coote | 445,167 | <p>An alternative approach for interest. With this integral it's relatively easy to rewrite $\cosh$ in terms of e, but in cases where it's less feasible, you can use the half-angle substitution for the hyperbolic tangent. </p>
<p>Using that, we have,
$$t = \tanh \frac x 2$$
$$\mathrm dx = \frac 2 {1-t^2} \mathrm dt$$
$$\cosh x = \frac{1+t^2}{1-t^2}$$</p>
<p>So for our limits we have, </p>
<p>$$x = 0 \implies t = \tanh(0) = 0$$
$$x \to \infty \implies t = \lim_{x \to \infty} \tanh\left(\frac x 2\right) = 1$$</p>
<p>And for our integrand we have, </p>
<p>$$\int_0^1 \frac {\mathrm dt} {2 + \frac {1+t^2}{1-t^2}} \cdot \frac 2 {1-t^2}$$
$$2\int_0^1 \frac {\mathrm dt} {2 - 2t^2 + 1 + t^2}$$
$$2\int_0^1 \frac {\mathrm dt} {3 - t^2}$$</p>
<p>Which is easily done with partial fractions, or another substitution. You will arrive at the same result as the others. If you want derivations of the above, I can edit this answer to include them. </p>
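<p>As a numeric cross-check (a sketch only: composite Simpson's rule, with the original integral truncated at an arbitrary upper limit of 40, where the integrand is negligibly small), both forms agree with the closed-form value $\frac{1}{\sqrt 3}\ln(2+\sqrt 3)$ that partial fractions produce.</p>

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule (n must be even).
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# Original integral, truncated at x = 40 (the integrand decays like e^{-x}).
orig = simpson(lambda x: 1 / (2 + math.cosh(x)), 0, 40)

# After the t = tanh(x/2) substitution: 2 · ∫_0^1 dt / (3 - t²).
subst = simpson(lambda t: 2 / (3 - t * t), 0, 1)

# Closed form from partial fractions: (1/√3) · ln(2 + √3).
exact = math.log(2 + math.sqrt(3)) / math.sqrt(3)

assert abs(orig - subst) < 1e-6
assert abs(subst - exact) < 1e-9
```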
|
4,155,792 | <p>I have seen many math books, and some of them, very good books, that say that <span class="math-container">$0!=1$</span> 'by convention'. I think that <span class="math-container">$0!$</span> <em>must</em> be <span class="math-container">$1$</span> because it is the product of the empty set. That is, for <span class="math-container">$a\neq 0$</span>,
<span class="math-container">$$a^0=0!=\prod\emptyset=1$$</span>
What do you think?</p>
<p>EDIT: I'm not asking for a proof that <span class="math-container">$0!=1$</span>, I really don't be needed to prove that. The question is, <em>again</em>, 'what do you think?', that is, do you think that is more convenient to define <span class="math-container">$0!=1$</span> by... well, convention, or is it better to see <span class="math-container">$0!$</span> as an empty product?</p>
<p>In other words, this is a <strong>soft</strong> question.</p>
| Shinrin-Yoku | 789,929 | <p>One way to define <span class="math-container">$n!$</span> is that it’s the number of bijections from a set with <span class="math-container">$n$</span> elements to itself. If the set is empty then the number of elements is <span class="math-container">$0$</span>, hence <span class="math-container">$0!=1$</span>, since there is only one bijection from the empty set to itself.<br> It is also notationally useful: for example, the power series of <span class="math-container">$\exp(x)$</span> is <span class="math-container">$\sum_{n=0}^{\infty}x^n/n!$</span>, which makes use of <span class="math-container">$0!=1$</span>.<br> Another advantage is in combinatorics: the number of ways to arrange <span class="math-container">$n$</span> objects is <span class="math-container">$n!$</span>, so the number of ways to arrange <span class="math-container">$0$</span> objects is <span class="math-container">$1$</span>, which makes sense.</p>
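<p>These three points can be checked mechanically in a few lines (a hedged illustration; note that <code>math.prod</code> of an empty list being 1 is exactly the empty-product convention):</p>

```python
import math
from itertools import permutations

# 0! as the number of bijections of the empty set with itself:
assert len(list(permutations(()))) == 1   # the single "empty" arrangement

# 0! as an empty product (math.prod of no factors is 1 by definition):
assert math.prod([]) == 1 == math.factorial(0)

# And the exp(x) power series needs the n = 0 term to be x^0/0! = 1:
x = 0.5
series = sum(x**n / math.factorial(n) for n in range(20))
assert abs(series - math.exp(x)) < 1e-12
```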
|
282,139 | <p>Assume that $\gamma$ is an analytic simple closed curve in $\mathbb{R}^2$ which surrounds origin.</p>
<blockquote>
<p>Is there a polynomial vector field on the plane which is tangent to $\gamma$? In the other word, can an arbitrary analytic simple closed curve be realized as a closed orbit or a limit cycle of a polynomial vector field on the plane? </p>
</blockquote>
| Will Sawin | 18,060 | <p>If every homology class is a winning loop for one of the two players, and each player has at least one winning loop, then the game cannot end in a draw.</p>
<p>Proof: If one player has a particular loop, then the other player cannot have any of the other loops, so they either have no loops or the same loop. If both players have the same loops, one player wins. So either one player wins or one player has no loops. It suffices to show that when one player has no loops, the other player wins.</p>
<p>If that player has any closed loops that are homologous to zero, then they bound a disc, and we can remove all of the other player's hexes in that disc. So we can assume that player's hexes are actually a union of discs. Then the other player's hexes are a punctured torus, which maps surjectively on first homology to a torus, so the other player has every loop and wins.</p>
<hr>
<p>Using strategy stealing & symmetry we can prove the existence of winning strategies in some cases:</p>
<p>If there is an automorphism of the board that sends all the second player's loops to some of the first player's loops, then the first player has a strategy to guarantee at least a draw. (Proof: If not, then the second player has a winning strategy, which the first player can steal.)</p>
<p>If there is an involution of the board with no fixed points that sends the first player's loops to some of the second-player's loops, the second player can guarantee at least a draw. (Proof: Each turn, apply that involution to the first player's move and then play it.)</p>
<p>If a draw can be ruled out, then these scenarios imply the existence of winning strategies. </p>
|
519,715 | <p>I feel it is true that $xy=1$ implies $yx=1$ in the real Clifford Algebra $C_n$ (with respect to the quadratic form $Q=-x_1^2-\cdots-x_n^2$), but I cannot prove it. Is it true?</p>
| Doug Spoonwood | 11,300 | <p>It oftentimes works out easiest to start by hypothesizing the antecedent or "left side" of the conditionals you want to prove. Then use the assumptions and the hypothesis to help you deduce the conclusion, and then use conditional introduction. So, if you want to prove (p→r), hypothesize p, and use p along with the assumptions given. Then deduce the consequent r. Finally use conditional introduction to get to the desired well-formed formula.</p>
|
519,715 | <p>I feel it is true that $xy=1$ implies $yx=1$ in the real Clifford Algebra $C_n$ (with respect to the quadratic form $Q=-x_1^2-\cdots-x_n^2$), but I cannot prove it. Is it true?</p>
| Peter Smith | 35,151 | <p>Here's a <strong>hint</strong> for the second question.</p>
<p>You need to think strategically. You are aiming to prove a conditional. The default way of establishing a conditional $A \to C$ is to temporarily assume the antecedent $A$ and aim for the consequent $C$. If you can get there, you can then discharge the assumption $A$ and derive the conditional.</p>
<p>So the expected overall shape of the proof will be (inserting what look like necessary brackets)</p>
<blockquote>
<p>$p \to (\neg q \leftrightarrow (r \lor s)) \quad\quad\quad\text{Premiss}\\
\neg s \quad\quad\quad\quad\quad\quad\quad\quad\quad\ \ \ \text{Premiss}\\
\quad\quad|\quad (p \land \neg q)\quad\quad\quad\quad\text{Temporary assumption}\\
\quad\quad|\quad \ldots\\
\quad\quad|\quad r\\
(p \land \neg q) \to r \quad\quad\quad\ \ \quad\quad\text{Discharging temporary assumption}\\
$</p>
</blockquote>
<p>It is obvious what you have to do next. Disassemble the conjunction!</p>
<blockquote>
<p>$p \to (\neg q \leftrightarrow (r \lor s)) \\
\neg s \quad\quad\quad\quad\quad\quad\quad\quad\quad\ \\
\quad\quad|\quad (p \land \neg q) \\
\quad\quad|\quad p\\
\quad\quad|\quad \neg q\\
\quad\quad|\quad \ldots\\
\quad\quad|\quad r\\
(p \land \neg q) \to r \quad\quad\quad\ \ \\
$</p>
</blockquote>
<p>Where do you go from here? You'll have to start using the initial premisses. So which to use? Again it is obvious. Start using the initial premiss and use modus ponens, and you should soon get to</p>
<blockquote>
<p>$p \to (\neg q \leftrightarrow (r \lor s)) \\
\neg s \quad\quad\quad\quad\quad\quad\quad\quad\quad\ \\
\quad\quad|\quad (p \land \neg q) \\
\quad\quad|\quad p\\
\quad\quad|\quad \neg q\\
\quad\quad|\quad \ldots\\
\quad\quad|\quad (r \lor s)\\
\quad\quad|\quad \ldots\\
\quad\quad|\quad r\\
(p \land \neg q) \to r \quad\quad\quad\ \ \\
$</p>
</blockquote>
<p>So now you need to get from $\neg s$ and $r \lor s$ to $r$. How you do this will depend on the details of your ND system, but it is a standard "bookwork" proof. So you can fill in the dots yourself, I hope!</p>
|
78,545 | <p>I know that ListLinePlot allows plotting data, but I would like to compare my data with some analytic function in the same plot. Is it possible to ListLinePlot data and plot a function at the same time?</p>
| Mark Adler | 94 | <p>Use <code>Show[]</code> to combine plots. E.g. <code>Show[plot1, plot2]</code>.</p>
|
78,545 | <p>I know that ListLinePlot allows plotting data, but I would like to compare my data with some analytic function in the same plot. Is it possible to ListLinePlot data and plot a function at the same time?</p>
| ciao | 11,467 | <p>A trivial way (many ways to do this...)</p>
<pre><code>data = Range@100;
myF[x_] := x^1/2 + Sin[x];
ListLinePlot[{data, myF /@ data}]
</code></pre>
<p><img src="https://i.stack.imgur.com/cfFEC.png" alt="enter image description here"></p>
|
949,853 | <p>If $\alpha: [a,b] \rightarrow \mathbb{R}$ is an increasing function, we can define the Riemann-Stieltjes integral $$\int_a^b f d \alpha$$ Does the function $\langle f,g\rangle = \int_a^b fg d\alpha$ define an inner product on the $\mathbb{R}$-space of all Riemann-Stieltjes integrable functions on $[a,b]$ (with respect to $\alpha$)? It seems to me that the condition $\langle f,f\rangle = 0$ implies $f= 0$ is false. For example if we take the Riemann integral and take $f(x)$ to be $0$ everywhere except $1$ at $(b-a)/2$, then $\langle f,f\rangle = \int_a^b f(x)^2dx = 0$, but $f \neq 0$. </p>
<p>If it is not an inner product, can we still get something like the Cauchy-Schwarz inequality, i.e. $$\Biggl[\int_a^b |fg| d \alpha\Biggr]^2 \leq \Biggl(\int_a^b |f|^2 d \alpha\Biggr) \Biggl(\int_a^b |g|^2 d \alpha\Biggr)\,?$$</p>
| bryanj | 54,886 | <p>If you're just trying to get the Schwarz inequality, I think you can look at it like this:</p>
<p>If both $\int |f|^2$ and $\int |g|^2$ are non-zero, the proof that involves dividing by $\langle f, f\rangle$ or $\langle g, g\rangle$ goes through.</p>
<p>So say we're worried about the situation where $\int |f|^2 = 0$.<br>
Note that for every interval $I$ in the range of integration, the infimum of the values $|f|^2$ needs to be zero; otherwise $|f|$ is bounded away from zero on that interval and the integral would be nonzero.</p>
<p>If $g$ is bounded, then for each interval $I$ and each $\epsilon$ you can find some $x$ in the interval so that $0 \le|f(x)g(x)| < \epsilon$.</p>
<p>Looking at the lower Darboux sums for $|fg|$, we see that all lower sums are zero, so that the integral $\int |fg|$ is zero.</p>
<p>And so the Schwarz inequality holds in this case, too.</p>
|
2,598,912 | <p>Let $f$ be some function in $L_{loc}^1(\mathbb{R})$ such that, for some $a \in \mathbb{R}$,</p>
<p>$$\int_{|x| \leq r} |f(x)|dx \leq (r+1)^a$$</p>
<p>for all $r \geq 0$. Show that $f(x)e^{-|tx|} \in L^1(\mathbb{R})$ for all $t \in \mathbb{R} \setminus \{0\}$. </p>
<p>I'm having a hard time finding use of the bound described above. Any help would be appreciated. </p>
| Did | 6,179 | <p>Assume without loss of generality that $f\geqslant0$ everywhere and that $t>0$, then note that, for every $x$, $$e^{-t|x|}=\int_{|x|}^\infty te^{-tr}dr$$ hence, applying <a href="https://en.wikipedia.org/wiki/Fubini%27s_theorem#Tonelli's_theorem_for_non-negative_functions" rel="nofollow noreferrer">Tonelli's theorem</a> (aka Fubini for nonnegative functions), one gets $$\int_\mathbb Rf(x)e^{-t|x|}dx=t\int_\mathbb R\int_{|x|}^\infty f(x)e^{-tr}dr\,dx=t\int_0^\infty e^{-tr}\left(\int_{|x|\leqslant r}f(x)dx\right)dr$$ Thanks to the hypothesis about the innermost integrals, one gets $$\int_\mathbb Rf(x)e^{-t|x|}dx\leqslant t\int_0^\infty e^{-tr}(r+1)^adr$$ The last integral obviously converges hence we are done (with an explicit upper bound if need be).</p>
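<p>The key identity $e^{-t|x|}=\int_{|x|}^\infty te^{-tr}\,dr$ used to set up Tonelli's theorem can be sanity-checked numerically (a sketch using a midpoint rule; the upper limit 60 and the sample values of $t$ and $x$ are arbitrary choices):</p>

```python
import math

def tail_integral(t, x, n=100000, upper=60.0):
    # Midpoint rule for ∫_{|x|}^upper t·e^{-t·r} dr (the tail beyond `upper` is negligible).
    a = abs(x)
    h = (upper - a) / n
    return sum(t * math.exp(-t * (a + (i + 0.5) * h)) * h for i in range(n))

# Check e^{-t|x|} = ∫_{|x|}^∞ t·e^{-t·r} dr at several (t, x):
for t in (0.5, 1.0, 3.0):
    for x in (-2.0, 0.0, 1.5):
        assert abs(tail_integral(t, x) - math.exp(-t * abs(x))) < 1e-5
```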
|
972,385 | <p>Could someone please explain this question to me? I know that such integers do NOT exist, but I could not prove it. "Either solve it or give a brief explanation as to why it is impossible." Thank you!</p>
<p>Find integers $s$ and $t$ such that $15s + 11t = 1$.</p>
| GDumphart | 124,970 | <p><a href="http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm" rel="nofollow">http://en.wikipedia.org/wiki/Extended_Euclidean_algorithm</a></p>
<p>$$3 \cdot 15 -4 \cdot 11 = 1$$</p>
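<p>A short sketch of the extended Euclidean algorithm that produces these coefficients (the function name <code>egcd</code> is just an illustrative choice):</p>

```python
def egcd(a, b):
    """Extended Euclidean algorithm: returns (g, s, t) with g = gcd(a, b) = a*s + b*t."""
    old_r, r = a, b
    old_s, s = 1, 0
    old_t, t = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    return old_r, old_s, old_t

g, s, t = egcd(15, 11)
assert (g, s, t) == (1, 3, -4)   # gcd(15, 11) = 1 = 3·15 + (-4)·11
assert 15 * s + 11 * t == 1
```

Since $\gcd(15,11)=1$, such integers do exist, contrary to the guess in the question.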
|
1,809,500 | <p>Prove that $k!>(\frac{k}{e})^{k}$. </p>
<p>It is known that $e^{k}>(1+k)$. So if we multiply both sides by $k!$, we get $k!e^{k}>(k+1)!$. Also $k^k>k!$. Now how to proceed?</p>
| LiorGolan | 332,029 | <p>For some reason I cannot comment so I'll share my thought here: <a href="https://en.wikipedia.org/wiki/Stirling%27s_approximation" rel="nofollow">Stirling's approximation</a></p>
|
485,407 | <p>I have always used equations for the line (y=a + bx) in R2. Recently I came upon this thing called parametric equations. I cannot grasp the difference between them and the equations for lines that I used before. Of course, I can't understand the plane and the parametric equations for it. Any good example or reference for having a good intuition about them. I'm a first year student in economics, if possible, an example in economics will be of great help.</p>
<p>I recently saw a video on khan academy about parametric equations where both x and y depend on time, time being the parameter. Why not just make a 3d graph with 3 variables: x, y and t instead of a parametric equation in 2d?</p>
| abiessu | 86,846 | <p>A set of parametric equations is called by that name because they have a parameter. Such equations are often used when a graph might not have certain "typical" and/or "nice" properties like being one-to-one or onto, e.g., a circle.</p>
<p>For a circle, a set of parametric equations might look like this:</p>
<p>$$y=a\sin t$$
$$x=a\cos t$$</p>
<p>Then the circle of radius $a$ (centered at the origin) is generated by varying the value of $t$ and applying that value within each of the equations.</p>
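<p>A quick way to see this concretely (a hedged sketch): sample values of $t$, apply both equations, and confirm that every generated point satisfies $x^2+y^2=a^2$.</p>

```python
import math

# Trace the parametric circle x = a·cos t, y = a·sin t and check that each
# point really lies on the circle of radius a, i.e. x² + y² = a².
a = 5.0
points = [(a * math.cos(t), a * math.sin(t))
          for t in [2 * math.pi * k / 100 for k in range(100)]]

for x, y in points:
    assert abs(x * x + y * y - a * a) < 1e-9
```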
|
485,407 | <p>I have always used equations for the line (y=a + bx) in R2. Recently I came upon this thing called parametric equations. I cannot grasp the difference between them and the equations for lines that I used before. Of course, I can't understand the plane and the parametric equations for it. Any good example or reference for having a good intuition about them. I'm a first year student in economics, if possible, an example in economics will be of great help.</p>
<p>I recently saw a video on khan academy about parametric equations where both x and y depend on time, time being the parameter. Why not just make a 3d graph with 3 variables: x, y and t instead of a parametric equation in 2d?</p>
| Blue | 409 | <p>A good way to look at it is this ...</p>
<blockquote>
<p><strong>Parametric equations let you draw curves with an Etch-a-Sketch.</strong></p>
</blockquote>
<p><img src="https://i.stack.imgur.com/N3gIs.jpg" alt="enter image description here"></p>
<p>(Image credit: <a href="http://pikajane.deviantart.com/art/Continuous-line-etch-a-sketch-182792806" rel="nofollow noreferrer">pikajane @ deviantart</a> )</p>
<p>The above illustration is a bit extreme, but in theory you could re-create the image by coordinating the toy's knob-turns in just the right way. <em>This is what parametric equations help you do.</em></p>
<hr>
<p>Imagine giving control of each Etch-a-Sketch knob to a separate person, Lars Rue for Left-Right, and Ursula Dobbs for Up-Down. And imagine tasking LR and UD with drawing a perfect circle. Chances are, this wouldn't go well at first ---or maybe at all-- with each kid anticipating and/or reacting to the other's movements. There'd be lots of squiggles and back-tracks ... and perhaps more than a few colorful exclamations.</p>
<p>Now imagine setting a metronome nearby. <em>Tick ... tock ... tack ... tuck ...</em>. With any luck LR and UD will stop paying that much attention to each other, and just pay attention to the metronome. After a little strategizing, the kids might decide to start their circle at "$(1,0)$" and proceed thusly:</p>
<ul>
<li><em>tick</em> : LR turns his knob leftward, and UD turns hers upward.</li>
<li><em>tock</em> : LR keeps going left, and UD goes down.</li>
<li><em>tack</em> : LR reverses, going right, while UD keeps moving downward.</li>
<li><em>tuck</em> : LR keeps right, and UD switches to up.</li>
<li>Repeat.</li>
</ul>
<p>With some practice, LR and UD learn better how far to move their knobs, how to <em>ease into</em> and <em>out of</em> each change of direction, settling into nice, fluid motions that are essentially the same for each kid, but "out of step" by one tick of the metronome. The better they get, and the more-precisely the circle gets traced, the more the kids find that their rhythmic left-rights and up-downs exactly mimic the rise-falls of the (co)sine wave.</p>
<p>Importantly: Once the kids get really good at tracing the circle, they <em>don't have to look at it ---or each other--- any more</em>. Just listening to the <em>tick ... tock ... tack ... tuck ...</em> of the metronome is all they need. Their motions are governed by <em>time</em>.</p>
<p>But notice: <em>time</em> is not part of the drawing itself. It doesn't measure horizontal distance, or vertical distance. It's very much an "other", so its role is as a "para-meter" (literally, "beside measurement", or better here "other measurement").</p>
<p>By introducing <em>time</em>, LR (controlling the $x$ coordinate of a point) and UD (controlling the $y$ coordinate) are able to work <em>completely independently</em> to draw their circle. That's what parametric equations <em>do</em>: replace the relationship between $x$ and $y$ with dual relationships "$x$ at time $t$" and "$y$ at time $t$". Being able to deal with coordinates independently has plenty of advantages. (In physics, we notice that a cannonball's flight path is a parabola, because its up-down-ness is affected by gravity, whereas its left-right-ness is not.)</p>
<hr>
<p>It's worth noting that the "good way to look at it" can be shortened:</p>
<blockquote>
<p><strong>Parametric equations let you draw curves.</strong></p>
</blockquote>
<p>The circle equation $x^2+y^2=25$ expresses a <em>relation</em> among the $x$ and $y$ coordinates of any point on the curve. The point $(0,5)$ is on the curve because $0^2 + 5^2 = 25$. The point $(-3, 4)$ is on the curve because $(-3)^2+4^2=25$. A computer (or <a href="https://math.stackexchange.com/questions/1730/how-do-you-define-functions-for-non-mathematicians/1744#1744">Function Monkey</a>) might plot the circle by taking $x$-values in sequence from left-to-right, computing the corresponding $y$-values (getting two at a time, except at $x=\pm 5$), but this ignores the inherent <em>circle-ness</em> of the circle.</p>
<p>Parametric equations, on the other hand, turn a <em>static picture</em> of points satisfying some Pythagorean relation, into a <em>dynamic path</em> that does loop-de-loops <em>around</em> the circle. This has advantages, too ... not the least of which is that dynamic paths are just more fun.</p>
<p>Consider the <a href="http://en.wikipedia.org/wiki/Cardioid" rel="nofollow noreferrer">Cardioid</a>, which is <em>drawn</em> (in red) like this ...</p>
<p><img src="https://i.stack.imgur.com/6eCcU.gif" alt="enter image description here"></p>
<p>(Image credit: <a href="http://pl.wikipedia.org/wiki/Wikipedysta:WojciechSwiderski" rel="nofollow noreferrer">Wojciech Swiderski via Wikipedia</a>)</p>
<p>... according to these parametric equations:
$$\begin{align}
x &= 2 \cos t - \cos 2 t \\
y &= 2 \sin t - \sin 2 t
\end{align}$$</p>
<p>The cardioid curve <em>happens</em> to contain all (and only) the points satisfying this relation:
$$\left( x^2 + y^2 - 1 \right)^2 = 4 \left((x-1)^2 + y^2 \right)$$
but really ... how helpful is <em>that</em> to understanding the curve?</p>
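<p>(Still, the two descriptions do agree: a quick numeric sketch confirms that every point traced by the parametric equations satisfies the implicit relation.)</p>

```python
import math

def cardioid(t):
    # Parametric cardioid: x = 2cos t - cos 2t, y = 2sin t - sin 2t
    return 2 * math.cos(t) - math.cos(2 * t), 2 * math.sin(t) - math.sin(2 * t)

# Every traced point should satisfy (x² + y² - 1)² = 4((x - 1)² + y²):
for k in range(360):
    x, y = cardioid(math.radians(k))
    lhs = (x * x + y * y - 1) ** 2
    rhs = 4 * ((x - 1) ** 2 + y * y)
    assert abs(lhs - rhs) < 1e-9
```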
<hr>
<p>Whoops ... This answer seems to have turned into a big wall of text. I'll stop now. :)</p>
|
402,310 | <p>Vector math is something I find very interesting. However, we have never been told the link between vectors in physics (usually represented as arrows, e.g. a force vector) and in algebra (e.g. represented like a column matrix). It was really never explained well in classes.</p>
<p>Here are the things I can't wrap my head around:</p>
<ul>
<li>How can a vector (starting from the algebraic definition) be represented as an arrow? Is it correct to assume that a vector (in a 2-dimensional space) $v = [1,1]$ could be represented as an arrow from the origin $[0,0]$ to the point $[1,1]$?</li>
<li>If the above assumption is correct, what does it mean in the physics representation to normalize a vector?</li>
<li>If I have a vector $[1,1]$, would the vector $[-1,1]$ be orthogonal to that first vector? (Because if you draw the arrows they are perpendicular).</li>
<li>How can one translate an object along a vector? Is that simply scalar addition?</li>
</ul>
<p>These questions probably sound really odd, but they come from a lack of decent explanation in both physics and algebra.</p>
| Max P | 79,458 | <p>You can represent a vector as an arrow in cartesian coordinates by drawing an arrow from (0,0) to the vector (row vector), for example <3,4> taken as the point (3,4) in the plane. In other words, your first bullet point is correct.</p>
<p>Vector normalization is the process by which one takes an arbitrary vector (a, b) and converts it to a new vector (a', b') where the length of (a', b') is 1.
For example, normalize(3, 4) = (3/5, 4/5). And we can verify that |(3/5, 4/5)| = sqrt(9/25 + 16/25) = sqrt(25/25) = 1.</p>
<p>That is correct. But keep in mind that for any vector (x, y), the vector (-x, y) will only be perpendicular to (x, y) if |x| = |y|, since their dot product is -x^2 + y^2.</p>
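<p>Both points, normalization and the dot-product test for perpendicularity, are easy to check in code (a small sketch; the function names are illustrative):</p>

```python
import math

def normalize(v):
    # Scale a 2-D vector to unit length (assumes v is not the zero vector).
    length = math.hypot(*v)
    return (v[0] / length, v[1] / length)

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

u = normalize((3, 4))
assert u == (0.6, 0.8)
assert abs(math.hypot(*u) - 1) < 1e-12      # unit length

# [1, 1] and [-1, 1] are orthogonal: their dot product is zero.
assert dot((1, 1), (-1, 1)) == 0
```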
|
50,406 | <p>This is a question about support of modules under extension of scalars.</p>
<p>Let $f \colon A \to B$ be a homomorphism of commutative rings (with unity), and let $M$ be a finitely generated $A$-module.
Recall that the <em>support</em> of $M$ is the set of prime ideals $\mathfrak{p}$ of $A$ such that the localization $M_{\mathfrak{p}}$
is nonzero.
Then
$\mathrm{Supp} _B(B \otimes_A M) = f^{*-1}(\mathrm{Supp}_A(M))$,
the set of prime ideals of $B$ whose contractions are in the support of $M$. </p>
<p>The $\subseteq$ containment is true for any $M$.
What's an obvious example of a non-finitely generated module where the other containment doesn't hold?</p>
| Angelo | 4,790 | <p>Take $A = \mathbb Z$, $M = \mathbb Q$, $B = \mathbb Z/p\mathbb Z$, where $p$ is a prime.</p>
|
85,651 | <p>Let $\Gamma$ be one of the classical congruence subgroups $\Gamma_0(N)$, $\Gamma_1(N)$ and $\Gamma(N)$ of $SL(2, \mathbb{Z})$.</p>
<p>How does the lower bound for the length of primitive geodesics on $\Gamma \backslash \mathbb{H}$ depend on $N$ as $N \rightarrow \infty$?</p>
<p>Any suggestions?</p>
| Ian Agol | 1,345 | <p>For a hyperbolic element $A\in SL(2,\mathbb{Z})$, the length of the closed geodesic is given by $\ln[(tr^2(A)-2+\sqrt{tr^4(A)-4tr^2(A)})/2]$, and this is monotonic in $|tr(A)|$ for $|tr(A)|>2$. For $A\in \Gamma(N),\Gamma_1(N)$, we have $tr(A)\equiv 2 (\mod N)$, and $tr(A)\neq \pm 2$, so the smallest that $|tr(A)|$ can be is when $|tr(A)|=N-2$ (I suppose for $N>4$). This gives a lower bound on the shortest geodesic for $\Gamma_1(N)$. This is realized by the matrix </p>
<p>$$A=\left[\begin{matrix} 1-N & 1 \\ -N & 1 \end{matrix}\right]$$</p>
<p>For $\Gamma(N)$, one can obtain a better lower bound. Consider the matrix $$A=\left[\begin{matrix} 1+aN & bN \\ cN & 1+dN \end{matrix}\right]$$ with $\det(A)=1$. Then we have $(1+aN)(1+dN)-bcN^2=1$, so $a+d+(ad-bc)N=0$. This implies that $a+d\equiv 0(\mod N)$, so $tr(A)=2+(a+d)N \equiv 2 (\mod N^2)$. Thus, we get a lower bound of $|tr(A)|\geq N^2-2$. This is realized by the matrix </p>
<p>$$A=\left[\begin{matrix} 1-N^2 & N \\ -N & 1 \end{matrix}\right]$$</p>
<p><strong>Edit:</strong> (I'm modifying the answer to address Vitali's question in the comments below).</p>
<p>For a matrix $A\in \Gamma_0(N)$, we have
$$A=\left[\begin{matrix} a & b \\ cN & d \end{matrix}\right]$$ with $\det(A)=1$.
Then $ad-bcN=1$ implies $ad \equiv 1(\mod N)$. We want to minimize $tr(A)=a+d$ subject to the constraint $ad\equiv 1(\mod N)$. Conversely, if $ad=1+kN$ for some $k$, then the matrix </p>
<p>$$A=\left[\begin{matrix} a & k \\ N & d \end{matrix}\right]\in \Gamma_0(N)$$
has trace $a+d$. So the minimal trace of a hyperbolic element in $\Gamma_0(N)$ is given by $\min \{ a+d >2 | ad\equiv 1 (\mod N)\}$. </p>
<p>Let's reformulate this problem.
$ad\equiv 1(\mod N)$ is equivalent to the characteristic polynomial $\lambda^2-tr(A)\lambda+1\equiv(\lambda-a)(\lambda-d) (\mod N)$, i.e. the characteristic polynomial of $A$ reduces $(\mod N)$. So we want to minimize
$\min \{ t > 2 | \lambda^2-t\lambda+1 \equiv 0 (\mod N), some \lambda \}$. </p>
<p>If $t$ is even, then we complete the square to get $(\lambda-t/2)^2 \equiv t^2/4-1 (\mod N)$, that is $t^2/4-1$ is a quadratic residue $(\mod N)$. If $t$ is odd, then $N$ must be odd if $\lambda^2-t\lambda+1\equiv 0 (\mod N)$, so multiplying by $4$, this is equivalent to $(2\lambda-t)^2\equiv t^2-4 (\mod N)$.
Thus, the minimal trace is given by
$\min \{ t>2 | t^2-4$ is a quadratic residue $(\mod N)$, $N$ odd, or $t^2/4-1$ is a quadratic residue $(\mod N)$, $N$ even $\}$.</p>
<p>Thus, since there are infinitely many $N$ such that $3^2-4=5$ is a quadratic residue $(\mod N)$ (e.g. the sequence $N=a^2-3a+1$), we have that the systole does not approach $\infty$. </p>
<p>Also, the systoles are unbounded from above. To see this, note that if $j$ is not a quadratic residue $(\mod N)$, then it is not a quadratic residue $(\mod kN)$ for any $k$. For $t>2$, choose $n(t)$ such that $t^2-4$ is not a quadratic residue $(\mod n(t))$. Then the number $N(T)=n(3)n(4)\cdots n(T)$ has the property that the minimal trace of $\Gamma_0(N(T))$ is bigger than $T$. In particular, the systole of $\Gamma_0(N!)$ $\to \infty$. </p>
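<p>As a small sanity check of the two explicit matrices above (a sketch; $N=7$ is an arbitrary choice), both have determinant $1$, with trace $\equiv 2$ modulo $N$ and modulo $N^2$ respectively:</p>

```python
# Check the explicit matrices from the answer (N is arbitrary; 7 here).
N = 7

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def trace2(m):
    return m[0][0] + m[1][1]

# Γ₁(N) example: determinant 1, trace 2 − N ≡ 2 (mod N), |trace| = N − 2.
A1 = [[1 - N, 1], [-N, 1]]
assert det2(A1) == 1 and (trace2(A1) - 2) % N == 0 and abs(trace2(A1)) == N - 2

# Γ(N) example: determinant 1, trace 2 − N² ≡ 2 (mod N²).
A2 = [[1 - N * N, N], [-N, 1]]
assert det2(A2) == 1 and (trace2(A2) - 2) % (N * N) == 0
```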
|
2,903,110 | <p>How do I prove that the function $$f:(0,1)\rightarrow \mathbb R$$ defined by:</p>
<p>$$f(x) = \frac{-2x+1}{(2x-1)^2-1}$$</p>
<p>is onto?</p>
| user | 505,767 | <p>Note that </p>
<ul>
<li>$\lim_{x\to 0^+} f(x)=-\infty$</li>
<li>$\lim_{x\to 1^-} f(x)=\infty$</li>
</ul>
<p>therefore since $f(x)$ is continuous for $x\in(0,1)$ by IVT we have that $f(x)$ is onto.</p>
<p>Moreover note that for $x\in(0,1)$ we have</p>
<p>$$f'(x)=\frac{2x^2-2x+1}{4x^2(x-1)^2}> 0$$</p>
<p>therefore it is strictly increasing and also one to one on that interval.</p>
<p>Therefore $f(x):(0,1)\to \mathbb{R}$ is also bijective and invertible.</p>
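<p>The derivative can be double-checked numerically (a quick sketch comparing the quotient-rule result against central finite differences):</p>

```python
import math

def f(x):
    # f(x) = (-2x + 1) / ((2x - 1)² - 1), defined on (0, 1)
    return (-2 * x + 1) / ((2 * x - 1) ** 2 - 1)

def fprime(x):
    # Derivative via the quotient rule: (2x² − 2x + 1) / (4x²(x − 1)²)
    return (2 * x * x - 2 * x + 1) / (4 * x * x * (x - 1) ** 2)

# Compare against a central finite difference at several interior points:
h = 1e-6
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-4 * max(1.0, abs(fprime(x)))
    assert fprime(x) > 0  # f is strictly increasing on (0, 1)
```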
|
2,075 | <p>I've come across so many posts here and on the "main" math-SE site that voice complaints, frustrations, pet-peeves, grievances, or else are critical of another post/question, user, OP, etc. It is really an energy sapper! Certainly not a boost for morale.</p>
<p>Since I'm pretty new here, and feeling a bit ambivalent about the community here, or lack thereof, I'd really like to know what keeps others here? Given all the frustrations and pet peeves, what keeps you coming back, logging in, participating, contributing?</p>
<p>I really am serious: I'd really like to know, plus I think shifting gears for a moment might help balance the (recent?) discord/tension. I'm not in a position to know whether what I perceive to be as tension and impatience, bordering on intolerance, is a "fact of life" here/ "the norm"...or if it cycles, like all growing communities do, between "better times" and "worse times"...slanting toward unity, then tilting towards discord... and individually, between feeling exhilarated and feeling near-burn-out.</p>
<p>Just thought I'd ask. It is very likely that people here are happier than they may appear. After all, I think humans are wired to notice what's amiss and what's gone wrong than we are to noting what's going well!</p>
<p>Edit: (Addendum) I am reluctant to accept a single answer; the answers and comments have been overwhelmingly supportive and informative. With respect to the "post a question"/"accept an answer" norm for math.SE, is that also the norm here on meta.SE? I sought out input from all interested users regarding the subject line of this thread; everyone is unique, and so I wouldn't even think of establishing criteria with which to evaluate one user's input/answer/comment against another. I did make a point of "upvoting" a good number of contributions, however. Thanks to all who have "chimed in," and any additional answers and comments are most certainly welcome.</p>
| Mark | 7,667 | <p>Personally I am pretty new here as well and am probably not in a position to speak for the community, but from personal experience with other sites, having civil debates and some doubts is a good thing and is a sign of maturity. </p>
|
2,075 | <p>I've come across so many posts here and on the "main" math-SE site that voice complaints, frustrations, pet-peeves, grievances, or else are critical of another post/question, user, OP, etc. It is really an energy sapper! Certainly not a boost for morale.</p>
<p>Since I'm pretty new here, and feeling a bit ambivalent about the community here, or lack thereof, I'd really like to know what keeps others here? Given all the frustrations and pet peeves, what keeps you coming back, logging in, participating, contributing?</p>
<p>I really am serious: I'd really like to know, plus I think shifting gears for a moment might help balance the (recent?) discord/tension. I'm not in a position to know whether what I perceive to be as tension and impatience, bordering on intolerance, is a "fact of life" here/ "the norm"...or if it cycles, like all growing communities do, between "better times" and "worse times"...slanting toward unity, then tilting towards discord... and individually, between feeling exhilarated and feeling near-burn-out.</p>
<p>Just thought I'd ask. It is very likely that people here are happier than they may appear. After all, I think humans are wired to notice what's amiss and what's gone wrong than we are to noting what's going well!</p>
<p>Edit: (Addendum) I am reluctant to accept a single answer; the answers and comments have been overwhelmingly supportive and informative. With respect to the "post a question"/"accept an answer" norm for math.SE, is that also the norm here on meta.SE? I sought out input from all interested users regarding the subject line of this thread; everyone is unique, and so I wouldn't even think of establishing criteria with which to evaluate one user's input/answer/comment against another. I did make a point of "upvoting" a good number of contributions, however. Thanks to all who have "chimed in," and any additional answers and comments are most certainly welcome.</p>
| Matt E | 221 | <p>Dear Amy,</p>
<p>I contribute on MathSE for the same reasons that I contribute on MO:</p>
<p>I enjoy thinking about math, and solving math problems. I also enjoy
talking about and explaining mathematical ideas. MO and MathSE provide the
opportunity to do all this.</p>
<p>Also, as a professional mathematician (and one who is getting older every day!)
I am interested in finding ways to keep myself sharp: both to practice line-by-line
technical reasoning, and to keep the big picture in focus. MathSE and MO provide chances to both answer precise technical questions in a broad range
of subjects, and to try to give accurate but concise and readable descriptions of the big picture, and so I also regard my participation here as part of my ongoing professional training regimen.</p>
<p>Yet another motivation is that my area of mathematics (number theory and the Langlands program) has something of a reputation for being technical and recondite in its aims and methods. This reputation is not completely undeserved, but I like to do what I can to counter it, and participating here gives a chance to do this.</p>
<p>Finally, I like the idea of mathematics presenting a pleasant face to the world,
and I think that contributing here helps in some small way with this.</p>
<p>Regards,</p>
<p>Matthew</p>
|
2,075 | <p>I've come across so many posts here and on the "main" math-SE site that voice complaints, frustrations, pet-peeves, grievances, or else are critical of another post/question, user, OP, etc. It is really an energy sapper! Certainly not a boost for morale.</p>
<p>Since I'm pretty new here, and feeling a bit ambivalent about the community here, or lack thereof, I'd really like to know what keeps others here? Given all the frustrations and pet peeves, what keeps you coming back, logging in, participating, contributing?</p>
<p>I really am serious: I'd really like to know, plus I think shifting gears for a moment might help balance the (recent?) discord/tension. I'm not in a position to know whether what I perceive to be as tension and impatience, bordering on intolerance, is a "fact of life" here/ "the norm"...or if it cycles, like all growing communities do, between "better times" and "worse times"...slanting toward unity, then tilting towards discord... and individually, between feeling exhilarated and feeling near-burn-out.</p>
<p>Just thought I'd ask. It is very likely that people here are happier than they may appear. After all, I think humans are wired to notice what's amiss and what's gone wrong than we are to noting what's going well!</p>
<p>Edit: (Addendum) I am reluctant to accept a single answer; the answers and comments have been overwhelmingly supportive and informative. With respect to the "post a question"/"accept an answer" norm for math.SE, is that also the norm here on meta.SE? I sought out input from all interested users regarding the subject line of this thread; everyone is unique, and so I wouldn't even think of establishing criteria with which to evaluate one user's input/answer/comment against another. I did make a point of "upvoting" a good number of contributions, however. Thanks to all who have "chimed in," and any additional answers and comments are most certainly welcome.</p>
| Andrew Stacey | 2,907 | <p>I've um'ed and ah'ed about contributing to this, but having just left my 9th answer, I'll pitch in.</p>
<p>The reason why I hesitate is that I don't consider myself to be a contributor to maths-SX, so in the strictest sense I can't answer this question. However, I'm a "conscientious objector" in that my non-participation is based on a deliberate choice rather than apathy, so perhaps my reasons for not participating will be of use to you.</p>
<p>I probably need to start by explaining that, since the first two paragraphs are apparently contradictory. I keep an eye on this place as I think that it is a great idea <em>in principle</em>. I do participate in two other SE sites (MathOverflow and TeX-SX) and I really like the model. But as yet, I haven't figured out exactly <em>how</em> to participate here and until I do that then I'm not going to do more than what I currently do. And that is to drop by every now and then to see if there is a question to which I happen to just know the answer. Certainly, I don't put any effort in to answering questions here beyond the effort of writing it out. On MO or TeX-SX then I will willingly take up a challenge and work at something, but here I won't. That's what I mean by "not participating".</p>
<p>My reasons for doing that are nothing to do with the atmosphere here on meta. I'm a veteran of MathOverflow's meta (indeed, I suggested it and was the first non-moderator to sign up) and have had many blazing arguments with many different users (including the guy who set up MO) so I'm not afraid of a bit of fire on meta. But I think that the atmosphere on meta is another <em>symptom</em> of what keeps me from joining in fully.</p>
<p>The truth is that I haven't worked out yet what this place is for. And I'm afraid that the answers given previously don't help me figure that out.</p>
<p>I am a professional mathematician. That means that I get paid for doing maths. Actually, I get paid 45% for <em>doing</em> maths, 45% for communicating it, and 10% for ... er ... for helping ensure that the university runs well. So when I do a mathematical activity, I have at the back of my mind "If my employer walked in right now, would I quickly change to a different tab in the browser, or not?". Now, I can justify lots of mathematical activities. MathOverflow is fairly easy, but this place is hard. It seems to fall between two things. Let me deal with them separately.</p>
<p><strong>Teaching:</strong> This seems obvious. By answering questions here, I am helping people to learn. Except that the part of my job that is teaching is not "teaching anyone who wants to learn" (would that it were!) but "teaching the students at my university". There aren't that many students from my university here (are there any?) and if there were, it would be an incredibly inefficient way of teaching them. I would be wiser to invest my time in trying to reach <em>the students right in front of me</em> than those half the way around the world.</p>
<p><strong>Problem Solving:</strong> There are no end of problems in mathematics, and whilst we only write up the ones where we think we have something new to say, I'm sure I'm not the only mathematician who doesn't really care if a problem has been solved before or not, the important thing is: can <em>I</em> solve it? But with so many, how does one choose which to solve? An easy way to choose is: someone else wants to know the answer. So this site seems perfect for that. Except that the level of the problems here are not the level that I particularly want to get my teeth in to. Basically, I already get my "problem solving" hit from MO. I don't get it here. Moreover, since there are so many problems out there that I <em>could</em> spend time on, it would be wiser of me to invest my time in trying to solve those that might help me with my actual research than just those I happened upon whilst reading some bizarre website.</p>
<p>I have a suspicion that this site is far more "Ask an expert" than any other of the SE network sites. I don't know enough about the data explorer to do this, but I'd like to compare the various sites on their "questioner" and "answerer" populations. A quick look at the top users shows that very few of them <em>ask</em> questions here. Certainly, I can't think of a single question that I could ask here (where I really wanted to know the answer). On MO and TeX-SX, I feel that I am both an <em>asker</em> and an <em>answerer</em>. Here I would be/am just an answerer. At the risk of seeming a bit cold-blooded, what's in it for me?</p>
<p>Now, I am an expert in some things. There are a couple of things about which I am one of the <em>best</em> people in the world to ask. But they don't come up that often here (they don't come up that often on MO either), and there are certainly plenty of experts already here in the wider area that I know about. So you don't <em>need</em> me here. My not participating doesn't hold Maths-SX back in any significant way.</p>
<p>So, in summary, why should I participate in an SE site?</p>
<ol>
<li><strong>To learn.</strong> But <em>as a mathematician</em>, there are more efficient ways for me to learn.</li>
<li><strong>To help.</strong> But there are more immediate people who need my help.</li>
</ol>
<p>Now, I realise that this all reads very cold and calculating. It has to be precisely because I am <em>not</em> very good at being cold and calculating when faced with a problem to be solved. If I jumped in here, I would be going crazy trying to answer questions left, right, and centre; sure that <em>I</em> had the right answer that was going to enlighten the questioner and open their eyes to the beauty of mathematics. On TeX-SX and MO I can be like that because I know that I will also gain: they are true <em>exchanges</em>. Here, I don't see the exchange.</p>
<p>As a last ditch attempt to dispel the calculating nature of this, let me add that second to proving a sneaky theorem is that moment when you <em>see</em> an explanation hit home with a student. To watch their face when it all becomes clear and, for a moment, they glimpse Mathematics with a capital Mathematics is a wonderful experience. To quote:</p>
<blockquote>
<p><strong>Rose:</strong> I can see everything. All that is, all that was, all that ever could be.<br>
<strong>The Doctor:</strong> That's what I see. All the time. And doesn't it drive you mad?</p>
</blockquote>
<p>When teaching, <em>those</em> are the moments that make it worthwhile. Show me how I can get that <em>hit</em> here, and I might just join in.</p>
|
18,421 | <p>I am teaching 4th-grade kids. The topic is fractions. The basic understanding of a fraction as a part of a whole and as part of a collection is clear to the kids. Several concrete ways exist to teach this basic concept. But when it comes to fraction addition/subtraction I could not find a way that teaches it concretely.<br>
Of course, teaching fraction addition and subtraction of the form 3/2 + 1/2 is easy. But what about 3/2 + 4/3?<br>
That is where we start talking about the algorithm (using the LCM), which makes the matter less intuitive and more abstract, which I am trying to avoid at the beginning. I believe all abstract concepts should come after concrete experience. </p>
<p>So, teachers, do you have any suggestions? </p>
| Amy B | 5,321 | <p>Use a piece of paper as your whole. To teach <span class="math-container">$3/2 + 4/3$</span> do the following.</p>
<ol>
<li>Give each child/group of children 6 pieces of paper.</li>
<li>One piece of paper should be left as a whole - the students can
write 1 whole on the paper.</li>
<li>Have the students fold 2 of the pieces in half lengthwise, label
each half as <span class="math-container">$1/2$</span>, and cut out the halves.</li>
<li>Have the students fold 2 pieces of paper in thirds widthwise, label
each third as <span class="math-container">$1/3$</span>, and cut out the thirds.</li>
<li>Have the students take <span class="math-container">$3/2$</span> and <span class="math-container">$4/3$</span> and try to add them. You will
have 2 wholes, <span class="math-container">$1/2$</span>, and <span class="math-container">$1/3$</span>. Discuss what to do with the
leftover pieces...</li>
<li>Next take the last piece of paper and fold in sixths by folding in
half lengthwise and in thirds widthwise. Label each piece <span class="math-container">$1/6$</span> and
cut out the pieces. </li>
<li>Put some of the sixths from step 6 on the leftover pieces from
step 5 to show that <span class="math-container">$1/2=3/6$</span> and <span class="math-container">$1/3 = 2/6$</span>. Together they make
<span class="math-container">$5/6$</span>.</li>
</ol>
<p>Hope this works for you.</p>
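<p>For completeness, the arithmetic behind the activity can be checked with Python's <code>fractions</code> module — a sketch, where the <code>lcm</code> step plays the role of the "sixths" sheet in step 6:</p>

```python
from fractions import Fraction
from math import lcm

a, b = Fraction(3, 2), Fraction(4, 3)

# Step 6 of the activity: the "sixths" sheet is the common denominator.
common = lcm(a.denominator, b.denominator)     # 6
a_num = a.numerator * common // a.denominator  # 3/2 becomes 9 sixths
b_num = b.numerator * common // b.denominator  # 4/3 becomes 8 sixths

total = Fraction(a_num + b_num, common)        # 17 sixths
assert total == a + b                          # matches direct addition
print(f"{a_num}/{common} + {b_num}/{common} = {total}")  # 9/6 + 8/6 = 17/6
```

<p>The printed line is exactly step 7 in symbols: rename the halves and thirds as sixths, then count the sixths.</p>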
|
1,074,809 | <p>What is a non-decreasing sequence of sets and how come it can have a limit?</p>
<p>It appears in a probability theory book.</p>
| A.S | 24,829 | <p>We want to define the domain of the function to be as large as possible. Note that $\sqrt x$ is defined over $x \ge 0$ only. The $\sin$ function is defined over all the reals. So the only restriction we have on the domain is that $2x-1 \ge 0$.</p>
|
325,860 | <p>Let $a,b \in \mathbb{R}$, $a<b$ and let $f$ be a differentiable real-valued function on an open subset of $\mathbb{R}$ that contains [a,b]. Show that if $\gamma$ is any real number between $f'(a)$ and $f'(b)$ then there exists a number $c\in(a,b)$ such that $\gamma=f'(c)$.</p>
<p>Hint: Combine mean value theorem with the intermediate value theorem for the function $\frac{(f(x_1)-f(x_2))}{x_1-x_2}$ on the set $\{(x_1,x_2)\in E^2: a\leq x_1 < x_2 \leq b\}$.</p>
<p>This is question number 7 on page 109 of Rosenlicht (introduction to Analysis).</p>
<p>I am having a lot of trouble trying to start on this problem.</p>
| Dominic Michaelis | 62,278 | <p>Hint:<br>
Define $h(x)= f(x)- \gamma \cdot x$. Then search for a maximum (which must exist on a compact interval).</p>
<p>Wlog assume $h'(a)>0$ and $h'(b)<0$ (if that is not the case, observe $-h(x)$, which fulfills the properties).</p>
<p>We see that although the derivative is not continuous in general, it still has the intermediate value property (also known as the Darboux property).</p>
|
2,640,909 | <p>I encountered a problem with 4 variables and I was wondering if anyone knows how to solve it.
This is what is known:</p>
<p>$$ \left\lbrace
\begin{align}
a+b &= 1800 \\
c+d &= 12 \\
a/c &= 100 \\
b/d &= 250 \\
(a+b)/(c+d) &= 150
\end{align}
\right.$$</p>
<p>Below is a screenshot from a spreadsheet. The red numbers are the 4 unknowns that I'm trying to figure out how to solve for (I happen to know them, but would love to understand how to solve for them when I do not know them).
<a href="https://i.stack.imgur.com/rlb4A.png" rel="nofollow noreferrer">screenshot</a>
Any help would be greatly appreciated! Thank you!</p>
| GNUSupporter 8964民主女神 地下教會 | 290,189 | <p>$$a/c = 100 \implies a = 100c \\ b/d = 250 \implies b = 250d$$</p>
<p>Substitute this into $a+b = 1800$ to get $100c+250d = 1800$. Divide by $50$ to get $2c+5d = 36$. From $c+d = 12$, we have $c = 12-d$, so $2(12-d)+5d = 36$.</p>
<p>$3d = 12 \iff d=4$, so $c = 12-4=8$. Hence $(a,b,c,d) = (800,1000,8,4)$; the fifth equation, $(a+b)/(c+d) = 1800/12 = 150$, is then satisfied automatically.</p>
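<p>As a sanity check, the elimination above can be replayed in a few lines of plain Python (illustrative only; the variable names mirror the post):</p>

```python
# After substituting a = 100c and b = 250d into a + b = 1800:
#   100c + 250d = 1800  ->  2c + 5d = 36,  together with  c + d = 12.
# Elimination: 2(12 - d) + 5d = 36  ->  3d = 12.
d = (36 - 2 * 12) / 3    # 4.0
c = 12 - d               # 8.0
a = 100 * c              # 800.0
b = 250 * d              # 1000.0

# All five original equations hold (the fifth is implied by the first two):
assert a + b == 1800 and c + d == 12
assert a / c == 100 and b / d == 250
assert (a + b) / (c + d) == 150
```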
|
342,491 | <p>How to prove the following:</p>
<p>$a_n = \left\{\left(1+\frac{1}{n}\right)^n\right\}$ is bounded sequence, $ n\in\mathbb{N}$</p>
| Siddhant Trivedi | 28,392 | <p>Let me denote $S_n=\{(1+\frac{1}{n})^n\}$
\begin{align*}
\bigg(1+\frac{1}{n}\bigg)^n &=1+ {_nC_1}\, \frac{1}{n}+ {_nC_2} \bigg(\frac{1}{n}\bigg)^2+\cdots+{_nC_n} \bigg(\frac{1}{n}\bigg)^n\\
&=1+ n\cdot \frac{1}{n}+\frac{n(n-1)}{2!} \bigg(\frac{1}{n}\bigg)^2+ \frac{n(n-1)(n-2)}{3!} \bigg(\frac{1}{n}\bigg)^3+\cdots+\frac{n(n-1)(n-2)\cdots(n-(n-1))}{n!}\bigg(\frac{1}{n}\bigg)^n\\
&=1+ 1+ \frac{1-\frac{1}{n}}{2!} + \frac{(1-\frac{1}{n})(1-\frac{2}{n})}{3!}+\cdots+\frac{(1-\frac{1}{n})(1-\frac{2}{n})\cdots(1-\frac{n-1}{n})}{n!}\\
&<1+ 1+ \frac{1}{2!} + \frac{1}{3!}+\cdots+\frac{1}{n!}\\
&\leq 1+ 1+ \frac{1}{2} + \frac{1}{2^2}+\cdots+\frac{1}{2^{n-1}}\\
&=1+\frac{1-(\frac{1}{2})^n}{1-\frac{1}{2}}\\
&=1+2\bigg(1-\frac{1}{2^n}\bigg)\\
&=3-\frac{1}{2^{n-1}}\\
&<3
\end{align*}</p>
<p>$S_n$ is bounded above by 3.</p>
<p>Also note that $S_n\geq 2$, with equality only for $n=1$. </p>
<p>I have not written out the justification for every step; I hope you can fill them in.</p>
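<p>A quick numerical check of the bound — illustrative only, the proof is the binomial estimate above:</p>

```python
from math import e

# s_n = (1 + 1/n)^n stays between 2 and 3 and creeps up toward e,
# exactly as the binomial estimate predicts.
values = [(1 + 1 / n) ** n for n in range(1, 1001)]

assert all(2 <= s < 3 for s in values)
assert all(x < y for x, y in zip(values, values[1:]))  # increasing
assert abs(values[-1] - e) < 0.01                      # approaching e
```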
|
1,872,369 | <p>Prove that the equation $z^{3}e^{z}=1$ has infinitely many complex solutions.How many of them are real?</p>
<p>Using the argument principle, I chose a disk centered at $0$ with radius $R$ and obtained $\int_{\partial D}\dfrac {3z^{2}e^{z}+z^{3}e^{z}}{z^{3}e^{z}-1}dz$, but I don't know what to do with this integral. Additionally, I used monotonicity when $z$ takes real values to find that there could be one real solution.</p>
| Claude Leibovici | 82,404 | <p><em>Just for your curiosity</em>.</p>
<p>The equation $z^3\,e^z=1$ has three roots on the principal branch of the <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow">Lambert function</a>, one for each cube root of unity (the other branches of $W$ supply the remaining solutions). They are $$x_1=3 W\left(\frac{1}{3}\right)$$ $$x_2=3 W\left(-\frac{1}{3} (-1)^{1/3}\right)$$ $$x_3=3 W\left(\frac{1}{3} (-1)^{2/3}\right)$$ Since the argument of $W$ is nonreal for $x_2$ and $x_3$, there is only one real root, namely $x_1$.</p>
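<p>These three roots are easy to compute numerically. <code>scipy.special.lambertw</code> would be the robust choice; to keep the check dependency-free, here is a minimal Newton iteration for the principal branch (a toy sketch, adequate only for the small arguments used here):</p>

```python
import cmath

def lambert_w0(a, tol=1e-14):
    """Principal branch of w * e^w = a via Newton iteration.

    A toy implementation: fine for the small arguments used below,
    not a general-purpose Lambert W."""
    w = a  # good starting guess when |a| is small
    for _ in range(100):
        ew = cmath.exp(w)
        step = (w * ew - a) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

# x_k = 3 W(zeta/3), one root for each cube root of unity zeta:
zetas = [cmath.exp(2j * cmath.pi * k / 3) for k in range(3)]
roots = [3 * lambert_w0(zeta / 3) for zeta in zetas]

for z in roots:
    # each satisfies z^3 e^z = 1, since (z e^{z/3})^3 = zeta^3 = 1
    assert abs(z**3 * cmath.exp(z) - 1) < 1e-9

# only the zeta = 1 root is real (about 0.7729); the other two are nonreal
assert sum(abs(z.imag) < 1e-12 for z in roots) == 1
```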
|
743,292 | <p>Three fair six-sided dice are thrown and the dice show three different numbers. Find the probability that at least one six is obtained. </p>
<p>I'm unsure of what type of question this is. I have tried combinations such as $\binom{6}{1}$ over $\binom{18}{3}$. </p>
<p>But I'm not sure if this is correct. Any guidance is much appreciated. </p>
<p>Thank you</p>
| Oleg567 | 47,993 | <p>There are $N=\binom{6}{3}=6C3=C_6^3 = 20$ total triplets with different numbers.</p>
<p>There are $M=\binom{5}{2}=5C2=C_5^2 = 10$ of these triplets containing a six (the six, plus two of the remaining five numbers), so the required probability is $M/N = 10/20 = 1/2$.</p>
<hr>
<p>Full list:</p>
<p>$1,2,3$<br>
$1,2,4$<br>
$1,2,5$<br>
$1,2,6$ *<br>
$1,3,4$<br>
$1,3,5$<br>
$1,3,6$ *<br>
$1,4,5$<br>
$1,4,6$ *<br>
$1,5,6$ *<br></p>
<p>$2,3,4$<br>
$2,3,5$<br>
$2,3,6$ *<br>
$2,4,5$<br>
$2,4,6$ *<br>
$2,5,6$ *<br></p>
<p>$3,4,5$<br>
$3,4,6$ *<br></p>
<p>$4,5,6$ *<br></p>
|
35,327 | <p>This is a bit of a follow-up to a previous question: <a href="https://mathematica.stackexchange.com/questions/35256/how-can-i-merge-multiple-sets-of-morphological-components-perhaps-selected-usin">How can I merge multiple sets of morphological components (perhaps selected using different metrics)?</a></p>
<p>I've run into a few problems recently where it's actually rather difficult to select morphological components in an image based on size or geometry metrics, and really it would be much easier to just click on them, or to select them based on a coordinate in their interior. </p>
<p>Consider the task of selecting an arbitrary subset of morphological components in this image:</p>
<pre><code>Import["http://i.stack.imgur.com/gSXIj.png"]
</code></pre>
<p>Is something like this possible?</p>
<hr>
<p>Here's an update based on nikie's comment, where I believe he's suggesting we can do this:</p>
<pre><code>image = Binarize[Import["http://i.stack.imgur.com/gSXIj.png"]];
m = MorphologicalComponents[image];
m // Colorize
exMorphologicalComponentNumOne = PixelValue[Image[m], {50, 214}]
exMorphologicalComponentNumOneTEST = PixelValue[Image[m], {49, 213}]
exMorphologicalComponentNumTwo = PixelValue[Image[m], {206, 146}]
exMorphologicalComponentNumTwoTEST = PixelValue[Image[m], {203, 142}]
</code></pre>
<p>This, I believe, is telling us the index value for the morphological components containing the pixels at {50, 214} and at {206, 146} in the image. Here, <code>PixelValue</code> simply takes a pixel coordinate and returns whatever is sitting at this index in <code>ImageData[image]</code>. So if you look at the output for <code>MorphologicalComponents[image]</code>, you'll notice that the matrix is the same size as the output from <code>ImageData[image]</code> and that the positions in the image corresponding to a morphological component carry the value of the component's index.</p>
<p>This is a very good start (thank you nikie!), but it still isn't clear to me how to quickly select a subset of morphological components based on their index. It would also be really nice to be able to do the click-based selection I mention in the title, since here, we have to write down and retype coordinates using the locator pane. This becomes kind of time-consuming if we need to select a large subset of morphological components in multiple images.</p>
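<p>For readers outside Mathematica: the bookkeeping this relies on — label the components, read off the label under a coordinate, and mask by that label — can be sketched in dependency-free Python (a toy 4-connected BFS labeller standing in for <code>MorphologicalComponents</code>/<code>PixelValue</code>; not an attempt at Mathematica's API):</p>

```python
from collections import deque

def label_components(grid):
    """4-connected component labelling of a binary grid — a toy stand-in
    for Mathematica's MorphologicalComponents."""
    rows, cols = len(grid), len(grid[0])
    labels = [[0] * cols for _ in range(rows)]
    current = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] and not labels[r][c]:
                current += 1
                labels[r][c] = current
                queue = deque([(r, c)])
                while queue:
                    y, x = queue.popleft()
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols \
                                and grid[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels

grid = [[1, 1, 0, 0],
        [0, 1, 0, 1],
        [0, 0, 0, 1]]
labels = label_components(grid)

idx = labels[1][3]        # the "PixelValue" step: label under a clicked (row, col)
mask = [[int(v == idx) for v in row] for row in labels]  # the selected component
assert mask == [[0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 1]]
```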
| C. E. | 731 | <p>Here's a different solution I was working on. </p>
<pre><code> c = Import["http://i.stack.imgur.com/gSXIj.png"];
Manipulate[
Column[{
Show[
components // Colorize,
Graphics[Locator[x, Background -> Orange]]
],
Text["Color index: " <> ToString[components[[Sequence @@ ({-#[[2]], #[[1]]} &@Round[x])]]]]
}],
{x, Locator, Appearance -> None},
Initialization :> (
x = ImageDimensions[c]/2;
components = MorphologicalComponents[c];
)
]
</code></pre>
<p><img src="https://i.stack.imgur.com/XcNhm.png" alt="scrn"></p>
<p>I suppose the difference is that I didn't turn the matrix into an image, so I had to convert between indices and coordinates, unnecessarily adding complexity to the solution.</p>
|
951,914 | <p>Here's my question. We know that the absolute value of X looks like:
<img src="https://i.stack.imgur.com/ePQo9.png" alt="Absolute Value of x"></p>
<p>Clearly, we can see, since the absolute value of x is always greater than or equal to 0, the area under the curve is always positive. Why, then, does it integrate to the following, where the integral function takes negative values? I do understand, though, that the derivative of the following function works out to be what one would expect: |x|</p>
<p><strong>EDIT</strong>: I guess this question is a little bit stupid. I am confusing definite integral with the indefinite integral. I do notice that if I take any two points and take the difference between the values of the indefinite integral evaluated at these points, I get a positive value for the area. </p>
<p><img src="https://i.stack.imgur.com/FCtT3.png" alt="Integral of Absolute Value of x"></p>
| mod0 | 171,143 | <p>I am answering my own stupid question for the sake of completeness. </p>
<p>I am confusing definite integral with the indefinite integral. I do notice that if I take any two points and take the difference between the values of the indefinite integral evaluated at these points, I get a positive value for the area. At the same time, the derivative of the indefinite integral gives |x|</p>
<p>i.e. </p>
<p>$$
\left[\frac{x^2 \operatorname{sgn}(x)}{2}\right]_{-1}^{0} = 0 - \left(-\frac{1}{2}\right) = \frac{1}{2}
$$</p>
<p>which is the same as the area of the triangle as below:</p>
<p><img src="https://i.stack.imgur.com/HxHXj.png" alt="enter image description here"></p>
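<p>The point of this addendum can be checked numerically — a small sketch, where <code>F</code> is the indefinite integral from the post and the Riemann sum is just a cross-check:</p>

```python
def F(x):
    """The indefinite integral from the post: x^2 * sgn(x) / 2."""
    return x * abs(x) / 2

# The definite integral is a *difference* of F-values, and it is positive
# even though F itself is negative on x < 0:
assert F(0) - F(-1) == 0.5     # the triangle over [-1, 0]
assert F(1) - F(-1) == 1.0     # both triangles over [-1, 1]

# Midpoint Riemann-sum cross-check of the area over [-1, 0]:
n = 100_000
riemann = sum(abs(-1 + (k + 0.5) / n) for k in range(n)) / n
assert abs(riemann - 0.5) < 1e-6
```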
|
2,764,447 | <p>In the following question:</p>
<p><a href="https://i.stack.imgur.com/Rgzjr.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rgzjr.jpg" alt="enter image description here" /></a>
I'm getting the answer as LHL $=-\infty$ while RHL $=+\infty$.</p>
<p>The answer given to this question is D.</p>
<p>However, as far as I know, when a limit tends to infinity it does exist. I don't know if I'm wrong; could someone please help?</p>
| user | 505,767 | <p>Indeed, I agree with you: by the usual definition we say that the two one-sided limits exist, and we have</p>
<ul>
<li>$\lim_{x\to1^+}f(x)=+\infty$</li>
<li>$\lim_{x\to1^-}f(x)=-\infty$</li>
</ul>
<p>Since these are different, we say that the limit at $x=1$ doesn't exist, and hence the function is discontinuous at $x=1$.</p>
|
3,266,705 | <p>I know that if <span class="math-container">$f(x)$</span> is irreducible then <span class="math-container">$\langle f(x) \rangle$</span> is a prime ideal. Then I thought: is it maximal? I searched about it and found that it is not a maximal ideal, but I cannot find any proof. </p>
<p>Any help is appreciated. Thanks.</p>
| Ruben | 386,073 | <p>An ideal <span class="math-container">$I \subset \mathbb Z [X]$</span> is <em>maximal</em> if and only if the quotient <span class="math-container">$\mathbb Z [X] / I$</span> is a <em>field</em>.</p>
<p>Suppose <span class="math-container">$f \in \mathbb Z [X]$</span> is irreducible and let <span class="math-container">$I = (f)$</span> the ideal generated by <span class="math-container">$f$</span>. </p>
<p>If <span class="math-container">$f$</span> is degree zero (constant) then <span class="math-container">$\mathbb Z[X] / (f) \simeq \mathbb Z/(f)[X]$</span>. Do you see why this is not a field? (hint: find an inverse of <span class="math-container">$X$</span>)</p>
<p>If <span class="math-container">$f$</span> is of nonzero degree then <span class="math-container">$\mathbb Z[X] / (f) \simeq \mathbb Z[\alpha]$</span> where <span class="math-container">$\alpha$</span> is a root of <span class="math-container">$f$</span>. Do you see why this is not a field? (hint: find an inverse of 2 or 3 or any <span class="math-container">$n \in \mathbb Z$</span>).</p>
|
1,741,229 | <p>As the title.
Or rather, for any integer $m$ which is not the characteristic, does such an 'integer division' exist?</p>
| Noah Schweber | 28,111 | <p>The crucial fact is the following: if $k$ is a field of characteristic zero, there is a natural embedding $e$ of $\mathbb{Q}$ into $k$. </p>
<p>To prove this, first note that the subring of $k$ generated by $1_k$ is isomorphic to $\mathbb{Z}$ (since the characteristic is zero), yielding an embedding $d$ of $\mathbb{Z}$ into $k$. Now we can extend this embedding of $\mathbb{Z}$ to an embedding $e$ of $\mathbb{Q}$ as desired: $e({p\over q})=d(p)\times d(q)^{-1}$.</p>
<p>Division by an integer $n$ in $k$ is then conducted by multiplying by $e({1\over n}$).</p>
<hr>
<p>Exercise: by a similar argument, embed $\mathbb{Z}/p\mathbb{Z}$ into any field of characteristic $p$.</p>
|
102,280 | <p>What are the usual tricks in proving that a group is not simple? (Perhaps a link to a list?)</p>
<p>Also, I may well be being stupid, but why, if the number of Sylow $p$-subgroups is $n_p=1$, do we have a normal subgroup?</p>
| Did | 6,179 | <p><strong>Summary:</strong> As in several questions of the same ilk asked previously on the site, the recursion can only determine each $T(n)$ as a function of $T(2i+1)$ and $k\geqslant0$, where $n=(2i+1)2^k$. Furthermore, the series considered by the OP are simply not relevant to the asymptotics of $T(n)$. The growth of $T(n)$ of the order of $n\log n$, which the OP seems to infer, is not compatible with the recursion at hand since, speaking very roughly, $T(n)$ grows like $n^{\frac12\log n}$.</p>
<p><strong>Gory details:</strong> Consider for example $t_k=T(2^k)$. Then, for every $k\geqslant1$,
$$
t_k=2^{k-1}t_{k-1}+k\log2,
$$
hence the change of variables
$
s_k=2^{-k(k-1)/2}t_k
$
yields the recursion $s_0=t_0=T(1)$ and
$
s_k=s_{k-1}+2^{-k(k-1)/2}k\log2$. This is solved by
$$
s_k=s_0+\log2\sum\limits_{i=1}^ki2^{-i(i-1)/2},
$$
whose translation in terms of $t_k$ is
$$
t_k=2^{k(k-1)/2}\left(T(1)+\log2\sum\limits_{i=1}^k\frac{i}{2^{i(i-1)/2}}\right).
$$
Finally,
$$
T(2^k)=2^{k(k-1)/2}\tau_0-(\log2)\,(k/2^{k})+o(k/2^k),$$
with
$$
\tau_0=T(1)+(\log2)\,\sum\limits_{i=1}^{\infty}\frac{i}{2^{i(i-1)/2}}= T(1)+1.69306\ldots
$$
<strong>Conclusion:</strong> One sees that, along the subsequence indexed by the powers of $2$, $T(n)$ grows roughly like $n^{\frac12\log n}$ rather than like $n\log n$. The same argument applies to each subsequence indexed by the powers of $2$ times any given odd integer.</p>
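<p>The closed form and the numerical constant can be checked against the recursion directly — a sketch, where <code>T1</code> is an arbitrary seed value for $T(1)$, which the recursion leaves free:</p>

```python
from math import log

T1 = 1.0            # arbitrary seed for T(1); the recursion leaves it free
log2 = log(2)

# Direct iteration of t_k = 2^{k-1} t_{k-1} + k log 2:
t = [T1]
for k in range(1, 12):
    t.append(2 ** (k - 1) * t[-1] + k * log2)

# Closed form t_k = 2^{k(k-1)/2} (T(1) + log 2 * sum_{i<=k} i / 2^{i(i-1)/2}):
for k in range(12):
    s = sum(i / 2 ** (i * (i - 1) // 2) for i in range(1, k + 1))
    closed = 2 ** (k * (k - 1) // 2) * (T1 + log2 * s)
    assert abs(closed - t[k]) <= 1e-9 * max(1.0, abs(t[k]))

# The constant tau_0 - T(1) = log 2 * sum_{i>=1} i / 2^{i(i-1)/2} = 1.69306...
tail = log2 * sum(i / 2 ** (i * (i - 1) // 2) for i in range(1, 40))
assert abs(tail - 1.69306) < 1e-4
```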
|
56,804 | <p>I know the Galois group is $S_3$. And obviously we can swap the imaginary cube roots. I just can't figure out a convincing, "constructive" argument to show that I can swap the "real" cube root with one of the imaginary cube roots. </p>
<p>I know that if you have a 3-cycle and a 2-cycle operating on three elements, you get $S_3$. I have a general idea that based on the order of the group there's supposed to be at least a 3-cycle. But this doesn't feel very "constructive" to me. </p>
<p>I wonder if I've made myself understood in terms of what kind of argument I'd like to see?</p>
| Billy | 13,942 | <p>A brute-force way to see it is easy enough. The roots of $X^3 - 2$ are $\sqrt[3]{2},\; \omega\sqrt[3]{2},\; \omega^2\sqrt[3]{2}$, so the splitting field of your polynomial is $K = \mathbb{Q}(\sqrt[3]{2},\omega)$. You are asking why it's legitimate to send $\sqrt[3]{2}$ to $\omega\sqrt[3]{2}$. Well, let's try it. Let $\sigma: K \to K$ be a map that sends $\sqrt[3]{2}$ to $\omega\sqrt[3]{2}$. How are we going to extend this to an element of the Galois group? We need $\sigma$ to be an automorphism that fixes $\mathbb{Q}$.</p>
<p>Our field $K$ is a $\mathbb{Q}$-vector space generated by six basis elements: $1,\; \sqrt[3]{2},\; \sqrt[3]{4},\; \omega,\; \omega\sqrt[3]{2},\; \omega\sqrt[3]{4}$. If we can define how $\sigma$ acts on those, we can extend linearly to the whole space: that is, we can extend $\sigma$ such that $\sigma(a+b) = \sigma(a) + \sigma(b)$ for all $a$ and $b$. We also know that, if we define $\sigma$ sensibly, we should get that $\sigma(ab) = \sigma(a) \sigma(b)$. By "sensibly" here, I mean that $\sigma$ should send each element of $K$ to one of its conjugates - that is, it should send $x$ to any $y$ that satisfies the same minimal polynomial as $x$. We also know that it's enough to make sure $\sigma$ is multiplicative on the basis, and in fact we only need to define it on $\sqrt[3]{2}$ and $\omega$, since the rest will then follow automatically by multiplicativity.</p>
<p>So we have a few options from what we've deduced so far. $\sigma$ can send $\sqrt[3]{2}$ to any of $\sqrt[3]{2},\; \omega\sqrt[3]{2},\; \omega^2\sqrt[3]{2}$, and it can send $\omega$ to either of $\omega$ and $\omega^2$. So let's say $\sigma(\sqrt[3]{2}) = \omega\sqrt[3]{2}$ (as we wanted) and $\sigma(\omega) = \omega$ (no good reason for this choice, but we had to make one). Where do all the other basis elements go? Can you convince yourself now that $\sigma$ is an element of the Galois group?</p>
<p>(What would have happened if we'd chosen $\sigma(\omega) = \omega^2$?)</p>
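<p>The "sensible" requirement — each generator must go to a root of its own minimal polynomial — is easy to sanity-check numerically (a quick sketch; the real argument is the algebraic one above):</p>

```python
import cmath

cbrt2 = 2 ** (1 / 3)
omega = cmath.exp(2j * cmath.pi / 3)   # a primitive cube root of unity

# sigma(cbrt(2)) = omega * cbrt(2) is again a root of X^3 - 2:
assert abs((omega * cbrt2) ** 3 - 2) < 1e-9

# sigma(omega) = omega (or omega^2) is again a root of X^2 + X + 1:
for image in (omega, omega ** 2):
    assert abs(image ** 2 + image + 1) < 1e-9

# Multiplicativity then determines sigma on the rest of the basis, e.g.
# sigma(cbrt(4)) = sigma(cbrt(2))^2 = omega^2 * cbrt(4):
assert abs((omega * cbrt2) ** 2 - omega ** 2 * cbrt2 ** 2) < 1e-9
```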
|
56,804 | <p>I know the Galois group is $S_3$. And obviously we can swap the imaginary cube roots. I just can't figure out a convincing, "constructive" argument to show that I can swap the "real" cube root with one of the imaginary cube roots. </p>
<p>I know that if you have a 3-cycle and a 2-cycle operating on three elements, you get $S_3$. I have a general idea that based on the order of the group there's supposed to be at least a 3-cycle. But this doesn't feel very "constructive" to me. </p>
<p>I wonder if I've made myself understood in terms of what kind of argument I'd like to see?</p>
| Marty Green | 10,838 | <p>I recently found an answer to my own question that no one else has brought up, so I thought I'd post it. The thing that always bothered me about swapping the real and imaginary cube roots of two was the asymmetry of it all. What makes the positive imaginary root distinguishable from the negative imaginary root?</p>
<p>For me at least, the asymmetry goes away when you realize that the same permutation that exchanges the real cube root of two with the <em>positive</em> imaginary root ALSO exchanges the real cube root of FOUR with the <em>negative</em> imaginary root. </p>
|
56,804 | <p>I know the Galois group is $S_3$. And obviously we can swap the imaginary cube roots. I just can't figure out a convincing, "constructive" argument to show that I can swap the "real" cube root with one of the imaginary cube roots. </p>
<p>I know that if you have a 3-cycle and a 2-cycle operating on three elements, you get $S_3$. I have a general idea that based on the order of the group there's supposed to be at least a 3-cycle. But this doesn't feel very "constructive" to me. </p>
<p>I wonder if I've made myself understood in terms of what kind of argument I'd like to see?</p>
| dragoboy | 168,033 | <p>One can prove a more general result: Let $f(X)$ be an irreducible polynomial with rational coefficients and of degree $p$ prime. If $f$ has precisely two nonreal roots in the complex numbers, then the Galois group of $f$ is $S_p$.</p>
<p>So taking $p=3$ here, we're done.</p>
|
2,481,463 | <p>I am self-learning elliptic functions and modular forms but struggling with the very basics.
I have programmed the Eisenstein series
$$E_6 = \sum_{(m,n) \in \mathbb{Z}^2 \setminus \{(0,0)\}} \frac{1}{(m +n \tau)^6} $$
and produced the following image (color denotes the argument)</p>
<p><a href="https://i.stack.imgur.com/NZeZb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/NZeZb.png" alt="enter image description here"></a>
which is similar to that on <a href="https://en.wikipedia.org/wiki/Eisenstein_series" rel="nofollow noreferrer">wiki</a>.</p>
<p><em>Eisenstein series and automorphic representations</em> by P. Fleig introduces the fundamental domain, as in the figure below.
<a href="https://i.stack.imgur.com/YWFa8.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/YWFa8.png" alt="enter image description here"></a></p>
<p>A similar figure appears in L. Vepstak’s <em>The Minkowski Question Mark, GL(2,Z) and the Modular Group</em>.
<a href="https://i.stack.imgur.com/OyRSW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OyRSW.png" alt="enter image description here"></a></p>
<p>My question is how to transform from the fundamental domain
$$
|\operatorname{Re}(z)| < \frac{1}{2} \quad \text{and} \quad |z| > 1 \quad \text{and} \quad \operatorname{Im}(z) > 0
$$
to the other domains (which appear as distorted triangles), or how to map a point in the fundamental domain to a corresponding point in another domain?</p>
<p>I guess that the SL(2, $\mathbb{Z}$) transformation law should account for the symmetries between the triangular domains,
$$
G_{2k}\left( \frac{az+b}{cz+d} \right) = (cz+d)^{2k}G_{2k}(z)
$$
but I have no clue how this equation divides the upper half-plane into triangles that correspond to the fundamental domain.</p>
| davidlowryduda | 9,754 | <p>First of all, I think you've chosen an excellent topic of self-study. It's very deep and beautiful, and there's still lots to be found.</p>
<p>I think the first thing you should ask yourself is <strong>what does it mean to be a <em>fundamental domain?</em></strong></p>
<p>You have a modular function $f(z)$. This means that for all $\gamma \in \textrm{SL}(2, \mathbb{Z})$, you have $f(\gamma z) = (cz + d)^k f(z)$ for some integer weight $k$. This is a transformation law. Let's better understand what each $\gamma$ does to $z$.</p>
<p>Two matrices of note in $\mathrm{SL}(2, \mathbb{Z})$ are
$$ T = \begin{pmatrix} 1&1\\0&1 \end{pmatrix}, \qquad S = \begin{pmatrix} 0&-1\\1&0\end{pmatrix}.$$
Unofficially, the $T$ stands for "translation" and $S$ is an "inverSion". ($I$ would probably mean the identity matrix).</p>
<p>The action of $T$ on $z$ is $Tz = z + 1$, and the action of $S$ on $z$ is $Sz = -1/z$. Under $T$, the strip $-\frac{1}{2} \leq \mathrm{Re} \; z \leq \frac{1}{2}$ is sent to the strip $\frac{1}{2} \leq \mathrm{Re} \; z \leq \frac{3}{2}$, and it's not hard to see that this tiles the plane in strips. Under the action of $S$, everything outside the unit circle is swapped with everything inside the unit circle.</p>
<p>A fundamental domain for the upper halfplane $\mathcal{H}$ under the action of $\mathrm{SL}(2, \mathbb{Z})$ is a set $D$ containing one representative of each orbit of $\mathcal{H}$ under $\mathrm{SL}(2, \mathbb{Z})$. Stated differently, for any $z \in \mathcal{H}$, there should exist some $\tau \in D$ and a $\gamma \in \mathrm{SL}(2, \mathbb{Z})$ such that $\gamma \tau = z$ (along with some uniqueness assumptions).</p>
<p>The standard picture of the fundamental domain for $\mathcal{H}$ under $\mathrm{SL}(2, \mathbb{Z})$ can be heuristically determined from the shifting property of $T$ (so that we can focus on just a unit strip) and $S$ (so that we can focus on just the outside of the unit circle). In fact,
$$ \mathrm{SL}(2, \mathbb{Z}) = \langle S, T \rangle,$$
and one can show, using these generators, that the standard fundamental domain really is a fundamental domain (with the proper definition).</p>
<p>Now, back to your question. Every point $z \in \mathcal{H}$ can be written uniquely as $\gamma \tau$ for a $\tau$ in the standard fundamental domain. Determining this $\gamma$ often occurs naturally in the proof that the standard fundamental domain really is a fundamental domain (and you would benefit from going through this proof). But it is also possible to handle it slightly differently: if $\textrm{Im} \; z \geq 1$, then as $f$ is invariant under translation, you can write $z = \tau + n$ for some integer $n$ and $\tau$ in the fundamental domain, and then $f(z) = f(\tau)$. If $\lvert z \rvert < 1$, then you can use $S$ to invert through the circle (and note that $f(Sz) = z^k f(z)$) and then shift appropriately, and then repeat if necessary. Describing particular steps completely is a bit tedious, but the process is not too hard.</p>
<p>This describes how to map points from the plane to the fundamental domain (or inversely, how to map points from the fundamental domain to the plane).</p>
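To make the reduction concrete, here is a minimal sketch (the function name, loop bound, and test point are my own choices; it just alternates the two generator moves described above):

```python
def reduce_to_fundamental_domain(z, max_iter=1000):
    """Move z in the upper half-plane into the standard fundamental
    domain |Re z| <= 1/2, |z| >= 1, using only T: z -> z + 1 and
    S: z -> -1/z."""
    for _ in range(max_iter):
        z = z - round(z.real)   # apply a power of T (translate)
        if abs(z) >= 1:         # outside the unit circle: done
            return z
        z = -1 / z              # apply S (invert) and repeat
    raise RuntimeError("did not converge")

# A point of the domain, pushed out by a word in S and T, comes back:
t0 = 0.2 + 1.3j
z = -1 / (t0 + 5) + 3           # z = (T^3 S T^5) t0
print(reduce_to_fundamental_domain(z))  # approximately 0.2 + 1.3j
```

Each inversion step strictly increases $\operatorname{Im} z$ when $|z| < 1$, which is why the loop terminates.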
|
2,845,001 | <p>If I know the length of a chord of a circle and the length of the corresponding arc, but do not know the circle's radius, is there a formula by which I can calculate the length of the sagitta?</p>
| Ted Shifrin | 71,348 | <p>I hadn't even known the term before you made me look it up.:) I don't see a way to compute it explicitly, but it is in fact uniquely determined. This is because knowing $r\theta=A$ and $r\sin\theta=B$, there is a unique $\theta$ ($0<\theta<\pi/2$) with $\dfrac{\sin\theta}{\theta} = \dfrac BA$, and then $r=\dfrac A\theta$ is unique as well. The sagitta is, of course, $r(1-\cos\theta) = r-\sqrt{r^2-B^2}$.</p>
<p><strong>Clarifying Comment</strong>: Here $A$ is half the arclength and $B$ is half the chord length. </p>
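A numerical sketch of this (my own; the bisection bounds and the worked test value are assumptions, and it lets $\theta$ range over $(0,\pi)$, where $\sin\theta/\theta$ is still strictly decreasing):

```python
from math import sin, cos, pi, sqrt

def sagitta(arc_length, chord_length, tol=1e-12):
    """Sagitta of a circular segment from its arc and chord lengths.

    With A = arc/2 and B = chord/2, the half-angle theta solves
    sin(theta)/theta = B/A; since sin(theta)/theta is strictly
    decreasing on (0, pi), bisection finds the unique root."""
    A, B = arc_length / 2.0, chord_length / 2.0
    if not 0 < B < A:
        raise ValueError("need 0 < chord < arc")
    lo, hi = 1e-15, pi
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if sin(mid) / mid > B / A:
            lo = mid   # ratio too large means theta is too small
        else:
            hi = mid
    theta = (lo + hi) / 2.0
    r = A / theta
    return r * (1.0 - cos(theta))

# Circle of radius 2 with half-angle pi/3: arc 4*pi/3, chord 2*sqrt(3),
# so the sagitta should be r*(1 - cos(pi/3)) = 1.
print(sagitta(4 * pi / 3, 2 * sqrt(3)))  # about 1.0
```

Bisection is chosen only for transparency; any one-dimensional root finder on $\sin\theta/\theta - B/A$ would do.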
|