| qid (int64, 1 to 4.65M) | question (string, 27 to 36.3k chars) | author (string, 3 to 36 chars) | author_id (int64, -1 to 1.16M) | answer (string, 18 to 63k chars) |
|---|---|---|---|---|
4,127,468 | <p>Suppose we have two functions <span class="math-container">$f,g:\Bbb R\rightarrow \Bbb R$</span>. The chain rule states the following about the derivative of the composition of these functions, namely that
<span class="math-container">$$
(f \circ g)'(x) = f′(g(x))\cdot g′(x).
$$</span>
However, the equivalent expression using Leibniz notation seems to be saying something different. I know that <span class="math-container">$f'(g(x))$</span> means the derivative of <span class="math-container">$f$</span> evaluated at <span class="math-container">$g(x)$</span>, but when considering the Leibniz equivalent of the chain rule, it appears that it should really mean the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$g(x)$</span>. If we let <span class="math-container">$z=f(y)$</span> and <span class="math-container">$y=g(x)$</span>, then
<span class="math-container">$$
{\frac {dz}{dx}}={\frac {dz}{dy}}\cdot {\frac {dy}{dx}}.
$$</span>
Here <span class="math-container">$\frac{dz}{dy}$</span> corresponds to <span class="math-container">$f'(g(x))$</span>. Since <span class="math-container">$y=g(x)$</span>, I am tempted to believe that the expression <span class="math-container">$f'(u)$</span> means the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$u$</span>; it would make sense in this case, as we are treating <span class="math-container">$g(x)$</span> as the independent variable. This leaves me with the question: does <span class="math-container">$f'(g(x))$</span> mean the derivative of <span class="math-container">$f$</span> evaluated at <span class="math-container">$g(x)$</span>, <span class="math-container">$\frac{df}{dx} \Bigr\rvert_{x = g(x)}$</span>, or the derivative of <span class="math-container">$f$</span> with respect to <span class="math-container">$g(x)$</span>, <span class="math-container">$\frac{df}{dg(x)}?$</span></p>
| Jackozee Hakkiuz | 497,717 | <p><span class="math-container">$f'(g(x))$</span> means the derivative of <span class="math-container">$f$</span> evaluated at <span class="math-container">$g(x)$</span>.
Really, the ambiguous one is Leibniz notation, because it makes you think the function <span class="math-container">$f$</span> <em>"cares"</em> about the name of its argument. <span class="math-container">$f$</span> is a function of one variable, so it can only be differentiated with respect to one thing: its only entry.</p>
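Both readings can be compared numerically. In the sketch below, the concrete choices f = sin and g(x) = x² are my own illustration (not from the post): `f'(g(x))` is computed as f' evaluated at the point g(x), and it matches a finite-difference derivative of the composition.

```python
import math

# concrete illustrations (my choice, not from the post): f = sin, g = squaring
def f(u): return math.sin(u)
def g(x): return x * x
def fprime(u): return math.cos(u)   # f', a function of f's single argument
def gprime(x): return 2 * x

def numeric_derivative(h, x, eps=1e-6):
    # symmetric difference quotient
    return (h(x + eps) - h(x - eps)) / (2 * eps)

x0 = 0.7
chain = fprime(g(x0)) * gprime(x0)   # f'(g(x)): f' evaluated AT the point g(x)
direct = numeric_derivative(lambda t: f(g(t)), x0)
assert abs(chain - direct) < 1e-6
print("chain rule confirmed numerically")
```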
|
91,252 | <p>Let $v_1, \ldots, v_n$ be a set of vectors in a vector space $V$. Show that $v_1, \ldots, v_n$ is a basis of $V$ if and only if for any non-zero linear function $f$ on $V$ there is a vector $v$ in $\operatorname{span}(v_1, \ldots, v_n)$ such that $f(v) \neq 0$. </p>
<p>Suppose that $v_1, \ldots, v_n$ is not a basis of $V$. Then the complement $W$ of $\operatorname{span}(v_1, \ldots, v_n)$ in $V$ is not empty. Let $f$ be a function such that $f(x)=1$ for all $x\in W$ and $f(x)=0$ for all $x$ in $\operatorname{span}(v_1, \ldots, v_n)$. My question is: is $f$ linear in $V$?</p>
| lhf | 589 | <p>Every linear transformation is determined by its values on a basis. These values may be chosen arbitrarily. So, your $f$ is a linear functional on $V$ if you require that $f(x)=1$ for a basis of $W$, not for the whole of $W$. A constant function cannot be linear unless it is zero.</p>
|
635,893 | <p>I am trying to prove following statement:</p>
<blockquote>
<p>$[m,n]$ is a set of functions defined as $f \in [m,n] \leftrightarrow f: \{1,...,m\} \rightarrow \{1,...,n\}$. The size of $[m,n]$ is $n^m$ for $m,n \in \mathbb{N}_{\gt0}$.</p>
</blockquote>
<p>I have tried to prove it but I am not entirely sure about its correctness:</p>
<p><strong>1)</strong> For the basic step $m=n=1$.</p>
<p>The size of $\{1\} \rightarrow \{1\}$ is $1$. And it equals $1^1 = 1$.</p>
<p><strong>2)</strong> Then I assume that for some $m,n$ the size of $[m,n]$ is $n^m$. Now comes the first problem: should I be proving it for $[m, n+1],[m+1,n],[m+1,n+1]$ or is some of it redundant?</p>
<p>When trying to prove it for $[m, n+1]$ I rewrite the size as $|[m,n+1]| = (n+1) \cdot (n+1) \cdot \ldots \cdot (n+1) = (n+1)^m$, but I don't use the induction hypothesis, so is that correct?</p>
<p>Again $|[m+1,n]| = n \cdot n \cdot \ldots \cdot n = n^{m+1}$.</p>
<p>Finally $|[m+1,n+1]| = (n+1) \cdot (n+1) \cdot \ldots \cdot (n+1) = (n+1)^{m+1}$.</p>
<p>During the process I didn't really use my induction hypothesis, so I am worried that this wouldn't qualify as a proof by induction. So what would be the correct way to prove this? </p>
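The claimed count can be checked by brute force for small $m,n$: the helper below (my own naming, not from the question) enumerates every map $\{1,\dots,m\}\to\{1,\dots,n\}$ as an $m$-tuple of chosen values.

```python
from itertools import product

def count_functions(m, n):
    # each function {1,...,m} -> {1,...,n} is an m-tuple of chosen values
    return sum(1 for _ in product(range(1, n + 1), repeat=m))

for m in range(1, 5):
    for n in range(1, 5):
        assert count_functions(m, n) == n ** m
print("|[m,n]| = n^m for all tested m, n")
```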
| Hagen von Eitzen | 39,174 | <p>Assume $a:=f(x_0)\ne0$ for some $x_0\in(0,1)$. Then $\frac1a{f(x)}>\frac12$ on some open neighbourhood $x_0\in (x_0-r,x_0+r)\subset(0,1)$ because $f$ is continous. Hence
$$ 0=\frac1a\int_0^{x_0+r}f(t)\,\mathrm dt-\frac1a\int_0^{x_0-r}f(t)\,\mathrm dt=\int_{x_0-r}^{x_0+r}\frac1af(t)\,\mathrm dt\ge \int_{x_0-r}^{x_0+r}\frac12\,\mathrm dt=r>0.$$
Again by continuity, $f(0)=f(1)=0$, too.</p>
|
2,251,240 | <p>What is the first derivative and nth derivative of the following function $ y = \sqrt {2 +\sqrt {3 + \sqrt {x}}}$ </p>
<p>I think taking the logarithm of both sides will remove only the first square root?
Could anyone give me a hint? </p>
| gandalf61 | 424,513 | <p>Alternative approach (same answer) ...</p>
<p>$y^2 = 2+ \sqrt{3+\sqrt{x}}$</p>
<p>$\Rightarrow y^2-2=\sqrt{3 + \sqrt{x}}$</p>
<p>$\Rightarrow y^4-4y^2+4=3+\sqrt{x}$</p>
<p>$\Rightarrow 4y^3\frac{dy}{dx} - 8y\frac{dy}{dx} = \frac{1}{2\sqrt{x}}$</p>
<p>$\Rightarrow \frac{dy}{dx} = \frac{1}{8y(y^2-2)\sqrt{x}}$</p>
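A quick finite-difference check of this closed form (the test point x = 4 is an arbitrary choice of mine, not from the post):

```python
import math

def y(x):
    # y = sqrt(2 + sqrt(3 + sqrt(x)))
    return math.sqrt(2 + math.sqrt(3 + math.sqrt(x)))

def dydx(x):
    # closed form from the answer: 1 / (8*y*(y^2 - 2)*sqrt(x))
    yv = y(x)
    return 1 / (8 * yv * (yv ** 2 - 2) * math.sqrt(x))

x0 = 4.0  # arbitrary test point (my choice)
numeric = (y(x0 + 1e-6) - y(x0 - 1e-6)) / 2e-6
assert abs(numeric - dydx(x0)) < 1e-7
print("formula agrees with a finite difference at x =", x0)
```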
|
2,734,109 | <p>This is what my lecture notes have but I cannot find anything like it online and there is no explanation in the notes. The example given is for 903 and 444.
<a href="https://i.stack.imgur.com/9NXYk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NXYk.png" alt="enter image description here"></a>
Thank you. </p>
| fleablood | 280,126 | <p>Let $a = 903$ and $b= 444$.</p>
<p>$903 - 2*444 = 15$ So $a - 2b = 15$.</p>
<p>$444- 29*15 = 444-435 = 9$ or in other words $b - 29(a-2b) = 59b-29a = 9$.</p>
<p>$15 - 9=6$ or in other words $(a-2b) -(59b-29a) = 30a -61b = 6$.</p>
<p>$9 - 6 = 3$ or in other words $(59b-29a) - (30a -61b) = 120b - 59a = 3$.</p>
<p>$6 -2*3 = 0$ so we are done.</p>
<p>$\gcd(903,444) = 3$ and $3 = 120b - 59a$</p>
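The same Bézout-style combination can be recovered mechanically with the extended Euclidean algorithm. This sketch is my own code (not from the notes); it confirms both the gcd and the identity $3 = 120b - 59a$.

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x, y = extended_gcd(903, 444)
assert g == 3 and 903 * x + 444 * y == 3
# the answer's combination: 3 = 120*b - 59*a with a = 903, b = 444
assert 120 * 444 - 59 * 903 == 3
print("gcd =", g)
```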
|
2,734,109 | <p>This is what my lecture notes have but I cannot find anything like it online and there is no explanation in the notes. The example given is for 903 and 444.
<a href="https://i.stack.imgur.com/9NXYk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NXYk.png" alt="enter image description here"></a>
Thank you. </p>
| Will Jagy | 10,400 | <p>I like to write these as (simple) continued fractions. </p>
<p>$$ \gcd( 903, 444 ) = ??? $$ </p>
<p>$$ \frac{ 903 }{ 444 } = 2 + \frac{ 15 }{ 444 } $$
$$ \frac{ 444 }{ 15 } = 29 + \frac{ 9 }{ 15 } $$
$$ \frac{ 15 }{ 9 } = 1 + \frac{ 6 }{ 9 } $$
$$ \frac{ 9 }{ 6 } = 1 + \frac{ 3 }{ 6 } $$
$$ \frac{ 6 }{ 3 } = 2 + \frac{ 0 }{ 3 } $$
Simple continued fraction tableau:<br>
$$
\begin{array}{cccccccccccc}
& & 2 & & 29 & & 1 & & 1 & & 2 & \\
\frac{ 0 }{ 1 } & \frac{ 1 }{ 0 } & & \frac{ 2 }{ 1 } & & \frac{ 59 }{ 29 } & & \frac{ 61 }{ 30 } & & \frac{ 120 }{ 59 } & & \frac{ 301 }{ 148 }
\end{array}
$$
$$ 301 \cdot 59 - 148 \cdot 120 = -1 $$ </p>
<p>$$ \gcd( 903, 444 ) = 3 $$<br>
$$ 903 \cdot 59 - 444 \cdot 120 = -3 $$ </p>
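The tableau is the standard convergent recurrence $h_i = a_i h_{i-1} + h_{i-2}$, $k_i = a_i k_{i-1} + k_{i-2}$; a small script (my own sketch, not from the answer) reproduces it from the partial quotients.

```python
from fractions import Fraction

def convergents(cf):
    # standard recurrence: h_i = a_i*h_{i-1} + h_{i-2}, k_i = a_i*k_{i-1} + k_{i-2}
    h_prev, h = 0, 1   # h_{-2}, h_{-1}
    k_prev, k = 1, 0   # k_{-2}, k_{-1}
    out = []
    for a in cf:
        h_prev, h = h, a * h + h_prev
        k_prev, k = k, a * k + k_prev
        out.append(Fraction(h, k))
    return out

cf = [2, 29, 1, 1, 2]            # partial quotients of 903/444 from the answer
cs = convergents(cf)
assert cs == [Fraction(2, 1), Fraction(59, 29), Fraction(61, 30),
              Fraction(120, 59), Fraction(301, 148)]
assert cs[-1] == Fraction(903, 444)   # 301/148 is 903/444 in lowest terms
assert 301 * 59 - 148 * 120 == -1     # determinant of consecutive convergents
print("tableau reproduced")
```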
|
1,264,353 | <blockquote>
<p>Evaluate the determinants given that $\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=-6.$</p>
</blockquote>
<ol>
<li>$\begin{vmatrix} a+d & b+e & c+f \\ -d & -e & -f \\ g & h & i \end{vmatrix}$ </li>
<li>$\begin{vmatrix} a & b & c \\ 2d & 2e & 2f \\ g+3a & h+3b & i+3c \end{vmatrix}$</li>
</ol>
<hr>
<p>Here is what I have tried:</p>
<p>1.</p>
<p>$\begin{vmatrix} a+d & b+e & c+f \\ -d & -e & -f \\ g & h & i \end{vmatrix}\stackrel{\text{add row 2 to row 1}}=\begin{vmatrix} a & b & c \\ -d & -e & -f \\ g & h & i \end{vmatrix}\stackrel{\text{factor out $-1$}}=-\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=-(-6)=6.$ </p>
<p>2.</p>
<p>$\begin{vmatrix} a & b & c \\ 2d & 2e & 2f \\ g+3a & h+3b & i+3c \end{vmatrix}\stackrel{\text{row 1 times -3, add to row 3}}{=}\begin{vmatrix} a & b & c \\ 2d & 2e & 2f \\ g & h & i \end{vmatrix}\stackrel{\text{factor out 2}}{=}2\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=2(-6)=-12.$</p>
<p>Did I do these correctly? </p>
<p>I've tried some cases with numbers where adding a multiple of one row to another and found that it doesn't not change the value of the determinant. But I can't seem to grasp the intuition as to why this is so from numeric calculations. </p>
<p>Why is this so? </p>
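Before the "why", the two computed values can be checked numerically; the concrete matrix entries below are my own arbitrary choice, standing in for $a,\dots,i$.

```python
def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

# an arbitrary concrete matrix (my choice) standing in for a..i
a, b, c, d, e, f, g, h, i = 2, -1, 3, 0, 4, 1, 5, -2, 2

base = det3([[a, b, c], [d, e, f], [g, h, i]])
det1 = det3([[a + d, b + e, c + f], [-d, -e, -f], [g, h, i]])
det2 = det3([[a, b, c], [2 * d, 2 * e, 2 * f], [g + 3 * a, h + 3 * b, i + 3 * c]])

assert det1 == -base      # matches computation 1: minus the original determinant
assert det2 == 2 * base   # matches computation 2: twice the original determinant
print("both identities hold")
```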
| Upax | 157,068 | <p>I'm not sure I understand. The determinant
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix}=
$a e i-a f h-b d i+b f g+c d h-c e g=-6$.
While
\begin{vmatrix} a+d & b+e & c+f \\ -d & -e & -f \\ g & h & i \end{vmatrix}=
$-a e i+a f h+b d i-b f g-c d h+c e g$
And
\begin{vmatrix} a & b & c \\ 2d & 2e & 2f \\ g+3a & h+3b & i+3c \end{vmatrix}=
$2 a e i-2 a f h-2 b d i+2 b f g+2 c d h-2 c e g$
In order to compute the determinant of a $3\times 3$ matrix you can use the rule of Sarrus (<a href="http://en.wikipedia.org/wiki/Rule_of_Sarrus" rel="nofollow">http://en.wikipedia.org/wiki/Rule_of_Sarrus</a>)</p>
|
129,693 | <p><img src="https://i.stack.imgur.com/tFha8.png" alt="enter image description here"></p>
<p>I'm lost on what's happening here. This is regarding MinML ("an idealized programming language"). More pictures below. Thank you very much.</p>
<p><img src="https://i.stack.imgur.com/BYjTF.png" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/env9T.png" alt="enter image description here"></p>
<p><img src="https://i.stack.imgur.com/GDo6I.png" alt="enter image description here"></p>
| LeeJordan | 776,157 | <p>If you take
<span class="math-container">$$f(z) = \frac{e^{iz}}{1+z^2}$$</span>
and apply the residue theorem over the path <span class="math-container">$\Gamma = \gamma_1 + \gamma_2$</span>:
<a href="https://i.stack.imgur.com/Y6nFb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Y6nFb.png" alt="path <span class="math-container">$\Gamma$</span>" /></a></p>
<p>You have as follow:</p>
<p><span class="math-container">$$\frac{1}{2\pi i}\int_{\Gamma} f(z)dz = \text{Res}(f,i)$$</span></p>
<p>'cause, of the two poles of <span class="math-container">$f(z)$</span>, only <span class="math-container">$i$</span> is inside <span class="math-container">$\Gamma$</span>. So, <span class="math-container">$\int_{\Gamma} f(z)dz = \int_{\gamma_1} f(z)dz + \int_{\gamma_2} f(z)dz$</span>. Let's do it one by one:</p>
<p><span class="math-container">$$\int_{\gamma_1} f(z)dz = \int_{\gamma_1} \frac{e^{iz}}{1+z^2}dz = \int_{-R}^{R} \frac{e^{it}}{1+t^2}dt$$</span></p>
<p>because we have parametrized <span class="math-container">$\gamma_1$</span> as <span class="math-container">$\gamma_1(t) = t$</span>, for <span class="math-container">$t\in[-R,R]$</span> and <span class="math-container">$dz = dt$</span>.</p>
<p><span class="math-container">$$\int_{-R}^{R} \frac{e^{it}}{1+t^2}dt = \int_{-R}^{R} \frac{\cos(t) + i\sin(t)}{1+t^2}dt = \int_{-R}^{R} \frac{\cos(t)}{1+t^2}dt + i\int_{-R}^{R} \frac{\sin(t)}{1+t^2}dt$$</span>
But the second of these two integrals is zero, because <span class="math-container">$\frac{\sin(t)}{1+t^2}$</span> is an odd function. All right, let's take care of the integral</p>
<p><span class="math-container">$$
\int_{\gamma_2} f(z)dz = \int_{0}^{\pi} \frac{e^{iRe^{it}}}{1 + R^2e^{i2t}}iRe^{it} dt
$$</span></p>
<p>'cause, now, <span class="math-container">$\gamma_2(t) = Re^{it}$</span> for <span class="math-container">$t \in [0,\pi]$</span>, so <span class="math-container">$z=Re^{it}$</span> and <span class="math-container">$dz = iRe^{it}dt$</span>. Then, we have:</p>
<p><span class="math-container">$$
\int_{0}^{\pi} \frac{e^{iRe^{it}}}{1 + R^2e^{i2t}}iRe^{it} dt = \int_{0}^{\pi} \frac{e^{iR(\cos(t)+i\sin(t))}}{1 + R^2e^{i2t}}iRe^{it}dt =
\int_{0}^{\pi} \frac{e^{iR\cos(t)} e^{-R\sin(t)}}{1 + R^2e^{i2t}}iRe^{it}dt,\\
\left|\int_{0}^{\pi} \frac{e^{iR\cos(t)} e^{-R\sin(t)}}{1 + R^2e^{i2t}}iRe^{it}dt\right| \leq \int_{0}^{\pi} \left|\frac{e^{iR\cos(t)} e^{-R\sin(t)}}{1 + R^2e^{i2t}}iRe^{it}\right|dt = \int_{0}^{\pi} \frac{R}{\left|1 + R^2e^{i2t}\right|}e^{-R\sin(t)}dt
$$</span>
and,
<span class="math-container">$$
\lim_{R\rightarrow +\infty} \int_{0}^{\pi} \frac{R}{\left|1 + R^2e^{i2t}\right|}e^{-R\sin(t)}dt = 0
$$</span></p>
<p>So, summing up, we have</p>
<p><span class="math-container">$$
\int_{\Gamma} f(z)dz = \int_{-R}^{R} \frac{\cos(t)}{1+t^2}dt
$$</span></p>
<p>and, trivially,</p>
<p><span class="math-container">$$
\lim_{R\rightarrow +\infty} \int_{-R}^{R} \frac{\cos(t)}{1+t^2}dt = \int_{-\infty}^{\infty} \frac{\cos(t)}{1+t^2}dt
$$</span></p>
<p>just what we want to know. Computing Res(<span class="math-container">$f, i$</span>) is simple 'cause <span class="math-container">$i$</span> is a simple pole (order 1), so you just have to differentiate the denominator and evaluate everything at <span class="math-container">$z=i$</span>. Finally, by the residue theorem, we have, for <span class="math-container">$R\rightarrow +\infty$</span>:</p>
<p><span class="math-container">$$
\frac{1}{2\pi i} \int_{\Gamma} f(z)dz = \frac{1}{2\pi i} \int_{-\infty}^{\infty} \frac{\cos(t)}{1+t^2}dt = \operatorname{Res}(f, i) = \frac{e^{iz}}{(1+z^2)'}\Bigg\rvert_{z=i} = \frac{e^{iz}}{2z}\Bigg\rvert_{z=i} =\frac{e^{-1}}{2i}\\
\longrightarrow \int_{-\infty}^{\infty} \frac{\cos(t)}{1+t^2}dt = \frac{\pi}{e}
$$</span></p>
|
1,471,415 | <p>I am failing to understand how to compute the derivative of a few exponential functions. Let's start with this one:</p>
<p>$$
v = 1 - e^{-t/\tau}
$$</p>
<p>The derivative is</p>
<p>$$
\frac{dv}{dt} = \frac{1-v}{\tau}
$$</p>
<p>Can someone walk me through this? If this is explained somewhere else, I'd love to know where.</p>
| zahbaz | 176,922 | <p>$$\frac{dv}{dt}= \frac{d}{dt} (1 - e^{-t/\tau})$$</p>
<p>The derivative of $1$ is zero, so</p>
<p>$$= -\frac{d}{dt}e^{-t/\tau}$$</p>
<p>Using the chain rule,</p>
<p>$$= -e^{-t/\tau}\cdot\frac{d}{dt}\frac{-t}{\tau}$$
$$= -e^{-t/\tau}\cdot\frac{-1}{\tau}$$
$$= \frac{e^{-t/\tau}}{\tau}$$</p>
<p>Now, we know that $v=1 - e^{-t/\tau}$. Rearranging gives $e^{-t/\tau}=1-v$. Substitute above to get</p>
<p>$$= \frac{1-v}{\tau}$$</p>
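A finite-difference check of the identity $dv/dt = (1-v)/\tau$; the value of $\tau$ and the test point are arbitrary choices of mine, not from the post.

```python
import math

tau = 2.5   # arbitrary positive time constant (my choice)

def v(t):
    return 1 - math.exp(-t / tau)

t = 1.3     # arbitrary test point (my choice)
numeric = (v(t + 1e-6) - v(t - 1e-6)) / 2e-6
closed_form = (1 - v(t)) / tau          # dv/dt = (1 - v)/tau
assert abs(numeric - closed_form) < 1e-9
print("derivative identity confirmed")
```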
|
<p>I don't know whether the identity for $\sin(a-b)$ applies in this case, or whether some other method is needed. Does anyone have an idea that avoids derivatives and L'Hôpital's rule? Thank you.</p>
<p>$$\lim_{x\to0}\frac{\sin(x^2+\frac{1}{x})-\sin\frac{1}{x}}{x}$$</p>
| mick | 39,261 | <p>Perhaps not an elegant proof but I considered this.</p>
<p>Use a Taylor series for $\sin$.</p>
<p>$\sin(x)=x+ a_1x^3 + a_2x^5 + ...$</p>
<p>Let $A$ be the value of the limit.</p>
<p>Then we get
$A=\dfrac{x^2+\frac{1}{x}-\frac{1}{x}+a_1(x^2+\frac{1}{x})^3-a_1(\frac{1}{x})^3+\ldots}{x} = \dfrac{x^2+a_1(x^2+\frac{1}{x})^3-a_1(\frac{1}{x})^3+\ldots}{x}$.</p>
<p>This equals $A =\dfrac{x^2}{x}+\dfrac{a_1((x^3+1)^3-1)}{x^4}+\dfrac{a_2((x^3+1)^5-1)}{x^6}+\ldots$</p>
<p>Now use big $O$ notation to rewrite as follows : $A=\dfrac{O(x^2)}{x}+\dfrac{a_1O(x^9)}{x^4}+\dfrac{a_2O(x^{15})}{x^6}+\ldots$ hence $A=0 + a_1 0+a_20 + \ldots =0$. To justify that infinite sum notice that each limit $O(x^a)/x^b$ goes faster to $0$ than $x$ does.</p>
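Numerically, the quotient does tend to $0$. The bound used below, $|q(x)|\le x$, comes from the sum-to-product identity $\sin A-\sin B = 2\cos\frac{A+B}{2}\sin\frac{A-B}{2}$; it is my own cross-check, separate from the Taylor argument above.

```python
import math

def q(x):
    # the difference quotient from the question
    return (math.sin(x * x + 1 / x) - math.sin(1 / x)) / x

# sum-to-product gives q(x) = 2*cos(x**2/2 + 1/x)*sin(x**2/2)/x, so |q(x)| <= x
for x in [1e-2, 1e-3, 1e-4]:
    assert abs(q(x)) <= x + 1e-12
print("q(x) -> 0 as x -> 0")
```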
|
976,392 | <p>I would need a proof that <span class="math-container">$n \left(1-p^{\frac{1}{n}}\right)$</span> is increasing in <span class="math-container">$n \in \mathbf{N}$</span> for any <span class="math-container">$p \in (0,1)$</span>.</p>
<h3>Context</h3>
<p>I am working on a larger question and this is the last missing piece. But with this I'm a bit out of ideas (I tried the difference between <span class="math-container">$n+1$</span> and <span class="math-container">$n$</span> and also derivative wrt <span class="math-container">$n$</span>, but neither gave anything that seemed useful).
I know how to find the limit of this with L'Hospital's rule (it is <span class="math-container">$-\log p$</span>). But it does not seem to help in proving the monotonicity.</p>
| mvggz | 167,171 | <p>0 < p < 1 : let f : x-> x*(1 - $p^\frac{1}{x}$)</p>
<p>$f'(x) = (1-p^\frac{1}{x}) +\frac{1}{x}*ln(p)*p^\frac{1}{x} = (1-p^\frac{1}{x}) -\frac{1}{x}*ln(\frac{1}{p})*p^\frac{1}{x} $ = $1 - [1+\frac{1}{x}*ln(\frac{1}{p})]*p^\frac{1}{x}$</p>
<p>Let $ h : u-> 1 - [1 + u*ln(\frac{1}{p})]*p^u = f'(\frac{1}{u}) $</p>
<p>$ h'(u) = u*p^u*[ln(p)]^2 >0 $, when u>0 </p>
<p>h increases, h(0) = 0, so for u >0, h(u)>0 => $f'(x)$ = $h(\frac{1}{x}) > 0 $
So f increases relatively to x, </p>
<p>n+1 > n => f(n+1) > f(n) => Your sequence increases</p>
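A quick numerical sanity check of both the monotonicity in $n$ and the limit $-\log p$ mentioned in the question (the sample values of $p$ are my own choice):

```python
import math

def F(n, p):
    return n * (1 - p ** (1.0 / n))

for p in [0.1, 0.5, 0.9]:
    vals = [F(n, p) for n in range(1, 200)]
    assert all(a < b for a, b in zip(vals, vals[1:]))   # strictly increasing in n
    assert abs(vals[-1] - (-math.log(p))) < 0.1         # tends to -log p
print("increasing toward -log p for all tested p")
```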
|
4,625,921 | <p><a href="https://i.stack.imgur.com/mIKMW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/mIKMW.png" alt="Two polar curves." /></a></p>
<p>I am looking for the area outside the circle <span class="math-container">$r = 3\cos\Theta$</span> and inside the limaçon <span class="math-container">$r = 1 + \cos\Theta$</span>:</p>
<p><span class="math-container">$$A = \frac{1}{2}\int_{\frac{\pi }{3}}^{\frac{5\pi }{3}}\left[(1+\cos\Theta )^{2}-(3\cos\Theta )^{2}\right]\,\mathrm{d}\Theta = -2\pi \,\text{units}^{2} $$</span></p>
<p>If the opposite situation is used, I simply reverse the limits and curves. But since the lower limit must be smaller, I use coterminal angles to change one of the limits. Here, after the reversal, I changed <span class="math-container">$5\pi/3$</span> to <span class="math-container">$-\pi/3$</span>.</p>
<p><span class="math-container">$$A = \frac{1}{2}\int_{-\frac{\pi }{3}}^{\frac{\pi }{3}}\left[(3\cos\Theta )^{2}-(1+\cos\Theta )^{2}\right]\,\mathrm{d}\Theta = \pi \,\text{units}^{2} $$</span></p>
<p>I can't understand why the area I'm getting is negative for the first situation. I get that maybe I have the limits or the curves in reverse but I'm following the correct limits and placement of the curves. What is the general rule for determining the curves and limits so that the result is always positive?</p>
<p>I followed the convention that the outer curve in the graph is the first radius used and the inner curve is the second radius. I've also made sure that the limits start and end appropriately based on the graph. It worked for the opposite situation, as shown in this question.</p>
<p><a href="https://math.stackexchange.com/questions/3519906/find-the-area-inside-r-3-cos-theta-and-outside-r-1-cos-theta">Find the area inside $r=3\cos\Theta $ and outside $r = 1+\cos\Theta $</a></p>
<p>I've seen other problems where that rule works. I'm just wondering why it doesn't work here.</p>
| Sammy Black | 6,509 | <p><strong>EDIT:</strong> Original answer didn't address radial function taking on negative values.</p>
<hr />
<p>The positive direction for the radial polar variable <span class="math-container">$r$</span> is <em>outward</em> so if you want the area of the region <em>outside</em> the polar curve <span class="math-container">$r = f(\theta) \geq 0$</span> and inside the curve <span class="math-container">$r = g(\theta)$</span>, i.e.
<span class="math-container">$$
f(\theta) \leq r \leq g(\theta)
$$</span>
for all <span class="math-container">$\alpha \leq \theta \leq \beta$</span>, then you calculate the definite integral
<span class="math-container">$$
A = \frac12 \int_\alpha^\beta \bigl[ g(\theta)^2 - f(\theta)^2 \bigr] \, \mathrm{d}\theta.
$$</span></p>
<hr />
<p>Let <span class="math-container">$f(\theta) = 3\cos\theta$</span> and <span class="math-container">$g(\theta) = 1 + \cos\theta$</span>.</p>
<p>Both functions exhibit the symmetry <span class="math-container">$f(2\pi - \theta) = f(\theta)$</span> and <span class="math-container">$g(2\pi - \theta) = g(\theta)$</span> for all <span class="math-container">$\theta$</span>, which is equivalent to the fact that their polar graphs have a reflection symmetry across <span class="math-container">$\theta = \pi$</span>, the <span class="math-container">$x$</span>-axis.</p>
<p>Thus, we can calculate the total area for
<span class="math-container">$\frac\pi3 \leq \theta \leq \frac{5\pi}3$</span>
by calculating the area for
<span class="math-container">$\frac\pi3 \leq \theta \leq \pi$</span>
and doubling the result.</p>
<p>However, there's still an issue: <span class="math-container">$f(\theta) \leq 0$</span> for <span class="math-container">$\frac\pi2 \leq \theta \leq \pi$</span>, so the points wrap around the bottom half of the circle. The naive application of the integral formula gives negative values over this domain. See this picture:</p>
<p><a href="https://i.stack.imgur.com/rYNTD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rYNTD.png" alt="Cardioid and circle with radial lines." /></a></p>
<p>Instead, we have to calculate the integral in two pieces:
<span class="math-container">\begin{array}{ccr}
\tfrac\pi3 \leq{} \theta \leq \tfrac\pi2
& \quad\leadsto & f(\theta) \leq{} r \leq g(\theta) \\
\tfrac\pi2 \leq{} \theta \leq \pi
& \quad\leadsto & 0 \leq{} r \leq g(\theta)
\end{array}</span></p>
<p><a href="https://i.stack.imgur.com/rUFMm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/rUFMm.png" alt="Cardioid and circle, closeup." /></a></p>
<p>Thus, the total area is
<span class="math-container">\begin{align}
A &= 2 \cdot \frac12 \biggl( \int_{\pi/3}^{\pi/2}
\bigl[ g(\theta)^2 - f(\theta)^2 \bigr] \, \mathrm{d}\theta
\;+\; \int_{\pi/2}^{\pi}
g(\theta)^2 \, \mathrm{d}\theta \biggr) \\
&= \int_{\pi/3}^{\pi/2}
\bigl[ (1 + \cos\theta)^2 - (3\cos\theta)^2 \bigr] \, \mathrm{d}\theta
\;+\; \int_{\pi/2}^{\pi}
(1 + \cos\theta)^2 \, \mathrm{d}\theta
\end{align}</span></p>
<p>You can probably take it from here.</p>
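The two remaining integrals can be evaluated numerically; the reference value $\pi/4$ below is my own hand evaluation of them, stated here as an assumption to be checked rather than as part of the answer.

```python
import math

def midpoint(fn, a, b, n=20000):
    # composite midpoint rule
    h = (b - a) / n
    return h * sum(fn(a + (i + 0.5) * h) for i in range(n))

g = lambda t: (1 + math.cos(t)) ** 2   # limacon radius, squared
f = lambda t: (3 * math.cos(t)) ** 2   # circle radius, squared

area = midpoint(lambda t: g(t) - f(t), math.pi / 3, math.pi / 2) \
     + midpoint(g, math.pi / 2, math.pi)
assert area > 0
assert abs(area - math.pi / 4) < 1e-6   # pi/4: my own hand evaluation
print(round(area, 6))
```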
|
2,802,035 | <p>(a) All roots of $f(x)$ are real.<br>
(b) $f(x)$ has one real root and $2$ complex roots.<br>
(c) $f(x)$ has two roots in $(-1,1).$<br>
(d) $f(x)$ has at least one negative root.</p>
<p>I thought of solving this question using Descartes Rule. 'c' is negative and 'a' is turning out to be positive. 'b' should be negative but I'm not sure about my work.<br>
Also any literature or links regarding this concept will be appreciated. </p>
| nonuser | 463,553 | <p>So $f(1) >0$ and $f(-1)>0$ and $f(0)<0$: </p>
<p>\begin{array}{cccc}
x & -1 & 0 & 1 \\
f(x) & + & - & + \\
\end{array}</p>
<p>So $a)$ and $c)$ and $d)$ are true.</p>
|
265,037 | <p>Let $\mathbb{F}_p$ be a finite field, $A=\{a_1,\dots,a_k\}\subset\mathbb{F}_p^*$ a $k$-element set, for $k<p$. $\mathfrak{S}_k=$permutation gp.</p>
<blockquote>
<p><strong>Question.</strong> Is it true there is always a $\pi\in\mathfrak{S}_k$ such that the following are pair-wise distinct in $\mathbb{F}_p$?
$$a_{\pi(1)}, \,a_{\pi(1)}+a_{\pi(2)},\,a_{\pi(1)}+a_{\pi(2)}+a_{\pi(3)},\,\dots, \,a_{\pi(1)}+\cdots+a_{\pi(k)}.$$</p>
</blockquote>
<p><strong>EDIT.</strong> Due to Julian Rosen's example, I change $A$ to be a subset of $\mathbb{F}_p^*$ (non-zero elements).</p>
| Salvo Tringali | 16,537 | <p>I first learned about this problem from Éric Balandraud (in 2013). So I wrote to him a couple of days ago, and he has just sent me an e-mail explaining that the question dates back (at least) to 1971. Éric attributes it to Erdős and Graham: He hasn't provided me with a reference (he is in the desert these days), but I will come back and update this answer as soon as I hear from him again.</p>
<p><em>Update (March 31, 2017).</em> I've just got a new message from Éric Balandraud. Below is an excerpt.</p>
<hr>
<p>Here is the reference:</p>
<blockquote>
<p>R. Graham, <em>On sums of integers taken from a fixed sequence</em>, Proc.
Wash. State Univ. Conf. on Number Theory (1971), 22-40.</p>
</blockquote>
<p>The question is stated as Conjecture 10 on page 36. It also appears in:</p>
<blockquote>
<p>P. Erdős and R. Graham, <em>Old and new problems and results in combinatorial
number theory</em>, Monographie N28 de L'Enseignement Mathématiques,
Université de Genève (1980).</p>
</blockquote>
<p>The conjecture is stated there on page 95.</p>
<hr>
|
<p>The Stirling number of the second kind $S(n,k)$ counts the ways to partition a set of $n$ objects into $k$ non-empty subsets. I want to constrain this so that I can count the partitions of $n$ objects into $k$ non-empty subsets in which at least $p$ of the $k$ subsets have size $r$. When $p=k$ the answer is the associated Stirling number of the second kind $S_r(n,k)$, but I was wondering whether there is a general expression for any $p$. In case there isn't, I would be glad to find an expression for $p=1$ and $r=1$. Thank you.</p>
<p>An example: the number of partitions of 4 objects into 2 subsets is S(4,2)=7. I want to count only the partitions that contain at least p=1 subset of size r=1. So in this case the answer is 4, because I don't want to count {1 2 | 3 4}, {1 3 | 2 4} and {1 4 | 2 3}. </p>
| epi163sqrt | 132,007 | <p>Here is an approach based upon generating functions. The following can be found in section II.3.1 in <em><a href="http://algo.inria.fr/flajolet/Publications/book.pdf" rel="nofollow noreferrer">Analytic combinatorics</a></em> by P. Flajolet and R. Sedgewick:</p>
<blockquote>
<p>The class $S^{(A,B)}$ of set partitions with block sizes in $A\subseteq \mathbb{Z}_{\geq 1}$ and with a number of blocks that belongs to $B$ has exponential generating function</p>
<p>\begin{align*}
S^{(A,B)}(z)=\beta(\alpha(z))\qquad\text{where}\qquad \alpha(z)=\sum_{a\in A}\frac{z^a}{a!},\quad \beta(z)=\sum_{b\in B}\frac{z^b}{b!}\tag{1}
\end{align*}</p>
</blockquote>
<p>We decompose the problem into two parts and use (1) to derive generating functions for each part.</p>
<blockquote>
<p><strong>First part:</strong> We consider $p$ partitions with size $r$. </p>
<p>\begin{align*}
A_1=\{r\}, B_1=\{p\}\qquad\text{where}\qquad\alpha_1(z)=\frac{z^r}{r!},\beta_1(z)=\frac{z^p}{p!}
\end{align*}</p>
<p>We obtain a generating function
\begin{align*}
\beta_1\left(\alpha_1\left(z\right)\right)=\frac{1}{p!}\left(\frac{z^r}{r!}\right)^p=\frac{z^{rp}}{p!\left(r!\right)^p}\tag{2}
\end{align*}</p>
<p><strong>Second part:</strong> We consider $k-p$ partitions with size $\geq 1$.</p>
<p>\begin{align*}
A_2={\mathbb{Z}}_{\geq 1}, B_2=\{k-p\}\qquad\text{where}\qquad
\alpha_2(z)=\sum_{n=1}^\infty \frac{z^n}{n!},\beta_2(z)=\frac{z^{k-p}}{(k-p)!}
\end{align*}</p>
<p>We obtain a generating function
\begin{align*}
\beta_2\left(\alpha_2\left(z\right)\right)&=\frac{1}{(k-p)!}\left(e^z-1\right)^{k-p}
=\sum_{n=k-p}^\infty {n\brace k-p}\frac{z^n}{n!}\tag{3}
\end{align*}
Note the coefficients of (3) are the <em><a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling numbers of the second kind</a></em>.</p>
</blockquote>
<p>Since the original problem is the cross product of the two parts, the resulting generating functions is the product of the generating functions in (2) and (3). We denote with
\begin{align*}
a_{r,p}(n,k)
\end{align*}
the number of distributions of $n$ elements into $k$ partitions with at least $p$ partitions having exactly $r$ elements.</p>
<blockquote>
<p>We obtain for $1\leq p\leq k\leq n, 1\leq r\leq \left\lfloor\frac{n}{r}\right\rfloor$
\begin{align*}
\beta_1\left(\alpha_1\left(z\right)\right)\beta_2\left(\alpha_2\left(z\right)\right)
&=\sum_{n=k-p}^\infty {n\brace k-p}\frac{z^n}{n!}\cdot\frac{z^{rp}}{p!\left(r!\right)^p}\\
&=\frac{1}{p!\left(r!\right)^p}\sum_{n=k-p+rp}^\infty{n-rp\brace k-p}\frac{z^n}{(n-rp)!}\\
&=\sum_{n=k+(r-1)p}^\infty\frac{(rp)!}{p!(r!)^p}\binom{n}{rp}{n-rp\brace k-p}\frac{z^n}{n!}\\
&=\sum_{n=k+(r-1)p}^\infty a_{r,p}(n,k)\frac{z^n}{n!}
\end{align*}</p>
<p>We conclude:
\begin{align*}
\color{blue}{a_{r,p}(n,k)=\frac{(rp)!}{p!(r!)^p}\binom{n}{rp}{n-rp\brace k-p}}\tag{4}
\end{align*}</p>
</blockquote>
<p><strong>Example:</strong> Calculating OPs example with $n=4,k=2,p=1$ and $r=1$ using (4) results in
\begin{align*}
a_{1,1}(4,2)=\binom{4}{1}{3\brace 1}=4
\end{align*}
in accordance with OPs calculation.</p>
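The closed form can be checked against a brute-force count for the OP's example. This sketch checks only that one case ($n=4,k=2,p=1,r=1$); the helper names and the inclusion-exclusion formula for the Stirling numbers are my own choices, not from the answer.

```python
from itertools import product
from math import comb, factorial

def stirling2(n, k):
    # inclusion-exclusion formula for Stirling numbers of the second kind
    return sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)) // factorial(k)

def count_partitions(n, k, r, p):
    # brute force: label elements with block indices 0..k-1, keep labelings
    # using all k blocks with at least p blocks of size r, then divide by k!
    total = 0
    for lab in product(range(k), repeat=n):
        sizes = [lab.count(j) for j in range(k)]
        if min(sizes) > 0 and sum(s == r for s in sizes) >= p:
            total += 1
    return total // factorial(k)

# formula (4) at the OP's example: n=4, k=2, p=1, r=1
n, k, r, p = 4, 2, 1, 1
formula = factorial(r * p) // (factorial(p) * factorial(r) ** p) \
    * comb(n, r * p) * stirling2(n - r * p, k - p)
assert formula == 4 == count_partitions(n, k, r, p)
print(formula)  # prints 4
```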
|
2,471,546 | <p>In order to find a shorter proof for this thread : </p>
<p><a href="https://math.stackexchange.com/questions/2470262/find-all-functions-fmfn2-1-fnfmn-fm-n/2470409?noredirect=1#comment5106478_2470409">Find all functions $f(m)[(f(n))^2-1]=f(n)[f(m+n)-f(m-n)]$</a></p>
<p><em>Rem: just to be clear, $\mathbb N^*$ stands for $\mathbb N\setminus\{0\}$</em></p>
<p><br/>
I'm interested in whether it is possible to establish directly that</p>
<p>If $F_n=\left(\alpha\,a^n + \beta\, \dfrac{(-1)^n}{a^n}\right)\in\mathbb N^*$ for all $n>0$, with $(\alpha,\beta)\in\mathbb R^2$ and $a\in\mathbb N^*$, </p>
<p>$\implies (a=1) \text{ or }(\beta=0)$.</p>
<p><br/>
A possible additional condition could be to set $F_1=a$ also.</p>
<p>This seems expected because intuitively $\beta$ has to be divisible by any $a^n$, but when trying to prove it, I only manage to show that $\alpha,\beta$ are rational, but cannot find the decisive blow.</p>
<p>I'm sure I'm missing something obvious, but I can't see it...</p>
| stochasticboy321 | 269,063 | <p>Note that $a \alpha = F_1 + \frac{\beta}{a},$ which implies that $$F_n = F_1 a^{n-1} + \beta \left( \frac{a^{2n-2} +(-1)^n}{a^n} \right).$$ Since both $F_n$ and $F_1 a^{n-1}$ are naturals for every $n$, we must have that for every $n$, $$\beta \left( \frac{a^{2n-2} + (-1)^n}{a^n} \right) \in \mathbb{Z}.$$ In particular, this would hold for all odd $n$. Now note that for every $a, n >1 \in \mathbb{N},$ $a^n$ is relatively prime to $a^{2n-2} - 1.$ Indeed, if $d$ divides both $a^n$ and $a^{2n-2} - 1,$ then it must also divide $a^{n-2} a^n - (a^{2n-2} - 1) = 1,$ forcing $d = 1$. </p>
<p>Thus, there are infinitely many $n$ such that $\beta /a^n$ is an integer. But, since $a>1,$ for large enough $n$ we have $|\beta/ a^{n}| <1, $ forcing $\beta = 0$.</p>
|
3,295,973 | <p>Let <span class="math-container">$(\Omega,\mathcal A,\mu)$</span> be a measure space, <span class="math-container">$p,q\ge1$</span> with <span class="math-container">$p^{-1}+q^{-1}=1$</span> and <span class="math-container">$f:\Omega\to\mathbb R$</span> be <span class="math-container">$\mathcal A$</span>-measurable with <span class="math-container">$$\int|fg|\:{\rm d}\mu<\infty\;\;\;\text{for all }g\in L^q(\mu)\tag1.$$</span> By <span class="math-container">$(1)$</span>, <span class="math-container">$$L^q(\mu)\ni g\mapsto fg\tag2$$</span> is a bounded linear fuctional and hence there is a unique <span class="math-container">$\tilde f\in L^p(\mu)$</span> with <span class="math-container">$$(f-\tilde f)g=0\;\;\;\text{for all }g\in L^q(\mu)\tag3.$$</span></p>
<blockquote>
<p>Can we conclude that <span class="math-container">$f=\tilde f$</span>?</p>
</blockquote>
<p><strong>EDIT</strong>: As we can see from <a href="https://math.stackexchange.com/a/3223523/47771">this answer</a>, we need to impose further assumptions; but which do we really need?</p>
| Community | -1 | <p>Before doing anything, you know that <span class="math-container">$x\ge0$</span> because the real square root function is non-negative.</p>
<p>By inspection <span class="math-container">$x=1$</span>, which is the only solution.</p>
|
3,295,973 | <p>Let <span class="math-container">$(\Omega,\mathcal A,\mu)$</span> be a measure space, <span class="math-container">$p,q\ge1$</span> with <span class="math-container">$p^{-1}+q^{-1}=1$</span> and <span class="math-container">$f:\Omega\to\mathbb R$</span> be <span class="math-container">$\mathcal A$</span>-measurable with <span class="math-container">$$\int|fg|\:{\rm d}\mu<\infty\;\;\;\text{for all }g\in L^q(\mu)\tag1.$$</span> By <span class="math-container">$(1)$</span>, <span class="math-container">$$L^q(\mu)\ni g\mapsto fg\tag2$$</span> is a bounded linear fuctional and hence there is a unique <span class="math-container">$\tilde f\in L^p(\mu)$</span> with <span class="math-container">$$(f-\tilde f)g=0\;\;\;\text{for all }g\in L^q(\mu)\tag3.$$</span></p>
<blockquote>
<p>Can we conclude that <span class="math-container">$f=\tilde f$</span>?</p>
</blockquote>
<p><strong>EDIT</strong>: As we can see from <a href="https://math.stackexchange.com/a/3223523/47771">this answer</a>, we need to impose further assumptions; but which do we really need?</p>
| mlchristians | 681,917 | <p>In your title you ask, is <span class="math-container">$x = -2$</span> a solution of <span class="math-container">$\sqrt{2-x} = x$</span>?</p>
<p>To answer this question, you simply need to check the answer by substituting <span class="math-container">$x=-2$</span> into the original equation and seeing whether a true statement results. In this case, we get</p>
<p><span class="math-container">$$ \sqrt{2-(-2)} = \sqrt{4} = 2$$</span></p>
<p>which does not equal the RHS <span class="math-container">$x = -2$</span>.</p>
<p>Therefore <span class="math-container">$x=-2$</span> is not a solution of the original equation. </p>
<p>REMARK: Someone may try to argue that <span class="math-container">$-2$</span> is a square root of <span class="math-container">$4$</span> because <span class="math-container">$(-2)^{2}$</span> is <span class="math-container">$4$</span>; and indeed it is. But when a term of an equation is written as <span class="math-container">$\sqrt{2-x}$</span>, it means to take the positive square root of <span class="math-container">$2-x$</span>; the term would be written as <span class="math-container">$-\sqrt{2-x}$</span> if the negative square root were meant. (Just like <span class="math-container">$\sqrt{4} = 2$</span> and <span class="math-container">$-\sqrt{4} = -2$</span>.)</p>
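<p>A quick numerical check (a sketch in Python; the helper name is mine, and <code>math.sqrt</code> returns exactly the principal root discussed above) confirms both points:</p>

```python
import math

def is_solution(x):
    """Check whether x satisfies sqrt(2 - x) == x, using the principal (non-negative) root."""
    if 2 - x < 0 or x < 0:   # the radicand must be non-negative, and so is the principal root
        return False
    return math.isclose(math.sqrt(2 - x), x)

lhs_at_minus_two = math.sqrt(2 - (-2))          # sqrt(4) = 2.0, which is not -2
checks = (lhs_at_minus_two, is_solution(-2), is_solution(1))
```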
|
647,757 | <blockquote>
<p>How many integers $n$ are there such that $\sqrt{n}+\sqrt{n+7259}$ is an integer? </p>
</blockquote>
<p>No idea on this one.</p>
| dmk | 88,878 | <p>(This assumes $n \in \mathbb{Z}$, $n \geq 0$.)</p>
<p>Note first that if $\sqrt{n} + \sqrt{n+7259} = m \in \mathbb{Z}$, then $\sqrt{n+7259} - \sqrt{n} = 7259/m$ is rational, so both square roots are rational and hence integers; thus $n$ and $n + 7259$ must both be perfect squares.</p>
<p>First, note that $k^2 + (2k + 1) = (k+1)^2$; that is, the difference between two consecutive squares is an odd number. This implies that the difference between <em>any</em> two perfect squares is a sum of consecutive odd numbers.</p>
<p>We start by finding the largest $n$ that satisfies $\sqrt{n} + \sqrt{n+7259} \in \mathbb{Z}$. Let $n = k^2$. Now let $2k+1 = 7259$. This means $k=3629$, and so $n= 3629^2=13169641$. This must be the largest solution, as the difference between any larger consecutive squares would exceed $7259$.</p>
<p>Now assume that the difference between the two squares is the sum of consecutive odd numbers, </p>
<p>$$\begin{aligned}
\ (2k+1) + (2k + 3) + \ldots + (2k + [2z -1]) &= 2kz + (1 + 3 + \ldots +[2z - 1]) \\
\ &= 2kz + z^2 \\
\ &= (2k + z)z \\
\end{aligned}$$</p>
<p>And since $(2k+z)z = 7259$, we need only check values of $z$ such that $z$ divides $7259$ — i.e., $z \in \{7, 17, 61\}$. (Any product of these numbers will yield $z > \sqrt{7259}$ and therefore $k < 1$.) If $z = 7$, then $2k+7 = 17\cdot 61$, and so $k = 515 \implies n = 265225$. Letting $z = 17$ and $61$ yields $n = 42025$ and $841$, respectively.</p>
<p>Therefore, the four solutions to the problem are $n \in \{841, 42025, 265225, 13169641\}$.</p>
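<p>The four solutions can be double-checked by brute force; the sketch below (the helper is mine, not part of the original argument) searches for pairs of perfect squares differing by 7259, since both square roots must be integers for the sum to be an integer:</p>

```python
import math

def solutions(d=7259):
    """Find all n >= 0 with sqrt(n) + sqrt(n + d) an integer, i.e. n = a^2 and
    n + d = b^2 with b^2 - a^2 = d."""
    found = []
    for a in range(0, d):            # b^2 = a^2 + d forces a < b <= a + d
        b2 = a * a + d
        b = math.isqrt(b2)
        if b * b == b2:
            found.append(a * a)
    return found

result = solutions()
```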
|
3,887,526 | <p>I need to prove <span class="math-container">$\frac{|x+y+z|}{1+|x+y+z|} \le \frac{|x|}{1+|y|+|z|}+\frac{|y|}{1+|x|+|z|}+\frac{|z|}{1+|x|+|y|}$</span>. I've tried to use triangle inequality or to explore the form of <span class="math-container">$(a+b+c)^2$</span> but it won't get me anywhere. I would be grateful for some suggestions.</p>
| Albus Dumbledore | 769,226 | <p>as <span class="math-container">$|x+y+z|\le |x|+|y|+|z|$</span></p>
<p>let <span class="math-container">$|x|=a,|y|=b,|z|=c$</span>
it suffices to prove <span class="math-container">$$\sum \frac{a}{1+b+c}\ge \sum \frac{a+b+c}{1+a+b+c}$$</span><br />
indeed by C-S/titu's lemma; <span class="math-container">$$\sum \frac{a}{1+b+c}=\sum \frac{a^2}{a+ba+ca}\ge \frac{{(a+b+c)}^2}{a+b+c+2(ab+bc+ca)}\ge \frac{a+b+c}{1+a+b+c}$$</span></p>
<p>Here we used <span class="math-container">$$\frac{2ab+2bc+2ca}{a+b+c}\le a+b+c$$</span> which is just <span class="math-container">$a^2+b^2+c^2\ge 0$</span></p>
|
351,815 | <p>Having trouble understanding this. Is there anyway to prove it?</p>
| Sunil B N | 71,150 | <p>To explain it more precisely, $n!$ eventually grows much faster than an exponential $a^n$, because an ever larger factor is multiplied into the product at each step: $$(n+1)!=1 \cdot 2 \cdots n \cdot (n+1).$$ But in the case of the exponential function, $$a^{n+1} = a \cdot a \cdots a,$$ the factor $a$ remains constant.</p>
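<p>The comparison can be seen directly by tracking the ratio $a^n/n!$ (a quick sketch with the arbitrary choice $a=5$; variable names are mine): once $n$ exceeds $a$, each step multiplies the ratio by $a/(n+1) < 1$, so it collapses to $0$.</p>

```python
from math import factorial

a = 5
ratios = [a**n / factorial(n) for n in range(0, 31)]
# for n >= a the ratio strictly decreases, and the tail is already astronomically small
tail = ratios[30]
```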
|
2,028,594 | <p>Given a simple undirected connected graph. What is the minimum number of edge-disjoint paths needed to cover all edges of the graph? I have only found information about the vertex-covering one.</p>
<p>My friend suggests that this value equals <em>u/2</em>, with <em>u</em> being the number of vertices of odd degree, or 1 if there are no such vertices.</p>
<p>Can anyone confirm if his assumption is correct or not? </p>
| user392700 | 392,700 | <p>In a directed graph, we can transform this into a flow problem with each edge having the lower-bound capacity of $1$ and all link to a source and a sink with edges with lower-bound capacity $0$. The least number of paths will be the minimum flow value. Is this idea correct and if so can it be generalized to 'undirected graph'? </p>
|
2,123,926 | <p>I have a triangle with the coordinates (0, 20), (15, -10), (-15, -10). So the centre of the triangle is 0,0. </p>
<p>I want to rotate this by one degree. I'm using the formula:</p>
<pre><code>x' = cos(theta)*x - sin(theta)*y
y' = sin(theta)*x + cos(theta)*y
</code></pre>
<p>Yet when I work out the coordinate for angle rotation of 1, I get completely inexplicable coordinates:</p>
<pre><code>(-16.82941969615793, -3.3554222481086295), (16.51924443610106, 8.497441825246927), (0.31017526005686946, -5.142019577138298)
</code></pre>
<p>Which when plotted out looks nothing like the original triangle being rotated by 1 degree. I've made a simple demo in javascript <a href="https://jsfiddle.net/z8jemr5u/" rel="nofollow noreferrer">here</a>.</p>
<p>What could I possibly be doing wrong?</p>
| S.C.B. | 310,930 | <p>$$y''-2y'+y=0 \iff y''-y'-y'+y=0 \iff (y'-y)'-(y'-y)=0$$</p>
<p>Note that $$(y'-y)'-(y'-y)=0 \iff y'-y=ce^{x} \iff (y'-y)e^{-x}=c$$
For some constant $c$. Now note $$(y'-y)e^{-x}=(ye^{-x})'$$
So we have that $$(ye^{-x})'=c \iff ye^{-x}=cx+b$$
For some constant $c,b$. So your answer is incorrect. The answer should be $$y=cxe^{x}+be^{x}$$
This is corroborated through <a href="http://www.wolframalpha.com/input/?i=y%27%27-2y%27%2By%3D0" rel="nofollow noreferrer">computation</a>. </p>
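<p>The general solution can also be spot-checked numerically (a sketch with arbitrarily chosen constants $c, b$; names are mine): substituting $y = cxe^x + be^x$ into $y''-2y'+y$ via central finite differences gives a residual of essentially zero.</p>

```python
import math

c, b = 1.7, -0.4                      # arbitrary constants
f = lambda x: (c * x + b) * math.exp(x)

def residual(x, h=1e-5):
    """Numerically evaluate y'' - 2y' + y at x via central differences."""
    y  = f(x)
    y1 = (f(x + h) - f(x - h)) / (2 * h)
    y2 = (f(x + h) - 2 * f(x) + f(x - h)) / h**2
    return y2 - 2 * y1 + y

max_residual = max(abs(residual(x)) for x in [-1.0, 0.0, 0.5, 1.0, 2.0])
```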
|
88,284 | <p>If $$R=\left\{ \begin{pmatrix} a &b\\ 0 & c \end{pmatrix} \ : \ a \in \mathbb{Z}, \ b,c \in \mathbb{Q}\right\} $$
under usual addition and multiplication, then what are the left and right ideals of $R$?</p>
| rschwieb | 29,335 | <p>This is covered in full on page 17 of Lam's <em>First course in noncommutative rings</em>. In general the "triangular ring" where $R$ and $S$ are rings and $M$ is an $R-S$ bimodule looks like: </p>
<p>$$T= \begin{pmatrix} R &M\\ 0 & S \end{pmatrix}
$$</p>
<p>You can also visualize the ring as $R\oplus M\oplus S$ with funny multiplication, but don't ever confuse this with ordinary direct sums. Lam explains:</p>
<p>1) The right ideals are all of the form $J_1\oplus J_2$, where $J_1$ is a right ideal of $R$ and $J_2$ is a right $S$ submodule of $M\oplus S$ which contains $J_1M$.</p>
<p>2) Analogously the left ideals are all of the form $I_1\oplus I_2$ where $I_2$ is a left ideal of $S$, and $I_1$ is a left $R$ submodule of $R\oplus M$ which contains $MI_2$.</p>
<p>3) The ideals of $T$ look like $K_1\oplus K_0\oplus K_2$ where $K_1$ is an ideal of $R$, $K_2$ is an ideal of $S$, and $K_0$ is a subbimodule of $M$ containing $K_1M+MK_2$.</p>
<p>As a bonus, believe I remember later somewhere he also shows that the radical of this ring is:</p>
<p>$$
rad(T)= \begin{pmatrix} rad(R) &M\\ 0 & rad(S) \end{pmatrix}
$$</p>
|
1,842,779 | <p>This is Q28 from Australian Maths Competition 2014.</p>
<p>A circle is surrounded by 6 other circles, in a hexagonal formation. The leftmost circle is 0, while the rightmost circle is 1000. Each of the five missing numbers is the average of its neighbours. What is the largest of the 5 missing numbers?</p>
<p>What I tried:</p>
<p>Let the numbers be $x,y,z,j,k$</p>
<p>The number in the centre=$x=\frac {1000+0+y+z+j+k}{6} -(1)$ </p>
<p>$1000=\frac {j+x+y}{3} $</p>
<p>$0=\frac {z+x+k}{3}$</p>
<p>$1000=\frac {2(x+y+z+k+j)}{6}$ -(2)</p>
<p>Substituting (1) into (2),</p>
<p>$5x-2y-2z-2j-2k=0$</p>
<p>And I'm lost. I have often seen this type of question in competitions, but I never have any clue how to start.</p>
| Théophile | 26,091 | <p>Starting off with @heropup's approach, we can speed up the calculations by appealing to symmetry:</p>
<p>\begin{array}{ccccc} & a & & b & \\ 0 & & c & & 1000 \\ & d & & e & \end{array}</p>
<p>Observe that we must have $a=d, b=e$, and $a = 1000-b$. This eliminates all variables but $b$ and $c$ (we might as well work with $b$ rather than $a$, since the question asks for the greatest among the five values, which will clearly be $b$), so the system of equations is:</p>
<p>\begin{align*} 3b &= (1000-b) + c + 1000 \\ 6c &= 2(1000-b) + 2b + 1000. \end{align*}</p>
<p>That is,</p>
<p>\begin{align*} 4b &= 2000 + c \\ 6c &= 3000. \end{align*}</p>
<p>This gives $c=500$, from which we see that $b=625$.</p>
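<p>The resulting values can be verified directly against the diagram above: with the symmetric assignment $a=d=375$, $b=e=625$, $c=500$, every missing circle equals the average of its neighbours (a quick sketch using exact rational arithmetic; the neighbour lists are read off the hexagonal layout):</p>

```python
from fractions import Fraction as F

a = d = F(375)
b = e = F(625)
c = F(500)

# a, b on top; 0, c, 1000 in the middle row; d, e on the bottom
checks = {
    "a": a == (0 + b + c) / 3,
    "b": b == (a + 1000 + c) / 3,
    "d": d == (0 + e + c) / 3,
    "e": e == (d + 1000 + c) / 3,
    "c": c == (0 + 1000 + a + b + d + e) / 6,
}
largest = max(a, b, c, d, e)
```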
|
910,020 | <p>Show that if $G$ is a group of order $168$ that has a normal subgroup of order $4$ , then $G$ has a normal subgroup of order $28$.</p>
<p><strong>Attempt:</strong> $|G|=168=2^3.3.7$</p>
<p>Then the number of Sylow $7$-subgroups in $G$ is $n_7 = 1$ or $8$.</p>
<p>Given that $H$ is a normal subgroup of order $4$ in $G$. </p>
<blockquote>
<p>If we prove that $n_7$ cannot be $8$, then $n_7=1$ and, as a result, the Sylow $7$-subgroup $K$ is normal. Hence, $HK$ will also be a normal subgroup of $G$ and, since $H \bigcap K = \{e\} \implies |HK|=28$.</p>
</blockquote>
<p>Now, suppose $n_7=8$ and hence, $K_1 \cdots K_8$ are the $8$ cyclic subgroups of order $7$.</p>
<p>Each $K_i$ has $\Phi(7)=6$ elements of order $7$ . Hence, total elements of orders $=7$ in the $K_i's$ are $6.8=48$</p>
<p>How do I move forward and bring a contradiction somewhere?</p>
<p>Thank you for your help. </p>
| Andreas Caranti | 58,401 | <p>If $K$ is a normal subgroup of $G$ of order $4$, you may well argue in $H = G/K$, and reduce to show, via the correspondence theorem, that in a group of order $2 \cdot 3 \cdot 7 = 42$ there must be a normal subgroup of order $7$.</p>
<p>To do this, just consider that the number of $7$-Sylow subgroups in $H$ must divide $42/7 = 6$, and be congruent to $1$ modulo $7$.</p>
|
910,020 | <p>Show that if $G$ is a group of order $168$ that has a normal subgroup of order $4$ , then $G$ has a normal subgroup of order $28$.</p>
<p><strong>Attempt:</strong> $|G|=168=2^3.3.7$</p>
<p>Then the number of Sylow $7$-subgroups in $G$ is $n_7 = 1$ or $8$.</p>
<p>Given that $H$ is a normal subgroup of order $4$ in $G$. </p>
<blockquote>
<p>If we prove that $n_7$ cannot be $8$, then $n_7=1$ and, as a result, the Sylow $7$-subgroup $K$ is normal. Hence, $HK$ will also be a normal subgroup of $G$ and, since $H \bigcap K = \{e\} \implies |HK|=28$.</p>
</blockquote>
<p>Now, suppose $n_7=8$ and hence, $K_1 \cdots K_8$ are the $8$ cyclic subgroups of order $7$.</p>
<p>Each $K_i$ has $\Phi(7)=6$ elements of order $7$ . Hence, total elements of orders $=7$ in the $K_i's$ are $6.8=48$</p>
<p>How do I move forward and bring a contradiction somewhere?</p>
<p>Thank you for your help. </p>
| aaa | 186,150 | <p>If K is a normal subgroup of G of order 4 , you may well argue in $H=\frac{G}{K}$ , and reduce to show, via the correspondence theorem, that in a group of order $2\cdot 3\cdot 7=42$ there must be a normal subgroup of order 7 .</p>
<p>To do this, just consider that the number of 7 -Sylow subgroups in H must divide $\frac{42}{7}=6$ , and be congruent to 1 modulo 7</p>
|
507,133 | <p>I came across a problem in a book that asked us to find the first number $n$ such that $\phi(n)\geqslant 1,000$. It turns out that the answer is 1009, which is a prime number. There were several questions of this type, and our professor conjectured that it will always be the next prime. However, no one has been able to come up with a proof for this conjecture. So, more formally, the conjecture is:</p>
<p>For all $n\in\mathbb{N}$ the least positive integer $k\in\mathbb{N}$, with $k>1$ such that $\phi(k)\geqslant n$ is a prime.</p>
<p>I have worked with the Möbius Inversion function, and other minimal element contradictory proofs, but nothing has worked so far. Does anyone have any good ideas? </p>
| mjqxxxx | 5,546 | <p>If $n$ is not prime, then it has a prime factor $\le \sqrt{n}$, and so
$$
\varphi(n) = n \prod_{p\;|\;n}\left(1-\frac{1}{p}\right)\le n\left(1-\frac{1}{\sqrt{n}}\right)=n-\sqrt{n};
$$
while $\varphi(n)=n-1$ if $n$ is prime.</p>
<p>So, fix $M$. Let $p$ be the next prime number after $M$, so $\varphi(p)=p-1\ge M$; and let $p'$ be the largest prime number $\le M$. (That is, $p$ and $p'$ are adjacent primes.) Finally, suppose that a composite $n < p$ also satisfies $\varphi(n)\ge M$. Then
$$
p-\sqrt{p}>n-\sqrt{n}\ge\varphi(n)\ge M \ge p',
$$
or
$$
p-p'\ge \sqrt{p}.
$$
If this were known to never happen for sufficiently large $p$ (i.e., if eventually $g_k < \sqrt{p_{k+1}}$, where $p_k$ is the $k$-th prime, and the prime gap $g_k\equiv p_{k+1}-p_{k}$), then your conjecture could be proven. This is a strong conjecture (although almost certainly true)... I do not believe it is even known to follow from the Riemann hypothesis. Looking at the table of prime gaps, though, there are very few counterexamples, and the only near misses for your conjecture are:
$$
M=2 \qquad \varphi(3)=\varphi(4)=2; \\
M=6 \qquad \varphi(7)=\varphi(9)=6; \\
M=20 \qquad \varphi(23)=22; \varphi(25)=20; \\
M=110 \qquad \varphi(113)=112; \varphi(121)=110.
$$</p>
|
3,929,952 | <p><strong>Question:</strong> How do I estimate the following integral? <span class="math-container">$$\int_{2}^{\infty} \frac{|\log \log t|}{t\, (\log t)^2} \, dt$$</span></p>
<p><strong>Attempt:</strong> Setting <span class="math-container">$u=\log t$</span>, we get</p>
<p><span class="math-container">$$\int_{2}^{\infty} \frac{|\log \log t|}{t\, (\log t)^2} \, dt = \int_{\log 2}^{\infty} \frac{|\log u|}{u^2}\, du$$</span> and then I am stuck.</p>
| Jacky Chong | 369,395 | <p>Observe that
<span class="math-container">\begin{align}
\int^\infty_2 \frac{|\log\log t|}{t(\log t)^2}\ dt = \int^\infty_{\log\log 2} |u|e^{-u}\ du.
\end{align}</span></p>
<p>You can even evaluate this by hand.</p>
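<p>Integrating by parts on the two pieces ($u<0$ and $u>0$) gives the closed form $2 - (1+\log\log 2)/\log 2 \approx 1.086$; a crude Simpson-rule check (a sketch with an arbitrary truncation point, since the tail beyond $u=50$ is negligible) agrees:</p>

```python
import math

c = math.log(math.log(2))              # lower limit, about -0.3665

def g(u):
    return abs(u) * math.exp(-u)

def simpson(f, a, b, n=200_000):       # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

closed_form = 2 - (1 + c) / math.log(2)
numeric = simpson(g, c, 50.0)
```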
|
44,082 | <p>I have basic questions about elliptic curves over finite fields. </p>
<ol>
<li><p>Where to find general references? Hartshorne for instance restricts to algebraically closed ground fields.</p></li>
<li><p>Over an arbitrary field $K$, is the right definition of an elliptic curve a smooth proper
curve of genus 1 with a choice of $K$-rational point? </p></li>
<li><p>What is known about the structure of the group of $K$-rational points when $K$ is finite? In particular, how much does it depend on the curve?</p></li>
<li><p>Are there simple examples where you can explicitly see all of the $K$-rational points, $K$ finite?</p></li>
</ol>
| Yuri Zarhin | 9,658 | <p>Elliptic curves $E$ and $E'$ over a finite field $K$ are $K$-isogenous if and only if the orders of $E(K)$ and $E'(K)$ coincide. However, it may happen that the groups $E(K)$ and $E'(K)$ have the same order (and even isomorphic) but $E$ and $E'$ are not isomorphic over $K$. Even worse, there exist such a $K$ and non-isomorphic over $K$ elliptic curves $E$ and $E'$ such that if $\bar{K}$ is an algebraic closure of $K$ then the Galois modules $E(\bar{K})$ and $E'(\bar{K})$ are isomorphic. In particular, if $L$ is an arbitrary finite field containing $K$ then the groups $E(L)$ and $E'(L)$ are isomorphic. (Of course, $E(K)$ and $E'(K)$ are the subgroups of Galois invariants in $E(\bar{K})$ and $E'(\bar{K})$ respectively.) See arXiv:0711.1615 [math.AG].</p>
<p>An explicit description of all groups that can be realized as $E(K)$ (for a given $K$) was done by Misha Tsfasman (In: Theory of numbers and its applications, Tbilisi, 1985, 286--287; see also Sect. 3.3.15 of the book Algebraic geometric codes: basic notions by Tsfasman, Vladut and Nogin, AMS 2007).
See also papers of René Schoof (J. Combinatorial Th. A 46 (1987), 183--211), Felipe Voloch (Bull. SMF 116 (1988), 455--458) and Sergey Rybakov (Centr. Eur. J. Math. 8 (2010), 282--288).</p>
|
2,531,538 | <p>I wish to show the following in equality:</p>
<p>$$\dfrac {n!}{(n-k)!} \leq n^{k}$$</p>
<p>Attempt:</p>
<p>$$\dfrac {n!}{(n-k)!} = \prod\limits_{i = 0}^{k -1} (n-i) = n\times (n-1)\times \cdots \times(n-({k-1})) $$</p>
<p>I am not sure how to make the argument that $n\times (n-1)\cdots \times (n-({k-1})) \leq n^k$</p>
| Sri-Amirthan Theivendran | 302,692 | <p>Each of the $k$ terms in the product is less than or equal to $n$. </p>
<p>Alternatively, the LHS counts the number of injective maps from a set with $k$ elements to a set with $n$ elements, while the RHS counts all the maps from a set with $k$ elements to a set with $n$ elements. The result follows.</p>
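<p>Both readings of the inequality are easy to verify exhaustively for modest $n, k$ (a quick sketch; <code>math.perm</code> computes $n!/(n-k)!$):</p>

```python
from math import factorial, perm
from itertools import permutations

# n!/(n-k)! equals perm(n, k) and never exceeds n^k
inequality_holds = all(
    factorial(n) // factorial(n - k) == perm(n, k) <= n**k
    for n in range(1, 12)
    for k in range(0, n + 1)
)

# perm(n, k) literally counts the injections of a k-set into an n-set
injections = len(list(permutations(range(5), 3)))   # 5 * 4 * 3
```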
|
981,181 | <p>I'm stuck with a logic problem like this</p>
<blockquote>
<p>I eat ice cream if I am sad.</p>
<p>I am not sad.</p>
<p>Therefore I am not eating ice cream.</p>
</blockquote>
<p>Is this conclusion logical? The first sentence can be understood both as "ice cream <span class="math-container">$\implies$</span> sad" and vice versa. He stated that he is not sad, but does that not mean that he is not eating ice cream? I'm confused.</p>
| Bruno Bentzen | 174,538 | <p><strong>Argument:</strong></p>
<blockquote>
<p>"I eat ice cream if I am sad. I am not sad.Therefore I am not eating ice cream."</p>
</blockquote>
<p>Whenever you are stuck in an argument, try represent it in proper symbolism:</p>
<p><strong>Glossary</strong>:</p>
<p>$E \equiv $ 'I eat ice cream'</p>
<p>$S \equiv $ 'I am sad'</p>
<p>Then we can represent your above argument like this:</p>
<blockquote>
<ol>
<li>$S \to E$</li>
<li>$\neg S$</li>
</ol>
<p>$\therefore \neg E$</p>
</blockquote>
<p>Now is this a <strong>valid argument</strong>? (Recall the definition of a valid argument!) For instance, let the statements (1) and (2) be true. Can $\neg E$ be false?</p>
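<p>The question can be settled mechanically: a truth-table scan (a small sketch; names are mine) finds exactly one row where both premises hold but the conclusion $\neg E$ fails, which is the classic "denying the antecedent" counterexample.</p>

```python
from itertools import product

def implies(p, q):
    return (not p) or q

# rows where premises S -> E and ~S are true but the conclusion ~E is false
counterexamples = [
    (S, E)
    for S, E in product([False, True], repeat=2)
    if implies(S, E) and (not S) and not (not E)
]
```

<p>The single counterexample is $S$ false, $E$ true: not sad, yet eating ice cream.</p>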
|
2,568,544 | <p>The exercise is to construct a cube (inscribed in the unit sphere) with one of the corners at:
$$ \vec{v} = \frac{1}{\sqrt{10}} (3,1,0) \in S^2 $$</p>
<p>I'm a bit stuck constructing the other seven vertices. My first guess was to use the eight reflections available to me:</p>
<p>$$ \square \stackrel{?}{=}\left\{ \tfrac{1}{\sqrt{10}} (\pm 3, \pm 1, \pm 0) \right\} $$</p>
<p>This is not quite right since this describes the four vertices of a rectangle in 3-dimensional space. I remember that the rotations form a group. Here is one of the matrices:</p>
<p>$$ \left[ \begin{array}{rrrr}
0 & -1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1 \end{array}\right] \in SO(3) $$</p>
<p>This would lead to the 4 vertices of a square (up to rescaling). Then I am stuck finding the remaining four.</p>
<ul>
<li>$ (3,1,0)\;,\; (1,-3,0)\;,\; (-3,-1,0)\;,\; (-1,3,0) $</li>
</ul>
<p>The algebra would give these vertices, but having the orientation of the cube <em>slightly off</em> vertical is enough to confuse my intuition. Perhaps I could use another matrix:</p>
<p>$$ \left[ \begin{array}{rrr}
1 & 0 & 0 \\
0 & 0 & -1 \\
0 & 1 & 0 \end{array}\right] \in SO(3) $$</p>
<p>This generates a few more vertices (including some repeats). This brings the count up to six.</p>
<ul>
<li>$ \color{#A0A0A0}{(3,1,0)}\;,\; (3,0,-1)\;,\; (3,-1,0)\;,\; (3,0,1) $</li>
</ul>
<p>The rotations of the cube are an isometry, so I should be able to generate the cube as the $SO_3(\mathbb{Z})$ orbit of a single vector. It's hard to decide on paper if I'm on the right track.</p>
<p><strong>Following the comments</strong> Here's a third orthogonal matrix:
$$ \left[ \begin{array}{rrr}
0 & 0 & -1 \\
0 & 1 & 0 \\
1 & 0 & 0 \end{array}\right] \in SO(3) $$</p>
<p>and we get (up to) three more verices. So far we have 7 and we're only looking for one more.</p>
<ul>
<li>$ \color{#A0A0A0}{(3,1,0)}\;,\; (0,1,-3)\;,\; (-3,1,0)\;,\; (0,1,3) $</li>
</ul>
<p>We are up to <em>ten</em> vertices. So there is definitely a problem. There are $|SO_3(\mathbb{Z})| = 24$ rotations, and we're computing the orbit correctly, but we are off by a factor of three. We have obtained three interlocking cubes.</p>
<p>What subset of $SO(3)$ should I be using?</p>
<hr>
<p>Generically the image of a point is the <a href="https://en.wikipedia.org/wiki/Compound_of_three_cubes" rel="nofollow noreferrer">compound of three cubes</a>. So this construction is redundant by a factor of three.</p>
<p><img src="https://upload.wikimedia.org/wikipedia/commons/thumb/8/8b/UC08-3_cubes.png/280px-UC08-3_cubes.png" alt=""></p>
<p>Let's type out the possible vertices:</p>
<pre><code> a b c | -b a c | -a -b c
a -c b | -b -c a | -a -c -b
a -b -c | -b -a -c | -a b -c
a c -b | -b c -a | -a c b
-c b a | c a b | b -a c
-c -a b | c b -a | b -c -a
-c -b -a | c -a -b | b a -c
-c a -b | c -b a | b c a
</code></pre>
<p>The group theory suggests these should split into three groups of eight, forming the three cubes. I wasn't able to find the correct permutations.</p>
<hr>
<p>If I had set $\vec{v}_0 = \frac{1}{\sqrt{3}}(1,1,1)$, then the vertices $\frac{1}{\sqrt{3}}(\pm 1,\pm 1,\pm 1)$ are the vertices of a cube.</p>
| hmakholm left over Monica | 14,366 | <p>A possible brute-force approach:</p>
<p>Suppose you have orthogonal transformations $T$ and $U$ such that
$$ T(\mathbf e_1) = \frac1{\sqrt{10}}(3,1,0) \\ U(\mathbf e_1)=\frac1{\sqrt3}(1,1,1)$$</p>
<p>Then $T(U^{-1}(\frac1{\sqrt3}(\pm1,\pm1,\pm1)))$ will be a possible solution.</p>
<p>You can construct matrix representations for possible $T$ and $U$ systematically by the Gram-Schmidt process, but simply eyeballing it works too, leading to for example</p>
<p>$$ T=\begin{pmatrix}3 & -1 & 0 \\ 1 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix}1/\sqrt{10} \\ & 1/\sqrt{10} \\ && 1 \end{pmatrix} \\
U=\begin{pmatrix}1 & 1 & 1 \\ 1 & -1 & 1 \\ 1 & 0 & -2 \end{pmatrix} \begin{pmatrix}1/\sqrt{3} \\ & 1/\sqrt{2} \\ && 1/\sqrt{6} \end{pmatrix} \\$$</p>
<p>And inverting the orthogonal $U$ is easy.</p>
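<p>This construction can be carried out numerically (a sketch in plain Python; helper names are mine). $T$ and $U$ are the matrices displayed above, and since $U$ is orthogonal, $U^{-1}=U^{T}$; the eight images land on the unit sphere, include the prescribed corner, and have the edge length $2/\sqrt{3}$ of a cube inscribed in the unit sphere.</p>

```python
import math
from itertools import product

def matvec(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(3)) for i in range(3))

def transpose(M):
    return [[M[j][i] for j in range(3)] for i in range(3)]

s10, s3, s2, s6 = (math.sqrt(k) for k in (10, 3, 2, 6))

# T sends e1 to (3,1,0)/sqrt(10); U sends e1 to (1,1,1)/sqrt(3); both are orthogonal
T = [[3/s10, -1/s10, 0.0], [1/s10, 3/s10, 0.0], [0.0, 0.0, 1.0]]
U = [[1/s3, 1/s2, 1/s6], [1/s3, -1/s2, 1/s6], [1/s3, 0.0, -2/s6]]

corners = [tuple(sign / s3 for sign in signs) for signs in product((1, -1), repeat=3)]
vertices = [matvec(T, matvec(transpose(U), w)) for w in corners]

on_sphere = all(abs(sum(x * x for x in v) - 1.0) < 1e-12 for v in vertices)
target = (3/s10, 1/s10, 0.0)
hits_target = any(max(abs(a - b) for a, b in zip(v, target)) < 1e-12 for v in vertices)

dists = sorted(math.dist(p, q) for i, p in enumerate(vertices) for q in vertices[i+1:])
edges_ok = all(abs(d - 2/s3) < 1e-12 for d in dists[:12])   # a cube has 12 edges
```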
|
2,246,889 | <p>$\displaystyle \lim_{\substack{x \rightarrow 4 \\ y \rightarrow \pi}} x^2 \sin \frac{y}{x}$</p>
<p>I don't know which way I should try to solve this example; can the single-variable limit of $\sin \frac{1}{x}$ maybe help me?</p>
| hamam_Abdallah | 369,188 | <p>Your limit is simply</p>
<p>$$(4)^2\sin (\frac {\pi}{4} )$$
$$=8\sqrt {2} .$$</p>
|
2,452,135 | <p>I need to prove that the set
$A=\left \{ \left ( r\cos t,r\sin t \right ) \mid 0\leq t\leq \theta ,r\geq 0\right \}$ is closed. I have tried to use theorems about continuous functions, but I always run into trouble because the domain is not bounded. </p>
| Math1000 | 38,584 | <p>Let $\{(x_n,y_n)\}$ be a sequence of points in $A$ such that $\lim_{n\to\infty}(x_n,y_n)=(x,y)$ for some $(x,y)\in\mathbb R^2$. We can assume that $(x,y)\ne(0,0)$ since clearly $0=0\cdot\cos t=0\cdot\sin t$ for any $t$ and so $(0,0)\in A$. Set $\rho = \sqrt{x^2+y^2}$ and $\theta = \mathrm{atan2}(y,x)$, where
$$\mathrm{atan2}(y,x) =
\begin{cases}
\arctan\left(\frac yx\right),& x>0\\
\arctan\left(\frac yx\right)+\pi,& x<0,\ y\geqslant0\\
\arctan\left(\frac yx\right)-\pi,& x<0,\ y<0\\
\frac\pi2,& x=0,\ y>0\\
-\frac\pi2,& x=0,\ y<0.
\end{cases}
$$
Then $(x,y) = (\rho\cos\theta, \rho\sin\theta)\in A$, which implies that $A$ is closed.</p>
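<p>As a sanity check, a piecewise angle function of this shape (using the conventional branch $\arctan(y/x)-\pi$ for $x<0,\ y<0$) agrees with the library function <code>math.atan2</code> away from the origin; the sketch below is mine:</p>

```python
import math

def atan2_piecewise(y, x):
    if x > 0:
        return math.atan(y / x)
    if x < 0 and y >= 0:
        return math.atan(y / x) + math.pi
    if x < 0 and y < 0:
        return math.atan(y / x) - math.pi
    if x == 0 and y > 0:
        return math.pi / 2
    if x == 0 and y < 0:
        return -math.pi / 2
    raise ValueError("undefined at the origin")

# one sample point in each region, plus points on the axes
samples = [(1, 2), (-1, 2), (-1, -2), (1, -2), (0, 3), (0, -3), (5, 0), (-5, 0)]
max_err = max(abs(atan2_piecewise(y, x) - math.atan2(y, x)) for x, y in samples)
```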
|
4,278,202 | <p>Suppose, I am working on <span class="math-container">$Eq(X)$</span> = the set of all equivalence relations on <span class="math-container">$X$</span>. Let <span class="math-container">$\theta_1, \theta_2$</span> denote arbitrary equivalence relations.</p>
<p>What does <span class="math-container">$\theta_1 \lor \theta_2$</span> look like? And what about <span class="math-container">$\theta_1 \land \theta_2$</span>?</p>
<p>I know lattices and that these usually correspond to supremum and infimum. But I cannot wrap my head around what does this mean for equivalence relations.</p>
<p>What does <span class="math-container">$a \: (\theta_1 \lor \theta_2) \: b$</span> mean for arbitrary elements <span class="math-container">$a, b \in X$</span>?</p>
<p>Thank you.</p>
| Tereza Tizkova | 943,366 | <p>If we take two arbitrary elements <span class="math-container">$a, b$</span>, how does <span class="math-container">$a (\theta_1 \lor \theta_2) b$</span> look like?</p>
<p>If this holds, there must exist a finite chain of elements <span class="math-container">$a = c_0, c_1, \ldots, c_n = b$</span> such that for each <span class="math-container">$i$</span></p>
<p><strong><span class="math-container">$c_i \: \theta_1 \: c_{i+1}$</span> or <span class="math-container">$c_i \: \theta_2 \: c_{i+1}$</span>.</strong> (1)</p>
<p>Notice that this is not equivalent to</p>
<p><strong><span class="math-container">$a \: \theta_1 \: b$</span> or <span class="math-container">$a \: \theta_2 \: b$</span></strong>, (2)</p>
<p>since the join is not the same thing as the union; it is the transitive closure of the union. If the union were already a transitive relation, then (1) and (2) would be the same thing.</p>
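<p>A small computation on a four-element set makes the distinction concrete (a sketch of mine using a union-find pass): the union of two equivalence relations need not be transitive, while the join relates everything reachable through alternating blocks.</p>

```python
X = range(4)

# theta1 has blocks {0,1},{2,3}; theta2 has blocks {1,2},{0},{3}
theta1 = {(a, b) for a in X for b in X if (a < 2) == (b < 2)}
theta2 = {(a, b) for a in X for b in X if a == b} | {(1, 2), (2, 1)}

union = theta1 | theta2

def join(r1, r2, elements):
    """Transitive closure of r1 | r2 via union-find."""
    parent = {x: x for x in elements}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for a, b in r1 | r2:
        parent[find(a)] = find(b)
    return {(a, b) for a in elements for b in elements if find(a) == find(b)}

joined = join(theta1, theta2, list(X))

union_contains_0_2 = (0, 2) in union   # False, although 0~1 (theta1) and 1~2 (theta2)
pair_in_join = (0, 3) in joined        # True: the join merges all four elements
```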
|
2,258,720 | <p>Define a set X of integers recursively as follows:</p>
<p>Base: 5 is in X</p>
<p>Rule 1: If x is in X and x>0, then x+3 is in X</p>
<p>Rule 2: If x is in X and x>0, then x+5 is in X</p>
<p>Show that every integer n>7 is in X</p>
<p>I am pretty new to this stuff and I am very lost on this. Please help!</p>
| fleablood | 280,126 | <p>Let $n \ge 13$. Divide $n$ by $5$ and take the remainder to get $n = k*5 + r$ and $k$ is at least $3$.</p>
<p>Case 0: if $r = 0$ then $n = 5+5+5+5 ..... = 5*k + 0 = n$ and $n $ is in $X$.</p>
<p>Case 3: if $r = 3$ then $n = 5+ 5+ 5+ .... + 3 = 5+ 3 + 5(k-1)$ is in $X$.</p>
<p>Case 1: if $r = 1$ then $n = 5+5+5+5+ ..... + 5 + 1 = 5+5+5+..... + 6 = 5(k-1)+3+3$ is in $X$</p>
<p>Can you do Case 2 and 4?</p>
<p>=====</p>
<p>This is a cute way.</p>
<p>Let $n = ......ba$ be the number written in digits. $a$ is the last digt and $b$ is the second to last digit.</p>
<p>If $a = 0$ or $5$ then $n = 5+5+5+5+..... $ for some number of $5$ so $n $ is in $x$.</p>
<p>If $a = 1$ or $6$ then $n = 5+5+5+5+ .... + 5 + 3+3$ for some number of $5$ so $n$ is in $X$ if $n \ge 11$.</p>
<p>If $a = 2$ or $7$ then $n = 5+5+5+5+ ..... + 5 + 3+3+3+3$ for some number of $5$ so $n$ is in $X$ if $n > 12$.</p>
<p>If $a = 3$ or $8$ then $n = 5+5+5+... + 3$ and $n$ is in $X$ for $n \ge 8$.</p>
<p>If $a = 4$ or $9$ then $n = 5+5+5+5 + .... + 3+3+3$ and $n$ is in $X$ for $n \ge 14$.</p>
<p>So all integers greater than $12$ are in $X$.</p>
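<p>The digit analysis can be cross-checked by enumerating $X$ directly (a quick breadth-first sketch of mine, not part of the original argument): within $[1, 100]$ the only unreachable values are 1–4, 6, 7, 9 and 12, consistent with the conclusion that every integer greater than 12 is in $X$.</p>

```python
LIMIT = 100
X = {5}                    # base element
frontier = [5]
while frontier:
    x = frontier.pop()
    for step in (3, 5):    # Rules 1 and 2 (all elements are positive, so both apply)
        y = x + step
        if y <= LIMIT and y not in X:
            X.add(y)
            frontier.append(y)

missing = sorted(set(range(1, LIMIT + 1)) - X)
all_from_13 = all(n in X for n in range(13, LIMIT + 1))
```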
<p>====</p>
<p>Ah, This is supposed to be a proof by induction! (I was worried that might be advanced.)</p>
<p>1: All numbers in the form $5k + 3j; k \ge 1; j \ge 0$ are in $X$.</p>
<p>Base case: It is true for $k = 1$ and $j=0$ as $5 = 5*1 + 0 \in X$.</p>
<p>Induction case: Assume it is true for $n = 5k + 3*0$. Then by Rule two it is true for $n = 5k + 3*0 + 5 = 5(k+1) + 3*0$. So it is true for all $n = 5k + 3*0$.</p>
<p>Assume it is true for $n = 5k + 3j$. Then by Rule one it is true for $n = 5k + 3j + 3 = 5k + 3(j+1)$. So it is true for all $n = 5k + 3j$.</p>
<p>2) Prove that for all $n\ge 13$ that $n = 5k + 3j$ for some $k \ge 1$ and $j \ge 0$ and either $k >1$ or $j > 2$.</p>
<p>Base case: $13 = 5*2 + 3$</p>
<p>Induction: If $n = 5k + 3j$ and $k > 1$ then$n+1 = 5(k-1) + 3(j+2)$ and $k \ge 1;j > 2$. If $n = 5k + 3j$ and $j > 2$ then $n+1 = 5(k+2) + 3(j-3)$ and $k >1; j \ge 0$.</p>
<p>So All $n \ge 13$ are $n = 5k + 3j$ for some $k \ge 1$ and $j \ge 0$ and either $k > 1$ or $j > 2$.</p>
|
1,722,256 | <p>I'm trying to calculate the percentage difference $\Bbb{P}$ between two numbers, $x$ and $y$. I'm not given much context about how these numbers are different (for example, if one is "expected outcome" and one is "observed outcome").</p>
<p>When is each of this formulas relevant?</p>
<p>$$\Bbb{P} = \frac{|x-y|}{x}, \Bbb{P} = \frac{|x-y|}{y}, \Bbb{P} = \frac{|x-y|}{max(x,y)}, \Bbb{P} = \frac{|x-y|}{\frac{x+y}{2}}$$</p>
<p>What is the difference between the first two? Is the last one the most general?</p>
| peter.petrov | 116,591 | <p>All of these can be relevant in a certain context. The 1st one is the percentage difference as a percentage of $x$, the 2nd one is the percentage difference as a percentage of $y$ the 3rd one is the percentage difference as a percentage of the max of $x, y$ and so on. </p>
|
134,213 | <p>Is there a way to simplify the following expression</p>
<p>$$A = \frac{\left (a^6-b^6 \right )^2}{c^6-4a^3b^3}$$</p>
<p>assuming</p>
<p>\begin{eqnarray}
a = &\ (a-b)^2+b\ (a+1) \cr
b = &\ (b-c)^2+c\ (b+1) \cr
c = &\ (c-a)^2+a\ (c+1)
\end{eqnarray}</p>
<h3>Edit</h3>
<p>I would like to determine which of these answers for <code>A</code> is correct: </p>
<ol>
<li>$ a^3b^3$ </li>
<li>$b^3c^3$ </li>
<li>$a^3b^6$ </li>
<li>$a^3c^3$ </li>
<li>$c^6$</li>
</ol>
| Bob Hanlon | 9,362 | <pre><code>Solve[Eliminate[{A == (a^6 - b^6)^2/(c^6 - 4 a^3 b^3),
a == (a - b)^2 + b (a + 1), b == (b - c)^2 + c (b + 1),
c == (c - a)^2 + a (c + 1)}, {a, b, c}], A] // Simplify
(* {{A -> -6 - 3*((1/2)*(123 - 55*Sqrt[5]))^(1/3) - 3*((1/2)*(123 + 55*Sqrt[5]))^(1/3)},
 {A -> -6 + (3/2)*(1 + I*Sqrt[3])*((1/2)*(123 - 55*Sqrt[5]))^(1/3) + (3 - 3*I*Sqrt[3])/(2^(2/3)*(123 - 55*Sqrt[5])^(1/3))},
 {A -> -6 + (3/2)*(1 - I*Sqrt[3])*((1/2)*(123 - 55*Sqrt[5]))^(1/3) + (3 + 3*I*Sqrt[3])/(2^(2/3)*(123 - 55*Sqrt[5])^(1/3))}} *)

% // N

(* {{A -> -21.5225}, {A -> 1.76124 - 12.398 I}, {A -> 1.76124 + 12.398 I}} *)
</code></pre>
<p><strong>EDIT:</strong> For a multiple choice,</p>
<pre><code>Select[
{a^3 b^3, b^3 c^3, a^3 b^6, a^3 c^3, c^6},
Assuming[
{a == (a - b)^2 + b (a + 1),
b == (b - c)^2 + c (b + 1),
c == (c - a)^2 + a (c + 1)},
Simplify[
(a^6 - b^6)^2/(c^6 - 4 a^3 b^3) == #]] &]
(* {c^6} *)
</code></pre>
|
895,325 | <p>I am dealing with some nice rings that are always isomorphic to some fairly nice quotient ring of a polynomial ring. A typical example is:</p>
<p>$$ \mathbb{C}[X,XY,XY^2] \cong \frac{\mathbb{C}[U,V,W]}{\langle V^2 - UW \rangle}. $$</p>
<p>I would like a nice way to write the Kahler differentials of such rings. For example when we have the following ring:</p>
<p>$$ \mathbb{C}[X^{ \pm 1}] \cong A := \frac{\mathbb{C}[U,V]}{\langle UV \rangle} $$
There is already a nice way of writing the differential - $d(f(X)) = \frac{\partial f}{\partial X} dX $</p>
<p>but also all $\ f(U,V) + \langle UV \rangle \ \in A \ $ can be written uniquely as $h(U) + g(V) + \langle UV \rangle $ for polynomials $h$ and $g$.</p>
<p>Then we can write something like: $d( f(U,V) + \langle UV \rangle ) = (\frac{\partial h}{\partial U} +\langle UV \rangle)dU + (\frac{\partial g}{\partial V} + \langle UV \rangle)dV + \langle d(UV)\rangle$</p>
<p>Note this is equivalent to the standard way to write the differential.</p>
<p>Can this be generalized? For example can I do this with the first example I gave? </p>
<p>Really, this is me trying to get a nice way to write these maps: they behave a lot like the standard way of writing the Kahler differential, but the notation means I can't write $\frac{\partial f}{\partial X}$, for example.</p>
| Robert Israel | 8,508 | <p>If $a$ and $b$ are coprime, every odd divisor of $ab$ can be written in exactly one way as the product of an odd divisor of $a$ and an odd divisor of $b$, and every such product is an odd divisor of $ab$. So $f(ab) = f(a) f(b)$.</p>
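<p>Assuming $f(n)$ denotes the number of odd divisors of $n$ (the definition of $f$ is not shown here, so this is an assumption), the multiplicativity claim checks out numerically:</p>

```python
from math import gcd

def f(n):
    """Number of odd divisors of n (assumed definition of f)."""
    return sum(1 for d in range(1, n + 1, 2) if n % d == 0)

multiplicative = all(
    f(a * b) == f(a) * f(b)
    for a in range(1, 40)
    for b in range(1, 40)
    if gcd(a, b) == 1
)
```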
|
3,072,198 | <p>How can one explain that <span class="math-container">$$\frac{d}{dx}\left(\int_0^x{\cos(t^2+t)dt}\right) = \cos(x^2+x)$$</span>
Without solving the integral?</p>
<p>I know it's related to the fundamental theorem of calculus, but here we have a derivative with respect to <span class="math-container">$x$</span>, while the integration variable is <span class="math-container">$t$</span>.</p>
<p>Thank you.</p>
| Paramanand Singh | 72,031 | <p>Perhaps you are mixing two parts of the Fundamental Theorem of Calculus (henceforth referred to as FTC). First part of the theorem deals exactly with finding derivative of things of the form <span class="math-container">$\int_{a} ^{x} f(t) \, dt$</span>.</p>
<p>More formally, let the function <span class="math-container">$f:[a, b] \to\mathbb {R} $</span> be Riemann integrable on <span class="math-container">$[a, b] $</span>. The intent of the first part of FTC is to study the properties of a related function <span class="math-container">$F:[a, b] \to\mathbb {R} $</span> defined by <span class="math-container">$$F(x) =\int_{a} ^{x} f(t) \, dt$$</span> The function <span class="math-container">$F$</span> is defined by means of an integral and is not necessarily an anti-derivative of <span class="math-container">$f$</span>.</p>
<p>FTC says that <em>this function <span class="math-container">$F$</span> is continuous on <span class="math-container">$[a, b] $</span></em>. And more importantly <em><span class="math-container">$F$</span> is differentiable at those points where <span class="math-container">$f$</span> is continuous and at such a point <span class="math-container">$c\in[a, b] $</span> we have <span class="math-container">$F'(c) =f(c) $</span></em>. </p>
<p>FTC <em>does not</em> say anything about <span class="math-container">$F'(c) $</span> when <span class="math-container">$f$</span> is <em>discontinuous</em> at <span class="math-container">$c$</span> and it may be possible that in such cases</p>
<ul>
<li><span class="math-container">$F'(c) $</span> does not exist</li>
<li>or it exists but does not equal <span class="math-container">$f(c) $</span></li>
<li>or it may exist and be equal to <span class="math-container">$f(c) $</span></li>
</ul>
<p>In any case one should observe that first part of FTC does not deal with anti-derivatives.</p>
<p>Now your function under the integral namely <span class="math-container">$\cos(t^2+t)$</span> is continuous everywhere and hence the integral <span class="math-container">$\int_{0}^{x}\cos(t^2+t)\,dt$</span> is differentiable everywhere with derivative <span class="math-container">$\cos(x^2+x)$</span>. Thus the result in your question is an immediate consequence of the first part of FTC.</p>
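<p>As a quick numerical sanity check (my addition, not part of the original answer), one can approximate <span class="math-container">$F(x)=\int_0^x \cos(t^2+t)\,dt$</span> by a midpoint rule and compare a central difference quotient of <span class="math-container">$F$</span> against <span class="math-container">$\cos(x^2+x)$</span>:</p>

```python
import math

def F(x, n=20000):
    # midpoint-rule approximation of the integral of cos(t^2 + t) over [0, x]
    h = x / n
    return h * sum(math.cos(((i + 0.5) * h) ** 2 + (i + 0.5) * h) for i in range(n))

x, eps = 1.3, 1e-4
# central difference quotient approximating F'(x)
derivative = (F(x + eps) - F(x - eps)) / (2 * eps)
print(abs(derivative - math.cos(x ** 2 + x)))  # should be very small
```

<p>The close agreement is exactly what the first part of FTC predicts, since the integrand is continuous everywhere.</p>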
<hr>
<p>There is a second part of FTC which deals with anti-derivatives. Like the first part it begins with a function <span class="math-container">$f:[a, b] \to \mathbb {R} $</span> which is Riemann integrable on <span class="math-container">$[a, b] $</span> but its intent is to evaluate the integral <span class="math-container">$\int_{a} ^{b} f(x) \, dx$</span> in an easy manner. But to achieve this goal it makes an additional and strong assumption: it assumes that there is an anti-derivative <span class="math-container">$F$</span> of <span class="math-container">$f$</span> on <span class="math-container">$[a, b] $</span>. In other words <em>we assume the existence of a function <span class="math-container">$F:[a, b] \to\mathbb {R} $</span> such that <span class="math-container">$F'(x) =f(x) \, \forall x\in[a, b] $</span> and then FTC says that the integral <span class="math-container">$\int_{a} ^{b} f(x) \, dx$</span> equals <span class="math-container">$F(b) - F(a) $</span></em>. </p>
|
3,690,260 | <blockquote>
<p>Find two vectors <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span>, whose sum is <span class="math-container">$\langle 0,-2\rangle$</span> where <span class="math-container">$v_1$</span> is parallel to <span class="math-container">$\langle 4,-3\rangle $</span>, while <span class="math-container">$v_2$</span> is perpendicular to <span class="math-container">$\langle 4,-3\rangle$</span>.</p>
</blockquote>
<p>To start, <span class="math-container">$v_1= \langle x_1,y_1\rangle$</span>, being parallel to <span class="math-container">$\langle 4,-3\rangle$</span>, can be written as <span class="math-container">$\langle 4a,-3a\rangle$</span>, and <span class="math-container">$v_2=\langle x_2,y_2\rangle$</span>, being perpendicular to <span class="math-container">$\langle 4,-3\rangle$</span>, must satisfy <span class="math-container">$\langle x_2,y_2\rangle\cdot\langle 4,-3\rangle=0$</span>; then <span class="math-container">$\langle x_1,y_1\rangle+\langle x_2,y_2\rangle=\langle 0,-2\rangle$</span>. From here I am very lost. </p>
| Community | -1 | <p>Let <span class="math-container">$v_1 = \langle 4t, -3t \rangle, v_2 = \langle 3s,4s \rangle\implies v_1 \parallel \langle 4,-3\rangle, v_2 \perp \langle 4,-3\rangle$</span>.We solve for <span class="math-container">$s,t$</span>: <span class="math-container">$v_1+v_2=u\implies \langle 4t+3s, -3t+4s\rangle=\langle 0,-2\rangle\implies 3s+4t=0, 4s-3t=-2$</span>. From this you can solve for <span class="math-container">$s,t$</span> and substitute them back to <span class="math-container">$v_1, v_2$</span> to find <span class="math-container">$v_1,v_2$</span>.</p>
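<p>To finish the computation concretely, here is a small Python sketch (my addition) solving the system <span class="math-container">$3s+4t=0,\ 4s-3t=-2$</span> by Cramer's rule and checking both requirements:</p>

```python
from fractions import Fraction

# Solve 3s + 4t = 0 and 4s - 3t = -2 via Cramer's rule
det = 3 * (-3) - 4 * 4                   # determinant of [[3, 4], [4, -3]]
s = Fraction(0 * (-3) - (-2) * 4, det)   # s = -8/25
t = Fraction(3 * (-2) - 4 * 0, det)      # t = 6/25

v1 = (4 * t, -3 * t)   # parallel to <4, -3>
v2 = (3 * s, 4 * s)    # perpendicular to <4, -3>

assert (v1[0] + v2[0], v1[1] + v2[1]) == (0, -2)   # the sum is <0, -2>
assert 4 * v2[0] + (-3) * v2[1] == 0               # perpendicularity
print(v1, v2)
```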
|
1,396,045 | <p>$$\int{\sqrt{\frac{(1-\sin x)(2-\sin x)}{(1+\sin x)(2+\sin x)}}dx}$$</p>
<p>$$\int\sqrt{\frac{(1-\sin x)(2-\sin x)}{(1+\sin x)(2+\sin x)}}dx=\int \frac{(1-\sin x)(2-\sin x)}{\sqrt{(1-\sin x)(2-\sin x)(1+\sin x)(2+\sin x)}}dx$$</p>
<p>I am stuck. Please help me....</p>
| juantheron | 14,311 | <p>Let $$\displaystyle I = \int\sqrt{\frac{(1-\sin x)(2-\sin x)}{(1+\sin x)(2+\sin x)}}dx$$</p>
<p>We can write $$\displaystyle \sqrt{\frac{1-\sin x}{1+\sin x}} = \sqrt{\frac{1-\sin x}{1+\sin x}\times \frac{1+\sin x}{1+\sin x}} = \frac{\cos x}{1+\sin x}$$</p>
<p>So we get $$\displaystyle I = \int\frac{\cos x}{1+\sin x}\cdot \sqrt{\frac{2-\sin x}{2+\sin x}}dx$$</p>
<p>Now Let $1+\sin x= y\;,$ Then $\cos xdx = dy$</p>
<p>So Integral $$\displaystyle I = \int\frac{1}{y}\cdot \sqrt{\frac{3-y}{1+y}}dy$$</p>
<p>Now Put $$\displaystyle \frac{3-y}{1+y}=t^2\Rightarrow y=\frac{3-t^2}{1+t^2}$$</p>
<p>So we get $$\displaystyle y=-\left[1-\frac{4}{1+t^2}\right] = \left[\frac{4}{1+t^2}-1\right].$$ So $\displaystyle dy = -\frac{8t}{(1+t^2)^2}\,dt$</p>
<p>So Integral $$\displaystyle I = \int\frac{1+t^2}{3-t^2}\cdot t\cdot \frac{-8t}{(1+t^2)^2}dt = 8\int\frac{t^2}{(t^2-3)\cdot (1+t^2)}dt$$</p>
<p>So Integral $$\displaystyle I = 2\int \left[\frac{3(t^2+1)+(t^2-3)}{(t^2-3)\cdot (1+t^2)}\right]dt = 2\int \left[\frac{3}{t^2-(\sqrt{3})^2}+\frac{1}{1+t^2}\right]dt$$</p>
<p>So Integral $$\displaystyle I = 6\cdot \frac{1}{2\sqrt{3}}\cdot \ln\left|\frac{t-\sqrt{3}}{t+\sqrt{3}}\right|+2\tan^{-1}(t)+\mathcal{C}$$</p>
<p>So Integral $$\displaystyle I = \sqrt{3}\cdot \ln\left|\frac{\sqrt{2-\sin x}-\sqrt{3}\cdot \sqrt{2+\sin x}}{\sqrt{2-\sin x}+\sqrt{3}\cdot \sqrt{2+\sin x}}\right|+2\tan^{-1}\left(\sqrt{\frac{2-\sin x}{2+\sin x}}\right)+\mathcal{C}$$</p>
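<p>Since the substitutions are easy to slip up on, here is a numerical check I added (not part of the original answer): differentiate the closed form $I(x)=\sqrt{3}\,\ln\left|\frac{t-\sqrt{3}}{t+\sqrt{3}}\right|+2\tan^{-1}(t)$ with $t=\sqrt{\frac{2-\sin x}{2+\sin x}}$ by a difference quotient and compare it with the original integrand:</p>

```python
import math

def t(x):
    return math.sqrt((2 - math.sin(x)) / (2 + math.sin(x)))

def I(x):
    # candidate antiderivative: sqrt(3)*ln|(t - sqrt 3)/(t + sqrt 3)| + 2*arctan(t)
    u, r3 = t(x), math.sqrt(3)
    return r3 * math.log(abs((u - r3) / (u + r3))) + 2 * math.atan(u)

def integrand(x):
    s = math.sin(x)
    return math.sqrt((1 - s) * (2 - s) / ((1 + s) * (2 + s)))

x, eps = 0.4, 1e-6
diff_quot = (I(x + eps) - I(x - eps)) / (2 * eps)
print(abs(diff_quot - integrand(x)))  # should be tiny
```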
|
3,552,264 | <p>How can I find a function <span class="math-container">$f(x)$</span> with a continuous derivative on <span class="math-container">$[0,2]$</span> that satisfies the following conditions:</p>
<ol>
<li><span class="math-container">$f(2) = 3$</span></li>
<li><span class="math-container">$\displaystyle \int_0^2 [f'(x)]^2 dx = 4$</span></li>
<li><span class="math-container">$\displaystyle \int_0^2 x^2f(x) dx = \frac{1}{3}$</span></li>
</ol>
<p><em>My attempt:</em> By using integration by parts, I found that <span class="math-container">$\displaystyle \int_0^2 x^3f'(x) dx = 23$</span>, and I tried to find a constant <span class="math-container">$\alpha$</span> such that <span class="math-container">$\displaystyle \int_0^2 [f'(x) + \alpha x^3]^2 dx = 0$</span>, so that I could conclude <span class="math-container">$f'(x) = -\alpha x^3$</span>. However, the result was strange: I obtained two different "ugly" values and failed to confirm whether the solution was right. I then searched for a solution online but did not come across anything helpful.</p>
<p>I would love to know whether there is another way to solve this problem. I'm grateful if anyone could help. Thanks in advance.</p>
| Riemann | 27,899 | <p><span class="math-container">$$ \int_0^2 x^3f'(x)\, dx= \int_0^2 x^3 \,df(x)=x^3 f(x)|^2_0-3\int_0^2 x^2f(x) \,dx=8\cdot f(2)-3\cdot\frac{1}{3}=24-1=23.$$</span></p>
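<p>The integration by parts identity <span class="math-container">$\int_0^2 x^3 f'(x)\,dx = 8f(2) - 3\int_0^2 x^2 f(x)\,dx$</span> holds for any <span class="math-container">$C^1$</span> function. Here is a numerical illustration I added, using an arbitrary test function of my own choosing (not determined by the problem):</p>

```python
import math

def f(x):  return math.sin(x) + x ** 2     # arbitrary smooth test function
def fp(x): return math.cos(x) + 2 * x      # its derivative

def midpoint(g, a, b, n=20000):
    # simple midpoint-rule quadrature
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

lhs = midpoint(lambda x: x ** 3 * fp(x), 0, 2)
rhs = 8 * f(2) - 3 * midpoint(lambda x: x ** 2 * f(x), 0, 2)
print(abs(lhs - rhs))  # should be close to 0
```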
|
1,341,231 | <p>Let $f$ be continuous on $[0,1]$, and let $\alpha>0$. Find: $\lim\limits_{x\to 0}{x^{\alpha}\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt}$. I tried integration by parts, but I am not sure if $f$ is integrable and to what extent. It also gets really messy. Besides, I am not sure if I am to express the limit using $f$, or to arrive at an actual number. It would be nice if you could take a look.</p>
<p>Using other questions I got: Let us denote $G(x)=\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt$, so I am looking for: $\lim\limits_{x\to 0}{x^{\alpha}G(x)}=\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}$. If $g(t)=\int{f(t)\over t^{\alpha +1}}dt$, Then: $G(x)=g(1)-g(x)$ which means: $G'(x)=-g'(x)=-{f(x)\over x^{\alpha +1}}$. Let us use L'Hôpital's rule: $\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}=\lim\limits_{x\to 0}{{G'(x)=-{f(x)\over x^{\alpha +1}}}\over {-\alpha\over x^{\alpha+1}}}=\lim\limits_{x\to 0}{f(x)\over \alpha}={f(0)\over \alpha}$.</p>
| John_dydx | 82,134 | <p>I can certainly recommend <em><a href="http://www.springer.com/gb/book/9783540761976" rel="nofollow">Elementary Number Theory</a></em> by Gareth A Jones et al. It will get you started and then you can move onto more advanced texts. It's a very short book (about 300 pages) which means you can easily read through the whole text-a very good choice for self study. As for pre-requisites, a good grasp of algebra will probably do. </p>
|
1,341,231 | <p>Let $f$ be continuous on $[0,1]$, and let $\alpha>0$. Find: $\lim\limits_{x\to 0}{x^{\alpha}\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt}$. I tried integration by parts, but I am not sure if $f$ is integrable and to what extent. It also gets really messy. Besides, I am not sure if I am to express the limit using $f$, or to arrive at an actual number. It would be nice if you could take a look.</p>
<p>Using other questions I got: Let us denote $G(x)=\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt$, so I am looking for: $\lim\limits_{x\to 0}{x^{\alpha}G(x)}=\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}$. If $g(t)=\int{f(t)\over t^{\alpha +1}}dt$, Then: $G(x)=g(1)-g(x)$ which means: $G'(x)=-g'(x)=-{f(x)\over x^{\alpha +1}}$. Let us use L'Hôpital's rule: $\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}=\lim\limits_{x\to 0}{{G'(x)=-{f(x)\over x^{\alpha +1}}}\over {-\alpha\over x^{\alpha+1}}}=\lim\limits_{x\to 0}{f(x)\over \alpha}={f(0)\over \alpha}$.</p>
| Ethan Bolker | 72,858 | <p>Dover publishes many number theory titles. At \$10-\$15 each they're a bargain - no need even to look for the Amazon discount. You can get several and jump back and forth among them to get different perspectives on each topic. You can write yourself notes in the margins. Take them to the library to read.</p>
<p>This is a standard old undergraduate text:</p>
<p>Elementary Number Theory: Second Edition
(Underwood Dudley)
<a href="http://store.doverpublications.com/048646931x.html" rel="nofollow">http://store.doverpublications.com/048646931x.html</a></p>
<p>Way off the beaten path, but fun, with a rarely encountered (in elementary texts) proof of quadratic reciprocity:</p>
<p>An Adventurer's Guide to Number Theory
(Richard Friedberg)
<a href="http://store.doverpublications.com/0486281337.html" rel="nofollow">http://store.doverpublications.com/0486281337.html</a></p>
|
1,341,231 | <p>Let $f$ be continuous on $[0,1]$, and let $\alpha>0$. Find: $\lim\limits_{x\to 0}{x^{\alpha}\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt}$. I tried integration by parts, but I am not sure if $f$ is integrable and to what extent. It also gets really messy. Besides, I am not sure if I am to express the limit using $f$, or to arrive at an actual number. It would be nice if you could take a look.</p>
<p>Using other questions I got: Let us denote $G(x)=\int_{x}^{1}{f(t)\over t^{\alpha +1}}dt$, so I am looking for: $\lim\limits_{x\to 0}{x^{\alpha}G(x)}=\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}$. If $g(t)=\int{f(t)\over t^{\alpha +1}}dt$, Then: $G(x)=g(1)-g(x)$ which means: $G'(x)=-g'(x)=-{f(x)\over x^{\alpha +1}}$. Let us use L'Hôpital's rule: $\lim\limits_{x\to 0}{G(x)\over {1\over x^{\alpha}}}=\lim\limits_{x\to 0}{{G'(x)=-{f(x)\over x^{\alpha +1}}}\over {-\alpha\over x^{\alpha+1}}}=\lim\limits_{x\to 0}{f(x)\over \alpha}={f(0)\over \alpha}$.</p>
| Community | -1 | <p>In my opinion Hardy &Wright's book on Number Theory is not the best possible book for someone "who has no prior training in Number Theory", I would suggest the following books.</p>
<blockquote>
<ul>
<li><p><em>Elementary Number theory</em> by David M. Burton.</p>
</li>
<li><p><em>Number Theory A Historical Approach</em> by John H. Watkins</p>
</li>
<li><p><em>Higher Arithmetic</em> by H. Davenport</p>
</li>
</ul>
</blockquote>
<p>All the books are well-written. I think that if you are a beginner, and if you are interested in the historical aspects of Number Theory as well, you may first look at the second book. Although Burton's books also have some historical background in each chapter. I would suggest reading Davenport's books a bit later when you have a fair grasp of the subject.</p>
<p>Also I suggest you to look at the suggestions given at <a href="https://math.stackexchange.com/questions/329/best-book-ever-on-number-theory?lq=1">this post</a>.</p>
|
970,872 | <p>There is a common brain teaser that goes like this:</p>
<p>You are given two ropes and a lighter. This is the only equipment you can use. You are told that each of the two ropes has the following property: if you light one end of the rope, it will take exactly one hour to burn all the way to the other end. But it doesn't have to burn at a uniform rate. In other words, half the rope may burn in the first five minutes, and then the other half would take 55 minutes. The rate at which the two ropes burn is not necessarily the same, so the second rope will also take an hour to burn from one end to the other, but may do it at some varying rate, which is not necessarily the same as the one for the first rope. Now you are asked to measure a period of 45 minutes. How will you do it?</p>
<p>Now I usually love brain teasers but this one frustrated me for a while because I could not prove that if a rope of non-uniform density is burned at both ends it burns in time $T/2$. I think I have sketched a proof by induction that shows that it's not actually true.</p>
<p>Given a rope of uniform density the burn rate at either end is equal so clearly it burns in time $T/2$. Now, consider a rope of non-uniform density, the total time T for this rope to burn is the linear combination of the times of the uniform density "chunks" to burn, i.e. $T = T_1 + T_2 + \ldots + T_n$. So consider, $T/2 = T_1/2+ T_2/2 + \ldots + T_n/2$. If we look at each $T_i/2$ this is precisely the time it takes to burn the uniform segment $T_i$ if lit at both ends. Therefore, in order to arrive at a rope that burns in time $T/2$, one would need to light each uniform segment on both ends, not simply the end of both ends of the total rope. What am I doing wrong?</p>
 | J2 student | 306,580 | <p>For the first rope, label one end 0 and the other end A. Starting from end 0, let the derivative of time passed w.r.t. fire position x be f(x). The corresponding function starting from end A is f'(x) (here the prime is only a label for a second function, not differentiation). </p>
<p>Since rope A takes 1h to burn from either end, integrating f(x) from 0 to A, and integrating f'(x) from A to 0 gives time of an hour. Hence f(x)=-f'(x)</p>
<p>Now, if rope A burns from both ends, let time needed for complete burning be t hours and meeting point be a. </p>
<p>Since time passed for both processes of burning from each end is the same, integrating f(x) from 0 to a, and integrating f'(x) from A to a (which is the same as integrating f(x) from a to A, as established earlier) gives time t. </p>
<p>Hence 2t, which is twice the integral of f(x) from 0 to a = twice the integral of f(x) from a to A = integral of f(x) from 0 to a + integral of f(x) from a to A = integral of f(x) from 0 to A = 1 hour. Hence, t is 0.5 hour.</p>
<p>Next, for rope B, label one end 0 and the other end B and define the corresponding function for derivate of time w.r.t. position as g(x). After burning for 0.5 hour from one end, let position of fire be b.</p>
<p>We know that integrating g(x) from 0 to b gives half an hour, and integrating g(x) from 0 to B gives 1 hour. Then it can be seen that integrating g(x) from b to B, which is the time needed to burn the remaining rope, which we can call rope C, is half an hour. </p>
<p>When we start burning the remaining rope, rope C from the other end, the same argument for rope A gives the time for complete burning as 0.25 hour. Hence we can get 0.75 hour. </p>
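<p>The argument above can also be checked with a discrete simulation (my addition): model the rope as n segments with random, non-uniform burn times normalized to one hour, let each fire consume segments in order from its own end, and observe that the two fires meet after about half an hour:</p>

```python
import random

random.seed(1)
n = 1000
raw = [random.random() for _ in range(n)]
total = sum(raw)
tau = [r / total for r in raw]  # tau[i]: time to burn segment i (sums to 1 hour)

# Burn from both ends: repeatedly advance whichever fire would finish its
# next segment sooner, until the two fires meet.
left, right = 0, n - 1
t_left = t_right = 0.0
while left <= right:
    if t_left + tau[left] <= t_right + tau[right]:
        t_left += tau[left]
        left += 1
    else:
        t_right += tau[right]
        right -= 1

elapsed = max(t_left, t_right)
print(elapsed)  # ~0.5, up to a discretization error of one segment
```

<p>Since every segment is consumed by exactly one fire, t_left + t_right = 1 hour, which is why the meeting time comes out near 0.5 regardless of how the burn times are distributed.</p>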
|
4,308,155 | <p>Given an operator <span class="math-container">$T \in B(H)$</span>, denote its spectrum by <span class="math-container">$\sigma(T)$</span>. We write <span class="math-container">$\Omega := \sigma(T)\setminus \{0\}$</span>. The following result is well-known and <em><strong>can be assumed without proof</strong></em>:</p>
<blockquote>
<p>Let <span class="math-container">$T \in B_0(H)$</span> be a self-adjoint compact operator on a Hilbert
space <span class="math-container">$H$</span>. Given <span class="math-container">$\lambda \in \Omega$</span>, let <span class="math-container">$P_\lambda$</span> be the
finite-rank projection on <span class="math-container">$\mathcal{E}_\lambda:= \ker(T-\lambda I)$</span>.
Then <span class="math-container">$$T= \sum_{\lambda \in \Omega} \lambda P_\lambda$$</span> where the
unordered sum converges in the norm topology.</p>
</blockquote>
<p>Given <span class="math-container">$\xi, \eta \in H$</span>, let us define the rank-one operator <span class="math-container">$R_{\xi, \eta}: H \to H$</span> by <span class="math-container">$R_{\xi, \eta}(\zeta):= \langle \zeta, \eta\rangle \xi$</span>. I want to prove the following theorem:</p>
<blockquote>
<p>If <span class="math-container">$T \in B_0(H)$</span> is a self-adjoint compact operator, then
there is an orthonormal basis <span class="math-container">$\{e_s\}_{s \in S}$</span> for <span class="math-container">$\ker(T)^\perp$</span>
and a collection <span class="math-container">$\{\mu_s\}_{s \in S} \in c_0(S)$</span> such that <span class="math-container">$$T=
\sum_{s \in S} \mu_s R_{e_s, e_s}$$</span> where the series converges in
norm. In particular, a self-adjoint compact operator admits an
orthonormal basis of eigenvectors.</p>
</blockquote>
<p>The idea of the proof should be more or less the following:</p>
<p>We have an orthogonal direct sum decomposition <span class="math-container">$$\ker(T)^\perp = \bigoplus_{\lambda \in \Omega} \mathcal{E}_\lambda$$</span>
Given <span class="math-container">$\lambda \in \Omega$</span>, let <span class="math-container">$\mathcal{F}_\lambda$</span> be an orthonormal basis for <span class="math-container">$\mathcal{E}_\lambda$</span>. Then we have <span class="math-container">$$P_\lambda = \sum_{\xi \in \mathcal{F}_\lambda} R_{\xi, \xi}$$</span> and it follows that
<span class="math-container">$$T = \sum_{\lambda \in \Omega}\lambda \left(\sum_{\xi \in \mathcal{F}_\lambda} R_{\xi, \xi}\right) = \sum_{\lambda \in \Omega}\left(\sum_{\xi \in \mathcal{F}_\lambda} \lambda R_{\xi, \xi}\right)= \sum_{\xi \in \bigcup_{\lambda \in \Omega} \mathcal{F}_\lambda} \lambda R_{\xi, \xi}$$</span>
Then <span class="math-container">$\bigcup_{\lambda \in \Omega} \mathcal{E}_\lambda$</span> is an orthonormal basis for <span class="math-container">$\ker(T)^\perp$</span> and this should yield the desired decomposition. However, I feel like the equality
<span class="math-container">$$\sum_{\lambda \in \Omega}\left(\sum_{\xi \in \mathcal{F}_\lambda} \lambda R_{\xi, \xi}\right)= \sum_{\xi \in \bigcup_{\lambda \in \Omega} \mathcal{F}_\lambda} \lambda R_{\xi, \xi}$$</span>
still requires some formal justification. We should at least show that the sum on the right converges in the norm topology! Of course, other approaches to prove my end goal are also welcome!</p>
<p>EDIT: It's a strong requirement to me that the convergence of all series involved is in the norm-topology. Many references only deal with the strong convergence of these series.</p>
| Community | -1 | <p>Here is the proof how I wrote it down:</p>
<p><strong>Theorem</strong>: Let <span class="math-container">$T \in B_0(H)$</span> be a self-adjoint compact operator. If <span class="math-container">$\mathcal{F}_\lambda$</span> is an orthonormal basis for <span class="math-container">$\mathcal{E}_\lambda$</span> for each <span class="math-container">$\lambda \in \Omega$</span>, then <span class="math-container">$\mathcal{F} = \bigcup_{\lambda \in \Omega} \mathcal{F}_\lambda$</span> is an orthonormal basis for <span class="math-container">$\ker(T)^\perp$</span> and we can write
<span class="math-container">$$T= \sum_{\xi \in \mathcal{F}} \lambda_\xi R_{\xi,\xi}$$</span>
where <span class="math-container">$\lambda_\xi = \lambda$</span> when <span class="math-container">$\xi \in \mathcal{F}_\lambda$</span>, and the series converges in norm.</p>
<p><strong>Proof</strong>: Since eigenvectors belonging to distinct eigenvalues are orthogonal, it is clear that the family <span class="math-container">$\mathcal{F}$</span> is orthonormal. Let <span class="math-container">$\mathcal{K}$</span> be the closed linear span of <span class="math-container">$\mathcal{F}$</span>. Clearly <span class="math-container">$\mathcal{K} \perp \ker(T)$</span>. The operator <span class="math-container">$T\vert_{(\mathcal{K}\oplus \ker(T))^\perp}$</span> is self-adjoint and compact. If <span class="math-container">$(\mathcal{K}\oplus \ker(T))^\perp \ne 0$</span>, it would follow that <span class="math-container">$T$</span> has a non-zero eigenvector <span class="math-container">$\zeta \in (\mathcal{K}\oplus \ker(T))^\perp$</span>. But all eigenvectors are contained in <span class="math-container">$\mathcal{K}\oplus \ker(T)$</span>, so this is impossible. Hence, the orthogonal complement must be zero and we conclude that <span class="math-container">$\mathcal{K}\oplus \ker(T) = \mathcal{H}$</span>, i.e. <span class="math-container">$\ker(T)^\perp = \mathcal{K}$</span> and hence <span class="math-container">$\mathcal{F}$</span> is an orthonormal basis for <span class="math-container">$\ker(T)^\perp$</span>.</p>
<p>Given <span class="math-container">$\xi\in \mathcal{F}$</span>, define <span class="math-container">$\lambda_\xi:= \lambda$</span> where <span class="math-container">$\lambda \in \Omega$</span> is the unique eigenvalue with <span class="math-container">$\xi \in \mathcal{F}_\lambda$</span>. We prove that the series <span class="math-container">$\sum_{\xi \in \mathcal{F}} \lambda_\xi R_{\xi,\xi}$</span> necessarily converges.</p>
<p>Given <span class="math-container">$\epsilon > 0$</span>, consider the finite set <span class="math-container">$F_0:= \{\xi \in \mathcal{F}: |\lambda_\xi|\ge \epsilon\}$</span>. If <span class="math-container">$F$</span> is a finite subset of <span class="math-container">$\mathcal{F}$</span> with <span class="math-container">$F \cap F_0 = \emptyset$</span>, then
<span class="math-container">$$\left\|\sum_{\xi \in F} \lambda_\xi R_{\xi,\xi}\right\| = \max_{\xi \in F} |\lambda_{\xi}| < \epsilon$$</span>
and it follows that <span class="math-container">$T':= \sum_{\xi \in \mathcal{F}}\lambda_\xi R_{\xi,\xi} \in B_0(\mathcal{H})$</span> converges in norm.</p>
<p>It remains to show that <span class="math-container">$T' = T.$</span> But it is clear that <span class="math-container">$T\xi = T'\xi$</span> for all <span class="math-container">$\xi \in \mathcal{F}$</span> and that <span class="math-container">$T\vert_{\ker(T)}= 0 = T'\vert_{\ker(T)}$</span>, so this readily follows.</p>
|
43,690 | <p>I have to apologize because this is not the normal sort of question for this site, but there have been times in the past where MO was remarkably helpful and kind to undergrads with similar types of question and since it is worrying me increasingly as of late I feel that I must ask it.</p>
<p>My question is: what can one (such as myself) contribute to mathematics?</p>
<p>I find that mathematics is made by people like Gauss and Euler - while it may be possible to learn their work and understand it, nothing new is created by doing this. One can rewrite their books in modern language and notation or guide others to learn it too, but I never believed this was the significant part of a mathematician's work, which is the creation of original mathematics. It seems entirely plausible that, with all the tremendously clever people working so hard on mathematics, there is nothing left for someone such as myself (who would be the first to admit they do not have any special talent in the field) to do. Perhaps my value would be to act more like cannon fodder, since just sending in <em>enough</em> men will surely break through some barrier.</p>
<p>Anyway I don't want to ramble too much but I really would like to find answers to this question - whether they come from experiences or peoples biographies or anywhere.</p>
<p>Thank you.</p>
| Laie | 6,013 | <p>One possible way to think about this midnight question might be to ask what smaller results have been important or interesting to you, and then to appreciate the existence of the mathematicians who have discovered/invented it. My guess would be that everyone can compile a list of such results by players who are not in the league of Gauss and Euler. My own list would be quite long, and some of the results are recent enough to have been discovered by users of MO. </p>
|
4,038,151 | <p>In Bernt Oksendal's <em>Stochastic Differential Equations</em>, Chapter 4, one has the following stochastic differential equation (whose solution is geometric Brownian motion):
<span class="math-container">$$dN_t=rN_tdt+\alpha N_tdB_t\;\;\;\text{ ie } \;\;\; N_t-N_0=r\int_0^t N_sds+\alpha\int_0^tN_sdB_s,$$</span>
where <span class="math-container">$\alpha,r\in\mathbb{R}$</span> and <span class="math-container">$B_t$</span> is standard Brownian motion (ie <span class="math-container">$B_0=0$</span>). After assuming that <span class="math-container">$N_t$</span> solves the above equation, the author abruptly deduces that
<span class="math-container">$$\frac{dN_t}{N_t}=rdt+\alpha dB_t \;\;\;\text{ie}\;\;\; \int_0^t\frac{1}{N_s}dN_s= rt+\alpha B_t. \;\;\;(*)$$</span></p>
<p>I don't understand how he obtained this directly.</p>
<p>What I understand for sure is that if we seek to compute
<span class="math-container">$$\int_0^t\frac{1}{N_s}dN_s $$</span>
we apply Itô's formula for <span class="math-container">$Y_t=\ln(N_t)$</span> (assuming <span class="math-container">$N_t$</span> satisfies all the needed conditions). After some computation this yields<br />
<span class="math-container">$$\frac{1}{N_t}dN_t=d\ln N_t+\frac{1}{2}\alpha^2dt \;\;\text{ i.e } \;\;\int_0^t\frac{1}{N_s}dN_s=\ln(N_t)-\ln(N_0)+\frac{1}{2}\alpha^2t.\;\;\;(**)$$</span>
But at first glance, it does not seem that <span class="math-container">$(**)$</span> implies <span class="math-container">$(*)$</span>. How did he obtain <span class="math-container">$(*)$</span>? Did he use a method other than the Ito formula or am I missing something?</p>
<hr />
<p>Thank you for the helpful answers! I've upvoted both and will accept whichever has more votes (in case of tie I'll just leave them be).</p>
<p>It turns out that what I was missing is the definition of the <em><strong>Itô integral with respect to an Itô process</strong></em>, which I could not find in the book. So actually <em><strong>by definition</strong></em> one has that for any Itô process of the form
<span class="math-container">$$dX_t=\alpha dt+\sigma dB_t,$$</span>
and <span class="math-container">$Y_t$</span> an appropriate integrand that
<span class="math-container">$$\boxed{\int_0^tY_sdX_s:=\int_0^t\alpha Y_s ds + \int_0^t\sigma Y_sdB_s}$$</span>
This justifies the formal notation
<span class="math-container">$$\frac{1}{N_t}dN_t=\frac{1}{N_t}(rN_tdt+\alpha N_tdB_t)= r\,dt+\alpha\, dB_t,$$</span>
and automatically gives <span class="math-container">$(*)$</span> when <span class="math-container">$Y_t=1/N_t$</span>, of course, while assuming <span class="math-container">$Y_t$</span> meets all the necessary requirements.</p>
| nullUser | 17,459 | <p>Typically associativity of the integral is proved early on. If <span class="math-container">$X$</span> is a semimartingale and the integral <span class="math-container">$K \cdot X = \int K dX$</span> makes sense then the integral <span class="math-container">$(HK) \cdot X = \int HK dX$</span> makes sense if and only if the integral <span class="math-container">$H \cdot (K\cdot X) = \int H d(\int K dX)$</span> makes sense, in which case they are equal. In "differential form", this is <span class="math-container">$H\, d(K dX) = (HK) dX$</span>. I am assuming you are already comfortable with linearity of stochastic integrals.</p>
<p>Thus taking <span class="math-container">$dN_t = N_t r dt + N_t\alpha dB_t$</span> and integrating <span class="math-container">$1/N_t$</span> with respect to the semimartingales defined by either side gives
<span class="math-container">$$
(1/N_t) dN_t = (1/N_t) d(N_t r dt + N_t\alpha dB_t) = (1/N_t) d(N_t r dt)+ (1/N_t)d( N_t\alpha dB_t) = r dt + \alpha dB_t.
$$</span></p>
<p>There is no need to use Ito here, it is simply associativity of the integral. The key idea is that we are not "dividing by <span class="math-container">$N_t$</span>", instead we are <em>integrating</em> <span class="math-container">$1/N_t$</span> with respect to two (equal) semimartingales. In differential form, it just looks like dividing.</p>
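<p>One can also see this numerically. Under an Euler–Maruyama discretization (a sketch of mine, not from the book), each increment satisfies <span class="math-container">$\Delta N_k / N_k = r\,\Delta t + \alpha\,\Delta B_k$</span> term by term, so the discrete versions of the two sides of <span class="math-container">$(*)$</span> agree:</p>

```python
import random

random.seed(0)
r, alpha = 0.05, 0.3
T, n = 1.0, 10000
dt = T / n

N, B = 1.0, 0.0
ito_integral = 0.0  # running sum approximating the integral of (1/N_s) dN_s
for _ in range(n):
    dB = random.gauss(0.0, dt ** 0.5)
    dN = r * N * dt + alpha * N * dB   # Euler-Maruyama step for dN = rN dt + aN dB
    ito_integral += dN / N
    N += dN
    B += dB

print(abs(ito_integral - (r * T + alpha * B)))  # essentially 0 for this scheme
```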
|
1,961,614 | <p>I can't say I've gotten very far. You can show $3 + \sqrt{11}$ is irrational, call it $a$. Then I tried supposing it's rational, i.e.:</p>
<p>$a^{1/3}$ = $\frac{m}{n}$ for $m$ and $n$ integers.</p>
<p>You can write $m$ and $n$ in their canonical factorizations, then cube both sides of the equation...but I can't seem to derive a contradiction. </p>
 | Deepak | 151,732 | <p>If you denote $x = (3 + \sqrt{11})^{\frac 13}$, you can show that the equation $x^6 - 6x^3 - 2 = 0$ holds and appeal to the Rational Root Theorem.</p>
<p><a href="https://en.wikipedia.org/wiki/Rational_root_theorem" rel="nofollow">Rational Root Theorem</a></p>
|
1,961,614 | <p>I can't say I've gotten very far. You can show $3 + \sqrt{11}$ is irrational, call it $a$. Then I tried supposing it's rational, i.e.:</p>
<p>$a^{1/3}$ = $\frac{m}{n}$ for $m$ and $n$ integers.</p>
<p>You can write $m$ and $n$ in their canonical factorizations, then cube both sides of the equation...but I can't seem to derive a contradiction. </p>
| Steven Alexis Gregory | 75,410 | <p><strong>Rational Root Theorem</strong></p>
<blockquote>
<p>If $P(x) = a_nx^n + a_{n-1}x^{n-1} + a_{n-2}x^{n-2} + \cdots + a_1x + a_0$, where all of the $a_i$ are integers, and $P\left( \dfrac uv \right)=0$ where $u$ and $v$ are integers, then $v \mid a_n$ and $u \mid a_0$.</p>
</blockquote>
<p>Let $x = (3 + \sqrt{11})^{\frac 13}$. Then
\begin{align}
x^3 &= 3 + \sqrt{11} \\
x^3-3 &= \sqrt{11}\\
x^6 - 6x^3 + 9 &= 11 \\
x^6 - 6x^3 - 2 &= 0 \\
\end{align}</p>
<p>According to the rational root theorem, the only possible rational zeros of the polynomial $P(x) = x^6 - 6x^3 - 2$ are $1, -1, 2,$ and $-2$.</p>
<p>We find $P(1) = -7$, $P(-1)=5$, $P(2)=14$ and $P(-2)=110$</p>
<p>Hence $P(x) = x^6 - 6x^3 - 2$ has no rational roots. It follows that
$x = (3 + \sqrt{11})^{\frac 13}$ is irrational.</p>
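<p>The candidate check is mechanical enough to script (my addition, just confirming the computation above):</p>

```python
from fractions import Fraction

def P(x):
    return x ** 6 - 6 * x ** 3 - 2

# By the Rational Root Theorem, a rational root u/v in lowest terms of
# x^6 - 6x^3 - 2 must have v | 1 and u | 2, so the only candidates are:
candidates = [Fraction(u) for u in (1, -1, 2, -2)]
values = {c: P(c) for c in candidates}
print(values)
assert all(v != 0 for v in values.values())  # hence no rational roots
```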
|
2,419,922 | <p>An involution is a permutation $P$ which is its own inverse: $P\cdot P = \text{id}$.</p>
<p>Every permutation can be written (in various ways) as a sequence of single-element swaps (transpositions). Sometimes these sequences are palindromes, in that the reversal of the sequence is the same sequence.</p>
<p>If $T$ is a sequence of transpositions for $P$, then the reversal of $T$ is a sequence of transpositions for $P^{-1}$. It follows that if the sequence $T$ is a palindrome, then $P$ is an involution. </p>
<p>I'm wondering whether the converse is true:</p>
<blockquote>
<p><strong>Conjecture</strong>: Any involution $P$ can be decomposed into a sequence of transpositions which is a palindrome.</p>
</blockquote>
<p>— and if it's false, if there is any alternative way to characterize the involutions which <em>can</em> be decomposed in such a way.</p>
| user326210 | 326,210 | <p>It's not possible in general to express an involution as a palindrome of transpositions. In fact, because of cancellation properties, the only involutions that can be expressed as palindromes are (1) a single transposition, and (2) the identity.</p>
<p>Indeed, any even-length palindrome of transpositions cancels itself out:
<span class="math-container">$$\pi_1\pi_2\ldots \pi_m \pi_m\ldots \pi_2\pi_1$$</span>
Each consecutive pair of transpositions in the middle will cancel out successively until they're all gone.</p>
<p>On the other hand, an odd-length palindrome has the form <span class="math-container">$$\pi_1\pi_2\ldots \pi_m \pi_{m+1}\pi_m\ldots \pi_2\pi_1$$</span>. Grouping the outer transpositions together, we see that it has the form <span class="math-container">$Q\cdot \pi_{m+1}\cdot Q^{-1}$</span> for some permutation <span class="math-container">$Q$</span>. Hence the palindrome is conjugate to the transposition <span class="math-container">$\pi_{m+1}$</span>; it is therefore itself a single transposition.</p>
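Both cases are easy to check computationally. The following is my own illustrative Python sketch (representing a product of transpositions as successive swaps applied to the identity):

```python
import random

def compose(n, transpositions):
    """Apply a sequence of transpositions, left to right, to the identity on {0,...,n-1}."""
    perm = list(range(n))
    for i, j in transpositions:
        perm[i], perm[j] = perm[j], perm[i]
    return perm

random.seed(1)
n = 8
for _ in range(200):
    half = [tuple(random.sample(range(n), 2)) for _ in range(random.randint(1, 6))]
    # Even-length palindrome: the middle pairs cancel successively, leaving the identity.
    assert compose(n, half + half[::-1]) == list(range(n))
    # Odd-length palindrome: conjugate of the middle transposition, hence a transposition.
    mid = tuple(random.sample(range(n), 2))
    p = compose(n, half + [mid] + half[::-1])
    assert sum(1 for k in range(n) if p[k] != k) == 2
print("both claims verified on random examples")
```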
|
37,977 | <p>I have two lists, say <code>a</code> and <code>b</code>, both of length <code>n</code>. I'd like to compute the following:</p>
<ul>
<li>minimum of $a[i]/b[i]$ where $i=1, 2, \ldots, n$ and $b[i]>0$</li>
</ul>
<p>I'd also like to know the index of the element where the min occurs.</p>
| Nasser | 70 | <pre><code>a = RandomReal[{1, 10}, 10];
b = RandomReal[{1, 10}, 10];
lst = a/b;
Min[lst]
(*0.442821447015283*)
Position[lst, Min[lst]]
(* {{4}} *)
</code></pre>
<p><strong>Update to answer comment below</strong></p>
<p><code>How can I implement the b[i]>0 condition</code></p>
<p>One way can be to use <a href="http://reference.wolfram.com/mathematica/ref/MapThread.html" rel="nofollow">MapThread</a> to make the list by checking for the condition </p>
<pre><code>a = {2, 9, 5, 3, 0, 4, 9, 1};
b = {0, -4, 2, 0, 10, 4, 0, 10};
lst = MapThread[If[#2 > 0, #1/#2, Sequence @@ {}] &, {a, b}]
(*{5/2, 0, 1, 1/10}*)
Min[lst]
(* 0 *)
Position[lst, %]
{{2}}
</code></pre>
<p><strong>Updated to return position of min in original list not filtered list</strong></p>
<p>Remember the index while filtering, so it can be used to map back to the original list.</p>
<pre><code>a = {2, 9, 5, 3, 0, 4, 9};
b = {0, -4, 2, 0, 10, 4, 0};
i = 0;
lst = MapThread[(i++; If[#2 > 0, {i, #1/#2}, Sequence @@ {}]) &, {a,b}]
(* {{3, 5/2}, {5, 0}, {6, 1}} *)
min = Min[lst[[All, 2]]]
(* 0 *)
p = Flatten@Position[lst[[All, 2]], min];
lst[[p, 1]]
(* {5} *)
</code></pre>
<p>A short hand version is</p>
<pre><code>i = 0;
p = Flatten@
Position[lst[[All,2]],Min@MapThread[(i++;If[#2>0,{i,#1/#2},Sequence@@{}])&,{a,b}][[All,2]]]
lst[[p, 1]]
</code></pre>
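For readers who want to cross-check the filter-and-track-index logic outside Mathematica, here is my own illustrative Python sketch of the same idea (keeping the 1-based index of each element with `b[i] > 0`):

```python
a = [2, 9, 5, 3, 0, 4, 9]
b = [0, -4, 2, 0, 10, 4, 0]

# Keep only positions where b[i] > 0, remembering the original 1-based index.
pairs = [(i, x / y) for i, (x, y) in enumerate(zip(a, b), start=1) if y > 0]
idx, val = min(pairs, key=lambda p: p[1])
print(pairs)      # [(3, 2.5), (5, 0.0), (6, 1.0)]
print(idx, val)   # 5 0.0
```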
|
541,541 | <blockquote>
<p>Assume that $f_1\colon V_1\to W_1, f_2\colon V_2\to W_2$ are $k$-linear maps between $k$-vector spaces (over the same field $k$, but the dimension may be infinity). Then the tensor product $f_1\otimes f_2\colon V_1\otimes V_2\to W_1\otimes W_2$ is defined, and it's obvious that $\ker f_1\otimes V_2+ V_1\otimes \ker f_2 \subseteq \ker (f_1\otimes f_2)$. My question is whether the relation $\subseteq$ is in fact $=$. </p>
</blockquote>
<p>If this does not hold, how about assuming all these vector spaces are commutative associative $k$-algebras with identity and that all the maps are $k$-algebra homomorphisms? Or can you give a "right" form of the kernel $\ker (f_1\otimes f_2)$?</p>
| Jendrik Stelzner | 300,783 | <h1>When <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> are surjective</h1>
<h2>Constructing an isomorphism</h2>
<p>Suppose first that <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> are surjective, and that we are working over an arbitrary commutative ring <span class="math-container">$R$</span>.
We may assume that <span class="math-container">$f_i$</span> is the canonical quotient map from <span class="math-container">$V_i$</span> to <span class="math-container">$V_i / U_i$</span> for a submodule <span class="math-container">$U_i$</span> of <span class="math-container">$V_i$</span>.
By abuse of notation, we denote by <span class="math-container">$U_1 ⊗ V_2 + V_1 ⊗ U_2$</span> the resulting submodule of <span class="math-container">$V_1 ⊗ V_2$</span>, even if the homomorphisms from <span class="math-container">$U_1 ⊗ V_2$</span> and <span class="math-container">$V_1 ⊗ U_2$</span> to <span class="math-container">$V_1 ⊗ V_2$</span> are not injective.
The submodule <span class="math-container">$U_1 ⊗ V_2 + V_1 ⊗ U_2$</span> of <span class="math-container">$V_1 ⊗ V_2$</span> lies in the kernel of <span class="math-container">$f_1 ⊗ f_2$</span>, whence we get an induced homomorphism of <span class="math-container">$R$</span>-modules
<span class="math-container">$$
φ
\colon
(V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2)
\to
(V_1 / U_1) ⊗ (V_2 / U_2) \,,
\quad
[v_1 ⊗ v_2] \mapsto [v_1] ⊗ [v_2] \,.
$$</span>
We want to show that <span class="math-container">$φ$</span> is an isomorphism.
This can be done in multiple ways.</p>
<ol>
<li><p>We can construct an inverse <span class="math-container">$ψ$</span> to <span class="math-container">$φ$</span> as follows.
For every element <span class="math-container">$v_2$</span> of <span class="math-container">$V_2$</span>, the composite
<span class="math-container">$$
λ_{v_2}
\colon
V_1 \to V_1 ⊗ V_2 \to (V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2) \,,
\quad
v_1 \mapsto v_1 ⊗ v_2 \mapsto [v_1 ⊗ v_2]
$$</span>
is <span class="math-container">$R$</span>-linear and contains <span class="math-container">$U_1$</span> in its kernel.
We therefore get an induced <span class="math-container">$R$</span>-linear map
<span class="math-container">$$
λ'_{v_2}
\colon
V_1 / U_1 \to (V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2) \,,
\quad
[v_1] \mapsto [v_1 ⊗ v_2] \,.
$$</span>
For every element <span class="math-container">$[v_1]$</span> of <span class="math-container">$V_1 / U_1$</span> we thus have the well-defined map
<span class="math-container">$$
ρ_{[v_1]}
\colon
V_2 \to (V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2) \,,
\quad
v_2 \mapsto λ'_{v_2}([v_1]) = [v_1 ⊗ v_2] \,.
$$</span>
This map is <span class="math-container">$R$</span>-linear and contains <span class="math-container">$U_2$</span> in its kernel, and therefore induces a well-defined <span class="math-container">$R$</span>-linear map
<span class="math-container">$$
ρ'_{[v_1]}
\colon
V_2 / U_2 \to (V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2) \,,
\quad
[v_2] \mapsto [v_1 ⊗ v_2] \,.
$$</span>
We have altogether a well-defined map
<span class="math-container">\begin{align*}
(V_1 / U_1) × (V_2 / U_2) &\to (V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2) \,,
\\
([v_1], [v_2]) &\mapsto ρ'_{[v_1]}([v_2]) = [v_1 ⊗ v_2] \,.
\end{align*}</span>
This map is <span class="math-container">$R$</span>-bilinear, and therefore induces the desired <span class="math-container">$R$</span>-linear map
<span class="math-container">$$
ψ
\colon
(V_1 / U_1) ⊗ (V_2 / U_2) \to (V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2) \,,
\quad
[v_1] ⊗ [v_2] \mapsto [v_1 ⊗ v_2] \,.
$$</span></p>
</li>
<li><p>Let <span class="math-container">$T$</span> be another <span class="math-container">$R$</span>-module.
Any <span class="math-container">$R$</span>-bilinear map <span class="math-container">$β$</span> from <span class="math-container">$(V_1 / U_1) × (V_2 / U_2)$</span> to <span class="math-container">$T$</span> can be pulled back to an <span class="math-container">$R$</span>-bilinear map <span class="math-container">$α$</span> from <span class="math-container">$V_1 × V_2$</span> to <span class="math-container">$T$</span>, given by <span class="math-container">$α(v_1, v_2) = β([v_1], [v_2])$</span>.
This bilinear map <span class="math-container">$α$</span> satisfies <span class="math-container">$α(v_1, v_2) = 0$</span> whenever <span class="math-container">$v_1$</span> is contained in <span class="math-container">$U_1$</span> or <span class="math-container">$v_2$</span> is contained in <span class="math-container">$U_2$</span>.</p>
<p>Suppose conversely that <span class="math-container">$α$</span> is any <span class="math-container">$R$</span>-bilinear map from <span class="math-container">$V_1 × V_2$</span> to <span class="math-container">$T$</span> such that <span class="math-container">$α(v_1, v_2) = 0$</span> whenever <span class="math-container">$v_1$</span> is contained in <span class="math-container">$U_1$</span> or <span class="math-container">$v_2$</span> is contained in <span class="math-container">$U_2$</span>.
For all <span class="math-container">$v_1, v'_1 ∈ V_1$</span> and <span class="math-container">$v_2, v'_2 ∈ V_2$</span> with <span class="math-container">$v_1 - v'_1 ∈ U_1$</span> and <span class="math-container">$v_2 - v'_2 ∈ U_2$</span> we then have
<span class="math-container">\begin{align*}
α(v_1, v_2)
&=
α(v'_1 + v_1 - v'_1, v'_2 + v_2 - v'_2)
\\
&=
α(v'_1, v'_2)
+ \underbrace{ α(v'_1, v_2 - v'_2) }_{= 0}
+ \underbrace{ α(v_1 - v'_1, v'_2) }_{= 0}
+ \underbrace{ α(v_1 - v'_1, v_2 - v'_2) }_{= 0}
\\
&= α(v'_1, v'_2) \,.
\end{align*}</span>
This shows that <span class="math-container">$α$</span> descends to a well-defined map <span class="math-container">$β$</span> from <span class="math-container">$(V_1 / U_1) × (V_2 / U_2)$</span> to <span class="math-container">$T$</span>, which is then again bilinear.
(We could also prove this by using the universal properties of <span class="math-container">$V_1 / U_1$</span> and <span class="math-container">$V_2 / U_2$</span>, similar to how we have constructed the inverse <span class="math-container">$ψ$</span> before.)
We have thus constructed an isomorphism
<span class="math-container">\begin{align*}
{}&
\mathrm{Bilin}(V_1 / U_1 , V_2 / U_2 ;\, T)
\\
≅{}&
\{
α ∈ \mathrm{Bilin}(V_1, V_2;\, T)
\mid
\text{$α(v_1, v_2) = 0$ whenever $v_1 ∈ U_1$ or $v_2 ∈ U_2$}
\} \,.
\end{align*}</span>
It follows that
<span class="math-container">\begin{align*}
{}&
\mathrm{Hom}((V_1 / U_1) ⊗ (V_2 / U_2), T)
\\
≅{}&
\mathrm{Bilin}(V_1 / U_1 , V_2 / U_2 ;\, T)
\\
≅{}&
\{
α ∈ \mathrm{Bilin}(V_1, V_2;\, T)
\mid
\text{$α(v_1, v_2) = 0$ whenever $v_1 ∈ U_1$ or $v_2 ∈ U_2$}
\}
\\
≅{}&
\{
f ∈ \mathrm{Hom}(V_1 ⊗ V_2, T)
\mid
\text{$f(v_1 ⊗ v_2) = 0$ whenever $v_1 ∈ U_1$ or $v_2 ∈ U_2$}
\}
\\
≅{}&
\{
f ∈ \mathrm{Hom}(V_1 ⊗ V_2, T)
\mid
\text{$f(x) = 0$ whenever $x ∈ U_1 ⊗ V_2$ or $x ∈ V_1 ⊗ U_2$}
\}
\\
≅{}&
\{
f ∈ \mathrm{Hom}(V_1 ⊗ V_2, T)
\mid
\text{$f(x) = 0$ for all $x ∈ U_1 ⊗ V_2 + V_1 ⊗ U_2$}
\}
\\
≅{}&
\mathrm{Hom}((V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2), T) \,.
\end{align*}</span>
By writing out these isomorphisms, we see that it is given (from top to bottom) by the induced homomorphism <span class="math-container">$φ^*$</span>.
It follows from Yoneda’s lemma (or more precisely from the full faithfulness of the Yoneda embedding) that <span class="math-container">$φ$</span> is an isomorphism.</p>
</li>
<li><p>We could also combine both approaches:
we use the isomorphism
<span class="math-container">\begin{align*}
{}&
\mathrm{Hom}((V_1 / U_1) ⊗ (V_2 / U_2), T)
\\
≅{}&
\{
α ∈ \mathrm{Bilin}(V_1, V_2;\, T)
\mid
\text{$α(v_1, v_2) = 0$ whenever $v_1 ∈ U_1$ or $v_2 ∈ U_2$}
\}
\end{align*}</span>
for <span class="math-container">$T = (V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2)$</span> to easily construct <span class="math-container">$ψ$</span> as the homomorphism corresponding to the bilinear map <span class="math-container">$(v_1, v_2) \mapsto [v_1 ⊗ v_2]$</span>.</p>
</li>
</ol>
<p>We find in either case that <span class="math-container">$φ$</span> is an isomorphism because both <span class="math-container">$(V_1 ⊗ V_2) / (U_1 ⊗ V_2 + V_1 ⊗ U_2)$</span> and <span class="math-container">$(V_1 / U_1) ⊗ (V_2 / U_2)$</span> have suitable universal properties.</p>
<h2>Diagram chase</h2>
<p>We know that tensoring is right exact.
We get therefore the following commutative diagram with exact rows and exact columns:
<span class="math-container">$$
\require{AMScd}
\begin{CD}
U_1 ⊗ U_2 @>>> V_1 ⊗ U_2 @>>> (V_1 / U_1) ⊗ U_2 @>>> 0 \\
@VVV @VVV @VVV \\
U_1 ⊗ V_2 @>>> V_1 ⊗ V_2 @>>> (V_1 / U_1) ⊗ V_2 @>>> 0 \\
@VVV @VVV @VVV \\
U_1 ⊗ (V_2 / U_2) @>>> V_1 ⊗ (V_2 / U_2) @>>> (V_1 / U_1) ⊗ (V_2 / U_2) @>>> 0 \\
@VVV @VVV @VVV \\
0 @. 0 @. 0 @.
\end{CD}
$$</span>
So let’s use the following result:</p>
<blockquote>
<p><strong>Lemma.</strong>
Let <span class="math-container">$R$</span> be a ring and consider the following commutative diagram of <span class="math-container">$R$</span>-modules with exact rows and exact columns:
<span class="math-container">$$
\begin{CD}
{} @. A @>β>> B @>>> 0 \\
@. @Vδ'VV @VVε'V @. \\
C @>δ>> D @>ε>> E @. {} \\
@. @VVV @VVγV @. \\
{} @. F @>>> G @. {}
\end{CD}
$$</span>
The induced sequence <span class="math-container">$A ⊕ C \xrightarrow{f} D \xrightarrow{g} G$</span> is again exact.</p>
<p><em>Proof.</em>
It follows from the exactness of the middle row and the right column (together with the commutativity of the upper square) that <span class="math-container">$g ∘ f = 0$</span>.
Let <span class="math-container">$d ∈ \ker(g)$</span>.
We have <span class="math-container">$0 = g(d) = γε(d)$</span>, whence <span class="math-container">$ε(d) ∈ \ker(γ) = \mathrm{im}(ε')$</span>.
There hence exists <span class="math-container">$b ∈ B$</span> with <span class="math-container">$ε(d) = ε'(b)$</span>.
By the surjectivity of <span class="math-container">$β$</span>, there exists <span class="math-container">$a ∈ A$</span> with <span class="math-container">$b = β(a)$</span>.
Let <span class="math-container">$d' := d - δ'(a)$</span>.
We have
<span class="math-container">$$
ε(d') = ε(d) - εδ'(a) = ε(d) - ε'β(a) = ε(d) - ε'(b) = ε(d) - ε(d) = 0 \,.
$$</span>
There hence exists <span class="math-container">$c ∈ C$</span> with <span class="math-container">$d' = δ(c)$</span>.
We have now <span class="math-container">$d - δ'(a) = d' = δ(c)$</span>, and therefore <span class="math-container">$d = δ'(a) + δ(c) = f( (a, c) )$</span>.
This shows that the kernel of <span class="math-container">$g$</span> is contained in the image of <span class="math-container">$f$</span>.</p>
</blockquote>
<p>In our specific example, we find that the sequence
<span class="math-container">$$
(U_1 ⊗ V_2) ⊕ (V_1 ⊗ U_2) \to V_1 ⊗ V_2 \to (V_1 / U_1) ⊗ (V_2 / U_2)
$$</span>
is exact.
In other words, the kernel of <span class="math-container">$V_1 ⊗ V_2 \to (V_1 / U_1) ⊗ (V_2 / U_2)$</span> is given by <span class="math-container">$U_1 ⊗ V_2 + V_1 ⊗ U_2$</span>.</p>
<h1>When <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> are not both surjective</h1>
<p>If <span class="math-container">$f_1$</span> and <span class="math-container">$f_2$</span> are not both surjective then we can proceed as already explained in Martin Brandenburg’s answer:
By using the factorization
<span class="math-container">$$
V_1 ⊗ V_2
\to \mathrm{im}(f_1) ⊗ \mathrm{im}(f_2)
\to W_1 ⊗ W_2 \,,
$$</span>
we see that the desired equality holds if and only if the homomorphism <span class="math-container">$\mathrm{im}(f_1) ⊗ \mathrm{im}(f_2) \to W_1 ⊗ W_2$</span> is injective.
This is, for example, the case if <span class="math-container">$\mathrm{im}(f_1)$</span> and <span class="math-container">$W_2$</span> are flat, or if <span class="math-container">$W_1$</span> and <span class="math-container">$\mathrm{im}(f_2)$</span> are flat.
This is in particular the case when <span class="math-container">$R$</span> is a field, since then every <span class="math-container">$R$</span>-module is free and thus flat.</p>
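As a concrete sanity check over a field (here $\mathbb{Q}$), one can verify the resulting dimension count on random matrices: $\dim\ker(f_1\otimes f_2)$ agrees with $\dim(\ker f_1\otimes V_2+V_1\otimes\ker f_2)=k_1n_2+n_1k_2-k_1k_2$, both sides reducing to $n_1n_2-r_1r_2$ where $r_i=\operatorname{rank}f_i$. This is my own illustrative Python sketch with exact rational arithmetic:

```python
import random
from fractions import Fraction

def rank(M):
    """Rank over Q via Gauss-Jordan elimination with exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def kron(A, B):
    """Kronecker product: the matrix of f1 tensor f2 in the tensor bases."""
    return [[a * b for a in ra for b in rb] for ra in A for rb in B]

random.seed(2)
for _ in range(20):
    n1, n2, p, q = (random.randint(1, 4) for _ in range(4))
    A = [[random.randint(-3, 3) for _ in range(n1)] for _ in range(p)]
    B = [[random.randint(-3, 3) for _ in range(n2)] for _ in range(q)]
    k1, k2 = n1 - rank(A), n2 - rank(B)
    lhs = n1 * n2 - rank(kron(A, B))          # dim ker(f1 tensor f2)
    rhs = k1 * n2 + n1 * k2 - k1 * k2          # dim of the sum, by inclusion-exclusion
    assert lhs == rhs
print("kernel formula verified on random rational matrices")
```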
|
2,326,551 | <p>Prove that $\varphi(n^2)=n\cdot\varphi(n)$ for $n\in \Bbb{N}$, where $\varphi$ is Euler's totient function.</p>
| Nemanja Beric | 405,086 | <p><em>Hint:</em> Use the multiplicativity of Euler's function together with the fundamental theorem of arithmetic.</p>
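To see the identity in action before proving it, here is my own illustrative Python check, computing $\varphi$ via the product formula with trial-division factorization:

```python
def phi(n):
    """Euler's totient via phi(n) = n * prod(1 - 1/p) over primes p dividing n."""
    result, d = n, 2
    while d * d <= n:
        if n % d == 0:
            result -= result // d
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:                 # leftover prime factor
        result -= result // n
    return result

assert all(phi(n * n) == n * phi(n) for n in range(1, 500))
print("phi(n^2) = n*phi(n) holds for 1 <= n < 500")
```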
|
544,763 | <p>Let $A\subseteq[a,b]$ be Lebesgue measurable, such that $m(A)>\frac{2n-1}{2n}(b-a)$. I need to show that $A$ contains an arithmetic sequence of <em>n</em> numbers ($a_1, a_1+d, \ldots, a_1+(n-1)d$ for some $d$).</p>
<p>I thought about dividing $[a,b]$ into $n$ equal parts, and showing that if I put one part on top of another, there must be at least one overlapping point that occurs in every part, but I haven't succeeded in showing that.</p>
<p>Thank you.</p>
| George | 54,962 | <p><strong>Fact 1.</strong> <em>Let $p$ be the normalized linear Lebesgue
measure on a circle $C$ centered at the origin of the plane $\mathbb{R}^2$.
Suppose that $p(A)> (n-1)/n$. Then there exist $n$ points in $A$
which are vertices of a regular $n$-sided polygon.</em></p>
<p><strong>Proof.</strong>
Let us consider a rotation defined by $f(x,y)=e^{\frac{2\pi
i}{n}}(x+iy)$. Consider the sets $f^{0}(A),f^{1}(A), \cdots,
f^{n-1}(A)$. From the rotation invariance of $p$ we get
$p(f^{0}(A))=p(f^{1}(A))= \cdots= p(f^{n-1}(A))>(n-1)/n$ and $p(C
\setminus f^{0}(A))=p(C \setminus f^{1}(A))= \cdots= p(C \setminus
f^{n-1}(A))<1/n$. We claim that $p(f^{0}(A) \cap f^{1}(A) \cap
\cdots \cap f^{n-1}(A))>0$. Assume the contrary. Then by our
assumption and De Morgan's law we get
$$
1=p(C \setminus \cap_{k=0}^{n-1}f^{k}(A))=p(\cup_{k=0}^{n-1} (C
\setminus f^{k}(A)))\le \sum_{k=0}^{n-1}p(C \setminus f^{k}(A))<n
\times 1/n=1,$$ which is a contradiction.</p>
<p>Hence there is $x \in \cap_{k=0}^{n-1}f^{k}(A)$, equivalently, $x
\in f^{0}(A),x \in f^{1}(A), \cdots, x \in f^{n-1}(A)$. The latter
relations imply that $x \in A, f^{-1}(x) \in A, \cdots,
f^{-(n-1)}(x) \in A$. Notice that the points $M_0=x,
M_1=f^{-1}(x), \cdots, M_{n-1}=f^{-(n-1)}(x)$ are vertices of a
regular $n$-sided polygon.</p>
<p><strong>Fact 2.</strong> <em>Let $B$ be a subset of the real axis whose
linear Lebesgue measure is positive. Then for an arbitrary $n
>1$ there are $n$ points in $B$ which constitute an arithmetic progression.</em></p>
<p><strong>Proof.</strong> By the Lebesgue density theorem, there is a
density point $x_0 \in B$. Let $\epsilon$ be a positive
number such that $\frac{m([x_0 -\epsilon,x_0+\epsilon[ \cap
B)}{2\epsilon }>(n-1)/n$, where $m$ denotes the standard
linear Lebesgue measure on the real axis $\mathbb{R}$. Consider
a circle $C$ of length $2 \epsilon$ centered at
the origin of the real plane. Let $p$ be the normalized linear Lebesgue
measure on $C$. Wind the set $[x_0 -\epsilon,x_0+\epsilon[$
onto the circle $C$ via the unique transformation $\phi$ such that
$\phi(x_0 -\epsilon)=(0, \frac{2\epsilon}{2\pi})$ and $\phi(x_0
-\frac{\epsilon}{2})=(- \frac{2\epsilon}{2\pi},0)$. Obviously,
$$p(\phi(B \cap [x_0 -\epsilon,x_0+\epsilon[))=\frac{m([x_0 -\epsilon,x_0+\epsilon[ \cap
B)}{2\epsilon }>(n-1)/n$$ and by Fact 1, there exist $n$ points
$M_0, M_1, \cdots, M_{n-1}$ in $\phi(B \cap [x_0 -\epsilon,x_0+\epsilon[)$ which are vertices of a
regular $n$-sided polygon. From these points we choose a point
$M_{k_0}$ which is a nearest point for the point $(0,
\frac{2\epsilon}{2\pi})$ from the left (along the circle $C$). Then the points
$\phi^{-1}(M_{k_0}),
\phi^{-1}(f^{1}(M_{k_0})),\cdots,\phi^{-1}(f^{n-1}(M_{k_0})$ are
in $B \cap [x_0 -\epsilon,x_0+\epsilon[$ and they constitute an arithmetic progression.</p>
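The pigeonhole step in Fact 1 has a transparent discrete analogue that is easy to test. The following is my own illustrative Python sketch, with the circle replaced by $\mathbb{Z}/m$ and the rotation by a shift of $m/n$: whenever a subset has density greater than $(n-1)/n$, the intersection of its $n$ shifts is nonempty, yielding an $n$-term arithmetic progression (mod $m$):

```python
import random

random.seed(3)
n, m = 5, 100            # m divisible by n; the shift plays the role of rotation by 2*pi/n
step = m // n
for _ in range(50):
    size = random.randint((n - 1) * m // n + 1, m)   # density strictly above (n-1)/n
    A = set(random.sample(range(m), size))
    shifts = [{(x - k * step) % m for x in A} for k in range(n)]
    common = set.intersection(*shifts)
    assert common                                    # pigeonhole: the complements are too small
    x = min(common)
    assert all((x + k * step) % m in A for k in range(n))  # n-term AP inside A
print("discrete analogue of Fact 1 verified")
```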
|
544,763 | <p>Let $A\subseteq[a,b]$ be Lebesgue measurable, such that: $m(A)>\frac{2n-1}{2n}(b-a)$. I need to show that $A$ contains an arithmetic sequence with <em>n</em> numbers ($a_1,a_1+d,...,a_1+(n-1)*d$ for some d).</p>
<p>I thought about dividing [a,b] into n equal parts, and show that if I put one part on top of the other, there must be at least one lapping point, that will occur in every part. but I haven't succeeded in showing that.</p>
<p>Thank you.</p>
| George | 54,962 | <p>Fact 2 shows that we do not need the restriction $m(A)>\frac{2n-1}{2n}(b-a)$; it is sufficient to require that $m(A)>0$.</p>
<p>It is also of interest that there is a Lebesgue null set $X$ in $\mathbb{R}$ which
satisfies the following conditions:</p>
<p>(i) $\mbox{card}(X)=2^{\aleph_0}$;</p>
<p>(ii) for an arbitrary $n
>2$ there are no $n$ points in $X$ which constitute an arithmetic
progression.</p>
<p>The proof of this fact essentially implies the existence of Lebesgue measurable
Hamel bases (over the rationals $\mathbb{Q}$) in $\mathbb{R}$.</p>
|
1,895,422 | <p>I would like to know if it is possible to prove the identity $\mathbb E[e^{4W_t-8t}(W_t-4t)^4]=3t^2$ without the use of a measure change and Girsanov Theorem? I only know the solution by using the exponential martingale for the first term and then simplify terms under the new measure. </p>
| Mark Fischler | 150,362 | <p>For arbitrary real $a$,
$$\int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{x^2}{2\sigma^2}}e^{4x-2a\sigma^2}(x-a\sigma^2)^4 \, dx = e^{(8-2a)\sigma^2}\sigma^4 \left( (a-4)^4\sigma^4 + 6(a-4)^2 \sigma^2 + 3\right)
$$
The relation you want to prove is the special case of $a=4$.</p>
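The special case $a=4$ is easy to confirm numerically: after completing the square, $e^{4x-8t}$ times the $N(0,t)$ density is exactly the $N(4t,t)$ density, so the integral should equal the fourth central moment $3t^2$. The following is my own illustrative Python check using composite Simpson quadrature:

```python
import math

def integrand(x, t):
    # e^{4x - 8t} (x - 4t)^4 against the N(0, t) density
    return math.exp(4 * x - 8 * t) * (x - 4 * t) ** 4 \
        * math.exp(-x * x / (2 * t)) / math.sqrt(2 * math.pi * t)

def simpson(f, a, b, n=40000):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

for t in (0.5, 1.0, 2.0):
    lo, hi = 4 * t - 12 * math.sqrt(t), 4 * t + 12 * math.sqrt(t)  # +-12 sigma around 4t
    print(t, simpson(lambda x: integrand(x, t), lo, hi))           # ~ 3 t^2
```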
|
1,989,532 | <p>The equation I am considering is </p>
<p>$$x^{3}-63x=162$$</p>
<p>Here is how far I got:</p>
<p>$$3st=-63$$</p>
<p>$$s=\frac{-63}{3t}=-21t$$</p>
<p>$$s^{3}t^{3}=-162$$</p>
<p>$$(-21t)^{3}-t^{3}=-162$$</p>
<p>$$-9261t^{3}-t^{3}=-162$$</p>
<p>Where would I go from here?</p>
| Dietrich Burde | 83,966 | <p>The roots are given by the factorization
$$
x^3-63x-162= (x + 6)(x + 3)(x - 9),
$$
and this is much easier than using Cardano's formula (because it is not at all obvious from the formulas that the roots are in fact integers).</p>
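The factorization is easy to verify directly; here is my own illustrative Python sketch checking the roots and re-expanding the product:

```python
def p(x):
    return x**3 - 63*x - 162

# The three roots read off from the factorization (x + 6)(x + 3)(x - 9):
assert [p(r) for r in (-6, -3, 9)] == [0, 0, 0]

# Re-expand the product to recover the original coefficients (descending powers).
coeffs = [1]
for r in (6, 3, -9):                      # multiply the running polynomial by (x + r)
    coeffs = [a + r * b for a, b in zip(coeffs + [0], [0] + coeffs)]
print(coeffs)  # [1, 0, -63, -162]
```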
|
2,021,324 | <blockquote>
<p>Showing $\frac12 3^{2-p}(x+y+z)^{p-1}\le\frac{x^p}{y+z}+\frac{y^p}{x+z}+\frac{z^p}{y+x}$ with $p>1$, and $x,y,z$ positive </p>
</blockquote>
<p>By Jensen I got taking $p_1=p_2=p_3=\frac13$, (say $x=x_1,y=x_2,z=x_3$)</p>
<p>$\left(\sum p_kx_k\right)^p\le\sum p_kx^p$ which means;</p>
<p>$3^{1-p}(x+y+z)^p\le x^p+y^p+z^p$</p>
<p>and if I assume wlog $x\ge y\ge z$ then $\frac{1}{y+z}\ge\frac{1}{x+z}\ge\frac{1}{x+y}$</p>
<p>so these $2$ sequences are similarly ordered, and by rearrangement inequality the rest follows ?</p>
<p>I should use power means, but any other solution is also appreciated.</p>
| Michael Rozenberg | 190,319 | <p>Indeed, Jensen works.</p>
<p>Let $x+y+z=3$.</p>
<p>Hence, we need to prove that $\sum\limits_{cyc}f(x)\geq0$, where $f(x)=\frac{x^p}{3-x}-\frac{1}{2}$.</p>
<p>$f''(x)=\frac{x^{p-2}\left((p-2)(p-1)x^2-6p(p-2)x+9p(p-1)\right)}{(3-x)^3}$.</p>
<p>If $p=2$ then $f''(x)>0$.</p>
<p>If $p>2$ then, since $f''(0)>0$, $\lim\limits_{x\rightarrow3^-}f''(x)>0$ and $\frac{3p}{p-1}>3$, we see that $f''(x)>0$.</p>
<p>If $1<p<2$ then, since $f''(0)>0$ and $\lim\limits_{x\rightarrow3^-}f''(x)>0$, we see that $f''(x)>0$.</p>
<p>Thus, your inequality follows from Jensen.</p>
<p>Done!</p>
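A quick numerical sanity check of the inequality (my own illustrative Python sketch; it also confirms equality at $x=y=z$, where both sides equal $\frac32 t^{p-1}$):

```python
import random

def lhs(x, y, z, p):
    return 0.5 * 3 ** (2 - p) * (x + y + z) ** (p - 1)

def rhs(x, y, z, p):
    return x**p / (y + z) + y**p / (x + z) + z**p / (x + y)

random.seed(4)
for _ in range(1000):
    p = random.uniform(1.01, 5.0)
    x, y, z = (random.uniform(0.01, 10.0) for _ in range(3))
    assert lhs(x, y, z, p) <= rhs(x, y, z, p) + 1e-12
# equality at x = y = z (both sides reduce to (3/2) t^(p-1)):
assert abs(lhs(1, 1, 1, 2.7) - rhs(1, 1, 1, 2.7)) < 1e-12
print("inequality verified on random samples")
```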
|
969,871 | <p>First, by definition of odd functions I have $-f(x) = f (-x)$. It would follow that $|–f(x)|= |f(-x)|$. Then given $\epsilon > 0$ there exists $\delta > 0$ s.t. when $0 < |x| < \delta$, it follows that $|f(x)-M| < \epsilon$. With $M = 0$, then $|f(x)| < \epsilon$. (approaching $0$ from the left). If I look at $f (-x)$, assuming identical $\epsilon$ and $\delta$ because of symmetry about the origin, I have the same conclusion that $|f(x)| < \epsilon$. But I feel as though this is too general and I can’t wrap my head around stating it with math fluency that leads to a solid conclusion. Or that I'm even in the correct rabbit hole. Any help would be great.</p>
| Community | -1 | <p>Since $f$ is odd we have $f(0)=0$, and since $f$ is continuous,
$$\lim_{x\to0}f(x)=f(0)=0$$</p>
|
2,530,221 | <p>Let $X$ be a topological space, and equip its homeomorphism group $\text{Homeo}(X)$ with the compact-open topology. I probably need to restrict $X$ to be locally compact for it to play off with $\text{Homeo}(X)$ nicely.</p>
<p>The usual definition of $X$ being <b>homogeneous</b> is that given $x,y \in X$ there exists $f \in \text{Homeo(X)}$ with $f(x) = y$. My question: When can this choice of $f$ be made continuous? More precisely, is there a natural class of homogeneous spaces for which there is always a continuous map $$\theta: \{(x,y): x \neq y\} \rightarrow \text{Homeo}(X)$$
such that $\theta(x,y)$ carries $x$ to $y$. Is it true for connected manifolds?</p>
| Alex Ravsky | 71,850 | <p>One such natural class consists of topological groups $G=X$ with $\theta(x,y)(t)=yx^{-1}t$ for each $t\in G$. Indeed, clearly $\theta(x,y)(x)=y$ for all $x,y\in G$. To prove the continuity of the map $\theta: G\times G\to \operatorname{Homeo}(X)$, consider the subbase $\mathcal B$ of the space $\operatorname{Homeo}(X)$ consisting of the sets $[K,O]=\{f\in\operatorname{Homeo}(X): f(K)\subset O\}$, where $K$ is a compact subset of $G$ and $O$ is an open subset of $G$.
It suffices to show that $\theta^{-1}([K,O])$ is open for each $[K,O]\in\mathcal B$. Let $\theta(x,y)\in [K,O]$, that is, $yx^{-1}K\subset O$. The continuity of the group operations implies that for each $t\in K$ there exist open neighborhoods $U_t$, $V_t$, and $W_t$ of the points $y$, $x$, and $t$ respectively such that $U_tV_t^{-1}W_t\subset O$. The family $\{W_t:t\in K\}$ is an open cover of the set $K$. Since the set $K$ is compact, there exists a finite subset $F$ of $K$ such that $K\subset \bigcup\{W_t:t\in F\}$. Put
$U=\bigcap \{U_t:t\in F\}$ and $V=\bigcap \{V_t:t\in F\}$. Then $UV^{-1}K\subset O$, so $\theta(V\times U)\subset [K,O]$, and hence $\theta^{-1}([K,O])$ is open.</p>
|
3,115,566 | <p>Convergence of the series for <span class="math-container">$a \in \mathbb R$</span> <span class="math-container">$$\sum_{n=1}^\infty\sin\left(\pi \sqrt{n^2+a^2} \right)$$</span> </p>
<p>I saw this problem in a calculus book and it gave a hint that says </p>
<p><strong>HINT</strong> First show that <span class="math-container">$$\sin\left(\pi \sqrt{n^2+a^2} \right)=(-1)^n\sin\frac{\pi a^2}{\sqrt{n^2+a^2}+n}\sim(-1)^n\frac{\pi a^2}{2n}\qquad (n \to\infty)$$</span> </p>
<p>I was able to show that <span class="math-container">$$\sin\left(\pi \sqrt{n^2+a^2} \right)=\sin\left(\pi \sqrt{n^2+a^2} -\pi n+\pi n\right)=\sin\left(\pi (\sqrt{n^2+a^2}-n)+\pi n \right)=(-1)^n\sin\left(\pi (\sqrt{n^2+a^2}-n) \right)=(-1)^n\sin\frac{\pi a^2}{\sqrt{n^2+a^2}+n}$$</span></p>
<p>But what I don't understand is how they came up with that equivalence. At first I thought they used the limit comparison test, but now I can see that you can't use that test because we're dealing with an alternating series. Did they make a mistake or something?</p>
<p>Can somebody help me understand this hint and how to solve this problem?</p>
| Peter Foreman | 631,494 | <p>For small <span class="math-container">$x$</span> we have that <span class="math-container">$\sin{(x)} \approx x$</span>. One can prove this from the Taylor series expansion of <span class="math-container">$\sin{(x)}$</span>. As
<span class="math-container">$$\frac{\pi a^2}{\sqrt{n^2+a^2}+n} \approx \frac{\pi a^2}{2n}$$</span> for large <span class="math-container">$n$</span> and <span class="math-container">$1/n$</span> is very small for large <span class="math-container">$n$</span> we then have
<span class="math-container">$$(-1)^n\sin\frac{\pi a^2}{\sqrt{n^2+a^2}+n} \approx (-1)^n\frac{\pi a^2}{2n}$$</span></p>
|
2,592,999 | <p>Prove that $$\lim_{x \to 2} x^3 = 8$$</p>
<p>My attempt, </p>
<p>Given $\epsilon>0$, we must find $\delta>0$ such that $$|x^3-8|<\epsilon \quad \text{whenever} \quad 0<|x-2|<\delta.$$ </p>
<p>$$|(x-2)(x^2+2x+4)|<\epsilon$$</p>
<p>I'm stuck here. Hope someone could continue the solution and explain it for me. Thanks in advance.</p>
| Shashi | 349,501 | <p>$\newcommand{\Res}{\operatorname{Res}}\newcommand{\Arg}{\operatorname{Arg}}$I understand that in a Complex Analysis class, if the problem is to find a definite integral, then very likely you are expected to use the Residue Theorem on that specific integral. </p>
<p>I provided you a <a href="https://math.stackexchange.com/questions/1703805/finding-the-integral-i-int-01x-2-31-x-1-3dx">link</a> in the comments, but I will still guide you to the right place. I use an approach similar to Mark Viola's approach.</p>
<p>Let's start by defining $$f(z):=\frac{1}{(1+z^2)(-z)^{2/3}(1-z)^{1/3}}$$ As usual I use the prinicipal value logarithm to define the powers. That is also the main reason I put $(-z)$ in the denominator which seems strange in the first place.</p>
<p>Okay now you can verify that $f(z)$ is meromorphic on $\mathbb{C}\setminus [0,1]$. We use the dog bone contour $C$ where the dog bone is on $[0,1]$. The large circular part of the contour will be called $C_R$ and the radius is then $R\gg 1$. It is easy to see using the <a href="https://en.wikipedia.org/wiki/Estimation_lemma" rel="nofollow noreferrer">ML-lemma/estimation lemma</a> that the contribution of it is zero when $R\to\infty$. We also have the contours on the dog bone, some have circular form with radius $\epsilon>0$, but even those will have no contribution when $\epsilon\to 0$. These things are all left for you to verify! We are only left with the straight lines that go from $0$ to $1$ call it $K^+$ and the other one going from $1$ to $0$ from below call it $K^-$. So:
\begin{align}
\oint_C f(z)\,dz = \int_{K^+}f(z)\,dz+\int_{K^-}f(z)\,dz
\end{align}
You can verify with the choice for the principal log that we have:
\begin{align}\tag{1}
\oint_C f(z)\,dz = (e^{i2\pi/3}-e^{-i2\pi/3})\int^1_0 \frac{1}{(1+x^2)x^{2/3}(1-x)^{1/3}}\,dx
\end{align}
In $(1)$ you have to do some work. We can simplify more to get:
\begin{align}
\oint_C f(z)\,dz = 2i\sin(2\pi/3)\int^1_0 \frac{1}{(1+x^2)x^{2/3}(1-x)^{1/3}}\,dx
\end{align}</p>
<p>We did not apply the Residue Theorem yet. How many residues do we have inclosed by our contour? Two, namely $z=i$ and $z=-i$. So we have:
\begin{align}
\oint_C f(z)\,dz =2\pi i\left(\Res_{z=i} f(z)+\Res_{z=-i}f(z)\right)
\end{align}
I do one residue; the other one is for you.</p>
<p>\begin{align}
\Res_{z=i} f(z) &= \Res_{z=i}\frac{1}{(1+z^2)(-z)^{2/3}(1-z)^{1/3}}\\
&=\frac{1}{2i(-i)^{2/3}(1-i)^{1/3}}
\end{align}
We have $(-i)^{2/3}=\exp\left( i\frac{2}{3}\Arg(-i)\right)=\exp(-i\pi/3)$ and $(1-i)^{1/3}=\exp\left(\frac{1}{3}\ln(\sqrt[]{2})+ i\frac{1}{3}\Arg(1-i)\right)=\sqrt[6]{2}\exp(-i\pi/12)$ so:
\begin{align}
\Res_{z=i} f(z) = \frac{e^{i5\pi/12}}{2i\sqrt[6]{2}}
\end{align}
The other residue is:
\begin{align}
\Res_{z=-i} f(z) =- \frac{e^{-i5\pi/12}}{2i\sqrt[6]{2}}
\end{align}
So we get:
\begin{align}
\oint_C f(z)\,dz=\frac{2\pi i}{\sqrt[6]{2}}\sin(5\pi/12)
\end{align}
Finally:
\begin{align}
\int^1_0 \frac{1}{(1+x^2)x^{2/3}(1-x)^{1/3}}\,dx=\pi\frac{\sin(5\pi/12)}{\sqrt[6]{2}\sin(2\pi/3)}
\end{align}
That can be simplified further to get:</p>
<blockquote>
<p>\begin{align}
\int^1_0 \frac{1}{(1+x^2)x^{2/3}(1-x)^{1/3}}\,dx=\frac{\pi}{\sqrt[3]{4}}\left(1+\frac{1}{\sqrt[]{3}}\right)
\end{align}</p>
</blockquote>
|
255,215 | <p>I am very new to Mathematica and already spent a lot of time trying to do this but failed.</p>
<p>I am trying to solve an ODE:</p>
<pre><code>solution = DSolve[{-((m (1 + m) + 4/(9 (-2/3 + t) t)) y[t]) +
2 (-1/3 + t) y'[t] + (-2/3 + t) t y''[
t] == (-4 (1 + C/2))/(9 (-2/3 + t) t), y[1] == 1, y'[1] == C},
y, t]
</code></pre>
<p>where <span class="math-container">$m$</span> is a nonnegative integer and <span class="math-container">$C$</span> is a real number.</p>
<p>I want to show that there exists a <span class="math-container">$C$</span> such that the solution is <span class="math-container">$0$</span> at infinity.
When I try this code:</p>
<pre><code>Limit[y[t] /. solution[[1]], t -> Infinity, m \[Element] Integers]
</code></pre>
<p>it just spits out the same thing.</p>
<p>What should I do?
(Note that I don't need to find that value of <span class="math-container">$C$</span>; I just need to show that for every <span class="math-container">$m$</span>, there is a number <span class="math-container">$C$</span> for which the solution vanishes at infinity.)</p>
<p>EDIT:</p>
<p>I managed to verify that it's true for many values of <span class="math-container">$m$</span>. Here is the code I used for <span class="math-container">$m=10$</span>.</p>
<pre><code>solutionm =
DSolve[{-((10 (10 + 1) + 4/(9 (-2/3 + t) t)) y[t]) +
2 (-1/3 + t) y'[t] + (-2/3 + t) t y''[
t] == (-4 (1 + C/2))/(9 (-2/3 + t) t), y[1] == 1, y'[1] == C},
y, t]
</code></pre>
<pre><code>Limit[FullSimplify[Re[y[t] /. solutionm[[1]]]], t -> Infinity,
Assumptions -> C \[Element] Reals]
</code></pre>
<p>Which spits out:</p>
<pre><code>DirectedInfinity[360 (1036 - 943 Log[3]) + C (-59572 + 54225 Log[3])]
</code></pre>
<p>Then I choose the <span class="math-container">$C$</span> that makes that number in "DirectedInfinity" zero:</p>
<pre><code>{a} = Solve[360 (1036 - 943 Log[3]) + C (-59572 + 54225 Log[3]) == 0,
C]
a = C /. a[[1]]
</code></pre>
<p>Then when <span class="math-container">$C=a$</span>, the limit is 0:</p>
<pre><code>Limit[Re[y[t] /. solutionm[[1]]], t -> Infinity,
Assumptions -> C == a]
0
</code></pre>
<p>I tried for many values of <span class="math-container">$m$</span>, and I get the same thing. When I try to make <span class="math-container">$m$</span> arbitrary, something weird happens:</p>
<pre><code>$Assumptions={m \[Element] Integers, C \[Element] Reals}
solutionm =
DSolve[{-((m (m + 1) + 4/(9 (-2/3 + t) t)) y[t]) +
2 (-1/3 + t) y'[t] + (-2/3 + t) t y''[
t] == (-4 (1 + C/2))/(9 (-2/3 + t) t), y[1] == 1, y'[1] == C},
y, t]
</code></pre>
<p>When I run <code>y[t] /. solutionm[[1]] /. {m -> 1}</code>, it gives me an error: <code>Power::infy: Infinite expression 1/0 encountered.</code> I get the same error with any <span class="math-container">$m$</span>. I am not sure why this happens.</p>
<p>Also, when I repeat the same thing as above, and run</p>
<pre><code>Limit[FullSimplify[Re[y[t] /. solutionm[[1]]]], t -> Infinity,
Assumptions -> C \[Element] Reals]
</code></pre>
<p>it doesn't compute the limit. It just spits out the same thing. Is there a way around this? Or, as Nasser suggested, is this too complicated for Mathematica?
Also, I don't need to find that <span class="math-container">$C$</span>. I just want to show that there exists a <span class="math-container">$C$</span> for which the solution vanishes at infinity.</p>
| Cesareo | 62,129 | <p>We can gain insight into the solutions' behavior for each <code>m</code> by doing</p>
<pre><code>tmax = 1000;
solution = ParametricNDSolve[{-((m (1 + m) + 4/(9 (-2/3 + t) t)) y[t]) + 2 (-1/3 + t) y'[t] + (-2/3 + t) t y''[t] == (-4 (1 + c/2))/(9 (-2/3 + t) t), y[1] == 1, y'[1] == c}, y, {t, 1, tmax}, {m, c}]
m = 1;
Plot3D[Evaluate[y[m, c][t] /. solution], {t, 1, tmax}, {c, -6, 0}]
</code></pre>
|
2,326,072 | <p>We define 3 sequences $(a_n),(b_n),(c_n)$ with positive terms so that
$$ a_{n+1}\leq\frac{b_n+c_n}{3}\ ,\ b_{n+1}\leq\dfrac{a_n+c_n}{3}\ ,\ c_{n+1}\leq\dfrac{a_n+b_n}{3} $$
Check if any of $(a_n),(b_n),(c_n)$ converge, and if they do find their limit.</p>
<p>PROOF</p>
<p>My part of the proof is this: By adding the above inequalities we get
$$ a_{n+1}+b_{n+1}+c_{n+1}\leq\frac{2}{3}(a_n+b_n+c_n) $$
We define the sequence $ (x_n) $ with $ x_n=a_n+b_n+c_n $ so we have
$$ x_{n+1}\leq\frac{2}{3}x_n\Rightarrow \frac{x_{n+1}}{x_n}\leq\frac{2}{3}<1 $$
which implies that $ (x_n) $ is decreasing. We also have $ x_n>0, \forall n $ thus it's bounded, and so it converges to $ 0 $. Is it correct to say that since $ a_n<x_n $ then $ a_n\to0 $?</p>
<p>EDIT</p>
<p>I forgot to mention that we also prove that $x_n\to0$.</p>
| CY Aries | 268,334 | <p>Since $\{x_n\}$ is decreasing and bounded from below, it is convergent. But this does not guarantee that the limit is zero. Indeed, since</p>
<p>$$0< x_{n+1}\le\frac{2}{3}x_n$$</p>
<p>we have</p>
<p>$$0< x_n\le\left(\frac{2}{3}\right)^{n-1}x_1$$</p>
<p>This implies that $x_n\to0$ as $n\to\infty$.</p>
<p>We have $\displaystyle 0<a_n<\frac{1}{3}x_n$, $\displaystyle 0<b_n<\frac{1}{3}x_n$ and $\displaystyle 0<c_n<\frac{1}{3}x_n$.</p>
<p>So $a_n\to 0$, $b_n\to 0$ and $c_n\to 0$ as $n\to\infty$.</p>
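A quick numerical illustration (using the equality case $a_{n+1}=(b_n+c_n)/3$ etc., chosen here only as an example) shows the geometric decay of $x_n=a_n+b_n+c_n$ and of each sequence:

```python
# Equality case of the recursion, with arbitrary positive starting values.
a, b, c = 5.0, 1.0, 2.5
x = a + b + c
for n in range(60):
    a, b, c = (b + c) / 3, (a + c) / 3, (a + b) / 3
    x_new = a + b + c
    assert x_new <= (2/3) * x + 1e-12   # x_{n+1} <= (2/3) x_n at every step
    x = x_new
```

After 60 steps $x_n \le (2/3)^{60} x_1$, so all three sequences are already below $10^{-9}$.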
|
1,690,248 | <p>Hello I am hoping to find some direction on solving this try it yourself problem in my textbook.</p>
<p>Let S be an arbitrary set of symbols and let $\Phi = \{v_0 \equiv t | t \in T^S\} \cup \{\exists v_0 \exists v_1 \neg v_0 \equiv v_1\}$. </p>
<p><em>Note: $T^S$ is the set of all S-terms.</em></p>
<p>Show that $Con\Phi$ holds and that there is no consistent set in $L^S$ which includes $\Phi$ and contains witnesses.</p>
<p>I think my confusion begins with not understanding exactly what $\Phi$ implies with its 2 formulas. I think it states all terms are equivalent and every term has a negation? If my understanding is correct then to me $\Phi$ seems inconsistent since every S-term is equivalent, and in second formula $\neg v_0 \equiv v_1$ would only be true if the negation of every S-term is itself?</p>
<p>Thank you in advance for helping me wrangle this problem.</p>
| aventurin | 308,622 | <p>You can use a ring buffer of appropriate size. Then the average speed is the sum of the numbers in the ring buffer divided by the count of numbers currently in the buffer. The sum of the numbers at time $t+1$ is the sum at time $t$ plus the number added to the ring buffer minus the number that leaves the ring buffer (or zero if the buffer is not full yet).</p>
<p>However, this is probably not a mathematics question.</p>
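A minimal sketch of that idea in Python (the names are mine, not from any particular library): a fixed-size ring buffer plus a running sum gives an O(1) moving average per update.

```python
from collections import deque

class MovingAverage:
    """Average of the last `size` samples, updated in O(1)."""
    def __init__(self, size):
        self.buf = deque(maxlen=size)   # deque with maxlen evicts the oldest on append
        self.total = 0.0

    def update(self, value):
        if len(self.buf) == self.buf.maxlen:
            self.total -= self.buf[0]   # the number leaving the ring buffer
        self.buf.append(value)
        self.total += value
        return self.total / len(self.buf)

avg = MovingAverage(3)
results = [avg.update(v) for v in [10, 20, 30, 40]]
# buffer not full yet: 10, 15, 20; then (20+30+40)/3 = 30
```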
|
1,012,652 | <p>I have a problem with the following question.</p>
<p>For which $n$ does the following equation have solutions in complex numbers</p>
<p>$$|z-(1+i)^n|=z $$</p>
<p>Progress so far.</p>
<ol>
<li><p>Let $z=a+bi$.</p></li>
<li><p>Since modulus represents a distance, the imaginary part of RHS has to be 0. This immediately makes $b=0$.</p></li>
<li><p>If solutions are in the complex domain $|a-(1+i)^n|=a $ by 2., and $a$ is Real. </p></li>
<li><p>?</p></li>
</ol>
<p>I don't know where to go from here. </p>
| DenDenDo | 166,977 | <p>You already found that z has to be real, now just write out $|z| = \sqrt{Re(z)^2 + Im(z)^2}$<br>
Note that $i+1 = \sqrt{2} e^{i\frac{\pi}{4}} = \sqrt{2} \cos(\frac{\pi}{4}) + i\sqrt{2} \sin(\frac{\pi}{4})$<br>
Then you need to find when $a^2 =|a-(1+i)^n|^2 = a^2 - 2 \cdot 2^\frac{n}{2} a \cos(\frac{\pi}{4}n) + 2^n$<br>
That is, finding the zeroes of $2^n - 2 \cdot 2^\frac{n}{2} a \cos(\frac{\pi}{4}n)$, which gives $a = 2^{\frac{n}{2}-1}/\cos(\frac{\pi}{4}n)$<br>
This is a valid solution (a non-negative real $z=a$) exactly when $\cos(\frac{\pi}{4}n)>0$, i.e. for $n \equiv 0, \pm 1 \pmod 8$</p>
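A quick numeric check (my own sketch): since $z$ must be real, squaring $|a-(1+i)^n|=a$ gives $a = |w|^2/(2\,\mathrm{Re}\,w)$ with $w=(1+i)^n$, which is admissible precisely when $\mathrm{Re}\,w>0$:

```python
def solution_a(n):
    """Return the real solution z = a of |z - (1+i)^n| = z, or None if none exists."""
    w = (1 + 1j) ** n
    if w.real <= 0:                     # need a = |w|^2 / (2 Re w) >= 0
        return None
    return abs(w) ** 2 / (2 * w.real)

solvable = [n for n in range(-8, 9) if solution_a(n) is not None]
for n in solvable:
    a = solution_a(n)
    assert abs(abs(a - (1 + 1j) ** n) - a) < 1e-9   # |a - (1+i)^n| = a indeed holds
```

Over $-8 \le n \le 8$ the solvable exponents are exactly those with $n \equiv 0, \pm 1 \pmod 8$, e.g. $n=0$ gives $z=1/2$.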
|
2,029,279 | <blockquote>
<p>Let <span class="math-container">$T:\Bbb R^2\to \Bbb R^2$</span> be a linear transformation such that <span class="math-container">$T (1,1)=(9,2)$</span> and <span class="math-container">$T(2,-3)=(4,-1)$</span>.</p>
<p>A) Determine if the vectors <span class="math-container">$(1,1)$</span> and <span class="math-container">$(2,-3)$</span> form a basis.<br>
B) Calculate <span class="math-container">$T(x,y)$</span>.</p>
</blockquote>
<p>I need help with these, please I'm stuck, don't even know how to start them...</p>
| copper.hat | 27,978 | <p>If two vectors $x_1,x_2$ are linearly dependent, then either
$x_1 = \lambda x_2$ or $x_2=\lambda x_1$ for some $\lambda$, in other words
they lie on the same line.</p>
|
986,875 | <p>A service center consists of two servers, each working at an exponential rate
of two services per hour. If customers arrive at a Poisson rate of three per hour, then,
assuming a system capacity of at most three customers,
What fraction of potential customers enter the system?</p>
<p>I was wondering if I could treat the two servers as one big system, since the customers don't have a preference one way or the other.</p>
| cyberboy | 395,432 | <p>The fraction of potential customers that enter the system is $1 - \pi_N$, where $N$ is the system capacity:<br>
$1 - \pi_3 = 1 - 27/143 = 116/143 \approx 0.811188$</p>
<p>A similar question with a solution (see question 3 there):<br>
<a href="https://www2.isye.gatech.edu/~ashapiro/publications/MT2-solutions.pdf" rel="nofollow noreferrer">https://www2.isye.gatech.edu/~ashapiro/publications/MT2-solutions.pdf</a></p>
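The steady-state probabilities behind those numbers can be reproduced directly from the birth–death balance equations for an M/M/2/3 queue ($\lambda=3$, $\mu=2$ per server, capacity 3) — a sketch:

```python
from fractions import Fraction

lam, mu, servers, capacity = Fraction(3), Fraction(2), 2, 3

# Unnormalized birth-death weights: w_n = w_{n-1} * lam / (min(n, servers) * mu)
w = [Fraction(1)]
for n in range(1, capacity + 1):
    w.append(w[-1] * lam / (min(n, servers) * mu))

total = sum(w)
p = [x / total for x in w]        # stationary distribution p_0 .. p_3
blocked = p[capacity]             # pi_N: probability an arrival finds the system full
entering = 1 - blocked
```

Exact rational arithmetic gives $\pi_3 = 27/143$ and an entering fraction of $116/143 \approx 0.811188$, matching the answer.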
|
3,305,861 | <p>This is an answer from Slader for <em>Understanding Analysis</em> by Abbott. The book suggests using <span class="math-container">$a = a-b+b$</span>, but the steps in this answer don't make any sense to me. For one, why does s/he write <span class="math-container">$a=a-b+b$</span> when it looks like they used <span class="math-container">$a = a-b$</span>, which doesn't seem valid because then you don't have a true statement for all <span class="math-container">$a,b$</span> but only when <span class="math-container">$b = 0$</span>?</p>
<p><a href="https://i.stack.imgur.com/0KQQn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0KQQn.png" alt=""></a></p>
| Alejandro Nasif Salum | 481,187 | <p>First of all, it is real because it converges and because <span class="math-container">$\arctan\left(\frac1{1+x^2}\right)$</span> is a real number for every <span class="math-container">$x\in \mathbb R$</span>.</p>
<p>On the other hand, that some number can be represented with an expression involving <span class="math-container">$i$</span> doesn't mean it is not a real number. For instance,
<span class="math-container">$$\frac1i+i=0\in \mathbb R.$$</span></p>
|
3,305,861 | <p>This is an answer from Slader for <em>Understanding Analysis</em> by Abbott. The book suggests using <span class="math-container">$a = a-b+b$</span>, but the steps in this answer don't make any sense to me. For one, why does s/he write <span class="math-container">$a=a-b+b$</span> when it looks like they used <span class="math-container">$a = a-b$</span>, which doesn't seem valid because then you don't have a true statement for all <span class="math-container">$a,b$</span> but only when <span class="math-container">$b = 0$</span>?</p>
<p><a href="https://i.stack.imgur.com/0KQQn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0KQQn.png" alt=""></a></p>
| Yuriy S | 269,624 | <p>A pretty integral, and I'm not sure I would be able to guess the primitive right away.</p>
<p>Still, substitution is always a good way to simplify things:</p>
<p><span class="math-container">$$I=\int_a^b \arctan\left(\dfrac{1}{1+x^2}\right)dx, \\ 0 \leq a<b$$</span></p>
<hr>
<p><span class="math-container">$$x= \tan t$$</span></p>
<p><span class="math-container">$$I=\int_{\arctan a}^{\arctan b} \arctan\left(\cos^2 t\right)\frac{dt}{\cos^2 t}$$</span></p>
<p><span class="math-container">$$u = \cos t$$</span></p>
<p><span class="math-container">$$I=\int_{\cos \arctan b}^{\cos \arctan a} \arctan\left(u^2 \right)\frac{du}{u^2 \sqrt{1-u^2}}$$</span></p>
<p><span class="math-container">$$I=\int_{1/\sqrt{1+b^2}}^{1/\sqrt{1+a^2}} \arctan\left(u^2 \right)\frac{du}{u^2 \sqrt{1-u^2}}$$</span></p>
<p><span class="math-container">$$v=u^2$$</span></p>
<p><span class="math-container">$$I=\frac{1}{2} \int_{1/(1+b^2)}^{1/(1+a^2)} \arctan v \frac{dv}{v^{3/2} \sqrt{1-v}}$$</span></p>
<p>Now we can use integration by parts:</p>
<p><span class="math-container">$$\int \frac{1}{2} \frac{dv}{v^{3/2} \sqrt{1-v}}=-\frac{\sqrt{1-v}}{\sqrt{v}}$$</span></p>
<p><span class="math-container">$$(\arctan v)'= \frac{1}{1+v^2}$$</span></p>
<p><span class="math-container">$$I=-\sqrt{\frac{1}{v}-1} \arctan v \bigg|_{1/(1+b^2)}^{1/(1+a^2)}+ \int_{1/(1+b^2)}^{1/(1+a^2)} \frac{\sqrt{1-v} ~dv}{\sqrt{v} (1+v^2)}$$</span></p>
<blockquote>
<p><span class="math-container">$$I=b \arctan \frac{1}{1+b^2}-a \arctan \frac{1}{1+a^2} + 2 \int_{1/\sqrt{1+b^2}}^{1/\sqrt{1+a^2}} \frac{\sqrt{1-u^2} ~du}{1+u^4}$$</span></p>
</blockquote>
<p>For the second part we get back to trigonometric functions again:</p>
<p><span class="math-container">$$\int_{1/\sqrt{1+b^2}}^{1/\sqrt{1+a^2}} \frac{\sqrt{1-u^2} ~du}{1+u^4}=\int_{\arcsin(1/\sqrt{1+b^2})}^{\arcsin(1/\sqrt{1+a^2})} \frac{\cos^2 w}{1+\sin^4 w} dw$$</span></p>
<p>Now we use the tangent half angle substitution:</p>
<p><span class="math-container">$$p = \tan \frac{w}{2} \\ \cos w= \frac{1-p^2}{1+p^2} \\ \sin w= \frac{2p}{1+p^2} \\ dw = \frac{2dp}{1+p^2}$$</span></p>
<p>Denoting:</p>
<p><span class="math-container">$$\alpha= \tan \left(\frac{1}{2} \arcsin \frac{1}{\sqrt{1+a^2}} \right) \\ \beta = \tan \left(\frac{1}{2} \arcsin \frac{1}{\sqrt{1+b^2}} \right)$$</span></p>
<blockquote>
<p><span class="math-container">$$\int_{1/\sqrt{1+b^2}}^{1/\sqrt{1+a^2}} \frac{\sqrt{1-u^2} ~du}{1+u^4}=\int_{\beta}^{\alpha} \frac{2 (1-p^2)^2 (1+p^2) }{(1+p^2)^4+16 p^4} dp$$</span></p>
</blockquote>
<p>Now we have a rational function under the integral, which can be integrated using partial fractions.</p>
<p>I will stop here, because the point was, we don't really need CAS to integrate this, and no complex numbers were harmed in the process.</p>
<p>Well, it might be much easier to do the partial fractions in the last integral using complex numbers. But I'm sure we can avoid them.</p>
<p>Note that for the case in the OP, we have:</p>
<p><span class="math-container">$$a=0, b=\infty$$</span></p>
<p>Which means we need to find:</p>
<p><span class="math-container">$$\int_0^1 \frac{ (1-p^2)^2 (1+p^2) }{(1+p^2)^4+16 p^4} dp= \frac{1}{4} \sqrt{\frac{\sqrt{2}-1}{2}} ~ \pi$$</span></p>
<p>I confirmed this value numerically, but Mathematica can't find it from the integral.</p>
<p>Though it does find:</p>
<p><span class="math-container">$$\int_0^\infty \frac{ (1-p^2)^2 (1+p^2) }{(1+p^2)^4+16 p^4} dp= \frac{1}{2} \sqrt{\frac{\sqrt{2}-1}{2}} ~ \pi$$</span></p>
<p>To be honest, complex residues seems like the best method in this case to me.</p>
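For the record, the finite-interval numerical claim above is easy to reproduce (a sketch; the integrand is smooth on $[0,1]$, so plain Simpson quadrature suffices):

```python
import math

def simpson(f, a, b, n=4000):
    # Composite Simpson rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

g = lambda p: (1 - p*p)**2 * (1 + p*p) / ((1 + p*p)**4 + 16 * p**4)
val = simpson(g, 0.0, 1.0)
target = math.pi / 4 * math.sqrt((math.sqrt(2) - 1) / 2)
```

(The substitution $p \mapsto 1/p$ maps the integrand so that $g(1/p)=p^2 g(p)$, which is why the integral over $[0,\infty)$ is exactly twice the one over $[0,1]$.)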
|
3,990,717 | <p>I'm currently teaching myself Abstract Algebra and Group theory. The book that I'm reading is <em>A First Course In Abstract Algebra</em> by <em>John B. Fraleigh</em>. I know this might be a basic question that every student is taught in class; perhaps it is my own careless reading of the book that raises it. So given a binary operation <span class="math-container">$\ast $</span> on a set <span class="math-container">$S$</span>, how can we compute:<br />
<span class="math-container">$$a \ast b \ast c$$</span>
Actually, on page 23 of the book, the author does mention this, but I still cannot get it. I don't know if we should compute this from left to right or in reverse, or whether we should just leave it as is if <span class="math-container">$\ast$</span> is not associative. Moreover, if this binary operation is associative, is it correct to define:
<span class="math-container">$$a \ast b \ast c = \left(a \ast b \right)\ast c$$</span>
Then, after learning about <strong>Cyclic group</strong> there is a problem of which can be stated as below:</p>
<blockquote>
<p>Let <span class="math-container">$a$</span> and <span class="math-container">$b$</span> be elements of a group <span class="math-container">$G$</span>. Show that if <span class="math-container">$ab$</span> has finite order <span class="math-container">$n$</span>, then <span class="math-container">$ba$</span> also has order <span class="math-container">$n$</span>.</p>
</blockquote>
<p>I know that we should show that <span class="math-container">$(ba)^n = e$</span>, which can be derived from <span class="math-container">$(ab)^n = e$</span>. Unfortunately, since <span class="math-container">$G$</span> might not be an Abelian group, all I had managed before looking for a solution was manipulating <span class="math-container">$(ab)^n = e$</span> using the fact that <span class="math-container">$(ab)^{-1} = b^{-1} a^{-1}$</span>, since I didn't know how to move further after splitting <span class="math-container">$(ab)^n = (ab)(ab)^{n-1}$</span>. I was also confused about whether <span class="math-container">$(ab)^n = (ab)(ab)^{n-1}$</span> or <span class="math-container">$(ab)^n = (ab)^{n-1} (ab)$</span>. So I looked up the solution, and this is the manipulation I found: <a href="https://math.stackexchange.com/questions/622305/show-that-if-ab-has-finite-order-n-then-ba-also-has-order-n-fraleigh">Show that if ab has finite order n, then ba also has order n. - Fraleigh p. 47 6.46.</a>
So I have no idea how the following can be inferred:</p>
<p><span class="math-container">\begin{align}(\color{darkorange}{a}\color{darkcyan}{b})^n &= e \implies \color{darkorange}{a}\color{darkcyan}{b}(ab)^{n-1} = e \implies
\color{darkcyan}{b}(ab)^{n-1}\color{darkorange}{a} = e \implies (ba)^n = e
\end{align}</span>
I also checked another link in order to understand the solution: <a href="https://math.stackexchange.com/questions/3659250/let-g-be-a-group-and-a-b%E2%88%88g-suppose-order-of-ab-in-g-is-n-show-that?noredirect=1&lq=1">same question</a>, where the asker states that:
<span class="math-container">$$(ab)^n=\underbrace{(ab)(ab)(ab)\cdots(ab)}_{n~\text{copies}}$$</span>
<span class="math-container">$$=a\underbrace{(ba)(ba)(ba)\cdots(ba)}_{n-1~\text{copies}}b$$</span>
<span class="math-container">$$=a(ba)^{n-1} b$$</span>
Here the number of terms we're dealing with is now <span class="math-container">$n$</span>. I have no idea how he gets from the first line to the second, so I need some help with this. In summary, how do we compute a binary operation applied to more than 3 elements? Please address the problems I state above and give some other examples if possible. Thank you so much.</p>
| Mike | 544,150 | <p>Write</p>
<p><span class="math-container">$$(ab)^n = \underbrace{ababab \ldots ab}_{n {\text{ times}}}$$</span></p>
<p>Then</p>
<p><span class="math-container">$$(ba)^n = \underbrace{bababab \ldots ba}_{n {\text{ times}}} = (\underbrace{bababab \ldots ba}_{n {\text{ times}}})bb^{-1} = b(\underbrace{ababab \ldots ab}_{n {\text{ times}}})b^{-1} = b(ab)^nb^{-1}.$$</span></p>
<p>If <span class="math-container">$(ab)^n = e$</span>, then the line above gives</p>
<p><span class="math-container">$$(ba)^n = b(ab)^nb^{-1} = beb^{-1} = e.$$</span></p>
<p>Suppose we were to replace <span class="math-container">$e$</span> with any element <span class="math-container">$z \in Z$</span> that commutes with every other element of <span class="math-container">$G$</span>. Then what would we have?</p>
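The identity $(ba)^n = b(ab)^n b^{-1}$ implies that $ab$ and $ba$ always have the same order; this can be checked concretely in a non-abelian group, e.g. permutations of $\{0,1,2\}$ under composition (a quick sketch):

```python
from itertools import permutations

def compose(p, q):
    # (p o q)(i) = p(q(i)) for permutations given as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    # smallest n >= 1 with p^n = identity
    e = tuple(range(len(p)))
    q, n = p, 1
    while q != e:
        q, n = compose(q, p), n + 1
    return n

S3 = list(permutations(range(3)))
for a in S3:
    for b in S3:
        assert order(compose(a, b)) == order(compose(b, a))
```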
|
3,243,440 | <p>Find the number k such that:</p>
<p><span class="math-container">$$det\begin{bmatrix}
3a_1 & 2a_1 + a_2 - a_3 & a_3\\\
3b_1 & 2b_1 + b_2 - b_3 & b_3\\\
3c_1 & 2c_1 + c_2 - c_3 & c_3\end{bmatrix}$$</span></p>
<p><span class="math-container">$$ = k \bullet det\begin{bmatrix}
a_1 & a_2 & a_3\\\
b_1 & b_2 & b_3\\\
c_1 & c_2 & c_3\end{bmatrix}$$</span></p>
<p>I've been working on this question for a while now, and I still can't seem to get it out. What I first thought was the best option would be to expand out the first matrix to get a value multiplied by the second matrix (this value would be k). However, I kept having trouble, and eventually tried a new approach. I tried to compute both determinants to find the value of k. Nevertheless, I was still unable to answer the question. Any help or hints would be much appreciated. Thanks in advance.</p>
| Mohammad Riazi-Kermani | 514,496 | <p>Performing elementary column operations, <span class="math-container">$$det\begin{bmatrix}
3a_1 & 2a_1 + a_2 - a_3 & a_3\\\
3b_1 & 2b_1 + b_2 - b_3 & b_3\\\
3c_1 & 2c_1 + c_2 - c_3 & c_3\end{bmatrix}=$$</span></p>
<p><span class="math-container">$$det\begin{bmatrix}
3a_1 & 2a_1 + a_2 & a_3\\\
3b_1 & 2b_1 + b_2 & b_3\\\
3c_1 & 2c_1 + c_2 & c_3\end{bmatrix}=$$</span></p>
<p><span class="math-container">$$det\begin{bmatrix}
3a_1 & a_2 & a_3\\\
3b_1 & b_2 & b_3\\\
3c_1 & c_2 & c_3\end{bmatrix}=$$</span></p>
<p><span class="math-container">$$3det\begin{bmatrix}
a_1 & a_2 & a_3\\\
b_1 & b_2 & b_3\\\
c_1 & c_2 & c_3\end{bmatrix}$$</span></p>
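A quick numeric confirmation of $k=3$ (a sketch using a fixed matrix with nonzero determinant and the cofactor formula for $3\times 3$ determinants):

```python
def det3(m):
    # Cofactor expansion of a 3x3 determinant
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

rows = [[1, 2, 3], [4, 5, 7], [2, 9, 1]]                       # any matrix with det != 0
# Columns of the modified matrix: 3*a1, 2*a1 + a2 - a3, a3 (per row)
modified = [[3*r[0], 2*r[0] + r[1] - r[2], r[2]] for r in rows]
k = det3(modified) / det3(rows)
```

The column operations in the answer above explain the result: subtracting/adding multiples of other columns leaves the determinant fixed, and the remaining factor 3 in the first column scales it by 3.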
|
137,253 | <p>We have the following: $A,B,C$ are sets. </p>
<p>$C = \{ab: a \in A, b \in B\}$. </p>
<p>What is the relationship between $\sup(C),\sup(A)$, and $\sup(B)$?. </p>
<p>Is it: $$\sup(C) \le \sup(A) \sup(B)\;,$$ and why?.</p>
| Brian M. Scott | 12,042 | <p>If $A$ and $B$ are sets of non-negative real numbers, your conjecture is correct. Let $x=\sup A$ and $y=\sup B$. Then for any $c\in C$ there are $a\in A$ and $b\in B$ such that $c=ab$, and since $a,b\ge 0$, $ab\le xy$. That is, $c\le xy$ for every $c\in C$, so $\sup C\le xy=\sup A\sup B$.</p>
<p>But we can say more. Suppose for now that $0<x,y<\infty$, and let $\epsilon<\min\{x,y\}$ be positive. Then $$\begin{align*}(x-\epsilon)(y-\epsilon)&=xy-(x+y)\epsilon+\epsilon^2\\
&>xy-\epsilon(x+y)\;.
\end{align*}$$</p>
<p>Now $x=\sup A$, so there is an $a_\epsilon\in A\cap(x-\epsilon,x]$. Similarly, there is a $b_\epsilon\in B\cap(y-\epsilon,y]$, and clearly $xy-\epsilon(x+y)<a_\epsilon b_\epsilon\le xy$. Since $x+y$ is a positive real, by taking $\epsilon$ small enough we can make $\epsilon(x+y)$ as small as we like. Thus, we can find products $a_\epsilon b_\epsilon\in C$ as close to $xy$ as we like, and it follows that $xy=\sup C$.</p>
<p>It’s easy to check that this is also the case when one of $x$ and $y$ is $0$ and the other is finite, and when one of $x$ and $y$ is infinite and the other is positive. When one is $0$ and the other is infinite, $\sup C=0$. Thus, if we (perhaps somewhat arbitrarily) define $0\cdot\infty=0$, we can say that $$\sup(AB)=\sup A \sup B$$ when $A$ and $B$ are sets of non-negative real numbers, where $$AB=\{ab:a\in A\text{ and }b\in B\}\;.$$</p>
<p>As you can see from the example in the comments, matters are much more complicated when $A$ and $B$ are allowed to contain negative numbers.</p>
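For finite sets of non-negative reals, sup is just max, so the identity $\sup(AB)=\sup A \sup B$ — and its failure with negatives — can be illustrated directly (sketch):

```python
A = [0.0, 1.5, 3.0, 0.25]
B = [2.0, 0.5, 4.0]
C = [a * b for a in A for b in B]
assert max(C) == max(A) * max(B)     # 12.0 = 3.0 * 4.0

# With negative numbers the identity breaks down:
A2, B2 = [-3.0, 1.0], [-3.0, 1.0]
C2 = [a * b for a in A2 for b in B2]
# max(C2) is 9.0 while max(A2) * max(B2) is only 1.0
```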
|
134,424 | <p>I would like to create a function where I can define which case I want to use to create a path.</p>
<pre><code>p1 = {40, 48}; p2 = {50, 116}; p3 = {63, 160};
listPurple = Symbol["p" <> ToString[#]] & /@ Range[3];
disksPurple = {Purple, Disk[#, 2] & /@ listPurple};
Graphics[{disksPurple}, ImageSize -> 200]
</code></pre>
<p><a href="https://i.stack.imgur.com/X8otFm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X8otFm.png" alt="Imagem"></a></p>
<p>I do not want two functions, as I created:</p>
<pre><code>lVertical[p1_, p2_] := {p1, {p1[[1]], p2[[2]]}};
lHorizontal[p1_, p2_] := {p1, {p2[[1]], p1[[2]]}};
</code></pre>
<p>With them I define whether it will be a horizontal or vertical line:</p>
<pre><code>l1 = lVertical[p1, p2];
l2 = lHorizontal[p2, p1];
l3 = lHorizontal[p2, p3];
l4 = lVertical[p3, p2];
lines = Sort@Symbol["l" <> ToString[#]] & /@ Range[4];
l = {Red, Dashed, Line[#] & /@ lines};
Graphics[{l, disksPurple}, ImageSize -> 200]
</code></pre>
<p><a href="https://i.stack.imgur.com/cl4HEm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cl4HEm.png" alt="Imagem"></a></p>
<p>I would like it in a format similar to this:</p>
<pre><code>f[p1_, p2_, lVertical or lHorizontal]
</code></pre>
| LCarvalho | 37,895 | <p>Try this:</p>
<pre><code>p1 = {40, 48}; p2 = {50, 116}; p3 = {63, 160};
listPurple = Symbol["p" <> ToString[#]] & /@ Range[3];
disksPurple = {Purple, Disk[#, 2] & /@ listPurple};
Graphics[{disksPurple}, ImageSize -> 200];
fLine[p1_, p2_, case_] :=
If[TrueQ[case == v], {p1, {p1[[1]], p2[[2]]}}, {p1, {p2[[1]],
p1[[2]]}}]
l1 = fLine[p1, p2, v];
l2 = fLine[p2, p1, h];
l3 = fLine[p2, p3, h];
l4 = fLine[p3, p2, v];
lines = Sort@Symbol["l" <> ToString[#]] & /@ Range[4];
l = {Red, Dashed, Line[#] & /@ lines};
Graphics[{l, disksPurple}, ImageSize -> 200]
</code></pre>
<p>This is the part where you choose the cases:</p>
<p><code>fLine[p1_, p2_, case_] :=
If[TrueQ[case == v], {p1, {p1[[1]], p2[[2]]}}, {p1, {p2[[1]],
p1[[2]]}}]</code></p>
|
134,424 | <p>I would like to create a function where I can define which case I want to use to create a path.</p>
<pre><code>p1 = {40, 48}; p2 = {50, 116}; p3 = {63, 160};
listPurple = Symbol["p" <> ToString[#]] & /@ Range[3];
disksPurple = {Purple, Disk[#, 2] & /@ listPurple};
Graphics[{disksPurple}, ImageSize -> 200]
</code></pre>
<p><a href="https://i.stack.imgur.com/X8otFm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/X8otFm.png" alt="Imagem"></a></p>
<p>I do not want two functions, as I created:</p>
<pre><code>lVertical[p1_, p2_] := {p1, {p1[[1]], p2[[2]]}};
lHorizontal[p1_, p2_] := {p1, {p2[[1]], p1[[2]]}};
</code></pre>
<p>With them I define whether it will be a horizontal or vertical line:</p>
<pre><code>l1 = lVertical[p1, p2];
l2 = lHorizontal[p2, p1];
l3 = lHorizontal[p2, p3];
l4 = lVertical[p3, p2];
lines = Sort@Symbol["l" <> ToString[#]] & /@ Range[4];
l = {Red, Dashed, Line[#] & /@ lines};
Graphics[{l, disksPurple}, ImageSize -> 200]
</code></pre>
<p><a href="https://i.stack.imgur.com/cl4HEm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cl4HEm.png" alt="Imagem"></a></p>
<p>I would like it in a format similar to this:</p>
<pre><code>f[p1_, p2_, lVertical or lHorizontal]
</code></pre>
| Sumit | 8,070 | <p>Quick Draw (an alternative way to do the job using some lesser-known MMA functionality)</p>
<pre><code>n=10;
pts= RandomReal[1, {n, 2}];
ListLinePlot[pts, InterpolationOrder -> 0,
Mesh -> Full, MeshStyle -> Purple, PlotStyle -> {Red, Dashed}, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/oS8cf.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/oS8cf.png" alt="enter image description here"></a></p>
<p>Different formations can be created by considering different arrangements:</p>
<pre><code>pts1 = pts[[RandomSample[Range[n], n]]];
ListLinePlot[pts1, InterpolationOrder -> 0, Mesh -> Full,
MeshStyle -> Purple, PlotStyle -> {Red, Dashed}, Frame -> True]
</code></pre>
<p><a href="https://i.stack.imgur.com/TInZL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/TInZL.png" alt="enter image description here"></a></p>
|
3,588,068 | <p>I have the following system of 3 equations and 3 unknowns:
<span class="math-container">$$c_{0} = \frac{x_0}{x_0 + x_1},\ \ c_{1} = \frac{x_1}{x_1 + x_2},\ \ \ c_{2} = \frac{x_2}{x_2 + x_0},$$</span>
where all <span class="math-container">$c_i\!\in\!(0,1)$</span> are known and all <span class="math-container">$x_i > 0$</span> are unknown. Am I right in that the solution of this system is the nullspace of the following matrix? <span class="math-container">$$\mathbf{A}=\left[\begin{matrix}(c_0-1)& c_0 & 0 \\ 0 & (c_1-1) & c_1 \\ c_2 & 0 & (c_2-1) \end{matrix}\right].$$</span>
If so, I want to find the non-trivial solution, i.e. the basis for <span class="math-container">$null(\mathbf{A})$</span>.</p>
<p>p.s. I have attempted to simplify <span class="math-container">$\mathbf{A}$</span> to its reduced row echelon form <span class="math-container">$rref(\mathbf{A})$</span>. I know that <span class="math-container">$null(\mathbf{A}) = null(rref(\mathbf{A}))$</span>, but I get a diagonal matrix for <span class="math-container">$rref(\mathbf{A})$</span>. So does this mean that <span class="math-container">$null(\mathbf{A}) = \mathbf{0}$</span>, and therefore, there are no solutions to the system?</p>
| Dietrich Burde | 83,966 | <p>We rewrite the equations as a system of linear equations <span class="math-container">$Ax=0$</span> with
<span class="math-container">$$
A=\begin{pmatrix} c_0-1 & c_0 & 0 \\ 0 & c_1-1 & c_1 \\
c_2 & 0 & c_2-1 \end{pmatrix}
$$</span>
and <span class="math-container">$x=(x_0,x_1,x_2)^t$</span>.
The nullspace of <span class="math-container">$A$</span> is non-trivial if and only if <span class="math-container">$\det(A)=0$</span>.
We have
<span class="math-container">$$
\det(A)=2c_0c_1c_2 - c_0c_1 - c_0c_2 + c_0 - c_1c_2 + c_1 + c_2 - 1.
$$</span>
The trivial solution <span class="math-container">$x=0$</span>, i.e., <span class="math-container">$x_0=x_1=x_2=0$</span> is forbidden by you. </p>
<p>So let us suppose that <span class="math-container">$\det(A)=0$</span> and <span class="math-container">$2c_0c_1 - c_0 - c_1 + 1\neq 0$</span>. Then we can express <span class="math-container">$c_2$</span> by <span class="math-container">$c_0$</span> and <span class="math-container">$c_1$</span> and then <span class="math-container">$\ker(A)$</span> is spanned by</p>
<p><span class="math-container">$$
\begin{pmatrix} c_0c_1 \\ c_1(1-c_0) \\ (1-c_0)(1-c_1) \end{pmatrix}.$$</span>
Similarly for the other cases.</p>
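A concrete check (my own sketch): pick $c_0, c_1$, solve $\det(A)=0$ for $c_2$, and verify that the vector $(c_0c_1,\; c_1(1-c_0),\; (1-c_0)(1-c_1))^t$ — which one can verify directly row by row — is annihilated by $A$ and reproduces the original three fractions:

```python
from fractions import Fraction

c0, c1 = Fraction(1, 2), Fraction(1, 3)
D = 2*c0*c1 - c0 - c1 + 1                    # assumed nonzero
c2 = (1 - c0) * (1 - c1) / D                 # chosen so that det(A) = 0

A = [[c0 - 1, c0, 0],
     [0, c1 - 1, c1],
     [c2, 0, c2 - 1]]
x = [c0*c1, c1*(1 - c0), (1 - c0)*(1 - c1)]  # candidate kernel vector

residual = [sum(A[i][j] * x[j] for j in range(3)) for i in range(3)]
fractions = [x[0]/(x[0]+x[1]), x[1]/(x[1]+x[2]), x[2]/(x[2]+x[0])]
```

Note that all three components are positive for $c_i \in (0,1)$, consistent with the requirement $x_i > 0$.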
|
200,063 | <p>I am looking to evaluate</p>
<p>$$\int_0^1 x \sinh (x) \ \mathrm{dx}$$</p>
| user 1591719 | 32,016 | <p>If you use Taylor's series representation of the integrand and then interchange the sum and integral, you simply get</p>
<p>$$\sum_{k=0}^{\infty} \left(\frac{1}{(2k+2)!}-\frac{1}{(2k+3)!}\right) = \frac{1}{e}.$$ </p>
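Both the series value and the antiderivative $x\cosh x - \sinh x$ (from integration by parts) can be checked quickly — a sketch:

```python
import math

# Partial sums of the series converge very fast (factorial denominators)
series = sum(1/math.factorial(2*k + 2) - 1/math.factorial(2*k + 3)
             for k in range(20))

# An antiderivative of x sinh x is x cosh x - sinh x
antideriv = lambda x: x * math.cosh(x) - math.sinh(x)
integral = antideriv(1.0) - antideriv(0.0)   # = cosh(1) - sinh(1) = 1/e
```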
|
2,186,560 | <p>I was hoping someone can help me with the following.</p>
<p>I had to solve the following ODE and get the implicit form. </p>
<p>$$\frac{dy}{dx}=\frac{1+3y}{y} * x^2$$</p>
<p>this gives, by separating and integrating, the following:
$1/9*(3y-ln(3y+1))=1/3*x^3 +c $
which can also be written as (c is an arbitrary constant):
$$3y-ln(3y+1)=3x^3+c $$</p>
<p>I don't know exactly how to solve this; I saw something about Lambert functions, but that wasn't entirely clear to me.
My start was rewriting the equation as: </p>
<p>$3y+1-ln(3y+1)=3x^3+c$<br>
$u-ln(u)=3x^3+c$<br>
$ln(e^u)-ln(u)=3x^3+c$ <strong>or</strong> $ln(e^{-u})+ln(u)=-3x^3+c$<br>
$ln(\frac{1}{u}*e^u)=3x^3+c$ <strong>or</strong> $ln(u*e^{-u})=-3x^3+c$<br>
$\frac{1}{u}*e^u= e^{(3x^3+c)}$ <strong>or</strong> $u*e^{-u}=e^{-3x^3+c}$<br>
but that's as far as I can get,
since I don't know how to apply the Lambert function here, nor any other way to solve it.</p>
<p>Any help with this would be greatly appreciated.</p>
| Emilio Novati | 187,568 | <p>By definition of the <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert W</a> function: if $ze^z=A$ then $z=W(A)$.</p>
<p>so, if we have:
$$
ue^{-u}=e^{-3x^3+c}
$$
that is:
$$
-ue^{-u}=-e^{-3x^3+c}
$$
we have:
$$
-u=W\left(-e^{-3x^3+c} \right)\quad \iff \quad u=-W\left(-e^{-3x^3+c} \right)
$$
so, for $u=3y+1$ we find:
$$
y=\frac{1}{3}\left( -W\left(-e^{c-3x^3} \right)-1\right)
$$</p>
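A numerical sanity check (my sketch; a Newton iteration stands in for a library Lambert-W): the explicit formula $y=\frac13\bigl(-W(-e^{c-3x^3})-1\bigr)$, with $x^3$ as in the separation step of the question, should make $3y-\ln(3y+1)-3x^3$ constant in $x$.

```python
import math

def lambert_w0(A, tol=1e-14):
    # Newton iteration for w e^w = A on the principal branch (assumes A > -1/e)
    w = A if A > -0.3 else -0.3
    for _ in range(100):
        step = (w * math.exp(w) - A) / ((w + 1) * math.exp(w))
        w -= step
        if abs(step) < tol:
            break
    return w

c = -2.0                                      # an arbitrary choice of the constant

def y(x):
    u = -lambert_w0(-math.exp(c - 3 * x**3))  # u = 3y + 1
    return (u - 1) / 3

def invariant(x):
    # independent of x if y(x) satisfies the implicit equation
    return 3*y(x) - math.log(3*y(x) + 1) - 3 * x**3

vals = [invariant(x) for x in (0.9, 1.0, 1.3)]
```

Since $u-\ln u = 3x^3-c$ along the solution, the invariant equals $-c-1$ (here $1.0$) at every $x$.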
|
301,905 | <p>I'm having troubles to solve this integration: $\int_0^1 \frac{x^{2012}}{1+e^x}dx$</p>
<p>I've tried a lot using so many techniques without success. I found $\int_{-1}^1 \frac{x^{2012}}{1+e^x}dx=1/2013$, but I couldn't solve from 0 to 1.</p>
<p>Thanks a lot.</p>
| mrf | 19,440 | <p>I doubt there is a simple expression for your integral.</p>
<p>One approach would be to write</p>
<p>$$\frac{1}{1+e^x} = \frac{1}{e^x(1+e^{-x})} = e^{-x} \sum_{k=0}^\infty (-1)^k e^{-kx} = \sum_{k=0}^\infty (-1)^k e^{-(k+1)x}$$</p>
<p>with uniform convergence on $x \ge c$ for every $c > 0$. Hence</p>
<p>\begin{align}
\int_0^1 \frac{x^{2012}}{1+e^x}\,dx &= \int_0^1 \left( \sum_{k=0}^\infty (-1)^k e^{-(k+1)x} x^{2012} \right)\,dx \\
&= \sum_{k=0}^\infty \left( (-1)^k \int_0^1 e^{-(k+1)x} x^{2012}\, dx \right) \\
&= \sum_{k=0}^\infty \left( (-1)^k \int_0^{k+1} e^{-t} \left(\frac{t}{k+1}\right)^{2012}\frac{1}{k+1}\, dt \right) \\
&= \sum_{k=0}^\infty \frac{(-1)^k}{(k+1)^{2013}}\gamma(2013,k+1)
\end{align}</p>
<p>where $\gamma$ is the <a href="http://en.wikipedia.org/wiki/Incomplete_gamma_function" rel="nofollow"><em>lower incomplete Gamma function</em></a>. Maple can't find a closed expression for this series (and neither can I). It's even tricky to evaluate numerically, so I'm not sure how much use it is.</p>
|
3,523,491 | <p>Consider an <span class="math-container">$m \times n$</span> matrix <span class="math-container">$M$</span> such that each cell in <span class="math-container">$M$</span> is equal to the sum of its adjacent cells (sharing either an edge or a corner with this cell). What are the values in this matrix?</p>
<p>I am trying to find a non-zero matrix satisfying this condition. Does there exist such a matrix, or if not, is there a proof for it?</p>
<p>I know that when the matrix values are equal to average of surrounding cells, then all values in the matrix are equal. Is there a similar conclusion we can reach to in this case?</p>
| Jaroslaw Matlak | 389,592 | <p>Such a matrix can certainly be found for <span class="math-container">$m \in 2\mathbb{N}+1, n \in 3\mathbb{N}+2$</span>:</p>
<p><span class="math-container">$$\left[\begin{matrix}
a&a&0&-a&-a&0&a&a&0&\cdots & 0 & a &a & 0 &-a &-a\\
0&0&0&0&0&0&0&0&0&\cdots&0&0&0&0&0&0\\
-a&-a&0&a&a&0&-a&-a&0&\cdots & 0 & -a &-a & 0 &a &a\\
0&0&0&0&0&0&0&0&0&\cdots&0&0&0&0&0&0\\
\cdots&&&&&&&&&\cdots\\
0&0&0&0&0&0&0&0&0&\cdots&0&0&0&0&0&0\\
a&a&0&-a&-a&0&a&a&0&\cdots & 0 & a &a & 0 &-a &-a\\
\end{matrix}\right]$$</span>
Of course, depending on the values of <span class="math-container">$n$</span> and <span class="math-container">$m$</span>, the last row and the last two columns may differ (the last row might start with <span class="math-container">$-a, -a$</span> and the last columns might start with <span class="math-container">$a,a$</span>), but the general idea is the same.</p>
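The claimed pattern is easy to verify mechanically. Here is a short Python sketch (the function names are mine) that builds the candidate matrix for odd $m$ and $n \equiv 2 \pmod 3$ and checks the neighbour-sum condition cell by cell:

```python
def neighbor_sum_matrix(m, n, a=1):
    """Build the candidate matrix for odd m and n ≡ 2 (mod 3)."""
    col = [a, a, 0, -a, -a, 0]          # repeating column pattern
    M = [[0] * n for _ in range(m)]
    for i in range(0, m, 2):            # nonzero rows sit at even indices
        sign = (-1) ** (i // 2)         # consecutive nonzero rows flip sign
        for j in range(n):
            M[i][j] = sign * col[j % 6]
    return M

def satisfies_condition(M):
    # every cell equals the sum of its (up to 8) edge/corner neighbours
    m, n = len(M), len(M[0])
    for i in range(m):
        for j in range(n):
            s = sum(M[x][y]
                    for x in range(max(0, i - 1), min(m, i + 2))
                    for y in range(max(0, j - 1), min(n, j + 2))
                    if (x, y) != (i, j))
            if s != M[i][j]:
                return False
    return True
```

For example, `satisfies_condition(neighbor_sum_matrix(3, 5))` holds, and the matrix is nonzero.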
|
2,520,669 | <p>How are these equal? </p>
<blockquote>
<p>$$|\sqrt{x} - \sqrt{c}| = \frac{|x-c|}{|\sqrt{x} + \sqrt{c}|},$$ </p>
</blockquote>
| Nosrati | 108,128 | <p>Simpler $$\frac{\cos(x+h)-\cos(x)}{(x+h)^{1/2} - x^{1/2}}\times\frac{(x+h)^{1/2} + x^{1/2}}{(x+h)^{1/2} + x^{1/2}}=\frac{\cos(x+h)-\cos(x)}{h}[(x+h)^{1/2} + x^{1/2}]$$</p>
|
2,520,669 | <p>How are these equal? </p>
<blockquote>
<p>$$|\sqrt{x} - \sqrt{c}| = \frac{|x-c|}{|\sqrt{x} + \sqrt{c}|},$$ </p>
</blockquote>
| lab bhattacharjee | 33,337 | <p>Hint : </p>
<p>$$\dfrac1{\sqrt{x+h}-\sqrt x}=\dfrac{x+h-x}{\sqrt{x+h}+\sqrt x}=?$$</p>
<p>and Use <a href="http://mathworld.wolfram.com/ProsthaphaeresisFormulas.html" rel="nofollow noreferrer">Prosthaphaeresis Formulas</a> $$\cos(x+h)-\cos x=-2\sin\dfrac h2\sin\dfrac{2x+h}2$$</p>
<p>Finally, $\lim_{u\to0}\dfrac{\sin u}u=1$</p>
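For reference, combining the two hints yields $\lim_{h\to0}\frac{\cos(x+h)-\cos x}{\sqrt{x+h}-\sqrt x}=-2\sqrt{x}\,\sin x$ (the quotient of the two derivatives). A quick numerical check, with scaffolding of my own:

```python
import math

def ratio(x, h):
    # the difference quotient from the hint
    return (math.cos(x + h) - math.cos(x)) / (math.sqrt(x + h) - math.sqrt(x))

x = 2.0
limit = -2 * math.sqrt(x) * math.sin(x)   # value the hints lead to
approx = ratio(x, 1e-6)
```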
|
549,014 | <p>If $y=\sin x$, then find the value of $$\frac{d^2(\cos^7 x)}{dy^2}$$ </p>
<p>I have no idea on how to proceed in this problem. Please help.</p>
| Claude Leibovici | 82,404 | <p>You can also start by replacing $x$ with $\arcsin(y)$ and remembering that $\cos(\arcsin(z))=\sqrt{1-z^2}$. Can you continue with this?</p>
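Following this substitution, $\cos^7 x = (1-y^2)^{7/2}$, and differentiating twice with respect to $y$ gives $7(1-y^2)^{3/2}(6y^2-1)$. A finite-difference check of that claim (my own computation, not part of the hint):

```python
def g(y):
    # cos^7 x written in terms of y = sin x
    return (1 - y * y) ** 3.5

def g2_analytic(y):
    # claimed second derivative of (1 - y^2)^(7/2)
    return 7 * (1 - y * y) ** 1.5 * (6 * y * y - 1)

h = 1e-4
y = 0.3
fd = (g(y + h) - 2 * g(y) + g(y - h)) / (h * h)   # central second difference
```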
|
2,861,406 | <p>I want to calculate the answer of the integral $$\int\frac{dx}{x\sqrt{x^2-1}}$$ I use the substitution $x=\cosh(t)$ ($t \ge 0$) which yields $dx=\sinh(t)\,dt$. By using the fact that $\cosh^2(t)-\sinh^2(t)=1$ we can write $x^2-1=\cosh^2(t)-1=\sinh^2(t)$. Since $t\ge 0$, $\sinh(t)\ge 0$, and we have $\sqrt{x^2-1}=\sinh(t)$. Now, by substituting for $x$, $dx$, and $\sqrt{x^2-1}$ in the first integral, we have
$$\int\frac{dt}{\cosh(t)}$$
Since $\cosh(t)=(e^t+e^{-t})/2$ by substituting this in this integral we have
$$2\int\frac{dt}{e^t+e^{-t}}.$$
Now by multiplying numerator and denominator in $e^t$ one can write:$$2\int\frac{e^t\,dt}{1+e^{2t}}$$
Now by using $z=e^t$ in this integral one can write ($dz=e^t\,dt$): $$2\int\frac{dz}{1+z^2}$$
So we have $$\int\frac{dt}{\cosh(t)}=2\arctan(z)=2\arctan(e^t)$$
On the other hand $$t=\cosh^{-1}(x)=\ln\left(x+\sqrt{x^2-1}\right)$$
So we have $$\int\frac{dx}{x\sqrt{x^2-1}}=\int\frac{dt}{\cosh(t)}=2\arctan(\exp(\ln(x+\sqrt{x^2-1})))$$
which yields </p>
<blockquote>
<p>$$\int\frac{dx}{x\sqrt{x^2-1}}=2\arctan(x+\sqrt{x^2-1})$$</p>
</blockquote>
<p>but this answer is wrong. The true answer can be obtained by direct substitution $u=\sqrt{x^2-1}$ and is </p>
<blockquote>
<p>$$\int\frac{dx}{x\sqrt{x^2-1}}=\arctan(\sqrt{x^2-1})$$</p>
</blockquote>
<p>I don't want to know the answer of the integral. I want to know what I did wrong? Can somebody help?</p>
| Community | -1 | <p>Verify by differentiation.</p>
<p>$$(2\arctan(x+\sqrt{x^2-1}))'=2\,\frac{1+\dfrac x{\sqrt{x^2-1}}}{(x+\sqrt{x^2-1})^2+1}=\frac{2(x+\sqrt{x^2-1})}{2x(x+\sqrt{x^2-1})\sqrt{x^2-1}}=\frac{1}{x\sqrt{x^2-1}}$$</p>
<p>and</p>
<p>$$(\arctan\sqrt{x^2-1})'=\frac{\dfrac{x}{\sqrt{x^2-1}}}{(\sqrt{x^2-1})^2+1}=\frac{x}{x^2\sqrt{x^2-1}}=\frac{1}{x\sqrt{x^2-1}}.$$</p>
<p>Hence both expressions have the same derivative, so both answers are correct antiderivatives; on $x&gt;1$ they differ only by the constant $2\arctan(1)-\arctan(0)=\pi/2$ (compare the values at $x=1$).</p>
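Numerically one can check both claims at once on $x&gt;1$: each expression differentiates to the integrand, and their difference is the constant $\pi/2$. A quick float sketch (names are mine):

```python
import math

def F1(x):
    # answer obtained via x = cosh(t)
    return 2 * math.atan(x + math.sqrt(x * x - 1))

def F2(x):
    # answer obtained via u = sqrt(x^2 - 1)
    return math.atan(math.sqrt(x * x - 1))

def integrand(x):
    return 1 / (x * math.sqrt(x * x - 1))

h = 1e-6
xs = [1.5, 2.0, 3.7, 10.0]
# central-difference derivatives of each candidate vs the integrand
d1_err = max(abs((F1(x + h) - F1(x - h)) / (2 * h) - integrand(x)) for x in xs)
d2_err = max(abs((F2(x + h) - F2(x - h)) / (2 * h) - integrand(x)) for x in xs)
gaps = [F1(x) - F2(x) for x in xs]   # should all equal pi/2
```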
|
2,924,831 | <p>Suppose you have <span class="math-container">$100$</span> coins whose probabilities of obtaining the outcome "head" are <span class="math-container">$p_1,\ldots,\,p_{100}$</span>. These probabilities are not necessarily equal each other. Consider the following random experiment divided into two rounds.</p>
<ul>
<li><strong>Round 1:</strong> Throw simultaneously the <span class="math-container">$100$</span> coins and observe the number of outcomes "head".</li>
<li><strong>Round 2:</strong> Throw only those coins you obtained the outcome "tail" in Round 1, and observe the number of outcomes "head".</li>
</ul>
<p>Define the random variables</p>
<p><span class="math-container">$Y_1$</span>: number of outcomes "head" in Round 1, </p>
<p><span class="math-container">$X_2$</span>: number of outcomes "head" in Round 2,</p>
<p><span class="math-container">$Y_2=Y_1+X_2$</span>.</p>
<p>I learned that </p>
<ul>
<li><span class="math-container">$Y_1\sim\text{Poisson-Binomial}(\{p_1,\ldots,\,p_{100}\})$</span>,</li>
<li><span class="math-container">$X_2\sim\text{Poisson-Binomial}(\{q_1,\ldots,\,q_{100}\})$</span>, where <span class="math-container">$q_j=(1-p_j)\cdot p_j$</span>, for all <span class="math-container">$j\in\{1,\ldots,100\}$</span>, and </li>
<li><span class="math-container">$Y_2\sim\text{Poisson-Binomial}(\{r_1,\ldots,\,r_{100}\})$</span>, where <span class="math-container">$r_j=p_j+q_j$</span>, for all <span class="math-container">$j\in\{1,\ldots,100\}$</span>.</li>
</ul>
<p><strong>Problem:</strong> Obtain efficiently the joint distribution of the random vector <span class="math-container">$(Y_1,\,Y_2)$</span> whose range is <span class="math-container">$\{(y_1,\,y_2)\in\{0,\,1,\ldots,\,100\}^2:\,y_2\geq y_1\}$</span>. </p>
<p>Note that <span class="math-container">$\mathbb{P}(Y_1=y_1,\,Y_2=y_2)=\mathbb{P}(X_2=y_2-y_1 \mid Y_1=y_1) \cdot \mathbb{P}(Y_1=y_1)$</span>.</p>
<p>If <span class="math-container">$y_1=0$</span>, then <span class="math-container">$\mathbb{P}(Y_1=0,\,Y_2=y_2)=\left(\displaystyle\prod_{j=1}^{100}(1-p_j)\right) \cdot \mathbb{P}(X_2=y_2\mid Y_1=0)$</span>, </p>
<p>where the second factor can be computed efficiently with the <span class="math-container">${\tt R}$</span>-command <span class="math-container">${\tt dpoibin}$</span>, for all <span class="math-container">$y_2\in\{0,\ldots,\,100\}$</span>. </p>
<p>If <span class="math-container">$y_1=100$</span>, then necessarily <span class="math-container">$y_2=100$</span> and <span class="math-container">$\mathbb{P}(Y_1=100,\,Y_2=100)=\left(\displaystyle\prod_{j=1}^{100}p_j\right)\cdot 1$</span>.</p>
<p>Troubles to compute <span class="math-container">$\mathbb{P}(Y_1=y_1,\,Y_2=y_2)$</span> arise when <span class="math-container">$y_1\in\{1,\,2,\ldots,\,99\}$</span> and <span class="math-container">$y_2\in\{y_1,\ldots,\,100\}$</span>. </p>
<p>Does anyone know how to compute efficiently <span class="math-container">$\mathbb{P}(Y_1=y_1,\,Y_2=y_2)$</span>, for all <span class="math-container">$(y_1,\,y_2)$</span>? Thanks a lot for your help and suggestions.</p>
| Student1981 | 476,980 | <p>I found a solution to my problem. This solution builds on the paper </p>
<p>Nelsen, R. B. (1987). Discrete bivariate distributions with given marginals and correlation. <em>Communications in Statistics-Simulation and Computation</em>, 16(1), 199-208.</p>
<p>Since <span class="math-container">$\mathbb{P}(Y_1=y_1,\,Y_2=y_2)=\mathbb{P}(Y_1=y_1,\,X_2=y_2-y_1)$</span>, obtaining the joint probability distribution of the random vector <span class="math-container">$(Y_1,\,X_2)$</span> is enough.</p>
<p>Note that <span class="math-container">$\mathbb{C}\mbox{ov}(Y_1,\,X_2)=0.5\cdot(\mathbb{V}\mbox{ar}(Y_2)-\mathbb{V}\mbox{ar}(Y_1)-\mathbb{V}\mbox{ar}(X_2))$</span>, with </p>
<ul>
<li><span class="math-container">$\displaystyle\mathbb{V}\mbox{ar}(Y_1)=\sum_{j=1}^{100}(p_j\cdot(1-p_j))$</span>,</li>
<li><span class="math-container">$\displaystyle\mathbb{V}\mbox{ar}(X_2)=\sum_{j=1}^{100}(q_j\cdot(1-q_j))$</span>, and</li>
<li><span class="math-container">$\displaystyle\mathbb{V}\mbox{ar}(Y_2)=\sum_{j=1}^{100}(r_j\cdot(1-r_j))$</span>.</li>
</ul>
<p>Following notation of Nelsen (1987), one can</p>
<ul>
<li>obtain <span class="math-container">$f(x)$</span> and <span class="math-container">$f(y)$</span> efficiently with <span class="math-container">${\tt dpoibin}$</span> in <span class="math-container">${\tt R}$</span>-software,</li>
<li>obtain <span class="math-container">$F(x)$</span> and <span class="math-container">$F(y)$</span> efficiently with <span class="math-container">${\tt ppoibin}$</span> in <span class="math-container">${\tt R}$</span>-software, and</li>
<li>use <span class="math-container">$\rho=\frac{\mathbb{C}\text{ov}(Y_1,\,X_2)}{\sqrt{\mathbb{V}\mbox{ar}(Y_1)\cdot\mathbb{V}\mbox{ar}(X_2)}}$</span>.</li>
</ul>
<p>Thus, according to the sign for <span class="math-container">$\rho$</span>, one can obtain the joint probability distribution of <span class="math-container">$(Y_1,\,X_2)$</span> with Nelsen's approach faster than with the approach I provided in my question.</p>
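For readers without the R <span class="math-container">${\tt poibin}$</span> package: because the coins are independent and each coin contributes to <span class="math-container">$(Y_1, X_2)$</span> through one of three outcomes, the joint pmf can also be built up exactly, one coin at a time, by the same convolution idea behind the Poisson-binomial DP. A plain-Python sketch (all names are mine) that doubles as a cross-check of the marginals:

```python
def poisson_binomial_pmf(ps):
    # standard DP: convolve in one Bernoulli(p) at a time
    pmf = [1.0]
    for p in ps:
        new = [0.0] * (len(pmf) + 1)
        for k, w in enumerate(pmf):
            new[k] += w * (1 - p)
            new[k + 1] += w * p
        pmf = new
    return pmf

def joint_y1_x2(ps):
    # exact joint pmf of (Y1, X2); each coin contributes
    #   head in round 1           -> (dy1, dx2) = (1, 0), prob p
    #   tail then head in round 2 -> (0, 1),             prob (1-p)p
    #   no head at all            -> (0, 0),             prob (1-p)^2
    J = {(0, 0): 1.0}
    for p in ps:
        moves = ((1, 0, p), (0, 1, (1 - p) * p), (0, 0, (1 - p) ** 2))
        new = {}
        for (y1, x2), w in J.items():
            for d1, d2, pr in moves:
                key = (y1 + d1, x2 + d2)
                new[key] = new.get(key, 0.0) + w * pr
        J = new
    return J

ps = [0.1, 0.3, 0.5, 0.7]
J = joint_y1_x2(ps)
n = len(ps)
y1_marginal = [sum(w for (a, b), w in J.items() if a == k) for k in range(n + 1)]
y2_marginal = [sum(w for (a, b), w in J.items() if a + b == k) for k in range(n + 1)]
```

The $Y_1$ marginal matches the Poisson-binomial with the $p_j$, and the $Y_2=Y_1+X_2$ marginal matches the Poisson-binomial with $r_j=p_j+q_j$, as stated in the question.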
<p>If you have comments or corrections concerning this solution, please let me know. Thanks a lot.</p>
|
356,583 | <p>In the paper <a href="https://arxiv.org/abs/1811.02002" rel="nofollow noreferrer">"Finding Mixed Nash Equilibria of Generative Adversarial Networks"</a> the authors write in equation (1) on page 2:</p>
<blockquote>
<p>Consider the classical formulation of a two-player game with
<em>finitely</em> many strategies:
<span class="math-container">\begin{equation*}
\tag1\label1
\min_{\boldsymbol{p} \in \Delta_m} \max_{\boldsymbol{q} \in \Delta_n} \langle \boldsymbol{q},\boldsymbol{a} \rangle -
\langle \boldsymbol{q},A\boldsymbol{p} \rangle ,
\end{equation*}</span>
where <span class="math-container">$A$</span> is a payoff matrix, <span class="math-container">$\boldsymbol a$</span> is a vector, and <span class="math-container">$ \Delta_d :=
\{\boldsymbol{z} \in \mathbb{R}_{\geq 0}^d \mid \sum\nolimits_{i=1}^d z_i = 1\}$</span> is
the probability simplex, representing the <em>mixed strategies</em> (i.e.,
probability distributions) over <span class="math-container">$d$</span> pure strategies. A pair
<span class="math-container">$(\boldsymbol{p}_{\text{NE}},\boldsymbol{q}_{\text{NE}})$</span> achieving the min-max
value in (\ref{1}) is called a mixed NE.</p>
</blockquote>
<p>I was wondering:</p>
<ul>
<li>What does this formulation mean?</li>
<li>The formulation seems to result in a pair of strategies <span class="math-container">$(\boldsymbol p, \boldsymbol q)$</span> <strong>parametrized</strong> by a vector <span class="math-container">$\boldsymbol a$</span>. What is the role of the vector <span class="math-container">$\boldsymbol a$</span> in the above equation?</li>
</ul>
<p>Thank you</p>
<p>After further contemplating: I guess they want to align their application (GAN) with the game theory framework. To that end, they write on page 3:</p>
<blockquote>
<p>[W]e consider the set of all probability distributions
over <span class="math-container">$\Theta$</span> and <span class="math-container">$\mathcal{W}$</span>, and we search for the optimal distribution that solves the following program:
<span class="math-container">\begin{equation*}
\tag4\label4
\min_{\nu \in \mathcal{M}(\Theta)} \max_{\mu \in \mathcal{M}(\mathcal{W})}
\mathbb{E}_{\boldsymbol{w} \sim \mu} \mathbb{E}_{X \sim \mathbb{P}_{real}} [f_\boldsymbol{w}(X)] -
\mathbb{E}_{\boldsymbol{w} \sim \mu} \mathbb{E}_{\boldsymbol{\theta} \sim \nu} \mathbb{E}_{X \sim \mathbb{P}_{\boldsymbol{\theta}}} [f_\boldsymbol{w}(X)] .
\end{equation*}</span></p>
</blockquote>
<p>They then show that the above can be cast as</p>
<blockquote>
<p><span class="math-container">\begin{equation*}
\tag5\label5
\min_{\nu \in \mathcal{M}(\Theta)} \max_{\mu \in \mathcal{M}(\mathcal{W})} \langle \mu,g \rangle -
\langle \mu,G\nu \rangle ,
\end{equation*}</span>
with <span class="math-container">$g$</span> defined as <span class="math-container">$g : \mathcal{W} \rightarrow \mathbb{R}$</span> by <span class="math-container">$g(w) := \mathbb{E}_{X \sim \mathbb{P}_{real}} [f_\boldsymbol{w}(X)]$</span>, the operator <span class="math-container">$G : \mathcal{M}(\Theta) \rightarrow \mathcal{F}(\mathcal{W})$</span> as <span class="math-container">$(G\nu)(w) := \mathbb{E}_{\boldsymbol{\theta} \sim \nu} \mathbb{E}_{X \sim \mathbb{P}_{\boldsymbol{\theta}}} [f_\boldsymbol{w}(X)]$</span> and denoting <span class="math-container">$\langle \mu,h \rangle := \mathbb{E}_{\mu}h$</span> for any probability
measure <span class="math-container">$\mu$</span> and function <span class="math-container">$h$</span> (where <span class="math-container">$\langle \mu,h \rangle$</span> is NOT an inner product, but a dual pairing in Banach spaces),</p>
</blockquote>
<p>which looks like (\ref{1}) (for <em>finitely</em> many strategies). Notice that (\ref{4}) has a free parameter <span class="math-container">$\mathbb{P}_{real}$</span> (hidden in <span class="math-container">$g$</span> in (\ref{5})), which <span class="math-container">$\boldsymbol{a}$</span> in (\ref{1}) seems to have been introduced to account for.</p>
<p>Also,
<span class="math-container">\begin{equation*}
\min_{\boldsymbol{p} \in \Delta_m} \max_{\boldsymbol{q} \in \Delta_n} \langle \boldsymbol{q},\boldsymbol{a} \rangle -
\langle \boldsymbol{q},A\boldsymbol{p} \rangle =
\min_{\boldsymbol{p} \in \Delta_m} \max_{\boldsymbol{q} \in \Delta_n} \langle \boldsymbol{q}, (\boldsymbol{a} \otimes \boldsymbol{1} - A)\boldsymbol{p} \rangle
\end{equation*}</span></p>
<p>This is because <span class="math-container">$\boldsymbol{p}$</span> lies in the probability simplex, so <span class="math-container">$(\boldsymbol{a} \otimes \boldsymbol{1})\boldsymbol{p} = \boldsymbol{a}$</span>; each entry <span class="math-container">$a_i$</span> of the vector <span class="math-container">$\boldsymbol{a}$</span> is effectively added to every entry of row <span class="math-container">$i$</span> of <span class="math-container">$-A$</span>. Therefore, the above game is equivalent to a standard zero-sum game with payoff matrix <span class="math-container">$\tilde{A}=\boldsymbol{a} \otimes \boldsymbol{1} - A$</span>.</p>
| qwer1304 | 155,643 | <p>This <strong>non-standard</strong> formulation matches better the intended application - (Wasserstein) GAN. It can be mapped to the <strong>standard</strong> formulation <span class="math-container">$\langle \boldsymbol{q}, \tilde{A}\boldsymbol{p} \rangle$</span> through a new payoff matrix <span class="math-container">$\tilde{A} = \boldsymbol{a} \otimes \boldsymbol{1} - A$</span>, where <span class="math-container">$\otimes$</span> is the outer product operation (<span class="math-container">$\boldsymbol{a} \otimes \boldsymbol{b} = \boldsymbol{a} \boldsymbol{b}^T)$</span></p>
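A quick numeric confirmation of this identity on random simplex points (pure-Python scaffolding, names mine; here $A$ is $n \times m$ so that $\langle \boldsymbol{q}, A\boldsymbol{p}\rangle$ is defined):

```python
import random

random.seed(0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def matvec(A, p):
    return [dot(row, p) for row in A]

def random_simplex(d):
    # a random point of the probability simplex Delta_d
    w = [random.random() for _ in range(d)]
    s = sum(w)
    return [x / s for x in w]

m, n = 3, 4
A = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
a = [random.uniform(-1, 1) for _ in range(n)]

p, q = random_simplex(m), random_simplex(n)
lhs = dot(q, a) - dot(q, matvec(A, p))
# a ⊗ 1 has identical columns equal to a, so (a ⊗ 1 - A)[i][j] = a[i] - A[i][j]
A_tilde = [[a[i] - A[i][j] for j in range(m)] for i in range(n)]
rhs = dot(q, matvec(A_tilde, p))
```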
|
2,013,459 | <p>The period of $\sin(x)$ is $2\pi$. Thus the period of $\sin(\pi x)$ will become $T_1=2$, similarly the period of $\sin(2\pi x)$ is $T_2=1$ and for $\sin(5\pi x)$, the period is $T_3=\frac{2}{5}$.</p>
<p>To find the period of $\sin(\pi x)-\sin(2\pi x)+\sin(5\pi x)$, we can have
$$\frac{T_1}{T_{2}}=\frac{2}{1}\Rightarrow \,\, T^*=T_1=2T_2=2$$
Now, We can also have
$$\frac{T^*}{T_{3}}=\frac{2}{\frac{2}{5}}\Rightarrow \,\, T=2T^*=10T_3=4$$ </p>
<p>So the period is 2. Is this correct? </p>
| AsianDuckKing | 381,322 | <p>I think you are right, since <span class="math-container">$2$</span> divided by <span class="math-container">$\frac{2}{5}$</span> is an integer, as is <span class="math-container">$2$</span> divided by <span class="math-container">$1$</span>. You can check by plotting:</p>
<p><a href="https://i.stack.imgur.com/o3OOA.png" rel="nofollow noreferrer">The image of the function</a></p>
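A small numerical check (added here) also confirms that <span class="math-container">$2$</span> is a period of the sum while <span class="math-container">$1$</span> is not:

```python
import math

def f(x):
    return (math.sin(math.pi * x) - math.sin(2 * math.pi * x)
            + math.sin(5 * math.pi * x))

xs = [0.137 * k for k in range(50)]
shift_by_2 = max(abs(f(x + 2) - f(x)) for x in xs)   # ~ 0: a period
shift_by_1 = max(abs(f(x + 1) - f(x)) for x in xs)   # clearly nonzero
```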
|
2,060,418 | <p>We have</p>
<p><span class="math-container">$$ \begin{cases} \dot{x} = - y - y^3 \\ \dot{y} = x \end{cases} $$</span></p>
<p>where <span class="math-container">$x,y \in \mathbb{R}$</span>. Show that the critical point for the linear system is a <span class="math-container">$\mathbf{center}$</span>. Prove that the type of the critical point is the same for the <span class="math-container">$\mathbf{nonlinear}$</span> system.</p>
<h3>TRY:</h3>
<p>Notice <span class="math-container">$( \dot{x}, \dot{y} ) = (0,0)$</span> iff <span class="math-container">$x = 0 $</span> and <span class="math-container">$y + y^3 =0 \iff y(1+y^2) = 0$</span>. Thus, the only critical point is <span class="math-container">$(0,0)$</span>. Let's linearize the system. The Jacobian is</p>
<p><span class="math-container">$$ J(x,y) = \left( \begin{matrix} 0 & -1 - 3y^2 \\ 1 & 0 \end{matrix} \right) $$</span></p>
<p>We have</p>
<p><span class="math-container">$$ J(0,0) = \left( \begin{matrix} 0 & -1 \\ 1 & 0 \end{matrix} \right) $$</span></p>
<p>And the eigenvalues of this matrix are <span class="math-container">$\lambda = \pm i $</span> showing that indeed the critical point is a <span class="math-container">$\mathbf{center}$</span>. But, Im stuck on showing that the same is true for the nonlinear system. How do we show the existence of closed orbits near the critical point?</p>
| MrYouMath | 262,304 | <p>You can use the following Lyapunov function candidate</p>
<p><span class="math-container">$$V(x,y)=1/2x^2+1/2y^2+1/4y^4,$$</span></p>
<p>which is positive definite and <span class="math-container">$V(x=0,y=0)=0$</span>.</p>
<p>The derivative is given by</p>
<p><span class="math-container">$$\dot{V}=x\dot{x}+y\dot{y}+y^3\dot{y}=x[-y-y^3]+yx+y^3[x]\equiv0.$$</span></p>
<p>Since <span class="math-container">$\dot{V}\equiv 0$</span>, every trajectory remains on a level set of <span class="math-container">$V$</span>, and near the origin these level sets are closed curves; hence the origin is a center for the nonlinear system as well.</p>
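As an illustration I added (not part of the original argument), a short RK4 integration shows that numerical orbits indeed stay on a level set of <span class="math-container">$V$</span> and remain bounded:

```python
def rhs(x, y):
    # the nonlinear system: x' = -y - y^3, y' = x
    return -y - y**3, x

def V(x, y):
    return 0.5 * x * x + 0.5 * y * y + 0.25 * y**4

def rk4_step(x, y, h):
    k1 = rhs(x, y)
    k2 = rhs(x + h / 2 * k1[0], y + h / 2 * k1[1])
    k3 = rhs(x + h / 2 * k2[0], y + h / 2 * k2[1])
    k4 = rhs(x + h * k3[0], y + h * k3[1])
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

x, y = 1.0, 0.0
v0 = V(x, y)
drift = 0.0
for _ in range(20000):          # integrate up to t = 20
    x, y = rk4_step(x, y, 0.001)
    drift = max(drift, abs(V(x, y) - v0))
```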
|
<p>To be honest, I don't really know whether or not the following is a research-level
question:</p>
<p>Let $M$ be a smooth manifold, $C^\infty(M)$ the smooth function ring on $M$ and
suppose $R\subset C^\infty(M)$ is a subring. What are conditions, such that
$R$ is the smooth function ring of a smooth manifold ?</p>
<p>My first impression is that this is a huge question and an exhaustive
answer is unlikely. Nevertheless, I don't really know where to start looking for
something like this...</p>
| David C | 27,816 | <p>You have to be careful: let $Y\subset V$ be a submanifold of $V$; then you have a restriction map $$C^{\infty}(V)\rightarrow C^{\infty}(Y)$$
whose kernel is an ideal $p_Y$, and if $Y$ is a closed submanifold:
$$C^{\infty}(Y)\cong C^{\infty}(V)/p_Y$$ </p>
<p>Thus the question is rather what ideals of $C^{\infty}(V)$ are of the form $p_Y$ for $Y$ a submanifold of $V$.</p>
<p>1) if $Z$ is any subset of $V$ it makes sense to define the ideal $p_Z$ of functions of $C^{\infty}(V)$ that vanish on $Z$. And one can prove that a closed subset $Z$ of $V$ is a submanifold if and only if $p_Z$ is regular.</p>
<p>Now the question is what are the ideals of $C^{\infty}(V)$ of type $p_Z$ with $Z$ closed.</p>
<p>2) you consider $C^{\infty}(V)$ as a Fréchet space and notice that $p_Z$ is a closed ideal.</p>
<p>Hence the question is what are the closed ideals of $C^{\infty}(V)$ of type $p_Z$ with $Z$ closed?<br>
Let me give you an answer when $V=\mathbb{R}^n$.</p>
<p>Let $I$ be a closed ideal of $C^{\infty}(\mathbb{R}^n)$ the quotient $C^{\infty}(\mathbb R^n)/I$ is called a differentiable algebra. $I$ is of the form $p_Z$ if this quotient algebra is reduced ($O$ is the unique element vanishing at any point of the real spectrum $Spec_r(C^{\infty}(\mathbb{R}^n)/I)$).</p>
<p>Reference: $C^{\infty}$-Differentiable spaces (LNM) Juan A. Navarro González, Juan B. Sancho de Salas </p>
<p>Edit: @Nevermind, if you have a smooth surjective map $\pi:V\rightarrow Y$, then you will have a ring map $\pi^*:C^{\infty}(Y)\rightarrow C^{\infty}(V)$ and this map is injective.<br>
Now I recommend you look at Dominic Joyce's survey
"Algebraic Geometry over $C^{\infty}$-rings"
Corollary 3.4, where he explains that the category of smooth manifolds embeds (fully and faithfully) as a subcategory of the category of finitely presented $C^{\infty}$-rings.
Thus if you have a sub-$C^{\infty}$-ring
$$R\rightarrow C^{\infty}(V)$$
this morphism of $C^{\infty}$-ring will be realizable by a smooth map
$$V\rightarrow \mathfrak{R}$$
such that $C^{\infty}(\mathfrak{R})=R$ if and only if $R$ is an algebra of smooth functions and the way to recognize these algebras is exactly P. Michor's theorem in the note cited in your post. You notice that condition 1) of this theorem is satisfied for any subring of $C^{\infty}(V)$. Thus you are left with two criteria: "finitely generated" and "germ determined".
Germ dertermined (related to condition 3) in Michor's theorem) is related to "fair $C^{\infty}$-rings" in D. Joyce's papers and related to reduced in the book on differentiable spaces cited above.</p>
|
4,011,999 | <p>Let <span class="math-container">$n$</span> be a positive integer, and consider a sequence <span class="math-container">$a_1 , a_2 , \cdots , a_n $</span> of positive integers. Extend it periodically to an infinite sequence <span class="math-container">$a_1 , a_2 , \cdots $</span> by defining <span class="math-container">$a_{n+i} = a_i $</span> for all <span class="math-container">$i \ge 1$</span>. If <span class="math-container">$a_1 \le a_2 \le \cdots \le a_n \le a_1 +n $</span> and <span class="math-container">$a_{a_i } \le n+i-1 \quad\text{for}\quad i=1,2,\cdots, n, $</span> prove that <span class="math-container">$a_1 + \cdots +a_n \le n^2. $</span></p>
<p>Any help? I tried and noticed that for all <span class="math-container">$i \le a_1$</span>, we have <span class="math-container">$a_i \le n$</span>. Then for all <span class="math-container">$i \in \{2, 3, \cdots, a_1\},$</span> we have <span class="math-container">$a_{a_i} \le n+i-1$</span>.</p>
| Calvin Lin | 54,563 | <p>This is a great question to showcase how you can interpret that statement. I suggest that you try to make sense of the condition, and understand how it restricts the values.</p>
<p>The main observation that you need is: If <span class="math-container">$a_i = k$</span>, then <span class="math-container">$a_k \leq n+i - 1$</span> and thus</p>
<blockquote class="spoiler">
<p> In particular, if <span class="math-container">$ a_j \geq n+i$</span>, then <span class="math-container">$j > k$</span>.</p>
</blockquote>
<hr />
<p>Let <span class="math-container">$a_1 = I$</span>.<br />
Show that <span class="math-container">$I \leq n$</span>. (Why?)</p>
<p>Let <span class="math-container">$K$</span> be the largest index from 1 to <span class="math-container">$n$</span> such that <span class="math-container">$a_k \leq n$</span>.<br />
If this doesn't exist, then <span class="math-container">$\sum a_i \leq n^2$</span> and we are done. So let's assume that it exists.<br />
Since <span class="math-container">$ a_I = a_{a_1} \leq n, $</span> so <span class="math-container">$ K \geq I$</span>.</p>
<p>Let <span class="math-container">$\delta _i$</span> count the number of indices <span class="math-container">$j$</span> with <span class="math-container">$a_j \geq n+i$</span>.<br />
The above observation (in hidden text) states that <span class="math-container">$ \delta_i \leq (n-a_i)$</span>.<br />
The condition <span class="math-container">$a_n \leq n+I$</span> tells us that <span class="math-container">$ \delta_i = 0 $</span> for <span class="math-container">$ i > I$</span>.</p>
<p>We split the summation into 2 parts:</p>
<ul>
<li><span class="math-container">$ \sum_{i \leq K} a_i = \sum_{i\leq K} n - (n-a_i) = nK - \sum_{i\leq I} (n-a_i).$</span></li>
<li><span class="math-container">$ \sum_{i \leq K} a_i = \sum_{i\leq K} [n - (n-a_i)] = nK - \sum_{i\leq K} (n-a_i).$</span></li>
</ul>
<p>Show that <span class="math-container">$ \sum_{i \leq I } \delta_i = \sum_{i \leq K } \delta_i$</span>. (Why?)</p>
<p>Hence, <span class="math-container">$ \sum a_i = [nK - \sum_{i\leq K} (n-a_i)] + [n(n-K) + \sum_{i \leq I} \delta_i] = n^2 + \sum_{i\leq K } \delta_i - (n-a_i ) \leq n^2. $</span></p>
<p>Notes:</p>
<ul>
<li>There is a very nice pictorial representation of this idea. For a valid sequence, plot the values <span class="math-container">$ (i, a_i)$</span> on a grid square. Then, the values are non-decreasing. The left triangle of squares representing the "undercount" arising from <span class="math-container">$a_i \leq n$</span>, can be rotated and flipped to more than cover the right triangle of squares representing the "over count" arising from <span class="math-container">$a_i > n$</span>.</li>
</ul>
|
3,243,906 | <p>I am doing a problem where I am differentiating from first principles, but I can't simplify the final expression:</p>
<p><span class="math-container">$\frac{-2xh - h^2}{x^4h + 2x^3h^2+x^2h^3}$</span></p>
<p>Could someone explain it in steps?</p>
| Dr. Sonnhard Graubner | 175,066 | <p>It is <span class="math-container">$$\frac{-2x-h}{x^2(x^2+2xh+h^2)}=\frac{-2x-h}{x^2(x+h)^2}$$</span>
provided that <span class="math-container">$$x\ne 0,\quad x\ne -h.$$</span>
As <span class="math-container">$h$</span> tends to zero, we obtain <span class="math-container">$$\frac{-2x}{x^2\cdot x^2}=\frac{-2x}{x^4}=-\frac{2}{x^3}.$$</span></p>
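In fact the original expression is exactly the difference quotient <span class="math-container">$\frac{1}{h}\left[\frac{1}{(x+h)^2}-\frac{1}{x^2}\right]$</span> for <span class="math-container">$f(x)=x^{-2}$</span>, which is why the limit <span class="math-container">$-2/x^3$</span> is <span class="math-container">$f'(x)$</span>. A quick numeric check of the simplification (helper names mine):

```python
def original(x, h):
    return (-2 * x * h - h * h) / (x**4 * h + 2 * x**3 * h**2 + x**2 * h**3)

def simplified(x, h):
    return (-2 * x - h) / (x**2 * (x + h) ** 2)

pairs = [(1.5, 0.25), (2.0, -0.1), (-3.0, 0.5)]
mismatch = max(abs(original(x, h) - simplified(x, h)) for x, h in pairs)
limit_gap = abs(original(1.5, 1e-8) - (-2 / 1.5**3))   # approaches -2/x^3
```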
|
511,921 | <p>Let $S, T$ be operators in $\mathcal{L}(V)$, the space of all linear maps from $V$ to itself. In my lecture notes, I have the definition of <strong>similar</strong>: "We say that operators $S,T \in \mathcal{L}(V)$ are <strong>similar</strong> if there exists an isomorphism (in this context of linear maps, a bijective linear map) $P$ such that $T = P^{-1}SP$. (If $V$ is $\mathbb{F}^{n}$, coordinate space, we call P a <strong>change of coordinates</strong>.)"</p>
<p>Could someone please <em>try</em> to explain this idea of "similarity" without invoking matrices or bases (I apologize in advance if I am asking for too much here)? Specifically, I am having some trouble really understanding the action of $P^{-1}$ on $SP$. What does it mean conceptually for two linear operators to be similar? </p>
| Giuseppe Negro | 8,157 | <p>Write your linear map as an equation: $$y=Lx.$$
Now apply the following:
$$\tag{1}x=Px',\quad y=Py'.$$
Since $P$ is invertible, you get
$$y'=P^{-1}LPx'.$$
In a finite dimensional space, $(1)$ is what you call a <em>change of coordinates</em>.</p>
|
749,731 | <p>if $G' <H < G$ then $H$ is normal in $G$. ($G'$ is the commutator subgroup of $G$.)</p>
<p>This is what I do:</p>
<p>because $G' < H$ we have $\frac{H}{G'} \triangleleft \frac{G}{G'}$. because $\frac{G}{G'}$ is abelian then $\frac{\frac{G}{G'}}{\frac{H}{G'}} \approx \frac{G}{H} $ is abelian and it means $\frac{G}{H}$ is abelian. </p>
<p>now we have $Hg_1Hg_2=Hg_2Hg_1 \Rightarrow Hg_1g_2=Hg_2g_1 \Rightarrow (g_2g_1)^{-1}(g_1g_2)=h $ for some $h \in H$.</p>
<p>now I stuck here.I need some help to finish this, I feel that I am in right path. </p>
<p>Thank you very much.</p>
| egreg | 62,967 | <p>The homomorphism theorems imply that when $N$ is a normal subgroup of $G$, there is a bijection between the subgroups of $G$ containing $N$ and the subgroups of $G/N$ given by
$$
H\mapsto H/N
$$
In this correspondence, normal subgroups correspond to normal subgroups.</p>
<p>When $N=G'$, the quotient $G/G'$ is abelian, so each of its subgroups is normal. So, since $H/G'$ is normal in $G/G'$, you have that $H$ is normal in $G$.</p>
|
62,621 | <p>There are three elements: x, y, z and a relation C:</p>
<p>x C y, y C z, z C x, x C x, y C y, z C z.</p>
<p>Let us introduce two binary operations with respect to the C: "the leftmost" (L) and "the rightmost" (R), i.e. </p>
<p>x L x = x L y = y L x = x, y L y = y L z = z L y = y, z L z = z L x = x L z = z </p>
<p>x R x = x R z = z R x = x, y R y = x R y = y R x = y, z R z = z R y = y R z = z.</p>
<p>A similar construction produces a multi-valued logic if one uses a linear order instead of C, but this non-associative "logic" also has some applications. Yet I have failed to find any mention of it in books on multi-valued logic. I would be glad to know whether the described construction has been used somewhere earlier, so that I can provide correct references in my work.</p>
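The two operations are small enough to tabulate and test mechanically; the following Python sketch (my own encoding) confirms that L and R are commutative and idempotent but not associative:

```python
ELEMS = "xyz"

L = {("x", "x"): "x", ("x", "y"): "x", ("y", "x"): "x",
     ("y", "y"): "y", ("y", "z"): "y", ("z", "y"): "y",
     ("z", "z"): "z", ("z", "x"): "z", ("x", "z"): "z"}

R = {("x", "x"): "x", ("x", "z"): "x", ("z", "x"): "x",
     ("y", "y"): "y", ("x", "y"): "y", ("y", "x"): "y",
     ("z", "z"): "z", ("z", "y"): "z", ("y", "z"): "z"}

def is_assoc(op):
    # check (a op b) op c == a op (b op c) over all triples
    return all(op[op[a, b], c] == op[a, op[b, c]]
               for a in ELEMS for b in ELEMS for c in ELEMS)

def is_comm(op):
    return all(op[a, b] == op[b, a] for a in ELEMS for b in ELEMS)
```

For instance, $(x \mathbin{L} y) \mathbin{L} z = x \mathbin{L} z = z$ while $x \mathbin{L} (y \mathbin{L} z) = x \mathbin{L} y = x$.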
| Alex 'qubeat' | 14,175 | <p>Just a few hours ago I found that the construction was used in the talks of J. B. Nation,
"How aliens do math" and "Logic on other planets" (<a href="http://www.math.hawaii.edu/~jb/talks.html" rel="nofollow">here</a>).
Despite such titles, the works look quite instructive.</p>
|
2,494,552 | <p>Give an example of metric spaces $X,Y,Z$ and a function $f:X\times Y \to Z$ such that :</p>
<p>(a)if $a\in X$, then $f(a,y)$ is continuous</p>
<p>(b)if $b\in Y$, then $f(x,b)$ is continuous</p>
<p>(c)$f:X\times Y \to Z$ is not continuous</p>
<p>Actually I think $f$ must be continuous if it satisfies (a) and (b) so I can't construct an example.</p>
| user284331 | 284,331 | <p>I think the following works:</p>
<p>$f(z)=\exp\left(-\dfrac{1}{z^{4}}\right)$ for $z\ne 0$ and $f(0)=0$. Here we identify $\mathbb{R}^2$ with the complex plane, writing $z=x+iy$.</p>
<p>$f$ is not continuous at $z=0$ because $(h+ih)^4=-4h^4$, so $f(h+ih)=\exp\left(\dfrac{1}{4h^{4}}\right)\to\infty$ as real $h\to 0$.</p>
<p>For separate continuity, the only case to check is the line $y=0$ (lines with $y=b\ne 0$ avoid the origin, where $f$ is clearly continuous): there the function $\varphi(x)=\exp\left(-\dfrac{1}{x^{4}}\right)$ for $x\ne 0$, $\varphi(0)=0$, is continuous. Likewise for the line $x=0$.</p>
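The blow-up along the diagonal versus decay along the axes can be seen numerically (an illustration I added; on the diagonal $h$ must stay above roughly $0.16$ to avoid float overflow):

```python
import cmath

def f(z):
    return cmath.exp(-1 / z**4) if z != 0 else 0.0

# along the real axis the values die off extremely fast ...
axis = max(abs(f(complex(t, 0.0))) for t in (0.5, 0.3, 0.2))
# ... but along the diagonal z = h + ih, z^4 = -4h^4, so |f| = exp(1/(4h^4))
diag = [abs(f(complex(h, h))) for h in (0.3, 0.25, 0.2)]
```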
|
2,494,552 | <p>Give an example of metric spaces $X,Y,Z$ and a function $f:X\times Y \to Z$ such that :</p>
<p>(a)if $a\in X$, then $f(a,y)$ is continuous</p>
<p>(b)if $b\in Y$, then $f(x,b)$ is continuous</p>
<p>(c)$f:X\times Y \to Z$ is not continuous</p>
<p>Actually I think $f$ must be continuous if it satisfies (a) and (b) so I can't construct an example.</p>
| Math1000 | 38,584 | <p>Let $f:\mathbb R^2\to\mathbb R$ be defined by
$$
f(x,y) = \begin{cases}
\frac{xy}{x^2+y^2},& (x,y)\ne(0,0)\\
0,& (x,y)=(0,0).
\end{cases}
$$
Clearly $f$ is continuous on $\mathbb R^2\setminus\{(0,0)\}$. Now, for each $y\ne 0$ we have $f(0,y)=0$, so $\lim_{y\to 0}f(0,y)=0$, so the map $y\to f(0,y)$ is continuous, and by symmetry the map $x\to f(x,0)$ is continuous. However, considering the line $y=x$, we have
$$
\lim_{x\to 0} f(x,x) = \lim_{x\to 0} \frac{x^2}{2x^2} = \frac12\ne 0,
$$
and hence $f$ is not continuous at $(0,0)$.</p>
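The two limiting behaviours can also be seen in a couple of lines (a quick check I added):

```python
def f(x, y):
    return x * y / (x * x + y * y) if (x, y) != (0.0, 0.0) else 0.0

ts = [10.0 ** -k for k in range(1, 12)]
along_axis = max(abs(f(0.0, t)) for t in ts)   # -> 0 along the y-axis
along_diag = [f(t, t) for t in ts]             # -> 1/2 along the line y = x
```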
|