3,527,004
<p>As stated in the title, I want <span class="math-container">$f(x)=\frac{1}{x^2}$</span> to be expanded as a series with powers of <span class="math-container">$(x+2)$</span>. </p> <p>Let <span class="math-container">$u=x+2$</span>. Then <span class="math-container">$f(x)=\frac{1}{x^2}=\frac{1}{(u-2)^2}$</span></p> <p>Note that <span class="math-container">$$\int \frac{1}{(u-2)^2}du=\int (u-2)^{-2}du=-\frac{1}{u-2} + C$$</span></p> <p>Therefore, <span class="math-container">$\frac{d}{du} (-\frac{1}{u-2})= \frac{1}{x^2}$</span> and</p> <p><span class="math-container">$$\frac{d}{du} (-\frac{1}{u-2})= \frac{d}{du} (-\frac{1}{-2(1-\frac{u}{2})})=\frac{d}{du}(\frac{1}{2} \frac{1}{1-\frac{u}{2}})=\frac{d}{du} \Bigg( \frac{1}{2} \sum_{n=0}^\infty \bigg(\frac{u}{2}\bigg)^n\Bigg)$$</span></p> <p><span class="math-container">$$= \frac{d}{du} \Bigg(\sum_{n=0}^\infty \frac{u^n}{2^{n+1}}\Bigg)= \frac{d}{dx} \Bigg(\sum_{n=0}^\infty \frac{(x+2)^n}{2^{n+1}}\Bigg)= \sum_{n=0}^\infty \frac{d}{dx} \bigg(\frac{(x+2)^n}{2^{n+1}}\bigg)=$$</span></p> <p><span class="math-container">$$\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p> <p>From this we can conclude that </p> <p><span class="math-container">$$f(x)=\frac{1}{x^2}=\sum_{n=0}^\infty \frac{n}{2^{n+1}} (x+2)^{n-1}$$</span></p> <p>Is this solution correct?</p>
vonbrand
43,946
<p>Binomial theorem, expanding about <span class="math-container">$x=-2$</span>:</p> <p><span class="math-container">$\begin{align*} \frac{1}{x^2} &amp;= (-2 + (x + 2))^{-2} \\ &amp;= \frac{1}{4} \cdot \left( 1 - \frac{1}{2}(x + 2) \right)^{-2} \\ &amp;= \frac{1}{4} \cdot \sum_{k \ge 0} \binom{-2}{k} \left( -\frac{x + 2}{2} \right)^k \\ &amp;= \sum_{k \ge 0} \frac{k + 1}{2^{k + 2}} (x + 2)^k \end{align*}$</span></p> <p>using <span class="math-container">$\binom{-2}{k} = (-1)^k (k + 1)$</span>. This agrees with the series you obtained, after shifting the index by one. Any way you get a power series that converges to your function will give the same series. What way you select depends on taste/ease/familiarity.</p>
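A quick numerical sanity check (my addition, a Python sketch): the partial sums of the series in the question should approach $1/x^2$ at any point inside the interval of convergence $|x+2|<2$.

```python
# Sanity check: for |x+2| < 2, the series in the question,
# sum_{n>=1} n (x+2)^(n-1) / 2^(n+1), should equal 1/x^2.
def partial_sum(x, terms=60):
    u = x + 2
    return sum(n * u**(n - 1) / 2**(n + 1) for n in range(1, terms))

for x in (-1.5, -2.5, -1.1):
    assert abs(partial_sum(x) - 1 / x**2) < 1e-10
```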
1,694,159
<p>I am prepping for my mid semester exam, and came across with the following question:</p> <blockquote> <p>Find the closed form for the sum $\sum_{j=k}^n (-1)^{j+k}\binom{n}{j}\binom{j}{k}$, using the assumption that $k = 0, 1,...n$ and $n$ can be any natural number.</p> </blockquote> <p>So what I have done is to note the fact that $$\binom{n}{j}\binom{j}{k}= \frac{n!}{j!(n-j)!}\frac{j!}{k!(j-k)!}=\frac{n!}{(n-j)!\ k!\ (j-k)!}$$</p> <p>Then we can write the summation as $$\sum_{j=k}^n {(-1)^{j+k}\binom{n}{j}\binom{j}{k}}= \sum_{j=k}^n (-1)^{j+k} \frac{n!}{(n-j)!\ k!\ (j-k)!} = \frac{n!}{k!} \sum_{j=k}^n (-1)^{j+k} \frac{1}{(n-j)!\ (j-k)!} $$</p> <p>I tried to let $m=j-k$: $$\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m+2k} \frac{1}{(n-m-k)!\ m!}=\frac{n!}{k!} \sum_{m=0}^{n-k} (-1)^{m} \frac{1}{(n-m-k)!\ m!}$$</p> <p>But I am not sure what to proceed next. Any help would be highly appreciated!</p>
Felix Marin
85,343
<p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle} \newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\half}{{1 \over 2}} \newcommand{\ic}{\mathrm{i}} \newcommand{\iff}{\Leftrightarrow} \newcommand{\imp}{\Longrightarrow} \newcommand{\pars}[1]{\left(\, #1 \,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$ <em>When $\ds{k &gt; 0}$, $\ds{{j \choose k} = 0}$ when $0 \leq j &lt; k$ and the sum over $\ds{j}$ can be extended to $\ds{j = 0,1,2,\ldots,n}$. When $\ds{k = 0}$, the whole sum already 'start' at $\ds{j = 0}$. Then,</em> <hr> \begin{align} \color{#f00}{\sum_{j = k}^{n}\pars{-1}^{\,j + k}{n \choose j}{j \choose k}} &amp; = \pars{-1}^{\,k}\sum_{j = 0}^{n}\pars{-1}^{\,j}{n \choose j}\ \overbrace{% \oint_{\verts{z} = 1}{\pars{1 + z}^{j} \over z^{k + 1}} \,{\dd z \over 2\pi\ic}}^{\ds{=\ {j \choose k}}} \\[3mm] &amp; = \pars{-1}^{\,k}\oint_{\verts{z} = 1}{1 \over z^{k + 1}} \sum_{j = 0}^{n}{n \choose j}\pars{-1 - z}^{\,j}\,{\dd z \over 2\pi\ic} \\[3mm] &amp; = \pars{-1}^{k}\oint_{\verts{z} = 1}{1 \over z^{k + 1}} \bracks{1 + \pars{-1 - z}}^{\,n}\,{\dd z \over 2\pi\ic} = \pars{-1}^{k + n}\ \overbrace{\oint_{\verts{z} = 1}{1 \over z^{k + 1 - n}} \,{\dd z \over 2\pi\ic}}^{\ds{\delta_{kn}}} \\[3mm] &amp; = \color{#f00}{% \delta_{kn} = \left\lbrace\begin{array}{lcrcl} 0 &amp; \mbox{if} &amp; k &amp; \not= &amp; n \\ 1 &amp; \mbox{if} &amp; k &amp; = &amp; n \end{array}\right.} \end{align}</p>
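Not part of the original answer: the boxed identity can be brute-force checked in Python with `math.comb`.

```python
from math import comb

# Check that sum_{j=k}^n (-1)^(j+k) C(n,j) C(j,k) = delta_{kn}
# for a range of small n and all 0 <= k <= n.
def s(n, k):
    return sum((-1)**(j + k) * comb(n, j) * comb(j, k) for j in range(k, n + 1))

for n in range(0, 9):
    for k in range(0, n + 1):
        assert s(n, k) == (1 if k == n else 0)
```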
281,294
<p>Let $G$ be a finite abelian group.</p> <p>Is there a field $K$, and an elliptic curve $E$ over $K$ such that $E(K)_{tor} \cong G$?</p>
David Lampert
59,248
<p>If $n \geq 1$ and $2|mn$ then the Tate curve $E = L^*/2^{\mathbb Z}$ with $L={\mathbb Q}_2(\zeta_{mn}, 2^{1/n})$ has $E(L)_{tor} = {\mathbb Z}/n{\mathbb Z} \times {\mathbb Z}/mn{\mathbb Z}$. A 2-adically close elliptic curve over a suitable (i.e. containing the curve's corresponding torsion) number field $K \subset$ algebraic closure of ${\mathbb Q}$ in $L$ also has $E(K)_{tor} = {\mathbb Z}/n{\mathbb Z} \times {\mathbb Z}/mn{\mathbb Z}$. </p>
850,852
<p>This one comes from Gilbert Strang's Linear Algebra. Pick any numbers with $x+y+z = 0$. Find the angle between $\mathbf v=(x,y,z)$ and $\mathbf w=(z,x,y)$. </p> <p>Explain why $$\dfrac{\bf v\cdot w}{\bf \Vert v\Vert \cdot\Vert w\Vert}$$ is always $-0.5$. </p>
Christopher K
101,768
<p>Let $(***)$ be the equation, then $$(***) = \frac{xz+xy+yz}{x^2+y^2+z^2} = \frac{x(-x)+yz}{x^2+y^2+z^2} \\ = \frac{-x^2 + 1/2\cdot[(y+z)^2-y^2-z^2]}{x^2+y^2+z^2} \\ = \frac{-x^2 + 1/2\cdot[x^2 - y^2 - z^2]}{x^2+y^2+z^2} \\ = \frac{(-1/2)\cdot (x^2+y^2+z^2)}{x^2+y^2+z^2} \\ = -1/2,$$ as desired.</p>
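A numerical spot check of the identity (my addition, not from either post): for random $(x,y,z)$ with $x+y+z=0$, the cosine of the angle between $v$ and $w$ should come out to $-1/2$. Note $\lVert v\rVert=\lVert w\rVert$, so the denominator is just $x^2+y^2+z^2$.

```python
import random

# For any (x, y, z) with x + y + z = 0, the cosine of the angle
# between v = (x, y, z) and w = (z, x, y) is -1/2.
random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-10, 10), random.uniform(-10, 10)
    z = -x - y
    dot = x * z + y * x + z * y          # v . w
    norm2 = x * x + y * y + z * z        # ||v|| * ||w||, since the norms agree
    if norm2 > 1e-9:                     # skip the degenerate zero vector
        assert abs(dot / norm2 - (-0.5)) < 1e-9
```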
1,885,177
<p>What is the meaning of $x$ raised to any non-positive value? We know that $x^{-a} = \dfrac{1}{x^a}$ and $x^0 = 1$, but where does that come from? What is the proof? Why is this true? What about $x$ raised to a fraction, like say $\frac{1}{3}$? <em>How do you multiply $x$ $\frac{1}{3}$ times?</em></p>
Simply Beautiful Art
272,831
<p>We know that $x^{a+b}=x^ax^b$ since an $(a+b)$ amount of $x$'s is the same as an $a$ amount of $x$'s times a $b$ amount of $x$'s.</p> <p>From this, we have</p> <p>$$x^0=x^{a-a}=x^ax^{-a}$$</p> <p>Thus, we have</p> <p>$$x^{-a}=\frac{x^0}{x^a}$$</p> <p>and once you show $x^0=1$, you're done.</p> <p>As for fractions, write</p> <p>$$x^{\frac ab}=y$$</p> <p>for some value of $y$ that we are trying to find.</p> <p>We see this is the same as</p> <p>$$x^{\frac ab}=\underbrace{x\times x\times x\times\dots x}_{a/b?}=y$$</p> <p>However, raising both sides to the $b$th power, we have</p> <p>$$\underbrace{x\times x\times x\times\dots x}_{a/b}=y\implies\underbrace{x\times x\times x\times\dots x}_{a}=y^b$$</p> <p>and so,</p> <p>$$\underbrace{x\times x\times x\times\dots x}_{a/b}=\sqrt[b]{\underbrace{x\times x\times x\times\dots x}_{a}}$$</p> <blockquote> <p>$$x^{\frac ab}=\sqrt[b]{x^a}$$</p> </blockquote>
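A quick numerical illustration of the boxed identity (my addition): for positive $x$, $x^{a/b}$ agrees with the $b$-th root of $x^a$.

```python
# For x > 0, x^(a/b) equals the b-th root of x^a.
for x in (2.0, 5.0, 9.7):
    for a in range(1, 5):
        for b in range(1, 5):
            lhs = x ** (a / b)
            rhs = (x ** a) ** (1 / b)
            assert abs(lhs - rhs) < 1e-9 * lhs
```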
1,885,177
<p>What is the meaning of $x$ raised to any non-positive value? We know that $x^{-a} = \dfrac{1}{x^a}$ and $x^0 = 1$, but where does that come from? What is the proof? Why is this true? What about $x$ raised to a fraction, like say $\frac{1}{3}$? <em>How do you multiply $x$ $\frac{1}{3}$ times?</em></p>
Arthur
15,500
<p>From that definition, it doesn't make sense. It's what we mathematicians call a generalisation. The original definition of powers only allow natural numbers to be exponents.</p> <p>However, one can ask "Is it possible to come up with a definition of $x^{-a}$ that plays well along with the old definition?" By which we mean that $x^{b+c}=x^bx^c$ and $(x^b)^c=x^{bc}$ still holds. And it turns out that there is exactly one definition that works.</p> <p>In the same way you get $x^{1/n}=\sqrt[n]x$. Not because the old definition says so, but because it's the only thing it can be if we want it to work.</p>
1,514,094
<p>Given three points on the $xy$ plane: $O(0,0),A(1,0)$ and $B(-1,0)$. Point $P$ moves on the plane satisfying the condition $(\vec{PA}.\vec{PB})+3(\vec{OA}.\vec{OB})=0$.<br> If the maximum and minimum values of $|\vec{PA}||\vec{PB}|$ are $M$ and $m$ respectively, then prove that $M^2+m^2=34$.</p> <hr> <p>My Attempt:<br> Let the position vector of $P$ be $x\hat{i}+y\hat{j}$. Then $\vec{PA}=(1-x)\hat{i}-y\hat{j}$ and $\vec{PB}=(-1-x)\hat{i}-y\hat{j}$<br> $\vec{OA}=\hat{i},\vec{OB}=-\hat{i}$<br> $(\vec{PA}.\vec{PB})+3(\vec{OA}.\vec{OB})=0$ gives<br> $x^2-1+y^2-3=0$<br> $x^2+y^2=4$<br> $|\vec{PA}||\vec{PB}|=\sqrt{(1-x)^2+y^2}\sqrt{(-1-x)^2+y^2}=\sqrt{(x^2+y^2-2x+1)(x^2+y^2+2x+1)}$<br> $|\vec{PA}||\vec{PB}|=\sqrt{(5-2x)(5+2x)}=\sqrt{25-4x^2}$<br> I found the domain of $\sqrt{25-4x^2}$, which is $\frac{-5}{2}\leq x \leq \frac{5}{2}$. Then I found the minimum and maximum values of $\sqrt{25-4x^2}$, which come out to be $M=5$ and $m=0$. So $M^2+m^2=25$.<br><br> But I need to prove $M^2+m^2=34$. Where have I gone wrong? Please help me. Thanks.</p>
cr001
254,175
<p>The answer is correct and you are wrong.</p> <p>The mistake in your solution happens when you apply the Cauchy inequality: you forgot to square the right side.</p> <p>$(x_1^2+x^2_2+x_3^2+x_4^2+x_5^2)(1+1+1+1+1)\ge(x_1+x_2+x_3+x_4+x_5)^2=400$</p> <p>So $x_1^2+x^2_2+x_3^2+x_4^2+x_5^2\ge\frac{400}{5}=80$</p> <p>and $b_{max}=\frac{1}{2}[20^2-80]=160$.</p> <p>A simple check shows your answer is wrong: the coefficient of $x^3$ in $(x-4)^5$ is $160$, not $198$.</p>
1,514,094
<p>Given three points on the $xy$ plane: $O(0,0),A(1,0)$ and $B(-1,0)$. Point $P$ moves on the plane satisfying the condition $(\vec{PA}.\vec{PB})+3(\vec{OA}.\vec{OB})=0$.<br> If the maximum and minimum values of $|\vec{PA}||\vec{PB}|$ are $M$ and $m$ respectively, then prove that $M^2+m^2=34$.</p> <hr> <p>My Attempt:<br> Let the position vector of $P$ be $x\hat{i}+y\hat{j}$. Then $\vec{PA}=(1-x)\hat{i}-y\hat{j}$ and $\vec{PB}=(-1-x)\hat{i}-y\hat{j}$<br> $\vec{OA}=\hat{i},\vec{OB}=-\hat{i}$<br> $(\vec{PA}.\vec{PB})+3(\vec{OA}.\vec{OB})=0$ gives<br> $x^2-1+y^2-3=0$<br> $x^2+y^2=4$<br> $|\vec{PA}||\vec{PB}|=\sqrt{(1-x)^2+y^2}\sqrt{(-1-x)^2+y^2}=\sqrt{(x^2+y^2-2x+1)(x^2+y^2+2x+1)}$<br> $|\vec{PA}||\vec{PB}|=\sqrt{(5-2x)(5+2x)}=\sqrt{25-4x^2}$<br> I found the domain of $\sqrt{25-4x^2}$, which is $\frac{-5}{2}\leq x \leq \frac{5}{2}$. Then I found the minimum and maximum values of $\sqrt{25-4x^2}$, which come out to be $M=5$ and $m=0$. So $M^2+m^2=25$.<br><br> But I need to prove $M^2+m^2=34$. Where have I gone wrong? Please help me. Thanks.</p>
DuFong
193,997
<p>Equality in your bound holds when $x_1=x_2=...=x_5=4$, so the $x_i$ are the roots of $$(x-4)^5=0$$</p> <p>Expanding the left side, the coefficient of $x^4$ is $$C(5,4)\cdot(-4)^1=-20$$ and the coefficient of $x^3$ is $$C(5,3)\cdot(-4)^2=160$$</p>
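The two coefficients quoted above can be checked directly (my addition, a short Python sketch using the binomial theorem for $(x-4)^5$):

```python
from math import comb

# Coefficient of x^k in (x - 4)^5 is C(5, k) * (-4)^(5 - k).
def coeff(k):
    return comb(5, k) * (-4) ** (5 - k)

assert coeff(4) == -20       # coefficient of x^4
assert coeff(3) == 160       # coefficient of x^3
# Cross-check: summing all coefficients evaluates the polynomial at x = 1.
assert sum(coeff(k) for k in range(6)) == (1 - 4) ** 5
```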
1,400,399
<p>Here is an indefinite integral that is similar to an integral I want to propose for a contest. Apart from using a CAS, do you see any very easy way of calculating it?</p> <p>$$\int \frac{1+2x +3 x^2}{\left(2+x+x^2+x^3\right) \sqrt{1+\sqrt{2+x+x^2+x^3}}} \, dx$$</p> <p><strong>EDIT:</strong> It is part of the generalization </p> <p>$$\int \frac{1+2x +3 x^2+\cdots+ n x^{n-1}}{\left(2+x+x^2+\cdots+ x^n\right) \sqrt{1\pm\sqrt{2+x+x^2+\cdots +x^n}}} \, dx$$</p> <p><strong>Supplementary question:</strong> How would you calculate the following integral using the generalization above? Would you prefer another way?</p> <p>$$\int_0^{1/2} \frac{1}{\left(x^2-3 x+2\right)\sqrt{\sqrt{\frac{x-2}{x-1}}+1} } \, dx$$</p> <p>As a note, the generalization above and slightly modified versions of it can be used wisely for calculating very hard integrals.</p>
robjohn
13,854
<p>Let $\sinh^4(t)=x^3+x^2+x+2$. Then $\cosh(t)=\sqrt{1+\sqrt{x^3+x^2+x+2}}$ and $$ \begin{align} &amp;\int\frac{3x^2+2x+1}{\left(x^3+x^2+x+2\right)\sqrt{1+\sqrt{x^3+x^2+x+2}}} \,\mathrm{d}x\\ &amp;=\int\frac{\mathrm{d}\sinh^4(t)}{\sinh^4(t)\cosh(t)}\\ &amp;=4\int\frac{\mathrm{d}t}{\sinh(t)}\\ &amp;=4\int\frac{\mathrm{d}\cosh(t)}{\cosh^2(t)-1}\\ &amp;=2\int\left(\frac1{\cosh(t)-1}-\frac1{\cosh(t)+1}\right)\mathrm{d}\cosh(t)\\[2pt] &amp;=2\log\left(\frac{\cosh(t)-1}{\cosh(t)+1}\right)+C\\[6pt] &amp;=-4\operatorname{arccoth}(\cosh(t))+C\\[6pt] &amp;=-4\operatorname{arccoth}\left(\sqrt{1+\sqrt{x^3+x^2+x+2}}\right)+C \end{align} $$</p>
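The closed form can be verified numerically (my addition): differentiating the antiderivative by central differences should recover the integrand, using $\operatorname{arccoth}(z)=\operatorname{artanh}(1/z)$ for $z>1$.

```python
from math import atanh, sqrt

# Antiderivative from the answer vs. the original integrand.
P = lambda x: x**3 + x**2 + x + 2
F = lambda x: -4 * atanh(1 / sqrt(1 + sqrt(P(x))))          # arccoth(z) = atanh(1/z)
integrand = lambda x: (3 * x**2 + 2 * x + 1) / (P(x) * sqrt(1 + sqrt(P(x))))

h = 1e-5
for x in (0.0, 0.5, 1.0, 2.0):
    num_deriv = (F(x + h) - F(x - h)) / (2 * h)             # central difference
    assert abs(num_deriv - integrand(x)) < 1e-6
```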
540,135
<p>$\newcommand{\lcm}{\operatorname{lcm}}$Is $\lcm(a,b,c)=\lcm(\lcm(a,b),c)$?</p> <p>I managed to show thus far that $a,b,c\mid\lcm(\lcm(a,b),c)$, yet I'm unable to prove that $\lcm(\lcm(a,b),c)$ is the least such number... </p>
lab bhattacharjee
33,337
<p>Let the highest power of prime $p$ in $a,b,c$ be $A,B,C$ respectively.</p> <p>The highest power of prime $p$ in lcm$(a,b,c)=$max$(A,B,C)$</p> <p>The highest power of prime $p$ in lcm$(a,b)=$max$(A,B)$</p> <p>The highest power of prime $p$ in lcm$($lcm$(a,b),c)=$max$($max$(A,B),C)$</p> <p>Can you see max$($max$(A,B),C)=$max$(A,B,C)$ ?</p> <p>This holds true for any prime that divides at least one of $a,b,c$</p>
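A brute-force confirmation of the conclusion (my addition): compare the nested two-argument lcm against the least common multiple found by direct search.

```python
from math import gcd

def lcm2(x, y):
    return x * y // gcd(x, y)

def lcm3(a, b, c):
    # smallest positive common multiple, straight from the definition
    m = max(a, b, c)
    while m % a or m % b or m % c:
        m += 1
    return m

for a in range(1, 13):
    for b in range(1, 13):
        for c in range(1, 13):
            assert lcm3(a, b, c) == lcm2(lcm2(a, b), c)
```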
540,135
<p>$\newcommand{\lcm}{\operatorname{lcm}}$Is $\lcm(a,b,c)=\lcm(\lcm(a,b),c)$?</p> <p>I managed to show thus far that $a,b,c\mid\lcm(\lcm(a,b),c)$, yet I'm unable to prove that $\lcm(\lcm(a,b),c)$ is the least such number... </p>
tchappy ha
384,082
<blockquote> <p>Lemma:<br /> Let <span class="math-container">$l:=\operatorname{lcm}(a,b,\dots,z)$</span>.<br /> Let <span class="math-container">$m$</span> be a common multiple of <span class="math-container">$a,b,\dots z$</span>.<br /> Then, <span class="math-container">$m$</span> is a multiple of <span class="math-container">$l$</span>.</p> <p>Proof:<br /> We can write <span class="math-container">$m=ql+r$</span>, <span class="math-container">$0\leq r&lt;l$</span>.<br /> Then, <span class="math-container">$r=m-ql$</span>.</p> <p>Both <span class="math-container">$m$</span> and <span class="math-container">$l$</span> are a multiple of <span class="math-container">$a$</span>.<br /> So, <span class="math-container">$r$</span> is a multiple of <span class="math-container">$a$</span>.<br /> Both <span class="math-container">$m$</span> and <span class="math-container">$l$</span> are a multiple of <span class="math-container">$b$</span>.<br /> So, <span class="math-container">$r$</span> is a multiple of <span class="math-container">$b$</span>.<br /> <span class="math-container">$\dots$</span><br /> Both <span class="math-container">$m$</span> and <span class="math-container">$l$</span> are a multiple of <span class="math-container">$z$</span>.<br /> So, <span class="math-container">$r$</span> is a multiple of <span class="math-container">$z$</span>.</p> <p>So, <span class="math-container">$r$</span> is a common multiple of <span class="math-container">$a,b\dots, z$</span>.<br /> If <span class="math-container">$0&lt;r&lt;l$</span>, then <span class="math-container">$r$</span> is a common multiple of <span class="math-container">$a,b\dots, z$</span> which is less than the least common multiple of <span class="math-container">$a,b\dots, z$</span>.<br /> This is a contradiction.<br /> So, <span class="math-container">$r$</span> must be <span class="math-container">$0$</span>.<br /> So, <span class="math-container">$m=ql+r=ql$</span>.<br /> So, <span 
class="math-container">$m$</span> is a multiple of <span class="math-container">$l$</span>.</p> </blockquote> <blockquote> <p><span class="math-container">$\operatorname{lcm}(a,b,c)=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span>.<br /> Proof:<br /> Let <span class="math-container">$l:=\operatorname{lcm}(a,b,c)$</span> and <span class="math-container">$l^{'}:=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span>.<br /> <span class="math-container">$l$</span> is a common multiple of <span class="math-container">$a$</span>, <span class="math-container">$b$</span> and <span class="math-container">$c$</span>.<br /> So, <span class="math-container">$l$</span> is a common multiple of <span class="math-container">$a$</span> and <span class="math-container">$b$</span>.<br /> So, by the above lemma, <span class="math-container">$l$</span> is a multiple of <span class="math-container">$\operatorname{lcm}(a,b)$</span>.<br /> <span class="math-container">$l$</span> is a multiple of <span class="math-container">$c$</span>.<br /> So, <span class="math-container">$l$</span> is a common multiple of <span class="math-container">$\operatorname{lcm}(a,b)$</span> and <span class="math-container">$c$</span>.<br /> So, by the above lemma, <span class="math-container">$l$</span> is a multiple of <span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span>.</p> <p><span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span> is a multiple of <span class="math-container">$\operatorname{lcm}(a,b)$</span>.<br /> <span class="math-container">$\operatorname{lcm}(a,b)$</span> is a multiple of <span class="math-container">$a$</span>.<br /> So, <span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span> is a multiple of <span class="math-container">$a$</span>.<br /> <span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span> is a multiple of <span 
class="math-container">$\operatorname{lcm}(a,b)$</span>.<br /> <span class="math-container">$\operatorname{lcm}(a,b)$</span> is a multiple of <span class="math-container">$b$</span>.<br /> So, <span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span> is a multiple of <span class="math-container">$b$</span>.<br /> <span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span> is a multiple of <span class="math-container">$c$</span>.<br /> So, <span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span> is a common multiple of <span class="math-container">$a$</span>,<span class="math-container">$b$</span> and <span class="math-container">$c$</span>.<br /> So, by the above lemma, <span class="math-container">$l^{'}=\operatorname{lcm}(\operatorname{lcm}(a,b),c)$</span> is a multiple of <span class="math-container">$l=\operatorname{lcm}(a,b,c)$</span>.</p> <p><span class="math-container">$l$</span> is a multiple of <span class="math-container">$l^{'}$</span> and <span class="math-container">$l^{'}$</span> is a multiple of <span class="math-container">$l$</span>.<br /> So, <span class="math-container">$l=l^{'}$</span>.</p> </blockquote>
2,986,647
<p>If I know coordinates of point <span class="math-container">$A$</span>, coordinates of circle center <span class="math-container">$B$</span> and <span class="math-container">$r$</span> is the radius of the circle, is it possible to calculate the angle of the lines that are passing through point A that are also tangent to the circle?</p> <p><a href="https://i.stack.imgur.com/VZu8a.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VZu8a.png" alt="tangents_to_a_circle_from_an_external_point"></a></p> <p><span class="math-container">$A$</span> is the green point, <span class="math-container">$B$</span> is the center of the red circle and I am trying to find out the angle of the blue lines.</p>
Fnacool
318,321
<p>Here's a different approach. </p> <p>1) <span class="math-container">$$E[e^{-s|B_t|^2}] = E[e^{-ts N(0,1)^2}]^3.$$</span> </p> <p>2) <span class="math-container">\begin{align*} E[e^{-u N(0,1)^2}] &amp; = \frac{1}{\sqrt{2\pi}}\int e^{-x^2 ( u + 1/2) }dx\\ &amp; = (2u+1)^{-1/2}. \end{align*}</span> </p> <p>3) <span class="math-container">$$x^{-1} = \int_0^\infty e^{-s x} ds.$$</span> </p> <p>Putting it all together: </p> <p><span class="math-container">\begin{align*} E [|B_t|^{-2} ] &amp; = E\left[\int_0^\infty e^{-s|B_t|^2} \,ds\right] \\ &amp; = \int_0^\infty E [ e^{-s|B_t|^2}] ds \\ &amp; = \int_0^\infty (2st+1)^{-3/2} ds \\ &amp; = -\frac{2}{(2t)}(2st+1)^{-1/2} \left.\right|_{s=0}^{s=\infty}\\ &amp; = \frac{1}{t} \end{align*}</span> </p>
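Step 2 can be spot-checked numerically (my addition): a plain trapezoidal approximation of the Gaussian integral should reproduce $(2u+1)^{-1/2}$ to high accuracy, since the integrand decays rapidly.

```python
from math import exp, pi, sqrt

# E[e^{-u N(0,1)^2}] = (1/sqrt(2*pi)) * integral of e^{-x^2 (u + 1/2)} dx
def mgf(u, lo=-12.0, hi=12.0, n=20_000):
    h = (hi - lo) / n
    total = 0.0
    for i in range(n + 1):
        x = lo + i * h
        w = 0.5 if i in (0, n) else 1.0      # trapezoidal end weights
        total += w * exp(-x * x * (u + 0.5))
    return total * h / sqrt(2 * pi)

for u in (0.1, 0.5, 1.0, 3.0):
    assert abs(mgf(u) - (2 * u + 1) ** -0.5) < 1e-8
```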
3,127,459
<p>If <span class="math-container">$\alpha,\beta,\gamma$</span> are the roots of the equation <span class="math-container">$x^3 -x -1 =0$</span>, then find the value of</p> <p><span class="math-container">$$ \frac{1+\alpha}{1-\alpha} + \frac{1+\beta}{1-\beta} + \frac{1+\gamma}{1-\gamma} $$</span></p> <p>My attempt is in the attachment <img src="https://i.stack.imgur.com/3v92W.jpg" alt="My attempt"></p> <p>I got the answer <span class="math-container">$0$</span>, but the book gives <span class="math-container">$-7$</span>. Where did I make a mistake in solving the question?</p>
David Quinn
187,299
<p>Hint: write <span class="math-container">$$y=\frac{1+x}{1-x}\implies x=\frac{y-1}{y+1}$$</span> and substitute into the polynomial. Simplify the polynomial in <span class="math-container">$y$</span> and find the sum of the roots.</p> <p>Alternatively,<span class="math-container">$$\Sigma\frac{1+\alpha}{1-\alpha}=-3+2\Sigma\frac{1}{1-\alpha}$$</span> <span class="math-container">$$=-3+2\frac{3+\Sigma\alpha\beta-2\Sigma\alpha}{1-\Sigma\alpha+\Sigma\alpha\beta-\alpha\beta\gamma}$$</span> <span class="math-container">$$=-3+2\frac{3-1-0}{1-0-1-1}=-7$$</span></p>
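The value $-7$ can also be confirmed numerically (my addition, standard library only): find the real root of $x^3-x-1$ by bisection, get the complex pair from the quadratic factor $x^2+rx+(r^2-1)$, and evaluate the sum directly.

```python
from cmath import sqrt as csqrt

# Real root of p(x) = x^3 - x - 1 by bisection on [1, 2].
lo, hi = 1.0, 2.0
for _ in range(80):
    mid = (lo + hi) / 2
    if mid**3 - mid - 1 < 0:
        lo = mid
    else:
        hi = mid
r = (lo + hi) / 2

# x^3 - x - 1 = (x - r)(x^2 + r x + (r^2 - 1)), since r^3 - r = 1.
b, c = r, r * r - 1
disc = csqrt(b * b - 4 * c)
roots = [r, (-b + disc) / 2, (-b - disc) / 2]

total = sum((1 + z) / (1 - z) for z in roots)
assert abs(total - (-7)) < 1e-9
```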
3,127,459
<p>If <span class="math-container">$\alpha,\beta,\gamma$</span> are the roots of the equation <span class="math-container">$x^3 -x -1 =0$</span>, then find the value of</p> <p><span class="math-container">$$ \frac{1+\alpha}{1-\alpha} + \frac{1+\beta}{1-\beta} + \frac{1+\gamma}{1-\gamma} $$</span></p> <p>My attempt is in the attachment <img src="https://i.stack.imgur.com/3v92W.jpg" alt="My attempt"></p> <p>I got the answer <span class="math-container">$0$</span>, but the book gives <span class="math-container">$-7$</span>. Where did I make a mistake in solving the question?</p>
DXT
372,201
<p><span class="math-container">$$\sum \frac{1+\alpha}{1-\alpha}=2\sum\frac{1}{1-\alpha}-3\cdots (1)$$</span></p> <p>Where <span class="math-container">$$\sum\frac{1+\alpha}{1-\alpha}=\frac{1+\alpha}{1-\alpha}+\frac{1+\beta}{1-\beta}+\frac{1+\gamma}{1-\gamma}.$$</span></p> <p>Equation <span class="math-container">$(1)$</span> follows from writing each term as <span class="math-container">$\frac{1+\alpha}{1-\alpha}=\frac{2}{1-\alpha}-1$</span> and summing.</p> <p>Now <span class="math-container">$$x^3-x-1=(x-\alpha)(x-\beta)(x-\gamma)$$</span></p> <p>Taking <span class="math-container">$\ln$</span> of both sides, differentiating, and putting <span class="math-container">$x=1$</span> gives</p> <p><span class="math-container">$$\frac{3(1)^2-1}{(1)^3-(1)-1}=\frac{1}{1-\alpha}+\frac{1}{1-\beta}+\frac{1}{1-\gamma}$$</span></p> <p>So we get <span class="math-container">$\displaystyle \sum \frac{1}{1-\alpha}=-2$</span></p> <p>Putting this into <span class="math-container">$(1)$</span>, we have <span class="math-container">$$\displaystyle \sum\frac{1+\alpha}{1-\alpha}=-4-3=-7.$$</span></p>
3,264,333
<p>I am working on my scholarship exam practice and not sure how to begin. Please assume math knowledge at high school or pre-university level.</p> <blockquote> <p>Let <span class="math-container">$a$</span> be a real constant. If the constant term of <span class="math-container">$(x^3 + \frac{a}{x^2})^5$</span> is equal to <span class="math-container">$-270$</span>, then <span class="math-container">$a=$</span>......</p> </blockquote> <p>Could you please give a hint for this question? The answer provided is <span class="math-container">$-3$</span>.</p>
paulinho
474,578
<p>From the binomial theorem, we know that the powers of <span class="math-container">$x^3$</span> and <span class="math-container">$a/x^2$</span> must add to <span class="math-container">$5$</span>. So if our term in the expansion is <span class="math-container">$k (x^3)^p (\frac{a}{x^2})^q$</span>, then <span class="math-container">$p+q = 5$</span>. Moreover, we want our term to be constant (no nonzero powers of <span class="math-container">$x$</span>), so we have <span class="math-container">$3p - 2q = 0$</span>. Solving these two equations, we have <span class="math-container">$p = 2, q =3$</span>. So we must find <span class="math-container">$a$</span> such that the coefficient of the <span class="math-container">$(x^3)^2 (\frac{a}{x^2})^3$</span> term is <span class="math-container">$-270$</span>.</p> <p>The binomial theorem says that the coefficient of this term in the expansion is <span class="math-container">${5 \choose 2} = 10$</span>, so the constant term is <span class="math-container">$10a^3$</span>, which we want to equal <span class="math-container">$-270$</span>. Therefore, <span class="math-container">$a = -3$</span>.</p>
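The arithmetic can be checked mechanically (my addition): scan the general term $\binom{5}{q} a^q x^{3(5-q)-2q}$ for the exponent-zero term.

```python
from math import comb

# Constant term of (x^3 + a/x^2)^5: find q with exponent 3(5-q) - 2q = 0.
def constant_term(a):
    for q in range(6):
        if 3 * (5 - q) - 2 * q == 0:
            return comb(5, q) * a**q
    return 0

assert constant_term(-3) == -270     # q = 3 gives C(5,3) * (-3)^3 = 10 * (-27)
```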
2,962,193
<p><strong>Q</strong>: If <span class="math-container">$2\cos p=x+\frac{1}{x}$</span> and <span class="math-container">$2\cos q=y+\frac{1}{y}$</span>, then show that <span class="math-container">$2\cos(mp-nq)$</span> is one of the values of <span class="math-container">$\left( \frac{x^m}{y^n}+\frac{y^n}{x^m} \right)$</span><br><strong>My Approach</strong>: <span class="math-container">$2\cos p=x+\frac{1}{x}\Rightarrow x^2-2x\cos p+1=0$</span>; solving this equation I get <span class="math-container">$$x=\cos p\pm i\sin p$$</span> and <strong>similarly</strong> <span class="math-container">$$y=\cos q\pm i\sin q$$</span> I guess that <span class="math-container">$$x^m=\cos mp\pm i\sin mp,\qquad y^n=\cos nq\pm i\sin nq$$</span> may be needed, but now I get stuck. Any hints or solution will be appreciated.<br>Thanks in advance.</p>
David G. Stork
210,401
<p>If <span class="math-container">$x$</span>, <span class="math-container">$y$</span> and <span class="math-container">$f(x,y)$</span> are integers or integer-valued, then there are more inputs than available outputs. By the pigeonhole principle, no such function exists.</p> <p>If <span class="math-container">$f$</span> can be real-valued, then there are infinitely many such functions, e.g., <span class="math-container">$f(x,y) = x - 1 + y/1000000$</span>.</p>
1,829,342
<p>So I know that $\sum_{i\geq 0}{n \choose 2i}=2^{n-1}=\sum_{i\geq 0}{n \choose 2i-1}$. However, I need formulas for $\sum_{i\geq 0}i{n \choose 2i}$ and $\sum_{i\geq 0}i{n \choose 2i-1}$. Can anyone point me to a formula with proof for these two sums? My searches thus far have only turned up those first two sums without the $i$ coefficient in the summand. Thanks!</p>
Hao S
202,187
<p>For $n$ even you can use $ {n \choose 2i} = {n \choose n-2i}$: substituting $i \mapsto \frac n2 - i$ rewrites the sum as $\sum_i \left(\frac n2 - i\right) {n \choose 2i}$. Adding this to the original sum gives $$2\sum_i i{n \choose 2i} = \frac n2 \sum_i {n \choose 2i} = \frac n2 \cdot 2^{n-1},$$ so $\sum_i i {n \choose 2i} = n\,2^{n-3}$.</p>
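Carrying the reflection trick through gives the closed form $\sum_i i\binom{n}{2i} = n\,2^{n-3}$ (my statement of the result; a direct check suggests it holds for odd $n \ge 3$ as well), which can be brute-force verified:

```python
from math import comb

# Check sum_i i * C(n, 2i) = n * 2^(n-3) for small n.
for n in range(3, 15):
    s = sum(i * comb(n, 2 * i) for i in range(0, n // 2 + 1))
    assert s == n * 2 ** (n - 3)
```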
285,548
<p>I asked the following question on math.SE (<a href="https://math.stackexchange.com/questions/2420298/bvps-for-elliptic-pdos-when-do-green-functions-l2-inverses-define-pseudo-d">https://math.stackexchange.com/questions/2420298/bvps-for-elliptic-pdos-when-do-green-functions-l2-inverses-define-pseudo-d</a>) just over two months ago, and it only received one, rather unsatisfactory to me, answer there. I'm wondering if people here can have a look. Some related questions have been posed <a href="https://mathoverflow.net/questions/188815/greens-operator-of-elliptic-differential-operator">here</a> and <a href="https://mathoverflow.net/questions/69521/estimates-on-the-green-function-of-an-elliptic-second-order-differential-operato">here</a> (and I noted in particular the literature link provided in the answer to the latter question). However, neither of those discussions cover the possibility of boundary conditions being present, or possible lack of compactness.</p> <h1>Repost of the original question</h1> <p>Let me illustrate my question by starting with the simplest possible example: Let us consider $P := - \mathrm{d}^2/\mathrm{d}x^2$, an elliptic partial differential operator on $\mathbb{R}$; let us also consider the following boundary-value problem on the interval $\overline{\Omega} = [0,1]$: \begin{equation} P u = f, \qquad u(0)=u(1)=0. \end{equation} As is (I think) well-known, when seen as an operator $L^2(\Omega) \to L^2(\Omega)$, $P$ is unbounded. However, it is closed on the dense domain $D(P) := H^2(\Omega) \cap H_0^1(\Omega)$ where $H_0^1(\Omega)$ is the closure of $C_{\mathrm{c}}^\infty(\Omega)$ in the $H^1$ norm (so that any element of this space has vanishing trace on $\partial \Omega = \{0,1\}$, i.e. it satisfies the Dirichlet boundary condition above in a weak sense). Furthermore, $0$ is in the resolvent of $(P,D(P))$, i.e. there exists a bounded inverse $P^{-1} : L^2(\Omega) \to L^2(\Omega)$. 
In fact, in this example the inverse is easily computed: it is the integral operator defined by the (continuous, as it happens) kernel \begin{equation} G(x,y) = \begin{cases} x(1-y) &amp; x \leq y \\ y(1-x) &amp;x &gt; y \end{cases}, \quad (x,y) \in \Omega \times \Omega. \end{equation} Of course, when viewed as a distribution in $\mathscr{D}'(\Omega \times \Omega)$, $G$ is the Schwartz kernel of $P^{-1}$ which we know on abstract grounds must exist since $P^{-1} : C_{\mathrm{c}}^{\infty}(\Omega) \to \mathscr{D}'(\Omega)$ is continuous.</p> <p>My question is the following: in this example and in more general examples where $P$ is a second-order elliptic differential operator on, say, an open (and not necessarily compact) region $\Omega$ with smooth boundary in $\mathbb{R}^n$, and assuming that we can find a suitable dense domain $D(P)$ for $P$ as above so that $(P,D(P))$ has a bounded inverse $P^{-1} : L^2(\Omega) \to L^2(\Omega)$, <strong>does the Schwartz kernel $G$ of $P^{-1}$ always define a pseudodifferential operator on $\Omega$?</strong></p> <h1>Addendum for MO</h1> <p>User mcd on math.SE points out that the Boutet de Monvel calculus ought to be relevant here. Aside from wishing to see exactly how this is, I wonder whether the possible lack of compactness (of $\Omega$) might cause problems in such an approach.</p> <h1>UPDATE</h1> <p>I have reduced my question to the following subproblem: <a href="https://mathoverflow.net/questions/287167/composition-of-a-smoothing-operator-with-an-l2-bounded-operator-non-compact">Composition of a smoothing operator with an $L^2$-bounded operator, non-compact Riemannian manifold</a>, as explained in a comment there.</p>
Bombyx mori
18,850
<p><a href="http://www.maths.ed.ac.uk/~aar/papers/melrose.pdf" rel="nofollow noreferrer">The book</a> by Melrose was almost exactly written to address issues like the ones you mentioned above, where the very first example is the one in the question. The construction of the parametrix is the focal point of the book. However, the book is quite dense and difficult to read, and it also has a fair amount of typos. There are now more reader-friendly <a href="https://faculty.math.illinois.edu/~palbin/AnalysisOnMfds/LectureNotes.pdf" rel="nofollow noreferrer">lecture notes</a> by his academic descendants. Depending on your exact research problem, I am not sure if you need the de Monvel calculus, which is much more difficult than the $b$-calculus and $c$-calculus constructed in Melrose's book. The expert on the site is <a href="https://mathoverflow.net/users/17969/rafe-mazzeo">Rafe Mazzeo</a>, whom you may contact directly. </p>
2,560,556
<p>Let $X,Y,Z$ be topological spaces and let $p:X\rightarrow Y$ be a continuous surjection. Suppose that a map $f:Y\rightarrow Z$ is continuous if and only if $f\circ p:X\rightarrow Z$ is continuous.</p> <p>I want to prove that this makes $p$ a quotient map. </p> <p>My thoughts:</p> <p>Since $p$ is a continuous surjection, all I need is for $p$ to also be open.</p> <p>If I can show that $p^{-1}$ exists and is continuous, then $p$ must be open, and therefore a quotient map. Since $p$ is surjective, I know that $p$ at least has a right inverse, so some function $g$ exists such that $p\circ g = Id_Y$.</p> <p>I don't know how to proceed, however. Am I on the right track?</p>
Henno Brandsma
4,280
<p>So $p:X \to Y$ obeys the property that </p> <blockquote> <p>for all functions $g: Y \to Z$, $g$ is continuous iff $g \circ p$ is continuous.</p> </blockquote> <p>Then suppose that $U$ is a subset of $Y$ such that $p^{-1}[U]$ is open in $X$. Then define $Z = \{0,1\}$ with the topology $\{\{0\}, \emptyset, Z\}$ (the Sierpinski space) and define $g: Y \to Z$ by $g(y) = 0$ if $y \in U$, $g(y) = 1$ otherwise. Then $(g \circ p)^{-1}[\{0\}] = p^{-1}[g^{-1}[\{0\}]] = p^{-1}[U]$ is open, and as $\{0\}$ is the only non-trivial open set of $Z$, $g \circ p$ is continuous (the inverse image of $Z$ is just $X$, and of $\emptyset$ is $\emptyset$ again, so these never have to be checked), and by the property of $p$ we know that $g$ is continuous, so $g^{-1}[\{0\}] = U$ is open in $Y$, as required.</p> <p>On the other hand, if $U$ is open in $Y$ then the same function $g$ is continuous and so $g \circ p$ is continuous, which implies that $p^{-1}[U]= (g \circ p)^{-1}[\{0\}]$ is open in $X$. So $U$ open in $Y$ iff $p^{-1}[U]$ is open in $X$. This means by definition that $p$ is a quotient map. </p> <p>The last direction indeed follows directly if you assume $p$ is continuous. I wanted to show that the continuity of $p$ even follows from the “composition property”. </p>
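Not part of the proof, but on finite spaces the quotient topology can be computed by brute force, which may help intuition. The sketch below uses a made-up three-point example and checks that the final topology $\{U \subseteq Y : p^{-1}[U] \text{ open in } X\}$ comes out as exactly the Sierpinski topology on the two-point quotient:

```python
from itertools import combinations

def powerset(s):
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Made-up example: X = {0,1,2} with a non-discrete topology, and the
# surjection p collapsing 0 and 1 to a single point 'a'.
X = {0, 1, 2}
tau_X = {frozenset(), frozenset({0}), frozenset({0, 1}), frozenset(X)}
p = {0: 'a', 1: 'a', 2: 'b'}
Y = set(p.values())

def preimage(U):
    return frozenset(x for x in X if p[x] in U)

# Final (quotient) topology: U is open in Y iff p^{-1}[U] is open in X.
tau_Y = {U for U in powerset(Y) if preimage(U) in tau_X}
```

Here `tau_Y` comes out as $\{\emptyset,\{a\},Y\}$, i.e. the quotient is itself a copy of the Sierpinski space used in the proof above.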
3,432,911
<p>My argument is as follows:</p> <p>Let <span class="math-container">$R$</span> be a commutative ring with unity, <span class="math-container">$I$</span> an ideal of <span class="math-container">$R$</span>. If <span class="math-container">$(R/I)^n\cong (R/I)^m$</span> as <span class="math-container">$R$</span>-modules, then it follows that they are isomorphic as <span class="math-container">$R/I$</span>-modules because the isomorphism factors through the quotient. We observe that these are both free <span class="math-container">$R/I$</span>-modules with bases <span class="math-container">$${\mathfrak{B}_1=\{\delta_{1j}+I, \delta_{2j}+I, \dots, \delta_{nj}+I\} \;\;\text{ and }\;\; \mathfrak{B}_2=\{\delta_{1j}+I, \delta_{2j}+I, \dots, \delta_{mj}+I\}}$$</span> respectively. Then the isomorphism between <span class="math-container">$(R/I)^n \cong (R/I)^m$</span> as <span class="math-container">$R$</span>-modules induces an isomorphism of these free modules, meaning there is a bijection between the elements of <span class="math-container">$\mathfrak{B}_1$</span> and <span class="math-container">$\mathfrak{B}_2$</span>. It follows then that <span class="math-container">$n=m$</span>.</p>
José Carlos Santos
446,262
<p>Note that, for each <span class="math-container">$z\in\mathbb C$</span>,<span class="math-container">\begin{align}\lvert z-1\rvert=1&amp;\iff\lvert z-1\rvert^2=1\\&amp;\iff\lvert z\rvert^2-2\operatorname{Re}(z)+1=1\\&amp;\iff\lvert z\rvert^2=2\operatorname{Re}(z).\end{align}</span>So, if <span class="math-container">$z\neq0$</span><span class="math-container">\begin{align}\operatorname{Re}\left(\frac1z\right)=\frac12&amp;\iff\frac{\operatorname{Re}(z)}{\lvert z\rvert^2}=\frac12\\&amp;\iff\lvert z\rvert^2=2\operatorname{Re}(z)\\&amp;\iff\lvert z-1\rvert=1.\end{align}</span></p>
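Not part of the argument, but a quick numerical sanity check of the equivalence: sampling points on the circle $|z-1|=1$ (omitting the excluded point $z=0$) should give $\operatorname{Re}(1/z)=\frac12$ every time.

```python
import cmath

# Points z = 1 + e^{ia} on the circle |z - 1| = 1, skipping a = pi (z = 0).
points = [1 + cmath.exp(1j * k * cmath.pi / 180) for k in range(360) if k != 180]
deviation = max(abs((1 / z).real - 0.5) for z in points)
```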
3,432,911
Quanto
686,284
<p>Let <span class="math-container">$z-1 = e^{ia}$</span>, then</p> <p><span class="math-container">$$\frac 1z = \frac 1{1+e^{ia}}=\frac {1+e^{-ia}}{(1+e^{ia})(1+e^{-ia})}=\frac {1+\cos a - i\sin a}{2+2\cos a}=\frac 12 -\frac12\tan\frac a2 i $$</span></p> <p>Thus, <span class="math-container">$\frac 1z$</span> is of a vertical line <span class="math-container">$x=\frac 12$</span>.</p>
3,424,189
<p>I'm trying to calculate the integral <span class="math-container">$$\int_0^1 \frac{\sin\Big(a \cdot \ln(x)\Big)\cdot \sin \Big(b \cdot \ln(x)\Big)}{\ln(x)} dx, $$</span> but am stuck. I tried using the product-to-sum formulas and got here: <span class="math-container">$$\int_0^1 \frac{\cos\Big((a-b) \cdot \ln(x)\Big) - \cos \Big((a+b) \cdot \ln(x)\Big)}{2\ln(x)} dx, $$</span> but alas, that also got me nowhere. Does anyone have any ideas? </p>
Claude Leibovici
82,404
<p>I do not think that it would be very pleasant.</p> <p>After your simplification, you face two integrals looking like <span class="math-container">$$I=\int \frac {\cos(k \log(x))} {\log(x)} \,dx$$</span> First, let <span class="math-container">$x=e^t$</span> to make <span class="math-container">$$I=\int \frac{e^t \cos (k t)}{t}\,dt$$</span> and consider that you need the real part of <span class="math-container">$$I=\int \frac{e^t\,e^{ikt}}t\,dt=\int \frac{e^{(1+ik)t}}t\,dt$$</span> Now, let <span class="math-container">$(1+ik)t=u$</span> to make <span class="math-container">$$I=\int \frac {e^u} u \,du=\text{Ei}(u)$$</span> where the exponential integral function appears.</p>
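As a side note going beyond the answer above (so treat it as a supplementary derivation to be checked): the definite integral in the question does have a closed form. Substituting $x=e^{-t}$ turns it into $-\int_0^\infty \sin(at)\sin(bt)\,e^{-t}\,\frac{dt}{t}$, and the Frullani-type formula $\int_0^\infty\left(\cos(pt)-\cos(qt)\right)e^{-t}\,\frac{dt}{t}=\frac12\ln\frac{1+q^2}{1+p^2}$ with $p=a-b$, $q=a+b$ gives $$\int_0^1 \frac{\sin(a\ln x)\sin(b\ln x)}{\ln x}\,dx=\frac14\ln\frac{1+(a-b)^2}{1+(a+b)^2}.$$ A midpoint-rule check in Python:

```python
import math

def integral_numeric(a, b, T=35.0, n=200_000):
    """Midpoint rule for -int_0^T sin(at) sin(bt) e^{-t}/t dt (tail beyond T negligible)."""
    h = T / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        total += math.sin(a * t) * math.sin(b * t) * math.exp(-t) / t
    return -total * h

def integral_closed(a, b):
    return 0.25 * math.log((1.0 + (a - b) ** 2) / (1.0 + (a + b) ** 2))

num = integral_numeric(1.0, 2.0)
exact = integral_closed(1.0, 2.0)
```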
673,334
<p>I used the pseudocode below to generate a discrete normal distribution over 101 points.</p> <pre><code>mean = 0; stddev = 1; lowerLimit = mean - 4*stddev; upperLimit = mean + 4*stddev; interval = (upperLimit-lowerLimit)/101; for ( x = lowerLimit + 0.5*interval ; x &lt; upperLimit; x = x + interval) { y = exp(-sqr(x)/2)/sqrt(2*PI); print ("%f %f", x, y); } </code></pre> <p>When I plot y vs. x I get the normal distribution curve as expected. But when I try to calculate the standard deviation I use the following algorithm (according to <a href="http://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable" rel="nofollow">http://en.wikipedia.org/wiki/Standard_deviation#Discrete_random_variable</a>)</p> <pre><code>for i = 1:101 sumsq += y[i]*(x[i]^2) end stddev = sqrt(sumsq) </code></pre> <p>I get $stddev = 3.55$ instead of $1$. Where is the problem?</p>
StasK
97,144
<p>The height of your "density" <code>y</code> should account for the width of the interval that you have discretized your distribution to. In other words, you need to assign the mass $\int_{x_0}^{x_0+h} \phi(x) \, {\rm d}x = \Phi(x_0+h) - \Phi(x_0) \approx \phi(x_0) h$ to the point $x_0$, where $\phi(x)$ is the standard normal density, $\Phi(x)$ is the cdf, and $h$ is your <code>interval</code>. Instead, you assign all of $\phi(x_0)$. You are off by the factor of $h=8/101=0.079$, and when you take the inverse square root of that, you get your 3.55.</p>
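To mirror the question's pseudocode with this fix applied (a sketch; variable names are mine):

```python
import math

mean, stddev = 0.0, 1.0
lower, upper = mean - 4 * stddev, mean + 4 * stddev
n = 101
h = (upper - lower) / n                     # the "interval" of the question

xs = [lower + (i + 0.5) * h for i in range(n)]
phi = [math.exp(-x * x / 2) / math.sqrt(2 * math.pi) for x in xs]

# Question's computation: uses the density values directly as weights.
raw_std = math.sqrt(sum(y * x * x for y, x in zip(phi, xs)))

# Fix: each point carries mass phi(x) * h, not phi(x).
est_std = math.sqrt(sum(y * h * x * x for y, x in zip(phi, xs)))
```

Here `raw_std` reproduces the spurious 3.55 and `est_std` is close to 1 (slightly below, since the mass beyond four standard deviations is truncated).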
8,568
<p>I'm going to be starting teaching a course called algebra COE, which is for students who didn't pass the required state algebra exam to graduate and are now seniors, to do spaced-out exam-like extended problems after extensive support. </p> <p>I don't want to start the class out with "getting down to business" because I want the students to feel comfortable in the class, with me and with each other. The "getting down to business" will happen during the second week of class. Therefore, I'd like to start out with a class collaboration to solve a "fun" problem. (There are 5 students in the class)</p> <p>At the same time, I don't want to start out with a problem that feels too contrived, or too much like "school math" problems. They have clearly been turned off from "school math." I want one or some that feel more like they are doing a puzzle, yet still engage algebra-related skills and open up a discussion about problem-solving as a process and skill that can be honed. </p> <p>Some problems I have considered, yet I believe are too "math-feeling":</p> <ul> <li>The exponential chessboard and rice problem </li> <li>How many squares are there on the chessboard? (note: more than 64)</li> <li>The "lockers" problem</li> <li>The <a href="https://itunes.apple.com/us/app/ooops/id467564672?mt=8">ooops game</a></li> </ul> <p>Any suggestions?</p>
JRN
77
<p>The article titled &quot;Math Teachers’ Circles: Partnerships between Mathematicians and Teachers&quot; by B. Donaldson, M. Nakamaye, K. Umland, and D. White in the December 2014 <em>Notices of the AMS</em> (pp. 1335-1341) (<a href="http://www.ams.org/notices/201411/rnoti-p1335.pdf" rel="nofollow noreferrer">pdf version</a>) mentions a nice problem called &quot;Dividing squares.&quot;</p> <p>Shown below are some pictures of a square divided into smaller squares, not necessarily of the same size. (Image taken from the linked file.)</p> <p><a href="https://i.stack.imgur.com/CyFSY.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/CyFSY.png" alt="Squares divided into smaller squares" /></a></p> <p>A few possible questions you can ask:</p> <ol> <li>For which whole numbers <span class="math-container">$n$</span> is it possible to subdivide a square into <span class="math-container">$n$</span> smaller squares?</li> <li>For which whole numbers <span class="math-container">$n$</span> is it not possible to subdivide a square into <span class="math-container">$n$</span> smaller squares?</li> <li>Is it possible to perform the division so that there is a unique square of smallest size?</li> <li>Is it possible to perform the division in such a way that no two squares have the same size?</li> </ol> <p>The problem can lead to discussions on arithmetic sequences.</p>
8,568
kcrisman
1,608
<p>There are many, many examples of this in the resources at <a href="https://www.artofmathematics.org/" rel="nofollow">Discovering the Art of Mathematics</a>, including things like the game of Hex.</p>
2,543,834
<p>Ok, so in my differential equations class we've been doing problems which more or less amount to solving equations of the form:</p> <p><span class="math-container">$$\frac{dY}{dt} = AY$$</span></p> <p>Where <span class="math-container">$A$</span> is just some <span class="math-container">$2\times2$</span> linear transformation and <span class="math-container">$Y$</span> is a parametric vector function defined more specifically as</p> <p><span class="math-container">$$Y(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}$$</span></p> <p>The end result, assuming that there exist <span class="math-container">$\lambda_1, \lambda_2 \ne 0; \lambda_1 \ne \lambda_2$</span> which define the eigenvalues of <span class="math-container">$A$</span>, is a definition for <span class="math-container">$Y(t)$</span> of the form</p> <p><span class="math-container">$$Y(t) = k_1e^{\lambda_1t}\vec{V_1} + k_2e^{\lambda_2t}\vec{V_2}$$</span></p> <p>where <span class="math-container">$\vec{V_1}, \vec{V_2}$</span> are the eigenvectors corresponding to their respective eigenvalues and <span class="math-container">$k_1, k_2$</span> are just some constants.</p> <p>For solutions which involve either <span class="math-container">$k_1 = 0$</span> or <span class="math-container">$k_2 = 0$</span>, the end result is a straight-line solution. The rest are exponential curves within the vector space defined by the eigenvectors.</p> <p>My understanding of eigenvectors, from a linear algebra class I took a year ago, is roughly as follows:</p> <ul> <li><p>geometrically speaking, an eigenvector is any vector whose direction after transformation by some matrix <span class="math-container">$A$</span> remains the same. It's only scaled and/or negated.</p> </li> <li><p>Every eigenvector for some matrix <span class="math-container">$A$</span> composes a subspace which in turn defines the <em>eigenspace</em> for <span class="math-container">$A$</span>'s vector basis.</p> </li> <li><p>therefore, the eigenvectors which <span class="math-container">$span(A)$</span> are linearly independent and define a coordinate space which also exists within <span class="math-container">$A$</span>.</p> </li> </ul> <p>Regardless of whether or not the above is correct (if there's a mistake, any clarification/correction would be appreciated), what is it about eigenvectors <em>specifically</em> which allows for them to be used to solve these forms of differential equations?</p>
Math Lover
348,257
<p>Observe that $A = S \Lambda S^{-1}$, where $S = \begin{bmatrix}V_1 &amp; V_2\end{bmatrix}$ and $\Lambda = \begin{bmatrix}\lambda_1 &amp; 0 \\ 0 &amp; \lambda_2\end{bmatrix}$; here $S$ is invertible because the eigenvectors are linearly independent (and if they happen to be orthonormal, $S^{-1}=S^T$). Consequently, $$\frac{dY(t)}{dt}=AY(t) = S\Lambda S^{-1} Y(t). \tag{1}$$ Let $S^{-1}Y(t)=Z(t)$. Consequently, $(1)$ becomes $$\frac{dZ(t)}{dt}=\Lambda Z(t) \implies \begin{bmatrix}z_1'(t) \\ z_2'(t)\end{bmatrix}=\begin{bmatrix}\lambda_1 z_1(t)\\\lambda_2 z_2(t)\end{bmatrix} \implies Z(t)=\begin{bmatrix}k_1e^{\lambda_1t}\\k_2e^{\lambda_2t}\end{bmatrix}. \tag{2}$$ But $Y(t)=S Z(t)=\begin{bmatrix}V_1 &amp; V_2\end{bmatrix}\begin{bmatrix}k_1e^{\lambda_1t}\\k_2e^{\lambda_2t}\end{bmatrix}=?$</p>
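To see the computation above in action, here is a sketch with a made-up symmetric matrix (chosen symmetric so that the eigenvectors really are orthonormal); it compares the closed form $Y(t)=Se^{\Lambda t}S^{-1}Y_0$ against an independent Runge-Kutta integration of $Y'=AY$:

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -2.0]])      # symmetric example; eigenvalues -3 and -1

Y0 = np.array([1.0, 0.0])

lam, S = np.linalg.eigh(A)       # A = S diag(lam) S^{-1}

def Y_exact(t):
    """Y(t) = S e^{Lambda t} S^{-1} Y0, i.e. k1 e^{l1 t} V1 + k2 e^{l2 t} V2."""
    return S @ (np.exp(lam * t) * (np.linalg.inv(S) @ Y0))

def Y_rk4(t, steps=400):
    """Independent check: classical Runge-Kutta integration of Y' = AY."""
    h = t / steps
    Y = Y0.copy()
    for _ in range(steps):
        k1 = A @ Y
        k2 = A @ (Y + h / 2 * k1)
        k3 = A @ (Y + h / 2 * k2)
        k4 = A @ (Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return Y

err = np.max(np.abs(Y_exact(1.0) - Y_rk4(1.0)))
```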
2,543,834
Disintegrating By Parts
112,478
<p>Consider the vector ODE $$ \frac{d Y}{dt} = AY,\;\; Y(0)=Y_0, $$ where $Y$ is an $N$-vector function of $t$, where $Y_0$ is a constant $N$-vector, and where $A$ is a constant $N\times N$ matrix. This has a unique solution $$ Y(t) = e^{tA}Y_0\; \mbox{where}\; e^{tA}=\sum_{n=0}^{\infty}\frac{1}{n!}t^nA^n. $$ If $Y_0$ is an eigenvector of $A$ with eigenvalue $\lambda$, then $$ e^{tA}Y_0 = \sum_{n=0}^{\infty}\frac{1}{n!}t^nA^nY_0 = \left(\sum_{n=0}^{\infty}\frac{1}{n!}\lambda^n t^n\right)Y_0 = e^{\lambda t}Y_0. $$ If $A$ has a basis of eigenvectors $Y_1$, $Y_2$, then all solutions can be formed in this way because $Y_0 = \alpha_1 Y_1+\alpha_2Y_2$ for unique $\alpha_1$, $\alpha_2$, which leads to the solution of the ODE in the form $$ Y(t)=\alpha_1 e^{\lambda_1 t}Y_1+\alpha_2 e^{\lambda_2 t}Y_2. $$ That's the basic story for this problem.</p> <p>The history of eigenvector analysis traces back to Fourier's analysis of his heat equation, where he used separation of variables to solve $$ \frac{\partial u}{\partial t} = \frac{\partial^2u}{\partial x^2}. $$ His separation of variables technique involved finding all solutions of the form $u(t,x)=T(t)X(x)$, and trying to form a general solution out of sums of such solutions. That naturally led to eigenfunction/eigenvalue equations, long before the terminology even existed. Fourier looked for solutions of the form $u(t,x)=T(t)X(x)$ and rearranged the terms to conclude that $$ \frac{T'(t)}{T(t)} = \frac{X''(x)}{X(x)}. $$ In order for there to exist such a solution, he deduced that, for fixed $t$, the right side would have to be constant, which would then force the left side to be a constant as well. So he introduced a separation parameter $\lambda$ which became an eigenvalue of $\frac{d}{dt}$, and of $\frac{d^2}{dx^2}$: $$ \frac{T'(t)}{T(t)}=\lambda,\;\;\; \lambda=\frac{X''(x)}{X(x)}. $$ Fourier then went on to try to find the most general solution by superimposing linear combinations of the possible eigenvalue solutions (with negative $\lambda_n$, so that the spatial factors oscillate): \begin{align} u(t,x) &amp;= e^{\lambda_1 t}(A_1\cos(\sqrt{-\lambda_1}x)+B_1\sin(\sqrt{-\lambda_1}x)) \\ &amp;+e^{\lambda_2 t}(A_2\cos(\sqrt{-\lambda_2}x)+B_2\sin(\sqrt{-\lambda_2}x))+\cdots. \end{align}</p> <p>Fourier's superposition was the precursor of the first general definition of linearity and a linear space, and it was first done here in the context of infinite-dimensional linear spaces of functions.</p> <p>Fourier conjectured that solutions of such equations could be written in terms of such separated solutions, which is now translated into finding a basis of eigenvectors. The development of linearity, linear operators, selfadjoint operators, orthogonal eigenvector expansions, and eigenvector analysis evolved directly from the work of Fourier. And the infinite-dimensional problems of Fourier came before the finite-dimensional cases, making the subject even more confusing and obscure, because its historical context is lost when such analysis is first studied in finite-dimensional linear algebra.</p>
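The identity $e^{tA}Y_0=e^{\lambda t}Y_0$ can also be checked directly from the truncated power series; the sketch below uses a made-up upper-triangular matrix whose eigenvector can be read off by eye:

```python
import math

def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def exp_tA_times(A, v, t, terms=60):
    """Apply e^{tA} to v via the truncated series sum_n t^n A^n v / n!."""
    out = [0.0] * len(v)
    term = list(v)                       # n = 0 term: A^0 v / 0!
    for n in range(terms):
        out = [o + x for o, x in zip(out, term)]
        term = [t * x / (n + 1) for x in mat_vec(A, term)]
    return out

A = [[3.0, 1.0],
     [0.0, 2.0]]
v = [1.0, 0.0]                            # eigenvector of A with eigenvalue 3
t = 0.5
lhs = exp_tA_times(A, v, t)               # e^{tA} v by the series
rhs = [math.exp(3.0 * t) * x for x in v]  # e^{lambda t} v
```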
9,930
<p>One of the standard parts of homological algebra is "diagram chasing", or equivalent arguments with universal properties in abelian categories. Is there a rigorous theory of diagram chasing, and ideally also an algorithm?</p> <p>To be precise about what I mean, a diagram is a directed graph $D$ whose vertices are labeled by objects in an abelian category, and whose arrows are labeled by morphisms. The diagram might have various triangles, and we can require that certain triangles commute or anticommute. We can require that certain arrows vanish, which can be used to ask that certain compositions vanish. We can require that certain compositions are exact. Maybe some of the arrows are sums or direct sums of other arrows, and maybe some of the vertices are projective or injective objects. Then a diagram "lemma" is a construction of another diagram $D'$, with some new objects and arrows constructed from those of $D$, or at least some new restrictions.</p> <p>As described so far, the diagram $D$ can express a functor from any category $\mathcal{C}$ to the abelian category $\mathcal{A}$. This looks too general for a reasonable algorithm. So let's take the case that $D$ is acyclic and finite. This is still too general to yield a complete classification of diagram structures, since acyclic diagrams include all acyclic quivers, and some of these have a "wild" representation theory. (For example, three arrows from $A$ to $B$ are a wild quiver. The representations of this quiver are not tractable, even working over a field.) In this case, I'm not asking for a full classification, only in a restricted algebraic theory that captures what is taught as diagram chasing.</p> <p>Maybe the properties of a diagram that I listed in the second paragraph already yield a wild theory. It's fine to ditch some of them as necessary to have a tractable answer. 
Or to restrict to the category $\textbf{Vect}(k)$ if necessary, although I am interested in greater generality than that.</p> <p>To make an analogy, there is a theory of Lie bracket words. There is an algorithm related to <a href="http://en.wikipedia.org/wiki/Lyndon_word" rel="noreferrer">Lyndon words</a> that tells you when two sums of Lie bracket words are formally equal via the Jacobi identity. This is a satisfactory answer, even though it is not a classification of actual Lie algebras. In the case of commutative diagrams, I don't know a reasonable set of axioms &mdash; maybe they are related to triangulated categories &mdash; much less an algorithm to characterize their formal implications.</p> <p>(This question was inspired by a mathoverflow question about <a href="https://mathoverflow.net/questions/6749/">George Bergman's salamander lemma</a>.)</p> <hr> <p>David's reference is interesting and it could be a part of what I had in mind with my question, but it is not the main part. My thinking is that diagram chasing is boring, and that ideally there would be an algorithm to obtain all finite diagram chasing arguments, at least in the acyclic case. Here is a simplification of the question that is entirely rigorous.</p> <p>Suppose that the diagram $D$ is finite and acyclic and that all pairs of paths commute, so that it is equivalent to a functor from a finite <a href="http://en.wikipedia.org/wiki/Partially_ordered_set#In_category_theory" rel="noreferrer">poset category</a> $\mathcal{P}$ to the abelian category $\mathcal{A}$. Suppose that the only other decorations of $D$ are that: (1) certain arrows are the zero morphism, (2) certain vertices are the zero object, and (3) certain composable pairs of arrows are exact. (Actually condition 2 can be forced by conditions 1 and 3.) Then is there an algorithm to determine all pairs of arrows that are forced to be exact? 
Can it be done in polynomial time?</p> <p>This rigorous simplification does not consider many of the possible features of lemmas in homological algebra. Nothing is said about projective or injective objects, taking kernels and cokernels, taking direct sums of objects and morphisms (or more generally finite limits and colimits), or making connecting morphisms. For example, it does not include the <a href="http://en.wikipedia.org/wiki/Snake_lemma" rel="noreferrer">snake lemma</a>. It also does not include diagrams in which only some pairs of paths commute. But it is enough to express the monomorphism and epimorphism conditions, so it includes for instance the <a href="http://en.wikipedia.org/wiki/Five_lemma" rel="noreferrer">five lemma</a>.</p>
David E Speyer
297
<p>I'm not sure this is what you are looking for, but if you look at Gelfand and Manin's <em>Methods of Homological Algebra</em>, the end of Section II.5, they make the following definition:</p> <p>Let $Y$ be an object of an abelian category. An <em>element</em> of $Y$ is a pair $(X,h)$, with $h: X \to Y$, modulo the equivalence relation that $(X,h) \sim (X',h')$ if we can complete the square</p> <p>$$\begin{matrix} Z &amp; \to &amp; X' \\ \downarrow &amp; &amp; \downarrow \\ X &amp; \to &amp; Y \end{matrix}$$</p> <p>so that the maps $Z \to X$ and $Z \to X'$ are surjective.</p> <p>The map sending an object $Y$ to its elements, $E(Y)$, is clearly a functor from our Abelian category to PointedSetsWithAnInvolution, where the labelled point of $E(Y)$ is the zero map $0 \to Y$ and the involution is $(X,h) \mapsto (X,-h)$. (It seems to me that $(X,h)$ and $(X,-h)$ are always equivalent, but G+M sometimes use notation which seems to distinguish the elements $e$ and $-e$, so I am following them.) Gelfand and Manin state as exercises:</p> <ul> <li>A map $f: Y_1 \to Y_2$ is a monomorphism iff $E(f)^{-1}(0) =0$ iff $E(f)$ is injective.</li> <li>A map $f: Y_1 \to Y_2$ is an epimorphism iff $E(f)$ is surjective.</li> <li>A map $f: Y_1 \to Y_2$ is zero iff $E(f)$ is zero.</li> <li>A sequence $Y_1 \to Y_2 \to Y_3$ is exact iff, for $e \in E(Y_2)$, the element $e$ is taken to the element $0 \in E(Y_3)$ if and only if it is in the image of $E(Y_1)$.</li> <li>If $y_1$ and $y_2 \in E(Y)$ then there is an element $z \in E(Y)$ such that, for any $f: Y \to Y'$ with $f(y_i)=0$, we have $f(z)=(-1)^i f(y_{3-i})$. Moreover, if $g: Y \to X$ is any map such that $g(y_1) = g(y_{2})$, we can take $z$ such that $g(z)=0$. (So $z$ acts like $y_1-y_2$. Note that there is not claimed to be a unique $z$ which works.)</li> </ul> <p>G and M then give several examples to demonstrate that most diagram chasing arguments can be rewritten using the $E$ construction.</p>
2,885,097
<p>I'm having difficulty calculating the Taylor series of $\frac{1}{1-x}$ about $a=3$, and was wondering if anyone on here could help me out.</p> <p>Here's what I've tried so far:</p> <p><strong>Attempt 1 -</strong> Take $y=x-3$ and rearrange as $x=y+3$.</p> <p>Then do $\frac{1}{1-x}=\frac{1}{1-(y+3)}=\frac{1}{-y-2}=\frac{-1}{y+2}=\frac{-1}{1+(y+1)}$.</p> <p>Now, knowing $\frac{-1}{1+x}=\sum_{n=0}^{\infty} (-1)^{n+1}x^n$, I write $\frac{-1}{1+(y+1)}=\sum_{n=0}^{\infty} (-1)^{n+1}(y+1)^n$.</p> <p>We change now and get $\sum_{n=0}^{\infty} (-1)^{n+1}(y+1)^n=\sum_{n=0}^{\infty} (-1)^{n+1}((x-3)+1)^n$.</p> <p>And then simplify and get our final result which is $\sum_{n=0}^{\infty} (-1)^{n+1}(x-2)^n$.</p> <p><strong>Attempt 2 -</strong> Recall first the formal definition of a Taylor series centered about $x=a$;</p> <p>the formal definition is as I remember $f(x)=\sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x-a)^n$.</p> <p>Take $f(x)=\frac{1}{1-x}$.</p> <p>Now, I see that $f'(x)=\frac{1}{(1-x)^2}$, $f''(x)=\frac{2}{(1-x)^3}$, ..., $f^{(n)}(x)=\frac{n!}{(1-x)^{n+1}}$.</p> <p>Replacing $x$ as $a$ I get now $f^{(n)}(a)=\frac{n!}{(1-a)^{n+1}}$.</p> <p>Taking $a=3$, we write $f^{(n)}(a)=\frac{n!}{(1-a)^{n+1}}=f^{(n)}(3)=\frac{n!}{(1-3)^{n+1}}=f^{(n)}(3)=\frac{n!}{(-2)^{n+1}}$.</p> <p>By the formal definition we now write $f(x)=\sum_{n=0}^{\infty} \frac{\frac{n!}{(-2)^{n+1}}}{n!}(x-3)^n$.</p> <p>Simplifying the above we get the end result $f(x)=\sum_{n=0}^{\infty} \frac{(x-3)^n}{(-2)^{n+1}}$.</p> <p>My attempts gave me two completely different answers. What gives? Is there something I'm doing wrong in either attempt? Is there something I'm doing wrong in both?</p> <p>Help regarding this issue would be greatly appreciated. Thanks in advance.</p>
José Carlos Santos
446,262
<p>Your first attempt is wrong, because after you do the substitution $y=x-3$, you should aim at a power series about $0$, not about $-1$.</p> <p>Your second attempt is correct (but I would put $(-1)^{n+1}$ in the numerator and $2^{n+1}$ in the denominator).</p>
2,885,097
Jean-Claude Arbaut
43,608
<p>As already said above, your first attempt is incorrect. But it's easy to do it correctly: for $|y|&lt;2$,</p> <p>$$\frac{1}{1-(y+3)}=-\frac{1}{2+y}=-\frac12\frac{1}{1+\frac{y}2}=-\frac12\sum_{n=0}^\infty (-1)^n\frac{y^n}{2^n}$$</p>
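For whatever it is worth, a few lines of code confirm numerically that the series converges to $\frac{1}{1-x}$ inside the interval of convergence $1&lt;x&lt;5$ (a sketch summing the form obtained in the second attempt of the question):

```python
def f(x):
    return 1.0 / (1.0 - x)

def taylor_about_3(x, terms=80):
    # sum_{n>=0} (x - 3)^n / (-2)^{n+1}, valid for |x - 3| < 2
    return sum((x - 3.0) ** n / (-2.0) ** (n + 1) for n in range(terms))

max_err = max(abs(taylor_about_3(x) - f(x)) for x in [2.2, 3.0, 3.9])
```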
3,687,043
<p>I have a polynomial <span class="math-container">$f(z)=z^4+z^3-2z^2+2z+4$</span>, and I want to find the number of roots in the first quadrant. I'm trying to use the argument principle (or Rouche), and I could try to make my contour the quarter circle, but I'm having trouble because I can't justify that there are no roots on the real axis. Please give me some recommendations!</p> <p>So now I understand why there are no roots on the contour; I have also justified that the integral on the arc goes to <span class="math-container">$2\pi i$</span> by normal limit considerations. However, I still am unsure how to figure out the arguments.</p>
Sarvesh Ravichandran Iyer
316,409
<p>This one is a real weird one. </p> <p>That is because I am tempted to factor this polynomial.</p> <p>Let us try the rational root theorem first. If there is a rational root <span class="math-container">$\frac pq$</span> then <span class="math-container">$p$</span> divides <span class="math-container">$4$</span> and <span class="math-container">$q$</span> divides <span class="math-container">$1$</span>, i.e. any rational root must be an integer dividing <span class="math-container">$4$</span>.</p> <p>Merely trying out <span class="math-container">$z = -1$</span> works. Upon division by <span class="math-container">$z+1$</span> we get <span class="math-container">$z^3-2z+4$</span>.</p> <p>Another use of RRT gives <span class="math-container">$z = -2$</span> as a root, and division by <span class="math-container">$z+2$</span> yields <span class="math-container">$z^2-2z+2$</span>, which by the usual quadratic formula is <span class="math-container">$(z-1+i)(z-1-i)$</span>.</p> <p>So, we have the roots as <span class="math-container">$1\pm i, -1,-2$</span>. Of these, exactly the root <span class="math-container">$1+i$</span> is in the first quadrant.</p> <p>While this may be disappointing as an answer because we have not used machinery, it is suitable for a beginner. I would always suggest it as a first approach, and then look to apply Rouche or something else if things did not work out.</p>
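The factorization and the location of the roots are easy to double-check numerically (a quick sketch, not needed for the argument):

```python
def f(z):
    return z**4 + z**3 - 2 * z**2 + 2 * z + 4

roots = [1 + 1j, 1 - 1j, -1 + 0j, -2 + 0j]
residuals = [abs(f(z)) for z in roots]          # all should vanish
first_quadrant = [z for z in roots if z.real > 0 and z.imag > 0]
```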
3,687,043
<p>I have a polynomial <span class="math-container">$f(z)=z^4+z^3-2z^2+2z+4$</span>, and I want to find the number of roots in the first quadrant. I'm trying to use the argument principle (or Rouche), and I could try to make my contour the quarter circle, but I've having trouble because I can't justify that there are no roots on the real axis. Please give me some recommendations!</p> <p>So now I understand why there are no roots on the contour; I have also justified that the integral on the arc goes to <span class="math-container">$2\pi i$</span> by normal limit considerations. However, I still am unsure how to figure out the arguments.</p>
Oscar Lanzi
248,217
<p>You can use the Rouche method if you prove that there are no roots on the <em>positive</em> real axis, which is the part of the real axis that's on your path.</p> <p>When <span class="math-container">$x&gt;\sqrt2$</span> (I'm using <span class="math-container">$x$</span> here to emphasize we're dealing with a real variable in the proof), <span class="math-container">$x^4$</span> has a greater absolute value than <span class="math-container">$-2x^2$</span> and the latter is the only negative term, thus the polynomial is forced (strictly) positive. Similarly, <span class="math-container">$4$</span> dominates <span class="math-container">$-2x^2$</span> for <span class="math-container">$0&lt;x&lt;\sqrt2$</span> and the sum <span class="math-container">$x^4+4$</span> dominates <span class="math-container">$-2x^2$</span> for the remaining case <span class="math-container">$x=\sqrt2$</span>.</p> <p>So there must be no positive real roots and you can use a Rouche path that includes the positive real axis. You should, of course, compare your result with that obtained by elementary techniques from Aston.</p>
237,446
<p>I find it difficult to evaluate $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )$$ I tried to use the fact that $$\frac{1}{1-n} \geqslant \ln(n)\geqslant 1+n$$ which gives $$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right ) \geqslant \lim_{n\rightarrow\infty} n(1-\sqrt[n]{1+n}) =\lim_{n\rightarrow\infty}n *\lim_{n\rightarrow\infty}(1-\sqrt[n]{1+n})$$ $$(1-\sqrt[n]{1+n})\rightarrow -1\Rightarrow\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )\rightarrow-\infty$$ Is it correct? If not, what am I doing wrong?</p>
user 1591719
32,016
<p>Since it's so hard let's solve it in one line</p> <p>$$\lim_{n\rightarrow\infty}\left ( n\left(1-\sqrt[n]{\ln(n)} \right) \right )=-\lim_{n\rightarrow\infty}\left(\frac{\sqrt[n]{\ln(n)}-1}{\displaystyle\frac{1}{n}\ln(\ln (n)) }\cdot \ln(\ln (n))\right)=-(1\cdot \infty)=-\infty.$$</p> <p>Here the fraction tends to $1$ because $\sqrt[n]{\ln(n)}=e^{u_n}$ with $u_n=\frac{\ln(\ln(n))}{n}\to 0$, and $\frac{e^{u}-1}{u}\to 1$ as $u\to 0$.</p> <p>Chris.</p>
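<p>A numeric illustration (my own addition, not a proof) of both the divergence and the $-\ln(\ln n)$ rate that the computation above identifies:</p>

```python
import math

# a_n = n*(1 - ln(n)^(1/n)).  Writing ln(n)^(1/n) = exp(ln(ln n)/n) and using
# expm1 avoids catastrophic cancellation when ln(ln n)/n is tiny.
ns = [10**2, 10**4, 10**8, 10**16]
vals = [-n * math.expm1(math.log(math.log(n)) / n) for n in ns]
for n, v in zip(ns, vals):
    print(n, v, -math.log(math.log(n)))  # v closely tracks -ln(ln n)
```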
181,940
<p>I've been unable to find an answer to the following question in the literature on generalized descriptive set theory. Consider Baire space $\kappa^{\kappa}$ where $\kappa$ is inaccessible. The basic open sets are the $U_f$ where $f\in\kappa^{&lt;\kappa}$. A perfect set is a nonempty closed set with no isolated points. Does a perfect set have cardinality $2^{\kappa}$? Does it have to have cardinality $&gt;\kappa$?</p> <p>A variation of an observation in "Generalized Descriptive Set Theory and Classification Theory" (arxiv.org/abs/1207.4311) states that if $T$ is a slim Kurepa tree (as defined in Devlin's "Constructibility") then there is no continuous (in fact Borel) injection of $2^{\kappa}$ into $[T]$, but this doesn't answer my question.</p> <p>Of course, if the existence of a slim Kurepa tree was consistent with $2^{\kappa}&gt;\kappa^+$ then it would be consistent that my first question had a "no" answer; I have not been able to find any references on this question, but I wouldn't be surprised if it were so.</p>
Joel David Hamkins
1,946
<p>It is consistent to have a perfect set of size $\kappa$, or of intermediate size between $\kappa$ and $2^\kappa$. </p> <p>To see this, suppose that $\kappa$ is an inaccessible cardinal in $V$, and let $T=2^{&lt;\kappa}$ be the tree of ${&lt;}\kappa$ binary sequences, and let $X=[T]={}^\kappa 2$ be the set of branches through this tree in $V$. So $X$ is a perfect set in $V$ and has size $2^\kappa$ there.</p> <p>Next, force over $V$ to add a Cohen real $c\subset\omega$ and then to collapse $(2^\kappa)^V$ to $\kappa$. Consider the resulting forcing extension $V[c][G]$.</p> <p>The combined forcing admits a closure point at $\omega_1$, and it follows from lemma 13 of my article <a href="http://jdh.hamkins.org/approximation-and-cover-properties/">Extensions with the approximation and cover properties have no new large cardinals</a> that the extension $V[c][G]$ does not add any new branches through a tree of height $\kappa$ in the ground model $V$. In particular, $X$ is still the set of all branches through $T$ in $V[c][G]$. Note that $X$ still has no isolated points. Further, since there are no new branches through $T$ in $V[c][G]$, it follows that $X$ is still closed (since any new element of the closure of $X$ would have to be a branch through $T$).</p> <p>So $X$ is perfect in $V[c][G]$, but since $(2^\kappa)^V$ was collapsed to $\kappa$, it follows that $X$ has size $\kappa$ there.</p> <p>One can modify the argument to make a perfect set of size between $\kappa$ and $2^\kappa$. Start in $V$ with $2^\kappa=\kappa^+$, but now force to add a Cohen real and then let $G$ pump up $2^\kappa$ very high. The set $X=({}^\kappa 2)^V$ will still be perfect in $V[c][G]$, but now it will have size $\kappa^+$, even though $2^\kappa$ is large in $V[c][G]$. One can make it have size $\kappa^{++}$ or whatever you like, in the same way.</p>
417,064
<p>Let T be a totally ordered set that is <strong>finite</strong>. Does it follow that the minimum and maximum of T exist? Since T is finite, I believe there exists a minimal element of T. From that it may be possible to show that the minimal element is the minimum, but I am not quite sure whether this is the right approach. </p>
Community
-1
<p>If a totally ordered set is finite <strong>and nonempty</strong>, then it follows that it has a maximum and a minimum. This can be proved by induction on the number of elements: a singleton has its unique element as both maximum and minimum, and if T is obtained from a smaller nonempty set S by adjoining one element a, then totality makes a comparable with the maximum of S, and the larger of the two is the maximum of T; the minimum is handled the same way. In particular, your approach works: in a finite totally ordered set any minimal element is in fact the minimum, because any two elements are comparable.</p>
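<p>The underlying induction can be mirrored computationally; here is a small Python sketch (my own illustration) that finds the extremes of a finite nonempty chain using only pairwise comparisons:</p>

```python
def extremes(chain):
    # Base case: a one-element chain is its own max and min.
    assert len(chain) > 0, "a totally ordered set must be nonempty here"
    lo = hi = chain[0]
    # Inductive step: adjoin one element at a time; totality guarantees
    # each new element is comparable with the current extremes.
    for t in chain[1:]:
        if t < lo:
            lo = t
        if t > hi:
            hi = t
    return lo, hi

print(extremes([3, 1, 4, 1, 5, 9, 2, 6]))  # (1, 9)
```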
4,486,131
<p><span class="math-container">$F=Q(\sqrt{2i})$</span>,then Which one of the following is not true (Duet-2017 Q.26)</p> <p>1.<span class="math-container">$\sqrt{2}\in F$</span></p> <p>2.<span class="math-container">$i \in F$</span></p> <p>3.<span class="math-container">$x^8-16=0$</span> has a solution in <span class="math-container">$F$</span></p> <p>4.<span class="math-container">$dim_Q(F)=2$</span></p> <p>I thought it as <span class="math-container">$F=$</span>{<span class="math-container">$a+b(\sqrt{2i})| a,b \in Q$</span>}</p> <p>from which it implies <span class="math-container">$\sqrt{2} \notin F$</span> and <span class="math-container">$\sqrt{2i}$</span> is a solution of <span class="math-container">$x^8-16=0$</span></p> <p>I didn't get about what about dimensions And I thought <span class="math-container">$i \notin F$</span> but answer has only one solution which is option 1.</p> <p>Please guide. Thanks</p>
Guillermo García Sáez
696,501
<ol> <li><p>Is false; if it were true we would have <span class="math-container">$F(\sqrt2)=F$</span>, but <span class="math-container">$$F(\sqrt2)=\mathbb{Q}(\sqrt{2i},\sqrt 2)=\mathbb{Q}(\sqrt i,\sqrt 2)\not =\mathbb{Q}(\sqrt{2i})=F$$</span> (the left side has degree <span class="math-container">$4$</span> over <span class="math-container">$\mathbb{Q}$</span>, while the right side has degree <span class="math-container">$2$</span>).</p> </li> <li><p>Is true: since <span class="math-container">$(\sqrt{2i})^2=2i$</span>, we get <span class="math-container">$i=\tfrac12(\sqrt{2i})^2\in F$</span>. (In fact <span class="math-container">$(1+i)^2=2i$</span>, so <span class="math-container">$F=\mathbb{Q}(1+i)=\mathbb{Q}(i)$</span>.)</p> </li> <li><p>Is true: <span class="math-container">$$(\sqrt{2i})^8=(2i)^4=16$$</span></p> </li> <li><p>Is true. The dimension of a field extension is just the dimension of the extension as a vector space over the base field. It's easy to prove that for an extension generated by an algebraic number, the dimension is the degree of its minimal polynomial over the base field, so: <span class="math-container">$$\dim_{\mathbb{Q}}(F)=[\mathbb{Q}(\sqrt{2i}):\mathbb{Q}]=\deg(P_{min\;\sqrt{2i}}(t))=\deg(t^2-2t+2)=2$$</span></p> </li> </ol>
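<p>A quick numeric cross-check (my own addition) of the identity behind all four items: one square root of <span class="math-container">$2i$</span> is <span class="math-container">$1+i$</span>, so <span class="math-container">$F=\mathbb{Q}(i)$</span>:</p>

```python
# (1+i)^2 = 2i, so sqrt(2i) = +-(1+i) and F = Q(sqrt(2i)) = Q(i).
t = 1 + 1j
assert abs(t**2 - 2j) < 1e-12       # t is a square root of 2i
assert abs(t**8 - 16) < 1e-12       # hence t solves x^8 - 16 = 0 (option 3)
assert abs(t**2 - 2*t + 2) < 1e-12  # minimal polynomial t^2 - 2t + 2 (option 4)
print("checks pass")
```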
3,998,018
<p>I was preparing for a calculus exam and I came across the Wikipedia <a href="https://en.wikipedia.org/wiki/List_of_trigonometric_identities" rel="nofollow noreferrer">article</a> for all the trigonometric identities. There I came across some terms that I had never seen before. They were: <span class="math-container">$\text{versin}(x)=1-\cos(x)$</span>, <span class="math-container">$\text{coversin}(x)=1-\sin(x)$</span>, <span class="math-container">$\text{vercosin}(x)=1+\cos(x)$</span> and other similar ones.</p> <p>In the article it states that these were historically used but nowadays they have no real use. Why is that? Why are these not used anymore?</p>
Community
-1
<p>The versine function is well documented in this Wikipedia article: <a href="https://en.wikipedia.org/wiki/Versine" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Versine</a></p> <blockquote> <p>The versine or versed sine is a trigonometric function found in some of the earliest (Vedic Aryabhatia I) trigonometric tables. The versine of an angle is 1 minus its cosine.</p> </blockquote> <p>It is very likely that people needed all sorts of trig functions to compile <a href="https://en.wikipedia.org/wiki/Trigonometric_tables" rel="nofollow noreferrer">trigonometric tables</a> in the old days. But now we have computers.</p>
3,998,018
<p>I was preparing for a calculus exam and I came across the Wikipedia <a href="https://en.wikipedia.org/wiki/List_of_trigonometric_identities" rel="nofollow noreferrer">article</a> for all the trigonometric identities. There I came across some terms that I had never seen before. They were: <span class="math-container">$\text{versin}(x)=1-\cos(x)$</span>, <span class="math-container">$\text{coversin}(x)=1-\sin(x)$</span>, <span class="math-container">$\text{vercosin}(x)=1+\cos(x)$</span> and other similar ones.</p> <p>In the article it states that these were historically used but nowadays they have no real use. Why is that? Why are these not used anymore?</p>
open problem
876,065
<p>As θ goes to zero, versin(θ) is the difference between two nearly equal quantities, so a user of a trigonometric table for the cosine alone would need very high accuracy when obtaining the versine in order to avoid catastrophic cancellation, making separate tables for the latter convenient.</p> <p>These days we walk around with devices capable of approximating the versine to a degree of accuracy that would in the past have been considered extraordinary, and capable of looking up tables the size of books for exact results.</p> <p>That is the practical aspect; the other aspect is that it has fallen out of taste, and there is no accounting for taste, so take it as you will.</p>
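<p>The cancellation is easy to demonstrate (my own illustration in Python): computing <code>1 - cos(t)</code> directly loses every significant digit for small <code>t</code>, while the algebraically equivalent half-angle form <code>2*sin(t/2)**2</code> keeps full precision:</p>

```python
import math

def versin_naive(t):
    return 1.0 - math.cos(t)       # subtracts two nearly equal numbers

def versin_stable(t):
    return 2.0 * math.sin(t / 2.0) ** 2   # same function, no cancellation

t = 1e-8
print(versin_naive(t))   # 0.0 -- all digits cancelled
print(versin_stable(t))  # ~5e-17, the correct value t**2/2
```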
3,354,684
<p>I was trying to prove following inequality:</p> <p><span class="math-container">$$|\sin n\theta| \leq n\sin \theta \ \text{for all n=1,2,3... and } \ 0&lt;\theta&lt;π $$</span></p> <p>I succeeded in proving this via induction but I didn't get "feel" over the proof. Are there other proof for this inequality?</p>
albert chan
696,342
<p>Let <span class="math-container">$z = \cos(θ) + i \sin(θ) $</span></p> <p><span class="math-container">$\displaystyle {\sin(n\,θ)\over \sin(θ)} = {\Im(z^n) \over \Im(z)} = {z^n - 1/z^n \over z - 1/z} = z^{n-1} + z^{n-3}+\cdots+{1\over z^{n-3}} + {1\over z^{n-1}}$</span></p> <p>The RHS can be grouped in pairs (except possibly the center term <span class="math-container">$1$</span>, if <span class="math-container">$n$</span> is odd):</p> <p><span class="math-container">${\displaystyle z^k + {1 \over z^k} = 2 \cos(k\,θ) }$</span></p> <p>Each such pair has absolute value at most <span class="math-container">$2$</span>.</p> <p>With the RHS having <span class="math-container">$n$</span> terms, <span class="math-container">$\displaystyle \left|{\sin(n\,θ)\over \sin(θ)} \right| \le n$</span></p>
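<p>A brute-force numeric spot check of the inequality over a grid (my own addition, separate from the proof above):</p>

```python
import math

# Check |sin(n*t)| <= n*sin(t) for t in (0, pi) on a grid; the small slack
# only absorbs floating-point rounding at the points of equality (n = 1).
violations = 0
for n in range(1, 30):
    for k in range(1, 200):
        t = math.pi * k / 200
        if abs(math.sin(n * t)) > n * math.sin(t) + 1e-12:
            violations += 1
print(violations)  # 0
```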
121,897
<p>I want to check if a user input the function with all the specified variables or not. For that I choose the replace variables with some values and check for if the result is a number or not via a doloop. I am thinking there might be more elegant way of doing it such as <a href="http://reference.wolfram.com/language/ref/ReplaceList.html" rel="nofollow"><code>ReplaceList</code></a> but it is not working the way I want it. </p> <p>Lets assume </p> <pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w; (*and user give variables as *) vas = {x, y, z, w}; (* I need to check if all the variables are in the function *) Do[ u = u /. vas[[i]] -&gt; 1.1; (* 1.1 is within where the function is going to get \ evaluated *) If[i == 4, numc9 = NumericQ[u]; Print[numc9];]; (* if numc9 False either there infinity or one of \ the variables in the list is not present in the function or function \ has extra variable(s) *) Print[u]; , {i, 4}] </code></pre> <p>Is there more elegant way doing it?</p> <p><strong>EDIT I</strong></p> <p>After @Mr.Wizard 's answer I realized that my question was not covering everything I wanted. @Mr. Wizard answer is working, if I was checking all the variables are present in u. However, at the same time I want to check if there is no extra variables in u. Because at the end I want to evaluate u using vars and if u has an extra variable I won't get a value at the end. </p> <p>For example:</p> <pre><code> u = z^2 Sin[π x] + z^4 Cos[π y] + I y^6 Cos[2 π y] + w + z^p; vas = {z, x, y, p}; </code></pre> <p>Level and FreeQ commands give all the variables in function u. After that you check if all the variables in vas are present in this list of variables coming from Level or FreeQ and in the example above its. </p> <p>In this situation @J.M. undocumented command does what I need. Or I will need to stick with my DoLoop.</p>
J. M.'s persistent exhaustion
50
<p>Using an <em>undocumented</em> function:</p> <pre><code>Reduce`FreeVariables[u] === Sort[vas] </code></pre>
381,036
<p>I must show that $f(x)=p{\sqrt{x}}$ , $p&gt;0$ is continuous on the interval [0,1). </p> <p>I'm not sure how I show that a function is continuous on an interval, as opposed to at a particular point. </p>
moray95
68,052
<p>An important theorem here: </p> <blockquote> <p>If $f$ is differentiable on an interval $I$, then $f$ is continuous on $I$.</p> </blockquote> <p>Here $f'(x)=\frac{p}{2\sqrt{x}}$, which exists for every $x&gt;0$ and fails to exist for $x\le 0$. So $f$ is differentiable, hence continuous, for $x&gt;0$, in particular on $(0,1)$.</p> <p>Continuity at $x=0$ has to be checked directly from the definition: given $\varepsilon&gt;0$, take $\delta=(\varepsilon/p)^2$; then $0\le x&lt;\delta$ gives $|f(x)-f(0)|=p\sqrt{x}&lt;\varepsilon$. Together this shows $f$ is continuous on all of $[0,1)$.</p>
66,314
<p>This is very similar to my earlier question <a href="https://mathematica.stackexchange.com/questions/60069/one-to-many-lists-merge">One to Many Lists Merge</a> but somehow different. I have two lists, first column in each list represents its key. I want to merge these two lists. The only problem is that these two lists have some common keys but not all keys are the same and that they are of different lengths. For example, </p> <pre><code>list1 = {{1, a, aa}, {2, b, bb}, {3, c, cc}, {4, d, dd}, {6, f, ff}, {7, g, gg}, {13, j, jj}}; list2 = {{1, 10, 100, 1000}, {2, 20, 200, 2000}, {5, 50, 500, 5000}, {6, 60, 600, 6000}, {7, 70, 700, 7000}, {9, 90, 900, 9000}}; </code></pre> <p>I am trying to merge these lists using their keys. If the key doesn't match, one list should have missing values such as -99.99. The result I am looking for the above two list would be</p> <pre><code>answerlist = {{1, a, aa, 10, 100, 1000}, {2, b, bb, 20, 200, 2000}, {3, c, cc, -99.99, -99.99, -99.99}, {4, d, dd, -99.99, -99.99, -99.99}, {5, -99.99, -99.99, 50, 500, 5000}, {6, f, ff, 60, 600, 6000}, {7, g, gg, 70, 700, 7000}, {9, -99.99, -99.99, 90, 900, 9000}, {13, j, jj, -99.99, -99.99, -99.99}}; </code></pre> <p>Thank you for your time and support in advance. </p>
andre314
5,467
<pre><code>rules1 = Append[#[[1]] -&gt; Rest[#] &amp; /@ list1, _ -&gt; {-99.99, -99.99}]; rules2 = Append[#[[1]] -&gt; Rest[#] &amp; /@ list2, _ -&gt; {-99.99, -99.99, -99.99}]; Table[Join[{i}, i /. rules1, i /. rules2], {i,Union[First /@ list1, First /@ list2]}] </code></pre> <blockquote> <p>{{1, a, aa, 10, 100, 1000}, {2, b, bb, 20, 200, 2000}, {3, c, cc, -99.99, -99.99, -99.99}, {4, d, dd, -99.99, -99.99, -99.99}, {5, -99.99, -99.99, 50, 500, 5000}, {6, f, ff, 60, 600, 6000}, {7, g, gg, 70, 700, 7000}, {9, -99.99, -99.99, 90, 900, 9000}, {13, j,<br> jj, -99.99, -99.99, -99.99}}</p> </blockquote> <p>Instead of <code>Table[Join[{i}, i /. rules1, i /. rules2], {i,Union[First /@ list1, First /@ list2]}]</code>, one can use <code>(Join[{#}, # /. rules1, # /. rules2] &amp; /@ Union[First /@ list1, First /@ list2])</code></p> <p>To answer your comment, here is another solution, which is certainly better in terms of speed:</p> <pre><code>keysList1 = First /@ list1 keysList2 = First /@ list2 keys = Union[keysList1, keysList2] list1Completed = Join[list1, {#, -99.99, -99.99} &amp; /@ Complement[keys, keysList1]] list2Completed = Join[list2, {#, -99.99, -99.99, -99.99} &amp; /@ Complement[keys, keysList2]] mergedList = Join[list1Completed, list2Completed] Sort[Join[{#[[1, 1]]}, Rest[#[[1]]], Rest[#[[2]]]] &amp; /@ GatherBy[mergedList, #[[1]] &amp;]] </code></pre>
3,267,550
<p>Considering the system <span class="math-container">$x_{k+1}=Ax_k+Bu_k$</span> with quadratic cost </p> <p><span class="math-container">$J^* = \min x_N^T S x_N + \sum_{k=0}^{N-1} x_k^T Qx_k+u^T_kRu_k$</span></p> <p>where <span class="math-container">$Q,S\succeq 0, R\succ 0$</span>. The optimal state feedback is found as <span class="math-container">$u_k = -(R+B^\top P_{k+1}B)^{-1}B^T P(k+1)A x_k$</span> where <span class="math-container">$P(k)$</span> is from the discrete-time riccati equation</p> <p><span class="math-container">$ P_{k} = Q+ A^T P_{k+1}A -A^T P_{k+1}B(R+B^T P_{k+1}B)^{-1}B^T P_{k+1}A $</span></p> <p>and terminal penalty <span class="math-container">$P_N = S$</span>. This can be solved like an LMI turning the equality into</p> <p><span class="math-container">$Q+ A^T P_{k+1}A - P_{k} -A^T P_{k+1}B(R+B^T P_{k+1}B)^{-1}B^T P_{k+1}A \succeq 0$</span></p> <p>But how can I formally prove that the Riccati equality can be turned into an inequality? is it just Schur complement considering <span class="math-container">$P(k)\succeq 0$</span> and <span class="math-container">$R\succ0$</span>?</p> <p>thanks for any suggestion</p>
JMJ
295,405
<p>If <span class="math-container">$A = B$</span> then it's trivially true that <span class="math-container">$A - B \succeq 0$</span>. In other words, equality is a special case of the semidefinite inequality: any sequence <span class="math-container">$P_k$</span> satisfying the Riccati recursion automatically satisfies the relaxed LMI, so no Schur-complement argument is needed for this direction.</p>
1,380,819
<p>Hi: I'm reading some introductory notes on Hilbert spaces and there is a step in a proof that I don't follow. I will put the exact statement below. If someone could explain how it is obtained, it's appreciated. Note that commas between two terms, when they have &lt; and &gt; around them, denote the inner product. Also, $e_{n}$ for $n = 1,2,3,\ldots$ is a complete orthonormal sequence in a Hilbert space $H$ and $x$ is in $H$.</p> <p>Proof: Observe that</p> <p>\begin{eqnarray*} 0 &lt;= || x - \sum_{n=1}^{m} &lt;x,e_{n}&gt;e_{n} ||^2 &amp; = &amp; \left&lt; x - \sum_{n=1}^{m} &lt;x,e_{n}&gt;e_{n}, x - \sum_{n=1}^{m} &lt;x,e_{n}&gt;e_{n} \right&gt; \\ &amp; = &amp; \left&lt; x, x - \sum_{n=1}^{m} &lt;x,e_{n}&gt;e_{n} \right&gt; - \sum_{n=1}^{m} &lt;x, e_{n}&gt; \left &lt; e_{n}, x - \sum_{n=1}^{m} &lt;x,e_{n}&gt;e_{n} \right &gt; \\ &amp; = &amp; ||x||^2 - \sum_{n=1}^{m} |&lt;x, e_{n}&gt;|^2 \end{eqnarray*}</p> <p>I understand the first two lines above. My question is how one goes from the second-to-last line to the last line. Thanks for your help.</p>
Archaick
191,173
<p>For clarity's sake, I've rewritten this second to last line as</p> <p>$$\left&lt; x, x - \sum_{n=1}^{m}\left[ &lt;x,e_{n}&gt;e_{n}\right] \right&gt; - \sum_{n=1}^{m}\left[ &lt;x, e_{n}&gt; \left &lt; e_{n}, x - \sum_{i=1}^{m} &lt;x,e_{i}&gt;e_{i} \right &gt;\right].$$ Since the inner product is linear in the first argument and conjugate-linear in the second, $$\left&lt; x, x - \sum_{n=1}^{m}\left[ &lt;x,e_{n}&gt;e_{n}\right] \right&gt; =||x||^2-\sum_{n=1}^m \overline{&lt;x,e_n&gt;}&lt;x,e_n&gt;=||x||^2-\sum_{n=1}^m \left|&lt;x,e_n&gt;\right|^2.$$ Since $&lt;e_n,e_i&gt;=0$ whenever $i \neq n$ and $1$ otherwise, we find that, for $1 \leq n\leq m$, $$\left &lt; e_{n}, x - \sum_{i=1}^{m} &lt;x,e_{i}&gt;e_{i} \right &gt;=&lt;e_n,x&gt;-\overline{&lt;x,e_n&gt;}=0,$$ since inner products are conjugate-symmetric: $&lt;e_n,x&gt;=\overline{&lt;x,e_n&gt;}$. (Over a real Hilbert space this is just symmetry.) Hence the second term in the original expression is identically zero. Hope that helps! Let me know if anything here is unclear.</p>
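<p>A finite-dimensional illustration of the final identity (my own addition): in $\mathbb{R}^3$, with only part of an orthonormal basis, the squared norm of the residual equals $||x||^2$ minus the sum of the squared coefficients:</p>

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

x = [3.0, -4.0, 12.0]
es = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]   # orthonormal, not a full basis

coeffs = [dot(x, e) for e in es]           # the <x, e_n>
residual = [xi - sum(c * e[i] for c, e in zip(coeffs, es))
            for i, xi in enumerate(x)]     # x - sum <x,e_n> e_n

lhs = dot(residual, residual)                  # ||x - sum <x,e_n> e_n||^2
rhs = dot(x, x) - sum(c * c for c in coeffs)   # ||x||^2 - sum |<x,e_n>|^2
print(lhs, rhs)  # 144.0 144.0
```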
3,200,940
<p><a href="https://i.stack.imgur.com/7k9P8.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7k9P8.jpg" alt="enter image description here"></a></p> <p>Velleman's logic in sentence 3 under figure 4 is confusing me. He is using lines two and four of the truth table to infer what Q of line 1 should be. But lines two and four assume P --> Q are true while line 1 assumes P --> Q to be false. So the latter doesn't follow the former in the way Velleman is reasoning here.</p> <p>Furthermore it is already given in the first two columns of P and Q that they are both already F, so how he reasons that there is even a possible "inference" of Q being T is very strange.</p> <p>Help anyone.</p> <p>Thanks!</p>
lemontree
344,246
<p>Admittedly it took me some time too to figure out what the author meant; here's what I think is intended: </p> <p>An inference is valid iff in all the rows where all of the premises are true, the conclusion is true as well; in other words, there is no row where all premises are true but the conclusion is false. To check whether the entailment holds, we only need to consider the rows which have true in all premise columns; rows where at least one of the premises is false may simply be ignored. </p> <p>In figure 4, we need the additional premise P to ensure that the entailment holds: If it weren't for P, then with row 1 we would have a row where all the premises (now only consisting of <span class="math-container">$P \to Q$</span>) are true, but in this row the conclusion <span class="math-container">$Q$</span> is false. So from <span class="math-container">$P \to Q$</span> alone, without <span class="math-container">$P$</span> as a second premise, we can't deduce <span class="math-container">$Q$</span>, because the validity of the argument would fail in the first row. </p> <p>This would be different if we changed the first row for <span class="math-container">$P \to Q$</span> to false: Then this row would be unproblematic for the truth of the conclusion, since we only need to consider rows where all the premises are true, and now that we changed the only premise <span class="math-container">$P \to Q$</span> to false in this row, that row passes the test anyway. So we no longer need the falsity of <span class="math-container">$P$</span> to ensure that we don't have a case where the premises are true but the conclusion is false, because we already have the falsity of <span class="math-container">$P \to Q$</span>.</p> <p>So in the hypothetical scenario laid out where <span class="math-container">$P \to Q$</span> were false in the first row, we wouldn't need the second premise <span class="math-container">$P$</span> to conclude <span class="math-container">$Q$</span>, because then the statement "There are no rows such that all premises are true and the conclusion is false" is already guaranteed by <span class="math-container">$P \to Q$</span> alone. </p> <p>But indeed it is a bit strange to assume that the column for <span class="math-container">$P \to Q$</span> is altered in some way, because the truth table for <span class="math-container">$P \to Q$</span> as given in figure 4 is just how implication is defined, so the confusion you uttered in the second part of your question is understandable. </p>
921,144
<p>Can you give me an example of a function on a metric space which is continuous but not uniformly continuous? Definitions are almost the same for both terms. This is what I found on wiki: ''The difference between being uniformly continuous, and being simply continuous at every point, is that in uniform continuity the value of $\delta$ depends only on $\varepsilon$ and not on the point in the domain.'' But in both definitions there's only $\exists \delta &gt;0$ </p>
Lehs
171,248
<p>Continuity is a condition on the function at each single point of the domain, while uniform continuity is a condition on the function for pairs of points of the domain. The difference shows up in the order of the quantifiers. Continuity at every point reads $$\forall x\,\forall \varepsilon&gt;0\,\exists \delta&gt;0\,\forall y\,\big(d(x,y)&lt;\delta \Rightarrow d(f(x),f(y))&lt;\varepsilon\big),$$ so $\delta$ is allowed to depend on the point $x$, while uniform continuity reads $$\forall \varepsilon&gt;0\,\exists \delta&gt;0\,\forall x\,\forall y\,\big(d(x,y)&lt;\delta \Rightarrow d(f(x),f(y))&lt;\varepsilon\big),$$ where a single $\delta$ has to work for all pairs of points at once. A standard example of a continuous but not uniformly continuous function is $f(x)=x^2$ on $\Bbb R$ (another is $f(x)=1/x$ on $(0,1)$).</p>
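<p>A concrete instance is $f(x)=x^2$ on $\Bbb R$: it is continuous at every point, yet the $\delta$ needed at base point $x$ shrinks as $x$ grows, so no single $\delta$ serves all points and $f$ is not uniformly continuous. A short Python illustration (my own addition): solving $(x+\delta)^2-x^2=\varepsilon$ for the largest workable $\delta$ gives $\delta(x)=\sqrt{x^2+\varepsilon}-x$:</p>

```python
import math

# Largest delta that works at base point x for f(x) = x^2 and a fixed eps:
# (x + delta)^2 - x^2 = eps  =>  delta(x) = sqrt(x^2 + eps) - x ~ eps/(2x).
eps = 1.0
base_points = [1.0, 10.0, 100.0, 1000.0]
deltas = [math.sqrt(x * x + eps) - x for x in base_points]
for x, d in zip(base_points, deltas):
    print(x, d)  # delta shrinks roughly like eps / (2 x)
```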
3,830,636
<p>This is something I'm doing for a video game, so you may see some nonsense in the examples I provide.</p> <p>Here's the problem: I want to get a specific amount of minerals; to get these minerals I need to refine ore. There are various kinds of ore, and each of them provides different amounts of minerals per volume. So I want to calculate the optimum amount of ore (by least possible volume) to get the amount of minerals.</p> <p>For example:</p> <ul> <li>35 m<sup>3</sup> of Plagioclase is refined into 15 Tritanium, 19 Pyerite, 29 Mexallon</li> <li>120 m<sup>3</sup> of Kernite is refined into 79 Tritanium, 144 Mexallon, 14 Isongen</li> </ul> <p>How could I go and calculate the combination of Plagioclase and Kernite that gives at least 1000 Tritanium and 500 Mexallon with the least amount of ore (by volume)?</p> <p>I think this is a linear programming problem, but I haven't touched this subject in years.</p>
Sil
290,240
<p>Rob's answer shows the problem definition in both linear and integer programming. Here is integer programming code for experimenting. In practice you would typically use some library for the job; here is an example in Python 3 using the <a href="https://docs.python-mip.com/en/latest/quickstart.html" rel="nofollow noreferrer">Python-MIP</a> library.</p> <p>Problem definition:</p> <pre><code>ores_def={ &quot;Plagioclase&quot;: { &quot;materials&quot;: { &quot;Tritanium&quot;: 15, &quot;Pyerite&quot;: 19, &quot;Mexallon&quot;: 29 }, &quot;volume&quot;: 35 }, &quot;Kernite&quot;: { &quot;materials&quot;: { &quot;Tritanium&quot;: 79, &quot;Mexallon&quot;: 144, &quot;Insongen&quot;: 14 }, &quot;volume&quot;: 120 }, } expected={ &quot;Tritanium&quot;: 1000, &quot;Mexallon&quot;: 500 } </code></pre> <p>Now if you mine <span class="math-container">$x_i$</span> units of the <span class="math-container">$i$</span>-th ore, <span class="math-container">$c_i$</span> is the volume of the <span class="math-container">$i$</span>-th ore (per mining unit) and <span class="math-container">$a_{i,1},a_{i,2},\dots,a_{i,k}$</span> are the amounts of the individual materials for the <span class="math-container">$i$</span>-th ore. So if you want to mine <span class="math-container">$b_1,b_2,\dots,b_k$</span> of each material, you want to minimize <span class="math-container">$\sum c_i x_i$</span> subject to <span class="math-container">$\sum_i a_{i,j}x_i \geq b_j$</span> and <span class="math-container">$x_i \geq 0$</span> for <span class="math-container">$i=1,2,\dots,n$</span>.
So, transforming the problem definition into the variables described:</p> <pre><code>mats = set() for name, ore in ores_def.items(): for mat in ore[&quot;materials&quot;]: mats.add(mat) mats_list=list(sorted(mats)) ores_list=list(sorted(ores_def.keys())) n=len(ores_list) k=len(mats_list) b=[0]*len(mats_list) for mat, value in expected.items(): b[mats_list.index(mat)] = value c=[0]*n for i in range(n): c[i] = ores_def[ores_list[i]][&quot;volume&quot;] a=dict() for i in range(n): d = ores_def[ores_list[i]] for j in range(k): m = mats_list[j] if m in d[&quot;materials&quot;]: a[i,j] = d[&quot;materials&quot;][m] else: a[i,j] = 0 </code></pre> <p>And finally the optimization itself:</p> <pre><code>from mip import * m = Model() x = [ m.add_var(var_type=INTEGER, lb=0, name=ores_list[i]) for i in range(n) ] for j in range(k): m += xsum(a[i,j]*x[i] for i in range(n)) &gt;= b[j] m.objective = minimize(xsum(c[i]*x[i] for i in range(n))) m.verbose = 0 status = m.optimize() if status == OptimizationStatus.OPTIMAL or status == OptimizationStatus.FEASIBLE: print(f'solution found ({&quot;optimal&quot; if status == OptimizationStatus.OPTIMAL else &quot;non-optimal&quot;}):') for v in m.vars: print(f'{v.name} : {round(v.x)}') else: print(&quot;no optimal or feasible solution found&quot;) </code></pre> <p>Output:</p> <pre><code>solution found (optimal): Kernite : 13 Plagioclase : 0 </code></pre> <p>Now it should be easy to play with the starting definition and see how it affects the result.</p>
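<p>An independent brute-force cross-check of the solver's answer (my own addition; the search box of 70 units per ore is an assumption that comfortably contains any optimum, since 70 Kernite alone already oversatisfies both constraints):</p>

```python
# Enumerate all (kernite, plagioclase) counts in a small box and keep the
# feasible combination with the least volume; it should match the MIP output.
best = None
for kern in range(70):
    for plag in range(70):
        tritanium = 79 * kern + 15 * plag
        mexallon = 144 * kern + 29 * plag
        if tritanium >= 1000 and mexallon >= 500:
            volume = 120 * kern + 35 * plag
            if best is None or volume < best[0]:
                best = (volume, kern, plag)
print(best)  # (1560, 13, 0) -- 13 Kernite, 0 Plagioclase, as the solver found
```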
2,615,626
<p>The problem I have is:</p> <blockquote> <p>$\lim \limits_{x \to \infty} \sin{(x)}\ e^{-x}$</p> </blockquote> <p>Things I've tried:</p> <ol> <li><p>Researching how to do this problem, I've come across kind of similar examples that use either Taylors Rule, L'Hopitals Rule, or the Squeeze Theorem. Not sure which one to use.</p></li> <li><p>On Wolfram the anser is $0$.</p></li> <li><p>Broke the problem apart into $\lim \limits_{x \to \infty} \sin{(x)}\lim \limits_{x \to \infty} e^{-x}$</p></li> </ol> <p>$\lim \limits_{x \to \infty} \sin{(x)}=[-1,1]$</p> <p>$\lim \limits_{x \to \infty} e^{-x}=0$</p>
Peter Szilas
408,605
<p>For fun:</p> <p>Let $x \gt 0:$</p> <p>$0\le |\dfrac{\sin x}{e^x}| \le \dfrac{1}{e^x}\lt\dfrac{1}{x}.$</p> <p>Used: $e^x = 1+x+x^2/2! +...\gt x.$</p> <p>Let $\epsilon &gt;0$ be given. </p> <p>Choose $M$, real, such that $M &gt;1/\epsilon.$</p> <p>For $x\gt M$ we have:</p> <p>$0\le |\dfrac{\sin x}{e^x}| \lt \dfrac{1}{x} \lt\dfrac{1}{M} \lt \epsilon.$</p>
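<p>A quick numeric illustration of the squeeze (my own addition): the envelope $e^{-x}$ crushes the oscillation of $\sin x$:</p>

```python
import math

# |sin(x) * e^{-x}| <= e^{-x}, and e^{-x} -> 0, so the product -> 0.
xs = [1.0, 5.0, 10.0, 20.0, 50.0]
vals = [math.sin(x) * math.exp(-x) for x in xs]
for x, v in zip(xs, vals):
    assert abs(v) <= math.exp(-x)   # the squeeze bound
    print(x, v)
```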
3,754,819
<p>Evaluate the integral: <span class="math-container">$$\int_{1}^{\sqrt{2}} \frac{x^4}{(x^2-1)^2+1}\,dx$$</span></p> <p>The denominator is irreducible, if I want to factorize and use partial fractions, it has to be in complex numbers and then as an indefinite integral, we get <span class="math-container">$$x + \frac{\tan^{-1}\left(\displaystyle\frac{x}{\sqrt{-1 - i}}\right)}{\sqrt{-1 - i}} + \frac{\tan^{-1}\left(\displaystyle\frac{x}{\sqrt{-1 + i}}\right)}{\sqrt{-1 + i}}+C$$</span></p> <p>But evaluating this from <span class="math-container">$1$</span> to <span class="math-container">$\sqrt{2}$</span> is another mess, keeping in mind the principal values. I also tried the substitution <span class="math-container">$x \mapsto \sqrt{x+1}$</span>, which then becomes</p> <p><span class="math-container">$$\frac{1}{2}\int_{0}^1 \frac{(x+1)^{3/2}}{x^2+1}\,dx$$</span></p> <p>I don't see where I can go from here. Another substitution of <span class="math-container">$x\mapsto \tan x$</span> also leads me nowhere.</p> <p>Should I approach the problem in some other way?</p>
Nikunj
287,774
<p>Start by writing <span class="math-container">$x^4 = (x^2 - 1 + 1)^2$</span> <span class="math-container">$\implies x^4 = (x^2-1)^2 + 1 + 2(x^2-1)$</span></p> <p>So our integral becomes:</p> <p><span class="math-container">$$\int_1^{\sqrt2}\frac{(x^2-1)^2 + 1 + 2(x^2-1)}{(x^2-1)^2 + 1}\,dx$$</span> <span class="math-container">$$ = \int_1^{\sqrt2}\frac{(x^2-1)^2 + 1}{(x^2-1)^2 + 1}\,dx + 2\int_1^{\sqrt2}\frac{(x^2-1)}{(x^2-1)^2 + 1}\,dx$$</span> <span class="math-container">$$ = \sqrt2 - 1 + 2\int_1^{\sqrt2}\frac{(x^2-\sqrt2) + (\sqrt2-1)}{(x^4 - 2x^2 + 2)}\,dx$$</span> <span class="math-container">$$ = \sqrt2 - 1 + 2\int_1^{\sqrt2}\frac{(1-\sqrt2/x^2 )}{((x + \sqrt2/x)^2 - 2 - 2\sqrt2)}\,dx + 2(\sqrt2-1)\int_1^{\sqrt2}\frac{1}{(x^2-1)^2 + 1}\,dx$$</span> Here, I've split the integral so that after dividing the numerator and denominator by <span class="math-container">$x^2$</span>, I can complete a square (the square of <span class="math-container">$(x + \sqrt2/x)$</span>) and have its derivative in the numerator for an easy substitution.</p> <p>Put <span class="math-container">$(x + \sqrt2/x) \rightarrow t$</span> in the first integral; you can see that the upper and lower limits become the same <span class="math-container">$(1+ \sqrt2)$</span>, so the first integral becomes <span class="math-container">$0$</span> and you're left with: <span class="math-container">$$\sqrt2 - 1 + 2(\sqrt2-1)\int_1^{\sqrt2}\frac{1}{(x^2-1)^2 + 1}\,dx$$</span> I was trying to avoid using complex numbers, but this integral becomes so much easier if you write: <span class="math-container">$(x^2-1)^2 + 1 = (x^2 - 1 - i)(x^2 - 1 + i)$</span> and use partial fractions.
<span class="math-container">$$ =\sqrt2 - 1 + \frac{\sqrt2 - 1}{i}\int_1^{\sqrt2}\left(\frac{1}{x^2-1-i} - \frac{1}{x^2-1+i}\right)\,dx$$</span></p> <p><span class="math-container">$$=\sqrt2 - 1 + \frac{\sqrt2 - 1}{i}\left(\frac{\tan^{-1}\left(\displaystyle\frac{x}{\sqrt{-1 - i}}\right)}{\sqrt{-1 - i}} - \frac{\tan^{-1}\left(\displaystyle\frac{x}{\sqrt{-1 + i}}\right)}{\sqrt{-1 + i}}\right)\Bigg|_{x=1}^{x=\sqrt2}$$</span></p>
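A quick stdlib-only numerical cross-check of the algebraic split $x^4 = ((x^2-1)^2+1) + 2(x^2-1)$ and of the claim that the $t = x + \sqrt2/x$ substitution makes the middle piece vanish (the `simpson` helper is my own sketch, not part of the answer):

```python
import math

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule on [a, b]; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, b = 1.0, math.sqrt(2)
D = lambda x: (x**2 - 1)**2 + 1

# The algebraic split: integral of x^4/D equals (sqrt2 - 1) + 2 * integral of (x^2-1)/D.
lhs = simpson(lambda x: x**4 / D(x), a, b)
rhs = (b - 1.0) + 2 * simpson(lambda x: (x**2 - 1) / D(x), a, b)
print(abs(lhs - rhs))  # ~0

# The t = x + sqrt(2)/x substitution sends both endpoints to 1 + sqrt(2),
# so this piece of the integral vanishes:
mid = simpson(lambda x: (x**2 - math.sqrt(2)) / (x**4 - 2 * x**2 + 2), a, b)
print(abs(mid))  # ~0
```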
1,072,524
<p>Suppose $X$ is an integral scheme. I would like to show that the restriction maps $res_{U,V} : O_X(U) \rightarrow O_X(V)$ is an inclusion so long as $V$ is not empty. I was wondering if someone could give me some assistance with this. This is exercise 5.2 I on Ravi Vakil's notes. Thank you!</p>
Amitai Yuval
166,201
<p>In other words, we want to show that a section $s\in O_X(U)$ that vanishes on $V$ is necessarily the trivial section. Since $X$ is integral, $U$ is irreducible, thus the nonempty open set $V$ is dense in $U$. The vanishing locus of $s$ is closed and contains $V$, hence is all of $U$; as $X$ is also reduced, a section vanishing at every point of $U$ is trivial, and we're done.</p>
80,056
<p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p> <p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and to do all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p> <p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p> <p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
Thierry Zell
8,212
<p>I've already given my opinion, and this is more of a remark: how the pros and cons are weighed between blackboard and slides should be influenced by a whole collection of classroom factors, and the first one among them should probably be class size.</p> <p>This is a rather obvious remark, but I thought it was worth pointing out; Jaap Eldering's answer brought it to the forefront for me, because he mentioned doing examples on slides to avoid making mistakes, and my first reaction was: "making mistakes in class is good!". </p> <p>And then it occurred to me that I can use mistakes in the classroom fairly effectively because I only teach small classes. In a big classroom, I would simply not be able to receive instant feedback efficiently enough to do this as well, and I would not be comfortable trying.</p> <p>In a very large lecture hall, the blackboard will often lose a lot of its advantages given how large you have to write.</p>
80,056
<p>I am toying with the idea of using slides (Beamer package) in a third year math course I will teach next semester. As this would be my first attempt at this, I would like to gather ideas about the possible pros and cons of doing this. </p> <p>Obviously, slides make it possible to produce and show clear graphs/pictures (which will be particularly advantageous in the course I will teach) and to do all sorts of things with them; things that are difficult to carry out on a board. On the other hand, there seems to be something about writing slowly with chalk on a board that makes it easier to follow and understand (I have to add the disclaimer that here I am just relying on only a few reports I have from students and colleagues).</p> <p>I would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have noticed, and ways you have found to optimize this.</p> <p>I am aware that the question may be a bit like the one with Japanese chalks, once discussed here, but I think as using slides in the classroom is becoming more and more common in many other fields, it may be helpful to probe the advantages and disadvantages of using them in math classrooms as well.</p>
Andrew Stacey
45
<p>I'm going to try to answer the actual question rather than saying whether I think that chalk or projector is better. That "question" being:</p> <blockquote> <p>It would very much like to hear about your experiences with using slides in the classroom, possible pitfalls that you may have notices, and ways you have found to optimize this.</p> </blockquote> <p>(Though I'm curious about the request for ways found of optimising pitfalls!)</p> <p>I switched to using beamer slides 2.5 years ago. I'm partway through the fifth course that I've given using slides (and the course immediately prior to those was given on chalkboard but having prepared them as slides - a half-and-half experiment). By-and-large, I would say that I give better lectures using the slides than I used to when giving chalk talks. The following is a fairly disorganised list of my thoughts on both why I chose to switch and things that I've learnt in the process. I hope that this will be of use to you. Feel free to contact me for more details, and we've also recently been discussing this a bit on the <a href="http://www.math.ntnu.no/~stacey/Mathforge/nForum">nForum</a>.</p> <ol> <li><p>A big reason for me switching was that I teach in English in a Norwegian University. Although the students have excellent English, it is not their native language. It takes them longer to copy from the board, and their error rate is higher, so more time in a chalk-talk is wasted waiting for them to catch up than I felt I could allow. Giving the lecture using slides meant that I had much more control over where the students were focussing at any particular time (mainly, I wanted this to be on me).</p> <p>(To be clear: the time taken was <em>in addition</em> to the necessary time for students to process ideas that they've just been told about. Of course, pauses are necessary. 
But pauses by happenstance - because the students are busy copying the board - are not the best pauses.)</p></li> <li><p>As a consequence, I <em>always</em> make my slide notes available beforehand. Admittedly, sometimes it was at 11pm before an 8am lecture, but no-one's perfect! They can get the actual presentation, a compressed version (the <code>trans</code> option), and a handout version (they are strongly encouraged only to print the latter). That way, they can read in advance what I'm going to show them, and they can bring the handout version along to add any additional notes if they wish.</p></li> <li><p>The handouts are not a substitute for going to the lecture. The slides are not a summary of the lecture, they are what I want the students to be able to see while I am talking to them about something. Ideally, when the students look at the notes afterwards then they will be able to remember (more-or-less) what I've said. But if they weren't at the lecture then they won't have anything to remember so the handouts will be of less use (not of no use, it will still say what topics were covered so they can find out about them by other means).</p></li> <li><p>Lectures never go completely as planned. But <strong>never</strong> use the chalk-board and the screen. Whenever I see someone doing this at a conference I want to run out of the lecture hall screaming. Not only will the lighting be completely different for both, but also the students will have the wrong mindset and will take time to make the switch. Use a system whereby you can write on the presentation (and can bring up blank pages if needed). You can even leave deliberate gaps if you want! As well as not requiring a change in gear, you can then make the annotations available afterwards (and have an easy record of the annotations that you made when you revise the slides for next year). 
I've used xournal (for Linux), jarnal (when forced to use Windows), and am currently using an iPad (despite what's said elsewhere, this is extremely usable for this). (Incidentally, I'd say that going the other way is acceptable: if you are primarily using the board and then want to show a couple of fancy pictures then so long as it doesn't take an age to set-up the projector then it's okay.)</p></li> <li><p>Practise. Get a system so that your writing on the screen is acceptable (don't worry about perfect), you know how your program works, and you can change pages easily (preferably without looking at the machine).</p></li> <li><p>Yes, it usually takes longer to prepare the slides - first time. But once you're used to the flow of writing a beamer presentation then that aspect doesn't actually add that much more. What probably adds the most time is that you are now forced to completely prepare the lecture in advance, rather than "winging it" and claiming that it is "good for the students to see the professor make mistakes". (You can probably guess my reaction to those!) It can take some effort to get a really nice system, I think I have one, and now it doesn't take me long to prepare a presentation.</p> <p>On that note, preparing your notes in LaTeX makes it much easier to prepare it in "layers". First, lay out your lesson plan (you do have one, right?), then add the frame titles, finally add the content of each frame. Then go back and adjust the lesson plan according to what did and didn't fit as you expected. (And it's possible to produce the lesson plan from the same source as the presentation.)</p> <p>And when you come to reuse the slides, it's much faster.</p></li> <li><p>Think always "What can the students see right now?". If you want them to be able to refer to more than you can fit on a slide, consider giving them a "cheat sheet" handout as well. 
Slightly ironically, giving the lectures using LaTeX means that I am much more aware of how the presentation <em>looks</em>, something that is just as important as what is in it.</p></li> <li><p>As hinted above, my slide notes would not form a good set of "traditional lecture notes" from which to revise. But then I don't believe that the chief aim of a lecture should be to produce that. Again, consider using other methods for this. For my course, I have a wiki where I can put more lengthy arguments. I use homework questions to "force" the students to read the wiki.</p></li> </ol> <p>That's all that I can think of right now. You can get an idea of what my lectures look like by visiting the home page of my current course: <a href="http://mathsnotes.math.ntnu.no/mathsnotes/show/TMA4145+Home+Page">http://mathsnotes.math.ntnu.no/mathsnotes/show/TMA4145+Home+Page</a>.</p> <p>What I've said above is phrased a bit like advice, but it's really just a list of things that spring to mind when I think about how I've adapted. I will give one genuine piece of advice: don't base your lectures on what worked best for you. The reason why should be obvious! But to illustrate the absurdity, let me note that the undergraduate course in which I learnt the most and where I really feel that I understood and still understand the topic the best, was the worst lecture course that I ever went to. Why? Simple: because I couldn't learn from the lecturer, I was forced to go and learn it by myself. So now I mumble, write illegibly, stop halfway through a proof, and get wildly sidetracked by irrelevant questions - because that's what worked best for me!</p>
2,432,817
<p>Let $X$ and $Y$ be topological spaces and let $A \subseteq X$ be a subspace of $X$. Suppose $A$ is homeomorphic to some subspace $B \subseteq Y$ of $Y$. Let $f$ explicitly denote this homeomorphism.</p> <p>If $f : A \to B$ is a homeomorphism, does $f$ extend to a homeomorphism between $\text{Cl}_X(A)$ and $\text{Cl}_Y(B)$, i.e. does there exist a $g : Cl_X(A) \to Cl_Y(B)$ such that $g|_{A} = f$?</p> <p>More generally if $A$ and $B$ are homeomorphic, does that imply that $Cl_X(A)$ and $Cl_Y(B)$ are homeomorphic? </p> <hr> <p>If $X$ and $Y$ are homeomorphic, then this is true, since it is a well-known theorem that $f[Cl_X(A)] = Cl_Y(f[A]) = Cl_Y(B)$ if $f : X \to Y$ is a homeomorphism. </p> <p>However I can't seem to come up with a counterexample for my question, though I assume the implication is false.</p> <hr> <p><em>Edit:</em> I know a counterexample that comes from CW complexes: if $(X, \xi)$ is a CW complex, then $\xi$ is a collection of open cells $e$, which are topological spaces homeomorphic to $\mathbb{B}^n$, the open unit ball in $\mathbb{R}^n$. Each $e \subseteq X$ is a subspace of a Hausdorff space $X$.</p> <p>Also a closed cell $\bar{e}$ is a topological space homeomorphic to the closed unit ball $\mathbb{\bar{B}}^n \subseteq \mathbb{R}^n$.</p> <p>Now in this example $Y = \mathbb{R}^n$. It is also known that for any $e \in \xi$, in general $Cl_X(e) \neq \bar{e} \cong \mathbb{\bar{B}}^n = Cl_Y(\mathbb{B}^n)$, where $e \cong \mathbb{B}^n$.</p> <p>Hence $e$ and $\mathbb{B}^n$ are homeomorphic, however $Cl_X(e)$ is not homeomorphic to $Cl_Y(\mathbb{B}^n) = \mathbb{\bar{B}}^n$, since $Cl_X(e) \neq \bar{e}$.</p> <hr> <p>But using the above example feels like bringing a gun to a knife fight; are there any simpler counterexamples?</p>
egreg
62,967
<p>Let $A$ be a non-closed subset of a compact Hausdorff space $Y$, with the induced topology; now take $X=A$ and $f\colon X\to Y$ the inclusion map. The closure of $A$ in $X$ is $A$, which is not compact (a compact subset of a Hausdorff space would be closed); the closure of $f(A)$ in $Y$ is compact. Hence the two closures cannot be homeomorphic.</p> <p>Explicit example: $A=X=(0,1)$, $Y=[0,1]$.</p>
915,414
<p>I recently did some work to try to find $\int{\frac{dx}{Ax^3 - B}}$, but I'm always paranoid that my solution has some minor trivial error in the middle of the process that screwed up the end result entirely, so could someone please help check my solution?</p> <p>The first step to my solution is to eliminate $A$ and $B$ from the integrand by pushing them out as constants and leaving $\frac{dx}{x^3 - 1}$. We start with $$ \begin{align} \int{\frac{dx}{Ax^3 - B}} &amp; = \int{\frac{dx}{A(x^3 - B/A)}} \\ &amp; = \frac{1}{A}\int{\frac{dx}{x^3 - B/A}}. \end{align} $$ To get rid of the $B/A$ term, we make the substitution $x = \sqrt[3]{B/A}u, \ dx = \sqrt[3]{B/A}du$. $$ \begin{align} \frac{1}{A}\int\frac{dx}{x^3 - B/A} &amp; = \frac{1}{A}\int{\frac{\sqrt[3]{B/A}du}{(B/A)u^3 - B/A}} \\ &amp; = \frac{1}{A}\cdot\frac{\sqrt[3]{B/A}}{B/A}\int{\frac{du}{u^3 - 1}} \\ &amp; = \mathcal{C}\int{\frac{du}{u^3 - 1}} \end{align} $$ with $\mathcal{C} = (AB^2)^{-1/3}$. Now we can just worry about solving $\int{\frac{du}{u^3 - 1}}$.</p> <p>We can decompose $\frac{1}{u^3 - 1}$ using the fact that $u^3 - 1 = (u - 1)(u^2 + u + 1)$. So</p> <p>$$ \begin{align} \frac{1}{u^3 - 1} &amp; = \frac{1}{(u - 1)(u^2 + u + 1)} \\ &amp; = \frac{P}{u - 1} + \frac{Qu + R}{u^2 + u + 1} \end{align} $$ Here $P = 1/3$, but $Q$ and $R$ aren't as trivial to find. $u^2 + u + 1$ has no real roots, so I chose to sub in $u = -\sqrt[3]{-1}$ (it was the first root that I found). 
We then have to solve the equation $$ (-\sqrt[3]{-1}-1)^{-1} = Q(-\sqrt[3]{-1}) + R $$ which we can solve by setting $Q = R = [(-1)^{2/3} - 1]^{-1}$ (which will also help when integrating $\frac{Qu + R}{u^2 + u + 1}$ since we can factor $Q$ and $R$ out as a common constant from the numerator).</p> <p>So \begin{align} \mathcal{C}\int{\frac{du}{u^3 - 1}} &amp; = \mathcal{C}\left[\frac{1}{3}\int{\frac{du}{u - 1}} + Q\int{\frac{u + 1}{u^2 + u + 1}du}\right] \\ &amp; = \mathcal{C}\left\{\frac{1}{3}\ln{|u - 1|} + Q\left[\frac{1}{2}\ln{|u^2 + u + 1|} + \frac{1}{\sqrt{3}}\arctan{\left(\frac{2}{\sqrt{3}}u + \frac{1}{\sqrt{3}}\right)} \right] \right\} \\ &amp; = (AB^2)^{-1/3}\left\{\frac{1}{3}\ln{\left|\frac{x}{\sqrt[3]{B/A}} - 1\right|} + \frac{1}{(-1)^{2/3} - 1}\left[\frac{1}{2}\ln{\left|\left(\frac{x}{\sqrt[3]{B/A}}\right)^2 + \frac{x}{\sqrt[3]{B/A}} + 1\right|} \right. \right. \\ &amp; \hspace{15mm} \left. \left. + \frac{1}{\sqrt{3}}\arctan{\left(\frac{2x}{\sqrt{3}\sqrt[3]{B/A}} + \frac{1}{\sqrt{3}}\right)}\right]\right\} + \text{constant} \end{align} </p> <p>(I know what $\int{\frac{x + 1}{x^2 + x + 1}dx}$ is from previous problems)</p> <p>How does it look? Did I do anything severely wrong (I don't feel entirely confident about the partial fractions part)? Any suggestions for how I could get to an answer faster or more efficiently?</p>
Claude Leibovici
82,404
<p>I think that we could make this simpler without involving complex numbers. </p> <p>Just as rogerl did using partial fractions, we have $$\frac{1}{u^3-1} =\frac{1}{3}\frac{1}{u-1}-\frac{1}{3}\frac{u+2}{u^2+u+1}$$ Now $$\frac{u+2}{u^2+u+1}=\frac{1}{2}\frac{2u+4}{u^2+u+1}=\frac{1}{2}\Big(\frac{2u+1}{u^2+u+1}+\frac{3}{u^2+u+1}\Big)$$ that is to say $$\frac{u+2}{u^2+u+1}=\frac{1}{2}\Big(\frac{2u+1}{u^2+u+1}+\frac{3}{\left(u+\frac{1}{2}\right)^2+\frac{3}{4}}\Big)$$ All pieces can be easily integrated and then $$\int\frac{du}{u^3-1} =\frac{1}{3} \log |u-1|-\frac{1}{6} \log \left(u^2+u+1\right)-\frac{1}{\sqrt{3}} \tan ^{-1}\left(\frac{2 u+1}{\sqrt{3}}\right)+C$$ </p>
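A quick numerical sanity check of this antiderivative, comparing a central-difference derivative against $1/(u^3-1)$; the function names below are mine, and $\log|u-1|$ is used so the check runs on both sides of $u=1$:

```python
import math

def F(u):
    # Antiderivative of 1/(u^3 - 1), written with |u - 1| so it is valid for u < 1 and u > 1
    return (math.log(abs(u - 1)) / 3
            - math.log(u * u + u + 1) / 6
            - math.atan((2 * u + 1) / math.sqrt(3)) / math.sqrt(3))

def f(u):
    return 1.0 / (u**3 - 1)

h = 1e-6
errors = []
for u in (-2.0, 0.5, 3.0):
    num_deriv = (F(u + h) - F(u - h)) / (2 * h)  # central difference, O(h^2)
    errors.append(abs(num_deriv - f(u)))
print(max(errors))  # small
```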
915,016
<p>Let $S$ be a non - empty set and $F$ be a field. Let $C(S,F)$ denote the set of all functions $f\in \mathcal F(S,F)$ such that $f(s)=0$ for all but a finite number of elements of $S$. Prove that $C(S,F)$ is a subspace of $\mathcal F(S,F)$</p> <p>My progress: I have shown that if $f,g\in C(S,F)$ and $a\in F$ then $f+g\in C(S,F)$ and $af\in C(S,F)$.</p> <p>I need to show that the zero element is in $C(S,F)$. What is the zero element here and how do I show it's in $C(S,F)$</p>
Dmoreno
121,008
<p>Why learn that <em>complicated</em> formula? Will you even remember it when taking your exam? Why not use the <em>method of characteristics</em>, which I think should be the first thing taught when learning PDEs? In your case, it reads:</p> <p>$$\frac{\mathrm{d}t}{1} = \frac{\mathrm{d}x}{1} = \frac{\mathrm{d}u}{(x+t) \cos{xt}}. $$</p> <p>From the first equality we have: $\mathrm{d}t - \mathrm{d}x = 0, $ which after integrating yields $x-t=c$, where $c$ is a constant called the characteristic. Take now the second equality and put $x$ as a function of $t$ to have:</p> <p>$$\mathrm{d}t = \frac{\mathrm{d}u}{(2t+c) \cos{[(t+c)t]}},$$ </p> <p>which is a separable differential equation. Solve the corresponding integral$^*$, put the constant of integration as a function of $c = x-t$, solve for the given initial condition and you are done!</p> <p>Cheers!</p> <hr> <p>$^*$Note that the argument of the $\sin$ is a primitive of $2t+c$.</p>
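Since $\frac{d}{dt}\big[(t+c)t\big] = 2t + c$, the right-hand side integrates to $\sin((t+c)t)$ plus a constant, as the footnote hints. A small numerical check of that primitive (the value of $c$ here is an arbitrary choice of mine):

```python
import math

c = 0.7  # arbitrary characteristic constant, chosen just for the check

def candidate(t):
    # Proposed primitive of (2t + c) * cos((t + c) * t)
    return math.sin((t + c) * t)

h = 1e-6
errs = []
for t in (-1.0, 0.3, 2.0):
    num_deriv = (candidate(t + h) - candidate(t - h)) / (2 * h)
    exact = (2 * t + c) * math.cos((t + c) * t)
    errs.append(abs(num_deriv - exact))
print(max(errs))  # small
```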
1,535,376
<p>I have a hard time understanding that when and under what conditions we can use Gauss elimination with complete pivoting, and when with partial pivoting, and when with no pivoting? (I mean what is the exact feature of a matrix that will tell us which one to choose?)</p>
JP McCarthy
19,352
<p>Gaussian elimination can be used as long as you are not using decimal rounding.</p> <p>If you are using rounding, Gaussian elimination can be very inaccurate and you should use partial pivoting in this case.</p> <p>I don't know without a Google when complete pivoting is necessary.</p>
1,535,376
<p>I have a hard time understanding that when and under what conditions we can use Gauss elimination with complete pivoting, and when with partial pivoting, and when with no pivoting? (I mean what is the exact feature of a matrix that will tell us which one to choose?)</p>
eepperly16
239,046
<p><strong>Rule of Thumb/TL;DR:</strong> When doing calculations using floating point numbers (such as double, single, and float data types in many common programming languages), use partial pivoting unless you know you're safe without it, and complete pivoting only when you know that you need it.</p> <hr> <p><strong>Longer Explanation:</strong> A square matrix <span class="math-container">$A$</span> has an <span class="math-container">$LU$</span> factorization (without pivoting) if, and only if, no zero is encountered in the pivot position when computing an <span class="math-container">$LU$</span> factorization of <span class="math-container">$A$</span>. However, when doing computations using floating point numbers, a pivot that is <em>nearly</em> zero can lead to dramatic rounding errors. The simple workaround is to always permute the rows of the matrix such that the largest nonzero entry in a column is chosen as the pivot entry. This ensures that a nearly zero pivot is never chosen. Complete pivoting goes even further by using row and column permutations to select the largest entry in the entire matrix as the pivot entry.</p> <p>The above paragraph is a loose, intuitive picture of why pivoting is necessary. One can also prove sharp error bounds by carefully tracking the propagation of errors throughout the entire <span class="math-container">$LU$</span> factorization calculation. One way of structuring this error bound is a so-called <strong>backward error estimate</strong>. In a backward error estimate for solving a linear system of equations <span class="math-container">$Ax = b$</span>, one bounds the perturbation <span class="math-container">$E$</span> necessary to make the computed solution <span class="math-container">$\hat{x}$</span> produced by doing Gaussian elimination followed by back substitution an <em>exact</em> solution to a nearby system of linear equations <span class="math-container">$(A+E)\hat{x} = b$</span>. 
A backward error estimate can be very revealing if, for example, the matrix <span class="math-container">$A$</span> is determined by measurements of an engineering system with some error tolerance. If the entries of <span class="math-container">$A$</span> are only known <span class="math-container">$\pm 1\%$</span> and the backward error is less than <span class="math-container">$0.1\%$</span>, then the numerical errors made during our computations are smaller than the measurement errors and we've done a good job. TL;DR the quantity <span class="math-container">$E$</span> is a reasonable quantity to measure the error in an <span class="math-container">$LU$</span> factorization and resulting linear solve.</p> <p>For Gaussian elimination without pivoting, the backward error can be arbitrarily bad. Fortunately, for partial pivoting, the backward error <span class="math-container">$E$</span> can be bounded as <span class="math-container">$\|E\|_\infty \le 6n^3 \rho \|A\|_\infty u + \mbox{higher order terms in $u$}$</span> <span class="math-container">${}^\dagger$</span>. Here, <span class="math-container">$\|\cdot\|_\infty$</span> is the <a href="https://en.wikipedia.org/wiki/Matrix_norm" rel="nofollow noreferrer">operator matrix <span class="math-container">$\infty$</span>-norm</a> and <span class="math-container">$u$</span> is the <a href="https://en.wikipedia.org/wiki/Machine_epsilon" rel="nofollow noreferrer">unit roundoff</a> which quantifies the accuracy of floating point computations (<span class="math-container">$u \approx 10^{-16}$</span> for <a href="https://en.wikipedia.org/wiki/Double-precision_floating-point_format" rel="nofollow noreferrer">IEEE double precision arithmetic</a>). The quantity <span class="math-container">$\rho$</span> is known as the growth factor for partial pivoting. 
While it is possible for <span class="math-container">$\rho$</span> to be as large as <span class="math-container">$2^{n-1}$</span>, in practice <span class="math-container">$\rho$</span> usually grows very modestly with <span class="math-container">$n$</span>.<span class="math-container">${}^\dagger$</span> Concerning the fact that <span class="math-container">$\rho$</span> grows very modestly in most applications, legendary numerical analyst Kahan wrote "Intolerable pivot-growth [with partial pivoting] is a phenomenon that happens only to numerical analysts who are looking for that phenomenon."<span class="math-container">${}^\$$</span> </p> <p>Nonetheless, one can write down matrices for which partial pivoting will fail to give an accurate answer due to an exponential growth factor <span class="math-container">$\rho = 2^{n-1}$</span>. Wilkinson showed that the growth factor for <em>complete pivoting</em> is much smaller in the worst case: <span class="math-container">$\rho \le n^{1/2} (2\cdot 3^{1/2}\cdots n^{1/(n-1)})^{1/2}$</span>. In practice, <span class="math-container">$\rho$</span> for complete pivoting is almost always less than <span class="math-container">$100$</span>.<span class="math-container">${}^\dagger$</span></p>
For example, <a href="https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-09766-4_95" rel="nofollow noreferrer">SuperLU</a> uses "static pivoting" where one does "best guess" pivoting before starting the <span class="math-container">$LU$</span> factorization and then does no pivoting during the factorization. The loss of accuracy of this approach is compensated by using a few steps of <a href="https://en.wikipedia.org/wiki/Iterative_refinement" rel="nofollow noreferrer">iterative refinement</a>.)</p> <p>If partial pivoting isn't accurate enough, one can move to using complete pivoting instead for its lower growth factor. As user3417 points out, there are other ways of solving <span class="math-container">$Ax = b$</span> other than using <span class="math-container">$LU$</span> factorization-based approaches and these may be faster and more accurate than Gaussian elimination with complete pivoting. For example, <span class="math-container">$QR$</span> factorization runs in the <span class="math-container">$O(n^3)$</span> operations and has no growth factor. There may be special cases when one truly does want to use an <span class="math-container">$LU$</span> factorization: for example, a Gaussian elimination-based approach can be used to construct <a href="https://math.berkeley.edu/~mgu/Seminar/Fall2009/ToeplitzSeminarTalk.pdf" rel="nofollow noreferrer">structure-preserving factorizations of a Cauchy-like matrix</a>. In this case, complete pivoting (or its close cousin <a href="https://www.sciencedirect.com/science/article/pii/S0377042700004064" rel="nofollow noreferrer">rook pivoting</a>) may be the best approach.</p> <hr> <p><span class="math-container">${}^\dagger$</span> Reference Golub and Van Loan's <em>Matrix Computations</em> Fourth Edition Chapter 3.4 </p> <p><span class="math-container">${}^\$$</span> Quoted in Higham's <em>Accuracy and Stability of Numerical Algorithms</em> Second Edition Chapter 9</p>
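The effect of a tiny pivot is easy to reproduce in a $2\times 2$ toy system (this example is mine, not taken from the references): solve $\epsilon x + y = 1$, $x + y = 2$ with $\epsilon = 10^{-20}$ in double precision, first with the natural ordering and then with the rows swapped as partial pivoting would do.

```python
# Exact solution: x = 1/(1 - eps) ~ 1, y = (1 - 2*eps)/(1 - eps) ~ 1.
eps = 1e-20

# Without pivoting: eliminate using the tiny pivot eps.
m = 1.0 / eps                        # huge multiplier ~ 1e20
y_bad = (2.0 - m * 1.0) / (1.0 - m)  # the "2" and the "1" are rounded away
x_bad = (1.0 - y_bad) / eps          # catastrophic cancellation in the numerator

# With partial pivoting: swap rows so the pivot is 1.
m = eps / 1.0
y_good = (1.0 - m * 2.0) / (1.0 - m * 1.0)
x_good = 2.0 - y_good

print(x_bad, y_bad)    # 0.0 1.0  -- x is completely wrong
print(x_good, y_good)  # 1.0 1.0  -- correct to machine precision
```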
1,534,246
<p>I'm trying to simplify this boolean expression:</p> <p>$$(AB)+(A'C)+(BC)$$</p> <p>I'm told by every calculator online that this would be logically equivalent:</p> <p>$(AB)+(A'C)$</p> <p>But so far, following the rules of boolean algebra, the best that I could get to was this: </p> <p>$(B+A')(B+C)(A+C)$</p> <p>All of the above are logically equivalent (I've made a truth table for each) but I don't understand what steps am I missing trying to simplify the expression.</p> <p>I also couldn't find an "expression simplifier" tool online that could show me the steps that I'm missing.</p> <p>Help / directions to go to would be much appreciated, thanks in advance.</p>
Graham Kemp
135,106
<p>Well, clearly there's either $A$ or $A'$ in the first two terms, so use this to split the third wheel up, and absorb the pieces.</p> <p>$\begin{align}(AB)+(A'C)+(BC) &amp; = (AB)+(A'C)+(A+A')(BC) \\ &amp; = (AB)+(A'C)+(ABC)+(A'BC) \\ &amp; = (AB+ABC)+(A'C+A'BC) \\ &amp; = (AB)(1+C)+(A'C)(1+B) \\ &amp; = (AB)+(A'C)\end{align}$</p>
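This is exactly the consensus theorem; the redundancy of the $BC$ term can also be confirmed by brute force over all eight assignments:

```python
from itertools import product

def lhs(a, b, c):
    # (AB) + (A'C) + (BC)
    return (a and b) or ((not a) and c) or (b and c)

def rhs(a, b, c):
    # (AB) + (A'C) -- the consensus term BC is redundant
    return (a and b) or ((not a) and c)

print(all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=3)))  # True
```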
1,049,933
<p>If $M$ is the transition matrix of a discrete Markov chain, and $M$ is irreducible, symmetric and positive-definite, is the resulting Markov chain necessarily aperiodic? </p> <p>In my intuition, periodicity would correspond to a $-1$-eigenvalue of $M$, but I don't know if that is true or how to formalize it.</p>
Woett
65,418
<p>By the most recent bound on <a href="https://en.wikipedia.org/wiki/Linnik%27s_theorem" rel="nofollow noreferrer">Linnik's Theorem</a>, there is an absolute constant <span class="math-container">$c$</span> such that for every prime <span class="math-container">$q &lt; cp_n^{1/5}$</span>, there is a prime <span class="math-container">$p &lt; p_n$</span> such that <span class="math-container">$p \equiv 1 \pmod{q}$</span>. Your least common multiple is therefore divisible by all primes below <span class="math-container">$cp_n^{1/5}$</span>. The prime number theorem implies that the product of all primes below <span class="math-container">$cp_n^{1/5}$</span> is <span class="math-container">$e^{(c + o(1))p_n^{1/5}}$</span>, and it follows that this is a lower bound on your lcm as well. </p> <p>Conjecturally there is a prime <span class="math-container">$p &lt; p_n$</span> such that <span class="math-container">$p \equiv 1 \pmod{q}$</span> for every <span class="math-container">$q &lt; cp_n^{1-\epsilon}$</span>, and this would provide a lower bound of <span class="math-container">$e^{(c + o(1))p_n^{1-\epsilon}}$</span>. On the other hand, your lcm is not divisible by any prime larger than <span class="math-container">$\frac{1}{2}p_n$</span> so, again using the prime number theorem, a straightforward upper bound is <span class="math-container">$e^{(\frac{1}{2} + o(1))p_n}$</span>.</p>
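Reading the quantity as $\operatorname{lcm}\{p-1 : p \le N \text{ prime}\}$ (an assumption on my part), its growth can be explored directly for small $N$; the helper names below are mine:

```python
from math import gcd, log

def primes_upto(n):
    # Simple sieve of Eratosthenes.
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, is_p in enumerate(sieve) if is_p]

def lcm_pm1(n):
    # lcm of p - 1 over primes p <= n
    L = 1
    for p in primes_upto(n):
        L = L * (p - 1) // gcd(L, p - 1)
    return L

for N in (100, 1000, 10000):
    print(N, log(lcm_pm1(N)))  # compare the growth of log(lcm) with N
```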
4,242,093
<p><em><strong>Question:</strong></em></p> <blockquote> <p>Let <span class="math-container">$G=(V_n,E_n)$</span> such that:</p> <ul> <li>G's vertices are words over <span class="math-container">$\sigma=\{a,b,c,d\}$</span> of length <span class="math-container">$n$</span>, such that there aren't two adjacent equal chars.</li> <li>An edge is defined to be between two vertices that differ by only one char.</li> </ul> <p>A. Does the graph contain an Euler cycle?</p> <ul> <li>Find a pattern.</li> </ul> <p>B. Does the graph contain a Hamiltonian cycle?</p> <ul> <li>This can be proven by induction.</li> </ul> </blockquote> <hr /> <p><span class="math-container">$Solution.A.$</span></p> <p>Now, when <span class="math-container">$n=1$</span>, we have 4 vertices: <span class="math-container">$$v_1= \ 'a'$$</span> <span class="math-container">$$v_2= \ 'b'$$</span> <span class="math-container">$$v_3= \ 'c'$$</span> <span class="math-container">$$v_4= \ 'd'$$</span></p> <p>Therefore, for each <span class="math-container">$v\in \{v_1,v_2,v_3,v_4\}$</span>, <span class="math-container">$N(v)=\{v_1,v_2,v_3,v_4\}\setminus \{v\}$</span>, so each degree is 3, and by a theorem we get that there isn't an Euler cycle.</p> <p>In addition, when <span class="math-container">$n=4$</span>, consider the string <span class="math-container">$"abad"$</span>: we have 2 options for replacing each of the two end characters of the string. In order to replace the second char we have 2 options, replacing it by <span class="math-container">$'c'$</span> or <span class="math-container">$'d'$</span>. For the third char, we can replace it only by <span class="math-container">$'c'$</span>. 
In total, we got 7 edges incident to this vertex, so by a theorem, we get that there isn't an Euler cycle.</p> <p>I cannot find a pattern here, because if we take a look at <span class="math-container">$n=2$</span> we get an Euler cycle.</p> <p><span class="math-container">$Solution.B.$</span></p> <p>First, we examine whether each vertex has at least <span class="math-container">$\frac{n}{2}$</span> neighbors. Hence, we should take the vertex with the least number of neighbors. This vertex should be a string with distinct chars, i.e. the string &quot;abcd&quot; when <span class="math-container">$n=4$</span>. The first and last chars always have 2 replacement options, so we get that the least degree is: <span class="math-container">$$2+2+\binom{n-2}{n-3} \cdotp 1=4+n-2=n+2\geq \frac{n}{2}$$</span></p> <p>Thus, we get that the graph always has a Hamiltonian cycle.</p> <hr /> <p>I don't get why I didn't find a pattern in <span class="math-container">$A$</span>, and how <span class="math-container">$B$</span> can be proven by induction. In addition, is my answer correct?</p>
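The count of 7 neighbors for $abad$ can be double-checked by brute force (a small sketch of mine):

```python
SIGMA = "abcd"

def valid(w):
    # No two adjacent equal characters.
    return all(w[i] != w[i + 1] for i in range(len(w) - 1))

w = "abad"
neighbors = {w[:i] + ch + w[i + 1:]
             for i in range(len(w)) for ch in SIGMA
             if ch != w[i] and valid(w[:i] + ch + w[i + 1:])}
print(len(neighbors))  # 7
```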
Mike Desgrottes
464,046
<p>Claim: For <span class="math-container">$n &gt; 3$</span>, the graph <span class="math-container">$G = (V_{n},E_{n})$</span> has no Euler cycle.</p> <p>When <span class="math-container">$n$</span> is even, we consider the string <span class="math-container">$w = a_{1}a_{2}...a_{n}$</span> with <span class="math-container">$a_{2i - 1} = a$</span> for <span class="math-container">$i = 1, 2,...,k$</span>, <span class="math-container">$a_{2} = b$</span>, and <span class="math-container">$a_{2i} = c$</span> for <span class="math-container">$i = 2,...,k$</span>.</p> <p>Then we see that <span class="math-container">$w$</span> has odd degree. There is only one word that differs from w at <span class="math-container">$a_{3}$</span>. This is because <span class="math-container">$a_{2} = b$</span> and <span class="math-container">$a_{4} = c$</span>. There are two words that differ from w at each of <span class="math-container">$a_{1},a_{n}$</span>. There are two words that differ from w at <span class="math-container">$a_{i}$</span> where <span class="math-container">$i \neq 3$</span>. This is because <span class="math-container">$a_{i - 1} = a_{i + 1}$</span> when <span class="math-container">$i \neq 3$</span>. Hence, there are <span class="math-container">$2(2k-1) + 1$</span> neighbors of w. The graph has no Euler cycle.</p> <p>When <span class="math-container">$n$</span> is odd, consider the string <span class="math-container">$w = b_{1}b_{2}...b_{n}$</span> with <span class="math-container">$b_{2i - 1} = a$</span> for <span class="math-container">$i = 1,2,...,k + 1$</span>, <span class="math-container">$b_{2} = b$</span>, and <span class="math-container">$b_{2i} = c$</span> for <span class="math-container">$i = 2,3,...,k$</span>. By the same argument as above, w has odd degree. 
The graph contains no Euler cycle.</p> <p>For <span class="math-container">$n = 3$</span>, the word <span class="math-container">$bac$</span> has degree 5.</p> <p>Problem B: The graph as defined has <span class="math-container">$4\cdot3^{n - 1}$</span> vertices for all <span class="math-container">$n \in \mathbb{N}$</span>. We will show that for all <span class="math-container">$n &gt; 1$</span>, there exists a word with degree less than <span class="math-container">$2\cdot3^{n - 1}$</span>.</p> <p>For <span class="math-container">$n = 2k$</span> even, let <span class="math-container">$w = a_{1}...a_{2(k - 1)}$</span> be a word such that <span class="math-container">$a_{i} \neq a_{i + 2}$</span> for <span class="math-container">$i = 1,..., 2(k - 2)$</span> with <span class="math-container">$a_{i} \neq a$</span> for any <span class="math-container">$i$</span>. Then the word <span class="math-container">$awa$</span> has <span class="math-container">$4 + 2(k - 1)$</span> neighbors.</p> <p>For <span class="math-container">$n = 2k + 1$</span>, let <span class="math-container">$w = a_{1}...a_{2k - 1}$</span> be a word such that <span class="math-container">$a_{i} \neq a_{i + 2}$</span> for <span class="math-container">$i = 1,...,2k - 3$</span> with <span class="math-container">$a_{i} \neq a$</span> for any <span class="math-container">$i$</span>. Then the word <span class="math-container">$awa$</span> has <span class="math-container">$4 + 2k - 1$</span> neighbors.</p> <p>In both instances, for <span class="math-container">$a_{i}$</span> a character in <span class="math-container">$w$</span>, it can only be replaced by one other character because <span class="math-container">$a_{i - 1} \neq a_{i + 1}$</span>.</p> <p>Since <span class="math-container">$4 + 2(k - 1) &lt; 2\cdot3^{2k - 1}$</span> for <span class="math-container">$k \geq 1$</span> and <span class="math-container">$4 + 2k - 1 &lt; 2\cdot3^{2k}$</span> for <span class="math-container">$k \geq 1$</span>, the claim follows.</p>
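The degree and vertex counts claimed above can be sanity-checked by brute force for small $n$; here is a sketch in Python (the helper names are mine, purely illustrative):

```python
from itertools import product

def words(n):
    """All length-n words over {a,b,c,d} with no two equal adjacent characters."""
    return [w for w in product("abcd", repeat=n)
            if all(w[i] != w[i + 1] for i in range(n - 1))]

def degree(w):
    """Number of valid words differing from w in exactly one position."""
    n, deg = len(w), 0
    for i in range(n):
        for ch in "abcd":
            if ch == w[i]:
                continue
            if i > 0 and ch == w[i - 1]:
                continue
            if i < n - 1 and ch == w[i + 1]:
                continue
            deg += 1
    return deg

for n in range(1, 7):
    assert len(words(n)) == 4 * 3 ** (n - 1)      # vertex count used in Problem B

assert degree(tuple("bac")) == 5                  # the n = 3 example above
for n in range(4, 7):
    assert any(degree(w) % 2 == 1 for w in words(n))  # an odd-degree vertex exists
print("degree checks passed")
```

Running it confirms that for each small $n &gt; 3$ some vertex has odd degree, which already rules out an Euler cycle.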
1,977,736
<p>I have these notations in an exercise and I can't understand them, the exercise is in French and I tried to translate it to English.</p> <p><strong>("e" is the neutral element, "*" is a law of composition)</strong> (?)</p> <p>1) Let (G,*) a group such as: </p> <p>∀x ∈ G, x²= e</p> <p><strong>-Is x² &lt;=> x*x ?</strong> </p> <p>2) Let (E,*) a group such as: </p> <p>∀x ∈ E, x*² = e</p> <p><strong>-What does x*² mean?</strong></p> <p>Thank you.</p> <p>Original material(Ex 1 &amp; 2): <a href="http://mp.cpgedupuydelome.fr/pdf/Structures%20alg%C3%A9briques%20-%20Groupes.pdf" rel="nofollow">http://mp.cpgedupuydelome.fr/pdf/Structures%20alg%C3%A9briques%20-%20Groupes.pdf</a></p>
ArtW
191,773
<p>Here is a sketch of a possible proof.</p> <p>Suppose $f$ is a Möbius transformation taking $\mathbb{R}_{\infty}$ to $\mathbb{R}_{\infty}$. Then I claim the coefficients $a,b,c,d$ of $f$ are all real. Suppose $a_1,a_2,a_3\in\mathbb{R}_{\infty}$ are three distinct points that are mapped to $b_1,b_2,b_3\in\mathbb{R}_{\infty}$. Then by invariance of cross-ratio we have for all $z\in H$ $$\frac{z - a_1}{a_3 - a_1}:\frac{z - a_2}{a_3 - a_2}=\frac{f(z) - b_1}{b_3 - b_1}:\frac{f(z) - b_2}{b_3 - b_2}$$ with the appropriate conventions on $\infty$. It is easy to see this implies that $f$ can be written in the desired form.</p> <p>Next, we have $$\Im(f(z))=\frac{ad-bc}{|cz+d|^2}\cdot\Im(z).$$ This implies $ad-bc&gt;0$, which is what we wanted.</p>
1,977,736
<p>I have these notations in an exercise and I can't understand them, the exercise is in French and I tried to translate it to English.</p> <p><strong>("e" is the neutral element, "*" is a law of composition)</strong> (?)</p> <p>1) Let (G,*) a group such as: </p> <p>∀x ∈ G, x²= e</p> <p><strong>-Is x² &lt;=> x*x ?</strong> </p> <p>2) Let (E,*) a group such as: </p> <p>∀x ∈ E, x*² = e</p> <p><strong>-What does x*² mean?</strong></p> <p>Thank you.</p> <p>Original material(Ex 1 &amp; 2): <a href="http://mp.cpgedupuydelome.fr/pdf/Structures%20alg%C3%A9briques%20-%20Groupes.pdf" rel="nofollow">http://mp.cpgedupuydelome.fr/pdf/Structures%20alg%C3%A9briques%20-%20Groupes.pdf</a></p>
Nitin Uniyal
246,221
<p>$w=\frac{az+b}{cz+d}$ gives $z=\frac{b-wd}{cw-a}$. The upper half plane is $y\geq 0$, i.e. $z-\overline z\geq 0$ after dividing by $2i$ (since $z-\overline z=2iy$; the inequalities below are understood the same way)</p> <p>$\implies \frac{b-wd}{cw-a}-\frac{b-\overline wd}{c\overline w-a}\geq 0$</p> <p>$\implies (ad-bc)(w-\overline w)\geq 0$ (on simplifying; the denominator $|cw-a|^2$ is positive)</p> <p>You need $ad-bc=1$ so that $w-\overline w\geq 0$, i.e. the image is the upper half $w$-plane.</p>
1,930,933
<blockquote> <p>Does there exist an $n \in \mathbb{N}$ greater than $1$ such that $\sqrt[n]{n!}$ is an integer?</p> </blockquote> <p>The expression seems to be increasing, so I was wondering if it is ever an integer. How could we prove that or what is the smallest value where it is an integer?</p>
bof
111,012
<p>If $n\gt1$ then $\sqrt[n]{n!}$ is not an integer (and hence it is an irrational number, by the rational root theorem). A <a href="https://math.stackexchange.com/questions/1930933/is-sqrtnn-ever-an-integer/1930938#1930938">proof</a> using <a href="https://en.wikipedia.org/wiki/Bertrand%27s_postulate" rel="nofollow noreferrer">Bertrand's postulate</a> has been posted. The proof of Bertrand's postulate is somewhat involved. Here is a proof without using Bertrand's postulate.</p> <p>For a prime number $p,$ the $p$-adic order of a natural number $m,$ denoted by $\nu_p(m),$ is the highest exponent $\nu$ such that $p^\nu$ divides $m;$ the number $m$ is a perfect $k^\text{th}$ power if and only if $\nu_p(m)$ is divisible by $k$ for every prime $p.$ We can show that $n!$ is not a perfect $n^\text{th}$ power (for $n\gt1$) by showing that $\nu_2(n!)$ is not divisible by $n;$ in fact, $0\lt\nu_2(n!)\lt n.$ The lower bound is obvious. For the upper bound, let $m=\lfloor\log_2(n)\rfloor$ and use <a href="https://en.wikipedia.org/wiki/Legendre%27s_formula" rel="nofollow noreferrer">Legendre's formula</a>: $$\nu_2(n!)=\sum_{k=1}^\infty\left\lfloor\frac n{2^k}\right\rfloor=\sum_{k=1}^m\left\lfloor\frac n{2^k}\right\rfloor\le\sum_{k=1}^m\frac n{2^k}\lt\sum_{k=1}^\infty\frac n{2^k}=n.$$</p> <p>A much more general (and difficult) result, the Erdős-<a href="https://en.wikipedia.org/wiki/John_Selfridge" rel="nofollow noreferrer">Selfridge</a> theorem, says that the product of two or more consecutive positive integers is never a perfect $k^\text{th}$ power for any $k\gt1.$</p>
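Both Legendre's formula and the key inequality $0\lt\nu_2(n!)\lt n$ are easy to check numerically for small $n$; a quick Python sketch (the helper names are illustrative):

```python
from math import factorial

def nu2(m):
    """2-adic order: the largest e with 2**e dividing m."""
    e = 0
    while m % 2 == 0:
        m //= 2
        e += 1
    return e

def legendre_nu2(n):
    """Legendre's formula: nu_2(n!) = sum over k >= 1 of floor(n / 2**k)."""
    total, p = 0, 2
    while p <= n:
        total += n // p
        p *= 2
    return total

def iroot(m, n):
    """Integer n-th root: the largest r with r**n <= m."""
    r = int(round(m ** (1.0 / n)))
    while r ** n > m:
        r -= 1
    while (r + 1) ** n <= m:
        r += 1
    return r

for n in range(2, 40):
    v = legendre_nu2(n)
    assert v == nu2(factorial(n))    # Legendre's formula agrees with direct count
    assert 0 < v < n                 # the inequality used in the proof
    r = iroot(factorial(n), n)
    assert r ** n != factorial(n)    # so n! is not a perfect n-th power
print("verified for n = 2..39")
```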
121,784
<p>I hope this is not too trivial for this forum. I was wondering if someone has come across this polytope.</p> <p>You start with the <a href="http://mathworld.wolfram.com/RhombicDodecahedron.html" rel="nofollow noreferrer">rhombic dodecahedron</a>, subdivide it into four parallellepipeds, <a href="https://i.stack.imgur.com/639Xy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/639Xy.jpg" alt="enter image description here"></a></p> <p>and then fill the space between the four parallellepipeds with a tetrahedron, six parallellepipeds and four prisms (hopefully I counted correctly), so as to obtain a convex polytope. </p> <p>Does this have a name? could someone provide a link to a picture?</p>
André Henriques
5,690
<p>It's the <a href="http://en.wikipedia.org/wiki/Minkowski_addition" rel="noreferrer">Minkowski sum</a> of the rhombic dodecahedron with a regular tetrahedron. (The rhombic dodecahedron is it self the Minkowski sum of four segments).</p>
1,030,050
<p>Assignment:</p> <blockquote> <p>Let $f$ be Lebesgue - measurable and $a,b \in \mathbb{R}$ with the property: $$\frac{1}{\lambda(M)} \cdot \int_Mf\ d\lambda \in [a,b]$$ for all Lebesgue - measurable sets $M \subset \mathbb{R}^n$ with $0 &lt; \lambda(M) &lt; \infty$.</p> <p>Show that: $f(x) \in [a,b]$ almost everywhere.</p> </blockquote> <p>So I need to show that the set $N$ defined as $N:= \{x\in\mathbb{R}^n; \ f(x) \notin [a,b]\}$ has a measure of zero. However, I cannot really see how to go from there and how the property given could help me.</p> <p>I'd appreciate any help.</p>
Chazz
92,822
<p>Since $[a,b]^{c}$ is open we can write it as a countable union of open intervals, so it is enough to prove that $\lambda(f^{-1}(I))=0$ for every ball $I:=B(x,r)\subseteq [a,b]^{c}$. Write $E:=f^{-1}(I)$.</p> <p>If $\lambda(E)&gt;0$ (replacing $E$ by a subset of finite positive measure if necessary), then</p> <p>$$\left|\frac{1}{\lambda(E)}\int_{E}f d\lambda-x\right|\leq\frac{1}{\lambda(E)}\int_{E}|f-x| d\lambda&lt;r,$$</p> <p>where the last inequality is strict because $|f-x|&lt;r$ on $E$. This is a contradiction: $\frac{1}{\lambda(E)}\int_{E}f d\lambda \in [a,b]$ by hypothesis, yet $B(x,r)\subseteq[a,b]^{c}$ means every point of $[a,b]$ is at distance at least $r$ from $x$.</p>
220,618
<p>The cyclic group of $\mathbb{C}- \{ 0\}$ of complex numbers under multiplication generated by $(1+i)/\sqrt{2}$</p> <p>I just wrote that this is $\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}i$ making a polar angle of $\pi/4$. I am not sure what to do next. My book say there are 8 elements. </p> <p>Working backwards, maybe the angle divides the the four quadrant into 8 areas? I have no idea</p>
wj32
35,914
<p>Firstly, $\mathbb{C}-\{0\}$ is not a cyclic group generated by that complex number (was that a typo?). The element $1/\sqrt{2}(1+i)=e^{\pi i/4}$ generates a cyclic <strong>subgroup</strong> of $\mathbb{C}-\{0\}$, so you need to find the smallest positive $k$ for which $(e^{\pi i/4})^k=e^{k\pi i/4}=1$ (since $1$ is the identity in your group). Why must $k$ be 8?</p>
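A quick numerical check of the order of this element (a float-based sketch only; the tolerances are arbitrary):

```python
import cmath
import math

z = cmath.exp(1j * math.pi / 4)             # (1 + i)/sqrt(2) in polar form
assert abs(z - (1 + 1j) / math.sqrt(2)) < 1e-12

# the smallest k >= 1 with z**k = 1 (the order of the element) is 8
k = 1
while abs(z ** k - 1) > 1e-9:
    k += 1
assert k == 8

# the 8 distinct subgroup elements: e^{i j pi/4} for j = 0..7, the 8th roots of unity
elements = {(round((z ** j).real, 6), round((z ** j).imag, 6)) for j in range(8)}
assert len(elements) == 8
print("order is", k)
```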
720,794
<p>Given that the letters $A,B,C,D,E,F,G,H,J$ represents a distinct number from $1$ to $9$ each and </p> <p>$$\frac{A}{J}\left((B+C)^{D-E} - F^{GH}\right) = 10$$ $$C = B + 1$$ $$H = G + 3$$</p> <p>find (edit: without a calculator) $A,B,C,D,E,F,G,H,J$</p> <p>I could only deduce that $D\ge E$, from the first one. Eliminating C and H doesn't seem to help much either.</p>
Guy
127,574
<p>Too long for a comment.</p> <p>Testing the following Python code (brute forcing on finite sets is acceptable, I think):</p> <pre><code>def funct(t):
    a, b, c, d, e, f, g, h, j = t
    return (a/j) * (pow(b+c, d-e) - pow(f, g*h))

import itertools
for i in itertools.permutations(range(1, 10)):
    if funct(i) == 10:
        print(i)
</code></pre> <p>Total of $64$ possible permutations are returned. I am not sure what the question is asking for. Surely we cannot enumerate <em>all</em> of them. Although it is interesting that it is exactly $2^6$. Sample(truncated) output:</p> <p><img src="https://i.stack.imgur.com/x2FI3.png" alt="enter image description here"></p> <p><em>EDIT</em>: I just realized that I neglected the other two restrictions. Implementing them to narrow down the search, this is the only possible permutation.</p> <p>$$(2, 5, 6, 9, 7, 3, 1, 4, 8)$$</p>
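For the narrowed search mentioned in the edit, one way to add the two restrictions is with exact integer arithmetic instead of float division (this is my own sketch, not the original code; since the bracket must be positive, we may also require $D &gt; E$):

```python
from itertools import permutations

# exact integer form of the equation: A*((B+C)**(D-E) - F**(G*H)) == 10*J
solutions = [
    (a, b, c, d, e, f, g, h, j)
    for a, b, c, d, e, f, g, h, j in permutations(range(1, 10))
    if c == b + 1                # C = B + 1
    and h == g + 3               # H = G + 3
    and d > e                    # the bracket must be positive, so D > E
    and a * ((b + c) ** (d - e) - f ** (g * h)) == 10 * j
]
print(solutions)  # [(2, 5, 6, 9, 7, 3, 1, 4, 8)]
```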
720,794
<p>Given that the letters $A,B,C,D,E,F,G,H,J$ represents a distinct number from $1$ to $9$ each and </p> <p>$$\frac{A}{J}\left((B+C)^{D-E} - F^{GH}\right) = 10$$ $$C = B + 1$$ $$H = G + 3$$</p> <p>find (edit: without a calculator) $A,B,C,D,E,F,G,H,J$</p> <p>I could only deduce that $D\ge E$, from the first one. Eliminating C and H doesn't seem to help much either.</p>
ml0105
135,298
<p>We have that $J|A$, and that $\frac{A}{J}|10$. So let's first consider the possible values among the divisors of $10$: the ratio cannot be $1$ (the digits are distinct) or $10$ (since $A \leq 9$), leaving $2$ and $5$. Clearly, $\frac{A}{J} = 5$ is the less likely option, based on the possible values. So $\frac{A}{J} = 2$. How many ways can we get this? Consider pairs $(A, J)$. We have $(2, 1)$, $(4, 2)$, $(8, 4)$, $(6, 3)$. </p> <p>Since $\frac{A}{J} = 2$, we now have $(2B + 1)^{D - E} - F^{G^{2} + 3G} = 5$. I went ahead and substituted based on the constraints given. It will be most helpful to look at how the various digits behave under modular exponentiation, using modulo 10. So for example, when we exponentiate $3$, we get the one's place as $3^{1} \to 3$, $3^{2} \to 9$, $3^{3} \to 7$, $3^{4} \to 1$, $3^{5} \to 3$. The minimum $x$ such that $a^{x} \equiv 1 \pmod{10}$ is called the order of $a$ modulo 10. Once you pick the elements, it comes down to making sure the exponents are in line. Noting the order of an element will help you here.</p> <p>So how can you make $5$ in the ones place? You have $6 - 1$, $8 - 3$, and $9 - 4$.</p> <p>I think this should be a sufficient hint to get you going in the right direction. </p>
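The digit cycles in the hint can be tabulated mechanically; a small sketch (the function name is mine):

```python
import math

def order_mod10(a):
    """Smallest k >= 1 with a**k congruent to 1 (mod 10); None if a is not a unit mod 10."""
    if math.gcd(a, 10) != 1:
        return None
    k, x = 1, a % 10
    while x != 1:
        x = (x * a) % 10
        k += 1
    return k

# the cycle of last digits of powers of 3 quoted above: 3, 9, 7, 1, 3, ...
assert [3 ** e % 10 for e in range(1, 6)] == [3, 9, 7, 1, 3]
assert order_mod10(3) == 4
assert order_mod10(7) == 4
assert order_mod10(9) == 2

# single-digit differences equal to 5 include the three pairs named in the hint
diffs = [(p, q) for p in range(1, 10) for q in range(1, 10) if p - q == 5]
assert (6, 1) in diffs and (8, 3) in diffs and (9, 4) in diffs
print("digit-cycle checks passed")
```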
720,794
<p>Given that the letters $A,B,C,D,E,F,G,H,J$ represents a distinct number from $1$ to $9$ each and </p> <p>$$\frac{A}{J}\left((B+C)^{D-E} - F^{GH}\right) = 10$$ $$C = B + 1$$ $$H = G + 3$$</p> <p>find (edit: without a calculator) $A,B,C,D,E,F,G,H,J$</p> <p>I could only deduce that $D\ge E$, from the first one. Eliminating C and H doesn't seem to help much either.</p>
Thanos Darkadakis
105,049
<p>This is how I would find the solution with no computer aid:</p> <p>You can write the equation as: $$(B+C)^{D-E} - F^{GH} = 10\frac{J}{A} $$ You can see that the RHS can take only fairly small values ($\le90$). So, I would start with the assumption that it is unlikely to have two very large powers whose difference is $\leq$90.</p> <p>Having this in mind, take a look at the second power ($F^{GH}$) and specifically at the exponent, which equals $GH=G(G+3)$. Its value could be: 4 or 10 or 18 etc. If we had a base bigger than 2, then this makes even $3^{10}$ a very large number, which is unlikely to solve our problem based on our assumption about large numbers. So exponents $\ge10$ should be avoided, <strong>unless</strong> the base equals 1. (Note that $2^{10}$ is not considered a big number, but we cannot use the number 2 twice for letters F and G, i.e. $2^{2*5}$.)</p> <p>This leaves us with 2 possible solutions:</p> <ol> <li>F=1</li> <li>G=1, H=4</li> </ol> <p>In any case, number $1$ has already been used. This means that:</p> <p>a) Regarding the other power: $(B+C)^{D-E}$, the base can be 5,7,9,11,13,15,17.</p> <p>b) The right hand side can take values up to 45.</p> <p>Let's examine the first case (F=1). This means that the first power can take a value up to 46. In how many ways can this be achieved? (Remember that number 1 has already been used and that the base can take a value of 5,7,9,....) At first glance, we cannot afford an exponent greater than 2. So let our exponent be 2. Out of the possible bases, only 5 gives us a result less than 46. But we cannot use it, because if we use 5=2+3 as the base then the RHS cannot be greater than 18 (=10/5*9). So <strong>if F=1 then D-E=1</strong>.</p> <p>Let's examine the second case (G=1, H=4). Numbers B and C must be consecutive. This leaves us with the following possibilities for B+C: 5,11,13,15,17. One of these numbers raised to the power X should be around $Y^4$. 
And $Y^4 \leq 9^4=6561$:</p> <p>$5^{5,6,7,8,9}\leq 6561$, leaves us with 5^5=3125</p> <p>$11^{2,3,7,8,9}\leq 6561$, leaves us with 11^2=121 and 11^3=1331</p> <p>$13^{2,3,5,8,9}\leq 6561$, leaves us with 13^2=169 and 13^3=2197</p> <p>$15^{2,3,5,6,9}\leq 6561$, leaves us with 15^2=225 and 15^3=3375</p> <p>$17^{2,3,5,6,7}\leq 6561$, leaves us with 17^2=289 and 17^3=4913</p> <p>and the possible values for $Y^4$ are 16, 81, 256, 625, 1296, 2401, 4096, 6561</p> <p>The only pairs with difference less than 45 are: (121-81), (289-256) and (1331-1296). This means 40, 33 and 35 respectively. Out of these three, only two can be written in a form compatible with RHS: 121-81=40=10/2*8 and 1331-1296=35=10/2*7.</p> <p>But the second one has used the number 6 twice. So let's examine the first one:</p> <p>$121-81=10\frac82$ or $(5+6)^2-3^{1\times 4}=10\frac82$ where 2=D-E.</p> <p>Happily we observe that the two numbers that we haven't already used (7,9) have a difference of 2. So, $$\frac{2}{8}\left((5+6)^{9-7} - 3^{1\times 4}\right) = 10$$</p> <p><strong>Note 1:</strong> I clarify again that this way just finds <strong>a</strong> solution. It could not be used if the problem-maker asked for a proof of uniqueness of the solution. Moreover, it can surely be computed by hand, as the most "complicated" power was $9^4$.</p> <p><strong>Note 2:</strong> This solution is the result of our assumption. We saw that even with small powers, we found only two pairs with small difference. This makes us realize that our initial assumption was legit.</p>
1,946,637
<p>I have come across this integral and have been unable to solve it so far.</p> <p>$$I=\int_{-\infty}^{0.29881}e^{-0.5x^2}\,\mathrm dx$$</p>
E.H.E
187,799
<p>$$\frac{2^n(z^2)^n}{3^n+4^n}=\frac{2^n(z^2)^n}{4^n\left(1+(\frac{3}{4})^n\right)}=\frac{z^{2n}}{2^n\left(1+(\frac{3}{4})^n\right)}$$ now take the $n$-th root $$L=\lim_{n\rightarrow \infty }\left|\frac{z^{2n}}{2^n\left(1+(\frac{3}{4})^n\right)}\right|^{\frac{1}{n}}=\lim_{n\rightarrow \infty }\frac{|z|^2}{2(1+(\frac{3}{4})^n)^{\frac{1}{n}}}=\frac{|z|^2}{2}&lt;1$$ so the series converges for $|z|^2&lt;2$, i.e. $|z|&lt;\sqrt{2}$, and the radius of convergence is $\sqrt{2}$.</p>
1,003,020
<p>Without recourse to Dirichlet's theorem, of course. We're going to go over the problems in class but I'd prefer to know the answer today.</p> <p>Let $S = \{3n+2 \in \mathbb P: n \in \mathbb N_{\ge 1}\}$</p> <p>edit:</p> <p>The original question is "the set of all primes of the form $3n + 2$, but I was only considering odd primes because of a reason I don't remember anymore.</p> <p>How can I show $S$ is infinite? I start by assuming it's finite. </p> <p>I've tried: </p> <p>Assuming that the product $(3n_1+2)(3n_2+2)...(3n_m+2)$ is of the form "something" so I can show the product contains a prime factor not in the finite list of primes, which would be a contradiction, but came up with no useful "something".</p> <p>I tried showing that the product would not be square-free, but couldn't show that.</p> <p>I tried showing the product or sum was both even and odd, but couldn't show that.</p> <p>What else should I try? Or was one of the above methods correct?</p>
vadim123
73,324
<p>The general strategy is to find a (large) number $n$ that is relatively prime to each of the existing list of such primes, and is also congruent to 2 modulo 3. The prime factorization of $n$ cannot consist only of primes congruent to $1$ modulo $3$, since the product of any number of such is still $1$ modulo $3$. Hence there must be some prime factor of $n$ that is congruent to $2$ modulo $3$, which must be not on our list by the construction of $n$.</p> <p>Now, how to construct such an $n$? Suppose the finite list is $\{p_1, p_2, \ldots, p_k\}$. If $k$ is even, then take $n=p_1p_2\cdots p_k+1$. If $k$ is odd, then take $n=(p_1p_2\cdots p_k)p_k+1$.</p>
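The construction in this answer can be run as a loop, repeatedly extracting a new prime congruent to $2$ modulo $3$; a Python sketch (the function names are illustrative, and trial division suffices at this scale):

```python
def prime_factors(m):
    """Trial-division prime factorization (fine for the small numbers here)."""
    factors, d = [], 2
    while d * d <= m:
        while m % d == 0:
            factors.append(d)
            m //= d
        d += 1
    if m > 1:
        factors.append(m)
    return factors

def new_prime_2_mod_3(primes):
    """Given primes all congruent to 2 (mod 3), build n as in the answer and
    return a prime factor of n congruent to 2 (mod 3) not already listed."""
    prod = 1
    for p in primes:
        prod *= p
    n = prod + 1 if len(primes) % 2 == 0 else prod * primes[-1] + 1
    assert n % 3 == 2                       # n is 2 mod 3 by construction
    q = next(f for f in prime_factors(n) if f % 3 == 2)
    assert q not in primes                  # n is coprime to every listed prime
    return q

primes = [2]
for _ in range(4):
    primes.append(new_prime_2_mod_3(primes))
print(sorted(primes))
```

Each pass necessarily produces a prime not seen before, matching the contradiction in the proof.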
465,945
<p>I just need some verification on finding the basis for column spaces and row spaces.</p> <p>If I'm given a matrix A and asked to find a basis for the row space, is the following method correct?</p> <p>-Reduce to row echelon form. The rows with leading 1's will be the basis vectors for the row space.</p> <p>When looking for the basis of the column space (given some matrix A), is the following method correct?</p> <p>-Reduce to row echelon form. The columns with leading 1's <strong>corresponding</strong> to the original matrix A will be the basis vectors for the column space. </p> <p>When looking for bases of row space/column space, there's no need in taking a transpose of the original matrix, right? I just reduce to row echelon and use the reduced matrix to get my basis vectors for the row space, and use the original matrix to correspond my reduced form columns with leading 1's to get the basis for my column space. </p>
Chris Tang
695,705
<p>Your procedure is correct, and I'd like to summarize it:</p> <h2>Finding Row Space</h2> <ul> <li>Strategy: do Gaussian elimination on the matrix M; the non-zero rows of the reduced matrix M' form a basis of the row space. </li> <li>Reason: 1) elementary row operations don't change the row space of M, since they replace row vectors by linear combinations of row vectors; 2) the non-zero rows of a <strong>reduced</strong> matrix are linearly independent, which can be proven by showing <span class="math-container">$\sum_i c_i\vec{r_i}=\vec{0}\Rightarrow c_i=0,\forall i$</span>, so they form a basis of the row space. </li> </ul> <h2>Finding Column Space</h2> <ul> <li>Strategy: 1) You could do it by taking the transpose and applying the row-space method. 2) Or, do Gaussian elimination and take the columns of <span class="math-container">$M$</span> corresponding to the columns with leading entries in <span class="math-container">$M'$</span>.</li> <li>Reason for 2): <span class="math-container">$M'$</span> exhibits each column without a leading entry as a linear combination of the pivot columns, so we only need to take the columns of <span class="math-container">$M$</span> in the pivot positions. </li> </ul>
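Both strategies can be exercised with a small exact-arithmetic row reducer; a sketch (the helper name and the example matrix are mine):

```python
from fractions import Fraction

def rref_pivots(rows):
    """Row-reduce over exact rationals; return the reduced rows and the
    pivot (leading-1) column indices."""
    M = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]            # scale to a leading 1
        for i in range(len(M)):
            if i != r and M[i][c] != 0:               # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == len(M):
            break
    return M, pivots

A = [[1, 2, 0, 3],
     [2, 4, 1, 7],
     [1, 2, 1, 4]]

R, piv = rref_pivots(A)
row_basis = R[:len(piv)]                              # non-zero rows of the RREF
col_basis = [[row[c] for row in A] for c in piv]      # pivot columns of the ORIGINAL A
print(piv)          # [0, 2]
print(row_basis)    # basis of the row space
print(col_basis)    # basis of the column space
```

Note that `col_basis` pulls columns from the original matrix, exactly as described above.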
2,000,556
<p>Suppose $N&gt;m$, denote the number of ways to be $W(N,m)$</p> <h3>First method</h3> <p>Take $m$ balls out of $N$, put one ball at each bucket. Then every ball of the left the $N-m$ balls can be freely put into $m$ bucket. Thus we have: $W(N,m)=m^{N-m}$.</p> <h3>Second method</h3> <p>When we are going to put $N$-th ball, we are facing two possibilities:</p> <ol> <li><p>the previous $N-1$ balls have already satisfies the condition we required, i.e. each of $m$ buckets has at least one ball. Therefore, we can put the $N$-th ball into any bucket.</p></li> <li><p>the previous $N-1$ balls have made $m-1$ buckets satisfies the condition, we are left with one empty bucket, the $N$-th ball must be put into that empty bucket. However, that empty bucket may be any one of the $m$ buckets.</p></li> </ol> <p>Therefore, we have the recursion formula:</p> <p>$$ W(N,m) = m W(N-1,m) + m W(N-1,m-1) $$</p> <p><strong>It is obvious that the two methods are not identical, which one has flaws?</strong> I would like to know which part of the reasoning is wrong and I would also want to hear about the case when the balls are distinct.</p>
Brian M. Scott
12,042
<p><strong>hypergeometric</strong> has given a good analysis of the problem for indistinguishable balls.</p> <p>When the balls are distinct, we can number them $1$ through $N$. If we number the buckets $1$ through $m$, each assignment of balls to buckets with at least one ball in each bucket can be thought of as a function from the set $[N]=\{1,\ldots,N\}$ <strong>onto</strong> the set $[m]=\{1,\ldots,m\}$. That is, we’re counting the surjections from $[N]$ to $[m]$.</p> <p>The <a href="https://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling numbers of the second kind</a> are the key ingredient here: the Stirling number ${N\brace m}$ is the number of partitions of $[N]$ into $m$ non-empty parts. There are $m!$ ways to assign each of the parts to one of the $m$ numbers in $[m]$, so there are $m!{N\brace m}$ surjections from $[N]$ to $[m]$ and hence the same number of allowable assignments of the labelled balls to the labelled buckets. It turns out that</p> <p>$$m!{N\brace m}=\sum_{k=0}^m(-1)^{m-k}\binom{m}kk^N\;,$$</p> <p>a result that can also be obtained directly using the <a href="https://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle#Statement" rel="nofollow noreferrer">inclusion-exclusion principle</a>. There is no really nice closed form for this number, though the Stirling numbers themselves do satisfy a fairly nice recurrence:</p> <p>$${{n+1}\brace k}=k{n\brace k}+{n\brace{k-1}}\;,$$</p> <p>with ${0\brace 0}=1$ and ${n\brace 0}={0\brace n}=0$ for $n&gt;0$.</p>
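The identity <span class="math-container">$m!{N\brace m}=\sum_{k=0}^m(-1)^{m-k}\binom{m}kk^N$</span> and the Stirling recurrence are easy to cross-check numerically (the function names below are illustrative):

```python
from math import comb, factorial
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind via the recurrence in the answer."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def surjections(N, m):
    """Inclusion-exclusion count of surjections [N] -> [m]: labelled balls
    into labelled buckets with no bucket empty."""
    return sum((-1) ** (m - k) * comb(m, k) * k ** N for k in range(m + 1))

for N in range(1, 10):
    for m in range(1, N + 1):
        assert surjections(N, m) == factorial(m) * stirling2(N, m)

print(surjections(4, 2))  # 14, i.e. 2^4 - 2
```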
66,719
<p>I know some elementary proofs of this fact. I was wondering if there's some short slick proof of this fact using the structure of the $2$-adic integers? I'm looking for a proof of this fact that's easy to remember.</p>
Jyrki Lahtonen
11,619
<p>The shortest proof I can think of goes as follows. Hensel is not needed. A counting argument will suffice.</p> <p>We take as the starting point that the group of units $U_n$ of the ring $\mathbf{Z}/2^n\mathbf{Z}$ has the structure $U_n\simeq C_2\times C_{2^{n-2}}$. Therefore exactly one quarter of elements of $U_n$ are squares.</p> <p>Assume $n\ge3$. For an odd integer $m$ to be a quadratic residue modulo $2^n$ it is surely necessary for $m$ to be a quadratic residue modulo 8. We easily check (or know in advance) that for $m$ to be a quadratic residue modulo 8 it has to be congruent to 1 modulo 8. Thus exactly one quarter of the odd integers in the range $[0,2^n]$ are quadratic residues modulo 8. As this is a subset of the equinumerous set of squares of $U_n$, the two sets must coincide.</p> <p>Edit: Or was this exactly (one of) the elementary proof(s) you know? I think that this is easy to remember, though :-)</p>
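The counting argument can be verified directly for small $n$: the odd squares modulo $2^n$ are exactly the residues $\equiv 1 \pmod 8$, and there are $2^{n-3}$ of them, a quarter of the units. A sketch:

```python
def squares_mod(n):
    """The set of squares of odd residues modulo 2**n."""
    mod = 2 ** n
    return {(u * u) % mod for u in range(1, mod, 2)}

for n in range(3, 12):
    mod = 2 ** n
    sq = squares_mod(n)
    residues_1_mod_8 = {r for r in range(1, mod, 2) if r % 8 == 1}
    assert sq == residues_1_mod_8          # the two sets coincide
    assert len(sq) == mod // 8             # a quarter of the 2**(n-1) units
print("verified for n = 3..11")
```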
301,724
<p>There's an example in my textbook about cancellation error that I'm not totally getting. It says that with a $5$ digit decimal arithmetic, $100001$ cannot be represented.</p> <p>I think that's because when you try to represent it you get $1*10^5$, which is $100000$. However it goes on to say that when $100001$ is represented in this floating point system (when it's either chopped or rounded) it comes to $100000$.</p> <p>If what I said above is correct, does $100001$ go to $100000$ because of the fact that it can only be represented like $1*10^5$? </p> <p>If I'm completely off the mark, clarification would be great.</p>
Ross Millikan
1,827
<p>Yes, you only have five decimal digits available. $100001=1.00001*10^5$, but that mantissa has six digits. Clearly it is closer to go to $1.0000$ than to $1.0001$, so that is what we will do. So the numbers around here that can be represented are $99998, 99999, 100000, 100010, 100020,$ etc.</p>
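Rounding to five significant digits can be mimicked in Python (a sketch; the book's system may chop instead of round, and this helper name is mine):

```python
import math

def round_sig(x, digits=5):
    """Round x to the given number of significant decimal digits."""
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))      # decimal exponent of x
    scale = 10 ** (exp - digits + 1)          # weight of the last kept digit
    return round(x / scale) * scale

assert round_sig(100001) == 100000            # 1.00001e5 -> 1.0000e5
assert round_sig(99999) == 99999              # already fits in five digits
assert round_sig(100014) == 100010
assert round_sig(123456789) == 123460000
print("rounding checks passed")
```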
532,525
<p>Is $P( A \cup B \,|\, C)$ the same as $P(A | C) + P(B | C)$ ? Here $A$ and $B$ are mutually exclusive.</p>
Henno Brandsma
4,280
<p>One could use nets as well (if you know them; they're pretty popular in functional analysis as a generalisation of sequences that work in all topological spaces): Take any $z \in \operatorname{cl}(A) + \operatorname{cl}(B)$, so $z= x + y$ with $x \in \operatorname{cl}(A)$, $y \in \operatorname{cl}(B)$, so we have nets $a_i, b_i$, with a common index set $I$ wlog, such that all $a_i \in A$, all $b_i \in B$ and $a_i \rightarrow x$ and $b_i \rightarrow y$, so by continuity of addition (which is the crucial property we need) we get that the net $a_i+ b_i$ is in $A + B$ and converges to $x +y = z$, showing that $z \in \operatorname{cl}(A+B)$. </p>
651,731
<blockquote> <p>Let $\{f_n\}$be sequence of bounded real valued functions on $[0,1]$ converging at all points of this interval. Then If $\int^1_0 |f_n(t)-f(t)|dt\, \to 0$ as $n \to \infty$,does $\lim_{n \to \infty} \int^1_0 f_n(t) dt\,=\int^1_0 f(t)dt\,$</p> </blockquote> <p>I just know that if somehow we can show uniform convergence then we can say it would be true, but in other case what happens I don't know!</p>
TheNumber23
90,401
<p>This result is false. Let $f_{n}$ be zero for $x\leq\tfrac{1}{n+1}$ and for $x\geq \tfrac{1}{n}$. Then connect both points $(\tfrac{1}{n+1},0)$ and $(\tfrac{1}{n},0)$ linearly to $(\tfrac{1}{2n}+\tfrac{1}{2(n+1)},n(n+1))$. Notice what the integral of each $f_{n}$ is and see what it converges to. </p>
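With the tent reading of this construction (support $[\tfrac{1}{n+1},\tfrac{1}{n}]$, peak height $n(n+1)$ at the midpoint), the integral can be computed exactly as the area of a triangle; a sketch:

```python
from fractions import Fraction

def integral_fn(n):
    """Exact integral of the tent function: support [1/(n+1), 1/n], peak n(n+1)."""
    base = Fraction(1, n) - Fraction(1, n + 1)     # width of the support
    height = Fraction(n * (n + 1))                 # peak value
    return base * height / 2                       # area of the triangle

# every f_n integrates to 1/2 ...
assert all(integral_fn(n) == Fraction(1, 2) for n in range(1, 100))

# ... yet f_n -> 0 pointwise: for fixed x > 0 the support [1/(n+1), 1/n]
# eventually lies entirely to the left of x, so f_n(x) = 0 for large n
x = Fraction(3, 10)
assert all(Fraction(1, n) < x for n in range(4, 100))
print("tent-function checks passed")
```

So the integrals stay at $\tfrac12$ while the pointwise limit is $0$.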
3,065,331
<p>In <a href="https://rads.stackoverflow.com/amzn/click/0199208255" rel="nofollow noreferrer">Nonlinear Ordinary Differential Equations: An Introduction for Scientists and Engineers</a> one of the very first stated equations are, as in the title of the question,</p> <p><span class="math-container">$$ \ddot{x} = \frac{\mathrm{d}}{\mathrm{d}x} \left(\frac{1}{2} \dot{x}^2 \right). $$</span></p> <p>However, I'm having trouble seeing why this should be true. Could anyone clarify this? Thank you for your time in advance.</p>
J.G.
56,861
<p>By the chain rule, <span class="math-container">$$\frac{d}{dx}\left(\frac12\dot{x}^2\right)=\frac{1}{\dot{x}}\frac{d}{dt}\left(\frac12\dot{x}^2\right)=\frac{1}{\dot{x}}\times\dot{x}\ddot{x}=\ddot{x}.$$</span></p>
2,555,200
<p>Let $l$ be the smallest positive linear combination of $a,b\in \mathbb{Z}^+$ i.e.,$$l := \min\{ax+by &gt;0 : x,y\in\mathbb{Z}\}.$$ Now, according to @Brahadeesh's answer here, <a href="https://math.stackexchange.com/questions/2553324/proof-for-gcd-being-the-smallest-linear-combination-of-a-b-in-mathbb-z">Proof for $\gcd$ being the smallest linear combination of $a,b \in \mathbb {Z}$.</a>, we can show that $l\mid a$ and $l\mid b$ (simultaneously).</p> <p>But, consider $k&gt;l$ such that $k\in \{ax+by &gt;0 : x,y\in\mathbb{Z}\}$. Then, is it possible to show that $k\not\mid a $ and $k\not\mid b$ (simultaneously)? This indirectly gives us that there are no common divisors of $a$ and $b$ that are greater than $l$ which then proves that $l = \gcd(a,b)$.</p> <p>The direct proof of the fact that $l=\gcd(a,b)$ would be to show that $c\mid a $ and $c\mid b \; \Rightarrow \; c\mid l$, which is quite trivial from the definition of $l$. However, I don't want to make use of this fact and instead want to show that </p> <blockquote> <p>Claim: If $k&gt;l$ such that $k\in \{ax+by &gt;0 : x,y\in\mathbb{Z}\}$ then $k\not\mid a $ and $k\not\mid b$ (simultaneously).</p> </blockquote> <p>Thanks in advance.</p>
user7530
7,530
<p>I’m not sure that there’s much difference between what you want and the “direct” proof. I would argue as follows. Suppose for contradiction that $k\vert a$ and $k\vert b$. Then $k\vert l$, and by an identical argument $l\vert k$. Therefore $k=l \not&gt; l,$ a contradiction.</p>
1,423,728
<p>The definition of a limit in <a href="http://rads.stackoverflow.com/amzn/click/0321888545" rel="nofollow noreferrer">this book</a> stated like this </p> <p><a href="https://i.stack.imgur.com/tKjaa.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tKjaa.png" alt="enter image description here"></a></p> <p>1) why we require ƒ(x) be defined on an open interval about $x_0$ ?</p> <p>2) does the definition mean it is impossible to talk about limit of function of this sort?</p> <p>$f(x) = \begin{cases} undefined &amp; x \not\in \mathbb{Q}\\ 1 &amp; x \in \mathbb{Q} \end{cases}$</p> <p><strong>Edit</strong>: Since some similar answers to my first question is that we want to talk about two-sided limit, thus require ƒ(x) be defined on an open interval about $x_0$. However, these answers doesn't clear my intended question. Now if I restrict $x&gt;=1$, then can we talk about the limit of that function , especially for $\displaystyle \lim_{x \to 1^{+}}f(x)$?</p> <p>As for my second question, I found a more precise definition of limit in Courant's book Introduction_to_Calculus_and_Analysis stated like this <a href="https://i.stack.imgur.com/8J8Wh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8J8Wh.png" alt="enter image description here"></a> which doesn't require ƒ(x) to be defined on every point of the an open interval about $x_0$, so I think it is possible to talk about limit of that function.</p>
Subhasish Basak
266,148
<p>Suppose there is no open interval around $x_0$ on which $f(x)$ is defined for all $x$.</p> <p>This means that for every $\delta &gt;0$ there exists $x_1 \in (x_0-\delta,x_0+\delta)$ such that $f(x_1)$ is not defined. So we cannot assert $|f(x)-L|&lt;\epsilon$ for all $x\in(x_0-\delta,x_0+\delta)$, as nothing can be said about $|f(x_1)-L|$.</p> <p>So the second condition in the definition cannot be checked.</p> <p>That is why the requirement is in the definition.</p>
131,435
<p>Wikipedia is a widely used resource for mathematics. For example, there are hundreds of mathematics articles that average over 1000 page views per day. <a href="http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Mathematics/Popular_pages" rel="noreferrer">Here is a list of the 500 most popular math articles</a>. The number of regular Wikipedia readers is increasing, while the number of editors is decreasing (<a href="http://stats.wikimedia.org/EN/SummaryEN.htm" rel="noreferrer">graphs</a>), which is causing growing concern within the Wikipedia community. </p> <p><a href="http://en.wikipedia.org/wiki/Wikipedia_talk:WikiProject_Mathematics" rel="noreferrer">WikiProject Mathematics</a> is relatively active (compared to other WikiProjects, but not compared to MathOverflow!), and there is always the need for more experts who really understand the material. An editor continually faces the tension between (1) providing a lot of advanced material and (2) explaining things well, which generates many productive discussions about how mathematics articles should be written, and which topics should have their own article. </p> <p>Regardless of the long term concerns raised about whether Wikipedia is capable of being a resource for advanced mathematics (see <a href="https://mathoverflow.net/questions/19631/can-wikipedia-be-a-reliable-and-sustainable-resource-for-advanced-mathematics">this closed question</a>), the fact is, people are attempting to learn from Wikipedia's mathematics articles right now. So improvements made to articles today will benefit the readers of tomorrow. </p> <hr> <p>Wikipedia is a very satisfying venue for summarizing topics you know well, and explaining things to other people, due to its large readership. Based on the number of mathematicians at MathOverflow, who are willing to spend time (for free!) answering questions and clarifying subjects for other people, it seems like there is a lot of untapped volunteer potential here. 
So in the interests of exposing the possible obstacles to joining Wikipedia, I would like to know:</p> <blockquote> <p>Why don't mathematicians spend more time improving Wikipedia articles?</p> </blockquote> <p>Recent efforts intended to attract new participants and keep existing ones include the friendly atmosphere of the <a href="http://en.wikipedia.org/wiki/Wikipedia:Teahouse" rel="noreferrer">Teahouse</a>, as well as WikiProject Editor Retention. </p> <p>In case anyone is interested, my Wikipedia username is User:Mark L MacDonald (which is my real name). </p>
Michael Hardy
6,316
<p><s>Here's the math WikiProject's "Current activity" page, which lists each day's new articles, articles for which deletion is proposed, articles needing expert attention, articles needing references, and articles needing various other sorts of work: <a href="http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Mathematics/Current_activity" rel="noreferrer">http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Mathematics/Current_activity</a> </s> <b>PS added in August 2016:</b> The "current activity" page hasn't worked for more than a year. But there is this page that includes new articles, which often need work: <a href="https://en.wikipedia.org/wiki/User:Mathbot/Changes_to_mathlists" rel="noreferrer">https://en.wikipedia.org/wiki/User:Mathbot/Changes_to_mathlists</a></p> <p>Here's the page listing "requested articles"---mathematics articles that do not yet exist: <a href="http://en.wikipedia.org/wiki/Wikipedia:Requested_articles/Mathematics" rel="noreferrer">http://en.wikipedia.org/wiki/Wikipedia:Requested_articles/Mathematics</a></p> <p>(Inexperienced or non-logged-in users are no longer allowed to create NEW articles, but they can create drafts in the user space, which can be moved by experienced users into the article space.)</p> <p><b>DO NOT</b> begin a Wikipedia article by writing "Let $\lbrace T_n\rbrace_{n=1}^\infty$ be a sequence of bounded linear operators on a Banach space $B$." That fails to even hint to the lay reader that the article is not about theology, music, chemistry, banking, politics, etc.</p> <p>The title of the article should be a singular noun phrase except when there's some special reason to use the plural. One such reason is that the article is about a set of things (in something more like the layman's understanding of "set" than that of the mathematician); for example "Maxwell's equations".</p> <p>One might begin by writing "In mathematics, an <b>oriented matroid</b> is&nbsp;.&nbsp;.&nbsp;." Or "In algebra,..." or "In number theory,..." 
or "In geometry,..." or "In calculus,..." But DON'T start by saying "In functional analysis,...", or "In category theory," or "In topology,....". Again with those, the lay reader is given no reason to suspect that mathematics is what it is about. There's no need for that sort of context-setting phrase if the title of the article is "Mathematical induction" or "Algebraic equation" or "Arithmetic of $p$-adic numbers".</p> <p>Usually, the word or phrase that is the title of the article should appear in or near the first sentence set in <b>bold</b>, not necessarily verbatim identical to the title (e.g. it may be plural where the title is singular).</p> <p>Do not begin a biographical article by writing "John Schmon was born in Putford in 1801. His father was a polecat farmer and his mother was a quantum software designer. He attended this school and that school. His older brother died when he was 10....." The reader should be told right at the beginning what John Schmon was notable for, thus: "<b>John Schmon</b> (1801–1998) was a Nevian mathematician who discovered this theorem and that theorem and made fundamental contributions to the understanding of whatever."</p> <p>Use lower case initial letters in article titles and section headings except where there is a reason to use a capital (e.g. a person's name). The first letter of a section heading is capital except in rare instances where there's a reason to use lower case. Thus a section may be called</p> <p><b>Applications of the theorem to population genetics</b></p> <p>but should NOT be called</p> <p><b>Applications of the Theorem To Population Genetics</b></p> <p>If you create a new article, it's a good idea to create redirect pages from alternative names, commonplace misspelling and miscapitalizations, commonplace misnomers, etc. 
Thus "Schmon's Theorem" (with a capital "T") might redirect to "Schmon's theorem" (the proper title).</p> <p>One of the easiest mistakes to make in creating a new article is to leave it as an "orphan", i.e. an article to which no other articles link. If you go to the "toolbox" menu and choose "what links here" you can see which other articles link to it. In the relatively early days of Wikipedia (in 2002 or 2003), I created a new article titled "exponential growth". I then used Google to find more than 150 occurrences of "exponential growth" or "grows exponentially" or "growing exponentially" in other Wikipedia articles, and linked them to the new article. Much more recently, in 2011, I found that 60 Wikipedia articles mentioned Gerald J. Toomer, in most cases by citing one of his works, but there was no article about him. I created the links to the new article about him BEFORE creating the article about him. These are "red links": links to an article that does not exist, and are red in color, whereas links to existing articles are blue. One should create appropriate red links in anticipation of the future existence of an article even when one intends never to create it oneself. A red link invites others to click on it and then create the new article.</p> <hr> <p>I've put whatever came to mind instantly into these comments. I could say a lot more if I took more time.</p> <p>Most of what appears above is codified in Wikipedia's style manuals and guidance pages.</p>
2,156,606
<p>I am stuck in this exercise of calculus about solving this indefinite integral, so I would like some help from your part: </p> <blockquote> <p>$$\int \frac{dx}{(1+x^{2})^{\frac{3}{2}}}$$</p> </blockquote>
Simply Beautiful Art
272,831
<p>We have $t&gt;0$.</p> <p>Recall that</p> <p>$$\int\frac1{(t^2+x^2)^{1/2}}\ dx=\operatorname{arcsinh}\left(\frac xt\right)+c_1$$</p> <p>Take the derivative of both sides with respect to $t$ to get</p> <p>$$\int\frac{-t}{(t^2+x^2)^{3/2}}\ dx=\frac{-x}{t\sqrt{t^2+x^2}}+c_2$$</p> <p>Setting $t=1$ and simplifying, we reach</p> <p>$$\int\frac1{(1+x^2)^{3/2}}\ dx=\frac x{\sqrt{1+x^2}}+c_3$$</p>
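A numerical sanity check, using only the standard library, that $\dfrac{x}{\sqrt{1+x^2}}$ is an antiderivative of $(1+x^2)^{-3/2}$, the integrand asked about:

```python
from math import sqrt

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * k * h) for k in range(1, n // 2))
    return s * h / 3

integrand = lambda x: (1 + x**2) ** -1.5
F = lambda x: x / sqrt(1 + x**2)      # claimed antiderivative

# F(b) - F(a) matches the numerical integral on several intervals.
for a, b in [(0.0, 1.0), (-2.0, 3.0), (1.0, 5.0)]:
    assert abs(simpson(integrand, a, b) - (F(b) - F(a))) < 1e-8
```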
57,642
<p>I'm looking for the shortest and the clearest proof for this following theorem:</p> <p>For $V$ vector space of dimension $n$ under $\mathbb C$ and $T: V \to V $ linear transformation , I need to show $V= \ker T^n \oplus $ Im $T^n$.</p> <p>Any hints? I don't know where to start from.</p> <p>Thank you.</p>
jspecter
11,844
<p>Let $E_1,\dots,E_k$ be the generalized eigenspaces associated to $T.$ Show by examining the Jordan canonical form of $T$ that $T^n$ acts on $E_j$ as an automorphism if the eigenvalue associated to $E_j,$ denoted $\lambda(E_j),$ is non-zero, and as the zero transformation otherwise. It follows that $$\mathrm{Ker}\ T^n = \displaystyle\bigoplus_{\lambda(E_j) = 0} E_j$$ and $$\mathrm{Im}\ T^n = \displaystyle\bigoplus_{\lambda(E_j) \neq 0} E_j$$ and $$V = \mathrm{Ker}\ T^n \oplus \mathrm{Im}\ T^n.$$ </p>
57,642
<p>I'm looking for the shortest and the clearest proof for this following theorem:</p> <p>For $V$ vector space of dimension $n$ under $\mathbb C$ and $T: V \to V $ linear transformation , I need to show $V= \ker T^n \oplus $ Im $T^n$.</p> <p>Any hints? I don't know where to start from.</p> <p>Thank you.</p>
Pierre-Yves Gaillard
660
<p>If $S$ is an endomorphism of a finite dimensional vector space $V$, then $$V=\mathrm{Ker}\ S\ \oplus\ \mathrm{Im}\ S\Leftrightarrow\mathrm{Ker}\ S\ \cap\ \mathrm{Im}\ S=0\Leftrightarrow\mathrm{Ker}\ S^2=\mathrm{Ker}\ S.$$ </p> <p>Put $S:=T^n$. </p> <hr> <p>I find Robert Israel's argument wonderful, but it seems to me that jspecter's proof works over any field. </p> <p>Indeed, let $T$ be an endomorphism of an $n$-dimensional vector space $V$ over a field $K$. Let $f\in K[X]$ be the minimal polynomial of $T$. Write $f=X^rg$ with $g(0)\not=0$. By the Chinese Remainder Theorem, $K[T]=K[X]/(f)$ is the product of $K[X]/(X^r)$ and $K[X]/(g)$, yielding a decomposition $V=V_X\oplus V_g$, where $T$ is nilpotent on $V_X$ and invertible on $V_g$. </p>
566
<h3>We all love a good puzzle</h3> <p>To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in <a href="http://en.wikipedia.org/wiki/Latin_squares" rel="nofollow noreferrer">latin squares</a>). Mathematicians and puzzles get on, it seems, rather well.</p> <h3>But what is a good puzzle?</h3> <p>Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must</p> <ul> <li><strong>Not be widely known:</strong> If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw that <em>hilarious</em> scene in the film 21, where Kevin Spacey explains the Monty hall paradox badly and want to share, don't do so here. Anyone found posting the liar/truth teller riddle will be immediately disembowelled.</li> <li><strong>Be mathematical (as much as possible):</strong> It's true that logic <em>is</em> mathematics, but puzzles beginning '<em>There is a street where everyone has a different coloured house …</em>' are tedious as hell. Note: there is a happy medium between this and trig substitutions.</li> <li><strong>Not be too hard:</strong> Any level of difficulty is cool, but if coming up with an answer requires more than two sublemmas, you are misreading your audience.</li> <li><strong>Actually have an answer:</strong> Crank questions will not be appreciated! 
You can post the answers/hints in <a href="http://en.wikipedia.org/wiki/Rot_13" rel="nofollow noreferrer">Rot-13</a> underneath as comments as on MO if you fancy.</li> <li><strong>Have that indefinable spark that makes a puzzle awesome:</strong> Like a situation that seems familiar, requiring unfamiliar thought …</li> </ul> <p>Ideally include where you found the puzzle so we can find more cool stuff like it. For ease of voting, one puzzle per post is best.</p> <hr /> <h1>Some examples to set the ball rolling</h1> <blockquote> <p>Simplify <span class="math-container">$\sqrt{2+\sqrt{3}}$</span></p> </blockquote> <p><strong>From:</strong> problem solving magazine</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Try a two term solution</p> </blockquote> <hr /> <blockquote> <p>Can one make an equilateral triangle with all vertices at integer coordinates?</p> </blockquote> <p><strong>From:</strong> Durham distance maths challenge 2010</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> This is equivalent to the rational case</p> </blockquote> <hr /> <blockquote> <p>The collection of <span class="math-container">$n \times n$</span> <a href="http://en.wikipedia.org/wiki/Magic_squares" rel="nofollow noreferrer">Magic squares</a> form a vector space over <span class="math-container">$\mathbb{R}$</span> prove this, and by way of a linear transformation, derive the dimension of this vector space.</p> </blockquote> <p><strong>From:</strong> Me, I made this up (you can tell, can't you!)</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Apply the rank nullity theorem</p> </blockquote>
Aryabhata
1,102
<p>The odd town puzzle.</p> <p>You have a town with $m$ clubs formed by $n$ citizens of the town.</p> <p>The clubs are so formed that</p> <ul> <li>Each club has an odd number of members.</li> <li>Any two clubs have an even number of common members. (Could be zero too).</li> </ul> <p>Show that $m \le n$.</p>
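A brute-force check of the bound for tiny towns (an illustration only, not a proof; the standard proof works with linear algebra over $\mathbb F_2$). The search enumerates every family of odd-sized clubs and verifies that no valid family exceeds $n$ members:

```python
from itertools import combinations

def valid(family):
    """Pairwise intersections are all even."""
    return all(len(A & B) % 2 == 0 for A, B in combinations(family, 2))

def max_clubs(n):
    """Largest valid family of odd-sized clubs in an n-person town,
    found by exhaustive search (n must be tiny)."""
    odd_sets = [frozenset(c) for r in range(1, n + 1, 2)
                for c in combinations(range(n), r)]
    for k in range(len(odd_sets), 0, -1):
        if any(valid(fam) for fam in combinations(odd_sets, k)):
            return k
    return 0

# m <= n, with equality attained by the n singleton clubs.
assert all(max_clubs(n) == n for n in (2, 3, 4))
```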
566
<h3>We all love a good puzzle</h3> <p>To a certain extent, any piece of mathematics is a puzzle in some sense: whether we are classifying the homological intersection forms of four manifolds or calculating the optimum dimensions of a cylinder, it is an element of investigation and inherently puzzlish intrigue that drives us. Indeed most puzzles (cryptic crosswords aside) are somewhat mathematical (the mathematics of Sudoku for example is hidden in <a href="http://en.wikipedia.org/wiki/Latin_squares" rel="nofollow noreferrer">latin squares</a>). Mathematicians and puzzles get on, it seems, rather well.</p> <h3>But what is a good puzzle?</h3> <p>Okay, so in order to make this question worthwhile (and not a ten-page wade-athon through 57 varieties of the men with red and blue hats puzzle), we are going to have to impose some limitations. Not every puzzle-based answer that pops into your head will qualify as a good puzzle—to do so it must</p> <ul> <li><strong>Not be widely known:</strong> If you have a terribly interesting puzzle that motivates something in cryptography; well done you, but chances are we've seen it. If you saw that <em>hilarious</em> scene in the film 21, where Kevin Spacey explains the Monty hall paradox badly and want to share, don't do so here. Anyone found posting the liar/truth teller riddle will be immediately disembowelled.</li> <li><strong>Be mathematical (as much as possible):</strong> It's true that logic <em>is</em> mathematics, but puzzles beginning '<em>There is a street where everyone has a different coloured house …</em>' are tedious as hell. Note: there is a happy medium between this and trig substitutions.</li> <li><strong>Not be too hard:</strong> Any level of difficulty is cool, but if coming up with an answer requires more than two sublemmas, you are misreading your audience.</li> <li><strong>Actually have an answer:</strong> Crank questions will not be appreciated! 
You can post the answers/hints in <a href="http://en.wikipedia.org/wiki/Rot_13" rel="nofollow noreferrer">Rot-13</a> underneath as comments as on MO if you fancy.</li> <li><strong>Have that indefinable spark that makes a puzzle awesome:</strong> Like a situation that seems familiar, requiring unfamiliar thought …</li> </ul> <p>Ideally include where you found the puzzle so we can find more cool stuff like it. For ease of voting, one puzzle per post is best.</p> <hr /> <h1>Some examples to set the ball rolling</h1> <blockquote> <p>Simplify <span class="math-container">$\sqrt{2+\sqrt{3}}$</span></p> </blockquote> <p><strong>From:</strong> problem solving magazine</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Try a two term solution</p> </blockquote> <hr /> <blockquote> <p>Can one make an equilateral triangle with all vertices at integer coordinates?</p> </blockquote> <p><strong>From:</strong> Durham distance maths challenge 2010</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> This is equivalent to the rational case</p> </blockquote> <hr /> <blockquote> <p>The collection of <span class="math-container">$n \times n$</span> <a href="http://en.wikipedia.org/wiki/Magic_squares" rel="nofollow noreferrer">Magic squares</a> form a vector space over <span class="math-container">$\mathbb{R}$</span> prove this, and by way of a linear transformation, derive the dimension of this vector space.</p> </blockquote> <p><strong>From:</strong> Me, I made this up (you can tell, can't you!)</p> <p><strong>Hint:</strong></p> <blockquote class="spoiler"> <p> Apply the rank nullity theorem</p> </blockquote>
Fixee
7,162
<p>Prove that any 2-coloring of a $K_6$ has <strong>two</strong> monochromatic $K_3$'s.</p>
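Since $K_6$ has only $2^{15}$ edge 2-colorings, the claim can be verified exhaustively (a brute-force sketch, not a substitute for the Goodman-style counting proof):

```python
from itertools import combinations

vertices = range(6)
edges = list(combinations(vertices, 2))            # 15 edges of K6
edge_index = {e: i for i, e in enumerate(edges)}
tri_edges = [(edge_index[(a, b)], edge_index[(a, c)], edge_index[(b, c)])
             for a, b, c in combinations(vertices, 3)]   # 20 triangles

def mono_count(mask):
    """Number of monochromatic triangles under the coloring encoded
    as a 15-bit mask (bit i = color of edge i)."""
    return sum(((mask >> i) & 1) == ((mask >> j) & 1) == ((mask >> k) & 1)
               for i, j, k in tri_edges)

# Every 2-coloring has at least 2 monochromatic triangles, and 2 occurs.
min_mono = min(mono_count(m) for m in range(1 << 15))
assert min_mono == 2
```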
3,255,174
<p>How can I find the volume bounded between <span class="math-container">$z=(x^2+y^2)^2$</span> and <span class="math-container">$z=x$</span>? </p> <p>My idea so far is to use cylindrical polar coordinates and <span class="math-container">$z$</span> limit is from <span class="math-container">$(x^2+y^2)^2$</span> to <span class="math-container">$x$</span> quite clearly but I am struggling to parametrize the surface <span class="math-container">$(x^2+y^2)^2&lt;x$</span> for the other integration limits, could someone please help?</p>
José Carlos Santos
446,262
<p>In cylindrical coordinates, your equations become <span class="math-container">$z=r^4$</span> and <span class="math-container">$z=r\cos\theta$</span>. So, take <span class="math-container">$\theta\in\left[-\frac\pi2,\frac\pi2\right]$</span> (so that <span class="math-container">$\cos\theta\geqslant0$</span>). Now, <span class="math-container">$r^4\leqslant r\cos\theta\iff r\leqslant\sqrt[3]{\cos\theta}$</span>. So, compute<span class="math-container">$$\int_{-\frac\pi2}^{\frac\pi2}\int_0^{\sqrt[3]{\cos\theta}}\int_{r^4}^{r\cos\theta}r\,\mathrm dz\,\mathrm dr\,\mathrm d\theta.$$</span>You should get <span class="math-container">$\dfrac\pi{12}$</span>.</p>
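A numerical cross-check of the $\pi/12$ value, using only the standard library (a simple midpoint rule on the iterated integral):

```python
from math import pi, cos

# Midpoint-rule evaluation of
#   int_{-pi/2}^{pi/2} int_0^{cos(t)^(1/3)} (r cos t - r^4) r dr dt
N = 400
total = 0.0
for i in range(N):
    t = -pi / 2 + (i + 0.5) * pi / N
    R = cos(t) ** (1 / 3)            # upper r-limit, cbrt(cos t)
    for j in range(N):
        r = (j + 0.5) * R / N
        total += (r * cos(t) - r ** 4) * r * (R / N) * (pi / N)

assert abs(total - pi / 12) < 1e-3
```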
387,519
<p>The domain of the following function $$y=2$$ is just 2? And the image of it?</p> <p>I don't think I quiet understand what the image of a function means. The domain is all values that it can assume, correct?</p> <p>Could you please try to define the image of this equation too: $$y = 2x - 6$$ so I can try to understand it?</p>
AndreasT
53,739
<p>When you define a function you should always provide a domain (i.e. a start set) and a co-domain (i.e. a target set). Writing $$ f:A\rightarrow B $$ means that $A$ is the domain and $B$ the co-domain. $f$ is a function if it "associates" every single element $a\in A$ with a unique element $b\in B$. We call such $b$ the <em>image of $a$</em> and denote it by $f(a)$. Then the <em>image of $f$</em> is the set of all images.</p> <p>When talking about real valued functions such as $f(x)=2x-6$, it is implied that the domain is the largest subset of $\mathbb R$ for which $f$ is defined (e.g. the domain of $f(x)=\sqrt x$ is $\{x\geq 0\}$), and that the co-domain is $\mathbb R$.</p> <p>Now, regarding your examples, $y=2$ is short for the (real valued) function $f(x)$ defined as $$ \begin{array}{crcl} f: &amp; \mathbb R &amp; \longrightarrow &amp; \mathbb R \\ &amp; x &amp; \longmapsto &amp; 2 \end{array} $$ (note that it is well defined for every $x\in\mathbb R$, so that by the convention above the domain of $f$ is the whole $\mathbb R$). The image of $f$ is just the set $\{2\}$, since no other values of the co-domain are "reached" by $f$.</p> <p>Concerning $f(x)=2x-6$, since no specification is given, as before we assume $$ \begin{array}{crcl} f: &amp; \mathbb R &amp; \longrightarrow &amp; \mathbb R \\ &amp; x &amp; \longmapsto &amp; 2x-6 \end{array} $$ i.e. that the domain is the largest subset of $\mathbb R$ for which the function is defined (so, again the whole $\mathbb R$) and that the co-domain is the whole $\mathbb R$. Now, for example the <em>image</em> of the element 5 is 4, since $f(5)=4$. It is easy to show that the image of $f$ is the whole co-domain $\mathbb R$ (in which case we say that $f$ is <em>surjective</em>, which <strong>is not</strong> the opposite of <em>injective</em>!). 
In general, you can find the image of $f$ by rewriting $y=f(x)$ into something like $x=\ldots$, where the rhs is an expression that does not contain the variable $x$, and then seeing for which values of $y$ the equation makes sense and the rhs is part of the domain. In this example, $$ y=2x -6 \Rightarrow 2x=y+6 \Rightarrow x= \frac{y+6}2 $$ and the last expression makes sense (i.e. can be computed) for every $y\in\mathbb R$. We then conclude that the image of $f$ is the whole $\mathbb R$.</p> <p>If we "restricted" the function as follows: $$ \begin{array}{crcl} f: &amp; [0,+\infty) &amp; \longrightarrow &amp; \mathbb R \\ &amp; x &amp; \longmapsto &amp; 2x-6 \end{array} $$ then this time finding its image would result in $$ x= \frac{y+6}2 \quad\text{and}\quad x\geq 0 $$ since now the domain in which $x$ lies is $[0,+\infty)$, i.e. $$ \frac{y+6}2\geq 0 \Leftrightarrow y\geq -6 $$ Therefore, in this case the image of $f$ would be $[-6,+\infty)$.</p> <p>I hope this example makes it clear.</p>
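The recipe for the restricted example can be checked mechanically: solve $y=2x-6$ for $x$ and test whether the solution lies in the domain $[0,+\infty)$ (a plain Python sketch):

```python
f = lambda x: 2 * x - 6

def in_image(y):
    """y is in the image of f restricted to [0, +inf) iff the solution
    x = (y + 6) / 2 lies in that domain."""
    x = (y + 6) / 2
    return x >= 0 and f(x) == y

assert in_image(-6)        # boundary value: f(0) = -6
assert in_image(100)       # any y >= -6 is attained
assert not in_image(-7)    # y < -6 would need x < 0, outside the domain
```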
2,309,418
<p>How can I integrate this function?</p> <p>$$\int \left( {3x^3+3x^2+3x+1 \over (x^2+1)(x+1)^2}\right)dx$$</p> <p>Using a previous example I have found:</p> <p>$$\int \left({3x^3+3x^2+3x+1 \over (x^2+1)(x+1)^2}dx\right) ={A\over x^2+1}+{B \over x+1}+{Cx+D\over (x+1)^2}$$</p> <p>And then: $$\int 3x^3+3x^2+3x+1 ={A(x+1)(x+1)^2}+{B(x^2+1)(x+1)^2}+{Cx+D(x^2+1)(x+1)^2}$$</p> <p>How ever now I'm supposed to substitute a value of x into the formula, but I'm not sure what value of x to use, nor do I know if I have done these steps correctly</p> <p>Can anyone please point me in the right direction?</p>
Ahmed
126,745
<p><strong>Hint</strong> Instead of what you wrote, you have to take: $$ {3x^3+3x^2+3x+1 \over (x^2+1)(x+1)^2} ={Ax+B\over x^2+1}+{C \over x+1}+{D\over (x+1)^2}$$ since $x^2+1$ is an irreducible quadratic, so its numerator must be a general linear term $Ax+B$.</p>
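One can let a CAS confirm the decomposition and the resulting antiderivative (a sketch assuming sympy is available):

```python
from sympy import symbols, apart, cancel, diff, log

x = symbols('x')
expr = (3*x**3 + 3*x**2 + 3*x + 1) / ((x**2 + 1) * (x + 1)**2)

# Partial fractions with a linear numerator over the irreducible x^2 + 1.
decomposition = apart(expr, x)
target = x/(x**2 + 1) + 2/(x + 1) - 1/(x + 1)**2
assert cancel(decomposition - target) == 0

# The resulting antiderivative (up to a constant): differentiating it
# recovers the original integrand.
F = log(x**2 + 1)/2 + 2*log(x + 1) + 1/(x + 1)
assert cancel(diff(F, x) - expr) == 0
```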
594,811
<p>This is my first post, sorry for my naiveness..</p> <p>I know a basic equation that relates Gram-schmidt matrix and Euclidean distance matrix:</p> <p>$XX'=-0.5*(I-J/n)*D*(I-J/n)'$</p> <p>Where $X$ is centered data (is $d \times n$), $I$ is identity matrix, $J$ is a matrix filled with ones (1), $n$ is the number of columns in $X$, and $D$ is the distance matrix (with dimensions $n \times n$).</p> <p>My question is:</p> <p>How can I derive Euclidean distance matrix $D$ from this equation? I would like something like:</p> <p>$D=$ (something ¿?)</p> <p>For example, I can see that:</p> <p>$D=-2*inv((I-J/n))*XX'*inv((I-J/n)'$</p> <p>But $(I-J/n)$ is a singular matrix. I am still interested in some approximation.</p> <p>Thanks a lot!!</p> <p>Mark</p>
lynne
116,783
<p>The solution to $A.X.B = Y$ in a least-squares sense is $$ X = A^+.Y.B^+ + W - A^+.A.W.B.B^+ $$ where $W$ is arbitrary and $A^+$ denotes the pseudo-inverse of $A$.</p> <p>In your problem, $A=B$ equals the centering matrix (see <a href="http://en.wikipedia.org/wiki/Centering_matrix" rel="nofollow">http://en.wikipedia.org/wiki/Centering_matrix</a>). You are correct, it <b>is</b> a singular matrix, but it has some very useful properties: $$\eqalign{ C^+ &amp;= C \cr C^2 &amp;= C \cr C' &amp;= C \cr }$$ So solving for $D$ yields $$ D = -2(C.X.X'.C) + W - C.W.C $$</p>
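A numerical sketch of this family of solutions (assuming numpy; points are stored as rows, so the Gram matrix is $n\times n$ as the distance matrix requires). Every choice of $W$ gives a $D$ whose double-centering reproduces the Gram matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 7, 3
P = rng.standard_normal((n, d))
P -= P.mean(axis=0)                       # center the point cloud

G = P @ P.T                                           # Gram matrix
D = np.square(P[:, None, :] - P[None, :, :]).sum(-1)  # true squared EDM
C = np.eye(n) - np.ones((n, n)) / n                   # centering matrix

# Classical identity for centered points: G = -0.5 * C D C.
assert np.allclose(G, -0.5 * C @ D @ C)

# The least-squares family from the answer: D_rec = -2 C G C + W - C W C.
W = rng.standard_normal((n, n))
D_rec = -2 * C @ G @ C + W - C @ W @ C
# Every member reproduces the same double-centered matrix ...
assert np.allclose(-0.5 * C @ D_rec @ C, G)
# ... even though D_rec need not equal the true D, since C is singular.
```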
1,790,222
<p>I know that $[0,1]$ and a unit circle $\mathbb{S}^1$ are one-point compactifications of $\mathbb{R}$ under some suitable homeomorphism. But how does one construct the Stone–Čech compactification? </p>
egreg
62,967
<p>The Stone-Čech compactification of $\mathbb{R}$ can be functorially built as the maximal spectrum of $C_b(\mathbb{R})$, the ring of bounded continuous real functions on $\mathbb{R}$.</p> <p>The maximal spectrum $\operatorname{Max}(R)$ of a commutative ring is the set of all maximal ideals, with the spectral topology, where a basis of closed sets is given by $$ V(I)=\{\mathfrak{m}\in\operatorname{Max}(R):\mathfrak{m}\supseteq I\} $$ where $I$ is any ideal of $R$. In general this topology is not Hausdorff, but it can be proved that if $X$ is a completely regular Hausdorff space, then the space $\beta X=\operatorname{Max}(C_b(X))$ is compact Hausdorff.</p> <p>Suppose $f\colon X\to Y$ is a continuous map, where $Y$ is completely regular Hausdorff. Then we have an induced ring homomorphism $f^*\colon C_b(Y)\to C_b(X)$, mapping $\varphi\in C_b(Y)$ to $\varphi\circ f$. If $\mathfrak{m}$ is a maximal ideal of $C_b(X)$, then $\beta f(\mathfrak{m})=(f^*)^{-1}(\mathfrak{m})$ is a maximal ideal of $C_b(Y)$. It's easy to show that $\beta f$ is continuous under the spectral topology, so we get a continuous function $\beta f\colon\beta X\to\beta Y$.</p> <p>Moreover we get an embedding $X\to \beta X$ by mapping $x\to\mathfrak{m}_x$ defined by $$ \mathfrak{m}_x=\{\varphi\in C_b(X):\varphi(x)=0\} $$</p> <p>In the special case when $Y$ is compact, the maximal ideals of $C_b(Y)=C(Y)$ are all of the form $\mathfrak{m}_y$, for a unique $y\in Y$, so $Y$ can be identified with $\beta Y$. 
This shows that $\beta X$ indeed satisfies the universal property of the Stone-Čech compactification, that is, every continuous map $X\to Y$, where $Y$ is compact Hausdorff, can be extended to a continuous map $\beta X\to Y$.</p> <p>You should be able to find this construction of the Stone-Čech compactification in several books on topology, I believe it is also in Kelley's book.</p> <p>There's no “explicit” description, because the ideals of $C_b(X)$ can be described in terms of nonprincipal ultrafilters, whose existence depends on the axiom of choice. Only some cases admit an explicit description, because of special properties of $X$.</p>
42,258
<p>I have a Table of values e.g. </p> <pre><code>{{x,y,z},{x,y,z},{x,y,z}…} </code></pre> <p>How do I replace the the "z" column with a List of values?</p>
Z-Y.L
4,733
<p>Actually $\partial _t f[t]$ in Mathematica is interpreted as <code>D[f[t],t]</code> by default. You don't need to redefine it. </p> <p>Considering $\partial _t = D[f,t]$ given by the OP is only an example of what the OP wants to do, I regard this question as a way to redefine the basic rules for the input of the expression. You can define the low-level input rules by using <code>MakeExpression</code>. </p> <p>In this case, I try to define $\mathbb{D}_t f[t]$ as <code>D[f[t],t]</code> for better understanding.</p> <pre><code>MakeExpression[RowBox[{SubscriptBox["\[DoubleStruckCapitalD]", t_], f_}], StandardForm] := MakeExpression[RowBox[{"D", "[", f, ",", t, "]"}], StandardForm] </code></pre> <blockquote> <p>$\mathbb{D}_t$<code>f[t]</code></p> <p><code>f'[t]</code></p> </blockquote> <p>You can use <code>FullForm</code> to check it:</p> <blockquote> <p>$\mathbb{D}_t$<code>f[t]//FullForm</code> </p> <p><code>Derivative[1][f][t]</code></p> </blockquote> <p>For more details about Low-Level Input and Output Rules, pls read <a href="http://reference.wolfram.com/mathematica/tutorial/LowLevelInputAndOutputRules.html" rel="nofollow">here</a>.</p>
2,703,323
<p>How can one show that the limit of the following is $1$?</p> <p>$$\lim_{x\to 0}\frac{\frac{1}{1-x}-1}{x}=1$$</p>
ervx
325,617
<p>Hint:</p> <p>$$ \frac{\frac{1}{1-x}-1}{x}=\frac{\frac{1-(1-x)}{1-x}}{x}=\frac{x}{x(1-x)}=\frac{1}{1-x}. $$</p>
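A quick symbolic confirmation of the hint (assuming sympy is available):

```python
from sympy import symbols, limit, cancel

x = symbols('x')
expr = (1/(1 - x) - 1) / x

# The hint's simplification: the expression reduces to 1/(1 - x) ...
assert cancel(expr - 1/(1 - x)) == 0
# ... so the limit as x -> 0 is 1.
assert limit(expr, x, 0) == 1
```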
4,196,125
<p>Can someone tell me where this calculation goes wrong? I get (2 3 4)(1 2 3 4 5 6)^-1 = (1 6 5 2). My book and Mathematica get (1 6 5 4). I have read several explanations of how to multiply permutations in cycle notation and have worked dozens of examples successfully, but I always get this one wrong.</p> <p>(2 3 4)(1 2 3 4 5 6)^-1 = (2 3 4)(6 5 4 3 2 1) = (2 3 4)(1 6 5 4 3 2)</p> <p>1 -&gt; 6 then 6 is unchanged giving (1 6</p> <p>6 -&gt; 5 then 5 in unchanged giving (1 6 5</p> <p>5 -&gt; 4 then 4 -&gt; 2 giving (1 6 5 2</p> <p>2 -&gt; 1 then 1 is unchanged completing the cycle giving (1 6 5 2).</p>
Narasimham
95,860
<p>HINT</p> <p>It is also the same as the volume obtained by revolving the circle (after shifting / reflection)</p> <p><span class="math-container">$$ (x-2)^2+y^2= 1$$</span></p> <p>about the y-axis.</p>
1,593,339
<p>What is $y$ in $$J^\frac{1}{2}f(x)=y$$ $$f(x)=w\sin(x)$$ where $w$ is a constant?</p>
Simply Beautiful Art
272,831
<p>Allow $\frac{d^n}{dx^n}e^{cx}=c^ne^{cx}$. This should be fairly obvious and holds by induction.</p> <p>Now let $c=i$ and we can get Euler's formula to do all the work.</p> <p>$$\sin(x)=\frac{e^{ix}-e^{-ix}}{2i}$$</p> <p>$$\frac{d^n}{dx^n}\sin(x)=\frac{d^n}{dx^n}\frac{e^{ix}-e^{-ix}}{2i}$$</p> <p>$$=\frac{i^ne^{ix}-(-i)^ne^{-ix}}{2i}$$</p> <p>$$=\frac{e^{i(x+\frac{n\pi}2)}-e^{-i(x+\frac{n\pi}2)}}{2i}$$</p> <p>$$=\sin(x+\frac{n\pi}2)$$</p> <p>$$\frac{d^n}{dx^n}w\sin(x)=w\frac{d^n}{dx^n}\sin(x)=w\sin(x+\frac{n\pi}2)$$</p> <p>For $n=1/2$, you get $w\sin(x+\frac\pi4)$.</p>
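A quick symbolic check of the shift formula (assuming sympy): the integer orders are verified directly, and the half-order value is consistent in the sense that applying the quarter-period shift twice gives the ordinary first derivative.

```python
from sympy import symbols, sin, diff, pi, simplify

x = symbols('x')

# d^n/dx^n sin(x) = sin(x + n*pi/2) for integer n.
for n in range(1, 7):
    assert simplify(diff(sin(x), x, n) - sin(x + n * pi / 2)) == 0

# Half-order consistency: shifting by pi/4 twice equals one derivative.
half = sin(x + pi / 4)
assert simplify(half.subs(x, x + pi / 4) - diff(sin(x), x)) == 0
```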
114,898
<p>A) Let $f:F\rightarrow S$ be a flat proper morphism of schemes with geometrically normal fibers. Then supposedly the number of $\textbf{connected}$ components of the geometric fibers is constant. Why is this? Without some kind of vanishing of cohomology or information on the base, I don't see why this is true. </p> <p>B) Furthermore, supposedly if $F$ is now a flat proper morphism with reduced, connected, nodal curves as geometric fibers, then there is a Zariski open subset of $S$ on which the fibers all have the same number of $\textbf{irreducible}$ components. Why is this?</p> <p>Finally, how far can these results be generalized? For example, is B) true for any flat proper morphism?</p>
Alexandre Eremenko
25,510
<p>Here it is: MR0015154 Salem, R.; Zygmund, A. Lacunary power series and Peano curves. Duke Math. J. 12, (1945). 569–578. </p>
416,387
<p>In the process of solving a DE and imposing the initial condition I came up with the following question.</p> <p>I've reached the stage that</p> <p>$$\ln y + C = \int\left(\frac{2}{x+2}-\frac{1}{x+1}\right)dx$$ $$\Rightarrow \ln y +C=2\ln|x+2|-\ln|x+1|$$ $$\Rightarrow y=A\frac{(x+2)^2}{|x+1|}.$$</p> <p>Now I had also found that the curve passes through $(-4,-3)$. Susbstituting in the expression above, we find $A=-9/4$. However, the solution in the markscheme (the problem was from a past exam) drops the modulus sign and so it gives $A=9/4$.</p> <p>So my question is, why do they drop the modulus sign and when is one allowed to do so in dealing with integrals of the form $\int 1/t ~dt$?</p> <p>Thanks in advance.</p>
Abhra Abir Kundu
48,639
<p>$\ln (-x)$ does not make sense (over the reals) for $x&gt;0$, so the modulus sign must always be there for the antiderivative to make sense on both sides of the singularity. You can get rid of the modulus sign if and only if you know that the argument of the logarithm is positive on the interval you are working on.</p> <p>In this case, if the modulus sign were simply dropped while keeping $A=-9/4$, then for all $x&gt;-1$ we would have $y&lt;0$, and then $\ln y$ would not make any sense (even though we used $\ln y$ to arrive at this equation). What the markscheme is implicitly doing is restricting to the branch $x&lt;-1$ containing the initial point $(-4,-3)$: there $|x+1|=-(x+1)$, so the sign can be absorbed into the constant, and $y=A\frac{(x+2)^2}{x+1}$ with $A=9/4$ describes the very same solution.</p>
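A quick numeric check (a small Python sketch, not part of the original argument): on the branch containing the initial point $(-4,-3)$, i.e. for $x$ below $-1$, the form with the modulus kept and $A=-9/4$ coincides with the markscheme's modulus-free form with $A=9/4$, since there $|x+1|=-(x+1)$.

```python
def y_with_modulus(x, A=-9/4):      # keeps |x+1|, so A = -9/4
    return A * (x + 2)**2 / abs(x + 1)

def y_without_modulus(x, A=9/4):    # markscheme form on x < -1, so A = +9/4
    return A * (x + 2)**2 / (x + 1)

# Both pass through the given point (-4, -3) ...
assert abs(y_with_modulus(-4) - (-3)) < 1e-12
assert abs(y_without_modulus(-4) - (-3)) < 1e-12

# ... and they agree everywhere on the branch x < -1, where |x+1| = -(x+1).
for x in (-1.5, -2.0, -3.7, -10.0):
    assert abs(y_with_modulus(x) - y_without_modulus(x)) < 1e-12
```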
2,140,363
<p>A fair die is rolled 10 times. Define N to be the number of distinct outcomes. For example, if we roll (3, 2, 2, 1, 4, 3, 1, 6, 2, 3) then N = 5 (the distinct outcomes being 1, 2, 3, 4 and 6). Find the mean and standard deviation of N. Hint: Define $I_j = \left\{ \begin{array}{ll} 1 &amp; \text{if outcome j appears at least once} \\ 0 &amp; \text{otherwise} \end{array} \right.$ and $j=1,...,6$. Then $N=\sum_{j=1}^{6} I_j$ where $I_1,...I_6$ are dependent.</p> <p>Any help would be much appreciated!</p>
JMoravitz
179,297
<p><em>For the sake of brevity, I will assume that you already know the definitions of random variables, expected value, and standard deviation and will not formally define these terms. For a formal definition, seek out any textbook on the subject.</em></p> <p>Our problem asks us to find the mean (<em>expected value</em>) and standard deviation (<em>square root of variance</em>) for the random variable $N$ which represents the total number of unique results from a throw of ten (<em>presumably standard fair six-sided</em>) dice. Clearly $N$ is an integral random variable which takes the values $0,1,2,\dots,6$ with corresponding probabilities $p_0,p_1,p_2,\dots,p_6$ respectively.</p> <p>One (<em>naive</em>) approach to the problem would be to try to calculate each of $p_0,p_1,\dots$ and use the formula $E[N] = 0\cdot p_0+1\cdot p_1+2\cdot p_2+\dots+6\cdot p_6$, call this result $\mu$, and then continue to calculate $Var(N) = (0-\mu)^2\cdot p_0+(1-\mu)^2\cdot p_1+\dots+(6-\mu)^2\cdot p_6$. Although this would work, this is a horribly difficult and tedious approach and is not at all recommended. We don't actually care about finding the exact values of each of these probabilities if all we want to know about is the expected value and variance. Instead, we approach in a much smarter and more efficient way which is alluded to in the given hint.</p> <p>By letting $I_j$ represent the <em>Indicator Random Variable</em> which takes the value of $1$ when result $j$ was included in the results of the ten thrown dice, and takes the value of $0$ otherwise. We notice then that $N$ could be represented by adding together $I_1,I_2,\dots,I_6$. We choose to do this because calculations using indicator random variables is much easier than otherwise since they only ever take the values of zero or one.</p> <p>I am happy to see from the comments that you have realized the strength of this for finding the expected value of $N$. 
Indeed, by the linearity of expectation and the symmetry of the problem we have $E[N] = E[\sum\limits_{j=1}^6 I_j] = \sum\limits_{j=1}^6 E[I_j] = 6\cdot E[I_1] = 6\cdot Pr(I_1=1) = 6\cdot (1-(\frac{5}{6})^{10})$</p> <hr> <p>Continuing the problem to find the Variance (<em>and taking the square root of this result to find the standard deviation</em>), we remember that from basic properties one finds that $Var(N)=E[N^2]-(E[N])^2$</p> <p>We know how to deal with the second term from our earlier work in the previous part, but we now need to look at $N^2$ and see how it acts. Continuing to use the indicator random variables, we see that</p> <p>$N^2 = (I_1+I_2+\dots+I_6)^2=(I_1+I_2+\dots+I_6)\cdot(I_1+I_2+\dots+I_6)$</p> <p>$=I_1(I_1+I_2+\dots+I_6)+I_2(I_1+I_2+\dots+I_6)+\dots+I_6(I_1+I_2+\dots+I_6)$</p> <p>As alluded to in my initial comments above, by expanding this completely and not adding any like-terms together, we are left with $36$ terms total in the above expression. By splitting these up into two categories, those which involve only one subscript and those which involve two subscripts, we have $6$ terms of the form $I_j^2$ and $30$ terms of the form $I_jI_k$ with $j\neq k$. (<em>I specify without adding like-terms together to make calculations more transparent since you have $I_1I_2$ as well as $I_2I_1$ appearing once each</em>)</p> <p>We have as a result $E[N^2]=E[(\sum\limits_{j=1}^6 I_j)^2] = E[\sum\limits_{j=1}^6\sum\limits_{k=1}^6 I_jI_k] = E[\sum\limits_{j=1}^6 I_j^2 + \sum\limits_{(j,k)\in \{(x,y)\in [6]^2~:~x\neq y\}} I_jI_k]$</p> <p>$=\sum\limits_{k=1}^6 E[I_1^2] + \sum\limits_{(j,k)\in \{(x,y)\in [6]^2~:~x\neq y\}} E[I_1I_2] = 6\cdot E[I_1^2]+30\cdot E[I_1I_2]$</p> <p>Now, notice that $I_1^2 = I_1$. Why? Well, whenever $I_1$ takes the value of $1$, $I_1^2$ will also take the value of $1$ since $1^2=1$ and whenever $I_1$ takes the value of $0$, $I_1^2$ will also take the value of $0$ since $0^2=0$. Incredibly convenient! 
So, we have already done the necessary calculations to find this part.</p> <p>Now... we look a bit more closely at $I_1\cdot I_2$. Since $I_1$ and $I_2$ are indicator random variables, you have $I_1\cdot I_2$ will take the value of $1$ if and only if both of $I_1$ and $I_2$ each took the value of $1$ as well and take the value of zero otherwise. That is to say $I_1\cdot I_2$ is also an indicator random variable! Specifically:</p> <p>$I_1\cdot I_2 = \begin{cases} 1&amp;\text{if}~1~\text{and}~2~\text{both appeared in the results of the ten thrown dice}\\0&amp;\text{otherwise}\end{cases}$</p> <p>So, $E[I_1\cdot I_2] = Pr(I_1\cdot I_2 = 1) = Pr(\text{both}~1~\text{and}~2~\text{appear})$</p> <p>I will leave the rest to you to complete now, but all that remains should be a routine calculation perhaps involving the use of inclusion-exclusion and arithmetic putting all of the gathered information together into a final answer.</p>
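Carrying out the remaining routine calculation: by inclusion-exclusion, $\Pr(\text{both }1\text{ and }2\text{ appear}) = 1 - 2(5/6)^{10} + (4/6)^{10}$. A short Python sketch (my own, using exact rational arithmetic) assembling the pieces above:

```python
from fractions import Fraction
import math

p_miss_one = Fraction(5, 6) ** 10         # Pr(a fixed outcome never appears in 10 rolls)
p_miss_two = Fraction(4, 6) ** 10         # Pr(two fixed outcomes both never appear)

E_I1 = 1 - p_miss_one                     # E[I_1] = Pr(outcome 1 appears)
E_I1I2 = 1 - 2 * p_miss_one + p_miss_two  # inclusion-exclusion

E_N = 6 * E_I1                            # linearity of expectation
E_N2 = 6 * E_I1 + 30 * E_I1I2             # using E[I_j^2] = E[I_j]
var = E_N2 - E_N ** 2

print(float(E_N))                         # mean, approximately 5.031
print(math.sqrt(float(var)))              # standard deviation, approximately 0.742
```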
2,140,363
<p>A fair die is rolled 10 times. Define N to be the number of distinct outcomes. For example, if we roll (3, 2, 2, 1, 4, 3, 1, 6, 2, 3) then N = 5 (the distinct outcomes being 1, 2, 3, 4 and 6). Find the mean and standard deviation of N. Hint: Define $I_j = \left\{ \begin{array}{ll} 1 &amp; \text{if outcome j appears at least once} \\ 0 &amp; \text{otherwise} \end{array} \right.$ and $j=1,...,6$. Then $N=\sum_{j=1}^{6} I_j$ where $I_1,...I_6$ are dependent.</p> <p>Any help would be much appreciated!</p>
Marko Riedel
44,883
<p>Supposing that the fair die has $n$ sides and is rolled $m$ times we get for the probability of $k$ distinct outcomes by first principles the closed form</p> <p>$$\frac{1}{n^m} \times m! [z^m] {n\choose k} (\exp(z)-1)^k.$$</p> <p>Let us verify that this is a probability distribution. We obtain</p> <p>$$\frac{1}{n^m} \times m! [z^m] \sum_{k=0}^n {n\choose k} (\exp(z)-1)^k.$$</p> <p>Here we have included the value for $k=0$ because it is a constant with respect to $z$ and hence does not contribute to $[z^m]$ where $m\ge 1.$ Continuing we find</p> <p>$$\frac{1}{n^m} \times m! [z^m] \exp(nz) = 1$$</p> <p>and the sanity check goes through. Now for the expectation we get</p> <p>$$\mathrm{E}[N] = \frac{1}{n^m} \times m! [z^m] \sum_{k=1}^n k {n\choose k} (\exp(z)-1)^k \\ = \frac{n}{n^m} \times m! [z^m] \sum_{k=1}^n {n-1\choose k-1} (\exp(z)-1)^k \\ = \frac{n}{n^m} \times m! [z^m] (\exp(z)-1) \sum_{k=0}^{n-1} {n-1\choose k} (\exp(z)-1)^k \\ = \frac{n}{n^m} \times m! [z^m] (\exp(z)-1) \exp((n-1)z).$$</p> <p>This is</p> <p>$$\frac{n}{n^m} \times (n^m - (n-1)^m) = n \left(1-\left(1-\frac{1}{n}\right)^m\right).$$</p> <p>This goes to $n$ in $m$ as it ought to. For the next factorial moment we obtain</p> <p>$$\mathrm{E}[N(N-1)] = \frac{1}{n^m} \times m! [z^m] \sum_{k=2}^n k (k-1) {n\choose k} (\exp(z)-1)^k \\ = \frac{n(n-1)}{n^m} \times m! [z^m] \sum_{k=2}^n {n-2\choose k-2} (\exp(z)-1)^k \\ = \frac{n(n-1)}{n^m} \times m! [z^m] (\exp(z)-1)^2 \sum_{k=0}^{n-2} {n-2\choose k} (\exp(z)-1)^k \\ = \frac{n(n-1)}{n^m} \times m! 
[z^m] (\exp(z)-1)^2 \exp((n-2)z).$$</p> <p>This is</p> <p>$$\frac{n(n-1)}{n^m} (n^m - 2(n-1)^m + (n-2)^m) \\ = n(n-1) \left(1-2\left(1-\frac{1}{n}\right)^m + \left(1-\frac{2}{n}\right)^m \right).$$</p> <p>We get for the variance</p> <p>$$\mathrm{Var}(N) = \mathrm{E}[N^2] - \mathrm{E}[N]^2 = \mathrm{E}[N(N-1)] + \mathrm{E}[N] - \mathrm{E}[N]^2$$</p> <p>which yields</p> <p>$$n^2 \left(1-2\left(1-\frac{1}{n}\right)^m + \left(1-\frac{2}{n}\right)^m \right) \\ - n \left(1-2\left(1-\frac{1}{n}\right)^m + \left(1-\frac{2}{n}\right)^m \right) + n \left(1-\left(1-\frac{1}{n}\right)^m\right) \\ - n^2 \left(1-2\left(1-\frac{1}{n}\right)^m + \left(1-\frac{1}{n}\right)^{2m} \right) \\ = n^2 \left(\left(1-\frac{2}{n}\right)^m - \left(1-\frac{1}{n}\right)^{2m} \right) + n \left(\left(1-\frac{1}{n}\right)^m - \left(1-\frac{2}{n}\right)^m \right).$$</p> <p>The absolute value of the two differences of terms that are geometric in $m$ with positive common ratio less than one is bounded by the maximum of these two, which vanishes in $m$ and hence the variance goes to zero for $n$ fixed and $m$ going to infinity.<P></p> <p>The queried standard deviation is then given by the root of the variance.</p> <p><P>Here is some Maple code to explore these numbers.</p> <pre> ENUM := proc(n, m) option remember; local ind, data, gf; gf := 0; for ind from n^m to 2*n^m - 1 do data := convert(ind, base, n)[1..m]; gf := gf + v^nops(convert(data, &#96;multiset&#96;)); od; gf/n^m; end; EN_PROB := (n, m) -&gt; subs(v=1, ENUM(n, m)); EN_FM1 := (n, m) -&gt; subs(v=1, diff(ENUM(n, m), v)); EN_FM2 := (n, m) -&gt; subs(v=1, diff(ENUM(n, m), v&#36;2)); FM1 := (n, m) -&gt; n*(1-(1-1/n)^m); FM2 := (n, m) -&gt; n*(n-1)*(1 - 2*(1-1/n)^m + (1-2/n)^m); EN_VAR := (n, m) -&gt; - EN_FM1(n, m)^2 + subs(v=1, diff(v*diff(ENUM(n, m), v), v)); VAR := (n, m) -&gt; n^2*((1-2/n)^m-(1-1/n)^(2*m)) + n*((1-1/n)^m-(1-2/n)^m); </pre> <p><strong>Addendum.</strong> As pointed out by @JMoravitz we may need some clarification of the 
reasoning by which the probabilities are obtained. Note that $k$ distinct outcomes first of all require a choice of these $k$ values from the $n$ possibilities, for a factor of ${n\choose k}.$ Now we need to match up each of these $k$ values (think of them as listed sequentially from smallest to largest) with a subset of the $m$ rolls which may not be empty (this is what prevents double counting here). This yields the labeled combinatorial class (labeled means EGFs)</p> <p>$$\def\textsc#1{\dosc#1\csod} \def\dosc#1#2\csod{{\rm #1{\small #2}}} \textsc{SEQ}_{=k}(\textsc{SET}_{\ge 1}(\mathcal{Z}))$$</p> <p>or alternatively (compare to Stirling numbers of the second kind)</p> <p>$$\textsc{SEQ}_{=k}(\textsc{SET}_{=1}(\mathcal{Z}) + \textsc{SET}_{=2}(\mathcal{Z}) + \textsc{SET}_{=3}(\mathcal{Z}) + \cdots).$$</p> <p>We get the generating function</p> <p>$$\left(z+\frac{z^2}{2!}+\frac{z^3}{3!}+\frac{z^4}{4!}+\cdots\right)^k = (\exp(z)-1)^k$$</p> <p>and may then continue as before.</p> <p><P> <strong>Addendum II.</strong> Another combinatorial class we can use here is</p> <p>$$\textsc{SEQ}_{=n}(\mathcal{E} + \mathcal{U}\times\textsc{SET}_{\ge 1}(\mathcal{Z})).$$</p> <p>This gives the EGF</p> <p>$$G(z, u) = (1+u(\exp(z)-1))^n = \sum_{k=0}^n {n\choose k} u^k (\exp(z)-1)^k.$$</p> <p>On evaluating </p> <p>$$\left. \frac{\partial}{\partial u} G(z, u) \right|_{u=1}$$</p> <p>using the second form we get the same sum as before. We can also give an alternate evaluation</p> <p>$$\frac{1}{n^m} \times m! [z^m] \left. \frac{\partial}{\partial u} G(z, u) \right|_{u=1} = \frac{n}{n^m} \times m! [z^m] \exp((n-1)z) (\exp(z) - 1) \\ = \frac{n}{n^m} \times m! 
[z^m] (\exp(nz) - \exp((n-1)z)) = \frac{n}{n^m} (n^m - (n-1)^m) \\ = n \left(1-\left(1-\frac{1}{n}\right)^m\right).$$</p> <p>For the next factorial moment we need another derivative which is</p> <p>$$n(n-1)(1+u(\exp(z)-1))^{n-2} (\exp(z)-1)^2.$$</p> <p>Set $u=1$ to get</p> <p>$$n(n-1) \exp((n-2)z)(\exp(2z)-2\exp(z)+1).$$</p> <p>Extracting coefficients now yields</p> <p>$$\frac{n(n-1)}{n^m} \left(n^m - 2 (n-1)^m + (n-2)^m\right)$$</p> <p>and the rest is as in the first version. Note that we get a faster enumeration routine if we admit the classification from the introduction. The result is midway between enumeration and the closed form and is shown below.</p> <pre> with(combinat); ENUM2 := proc(n, m) option remember; local gf, part, psize, mset; gf := 0; part := firstpart(m); while type(part, &#96;list&#96;) do psize := nops(part); mset := convert(part, &#96;multiset&#96;); gf := gf + binomial(n, psize) *m!/mul(p!, p in part) *psize!/mul(p[2]!, p in mset)*v^psize; part := nextpart(part); od; gf/n^m; end; </pre>
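The closed forms can also be cross-checked by brute-force enumeration (a Python analogue of the Maple ENUM routine above, feasible for small $n$ and $m$; the function names are mine):

```python
from fractions import Fraction
from itertools import product

def moments_by_enumeration(n, m):
    """Exact mean and variance of the number of distinct outcomes, by brute force."""
    total = n ** m
    s1 = s2 = 0
    for roll in product(range(n), repeat=m):
        k = len(set(roll))                # number of distinct outcomes in this roll
        s1 += k
        s2 += k * k
    mean = Fraction(s1, total)
    return mean, Fraction(s2, total) - mean ** 2

def closed_forms(n, m):
    q1 = Fraction(n - 1, n) ** m          # (1 - 1/n)^m
    q2 = Fraction(n - 2, n) ** m          # (1 - 2/n)^m
    return n * (1 - q1), n**2 * (q2 - q1**2) + n * (q1 - q2)

for n, m in [(2, 5), (3, 4), (4, 3), (6, 4)]:
    assert moments_by_enumeration(n, m) == closed_forms(n, m)
```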
222,105
<p>I want to use geometric shapes in Mathematica to build complex shapes and use my raytracing algorithm on it. I have a working example where we can get the intersections from a combination of a <code>Cone[]</code> and <code>Cuboid[]</code>, e.g </p> <pre><code>shape1 = Cone[]; shape2 = Cuboid[]; (* add shapes in this list to make a more complicated shape *) shapes = {shape1, shape2}; (* this constains the shapes so the shape is considered as a whole *) constraints[shapes__] := And[## &amp; @@ (Not /@ Through[(RegionMember[RegionIntersection@##] &amp; @@@ Subsets[{shapes}, {2}])@#]), RegionMember[RegionUnion @@ (RegionBoundary /@ {shapes})]@#] &amp; direction = {-0.2, -0.2, -1}; point = {0.5, 0.5, 1.5}; line = HalfLine[{point, point + direction}]; intersections[l_, s__] := NSolve[# ∈ l &amp;&amp; constraints[s][#], #] &amp;@({x, y, z}[[;; RegionEmbeddingDimension[l]]]) (* find intersection *) intersection = intersections[line, ##] &amp; @@ shapes; points = Point[{x, y, z}] /. intersection; Graphics3D[{{Opacity[0.2], shapes}, line, {Red, points}}, PlotRange -&gt; {{-1, 1}, {-1, 1}, {-2, 2}}, Axes -&gt; True] </code></pre> <p>This works well, and we get the external intersections as we expect. 
</p> <p><a href="https://i.stack.imgur.com/3SCo1m.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/3SCo1m.png" alt="enter image description here"></a></p> <p>Now, let us try to take the difference between two shapes, modelling something like </p> <pre><code>square = Cuboid[]; ball = Ball[{0, 0, 1}, 1]; Region[RegionDifference[square, ball]] </code></pre> <p><a href="https://i.stack.imgur.com/tz64fm.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tz64fm.png" alt="enter image description here"></a></p> <pre><code>shapes = {RegionDifference[square, ball]}; direction = {0, 0, -1}; point = {0.5, 0.5, 5}; line = HalfLine[{point, point + direction}]; intersection = intersections[line, ##] &amp; @@ shapes </code></pre> <p>Does not work, with an error that the constraints are "<em>not a quantified system of equations and inequalities</em>"...even though the constraints look fine </p> <pre><code>constraints[shapes] (* (##1 &amp;) @@ Not /@ Through[ Apply[RegionMember[RegionIntersection[##1]] &amp;, Subsets[{{BooleanRegion[#1 &amp;&amp; ! #2 &amp;, {Cuboid[{0, 0, 0}], Ball[{0, 0, 1}, 1]}]}}, {2}], {1}][#1]] &amp;&amp; RegionMember[ RegionUnion @@ RegionBoundary /@ {{BooleanRegion[#1 &amp;&amp; ! #2 &amp;, {Cuboid[{0, 0, 0}], Ball[{0, 0, 1}, 1]}]}}][#1] &amp; *) </code></pre>
Tomi
36,939
<p>Tim Laska's solution is excellent. It is fast and accurate. However, for completeness, here is an alternative to the <code>NDSolve</code> approach, which finds the intersections directly instead of advancing a particle (i.e. we can jump straight between the intersections rather than advance step by step). </p> <p>Using the solution from <a href="https://mathematica.stackexchange.com/questions/222199/finding-the-regionmemberfunction-of-the-boundary-of-a-regiondifference?noredirect=1#comment565699_222199">here</a>: </p> <pre><code>line = HalfLine[{0.5, 0.5, 2}, {0, 0, -1}] intersection = NSolve[{x, y, z} \[Element] line &amp;&amp; RegionMember[ regionBoundary[RegionDifference[Cuboid[], Ball[]]]][{x, y, z}], {x, y, z}] regionBoundary[reg_?RegionQ] := Module[{x, y, z}, ImplicitRegion[ CylindricalDecomposition[RegionMember[reg, {x, y, z}], {x, y, z}, "Boundary"], {x, y, z}]] Show[{Region[RegionDifference[Cuboid[], Ball[]]], Region[Style[Point[{x, y, z}] /. intersection[[1]], Red]], Region[Style[Point[{x, y, z}] /. intersection[[2]], Red]]}] </code></pre> <p><a href="https://i.stack.imgur.com/h7CIT.png" rel="noreferrer"><img src="https://i.stack.imgur.com/h7CIT.png" alt="enter image description here"></a></p> <p>Intersections highlighted in red. </p>
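For comparison, the same intersection computation can be sketched without Mathematica (a Python sketch of my own, hard-coding the shapes and ray from the example: the unit cube <code>Cuboid[]</code>, the unit ball <code>Ball[]</code>, and the half-line from {0.5, 0.5, 2} in direction {0, 0, -1}):

```python
import math

p, d = (0.5, 0.5, 2.0), (0.0, 0.0, -1.0)  # ray: p + t*d, t >= 0
EPS = 1e-9

def at(t):
    return tuple(pi + t * di for pi, di in zip(p, d))

def in_cube(q):                 # Cuboid[] is the closed unit cube [0,1]^3
    return all(-EPS <= c <= 1 + EPS for c in q)

def in_open_ball(q):            # interior of Ball[], the unit ball at the origin
    return sum(c * c for c in q) < 1 - EPS

# Candidate parameters: crossings of the six cube face planes and of the sphere.
ts = []
for axis in range(3):
    if abs(d[axis]) > EPS:
        for face in (0.0, 1.0):
            ts.append((face - p[axis]) / d[axis])
a = sum(di * di for di in d)                       # |p + t d|^2 = 1, quadratic in t
b = 2 * sum(pi * di for pi, di in zip(p, d))
c = sum(pi * pi for pi in p) - 1
disc = b * b - 4 * a * c
if disc >= 0:
    ts += [(-b - math.sqrt(disc)) / (2 * a), (-b + math.sqrt(disc)) / (2 * a)]

# A candidate lies on the boundary of Cuboid[] \ Ball[] iff it is in the cube
# and not in the open ball (it already sits on a face plane or on the sphere).
hits = sorted({round(t, 9) for t in ts
               if t >= -EPS and in_cube(at(t)) and not in_open_ball(at(t))})
print([at(t) for t in hits])    # the two red points: z = 1 and z = sqrt(0.5)
```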