<p>I am having trouble with the following exercise, which concerns the equation </p> <p>$m\frac{dv}{dt}=gm-Kv^2$</p> <p>I need to solve the above equation knowing that $v(0)=32$, $m=128$, $g=10$, $K=5$.</p> <p>We have: $128\frac{dv}{dt}=1280-5v^2$</p> <p>$\frac{dv}{dt}=10-\frac{5}{128}v^2$</p> <p>$\frac{dv}{dt}=\frac{-5}{128}(v^2-256)$</p> <p>$\frac{dv}{v^2-256}=\frac{-5}{128}dt$</p> <p>I don't know how to continue.</p> <p>Please help.</p>
André Nicolas
<p><strong>Hint:</strong> Use <strong>partial fractions</strong> on the left. We have $$\frac{1}{v^2-256}=\frac{1}{32}\left(\frac{1}{v-16}-\frac{1}{v+16}\right).$$</p>
Ron Gordon
<p>Following @Andre</p> <p>$$\int \frac{dv}{v^2-256} = \frac{1}{32} \int dv \left ( \frac{1}{v-16} - \frac{1}{v+16} \right ) = \frac{1}{32} \ln{\left ( \frac{v-16}{v+16} \right ) }+ C$$</p>
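Following the two answers above, the separated equation can be integrated in closed form and the result checked numerically. Below is a minimal Python sketch (assuming the question's data $v(0)=32$, $m=128$, $g=10$, $K=5$): solving $\frac{1}{32}\ln\frac{v-16}{v+16} = -\frac{5}{128}t + C$ for $v$, and comparing against a Runge–Kutta integration of $\dot v = 10 - \frac{5}{128}v^2$.

```python
import math

def v_closed(t):
    # From (1/32) ln((v-16)/(v+16)) = -(5/128) t + C and v(0) = 32:
    # (v-16)/(v+16) = (1/3) e^(-5t/4); solving for v gives
    k = math.exp(-1.25 * t) / 3.0
    return 16.0 * (1.0 + k) / (1.0 - k)

def v_rk4(t_end, v0=32.0, n=1000):
    # Fourth-order Runge-Kutta for dv/dt = 10 - (5/128) v^2
    f = lambda v: 10.0 - (5.0 / 128.0) * v * v
    h = t_end / n
    v = v0
    for _ in range(n):
        k1 = f(v)
        k2 = f(v + h * k1 / 2)
        k3 = f(v + h * k2 / 2)
        k4 = f(v + h * k3)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return v
```

Both agree, and $v(t) \to 16$, the terminal velocity (the positive root of $10 - \frac{5}{128}v^2 = 0$).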
<p><strong>Proposition:</strong> Let <span class="math-container">$X$</span> be a compact Hausdorff space. Suppose there is a countable family of real-valued continuous functions <span class="math-container">$\{f_n\}_{n \in \mathbb{Z}_+}$</span> separating the points of <span class="math-container">$X$</span>, i.e. for all <span class="math-container">$x, y \in X$</span> with <span class="math-container">$x \neq y$</span> there exists <span class="math-container">$k:=k(x,y) \in \mathbb{Z}_+$</span> with <span class="math-container">$f_k(x)\neq f_k(y)$</span>. Let <span class="math-container">$$ d(x,y):=\sum_{n=1}^\infty \frac{\min\{|f_n(x)-f_n(y)|, 1\}}{2^n} $$</span> Then <span class="math-container">$X$</span> is metrizable by <span class="math-container">$d$</span>.</p> <p>I want to prove that for every open set <span class="math-container">$U$</span> and every <span class="math-container">$x \in U$</span> there exists a ball <span class="math-container">$B(x;r)$</span> such that <span class="math-container">$B(x;r)\subset U$</span>, and that every ball <span class="math-container">$B(x;r)$</span> contains an open set <span class="math-container">$U$</span> with <span class="math-container">$x \in U \subset B(x;r)$</span>. Here <span class="math-container">$B(x;r):=\{y\in X \mid d(x,y)&lt;r\}$</span>. I know <span class="math-container">$B(x;r)\supset \bigcap_{n \in \mathbb{Z}_+} \{y \in X : |f_n(x)-f_n(y)|&lt;r\}$</span>, but the right-hand side is an infinite intersection of open sets, so it need not be open.</p> <p>How can I prove this proposition?</p>
Henno Brandsma
<p>I'll denote <span class="math-container">$\Bbb Z^+$</span> by <span class="math-container">$\omega$</span>.</p> <p><span class="math-container">$\mathbb{R}^{\omega}$</span> (in the product topology) is metrisable by the metric <span class="math-container">$$D((x_n), (y_n))=\sum_{n \in \omega} \frac{\min(|x_n-y_n|, 1)}{2^n}$$</span> as is well-known, e.g. see my answer <a href="https://math.stackexchange.com/a/362319/4280">here</a>.</p> <p>Then from the <span class="math-container">$f_n$</span> we define <span class="math-container">$F: X \to \mathbb{R}^\omega$</span> by <span class="math-container">$F(x)=(f_n(x))_{n \in \omega}$</span> and note that <span class="math-container">$F$</span> is continuous, as <span class="math-container">$\pi_n \circ F = f_n$</span> is continuous for all <span class="math-container">$n$</span>, where <span class="math-container">$\pi_n$</span> is the projection onto the <span class="math-container">$n$</span>-th coordinate. This follows from the characterisation of the product topology as the smallest topology that makes all projections continuous, and is a standard fact proved in many textbooks. 
</p> <p>The fact that the <span class="math-container">$f_n$</span> separate points means exactly that <span class="math-container">$F$</span> is injective (1-1).</p> <p>So <span class="math-container">$F: X \to F[X]$</span> is a continuous bijection between a compact space and a Hausdorff space (metric implies Hausdorff) and so <span class="math-container">$X$</span> is homeomorphic to <span class="math-container">$F[X]$</span> and the pulled-back metric of <span class="math-container">$F[X]\subseteq (\mathbb{R}^\omega, D)$</span> to <span class="math-container">$X$</span> is exactly <span class="math-container">$d(x,y)=D(F(x), F(y))$</span> and as <span class="math-container">$D$</span> is a metric for <span class="math-container">$F[X]$</span> and <span class="math-container">$F$</span> is a homeomorphism, <span class="math-container">$d$</span> (i.e. your metric on <span class="math-container">$X$</span>) is a metric for <span class="math-container">$X$</span>, as required.</p> <p>E.g. <span class="math-container">$B_d(x,r) = F^{-1}[B_D(F(x),r)]$</span> so <span class="math-container">$d$</span>-open balls are open and inverse images of a base under a homeomorphism form a base etc.</p>
<p>I am learning to calculate the arc length by reading a textbook, and there is a question</p> <p><a href="https://i.stack.imgur.com/Zigqv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zigqv.png" alt="enter image description here"></a></p> <p>However, I get stuck at calculating</p> <p><span class="math-container">$$\int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} \frac{\sec{(\theta)} (1+\tan^2{(\theta)})} {\tan{\theta}} d\theta$$</span> How can I continue calculating it?</p> <p><strong>Update 1:</strong></p> <p><span class="math-container">$$\int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} \frac{\sec{(\theta)} (1+\tan^2{(\theta)})} {\tan{\theta}} d\theta = \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} (\csc{(\theta)} + \sec{(\theta)} \tan{(\theta)}) d\theta \\ = \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} \csc{(\theta)}\, d\theta + \left.\frac{1}{\cos{(\theta)}}\right|^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} \\ = \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} \csc{(\theta)}\, d\theta + \frac{1}{\cos{(\arctan{\sqrt{15}})}} - \frac{1}{\cos{(\arctan{\sqrt{3}})}}$$</span></p> <p>But how can I get the final result?</p> <p><strong>Update 2:</strong></p> <p>Because <span class="math-container">$\frac{1}{\cos{(x)}} = \sqrt{ \frac{\cos^2{(x)} + \sin^2{(x)}}{\cos^2{(x)}}} = \sqrt{1+\tan^2{(x)}}$</span>, I get </p> <p><span class="math-container">$$\frac{1}{\cos{(\arctan{\sqrt{15}})}} - \frac{1}{\cos{(\arctan{\sqrt{3}})}} = \sqrt{1+15} - \sqrt{1+3} = 2$$</span> </p> <p>However, for the first part <span class="math-container">$\int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} \csc{(\theta)} d\theta$</span>, I get </p> <p><span class="math-container">$$ \int^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}} \csc{(\theta)} d\theta = \log \tan{\frac{\theta}{2}} \Big|^{\arctan{\sqrt{15}}}_{\arctan{\sqrt{3}}}$$</span></p> <p>How can I continue it?</p>
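The computations in the updates can be sanity-checked numerically. A quick Python sketch: verify the rewriting of the integrand used in Update 1, the endpoint value $2$ from Update 2, and that Simpson's rule over the interval matches the antiderivative $\log\tan\frac{\theta}{2}$ plus that endpoint term.

```python
import math

a, b = math.atan(math.sqrt(3)), math.atan(math.sqrt(15))

def integrand(t):
    # sec(t) (1 + tan(t)^2) / tan(t)
    return (1 / math.cos(t)) * (1 + math.tan(t) ** 2) / math.tan(t)

def split(t):
    # csc(t) + sec(t) tan(t), the rewriting used in Update 1
    return 1 / math.sin(t) + math.tan(t) / math.cos(t)

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule, n even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    s += 4 * sum(f(lo + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(lo + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

# Endpoint term from Update 2: 1/cos(arctan x) = sqrt(1 + x^2), so 4 - 2 = 2
endpoint = 1 / math.cos(b) - 1 / math.cos(a)
# Remaining cosecant part, via the antiderivative log tan(t/2)
csc_part = math.log(math.tan(b / 2)) - math.log(math.tan(a / 2))
```

So the integral equals `csc_part + 2`, and the direct numerical value `simpson(integrand, a, b)` agrees.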
caden Hong
<p>The example is inappropriate, as <span class="math-container">$\int e^{x^2} dx$</span> does not have an elementary antiderivative. I think you actually mean <span class="math-container">$\int e^{x^2} x\, dx$</span>: the substitution <span class="math-container">$u=x^2$</span> lets you see <span class="math-container">$\frac{1}{2} \int e^u du$</span>, and then the established formula gives an elementary answer.</p> <p>There is nothing wrong with writing <span class="math-container">$dx = du/2x$</span>; it is equivalent: <span class="math-container">$\int e^{x^2} x\, dx = \int e^{x^2} x \frac{1}{2x} du = \frac{1}{2} \int e^u du$</span>. You just cannot move the <span class="math-container">$x$</span> outside the integral, since <span class="math-container">$x$</span> is a variable of integration.</p>
<p>I am in the middle of an exercise and I am stuck in a final step in which I want to show that <span class="math-container">$y-x^2 \notin \langle x(x-1), (x-1)y\rangle$</span> in <span class="math-container">$\mathbb{C}[x,y]$</span>. My first thought was that <span class="math-container">$x-1$</span> does not divide <span class="math-container">$y-x^2$</span>, but I have no idea how to prove this properly.</p>
Blue
<p>There's an interesting generalization of this result to three dimensions.</p> <p>Take <span class="math-container">$\triangle ABC$</span> (with angles <span class="math-container">$\alpha$</span>, <span class="math-container">$\beta$</span>, <span class="math-container">$\gamma$</span> and circumradius <span class="math-container">$r$</span>) to lie in the <span class="math-container">$xy$</span>-plane using coordinates <span class="math-container">$$\begin{align} A &amp;= (r\cos\theta,r\sin\theta,0)\\ B &amp;=(r\cos(\theta+2\gamma),r\sin(\theta+2\gamma),0)\\ C &amp;=(r\cos(\theta-2\beta),r\sin(\theta-2\beta),0) \end{align}$$</span> Let point <span class="math-container">$P$</span> lie in the <span class="math-container">$xz$</span>-plane, with <span class="math-container">$p:=|OP|$</span> and <span class="math-container">$\phi$</span> the angle between <span class="math-container">$\overline{OP}$</span> and the positive <span class="math-container">$x$</span>-axis; thus, <span class="math-container">$$P = (p\cos\phi,0,p\sin\phi)$$</span> Then we have <span class="math-container">$$\begin{align} u^2 &amp;:= |BC|^2|PA|^2 = 4 r^2 \sin^2\alpha\cdot\left(p^2 + r^2 - 2 p r \cos\phi\cos\theta\right) \\ v^2 &amp;:= |CA|^2|PB|^2 = 4r^2\sin^2\beta\cdot\left(p^2 + r^2 - 2 p r \cos\phi\cos(\theta+2\gamma)\right) \\ w^2 &amp;:= |AB|^2|PC|^2 = 4r^2\sin^2\gamma\cdot\left(p^2 + r^2 - 2 p r \cos\phi\cos(\theta-2\beta)\right) \end{align}$$</span></p> <p>By Heron's Formula, <span class="math-container">$$\begin{align} \Delta_0^2 &amp;= \frac1{16}(u+v+w)(-u+v+w)(u-v+w)(u+v-w) \\[4pt] &amp;= \frac1{16}\left(-u^4-v^4-w^4+2u^2v^2+2u^2w^2+2v^2w^2\right) \\[4pt] &amp;= \text{... 
Mathematica ...} \\[4pt] &amp;= 4 r^4 \sin^2\alpha\sin^2\beta\sin^2\gamma \left(p^4 + r^4 - 2 p^2 r^2 \cos 2\phi\right) \\[4pt] &amp;=\left(\frac12\cdot 2r\sin\alpha\cdot 2r\sin\beta\cdot\sin\gamma\right)^2\left(p^4+r^4-2p^2r^2(2\cos^2\phi-1)\right)\\[4pt] &amp;=|\triangle ABC|^2\left(p^2 + r^2 - 2 p r \cos\phi\right) \left(p^2 + r^2 + 2 p r \cos\phi\right) \end{align}$$</span></p> <p><a href="https://i.stack.imgur.com/8DzQy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/8DzQy.png" alt="enter image description here"></a></p> <p>Interestingly, if we define <span class="math-container">$R$</span> and <span class="math-container">$R'$</span> as the points where the <span class="math-container">$x$</span>-axis meets the circumcircle —that is, the points where the plane through <span class="math-container">$\overline{OP}$</span>, perpendicular to the plane of <span class="math-container">$\triangle ABC$</span>, meets the circumcircle— the above becomes</p> <blockquote> <p><span class="math-container">$$\Delta_0 = |\triangle ABC| \;|PR|\;|PR'| \tag{$\star$}$$</span></p> </blockquote> <p>For <span class="math-container">$P$</span> in the plane of <span class="math-container">$\triangle ABC$</span>, the reader may recognize the product <span class="math-container">$|PR||PR'|$</span> as the absolute value of the <a href="https://en.wikipedia.org/wiki/Power_of_a_point" rel="nofollow noreferrer">power of <span class="math-container">$P$</span></a> with respect to the circumcircle. That product is equal to <span class="math-container">$\left|p^2-r^2\right|$</span>, yielding the result in the question. <span class="math-container">$\square$</span></p> <p>It seems as though <span class="math-container">$(\star)$</span> is trying to tell us something, but I'm not sure what it is ...</p>
<p>Let <span class="math-container">$f$</span> be an integrable function on <span class="math-container">$[a,b]$</span>. Then I have to show that there exists <span class="math-container">$x$</span> in <span class="math-container">$[a,b]$</span> such that <span class="math-container">$\int_a^x f=\int_x^b f$</span>. I understand it intuitively: for any interval <span class="math-container">$[a,b]$</span>, we can find a point <span class="math-container">$c$</span> in between (in some cases, an endpoint of the interval) such that the area under the curve between <span class="math-container">$x=a$</span> and <span class="math-container">$x=c$</span> is exactly equal to the area under the curve between <span class="math-container">$x=c$</span> and <span class="math-container">$x=b$</span>, i.e. there is a point where the area is exactly halved. But I am having trouble proving it analytically in a rigorous manner. There is a hint that it can be proved using the intermediate value theorem, but I do not know how to do it. Please suggest how to proceed.</p>
paw88789
<p>The <span class="math-container">$48$</span> non-ace cards go into five (some possibly empty) runs of non-ace cards: before the first ace; between the first and second aces; ...; after the fourth ace.</p> <p>By symmetry, the expected length of each of these non-ace runs is <span class="math-container">$\frac{48}{5}=9.6$</span>. So the expected position of the first ace is <span class="math-container">$9.6+1=10.6$</span>. Similarly, the expected position of the last ace is <span class="math-container">$52-9.6=42.4$</span>.</p>
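The value $10.6$ is easy to confirm by simulation — a small Monte Carlo sketch in Python (positions counted from $1$):

```python
import random

def first_ace_position(rng):
    # 52-card deck; only ace/non-ace matters, so mark 4 cards True, 48 False.
    deck = [True] * 4 + [False] * 48
    rng.shuffle(deck)
    return deck.index(True) + 1  # 1-based position of the first ace

rng = random.Random(0)
trials = 200_000
mean_first = sum(first_ace_position(rng) for _ in range(trials)) / trials
```

The exact expectation is $1 + \frac{48}{5} = \frac{53}{5} = 10.6$, and by symmetry the last ace sits at $53 - 10.6 = 42.4$ on average.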
<blockquote> <p>Let <span class="math-container">$f:\mathbb R\to\mathbb R$</span> differentiable such that <span class="math-container">$f(0)=0$</span> and <span class="math-container">$f'(x)=[f(x)]^2$</span> for all <span class="math-container">$x\in\mathbb R$</span>. Prove that <span class="math-container">$f(x)=0$</span> for all <span class="math-container">$x\in\mathbb R$</span>.</p> </blockquote> <p>My thoughts: </p> <p>Suppose, to the contrary, that <span class="math-container">$\exists a\in\mathbb R$</span> such that <span class="math-container">$f(a)\neq 0$</span>. Assume <span class="math-container">$f(a)&gt;0$</span> <span class="math-container">$\left(f(a)&lt;0 \mbox{ is analogous}\right)$</span>. Hence, <span class="math-container">$f'(a)&gt;0$</span> and therefore there is <span class="math-container">$\delta&gt;0$</span> such that <span class="math-container">$x\in\mathbb R$</span> and <span class="math-container">$a&lt;x&lt;a+\delta$</span> implies <span class="math-container">$0&lt;f(a)&lt;f(x)$</span>. Thus <span class="math-container">$f(x)\neq 0$</span> for all <span class="math-container">$x\in(a,a+\delta)$</span>. We have that <span class="math-container">$\dfrac{1}{f},h:(a,a+\delta)\to\mathbb R$</span> given by <span class="math-container">$h(x)=-x$</span> are primitives of the function <span class="math-container">$t(x)=-1$</span>, <span class="math-container">$x\in(a,a+\delta)$</span> since <span class="math-container">$$ \left(\frac{1}{f}\right)'(x)=\frac{-f'(x)}{f^2(x)}=\frac{-f^2(x)}{f^2(x)}=-1. $$</span> and <span class="math-container">$h'(x)=-1,\; \forall x\in(a,a+\delta)$</span>. Since primitives differ by a constant, it follows that <span class="math-container">$$ \frac{1}{f(x)}-h(x)= C \Longrightarrow f(x)=\frac{1}{-x+C},\quad C\in\mathbb R. $$</span></p> <p>I stop here. 
My attempt was to be able to evaluate <span class="math-container">$f(0)=\frac{1}{C}\neq 0$</span>, which would give a contradiction, but my function is defined only in <span class="math-container">$(a,a+\delta)$</span>. </p>
Kavi Rama Murthy
<p>On the open set <span class="math-container">$\{x:f(x) \neq 0\}$</span> we get <span class="math-container">$(x+c)f(x)=-1$</span> for some constant <span class="math-container">$c$</span>. This implies that the continuous function <span class="math-container">$(x+c)f(x)$</span> takes only the values <span class="math-container">$0$</span> and <span class="math-container">$-1$</span> on <span class="math-container">$\mathbb R$</span>. This implies that it is <span class="math-container">$\equiv 0$</span> or <span class="math-container">$\equiv -1$</span>. Second possibility is ruled out by taking <span class="math-container">$x =-c$</span>. Hence <span class="math-container">$(x+c)f(x)\equiv 0$</span> which implies <span class="math-container">$f \equiv 0$</span>.</p>
<p>I am having trouble understanding a certain part of the proof of why a function cannot approach two different limits near $a$, so I will just list the relevant parts. If this is not enough/ambiguous then please tell me and I will type out the whole proof.</p> <p>So, suppose we now have:</p> <p>$$ \text{if } 0&lt;|x-a|&lt;\delta_1, \text{ then } |f(x)-l|&lt;\epsilon \hspace{5cm} (1)$$</p> <p>and</p> <p>$$ \text{if } 0&lt;|x-a|&lt;\delta_2, \text{ then } |f(x)-m|&lt;\epsilon \hspace{5cm} (2)$$</p> <p>and here's a quote from the text:</p> <blockquote> <p>We have had to use two numbers, $\delta_1$ and $\delta_2$, since there is no guarantee that the $\delta$ which works in one definition will work in the other. But, in fact, it is now easy to conclude that for any $\epsilon&gt;0$ there is some $\delta&gt;0$ such that, for all $x$, $$ \text{if } 0&lt;|x-a| &lt; \delta, \text{then } |f(x)-l| &lt; \epsilon \text{ and } |f(x)-m| \lt \epsilon$$ we simply chose $\delta=\text{min}(\delta_1,\delta_2)$</p> </blockquote> <p>I understand the need to use two distinct $\delta$. What I don't get is why selecting a $\delta$ that is the minimum of $\delta_1$ and $\delta_2$ will make that $\delta$ work in both (1) and (2). I mean, the limits are different, so why would I expect that the minimum of the two deltas will satisfy both conditions?</p> <p>Thank you in advance for any help provided.</p>
Etienne
<p>The function $t\mapsto \vert B(y,t)\vert$ is positive and $\mathcal C^1$ (in fact, it is $c\, t^n$ for some constant $c$); so you just need to show that $\varphi(t)=\int_{B(y,t)} u(x)\, dx$ is $\mathcal C^1$. If you put $x=y+t\xi$, then the change of variable formula gives $\varphi (t)=t^n\int_{B(0,1)} u(y+t\xi )\, d\xi $, so you have to show that $\psi(t)=\int_{B(0,1)} u(y+t\xi)\, d\xi$ is $\mathcal C^1$. But this follows from the usual differentiation theorems under the integral sign.</p>
<p>I have a sequence of numbers like 1, 7, 22, 45, 12, 96, 21, 45, 65, 36, 85, 14, 51, 16, 18, 17, 16, ..., 65, ...</p> <blockquote> <p>Is there any formula to check whether the sequence is random or not?</p> </blockquote> <p>In my case:</p> <ol> <li>the odd numbers are not random, since each is the previous one plus 2; </li> <li>the even numbers are not random; </li> <li>numbers can be repeated in the sequence, but only far apart;</li> <li>we cannot generate the sequence using any formula like (x+2)^2 - 2x.</li> </ol>
Gerry Myerson
<p>There is a very good discussion of this question in <em>Seminumerical Algorithms</em>, which is Volume 2 of Knuth's <em>The Art of Computer Programming</em>. </p>
<p>We know that an irrational number, unlike a rational number, does not have an eventually periodic decimal expansion.<br> For a rational number, this means we can find out which digit exists in any given position.<br> But what about irrational numbers?<br> For example:<br> How can we find out which digit exists in the fortieth position of $\sqrt{2}$, which equals $1.414213\ldots$?<br> Is it possible to solve this kind of problem for any irrational number?</p>
Mark Bennet
<p>You can use <a href="http://en.wikipedia.org/wiki/Continued_fraction" rel="nofollow">continued fraction</a> approximations to find rational numbers arbitrarily close to any irrational number.</p> <p>For $\sqrt 2$ this is equivalent to the chain of approximations $\frac 11, \frac 32, \frac 75, \frac {17}{12} \dots$ where the fractions satisfy $\cfrac {a_{n+1}}{b_{n+1}}=\cfrac {a_n+2b_n}{a_n+b_n}.$</p> <p>The accuracy of the estimate at the $n^{th}$ fraction is approximately $\left|\cfrac 1{b_n b_{n-1}} \right|$ - so you go far enough to get the accuracy you need to identify the decimal digit you want from the rational approximation.</p>
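Both halves of this answer can be sketched in a few lines of Python: the convergent recurrence $\frac{a_{n+1}}{b_{n+1}} = \frac{a_n+2b_n}{a_n+b_n}$, and digit extraction for $\sqrt{2}$ — done here with exact integer square roots, which sidesteps tracking the $1/(b_n b_{n-1})$ error bound by hand.

```python
from math import isqrt

def sqrt2_convergent(n):
    # a/b -> (a + 2b)/(a + b), starting from 1/1:
    # 1/1, 3/2, 7/5, 17/12, 41/29, 99/70, ...
    a, b = 1, 1
    for _ in range(n):
        a, b = a + 2 * b, a + b
    return a, b

def sqrt2_digits(k):
    # First k digits of sqrt(2), exactly: floor(sqrt(2 * 10^(2(k-1))))
    return str(isqrt(2 * 10 ** (2 * (k - 1))))

a, b = sqrt2_convergent(3)   # 17/12
```

For instance `sqrt2_digits(10)` gives the first ten digits, and the fifth convergent $99/70$ is already within $1/(70\cdot 29)$ of $\sqrt 2$.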
<p>Does the method of calculating the determinant of a $3\times 3$ matrix by diagonals also apply to a $4\times 4$ matrix?</p> <p>For example:</p> <p>$$\begin{matrix}2&amp;2&amp;1&amp;3|\\1&amp;4&amp;4&amp;5|\\5&amp;1&amp;1&amp;6|\\7&amp;1&amp;4&amp;5|\end{matrix}\begin{matrix}2&amp;2&amp;1\\1&amp;4&amp;4\\5&amp;1&amp;1\\7&amp;1&amp;4\end{matrix}$$</p> <p>$\det = 2\cdot4\cdot1\cdot5+\dotsb+3\cdot1\cdot1\cdot4 - 7\cdot1\cdot4\cdot3-\dotsb-5\cdot5\cdot4\cdot1 = 171$</p> <p>Is this valid?</p>
saulspatz
<p>No, it isn't valid. When you calculate the determinant of an $n\times n$ matrix by minors, you compute $n!$ products. When $n=3, n!=6$ and there are $6$ broken diagonals, and the "spaghetti rule" still involves $6$ products. But when $n=4, n!=24,$ and there are only $8$ broken diagonals, so there isn't much hope.</p> <p>You can easily come up with examples to prove it doesn't work. <s>(In fact, I imagine it's harder to come up with examples where it <em>does</em> work.)</s> </p>
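A concrete check, using the matrix from the question (a Python sketch): the "broken diagonal" sum reproduces the asker's $171$, while the true determinant is a different number entirely.

```python
from math import prod

def det(M):
    # Exact determinant by cofactor expansion along the first row.
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

M = [[2, 2, 1, 3],
     [1, 4, 4, 5],
     [5, 1, 1, 6],
     [7, 1, 4, 5]]
n = len(M)

# Sarrus-style rule: 2n = 8 wrapped-diagonal products instead of n! = 24 terms.
diagonal_rule = sum(
    prod(M[i][(j + i) % n] for i in range(n))      # "down-right" diagonals
    - prod(M[i][(j - i) % n] for i in range(n))    # "down-left" diagonals
    for j in range(n)
)
```

Here `diagonal_rule` is $171$ (the value in the question) but `det(M)` is $-114$, so the diagonal method indeed fails for $4\times 4$.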
Bernard
<p>The fastest method is Gaussian elimination to obtain a triangular matrix – in this case the determinant is the product of the diagonal elements: \begin{align} &amp;\begin{vmatrix} 2&amp;2&amp;1&amp;3\\1&amp;4&amp;4&amp;5\\5&amp;1&amp;1&amp;6\\7&amp;1&amp;4&amp;5 \end{vmatrix} =-\begin{vmatrix} 1&amp;4&amp;4&amp;5\\2&amp;2&amp;1&amp;3\\5&amp;1&amp;1&amp;6\\7&amp;1&amp;4&amp;5 \end{vmatrix} =-\begin{vmatrix}\begin{array}{rrrr} 1&amp;4&amp;4&amp;5\\0&amp;-6&amp;-7&amp;-7\\0&amp;-19&amp;-19&amp;-19\\0&amp;-27&amp;-24&amp;-30 \end{array}\end{vmatrix}=19\times 3\begin{vmatrix}\begin{array}{rrrr} 1&amp;4&amp;4&amp;5\\0&amp;6&amp;7&amp;7\\0&amp;1&amp;1&amp;1\\0&amp;9&amp;8&amp;10 \end{array}\end{vmatrix}\\[1ex] =&amp;-19\times 3\begin{vmatrix}\begin{array}{rrrr} 1&amp;4&amp;4&amp;5\\0&amp;1&amp;1&amp;1\\0&amp;6&amp;7&amp;7\\0&amp;9&amp;8&amp;10 \end{array}\end{vmatrix}=-19\times 3\begin{vmatrix}\begin{array}{rrrr} 1&amp;4&amp;4&amp;5\\0&amp;1&amp;1&amp;1\\0&amp;0&amp;1&amp;1\\0&amp;0&amp;-1&amp;1 \end{array}\end{vmatrix} =-19\times 3\begin{vmatrix} 1&amp;4&amp;4&amp;5\\0&amp;1&amp;1&amp;1\\0&amp;0&amp;1&amp;1\\0&amp;0&amp;0&amp;2 \end{vmatrix}=-114\\ &amp; \end{align}</p> <p>However, in the case of $4\times4$ matrices, you have two other possibilities, for a computation by blocks:</p> <ul> <li>If $M=\begin{pmatrix}A&amp;B\\C&amp;D \end{pmatrix}$ is a $4×4$ matrix consisting of blocks of size $2$, and if $CD=DC$, then we can compute a $2×2$ determinant: $$\det M=\det(AD-BC).$$</li> <li>We can use Laplace's expansion <em>along the first two columns</em>. To explain the computation, we introduce some notations:</li> </ul> <p>\begin{array}{ll} p_{ij}&amp;\text{is the $2×2$ determinant of rows $i$ and $j$ of the first two columns},\\ q_{ij}&amp;\text{is the $2×2$ determinant of rows $i$ and $j$ of the last two columns.} \end{array} With these notations, one has $$\det M=p_{12}q_{34}+p_{13}q_{42}+p_{14}q_{23}+q_{12}p_{34}+q_{13}p_{42}+q_{14}p_{23}.$$</p>
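The two-column Laplace expansion above can be verified directly on this matrix (a Python sketch; the brute-force Leibniz sum over all $4! = 24$ permutations serves as the reference value).

```python
from math import prod
from itertools import permutations

def det_leibniz(M):
    # Sum over permutations with inversion-count sign (exact; reference only).
    n = len(M)
    total = 0
    for s in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if s[i] > s[j])
        total += (-1) ** inv * prod(M[i][s[i]] for i in range(n))
    return total

M = [[2, 2, 1, 3],
     [1, 4, 4, 5],
     [5, 1, 1, 6],
     [7, 1, 4, 5]]

def p(i, j):  # 2x2 determinant of rows i, j (1-based) of the first two columns
    return M[i-1][0] * M[j-1][1] - M[i-1][1] * M[j-1][0]

def q(i, j):  # 2x2 determinant of rows i, j (1-based) of the last two columns
    return M[i-1][2] * M[j-1][3] - M[i-1][3] * M[j-1][2]

# Laplace expansion along the first two columns, with q(4,2) = -q(2,4) etc.
laplace = (p(1, 2) * q(3, 4) + p(1, 3) * q(4, 2) + p(1, 4) * q(2, 3)
           + q(1, 2) * p(3, 4) + q(1, 3) * p(4, 2) + q(1, 4) * p(2, 3))
```

Both evaluate to $-114$, matching the Gaussian elimination above.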
<p>Suppose $y_1=y_1(x_1,x_2)$ and $y_2=y_2(x_1,x_2)$, so that</p> <p>$$dy_1=\frac{\partial y_1}{\partial x_1}dx_1+\frac{\partial y_1}{\partial x_2} \, dx_2$$</p> <p>$$dy_2=\frac{\partial y_2}{\partial x_1}dx_1+\frac{\partial y_2}{\partial x_2} \, dx_2$$</p> <p>Then I've taken the product of the above two, but I am unable to reach the result.</p>
Doug M
<p>(5 red, 0 white) -- ${15\choose 5}$</p> <p>(4 red, 1 white) -- ${15\choose 4}{10\choose1}$</p> <p>(3 red, 2 white) -- ${15\choose 3}{10\choose2}$</p>
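These counts (which appear to assume a pool of $15$ red and $10$ white balls with $5$ drawn — the underlying question is not shown above, so that setup is an assumption) are straightforward with binomial coefficients:

```python
from math import comb

# Number of 5-ball draws by composition (red, white),
# assuming 15 red and 10 white balls available.
counts = {
    (5, 0): comb(15, 5),
    (4, 1): comb(15, 4) * comb(10, 1),
    (3, 2): comb(15, 3) * comb(10, 2),
}
```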
<p><a href="https://i.stack.imgur.com/Rzj9N.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rzj9N.jpg" alt="triangle diagram" /></a></p> <p>Given that:</p> <ol> <li>[BM) is the bisector of the angle ABC.</li> <li>(BM) and (AN) are parallel straight lines.</li> </ol> <p>I am trying to prove that the triangle ANB is an isosceles triangle with a main vertex B using Thales's Theorem.</p> <p>Here's what I have done:</p> <p>Since BM and AN are parallel, then by Thales's Theorem: <span class="math-container">$MA/MC = NB/BC = BM/AN$</span></p> <p>I don't know where to go from here. Any help is appreciated. Thank you.</p>
dodoturkoz
<p>Look for the alternate interior angles and corresponding angles in the figure: <span class="math-container">$\measuredangle MBA=\measuredangle BAN$</span> and <span class="math-container">$\measuredangle CBM=\measuredangle BNA$</span>. Since <span class="math-container">$\measuredangle MBA=\measuredangle CBM$</span> (BM is the bisector), the two base angles of triangle ANB are equal, so the triangle is isosceles.</p> <hr /> <p>If you insist on using Thales's theorem:</p> <p><span class="math-container">$$AM/CM = BN/BC$$</span></p> <p>From the interior angle bisector theorem:</p> <p><span class="math-container">$$AM/CM=AB/BC$$</span></p> <p>Combining the two:</p> <p><span class="math-container">$$BN/BC=AB/BC \implies AB=BN$$</span></p>
<p>I randomly pick a natural number <em>n</em>. Assuming that I would have picked each number with the same probability, what was the probability for me to pick <em>n</em> before I did it?</p>
AplanisTophet
<p>Some clarification should be added to the above answers.</p> <p>By partitioning the reals into non-measurable sets, one can devise a way to relate a real selected uniformly at random from an interval such as [0, 1) to a natural number using a method by which all natural numbers have an "equal," however undefined, probability of being mapped to (here, equal simply means that no natural is given preference as to being mapped to over any other natural). The problem is that, because non-measurable sets are used to do this, no meaningful cumulative distribution function can be established (as might be expected). To see an example of such a selection process, see here:</p> <p><a href="https://math.stackexchange.com/questions/2242758/is-this-fraction-undefined-infinite-probability-question#2242758">Is this fraction undefined? Infinite Probability Question.</a></p>
<p>Is the following inequality true?</p> <p>$\left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty \sum \limits_{k=1}^\infty \sum \limits_{l=1}^\infty a_{ij}\,a_{ik}\,a_{jl}\,a_{kl} \right) \leq \left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty a_{ij}^2 \right)^{1/2}\left( \sum \limits_{i=1}^\infty \sum \limits_{k=1}^\infty a_{ik}^2 \right)^{1/2}\left( \sum \limits_{j=1}^\infty \sum \limits_{l=1}^\infty a_{jl}^2 \right)^{1/2}\left( \sum \limits_{k=1}^\infty \sum \limits_{l=1}^\infty a_{kl}^2 \right)^{1/2}=\left( \sum \limits_{i=1}^\infty \sum \limits_{j=1}^\infty a_{ij}^2 \right)^2$</p> <p>where $a_{ij}$s are real numbers.</p>
barak manos
<p>If $x,y\in[0,999999999]$, then you can use $f(x,y)=1000000000x+y$.</p>
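This packs the pair into a single integer in base $10^9$; it is injective on the stated range (in fact a bijection onto $[0, 10^{18}-1]$) and invertible with one `divmod`. A minimal sketch:

```python
B = 1_000_000_000  # 10^9

def pair(x, y):
    # Injective for 0 <= x, y <= B - 1.
    assert 0 <= x < B and 0 <= y < B
    return B * x + y

def unpair(z):
    # Inverse: quotient is x, remainder is y.
    return divmod(z, B)
```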
<p>Does there exist any open linear (vector) subspace of a Hilbert space? I could not think of an example.</p> <p>Actually, I was reading the book by Simmons, where almost every theorem assumes that "M is a closed linear subspace". It seemed natural to me to think about subspaces which are not closed. I have got an example which is not closed: take the Hilbert space <strong>H = L^2[0,1], with the L^2 norm</strong>, and the subspace <strong>consisting of all polynomials</strong>; it is not closed because its closure is <strong>H</strong>, and it is not open either, as can be seen here: <a href="https://math.stackexchange.com/q/385464/428326">Set of all polynomials on [0, 1/2] is not open in C[0, 1/2]</a>. Then I asked myself for an example of an open subspace, but I could get nowhere, as I am not familiar with infinite-dimensional vector spaces. Not closed does not necessarily mean open.</p>
pitariver
<p>If a subspace <span class="math-container">$M \leq \mathcal{H}$</span> of a Hilbert space (or, generally, of any normed space) is open, then it contains a ball around the origin, <span class="math-container">$0 \in B_r(0) \subset M$</span>. But then for every nonzero vector <span class="math-container">$v \in \mathcal{H}$</span> we have <span class="math-container">$$ \frac{r}{2\Vert v \Vert} v \in B_r(0) \subset M, $$</span> and since <span class="math-container">$M$</span> is a linear subspace, scaling gives <span class="math-container">$v \in M$</span>. Thus the only open subspace of <span class="math-container">$\mathcal{H}$</span> is <span class="math-container">$\mathcal{H}$</span> itself.</p>
<p>I was trying to solve the ODE</p> <p>\begin{equation} \ddot{r} r = \alpha(\dot{r}^2-1) \end{equation} where $\alpha$ is an arbitrary constant. There are some simple cases: when $\alpha = -1$ you can use separation of variables to find the solution, and for the initial condition $r'(0)=1$ it also simplifies, to $r(t)= t + r(0)$. I am not sure how to take on the general case. Any help is welcome!</p>
Mathlover
<p>HINT:</p> <p>\begin{equation} \ddot{r} r = \alpha(\dot{r}^2-1) \end{equation}</p> <p>$$\frac{\dot{r}\ddot{r}}{\dot{r}^2-1}=\alpha\frac{\dot{r}}{r}$$</p> <p>$$\frac{\dot{r}\ddot{r}}{(\dot{r}+1)(\dot{r}-1)}=\alpha\frac{\dot{r}}{r}$$</p> <p>$$\frac{\ddot{r}}{\dot{r}+1} +\frac{\ddot{r}}{\dot{r}-1}=2\alpha\frac{\dot{r}}{r}$$</p> <p>Then integrate both side </p>
doraemonpaul
<p>Case $1$ : $\alpha=0$</p> <p>Then $\ddot{r}r=0$</p> <p>$r=0$ or $\ddot{r}=0$</p> <p>$r=0$ or $r=C_1t+C_2$</p> <p>$\therefore r=C_1t+C_2$</p> <p>Case $2$ : $\alpha\neq0$</p> <p>Then $\ddot{r}r=\alpha(\dot{r}^2-1)$</p> <p>$r\dfrac{d^2r}{dt^2}=\alpha\left(\left(\dfrac{dr}{dt}\right)^2-1\right)$</p> <p>Let $u=\dfrac{dr}{dt}$ ,</p> <p>Then $\dfrac{d^2r}{dt^2}=\dfrac{du}{dt}=\dfrac{du}{dr}\dfrac{dr}{dt}=u\dfrac{du}{dr}$</p> <p>$\therefore ru\dfrac{du}{dr}=\alpha(u^2-1)$</p> <p>$\dfrac{u}{u^2-1}du=\dfrac{\alpha}{r}dr$</p> <p>$\int\dfrac{u}{u^2-1}du=\int\dfrac{\alpha}{r}dr$</p> <p>$\dfrac{1}{2}\ln(u^2-1)=\alpha\ln r+c_1$</p> <p>$\ln\left(\left(\dfrac{dr}{dt}\right)^2-1\right)=2\alpha\ln r+c_2$</p> <p>$\left(\dfrac{dr}{dt}\right)^2-1=C_1r^{2\alpha}$</p> <p>$\dfrac{dr}{dt}=\pm\sqrt{C_1r^{2\alpha}+1}$</p> <p>$dt=\pm\dfrac{dr}{\sqrt{C_1r^{2\alpha}+1}}$</p> <p>$\int dt=\pm\int\dfrac{dr}{\sqrt{C_1r^{2\alpha}+1}}$</p> <p>$t=\pm\int_k^r\dfrac{dr}{\sqrt{C_1r^{2\alpha}+1}}+C_2$</p>
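The derivation above produces the first integral $\left(\dot r^2-1\right)r^{-2\alpha}=C_1$. A numerical cross-check (a Python sketch with arbitrary sample values $\alpha=1$, $r(0)=1$, $\dot r(0)=2$, so $C_1=3$): integrate the original second-order ODE and confirm that this quantity stays constant along the solution.

```python
def rhs(y, alpha):
    r, v = y
    # Original equation r * r'' = alpha * (r'^2 - 1), as a first-order system.
    return (v, alpha * (v * v - 1) / r)

def rk4(y, alpha, h, steps):
    # Classical fourth-order Runge-Kutta on the pair (r, r').
    for _ in range(steps):
        k1 = rhs(y, alpha)
        k2 = rhs((y[0] + h * k1[0] / 2, y[1] + h * k1[1] / 2), alpha)
        k3 = rhs((y[0] + h * k2[0] / 2, y[1] + h * k2[1] / 2), alpha)
        k4 = rhs((y[0] + h * k3[0], y[1] + h * k3[1]), alpha)
        y = (y[0] + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6,
             y[1] + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6)
    return y

alpha = 1.0
y0 = (1.0, 2.0)                               # r(0) = 1, r'(0) = 2
c1 = (y0[1] ** 2 - 1) / y0[0] ** (2 * alpha)  # C1 = 3
r, v = rk4(y0, alpha, h=1e-3, steps=500)      # integrate to t = 0.5
```

Along the computed trajectory, $(\dot r^2-1)/r^{2\alpha}$ remains (numerically) equal to $C_1$.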
<p>If $\gcd(m,15)= \gcd(n,15)=1$, show that either $15|(m^4 + n^4)$ or $15|(m^4 - n^4)$. </p> <p>I'm really stuck on this proof. This is what I know: </p> <p>Since the $\gcd(m,15)= 1$ we can write it as $mx+15y=1$ where $x,y$ are integers. Also, since $\gcd(n,15)=1$ it can also be written as $nu + 15v = 1$ where $u,v$ are integers. I think I'm missing something but I just can't see it. </p>
DonAntonio
<p>The conditions mean that $\;m,\,n\;$ are divisible neither by $\;3\;$ nor by $\;5\;$; then $\;m^4\,,\,n^4\;$ are both $\;\equiv1\pmod 3\;$ (since $\;m^2\equiv1\pmod 3\;$ for any $\;m\;$ coprime to $\;3$), whereas by Fermat's Little Theorem both $\;m^4,\,n^4\equiv1\pmod 5\;$, so $\;15\mid m^4-n^4\;$ ...</p>
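Since only residues mod $15$ matter, the claim (indeed $m^4\equiv 1 \pmod{15}$ for every $m$ coprime to $15$, hence always $15\mid m^4-n^4$) can be checked exhaustively — a one-screen Python sketch:

```python
from math import gcd

# Residues mod 15 that are coprime to 15.
units = [m for m in range(1, 15) if gcd(m, 15) == 1]

# Fourth powers of all such residues, reduced mod 15 -- should be just {1}.
fourth_powers = {pow(m, 4, 15) for m in units}

# Therefore every difference m^4 - n^4 is divisible by 15.
all_diffs_divisible = all((m ** 4 - n ** 4) % 15 == 0
                          for m in units for n in units)
```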
<p>Is there any simple way to construct an entire function $f$ such that $$\forall p \in {\mathbb N} \quad f(2^p)=(-1)^p\,?$$</p>
Community
<p>Typically, one uses both the <a href="https://en.wikipedia.org/wiki/Weierstrass_factorization_theorem" rel="noreferrer">Weierstrass factorization</a> and <a href="https://en.wikipedia.org/wiki/Mittag-Leffler%27s_theorem" rel="noreferrer">Mittag-Leffler</a> theorem to prove the existence of an entire function $f$ such that $f(z_n)=w_n$ for given $z_n\to\infty$ and given $w_n$. </p> <ol> <li>Get an entire function $g$ with a simple zero at every $z_n$ (by Weierstrass). </li> <li>Get a meromorphic function $h$ with principal part $w_n(g'(z_n)(z-z_n))^{-1}$ at every $z_n$ (by Mittag-Leffler)</li> <li>Let $f=gh$: this is an entire function and $f(z_n)=w_n$.</li> </ol> <hr> <p>But if you want to avoid heavy-duty theorems, see the article <a href="http://www.jstor.org/stable/2370666" rel="noreferrer"><em>On Entire Function Interpolation</em></a> by I. M. Sheffer, American Journal of Mathematics Vol. 49, No. 3 (Jul., 1927), pp. 329-342 which offers a more elementary proof of the existence of an entire function with prescribed values at positive integers. </p>
606,656
<p>So I heard this a long time ago and I recently started thinking about it again. So I was told that the complex function $f(z)=1/z$ maps everything inside a circle to points outside the circle (the remaining part of the complex plane). Why is this?</p> <p>I realize we can write $$f(z)=\frac{1}{z}=\frac{\overline{z}}{|z|^2}$$ so that there is a reflection $\overline{z}$ and a dilation $1/|z|^2$. But I don't see why this then only maps to points outside the circle. Can you explain?</p>
Daniel Robert-Nicoud
60,713
<p>A point is inside the (unit) circle if $|z|&lt;1$, thus $$|f(z)|=\frac{1}{|z|}&gt;1$$ is outside the circle if $z$ was inside the circle, and vice-versa.</p>
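A spot-check of this with Python complex numbers (the sample points below are arbitrary choices): points inside the unit circle map outside under $z \mapsto 1/z$, and vice versa.

```python
# Points inside the unit circle map outside under z -> 1/z, and conversely,
# since |1/z| = 1/|z|.
inside = [0.5 + 0.1j, 0.3 - 0.7j, -0.2 + 0.05j]   # |z| < 1
outside = [2 + 1j, -1.5 + 0.2j, 0.1 - 3j]         # |z| > 1
inside_maps_out = all(abs(1 / z) > 1 for z in inside)
outside_maps_in = all(abs(1 / z) < 1 for z in outside)
print(inside_maps_out, outside_maps_in)  # True True
```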
3,197,301
<p>Let <span class="math-container">$OABC$</span> be a tetrahedron such that <span class="math-container">$|OA|=|OB|=|OC|$</span>. Denote by <span class="math-container">$D$</span> and <span class="math-container">$E$</span> the midpoints of segments <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span> respectively. If <span class="math-container">$\alpha=\angle(DOE)$</span> and <span class="math-container">$\beta=\angle(BOC)$</span> what is the ratio <span class="math-container">$\beta/\alpha$</span>? </p> <p>It is obvious that <span class="math-container">$|BC|=2|DE|$</span> since triangles <span class="math-container">$\Delta(ABC)$</span> and <span class="math-container">$\Delta(ADE)$</span> are similar and <span class="math-container">$D, E$</span> are midpoints by assumption. I would expect that <span class="math-container">$\beta/\alpha\geqslant 2$</span> but I haven't been able to confirm this. I tried to use the cosine law for the segments <span class="math-container">$|BC|$</span> and <span class="math-container">$|DE|$</span> but without success. </p>
Martin Argerami
22,857
<p>This works in any metric space. So you have a distance <span class="math-container">$d$</span> and the ball of radius <span class="math-container">$1$</span> around <span class="math-container">$x_0$</span>. </p> <p>Let <span class="math-container">$r=1-d(x_1,x_0)$</span>. If <span class="math-container">$x$</span> is at distance less than <span class="math-container">$r$</span> from <span class="math-container">$x_1$</span>, then by the triangle inequality <span class="math-container">$$ d(x,x_0)\leq d(x,x_1)+d(x_1,x_0) &lt; r + 1-r = 1. $$</span> In your example <span class="math-container">$x_0=(0,0)$</span>, <span class="math-container">$x_1= (x_1,x_2)$</span> and <span class="math-container">$d((a,b),(c,d))=\sqrt{(a-c)^2+(b-d)^2}$</span>.</p>
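A concrete instance of the triangle-inequality argument above, in the plane (the center $x_1$ below is an arbitrary point inside the unit ball): every point within $r = 1 - d(x_1, x_0)$ of $x_1$ stays inside the unit ball around $x_0$.

```python
# Randomly sample points within distance r of x1 and check they all lie in
# the unit ball around x0, as the triangle inequality guarantees.
import math
import random

def d(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

x0, x1 = (0.0, 0.0), (0.6, 0.3)
r = 1 - d(x1, x0)
random.seed(0)
all_inside = True
for _ in range(1000):
    theta, s = random.uniform(0, 2 * math.pi), r * random.random()
    x = (x1[0] + s * math.cos(theta), x1[1] + s * math.sin(theta))
    all_inside = all_inside and d(x, x0) < 1
print(all_inside)  # True
```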
146,394
<p>Here is a nice figure</p> <p><a href="https://i.stack.imgur.com/e6Kuf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/e6Kuf.jpg" alt="enter image description here"></a></p> <p>Now let's create a grid</p> <pre><code>plot = Show[GraphicsGrid[{{im1, im1, im1}, {SpanFromLeft, im1, im1}, {SpanFromLeft, SpanFromLeft, im1}}, Spacings -&gt; 0]] </code></pre> <p>which produces</p> <p><a href="https://i.stack.imgur.com/1ptri.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1ptri.jpg" alt="enter image description here"></a></p> <p>As you can see, three dots (...) appear at the empty slots of the grids.</p> <p>My question: Is there a way to remove these dots?</p> <p>Many thanks in advance!</p>
Peter
63,945
<p>I had the same issue. You first have to find the hidden directory. Go to the search icon in the task bar and </p> <p>1) type file explorer,</p> <p>2) view hidden system files and hidden system data, something like that (I have a Greek version of Windows 10 so it's in Greek)</p> <p>3) select the option for programmers and select view settings to view hidden files and system data. </p> <p>4) Select view settings. </p> <p>5) Then a window pops out. Select from the bar the option view hidden files, folders and disk units (sth like that). Then tick the box. </p> <p>Mathematica is installed in C/users/Your Username/AppData(which was hidden)/Roaming/Mathematica/Applications/Xact/Invar. You will see that the Riemann file is not there. Put it in Invar and then you are ok. </p> <p>I hope it helps.</p>
1,988,021
<p>Consider $n$ objects each with an associated probability $p_i$, $i\in\{1,\dots,n\}$. If I sample objects $k$ times independently with replacement according to the probability distribution defined by the $p_i$, how does one compute the expected number of times you sample an object you have sampled before?</p> <p>We can assume that $n &gt; k$.</p>
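One way to answer this (the closed form below is a sketch I am proposing, not something stated in the question): a draw is a "repeat" exactly when its object appeared in an earlier draw, so the expected number of repeats is $k$ minus the expected number of distinct objects, i.e. $k - \sum_i \bigl(1-(1-p_i)^k\bigr)$. A Monte-Carlo simulation agrees with this formula.

```python
# Monte-Carlo check of E[repeats] = k - sum_i (1 - (1 - p_i)^k).
import random

def expected_repeats(p, k):
    return k - sum(1 - (1 - pi) ** k for pi in p)

p, k = [0.5, 0.3, 0.15, 0.05], 6
random.seed(1)
trials = 200_000
total = sum(k - len(set(random.choices(range(len(p)), weights=p, k=k)))
            for _ in range(trials))
mc_estimate = total / trials
close = abs(mc_estimate - expected_repeats(p, k)) < 0.02
print(close)  # True (up to Monte-Carlo noise)
```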
Kiki
281,305
<p>Try the hyperbolic substitution $x=\sinh(y)$</p>
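The substitution $x=\sinh(y)$ is the standard move for integrands containing $\sqrt{1+x^2}$ (the original integral is not shown here, so this is an assumed context); it turns $\int dx/\sqrt{1+x^2}$ into $\int dy$, giving the antiderivative $\operatorname{arcsinh}(x)$, which can be checked numerically.

```python
# Numeric spot-check that d/dx asinh(x) = 1/sqrt(1 + x^2), which is what the
# substitution x = sinh(y) delivers for the integrand 1/sqrt(1+x^2).
import math

ok = True
for x in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    h = 1e-6
    deriv = (math.asinh(x + h) - math.asinh(x - h)) / (2 * h)
    ok = ok and abs(deriv - 1 / math.sqrt(1 + x * x)) < 1e-6
print(ok)  # True
```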
4,010,070
<p>The question was:</p> <ul> <li>Find <span class="math-container">$n$</span> where <span class="math-container">$GCD(a_{n}, 14) = 7$</span> where <span class="math-container">$n$</span> is natural if you knew that <span class="math-container">$a_{n} = n + 3$</span></li> </ul> <p>The book solved it by saying that means <span class="math-container">$a_{n}$</span> is a multiple of <span class="math-container">$7$</span> but not <span class="math-container">$14$</span> so <span class="math-container">$a_{n} = 7a$</span> and by putting it in the equation</p> <p><span class="math-container">$GCD(7a, 14) = 7$</span> means <span class="math-container">$GCD(a,2) = 1$</span> which means <span class="math-container">$a = 2k + 1$</span> and by putting it in <span class="math-container">$a_{n}$</span> we find that <span class="math-container">$n = 14k + 4$</span></p> <p>The first part I understand, because if it was a multiple of 14 then we would take <span class="math-container">$14$</span> as the <span class="math-container">$GCD$</span>, not <span class="math-container">$7$</span>.</p> <p>What I didn't understand is what putting <span class="math-container">$a_{n} = 7a$</span> has to do with not being a multiple of 14. 14 can be written as <span class="math-container">$7(2)$</span>, which means <span class="math-container">$a_{n}$</span> can clearly be a multiple of 14 too.</p>
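The book's conclusion can be checked by brute force: $\gcd(n+3,14)=7$ holds exactly for the $n$ congruent to $4$ modulo $14$.

```python
# Enumerate small n with gcd(n + 3, 14) = 7 and confirm they are exactly
# the residues n ≡ 4 (mod 14).
from math import gcd

solutions = [n for n in range(1, 200) if gcd(n + 3, 14) == 7]
print(solutions[:4])                  # [4, 18, 32, 46]
all_congruent = all(n % 14 == 4 for n in solutions)
print(all_congruent)                  # True
```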
Son Gohan
865,323
<p>Your last line comes straight from the definition of <span class="math-container">$\varphi(f)$</span> and the definition of continuity. Since <span class="math-container">$\varphi$</span> is continuous, for every <em>small number</em> (here it is 1), we can find a neighborhood <span class="math-container">$V$</span> of <span class="math-container">$0$</span> with radius <span class="math-container">$\varepsilon$</span> such that for every <span class="math-container">$f \in V$</span> we have <span class="math-container">$\varphi (f)$</span> less than our <em>small number</em>. In particular, you see that if <span class="math-container">$f(x_i) = 0$</span> for every <span class="math-container">$i$</span> then the functional <span class="math-container">$f$</span> belongs to every neighborhood of the origin, for any arbitrary radius. Hence <span class="math-container">$\varphi (f) &lt; \delta$</span> for any <span class="math-container">$\delta &gt; 0$</span>. Since <span class="math-container">$\delta$</span> is arbitrary, this implies that <span class="math-container">$\varphi (f)=0$</span>.</p>
4,010,070
<p>The question was:</p> <ul> <li>Find <span class="math-container">$n$</span> where <span class="math-container">$GCD(a_{n}, 14) = 7$</span> where <span class="math-container">$n$</span> is natural if you knew that <span class="math-container">$a_{n} = n + 3$</span></li> </ul> <p>The book solved it by saying that means <span class="math-container">$a_{n}$</span> is a multiple of <span class="math-container">$7$</span> but not <span class="math-container">$14$</span> so <span class="math-container">$a_{n} = 7a$</span> and by putting it in the equation</p> <p><span class="math-container">$GCD(7a, 14) = 7$</span> means <span class="math-container">$GCD(a,2) = 1$</span> which means <span class="math-container">$a = 2k + 1$</span> and by putting it in <span class="math-container">$a_{n}$</span> we find that <span class="math-container">$n = 14k + 4$</span></p> <p>The first part I understand, because if it was a multiple of 14 then we would take <span class="math-container">$14$</span> as the <span class="math-container">$GCD$</span>, not <span class="math-container">$7$</span>.</p> <p>What I didn't understand is what putting <span class="math-container">$a_{n} = 7a$</span> has to do with not being a multiple of 14. 14 can be written as <span class="math-container">$7(2)$</span>, which means <span class="math-container">$a_{n}$</span> can clearly be a multiple of 14 too.</p>
Evangelopoulos Phoevos
739,818
<p>Some background: Suppose X is a vector space and <span class="math-container">$\Phi $</span> is a family of linear functionals on <span class="math-container">$X$</span>. This family defines a topology on <span class="math-container">$X$</span>, denoted by <span class="math-container">$\mathcal T_\Phi$</span> such that <span class="math-container">$(X,\mathcal T_\Phi)$</span> is a topological vector space. The set<br /> <span class="math-container">$$\mathcal S = \{ f^{-1} [(-ε,ε)] : ε&gt;0, f \in \Phi \}$$</span> is a sub-basis for <span class="math-container">$0 \in X$</span> and thus, a basic open set containing <span class="math-container">$0$</span> is of the form <span class="math-container">$ \bigcap_{i=1}^n f_i^{-1} [(-ε,ε)]$</span>.</p> <p>If <span class="math-container">$g: (X,\mathcal T_\Phi) \to \mathbb R$</span> is linear and continuous then <span class="math-container">$g \in &lt;\Phi&gt;= \operatorname{span} (\Phi)$</span>.</p> <p>To see this, let <span class="math-container">$V= g^{-1} [(-1,1)].$</span> By the continuity of <span class="math-container">$g$</span>, <span class="math-container">$V \in \mathcal T_\Phi$</span> and so we can find <span class="math-container">$ε&gt;0,n \in \mathbb N$</span> and <span class="math-container">$f_1,f_2,\dots, f_n \in \Phi$</span> such that <span class="math-container">$$ \bigcap_{i=1}^n f_i^{-1} [(-ε,ε)] \subset V.$$</span> In particular, <span class="math-container">$Y:= \bigcap_{i=1}^n \ker f_i \subset V$</span>. Notice that <span class="math-container">$Y$</span> is a (non trivial) linear subspace of <span class="math-container">$X$</span>. Furthermore, <span class="math-container">$Y \subset \ker g$</span> and thus we infer that <span class="math-container">$g \in &lt;f_1,f_2, \dots , f_n&gt; \subset &lt;\Phi&gt;$</span>. Indeed, let <span class="math-container">$x \in Y$</span>. 
For all <span class="math-container">$k \in \mathbb N$</span> one has that <span class="math-container">$kx \in Y \subset V$</span> and so <span class="math-container">$|g(kx) | &lt;1$</span>. In other words, <span class="math-container">$g(x) =0$</span>.</p> <p>The weak-star topology is just a special case, if we consider the space <span class="math-container">$X^*$</span> and <span class="math-container">$\Phi = j(X)$</span>, where <span class="math-container">$j : X \to X^{**}$</span> is the canonical isometric embedding.</p>
184,142
<p>I am currently an undergraduate math student. (In fact, freshmen.)</p> <p>I know that usually abstract algebra is taught somehow late in the undergraduate course, and curious how studies of abstract algebra at graduate level differ from studies at undergraduate level.</p> <p>So, things like what gets new treatment, or what is learned new are what I want to know.</p>
Mathemagician1234
7,012
<p>What constitutes a "graduate algebra" course in the United States has undergone a lengthy evolution. As MTurgon and rschwieb said earlier, it's highly subjective from teacher to teacher. But I think the evolution of the subject gives some insight into what can be expected at most programs today for a year long graduate course. </p> <p>The first modern graduate text was, of course, Van Der Waerden's <em>Moderne Algebra</em>, based on the legendary lectures of Emil Artin, Emmy Noether and Van Der Waerden himself at the University of Gottingen before World War II. It was the first real abstract algebra textbook since these lectures emerged from the research of the authors. The syllabus became the standard mantra for algebra courses, "Groups, rings and fields". Until the 1960's, algebra was considered a graduate course and it was very unusual for most undergraduates to have had much exposure to algebra with the exception of the world's top programs, such as Harvard or Yale. The first undergraduate course in algebra was developed and presented by Saunders MacLane and Garrett Birkhoff at Harvard in 1941 and it eventually became, of course, the basis for their classic text, <em>A Survey of Modern Algebra</em>. But it was <strong>very</strong> unusual for undergraduates to have a solid course in abstract algebra before the 1960's. When undergraduate and graduate algebra courses became standard courses in math departments, the curriculum was fairly well-established: undergraduate courses were based on the <em>Survey</em> while graduate courses were based on Van Der Waerden. By the late 1960's and early 1970's, Van Der Waerden's book was no longer representative of the frontiers of algebra, which were now nearly unrecognizable with the explosive growth of categorical and homological methods, noncommutative algebra, modern commutative algebra and modern algebraic geometry. 
The first edition of Serge Lang's <em>Algebra</em> was published in 1965, concurrent with the peak popularity of the Bourbaki volumes. Lang's book effectively replaced Van der Waerden as the graduate algebra text of choice at top programs due to its completely modern approach and its emphasis on categorical and homological methods in all areas of algebra. It still is, to a large degree, but its sheer difficulty and dry austerity, coupled with the mammoth size of later editions and the explosively rapid growth of algebra at the research level, have recently led to a new generation of algebra books at the graduate level, such as Grillet (my personal favorite), Rowen and Rotman. All these books have continued the hard categorical slant of Lang while trying to bring more recent developments into the standard courses. </p> <p>From the prior discussion as well as my own experiences, I can state that graduate algebra <em>generally</em> differs from undergraduate courses in the subject in 3 ways: </p> <p>1) Much the same way undergraduate analysis covers the "classical" analysis of the late 19th century and graduate analysis courses cover modern topics of the last century, graduate algebra differs mainly from undergraduate algebra in the emphasis on category theory and homological methods. There are programs that attempt to present category theory and diagram chasing to undergraduates in their algebra courses, but I think this is mainly at the top research programs, where the goal is to speed students to the frontiers as quickly as possible. In general, the categorical approach isn't tackled full on until the graduate algebra sequence, and consequently the topics that are most strongly developed by these methods (i.e. homological algebra, noncommutative ring and module theory, algebraic geometry) are not discussed in depth until the graduate course. </p> <p>2) Expect the course to be much deeper, terser and more problem-oriented than your undergraduate course. 
This is for 2 reasons: a) A graduate course in algebra needs to survey most of the subject as it stands today to prepare the students for research in either algebra or other fields, and unless the student is ready to learn actively, there simply will not be time to cover the bulk of this work. Also b) graduate students are now beginning to make the transition to being professional mathematicians and they can't very well do that if they're still learning simple proofs from lectures or textbooks. They have to not only learn material much more quickly, they have to learn to build vast tracts of theory themselves. The best way to do both is to give the student a large chunk of the classwork to learn themselves. </p> <p>3) Depending on whether your instructor is a prominent researcher in the field of algebra, a graduate algebra course may be much more closely tied to the frontiers of research than is usual. If so, he or she may cover the standard material in a "need to know" fashion in order to cover the maximal amount of material relevant to his or her research interests and a large chunk of the course would then be more like a research seminar, relying much more on published papers than standard textbooks. If the professor is not an active algebraist, expect the course to follow a much more standard path through a conventional textbook like one of the ones stated above. </p> <p><em>Specifically</em> what topics can you expect in a graduate algebra class? 
At the minimum, I would expect the following topics to be covered: group theory through the Sylow theorems, free groups and presentations and the Fundamental Theorem of Abelian Groups, ring and module theory including both the noncommutative and commutative aspects, field theory including a large section on Galois theory, linear and multilinear algebra including a full discussion of tensor products, basic category theory and homological algebra, universal algebra, semisimple rings and algebras and perhaps some algebraic geometry and algebraic number theory. </p> <p>Hope that helped, and good luck! </p>
279,238
<p>I want to create a regular polygon from the initial two points <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and number of vertices <span class="math-container">$n$</span>,<br /> <a href="https://i.stack.imgur.com/dKr9A.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dKr9A.png" alt="enter image description here" /></a></p> <p><code>regularPolygon[{0, 0}, {1, 0}, 3]</code> gives <code>{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}}</code></p> <p><code>regularPolygon[{x1, y1}, {x2, y2}, 4]</code> gives <code>{{x1, y1}, {x2, y2}, {x2 + y1 - y2, -x1 + x2 + y2}, {x1 + y1 - y2, -x1 + x2 + y1}}</code></p> <p>I found a related function <a href="http://reference.wolfram.com/language/ref/CirclePoints.html" rel="noreferrer">CirclePoints</a>, it seems not suitable. Is there a simple way to implement such a function? Maybe you can use iteration.</p>
flinty
72,682
<p>You <em>can</em> use <code>CirclePoints</code>. You just need to transform them:</p> <pre><code>{x1, y1} = {3, 6}; {x2, y2} = {-4, 2}; circ = N@CirclePoints[5]; tran = Last@FindGeometricTransform[{{x1, y1}, {x2, y2}}, circ[[;;2]]]; With[{tcirc = tran[circ]}, ListLinePlot[Append[tcirc, First@tcirc], AspectRatio -&gt; 1, Epilog -&gt; {Red, PointSize[Large], Point[{x1, y1}], Point[{x2, y2}]}] ] </code></pre> <p><a href="https://i.stack.imgur.com/kC03U.png" rel="noreferrer"><img src="https://i.stack.imgur.com/kC03U.png" alt="circle points" /></a></p>
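A language-neutral sketch of the same construction, in Python with complex arithmetic (the function name `regular_polygon` is my own): start from the edge $AB$ and rotate the edge vector by the exterior angle $2\pi/n$ at every step.

```python
# Build a regular n-gon from its first edge A->B by repeatedly adding the
# current edge vector and rotating it by the exterior angle 2*pi/n.
import cmath
import math

def regular_polygon(a, b, n):
    a, b = complex(*a), complex(*b)
    step = b - a
    rot = cmath.exp(2j * math.pi / n)      # exterior-angle rotation
    pts = [a]
    for _ in range(n - 1):
        pts.append(pts[-1] + step)
        step *= rot
    return [(z.real, z.imag) for z in pts]

tri = regular_polygon((0, 0), (1, 0), 3)
expected = [(0, 0), (1, 0), (0.5, math.sqrt(3) / 2)]
matches = all(abs(p - q) < 1e-12 for pt, ref in zip(tri, expected)
              for p, q in zip(pt, ref))
print(matches)  # True: reproduces {{0,0},{1,0},{1/2,Sqrt[3]/2}}
```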
3,462,868
<p>What is the minimal size of a set <span class="math-container">$\mathfrak S$</span> of <span class="math-container">$k$</span>-element subsets of <span class="math-container">$\{1,...,n\}$</span> such that for any <span class="math-container">$k$</span>-element subset <span class="math-container">$S$</span> of <span class="math-container">$\{1,...,n\}$</span> there is an <span class="math-container">$S'\in\mathfrak S$</span> with <span class="math-container">$S\cap S'=\varnothing$</span>?</p> <p>As shown in an answer and comments to it, for <span class="math-container">$n&lt;2k$</span> there are no such <span class="math-container">$\mathfrak S$</span>, for <span class="math-container">$n=2k$</span> the only possibility is to take for <span class="math-container">$\mathfrak S$</span> all <span class="math-container">$k$</span>-element subsets, so that in this case the answer is <span class="math-container">$\binom{2k}k$</span>, while for <span class="math-container">$n\geqslant (k+1)k$</span> one can (and must at least) take any <span class="math-container">$k+1$</span> pairwise disjoint <span class="math-container">$k$</span>-element subsets and the answer is <span class="math-container">$k+1$</span>. 
Thus the cases <span class="math-container">$2k&lt;n&lt;(k+1)k$</span> remain unsolved.</p> <p>As suggested in a comment below: in case this is very hard, - mainly I would like to know this in the case <span class="math-container">$n=3k$</span>.</p> <p>Here are some calculations (being updated using the accepted answer): denoting by <span class="math-container">$\mu(n,k)$</span> the minimal size of <span class="math-container">$\mathfrak S$</span> as above, <span class="math-container">$$ \begin{matrix} \mu(\geqslant2,1)=2&amp;\mu(4,2)=6&amp;\mu(6,3)=20&amp;\mu(8,4)=70&amp;\mu(10,5)=252\\ &amp;\mu(5,2)=4&amp;\mu(7,3)=12&amp;\mu(9,4)=30&amp;\mu(11,5)\leqslant113\\ &amp;\mu(\geqslant6,2)=3&amp;\mu(8,3)=8&amp;\mu(10,4)\leqslant21&amp;\mu(12,5)\leqslant72\\ &amp;&amp;\mu(9,3)=7&amp;\mu(11,4)\leqslant18&amp;\mu(13,5)\leqslant54\\ &amp;&amp;\mu(10,3)=6&amp;\mu(12,4)=12&amp;\mu(14,5)\leqslant42\\ &amp;&amp;\mu(11,3)=5&amp;\mu(13,4)\leqslant14&amp;\mu(15,5)\leqslant31\\ &amp;&amp;\mu(\geqslant12,3)=4&amp;\mu(14,4)\leqslant12&amp;\mu(16,5)\leqslant28\\ &amp;&amp;&amp;\mu(15,4)\leqslant10&amp;\mu(17,5)\leqslant26\\ &amp;&amp;&amp;\mu(16,4)\leqslant9&amp;\mu(18,5)\leqslant24\\ &amp;&amp;&amp;\mu(17,4)\leqslant8&amp;\mu(19,5)\leqslant22\\ &amp;&amp;&amp;\mu(18,4)\leqslant7&amp;\mu(20,5)\leqslant20\\ &amp;&amp;&amp;\mu(19,4)\leqslant6&amp;\mu(21,5)\leqslant18\\ &amp;&amp;&amp;\mu(\geqslant20,4)=5&amp;\mu(22,5)\leqslant17\\ &amp;&amp;&amp;&amp;\mu(23,5)\leqslant16\\ &amp;&amp;&amp;&amp;\mu(24,5)\leqslant14\\ &amp;&amp;&amp;&amp;\mu(25,5)\leqslant12\\ &amp;&amp;&amp;&amp;\mu(26,5)\leqslant10\\ &amp;&amp;&amp;&amp;\mu(27,5)\leqslant9\\ &amp;&amp;&amp;&amp;\mu(28,5)\leqslant8\\ &amp;&amp;&amp;&amp;\mu(29,5)\leqslant7\\ &amp;&amp;&amp;&amp;\mu(\geqslant30,5)=6 \end{matrix} $$</span></p>
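Two of the small table entries can be confirmed by exhaustive search (feasible only for tiny $n$ and $k$; the helper `mu` below is a direct brute force over all candidate families):

```python
# Brute-force mu(n, k): smallest family of k-subsets such that every
# k-subset is disjoint from some family member. Confirms mu(5,2)=4, mu(6,2)=3.
from itertools import combinations

def mu(n, k):
    subs = [frozenset(c) for c in combinations(range(n), k)]
    for size in range(1, len(subs) + 1):
        for fam in combinations(subs, size):
            if all(any(not (s & t) for t in fam) for s in subs):
                return size

print(mu(5, 2), mu(6, 2))  # 4 3
```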
Matt Samuel
187,867
<p>Try taking the derivative again. The derivative of the constant is zero, so adding the constant gives you the same derivative. It turns out the converse is true: if two functions have the same derivative, then they differ by a constant. In order to get all antiderivatives, you have to include the constant. </p>
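A numerical illustration of the point above, with $F(x)=x^3/3$ as an arbitrary example: $F$ and $F+5$ have identical derivatives (both $x^2$), so both are antiderivatives of the same function.

```python
# Two antiderivatives differing by a constant have the same derivative,
# checked with a central-difference approximation.
def F(x):
    return x ** 3 / 3

def G(x):
    return x ** 3 / 3 + 5    # same antiderivative, shifted by a constant

def deriv(f, x, h=1e-6):
    return (f(x + h) - f(x - h)) / (2 * h)

same = all(abs(deriv(F, x) - deriv(G, x)) < 1e-6 for x in [-2, -1, 0, 1.5, 3])
print(same)  # True
```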
3,462,868
<p>What is the minimal size of a set <span class="math-container">$\mathfrak S$</span> of <span class="math-container">$k$</span>-element subsets of <span class="math-container">$\{1,...,n\}$</span> such that for any <span class="math-container">$k$</span>-element subset <span class="math-container">$S$</span> of <span class="math-container">$\{1,...,n\}$</span> there is an <span class="math-container">$S'\in\mathfrak S$</span> with <span class="math-container">$S\cap S'=\varnothing$</span>?</p> <p>As shown in an answer and comments to it, for <span class="math-container">$n&lt;2k$</span> there are no such <span class="math-container">$\mathfrak S$</span>, for <span class="math-container">$n=2k$</span> the only possibility is to take for <span class="math-container">$\mathfrak S$</span> all <span class="math-container">$k$</span>-element subsets, so that in this case the answer is <span class="math-container">$\binom{2k}k$</span>, while for <span class="math-container">$n\geqslant (k+1)k$</span> one can (and must at least) take any <span class="math-container">$k+1$</span> pairwise disjoint <span class="math-container">$k$</span>-element subsets and the answer is <span class="math-container">$k+1$</span>. 
Thus the cases <span class="math-container">$2k&lt;n&lt;(k+1)k$</span> remain unsolved.</p> <p>As suggested in a comment below: in case this is very hard, - mainly I would like to know this in the case <span class="math-container">$n=3k$</span>.</p> <p>Here are some calculations (being updated using the accepted answer): denoting by <span class="math-container">$\mu(n,k)$</span> the minimal size of <span class="math-container">$\mathfrak S$</span> as above, <span class="math-container">$$ \begin{matrix} \mu(\geqslant2,1)=2&amp;\mu(4,2)=6&amp;\mu(6,3)=20&amp;\mu(8,4)=70&amp;\mu(10,5)=252\\ &amp;\mu(5,2)=4&amp;\mu(7,3)=12&amp;\mu(9,4)=30&amp;\mu(11,5)\leqslant113\\ &amp;\mu(\geqslant6,2)=3&amp;\mu(8,3)=8&amp;\mu(10,4)\leqslant21&amp;\mu(12,5)\leqslant72\\ &amp;&amp;\mu(9,3)=7&amp;\mu(11,4)\leqslant18&amp;\mu(13,5)\leqslant54\\ &amp;&amp;\mu(10,3)=6&amp;\mu(12,4)=12&amp;\mu(14,5)\leqslant42\\ &amp;&amp;\mu(11,3)=5&amp;\mu(13,4)\leqslant14&amp;\mu(15,5)\leqslant31\\ &amp;&amp;\mu(\geqslant12,3)=4&amp;\mu(14,4)\leqslant12&amp;\mu(16,5)\leqslant28\\ &amp;&amp;&amp;\mu(15,4)\leqslant10&amp;\mu(17,5)\leqslant26\\ &amp;&amp;&amp;\mu(16,4)\leqslant9&amp;\mu(18,5)\leqslant24\\ &amp;&amp;&amp;\mu(17,4)\leqslant8&amp;\mu(19,5)\leqslant22\\ &amp;&amp;&amp;\mu(18,4)\leqslant7&amp;\mu(20,5)\leqslant20\\ &amp;&amp;&amp;\mu(19,4)\leqslant6&amp;\mu(21,5)\leqslant18\\ &amp;&amp;&amp;\mu(\geqslant20,4)=5&amp;\mu(22,5)\leqslant17\\ &amp;&amp;&amp;&amp;\mu(23,5)\leqslant16\\ &amp;&amp;&amp;&amp;\mu(24,5)\leqslant14\\ &amp;&amp;&amp;&amp;\mu(25,5)\leqslant12\\ &amp;&amp;&amp;&amp;\mu(26,5)\leqslant10\\ &amp;&amp;&amp;&amp;\mu(27,5)\leqslant9\\ &amp;&amp;&amp;&amp;\mu(28,5)\leqslant8\\ &amp;&amp;&amp;&amp;\mu(29,5)\leqslant7\\ &amp;&amp;&amp;&amp;\mu(\geqslant30,5)=6 \end{matrix} $$</span></p>
Gibbs
498,844
<p>Your wording is far from being formal. The operator <span class="math-container">$d/dt$</span> should not be interpreted as a fraction, rather as a differential operator, a derivation, or a vector field depending on the context. On the other hand, the differential <span class="math-container">$dt$</span> is its "dual", so it does not make much sense to talk about "cancellation" of <span class="math-container">$d/dt$</span> and <span class="math-container">$dt$</span>. I guess this is not the place to go into the details of the topic, but if you want to know more I will add some reference. What is more, the integral does not "vanish", derivative and integral are operators inverse of each other.</p> <p>Aside from this, I assume you know basic calculus rules (which is probably why your wording is imprecise). By definition the integral of a function <span class="math-container">$f$</span> is another function <span class="math-container">$F$</span> such <span class="math-container">$F'(t) = f(t)$</span> for all <span class="math-container">$t$</span>. Assuming all of this makes sense, when you find one such integral, say <span class="math-container">$F_1$</span>, then <span class="math-container">$F_1+c$</span> (for <span class="math-container">$c$</span> constant) has the same derivative: since the derivative of a constant is zero, then <span class="math-container">$$\frac{d}{dt}(F_1+c)(t) = \frac{d}{dt}(F_1(t)+c) = \frac{d}{dt}F_1(t)=f(t).$$</span> So when you find a primitive <span class="math-container">$F$</span> of <span class="math-container">$f$</span> you automatically find infinitely many others, which you can write in one line as <span class="math-container">$F+c$</span>.</p>
549,245
<p>Apparently, it is a common mistake to write $\forall x[R(x) \rightarrow P(x)]$ instead of $\forall x[R(x)\wedge P(x)]$ in some cases, however I can't seem to find the difference between the two. </p> <p>I did some research in ProofWiki and a Discrete Mathematics book, but I couldn't find this specific difference explained.</p> <p>Could you please explain it to me and give some examples on when to use one and when to use the other?</p> <p>Thank you for your time. </p>
DonAntonio
31,254
<p>Try with the following basic, intuitive definitions:</p> <p>$$x\in\Bbb Z=\text{the set of integer numbers}\;,\;\;R(x):=x\;\text{is a prime number}\;,\;$$</p> <p>$$P(x):=x\;\text{not divisible by}\;4$$</p> <p>Then you try to explain the difference between "For any integer $\;x\;$ , if $\;x\;$ is a prime number then it is not divisible by four" and "For any integer $\;x\;$ , $\;x\;$ is both prime and not divisible by four".</p>
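Evaluating both quantified statements over a finite range of integers, with the interpretations suggested above, makes the difference concrete: the implication form is true, while the conjunction form is false (most integers are simply not prime).

```python
# forall x: prime(x) -> not(4 | x)   is true on 1..99 (and on all integers),
# forall x: prime(x) and not(4 | x)  is false (e.g. x = 1 is not prime).
def is_prime(x):
    return x > 1 and all(x % d for d in range(2, int(x ** 0.5) + 1))

xs = range(1, 100)
implication = all((not is_prime(x)) or x % 4 != 0 for x in xs)
conjunction = all(is_prime(x) and x % 4 != 0 for x in xs)
print(implication, conjunction)  # True False
```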
549,245
<p>Apparently, it is a common mistake to write $\forall x[R(x) \rightarrow P(x)]$ instead of $\forall x[R(x)\wedge P(x)]$ in some cases, however I can't seem to find the difference between the two. </p> <p>I did some research in ProofWiki and a Discrete Mathematics book, but I couldn't find this specific difference explained.</p> <p>Could you please explain it to me and give some examples on when to use one and when to use the other?</p> <p>Thank you for your time. </p>
Trevor Wilson
39,378
<p>It's the difference between</p> <ol> <li><p>"anyone who gets a perfect score on the quiz will receive a cookie," and</p></li> <li><p>"everyone gets a perfect score on the quiz and will receive a cookie."</p></li> </ol> <p>It's a bit tricky to translate number 1 from colloquial language into first-order logic, but when you do, it says $\forall x\,(Q(x) \to C(x))$ where $Q(x)$ means "$x$ gets a perfect score on the quiz" and $C(x)$ means "$x$ will receive a cookie."</p>
3,020,825
<p>Let $A$ be a subset of <span class="math-container">$\mathbb{R}^P$</span> and <span class="math-container">$ x \in \mathbb{R}^P$</span>; denote <span class="math-container">$d(x,A) = \inf \{ d(x,y) : y \in A\}$</span>. There exists a point <span class="math-container">$y_0 \in A $</span> with <span class="math-container">$d(y_0,x) = d(x,A)$</span> if </p> <p>choose the correct option</p> <p><span class="math-container">$a)$</span> <span class="math-container">$A$</span> is any closed non empty subset of <span class="math-container">$\mathbb{R}^P$</span></p> <p><span class="math-container">$b)$</span> <span class="math-container">$A$</span> is any non empty subset of <span class="math-container">$\mathbb{R}^P$</span></p> <p><span class="math-container">$c)$</span> <span class="math-container">$A$</span> is any non empty compact subset of <span class="math-container">$\mathbb{R}^P$</span></p> <p><span class="math-container">$d)$</span> <span class="math-container">$A$</span> is any non empty bounded subset of <span class="math-container">$\mathbb{R}^P$</span></p> <p>My attempt: I think option a) is correct because if $A$ is closed then <span class="math-container">$x \in \bar A= A,$</span> that is <span class="math-container">$d(y_0,x) = d(x,A)=0$</span></p> <p>I don't know about the other options, please help me.</p>

Patrick Stevens
259,262
<ul> <li>Show that "<span class="math-container">$n$</span> is square" is primitive recursive (more formally, that <span class="math-container">$n \mapsto 1$</span> if <span class="math-container">$n$</span> is square, and <span class="math-container">$0$</span> otherwise).</li> <li>Show that the function <span class="math-container">$g(n)$</span> which maps <span class="math-container">$n$</span> to <span class="math-container">$g(n-1) + 1$</span> if <span class="math-container">$n$</span> is square, and to <span class="math-container">$g(n-1)$</span> otherwise, is primitive recursive.</li> <li>Show that <span class="math-container">$g$</span> is very nearly the function you want. (There might be some off-by-one fiddling required; I haven't worked through the details.)</li> </ul>
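A concrete version of the recursion sketched above, assuming the target is the square-counting function: `g(n)` counts the perfect squares in $\{0,1,\dots,n\}$, built only from `g(n-1)` and a square test, mirroring the primitive-recursive scheme.

```python
# g(n) = g(n-1) + 1 if n is a square, else g(n-1); base case g(0) = 1.
def is_square(n):
    r = int(n ** 0.5)
    return 1 if r * r == n or (r + 1) * (r + 1) == n else 0

def g(n):
    if n == 0:
        return 1                 # 0 = 0^2 is a square
    return g(n - 1) + is_square(n)

values = [g(n) for n in range(10)]
print(values)  # [1, 2, 2, 2, 3, 3, 3, 3, 3, 4]
```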
281,386
<p>I have asked this question on stackexchange and have not received any answers or comments after 2 days of it being there.</p> <p>I read somewhere that the following statement is correct. A proof or any hint as to how to prove it would be helpful.</p> <p>Let $G$ be a connected reductive group defined over $k$ (which may not be of characteristic 0). Let $H$ be a connected normal subgroup of $G\times \text{spec}\,\overline{k}$ (a priori $H$ is only defined over $\overline{k}$). Then $H$ is defined over a finite 'separable' extension of $k$.</p> <p>Also there must be counterexamples where this is not true.</p> <p>Thanks.</p>
nfdc23
81,332
<p>First, some easy reduction steps to pass to the case where $G$ is semisimple. Let $T$ be a maximal $k$-torus in $G$, and pass to a finite separable extension of $k$ so that $T$ is split. We know that $H$ is the almost-direct product $\mathscr{D}(H) \cdot Z$ for a subtorus $Z \subset T_{\overline{k}}$ that is the maximal central $\overline{k}$-torus in $H$. But $T$ is $k$-split (by our initial finite separable extension on $k$), so $Z = S_{\overline{k}}$ for a unique $k$-subtorus $S \subset T$. Hence, our original task for $H$ is reduced to the same for $\mathscr{D}(H)$ that is normal in $\mathscr{D}(G_{\overline{k}}) = \mathscr{D}(G)_{\overline{k}}$. Hence, we can replace $G$ with $\mathscr{D}(G)$ and $H$ with $\mathscr{D}(H)$ to reduce to the case that $G$ is semisimple, and even $k$-split. </p> <p>In the split case, one can prove something much more precise: for each irreducible component $\Phi_i$ of the root system $\Phi = \Phi(G,T)$ (assume $G \ne 1$, so $\Phi$ is non-empty and hence admits irreducible components), the $k$-group $G_i \subset G$ generated by the root groups $U_a$ for $a \in \Phi_i$ is connected semisimple with root system (relative to its split maximal $k$-torus $T_i := T \cap G_i$) naturally identified with $\Phi_i$, and the $G_i$'s satisfy the following properties: (i) they are precisely the minimal non-trivial smooth connected normal $k$-subgroups of $G$, (ii) the $G_i$'s pairwise commute and for each subset $J$ of the set $I$ of $i$'s the map $\prod_{i\in J} G_i \rightarrow G$ is a central isogeny onto a smooth connected normal $k$-subgroup $G_J \subset G$, (iii) each smooth connected normal $k$-subgroup of $G$ coincides with $G_J$ for a unique $J$, (iv) each $G_i$ has no nontrivial smooth connected normal $k$-subgroup. </p> <p>Note that in view of the formulation, to prove (i)-(iv) it suffices to work over $\overline{k}$! And once that is done, we see via (iii) that the original question is answered in a more precise form. 
(One could prove a less precise version in which we only show the descent of $H$ to a $k$-subgroup by generating $H$ by some $T_{\overline{k}}$-root groups without establishing the link to subsets of the set of irreducible components of the root system. But this would not be as satisfying.) The most substantial ingredient in the proof is (iv). For a complete proof of (i)-(iv), see 10.2 in <a href="https://www.ams.org/open-math-notes/omn-view-listing?listingId=110663" rel="noreferrer">https://www.ams.org/open-math-notes/omn-view-listing?listingId=110663</a> Of course, aside from issues of algebro-geometric technique, everything done there is quite classical (and 14.10 in Borel's textbook is a "classic" reference for it). It is mentioned by @anon that this is all done in Milne's new book, but I don't have a copy of that book (yet) and so don't know a precise reference within it.</p>
1,017,320
<p>Assume ZFC. Let $B\subseteq\mathbb R$ be a set that is not Borel-measurable. Clearly, $B$ must be uncountable, since countable sets are always Borel, being a countable union of singletons.</p> <p><strong>Question:</strong> can one conclude that $B$ necessarily has the cardinality of the continuum <em>without</em> assuming either the continuum hypothesis or the negation thereof?</p> <p>A possibly related result is that any $\sigma$-algebra that contains infinitely many <em>sets</em> must necessarily have at least the cardinality of the continuum. This result is independent of the continuum hypothesis.</p>
Ramiro de la Vega
94,514
<p>The situation is quite the opposite. Every Borel set is either countable or has size continuum. So if $B \subseteq \mathbb{R}$ is an uncountable set of size less than the continuum, it is not Borel-measurable.</p>
15,702
<p>One way to prove that a field $K$ has no ideals except the entire field and the trivial ideal is to note the fact that every nonzero element $x$ has an inverse. By the definition of an ideal, if a nonzero $x$ is in the ideal then $x^{-1}x$ is too, because $x^{-1} \in K$. But now we have that $1$ is in the ideal, and so again by the definition of an ideal we have that every element is in the ideal. Therefore it is either the entire field or trivial.</p> <p>However, this works for any would-be ideal that contains a unit; hence my question. I don't see how this coheres particularly with the idea that ideals are generalizations of things like "multiple of $n$", or that we use them to form quotient rings.</p> <p>Can someone please explain whether this has a deeper meaning or if it's not really important? I think it might have something to do with what is written in the "motivation" section in the <a href="http://en.wikipedia.org/wiki/Ideal_%28ring_theory%29" rel="nofollow">Wikipedia article for ideals</a> but I'm not really sure.</p> <p>Edit: I do realize that not all subsets without a unit are ideals. Sorry for the confusion.</p>
Alon Amit
308
<p>Ideals are not "unitless subsets", in the sense that there are many unitless subsets that aren't ideals. Proper ideals, indeed, contain no units, but they are also closed under addition, subtraction and multiplication by arbitrary ring elements.</p> <p>I'm not sure why you're confused about the case of "multiples of n". In the ring $\mathbb{Z}$ of integers, the proper ideals are precisely the multiples of some fixed integer. Indeed, there are many other "unitless subsets" in $\mathbb{Z}$, but they are not ideals. </p>
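<p>A tiny computational illustration of this dichotomy (my own sketch, not from the answer): in a field such as $\mathbb{Z}/5\mathbb{Z}$ every nonzero element is a unit and so generates the whole ring, while in the non-field $\mathbb{Z}/6\mathbb{Z}$ a non-unit generates a proper ideal.</p>

```python
# Sketch: the ideal generated by k in Z/nZ is the set of multiples of k mod n.
def generated_ideal(k, n):
    return {(k * r) % n for r in range(n)}

# In the field Z/5Z every nonzero element is a unit, so it generates everything:
for k in range(1, 5):
    assert generated_ideal(k, 5) == {0, 1, 2, 3, 4}

# In the non-field Z/6Z, the non-unit 2 generates a proper ideal:
assert generated_ideal(2, 6) == {0, 2, 4}
```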
4,271,166
<p>I have been studying probability and there are many results regarding sums of random variables.</p> <p>For example, the sum of iid Bernoulli random variables is Binomial distributed, the sum of iid Geometric random variables is Negative Binomial distributed, and the sum of iid Exponential random variables is Gamma distributed. What are the other important results like this?</p> <p>Where can I find a list of such properties that would help me deal with the subject better? I am having to constantly search through my book for such references. If someone can provide me a source for this I will be grateful.</p> <p>This is not a question but rather a request for good reading material. So please try to understand and not downvote. Thanks</p>
Gwendolyn Anderson
964,863
<p>The most complete resource you may find online might be Wikipedia's List of Probability Distributions: <a href="https://en.wikipedia.org/wiki/List_of_probability_distributions" rel="nofollow noreferrer">Wikipedia: List of Probability Distributions</a>. You can click on each distribution listed to go to its own page, for more complete descriptions. The summary you prefer will most likely be much simpler than this complete resource, and I recommend you google lists/summaries of probability distributions and refer to several until you arrive at one or two key resources that suit you well; print them out and make your own notes on them. You can also search for &quot;derivation of probability distributions.&quot; One example is:<br /> <a href="https://www.statlect.com/probability-distributions/" rel="nofollow noreferrer">Derivation of Probability Distributions</a>. If your course covers a particular list of distributions, you can find the Wikipedia page for each of these or you can google the derivation for the specific distribution.</p>
4,056,627
<p>Equation to minimize using Boolean Algebra Laws: <span class="math-container">$A+\bar{A} B \bar{C}$</span></p> <p>I have tried doing this but i am unsure of the answer: <span class="math-container">$$ \begin{array}{l} \text { Let } K=B \bar{C} \\ A+\bar{A} K=A+K=A+B \bar{C} \end{array} $$</span></p>
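<p>The substitution step above uses the absorption law $A+\bar{A}K=A+K$, and the result can be confirmed by an exhaustive truth table; a quick sketch (variable encoding is mine):</p>

```python
from itertools import product

# Check the absorption law A + A'K = A + K with K = B*C', i.e.
# A + A'BC' = A + BC', over all eight 0/1 assignments.
for A, B, C in product([0, 1], repeat=3):
    lhs = A | ((1 - A) & B & (1 - C))   # A + A'BC'
    rhs = A | (B & (1 - C))             # A + BC'
    assert lhs == rhs
print("A + A'BC' == A + BC' holds for all inputs")
```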
Community
-1
<p><span class="math-container">$G$</span> acts by left multiplication on the left quotient <span class="math-container">$G/H$</span>. If <span class="math-container">$G$</span> is simple, then <span class="math-container">$G$</span> embeds into <span class="math-container">$S_{[G:H]}$</span> and hence <span class="math-container">$|G|$</span> divides <span class="math-container">$[G:H]!$</span>. But <span class="math-container">$|G|=27[G:H]$</span>, so <span class="math-container">$|G|$</span> divides <span class="math-container">$[G:H]!$</span> if and only if <span class="math-container">$3^3 \mid ([G:H]-1)!$</span> if and only if <span class="math-container">$[G:H]\ge 10$</span>. Therefore, a stronger condition holds, actually.</p>
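<p>The final divisibility claim, that $27m \mid m!$ exactly when $3^3 \mid (m-1)!$, i.e. exactly when $m \ge 10$, can be spot-checked directly (my own sketch):</p>

```python
from math import factorial

# Check: 27*m divides m!  <=>  27 divides (m-1)!  <=>  m >= 10.
for m in range(1, 40):
    divides = factorial(m) % (27 * m) == 0
    assert divides == (factorial(m - 1) % 27 == 0) == (m >= 10)
print("27*m | m! exactly when m >= 10")
```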
1,523,313
<p>Let $n$ be a positive integer. I want to prove one of these:</p> <p>1.$\mathbb{R}^n$ cannot be expressed as $A\cup B$ where $A,B$ are algebraic varieties $V(I),V(J)$ and $A,B\neq\mathbb{R}^n$.</p> <p>2.$\mathbb{R}^n$ is not Hausdorff under the Zariski topology.</p> <p>3.If $V(J)=\mathbb{R}^n$ then $J=0$. </p> <p>where $I,J$ are ideals of the ring of polynomials in $n$ indeterminates.</p> <p>I started trying to prove 2, then arrived at 1, which seemed very intuitive over the real numbers. But I still don't see how to prove it (I think there may be different ways to prove it outside of algebraic geometry). Then I got to 3, but I don't think it's true in general (I'd have to use characteristic 0 I think).</p> <p>Any idea or hint?</p>
Servaes
30,382
<p>No; if $v$ is an eigenvector of a matrix $A$ for distinct eigenvalues $\lambda$ and $\mu$, then $$\lambda v=Av=\mu v,$$ and hence $(\lambda-\mu)v=0$, which implies that $v=0$, contradicting the fact that $v$ is an eigenvector.</p>
1,523,313
<p>Let $n$ be a positive integer. I want to prove one of these:</p> <p>1.$\mathbb{R}^n$ cannot be expressed as $A\cup B$ where $A,B$ are algebraic varieties $V(I),V(J)$ and $A,B\neq\mathbb{R}^n$.</p> <p>2.$\mathbb{R}^n$ is not Hausdorff under the Zariski topology.</p> <p>3.If $V(J)=\mathbb{R}^n$ then $J=0$. </p> <p>where $I,J$ are ideals of the ring of polynomials in $n$ indeterminates.</p> <p>I started trying to prove 2, then arrived at 1, which seemed very intuitive over the real numbers. But I still don't see how to prove it (I think there may be different ways to prove it outside of algebraic geometry). Then I got to 3, but I don't think it's true in general (I'd have to use characteristic 0 I think).</p> <p>Any idea or hint?</p>
Alekos Robotis
252,284
<p>Suppose $v$ is an eigenvector of $A$ corresponding to two distinct eigenvalues $\lambda_1,\lambda_2$. This gives us $Tv=\lambda_1v$, $Tv=\lambda_2 v\implies \lambda_1v=\lambda_2v$, contrary to $\lambda_1\ne\lambda_2$ (because $v$ is nonzero).</p>
2,846,369
<p>As the title suggests, I would like to find the eigenfunctions $y_n(x)$ of the self-adjoint linear differential operator $\dfrac{d^2}{dx^2}$ that satisfy the boundary conditions $y_n(0)=y_n(\pi)=0$. Thereafter, I would like to construct its Green's function $G(x, \xi)$.</p> <p>I know that it is common to define the eigenvalues of a linear operator by: $$\mathcal{L}y_n=\lambda_n\rho y_n.$$ I was given that the required eigenfunctions satisfying the boundary conditions are $$y_n(x)=\sqrt{\dfrac{2}{\pi}}\sin (nx).$$</p> <p>Can anyone explain to me how this was derived, or the intuition in approaching this problem? Thereafter, how should I proceed to construct its Green's function? Any help would be appreciated.</p>
Lutz Lehmann
115,115
<p>You find the general solution to $y''=λy$ and try to satisfy the boundary conditions. For positive $λ$ and almost all negative ones the only solution should be the zero solution. Cases with non-trivial solutions are the eigenvalues/eigenfunctions.</p> <p>For the Green function compute IVP solutions $y_1,y_2$ with $y_1(0)=0$, $y_1'(0)=1$ and $y_2(\pi)=0$, $y_2'(\pi)=-1$ and then consider $$ G(s,t)=\frac{y_1(\min(s,t))y_2(\max(s,t))}{W(s)} $$ as candidate for the Green function.</p>
2,846,369
<p>As the title suggests, I would like to find the eigenfunctions $y_n(x)$ of the self-adjoint linear differential operator $\dfrac{d^2}{dx^2}$ that satisfy the boundary conditions $y_n(0)=y_n(\pi)=0$. Thereafter, I would like to construct its Green's function $G(x, \xi)$.</p> <p>I know that it is common to define the eigenvalues of a linear operator by: $$\mathcal{L}y_n=\lambda_n\rho y_n.$$ I was given that the required eigenfunctions satisfying the boundary conditions are $$y_n(x)=\sqrt{\dfrac{2}{\pi}}\sin (nx).$$</p> <p>Can anyone explain to me how this was derived, or the intuition in approaching this problem? Thereafter, how should I proceed to construct its Green's function? Any help would be appreciated.</p>
Disintegrating By Parts
112,478
<p>The problem in solving the eigenfunction equation is that solutions are unique only up to non-zero multiplicative constants. That is, any non-trivial solution of $$ -y''=\lambda y , \;\;\; y(0)=y(\pi)=0 $$ may be scaled by any non-zero constant, and the new function is also a solution. Define $y_l$ and $y_r$ to be solutions with $$ y_l(0)=0,\;y_l'(0)=1,\;\;\;\;\; y_r(\pi)=0,\;y_r'(\pi)=1 $$ These solutions are uniquely given by $$ y_l(x)=\frac{\sin(\sqrt{\lambda}x)}{\sqrt{\lambda}} \\ y_r(x)=\frac{\sin(\sqrt{\lambda}(x-\pi))}{\sqrt{\lambda}}. $$ $\lambda$ is an eigenvalue iff these solutions are linearly dependent, which is the case iff the Wronskian of these solutions is $0$: $$ w(\lambda) = y_ly_r'-y_l'y_r \\ =\frac{1}{\sqrt{\lambda}}\{\sin(\sqrt{\lambda}x)\cos(\sqrt{\lambda}(x-\pi))-\cos(\sqrt{\lambda}x)\sin(\sqrt{\lambda}(x-\pi))\} \\ =\frac{\sin(\sqrt{\lambda}\pi)}{\sqrt{\lambda}} $$ $\lambda=0$ is not an eigenvalue because the limit as $\lambda\rightarrow 0$ is $\pi\ne 0$. The eigenvalues are $\lambda_n=n^2$ for $n=1,2,3,\cdots$. The eigenfunctions are $B_n\sin(nx)$. The unique solution of $y''+\lambda y=g$ subject to $y(0)=y(\pi)=0$ and $\lambda\ne \lambda_n$ is $$ y(x) = \frac{1}{w(\lambda)}\left[y_r(x)\int_{0}^{x}g(t)y_l(t)dt+y_l(x)\int_{x}^{\pi}g(t)y_r(t)dt\right] $$ The Green function is $$ \frac{1}{w(\lambda)}(y_r(x)y_l(t)_{t&lt;x}+y_l(x)y_r(t)_{t&gt;x}) $$ You are interested in the case $\lambda=0$.</p>
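<p>As a sanity check (my own sketch, using SymPy), one can verify that the $\lambda=0$ Green function of $d^2/dx^2$ on $[0,\pi]$ with these boundary conditions, $G(x,t)=t(x-\pi)/\pi$ for $t&lt;x$ and $x(t-\pi)/\pi$ for $t&gt;x$, really does invert the operator, here for the concrete right-hand side $g(t)=\sin 2t$:</p>

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
g = sp.sin(2 * t)   # an arbitrary concrete right-hand side

# y(x) = integral_0^pi G(x,t) g(t) dt with G(x,t) = t(x-pi)/pi for t<x
# and x(t-pi)/pi for t>x  (the lambda = 0 Green function on [0, pi]).
y = (sp.integrate(t * (x - sp.pi) / sp.pi * g, (t, 0, x))
     + sp.integrate(x * (t - sp.pi) / sp.pi * g, (t, x, sp.pi)))

assert sp.simplify(y + sp.sin(2 * x) / 4) == 0              # y = -sin(2x)/4
assert sp.simplify(sp.diff(y, x, 2) - sp.sin(2 * x)) == 0   # y'' = g(x)
assert sp.simplify(y.subs(x, 0)) == 0 and sp.simplify(y.subs(x, sp.pi)) == 0
```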
1,540,635
<p>Given $$\min\limits_x \|x\|_2^2$$ $$\text{s.t.} Ax = b$$ <strong>show $x^* = A^T(AA^T)^{-1}b$ where $A \in \mathbb{R}^{m \times n}, m &lt; n$</strong></p> <p>This is projection $x$ onto the hyperplane $Ax - b = 0$</p> <p>Multiplying $A^T$ on both sides</p> <p>$A^TAx - A^Tb = 0$</p> <p>So we have $\langle A^TAx - A^Tb, x \rangle = 0$</p> <p>$(A^TAx - A^Tb)^Tx = 0$</p> <p>$(x^TA^TA - b^TA)x = 0$</p> <p>$x^TA^TAx = b^TAx = x^TA^Tb$</p> <p>So $A^TAx = A^Tb$ (circular) </p> <p>$x = (A^TA)^{-1}A^Tb$ (Wrong)</p> <p>How can you do this correctly?</p>
Michael Grant
52,878
<p>It's not clear to me under what principle we could have expected "multiplying $A^T$ on both sides" to lead to a proper answer. For one thing, since $m&lt;n$, $A^TA$ is not invertible. (As it is, the correct answer assumes that $A$ has full row rank so that $AA^T$ is invertible; this is of course not always true.)</p> <p>The right way to do this is to use a Lagrange multiplier. The Lagrangian is $$L(x,\lambda) = \|x\|_2^2 - \lambda^T ( A x - b )$$ where $\lambda\in\mathbb{R}^m$ is our Lagrange multiplier. Differentiating with respect to $x$ leads to the optimality condition $$2x - A^T \lambda = 0 \quad\Longrightarrow\quad x = \tfrac{1}{2}A^T \lambda$$ Now this value of $x$ must satisfy $Ax=b$, so $$Ax=A \cdot \left(\tfrac{1}{2}A^T\lambda\right) = \tfrac{1}{2}AA^T\lambda = b \quad\Longrightarrow \lambda = 2(AA^T)^{-1}b$$ We can do this if $A$ has full row rank, so $AA^T$ will be invertible. Returning to our formula for $x$, $$x = \tfrac{1}{2}A^T \lambda = \tfrac{1}{2}A^T \cdot \left(2(AA^T)^{-1}b\right) = A^T(AA^T)^{-1}b.$$</p>
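<p>A quick numerical sanity check of the closed form (my own sketch; it assumes, as the answer does, that $A$ has full row rank):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))   # full row rank with probability 1
b = rng.standard_normal(m)

x_star = A.T @ np.linalg.solve(A @ A.T, b)   # A^T (A A^T)^{-1} b

# Feasibility: A x* = b
assert np.allclose(A @ x_star, b)
# Agrees with the minimum-norm solution given by the pseudoinverse
assert np.allclose(x_star, np.linalg.pinv(A) @ b)
# Any other feasible point (x* plus a null-space step) has larger norm
z = rng.standard_normal(n)
x_other = x_star + (np.eye(n) - np.linalg.pinv(A) @ A) @ z
assert np.allclose(A @ x_other, b)
assert np.linalg.norm(x_other) >= np.linalg.norm(x_star) - 1e-12
```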
2,859,411
<p>Let $y(t)$ be a real valued function defined on the real line such that $y'= y(1-y)$, with $y(0) \in [0, 1]$. Then $\lim_{t\to\infty} y(t) = 1$.</p> <p>The solution is given as false, but I have no idea why. I tried some counterexamples but they didn't work.</p> <p>How can I find the solution within 3 minutes?</p>
José Carlos Santos
446,262
<p>Take $y(t)=0$. Then $y(0)\in[0,1]$ and $y'=0=y(1-y)$. But $\lim_{t\to\infty}y(t)=0\neq1$.</p>
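<p>The point is that $y\equiv 0$ is an equilibrium of the logistic equation, so the limit depends on the initial value. A crude forward-Euler integration (my own sketch) makes this visible:</p>

```python
# Forward-Euler sketch of y' = y(1 - y) for two initial values.
def euler(y0, t_end=50.0, h=1e-3):
    y = y0
    for _ in range(int(t_end / h)):
        y += h * y * (1 - y)
    return y

assert euler(0.0) == 0.0                 # the equilibrium solution never moves
assert abs(euler(0.5) - 1.0) < 1e-6      # interior initial data do converge to 1
```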
161,766
<p>What is the best way to write a quick function which takes in a vertex (of some undirected graph), and <strong>gives the value of $k$ for the $k$-core that vertex belongs to?</strong></p> <p>The obvious modification of <code>kCoreComponents[Graph,k]</code> using <code>Intersection</code> either takes ages, since it needs to re-check the core composition for each vertex (I have over 7000), or spends ages "loading" after the script is complete.</p>
Acus
18,792
<p>Here is how I used it in version 6. On my recent 10.3 it still works: you can execute the lines below directly in the kernel (in a text console, i.e. run <code>math</code>) or put them in a .m file and use it as a script. A graphics server needs to be installed on the server.</p> <pre><code>whereIam="/home/acus/temp"; $Path=Append[$Path,whereIam]; SetDirectory[whereIam]; Needs["JLink`"]; $FrontEndLaunchCommand="mathematica -mathlink -display :1 -nogui"; (* print to file whether the connection was successful *) arprisijungiau=ConnectToFrontEnd[]; Put[arprisijungiau,"testNon.txt"] (* the computer where the script runs *) hostname=Import["!`echo /bin/hostname`","Text"]; PutAppend[hostname,"testNon.txt"] (* now make real graphics *) thePlot=Plot[Sin[x],{x,-Pi,Pi}]; SetDirectory[whereIam]; Export["some.svg", thePlot]; </code></pre>
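<p>For comparison, not in Mathematica: the per-vertex quantity the question asks for is usually called the core number, and it can be computed for all vertices at once with the standard peeling algorithm. A language-agnostic sketch in Python (all names are mine):</p>

```python
# Sketch of the standard peeling algorithm: repeatedly delete vertices of
# degree < k; the "core number" of v is the largest k for which v survives,
# i.e. the deepest k-core containing v.
def core_numbers(edges):
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    core, k = {}, 0
    while adj:
        k += 1
        changed = True
        while changed:
            changed = False
            for v in [w for w, nbrs in adj.items() if len(nbrs) < k]:
                core[v] = k - 1   # v is in the (k-1)-core but not the k-core
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return core

# triangle {1,2,3} plus pendant vertex 4
assert core_numbers([(1, 2), (1, 3), (2, 3), (3, 4)]) == {1: 2, 2: 2, 3: 2, 4: 1}
```

This is a single sweep over the whole graph, so it avoids re-checking the core decomposition once per vertex.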
3,109,226
<p>Express the following expression <span class="math-container">$$E=(x^3-y^3)(y^3-z^3)(z^3-x^3)$$</span> in terms of <span class="math-container">$a, b$</span> where <span class="math-container">$a,b \in \mathbb R$</span> and <span class="math-container">$$a=x^2y+y^2z+z^2x$$</span> <span class="math-container">$$b=xy^2+yz^2+zx^2$$</span></p>
nonuser
463,553
<p>Start: <span class="math-container">$$ a-b = xy(x-y)+yz (y-z)+zx(z-x) $$</span> <span class="math-container">$$= xy(x-y)+yz (y-z)+zx(z-\color{red}y)+zx(\color{red}y-x) $$</span> <span class="math-container">$$= (xy-zx)(x-y)+(yz -zx)(y-z)$$</span></p> <p><span class="math-container">$$ =x(y-z)(x-y)+z(y-x)(y-z) $$</span> <span class="math-container">$$ = (x-y)(y-z)(x-z)$$</span></p> <p>So, since <span class="math-container">$z^3-x^3=-(x-z)(z^2+zx+x^2)$</span>, <span class="math-container">$$ E = (b-a)\underbrace{(x^2+xy+y^2)(y^2+yz+z^2)(z^2+zx+x^2)}_{A}$$</span></p> <p>Now you have to figure out <span class="math-container">$A$</span>. (I bet it is <span class="math-container">$a^2+ab+b^2$</span>.)</p>
3,109,226
<p>Express the following expression <span class="math-container">$$E=(x^3-y^3)(y^3-z^3)(z^3-x^3)$$</span> in terms of <span class="math-container">$a, b$</span> where <span class="math-container">$a,b \in \mathbb R$</span> and <span class="math-container">$$a=x^2y+y^2z+z^2x$$</span> <span class="math-container">$$b=xy^2+yz^2+zx^2$$</span></p>
Parallelism Alert
639,984
<p>Firstly, we use <span class="math-container">$x^3-y^3=(x-y)(x^2+xy+y^2)$</span> for the pairs <span class="math-container">$(x,y),(y,z),(z,x)$</span> and we obtain <span class="math-container">$$E=(x-y)(y-z)(z-x)(x^2+xy+y^2)(y^2+yz+z^2)(x^2+xz+z^2)$$</span> <span class="math-container">$$E=(x^2z+y^2x+z^2y-x^2y-y^2z-z^2x)(x^2+xy+y^2)(y^2+yz+z^2)(x^2+xz+z^2)$$</span> <span class="math-container">$$E=(b-a)(x^2+xy+y^2)(y^2+yz+z^2)(x^2+xz+z^2)=(b-a)E^{'}$$</span> <span class="math-container">$$\text{Let's say }c=\frac{a}{xyz}=\frac{x}{z} + \frac {y}{x} + \frac {z}{y} \quad\text{and}\quad d=\frac{b}{xyz}=\frac{x}{y} + \frac {y}{z} + \frac {z}{x}$$</span> <span class="math-container">$$\frac{E^{'}}{(xyz)^2}=\prod_{cyc}{(x/yz+1/z+y/xz)}=(\sum_{cyc} \frac{x^2}{y^2} + 2\cdot \sum_{cyc} \frac{x}{z}) + ( \sum_{cyc} \frac{x^2}{z^2} + 2\cdot \sum_{cyc} \frac{x}{y} ) + (\sum_{cyc}\frac{x^2}{yz} + \sum_{cyc} \frac{xy}{z^2} +3 ) $$</span> <span class="math-container">$$\text{Therefore, }\frac{E^{'}}{(xyz)^2}=c^2 + d^2 + cd=\frac{a^2+b^2+ab}{(xyz)^2}$$</span> <span class="math-container">$$\text{Now we have obtained }E^{'}=a^2+b^2+ab\text{, and we know, from earlier, }E=(b-a)E^{'}\text{, therefore}$$</span> <span class="math-container">$$E=(b-a)(a^2+b^2+ab) $$</span> <span class="math-container">$$E=b^3-a^3 $$</span></p>
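<p>The closed form $E=b^3-a^3$ is a polynomial identity, so it can be verified mechanically (my own sketch, using SymPy):</p>

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
a = x**2 * y + y**2 * z + z**2 * x
b = x * y**2 + y * z**2 + z * x**2
E = (x**3 - y**3) * (y**3 - z**3) * (z**3 - x**3)

# The claimed closed form E = b^3 - a^3, checked as a polynomial identity:
assert sp.expand(E - (b**3 - a**3)) == 0
```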
3,173,125
<p>Let <span class="math-container">$f(x) = x^3 + a x^2 + b x + c$</span> and <span class="math-container">$g(x) = x^3 + b x^2 + c x + a\,$</span> where <span class="math-container">$a, b, c$</span> are integers and <span class="math-container">$c\neq 0\,$</span>. Suppose that the following conditions hold:</p> <ol> <li><span class="math-container">$f(1)=0$</span> </li> <li>The roots of <span class="math-container">$g(x)$</span> are squares of the roots of <span class="math-container">$f(x)$</span>.</li> </ol> <p>I'd like to find <span class="math-container">$a, b$</span> and <span class="math-container">$c$</span>.</p> <p>I tried solving the equations obtained from condition 1 and the relations between the roots, but couldn't solve them. The equation which I got in <span class="math-container">$c$</span> is <span class="math-container">$c^4 + c^2 +3 c-1=0$</span> (edit: eqn is wrong). Also I was able to express <span class="math-container">$a$</span> and <span class="math-container">$b$</span> in terms of <span class="math-container">$c$</span>. But the equation isn't solvable by hand.</p>
Hanno
316,749
<p>The conditions 1 and 2 imply that</p> <ul> <li><span class="math-container">$\;c\,$</span> must be <span class="math-container">$\,0\,$</span> or <span class="math-container">$\,-1$</span>,</li> <li><span class="math-container">$\;a= -c^2$</span>, and <span class="math-container">$\:b= c^2-c-1\,$</span>.</li> </ul> <hr> <p>Condition 1 gives <span class="math-container">$0=f(1)=1+a+b+c=g(1)\,.\,$</span> Hence both <span class="math-container">$f$</span> and <span class="math-container">$g$</span> have a zero at <span class="math-container">$1$</span> and factor as <span class="math-container">$$f(x) \,=\, (x-1)\big(x^2 + (a+1)x -c\big)\\[1.5ex] g(x) \,=\, (x-1)\big(x^2 -(a+c)x -a\big)\,.$$</span> Denote the roots of the quadratic factor of <span class="math-container">$f$</span> by <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span>. Condition 2 says that the roots of <span class="math-container">$g$</span> are contained in <span class="math-container">$\{1,x_1^2,x_2^2\}$</span>. By Vieta's formula one gets <span class="math-container">$$\,-a \,=\, x_1^2x_2^2=(-c)^2 \,=\, c^2\,,\;\text{thus}\;\; b \,=\, -a-c-1 \,=\, c^2-c-1\,.\tag{1}$$</span> Note that condition 2 remains true when restricted to the quadratic factors of <span class="math-container">$f$</span> and <span class="math-container">$\,g$</span>. These are <span class="math-container">$$q_f \,=\,x^2 + \left(1-c^2\right)x -c\tag{2}\\ q_g \,=\,x^2 -c\,(1-c)x +c^2$$</span> when written in terms of <span class="math-container">$\,c$</span>.<br> It is shown next that condition 2 cannot hold if <span class="math-container">$c\neq 0\,$</span> or <span class="math-container">$\,-1$</span>.</p> <ol> <li><p>Assume <span class="math-container">$c\geqslant 1$</span>. Then <span class="math-container">$q_f(0)=-c&lt;0$</span>, and the roots <span class="math-container">$x_1,x_2$</span> of <span class="math-container">$q_f$</span> are real and distinct. 
By Vieta's formula regarding <span class="math-container">$q_g$</span> one reaches the contradiction <span class="math-container">$0&lt;x_1^2+x_2^2=c(1-c)\leq 0\,$</span>.</p></li> <li><p>Assume <span class="math-container">$c\leq -2\,$</span>. Then the discriminant <span class="math-container">$\left(1-c^2\right)^2+4c$</span> in <span class="math-container">$(2)$</span> is positive, and we run into the same contradiction as before.</p></li> </ol> <p>So we are left with the two solutions (with condition 2 obviously satisfied)</p> <ul> <li><span class="math-container">$a=0,b=-1,c=0\:$</span> which was ruled out a priori<br> then <span class="math-container">$f(x)=x(x+1)(x-1)\:$</span> and <span class="math-container">$\:g(x)=x^2(x-1)$</span></li> <li><span class="math-container">$a=-1,b=1,c=-1\:$</span><br> where <span class="math-container">$f(x)=\left(x^2+1\right)(x-1)\:$</span> and <span class="math-container">$\:g(x)=(x+1)^2(x-1)$</span></li> </ul>
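<p>The nontrivial solution $a=-1,\,b=1,\,c=-1$ is easy to confirm numerically (my own sketch): the roots of $f(x)=x^3-x^2+x-1$ are $1,\,i,\,-i$, whose squares $1,\,-1,\,-1$ are exactly the roots of $g(x)=x^3+x^2-x-1=(x+1)^2(x-1)$.</p>

```python
import numpy as np

# f(x) = x^3 - x^2 + x - 1,  g(x) = x^3 + x^2 - x - 1  (a, b, c = -1, 1, -1)
f_roots = np.roots([1, -1, 1, -1])
g_roots = np.roots([1, 1, -1, -1])

assert abs(np.polyval([1, -1, 1, -1], 1.0)) < 1e-12   # condition 1: f(1) = 0
# condition 2: the roots of g are exactly the squares of the roots of f
assert np.allclose(np.sort_complex(f_roots**2), np.sort_complex(g_roots))
```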
1,125,891
<p>Let $U=&lt;\begin{pmatrix} 1 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0\end{pmatrix}, \begin{pmatrix}0 &amp; 0 &amp; -1 \\ 0 &amp; 1 &amp; 0\end{pmatrix}, \begin{pmatrix}2 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0 \end{pmatrix}&gt;$ and $V= &lt;\begin{pmatrix}2&amp; 0 &amp; 0 \\ 0 &amp; 2 &amp; 0 \end{pmatrix}, \begin{pmatrix} 0 &amp; 1 &amp; 1 \\ 0 &amp; 1 &amp; 1\end{pmatrix}$>. </p> <p>Let $f:U \to V$ such that </p> <p>$$f \begin{pmatrix} 1 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0\end{pmatrix}= \begin{pmatrix} 2 &amp; 1 &amp; 1 \\ 0 &amp; 3 &amp; 1\end{pmatrix}$$</p> <p>$$f \begin{pmatrix}0 &amp; 0 &amp; -1 \\ 0 &amp; 1 &amp; 0\end{pmatrix} = \begin{pmatrix} 4 &amp; 2 &amp; 2 \\ 0 &amp; 6 &amp; 2\end{pmatrix}$$</p> <p>$$f \begin{pmatrix}2 &amp; 0 &amp; 1 \\ 0 &amp; 1 &amp; 0 \end{pmatrix} = \begin{pmatrix} 1 &amp; 1/2 &amp; 1/2 \\ 0 &amp; 3/2 &amp; 1/2\end{pmatrix}.$$ </p> <p>I have to determine $ker(f)$ and $Im(f)$. Normally, if we work with $R^n$ and the standard basis, I've no problems, but this setting with matrices and non-common vector spaces confuses me: please, could you show me how to work out this problem?</p> <p>Thank you very much in advance.</p>
Bernard
202,857
<p>Let's denote the generators of $U$: $u_1$, $u_2$ and $u_3$ (in that order) and those of $V$: $v_1$ and $v_2$. Note $f(u_1)=v_1+v_2$ and that $f(u_2)$ and $f(u_3)$ are collinear with $f(u_1)$. With this information you can deduce $\operatorname{im} f$ and you can write the matrix of $f$ relative to the bases $(u_1,u_2,u_3)$ and $(v_1,v_2)$.</p> <p>Indeed \begin{align*}f(\lambda_1u_1+\lambda_2u_2+\lambda_3u_3)&amp;=\lambda_1f(u_1)+\lambda_2f(u_2)+\lambda_3f(u_3)\\ &amp;=\lambda_1(v_1+v_2)+2\lambda_2(v_1+v_2)+\frac12\lambda_3(v_1+v_2)\\ &amp;=\Bigl(\lambda_1+2\lambda_2+\frac12\lambda_3\Bigr)(v_1+v_2),\end{align*} which shows the image is contained in the subspace generated by $v_1+v_2$. Conversely, $v_1+v_2$ is attained by $u_1$. So $\operatorname{im} f =\langle v_1+v_2\rangle$.</p> <p>The above equation shows $\lambda_1u_1+\lambda_2u_2+\lambda_3u_3 \in \ker f$ iff $\,\lambda_1+2\lambda_2+\frac12\lambda_3=0$. It will be a subspace of dimension $2$. You may choose e.g. $-u_1+2u_3$ and $2u_1-u_2$, which are clearly independent, so that $$\ker f =\bigg\langle\begin{pmatrix}3&amp;0&amp;1\\ 0&amp;1&amp;0\end{pmatrix},\begin{pmatrix}2&amp;0&amp;3\\ 0&amp;1&amp;0\end{pmatrix}\bigg\rangle. $$</p>
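<p>A small numerical cross-check (my own sketch), writing elements of $U$ in coordinates with respect to $(u_1,u_2,u_3)$:</p>

```python
import numpy as np

u1 = np.array([[1, 0, 1], [0, 1, 0]])
u2 = np.array([[0, 0, -1], [0, 1, 0]])
u3 = np.array([[2, 0, 1], [0, 1, 0]])

# f maps u1, u2, u3 to 1, 2, 1/2 times the same matrix v1 + v2, so
# lam1*u1 + lam2*u2 + lam3*u3 lies in ker f iff lam1 + 2*lam2 + lam3/2 = 0.
coeffs = np.array([1.0, 2.0, 0.5])
for lam in ([-1.0, 0.0, 2.0], [2.0, -1.0, 0.0]):   # the two kernel generators
    assert coeffs @ np.array(lam) == 0

# The corresponding kernel matrices:
assert ((-u1 + 2 * u3) == np.array([[3, 0, 1], [0, 1, 0]])).all()
assert ((2 * u1 - u2) == np.array([[2, 0, 3], [0, 1, 0]])).all()
```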
204,087
<p>What would be an isomorphism between $\mathcal{F}(S;V / U)$ and $\mathcal{F}(S;V)/ \mathcal{F}(S;U)$, where $S$ is a set, $V$ a vector space and $U$ a subspace of $V$. $\mathcal{F}(A,B)$ denotes the set of functions from $A$ to $B$ and $V/U$ is the quotient space. </p>
Gerry Myerson
8,269
<p>You could take $\lambda_i={n\choose i}n^i$. Then your expression would be $$(a+abn)^n+(b+abn)^n-(abn)^n$$</p>
2,921,390
<blockquote> <p>Aashna and Radhika see the integers $1$ to $211$ written on a blackboard. They alternate turns and in every step each of them wipes out any $11$ numbers until only $2$ numbers are left on the blackboard. If the difference of these $2$ numbers (by subtracting the smaller from the larger) is $\geq 111$, the first player wins, otherwise the second. If you were Aashna, would you choose to play 1st or 2nd and why?</p> </blockquote> <p>By intuition only – I don’t see how I can prove it:</p> <p>Since there are 19 turns ($19\times 11=209$ numbers + 2 on the blackboard), I would choose to play 1st. In my 1st move I would wipe out numbers 101 to 111 since these are the only ones that do not have a pair to meet the rule. Then for whichever numbers Radhika would remove, I would respond by removing the numbers that would have a difference of 111, for example for 92 I wipe out 203 etc. If Radhika chose to remove pairs with difference equal to 111, there would still be a single number, for which I would remove its pair and then I would also remove pairs with difference 111 or 112 and so on.</p> <p>Does this method guarantee a win?</p>
user1234567890
492,496
<p>If you choose to play first you would have 10 turns. -> 10 x 11 = 110</p> <p>If you choose to play second you would have 9 turns. -> 9 x 11 = 99 </p> <p>If you are second and remove every high number starting with 211 while the first removes the low numbers you would stop at 211 - 99 = 112.</p> <p>The last two remaining numbers would be 112 and one number (between 1 and 112). </p> <p>The first would then only win if that number is 1, because 112 - 1 = 111 (which is >= 111).</p> <p>If instead you play first you could remove 110 numbers and you would win if the difference is >= 111. <strong>There are only 211-111 = 100 numbers with a difference >= 111.</strong> Remove all of them and you win.</p> <p>So your chance of winning by playing first is <strong>100%.</strong> </p>
1,318,880
<p>I'm trying to prove that $\operatorname{lcm}(n,m) = nm/\gcd(n,m)$. I showed that both $n$ and $m$ divide $nm/\gcd(n,m)$, but I can't prove that it is the smallest such number. Any help will be appreciated.</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$\,\ n,m\mid j \!\iff\! nm\mid nj,mj\!$</span> <span class="math-container">$\overset{\ \rm\color{darkorange}U}\iff\! nm\mid (nj,mj) \overset{\ \rm \color{#0a0}D_{\phantom |}}= (n,m)j\!$</span> <span class="math-container">$\iff\! nm/(n,m)\mid j$</span></p> <p>where above we have applied <span class="math-container">$\,\rm \color{darkorange}U = $</span> <a href="https://math.stackexchange.com/a/1189430/242">GCD Universal Property</a> and <span class="math-container">$\,\rm\color{#0a0} D =$</span> <a href="https://math.stackexchange.com/a/705874/242">GCD Distributive Law</a>.</p> <p><strong>Remark</strong> <span class="math-container">$ $</span> If we bring to the fore implicit <span class="math-container">$\rm\color{#0a0}{cofactor\ reflection}$</span> symmetry we obtain a simpler proof: <span class="math-container">$ $</span> it is easy to show <span class="math-container">$\,d\,\color{#0a0}\mapsto\, mn/d\,$</span> bijects common divisors of <span class="math-container">$\,m,n\,$</span> with common multiples <span class="math-container">$\le mn.$</span> Being order-<span class="math-container">$\rm\color{#c00}{reversing}$</span>, it maps the <span class="math-container">$\rm\color{#c00}{Greatest}$</span> common divisor to the <span class="math-container">$\rm\color{#c00}{Least}$</span> common multiple, i.e. <span class="math-container">$\,{\rm\color{#c00}{G}CD}(m,n)\,\color{#0a0}\mapsto\, mn/{\rm GCD}(m,n) = {\rm \color{#c00}{L }CM}(m,n).\,$</span></p> <p>See <a href="https://math.stackexchange.com/a/3055362/242">here</a> and <a href="https://math.stackexchange.com/search?q=user%3A242+lcm+gcd+involution">here</a> more on this <span class="math-container">$\:\!\rm\color{#0a0}{involution\ (reflection)}$</span> symmetry at the heart of gcd, lcm duality.</p>
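<p>A brute-force spot check of the identity (my own sketch):</p>

```python
from math import gcd

def lcm(n, m):
    """Least common multiple via the gcd identity."""
    return n * m // gcd(n, m)

def lcm_naive(n, m):
    """Smallest common multiple, found by direct search."""
    k = max(n, m)
    while k % n or k % m:
        k += 1
    return k

for n in range(1, 30):
    for m in range(1, 30):
        assert lcm(n, m) == lcm_naive(n, m)
```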
3,068,270
<p>Given that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are real constants and that the equation <span class="math-container">$x^4+ax^3+2x^2+bx+1=0$</span> has at least one real root, find the minimum possible value of <span class="math-container">$a^2+b^2$</span>.</p> <p>I began this way: Let the polynomial be factorized as <span class="math-container">$(x^2+\alpha x + 1)(x^2+\beta x +1)$</span>. Then expanding and comparing coefficients we get <span class="math-container">$\alpha\beta=0$</span>, meaning either <span class="math-container">$\alpha=0$</span> or <span class="math-container">$\beta=0$</span>. Suppose <span class="math-container">$\alpha=0$</span>. Then we see that <span class="math-container">$(x^2+\beta x+1)$</span> should have real roots, from which we get <span class="math-container">$\beta^2 \geq 4$</span>. But we get <span class="math-container">$a=b=\beta$</span> from the comparison above. So <span class="math-container">$a^2+b^2 = 2\beta^2 \geq 8$</span>.</p> <p>Is it correct? Or is there any mistake? Any other solution is also welcome.</p>
vadim123
73,324
<p><span class="math-container">$$x^4+ax^3+2x^2+bx+1=$$</span> <span class="math-container">$$=(x^2+\frac{a}{2}x)^2-\frac{a^2}{4}x^2+2x^2+bx+1=$$</span> <span class="math-container">$$=(x^2+\frac{a}{2}x)^2-\frac{a^2}{4}x^2+2x^2-\frac{b^2}{4}x^2+(\frac{b}{2}x+1)^2=$$</span> <span class="math-container">$$=(x^2+\frac{a}{2}x)^2+x^2(2-\frac{a^2+b^2}{4})+(\frac{b}{2}x+1)^2$$</span></p> <p>If <span class="math-container">$2-\frac{a^2+b^2}{4}&gt;0$</span>, then we have a problem; the sum of three nonnegative terms equals zero (at our real root), which can only happen if <span class="math-container">$x=0$</span> to make the second term zero. But then <span class="math-container">$(\frac{b}{2}x+1)^2&gt;0$</span>. So if <span class="math-container">$a^2+b^2&lt;8$</span> there is no real root. Hence <span class="math-container">$a^2+b^2\ge 8$</span>.</p> <p>Now, if <span class="math-container">$a^2+b^2=8$</span>, then the middle term is zero. Can we make the other two terms zero as well? We need <span class="math-container">$x=-\frac{a}{2}$</span> and <span class="math-container">$x=-\frac{2}{b}$</span>, to make the first and last terms zero. This can be achieved, with <span class="math-container">$a=b=2$</span> and <span class="math-container">$x=-1$</span>.</p> <p>Hence, the minimum possible value of <span class="math-container">$a^2+b^2$</span> is indeed <span class="math-container">$8$</span>.</p>
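<p>A quick numerical check of both parts of the argument (my own sketch): the minimum is attained at $a=b=2$ with root $x=-1$, and on a sample circle with $a^2+b^2=4&lt;8$ every root stays genuinely complex.</p>

```python
import numpy as np

# a = b = 2 attains a^2 + b^2 = 8 with the real root x = -1:
assert abs(np.polyval([1, 2, 2, 2, 1], -1.0)) < 1e-12

# Below the bound there should be no real root; sample the circle a^2 + b^2 = 4.
for theta in np.linspace(0.0, 2 * np.pi, 50):
    a, b = 2 * np.cos(theta), 2 * np.sin(theta)
    roots = np.roots([1, a, 2, b, 1])
    assert np.abs(roots.imag).min() > 1e-4   # every root off the real axis
```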
2,337,357
<p><a href="https://i.stack.imgur.com/iWuCv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iWuCv.png" alt="enter image description here"></a></p> <p>The derivative of $\frac1x$ is $\frac{-1}{x^2}$.</p> <p>How do I find the $c$ if there is no zero in the derivative of the function?</p> <p>I started with $-1/x^2= -0.0625$ but I'm confused from here on.</p>
user164144
458,090
<p>Assuming that the rolls are independent, the number X of dice showing a particular fixed face when rolling 3 twenty-sided dice may be modelled as a binomially distributed variable with n=3 and p=1/20. You are looking for P(X>1), the probability that at least 2 of the dice show that face; this is 0.00725. Since there are 20 possible faces, you multiply this by 20, which gets you 0.145 (14.5%).</p>
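<p>The final 14.5% can be double-checked by exhaustive enumeration (my own sketch): "at least two outcomes the same" is the complement of all three faces being distinct.</p>

```python
from itertools import product

# Exhaustive enumeration of all 20^3 ordered rolls of three d20s.
total = hits = 0
for roll in product(range(1, 21), repeat=3):
    total += 1
    hits += len(set(roll)) < 3        # at least two faces coincide
p = hits / total
assert total == 8000 and hits == 1160
assert abs(p - 0.145) < 1e-12         # matches 20 * 0.00725 from the answer
```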
1,549,506
<p>To Prove $$\lim_{x \to 0}\frac{x^2+2\cos x-2}{x\sin^3 x}=\frac{1}{12}$$ I tried with L'Hospital rule but in vain.</p>
Olivier Oloa
118,798
<p><strong>Hint</strong>. You may use Taylor expansions, as $x \to 0$, $$ \cos x =1-\frac{x^2}{2}+\frac{x^4}{24}+O(x^6) $$ $$ \sin x =x+O(x^3) $$ giving $$ x^2+2\cos x-2=\frac{x^4}{12}+O(x^6) $$ $$ x\sin^3 x =x^4+O(x^6) $$ and</p> <blockquote> <p>$$ \frac{x^2+2\cos x-2}{x\sin^3 x}=\frac{\frac{x^4}{12}+O(x^6)}{x^4+O(x^6)}=\frac{1}{12}+O(x^2) $$</p> </blockquote>
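As a numerical cross-check of this expansion (my own addition), evaluating the ratio at a small $x$ already lands close to $1/12$:

```python
from math import cos, sin

def ratio(x):
    # the expression whose limit at 0 is 1/12
    return (x**2 + 2*cos(x) - 2) / (x * sin(x)**3)

r = ratio(0.05)
print(r, 1/12)  # agree to within about 1e-4
```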
1,549,506
<p>To Prove $$\lim_{x \to 0}\frac{x^2+2\cos x-2}{x\sin^3 x}=\frac{1}{12}$$ I tried with L'Hospital rule but in vain.</p>
nicole
165,997
<p>L'Hopital's rule may be used repeatedly until you find an answer:</p> <p>$$\lim_{x \to 0}\frac{x^2+2\cos x-2}{x\sin^3 x}=\lim_{x \to 0}\frac{2x-2\sin x}{\sin^2 x(\sin x+3x\cos x)}$$</p> <p>$$= \lim_{x \to 0}\frac{2-2\cos x}{ 6\sin ^2(x)\cos (x) - 3x(\sin^3(x)-2\sin(x)\cos^2(x))}$$</p> <p>$$= \lim_{x \to 0}\frac{2\sin (x)}{ 3(-3\sin^3 (x) + 6\cos^2(x)\sin(x) + x(2\cos^3(x)-7\cos(x)\sin^2(x)))}$$</p> <p>$$= \lim_{x \to 0}\frac{2\cos(x)}{ 24\cos^3(x)-84\sin^2(x)\cos(x)-x(60\sin(x)\cos^2(x)-21\sin^3(x))}$$</p> <p>$$= \frac{2\cos(0)}{ 24\cos^3(0)-84\sin^2(0)\cos(0)-0\cdot(60\sin(0)\cos^2(0)-21\sin^3(0))}$$</p> <p>$$=\frac{2}{24}=\frac{1}{12}$$</p>
3,725,638
<p>The exterior derivative of a scalar function is</p> <p><span class="math-container">$d f(x,y,z) = ( \frac{\partial f}{\partial x} dx + \frac{\partial f}{\partial y} dy + \frac{\partial f}{\partial z} dz )$</span></p> <p>Am I correct in assuming then that</p> <p><span class="math-container">$d\left( F_x(x,y,z) e_x + F_y(x,y,z) e_y + F_z(x,y,z) e_z \right)$</span></p> <p>would be</p> <p><span class="math-container">$\left( \frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y} \right) dx \wedge dy + \left( \frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) dz \wedge dx + \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) dy \wedge dz$</span></p>
azif00
680,927
<p><span class="math-container">$$d(F_xdx + F_ydy + F_zdz) = dF_x \wedge dx + dF_y \wedge dy + dF_z \wedge dz$$</span> and use that <span class="math-container">$$dF = \frac{\partial F}{\partial x}dx + \frac{\partial F}{\partial y}dy + \frac{\partial F}{\partial z}dz$$</span> according to your first claim.</p>
483,173
<p>A non-zero matrix $A$ is said to be nilpotent if $A^k=0$ for some positive integer $k\geq2$. If $A$ is nilpotent, is $I+A$ invertible? Here $I$ is the identity matrix.</p>
DonAntonio
31,254
<p>Hint: Suppose $\,A^k=0\;$ , then:</p> <p>$$(A-I)(A^{k-1}+A^{k-2}+...+A+I)=A^k-I\ldots$$</p> <p>...but you have $\,A\color{red}+I\;$ , so what to do...? :)</p>
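To see the hint in action (an illustrative sketch of my own): applying the same factorization to $-A$, which is also nilpotent, gives $(I+A)^{-1}=I-A+A^2-\cdots+(-1)^{k-1}A^{k-1}$. A tiny check with a $2\times2$ nilpotent matrix in plain Python:

```python
# A = [[0,1],[0,0]] satisfies A^2 = 0, so (I + A)^(-1) should equal I - A.

def matmul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
I_plus_A = [[1, 1], [0, 1]]     # I + A
I_minus_A = [[1, -1], [0, 1]]   # I - A

product = matmul(I_plus_A, I_minus_A)
print(product)  # [[1, 0], [0, 1]] -- the identity, as expected
```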
3,836,662
<p>I understand basic group theory. I would say that I've seen most of the standard stuff up to, say, the quotient group.</p> <p>I feel like I've seen in more than one place the suggestion that group theory is the study of symmetries, or actions that leave something (approximately) unchanged. Unfortunately I can only find a couple sources. At 0:49 in this <a href="https://www.youtube.com/watch?v=mH0oCDa74tE" rel="nofollow noreferrer">3 Blue 1 Brown video</a>, the narrator says &quot;[Group theory] is all about codifying the idea of symmetry.&quot; The whole video seems to be infused with the idea that every group represents the symmetry of something.</p> <p>In <a href="https://www.youtube.com/watch?v=ihMyW7Z5SAs" rel="nofollow noreferrer">this video</a> about the Langlands Program, the presenter discusses symmetry as a lead-in to groups beginning around 33:00. I don't know if he actually describes group theory as being about the study of symmetry, but the general attitude seems pretty similar to that of the previous video.</p> <p>This doesn't jive with my intuition very well. I can see perfectly well that <em>part</em> of group theory has to do with symmetries: one only has to consider rotating and flipping a square to see this. But is <em>all</em> of group theory about symmetry? I feel like there must be plenty of groups that have nothing to do with symmetry. Am I wrong?</p>
Community
-1
<p>In the case that <span class="math-container">$H\triangleleft G$</span>, this follows immediately from the second isomorphism theorem.</p> <p>But, actually your result is well known, and called the <em>product formula</em>. Neither <span class="math-container">$H$</span> nor <span class="math-container">$K$</span> is required to be normal. See &quot;Product of group subsets - Wikipedia&quot; <a href="https://en.m.wikipedia.org/wiki/Product_of_group_subsets" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Product_of_group_subsets</a></p>
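To illustrate the product formula $|HK|=|H|\,|K|/|H\cap K|$ (a small enumeration of my own), take $G=\mathbb{Z}_6$ under addition, $H=\{0,2,4\}$ and $K=\{0,3\}$:

```python
# Z_6 under addition mod 6; neither subgroup is normalized in any special way here.
H = {0, 2, 4}
K = {0, 3}

HK = {(h + k) % 6 for h in H for k in K}       # the product set H + K
lhs = len(HK)
rhs = len(H) * len(K) // len(H & K)            # |H||K| / |H ∩ K|
print(lhs, rhs)  # 6 6
```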
3,974,768
<p>Suppose <span class="math-container">$a_n \rightarrow a$</span>, <span class="math-container">$b_n \rightarrow b$</span> and <span class="math-container">$ a &lt; b$</span>. Is it true that there is <span class="math-container">$ N \in \mathbb{N} $</span> such that for any <span class="math-container">$ m &gt; N $</span>, <span class="math-container">$a_m &lt; b_m $</span>?</p> <p>I think it's trivial but couldn't prove it.</p>
Martin R
42,969
<p>Assume that <span class="math-container">$z$</span> is a root of the equation with <span class="math-container">$|z| \ge 2$</span>. Then <span class="math-container">$$ 1 = \left|\frac{7z^4+4z+1}{z^7} \right| \le \left|\frac{7}{z^3}\right| + \left|\frac{4}{z^6}\right| +\left| \frac{1}{z^7} \right| \\ \le \frac{7}{8} + \frac{4}{64} + \frac{1}{128} = \frac{121}{128} &lt; 1 $$</span> gives a contradiction.</p> <p>The idea is to show that if <span class="math-container">$|z| \ge 2$</span> then the modulus of <span class="math-container">$z^7$</span> is larger than the modulus of <span class="math-container">$7z^4+4z+1$</span>, so that the sum of these terms can not be zero.</p> <p><em>Remark:</em> According to <a href="https://www.wolframalpha.com/input/?i=solve+z%5E7+%2B+7*z%5E4+%2B+4*z+%2B+1+%3D+0" rel="noreferrer">WolframAlpha</a>, the equation <span class="math-container">$z^7 + 7z^4+4z+1 = 0$</span> has one solution <span class="math-container">$z \approx -1.86283$</span>, so that the estimate <span class="math-container">$|z| &lt; 2$</span> for all solutions is not bad.</p>
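The boundary estimate can be visualized numerically (my own sketch, not a proof): sampling the circle $|z|=2$ shows that $|7z^4+4z+1|$ peaks at $121$ (at $z=2$), safely below $|z^7|=128$:

```python
from cmath import exp, pi

# sample 1000 points on the circle |z| = 2
samples = [2 * exp(2j * pi * k / 1000) for k in range(1000)]
max_low_order = max(abs(7*z**4 + 4*z + 1) for z in samples)
print(max_low_order)  # 121.0, attained at z = 2; always below |z^7| = 128
```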
3,974,768
<p>Suppose <span class="math-container">$a_n \rightarrow a$</span>, <span class="math-container">$b_n \rightarrow b$</span> and <span class="math-container">$ a &lt; b$</span>. Is it true that there is <span class="math-container">$ N \in \mathbb{N} $</span> such that for any <span class="math-container">$ m &gt; N $</span>, <span class="math-container">$a_m &lt; b_m $</span>?</p> <p>I think it's trivial but couldn't prove it.</p>
offline
556,883
<p>This question can be solved using Rouché's Theorem. Consider the holomorphic function</p> <p><span class="math-container">\begin{align} h(z) := f(z) + g(z) = z^7 + 7z^4 + 4z + 1 \end{align}</span> with <span class="math-container">$f(z) = z^7$</span> and <span class="math-container">$g(z) = 7z^4+4z+1$</span>. Consider <span class="math-container">$K := \{z \in \mathbb{C}: |z| &lt; 2\}$</span>. Then <span class="math-container">$\partial K$</span> is a circle of radius <span class="math-container">$2$</span> centered at the origin. On <span class="math-container">$\partial K$</span>, that means for <span class="math-container">$|z| = 2$</span>, we have</p> <p><span class="math-container">\begin{align} |g(z)| = |7z^4+4z+1| \leq 7|z|^4 + 4|z| + 1 = 121 &lt; 128 = |z|^7 = |f(z)|. \end{align}</span></p> <p>Then, by Rouché's Theorem, we have that <span class="math-container">$f$</span> and <span class="math-container">$f+g$</span> have the same number of zeros inside <span class="math-container">$K$</span>. Since <span class="math-container">$f$</span> has all its seven zeros inside <span class="math-container">$K$</span>, so does <span class="math-container">$f+g$</span>, and we are done.</p>
20,470
<p>I couldn't find a more descriptive title, but I guess an example will explain my problem.</p> <p>I set up some customized <code>Grid</code> function including some additional functionalities which I control with custom Options. Additionally, I would like to change some of the standard <code>Grid</code> Options, e.g. always use <code>Frame-&gt;All</code>. Take the following working example:</p> <pre><code>Options[myGrid] = {Frame -&gt; All, "Tooltip" -&gt; False}; myGrid[content_, opts : OptionsPattern[]] := Module[{con}, If[OptionValue["Tooltip"], con = MapIndexed[Tooltip[#1, #2] &amp;, content, {-1}], con = content ]; Grid[con, Sequence @@ FilterRules[{opts}~Join~Options[myGrid], Options[Grid]] ] ] </code></pre> <p>defining an example content:</p> <pre><code>mat = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}}; </code></pre> <p>We can test the behavior:</p> <pre><code>myGrid[mat] </code></pre> <blockquote> <p><img src="https://i.stack.imgur.com/NTsTf.png" alt="Mathematica graphics"></p> </blockquote> <p>The custom "Tooltip" flag works as intended. Now I want to pass an <code>Option</code> to <code>Grid</code>, that has not been explicitely set in the above <code>Options[myGrid]</code> declaration.This eventually makes it through to the Grid, but produces an error message.</p> <pre><code>myGrid[mat, Background -&gt; Blue] </code></pre> <p><img src="https://i.stack.imgur.com/M1mow.png" alt="Mathematica graphics"></p> <p><img src="https://i.stack.imgur.com/PovKV.png" alt="Mathematica graphics"></p> <p>To get rid of the errors I embed the Options from Grid into my custom function:</p> <pre><code>Options[myGrid] = Join[ {Frame -&gt; All, "Tooltip" -&gt; False}, Options[Grid] ]; </code></pre> <p>Now, I can change the Grid Options without raising an error:</p> <pre><code>myGrid[mat, Background -&gt; Green] </code></pre> <p><img src="https://i.stack.imgur.com/lDbt9.png" alt="Mathematica graphics"></p> <p>but the custom setting <code>Frame-&gt;All</code> gets lost. 
</p> <pre><code>myGrid[mat, Frame -&gt; All] </code></pre> <p><img src="https://i.stack.imgur.com/PfSWh.png" alt="Mathematica graphics"></p> <p>Apparently, the default <code>Frame-&gt;None</code> setting for <code>Grid</code> overrules my custom setting. I banged my head against this problem for too long already, therefore my plea for your assistance.</p>
Leonid Shifrin
81
<p>The problem in this particular case comes from the default setting of the <code>Dividers</code> option, which overrides the <code>Frame</code> option settings. This does not strike me as a right behavior, or at least these options are not as orthogonal as they should be. This appears to fix the problem:</p> <pre><code>Options[myGrid] = Join[{Frame -&gt; All, "Tooltip" -&gt; False}, DeleteCases[Options[Grid], Dividers -&gt; _]]; </code></pre> <p>On a general note, however, I would add all options you may ever want to pass to some functions inside your function, explicitly as valid options of your function. If you find this too bothersome, you can, for this particular function (<code>myGrid</code>), switch back to good old <code>OptionQ</code> pattern:</p> <pre><code>myGrid[content_, opts___?OptionQ] := ... </code></pre> <p>at the expense of the short "magical" version of <code>OptionValue</code> not working any more, so you will have to use <code>OptionValue[myGrid,{opts},"Tooltip"]</code>. I do this sometimes, in exactly this sort of situations.</p>
366,589
<p>how do I solve this one? </p> <p>$$\lim_{q \to 0}\int_0^1{1\over{qx^3+1}} \, \operatorname{d}\!x$$</p> <p>I tried substituting $t=qx^3+1$ which didn't work, and re-writing it as $1-{qx^3\over{qx^3+1}}$ and then substituting, but I didn't manage to get on. </p> <p>Thanks in advance! </p>
Wolphram jonny
43,048
<p>Try substituting $t=q^{1/3}x$. Or, you can put the limit inside the definite integral first (the easy way): for $|q|&lt;1$, the integrand $\frac{1}{qx^3+1}$ converges uniformly to $1$ on $[0,1]$ as $q\to 0$, so the limit equals $\int_0^1 1\,dx = 1$.</p>
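Numerically (my own sketch, using a hand-rolled Simpson's rule), the integral indeed tends to $1$ as $q\to 0$, consistent with exchanging the limit and the integral:

```python
def simpson(f, a, b, n=2000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*i * h) for i in range(1, n // 2))
    return s * h / 3

# each integral is roughly 1 - q/4 for small q
values = [simpson(lambda x, q=q: 1 / (q * x**3 + 1), 0, 1)
          for q in (0.1, 0.01, 0.001)]
print(values)  # increasing towards 1
```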
4,557,275
<p>I am studying measure theory from Cohn's Measure theory textbook. If <span class="math-container">$(X, \scr A, \mu)$</span> is a measure space then a subset <span class="math-container">$N$</span> of <span class="math-container">$X$</span> is said to be locally null if for every <span class="math-container">$A \in \scr A $</span> with <span class="math-container">$\mu (A) &lt; + \infty$</span>, we have that <span class="math-container">$A \cap N$</span> is null set. Also, subset <span class="math-container">$B$</span> of <span class="math-container">$X$</span> is said to be null set if there is a set <span class="math-container">$A \in \scr A$</span> such that <span class="math-container">$B \subset A$</span> and <span class="math-container">$\mu (A) = 0$</span>.</p> <p>Now, here's what I am trying to prove: if <span class="math-container">$f: X \to \mathbb C$</span> and <span class="math-container">$f=0$</span> locally almost everywhere, that is, <span class="math-container">$\{ x\in X : |f(x)| &gt;0 \}$</span> is locally null then <span class="math-container">$\int |f| \,d\mu = 0$</span>. This claim would be true if <span class="math-container">$f=0$</span> almost everywhere, that is, the <span class="math-container">$\{ x\in X : |f(x)| &gt;0 \}$</span> is null.</p> <p>This holds true if <span class="math-container">$X$</span> is <span class="math-container">$\sigma$</span>-finite. Because <span class="math-container">$\sigma$</span>-finiteness would imply that every locally null set is null and we would be done.</p> <p>The claim remains to be proved when <span class="math-container">$X$</span> is not <span class="math-container">$\sigma$</span>-finite. I tried my best to prove it but could not reach anywhere. Hints to prove or disprove it will be appreciated!</p>
geetha290krm
1,064,504
<p>You cannot prove this. <span class="math-container">$\int |f|d\mu=0$</span> <strong>if and only if</strong> <span class="math-container">$f=0$</span> a.e. Take a locally null set <span class="math-container">$A$</span> which is not a null set and take <span class="math-container">$f=\chi_A$</span>. This is a counter-example to your statement.</p>
4,557,275
<p>I am studying measure theory from Cohn's Measure theory textbook. If <span class="math-container">$(X, \scr A, \mu)$</span> is a measure space then a subset <span class="math-container">$N$</span> of <span class="math-container">$X$</span> is said to be locally null if for every <span class="math-container">$A \in \scr A $</span> with <span class="math-container">$\mu (A) &lt; + \infty$</span>, we have that <span class="math-container">$A \cap N$</span> is null set. Also, subset <span class="math-container">$B$</span> of <span class="math-container">$X$</span> is said to be null set if there is a set <span class="math-container">$A \in \scr A$</span> such that <span class="math-container">$B \subset A$</span> and <span class="math-container">$\mu (A) = 0$</span>.</p> <p>Now, here's what I am trying to prove: if <span class="math-container">$f: X \to \mathbb C$</span> and <span class="math-container">$f=0$</span> locally almost everywhere, that is, <span class="math-container">$\{ x\in X : |f(x)| &gt;0 \}$</span> is locally null then <span class="math-container">$\int |f| \,d\mu = 0$</span>. This claim would be true if <span class="math-container">$f=0$</span> almost everywhere, that is, the <span class="math-container">$\{ x\in X : |f(x)| &gt;0 \}$</span> is null.</p> <p>This holds true if <span class="math-container">$X$</span> is <span class="math-container">$\sigma$</span>-finite. Because <span class="math-container">$\sigma$</span>-finiteness would imply that every locally null set is null and we would be done.</p> <p>The claim remains to be proved when <span class="math-container">$X$</span> is not <span class="math-container">$\sigma$</span>-finite. I tried my best to prove it but could not reach anywhere. Hints to prove or disprove it will be appreciated!</p>
Oliver Díaz
121,671
<p>The statement in the OP's title is not quite correct. Here is a counterexample: consider the set <span class="math-container">$[0,1]$</span> equipped with the Borel <span class="math-container">$\sigma$</span>-algebra, and define the measure <span class="math-container">$\mu$</span> as <span class="math-container">$$\mu=\infty\cdot\delta_0+\mathbb{1}_{(0,1]}\cdot\lambda$$</span> where <span class="math-container">$\delta_0$</span> is the measure concentrated at <span class="math-container">$0$</span> and <span class="math-container">$\lambda$</span> is Lebesgue measure. Notice that <span class="math-container">$\mu(\{0\})=\infty$</span> and <span class="math-container">$\mu((0,1])=1$</span>. Let <span class="math-container">$f(x)=\mathbb{1}_{\{0\}}$</span>. The set <span class="math-container">$\{|f|&gt;0\}=\{0\}$</span> is locally null, since for any set <span class="math-container">$A$</span> with finite <span class="math-container">$\mu$</span> measure <span class="math-container">$\{|f|&gt;0\}\cap A=\emptyset$</span> and so <span class="math-container">$\mu\big(\{|f|&gt;0\}\cap A\big)=0$</span>. However <span class="math-container">$\int_{[0,1]}f\,d\mu=\infty$</span>.</p> <p>The issue with this measure is that there is an atom <span class="math-container">$\{0\}$</span> of infinite mass. Measures without this pathology are called semifinite measures. To be more precise</p> <blockquote> <p>A measure <span class="math-container">$\mu$</span> on a measurable space <span class="math-container">$(X,\mathscr{B})$</span> is semifinite if for any <span class="math-container">$B\in\mathscr{B}$</span> with <span class="math-container">$\mu(B)&gt;0$</span>, there is <span class="math-container">$A\in \mathscr{B}$</span> with <span class="math-container">$A\subset B$</span> and <span class="math-container">$0&lt;\mu(A)&lt;\infty$</span>.</p> </blockquote> <p>Any <span class="math-container">$\sigma$</span>-finite measure is semifinite.
We have the following result</p> <blockquote> <p>If <span class="math-container">$(X,\mathscr{B},\mu)$</span> is a semifinite measure space and <span class="math-container">$f$</span> is a locally null measurable function, then <span class="math-container">$\int|f|\,d\mu=0$</span>.</p> </blockquote> <p><strong>Here is a sketch of a proof:</strong> For any set <span class="math-container">$E\in\mathscr{B}$</span> define <span class="math-container">$$\mu_0(E)=\sup\{\mu(F): F\in\mathscr{B},\, F\subset E, \,\mu(F)&lt;\infty\}$$</span> It is not difficult to check that <span class="math-container">$\mu_0(E)=\mu(E)$</span> whenever <span class="math-container">$\mu$</span> is semifinite (see Exercises 14, 16 on page 27 of Folland, G., <em>Real Analysis</em>, 2nd edition, Wiley &amp; Sons). Thus, if <span class="math-container">$f$</span> is locally null, then <span class="math-container">$\mu(|f|&gt;0)=\mu_0(|f|&gt;0)=0$</span>; hence <span class="math-container">$\int|f|\,d\mu=\int_{\{|f|&gt;0\}}|f|\,d\mu=0$</span>.</p>
1,091,549
<p>How to compute $\int_{0}^{+\infty} \frac{dx}{e^{x+1} + e^{3-x}}$?</p> <p>My partial solution: $$ \int_{0}^{+\infty} \frac{dx}{e^{x+1} + e^{3-x}} = \int_{0}^{+\infty} \frac{dx}{e^{3-x}(1 + e^{2x-2})} \\ = \int_{0}^{+\infty} \frac{e^{x-3}dx}{1 + e^{2x-2}}. $$ Thank you very much.</p>
DeepSea
101,504
<p><strong>Hint:</strong> Put $t = e^{x-1} \to e^{2x-2} = t^2\to I = \dfrac{1}{e^2}\cdot \displaystyle \int_{e^{-1}}^\infty \dfrac{dt}{1+t^2}$</p>
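Carrying the hint one step further gives the closed form $I=\frac{1}{e^2}\left(\frac{\pi}{2}-\arctan\frac{1}{e}\right)$; a quick numerical check of my own agrees:

```python
from math import exp, atan, pi, e

def integrand(x):
    return 1 / (exp(x + 1) + exp(3 - x))

# crude trapezoid rule on [0, 40]; the integrand decays like e^(-x-1),
# so the tail beyond 40 is negligible
n = 40000
h = 40 / n
numeric = h * (sum(integrand(i * h) for i in range(1, n))
               + (integrand(0) + integrand(40)) / 2)

closed_form = (pi / 2 - atan(1 / e)) / e**2
print(numeric, closed_form)  # both about 0.1649
```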
3,126,080
<p>Is there any particular equation which doesn't work on the real plane of numbers but works on other planes?</p>
trancelocation
467,003
<p><span class="math-container">$1 + 1 = 0$</span> in <span class="math-container">$\mathbb{F}_2$</span></p> <p>(<span class="math-container">$\mathbb{F}_2$</span> is the field only consisting of <span class="math-container">$0$</span> and <span class="math-container">$1$</span> where <span class="math-container">$1$</span> is the additive inverse to itself.)</p> <p><span class="math-container">$1+1 \neq 0$</span> in <span class="math-container">$\mathbb{R}$</span></p>
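A trivial computational illustration of my own: addition in $\mathbb{F}_2$ is addition mod $2$, so $1+1=0$ there, while $1+1=2\neq 0$ in $\mathbb{R}$:

```python
# Addition in F_2 is integer addition reduced mod 2.
sum_in_F2 = (1 + 1) % 2
sum_in_R = 1 + 1
print(sum_in_F2, sum_in_R)  # 0 2
```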
2,651,486
<p>I have an exercise here that is asking me to write the MacLaurin formula of orders II, III, IV for a multivariable function. Example: $ f(x,y)=\cos x \cos y$</p> <p>Can anyone tell me what the formula looks like for a multivariable function and maybe guide me through this example? Would be much appreciated!</p>
user
505,767
<p><strong>HINT</strong></p> <p>You can expand separately</p> <ul> <li>$\cos x=1-\frac{x^2}2+\frac{x^4}{4!}+o(x^4)$</li> <li>$\cos y=1-\frac{y^2}2+\frac{y^4}{4!}+o(y^4)$</li> </ul> <p>and then multiply, taking the terms to the desired order.</p> <p>That is for order IV</p> <p>$$f(x,y)= \cos x \cdot \cos y =\left(1-\frac{x^2}2+\frac{x^4}{4!}+o(x^4)\right)\left(1-\frac{y^2}2+\frac{y^4}{4!}+o(y^4)\right)=\\=1-\frac{x^2}2-\frac{y^2}2+\frac{x^4}{4!}+\frac{y^4}{4!}+\frac{x^2y^2}4+o(|(x,y)|^4)$$</p> <p>Note that</p> <ul> <li>for order II: $f(x,y)= \cos x \cdot \cos y =1-\frac{x^2}2-\frac{y^2}2+o(|(x,y)|^2)$</li> <li>for order III: $f(x,y)= \cos x \cdot \cos y =1-\frac{x^2}2-\frac{y^2}2+o(|(x,y)|^3)$</li> </ul>
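A numerical cross-check of my own for the order-IV polynomial (note the cross term is $+\frac{x^2y^2}{4}$, coming from $\left(-\frac{x^2}{2}\right)\left(-\frac{y^2}{2}\right)$):

```python
from math import cos

def poly4(x, y):
    # order-IV Maclaurin polynomial of cos(x)*cos(y)
    return 1 - x**2/2 - y**2/2 + x**4/24 + y**4/24 + x**2*y**2/4

x = y = 0.1
err = abs(cos(x)*cos(y) - poly4(x, y))
print(err)  # about 4e-8: the remainder is O(|(x, y)|^6)
```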
3,648,315
<p><a href="https://i.stack.imgur.com/IKnlD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IKnlD.png" alt="enter image description here"></a></p> <p>I have an item, let's say sword. </p> <p>As described above, <br/> If my sword is at state 1, I can try upgrading it. There are 3 possibilities. <br/> It can be upgraded with prob = 0.3, remain still with prob = 0.68, can be destroyed with prob = 0.02.</p> <p>If my sword is at state 2, I still can try to upgrade it. <br/> It can be upgraded with prob = 0.3, can be downgraded to state 1 with prob = 0.68, can be destroyed with prob = 0.02.</p> <p>Once my sword destroyed, there is no turning back. <br/> Once my sword reached at state 3, no need to do something else. I'm done.</p> <p>I know it's a Markov chain problem. <br/> I can express this situation with matrix, and if I multiply it over and over, it can reach equilibrium state.</p> <pre><code>p2 = matrix(c(1, rep(0, 3), 0.02, 0.68, 0.3, 0, 0.02, 0.68, 0, 0.3, rep(0, 3), 1), 4, byrow = T) p2 ## [,1] [,2] [,3] [,4] ## [1,] 1.00 0.00 0.0 0.0 ## [2,] 0.02 0.68 0.3 0.0 ## [3,] 0.02 0.68 0.0 0.3 ## [4,] 0.00 0.00 0.0 1.0 matrix.power &lt;- function(A, n) { # For matrix multiplication e &lt;- eigen(A) M &lt;- e$vectors d &lt;- e$values return(M %*% diag(d^n) %*% solve(M)) } round(matrix.power(p2, 1000), 3) ## [,1] [,2] [,3] [,4] ## [1,] 1.000 0 0 0.000 ## [2,] 0.224 0 0 0.776 ## [3,] 0.172 0 0 0.828 ## [4,] 0.000 0 0 1.000 </code></pre> <p>But how can I get the <code>Pr(Reach state 3 without destroyed | currently at state 2)</code> using Markov chain?</p> <p>I could get <code>Pr(Reach state 2 without destroyed | currently at state 1)</code> by using sum of geometric series.</p> <p>Thank you.</p>
Oliver Clarke
373,486
<p>Let's assume we have a solution <span class="math-container">$(x,y)$</span>. First we make a substitution <span class="math-container">$y = x + a$</span> for some integer <span class="math-container">$a$</span>. Substituting gives <span class="math-container">$2x^2 +2x(a+1) + (a - 83) = 0$</span> which we can think of as a quadratic in <span class="math-container">$x$</span>.</p> <p>If a general quadratic <span class="math-container">$ax^2 + bx + c = 0$</span> has integer roots then we have that its discriminant <span class="math-container">$b^2 - 4ac$</span> is a square because it appears under the squareroot in the quadratic formula.</p> <p>So in our case, we have that <span class="math-container">$4(a+1)^2 - 4 \cdot 2 \cdot (a - 83)$</span> is a square. Simplifying gives us that <span class="math-container">$4(a^2 + 167)$</span> is a square. Suppose <span class="math-container">$a^2 + 167 = k^2$</span> for some integer <span class="math-container">$k$</span>, then <span class="math-container">$(k-a)(k+a) = 167$</span>. Since <span class="math-container">$167$</span> is prime we have <span class="math-container">$k = \pm 84 $</span> and <span class="math-container">$a = \pm 83$</span>. Note that <span class="math-container">$2k$</span> is a square root of the discriminant of our quadratic.</p> <p>So to find <span class="math-container">$x$</span> we plug <span class="math-container">$a$</span> into the quadratic formula. <span class="math-container">$$ x = \frac{-2(a+1) \pm 2k}{2 \cdot 2} = \frac{-2(\pm 83 + 1) \pm 2 \cdot 84}{4} = 0, 83, -84 \textrm{ or } -1. $$</span> So now we check the possible solutions <span class="math-container">$(x,y)$</span>. 
A trick we can use here is to note that the equation <span class="math-container">$x+y+2xy = 83$</span> is symmetric in <span class="math-container">$x$</span> and <span class="math-container">$y$</span> so the possible values for <span class="math-container">$y$</span> are also <span class="math-container">$0, 83, -84$</span> and <span class="math-container">$-1$</span>. Going through the options gives the solutions <span class="math-container">$(0,83), (83, 0), (-84,-1)$</span> and <span class="math-container">$(-1,-84)$</span>.</p>
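A brute-force search of my own over a modest range recovers the complete set of integer solutions of $x+y+2xy=83$:

```python
# Find all integer solutions of x + y + 2xy = 83 with |x|, |y| <= 200.
solutions = sorted((x, y)
                   for x in range(-200, 201)
                   for y in range(-200, 201)
                   if x + y + 2*x*y == 83)
print(solutions)  # [(-84, -1), (-1, -84), (0, 83), (83, 0)]
```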
3,648,315
<p><a href="https://i.stack.imgur.com/IKnlD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IKnlD.png" alt="enter image description here"></a></p> <p>I have an item, let's say sword. </p> <p>As described above, <br/> If my sword is at state 1, I can try upgrading it. There are 3 possibilities. <br/> It can be upgraded with prob = 0.3, remain still with prob = 0.68, can be destroyed with prob = 0.02.</p> <p>If my sword is at state 2, I still can try to upgrade it. <br/> It can be upgraded with prob = 0.3, can be downgraded to state 1 with prob = 0.68, can be destroyed with prob = 0.02.</p> <p>Once my sword destroyed, there is no turning back. <br/> Once my sword reached at state 3, no need to do something else. I'm done.</p> <p>I know it's a Markov chain problem. <br/> I can express this situation with matrix, and if I multiply it over and over, it can reach equilibrium state.</p> <pre><code>p2 = matrix(c(1, rep(0, 3), 0.02, 0.68, 0.3, 0, 0.02, 0.68, 0, 0.3, rep(0, 3), 1), 4, byrow = T) p2 ## [,1] [,2] [,3] [,4] ## [1,] 1.00 0.00 0.0 0.0 ## [2,] 0.02 0.68 0.3 0.0 ## [3,] 0.02 0.68 0.0 0.3 ## [4,] 0.00 0.00 0.0 1.0 matrix.power &lt;- function(A, n) { # For matrix multiplication e &lt;- eigen(A) M &lt;- e$vectors d &lt;- e$values return(M %*% diag(d^n) %*% solve(M)) } round(matrix.power(p2, 1000), 3) ## [,1] [,2] [,3] [,4] ## [1,] 1.000 0 0 0.000 ## [2,] 0.224 0 0 0.776 ## [3,] 0.172 0 0 0.828 ## [4,] 0.000 0 0 1.000 </code></pre> <p>But how can I get the <code>Pr(Reach state 3 without destroyed | currently at state 2)</code> using Markov chain?</p> <p>I could get <code>Pr(Reach state 2 without destroyed | currently at state 1)</code> by using sum of geometric series.</p> <p>Thank you.</p>
user21820
21,820
<p>Will Jagy's approach is obviously the correct one. However, there is another way that does not rely on finding the algebraic factorization:</p> <p><span class="math-container">$y·(2x+1) = 83-x$</span>.</p> <p><span class="math-container">$2x+1 \mid 83-x$</span>.</p> <p><span class="math-container">$2x+1 \mid 2·(83-x)+(2x+1) = 167$</span>.</p> <p>Now check the factors.</p>
471,710
<p>Why do small angle approximations only hold in radians? All the books I have say this is so but don't explain why.</p>
Andrew
946,023
<p>There are several correct ways to answer this that illuminate different aspects of what is going on (and I wouldn't be surprised if the answer is present on this site somewhere EDIT: indeed the other responses to this question are different ways of looking at it). Here is a geometrical answer.</p> <p>The basic formula is</p> <p>\begin{equation} l = r \theta \end{equation}</p> <p>where $l$ is the arc length of a segment of a circle with radius $r$ subtending an angle $\theta$. For this formula to be true, $\theta$ needs to be in radians. (Just try it out, is the arc length of the whole circle equal to $360 r$ or $2\pi r$?) In fact, that formula is really how you define what you mean by radians.</p> <p>Now consider a straight line segment connecting the two endpoints of the arc subtended by $\theta$. Call its length $a$. You can show by fiddling with triangles that \begin{equation} a=2r\sin\left(\frac{\theta}{2}\right) \end{equation}</p> <p>In the limit that the angle is small (so only a small piece of the circle is subtended), you should be able to convince yourself that $a\approx l$. This is the core of the small angle approximation. </p> <p>Using the two relationships above we have</p> <p>\begin{equation} r\theta \approx 2r \sin\left(\frac{\theta}{2}\right) \end{equation} or \begin{equation} \sin\left(\frac{\theta}{2}\right)\approx\frac{\theta}{2} \end{equation} You can see that using radians was crucial here because it allowed us to use $l=r\theta$.</p>
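The chord-versus-arc comparison is easy to check numerically (my own sketch); for a small angle <em>in radians</em> the two agree to high relative accuracy:

```python
from math import sin

r = 1.0
theta = 0.1                      # radians
arc = r * theta                  # l = r*theta
chord = 2 * r * sin(theta / 2)   # a = 2r*sin(theta/2)
rel_diff = abs(arc - chord) / arc
print(rel_diff)  # about 4e-4: chord ~ arc when theta is small (in radians)
```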
2,698,555
<p><strong>Question</strong></p> <p>Find all the rational values of $x$ at which $y=\sqrt{x^2+x+3}$ is also rational.</p> <p><strong>My attempt</strong></p> <p>Since we only have to find the rational values of $x$ and $y$, we can assume that $$ x \in Q$$ $$ y \in Q$$ $$ y-x \in Q $$ Let $$ d = y-x$$ $$d=\sqrt{x^2+x+3}-x$$ $$d+x=\sqrt{x^2+x+3}$$ $$(d+x)^2=(\sqrt{x^2+x+3})^2$$ $$d^2 + x^2 + 2dx =x^2+x+3$$ $$d^2 +2dx = x +3$$ $$x = \frac{3-d^2}{2d-1}$$</p> <p>$$d \neq \frac{1}{2}$$</p> <p>So $x$ will be rational as long as $d \neq \frac{1}{2}$.</p> <p>Now $$ y = \sqrt{x^2+x+3}$$ $$ y = \sqrt{(\frac{3-d^2}{2d-1})^2 + \frac{3-d^2}{2d-1} + 3}$$ $$ y = \sqrt{\frac{(3-d^2)^2}{(2d-1)^2} + \frac{(3-d^2)(2d-1)}{(2d-1)^2} + 3\frac{(2d-1)^2}{(2d-1)^2}}$$ $$ y = \sqrt{\frac{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}{(2d-1)^2}} $$ $$ y = \frac{\sqrt{(3-d^2)^2 + (3-d^2)(2d-1) + 3(2d-1)^2}}{(2d-1)}$$ $$ y = \frac{\sqrt{d^4-2d^3+7d^2-6d+9}}{(2d-1)}$$</p> <p>I know that again $d \neq \frac{1}{2}$ but I don't know what to do with the numerator. Help</p>
mbomb007
206,300
<p>The red area $R$ is the area of the big quarter circle minus the internal white and blue areas. The radius of the larger circle is 2*r. So the area of the big quarter circle is:</p> <p>$$ Q = \frac {\pi(2r)^2}{4} = \pi r^2 $$</p> <p>Visually, this is equivalent to a quarter of each small circle (together, these are half a small circle) plus $r^2$ (the area of the small square in the lower-left corner) plus the red area:</p> <p>$$ Q = \pi r^2 = \frac {\pi r^2}{2} + r^2 + R $$</p> <p>Therefore:</p> <p>$$ R = \frac {\pi r^2}{2} - r^2 $$</p> <hr> <p>The blue area is equal to the area of the small square $r^2$ minus 2 times a smaller unknown white area, which we will call $S$. $S$ is equal to the area of the small square minus a quarter of the small circle.</p> <p>$$ B = r^2 - 2S $$</p> <p>$$ S = r^2 - \frac {\pi r^2}{4} $$</p> <p>Simplify:</p> <p>$$ B = r^2 - 2 \cdot (r^2 - \frac {\pi r^2}{4}) $$</p> <p>$$ B = r^2 - 2r^2 + \frac {\pi r^2}{2} $$</p> <p>$$ B = \frac {\pi r^2}{2} - r^2 $$</p>
1,639,521
<p>Let <span class="math-container">$f:[0,\infty)\to\mathbb{R}$</span> differentiable and suppose that <span class="math-container">$$\lim_{x\to\infty}f'(x)=L.$$</span> How can I prove that <span class="math-container">$$\lim_{x\to\infty}\frac{f(x)}{x} = L\;?$$</span></p> <p>I have solved some similar problems using the Mean Value Theorem, and I am trying to use it again in this one, but nothing works. For example, I tried to apply the MVT in <span class="math-container">$[x, 2x]$</span> but it does not work. Some hint?</p>
Clement C.
75,808
<p><em>From the basic definition of convergence, with $\varepsilon$'s:</em></p> <p>Fix $\varepsilon &gt; 0$. By definition, there exists $a \geq 0$ such that, for all $ x\geq a$, $L-\varepsilon \leq f^\prime(x) \leq L+\varepsilon$.</p> <p>For $x\geq a$, write $$ f(x) - f(a) = \int_a^x f^\prime $$ which gives $$ (L-\varepsilon)(x-a) \leq f(x) - f(a) \leq (L+\varepsilon)(x-a) $$ or, equivalently, $$ (L-\varepsilon)\left(1-\frac{a}{x}\right) + \frac{f(a)}{x} \leq \frac{f(x)}{x} \leq (L+\varepsilon)\left(1-\frac{a}{x}\right) + \frac{f(a)}{x}. $$</p> <p>Since $\frac{a}{x}\xrightarrow[x\to\infty]{} 0$ and $\frac{f(a)}{x}\xrightarrow[x\to\infty]{} 0$, there exists $b\geq 0$ such that, for $x\geq b$ we have $\lvert \frac{f(a)}{x}\rvert, \lvert \frac{a}{x}\rvert \leq \varepsilon$. Observe that both bounds in the last display differ from $L$ by $-L\,\frac{a}{x} \pm \varepsilon\left(1-\frac{a}{x}\right)$, plus the term $\frac{f(a)}{x}$. It follows that for $x\geq \max(a,b)$,</p> <p>$$ \left\lvert \frac{f(x)}{x} - L \right\rvert \leq \lvert L\rvert\,\frac{a}{x} + \varepsilon\left(1-\frac{a}{x}\right) + \left\lvert \frac{f(a)}{x} \right\rvert \leq \lvert L\rvert\,\varepsilon + \varepsilon + \varepsilon = (\lvert L\rvert + 2)\,\varepsilon. $$ Since the constant $\lvert L\rvert + 2$ does not depend on $\varepsilon$, and $\varepsilon &gt; 0$ was arbitrary, this shows that $\frac{f(x)}{x}\xrightarrow[x\to\infty]{} L$.</p>
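As a purely numerical illustration (my own, not part of the proof): for $f(x)=2x+\sqrt{x}$ one has $f'(x)=2+\frac{1}{2\sqrt{x}}\to 2$, and $f(x)/x$ approaches the same limit:

```python
from math import sqrt

def f(x):
    # f'(x) = 2 + 1/(2*sqrt(x)) -> 2 as x -> infinity
    return 2*x + sqrt(x)

x = 1e8
quotient = f(x) / x
print(quotient)  # 2.0001: f(x)/x -> 2 = lim f'(x)
```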
569,484
<p>A and B are events in a sample space with $p(A) &gt; 0$ and $p(B) &gt; 0$. Write $p(A|B)$ for the conditional probability of $A$ given that $B$ has occurred.</p> <p>1) If $p(A|B) &lt; p(A)$, show that $p(B|A) &lt; p(B)$ </p> <p>2) Show that $p(A|B) ≥ \frac{p(A)+p(B)-1}{p(B)}$</p> <p>For the first part: I subbed P(B|A) &lt; P(B) into P(B|A)= P(A and B)/P(A) to get P(A and B)/P(A) &lt; P(B) so basically P(A and B) &lt; P(B). I think that is the correct way for part 1 but need confirmation if possible and i'm not quite sure how to start part 2. Any help or tips would be much appreciated.</p>
MasterOfBinary
101,063
<p>For 1, Start with what you know: $P(A|B) &lt; P(A)$ and $P(A|B) = \frac{P(A \cap B)}{P(B)}$.</p> <p>So do a bit of substitution and see what you get:</p> <p>$$\frac{P(A \cap B)}{P(B)} &lt; P(A)$$</p> <p>You might notice something there that you can work with. Remember the formula for $P(B|A)$.</p>
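For intuition on part 2, note that it reduces to $P(A \cap B) \geq P(A) + P(B) - 1$, which holds because $P(A \cup B) \leq 1$. A brute-force spot check of the bound over random events on a small finite sample space (an illustration, not a proof):

```python
import random

def check_bound(trials=1000, seed=0):
    """Spot-check P(A|B) >= (P(A) + P(B) - 1) / P(B) for random events
    A, B on a 10-point sample space with uniform measure."""
    rng = random.Random(seed)
    for _ in range(trials):
        A = {i for i in range(10) if rng.random() < 0.6}
        B = {i for i in range(10) if rng.random() < 0.6}
        if not A or not B:
            continue  # the bound assumes P(A), P(B) > 0
        pA, pB = len(A) / 10, len(B) / 10
        if len(A & B) / len(B) < (pA + pB - 1) / pB - 1e-12:
            return False
    return True

assert check_bound()
```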
128,533
<p>For the description, I have a simplified problem like the following:</p> <pre><code>MapAt[f[1, #1], {a, b, c, d}, #2] &amp; @@@ {{1, 2}, {3, 4}} </code></pre> <p>will give</p> <blockquote> <pre><code>{{a, f[1, 1][b], c, d}, {a, b, c, f[1, 3][d]}} </code></pre> </blockquote> <p>But actually <code>{a, f[1, 1][b], c, f[1, 3][d]}</code> is what I expected. What happened? How should I adjust the code?</p> <hr> <p><strong>Update:</strong></p> <p>My real case is</p> <pre><code>bigList = Range @ 10; veryBigList = {{1, 3}, {1, 4}, {2, 7}, {2, 9}, {4, 10}}; Function[{binLevel, place}, MapAt[BitSet[#, binLevel] &amp;, bigList, place]] @@@ veryBigList </code></pre> <blockquote> <p>{{1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 6, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, {1, 2, 3, 4, 5, 6, 7, 8, 13, 10}, {1,<br> 2, 3, 4, 5, 6, 7, 8, 9, 26}}</p> </blockquote> <p>In my case, the list is very large. If I use <code>Fold</code>, it acts on the whole big result list at every step, which is costly in RAM, so I want to avoid using <code>Fold</code>.</p>
Ali Hashmi
27,331
<p><strong>This may be the fastest way in my analysis so far and may compete with <code>Fold</code> or perhaps perform better</strong></p> <pre><code>Module[{m = Range@10}, SetAttributes[func, HoldFirst]; func[x_, {}] := x; func[x_, y_] := (x[[y[[1, 2]] ]] = BitSet[x[[y[[1, 2]]]], y[[1, 1]]]; func[x, Rest@y]); func[m, {{1, 3}, {1, 4}, {2, 7}, {2, 9}, {4, 10}}] // AbsoluteTiming ] (* {0.0000401216, {1, 2, 3, 6, 5, 6, 7, 8, 13, 26}} *) </code></pre> <p><strong>other attempts</strong></p> <pre><code>Module[{list = Range@10, i = 0, pos, ind, indices = {{1, 3}, {1, 4}, {2, 7}, {2, 9}, {4, 10}}}, Nest[(pos = indices[[++i, 2]]; ind = indices[[i, 1]]; ReplacePart[#, pos -&gt; BitSet[#[[pos]], ind]]) &amp;, list, Length@indices]]// AbsoluteTiming (* {0.0000979715, {1, 2, 3, 6, 5, 6, 7, 8, 13, 26}} *) </code></pre> <p>this can be made faster using:</p> <pre><code>Module[{list = Range@10, i = 0, pos, ind, indices = {{1, 3}, {1, 4}, {2, 7}, {2, 9}, {4, 10}}}, Nest[(pos = indices[[++i, 2]]; ind = indices[[i, 1]]; list[[pos]] = BitSet[list[[pos]], ind]; list) &amp;, list, Length@indices]] // AbsoluteTiming (* {0.0000639147, {1, 2, 3, 6, 5, 6, 7, 8, 13, 26}} *) </code></pre> <p>with <code>Do</code> we can get slightly better:</p> <pre><code>Module[{list = Range@10, pos, ind, indices = {{1, 3}, {1, 4}, {2, 7}, {2, 9}, {4, 10}}}, Do[(pos = i[[2]]; ind = i[[1]]; list[[pos]] = BitSet[list[[pos]], ind]);, {i, indices}];list] // AbsoluteTiming (* {0.0000513184, {1, 2, 3, 6, 5, 6, 7, 8, 13, 26}} *) </code></pre> <p>However, the method suggested by Kuba is still the fastest</p> <pre><code>Fold[Function[{data, spec}, MapAt[BitSet[#, spec[[1]]] &amp;, data, #2] &amp; @@ spec], Range@10, {{1, 3}, {1, 4}, {2, 7}, {2, 9}, {4,10}}] // AbsoluteTiming (* {0.0000438539, {1, 2, 3, 6, 5, 6, 7, 8, 13, 26}} *) </code></pre>
3,080,566
<p>please tell me how I can solve the following equation. </p> <p><span class="math-container">$$z^3+\frac{(\sqrt2+\sqrt2i)^7}{i^{11}(-6+2\sqrt3i)^{13}}=0$$</span></p> <p>What formula should I use? If possible, tell me how to solve this equation or write where I can find a formula for solving such an equation. I searched for it on the Internet, but could not find anything useful.</p>
Dr. Sonnhard Graubner
175,066
<p>First simplify: <span class="math-container">$$(\sqrt{2}+\sqrt{2}i)^7=64\sqrt{2}(1-i).$$</span> To prove this, write <span class="math-container">$$\sqrt{2}^7(1+i)^7$$</span> and calculate <span class="math-container">$$(1+i)^3\cdot (1+i)^3\cdot (1+i)$$</span></p>
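A quick numeric confirmation of the simplification (illustrative only): the base has modulus $2$ and argument $\pi/4$, so the seventh power has modulus $2^7 = 128$ and argument $7\pi/4$.

```python
import cmath, math

z = (math.sqrt(2) + math.sqrt(2) * 1j) ** 7
# 128 * cis(7*pi/4) = 128 * (sqrt(2)/2 - i*sqrt(2)/2) = 64*sqrt(2)*(1 - i)
assert cmath.isclose(z, 64 * math.sqrt(2) * (1 - 1j))
```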
2,729,364
<p>What are direct methods for proving that a ring is a UFD in general without proving that it's a PID/Euclidean domain/field and using the fact that all those things are UFDs?</p> <p>As an example, we can take <span class="math-container">$\mathbb{Z}[i]$</span> or <span class="math-container">$\mathbb{Z}[\sqrt{-2}]$</span> or other rings you come up with.</p>
lhf
589
<p>Here are some thoughts:</p> <ul> <li><p><em>Existence</em> is easy to prove using induction on the norm.</p></li> <li><p><em>Uniqueness</em> is the hard part, especially since it fails for most rings of the form <span class="math-container">$\mathbb Z[\sqrt d]$</span>. For the rings you've mentioned, it can be proved by knowing the units and exactly how primes in <span class="math-container">$\mathbb Z$</span> decompose in <span class="math-container">$\mathbb Z[\sqrt d]$</span>. There are three possibilities for a prime <span class="math-container">$p$</span>: it remains prime, it is a product of two non-associate primes, or it is (up to a unit) the square of a prime. For the rings you've mentioned, this can be decided in ad hoc ways.</p></li> </ul> <p>In the general case of the <a href="http://en.wikipedia.org/wiki/Quadratic_integer" rel="nofollow noreferrer">ring of integers</a> in quadratic fields <span class="math-container">$\mathbb{Q}(\sqrt{n})$</span>, the answer is not simple but is fascinating. See the book <a href="http://www.cs.amherst.edu/~dac/primes.html" rel="nofollow noreferrer"><em>Primes of the Form <span class="math-container">$x^2+ny^2$</span></em></a>, by David Cox.</p>
593,746
<p>In general, if a random process is ergodic, does it imply that it is also stationary in any sense?</p>
Fred S
349,583
<p>Yes, ergodicity implies stationarity.</p> <p>Consider an ensemble of realizations generated by a random process. Ergodicity states that the time-average is equal to the ensemble average. The time-average is obtained by taking the average of a single realization, giving you a particular number. To obtain the ensemble average, you take the average across the realizations at a particular time-point.</p> <p>If the process was not stationary in regard to the mean, the ensemble average would vary depending upon the time-point that you chose. It could not then be equal to the time-average of a single realization.</p>
4,627,133
<p>We have a <span class="math-container">$(3×3)$</span> matrix <span class="math-container">$A$</span> with real entries. We know that <span class="math-container">$A$</span> is orthogonal and <span class="math-container">$\operatorname{trace}(A)&gt;1$</span>. Show that the matrix <span class="math-container">$A+I_{3}$</span> is invertible.</p> <p>We can see that <span class="math-container">$\det(A)=1$</span> or <span class="math-container">$\det(A)=-1$</span>. We can easily find <span class="math-container">$\operatorname{trace}(A^{*})=\det(A)\operatorname{trace}(A)$</span>. Suppose <span class="math-container">$\det(A+I_{3})=0$</span>. If we take the characteristic polynomial of <span class="math-container">$A$</span>, <span class="math-container">$$ P(x)=-\det(A-xI_{3})=-x^{3}+\operatorname{trace}(A)x^{2}-\operatorname{trace}(A^*)x+\det(A) $$</span> we find that <span class="math-container">$P(-1)=0$</span>, so <span class="math-container">$1+\operatorname{trace}(A)+\det(A) \operatorname{trace}(A)+\det(A)=0$</span>. If <span class="math-container">$\det(A)=1$</span> we easily get a contradiction, but in the case <span class="math-container">$\det(A)=-1$</span> the equation holds identically and gives no information. I tried using eigenvalues to reach a contradiction with the fact that <span class="math-container">$ \operatorname{trace}(A)&gt;1$</span>, but without success.</p>
P. Lawrence
545,558
<p>All the eigenvalues of <span class="math-container">$A$</span> are on the unit circle. There exists a real invertible <span class="math-container">$3 \times 3$</span> matrix <span class="math-container">$P$</span> such that <span class="math-container">$P^{-1}AP$</span> is <span class="math-container">$I$</span> or has the form <span class="math-container">$$\begin{bmatrix}a&amp;-b&amp;0\\b&amp;a&amp;0\\0&amp;0&amp;1\end{bmatrix}$$</span> where <span class="math-container">$0&lt;a&lt;1\text{ and }a^2+b^2=1$</span>. Then <span class="math-container">$$P^{-1}(A+I)P=2I\text{ or }P^{-1}(A+I)P=\begin{bmatrix}a+1&amp;-b&amp;0\\b&amp;a+1&amp;0\\0&amp;0&amp;2\end{bmatrix}$$</span> The determinant of the last matrix is <span class="math-container">$4(a+1)&gt;0$</span>, so in all cases <span class="math-container">$A+I$</span> is invertible.</p>
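A numeric illustration of the rotation case (a sketch using a hand-rolled $3\times3$ determinant; the angle range is my own choice, picked so that $\operatorname{trace}(A) = 1 + 2\cos t > 1$):

```python
import math, random

def det3(M):
    # cofactor expansion along the first row of a 3x3 matrix
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

rng = random.Random(0)
for _ in range(100):
    t = rng.uniform(-math.pi / 3, math.pi / 3)  # cos t > 1/2, so trace = 1 + 2 cos t > 1
    A = [[math.cos(t), -math.sin(t), 0.0],
         [math.sin(t),  math.cos(t), 0.0],
         [0.0, 0.0, 1.0]]
    assert A[0][0] + A[1][1] + A[2][2] > 1
    ApI = [[A[r][c] + (r == c) for c in range(3)] for r in range(3)]
    assert abs(det3(ApI) - 4 * (math.cos(t) + 1)) < 1e-9  # det(A+I) = 4(a+1) > 0
```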
1,429,937
<p>From <a href="http://www.gottfriedville.net/mathprob/comb-subrect.html" rel="nofollow">this</a> page it can be shown that the number of possible rectangles in an $m \times n$ grid can be found by first choosing $2$ lines from a possible $m+1$ to form the vertical sides of the rectangle, and then $2$ from the $n+1$ horizontal sides, to give ${{m+1} \choose {2}}*{{n+1} \choose {2}}$ rectangles. </p> <p>I want to extend this proof to cover only squares.</p> <p>I have tried choosing $2$ from ${m+1}$ for one set of sides, as a square can be formed with any two lines, but I am having trouble seeing how many choices there are for the remaining sides. If the first two sides are beside each other, then the last two can be any two sides that are beside each other, so there are $n$ choices, but after that I get lost. </p> <p>The formula given on the page using a different approach is $$\frac {{n}{(2n+1)}{(n+1)}}{6}$$ or the sum of squares when $m = n$.</p>
Yuriy S
269,624
<p>Yes, it's possible with complex numbers. If you are allowed to use complex numbers, then use Euler's identity: $\Bbb e ^{\Bbb ix} = \cos x + \Bbb i \sin x$.</p> <p>Then you get</p> <p>$$ \sin ax = \frac {\Bbb e ^{\Bbb i ax} - \Bbb e ^{-\Bbb i ax}} {2 \Bbb i} = \frac {(\Bbb e^ {\Bbb i bx})^\frac a b - (\Bbb e ^{-\Bbb i bx})^\frac a b} {2 \Bbb i} . $$</p> <p>Using Euler's identity again for $\Bbb e ^{\Bbb i bx}$ and $\Bbb e ^{-\Bbb i bx}$: $\Bbb e ^{\Bbb i bx} = \cos bx + \Bbb i \sin bx$ you get</p> <p>$$\sin ax = \Bbb i \frac {(\cos bx - \Bbb i \sin bx)^\frac a b - (\cos bx + \Bbb i \sin bx)^\frac a b} 2 .$$</p> <p>I also used the property of the imaginary unit $\frac 1 {\Bbb i} = -\Bbb i$.</p> <hr> <p>To make my answer more useful, here is how we can evaluate $(\cos bx \pm \Bbb i \sin bx)^\frac a b$ for any real $a, b$.</p> <p>If $|\cos bx| \geq |\sin bx|$, factor out $\cos bx$:</p> <p>$$(\cos bx \pm \Bbb i \sin bx)^\frac a b=(\cos bx) ^{ \frac a b } (1 \pm \Bbb i \tan bx)^\frac a b$$</p> <p>which is a binomial series and can be expanded.</p> <p>If $|\cos bx| \leq |\sin bx|$, factor out $\pm\Bbb i \sin bx$ instead:</p> <p>$$(\cos bx \pm \Bbb i \sin bx)^\frac a b=(\pm\Bbb i \sin bx )^{ \frac a b } (1 \mp \Bbb i \cot bx)^\frac a b$$</p>
1,818,281
<p>Suppose I have a pair of 2 non-linear differential equations of the form: $$\begin{matrix} \frac{dy}{dt}=f(x,y)\\ \frac{dx}{dt}=g(x,y) \end{matrix}$$ Equilibrium points are where the trajectory ends up on, when plotted on the $x-y$ plane.</p> <p>What are the qualitative differences between a 'stable node' and a 'stable spiral'?</p> <p>Are they both stable?</p>
Rodrigo de Azevedo
339,790
<p>If we have two coupled nonlinear ODEs</p> <p>$$\dot{x}_1 = f_1 (x_1, x_2) \qquad \qquad \dot{x}_2 = f_2 (x_1, x_2)$$</p> <p>and the origin is an equilibrium point, i.e., $f_1 (0,0) = f_2 (0,0) = 0$, then let the <strong>linearization</strong> of vector field $f$ near the origin be</p> <p>$$\dot{\mathrm{x}} = \mathrm{A} \mathrm{x}$$</p> <p>Let $\lambda_1, \lambda_2$ be the eigenvalues of $\mathrm{A}$. If the eigenvalues of $\mathrm{A}$ have negative real parts, $\mbox{Re} (\lambda_i) &lt; 0$, then the origin is a <strong>stable</strong> equilibrium point. If the eigenvalues have negative real parts and</p> <ul> <li>zero imaginary parts, $\mbox{Im} (\lambda_i) = 0$, then the origin is a <strong>stable node</strong>.</li> <li>nonzero imaginary parts, $\mbox{Im} (\lambda_i) \neq 0$, then the origin is a <strong>stable spiral</strong>.</li> </ul>
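This classification can be sketched in code from the trace and determinant of the linearization (the sample Jacobians and the imaginary-part tolerance are illustrative choices, not part of the answer):

```python
import cmath

def classify(A):
    # A: 2x2 Jacobian at the origin; eigenvalues from trace T and determinant D
    T = A[0][0] + A[1][1]
    D = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(T * T - 4 * D)
    l1, l2 = (T + disc) / 2, (T - disc) / 2
    if l1.real < 0 and l2.real < 0:
        return "stable spiral" if abs(l1.imag) > 1e-12 else "stable node"
    return "not asymptotically stable (or borderline)"

# real negative eigenvalues -1, -2 -> node; complex pair -1 +/- 2i -> spiral
assert classify([[-1, 0], [0, -2]]) == "stable node"
assert classify([[-1, -2], [2, -1]]) == "stable spiral"
```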
2,391,769
<p>I'm concerned with the total number of ones, and the total number of runs, but not with the size of any of the runs.</p> <p>For example, $N=8$, $R=3$, $C=5$ includes 11101010, 01101011 among the 24 total possible strings.</p> <p>I can compute these for small $N$ easily enough, but I am specifically interested in the distribution for $N=65536$. As this will result in very large integers, the log probability distribution is equally useful.</p> <p>I found [1] and [2], which includes this:</p> <p>Let $N_{n;g_k,s_k}$ denote the number of binary strings which contain for given $g_k$ and $s_k$, $g_k=0,1,…,⌊\frac{s_k}{k}⌋$, $s_k=0,k,k+1,…,n$, exactly $g_k$ runs of 1’s of length at least $k$ with total number of 1’s (with sum of lengths of runs of 1’s) exactly equal to $s_k$ in all possible binary strings of length $n$.</p> <p>An expression for this is given in eq. (24):</p> <p>$N_{n;g_k,s_k} = \sum_{y=0}^{n-s_k} {y+1 \choose g_k } {s_k-(k-1)g_k-1 \choose g_k-1} \sum_{j=0}^{⌊\frac{n-y-s_k}{k}⌋} (-1)^j {y+1-g_k \choose j} {n-s_k-kj-g_k \choose n-s_k-kj-y} $</p> <p>for $g_k \in \{1, ..., ⌊\frac{s_k}{k}⌋\}$, $s_k \in \{k, k+1, ..., n\}$.</p> <p>I think this is exactly what I'm looking for, with $k = 1$, $s_k = C$ and $g_k = R$. However, when I implemented this I did not get the expected results (Python shown below, edge cases omitted), based on comparing to counting all strings for N=8. I am working backwards to try to understand where I might have gone wrong, but not having much luck yet. 
I wonder if I am misunderstanding the result.</p> <pre><code>def F(x, y, n): # x = C or s_k (cardinality) # y = R or g_k (runCount) # n = N (total bits) a1 = 0 for z in range(n-x+1): b1 = choose(z+1, y) * choose(x-1, y-1) a2 = 0 for j in range(n-z-x+1): a2 += (-1) ** j * choose(z+1-y, j) * choose(n-x-j-y, n-x-j-z) a1 += b1 * a2 return a1 </code></pre> <p>Note that the <code>choose</code> function uses factorial, which I realize won't work for larger $N$ - but should be fine for $N=8$.</p> <p>Edit: corrected a sign error typo in eq. (24) and the equivalent error in the python code.</p> <p>[1] Counting Runs of Ones and Ones in Runs of Ones in Binary Strings, Frosso S. Makri, Zaharias M. Psillakis, Nikolaos Kollas <a href="https://file.scirp.org/pdf/OJAppS_2013011110241057.pdf" rel="nofollow noreferrer">https://file.scirp.org/pdf/OJAppS_2013011110241057.pdf</a></p> <p>[2] On success runs of a fixed length in Bernoulli sequences: Exact and asymptotic results, Frosso S.Makria, Zaharias M.Psillakis <a href="http://www.sciencedirect.com/science/article/pii/S0898122110009284" rel="nofollow noreferrer">http://www.sciencedirect.com/science/article/pii/S0898122110009284</a></p>
Marko Riedel
44,883
<p>Perhaps there is some value in solving this with generating functions. In the present case we have the marked generating function using $z$ for ones and $w$ for zeros and $y$ for runs of ones</p> <p>$$(1+y(z+z^2+z^3+\cdots)) \\ \times \left(\sum_{q\ge 0} (w+w^2+w^3+\cdots)^q y^q (z+z^2+z^3+\cdots)^q\right) \\ \times (1+w+w^2+w^3+\cdots).$$</p> <p>This is</p> <p>$$\left(1+\frac{yz}{1-z}\right) \times \left(\sum_{q\ge 0} \frac{w^q}{(1-w)^q} y^q \frac{z^q}{(1-z)^q}\right) \times \frac{1}{1-w} \\ = \left(1+\frac{yz}{1-z}\right) \times \frac{1}{1-ywz/(1-w)/(1-z)} \times \frac{1}{1-w}.$$</p> <p>Extracting the coefficient on $[y^R]$ for $R$ runs we get</p> <p>$$\frac{w^R z^R}{(1-w)^R (1-z)^R} \frac{1}{1-w} + \frac{z}{1-z} \frac{w^{R-1} z^{R-1}}{(1-w)^{R-1} (1-z)^{R-1}} \frac{1}{1-w}.$$</p> <p>Next do the coefficient on $[z^C]$ for $C$ ones</p> <p>$$[z^{C-R}] \frac{w^R}{(1-w)^R (1-z)^R} \frac{1}{1-w} + [z^{C-R}] \frac{w^{R-1}}{(1-w)^{R-1} (1-z)^{R}} \frac{1}{1-w} \\ = {C-1\choose R-1} \frac{w^R}{(1-w)^{R+1}} + {C-1\choose R-1} \frac{w^{R-1}}{(1-w)^{R}}.$$</p> <p>Finally extract $[w^{N-C}]$ for the remaining zeros:</p> <p>$${C-1\choose R-1} [w^{N-C-R}] \frac{1}{(1-w)^{R+1}} + {C-1\choose R-1} [w^{N-C-R+1}] \frac{1}{(1-w)^{R}} \\ = {C-1\choose R-1} {N-C\choose R} + {C-1\choose R-1} {N-C\choose R-1} \\ = {C-1\choose R-1} {N-C+1\choose R}.$$</p> <p>This confirms the accepted answer, no credit claimed here.</p>
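The closed form $\binom{C-1}{R-1}\binom{N-C+1}{R}$ can be cross-checked by brute force for small $N$; in particular it reproduces the 24 strings for $N=8$, $R=3$, $C=5$ mentioned in the question:

```python
from itertools import product
from math import comb

def count_bruteforce(N, C, R):
    """Count length-N binary strings with exactly C ones forming exactly R runs."""
    total = 0
    for bits in product((0, 1), repeat=N):
        if sum(bits) != C:
            continue
        runs = sum(1 for i, b in enumerate(bits)
                   if b == 1 and (i == 0 or bits[i - 1] == 0))
        if runs == R:
            total += 1
    return total

def count_formula(N, C, R):
    return comb(C - 1, R - 1) * comb(N - C + 1, R)

assert count_formula(8, 5, 3) == count_bruteforce(8, 5, 3) == 24
```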
1,105,056
<p>There's something I've never understood about polynomials.</p> <p>Suppose $p(x) \in \mathbb{R}[x]$ is a real polynomial. Then obviously,</p> <p>$$(x-a) \mid p(x)\, \longrightarrow\, p(a) = 0.$$</p> <p>The converse of this statement was used throughout high school, but I never really understood why it was true. I think <em>maybe</em> a proof was given in 3rd year university algebra, but obviously it went over my head at the time. So anyway:</p> <blockquote> <p><strong>Question.</strong> Why does $p(a)=0$ imply $(x-a) \mid p(x)$?</p> </blockquote> <p>I'd especially appreciate an answer from a commutative algebra perspective.</p>
user 1
133,030
<p><strong>Hint</strong>: <em>using the Division Algorithm</em>, you have:<br> $$p(x)=(x-a)q(x)+r $$ $$0=p(a)=r$$ </p> <hr> <p>Edit (prompted by a comment of Marc van Leeuwen): definition from "<em>Cox D., Little J., O'Shea D. Ideals, Varieties, and Algorithms</em>" <img src="https://i.stack.imgur.com/gQNEL.png" alt="enter image description here"></p>
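The division step is entirely constructive: synthetic division (Horner's scheme) of $p$ by $(x-a)$ yields the quotient together with a remainder equal to $p(a)$. A small sketch (the example polynomial is my own):

```python
def divide_by_linear(coeffs, a):
    """Synthetic division of p(x) by (x - a).

    coeffs are p's coefficients from highest degree down.
    Returns (quotient coefficients, remainder); the remainder equals p(a).
    """
    acc = 0
    out = []
    for c in coeffs:
        acc = acc * a + c
        out.append(acc)
    return out[:-1], out[-1]

# p(x) = x^2 - 5x + 6 = (x - 2)(x - 3): the root a = 2 gives remainder 0
quotient, remainder = divide_by_linear([1, -5, 6], 2)
assert quotient == [1, -3] and remainder == 0
```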
199,235
<blockquote> <p><strong>Possible Duplicate:</strong><br> <a href="https://math.stackexchange.com/questions/9505/xy-yx-for-integers-x-and-y">$x^y = y^x$ for integers $x$ and $y$</a> </p> </blockquote> <p>Determine the number of solutions of the equation $n^m = m^n$ where both m and n are integers.</p>
Berci
41,488
<p>I remember only the result, but not the proof (anyway, probably not too hard):</p> <p>Either $n=m$ or $\{n,m\}=\{2,4\}$.</p>
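For positive integers, a small brute-force search is consistent with this result (illustrative only; it does not by itself rule out larger solutions):

```python
# search for n^m = m^n with n != m among positive integers below 60
solutions = sorted({(n, m) for n in range(1, 60) for m in range(1, 60)
                    if n != m and n ** m == m ** n})
assert solutions == [(2, 4), (4, 2)]
```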
1,567,473
<p>$-\frac12·\frac{iz-2z}{z^2}+\frac{-1-i}{2i-2z}= \frac{\frac{-3}4-\frac12i}{z}$</p> <p>if $z=a+bi$, How to find $a$ and $b$? Thank you.</p>
gerw
58,577
<p>Hint: You could choose $M = b^2 + 1$. Then, try to find $a,b$ which maximizes $h = \min\{a, b/(b^2+1)\}$.</p>
38,597
<p>Or equivalently, if $G$ is a group, do the projective and injective dimension of $Z$ (viewed as a $ZG$-module) agree?</p> <p>Thanks! </p>
Xiaolei Wu
4,760
<p>The cohomological and homological dimension of a group do not agree in general. For example, the homological dimension of the group <span class="math-container">$Z[\frac{1}{2}]$</span> is one, while its cohomological dimension is 2. However, if <span class="math-container">$G$</span> is countable, then the cohomological dimension is either equal to the homological dimension or one greater; see Proposition 2.4 in the following paper:</p> <p>R. Bieri, Normal subgroups in duality groups and in groups of cohomological dimension 2. J. Pure Appl. Algebra 7 (1976), no. 1, 35–51. </p> <p>Also, the cohomological dimension of a group is defined as the largest dimension n such that <span class="math-container">$H^n(G,M)$</span> is nonzero, where M is a G-module. It is important to use a G-module instead of ordinary coefficients. For example, the cohomological dimension of any nontrivial knot group is 2, while its cohomology with <span class="math-container">$Z$</span> coefficients, or any G-module with trivial group action, is always the same as the cohomology of <span class="math-container">$S^1$</span>.</p>
1,226,162
<p>\begin{align} x' &amp;= -x^3 + x^5 + (x^4)(y^5)\\[.7em] y' &amp;= -8y^3 + y^5 - 10(y^4)(x^5) \end{align} $(0,0)$ is obviously a critical point of the system, and we are given that it is asymptotically stable, but we have to show it. </p> <p>I have tried the Lyapunov function $V(x,y) = ax^2 + cy^2$, with $a, c > 0$, but I am having trouble proving that $\frac{d}{dt} V(x,y)$ is negative definite. I get a complicated polynomial that I cannot finish off by inspection. How can I change the Lyapunov function to come up with a meaningful conclusion?</p> <p>\begin{align} \frac{d}{dt}V(x,y) = 2ax\,x' + 2cy\,y' = 2ax(-x^3 + x^5 + x^4y^5) + 2cy(-8y^3+y^5-10y^4x^5)\\[.7em] \end{align}</p>
Robert Israel
8,508
<p>Hint: choose $a$ and $c$ so the terms in $x^5 y^5$ cancel. Note that for $x$ near $0$, lower powers of $x$ dominate higher powers, and similarly for $y$.</p>
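Following the hint: with $V = ax^2 + cy^2$, the $x^5y^5$ terms of $\dot V = 2ax\,x' + 2cy\,y'$ cancel when $2a = 20c$, e.g. $a = 10$, $c = 1$, which leaves $\dot V = -20x^4(1-x^2) - 2y^4(8-y^2)$, negative away from the origin for small $x, y$. A numeric spot check of that choice (illustrative only):

```python
import random

a, c = 10, 1  # chosen so the x^5*y^5 cross terms cancel (2*a = 20*c)

def V_dot(x, y):
    x_dot = -x**3 + x**5 + x**4 * y**5
    y_dot = -8 * y**3 + y**5 - 10 * y**4 * x**5
    return 2 * a * x * x_dot + 2 * c * y * y_dot

rng = random.Random(0)
for _ in range(10_000):
    x, y = rng.uniform(-0.5, 0.5), rng.uniform(-0.5, 0.5)
    assert V_dot(x, y) < 0  # negative on this neighborhood (except the origin itself)
```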
115,657
<p>In topology the spheres <span class="math-container">$S^n$</span> are the &quot;simplest&quot; closed manifolds, and they are like &quot;Dirac's delta at <span class="math-container">$n$</span>&quot; for (reduced) cohomology groups. Furthermore they are boundaries of the simplest compact manifolds-with-boundary, i.e. the disks <span class="math-container">$D^{n+1}$</span>, which are contractible. And <span class="math-container">$S^{n}$</span> is obtained by glueing two copies of <span class="math-container">$D^{n}$</span> along their boundary <span class="math-container">$S^{n-1}$</span>. My question is:</p> <blockquote> <p>Are there some objects of algebraic geometric nature that somehow reproduce the same pattern, or that are considerable as the equivalent of spheres from topology?</p> <p>More generally, are there &quot;homology spheres&quot; for some homology theory like -say- Chow groups? What about an &quot;algebraic Poincaré conjecture&quot;?</p> </blockquote> <p>If they do exist, I don't expect them to be standard varieties or schemes, otherwise they probably would have made their appearence &quot;classically&quot;.</p>
Tom Goodwillie
6,666
<p>To expand on Will Sawin's comment in a vague sort of way (which is the best I can do):</p> <p>The cofiber of the pair $(\mathbb P^n,\mathbb P^{n-1})$ (which doesn't exist as a scheme but does exist if you widen your scope to a suitable category of (pre)sheaves) can be called a $2n$-sphere, the motivic $2n$-sphere.</p> <p>Also, the smash product of the motivic $2m$-sphere with the motivic $2n$-sphere is the motivic $2(m+n)$-sphere. For example, you can make a map of pairs $$(\mathbb P^1\times \mathbb P^1,\mathbb P^1\vee \mathbb P^1) \to (\mathbb P^2,\mathbb P^1)$$ inducing an isomorphism of cofibers.</p> <p>In some sense, based on considering maps $X\times \mathbb A^1\to Y$ as homotopies, $\mathbb P^{n-1}$ is a deformation retract of $\mathbb P^n-\ast$, the complement of a point in projective $n$-space. So the homotopy cofiber of the pair $(\mathbb P^n,\mathbb P^n-\ast)$ is, up to homotopy, the same thing.</p> <p>If you believe in excision, then the homotopy cofiber of the pair $(\mathbb A^n,\mathbb A^n-\ast)$ is another model for the same sphere. And since $\mathbb A^n$ is contractible, that makes $\mathbb A^n-\ast$ look a lot like a $(2n-1)$-sphere, with the motivic $2n$-sphere being in some sense its suspension.</p> <p>But the motivic $2(n+1)$-sphere is not in the same sense the double suspension of the motivic $2n$-sphere; the cohomology is wrong.</p>
115,657
<p>In topology the spheres <span class="math-container">$S^n$</span> are the &quot;simplest&quot; closed manifolds, and they are like &quot;Dirac's delta at <span class="math-container">$n$</span>&quot; for (reduced) cohomology groups. Furthermore they are boundaries of the simplest compact manifolds-with-boundary, i.e. the disks <span class="math-container">$D^{n+1}$</span>, which are contractible. And <span class="math-container">$S^{n}$</span> is obtained by glueing two copies of <span class="math-container">$D^{n}$</span> along their boundary <span class="math-container">$S^{n-1}$</span>. My question is:</p> <blockquote> <p>Are there some objects of algebraic geometric nature that somehow reproduce the same pattern, or that are considerable as the equivalent of spheres from topology?</p> <p>More generally, are there &quot;homology spheres&quot; for some homology theory like -say- Chow groups? What about an &quot;algebraic Poincaré conjecture&quot;?</p> </blockquote> <p>If they do exist, I don't expect them to be standard varieties or schemes, otherwise they probably would have made their appearence &quot;classically&quot;.</p>
Konrad Voelkel
956
<p>To expand on Tom Goodwillie's answer:</p> <p>a precise definition of "motivic sphere" would be $$S^{p,q} = \big( \Delta^{p-q} / \partial \Delta^{p-q} \big) \wedge \big( \bigwedge^q \mathbb{G}_m\big)$$ which you can interpret as the $q$-fold smash product of the multiplicative group $\mathbb{G}_m$ smashed with a $(p-q)$-dimensional simplicial sphere. This smash product makes sense in a category of simplicial presheaves on smooth schemes (which is what you work with in A¹-homotopy theory).</p> <p>The indices can be explained from looking at the motive $M(S^{p,q}) = \mathbb{Z}(q)[p]$ or at the realizations, where the complex realization gives $S^p$ and the real realization gives $S^{p-q}$. More about this can be read in the Morel-Voevodsky paper.</p> <p>Now these are not algebraic varieties, by definition, so a good question is</p> <blockquote> <p>For integers $p$ and $q$, does an algebraic variety $X$ exist which is A¹-weakly equivalent to $S^{p,q}$?</p> </blockquote> <p>We'll also say "$X$ is a $S^{p,q}$". From looking at realizations you can already exclude $p,q$ negative. One positive example is (by definition) $\mathbb{G}_m$, which is a $S^{1,1}$.</p> <p>As Tom Goodwillie explained, $\mathbb{A}^n / (\mathbb{A}^n \setminus 0)$ is a $S^{2n,n}$ (which some people shorten to "motivic $2n$-sphere") and $\mathbb{A}^n \setminus 0$ is a $S^{2n-1,n}$.</p> <p>There is not much known beyond that, I suppose.</p> <hr> <p>There are even more varieties that could count as algebraic versions of spheres. An interesting feature of algebraic geometry is that you can look at many fields of definition. Projective quadrics over the complex numbers all look the same (in each dimension), as they are isomorphic (given the same dimension) to the split quadrics $Q_{2n}^{split} = \{\sum_{i=0}^n x_iy_i = 0\}$ or $Q_{2n+1}^{split} = \{\sum_{i=0}^n x_iy_i = z^2\}$ as well as to the anisotropic quadrics $Q_{m} = \{\sum_{i=0}^m z_i^2\}$ (by change of basis). 
Over smaller, non-quadratically closed fields, these quadrics are no longer isomorphic.</p> <p>Now look at the affine quadrics inside the projective quadrics (by removing a suitable hyperplane section). Conjecturally, the split ones are motivic spheres (at least they have the right motive), while the anisotropic ones aren't. You can consider these forms of motivic spheres.</p>
4,482,600
<p>Prove <span class="math-container">$$\int_{0}^{\frac{\pi}{2}} \sqrt[n]{\tan x} \,dx = \frac{\pi}{2} \sec \left(\frac{\pi}{2n}\right)$$</span></p> <p>for all natural numbers <span class="math-container">$n \ge 2$</span>.</p> <p>There are several answers (<a href="https://math.stackexchange.com/questions/4385116/how-to-evaluate-the-integral-int-0-frac-pi2-sqrtn-tan-theta-d-th?rq=1">A1</a> <a href="https://math.stackexchange.com/questions/1913325/real-analysis-methods-to-evaluate-int-0-infty-fracxa1x2-dx-a1/1913572#1913572">A2</a>) to this integral but they all involve the gamma function or the beta function or contour integration etc. Can one solve this using only 'real' 'elementary' techniques? For <span class="math-container">$n = 2$</span> and <span class="math-container">$n = 3$</span> it can be solved using only elementary substitutions and partial fractions.</p>
robjohn
13,854
<p><span class="math-container">$$ \begin{align} \int_0^{\frac\pi2}\sqrt[n]{\tan(x)}\,\mathrm{d}x\ &amp;=\int_0^\infty\frac{u^{1/n}\,\mathrm{d}u}{1+u^2}\tag{1a}\\ &amp;=\frac12\int_0^\infty\frac{v^{\frac{1-n}{2n}}}{1+v}\,\mathrm{d}v\tag{1b}\\ &amp;=\frac\pi2\csc\left(\pi\frac{n+1}{2n}\right)\tag{1c}\\[6pt] &amp;=\frac\pi2\sec\left(\frac\pi{2n}\right)\tag{1d} \end{align} $$</span> Explanation:<br /> <span class="math-container">$\text{(1a)}$</span>: set <span class="math-container">$x=\tan^{-1}(u)$</span><br /> <span class="math-container">$\text{(1b)}$</span>: set <span class="math-container">$u=v^{1/2}$</span><br /> <span class="math-container">$\text{(1c)}$</span>: apply <span class="math-container">$(2)$</span> below<br /> <span class="math-container">$\text{(1d)}$</span>: <span class="math-container">$\csc(\pi/2+x)=\sec(x)$</span></p> <p>Here is the argument from <span class="math-container">$(3)$</span> of <a href="https://math.stackexchange.com/a/176216">this answer</a> with more explanation: <span class="math-container">$$ \begin{align} \int_0^\infty\frac{x^{\alpha-1}}{1+x}\,\mathrm{d}x &amp;=\int_0^1\frac{x^{-\alpha}+x^{\alpha-1}}{1+x}\,\mathrm{d}x\tag{2a}\\ &amp;=\sum_{k=0}^\infty(-1)^k\int_0^1\left(x^{k-\alpha}+x^{k+\alpha-1}\right)\mathrm{d}x\tag{2b}\\ &amp;=\sum_{k=0}^\infty(-1)^k\left(\frac1{k-\alpha+1}+\frac1{k+\alpha}\right)\tag{2c}\\ &amp;=\sum_{k\in\mathbb{Z}}\frac{(-1)^k}{k+\alpha}\tag{2d}\\[6pt] &amp;=\pi\csc(\pi\alpha)\tag{2e} \end{align} $$</span> Explanation:<br /> <span class="math-container">$\text{(2a)}$</span>: break the integral into two parts: <span class="math-container">$[0,1]$</span> and <span class="math-container">$(1,\infty)$</span><br /> <span class="math-container">$\phantom{\text{(2a):}}$</span> substitute <span class="math-container">$x\mapsto1/x$</span> in the second part<br /> <span class="math-container">$\text{(2b)}$</span>: apply the series for <span class="math-container">$\frac1{1+x}$</span><br /> <span class="math-container">$\text{(2c)}$</span>: evaluate the integrals<br /> <span class="math-container">$\text{(2d)}$</span>: write as a principal value sum<br /> <span class="math-container">$\text{(2e)}$</span>: apply <span class="math-container">$(8)$</span> from <a href="https://math.stackexchange.com/a/3819077">this answer</a></p>
4,482,600
<p>Prove <span class="math-container">$$\int_{0}^{\frac{\pi}{2}} \sqrt[n]{\tan x} \,dx = \frac{\pi}{2} \sec \left(\frac{\pi}{2n}\right)$$</span></p> <p>for all natural numbers <span class="math-container">$n \ge 2$</span>.</p> <p>There are several answers (<a href="https://math.stackexchange.com/questions/4385116/how-to-evaluate-the-integral-int-0-frac-pi2-sqrtn-tan-theta-d-th?rq=1">A1</a> <a href="https://math.stackexchange.com/questions/1913325/real-analysis-methods-to-evaluate-int-0-infty-fracxa1x2-dx-a1/1913572#1913572">A2</a>) to this integral but they all involve the gamma function or the beta function or contour integration etc. Can one solve this using only 'real' 'elementary' techniques? For <span class="math-container">$n = 2$</span> and <span class="math-container">$n = 3$</span> it can be solved using only elementary substitutions and partial fractions.</p>
Quanto
686,284
<p>Here is how to integrate with partial fractions. Utilize the factorization<br /> <span class="math-container">$ 1+y^{2n} = \prod^n_{k=1} (y^2-2y\cos a_k +1 )$</span>, with <span class="math-container">$a_k=\frac{(2k-1)\pi}{2n}$</span> <span class="math-container">\begin{align} &amp;\int_{0}^{\frac{\pi}{2}} \sqrt[n]{\tan x} \,dx \\ =&amp;\ \frac12 \int_{0}^{\frac{\pi}{2}} \sqrt[n]{\tan x}+\sqrt[n]{\cot x} \ {dx } \overset{y^n=\tan x} = \frac 12\int_0^\infty\frac{n(y^n+y^{n-2})}{1+y^{2n}}dy\\ = &amp;\ \frac12 \int_0^\infty \sum_{k=1}^n \frac{(-1)^{k+1}\sin 2a_k}{y^2-2y\cos a_k +1 }dy =- \sum_{k=1}^n (-1)^{n-k}a_k\cos a_k\\ =&amp;-\frac{d}{dt}\bigg(\sum_{k=1}^n (-1)^{n-k}\sin a_k t\bigg)_{t=1}= - \frac{d}{dt}\bigg(\frac{\sin \pi t}{2\cos\frac{\pi t}{2n}}\bigg)_{t=1} = \frac\pi2 \sec\frac\pi{2n} \end{align}</span></p>
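Numerically, the integrand $n(y^n+y^{n-2})/(1+y^{2n})$ above is invariant under $y \mapsto 1/y$ (with $dy \mapsto dy/y^2$), so $\frac12\int_0^\infty$ reduces to the proper, smooth integral $\int_0^1$, which a composite Simpson rule matches against $\frac\pi2\sec\frac\pi{2n}$ (a numeric sanity check, not a proof; the panel count is an arbitrary choice):

```python
import math

def lhs(n, panels=2000):
    # I = ∫_0^1 n (t^n + t^(n-2)) / (1 + t^(2n)) dt, via composite Simpson's rule
    f = lambda t: n * (t**n + t**(n - 2)) / (1 + t**(2 * n))
    h = 1.0 / (2 * panels)
    s = f(0.0) + f(1.0)
    for i in range(1, 2 * panels):
        s += f(i * h) * (4 if i % 2 else 2)
    return s * h / 3

for n in range(2, 8):
    assert math.isclose(lhs(n), (math.pi / 2) / math.cos(math.pi / (2 * n)),
                        rel_tol=1e-9)
```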
18,691
<p>Let's say I have the following set of data:</p> <ul> <li>k = 1 : list of values </li> <li>k = 3 : list of values </li> <li>k = 10 : list of values</li> </ul> <p>I know that to make a <code>BoxWhiskerChart</code> I have to give it a list of lists of these values as data and ks as labels. </p> <p>How do I force the offset between the boxes for different ks to be proportional to the values of ks?</p> <p>This is like combining <code>ListPlot</code> and <code>BoxWhiskerChart</code> - list plot gives appropriate position of boxes relative to the x-axis.</p>
kglr
125
<p>From <a href="http://reference.wolfram.com/mathematica/ref/BoxWhiskerChart.html#" rel="nofollow noreferrer">docs > BoxWhiskerChart >Scope > Data and Wrappers</a>:</p> <blockquote> <p>Nonreal data is taken to be missing and typically yields a gap in the box-and-whisker chart</p> </blockquote> <p>So one can use the following approach:</p> <pre><code>salaries = ExampleData[{"Statistics", "UniversitySalaries"}, "DataElements"]; depts = {"Mathematics", "History", "English", "Chemistry", "Law", "Physics", "Statistics"}; data = Table[Cases[salaries, {d, _, salary_, "A"} :&gt; salary], {d, depts}]; xrange = {1, 2, 5, 7, 10, 11, 12}; newdata = ConstantArray[Missing[], Max[xrange]]; newdata[[xrange]] = data; xlabels = ConstantArray["", Max[xrange]]; xlabels[[xrange]] = xrange; BoxWhiskerChart[newdata, ChartLabels -&gt; Placed[xlabels, Axis], ChartStyle -&gt; 10, Joined -&gt; True, ImageSize -&gt; 500] </code></pre> <p><img src="https://i.stack.imgur.com/IW8fo.png" alt="enter image description here"></p>
2,028,646
<p>Let there be two random variables: $$x_1 \sim Bin(100, 0.5) \\ x_2 \sim Bin(100, 0.6)$$</p> <p>Now, we define a third random variable, $x_{12}$, whose distribution is the aggregate of the distributions of $x_1$ and $x_2$, so it is <strong>not</strong> quite like $x_1 + x_2$, even though empirically the variance looks like the sum of the two variances. Is that the case? How can I show it?</p> <p>Thanks. </p>
BruceET
221,800
<p>From a theoretical point of view it is exactly the distribution of $X_1 + X_2.$ But it is <em>not</em> binomial because the two Success probabilities are not the same. $$E(X_1 + X_2) = n_1p_1 + n_2p_2 = 50 + 60.$$ Provided $X_1$ and $X_2$ are <em>independent</em>, you also have $$Var(X_1 + X_2) = n_1p_1(1-p_1) + n_2p_2(1-p_2).$$ [In particular, this is <em>not</em> the same thing as $(n_1 + n_2)p_a(1-p_a),$ where $p_a = (p_1 + p_2)/2.$] </p> <p>If this were an applied situation and the 100 observations for $X_1$ were taken contemporaneously with the 100 observations for $X_2$ and in the same place, then I'd investigate circumstances to make <em>sure</em> about the independence.</p> <p>For example, if the subjects for $X_1$ are 100 randomly chosen men from a population in which 50% are Democrats, and the subjects for $X_2$ are 100 randomly chosen women from a population in which 60% are Democrats, then independence seems reasonable. But I'd want to make sure that 'for convenience' the researchers didn't select 100 married M/F <em>couples</em> and use the 100 men and 100 women from those.</p>
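A quick Monte Carlo sketch of these mean and variance formulas (a Python addition of mine, not part of the answer; the sample size and seed are arbitrary choices):

```python
import random
import statistics

random.seed(1)

def binomial(n, p):
    # one Binomial(n, p) draw as a sum of n Bernoulli(p) trials
    return sum(random.random() < p for _ in range(n))

n1, p1, n2, p2 = 100, 0.5, 100, 0.6
trials = 20_000
samples = [binomial(n1, p1) + binomial(n2, p2) for _ in range(trials)]

mean_hat = statistics.fmean(samples)
var_hat = statistics.variance(samples)
mean_theory = n1 * p1 + n2 * p2                        # 50 + 60 = 110
var_theory = n1 * p1 * (1 - p1) + n2 * p2 * (1 - p2)   # 25 + 24 = 49
```

The simulated mean and variance should land close to 110 and 49, in agreement with the formulas for independent $X_1$ and $X_2$.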
116,392
<p>As in <a href="https://mathematica.stackexchange.com/questions/110808/a-custom-function-is-too-slow-when-i-first-time-to-run-it">my previous post</a>, I have a custom function that finds the functions containing a certain option, and I have put it in my &quot;init.m&quot; file.</p> <pre><code>LookupOptionFunction[option_] := Select[ToExpression[ Complement[ Select[Names[&quot;System`*&quot;], StringFreeQ[&quot;$&quot;]], {&quot;AllowTransliteration&quot;, &quot;MyFind&quot;}]], KeyMemberQ[Options[#1], option] &amp;] </code></pre> <p>Usage:</p> <pre><code>LookupOptionFunction[SelfLoopStyle] </code></pre> <blockquote> <p>{GraphPlot, GraphPlot3D, LayeredGraphPlot, TreeForm, TreePlot}</p> </blockquote> <p>But there is a problem:</p> <p><a href="https://i.stack.imgur.com/QDW9I.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QDW9I.png" alt="enter image description here" /></a></p> <p>As explained in <a href="https://mathematica.stackexchange.com/a/110814/21532">@Bob Hanlon's answer</a>, the speed-up comes from caching produced the first time the function is run.</p> <hr /> <h2>My Question</h2> <p>How can I save that cache so the function runs fast <em><strong>even after restarting Mathematica</strong></em>?</p> <hr /> <h1>New progress:</h1> <p>I excluded the entries of <code>Keys@SystemOptions[]</code> one by one and narrowed the list down from 90 to 17. I think the following system options should be able to implement this:</p> <blockquote> 
<p>{&quot;CacheOptions&quot;,&quot;CatchMachineUnderflow&quot;,&quot;DataOptions&quot;,&quot;DefinitionsReordering&quot;,&quot;DifferentiationOptions&quot;,&quot;DynamicLibraryOptions&quot;,&quot;EnforceCallPacket&quot;,&quot;FileBackedCachingOptions&quot;,&quot;GlobFileNames&quot;,&quot;HolonomicOptions&quot;,&quot;LegacyFrontEnd&quot;,&quot;LegacyNewlineParsingInStrings&quot;,&quot;NeedNotReevaluateOptions&quot;,&quot;PostScriptBufferSize&quot;,&quot;RestorePackageDependencies&quot;,&quot;SymbolicProductThreshold&quot;,&quot;SymbolicSumThreshold&quot;}</p> </blockquote> <p>But I am not familiar with these options by name. Maybe there is only a small difference between what they do and what I need.</p>
Mr.Wizard
121
<p>The behavior you are encountering is the time taken by <em>Mathematica</em> evaluate all the Symbols in the System context, including definitions (and Options) that are only loaded on first use. (For one of my own encounters with this delayed loading please see <a href="https://stackoverflow.com/q/5649379">Why do I have to evaluate this twice?</a>)</p> <p>In a fresh Kernel observe that <code>GraphPlot</code> has no Options:</p> <pre><code>Quit[] (* quit the Kernel, then separately evaluate the line below *) Options @ Unevaluated @ GraphPlot </code></pre> <blockquote> <pre><code>{} </code></pre> </blockquote> <p>But when it is evaluated its definitions are loaded and it then has Options:</p> <pre><code>GraphPlot; (* seemingly inert command that pre-loads GraphPlot definitions *) Options @ Unevaluated @ GraphPlot </code></pre> <blockquote> <pre><code>{AlignmentPoint -&gt; Center, AspectRatio -&gt; Automatic, Axes -&gt; False, . . . </code></pre> </blockquote> <p>If we prevent all pre-loading and other evaluation by using <code>Unevaluated</code> as I did in the first example the initial search is quite fast:</p> <pre><code>Quit[] (* quit the kernel first *) nopreloadLookupOptionFunction[option_] := Complement @@ Names /@ {"System`*", "System`\\$*"} // ToHeldExpression // Cases[ Hold[s_Symbol] :&gt; s /; Quiet @ Options[Unevaluated @ s, option] =!= {} ] nopreloadLookupOptionFunction[SelfLoopStyle] // AbsoluteTiming </code></pre> <blockquote> <pre><code>{0.132444, {TreeForm}} </code></pre> </blockquote> <p>However we no longer have Symbols like <code>GraphPlot</code> etc. in the list because at the time of evaluation these had no Options. We therefore need to know which Symbols to preload, or which to not evaluate (because they are slow). 
I believe we can use the <a href="http://reference.wolfram.com/language/ref/OwnValues.html" rel="nofollow noreferrer"><code>OwnValues</code></a> of a Symbol to determine if it needs preloading by looking for the appearance of <code>System`Dump`AutoLoad</code>. Implementing this idea we have my</p> <h2>Proposed solution</h2> <pre><code>SetAttributes[preload, HoldFirst] preload[sym_Symbol] := If[! FreeQ[Quiet @ OwnValues @ Unevaluated @ sym, System`Dump`AutoLoad], sym;] fastLookupOptionFunction[option_] := Names["System`*"] // ToHeldExpression // Cases[ Hold[s_Symbol] :&gt; s /; (preload @ s; Quiet @ Options[Unevaluated @ s, option]) =!= {} ] </code></pre> <p>Which in a fresh kernel yields:</p> <pre><code>fastLookupOptionFunction[SelfLoopStyle] // AbsoluteTiming </code></pre> <blockquote> <pre><code>{1.47235, {GraphPlot, GraphPlot3D, LayeredGraphPlot, TreeForm, TreePlot}} </code></pre> </blockquote> <p>If this is still not fast enough for you or misses Options that are loaded in another way besides <code>System`Dump`AutoLoad</code> I suggest that you construct a separate database of the Symbol and Option names that you can search independent of evaluation.</p>
3,800,833
<p><strong>Question: Bob invests a certain sum of money in a scheme with a return of 22% p.a . After one year, he withdraws the entire amount (including the interest earned) and invests it in a new scheme with returns of 50% p.a (compounded annually) for the next two years. What is the compounded annual return on his initial investment over the 3 year period?</strong></p> <p>The answer to this problem is fairly simple: assume the initial investment to be, say, \$100, then calculate interest for the 1st year at 22% and for the 2nd and 3rd years at 50%, which comes out to \$274.50.</p> <p>The return is then \$174.50 over 3 years; using the compound interest formula, you get a rate of interest of around 40% per annum over the three years.</p> <p>My question is can you skip all this lengthy process and use weighted averages to come up with the final answer? <span class="math-container">$$ Average\ rate\ of\ Interest = \frac{1 * 22 + 2 * 50}{1 + 2} \approx 40.67\% $$</span></p> <p>The answer with this is off by about 0.67%, which doesn't matter much. However, is using weighted averages a correct approach, or am I getting the correct answer using a wrong approach?</p> <p>Note: The goal of asking this question is to decide on a faster approach to this problem and not necessarily getting the final answer. If you have an approach faster than weighted averages (assuming it is correct), please feel free to post it as an answer.</p>
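For a concrete comparison of the exact compounded return with the weighted-average shortcut (a Python sketch I added for illustration, not part of the question):

```python
growth = 1.22 * 1.50 ** 2             # 3-year value multiple: 1.22 * 2.25 = 2.745
cagr = growth ** (1 / 3) - 1          # exact compounded annual return, about 40.0%
weighted = (1 * 0.22 + 2 * 0.50) / 3  # weighted-average shortcut, about 40.67%
gap = weighted - cagr                 # roughly the 0.67% discrepancy noted above
```

So the weighted average overstates the true compounded annual return slightly, because compounding is geometric rather than arithmetic.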
Felix Marin
85,343
<p><span class="math-container">$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$</span> <span class="math-container">\begin{align} &amp;\bbox[5px,#ffd]{\int_{0}^{\pi/2}\tan^{n - 1}\pars{x}\,\dd x} \,\,\,\stackrel{\tan\pars{x}\ \mapsto\ x}{=}\,\,\, \int_{0}^{\infty}{x^{n - 1} \over 1 + x^{2}}\,\dd x \,\,\,\stackrel{x^{2}\ \mapsto\ x}{=}\,\,\, {1 \over 2}\int_{0}^{\infty}{x^{n/2 - 1} \over 1 + x}\,\dd x \end{align}</span> with <span class="math-container">$\ds{0 &lt; \Re\pars{n} &lt; 2}$</span>. Note that <span class="math-container">$\ds{{1 \over 1 + x} = \sum_{k = 0}^{\infty}{\pars{-x}^{k}} = \sum_{k = 0}^{\infty}\color{red}{\Gamma\pars{k + 1}}\, {\pars{-x}^{k} \over k!}}$</span>. With <a href="https://mathworld.wolfram.com/RamanujansMasterTheorem.html" rel="nofollow noreferrer">Ramanujan's Master Theorem</a>: <span class="math-container">\begin{align} &amp;\bbox[5px,#ffd]{\int_{0}^{\pi/2}\tan^{n - 1}\pars{x}\,\dd x} = {1 \over 2}\,\Gamma\pars{n \over 2}\color{red}{\Gamma\pars{-\,{n \over 2} + 1}} = {1 \over 2}\,{\pi \over \sin\pars{\pi n/2}} = \bbx{{\pi \over 2}\csc\pars{\pi n \over 2}} \\ &amp; \end{align}</span></p>
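The step Γ(n/2)Γ(1−n/2) = π/sin(πn/2) is Euler's reflection formula; here is a quick numerical spot check (my addition, not part of the answer):

```python
import math

def reflection_gap(s):
    # |Gamma(s) * Gamma(1 - s) - pi / sin(pi * s)| for 0 < s < 1
    return abs(math.gamma(s) * math.gamma(1 - s) - math.pi / math.sin(math.pi * s))

# check at a few points in (0, 1), i.e. n/2 for 0 < n < 2
gaps = [reflection_gap(s) for s in (0.1, 0.25, 0.5, 0.9)]
```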
2,006,927
<p>I want to prove that $\cap_{i=1}^{\infty}\left(A_i \cap \left(\cup_{j=1}^{\infty}B_j\right)\right)=\cup_{j=1}^{\infty}\left(\left(\cap_{i=1}^{\infty}A_i\right) \cap B_j\right)$ for sets $A_i,B_j$ and natural numbers $i,j$. If an element $x$ belongs to the left hand side, then $x\in A_1$ and $x\in$ some of $B_j$ and $x\in A_2$ and $x\in$ some of $B_j$ and so forth. Then $x\in A_1$, $x\in A_2$, $x\in A_3$ etc so $x \in \cap_{i=1}^{\infty}A_i$ but I don't see how I can proceed with the B:s and get to $x \in B_1$ and $x\in$ all $A_i$ or $x \in B_2$ and $x\in$ all $A_i$ or $x \in B_3$ and $x\in$ all $A_i$ and so forth.</p>
Jared N
385,410
<p>If $x$ is in the set on the left, then $x\in A_i$ for <em>all</em> indices $i$, and $x\in B_j$ for <em>some</em> index $j$. That would imply that $$x\in B_j \cap \bigcap_{i=1}^\infty A_i,$$ (again, just for a <em>single</em> index $j$.) Fortunately, we only need it to be true for a single $j$, because that implies that $$ x \in \bigcup_{j=1}^\infty( B_j \cap \bigcap_{i=1}^\infty A_i).$$</p> <p>All the statements are biconditional, so the sets are equal.</p>
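The identity is easy to test on finite families of sets (a Python sketch I added; the finite index range stands in for the countable one):

```python
from functools import reduce

# finite stand-ins for the countable families A_i and B_j
A = [{1, 2, 3, 4}, {2, 3, 4, 5}, {2, 3, 4}]
B = [{3, 9}, {4, 7}, {8}]

inter_A = reduce(set.intersection, A)   # intersection of all A_i
union_B = reduce(set.union, B)          # union of all B_j

# left-hand side: intersect each A_i with the union of the B_j, then intersect
lhs = reduce(set.intersection, [a & union_B for a in A])
# right-hand side: meet the common part of the A_i with each B_j, then unite
rhs = reduce(set.union, [inter_A & b for b in B])
```

Here both sides come out to {3, 4}, as the biconditional argument predicts.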
98,317
<p>This comes from Artin Second Edition, page 219. Artin defined $G = \langle x,y\mid x^3, y^3, yxyxy\rangle$, and uses the Todd-Coxeter Algorithm to show that the subgroup $H = \langle y\rangle$ has index 1, and therefore $G = H$ is the cyclic group of order 3.</p> <p>That being the case, $x$ cannot be either $y$ or $y^2$, for then the third relation would not be satisfied. So the relation $x=1$ must follow from the given relations. Is there another way of seeing this besides from the Todd-Coxeter algorithm?</p>
Srivatsan
13,425
<p>Here's one way -- quite a bit ad hoc. </p> <p>The basic idea is write all the elements in terms of $z := yx$. From the third relation, we can see that $z^2 y = 1$, or $z^2 = y^{-1} = y^2$. Therefore, $y = y^4 = z^4$. Now, we can also write $x$ in terms of $z$: $$x = y^{-1} z = y^2 \cdot z = z^8 \cdot z = z^9 .$$ Now $x$ and $y$ commute, both being powers of $z$. (It is a simple exercise to show that $x = 1$ from this. The last line in Andres' answer explains this.) </p> <hr> <p>Here's an alternative approach, which is what I originally followed. Armed with these two identities, we can rewrite all three given relations entirely in terms of $z$: </p> <ul> <li>$(z^{9})^3 = 1$; </li> <li>$(z^{4})^3 = 1$; and</li> <li>$z^2 \cdot z^4 = 1$.</li> </ul> <p>From these observations, since $\gcd(27, 12, 6)=3$, we get that $z^3 = 1$. Finally, plugging this back, we obtain $x = z^9 = 1$ and $y = zx^{-1} = z$. </p>
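As a mechanical sanity check (my addition): with $x$ mapped to the identity and $y$ to a generator of the cyclic group of order 3, written additively mod 3, all three relators vanish. This only confirms that $C_3$ satisfies the presentation; the word argument above shows it is in fact all of $G$.

```python
# C3 written additively: elements 0, 1, 2; the group product is addition mod 3
x, y = 0, 1          # x -> identity, y -> a generator

def word(*letters):
    # evaluate a group word in C3 (product = sum mod 3)
    return sum(letters) % 3

r1 = word(x, x, x)        # the relator x^3
r2 = word(y, y, y)        # the relator y^3
r3 = word(y, x, y, x, y)  # the relator yxyxy
```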
4,494,199
<p>This is a problem from a past qualifying exam in complex analysis. I'm working through these to study for my own upcoming qual. For this question, I think my proof is fairly straightforward, but I'd like to know whether or not it is correct and complete. I'm also interested in other ways of answer the question. Thanks!</p> <p><strong>Problem:</strong></p> <p>Find how many solutions (counting multiplicity) the equation <span class="math-container">$\sin z = ez^4$</span> has on the unit disk <span class="math-container">$|z|&lt;1$</span>. Justify your answer.</p> <p><strong>My Solution:</strong></p> <p>Let <span class="math-container">$g(z) = \sin(z)$</span> and <span class="math-container">$f(z) = -ez^4$</span> and consider these functions on the unit circle <span class="math-container">$|z|=1$</span>. We will show by Rouche's Theorem that since <span class="math-container">$|g(z)|&lt;|f(z)|$</span>, then <span class="math-container">$f(z)+g(z)$</span> has the same number of zeros inside the unit circle as <span class="math-container">$f(z)$</span> counting multiplicities, thus there are four solutions to the given equation.</p> <p>First, we need to show that <span class="math-container">$|\sin(z)|&lt;|e|$</span>. We have <span class="math-container">$$ \begin{align*} |\sin(z)| &amp;= \left\vert \frac{e^{iz} - e^{-iz}}{2i}\right\vert \\ &amp;= \frac{1}{2} |e^{i(x+iy)} - e^{-i(x+iy)}| \\ &amp;\leq \frac{1}{2}(|e^{i(x+iy)}| + |- e^{-i(x+iy)}|)\\ &amp;= \frac{1}{2}(e^{-y} + e^y) \end{align*} $$</span></p> <p>On <span class="math-container">$|z|=1$</span>, we have <span class="math-container">$|y|\leq 1$</span>, thus <span class="math-container">$$ |\sin(z)| \leq \frac{1}{2}(e^{-y} + e^y) \leq \frac{1}{2}(e^{1}+e^{-1})&lt; \frac{1}{2}(2e) = e. 
$$</span></p> <p>Now, we have that when <span class="math-container">$|z|=1$</span> <span class="math-container">$$ |\sin(z)|&lt;e = e|z|^4 = |-ez^4|, $$</span></p> <p>thus <span class="math-container">$|g(z)|&lt;|f(z)|$</span>, and by Rouche's Theorem, <span class="math-container">$f(z)+g(z)$</span> has the same number of zeros inside the unit circle as <span class="math-container">$f(z)$</span>. Since <span class="math-container">$f(z) = -ez^4$</span> is a polynomial, we know by the fundamental theorem of algebra that it has exactly four roots counting multiplicity. Thus, <span class="math-container">$$ f(z) + g(z) = \sin(z) - ez^4 $$</span></p> <p>has exactly four roots and <span class="math-container">$\sin(z) = ez^4$</span> has exactly four solutions. <span class="math-container">$\blacksquare$</span></p>
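Independently of Rouché's theorem, the count can be confirmed numerically with the argument principle, $\frac{1}{2\pi i}\oint_{|z|=1}\frac{f'(z)}{f(z)}\,dz$. The Python sketch below is my own addition; the trapezoid rule converges rapidly here because the integrand is smooth and periodic, and $f\neq 0$ on $|z|=1$ by the estimate $|\sin z| &lt; e$ above:

```python
import cmath
import math

def f(z):
    # the function whose zeros we count: sin(z) - e*z^4
    return cmath.sin(z) - math.e * z ** 4

def df(z):
    return cmath.cos(z) - 4 * math.e * z ** 3

# trapezoid rule for (1/(2*pi*i)) * contour integral of f'/f over |z| = 1
N = 4096
total = 0j
for k in range(N):
    z = cmath.exp(2j * math.pi * k / N)
    total += df(z) / f(z) * 1j * z * (2 * math.pi / N)   # dz = i*z*dt

zero_count = round((total / (2j * math.pi)).real)
```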
Z Ahmed
671,540
<p>Let <span class="math-container">$0\le a , b \le 1$</span>.</p> <p>We prove that <span class="math-container">$$0\le \frac a{1+b}+\frac b{1+a} \le 1$$</span></p> <p>Let <span class="math-container">$$F=\frac a{1+b}+\frac b{1+a}$$</span> Then <span class="math-container">$F\ge 0$</span>. For the upper bound, assume <span class="math-container">$a+b&gt;0$</span> (if <span class="math-container">$a=b=0$</span> then <span class="math-container">$F=0$</span> and there is nothing to prove). Since <span class="math-container">$a\le 1$</span> gives <span class="math-container">$a+b\le 1+b$</span>, we have <span class="math-container">$$\frac{a}{1+b} \le \frac{a}{a+b}$$</span> and similarly <span class="math-container">$$\frac{b}{1+a}\le \frac{b}{b+a}$$</span> Adding the last two we get <span class="math-container">$$F\le 1$$</span> Equality holds when <span class="math-container">$$a=1=b.$$</span></p>
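A quick randomized spot check of the two bounds (my addition):

```python
import random

random.seed(0)

def F(a, b):
    # the expression a/(1+b) + b/(1+a)
    return a / (1 + b) + b / (1 + a)

# sample (a, b) uniformly from the unit square and record the extremes
samples = [F(random.random(), random.random()) for _ in range(100_000)]
lo, hi = min(samples), max(samples)
```

The sampled values all stay in $[0,1]$, and the equality case $a=b=1$ gives exactly 1.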
158,978
<p><a href="https://i.stack.imgur.com/5nKVV.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/5nKVV.jpg" alt="Explore the Fourier Series for a square wave"></a></p> <pre><code>f[t] = (4/Pi) Sum[(1/n) Sin[2 Pi n t], {n, 1, Infinity, 2}] </code></pre>
José Antonio Díaz Navas
1,309
<p>Once you have the function and its Fourier representation, you can try with this:</p> <pre><code>f[t_] = (4/Pi) Sum[(1/n) Sin[2 Pi n t], {n, 1, \[Infinity], 2}]; fserf[t_,nmax_] := (4/Pi) Sum[(1/n) Sin[2 Pi n t], {n, 1, 2*nmax-1, 2}]; GraphicsGrid[ Partition[ Plot[{f[t], fserf[t, #]}, {t, 0, 3}, PlotLabel -&gt; ToString[#] &lt;&gt; " Terms"] &amp;/@ {3, 5, 10, 20}, 2], ImageSize -&gt; Large] </code></pre> <p>You can get your plots:</p> <p><a href="https://i.stack.imgur.com/iCF1c.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iCF1c.jpg" alt="![enter image description here"></a></p> <p>Even use <code>Manipulate</code>to interactively see how the series works:</p> <pre><code>f[t_] = (4/Pi) Sum[(1/n) Sin[2 Pi n t], {n, 1, \[Infinity], 2}]; fserf[t_, nmax_] := (4/Pi) Sum[(1/n) Sin[2 Pi n t], {n, 1, 2*nmax-1, 2}]; Manipulate[ Plot[{f[t], fserf[t, nmax]}, {t, 0, scale},PlotLabel-&gt; ToString[nmax] &lt;&gt; " Terms", PlotRange -&gt; {-1.5, 1.5}], {{nmax, 1, "Nº of Terms"}, 1, 50, 1, Appearance -&gt; "Labeled"}, {{scale, 1, "x-axis length"}, 1, 5, 1, Appearance -&gt; "Labeled"} ] </code></pre> <p><a href="https://i.stack.imgur.com/csMdl.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/csMdl.jpg" alt="enter image description here"></a></p>
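Outside Mathematica, the same partial sums are easy to check numerically. This small Python sketch (my own addition) evaluates the odd-harmonic partial sum at $t = 1/4$, where the square wave equals 1:

```python
import math

def square_partial(t, terms):
    # (4/Pi) * sum of Sin[2 Pi n t]/n over the first `terms` odd n,
    # mirroring the fserf[t, nmax] definition above
    return (4 / math.pi) * sum(
        math.sin(2 * math.pi * n * t) / n for n in range(1, 2 * terms, 2)
    )

s = square_partial(0.25, 5000)   # partial sum at t = 1/4, where the wave is 1
```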