qid int64 1 4.65M | question large_stringlengths 27 36.3k | author large_stringlengths 3 36 | author_id int64 -1 1.16M | answer large_stringlengths 18 63k |
|---|---|---|---|---|
258,332 | <blockquote>
<p>Prove that if $a^x=b^y=(ab)^{xy}$, then $x+y=1$.</p>
</blockquote>
<p>How do I use logarithms to approach this problem?</p>
| Shaun Ault | 13,074 | <p>Using logarithms:</p>
<p>Since $a^x = b^y$,
$$
\log a^x = \log b^y \quad \Rightarrow \quad x \log a = y \log b
\quad \Rightarrow \quad \log a = \frac{y}{x} \log b
$$
Then, since $b^y = (ab)^{xy}$,
$$
\log b^y = \log (ab)^{xy}
\quad \Rightarrow \quad
y \log b = xy \log (ab) = xy \left( \log a + \log b\right)
$$
Let's assume $y \neq 0$ (since if $y=0$, then we must also have $x=0$ and get the required conditions without having $x+y=1$ -- meaning the original question must have had some restriction such as $x, y \neq 0$). Cancel $y$ on both sides of the last equation:
$$
\log b = x\left(\log a + \log b\right)
= x\left( \frac{y}{x} \log b + \log b \right)
= y \log b + x \log b = (y + x)\log b
$$
Then as long as $b \neq 1$ (which is guaranteed if $x \neq 0$), we know that $\log b \neq 0$, hence we can cancel in the equation:
$$
\log b = (x+y)\log b
\quad \Rightarrow \quad
1 = x + y.
$$</p>
|
77,744 | <p>Hopefully a simple one for you (or at least seemingly)!</p>
<p>I import a .txt file, from which I make a ListLinePlot. I simply want to read in the name of the file and store it in a variable so I can use it to tag my plots later.</p>
<pre><code> Data = Import["C:\\Users\\Name\\Folder\\test2.txt", "CSV"];
ListLinePlot[Transpose[{Data[[All, 1]], Data[[All, 2]]}],
Frame -> True,
PlotRange -> {{Data[[All, 1]][[
First@Flatten[Position[Data, {_, Max[Data[[All, 2]]]}]]]]
- 0.0003,
Data[[All, 1]][[
First@Flatten[Position[Data, {_, Max[Data[[All, 2]]]}]]]]
+ 0.0003}}, FrameTicks -> Automatic, Axes -> False,
PlotStyle -> Directive[Black, Thin],
FrameLabel -> {"SIMULATION TIME STEP [\[Mu]s]",
"PARTICLE NUMBER [#]"}]
</code></pre>
<p>Here is the code I'm using to plot; the filename is "test2", and that is what I would like to extract.</p>
<p>Thanks in advance,</p>
<p>QP</p>
| Gordon Coale | 19,027 | <p>Given that you already have the filename as a text string, you could specify the file name and path separately when you first run, and create the full path and filename using FileNameJoin. FileNameJoin is OS-agnostic, so it inserts forward slashes or backslashes depending on whether you are running Windows, OS X or Linux.</p>
<p>Alternatively, if you want something a bit more dynamic, you can use Algohi's solution after first getting the filename with either</p>
<pre><code>file = SystemDialogInput["FileOpen"]
(* Brings up a standard FileOpen dialog box specific to your OS *)
FileNameSetter[Dynamic[file2], "Open"]
(* Creates a browse button in your notebook which opens a
file open dialog box when clicked and file2 is set when
you click on a file *)
</code></pre>
<p>For both options you can limit the shown files to a particular filename extension such as CSV.</p>
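<p>As a further option (an addition to the original answer, using only documented built-ins): if the path is already in hand as a string, <code>FileBaseName</code> should extract the bare name directly, since it strips both the directory part and the extension. A small sketch:</p>
<pre><code>(* FileBaseName drops the directory part and the extension *)
path = "C:\\Users\\Name\\Folder\\test2.txt";
name = FileBaseName[path]
(* "test2" -- usable later, e.g. PlotLabel -> name *)
</code></pre>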
|
4,618,433 | <p>Just a heads up: "<span class="math-container">$a$</span>" and "<span class="math-container">$α$</span>" are different</p>
<p>Let <span class="math-container">$a,b \in \Bbb R$</span> and suppose <span class="math-container">$a^2 − 4b \neq 0$</span>. Let <span class="math-container">$\alpha$</span> and <span class="math-container">$\beta$</span> be the (distinct) roots of the polynomial <span class="math-container">$x^2 + ax + b$</span>. Prove that there is a real number <span class="math-container">$c$</span> such that either <span class="math-container">$\alpha − \beta = c$</span> or
<span class="math-container">$\alpha − \beta = ci$</span>.</p>
<p>I have no idea how to prove this mathematically. Can someone explain how they would do this, including how they would implement it using a proof tree?</p>
<p>This is what I was trying to do.</p>
<p><span class="math-container">$$(x - \alpha)(x - \beta) = x^2 + ax + b$$</span></p>
<p><span class="math-container">$$x^2 - \alpha x - \beta x + \alpha \beta = x^2 + ax + b$$</span></p>
<p><span class="math-container">$$-\alpha x - \beta x = ax$$</span></p>
<p><span class="math-container">$$-x(\alpha + \beta) = ax$$</span></p>
<p><span class="math-container">$$(\alpha + \beta) = -a$$</span></p>
<p><span class="math-container">$$\alpha \beta = b$$</span></p>
<p>However, I'm confused where to go from here and wondering if what I'm doing is wrong.</p>
| Essaidi | 708,306 | <ol>
<li> If <span class="math-container">$a^2 - 4 b > 0$</span>, then the roots are:
<span class="math-container">$$\alpha = \dfrac{-a + \sqrt{a^2 - 4 b}}{2} \text{ and } \beta = \dfrac{-a - \sqrt{a^2 - 4 b}}{2}$$</span>
so:
<span class="math-container">$$\alpha - \beta = \sqrt{a^2 - 4 b}$$</span>
Let <span class="math-container">$c = \sqrt{a^2 - 4 b}$</span>; then <span class="math-container">$c \in \mathbb{R}$</span> and <span class="math-container">$\alpha - \beta = c$</span>.</li>
<li> If <span class="math-container">$a^2 - 4 b < 0$</span>, then the roots are:
<span class="math-container">$$\alpha = \dfrac{-a + i \sqrt{4b - a^2}}{2} \text{ and } \beta = \dfrac{-a - i \sqrt{4 b - a^2}}{2}$$</span>
so:
<span class="math-container">$$\alpha - \beta = i \sqrt{4 b - a^2}$$</span>
Let <span class="math-container">$c = \sqrt{4 b - a^2}$</span>; then <span class="math-container">$c \in \mathbb{R}$</span> and <span class="math-container">$\alpha - \beta = i c$</span>.</li>
</ol>
|
313,437 | <p>I have to determine the convergence of the following integral:
$$\int^{\pi/2}_0{\frac{\ln(\sin(x))}{\sqrt{x}}}dx$$
Any help? Thanks</p>
| Jonathan | 37,832 | <p>If you integrate by parts, then you find
$$\int_0^{\pi/2}dx\,\frac{\ln\sin x}{\sqrt{x}}=2\sqrt{x}\ln\sin x\Bigg|_0^{\pi/2}-2\int_0^{\pi/2}dx\,\sqrt{x}\frac{\cos x}{\sin x}.$$
It is fairly clear at this point that the integral converges because $\sqrt{x}/\sin x\sim x^{-1/2}$ at $x=0$.</p>
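<p>To spell out the comparison a little (a supplementary detail): the boundary term vanishes since $\sqrt{x}\ln\sin x\to 0$ as $x\to 0^+$ and $\ln\sin(\pi/2)=\ln 1=0$, while near $x=0$
$$\sqrt{x}\,\frac{\cos x}{\sin x}\sim\frac{\sqrt{x}}{x}=x^{-1/2},
\qquad \int_0^{1} x^{-1/2}\,dx=2<\infty,$$
so the remaining integral converges by comparison.</p>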
|
1,617,890 | <blockquote>
<p>Question: Solve $\sin(3x)=\cos(2x)$ for $0\le x\le 2\pi$.</p>
</blockquote>
<p>My knowledge on the subject; I know the general identities, compound angle formulas and double angle formulas so I can only apply those.</p>
<p>With that in mind</p>
<p>\begin{align}
\cos(2x)=&~ \sin(3x)\\
\cos(2x)=&~ \sin(2x+x) \\
\cos(2x)=&~ \sin(2x)\cos(x) + \cos(2x)\sin(x)\\
\cos(2x)=&~ 2\sin(x)\cos(x)\cos(x) + \big(1-2\sin^2(x)\big)\sin(x)\\
\cos(2x)=&~ 2\sin(x)\cos^2(x) + \sin(x) - 2\sin^2(x)\\
\cos(2x)=&~ 2\sin(x)\big(1-\sin^2(x)\big)+\sin(x)-2\sin^2(x)\\
\cos(2x)=&~ 2\sin(x) - 2\sin^3(x) + \sin(x)- 2 \sin^2(x)\\
\end{align}
<strong>edit</strong> </p>
<p>\begin{gather}
2\sin(x) - 2\sin^3(x) + \sin(x)- 2 \sin^2(x) = 1-2\sin^2(x) \\
2\sin^3(x) - 3\sin(x) + 1 = 0
\end{gather} </p>
<p>This is a cubic right? </p>
<p>So $u = \sin(x)$,</p>
<p>\begin{gather} 2u^3 - 3u + 1 = 0 \\
(2u^2 + 2u - 1)(u-1) = 0
\end{gather}</p>
<p>Am I on the right track?<br>
This is where I am stuck what should I do now?</p>
| Anurag A | 68,092 | <p>Use $\sin 3x=3 \sin x - 4 \sin^3x$ and $\cos 2x=1-2\sin^2x$ to get
$$3 \sin x - 4 \sin^3x=1-2\sin^2x.$$
Now call $\sin x=t$. Thus we have
$$4t^3-2t^2-3t+1=0.$$
Observe that $t=1$ is definitely a solution, so we have
$$(t-1)(4t^2+2t-1)=0.$$
The quadratic factor will be zero for
$$t=\frac{-1\pm \sqrt{5}}{4}$$
I hope you can take it from here.</p>
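<p>For the record, carrying the factorization to the end (a completion of the last step, worth double-checking): $t=1$ gives $x=\pi/2$; since $\frac{-1+\sqrt5}{4}=\sin\frac{\pi}{10}$ and $\frac{-1-\sqrt5}{4}=-\sin\frac{3\pi}{10}$, the remaining roots give
$$x=\frac{\pi}{10},\ \frac{9\pi}{10},\ \frac{13\pi}{10},\ \frac{17\pi}{10},$$
so the full solution set on $[0,2\pi]$ is $\left\{\frac{\pi}{10},\ \frac{\pi}{2},\ \frac{9\pi}{10},\ \frac{13\pi}{10},\ \frac{17\pi}{10}\right\}$.</p>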
|
1,580,586 | <p>The question goes as follows:
Consider the points $A(1,3,-1)$ and $B(-1,4,-2)$ on a line $L$. Find the point $Q$ on $L$ closest to the point $P(1,1,0)$.</p>
<p>My thinking:
The closest distance from a point to a line is along a perpendicular, i.e. at a $90$ degree angle.
Therefore:
$$
Q\cdot P=0
$$</p>
<p>$$
L=
\left(\begin{array}{cc}
1\\
3\\
-1\\
\end{array}\right)
+
t
\left(\begin{array}{cc}
-2\\
1\\
-1\\
\end{array}\right)
$$</p>
<p>$$
Q =
\left(\begin{array}{cc}
1-2t\\
3+t\\
-1-t\\
\end{array}\right)
$$</p>
<p>$$
(1-2t)\times(1)+(3+t)\times(1)+(-1-t)\times(0)=0
$$
$$
4-t=0
$$</p>
<p>$$t=4
$$
and
$$
Q=
\left(\begin{array}{cc}
-7\\
7\\
-5\\
\end{array}\right)
$$</p>
<p>But it is wrong, my answer tells me a different story and when I graph it is wrong.</p>
<p>Answer $Q(2,5/2,-1/2)$</p>
| Martin Argerami | 22,857 | <p>If you pay attention to your graph, you will see that what you need is $(-2,1,-1)\cdot(P-Q)=0$. That is,
\begin{align}
0&=-2(1-(1-2t))+(1-(3+t))-(-(-1-t))\\
&=-6t-3,\\
\end{align}
so $t=-1/2$, and your point in the line is
$$
\left(\begin{array}{cc}
1-2(-1/2)\\
3+(-1/2)\\
-1-(-1/2)\\
\end{array}\right)
=
\left(\begin{array}{cc}
2\\
5/2\\
-1/2\\
\end{array}\right)
$$</p>
|
142,105 | <p>In trying to deduce the lower bound of the Ramsey number R(4,4) I am following my book's hint and considering the graph with vertex set $\mathbb{Z}_{17}$ in which $\{i,j\}$ is colored red if and only if $i-j\equiv\pm2^k \pmod{17}$ for some $k=0,1,2,3$ (i.e. $i-j$ is a non-zero quadratic residue mod 17), and blue otherwise. This graph shows that $R(4,4)\ge 18$. That's all fine, but how am I expected to convince myself of this without drawing a 17-vertex graph?</p>
<p>Is there some way to mathematically justify that such a graph will not contain a monochromatic $K_4$ without drawing this graph?</p>
<p>Thanks.</p>
| Gerry Myerson | 8,269 | <p>By symmetry, you just have to show that $0$ is not in any monochromatic $K_4$. $0$ is adjacent to $1,2,4,8,9,13,15,16$. Suppose $1$ is involved. $1$ is adjacent to $2,9,16$. No two of these are adjacent, so that rules out a red $K_4$ involving $0$ and $1$. Now you have to do the same kind of analysis for $0$ and $2$, etc. Eventually, you rule out all red $K_4$s. Then you get to work on the potential blue $K_4$s. You can do the same kind of analysis, or you can notice that taking $x$ to $3x$ takes all red edges to blue and all blue to red, so if there's no red $K_4$, there's no blue one either. </p>
<p>While this is true, it is also easy to show that if there is a red $K_4$ then there must be a red $K_4$ that has both $0$ and $1$ in its vertex set. So you only need to show that there is no red edge among $2,9,16$, and you are done.</p>
|
308,856 | <p>A set $E\subseteq \mathbb{R}^d$ is said to be Jordan measurable if its inner measure $m_{*}(E)$ and outer measure $m^{*}(E)$ are equal. However, Lebesgue measure theory is developed with only the outer measure.</p>
<p>A function is Riemann integrable iff its upper integral and lower integral are equal. However, in Lebesgue integration theory, we rarely use the upper Lebesgue integral.</p>
<p>Why are outer measure and lower integral more important than inner measure and upper integral?</p>
| Pietro Majer | 6,101 | <p>My two cents. Outer measures are sub-additive on countable coverings: $$A\subset \cup _{j\in\mathbb{N}} A_j\quad \Rightarrow \quad \mu( A)\le \sum_{{j\in\mathbb{N}}}\mu(A_j)$$
which is somehow a nicer and more practical property than the analogous dual property of super-additivity for inner measures (even in the case of finite measures). It gives a bound on the set $A$ in terms of the supposedly simpler sets $A_j$. Also, we like sub-additivity more than super-additivity, because it recalls norms.</p>
|
2,705,794 | <p>I ran across this problem on a practice Putnam worksheet. Completely stumped.</p>
<p>Is $$\large \frac{m^{6} + 3m^{4} + 12m^{3} + 8m^{2}}{24}$$ an integer for all $m \in \mathbb{N}$?</p>
<p>I suspect it is an integer for any $m$. It checks out for small cases.</p>
<p>Any hints for proving the general case?</p>
| lhf | 589 | <p>You can just compute $f(m)$ at $7$ consecutive integers. If these values are all integers, then $f(m)$ is always an integer because of <a href="http://en.wikipedia.org/wiki/Newton_series#Newton.27s_series" rel="nofollow noreferrer">Newton's interpolation formula</a>. Try $m=-3,\dots,3$.</p>
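<p>Carrying out that check (a supplement; here $f(m)$ denotes the displayed expression): since $f$ has degree $6$, seven consecutive integer values suffice, and
$$f(-3)=30,\quad f(-2)=2,\quad f(-1)=0,\quad f(0)=0,\quad f(1)=1,\quad f(2)=10,\quad f(3)=57,$$
all of which are integers, so $f(m)\in\mathbb{Z}$ for every integer $m$.</p>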
|
1,875,351 | <p>I saw this problem in a book (not homework),</p>
<p>Assuming $L(n) = F(n)$ for $n = 1,2,\cdots, k$,
where $L(n)$ is the $n$th Lucas number and $F(n)$ is the $n$th Fibonacci number.</p>
<p>$$\qquad L(k+1) = L(k) + L(k-1) \qquad \tag{by definiton}$$</p>
<p>$$ \qquad\qquad= F(k) + F(k-1) \tag{assumption}$$</p>
<p>$$ \ =F(k+1)$$</p>
<p>Hence, by principle of Mathematical induction, $F(n) = L(n)$ for each positive number $n$.</p>
<p>Is the reason the proof is incorrect that you can't assume $L(n) = F(n)$, because that is not the relationship/definition of Lucas numbers in terms of Fibonacci numbers?</p>
| Seewoo Lee | 350,772 | <p>We have to prove $F(1)=L(1)$ and $F(2)=L(2)$, which is false. That is the reason why the proof is incorrect. </p>
|
1,875,351 | <p>I saw this problem in a book (not homework),</p>
<p>Assuming $L(n) = F(n)$ for $n = 1,2,\cdots, k$,
where $L(n)$ is the $n$th Lucas number and $F(n)$ is the $n$th Fibonacci number.</p>
<p>$$\qquad L(k+1) = L(k) + L(k-1) \qquad \tag{by definiton}$$</p>
<p>$$ \qquad\qquad= F(k) + F(k-1) \tag{assumption}$$</p>
<p>$$ \ =F(k+1)$$</p>
<p>Hence, by principle of Mathematical induction, $F(n) = L(n)$ for each positive number $n$.</p>
<p>Is the reason the proof is incorrect that you can't assume $L(n) = F(n)$, because that is not the relationship/definition of Lucas numbers in terms of Fibonacci numbers?</p>
| Kitter Catter | 166,001 | <p>This looks corny after someone already pointed this out, but you have to establish that the assumption is true for some set of points. Now, you can't just show $L(1)=F(1)$, because you need $L(k)$ and $L(k-1)$ to build on. Thus you would have to show it to be true for two consecutive values of $L$.</p>
|
3,135,386 | <p>Our teacher tells us to convert it this way, <span class="math-container">$ 3^x = e^{\ln 3^x}= e^{x\cdot\ln 3}$</span>, and then use the rule <span class="math-container">$\left(e^u\right)'=e^u\cdot u'$</span>, but I can't understand where <span class="math-container">$\ln$</span> comes from and how <span class="math-container">$\ln 3^x = x\cdot \ln 3$</span>.</p>
| Michael Rozenberg | 190,319 | <p>Because by the definition of <span class="math-container">$\ln$</span> we have <span class="math-container">$\ln3^x=x\ln3$</span> and from here: <span class="math-container">$$\left(3^x\right)'=\left(e^{x\ln3}\right)'=e^{x\ln3}\cdot\ln3=3^x\ln3.$$</span>
Actually, <span class="math-container">$\log_ab$</span> is the number <span class="math-container">$c$</span> such that <span class="math-container">$a^c=b$</span>. (Here, <span class="math-container">$a>0$</span>, <span class="math-container">$b>0$</span> and <span class="math-container">$a\neq1$</span>.)</p>
<p>That is, <span class="math-container">$a^{\log_ab}=b$</span>.</p>
<p>For <span class="math-container">$a=e$</span> we obtain: <span class="math-container">$e^{\ln{b}}=b$</span> and <span class="math-container">$$3^x=\left(e^{\ln3}\right)^x=e^{x\ln3}.$$</span></p>
|
3,016,386 | <p>Hi, I am struggling with this exercise, which may seem simple. I was trying to write the tangent as follows:</p>
<p><span class="math-container">$$\tan(z)=-i\frac{e^{iz}-e^{-iz}}{e^{iz}+e^{-iz}}$$</span> and then <span class="math-container">$$z=a+bi$$</span>, which led me to <span class="math-container">$$ \tan z=-i\frac{\cos a(e^{-b}-e^{b})+i\sin a(e^{-b}+e^{b})}{\cos a(e^{-b}+e^{b})+i\sin a(e^{-b}-e^{b})}$$</span>, so I guess here I can multiply the denominator by its conjugate, but this is really a complicated computation for an exam... help appreciated</p>
| egreg | 62,967 | <p>The equation you have to solve is
<span class="math-container">$$
-i\frac{e^{iz}-e^{-iz}}{e^{iz}+e^{-iz}}=w
$$</span>
that can be rewritten as
<span class="math-container">$$
i-ie^{2iz}=we^{2iz}+w
$$</span>
hence
<span class="math-container">$$
e^{2iz}=\frac{i-w}{i+w}
$$</span>
The denominator vanishes for <span class="math-container">$w=-i$</span>. The numerator vanishes for <span class="math-container">$w=i$</span>. Since <span class="math-container">$e^{2iz}$</span> takes on all values except <span class="math-container">$0$</span>, the equation has a solution for <span class="math-container">$w\ne\pm i$</span>.</p>
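<p>To make the remaining step explicit (a supplementary remark): whenever <span class="math-container">$e^{2iz}=\zeta$</span> for some admissible value <span class="math-container">$\zeta\neq 0$</span>, the complex logarithm yields every solution,
<span class="math-container">$$z=\frac{1}{2i}\operatorname{Log}\zeta+k\pi,\qquad k\in\mathbb{Z},$$</span>
since the exponential is <span class="math-container">$2\pi i$</span>-periodic.</p>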
|
2,258,557 | <p>Why is the equation of an arbitrary straight line in the complex plane $zz_0 + \bar z \bar z_0 = D$, where $D \in \mathbb{R}$?</p>
<p>I understand that a vertical straight line can be defined by the equation $z+\bar z= D$: if $z =x+yi$ then $\bar z = x-yi$, so $z+\bar z = x+yi+x-yi=2x$, which describes an arbitrary vertical straight line in the complex plane.</p>
<p>But why is $zz_0 + \bar z \bar z_0 = D$ an arbitrary straight line in the complex plane?</p>
| Nosrati | 108,128 | <p>You know that a vertical straight line can be defined as $z+\bar z= D$, so if you rotate its points by an angle $\theta$ you get $(e^{i\theta}z)+ \overline{(e^{i\theta}z)}= D$ or $e^{i\theta}z + e^{-i\theta}\overline{z}= D$ and with arbitrary real $r\neq0$,
$$re^{i\theta}z + re^{-i\theta}\overline{z}= rD$$
gives us
$$zz_0 +\bar{z}\bar{z_0}= D_0$$
where $z_0=re^{i\theta}$ and $D_0=rD$.</p>
|
59,495 | <p>Suppose $K$ is an $n$-dimensional $C^2$ convex body in $\mathbb{R}^{n+1}$. We choose two distinct directions $z_0, z_1\in\mathbb{S}^{n}$. If $P_1$ and $P_2$ are the corresponding hyperplanes ($z_0\perp P_1$ and $z_1\perp P_2$) and $K'$ is the projection of $K$ on $P_1\cap P_2$, what is $\operatorname{Vol}(K')$?
We know the support function, and for simplicity let's suppose the body is symmetric and centered at the origin.
If we just consider one hyperplane, say $P_0$, and want to compute the area of the projection of $K$ on $P_0$, then the answer is $\frac{1}{2}\int_{\mathbb{S}^{n-1}}\frac{|\langle z,z_0\rangle|}{G}d\mu$ where $G$ is the Gauss curvature of the boundary of $K$. I am looking for a solution of this type, possibly involving other symmetric functions of the principal curvatures.</p>
| Liviu Nicolaescu | 20,302 | <p>You should definitely check <a href="http://www.nd.edu/~lnicolae/val-simple.pdf" rel="nofollow">these notes</a> generated by three bright undergraduates for an REU project that I supervised a few years ago. I promise you, it will be worth your time.</p>
|
4,450,169 | <p>Here is a (seemingly) simple problem in group theory. Given a non-elementary finite nilpotent group <span class="math-container">$N$</span>, show there exist <span class="math-container">$p \neq q$</span> primes such that <span class="math-container">$N$</span> has a quotient <span class="math-container">$\Bbb Z_{pq}^{2}$</span>.</p>
<p>Here, an elementary group is defined to be a direct product of a <span class="math-container">$p$</span>-group and a cyclic group of order coprime to <span class="math-container">$p$</span>. That is, <span class="math-container">$E$</span> elementary <span class="math-container">$\iff \exists P, C : E = P \times C$</span> where <span class="math-container">$|P| = p^{k}$</span>, and <span class="math-container">$C$</span> is cyclic such that <span class="math-container">$(|C|, p) = 1$</span>.</p>
<p>A nilpotent group is defined in the standard way, a group <span class="math-container">$N$</span> is nilpotent <span class="math-container">$\iff$</span> the central series of <span class="math-container">$N$</span> is finite.</p>
<p>I'm not sure how to approach this one-- does anyone have any pointers?</p>
| spin | 12,623 | <p>A finite group is nilpotent if and only if it is a direct product of its Sylow subgroups.</p>
<p>So say you have a nilpotent group <span class="math-container">$G$</span>, then <span class="math-container">$$G = P_1 \times P_2 \times \cdots \times P_t$$</span> with <span class="math-container">$P_i$</span> a <span class="math-container">$p_i$</span>-group for all <span class="math-container">$i$</span> and <span class="math-container">$p_1, p_2, \ldots, p_t$</span> are the prime divisors of <span class="math-container">$|G|$</span>.</p>
<p>If <span class="math-container">$t \geq 2$</span>, prove that there is a normal subgroup <span class="math-container">$N = N_1 \times N_2 \times P_3 \times \cdots \times P_t$</span> with <span class="math-container">$G/N$</span> cyclic of order <span class="math-container">$p_1p_2$</span>.</p>
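<p>A possible route to the stronger quotient <span class="math-container">$\Bbb Z_{pq}^2$</span> (a sketch relying on the standard fact that a non-cyclic finite <span class="math-container">$p$</span>-group <span class="math-container">$P$</span> has Frattini quotient <span class="math-container">$P/\Phi(P)\cong \Bbb Z_p^{k}$</span> with <span class="math-container">$k\geq 2$</span>): since <span class="math-container">$G$</span> is not elementary, at least two factors, say <span class="math-container">$P_1$</span> and <span class="math-container">$P_2$</span>, are non-cyclic, and then
<span class="math-container">$$G \twoheadrightarrow P_1/\Phi(P_1)\times P_2/\Phi(P_2)\twoheadrightarrow \Bbb Z_{p_1}^{2}\times \Bbb Z_{p_2}^{2}\cong \Bbb Z_{p_1p_2}^{2},$$</span>
using <span class="math-container">$\Bbb Z_{p_1}\times\Bbb Z_{p_2}\cong\Bbb Z_{p_1p_2}$</span> coordinatewise.</p>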
|
2,255,617 | <p>I am trying to learn how to do proofs by contradiction. The problem is:</p>
<p>"Prove by Contradiction that there are no positive real roots of $x^6 + 2x^3 +4x + 5$"</p>
<p>I understand that I now assume that there is a positive real root of this equation, so that I can derive a contradiction within the proof. I just don't even know where to start.</p>
| Hayden | 27,496 | <p>This will depend somewhat on your definition of a ring and a ring homomorphism, as there are a few standard ways of defining these notions. </p>
<p>Firstly, for rings, there is sometimes a distinction between a <em>ring</em> and a <em>unital ring</em>, which additionally has a multiplicative identity $1$ (whereas a ring need not have one).</p>
<p>Secondly, and more importantly, even when ring is taken to mean <em>unital</em> ring, there is sometimes a distinction between a <em>ring homomorphism</em> (which simply preserves addition and multiplication) and a <em>unital ring homomorphism</em> (which much additionally preserve the multiplicative identity).</p>
<p>If a ring homomorphism need not preserve the multiplicative identities (assuming these are present in our rings), then between any two rings $R,S$ there is the zero map $\mathbf{0}: R \to S$ defined by $\mathbf{0}(r) = 0$. </p>
<p>As such, if your ring homomorphisms need not be unital, then you will always have such trivial homomorphisms. Note that a non-trivial homomorphism between unital rings will usually preserve $1$, so whether you instead want to consider "non-trivial" or "unital" ring homomorphisms, the answers given by Hurkyl or Lord Shark give examples of (unital) rings which don't admit unital/non-trivial ring homomorphisms in both directions.</p>
<p>For example, there is the obvious inclusion of $\mathbb{Z}$ into $\mathbb{Q}$ (and this is a unital ring homomorphism), but a non-trivial homomorphism $\varphi: \mathbb{Q} \to \mathbb{Z}$ must have $2\varphi(1/2) = \varphi(1)=1$, so thus $\varphi(1/2)$ is some integer which, when multiplied by $2$, gives $1$. Of course, this can't happen, so we get a contradiction.</p>
<p>(I said above that a non-trivial ring homomorphism between unital rings will usually preserve $1$. A quick and easy example of a non-trivial ring homomorphism not preserving $1$ between unital rings is given by the following: Take $\mathbb{F}_2 = \mathbb{Z}/2\mathbb{Z}$ and consider the inclusion of $\mathbb{F}_2$ into $\mathbb{F}_2^2$ by sending $0\mapsto (0,0)$ and $1\mapsto (1,0)$. Now, $(1,0)$ is not the multiplicative identity of $\mathbb{F}_2^2$, as $(1,1)$ is. However, it is a multiplicative identity of the image of the inclusion. In general, a ring homomorphism sends a multiplicative identity to a multiplicative identity of the image. In many cases, especially rings which have some sort of multiplicative cancellation property, that will be enough to mean that it actually sends $1$ to $1$. This is why I was able to justify the fact that $\varphi(1)=1$ above.</p>
<p>For more about this idea, see <a href="https://math.stackexchange.com/questions/170953/nontrivial-subring-with-unity-different-from-the-whole-ring/170956#170956">here</a>.)</p>
|
1,285,443 | <blockquote>
<p>Let us denote the solution of the equation</p>
<p>$$(x+a)^{x+a}=x^{x+2a}$$</p>
<p>by $X_a$.</p>
<p>($a$ is a non-zero real number)</p>
<p>Prove that:</p>
<p>$$\lim_ {a \to 0} X_a = e$$</p>
</blockquote>
<p>This is something that I noticed while making numerical experiments for another problem. The statement looks interesting, I couldn't find anything close to it on the internet. I don't have the idea how to prove it, but numerical methods confirm the statement.</p>
| mathlove | 78,967 | <p>This might not be rigorous, but note that one has
$$(x+a)^x=x^{x+a}\Rightarrow x\ln(x+a)=(x+a)\ln x\Rightarrow \frac{\ln(x+a)}{x+a}=\frac{\ln x}{x}.$$</p>
<p>Then, let $f(x)=\frac{\ln x}{x}$. One has $f'(x)=0\iff x=e$, and considering the graph of $y=f(x)$ should give you the answer.</p>
|
1,285,443 | <blockquote>
<p>Let us denote the solution of the equation</p>
<p>$$(x+a)^{x+a}=x^{x+2a}$$</p>
<p>by $X_a$.</p>
<p>($a$ is a non-zero real number)</p>
<p>Prove that:</p>
<p>$$\lim_ {a \to 0} X_a = e$$</p>
</blockquote>
<p>This is something that I noticed while making numerical experiments for another problem. The statement looks interesting, I couldn't find anything close to it on the internet. I don't have the idea how to prove it, but numerical methods confirm the statement.</p>
| Claude Leibovici | 82,404 | <p>If $a$ is small, a first order Taylor expansion built at $a=0$ gives $$(x+a)^{x+a}-x^{x+2a}=-a\, x^x\, \big(\log (x)-1\big)+O\left(a^2\right)$$ Do you think that this is sufficient to prove that $$\lim_ {a \to 0} X_a = e$$</p>
<p>Another way is to start from zoli's answer $$\frac{(x+a)\log(x+a)-x\log(x)}{a}=2\log(x)$$ and expand the lhs as a Taylor series at $a=0$, which gives $$(\log (x)+1)+\frac{a}{2 x}+O\left(a^2\right)\approx 2\log(x)$$ that is to say $$\frac{a}{2 x}-\log (x)+1\approx 0$$ and the solution of the resulting equation is $$x\approx \frac{a}{2 W\left(\frac{a}{2 e}\right)}=e+\frac{a}{2}-\frac{a^2}{8 e}+O\left(a^3\right)$$ where the Lambert function appears once more.</p>
<p><strong>Edit</strong></p>
<p>What can be surprising, once the answer is known, is to write $$x=e+\sum_{i=1}^\infty \alpha_i a^i$$ and to expand the equation around $a=0$. For the first terms, we have $$x=e+\frac{a}{2}-\frac{7 a^2}{24 e}+\frac{a^3}{3 e^2}+O\left(a^4\right)$$ For example, for $a=1$ this approximation gives $x\approx 3.15610$, while the rigorous solution is $x\approx 3.14104$.</p>
|
3,433,492 | <p>I know that a function can admit multiple series representations (according to Eugene Catalan), but I wonder if there is a proof of the fact that each analytic function has a unique Taylor series representation. I know that Taylor series are defined by derivatives of increasing order. A function has one and only one derivative of each order. So can this fact be employed to prove that each function has only one Taylor series representation?</p>
| Calvin Khor | 80,734 | <p>Well, it's possible to have e.g. <span class="math-container">$f(x) = \sum a_n x^n = \sum b_n (x-1)^n$</span> simultaneously, but that probably isn't what you meant. Instead, let's just consider the behavior at one point, say expanding around <span class="math-container">$x=0$</span>.</p>
<p>Let's fix notation:</p>
<blockquote>
<p>A "power series" (at <span class="math-container">$x=0$</span>) is any series formally defined by <span class="math-container">$\sum_{n=0}^\infty a_n x^n$</span>. A "Taylor series" (at <span class="math-container">$x=0$</span>) for a smooth (i.e. <span class="math-container">$C^\infty$</span>) function <span class="math-container">$f$</span> is the power series formally defined by <span class="math-container">$\sum_{n=0}^\infty \frac{f^{(n)}(0)}{n!} x^n$</span>.</p>
</blockquote>
<p>So any function that is infinitely differentiable (at <span class="math-container">$x=0$</span>) has a unique Taylor series at 0 [note that the Taylor series may not converge, and if it converges, it may not converge to <span class="math-container">$f$</span>]. But I think you are trying to ask if any "analytic function" (a term I haven't defined yet) is equal at each point to a unique power series, which is the Taylor series.
You can first prove the following result, which allows you to define the concept of "analytic functions":</p>
<blockquote>
<p><strong>Theorem 1.</strong> Any power series <span class="math-container">$\sum_{n=0}^\infty a_n x^n $</span> that converges at one <span class="math-container">$x_0$</span> where <span class="math-container">$|x_0|=\rho>0$</span>, converges absolutely and locally uniformly on the set <span class="math-container">$|x|<\rho $</span>, where it defines a <span class="math-container">$C^\infty$</span> function <span class="math-container">$F(x) := \sum_{n=0}^\infty a_n x^n$</span>, and <span class="math-container">$ a_n = \frac{F^{(n)}(0) }{n!}$</span>.</p>
<p>In particular, the power series is the Taylor series of <span class="math-container">$F$</span>. An "analytic function" (near <span class="math-container">$x=0$</span>) is defined to be any such function <span class="math-container">$F$</span> that can be obtained in this way (i.e. an analytic function is a <span class="math-container">$C^\infty$</span> function locally equal to a convergent power series, its Taylor series.)</p>
</blockquote>
<p>Suppose now that we have <span class="math-container">$\lim_{N\to\infty}\sum_{n=0}^N a_n x^n = 0$</span> for <span class="math-container">$|x|<r$</span>. Then I claim that <span class="math-container">$a_n = 0$</span> for all <span class="math-container">$n$</span>, proving the uniqueness of convergent power series for <span class="math-container">$f(x) = 0$</span>. This immediately follows from <strong>Theorem 1</strong> above, which allows us to talk of the function <span class="math-container">$F(x) := \sum_{n=0}^\infty a_n x^n$</span>. But by hypothesis, <span class="math-container">$F$</span> is actually the zero function, so we have <span class="math-container">$a_n = \frac{F^{(n)}(0) }{n!} = 0$</span>.</p>
<p>This implies the uniqueness of convergent power series (at <span class="math-container">$0$</span>) for any analytic function; for if there were two different ones, their difference would be a nonzero convergent power series equal to 0, which doesn't exist.</p>
<p>I'll sketch the proof of the main result (<strong>Theorem 1</strong>). We have convergence at <span class="math-container">$x=x_0$</span> where <span class="math-container">$|x_0|=\rho$</span>. Let <span class="math-container">$0<r<\rho$</span>. Then note that we have (from <span class="math-container">$\sum_{n=0}^\infty d_n $</span> exists implies <span class="math-container">$ d_n \to 0$</span>) <span class="math-container">$$a_n x_0^n \xrightarrow[n\to\infty]{} 0 \implies |a_n| |x_0|^n = |a_n|\rho^n \xrightarrow[n\to\infty]{} 0.$$</span> In particular there exists <span class="math-container">$M>0$</span> such that <span class="math-container">$|a_n| \rho^n < M$</span> for all <span class="math-container">$n$</span>. Therefore for any <span class="math-container">$x$</span> such that <span class="math-container">$|x|\le r$</span>, by Geometric Series formula, since <span class="math-container">$\left(\frac r{\rho} \right)<1$</span>,
<span class="math-container">$$ |a_n x|^n \le |a_n | r^n = |a_n | \rho^n \left(\frac r{\rho} \right)^n \le M \left(\frac r{\rho} \right)^n, \quad\sum_{n=0}^\infty M \left(\frac r{\rho} \right)^n < \infty. $$</span>
So by the Weierstrass M-test, in fact the series converges absolutely and uniformly (and therefore pointwise) on the closed disk <span class="math-container">$|x|\le r$</span>. It therefore defines a function, which we call <span class="math-container">$F(x)$</span>.</p>
<p>If the series can be differentiated term-by-term, then a standard induction argument proves that <span class="math-container">$a_n = F^{(n)}(0)/n!$</span>. Formally differentiating once, we formally obtain the series <span class="math-container">$\sum_{n=1}^\infty n a_n x^{n-1} = \sum_{n=0}^\infty (n+1) a_{n+1} x^n$</span>. Now note that for <span class="math-container">$|x|\le r<\rho$</span>,
<span class="math-container">$$ |(n+1) |a_{n+1}| x^{n}| \le (n+1) |a_{n+1}| r^{n} \le (n+1) M \left(\frac{r}{\rho}\right)^n \le CM \left(\sqrt{\frac{r}{\rho}}\right)^{n}, \\ \sum_{n=0}^\infty CM \left(\sqrt{\frac{r}{\rho}}\right)^{n} < \infty$$</span>
since there exists <span class="math-container">$C>0$</span> such that <span class="math-container">$n+1 < C \left(\frac{\rho}r\right)^{n/2}$</span> for all <span class="math-container">$n$</span>. By the Weierstrass M-test, the formal series obtained by term-by-term differentiation converges absolutely and uniformly to some function <span class="math-container">$G$</span> on <span class="math-container">$|x|\le r$</span>, which implies that <span class="math-container">$F$</span> is differentiable with <span class="math-container">$F'=G$</span>. This argument is repeatable (using instead <span class="math-container">$n^k < C_k \left(\frac{\rho}r\right)^{n/2}$</span>), proving by induction that <span class="math-container">$F$</span> is <span class="math-container">$C^\infty$</span>, and validating the result <span class="math-container">$a_n = F^{(n)}(0)/n!$</span>.
|
120,687 | <p>Consider the following code</p>
<pre><code>styles = {Red, Blue, {Red, Dashed}, {Blue, Dashed}}
pt1 = Plot[{x^2, 2 x^2, 1/x^2, 2/x^2}, {x, 0, 3}, Frame -> True,
PlotStyle -> styles, PlotLegends -> {"1", "2", "1", "2"}]
</code></pre>
<p>I would like the two red lines to carry the same label "1" and the two blue lines the same label "2". That is, in the legend I would like a red line and a red-dashed line below each other and then one label right of it. Similarly for the blue lines. Does anybody know how to do this?</p>
| Anton Antonov | 34,008 | <p>This answer shows how to define <a href="https://mathematica.stackexchange.com/q/118324/34008">a new <code>NIntegrate</code> rule</a> that evaluates <code>f</code> in the list of two integrands <code>{f[x],g[f[x]]+h[x]}</code> only once per sampling point. The answer can be also easily modified into an answer of <a href="https://mathematica.stackexchange.com/q/77589/34008">"NIntegrate over a list of functions"</a>.</p>
<p>The definition of the <code>NIntegrate</code> rule <code>LessEvaluationsRule</code> given below is also meant to be didactic and conceptually simple. The design (options and use of the plug-in mechanism) can be made more robust.</p>
<h2>How to use</h2>
<p>In order to compute the integrals in the standard <code>NIntegrate</code> call:</p>
<pre><code>NIntegrate[{Sin[x], Sqrt[Sin[x]] + x^2}, {x, 0, 1}]
(* {0.459698, 0.976311} *)
</code></pre>
<p>we specify the function of one argument <code>Sin</code> and the function of two arguments <code>(Sqrt[#2] + #1^2 &)</code>:</p>
<pre><code>NIntegrate[1, {x, 0, 1},
Method -> {"GlobalAdaptive", "SingularityHandler" -> None,
Method -> {LessEvaluationsRule, "BaseFunction" -> Sin,
"DependantFunction" -> (Sqrt[#2] + #1^2 &)}}, PrecisionGoal -> 6]
(* {0.459698, 0.976311} *)
</code></pre>
<p>Note that the rule definition below does not allow the (correct) application of <code>NIntegrate</code>'s singularity handling so it is prevented with <code>"SingularityHandler" -> None</code>.</p>
<h2>Comparison</h2>
<p>Another consequence of specifying the integrated functions as option values is that we cannot use <code>EvaluationMonitor</code>. We can see, though, that the timing is roughly halved. (Or just examine <code>IRuleEstimate</code> defined below.)</p>
<pre><code>In[214]:= RepeatedTiming[
NIntegrate[{Sin[x], Sqrt[Sin[x]] + x^2}, {x, 0, 1}, PrecisionGoal -> 4]]
Out[214]= {0.0062, {0.459698, 0.976315}}
In[215]:= RepeatedTiming[
NIntegrate[1, {x, 0, 1},
Method -> {"GlobalAdaptive", "SingularityHandler" -> None,
Method -> {LessEvaluationsRule, "BaseFunction" -> Sin,
"DependantFunction" -> (Sqrt[#2] + #1^2 &)}}, PrecisionGoal -> 4]
]
Out[215]= {0.0023, {0.459698, 0.976315}}
</code></pre>
<h2>Definition of <code>LessEvaluationsRule</code></h2>
<pre><code>Clear[LessEvaluationsRule];
Options[LessEvaluationsRule] = {"Method" -> "GaussKronrodRule",
"BaseFunction" -> Sin, "DependantFunction" -> (Sqrt[#2] + #1 &)};
LessEvaluationsRuleProperties = Part[Options[LessEvaluationsRule], All, 1];
LessEvaluationsRule /:
NIntegrate`InitializeIntegrationRule[LessEvaluationsRule, nfs_, ranges_,
ruleOpts_, allOpts_] :=
Module[{t, methodSpec, baseFunc, depFunc, pos, absc, weights, errweights},
t = NIntegrate`GetMethodOptionValues[LessEvaluationsRule,
LessEvaluationsRuleProperties, ruleOpts];
If[t === $Failed, Return[$Failed]];
{methodSpec, baseFunc, depFunc} = t;
t = NIntegrate`MOptionValue[methodSpec, nfs, ranges, allOpts];
If[t === $Failed, Return[$Failed]];
{absc, weights, errweights} = t[[1]];
LessEvaluationsRule[{absc, weights, errweights}, baseFunc, depFunc]
];
Clear[IRuleEstimate]
IRuleEstimate[f_, g_, {a_, b_}, {absc_, weights_, errweights_}] :=
Block[{fvals, integral1, error1, integral2, error2, xs},
xs = Rescale[absc, {0, 1}, {a, b}];
fvals = f /@ xs;
{integral1, error1} = (b - a) {fvals.weights, fvals.errweights};
fvals = g @@@ Transpose[{xs, fvals}];
 {integral2, error2} = (b - a) {fvals.weights, fvals.errweights};
 {integral1, integral2, Abs[error1], Abs[error2]}];
LessEvaluationsRule[{absc_, weights_, errweights_}, baseFunc_, depFunc_][
"ApproximateIntegral"[region_]] :=
Block[{a, b, fvals, f2expr, integral1, integral2, error1, error2},
{a, b} = region["Boundaries"][[1]];
(* Integrals calculation*)
{integral1, integral2, error1, error2} =
IRuleEstimate[baseFunc, depFunc, {a, b}, {absc, weights, errweights}];
(* Assuming 1D integrals *)
{{integral1, integral2},
Max[error1, error2], 1}
];
</code></pre>
|
12,927 | <p>The problem:</p>
<p><strong><em>Three poles standing at the points $A$, $B$ and $C$ subtend angles $\alpha$, $\beta$ and $\gamma$ respectively, at the circumcenter of $\Delta ABC$.If the heights of these poles are in arithmetic progression; then show that $\cot \alpha$, $\cot \beta$ and $\cot \gamma $ are in harmonic progression.</em></strong></p>
<p>Now, what I could not understand is the subtending-the-angle part: precisely, <strong>how does a pole standing at a point subtend an angle at another point?</strong> So what I am looking for is a proper explanation of the problem statement with a figure, since it has been troubling me for some time.</p>
<p>PS: I am not looking for the solution (as of now) or any hint regarding the solution; just a clear explanation will be appreciated.</p>
<hr>
<p>My solution using <a href="https://math.stackexchange.com/users/1102/moron">Moron</a>'s <a href="https://math.stackexchange.com/questions/12927/help-to-understanding-the-problem/12933#12933">interpretation</a>,</p>
<p>Let $a$, $b$ and $c$ be the heights of the three poles and $O$ be the circumcenter; then, $$ \cot \alpha = \frac{OA}{a}$$ $$ \cot \beta = \frac{OB}{b}$$</p>
<p>$$ \cot \gamma = \frac{OC}{c}$$</p>
<p>As $O$ is the circumcenter,$OA = OB = OC = k $(say) </p>
<p>Again, $a$, $b$ and $c$ are in arithmetic progression, hence</p>
<p>$$2 \cdot \frac{k}{\cot \beta} = \frac{k}{\cot \alpha} + \frac{k}{\cot \gamma}$$</p>
<p>Canceling $k$ from both sides,</p>
<p>$$2 \cdot \frac{1}{\cot \beta} = \frac{1}{\cot \alpha} + \frac{1}{\cot \gamma}$$</p>
<p>Hence, $\cot \alpha$, $\cot \beta$ and $\cot \gamma $ are in harmonic progression. (<strong>QED</strong>)</p>
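This computation is easy to sanity-check numerically. A Python sketch (the circumradius $k$ and the heights are arbitrary illustrative values):

```python
import math

k = 5.0                  # common distance OA = OB = OC from the circumcenter
a, b, c = 2.0, 3.0, 4.0  # pole heights in arithmetic progression: 2b = a + c

# each pole subtends an angle at the circumcenter with cot(angle) = k / height
alpha, beta, gamma = (math.atan2(h, k) for h in (a, b, c))
cot = lambda t: 1.0 / math.tan(t)

# harmonic progression: the reciprocals of the cotangents are in AP
lhs = 2.0 / cot(beta)
rhs = 1.0 / cot(alpha) + 1.0 / cot(gamma)
print(lhs, rhs)
```

The two printed values agree, confirming that the reciprocals of the cotangents form an arithmetic progression, i.e. the cotangents are in harmonic progression.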
| Aryabhata | 1,102 | <p>My guess is the poles are of (different) heights $h_A$, $h_B$, $h_C$ and the angle is from the foot of pole to the circumcentre of $\triangle ABC$ to the top of the pole.</p>
|
761,823 | <blockquote>
<p>Suppose that $G$ is a finite abelian group that does not contain a subgroup isomorphic to $\mathbb Z_p\oplus\mathbb Z_p$ for any prime $p$. Prove that $G$ is cyclic.</p>
</blockquote>
<p><strong>Attempt</strong>: If $G$ is a finite abelian group, then let $H$ be any subgroup of $G$</p>
<p>It's given that $H \not\simeq \mathbb{Z}_p \oplus \mathbb{Z}_p$, which can be the case for a variety of reasons: $|H|$ may not be $p^2$, $H$ may not contain any element of order $p$, etc.</p>
<p>Hence the process of finding an element $g$ such that $|g| = |G|$ seems difficult, so probably the best bet would be to first assume that $G$ is not cyclic.</p>
<p>Hence, $O(g) \neq |G| ~\forall~ g \in G$.</p>
<p>Also, $G \not\simeq Z_{|G|}$ since $G$ is not cyclic.</p>
<p>How do I arrive at a contradiction from here, i.e., show that if $G$ is not cyclic, it must contain a subgroup $H \simeq \mathbb Z_p \oplus \mathbb Z_p$?</p>
<p>Please note that this question occurs in Gallian before normal and factor groups are introduced.</p>
<p>Thank you for help.</p>
| egreg | 62,967 | <p>Hint: decompose $G$ into the direct sum of its primary components; then examine each primary component and deduce it's cyclic because it has a minimum nonzero subgroup (that is, the intersection of its nonzero subgroups is nonzero).</p>
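The contrapositive can be illustrated by brute force for $p = 3$: the non-cyclic group $\mathbb Z_3\oplus\mathbb Z_3$ contains $p+1 = 4$ subgroups of order $3$, while the cyclic group $\mathbb Z_9$ contains exactly one. An illustrative Python sketch (the helper function below is ad hoc, not a standard library routine):

```python
def cyclic_subgroups_of_order(p, elements, add, zero):
    """Subgroups generated by elements of order p in a small abelian group."""
    subs = set()
    for g in elements:
        orbit, x = [], g          # multiples g, 2g, 3g, ... until we hit zero
        while x != zero:
            orbit.append(x)
            x = add(x, g)
        if len(orbit) + 1 == p:   # g has order exactly p
            subs.add(frozenset(orbit + [zero]))
    return subs

p = 3
z3z3 = [(i, j) for i in range(3) for j in range(3)]
add_pair = lambda a, b: ((a[0] + b[0]) % 3, (a[1] + b[1]) % 3)
z9 = list(range(9))
add_mod9 = lambda a, b: (a + b) % 9

n1 = len(cyclic_subgroups_of_order(p, z3z3, add_pair, (0, 0)))
n2 = len(cyclic_subgroups_of_order(p, z9, add_mod9, 0))
print(n1, n2)  # 4 subgroups of order 3 in Z_3 + Z_3, but only 1 in Z_9
```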
|
295,545 | <p>The following figure depicts the paths from home to work. SAM never travels through the park when going to work.</p>
<p><img src="https://i.stack.imgur.com/IANqM.png" alt="enter image description here"></p>
| anegligibleperson | 17,248 | <p>You know the plane passes through the origin, and you need two orthogonal vectors that span this plane. Do you see it now?</p>
|
7,981 | <p>I've read so much about it but none of it makes a lot of sense. Also, what's so unsolvable about it?</p>
| Roupam Ghosh | 2,320 | <p>A direct translation of RH (the Riemann Hypothesis) would be very baffling in layman's terms. But there are many problems that are equivalent to RH, and hence defining them is actually an indirect way of stating RH. Some of the equivalent forms of RH are much easier to understand than RH itself. I give what I think is the easiest equivalent form that I have encountered:</p>
<blockquote>
  <p>The Riemann hypothesis is equivalent to the statement that an integer has an equal probability of having an odd number or an even number of distinct prime factors. (Borwein p. 46)</p>
</blockquote>
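The precise statement behind this heuristic is usually phrased via the Liouville function $\lambda(n)=(-1)^{\Omega(n)}$, where $\Omega(n)$ counts prime factors with multiplicity; RH is equivalent to the partial sums $L(N)=\sum_{n\le N}\lambda(n)$ being $O(N^{1/2+\varepsilon})$. A small Python sketch of these partial sums:

```python
def big_omega(n):
    """Number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    if n > 1:
        count += 1
    return count

# summatory Liouville function L(N) = sum over n <= N of (-1)^Omega(n)
N = 1000
L, partial = 0, []
for n in range(1, N + 1):
    L += (-1) ** big_omega(n)
    partial.append(L)

# in this range L(N) <= 0 for every N >= 2 (Polya's observation; the
# first failure is only near N ~ 9.06 * 10^8)
print(min(partial[1:]), max(partial[1:]))
```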
|
3,615,117 | <p>I want to find the intersection of the sphere <span class="math-container">$x^2+y^2+z^2 = 1$</span> and the plane <span class="math-container">$x+y+z=0$</span>. </p>
<p>Substituting <span class="math-container">$z=-(x+y)$</span> gives <span class="math-container">$x^2+y^2+xy= \frac 12$</span>.</p>
<p>How do I represent this in the standard form of ellipse? Any help is appreciated to proceed further. Thanks in advance.</p>
| lab bhattacharjee | 33,337 | <p><span class="math-container">$$\dfrac12=\left(x+\dfrac y2\right)^2+y^2\left(1-\dfrac14\right)$$</span></p>
<p>Let <span class="math-container">$x+\dfrac y2=X, y=Y$</span></p>
<p>Alternately, use <a href="https://en.wikipedia.org/wiki/Rotation_of_axes#Rotation_of_conic_sections" rel="nofollow noreferrer">this</a> to eliminate <span class="math-container">$xy$</span></p>
<p>Also, from the link,</p>
<p>(the coefficient of <span class="math-container">$xy)^2-4\cdot($</span>the coefficient of <span class="math-container">$x^2)\cdot($</span>the coefficient of <span class="math-container">$y^2)$</span></p>
<p><span class="math-container">$$=(1)^2-4\cdot1\cdot1<0$$</span></p>
<p>Hence it's an ellipse</p>
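Both steps are easy to check numerically; an illustrative Python sketch (the sample points are arbitrary):

```python
# completing the square: x^2 + y^2 + xy == (x + y/2)^2 + (3/4) y^2
samples = [(0.3, -1.2), (1.0, 2.0), (-0.7, 0.4)]
for x, y in samples:
    lhs = x * x + y * y + x * y
    rhs = (x + y / 2) ** 2 + 0.75 * y * y
    assert abs(lhs - rhs) < 1e-12

# conic discriminant B^2 - 4AC for x^2 + xy + y^2 = 1/2
A, B, C = 1, 1, 1
disc = B * B - 4 * A * C
print(disc)  # -3, negative, so the conic is an ellipse
```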
|
1,407,641 | <p>If $T$ is a linear transformation and is said to be one-to-one or onto, this only makes sense when we specify what the domain and codomain are, right?
$T: V \rightarrow V$ may not be onto or one-to-one,
but $T: V \rightarrow Im(T)$ is certainly onto and may or may not be one-to-one.
Is this right?</p>
| Ben Grossmann | 81,360 | <p>Yes, you are correct. We can "make" a linear transformation onto by restricting the codomain to the image of the transformation.</p>
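A concrete illustration with the projection $T(x,y) = (x, 0)$ on $\mathbb R^2$ (an arbitrary toy example, sketched in Python over a small sample grid):

```python
# T(x, y) = (x, 0): not onto R^2 and not one-to-one, but onto its image (the x-axis)
T = lambda v: (v[0], 0.0)

grid = [(x, y) for x in (-1.0, 0.0, 2.0) for y in (-1.0, 0.0, 3.0)]
image = {T(v) for v in grid}

# not one-to-one: two different inputs share the same output
assert T((2.0, -1.0)) == T((2.0, 3.0))

# not onto R^2: the point (0, 1) is never hit
assert (0.0, 1.0) not in image

# but restricted to the codomain Im(T), every point is hit by definition
assert all(any(T(v) == w for v in grid) for w in image)
print(sorted(image))
```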
|
2,040,041 | <p>I was able to think that the numerator will always be positive and will overpower the denominator as well. But couldn't proceed from there.</p>
| mathcounterexamples.net | 187,663 | <p>Using the mean-value (Lagrange) form of the remainder in <a href="https://en.m.wikipedia.org/wiki/Taylor%27s_theorem" rel="nofollow noreferrer">Taylor's theorem</a>, for $x>0$ you get the existence of $c \in (0,x)$ such that
$$e^x=1+x+\frac{x^2}{2}e^c.$$ Since $c<x$ gives $e^c<e^x$,
$$e^x = 1+x+\frac{x^2}{2}e^c <1+x+\frac{x^2}{2} e^x,$$ which is the desired result.</p>
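A quick numerical check of the resulting inequality at a few positive sample points (illustrative Python):

```python
import math

# e^x < 1 + x + (x^2 / 2) e^x for x > 0
for x in [0.1, 0.5, 1.0, 2.0, 5.0]:
    lhs = math.exp(x)
    rhs = 1 + x + (x * x / 2) * math.exp(x)
    assert lhs < rhs
print("inequality holds at all sampled points")
```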
|
3,086,758 | <p>I know that if <span class="math-container">$\mathbb{E}[X]=\mathbb{E}[X|Y] , \mathbb{E}[Y]=\mathbb{E}[Y|X]$</span>, <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> can be dependent, for example a ‘uniform’ distribution in a unit circle.
Now we add the variance, if
<span class="math-container">$$\mathbb{E}[X]=\mathbb{E}[X|Y], \mathbb{E}[Y]=\mathbb{E}[Y|X], $$</span><span class="math-container">$$Var(X)=Var(X|Y), Var(Y)=Var(Y|X).$$</span>
Say the expectation and variance of <span class="math-container">$X$</span> are both not affected by <span class="math-container">$Y$</span>, and vice versa, then must <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> be independent?
In this case I can not find a counterexample just like the uniform circle.</p>
<p>If they are independent, how to prove it? If not, is there a counterexample?</p>
<p>Thanks!</p>
| Dasherman | 177,453 | <p>Consider <span class="math-container">$X\sim N(0,1)$</span> and, conditionally on <span class="math-container">$X$</span>, let <span class="math-container">$Y\sim N(0,2)$</span> if <span class="math-container">$X>0$</span> and <span class="math-container">$Y\sim t_4$</span> otherwise. Then <span class="math-container">$X, Y$</span> are dependent, while <span class="math-container">$\mathbb{E}[Y|X]=\mathbb{E}[Y]=0$</span> and <span class="math-container">$Var(Y|X)=Var(Y)=2$</span> (the <span class="math-container">$t_4$</span> distribution has mean 0 and variance 2).</p>
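A simulation sketch of this construction (Python; the seed and sample size are arbitrary, and the tail count is one convenient way to exhibit the dependence, since $t_4$ has heavier tails than $N(0,2)$):

```python
import random, statistics

rng = random.Random(0)
n = 200_000
ys_pos, ys_neg = [], []   # Y-samples conditional on X > 0 and on X <= 0

for _ in range(n):
    x = rng.gauss(0, 1)
    if x > 0:
        y = rng.gauss(0, 2 ** 0.5)            # N(0, 2): std dev sqrt(2)
    else:
        # t_4 variate: N(0,1) / sqrt(chi^2_4 / 4)
        chi2 = sum(rng.gauss(0, 1) ** 2 for _ in range(4))
        y = rng.gauss(0, 1) / (chi2 / 4) ** 0.5
    (ys_pos if x > 0 else ys_neg).append(y)

# both conditional means are ~0; the conditional variances are ~2
print(statistics.mean(ys_pos), statistics.mean(ys_neg))

# dependence shows up in the tails: t_4 puts far more mass beyond |y| > 5
tail_pos = sum(abs(y) > 5 for y in ys_pos) / len(ys_pos)
tail_neg = sum(abs(y) > 5 for y in ys_neg) / len(ys_neg)
print(tail_pos, tail_neg)
```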
|
2,644,910 | <p>Ali Baba is trying to enter a cave. At the entrance, there is a drum with four openings, in each of which there is a pot with a herring inside. The herring may be lying with its tail up or down. Ali Baba can put his hands into any two
openings, feel the herrings, and put any one or both of them either tail up or tail down as he pleases. After this, the drum rotates and once it stops, Ali Baba cannot determine into which openings he put his hands before. The door to the cave will open as soon as the four herrings are either all tail up or tail down. What should Ali Baba do?<br>
This question is similar to a "binary" question, where I have to convert a series of 1s (up) and 0s (down) into all 1s or 0s, but I am not sure how to do that here with randomization.</p>
| fred goodman | 124,085 | <p>This should be a comment, but I had trouble with formatting in the comments. </p>
<p>Evidently, the same proof will work for $R/I \to R/J$ where $R$ is a PID. The more general problem was considered
<a href="https://mathoverflow.net/questions/31495/when-does-a-ring-surjection-imply-a-surjection-of-the-group-of-units">here</a> and
<a href="https://mathoverflow.net/q/32883">here</a>. </p>
<p>A nice generalization is that a surjection $f: R \to S$, where the rings are commutative Artinian, maps units onto units.</p>
|
2,274,736 | <p>I am finding particular subgroups of $Q_{12}$ and had a couple of questions about it.</p>
<p>$Q_{12}=\langle a,b:a^6=1,b^2=a^3,ba=a^{-1}b\rangle$</p>
<p>Firstly here is part of a solution I came across: </p>
<p>The first step is to establish the orders of the elements. So $1$ has order 1, $a^3$ has order 2, $a^2$ and $a^4$ have order 3, $a^ib$ has order 4 for all $i$, and $a$ and $a^{−1}$ have order 6. Therefore there is a unique subgroup of order 1, namely {1}, a unique subgroup of order 2, namely $\langle a^3 \rangle$ and
a unique subgroup of order 3, namely $\langle a^2\rangle$</p>
<p>How are the subgroups $\langle a^3 \rangle$ and $\langle a^2 \rangle$ identified straight away as unique? I understand that the fact that they are unique means they are normal and the fact they are normal means they are unique. But I wanted to know why they are identified as unique first hand in this example which then goes on to say because they are unique they are normal (I'm not interested in the other way round). Is it because they are cyclic?</p>
<p>Also the subgroups of order 4 are $\langle b \rangle$, $\langle ab \rangle$ and $\langle a^2b \rangle$. I'm struggling to see how because take $\langle b \rangle$, are the elements $\{1,b,a^3b\}$? but I thought a subgroup of order 4 had to have 4 elements?</p>
| Michael Hardy | 11,667 | <p>The notation $p\in(a,b)$ means $a<p<b$, and $p\in[a,b]$ means $a\le p \le b$, and $p\in(a,b]$ means $a<p\le b$ and $p\in[a,b)$ means $a\le p<b.$ The notation $p\in\{a,b\},$ on the other hand, means either $p=a$ or $p=b.$</p>
<p>You have
$$
L(p) ={n \choose x_1,x_2,x_3} (1 - 2p)^{x_1}\cdot p^{x_2+x_3}.
$$
Thus
$$
\ell(p) = \log L(p) = \text{constant} + x_1 \log(1-2p) + (x_2+x_3)\log p.
$$
Therefore
\begin{align}
\ell\,'(p) & = \frac{-2x_1}{1-2p} + \frac{x_2+x_3} p \\[10pt] & = \frac{(x_2+x_3) -2p(x_1+x_2+x_3)}{p(1-2p)} \quad \begin{cases} > 0 & \text{if } 0<p<\dfrac{x_2+x_3}{2(x_1+x_2+x_3)}, \\[10pt] < 0 & \text{if } \dfrac{x_2+x_3}{2(x_1+x_2+x_3)} < p < \dfrac12. \end{cases}
\end{align}</p>
<p>The problem ought to have said $p\in[0,1/2]$ rather than $p\in(0,1/2).$ The probability of an endpoint maximum is clearly more than $0$ and the MLE would be at an endpoint in that case.</p>
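The stationary point $\hat p = \dfrac{x_2+x_3}{2(x_1+x_2+x_3)}$ found by the sign analysis can be checked against a direct grid search (the counts below are arbitrary illustrative values):

```python
import math

x1, x2, x3 = 30, 12, 18          # observed multinomial counts
n = x1 + x2 + x3

def loglik(p):
    # log-likelihood up to the constant multinomial coefficient
    return x1 * math.log(1 - 2 * p) + (x2 + x3) * math.log(p)

p_hat = (x2 + x3) / (2 * n)      # stationary point from the sign analysis

# brute-force grid search over the open interval (0, 1/2)
grid = [i / 100000 for i in range(1, 50000)]
p_best = max(grid, key=loglik)
print(p_hat, p_best)             # both are 0.25 for these counts
```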
|
247,553 | <p>Let $f(x)=\binom{x}{2}+\binom{x}{4}+\cdots+\binom{x}{2u}$, where $u\in\mathbb{Z}^+$ and $\binom{x}{l}=\frac{x(x-1)\dots(x-l+1)}{l!}$ for all $l\in\mathbb{Z}^+$.
Then can we prove $f(x)$ is a convex function on $[0,+\infty)$?</p>
<p>Updates:</p>
<p>1) It was pointed out by @user44191 that, observing $\binom{x}{i}=\binom{x-1}{i}+\binom{x-1}{i-1}$, the question is equivalent to $\binom{x-1}{1}+\binom{x-1}{2}+\dots+\binom{x-1}{2u}$ is convex on $[0,+\infty)$.</p>
<p>2) Pointed out by @FedorPetrov @GeraldEdgar @H.H.Rugh:<br>
For $x<0$ each summand $\binom{x}{2i}$ is obviously convex, thus the question is equivalent to $f(x)$ is convex on $\mathbb{R}$.</p>
<p>3) Pointed out by @WłodzimierzHolsztyński:<br>
It has $(\Delta^2 f_u)(x) = 1+ f_{u-1}(x-2)$, where $(\Delta f_u)(x)=f_u(x)-f_u(x-1)$. Then we can conclude that $f(x)$ is discrete convex.</p>
| Pietro Majer | 6,101 | <p><em>(Here is a proof of the convexity of $\sum_{k=0}^n\binom{x}{k}$ on $[0,\infty)$
for any large $n$; with a bit more care the argument should work for all positive even integer $n$ on $[-1,\infty)$, which is the original problem)</em></p>
<p>We may consider the problem of showing the convexity of $S_n(x):=\sum_{k=0}^n\binom{x}{k}$ on $[-1,+\infty)$ for any positive even integer $n$, which is equivalent to the original, as observed. As also observed, since $S_n$ is smooth and each summand is convex for $x>n-1$, it is sufficient to prove the convexity on $[-1,n-1]$. For odd $n$, the convexity should be true only on $[0,+\infty)$ .</p>
<p>The sum $S_n$ is the $n$-th Taylor polynomial of the function $(1+t)^x$, centered at $t_0=0$ and evaluated at $t=1$. The corresponding remainder integral formula gives:</p>
<p>$$\sum_{k=0}^n\binom{x}{k}= 2^{x} - (n+1)\binom{x}{n+1}\int_0^1(1-t)^{n}(1+t)^{-n-1+x}dt.$$</p>
<p>So in order to prove convexity of the right-hand side on some interval $I=I_n$ we need to show the inequality</p>
<p>$$2^x(\log 2)^2 - \int_0^1(n+1)(1-t)^{n}(1+t)^{-n-1} \bigg[(1+t)^x\binom{x}{n+1}\bigg]''dt\ge0,$$</p>
<p>for $x\in I$ (here $'$ denotes the derivative with respect to $x$). Note that the integral weight $(n+1)(1-t)^{n}(1+t)^{-n-1}$ in front of the second derivative has mass less than $1$ (for a quick check: up to a sign, $ \int_0^1(1-t)^{n}(1+t)^{-n-1} dt$ is again an integral remainder of a Taylor expansion, namely of $\log(1+t)$, and in fact its value is exactly the remainder of the logarithmic series $\big|\log(2)-\sum_{k=1}^n(-1)^{k+1}/k\big|$, which is not larger than $1/(n+1)$.)
Therefore
$$\int_0^1(n+1)(1-t)^{n}(1+t)^{-n-1} \bigg[(1+t)^x\binom{x}{n+1}\bigg]''dt \le\sup_{0\le t\le 1} \bigg[(1+t)^x\binom{x}{n+1}\bigg]'' $$
$$=\sup_{0\le t\le 1} (1+t)^x\bigg[ \log^2(1+t)\binom{x}{n+1} +2\log(1+t) \binom{x}{n+1}'+\binom{x}{n+1}'' \bigg] $$
$$\le 2^x(1+\log 2)^2 \max\bigg\{\binom{x}{n+1}, \binom{x}{n+1}', \binom{x}{n+1}'' \bigg\}. $$</p>
<p>We can conclude that for a given positive integer $n$, $S_n(x)$ is convex for $x\in I$, provided the uniform norms of $\binom{x}{n+1}$ and of its first and second derivatives on the interval $I$ are uniformly less than $1/6$.
Since these uniform norms on $I=[0,n]$ converge to $0$ as $n\to\infty$, the convexity of $S_n(x)$ on $[0,\infty)$ follows for any large $n$.
(I do not have the relevant convergence bounds handy; they should be classical in polynomial interpolation. For even $n$, experiments suggest they are less than $1/6$ as soon as $n\ge8$.) </p>
<p>Also note that the above weight concentrates around $t=0$, which suggests to break the integral into the two intervals $[0,\tau]$ and $[\tau, 1]$, to be estimated separately; this gives a much better estimate, of course, and should be of use in the original problem for even $n$, which required the bound for $x\in[-1,0]$ too.</p>
|
247,553 | <p>Let $f(x)=\binom{x}{2}+\binom{x}{4}+\cdots+\binom{x}{2u}$, where $u\in\mathbb{Z}^+$ and $\binom{x}{l}=\frac{x(x-1)\dots(x-l+1)}{l!}$ for all $l\in\mathbb{Z}^+$.
Then can we prove $f(x)$ is a convex function on $[0,+\infty)$?</p>
<p>Updates:</p>
<p>1) It was pointed out by @user44191 that, observing $\binom{x}{i}=\binom{x-1}{i}+\binom{x-1}{i-1}$, the question is equivalent to $\binom{x-1}{1}+\binom{x-1}{2}+\dots+\binom{x-1}{2u}$ is convex on $[0,+\infty)$.</p>
<p>2) Pointed out by @FedorPetrov @GeraldEdgar @H.H.Rugh:<br>
For $x<0$ each summand $\binom{x}{2i}$ is obviously convex, thus the question is equivalent to $f(x)$ is convex on $\mathbb{R}$.</p>
<p>3) Pointed out by @WłodzimierzHolsztyński:<br>
It has $(\Delta^2 f_u)(x) = 1+ f_{u-1}(x-2)$, where $(\Delta f_u)(x)=f_u(x)-f_u(x-1)$. Then we can conclude that $f(x)$ is discrete convex.</p>
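The identity in update 3, $(\Delta^2 f_u)(x) = 1 + f_{u-1}(x-2)$, is straightforward to verify numerically; a Python sketch in which $\binom{x}{l}$ is implemented through its defining product:

```python
from functools import reduce

def binom(x, l):
    """Generalized binomial coefficient x(x-1)...(x-l+1) / l! for real x."""
    num = reduce(lambda acc, k: acc * (x - k), range(l), 1.0)
    fact = reduce(lambda acc, k: acc * k, range(1, l + 1), 1)
    return num / fact

def f(u, x):
    """f_u(x) = C(x,2) + C(x,4) + ... + C(x,2u); f_0 is the empty sum 0."""
    return sum(binom(x, 2 * i) for i in range(1, u + 1))

# second backward difference (Delta^2 f_u)(x) = f_u(x) - 2 f_u(x-1) + f_u(x-2)
for u in [1, 2, 3, 4]:
    for x in [-1.5, 0.0, 0.7, 3.2, 10.0]:
        d2 = f(u, x) - 2 * f(u, x - 1) + f(u, x - 2)
        assert abs(d2 - (1 + f(u - 1, x - 2))) < 1e-8
print("identity verified at all sampled points")
```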
| esg | 48,831 | <p>Here's an alternative proof based on probabilistic arguments (showing different aspects). Let
$$f_n(x):=\sum_{j=0}^n { x \choose j}=[t^n]\,\frac{(1+t)^x}{1-t}\;\;,$$
and let $^\prime$ denote the derivative with respect to $x$.</p>
<p>We have to show that for even $n=2k$ the second derivative $f_{2k}^{\prime\prime}(x)=[t^{2k}]
\frac{(1+t)^x}{1-t}\,(\log(1+t))^2$ is nonnegative.
The basic observation used below is that $g(t):=\frac{\log(1+t)}{t}$ is the Laplace transform (LT) of a nonnegative random variable possessing all moments.
The relation $g(t)=\int_0^1 \frac{1}{1+st}\,ds$ shows that $g$ is the LT of $U\cdot X_1$, where $U$ is uniform on $[0,1]$, $X_1$ is $\Gamma(1,1)=\exp(1)$ distributed
(i.e. has LT $\frac{1}{1+t}$), and the factors
are independent.</p>
<p><strong>Case 1</strong>:
$x<0$. In this case $(1+t)^x $ is the LT of $\Gamma(1, -x)$ . Thus $\ell(t):=(1+t)^x g(t)^2$ is the Laplace transform of a nonnegative rv
and (for $k\geq 1$) $f_{2k}^{\prime\prime}(x)=[t^{2k-2}]\, \frac{1}{1-t} \ell(t)$ is an even degree MacLaurin sum of $\ell$, evaluated at $1$,
and therefore exceeds $\ell(1)>0$.</p>
<p><strong>Case 2</strong>:
$x\geq 0$. Since $f_n(x+1)=f_n(x)+f_{n-1}(x)$ it will suffice to show that $f_n^{\prime\prime}(x)\geq 0$ for $x\in [0,1)$ and <strong>all</strong> $n$.</p>
<p>For $x=0$ <strong>all</strong> derivatives $f_n^{\prime\prime}(0)$ are nonnegative, since $a_r:=[t^r] (g(t))^2=2\,(-1)^r \frac{H_{r+1}}{r+2}$ ,
and $a_0=1>0$, $a_{2k}+a_{2k+1}\geq 0$.</p>
<p>Let $0<x<1$ and write $$(1+t)^x =1 +xt\,\frac{ (1+t)^x -1}{xt} =1 +xt\,h_x(t)$$ and accordingly
$$f_n^{\prime\prime}(x)=[t^{n-2}] \frac{1}{1-t}\left(1+ x t\,h_x(t)\right)g(t)^2=f_n^{\prime\prime}(0)+ x [t^{n-3}]\frac{h_x(t)g(t)^2}{1-t}\;\;.$$
Here $h_x$ is the LT of $U\cdot X_{1-x}$, where $U$ is uniform on $[0,1]$, $X_{1-x}$ is $\Gamma(1,1-x)$ (LT $(1+t)^{x-1}$) distributed, and the factors
are independent, thus $h_x(t)g(t)^2$ is again the LT of a random variable $Z$ posessing all moments.</p>
<p>(1) If $n$ is odd, $n-3$ is even and $f_n^{\prime\prime}(x)\geq f^{\prime\prime}(0)$, since the second term is nonnnegative by the same argument as above.</p>
<p>(2) If $n$ is even, $n-3$ is odd and the second term is the $Z$-expection of a decreasing function, and will therefore not increase if $Z$ is replaced by a stochastically
larger random variable. Replacing $U\cdot X_1$ for $U\cdot X_{1-x}$ in the first factor replaces $g$ for $h_x$, makes $Z$ stochastically larger and we get
$$f_n^{\prime\prime}(x)\geq f_n^{\prime\prime}(0)+ x [t^{n-3}]\frac{g(t)^3}{1-t}=f_n^{\prime\prime}(0)+x\,f_n^{\prime\prime\prime}(0)\;\;.$$
If $f_n^{\prime\prime\prime}(0)\geq 0$ we're done. If $f_n^{\prime\prime\prime}(0) <0$ it will suffice to show that
$f_n^{\prime\prime}(0)+f_n^{\prime\prime\prime}(0)\geq 0$. </p>
<p>This amounts to showing that the (even) partial sums of the coefficients
of $c(t):=(\log(1+t))^2 + (\log(1+t))^3 $ are nonnegative, and this can be done</p>
<p>EDIT: some details: the $n$-th coefficient $c_n=[t^n] c(t)$ of $c$ is $$c_n=\frac{(-1)^n}{n}\left(2 H_{n-1}-3(H_{n-1}^2-H_{n-1}^{(2)})\right)$$
Hence
$$c_n+c_{n+1}=\frac{(-1)^n}{n(n+1)}\left(-3 H_{n-1}^2 + 8 H_{n-1} + 3 H_{n-1}^{(2)} -2\right)$$
Thus (for $2k\geq 2$) the even partial sums fall until $2k=n+1=12$ and rise thereafter. Since $[t^{12}]\frac{c(t)}{1-t}=\frac{26647}{221760}>0.12$, all even partial sums are nonnegative.</p>
<p>Finally, note that this proof also shows that $f_n$ is convex on $[0,\infty)$ for odd $n$.</p>
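The coefficient computation in the last step can be reproduced with exact rational arithmetic; a Python sketch using the `fractions` module:

```python
from fractions import Fraction

M = 30  # truncation order

# MacLaurin coefficients of log(1+t): t - t^2/2 + t^3/3 - ...
log1p = [Fraction(0)] + [Fraction((-1) ** (k - 1), k) for k in range(1, M + 1)]

def mul(a, b):
    """Cauchy product of two power series truncated at order M."""
    c = [Fraction(0)] * (M + 1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(M + 1 - i):
                c[i + j] += ai * b[j]
    return c

sq = mul(log1p, log1p)                 # (log(1+t))^2
cube = mul(sq, log1p)                  # (log(1+t))^3
c = [sq[k] + cube[k] for k in range(M + 1)]

# partial sums [t^n] c(t)/(1-t); the even-index ones should all be >= 0
partial, s = [], Fraction(0)
for k in range(M + 1):
    s += c[k]
    partial.append(s)
print(partial[12])                     # the minimum even partial sum, at n = 12
```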
|
2,137,332 | <p>On the MathWorld page: </p>
<p><a href="http://mathworld.wolfram.com/FermatPseudoprime.html" rel="nofollow noreferrer">http://mathworld.wolfram.com/FermatPseudoprime.html</a></p>
<p>in the first table, I expect to see $561$ on every line, but it is not on the line for base $3$.</p>
<p>When you click on the link to the OEIS page, it also is missing from the list. Since $561$ is a Carmichael number, I expected it to be there. Is this a typo (and if so, how do I report it)? If not, what am I missing? Certainly $3^{561} \equiv 3 \pmod{561}$; is there a different definition of "Fermat pseudoprime" that leaves $561$ out?</p>
| Laray | 396,534 | <p>To get the mapping of the lines, you first have to describe the lines using parameters.</p>
<p>The line between $-1$ and $1$ is described by $\lambda \in [-1, 1]$ and can easily be mapped onto the positive real numbers up to $1$.
After that, you need to transform $\lambda \cdot 1 + (1-\lambda)\cdot i$ for $\lambda \in [0,1]$. This will tell you where to map the right diagonal line.</p>
<p>You can do similar things with the left diagonal line to see where the triangle is mapped to. (Keep in mind to determine whether you actually have the inner or outer part of your figure.)</p>
|
1,284,938 | <p>I was revising for one of my end of year maths exams, then I came across this example on how to find lines of tangents to ellipses outside the curve. Personally, I'd use differentiation and slopes to find such lines, but the lecturer does something simpler and more elegant.</p>
<p>The question is: "Find the equations of the lines through (1, 4) which are tangents to the ellipse $x^2 + 2y^2 = 6$"</p>
<p>And then we put the lines into the standard form, which comes out as $ y = mx − (m − 4)$ where $m$ is the slope of the line.</p>
<p>Then, the lecturer substitutes the equation we got into the original equation of the curve, which we get $x^2 + 2[mx − (m − 4)]^2 = 6$.</p>
<p>Now, the lecturer goes from the equation above to something I can't understand how to derive. With the explanation "We now look for repeated roots in the equation, as each tangent meets the line exactly once, we get":</p>
<p>$[4m(m − 4)]^2 − 4(1 + 2m^2)(2m^2 − 16m + 26) = 0$</p>
<p>Can you guys please help me understand how to get to the equation above? I tried using the quadratic formula, where $x$ has repeated roots (in other words, the discriminant is zero), but I still got something entirely different.</p>
<p>Thanks.</p>
| JJacquelin | 108,514 | <p>The solution of the PDE
$${\partial z \over \partial x}+(2e^x-y){\partial z \over \partial y}=0,$$
subject to the boundary condition $z(0,y)=y$, is
$$z(x,y)=e^x y-e^{2x}+1,$$
as shown below:</p>
<p><img src="https://i.stack.imgur.com/QRLtT.jpg" alt="enter image description here"></p>
<p>One can check the above solution by substituting $z=e^x y-e^{2x}+1$ back into the PDE. And it is easy to see that $z(0,y)=y$, as required.</p>
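The check can also be done numerically, using central finite differences for the partial derivatives (illustrative Python; the sample points are arbitrary):

```python
import math

def z(x, y):
    return math.exp(x) * y - math.exp(2 * x) + 1

h = 1e-6
for x, y in [(0.0, 1.0), (0.3, -2.0), (1.1, 0.5)]:
    zx = (z(x + h, y) - z(x - h, y)) / (2 * h)       # dz/dx
    zy = (z(x, y + h) - z(x, y - h)) / (2 * h)       # dz/dy
    residual = zx + (2 * math.exp(x) - y) * zy       # should vanish
    assert abs(residual) < 1e-5

# boundary condition z(0, y) = y
for y in [-3.0, 0.0, 2.5]:
    assert abs(z(0.0, y) - y) < 1e-12
print("PDE and boundary condition verified")
```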
|
23,911 | <p>I am teaching a course on Riemann Surfaces next term, and would <strong>like a list of facts illustrating the difference between the theory of real (differentiable) manifolds and the theory non-singular varieties</strong> (over, say, $\mathbb{C}$). I am looking for examples that would be meaningful to 2nd year US graduate students who has taken 1 year of topology and 1 semester of complex analysis.</p>
<p>Here are some examples that I thought of:</p>
<p><strong>1.</strong> Every $n$-dimensional real manifold embeds in $\mathbb{R}^{2n}$. By contrast, a projective variety does not embed in $\mathbb{A}^n$ for any $n$. Every $n$-dimensional non-singular, projective variety embeds in $\mathbb{P}^{2n+1}$, but there are non-singular, proper varieties that do not embed in any projective space.</p>
<p><strong>2.</strong> Suppose that $X$ is a real manifold and $f$ is a smooth function on an open subset $U$. Given $V \subset U$ compactly contained in $U$, there exists a global function $\tilde{g}$ that
agrees with $f$ on $V$ and is identically zero outside of $U$.</p>
<p>By contrast, consider the same set-up when $X$ is a non-singular variety and $f$ is a regular function. It may be impossible to find a global regular function $g$ that agrees with $f$ on $V$. When $g$ exists, it is unique and (when $f$ is non-zero) is not identically zero outside of $U$.</p>
<p><strong>3.</strong> If $X$ is a real manifold and $p \in X$ is a point, then the ring of germs at $p$ is non-noetherian. The local ring of a variety at a point is always noetherian. </p>
<p><em><strong>What are some more examples?</em></strong></p>
<p>Answers illustrating the difference between real manifolds and complex manifolds are also welcome.</p>
| Dan Zaffran | 2,109 | <p>Here is a list biased towards what is remarkable in the complex case. (To the potential peeved real manifold: I love you too.) By "complex" I mean holomorphic manifolds and holomorphic maps; by "real" I mean $\mathcal{C}^{\infty}$ manifolds and $\mathcal{C}^{\infty}$ maps. </p>
<ul>
<li><p>Consider a map $f$ between manifolds of <em>equal</em> dimension.
In the complex case: if $f$ is injective then it is an isomorphism onto its image. In the real case, $x\mapsto x^3$ is not invertible. </p></li>
<li><p>Consider a holomorphic $f: U-K \rightarrow \mathbb{C}$, where $U\subset \mathbb{C}^n$ is open and $K$ is a compact s.t. $U-K$ is connected. When $n\geq 2$, $f$ extends to $U$. This so-called Hartogs phenomenon has no counterpart in the real case. </p></li>
<li><p>If a complex manifold is compact or is a bounded open subset of $\mathbb{C}^n$, then its group of automorphisms is a Lie group. In the smooth case it is always infinite dimensional.</p></li>
<li><p>The space of sections of a vector bundle over a compact complex manifold is finite dimensional. In the real case it is always infinite dimensional. </p></li>
<li><p>To expand on Charles Staats's excellent answer: few smooth atlases happen to be holomorphic, but even fewer diffeomorphisms happen to be holomorphic. Considering manifolds up to isomorphism, the net result is that many complex manifolds come in continuous families, whereas real manifolds rarely do (in dimension other than $4$: a compact topological manifold has at most finitely many smooth structures; $\mathbb{R}^n$ has exactly one). </p></li>
</ul>
<p>On the theme of <em>zero subsets</em> (i.e., subsets defined locally by the vanishing of one or several functions):</p>
<ul>
<li><p><em>One</em> equation always defines a <em>codimension one</em> subset in the complex case, but
{$x_1^2+\dots+x_n^2=0$} is reduced to one point in $\mathbb{R}^n$. </p></li>
<li><p>In the complex case, a zero subset isn't necessarily a submanifold, but
is amenable to manifold theory by Hironaka desingularization. In the real case, <em>any</em> closed subset is a zero set.</p></li>
<li><p>The image of a proper map between two complex manifolds is a zero subset, so isn't too bad by the previous point. Such a direct image is hard to deal with in the real case. </p></li>
</ul>
|
23,911 | <p>I am teaching a course on Riemann Surfaces next term, and would <strong>like a list of facts illustrating the difference between the theory of real (differentiable) manifolds and the theory non-singular varieties</strong> (over, say, $\mathbb{C}$). I am looking for examples that would be meaningful to 2nd year US graduate students who has taken 1 year of topology and 1 semester of complex analysis.</p>
<p>Here are some examples that I thought of:</p>
<p><strong>1.</strong> Every $n$-dimensional real manifold embeds in $\mathbb{R}^{2n}$. By contrast, a projective variety does not embed in $\mathbb{A}^n$ for any $n$. Every $n$-dimensional non-singular, projective variety embeds in $\mathbb{P}^{2n+1}$, but there are non-singular, proper varieties that do not embed in any projective space.</p>
<p><strong>2.</strong> Suppose that $X$ is a real manifold and $f$ is a smooth function on an open subset $U$. Given $V \subset U$ compactly contained in $U$, there exists a global smooth function $g$ that
agrees with $f$ on $V$ and is identically zero outside of $U$.</p>
<p>By contrast, consider the same set-up when $X$ is a non-singular variety and $f$ is a regular function. It may be impossible to find a global regular function $g$ that agrees with $f$ on $V$. When $g$ exists, it is unique and (when $f$ is non-zero) is not identically zero outside of $U$.</p>
<p><strong>3.</strong> If $X$ is a real manifold and $p \in X$ is a point, then the ring of germs at $p$ is non-noetherian. The local ring of a variety at a point is always noetherian. </p>
<p><em><strong>What are some more examples?</strong></em></p>
<p>Answers illustrating the difference between real manifolds and complex manifolds are also welcome.</p>
| Community | -1 | <p>Some embedding statements.</p>
<p>A compact connected complex subvariety of ${\mathbb{C}}^n$ is a point. However, every compact real manifold of dimension $n$ can be realized as a submanifold of ${\mathbb{R}}^{2n}$.</p>
<p>There are compact complex manifolds that cannot be embedded into complex projective space. An example most often quoted in textbooks is the Hopf manifold, which is not even Kähler. On the other hand, I have heard that embedding into real projective space is not often considered in differential geometry.</p>
|
3,943,199 | <p>How do I put <span class="math-container">$(-\sqrt{3}-i)^{\frac{5}{7}}$</span> into polar form and find all of its roots?</p>
<p>What I tried:</p>
<p><span class="math-container">$$w = -\sqrt{3}-i$$</span>
<span class="math-container">$$\arg(w)=\arctan(\frac{1}{\sqrt 3})-\pi = \frac{\pi}{6} - \pi = \frac{-5\pi}{6}$$</span>
<span class="math-container">$$w = 2(\cos(\frac{-5\pi}{6})+ i\sin(\frac{-5\pi}{6})) $$</span>
<span class="math-container">$$w^5 = 2^5(\cos(-\frac{5\pi}{6}\cdot 5)+i\sin(-\frac{5\pi}{6}\cdot 5)) $$</span>
<span class="math-container">$$z^7 = w^5$$</span>
<span class="math-container">$$z = w^{\frac{5}{7}} $$</span>
<span class="math-container">$$z = 32^{\frac{1}{7}}(\cos(\frac{-5\pi\cdot 5}{6\cdot 7}+\frac{2k\pi}{7})+i\sin(\frac{-5\pi\cdot 5}{6\cdot 7}+\frac{2k\pi}{7})) $$</span>
Here I get stuck and I don't know how to continue...</p>
| Z Ahmed | 671,540 | <p><span class="math-container">$$e^{i\phi}=\cos \phi +i \sin \phi,~~ e^{-i\phi}=\cos \phi -i\sin \phi$$</span>
subtract the second from the first to get <span class="math-container">$$\sin \phi= \frac{e^{i\phi}-e^{-i\phi}}{2i}$$</span></p>
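To finish where the question stops: taking $k = 0, 1, \dots, 6$ in the last display yields the seven distinct roots. A quick numerical sanity check (my sketch, not part of either post; it uses the principal argument of $w^5$, which differs from the question's representative by a multiple of $\frac{2\pi}{7}$, so the set of roots is the same):

```python
import cmath

w = complex(-3 ** 0.5, -1)        # w = -sqrt(3) - i, so |w| = 2, arg(w) = -5*pi/6
target = w ** 5                   # we need every z with z^7 = w^5

r = abs(target) ** (1 / 7)        # modulus 32^(1/7), as in the question
theta = cmath.phase(target)       # a principal argument of w^5

# the seven roots, equally spaced by 2*pi/7 around the circle of radius r
roots = [r * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 7) for k in range(7)]

for z in roots:
    assert abs(z ** 7 - target) < 1e-9   # each root really satisfies z^7 = w^5
```

Any seventh root of unity times one root gives another, which is exactly what the $\frac{2k\pi}{7}$ term enumerates.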
|
1,424,273 | <p>Let $(a_n)$ be a convergent sequence of positive real numbers. Why is the limit nonnegative?</p>
<p>My try: For all $\epsilon >0$ there is an $N\in \mathbb{N}$ such that $|a_n-L|<\epsilon$ for all $n\ge N$. And we know $0< a_n$ for all $n\in \mathbb{N}$, in particular $0<a_n$ for all $n\ge N$. Maybe by contradiction: suppose that $L<0$; then $L<0<a_n$ for all $n\in \mathbb{N}$, in particular for all $n\ge N$. Then $0<-L<a_n-L$ for all $n\in \mathbb{N}$, in particular for all $n\ge N$. It follows that for all $\epsilon >0$ there is an $N\in \mathbb{N}$ such that $0<|-L|=-L<|a_n-L|<\epsilon$ for all $n\ge N$, which is impossible once $\epsilon \le -L$.</p>
<p>Is my proof ok?</p>
| user118494 | 118,494 | <p>See, you have $$L\lt 0\lt a_{n} \quad \text{for all } n\in \mathbb N.$$ Now, take $\epsilon={{|L|}\over {2}}$ and look at the $\epsilon$-neighborhood of $L$. It contains no positive number at all, let alone any element of the sequence $\{a_{n}\}$. So this contradicts the fact that $L$ is the limit of the sequence $\{a_{n}\}$. </p>
<p>Hence $L$ must be non-negative.</p>
|
4,080,776 | <p>I am doing an individual study of an abstract algebra for number theory course online. I just started, so I hope my question does not come off as too trivial. The lecture notes state that the ring of <span class="math-container">$p$</span>-adic integers does not have a ring endomorphism.</p>
<h3>Questions:</h3>
<p><strong>1.</strong> Does not the identity mapping work as a counterexample?</p>
<p>Then, assuming they meant: "no endomorphism except the trivial case", so the entire thing is not just a mistake:</p>
<p><strong>2.</strong> I still cannot convince myself that there is no other ring endomorphism of <span class="math-container">$p$</span>-adic integers. Could you please give me a hint how to prove it or point me to literature where such a proof is shown?</p>
| roxas3582 | 596,046 | <p>For your reference, there is a more general perspective from the theory of "Witt vectors". Throughout, we let <span class="math-container">$p$</span> denote a fixed prime and <span class="math-container">$\mathbb{F}_p$</span> the finite field with <span class="math-container">$p$</span> elements.</p>
<p>The p-adic integers <span class="math-container">$\mathbb{Z}_p$</span> are an example of a strict <span class="math-container">$p$</span>-ring, which is a ring <span class="math-container">$A$</span> such that</p>
<ul>
<li><span class="math-container">$p$</span> is not a zero divisor in <span class="math-container">$A$</span>,</li>
<li><span class="math-container">$A$</span> is <span class="math-container">$p$</span>-adically complete and Hausdorff, and</li>
<li><span class="math-container">$A/(p)$</span> is a perfect ring (for example, a finite field).</li>
</ul>
<p>Then we have the following theorem: if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are strict <span class="math-container">$p$</span>-rings, then there is a natural "mod <span class="math-container">$p$</span>" bijection between ring homomorphisms <span class="math-container">$A \to B$</span> and ring homomorphisms <span class="math-container">$A/(p) \to B/(p)$</span> (see theorem 1.2 in <a href="https://arxiv.org/abs/1409.7445" rel="nofollow noreferrer">https://arxiv.org/abs/1409.7445</a>, which is an expository paper on Witt vectors, and/or Chapter II of Serre's <em>Local Fields</em>). Taking <span class="math-container">$A = \mathbb{Z}_p = B$</span>, we have a bijection between endomorphisms of <span class="math-container">$\mathbb{Z}_p$</span> and endomorphisms of <span class="math-container">$\mathbb{Z}_p/(p) = \mathbb{F}_p$</span>. The only ring endomorphism of <span class="math-container">$\mathbb{F}_p$</span> is the identity, so the same holds for <span class="math-container">$\mathbb{Z}_p$</span>.</p>
|
351,870 | <p>Let <span class="math-container">$(X_n)$</span> be a sequence of <span class="math-container">$\mathbb{R}^d$</span>-valued random variables converging in distribution to some limiting random variable <span class="math-container">$X$</span> whose CDF is absolutely continuous with respect to the Lebesgue measure.</p>
<p>Does it follow that <span class="math-container">$X_n$</span> converges to <span class="math-container">$X$</span> in convex distance, i.e. that</p>
<p><span class="math-container">$$\sup_{h} \lvert \operatorname{E}(h(X)) - \operatorname{E}(h(X_n)) \rvert \to 0,$$</span></p>
<p>where the supremum is taken over all indicator functions of measurable convex subsets of <span class="math-container">$\mathbb{R}^d$</span>, if necessary assuming (absolute) continuity of the CDFs of the <span class="math-container">$X_n$</span> as well?</p>
<p><strong>Remark 1:</strong>
For <span class="math-container">$d=1$</span>, the implication is true and can be proven by Polya's theorem (convergence in law of real valued random variables towards a limit with continuous CDF implies uniform convergence of the CDF). Is it still true for <span class="math-container">$d \geq 2$</span>?</p>
<p><strong>Remark 2:</strong>
If absolute continuity is replaced by continuity the conclusion is false, see <a href="https://mathoverflow.net/questions/351597/does-convergence-in-law-imply-convergence-in-convex-distance">here</a></p>
| Mateusz Kwaśnicki | 108,637 | <p>What is essential here is that the distribution of <span class="math-container">$X$</span> assigns little mass to sets which are essentially <span class="math-container">$(d-1)$</span>-dimensional.</p>
<hr>
<p>The standard approach to problems of this kind is to estimate
<span class="math-container">$$ \operatorname{P}(X_n \in K) - \operatorname{P}(X \in K) $$</span>
from above by
<span class="math-container">$$ \operatorname{E}(g(X_n)) - \operatorname{E}(f(X)) , $$</span>
and from below by
<span class="math-container">$$ \operatorname{E}(f(X_n)) - \operatorname{E}(g(X)) , $$</span>
where <span class="math-container">$0 \leqslant f \leqslant \mathbb{1}_K \leqslant g \leqslant 1$</span>, and <span class="math-container">$f$</span> and <span class="math-container">$g$</span> are continuous. In the <span class="math-container">$k$</span>-th step we choose <span class="math-container">$f$</span> and <span class="math-container">$g$</span> in such a way that <span class="math-container">$\operatorname{E}(g(X) - f(X)) < \tfrac{1}{2 k}$</span>. Convergence of <span class="math-container">$\operatorname{E}(f(X_n))$</span> to <span class="math-container">$\operatorname{E}(f(X))$</span> and convergence of <span class="math-container">$\operatorname{E}(g(X_n))$</span> to <span class="math-container">$\operatorname{E}(g(X))$</span> imply that
<span class="math-container">$$ -\tfrac{1}{k} \leqslant \operatorname{P}(X_n \in K) - \operatorname{P}(X \in K) \leqslant \tfrac{1}{k} $$</span>
for all <span class="math-container">$n$</span> large enough.</p>
<p>This works as expected, i.e. leads to convergence of <span class="math-container">$\operatorname{P}(X_n \in K)$</span> to <span class="math-container">$\operatorname{P}(X \in K)$</span>, if (and only if) <span class="math-container">$\operatorname{P}(X \in \partial K) = 0$</span>: then (and only then) it is possible to choose <span class="math-container">$f$</span> and <span class="math-container">$g$</span> with the desired property.</p>
<hr>
<p>Now in order to get <em>uniform</em> convergence for a class of sets <span class="math-container">$K$</span> — here the class of convex subsets of <span class="math-container">$\mathbb{R}^d$</span> — in every step <span class="math-container">$k$</span> we should choose <span class="math-container">$f$</span> and <span class="math-container">$g$</span> from a fixed <em>finite-dimensional</em> set of functions (which, of course, can well depend on <span class="math-container">$k$</span>). One solution for the class of convex <span class="math-container">$K$</span> is as follows.</p>
<p>By tightness, there is <span class="math-container">$R > 1$</span> such that <span class="math-container">$\operatorname{P}(|X_n| > R) < \tfrac{1}{4 k}$</span> uniformly in <span class="math-container">$n$</span>. We will choose (small) <span class="math-container">$\delta \in (0, 1)$</span> at a later stage. We cover the ball <span class="math-container">$\overline{B}(0, R)$</span> using <span class="math-container">$d$</span>-dimensional cubes <span class="math-container">$Q_j$</span> with edge length <span class="math-container">$\delta$</span> and vertices at lattice points <span class="math-container">$(\delta \mathbb{Z})^d$</span>. To be specific, suppose that the cubes <span class="math-container">$Q_j$</span> are open sets. Let <span class="math-container">$h_j = \mathbb{1}_{Q_j}$</span> be the indicator of <span class="math-container">$Q_j$</span>. Then
<span class="math-container">$$h_1 + \ldots + h_J = 1 \quad \text{a.e. on $\overline{B}(0, R)$}$$</span>
(more precisely: everywhere on <span class="math-container">$\overline{B}(0, R)$</span>, except possibly on the faces of cubes <span class="math-container">$Q_j$</span>). We add <span class="math-container">$h_0 = 1 - \sum_{j = 1}^J h_j$</span> to this collection. Observe that <span class="math-container">$h_0 = 0$</span> a.e. on <span class="math-container">$\overline{B}(0, R)$</span>, and hence <span class="math-container">$\operatorname{E}(h_0(X)) \leqslant \operatorname{P}(|X| > R) \leqslant \tfrac{1}{4 k}$</span> (the latter by the portmanteau theorem applied to the open set <span class="math-container">$\{|x| > R\}$</span>). Furthermore, by the first part of this answer, we already know that <span class="math-container">$\operatorname{E}(h_j(X_n)) = \operatorname{P}(X_n \in Q_j)$</span> converges to <span class="math-container">$\operatorname{E}(h_j(X)) = \operatorname{P}(X \in Q_j)$</span> as <span class="math-container">$n \to \infty$</span> for every <span class="math-container">$j = 0, 1, \ldots, J$</span> (because the distribution of <span class="math-container">$X$</span> does not charge the boundaries of <span class="math-container">$Q_j$</span>).</p>
<p>Given a convex set <span class="math-container">$K$</span>, we define <span class="math-container">$f$</span> to be the sum of all <span class="math-container">$h_j$</span> corresponding to cubes <span class="math-container">$Q_j$</span> contained in <span class="math-container">$K$</span>, and <span class="math-container">$g$</span> to be the sum of all <span class="math-container">$h_j$</span> corresponding to cubes <span class="math-container">$Q_j$</span> which intersect <span class="math-container">$K$</span>. Clearly, <span class="math-container">$$0 \leqslant f \leqslant \mathbb{1}_K \leqslant g \leqslant 1 \qquad \text{a.e.}$$</span> As in the first part of the proof,
<span class="math-container">$$ \operatorname{E}(f(X_n)) - \operatorname{E}(g(X)) \leqslant \operatorname{P}(X_n \in K) - \operatorname{P}(X \in K) \leqslant \operatorname{E}(g(X_n)) - \operatorname{E}(f(X)) . $$</span>
For <span class="math-container">$n$</span> large enough we have
<span class="math-container">$$ \sum_{j = 0}^J |\operatorname{E}(h_j(X_n)) - \operatorname{E}(h_j(X))| \leqslant \tfrac{1}{2 k} , $$</span>
and so
<span class="math-container">$$ |\operatorname{E}(f(X_n)) - \operatorname{E}(f(X))| \leqslant \tfrac{1}{2 k} , \qquad |\operatorname{E}(g(X_n)) - \operatorname{E}(g(X))| \leqslant \tfrac{1}{2 k} .$$</span>
Thus,
<span class="math-container">$$ -\tfrac{1}{2k} - \operatorname{E}(g(X) - f(X)) \leqslant \operatorname{P}(X_n \in K) - \operatorname{P}(X \in K) \leqslant \tfrac{1}{2k} + \operatorname{E}(g(X) - f(X)) $$</span>
for <span class="math-container">$n$</span> large enough, uniformly with respect to <span class="math-container">$K$</span>. It remains to choose <span class="math-container">$\delta > 0$</span> such that <span class="math-container">$\operatorname{E}(g(X) - f(X)) < \tfrac{1}{2k}$</span> uniformly with respect to <span class="math-container">$K$</span>; once this is proved, we have
<span class="math-container">$$ -\tfrac{1}{k} \leqslant \operatorname{P}(X_n \in K) - \operatorname{P}(X \in K) \leqslant \tfrac{1}{k} $$</span>
for <span class="math-container">$n$</span> large enough, uniformly with respect to <span class="math-container">$K$</span>, as desired.</p>
<p>By definition, <span class="math-container">$g - f$</span> is the sum of some number of functions <span class="math-container">$h_j$</span> with <span class="math-container">$j \geqslant 1$</span> — say, <span class="math-container">$m$</span> of them — and possibly <span class="math-container">$h_0$</span>. Recall that <span class="math-container">$\operatorname{E}(h_0(X)) \leqslant \operatorname{P}(|X| > R) \leqslant \tfrac{1}{4 k}$</span>. It follows that
<span class="math-container">$$ \operatorname{E}(g(X) - f(X)) \leqslant \tfrac{1}{4 k} + \sup \{ \operatorname{P}(X \in A) : \text{$A$ is a sum of $m$ cubes $Q_j$} \} . \tag{$\heartsuit$} $$</span>
We now estimate the size of <span class="math-container">$m$</span>.</p>
<blockquote>
<p><strong>Lemma.</strong> For a convex <span class="math-container">$K$</span>, the number <span class="math-container">$m$</span> defined above is bounded by a constant times <span class="math-container">$(R / \delta)^{d - 1}$</span>.</p>
</blockquote>
<p><em>Proof:</em>
Suppose that <span class="math-container">$Q_j$</span> intersects <span class="math-container">$K$</span>, but it is not contained in <span class="math-container">$K$</span>. Consider any point <span class="math-container">$z$</span> of <span class="math-container">$K \cap Q_j$</span>, and the supporting hyperplane <span class="math-container">$$\pi = \{x : \langle x - z, \vec{u} \rangle = 0\}$$</span> of <span class="math-container">$K$</span> at that point. We choose <span class="math-container">$\vec{u}$</span> in such a way that <span class="math-container">$K$</span> is contained in <span class="math-container">$\pi^- = \{x : \langle x - z, \vec{u} \rangle \leqslant 0\}$</span>. If the boundary of <span class="math-container">$K$</span> is smooth at <span class="math-container">$z$</span>, then <span class="math-container">$\vec{u}$</span> is simply the outward normal vector to the boundary of <span class="math-container">$K$</span> at <span class="math-container">$z$</span>.</p>
<p>To simplify notation, assume that <span class="math-container">$\vec{u}$</span> has all coordinates non-negative. Choose two opposite vertices <span class="math-container">$x_1, x_2$</span> of <span class="math-container">$Q_j$</span> in such a way that <span class="math-container">$\vec{v} = x_2 - x_1 = (\delta, \ldots, \delta)$</span>. Then the coordinates of <span class="math-container">$x_2 - z$</span> are all positive. It follows that for every <span class="math-container">$n = 1, 2, \ldots$</span>, all coordinates of <span class="math-container">$(x_1 + n \vec{v}) - z = (x_2 - z) + (n - 1) \vec{v}$</span> are non-negative, and therefore the translated cubes <span class="math-container">$Q_j + n \vec{v}$</span> all lie in <span class="math-container">$\pi^+ = \{x : \langle x - z, \vec{u} \rangle \geqslant 0\}$</span>. In particular, all these cubes are disjoint with <span class="math-container">$K$</span>.</p>
<p>In the general case, when the coordinates of <span class="math-container">$\vec{u}$</span> have arbitrary signs, we obtain a similar result, but with <span class="math-container">$\vec{v} = (\pm \delta, \ldots, \pm \delta)$</span> for some choice of signs. It follows that with each <span class="math-container">$Q_j$</span> intersecting <span class="math-container">$K$</span> but not contained in <span class="math-container">$K$</span> we can associate the directed line <span class="math-container">$x_2 + \mathbb{R} \vec{v}$</span>, and this line uniquely determines <span class="math-container">$Q_j$</span>: it is the last cube <span class="math-container">$Q$</span> with two vertices on this line that intersects <span class="math-container">$K$</span> (with "last" referring to the direction of the line).</p>
<p>It remains to observe that the number of lines with the above property is bounded by <span class="math-container">$2^d$</span> (the number of possible vectors <span class="math-container">$\vec{v}$</span>) times the number of points in the projection of <span class="math-container">$(\delta \mathbb{Z})^d \cap \overline{B}(0, R)$</span> onto the hyperplane perpendicular to <span class="math-container">$\vec{v}$</span>. The latter is bounded by a constant times <span class="math-container">$(R / \delta)^{d - 1}$</span>, and the proof is complete. <span class="math-container">$\square$</span></p>
<p>(The above proof includes simplification due to Iosif Pinelis.)</p>
<p>Since the Lebesgue measure of <span class="math-container">$Q_j$</span> is equal to <span class="math-container">$\delta^d$</span>, the measure of <span class="math-container">$A$</span> in (<span class="math-container">$\heartsuit$</span>) is bounded by <span class="math-container">$m \delta^d \leqslant C R^{d - 1} \delta$</span> for some constant <span class="math-container">$C$</span>. Furthermore, since the distribution of <span class="math-container">$X$</span> is absolutely continuous, we can find <span class="math-container">$\delta > 0$</span> small enough, so that <span class="math-container">$\operatorname{P}(X \in A) < \tfrac{1}{4 k}$</span> for <em>every</em> set <span class="math-container">$A$</span> with measure at most <span class="math-container">$C R^{d - 1} \delta$</span> (recall that <span class="math-container">$R$</span> was chosen before we fixed <span class="math-container">$\delta$</span>). By (<span class="math-container">$\heartsuit$</span>), we find that
<span class="math-container">$$ \operatorname{E}(g(X) - f(X)) \leqslant \tfrac{1}{4 k} + \sup \{ \operatorname{P}(X \in A) : |A| \le C R^{d - 1} \delta\} \leqslant \tfrac{1}{2 k} , $$</span>
uniformly with respect to <span class="math-container">$K$</span>.</p>
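As a numerical illustration of the Lemma (my experiment, not part of the answer): for $d = 2$, $R = 1$ and $K$ the unit disk, the number of grid squares of side $\delta$ meeting $\partial K$ should be of order $(R/\delta)^{d-1} = 1/\delta$, so the count times $\delta$ stays bounded as $\delta$ shrinks:

```python
import math

def boundary_squares(delta, samples=100000):
    """Count side-delta grid squares met by the unit circle (the boundary of K).

    The circle is sampled densely enough that every square its arc crosses
    for a length comparable to delta is detected."""
    cells = set()
    for i in range(samples):
        t = 2 * math.pi * i / samples
        cells.add((math.floor(math.cos(t) / delta),
                   math.floor(math.sin(t) / delta)))
    return len(cells)

for delta in (0.1, 0.05, 0.025):
    n = boundary_squares(delta)
    # count * delta approaches a constant (about 8 here), i.e. count = O(1/delta)
    assert n * delta < 12
```

This matches the grid-line-crossing heuristic from the proof of the Lemma: a closed convex curve enters a new cell at each crossing of a grid line, and there are about $4 \cdot 2R/\delta$ such crossings.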
|
351,870 | <p>Let <span class="math-container">$(X_n)$</span> be a sequence of <span class="math-container">$\mathbb{R}^d$</span>-valued random variables converging in distribution to some limiting random variable <span class="math-container">$X$</span> whose CDF is absolutely continuous with respect to the Lebesgue measure.</p>
<p>Does it follow that <span class="math-container">$X_n$</span> converges to <span class="math-container">$X$</span> in convex distance, i.e. that</p>
<p><span class="math-container">$$\sup_{h} \lvert \operatorname{E}(h(X)) - \operatorname{E}(h(X_n)) \rvert \to 0,$$</span></p>
<p>where the supremum is taken over all indicator functions of measurable convex subsets of <span class="math-container">$\mathbb{R}^d$</span>, if necessary assuming (absolute) continuity of the CDFs of the <span class="math-container">$X_n$</span> as well?</p>
<p><strong>Remark 1:</strong>
For <span class="math-container">$d=1$</span>, the implication is true and can be proven by Polya's theorem (convergence in law of real valued random variables towards a limit with continuous CDF implies uniform convergence of the CDF). Is it still true for <span class="math-container">$d \geq 2$</span>?</p>
<p><strong>Remark 2:</strong>
If absolute continuity is replaced by continuity the conclusion is false, see <a href="https://mathoverflow.net/questions/351597/does-convergence-in-law-imply-convergence-in-convex-distance">here</a></p>
| Iosif Pinelis | 36,721 | <p><span class="math-container">$\newcommand{\R}{\mathbb{R}}
\newcommand{\ep}{\varepsilon}
\newcommand{\p}{\partial}
\newcommand{\de}{\delta}
\newcommand{\De}{\Delta}$</span>
This is an attempt at a simplification and elaboration of the answer by Mateusz Kwaśnicki. </p>
<p>Suppose that the distribution of <span class="math-container">$X$</span> is absolutely continuous (with respect to the Lebesgue measure) and <span class="math-container">$X_n\to X$</span> in distribution. We are going to show that then <span class="math-container">$X_n\to X$</span> in convex distance, that is,
<span class="math-container">$$\sup_K|\mu_n(K)-\mu(K)|\overset{\text{(?)}}\to0$$</span>
(as <span class="math-container">$n\to\infty$</span>), where <span class="math-container">$\mu_n$</span> and <span class="math-container">$\mu$</span> are the distributions of <span class="math-container">$X_n$</span> and <span class="math-container">$X$</span>, respectively, and <span class="math-container">$\sup_K$</span> is taken over all measurable convex sets in <span class="math-container">$\R^d$</span>. </p>
<p>Take any real <span class="math-container">$\ep>0$</span>. Then there is some real <span class="math-container">$R>0$</span> such that <span class="math-container">$P(X\notin Q_R)\le\ep$</span>, where <span class="math-container">$Q_R:=(-R/2,R/2]^d$</span>, a left-open <span class="math-container">$d$</span>-cube. Since <span class="math-container">$X_n\to X$</span> in distribution and <span class="math-container">$P(X\in\p Q_R)=0$</span>, there is some natural <span class="math-container">$n_\ep$</span> such that for all natural <span class="math-container">$n\ge n_\ep$</span> we have
<span class="math-container">$$P(X_n\notin Q_R)\le2\ep.$$</span>
Take a natural <span class="math-container">$N$</span> and partition the left-open <span class="math-container">$d$</span>-cube <span class="math-container">$Q_R$</span> naturally into <span class="math-container">$N^d$</span> left-open <span class="math-container">$d$</span>-cubes <span class="math-container">$q_j$</span> each with edge length <span class="math-container">$\de:=R/N$</span>, where <span class="math-container">$j\in J:=[N^d]:=\{1,\dots,N^d\}$</span>. </p>
<p>Using again the conditions that <span class="math-container">$X_n\to X$</span> in distribution and <span class="math-container">$\mu$</span> is absolutely continuous (so that <span class="math-container">$\mu(\p q_j)=0$</span> for all <span class="math-container">$j\in J$</span>), and increasing <span class="math-container">$n_\ep$</span> if needed, we may assume that for all natural <span class="math-container">$n\ge n_\ep$</span>
<span class="math-container">$$\De:=\sum_{j\in J}|\mu_n(q_j)-\mu(q_j)|\le\ep.$$</span></p>
<p>Take now any measurable convex set <span class="math-container">$K$</span> in <span class="math-container">$\R^d$</span>. Then
<span class="math-container">$$|\mu_n(K)-\mu(K)|\le|\mu_n(K\cap Q_R)-\mu(K\cap Q_R)| \\
+|\mu_n(K\setminus Q_R)-\mu(K\setminus Q_R)|
$$</span>
and
<span class="math-container">$$|\mu_n(K\setminus Q_R)-\mu(K\setminus Q_R)|\le
P(X_n\notin Q_R)+P(X\notin Q_R)\le3\ep.
$$</span></p>
<p>So, without loss of generality (wlog) <span class="math-container">$K\subseteq Q_R$</span>. Let
<span class="math-container">$$J_<:=J_{<,K}:=\{j\in J\colon q_j\subseteq K^\circ\},$$</span>
<span class="math-container">$$J_\le:=J_{\le,K}:=\{j\in J\colon q_j\cap \bar K\ne\emptyset\},$$</span>
<span class="math-container">$$J_=:=J_{=,K}:=\{j\in J\colon q_j\cap\p K\ne\emptyset\},$$</span>
where <span class="math-container">$K^\circ$</span> is the interior of <span class="math-container">$K$</span> and <span class="math-container">$\bar K$</span> is the closure of <span class="math-container">$K$</span>. </p>
<p>The key to the whole thing is </p>
<blockquote>
<p><strong>Lemma.</strong> <span class="math-container">$|\bigcup_{j\in J_=}q_j|\le2d(d+2)R^{d-1}\de$</span>, where <span class="math-container">$|\cdot|$</span> is the Lebesgue measure. </p>
</blockquote>
<p>This lemma will be proved at the end of this answer. Using the absolute continuity of the distribution of <span class="math-container">$X$</span>, we can take <span class="math-container">$N$</span> so large that for any Borel subset <span class="math-container">$B$</span> of <span class="math-container">$\R^d$</span> we have the implication
<span class="math-container">$$|B|\le2d(d+2)R^{d-1}\de\implies \mu(B)\le\ep.$$</span> </p>
<p>Using now the lemma, for <span class="math-container">$n\ge n_\ep$</span> we have
<span class="math-container">$$\mu_n(K)-\mu(K)
\le\sum_{j\in J_\le}\mu_n(q_j)-\sum_{j\in J_<}\mu(q_j) \\
\le\sum_{j\in J_\le}|\mu_n(q_j)-\mu(q_j)|
+\mu
\Big(\bigcup_{j\in J_=}q_j\Big)\le\De+\ep\le2\ep.
$$</span>
Similarly,
<span class="math-container">$$\mu(K)-\mu_n(K)
\le\sum_{j\in J_\le}\mu(q_j)-\sum_{j\in J_<}\mu_n(q_j) \\
\le\sum_{j\in J_<}|\mu(q_j)-\mu_n(q_j)|
+\mu
\Big(\bigcup_{j\in J_=}q_j\Big)\le\De+\ep\le2\ep.
$$</span>
So, <span class="math-container">$|\mu_n(K)-\mu(K)|\le2\ep$</span>. That is, the desired result is proved modulo the lemma. </p>
<hr>
<p><em>Proof of the lemma.</em> Since <span class="math-container">$K$</span> is convex, for any <span class="math-container">$x\in\p K$</span> there is some unit vector <span class="math-container">$\nu(x)$</span> such that <span class="math-container">$\nu(x)\cdot(y-x)\le0$</span> for all <span class="math-container">$y\in K$</span> (the support half-space thing), where <span class="math-container">$\cdot$</span> denotes the dot product. For each <span class="math-container">$j\in[d]$</span>, let
<span class="math-container">$$S_j^+:=\{x\in\p K\colon\nu(x)_j\ge1/\sqrt d\},\quad
S_j^-:=\{x\in\p K\colon\nu(x)_j\le-1/\sqrt d\},$$</span>
<span class="math-container">$$J_{=,j}^+:=\{i\in J\colon q_i\cap S_j^+\ne\emptyset\},\quad
J_{=,j}^-:=\{i\in J\colon q_i\cap S_j^-\ne\emptyset\},$$</span>
where <span class="math-container">$v_j$</span> is the <span class="math-container">$j$</span>th coordinate of a vector <span class="math-container">$v\in\R^d$</span>.
Note that
<span class="math-container">$\bigcup_{j\in[d]}(S_j^+\cup S_j^-)=\p K$</span> and hence
<span class="math-container">$\bigcup_{j\in[d]}(J_{=,j}^+\cup J_{=,j}^-)=J_=$</span>, so that
<span class="math-container">$$\Big|\bigcup_{j\in J_=}q_j\Big|
\le
\sum_{j\in[d]}\Big(\Big|\bigcup_{i\in J_{=,j}^+}q_i\Big|+\Big|\bigcup_{i\in J_{=,j}^-}q_i\Big|\Big)
\le\de^d
\sum_{j\in[d]}(|J_{=,j}^+|+|J_{=,j}^-|),\tag{*}
$$</span>
where now <span class="math-container">$|J_{=,j}^\pm|$</span> denotes the cardinality of <span class="math-container">$J_{=,j}^\pm$</span>. </p>
<p>Now comes the key step in the proof of the lemma:
Take any <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$S_d^+$</span> such that <span class="math-container">$x_d\le y_d$</span>. We have the support "inequality" <span class="math-container">$\nu(x)\cdot(y-x)\le0$</span>, which implies
<span class="math-container">$$\frac{y_d-x_d}{\sqrt d}\le\nu(x)_d(y_d-x_d)\le\sum_{j=1}^{d-1}\nu(x)_j(x_j-y_j)
\le|P_{d-1}x-P_{d-1}y|,
$$</span>
where <span class="math-container">$P_{d-1}x:=(x_1,\dots,x_{d-1})$</span>. So, we get the crucial Lipschitz condition
<span class="math-container">$$|y_d-x_d|\le\sqrt d\,|P_{d-1}x-P_{d-1}y| \tag{**}$$</span>
for all <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$S_d^+$</span>. </p>
<p>Partition the left-open <span class="math-container">$(d-1)$</span>-cube <span class="math-container">$P_{d-1}Q_R$</span> naturally into <span class="math-container">$N^{d-1}$</span> left-open <span class="math-container">$(d-1)$</span>-cubes <span class="math-container">$c_i$</span> each with edge length <span class="math-container">$\de=R/N$</span>, where <span class="math-container">$i\in I:=[N^{d-1}]$</span>.
For each <span class="math-container">$i\in I$</span>, let
<span class="math-container">$$ J_{=,d,i}^+:=\{j\in J_{=,d}^+\colon P_{d-1}q_j=c_i\},\quad
s_i:=\bigcup_{j\in J_{=,d,i}^+}q_j,
$$</span>
so that <span class="math-container">$s_i$</span> is the "stack" of all the <span class="math-container">$d$</span>-cubes <span class="math-container">$q_j$</span> with <span class="math-container">$j\in J_{=,d}^+$</span> that <span class="math-container">$P_{d-1}$</span> projects onto the same <span class="math-container">$(d-1)$</span>-cube <span class="math-container">$c_i$</span>. Let <span class="math-container">$r_i$</span> be the cardinality of the set <span class="math-container">$J_{=,d,i}^+$</span>, that is,
the number of the <span class="math-container">$d$</span>-cubes <span class="math-container">$q_j$</span> in the stack <span class="math-container">$s_i$</span>. Then for some two points <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$s_i\cap S_d^+$</span> we have <span class="math-container">$|y_d-x_d|\ge(r_i-2)\de$</span>, whence, in view of (**),
<span class="math-container">$$\sqrt d\,\sqrt{d-1}\,\de\ge\sqrt d\,|P_{d-1}x-P_{d-1}y|\ge|y_d-x_d|\ge(r_i-2)\de,$$</span>
so that <span class="math-container">$r_i\le d+2$</span>. So,
<span class="math-container">$$|J_{=,d}^+|=\sum_{i\in I}r_i\le\sum_{i\in I}(d+2)=(d+2)N^{d-1}=(d+2)(R/\de)^{d-1}.$$</span>
Similarly, <span class="math-container">$|J_{=,j}^\pm|\le(d+2)(R/\de)^{d-1}$</span> for all <span class="math-container">$j\in[d]$</span>. Now the lemma follows from (*). </p>
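A quick experimental check of the stack-height bound $r_i \le d+2$ (my sketch, not from the answer): for the unit disk in $\mathbb{R}^2$ the boundary point at angle $t$ equals its own outward normal, so $S_2^+$ is the arc whose second coordinate is at least $1/\sqrt 2$, and each column of width $\delta$ should meet at most $d + 2 = 4$ cells of that arc:

```python
import math
from collections import defaultdict

delta = 0.01
samples = 200000
stacks = defaultdict(set)                    # column index -> rows of cells met

for i in range(samples):
    t = 2 * math.pi * i / samples
    x, y = math.cos(t), math.sin(t)          # boundary point; here it is nu(x) too
    if y >= 1 / math.sqrt(2):                # keep only the piece S_2^+ of the proof
        stacks[math.floor(x / delta)].add(math.floor(y / delta))

# the proof bounds each stack height r_i by d + 2 = 4
assert max(len(rows) for rows in stacks.values()) <= 4
```

The observed heights are in fact 2 or 3: on $S_2^+$ the boundary is a $1$-Lipschitz graph over the first coordinate, which is exactly the Lipschitz condition (**) driving the proof.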
|
<p>The question asks me to draw a Hasse diagram for the given poset:
$$ (\{n\in \mathbb N: n\mid 100\ \lor\ n = 75 \}, {}\mid{} ) $$</p>
<p>My approach is to write down the set of all $n$ satisfying $n\mid 100$, but I don't get what the "$\lor\ n = 75$" part means.</p>
<p>Could someone help me figure out what that means? Is it the set of all $n\mid100$, or of all $n\mid75$?</p>
<p>I'm new to discrete math; any solution is much appreciated. </p>
| Community | -1 | <p><strong>Hint</strong> : show that if $p \sim q$ (as divisor) then $X \cong \Bbb P^1$. Show then that Riemann Roch implies that there is $p,q \in X$ with $p \sim q$. </p>
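The poset in the question is small enough to compute directly. A sketch (my illustration, not from either post): list the underlying set (the divisors of $100$ together with the extra element $75$) and its covering relations, which are exactly the edges of the Hasse diagram:

```python
# Underlying set: divisors of 100, plus 75 (which does NOT divide 100).
S = sorted({n for n in range(1, 101) if 100 % n == 0} | {75})

def covers(a, b):
    """b covers a in (S, |): a divides b properly, with no element of S
    strictly between them in the divisibility order."""
    if a == b or b % a != 0:
        return False
    return not any(c % a == 0 and b % c == 0 for c in S if c not in (a, b))

edges = [(a, b) for a in S for b in S if covers(a, b)]
print(S)        # [1, 2, 4, 5, 10, 20, 25, 50, 75, 100]
print(edges)    # e.g. (25, 75) is an edge, while (5, 75) is not (25 lies between)
```

So $75$ sits above $25$ only, and is a maximal element alongside $100$.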
|
2,897,785 | <blockquote>
<p>Fix a $2\times 2$ real matrix $A$. Let $V$ be the set of all $2\times 2$ real matrices $X$ such that $AX=XA$. Show that $V$ is a vector space of dimension of at least 2.</p>
</blockquote>
<p>I'm struggling to see a good way to approach this problem. There's the brute force style method of algebraically manipulating 4 equations in 8 unknowns to show that there are (at least) two matrices $X$ that satisfy $AX=XA$ for any given $A$, but it seems like there should be a more insightful approach. Certainly the identity is in $V$, so there's one element in a basis. And the zero matrix is also in $V$, but this doesn't contribute to a basis as the columns are linearly dependent. And since we don't know that $A$ is invertible, we can't simply take $X=A^{-1}$. </p>
| Arnaud Mortier | 480,423 | <p>Hint:</p>
<ul>
<li><p>Showing that it's a vector space should be easy enough without actually considering the entries of the matrices.</p></li>
<li><p>To see that the dimension is at least $2$, consider two cases:</p>
<ul>
<li>What if $A$ is a multiple of the identity matrix?</li>
<li>What if $A$ isn't a multiple of the identity matrix?</li>
</ul></li>
</ul>
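<p>A numeric sketch of the hint (my own addition, assuming a concrete non-scalar $A$): vectorizing the linear map $X \mapsto AX - XA$ and computing its nullity gives the dimension of $V$.</p>

```python
import numpy as np

# vec(AX - XA) = (I (x) A - A^T (x) I) vec(X), column-stacking convention
A = np.array([[1.0, 2.0], [3.0, 4.0]])   # an arbitrary non-scalar example
I = np.eye(2)
M = np.kron(I, A) - np.kron(A.T, I)
dim_V = 4 - np.linalg.matrix_rank(M)     # nullity of the commutator map
print(dim_V)  # 2
```

<p>For $A$ a multiple of the identity the same computation gives $4$; in either case the nullity is at least $2$.</p>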
|
12,204 | <p>A tag named <a href="https://math.stackexchange.com/questions/tagged/tricks" class="post-tag" title="show questions tagged 'tricks'" rel="tag">tricks</a> has recently been created in <a href="https://math.stackexchange.com/questions/616672/2-tricks-to-prove-every-group-with-an-identity-and-xx-identity-is-abelian-f">this question</a>. So far there is <a href="https://math.stackexchange.com/tags/tricks/info">no tag wiki</a> to indicate intended usage.</p>
<p>Do we really need such a tag? It veers slightly towards <a href="https://math.meta.stackexchange.com/questions/2498/the-meta-tags">meta tags</a>, which are generally discouraged.</p>
<hr>
<p>To me it seems somewhat similar to <a href="https://math.stackexchange.com/questions/tagged/proof-strategy" class="post-tag" title="show questions tagged 'proof-strategy'" rel="tag">proof-strategy</a>, see the <a href="https://math.stackexchange.com/tags/proof-strategy/info">tag-wiki</a>. (I would characterize this tag as questions that are similar to <a href="http://www.tricki.org/" rel="nofollow noreferrer">tricki</a> articles, see <a href="http://chat.stackexchange.com/rooms/3740/conversation/proof-strategy-tag">this conversation in chat</a>.) However, <a href="https://math.stackexchange.com/questions/tagged/proof-strategy" class="post-tag" title="show questions tagged 'proof-strategy'" rel="tag">proof-strategy</a> tag is often used incorrectly.</p>
| Grigory M | 152 | <p>Certainly not. Let's remove it.</p>
<p>It's hard to imagine someone adding this tag to their favorite or ignored tags. It's not useful for search.</p>
<p>It's a meta-tag. It has unclear and potentially very vast area of usage. What questions exactly should be tagged (trick)? Who should tag it — e.g. if an answer is 'trick' should the question be tagged (trick)? (If it's intended for question asking explanations of some 'tricks' then it seems to me there are more or less reasonable tags for this already.)</p>
<p>The proposed tag doesn't fit well into the current tagging ideology of SE, and its (intended) usage is not at all clear. Of course, rules can have exceptions — but if someone wants an exception to be made <em>they</em> should make a coherent proposal with some arguments addressing the issues (no, 'why not' and 'there are a lot of highly voted questions that involve tricks' is not <em>remotely</em> enough).</p>
|
274,908 | <p>I would like to plot a molecule in 3D and use different colors for the same atom type in the molecule. For example, by using:</p>
<pre><code>MoleculePlot3D[Molecule["NC(=O)C[C@H](C(=O)O)N"], ColorRules -> {"C" -> Black}]
</code></pre>
<p>all C atoms become Black. But how can I make, for example, the first C atom green, the second orange, etc?</p>
| Jason B. | 9,490 | <p>This is underdocumented but the <code>"ColorRules"</code> option can take both atom indices and patterns.</p>
<pre><code>MoleculePlot3D[Molecule["NC(=O)C[C@H](C(=O)O)N"],
ColorRules -> {2 -> Green, "C" -> Black, "O" -> Orange, 8 -> Pink}]
</code></pre>
<p><a href="https://i.stack.imgur.com/ypPLA.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ypPLA.png" alt="enter image description here" /></a></p>
|
2,292,520 | <p>I know the logical equivalence $$\neg(a \rightarrow b)= a \wedge \neg b, $$ but I am not clear what that means in the following simple setting:</p>
<p>So it's clear that $$x\geq 2 \to x^2\geq 4.$$ Now I can write the logical negation of $a\to b$ as $a \wedge \neg b$, but what does that mean intuitively? </p>
<p>Suppose I want to prove "$a \wedge \neg b$"; what do I need to prove mathematically?</p>
<p>Thanks</p>
| Graham Kemp | 135,106 | <p>Our statement is $\neg (a\to b)$</p>
<p>This reads: "It is false that $a$ (materially) implies $b$". </p>
<p>Recall that a <em>material implication</em> is falsified only when the antecedent is true and the consequent is false.</p>
<p>So our statement must be asserting that "$a$ is true and $b$ is false."</p>
<p>Which is written $a\wedge\neg b$.</p>
<p>And we can argue vice versa, so the statements are equivalent.</p>
<p>That is all.</p>
<hr>
<p>Now the negation of $\bbox[lemonchiffon]{x\geq 2\to x^2\geq 4}$ is $\bbox[lemonchiffon]{x\geq 2\wedge x^2< 4}$. If the latter is false the former will be true (and vice versa).</p>
<p>For example, when $x=3$ then $\bbox[lemonchiffon]{3\geq 2\to 3^2\geq 4}$ is true because $\bbox[lemonchiffon]{3\geq 2\wedge 3^2< 4}$ is false.</p>
<p>Another example, when $x=1$ then $\bbox[lemonchiffon]{1\geq 2\to 1^2\geq 4}$ is true (<em>despite seeming absurd</em>) because $\bbox[lemonchiffon]{1\geq 2\wedge 1^2< 4}$ is false.</p>
<hr>
<p>Often when we write something like $x\geq 2\to x^2\geq 4$ we implicitly mean that the statement holds universally (for all $x$). That is $\forall x~(x\geq 2\to x^2\geq 4)$. </p>
<p>The negation of this <em>quantified statement</em> is $\exists x~(x\geq 2\wedge x^2<4)$. Since there is no real witness to this existential, the universal is inferred to be true.</p>
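<p>A brute-force truth-table check of the equivalence (my addition, not part of the original answer):</p>

```python
# Verify that not(a -> b) has the same truth table as (a and not b)
rows = []
for a in (False, True):
    for b in (False, True):
        implies = (not a) or b          # material implication a -> b
        rows.append((a, b, not implies, a and not b))
        assert (not implies) == (a and not b)
print("equivalent on all", len(rows), "assignments")
```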
|
2,710,703 | <p>Given any non-abelian group, how can I prove that every proper subgroup may be abelian? I know the definition of "abelian," but I don't know the difference between a group and a subgroup, nor do I understand how the two interconnect.</p>
| Dietrich Burde | 83,966 | <p>There are many non-abelian groups all of whose proper subgroups are abelian. Studying such groups of low order, we immediately find examples, such as $S_3$ or $Q_8$, the quaternion group. Because we know all subgroups explicitly for these groups, it is easy to prove that they are abelian.</p>
<p>One might ask what properties this class of groups has. Every such group is necessarily metabelian, for example. </p>
<p>More generally there is the following reference here:</p>
<p><a href="https://math.stackexchange.com/questions/48197/what-can-we-say-of-a-group-all-of-whose-proper-subgroups-are-abelian">What can we say of a group all of whose proper subgroups are abelian?</a></p>
|
1,804,042 | <p><strong>Edit:</strong> Here is the original problem; it is possible that my recurrence for the stationary distribution $\pi$ is incorrect.</p>
<blockquote>
<p>Consider a single server queue where customers arrive according to a Poisson process with intensity $\lambda$ and request i.i.d. $\mathsf{Exp}(\mu)$ service times. The server is subject to failures and repairs. The lifetime of a working server is an $\mathsf{Exp}(\theta)$ random variable, while the repair time is an $\mathsf{Exp}(\alpha)$ random variable. Successive lifetimes and repair times are independent, and are independent of the number of customers in the queue. When the server fails, all the customers in the queue are forced to leave, and while the server is under repair no new customers are allowed to join.</p>
</blockquote>
<p><strong>Edit:</strong> I have revised the recurrence.</p>
<p>In a problem on queueing theory I've derived the following recurrence:
\begin{align}
\pi_1 &=\left(\frac{\lambda+\theta}\mu\right)\pi_0 - \frac{\alpha\theta}{\mu(\alpha+\theta)}\\
\pi_{n+1} &= \left(1+\frac{\lambda+\theta}\mu\right)\pi_n - \frac\lambda\mu\pi_{n-1},\ n\geqslant1.
\end{align}
where $\lambda$, $\mu$, $\theta$, and $\alpha$ are positive constants and $$\sum_{i=0}^\infty \pi_i = \frac\alpha{\alpha+\theta}. $$ </p>
<p>After a lot of tedious algebra, I found that
$$\scriptsize\pi_n = \left(\frac{\alpha \theta \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\theta +\lambda +\mu+ \sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}\right)^n}{(\alpha +\theta ) (2 \mu )^n \sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}}\right)(1+\pi_0) $$ for $n\geqslant 1$. To save space, let $$\mathcal C:=\sqrt{\theta ^2+2 \theta (\lambda +\mu )+(\lambda -\mu )^2}. $$</p>
<p>Summing over $n$ and solving for $\pi_0$, I found
$$\pi_0 =\frac{\alpha \mu \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\lambda -\mu-\theta-\mathcal C \right)}{2 \theta (\alpha +\theta ) \mathcal C}, $$</p>
<p>and so
$$
\pi_n=\left(\frac{ \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right) \left(\lambda -\mu-\theta-\mathcal C \right)+2 \theta (\alpha +\theta ) \mathcal C }{2(\alpha +\theta )^2\mathcal C^2\left(\alpha^2\mu \left(2 \theta (\lambda +\mu )+(\lambda -\mu )^2\right)\right)^{-1}} \right)\left(\frac{\lambda+\mu+\theta+\mathcal C }{2\mu}\right)^n.
$$</p>
<p>If you see any errors let me know...</p>
<p>I'm also wondering what conditions on $\lambda,\mu,\theta$, and $\alpha$ are necessary for $\sum_{i=0}^\infty \pi_i$ to converge. For context, this is an $M/M/1$ queue with arrival rate $\lambda$, service rate $\mu$, but with an added state $D$ with transitions of rate $\theta$ from each state $n$ to $D$ and a transition of rate $\alpha$ from $D$ to $0$.</p>
| Felix Marin | 85,343 | <p>$\newcommand{\angles}[1]{\left\langle\, #1 \,\right\rangle}
\newcommand{\braces}[1]{\left\lbrace\, #1 \,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\, #1 \,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\half}{{1 \over 2}}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\iff}{\Leftrightarrow}
\newcommand{\imp}{\Longrightarrow}
\newcommand{\ol}[1]{\overline{#1}}
\newcommand{\pars}[1]{\left(\, #1 \,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\, #2 \,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\, #1 \,\right\vert}$
<strong>The question</strong>:
$\ds{\pi_{n+1} = \overbrace{\frac{\lambda+\mu\theta}{\mu}}^{\ds{\equiv a}}\
\pi_n\ +\ \overbrace{\frac\theta\mu}^{\ds{\equiv b}}\sum_{i=0}^{n-1}\pi_i\ -\
\overbrace{\frac{\alpha\theta}{\mu(\alpha+\theta)}}^{\ds{\equiv c}}
\quad\imp\quad\pi_{n+1} = a\pi_n + b\sum_{i=0}^{n-1}\pi_i - c\quad}$ and
$\ds{\sum_{i = 0}^{\infty}\pi_{i}\ =\
\underbrace{{\alpha \over \alpha + \theta}}_{\ds{\equiv\ d}}\ =\ {c \over b}}$
<hr>
With $\verts{z} < 1$:
\begin{align}
\sum_{n = 0}^{\infty}\pi_{n + 1}z^{n} & = a\sum_{n = 0}^{\infty}\pi_nz^{n} + b\sum_{n = 0}^{\infty}z^{n}\sum_{i=0}^{n - 1}\pi_i - c\sum_{n = 0}^{\infty}z^{n}
\\[3mm]\imp
{1 \over z}\pars{\sum_{n = 0}^{\infty}\pi_{n}z^{n} - \pi_{0}} & =
a\sum_{n = 0}^{\infty}\pi_nz^{n} +
b\sum_{i = 0}^{\infty}\pi_{i}\sum_{n = 1 + i}^{\infty}z^{n} -
{c \over 1 - z}
\\[3mm]\imp
\pars{{1 \over z} - a}\sum_{n = 0}^{\infty}\pi_{n}z^{n} & =
{\pi_{0} \over z} +
b\sum_{i = 0}^{\infty}\pi_{i}{z^{i + 1} \over 1 - z} -
{c \over 1 - z}
\\[3mm]\imp
\pars{{1 \over z} - a - b\,{z \over 1 - z}}\sum_{n = 0}^{\infty}\pi_{n}z^{n} & =
{\pi_{0} \over z} - {c \over 1 - z}
\end{align}</p>
<p>Then,
$$
\sum_{i = 0}^{\infty}\pi_{i}z^{i} =
{\pars{\pi_{0} + c}z - \pi_{0} \over \pars{b - a}z^{2} + \pars{a + 1}z - 1}
$$</p>
<p>In order to get the set $\braces{\pi_{i}}$, expand the right-hand side in powers of $z$. Maybe some other conditions on the magnitude of $z$ will be required along the way.</p>
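<p>A quick symbolic check of this generating function (my addition, for one arbitrary choice of the constants): expand $G$ in powers of $z$ and test the coefficients against the recurrence $\pi_{n+1} = a\pi_n + b\sum_{i=0}^{n-1}\pi_i - c$.</p>

```python
import sympy as sp

z = sp.symbols('z')
# arbitrary numeric values for a, b, c and pi_0 (illustrative only)
a, b, c, pi0 = sp.Rational(2), sp.Rational(1, 2), sp.Rational(3, 10), sp.Integer(1)
G = ((pi0 + c)*z - pi0) / ((b - a)*z**2 + (a + 1)*z - 1)
ser = sp.series(G, z, 0, 12).removeO()
pi = [ser.coeff(z, n) for n in range(12)]
for n in range(11):
    assert pi[n + 1] == a*pi[n] + b*sum(pi[:n]) - c
print("recurrence verified through n = 10")
```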
|
462,921 | <p>I simply cannot answer the following question:</p>
<p>How can I prove the equality $\lim_{a\to 0}\sup_{z\in\mathbb{Z}}\bigl(2-2\cos(2\pi a z)\bigr)=0$?</p>
<p>Or is it even false?</p>
<p>Thanks in advance!</p>
| R.T. | 89,230 | <p>I think this is wrong. </p>
<p>If $a$ is rational with an even denominator, say $a=\frac{1}{2m}$, then $\cos(2\pi az)$ takes the value $-1$ at $z=m$. Hence, for such $a$, you have
$$\sup_z\bigl(2-2\cos(2\pi az)\bigr)=4.$$</p>
<p>Since you can go arbitrarily close to zero with such numbers, the limit cannot be zero...</p>
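<p>A numeric illustration (my addition): for $a = \frac{1}{2m}$ the value $4$ is attained at $z = m$, so the supremum stays at $4$ as $a \to 0$ along these values.</p>

```python
import math

for m in (1, 2, 5, 50):
    a = 1.0 / (2 * m)
    val = 2 - 2 * math.cos(2 * math.pi * a * m)   # cos(pi) = -1, so val = 4
    assert abs(val - 4.0) < 1e-12
    print(a, val)
```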
|
1,977,306 | <p>This is from a math competition, so it must not be something really long.
If a parabola touches the lines $y=x$ and $y=-x$ at $A(3,3)$ and $B(1,-1)$ respectively, then </p>
<p>(A) equation of axis of parabola is $2x+y=0$ </p>
<p>(B)slope of tangent at vertex is $1/2$</p>
<p>(C) Focus is $(6/5,-3/5)$</p>
<p>(D) Directrix passes through $(1,-2)$ </p>
<p>I thought the axis would be the angle bisector of the tangents passing through the focus but it turns out that is not the case in a parabola so how can I find anything..</p>
| Narasimham | 95,860 | <p>Conditions (A),(B),(C),(D) are not needed to determine the tilted parabola ( $xy$ term non-zero). Because out of five constants needed to determine a conic, one can be reduced as zero determinant for parabola.You have given two points and two slopes which are quite sufficient.</p>
<p>$$ (x, y, y^{ \prime} ) = (3,3,1), (1,-1,-1) $$</p>
<p>Parabola involving $xy,x^2, y^2,x,y $ terms can be solved for y in a quadratic as:</p>
<p>$$ y^2 - 2y ( ax+b) + (ax+b)^2 - (cx+d) =0 $$</p>
<p>$$ y = (ax +b) \pm \sqrt{ c x +d } $$</p>
<p>$\pm$ symbol separates regions on either side of vertical tangent. Plug in given point coordinates:</p>
<p>$$ 3 a + b+ \sqrt{ 3c+d} =3 , \quad a + b - \sqrt{ c+d} = -1 ,\quad a + c \ /( 2 \sqrt{ 3c+d} ) = 1 ,\quad a - c \ /( 2 \sqrt{ c+d} ) = -1 $$</p>
<p>Using a CAS to reduce tedium we get $ (a,b,c,d) = ( 1/2, -3/4, 9/4,-27/16) $</p>
<p>The parabola is plotted to verify everything. It checks out (C) and (D), but (A) and (B) are inconsistent with the given data and are clearly incorrect.</p>
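<p>A symbolic verification of these coefficients (my addition, using SymPy): the point $A(3,3)$ lies on the $+$ branch and $B(1,-1)$ on the $-$ branch, with the required tangent slopes.</p>

```python
import sympy as sp

x = sp.symbols('x')
a, b, c, d = sp.Rational(1, 2), sp.Rational(-3, 4), sp.Rational(9, 4), sp.Rational(-27, 16)
y_plus = a*x + b + sp.sqrt(c*x + d)    # branch through A(3, 3)
y_minus = a*x + b - sp.sqrt(c*x + d)   # branch through B(1, -1)
assert y_plus.subs(x, 3) == 3 and sp.diff(y_plus, x).subs(x, 3) == 1
assert y_minus.subs(x, 1) == -1 and sp.diff(y_minus, x).subs(x, 1) == -1
print("tangency at A and B verified")
```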
<p>EDIT1</p>
<p>Some calculations are needed (in progress) at the focus and at the orthogonal intersection of the tangents on the directrix</p>
<p>For parabola $ 4 a y = x^2 $ </p>
<p>Points of tangency
$$ (2at, a t^2)\quad (2a/t, a/t^2) $$
Point of intersection of polar chord tangents on directrix which happens at right angles at a point D
$$ a [( t-1/t), -1] $$
Length of tangents $T_1,T_2$ given by
$$ (T_1/a)^2 = t^4+ 3 t^2 + 1/t^2 +3 = 2, \quad (T_2/a)^2 = 1/t^4+ 3/ t^2 + t^2 +3 = 18 $$</p>
<p>$$\rightarrow t= \frac13 $$</p>
<p>$OF$ is perpendicular to the hypotenuse $AB$</p>
<p>$$ \frac{1}{OF^2} =\frac{1}{2} + \frac{1}{ 9 \cdot 2} $$</p>
<p>$$ OF = \frac{3}{\sqrt 5} ... $$</p>
<p>$$ \cos \beta= \frac{OF}{OB} = \frac{ 3}{\sqrt{10}},\quad \sin \beta = \frac{ 1}{\sqrt{10}} $$</p>
<p>to be continued </p>
<p><a href="https://i.stack.imgur.com/QmaAZ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QmaAZ.png" alt="enter image description here"></a></p>
|
730,198 | <blockquote>
<p>Show $f(x)=\sqrt{x^4+1} - \sqrt{x^4+x^2} \rightarrow -1/2$ for $x \rightarrow \infty$, $x \in \mathbb R$.</p>
</blockquote>
<p>I've tried $$\frac {(\sqrt{x^4+1} - \sqrt{x^4+x^2})(\sqrt{x^4+1} + \sqrt{x^4+x^2})}{\sqrt{x^4+1} + \sqrt{x^4+x^2} } = \frac {1-x^2} {\sqrt{x^4+1} + \sqrt{x^4+x^2}}$$</p>
<p>and $$\frac {1} {\sqrt{x^4+1} + \sqrt{x^4+x^2}} \rightarrow 0$$ so it's enough to verify $$\frac {-x^2} {\sqrt{x^4+1} + \sqrt{x^4+x^2}} \rightarrow -1/2$$.</p>
<p>However I'm having trouble showing this. </p>
| 5xum | 112,884 | <p>Yes, it is enough to verify the last expression. This is because if $f$ and $g$ both have limits as $x$ approaches $\infty$, then $\lim_{x\to\infty}(f(x)+g(x)) = \lim_{x\to\infty}f(x) + \lim_{x\to\infty} g(x)$.</p>
<p>When calculating the last limit, you can simply divide both sides of the fraction by $x^2$ to get</p>
<p>$$\frac{-x^2}{\sqrt{x^4+1} + \sqrt{x^4+x^2}} = \frac{-1}{\sqrt{1+\frac{1}{x^4}} + \sqrt{1 + \frac{1}{x^2}}}$$ which has an obvious limit.</p>
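<p>A numeric check (my addition): the raw difference and the rationalized form both approach $-1/2$.</p>

```python
import math

def f(x):
    return math.sqrt(x**4 + 1) - math.sqrt(x**4 + x**2)

def g(x):  # rationalized form, numerically stable for large x
    return (1 - x**2) / (math.sqrt(x**4 + 1) + math.sqrt(x**4 + x**2))

for x in (10.0, 100.0, 1000.0):
    print(x, f(x), g(x))
assert abs(g(1e6) + 0.5) < 1e-9
```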
|
1,853,464 | <p>I am using the Lorentz Force Equation and the electric-cross-magnetic field velocity equation to solve for the $E$ and $B$ fields given the known path of a particle moving in 3D. </p>
<p>So with that I have the following equations, where $a$ and $v$ are known:
<a href="https://i.stack.imgur.com/oL7fg.gif" rel="nofollow noreferrer">Lorentz Form</a> and the
<a href="https://i.stack.imgur.com/xuxHP.gif" rel="nofollow noreferrer">E-cross-B Form</a></p>
<p>My question: Are these equations enough to solve for the $x, y, z$ components of $B$ and $E$?</p>
<p>----------Edit---------------</p>
<p>So this is actually being used as an analogy for the propagation of nano-scale self replicating cracks in 3D. In this analogy, the incoming tensile force is represented by the electric force, and the delamination is represented by the magnetic force. </p>
<p>So I have a parabaloid spiral shaped crack which will represent the motion of a charged particle. Since I know the shape/path I can directly get the position, velocity, and acceleration functions in each direction.</p>
<p>With that said, is there a way to use the two equations linked to find all components of the electric and magnetic fields?</p>
| Robert Israel | 8,508 | <p>Consider the case where the particle is moving in a straight line with constant velocity. Then the magnetic field in the direction of that velocity has no effect.</p>
|
620,045 | <p>What is the mean and variance of Squared Gaussian: $Y=X^2$ where: $X\sim\mathcal{N}(0,\sigma^2)$?</p>
<p>It is interesting to note that the Gaussian R.V. here is zero-mean, so the non-central chi-square distribution doesn't apply.</p>
<p>Thanks.</p>
| Cm7F7Bb | 23,249 | <p>We can avoid using the fact that $X^2\sim\sigma^2\chi_1^2$, where $\chi_1^2$ is the chi-squared distribution with $1$ degree of freedom, and calculate the expected value and the variance just using the definition. We have that
$$
\operatorname E X^2=\operatorname{Var}X=\sigma^2
$$
since $\operatorname EX=0$ (see <a href="https://math.stackexchange.com/questions/768816/if-x-sim-n0-1-why-is-ex2-1/768824#768824">here</a>).</p>
<p>Also,
$$
\operatorname{Var}X^2=\operatorname EX^4-(\operatorname EX^2)^2.
$$
The fourth moment $\operatorname EX^4$ is equal to $3\sigma^4$ (see <a href="https://math.stackexchange.com/questions/1917647/proving-ex4-3%CF%834">here</a>). Hence,
$$
\operatorname{Var}X^2=3\sigma^4-\sigma^4=2\sigma^4.
$$</p>
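<p>A Monte Carlo sanity check (my addition, with an arbitrary $\sigma = 2$): the sample mean of $X^2$ should be near $\sigma^2 = 4$ and its sample variance near $2\sigma^4 = 32$.</p>

```python
import random

random.seed(0)
sigma = 2.0
n = 200_000
ys = [random.gauss(0.0, sigma) ** 2 for _ in range(n)]
mean = sum(ys) / n
var = sum((y - mean) ** 2 for y in ys) / n
print(mean, var)  # approximately 4 and 32
```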
|
1,102,638 | <p>Let $n\in \mathbb{N}$. Can someone help me prove this by induction:</p>
<p>$$\sum _{i=0}^{n}{i} =\frac { n\left( n+1 \right) }{ 2 } .$$</p>
| abel | 9,252 | <p>$$\sqrt{1 + x + x^2} = 1 + \dfrac{1}{2}(x+x^2) + \cdots = 1 + \dfrac{1}{2}x + \cdots$$ so the
$$\lim \limits_{x \to 0}{\frac{\sqrt{1 + x + x^2} - 1}{x}} =
\lim \limits_{x \to 0}\dfrac{1 + 1/2 x + \cdots - 1}{x} = \dfrac{1}{2}$$</p>
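<p>A quick numeric check of the same limit (my addition):</p>

```python
# lim_{x -> 0} (sqrt(1 + x + x^2) - 1) / x = 1/2
for x in (1e-2, 1e-4, 1e-6):
    ratio = ((1 + x + x * x) ** 0.5 - 1) / x
    print(x, ratio)
    assert abs(ratio - 0.5) < 10 * x + 1e-9
```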
|
3,857,494 | <p>I have the following sequence given recursively by:</p>
<p><span class="math-container">$$A_n - 2A_{n-1} - 4A_{n-2} = 0$$</span></p>
<p>Where:</p>
<p><span class="math-container">$$A_0 = 1, A_1 = 3, A_2 = 10, A_3 = 32, etc.$$</span></p>
<p>To find the generating function, I have done the following:</p>
<p><span class="math-container">$$\begin{aligned} A &= 1 + 3x + 10x^2 + 32x^3 + \dots
\\ -2xA &= 0 - 2x - 6x^2 - 20x^3 + \dots
\\ -4x^2 A &= 0 - 0 - 4x^2 - 12x^3 + \dots \end{aligned}$$</span></p>
<p>[NOTE: The <span class="math-container">$0s$</span> are there for formatting purposes, they're not part of the expressions]</p>
<p>Adding these together:</p>
<p><span class="math-container">$$(1 - 2x - 4x^2)A = 1 + x + 0$$</span></p>
<p><span class="math-container">$$A = \frac{1+x}{1 - 2x - 4x^2}$$</span></p>
<p>Which, I'm guessing, gives me the generating function.</p>
<p>My question is, how do I know if this is correct? What is this generating function supposed to tell me?</p>
<p>If I substitute certain values into the generating function, will I get the initial sequence given recursively or will I get the function, <span class="math-container">$A = 1 + 3x + 10x^2 + 32x^3 + ...$</span>?</p>
| Joshua P. Swanson | 86,777 | <p>To answer your literal question, you can have software expand your rational function to make sure the first terms are correct, <a href="https://www.wolframalpha.com/input/?i=Series%5B%281%2Bx%29%2F%281-2x-4x%5E2%29%2C%20%7Bx%2C0%2C50%7D%5D" rel="nofollow noreferrer">like this</a>.</p>
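<p>The same check can be scripted, e.g. with SymPy (my addition): expand the rational function as a series and confirm both the first terms and the recurrence $A_n = 2A_{n-1} + 4A_{n-2}$.</p>

```python
import sympy as sp

x = sp.symbols('x')
A = (1 + x) / (1 - 2*x - 4*x**2)
ser = sp.series(A, x, 0, 8).removeO()
coeffs = [ser.coeff(x, n) for n in range(8)]
print(coeffs[:4])  # [1, 3, 10, 32]
for n in range(2, 8):
    assert coeffs[n] == 2*coeffs[n - 1] + 4*coeffs[n - 2]
```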
|
3,065,818 | <blockquote>
<p>If <span class="math-container">$$z=\dfrac{\sqrt{3}-i}{2}$$</span> and <span class="math-container">$$(z^{95}+i^{67})^{94}=z^n,$$</span> find the smallest positive integral value of <span class="math-container">$n$</span>, where <span class="math-container">$i=\sqrt{-1}$</span>.</p>
</blockquote>
<p><span class="math-container">$\text{My Attempt:}$</span> First of all, I tried to convert <span class="math-container">$z$</span> into <span class="math-container">$\text{Euler's form}$</span>, so <span class="math-container">$z=e^{-i\pi/6}$</span>.
Then I raised <span class="math-container">$z$</span> to the <span class="math-container">$\text{95th}$</span> power, but then I got stuck and cannot proceed. Help. </p>
| tarit goswami | 579,780 | <p>Here is an alternate method. </p>
<p>Note that the sets of primes dividing <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are the same. Take any prime <span class="math-container">$p$</span> dividing <span class="math-container">$x$</span> (and
hence <span class="math-container">$y$</span>), and let <span class="math-container">$\alpha$</span> be the maximum power of <span class="math-container">$p$</span> in <span class="math-container">$x$</span> and <span class="math-container">$\beta$</span> be the maximum power of <span class="math-container">$p$</span> in <span class="math-container">$y$</span>. Then <span class="math-container">$x^a=y^b \implies p^{\alpha a}=p^{\beta b}$</span>, which implies <span class="math-container">$a|\beta b $</span> and <span class="math-container">$b| \alpha a$</span>. But remember that <span class="math-container">$\gcd(a,b)=1$</span>. So, <span class="math-container">$a|\beta$</span> and <span class="math-container">$b|\alpha$</span>. </p>
<p>Suppose <span class="math-container">$\beta= a\cdot \beta_p$</span> and <span class="math-container">$\alpha=b\cdot \alpha_p$</span>. Then we have <span class="math-container">$\alpha a=\beta b$</span>, or <span class="math-container">$b\alpha_pa =a\beta_p b$</span>, or <span class="math-container">$\alpha_p=\beta_p$</span>. So, for each prime <span class="math-container">$p$</span> dividing <span class="math-container">$x$</span>, we have such an <span class="math-container">$\alpha_p$</span>. Check that <span class="math-container">$n=\prod_{p\mid x}p^{\alpha_p}$</span> satisfies the required property. </p>
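<p>A small brute-force illustration of the conclusion (my addition): for coprime exponents $a=2$, $b=3$, every solution of $x^a = y^b$ found in a small range has $x$ a perfect $b$-th power and $y$ a perfect $a$-th power, i.e. $x = n^b$, $y = n^a$.</p>

```python
def is_kth_power(m, k):
    # integer k-th power test via rounding the float root
    r = round(m ** (1.0 / k))
    return any((r + d) ** k == m for d in (-1, 0, 1))

a, b = 2, 3  # coprime exponents
hits = []
for x in range(2, 200):
    for y in range(2, 1000):
        if x**a == y**b:
            assert is_kth_power(x, b) and is_kth_power(y, a)
            hits.append((x, y))
print(hits)  # [(8, 4), (27, 9), (64, 16), (125, 25)]
```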
|
1,309,728 | <p>I know what a 3x10 looks like, but I cannot seem to find a distinguishable pattern to extend it to a 3x14.</p>
<p>The 3x10 pattern I'm using looks like the one at the top right of figure 6 of <a href="http://faculty.olin.edu/~sadams/DM/ktpaper.pdf" rel="nofollow">this paper</a>.</p>
<p>Any help would be greatly appreciated.</p>
| Bill Dubuque | 242 | <p>${\rm mod}\ x\!-\!a,\,y\!-\!b\!:\,\ x\equiv a,\,y\equiv b\,\Rightarrow\, xy-1\equiv ab-1\equiv0 $</p>
|
436,172 | <p>Let $a,b,c,d$ be real numbers. Show that
$$2\sqrt{a^2+c^2}+\sqrt{a^2+c^2+3(b^2+d^2)-2\sqrt{3}(ab+cd)}+\sqrt{a^2+c^2+3(b^2+d^2)+2\sqrt{3}(ab+cd)}\ge6\sqrt{|ad-bc|}$$</p>
<p>This problem was created by the famous Chinese mathematician Hua Luogeng (<a href="http://en.wikipedia.org/wiki/Hua_Luogeng" rel="nofollow">http://en.wikipedia.org/wiki/Hua_Luogeng</a>)
when he was a child. The known methods for it are rather ugly; I think this inequality should have a nicer proof. Thank you.</p>
| S.B. | 35,778 | <p>Let $x=[a,c]^T$ and $y=\sqrt{3}[b,d]^T$ be two vector in the plane and denote the cross product by $\times$. The inequality is equivalent to $$2\Vert x\Vert+\Vert x+y\Vert+\Vert x-y\Vert\geq 2\sqrt{3\Vert x\times y\Vert}\iff\\4\Vert x\Vert^2+(\Vert x+y\Vert+\Vert x-y\Vert)^2+4\Vert x\Vert(\Vert x+y\Vert+\Vert x-y\Vert)\geq12\Vert x\times y\Vert.$$ For compactness let $u=x+y$ and $v=x-y$. Then we must show $$2\Vert u\Vert^2+2\Vert v\Vert^2+2\langle u,v\rangle+2\Vert u\Vert\Vert v\Vert+2\Vert u+v\Vert(\Vert u\Vert+\Vert v\Vert)\geq 6\Vert u\times v\Vert\iff\\\Vert u\Vert^2+\Vert v\Vert^2+\langle u,v\rangle+\Vert u\Vert\Vert v\Vert+\Vert u+v\Vert(\Vert u\Vert+\Vert v\Vert)\geq 3\Vert u\times v\Vert.$$ Note that $\Vert u+v\Vert\Vert u\Vert\geq\Vert (u+v)\times u\Vert=\Vert u\times v\Vert$, $\Vert u+v\Vert\Vert v\Vert\geq\Vert (u+v)\times v\Vert=\Vert u\times v\Vert$, and $\Vert u\Vert \Vert v\Vert+\langle u,v\rangle\geq0$. Therefore, it suffices to have $\Vert u\Vert^2 + \Vert v\Vert^2\geq\Vert u\times v\Vert$ which holds since the LHS is not less than $2\Vert u\Vert \Vert v\Vert$ which in turn is greater than or equal to $\Vert u\times v\Vert$.</p>
<p><strong>An alternative geometric proof.</strong> Consider a triangle with vertices at $x$,$y$, and $-x$. The LHS is $2p$, the perimeter of the triangle, whereas the RHS is $2\sqrt{3S}$ with $S$ being the area of the triangle. Thus we have to show $p\geq\sqrt{3S}$. Raise both sides to four and use Heron's formula for $S^2$. We have to show $p^3\geq27(p-\alpha)(p-\beta)(p-\gamma)$ where $\alpha,\beta$, and $\gamma$ are the side lengths of the considered triangle. This last inequality is simply an AM-GM inequality.</p>
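<p>A random numerical test of the inequality (my addition, not a proof):</p>

```python
import math
import random

random.seed(1)

def lhs(a, b, c, d):
    s = a*a + c*c
    t = 3*(b*b + d*d)
    u = 2*math.sqrt(3)*(a*b + c*d)
    # max(0, .) guards against tiny negative round-off inside the square roots
    return (2*math.sqrt(s)
            + math.sqrt(max(0.0, s + t - u))
            + math.sqrt(max(0.0, s + t + u)))

for _ in range(10_000):
    a, b, c, d = (random.uniform(-5, 5) for _ in range(4))
    assert lhs(a, b, c, d) >= 6*math.sqrt(abs(a*d - b*c)) - 1e-9
print("held on 10000 random samples; equality e.g. at (1,0,0,1):", lhs(1, 0, 0, 1))
```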
|
436,172 | <p>Let $a,b,c,d$ be real numbers. Show that
$$2\sqrt{a^2+c^2}+\sqrt{a^2+c^2+3(b^2+d^2)-2\sqrt{3}(ab+cd)}+\sqrt{a^2+c^2+3(b^2+d^2)+2\sqrt{3}(ab+cd)}\ge6\sqrt{|ad-bc|}$$</p>
<p>This problem was created by the famous Chinese mathematician Hua Luogeng (<a href="http://en.wikipedia.org/wiki/Hua_Luogeng" rel="nofollow">http://en.wikipedia.org/wiki/Hua_Luogeng</a>)
when he was a child. The known methods for it are rather ugly; I think this inequality should have a nicer proof. Thank you.</p>
| zyx | 14,120 | <p>It is the isoperimetric inequality for a triangle with vertices $0$, $x+y$, and $2x$, where $x=(a,c)$ and $y = \sqrt{3} (b,d)$. </p>
<p>If $P$ is the perimeter of the triangle and $A$ its area, Hua's inequality says $$|2x| + |x-y| + |x+y| \geq 6\sqrt{\frac{A}{\sqrt{3}}} .$$ The sum on the left is $P$ and the squared inequality is $ P^2 \geq (12 \sqrt{3}) A$. </p>
<p>This is the correct lower bound on $P^2/A$, because equality is attained at an equilateral triangle. </p>
<p>In Hua's parameterization, the triangle is equilateral when $(a,c)$ is a 90 degree rotation of $(b,d)$, or $(a,c) = \pm (-d,b)$. So it could be an inequality discovered by trying to prove the isoperimetric inequality (for triangles) in coordinates and choosing variables in which the algebra is simplified. Which is something that a very clever child might do. But did he rediscover the isoperimetric principle on his own? </p>
|
1,517,698 | <p>What notation should I use for the set of the form</p>
<p>$$\{a_1,a_2,a_3,a_4\}$$</p>
<p>where $a_i \in \{0,1\}$ for $i = 1,2,3,4$?</p>
<p>It's an output from an indicator function that is evaluated over some "domain" $\{b_1,b_2,b_3,b_4\}$. I.e. it produces 0 or 1 for each $b_i$. So the result (I think) should be a set $\{a_1,a_2,a_3,a_4\}$. Since the input is also a set of four elements. It makes no sense to consider the output as $\{0,1\}$ as then all "sequences" the indicator function produces could be the same.</p>
| vadim123 | 73,324 | <p>A set is an unordered collection of elements. $\{1,1,0,1\}$ is no different from $\{0,1\}$. </p>
<p>It appears that OP is looking for an ordered collection of four elements. The natural notation for this is a <em>function</em>. We define $$f:\{b_1,b_2,b_3,b_4\}\to \{0,1\}.$$</p>
<p>We can also use a shorthand to denote the function, via an ordered 4-tuple, e.g. $$f \text{ "is" } (a_1, a_2, a_3, a_4)$$where each $a_i\in\{0,1\}$.</p>
|
1,517,698 | <p>What notation should I use for the set of the form</p>
<p>$$\{a_1,a_2,a_3,a_4\}$$</p>
<p>where $a_i \in \{0,1\}$ for $i = 1,2,3,4$?</p>
<p>It's an output from an indicator function that is evaluated over some "domain" $\{b_1,b_2,b_3,b_4\}$. I.e. it produces 0 or 1 for each $b_i$. So the result (I think) should be a set $\{a_1,a_2,a_3,a_4\}$. Since the input is also a set of four elements. It makes no sense to consider the output as $\{0,1\}$ as then all "sequences" the indicator function produces could be the same.</p>
| Emanuele Paolini | 59,304 | <p>What you are speaking about are not sets. Sets are collections of elements without any order. There are only three sets composed by only zeros and ones: $\{\}, \{0\}, \{1\}, \{0,1\}$.</p>
<p>What you are speaking about are $n$-uples. Just use $()$ instead of $\{\}$ e.g.:
$$
(a_1,a_2,a_3,a_4) = (0,1,1,0).
$$</p>
|
1,917,313 | <p>I am to find a combinatorial argument for the following identity:</p>
<p>$$\sum_k \binom {2r} {2k-1}\binom{k-1}{s-1} = 2^{2r-2s+1}\binom{2r-s}{s-1}$$</p>
<p>For the right-hand side, I was thinking that it would just be the number of ways to choose at least $s-1$ elements out of a $[2r-s]$ set. However, for the left-hand side, I don't really know what it is representing. </p>
<p>Any help would be greatly appreciated!</p>
| Brian M. Scott | 12,042 | <p>HINT: We have <span class="math-container">$2r-s$</span> white balls numbered <span class="math-container">$1$</span> through <span class="math-container">$2r-s$</span>. We pick <span class="math-container">$s-1$</span> of them and paint those balls red, and we stick gold stars on any subset of the remaining white balls; since there are <span class="math-container">$2r-s-(s-1)=2r-2s+1$</span> white balls remaining, there are <span class="math-container">$$\binom{2r-s}{s-1}2^{2r-2s+1}$$</span> possible outcomes of this sequence of operations.</p>
<p>Alternatively, we can start with <span class="math-container">$2r$</span> white balls numbered <span class="math-container">$1$</span> through <span class="math-container">$2r$</span>. We pick an odd number of these balls, <span class="math-container">$2k-1$</span> for some <span class="math-container">$k\in[r]$</span>, and line them up in numerical order. The <span class="math-container">$k$</span>-th ball in line is the one in the middle; call it <span class="math-container">$B$</span>. We choose <span class="math-container">$s-1$</span> of the <span class="math-container">$k-1$</span> chosen balls with numbers smaller than that of <span class="math-container">$B$</span> and paint them red. This is possible only if <span class="math-container">$k\ge s$</span>, in which case the number on <span class="math-container">$B$</span> is at most <span class="math-container">$2r-s+1$</span>, and the red balls all have numbers in the set <span class="math-container">$[2r-s]$</span>. We now throw away the balls with numbers not in <span class="math-container">$[2r-s]$</span> and stick gold stars on any white balls left in the set of chosen balls. At this point we have <span class="math-container">$s-1$</span> red balls and possibly some white balls with gold stars. There are </p>
<p><span class="math-container">$$\sum_k\binom{2r}{2k-1}\binom{k-1}{s-1}$$</span></p>
<p>possible outcomes. </p>
<ul>
<li>Verify that these possible outcomes are exactly the same as for the first sequence of operations.</li>
</ul>
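<p>A direct numerical check of the identity (my addition):</p>

```python
from math import comb

def lhs(r, s):
    return sum(comb(2*r, 2*k - 1) * comb(k - 1, s - 1) for k in range(1, r + 1))

def rhs(r, s):
    return 2**(2*r - 2*s + 1) * comb(2*r - s, s - 1)

for r in range(1, 8):
    for s in range(1, r + 1):
        assert lhs(r, s) == rhs(r, s), (r, s)
print("identity verified for all 1 <= s <= r < 8")
```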
|
3,554,188 | <blockquote>
<p>I was given <span class="math-container">$$|ax - 11| = 4x - 10$$</span> has a positive integral solution and <span class="math-container">$a$</span> is a positive integer.</p>
<p>I was asked what was <span class="math-container">$x, a$</span></p>
</blockquote>
<p>If <span class="math-container">$$ax > 11, $$</span> then we have <span class="math-container">$x=\frac{1}{a-4}$</span>;
if <span class="math-container">$$ax < 11,$$</span> then we have <span class="math-container">$x=\frac{21}{a+4}$</span>.</p>
<p>What should I do now? I don't understand what "one solution" means, because as far as I can tell there are several possibilities for <span class="math-container">$x,a$</span> to be solutions. The answer key is <span class="math-container">$x = a = 3$</span>.</p>
| Will Jagy | 10,400 | <p>It is safer to split into cases and confirm each candidate against the original equation.</p>
<p>We can have the thing in the absolute value either positive or negative,</p>
<p>(I) positive
<span class="math-container">$$ ax-11 = 4x-10 $$</span>
<span class="math-container">$$ (a-4)x = 1 $$</span>
Since we demand positive integers, we get <span class="math-container">$a=5, x=1.$</span> Then check: <span class="math-container">$$ |ax-11| = |5 - 11| = |-6| = 6, $$</span> while
<span class="math-container">$$ 4x-10 = -6 $$</span>
So this case fails: we can't have <span class="math-container">$ax-11$</span> positive</p>
<p>(II) negative
<span class="math-container">$$ 11 - ax = 4x-10 $$</span>
<span class="math-container">$$ 21 = (a+4)x $$</span>
since <span class="math-container">$a$</span> is a positive integer, <span class="math-container">$a+4\ge 5$</span>, so <span class="math-container">$a+4$</span> cannot be <span class="math-container">$1$</span> or <span class="math-container">$3$</span>; the remaining choices are: <span class="math-container">$a+4 = 7,$</span> <span class="math-container">$x=3,$</span> so <span class="math-container">$a = 3, x = 3.$</span> OR, <span class="math-container">$a+4 = 21,$</span> <span class="math-container">$x=1,$</span> so <span class="math-container">$a=17, x=1.$</span>
Checking:
<span class="math-container">$a=3,x=3,$</span>
<span class="math-container">$$ |ax-11| = |9 - 11| = |-2| = 2, $$</span> while
<span class="math-container">$$ 4x-10 = 12 - 10 = 2. $$</span>
This one works. </p>
<p>Finally
<span class="math-container">$a=17,x=1,$</span>
<span class="math-container">$$ |ax-11| = |17 - 11| = 6, $$</span> while
<span class="math-container">$$ 4x-10 = 8 - 10 = -2. $$</span>
This one fails.</p>
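The case analysis above is easy to confirm mechanically. This short script (my own sketch, not part of the original answer) checks the three candidates and then searches a range of positive integers:

```python
def check(a, x):
    # Does (a, x) satisfy |a*x - 11| = 4*x - 10 ?
    return abs(a*x - 11) == 4*x - 10

# The three candidates produced by the case analysis
assert not check(5, 1)    # case (I): |5-11| = 6 but 4*1-10 = -6
assert check(3, 3)        # case (II): |9-11| = 2 and 4*3-10 = 2
assert not check(17, 1)   # case (II): |17-11| = 6 but 4*1-10 = -2

# Exhaustive search over a range of positive integers confirms uniqueness
sols = [(a, x) for a in range(1, 200) for x in range(1, 200) if check(a, x)]
print(sols)  # → [(3, 3)]
```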
|
4,326,073 | <p><a href="https://i.stack.imgur.com/RTyOy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RTyOy.jpg" alt="enter image description here" /></a>
I came across questions in the free module section of my abstract algebra text. In the text, the notation <span class="math-container">$End_{R}(V)$</span> denotes the set of all <span class="math-container">$R$</span>-module endomorphisms of <span class="math-container">$V$</span>. <a href="http://homepage.math.uiowa.edu/%7Egoodman/algebrabook.dir/book.2.6.pdf" rel="nofollow noreferrer">Algebra: Abstract and Concrete, exercise 8.1.9 on p358 in the attached picture of the linked text</a> Onto the question:</p>
<blockquote>
<p>Let <span class="math-container">$V$</span> be a finite dimensional vector space over a field <span class="math-container">$K$</span>. Let <span class="math-container">$T\in End_{K}(V)$</span>. Give <span class="math-container">$V$</span> the corresponding <span class="math-container">$K[x]$</span>-module structure defined by <span class="math-container">$\sum_{i} \alpha_{i}x^{i}v=\sum_{i}\alpha_{i}T^{i}(v).$</span> Show that <span class="math-container">$V$</span> is not free as a <span class="math-container">$K[x]$</span>-module.</p>
</blockquote>
<p>In this question, I don't understand how the rule <span class="math-container">$\sum_{i} \alpha_{i}x^{i}v=\sum_{i}\alpha_{i}T^{i}(v)$</span> affects whether <span class="math-container">$V$</span> can be a free <span class="math-container">$K[x]$</span>-module. If I take a finite basis <span class="math-container">$B$</span> for <span class="math-container">$V$</span>, where <span class="math-container">$B=\{v_1, v_2,...v_n\}$</span>, with coefficients <span class="math-container">$\alpha_i \in K$</span>, could it be that for each term <span class="math-container">$\alpha_i x^i v_{i}$</span> in the identity <span class="math-container">$\alpha_i x^i v_{i} = \alpha_{i}T^{i}(v_i)$</span>, the coefficients <span class="math-container">$\alpha_{i}$</span> or <span class="math-container">$\alpha_{i}x^{i}$</span> might not equal zero? Actually, I am assuming the <span class="math-container">$v$</span> in the definition <span class="math-container">$\alpha_i x^i v = \alpha_{i}T^{i}(v)$</span> refers to basis elements from <span class="math-container">$B$</span>, but I am not sure where the <span class="math-container">$x^{i}$</span> is supposed to come from. Is it from the vector space <span class="math-container">$V$</span> or from the field <span class="math-container">$K$</span> along with the <span class="math-container">$\alpha_i$</span>?</p>
<p>Thank you in advance.</p>
| QC_QAOA | 364,346 | <p>Let <span class="math-container">$\epsilon>0$</span> be given. By definition, there exists <span class="math-container">$N\in\mathbb{N}$</span> such that <span class="math-container">$n\geq N$</span> implies <span class="math-container">$|a_n-a|<\epsilon$</span>. This then implies <span class="math-container">$a-\epsilon<a_n<a+\epsilon$</span> and we have</p>
<p><span class="math-container">$$\sum_{i=1}^n i a_i=\sum_{i=1}^N ia_i+\sum_{i=N+1}^n ia_i$$</span></p>
<p><span class="math-container">$$\Rightarrow \sum_{i=N+1}^n i(a-\epsilon)<\sum_{i=1}^n i a_i-\sum_{i=1}^N ia_i<\sum_{i=N+1}^n i(a+\epsilon)$$</span></p>
<p><span class="math-container">$$\Rightarrow (a-\epsilon)\frac{1}{2}(n-N)(n+N+1)<\sum_{i=1}^n i a_i-\sum_{i=1}^N ia_i<(a+\epsilon)\frac{1}{2}(n-N)(n+N+1)$$</span></p>
<p>This then implies</p>
<p><span class="math-container">$$(a-\epsilon)\frac{(n-N)(n+N+1)}{2n^2}<\frac{1}{n^2}\sum_{i=1}^n i a_i-\frac{1}{n^2}\sum_{i=1}^N ia_i<(a+\epsilon)\frac{(n-N)(n+N+1)}{2n^2}$$</span></p>
<p>Taking limits as <span class="math-container">$n$</span> approaches infinity gives</p>
<p><span class="math-container">$$\frac{a-\epsilon}{2}=\lim_{n\to\infty}(a-\epsilon)\frac{(n-N)(n+N+1)}{2n^2}$$</span></p>
<p><span class="math-container">$$<\lim_{n\to\infty}b_n-\left[\sum_{i=1}^N ia_i\right]\lim_{n\to\infty}\frac{1}{n^2}$$</span></p>
<p><span class="math-container">$$<\lim_{n\to\infty}(a+\epsilon)\frac{(n-N)(n+N+1)}{2n^2}=\frac{a+\epsilon}{2}$$</span></p>
<p>Since the limit of <span class="math-container">$\frac{1}{n^2}$</span> is <span class="math-container">$0$</span> this simplifies to</p>
<p><span class="math-container">$$\frac{a-\epsilon}{2}<\lim_{n\to\infty}b_n<\frac{a+\epsilon}{2}$$</span></p>
<p>Since <span class="math-container">$\epsilon$</span> was arbitrary we conclude</p>
<p><span class="math-container">$$\frac{a}{2}\leq \lim_{n\to\infty}b_n\leq \frac{a}{2}$$</span></p>
<p>We conclude</p>
<p><span class="math-container">$$\lim_{n\to\infty}b_n=\frac{a}{2}$$</span></p>
<hr />
<p>EDIT: As requested, if for all <span class="math-container">$\epsilon>0$</span> we have</p>
<p><span class="math-container">$$a_n-\epsilon<b_n<a_n+\epsilon$$</span></p>
<p>and <span class="math-container">$a_n\to a$</span>, then <span class="math-container">$b_n\to a$</span>. First, note that we can rearrange this to</p>
<p><span class="math-container">$$|b_n-a_n|<\epsilon$$</span></p>
<p>Let <span class="math-container">$\delta>0$</span> be given and choose <span class="math-container">$\epsilon=\frac{\delta}{2}$</span>. Then choose <span class="math-container">$N$</span> such that <span class="math-container">$n\geq N$</span> implies <span class="math-container">$|a_n-a|<\frac{\delta}{2}$</span>. This gives us</p>
<p><span class="math-container">$$|b_n-a|=|b_n-a+a_n-a_n|\leq |b_n-a_n|+|a-a_n|<\frac{\delta}{2}+\frac{\delta}{2}=\delta$$</span></p>
<p>as desired.</p>
|
4,326,073 | <p><a href="https://i.stack.imgur.com/RTyOy.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RTyOy.jpg" alt="enter image description here" /></a>
I came across questions in the free module section of my abstract algebra text. In the text, the notation <span class="math-container">$End_{R}(V)$</span> denotes the set of all <span class="math-container">$R$</span>-module endomorphisms of <span class="math-container">$V$</span>. <a href="http://homepage.math.uiowa.edu/%7Egoodman/algebrabook.dir/book.2.6.pdf" rel="nofollow noreferrer">Algebra: Abstract and Concrete, exercise 8.1.9 on p358 in the attached picture of the linked text</a> Onto the question:</p>
<blockquote>
<p>Let <span class="math-container">$V$</span> be a finite dimensional vector space over a field <span class="math-container">$K$</span>. Let <span class="math-container">$T\in End_{K}(V)$</span>. Give <span class="math-container">$V$</span> the corresponding <span class="math-container">$K[x]$</span>-module structure defined by <span class="math-container">$\sum_{i} \alpha_{i}x^{i}v=\sum_{i}\alpha_{i}T^{i}(v).$</span> Show that <span class="math-container">$V$</span> is not free as a <span class="math-container">$K[x]$</span>-module.</p>
</blockquote>
<p>In this question, I don't understand how the rule <span class="math-container">$\sum_{i} \alpha_{i}x^{i}v=\sum_{i}\alpha_{i}T^{i}(v)$</span> affects whether <span class="math-container">$V$</span> can be a free <span class="math-container">$K[x]$</span>-module. If I take a finite basis <span class="math-container">$B$</span> for <span class="math-container">$V$</span>, where <span class="math-container">$B=\{v_1, v_2,...v_n\}$</span>, with coefficients <span class="math-container">$\alpha_i \in K$</span>, could it be that for each term <span class="math-container">$\alpha_i x^i v_{i}$</span> in the identity <span class="math-container">$\alpha_i x^i v_{i} = \alpha_{i}T^{i}(v_i)$</span>, the coefficients <span class="math-container">$\alpha_{i}$</span> or <span class="math-container">$\alpha_{i}x^{i}$</span> might not equal zero? Actually, I am assuming the <span class="math-container">$v$</span> in the definition <span class="math-container">$\alpha_i x^i v = \alpha_{i}T^{i}(v)$</span> refers to basis elements from <span class="math-container">$B$</span>, but I am not sure where the <span class="math-container">$x^{i}$</span> is supposed to come from. Is it from the vector space <span class="math-container">$V$</span> or from the field <span class="math-container">$K$</span> along with the <span class="math-container">$\alpha_i$</span>?</p>
<p>Thank you in advance.</p>
| TravorLZH | 748,964 | <p>We rely on the following lemma:</p>
<p><strong>Lemma (Stolz-Césaro):</strong> Suppose <span class="math-container">$y_n\ge0$</span>, <span class="math-container">$x_n=o(y_n)$</span>, and <span class="math-container">$\sum_{k\le n}y_k\to+\infty$</span> as <span class="math-container">$n\to+\infty$</span>. Then</p>
<p><span class="math-container">$$
\sum_{k\le n}x_k=o\left(\sum_{k\le n}y_k\right)
$$</span></p>
<p>Before going on to prove the lemma, let's see how it helps solve the OP's problem:</p>
<p>Since <span class="math-container">$n^2\sim n(n+1)$</span> as <span class="math-container">$n\to+\infty$</span>, we can plug in <span class="math-container">$x_n=n(a_n-a)$</span> and <span class="math-container">$y_n=2n$</span>. It is easy to show that <span class="math-container">$x_n=o(y_n)$</span>, so we can now apply the lemma to obtain</p>
<p><span class="math-container">$$
\lim_{n\to\infty}{1\over n^2}\sum_{k\le n}k(a_k-a)=0
$$</span></p>
<p>This clearly indicates that</p>
<p><span class="math-container">$$
\lim_{n\to\infty}{1\over n^2}\sum_{k\le n}ka_k=\frac a2
$$</span></p>
<p><em>Proof of the lemma.</em></p>
<p>By definition, we know for all <span class="math-container">$\varepsilon>0$</span> there exists <span class="math-container">$N=N(\varepsilon)$</span> such that <span class="math-container">$|x_n|<\varepsilon y_n$</span> holds for all <span class="math-container">$n>N$</span>. This means</p>
<p><span class="math-container">$$
\left|\sum_{k\le n}x_k\right|<\left|\sum_{k\le N}x_k\right|+\varepsilon\sum_{N<k\le n}y_k
$$</span></p>
<p>As a result, we see that for all <span class="math-container">$\varepsilon>0$</span></p>
<p><span class="math-container">$$
\limsup_{n\to\infty}\left|\sum_{k\le n}x_k\right|\left(\sum_{k\le n}y_k\right)^{-1}\le\varepsilon
$$</span></p>
<p>Consequently the limit is zero, concluding the proof.</p>
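The lemma itself is easy to see numerically (a sketch of mine, not part of the proof): with <span class="math-container">$x_n=\sqrt n$</span> and <span class="math-container">$y_n=n$</span> we have <span class="math-container">$x_n=o(y_n)$</span>, and the ratio of partial sums should decay like <span class="math-container">$\frac{4}{3}n^{-1/2}$</span>.

```python
import math

# Partial-sum ratio for x_n = sqrt(n), y_n = n; should tend to 0 like 1/sqrt(n)
def ratio(n):
    sx = sum(math.sqrt(k) for k in range(1, n + 1))
    sy = sum(range(1, n + 1))
    return sx / sy

r = [ratio(100), ratio(10000)]
print(r)  # roughly [0.133, 0.0133] — shrinking by ~sqrt(100) = 10
```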
|
2,596,213 | <p>I'm having huge troubles with problems like this. I know the following:</p>
<p>$$\frac{\sin{x}}{x}=1-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)$$</p>
<p>and </p>
<p>$$\ln{(1+t)}=t-\frac{t^2}{2}+\frac{t^3}{3}+O(t^4)$$</p>
<p>So</p>
<p>$$\ln{\left(1+\left(-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right)\right)}=\\\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]-\frac{\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]^2}{2}+\frac{\left[-\frac{x^2}{3!}+\frac{x^4}{5!}-\frac{x^6}{7!}+O(x^7)\right]^3}{3}+O(x^8).$$</p>
<p>But how on earth would one simplify this? Obviously I should not need to manually expand something of the form $(a+b+c+d+e)^n$. Seriously don't understand what is happening here.</p>
<p>Also, how do I know to what order $O(x^?)$ I should expand the initial functions? </p>
| Dylan | 135,643 | <p>If you want a series expansion up to $x^6$, then</p>
<p>$$ \left( -\frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + O(x^8)\right)^2 =
\frac{x^4}{(3!)^2} - 2\frac{x^6}{3!5!} + O(x^8) $$</p>
<p>$$ \left( -\frac{x^2}{3!} + \frac{x^4}{5!} - \frac{x^6}{7!} + O(x^8)\right)^3 = -\frac{x^6}{(3!)^3} + O(x^8) $$</p>
<p>The remaining higher-order terms are all grouped in with $O(x^8)$</p>
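These truncations can be confirmed numerically (a quick sketch of mine): with <span class="math-container">$T(x)$</span> the cubic truncation, the differences below are <span class="math-container">$O(x^8)$</span> and therefore tiny for small <span class="math-container">$x$</span>.

```python
from math import factorial

# T is the series truncation -x^2/3! + x^4/5! - x^6/7! used above
def T(x):
    return -x**2/factorial(3) + x**4/factorial(5) - x**6/factorial(7)

for x in (0.05, 0.1, 0.2):
    err2 = T(x)**2 - (x**4/36 - 2*x**6/720)   # claimed T^2 up to O(x^8)
    err3 = T(x)**3 - (-x**6/216)              # claimed T^3 up to O(x^8)
    print(x, err2, err3)                      # both errors shrink like x^8
```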
|
109,298 | <p>I'm an applied model theorist, and open image theorems are important in the mathematical structures I study (they limit the number of types of elements being realised, and therefore keep things model theoretically nice e.g. stable). </p>
<p>So I have some idea as to why these open image theorems should hold from a model theoretic viewpoint, and I know that these are regarded as important theorems, but I don't think I've ever come across a diophantine application of an open image theorem in the literature and I'd like to see one.</p>
<p>I'm most familiar with Serre's open image theorem for elliptic curves so an example in this context would be ideal.</p>
| Kevin Ventullo | 5,513 | <p>Well, this isn't explicitly diophantine, but here goes:</p>
<p>If $f$ is a level one weight $k$ eigenform with rational coefficients, the image of the attached Galois representation </p>
<p>$\rho_f:G_{\mathbb{Q}}\rightarrow GL_2(\hat{\mathbb{Z}})$ </p>
<p>is open in the subgroup $G$ defined by demanding </p>
<p>$det(G)\subset \hat{\mathbb{Z}}^{\times{k-1}}$. </p>
<p>In particular, the image contains an open subgroup of $SL_2(\hat{\mathbb{Z}})$. This has the following arithmetic consequence:</p>
<p><em>For almost all prime numbers $p$, there exists a non-solvable Galois extension $K/\mathbb{Q}$ ramified only at $p$</em>. </p>
<p>In fact, Serre shows that for the unique normalized weight 12 level 1 cuspform, the list of exceptional primes is 2,3,5,7,23,691. This theorem is now known for all p, although the last known case, p=7, was resolved only very recently by Dieulefait.</p>
|
1,019,078 | <p>Let $\alpha_1=[ 2,1,3,0] $
$\alpha_2=[ 1,1,1,-1] $, $\alpha_3=[ 2,-1,5,4] $, $\alpha_4=[ 1,2,0,-3] $, $\alpha_5=[ 3,1,6,1] $
be vectors in $\mathbb{R}^4$. From the system of vectors ($\alpha_1,\alpha_2, \alpha_3, \alpha_4, \alpha_5 $), choose a basis of the vector space $V=lin(\alpha_1,\alpha_2, \alpha_3, \alpha_4, \alpha_5)\subset\mathbb{R}^4$ spanned by these vectors.
I need help with setting up a proper matrix for this question.</p>
| mfl | 148,513 | <p>Since</p>
<p>$$\mathrm{rank}\begin{pmatrix}2 & 1 & 2 & 1 & 3\cr 1 & 1 & −1 & 2 & 1\cr 3 & 1 & 5 & 0 & 6\cr 0 & −1 & 4 & −3 & 1\end{pmatrix}=3$$</p>
<p>the vector subspace $\mathrm{span}\{\alpha_1,\cdots,\alpha_5\}$ has dimension $3.$ So, you need to find three linearly independent vectors.</p>
<p>Now, since </p>
<p>$$\mathrm{det}\begin{pmatrix}2 & 1 & 3\cr −1 & 2 & 1\cr 5 & 0 & 6\end{pmatrix}=24+5-30+6=5\ne 0$$ we have that $\{\alpha_3,\alpha_4,\alpha_5\}$ is a linearly independent set of vectors. So we have got a basis.</p>
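Both the rank and the determinant can be verified exactly in a few lines (a sketch using rational arithmetic; not part of the original answer):

```python
from fractions import Fraction

# Columns are the coordinate vectors of alpha_1 ... alpha_5
A = [[2, 1, 2, 1, 3],
     [1, 1, -1, 2, 1],
     [3, 1, 5, 0, 6],
     [0, -1, 4, -3, 1]]

def rank(M):
    # Exact Gaussian elimination over the rationals
    M = [[Fraction(v) for v in row] for row in M]
    rows, cols, r = len(M), len(M[0]), 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f*b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def det3(M):
    # Cofactor expansion of a 3x3 determinant along the first row
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

print(rank(A))                                   # → 3
print(det3([[2, 1, 3], [-1, 2, 1], [5, 0, 6]]))  # → 5
```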
|
3,729,851 | <p>So I have the following question here.</p>
<blockquote>
<p>Suppose that <span class="math-container">$y_1$</span> solves <span class="math-container">$2y''+y'+3x^2y=0$</span> and <span class="math-container">$y_2$</span> solves <span class="math-container">$2y''+y'+3x^2y=e^x$</span>. Which of the following is a solution of <span class="math-container">$2y''+y'+3x^2y=-2e^x$</span>?</p>
</blockquote>
<blockquote>
<p>(A) <span class="math-container">$3y_1-2y_2$</span></p>
</blockquote>
<blockquote>
<p>(B) <span class="math-container">$y_1+2y_2$</span></p>
</blockquote>
<blockquote>
<p>(C) <span class="math-container">$2y_1-y_2$</span></p>
</blockquote>
<blockquote>
<p>(D) <span class="math-container">$y_1+2y_2-2e^x$</span></p>
</blockquote>
<blockquote>
<p>(E) None of the above.</p>
</blockquote>
<p>The answer is supposed to be <span class="math-container">$A$</span>. But I am not really sure how that happened.</p>
<p>I know that for <span class="math-container">$2y''+y'+3x^2y=-2e^x$</span> the solution is always the homogeneous part and the particular part added together.</p>
<p>Furthermore, I know that the homogeneous part is given as <span class="math-container">$y_1$</span>.</p>
<p>I then know that for <span class="math-container">$2y''+y'+3x^2y=e^x$</span> the solution for that one is composed of the homogeneous part and the particular part and that I can also write the ODE as <span class="math-container">$-4y''-2y'-6x^2y=-2e^x$</span>. So this implies that the homogeneous portion is just <span class="math-container">$-2y_1$</span>.</p>
<p>I can't get further than that though. Is my thought process right so far? If not, what more can I do and how can I proceed from here? I can't even solve the first two equations since they have no analytic solution.</p>
<p>This is from an old exam I was looking at an not an assignment so feel free to show work.</p>
| Lutz Lehmann | 115,115 | <p>This is just linear algebra. On the left side you have a linear (differential, but that is not so important here) operator, call it <span class="math-container">$L$</span>. Then you are given <span class="math-container">$L(y_1)=0$</span> and <span class="math-container">$L(y_2)=f$</span>. Now you want to solve <span class="math-container">$L(y)=-2f$</span>. By combining the known solutions you get <span class="math-container">$L(cy_1-2y_2)=-2f$</span> for any coefficient <span class="math-container">$c$</span>.</p>
<p>Now it remains to check that <span class="math-container">$L(e^x)$</span> does not result in any usable right side.</p>
|
3,729,851 | <p>So I have the following question here.</p>
<blockquote>
<p>Suppose that <span class="math-container">$y_1$</span> solves <span class="math-container">$2y''+y'+3x^2y=0$</span> and <span class="math-container">$y_2$</span> solves <span class="math-container">$2y''+y'+3x^2y=e^x$</span>. Which of the following is a solution of <span class="math-container">$2y''+y'+3x^2y=-2e^x$</span>?</p>
</blockquote>
<blockquote>
<p>(A) <span class="math-container">$3y_1-2y_2$</span></p>
</blockquote>
<blockquote>
<p>(B) <span class="math-container">$y_1+2y_2$</span></p>
</blockquote>
<blockquote>
<p>(C) <span class="math-container">$2y_1-y_2$</span></p>
</blockquote>
<blockquote>
<p>(D) <span class="math-container">$y_1+2y_2-2e^x$</span></p>
</blockquote>
<blockquote>
<p>(E) None of the above.</p>
</blockquote>
<p>The answer is supposed to be <span class="math-container">$A$</span>. But I am not really sure how that happened.</p>
<p>I know that for <span class="math-container">$2y''+y'+3x^2y=-2e^x$</span> the solution is always the homogeneous part and the particular part added together.</p>
<p>Furthermore, I know that the homogeneous part is given as <span class="math-container">$y_1$</span>.</p>
<p>I then know that for <span class="math-container">$2y''+y'+3x^2y=e^x$</span> the solution for that one is composed of the homogeneous part and the particular part and that I can also write the ODE as <span class="math-container">$-4y''-2y'-6x^2y=-2e^x$</span>. So this implies that the homogeneous portion is just <span class="math-container">$-2y_1$</span>.</p>
<p>I can't get further than that though. Is my thought process right so far? If not, what more can I do and how can I proceed from here? I can't even solve the first two equations since they have no analytic solution.</p>
<p>This is from an old exam I was looking at and not an assignment, so feel free to show work.</p>
| Community | -1 | <p>By linearity, it suffices to substitute the RHS:</p>
<ul>
<li><p>a) <span class="math-container">$3y_1-2y_2\to-2e^x,$</span></p>
</li>
<li><p>b) <span class="math-container">$y_1+2y_2\to2e^x,$</span></p>
</li>
<li><p>c) <span class="math-container">$2y_1-y_2\to-e^x,$</span></p>
</li>
<li><p>d) <span class="math-container">$y_1+2y_2-2e^x\to 2e^x-2(3+3x^2)e^x\ne-2e^x,$</span> since <span class="math-container">$2(e^x)''+(e^x)'+3x^2e^x=(3+3x^2)e^x.$</span></p>
</li>
</ul>
|
1,113,760 | <blockquote>
<p>$\frac{4}{3} e^{3x} + 2 e^{2x} - 8 e^x$</p>
</blockquote>
<p>I have some confusion especially because of the e </p>
<p>how can I approach the solution?</p>
<p>The solution of the x-intercept is 0.838</p>
<p>Many thanks</p>
| bjd2385 | 167,604 | <p>\begin{align}
0&=\frac{4}{3}e^{3x}+2e^{2x}-8e^x\tag{1} \\[1em]
& = \frac{\frac{4}{3}e^{3x}+2e^{2x}-8e^{x}}{2e^x}\tag{2} \\[1em]
& = \frac{2}{3}e^{2x}+e^x-4\tag{3} \\[1em]
\end{align}
Now let $\xi=e^x,\therefore e^{2x}=\left(e^x\right)^2=\xi^2.$ This gives us
\begin{align}
0&=\frac{2}{3}\xi^2 +\xi-4\tag{4} \\[1em]
\therefore \xi & = \left\{\frac{-1+\sqrt{1-4\left(\frac{2}{3}\right)\left(-4\right)}}{2\left(\frac{2}{3}\right)},\:\frac{-1-\sqrt{1-4\left(\frac{2}{3}\right)\left(-4\right)}}{2\left(\frac{2}{3}\right)}\right\}\tag{5} \\[1em]
\end{align}
And I'm sure you can do the rest...</p>
|
57,195 | <p>Let $f$ be a morphism of schemes $f: (X,\mathcal{O}_X)\to (Y,\mathcal{O}_Y)$, and $\mathcal{F},\mathcal{G}$ be sheaves of $\mathcal{O}_Y$-modules. I am trying to prove (I do NOT claim this to be true):</p>
<p>$f^{\ast}\mathcal{F}\otimes_{\mathcal{O}_X}f^{\ast}\mathcal{G}\cong f^{\ast}(\mathcal{F}\otimes_{\mathcal{O}_Y}\mathcal{G})$</p>
<p>By the definition of $f^{*}$, and the property of the tensor product, one can check that this boils down to proving: $\quad f^{-1} \mathcal{F} \otimes_{f^{-1}\mathcal{O}_Y} f^{-1}\mathcal{G} \cong f^{-1}(\mathcal{F} \otimes_{\mathcal{O}_Y}\mathcal{G})$. However, I cannot continue this bare hand computation at the present stage. For one thing $f^{-1}$ and $\otimes$ both require sheafification, and thus I get a compostion of two sheafification objects; for another, I know nothing about good properties of stalks on $f^{-1}$.</p>
<p>I guess the computation may be dirty, but I appreciate any insight on handling the problem.</p>
| Martin Brandenburg | 1,650 | <p><em>Alternative proof, using only adjunctions.</em></p>
<p>First, notice that there is an isomorphism in $\mathsf{Mod}(Y)$</p>
<p>$$f_* \underline{\hom}_X(f^* G,H) = \underline{\hom}_Y(G,f_* H)$$</p>
<p>for $G \in \mathsf{Mod}(Y)$ and $H \in \mathsf{Mod}(X)$. In fact, on an open subset $V \subseteq Y$, we have</p>
<p>$\Gamma(V,f_* \underline{\hom}_X(f^* G,H)) = \hom_{f^{-1}(V)}(f^* G |_{f^{-1}(V)},H|_{f^{-1}(V)})$</p>
<p>$ = \hom_{f^{-1}(V)}(f_V^* G|_V,H|_{f^{-1}(V)}) = \hom_V(G|_V,(f_V)_* H|_{f^{-1}(V)})$</p>
<p>$ = \hom_V(G|_V,(f_* H)|_V) = \Gamma(V,\underline{\hom}_Y(G,f_* H)).$</p>
<p>The rest is purely formal:</p>
<p>$\hom_X(f^* F \otimes f^* G , H)
= \hom_X(f^* F , \underline{\hom}_X(f^* G,H))
= \hom_Y(F,f_* \underline{\hom}_X(f^* G,H))$
$ = \hom_Y(F,\underline{\hom}_Y(G,f_* H)) = \hom_Y(F \otimes G,f_* H) = \hom_X(f^* (F \otimes G),H).$</p>
<p>Hence $f^* F \otimes f^* G \cong f^* (F \otimes G)$ by Yoneda. This proof also works in quite general contexts (for example where no stalks are available).</p>
|
3,379,837 | <p>I know that the two semigroups <span class="math-container">$(\{0,1,2,\dots \},\times)$</span> and <span class="math-container">$(\{0,1,2,\dots \},+)$</span> are not isomorphic, because if we want to map identity elements together then it can be seen that we can't have an injective function between them; but what can we say about <span class="math-container">$(\{1,2,\dots \},\times)$</span> and <span class="math-container">$(\{0,1,2,\dots \},+)$</span>?</p>
| ΑΘΩ | 623,462 | <p>As you have noted, it is easy to see that <span class="math-container">$(\mathbb{N}, +)$</span> and <span class="math-container">$(\mathbb{N}, \cdot)$</span> are not isomorphic as semigroups, since the latter possesses an algebraic feature that the first one does not, namely an <strong>absorptive element</strong> (a <span class="math-container">$0$</span>-element, which in our case is precisely the natural number <span class="math-container">$0$</span>). </p>
<p>When comparing however <span class="math-container">$(\mathbb{N}, +)$</span> with <span class="math-container">$(\mathbb{N}^{*}, \cdot)$</span>, it is important to realize that not only the former but also the latter are <strong>free commutative monoids</strong>, the former of basis <span class="math-container">$\{1\}$</span> and the latter of basis <span class="math-container">$\mathbb{P}$</span>, the set of all prime numbers. Free commutative monoids have the following remarkable property of rigidity: </p>
<ul>
<li>if <span class="math-container">$X$</span> is an arbitrary set and <span class="math-container">$\iota: X \to \mathbb{N}^{(X)}$</span> is the canonical injection into the free commutative monoid over <span class="math-container">$X$</span>, then any generating system <span class="math-container">$S$</span> of <span class="math-container">$\mathbb{N}^{(X)}$</span> <strong>as a monoid</strong> will necessarily include the canonical basis <span class="math-container">$\iota(X)$</span>.</li>
<li>as a consequence, <span class="math-container">$\mathbb{N}^{(X)}$</span> has <strong>no other basis than</strong> <span class="math-container">$\iota(X)$</span>.</li>
<li>more generally, any free commutative monoid will have a unique basis and will thus admit a notion of <strong>dimension, i.e. the cardinality of the unique basis it possesses</strong>.</li>
</ul>
<p>The dimension of a free commutative monoid is obviously an invariant under monoid isomorphisms; since <span class="math-container">$\mathrm{dim}\ \mathbb{N}^{*}=|\mathbb{P}|=\aleph_0$</span> and <span class="math-container">$\mathrm{dim}\ \mathbb{N}=1$</span> we now have a clear understanding of the obstruction to the existence of any isomorphism between the two monoids at hand.</p>
|
234,340 | <p>Suppose that I have two real-valued matrices $\bf{A}$ and $\bf{B}$. Both matrices are exactly the same size. I multiply both matrices together in a point-by-point fashion similar to the Matlab <code>A .* B</code> operation.</p>
<p>Under what conditions can I approximately separate $\bf{A}$ and $\bf{B}$ using Principle Components Analysis (PCA)? Would it be possible to remove some components of the product <code>A .* B</code> to get an approximation of $\bf{A}$ or $\bf{B}$?</p>
<p>What algorithm might be best suited for this operation?</p>
<p>I am not looking for an exact separation of the matrices, but a separation using some sort of (statistical or numerical?) constraints. How would I set this problem up, and is there a good example of how to do this?</p>
| lerije | 48,917 | <p>If the entries are non-negative then you could use <a href="http://en.wikipedia.org/wiki/Non-negative_matrix_factorization" rel="nofollow">NMF</a> (non-negative matrix factorization). Or let $\textbf{C} = \textbf{A} \circ \textbf{B}$ be the entrywise product. Then you could use singular value decomposition on $\textbf{C}$. </p>
|
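One concrete constraint under which separation works, sketched in pure Python (an illustrative toy of mine, not from the answer): if <span class="math-container">$A=ab^T$</span> and <span class="math-container">$B=cd^T$</span> are both rank one, then the entrywise product is $(a\circ c)(b\circ d)^T$, again rank one, and its leading singular pair reconstructs it; generic matrices are not separable this way.

```python
import math, random

random.seed(0)
m, n = 6, 5
a = [random.uniform(0.5, 2) for _ in range(m)]
b = [random.uniform(0.5, 2) for _ in range(n)]
c = [random.uniform(0.5, 2) for _ in range(m)]
d = [random.uniform(0.5, 2) for _ in range(n)]

# Entrywise product of two rank-one matrices is rank one
C = [[a[i]*b[j] * c[i]*d[j] for j in range(n)] for i in range(m)]

# Power iteration for the leading right singular vector of C
v = [1.0] * n
for _ in range(200):
    u = [sum(C[i][j]*v[j] for j in range(n)) for i in range(m)]   # C v
    w = [sum(C[i][j]*u[i] for i in range(m)) for j in range(n)]   # C^T u
    norm = math.sqrt(sum(x*x for x in w))
    v = [x / norm for x in w]

u = [sum(C[i][j]*v[j] for j in range(n)) for i in range(m)]
# The rank-one reconstruction u v^T should reproduce C almost exactly
err = max(abs(C[i][j] - u[i]*v[j]) for i in range(m) for j in range(n))
print(err)  # ~machine precision
```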
617,927 | <p>Find the taylor expansion of $\sin(x+1)\sin(x+2)$ at $x_0=-1$, up to order $5$.</p>
<p><strong>Taylor Series</strong></p>
<p>$$f(x)=f(a)+(x-a)f'(a)+\frac{(x-a)^2}{2!}f''(a)+...+\frac{(x-a)^r}{r!}f^{(r)}(a)+...$$</p>
<p>I've got my first term...</p>
<p>$f(a) = \sin(-1+1)\sin(-1+2)=\sin(0)\sin(1)=0$</p>
<p>Now, I've calculated $f'(x)=\sin(x+1)\cos(x+2)+\sin(x+2)\cos(x+1)$</p>
<p>So that $f'(-1) = \sin(1) = 0.8414709848$</p>
<p>This means my second term would be $(x+1)(0.8414709848).$</p>
<p>But, this doesn't seem to be nice and neat like the other expansions I have done and I can't figure out what I've done wrong.</p>
<p>Merry Christmas and thanks in advance.</p>
| mathlove | 78,967 | <p>You did nothing wrong. However, building the Taylor series for each term is better and easier. </p>
<p>Also, using the decimal is not good. Keep using $\sin(1).$</p>
<p>Also, notice that actually $\sin(1)\not =0.8414709848$. This is only an approximate value.</p>
|
2,117,225 | <p>I need to find an expression that satisfies the qualifying conditions for a quintic polynomial.</p>
<p>$f(0)=3$ and $f(-2)=f(\frac{1}{2})=f(1)=0$.</p>
<p>With this information, I found that the zeros are $-2, \frac{1}{2},$ and $1$.</p>
<p>By plugging $0$ into $f(x)$, I found that $f=3$ using the form $ax^5+bx^4+cx^3+dx^2+ex+f$.</p>
<p>Any advice where to go from here?</p>
| Robert Israel | 8,508 | <p>Plugging in $x=-2$, $1/2$ and $1$ into $f(x)$ will give you three linear equations in the four remaining unknown parameters.</p>
|
2,117,225 | <p>I need to find an expression that satisfies the qualifying conditions for a quintic polynomial.</p>
<p>$f(0)=3$ and $f(-2)=f(\frac{1}{2})=f(1)=0$.</p>
<p>With this information, I found that the zeros are $-2, \frac{1}{2},$ and $1$.</p>
<p>By plugging $0$ into $f(x)$, I found that $f=3$ using the form $ax^5+bx^4+cx^3+dx^2+ex+f$.</p>
<p>Any advice where to go from here?</p>
| WW1 | 88,679 | <p>One set of possibilities is ...
$$ f(x) = k \left [ (x+2)^n(x-\frac12)^p(x-1)^q \right ]$$
where $n,p,q$ are positive integers that sum to $5$</p>
<p>Choose any three you like and then use $f(0)=3$ to calculate $k$</p>
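One concrete member of this family, checked exactly (my own sketch): take $n=1, p=2, q=2$, so $f(x)=k(x+2)(x-\frac12)^2(x-1)^2$, and $f(0)=k\cdot2\cdot\frac14\cdot1=\frac k2=3$ forces $k=6$.

```python
from fractions import Fraction

# f(x) = k (x+2)(x - 1/2)^2 (x - 1)^2, a degree-5 polynomial
def f(x, k):
    x = Fraction(x)
    return k * (x + 2) * (x - Fraction(1, 2))**2 * (x - 1)**2

k = Fraction(6)
print(f(0, k), f(-2, k), f(Fraction(1, 2), k), f(1, k))  # → 3 0 0 0
```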
|
19,098 | <p>There are already <a href="https://math.meta.stackexchange.com/questions/4277/people-who-ask-homework-questions-and-then-remove-them">a</a> <a href="https://meta.stackexchange.com/questions/155933/preventing-misuse-of-question-self-deletion">lot</a> <a href="https://math.meta.stackexchange.com/questions/8528/why-do-some-users-delete-their-questions-after-receiving-an-answer">of</a> <a href="https://math.meta.stackexchange.com/questions/8537/what-can-moderators-do-when-a-user-defaces-his-her-own-post">questions</a> <a href="https://math.meta.stackexchange.com/questions/8540/is-there-need-to-patrol-for-deleted-questions">here</a> <a href="https://math.meta.stackexchange.com/questions/11102/deleting-questions-with-answers-and-no-upvotes">on</a> <a href="https://math.meta.stackexchange.com/questions/13273/question-voluntarily-removed-by-author-immediately-after-answer">Meta</a> about users deleting their questions promptly after getting an answer. It happened to me twice this weekend. (I even thought they were the same user, because it was on the same day and the topic was the same: metric spaces. In the end, it was two different users. I was surprised though to discover that they were not new users at all, both with about 1k rep).</p>
<p>Anyway, I was told in <a href="https://math.meta.stackexchange.com/questions/8528/why-do-some-users-delete-their-questions-after-receiving-an-answer/8529#comment69592_8529">this</a> comment to flag any post of mine (I couldn't flag the original question or answer, because the whole page was unavailable) for moderator attention, and explain it, which I did, and both questions were undeleted rapidly.</p>
<p>However, one of the users deleted the same question again. I haven't found anywhere, in any of the several questions I linked above, what I should do in such a situation.</p>
<p>I understand that users should have the right to delete their own posts, unless doing so would also delete content that's helpful to the community. I also understand that I'm not the one who should decide whether my answer is helpful of not (after all, it had no upvotes, and that's why the user was able to re-delete it). But still I'm not comfortable with just letting this be.</p>
<p>In the end, I would like a recommendation about what to do upon (re)deletion of questions with an answer of mine. Should I keep flagging forever until the user stops deleting it? Is it better to post this kind of request in the "requests for reopen and undeletion" thread? Should I leave him (and the moderators) alone if he insists in deleting his post?</p>
| apnorton | 23,353 | <p>Self-deletion after receiving an answer is an abuse of the site. (We're meant to be a repository of question and answer pairs--if someone deletes their question immediately, they're working against the purpose of the site.)</p>
<p>Therefore, you shouldn't let the issue rest; if the person deletes again, keep flagging. Eventually, a moderator may suspend the user and/or lock the post.</p>
|
19,098 | <p>There are already <a href="https://math.meta.stackexchange.com/questions/4277/people-who-ask-homework-questions-and-then-remove-them">a</a> <a href="https://meta.stackexchange.com/questions/155933/preventing-misuse-of-question-self-deletion">lot</a> <a href="https://math.meta.stackexchange.com/questions/8528/why-do-some-users-delete-their-questions-after-receiving-an-answer">of</a> <a href="https://math.meta.stackexchange.com/questions/8537/what-can-moderators-do-when-a-user-defaces-his-her-own-post">questions</a> <a href="https://math.meta.stackexchange.com/questions/8540/is-there-need-to-patrol-for-deleted-questions">here</a> <a href="https://math.meta.stackexchange.com/questions/11102/deleting-questions-with-answers-and-no-upvotes">on</a> <a href="https://math.meta.stackexchange.com/questions/13273/question-voluntarily-removed-by-author-immediately-after-answer">Meta</a> about users deleting their questions promptly after getting an answer. It happened to me twice this weekend. (I even thought they were the same user, because it was on the same day and the topic was the same: metric spaces. In the end, it was two different users. I was surprised though to discover that they were not new users at all, both with about 1k rep).</p>
<p>Anyway, I was told in <a href="https://math.meta.stackexchange.com/questions/8528/why-do-some-users-delete-their-questions-after-receiving-an-answer/8529#comment69592_8529">this</a> comment to flag any post of mine (I couldn't flag the original question or answer, because the whole page was unavailable) for moderator attention, and explain it, which I did, and both questions were undeleted rapidly.</p>
<p>However, one of the users deleted the same question again. I haven't found anywhere, in any of the several questions I linked above, what I should do in such a situation.</p>
<p>I understand that users should have the right to delete their own posts, unless doing so would also delete content that's helpful to the community. I also understand that I'm not the one who should decide whether my answer is helpful or not (after all, it had no upvotes, and that's why the user was able to re-delete it). But still I'm not comfortable with just letting this be.</p>
<p>In the end, I would like a recommendation about what to do upon (re)deletion of questions with an answer of mine. Should I keep flagging forever until the user stops deleting it? Is it better to post this kind of request in the "requests for reopen and undeletion" thread? Should I leave him (and the moderators) alone if he insists in deleting his post?</p>
| Scott Morrison | 28 | <p>Over at MathOverflow, we (the moderators, sometimes asking for help) have semi-regularly gone on an undeleting spree. We use judgement; if there's some genuine reason for deleting (even embarrassment) we leave it deleted, but if it's a worthwhile question and we can't see any reason it's fair game.</p>
|
3,817,340 | <p>I'm trying to draw the bio-hazard symbol for <a href="https://codegolf.stackexchange.com/questions/191294/draw-the-biohazard-symbol">a codegolf challenge</a> in Java, for which I've been given the following picture (later referred to as unit diagram):</p>
<p><a href="https://i.stack.imgur.com/fIsNl.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fIsNl.png" alt="enter image description here" /></a></p>
<p>Most existing answers in other programming languages use an approach involving a loop of 3, in which they rotate by 120 degrees and draw the circle again. In Java however, drawing each shape one by one from a fixed position would be shorter (and the shorter the better in <a href="https://codegolf.stackexchange.com/tags/code-golf/info">code-golf</a> challenges).<br />
I want to draw the shapes in the following order:</p>
<ol>
<li>Three big circles in black</li>
<li>Three inner circles in white</li>
<li>The small center circle in white</li>
<li>The three gaps at the center in white</li>
<li>The three gaps at the outer parts in white</li>
<li>A black ring in the middle, with three white rings along the circles we've drawn in step 2, which will create three arcs</li>
</ol>
<p>I won't go too deep into detail of what each Java method does, but in general, most of the methods are given an <span class="math-container">$x,y$</span>-coordinate of the top-left corner of the rectangle surrounding the oval, and a <span class="math-container">$width$</span> and <span class="math-container">$height$</span>. Because of this, I want to calculate all <span class="math-container">$x,y$</span>-coordinates of the circle given the unit diagram, while I only assume the coordinates of the very center of the screen.</p>
<p>Here a more visual representation of the steps and what I want to calculate (quickly made in paint, so excuse any inaccuracies):</p>
<p><a href="https://i.stack.imgur.com/h3j5n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/h3j5n.png" alt="enter image description here" /></a></p>
<p>So to use the Java methods, I need to know the <span class="math-container">$x,y$</span>-coordinates of all red dots; the width/height of the purple lines; and the angles of the blue lines (for the arcs of step 6).</p>
<p>Assumption: the pink dot at the very center is at <span class="math-container">$x,y$</span>-position <span class="math-container">$[300,300]$</span>; and the units in the first picture are multiplied by 10 for my output.</p>
<p>Here the ones I've been able to figure out myself thus far:</p>
<ol>
<li>Width/height (purple line): This is <span class="math-container">$H$</span> in the unit diagram, thus <span class="math-container">$300$</span>.
<ol>
<li>The first <span class="math-container">$x,y$</span>-coordinate (first red dot): we know that from the very center of the screen (pink dot) to the center of the large circles (yellow dot) is unit <span class="math-container">$E=110$</span> (green line). The yellow dot therefore is at position <span class="math-container">$[300, 300-E] → [300,190]$</span>. From there, we can subtract half of <span class="math-container">$H$</span> from both the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> positions to get the coordinates of the red dot: <span class="math-container">$[300-\frac{H}{2}, 300-E-\frac{H}{2}] → [150,40]$</span>.</li>
<li>The second <span class="math-container">$x,y$</span>-coordinate (second red dot): <span class="math-container">$\color{red}?$</span></li>
<li>The third <span class="math-container">$x,y$</span>-coordinate (third red dot): <span class="math-container">$\color{red}?$</span></li>
</ol>
</li>
<li>Width/height (purple line): This is <span class="math-container">$G$</span> in the unit diagram, thus <span class="math-container">$210$</span>.
<ol>
<li>The first <span class="math-container">$x,y$</span>-coordinate (first red dot): <span class="math-container">$\color{red}?$</span></li>
<li>The second <span class="math-container">$x,y$</span>-coordinate (second red dot): <span class="math-container">$\color{red}?$</span></li>
<li>The third <span class="math-container">$x,y$</span>-coordinate (third red dot): <span class="math-container">$\color{red}?$</span></li>
</ol>
</li>
<li>Width/height (purple line): This is <span class="math-container">$D$</span> in the unit diagram, thus <span class="math-container">$60$</span>.
<ol>
<li><span class="math-container">$x,y$</span>-coordinate (red dot): This is the position of the pink dot, minus half its width/height for both the <span class="math-container">$x$</span> and <span class="math-container">$y$</span> coordinates: <span class="math-container">$[300-\frac{D}{2}, 300-\frac{D}{2}] → [270,270]$</span>.</li>
</ol>
</li>
<li>Width/height (purple lines): The width is <span class="math-container">$A$</span> in the unit diagram, thus <span class="math-container">$10$</span>. The height doesn't really matter in this case, as long as it's large enough to create the entire gap, but also not too large. Although it doesn't reflect my paint drawing, we could for example use <span class="math-container">$D$</span> as height and draw up to the pink dot.
<ol>
<li>The first <span class="math-container">$x,y$</span>-coordinate (first red dot): Assuming the height is <span class="math-container">$D$</span> and we draw up to the pink dot, we know the <span class="math-container">$x,y$</span> coordinate is at position <span class="math-container">$[300-\frac{A}{2}, 300-D] → [295,240]$</span>.</li>
<li>The second/third/fourth/fifth <span class="math-container">$x,y$</span>-coordinates / red dots (the Java method to draw irregular oriented rectangles requires all four <span class="math-container">$x,y$</span>-coordinates of the corners): <span class="math-container">$\color{red}?$</span></li>
<li>The sixth/seventh/eighth/ninth <span class="math-container">$x,y$</span>-coordinates / red dots (the Java method to draw irregular oriented rectangles requires all four <span class="math-container">$x,y$</span>-coordinates of the corners): <span class="math-container">$\color{red}?$</span></li>
</ol>
</li>
<li>Width/height (purple lines): The width is <span class="math-container">$C$</span> in the unit diagram, thus <span class="math-container">$40$</span>. The height is, just as in step 4, not really important, so let's use twice the <span class="math-container">$y$</span> coordinate of the very top, which we've calculated in step 1.1 and was <span class="math-container">$40$</span>; so we'll use a height of <span class="math-container">$80$</span> here.
<ol>
<li>The first <span class="math-container">$x,y$</span>-coordinate (first red dot): Assuming the height <span class="math-container">$80$</span> and we draw from <span class="math-container">$y=0$</span>, we know the <span class="math-container">$x,y$</span>-coordinate is at position <span class="math-container">$[300-\frac{C}{2}, 0] → [280,0]$</span>.</li>
<li>The second/third/fourth/fifth <span class="math-container">$x,y$</span>-coordinates / red dots (the Java method to draw irregular oriented rectangles requires all four <span class="math-container">$x,y$</span>-coordinates of the corners): <span class="math-container">$\color{red}?$</span></li>
<li>The sixth/seventh/eighth/ninth <span class="math-container">$x,y$</span>-coordinates / red dots (the Java method to draw irregular oriented rectangles requires all four <span class="math-container">$x,y$</span>-coordinates of the corners): <span class="math-container">$\color{red}?$</span></li>
</ol>
</li>
<li>Width/height (purple line): Unlike the other circles, the height of the circle along which the ring is drawn isn't known in the unit diagram. We know the thickness of the ring (orange line) is <span class="math-container">$B=35$</span>. In the unit diagram we also see that from the very center (pink dot) to the center of the circles we've drawn in step 1, the unit is <span class="math-container">$E=110$</span>. And from the center of this circle of step 1 to the bottom of the arc is unit <span class="math-container">$A=10$</span>. We can therefore deduce that the width/height (purple line) is <span class="math-container">$2(E-A+B)→270$</span>.
<ol>
<li>The <span class="math-container">$x,y$</span>-coordinate (red dot): Since we know the circle is in the center and we also know it's width/height, we can easily calculate the <span class="math-container">$x,y$</span>-coordinate as: <span class="math-container">$[300-(E-A+B), 300-(E-A+B)] → [165,165]$</span>.</li>
<li>We also know the thickness of the last three white rings we draw on top is <span class="math-container">$A=10$</span>, and their width/height and <span class="math-container">$x,y$</span>-coordinates are the exact same as the three circles we've drawn in step 2.</li>
</ol>
</li>
</ol>
<p>Can anyone help me determine the <span class="math-container">$\color{red}?$</span> entries above, i.e. the unknown <span class="math-container">$x,y$</span> coordinates in steps 1, 2, 4 and 5? Just general information on how I could go about calculating these is fine as well; right now I don't know where to even begin. Also, sorry if asking all steps at once is too much for a single question. I could split it up into the unknowns of each individual step in separate questions if that's preferable.</p>
| LCFactorization | 148,887 | <p>Here there is an answer: <a href="https://www.reddit.com/r/geogebra/comments/on54iw/how_to_create_such_a_biohazard_symbol_in_geogebra/" rel="nofollow noreferrer">https://www.reddit.com/r/geogebra/comments/on54iw/how_to_create_such_a_biohazard_symbol_in_geogebra/</a></p>
<p>If you know how to use geogebra, the solution in this link is very simple and elegant:
<a href="https://www.geogebra.org/classic/uwc2xt4y" rel="nofollow noreferrer">https://www.geogebra.org/classic/uwc2xt4y</a></p>
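<p>Beyond the GeoGebra link, the threefold symmetry itself answers the unknown coordinates: each missing red dot comes from rotating a known point by ±120° about the pink center $(300,300)$, then (for bounding boxes) applying the same $-H/2$ shift, since the boxes stay axis-aligned. A minimal sketch in Python rather than Java; <code>rotate</code> is an illustrative name:</p>

```python
import math

def rotate(px, py, deg, cx=300.0, cy=300.0):
    """Rotate the point (px, py) by deg degrees about the center (cx, cy)."""
    t = math.radians(deg)
    dx, dy = px - cx, py - cy
    return (cx + dx * math.cos(t) - dy * math.sin(t),
            cy + dx * math.sin(t) + dy * math.cos(t))

# The top large circle (step 1) has its center at (300, 190); the other two
# centers are its rotations by +120 and -120 degrees.  Rotate the CENTER,
# then subtract H/2 -- the bounding box stays axis-aligned, so rotating the
# top-left corner directly would be wrong.
H = 300
for deg in (0, 120, -120):
    cx_i, cy_i = rotate(300, 190, deg)
    red_dot = (cx_i - H / 2, cy_i - H / 2)  # top-left corner of the box
    print(round(red_dot[0], 2), round(red_dot[1], 2))  # (150.0, 40.0) for deg=0, as in step 1.1
```

<p>The same rotation, applied to the four corner points of the rectangles in steps 4 and 5, yields the remaining red dots directly, which is what the Java method for irregularly oriented rectangles needs.</p>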
|
1,675 | <p>This is a follow-up to <a href="https://mathoverflow.net/questions/1039/explicit-direct-summands-in-the-decomposition-theorem">this post</a> on the Decomposition Theorem. Hopefully, this will also invite some discussion about the theorem and perverse sheaves in general.</p>
<p>My question is how does one use the Decomposition Theorem in practice? Is there any way to pin down the subvarieties and local systems that appear in the decomposition? For example, how do you compute intersection homology complexes using this theorem? Does anyone have a link to a source with worked out examples?</p>
<p>Another related question: What is the deep part of the theorem? Is it the fact that the pushforward of a perverse sheaf is isomorphic to its perverse hypercohomology? Is it the fact that these pieces are semisimple? Or are these both hard statements? And what is so special about algebraic varieties?</p>
| Mike Skirvin | 916 | <p>There's a recent paper by Mark Andrea de Cataldo and Luca Migliorini (<a href="http://arxiv.org/abs/0712.0349" rel="nofollow">http://arxiv.org/abs/0712.0349</a>) which gives an excellent introduction to the decomposition theorem. In particular, they discuss semi-small maps in the context of Springer theory and Hilbert schemes.</p>
|
4,236,878 | <p>Given a symmetric matrix <span class="math-container">$S$</span> and positive definite matrix <span class="math-container">$B$</span>, with <span class="math-container">$S,B \in \mathbb{R}^{n \times n}$</span> can one prove that</p>
<p><span class="math-container">\begin{align*}
\text{tr}((S-B)B) \le -\mu(S) \text{tr}(B)
\end{align*}</span></p>
<p>where <span class="math-container">$\mu(S) < 0$</span> is the largest eigenvalue of <span class="math-container">$S$</span>? And does this hold if <span class="math-container">$\mu(S) > 0$</span>?</p>
| FelipeCruzV10 | 957,011 | <p>If <span class="math-container">$\mu(S)>0$</span> it doesn't necessarily hold.</p>
<p>Taking <span class="math-container">$S=\begin{pmatrix}100 & 50 \\ 50 & 100\end{pmatrix}$</span> and <span class="math-container">$B=\begin{pmatrix}0.1 & 0 \\ 0 & 0.1\end{pmatrix}$</span>, we have that <span class="math-container">$B$</span> is a positive definite matrix, <span class="math-container">$\mu(S)=150>0$</span>, and <span class="math-container">$tr((S-B)B)\approx20>-30=-\mu(S)tr(B)$</span>.</p>
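<p>The numbers quoted above can be verified directly. A small Python sketch (not part of the original answer), using the closed form for the larger eigenvalue of a $2\times2$ symmetric matrix:</p>

```python
import math

S = [[100.0, 50.0], [50.0, 100.0]]
B = [[0.1, 0.0], [0.0, 0.1]]   # positive definite

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(X):
    return X[0][0] + X[1][1]

SmB = [[S[i][j] - B[i][j] for j in range(2)] for i in range(2)]
lhs = trace(matmul(SmB, B))  # tr((S - B) B), approximately 19.98

# Largest eigenvalue of a 2x2 symmetric [[a, c], [c, b]] is
# (a + b)/2 + sqrt(((a - b)/2)**2 + c**2); here that is 150.
mu = (S[0][0] + S[1][1]) / 2 + math.sqrt(((S[0][0] - S[1][1]) / 2) ** 2 + S[0][1] ** 2)
rhs = -mu * trace(B)         # -mu(S) tr(B) = -30

print(lhs > rhs)  # True: the inequality fails when mu(S) > 0
```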
|
361,201 | <p>Let $\left\{ x_\alpha : \alpha \in \mathscr{A}\right\} \subset (0, + \infty ) $ be a set of positive real numbers such that for every countable subcollection $ \left\{ x_{\alpha_n} \right\} $ of distinct points it holds $ x_{\alpha_n} \rightarrow 0 $. Then $ \mathscr{A} $ is a countable set.</p>
<p>I think that this statement is true. How can I prove it (if it is true)?</p>
<p>Thanks</p>
| Asaf Karagila | 622 | <p><strong>Hint:</strong> Let $X$ denote this collection. Prove that for every $n\in\Bbb N$, $X\cap\left(\frac1n,+\infty\right)$ must be finite.</p>
|
361,201 | <p>Let $\left\{ x_\alpha : \alpha \in \mathscr{A}\right\} \subset (0, + \infty ) $ be a set of positive real numbers such that for every countable subcollection $ \left\{ x_{\alpha_n} \right\} $ of distinct points it holds $ x_{\alpha_n} \rightarrow 0 $. Then $ \mathscr{A} $ is a countable set.</p>
<p>I think that this statement is true. How can I prove it (if it is true)?</p>
<p>Thanks</p>
| Hagen von Eitzen | 39,174 | <p>For $\epsilon>0$ the set $A_\epsilon:=\{x_\alpha\mid x_\alpha>\epsilon\}$ must be finite as otherwise we'd find a countable subcollection with limit $\ge\epsilon$. Therefore
$$\{x_\alpha\mid \alpha\in\mathscr A\}=\bigcup_{n\in\mathbb N}A_{\frac1n}$$
is the countable union of finite sets, hence countable.</p>
|
2,107,685 | <p><a href="https://i.stack.imgur.com/GtU6e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GtU6e.png" alt="laaa"></a></p>
<p>I have to represent the function on the left as a power series. This is the solution to it, but I don't know how to calculate it, for example when n=1.</p>
| NiU | 163,915 | <p>In the second case where $\Omega=[\tfrac{1}{4}, \tfrac{1}{3}) \cup ([\tfrac{1}{3}, \tfrac{1}{2}) \cap \mathbb{Q})$ there is no definite answer <em>even if we assume the existence of a cyclic vector</em>.</p>
<p>First, consider $A$ to be multiplication by $f(x)=x$ on $L^2[0,1]$ with the usual Lebesgue measure. A cyclic vector is given by the constant function 1 since polynomials are dense in $L^2[0,1]$. Then the spectral measure $E_\Omega$ is given explicitly by the functional calculus as the operator $\chi_\Omega (A) \phi (x) = \chi_\Omega(x) \phi$.</p>
<p>In particular, the operator $E_\Omega A$ is given by multiplication with $g(x)=x \chi_\Omega(x)$. Computing its norm amounts to computing its spectrum which amounts to computing its essential range with respect to the Lebesgue measure. Clearly, the essential range of $g$ is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}]$, so $|| E_\Omega A ||= \tfrac{1}{3}$.</p>
<p>Second, consider $L^2[0,1]$ with the measure given by $\mu = \lambda + \delta_{\tfrac{1}{2}}$, where $\lambda$ denotes the Lebesgue measure. I.e. we give the point $\tfrac{1}{2}$ mass 1. Then, by the same argument as above we have to find the essential range of $g(x)=x \chi_\Omega(x)$ w.r.t. $\mu$. Here, we have that the essential range is $\{ 0 \} \cup [\tfrac{1}{4}, \tfrac{1}{3}] \cup \{ \tfrac{1}{2} \}$, so $|| E_\Omega A ||= \tfrac{1}{2}$.</p>
<p>Note: In the second case, we also have a cyclic vector, namely the constant function 1. This is a consequence of the following: For finite, regular Borel measures on compact subsets $K$ of $\mathbb{R}$ the continuous functions are dense in $L^2(K)$. Now, the polynomials are dense in the continuous functions (w.r.t. the sup-norm), so in particular they are dense w.r.t. the $L^2$ norm.</p>
|
871,581 | <p>I am trying to prove the identity below to help with the simplification of another function that I'm investigating as it doesn't appear to be a standard trig identity.</p>
<p>$$
\tan\left(x\right) + \tan\left( y \right) = \frac{{\sin\left( {x + y} \right)}}{{\cos\left( x \right)\cos\left( y \right)}}
$$</p>
<p>Any assistance gratefully appreciated.</p>
| Blue | 409 | <p>For fun, here's a picture-proof:</p>
<p><img src="https://i.stack.imgur.com/vKC0Ym.png" alt="enter image description here"></p>
<p>$$\begin{align}
2\;|\triangle OAB| = \qquad\qquad |\overline{OR}|\;|\overline{AB}| \;&=\; |\overline{OA}|\;|\overline{OB}|\;\sin\angle AOB \\[6pt]
1\cdot (\;\tan\alpha + \tan\beta\;) \;&=\; \sec\alpha \;\cdot\;\sec\beta\;\cdot \;\sin(\alpha+\beta) \\
\tan\alpha + \tan\beta \;&=\; \frac{\sin(\alpha+\beta)}{\cos\alpha\cos\beta}
\end{align}$$</p>
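<p>Independently of the picture, the identity can be sanity-checked numerically. A quick Python sketch (the function names are illustrative):</p>

```python
import math

def left(x, y):
    return math.tan(x) + math.tan(y)

def right(x, y):
    return math.sin(x + y) / (math.cos(x) * math.cos(y))

# Compare at a few arbitrary angles (avoiding odd multiples of pi/2,
# where both sides blow up).
for x, y in [(0.3, 1.1), (0.7, -0.2), (1.2, 0.9)]:
    print(abs(left(x, y) - right(x, y)) < 1e-12)  # True each time
```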
|
636,467 | <p>What is it that makes something a paradox? It seems to me that paradoxes are just, in many cases, misunderstandings about the properties some object can have and so misunderstandings about definitions. Is there something I might be missing? How is this kind of thought handled in logic?</p>
| Peter Smith | 35,151 | <p>Two thoughts to add to Carl Mummert's fine answer. He writes that a serious use of the work of "paradox" </p>
<blockquote>
<p>refers to a result that shows that a particular naive intuition is not sound. </p>
</blockquote>
<p>Perhaps it is slightly better, as indeed his own examples show, to say that the interesting paradoxes are cases where we have a <em>bunch</em> of naive intuitions, which the paradoxical reasoning shows can't all be true together, even though taken separately the intuitions continue to look quite compelling. This is why the interesting paradoxes (like the Liar Paradox, to take another example) can be so recalcitrant: they seem to show that we have to give up <em>some</em> pre-theoretic intuition, but it can be quite unobvious which one is the best candidate for revision. So we have to start exploring the costs and benefits of different revisions, and in many cases, the complications ramify. </p>
<p>You might be interested, for example, to look at <a href="http://plato.stanford.edu/entries/liar-paradox/" rel="nofollow">http://plato.stanford.edu/entries/liar-paradox/</a> to see a discussion of the ramifications of the Liar Paradox; or for another example you could try <a href="http://plato.stanford.edu/entries/sorites-paradox/" rel="nofollow">http://plato.stanford.edu/entries/sorites-paradox/</a></p>
<p>The same truly excellent on-line encyclopaedia (generally very reliable on logical matters) also has a nice article about the role of various paradoxes in spurring on the development of modern logic: <a href="http://plato.stanford.edu/entries/paradoxes-contemporary-logic/" rel="nofollow">http://plato.stanford.edu/entries/paradoxes-contemporary-logic/</a></p>
<p>Enjoy!</p>
|
636,467 | <p>What is it that makes something a paradox? It seems to me that paradoxes are just, in many cases, misunderstandings about the properties some object can have and so misunderstandings about definitions. Is there something I might be missing? How is this kind of thought handled in logic?</p>
| MJD | 25,554 | <p>You may enjoy W.V.O. Quine's essay "The ways of paradox", which tries to answer many of the same questions. Quine suggests early on:</p>
<blockquote>
<p>May we say in general, then, that a paradox is just any conclusion that at first sounds absurd but that has an argument to sustain it? In the end I think this account stands up pretty well.</p>
</blockquote>
<p>This accords well with Carl Mummert's comments elsewhere in this thread that “modern authors use "paradox" for all sorts of surprising or unexpected results” and “‘paradox’ refers to a result that shows that a particular naive intuition is not sound.”.</p>
<p>Note that this definition leaves open the possibilities that the sustaining argument could be correct or incorrect. In the first case, Quine calls the paradox "veridical", in the second case, "falsidical". (“Typical falsidical paradoxes are the comic misproofs that $2=1$.”) </p>
<p>You said:</p>
<blockquote>
<p>It seems to me that paradoxes are just, in many cases, misunderstandings about the properties some object can have and so misunderstandings about definitions.</p>
</blockquote>
<p>Put that way, it all sounds very simple. The problem is that the properties can seem very simple, and the misunderstandings can be very deep, and it can be hard to understand how the thing can be mistaken on the one hand and yet so seemingly simple on the other hand. Your word "just" waves away the difficulties. Quine discusses Grelling's paradox in this context. We say that adjectives can be true of things; for example, the adjective "red" is true of red things, or "polysyllabic" is true of polysyllabic things, and of polysyllabic words in particular. Grelling asks us to consider the adjective "heterological", which means "not true of itself". The adjective "red" <em>is</em> heterological, because it is a word, and words have no colors, so the adjective "red" is not red. But "polysyllabic" is a polysyllabic word and so is not heterological. You can guess the next step: is "heterological" heterological?</p>
<p>On your account, this is "just" a misunderstanding about the properties some object can have. What is the misunderstanding there? Is it a misunderstanding when certain adjectives can be said to be true of things? (This is Quine's view.) If so, what is the misunderstanding? If there is something wrong with "heterological", what is it, exactly, and if you disqualify it, what prevents you from also disqualifying "polysyllabic" or "red"?</p>
<p>Quine says:</p>
<blockquote>
<p>Yet so faithfully does the principle [of an adjective being true of the things it describes] reflect what we mean in calling adjectives true of things that we cannot abandon it without abjuring the very expression ‘true of’ as pernicious nonsense.</p>
</blockquote>
<p>His essay ends:</p>
<blockquote>
<p>Of all the ways of paradoxes, perhaps the quaintest is their capacity on occasion to turn out to be so very much less frivolous than they look.</p>
</blockquote>
|
514 | <p>I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.</p>
<p>I'm sure that everyone here is familiar with it; it describes an operation on a natural number – <span class="math-container">$n/2$</span> if it is even, <span class="math-container">$3n+1$</span> if it is odd.</p>
<p>The conjecture states that if this operation is repeated, all numbers will eventually wind up at <span class="math-container">$1$</span> (or rather, in an infinite loop of <span class="math-container">$1-4-2-1-4-2-1$</span>).</p>
<p>I fired up Python and ran a quick test on this for all numbers up to <span class="math-container">$5.76 \times 10^{18}$</span> (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at <span class="math-container">$1$</span>.</p>
<p>Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.)</p>
<p>I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"</p>
<p>To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"</p>
<p>And he said, "It is my conjecture that there are none! (and if any, they are rare)".</p>
<p>Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?</p>
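<p>For reference, the operation described above can be sketched as follows (a minimal Python version; <code>collatz_steps</code> is an illustrative name, not the asker's actual test code):</p>

```python
def collatz_steps(n):
    """Number of applications of the Collatz map needed to reach 1."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# 27 is a classic example: a small starting value with a long trajectory.
print(collatz_steps(27))  # 111
```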
| Doug Chatham | 273 | <p>Further counterexamples can be found here: <a href="https://mathoverflow.net/questions/15444/the-phenomena-of-eventual-counterexamples">https://mathoverflow.net/questions/15444/the-phenomena-of-eventual-counterexamples</a></p>
|
514 | <p>I just came back from my Number Theory course, and during the lecture there was mention of the Collatz Conjecture.</p>
<p>I'm sure that everyone here is familiar with it; it describes an operation on a natural number – <span class="math-container">$n/2$</span> if it is even, <span class="math-container">$3n+1$</span> if it is odd.</p>
<p>The conjecture states that if this operation is repeated, all numbers will eventually wind up at <span class="math-container">$1$</span> (or rather, in an infinite loop of <span class="math-container">$1-4-2-1-4-2-1$</span>).</p>
<p>I fired up Python and ran a quick test on this for all numbers up to <span class="math-container">$5.76 \times 10^{18}$</span> (using the powers of cloud computing and dynamic programming magic). Which is millions of millions of millions. And all of them eventually ended up at <span class="math-container">$1$</span>.</p>
<p>Surely I am close to testing every natural number? How many natural numbers could there be? Surely not much more than millions of millions of millions. (I kid.)</p>
<p>I explained this to my friend, who told me, "Why would numbers suddenly get different at a certain point? Wouldn't they all be expected to behave the same?"</p>
<p>To which I said, "No, you are wrong! In fact, I am sure there are many conjectures which have been disproved by counterexamples that are extremely large!"</p>
<p>And he said, "It is my conjecture that there are none! (and if any, they are rare)".</p>
<p>Please help me, smart math people. Can you provide a counterexample to his conjecture? Perhaps, more convincingly, several? I've only managed to find one! (Polya's conjecture). One, out of the many thousands (I presume) of conjectures. It's also one that is hard to explain the finer points to the layman. Are there any more famous or accessible examples?</p>
| Mr Pie | 477,343 | <p>Let’s take the number $12$. This number is not prime. It is a composite number, equal to $2^2\times 3$. Also, $121$ is not prime either. It is equal to $11^2$. And $1211$ is not prime as well. It is equal to $7\times 173$. Now you might notice a pattern here.</p>
<blockquote>
<p>Let $$12\,\|\, \underbrace{1\,\|\, 1\,\| 1\,\|\cdots}_{k\text{ times}}\tag{$\star$}$$ such that $a\,\|\, b = \left\{10^na + b : b \text{ has $n$ digits}\right\}$. Then, for all $k\in\mathbb{Z}_{>1}$, Eq. $(\star)$ will never be prime and always remain composite ($k > 1$ since $121$ is trivially not prime).</p>
</blockquote>
<p>This conjecture sounds reasonable, but the first counter-example is obtained when $k = 136$, for which Eq. $(\star)$ is <em>finally</em> a prime number.</p>
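<p>The first few cases are easy to confirm by trial division (a small Python sketch; verifying the claimed prime at $k=136$ would need a proper primality test and is not attempted here):</p>

```python
def is_prime(n):
    """Naive trial division; fine for the small cases checked here."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# 121, 1211, 12111, ... : "12" followed by k ones, for k = 1..7.
terms = [int("12" + "1" * k) for k in range(1, 8)]
print([is_prime(t) for t in terms])  # all False: every term is composite
```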
<hr>
<blockquote>
<blockquote>
<p>Let’s also look at the equation,
$$\frac{a}{b+c} + \frac{b}{a + c} + \frac{c}{a + b} = 4.\tag{$\star\star$}$$ It was conjectured that if we set $\{a, b, c\}\subset \mathbb{Z}^+$ then there do not exist such values of $a$, $b$, and $c$ to satisfy Eq. $(\star\star)$.</p>
</blockquote>
</blockquote>
<p>However, there exists a counter-example, and the following equation shows the smallest values of $a$, $b$, and $c$. $$(a, b, c)= (\ldots,\ldots,\ldots)$$ such that $$a =$$ $$154476802108746166441951315019919837485664325669565431700026634898253202035277999.$$ $$b =$$ $$36875131794129999827197811565225474825492979968971970996283137471637224634055579.$$ $$c =$$ $$4373612677928697257861252602371390152816537558161613618621437993378423467772036.$$</p>
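<p>With exact rational arithmetic, the quoted values can be checked directly (a Python sketch using the numbers above):</p>

```python
from fractions import Fraction

a = 154476802108746166441951315019919837485664325669565431700026634898253202035277999
b = 36875131794129999827197811565225474825492979968971970996283137471637224634055579
c = 4373612677928697257861252602371390152816537558161613618621437993378423467772036

# Fraction keeps the arithmetic exact, so no precision is lost on 80-digit inputs.
total = Fraction(a, b + c) + Fraction(b, a + c) + Fraction(c, a + b)
print(total == 4)  # True
```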
<p><strong>Edit:</strong></p>
<p>A link on the $MSE$ discussing this particular example can be found <a href="https://math.stackexchange.com/questions/2192461/find-answer-of-fracxyz-fracyxz-fraczyx-4">here</a>, and a similar link on the $MO$ can be found <a href="https://mathoverflow.net/questions/227713/estimating-the-size-of-solutions-of-a-diophantine-equation">here</a> (credit to @B.Mehta).</p>
<hr>
<p>So yeah, to sum it all up, there are tons of conjectures disproven by large (and <em>very</em> large) counter-examples :)</p>
|
443,578 | <blockquote>
<p>Is the limit
$$
e^{-x}\sum_{n=0}^N \frac{(-1)^n}{n!}x^n\to e^{-2x} \quad \text{as } \ N\to\infty \tag1
$$
uniform on $[0,+\infty)$? </p>
</blockquote>
<p>Numerically this appears to be true: see the difference of two sides in (1) for $N=10$ and $N=100$ plotted below. But the convergence is very slow (<strike>logarithmic</strike> error $\approx N^{-1/2}$ as shown by Antonio Vargas in his answer). In particular, putting $e^{-0.9x}$ and $e^{-1.9x}$ in (1) clearly makes convergence non-uniform. </p>
<p>One difficulty here is that the Taylor remainder formula is effective only up to $x\approx N/e$, and the maximum of the difference is at $x\approx N$.</p>
<p><img src="https://i.stack.imgur.com/Vuxmg.png" alt="N=10"></p>
<p><img src="https://i.stack.imgur.com/d0LHA.png" alt="enter image description here"></p>
<p>The question is inspired by an attempt to find an alternative proof of <a href="https://math.stackexchange.com/q/386807/">$\epsilon>0$ there is a polynomial $p$ such that $|f(x)-e^{-x}p|<\epsilon\forall x\in[0,\infty)$</a>. </p>
| Pedro | 23,350 | <p>Credits should go to <a href="https://math.stackexchange.com/users/46120/landscape">Landscape</a>.</p>
<p>Define $$r_n(x)=\sum_{k=n+1}^\infty (-1)^k\frac{x^k}{k!}$$</p>
<p>Note that by Taylor's theorem with <a href="https://en.wikipedia.org/wiki/Taylor's_theorem#Explicit_formulae_for_the_remainder" rel="nofollow noreferrer">Lagrange's form of the remainder</a> we can write $$r_n(x)=(-1)^{n+1}e^{-x'}\frac{x^{n+1}}{(n+1)!}$$</p>
<p>where $x'$ is positive. It follows $$e^{-x}|r_n(x)|\leq e^{-x}\frac{x^{n+1}}{(n+1)!}$$</p>
<p>Easy verification shows the last function has absolute maximum at $x=n+1$. But $$\frac{1}{{(n + 1)!}}{\left( {\frac{{n + 1}}{e}} \right)^{n + 1}} \sim \frac{1}{{\sqrt {2\pi \left( {n + 1} \right)} }}$$ by Stirling, so convergence is indeed uniform. $\quad \Box$</p>
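<p>The Stirling comparison in the last step can be checked numerically (a Python sketch; working in logs via <code>lgamma</code> avoids overflow for large $n$):</p>

```python
import math

# e^{-x} x^m / m! with m = n+1 is maximised at x = m; compare that maximum
# with the Stirling estimate 1/sqrt(2*pi*m).
ratios = {}
for n in (10, 100, 1000):
    m = n + 1
    log_max = m * math.log(m) - m - math.lgamma(m + 1)   # log of e^{-m} m^m / m!
    log_est = -0.5 * math.log(2 * math.pi * m)           # log of 1/sqrt(2*pi*m)
    ratios[n] = math.exp(log_max - log_est)
print(ratios)  # each ratio tends to 1 as n grows
```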
|
2,621,932 | <p><strong>Question:</strong> In a chess match, there are 16 contestants. Every player has to play each other player (like a round-robin). The player with the most wins/points wins the tournament.</p>
<p>a) How many games must be played until there is a victor? </p>
<p>b) If every player has to team up with each other player to play doubles chess. How many games must now be played until one of the teams is a victor? </p>
<p><strong>My Attempt:</strong></p>
<p>a) Each of the 16 players would have to play 15 other people, but Player 1 vs Player 16 is the same as Player 16 vs Player 1. Hence, $(16*15)/2$</p>
<p>b) No idea </p>
<p><strong>Official Answer:</strong> </p>
<p>a) ${}^{16}C_2$</p>
<p>b) ${}^{16}C_2/2$</p>
<p><strong>My Problem:</strong></p>
<p>a) ${}^{16}C_2$ is the same as my answer, however I thought combination would mean, how many different ways you can choose 2 people out of 16 people. However the question asks how many games have to played, so how does how many games have to be played mean the same thing as how many ways you can choose 2 people out of 16? </p>
<p>b) No idea</p>
| Rohan Shinde | 463,895 | <p>For the first part for a better clarity begin with some smaller cases. </p>
<p>Let's consider there are 3 people playing chess. Then what will be the number of matches until a victor is decided? Through simple counting we get this as $3$ (which is simply $\binom {3}{2}$).</p>
<p>Let's consider 4 people to get a stronger hold over the assumption. Again by simple counting you see that the number of matches would be 6 which is simply $\binom {4}{2}$</p>
<p>While counting these cases you might have noticed that what you are doing is nothing but finding the number of ways you can select two people from $16$ contestants.</p>
<p>For the second part though I have a doubt over the official answer. If they meant to find the matches between all possible teams among themselves. Then we must form two teams. For the first team we select 2 people out of 16. (ways = $\binom {16}{2}$ ). Now for the second team we select 2 people from remaining 14 people( ways $=\binom {14}{2}$ ) </p>
<p>But during this process we overcount and allow the same pair of teams to compete twice. Hence we divide the answer by 2. </p>
<p>Hence according to me the answer to the second part should be $$\frac {\binom {16}{2}\binom {14}{2}}{2}$$</p>
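<p>Both counts are easy to compute directly (a Python sketch, added only to confirm the arithmetic above):</p>

```python
from math import comb

singles = comb(16, 2)   # number of round-robin singles games
print(singles)          # → 120

# The "all possible team pairings" reading from this answer:
# pick team one, then team two, and halve because the teams are unordered.
doubles = comb(16, 2) * comb(14, 2) // 2
print(doubles)          # → 5460
```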
|
1,055,091 | <p>I've been asked to estimate a y coordinate by using differentials. This normally isn't overly difficult, however, I'm not sure what to do in a case like this when y cannot be separated and used as a function of x. Can anyone point me in the right direction? I suspect I'll have to use implicit differentiation but I can't quite formulate how I'd approach it.</p>
<p>Solve for the y coordinate of point P near (1,2) on the curve $2x^3 + 2y^3 = 9xy$, given that the x-coordinate of P is 1.1</p>
| Matt Samuel | 187,867 | <p>Use the Leibniz formula, namely $d(fg)=gdf+fdg$. We have
$$d(2x^3+2y^3-9xy)=6x^2dx+6y^2dy-d(9xy)=6x^2dx+6y^2dy-9ydx-9xdy=(6x^2-9y)dx+(6y^2-9x)dy$$
so since this is equal to $0$ we have
$$(6x^2-9y)dx=(9x-6y^2)dy$$
hence
$$dy=\frac{6x^2-9y}{9x-6y^2}dx$$</p>
<p>Using the standard routine of assuming $dy\approx \Delta y$, we have
$$x=1.1,y\approx 2+0.1\frac{6(1)-9(2)}{9(1)-6(4)}=2.08$$
Maple says that the real answer is $\approx 2.076$.</p>
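<p>The estimate can be checked numerically (a small Python sketch; the bisection bracket near $y=2$ is my own choice, not from the answer):</p>

```python
# Compare the differential estimate with the actual root of
# 2x^3 + 2y^3 = 9xy at x = 1.1, found by bisection near y = 2.
x = 1.1
f = lambda y: 2 * x**3 + 2 * y**3 - 9 * x * y

# Differential estimate from the answer, dy = (6x^2 - 9y)/(9x - 6y^2) dx at (1, 2):
estimate = 2 + 0.1 * (6 * 1 - 9 * 2) / (9 * 1 - 6 * 4)
print(estimate)  # → 2.08

lo, hi = 1.9, 2.2        # f changes sign on this interval
for _ in range(60):      # plain bisection
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(round(lo, 3))  # → 2.076, matching Maple
```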
|
103,776 | <p>I am curious as to why Wolfram|Alpha is graphing a logarithm the way that it is. I was always taught that a graph of a basic logarithm function $\log{x}$ should look like this:</p>
<p><img src="https://i.stack.imgur.com/3SRqI.png" alt="enter image description here"></p>
<p>However, Wolfram|Alpha is graphing it like this:</p>
<p><img src="https://i.stack.imgur.com/W7JuQ.png" alt="enter image description here"></p>
<p>As you can see, there is a "real" range in the region $(-\infty, 0)$, and an imaginary part indicated by the orange line. Is there a part about log graphs that I am missing which would explain why Wolfram|Alpha shows the range of the log function as $\mathbb{R}$?</p>
| Tib | 23,349 | <p>As well as being an $\mathbb{R^+} \to \mathbb{R}$ function, the logarithm can also be extended to a multi-valued complex function. Wolfram Alpha interprets the logarithm as the complex logarithm, then restricts it to real line again for graphing. See <a href="http://enwp.org/wiki/Complex_logarithm" rel="nofollow">http://enwp.org/wiki/Complex_logarithm</a> for a full graph of the complex logarithm.</p>
|
340,855 | <p>Say if there is a matrix A:</p>
<p>$$\begin{bmatrix} 1 & 2 & 0 & 2 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}$$</p>
<p><strong>What the column space of A?</strong> : I am confused whether to exclude NON-pivot columns.</p>
<p><strong>What is the dimension of column space?</strong> : The dimension of the column space or dimension of the basis of column space? Can be either 4 or 3?</p>
<p><strong>What is the basis of column space?</strong> This is just the pivot columns. Is this the so called $\operatorname{Col}A$?</p>
| Jim | 56,747 | <p>The column space is not a list of vectors so it's not clear what you mean when you ask if you should exclude non-pivot columns. The column space is the linear span of the columns. Each column (including the non-pivot columns) is contained in this space.</p>
<p>What you may be confusing yourself with is the column space vs. a <em>basis</em> for the column space. A basis is indeed a list of columns and for a reduced matrix such as the one you have a basis for the column space is given by taking exactly the pivot columns (as you have said). There are various notations for this, $\operatorname{Col}A$ is perfectly acceptable but don't be surprised if you see others.</p>
<p>As for the dimension of the column space, it's $3$, which is the number of elements in a basis, i.e., the number of pivot columns.</p>
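<p>For a concrete check, here is a small Python sketch (the helper <code>pivot_columns</code> is my own illustration, not a library function) that row-reduces the matrix with exact fractions and reports the pivot columns:</p>

```python
from fractions import Fraction

def pivot_columns(rows):
    """Row-reduce a copy of the matrix and return the pivot column indices."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots, r = [], 0
    for c in range(len(m[0])):
        # find a row at or below r with a nonzero entry in column c
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]          # normalize the pivot row
        for i in range(len(m)):                     # clear the rest of column c
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
    return pivots

A = [[1, 2, 0, 2], [0, 1, 1, 0], [0, 0, 0, 1]]
print(pivot_columns(A))  # → [0, 1, 3]: columns 1, 2 and 4 are pivots, so rank 3
```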
|
897,043 | <p>I'm having issues getting my head around cartesian products and their cardinalities.</p>
<p>$A = \{0, 1, \{2, 3, 4\}\}$<br>
$B = \{1,5\}$<br>
$D = B \times N$ (where $N$ is the set of natural numbers)</p>
<p><strong>The first problem:</strong> What is the cardinality of:</p>
<p>(a) $A \times B$ (cartesian product)</p>
<p>(b) $A \times D$</p>
<p><strong>Part 2:</strong> true/false
(a) $N$ is a subset of $D$</p>
<p>for (a) I used $|A \times B|$ = $|A| * |B|$
and got $3*2 = 6$ </p>
<p>is this the correct way to do this?</p>
<p>for (b) I assumed that the cardinality was infinite since it involved the set of natural numbers, am I correct in assuming this?</p>
<p>for part 2 (a) I assumed that it was true since $D$ contains the natural set so presumably the natural set is a subset of $D$, am I correct in assuming this?</p>
| Fargle | 157,905 | <p>For (a) and (b), you were right, but more specifically, the cardinality of $A \times D$ is $\aleph_0$, or countable infinity. (The same cardinality as $\Bbb N$.</p>
<p>For part 2 (a), you were wrong, however. $D$ does not contain $\Bbb N$, because $1 \neq (b,n)$ for any $b \in B, n \in \Bbb N$. Put more simply, $A \times B$ does not contain either $A$ or $B$ for any non-empty $A,B$.</p>
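<p>Part 1(a) can be checked directly in Python (using <code>frozenset</code> so the inner set $\{2,3,4\}$ is hashable — an implementation detail, not part of the question):</p>

```python
from itertools import product

# A has three elements: 0, 1, and the set {2, 3, 4} as a single element.
A = {0, 1, frozenset({2, 3, 4})}
B = {1, 5}

AxB = set(product(A, B))
print(len(AxB))  # → 6, i.e. |A| * |B| = 3 * 2
```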
|
125,610 | <p>I have question about sets. I need to prove that: $$X \cap (Y - Z) = (X \cap Y) - (X \cap Z)$$</p>
<p>Now, I tried to prove that from both sides of the equation but had no luck.</p>
<p>For example, I tried to do something like this: $$X \cap (Y - Z) = X \cap (Y \cap Z')$$ but now I don't know how to continue.</p>
<p>From the other side of the equation I tried to do something like this: $$(X \cap Y) - (X \cap Z) = (X \cap Y) \cap (X \cap Z)' = (X \cap Y) \cap (X' \cup Z')$$ and from here I don't know what to do again.</p>
<p>I will be glad to hear how should I continue from here and what I did wrong. Thanks in advance.</p>
| Arturo Magidin | 742 | <p>You can do it two ways: with manipulations using the properties of unions, intersections, and complements, or through double inclusion.</p>
<ol>
<li><p>To prove it by double inclusion, we must show that $X\cap(Y-Z)\subseteq (X\cap Y)-(X\cap Z)$, and that $(X\cap Y)-(X\cap Z)\subseteq X\cap(Y-Z)$.</p>
<p>I'll show you one of the inclusions: let $a\in X\cap(Y-Z)$. Then $a\in X$, and $a\in Y-Z$. Hence $a\in X$, $a\in Y$, and $a\notin Z$. Since $a\in X$ and $a\in Y$, then $a\in X\cap Y$. Since $a\in X$ and $a\notin Z$, then $a\notin X\cap Z$. Since $a\in X\cap Y$ and $a\notin X\cap Z$, then $a\in (X\cap Y)-(X\cap Z)$. This proves that if $a\in X\cap(Y-Z)$, then $a\in (X\cap Y)-(X\cap Z)$; that is, $X\cap(Y-Z)\subseteq (X\cap Y)-(X\cap Z)$. </p>
<p>Now show that $(X\cap Y)-(X\cap Z)\subseteq X\cap(Y-Z)$.</p></li>
<li><p>Using the properties, you should use the fact that intersections distribute over unions and vice-versa. So from
$$\begin{align*}
(X\cap Y)-(X\cap Z) &= (X\cap Y)\cap(X\cap Z)'\\
&= (X\cap Y)\cap(X'\cup Z')\\
&= (X\cap Y\cap X') \cup (X\cap Y\cap Z').
\end{align*}$$
Can you take it from there?</p></li>
</ol>
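<p>Either approach can also be spot-checked by brute force over all subsets of a small universe (a Python sketch, added for illustration):</p>

```python
from itertools import combinations

U = [0, 1, 2, 3]
subsets = [set(s) for r in range(len(U) + 1) for s in combinations(U, r)]

# Check X ∩ (Y − Z) = (X ∩ Y) − (X ∩ Z) for every triple of subsets.
checked = 0
for X in subsets:
    for Y in subsets:
        for Z in subsets:
            assert X & (Y - Z) == (X & Y) - (X & Z)
            checked += 1
print(checked)  # → 4096 triples, all passing
```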
|
1,848,222 | <p>Very simple and quick question. Usually distribution notation is such that you give the name of the distribution, then its mean, and finally the variance, for example for normal distribution:</p>
<p>$$N(0,1)$$</p>
<p>The 0 means that the distribution has mean zero, and the 1 tells that the variance is one. However, for standard uniform distribution:</p>
<p>$$U(0,1)$$</p>
<p>The zero is the minimum value the distribution generates and 1 is the maximum value. Using the more standard notation it should be:</p>
<p>$$U(0.5, \frac{1}{12})$$</p>
<p>At least it would make more sense to me if it was U]0,1[. Can anyone explain why the notation is so, and whether there are any other exceptions?</p>
| Em. | 290,196 | <p>I don't know where you are getting this
"Usually distribution notation is such that you give the name of the distribution, then its mean, and finally the variance"
from.</p>
<p>As for the uniform distribution, the best way to imagine the uniform distribution is to know where it <em>starts</em> and where it <em>ends</em>. It gives you a quick idea of what the curve looks like. </p>
<p>The parameters of the normal distribution do too. The mean tells you where it is centered and the variance gives you an idea of the "spread".</p>
<p>So they are consistent in that sense. If I tell you the mean and variance of the uniform distribution, then you have to do some calculations to figure out what it looks like. Giving you the end points, you immediately know where it is.</p>
<p>However, if I think about $\text{Gamma}(n,\lambda)$, I don't think about the curve at all. I immediately understand it to be the sum of $n$ independent exponentials, each with rate $\lambda$. The expectation of such a random variable is $n/\lambda$ with variance $n/\lambda^2$. Also, that's what it means to me, but this same notation means something else to other users.</p>
<p>There are more examples that you can look into on your own, like the Beta distribution. </p>
<p>So all in all, I don't believe they are consistent in how they are presented. But they are consistent in that they do convey some useful information fast. And it should not be a surprise: lots of notations and definitions are inconsistent across math and math textbooks.</p>
<p>Also, $U]0,1[$ is not common notation, specifically $]a,b[$. I know what you mean, but it is not universally used.</p>
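<p>To make the two parameterisations of the uniform distribution concrete, a small Python sketch (the helper function is purely illustrative):</p>

```python
from fractions import Fraction

def uniform_moments(a, b):
    """Mean and variance of U(a, b), computed from its endpoints."""
    return Fraction(a + b, 2), Fraction((b - a) ** 2, 12)

# U(0, 1) in endpoint notation is the "U(0.5, 1/12)" of the question.
print(uniform_moments(0, 1))  # → (Fraction(1, 2), Fraction(1, 12))
```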
|
3,953,153 | <p>I need help to prove this:
If <span class="math-container">$\gcd(a,b)=1$</span> then <span class="math-container">$\gcd(a+b, a^2-ab+b^2)$</span> is equal to <span class="math-container">$1$</span> or <span class="math-container">$3$</span>.
I have done this:</p>
<p>Let <span class="math-container">$d$</span> be the g.c.d. of <span class="math-container">$(a+b, a^2-ab+b^2)$</span>, then <span class="math-container">$d$</span> divides <span class="math-container">$a+b$</span> and <span class="math-container">$d$</span> divides <span class="math-container">$a^2-ab+b^2$</span>. That implies <span class="math-container">$d$</span> divides <span class="math-container">$(a+b)n + (a^2-ab+b^2)m$</span>, for some <span class="math-container">$n,m$</span> integers. Let <span class="math-container">$n$</span> and <span class="math-container">$m$</span> be <span class="math-container">$a$</span> and <span class="math-container">$1$</span> respectively, then <span class="math-container">$d$</span> divides <span class="math-container">$2a^2+b^2$</span>. With the same argument but with <span class="math-container">$n=b$</span> we get <span class="math-container">$d$</span> divides <span class="math-container">$a^2+2b^2$</span>. Then <span class="math-container">$d$</span> divides <span class="math-container">$3a^2+3b^2$</span>. That implies <span class="math-container">$3a^2+3b^2 \geq d$</span>, and we get that <span class="math-container">$3 \geq d$</span>, because <span class="math-container">$\gcd(a^2,b^2)=1$</span>. So <span class="math-container">$d$</span> must be <span class="math-container">$3$</span> or <span class="math-container">$1$</span>, because if <span class="math-container">$d =2$</span>, <span class="math-container">$d$</span> has to divide <span class="math-container">$a^2+2b^2$</span> and <span class="math-container">$2a^2+b^2$</span>, but we see that it does not. So <span class="math-container">$d=3$</span> or <span class="math-container">$1$</span>.
I don't know if I did it well.</p>
| fleablood | 280,126 | <p>Well try to factor <span class="math-container">$a^2 -ab +b^2$</span> into terms of <span class="math-container">$a+b$</span> or <span class="math-container">$a,b$</span>.</p>
<p><span class="math-container">$a^2 -ab + b^2 = a^2 + 2ab + b^2 - 3ab= (a+b)^2 -3ab$</span> so</p>
<p><span class="math-container">$\gcd(a+b,a^2 -ab + b^2) =$</span></p>
<p><span class="math-container">$\gcd(a+b, (a+b)(a+b) - 3ab) = $</span></p>
<p><span class="math-container">$\gcd(a+b, -3ab)=\gcd(a+b,3ab)$</span>.</p>
<p>Now... <span class="math-container">$\gcd(a,b) =1$</span> so any factor of <span class="math-container">$a$</span> will not be a factor of <span class="math-container">$b$</span>, so it will not be a factor of <span class="math-container">$a+b$</span>. And no factor of <span class="math-container">$b$</span> will be a factor of <span class="math-container">$a$</span>, so it will not be a factor of <span class="math-container">$a+b$</span>. And the only prime factors of <span class="math-container">$3ab$</span> are either prime factors of <span class="math-container">$a$</span>, or prime factors of <span class="math-container">$b$</span>, or <span class="math-container">$3$</span>. No prime factor of <span class="math-container">$a$</span> or of <span class="math-container">$b$</span> is a factor of <span class="math-container">$a+b$</span>, so the only possible common prime factor of <span class="math-container">$3ab$</span> and <span class="math-container">$a+b$</span> is (possibly) <span class="math-container">$3$</span>. Or there are no common prime factors of <span class="math-container">$3ab$</span> and <span class="math-container">$a+b$</span>.</p>
<p>If there are no common prime factors then <span class="math-container">$\gcd(a+b,3ab) = 1$</span>.</p>
<p>If the only common prime factor is <span class="math-container">$3$</span> then <span class="math-container">$3$</span> is a factor of <span class="math-container">$a+b$</span> but not of either <span class="math-container">$a$</span> nor of <span class="math-container">$b$</span>. So <span class="math-container">$\gcd(a+b, 3ab) =3$</span>.</p>
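<p>The conclusion is easy to test exhaustively for small coprime pairs (a Python sketch; the search range is an arbitrary choice):</p>

```python
from math import gcd

# Collect every value of gcd(a+b, a^2 - ab + b^2) over coprime pairs.
seen = set()
for a in range(1, 60):
    for b in range(1, 60):
        if gcd(a, b) == 1:
            seen.add(gcd(a + b, a * a - a * b + b * b))
print(sorted(seen))  # → [1, 3], and only those two values
```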
|