| qid | question | author | author_id | answer |
|---|---|---|---|---|
40,532 | <p>I'm having some problems understanding the following paragraph, which I read in an analysis script (hopefully I haven't made any translation errors):</p>
<blockquote>
<p>"A map $f:U \rightarrow Y$, where $U$ is open and $X,Y$ are Banach spaces, is continuous at $x' \in U$ if $$f(x')=\lim_{x\rightarrow x'} f(x)=\lim_{h\rightarrow 0} f(x'+h),$$
where $h=x-x'$. We can decompose $h$ in a "<strong>polar</strong>" fashion in $h=ts$, where $\left\Vert h \right\Vert \geq 0$ and $s=\frac{1}{\left\Vert h \right\Vert } h$. Then
$f(x')=\lim\limits_{h\rightarrow 0} f(x'+h)$ iff $f(x'+ts)\rightarrow f(x')$ for $t \rightarrow 0^+$ <strong>uniformly with respect to</strong> $\left\Vert s \right \Vert = 1$.
No matter from which direction $s$ with $\left\Vert s \right\Vert=1$ we approach $x'$, the value of the function has to converge to $f(x')$ with a to all $s$ <strong>common "minimal speed"</strong> ".</p>
</blockquote>
<p>What I don't understand is this:</p>
<p>1) What does it mean to decompose something in a "polar" fashion?</p>
<p>2) I thought only sequences of functions can converge uniformly... and what does it mean if something converges uniformly with respect to another thing?</p>
<p>3) What does the author mean by "common 'minimal speed'"?</p>
<p>Thanks in advance.</p>
| mac | 9,390 | <p>I guess in (1) it should be $t=\|h\|$, so the analogy would be with the polar decomposition of a complex number $z=ts$ where $t=|z|$ and $s=z/|z|$.</p>
<p>In (2), the uniform convergence probably means that $\sup\limits_{\|s\|=1} \|f(x'+ts)-f(x')\|\to 0$ as $t\to 0_+$.</p>
<p>I suppose the minimal speed thing is meant to be a way of understanding the uniformity condition in (2). One way to keep convergence but violate this uniform convergence would be if you could let $h$ approach $0$ along different paths $\gamma_1,\gamma_2,\dots,$ say with $\|\gamma_i(t)\|=\|\gamma_j(t)\|$ for every $i,j$, but so that the rate of convergence of $f(x'+\gamma_i(t))$ to $f(x')$ gets slower and slower as $i$ increases. So by saying that there should be a lower bound on such rates of convergence (a "common minimal speed", I think) you'd avoid that issue.</p>
|
65,083 | <p>Rotman's book <em>An Introduction to the Theory of Groups</em> (Fourth Edition) asks, on page 22, Exercise 2.8, to show that <span class="math-container">$S(n)$</span> cannot be embedded in <span class="math-container">$A(n+1)$</span>, where <span class="math-container">$S(n)$</span> = the symmetric group on <span class="math-container">$n$</span> elements, and <span class="math-container">$A(n)$</span> = the alternating group on <span class="math-container">$n$</span> elements. I have a proof but it uses Bertrand's Postulate, which seems a bit much for page 22 of an introductory text. Does anyone have a more appropriate (i.e., easier) proof?</p>
| Aaron Meyerowitz | 8,008 | <p>One could ask Rotman. It may be that in a reorganization of the material in the book that problem ended up earlier than the material needed for the (intended) answer. On the other hand, it is not a bad experience for students to see problems where the complete solution seems slightly out of reach. Here, one can prove several small cases and see various potential directions for a general proof. Which will work? Which are in the spirit of the subject? Of course it is best to set up the expectation that there might be problems like this.</p>
|
1,712,481 | <p><span class="math-container">$$I=\displaystyle\int\frac{dx}{(3 + 2\sin x - \cos x)}$$</span></p>
<p>If <span class="math-container">$$\tan\left(\frac{x}{2}\right)=u$$</span></p>
<p>or <span class="math-container">$$x=2\cdot\tan^{-1}(u)$$</span></p>
<p>Then,</p>
<p><span class="math-container">$$\sin{x}=\dfrac{2u}{1+u^2}$$</span></p>
<p><span class="math-container">$$\cos{x}=\dfrac{1-u^2}{1+u^2}$$</span></p>
<p><span class="math-container">$$dx=\dfrac{2}{1+u^2}\,du$$</span></p>
<p>Substitute <span class="math-container">$$\tan\left(\dfrac{x}{2}\right)=u$$</span></p>
<p>Let us simplify the integrand before integrating</p>
<p><span class="math-container">$$\dfrac{1}{3+2\sin{x}-\cos{x}}$$</span></p>
<p><span class="math-container">$$=\dfrac{1}{3+2\frac{2u}{1+u^2}-\frac{1-u^2}{1+u^2}}$$</span></p>
<p><span class="math-container">$$=\dfrac{1}{3+\frac{4u-1+u^2}{1+u^2}}$$</span></p>
<p><span class="math-container">$$=\dfrac{1}{\frac{4u-1+u^2+3+3u^2}{1+u^2}}$$</span></p>
<p><span class="math-container">$$=\dfrac{1+u^2}{4u^2+4u+2}$$</span></p>
<p><span class="math-container">$$=\dfrac{1+u^2}{(2u+1)^2+1}$$</span></p>
<p><span class="math-container">$$I=\displaystyle\int\dfrac{1+u^2}{(2u+1)^2+1}\cdot\dfrac{2}{1+u^2}\ du$$</span></p>
<p><span class="math-container">$$=\displaystyle\int\dfrac{1}{(2u+1)^2+1}\ 2\,du$$</span></p>
<p>Now,</p>
<p>Take : <span class="math-container">$$v=2u+1$$</span></p>
<p>Therefore, <span class="math-container">$$dv=2\,du$$</span></p>
<p><span class="math-container">$$I=\displaystyle\int\dfrac{1}{v^2+1}\ dv$$</span></p>
<p><span class="math-container">$$I=\tan^{-1}(v)$$</span></p>
<p>Substitute everything back</p>
<p><span class="math-container">$$I=\tan^{-1}(2u+1)$$</span></p>
<p><span class="math-container">$$I=\tan^{-1}\left(2\tan\left(\frac{x}{2}\right)+1\right)$$</span></p>
<p><span class="math-container">$$\boxed{\displaystyle\int\frac{dx}{(3 + 2\sin x - \cos x)} = \tan^{-1}\left(2\tan\left(\frac{x}{2}\right)+1\right)+C}$$</span></p>
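<p>As a quick sanity check (not part of the derivation), the boxed antiderivative can be verified numerically against a quadrature of the integrand; the interval and step count below are arbitrary choices:</p>

```python
import math

def integrand(x):
    return 1.0 / (3 + 2 * math.sin(x) - math.cos(x))

def antiderivative(x):
    # the boxed result (constant of integration omitted)
    return math.atan(2 * math.tan(x / 2) + 1)

# Composite Simpson's rule on [0, 1]
n, a, b = 1000, 0.0, 1.0
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
numeric = s * h / 3

# By the fundamental theorem of calculus, the two values should agree
exact = antiderivative(b) - antiderivative(a)
print(abs(numeric - exact))  # tiny, on the order of the quadrature error
```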
<p>I know that my approach is also not so difficult, but I still think there must be a relatively easy approach to this integral. I have tried many different things using trigonometric identities, but nothing seems to lead to the solution easily. Kindly help me out.</p>
| jim | 289,829 | <p>You could write $2\sin(x)-\cos(x) = \sqrt{5} (\frac{2}{\sqrt{5}} \sin x - \frac{1}{\sqrt{5}} \cos x) = \sqrt{5} \sin (x - \alpha)$, with $\tan \alpha = \frac{1}{2}$, so that the required integral becomes $\int \dfrac{dx}{3 + \sqrt{5} \sin(x-\alpha)}$ and then change the integration variable from $x$ to $y = x - \alpha$. Your integral is then of the form $\int \dfrac{dy}{A + B\sin y}$ which is a bit easier to solve using standard methods.</p>
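<p>The rewriting step in this answer is easy to confirm numerically at a sample point (the point below is an arbitrary choice of mine):</p>

```python
import math

alpha = math.atan(0.5)  # tan(alpha) = 1/2
x = 0.7                 # arbitrary sample point
lhs = 2 * math.sin(x) - math.cos(x)
rhs = math.sqrt(5) * math.sin(x - alpha)
print(lhs, rhs)  # the two values agree
```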
|
131,482 | <p>I noticed strange behaviour of <code>Export[]</code> when saving graphics to EPS. Here is an example figure I am exporting:</p>
<pre><code>fig =
Show[
ListPolarPlot[
Table[{a, .9}, {a, 0, 2 Pi, .001}],
PlotRange -> {{0, 1.1}, {0, 1.1}}, GridLines -> Automatic,
PlotStyle -> PointSize[.005]],
Graphics[
Inset[
ListPolarPlot[Table[{a, .1}, {a, 0, 2 Pi, .001}], Axes -> False,
PlotStyle -> PointSize[.05]],
{.4, .4}]
]
]
</code></pre>
<p>The exported figure does not look right on closer inspection; the horizontal grid lines seem to be rasterised:
<a href="https://i.stack.imgur.com/ZPdGn.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZPdGn.png" alt="zoom into the figure 1"></a></p>
<p>If I now apply <code>GridLinesStyle -> Thick</code>, the problem disappears:
<a href="https://i.stack.imgur.com/m9PXk.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m9PXk.png" alt="zoom into the figure 2"></a> </p>
<p>Any ideas why that happens? I use Mathematica for MacOS v. 11.0.1.0.</p>
<p>Can anyone with MMA v11 reproduce the issue?</p>
| Jose Enrique Calderon | 11,974 | <p>First, use the correct exporting procedure:</p>
<pre><code>Export["/Users/jecalderon1/Documents/testFigreu.eps", fig, "EPS"]
</code></pre>
<p>Of course, you have to change the directory path to a directory on your own machine.</p>
<p>Then open the file in Adobe Illustrator. It will render the correct way.</p>
|
3,272,030 | <p>Let X, Y be random variables (i.e. each one maps elements of a sample space to real numbers). In particular, let X be such that <span class="math-container">$X:\Omega \rightarrow \mathcal{R}$</span>, where <span class="math-container">$\Omega=\{\omega_1, ..., \omega_n \}$</span> and <span class="math-container">$X(\omega_i)=i$</span></p>
<p>Then the conditional variance <span class="math-container">$Var(Y|X(\omega_i))$</span> is a random variable because it maps an element of a sample space of <span class="math-container">$\Omega$</span> to <span class="math-container">$\mathcal{R}$</span>.</p>
<p>Now consider <span class="math-container">$Var(Y|X(\omega_i)=1 \bigcup X(\omega_i)=2)$</span>. This expression maps a set <span class="math-container">$\{\omega_1, \omega_2 \}$</span> to <span class="math-container">$\mathcal{R}$</span>. So it no longer seems to satisfy the definition of a random variable. Then what is it? A measure? Perhaps not.
Thank you for answering.</p>
| pre-kidney | 34,662 | <p>By definition,
<span class="math-container">$$
\textrm{Var}(Y\mid X):=\mathbb E\Bigl[\bigl(Y-\mathbb E(Y\mid X)\bigr)^2\mid X\Bigr]=\mathbb E(Y^2\mid X)-\mathbb E(Y\mid X)^2.
$$</span>
Thus, the conditional variance is a random variable, in the same way that the conditional expectation <span class="math-container">$\mathbb E(Y\mid X)$</span> is. Conceptually, the variance is the "same type of object" as the expectation, in this regard.</p>
<p>Now, one may also consider an event <span class="math-container">$A\subseteq \Omega$</span> (the sample space) and ask what is <span class="math-container">$\textrm{Var}(Y\mid A)$</span>. And it follows the exact same behavior as the conditional expectation, namely that we define
<span class="math-container">$$
\textrm{Var}(Y\mid A):=\mathbb E\Bigl[\bigl(Y-\mathbb E(Y\mid A)\bigr)^2\mid A\Bigr]=\mathbb E(Y^2\mid A)-\mathbb E(Y\mid A)^2.
$$</span></p>
<p>By definition, <span class="math-container">$$\mathbb E(Y\mid A):=\frac{\mathbb E(Y\cdot 1_A)}{\mathbb E(1_A)},$$</span>
where <span class="math-container">$1_A$</span> denotes the indicator of the set <span class="math-container">$A$</span>. It is a random variable taking the value <span class="math-container">$1$</span> on <span class="math-container">$A$</span> and <span class="math-container">$0$</span> off <span class="math-container">$A$</span>. Note also that <span class="math-container">$\mathbb E(1_A)=\mathbb P(A)$</span>, I just wrote it that way in the denominator of the formula for consistency with the numerator.</p>
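<p>A minimal discrete sketch of these definitions (the fair-die sample space is my own illustrative assumption, not from the question):</p>

```python
from fractions import Fraction

omega = [1, 2, 3, 4, 5, 6]   # a fair die
P = Fraction(1, 6)           # uniform probability of each outcome

def E(g):
    # expectation of a function g of the outcome
    return sum(g(w) * P for w in omega)

def ind_A(w):
    # indicator of the event A = {outcome is even}
    return 1 if w % 2 == 0 else 0

Y = lambda w: w

# E(Y | A) = E(Y * 1_A) / E(1_A)
E_Y_given_A = E(lambda w: Y(w) * ind_A(w)) / E(ind_A)
# Var(Y | A) = E(Y^2 | A) - E(Y | A)^2
Var_Y_given_A = E(lambda w: Y(w) ** 2 * ind_A(w)) / E(ind_A) - E_Y_given_A ** 2

print(E_Y_given_A, Var_Y_given_A)  # 4 and 8/3
```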
<hr>
<p>Per the discussion below, there was an even more basic question that I should clarify. A random variable is a function from the sample space <span class="math-container">$\Omega$</span> to the real numbers. This means it assigns a real number to each <strong>element</strong> <span class="math-container">$\omega\in \Omega$</span>. On the other hand, when we condition on an event we obtain a <strong>set function</strong> on <span class="math-container">$\Omega$</span>, or in other words, a function that assigns values to <strong>subsets</strong> of <span class="math-container">$\Omega$</span> and not to individual elements of <span class="math-container">$\Omega$</span>. In this case, being even more precise, we have a <strong>partially defined set function</strong> which means that not every subset is assigned a value - it is only those subsets which are measurable and are assigned a positive measure for which the conditional variance is defined.</p>
<p>To compare and contrast the two types of mathematical objects, conditional variance with respect to a random variable is a function from <span class="math-container">$\Omega$</span> to <span class="math-container">$\mathbb R$</span>, whereas conditional variance with respect to an event is a partially defined function from <span class="math-container">$P(\Omega)$</span> to <span class="math-container">$\mathbb R$</span> (the power set of <span class="math-container">$\Omega)$</span>.</p>
|
3,551,976 | <p>Let <span class="math-container">$E$</span> be a uniformly convex Banach space, <span class="math-container">$K$</span> convex and closed in <span class="math-container">$E$</span>, and <span class="math-container">$x\in E\setminus K$</span>.</p>
<p>Can someone give a concise proof that there is a unique <span class="math-container">$y\in K$</span> s.t. <span class="math-container">$dist(x,K) = \Vert x-y\Vert$</span> using the notion of uniform convexity exactly as given on the relevant <a href="https://en.wikipedia.org/wiki/Uniformly_convex_space" rel="nofollow noreferrer">wikipedia page</a> and without using weak compactness of reflexive spaces?</p>
| orangeskid | 168,051 | <p>HINT:</p>
<p>We may assume that <span class="math-container">$x=0$</span> and <span class="math-container">$d(0, K) = 1$</span>. </p>
<p>Consider <span class="math-container">$y_n \in K$</span> so that <span class="math-container">$\|y_n \| < 1 + \frac{1}{n}$</span>.
Since <span class="math-container">$K$</span> is convex we have
<span class="math-container">$$1\le \|\frac{y_m+y_n}{2}\|$$</span>
Let <span class="math-container">$y_n'=\frac{y_n}{\|y_n\|}$</span>. We have <span class="math-container">$\|y_n - y_n'\|< \frac{1}{n}$</span>.
It follows that
<span class="math-container">$$\|\frac{y_m+y_n}{2}- \frac{y'_m+y'_n}{2}\|< \frac{1/m+1/n}{2}$$</span>
and so
<span class="math-container">$$ \|\frac{y'_m+y'_n}{2}\|> 1-\frac{1/m+1/n}{2}$$</span></p>
<p>Now uniform convexity means: for every <span class="math-container">$\epsilon>0$</span> there is a <span class="math-container">$\delta>0$</span> such that <span class="math-container">$\|x\|=\|y\|=1$</span> and <span class="math-container">$\|\frac{x+y}{2}\|>1-\delta$</span>
imply <span class="math-container">$\|x-y\|< \epsilon$</span>. Therefore, since
<span class="math-container">$$1- \|\frac{y'_m+y'_n}{2}\|\to 0$$</span>
as <span class="math-container">$m,n\to \infty$</span> we get
<span class="math-container">$$\|y'_m-y'_n\|\to 0$$</span>
and so
<span class="math-container">$$\|y_m - y_n\|\to 0$$</span>
and so <span class="math-container">$y_n$</span> is convergent to <span class="math-container">$y\in K$</span> so that <span class="math-container">$\|y\|=1$</span>.</p>
<p>Obs: The proof would also show that if <span class="math-container">$d(x,y_n) \to d(x, K)$</span> then <span class="math-container">$y_n$</span> converges to the unique closest point in <span class="math-container">$K$</span> to <span class="math-container">$x$</span>. </p>
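<p>A concrete finite-dimensional illustration of this hint (the Euclidean plane is uniformly convex; the set <code>K</code> below is my own example, not from the question):</p>

```python
import math

# K = {(a, b) : a >= 1} is closed and convex; for x = 0 we have d(0, K) = 1,
# and the unique closest point is (1, 0).
def norm(v):
    return math.hypot(v[0], v[1])

# A minimizing sequence: y_n in K with ||y_n|| < 1 + 1/n
ys = [(1.0, 1.0 / n) for n in range(1, 200)]

last = ys[-1]
print(norm(last))                      # close to d(0, K) = 1
print(norm((last[0] - 1.0, last[1])))  # close to 0: y_n -> the closest point (1, 0)
```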
|
1,988,021 | <p>Consider $n$ objects each with an associated probability $p_i$, $i\in\{1,\dots,n\}$. If I sample objects $k$ times independently with replacement according to the probability distribution defined by the $p_i$, how does one compute the expected number of times you sample an object you have sampled before?</p>
<p>We can assume that $n > k$.</p>
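<p>One way to make the question concrete (the closed form below is my own addition, obtained by linearity of expectation, and is checked against a Monte Carlo simulation over an arbitrary example distribution):</p>

```python
import random

def expected_repeats(p, k):
    # A draw is a "repeat" if its object has already appeared, so the number
    # of repeats equals k minus the number of distinct objects drawn; by
    # linearity of expectation, E[#repeats] = k - sum_i (1 - (1 - p_i)^k).
    return k - sum(1 - (1 - pi) ** k for pi in p)

def simulate(p, k, trials=20000, seed=0):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        draws = rng.choices(range(len(p)), weights=p, k=k)
        total += k - len(set(draws))
    return total / trials

p = [0.5, 0.3, 0.1, 0.1]   # arbitrary example distribution
k = 3
print(expected_repeats(p, k), simulate(p, k))  # the two values should be close
```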
| Jan Eerland | 226,665 | <p>HINT, substitute $u=x^2$ and $\text{d}u=2x\space\text{d}x$:</p>
<p>$$\mathcal{I}(x)=\int\sqrt{16x^2+8+\frac{1}{x^2}}\space\space\text{d}x=\int\frac{\sqrt{16x^4+8x^2+1}}{x}\space\text{d}x=$$
$$\frac{1}{2}\int\frac{\sqrt{16u^2+8u+1}}{u}\space\text{d}u=\frac{1}{2}\int\frac{\sqrt{(1+4u)^2}}{u}\space\text{d}u$$</p>
|
184,142 | <p>I am currently an undergraduate math student. (In fact, freshmen.)</p>
<p>I know that usually abstract algebra is taught somehow late in the undergraduate course, and curious how studies of abstract algebra at graduate level differ from studies at undergraduate level.</p>
<p>So, things like what gets new treatment, or what is learned new are what I want to know.</p>
| rschwieb | 29,335 | <p>It differs in all the same ways that all graduate courses differ from undergraduate courses. They usually cover more material more quickly, and there is an expectation of better performance and ability from the students. Refining your ability to write proofs well is the best skill to develop.</p>
<p>If you're asking what specific material is covered, then that's impossible to answer. That will vary by teacher.</p>
|
279,238 | <p>I want to create a regular polygon from the initial two points <span class="math-container">$A$</span>, <span class="math-container">$B$</span> and number of vertices <span class="math-container">$n$</span>,<br />
<a href="https://i.stack.imgur.com/dKr9A.png" rel="noreferrer"><img src="https://i.stack.imgur.com/dKr9A.png" alt="enter image description here" /></a></p>
<p><code>regularPolygon[{0, 0}, {1, 0}, 3]</code> gives <code>{{0, 0}, {1, 0}, {1/2, Sqrt[3]/2}}</code></p>
<p><code>regularPolygon[{x1, y1}, {x2, y2}, 4]</code> gives <code>{{x1, y1}, {x2, y2}, {x2 + y1 - y2, -x1 + x2 + y2}, {x1 + y1 - y2, -x1 + x2 + y1}}</code></p>
<p>I found a related function <a href="http://reference.wolfram.com/language/ref/CirclePoints.html" rel="noreferrer">CirclePoints</a>, it seems not suitable. Is there a simple way to implement such a function? Maybe you can use iteration.</p>
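<p>For reference, here is one simple construction sketched in Python rather than Mathematica (my own approach: walk the boundary, rotating the current edge by the exterior angle at each step):</p>

```python
import math

def regular_polygon(p1, p2, n):
    # Successive edge vectors of a regular n-gon, traversed counterclockwise,
    # differ by a rotation through the exterior angle 2*pi/n.
    ext = 2 * math.pi / n
    pts = [p1, p2]
    ex, ey = p2[0] - p1[0], p2[1] - p1[1]
    for _ in range(n - 2):
        ex, ey = (ex * math.cos(ext) - ey * math.sin(ext),
                  ex * math.sin(ext) + ey * math.cos(ext))
        px, py = pts[-1]
        pts.append((px + ex, py + ey))
    return pts

print(regular_polygon((0, 0), (1, 0), 3))
# third vertex is approximately (0.5, sqrt(3)/2), matching the example above
```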
| kglr | 125 | <p>For variety's sake, we can also use the given inputs to construct the circumradius, circumcenter and starting angle and use these with the three-argument form of <code>RegularPolygon</code>:</p>
<pre><code>ClearAll[regularPoly]
regularPoly[p1_, p2_, n_] :=
Module[{startingangle = Pi/n - Pi/2 + ArcTan @@ (p2 - p1),
circumradius = Norm[p2 - p1]/2/Sin[Pi/n], circumcenter},
circumcenter = p2 - circumradius Through@{Cos, Sin}@startingangle;
RegularPolygon[circumcenter, {circumradius, startingangle}, n]]
</code></pre>
<p><em><strong>Examples:</strong></em></p>
<pre><code>{p1, p2} = {{2, 2}, {5, 5}};
Graphics[{FaceForm[],
Table[{EdgeForm@RandomColor[], regularPoly[p1, p2, j]}, {j, 3, 9}],
AbsoluteThickness@5, Red, CapForm["Round"], Line@{p1, p2},
AbsolutePointSize@25, Point@{p1, p2}, White,
MapIndexed[Text[Subscript[p, #2[[1]]], #] &] @ {p1, p2}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/jJbIS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jJbIS.png" alt="enter image description here" /></a></p>
|
2,735,179 | <p>I am trying to sketch the curve given by the following two parametric equations.</p>
<p>$x=\cos^3\theta$</p>
<p>$y=\sin^3\theta$</p>
<p>Or the single cartesian equation:</p>
<p>$x^{\frac{2}{3}}+y^{\frac{2}{3}}=1$</p>
<p>So,</p>
<p>I put the graph in Desmos and got:</p>
<p><a href="https://i.stack.imgur.com/HtFNG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/HtFNG.png" alt="enter image description here"></a> </p>
<p>Now, the general advice for parametric curves is to create a table of values (by hand or with a calculator) and plot the (x,y) coordinates roughly on a graph.</p>
<p>Simple enough.</p>
<p>My problem is this:</p>
<p>The circle with the equation $x^2 + y^2 =1$ gives the graph:</p>
<p><a href="https://i.stack.imgur.com/fwZLh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/fwZLh.png" alt="enter image description here"></a></p>
<p>Now how do I know whether the graph curves "inwards" as in the first graph or "outwards" like a circle?</p>
<p>I am aware of increasing/decreasing function and implicit differentiation.</p>
<p>$\frac{d}{dx}[x^{\frac{2}{3}}+y^{\frac{2}{3}}]=\frac{d}{dx}[1]$</p>
<p>$\frac{dy}{dx}= - \sqrt[3]{\frac{y}{x}}$</p>
<p>Hence in the 1st quadrant where $x>0$ and $y>0$, and in the 3rd quadrant where $x<0$ and $y<0$ (same signs)</p>
<p>$\frac{dy}{dx}<0$ therefore it a decreasing function here</p>
<p>In 2nd quadrant where $x<0$ but $y>0$ and in the 4th quadrant where $x>0$ but $y<0$ (opposite signs)</p>
<p>$\frac{y}{x}<0$</p>
<p>$\therefore \frac{dy}{dx} >0$ Hence increasing function</p>
<p>which should explain the shape.</p>
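<p>The same sign pattern can also be read off directly from the parametrization, avoiding the implicit differentiation (a short check):</p>

```latex
\frac{dy}{dx}
  = \frac{dy/d\theta}{dx/d\theta}
  = \frac{3\sin^2\theta\,\cos\theta}{-3\cos^2\theta\,\sin\theta}
  = -\tan\theta
  = -\sqrt[3]{\frac{y}{x}},
\qquad \text{since } \sin\theta = y^{1/3},\ \cos\theta = x^{1/3}.
```

<p>So the slope is negative exactly when $\tan\theta$ is positive, i.e. in the first and third quadrants, in agreement with the analysis above.</p>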
<p>Does anyone have a less mechanical way of doing this as I feel I worked "backwards" as I knew what I was aiming for once I had seen the correct graph?</p>
| David Quinn | 187,299 | <p>The more times you repeat the experiment the better the estimate of the true probability. This is the Strong Law of Large Numbers <a href="https://en.m.wikipedia.org/wiki/Law_of_large_numbers" rel="nofollow noreferrer">https://en.m.wikipedia.org/wiki/Law_of_large_numbers</a></p>
|
3,243,655 | <p>Question:</p>
<blockquote>
<p>Prove the equation <span class="math-container">$2x - 6y = 3$</span> has no integer solutions in <span class="math-container">$x$</span> and <span class="math-container">$y$</span>.</p>
</blockquote>
<p>I need to verify my proof. I think I did it correctly, but am not fully sure since I don't have solutions in my book. I basically proved by contradiction and assumed there was an integer solution for x or y. I then solved for <span class="math-container">$x$</span> and <span class="math-container">$y$</span> in <span class="math-container">$2x - 6y = 3$</span>, getting <span class="math-container">$x = 3y + 3/2$</span> and <span class="math-container">$y = x/3 - 1/2$</span>. Since both <span class="math-container">$x,y$</span> are then not integers, I said it contradicts that <span class="math-container">$x$</span> or <span class="math-container">$y$</span> had an integer solution, meaning the original statement was correct. Did I prove this right, or should I redo it?</p>
| heropup | 118,193 | <p>Alternatively, you may write <span class="math-container">$$2x - 6y = 2(x-3y).$$</span> Since <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are integers, so must be <span class="math-container">$x-3y$</span>. So <span class="math-container">$2(x-3y)$</span> must be an even integer, clearly being divisible by <span class="math-container">$2$</span>. But <span class="math-container">$3$</span> is odd.</p>
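<p>A brute-force check over a small window of integers (my own addition, purely to illustrate the parity argument):</p>

```python
# 2x - 6y = 2(x - 3y) is always even, so it can never equal the odd number 3.
solutions = [(x, y)
             for x in range(-50, 51)
             for y in range(-50, 51)
             if 2 * x - 6 * y == 3]
print(solutions)  # [] -- no integer solutions in this window
```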
|
3,337,147 | <p>Let <span class="math-container">$R$</span> be a unique factorization domain (UFD). Given <span class="math-container">$a,b \in R$</span> not simultaneously equal to zero, an element <span class="math-container">$d \in R$</span> is by definition a greatest common divisor (GCD) of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> provided:</p>
<ol>
<li><span class="math-container">$d \mid a$</span> and <span class="math-container">$d \mid b$</span>.</li>
<li>For all <span class="math-container">$d' \in R$</span> such that <span class="math-container">$d' \mid a$</span> and <span class="math-container">$d' \mid b$</span>, we have that <span class="math-container">$d' \mid d$</span>.</li>
</ol>
<p>Let <span class="math-container">$U(R) := \{\,\text{units in $R$}\,\}$</span> and assume <span class="math-container">$a,b \neq 0$</span>, <span class="math-container">$a,b \notin U(R)$</span>. Since <span class="math-container">$R$</span> is a UFD, there exist <span class="math-container">$u,v \in U(R)$</span>, irreducible elements <span class="math-container">$p_1,\dots,p_s \in R$</span> which are mutually non associate, <span class="math-container">$d_1,\dots,d_s,e_1,\dots,e_s \in \mathbb{N}$</span> such that:
<span class="math-container">\begin{equation}
a = u \cdot p_1^{d_1} \cdots p_s^{d_s} \, , \quad b = v \cdot p_1^{e_1} \cdots p_s^{e_s}
\end{equation}</span>
For all <span class="math-container">$1 \leq i \leq s$</span>, let <span class="math-container">$f_i := \min\{d_i,e_i\}$</span>. We want to prove that a GCD of <span class="math-container">$a$</span> and <span class="math-container">$b$</span> is:
<span class="math-container">\begin{equation}
c := p_1^{f_1} \cdots p_s^{f_s}
\end{equation}</span></p>
| 0CT0 | 699,175 | <p>No, it is not correct. For the third, what are the chances that there are 2 balls left of the same color, and what are the chances that there are two left that are not the same color? They are not both equally likely, so you do not have <span class="math-container">$1/2\cdot(1+1/2)$</span>.</p>
|
469,521 | <p>$\theta$, $\phi$ are integrable random variables on a probability space $(\Omega,\mathcal{F},P)$, and $\mathcal{G}$ is a $\sigma$-field on $\Omega$ contained in $\mathcal{F}$.
Now we want to prove $E(\theta\mid\mathcal{G})=E(\theta)$ if $\theta$ is independent of $\mathcal{G}$. The proof is, for any $B\in \mathcal{G}$, by independence, $\theta$ and $1_{B}$ are independent. And,$$\int_{B}E(\theta)dP=E(\theta)E(1_{B})=E(\theta 1_{B})=\int_{B}\theta dP,$$ and the conclusion follows. I'm really confused with the first equality $\int_{B}E(\theta)dP=E(\theta)E(1_{B})$. Can anyone explain this to me? Thanks!</p>
| Did | 6,179 | <p>For every real number $a$, one has: $\int\limits_Ba\mathrm dP=a\int\limits_B\mathrm dP=aP[B]$. Furthermore, $P[B]=E[I_B]$. Apply these to $a=E[\theta]$.</p>
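<p>A tiny discrete example of both equalities (the two-coin sample space is my own illustrative assumption, not from the question):</p>

```python
from fractions import Fraction
from itertools import product

# Two independent fair coin flips; theta depends only on the first flip,
# while the event B lies in the sigma-field generated by the second flip.
omega = list(product([0, 1], [0, 1]))
P = Fraction(1, 4)

theta = lambda w: w[0]
B = [w for w in omega if w[1] == 1]

E_theta = sum(theta(w) * P for w in omega)
P_B = sum(P for _ in B)

lhs = E_theta * P_B                 # integral of the constant E(theta) over B
rhs = sum(theta(w) * P for w in B)  # integral of theta over B, i.e. E(theta * 1_B)
print(lhs, rhs)  # both equal 1/4, as independence predicts
```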
|
3,253,145 | <p>I am doing this question
<a href="https://i.stack.imgur.com/AEaIn.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AEaIn.jpg" alt="enter image description here" /></a>
As it can't be solved using separation of variables (my assumption according to what i did, after checking by substituting <span class="math-container">$w(y,t)=f(y)g(t)$</span> , and getting a term at last which is <strong>not depending on only single variable</strong>)</p>
<blockquote>
<ol>
<li><p>Is my assumption right?</p>
</li>
<li><p>So how do I solve this equation? I am stuck.</p>
</li>
</ol>
</blockquote>
| StephenG - Help Ukraine | 298,172 | <p>Any odd integer.</p>
<p><span class="math-container">$$\sum_{i=0}^n 2^i = 2^{n+1}-1$$</span></p>
<p>That's an odd number.</p>
<p>Now you're subtracting :</p>
<p><span class="math-container">$$2\sum_{i=0}^n 2^i a_i$$</span></p>
<p>where <span class="math-container">$a_i$</span> is either <span class="math-container">$1$</span> or <span class="math-container">$0$</span></p>
<p>That's an even number.</p>
<p>Your sequence is formed by subtracting an even number from an odd number, which is an odd number.</p>
|
301,038 | <p>If you want to show that a sequence $(a_{n})$ in $\mathbb{R}$ is convergent, when is it sufficient to show that there is a number $b\in\mathbb{R}$ such that
$$ \liminf a_{n} \geq b \geq \limsup a_{n}$$</p>
<p>In particular, I have a situation where my sequence is bounded and I wanted to use this approach, but I'm not sure I really understand what is going on and why or if this works.</p>
<p>Thanks for any illumination!</p>
| Aeolian | 58,941 | <p>One way to think about this is that the $\liminf a_n$ is the smallest subsequential limit (the smallest limit of any subsequence of $a_n$), and the $\limsup a_n$ is the largest subsequential limit. So if you can show that $\liminf a_n \geq b \geq \limsup a_n$, then you will have in fact shown $\liminf a_n = b = \limsup a_n$ because we will always have $\liminf a_n \leq \limsup a_n$. (The <a href="https://en.wikipedia.org/wiki/Limit_superior_and_limit_inferior" rel="nofollow">Wikipedia article</a> is not bad reading.)</p>
<p>But sometimes it is not clear what the actual limit of the sequence will be, so it is nice to have other criteria for convergence that do not require explicitly finding the limit of the sequence. Have you encountered the Cauchy criterion yet?</p>
<p>In fact, a sequence $a_n$ converges iff $\liminf a_n = \limsup a_n$ iff $a_n$ satisfies the Cauchy criterion (and, if $a_n$ converges, then $\liminf a_n = \limsup a_n = \lim a_n$).</p>
<blockquote>
<p>I have a situation where my sequence is bounded</p>
</blockquote>
<p>Another possible option: If the sequence is bounded and also monotonic, then it will converge. But your sequence may not have this condition.</p>
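<p>A quick numerical illustration of the $\liminf$/$\limsup$ criterion discussed above (the two sequences below are my own arbitrary examples):</p>

```python
# Approximate liminf/limsup by taking inf/sup over a long tail.
N = 10000
a = [(-1) ** n * (1 + 1 / n) for n in range(1, N)]  # liminf = -1, limsup = 1: diverges
b = [(-1) ** n / n for n in range(1, N)]            # liminf = limsup = 0: converges

def tail_inf_sup(seq, m):
    tail = seq[m:]
    return min(tail), max(tail)

print(tail_inf_sup(a, 5000))  # close to (-1, 1)
print(tail_inf_sup(b, 5000))  # both entries close to 0
```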
|
3,232,766 | <blockquote>
<p>Is <span class="math-container">$10^{100}$</span> (Googol) bigger than <span class="math-container">$100!$</span>?</p>
<p>If <span class="math-container">$10^{100}$</span> is called as Googol, does <span class="math-container">$100!$</span> have any special name to be called, apart from being called as "100 factorial"?</p>
</blockquote>
<hr />
<p>I ask this question because I hear about how big the number <span class="math-container">$10^{100}$</span> is more often than about <span class="math-container">$100!$</span>. If <span class="math-container">$100!$</span> is bigger than <span class="math-container">$10^{100}$</span>, then why don't we give more focus to <span class="math-container">$100!$</span> than to the other number? Because to me, <span class="math-container">$100!$</span> looks simpler.</p>
| Thomas | 89,516 | <p>There was previously an error in the algebra here, as pointed out in the comments. I try to fix the error following the same approach:</p>
<p><span class="math-container">$100!=(1\times..\times 10)\times(11\times..\times 20)\times...\times(91\times..\times 100)=A_1...A_{10}$</span></p>
<p>so we estimate <span class="math-container">$A_i \ge 10^{10}$</span> for <span class="math-container">$i=2,..9$</span>.</p>
<p>Instead we write <span class="math-container">$A_1A_{10}=(1\times 100)\times(2\times 99)\times(3 \times 98)\times...\times(10 \times 91)\ge (10^2)^{10}$</span>.</p>
<p>Combining: <span class="math-container">$100!\ge (10^{10})^8 \times (10^2)^{10}=(10^{10})^{10}=10^{100}$</span>.</p>
|
3,232,766 | <blockquote>
<p>Is <span class="math-container">$10^{100}$</span> (Googol) bigger than <span class="math-container">$100!$</span>?</p>
<p>If <span class="math-container">$10^{100}$</span> is called as Googol, does <span class="math-container">$100!$</span> have any special name to be called, apart from being called as "100 factorial"?</p>
</blockquote>
<hr />
<p>I ask this question because I hear about how big the number <span class="math-container">$10^{100}$</span> is more often than about <span class="math-container">$100!$</span>. If <span class="math-container">$100!$</span> is bigger than <span class="math-container">$10^{100}$</span>, then why don't we give more focus to <span class="math-container">$100!$</span> than to the other number? Because to me, <span class="math-container">$100!$</span> looks simpler.</p>
| jacky | 14,096 | <p>Using <span class="math-container">$$n!>\bigg(\frac{n}{3}\bigg)^{n}, n>8$$</span></p>
<p><span class="math-container">$$100!>\bigg(\frac{100}{3}\bigg)^{100}>10^{100}$$</span></p>
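<p>Both claims are easy to confirm directly (a one-off check of mine, not part of either argument):</p>

```python
import math

googol = 10 ** 100
f100 = math.factorial(100)

print(f100 > googol)            # True: 100! is bigger than a googol
print(f100 > (100 / 3) ** 100)  # True: the bound n! > (n/3)^n used above
```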
|
794,032 | <p>Prove by induction that $\overline{A_1 \cup A_2 \cup \cdots \cup A_n} = A_1^c \cap A_2^c \cap \cdots \cap A_n^c$</p>
<p>My approach: the basic step is true, $\overline{A_1} = A_1^c$,</p>
<p>then assume $\overline{A_1 \cup A_2 \cup \cdots \cup A_k} = A_1^c \cap A_2^c \cap \cdots \cap A_k^c$, and prove the case $k+1$ is true. How should I do that?</p>
| Valent | 92,608 | <blockquote>
<p><strong>Hint:</strong> Your basic step must be $n=2$. Then since $$\overline{\bigcup_{i=1}^{n}A_i}=\overline{\left(\bigcup_{i=1}^{n-1}A_{i}\right)\cup A_n}$$ you can use the case $n=2$ and the induction hypothesis.</p>
</blockquote>
|
3,268,398 | <blockquote>
<p>General solution of the equation
<span class="math-container">$$x\left(\frac{dy}{dx}\right)^2+\left(y-x\right)\frac{dy}{dx}\:-y=0
$$</span>is</p>
</blockquote>
<p>Option are as follows:</p>
<p><span class="math-container">$a)\qquad (x-y+c)(xy-c)=0$</span></p>
<p><span class="math-container">$b)\qquad (x+y+c)(xy-c)=0$</span></p>
<p><span class="math-container">$c)\qquad (x-y+c)(x^2+y^2-c)=0$</span></p>
<p><span class="math-container">$d)\qquad (x-y+c)(x^2+y^2-c)=0$</span></p>
| Archis Welankar | 275,884 | <p>This equation is quadratic in the derivative, so it can be solved for it directly. Hint: let <span class="math-container">$\frac{dy}{dx}=p$</span>; we have <span class="math-container">$xp^2+(y-x)p-y=0$</span>, thus <span class="math-container">$$p=\frac{(x-y)\pm\sqrt{y^2-2xy+x^2-4(x)(-y)}}{2x}=\frac{(x-y)\pm(y+x)}{2x}$$</span> Now continue from here: resubstitute <span class="math-container">$p$</span> and solve the two resulting differential equations to see which option this gives. Also include your effort in the question itself so that the question attracts more attention.</p>
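<p>Carrying the hint to its conclusion (my own completion of the sketch, worth double-checking): the two roots are <span class="math-container">$p=1$</span> and <span class="math-container">$p=-y/x$</span>, and each gives a one-parameter family:</p>

```latex
p = 1 \;\Rightarrow\; \frac{dy}{dx} = 1 \;\Rightarrow\; y = x + c \;\Rightarrow\; x - y + c = 0,
\qquad
p = -\frac{y}{x} \;\Rightarrow\; \frac{dy}{y} = -\frac{dx}{x} \;\Rightarrow\; xy = c.
```

<p>Combining the two families gives <span class="math-container">$(x-y+c)(xy-c)=0$</span>, i.e. option (a).</p>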
|
500,446 | <p>Let $p$ be a prime with $p \geq 5$, and consider the congruence $x^3 \equiv a$ (mod p) with $\gcd(a,p)=1$. Show that the congruence has either no solution or three incongruent solutions modulo $p$ if $p \equiv 1$ (mod 6), and has a unique solution modulo $p$ if $p \equiv 5$ (mod 6).</p>
<p>My attempt: By Lagrange's theorem, the congruence $x^3 \equiv a$ (mod p) has at most $3$ incongruent solutions modulo $p$. Suppose the congruence has a solution $b$, so that $b^3 \equiv a$ (mod p). Then $x^3 \equiv a \equiv b^3$ (mod p) $\Rightarrow x^3 -b^3 \equiv 0$ (mod p). Note that $x^3 -b^3=(x-b)(x^2+bx+b^2)$</p>
<p>Now I am stuck here. I observe that if $p \equiv 1$ (mod 6), then $(x^2+bx+b^2) \equiv 0$ (mod p) has two incongruent solutions modulo $p$, and if $p \equiv 5$ (mod 6), then $(x^2+bx+b^2) \equiv 0$ (mod p) has one unique solution modulo $p$.</p>
<p>Can anyone guide me?</p>
| André Nicolas | 6,312 | <p>We give an extremely detailed argument. Sorry about the length!</p>
<hr>
<p>Suppose first that $p\equiv 1\pmod{6}$. So $p=6k+1$ for some $k$.</p>
<p>We will use a <em>primitive root</em> argument. Let $g$ be a primitive root of $p$. In group-theoretic terms, let $g$ be a generator of the multiplicative group modulo $p$. </p>
<p>Then $g$ has order $p-1$ modulo $p$. Note that $g^{p-1}=(g^{2k})^3\equiv 1\pmod{p}$ by Fermat's Theorem. Also, $(g^{4k})^3\equiv 1\pmod{p}$. Neither $g^{2k}$ nor $g^{4k}$ is congruent to $1$ modulo $p$, since the order of $g$ is $6k$. And they are not congruent to each other. </p>
<p>Thus the congruence $z^3\equiv 1\pmod{p}$ has at least $3$ (and therefore exactly $3$) solutions, namely $1$, $g^{2k}$, and $g^{4k}$.</p>
<p>So if $b^3\equiv a\pmod{p}$, then we also have $(g^{2k}b)^3\equiv a\pmod{p}$ and $(g^{4k}b)^3\equiv a\pmod{p}$. It follows that the congruence $x^3\equiv a \pmod{p}$ has $3$ solutions if it has a solution. </p>
<hr>
<p>Now suppose that $p\equiv 5\pmod{6}$. Let $p=6k+5$. Let $x$ and $y$ be non-zero integers, and suppose that $x^3\equiv y^3\pmod{p}$. This is the case if and only if $(xy^{-1})^3\equiv 1\pmod{p}$. We show that this <strong>forces</strong> $x\equiv y\pmod{p}$.</p>
<p>To do this, it is enough to show that the congruence $z^3\equiv 1\pmod{p}$ has only the obvious solution $z\equiv 1\pmod{p}$. </p>
<p>Again let $g$ be a primitive root of $p$. Any $z$ is congruent to $g^m$ for some $m$ with $1\le m\le p-1$. If $z^3\equiv 1\pmod{p}$ then $g^{3m}\equiv 1\pmod{p}$. It follows that the order of $g$ divides $3m$, that is, $6k+4$ divides $3m$. Since $3$ and $6k+4$ are relatively prime, it follows that $6k+4$ divides $m$. Since $1\le m\le 6k+4=p-1$, this forces $m=p-1$. Thus $g^m\equiv 1\pmod p$, and therefore $z\equiv 1\pmod{p}$.</p>
<p>Now consider the mapping $\psi$ that takes any number $x$ between $1$ and $p-1$ into the remainder when $x^3$ is divided by $p$. By the calculation above, the function is <strong>one to one</strong> (injective). A one to one function from a finite set to itself must be <strong>onto</strong> (surjective). It follows that every number between $1$ and $p-1$ is $\psi(x)$ for some $x$. This says that every number $a$ between $1$ and $p-1$ is congruent to a cube modulo $p$, and by injectivity the congruence $x^3\equiv a\pmod{p}$ has a unique solution. That is what we wanted to prove.</p>
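The two cases can be spot-checked by brute force for small primes, counting the solutions of $x^3\equiv a\pmod p$ for each $a$:

```python
# Count, for every a with 1 <= a < p, the number of solutions of x^3 = a (mod p),
# and return the set of counts that occur.
def cube_solution_counts(p):
    return {sum(1 for x in range(1, p) if pow(x, 3, p) == a)
            for a in range(1, p)}

print(cube_solution_counts(7))    # p = 1 (mod 6): {0, 3}
print(cube_solution_counts(11))   # p = 5 (mod 6): {1}
```

For $p\equiv 1\pmod 6$ every residue has zero or three cube roots, and for $p\equiv 5\pmod 6$ every residue has exactly one, matching the argument above.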
|
1,863,868 | <p>Coordinates: <span class="math-container">$(0,0), (3,3), (6,4.5), (9, 5.25)$</span></p>
<p>If this is a curve is there a formula for determining the <span class="math-container">$y$</span> value for any given <span class="math-container">$x$</span> within the range <span class="math-container">$0$</span> to <span class="math-container">$9?$</span></p>
| Christian | 332,403 | <p>I recommend looking at <a href="http://mathworld.wolfram.com/LagrangeInterpolatingPolynomial.html" rel="nofollow">Lagrange's Interpolation Formula</a>. Given any $n$ points, they can always be interpolated by a polynomial of degree $n-1$ or less. This means that those points will always lie on the curve of that polynomial. So given your 4 points, there is a polynomial of degree 3 or less such that the points are on the curve of that polynomial. The explicit equation for the polynomial can be found using the formula included in the link. </p>
<p>However, these four points most definitely do not determine uniquely a continuous function (a curve). There are very many curves that could pass through these points.</p>
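For a concrete take on the interpolation formula, here is a small Python sketch evaluating the Lagrange interpolant through the four given points:

```python
pts = [(0, 0.0), (3, 3.0), (6, 4.5), (9, 5.25)]

def lagrange(x, pts):
    # Lagrange interpolating polynomial evaluated at x
    total = 0.0
    for i, (xi, yi) in enumerate(pts):
        term = yi
        for j, (xj, _) in enumerate(pts):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

for xi, yi in pts:
    print(xi, lagrange(xi, pts))   # reproduces each given point
print(lagrange(4.5, pts))          # an interpolated value between the points
```

For what it's worth, these four points also happen to lie on the curve $y = 6\left(1-2^{-x/3}\right)$, a different function from the interpolating cubic, which illustrates the caveat that many curves pass through the same points.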
|
275,526 | <p>What would be an easy example of a sequence of functions defined on a compact interval so that $f_n$ goes to $f$ pointwise but $\sup f_n$ does not go to $\sup f$.</p>
<p>I thought of the usual example we take to show that the limits in integration can't be interchanged when we only have pointwise convergence. Is this correct?</p>
<p>Does $f(x)=x^n$ work in this context?
Any comments or hints?</p>
| Elias Costa | 19,266 | <p>Let $f:[0,1]\to \mathbb{R}$ be $f\equiv 0$. Set $f_n(x)=x^n$ if $x<1-\frac{1}{n}$ and $f_n(x)=0$ if $x\geq 1-\frac{1}{n}$. Note that $\lim_{n\to\infty}f_n=f\equiv 0$ pointwise. Then
$$
\sup_{x}f_n(x)=\left(1-\frac{1}{n}\right)^n\to e^{-1} \mbox{ while } \sup_{x}f(x)=0
$$</p>
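A quick numerical look at this example (the suprema are approximated on a uniform grid, so they are slightly low):

```python
import math

def f_n(x, n):
    return x**n if x < 1 - 1/n else 0.0

def sup_f_n(n, grid=100000):
    # approximate sup over [0, 1] on a uniform grid
    return max(f_n(k / grid, n) for k in range(grid + 1))

for n in (10, 100, 1000):
    print(n, sup_f_n(n))   # stays near (1 - 1/n)^n, bounded away from sup f = 0
```

The approximate suprema stay bounded away from $0=\sup_x f(x)$, so $\sup f_n \not\to \sup f$ even though $f_n\to f$ pointwise.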
|
1,694,991 | <p>I tried doing $u$-substitution and got $-20e$ as my final answer, but I think the correct answer is just $20$. I'm not sure what I did wrong, but probably had to do with plugging in infinity... could someone explain the process of solving this integral?</p>
| Enrico M. | 266,764 | <p><strong>Hint</strong></p>
<p>$$\frac{x}{20} = y$$</p>
<p>$$20\int_0^{+\infty}ye^{-y}\ \text{d}y$$</p>
<p>Many ways to evaluate it. By parts once, or just knowing it's the gamma function:</p>
<p>$$20\int_0^{+\infty}ye^{-y}\ \text{d}y = 20\cdot\Gamma(2) = 20\cdot 1 = 20$$</p>
<p><strong>What Gamma Function is</strong></p>
<p>Euler Gamma Function is defined as</p>
<p>$$\Gamma(x) = \int_0^{+\infty}t^{x-1}e^{-t}\ \text{d}t$$</p>
<p>more here</p>
<p><a href="https://en.wikipedia.org/wiki/Gamma_function" rel="nofollow">https://en.wikipedia.org/wiki/Gamma_function</a></p>
<p><strong>By parts</strong></p>
<p>Simply call $f = x$ and $g' = e^{-x}$ and proceed; it's easy!</p>
<p>In this case you apply integration by parts, obtaining</p>
<p>$$-xe^{-x}\bigg|_0^{+\infty} - \left(\int_0^{+\infty} -e^{-x}\ \text{d}x\right)$$</p>
<p>The first term is zero because at infinity the exponential decay dominates, and at zero the factor $x$ vanishes.</p>
<p>The second term is simply</p>
<p>$$-e^{-x}\bigg|_0^{+\infty} = -e^{-\infty} - (-e^0) = 0 - (-1) = 1$$</p>
<p>Remember the $20$ factor above and the answer is </p>
<p>$$\boxed{20}$$</p>
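Assuming the integral in question is $\int_0^{+\infty}\frac{x}{20}e^{-x/20}\,\text{d}x$ (as the substitution $x/20=y$ in the hint suggests), the value $20$ can be confirmed numerically; a sketch with composite Simpson's rule:

```python
import math

def simpson(f, a, b, n=20000):           # n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2*k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2*k * h) for k in range(1, n // 2))
    return s * h / 3

f = lambda x: (x / 20) * math.exp(-x / 20)
approx = simpson(f, 0.0, 600.0)          # tail beyond 600 is about 620*e^-30, negligible
print(approx)                            # close to 20
```

The truncation at $600$ and the step size are chosen so both errors are far below the displayed precision.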
|
2,253,501 | <p>The question is to find out the sum of the series $$\sum_{n=1}^\infty n^2 e^{-n}$$</p>
<p>I tried to bring the summation into some form of telescoping series but failed. I then tried approximating the sum by the corresponding integral (which I am not sure about) to get the value as $2/e$, indicating that the sum converges. Any help shall be highly appreciated. Thanks. </p>
| Chinny84 | 92,628 | <p>$$
\frac{d}{da}\mathrm{e}^{an} = n\mathrm{e}^{an}
$$
and
$$
\frac{d^2}{da^2}\mathrm{e}^{an} = n^2\mathrm{e}^{an}
$$
so I posit that we can use
$$
\sum_{n=1}^\infty\frac{d^2}{da^2}\mathrm{e}^{an} = \sum_{n=1}^\infty n^2\mathrm{e}^{an}
$$
we can pull the derivative out of the sum to find
$$
\frac{d^2}{da^2}\sum_{n=1}^\infty\mathrm{e}^{an} =\frac{d^2}{da^2}\sum_{n=1}^\infty\lambda^{n}
$$
where $\lambda = \mathrm{e}^a$. This sum is a geometric series if we consider
$$
\sum_{n=0}^\infty \lambda^n = 1 + \sum_{n=1}^\infty \lambda^n
$$</p>
|
2,253,501 | <p>The question is to find out the sum of the series $$\sum_{n=1}^\infty n^2 e^{-n}$$</p>
<p>I tried to bring the summation into some form of telescoping series but failed. I then tried approximating the sum by the corresponding integral (which I am not sure about) to get the value as $2/e$, indicating that the sum converges. Any help shall be highly appreciated. Thanks. </p>
| Isaac Browne | 429,987 | <p>Just for kicks, here's the way I like to solve these without calculus:
As long as $|x|<1$, we have
$$S=\sum_{n=1}^{\infty}n^2x^n$$
$$S(1-x) = \sum_{n=1}^{\infty}n^2x^n - \sum_{n=2}^{\infty}(n-1)^2x^n = \sum_{n=1}^{\infty}(2n-1)x^n =\sum_{n=1}^{\infty}2nx^n-\sum_{n=1}^{\infty}x^n$$
$$S(1-x) + \frac{x}{1-x} = 2\sum_{n=1}^{\infty}nx^n$$
$$\big(S(1-x) + \frac{x}{1-x}\big)\frac{1-x}{2} = \sum_{n=1}^{\infty}nx^n -\sum_{n=2}^{\infty}(n-1)x^n =\sum_{n=1}^{\infty}x^n=\frac{x}{1-x}$$
Now that we have dealt with the $n$'s, we solve for S!
$$S(1-x)^2+x=\frac{2x}{1-x}$$
$$S=\frac{x+x^2}{(1-x)^3} =\sum_{n=1}^{\infty}n^2x^n$$ </p>
|
2,253,501 | <p>The question is to find out the sum of the series $$\sum_{n=1}^\infty n^2 e^{-n}$$</p>
<p>I tried to bring the summation into some form of telescoping series but failed. I then tried approximating the sum by the corresponding integral (which I am not sure about) to get the value as $2/e$, indicating that the sum converges. Any help shall be highly appreciated. Thanks. </p>
| Jack D'Aurizio | 44,121 | <p>We have
$$ \sum_{n\geq 0} e^{-nx} = \frac{1}{1-e^{-x}}\tag{1} $$
hence by applying $\frac{d^2}{dx^2}$ to both sides
$$ \sum_{n\geq 0} n^2 e^{-nx} = \frac{e^x(e^x+1)}{(e^x-1)^3}\tag{2} $$
and by evaluating at $x=1$
$$ \sum_{n\geq 1} n^2 e^{-n} = \color{red}{\frac{e(e+1)}{(e-1)^3}}.\tag{3}$$</p>
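The closed form is easy to confirm numerically against a partial sum, and against the general formula $\sum_{n\ge1} n^2x^n=\frac{x+x^2}{(1-x)^3}$ evaluated at $x=1/e$:

```python
import math

e = math.e
partial = sum(n**2 * math.exp(-n) for n in range(1, 200))
closed = e * (e + 1) / (e - 1)**3            # the closed form derived above
x = 1 / e
closed2 = (x + x**2) / (1 - x)**3            # sum of n^2 x^n at x = 1/e
print(partial, closed, closed2)              # all approximately 1.9923
```

The tail beyond $n=200$ is of order $n^2e^{-n}\sim 10^{-80}$, so the partial sum matches the closed form to machine precision.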
|
1,318,880 | <p>I'm trying to prove that $\operatorname{lcm}(n,m) = nm/\gcd(n,m)$
I showed that both $n,m$ divides $nm/\gcd(n,m)$
but I can't prove that it is the smallest number.
Any help will be appreciated.</p>
| Steven Alexis Gregory | 75,410 | <p>Let's just do this directly. Let $g = \gcd(m,n)$. We need to prove that
$\operatorname{lcm}(m,n) = \dfrac{mn}{g}$.</p>
<hr>
<p>STEP $0$. (Preliminary stuff.)</p>
<p>DEFINITION $1$. $L = \operatorname{lcm}(m,n)$ if and only if</p>
<pre><code> 1. L is a multiple of m and of n.
2. If C is a multiple of m and of n, then C is a multiple of L.
</code></pre>
<p>LEMMA $2$. If $\gcd(a,b) = 1$ and $a \mid bc$, then $a \mid c$.</p>
<p>PROOF. If $\gcd(a,b) = 1$, then there exists integers $A$ and $B$ such that
$aA + bB = 1$. It follows that $acA + bcB = c$. Since $a | acA$ and $a \mid bcB$, then $a \mid c$.</p>
<hr>
<p>STEP $1$. $\dfrac{mn}{g}$ is a common multiple of $m$ and of $n$.</p>
<p>This is true because $\dfrac m g$ and $\dfrac n g$ are integers and
$\dfrac{mn}{g} = \dfrac{m}{g}n = m \dfrac{n}{g}$.</p>
<hr>
<p>STEP $2$. If $G$ is a common multiple of $m$ and of $n$, then $G$
is a multiple of $\dfrac{mn}{g}$. </p>
<p>Suppose $G = mM = nN$ for some integers $M$ and $N$. Then
$\dfrac G g = \dfrac m g M = \dfrac n g N$.</p>
<p>Since $\gcd\left( \dfrac m g, \dfrac n g \right) = 1$, and
$\dfrac m g M = \dfrac n g N$, then, by LEMMA $2$, $\dfrac m g \mid N$, say
$N = \dfrac m g N'$ for some integer $N'$.</p>
<p>So $\dfrac{G}{g} = \dfrac{n}{g} N = \dfrac{m}{g} \dfrac{n}{g} N'$. It follows that $G = \dfrac{mn}{g} N'$ and so $G$ is a multiple of $\dfrac{mn}{g}$.</p>
<hr>
<p>From STEP $1$, STEP $2$, and DEFINITION $1$, we can conclude that
$\operatorname{lcm}(m,n) = \dfrac{mn}{\gcd(m,n)}$.</p>
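The identity is easy to spot-check by brute force; a small Python sketch comparing the formula against the smallest common multiple found by search:

```python
import math

def lcm(m, n):
    return m * n // math.gcd(m, n)      # the identity being proved

def lcm_brute(m, n):                    # smallest common multiple, by search
    k = max(m, n)
    while k % m or k % n:
        k += 1
    return k

for m in range(1, 30):
    for n in range(1, 30):
        assert lcm(m, n) == lcm_brute(m, n)
print("verified for 1 <= m, n < 30")
```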
|
1,318,880 | <p>I'm trying to prove that $\operatorname{lcm}(n,m) = nm/\gcd(n,m)$
I showed that both $n,m$ divides $nm/\gcd(n,m)$
but I can't prove that it is the smallest number.
Any help will be appreciated.</p>
| Community | -1 | <p>Here is one way without using the Fundamental Theorem of Arithmetic, just using the definitions. </p>
<p>The definition of lcm(a,b) is as follows:</p>
<p>t is the lowest common multiple of a and b if it satisfies the following:</p>
<p>i)a | t and b | t </p>
<p>ii)If a | c and b | c, then t | c.</p>
<p>Similiarly for the gcd(a,b).</p>
<p>Here is my proof:</p>
<p>Case I: gcd(a,b) $\neq$ 1</p>
<p>Suppose gcd(a,b) = d.</p>
<p>Then $ab = dq_1b = dbq_1 = d*(dq_1q_2)$</p>
<p>Claim: $lcm(a,b) = dq_1q_2$</p>
<p>$a = dq_1$ | $dq_1q_2$ </p>
<p>$b = dq_2$ | $dq_2q_1$.</p>
<p>Now let $c$ be any common multiple of $a$ and $b$; by condition (ii) it suffices to show that $dq_1q_2$ divides $c$.</p>
<p>Note that since $d = \gcd(a,b)$, we have $\gcd(q_1,q_2)=1$. From $a = dq_1 \mid c$ write $c = dq_1t$ for some integer $t$. Since $b = dq_2 \mid c = dq_1t$, we get $q_2 \mid q_1t$, and because $\gcd(q_1,q_2)=1$ this forces $q_2 \mid t$, say $t = q_2t'$.</p>
<p>Hence $c = dq_1q_2t'$, that is, $dq_1q_2 \mid c$. Together with the claim above, this shows that $\operatorname{lcm}(a,b) = dq_1q_2 = ab/d$.</p>
<p>Notice that in the case where $\gcd(a,b) = 1$ we can just set $q_1 = a$ and $q_2 = b$, and the proof is the same.</p>
|
1,549,506 | <p>To Prove $$\lim_{x \to 0}\frac{x^2+2\cos x-2}{x\sin^3 x}=\frac{1}{12}$$
I tried L'Hospital's rule, but in vain.</p>
| Paramanand Singh | 72,031 | <p>Here is how you can do it via a single application of L'Hospital's Rule. The denominator in the expression can be replaced by $x^4$ via the standard limit $\lim_{x\to 0}(\sin x) /x=1$. And the resulting expression can be rewritten as $$\frac{x^2-4\sin^2(x/2)}{x^4}=\frac{t^2-\sin^2t}{4t^4} =\frac{t-\sin t} {4t^3}\cdot\left(1+\frac{\sin t} {t} \right) $$ using the substitution $x=2t$. The first factor tends to $(1/6)(1/4)=1/24$ via single application of L'Hospital's Rule and the second factor tends to $(1+1)=2$ so that the desired limit is $1/12$.</p>
<p>A little algebraic manipulation combined with standard limits is always a great help when applying the L'Hospital's Rule. </p>
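A numerical check of the limit, evaluating at moderately small $x$ (very small $x$ would suffer catastrophic cancellation in the numerator):

```python
import math

def g(x):
    return (x**2 + 2*math.cos(x) - 2) / (x * math.sin(x)**3)

for x in (1e-1, 1e-2):
    print(x, g(x))        # tends to 1/12 = 0.0833...
```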
|
2,607,333 | <p>Let $L$ be the splitting field of $x^3+2x+1$. I want to know how 59 splits in $L$. I calculated the discriminant of $\mathbb{Z}[\alpha]$ to be $-59$, (where $\alpha$ is a root of the polynomial), which is squarefree therefore $\mathcal{O}_K = \mathbb{Z}$ (because $d(\mathbb{Z}[\alpha]) = (\mathcal{O}_K:\mathbb{Z}[\alpha])^2d_K$). Since 59 divides the discriminant, it must be ramified. But I don't know how to get anything more than that, like what's the ramification index and how many primes does it split into?</p>
| bof | 111,012 | <p>Let $M$ be the set of all numbers $m\in\mathbb N$ such that $A$ can be partitioned into $m$ disjoint nonempty sets, each clopen in $A.$</p>
<p>It is clear that $1\in M,$ and that $m\in M\implies\{1,2,3,\dots,m\}\subseteq M.$</p>
<p>If $M$ has no greatest element, then $M=\mathbb N$ and we're done. Otherwise, let $m$ be the greatest element of $M.$ Let $A=C_1\cup\cdots\cup C_m$ where the sets $C_i$ are disjoint, nonempty, and clopen in $A.$ If some $C_i$ were disconnected, then we could partition $A$ into $m+1$ disjoint nonempty clopen sets, contradicting the fact that $m$ is the greatest element of $M.$ Therefore the sets $C_1,\dots,C_m$ are connected, and they are the components of $A,$ so $m\gt n.$</p>
|
299,140 | <p>Is there a closed form sum of </p>
<p>$\sum_{k=0}^{\infty} \frac{x^k}{(k!)^2}$</p>
<p>It is trivial to show that it is less than $e^x$ but is there a tighter bound?</p>
<p>Thanks</p>
| Iosif Pinelis | 36,721 | <p>Here there are many possibilities. One of them is as follows. Note that for $k=0,1,\dots$
\begin{equation}
\frac1{(k!)^2}=\binom{2k}k\,\frac1{(2k)!}\le\frac{2^{2k}}{(2k)!},
\end{equation}
whence for $x\ge0$ the sum of your series is no greater than
\begin{equation}
B(x):=\sum_{k=0}^{\infty} \frac{(4x)^k}{(2k)!}=\cosh\sqrt{4x},
\end{equation}
which is much less than $e^x$ for large $x$. </p>
<p><strong>Added:</strong> As pointed out in the comment by Carlo Beenaker,
\begin{equation}
S(x):=\sum_{k=0}^{\infty} \frac{x^k}{(k!)^2}\sim e^{\sqrt{4x}}/(4\pi\sqrt x)^{1/2}
\end{equation}
and hence
\begin{equation}
\ln B(x)\sim\ln S(x)
\end{equation}
as $x\to\infty$; that is, the bound $B(x)$ on $S(x)$ is logarithmically asymptotically tight for large $x$ (in contrast with the bound $e^x$). </p>
|
660,034 | <p>I wondered if all decimal expansions of $\frac{1}{n}$ could be thought of in such a way, but clearly for $n=6$,</p>
<p>$$.12+.0024+.000048+.00000096+.0000000192+...\neq.1\bar{6}$$</p>
<p>Why does it work for 7 but not 6? Is there only one such number per base, <em>i.e.</em> 7 in base 10? If so what is the general formula?</p>
| ShreevatsaR | 205 | <p>You're trying to extend your observation that
$$.14+.0028+.000056+.00000112+\dots = \frac17 = \frac7{49}$$
in two directions, that don't match. The right identities are:
$$.12 + 0.0024 + 0.000048 + \dots = \frac{6}{49}$$
and
$$.16 + 0.0064 + 0.000256 + \dots = \frac{1}{6}.$$</p>
<p>These three numbers are solutions to the equations
$$\begin{align}
x &= .14 + \frac{2x}{100} && \implies x = \frac1{7} \\
x &= .12 + \frac{2x}{100} && \implies x = \frac6{49} \\
x &= .16 + \frac{4x}{100} && \implies x = \frac1{6}
\end{align}
$$ respectively. (Note that to double the number and also shift it two places to the right corresponds to multiplying it by $\dfrac{2}{10^2}$.)</p>
<p>A generalization that covers all of them is this</p>
<blockquote>
<p><strong>Fact:</strong> Suppose you write down some starting number $s$ (like $0.14$ or $0.12$ or $0.16$ in the examples above), then successively multiply it by some ratio $r<1$ (like $\frac{2}{100}$ or $\frac{2}{100}$ or $\frac{4}{100}$ in the examples above) and add. Then the resulting number is the solution to
$$x = s + rx,$$ namely $x = \dfrac{s}{1-r}$.</p>
</blockquote>
<p>[If you care about the proof, it's straightforward: your definition of $x$ is that $x = s + sr + sr^2 + \dots$, which is equal to $\dfrac{s}{1-r}$ using geometric series, which is what the equation also gives.]</p>
<hr>
<p>This fact lets you do two things.</p>
<p>One, you can write down absolutely any expression you like (of the "multiply-it-by-r-and repeat" type), and find the simple fraction it's equal to: for example, if you write down $0.2 + 0.06 + 0.018 + 0.0054 + \dots$ (each term is the previous term tripled and shifted one place to the right, i.e. multplication by $r = \frac{3}{10}$), then you can immediately say that the number is $\displaystyle \frac{0.2}{1-\frac{3}{10}} = \frac{2}{10 - 3} = \frac27$.</p>
<p>Two (more usefully), for many fractions, you can calculate their digits without actual division. Given a fraction $a/b$, just find a multiple of $b$ that is close to a power of $10$, say $10^d = mb + n$. Then
$$\frac{a}{b} = \frac{ma}{10^d - n} = \frac{ma/10^d}{1 - n/10^d},$$
so you can calculate $a/b$ by writing down $ma/10^d$, then at each step multiplying the latest term by $n$, shifting it $d$ places to the right, and adding. For instance, given the fraction $\frac{7}{12}$, note that $12 \times 8 = 100 - 4$, so you can start with $7 \times 8/100 = 0.56$, and each time multiply by $4/100$ and add:
$$\frac{7}{12} = 0.56 + 0.0224 + 0.000896 + 0.00003584 + \dots$$
(note that $896 = 4 \times 224$, etc.) [Actually this turns out to be the very simple $0.583333\dots$, so for this particular example direct division may have been better.]</p>
<hr>
<p>You asked about other bases. In base $b$, corresponding to the base $10$ example
$$.14+.0028+.000056+.00000112+\dots$$
if you take the number
$$x = s + s(2/b^2) + s(2/b^2)^2\dots, \quad (\text{ where } s = 2n/b^2)$$
then $x = \dfrac{2n/b^2}{1-2/b^2} = \dfrac{2n}{b^2 - 2} = \dfrac{n}{b^2/2 - 1}.$</p>
<p>If you want $x = 1/n$, then $n^2 = b^2/2 - 1$. So you'll have a solution $n$ only for bases $b$ where $b^2 = 2n^2 + 2$ for some $n$, not for all bases. In fact the set of such $(n, b)$ can be got from solving a <a href="https://en.wikipedia.org/wiki/Pell's_equation" rel="nofollow noreferrer">Pell-type equation</a> $n^2 - 2(b/2)^2 = -1$: the solutions are <a href="https://math.stackexchange.com/questions/531833/generating-all-solutions-for-a-negative-pell-equation">given by</a>, if $(1 + \sqrt{2})^k = a_k + b_k\sqrt{2}$ where $k$ is odd, then $n = a_k$, $b = 2b_k$. So, in particular, the first few solutions are</p>
<p>$$k=1 (n=1, b=2): 1/1 = 0.1_2 + 0.01_2 + 0.001_2 + \dots = 0.11111_2$$
(the way <a href="https://en.wikipedia.org/wiki/0.999..." rel="nofollow noreferrer">$0.\overline{9} = 1$</a>),
$$k=3 (n=7, b=10): 1/7 = 0.14 + 0.0028 + 0.000056 + \dots = 0.142857\dots$$
$$k=5 (n=41, b=58): 1/41 = $$
well, I can't decide on symbols to use in base $58$, but you get the idea. There are no other solutions in between. So in some sense $7$ and $10$ <em>are</em> special, in that the next smallest base is as large as $58$.</p>
<p>However, if you don't constrain yourself to the numbers $s = 2n/b^2$ (as in the starting number being $.14$ for $n=7, b = 10$) and $r = 2/b^2$, then there are many solutions in base $10$ as explained above, or indeed in any base. For example, in base $5$, we have
$$\frac{1}{11} = \frac{11}{5^3 - 4} = \frac{21_5/5^3}{1 - 4/5^3} = 0.021_5 + 0.000134_5 + 0.000001201_5 + \dots$$
(note that in base $5$, we have $21_5 \times 4 = 134_5$ and $134_5 \times 4 = 1201_5$ etc.)</p>
|
199,842 | <p>I understand the reasoning behind $\pi r^2$ for a circle area however I'd like to know what is wrong with the reasoning below:</p>
<p>The area of a square can be thought of as a line of length equal to its height (one dimension) placed next to itself repeatedly along the square's length; thus we have height x length for the area.</p>
<p>The area of a circle could likewise be thought of as a line (the radius) placed next to itself enough times to make up the circle. Given that the circumference of a circle is $2 \pi r$, by the same reasoning as above we would get $2 \pi r^2$. Where is the problem with this reasoning?</p>
<p>Lines placed next to each other would only go straight like a rectangle so you'd have to spread them apart in one of the ends to be able to make up a circle so I believe the problem is there somewhere. Could anybody explain the issue in the reasoning above?</p>
| Timothy | 137,739 | <p>Consider a shape, such as a square or a circle, that for sufficiently large <span class="math-container">$n$</span> can be split into lines of thickness approximately <span class="math-container">$\frac{1}{n}$</span>. For each such <span class="math-container">$n$</span>, choose a splitting of (part of) the shape with zero area of overlap, together with a rule converting length to area: a line of a given length is counted as an area <span class="math-container">$\frac{1}{n}$</span> times that length. If, as <span class="math-container">$n$</span> approaches <span class="math-container">$\infty$</span>, the area left unfilled by the splitting approaches zero, then the total area assigned to the lines of the splitting approaches the area of the shape. The catch for the circle: if you take the splitting to be roughly <span class="math-container">$2\pi n$</span> equally spaced lines running from the edge to the center, each of thickness <span class="math-container">$\frac{1}{n}$</span>, then it is <em>not</em> the case that the area of overlap in the splitting is zero; the lines pile up near the center, which is exactly where the reasoning in the question breaks down.</p>
|
134,815 | <p>Assume I have a shuffled deck of cards (52 cards, all normal, no jokers). I'd like to record the order in my computer in such a way that the <em>ordering</em> requires the fewest bits (I'm not counting look-up tables etc. as part of the deal, just the ordering itself). </p>
<p>For example, I could record a set of strings in memory:</p>
<p>"eight of clubs", "nine of dimonds" </p>
<p>but that's obviously silly, more sensibly I could give each card an (unsigned) integer and just record that... </p>
<p>17, 9, 51, 33... </p>
<p>which is much better (and I think would be around 6 bits per number times 53 numbers so around 318 bits), but probably still not ideal.. for a start I wouldn't have to record the last card, taking me to 312 bits, and if I know that the penultimate card is one of two choices then I could drop to 306 bits plus one bit that was true if the last card was the highest value of the two remaining cards and false otherwise.... </p>
<p>I could do some other flips and tricks, but I also suspect that this is a branch of maths were there is an elegant answer... </p>
| Ross Millikan | 1,827 | <p>As carlop says, you can store the order number in 226 bits. Then to find the order of the cards, divide the order number by 51!. The quotient is the card number of the first card and the remainder is the order number of the remaining 51 cards.</p>
<p>Added: If you just store a card number (0-51) for each position, it takes 6 bits per card, for a total of 312 bits. That costs fewer than 100 bits more than the optimum and is easier to unpack. The preceding approach assumes you keep track of which cards are used, so after you have dealt 20 cards, you can go down to 5 bits per card, after 36 you can go to 4 bits, etc. In this scheme you use 0 bits for the last card, 1 bit for the next to last, 2 bits for the next two, etc.:
0+1+2*2+3*4+4*8+5*16+6*20=249 bits. This is not much of an increase over the best possible. The increase comes because storing the number of the ordering avoids the data wasted in using, for example, 6 bits to pick one of 40 remaining cards.</p>
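The quotient-and-remainder scheme above is the factorial number system; a minimal encode/decode sketch (the numbering of cards is arbitrary):

```python
import math, random

def encode(perm):
    # map a permutation of 0..n-1 to an integer in [0, n!)
    n, number, remaining = len(perm), 0, list(range(len(perm)))
    for i, card in enumerate(perm):
        idx = remaining.index(card)
        number += idx * math.factorial(n - 1 - i)
        remaining.pop(idx)
    return number

def decode(number, n=52):
    remaining, perm = list(range(n)), []
    for i in range(n):
        q, number = divmod(number, math.factorial(n - 1 - i))
        perm.append(remaining.pop(q))
    return perm

deck = list(range(52))
random.shuffle(deck)
assert decode(encode(deck)) == deck
print(encode(deck).bit_length(), "bits")   # at most 226
```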
|
2,217,295 | <p>I thought about defining a function $ g(x) = \dfrac{1}{\epsilon} x^{\epsilon} - \ln(x) $ and showing that it's larger than $0$. I saw that if I substitute
$1$, I get $\dfrac{1}{\epsilon}$, which is positive, and so on. I also tried to differentiate the function and I got $g'(x) = x^{\epsilon -1} - \dfrac{1}{x} $</p>
<p>I'm stuck. Can someone help me realize how to keep going?</p>
<p><strong>edit</strong>: I also need to show that $\forall \epsilon >0 $ $\exists M >0 $ such that $\forall x>M $, $\ln x < x^{\epsilon}$. I thought about using what we proved just now: multiplying both sides by $\epsilon$ gives $ \epsilon \ln x < x^{\epsilon}$. Define a new function $f(x) = x^{\epsilon} - \epsilon \ln x$; again, if I differentiate it I get $\epsilon x^{\epsilon -1} - \epsilon \dfrac{1}{x}$, and factoring out $\epsilon$ gives $\epsilon \left(x^{\epsilon -1} - \dfrac{1}{x}\right) > 0 $.<br>
The only problem is that I never use $M$. Can someone help me please? </p>
<p>Thanks in advance!</p>
| Adren | 405,819 | <p>If you are allowed to use :</p>
<p>$$\lim_{x\to+\infty}\frac{\ln(x)}x=0$$</p>
<p>then, given $\epsilon>0$, you can see that :</p>
<p>$$\forall x>0,\,\frac{\ln(x)}{x^\epsilon}=\frac 1\epsilon\frac{\ln(x^\epsilon)}{x^\epsilon}$$and since $\lim_{x\to+\infty}x^\epsilon=+\infty$, composition of limits shows that :</p>
<p>$$\lim_{x\to+\infty}\frac{\ln(x)}{x^\epsilon}=0$$</p>
<p>Hence, for $x$ sufficiently large, we certainly have :</p>
<p>$$\frac{\ln(x)}{x^\epsilon}<1$$</p>
|
3,725,638 | <p>The exterior derivative of a scalar function is</p>
<p><span class="math-container">$d f(x,y,z) = (
\frac{\partial f}{\partial x} dx
+ \frac{\partial f}{\partial y} dy
+ \frac{\partial f}{\partial z} dz
)$</span></p>
<p>Am I correct in assuming then that</p>
<p><span class="math-container">$d\left( F_x(x,y,z) e_x + F_y(x,y,z) e_y + F_z(x,y,z) e_z \right)$</span></p>
<p>would be</p>
<p><span class="math-container">$\left(
\frac{\partial F_y}{\partial x}
- \frac{\partial F_x}{\partial y} \right)
dx \wedge dy +
\left(
\frac{\partial F_x}{\partial z} -
\frac{\partial F_z}{\partial x} \right)
dz \wedge dx +
\left(
\frac{\partial F_z}{\partial y} -
\frac{\partial F_y}{\partial z} \right)
dy \wedge dz$</span></p>
| peek-a-boo | 568,204 | <p>We only talk about exterior derivatives of differential <span class="math-container">$k$</span>-forms, not vector fields. However, what we can do is the following: given a vector field <span class="math-container">$F: \Bbb{R}^3 \to \Bbb{R}^3$</span>, <span class="math-container">$F = (F_x, F_y, F_z)$</span>, we can consider the following one-form:
<span class="math-container">\begin{align}
\omega &= F_x \, dx + F_y \, dy + F_z \, dz
\end{align}</span>
And yes, the exterior derivative of the one-form <span class="math-container">$\omega$</span> is indeed the thing you wrote down:
<span class="math-container">\begin{align}
d\omega &= \left(\dfrac{\partial F_y}{\partial x} - \dfrac{\partial F_x}{\partial y}\right) dx \wedge dy + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) dz \wedge dx + \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) dy \wedge dz
\end{align}</span></p>
<hr />
<p>Just some fun extra tidbits: if you know some vector calculus, the above expression probably looks pretty familiar, almost like the curl of <span class="math-container">$F$</span>, though not quite.
If you want to somehow get the curl of <span class="math-container">$F$</span> from here, you need to look at the "Hodge star" operator, which assigns to the above <span class="math-container">$2$</span>-form <span class="math-container">$d\omega$</span> a certain <span class="math-container">$1$</span>-form <span class="math-container">$\alpha$</span>, namely
<span class="math-container">\begin{align}
\alpha &= \left(\dfrac{\partial F_y}{\partial x} - \dfrac{\partial F_x}{\partial y}\right) dz + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) dy + \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z} \right) dx
\end{align}</span>
then from here, you can get a vector field, <span class="math-container">$G$</span>, (pretty much by replacing <span class="math-container">$dx$</span> with <span class="math-container">$e_x$</span>, <span class="math-container">$dy$</span> with <span class="math-container">$e_y$</span> and <span class="math-container">$dz$</span> with <span class="math-container">$e_z$</span>),
<span class="math-container">\begin{align}
G:= \left( \frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right) e_x + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x} \right) e_y + \left(\dfrac{\partial F_y}{\partial x} - \dfrac{\partial F_x}{\partial y}\right) e_z,
\end{align}</span>
and this is precisely the curl of <span class="math-container">$F$</span></p>
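As a sanity check of the coefficient formula, one can verify $d(df)=0$ (equivalently, $\operatorname{curl}\operatorname{grad}=0$) for a sample scalar field; a sympy sketch (assuming sympy is available; the field $f$ is an arbitrary example):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x) * sp.sin(y) + x*y*z          # an arbitrary sample scalar field

# omega = df = F_x dx + F_y dy + F_z dz
Fx, Fy, Fz = sp.diff(f, x), sp.diff(f, y), sp.diff(f, z)

# coefficients of d(omega) from the formula above; d(df) = 0 means all must vanish
c_xy = sp.simplify(sp.diff(Fy, x) - sp.diff(Fx, y))
c_zx = sp.simplify(sp.diff(Fx, z) - sp.diff(Fz, x))
c_yz = sp.simplify(sp.diff(Fz, y) - sp.diff(Fy, z))
print(c_xy, c_zx, c_yz)                    # 0 0 0
```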
|
3,836,662 | <p>I understand basic group theory. I would say that I've seen most of the standard stuff up to, say, the quotient group.</p>
<p>I feel like I've seen in more than one place the suggestion that group theory is the study of symmetries, or actions that leave something (approximately) unchanged. Unfortunately I can only find a couple sources. At 0:49 in this <a href="https://www.youtube.com/watch?v=mH0oCDa74tE" rel="nofollow noreferrer">3 Blue 1 Brown video</a>, the narrator says "[Group theory] is all about codifying the idea of symmetry." The whole video seems to be infused with the idea that every group represents the symmetry of something.</p>
<p>In <a href="https://www.youtube.com/watch?v=ihMyW7Z5SAs" rel="nofollow noreferrer">this video</a> about the Langlands Program, the presenter discusses symmetry as a lead-in to groups beginning around 33:00. I don't know if he actually describes group theory as being about the study of symmetry, but the general attitude seems pretty similar to that of the previous video.</p>
<p>This doesn't jive with my intuition very well. I can see perfectly well that <em>part</em> of group theory has to do with symmetries: one only has to consider rotating and flipping a square to see this. But is <em>all</em> of group theory about symmetry? I feel like there must be plenty of groups that have nothing to do with symmetry. Am I wrong?</p>
| AlexanderGrey | 675,096 | <p>There is a quite easy way to prove this. If you understand Chinese, there is a classic proof in Yang Zixu's 'Abstract Algebra'. Here is the proof:</p>
<p>Since <span class="math-container">$H\cap K\le H$</span>, let <span class="math-container">$|H|/|H\cap K|=m$</span> and
<span class="math-container">$H = h_1(H\cap K)\cup h_2(H\cap K)\cup \cdots \cup h_m(H\cap K),$</span>
here <span class="math-container">$h_i\in H, h_i^{-1} h_j \notin K,i\neq j.$</span>
Clearly,
<span class="math-container">$HK=h_1K\cup h_2K\cup\cdots\cup h_m K,$</span>
while
<span class="math-container">$h_iK\cap h_jK = \varnothing,i\neq j,$</span>
thus
<span class="math-container">$|HK|=m|K|,$</span>
which means
<span class="math-container">$|HK|=|H||K|/|H\cap K|.$</span>
QED</p>
<p>It is an application of coset decomposition theory. There is no need of considering the bijection or maps etc.</p>
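The counting argument can be spot-checked in a small group; a Python sketch using permutations of $\{0,1,2,3\}$ (note the formula counts elements of the product set $HK$, so no normality assumption is needed):

```python
def compose(p, q):                    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def generate(gens, n):                # subgroup generated by gens, by closure
    e = tuple(range(n))
    group, frontier = {e}, [e]
    while frontier:
        g = frontier.pop()
        for s in gens:
            h = compose(g, s)
            if h not in group:
                group.add(h)
                frontier.append(h)
    return group

n = 4
H = generate([(1, 0, 2, 3)], n)       # generated by the transposition (0 1)
K = generate([(1, 2, 3, 0)], n)       # generated by the 4-cycle (0 1 2 3)
HK = {compose(h, k) for h in H for k in K}
print(len(H), len(K), len(H & K), len(HK))   # 2 4 1 8
```

Here $|HK| = 8 = 2\cdot 4/1 = |H||K|/|H\cap K|$, as the proof predicts.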
|
902,407 | <p>Exercise</p>
<p>Let $f:G \to G'$ be an isomorphism and let $H\unlhd G$. If $H'=f(H)$, prove that $G/H \cong G'/H'$.</p>
<p>As I've shown that $H'\unlhd G'$, I thought of defining $$\phi(Ha)=H'f(a)$$I was trying to show that this function is well defined and that it is an isomorphism.</p>
<p>I'll work with right cosets (which, since $H$ and $H'$ are normal, is the same as working with left cosets). I need to know if what I did is correct, and I would appreciate some help showing injectivity (maybe the $\phi$ I've defined is not the correct one).</p>
<p>So, what I mean by well-defined is that if $Ha=Ha'$, then $H'f(a)=H'f(a')$. It will be sufficient to show that $f(a)f(a')^{-1} \in H'$; by hypothesis, we have $aa'^{-1} \in H$, which means $f(aa'^{-1}) \in H'$. But then $f(aa'^{-1})=f(a)f(a'^{-1})=f(a)f(a')^{-1} \in H'$. From here one deduces that $\phi$ is well defined.</p>
<p>To check $\phi$ is a morphism, we have to show $\phi((Ha)(Hb))=\phi(Ha)\phi(Hb)$. But $(Ha)(Hb)=H(ab)$, so $\phi((Ha)(Hb))=\phi(H(ab))=H'f(ab)=H'f(a)f(b)=(H'f(a))(H'f(b))=\phi(Ha)\phi(Hb)$.</p>
<p>Surjectivity is almost immediate: take a right coset $H'y$; since $f$ is an isomorphism (in particular surjective), there is $g \in G$ such that $f(g)=y$, so $\phi(Hg)=H'f(g)=H'y$.</p>
<p>Now for injectivity, suppose $\phi(Ha)=\phi(Ha')$, then $H'f(a)=H'f(a')$, I got stuck there.</p>
<p>Any suggestions would be appreciated. Thanks in advance.</p>
| Asaf Karagila | 622 | <p>It's easy to get bogged down with the details here. But the simplest way would be to find a bijection between $A$ and $B$.</p>
<p>We are given that $|A-B|=|B-A|$, therefore there is a function $f\colon (A-B)\to (B-A)$ which is a bijection. Can you think of a way to extend $f$ to be a bijection between $A$ and $B$?</p>
|
4,024,871 | <p>I'm looking to build a function <span class="math-container">$f:S^2 \to \mathbb R^2$</span> such that <span class="math-container">$f(x)\neq f(−x)$</span> for all <span class="math-container">$x\in S^2$</span>.</p>
<p>By Borsuk-Ulam Theorem, this function must be discontinuous. I was trying to build a not too complicated function, but I always encountered a problem.</p>
<p>I appreciate any help.</p>
| Aryaman Maithani | 427,810 | <p>Fix two distinct elements <span class="math-container">$a$</span> and <span class="math-container">$b$</span> in <span class="math-container">$\Bbb R^2$</span>. Consider an arbitrary <span class="math-container">$(x, y, z) \in S^2.$</span></p>
<ul>
<li>If <span class="math-container">$x > 0$</span>, map it to <span class="math-container">$a$</span> and if <span class="math-container">$x < 0$</span>, map it to <span class="math-container">$b$</span>.</li>
<li>If <span class="math-container">$x = 0$</span>, then do the same as above with <span class="math-container">$y$</span> instead of <span class="math-container">$x$</span>.</li>
<li>If <span class="math-container">$x = y = 0$</span>, then <span class="math-container">$z \neq 0$</span> and thus, we can do the above with <span class="math-container">$z$</span> instead of <span class="math-container">$x$</span>.</li>
</ul>
<p>Note if <span class="math-container">$\mathbf x = (x, y, z) \in S^2,$</span> then both <span class="math-container">$\bf x$</span> and <span class="math-container">$-\bf x$</span> will follow the rule in the same bullet above and hence, one is mapped to <span class="math-container">$a$</span> and the other to <span class="math-container">$b \neq a$</span>.</p>
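<p>The rule above is straightforward to implement and spot-check numerically; a sketch in Python, where a and b are two arbitrary distinct points of the plane:</p>

```python
import random

a, b = (0.0, 0.0), (1.0, 1.0)   # any two distinct points of R^2

def f(p):
    """The first nonzero coordinate of p decides: positive -> a, negative -> b."""
    for coord in p:
        if coord > 0:
            return a
        if coord < 0:
            return b
    raise ValueError("the origin is not a point of the sphere")

# spot-check f(x) != f(-x) on random points of the unit sphere
for _ in range(1000):
    v = [random.gauss(0, 1) for _ in range(3)]
    n = sum(c * c for c in v) ** 0.5
    p = tuple(c / n for c in v)
    assert f(p) != f(tuple(-c for c in p))
```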
|
2,940,306 | <p>I understand that there would be n! permutations for a given number of elements, but I am not sure how to calculate it with these parameters.</p>
| maveric | 590,250 | <p>There are 3 options for the last place and, simultaneously, 3 options for the first place. That leaves 7 objects for the remaining 7 places, which can be arranged in 7! ways. So the answer is 3 × 3 × 7!.</p>
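<p>Assuming the intended problem is: 9 distinct objects, with the first place restricted to 3 particular objects and the last place to 3 different particular objects, the count 3 × 3 × 7! = 45360 can be brute-forced (the restriction sets below are illustrative guesses, since the question does not state them):</p>

```python
import math
from itertools import permutations

objects = range(9)
first_ok = {0, 1, 2}   # assumed: the 3 objects allowed in the first place
last_ok = {3, 4, 5}    # assumed: the 3 objects allowed in the last place

count = sum(1 for p in permutations(objects)
            if p[0] in first_ok and p[-1] in last_ok)

assert count == 3 * 3 * math.factorial(7)   # 45360
```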
|
3,126,080 | <p>Is there any particular equation which doesn't work on the real plane of numbers but works on other planes?</p>
| Theo Bendit | 248,286 | <p>Freshman's Dream, <span class="math-container">$(x + y)^n = x^n + y^n$</span>, holds in fields of characteristic <span class="math-container">$n$</span>, when <span class="math-container">$n$</span> is prime.</p>
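<p>In the prime field Z/pZ this is easy to spot-check (a quick sketch):</p>

```python
# Freshman's Dream: (x + y)^p = x^p + y^p holds in Z/pZ for prime p
for p in (2, 3, 5, 7, 11):
    for x in range(p):
        for y in range(p):
            assert pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
```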
|
1,745,180 | <p>I want to prove that the polynomial </p>
<p>$$
f_p(x) = x^{2p+2} - cx^{2p} - dx^p - 1
$$</p>
<p>,where $c>0$ and $d>0$ are real numbers, has distinct roots. Also $p>0$ is an even integer. How can I prove that the polynomial $f_p(x)$ has distinct roots for any $c$,$d$ and $p$.</p>
<p>PS: There is a similar topic that <a href="https://math.stackexchange.com/questions/1740673/how-to-prove-that-my-polynomial-has-distinct-roots?lq=1">How to prove that my polynomial has distinct roots?</a></p>
| Bernard | 202,857 | <p>‘All’ you have to do is computing the g.c.d. of $f_p(x)$ and $f'_p(x)$ via the <em>Euclidean algorithm</em>. A polynomial has only simple roots (in $\mathbf C$) if and only if this polynomial and its derivative are coprime.</p>
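<p>For one concrete choice of parameters (say $p=2$, $c=d=1$, so $f(x)=x^6-x^4-x^2-1$), the Euclidean algorithm can be carried out in exact rational arithmetic in a few lines of Python; the g.c.d. of $f$ and $f'$ comes out constant, so this instance has simple roots. This is only a sketch for a single instance, not a proof for general $c$, $d$, $p$:</p>

```python
from fractions import Fraction as F

def polydivmod(a, b):
    """Polynomial division; coefficient lists, highest degree first."""
    a = a[:]
    q = [F(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        shift = len(a) - len(b)
        c = a[0] / b[0]
        q[len(q) - 1 - shift] = c
        for i in range(len(b)):
            a[i] -= c * b[i]
        a.pop(0)                      # leading coefficient is now zero
    return q, a

def polygcd(a, b):
    """Euclidean algorithm for polynomials over Q."""
    while any(x != 0 for x in b):
        _, r = polydivmod(a, b)
        while len(r) > 1 and r[0] == 0:
            r.pop(0)                  # strip leading zeros
        a, b = b, r
    return a

f  = [F(1), F(0), F(-1), F(0), F(-1), F(0), F(-1)]   # x^6 - x^4 - x^2 - 1
df = [F(6), F(0), F(-4), F(0), F(-2), F(0)]          # f'(x)
g = polygcd(f, df)
assert len(g) == 1 and g[0] != 0     # gcd is a nonzero constant: simple roots
```

<p>As a control, running the same routine on $x^2-2x+1$ and its derivative returns a degree-one gcd, detecting the repeated root.</p>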
|
372,401 | <p>Let us assume that the boundary of the domain in the definition of the Sobolev spaces $L^2$ and $H_0^1$ is sufficiently smooth.</p>
<p>Let $|\cdot |$ denote the norm in $L^2$. Then for a function $v$ in $H_0^1$, the norm is given via $\|v\|^2=|v|^2+|\nabla v|^2$. </p>
<p>In general, one cannot bound the $H_0^1$-norm by the $L^2$-norm, as the gradient of a function, cannot be bounded by the function values.</p>
<p>What if for $v\in H_0^1$, one has $|v|=0$. Does this imply that $|\nabla v|=0$?</p>
<p>I have tried to come to terms with this in 1D. Consider an interval $(a,b)$ and $u\in L^2(a,b)$, with a weak derivative $u'\in L^2(a,b)$ and $u(a) = u(b) = 0$. Then, $u$ is absolutely continuous almost everywhere, and one has $u(x) = \int_a^xu'(s)ds$. Then, $0=|u|^2=\int_a^b(\int_a^xu'(s)ds)^2dx$ which somehow should give that $\int_a^bu'(s)^2ds$ is zero as well...</p>
| Wintermute | 67,388 | <p>Suppose this set is path connected. Consider the point $(0,0)$ and the point $(1,\sin(1))$. The only way to link these two points is along the path $(x,\sin(\frac{1}{x}))$. However $\lim_{x \rightarrow 0}\sin(\frac{1}{x})$ does not exist. Hence there is no path connecting $(0,0)$ to $(1,\sin(1))$, a contradiction. Therefore the set is not path connected.</p>
|
3,443,672 | <p>The equation is
<span class="math-container">$$\tan\frac{5\pi}{6} \cos x=1-\sin x$$</span>
<span class="math-container">$$\sin\frac{5\pi}{6} \cos x=\cos\frac{5\pi}{6}-\cos\frac{5\pi}{6} \sin x$$</span>
<span class="math-container">$$\sin\left(\frac{5\pi}{6}+x\right)=\cos \frac{5\pi}{6}$$</span> which looks weird to me. What am I doing wrong?</p>
| johnny09 | 285,238 | <p>Let <span class="math-container">$\theta = \frac{5 \pi}{6}$</span>. Then continuing from where you stopped and noting that <span class="math-container">$\cos(\theta) = -\frac{\sqrt{3}}{2}$</span>, we get
<span class="math-container">$$\sin(\theta + x) = \cos(\theta)
= -\frac{\sqrt{3}}{2}$$</span>
and so,
<span class="math-container">$$\theta + x = \arcsin\left(-\frac{\sqrt{3}}{2}\right) = -\frac{\pi}{3}.$$</span>
Thus, <span class="math-container">$x = - \frac{7 \pi}{6}$</span>.</p>
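<p>A quick numerical check confirms that this value satisfies the original equation (it is the principal-value solution; the full solution set follows from the periodicity of sine):</p>

```python
import math

x = -7 * math.pi / 6
lhs = math.tan(5 * math.pi / 6) * math.cos(x)
rhs = 1 - math.sin(x)
assert abs(lhs - rhs) < 1e-12   # both sides equal 1/2
```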
|
1,307,460 | <p>Need to integrate this function. Need help with my assignment. Thanks</p>
| DeepSea | 101,504 | <p>Let $I = \displaystyle \int_{0}^{\frac{\pi}{2}} \dfrac{\sin^2x}{\cos x+\sin x}dx$ and $J = \displaystyle \int_{0}^{\frac{\pi}{2}} \dfrac{\cos^2x}{\cos x+ \sin x}dx$. Then you can evaluate $I+J$ and $J-I$ quickly and then <strong>solve</strong> for $I$.</p>
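<p>The point of the trick is that $J-I$ collapses (the integrand becomes $\cos x - \sin x$, which integrates to $0$ over the interval), so $I = (I+J)/2$ and only $I+J$ needs a real computation. A crude numerical check of this symmetry (a sketch with a midpoint rule):</p>

```python
import math

def midpoint(f, a, b, n=100000):
    """Crude midpoint-rule quadrature."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

I = midpoint(lambda x: math.sin(x) ** 2 / (math.cos(x) + math.sin(x)), 0.0, math.pi / 2)
J = midpoint(lambda x: math.cos(x) ** 2 / (math.cos(x) + math.sin(x)), 0.0, math.pi / 2)

assert abs(J - I) < 1e-6     # J - I = 0, hence I = J = (I + J) / 2
```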
|
2,013,650 | <p>I am trying to prove that
$$n^{n/2}<n!,\text{ for } n\ge2.$$
I can't really figure it out. </p>
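<p>A quick numerical sanity check (not a proof) is easy; note that $n=2$ actually gives equality ($2^{2/2}=2=2!$), so the strict inequality starts at $n=3$. Squaring gives the equivalent all-integer form $(n!)^2 > n^n$:</p>

```python
import math

assert 2 ** (2 / 2) == math.factorial(2)         # n = 2 is an equality case
for n in range(3, 60):
    assert n ** (n / 2) < math.factorial(n)
    assert math.factorial(n) ** 2 > n ** n       # exact integer form
```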
| Chill2Macht | 327,486 | <p>$\newcommand{\Vec}{\mathsf{Vec}}$$\newcommand{\Hom}{\operatorname{Hom}}$$\newcommand{\id}{\operatorname{id}}$My answer exceeded the character limit, so here's the last part:</p>
<p>Finally, to get the general case, consider the following functor: $$\bigotimes: \Vec^{\times k} \to \Vec, $$ which acts by: $$\bigotimes(V_1, \dots, V_k) = V_1 \otimes \dots \otimes V_k. $$</p>
<blockquote>
<p><strong>Claim:</strong> $\bigotimes$ is actually a functor.</p>
</blockquote>
<p>We already gave the proposed object part above ($\bigotimes(V_1, \dots, V_k) = V_1 \otimes \dots \otimes V_k$), now the morphism part is proposed to be the following: $$\bigotimes(f_1, \dots, f_k) = f_1 \otimes \dots \otimes f_k, $$ perhaps unsurprisingly. Now we need to check that this actually constitutes a functor.</p>
<p>Let $\id_{(V_1,\dots,V_k)}=(\id_{V_1}, \dots, \id_{V_k})$ (this holds by definition of product category). Then: $$\bigotimes \id_{(V_1, \dots, V_k)} = \bigotimes (\id_{V_1}, \dots, \id_{V_k}) = \id_{V_1} \otimes \dots \otimes \id_{V_k}. $$ For $\bigotimes$ to be a functor, it is necessary that this be the identity morphism on $\bigotimes (V_1, \dots, V_k) = V_1 \otimes \dots \otimes V_k$. However, by definition of the tensor product, it is actually relatively clear that: $$\id_{V_1} \otimes \dots \otimes \id_{V_k}=\id_{V_1 \otimes \dots \otimes V_k} = \id_{\bigotimes(V_1, \dots, V_k)} . $$ Thus $\bigotimes$ preserves identity morphisms, $$\bigotimes \id_{(V_1, \dots, V_k)} = \id_{\bigotimes (V_1, \dots, V_k)}, $$ as required. Now we need to show that $\bigotimes$ is compatible with composition of morphisms, i.e. given: $$(V_1, \dots, V_k) \overset{(f_1, \dots, f_k)}{\to} (V_1', \dots, V_k') \quad \text{and} \quad (V_1',\dots, V_k') \overset{(g_1, \dots, g_k)}{\to} (V_1'', \dots, V_k''), $$ one has that: $$\bigotimes\left( (g_1, \dots, g_k) \circ (f_1, \dots, f_k) \right) = \bigotimes(g_1, \dots, g_k) \circ \bigotimes (f_1, \dots, f_k). $$ Evaluating the left-hand side first, we have: $$\bigotimes \left( (g_1, \dots, g_k) \circ (f_1, \dots, f_k) \right) = \bigotimes (g_1\circ f_1, \dots, g_k \circ f_k)= (g_1 \circ f_1) \otimes \dots \otimes (g_k \circ f_k). $$ Now evaluating the right-hand side, one finds: $$\bigotimes (g_1, \dots, g_k) \circ \bigotimes (f_1, \dots, f_k) = (g_1 \otimes \dots \otimes g_k) \circ (f_1 \otimes \dots \otimes f_k). $$ We showed, when proving that $\Phi$ is a functor, that the tensor product $\otimes$ plays nicely with composition $\circ$, so we have finally that: $$(g_1 \otimes \dots \otimes g_k) \circ (f_1 \otimes \dots \otimes f_k) = (g_1 \circ f_1) \otimes \dots \otimes (g_k \circ f_k). 
$$ In conclusion, we have shown that: $$\bigotimes\left( (g_1, \dots, g_k) \circ (f_1, \dots, f_k) \right) = (g_1 \circ f_1) \otimes \dots \otimes (g_k\circ f_k) = \bigotimes(g_1, \dots,g_k ) \circ \bigotimes(f_1, \dots, f_k), $$ hence $\bigotimes$ is compatible with composition of morphisms and thus really is a functor, as claimed.</p>
<p><strong>Claim:</strong> $op: \Vec \to \Vec^{op}$ is a contravariant functor.</p>
<p>The object part is the same as the object part of the identity functor: $op: V \mapsto V$.</p>
<p>For the morphism part, we have the following rule (as follows from the definition of dual category): $$op: \left( V \overset{f}{\to} V' \right) \mapsto \left( V \overset{f^{op}}{\gets} V' \right). $$ Note that while $f$ is always a function (a linear transformation in fact), in general $f^{op}$ is not, i.e. only a morphism but not a function (although one can define it as a function in the case that $f$ happens to be an isomorphism).</p>
<p>Anyway, we need to show that $op$ is compatible with identity morphisms: $op(\id_V) = \id_{op(V)}=\id_V$. One has that: $$op \left( V \overset{id_V}{\to} \id_V \right) \mapsto \left( V \overset{\id_V}{\gets} V \right). $$ Now since $\id_V$ is an isomorphism, we have that the left and right hand sides are equal, hence $op(\id_V)= \id_V$, as required.</p>
<p>Now we show that $op$ is compatible with composition of morphisms in a contravariant manner. $$op(g \circ f) = \left( V \overset{(g\circ f)^{op}}{\gets} V'' \right) = \left( V \overset{f^{op}}{\gets} V' \right) \circ \left( V' \overset{g^{op}}{\gets} V'' \right) = op(f) \circ op(g), $$ as expected and required.</p>
<p>Because we have a natural isomorphism between $\Hom(-,-)$ and $\Phi(-,-)$, it follows that we also have a natural isomorphism between $\Hom((\bigotimes(-,\dots, -))^{op}, - )$ and $\Phi((\bigotimes(-,\dots, -))^{op},- )$, i.e. for any $k-$tuple of finite-dimensional vector spaces $(V_1, \dots, V_k)$, and any finite-dimensional vector space $W$, one has that: $$\Hom(V_1 \otimes \dots \otimes V_k, W) \cong \Hom(V_1 \otimes \dots \otimes V_k \otimes W^*, \mathbb{R} ), $$ the isomorphism being natural. </p>
<p><strong>Claim:</strong> $\times k: \Vec \to \Vec^{\times k}$ is a functor.</p>
<p>For the object part, we have simply: $$\times k (V) = \underbrace{(V, \dots, V)}_{k\text{ times}}, $$ and for the morphism part, given $V \overset{f}{\to} V'$, one has: $$\times k(f) = \underbrace{(V, \dots, V)}_{k\text{ times}} \overset{\underbrace{(f, \dots, f)}_{k \text{ times}}}{\to} \underbrace{(V', \dots, V')}_{k\text{ times}}.$$ Now one has fairly immediately that: $$\times k(\id_V) = (\id_V, \dots, \id_V) = \id_{(V,\dots, V)}. $$ Likewise, one also has fairly immediately that: $$\times k (g \circ f) = (g\circ f, \dots, g\circ f) = (g, \dots, g)\circ(f,\dots,f) = \times k(g) \circ \times k(f). $$ Thus $\times k: \Vec \to \Vec^{\times k}$ is a functor, as claimed.</p>
<p>Thus composing functors again, we get a natural isomorphism between $$\Hom\left( \left(\bigotimes( \times k (-) )\right)^{op}, - \right) \quad \text{and} \quad \Phi\left( \left(\bigotimes(\times k (-)) \right)^{op},- \right),$$ thus, as we wanted to show, for any choice of two finite-dimensional vector spaces $V$ and $W$, there is a natural isomorphism between: $$\Hom(\underbrace{V \otimes \dots \otimes V}_{k \text{ times}}, W) \quad \text{and} \quad \Hom(\underbrace{V \otimes \dots \otimes V}_{k\text{ times}} \otimes W^*, \mathbb{R}). $$</p>
|
3,648,315 | <p><a href="https://i.stack.imgur.com/IKnlD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/IKnlD.png" alt="enter image description here"></a></p>
<p>I have an item, let's say sword. </p>
<p>As described above, <br/>
If my sword is at state 1, I can try upgrading it. There are 3 possibilities. <br/>
It can be upgraded with prob = 0.3, remain still with prob = 0.68, can be destroyed with prob = 0.02.</p>
<p>If my sword is at state 2, I still can try to upgrade it. <br/>
It can be upgraded with prob = 0.3, can be downgraded to state 1 with prob = 0.68, can be destroyed with prob = 0.02.</p>
<p>Once my sword is destroyed, there is no turning back. <br/>
Once my sword reaches state 3, there is nothing more to do. I'm done.</p>
<p>I know it's a Markov chain problem. <br/>
I can express this situation with a matrix, and if I multiply it by itself over and over, it reaches an equilibrium state.</p>
<pre><code>p2 = matrix(c(1, rep(0, 3),
0.02, 0.68, 0.3, 0,
0.02, 0.68, 0, 0.3,
rep(0, 3), 1), 4, byrow = T)
p2
## [,1] [,2] [,3] [,4]
## [1,] 1.00 0.00 0.0 0.0
## [2,] 0.02 0.68 0.3 0.0
## [3,] 0.02 0.68 0.0 0.3
## [4,] 0.00 0.00 0.0 1.0
matrix.power <- function(A, n) { # For matrix multiplication
e <- eigen(A)
M <- e$vectors
d <- e$values
return(M %*% diag(d^n) %*% solve(M))
}
round(matrix.power(p2, 1000), 3)
## [,1] [,2] [,3] [,4]
## [1,] 1.000 0 0 0.000
## [2,] 0.224 0 0 0.776
## [3,] 0.172 0 0 0.828
## [4,] 0.000 0 0 1.000
</code></pre>
<p>But how can I get the <code>Pr(Reach state 3 without destroyed | currently at state 2)</code> using Markov chain?</p>
<p>I could get <code>Pr(Reach state 2 without destroyed | currently at state 1)</code> by using sum of geometric series.</p>
<p>Thank you.</p>
| fleablood | 280,126 | <p>WLOG assume <span class="math-container">$x \le y$</span> and <span class="math-container">$y-x = m\ge 0$</span>. Then</p>
<p><span class="math-container">$2x + m + 2x(x+m) = 83$</span> and </p>
<p><span class="math-container">$2x^2 + (2+2m)x + (m-83) = 0$</span></p>
<p><span class="math-container">$x = \frac {-(2+2m) \pm\sqrt{(2+2m)^2-8(m-83)}}4=$</span></p>
<p><span class="math-container">$ \frac {-(2+2m) \pm\sqrt{4m^2 + 668}}4=$</span></p>
<p><span class="math-container">$\frac {-1-m\pm \sqrt{m^2 +167}}2\in \mathbb Z$</span></p>
<p>So <span class="math-container">$m^2 + 167 = k^2$</span> for some non-negative integer, <span class="math-container">$k$</span>, so </p>
<p><span class="math-container">$k^2 - m^2 = (k-m)(k+m) = 167$</span> but <span class="math-container">$167$</span> is prime so <span class="math-container">$k-m =1$</span> and <span class="math-container">$k+m=167$</span> so <span class="math-container">$m=83$</span> and <span class="math-container">$k = 84$</span></p>
<p>So <span class="math-container">$x = \frac {-1-83\pm84}2$</span></p>
<p>So <span class="math-container">$x = 0, -84$</span> and <span class="math-container">$y =83, -1$</span>. </p>
<p>So <span class="math-container">$x+y = 83$</span> or <span class="math-container">$-85$</span>.</p>
|
471,710 | <p>Why do small angle approximations only hold in radians? All the books I have say this is so but don't explain why.</p>
| Community | -1 | <p>You need to clarify what you are really asking: in physics the small angle approximation is typically used to approximate a non-linear differential equation by a linear one - which is much easier to solve - by allowing us to approximate the sin(x) function by x. The resulting equation is only a good approximation for small angles. I.e., it's a physics thing, not a math thing. (As for why radians specifically: sin(x)/x tends to 1 as x tends to 0 only when x is measured in radians; in degrees the limit is pi/180.)</p>
|
4,524,393 | <p>I don't know if this question is best suited to this stack exchange. If it isn't, feel free to migrate it or close it. This question was inspired by a mistake I saw in a math class. I corrected the professor, and he acknowledged it. I then said, "Some students think professors never make mistakes". And he said, "Yes, and those students are mistaken". So, what are some famous or at least semi-famous examples of math professors making errors in the classroom? Note, the mistake has to have taken place in a classroom, not in a journal or book or paper.</p>
| Jean-Armand Moroni | 1,064,750 | <p>Grothendieck is famous for having chosen <span class="math-container">$57$</span> as a prime number example, when answering a question from a student.</p>
<p>And actually the same mistake had been made before, by Weyl, in a paper, as can be read here.
<a href="https://hsm.stackexchange.com/questions/6358/story-of-grothendiecks-prime-number">https://hsm.stackexchange.com/questions/6358/story-of-grothendiecks-prime-number</a></p>
|
440,528 | <p>My question is about group theory:</p>
<blockquote>
<p>How many subgroups does a non-cyclic group contain whose order is 25?</p>
</blockquote>
<p>How can I answer that question?</p>
<p>Can you generalize the answer?</p>
<p>Thanks for your help.</p>
| N. S. | 9,176 | <p>If $H$ is a non-trivial proper subgroup, then $H$ has $5$ elements (why?).</p>
<p>As the subgroup is not cyclic, and $25=5^2$, the order of every element $x \neq e$ is .....</p>
<p>Last but not least, if the order of $x$ is $5$, then $x, x^2, x^3, x^4$ all generate the same subgroup, so each subgroup of order $5$ is generated by exactly $4$ elements of order $5$.</p>
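<p>Taking the non-cyclic group of order $25$ to be $\mathbb{Z}_5 \times \mathbb{Z}_5$, the counting can be confirmed by direct enumeration in a few lines of Python (a sketch): the $24$ elements of order $5$ split into $6$ subgroups of order $5$, and together with the trivial subgroup and the whole group that gives $8$ subgroups in all.</p>

```python
# enumerate the cyclic subgroups of Z_5 x Z_5
elements = [(a, b) for a in range(5) for b in range(5)]
cyclic = {frozenset(((k * a) % 5, (k * b) % 5) for k in range(5))
          for (a, b) in elements}

order5 = [S for S in cyclic if len(S) == 5]
assert len(order5) == 6          # 24 elements of order 5, 4 generators each
# total: trivial subgroup + 6 subgroups of order 5 + the whole group = 8
```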
|
1,955,505 | <blockquote>
<p>$\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ Given hint: $(n^2+3n+2) = (n+2)(n+1)$</p>
</blockquote>
<p><strong>I've tried</strong> converting the series to a geometric one but failed with that approach and don't know other methods for normal series that help determine the actual convergence value. Help and hints are both appreciated</p>
| Hazem Orabi | 367,051 | <p>$$
\begin{aligned}
& S_{0} = \sum_{n=0}^{\infty} \frac{n^{0}}{4^{n}} = \sum_{n=0}^{\infty} \frac{1}{4^{n}} = \sum_{n=0}^{\infty} (1/4)^{n} = \frac{1}{1 - (1/4)} \Rightarrow \color{red}{S_{0} = \frac{4}{3}} \\ \\
& S_{1} = \sum_{n=0}^{\infty} \frac{n^{1}}{4^{n}} = \sum_{n=0}^{\infty} \frac{n + 1 - 1}{4^{n}} = \sum_{n=0}^{\infty} \frac{n + 1}{4^{n}} - \sum_{n=0}^{\infty} \frac{1}{4^{n}} = 4 \sum_{n=0}^{\infty} \frac{n + 1}{4^{n + 1}} - S_{0} \\
& \qquad = 4 \sum_{n=1}^{\infty} \frac{n}{4^{n}} - S_{0} = 4 \left[ - 0 + \sum_{n=0}^{\infty} \frac{n}{4^{n}} \right] - S_{0} = 4 S_{1} - S_{0} \Rightarrow \color{red}{S_{1} = \frac{1}{3} S_{0} = \frac{4}{9}} \\ \\
& S_{2} = \sum_{n=0}^{\infty} \frac{n^{2}}{4^{n}} = \sum_{n=0}^{\infty} \frac{(n + 1)^{2} - 2 n - 1}{4^{n}} = 4 \sum_{n=0}^{\infty} \frac{(n + 1)^{2}}{4^{n+1}} - 2 \sum_{n=0}^{\infty} \frac{n}{4^{n}} - \sum_{n=0}^{\infty} \frac{1}{4^{n}} \\
& \qquad = 4 S_{2} - 2 S_{1} - S_{0} = 4 S_{2} - \frac{5}{3} S_{0} \Rightarrow \color{red}{S_{2} = \frac{5}{9} S_{0} = \frac{20}{27}} \\ \\
& \sum_{n=0}^{\infty} \frac{n^{2} + 3 n + 2}{4^{n}} = S_{2} + 3 S_{1} + 2 S_{0} = \frac{20}{27} + \frac{12}{9} + \frac{8}{3} = \frac{20 + 36 + 72}{27} = \frac{128}{27} \\ \\
\end{aligned}
$$</p>
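<p>A quick numerical check of the intermediate sums and the final value:</p>

```python
N = 60   # partial sums; the tail beyond n = 60 is negligible
S0 = sum(1 / 4 ** n for n in range(N))
S1 = sum(n / 4 ** n for n in range(N))
S2 = sum(n ** 2 / 4 ** n for n in range(N))
total = sum((n ** 2 + 3 * n + 2) / 4 ** n for n in range(N))

assert abs(S0 - 4 / 3) < 1e-12
assert abs(S1 - 4 / 9) < 1e-12
assert abs(S2 - 20 / 27) < 1e-12
assert abs(total - 128 / 27) < 1e-12
```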
|
1,955,505 | <blockquote>
<p>$\sum_{n=0}^\infty \frac{n^2+3n+2}{4^n} = \frac{128}{27}$ Given hint: $(n^2+3n+2) = (n+2)(n+1)$</p>
</blockquote>
<p><strong>I've tried</strong> converting the series to a geometric one but failed with that approach and don't know other methods for normal series that help determine the actual convergence value. Help and hints are both appreciated</p>
| robjohn | 13,854 | <p>Using the formula for the sum of a geometric series:
$$
\sum_{k=0}^\infty x^k=\frac1{1-x}\tag{1}
$$
Taking the derivative of $(1)$ and tossing the terms which are $0$:
$$
\sum_{k=1}^\infty kx^{k-1}=\frac1{(1-x)^2}\tag{2}
$$
Taking the derivative of $(2)$ and tossing the terms which are $0$:
$$
\sum_{k=2}^\infty k(k-1)x^{k-2}=\frac2{(1-x)^3}\tag{3}
$$
Reindexing the sum in $(3)$:
$$
\sum_{k=0}^\infty(k+2)(k+1)x^k=\frac2{(1-x)^3}\tag{4}
$$
Plug in $x=\frac14$:
$$
\begin{align}
\sum_{k=0}^\infty\frac{(k+2)(k+1)}{4^k}
&=\frac2{\left(\frac34\right)^3}\\[6pt]
&=\frac{128}{27}\tag{5}
\end{align}
$$</p>
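<p>Identity $(4)$, and the value in $(5)$ at $x=\frac14$, can be sanity-checked numerically (a sketch):</p>

```python
def partial(x, terms=200):
    """Partial sum of sum_k (k+2)(k+1) x^k, as in identity (4)."""
    return sum((k + 2) * (k + 1) * x ** k for k in range(terms))

for x in (0.1, 0.25, 0.5):
    assert abs(partial(x) - 2 / (1 - x) ** 3) < 1e-9   # identity (4)

assert abs(partial(0.25) - 128 / 27) < 1e-9            # the value in (5)
```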
|
3,080,566 | <p>please tell me how I can solve the following equation. </p>
<p><span class="math-container">$$z^3+\frac{(\sqrt2+\sqrt2i)^7}{i^{11}(-6+2\sqrt3i)^{13}}=0$$</span></p>
<p>What formula should I use? If possible, tell me how to solve this equation or write where I can find a formula for solving such an equation. I searched for it on the Internet, but could not find anything useful.</p>
| Community | -1 | <p><span class="math-container">$\sqrt2+\sqrt2i=2e^{\frac{\pi i}4}$</span>.</p>
<p>And <span class="math-container">$-6+2\sqrt3i=4\sqrt3e^{\frac{5\pi i}6}$</span> (the point lies in the second quadrant, so its argument is <span class="math-container">$\pi-\frac\pi6=\frac{5\pi}6$</span>).</p>
<p>And <span class="math-container">$i^{11}=-i$</span>.</p>
<p>So we have <span class="math-container">$z^3-\frac{2^7e^{\frac{7\pi i}4}}{i\cdot (4\sqrt3)^{13}e^{\frac{65\pi i}6}}=0$</span>. Since <span class="math-container">$e^{\frac{65\pi i}6}=e^{\frac{5\pi i}6}$</span>, <span class="math-container">$\frac1i=e^{-\frac{\pi i}2}$</span> and <span class="math-container">$(4\sqrt3)^{13}=2^{26}3^{\frac{13}2}$</span>, this gives <span class="math-container">$z^3=\frac{e^{i\left(\frac{7\pi}4-\frac\pi2-\frac{5\pi}6\right)}}{2^{19}3^{\frac{13}2}}=\frac{e^{\frac{5\pi i}{12}}}{2^{19}3^{\frac{13}2}}$</span>, so <span class="math-container">$z=\frac{e^{\frac{5\pi i}{36}}}{576\cdot 2^{\frac13}\cdot 3^{\frac16}},\ \frac{e^{\frac{5\pi i}{36}}}{576\cdot 2^{\frac13}\cdot 3^{\frac16}}\cdot e^{\frac{2\pi i}3}$</span> or <span class="math-container">$\frac{e^{\frac{5\pi i}{36}}}{576\cdot 2^{\frac13}\cdot 3^{\frac16}}\cdot e^{\frac{4\pi i}3}$</span>.</p>
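<p>The three roots can be recomputed and verified numerically with cmath (a quick check; they all share the modulus <span class="math-container">$\frac{1}{576\cdot 2^{1/3}\cdot 3^{1/6}}$</span>):</p>

```python
import cmath

c = (2 ** 0.5 + 2 ** 0.5 * 1j) ** 7 / (1j ** 11 * (-6 + 2 * 3 ** 0.5 * 1j) ** 13)
r, phi = cmath.polar(-c)                       # the equation is z^3 = -c
roots = [r ** (1 / 3) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / 3)
         for k in range(3)]

for z in roots:
    assert abs(z ** 3 + c) < 1e-9 * abs(c)     # each root satisfies z^3 + c = 0
assert abs(abs(roots[0]) - 1 / (576 * 2 ** (1 / 3) * 3 ** (1 / 6))) < 1e-12
```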
|
2,729,364 | <p>What are direct methods for proving that a ring is a UFD in general without proving that it's a PID/Euclidean domain/field and using the fact that all those things are UFDs?</p>
<p>As an example, we can take <span class="math-container">$\mathbb{Z}[i]$</span> or <span class="math-container">$\mathbb{Z}[\sqrt{-2}]$</span> or other rings you come up with.</p>
| Jordan Hardy | 202,814 | <p>This would be extremely wasteful, and nobody would do it before showing they were Euclidean domains or PIDs, but you could show that they have class number 1 through some indirect means. $\mathbb Z[i]$ is a Dedekind domain so it is a UFD if and only if its class number is 1.</p>
|
3,113,850 | <blockquote>
<p><span class="math-container">$$f(x)=\sum_{n=1}^{\infty}{\frac{x^{n-1}}{n}},$$</span></p>
<p>Prove that <span class="math-container">$f(x)+f(1-x)+\log(x)\log(1-x)=\frac{{\pi}^2}{6}$</span></p>
</blockquote>
<p>In my mind though,I think that this is related to Basel problem<span class="math-container">$\left(\sum\limits_{n=1}^{\infty}{\frac{1}{n^2}}\right)$</span>,but I don't know how to solve this.</p>
<p>Any help would be greatly appreciated :-)</p>
<p>Edit:</p>
<p>My attempt:</p>
<p>I cannot use LaTeX expertly, so I post an image. The circled part: <a href="https://i.stack.imgur.com/gFDdf.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/gFDdf.jpg" alt="enter image description here " /></a></p>
| user1551 | 1,551 | <p>Let <span class="math-container">$A$</span> be <span class="math-container">$n\times n$</span>. Since each entry of <span class="math-container">$\operatorname{adj}(A)$</span> is a signed multiple of a <span class="math-container">$(n-1)$</span>-rowed minor, <span class="math-container">$\operatorname{adj}(A)=0$</span> if and only if <span class="math-container">$\operatorname{rank}(A)\le n-2$</span>.</p>
<p>It follows that <span class="math-container">$A\ne0=\operatorname{adj}(A)$</span> if and only if <span class="math-container">$n\ge3$</span> and <span class="math-container">$0<\operatorname{rank}(A)\le n-2$</span>.</p>
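<p>A concrete <span class="math-container">$3\times 3$</span> illustration, computing the adjugate directly from cofactors (a sketch):</p>

```python
def adjugate3(A):
    """Adjugate (transpose of the cofactor matrix) of a 3x3 matrix."""
    def minor(i, j):
        rows = [r for r in range(3) if r != i]
        cols = [c for c in range(3) if c != j]
        return (A[rows[0]][cols[0]] * A[rows[1]][cols[1]]
                - A[rows[0]][cols[1]] * A[rows[1]][cols[0]])
    return [[(-1) ** (i + j) * minor(i, j) for i in range(3)] for j in range(3)]

rank1 = [[1, 2, 3], [2, 4, 6], [3, 6, 9]]      # rank 1 <= 3 - 2: adjugate vanishes
assert all(v == 0 for row in adjugate3(rank1) for v in row)

rank2 = [[1, 0, 0], [0, 1, 0], [0, 0, 0]]      # rank 2 = 3 - 1: adjugate is nonzero
assert any(v != 0 for row in adjugate3(rank2) for v in row)
```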
|
593,746 | <p>In general, if a random process is ergodic, does it imply that it is also stationary in any sense?</p>
| Val | 39,930 | <p>This man says that it is <a href="https://www.youtube.com/watch?v=k6y2kzayV6A&#t=1433" rel="nofollow">https://www.youtube.com/watch?v=k6y2kzayV6A&#t=1433</a>. So, ergodicity implies stationarity.</p>
|
3,161,874 | <p>Find the vector equation of the plane which contains points <span class="math-container">$A(0,1,1)$</span>, <span class="math-container">$B(-1,2,1)$</span> and <span class="math-container">$C(2,0,2)$</span>. </p>
<p>I did this by first finding <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span> where I got <span class="math-container">$AB=(-1,1,0)$</span> and <span class="math-container">$AC=(2,-1,1)$</span>. Then I did cross product with this and I got <span class="math-container">$i+j-k$</span>. When I then used this to find equation I got <span class="math-container">$x+y-z=0$</span> but I am unsure why this is wrong. Can someone please help? </p>
| fleablood | 280,126 | <p>Suppose the fly is at the wall. And suppose this will be the last trip or partial trip of the fly. Suppose the car is <span class="math-container">$h$</span> miles away.</p>
<p>The fly and and car have a combined speed of <span class="math-container">$140 \frac {km}{hr}$</span> so the fly reaches the car in <span class="math-container">$\frac h{140}$</span> hours. In that time the car has traveled <span class="math-container">$40\frac h{140} = \frac 27h$</span> and is now <span class="math-container">$h-\frac 27h = \frac 57h$</span> from the wall. So the fly heads back to the wall.</p>
<p>As the trip back is just as far this takes <span class="math-container">$\frac h{140}$</span> hours and the car has traveled another <span class="math-container">$\frac 27h$</span> and is now <span class="math-container">$\frac 37h$</span> from the wall. [1]</p>
<p>So the fly starts another trip, contradicting that this was his last. So the fly never makes a last trip, and instead there are an infinite number of trips.</p>
<p>Figuring out how far the fly flies is a matter of noting the car is on a straight path and travels <span class="math-container">$20km$</span> at <span class="math-container">$40 kmh$</span>, so this takes <span class="math-container">$30$</span> minutes. The fly, no matter how many times (infinitely many) it zigs, travels at <span class="math-container">$100kmh$</span>. So in <span class="math-container">$30$</span> minutes it flies <span class="math-container">$50 km$</span>.</p>
<p>If one wishes to set this up as an infinite sum.....</p>
<p>Each trip the fly flies <span class="math-container">$\frac {10}7$</span> of the distance the car was away. And each trip the car is <span class="math-container">$\frac {3}{7}$</span> of the distance it was before. So the distance the fly travels is <span class="math-container">$\sum_{k=0}^{\infty} \frac {10}7*(\frac {3}{7})^k*20$</span> which if I did this correctly is</p>
<p><span class="math-container">$\frac {10}7*20(\sum_{k=0}^{\infty} (\frac {3}{7})^k)= \frac {200}{7}\frac 1{1-\frac {3}{7}} = \frac {200}{7}\frac {7}{4}= 50$</span>km.</p>
<p>[1](<em>for the record, in this time, <span class="math-container">$\frac 1{70}h$</span> hours, the car has traveled <span class="math-container">$\frac 47h$</span> and the fly has travelled <span class="math-container">$\frac {10}7h$</span></em>).</p>
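<p>Both computations agree numerically (a quick check of the series and of the time argument):</p>

```python
series_total = sum((10 / 7) * (3 / 7) ** k * 20 for k in range(200))
assert abs(series_total - 50) < 1e-9    # the geometric series sums to 50 km
assert (20 / 40) * 100 == 50            # 0.5 h of flight at 100 km/h
```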
|
3,161,874 | <p>Find the vector equation of the plane which contains points <span class="math-container">$A(0,1,1)$</span>, <span class="math-container">$B(-1,2,1)$</span> and <span class="math-container">$C(2,0,2)$</span>. </p>
<p>I did this by first finding <span class="math-container">$AB$</span> and <span class="math-container">$AC$</span> where I got <span class="math-container">$AB=(-1,1,0)$</span> and <span class="math-container">$AC=(2,-1,1)$</span>. Then I did cross product with this and I got <span class="math-container">$i+j-k$</span>. When I then used this to find equation I got <span class="math-container">$x+y-z=0$</span> but I am unsure why this is wrong. Can someone please help? </p>
| Guest3711 | 657,760 | <p>Simple way to calculate it: It takes the car 30 minutes to travel the 20km to the wall at 40km/hr. The fly, traveling at 100km/hr will travel 50km in those same 30 minutes.</p>
<p>Edit: There will be an infinite number of trips. It's similar to how Zeno's paradox works where the trips get shorter and shorter and eventually take an infinitely small amount of time. But all those infinitely small trips end up as a finite amount of distance.</p>
|
87,319 | <p>How might I show that there's no metric on the space of measurable functions on $([0,1],\mathrm{Lebesgue})$ such that a sequence of functions converges a.e. iff the sequence converges in the metric?</p>
| pgassiat | 7,406 | <p>In a metric space, a sequence such that every subsequence has a further subsequence converging to one fixed limit must itself converge to that limit.</p>
<p>A sequence converging in measure but not a.e. violates this property for a.e. convergence: every subsequence has a further subsequence converging a.e. to the same limit, yet the sequence itself does not converge a.e. Hence a.e. convergence cannot come from a metric.</p>
|
3,908,795 | <p>This was left as an exercise in Apostol's Calculus Vol II and I'm not very sure how to proceed and have my doubts on whether the following would be sufficient due to step <span class="math-container">$(1)$</span>:
<span class="math-container">$$-(x+y) \stackrel{(1)}{=} -1(x+y) \stackrel{(2)}{=} -1x + -1y \stackrel{(3)}{=} (-x) + (-y) = -x-y$$</span></p>
<p>Where <span class="math-container">$(2)$</span> is allowed by the distributive multiplication property for addition in V and <span class="math-container">$(3)$</span> is allowed because <span class="math-container">$(-a)x = -(ax)$</span>, which is proven by:</p>
<p>Let <span class="math-container">$z = (-a)x$</span>
<span class="math-container">$$ z + ax = (-a)x + ax \stackrel{(a)}{=} (-a + a)x = 0x \stackrel{(b)}{=} O $$</span>
Where <span class="math-container">$(a)$</span> is allowed by the distributive multiplication property for scalars addition and <span class="math-container">$(b)$</span> is true because <span class="math-container">$0x = O$</span> (I'm not providing this proof here, but it can be assumed already proven). We find that <span class="math-container">$z$</span> is the negative of <span class="math-container">$ax$</span> and so <span class="math-container">$z = -(ax)$</span>.</p>
<p>Would this be a sufficient proof?</p>
| blackmirror7 | 849,727 | <p>You can simply say</p>
<p>-(x+y)=-(x+y)+x-x+y-y=-(x+y)+(x+y)-x-y (since we add x-x and y-y there is no problems)</p>
<p>which then would imply -(x+y)=-x-y</p>
|
3,908,795 | <p>This was left as an exercise in Apostol's Calculus Vol II and I'm not very sure how to proceed and have my doubts on whether the following would be sufficient due to step <span class="math-container">$(1)$</span>:
<span class="math-container">$$-(x+y) \stackrel{(1)}{=} -1(x+y) \stackrel{(2)}{=} -1x + -1y \stackrel{(3)}{=} (-x) + (-y) = -x-y$$</span></p>
<p>Where <span class="math-container">$(2)$</span> is allowed by the distributive multiplication property for addition in V and <span class="math-container">$(3)$</span> is allowed because <span class="math-container">$(-a)x = -(ax)$</span>, which is proven by:</p>
<p>Let <span class="math-container">$z = (-a)x$</span>
<span class="math-container">$$ z + ax = (-a)x + ax \stackrel{(a)}{=} (-a + a)x = 0x \stackrel{(b)}{=} O $$</span>
Where <span class="math-container">$(a)$</span> is allowed by the distributive multiplication property for scalars addition and <span class="math-container">$(b)$</span> is true because <span class="math-container">$0x = O$</span> (I'm not providing this proof here, but it can be assumed already proven). We find that <span class="math-container">$z$</span> is the negative of <span class="math-container">$ax$</span> and so <span class="math-container">$z = -(ax)$</span>.</p>
<p>Would this be a sufficient proof?</p>
| Mirzathecutiepie | 791,050 | <p>Notice that <span class="math-container">$(x + y) - (x + y) = 0$</span> but we also have <span class="math-container">$x + y - x - y = 0$</span>. Hence we have <span class="math-container">$(x + y) - (x + y) = x + y - x - y = (x + y) - x - y$</span>. This can be reduced to <span class="math-container">$(x + y) - (x+y) = (x + y) -x -y$</span>. Then simply cancel and we reach <span class="math-container">$-(x + y) = -x - y$</span>.</p>
<p>I'm assuming that you've proven the uniqueness of inverses in a vector space and the additive cancellation rule.</p>
|
4,065,868 | <p>Can anyone here recommend a low-dimensional topology textbook that covers knot theory and 3- and 4-manifolds? Or should I look for these subjects in separate textbooks?</p>
| user901807 | 901,807 | <p>These are diverse subjects studied with different tools. You will need a <em>strong</em> background in algebraic topology, as well as manifold topology (PL or differentiable, most likely the latter), to do well with the material.</p>
<p>Here are some very good introductory texts in each.</p>
<p>Knot theory: Rolfsen's "Knots and links". Background: in addition to a good full course in algebraic topology, know differentiable manifolds through the isotopy extension theorem.</p>
<p>3-manifold topology: Hempel's book is the classic. Hatcher's short set of notes is a good substitute, though it doesn't cover as much. At some point you should read Peter Scott's paper on geometries of 3-manifolds.</p>
<p>The theory of 4-manifolds is too diverse to be well-discussed in one book. One should read Gompf and Stipsicz, "4-manifolds and Kirby Calculus". After that you can try to feel around for your particular taste. Some people like the book by Scorpan but I haven't read it.</p>
|
1,105,056 | <p>There's something I've never understood about polynomials.</p>
<p>Suppose $p(x) \in \mathbb{R}[x]$ is a real polynomial. Then obviously,</p>
<p>$$(x-a) \mid p(x)\, \longrightarrow\, p(a) = 0.$$</p>
<p>The converse of this statement was used throughout high school, but I never really understood why it was true. I think <em>maybe</em> a proof was given in 3rd year university algebra, but obviously it went over my head at the time. So anyway:</p>
<blockquote>
<p><strong>Question.</strong> Why does $p(a)=0$ imply $(x-a) \mid p(x)$?</p>
</blockquote>
<p>I'd especially appreciate an answer from a commutative algebra perspective.</p>
| idmercer | 7,635 | <p>Here's what helped make this intuitive for me when I took undergraduate abstract algebra:</p>
<p>You can always <em>try</em> to divide a polynomial by another polynomial. There is a division algorithm.</p>
<p>Of course, the division might not come out evenly, and there might be a remainder. But what would that remainder look like?</p>
|
4,431,764 | <p>I was solving one problem in which I had to find the value of <span class="math-container">$x$</span> and at the last step my result came out to be <span class="math-container">$x^{\log_b{a}}=2$</span> but I was not able to get to the correct answer after this. Here's what I did:</p>
<p><span class="math-container">$$\begin{align}
&x^{\log_b{a}}=2\\
\Rightarrow\; & \log_b{a}\cdot\log_b{x}=\log_b{2} \rightarrow \text{Taking log to the base }b\text{ on both side}\\
\Rightarrow\; & \log_b{x}=\frac{\log_b{2}}{\log_b{a}}\\
\Rightarrow\; & \log_b{x}= \log_a{2}
\end{align}$$</span>
But as you can see, I was not able to proceed further; I tried other ways to get to the solution, but without success.</p>
<p>The correct answer that has been provided is <span class="math-container">$x=2^{\log_a{b}}$</span>. Can someone please help me as to how to get to this form of the answer?</p>
| 5xum | 112,884 | <p>In the reals, if <span class="math-container">$\beta > 0$</span>, the solution to the equation <span class="math-container">$$x^{\alpha} = \beta$$</span> is <span class="math-container">$x=\beta^\frac{1}{\alpha}$</span> which is also sometimes written as <span class="math-container">$x=\sqrt[\alpha]{\beta}$</span>.</p>
<p>Note that this is the same solution as the provided answer, because in your case, <span class="math-container">$\beta=2$</span> and <span class="math-container">$\alpha=\log_b a$</span>, which means</p>
<p><span class="math-container">$$x=2^{\frac{1}{\log_b a}} = 2^{\log_a b}$$</span></p>
<p>the last equality being true because, in general,
<span class="math-container">$$\frac{1}{\log_b(a)} = \frac{1}{\frac{\ln a}{\ln b}} = \frac{\ln b}{\ln a} = \log_a(b)$$</span></p>
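<p>A quick numerical check (my own addition, with arbitrary illustrative bases $a=3$, $b=5$) that $x = 2^{\log_a b}$ really satisfies $x^{\log_b a} = 2$:</p>

```python
import math

a, b = 3.0, 5.0              # arbitrary bases > 1, chosen only for illustration
x = 2 ** math.log(b, a)      # the claimed solution x = 2^(log_a b)
lhs = x ** math.log(a, b)    # x^(log_b a); should come back to 2,
                             # since log_b(a) = 1 / log_a(b)
```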
|
1,047,292 | <p><img src="https://i.imgur.com/2xwGbMS.png" alt="Sum from n = 0 to infinity of Cn*(4^n)">
<img src="https://i.imgur.com/z9qZ5Wh.png" alt="Sum from n = 0 to infinity of Cn*(-4^n) and sum from n = 0 to infinity of Cn*(-2^n)"></p>
<p>I feel the coefficient $C_n$ has to be zero in order for the original series to converge, as the terms $4^n$ diverge as $n \to \infty$. Are there any other ways for this series to converge, and if so, will the convergence remain in an alternating series with bases of $-2$ and $-4$?</p>
| Eoin | 163,691 | <p>It is not true that the values of $C_n$ must be $0$. For example, if $C_n=(\frac{1}{8})^n$ then our series would be $\sum_{n=0}^\infty(\frac{1}{8})^n4^n=\sum_{n=0}^\infty(\frac{1}{2})^n$ which does converge.</p>
<p>It is not true that $\sum_{n=0}^\infty C_n (-4)^n$ must converge. For an example, suppose $C_n=(-\frac{1}{4})^n\frac{1}{n+1}$. Then $$\sum_{n=0}^\infty C_n 4^n=\sum_{n=0}^\infty\bigg(\frac{4}{4}\bigg)^n\frac{(-1)^n}{n+1}=\sum_{n=0}^\infty(-1)^n\frac{1}{n+1}=\sum_{k=1}^\infty\frac{(-1)^{k+1}}{k}$$ which converges. However, $$\sum_{n=0}^\infty C_n (-4)^n=\sum_{n=0}^\infty\frac{1}{n+1}=\sum_{k=1}^\infty \frac{1}{k}$$ </p>
<p>which is the harmonic series, and diverges.</p>
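<p>A small numeric illustration of this example (my own addition, not part of the answer): with $C_n=(-\frac14)^n\frac{1}{n+1}$ one has $C_n4^n=\frac{(-1)^n}{n+1}$ and $C_n(-4)^n=\frac{1}{n+1}$, so the partial sums of the first series settle down to $\ln 2$ while those of the second keep growing like $\log N$:</p>

```python
import math

N = 100_000
# C_n * 4^n = (-1)^n / (n+1): alternating, partial sums -> ln 2
s_conv = sum((-1) ** n / (n + 1) for n in range(N))
# C_n * (-4)^n = 1 / (n+1): harmonic, partial sums grow without bound
s_div = sum(1.0 / (n + 1) for n in range(N))
```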
|
1,047,292 | <p><img src="https://i.imgur.com/2xwGbMS.png" alt="Sum from n = 0 to infinity of Cn*(4^n)">
<img src="https://i.imgur.com/z9qZ5Wh.png" alt="Sum from n = 0 to infinity of Cn*(-4^n) and sum from n = 0 to infinity of Cn*(-2^n)"></p>
<p>I feel the coefficient $C_n$ has to be zero in order for the original series to converge, as the terms $4^n$ diverge as $n \to \infty$. Are there any other ways for this series to converge, and if so, will the convergence remain in an alternating series with bases of $-2$ and $-4$?</p>
| marty cohen | 13,079 | <p>If
$c_n
=\frac{(-1)^n}{n4^n}
$,
then
$\sum_{=1}^{\infty} c_n 4^n$
converges
(it's a alternating sum
with the terms decreasing to zero),
and
$\sum_{=1}^{\infty} (-1)^n c_n 4^n$
does not converge
(it's the well-known
harmonic sum).</p>
<p>If
$\sum_{=1}^{\infty} c_n 4^n$
converges,
then
$c_n 4^n \to 0$.
This is <em>necessary</em>
for the sum to converge.
Therefore,
for any $\epsilon > 0$,
there is an $N$
such that
$|c_n 4^n|
<\epsilon
$
for $n > N$.
In particular,
choosing $\epsilon = 1$,
there is an $N_1$
such that
$|c_n 4^n|
<1
$
for $n > N_1$.</p>
<p>Therefore,
for $n > N_1$,
$|c_n (-2)^n|
=|c_n 2^n|
=\dfrac{|c_n 4^n|}{2^n}
<\dfrac1{2^n}
$.
Since
$\sum_{n=1}^{\infty} \dfrac1{2^n}$
converges,
so does
$\sum_{n=1}^{\infty} c_n (-2)^n$.</p>
|
936,830 | <p>I made the differential equation : $$dQ = (-1/100)2Q dt$$ </p>
<p>I separate it and get: $\int \frac{dQ}{Q} = \int \left(-\frac{2}{100}\right)dt$</p>
<p>this leads me to: $\log(|Q|) = (-t/50) + C$</p>
<p>I simplify that to $Q = e^{-t/50}$</p>
<p>My TI-Nspire differential equation solver, however, gives me: $Q = Ce^{-t/50}$</p>
<p>I'm confused as to why the calculator is multiplying my answer by a constant, and which one is the correct answer.</p>
| Did | 6,179 | <p>Forgetting some irrelevant factorial and exponential terms, you seem to be trying to factor the ratio $$R=\frac{\theta^{u-v}\lambda^v}{(\theta+\lambda)^u}.$$
You are not very specific about the <em>complete</em> expression you get for $R$, but indeed, $$R=\left(\frac{\lambda}{\theta}\right)^v\left(\frac{\theta}{\theta+\lambda}\right)^u.$$ And $R$ is also, as Casella and Berger say, $$R=\left(\frac{\lambda}{\theta+\lambda}\right)^v\left(\frac{\theta}{\theta+\lambda}\right)^{u-v}.$$ So... what is the problem, really?</p>
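<p>A quick numeric check that the three expressions for $R$ agree (my own addition; the parameter values are arbitrary positive test values, not anything from Casella and Berger):</p>

```python
theta, lam = 0.7, 1.3   # arbitrary positive values
u, v = 5, 2             # arbitrary exponents with u >= v

R1 = theta ** (u - v) * lam ** v / (theta + lam) ** u                  # original ratio
R2 = (lam / theta) ** v * (theta / (theta + lam)) ** u                 # first factoring
R3 = (lam / (theta + lam)) ** v * (theta / (theta + lam)) ** (u - v)   # Casella-Berger form
```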
|
2,053,255 | <p>Let $f_{X,Y}(x,y)=\frac{1}{8}$ for $-2<x<2$, $0<y<2$. Find $f(z)$ where $Z=X+Y$. </p>
<p>I am having difficulty solving the above problem. My attempt at a solution relies on a convolution formula, which says that the $pdf$ of $Z=X+Y$, given the joint $pdf$ $f(x,y)$ is given by $$f_Z(z)= \int_{s+t=z} f(s,t)ds=\int_{-\infty}^{\infty} f(s,z-s)ds$$. So first note that $f(s,z-s)=\frac{1}{8}$ if $-2<s<2$ and $0<z-s<2$. Then note that $0<z-s<2$ is the same as $z-2<s<z$. But these regions depend on the value of $z$. Hence,
$$\begin{equation}
f_Z(z)=
\begin{cases}
\int_{-2}^{z} \frac{1}{8}ds, & \text{if}\ 0\leq z \leq2\\
\int_{z-2}^{2} \frac{1}{8}ds, & \text{if}\ 2<z\leq4
\end{cases}
\end{equation}$$. After evaluating the integrals, I get </p>
<p>$$\begin{equation}
f_Z(z)=
\begin{cases}
\frac{z-2}{8}, & \text{if}\ 0\leq z \leq2\\
\frac{4-z}{8}, & \text{if}\ 2<z\leq4
\end{cases}
\end{equation}$$.</p>
<p>I have solved similar problems like this one but all of them have been defined over the intervals $0<x<1$ and $0<y<1$. In this problem, however, we have the intervals $(-2,2)$ for $x$ and $(0,2)$ for $y$.I have graphed these regions but I have no intuition on how to divide it. Any help would be appreciated. Thanks. </p>
| Graham Kemp | 135,106 | <p>First note that when <span class="math-container">$X\in[-2..2]$</span> and <span class="math-container">$Y\in[0..2]$</span>, then <span class="math-container">$X+Y\in[-2..4]$</span>, thus this is the support for the pdf of <span class="math-container">$Z$</span>.</p>
<p>You have obtained that: <span class="math-container">$$f_{\small Z}(z)=\int_\Bbb R \tfrac 18\mathbf 1_{-2\leqslant s\leqslant 2, 0\leqslant z-s\leqslant 2}\,\mathrm d s$$</span></p>
<p>With a little rearrangement, and the aforementioned note, we shall obtain:<span class="math-container">$$f_{\small Z}(z)=\tfrac 18\mathbf 1_{-2\leqslant z\leqslant 4}\int_\Bbb R \mathbf 1_{\max(-2,z-2)\leqslant s\leqslant \min(2,z)}\,\mathrm d s$$</span></p>
<p>So, <span class="math-container">$-2=z-2$</span> and <span class="math-container">$2=z$</span> are the points we partition the support for <span class="math-container">$Z$</span>'s pdf. (<span class="math-container">$z=0$</span> and <span class="math-container">$z=2$</span>, obvs.)</p>
<p><span class="math-container">$$f_{\small Z}(z)=\tfrac 18\left(\mathbf 1_{-2\leqslant z\lt 0}\int_{-2}^{z}\mathrm d s+\mathbf 1_{0\leqslant z\lt 2}\int_{z-2}^z\mathrm d s+\mathbf 1_{2\leqslant z\leqslant 4}\int_{z-2}^2\mathrm d s \right)$$</span></p>
<p>Of course those integrals are trivial.</p>
<p><span class="math-container">$$f_{\small Z}(z)=\tfrac{z+2}{8}\mathbf 1_{-2\leqslant z\lt 0}+\tfrac{2}{8}\mathbf 1_{0\leqslant z\lt 2}+\tfrac {4-z}{8}\mathbf 1_{2\leqslant z\leqslant 4}$$</span></p>
<p>Or if you prefer:<span class="math-container">$$f_{\small Z}(z)=\begin{cases}(z+2)/8&:& -2\leqslant z\lt 0\\1/4&:&0\leqslant z\lt 2\\(4-z)/8&:&2\leqslant z\leqslant 4\\0&:&\textsf{elsewhere}\end{cases}$$</span></p>
<p>NB: This pdf is in the shape of a trapezoid.</p>
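<p>The trapezoidal density can be double-checked by simulation (my own addition, not part of the answer): draw $X\sim U(-2,2)$ and $Y\sim U(0,2)$ independently and compare the empirical frequencies of $Z=X+Y$ against the areas under each piece ($\tfrac14$, $\tfrac12$, $\tfrac14$):</p>

```python
import random

random.seed(0)
zs = [random.uniform(-2, 2) + random.uniform(0, 2) for _ in range(200_000)]

n = len(zs)
p_left  = sum(1 for z in zs if z < 0) / n        # left triangle, area 1/4
p_mid   = sum(1 for z in zs if 0 <= z < 2) / n   # flat middle, area 1/2
p_right = sum(1 for z in zs if z >= 2) / n       # right triangle, area 1/4
```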
|
3,672,948 | <p>Consider the series
<span class="math-container">$$
S = \sum_{n=1}^\infty e^{-n^2x}
$$</span>
then I have to argue for that <span class="math-container">$S$</span> is convergent if and only if <span class="math-container">$x>0$</span>.</p>
<p>As this is an "if and only if", I think I first have to assume that $S$ is convergent and show that this implies $x>0$, but I am not sure how. It is easy for me to see that if $x=0$ the series is divergent, but if I were to assume that $S$ is convergent and, for a contradiction, that $x\leq 0$, how do I proceed? And how about the other direction? </p>
<p>Do you mind helping me? </p>
| Alex | 38,873 | <p>Another way to see it is to compare the sum in question to the integral</p>
<p><span class="math-container">$$
\int_{1}^{\infty} e^{-(t\sqrt{x})^2}\, dt =\frac{1}{\sqrt{x}}\int_{\sqrt{x}}^{\infty} e^{-s^2}\, ds
$$</span>
which exists only for <span class="math-container">$x>0$</span>.</p>
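<p>A quick numeric illustration (my own addition): for $x>0$ the terms die off so fast that the partial sums stabilize almost immediately, while for $x=0$ every term equals $1$ and the partial sums grow without bound:</p>

```python
import math

def partial(x, N):
    """Partial sum of sum_{n=1}^{N} exp(-n^2 x)."""
    return sum(math.exp(-n * n * x) for n in range(1, N + 1))

converged = partial(0.5, 50)    # already indistinguishable from the full sum
diverging = partial(0.0, 100)   # equals N = 100 and keeps growing with N
```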
|
1,618,952 | <p>How to prove this exercise?</p>
<p>Let $A$ be an $n \times n$ diagonal matrix with characteristic polynomial</p>
<p>$$(x-c_1)^{d_1}...(x-c_k)^{d_k}$$</p>
<p>where the $c_i$ are distinct. Let $V$ be the space of $n \times n$ matrices $B$ such that $AB=BA$. Prove that the dimension of $V$ is $d_{1}^{2}+...+d_{k}^{2}$.</p>
| Daniel Buck | 293,319 | <p>Consider the convergent series
$$\sum_{s\in\mathbb{N}} f(s)$$
with sum $a$, and label this series $A$.
Now let us order the terms in series $A$ as $f(n)$, for $n=1,2,\dotsc,$ to give the series as follows:
$$\sum_{n=1}^{\infty} f(n),$$
and label this series $B$. For every partial sum $B_M=\sum_{n=1}^{M} f(n)$ of series $B$, there exists a finite subset of $\mathbb{N}$, say $T$, which gives a partial sum of $A$ containing all terms of $B_M$:
$$A_T=\sum_{s\in T} f(s).$$
We can now find a partial sum $B_N=\sum_{n=1}^{N} f(n)$ containing all terms from $A_T$. Assuming the terms are nonnegative, this gives the inequality
$$
B_M\le A_T\le B_N.
$$
Since we know the sum of the convergent series $A$ is $a$ we also have $A_T\le a$, which, since $B_M\le A_T\le a$, implies $B$ will also be convergent with some limit $b$. Now if we let $M\rightarrow\infty$ in $B_M\le A_T\le B_N$, we get $b\le a\le b$, and so $b=a$. $\square$</p>
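<p>The order-independence being proved here can be illustrated numerically (my own addition). Using exact rational arithmetic so that no floating-point noise intrudes, summing the positive terms $1/2^n$ in a shuffled order gives exactly the same total:</p>

```python
from fractions import Fraction
import random

# A convergent positive series, truncated: 1/2 + 1/4 + ... + 1/2^39
terms = [Fraction(1, 2 ** n) for n in range(1, 40)]

shuffled = terms[:]
random.seed(1)
random.shuffle(shuffled)

# order does not matter for positive terms
same_total = sum(terms) == sum(shuffled)
```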
|
4,320,437 | <p>I was thinking about linear transformations and I came up with this example:
<span class="math-container">$$f:\mathbb{R}^n \to \mathbb{C}^n\\
f(x)=ix$$</span>
In this example, the domain and co-domain are not defined over the same field, while all linear transformations I have encountered so far had domain and co-domain defined over the same field. I was wondering: is this a valid linear transformation or not? And if not, why do we impose such a constraint?</p>
<p>Also, if possible, please keep the explanation simple, because I'm pretty new to pure math. Thank you in advance.</p>
| Wuestenfux | 417,848 | <p>Hint: <span class="math-container">$\mathbb{R}^n$</span> is an <span class="math-container">$n$</span>-dim. <span class="math-container">$\mathbb{R}$</span>-vector space and <span class="math-container">$\mathbb{C}^n$</span> is a <span class="math-container">$2n$</span>-dim. <span class="math-container">$\mathbb{R}$</span>-vector space. Both spaces are defined over the same scalar field <span class="math-container">$\mathbb{R}$</span>, which is necessary for an <span class="math-container">$\mathbb{R}$</span>-linear mapping.
Now you need to check linearity.</p>
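<p>A tiny numeric check of that linearity (my own sketch, treating $\mathbb{C}^n$ as an $\mathbb{R}$-vector space and using Python's complex numbers): $f(x)=ix$ respects addition and multiplication by <em>real</em> scalars:</p>

```python
def f(v):
    return [1j * t for t in v]   # componentwise multiplication by i

x = [1.0, -2.0, 0.5]
y = [0.25, 3.0, -1.0]
c = 2.5                          # a *real* scalar

# f(x + y) == f(x) + f(y)
additive = f([p + q for p, q in zip(x, y)]) == [p + q for p, q in zip(f(x), f(y))]
# f(c x) == c f(x) for the real scalar c
homogeneous = f([c * p for p in x]) == [c * w for w in f(x)]
```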
|
1,330,078 | <p>Given a quadrilateral $MNPQ$ for which $MN=26$, $NP=30$, $PQ=17$, $QM=25$ and $MP=28$ how do I find the length of $NQ$?</p>
| Rory Daulton | 161,807 | <p>For the answer to be unique we must assume the quadrilateral is convex and/or simple, which you said in a comment that we can assume.</p>
<p>One way to solve the problem is to place your quadrilateral in a Cartesian coordinate system, perhaps placing $M$ at the origin and $P$ at $(28,0)$, and assume that point $N$ is in the upper half-plane. This fixes the quadrilateral, with point $Q$ in the lower half-plane (by convexity).</p>
<p><img src="https://i.stack.imgur.com/Rau9s.png" alt="Quadrilateral MNPQ on the Cartesian plane"></p>
<p>The law of cosines in triangle $\triangle MNP$ tells us</p>
<p>$$\cos(\angle NMP)=\frac{26^2+28^2-30^2}{2\cdot 26\cdot 28}=\frac 5{13}$$</p>
<p>and we get</p>
<p>$$\sin(\angle NMP)=\sqrt{1-[\cos(\angle NMP)]^2}=\frac{12}{13}$$</p>
<p>So point $N$ is</p>
<p>$$\left(26\cos(\angle NMP),\ 26\sin(\angle NMP) \right)=\left(26\cdot\frac5{13},\ 26\cdot\frac{12}{13}\right)=(10,24)$$</p>
<p>We can similarly find that point $Q$ is $(20,-15)$. Then use the standard distance formula to get</p>
<p>$$NQ=\sqrt{1621}\approx 40.2616$$</p>
<p>I'm sure there are shorter ways, but this way makes each step easy and checkable.</p>
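<p>The coordinate computation above is easy to reproduce in code (my own addition, mirroring the steps of the answer):</p>

```python
import math

# Triangle MNP: law of cosines at M, then coordinates of N in the upper half-plane
cos_nmp = (26**2 + 28**2 - 30**2) / (2 * 26 * 28)   # = 5/13
sin_nmp = math.sqrt(1 - cos_nmp**2)                 # = 12/13
N = (26 * cos_nmp, 26 * sin_nmp)                    # (10, 24)

# Triangle MQP: same idea, with Q placed in the lower half-plane
cos_qmp = (25**2 + 28**2 - 17**2) / (2 * 25 * 28)   # = 4/5
sin_qmp = math.sqrt(1 - cos_qmp**2)                 # = 3/5
Q = (25 * cos_qmp, -25 * sin_qmp)                   # (20, -15)

NQ = math.hypot(N[0] - Q[0], N[1] - Q[1])           # sqrt(1621) ~ 40.2616
```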
|
2,758,338 | <p>I need to find the Laurent series and the residue of the following complex function
$$f(z)=(z+1)^2e^{3/z^2}$$
at $z=0$.</p>
<p>Since $e^z=\sum z^n/n!$, then
$$e^{3/z^2}=\sum_{n=0}^\infty \frac{3^n/n!}{z^{2n}}$$
thus
$$f(z)=(z^2+2z+1)\sum_{n=0}^\infty \frac{3^n/n!}{z^{2n}}=\sum_{n=0}^\infty \frac{3^n/n!}{z^{2(n-1)}}+\sum_{n=0}^\infty \frac{2\cdot3^n/n!}{z^{2n-1}}+\sum_{n=0}^\infty \frac{3^n/n!}{z^{2n}}$$
which, with a shift of index and expansion of positive powers, can be expressed as
$$f(z)=z^2+3+\sum_{n=1}^\infty\left(\frac{3}{n+1}+1\right)\frac{3^n/n!}{z^{2n}}+\sum_{n=1}^\infty\frac{2\cdot3^n/n!}{z^{2n-1}}$$
so the residue is given by evaluating the numerator of the second series at $n=1$, so its value is $6$. I tried using WolframAlpha and Mathematica to check my answer, but both would not return a value. Would this be correct? Also, is there a way to put the two sums together (one gives the coefficients of the even powers, the other of the odd) so I can have the principal part of the Laurent series expressed with only one sum?</p>
| B. Mehta | 418,148 | <p>In $S_6$, the group generated by the cycle $(123456)$ is cyclic of order 6, and the group generated by $(12)(345)$ is cyclic of order 6. But, they are not the same. This example shows also that they may not even be conjugate subgroups. </p>
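<p>This is easy to verify by brute force (my own sketch, with permutations written 0-indexed as tuples): the two generators each produce a cyclic subgroup of order 6, yet the subgroups are different sets:</p>

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations of {0,...,5} given as tuples."""
    return tuple(p[i] for i in q)

def cyclic(g):
    """The cyclic subgroup generated by g."""
    e = tuple(range(len(g)))
    elems, x = [e], g
    while x != e:
        elems.append(x)
        x = compose(g, x)
    return elems

a = (1, 2, 3, 4, 5, 0)   # the 6-cycle (123456), 0-indexed
b = (1, 0, 3, 4, 2, 5)   # (12)(345): swap 0,1 and 3-cycle 2 -> 3 -> 4 -> 2

A, B = cyclic(a), cyclic(b)   # both have 6 elements, but A != B as sets
```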
|
42,854 | <p>How much money would you have if the amount of money you started with was 5 and it increased by 5 a day for 365 days.
So January 1st you receive 5, Jan 2nd you receive 10, the third 15.. etc.
I'm wondering what the formula is</p>
| Arturo Magidin | 742 | <p>$$5 + 2\times 5 + 3\times 5 + \cdots + 365\times 5 = 5\times\Bigl( 1+2+\cdots + 365\Bigr)$$
at which point it comes down to figuring out how much is the sum of $n$ consecutive integers, starting with $1$.</p>
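<p>In code (my own addition), the direct sum and the closed form $1+2+\cdots+n=\frac{n(n+1)}{2}$ give the same total:</p>

```python
n = 365
direct = 5 * sum(range(1, n + 1))   # 5 * (1 + 2 + ... + 365)
closed = 5 * n * (n + 1) // 2       # Gauss's formula for 1 + 2 + ... + n
# both equal 333975
```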
|
2,028,646 | <p>Let two random variables: $$x_1 \sim Bin(100, 0.5) \\ x_2 \sim Bin(100, 0.6)$$</p>
<p>Now, we define a third random variable, $x_{12}$, whose distribution is the aggregate of the distributions of $x_1$ and $x_2$, so it's <strong>not</strong> quite $x_1 + x_2$, even though empirically its variance seems like the sum of the two variances. Is that the case? How can I show it?</p>
<p>Thanks. </p>
| Henry | 6,460 | <p>It seems you may be describing a mixture of distributions rather than a sum</p>
<p>For the individual distributions you have </p>
<ul>
<li>$E[X_1] =50$, $\text{Var}[X_1]=25$, $E[X_1^2] = 2525$ </li>
<li>$E[X_2] = 60$, $\text{Var}[X_2]=24$, $E[X_2^2] = 3624$</li>
</ul>
<p>For the sum you would have</p>
<ul>
<li>$E[X_1+X_2] = 50+60=110$, </li>
<li>$\text{Var}[X_1+X_2]=25+24=49$, </li>
<li>$E[(X_1+X_2)^2] =110^2+49= 12149$ </li>
</ul>
<p>But for a mixture (half the observations from the first distribution and half from the second) you would have</p>
<ul>
<li>$E[X_m] = \frac{50+60}{2}=55$, </li>
<li>$E[X_m^2] = \frac{2525+3624}{2}=3074.5$</li>
<li>$\text{Var}[X_m]=3074.5-55^2=49.5$, </li>
</ul>
<p>which is not quite the same as the sum, though the variances are close (as you have noticed).</p>
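<p>These numbers are quick to reproduce (my own addition): for the mixture you average the raw moments $E[X]$ and $E[X^2]$, not the variances:</p>

```python
# Moments of Bin(100, 0.5) and Bin(100, 0.6): mean np, variance np(1-p)
m1, v1 = 100 * 0.5, 100 * 0.5 * 0.5   # 50, 25
m2, v2 = 100 * 0.6, 100 * 0.6 * 0.4   # 60, 24
e1, e2 = v1 + m1 ** 2, v2 + m2 ** 2   # E[X^2] = Var + mean^2: 2525, 3624

var_sum = v1 + v2                     # variance of the sum of independent copies: 49

mean_mix = (m1 + m2) / 2              # 55
e2_mix   = (e1 + e2) / 2              # 3074.5
var_mix  = e2_mix - mean_mix ** 2     # 49.5 -- close to, but not equal to, 49
```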
|
743,465 | <p>Suppose $l,t\in[0,1]$ and $l+t\leq1$. I want to prove $1+l+t>6lt$. When $t=0$ or $l=0$, it is trivial, so I started with $l,t\neq0$, but I couldn't get anywhere. I don't have time to write in detail what I have already tried, but I mostly tried to manipulate $(l-t)^2$. Anyway, if anyone could help me with the proof, that would be great. Many thanks!</p>
| rlartiga | 93,314 | <p>I think your problem is possible to state it as a optimization problem:</p>
<p>$$\min 1+l+t-6lt$$
$$s.t.$$
$$ l+t\leq 1$$
$$0\leq l\leq1$$
$$0\leq t\leq1$$</p>
<p>This problem has a global minimum at $(l,t)=\left(\frac{1}{2},\frac{1}{2}\right)$; hence $$1+l+t-6lt\geq1+\frac{1}{2}+\frac{1}{2}-6\cdot\frac{1}{2}\cdot\frac{1}{2}=\frac{1}{2}>0$$</p>
<p>Hope it helps</p>
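<p>A brute-force check of that minimum (my own addition): evaluating $1+l+t-6lt$ on a fine grid of the feasible triangle never drops below $\tfrac12$, attained at $(\tfrac12,\tfrac12)$:</p>

```python
# Grid search over the triangle l + t <= 1 with step 1/1000
best = min(1 + l / 1000 + t / 1000 - 6 * (l / 1000) * (t / 1000)
           for l in range(1001)
           for t in range(1001 - l))
# best == 0.5, attained at l = t = 0.5
```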
|
4,337,887 | <p>I am solving an exercise:</p>
<p>Let <span class="math-container">$T: V \rightarrow W$</span> be a linear transformation.
<span class="math-container">$V$</span> and <span class="math-container">$W$</span> are finite-dimensional inner product spaces. Prove <span class="math-container">$T^*T$</span> and <span class="math-container">$TT^*$</span> are <strong>semidefinite</strong>.</p>
<p>This is a solution that I don't understand: <br>
<span class="math-container">$T^*T$</span> and <span class="math-container">$TT^*$</span> are self-adjoint, then we have <span class="math-container">$T^*T(x) = \lambda x$</span>. Hence: <br>
<span class="math-container">$$\lambda = ⟨T^*T(x),x⟩ = ⟨T(x),T(x)⟩ ≥ 0.$$</span>
<span class="math-container">$\lambda$</span> is <span class="math-container">$≥ 0$</span>, hence <span class="math-container">$T^*T$</span> is semidefinite.</p>
<p>I don't understand why the eigenvalue is equal to <span class="math-container">$⟨T^*T(x),x⟩$</span>.
Thank you for any kind of help!</p>
| I am a person | 806,777 | <p>This "it" is called the <strong>number</strong> of <strong>factors</strong> or <strong>divisors</strong> of a number.</p>
<p>As said in the comments however, I don't think there is a singular word that describes this.</p>
|
1,344,892 | <p>Let $K$ be a field, and let $A$ be a $K$-algebra such that $\alpha \in A$. Then the natural homomorphism $$ \phi: K[x] \to K[\alpha], \hspace{3mm} (x \mapsto \alpha )$$ has a kernel which is a principal ideal $ \langle f \rangle$ and so $$ K[x] / \langle f \rangle \cong K[\alpha]$$</p>
<p>Notice that $K[\alpha]$ is a field. The book then states that, if $n=$ deg $f$ we have that $\{1, \alpha, \alpha^2, \dots, \alpha^{n-1} \}$ are a $K$-basis of $K[\alpha]$. </p>
<p>I am not sure how to convince myself that this set is indeed a basis of $K[\alpha]$, how would I go about showing this? </p>
| lhf | 589 | <p>Every $g \in K[x]$ can be written as $g=fq+r$, with $r=0$ or $\deg r<n$.
Since $fq\in \ker\phi$, we have $\phi(g)=\phi(r)$.</p>
<p>In other words, the image of $\phi$ is generated by the image of $\{1, x, x^2, \dots, x^{n-1} \}$, which of course is $\{1, \alpha, \alpha^2, \dots, \alpha^{n-1} \}$.</p>
<p>That $\{1, \alpha, \alpha^2, \dots, \alpha^{n-1} \}$ is linearly independent comes from the minimality of $n$, the degree of the generator of $\ker\phi$.</p>
|
3,800,833 | <p><strong>Question: Bob invests a certain sum of money in a scheme with a return of 22% p.a . After one year, he withdraws the entire amount (including the interest earned) and invests it in a new scheme with returns of 50% p.a (compounded annually) for the next two years. What is the compounded annual return on his initial investment over the 3 year period?</strong></p>
<p>The answer to this problem is fairly simple if you assume initial investment to be say \$100 then calculate interest for 1st year at 22% then 2nd and 3rd year at 50% which would come out as \$274.5</p>
<p>Then return is \$174.5 over 3 years, using Compound Interest formula, you get rate of interest at around 40% for three years.</p>
<p>My question is can you skip all this lengthy process and use weighted averages to come up with the final answer?
<span class="math-container">$$
Average\ rate\ of\ Interest = \frac{1 * 22 + 2 * 50}{1 + 2} \approx 40.67\%
$$</span></p>
<p>The answer with this is off by 0.67%, it doesn't matter much. However, is using weighted averages a correct approach or am I getting the correct answer using a wrong approach?</p>
<p>Note: The goal of asking this question is to decide on a faster approach to this problem and not necessarily getting the final answer. If you have an approach faster than weighted averages (assuming it is correct), please feel free to post it as an answer.</p>
| Z Ahmed | 671,540 | <p>Use the <span class="math-container">$\beta$</span> integral
<span class="math-container">$$\int_{0}^{\pi/2} \sin^{n-1} x \cos^{1-n} x \, d x=\frac{1}{2} \frac{\Gamma(n/2) \Gamma(1-n/2)}{\Gamma(1)}= \frac{1}{2}\frac{\pi}{\sin (n \pi/2)},$$</span> valid if <span class="math-container">$1-n/2>0 \implies 0<n<2$</span>.</p>
<p>Use <span class="math-container">$$\Gamma(z) \Gamma(1-z)=\frac{\pi}{\sin (\pi z)}$$</span>
See:
<a href="https://en.wikipedia.org/wiki/Beta_function#:%7E:text=In%20mathematics%2C%20the%20beta%20function,0%2C%20Re%20y%20%3E%200" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Beta_function#:~:text=In%20mathematics%2C%20the%20beta%20function,0%2C%20Re%20y%20%3E%200</a>.</p>
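<p>The reflection-formula step can be sanity-checked numerically (my own addition, using only the standard library's <code>math.gamma</code>):</p>

```python
import math

# Check 0.5 * Gamma(n/2) * Gamma(1 - n/2) == pi / (2 sin(n pi / 2)) on 0 < n < 2
checks = []
for n in (0.3, 0.5, 1.0, 1.2, 1.9):
    lhs = 0.5 * math.gamma(n / 2) * math.gamma(1 - n / 2)
    rhs = math.pi / (2 * math.sin(math.pi * n / 2))
    checks.append(abs(lhs - rhs) < 1e-9 * abs(rhs))
```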
|
241,612 | <p>I have the double inequality
<span class="math-container">$$G(n) = \left( \frac{1}{2} \right)^{\omega(n)} \tau(n)^2 <
F(n) \leq
\left( \frac{3}{4} \right)^{\omega(n)} \tau(n)^2 = H(n),$$</span>
where <span class="math-container">$F(n) = \sum_{d \mid n} \tau(d)$</span>. (What <span class="math-container">$\omega$</span> and <span class="math-container">$\tau$</span> are can be inferred from the code below.)</p>
<p>The question I have (which might be not appropriate) is: what might be a good way to plot this double inequality? Any suggestions would be appreciated.</p>
<p>My best idea at present is to plot the two ratios <span class="math-container">$F(n)/G(n)$</span> and <span class="math-container">$H(n)/F(n)$</span> so that it can be seen from the plot that they both stay above <span class="math-container">$1$</span>. The code I am using:</p>
<pre><code>F[n_] := DivisorSum[n, DivisorSigma[0, #] &]
DiscretePlot[{(3/4)^(PrimeNu[n])*(DivisorSigma[0, n])^2/F[n],
F[n]/((1/2)^(PrimeNu[n])*(DivisorSigma[0, n])^2)}, {n, 2, 400},
PlotLabels -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/qoSMN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qoSMN.png" alt="enter image description here" /></a></p>
| thorimur | 63,584 | <p>Building off of what you have, I think it's nice to put <span class="math-container">$F(n)$</span> in the denominator each time, and shift the origin of the axes to <code>{0,1}</code>. Then the one bigger than <span class="math-container">$F(n)$</span> is above the axis, and the one smaller than <span class="math-container">$F(n)$</span> is below it:</p>
<pre><code>DiscretePlot[{(3/4)^(PrimeNu[n])*(DivisorSigma[0, n])^2/F[n],
((1/2)^(PrimeNu[n])*(DivisorSigma[0, n])^2)/F[n]}, {n, 2,400},
PlotLabels -> Automatic, AxesOrigin -> {0, 1}]
</code></pre>
<p><a href="https://i.stack.imgur.com/1ihGu.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1ihGu.png" alt="A plot generated by the above code." /></a></p>
<p>Another similar way is to use <code>-</code> instead of <code>/</code>, and keep the axis where it is originally. In this case you'd also want <code>PlotRange -> Full</code>; the graph of this is a bit wilder, but not as lopsided from top to bottom. You'd also probably want to get rid of the x-axis markers in this case, as they overlap with the curves:</p>
<pre><code>DiscretePlot[{(3/4)^(PrimeNu[n])*(DivisorSigma[0, n])^2 - F[n],
((1/2)^(PrimeNu[n])*(DivisorSigma[0, n])^2) - F[n]}, {n, 2, 400},
PlotLabels -> Automatic, PlotRange -> Full, Ticks -> {False, Automatic}]
</code></pre>
<p><a href="https://i.stack.imgur.com/uDLX3.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uDLX3.png" alt="A plot generated by the above code." /></a></p>
|
241,612 | <p>I have the double inequality
<span class="math-container">$$G(n) = \left( \frac{1}{2} \right)^{\omega(n)} \tau(n)^2 <
F(n) \leq
\left( \frac{3}{4} \right)^{\omega(n)} \tau(n)^2 = H(n),$$</span>
where <span class="math-container">$F(n) = \sum_{d \mid n} \tau(d)$</span>. (What <span class="math-container">$\omega$</span> and <span class="math-container">$\tau$</span> are can be inferred from the code below.)</p>
<p>The question I have (which might be not appropriate) is: what might be a good way to plot this double inequality? Any suggestions would be appreciated.</p>
<p>My best idea at present is to plot the two ratios <span class="math-container">$F(n)/G(n)$</span> and <span class="math-container">$H(n)/F(n)$</span> so that it can be seen from the plot that they both stay above <span class="math-container">$1$</span>. The code I am using:</p>
<pre><code>F[n_] := DivisorSum[n, DivisorSigma[0, #] &]
DiscretePlot[{(3/4)^(PrimeNu[n])*(DivisorSigma[0, n])^2/F[n],
F[n]/((1/2)^(PrimeNu[n])*(DivisorSigma[0, n])^2)}, {n, 2, 400},
PlotLabels -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/qoSMN.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qoSMN.png" alt="enter image description here" /></a></p>
| kglr | 125 | <p>A variation on thorimur's second method: We can use custom ticks for the vertical axis and put a gap between the two curves to have the horizontal axis visible:</p>
<pre><code>ClearAll[g, h, F]
g[n_] := (1/2)^PrimeNu[n] DivisorSigma[0, n]^2
h[n_] := (3/4)^PrimeNu[n]*DivisorSigma[0, n]^2
F[n_] := DivisorSum[n, DivisorSigma[0, #] &]
gap = 20;
vticks = Join[Charting`FindTicks[{-100 - gap/2, -gap/2}, {100, 0}][-100-gap/2, -gap/2],
Charting`FindTicks[{gap/2, 100 + gap/2}, {0, 100}][gap, 100 + gap/2]];
DiscretePlot[{gap/2 + h[n] - F[n], g[n] - F[n] - gap/2}, {n, 2, 400},
Filling -> {1 -> gap/2, 2 -> -gap/2},
Ticks -> {Automatic, vticks},
PlotLegends -> {HoldForm[F[n] - g[n]], HoldForm[h[n] - F[n]]},
ImageSize -> Large, PlotRange -> {-100, 100},
GridLines -> {None, {-gap/2, gap/2}}]
</code></pre>
<p><a href="https://i.stack.imgur.com/KXz24.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/KXz24.png" alt="enter image description here" /></a></p>
|
98,317 | <p>This comes from Artin Second Edition, page 219. Artin defined $G = \langle x,y\mid x^3, y^3, yxyxy\rangle$, and uses the Todd-Coxeter Algorithm to show that the subgroup $H = \langle y\rangle$ has index 1, and therefore $G = H$ is the cyclic group of order 3.</p>
<p>That being the case, $x$ cannot be either $y$ or $y^2$, for then the third relation would not be satisfied. So the relation $x=1$ must follow from the given relations. Is there another way of seeing this besides from the Todd-Coxeter algorithm?</p>
| Mikasa | 8,581 | <p>Applying the way Artin used, you see from the following TC algorithm that $[G:H]=1$.</p>
<p><img src="https://i.stack.imgur.com/otRbr.jpg" alt="enter image description here"></p>
|
98,317 | <p>This comes from Artin Second Edition, page 219. Artin defined $G = \langle x,y\mid x^3, y^3, yxyxy\rangle$, and uses the Todd-Coxeter Algorithm to show that the subgroup $H = \langle y\rangle$ has index 1, and therefore $G = H$ is the cyclic group of order 3.</p>
<p>That being the case, $x$ cannot be either $y$ or $y^2$, for then the third relation would not be satisfied. So the relation $x=1$ must follow from the given relations. Is there another way of seeing this besides from the Todd-Coxeter algorithm?</p>
| i. m. soloveichik | 32,940 | <p>$e=y^{-1}(yxyxy)y=xyxy^{-1}$ so $y=xyx$ and $xy=yx^{-1}$; therefore $e=y^3=(xyx)^3=xyx^{-1}yx^{-1}yx=xxyxyyx=x^{-1}yxy^{-1}x$. Cancelling on the left and right yields $x=e$ and the result follows.</p>
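<p>One can also gather machine evidence for this collapse (my own addition): since $G\cong C_3$ with $x=1$, <em>every</em> pair of permutations satisfying the three relations must have trivial $x$. A brute-force search over $S_4$ confirms it:</p>

```python
from itertools import permutations

def mul(p, q):
    """(p*q)(i) = p[q[i]] for permutations of {0,1,2,3} as tuples."""
    return tuple(p[i] for i in q)

e = (0, 1, 2, 3)
# Elements with p^3 = e: the identity plus the eight 3-cycles
cubes_to_e = [p for p in permutations(range(4)) if mul(p, mul(p, p)) == e]

# Pairs also satisfying yxyxy = e
solutions = [(x, y) for x in cubes_to_e for y in cubes_to_e
             if mul(y, mul(x, mul(y, mul(x, y)))) == e]

# x is forced to be the identity in every solution
all_x_trivial = all(x == e for x, y in solutions)
```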
|
586,724 | <p>How do I integrate $$\frac{1}{(1-e^{2x})^{1/2}}\,?$$
I have tried $u= e^x$, but I think that is wrong.
Can anyone help me?</p>
| Stefan Smith | 55,689 | <p>Hint: $u = e^x$ should work. You get</p>
<p>$$\int \frac{e^x}{e^x\sqrt{1-e^{2x}}}\,dx=\int\frac{1}{u\sqrt{1-u^2}} \,du.$$</p>
<p>Then try trig substitution with $u = \sin\theta$.</p>
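<p>The substitution can be sanity-checked numerically (my own addition): integrating both sides with a simple midpoint rule over matching ranges, choosing $x\in[-2,-1]$ so that $0<e^{2x}<1$, gives the same value:</p>

```python
import math

def midpoint(f, a, b, n=100_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

a, b = -2.0, -1.0   # keep e^{2x} < 1 so the integrand is real

I_x = midpoint(lambda x: 1 / math.sqrt(1 - math.exp(2 * x)), a, b)
I_u = midpoint(lambda u: 1 / (u * math.sqrt(1 - u * u)),
               math.exp(a), math.exp(b))   # after u = e^x
```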
|
1,115,793 | <p>Let's consider a number of linear operators, defined on a finite-dimensional complex vector space, which pairwise commute with each other (there may be infinitely many of them). How can one prove that they have a common eigenvector?</p>
<p>The finite case can be done by induction:
1) $n=2$, $AB=BA$: let $x$ be an eigenvector of $A$ (it exists because we are working over $\mathbb{C}$) with eigenvalue $\alpha$. Then $A(x)=\alpha x$ and $A(B(x))=B(A(x))=B(\alpha x)=\alpha B(x)$, so $B(x)$ lies in the eigenspace of $A$ associated with $\alpha$.
Analogously, one proceeds for $n>2$.</p>
<p>But what can I do when working with an infinite number of operators? (Induction doesn't work here.)</p>
<p>Any help would be appreciated. </p>
| copper.hat | 27,978 | <p>Here is an inductive proof:</p>
<p>Let ${\cal C}$ be the commuting family of operators.</p>
<p>Let me call an operator $A$ on a subspace $S$ a multiplier operator on $S$ <strong>iff</strong> there exists some $\lambda$ such that $Ax=\lambda x$ for all $x \in S$.</p>
<p>Let me call a subspace $S$ invariant <strong>iff</strong> $AS \subset S$ for all $A \in {\cal C}$.</p>
<p>Note that if a one dimensional space $S$ is invariant, then it is an eigenspace for all $A \in {\cal C}$.</p>
<p>Pick $A \in {\cal C}$, and suppose $\lambda $ is an eigenvalue of $A$, then
let $S_1 = \ker (A-\lambda I)$. It is straightforward to show that $S_1$ is invariant.</p>
<p>If every $B \in {\cal C}$ is a multiplier operator on $S_1$ then we are finished.</p>
<p>Otherwise pick $B \in {\cal C}$ that is not a multiplier operator on $S_1$ and let $\mu$ be an eigenvalue of $B$ restricted to $S_1$. Then
$S_2 = \ker (B-\mu I) \cap S_1$ is a nonzero invariant subspace with $\dim S_2 < \dim S_1$.</p>
<p>Now repeat the process, noting that it must end because the space is finite dimensional.</p>
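<p>A toy numeric illustration of the first step (my addition; I take $B=A\cdot A$, which commutes with $A$ automatically): an eigenvector of $A$ stays inside its $A$-eigenspace after applying $B$.</p>

```python
# 2x2 matrices as nested lists; no external libraries needed.
def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 1], [1, 2]]
B = matmul(A, A)                 # any polynomial in A commutes with A
assert matmul(A, B) == matmul(B, A)

v = [1, 1]                       # eigenvector of A with eigenvalue 3
assert matvec(A, v) == [3, 3]
Bv = matvec(B, v)                # B maps ker(A - 3I) into itself:
assert Bv == [9, 9]              # B v = 9 v, still in the 3-eigenspace of A
```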
|
176,613 | <p>Working on Harmonic numbers, I found this very interesting recurrence relation :
$$
H_n = \frac{n+1}{n-1} \sum_{k=1}^{n-1}\left(\frac{2}{k+1}-\frac{1}{1+n-k}\right)H_k
,\quad \forall\ n\in\mathbb{N},n>1$$
My proof of this is quite long and complicated, so I was wondering if someone knows an elegant or concise one. Any idea would be appreciated.</p>
<p>Alternatively, if someone knows a reference that talks about this kind of relation, it would be of great interest for me.</p>
<p>Thanks.</p>
| J. M. ain't a mathematician | 498 | <p>Here's a generating function route: as I already mentioned in the comments,</p>
<p>$$\begin{align*}
H_n&=\frac{n+1}{n-1}\sum_{k=1}^{n-1}\left(\frac{2}{k+1}-\frac{1}{n-k+1}\right)H_k\\
H_n&=\frac{n+1}{n-1}\left(2\sum_{k=1}^{n-1}\frac{H_k}{k+1}-\sum_{k=1}^{n-1}\frac{H_k}{n-k+1}\right)\\
H_n&=\frac{n+1}{n-1}\left(2\sum_{k=1}^{n}\frac{H_k}{k+1}-\sum_{k=1}^{n}\frac{H_k}{n-k+1}+H_n-\frac{2H_n}{n+1}\right)\\
H_n&=\frac{n+1}{n-1}\left(2\sum_{k=1}^{n}\frac{H_k}{k+1}-\sum_{k=1}^{n}\frac{H_k}{n-k+1}\right)+\frac{n+1}{n-1}H_n-\frac2{n-1}H_n\\
H_n&=\frac{n+1}{n-1}\left(2\sum_{k=1}^{n}\frac{H_k}{k+1}-\sum_{k=1}^{n}\frac{H_k}{n-k+1}\right)+H_n\\
\sum_{k=1}^{n}\frac{H_k}{n-k+1}&=2\sum_{k=1}^{n}\frac{H_k}{k+1}
\end{align*}$$</p>
<p>We note that the sum $\sum\limits_{k=1}^{n}\frac{H_k}{n-k+1}$ is in the form of a convolution; thus, its generating function is</p>
<p>$$\left(\sum_{j=1}^\infty H_j x^j\right)\left(\sum_{j=1}^\infty \frac{x^j}{j}\right)=\frac{(\log(1-x))^2}{1-x}$$</p>
<p>The remaining task is to prove that the generating function given above is also the generating function of $2\sum\limits_{k=1}^{n}\frac{H_k}{k+1}$; to that effect, there is the identity</p>
<p>$$2\sum_{k=1}^{n}\frac{H_k}{k+1}=(H_{n+1})^2-H_{n+1}^{(2)}$$</p>
<p>where $H_n^{(k)}=\sum\limits_{j=1}^n \frac1{j^k}$ is a generalized harmonic number. From <a href="http://oeis.org/A103930" rel="nofollow">here</a> and <a href="http://mathworld.wolfram.com/HarmonicNumber.html" rel="nofollow">here</a> (see formula 36), we have the generating functions</p>
<p>$$\begin{align*}
\sum_{j=1}^\infty H_{j+1}^{(2)}x^{j+1}&=\frac{\mathrm{Li}_2(x)}{1-x}-x\\
\sum_{j=1}^\infty (H_{j+1})^2 x^{j+1}&=\frac{(\log(1-x))^2+\mathrm{Li}_2(x)}{1-x}-x
\end{align*}$$</p>
<p>where $\mathrm{Li}_2(x)=-\int_0^x \frac{\log(1-u)}{u}\mathrm du$ is a dilogarithm.</p>
<p>Thus,</p>
<p>$$\sum_{j=1}^\infty (H_{j+1})^2 x^{j+1}-\sum_{j=1}^\infty H_{j+1}^{(2)}x^{j+1}=\frac{(\log(1-x))^2}{1-x}$$</p>
<p>and we're golden.</p>
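<p>An exact rational spot-check of the identity $\sum_{k=1}^{n}\frac{H_k}{n-k+1}=2\sum_{k=1}^{n}\frac{H_k}{k+1}$ (added by me; the generating-function argument above is the actual proof):</p>

```python
from fractions import Fraction

def H(n):
    # n-th harmonic number as an exact rational
    return sum(Fraction(1, j) for j in range(1, n + 1))

# Verify the identity exactly for small n.
for n in range(1, 30):
    lhs = sum(H(k) / (n - k + 1) for k in range(1, n + 1))
    rhs = 2 * sum(H(k) / (k + 1) for k in range(1, n + 1))
    assert lhs == rhs
```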
|
176,613 | <p>Working on Harmonic numbers, I found this very interesting recurrence relation :
$$
H_n = \frac{n+1}{n-1} \sum_{k=1}^{n-1}\left(\frac{2}{k+1}-\frac{1}{1+n-k}\right)H_k
,\quad \forall\ n\in\mathbb{N},n>1$$
My proof of this is quite long and complicated, so I was wondering if someone knows an elegant or concise one. Any idea would be appreciated.</p>
<p>Alternatively, if someone knows a reference that talks about this kind of relation, it would be of great interest for me.</p>
<p>Thanks.</p>
| M. M. | 36,778 | <p>Thanks for your help. Here is another proof that I found today :</p>
<p>From the well known $\zeta(3)=\frac{1}{2}\sum_{k=1}^{\infty}\frac{H_k}{k^2}$, we find
$$
\zeta(3)=\sum_{k=1}^{\infty}\frac{H_k}{(k+1)^2} \quad\quad(1)
$$
where $\zeta(z)$ is the Riemann zeta function. But also,
$$
\zeta(3)=\frac{1}{2}\int_{0}^{\infty}\frac{t^2}{e^t-1}dt=4\int_{0}^{\pi/2}\tan{x}(\ln{\sin{x}})^2dx=4\int_{0}^{\pi/2}\tan{x}\left(\sum_{k=1}^{\infty}\frac{\cos^{2k}{x}}{2k}\right)^2dx
$$
where I used the substitution $e^{-t}=\sin^2{x}$. By rearranging and using the formula for raising a power series to a power (e.g. 0.314, p. 17 in Gradshteyn and Ryzhik's <em>Table of Integrals, Series, and Products</em>),
$$
\zeta(3)=\sum_{k=0}^{\infty}c_k \int_{0}^{\pi/2}\sin{x}\ \cos^{2k+3}{x}dx=\sum_{k=0}^{\infty}c_k \frac{1}{2}B(1,k+2)=\sum_{k=1}^{\infty}\frac{c_{k-1}}{2(k+1)}
$$
$$
\text{where,}\quad c_0=1,\quad c_k=\frac{1}{k}\sum_{i=1}^{k}\frac{3i-k}{i+1}c_{k-i}
$$
Equating with (1) and rearranging, we get the desired relation.</p>
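<p>A floating-point sanity check of the recurrence (my addition): the partial sums of $\sum_{k\ge1} \frac{c_{k-1}}{2(k+1)}$ do creep up toward $\zeta(3)\approx 1.2020569$ from below, although the convergence is slow.</p>

```python
# c_0 = 1,  c_k = (1/k) * sum_{i=1}^{k} (3i - k)/(i + 1) * c_{k-i}
N = 2500
c = [1.0]
for k in range(1, N):
    c.append(sum((3 * i - k) / (i + 1) * c[k - i] for i in range(1, k + 1)) / k)

partial = sum(c[k - 1] / (2 * (k + 1)) for k in range(1, N + 1))
zeta3 = 1.2020569031595943
assert 0.0 < zeta3 - partial < 0.01   # below zeta(3), and already close
```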
|
36,272 | <p>Is there a characterisation for which $x\in\mathbb{R}$ the value $\arctan(x)$ is a rational multiple of $\pi$? </p>
<p>Or reformulated: What is the "structure" of the subset $A\subseteq\mathbb{R}$ which fulfils
$$ \arctan(x) \in \pi\mathbb{Q} \Leftrightarrow x\in A$$
for all $x\in\mathbb{R}$?</p>
| Joseph O'Rourke | 6,094 | <p>A partial answer was provided in response to my MSE question,
"<a href="https://math.stackexchange.com/questions/79861/">ArcTan(2) a rational multiple of $\pi$</a>?"</p>
<p>There Thomas Andrews showed that $\arctan(x)$ is not a rational multiple of $\pi$ for any
$x$ rational, except for $-1,0,1$.
More specifically:</p>
<blockquote>
<p>$\arctan(x)$ is a rational multiple of $\pi$ if and only if the complex number $1+xi$ has the property that $(1+xi)^n$ is a real number for some positive integer $n$.
This is not possible if $x$ is a rational,
$|x|\neq 1$, because $(q+pi)^n$ cannot be real for any $n$ if $(q,p)=1$ and $|qp|> 1$. So $\arctan(\frac{p}q)$ cannot be a rational multiple of $\pi$.</p>
</blockquote>
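<p>An exact integer check of the quoted criterion for $x=2$ (my addition): no power $(1+2i)^n$ with $1\le n\le 200$ is real, consistent with $\arctan(2)$ not being a rational multiple of $\pi$.</p>

```python
a, b = 1, 2                      # a + b*i = (1 + 2i)^1, exact integers
for n in range(1, 201):
    assert b != 0                # the imaginary part never vanishes
    a, b = a - 2 * b, 2 * a + b  # multiply by 1 + 2i
```

<p>(Python's arbitrary-precision integers keep this exact even though $|(1+2i)^{200}|=5^{100}$.)</p>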
|
4,494,199 | <p>This is a problem from a past qualifying exam in complex analysis. I'm working through these to study for my own upcoming qual. For this question, I think my proof is fairly straightforward, but I'd like to know whether or not it is correct and complete. I'm also interested in other ways of answer the question. Thanks!</p>
<p><strong>Problem:</strong></p>
<p>Find how many solutions (counting multiplicity) the equation <span class="math-container">$\sin z = ez^4$</span> has on the unit disk <span class="math-container">$|z|<1$</span>. Justify your answer.</p>
<p><strong>My Solution:</strong></p>
<p>Let <span class="math-container">$g(z) = \sin(z)$</span> and <span class="math-container">$f(z) = -ez^4$</span>, and consider these functions on the unit circle <span class="math-container">$|z|=1$</span>. We will show that <span class="math-container">$|g(z)|<|f(z)|$</span> there; by Rouché's Theorem, <span class="math-container">$f(z)+g(z)$</span> then has the same number of zeros inside the unit circle as <span class="math-container">$f(z)$</span>, counting multiplicities, so the given equation has four solutions.</p>
<p>First, we need to show that <span class="math-container">$|\sin(z)|<e$</span> on the circle. We have
<span class="math-container">$$
\begin{align*}
|\sin(z)| &= \left\vert \frac{e^{iz} - e^{-iz}}{2i}\right\vert \\
&= \frac{1}{2} |e^{i(x+iy)} - e^{-i(x+iy)}| \\
&\leq \frac{1}{2}(|e^{i(x+iy)}| + |- e^{-i(x+iy)}|)\\
&= \frac{1}{2}(e^{-y} + e^y)
\end{align*}
$$</span></p>
<p>On <span class="math-container">$|z|=1$</span>, we have <span class="math-container">$|y|\leq 1$</span>, thus
<span class="math-container">$$
|\sin(z)| \leq \frac{1}{2}(e^{-y} + e^y) \leq \frac{1}{2}(e^{1}+e^{-1})< \frac{1}{2}(2e) = e.
$$</span></p>
<p>Now, we have that when <span class="math-container">$|z|=1$</span>
<span class="math-container">$$
|\sin(z)|<e = e|z|^4 = |-ez^4|,
$$</span></p>
<p>thus <span class="math-container">$|g(z)|<|f(z)|$</span>, and by Rouche's Theorem, <span class="math-container">$f(z)+g(z)$</span> has the same number of zeros inside the unit circle as <span class="math-container">$f(z)$</span>. Since <span class="math-container">$f(z) = -ez^4$</span> is a polynomial, we know by the fundamental theorem of algebra that it has exactly four roots counting multiplicity. Thus,
<span class="math-container">$$
f(z) + g(z) = \sin(z) - ez^4
$$</span></p>
<p>has exactly four roots and <span class="math-container">$\sin(z) = ez^4$</span> has exactly four solutions. <span class="math-container">$\blacksquare$</span></p>
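<p>As an added numerical cross-check (not part of the proof), the argument principle counts the zeros of $\sin z - ez^4$ inside $|z|=1$ directly; the count comes out to $4$.</p>

```python
import cmath
import math

def f(z):
    return cmath.sin(z) - math.e * z ** 4

def fp(z):
    return cmath.cos(z) - 4 * math.e * z ** 3

# (1 / 2*pi*i) * contour integral of f'/f over |z| = 1, midpoint rule.
# f has no zeros on the circle (|sin z| <= cosh(1) < e there), so the
# integrand is smooth and the rule converges very fast.
N = 4096
total = 0j
for k in range(N):
    t = 2 * math.pi * (k + 0.5) / N
    z = cmath.exp(1j * t)
    total += fp(z) / f(z) * 1j * z * (2 * math.pi / N)

zeros = total / (2j * math.pi)
assert abs(zeros - 4) < 1e-4     # four zeros, counted with multiplicity
```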
| Adam Rubinson | 29,156 | <p>An easier proof for me is the following:</p>
<p>Firstly, <span class="math-container">$\frac a{1+b}+\frac b{1+a}\geq \frac{b}{1+a}>0,\ $</span> as numerator and denominator of <span class="math-container">$\frac{b}{1+a}$</span> are both positive.</p>
<p>Secondly,</p>
<p><span class="math-container">$$\frac a{1+b}+\frac b{1+a} = \frac{a(1+a)+b(1+b)}{(1+b)(1+a)} = \frac{a+b+a^2+b^2}{a+b+ab+1}. $$</span></p>
<p>So if we show that <span class="math-container">$ab+1 \geq a^2 + b^2,$</span> then we are done. To this end:</p>
<p><span class="math-container">$$\text{Since } b>a,\ \text{ and } a\geq 0,\ ab \geq a^2.\quad \text{Also, } 1\geq b, \text{ and so } 1 \geq b^2.$$</span></p>
<p><span class="math-container">$$\text{Therefore, } ab+1 \geq a^2 + b^2. $$</span></p>
<p>The strict inequality holds on the left side only:</p>
<p><span class="math-container">$$0 < \frac a{1+b}+\frac b{1+a} \leq 1.$$</span></p>
<p>Equality on the RHS occurs when <span class="math-container">$b=1$</span> and <span class="math-container">$a=0.$</span></p>
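<p>A randomized spot-check of the inequality under the assumed constraints $0\le a<b\le 1$ (added by me; the algebra above is the proof):</p>

```python
import random

random.seed(1)
for _ in range(10000):
    b = random.uniform(0.0, 1.0)
    if b == 0.0:
        continue                          # need b > 0 for strict positivity
    a = random.uniform(0.0, b)
    s = a / (1 + b) + b / (1 + a)
    assert 0 < s <= 1 + 1e-12             # tiny slack for float rounding
```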
|
514,352 | <p>Can anybody give me an example of a multiplicative function $f$ such that
$$\prod_p \sum_{k=0}^\infty f(p^k)$$
converges absolutely and such that
$$\sum_{n=1}^\infty f(n)$$
diverges?</p>
| PITTALUGA | 94,471 | <p>Whenever $\sum_n f(n)$ converges, $\prod_p\sum_k f(p^k)$ converges as well, and vice versa.
I emphasize that this is true <em>only inside the region of convergence.</em>
This follows from the fact that $f$ is multiplicative, together with the fundamental theorem of arithmetic.
So the answer is: no.</p>
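<p>A small illustration of that equivalence inside the region of convergence (my addition, with the particular choice $f(n)=1/n^2$): the partial Euler product over primes approaches $\sum_n f(n)=\zeta(2)=\pi^2/6$.</p>

```python
import math

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

prod = 1.0
for p in (n for n in range(2, 500) if is_prime(n)):
    prod *= 1.0 / (1.0 - p ** -2.0)   # sum_k f(p^k) is a geometric series

assert abs(prod - math.pi ** 2 / 6) < 1e-2
```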
|
889,628 | <p>The question and answer are shown below, but I don't fully understand the answer to part (a). Could someone please explain why the integral for the marginal density function of $y_1$ runs from $y_1$ to $1$, and not from $0$ to $1$? The same question applies to $y_2$. Thank you.
<img src="https://i.stack.imgur.com/ECQQ7.png" alt="Question"></p>
<p><img src="https://i.stack.imgur.com/Kg8N1.png" alt="enter image description here"></p>
| Sara | 220,380 | <p>To give someone a feel for what infinity is: take a sheet of $A4$ paper and divide it into two halves. Now take one of the halves and divide it again. Repeat this step indefinitely, and ask the question: 'Will this process ever finish?'</p>
|
874,607 | <blockquote>
<p>There were 10 questions on a test. A student gets 5 points for every correct answer and 3 points for every partially correct answer. If the student got 19 points, how many correct and partial answers did they have?</p>
</blockquote>
<p>To solve the problem I express the total number of points as the sum of multiples of 5 and 3.</p>
<blockquote>
<p>$5x+3y=19$</p>
</blockquote>
<p>After that, the only thing I can do is find the solution by brute forcing it. Is there a more mathematical way of finding it?</p>
| Rebecca J. Stones | 91,818 | <p>We see that $19 \equiv 4 \pmod 5$, so we must also have $3y \equiv 4 \pmod 5$. We check that the multiplicative inverse of $3$ modulo $5$ is $2$ (since $2 \times 3=6 \equiv 1 \pmod 5$), so we have $2 \times 3y \equiv 2 \times 4 \pmod 5$, i.e., $y \equiv 3 \pmod 5$.</p>
<p>Since $0 \leq y \leq \lfloor 19/3 \rfloor=6$ (along with satisfying $y \equiv 3 \pmod 5$) we must have $y=3$ and hence $5x+3y=19$ implies $x=2$.</p>
<p>This gives the unique non-negative integer solution $(x,y)=(2,3)$.</p>
<p>(Note that the uniqueness of the solution is mainly because "19 is small". E.g. if the equation were $5x+3y=34$ there would be more than one solution.)</p>
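<p>A brute-force cross-check of the modular argument (added), including the remark about $5x+3y=34$:</p>

```python
# All non-negative integer solutions of 5x + 3y = 19: exactly one.
solutions = [(x, y) for x in range(19 // 5 + 1)
             for y in range(19 // 3 + 1) if 5 * x + 3 * y == 19]
assert solutions == [(2, 3)]

# For 5x + 3y = 34 the solution is no longer unique.
solutions34 = [(x, y) for x in range(34 // 5 + 1)
               for y in range(34 // 3 + 1) if 5 * x + 3 * y == 34]
assert solutions34 == [(2, 8), (5, 3)]
```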
|
183,768 | <p>Prove convergence/divergence of the series:
$$\sum_{n=1}^{\infty}\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)}$$</p>
<p>Here is what I have at the moment:</p>
<p><strong>Method I</strong></p>
<p>My first way uses a result related to the <strong><a href="http://en.wikipedia.org/wiki/Wallis_product" rel="nofollow noreferrer">Wallis product</a></strong>, which we'll denote by $W_{n}$. Also,<br>
we may denote $\dfrac {1\cdot 3\cdots (2n-1)} {2\cdot 4\cdots (2n)}$ by $P_{n}$. Having noted these and taking a large
value of $n$<br>
we get:
$$(P_{n})^2 =\frac{1}{W_{n} \cdot (2n+1)}\approx\frac{2}{\pi}\cdot \frac{1}{2n+1}$$
$$P_{n}\approx \sqrt {\frac{2}{\pi}} \cdot \frac{1}{\sqrt{2n+1}}$$ </p>
<p>Further we have that:
$$\lim_{n\to\infty}\sqrt {\frac{2}{\pi}} \cdot \frac{n}{\sqrt{2n+1}} \le \sum_{n=1}^{\infty} P_{n}$$
that obviously shows us that the series diverges.</p>
<p><strong>Method II</strong></p>
<p>The second way is to resort to the powerful <strong><a href="http://mathworld.wolfram.com/KummersTest.html" rel="nofollow noreferrer">Kummer's Test</a></strong> and firstly proceed with the ratio test:
$$\lim_{n\to\infty} \frac{P_{n+1}}{P_{n}}=\frac{2n+1}{2n+2}=1$$
and according to the result, the ratio test is inconclusive.</p>
<p>Now, we apply Kummer's test and get:
$$\lim_{n\to\infty} \frac{P_{n}}{P_{n+1}}n-(n+1)=\lim_{n\to\infty} -\frac{n+1}{2n+1}=-\frac{1}{2} \le 0$$
Since
$$\sum_{n=1}^{\infty} \frac{1}{n} \longrightarrow \infty$$
our series diverges and we're done.</p>
<p>On the site I've also found <a href="https://math.stackexchange.com/questions/118383/convergence-of-sum-limits-n-1-infty-left-dfrac-1-cdot3-cdots-2n-1?rq=1">a related question</a> with answers that can be applied to my question.
Since I already have some answers, you may regard this as a recreational question, and if you have a nice proof to share I'd be glad to receive it. I like this question very much and want to build a collection of nice proofs for it. Thanks. </p>
| Morgan Sherman | 21,512 | <p>Since
$$ \frac{1 \cdot 3 \cdot 5 \cdot \ldots \cdot (2n-1)}{2 \cdot 4 \cdot \ldots \cdot (2n)} \ge \frac{1 \cdot 2 \cdot 4 \cdot \ldots \cdot (2n-2)}{2 \cdot 4 \cdot \ldots \cdot (2n)} = \frac1{2n} $$
the series diverges by comparison to the Harmonic series.</p>
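<p>An exact check of the comparison (added): with $P_n$ denoting the $n$-th term of the series, $P_n\ge \frac{1}{2n}$ holds for every $n$ tested.</p>

```python
from fractions import Fraction

P = Fraction(1)                      # running product P_n = (2n-1)!! / (2n)!!
for n in range(1, 60):
    P *= Fraction(2 * n - 1, 2 * n)
    assert P >= Fraction(1, 2 * n)   # the comparison used above
```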
|
2,259,243 | <p>How do I solve T(n) = T(n-2) + n using iterative substitution?</p>
<pre><code>Base case:
T(0) = 1
T(1) = 1
Solve:
T(n) = T(n-2) + n
</code></pre>
<p>Currently I have:</p>
<pre><code>T(n) = T(n-2) + n
= T(n-4) + n - 2 + n = T(n-4) + 2n - 2
= T(n-6) + n - 4 + n - 2 + n = T(n-6) + 3n - 6
= T(n-8) + n - 6 + n - 4 + n -2 + n = T(n-8) + 4n - 12
= T(n-10) + n - 8 + n - 6 + n - 4 + n - 2 + n = T(n-10) + 5n - 20
</code></pre>
<p>The pattern I see after $k$ substitutions is:</p>
<p>$$T(n) = T(n-2k) + kn - k(k-1)$$</p>
<p>but I am stuck on how to continue from here.</p>
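<p>The unrolled rows above suggest the pattern $T(n)=T(n-2k)+kn-k(k-1)$ after $k$ substitutions (my reading of the table, so treat it as a conjecture to verify); a direct check against the recurrence confirms it:</p>

```python
def T(n):
    return 1 if n <= 1 else T(n - 2) + n

# Verify T(n) = T(n - 2k) + k*n - k*(k-1) for every valid k.
for n in range(2, 60):
    for k in range(1, n // 2 + 1):
        assert T(n) == T(n - 2 * k) + k * n - k * (k - 1)
```

<p>Taking $k=\lfloor n/2\rfloor$ unrolls all the way down to the base case and yields a closed form.</p>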
| user1015113 | 1,015,113 | <p>Let $P$ be a permutation. $P$ is either odd or even, and if $P$ is odd then its inverse is also odd. The identity permutation is the product of a permutation and its inverse, so the identity is the product of $P$ and $P^{-1}$. Since both permutations are odd, their product is even. Similarly we can argue when $P$ is an even permutation. Thank you.</p>
|
29,926 | <p>Is there an algorithm which will allow me to find an isomorphism between two graphs if I have their adjacency lists?</p>
| Reed Richards | 7,041 | <p>It seems like it can't be done. If $K^{-1}DK^{-1}$ is an m-by-n matrix, then the map can be seen as just $m$ scalar products. However, the <em>log</em> of a scalar product can't be split up, since $\log(a \cdot b) = \log(a_{1} b_{1} + a_{2} b_{2} +...)$. Furthermore, if we take the <em>log</em> of the individual elements we get $\log a_{1} \log b_{1} + \log a_{2} \log b_{2} +...$, which in this case is not feasible since $b$ contains negative elements.</p>
<p>Even if $b$ contained only positive elements, there's no easy way of transforming it back, since in general </p>
<p>$\log(a_{1} b_{1} + a_{2} b_{2} +...) \neq \log a_{1} \log b_{1} + \log a_{2} \log b_{2} +...$ </p>
<p>and we thus can't take the <em>exp</em> to transform back. </p>
|
64,905 | <p>Let's see if we could use MO to put some pressure on certain publishers...</p>
<p>Although it is wonderful that it has been put
<a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih were retypeset in LaTeX (well, if I could understand its content...).</p>
<p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
| hce | 2,781 | <p>Milnor - Lectures on the h-cobordism theorem</p>
|
64,905 | <p>Let's see if we could use MO to put some pressure on certain publishers...</p>
<p>Although it is wonderful that it has been put
<a href="http://www.jmilne.org/math/Books/DMOS.pdf" rel="nofollow">online</a>, I think it would make an even greater read if "Hodge Cycles, Motives and Shimura Varieties" by Deligne, Milne, Ogus and Shih were retypeset in LaTeX (well, if I could understand its content...).</p>
<p>But enough about my opinion, what do you think? Which book(s) would you like to see "texified"? As customary in a CW question, one book per answer please.</p>
| hce | 2,781 | <p>Atiyah - K-Theory</p>
|
79,658 | <p>My knowledge is very limited for complex geometry. I have the following question:</p>
<p>If we have two complex vector bundles $E\to X$ and $F\to X$ such that we have an isomorphism $\mathcal O\left(E\right) \cong \mathcal O\left(F\right)$ between the sheaf of holomorphic sections, do we have an isomorphism $E \cong F$ ?</p>
| Georges Elencwajg | 450 | <p>No. On $\mathbb P^1=\mathbb P^1(\mathbb C)$ we have $\Gamma(\mathbb P^1,\mathcal O_{\mathbb P^1}(-1))=\Gamma(\mathbb P^1,\mathcal O_{\mathbb P^1}(-2))=0$, but $\mathcal O_{\mathbb P^1}(-1)$ and $\mathcal O_{\mathbb P^1}(-2)$ are not isomorphic. </p>
<p>However, on an <em>affine algebraic</em> variety $X$, the answer is "yes". There is an amazing equivalence of categories between $\mathcal O(X)$-modules and the so-called quasi-coherent sheaves on $X$. It is denoted $M\mapsto \tilde M.$<br>
In particular if you have a vector bundle $E$ on $X$, you can recover it (or rather its locally free associated sheaf) from $M=\Gamma(X,E)$ by this equivalence.<br>
And this remarkable result is not even very difficult! ( Hartshorne, <em>Algebraic Geometry</em>, II Corollary 5.5) . And it is valid on any affine <em>scheme</em>! </p>
<p><strong>Another interpretation</strong><br>
I have interpreted $\mathcal O(E)$ as the vector space $ \Gamma(X,E)$ of global sections of the bundle $E$.<br>
However Donu Arapura and Qfwfq consider that the notation designates the sheaf of sections of the vector bundle $E$, that is the sheaf $\mathcal E$ associating to the open subset $U\subset X$ the vector space $\mathcal E(U)=\Gamma(U,E)$. </p>
<p>In that case the answer is : yes, that sheaf determines the bundle.<br>
Indeed there is a <em>canonical</em> way to obtain from the sheaf $\mathcal E=\mathcal O(E)$ a holomorphic vector bundle $ Vec(\mathcal E) $ isomorphic to $E$.<br>
Its fiber at $x\in X$ is the $\mathcal O_{X,x}/ \mathfrak m_x=\mathbb C$-vector space $Vec(\mathcal E) [x]=\mathcal E _x/\mathfrak m_x \mathcal E_x$.<br>
And the complex structure on $Vec(\mathcal E)=\bigsqcup Vec(\mathcal E) [x]$, is obtained from bijections with $U\times \mathbb C^r$ for all $U$'s on which $\mathcal E|U$ is free, that is isomorphic to $\mathcal O^r_U$.<br>
Of course, there are verifications to be made, which are as straightforward as they are boring and unpleasant to write down explicitly...<br>
<strong>Edit</strong> This latter interpretation is the one chritian had in mind, as he just stated in a comment.</p>
|
79,658 | <p>My knowledge is very limited for complex geometry. I have the following question:</p>
<p>If we have two complex vector bundles $E\to X$ and $F\to X$ such that we have an isomorphism $\mathcal O\left(E\right) \cong \mathcal O\left(F\right)$ between the sheaf of holomorphic sections, do we have an isomorphism $E \cong F$ ?</p>
| user2035 | 2,035 | <p>Another way of looking at the problem is to consider the (set-valued) sheaf of isomorphisms $E\to F$ and the sheaf of isomorphisms $\mathcal O(E)\to\mathcal O(F)$. There is clearly a map from the former to the latter, so we only need to show that it is an isomorphism locally, i.e., we may assume that both $E$ and $F$ are trivial. But then, isomorphisms $E\cong F$ as well as isomorphisms $\mathcal O(E)\cong\mathcal O(F)$ both have an explicit description by elements of $GL_n(\Gamma(U,\mathcal O))$, and it is easily checked that they correspond.</p>
|
2,962,377 | <p>Rewrite <span class="math-container">$f(x,y) = 1-x^2y^2$</span> as a product <span class="math-container">$g(x) \cdot h(y)$</span> (both arbitrary functions)</p>
<p>To make clearer what I'm talking about, I will give an example.</p>
<p>Rewrite <span class="math-container">$f(x,y) = 1+x-y-xy$</span> as <span class="math-container">$g(x)h(y)$</span></p>
<p>If we choose <span class="math-container">$g(x) = (1+x)$</span> and <span class="math-container">$h(y) = (1-y)$</span> we have</p>
<p><span class="math-container">$f(x,y) = g(x) h(y) \implies (1+x-y-xy) = (1+x)(1-y)$</span></p>
<p>I'm trying to do the same with <span class="math-container">$f(x,y) = 1-x^2y^2 = (1-xy)(1+xy)$</span>.</p>
<p>New question:</p>
<blockquote>
<p>Is there also a contradiction for <span class="math-container">$f(x,y) = \frac{xy}{1-x^2y^2}$</span>? Or is it possible to write <span class="math-container">$f(x,y)$</span> as <span class="math-container">$g(x)h(y)$</span>?</p>
</blockquote>
| Daniel Schepler | 337,888 | <p>Suppose we had functions <span class="math-container">$g,h$</span> with <span class="math-container">$g(x) h(y) = 1 - x^2 y^2$</span>. Then substituting in <span class="math-container">$x:=1, y:=1$</span>, we would have <span class="math-container">$g(1) h(1) = 0$</span>, so either <span class="math-container">$g(1) = 0$</span> or <span class="math-container">$h(1) = 0$</span>. But we also must have <span class="math-container">$g(1) h(0) = 1$</span> which implies <span class="math-container">$g(1) \ne 0$</span>, and similarly <span class="math-container">$g(0) h(1) = 1$</span> which implies <span class="math-container">$h(1) \ne 0$</span>. Putting these requirements together, we get a contradiction.</p>
<p>Therefore, no such functions <span class="math-container">$g,h$</span> can exist.</p>
|