Columns: qid (int64; 1 to 4.65M) · question (large_string; lengths 27 to 36.3k) · author (large_string; lengths 3 to 36) · author_id (int64; -1 to 1.16M) · answer (large_string; lengths 18 to 63k)
948,329
<p>I have come across this trig identity and I want to understand how it was derived. I have never seen it before, nor have I seen it in any online resources, including the many trig identity cheat sheets that can be found on the internet.</p> <p>$A\cdot\sin(\theta) + B\cdot\cos(\theta) = C\cdot\sin(\theta + \Phi)$</p> <p>where $C = \pm \sqrt{A^2+B^2}$</p> <p>$\Phi = \arctan(\frac{B}{A})$</p> <p>I can see that the Pythagorean theorem is somehow used here because of the $C$ equivalency, but I do not understand how the equation was derived.</p> <p>I tried applying the sum-of-two-angles identity for sine, i.e. $\sin(a \pm b) = \sin(a)\cdot\cos(b) \pm \cos(a)\cdot\sin(b)$</p> <p>But I am unsure what the next step is, in order to properly understand this identity.</p> <p>Where does it come from? Is it a normal identity that mathematicians should have memorized?</p>
heropup
118,193
<p>The trigonometric angle identity $$\sin(\alpha + \beta) = \sin\alpha \cos\beta + \cos\alpha \sin\beta$$ is exactly what you need. Note that $A^2 + B^2 = C^2$, or $$(A/C)^2 + (B/C)^2 = 1.$$ Thus there exists an angle $\phi$ such that $\cos\phi = A/C$ and $\sin\phi = B/C$. Then we immediately obtain from the aforementioned identity $$\sin(\theta+\phi) = \frac{A}{C}\sin \theta + \frac{B}{C}\cos\theta,$$ with the choice $\alpha = \theta$; after which multiplying both sides by $C$ gives the claimed result.</p> <p>Note that $\phi = \tan^{-1} \frac{B}{A}$, not $\tan^{-1} \frac{B}{C}$.</p>
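The identity is easy to sanity-check numerically; the sketch below (not part of the original answer) takes $A>0$ with the positive square root for $C$, so $\phi=\arctan(B/A)$, and the values of $A$ and $B$ are arbitrary.

```python
import math

# Check A*sin(t) + B*cos(t) == C*sin(t + phi) with C = +sqrt(A^2 + B^2)
# and phi = arctan(B/A); A, B chosen arbitrarily with A > 0.
A, B = 3.0, 4.0
C = math.hypot(A, B)       # sqrt(A^2 + B^2) = 5
phi = math.atan2(B, A)     # equals arctan(B/A) since A > 0

err = max(abs(A * math.sin(t) + B * math.cos(t) - C * math.sin(t + phi))
          for t in [k * 0.1 for k in range(100)])
assert err < 1e-12  # the two sides agree at every sampled theta
```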
3,846,717
<p>Denote <span class="math-container">$\mathbb{F}=\mathbb{C}$</span> or <span class="math-container">$\mathbb{R}$</span>.</p> <p><strong>Theorem (Cauchy-Schwarz Inequality).</strong> <em>If <span class="math-container">$\langle\cdot,\cdot\rangle$</span> is a semi-inner product on a vector space <span class="math-container">$H$</span>, then</em> <span class="math-container">$$\lvert\langle x,y\rangle\rvert\le\lVert x\rVert\lVert y\rVert,\quad\textit{for all}\;x,y\in H.$$</span></p> <p><em>Proof.</em> If <span class="math-container">$x=0$</span> or <span class="math-container">$y=0$</span>, then there is nothing to prove, so suppose that <span class="math-container">$x$</span> and <span class="math-container">$y$</span> are both nonzero.</p> <p>Given any scalar <span class="math-container">$z\in\mathbb{F}$</span>, there is a scalar <span class="math-container">$\alpha$</span> with modulus <span class="math-container">$\lvert\alpha\rvert=1$</span> such that <span class="math-container">$\alpha z=\lvert z \rvert$</span>. 
In particular, if we set <span class="math-container">$z=\langle x, y\rangle$</span>, then there is a scalar with <span class="math-container">$\lvert \alpha \rvert=1$</span> such that <span class="math-container">$$\langle x,y\rangle=\alpha\lvert\langle x,y\rangle\rvert.$$</span> Multiplying both sides by <span class="math-container">$\overline{\alpha}$</span>, we see that we also have <span class="math-container">$\overline{\alpha}\langle x, y\rangle=\lvert\langle x, y \rangle \rvert$</span>.</p> <p>For each <span class="math-container">$t\in\mathbb{R}$</span>, using the Polar Identity and antilinearity in the second variable, we compute that <span class="math-container">\begin{equation} \begin{split} 0\le\lVert x-\alpha ty\rVert^2= &amp;\lVert x\rVert^2-2\Re\big(\langle x,\alpha ty\rangle\big)+t^2\lVert y \rVert^2 \\ = &amp;\lVert x\rVert^2-2t\Re\big(\overline{\alpha}\langle x, y\rangle\big)+t^2\lVert y \rVert^2\\ = &amp; \lVert x \rVert ^2-2t\lvert\langle x, y\rangle\rvert+t^2\lVert y \rVert^2\\ = &amp; at^2+bt+c, \end{split} \end{equation}</span> where <span class="math-container">$a=\lVert y \rVert ^2$</span>, <span class="math-container">$b=-2\lvert \langle x, y\rangle \rvert$</span>, and <span class="math-container">$c=\lVert x \rVert ^2$</span>. This is a real-valued quadratic polynomial in the variable <span class="math-container">$t$</span>. Since this polynomial is nonnegative, <span class="math-container">$\color{red}{it\;can\;have\;at\;most\;one\;real\;root}.$</span></p> <p><span class="math-container">$\color{blue}{This\;implies\;that\;the\;discriminant\; cannot\;be\;strictly\;positive}.$</span></p> <blockquote> <p><strong>Question.</strong> What are the reasons why the red and blue assertions hold? I need the precise details.</p> </blockquote> <p>Thanks!</p>
Math Lover
801,574
<p>Take the example of <span class="math-container">$x^2-3x+2 = (x-1)(x-2) = 0$</span>. It has two roots; between <span class="math-container">$x = 1$</span> and <span class="math-container">$x = 2$</span> it is negative, while it is positive for <span class="math-container">$x \lt 1, x \gt 2$</span>. As the quadratic in the question is always non-negative (zero or positive), it does not change sign, so either it has no root or at most one root (i.e. both roots having the same value, e.g. <span class="math-container">$x^2 - 2x + 1 = (x-1)(x-1) = 0$</span>).</p> <p>When the quadratic <span class="math-container">$(ax^2 + bx + c)$</span> has two real roots, its discriminant is positive, i.e. <span class="math-container">$b^2 - 4ac \gt 0$</span>, and if it has only one real root its discriminant is zero, i.e. <span class="math-container">$b^2-4ac = 0$</span>. You can check the same for the above two quadratic examples in my answer. If the discriminant is negative, i.e. <span class="math-container">$b^2 - 4ac \lt 0$</span>, it has no real roots.</p>
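The three cases can be checked directly; a small illustration (not from the original answer) using the two quadratics above plus one with no real root:

```python
# Relating the sign of the discriminant b^2 - 4ac to the number of real roots,
# using the two quadratics from the answer plus one with no real root.
def disc(a, b, c):
    return b * b - 4 * a * c

assert disc(1, -3, 2) > 0    # x^2 - 3x + 2 = (x-1)(x-2): two real roots
assert disc(1, -2, 1) == 0   # x^2 - 2x + 1 = (x-1)^2: one repeated real root
assert disc(1, 0, 1) < 0     # x^2 + 1: no real roots
```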
1,017,989
<p>Consider three disjoint circles, not necessarily of the same radii. How do you draw the smallest circle enclosing all three circles? Where is its centre, and what is its radius?</p>
Axel Kemper
58,610
<p><strong>Not a full solution but a sketch:</strong></p> <p>Let us assume the three disjoint circles are described by:</p> <p>$$\begin{aligned} (x - x_1)^2 + (y - y_1)^2 &amp;= r_1^2 \\ (x - x_2)^2 + (y - y_2)^2 &amp;= r_2^2 \\ (x - x_3)^2 + (y - y_3)^2 &amp;= r_3^2 \end{aligned}$$</p> <p><img src="https://i.stack.imgur.com/ROUFE.jpg" alt="enter image description here"></p> <p><img src="https://i.stack.imgur.com/eiuX2.jpg" alt="enter image description here"></p> <p>The three circles are at fixed locations and disjoint. Therefore, we know:</p> <p>$$\begin{aligned} (x_2 - x_1)^2 + (y_2 - y_1)^2 &amp;\ge (r_1 + r_2)^2 \\ (x_3 - x_2)^2 + (y_3 - y_2)^2 &amp;\ge (r_2 + r_3)^2 \\ (x_1 - x_3)^2 + (y_1 - y_3)^2 &amp;\ge (r_1 + r_3)^2 \end{aligned}$$</p> <p>Assuming that the center of the outer circle is $\{x_m;y_m\}$, we want to find the three touch-points where the outer circle is tangential to the enclosed circles. The touch-points must be located on radii of the small circles and of the outer circle. As additional constraints, the tangential slopes must be equal between small circle and outer circle at all three touch-points.<br> Experimenting with <a href="http://www.geogebra.org" rel="nofollow noreferrer">GeoGebra</a>, I found that there seems to be a unique solution. But I could not find an algebraic expression for it.</p> <p>The geometric construction of the center point $\{x_m;y_m\}$:</p> <ul> <li>Three lines from $\{x_m;y_m\}$ to the three center points intersect the circles in the three touch-points. </li> <li>$\{x_m;y_m\}$ is also the intersection of the perpendicular bisectors of the three touch-points. </li> </ul> <p>If the outer circle just has to enclose but not necessarily touch all circles, the construction does not help.<br> This case probably has to be solved numerically using a non-linear optimization procedure. The coordinates $\{x_m;y_m\}$ are varied. 
I would take the midpoint between the two biggest circles as a starting point.</p> <p><strong>Note:</strong><br> If all three circles are located with their center points on the same line and have the same radius, there is no finite enclosing outer circle which touches all three circles.</p>
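The non-linear optimization mentioned in the answer can be sketched as minimising $R(x_m,y_m)=\max_i\big(\text{dist to centre }i + r_i\big)$, which is a convex function of the centre. The following pure-Python pattern search (`min_enclosing_circle` is a hypothetical helper, not part of the answer) is one simple way to do it:

```python
import math

def min_enclosing_circle(circles, iters=300):
    """Minimise R(cx, cy) = max_i (distance to centre i + r_i) by pattern search.
    circles is a list of (x, y, r) triples; returns (cx, cy, R)."""
    def radius(cx, cy):
        return max(math.hypot(cx - x, cy - y) + r for x, y, r in circles)
    # start at the centroid of the centres, with a generous initial step
    cx = sum(x for x, _, _ in circles) / len(circles)
    cy = sum(y for _, y, _ in circles) / len(circles)
    best = radius(cx, cy)
    step = max(r for _, _, r in circles) + 1.0
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
    for _ in range(iters):
        moved = False
        for dx, dy in dirs:
            cand = radius(cx + step * dx, cy + step * dy)
            if cand < best:
                best, cx, cy = cand, cx + step * dx, cy + step * dy
                moved = True
        if not moved:
            step *= 0.5   # no improving direction: refine the step
    return cx, cy, best

# three unit circles at the vertices of a triangle (arbitrary example)
circles = [(0.0, 0.0, 1.0), (4.0, 0.0, 1.0), (2.0, 3.0, 1.0)]
cx, cy, R = min_enclosing_circle(circles)
print(cx, cy, R)
```

For equal radii the answer reduces to the minimum enclosing circle of the centres, enlarged by the common radius; for the example above that gives centre $(2, 5/6)$ and radius $13/6 + 1 = 19/6$.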
2,146,929
<p>Let $f:S^n \to S^n$ be a homeomorphism. I know the result that a rigid motion in $\mathbb R^{n+1}$ is always <a href="https://math.stackexchange.com/a/866471/185631">linear</a>, but can we get more information from the assumption that $f:S^n \to S^n$ is a homeomorphism?</p>
Tsemo Aristide
280,301
<p>Let $f:S^n\rightarrow S^n$ be a homeomorphism. Define $F:\mathbb R^{n+1}\rightarrow \mathbb R^{n+1}$ by $F(x)=\|x\|f({x\over{\|x\|}})$ for $x\neq 0$, and $F(0)=0$.</p>
101,953
<p>I've been asked (by a person, not from homework) about how to compute the following limit:</p> <p>$$ \lim_{x \to 10^-} \frac{[x^3] - x^3}{[x] - x}$$</p> <p>where $[\cdot]$ denotes the floor function:</p> <p>$$ [x] := \begin{cases} x &amp;&amp; x \in \mathbb{Z} \\ \text{largest integer smaller than }x &amp;&amp; \text{otherwise} \end{cases}$$</p> <p>My first thought was to sandwich this, but using $x^3 - 1 \leq [x^3] \leq x^3$ to get $-1 \leq [x^3] - x^3 \leq 0$ leaves me with</p> <p>$$ 0 \leq \lim_{x \to 10^-} \frac{[x^3] - x^3}{[x] - x} \leq \lim_{x \to 10^-} \frac{1}{x - [x]}$$</p> <p>which doesn't seem to lead anywhere. What's the right way to compute this limit? Thanks for your help.</p>
Davide Giraudo
9,849
<p>We have, putting $x=N+\delta$: $$\lim_{x\to N^-}\frac{[x^3]-x^3}{[x]-x}=\lim_{\delta\to 0^-}\frac{[(N+\delta)^3]-(N+\delta)^3}{[N+\delta]-(N+\delta)}.$$ We have $[N+\delta]-N=N+[\delta]-N=[\delta]=-1$ when, for example, $-\frac 12\leq\delta&lt;0$, so the denominator equals $-1-\delta$, and $$[(N+\delta)^3]-(N+\delta)^3=[N^3+3N^2\delta+3N\delta^2+\delta^3]-(N^3+3N^2\delta+3N\delta^2+\delta^3)=[a(\delta)]-a(\delta),$$ where $a(\delta)=3N^2\delta+3N\delta^2+\delta^3$. Since $a(\delta)&lt;0$ when $\delta&lt;0$, and $a(\delta)\geq -1$ when $|\delta|$ is small enough, we get $[a(\delta)]=-1$, so the quotient equals $\frac{-1-a(\delta)}{-1-\delta}\to 1$ as $\delta\to 0^-$. We conclude that for any $N\in\mathbb N$: $$\lim_{x\to N^-}\frac{[x^3]-x^3}{[x]-x}=1.$$</p>
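The limit can also be watched numerically as $x$ approaches $10$ from below; a quick check (not part of the original answer):

```python
import math

# Numerically approach x -> 10^- and watch the quotient tend to 1,
# matching the algebraic argument above.
def q(x):
    return (math.floor(x ** 3) - x ** 3) / (math.floor(x) - x)

for h in (1e-4, 1e-5, 1e-6, 1e-8):
    print(h, q(10 - h))
```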
1,785,633
<p><em>(First, I am very aware of the fact that Brownian motion is actually probably more difficult to understand than at least basic complex analysis, so the pedagogical merits of such an approach would be questionable for anyone besides a probabilist wanting to refresh or reshape their already existing complex analysis knowledge.)</em></p> <hr> <p>During my stochastic processes lecture, my professor said something to the effect that:</p> <blockquote> <p>"Every statement in complex analysis can be proven using Brownian motion, in particular using the fact that the image of a Brownian path under a conformal map is Brownian motion up to a time change."</p> </blockquote> <p><strong>To what extent is this true?</strong></p> <p>Even after he gave a proof of Liouville's Theorem using Brownian motion and promised to give a proof of the Riemann Mapping Theorem during the next lecture, I was still unconvinced.</p> <p>However, now the more that I think about it, it becomes more plausible -- Brownian motion is very closely related to the theory of harmonic functions, and analytic functions are just 2-dimensional harmonic functions satisfying the Cauchy-Riemann equations (is this correct?).</p> <p>Brownian motion has already been shown to have numerous applications to potential theory and PDEs (to the best of my knowledge), so is a similar formulation of complex analysis in terms of Brownian motion also theoretically possible? (even if not desirable?)</p>
Lee Mosher
26,501
<p>Hint: It can easily be reduced to a simpler problem: find three vectors $a,b,c$ in $\mathbb{R}^2$ such that $\{a,b\}$, $\{a,c\}$ and $\{b,c\}$ are each linearly independent sets. Once you've found those vectors, simply include $\mathbb{R}^2$ as a vector subspace of $\mathbb{R}^3$.</p>
3,592,747
<p>How to solve this equation for <span class="math-container">$x$</span> in the reals, without using the theory of complex numbers? <span class="math-container">$$\frac{a}{\left(x+\frac{1}{x}\right)^{2}}+\frac{b}{\left(x-\frac{1}{x}\right)^{2}}=1$$</span> where <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are constants.</p> <p>Please write down a step-by-step solution with a good explanation for this equation. I spent a lot of time trying to solve it but could not find an answer.</p> <p>According to <a href="https://www.desmos.com/calculator/iwke0frbow" rel="nofollow noreferrer">desmos</a>, it has four roots:</p> <p><a href="https://i.stack.imgur.com/UO25t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/UO25t.png" alt="enter image description here"></a></p> <p><sub>I don't know which title and tags I should use for this question, please edit.</sub></p>
Matti P.
432,405
<p>I would start by doing this: <span class="math-container">$$ \frac{a}{\left(x+\frac{1}{x}\right)^{2}}+\frac{b}{\left(x-\frac{1}{x}\right)^{2}}= \frac{ax^2}{\left(x+\frac{1}{x}\right)^2 x^2}+\frac{bx^2}{\left(x-\frac{1}{x}\right)^2 x^2}= \frac{ax^2}{\left(x^2+1\right)^2}+\frac{bx^2}{\left(x^2-1\right)^2}=1 $$</span> Now multiply both sides by <span class="math-container">$(x^2 + 1)^2 (x^2 -1 )^2$</span>: <span class="math-container">$$ ax^2 (x^2 -1)^2 + bx^2 (x^2 +1)^2 = (x^2 + 1)(x^2 -1 )(x^2 + 1)(x^2 -1 ) = (x^4 -1 )^2 $$</span> now set <span class="math-container">$t= x^2$</span> and open the parentheses on the left-hand side: <span class="math-container">$$ at^3 - 2at^2 +at+bt^3 + 2bt^2 +bt = t^4 - 2t^2+1 $$</span> which is a quartic equation.</p>
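The reduction can be double-checked numerically: find a root of the original equation for a sample choice of constants, then confirm it satisfies the quartic in $t=x^2$. The values $a=b=1$ below are arbitrary, and `bisect` is a hypothetical helper:

```python
# Numerically find one root for a sample choice a = b = 1 (arbitrary),
# then confirm it satisfies the quartic in t = x^2 derived above.
def f(x, a, b):
    return a / (x + 1 / x) ** 2 + b / (x - 1 / x) ** 2 - 1

def bisect(lo, hi, a, b, n=80):
    # assumes f(lo) and f(hi) have opposite signs
    for _ in range(n):
        mid = (lo + hi) / 2
        if f(lo, a, b) * f(mid, a, b) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

a, b = 1.0, 1.0
x = bisect(1.2, 2.0, a, b)   # f changes sign on [1.2, 2] for a = b = 1
t = x * x
lhs = a*t**3 - 2*a*t**2 + a*t + b*t**3 + 2*b*t**2 + b*t
rhs = t**4 - 2*t**2 + 1
print(x, abs(lhs - rhs))
```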
3,012,416
<p>I know the answer is obvious: In <span class="math-container">$\mathbb{Z}$</span> the only solutions of <span class="math-container">$xy=-1$</span> are <span class="math-container">$x=-y=1$</span> and <span class="math-container">$x=-y=-1$</span>. My problem is that I want to formally prove it and I don't know how to write it. Where do you even begin for such a trivial statement?</p> <p><strong>Edit: I would like to prove it viewing <span class="math-container">$\mathbb{Z}$</span> as a ring. This is, just using the sum and product of integers. No order, no absolute value, etc...</strong></p>
Joel Pereira
590,578
<p>If xy = -1, then x = <span class="math-container">$\frac{-1}{y}$</span>. If x is an integer, y must divide -1. Therefore, y = <span class="math-container">$\pm$</span>1, which implies x = <span class="math-container">$\mp$</span>1.</p>
3,012,416
<p>I know the answer is obvious: In <span class="math-container">$\mathbb{Z}$</span> the only solutions of <span class="math-container">$xy=-1$</span> are <span class="math-container">$x=-y=1$</span> and <span class="math-container">$x=-y=-1$</span>. My problem is that I want to formally prove it and I don't know how to write it. Where do you even begin for such a trivial statement?</p> <p><strong>Edit: I would like to prove it viewing <span class="math-container">$\mathbb{Z}$</span> as a ring. This is, just using the sum and product of integers. No order, no absolute value, etc...</strong></p>
Ovi
64,460
<p>For positive integers (the same can be done for negative integers, or just use absolute values): </p> <p>If <span class="math-container">$x &gt;1$</span> and <span class="math-container">$y&gt;1$</span>, then <span class="math-container">$xy&gt;1$</span>. So if <span class="math-container">$xy=1$</span>, either <span class="math-container">$x =1$</span> or <span class="math-container">$y=1$</span>. If <span class="math-container">$x=1$</span>, then <span class="math-container">$1 \cdot y=1$</span>, so <span class="math-container">$y=1$</span>. If <span class="math-container">$y=1$</span>, then <span class="math-container">$x=1$</span>.</p>
645,340
<p>I am not sure how to start tackling this question and would love a hint:</p> <blockquote> <p>Let $&lt;$ be the regular order relation on $\mathbb{R}$, and $&lt;_w$ a well-ordering on $\mathbb{R}$. We define a coloring function $h:[\mathbb{R}]^2 \rightarrow 2$ like so: $$ h(\{x,y\}) = \left\{\begin{matrix} 1 &amp;((x &lt; y) \wedge (x &lt;_w y)) \vee ((y &lt; x) \wedge(y&lt;_wx)) \\ 0 &amp; \text{otherwise} \end{matrix}\right. $$ Prove that there isn't a homogeneous $H \subseteq \mathbb{R}$ such that $|H| = \aleph_1$.</p> </blockquote> <p>I'm not sure if I'm "allowed" or need to assume AC here. It looks like the question is related to combinatoric properties of large cardinals (the result would be that $\mathbb{R}$ is not weakly compact, if I'm not mistaken), but none of the theorems we saw seems relevant.</p>
Clive Newstead
19,542
<p><strong>Hint:</strong> Suppose there is such a homogeneous set, and use it to construct a subset of $\mathbb{R}$ whose order-type under $&lt;$ is $\omega_1$. This must be a contradiction because, by selecting rational points between points in this sequence, you get a strictly increasing $\omega_1$-sequence of rationals.</p>
645,340
<p>I am not sure how to start tackling this question and would love a hint:</p> <blockquote> <p>Let $&lt;$ be the regular order relation on $\mathbb{R}$, and $&lt;_w$ a well-ordering on $\mathbb{R}$. We define a coloring function $h:[\mathbb{R}]^2 \rightarrow 2$ like so: $$ h(\{x,y\}) = \left\{\begin{matrix} 1 &amp;((x &lt; y) \wedge (x &lt;_w y)) \vee ((y &lt; x) \wedge(y&lt;_wx)) \\ 0 &amp; \text{otherwise} \end{matrix}\right. $$ Prove that there isn't a homogeneous $H \subseteq \mathbb{R}$ such that $|H| = \aleph_1$.</p> </blockquote> <p>I'm not sure if I'm "allowed" or need to assume AC here. It looks like the question is related to combinatoric properties of large cardinals (the result would be that $\mathbb{R}$ is not weakly compact, if I'm not mistaken), but none of the theorems we saw seems relevant.</p>
Asaf Karagila
622
<p>The question is not related to large cardinals. Rather, it is closer to Ramsey theory and its various degrees of failure in infinite sets.</p> <p>Suppose that there were such a homogeneous $H$ of size $\aleph_1$.</p> <p>If $H$ is homogeneous and its color is $1$, then there is an order embedding of an uncountable ordinal into $\Bbb R$, which is impossible.</p> <p>If $H$ is homogeneous and its color is $0$, then there is a sequence $h_n\in H$ which is $&lt;$-increasing. By homogeneity this means that whenever $n&lt;k$, $h_k&lt;_w h_n$. This is a decreasing sequence in the well-order, which is impossible.</p> <p>To fill in the details in the above proof, you need to know that every well-ordered subset of $\Bbb R$ is countable (that is, well-ordered by the usual ordering). It's not hard to prove, and there are various proofs (especially if we are allowed to use the axiom of choice, then we can appeal to all sorts of topological proofs).</p>
4,463,105
<p>Write down the radius of convergence of the power series, obtained by the Taylor expansion of the analytic functions about the stated point, in</p> <p><span class="math-container">$f(z) = \frac{(z+20)(z+21)}{(z-20i)^{21} (z^2 +z+1)}$</span> about <span class="math-container">$z = 0$</span>.</p> <p>My attempt: Since power series are continuous on the disk of convergence, the radius of convergence is the distance to the nearest point of discontinuity. <span class="math-container">$f(z)$</span> is not analytic at <span class="math-container">$z=20i$</span>, hence the radius of convergence would be <span class="math-container">$|0-20i|= 20$</span>.</p> <p>Am I correct?</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$f$</span> has poles at <span class="math-container">$\frac {-1\pm i\sqrt 3} 2$</span> also, and these poles have modulus <span class="math-container">$1$</span>. So the radius of convergence is <span class="math-container">$1$</span>.</p>
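A quick numeric confirmation (not part of the original answer) that the roots of $z^2+z+1$ lie on the unit circle and are the singularities nearest to $0$:

```python
import cmath

# Singularities of f: z = 20i and the two roots of z^2 + z + 1 = 0.
roots = [(-1 + cmath.sqrt(-3)) / 2, (-1 - cmath.sqrt(-3)) / 2]
for r in roots:
    assert abs(r ** 2 + r + 1) < 1e-12   # they really are roots
moduli = [abs(r) for r in roots] + [abs(20j)]
print(sorted(moduli))   # the smallest modulus gives the radius of convergence
```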
22
<p>By matrix-defined, I mean</p> <p>$$\left&lt;a,b,c\right&gt;\times\left&lt;d,e,f\right&gt; = \left| \begin{array}{ccc} i &amp; j &amp; k\\ a &amp; b &amp; c\\ d &amp; e &amp; f \end{array} \right|$$</p> <p>...instead of the definition as the product of the magnitudes multiplied by the sine of the angle between them, in the orthogonal direction)</p> <p>If I take the cross product of two vectors with no $k$ component, I get one with only $k$, which is expected. But why?</p> <p>As has been pointed out, I am asking why the algebraic definition lines up with the geometric definition.</p>
Noldorin
56
<p>The obvious but slightly trite answer is "because that's just how the cross-product works as an operation".</p> <p>If you're looking for an intuitive reason, the cross-product by definition produces a vector that is orthogonal to the two operand (input) vectors. You know that the base vectors $\mathbf{i}$, $\mathbf{j}$, and $\mathbf{k}$ are all orthogonal, thus if your two input vectors lie on the $(x, y)$ plane (i.e. only $\mathbf{i}$ and $\mathbf{j}$ components), you know that any orthogonal vector must have only a component in the $z$ direction (multiple of $\mathbf{k}$).</p>
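The determinant definition can be checked directly; a small sketch (vectors chosen arbitrarily) showing that two vectors in the $(x,y)$ plane produce a purely $\mathbf{k}$-directed, orthogonal result:

```python
def cross(u, v):
    """Cofactor expansion of the determinant definition of u x v."""
    a, b, c = u
    d, e, f = v
    return (b * f - c * e, c * d - a * f, a * e - b * d)

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

u, v = (1, 2, 0), (3, 4, 0)  # both lie in the (x, y) plane
w = cross(u, v)
print(w)  # only the k component is nonzero
assert w[:2] == (0, 0)
assert dot(u, w) == 0 and dot(v, w) == 0
```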
500,931
<p>I have this theorem which I can't prove. Please help.</p> <p>"Show that every positive rational number $x$ can be expressed in the form $\sum_{k=1}^n \frac{a_k}{k!}$ in one and only one way, where each $a_k$ is a non-negative integer with $a_k \le k - 1$ for $k \ge 2$ and $a_n&gt;0$."</p> <p>I think the ONE way is <a href="https://math.stackexchange.com/questions/71851/prove-that-any-rational-can-be-expressed-in-the-form-sum-limits-k-1n-frac">this</a>. But I don't know how to prove that it is the ONLY way.</p> <p>Or is <a href="https://math.stackexchange.com/questions/71851/prove-that-any-rational-can-be-expressed-in-the-form-sum-limits-k-1n-frac">this</a> not that ONE way?</p> <p>Please prove it. Thank you.</p>
Calvin Lin
54,563
<p>First, we show existence. For any rational <span class="math-container">$x$</span>, define a sequence of integers as follows:</p> <p><span class="math-container">$a_1 = \lfloor x \rfloor$</span>, observe that <span class="math-container">$ 0 \leq x - a_1 &lt; 1$</span>. If it is 0, stop.<br> <span class="math-container">$a_2 = \lfloor 2!(x - \frac{a_1}{1!}) \rfloor$</span>, the previous observation gives us that <span class="math-container">$a_2 \leq 1$</span>. Observe that <span class="math-container">$ 0 \leq x - \frac{a_1}{1!} - \frac{a_2}{2!} &lt; \frac{1}{2!}$</span>. If it is 0, stop.<br> <span class="math-container">$a_3 = \lfloor 3!(x - \frac{a_1}{1!} - \frac{a_2}{2!}) \rfloor$</span>, the previous observation gives us that <span class="math-container">$a_3 \leq 2$</span>. Observe that <span class="math-container">$0 \leq x - \frac{a_1}{1!} - \frac{a_2}{2!} - \frac{a_3}{3!} &lt; \frac{1}{3!} $</span>. If it is 0, stop.<br> Continue in this manner to define <span class="math-container">$a_n$</span>. </p> <p>Next, we show uniqueness. This is pretty straightforward, so I'd leave it to you.</p>
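The greedy construction above can be sketched in a few lines (`factorial_base` is a hypothetical helper; exact rational arithmetic keeps the floors honest):

```python
from fractions import Fraction
from math import factorial, floor

def factorial_base(x):
    """Greedy digits a_1, a_2, ... with x = sum_k a_k / k!, following the
    construction above: a_k = floor(k! * (x - partial sum so far))."""
    x = Fraction(x)
    digits, partial, k = [], Fraction(0), 1
    while x != partial:          # terminates for rational x
        a = floor(factorial(k) * (x - partial))
        digits.append(a)
        partial += Fraction(a, factorial(k))
        k += 1
    return digits

d = factorial_base(Fraction(7, 3))
print(d)  # 7/3 = 2/1! + 0/2! + 2/3!
```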
2,640,763
<p>Let $\{x_i\}_{i=1}^{n}$ and $\{y_i\}_{i=1}^{n}$ be two positive sequences, where the first one is monotonic and the second one is strictly increasing.</p> <p>I noticed that in many cases if $\{x_i\}_{i=1}^{n}$ is increasing $$\frac{\frac{1}{n}\sum_{i=1}^{n}{x_iy_i}}{\left(\frac{1}{n}\sum_{i=1}^{n}{x_i}\right)\left(\frac{1}{n}\sum_{i=1}^{n}{y_i}\right)}\ge1$$ </p> <p>and if $\{x_i\}_{i=1}^{n}$ is decreasing $$\frac{\frac{1}{n}\sum_{i=1}^{n}{x_iy_i}}{\left(\frac{1}{n}\sum_{i=1}^{n}{x_i}\right)\left(\frac{1}{n}\sum_{i=1}^{n}{y_i}\right)}\le1$$ </p> <p>I am very skeptical about the veracity of this assertion, but I cannot prove it; any help or directions would be welcome.</p>
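The claimed ratio is easy to test on sample sequences (this is essentially Chebyshev's sum inequality); a quick numeric sanity check with arbitrarily chosen sequences:

```python
# Numerical sanity check of the displayed ratio (essentially Chebyshev's
# sum inequality); the sequences below are arbitrary examples.
def ratio(xs, ys):
    n = len(xs)
    mean_xy = sum(x * y for x, y in zip(xs, ys)) / n
    return mean_xy / ((sum(xs) / n) * (sum(ys) / n))

inc = [1, 2, 4, 8]   # increasing x_i
dec = [8, 4, 2, 1]   # decreasing x_i
ys = [1, 3, 5, 7]    # strictly increasing y_i
assert ratio(inc, ys) >= 1
assert ratio(dec, ys) <= 1
```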
user
505,767
<p>Suppose that $T$ has rank $k$; then you can construct a basis made of $n-k$ vectors in Ker(T) and of $k$ vectors orthogonal to them</p> <p>$$\alpha=\{\alpha_1,...\alpha_k,\alpha_{k+1},...\alpha_n\}$$</p> <p>then take</p> <p>$$\beta=\{\beta_1,...\beta_k,\beta_{k+1},...\beta_n\}$$</p> <p>with $\beta_i=T(\alpha_i)$ for $i \leq k$ and the remaining $\beta_i$ chosen to complete a basis; thus $[T]_{\alpha}^{\beta}$ is a diagonal matrix with each diagonal entry equal to either 0 or 1</p>
3,182,587
<p>I think it is simpler if I just focus on the derivation of <span class="math-container">$\cos(z)=\frac{e^{iz}+e^{-iz}}{2}$</span>.</p> <p>I start from the formulae for the complex exponential, assuming <span class="math-container">$z=x+iy$</span>: <span class="math-container">$e^{iz}=e^{-y}\cdot(\cos(x)+i\sin(x))$</span> and <span class="math-container">$e^{-iz}=e^{y}\cdot(\cos(x)-i\sin(x))$</span>.</p> <p>And here I start running into problems; I can simply not reproduce the given formula! The furthest I obtain is <span class="math-container">$$e^{iz}+e^{-iz}=(e^{-y}+e^{y})\cdot\cos(x)+i(e^{-y}-e^{y})\cdot\sin(x)$$</span> and an equivalent formula for the difference.</p> <p>Please help.</p>
Peter Foreman
631,494
<p>The Taylor series expansions for <span class="math-container">$e^z$</span>, <span class="math-container">$\sin{(z)}$</span> and <span class="math-container">$\cos{(z)}$</span> which are valid for all <span class="math-container">$z\in\mathbb{C}$</span> are supplied below <span class="math-container">$$e^z=1+\frac{z}{1!}+\frac{z^2}{2!}+\dots=\sum_{k=0}^\infty \frac{z^k}{k!}$$</span> <span class="math-container">$$\sin{(z)}=\frac{z}{1!}-\frac{z^3}{3!}+\frac{z^5}{5!}-\dots=\sum_{k=0}^\infty \frac{(-1)^kz^{2k+1}}{(2k+1)!}$$</span> <span class="math-container">$$\cos{(z)}=1-\frac{z^2}{2!}+\frac{z^4}{4!}-\dots=\sum_{k=0}^\infty \frac{(-1)^kz^{2k}}{(2k)!}$$</span> If we look at the sum <span class="math-container">$$\begin{align} \cos{(z)}+i\sin{(z)} &amp;=\sum_{k=0}^\infty \frac{(-1)^kz^{2k}}{(2k)!}+i\sum_{k=0}^\infty \frac{(-1)^kz^{2k+1}}{(2k+1)!}\\ &amp;=\sum_{k=0}^\infty (-1)^k\left(\frac{z^{2k}}{(2k)!}+\frac{i\cdot z^{2k+1}}{(2k+1)!}\right)\\ \end{align}$$</span> it is in fact equal to <span class="math-container">$$\begin{align} e^{iz} &amp;=\sum_{k=0}^\infty \frac{(iz)^k}{k!}\\ &amp;=\sum_{k=0}^\infty \frac{(iz)^{2k}}{(2k)!}+\frac{(iz)^{2k+1}}{(2k+1)!}\\ &amp;=\sum_{k=0}^\infty \frac{(-z^2)^{k}}{(2k)!}+\frac{(-z^2)^k(iz)}{(2k+1)!}\\ &amp;=\sum_{k=0}^\infty (-1)^k\left(\frac{z^{2k}}{(2k)!}+\frac{z^{2k}(iz)}{(2k+1)!}\right)\\ &amp;=\sum_{k=0}^\infty (-1)^k\left(\frac{z^{2k}}{(2k)!}+\frac{i\cdot z^{2k+1}}{(2k+1)!}\right)\\ \end{align}$$</span> for all <span class="math-container">$z\in\mathbb{C}$</span>. So we can write that <span class="math-container">$$e^{iz}=\cos{(z)}+i\sin{(z)}$$</span> for all <span class="math-container">$z\in\mathbb{C}$</span>. 
Using this gives <span class="math-container">$$e^{i(-z)}=e^{-iz}=\cos{(-z)}+i\sin{(-z)}=\cos{(z)}-i\sin{(z)}$$</span> A proof that <span class="math-container">$$\sin{(-z)}=-\sin{(z)}$$</span> <span class="math-container">$$\cos{(-z)}=\cos{(z)}$$</span> for all <span class="math-container">$z\in\mathbb{C}$</span> comes from looking at the series expansions of <span class="math-container">$\sin{(-z)}$</span> and <span class="math-container">$\cos{(-z)}$</span> while using the facts that <span class="math-container">$$(-z)^{2k}=z^{2k}$$</span> <span class="math-container">$$(-z)^{2k+1}=-z^{2k+1}$$</span> for all <span class="math-container">$z\in\mathbb{C}$</span>. Finally we have <span class="math-container">$$e^{iz}-e^{-iz}=\cos{(z)}+i\sin{(z)}-(\cos{(z)}-i\sin{(z)})=2i\sin{(z)}$$</span> <span class="math-container">$$\therefore \sin{(z)}=\frac{e^{iz}-e^{-iz}}{2i}$$</span> <span class="math-container">$$e^{iz}+e^{-iz}=\cos{(z)}+i\sin{(z)}+\cos{(z)}-i\sin{(z)}=2\cos{(z)}$$</span> <span class="math-container">$$\therefore \cos{(z)}=\frac{e^{iz}+e^{-iz}}{2}$$</span> with the above identities holding for all <span class="math-container">$z\in\mathbb{C}$</span>.</p>
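The identities derived above can be spot-checked numerically at a non-real point; a small sketch (the value of $z$ is arbitrary):

```python
import cmath

# Spot-check the derived identities at a non-real point (z chosen arbitrarily).
z = 1.3 + 0.7j
lhs_cos = cmath.cos(z)
rhs_cos = (cmath.exp(1j * z) + cmath.exp(-1j * z)) / 2
lhs_sin = cmath.sin(z)
rhs_sin = (cmath.exp(1j * z) - cmath.exp(-1j * z)) / (2j)
assert abs(lhs_cos - rhs_cos) < 1e-12
assert abs(lhs_sin - rhs_sin) < 1e-12
assert abs(cmath.exp(1j * z) - (cmath.cos(z) + 1j * cmath.sin(z))) < 1e-12
```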
354,100
<p>Does the expression <span class="math-container">$$\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)}R^n,$$</span> which gives the volume of an <span class="math-container">$n$</span>-dimensional ball of radius <span class="math-container">$R$</span> when <span class="math-container">$n$</span> is a nonnegative integer, have any known significance when <span class="math-container">$n$</span> is an odd negative integer?</p>
Anixx
10,059
<p>I want to make another answer to this question, although it is not really an answer, and I do not know whether it is relevant or not, but it may be useful.</p> <p>Let's take</p> <p><span class="math-container">$$V(n,R)=\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)}R^n$$</span></p> <p>Also, using the reflection formula for the Zeta function we can see:</p> <p><span class="math-container">$$n\zeta(1-n)\frac{\pi^{\frac{n}{2}}}{\Gamma(\frac{n}{2}+1)}=(1-n)\zeta(n)\frac{\pi^{\frac{1-n}{2}}}{\Gamma(\frac{1-n}{2}+1)}$$</span></p> <p>Interestingly enough, in the <a href="https://mathoverflow.net/questions/115743/an-algebra-of-integrals/342651#342651">algebra of divergent integrals described here</a>, there is a rule <span class="math-container">$\operatorname{reg}\omega_+^n=-n\zeta(1-n)$</span> (where <span class="math-container">$\omega_+=\int_{-1/2}^\infty dx$</span> is an infinite divergent integral and <span class="math-container">$\operatorname{reg}$</span> denotes regularization), so the overall formula becomes</p> <p><span class="math-container">$$\operatorname{reg} V(n, \omega_+)=\operatorname{reg} V(1-n, \omega_+)$$</span></p> <p>This is a very nice-looking relation, but its deep meaning is unclear, particularly because it is not evident what meaning balls of infinite radius should have, let alone balls of infinite radius in negative dimension :-). Still, formally this relation looks beautiful.</p> <p>See also this question: <a href="https://mathoverflow.net/questions/221792/what-is-the-connection-between-the-riemann-xi-function-and-n-sphere">What is the connection between the Riemann Xi-function and n-sphere?</a></p>
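Independently of the zeta-function relation, $V(n,R)$ itself is easy to evaluate for any real $n$ at which $\Gamma(\frac{n}{2}+1)$ is finite, including odd negative integers; a quick sketch checking it against the familiar low-dimensional volumes:

```python
import math

def V(n, R):
    """pi^(n/2) / Gamma(n/2 + 1) * R^n, for any real n where Gamma is finite."""
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1) * R ** n

assert abs(V(1, 1) - 2) < 1e-12                 # length of [-1, 1]
assert abs(V(2, 1) - math.pi) < 1e-12           # area of the unit disc
assert abs(V(3, 1) - 4 * math.pi / 3) < 1e-12   # volume of the unit ball
print(V(-1, 1), V(-3, 1))  # finite values at odd negative n
```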
1,346,039
<p>Let <span class="math-container">$F$</span> be a field of characteristic prime to <span class="math-container">$n$</span>, and let <span class="math-container">$F^a$</span> be an algebraic closure of <span class="math-container">$F$</span>. Let <span class="math-container">$\zeta$</span> be a primitive <span class="math-container">$n$</span>th root of unity in <span class="math-container">$F^a$</span>. I know that the monic polynomial <span class="math-container">$\Phi_n(X)$</span> factorizes as <span class="math-container">$$\Phi_n(X) = \prod_{m \in (\mathbb{Z}/n)^\times} (X - \zeta^m).$$</span>I also know that there is a canonical injective group homomorphism <span class="math-container">$\chi: \text{Gal}(F(\zeta)/F) \to (\mathbb{Z}/n)^\times$</span> that is characterized by the property that <span class="math-container">$g(\alpha) = \alpha^{\chi(g)}$</span> for any <span class="math-container">$g \in \text{Gal}(F(\zeta)/F)$</span> and any <span class="math-container">$\alpha \in \mu_n$</span>.</p> <p>What is the easiest and quickest way see that the following properties are true/where can I find a reference to their proofs?</p> <blockquote> <p>Let <span class="math-container">$S \subset (\mathbb{Z}/n)^\times$</span> be the image of the above homomorphism <span class="math-container">$\chi$</span>.</p> <ol> <li><p>If <span class="math-container">$m_1$</span> and <span class="math-container">$m_2$</span> are two elements of <span class="math-container">$(\mathbb{Z}/n)^\times$</span> then <span class="math-container">$\zeta^{m_1}$</span> and <span class="math-container">$\zeta^{m_2}$</span> are conjugate to another if and only if <span class="math-container">$m_1$</span> and <span class="math-container">$m_2$</span> lie in the same coset of <span class="math-container">$S$</span> in <span class="math-container">$(\mathbb{Z}/n)^\times$</span>.</p> </li> <li><p>Let <span class="math-container">$S = S_1, \dots, S_s$</span> denote the cosets of <span 
class="math-container">$S$</span> in <span class="math-container">$(\mathbb{Z}/n)^\times$</span>. Then for each <span class="math-container">$i = 1, \dots, s$</span>, the polynomial <span class="math-container">$f_i = \prod_{m \in S_i} (X - \zeta^m)$</span> is an irreducible element of <span class="math-container">$F[X]$</span>.</p> </li> <li><p><span class="math-container">$\Phi_n(X) = f_1(X) \dots f_s(X)$</span>.</p> </li> </ol> </blockquote> <p>Much thanks in advance.</p>
user149792
149,792
<ol> <li>$\zeta^{m_1}$ and $\zeta^{m_2}$ are conjugate if and only if there exists $g \in \text{Gal}(F(\zeta)/F)$ such that $g(\zeta^{m_1}) = \zeta^{m_2}$. This is the case if and only if there exists $g \in \text{Gal}(F(\zeta)/F)$ such that $\chi(g)m_1 = m_2$, which happens if and only if $m_1$ and $m_2$ lie in the same coset of $\text{Im}\,\chi = S$ in $(\mathbb{Z}/n\mathbb{Z})^\times$.</li> <li>We choose coset representatives $t_i$ so that $S_i = t_iS$; we have that $(\mathbb{Z}/n\mathbb{Z})^\times$ is commutative, so we need not worry about left or right cosets. By (1), $$f_i(X) = \prod_{g \in \text{Gal}(F(\zeta)/F)} (X - \zeta^{t_i \chi(g)}) = \prod_{g \in \text{Gal}(F(\zeta)/F)} (X - g(\zeta^{t_i})).$$ As this expands, it is clear that all coefficients are stable under the action of $\text{Gal}(F(\zeta)/F)$, and so $f_i \in F(\zeta)^G[X] = F[X]$, as $F(\zeta)/F$ is Galois. Moreover, it is the minimal polynomial of $\zeta^{t_i}$ over $F$ (by (1)), and so is irreducible.</li> <li>Each element of $(\mathbb{Z}/n\mathbb{Z})^\times$ appears in exactly one coset of $S$, and thus appears as a power of $\zeta$ in exactly one of the $f_i$. The equality in question is now clear.</li> </ol>
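The displayed product formula for $\Phi_n$ can be verified numerically over $\mathbb{C}$; a sketch for $n=12$, where it is known that $\Phi_{12}(X)=X^4-X^2+1$ and the units mod 12 are $\{1,5,7,11\}$:

```python
import cmath
import math

# Check Phi_12(X) = product over units m of (X - zeta^m), where
# zeta = e^(2*pi*i/12), against the known Phi_12(X) = X^4 - X^2 + 1.
n = 12
zeta = cmath.exp(2j * math.pi / n)          # a primitive n-th root of unity
units = [m for m in range(1, n) if math.gcd(m, n) == 1]

def phi_product(x):
    p = 1
    for m in units:
        p *= x - zeta ** m
    return p

for x in (0.5, 2.0, -1.7):
    assert abs(phi_product(x) - (x ** 4 - x ** 2 + 1)) < 1e-9
```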
1,421,740
<p>Let $90^a=2$ and $90^b=5$. Evaluate </p> <h1>$45^\frac {1-a-b}{2-2a}$ </h1> <p>I know that the answer is 3 when I used logarithms, but I need to show a student how to evaluate this without involving logarithms. Also, no calculators.</p>
JUMAR VELASCO
622,772
<p>Given <span class="math-container">$90^a = 2$</span>, <span class="math-container">$90^b = 5$</span></p> <p><span class="math-container">$$(90/2)^{(1-a-b)/(2-2a)}$$</span> <span class="math-container">$$=(90/90^a)^{(1-a-b)/(2-2a)}$$</span> <span class="math-container">$$=(90^{1-a})^{(1-a-b)/(2(1-a))}$$</span> <span class="math-container">$$=90^{(1-a-b)/2}$$</span> <span class="math-container">$$=\sqrt{\frac{90}{90^a 90^b}}$$</span> <span class="math-container">$$=\sqrt{\frac{90}{2\cdot 5}} = \sqrt{9} =3$$</span></p>
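As a quick numeric sanity check (editorial addition; logarithms are used only to manufacture test values of $a$ and $b$, not in the solution itself):

```python
# Numeric check of the chain of equalities above; log() only generates a and b.
import math

a = math.log(2, 90)   # so that 90**a == 2
b = math.log(5, 90)   # so that 90**b == 5
value = 45 ** ((1 - a - b) / (2 - 2 * a))
assert abs(value - 3) < 1e-9
```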
2,589,019
<p>I am aware that there is a result saying that if $L_1, L_2$ are two algebraically closed fields of the same characteristic, then either $L_1$ is isomorphic to a subfield of $L_2$, or $L_2$ is isomorphic to a subfield of $L_1$.</p> <p>I am unsure however of how to prove this result. I am quite surprised that this is true, because I feel like these algebraically closed fields can be so large that there's no reason why their structures have to have similarities like this. Could anyone offer some insight please? </p>
Stella Biderman
123,230
<p>What follows is meant to illuminate the intuition, and is neither a proof nor entirely formally accurate.</p> <p>Both fields contain the same copy of either the algebraic closure of $\mathbb{Q}$ or of $\mathbb{F}_p$, depending on what the characteristic is. This is a straightforward consequence of the fact that the smallest subfield of any field is one of the $\mathbb F_p$ or is $\mathbb Q$.</p> <p>All of the other elements of $L_1$ and $L_2$ are transcendental over this prime field, by definition. However, all transcendental elements “look the same.” From an algebraic point of view, transcendental numbers interact with algebraic numbers just like formal symbols. For an illustrative example, we have: $$\mathbb Q(x)\cong {\mathbb Q(\pi)}\cong{\mathbb Q(e)}$$</p> <p>We can take an algebraically closed field, add a transcendental element, close under field operations, adjoin roots of all polynomials (i.e., pass to the algebraic closure), and what we get back is another algebraically closed field. However, it doesn’t matter which element we added; any field obtained this way is isomorphic to every other one by the above equation.</p> <p>If you iterate this process, you can construct any algebraically closed field. The process is straightforward:</p> <blockquote> <p>Let $A$ be an algebraically closed field whose smallest subfield is $F$. Let $F_0=\overline{F}$. To construct $F_{s(\alpha)}$, pick any element in $A\setminus F_{\alpha}$ and call it $a$. We define $F_{s(\alpha)}=\overline{F_\alpha(a)}$. For limit ordinals, we define $F_\beta=\cup_{\alpha&lt;\beta} F_\alpha$. It can be proven that every element of $A$ will eventually be added by this process, and so for some $\gamma$ we have that $A=F_\gamma$. Therefore every algebraically closed field has this form.</p> </blockquote> <p>So all of the algebraically closed fields look the same; it just matters how many times we do this procedure. That’s what determines which one is isomorphic to a subfield of the other.</p>
1,102,324
<p>I could use some pointers solving this problem:</p> <blockquote> <p>Given a certain r.v. $X$ with cdf $F_X(x)$ and pdf $f_X(x)$. Let the r.v. $Y$ be the lower censoring of $X$ at $x=b$. Meaning:</p> <p>$$Y = \begin{cases}0 &amp; \text{if }X&lt;b\\ X &amp; \text{if } X \geq b\end{cases}$$</p> <p>Find the cdf $F_Y(y)$ and pdf $f_Y(y)$</p> </blockquote> <h3>My attempt</h3> <p>I'm looking for $$\begin{align} F_Y(y) = \mathcal{P}(Y&lt;y) &amp;= \mathcal{P}(Y&lt;y\mid X&lt;b)\mathcal{P}(X&lt;b)+\mathcal{P}(Y&lt;y\mid X \geq b)\mathcal{P}(X\geq b)\\ &amp; = \mathcal{P}(0&lt;y\mid X&lt;b)\mathcal{P}(X&lt;b) + \mathcal{P}(X&lt;y\mid X \geq b)\mathcal{P}(X\geq b) \end{align}$$</p> <p>I've tried continuing using Bayes, but I always get kinda stuck. Which path should I follow?</p> <p>Is this the right mental picture for the problem? <img src="https://i.stack.imgur.com/OB7vT.png" alt="censored"></p> <p><em>Solution should be: $F_Y(y) = \frac{F_X(y)-F_X(b)}{1-F_X(b)}$</em></p>
abel
9,252
<p>The parabola is symmetric about $x = 0$ and has roots $\pm \dfrac{1}{\sqrt a}$, so the area bounded by the parabola and $y = 0$ is $$\int_{-1/\sqrt a}^{1/\sqrt a}(1-ax^2)\, dx=2 \int_0^{1/\sqrt a} (1 - ax^2)\, dx = 2\left[ x - \dfrac{ax^3}{3}\right]_0^{1/\sqrt a}= 2\left(\dfrac{1}{\sqrt a} - \dfrac{1}{3\sqrt a}\right) = \dfrac{4}{3 \sqrt a}$$</p> <p>If you set the area equal to one you can find $a.$</p>
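A quick numeric check of the formula (editorial addition; reading the parabola as $y=1-ax^2$, with the sample value $a=4$ chosen just for illustration):

```python
# Midpoint Riemann sum vs the closed form 4/(3*sqrt(a)), for a sample a = 4.
import math

a = 4.0
lo, hi, N = -1 / math.sqrt(a), 1 / math.sqrt(a), 100_000
h = (hi - lo) / N
area = sum((1 - a * (lo + (i + 0.5) * h) ** 2) * h for i in range(N))
assert abs(area - 4 / (3 * math.sqrt(a))) < 1e-6

# Setting 4/(3*sqrt(a)) = 1 gives a = 16/9, matching "set the area equal to one".
assert abs(4 / (3 * math.sqrt(16 / 9)) - 1) < 1e-12
```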
1,700,590
<p>Let $A=\{a_1\ldots,a_m \}$ be a set of linearly independent vectors. Suppose that each $a_j$ $(j=1,\ldots,m)$ can be written as a linear combination of vectors in the set $B=\{b_1,\ldots,b_n\}$.</p> <p>Then how to show that $m \le n$?</p> <p>I have tried as follows:</p> <p>Since $a_j \in A$, we have $a_j \in \operatorname{span} (B)$.</p> <p>Then we have $$ A \subseteq \operatorname{span} (B).$$</p> <p>Therefore, can we say that $m \le n$?</p> <p>Please give me the correct answer for this!!</p>
Darío G
27,454
<p>Suppose for a contradiction that <span class="math-container">$n&lt;m$</span> and write the equations:</p> <p><span class="math-container">$$\begin{align*}a_1&amp;=\alpha_{11}b_1+\ldots+\alpha_{1n}b_n\\\vdots &amp; \hspace{2cm}\vdots\\ a_m&amp;=\alpha_{m1}b_1+\cdots +\alpha_{mn}b_n\end{align*}$$</span></p> <p>From here we start a process that will allow us to obtain coefficients <span class="math-container">$\gamma_1,\ldots,\gamma_{n+1}$</span> not all zero such that <span class="math-container">$\gamma_1 a_1+\cdots + \gamma_{n+1}a_{n+1}=0$</span>, contradicting the fact that <span class="math-container">$\{a_1,\ldots,a_{n+1}\}$</span> are linearly independent.</p> <p>We do <span class="math-container">$n$</span> steps, and each step has three parts: &quot;substituting&quot;, &quot;renaming&quot;, &quot;separating&quot;.</p> <p>Step 1:</p> <ul> <li>In this step we do not substitute.</li> <li>Rename the vectors <span class="math-container">$b_1,\ldots,b_n$</span> such that <span class="math-container">$\alpha_{11}\neq 0$</span> (possible because <span class="math-container">$a_1\neq 0$</span>, so not all <span class="math-container">$\alpha_{1j}$</span> vanish).</li> <li>We have <span class="math-container">$\displaystyle{a_1=\alpha_{11} b_1 + \sum_{j=2}^n \alpha_{1j}b_j}$</span>, from which we obtain <span class="math-container">$$b_1=\dfrac{1}{\alpha_{11}}\left(a_1-\sum_{j=2}^{n} \alpha_{1j}b_j\right)$$</span></li> </ul> <hr /> <p>Step 2:</p> <ul> <li>Replace <span class="math-container">$b_1$</span> in the equation <span class="math-container">$a_2=\alpha_{21}b_1+\cdots+\alpha_{2n}b_n$</span>.</li> <li>Rename the coefficients and the <span class="math-container">$b_i's$</span> to obtain <span class="math-container">$$a_2=\beta_{21}a_1+\beta_{22}b_2+\cdots+\beta_{2n}b_n$$</span> with <span class="math-container">$\beta_{22}\neq 0$</span>. Note that <span class="math-container">$b_1$</span> is not mentioned in this equation because we substituted it with a linear combination of the vectors <span class="math-container">$a_1,b_2,\ldots,b_n$</span>.
Also, it is not possible that all coefficients of <span class="math-container">$b_2,\ldots,b_n$</span> are zero because otherwise <span class="math-container">$a_2=\beta_{21}a_1$</span>, which would contradict that <span class="math-container">$a_1,a_2$</span> are linearly independent.</li> <li>Separate <span class="math-container">$b_2$</span>: <span class="math-container">$$b_2=\dfrac{1}{\beta_{22}}\left(a_2-\beta_{21}a_1-\sum_{j=3}^n\beta_{2j}b_j\right)$$</span></li> </ul> <hr /> <p><span class="math-container">$\vdots$</span></p> <hr /> <p>Step <span class="math-container">$k+1$</span>:</p> <ul> <li><p>Replace <span class="math-container">$b_1,b_2,\ldots,b_k$</span> in terms of <span class="math-container">$a_1,\ldots,a_k,b_{k+1},\ldots,b_n$</span> in the equation <span class="math-container">$$a_{k+1}=\alpha_{k+1,1}b_1+\cdots+\alpha_{k+1,n}b_n$$</span></p> </li> <li><p>Rename the coefficients and the <span class="math-container">$b_i's$</span> (<span class="math-container">$i=k+1,\ldots,n$</span>) to obtain <span class="math-container">$$a_{k+1}=\beta_{k+1,1}a_1+\beta_{k+1,2}a_2+\cdots+\beta_{k+1,k}a_k+\beta_{k+1,k+1}b_{k+1}+\cdots+\beta_{k+1,n}b_n$$</span> with <span class="math-container">$\beta_{k+1,k+1}\neq 0$</span>.
Again, this is possible because if the coefficients of all <span class="math-container">$b_i's$</span> were zero, then it would contradict the fact that <span class="math-container">$a_1,\ldots,a_{k+1}$</span> are linearly independent.</p> </li> <li><p>Separate:<span class="math-container">$$b_{k+1}=\dfrac{1}{\beta_{k+1,k+1}}\left(a_{k+1}-\sum_{j=1}^k \beta_{k+1,j}a_j - \sum_{j=k+2}^n\beta_{k+1,j}b_j\right)$$</span></p> </li> </ul> <hr /> <p>After doing <span class="math-container">$n$</span> steps of this process, and substituting <span class="math-container">$b_1,\ldots,b_n$</span> in terms of <span class="math-container">$a_1,\ldots,a_n$</span> in the equation</p> <p><span class="math-container">$$a_{n+1}=\alpha_{n+1,1}b_1+\cdots + \alpha_{n+1,n}b_n$$</span></p> <p>we obtain</p> <p><span class="math-container">$$a_{n+1}=\beta_{n+1,1}a_1+\cdots+\beta_{n+1,n}a_n.$$</span></p> <p>If we put <span class="math-container">$\gamma_i=\beta_{n+1,i}$</span> for <span class="math-container">$i=1,\ldots,n$</span>, we finally obtain</p> <p><span class="math-container">$$\gamma_1 a_1 + \cdots + \gamma_n a_n - a_{n+1}=0,$$</span> which contradicts the fact that <span class="math-container">$a_1,\ldots,a_{n+1}$</span> are linearly independent.</p> <p>This final contradiction allows us to conclude that <span class="math-container">$m\leq n$</span>. QED</p>
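The conclusion can be illustrated numerically (editorial sketch with exact rational arithmetic; the specific spanning vectors and random coefficients are arbitrary): any four vectors lying in the span of three vectors have rank at most three, so they can never be linearly independent.

```python
# Exact-arithmetic illustration of the conclusion: any four vectors lying in the
# span of three vectors b_1, b_2, b_3 have rank at most 3, hence are dependent.
import random
from fractions import Fraction

def rank(rows):
    """Rank of a list of rational row vectors, by Gaussian elimination over Q."""
    rows = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(rows[0])):
        piv = next((i for i in range(r, len(rows)) if rows[i][c] != 0), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][c] != 0:
                f = rows[i][c] / rows[r][c]
                rows[i] = [x - f * y for x, y in zip(rows[i], rows[r])]
        r += 1
    return r

b = [[1, 0, 2, 0], [0, 1, 1, 0], [0, 0, 3, 1]]  # the b_j's (here n = 3)
random.seed(0)
a = []
for _ in range(4):  # four vectors a_i, each a combination of b_1, b_2, b_3
    coeffs = [random.randint(-3, 3) for _ in range(3)]
    a.append([sum(c * bj[t] for c, bj in zip(coeffs, b)) for t in range(4)])

assert rank(b) == 3
assert rank(a) <= 3  # so a_1, ..., a_4 can never be linearly independent
```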
1,781,117
<h1>The question</h1> <p>Prove that: $$\prod_{n=2}^∞ \left( 1 - \frac{1}{n^4} \right) = \frac{e^π - e^{-π}}{8π}$$</p> <hr> <h2>What I've tried</h2> <p>Knowing that: $$\sin(πz) = πz \prod_{n=1}^∞ \left( 1 - \frac{z^2}{n^2} \right)$$ evaluating at $z=i$ gives $$ i\,\frac{e^π - e^{-π}}{2} = \sin(πi) = πi \prod_{n=1}^∞ \left( 1 + \frac{1}{n^2} \right)$$ so: $$ \prod_{n=1}^∞ \left( 1 + \frac{1}{n^2} \right) = \frac{e^π - e^{-π}}{2π}$$</p> <p>I'm stuck and don't know how to continue. Any help?</p>
Jack D'Aurizio
44,121
<p>$$\frac{\sin(\pi z)}{\pi z}=\prod_{n\geq 1}\left(1-\frac{z^2}{n^2}\right),\qquad \frac{\sinh(\pi z)}{\pi z}=\prod_{n\geq 1}\left(1+\frac{z^2}{n^2}\right)\tag{1}$$ give: $$ \frac{\sin(\pi z)\sinh(\pi z)}{\pi^2 z^2(1-z^4)}=\prod_{n\geq 2}\left(1-\frac{z^4}{n^4}\right)\tag{2} $$ hence by considering $\lim_{z\to 1}LHS$ we have: $$ \prod_{n\geq 2}\left(1-\frac{1}{n^4}\right)=\frac{\sinh \pi}{4\pi} = \color{red}{\frac{e^\pi-e^{-\pi}}{8\pi}}\tag{3}$$ as wanted.</p>
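Numerically, the partial products converge quickly to $\sinh(\pi)/(4\pi)$, which gives a cheap consistency check of $(3)$ (editorial addition):

```python
# Partial products of prod_{n>=2} (1 - 1/n^4) vs sinh(pi)/(4*pi) = (e^pi - e^-pi)/(8*pi).
import math

prod = 1.0
for n in range(2, 20_000):
    prod *= 1 - 1 / n ** 4

target = (math.exp(math.pi) - math.exp(-math.pi)) / (8 * math.pi)
assert abs(target - math.sinh(math.pi) / (4 * math.pi)) < 1e-12
assert abs(prod - target) < 1e-9
```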
2,077,694
<p>How to find $A = M^{A}_{B}$ in linear transformation $F = \mathbb{P_{2}} \rightarrow \mathbb{R^{2}} $, where $ F(p(t)) = \begin{pmatrix} p(0) \\ P(1) \end{pmatrix},$ $ A = \{1,t,t^{2}\},$ $B=\left \{ \begin{pmatrix} 1\\ 0\end{pmatrix},\begin{pmatrix} 0\\ 1\end{pmatrix} \right \}$?</p>
Robert Israel
8,508
<p>If $n$ has proper divisors $d_1, \ldots, d_k$ with $d_1 + \ldots + d_k &gt; n$, then the proper divisors of $mn$ include $md_1, \ldots, m d_k$ with ... </p>
3,338,885
<p>I want to prove this theorem using the Binomial theorem and I've got trouble understanding the 3rd step; if anyone knows why, please explain :) Prove that: </p> <p><span class="math-container">$\sum_{r=0}^{k}\binom{m}{r}\binom{n}{k-r}=\binom{m+n}{k}$</span> </p> <p>1st step:<br> <span class="math-container">$(1+y)^{m+n}=(1+y)^m(1+y)^n$</span> </p> <p>2nd step: we use the binomial theorem formula and expand each side <span class="math-container">$\sum_{r=0}^{m+n}\binom{m+n}{r}y^r=\sum_{r=0}^{m}\binom{m}{r}y^r \sum_{r=0}^{n}\binom{n}{r}y^r$</span></p> <p>3rd step: equating the coefficient of <span class="math-container">$y^k$</span> </p> <p><span class="math-container">$\binom{m+n}{k}=\binom{m}{0}\binom{n}{k}+\binom{m}{1}\binom{n}{k-1}+...+\binom{m}{k}\binom{n}{k-k}$</span></p> <p>Why and how is the 3rd step allowed? Thank you </p>
Kavi Rama Murthy
142,385
<p>If <span class="math-container">$c_0+c_1y+\cdots +c_{n+m}y^{n+m}=d_0+d_1y+\cdots +d_{n+m}y^{n+m}$</span> for all <span class="math-container">$y$</span> then <span class="math-container">$c_i=d_i$</span> for all <span class="math-container">$i$</span>. This is a basic fact about polynomials (and comes from the fact that a non-zero polynomial has only finitely many zeros). This result allows you to equate the coefficients of like powers of <span class="math-container">$y$</span>. </p>
3,338,885
<p>I want to prove this theorem using the Binomial theorem and I've got trouble understanding the 3rd step; if anyone knows why, please explain :) Prove that: </p> <p><span class="math-container">$\sum_{r=0}^{k}\binom{m}{r}\binom{n}{k-r}=\binom{m+n}{k}$</span> </p> <p>1st step:<br> <span class="math-container">$(1+y)^{m+n}=(1+y)^m(1+y)^n$</span> </p> <p>2nd step: we use the binomial theorem formula and expand each side <span class="math-container">$\sum_{r=0}^{m+n}\binom{m+n}{r}y^r=\sum_{r=0}^{m}\binom{m}{r}y^r \sum_{r=0}^{n}\binom{n}{r}y^r$</span></p> <p>3rd step: equating the coefficient of <span class="math-container">$y^k$</span> </p> <p><span class="math-container">$\binom{m+n}{k}=\binom{m}{0}\binom{n}{k}+\binom{m}{1}\binom{n}{k-1}+...+\binom{m}{k}\binom{n}{k-k}$</span></p> <p>Why and how is the 3rd step allowed? Thank you </p>
Nick Shih
698,771
<p>By the equation in the 2nd step,</p> <p>Left: <span class="math-container">$$\binom{m+n}{0}y^{0}+\binom{m+n}{1}y^{1}+\cdots+\binom{m+n}{m+n-1}y^{m+n-1}+\binom{m+n}{m+n}y^{m+n}$$</span></p> <p>The coefficient of <span class="math-container">$y^{k}$</span> is <span class="math-container">$\binom{m+n}{k}$</span></p> <p>Right: <span class="math-container">$$(\binom{m}{0}y^{0}+\cdots+\binom{m}{m}y^{m})(\binom{n}{0}y^{0}+\cdots+\binom{n}{n}y^{n}) $$</span></p> <p>The term <span class="math-container">$y^{k}$</span> comes from <span class="math-container">$$\binom{m}{0}y^{0}\binom{n}{k}y^{k}+\binom{m}{1}y^{1}\binom{n}{k-1}y^{k-1}+ \cdots +\binom{m}{k-1}y^{k-1}\binom{n}{1}y^{1}+\binom{m}{k}y^{k}\binom{n}{0}y^{0}$$</span></p> <p>So the coefficient of <span class="math-container">$y^{k}$</span> is <span class="math-container">$\binom{m}{0}\binom{n}{k} + \binom{m}{1}\binom{n}{k-1}+ \cdots +\binom{m}{k-1}\binom{n}{1}+\binom{m}{k}\binom{n}{0}$</span></p> <p>By the equality in the 2nd step, the coefficients of <span class="math-container">$y^{k}$</span> on the left and right must be the same, so <span class="math-container">$$\binom{m+n}{k} = \binom{m}{0}\binom{n}{k} + \binom{m}{1}\binom{n}{k-1}+ \cdots +\binom{m}{k-1}\binom{n}{1}+\binom{m}{k}\binom{n}{0}$$</span></p>
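The identity (Vandermonde's identity) is also easy to spot-check numerically (editorial addition):

```python
# Spot-check of the identity with a few (m, n, k) triples.
# Note math.comb(n, j) returns 0 when j > n, matching the vanishing terms.
from math import comb

for m, n, k in [(5, 7, 4), (10, 3, 6), (8, 8, 8)]:
    lhs = sum(comb(m, r) * comb(n, k - r) for r in range(k + 1))
    assert lhs == comb(m + n, k)
```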
156,479
<p>Let $S$ be a compact oriented surface of genus at least $2$ (possibly with boundary). Let $X$ be a connected component of the space of embeddings of $S^1$ into $S$.</p> <p>Question : what is the fundamental group of $X$? My guess is that the answer is $\mathbb{Z}$ with generator the loop of embeddings obtained by precomposing the base embedding with a sequence of rotations of $S^1$.</p> <p>I'm also interested in the higher homotopy groups of $X$, which I would guess are trivial.</p> <hr> <p>Edit: In response to Sam Nead's question, I'm most interested in the smooth category, but am also interested in the topological category. There are technical issues in giving an appropriate topology to mapping spaces in the PL category, so the question doesn't really make sense there.</p>
Sam Nead
1,650
<p>$ \newcommand{\Homeo}{\operatorname{Homeo}} \newcommand{\SO}{\operatorname{SO}} $Since you are interested in the topological category, then I think it will suffice to prove the necessary facts about $\Homeo_0(S)$ and about the curve stabilizer. Now, there is a topological proof that $\Homeo_0(S)$ is contractible -- see this mathoverflow question: </p> <p><a href="https://mathoverflow.net/questions/18698/homotopy-type-of-set-of-self-homotopy-equivalences-of-a-surface">Homotopy type of set of self homotopy-equivalences of a surface</a></p> <p>Here is the paper of Hamstrom that they are referring to. </p> <p><a href="http://0-projecteuclid.org.pugwash.lib.warwick.ac.uk/euclid.ijm/1256054895" rel="nofollow noreferrer">http://0-projecteuclid.org.pugwash.lib.warwick.ac.uk/euclid.ijm/1256054895</a></p> <p>Reading a bit of that I learned that Kneser was the first to prove (in 1926!) that $\Homeo_0(S^2)$ deformation retracts to $\SO(3)$.</p>
2,618,804
<p>Let $V$ be a vector space of dimension $m\geq 2$ and $ T: V\to V$ be a linear transformation such that $T^{n+1}=0$ and $T^{n}\neq 0$ for some $n\geq1$. Then choose the correct statement(s):</p> <p>$(1)$ $rank(T^n)\leq nullity(T^n)$</p> <p>$(2)$ $rank(T^n)\leq nullity(T^{n+1})$</p> <p><strong>Try:</strong></p> <p>I found that this case is possible only if $n&lt;m$, and tried some examples for $(2)$; they suggested it is true, but I have no idea how to prove it. For $(1)$ I'm not getting anywhere. </p>
Community
-1
<p>$0=T^{n+1}(V)=T(T^n(V)) \implies rank(T^n)\le nullity (T)$. But of course $nullity T\le nullity T^{n+1}$</p> <p>So (2) is true.</p> <p>Oh and (1) is true also...</p> <p>For (1), see <a href="https://math.stackexchange.com/a/400927"> here</a>... </p>
216,532
<p>How do I find the limit of something like</p> <p>$$ \lim_{x\to \infty} \frac{2\cdot3^{5x}+5}{3^{5x}+2^{5x}} $$</p> <p>?</p>
filmor
7,247
<p>Divide both the numerator and the denominator by $3^{5x}$; that will do it.</p>
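Carrying out the hint: after dividing, the expression becomes $\dfrac{2+5\cdot 3^{-5x}}{1+(2/3)^{5x}}$, and both correction terms vanish as $x\to\infty$, so the limit is $2$. A quick exact-arithmetic check (editorial addition):

```python
# Exact big-integer evaluation at x = 50; the ratio is already 2 to within ~1e-44.
from fractions import Fraction

x = 50
ratio = Fraction(2 * 3 ** (5 * x) + 5, 3 ** (5 * x) + 2 ** (5 * x))
assert abs(float(ratio) - 2) < 1e-12
```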
3,522,867
<p>Consider the sequence <span class="math-container">$\{x_n\}_{n\ge1}$</span> defined by <span class="math-container">$$x_n=\sum_{k=1}^n\frac{1}{\sqrt{k+1}+\sqrt{k}}, \forall n\in\mathbb{N}.$$</span> Is <span class="math-container">$\{x_n\}_{n\ge 1}$</span> bounded or unbounded? </p> <p>I solved the problem as stated in the answer posted by me. Is it possible to solve the problem in a better and more rigorous manner?</p>
Kavi Rama Murthy
142,385
<p><span class="math-container">$\sqrt {k+1}+\sqrt k \leq \sqrt {2k}+\sqrt k&lt;3\sqrt k$</span> for <span class="math-container">$k\geq 1$</span>. Hence the given sum is at least <span class="math-container">$\sum\limits_{k=1}^{n} \frac 1 {3\sqrt k}$</span>. Now use the fact that the series <span class="math-container">$\sum\limits_{k=1}^{\infty} \frac 1 {3\sqrt k}$</span> is divergent.</p>
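A complementary route (editorial addition): rationalizing gives $\frac{1}{\sqrt{k+1}+\sqrt k}=\sqrt{k+1}-\sqrt k$, so the sum telescopes to $x_n=\sqrt{n+1}-1$, which makes the unboundedness explicit and is easy to confirm numerically:

```python
# The partial sums telescope: x_n = sqrt(n+1) - 1, which tends to infinity.
import math

for n in [10, 100, 1000]:
    x_n = sum(1 / (math.sqrt(k + 1) + math.sqrt(k)) for k in range(1, n + 1))
    assert abs(x_n - (math.sqrt(n + 1) - 1)) < 1e-9
```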
2,512,556
<p>What would be the solution of $ y''+y=\cos (ax) \ $ if $ \ a \to 1 \ $. </p> <p><strong>Answer:</strong></p> <p>I have found the complementary function $ y_c \ $ </p> <p>$ y_c(x)=A \cos x+B \sin x \ $</p> <p>But How can I find the particular integral if $ a \to 1 \ $ </p>
Plutoro
108,709
<p>For $a\neq 1$, we would choose $ P\cos(ax)+Q\sin(ax)$ for our particular solution, and solve for $P$ and $Q$. But if $a=1$, then this form can never produce the right-hand side $\cos(x)$, because it is itself a solution of the homogeneous equation. The standard strategy in this situation is to multiply the standard form for the particular solution by $x$. So try $$y_p(x)=Px\cos(x)+Qx\sin(x).$$ Then $$y_p''(x)+y_p(x)=-2 P \sin (x)-P x \cos (x)-Q x \sin (x)+2 Q \cos (x)+Px\cos(x)+Qx\sin(x)=2 Q \cos (x)-2 P \sin (x).$$ To make this equal to $\cos(x)$, we require $Q=1/2$ and $P=0$. Therefore, your particular solution is $$y_p(x)=\frac{1}{2}x\sin(x).$$</p>
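A finite-difference check that $y_p(x)=\tfrac12 x\sin x$ really satisfies $y''+y=\cos x$ (editorial addition; the step size and sample points are arbitrary):

```python
# Central second difference of y_p(x) = x*sin(x)/2, checked against cos(x).
import math

def y(x):
    return 0.5 * x * math.sin(x)

h = 1e-4
for x in [0.3, 1.7, 4.0]:
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h ** 2  # approximates y''(x)
    assert abs(y2 + y(x) - math.cos(x)) < 1e-5
```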
1,844,374
<p>Why does the "$\times$" used in arithmetic change to a "$\cdot$" as we progress through education? The symbol seems to only be ambiguous because of the variable $x$; however, we wouldn't have chosen the variable $x$ unless we were already removing $\times$ as the symbol for multiplication. So why do we? I am very curious. It seems like $\times$ is already quite sufficient as a descriptive symbol.</p>
hjhjhj57
150,361
<p>I believe the reason must be mostly pedagogical:</p> <ul> <li>As other answers mention, a kid learning about decimals may get confused between a product and a decimal.</li> <li><p>It may be easier for kids to think about $\times$ than $\cdot$, given that they already know the addition symbol $+$, and the two are quite similar.</p></li> <li><p>Kids usually multiply only small sets of numbers, and they do it to learn the technique. Once the technique has been mastered, efficiency (and therefore practicality) is what matters most! Imagine having to write $\times$ every time you multiply something.</p></li> </ul> <p>For the historical part, here are two references which put together confirm that Descartes was the one who first used $x$ as a variable <em>and</em> used the juxtaposition convention for multiplication: <a href="https://mathoverflow.net/a/30414/88911">about convention for unknowns</a> and <a href="https://hsm.stackexchange.com/a/2972/520">about multiplicative notation</a>. In fact, both of them cite <a href="https://archive.org/details/historyofmathema031756mbp" rel="nofollow noreferrer">Cajori's</a> as their main reference.</p>
70,976
<blockquote> <p>I'm considering the ring $\mathbb{Z}[\sqrt{-n}]$, where $n\ge 3$ and square free. I want to see why it's not a UFD.</p> </blockquote> <p>I defined a norm for the ring by $|a+b\sqrt{-n}|=a^2+nb^2$. Using this I was able to show that $2$, $\sqrt{-n}$ and $1+\sqrt{-n}$ are all irreducible. Is there someway to conclude that $\mathbb{Z}[\sqrt{-n}]$ is not a UFD based on this? Thanks.</p>
Mr. Brooks
162,538
<p>It really depends on what $-n$ is, but if you have to guess for a random $n$, it's a safe bet to say it's not a UFD, because only nine of the imaginary quadratic rings are UFDs. The general principle is to find an example of a number with two distinct factorizations, thereby proving the domain is not a unique factorization domain.</p> <p>The norm function is of crucial importance. I've seen the norm function normally defined as $N(a + b \sqrt{-n}) = a^2 + nb^2$. It's also important to note that $N(a + b \sqrt{-n}) = (a - b \sqrt{-n})(a + b \sqrt{-n})$. This means that if $m$ is an integer without an imaginary part, and prime in $\mathbb{Z}$, then it's irreducible if there's no way to express it as $a^2 + nb^2$. From this it follows that those real integers from $2$ to $n - 1$ which are prime in $\mathbb{Z}$ are therefore irreducible in $\mathbb{Z}[\sqrt{-n}]$. These are called "inert primes;" there are other inert primes, of course, but for those you have to check norms to be sure.</p> <p>If $n = 2k$ and $k &gt; 1$ is a squarefree real integer, it automatically follows that $\mathbb{Z}[\sqrt{-n}]$ is not a UFD because $n = (-1)(\sqrt{-n})^2 = 2k$. The latter factorization is further reducible if $k$ is composite in $\mathbb{Z}$, but it still represents a second factorization, distinct from the former, and therefore $\mathbb{Z}[\sqrt{-n}]$ is not a UFD.</p> <p>For example, $10$ in $\mathbb{Z}[\sqrt{-10}]$. $2$ and $5$ are irreducible in this domain, as is $\sqrt{-10}$. Then we have $10 = (-1)(\sqrt{-10})^2 = 2 \times 5$. Since $10$ has two distinct factorizations in $\mathbb{Z}[\sqrt{-10}]$, that domain is not a UFD.</p> <p>If $n = 2k - 1$ squarefree, we just need to show that $2k = (1 - \sqrt{-n})(1 + \sqrt{-n})$ and we're done. That is, unless $n \equiv 3 \bmod 4$. Because in that case $\mathbb{Z}[\sqrt{-n}]$ is not "integrally closed." 
Then we have to consider not just numbers of the form $a + b \sqrt{-n}$, but we also have to look at numbers of the form $\frac{a}{2} + \frac{b \sqrt{-n}}{2}$, where both $a$ and $b$ are odd. These numbers are sometimes called "half-integers" purely for convenience. The norm function is then $N\left(\frac{a}{2} + \frac{b \sqrt{-n}}{2}\right) = \frac{a^2}{4} + \frac{nb^2}{4}$, and it gives real integers as results. Then we have to see if maybe $\frac{n + 1}{4}$ is composite and has two distinct factorizations, or if it is prime then we look to see if there's another real integer of the form $\frac{a^2}{4} + \frac{nb^2}{4}$ with two distinct factorizations in the domain at hand.</p> <p>For example, in $\mathbb{Z}\left[\frac{1}{2} + \frac{\sqrt{-15}}{2}\right]$ (which we can also notate as $\mathcal{O}_{\mathbb{Q}(\sqrt{-15})}$ if we want), $2$ is irreducible, as is $\frac{1}{2} \pm \frac{\sqrt{-15}}{2}$. Then we have $4 = \left(\frac{1}{2} - \frac{\sqrt{-15}}{2}\right)\left(\frac{1}{2} + \frac{\sqrt{-15}}{2}\right) = 2^2$.</p> <p>It's a little trickier in $\mathcal{O}_{\mathbb{Q}(\sqrt{-51})}$, since $\frac{51 + 1}{4} = 13$, which is prime in $\mathbb{Z}$. So we look instead to $15 = \left(\frac{3}{2} - \frac{\sqrt{-51}}{2}\right)\left(\frac{3}{2} + \frac{\sqrt{-51}}{2}\right)$. This is a distinct factorization from $3 \times 5$ because $3$ and $5$ are irreducible in this domain.</p> <p>There are only seven values of $n \equiv 3 \bmod 4$ such that $\mathcal{O}_{\mathbb{Q}(\sqrt{-n})}$ is a UFD: $3, 7, 11, 19, 43, 67, 163$. Together with $1$ and $2$, these are the nine values of $n$ corresponding to imaginary quadratic rings with unique factorization. Carl Friedrich Gauss knew these numbers, but Heegner and Stark are the ones who proved there are no others. 
(Real quadratic rings, that's a different story).</p> <p>I have to admit I'm still a novice at this subject, for up to a few months ago I was quite unaware of the existence of $\sqrt{-1}$ and I still find the concept somewhat unreal and hard to believe, no pun intended. I may have made some tiny mistakes which others will greatly magnify.</p>
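The norm computations behind the $\mathbb{Z}[\sqrt{-10}]$ example can be confirmed by brute force (editorial addition):

```python
# In Z[sqrt(-10)], N(a + b*sqrt(-10)) = a^2 + 10*b^2. Searching |a|, |b| <= 10
# suffices, since larger values only give larger norms. No element has norm 2
# or 5, so a proper factor of 2 (norm 4), 5 (norm 25) or sqrt(-10) (norm 10)
# cannot exist: it would need norm 2 or 5.
norms = {a * a + 10 * b * b for a in range(-10, 11) for b in range(-10, 11)}
assert 2 not in norms and 5 not in norms
assert 10 in norms  # N(sqrt(-10)) = 10, via a = 0, b = 1
# Hence 10 = 2 * 5 = -(sqrt(-10))^2 gives two distinct factorizations: no UFD.
```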
1,925,245
<p>Find the eigenvalues of $$ \left(\begin{matrix} C_1 &amp; C_1 &amp; C_1&amp;\cdots&amp;C_1 \\ C_2 &amp; C_2 &amp; C_2&amp;\cdots &amp;C_2 \\ C_3 &amp; C_3 &amp; C_3&amp;\cdots&amp;C_3 \\ \vdots&amp;\vdots&amp;\vdots&amp;\ddots&amp;\vdots\\ C_n &amp; C_n &amp; C_n&amp;\cdots&amp;C_n \\ \end{matrix}\right) $$</p> <p>My approach: $$\left| \begin{matrix} C_1-λ &amp; C_1 &amp; C_1&amp;...&amp;C_1 \\ C_2 &amp; C_2-λ &amp; C_2&amp;...&amp;C_2 \\ C_3 &amp; C_3 &amp; C_3-λ&amp;...&amp;C_3 \\ \vdots &amp; \vdots &amp; \vdots &amp; \ddots &amp; \vdots\\ C_n &amp; C_n &amp; C_n&amp;...&amp;C_n-λ \end{matrix}\right| $$ and I eventually have \begin{align} \ -λ^3 +(C_1+C_2+C_3)λ^2-C_1C_3λ-C_1C_2=0 \end{align} </p> <p>and then I'm lost at this stage...</p>
Nicholas Stull
28,997
<p>I'll work it out for $n=2$, $n=3$ and you can try to extrapolate (and figure out the pattern) from there:</p> <p>$n=2$:</p> <p>$$A = \left(\begin{array}{cc}a&amp;a\\b&amp;b\end{array}\right)$$ $$\det(A-\lambda I) = \left|\begin{array}{cc}a-\lambda&amp;a\\b&amp;b-\lambda\end{array}\right|= (a-\lambda)(b-\lambda)-ab = \lambda^2-(a+b)\lambda$$ which has two roots, namely $\lambda = 0$, $\lambda = a+b$.</p> <p>$n=3$:</p> <p>$$A = \left(\begin{array}{ccc}a&amp;a&amp;a\\b&amp;b&amp;b\\c&amp;c&amp;c\end{array}\right)$$ \begin{align*} \det(A-\lambda I) &amp;= \left| \begin{array}{ccc}a-\lambda&amp;a&amp;a\\b&amp;b-\lambda&amp;b\\c&amp;c&amp;c-\lambda\end{array} \right|\\ &amp;= (a-\lambda)\left|\begin{array}{cc}b-\lambda&amp;b\\c&amp;c-\lambda\end{array}\right| - a \left|\begin{array}{cc}b&amp;b\\c&amp;c-\lambda\end{array}\right| + a\left|\begin{array}{cc}b&amp;b-\lambda\\c&amp;c\end{array}\right|\\ &amp;= (a-\lambda)[(b-\lambda)(c-\lambda) -bc] - a[b(c-\lambda)-bc] + a[bc-c(b-\lambda)]\\ &amp;= (a-\lambda)(\lambda^2-(b+c)\lambda)+ a(b\lambda) + a(c\lambda)\\ &amp;= \lambda^2(a+b+c) - a(b+c)\lambda-\lambda^3 + a(b+c)\lambda\\ &amp;= -\lambda^3+(a+b+c)\lambda^2\\ &amp;= -\lambda^2(\lambda - (a+b+c)) \end{align*} which has three roots, namely $\lambda=0$, $\lambda=0$, $\lambda=a+b+c$.</p> <p>The rest can be extrapolated from this. (I have included the desired result in the end, in case you want to see it, but you should try to convince yourself of the result before looking at this.)</p> <hr> <p>By induction, you can prove that the matrix </p> <p>$$A = \left(\begin{array}{cccc}C_1 &amp; C_1 &amp; \cdots &amp; C_1\\ C_2&amp;C_2&amp;\cdots&amp;C_2\\ \vdots &amp; \ddots &amp; \ddots&amp; \vdots \\ C_n &amp; C_n &amp; \cdots &amp; C_n \end{array}\right)$$</p> <blockquote class="spoiler"> <p> has eigenvalues given by $\lambda=0$ with multiplicity $n-1$, and $\lambda = \sum_{k=1}^n C_k$ with multiplicity $1$.</p> </blockquote>
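The extrapolated result is also easy to verify directly, since every row of the matrix is a constant multiple of $(1,\dots,1)$ (editorial addition; the sample $C$ is arbitrary):

```python
# For the matrix with rows (C_i, ..., C_i): vectors with zero coordinate sum are
# killed (eigenvalue 0, an (n-1)-dimensional space), and (C_1, ..., C_n) is an
# eigenvector for the eigenvalue C_1 + ... + C_n.
def matvec(C, x):
    s = sum(x)
    return [c * s for c in C]  # (A x)_i = C_i * (x_1 + ... + x_n)

C = [3, -1, 4, 1, 5]
trace = sum(C)

for k in range(1, len(C)):        # e_1 - e_k, k = 2..n: a basis of {sum = 0}
    e = [0] * len(C)
    e[0], e[k] = 1, -1
    assert matvec(C, e) == [0] * len(C)

assert matvec(C, C) == [trace * c for c in C]   # eigenvalue C_1 + ... + C_n
```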
2,718,286
<p>I have come across this problem and after trying to answer it for some time, I thought my solution was correct, but apparently it is not. Can you please explain to me what I have done wrong?</p> <p>Problem:</p> <p>A train traveling from Aytown to Beetown meets with an accident after 1 hour. The train is stopped for 30 minutes, after which it proceeds at four-fifths its usual rate, arriving at Beetown 2 hours late. If the train had covered 80 miles before the accident, it would have been just one hour late. What is the usual rate of the train?</p> <p>My Attempt:</p> <p>Let $d$ be the total distance of the trip, $t$ the usual time and $x$ the usual rate. We know that $$x\cdot t = d$$</p> <p>The first part of the question tells us that $$x\cdot 1+ \frac 45 x(t_2) = d$$ where $t_2$ is the time traveled at four-fifths the usual speed and $$t_2 = (t+2)-\frac 12 - 1 = t+\frac 12.$$ The $(t+2)$ is the "two hours late" part and the $(-1-\frac 12)$ is the break and the one hour before the accident. The second part tells us that $$80 + \frac 45 x(t_3) = d$$ where, again, $t_3$ is the time traveled at four-fifths the usual speed and $$t_3 = (t+1)-\frac 12 - \frac{80}x.$$ Similar to the first equation, $(t+1)$ is the one-hour late part and the $(-\frac 12 - \frac{80}x)$ is the break along with the time spent travelling before the accident. Solving for $x$, I got $16$; however, the answer is $20$. What have I done wrong?</p>
saulspatz
235,128
<p>The problem is double counting. Let's just consider the example you gave. Since we know that there are $\binom{12}{6}$ strings in all, we just have to count the strings with three consecutive ones. There are $10$ positions for the first one in the substring and then $\binom{9}{3}$ places to put the other three ones, so this gives $10\binom{9}{3}$ strings. But there are strings with four consecutive ones, and we've counted them twice, so we have to subtract them. That gives $10\binom{9}{3}-9\binom{8}{2}.$ Now what about strings with five consecutive ones? We've added three strings of length three, and subtracted two of length four, so we've only counted this string once, and no adjustment is needed. (This surprises me. Usually, when you use the principle of inclusion and exclusion, you add and subtract at every point, but if I've made a mistake I can't find it.) For a string with six ones, we have added four strings of length three, and subtracted three of length four, so again we've counted it once and no adjustment is needed.</p> <p>We're not done yet, however. It's possible to have two substrings of three ones that don't intersect at all as in $111011100000.$ I leave it to you to count those.</p> <p>So far as I know, there isn't a formula that will work for all substrings. For example, if the "target" string were $101$ instead of $111$, there is no four-bit string that can contain the substring twice, so the analysis will be different.</p>
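This kind of double-counting bookkeeping is error-prone, so a brute-force count is a useful cross-check (editorial addition; it gives the ground truth that any inclusion-exclusion computation must match):

```python
# Count, among the C(12,6) = 924 strings with six ones, those containing "111".
from itertools import combinations

total = with_run = 0
for ones in combinations(range(12), 6):
    s = "".join("1" if i in ones else "0" for i in range(12))
    total += 1
    with_run += "111" in s
assert total == 924
assert 0 < with_run < total  # ground truth for the inclusion-exclusion count
```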
2,718,286
<p>I have come across this problem and after trying to answer it for some time, I thought my solution was correct, but apparently it is not. Can you please explain to me what I have done wrong?</p> <p>Problem:</p> <p>A train traveling from Aytown to Beetown meets with an accident after 1 hour. The train is stopped for 30 minutes, after which it proceeds at four-fifths its usual rate, arriving at Beetown 2 hours late. If the train had covered 80 miles before the accident, it would have been just one hour late. What is the usual rate of the train?</p> <p>My Attempt:</p> <p>Let $d$ be the total distance of the trip, $t$ the usual time and $x$ the usual rate. We know that $$x\cdot t = d$$</p> <p>The first part of the question tells us that $$x\cdot 1+ \frac 45 x(t_2) = d$$ where $t_2$ is the time traveled at four-fifths the usual speed and $$t_2 = (t+2)-\frac 12 - 1 = t+\frac 12.$$ The $(t+2)$ is the "two hours late" part and the $(-1-\frac 12)$ is the break and the one hour before the accident. The second part tells us that $$80 + \frac 45 x(t_3) = d$$ where, again, $t_3$ is the time traveled at four-fifths the usual speed and $$t_3 = (t+1)-\frac 12 - \frac{80}x.$$ Similar to the first equation, $(t+1)$ is the one-hour late part and the $(-\frac 12 - \frac{80}x)$ is the break along with the time spent travelling before the accident. Solving for $x$, I got $16$; however, the answer is $20$. What have I done wrong?</p>
Remy
325,426
<p>If you're just interested in the probability of getting at least three consecutive ones, then this problem is analogous to finding the probability that the length of the longest run of heads in $n$ coin tosses, say $\ell_n$, exceeds a given number $m$. This probability is given by</p> <p>$$\mathbb{P}(\ell_n \geq m)=\sum_{j=1}^{\lfloor n/m\rfloor} (-1)^{j+1}\left(p+\left({n-jm+1\over j}\right)(1-p)\right){n-jm\choose j-1}p^{jm}(1-p)^{j-1}.$$</p> <p><strong>Reference</strong>:</p> <p><a href="https://math.stackexchange.com/a/59749/325426">https://math.stackexchange.com/a/59749/325426</a></p> <p>In our case, $n=11$, $m=3$, and I assume $p=0.5$</p> <p>so we get</p> <p>$$\mathbb{P}(\ell_n \geq 3)=\sum_{j=1}^{3} (-1)^{j+1}\left(0.5+\left({11-3j +1\over j}\right)(1-0.5)\right){11-3j\choose j-1}0.5^{3j}(1-0.5)^{j-1}$$</p> <p>By Wolfram Alpha, we get</p> <p>$$\mathbb{P}(\ell_n \geq 3)\approx 0.5474$$</p> <p><strong>R Simulation:</strong></p> <pre><code>count &lt;- 0
for (i in 1:1000000) {
  coin &lt;- sample(c("H", "T"), 11, replace = TRUE)
  coin.rle &lt;- rle(coin)
  longest &lt;- max(tapply(coin.rle$lengths, coin.rle$values, max)["H"])
  if (!is.na(longest) &amp;&amp; longest &gt;= 3) count &lt;- count + 1
}
count / 1000000
[1] 0.547234
</code></pre> <p>so our empirical answer agrees with our analytical answer. </p>
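<p>Since $n=11$ is small, the formula and the simulation can both be cross-checked by exhaustive enumeration of all $2^{11}=2048$ outcomes (a quick sketch of my own):</p>

```python
# Count 11-toss sequences whose longest run of heads is at least 3, i.e.
# those containing "111" when the tosses are written as bits.
n = 11
count = sum(1 for b in range(2 ** n) if "111" in format(b, "011b"))
prob = count / 2 ** n
print(count, prob)  # 1121 out of 2048, about 0.5474
```

<p>The exact value $1121/2048 \approx 0.54736$ agrees with both the analytic $0.5474$ and the simulated $0.547234$.</p>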
2,495,440
<p>If $f'(x) + f(x) = x,\;$ find $f(4)$.</p> <p>Could someone help me solve this problem?</p> <p>The answer is 3 but I don't know why, <em>with no use of integration or exponential functions</em>, and <em>the function is polynomial</em>.</p>
Guy Fsone
385,707
<p>hint: multiplying both side by $e^x$ we have </p> <p>$$f'(x) + f(x) = x\Longleftrightarrow ( f(x)e^x)'= xe^x \Longleftrightarrow f(x)e^x= xe^x -e^x +c. $$ can you continue from here ?</p>
2,495,440
<p>If $f'(x) + f(x) = x,\;$ find $f(4)$.</p> <p>Could someone help me solve this problem?</p> <p>The answer is 3 but I don't know why, <em>with no use of integration or exponential functions</em>, and <em>the function is polynomial</em>.</p>
ℋolo
471,959
<p>if $f'(x) + f(x) = x$ then $f'(x)e^x + f(x)e^x = xe^x$. Now remember the product rule: $(f(x)g(x))'=f'(x)g(x)+g'(x)f(x)$. If $g(x)=e^x$ then you have $(f(x)e^x)'=f'(x)e^x+\left[e^x\right]'f(x)$, and because $\left[e^x\right]'=e^x$ you have $(f(x)e^x)'=f'(x)e^x+e^xf(x)$, so you are left with:$$f'(x) + f(x) = x\implies f'(x)e^x + f(x)e^x = xe^x\implies(f(x)e^x)'=xe^x\\\implies f(x)e^x=\int xe^x\ dx=xe^x -e^x +c$$</p> <p>Edit</p> <p>If you know it is polynomial then:</p> <p>We can assume that $f(x)$ is first degree and we get $$f(x) + f'(x)= x\\(ax+b)+(a)=x\\(ax)+(b+a)=x\\\begin{cases}a=1\\b+a=0\implies b+1=0\implies b=-1\end{cases}$$ And you finally get $f(x)=x-1$; just put $4$ in place of $x$ and you'll get $3$.</p>
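<p>A quick numeric sanity check (my addition): with $f(x)=x-1$ we have $f'(x)=1$, so $f'(x)+f(x)=x$ holds identically and $f(4)=3$.</p>

```python
# f(x) = x - 1 has constant derivative 1, so f'(x) + f(x) = 1 + (x - 1) = x
f = lambda x: x - 1
for x in [0.0, 1.5, 4.0, -2.0]:
    assert 1 + f(x) == x
print(f(4))  # 3
```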
3,173,636
<p>I have been trying to prove that there is no embedding from a torus to <span class="math-container">$S^2$</span>, but to no avail.</p> <p>I am completely stuck on where to start. The proof is supposed to be based on homology theory. I know how to prove that <span class="math-container">$S^n$</span> cannot be embedded in <span class="math-container">$\mathbb{R}^n$</span>; however, that hasn't helped me in this case. Any help or other examples of how to prove the nonexistence of an embedding would be great. </p>
Lee Mosher
26,501
<p>Suppose that there exists an embedding <span class="math-container">$f : T^2 \to S^2$</span>. </p> <p>Each point <span class="math-container">$x \in T^2$</span> has an open neighborhood <span class="math-container">$U$</span> homeomorphic to the open unit disc, and it follows that <span class="math-container">$f(U) \subset S^2$</span> is homeomorphic to the open unit disc. By the Invariance of Domain theorem, <span class="math-container">$f(U)$</span> is an open subset of <span class="math-container">$S^2$</span>. This shows that <span class="math-container">$f(T^2)$</span> is an open subset of <span class="math-container">$S^2$</span>.</p> <p>But <span class="math-container">$T^2$</span> is compact, so <span class="math-container">$f(T^2)$</span> is compact, so it is also a closed subset of <span class="math-container">$S^2$</span>.</p> <p>By connectivity of <span class="math-container">$S^2$</span>, it follows that <span class="math-container">$f(T^2)=S^2$</span>. So <span class="math-container">$f$</span> is a homeomorphism, contradicting that <span class="math-container">$S^2$</span> is simply connected and <span class="math-container">$T^2$</span> is not.</p>
3,950,808
<p><em>(note: this is very similar to <a href="https://math.stackexchange.com/questions/188252/spivaks-calculus-exercise-4-a-of-2nd-chapter">a related question</a> but as I'm trying to solve it without looking at the answer yet, I hope the gods may humor me anyways)</em></p> <p>I'm self-learning math, and an <a href="https://www.reddit.com/r/math/comments/kcb1cd/how_do_i_gain_proficiency_in_mathematics_through/gfqn6y2/" rel="nofollow noreferrer">answer</a> to an /r/math post about self-learning was finally enough to motivate me to try getting feedback on this site :) . After reading throughout the internet, I've decided to start with Spivak's <em>Calculus</em>. I'm loving the book thankfully, but I'm stuck on this problem and I don't want to look at the answer quite yet.</p> <blockquote> <p>4 . (a) Prove that <span class="math-container">$$\sum_{k=0}^l \binom{n}{k} \binom{m}{l-k} = \binom{n+m}{l}$$</span>. Hint: Apply the binomal theorem to <span class="math-container">$(1+x)^n(1+x)^m$</span> .</p> </blockquote> <p>I've done all the prior problems, including Problem 3 (proving the Binomial Theorem) which is obviously closely tied to this, but even with the hint it feels like there's too much I don't know. I applied the hint and found:</p> <p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i} x^i\right)\left(\sum_{j=0}^m \binom{m}{j} x^j\right) = \sum_{k=0}^{n+m} \binom{n+m}{k} x^k$$</span></p> <p>And, setting <span class="math-container">$x=1$</span> got something even more interesting:</p> <p><span class="math-container">$$\left(\sum_{i=0}^n \binom{n}{i}\right)\left(\sum_{j=0}^m \binom{m}{j}\right) = \sum_{k=0}^{n+m} \binom{n+m}{k}$$</span></p> <p>However, I don't know where to go after that...is there some property of multiplying sums that I need to prove first, to relate the multiplication of two sums of binomials with the sum of the multiplication of two binomials?</p> <p>Thank you!</p>
Albus Dumbledore
769,226
<p><strong>Hint</strong>:</p> <p>Think about the sum as the <strong>coefficient of <span class="math-container">$x^l$</span></strong> in the expansion <span class="math-container">${(1+x)}^n{(x+1)}^m$</span></p>
3,308,291
<p>I have an array of numbers (a column in Excel). I calculated half of the set's total, and now I need the minimum number of the set's values whose sum is greater than or equal to half of the total. </p> <p>Example:</p> <pre><code>The set: 5, 5, 3, 3, 2, 1, 1, 1, 1
Half of the total is: 11
The least amount of set values that need to be added to get 11 is 3
</code></pre> <p>What is the formula to get '3'?</p> <p>It's probably something basic, but I have not used this in a while and may have just forgotten it.</p> <p>Normally I would use a simple while loop with a sort, but I am in Excel, so I was wondering whether there is a more elegant solution. </p> <p>P.S. I have the values sorted in descending order to make things easier.</p> <p>EDIT: Example</p>
Ethan Bolker
72,858
<p>It's not closed because <span class="math-container">$1+1$</span> isn't in it.</p>
2,294,548
<p><strong>Problem:</strong> Solve $y'=\sqrt{xy}$ with the initial condition $y(0)=1$.</p> <p><strong>Attempt:</strong> Using $\sqrt{ab}=\sqrt{a}\cdot\sqrt{b}$, I get that the DE is separable by dividing both sides by $\sqrt{y}:$ $$y'=\sqrt{x}\cdot\sqrt{y}\Leftrightarrow\frac{y'}{\sqrt{y}}=\sqrt{x}$$</p> <p>which can be rearranged to $$\frac{1}{\sqrt{y}}dy=\sqrt{x}dx$$ and proceeding to integrate both sides. </p> <p>$$\int\frac{1}{\sqrt{y}} \ dy=\int\sqrt{x} \ dx \Longleftrightarrow2\sqrt{y}+C_1=\frac{2x\sqrt{x}}{3}+C_2$$</p> <p>Which eventually gives $$y(x)=\left(\frac{\frac{2x\sqrt{x}}{3}+C_2-C_1}{2}\right)^2=\left(\frac{x\sqrt{x}}{3}+D\right)^2=\frac{x^3}{9}+D.$$</p> <p><strong>Question:</strong> However, according to <a href="https://math.stackexchange.com/questions/2293258/sqrtx2-1-sqrtx1-cdot-sqrtx-1?noredirect=1#comment4717869_2293258">this question</a> I posted yesterday, $\sqrt{xy}=\sqrt{x}\cdot\sqrt{y}$ only holds for $x,y\geq 0$, but nowhere in this question is this restriction given given. Why is it ok for me to use it then?</p> <p><strong>Sidenote/question:</strong> Is my way of solving the DE correct? Any room for improvement?</p>
Lutz Lehmann
115,115
<p>Yes, you can make that restriction based on the initial point.</p> <p>You could avoid using two integration constants; using a single integration constant $D$ (or $C$) when integrating both sides of an equation is standard.</p>
2,663,303
<blockquote> <p>Let $G$ be finite. Suppose that $\left\vert \{x\in G\mid x^n =1\}\right\vert \le n$ for all $n\in \mathbb{N}$. Then $G$ is cyclic.</p> </blockquote> <p>What I have attempted was the fact that every element is contained in a maximal subgroup following that <a href="https://groupprops.subwiki.org/wiki/Cyclic_iff_not_a_union_of_proper_subgroups" rel="nofollow noreferrer">cyclic iff not a union of cyclic subgroups</a>, <a href="https://math.stackexchange.com/questions/2060998/count-elements-in-a-cyclic-group-of-given-order">the order of elements of a cyclic group</a>, and Sylow-$p$ subgroups......But none of them seems helpful. </p> <p>: ) It’s very kind of you to give me some hints to push me further. Thanks!</p>
Community
-1
<p>Although <a href="https://math.stackexchange.com/questions/1593222/a-finite-abelian-group-a-is-cyclic-iff-for-each-n-in-bbbn-a-in-a">the duplicate</a> supposes $G$ to be abelian, Ihf’s answer doesn’t. It’s a perfect answer and please see an even more complete one <a href="https://math.stackexchange.com/questions/59903/finite-subgroups-of-the-multiplicative-group-of-a-field-are-cyclic">here</a>. By the way, the answers given here are also very nice!</p>
4,537,685
<p>Let <span class="math-container">$H$</span> be a Hilbert space, and let <span class="math-container">$$H_n = \otimes_n H = \Big\{\sum_{i_1,\ldots, i_n} \alpha_{i_1, \ldots, i_n} \big(e_{i_1} \otimes \cdots \otimes e_{i_n}\big) : \sum_{i_1,\ldots, i_n} |\alpha_{i_1, \ldots, i_n}|^2&lt;\infty \Big\}$$</span> denote the <span class="math-container">$n$</span>-fold tensor product of <span class="math-container">$H$</span>. Here <span class="math-container">$\{e_i\}$</span> denotes a basis element for <span class="math-container">$H_i$</span> and <span class="math-container">$\alpha_{i_1, \ldots, i_n} \in \mathbb{C}$</span>. Please correct me if this definition is incorrect.</p> <p>I am trying to understand a particular subset of <span class="math-container">$H_n$</span>, namely the symmetric tensor product. This is defined as the space <span class="math-container">$$H_n^s = \Big\{\sum_{i_1,\ldots, i_n} \alpha_{i_1, \ldots, i_n} \big(e_{i_1} \otimes \cdots \otimes e_{i_n}\big) \in H_n: ~\alpha_{i_1, \ldots, i_n} = \alpha_{i_{\sigma(1)}, \ldots, i_{\sigma(n)}}~ \forall \sigma \in S_n,\Big\}$$</span> where <span class="math-container">$S_n$</span> is of course the permutation group of <span class="math-container">$n$</span> objects.</p> <p>To me, this says that for every vector in <span class="math-container">$H_n^s$</span> if we swap the order of the coefficient's components then the result is the same coefficient we started with. However this is clearly wrong, as this would imply that the coefficients must all be the same. If anyone can provide a detailed breakdown of this definition that would be much appreciated.</p>
Nicholas Todoroff
1,068,683
<p>The symmetric tensors are all symmetrized tensor products of vectors: <span class="math-container">$$ H^s_n = \left\{\sum_{\sigma \in S_n}v_{\sigma(1)}\otimes\cdots\otimes v_{\sigma(n)} \;:\; v_1,\dotsc,v_n \in H\right\}. $$</span> It follows that each basis element of <span class="math-container">$H^s_n$</span> corresponds to a selection of integers <span class="math-container">$1 \leq i_1 \leq i_2 \leq \cdots \leq i_n$</span>, and that basis element is <span class="math-container">$$ \sum_{\sigma\in S_n}e_{i_{\sigma(1)}}\otimes\cdots\otimes e_{i_{\sigma(n)}}. $$</span> (Of course, you could normalize however you please.) This means we can write <span class="math-container">$$ H^s_n = \left\{\sum_{1\leq i_1\leq\cdots\leq i_n}a_{i_1,\cdots, i_n}\sum_{\sigma\in S_n}e_{i_{\sigma(1)}}\otimes\cdots\otimes e_{i_{\sigma(n)}} \;:\; a_{i_1,\cdots,i_n} \in \mathbb C\right\} $$</span> In terms of coordinates, this means that every <span class="math-container">$e_{i_1}\otimes\cdots\otimes e_{i_n}$</span> for any selection of <span class="math-container">$i_1,\dots, i_n$</span> must have the same coefficient as <span class="math-container">$e_{i_{\sigma(1)}}\otimes\cdots\otimes e_{i_{\sigma(n)}}$</span> for any <span class="math-container">$\sigma \in S_n$</span>. Hence we could also write <span class="math-container">$$ H^s_n = \left\{\sum_{i_1,\cdots,i_n} a_{i_1,\cdots,i_n}e_{i_1}\otimes\cdots\otimes e_{i_n} \;:\; \forall\sigma\in S_n.\: a_{i_{\sigma(1)},\cdots,i_{\sigma(n)}} = a_{i_1,\cdots,i_n}\right\}. $$</span> This says that if any two coefficients <span class="math-container">$a_{i_1,\cdots,i_n}$</span> and <span class="math-container">$a_{i'_1,\cdots,i'_n}$</span> draw their indices from the same <em>multiset</em>, then <span class="math-container">$a_{i_1,\cdots,i_n} = a_{i'_1,\cdots,i'_n}$</span>; it does <em>not</em> say anything about how e.g. <span class="math-container">$a_{1,2,3}$</span> and <span class="math-container">$a_{1,1,3}$</span> are related.</p>
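<p>The last characterization is easy to test numerically. The sketch below (my illustration, using NumPy) symmetrizes a random order-3 coefficient array and checks that the result is invariant under every index permutation, while the generic array is not:</p>

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4, 4))  # generic coefficients a_{i1, i2, i3}
# symmetrize: average the array over all permutations of its indices
S = sum(np.transpose(T, p) for p in permutations(range(3))) / 6

# a generic array is not permutation-invariant ...
assert not all(np.allclose(T, np.transpose(T, p)) for p in permutations(range(3)))
# ... but the symmetrized one satisfies a_{sigma(i)} = a_i for all sigma
for p in permutations(range(3)):
    assert np.allclose(S, np.transpose(S, p))
print("ok")
```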
17,480
<p>I have asked a question at <a href="https://academia.stackexchange.com/">academia.stackexchange</a> with three sub-questions recently and I was told that it was not proper there. I just wonder if it is acceptable if one asks multiple (related) questions at math.stackexchange? </p> <p>To mathematicians, if the answer is "no", that would not even make sense: I can always ask one single question of the form </p> <blockquote> <p>What is the ordered triple of the form $(X, Y, Z)$ such that $X$ is the answer to $X'$, $Y$ is the answer to $Y'$ and $Z$ is the answer to $Z'$?</p> </blockquote>
user642796
8,348
<p>I generally dislike it when users ask multiple questions within a single question, and this is backed up by the <a href="https://meta.stackexchange.com/q/39223/214632">MSE faq</a>. I believe the main reasons for this are the following:</p> <ul> <li><p>If people provide answers of varying quality to the different parts (<em>e.g.</em>, exemplary solutions to questions (1) and (3), but a totally incorrect answer to question (2)), it makes <em>judging</em> the quality of these answers as a whole much more problematic.</p></li> <li><p>Such questions will often be more difficult to search for (<em>e.g.</em>, have an unfocused title, tags that only fit some of the questions), thereby decreasing their future utility.</p></li> </ul> <p>I have been thinking of bringing up the idea of a custom close reason for these situations, but haven't found the time to fully develop it.</p>
224,970
<p>$\newcommand{\Int}{\operatorname{Int}}\newcommand{\Bdy}{\operatorname{Bdy}}$ If $A$ and $B$ are sets in a metric space, show that: (note that $\Int$ stands for interior of the set)</p> <ol> <li>$\Int (A) \cup \Int (B) \subset \Int (A \cup B)$.</li> <li>$(\overline{ A \cup B}) = (\overline A \cup \overline B )$. (note that $\overline A = \Int (A) \cup \Bdy(A)$ )</li> </ol> <p>Now for the first (1), I see why it's true: for instance, in $\mathbb{R}$ we can take the intervals $A=[a,b]$ and $B=[b,c]$. We have $A \cup B=[a,c]$, so $\Int(A \cup B)=(a,c)$; now $\Int(A)=(a,b)$ and $\Int(B)=(b,c)$, so we lose $b$ when we take the union to form $\Int(A) \cup \Int(B)=(a,b) \cup (b,c)$.</p>
Luke Mathieson
35,289
<p>You're off to the right start; it just has to be a bit more elaborate. The rough idea is that you want to add a character to $a$ and $c$ for each one you put into $b$, then the last step is to stick the "middle" $1$ in.</p> <p>More detail in the spoiler, but try to get it yourself first.</p> <blockquote class="spoiler"> <p> The basic rule is $S\rightarrow AABSBCC \;|\; ABSC \;|\; ASBC | 1$ where $A$, $B$ and $C$ go to $0\;|\; 1$, and really only have different labels to highlight the counting. Note that we're not really keeping track of the order, but the step-wise counting ensures we have enough $0$s and $1$s to either side (remember that the "middle" $1$ is not necessarily actually in the middle, just that the string can be broken into those three bits).</p> </blockquote>
3,744,560
<p>Suppose I have a function <span class="math-container">$\Lambda(t)$</span> for any <span class="math-container">$t&gt;0$</span>. This function has the following three properties:</p> <ol> <li><span class="math-container">$\Lambda(t)$</span> is differentiable.</li> <li><span class="math-container">$\Lambda(t)$</span> is strictly increasing.</li> <li><span class="math-container">$\Lambda(T) = \Lambda(T+S) - \Lambda(S)$</span> for any <span class="math-container">$T,S&gt;0$</span>.</li> </ol> <p>It is stated that the function has the form <span class="math-container">$\Lambda(t) = \lambda t$</span>, but how can I formally derive this from the above three properties. Thanks in advance.</p>
Community
-1
<p>Let <span class="math-container">$f(x)$</span> be such a function. From the third property we get, <span class="math-container">$$f(x+y)=f(x)+f(y)$$</span> <strong>Step 1</strong> Differentiate partially w.r.t. <span class="math-container">$x$</span>, <span class="math-container">$$f'(x+y)=f'(x)$$</span> <strong>Step 2</strong> Put <span class="math-container">$x=0$</span>, <span class="math-container">$$f'(y)=f'(0)=constant$$</span> <strong>Step 3</strong> This is an Ordinary Differential Equation which gives, <span class="math-container">$$f(y)=f'(0)y+C$$</span> Since <span class="math-container">$C=0$</span> (as <span class="math-container">$f(0)=0$</span>), we get, <span class="math-container">$$f(y)=\lambda y$$</span> Where <span class="math-container">$\lambda&gt;0$</span> (from the second property)</p>
1,114
<p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
Reid Barton
126,667
<p>Another answer is that a groupoid is a space which has no homotopy groups in dimension &ge; 2. (Analogously a set is a space which has no homotopy groups in dimension &ge; 1.) They arise from taking (homotopy) orbits of group actions on sets, as well as from categories (by discarding the noninvertible morphisms and then taking the nerve). People care about them because they retain useful homotopical information, analogous to the relationship between Hom and Ext in homological algebra, and also because they're a lot easier to work with than general spaces.</p>
1,114
<p>Or more specifically, why do people get so excited about them? And what's your favorite easy example of one, which illustrates why I should care (and is not a group)?</p>
Tom Leinster
586
<p>As other people have mentioned, a groupoid can be defined as a category in which every map is invertible. A groupoid with only one object is exactly a group, and a groupoid in which there are no maps between distinct objects is simply a family of groups.</p> <p>But there's another class of examples, orthogonal to these ones. Namely: any <em>equivalence relation</em> is a groupoid. In fact, an equivalence relation is exactly a groupoid in which for each object <i>a</i> and object <i>b</i>, there is at most one map from <i>a</i> to <i>b</i>. Concretely, the objects of the groupoid are the elements of the set on which the equivalence relation is defined, and there is a map from <i>a</i> to <i>b</i> iff <i>a</i> is equivalent to <i>b</i>.</p> <p>(A (small) category with the property that for any objects <i>a</i> and <i>b</i>, there is at most one map from <i>a</i> to <i>b</i>, is the same thing as a <i>preordered set</i> &mdash; that is, a set equipped with a reflexive transitive relation, usually denoted $\leq$.) </p>
929,532
<p>Okay so I want some hints (not solutions) on figuring out whether these sets are open, closed or neither.</p> <p>$A = \{ (x,y,z) \in \mathbb{R}^3\ \ | \ \ |x^2+y^2+z^2|\lt2 \ \text{and} \ |z| \lt 1 \} \\ B = \{(x,y) \in \mathbb{R}^2 \ | \ y=2x^2\}$</p> <p>Okay so this question is the last part of the question where I proved that if the function $f$ is continuous then $f^{-1}(B)$ is open if $B$ is open, where $f: X \to Y $ and $B \subseteq Y$. I assume I am supposed to define an image of $A$ and $B$ and show that they are closed/open, then use this definition. But I am not sure how to define the functions. I have a hint for the question stating that I should use the fact that polynomials are continuous mappings and the fact that any norm $\|\cdot\| : V \to \mathbb{R} $ is continuous. So for $A$, should I consider a norm $\|\cdot\| $ induced by $|\cdot|$? Then the image of $A$ would be $(-2, 2)$; since this set is open, $A$ (its preimage) will be open? And for $B$ I define $f(x)=2x^2$, but that didn't sound right... I don't know; I am confused on how to take the first step. So any hints on how I should approach this question? </p>
saz
36,150
<p><strong>Hint</strong> Consider <span class="math-container">$\Omega = \mathbb{N}$</span>, the power set <span class="math-container">$F=\mathcal{P}(\mathbb{N})$</span> and the mapping <span class="math-container">$\mu: F\to [0,\infty]$</span>, <span class="math-container">$$\mu(A) := \begin{cases} 0, &amp; \text{$A$ is a finite set} \\ \infty, &amp; \text{otherwise}. \end{cases}$$</span></p>
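<p>A small sketch of the hint as I read it (the intended point is presumably that $\mu$ is finitely additive but not countably additive): every singleton $\{n\}$ is finite, so $\mu(\{n\})=0$ and every finite partial sum vanishes, yet the countable union $\mathbb{N}$ of the singletons has $\mu(\mathbb{N})=\infty$.</p>

```python
import math

def mu(A):
    # A is either a finite Python set or the string "N", standing for all of N
    return 0 if isinstance(A, set) else math.inf

# finite partial sums of mu over the singletons stay 0 ...
assert sum(mu({n}) for n in range(10_000)) == 0
# ... but the countable union of those singletons has infinite measure
assert mu("N") == math.inf
print("ok")
```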
94,501
<p>The well-known Vandermonde convolution gives us the closed form <span class="math-container">$$\sum_{k=0}^n {r\choose k}{s\choose n-k} = {r+s \choose n}$$</span> For the case <span class="math-container">$r=s$</span>, it is also known that <span class="math-container">$$\sum_{k=0}^n (-1)^k {r \choose k} {r \choose n-k} = (-1)^{n/2} {r \choose n/2} \quad [n \mathrm{\ is\ even}]$$</span> When <span class="math-container">$r\not= s$</span>, is there a known closed form for the alternating Vandermonde sum? <span class="math-container">$$\sum_{k=0}^n (-1)^k {r \choose k} {s \choose n-k}$$</span></p>
Phira
9,325
<p>You can use the Zeilberger algorithm to find a recurrence for the sum and then use Petkovsek's algorithm to verify that there are no hypergeometric (i.e. essentially products of factorials) closed formulas for the general case.</p> <p>So, you cannot expect to do better than the expressions proposed in the other answers.</p>
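<p>For what it's worth, the $r=s$ closed form quoted in the question is easy to confirm by direct computation (a quick check of my own, not part of the original answer):</p>

```python
from math import comb

# verify sum_k (-1)^k C(r, k) C(r, n-k) == (-1)^(n/2) C(r, n/2) for even n
for r in range(1, 9):
    for n in range(0, 2 * r + 1, 2):
        lhs = sum((-1) ** k * comb(r, k) * comb(r, n - k) for k in range(n + 1))
        assert lhs == (-1) ** (n // 2) * comb(r, n // 2)
print("ok")
```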
780,611
<p>The differential equation in question is a first-order ODE,</p> <p>$$ \frac{dy}{dt} = -\frac{a^2\sqrt{2g}}{\sqrt{(R+y)(R-y)}} $$</p> <p>Upon first inspection, this is separable, but I don't know how to proceed from there.</p> <p>Thanks.</p>
Claude Leibovici
82,404
<p>Hint</p> <p>Just integrate $\sqrt{(R-y) (R+y)}$ with respect to $y$ after having set $dt=...dy$. You will have a nice first change of variable such as $y=R x$</p>
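<p>Concretely, separation gives $\sqrt{(R-y)(R+y)}\,dy = -a^2\sqrt{2g}\,dt$, so the integral needed is $\int\sqrt{R^2-y^2}\,dy$. A quick numeric check of the standard antiderivative $\tfrac12\big(y\sqrt{R^2-y^2}+R^2\arcsin(y/R)\big)$ (my addition, with an arbitrary $R=2$):</p>

```python
import math

R = 2.0
F = lambda y: (y * math.sqrt(R * R - y * y) + R * R * math.asin(y / R)) / 2

# central differences should recover the integrand sqrt(R^2 - y^2)
h = 1e-6
for y in [-1.5, 0.0, 0.7, 1.9]:
    num = (F(y + h) - F(y - h)) / (2 * h)
    assert abs(num - math.sqrt(R * R - y * y)) < 1e-5
print("ok")
```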
2,930,292
<p>I'm currently learning the unit circle definition of trigonometry. I have seen a graphical representation of all the trig functions at <a href="https://www.khanacademy.org/math/trigonometry/unit-circle-trig-func/unit-circle-definition-of-trig-functions/a/trig-unit-circle-review" rel="nofollow noreferrer">Khan Academy</a>. </p> <p><img src="https://i.stack.imgur.com/zp5WB.png" alt="enter image description here"></p> <p>I understand how to calculate all the trig functions and what they represent. Graphically, I only understand why sin and cos are drawn the way they are. I'm having trouble understanding why tangent, cotangent, secant, and cosecant are drawn the way they are. </p> <p>Can someone please provide me with some intuition?</p>
Doug M
317,162
<p>While this figure is elegant in its way, it has a little bit too much going on for the beginning student. <a href="https://i.stack.imgur.com/1fNWH.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1fNWH.png" alt="enter image description here"></a></p> <p>Let's start with a simpler figure. <a href="https://i.stack.imgur.com/99wK4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/99wK4.png" alt="enter image description here"></a></p> <p>Our two triangles are similar. The smaller triangle has side lengths <span class="math-container">$(\cos x, \sin x, 1)$</span></p> <p>Multiplying all three by the same ratio <span class="math-container">$\frac{1}{\cos x}$</span>, we get the side lengths of the similar triangle:</p> <p><span class="math-container">$(1,\tan x, \sec x)$</span></p> <p>All of the "co" functions are "flipped" across the line <span class="math-container">$x=y$</span>.</p> <p><a href="https://i.stack.imgur.com/AZkwq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AZkwq.png" alt="enter image description here"></a></p> <p>Relating these figures to the first one, I will point out that these are congruent triangles.</p> <p><a href="https://i.stack.imgur.com/Wq8UI.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wq8UI.png" alt="enter image description here"></a></p>
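<p>The scaling step is easy to confirm numerically (a small check of my own): multiplying the side lengths $(\cos x, \sin x, 1)$ by $\frac{1}{\cos x}$ reproduces $(1, \tan x, \sec x)$.</p>

```python
import math

for x in [0.2, 0.7, 1.1]:
    c, s = math.cos(x), math.sin(x)
    scaled = (c / c, s / c, 1 / c)  # (cos x, sin x, 1) scaled by 1/cos x
    assert math.isclose(scaled[1], math.tan(x))      # tan x
    assert math.isclose(scaled[2], 1 / math.cos(x))  # sec x
print("ok")
```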
140,134
<p>Does anyone know whether there are any geometric applications of Iwaniec's conjecture on the $ l^p $ bound of the Beurling-Ahlfors transform (or the complex Hilbert transform)? One application could have been Behrling's conjecture (solved). Is there anything else that might be of geometric significance? For $p=2$, it is known that the $ l^p $ bound is $1$, and it coincides with the conjecture. It is already known to be $ l^p $ bounded by Calderon, Zygmund and even more. Is the exact bound of any geometric significance?</p>
Community
-1
<p>As already pointed out, the Iwaniec conjecture has close links with calculus of variations. It has already been settled for "stretch" functions (with a rotation $e^{i\theta}$ introduced), i.e., functions of the form \begin{equation*} f(re^{i\theta})=g(r)e^{-i\theta} \end{equation*} for positive and smooth $g$ on the upper-half real plane with end points equal to $0$. </p> <p>The Iwaniec conjecture is closely linked with rank one and quasiconvexity, specifically to Morrey's and Sverak's conjecture. Note that if the Banuelos-Wang conjecture is true, then the Iwaniec conjecture will be true. If the Banuelos-Wang conjecture is not true, then Morrey's conjecture would be settled for the case $m=n=2$. </p> <p>The truth of the Iwaniec conjecture would have consequences for QC mappings in $\mathbb{R}^n$. If the Iwaniec conjecture does hold, then it would be a stronger variation of Astala's area distortion theorem on QC mappings. </p> <p>The Beurling-Ahlfors transform already has links with algebra/geometry, e.g., research has been conducted by Banuelos and Lindeman on an operator $B_n$ (for $n=2$) that can be identified with $B$ for $f:\mathbb{R}^n\to\Gamma$ with Grassmann algebra $\Gamma$. So the truth of the conjecture may well open up new unforeseen research possibilities in areas such as this (see links below).</p> <p>The truth of the Iwaniec conjecture would tell us how the Cauchy-Riemann operators $\partial f,~\overline{\partial}f$ are like differentially subordinate harmonic functions, or differentially subordinate martingales (which is in fact the key to settling the conjecture). 
</p> <p>Here are a few papers I have studied that may help with your question:</p> <ul> <li><p>Ahlfors-Beurling Operator on Radial Function by Alexander Volberg</p></li> <li><p>Nonlinear Cauchy-Riemann Operators in $\mathbb{R}^n$ by Tadeusz Iwaniec</p></li> <li><p>A Martingale Study of the Beurling-Ahlfors Transform in $\mathbb{R}^n$.</p></li> </ul> <p>This is something I am keen on so I will keep an eye on progress of the conjecture. </p>
2,125,206
<ol> <li><p>Let $W$ be the region bounded by the planes $x = 0$, $y = 0$, $z = 0$, $x + y = 1$, and $z = x + y$.</p></li> <li><p>$(x^2 + y^2 + z^2)\, \mathrm dx\, \mathrm dy\, \mathrm dz$; $W$ is the region bounded by $x + y + z = a$ (where $a &gt; 0$), $x = 0$, $y = 0$, and $z = 0$.</p></li> </ol> <p>$x,y,z$ being $0$ is throwing me off because I'm not sure how to graph it and get a bounded area; it seems like it would be infinite to me. What would the bounds for the triple integrals be?</p>
W.R.P.S
390,844
<p>1)</p> <p><a href="https://i.stack.imgur.com/QwKSM.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/QwKSM.png" alt="enter image description here"></a></p> <p>$$\int_{x=0}^{x=1}\int_{y=0}^{y=1-x}\int_{z=0}^{z=x+y}\ dzdydx $$</p> <p>2)</p> <p><a href="https://i.stack.imgur.com/OKL9e.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/OKL9e.png" alt="enter image description here"></a></p> <p>$$\int_{x=0}^{x=a}\int_{y=0}^{y=a-x}\int_{z=0}^{z=a-x-y} (x^2 + y^2 + z^2) \;dz\;dy\;dx$$</p>
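<p>These bounds can be sanity-checked numerically. For integral (2) the exact value is $a^5/20$: by the Dirichlet integral formula each of $\int x^2$, $\int y^2$, $\int z^2$ over the simplex equals $a^5/60$. A quick Monte Carlo estimate (my addition, with $a=2$):</p>

```python
import random

random.seed(1)
a, N = 2.0, 400_000
acc = 0.0
for _ in range(N):
    x, y, z = (random.uniform(0, a) for _ in range(3))
    if x + y + z <= a:  # keep only cube samples that land in the simplex
        acc += x * x + y * y + z * z
estimate = acc / N * a ** 3  # mean over the cube times the cube's volume
exact = a ** 5 / 20          # 1.6 for a = 2
assert abs(estimate - exact) / exact < 0.05
print(estimate, exact)
```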
1,644,905
<p>How to simplify the following expression:</p> <p>$$\sin(2\arccos(x))$$ I am thinking about:</p> <p>$$\arccos(x) = t$$</p> <p>Then we have:</p> <p>$$\sin(2t) = 2\sin(t)\cos(t)$$</p> <p>But then how to proceed?</p>
Harish Chandra Rajpoot
210,295
<p>Notice: let $\cos^{-1}x=\theta\iff \cos\theta=x$ $$\sin(2\theta)=2\sin\theta\cos \theta$$$$\sin(2\theta)=2\cos\theta\sqrt{1-\cos^2\theta}$$ (here $\sin\theta=\sqrt{1-\cos^2\theta}$ is valid because $\theta=\cos^{-1}x\in[0,\pi]$, so $\sin\theta\ge 0$). Setting the value of $\theta$, $$\sin(2\cos^{-1}x)=\color{red}{2x\sqrt{1-x^2}}$$ $\forall \ -1\le x\le 1$</p>
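<p>A quick numeric confirmation of the final identity (my addition), including negative $x$, where the sign is carried by the $2x$ factor:</p>

```python
import math

for x in [-0.9, -0.3, 0.0, 0.5, 0.99]:
    lhs = math.sin(2 * math.acos(x))
    rhs = 2 * x * math.sqrt(1 - x * x)
    assert math.isclose(lhs, rhs, abs_tol=1e-12)
print("ok")
```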
108,372
<p>Given a map $\psi: S\rightarrow S,$ for $S$ a closed surface, is there any algorithm to compute its translation distance in the curve complex? I should say that I mostly care about checking that the translation distance is/is not very small. That is, if the algorithm can pick among the possibilities: translation distance is 0, 1, 2, 3, many, then I am happy...</p> <p>I know there are algorithms for computing distances IN the curve complex, but this is not quite the same...</p>
Lee Mosher
20,787
<p>In the case that $\psi$ is pseudo-Anosov, the best one can do in general, as far as I know, is to get upper and lower bounds which are linear in translation length. These come from train track considerations. Assuming you have an invariant train track $T$ for $\psi$ in your hands (obtained by some algorithmic method of currently unknowable complexity as per my comment), factor it into a sequence of train track splits $$T=T_0, T_1, \ldots, T_k = \psi(T) $$ then using the method in the Masur-Minsky paper "Geometry of the curve complex I: hyperbolicity", one can algorithmically break the split sequence into blocks $$T_0 = T_{m_0}, ..., T_{m_1}, ..., T_{m_a}=T_k $$ such that the diameter of the subsequence from $T_{m_i}$ to $T_{m_{i+1}}$ has a certain constant upper bound and the diameter from $T_{m_i}$ to $T_{m_j}$ has a certain lower bound which is a constant times $|i-j|$. The material needed to do this is described in the section of their paper entitled "the nested train track argument".</p> <p>Other than that, Shackleton's paper "An acylindricity theorem for the mapping class group" contains some algorithmic detail, but not enough to answer your question. </p>
2,743,288
<p>I need help with this exponential equation: $5^{x+2}\ 2^{4-x} = 1000 $</p> <p>We know that $ 1000 = 10^3$, so:</p> <p>$$\ln(5^{x+2}\cdot2^{4-x}) = \ln10^3 \implies\ln(5^{x+2}) + \ln(2^{4-x}) = \ln10^3$$</p> <p>In the next step I use that: $\ln(a^x) = x\ln(a)$</p> <p>$$(x+2)\ln 5 + (4-x)\ln 2 = 3\ln 10$$</p> <p>And I'm stuck here. </p>
B. Mehta
418,148
<p>You can simplify a little more: $$\begin{align}(x+2)\ln 5 + (4-x)\ln2 &amp;= 3\ln 10 \\ &amp;=3\ln2 + 3 \ln 5 \\ (x-1)\ln5+(1-x)\ln2&amp;=0.\end{align}$$ Can you finish it off from here?</p> <p>Alternatively, you can do this without logs: $5^{x+2} \times 2^{4-x} = 2^3 5^3$, so $5^x 2^{-x} = \frac{5}{2}$, from where the answer should also be clear.</p>
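<p>Either route gives $x=1$, and plugging it back into the original equation is a quick check (my addition):</p>

```python
import math

x = 1
assert 5 ** (x + 2) * 2 ** (4 - x) == 1000  # 5^3 * 2^3 = 1000
# the simplified log form (x-1)ln5 + (1-x)ln2 also vanishes at x = 1
assert (x - 1) * math.log(5) + (1 - x) * math.log(2) == 0.0
print("ok")
```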
2,743,288
<p>I need help with this exponential equation: $5^{x+2}\ 2^{4-x} = 1000 $</p> <p>We know that $ 1000 = 10^3$, so:</p> <p>$$\ln(5^{x+2}\cdot2^{4-x}) = \ln10^3 \implies\ln(5^{x+2}) + \ln(2^{4-x}) = \ln10^3$$</p> <p>In the next step I use that: $\ln(a^x) = x\ln(a)$</p> <p>$$(x+2)\ln 5 + (4-x)\ln 2 = 3\ln 10$$</p> <p>And I'm stuck here. </p>
user247327
247,327
<p>I see no reason to use logarithms here.</p> <p>Yes, $1000 = 10^3$ and $10 = (2)(5)$, so $1000 = (2^3)(5^3)$.</p> <p>Your equation is $(2^{4-x})(5^{x+2}) = (2^3)(5^3)$.</p> <p>Since $2$ and $5$ are prime numbers and prime factorization is unique, we must have $4-x=3$ and $x+2=3$. That is two equations in one unknown, but fortunately $x=1$ satisfies both.</p>
2,831,270
<p>I am quite fascinated by the formula for the Mellin transform of the Gaussian Hypergeometric Function, which is given by:</p> <blockquote> <p><span class="math-container">$$\mathcal M [_2F_1(\alpha,\beta;\gamma;-x)] = \frac {B(s,\alpha-s)B(s,\beta-s)}{B(s,\gamma-s)}$$</span></p> </blockquote> <p><em>Source : <a href="https://authors.library.caltech.edu/43489/1/Volume%201.pdf" rel="nofollow noreferrer">Table of Integral Transforms</a> page <span class="math-container">$336$</span>, <span class="math-container">$6.9 (3)$</span></em></p> <p>I have found this within a table of integral transforms of various functions and I would be really interested in a proof for this formula.</p>
Maxim
491,644
<p>The inverse Mellin transform is given by $$\mathcal M^{-1}[F] = \frac 1 {2 \pi i} \int_{\sigma -i \infty}^{\sigma + i \infty} F(s) x^{-s} ds.$$ For $F(s) = \Gamma(s) \Gamma(\alpha - s) \Gamma(\beta - s) / \Gamma(\gamma - s)$, the line $\operatorname{Re} s = \sigma$ should separate the poles of $\Gamma(s)$ from the poles of $\Gamma(\alpha - s) \Gamma(\beta - s)$. For $0 &lt; x &lt; 1$, the sequence of integrals over left semicircles centered at $\sigma$ with radii $\sigma + k + 1/2$ tends to zero and the inverse transform can be calculated as the sum of the residues at $s = -k$: $$\mathcal M^{-1}[F] = \sum_{k=0}^\infty \operatorname{Res}_{s = -k} \frac {\Gamma(s) \Gamma(\alpha - s) \Gamma(\beta - s)} {\Gamma(\gamma - s)} x^{-s} = \\ \sum_{k=0}^\infty \frac {\Gamma(\alpha + k) \Gamma(\beta + k)} {\Gamma(\gamma + k)} \frac {(-x)^k} {k!} = \\ \frac {\Gamma(\alpha) \Gamma(\beta)} {\Gamma(\gamma)} {_2F_1}(\alpha, \beta; \gamma; -x).$$</p> <p>Since both the integral and ${_2F_1}$ are analytic functions of $x$ when $0 &lt; \operatorname{Re} x$, we conclude that the identity holds for $0 &lt; x$, giving your formula.</p>
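As a sanity check (not part of the original argument), the identity can be verified numerically with `mpmath`, comparing the direct Mellin integral against both the Beta form from the question and the Gamma form above. The parameter values below are arbitrary choices satisfying $0 < s < \min(\alpha,\beta)$.

```python
from mpmath import mp, mpf, beta, gamma, hyp2f1, quad, inf

mp.dps = 25
a, b, c, s = mpf(2), mpf(3), mpf(4), mpf('0.5')  # need 0 < s < min(a, b)

# Direct Mellin transform of 2F1(a, b; c; -x)
lhs = quad(lambda x: x**(s - 1) * hyp2f1(a, b, c, -x), [0, inf])

# Beta form from the question, and the equivalent Gamma form
rhs_beta = beta(s, a - s) * beta(s, b - s) / beta(s, c - s)
rhs_gamma = (gamma(c) / (gamma(a) * gamma(b))) * \
    gamma(s) * gamma(a - s) * gamma(b - s) / gamma(c - s)

assert abs(lhs - rhs_beta) < mpf('1e-8')
assert abs(rhs_beta - rhs_gamma) < mpf('1e-20')
```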
2,377,946
<blockquote> <p>The integral is: $$\int_0^a \frac{x^4}{(x^2+a^2)^4}dx$$</p> </blockquote> <p>I used an approach that involved substitution of x by $a\tan\theta$. No luck :\ . Help?</p>
Bernard
202,857
<p><strong>Hint</strong></p> <p>You obtain the integral: $\;\displaystyle\int_0^\tfrac\pi4\!\frac{\tan^4\theta}{a^3(1+\tan^2\theta)^3}\,\mathrm d\mkern1mu\theta=\frac1{a^3}\int_0^\tfrac\pi4\sin^4\theta\cos^2\theta\,\mathrm d\mkern1mu\theta$, since $(1+\tan^2\theta)^3=\sec^6\theta$.</p> <p>The integrand is invariant only under $\theta\mapsto\pi+\theta$, so <em>Bioche's rules</em> would just give back $u=\tan\theta$; linearise instead, using $\sin^4\theta\cos^2\theta=\frac18\sin^2 2\theta\,(1-\cos 2\theta)$.</p>
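One can check the substitution numerically (my addition; note that $\tan^4\theta/(1+\tan^2\theta)^3$ simplifies to $\sin^4\theta\cos^2\theta$), here with the arbitrary choice $a=2$:

```python
from mpmath import mp, mpf, quad, sin, cos, pi

mp.dps = 20
a = mpf(2)
# original integral on [0, a]
orig = quad(lambda x: x**4 / (x**2 + a**2)**4, [0, a])
# after x = a tan(theta): (1/a^3) * int_0^{pi/4} sin^4 cos^2
reduced = quad(lambda t: sin(t)**4 * cos(t)**2, [0, pi / 4]) / a**3
assert abs(orig - reduced) < mpf('1e-15')
```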
1,836,190
<p>I've been working on a problem and got to a point where I need the closed form of </p> <blockquote> <p>$$\sum_{k=1}^nk\binom{m+k}{m+1}.$$</p> </blockquote> <p>I wasn't making any headway so I figured I would see what Wolfram Alpha could do. It gave me this: </p> <p>$$\sum_{k=1}^nk\binom{m+k}{m+1} = \frac{n((m+2)n+1)}{(m+2) (m+3)}\binom{m+n+1}{ m+1}.$$</p> <p>That's quite the nasty formula. Can anyone provide some insight or justification for that answer? </p>
Marco Cantarini
171,547
<p>Using the integral representation of the binomial coefficient $$\dbinom{s}{k}=\frac{1}{2\pi i}\oint_{\left|z\right|=1}\frac{\left(1+z\right)^{s}}{z^{k+1}}dz$$ we have $$ \sum_{k=1}^{n}k\dbinom{m+k}{m+1}=\frac{1}{2\pi i}\sum_{k=1}^{n}k\oint_{\left|z\right|=1}\frac{\left(1+z\right)^{m+k}}{z^{m+2}}dz $$ $$=\frac{1}{2\pi i}\oint_{\left|z\right|=1}\frac{\left(1+z\right)^{m}}{z^{m+2}}\sum_{k=1}^{n}k\left(1+z\right)^{k}dz $$ $$=\frac{1}{2\pi i}\oint_{\left|z\right|=1}\frac{\left(1+z\right)^{m+n+1}}{z^{m+4}}\left(nz-1\right)dz+\frac{1}{2\pi i}\oint_{\left|z\right|=1}\frac{\left(1+z\right)^{m+1}}{z^{m+4}}dz$$ $$ =n\dbinom{m+n+1}{m+2}-\dbinom{m+n+1}{m+3} $$ which is equivalent to your claim. To see that we have the same result, note that holds $$\dbinom{n}{k}=\frac{n+1-k}{k}\dbinom{n}{k-1}.$$</p>
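The final identity, and its equivalence to the Wolfram Alpha form, is easy to confirm by brute force (a check I have added, not part of the answer):

```python
from math import comb

for m in range(6):
    for n in range(1, 9):
        s = sum(k * comb(m + k, m + 1) for k in range(1, n + 1))
        two_binomials = n * comb(m + n + 1, m + 2) - comb(m + n + 1, m + 3)
        # Wolfram Alpha's closed form; the division is exact
        wolfram = n * ((m + 2) * n + 1) * comb(m + n + 1, m + 1) // ((m + 2) * (m + 3))
        assert s == two_binomials == wolfram
```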
1,836,190
<p>I've been working on a problem and got to a point where I need the closed form of </p> <blockquote> <p>$$\sum_{k=1}^nk\binom{m+k}{m+1}.$$</p> </blockquote> <p>I wasn't making any headway so I figured I would see what Wolfram Alpha could do. It gave me this: </p> <p>$$\sum_{k=1}^nk\binom{m+k}{m+1} = \frac{n((m+2)n+1)}{(m+2) (m+3)}\binom{m+n+1}{ m+1}.$$</p> <p>That's quite the nasty formula. Can anyone provide some insight or justification for that answer? </p>
Marko Riedel
44,883
<p>For those who enjoy integrals here is another approach using the Egorychev method as presented in many posts by @FelixMarin and also by @MarkusScheuer, where we focus on finding an answer that differs from the approaches that have already been seen.</p> <p><P>Suppose we seek to compute</p> <p>$$S(n,m) = \sum_{k=0}^n k{m+k\choose m+1}.$$</p> <p>Introduce</p> <p>$${m+k\choose m+1} = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+2}} (1+z)^{m+k} \; dz$$</p> <p>as well as the Iverson bracket</p> <p>$$[[0\le k\le n]] = \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{w^k}{w^{n+1}} \frac{1}{1-w} \; dw.$$</p> <p>This yields for the sum</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+2}} (1+z)^{m} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n+1}} \frac{1}{1-w} \sum_{k\ge 0} k w^k (1+z)^k \; dw \; dz.$$</p> <p>For this to converge we must have $|w(1+z)|&lt;1.$ We get</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+2}} (1+z)^{m} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n+1}} \frac{1}{1-w} \frac{w(1+z)}{(1-w(1+z))^2} \; dw \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+2}} (1+z)^{m+1} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n}} \frac{1}{1-w} \frac{1}{(1-w(1+z))^2} \; dw \; dz.$$</p> <p>We evaluate the inner integral using the fact that the residues at the poles sum to zero. The residue at $w=1$ produces</p> <p>$$-\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+2}} (1+z)^{m+1} \frac{1}{(-z)^2} \; dz = -\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+4}} (1+z)^{m+1} \; dz = 0.$$</p> <p>For the residue at $w=1/(1+z)$ we re-write the inner integral to get</p> <p>$$\frac{1}{(1+z)^2} \frac{1}{2\pi i} \int_{|w|=\gamma} \frac{1}{w^{n}} \frac{1}{1-w} \frac{1}{(w-1/(1+z))^2} \; dw.$$</p> <p>We thus require $$\left.\left(\frac{1}{w^{n}} \frac{1}{1-w}\right)'\right|_{w=1/(1+z)} \\ = \left. 
\left(\frac{-n}{w^{n+1}} \frac{1}{1-w} + \frac{1}{w^n} \frac{1}{(1-w)^2}\right) \right|_{w=1/(1+z)} \\ = -n (1+z)^{n+1} (1+z)/z + (1+z)^n (1+z)^2/z^2.$$</p> <p>Substituting this into the outer integral and flipping signs we get two pieces which are</p> <p>$$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+2}} (1+z)^{m-1} n(1+z)^{n+2}/z \; dz \\ = \frac{n}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+3}} (1+z)^{n+m+1} \; dz = n\times {n+m+1\choose m+2}.$$</p> <p>The second piece is $$- \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+2}} (1+z)^{m-1}(1+z)^{n+2}/z^2 \; dz \\ = - \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1}{z^{m+4}} (1+z)^{n+m+1} \; dz = - {n+m+1\choose m+3}.$$</p> <p>It follows that our answer is</p> <p>$$\left(n - \frac{n-1}{m+3}\right) {n+m+1\choose m+2} = \frac{nm+2n+1}{m+3} {n+m+1\choose m+2}.$$</p> <p><strong>Remark.</strong> Being rigorous we also verify that the residue at infinity in the calculation of the inner integral is zero. We get</p> <p>$$-\mathrm{Res}_{w=0} \frac{1}{w^2} w^{n} \frac{1}{1-1/w} \frac{1}{(1-(1+z)/w)^2} \\ = - \mathrm{Res}_{w=0} w^{n-2} \frac{w}{w-1} \frac{w^2}{(w-(1+z))^2} = - \mathrm{Res}_{w=0} \frac{w^{n+1}}{w-1} \frac{1}{(w-(1+z))^2}.$$</p> <p>There is certainly no pole at zero here and the residue is zero as claimed (the term $1+z$ rotates in a circle around the point one on the real axis and with $\epsilon \lt 1$ it is never zero). This last result could also be obtained by comparing degrees of numerator and denominator.</p>
1,836,190
<p>I've been working on a problem and got to a point where I need the closed form of </p> <blockquote> <p>$$\sum_{k=1}^nk\binom{m+k}{m+1}.$$</p> </blockquote> <p>I wasn't making any headway so I figured I would see what Wolfram Alpha could do. It gave me this: </p> <p>$$\sum_{k=1}^nk\binom{m+k}{m+1} = \frac{n((m+2)n+1)}{(m+2) (m+3)}\binom{m+n+1}{ m+1}.$$</p> <p>That's quite the nasty formula. Can anyone provide some insight or justification for that answer? </p>
epi163sqrt
132,007
<p>Here is a slightly different variation of the theme. It is convenient to use the <em>coefficient of</em> operator $[x^k]$ to denote the coefficient of $x^k$ of a series. This way we can write e.g. \begin{align*} [x^k](1+x)^n=\binom{n}{k}\tag{1} \end{align*}</p> <blockquote> <p>We obtain \begin{align*} \sum_{k=1}^{n}&amp;k\binom{m+k}{m+1}=\sum_{k=1}^nk[x^{m+1}](1+x)^{m+k}\tag{2}\\ &amp;=[x^{m+1}](1+x)^{m+1}\sum_{k=1}^nk(1+x)^{k-1}\tag{3}\\ &amp;=[x^{m+1}](1+x)^{m+1}D_x\left(\sum_{k=1}^n(1+x)^{k}\right)\tag{4}\\ &amp;=[x^{m+1}](1+x)^{m+1}D_x\left(\frac{1-(1+x)^{n+1}}{1-(1+x)}-1\right)\tag{5}\\ &amp;=[x^{m+1}](1+x)^{m+1}\left(\frac{(nx-1)(1+x)^n}{x^2}+\frac{1}{x^2}\right)\tag{6}\\ &amp;=[x^{m+3}](1+x)^{m+n+1}(nx-1)\tag{7}\\ &amp;=n[x^{m+2}](1+x)^{m+n+1}-[x^{m+3}](1+x)^{m+n+1}\tag{8}\\ &amp;=n\binom{m+n+1}{m+2}-\binom{m+n+1}{m+3}\tag{9} \end{align*}</p> </blockquote> <p><em>Comment:</em></p> <ul> <li><p>In (2) we apply the <em>coefficient of</em> operator according to (1)</p></li> <li><p>In (3) we use the linearity of the <em>coefficient of</em> operator and split the binomial conveniently</p></li> <li><p>In (4) we introduce the differential operator $D_x:=\frac{d}{dx}$ </p></li> <li><p>In (5) we use the formula for the <em><a href="https://en.wikipedia.org/wiki/Geometric_series#Formula" rel="nofollow">finite geometric series</a></em></p></li> <li><p>In (6) we apply the differential operator $D_x$</p></li> <li><p>In (7) we do some simplifications and use the rule \begin{align*} [x^m]x^{-k}A(x)=[x^{m+k}]A(x) \end{align*}</p></li> <li><p>In (8) we use the linearity of the <em>coefficient of</em> operator and apply the rule above again</p></li> <li><p>In (9) we write the expression using binomial coefficients and obtain a result in accordance with the answer of @MarcoCantarini</p></li> </ul>
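Steps (4)-(6), the only place where a computation (rather than a coefficient-extraction rule) happens, can be double-checked with SymPy for a concrete $n$ (my addition, not part of the answer):

```python
import sympy as sp

x = sp.symbols('x')
n = 5  # the check below is for a concrete n
before = (1 - (1 + x)**(n + 1)) / (1 - (1 + x)) - 1      # step (5)
after = (n * x - 1) * (1 + x)**n / x**2 + 1 / x**2       # step (6)
assert sp.simplify(sp.diff(before, x) - after) == 0
```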
512,768
<p>I am trying to intuitively understand why the solution to the following problem is $-2$. $$\lim_{x\to\infty}\sqrt{x^2-4x}-x$$ $$\lim_{x\to\infty}(\sqrt{x^2-4x}-x)\frac{\sqrt{x^2-4x}+x}{\sqrt{x^2-4x}+x}$$ $$\lim_{x\to\infty}\frac{x^2-4x-x^2}{\sqrt{x^2-4x}+x}$$ $$\lim_{x\to\infty}\frac{-4x}{\sqrt{x^2-4x}+x}$$ $$\lim_{x\to\infty}\frac{-4}{\sqrt{1-\frac{4}{x}}+1}$$ $$\frac{-4}{\sqrt{1-0}+1}$$ $$\frac{-4}{2}$$ $$-2$$ I can understand the process that results in the answer being $-2$. However, I expected the result to be $0$. I have learned that when dealing with a limit approaching $\infty$, only the highest degree term matters because the others will not be as significant. For this reason, I thought that the $4x$ would be ignored, resulting in: $$\lim_{x\to\infty}\sqrt{x^2-4x}-x$$ $$\lim_{x\to\infty}\sqrt{x^2}-x$$ $$\lim_{x\to\infty}x-x$$ $$\lim_{x\to\infty}0$$ $$0$$ Why is the above process incorrect?</p>
egreg
62,967
<p>Change the variable: set $t=1/x$, so you want to compute $$ \lim_{t\to0^+}\sqrt{\frac{1}{t^2}-\frac{4}{t}}-\frac{1}{t} = \lim_{t\to0^+}\sqrt{\frac{1-4t}{t^2}}-\frac{1}{t} = \lim_{t\to0^+}\frac{\sqrt{1-4t}-1}{t} $$ Now it should be clearer why the limit can't be $0$. The square root can be written $$ \sqrt{1-4t}=1+\frac{1}{2}(-4t)+O(t^2) $$ so the limit becomes $$ \lim_{t\to0^+}\frac{1-2t+O(t^2)-1}{t}=-2 $$</p>
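Numerically (my addition), the expression indeed settles near $-2$, not $0$:

```python
from math import sqrt

for x in (1e2, 1e4, 1e6):
    val = sqrt(x * x - 4 * x) - x
    # sqrt(x^2 - 4x) - x = -2 - 2/x + O(1/x^2), so the error decays like 1/x
    assert abs(val + 2) < 10 / x
```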
57,232
<p>Given a Heegaard splitting of genus $n$, and two distinct orientation preserving homeomorphisms, elements of the mapping class group of the genus $n$ torus, is there a method which shows whether or not these homeomorphisms, when used to identify the boundaries of the pair of handlebodies, will produce the same $3$-manifold?</p>
Matt Young
2,627
<p>Here is an answer to 1. It is known that for any $A &gt; 0$ that $\sum_{m \leq x} \mu(m) e(\alpha m) = O_A(x (\log{x})^{-A})$ uniformly in $\alpha$. For instance, consult Theorem 13.10 of Iwaniec and Kowalski's book, Analytic Number Theory. This uniform bound comes from combining the zero free region of Dirichlet L-functions with Vinogradov's method. This bound shows that $|\hat{\mu}(k)| \leq C_A (\log{n})^{-A}$. </p> <p>This gives 2) for $A$ fixed. Edit 1: It gives 3) also.</p> <p>Edit 2: I would guess the truth is $|\hat{\mu}(k)| \leq C(\varepsilon) n^{-1/2 + \varepsilon}$ so $s$ would have to be almost as large as $n$ to get anything not $o(1)$.</p>
1,419,483
<p>Can anyone please help me in solving this integration problem $\int \frac{e^x}{1+ x^2}dx \, $?</p> <p>Actually, I am getting stuck at one point while solving this problem via integration by parts.</p>
Will R
254,628
<p>According to Wolfram|Alpha, no elementary closed form exists. However, that doesn't mean we can't make any progress at all.</p> <p>We can use the substitution $x=\tan{u}$, expand the resulting integrand as a power series in $\tan{u}$ and then each term can be expressed by way of a reduction formula.</p> <p>Let $x=\tan{u}$. Then, ${\mathrm{d}x \over \mathrm{d}u} = \sec^{2}{u}$. Hence, we have \begin{eqnarray*} \int\frac{e^{x}}{1+x^{2}}\,\mathrm{d}x &amp; = &amp; \int\frac{e^{\tan{u}}}{1+\tan^{2}{u}}\sec^{2}{u}\,\mathrm{d}u\\ &amp; = &amp; \int e^{\tan{u}}\,\mathrm{d}u\\ &amp; = &amp; \int \left( 1+ \tan{u} + \frac{1}{2!}\tan^{2}{u}+ \ldots + \frac{1}{k!}\tan^{k}{u} + \ldots \right)\,\mathrm{d}u\\ &amp; = &amp; \int \left( \sum_{k=0}^{\infty}\frac{1}{k!}\tan^{k}{u} \right)\,\mathrm{d}u\\ &amp; = &amp; \sum_{k=0}^{\infty}\frac{1}{k!}I_{k}, \end{eqnarray*} where, for each $k=0,1,2,\ldots$, we have $I_{k} = \int \tan^{k}{u}\,\mathrm{d}u$.</p> <p>Now we consider the reduction formula as follows: for $k\geq2$, we have \begin{eqnarray*} I_{k} &amp; = &amp; \int\tan^{k}{u}\,\mathrm{d}u\\ &amp; = &amp; \int (\sec^{2}{u}-1)\tan^{k-2}{u}\,\mathrm{d}u\\ &amp; = &amp; \int \sec^{2}{u}\tan^{k-2}{u}\,\mathrm{d}u - I_{k-2}\\ &amp; = &amp; \frac{1}{k-1}\tan^{k-1}{u} -I_{k-2}. \end{eqnarray*} Additionally, note that $I_{0} = u\,(+\text{constant})$ and $I_{1} = \log{\sec{u}}\,( + \text{constant} )$. Using the formula derived above and these two initial values, we can calculate $I_{k}$ for any value of $k$ (there may well be a general formula; I haven't checked).</p> <p>Hence, we have a series representation of the integral to as many terms as we please; note also that each term in the series is basically a polynomial in $x=\tan{u}$ with an extra $\tan^{-1}{x}$ or $\log{\sec{\tan^{-1}{x}}}$ tacked on at the end.</p>
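The scheme above is easy to test numerically (my addition): truncate the series at, say, 25 terms, evaluate each $I_k$ on $[0,u]$ by the reduction formula, and compare with direct quadrature of the original integrand on $[0,\tfrac12]$.

```python
from math import atan, cos, factorial, log, tan
import mpmath

x0 = 0.5
u = atan(x0)  # after x = tan(u), the u-interval is [0, atan(1/2)]

# I_k = int_0^u tan^k(t) dt: I_0 = u, I_1 = log(sec u),
# then the reduction I_k = tan(u)**(k-1)/(k-1) - I_{k-2}
I = [u, log(1 / cos(u))]
for k in range(2, 25):
    I.append(tan(u) ** (k - 1) / (k - 1) - I[k - 2])

series = sum(I[k] / factorial(k) for k in range(25))
direct = float(mpmath.quad(lambda t: mpmath.exp(t) / (1 + t * t), [0, x0]))
assert abs(series - direct) < 1e-10
```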
443,475
<p>I am reading some geometric algebra notes. They all started from some axioms. But I am still confused on the motivation to add inner product and wedge product together by defining $$ ab = a\cdot b + a \wedge b$$ Yes, it can be done like complex numbers, but what will we lose if we deal with inner product and wedge product separately? What are some examples to show the advantage of geometric product vs other methods?</p>
joeA
79,432
<p>Here's an excerpt from Lasenby, Lasenby and Doran, 1996, <a href="http://www.mrao.cam.ac.uk/~clifford/publications/ps/dll_millen.pdf" rel="nofollow"><em>A Unified Mathematical Language for Physics and Engineering in the 21st Century</em></a>:</p> <blockquote> <p>The next crucial stage of the story occurs in 1878 with the work of the English mathematician, William Kingdon Clifford (Clifford 1878). Clifford was one of the few mathematicians who had read and understood Grassmann's work, and in an attempt to unite the algebras of Hamilton and Grassmann into a single structure, he introduced his own <em>geometric algebra</em>. In this algebra we have a single <strong>geometric product</strong> formed by uniting the inner and outer products&mdash;this is associative like Grassmann's product but also <strong>invertible</strong>, like products in Hamilton's algebra. In Clifford's geometric algebra an equation of the type $\mathbf{ab}=C$ has the solution $\mathbf{b}=\mathbf{a}^{-1}C$, where $\mathbf{a}^{-1}$ exists and is known as the <em>inverse</em> of <strong>a</strong>. Neither the inner or outer product possess this invertibility on their own. Much of the power of geometric algebra lies in this property of invertibility. </p> <p>Clifford's algebra combined all the advantages of quaternions with those of vector geometry, [...]</p> </blockquote>
443,475
<p>I am reading some geometric algebra notes. They all started from some axioms. But I am still confused on the motivation to add inner product and wedge product together by defining $$ ab = a\cdot b + a \wedge b$$ Yes, it can be done like complex numbers, but what will we lose if we deal with inner product and wedge product separately? What are some examples to show the advantage of geometric product vs other methods?</p>
benrg
234,743
<p>It isn't so much that the Clifford product is useful as that it's natural.</p> <p>Instead of defining the Clifford product as <span class="math-container">$v\cdot w + v\wedge w$</span>, you can define the Clifford algebra as the algebra in which S+S and V+V addition, SS and SV multiplication, and V<sup>2</sup> squared norm have their usual meanings from vector calculus, the usual algebraic rules apply (except for commutativity of multiplication), and there are no other constraints. It isn't obvious that this produces anything sensible, but if it does (and indeed it does) then it seems quite natural. Starting from that formulation, the identity <span class="math-container">$v\cdot w = \tfrac12(\|v+w\|^2 - \|v\|^2 - \|w\|^2)$</span> gives you <span class="math-container">$v\cdot w = \tfrac12(vw+wv)$</span>, and you can show that <span class="math-container">$\tfrac12(vw-wv)$</span> satisfies all of the axioms of the wedge product, which justifies calling it <span class="math-container">$v\wedge w$</span>, and adding those together yields <span class="math-container">$vw = v\cdot w + v\wedge w$</span>.</p> <p>You could also informally motivate the sum by the fact that <span class="math-container">$\|v\cdot w\|^2 = \|v\|^2\|w\|^2 \cos^2 \theta$</span> and <span class="math-container">$\|v\wedge w\|^2 = \|v\|^2\|w\|^2 \sin^2 \theta$</span>, so it makes sense to combine them to get a product that satisfies <span class="math-container">$\|vw\|^2 = \|v\|^2\|w\|^2$</span> yet also preserves <span class="math-container">$\theta$</span> by putting the inner and wedge products in orthogonal components of the result. Straying further into truthiness-based argument, the wedge product is also called the (Grassmann) exterior product, and the (Grassmann) interior product coincides with the inner product for vectors, so you can say that the Clifford product combines the interior and exterior Grassmann products. They sound like they belong together, at least.</p>
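A concrete illustration (mine, not from the answer; the component formulas below are my own working-out for $\mathrm{Cl}(2,0)$): representing a multivector as $(s, a_1, a_2, a_{12})$, meaning $s + a_1 e_1 + a_2 e_2 + a_{12} e_1 e_2$ with $e_1^2 = e_2^2 = 1$ and $e_1 e_2 = -e_2 e_1$, the geometric product of two vectors visibly splits into $v\cdot w$ (scalar part) plus $v\wedge w$ ($e_1e_2$ part), and nonzero vectors are invertible.

```python
# Geometric product in Cl(2,0); multivector = (s, a1, a2, a12)
def gp(x, y):
    s1, a1, a2, a12 = x
    s2, b1, b2, b12 = y
    return (s1*s2 + a1*b1 + a2*b2 - a12*b12,
            s1*b1 + a1*s2 - a2*b12 + a12*b2,
            s1*b2 + a2*s2 + a1*b12 - a12*b1,
            s1*b12 + a12*s2 + a1*b2 - a2*b1)

def vec(x, y):
    return (0.0, x, y, 0.0)

v, w = vec(1.0, 2.0), vec(3.0, -1.0)
prod = gp(v, w)
assert prod[0] == 1.0 * 3.0 + 2.0 * (-1.0)   # scalar part: v . w
assert prod[3] == 1.0 * (-1.0) - 2.0 * 3.0   # e1e2 part: v ^ w coefficient
assert prod[1] == prod[2] == 0.0             # no vector part survives

# Invertibility: v^{-1} = v / (v . v), so gp(v, v^{-1}) = 1
n2 = gp(v, v)[0]
r = gp(v, vec(v[1] / n2, v[2] / n2))
assert abs(r[0] - 1.0) < 1e-12 and all(abs(c) < 1e-12 for c in r[1:])
```

Neither the dot product alone nor the wedge alone ($v\wedge v=0$) would support the inverse computed in the last lines.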
635,989
<p>What counterexample can I use to prove that the following map (where $\mathbb{R}[x]$ denotes the space of real polynomials):</p> <p>$L :\mathbb{R}[x]\rightarrow\mathbb{R}[x],\ (L(p))(x)=p(x)p'(x)$ is not a linear transformation? I have already proven this using the definition, but it is hard to think of an example. I would be grateful for any help.</p>
Dan Rust
29,059
<p>Let $p$ be given by $p(x)=x$ and let $q=2p$. We note that $L(p)(x)=x$ but $L(q)(x)=2x\cdot 2=4x$ But if $L$ was linear then $L(q)(x)=L(2p)(x)=2L(p)(x)=2x\neq 4x$ and so we reach a contradiction. Hence $L$ is not linear.</p>
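For completeness, here is the counterexample checked mechanically with SymPy (my addition):

```python
import sympy as sp

x = sp.symbols('x')
L = lambda p: sp.expand(p * sp.diff(p, x))

p = x
assert L(p) == x             # (L(p))(x) = x * 1 = x
assert L(2 * p) == 4 * x     # (L(2p))(x) = 2x * 2 = 4x
assert L(2 * p) != 2 * L(p)  # homogeneity fails, so L is not linear
```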
3,306,089
<p>I came across this meme today:</p> <p><a href="https://i.stack.imgur.com/RfJoJ.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RfJoJ.jpg" alt="enter image description here"></a></p> <p>The counterproof is very trivial, but I see no one disproves it. Some even say that the meme might be true. Well, <span class="math-container">$\pi$</span> cannot contain itself.</p> <p>Well, everything means <span class="math-container">$\pi$</span> might contain <span class="math-container">$\pi$</span> somewhere in it. Say it starts going <span class="math-container">$\pi=3.1415...31415...$</span> again on the <span class="math-container">$p$</span> digit. Then it will have to do the same at the <span class="math-container">$2p$</span> digit, since the "nested <span class="math-container">$\pi$</span>" also contains another <span class="math-container">$\pi$</span> in it. <span class="math-container">$\pi$</span> then will be rational, which is wrong. Thus <span class="math-container">$\pi$</span> does not contain all possible combination.</p> <p>Is this proof correct? I'm not a mathematician so I'm afraid I make silly mistakes.</p>
Clayton
43,239
<p>An infinite non-repeating decimal does <strong>not</strong> imply that every possible number combination exists somewhere. Consider the number: <span class="math-container">$0.101001000100001\ldots$</span>, the pattern is easy to spot, but this is an irrational number because...you guessed it...it's an infinite, non-repeating decimal.</p> <p>It is conjectured that <span class="math-container">$\pi$</span> is a <a href="https://en.wikipedia.org/wiki/Normal_number" rel="nofollow noreferrer">normal number</a>, but this has not been proven. <a href="https://mathoverflow.net/questions/51853/what-is-the-state-of-our-ignorance-about-the-normality-of-pi">Here</a> is an old question from MathOverflow with some more details.</p>
2,713,311
<p>$ \lim_{x \to \infty} [\frac{x^2+1}{x+1}-ax-b]=0 \ $ then show that $ \ a=1, \ b=-1 \ $</p> <p><strong>Answer:</strong></p> <p>$ \lim_{x \to \infty} [\frac{x^2+1}{x+1}-ax-b]=0 \\ \Rightarrow \lim_{x \to \infty} [\frac{x^2+1-ax^2-ax-bx-b}{x+1}]=0 \\ \Rightarrow \lim_{x \to \infty} \frac{2x-2ax-a-b}{1}=0 \\ \Rightarrow 2x-2ax-a-b=0 \ \ (?) $ </p> <p>Comparing both sides , we get </p> <p>$ 2-2a=0 \\ a+b=0 \ $</p> <p>Solving , we get </p> <p>$ a=1 , \ b=-1 \ $</p> <p>But I am not sure about the above line where question mark is there.</p> <p><strong>Can you help me?</strong></p>
farruhota
425,072
<p>After your L'Hôpital step, the limit is: $$L=\lim\limits_{x\to \infty} ((2-2a)x-(a+b))$$ Consider the cases: $$L=\begin{cases} \pm\infty, \ 2-2a\ne 0 \\ -(a+b), \ 2-2a=0\end{cases}$$ So $L=0$ forces $2-2a=0$, i.e. $a=1$, and then: $$L=\begin{cases} 0, \ b=-1 \\ -(1+b)\ne 0, \ b \ne -1\end{cases}$$</p>
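A numerical illustration of the case analysis (my addition): only $a=1,\ b=-1$ drives the expression to $0$ (in fact with these values it equals $2/(x+1)$ exactly).

```python
def f(x, a, b):
    return (x**2 + 1) / (x + 1) - a * x - b

# a = 1, b = -1: the expression equals 2/(x+1), which tends to 0
for x in (1e3, 1e6):
    assert abs(f(x, 1, -1)) < 10 / x

# other choices fail: a = 1, b = 0 tends to -(1+b) = -1; a != 1 diverges
assert abs(f(1e6, 1, 0)) > 0.5
assert abs(f(1e6, 2, -1)) > 1e5
```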
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Tom Leinster
586
<p>77 instructional videos on category theory:</p> <p><a href="https://www.youtube.com/TheCatsters" rel="nofollow noreferrer">https://www.youtube.com/TheCatsters</a></p> <p>I know you said &quot;only one video per post&quot;, but I'm not posting 77 times...</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Harrison Brown
382
<p>You probably won't learn much actual math from it, but <a href="http://www.youtube.com/watch?v=E3tdimtxqic">One Geometry</a> is funnier and catchier than a Snoop Dogg parody about 3-manifolds has any right to be.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Harrison Brown
382
<p>I believe this was mentioned elsewhere, but for completeness, here's <a href="https://www.youtube.com/watch?v=ECQyFzzBHlo" rel="nofollow">Serre on writing</a>.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Scott Carter
36,108
<p>My good friend Professor Elvis Zap has the "Calculus Rap," the "Quantum Gravity Topological Quantum Field Theory Blues," a vid on constructing "Boy's Surface," "Drawing the hypercube (yes he knows there is a line missing in part 1)," A few things on quandles, and a bunch of precalculus and calculus videos. In order to embarrass all involved, he posted the series "Dehn's Dilemma" that was recorded in Italy last summer.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
sanokun
797
<p>This one is quite old but it was fun when I watched a few years ago. It's about Fermat's Last theorem.</p> <p><a href="http://www.archive.org/details/fermats_last_theorem">http://www.archive.org/details/fermats_last_theorem</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Jan Weidner
2,837
<p>Lots of Lie Theory talks: <a href="http://sms.cam.ac.uk/collection/533438?mediaOffset=20&amp;mediaMax=20&amp;mediaOrder=asc&amp;mediaSort=title#Media">http://sms.cam.ac.uk/collection/533438?mediaOffset=20&amp;mediaMax=20&amp;mediaOrder=asc&amp;mediaSort=title#Media</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Kelly Davis
1,400
<p>The series of videos from <a href="http://video.ias.edu/sm">IAS School of Mathematics</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Hans-Peter Stricker
2,672
<p><a href="http://www.youtube.com/watch?v=OmSbdvzbOzY" rel="nofollow">The Dot and the Line: A Romance in Lower Mathematics</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
graveolensa
4,672
<p>I'd like to think that <a href="http://www.youtube.com/diproton" rel="nofollow">my math art</a> is awesome, and start <a href="http://www.youtube.com/diproton#p/u/19/FWG5JblSsps" rel="nofollow">here</a>. </p> <p>The mapping behind that video is $(x,y,z)\mapsto(2\cos(z-y),\,2\sin(x-z),\,7\cos(y-x))$, and it has a singular Jacobian -- the immediate ramification of which is that there is overlap in the video.</p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
pi2000
4,778
<p>The Abdus Salam International Centre for Theoretical Physics has lots of lectures in mathematics and physics. Some of them are difficult to find in other places (Complex Analysis, Abstract Algebra, Topology, Functional Analysis, Algebraic Geometry, ...). For the same topic (e.g. Complex Analysis) there are lectures by two or more lecturers, so you can choose. <a href="http://www.ictp.it/" rel="nofollow">http://www.ictp.it/</a> <a href="http://www.ictp.tv/diploma/index07-08.php?activityid=MTH" rel="nofollow">http://www.ictp.tv/diploma/index07-08.php?activityid=MTH</a> <a href="http://www.ictp.tv/diploma/index08-09.php?activityid=MTH" rel="nofollow">http://www.ictp.tv/diploma/index08-09.php?activityid=MTH</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Vivek Shende
4,707
<p><a href="http://www.youtube.com/watch?v=SyD4p8_y8Kw" rel="nofollow noreferrer">Hitler learns topology</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Gerald Edgar
454
<p>Searching for a video relating to another question, I found this: <a href="http://www.youtube.com/watch?v=kklb1J6Ij_U&amp;hd=1" rel="nofollow">My Calculus Project</a></p>
1,714
<p>I know of two good mathematics videos available online, namely:</p> <ol> <li>Sphere inside out (<a href="https://www.youtube.com/watch?v=BVVfs4zKrgk" rel="nofollow noreferrer">part I</a> and <a href="https://www.youtube.com/watch?v=x7d13SgqUXg" rel="nofollow noreferrer">part II</a>)</li> <li><a href="https://www.youtube.com/watch?v=0z1fIsUNhO4" rel="nofollow noreferrer">Moebius transformation revealed</a></li> </ol> <p>Do you know of any other good math videos? Share.</p>
Peter Luthy
5,751
<p>Richard Feynman gave the 1964 Messenger Lectures at Cornell University --- this is an endowed lecture series to which a number of famous scholars have been invited, including several physicists. His lectures were recorded, and Bill Gates bought the rights to them and has provided them to the public for free.</p> <p><a href="http://research.microsoft.com/apps/tools/tuva/index.html" rel="nofollow">http://research.microsoft.com/apps/tools/tuva/index.html</a></p> <p>The content is mostly designed for a general audience, so if you have never learned physics you will learn something. And if you have studied plenty of physics already, you will be pleased to see the master at work in his prime. I very much enjoyed watching it.</p>
Sharanjit Paddam
15,071
<p><a href="http://www.youtube.com/user/njwildberger" rel="nofollow">http://www.youtube.com/user/njwildberger</a></p> <p>Excellent lectures by Norman Wildberger on topics including: Geometry, Algebraic Topology, Linear Algebra, Foundations of Mathematics, and history of Mathematics</p>
Aleks Vlasev
15,493
<p>I am not sure if this will qualify as math exactly, but it's amazing nonetheless. It is a film with Richard Feynman called "Feynman: Take the World from Another Point of View". Here is part 1:</p> <p><a href="http://www.youtube.com/watch?v=PsgBtOVzHKI" rel="nofollow">Feynman: Take the World from Another Point of View - Part 1/4</a></p>
Carl Najafi
19,027
<p><a href="https://www.youtube.com/watch?v=9MV65airaPA" rel="nofollow noreferrer">John Stillwell - ET Math: How different could it be?</a> A nice talk given at the SETI Institute.</p>
Justin Hilburn
333
<p>They filmed the <a href="http://nd.edu/~cmnd/conferences/topology/" rel="nofollow">FRG Conference on Topology and Field Theories</a> and put the lectures on <a href="http://www.youtube.com/user/NDdotEDU/videos" rel="nofollow">youtube</a>.</p>
David Corwin
1,355
<p>The first <a href="http://www.math.princeton.edu/events/seminars/minerva-lectures/inaugural-minerva-lectures-i-equidistribution" rel="nofollow">Minerva Lecture</a> by Jean-Pierre Serre at Princeton in Fall 2012 is online. There were two other lectures, and they did videotape them, but I can't find them online.</p>
736,624
<blockquote> <p>Heaviside's function $H(x)$ is defined as follows:</p> <p>$H(x) = 1$ if $x &gt; 0$, </p> <p>$H(x) = \frac 1 2$ if $x= 0$</p> <p>$H(x) = 0$ if $x &lt; 0$ </p> </blockquote> <p>Define $F(x) := \int^x_0 H(t) \ dt, \ x \in \mathbb R$. </p> <p>$H(x)$ is continuous except at $x=0$ so the integral exists?</p> <p><strong>Is $F$ continuous?</strong></p> <p>I have $F(x) = x$ for $x &gt; 0$ and $F(x) = 0$ for $x \le 0$. So $F$ is indeed continuous on $\mathbb R$.</p> <p><strong>Is F differentiable?</strong></p> <p>According to the fundamental theorem of calculus $F^{'}(x_0) = H(x_0)$.</p> <p>However now to the final question:</p> <p><strong>Is the result contrary to the Fundamental Theorem of Analysis (Fundamental Theorem of Calculus)?</strong></p> <p>I don't see why it should be? Have I made an error?</p>
Community
-1
<p>$F(x)$ is not differentiable at $0$. Graph the function and you'll see why. </p> <p>The FTC states that if $H(x)$ is integrable, then $F(x)=\int_0^x H(t) dt$ is continuous, and if $H(x)$ is continuous at $x_0$ then $F(x)$ is differentiable at $x_0$.</p> <p>Since $H$ is not continuous at $0$, the FTC makes no claim about the differentiability of $F$ at $0$, so there is no contradiction.</p>
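<p>As a numerical footnote (my own sketch, not part of the original answer): here $F(x)=\max(x,0)$, and the one-sided difference quotients at $0$ disagree, which is exactly the failure of differentiability.</p>

```python
# F(x) = integral of the Heaviside step from 0 to x, i.e. max(x, 0).
def F(x):
    return x if x > 0 else 0.0

h = 1e-6
right = (F(h) - F(0.0)) / h   # one-sided quotient from the right, ~1
left = (F(0.0) - F(-h)) / h   # one-sided quotient from the left,  ~0
```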
51,341
<p>I have a function that is a summation of several Gaussians. Working with a 1D Gaussian here, there are 3 variables for each Gaussian: <code>A</code>, <code>mx</code>, and <code>sigma</code>:</p> <p>$A \exp \left ( - \frac{\left ( x - mx \right )^{2}}{2 \times sigma^{2}} \right )$</p> <pre><code>A*Exp[-((x - mx)^2/(2 sigma^2))] </code></pre> <p>The number of Gaussians in the final function will vary each time the function is called, so my question is: <strong>What is the best way to define a function in Mathematica that can handle this variation, rather than hard-coding each Gaussian?</strong></p> <p>I was thinking along the lines of providing a list of <code>{A,mx,sigma}</code> to the function, so that if I want one Gaussian, I provide:</p> <pre><code>f[{{A,mx,sigma}}] </code></pre> <p>And if I want two Gaussians, I provide</p> <pre><code>f[{{A,mx,sigma},{A2,mx2,sigma2}}] </code></pre> <p>which would give:</p> <pre><code>A*Exp[-((x-mx)^2/(2sigma^2))] + A2*Exp[-((x-mx2)^2/(2sigma2^2))] </code></pre> <p>and so on.</p> <p>But I'm not at all sure how to design the function <code>f[]</code> to do this efficiently (for example, can it be done without a <code>For[]</code> loop? Can it be compiled in future if necessary?). </p> <p>Any help much appreciated - I did several searches on here and couldn't find anything, but I realise that might be because I'm not sure how to define my problem succinctly, so apologies if it has been asked before and I've missed it.</p>
Bob Hanlon
9,362
<p>Since all of the component functions are Listable</p> <pre><code>f[{a_, m_, s_}, x_] := Total[a*Exp[-(x - m)^2/(2*s^2)]] n = 5; amp = Array[a, n]; mean = Array[m, n]; sigma = Array[s, n]; f[{amp, mean, sigma}, x] </code></pre> <blockquote> <p>a[1]/E^((x - m[1])^2/ (2*s[1]^2)) + a[2]/E^((x - m[2])^2/ (2*s[2]^2)) + a[3]/E^((x - m[3])^2/ (2*s[3]^2)) + a[4]/E^((x - m[4])^2/ (2*s[4]^2)) + a[5]/E^((x - m[5])^2/ (2*s[5]^2))</p> </blockquote>
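<p>For readers who want the same idea outside Mathematica, here is a rough NumPy analogue (my own sketch, not Bob Hanlon's code; the names are mine): broadcasting over an $(n,3)$ parameter array plays the role of listability plus <code>Total</code>.</p>

```python
import numpy as np

def f(params, x):
    # params: rows of (A, mx, sigma); x: scalar evaluation point
    a, m, s = np.asarray(params, dtype=float).T
    return float(np.sum(a * np.exp(-(x - m) ** 2 / (2 * s ** 2))))

one = f([[2.0, 0.0, 1.0]], 0.0)                     # single Gaussian, evaluated at its peak
two = f([[1.0, 0.0, 1.0], [1.0, 5.0, 1.0]], 0.0)    # sum of two Gaussians
```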
1,452,425
<p>From what I have been told, everything in mathematics has a definition and everything is based on the rules of logic. For example, whether or not <a href="https://math.stackexchange.com/a/11155/171192">$0^0$ is $1$ is a simple matter of definition</a>.</p> <p><strong>My question is what the definition of a set is?</strong> </p> <p>I have noticed that many other definitions start with a set and then something. A <a href="https://en.wikipedia.org/wiki/Group_%28mathematics%29#Definition" rel="noreferrer">group is a set</a> with an operation, an equivalence relation is a set, a <a href="https://en.wikipedia.org/wiki/Function_%28mathematics%29#Definition" rel="noreferrer">function can be considered a set</a>, even the <a href="https://en.wikipedia.org/wiki/Natural_number#Constructions_based_on_set_theory" rel="noreferrer">natural numbers can be defined as sets</a> of other sets containing the empty set.</p> <p>I understand that there is a whole area of mathematics (and philosophy?) that deals with <a href="https://en.wikipedia.org/wiki/Set_theory#Axiomatic_set_theory" rel="noreferrer">set theory</a>. I have looked at a book about this and I understand next to nothing.</p> <p>From what little I can get, it seems a sets are "anything" that satisfies the axioms of set theory. It isn't enough to just say that a set is any collection of elements because of various paradoxes. <strong>So is it, for example, a right definition to say that a set is anything that satisfies the ZFC list of axioms?</strong></p>
thepiercingarrow
274,737
<p>I will be using the definition of a set most commonly used in mathematics and the real world.</p> <p>A set is, simply put, a collection. Anything can be a member of a set. One way to denote a set is to surround a list of elements separated by commas with brackets. A set of my favorite ice-cream types might be $\{Chocolate, Vanilla, Strawberry\}$. Sets can have one element: $\{5\}$ is a set whose only member is the number $5$. Sets can also be empty: $\{ \}$. The empty set is also denoted by $\emptyset$ (when written, it looks more like $\varnothing$). There are also infinite sets: $\{0,1,2,3,4,\cdots\}$ is the set of whole numbers. Sets can even have sets as members, for example $\{\{1,2,3\},\{2,3,4\},\{3,4,5\}\}$ is also a set.</p> <p>Some sets have a rule. For example, if the members of our set are all coordinates satisfying $y=x$, then the set is the locus of points where $y=x$. There are infinitely many such coordinates, including $(0,0)$, $(\pi,\pi)$, and $(\sqrt2,\sqrt2)$. Another function might be: given the domain of Bob, Alice, and Charlie, the output is their favorite ice-cream types. This function can be considered a set: $\{(Bob,Vanilla), (Alice, Chocolate), (Charlie, Chocolate)\}$.</p> <p>The ZFC list of axioms is just a list of rules that all sets have to follow - they don't really define a set. Sets have to follow the axioms; for example, one axiom claims the existence of a power set of a set - the power set of a set $S$ contains all subsets of $S$, including $\emptyset$ and $S$ itself. For example, the power set of $\{1,2\}$ is $\{\emptyset,\{1\},\{2\},\{1,2\}\}$. However, not all axioms apply to individual sets; for example, one axiom claims the existence of one and only one empty set ($\emptyset$). A single set can’t really “satisfy” this axiom.</p> <p>Hope this helps. (If I didn’t completely answer your question, feel free to comment and I’ll fix my answer.)</p> <p>EDIT: A set's order does not matter (this is one thing that distinguishes sets from sequences).</p>
1,722,948
<blockquote> <p>$$\frac{1}{x}-1&gt;0$$</p> </blockquote> <p>$$\therefore \frac{1}{x} &gt; 1$$</p> <p>$$\therefore 1 &gt; x$$</p> <p>However, as evident from the graph (as well as common sense), the right answer should be $1&gt;x&gt;0$. Typically, I wouldn't multiple the x on both sides as I don't know its sign, but as I was unable to factories the LHS, I did so. How can I get this result algebraically?</p>
N. F. Taussig
173,070
<p>You have $$\frac{1}{x} - 1 &gt; 0$$ Forming a common denominator yields $$\frac{1 - x}{x} &gt; 0$$ The inequality is true if the numerator and denominator have the same sign.</p>
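<p>Spelling out the sign analysis in the last step (my completion, consistent with the graph mentioned in the question):</p>

```latex
\frac{1-x}{x} > 0
\iff
\begin{cases}
1-x>0 \ \text{and}\ x>0, & \text{i.e. } 0<x<1,\\
1-x<0 \ \text{and}\ x<0, & \text{impossible, since } x<0 \implies 1-x>1>0,
\end{cases}
\qquad\therefore\quad 0<x<1.
```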
3,002,325
<p>Prove that <span class="math-container">$t$</span> is irrational, where <span class="math-container">$ t = a-bs $</span>, given that <span class="math-container">$a$</span> and <span class="math-container">$b$</span> are rational numbers, <span class="math-container">$b \neq 0$</span>, and <span class="math-container">$s$</span> is irrational. Hence show that <span class="math-container">$(\sqrt3-1)/(\sqrt3+1)$</span> is irrational.</p>
Eric kioko
577,696
<p>For the first part: from <span class="math-container">$t = a-bs$</span> we get</p> <p><span class="math-container">$a/b -s = t/b$</span></p> <p><span class="math-container">$(a-t)/b=s$</span></p> <p>Assuming <span class="math-container">$t$</span> were rational would mean <span class="math-container">$ s $</span> is rational, since rational numbers are closed under subtraction and under division by a nonzero rational.</p> <p>This contradicts <span class="math-container">$s$</span> being irrational. Therefore <span class="math-container">$t$</span> is irrational.</p> <p>For the second part: <span class="math-container">$(\surd3-1)/(\surd3+1) \cdot (\surd3-1)/(\surd3-1) =2-\surd3$</span></p> <p>Substituting <span class="math-container">$2-\surd3$</span> in <span class="math-container">$t=a-bs$</span> with <span class="math-container">$a=2,\ b=1,\ s=\surd3$</span> gives <span class="math-container">$t=2-\surd3$</span>.</p> <p>Therefore <span class="math-container">$t=2-\surd3$</span> is irrational too.</p>
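<p>A quick numerical sanity check of the rationalization step (my own addition, not part of the answer):</p>

```python
import math

lhs = (math.sqrt(3) - 1) / (math.sqrt(3) + 1)  # original expression
rhs = 2 - math.sqrt(3)                         # after multiplying by (sqrt(3)-1)/(sqrt(3)-1)
```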
1,130,302
<p>I am struggling with following challenge in my free time programming project $-$ how is it possible to make reflection vector that reflects along normal with angle that is not larger than some $\alpha$?</p> <p><img src="https://i.stack.imgur.com/6bmgv.png" alt="Example of normal and limited angle reflection"></p> <p>I have already seen classical reflection formula $ r = d - 2 (d \cdot n) n $, which unfortunately does not provide answer to my problem.</p> <p>I tried to fiddle around with JS atan2() function, as described <a href="https://stackoverflow.com/questions/21483999/using-atan2-to-find-angle-between-two-vectors">here</a>, but it didn't work for all cases and I would appreciate some elegant general solution instead of patching special cases.</p> <p>Thank you in advance.</p>
MooS
211,913
<p>Let $G$ act on the cosets of $H$, we obtain a morphism $G \to S_3$ with Kernel $N$. It is well known that we have $N \leq H$, we need to prove equality. If not, then we have $[G:N]= 6$ and $G \to S_3$ is surjective. By the sign we obtain a surjection $G \to S_3 \to \{-1,1\}$, hence a subgroup of index $2$.</p>
2,628,149
<p>I am having trouble finding the general solution of the following second order ODE for $y = y(x)$ without constant coefficients: </p> <p>$3x^2y'' = 6y$<br> $x&gt;0$</p> <p>I realise that it may be possible to simply guess the form of the solution and substitute it back into the the equation but i do not wish to use that approach here. </p> <p>I would appreciate any help, thanks. </p>
Parcly Taxel
357,390
<p>As mentioned in the comments (though I did not see them while typing this): $$15^{1/4}=16^{1/4}\cdot\left(\frac{15}{16}\right)^{1/4}=2\cdot\left(1-\frac1{16}\right)^{1/4}$$ Thus the binomial series can be applied to $\left(1-\frac1{16}\right)^{1/4}$, the result to be doubled afterwards. $\frac1{16}$ is small, so this will converge quickly.</p>
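<p>A small sketch of the suggested computation (my own code, not from the answer): partial sums of the binomial series $(1+u)^{1/4}=\sum_k \binom{1/4}{k}u^k$ with $u=-\frac1{16}$, doubled afterwards.</p>

```python
def gen_binom(alpha, k):
    # generalized binomial coefficient: alpha*(alpha-1)*...*(alpha-k+1)/k!
    out = 1.0
    for i in range(k):
        out *= (alpha - i) / (i + 1)
    return out

def fourth_root_15(terms):
    u = -1.0 / 16.0
    return 2.0 * sum(gen_binom(0.25, k) * u ** k for k in range(terms))

approx = fourth_root_15(6)   # since |u| = 1/16, a few terms converge quickly
```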
1,335,878
<p>Let $N\unlhd K$ be a normal subgroup of a given group $K$ and let $$q:K\to K/N$$ be the natural quotient map. Let $A\subseteq K$ be a subset of $K$ and let the <a href="https://en.wikipedia.org/wiki/Conjugate_closure" rel="nofollow">conjugate closure</a> of $A$ in $K$ be denoted by $\langle A^K\rangle$.</p> <p><strong>Question</strong></p> <p>Is it true that if $\langle A^K\rangle$ is Abelian, then $\langle(q(A))^{K/N}\rangle$ is also Abelian?</p>
Hagen von Eitzen
39,174
<p>Assume $f_0\in X$ and $\|f_0\|_\infty=1$. Assume wlog. that $c:=\int_0^1 f_0(x)\,\mathrm dx&gt;0$. As $f_0(0)=0$ there exists $a&gt;0$ such that $|f_0(x)|&lt;\frac12$ for $0\le x\le a$. Then $$c\le \int_0^1 |f_0(x)|\,\mathrm dx\le \int_0^a |f_0(x)|\,\mathrm dx+ \int_a^1 |f_0(x)|\,\mathrm dx\le \frac a2+1-a&lt;1$$ Now consider functions $g$ of the form $$g_{q,m}(x)=\max\{f_0(x)-q,-mx\} $$ with $c&lt;q&lt;1$ and $m&gt;0$ and see how $\|f_0-g_{q,m}\|_\infty$ and $\int_0^1g_{q,m}\,\mathrm dx$ behave as $q\to 1$ and $m\to\infty$ (you may have to pick a suitable <em>path</em> to $(1,\infty)$ though). </p>
2,832,374
<p>I'm given the following definition asked to prove the following theorem:</p> <p>Definition: Let $X$ be a set and suppose $C$ is a collection of subsets of $X$. Then $\cup \mathbf{C}=\{x \in X : \exists C\in \mathbf{C}(x\in C)\}$</p> <p>Theorem: Let $\mathbf{C,D}$ be collections of subsets of a set $X$. Prove that $\cup ( \mathbf{C} \cup \mathbf{D}) = (\cup \mathbf{C}) \cup (\cup\mathbf{D})$ </p> <hr> <p>By my reading of the definition, I run into two problems:</p> <p>Firstly I think there is a type error since $ ( \mathbf{C} \cup \mathbf{D}) = \cup\{\mathbf{C}, \mathbf{D}\}$ is not a collection of subsets of $X$ (i.e. it is not a set whose elements are subsets of $X$); instead it is a set whose elements are collections of subsets of $X$.</p> <p>Second, if we ignore the type error and plug into the definition, we get: $\cup \{\mathbf{C}, \mathbf{D}\} =\{x\in X:\exists C\in \{\mathbf{C}, \mathbf{D}\}(x\in C)\}=\{x\in X:x\in \mathbf{C} \lor x \in \mathbf{D}\}$, however, since $\boldsymbol{C},\boldsymbol{D}$ are collections, all their elements are sets. Since $x$ is not a set, $x\notin\boldsymbol{C}\land x\notin\boldsymbol{D}$. Thus $\cup \{\mathbf{C}, \mathbf{D}\}=\emptyset$ </p> <p>However this can't be right since the other side of the equality; $(\cup \mathbf{C}) \cup (\cup\mathbf{D})\neq \emptyset$ in general. What am I missing?</p>
mathcounterexamples.net
187,663
<p><strong>Let's abstract a bit the question...</strong></p> <p>We are asked to prove that a closed ball $A$ with radius equal to $1$ in a normed vector space is closed. This is very generic.</p> <p>A simple proof is to consider the complement $V \setminus A$ and to prove that it is open. This is simple as for $f \in V \setminus A$, we have $\Vert f \Vert = r &gt; 1$. The open ball centered on $f$ with radius $\frac{r-1}{2}$ is included in $V \setminus A$. This ends the proof.</p>
3,157,338
<p>If we have a linear recurrence relation on a sequence <span class="math-container">$\{x_n\}$</span>, then I know how to find the worst-case asymptotic growth. We consider the largest absolute value <span class="math-container">$\alpha$</span> of any root of the characteristic polynomial. Then, independent of the initial values, the asymptotic growth is <span class="math-container">$x_n=\mathcal{O}(\alpha^n)$</span>. I realize that sometimes you get an extra polynomial factor for roots of higher multiplicity, but let us ignore all polynomial factors for convenience.</p> <p>For example if <span class="math-container">$x_n=x_{n-1}+x_{n-2}$</span>, then <span class="math-container">$x_n=\mathcal{O}(\phi^n)$</span> where <span class="math-container">$\phi$</span> is the golden ratio, because that is the largest absolute value of any root of <span class="math-container">$x^2-x-1$</span>.</p> <p>My question is about the following. Say we have two linear recurrence relations on a sequence <span class="math-container">$\{x_n\}$</span> such that the first holds for some <span class="math-container">$n$</span>, but the second holds for other <span class="math-container">$n$</span>. For example, say we know that for all <span class="math-container">$n$</span> we have either <span class="math-container">$x_n=x_{n-1}+x_{n-2}$</span>, or <span class="math-container">$x_n=2x_{n-2}+4x_{n-3}$</span>, but we do not know which of the two holds for which values of <span class="math-container">$n$</span>.</p> <p>My intuition tells me that, to find the worst-case asymptotic growth, I just have to take the worst asymptotic growth of any of the recurrences in question. In the example I just gave we hence have <span class="math-container">$x_n=\mathcal{O}(2^n)$</span> by the second recurrence. However, I cannot come up with a rigorous argument for this to be the case.
So my question is whether my intuition is correct.</p> <p>For some context, I am studying branching algorithms for <span class="math-container">$NP$</span>-hard problems. Often times you want to have multiple cases for how you should branch, which is where the multiple recurrences come from. However, there is no clear way to predict which case will pop up, and hence which recurrence holds for which index. Note that you can get arbitrarily many different linear recurrences.</p>
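<p>The dominant characteristic roots mentioned above can be checked numerically (my own sketch; <code>numpy.roots</code> returns all roots of a polynomial given its coefficients):</p>

```python
import numpy as np

# x_n = x_{n-1} + x_{n-2}   ->   x^2 - x - 1,   dominant root = golden ratio
phi = max(abs(r) for r in np.roots([1, -1, -1]))

# x_n = 2 x_{n-2} + 4 x_{n-3}   ->   x^3 - 2x - 4,   dominant root = 2
rho = max(abs(r) for r in np.roots([1, 0, -2, -4]))
```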
Sangchul Lee
9,340
<p><strong>Method 1.</strong> Under the unitary Fourier transform <span class="math-container">$ \mathcal{F}[f](\xi) = \int_{\mathbb{R}} f(x)e^{-2\pi i \xi x} \, \mathrm{d}x $</span> defined on the Schwartz space <span class="math-container">$\mathcal{S}(\mathbb{R})$</span>,</p> <p><span class="math-container">$$ \mathcal{F}\big[e^{\alpha \partial_x^2} f(x)\big](\xi) = \sum_{n=0}^{\infty} \frac{\alpha^n}{n!} \mathcal{F}[f^{(2n)}](\xi) = \sum_{n=0}^{\infty} \frac{\alpha^n}{n!} (2\pi i \xi)^{2n} \mathcal{F}[f](\xi) = e^{-4\pi^2 \alpha \xi^2} \mathcal{F}[f](\xi). $$</span></p> <p>Now recall that, if <span class="math-container">$\operatorname{Re}(\sigma) &gt; 0$</span> and <span class="math-container">$f(x) = \exp\left\{ -\frac{(2\sigma x+C)^2}{4\sigma} \right\}$</span>, then</p> <p><span class="math-container">$$ \mathcal{F}\left[ f \right](\xi) = \sqrt{\frac{\pi}{\sigma}} \exp\left\{ \frac{-\pi^2\xi^2 + iC\pi\xi}{\sigma }\right\}. $$</span></p> <p>Plugging this back,</p> <p><span class="math-container">$$ \mathcal{F}\big[e^{\alpha \partial_x^2} f(x)\big](\xi) = \sqrt{\frac{\pi}{\sigma}} \exp\left\{\frac{ -(1+4\alpha\sigma)\pi^2\xi^2 + iC\pi\xi}{\sigma }\right\}. $$</span></p> <p>Taking inverse Fourier transform,</p> <p><span class="math-container">$$ e^{\alpha \partial_x^2} f(x) = \frac{1}{\sqrt{4\alpha\sigma+1}} \exp\left\{ -\frac{(2\sigma x+C)^2}{4\sigma(1+4\alpha \sigma)} \right\}. $$</span></p> <hr> <p><strong>Method 2.</strong> Assume for a moment that <span class="math-container">$\alpha$</span> is real.
Since <span class="math-container">$\partial_x^2$</span> is the infinitesimal generator of the heat equation, we know that <span class="math-container">$u(t, x) = e^{t\partial_x^2} f(x)$</span> solves the equation</p> <p><span class="math-container">$$ \partial_t u = \partial_x^2 u \qquad \text{and} \qquad u(0, x) = f(x) $$</span></p> <p>So the solution can be written as the convolution with the fundamental solution of the heat equation:</p> <p><span class="math-container">$$ u(t, x) = \int_{\mathbb{R}} f(y) \cdot \frac{1}{\sqrt{4\pi t}} e^{-(x-y)^2/4t} \, \mathrm{d}y. $$</span></p> <p>Plugging <span class="math-container">$f(x) = \exp\Big\{ -\frac{(2\sigma x+C)^2}{4\sigma} \Big\}$</span> and computing the resulting gaussian integral, we obtain the same as before. Finally, we can check that the map <span class="math-container">$\alpha \mapsto e^{\alpha \partial_x^2}f(x)$</span> is analytic, and so, the same answer applies to complex <span class="math-container">$\alpha$</span> by the principle of analytic continuation.</p>
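<p>Method 2 can be verified numerically (my own sketch, with $\sigma=1$, $C=0$, so $f(x)=e^{-x^2}$ and the claimed answer is $(1+4t)^{-1/2}e^{-x^2/(1+4t)}$): discretize the convolution with the heat kernel on a wide grid and compare with the closed form.</p>

```python
import numpy as np

t, x = 0.1, 0.7
y = np.linspace(-20.0, 20.0, 400001)
dy = y[1] - y[0]

# fundamental solution of the heat equation, evaluated at x - y
kernel = np.exp(-(x - y) ** 2 / (4 * t)) / np.sqrt(4 * np.pi * t)
numeric = float(np.sum(np.exp(-y ** 2) * kernel) * dy)   # Riemann sum of the convolution
closed = float(np.exp(-x ** 2 / (1 + 4 * t)) / np.sqrt(1 + 4 * t))
```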