| qid | question | author | author_id | answer |
|---|---|---|---|---|
865,598 | <p>How can I calculate this value?</p>
<p>$$\cot\left(\sin^{-1}\left(-\frac12\right)\right)$$</p>
| Ryan | 164,086 | <p>Let $\arcsin(-1/2) = x$.
This implies $\sin x = -1/2$ with $x \in [-\pi/2, \pi/2]$,
which implies $x = -30^\circ$ (not $120^\circ$: $\sin 120^\circ = +\sqrt{3}/2$, and $120^\circ$ lies outside the range of $\arcsin$).
Now you need to find the value of $\cot x$:
$$\cot(-30^\circ) = \frac{\cos(-30^\circ)}{\sin(-30^\circ)} = \frac{\sqrt{3}/2}{-1/2} = -\sqrt{3}.$$</p>
|
756,236 | <p>The question is to write the general solution for this recurrence relation:</p>
<p>$y_{k+2} - 4y_{k+1} + 3y_{k} = -4k$.</p>
<p>I first solved the homogeneous equation $y_{k+2} - 4y_{k+1} + 3y_{k} = 0$, by writing the auxiliary equation $r^2 - 4r + 3 = (r-3)(r-1) = 0$. Thus $y_k^{h} = c_1(1)^k + c_2 (3)^k$. The general solution is just $y_k^{gen} = y_k^{h} + y_k^{p}$. My trouble is coming up with a particular solution. I keep coming up with $y_k^{p} = 2k^2$ when that doesn't work, but $k^2$ works, so my answer is close. I've gone through the arithmetic several times and cannot spot the mistake; here's the work:</p>
<p>The particular solution is of the form $y_k^{p} = a + bk$. Plugging into the recurrence relation: $a + b(k+2) - 4(a + b(k+1)) + 3(a + bk) = (a - 4a + 3a) + (bk - 4bk + 3bk) + (2b - 4b) = 0 + 0 - 2b = -4k$.</p>
<p>Thus $b = 2k$, and since our $y_k^p = a + bk$, it doesn't matter what we pick $a$ to be, so choose $a = 0$, which gives us $y_k^p = 2k^2$.</p>
<p>However, $2k^2$ doesn't satisfy the recurrence relation:
$2(k+2)^2 - 8(k+1)^2 + 6k^2 = (2k^2 - 8k^2 + 6k^2) + (8k - 16k) + (8 - 8) = -8k \ne -4k$.</p>
<p>Where is the error in my reasoning? I know $y_k^p = k^2$ works, but why do I keep coming up with $2k^2$?</p>
| mercio | 17,445 | <p>You started with the assumption that there existed two constants $a,b$ such that $y_k = a+bk$ was a solution. After a bit of algebra you end up with<br>
$y_k$ is a solution $\iff \forall k, y_{k+2}-4y_{k+1}+3y_k = -4k \iff \forall k, b = 2k$.<br>
So $b$ has to be equal to every even integer at once. Obviously this is impossible: you have reached a contradiction, and there is no solution of the form $a+bk$.</p>
<p>If $z_k = y_{k+2}-4y_{k+1}+3y_k$, then you can check that $z_{k+1}-z_k$ is constant, and then that $z_{k+2}-2z_{k+1}+z_k = 0$. So the solutions to the original equation are a subset of the solutions of a larger homogeneous linear recurrence relation. Its characteristic polynomial is $(r-1)^3(r-3)$, so the solutions to this homogeneous equation end up being $\{y_k = a+bk+ck^2+d3^k \mid (a,b,c,d) \in \Bbb R^4\}$, so you only have to check which of those satisfy the original equation, and obtain the value of $b$ and $c$.</p>
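<p>(Filling in the computation the answer leaves to the reader: plugging $y_k = a+bk+ck^2$ into the left side gives, term by term, $a(1-4+3)=0$, $b\big((k+2)-4(k+1)+3k\big)=-2b$ and $c\big((k+2)^2-4(k+1)^2+3k^2\big)=-4ck$, so<br>
$$-4ck-2b=-4k \iff c=1,\ b=0,$$<br>
which recovers the particular solution $y_k^p = k^2$.)</p>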
|
3,012,090 | <p>Let <span class="math-container">$x>0$</span>. I have to prove that</p>
<p><span class="math-container">$$
\int_{0}^{\infty}\frac{\cos x}{x^p}dx=\frac{\pi}{2\Gamma(p)\cos(p\frac{\pi}{2})}\tag{1}
$$</span></p>
<p>by converting the integral on the left side to a double integral using the expression below:</p>
<p><span class="math-container">$$
\frac{1}{x^p}=\frac{1}{\Gamma(p)}\int_{0}^{\infty}e^{-xt}t^{p-1}dt\tag{2}
$$</span></p>
<p>By plugging <span class="math-container">$(2)$</span> into <span class="math-container">$(1)$</span> I get the following double integral:</p>
<p><span class="math-container">$$
\frac{1}{\Gamma(p)}\int_{0}^{\infty}\int_{0}^{\infty}e^{-xt}t^{p-1}\cos xdtdx\tag{3}
$$</span></p>
<p>However, I am unable to proceed any further, as I am unclear about what method I should use to compute this integral. I thought that an appropriate change of variables could transform it into a product of two gamma functions but I cannot see how that would work. Any help would be greatly appreciated.</p>
| mrtaurho | 537,079 | <p>Your given integral is closely related to the Mellin transform and can be evaluated by using <a href="https://en.wikipedia.org/wiki/Ramanujan%27s_master_theorem" rel="nofollow noreferrer">Ramanujan's Master Theorem</a>.</p>
<blockquote>
<p><strong>Ramanujan's Master Theorem</strong></p>
<p>Let <span class="math-container">$f(x)$</span> be an analytic function with a MacLaurin Expansion of the form
<span class="math-container">$$f(x)=\sum_{k=0}^{\infty}\frac{\phi(k)}{k!}(-x)^k$$</span>then the Mellin Transform of this function is given by
<span class="math-container">$$\int_0^{\infty}x^{s-1}f(x)dx=\Gamma(s)\phi(-s)$$</span></p>
</blockquote>
<p>Therefore, expand the cosine function as its Taylor series to get</p>
<p><span class="math-container">$$\begin{align}
\mathfrak{I}=\int_0^{\infty}\cos(x)x^{-p}dx&=\int_0^{\infty}x^{-p}\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!}dx
\end{align}$$</span></p>
<p>In order to bring the above integral into the form needed for Ramanujan's Master Theorem, apply the substitution <span class="math-container">$x^2=u$</span>. So we further get</p>
<p><span class="math-container">$$\begin{align}
\mathfrak{I}=\int_0^{\infty}x^{-p}\sum_{n=0}^{\infty}(-1)^n\frac{x^{2n}}{(2n)!}dx&=\int_0^{\infty}x^{-p}\sum_{n=0}^{\infty}\frac{1}{(2n)!}(-x^2)^ndx\\
&=\int_0^{\infty}u^{-p/2}\sum_{n=0}^{\infty}\frac{1}{(2n)!}(-u)^n\frac{du}{2\sqrt{u}}\\
&=\frac12\int_0^{\infty}u^{-(p+1)/2}\sum_{n=0}^{\infty}\frac{1}{(2n)!}(-u)^ndu\\
&=\frac12\int_0^{\infty}u^{-(p+1)/2}\sum_{n=0}^{\infty}\frac{n!/(2n)!}{n!}(-u)^ndu
\end{align}$$</span></p>
<p>By using the relation <span class="math-container">$\Gamma(n)=(n-1)!$</span> which is valid for all <span class="math-container">$n\in\mathbb N$</span> we can consider the last integral as an application of Ramanujan's Master Theorem with <span class="math-container">$s=-\frac{p-1}2$</span> and <span class="math-container">$\phi(n)=\frac{\Gamma(n+1)}{\Gamma(2n+1)}$</span>. By finally using the Theorem we obtain</p>
<p><span class="math-container">$$\begin{align}
\mathfrak{I}=\frac12\int_0^{\infty}u^{-(p+1)/2}\sum_{n=0}^{\infty}\frac{n!/(2n)!}{n!}(-u)^ndu&=\frac12\Gamma\left(-\frac{p-1}2\right)\frac{\Gamma\left(\frac{p-1}2+1\right)}{\Gamma\left(2\left(\frac{p-1}2\right)+1\right)}\\
&=\frac1{2\Gamma(p)}\Gamma\left(1+\frac{p-1}2\right)\Gamma\left(-\frac{p-1}2\right)
\end{align}$$</span></p>
<p>Now by applying Euler's Reflection Formula with <span class="math-container">$z=1+\frac{p-1}2$</span> we moreover get</p>
<p><span class="math-container">$$\begin{align}
\mathfrak{I}=\frac1{2\Gamma(p)}\Gamma\left(1+\frac{p-1}2\right)\Gamma\left(-\frac{p-1}2\right)&=\frac1{2\Gamma(p)}\frac{\pi}{\sin\left(\pi\left(1+\frac{p-1}2\right)\right)}\\
&=\frac1{2\Gamma(p)}\frac{\pi}{\sin\left(\frac{p\pi}2+\frac{\pi}2\right)}\\
&=\frac1{2\Gamma(p)}\frac{\pi}{\cos\left(\frac{p\pi}2\right)}
\end{align}$$</span></p>
<p>where within the last step the fundamental relation <span class="math-container">$\sin\left(x+\frac{\pi}2\right)=\cos(x)$</span> was used. Thus for the original integral <span class="math-container">$\mathfrak{I}$</span> we get</p>
<blockquote>
<p><span class="math-container">$$\mathfrak{I}=\int_0^{\infty}\cos(x)x^{-p}dx=\frac{\pi}{2\Gamma(p)\cos\left(p\frac{\pi}2\right)}$$</span></p>
</blockquote>
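<p>(A sanity check of the boxed result, not part of the original derivation: at <span class="math-container">$p=\frac12$</span> it reduces to the classical Fresnel-type integral)</p>
<p><span class="math-container">$$\int_0^{\infty}\frac{\cos x}{\sqrt x}\,dx=\frac{\pi}{2\Gamma\left(\frac12\right)\cos\frac{\pi}4}=\frac{\pi}{2\sqrt{\pi}\cdot\frac{\sqrt{2}}{2}}=\sqrt{\frac{\pi}{2}},$$</span></p>
<p>which is the known value of this integral.</p>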
|
2,810,008 | <p>Can I investigate this limit and if yes, how? $${i^∞}$$</p>
<p>I am at a loss of ideas and maybe it is undefined?</p>
| José Carlos Santos | 446,262 | <p>Since the sequence $(i^n)_{n\in\mathbb N}$ is the sequence$$i,-1,-i,1,i,-1,-i,1,\ldots$$your sequence diverges.</p>
|
2,810,008 | <p>Can I investigate this limit and if yes, how? $${i^∞}$$</p>
<p>I am at a loss of ideas and maybe it is undefined?</p>
| C Monsour | 552,399 | <p>The limit doesn't exist in any event, but you should still be specific about <em>what</em> precisely is tending to $\infty$, because you could talk about the limit <em>set</em> (i.e., in this case, the set of subsequential limits), and then it matters. If the exponents are integers, the limit set is $\{1,-1,i,-i\}$. If the exponents are real numbers, the limit set is the unit circle. If the exponents are complex numbers (tending to infinity in absolute value) then the limit set is the complex plane.</p>
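<p>(Making the real-exponent case concrete, with the principal logarithm: for $t\in\mathbb R$, $$i^t=e^{t\log i}=e^{i\pi t/2},$$ so $|i^t|=1$ for all $t$, and as $t\to\infty$ the point $e^{i\pi t/2}$ keeps winding around the unit circle; that is why the limit set is the whole circle.)</p>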
|
1,671,111 | <p>I'm looking for an elegant way to show that, among <em>non-negative</em> numbers,
$$
\max \{a_1 + b_1, \dots, a_n + b_n\} \leq \max \{a_1, \dots, a_n\} + \max \{b_1, \dots, b_n\}
$$</p>
<p>I can show that $\max \{a+b, c+d\} \leq \max \{a,c\} + \max \{b,d\}$ by exhaustively checking all possibilities of orderings among $a,c$ and $b,d$.</p>
<p>But, I feel like there should be a more intuitive/efficient way to show this property for arbitrary sums like the one above.</p>
| carmichael561 | 314,708 | <p>For any index <span class="math-container">$j$</span>, <span class="math-container">$a_j+b_j\leq \max\{a_1,\dots,a_n\}+\max\{b_1,\dots,b_n\}$</span>. Now take the maximum over all <span class="math-container">$j$</span>.</p>
|
3,815,898 | <p>In my stats lecture, my professor introduced this theorem, however I don't quite understand what the theorem means and how he got from step 3 to 4. Also what does the prime symbol mean? Could someone paraphrase what the theorem means or point me to some online resource where I could learn more about this theorem? Thanks</p>
<p>Theorem: If <span class="math-container">$X$</span> is a continuous random variable with cumulative distribution function (CDF) <span class="math-container">$F$</span>, then <span class="math-container">$Y = F(X) \sim U[0,1]$</span></p>
<p>Proof:
Let CDF of <span class="math-container">$Y$</span> = <span class="math-container">$F_y$</span>
<span class="math-container">$$F_y=P(Y \leq y)$$</span>
<span class="math-container">$$=P(F(X) \leq y)$$</span>
<span class="math-container">$$=P(X \leq F^{-1}(y))$$</span>
<span class="math-container">$$=F(X^{'})$$</span>
<span class="math-container">$$=F(F^{-1}(y))=y$$</span></p>
<p><span class="math-container">$$Y \sim U[0,1]$$</span></p>
| lonza leggiera | 632,373 | <p>The proof you cite glosses over a technical point which arises when the distribution function <span class="math-container">$\ F\ $</span> is not <em>strictly</em> increasing. In that case, it is not injective (one-to-one), so it's not altogether clear what the definition of <span class="math-container">$\ F^{-1}\ $</span> should be. However, the last step of the proof only requires that <span class="math-container">$\ F\left(F^{-1}(y)\right)=y\ $</span>—that is, that <span class="math-container">$\ F^{-1}\ $</span> be a <em>left</em> inverse of <span class="math-container">$\ F\ $</span>. The other essential property needed (in the second step of the proof) is that
<span class="math-container">$$
F(x)\le y \iff x\le F^{-1}(y)\ .
$$</span>
The function <span class="math-container">$\ F^{-1}\ $</span> defined by
<span class="math-container">$$
F^{-1}(y)=\inf\left\{x\,| y\le F(x)\right\}
$$</span>
has both of these properties, so if you take that as its definition then the proof holds up.</p>
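<p>(A small example of this generalized inverse, with a made-up <span class="math-container">$F$</span> that has a flat piece: let <span class="math-container">$F(x)=x/2$</span> on <span class="math-container">$[0,1]$</span>, <span class="math-container">$F(x)=1/2$</span> on <span class="math-container">$[1,2]$</span> and <span class="math-container">$F(x)=(x-1)/2$</span> on <span class="math-container">$[2,3]$</span>. Then
<span class="math-container">$$F^{-1}(1/2)=\inf\{x\mid 1/2\le F(x)\}=1,\qquad F\left(F^{-1}(1/2)\right)=F(1)=1/2,$$</span>
so the left-inverse property holds even though every point of <span class="math-container">$[1,2]$</span> satisfies <span class="math-container">$F(x)=1/2$</span>.)</p>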
|
4,631,618 | <p>Consider this absolute value quadratic inequality</p>
<p><span class="math-container">$$ |x^2-4| < |x^2+2| $$</span></p>
<p>The right side is always positive for all real numbers, so the absolute value is not needed.</p>
<p>Now consider the cases for the left absolute value</p>
<ol>
<li><span class="math-container">$$ x^2-4 \geq 0 $$</span></li>
</ol>
<p>We get <span class="math-container">$$ x \geq \pm 2 $$</span></p>
<p>Solving the first case <span class="math-container">$$ x^2 - 4 < x^2+2 $$</span> gives <span class="math-container">$$ 0 < 6$$</span> This is true for all real numbers; taking into consideration the boundary that $x$ has to be greater than or equal to $+2$ and $-2$, the first part of the solution I think should be <span class="math-container">$$ L_1 = [2, \infty) $$</span></p>
<ol start="2">
<li><span class="math-container">$$ x^2-4 < 0$$</span> <span class="math-container">$$ x < \pm 2 $$</span></li>
</ol>
<p>Solving the second case I get <span class="math-container">$$ x > \pm 1 $$</span> The solution for the second case, considering the boundary from case 2., should be <span class="math-container">$$ L_2 = (-2,-1) \cup (1,2) $$</span> Final solution:</p>
<p><span class="math-container">$$ L = (-2,-1) \cup (2,\infty) $$</span></p>
<p>According to the solutions,this is wrong; it should be <span class="math-container">$$ L = (-\infty,-1) \cup (1,\infty) $$</span> Rechecking their L1 and L2; L2 should be correct but for L1 they have <span class="math-container">$$ L_1 = R \backslash (-2,2) $$</span></p>
<p>So all real numbers except $-2$ and $2$? Can anyone explain how this is the solution. First, the sign is greater-or-equal at $+2$ and $-2$; shouldn't those numbers be included? Also, we are looking for numbers GREATER than $-2$ and $+2$; since $2$ is greater than $-2$, I assumed we only need to take numbers from $2$ to infinity, so how does negative infinity come into consideration here?</p>
<p>Thanks in advance!</p>
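<p>(One compact way to see the book's answer: since both sides are nonnegative, squaring is an equivalence, and <span class="math-container">$$|x^2-4|<|x^2+2| \iff (x^2-4)^2<(x^2+2)^2 \iff -6(2x^2-2)<0 \iff x^2>1,$$</span> which gives <span class="math-container">$L=(-\infty,-1)\cup(1,\infty)$</span> directly, matching the stated solution.)</p>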
| Anne Bauval | 386,889 | <p><span class="math-container">$4n^2=(2n-2)(2n-3)+5(2n-2)+4$</span> and for <span class="math-container">$n=1,$</span> this reduces to <span class="math-container">$4=0+5\cdot0+4,$</span> hence
<span class="math-container">$$4\sum_{n=1}^{\infty}\frac{n^2}{(2n-2)!}=\sum_{n=2}^\infty\frac1{(2n-4)!}+ \sum_{n=2}^\infty\frac5{(2n-3)!}+\sum_{n=1}^\infty\frac4{(2n-2)!}$$</span>
<span class="math-container">$$=\sum_{k=0}^\infty\frac{1+4}{(2k)!}+\sum_{k=0}^\infty\frac5{(2k+1)!}$$</span>
<span class="math-container">$$=5\sum_{n=0}^\infty\frac1{n!}=5e.$$</span>
Therefore,
<span class="math-container">$$\sum_{n=1}^{\infty}\frac{n^2}{(2n-2)!}=\frac{5e}4.$$</span></p>
|
2,248,550 | <p>Will the value be of the form "$\frac{0}{0}$"? Do I have to use L'Hopital's rule? Or can I say that the limit doesn't exist?</p>
| dxiv | 291,201 | <p>Hint: for $x \gt 0, y \lt 1\,$:</p>
<p>$$\require{cancel}
\frac{x+y-1}{\sqrt{x}-\sqrt{1-y}} = \frac{x-(1-y)}{\sqrt{x}-\sqrt{1-y}} = \frac{(\sqrt{x}+\sqrt{1-y})\cancel{(\sqrt{x}-\sqrt{1-y})}}{\cancel{\sqrt{x}-\sqrt{1-y}}}
$$</p>
|
655,261 | <p>Let meagre subsets be defined as:<br>
$A\text{ meagre}\iff A=\bigcup_{\lvert K\rvert\leq\aleph_0} A_k\text{ with }\overline{A_k}°=\varnothing$<br>
Then it satisfies:<br>
$B\subseteq A\text{ meagre}\Rightarrow B\text{ meagre}$<br>
$A_k\text{ meagre}\Rightarrow\bigcup_{\lvert K\rvert\leq\aleph_0} A_k\text{ meagre}$</p>
<p><em>Flipping this around, is it possible to define meagre subsets by these properties?</em></p>
| Asaf Karagila | 622 | <p>Given a set $X$, we say that $I$ is a $\sigma$-ideal on $X$ if the following holds:</p>
<ol>
<li>$I\subseteq\mathcal P(X)$, is not empty and $X\notin I$.</li>
<li>If $A\in I$ and $B\subseteq A$, then $B\in I$.</li>
<li>If for $n\in\Bbb N$ we have $A_n\in I$ then $\bigcup A_n\in I$. (If we weaken this just for finitely many, we have an ideal, without the $\sigma$.)</li>
</ol>
<p>The meager sets form a $\sigma$-ideal. But so do the zero measure sets (in the Lebesgue measure, for example). So do the countable subsets of $X$, if $X$ is uncountable.</p>
<p>The properties that you suggest are exactly those of a $\sigma$-ideal, but those are not unique at all to meager sets.</p>
<hr>
<p>One can extend this question and ask, given a $\sigma$-ideal $I$ on an uncountable set $X$, is there a topology where $I$ is exactly the meager sets of the topology?</p>
<p>Well, there is, but it's not a very exciting topology. We declare $\tau$ to be the topology $\{A\subseteq X\mid X\setminus A\in I\lor A=\varnothing\}$. Then $\tau$ is a topology on $X$ as it includes $X$ and $\varnothing$, closed under finite (and countable) intersections and arbitrary unions (because it's closed under supersets).</p>
<p>Note now that given $A\subseteq X$ then $\overline{A}=X$ if and only if $A\notin I$ if and only if $\overline{A}\notin I$.</p>
<p>Moreover all the closed sets, except $X$, have an empty interior. Therefore a set is meager if and only if it is closed and not equal to $X$. </p>
|
2,278,991 | <p>(NOTE: I am new to proof construction. Don't panic if your heart rate increases.)</p>
<p>Proof 1:</p>
<p>Suppose $x$ is even and prime; then there is $k$ in $\mathbb N$ such that </p>
<p>$x = 2k$</p>
<p>But x is only divisible by itself and not 2.</p>
<p>$\frac{x}{2} \neq k$</p>
<p>$x$ can not be even.</p>
<hr>
<p>Proof 2:
$x$ is prime</p>
<p>$x = pq$ then either $p=1$ or $q=1$</p>
<p>$x$ is even</p>
<p>$x = 2k$ </p>
<p>$pq = 2k$</p>
<p>$(2)\frac{k}{pq} = 1$ </p>
<p>which is false </p>
| infinitylord | 178,643 | <p>Here's a proof that $2$ is the <strong>only</strong> even prime.</p>
<p>Let $a \in \mathbb{N} $ be prime and even. (We know this is okay to do, as there is at least <em>one</em> even prime)</p>
<p>Assume $a > 2$</p>
<p>Since $a$ is even, it can be written as $a = 2k$ for some $k \in \mathbb{N}$. By assumption, $k > 1$. </p>
<p>$\frac{a}{2} = k$, which implies that $a$ is divisible by $2$.</p>
<p>Since $a$ is prime, its only divisors are $1$ and $a$.</p>
<p>However this is contradicted by the fact that $2 | a $ and $a > 2$.</p>
<p>Thus the assumption that there is an even prime larger than $2$ is <strong>false</strong>.</p>
<p><strong>NOTE:</strong> This proof was much more verbose than necessary, but it seemed you were in need of a fully stated proof-by-contradiction argument.</p>
|
3,075,263 | <p>Let <span class="math-container">$A$</span> be a positive semi-definite matrix. How to show that Frobenius norm is less than trace of the matrix? Formally,
<span class="math-container">$$\sqrt{\text{Tr}(A^2)} \leq \text{Tr}(A)$$</span>
Also, show when <span class="math-container">$A$</span> is an <span class="math-container">$n \times m$</span> the following is true
<span class="math-container">$$\sqrt{\text{Tr}(A^TA)} \leq \|A\|_*$$</span>
where <span class="math-container">$\|\cdot\|_*$</span> is nuclear norm which is the summation of the singular values.</p>
| Angina Seng | 436,618 | <p>For the first, you can assume that <span class="math-container">$A$</span> is diagonal with diagonal entries
<span class="math-container">$a_1,\ldots,a_n$</span>, all <span class="math-container">$\ge0$</span> (a symmetric positive semi-definite matrix is orthogonally diagonalizable, and both <span class="math-container">$\text{Tr}(A^2)$</span> and <span class="math-container">$\text{Tr}(A)$</span> are unchanged by orthogonal conjugation). Then your inequality becomes
<span class="math-container">$$\sum_{i=1}^n a_i^2\le\left(\sum_{i=1}^n a_i\right)^2$$</span>
which is clearly true on expanding the right side, recalling all variables
are non-negative.</p>
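<p>(The second, rectangular, part follows in the same spirit, as a one-line extension: if <span class="math-container">$\sigma_1,\dots,\sigma_r\ge0$</span> are the singular values of <span class="math-container">$A$</span>, then <span class="math-container">$\text{Tr}(A^TA)=\sum_i\sigma_i^2$</span> and
<span class="math-container">$$\sqrt{\sum_i\sigma_i^2}\le\sum_i\sigma_i=\|A\|_*,$$</span>
again because expanding the square of a sum of non-negative numbers produces all the squares plus non-negative cross terms.)</p>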
|
3,865,607 | <p>Given <span class="math-container">$B\subseteq X$</span> with both <span class="math-container">$B$</span> and <span class="math-container">$X$</span> contractible. How would you prove that the inclusion map <span class="math-container">$i:B \to X$</span> is a homotopy equivalence?</p>
<p>Thank you</p>
| Cornman | 439,383 | <p>Additional to the specific map Tsemo Aristide gave, there is the following theorem:</p>
<p>If <span class="math-container">$Y$</span> is contractible, then any two maps <span class="math-container">$X\to Y$</span> are homotopic (indeed they are nullhomotopic).</p>
<p>Reference: For example 'Introduction to Algebraic Topology' by Rotman Theorem 1.13</p>
<p>The proof is not difficult.</p>
<p>With that in mind the statement is completely trivial, as <span class="math-container">$B$</span> and <span class="math-container">$X$</span> are contractible.
Furthermore, every continuous map <span class="math-container">$B\to X$</span> is a homotopy equivalence.</p>
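<p>(Spelling the equivalence out, as an added sketch: fix <span class="math-container">$b_0\in B$</span> and let <span class="math-container">$r:X\to B$</span> be the constant map at <span class="math-container">$b_0$</span>. Then <span class="math-container">$r\circ i$</span> and <span class="math-container">$\mathrm{id}_B$</span> are both maps into the contractible space <span class="math-container">$B$</span>, hence homotopic, and <span class="math-container">$i\circ r$</span> and <span class="math-container">$\mathrm{id}_X$</span> are both maps into the contractible space <span class="math-container">$X$</span>, hence homotopic; so <span class="math-container">$r$</span> is a homotopy inverse of <span class="math-container">$i$</span>.)</p>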
|
2,041,441 | <p>$\binom{74}{37}-2$ is divisible by:</p>
<p>a) $1369$</p>
<p>b) $38$</p>
<p>c) $36$ </p>
<p>d) none of these</p>
<p>I have no idea how to solve this... I tried writing $\binom{74}{37}$ in some useful form but it's not helping. Any clues? Thanks in advance!</p>
| JMoravitz | 179,297 | <p>Note that $\binom{74}{37}=\frac{74!}{37!37!}$</p>
<p>Note also that $n!$ contains $\sum\limits_{k=1}^\infty \left\lfloor\frac{n}{p^k}\right\rfloor$ factors of a prime $p$</p>
<p>Counting the number of factors of $19$ in the numerator, we get $\lfloor\frac{74}{19}\rfloor+\lfloor\frac{74}{19^2}\rfloor+\dots=3+0+0+\dots=3$</p>
<p>The number of factors of $19$ in the denominator however, we get each $37!$ has $\lfloor\frac{37}{19}\rfloor+\lfloor\frac{37}{19^2}\rfloor+\dots=1+0+0+\dots=1$ so the denominator has $2$ factors of $19$.</p>
<p>Thus, one of the factors from the numerator remains uncancelled and so $\binom{74}{37}$ is divisible by $19$. This implies $\binom{74}{37}-2$ is <strong>not</strong> divisible by $19$ and therefore cannot be divisible by $38$.</p>
<p>Similarly, running the same argument for counting the number of factors of $2$, we get a total of $71$ factors of two on the numerator and $68$ factors of two on the denominator, implying that $\binom{74}{37}$ is divisible by four. Thus $\binom{74}{37}-2$ is <strong>not</strong> divisible by $4$ and therefore cannot be divisible by $36$.</p>
<hr>
<p>For showing $\binom{74}{37}-2$ is divisible by $37^2$, we note first that $37$ is prime. We conjecture that $\binom{2p}{p}-2$ is divisible by $p^2$ for each prime $p$.</p>
<p>We notice first that $\binom{2p}{p}=\sum\limits_{j=0}^p\binom{p}{j}^2$ and also that $\binom{p}{j}\equiv 0\pmod{p}$ for $0<j<p$, while $\binom{p}{0}=\binom{p}{p}=1$. (<em>To see this, use a similar technique as above, noting that there is exactly one factor of $p$ in the numerator and no factors of $p$ in the denominator of $\frac{p!}{j!(p-j)!}$ for each $0<j<p$</em>)</p>
<p>As the terms in the summation are being squared, we notice further that $\binom{p}{j}^2\equiv 0\pmod{p^2}$ for each $0<j<p$ and $\binom{p}{0}^2=\binom{p}{p}^2=1$</p>
<p>Thus $\binom{2p}{p}\equiv \sum\limits_{j=0}^p\binom{p}{j}^2\equiv 1+(0+0+\dots+0)+1\equiv 2\pmod{p^2}$</p>
<p>Finally, $\binom{2p}{p}-2\equiv 0\pmod{p^2}$ so the claim is true.</p>
<p>This proves in particular that $\binom{74}{37}-2$ is divisible by $37^2$</p>
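<p>(A one-line numerical sanity check of all three options, added here; it is not part of the original argument:)</p>
<pre><code>Mod[Binomial[74, 37] - 2, {1369, 38, 36}]
</code></pre>
<p>The first residue is $0$ while the other two are nonzero, so only option a) works.</p>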
|
23,268 | <p>I'm the sort of mathematician who works really well with elements. I really enjoy point-set topology, and category theory tends to drive me crazy. When I was given a bunch of exercises on subjects like limits, colimits, and adjoint functors, I was able to do them, although I am sure my proofs were far longer and more laborious than they should have been. However, I felt like most of the understanding I gained from these exercises was gone within a week. I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations).</p>
<p>A couple months ago, I was trying to use the statements found in Hartshorne about glueing schemes and morphisms and realized that these statements were inadequate for my purposes. Looking more closely, I realized that Hartshorne's hypotheses are "wrong," in roughly the same way that it is "wrong" to require, in the definition of a basis for a topology that it be closed under finite intersections. (This would, for instance, exclude the set of open balls from being a basis for $\mathbb{R}^n$.) Working through it a bit more, I realized that the "right" statement was most easily expressed by saying that a certain kind of diagram in the category of schemes has a colimit. At this point, the notion of "colimit" began to seem much more manageable: a colimit is a way of gluing objects (and morphisms).</p>
<p>However, I cannot think of any similar intuition for the notion of "limit." Even in the case of a fibre product, a limit can be anything from an intersection to a product, and I find it intimidating to try to think of these two very different things as a special cases of the same construction. I understand how to show that they are; it just does not make intuitive sense, somehow.</p>
<p>For another example, I think (and correct me if I am wrong) that <strike>the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits</strike>. [This is not correct as stated. See Martin Brandenburg's answer below for an explanation of why not, as well as what the correct statement is.] It seems like a statement this simple should make everything clearer, but I find it much easier to understand the definition in terms of compatible local sections gluing together. I can (I think) prove that they are the same, but by the time I get to one end of the proof, I've lost track of the other end intuitively.</p>
<p>Thus, my question is this: Is there a nice, preferably geometric intuition for the notion of limit? If anyone can recommend a book on category theory that they think would appeal to someone like me, that would also be appreciated.</p>
| Steven Gubkin | 1,106 | <p>Are you comfortable with products? Products are just limits of discrete diagrams - the only arrows are the identity arrows. Any small limit will have a unique monic arrow to the product of all of the objects in the diagram you are taking a limit over (check this!). So in set theory land this means that all limits are just special subsets of the product - that subset which makes all of the arrows you want commute with each other. Try working this out in the category of groups, and the category of topological spaces. Intersections being a special case is no big deal - if $U \subset W$ and $V \subset W$ then $U \cap V$ is isomorphic to that subset of $U \times V$ consisting only of pairs of the form $(x,x)$. Draw the appropriate diagrams to show that this is the picture which comes out of arrow chasing as well.</p>
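<p>(A concrete instance, making "that subset of the product" explicit: in sets, the limit of the cospan $X \xrightarrow{f} Z \xleftarrow{g} Y$ is the pullback $$X\times_Z Y=\{(x,y)\in X\times Y \mid f(x)=g(y)\},$$ exactly the pairs on which the two arrows into $Z$ agree; the intersection above is the case $X=U$, $Y=V$ with both maps the inclusions into $W$.)</p>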
|
23,268 | <p>I'm the sort of mathematician who works really well with elements. I really enjoy point-set topology, and category theory tends to drive me crazy. When I was given a bunch of exercises on subjects like limits, colimits, and adjoint functors, I was able to do them, although I am sure my proofs were far longer and more laborious than they should have been. However, I felt like most of the understanding I gained from these exercises was gone within a week. I have a copy of MacLane's "Categories for the Working Mathematician," but whenever I pick it up, I can never seem to get through more than two or three pages (except in the introduction on foundations).</p>
<p>A couple months ago, I was trying to use the statements found in Hartshorne about glueing schemes and morphisms and realized that these statements were inadequate for my purposes. Looking more closely, I realized that Hartshorne's hypotheses are "wrong," in roughly the same way that it is "wrong" to require, in the definition of a basis for a topology that it be closed under finite intersections. (This would, for instance, exclude the set of open balls from being a basis for $\mathbb{R}^n$.) Working through it a bit more, I realized that the "right" statement was most easily expressed by saying that a certain kind of diagram in the category of schemes has a colimit. At this point, the notion of "colimit" began to seem much more manageable: a colimit is a way of gluing objects (and morphisms).</p>
<p>However, I cannot think of any similar intuition for the notion of "limit." Even in the case of a fibre product, a limit can be anything from an intersection to a product, and I find it intimidating to try to think of these two very different things as a special cases of the same construction. I understand how to show that they are; it just does not make intuitive sense, somehow.</p>
<p>For another example, I think (and correct me if I am wrong) that <strike>the sheaf condition on a presheaf can be expressed as stating that the contravariant functor takes colimits to limits</strike>. [This is not correct as stated. See Martin Brandenburg's answer below for an explanation of why not, as well as what the correct statement is.] It seems like a statement this simple should make everything clearer, but I find it much easier to understand the definition in terms of compatible local sections gluing together. I can (I think) prove that they are the same, but by the time I get to one end of the proof, I've lost track of the other end intuitively.</p>
<p>Thus, my question is this: Is there a nice, preferably geometric intuition for the notion of limit? If anyone can recommend a book on category theory that they think would appeal to someone like me, that would also be appreciated.</p>
| Peter LeFanu Lumsdaine | 2,273 | <p>Since you're familiar with the example of the sheaf condition, I think a nice one-liner intuition is:</p>
<blockquote>
<p>A limit of a diagram is an object of <strong>matching families in that diagram</strong>.</p>
</blockquote>
<p>...defined just like how, in the case of (pre)sheaves, you define a matching family of sections on a cover. A product is then the case where there's no matching condition to satisfy. An intersection (let's say of subobjects $A,B \subseteq X$) is the case where the matching condition forces all three elements to be the same (as elements of $X$), so an element of the limit is a single element of $X$ that can be seen also as an element of $A$ and as $B$.</p>
<p>(This answer is pretty much a sub-quotient of Martin B's earlier answer, but I think it's a useful one-liner to extract.) </p>
|
963,503 | <p>Vectors $a$, $b$ and $c$ all have length one. $a + b + c = 0$. Show that
$$
|a-c| = |a-b| = |b-c|
$$
I am not sure how to get started, as writing out the norms didn't help and there is no way to manipulate
$$
|a-c| \le |a-b| + |b-c|
$$
to get an equality. I just need an idea of where to start.</p>
| drhab | 75,923 | <p>Note that $Z\subseteq\left(X\cap Y\right)\cup Z$ so that $X\cap\left(Y\cup Z\right)=\left(X\cap Y\right)\cup Z$
implies that $Z\subseteq X\cap\left(Y\cup Z\right)\subseteq X$.</p>
|
204,842 | <p>A probability measure defined on a sample space $\Omega$ has the following properties:</p>
<ol>
<li>For each $E \subset \Omega$, $0 \le P(E) \le 1$</li>
<li>$P(\Omega) = 1$</li>
<li>If $E_1$ and $E_2$ are disjoint subsets $P(E_1 \cup E_2) = P(E_1) + P(E_2)$</li>
</ol>
<p>The above definition defines a measure that is finitely additive (by induction) but not necessarily countably additive.</p>
<p>What is a probability measure that would be finitely additive but not countably additive (for a countable sample space $\Omega$)?</p>
<p>The example that I have seen most commonly on forums (this and elsewhere) is to set $P(E) = 0$ if $E$ is finite and $P(E) = 1$ if $E$ is co-finite. But that is <strong>not</strong> a probability measure as defined above since it is not defined on every subset of $\Omega$. </p>
<p>So, what is an example of such a probability measure, or what is the reasoning that a finitely additive probability measure is not always countably additive?</p>
| Michael Greinecker | 21,674 | <p>Let <span class="math-container">$\mathcal{U}$</span> be a <a href="https://en.wikipedia.org/wiki/Ultrafilter" rel="nofollow noreferrer">free ultrafilter</a> on <span class="math-container">$\mathbb{N}$</span>. Let <span class="math-container">$P(A)=1$</span> if <span class="math-container">$A\in\mathcal{U}$</span> and <span class="math-container">$P(A)=0$</span> if <span class="math-container">$A\notin\mathcal{U}$</span>. I think it is impossible to give an explicit example of a finitely additive measure on a <span class="math-container">$\sigma$</span>-algebra that is not countably additive, but our resident set theorists might be able to tell you more about that.</p>
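<p>(Checking the two properties, as a brief addendum: if <span class="math-container">$A\cap B=\varnothing$</span>, then <span class="math-container">$A$</span> and <span class="math-container">$B$</span> cannot both lie in <span class="math-container">$\mathcal U$</span>, and <span class="math-container">$A\cup B\in\mathcal U$</span> iff at least one of them does, so <span class="math-container">$P(A\cup B)=P(A)+P(B)$</span>. Countable additivity fails: <span class="math-container">$\Bbb N=\bigcup_n\{n\}$</span> has <span class="math-container">$P(\Bbb N)=1$</span>, yet <span class="math-container">$P(\{n\})=0$</span> for every <span class="math-container">$n$</span>, because a free ultrafilter contains no finite sets.)</p>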
|
2,677,134 | <p>Given the definition of Big-O, prove that $f(n) = n^2 - n$ is $O(n^2)$.</p>
<p>When I use the given definition, I get $n^2 - n \leq n^2 - n^2$ which means that $n^2 -n \leq 0$, which is not true. Is there some step I'm missing?</p>
| Siong Thye Goh | 306,553 | <p>We have $$n^2-n \ge n^2-n^2$$ not the opposite.</p>
<p>To prove $f(n)$ is $O(n^2)$, use</p>
<p>$$n^2-n \leq n^2.$$</p>
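<p>(Spelled out against the definition, for reference: with $C=1$ and $n_0=1$, $$|n^2-n|=n^2-n\leq 1\cdot n^2\quad\text{for all } n\geq 1,$$ which is exactly the statement that $f(n)=n^2-n$ is $O(n^2)$.)</p>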
|
2,903,359 | <p>I am trying to prove the following:</p>
<p>Given $1 \le d \le n$ and a matrix $P \in \mathbb{R}^{n\times n}$, prove that $P$ is a rank-$d$ orthogonal projection matrix iff there exists an $n\times d$ matrix $U$ such that $P =UU^T$ and $U^TU = I$.</p>
<p>I know that this is an obvious fact about projection matrices but I am not sure how to get started on proving it.</p>
<p>Once I can do that, I am looking to prove that,</p>
<p>for all $v \in \mathbb{R}^n$,
$Pv = \arg\min_{w \in \operatorname{range}(P)} \lVert v - w \rVert^2$</p>
| obareey | 111,671 | <p>(<span class="math-container">$\Rightarrow$</span>) Let <span class="math-container">$P=UU^T$</span> and <span class="math-container">$U^TU=I$</span>. Then <span class="math-container">$P^2=UU^TUU^T=UU^T=P$</span> and <span class="math-container">$P^T=P$</span>.</p>
<p>(<span class="math-container">$\Leftarrow$</span>) Let <span class="math-container">$P^2=P$</span> and <span class="math-container">$P^T=P$</span>. Since <span class="math-container">$P$</span> is an orthogonal projection it has eigenvalues <span class="math-container">$0$</span> or <span class="math-container">$1$</span> and it is unitarily diagonalizable. So there exist an orthogonal <span class="math-container">$V$</span> such that
<span class="math-container">$$P=\begin{bmatrix}V_1 & V_2\end{bmatrix}\begin{bmatrix}I_d & 0 \\ 0 & 0\end{bmatrix}\begin{bmatrix}V_1^T \\ V_2^T\end{bmatrix} = V_1V_1^T$$</span>
Also <span class="math-container">$V^TV=I$</span> which implies
<span class="math-container">$$\begin{bmatrix}V_1^T \\ V_2^T\end{bmatrix}\begin{bmatrix}V_1 & V_2\end{bmatrix}=\begin{bmatrix}V_1^TV_1 & V_1^TV_2 \\ V_2^TV_1 & V_2^TV_2\end{bmatrix}=\begin{bmatrix}I_d & 0 \\ 0 & I_{n-d}\end{bmatrix}$$</span></p>
|
405,783 | <p>I saw the following in my lecture notes, and I am having difficulties
verifying the steps taken.</p>
<p>The question is:</p>
<blockquote>
<p>Assuming $0<\epsilon\ll1$ find all the roots of the polynomial
$$\epsilon^{2}x^{3}+x+1$$ which are $O(1)$ up to a precision of
$O(\epsilon^{2})$</p>
</blockquote>
<p>and the solution given was </p>
<blockquote>
<p>Assume that $x=O(1)$ and that $$x(\epsilon)=x_{0}+\epsilon
x_{1}+O(\epsilon^{2})$$ Then by setting it in the equation and letting
$\epsilon\to0$ we get $$x_{0}=-1,x_{1}=0$$</p>
<p>Hence $x(\epsilon)=-1+O(\epsilon^{2})$</p>
</blockquote>
<p>I have two questions: </p>
<ol>
<li><p>Where did we use the assumption that $x=O(1)$</p></li>
<li><p>How did they get $$x_{0}=-1,x_{1}=0 ?$$ </p></li>
</ol>
<p>When I did the step of substituting it into the equation and letting $\epsilon\to0$, I got $$x_{0}+1+O(\epsilon^{2})=0$$
and so I don't know anything about $x_{1}$. </p>
<p>Should I ignore $O(\epsilon^{2})$
and from that I should get $x_{0}=-1$</p>
| Danny W. | 77,064 | <p>Ok, here is how this goes: </p>
<ol>
<li><p>This assumption is based on an idea of what the solution is going to look like. That is, by making this assumption, we won't be able to get any solutions where the first term ($x_0$) is not of this order. Since we are apparently not looking for solutions of higher order (those where the first term $x_0$ scales as $\epsilon^{-\beta}$, $\beta>0$), this assumption will dictate what solutions we are able to get; a rescaling recovering those higher-order roots is sketched after this list.
In response to one of the comments: the assumption that the $x_1$ term scales with $\epsilon$ is exactly that, an assumption. However, I imagine that if you followed the below separation-of-scales analysis using an $\epsilon^{1/2}$ expansion, you would find each of these terms to be zero.</p></li>
<li><p>This is based on a fundamental concept from asymptotics, separation of scales - terms associated with different powers of $\epsilon$ are treated independently of others. For example, in this case, suppose we are looking for a two term solution to the polynomial above. We have
$$ \epsilon^2 (x_0 + \epsilon x_1)^3 + x_0 + \epsilon x_1 + 1 = 0 $$
This comes from just substituting $x_0 + \epsilon x_1$ into the equation.
Expanding this, we find at the $\mathcal{O}(1)$ and $\mathcal{O}(\epsilon)$ levels
\begin{gather}
\mathcal{O}(1): \quad x_0 + 1 = 0 \Rightarrow x_0 = -1 \\
\mathcal{O}(\epsilon): \quad x_1 = 0 \\
\dots
\end{gather}
The above relies on the fundamental assumption of asymptotics that for a particular asymptotic series to solve a polynomial, the terms at each scale must evaluate to zero. </p></li>
</ol>
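<p>(Sketch of the roots excluded by the $x=O(1)$ assumption, as promised above: rescale $x=y/\epsilon$. Multiplying $\epsilon^2x^3+x+1=0$ through by $\epsilon$ gives
$$y^3+y+\epsilon=0,$$
whose leading-order roots are $y=0,\pm i$. The branch $y\approx-\epsilon$ reproduces the regular root $x\approx-1$, while $y\approx\pm i$ gives the two large roots $x\approx\pm i/\epsilon$, which are $O(\epsilon^{-1})$ rather than $O(1)$.)</p>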
|
2,825,522 | <p>I have this problem:</p>
<p><a href="https://i.stack.imgur.com/blD6N.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/blD6N.png" alt="enter image description here"></a></p>
<p>I have not managed to solve the exercise, but this is my breakthrough:</p>
<p><a href="https://i.stack.imgur.com/0dTdO.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0dTdO.jpg" alt="enter image description here"></a></p>
<p>How can I continue to find it?</p>
| Stefan4024 | 67,746 | <p>As noted, one of the angles is $30^{\circ}$. Also note that the longer diagonal bisects the $120^{\circ}$ angle. Hence the angles in the top triangle are $30^{\circ}$ and $60^{\circ}$, so the angle adjacent to $x$ is $90^{\circ}$, and we must have $x = 90^{\circ}$ as well.</p>
|
1,393,869 | <p>Given a cubic polynomial with real coefficients of the form $f(x) = Ax^3 + Bx^2 + Cx + D$ $(A \neq 0)$ I am trying to determine what the necessary conditions of the coefficients are so that $f(x)$ has exactly three distinct real roots. I am wondering if there is a way to change variables to simplify this problem and am looking for some clever ideas on this matter or on other ways to obtain these conditions.</p>
| bubba | 31,744 | <p>By solving <span class="math-container">$f'(x)=0$</span>, you can find out the two turning points of the cubic. Suppose the solutions are <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span>. In fact, we have
<span class="math-container">$$
x_{1,2} = \frac{-B \pm \sqrt{B^2-3AC}}{3A}
$$</span></p>
<p>If <span class="math-container">$B^2-3AC <0$</span>, then <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> are imaginary, so there are no turning points, and the cubic has only one real root.</p>
<p>If <span class="math-container">$x_1$</span> and <span class="math-container">$x_2$</span> are real, the cubic has three distinct roots iff <span class="math-container">$f(x_1)$</span> and <span class="math-container">$f(x_2)$</span> are non-zero and have opposite sign. Or, in short, as the comment says, iff <span class="math-container">$f(x_1)f(x_2) < 0$</span>.</p>
<p>There are numerous tolerance issues and corner cases to worry about, but that's the basic idea.</p>
<p><strong>Edit:</strong> Following up on the comment above, <a href="https://en.wikipedia.org/wiki/Cubic_function" rel="nofollow noreferrer">the Wikipedia page</a> says that the nature of the roots can be determined by examining the discriminant:
<span class="math-container">$$
\Delta = 18ABCD - 4B^3D + B^2C^2 -4AC^3 - 27A^2D^2
$$</span>
The cubic has three distinct real roots iff <span class="math-container">$\Delta > 0$</span>. This is a nice tidy criterion, but my discussion above may still be useful because it provides some geometric insight.</p>
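<p>(A quick worked instance of the criterion, with coefficients chosen for illustration: for <span class="math-container">$f(x)=x^3-3x+1$</span>, i.e. <span class="math-container">$A=1$</span>, <span class="math-container">$B=0$</span>, <span class="math-container">$C=-3$</span>, <span class="math-container">$D=1$</span>,
<span class="math-container">$$\Delta=0-0+0-4\cdot1\cdot(-3)^3-27\cdot1^2\cdot1^2=108-27=81>0,$$</span>
and this cubic does indeed have three distinct real roots, near <span class="math-container">$-1.88$</span>, <span class="math-container">$0.35$</span> and <span class="math-container">$1.53$</span>.)</p>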
|
386,172 | <p>The expression was simplified in the answer to <a href="https://math.stackexchange.com/questions/384592/finding-markov-chain-transition-matrix-using-mathematical-induction">this question</a>. I'm trying to simplify it but I got stuck. Multiplying all the factors and regrouping didn't work, but maybe I'm doing the wrong regrouping.</p>
<p>Also, I don't understand why the $(2p-1)^n$ dropped the exponent in the second line of the same solution.</p>
| vadim123 | 73,324 | <p>$$p\cdot (1/2)(2p-1)^n +(1-p) \cdot (-1/2)(2p-1)^n = \\
p \cdot (1/2)(2p-1)^n +(p-1) \cdot(1/2)(2p-1)^{\color{red}{n}}= \\ \left[\frac{p}{2}+\frac{p}{2}-\frac{1}{2}\right](2p-1)^n =\\ (2p-1)\cdot(1/2) (2p-1)^n$$</p>
<p>You are correct that an $n$ was missing; I inserted an extra line which should help clarify.</p>
|
3,982,937 | <p>To avoid typos, please see my screen captures below, and the red underline. The question says <span class="math-container">$h \rightarrow 0$</span>, thus why <span class="math-container">$|h|$</span> in the solution? Mustn't that <span class="math-container">$|h|$</span> be <span class="math-container">$h$</span>?</p>
<p><img src="https://i.stack.imgur.com/rweRh.jpg" alt="enter image description here" /></p>
<p>Spivak, <em>Calculus</em> 2008 4 edn. <a href="https://mathpop.com/" rel="nofollow noreferrer">His website's errata</a> lists no errata for these pages.</p>
| Ben | 754,927 | <p>Don't worry. It takes a while and some practice to wrap your head around this stuff. It's easy to get confused by all the different conditions and variables and what depends on what.</p>
<p>Suppose <span class="math-container">$\lim_{x \to a}f(x) = \ell$</span></p>
<p>This means that for any <span class="math-container">$\varepsilon > 0$</span> there exists some <span class="math-container">$\delta > 0$</span> such that for all <span class="math-container">$x$</span>, if</p>
<p><span class="math-container">$$0 < |x - a| < \delta \text{ then } |f(x) - \ell| < \varepsilon$$</span></p>
<p>Now, for this same <span class="math-container">$\delta$</span>, suppose you have a number <span class="math-container">$h$</span> such that
<span class="math-container">$$0 < |h| < \delta$$</span></p>
<p>We can make this look like the previous condition by noting that <span class="math-container">$h = (a + h) - a$</span>.
Therefore, for all <span class="math-container">$h$</span> if
<span class="math-container">$$0 < |h| < \delta$$</span>
then
<span class="math-container">$$0 < |(a + h) - a| < \delta$$</span></p>
<p>Compare this with the initial <span class="math-container">$\delta$</span>-requirement placed on <span class="math-container">$x$</span>, namely <span class="math-container">$0 < |x - a| < \delta$</span>.</p>
<p>The number <span class="math-container">$a+h$</span> satisfies the <span class="math-container">$x$</span> requirement from the first limit, so we have <span class="math-container">$|f(a + h) - \ell| < \varepsilon$</span>.</p>
<p>Putting it all together, if <span class="math-container">$\lim_{x \to a}f(x) = \ell$</span> then for any <span class="math-container">$\varepsilon > 0$</span> there exists some <span class="math-container">$\delta > 0$</span> such that for all <span class="math-container">$h$</span>, if</p>
<p><span class="math-container">$$0 < |h| < \delta \text { then } |f(a + h) - \ell| < \varepsilon$$</span></p>
<p>Another way of writing this is <span class="math-container">$\lim_{h \to 0} f(a+h) = \ell$</span>. If the first limit exists, then so too does the second, and they are equal.</p>
<p>To complete the proof we need to begin with the second limit <span class="math-container">$\lim_{h \to 0} f(a+h) = \ell$</span> and use similar arguments to show how it leads to the first limit.</p>
<p>Suppose <span class="math-container">$\lim_{h \to 0} f(a+h) = \ell$</span>. This tells us that for any <span class="math-container">$\varepsilon > 0$</span> there is a <span class="math-container">$\delta > 0$</span> such that for all <span class="math-container">$h$</span>, if
<span class="math-container">$$0 < |h| < \delta \text { then } |f(a + h) - \ell| < \varepsilon$$</span></p>
<p>Suppose for this same <span class="math-container">$\delta$</span> we have numbers <span class="math-container">$x$</span> such that
<span class="math-container">$$0 < |x - a| < \delta$$</span></p>
<p>Compare this with the above <span class="math-container">$\delta$</span>-requirements for <span class="math-container">$h$</span>. These numbers <span class="math-container">$x-a$</span> fulfill the <span class="math-container">$\delta$</span>-requirements for <span class="math-container">$h$</span>, therefore</p>
<p><span class="math-container">$$|f(a + (x - a)) - \ell| < \varepsilon$$</span></p>
<p>but this is just
<span class="math-container">$$|f(x) - \ell| < \varepsilon$$</span>.</p>
<p>Putting it all together, we know that if <span class="math-container">$\lim_{h \to 0} f(a+h) = \ell$</span> then for any <span class="math-container">$\varepsilon > 0$</span> there is a <span class="math-container">$\delta > 0$</span> such that for all <span class="math-container">$x$</span>, if</p>
<p><span class="math-container">$$0 < |x - a| < \delta \text{ then } |f(x) - \ell| < \varepsilon$$</span></p>
<p>We can of course write this as <span class="math-container">$\lim_{x \to a}f(x) = \ell$</span>. Again, existence of one limit guarantees existence of the other, and they are equal.</p>
|
16,749 | <p>I want to remove the <code>Ticks</code> in my code, but I can't: when I try to remove the <code>Ticks</code>, the numbers are gone too. I need the numbers without tick marks; <code>Ticks</code> and <code>GridLines</code> should be automatic, and I don't want to use <code>PlotRange</code>.</p>
<pre><code>BarChart[{{1, 2, 3}, {4, 5, 6}}, ImageSize -> 400, BarOrigin -> Left,
 ChartLayout -> "Stacked", ImageSize -> {500, 300},
 GridLines -> {Automatic, None}, Ticks -> {{Automatic}, None},
 LabelStyle -> Directive[Opacity[1]],
 TicksStyle -> Directive[Opacity[.3]],
 Axes -> {True, False}, AxesStyle -> Opacity[.0],
 ChartStyle -> {RGBColor[.06, .29, .66], RGBColor[.01, .56, .61],
  RGBColor[1, .58, 0]},
 ChartBaseStyle -> EdgeForm[GrayLevel[.6]]]
</code></pre>
<p><img src="https://i.stack.imgur.com/s28b2.png" alt="enter image description here"></p>
| Royce | 5,123 | <p>You can keep the numbers as they currently show up but remove the ticks by specifying ticks as follows:</p>
<pre><code>Ticks -> {Table[{2 i, 2 i, 0}, {i, 7}], None}
</code></pre>
<p>or </p>
<pre><code>Ticks -> {Table[{2 i + 1, 2 i + 1, 0}, {i, 0, 8}], None}
</code></pre>
<p>(each tick is specified by a triplet {a,b,c}, with a corresponding to the location on the axis, b the label used, and c the length of the tick used - just set the length equal to 0).</p>
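<p>For example, dropping the styling options for brevity (a sketch using the same data as the question):</p>
<pre><code>BarChart[{{1, 2, 3}, {4, 5, 6}}, BarOrigin -> Left,
 ChartLayout -> "Stacked", Axes -> {True, False},
 Ticks -> {Table[{2 i, 2 i, 0}, {i, 7}], None}]
</code></pre>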
|
1,796,156 | <p>Let $F(n)$ denote the $n^{\text{th}}$ Fibonacci number<a href="http://mathworld.wolfram.com/FibonacciNumber.html" rel="noreferrer">$^{[1]}$</a><a href="http://en.wikipedia.org/wiki/Fibonacci_number" rel="noreferrer">$\!^{[2]}$</a><a href="http://oeis.org/A000045" rel="noreferrer">$\!^{[3]}$</a>. The Fibonacci numbers have a natural generalization to an analytic function of a complex argument:
$$F(z)=\left(\phi^z - \cos(\pi z)\,\phi^{-z}\right)/\sqrt5,\quad\text{where}\,\phi=\left(1+\sqrt5\right)/2.\tag1$$
This definition is used, for example, in <em>Mathematica</em>.<a href="http://reference.wolfram.com/language/ref/Fibonacci.html" rel="noreferrer">$^{[4]}$</a> It produces real values for $z\in\mathbb R$, and preserves the usual functional equation for Fibonacci numbers for all $z\in\mathbb C$: $$F(z)=F(z-1) + F(z-2).\tag2$$</p>
<hr>
<p>The fibonorial<a href="http://mathworld.wolfram.com/Fibonorial.html" rel="noreferrer">$^{[5]}$</a><a href="http://en.wikipedia.org/wiki/Fibonorial" rel="noreferrer">$\!^{[6]}$</a><a href="http://oeis.org/A003266" rel="noreferrer">$\!^{[7]}$</a> is usually denoted as $n!_F$, but here we prefer a different notation $\mathfrak F(n)$. It is defined for non-negative integer $n$ inductively as
$$\mathfrak F(0)=1,\quad \mathfrak F(n+1)=\mathfrak F(n)\times F(n+1).\tag3$$
In other words, the fibonorial $\mathfrak F(n)$ gives the product of the Fibonacci numbers from $F(1)$ to $F(n)$, inclusive. For example, $$\mathfrak F(5)=\prod_{m=1}^5F(m)=1\times1\times2\times3\times5=30.\tag4$$</p>
<blockquote>
<p><em>Questions:</em> Can the fibonorial be generalized in a natural way to an analytic function $\mathfrak F(z)$ of a complex (or, at least, positive real) variable, such that it preserves the functional equation $(3)$ for all arguments?</p>
<p>Is there an integral, series or continued fraction representation of $\mathfrak F(z)$, or a representation in a closed form using known special functions?</p>
<p>Is there an efficient algorithm to calculate values of $\mathfrak F(z)$ at non-integer arguments to an arbitrary precision?</p>
</blockquote>
<p>So, we can see that the fibonorial is to the Fibonacci numbers as the factorial is to natural numbers, and the analytic function $\mathfrak F(z)$ that I'm looking for is to the fibonorial as the analytic function $\Gamma(z+1)$ is to the factorial.</p>
<hr>
<p><em>Update:</em> While thinking on <a href="https://math.stackexchange.com/q/1914821/19661">this question</a> it occurred to me that perhaps we can use <a href="http://mathworld.wolfram.com/GammaFunction.html#eqn30" rel="noreferrer">the same trick</a> that is used to define the $\Gamma$-function using a limit involving factorials of integers:
$$\large\mathfrak F(z)=\phi^{\frac{z\,(z+1)}2}\cdot\lim_{n\to\infty}\left[F(n)^z\cdot\prod_{k=1}^n\frac{F(k)}{F(z+k)}\right]\tag5$$
or, equivalently,
$$\large\mathfrak F(z)=\frac{\phi^{\frac{z\,(z+1)}2}}{F(z+1)}\cdot\prod_{k=1}^\infty\frac{F(k+1)^{z+1}}{F(k)^z\,F(z+k+1)}\tag{$5'$}$$
This would give
$$\mathfrak F(1/2)\approx0.982609825013264311223774805605749109465380972489969443...\tag6$$
that appears to have a closed form in terms of the <a href="http://mathworld.wolfram.com/q-PochhammerSymbol.html" rel="noreferrer">q-Pochhammer symbol</a>:
$$\mathfrak F(1/2)=\frac{\phi^{3/8}}{\sqrt[4]{5}}\,\left(-\phi^{-2};-\phi^{-2}\right)_\infty\tag7$$
and is related to the <a href="http://mathworld.wolfram.com/FibonacciFactorialConstant.html" rel="noreferrer">Fibonacci factorial constant</a>.</p>
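<p><em>Note (a numerical sketch of $(5)$, added for illustration; <code>f</code> and <code>fibonorial</code> are ad-hoc helper names, and the truncation order <code>n</code> is arbitrary, since the convergence in $(5)$ is geometric):</em></p>
<pre><code>(* F(z) from definition (1), at 60-digit precision *)
f[z_?NumericQ] := With[{phi = N[GoldenRatio, 60]},
  (phi^z - Cos[Pi z] phi^-z)/Sqrt[5]]

(* truncated right-hand side of (5) *)
fibonorial[z_?NumericQ, n_Integer: 200] :=
 N[GoldenRatio, 60]^(z (z + 1)/2) f[n]^z Product[f[k]/f[z + k], {k, 1, n}]

fibonorial[1/2]  (* ~ 0.9826098250..., matching (6) *)
</code></pre>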
| Ali Caglayan | 87,191 | <p>Extending what Zach466920 said:</p>
<p>$$\partial_t[k(t,n)]=F(n+1)\cdot k(t,n+1)\tag1$$</p>
<p>We can develop a power series solution for this.
Let
$$k(t,n)=\sum_{m=0}^\infty \kappa(n, m)\ t^m$$</p>
<p>We rewrite equation $(1)$ as:
$$\sum_{m=1}^\infty m\ \kappa(n, m)\ t^{m-1}
=\sum_{m=0}^\infty(m+1)\ \kappa (n, m) \ t^m
=F(n+1)\cdot\sum_{m=0}^\infty \kappa(n+1, m)\ t^m$$
Which gives us the recurrence relation:
$$(m+1)\ \kappa(n, m)=F(n+1)\ \kappa (n+1,m)$$
I have no idea how to solve this recurrence equation or even if it is solvable. However if it is you should be able to get a series out of it. It then becomes a problem of solving that series until you can finally find a closed form for $k$.</p>
<hr>
<p>With help from @Semiclassical we have the solution for $\kappa(n, m)$ as $$\kappa(n,m)=\left(\prod^n_{j=0}\frac{m+1}{F(j+1)}\right)\kappa(0, m)=\frac{(m+1)^{n+1}}{\mathfrak{F}(n+1)}\kappa(0, m)$$
The question is now finding $k(t, n)$ from this relation. </p>
<hr>
<p>We have that $k(0, n) = 0$ or else there will be trouble with the convergence of $\mathfrak F$. This means that $$\mathfrak F(n) = \sum_{m=1}^\infty \kappa(n, m) \int_0^\infty e^{-t} t^m dt=\sum_{m=1}^\infty \kappa(n, m) \Gamma(m+1)$$ Consequently $$\sum_{m=1}^\infty \kappa(0, m) \Gamma(m+1)=1\tag 2$$ Which means that the seeding terms of $\kappa(n, m)$ seem to shrink faster than $\Gamma(m+1)$ grows as $m\to \infty$ and $n=0$.</p>
<p>Once a solution can be found for whatever $\kappa(0, m)$ is, then a series for $k(t, n)$ can be established. I did some experimentation to see if $\kappa(0, m)$ could be guessed by letting $$\kappa(0, m)=\frac1{2^m\Gamma(m+1)}$$ so as to satisfy condition $(2)$. This however led to a value of $\mathfrak F(0)=\sqrt{3}$. This is clearly wrong.</p>
<hr>
<p>If we let $\kappa(0, m)=\frac{a_m}{\Gamma(m+1)}$ we have $$\mathfrak F(n) = \sum_{m=1}^\infty \frac{(m+1)^{n+1}}{\mathfrak F(n+1)}a_{m}\\
\implies \mathfrak F(n)^2F(n+1)=\sum_{m=1}^\infty (m+1)^{n+1} a_m$$ If we let $n=0$ we have $$1=\mathfrak F(0)=\sum_{m=1}^\infty a_m(m+1)=\sum_{m=1}^\infty m a_m + \underbrace{\sum_{m=1}^\infty a_m}_{=1\text{ by }(2)}$$
Therefore: $$\sum_{m=1}^\infty ma_m = 0$$
No idea what this means, and I am unsure whether further manipulation will actually get anywhere. I feel quite lost carrying out these last few steps. I am open to ideas on how to determine $a_m$.</p>
|
269,548 | <p>I want to know how I can solve this, or plot a versus b?</p>
<pre><code>Solve[ Sqrt[a] Cosh[1.2 Log[1.65 Sqrt[1/a]]] == Sqrt[-b] Sinh[1.2 Log[1.65 Sqrt[-(1/b)]]], b]
</code></pre>
<p>Thanks</p>
| Michael E2 | 4,999 | <p>We can help <code>Solve</code> by converting the equation to a polynomial one.</p>
<pre><code>eqn = Sqrt[a] Cosh[1.2 Log[1.65 Sqrt[1/a]]] ==
Sqrt[-b] Sinh[1.2 Log[1.65 Sqrt[-(1/b)]]];
(* replace constants with symbols *)
coeff = DeleteDuplicates@Cases[eqn, _Real, Infinity] -> {c1, c2} //
Thread
(* {1.2 -> c1, 1.65 -> c2} *)
(* here's what the equation looks like *)
eqn /. coeff
</code></pre>
<blockquote>
<pre><code>(*
Sqrt[a] Cosh[c1 Log[Sqrt[1/a] c2]] ==
Sqrt[-b] Sinh[c1 Log[Sqrt[-(1/b)] c2]]
*)
</code></pre>
</blockquote>
<p>To get a polynomial equation, we rationalize the coefficients. Only <code>c1</code> needs to be rationalized. We also convert the rational exponents to integers by raising the variables to the 10th power. This will introduce extraneous solutions which we will have to filter out.</p>
<pre><code>Reverse@Rationalize@First@coeff
(* c1 -> 6/5 *)
(* here's the polynomial equation *)
Simplify[
eqn /. coeff /. Reverse@Rationalize@First@coeff /. {a -> u^10,
b -> -v^10} // TrigToExp,
a > 0 && b < 0 && u > 0 && v > 0] // PowerExpand
</code></pre>
<blockquote>
<pre><code>(*
c2^(6/5) (u - v) == (u v (u^11 + v^11))/c2^(6/5)
*)
</code></pre>
</blockquote>
<pre><code>vsol = Solve[
Simplify[
eqn /. coeff /. Reverse@Rationalize@First@coeff /. {a -> u^10,
b -> -v^10} // TrigToExp, a > 0 && b < 0 && u > 0 && v > 0] //
PowerExpand, v];
</code></pre>
<p>Now check the solutions (one can use the tooltip in the front end if the legend is not clear enough):</p>
<pre><code>Plot[
eqn /. Equal -> Subtract /. b -> -Values[vsol]^10 /.
Reverse@Last@coeff /. u -> a^(1/10) // RealExponent //
MapIndexed[Tooltip] // Evaluate,
{a, 0, 8}, PlotStyle -> "Rainbow", PlotRange -> {1, -18},
PlotLegends -> Automatic]
</code></pre>
<p><a href="https://i.stack.imgur.com/DuWAK.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DuWAK.png" alt="enter image description here" /></a></p>
<p>The second and twelfth are solutions (over some interval), but the twelfth is complex-valued. The second one is the desired solution.</p>
<pre><code>Plot[
-Values[vsol][[2]]^10 /. Reverse@Last@coeff /. u -> a^(1/10),
{a, 0, 8}, PlotRange -> All]
</code></pre>
<img src="https://i.stack.imgur.com/3b3IO.png" width="420">
<p>Thus</p>
<pre><code>bsol = -Values[vsol][[2]]^10 /. Reverse@Last@coeff /. u -> a^(1/10) // First
</code></pre>
<blockquote>
<pre><code>(*
-Root[1.82381 a^(1/10) + (-1.82381 - 0.548301 a^(6/5)) #1 -
0.548301 a^(1/10) #1^12 &, 2]^10
*)
</code></pre>
</blockquote>
<p>Note that the 10th power arose from the rationalization of <code>1.2</code>. If <code>1.2</code> is approximate, then the solution, while symbolic, is only approximate too.</p>
|
269,548 | <p>I want to know how I can solve this, or plot a versus b?</p>
<pre><code>Solve[ Sqrt[a] Cosh[1.2 Log[1.65 Sqrt[1/a]]] == Sqrt[-b] Sinh[1.2 Log[1.65 Sqrt[-(1/b)]]], b]
</code></pre>
<p>Thanks</p>
| Akku14 | 34,287 | <p>One way to get solution in terms of root expression with variable substituion. (Use onesided eq.)</p>
<pre><code>f[a_, b_] =
Sqrt[a] Cosh[
1.2 Log[1.65 Sqrt[1/a]]] - (Sqrt[-b] Sinh[
1.2 Log[1.65 Sqrt[-(1/b)]]]) // Rationalize //
FullSimplify[#, a >= 0 && b <= 0] &
(* Sqrt[a] Cosh[3/5 Log[(400 a)/1089]] -
Sqrt[-b] Sinh[3/5 Log[-(1089/(400 b))]] *)
</code></pre>
<p>Substitute the arguments of Sinh and Cosh.</p>
<pre><code>sola = First@Solve[aa == 3/5 Log[(400 a)/1089] && a > 0, a, Reals]
(* {a -> 1089/400 E^(5 aa/3)} *)
solb = First@Solve[bb == 3/5 Log[-(1089/(400 b))] && b < 0, b, Reals]
f2[aa_, bb_] =
f[a, b] /. sola /. solb // Simplify[#, Element[{aa, bb}, Reals]] &
(* 33/20 (E^(5 aa/6) Cosh[aa] - E^(-5 bb/6) Sinh[bb]) *)
sol3 = First@Solve[f2[aa, bb] == 0, bb, Reals]
(* {bb -> 6 Log[Root[-1 - 2 E^(5 aa/6) Cosh[aa] #1^11 + #1^12 &, 2]]} *)
sol4 = sol3 /. aa -> 3/5 Log[(400 a)/1089] /.
bb -> 3/5 Log[-(1089/(400 b))]
solb = First@Solve[sol4 /. Rule -> Equal, b] // FullSimplify
(* {b -> -(1089/(
400 Root[-33 - 40 Sqrt[a] Cosh[3/5 Log[(400 a)/1089]] #1^11 +
33 #1^12 &, 2]^10))} *)
Plot[b /. solb, {a, 0, 5}]
</code></pre>
|
6,355 | <p>My question is located in trying to follow the argument below. </p>
<p>Given a normal algebraic variety $X$ and an ample line bundle $\mathcal{L}\rightarrow X$, eventually such a line bundle will have enough sections to define an embedding $\phi:X\rightarrow \mathbb{P}(H^0(X,\mathcal{L}^{\otimes d}))=\mathbb{P}^N$ (notice that $N$ depends on $d$). However, if the line bundle is NOT ample we can still say something about the existence of a certain map $\phi_d$: the so-called Iitaka fibration. I'll omit some details, but the argument of the construction goes (more or less) as follows.
Suppose in the first place that $\mathcal{L}$ is base-point free. That is to say, there is no point (or set of points) of $X$ through which all the divisors of the linear system pass. Then such a line bundle will define a linear system $|\mathcal{L}^d|$ which gives rise to a morphism $\phi_d:X\rightarrow \phi_d(X)\subset\mathbb{P}^N$ (again $N$ depends on $d$). Such a map may not be an embedding; however, $\phi_d:X\rightarrow \phi_d(X)$ is an algebraic fiber space. My question is the following. As I increase the value of $d$ the image $\phi_d(X)\subset \mathbb{P}^N$ may change; however, as a matter of fact such an image "stabilizes" as $d$ gets larger, meaning that if $d$ is large enough, the image of $\phi_d$ is the "same" regardless of $d$. </p>
<p>-What is the reason for this to happen?
-<strong>What is going on with all the sections of $\mathcal{L}^{\otimes d}$ that I am getting as I increase the value of $d$?</strong>. I'll appreciate any comment.</p>
<p>As a result, due to the fact that after a while we no longer care about the value of $d$, we can associate the space $X\rightarrow \phi(X)$ to the line bundle $\mathcal{L}$. Here, the variety $\phi(X)$ no longer depends on $d$.</p>
<p>Could someone comment further about the word "stabilizes"?.</p>
| David Lehavi | 404 | <ul>
<li><p>The sequence stabilizes because any bounded increasing sequence of integers (the dimensions of the images of $X$) stabilizes, but
I assume you mean something different.</p></li>
<li><p>Suppose that $X\to|L|$ has already stabilized, then the map $X\to|2L|$ decomposes through the map $|L|\to\mathbb{P}\mathrm{Sym}^2 H^0(L)\to\mathbb{P} H^0(2L)$.</p></li>
</ul>
<p>Where the first map is the Veronese, and the second is a projection.</p>
|
138,921 | <p>Let $A\colon H\to S$ be a bounded operator on a Hilbert space $H$, where $S\subset H$. It is known that $\operatorname{trace}(A)=\sum_{n} \langle Af_n,f_n\rangle$ for any orthonormal basis $\{f_{n}\}$. Is there a relation between $\operatorname{trace}(A)$, $\operatorname{rank}(A)$, and the dimension of $S$?</p>
<p><strong>Edit:</strong> What if $A$ is a composition of two orthogonal projections $A_{1}:H\to S_{1}$, $A_{2}:H\to S_{2}$, such that $A=A_{1}\circ A_{2}$, for $S_{1},S_{2}\subset H$? I need to show that $\operatorname{trace}(A)\leq \operatorname{rank}(A)\leq \dim(S_{2})$</p>
| Gelu | 59,199 | <p>However, if the boundary conditions are for finite $x$, then for the transformed
equation we have boundary conditions depending on time!! For example, if $C$ denotes
a pollutant concentration, we see its value at some points.</p>
|
1,592,224 | <p>I need to understand how to find $a$ and $b$ such that $a \times b = 72$ and $a + b = -17$. I am also fine with any other example, even the general form $a \times b = c$ and $a + b = d$: how to find $a$ and $b$.</p>
<p>Thanks!</p>
| André Nicolas | 6,312 | <p><strong>If</strong> the equations hold, then $(a+b)^2=289$, and therefore
$$(a-b)^2=(a+b)^2-4ab=289-288=1.$$
Thus $a-b=\pm 1$ and $a+b=-17$. </p>
<p>Solving the system of linear equations, we obtain $a=-8$, $b=-9$ or $a=-9$, $b=-8$. It is easy to verify that both of these satisfy the given equations.</p>
<p>The same strategy will work in general.</p>
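<p>For the general form, the same computation shows that $a$ and $b$ are precisely the roots of the quadratic $t^2-dt+c=0$, so
$$a,b=\frac{d\pm\sqrt{d^2-4c}}{2},$$
with real solutions exactly when $d^2\geq 4c$. In the example, $d=-17$ and $c=72$ give $t=\frac{-17\pm 1}{2}$, that is $-8$ and $-9$.</p>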
|
4,050,855 | <p>On <a href="https://math.stackexchange.com/a/4050373/878105">this answer</a>, the function <span class="math-container">$f_n(x)=x^n$</span> in the interval <span class="math-container">$[0,1]$</span> is given as a pathologic example with pointwise convergence.</p>
<p>Can I say that this Cauchy sequence does not (pointwise) converge because the limit of the sequence is a function like this (not continuous):</p>
<p><a href="https://i.stack.imgur.com/1RR8t.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1RR8t.png" alt="enter image description here" /></a></p>
<p>without specifying any particular norm? I read that pointwise convergence, doesn't imply <span class="math-container">$d_\infty$</span> (uniform) convergence, and that uniform convergence implies pointwise convergence. But does lack of pointwise convergence negate uniform convergence?</p>
<p>Does this contradict in any way (or under certain norms) the fact that <span class="math-container">$C[a,b]$</span> with respect to <span class="math-container">$\Vert f \Vert_{\infty}$</span> is a Banach space? In other words, why is not an example of a Cauchy sequence that does not converge to some <span class="math-container">$f\in C[0,1]$</span>?</p>
| Antoni Parellada | 152,225 | <p>After some <a href="https://www.math.ucdavis.edu/%7Ehunter/intro_analysis_pdf/ch9.pdf" rel="nofollow noreferrer">research on the topic</a>, I think that the answer is no (as canonically answered by Elchanan), and the critical bit is his sentence, "<em>Pointwise convergence does not correspond to any metric (this can be proven)</em>."</p>
<p>But there remain the meta-questions of what is happening with that "pathologic" function, and what words would give the uninitiated some sense of direction.</p>
<p>The problem with the sequence <span class="math-container">$\{x^n\}_{n=1}^\infty$</span> in the interval <span class="math-container">$[0,1]$</span> is that it is <strong>pointwise convergent</strong>, but <strong>not uniformly convergent</strong>.</p>
<p>The sequence <span class="math-container">$f_n \to f$</span> pointwise on <span class="math-container">$[0,1]$</span> if <span class="math-container">$f_n(x) \to f(x)$</span> as <span class="math-container">$n \to \infty$</span> for every <span class="math-container">$x \in [0,1].$</span> There is no <span class="math-container">$\varepsilon$</span>!</p>
<p>The limit of a pointwise convergent sequence of continuous functions does not have to be continuous, and does not generally preserve boundedness. In this case:</p>
<p><span class="math-container">$$\lim_{n\to\infty} f_n(x) = f(x) =
\begin{cases} 0 & (0\leq x\lt 1) \\ 1 & (x=1) \end{cases}$$</span></p>
<p>In uniform convergence, the uniform norm is introduced. Uniform convergence implies pointwise convergence, but not the other way around.</p>
<p>A sequence of functions converges uniformly if for every <span class="math-container">$\varepsilon>0$</span> there is an <span class="math-container">$N\in \mathbb N$</span> such that for all <span class="math-container">$n≥N$</span> and <strong>all</strong> <span class="math-container">$x \in [0,1],$</span> <span class="math-container">$|f_n(x)-f(x)|<\varepsilon$</span>; equivalently, <span class="math-container">$d(f_n,f)<\varepsilon,$</span> where <span class="math-container">$d(f_n, f) = \sup_{x\in [0,1]} |f_n(x) - f(x)|.$</span></p>
<p>The space <span class="math-container">$C[0,1]$</span> with the infinity metric is complete (Banach), but the sequence <span class="math-container">$x^n$</span> is not Cauchy.</p>
<p>The link to Cauchy sequences is that <span class="math-container">$$\small \text{uniform convergence}\iff \text{Cauchy under the }\Vert \cdot \Vert_\infty \text{ uniform or infinity norm}. $$</span></p>
<p>The lack of uniform convergence is explained in Elchanan's answer, as well as <a href="http://www.math.wisc.edu/%7Eangenent/521.2017s/UniformConvergence.html" rel="nofollow noreferrer">here</a> by contradiction:</p>
<blockquote>
<p>If <span class="math-container">$f_n(x)$</span> converges uniformly, then the limit function must be <span class="math-container">$f(x)=0$</span> for <span class="math-container">$x∈[0,1)$</span> and <span class="math-container">$f(1)=1.$</span> Uniform convergence implies that for any <span class="math-container">$ϵ>0$</span> there is an <span class="math-container">$N∈\mathbb N$</span> such that <span class="math-container">$|x^n−f(x)|<ϵ$</span> for all <span class="math-container">$n≥N$</span> and all <span class="math-container">$x∈[0,1].$</span> Assuming this is indeed true we may choose <span class="math-container">$ϵ,$</span> in particular, we can choose <span class="math-container">$ϵ=1/2.$</span> Then there is an <span class="math-container">$N∈ \mathbb N$</span> such that for all <span class="math-container">$n≥N$</span> we have <span class="math-container">$|x^n−f(x)|<1/2.$</span> We may choose <span class="math-container">$n$</span> and <span class="math-container">$x.$</span> Let us choose <span class="math-container">$n=N,$</span> and <span class="math-container">$x=(\frac 3 4)^{1/N}.$</span> Then we have <span class="math-container">$f(x)=0$</span> and thus</p>
<p><span class="math-container">$$|f_N(x)−f(x)|=x^N−0=\frac3 4>\frac 1 2, $$</span></p>
<p>a contradiction.</p>
</blockquote>
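<p>If a concrete picture helps, here is a small Mathematica sketch (purely illustrative, not part of the argument above): the sup-norm distance between <span class="math-container">$f_n$</span> and the pointwise limit <span class="math-container">$f$</span> equals <span class="math-container">$1$</span> for every <span class="math-container">$n$</span>, so it never tends to <span class="math-container">$0$</span>.</p>
<pre><code>(* the curves x^n never fit inside a uniform eps-band around the limit f *)
Plot[Evaluate[Table[x^n, {n, {1, 2, 5, 20, 100}}]], {x, 0, 1}]

(* sup of |f_n - f| on [0, 1): the sup of x^n is 1 for every n *)
Table[NMaxValue[{x^n, 0 <= x < 1}, x], {n, {1, 5, 20}}]
(* {1., 1., 1.} up to numerical error *)
</code></pre>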
|
241,871 | <p>A Lévy measure $\nu$ on $\mathbb R^{d}$ is a measure satisfying
$$\nu\{0\} = 0, \ \int_{\mathbb R^{d}} (|y|^{2}\wedge 1) \nu(dy) <\infty.$$</p>
<p>A Lévy process can be characterized by triples $(b, A, \nu)$ by
Lévy-Itô decomposition, then
$$X_{t} = bt + W_{A}(t) + \int_{B_{1}} x \tilde N(t, dx) + \int_{B_{1}^{c}} x N(t, dx)$$
where $N(t, B)$ is a Poisson measure with $\mathbb E N(1, B) = \nu(B)$ for a set $B$ bounded below,
and $\tilde N(t, dx) = N(t, dx) - t \nu(dx)$ is its compensated one.</p>
<p>[Q.] If $(0, 0, \nu)$ is a triplet of a Lévy process $X$ whose first moment is finite, is the following always true?
$$ \lim_{r\to 0^{+}}\int_{B_{1}\setminus B_{r}} x \nu(dx) < \infty.$$
Moreover, if $\nu(B_1^c) = 0$, then
$$\mathbb E X_1 = \lim_{r\to 0^{+}}\int_{B_{1}\setminus B_{r}} x \nu(dx).$$
END.</p>
<p>Remark: If $\nu(dx) = x^{-2} dx$, then it corresponds to a 1-stable process, and
$ \lim_{r\to 0^{+}}\int_{B_{1}\setminus B_{r}} x \nu(dx) = 0$, while
$ \int_{B_{1}} x \nu(dx) $ is not well-defined.
[Q1.] Is there always a Lévy process corresponding to $(0, 0, \nu)$ for an
arbitrary Lévy measure $\nu$? </p>
<p>Remark: Consider $\nu(dx) = x^{-2} I(x>0) dx$; it is
a Lévy measure. But if there were an associated process $X_{t}$, then $\mathbb E[X_{1}] = \int_{0}^{\infty} x \nu(dx) = \infty$.</p>
| R. van Dobben de Bruyn | 82,179 | <p>A reference for the birational invariance of $CH_0$ is Fulton's <em>Intersection Theory</em> [1], <strong>Example 16.1.11</strong>. In the example, he makes the assumption that $k$ is algebraically closed, but he never uses it. Since the argument is fairly short, let me repeat it here.</p>
<blockquote>
<p><strong>Theorem.</strong> Let $k$ be a field, and let $X$ and $Y$ be smooth proper $k$-varieties. If $X$ and $Y$ are birational, then $CH_0(X) \cong CH_0(Y)$.</p>
</blockquote>
<p><em>Proof.</em> Let $f \colon X \dashrightarrow Y$ be a birational map. Let $\Gamma \subseteq X \times Y$ be the closure of the graph. Then $\Gamma$ defines maps
\begin{align*}
f_* \colon CH_0(X) &\to CH_0(Y) & f^* \colon CH_0(Y) &\to CH_0(X)\\
a &\mapsto \pi_{Y,*}(\Gamma \cdot \pi_X^* a), & b &\mapsto \pi_{X,*}(\Gamma \cdot \pi_Y^* b).
\end{align*}
The composition $f^* \circ f_*$ (resp. $f_* \circ f^*$) is given by the cycle $\Gamma^\top \circ \Gamma := \pi_{13, *} (\pi_{12}^* \Gamma \cdot \pi_{23}^* \Gamma^\top)$ on $X \times X$ (resp. by $\Gamma \circ \Gamma^\top$ on $Y \times Y$); see [<em>loc. cit.</em>, <strong>Def. 16.1.1</strong> and <strong>Prop. 16.1.2(a)</strong>] for details.</p>
<p>Let $V \subseteq Y$ be an open such that $f$ induces an isomorphism $f^{-1}(V) \stackrel\sim\to V$. Let $U = f^{-1}(V)$, $Z = X \setminus U$, and $W = Y \setminus V$. I claim that the cycle $\varepsilon := \Gamma^\top \circ \Gamma - \Delta_X$ is supported on $Z \times Z$. By the short exact sequence of [<em>loc. cit.</em>, <strong>Prop. 1.8</strong>], it suffices to show that the restriction of $\varepsilon$ to $S := X \times X \setminus Z \times Z = X \times U \cup U \times X$ is zero. Let
$$T := (X \times Y \times U) \cup (U \times Y \times X) = \pi_{13}^{-1} (S).$$
Then the restriction of $\varepsilon$ to $S$ is the pushforward along $\pi_{13}$ of
$$\left.\left(\pi_{12}^* \Gamma \cdot \pi_{23}^* \Gamma^\top\right)\right|_T.\tag{1}$$
We can compute the latter as
$$\left.\left(\pi_{12}^* \Gamma\right)\right|_T \cdot \left.\left(\pi_{23}^* \Gamma^\top\right)\right|_T.$$
This is a proper intersection, and the intersection is equal to the 'diagonal'
$$\left\{(a,b,c) \in U \times V \times U\ \big|\ f(a)=b=f(c)\right\}.$$
Indeed, the intersection agrees with this set on both $X \times Y \times U$ and $U \times Y \times X$.</p>
<p>Then the pushforward of (1) is $\Delta_U$. Hence $\varepsilon$ vanishes on $S$, so it is supported on $Z \times Z$. In particular, the projections $\pi_{1,*} \varepsilon, \pi_{2,*} \varepsilon \in CH_*(X)$ are supported on $Z$.</p>
<p>The punchline is that $\varepsilon$ acts as zero on $CH_0(X)$ by the moving lemma. Indeed, any $0$-cycle $a$ on a smooth variety can be moved away from $Z$, so the intersection $\pi_1^* a \cdot \varepsilon$ is zero.</p>
<p>This proves that $f^* \circ f_*$ is the identity, and by symmetry the same holds for $f_* \circ f^*$. $\square$</p>
<p><strong>Remark.</strong> It is even true that $CH_0$ is a stable birational invariant: if $X$ and $Y$ are smooth and proper varieties with $X \times \mathbb P^n \stackrel\sim\dashrightarrow Y \times \mathbb P^m$, then $CH_0(X) \cong CH_0(Y)$. (The only thing left to prove is that $CH_0(X) \cong CH_0(X \times \mathbb P^1)$, since we already know the result for birational varieties.)</p>
<hr>
<p><strong>Remark.</strong> The reason Fulton only proves the theorem for algebraically closed fields is that Fulton only proves the moving lemma over those fields. For a proof of the moving lemma over arbitrary fields, see Roberts's appendix to the Oslo 1970 Algebraic Geometry proceedings [2] (finite fields are addressed separately).</p>
<p><strong>References:</strong></p>
<p>[1] <strong>Fulton, William</strong>. Intersection theory (second edition). Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. <em>Springer-Verlag</em>, Berlin, 1998. ISBN: 3-540-62046-X; 0-387-98549-2. <a href="http://www.ams.org/mathscinet-getitem?mr=1644323" rel="noreferrer">MR1644323</a> </p>
<p>[2] <strong>Roberts, Joel</strong>. Chow's moving lemma. Appendix 2 to: "Motives" by Steven L. Kleiman. <em>Algebraic geometry, Oslo 1970 (Proc. Fifth Nordic Summer School in Math.)</em>, pp. 89--96. <em>Wolters-Noordhoff</em>, Groningen, 1972. <a href="http://www.ams.org/mathscinet-getitem?mr=0382269" rel="noreferrer">MR0382269</a></p>
|
345,065 | <p>Let $f$ and $g$ be functions of one real variable and define $F(x,y)=f[x+g(y)]$. Find formulas for all the partial derivatives of $F$ of first and second order.</p>
<p>For the first order, I think we have:</p>
<p>$\frac{\partial F}{\partial x}=\frac{\partial f}{\partial x}+ \frac{\partial f}{\partial y}$</p>
<p>$\frac{\partial F}{\partial y}=\frac{\partial f}{\partial x}g'(x)+ \frac{\partial f}{\partial y}g'(y)$</p>
<p>Is it correct? What are the second order derivatives?</p>
<p>Thank you</p>
| user1337 | 62,839 | <p>$f$ is a function of <strong>one</strong> variable. Therefore the notation $\frac{\partial f}{\partial x},\frac{\partial f}{\partial y}$ is problematic (and I suggest you adopt the prime notation in that case). What you have written is not correct.</p>
<p>The correct formulas are: $$\frac{\partial F}{\partial x}(x,y)=f'(x+g(y)) $$</p>
<p>$$\frac{\partial F}{\partial y}(x,y)=f'(x+g(y))g'(y) $$ $$\frac{\partial^2 F}{\partial x^2}(x,y)=f''(x+g(y)) $$ $$\frac{\partial^2 F}{\partial x \partial y}(x,y)=f''(x+g(y))g'(y)=\frac{\partial^2 F}{\partial y \partial x}(x,y) $$</p>
<p>$$\frac{\partial^2 F}{\partial y^2}(x,y)=f''(x+g(y))\,g'(y)^2+f'(x+g(y))\,g''(y) $$</p>
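<p>These formulas can be double-checked symbolically in Mathematica (a quick sketch with generic <code>f</code> and <code>g</code>):</p>
<pre><code>F[x_, y_] := f[x + g[y]];
D[F[x, y], x, y] // Simplify
(* f''[x + g[y]] g'[y] *)
D[F[x, y], y, y] // Simplify
(* g'[y]^2 f''[x + g[y]] + g''[y] f'[x + g[y]] *)
</code></pre>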
|
4,144,203 | <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be non-empty subsets of <span class="math-container">$\mathbb{R}$</span> with <span class="math-container">$a\leq b$</span> for all <span class="math-container">$a\in A, \space b \in B$</span> and suppose there exists a unique number <span class="math-container">$\alpha$</span> such that <span class="math-container">$a\leq \alpha \leq b$</span> for all <span class="math-container">$a\in A, \space b \in B.$</span> Then we have <span class="math-container">$\alpha = \sup A = \inf B$</span>.</p>
<p><strong>Proof:</strong> Suppose <span class="math-container">$\alpha < \sup A.$</span> If <span class="math-container">$S = \sup A\in A$</span> we are done so assume this is not the case. Choose <span class="math-container">$k = \dfrac{\alpha + S}{2}$</span> so <span class="math-container">$\alpha < k$</span> and <span class="math-container">$k < S.$</span> Now we can choose some <span class="math-container">$A\ni a \geq k$</span> which yields a contradiction since <span class="math-container">$\alpha <a$</span>. One can similarly show that <span class="math-container">$\alpha \leq \inf B.$</span> We conclude that <span class="math-container">$\sup A \leq \alpha \leq \inf B$</span> and since <span class="math-container">$\alpha$</span> is unique it follows that <span class="math-container">$\alpha = \sup A = \inf B$</span>.</p>
<p>Does my proof seem sound? If not, what do I have to adjust?</p>
| 311411 | 688,046 | <p>I was initially lost by your first two sentences. It is not entirely clear why "we are done" if <span class="math-container">$\sup A$</span> belongs to <span class="math-container">$A$</span>. In any case, we can take the given existence of <span class="math-container">$\alpha$</span>
and apply just the definitions of supremum and infimum to say <span class="math-container">$\sup A \leq \alpha \leq \inf B$</span>.</p>
<p>Your final statement is sound, although it took me some time to become convinced. Of course it is intuitively obvious. You might want to add some detail to make clear why "it follows". As for myself,</p>
<p>I became convinced when I wrote it this way: <span class="math-container">$\,\,\,\,\alpha$</span> is unique <span class="math-container">$\,\,\implies\,\, \inf B \,\leq\, \sup A.$</span></p>
<p>Or, even more clear to me is: <span class="math-container">$\,\,\,\,\alpha$</span> is not unique <span class="math-container">$\,\,\impliedby\,\, \sup A \,< \,\inf B.$</span></p>
|
4,144,203 | <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be non-empty subsets of <span class="math-container">$\mathbb{R}$</span> with <span class="math-container">$a\leq b$</span> for all <span class="math-container">$a\in A, \space b \in B$</span> and suppose there exists a unique number <span class="math-container">$\alpha$</span> such that <span class="math-container">$a\leq \alpha \leq b$</span> for all <span class="math-container">$a\in A, \space b \in B.$</span> Then we have <span class="math-container">$\alpha = \sup A = \inf B$</span>.</p>
<p><strong>Proof:</strong> Suppose <span class="math-container">$\alpha < \sup A.$</span> If <span class="math-container">$S = \sup A\in A$</span> we are done so assume this is not the case. Choose <span class="math-container">$k = \dfrac{\alpha + S}{2}$</span> so <span class="math-container">$\alpha < k$</span> and <span class="math-container">$k < S.$</span> Now we can choose some <span class="math-container">$A\ni a \geq k$</span> which yields a contradiction since <span class="math-container">$\alpha <a$</span>. One can similarly show that <span class="math-container">$\alpha \leq \inf B.$</span> We conclude that <span class="math-container">$\sup A \leq \alpha \leq \inf B$</span> and since <span class="math-container">$\alpha$</span> is unique it follows that <span class="math-container">$\alpha = \sup A = \inf B$</span>.</p>
<p>Does my proof seem sound? If not, what do I have to adjust?</p>
| Hilberto1 | 831,222 | <p>Your argumentation is correct but I would add a bit more detail to the last sentence: Assume that <span class="math-container">$\sup A\neq \alpha \neq \inf B$</span> so we would obtain the inequality <span class="math-container">$\sup A < \alpha < \inf B$</span>. Then we could choose a number</p>
<p><span class="math-container">$t = \dfrac{
\alpha + \inf B}{2}$</span> (analogous to how you chose <span class="math-container">$k$</span> in your proof) so <span class="math-container">$\sup A < \alpha < t < \inf B$</span>, which would contradict the uniqueness of <span class="math-container">$\alpha.$</span> The same applies for <span class="math-container">$\sup A$</span>. This makes your last conclusion more clear in my opinion.</p>
<p>In general, it's good to keep a proof concise, while not leaving out details whose absence would make it "hard" to follow.</p>
|
456,826 | <p>I need to find the derivative of this function. I know I need to separate the integrals into two and use the chain rule but I am stuck.</p>
<p>$$y=\int_\sqrt{x}^{x^3}\sqrt{t}\sin t~dt~.$$</p>
<p>Thanks in advance</p>
| Nick Peterson | 81,839 | <p>Let me show you a general method which works in these sorts of situations.</p>
<p>By the Fundamental Theorem of Calculus, we know how to take the derivative of
$$
F(z):=\int_0^z\sqrt{t}\sin(t)\,dt;
$$
in particular, FTC tells us that
$$\tag{1}
F'(z)=\sqrt{z}\sin(z).
$$
Now, note that
$$
\int_{\sqrt{x}}^{x^3}\sqrt{t}\sin(t)\,dt=\int_0^{x^3}\sqrt{t}\sin(t)\,dt-\int_0^{\sqrt{x}}\sqrt{t}\sin(t)\,dt=F(x^3)-F(\sqrt{x}).
$$
So, the derivative you want is
$$
\frac{d}{dx}\left[F(x^3)-F(\sqrt{x})\right].
$$
See if you can use the Chain Rule, and (1), to finish it up from here.</p>
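<p>For reference, once you have done so, the result should be
$$
y'(x)=F'(x^3)\cdot 3x^2-F'(\sqrt{x})\cdot\frac{1}{2\sqrt{x}}=3x^{7/2}\sin(x^3)-\frac{\sin(\sqrt{x})}{2x^{1/4}}.
$$</p>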
|
4,163,003 | <p>Let <span class="math-container">$Z \in \mathbb{R}^2$</span> be an i.i.d. Gaussian vector with mean <span class="math-container">$M$</span> where <span class="math-container">$P_{Z\mid M}$</span> is its distribution.</p>
<p>Let <span class="math-container">$g: \mathbb{R}^2 \to \mathbb{R}$</span> and consider the following equation:
<span class="math-container">$$
E[g(Z)\mid M=\mu]=0, \forall \mu \in C,
$$</span>
where <span class="math-container">$C=\{\mu: \frac{\mu_1^2}{r_1^2}+\frac{\mu_2^2}{r_2^2}=1 \}$</span> for some given <span class="math-container">$r_1,r_2 > 0$</span>. That is, <span class="math-container">$C$</span> is an ellipse.</p>
<p>It is not difficult to verify (see this <a href="https://math.stackexchange.com/questions/4159488/example-of-g-mathbbr2-to-mathbbr-s-t-egzm-mu-0-forall-mu-i/4159528?noredirect=1#comment8618967_4159528">question</a>, see also Edit 3) that a solution to this equation is given by
<span class="math-container">$$
g(x)= \frac{x_1^2}{r_1^2}+\frac{x_2^2}{r_2^2}- c,
$$</span>
where <span class="math-container">$c=\frac{1}{r_1^2}+\frac{1}{r_2^2}+1$</span>.
In fact any function <span class="math-container">$g_a(x)= a g(x)$</span> for any <span class="math-container">$a \in \mathbb{R}$</span> is a solution.</p>
<p><strong>Question:</strong> Is <span class="math-container">$g$</span> a unique solution up to a multiplicative constant?</p>
<p><strong>Edit:</strong> If we need to make an assumption on the class of allowed functions <span class="math-container">$g$</span>. Let us assume that <span class="math-container">$g$</span>'s are bounded by a quadratic monomial (i.e., for every <span class="math-container">$g$</span> there exists <span class="math-container">$a$</span> and <span class="math-container">$b$</span> such that <span class="math-container">$g(x) \le a \|x \|^2 +b$</span>).</p>
<p><strong>Edit 2:</strong> If you want to avoid expectation notation. Everything can be alternatively written as
<span class="math-container">$$
\iint g(z)\frac{1}{2 \pi} e^{-\frac{\|z-m\|^2}{2}} \, {\rm d} z=0, \, m\in C.
$$</span>
From here, one can see that this question is about a convolution.</p>
<p><strong>Edit 3:</strong> To see that <span class="math-container">$g(x)$</span> is a solution we use that the second moment of Gaussian is given by <span class="math-container">$E[Z_i^2\mid M_i=\mu_i]=1+\mu_i^2$</span>, which leads to
<span class="math-container">\begin{align}
E\left[\frac{Z_1^2}{r_1^2}+\frac{Z_2^2}{r_2^2}- c\mid M=\mu \right]&= \frac{E[Z_1^2\mid M_1=\mu_1]}{r_1^2}+\frac{ E[Z_2^2\mid M_2=\mu_2]}{r_2^2}-c\\[6pt]
&=\frac{1+\mu_1^2}{r_1^2}+\frac{1+\mu_2^2}{r_2^2}-c\\[6pt]
&=\frac{1}{r_1^2}+\frac 1 {r_2^2}+1-c,
\end{align}</span>
where in the last step we used that <span class="math-container">$\mu$</span> is on the ellipse.</p>
<p><strong>Edit 4:</strong> The comment below shows that the solution is not unique when an ellipse is a circle. However, I would still like to know the answer for a general ellipse.</p>
| Luca Ghidelli | 176,416 | <p>That was a very natural question to me, and I really enjoyed the journey of exploring it!
Let me give three different answers, which for me correspond to three different levels of understanding of the problem.</p>
<p>(Major edit on June 18th, 2021)</p>
<p><strong>Three approaches</strong></p>
<p>The three answers are based respectively on</p>
<ol>
<li>functions with circular symmetry:
this approach gives solutions for the case of a circle;</li>
<li>rescaling quasi-symmetry of gaussians:
this approach produces new solutions from known ones, and it works for general ellipses;</li>
<li>normal convolutional inverse:
this approach, which I prefer to carry out using Hermite polynomials, in principle works for very general subsets, not only for ellipses.</li>
</ol>
<p>Here I provide, briefly, the three solutions. More details on each of them are postponed to the end of this post.</p>
<hr />
<p><strong>Solution 1: The case of a circle and circular symmetry</strong></p>
<p>Here I discuss the case of a circle (<span class="math-container">$r_1=r_2=R^2$</span>).</p>
<p><strong>Proposition 1.</strong>
Let <span class="math-container">$g_1,g_2$</span> be two distinct functions on <span class="math-container">$\mathbb R^2$</span> with circular symmetry.
Let <span class="math-container">$W$</span> be an i.i.d. standard Gaussian vector with mean <span class="math-container">$M=(R,0)$</span>.
Let <span class="math-container">$E_i$</span> be the expectation of <span class="math-container">$g_i(W)$</span>.
Then <span class="math-container">$g(x) = E_1 g_2(x) - E_2 g_1(x)$</span> is a solution.</p>
<p><em>Here is the <strong>idea</strong>:</em> if the ellipse is a circle, since the standard gaussian distributions look the same from every point of view, we have that the problem has rotational symmetry. Then it is natural to search for solution that are rotationally symmetric themselves.</p>
<p><em>Remark 1:</em> In Proposition 1 we already see, at least in the case of a circle, that the conditions of the problem (namely, those of vanishing expectation when the mean varies along an ellipse) gives a system of equations that is not enough restrictive to cut out a one-dimensional set of solutions.</p>
<hr />
<p><strong>Solution 2: A family of solutions for the general case of an ellipse</strong></p>
<p>Let now <span class="math-container">$r_1,r_2>0$</span> be arbitrary, so we consider the case of an ellipse of equation <span class="math-container">$\frac {x_1^2}{r_1} + \frac {x_2^2}{r_2} =1.$</span></p>
<p>We have already seen (in the question itself) that the function <span class="math-container">$$g_r(x)= \left(\frac {x_1^2}{r_1} + \frac {x_2^2}{r_2}\right)- \left(\frac 1 {r_1} + \frac 1 {r_2} +1\right)$$</span> is a solution to the problem. We now construct a one-parameter family of solutions.</p>
<p><strong>Proposition 2</strong>: For every <span class="math-container">$t>-1/2$</span>, the function
<span class="math-container">$$ g_{r,t}(x) = \left ((1+2t)^2 \left(\frac {x_1^2}{r_1} + \frac {x_2^2}{r_2}\right)- (1+2t)\left(\frac {1}{r_1} + \frac {1}{r_2}\right) -1\right) e^{-t|x|^2 }$$</span>
is a solution to the problem.</p>
<p><em>Here is the <strong>idea</strong>:</em> Gaussians behave well with respect to multiplications with other gaussians. Moreover, if a function can be written as <span class="math-container">$g(x)=h(x)e^{-t|x|^2}$</span> for some reasonably growing factor <span class="math-container">$h(x)$</span>, then it should be automatically behave mildly at infinity, as required at some point in the question. Then, it makes sense to look for solutions in this form. In this particular case, it turns out that the factor <span class="math-container">$h(x)$</span> can be derived from a solution to the problem associated with another smaller ellipse.</p>
<p><em>Remark 2:</em> With Proposition 2, we see that there are infinitely many solutions which are bounded by a quadratic monomial. In fact, there are many solutions that tend to zero at infinity: those corresponding to parameters <span class="math-container">$t>0$</span>.</p>
<hr />
<p><strong>Solution 3: Inverse of the Gaussian convolution</strong></p>
<p>Given an integrable function <span class="math-container">$h:\mathbb R^2\to\mathbb{C}$</span> we define, for every <span class="math-container">$m\in\mathbb R^2$</span>, the shifted gaussian average operator
<span class="math-container">$$ E_m[h] = \frac 1 {2\pi}\iint h(x) e^{-|x-m|^2/2} dx_1 dx_2 .$$</span></p>
<p>We also define, for every <span class="math-container">$x\in\mathbb R^2$</span>, the dual operator <span class="math-container">$\hat E_x[\cdot]$</span> given by
<span class="math-container">$$ \hat E_x[h] = \frac 1 {2\pi}\iint h(ix) e^{-|m+ix|^2/2} dm_1 dm_2 .$$</span></p>
<p><strong>Proposition 3</strong>: Let <span class="math-container">$f\in\mathbb C[m_1,m_2]$</span> be a polynomial function that vanishes identically on the ellipse.
Then the function <span class="math-container">$g(x)=\hat E_x[f]$</span> is a solution to the problem.</p>
<p><em>Here is the <strong>idea</strong>:</em> The operator <span class="math-container">$E_m[\cdot]$</span> corresponds to the operation of convolutions with a standard gaussian distribution centered at <span class="math-container">$m\in \mathbb R^2$</span>. The dual operator <span class="math-container">$\hat E_x[\cdot]$</span> is basically an inverse operation that "unwinds" the convolution.</p>
<p><em>Remark 3.1:</em> The functions in Proposition 3 do not necessarily have to be polynomials, but I decided to restrict to this case, which is actually very natural to me.</p>
<p><em>Remark 3.2:</em> I preferred to treat this case using the theory of Hermite polynomials (more details below, in the appropriate section) and I hope not to have done any mistake. The main advantage is that it makes the theory practical for computations (see the next section). Another advantage is that the proofs are derived directly from known properties of Hermite polynomials.</p>
<hr />
<p><strong>Examples for solution 3: Hermite polynomials</strong></p>
<p>I strongly believe that abstract solutions should come with practical examples, whenever possible.</p>
<p><em>Example 1</em>: Let <span class="math-container">$f(m) = m_1^2 /r_1 + m_2^2/r_2 - 1$</span>. This is a function that vanish identically on the ellipse. Now let <span class="math-container">$g(x) = \hat E_x [f]$</span>.</p>
<p>At the end of the post, I will show that the operator <span class="math-container">$\hat E_x[\cdot]$</span> can be computed explicitly as follows on monomials
<span class="math-container">$$\hat E_x[m_1^pm_2^q] = He_p(x_1) He_q(x_2), $$</span>
where <span class="math-container">$He_n(t)$</span> denote the (probabilyst's) <a href="https://en.wikipedia.org/wiki/Hermite_polynomials" rel="nofollow noreferrer">Hermite polynomials</a>
<span class="math-container">$$ He_0(t) = 1 ,$$</span>
<span class="math-container">$$ He_1(t) = t ,$$</span>
<span class="math-container">$$ He_2(t) = t^2 -1, $$</span>
<span class="math-container">$$ He_3(t) = t^3 - 3t, $$</span>
<span class="math-container">$$ He_4(t) = t^4 - 6t^2 + 3 , etc.$$</span></p>
<p>Therefore we have that
<span class="math-container">$$ g(x) = He_2(x_1) / r_1 + He_2(x_2)/ r_2 - 1 $$</span>
is the function found by Boby already in the question of this problem.</p>
<p><em>Example 2:</em> Let <span class="math-container">$f(m) = \left( m_1^2 /r_1 + m_2^2/r_2 - 1\right)^2$</span>. This function vanishes on the ellipse identically.</p>
<p>We have
<span class="math-container">$$ f(m) = \frac {m_1^4} {r_1^2} + \frac {m_2^4} {r_2^2} + \frac {2m_1^2 m_2^2} {r_1r_2} - \frac {2m_1^2} {r_1} - \frac {2m_2^2} {r_2} + 1.$$</span>
Therefore if we let <span class="math-container">$g(x) = \hat E_x[f]$</span>, we have that
<span class="math-container">$$ g(x) = \frac {He_4(m_1)} {r_1^2} + \frac {He_4(m_2)} {r_2^2} + \frac {2He_2(m_1)He_2(m_2)} {r_1r_2} - \frac {2He_2(m_1)} {r_1} - \frac {2He_2(m_1)} {r_2} + 1$$</span>
is another solution to the original problem.</p>
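<p>As a numerical sanity check of Example 1 (a minimal Mathematica sketch; the parameters <span class="math-container">$r_1=2$</span>, <span class="math-container">$r_2=3$</span> and the point on the ellipse are arbitrary choices), using the expanded form <span class="math-container">$He_2(x_i)=x_i^2-1$</span>:</p>
<pre><code>r1 = 2; r2 = 3;
g[x1_, x2_] := x1^2/r1 + x2^2/r2 - (1/r1 + 1/r2 + 1);
mu = {Sqrt[r1] Cos[0.3], Sqrt[r2] Sin[0.3]}; (* satisfies mu1^2/r1 + mu2^2/r2 == 1 *)
NExpectation[g[x1, x2], {x1, x2} \[Distributed]
  ProductDistribution[NormalDistribution[mu[[1]], 1], NormalDistribution[mu[[2]], 1]]]
(* approximately 0, up to numerical error *)
</code></pre>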
<hr />
<p><strong>Proof of Proposition 1</strong></p>
<p>Since the two functions <span class="math-container">$g_1, g_2$</span> have circular symmetry, it means that if we compute the averages with respect to a Gaussian centered in any point of the circle, then we get the same results <span class="math-container">$E_1$</span> and <span class="math-container">$E_2$</span>. If we want to see it explicitly, we can write down the change of variables given by the rotation of the plane that sends a point of the circle <span class="math-container">$m$</span> to <span class="math-container">$(R,0)$</span>. Now, since the averages of <span class="math-container">$g_i$</span> are <span class="math-container">$E_i$</span>, we see that the expectation of their combination <span class="math-container">$g=E_1 g_2 - E_2 g_1$</span> is simply <span class="math-container">$E_1E_2−E_2E_1=0$</span>. This shows that <span class="math-container">$g$</span> is a solution to the problem posed. The key point here is that the expectation of a function with circular symmetry doesn't change if we rotate the probability measure .</p>
<hr />
<p><strong>Proof of Proposition 2</strong></p>
<p>Let <span class="math-container">$f_r(x) = \frac {x_1^2}{r_1} + \frac {x_2^2}{r_2} $</span> and <span class="math-container">$ c_r= \frac 1 {r_1} + \frac 1 {r_2} +1 $</span>.</p>
<p>We make the <span class="math-container">$u$</span>-substitution
<span class="math-container">$$(u_1,u_2) = (\sqrt {1+2t} x_1,\sqrt {1+2t} x_2) ,$$</span>
so that <span class="math-container">$e^{-t|x|^2}e^{-|x|^2/2} = e^{-|u|^2/2} $</span>.</p>
<p>Then, given <span class="math-container">$m=(m_1,m_2)$</span>, the expression <span class="math-container">$e^{-t|x|^2 }e^{-|x-m|^2/2}$</span> is equal to:
<span class="math-container">$$ = e^{-\frac 1 2 \left((1+2t)(x_1^2+x_2^2) -2m_1x_1-2m_2x_2 + m_1^2+m_2^2\right)}
$$</span>
<span class="math-container">$$ = e^{-\frac {1} {2} |u-\frac m {\sqrt{1+2t}}|^2 } e^{ -\frac {1} {2} |m|^2+\frac {1} {2} \frac{|m|^2}{1+2t}} .$$</span></p>
<p>Now, let <span class="math-container">$$d_{r,t}=(1+2t)\left(\frac {1}{r_1} + \frac {1}{r_2}\right) +1.$$</span>
The convolution of the gaussian with <span class="math-container">$g_{r,t}$</span> is
<span class="math-container">$$ \frac 1{2\pi} \iint g_{r,t}(x) e^{-|x-m|^2/2} dx$$</span>
<span class="math-container">$$ = \frac 1{2\pi} \iint \left( (1+2t)^2f_r(x) -d_{r,t}\right) e^{-t|x|^2}e^{-|x-m|^2/2} dx $$</span>
<span class="math-container">$$ = \frac 1 {2\pi} \frac 1 {1+2t} e^{-\frac t {1+2t} |m|^2}
\iint \left( (1+2t)^2f_r(u/\sqrt{1+2t}) - d_{r,t}\right) e^{-\frac 1 2 \left|u-\frac m{\sqrt {1+2t}}\right|^2} du $$</span>
<span class="math-container">$$ = \frac 1 {2\pi} \frac 1 {1+2t} e^{-\frac t {1+2t} |m|^2}
\iint \left( (1+2t)f_r(u) - d_{r,t}\right) e^{-\frac 1 2 \left|u-\frac m{\sqrt {1+2t}}\right|^2} du .$$</span></p>
<p>Note that <span class="math-container">$m/\sqrt{1+2t}$</span> belongs to the ellipse of equation <span class="math-container">$f_{r/(1+2t)} (x)= 1$</span>. Moreover, we have
<span class="math-container">$$(1+2t)f_r(u) - d_{r,t} = f_{r/\sqrt{1+2t}}(u) - c_{r/\sqrt{1+2t}}.$$</span>
In fact, we designed the function <span class="math-container">$g_{r,t}$</span> precisely to get to this point.</p>
<p>Finally, we observe that the double integral in the displayed equation is equal to zero, because it is an instance of the solution originally given by Boby in the question, for the shrunken ellipse of equation <span class="math-container">$f_{r/(1+2t)} (x)= 1$</span>.</p>
<hr />
<p><strong>Proof of Proposition 3</strong></p>
<p>First, let me recall that the normal moments in one variable are given by "dual Hermite polynomials" (as mentioned in an answer to <a href="https://math.stackexchange.com/q/1669837">this math.SE question</a>):
<span class="math-container">$$ E_m[x_1] = m_1, \quad \quad E_m[x_2] = m_2,$$</span>
<span class="math-container">$$ E_m[x_1^2] = m_1^2+1, \quad \quad E_m[x_2^2] = m_2^2+1,$$</span>
<span class="math-container">$$ E_m[x_1^3] = m_1^3+3m_1, \quad \quad E_m[x_2^3] = m_2^3+3m_2,$$</span>
<span class="math-container">$$ E_m[x_1^3] = m_1^4+6m_1^2+3, \quad \quad E_m[x_2^4] = m_2^4+6m_2^2 +3.$$</span></p>
<p>Compare with the first few (probabilyst's) Hermite polynomials
<span class="math-container">$$ He_0(t) = 1 ,$$</span>
<span class="math-container">$$ He_1(t) = t ,$$</span>
<span class="math-container">$$ He_2(t) = t^2 -1, $$</span>
<span class="math-container">$$ He_3(t) = t^3 - 3t, $$</span>
<span class="math-container">$$ He_4(t) = t^4 - 6t^2 + 3 .$$</span></p>
<p>For the record Hermite polynomials are explicitly given by the formula (here the signs alternate)
<span class="math-container">$$He_n(t) = n! \sum_{k=0} ^{\lfloor n/2\rfloor} \frac {(-1)^k t^{n-2k}}{2^kk!(n-2k)!},$$</span>
and the mixed moments of the shifted standard gaussian are
<span class="math-container">$$ E_m[x_1^p x_2^q] = i^{-p-q} He_p(im_1) He_q(im_2),$$</span>
where <span class="math-container">$i=\sqrt {-1}$</span> is a choice of imaginary unit.</p>
<p>On the other hand, the operator <span class="math-container">$\hat E_x[\cdot]$</span> is given on monomials by
<span class="math-container">$$\hat E_x[m_1^pm_2^q] = E_{-ix}[(im_1)^p(im_2)^q] = He_p(x_1) He_q(x_2).$$</span></p>
<p>Finally, we have the following property of Hermite polynomials (compare with Remark 4 below):
<span class="math-container">$$ E_m[He_n(x_1)] = m_1^n, \quad \quad E_m[He_n(x_2)] = m_2^n . $$</span></p>
<p>By linearity, this implies that <span class="math-container">$E_m$</span> and <span class="math-container">$\hat E_x$</span> are inverse operators, in the following sense:
<span class="math-container">$$ E_m [\hat E_x [f(m_1,m_2)]] = f(m_1,m_2) $$</span>
for every polynomial function <span class="math-container">$f$</span>.</p>
<p><em>Remark 4:</em>
For the record, the Hermite polynomials form a basis of the set of polynomials. This is explicitly given by the dual summation formula (the signs are all positive here!)
<span class="math-container">$$ t^n = n! \sum_{k=0} ^{\lfloor n/2\rfloor} \frac { He_{n-2k}(t)}{2^kk!(n-2k)!}.$$</span></p>
<p><em>Remark 5:</em>
To treat more general functions, one may use the following facts:</p>
<ol>
<li><p>The shifted gaussians are generating functions of the Hermite polynomials:
<span class="math-container">$$e^{-(x-y)^2/2} = e^{-x^2/2} \sum_{k=0} ^\infty He_k(x) \frac {y^k}{k!};$$</span></p>
</li>
<li><p>The Hermite polynomials satisfy the following orthogonality relation:
<span class="math-container">$$ \int_{-\infty} ^\infty He_n(t) He_n(t) \frac {e^{-t^2}/2}{k!\sqrt 2\pi} dt = \delta_{k,n}$$</span></p>
</li>
</ol>
<p>and so <span class="math-container">$$ \int_{-\infty} ^\infty He_n (x) \frac {e^{-(x-y)^2/2}}{\sqrt {2\pi}} dx = y^n.$$</span></p>
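<p>A side note for experimenting with these identities in Mathematica (not part of the argument): the built-in <code>HermiteH</code> is the physicists' polynomial, and the probabilists' <span class="math-container">$He_n$</span> used here is obtained by the standard rescaling:</p>
<pre><code>(* probabilists' Hermite polynomials from the built-in physicists' HermiteH *)
He[n_, t_] := 2^(-n/2) HermiteH[n, t/Sqrt[2]];
Table[Expand[He[n, t]], {n, 0, 4}]
(* {1, t, -1 + t^2, -3 t + t^3, 3 - 6 t^2 + t^4} *)
</code></pre>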
|
3,545,548 | <p><span class="math-container">$\def\LIM{\operatorname{LIM}}$</span>
Let <span class="math-container">$(X,d)$</span> be a metric space. Given any Cauchy sequence <span class="math-container">$(x_n)_{n=1}^{\infty}$</span> in <span class="math-container">$X$</span>, we introduce the formal limit <span class="math-container">$\LIM_{n\to \infty}x_n$</span>. We say that two formal limits <span class="math-container">$\LIM_{n\to \infty}x_n$</span> and <span class="math-container">$\LIM_{n\to \infty}y_n$</span> are equal iff <span class="math-container">$\lim_{n \to \infty}d(x_n,y_n)=0$</span>. We then define <span class="math-container">$\bar{X}$</span> to be the set of all the formal limits of Cauchy sequences in <span class="math-container">$X$</span>. We define the metric <span class="math-container">$d_{\bar{X}}$</span> as follows: <span class="math-container">$$d_{\bar{X}}(\LIM_{n\to \infty}x_n,\LIM_{n\to \infty}y_n)= \lim_{n \to \infty} d(x_n,y_n)$$</span>
I have proved that <span class="math-container">$(\bar{X},d_{\bar{X}})$</span> is indeed a metric space and that the metric is well defined. But I am stuck on proving that <span class="math-container">$(\bar{X},d_{\bar{X}})$</span> is a complete metric space. This problem could be resolved without taking topological spaces into account, as that concept comes later in the book. Any suggestion on how to go about this problem without using the machinery of topology would be invaluable. Thanks in advance.</p>
| Boka Peer | 304,326 | <p>Without loss of generality, assume $x < y< z.$ Since $x, y,$ and $z$ are in AP, we must have $ y-x = z-y.$ Therefore, $3^{y-x} = 3^{z-y}$. Hence, $(3^y)^2 = 3^x \cdot 3^z$. So, we are done. </p>
|
2,791,068 | <p>The Laplace transform of a measure $\mu$ on the real line is defined by
$$f_{\mu}(s)= \int_{\mathbb{R}}e^{-st}d\mu(t), \hspace{1cm} \forall s \geqslant 0.$$
My question is the following.</p>
<p>1) Does the Laplace transform of a measure (finite or infinite) always exist?</p>
<p>2) If not, can it be said that the Laplace transform of a probability measure always exists?</p>
<p>If the support of the measure is changed from the real line to the non-negative part of the real line, what happens to questions (1) and (2)?</p>
| Kavi Rama Murthy | 142,385 | <p>If $\mu$ has density $\frac 1 {\pi (1+x^{2})}$, then the Laplace transform exists only for $s=0$: for any $s>0$, the factor $e^{-st}$ grows exponentially as $t\to-\infty$ while the density decays only polynomially, so the integral diverges. </p>
|
66,068 | <p>I have a list like this. </p>
<pre><code>cdatalist = {{1., 0.898785, Failed, Failed, 50., 25., "serial"}, {1., 1.31175,1., Failed, 50., 25., "serial"}, {1., 18.8025, Failed, 0.490235, 50., 25., "serial"}, {1., 19.6628, 0.990079, Failed, 50., 25., "serial"}, {1., 39.547, Failed, Failed, 50., 25., "serial"}, {1., 39.7503, Failed, 0.482749, 50., 25., "serial"}, {1., 40.2078, Failed, Failed, 50., 25., "serial"}, {1., 40.6208, 0.980588, Failed, 50., 25., "serial"}, {1., 102.588, Failed, Failed, 50., 25., "serial"}, {1., 102.781, Failed, 0.466214, 50., 25., "serial"}, {1., 102.826, Failed, Failed, 50., 25., "serial"}, {1., 102.833, Failed, Failed, 50., 25., "serial"}, {15., 0.89985, Failed, Failed, 50., 25., "serial"}, {15., 1.31344, 1., $Failed, 50., 25., "serial"}}
</code></pre>
<p>At the end, I want to compile a new list by dropping any lines that have <code>Failed</code> in the third column of each row. </p>
<pre><code>datalistfunc[input_] :=
 Module[{cell, celllist, i},
  i = 1;
  celllist = {};
  While[i < Length@input + 1,
   cell =
    Select[input[[i]][[1 ;; 3]],
     Head[input[[i]][[3]]] == Real &];
   i = If[i < Length@input + 1, i + 1, Length@input + 1];
   AppendTo[celllist, cell];
   Print[cell]
   ]
  ]
datalist = datalistfunc[cdatalist];
</code></pre>
<p>My list looks like this after filtering. </p>
<pre><code>{{},{}}
{{1.,1.31175,1.},{}}
{{},{}}
{{1.,19.6628,0.990079},{}}
{{},{}}
{{},{}}
{{},{}}
{{1.,40.6208,0.980588},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{},{}}
{{15.,1.31344,1.},{}}
</code></pre>
<p>Instead, I want my list to look like this. </p>
<pre><code>{{1.,1.31175,1.},
{1.,19.6628,0.990079},
{1.,40.6208,0.980588},
{15.,1.31344,1.}}
</code></pre>
| wxffles | 427 | <p>Aside from all the other good answers which attack your problem from the beginning, a handy thing to remember is a way to tidy it up at the end:</p>
<pre><code>datalist //. {} :> Sequence[]
</code></pre>
<p>This makes all your empty (sub)lists go away. </p>
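<p>For example, on a small synthetic list in the shape of the question's output:</p>
<pre><code>{{}, {1., 1.31175, 1.}, {}} //. {} :> Sequence[]
(* {{1., 1.31175, 1.}} *)
</code></pre>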
|
228,135 | <p>I'm working to understand the Grothendieck topology version of the Zariski topology of schemes. Explained simply, it replaces the notion of "open subschemes" with "open immersions". So instead of $U\subseteq X$, we have $U\hookrightarrow X$.</p>
<p>The intersection $U\cap V$ between two open subschemes is replaced with the canonical immersion of the fiber product $(U\times_X V)\hookrightarrow X$. Is there a correspondingly simple analogue of the union, or do I have to construct it explicitly?</p>
| Community | -1 | <p>Maybe the following can help motivate this definition. In the case of $\mathbb{R}$-valued functions on a topological space $X$, if $\mathcal{U}$ is a <em>covering</em> sieve on $X$, so that the union of its open sets is $X$, then defining a function $\mathcal{U}$-locally on X is equivalent to defining a continuous function on $X$ (SGA 4 1/2, I, 2.3). One also defines a notion of vector bundles given $\mathcal{U}$-locally, and in this case the corresponding statement is that if $\mathcal{U}$ is a <em>covering</em> sieve, then there is an equivalence between the categories of vector bundles on $X$ and vector bundles given $\mathcal{U}$-locally (SGA 4 1/2, I, 3.3). And finally in the case of (flat) schemes over $X$, the corresponding statement is that if $\mathcal{U}$ is the sieve generated by a family $\{U_i\}$ of flat schemes over $X$, and the union of the images of $U_i$ is $X$, then there is an equivalence between the categories of quasi-coherent modules on $X$ and quasi-coherent modules given $\mathcal{U}$-locally (SGA 4 1/2, I, 4.5).</p>
|
4,059,489 | <blockquote>
<p>Let <span class="math-container">$ A, B \in M_n (\mathbb{C})$</span> such that <span class="math-container">$(A-B)^2 = A -B$</span>. Then <span class="math-container">$\mathrm{rank}(A^2 - B^2) \geq \mathrm{rank}( AB -BA)$</span>.</p>
</blockquote>
<p>I tried to apply the basic inequalities, but without success. How should I start? Thank you.</p>
| John Bentin | 875 | <p>Write <span class="math-container">$x=1396+t$</span>. Then
<span class="math-container">$$\sqrt{1+\frac t{1388}}\:+\sqrt{1+\frac t{1389}}\:+\sqrt{1+\frac t{1390}}$$</span>
<span class="math-container">$$=\sqrt{1+\frac t8}\:+\sqrt{1+\frac t7}\:+\sqrt{1+\frac t6}.$$</span>
Clearly <span class="math-container">$t=0$</span> is one solution. Now, <span class="math-container">$t$</span> cannot be positive, because that would make the terms on the LHS each smaller than the terms on the RHS. Similarly, <span class="math-container">$t$</span> cannot be negative, for the opposite reason. Therefore <span class="math-container">$t=0$</span> or <span class="math-container">$x=1396$</span> is the unique solution.</p>
|
164,213 | <p>Suppose I have some list with duplicates by some condition and I want to take the duplicates and apply some function to choose which duplicate to keep. Is there an efficient way to apply this transformation?</p>
<p>To clarify here is an example. Consider a list with elements with duplicate first elements:</p>
<pre><code>list={{2, 0.2}, {3, 0.}, {2, 0.1}, {4, 0.9}, {6, 0.3}, {3, 0.4}, {6, 0.3}}
</code></pre>
<p>Now I can apply, <code>DeleteDuplicatesBy[list, #[[1]] &]</code> and it will drop all the second instances of the duplicates in the list. But instead I want to do something else, say keep the duplicate with the maximum second value. This would look something like (my hypothetical function takes the arguments: list, function to find duplicates, function to act on duplicate list):</p>
<pre><code>CombineDuplicateBy[list,#[[1]]&,MaximalBy[#,#[[2]]&]]
{{2, 0.2}, {4, 0.9},{3, 0.4}, {6, 0.3}}
</code></pre>
<p>Notice my hypothetical output deleted elements <code>{3, 0.}</code> and <code>{6, 0.3}</code> since they were duplicates and had a smaller (or equal) second value. </p>
| kglr | 125 | <p>Also</p>
<pre><code>KeyValueMap[{#, Max @ #2}&] @ Merge[Rule @@@ list, Identity]
Join @@ TakeLargestBy[Last, 1] /@ GatherBy[list, First]
Join @@ MaximalBy[#, Last, 1]& /@ GatherBy[list, First]
</code></pre>
<blockquote>
<p>{{2, 0.2},{3, 0.4},{4, 0.9},{6, 0.3}}</p>
</blockquote>
|
4,325,373 | <blockquote>
<p>Using Algebraic approach, test the convexity of the set <span class="math-container">$$S=\{(x_1,x_2):x_2^2\geq8x_1\}$$</span></p>
</blockquote>
<p>Definition of convexity: <span class="math-container">$S \in \mathbb R^2$</span> is a convex set if <span class="math-container">$\forall \alpha \in \mathbb R, 0 \leq\alpha \leq 1$</span> and <span class="math-container">$\forall \vec x,\vec y \in S$</span> holds: <span class="math-container">$\alpha \vec x + (1 - \alpha)\vec y \in S$</span>.</p>
<p>Let <span class="math-container">$\vec x=(a,b)$</span> and <span class="math-container">$\vec y=(c,d)$</span> then the convex combination is <span class="math-container">$\alpha(a,b)+(1-\alpha)(c,d)=(\alpha(a-c)+c,\alpha(b-d)+d)$</span> and
<span class="math-container">$$
\begin{align}
b^2-8a&\geq0\\
d^2-8c&\geq0\qquad(A)
\end{align}
$$</span>
Now I need to show that,
<span class="math-container">$$
\begin{align}
&(\alpha(b-d)+d)^2-8(\alpha(a-c)+c)\\
&\alpha^2b^2-2\alpha^2bd+\alpha^2d^2 +d^2+2\alpha(b-d)d-\alpha(8a)+\alpha(8c)-8c\\
&\alpha^2b^2-2\alpha^2bd+\alpha^2d^2+(d^2-8c)+2\alpha(b-d)d-\alpha(8a)+\alpha(8c)
\end{align}
$$</span></p>
<p>I couldn't conclude anything from the above expression except for <span class="math-container">$d^2-8c\geq0$</span>. Any help will be appreciated.</p>
| tomos | 653,050 | <p>I'm going to ignore the minus sign, the values at 1, and the <span class="math-container">$<$</span> inequality signs. But otherwise, your first sum is
\[ \sum _{de|n}\sum _{c|d}f(c)f^{-1}(e)g(d/c)g^{-1}(n/de)=\sum _{cde|n}f(c)f^{-1}(e)g(d)g^{-1}(n/cde)=\sum _{cde|n}f(c)f^{-1}(e)g(n/ced)g^{-1}(d)=\sum _{cde|n}f(e)f^{-1}(c)g(n/ecd)g^{-1}(d)\]
whilst the second is
\[ \sum _{de|n}\sum _{c|d}f(e)f^{-1}(n/de)g(c)g^{-1}(d/c)=\sum _{cde|n}f(e)f^{-1}(n/cde)g(c)g^{-1}(d)=\sum _{cde|n}f(e)f^{-1}(c)g(n/dec)g^{-1}(d)\]
so both sums are equal.</p>
<p>The manipulations for the first equation array, if needed (second is similar): First equality - write the <span class="math-container">$d$</span>'s as multiples of <span class="math-container">$c$</span>. Second equality - replace <span class="math-container">$d$</span> by <span class="math-container">$(n/ce)/d$</span>. Third equality - just relabelling <span class="math-container">$e$</span> and <span class="math-container">$c$</span>.</p>
|
8,307 | <p>I have a program that gives the average distance as the output. When I tried to repeat finding the average distance 100 times using Table, it failed to generate the output. This is the program.</p>
<pre><code>Xarray = A @@@ Tuples[Range[0, 4], 3];
Table[M = RandomSample[Xarray, 7];
energies = RandomVariate[ExponentialDistribution[1.5], {7}];
f = {#, First@Pick[energies, M, #]} &;
list = Map[f, M]
c = Subsets[Range[Length@list], {2}];
d = Length@%
distanceBetween[{n_, m_}] :=
Norm[List @@ list[[n, 1]] - List @@ list[[m, 1]]]
Map[distanceBetween, c]
ans = % // N;
Total[%/d], {i, 100}]
</code></pre>
<p>The program is like this. I created an array of, let's say, 500 elements and assigned energies to it. Next I choose a coordinate from the list and find the distance from the other coordinates using the norm equation. Next I sum up the distances in the list and take the average of the distances. The problem comes when I try to do this process 100 times using the Table or Do command.</p>
| Verbeia | 8 | <p>I see that you are at least using the answers to your previous questions to improve your code, but I do think you need to spend a bit more time with the documentation so that you better understand what you are doing. It really doesn't make people keen to help when you come back with the same problem and ask for help at every step.</p>
<p>Once you clean up the code with semicolons, you don't get errors any more, but the result is something like a 100-length vector where each element looks like:</p>
<pre><code> 1/125 A[0, 0, 0] + 1/125 A[0, 0, 1] + 1/125 A[0, 0, 2] +
1/125 A[0, 0, 3] + 1/125 A[0, 0, 4] + 1/125 A[0, 1, 0] +
1/125 A[0, 1, 1] + 1/125 A[0, 1, 2] + 1/125 A[0, 1, 3] +
1/125 A[0, 1, 4] + 1/125 A[0, 2, 0] + <<103>> + 1/125 A[4, 2, 4] +
1/125 A[4, 3, 0] + 1/125 A[4, 3, 1] + 1/125 A[4, 3, 2] +
1/125 A[4, 3, 3] + 1/125 A[4, 3, 4] + 1/125 A[4, 4, 0] +
1/125 A[4, 4, 1] + 1/125 A[4, 4, 2] + 1/125 A[4, 4, 3] +
1/125 A[4, 4, 4]
</code></pre>
<p>I'm sure that's not what you intended. The use of <code>%</code> inside the <code>Table</code> (or <code>Module</code>) seems unnecessary and I think it is part of the problem.</p>
<p>If you compress the last three lines inside your <code>Table</code> to </p>
<pre><code>Total[N[Map[distanceBetween, c]] ]/d
</code></pre>
<p>you will get numerical data of the kind I think you are looking for.</p>
<p>There are some other problems with your code, e.g. the unnecessary manual calculation of mean instead of using <code>Mean</code> (so you can remove the other <code>%</code>). I also think that calculating the energies inside the <code>Table</code> means that if you select the same element in more than one of your <code>RandomSample</code>s, it will have a different energy each time. I am sure this is not what you intend. The problem here is not with your understanding of Mathematica, it is with your understanding of what you are trying to do. Try writing out step by step in pseudocode or math first, to test if you have made a conceptual error in your implementation.</p>
<p>I'll also point out that if you hadn't given the elements of <code>Xarray</code> a <code>Head</code> of <code>A</code> using <code>A @@@</code>, you wouldn't need the <code>List @@</code> code further down in the distance between function.</p>
<p>In any case if you redefine your <code>distanceBetween</code> function like this, you won't need to keep inefficiently <em>redefining</em> it in each iteration of the <code>Table</code>.</p>
<pre><code>distanceBetween[lst_, {n_, m_}] := Norm[List @@ lst[[n, 1]] - List @@ lst[[m, 1]]]
</code></pre>
<p>Alternatively, leave the elements of <code>Xarray</code> as lists not <code>A[1,2,3]</code> and so on, and you can just use the <code>EuclideanDistance</code> function instead of recoding it yourself.</p>
|
927,188 | <p>This question has been on my mind for a very long time, and I thought I'd finally ask it here. </p>
<p>When I was 6, my dad pulled me out of school. The classes were too easy; the professors, too dull. My father had been a man of philosophy his entire life (almost got a PhD in it) and regretted not having a more quantitative background. He wanted me to have a different life and taught me math accordingly. When I was 11, I taught myself trig. When I was 12, I started taking calculus at my local university. I continued on this track, and finally got to real analysis and abstract algebra at 15. I loved every math course I ever took and found myself breezing through all that was presented to me (the university was not Princeton after all). However, around this time, I came to the conclusion that math was not for me. I decided to try a different path.</p>
<p>Why, you might ask, did I do this? The answer was simple: I didn't believe I could be a great mathematician. While I thrived taking the courses, I never turned math into a lifestyle. I didn't come home and do complex questions on a white board. I didn't read about Euler in my spare time. I also never felt I had a great intuition into problems. Once you showed me how to solve a problem, I was golden. But start from scratch on my own? It seemed like a different story entirely. To make things worse, my sister, who was at Caltech at the time, would call home with stories of all these incredible undergrads who solved the mathematical mysteries of the universe as a hobby. Whenever I mentioned math as a career, she would always issue a strong warning: you're not like these kids who spend all their time doing math. Think about doing something else. </p>
<p>Over time, I came to agree with this statement. Coincidentally, I got rejected by MIT and Princeton to continue my undergraduate studies there. This crushed me at the time; my dream of studying math at one of the great institutions had ended. Instead, I ended up at Georgia Tech (not terrible by any means, just not what I had envisioned). Being at an engineering school, I thought I'd give aerospace a shot. It had lots of math, right? Not really, or at least not enough for my taste. I went into CS. This was much better, but still didn't feel quite right. At last, as a sophomore, I felt it was time to get back on track: I'm now doubling majoring in applied math and CS. </p>
<p>My question is, how do I know I'm not making a mistake? There seems to be so many people doing math competitions, research, independent studies, etc, while I just started to take some math courses again. What should I do to test myself and see if I can really make math a career? I apologize for the long and possibly quite subjective post. I'd just really like to hear from math people who know their stuff. Thanks a bunch in advance. </p>
| user164587 | 164,587 | <p>I don't think you can "know" that you aren't making a mistake. This applies to almost everything in life. What I would say is that if you want to do something, then do it. Life is too short to let worries paralyse you into doing nothing. Get into the course, focus your study on the parts that are most interesting, and see where you are at the end of the course. The worst that will happen is that you end up with a good degree and a good recreational mathematics habit (unless you drop out to follow your heart elsewhere).</p>
<p>I am absolutely certain that I am not made to push the boundaries of mathematics, but having a degree in the subject has opened many doors for me. And now that I'm spending my days at home, looking after my son, it gives me the ability to keep my brain active by studying alone at home.</p>
|
927,188 | <p>This question has been on my mind for a very long time, and I thought I'd finally ask it here. </p>
<p>When I was 6, my dad pulled me out of school. The classes were too easy; the professors, too dull. My father had been a man of philosophy his entire life (almost got a PhD in it) and regretted not having a more quantitative background. He wanted me to have a different life and taught me math accordingly. When I was 11, I taught myself trig. When I was 12, I started taking calculus at my local university. I continued on this track, and finally got to real analysis and abstract algebra at 15. I loved every math course I ever took and found myself breezing through all that was presented to me (the university was not Princeton after all). However, around this time, I came to the conclusion that math was not for me. I decided to try a different path.</p>
<p>Why, you might ask, did I do this? The answer was simple: I didn't believe I could be a great mathematician. While I thrived taking the courses, I never turned math into a lifestyle. I didn't come home and do complex questions on a white board. I didn't read about Euler in my spare time. I also never felt I had a great intuition into problems. Once you showed me how to solve a problem, I was golden. But start from scratch on my own? It seemed like a different story entirely. To make things worse, my sister, who was at Caltech at the time, would call home with stories of all these incredible undergrads who solved the mathematical mysteries of the universe as a hobby. Whenever I mentioned math as a career, she would always issue a strong warning: you're not like these kids who spend all their time doing math. Think about doing something else. </p>
<p>Over time, I came to agree with this statement. Coincidentally, I got rejected by MIT and Princeton to continue my undergraduate studies there. This crushed me at the time; my dream of studying math at one of the great institutions had ended. Instead, I ended up at Georgia Tech (not terrible by any means, just not what I had envisioned). Being at an engineering school, I thought I'd give aerospace a shot. It had lots of math, right? Not really, or at least not enough for my taste. I went into CS. This was much better, but still didn't feel quite right. At last, as a sophomore, I felt it was time to get back on track: I'm now double majoring in applied math and CS. </p>
<p>My question is, how do I know I'm not making a mistake? There seem to be so many people doing math competitions, research, independent studies, etc., while I've just started to take some math courses again. What should I do to test myself and see if I can really make math a career? I apologize for the long and possibly quite subjective post. I'd just really like to hear from math people who know their stuff. Thanks a bunch in advance. </p>
| tess | 175,323 | <p>As the mother of a high school junior taking all AP math/physics because he wants to be able to choose whatever path he determines is right for him, I wish more teachers were passionate about teaching. It would make a world of difference if a teacher would communicate HOW math and physics are present in our world, getting kids interested and curious by engaging them. The AP Physics teacher puts problems on the board and tells the students to solve them. How about showing a clip from an action movie and then discussing how physics is at work, or something similar? Stormwater problems, building design, and the like could all be used to illustrate the beauty of physics, and maybe lead to interest in innovation or solving problems.</p>
|
927,188 | <p>This question has been on my mind for a very long time, and I thought I'd finally ask it here. </p>
<p>When I was 6, my dad pulled me out of school. The classes were too easy; the professors, too dull. My father had been a man of philosophy his entire life (almost got a PhD in it) and regretted not having a more quantitative background. He wanted me to have a different life and taught me math accordingly. When I was 11, I taught myself trig. When I was 12, I started taking calculus at my local university. I continued on this track, and finally got to real analysis and abstract algebra at 15. I loved every math course I ever took and found myself breezing through all that was presented to me (the university was not Princeton after all). However, around this time, I came to the conclusion that math was not for me. I decided to try a different path.</p>
<p>Why, you might ask, did I do this? The answer was simple: I didn't believe I could be a great mathematician. While I thrived taking the courses, I never turned math into a lifestyle. I didn't come home and do complex questions on a white board. I didn't read about Euler in my spare time. I also never felt I had a great intuition into problems. Once you showed me how to solve a problem, I was golden. But start from scratch on my own? It seemed like a different story entirely. To make things worse, my sister, who was at Caltech at the time, would call home with stories of all these incredible undergrads who solved the mathematical mysteries of the universe as a hobby. Whenever I mentioned math as a career, she would always issue a strong warning: you're not like these kids who spend all their time doing math. Think about doing something else. </p>
<p>Over time, I came to agree with this statement. Coincidentally, I got rejected by MIT and Princeton to continue my undergraduate studies there. This crushed me at the time; my dream of studying math at one of the great institutions had ended. Instead, I ended up at Georgia Tech (not terrible by any means, just not what I had envisioned). Being at an engineering school, I thought I'd give aerospace a shot. It had lots of math, right? Not really, or at least not enough for my taste. I went into CS. This was much better, but still didn't feel quite right. At last, as a sophomore, I felt it was time to get back on track: I'm now double majoring in applied math and CS. </p>
<p>My question is, how do I know I'm not making a mistake? There seem to be so many people doing math competitions, research, independent studies, etc., while I've just started to take some math courses again. What should I do to test myself and see if I can really make math a career? I apologize for the long and possibly quite subjective post. I'd just really like to hear from math people who know their stuff. Thanks a bunch in advance. </p>
| Gahawar | 129,839 | <p>Your current situation is somewhat similar to what occurred during my adolescent years. I grew up in a rather unstable and meager lifestyle; I never had any opportunities to gain a good education. From what I can remember, I earned first place in a district mathematics competition during the fourth grade, though I did not enjoy mathematics at that time. Regardless, I began studying trigonometry when I was thirteen, having seen a sine wave somewhere. I was enthralled by trigonometry! I often spent hours reading on the subject and began to marvel at how intriguing it all was. Afterwards, I bought a book on the calculus from money that I had gathered; over the course of the next few months, I learned both single variable and multi-variable calculus. I never had any joys in the years before that, but studying the calculus made me smile for once. It was the only thing that kept me going, I believe.</p>
<p>But then one day in secondary school I realised that no one cared. I had learned more mathematics than was taught in the school curriculum, yet no one cared. I went to the teachers and the advisers about my situation, and while their intentions were pure, they could not help me. "Testing out of" the various mathematics courses cost a lot of money, which I did not have. Enrolling in courses at the local junior college required prerequisites and more money, which I did not have. It was at this point that I further realised that pursuing mathematics was a waste of time, as no one cared. In a sense, I began to not care.</p>
<p>It is here that our situations differ. You are uncertain as to your ability to compete with others in mathematics, while my predicament centered upon the fact that no one cared about my ability. As such, I must ask you one question: What do you see when you look upon any mathematics whatsoever? I used to see wonderful ideas and concepts, but I now only see mere symbols, nothing more. The spark inside me is gone, I am afraid, and so I will never pursue mathematics any more than evaluating an integral here and there for the intellectual challenge. If you look upon mathematics and see something grand, then I believe that you should pursue it regardless, even if you do not prove a famous theorem.</p>
<p>As for me, I decided to take the opposite road and did not study mathematics in a formal setting, such as a university. In great contrast to my previous character, I have found a fondness for the arts and writing. When I am not programming, I spend my time writing great stories of things that I will never do or see. I would not have it any other way.</p>
|
2,339,707 | <p>Suppose I have five bins into which I want to place 15 balls. The bins have capacities $2$, $2$, $3$, $3$, and $7.$ I place the balls one at a time in the bins, randomly and uniformly amongst the bins that are not full (so for example, if after placing four balls, both of the bins with capacity $2$ are already full, the next ball is placed with probability $1/3$ in each of the remaining three bins). </p>
<p>My question is if there is an efficient way to estimate the probability that the bin with capacity $7$ is full at the end of this process (it would be great if the technique generalizes in the obvious way).</p>
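<p>For concreteness, here is a brute-force Monte Carlo baseline against which any cleverer method could be compared (a minimal sketch, not the efficient technique I am asking for; the trial count is arbitrary):</p>
<pre><code>import random

def estimate(capacities=(2, 2, 3, 3, 7), n_balls=15, trials=200_000):
    """Monte Carlo estimate of P(the capacity-7 bin is full at the end)."""
    big = len(capacities) - 1                  # index of the capacity-7 bin
    hits = 0
    for _ in range(trials):
        load = [0] * len(capacities)
        for _ in range(n_balls):
            # choose uniformly among the bins that are not yet full
            open_bins = [i for i, c in enumerate(capacities) if load[i] < c]
            load[random.choice(open_bins)] += 1
        hits += load[big] == capacities[big]
    return hits / trials

print(estimate())
</code></pre>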
| amd | 265,466 | <p>Let’s untangle your notation a bit and call the eigenvector that you’re using $v$. If your approach were correct, then it should hold for all $n$. In particular, $p_0=A^0v=\lambda^0v=v$, so this clearly can’t work for an arbitrary initial distribution $p_0$.</p>
|
2,227,280 | <p>For every positive number there exists a corresponding negative number. Would that imply that the number of positive numbers is "equal" to the number of negative numbers? (Are they incomparable because they both approach infinity?)</p>
| Guillemus Callelus | 361,108 | <p>There are exactly as many.</p>
<p>For each positive number there is one and only one negative,</p>
<p>and for each negative number there is one and only one positive,</p>
<p>so both sets have the same cardinality.</p>
<p>There is a bijective function from the set of positive numbers to the set of negative numbers, namely $x\mapsto -x$.</p>
|
2,894,606 | <p>Please, could you help me with the question below.
Demonstrate that $O(\log n^k) = O(\log n)$:</p>
| Empy2 | 81,790 | <p>Rearrange your equation to
$$ B=f(B)=\sqrt[S]{\frac{B-1}R+1}$$</p>
<p>You could start with a rough estimate then apply the function $f(B)$ several times until the answer settles down.
$$\begin{split}
B_1&=2\\
B_2&=f(B_1)\\
B_3&=f(B_2)\\
B_4&=f(B_3)\\
B_5&=f(B_4)\\
&\hspace{0.55em}\vdots
\end{split}$$
You might check first that $S$ isn't exactly 1 or 2. There is no solution when $S=1$ and a simple solution when $S=2$.</p>
<p>EDIT</p>
<p>Sorry, Newton's method would be</p>
<p>$$B_2=B_1-\frac{RB_1^S-R-B_1+1}{RSB_1^{S-1}-1}$$
and repeat for $B_3$, and so on.</p>
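<p>A minimal sketch of the Newton iteration in code (the values of $R$ and $S$ and the starting guess are made up purely for illustration; note that $B=1$ is always a trivial root of this equation, so start near the root you actually want):</p>
<pre><code>def newton_B(R, S, B=5.0, tol=1e-12, max_iter=100):
    """Newton iteration for g(B) = R*B**S - R - B + 1 = 0."""
    for _ in range(max_iter):
        g = R * B**S - R - B + 1
        dg = R * S * B**(S - 1) - 1          # g'(B)
        B_next = B - g / dg
        if abs(B_next - B) < tol:
            return B_next
        B = B_next
    return B

print(newton_B(R=0.1, S=2.5))   # converges to the non-trivial root near 3.9
</code></pre>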
|
2,409,268 | <p><strong>Confirm that the identity $1+z+...+z^n=(1-z^{n+1})/(1-z)$ holds for every non-negative integer $n$ and every complex number $z$, save for $z=1$</strong></p>
<p>I have tried to prove this by induction but I am not sure that I am doing things right, for $ n = 1 $ we have $ (1-z ^ 2) / (1-z) = (1-z) (1+ z) / (1-z) = 1 + z $, then this holds for $ n = 1 $. Suppose now that it holds for $ n $ and see that it holds for $ n + 1 $,$1+z+...+z^n+z^{n+1}=(1-z^{n+1})/(1-z)+z^{n+1}=[(1-z^{n+1})+(1-z)(z^{n+1})]/(1-z)=(1-z^{n+2})/(1-z) $ then this is true for every non-negative integer $ n$. This is OK?</p>
| BAI | 448,487 | <p>My proof goes like this:
Let $S:=1+z+z^2+...+z^n$.
Then
$$zS=z+z^2+z^3+...+z^{n+1}$$
$$zS=S-1+z^{n+1}$$
$$(1-z)S=1-z^{n+1}$$
As $z\neq 1$, it gives the desired result.</p>
|
298,791 | <blockquote>
<p>If a ring $R$ is commutative, I don't understand why if $A, B \in R^{n \times n}$, $AB=1$ means that $BA=1$, i.e., $R^{n \times n}$ is Dedekind finite.</p>
</blockquote>
<p>Arguing with the determinant seems to be wrong; although $\det(AB)=\det(BA)=1$, it doesn't necessarily mean that $BA=1$.</p>
<blockquote>
<p>And is every left zero divisor also a right zero divisor? </p>
</blockquote>
| Theon Alexander | 165,460 | <p>Here's a slightly different argument.</p>
<p>The poster's initial argument was by using determinants, which as said above, implies that both $\det(A), \det(B) \in R^\times.$ Hence, both matrices have bilateral inverses, i.e. $\operatorname{adj}(A^T) \det(A)^{-1}$ and resp. for $B$ - recall that the formula
$$A\operatorname{adj}(A^T)= \operatorname{adj}(A^T)A= (\det A) I$$ is universal.</p>
<p>Now, for two maps (of sets) $f:S\to T$ and $g:T\to S$, if $f$ or $g$ is bijective and $fg=id_T$, then clearly $gf=id_S.$ This settles the first question.</p>
<hr>
<p>Zero-divisor question: Suppose that $AB=0$ for $A, B \neq 0$ square matrices; then $\det(A) \det(B)=0.$ Will there be a square matrix $C\neq 0$ such that $CA=0$?</p>
<p>If $\det(A)$ is a zero divisor (and by definition, non-zero), we're done, since for $0\neq b\in R$ satisfying $\det(A)b=0$, the matrix $C=\operatorname{adj}(A^T)b$ is by the above a zero-divisor on both sides!</p>
<p>The case I still haven't worked out is the following. If $\det(A)=0$ and $\operatorname{adj}(A^T)\neq 0$, then $C=\operatorname{adj}(A^T)$ satisfies $CA=AC=0$, but what if $\operatorname{adj}(A)=0$? </p>
<p>So, the case that remains (so far) is when $\operatorname{adj}(A)=0$, which is equivalent to the condition $\Lambda^{n-1}A=0$ (exterior product). We'll come back later.</p>
|
1,986,798 | <p>The way I solved the problem is to change the equation to $|x+2|=1-|y-3|$, and then square both sides. But I don't think it is the right way to solve the problem. I hope someone can either give me a hint or show me how to solve the problem.</p>
<blockquote>
<p>$|x+2|+|y-3|=1$ is an equation for a square. How many units are in the lengths of its diagonals?</p>
</blockquote>
| Not a grad student | 36,274 | <p>Yes, that is the Jacobian matrix. Given any differentiable function $f: \mathbb{R}^n \rightarrow \mathbb{R}^m$, the Jacobian matrix is an $m \times n$ matrix.</p>
|
132,266 | <p>I apologize if this question is trivial, but a couple of days of searching for necessary routines have led me here. </p>
<blockquote>
<p>Does there exist software to compute symmetric powers of Schur polynomials? </p>
</blockquote>
<p>I am seeking such software in the hopes of computing the characters of representations of the simple Lie algebra $A_n$, i.e., $S_\lambda(x_1, \dots, x_n)$ to use the notation of Fulton and Harris, and then applying $\mathrm{Sym}^k$ to the resulting Schur polynomial and writing the result as a sum of Schur polynomials corresponding to differing partitions $\mu$. That is, as an example, I would like to compute something like the following:
$$
\mathrm{Sym}^3(S_\lambda(x_1, \dots, x_4)) = \sum_\mu k_\mu S_\mu(x_1, \dots, x_4)
$$
Where $S_\lambda(x_1, \dots, x_4)$ denotes the character of the irreducible representation of $A_4$ with highest weight $\lambda$. I understand that Mathematica may compute symmetric polynomials, however I have not found any routines for applying $\mathrm{Sym}$ to these polynomials. Regards.</p>
| F. C. | 10,881 | <p>this could be done in sage:</p>
<pre><code>sage: B3 = WeylCharacterRing("B3", style="coroots")
sage: spin = B3(0,0,1)
sage: spin.symmetric_power(6)
B3(0,0,0) + B3(0,0,2) + B3(0,0,4) + B3(0,0,6)
sage: A3 = WeylCharacterRing("A3", style="coroots")
sage: rep = A3(0,0,1)
sage: rep.symmetric_power(6)
A3(0,0,6)
</code></pre>
|
132,266 | <p>I apologize if this question is trivial, but a couple of days of searching for necessary routines have led me here. </p>
<blockquote>
<p>Does there exist software to compute symmetric powers of Schur polynomials? </p>
</blockquote>
<p>I am seeking such software in the hopes of computing the characters of representations of the simple Lie algebra $A_n$, i.e., $S_\lambda(x_1, \dots, x_n)$ to use the notation of Fulton and Harris, and then applying $\mathrm{Sym}^k$ to the resulting Schur polynomial and writing the result as a sum of Schur polynomials corresponding to differing partitions $\mu$. That is, as an example, I would like to compute something like the following:
$$
\mathrm{Sym}^3(S_\lambda(x_1, \dots, x_4)) = \sum_\mu k_\mu S_\mu(x_1, \dots, x_4)
$$
Where $S_\lambda(x_1, \dots, x_4)$ denotes the character of the irreducible representation of $A_4$ with highest weight $\lambda$. I understand that Mathematica may compute symmetric polynomials, however I have not found any routines for applying $\mathrm{Sym}$ to these polynomials. Regards.</p>
| Steven Sam | 321 | <p>This can be done with LiE:
<a href="http://wwwmathlabo.univ-poitiers.fr/~maavl/LiE/" rel="noreferrer">http://wwwmathlabo.univ-poitiers.fr/~maavl/LiE/</a></p>
<p>(In fact it will compute the Schur functor of any irreducible representation.) There is a form interface so you can try LiE on the web: <a href="http://wwwmathlabo.univ-poitiers.fr/~maavl/LiE/form.html" rel="noreferrer">http://wwwmathlabo.univ-poitiers.fr/~maavl/LiE/form.html</a></p>
<p>Here is an example of calculating $Sym^3$ of $s_{2,1}$ (everything is written in fundamental weight notation, so X[1,1,0] below refers to the partition (2,1,0) = (1,0,0) + (1,1,0)):</p>
<p>Input: sym_tensor(3,X[1,1,0],A3)</p>
<p>Output: 1X[0,0,3] +1X[0,1,1] +1X[0,3,1] +1X[1,0,0] +1X[1,1,2] +1X[1,2,0] + 2X[2,0,1] +1X[2,2,1] +1X[3,0,2] +1X[3,1,0] +1X[3,3,0]</p>
<p>The only caveat is that LiE treats A3 as $SL_4$, so for instance, the partition (2,1,1,1) is the same as the partition (1,0,0,0) (because we have the identification $x_1x_2x_3x_4 = 1$).</p>
|
4,013,065 | <p>I need help interpreting the answer to a question about the basis and dimension of a subspace within linear algebra. I have a subspace W of <span class="math-container">$R^5$</span> that is spanned by the vectors:</p>
<p><span class="math-container">$${v_1}=\begin{pmatrix} 1 \\ 2 \\ 3 \\ -1 \\ 1 \end{pmatrix} , {v_2}=\begin{pmatrix} 0 \\ 1 \\ 1 \\ -1 \\ 2 \end{pmatrix} , {v_3}=\begin{pmatrix} -1 \\ -1 \\ -2 \\ 1 \\ 2 \end{pmatrix}
\text{and} \,{v_4}= \begin{pmatrix} 1 \\ 0 \\ 1 \\ 0 \\ -4 \end{pmatrix}$$</span></p>
<p>and I want to determine the dimension of W and also specify a basis of W.</p>
<p>To do this I put the vectors as columns in a matrix and simplified it using Gaussian elimination:</p>
<p><span class="math-container">$$\begin{pmatrix}1 & 0 & -1 & 1 \\ 2 & 1 & -1 & 0 \\ 3 & 1 & -2 & 1 \\ -1 & -1 & 1 & 0 \\ 1 & 2 & 2 & -4\end{pmatrix}\sim\begin{pmatrix}1 & 0 & 0 & 0 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0\end{pmatrix}.\\$$</span></p>
<p>Here I concluded that the dimension max can be equal to three as we have three rows of pivot elements, then I also concluded that you can express v4 = -v2-v3 (I have also tested this and it is true) and the two vectors v2 and v3 must then be linearly independent. But what I have a hard time interpreting is if this means that v2 and v3 are the basis of W (and not v1,v2 and v3 as my first thought was)? And if the dimension in that case becomes two and not three (as the maximum can be)?</p>
<p>Thanks in advance!</p>
| Bernard | 202,857 | <p>I would do it by transposing and performing row operations; the whole process is:
<span class="math-container">\begin{align}
&\begin{bmatrix}
1&2&3&-1&1\cr 0&1&1&-1&2\cr -1&-1&-2&1&2\cr 1&0&1&0&-4
\end{bmatrix}\xrightarrow[v_4\leftarrow v_4-v_1]{v_3\leftarrow v_3+v_1}
\begin{bmatrix}
1&2&3&-1&1\cr 0&1&1&-1&2\cr 0&1&1&0&3\cr 0&-2&-2&1&-5
\end{bmatrix}\\[1.5ex]
\xrightarrow[v_4\leftarrow v_4+v_2+v_3]{}
&\begin{bmatrix}
1&2&3&-1&1\cr 0&1&1&-1&2\cr 0&1&1&0&3\cr 0&0&0&0&0
\end{bmatrix}\xrightarrow{v_3\leftarrow v_3-v_2}
\begin{bmatrix}
1&2&3&-1&1\cr 0&1&1&-1&2\cr 0&0&0&1&1\cr 0&0&0&0&0
\end{bmatrix}
\end{align}</span>
Therefore, we obtain that the matrix has rank <span class="math-container">$3$</span>, and the vectors of the first three rows: <span class="math-container">$v_1, v_2$</span> and <span class="math-container">$v'_3=v_1-v_2+v_3$</span> are a basis of <span class="math-container">$W$</span>. As
<span class="math-container">$\operatorname{Span}(v1,v_2,v'_3)=\operatorname{Span}(v1,v_2,v_3)$</span>, there results that <span class="math-container">$\{v_1,v_2,v_3\}$</span> is also a basis of <span class="math-container">$W$</span>.</p>
|
986,412 | <p>Let $f(x) = \frac1{(1-x)}$.</p>
<p>Define the function $f^r$ to be $f^r(x) = f(f(f(...f(f(x)))))$.</p>
<p>Find $f^{653}(56)$.</p>
<p><strong>What I've done:</strong> </p>
<p>I started with r=1,2,3 and noticed the following pattern:
$$f^r(x)= \left\{
\begin{array}{ll}
\frac1{1-x}, & \text{when } r\equiv 1\pmod 3 \\
\frac{x-1}x, & \text{when } r\equiv 2\pmod 3 \\
x, & \text{when } r\equiv 0\pmod 3
\end{array}
\right. $$</p>
<p>As $653\equiv 2\pmod 3$, $f^{653}(56) = \frac{55}{56}$.</p>
<p>BUT how can I prove that I'm right? By induction? I don't know what to do then, when I go from $r$ to $r+1$. </p>
<p>Could you please share with me your reasoning by solving this problem?</p>
<p>PS: This problem is from the book "How to think like a mathematician" by Kevin Houston. </p>
| Vinícius Novelli | 148,344 | <p>You can just calculate the improper integral first:</p>
<p>$$
\int \frac{dx}{1-x^2}=\frac{1}{2}\int{\frac{dx}{1-x}}+\frac{1}{2}\int{\frac{dx}{1+x}} = -\frac{1}{2}\ln|1-x|+\frac{1}{2}\ln|1+x|+C
$$</p>
<p>which holds for $|x|\not= 1$. Given that in the interval $(0,\pi /2)$, $\sin$ or $\cos$ are never $1$ or $-1$, you're safe. Then just finish it off:
$$
f(\theta)=-\frac{1}{2}\ln|1-\cos{\theta}|+\frac{1}{2}\ln|1+\cos{\theta}|+\frac{1}{2}\ln|1-\sin{\theta}|-\frac{1}{2}\ln|1+\sin{\theta}| = \frac{1}{2}\ln{\frac{|(1+\cos\theta)(1-\sin\theta)|}{|(1-\cos\theta)(1+\sin\theta)|}}=\frac{1}{2}\ln{\frac{\sin^2\theta\cos^2\theta}{(1-\cos\theta)^2(1+\sin\theta)^2}}
$$</p>
<p>I mean, I think it's done, but maybe you can find a nice factorization.</p>
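<p>A numerical spot check of the factorization identity used in the last step (a small sketch; the sample point $\theta=0.7$ is arbitrary):</p>
<pre><code>import sympy as sp

t = sp.Symbol('theta')
lhs = (1 + sp.cos(t)) * (1 - sp.sin(t)) / ((1 - sp.cos(t)) * (1 + sp.sin(t)))
rhs = sp.sin(t)**2 * sp.cos(t)**2 / ((1 - sp.cos(t))**2 * (1 + sp.sin(t))**2)
print(sp.N((lhs - rhs).subs(t, 0.7)))   # ~0, as expected on (0, pi/2)
</code></pre>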
|
2,081,792 | <p>I learnt the derivation of the distance formula of two points in first quadrant I.e., $d = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}$ where it is easy to find the legs of the hypotenuse (distance between two points) since the first has no negative coordinates and only two axes ($x$ coordinate and $y$ coordinate). while finding the distance between two points from two different quadrants of a Cartesian plane where four axes exist ($x$,$x_1$,$y$, $y_1$ coordinates), the same formula applies for this problem also. But, the derivation of the formula is based only on the distance between two points in first quadrant alone. Can you please explain the DERIVATION of the formula for more than two quadrants? Please</p>
| Christopher.L | 347,503 | <p>All you have to realize is that 'crossing' an axis does not change the distance. Try first to just realize this in one dimension, i.e the distance between two points on the real line:</p>
<p>$d(x_1,x_2)=|x_1-x_2|=\sqrt{(x_1-x_2)^2}$.</p>
<p>Take one point $x_1$, on the negative side, and another $x_2$, on the positive, and calculate the distance between them, e.g.</p>
<p>$x_1=-3, x_2=5$: $d(-3,5)=|-3-5|=\sqrt{((-3)-5)^2}=\sqrt{(-8)^2}=\sqrt{64}=8$.</p>
<p>The squaring takes care of the sign, and the square root, in a way, transforms it back again. The same applies, of course, to the components of the $y$-axis. With this in mind, read the derivation of the two dimensional case, that you've already read, again.</p>
<p>Can you then perhaps see why it makes no difference in what quadrants the points are?</p>
|
2,242,846 | <p>I will try to prove the theorem in the title:</p>
<blockquote>
<p>Suppose S is closed, non-empty then if $b = \sup\{x: x \in S\}$ (least upper bound), $b \in S$.</p>
</blockquote>
<p>I also use the following <strong>Theorem</strong> which we proved in class: S is closed iff every Cauchy sequence in S converges to a point in S. </p>
<p>Either $b \in S$ and we are done, or $b \notin S$ and so $\forall \epsilon_n = \frac{1}{n}$ for $n \in \mathbb{N},$ $\exists x_n \in (b - \epsilon_n, b] \cap S$ because if not then $b - \epsilon_n$ would be the least upper bound. So, clearly $x_n \rightarrow b$ as $n \rightarrow \infty.$ Since $x_n$ converges then $x_n$ is Cauchy and in $S$, so it must converge to some point in $S$, lets say $x_o$, by the above theorem. However, limits are unique, so $b = x_o$, but $b \notin S$ and $x_o \in S$. Thus we have arrived at a contradiction and so $b \in S. $</p>
| caozi | 581,222 | <p>Here is my solution:</p>
<p>$1.~\exists X~\lnot p(X)\quad $ Premise</p>
<p>$2.~\quad \lnot p(c)\quad$ Existential Elimination:1, Witness Assumed: $[c]$</p>
<p>$3.~\qquad\forall X~p(X)\quad$ Assumption</p>
<p>$4.~\qquad p(c)\quad$ Universal Elimination:3</p>
<p>$5.~\quad \forall X~p(X) \implies p(c)\quad$ Implication Introduction 3-4</p>
<p>$6.~\qquad\forall X~p(X)\quad$ Assumption</p>
<p>$7.~\qquad\lnot p(c)\quad$ Reiteration:2</p>
<p>$8.~\quad\forall X p(X) \implies \lnot p(c)\quad$ Implication Introduction: 6-7</p>
<p>$9.~\quad\lnot \forall X~p(X)\quad$ Negation Introduction 5,8</p>
<p>$10.~\lnot \forall X~p(X)\quad$ Witness Eliminated 2-9</p>
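<p>The same inference, machine-checked (a Lean 4 sketch of steps 1–10 in term form):</p>
<pre><code>example {α : Type} (p : α → Prop) (h : ∃ x, ¬ p x) : ¬ ∀ x, p x :=
  fun hall => Exists.elim h (fun c hc => hc (hall c))
</code></pre>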
|
1,204,566 | <p>I tried asking this on StackOverflow and it was quickly closed for being too broad, so I come here to get the mathematical part nailed down, and then I can do the rest with no help, most likely.</p>
<p>From <a href="http://www.afjarvis.staff.shef.ac.uk/sudoku/sudgroup.html" rel="nofollow">this web page</a>, I learned that there are 5,472,730,538 essentially different solved sudoku grids.</p>
<p>Please, I beg you, do not respond unless you have read the entire webpage and have a decent understanding of it. You can think of a 9x9 sudoku as being represented by a string of 81 numbers, separated by commas.</p>
<p>I want to find a way to generate one sudoku board from each of the 5,472,730,538 groups, do some quick analysis on it, and move on to the next board, continuing until all 5.4 billion are analyzed. In this way, I will have analyzed all possible essentially different Sudoku boards. I am not familiar with <a href="http://www.gap-system.org/" rel="nofollow">GAP</a>.</p>
<p>So, I need someone, if they are willing, to help me bridge the knowledge gap I have, which is this: How do I go from <a href="http://www.afjarvis.staff.shef.ac.uk/sudoku/sudgroup.html" rel="nofollow">this web page</a> to actually finding (iterating through) each solution?</p>
<p>I do not expect a full detailed thorough answer on this. Instead, any post that I think is helpful to me in reaching my goal will be upvoted, no matter how short or how detailed.</p>
<hr>
<h2>EDIT 3-25-2015</h2>
<p>Thanks to Nick Gill's suggestion, I contacted Frazer Jarvis himself and here is his very helpful reply:</p>
<blockquote>
<p>Dear Jeremy,</p>
<p>The method we used simply evaluated the number of puzzles there had to
be, and didn't count them by listing them all, so it wasn't a
constructive proof in that sense. Nor can the proof be made to give a
list, as far as I can see. As you are probably aware, this work of
mine dates to 2005, and my active interest in Sudoku and related
mathematical problems probably ended soon after, although I followed
forums for a little while (my wife still likes doing them, but it has
been a long time since I did one!) - but I did read towards the end of
that time that someone had made a file of all of these. I don't
remember who it was, however. You may be able to find the forum
postings if you look around the internet. I'll have a quick look
around too.</p>
<p>OK - the forum that contained the discussions no longer exists, but
has been retitled, and is now at
<a href="http://forum.enjoysudoku.com/sudoku-the-puzzle-f5.html" rel="nofollow">http://forum.enjoysudoku.com/sudoku-the-puzzle-f5.html</a></p>
<p>Ah - here's the thread:
<a href="http://forum.enjoysudoku.com/catalog-of-all-essentially-different-grids-t6679.html" rel="nofollow">http://forum.enjoysudoku.com/catalog-of-all-essentially-different-grids-t6679.html</a>
- perhaps you could contact someone there?</p>
<p>Best wishes, Frazer</p>
</blockquote>
<p>He also added later:</p>
<blockquote>
<p>For what it's worth, the discussions about the number of different
grids is in the thread
<a href="http://forum.enjoysudoku.com/su-doku-s-maths-t44.html" rel="nofollow">http://forum.enjoysudoku.com/su-doku-s-maths-t44.html</a></p>
</blockquote>
| Nick Gill | 211,225 | <p>In response to the OP's request that I post an answer:</p>
<p>I believe that Russell & Jarvis have an explicit list of representatives for the 5.4 billion essentially different types of sudoko grid.</p>
<p>The credit for the answer should go to my office mate, Sian Jones.</p>
|
69,137 | <p>Is there any reference for gluing in the context of Morse homology on Hilbert manifolds?</p>
<p>Gluing is pretty standard in Morse homology for finite-dimensional manifolds. Unfortunately, in the infinite-dimensional case the sources I know avoid gluing. For proving that the Morse boundary operator squares to zero Abbondandolo-Majer use either an argument involving cellular filtrations or the graph transform method.</p>
<p>Unfortunately, in the context I want to consider, namely proving that some Morse-like theory is isomorphic to singular homology, both of these approaches do not work. (The latter approach works for showing that my Morse-like theory has a boundary operator that squares to zero, though.)</p>
<p>Reading through the finite-dimensional case ("Morse homology" of Schwarz) I realized that there are a lot of arguments which make it difficult to generalize the gluing procedure to cases where the target is not locally compact. For instance, in a lot of indirect arguments, the Rellich compact embedding theorem is used, which fails if the target is infinite-dimensional.</p>
<p>So, is there any instance where gluing in an infinite-dimensional context is proved? Are there certain obstacles not yet overcome?</p>
| Sesshoumaru | 20,893 | <p>I would say, take a look at the paper of Abbondandolo and Majer: <a href="http://arxiv.org/abs/math/0403552" rel="nofollow">http://arxiv.org/abs/math/0403552</a> or <a href="http://www.dm.unipi.it/~abbondandolo/preprints/montreal.pdf" rel="nofollow">http://www.dm.unipi.it/~abbondandolo/preprints/montreal.pdf</a>
The construction is neat and clear.</p>
|
1,561,563 | <p>Two circles $\Gamma_1,\Gamma_2$ have centers $O_1,O_2$. Let $\Gamma_1\cap\Gamma_2=A,B$, with $A\neq B$. An arbitrary line through $B$ intersects $\Gamma_1$ at $C$ and $\Gamma_2$ at $D$. The tangents to $\Gamma_1$ at $C$ and to $\Gamma_2$ at $D$ intersect at $M$. Let $N=AM\cap CD$. Let $l$ be a line through $N$ parallel to $CM$, and let $l\cap AC=K$. Prove that $BK$ is tangent to $\Gamma_2$.</p>
<hr>
<p>$\qquad\quad$ <img src="https://i.stack.imgur.com/mTLgz.png" alt=""></p>
<hr>
<p>Here is some progress I have:</p>
<p>We are looking to prove $\angle O_2BP=90^{\circ}$, and since $\angle O_2DP=90^{\circ}$, if we could prove $BP=PD$, we would be done by congruent triangles. So we are looking to prove $\dfrac{\sin \angle BDP}{\sin \angle DBP}=1$. Let $AM\cap BK=l$. We have $\angle BDP=\angle QBN$. By the law of sines on $\triangle DNM,\triangle BNQ$, we have $\sin \angle BDP=\sin \angle NDM=\dfrac{NM}{DM}\sin \angle DNM$ and $\sin\angle QBN=\dfrac {NQ}{BQ}\sin\angle BNQ$. Dividing, the sines cancel (since they are supplementary), and we are left with $\dfrac{NM\times BQ}{DM\times NQ}$, so it remains to prove $\dfrac{NM}{DM}=\dfrac{NQ}{BQ}$.</p>
<p>I'm not sure what to do from here. We would be done if we could prove $\triangle NBQ\sim\triangle NDM$, but this would imply $\angle QNB=\angle MND=90^{\circ}$, but from drawing multiple diagrams, it looks like this is not always the case. </p>
<p>As always, any ideas are appreciated!</p>
| Kay K. | 292,333 | <p>Finding indefinite integral:</p>
<p>\begin{align}
&x\mapsto\sin u\\
&I=\int \frac{\cos u}{\sin u + \cos u}du\\
&=\frac12\int \frac{\cos u - \sin u + \cos u + \sin u}{\sin u + \cos u}du=\frac12 \int\frac{\cos u-\sin u}{\sin u + \cos u}du+\frac u2\\
&=\frac12 \ln (\cos u + \sin u) + \frac u2+C\\
&=\frac12 \ln (\sqrt{1-x^2} + x) + \frac {\sin^{-1}x}{2}+C\\
\end{align}</p>
|
1,499,949 | <p>Prove that for all events $A,B$</p>
<p>$P(A\cap B)+P(A\cap \bar B)=P(A)$</p>
<p><strong>My attempt:</strong></p>
<p>Formula: $\color{blue}{P(A\cap B)=P(A)+P(B)-P(A\cup B)}$</p>
<p>$=\overbrace {P(A)+P(B)-P(A\cup B)}^{=P(A\cap B)}+\overbrace {P(A)+P(\bar B)-P(A\cup \bar B)}^{=P(A\cap \bar B)}$</p>
<p>$=2P(A)+\underbrace{P(B)+P(\bar B)}_{=1}-P(A\cup B)-P(A\cup \bar B)$</p>
<p>$=1+2P(A)-P(A\cup B)-P(A\cup \bar B)$</p>
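<p>(One way to finish from here, filling in the missing step: since $(A\cup B)\cup(A\cup \bar B)=\Omega$ and $(A\cup B)\cap(A\cup \bar B)=A$, inclusion-exclusion gives $P(A\cup B)+P(A\cup \bar B)=P(\Omega)+P(A)=1+P(A)$, so the last line becomes $1+2P(A)-(1+P(A))=P(A)$, as required.)</p>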
| Eric Wofsey | 86,856 | <p>There is no universal convention about the meaning of these notations in general. When $x$ is nonnegative, it is fairly common to say that both $x^{1/n}$ and $\sqrt[n]{x}$ refer to the unique nonnegative $n$th root of $x$. Some authors may use other conventions (such as the one mentioned in kilimanjaro's answer, in which $\sqrt[n]{x}$ refers to the nonnegative root but $x^{1/n}$ could refer to any root), but these are not universally understood and should not be used without explanation. In particular, when $x$ is not a nonnegative real number, you should never use either notation without specifying which $n$th root you are referring to (or saying that you don't care which $n$th root is picked). In general, in the absence of a specifically stated convention, I would assume that $\sqrt[n]{x}$ and $x^{1/n}$ are synonymous.</p>
|
83,512 | <p>Question: (From an Introduction to Convex Polytopes)</p>
<p>Let $(x_{1},...,x_{n})$ be an $n$-family of points from $\mathbb{R}^d$, where $x_{i} = (\alpha_{1i},...,\alpha_{di})$, and $\bar{x_{i}} =(1,\alpha_{1i},...,\alpha_{di})$, where $i=1,...,n$. Show that the $n$-family $(x_{1},...,x_{n})$ is affinely independent if and only if the $n$-family $(\bar{x_{1}},...,\bar{x_{n}})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.</p>
<p>-</p>
<p>Here is what I have so far, it is mostly just writing out definitions, if you can give me some hints towards how I can start the problem that would be great.</p>
<p>$(\Rightarrow)$ Assume that for $x_{i} = (\alpha_{1i},...,\alpha_{di})$, the $n$-family $(x_{1},...,x_{n})$ is affinely independent. Then, a linear combination $\lambda_{1}x_{1} + ... + \lambda_{n}x_{n} = 0$ can only equal the zero vector when $\lambda_{1} + ... + \lambda_{n} = 0$. An equivalent characterization of affine independence is that the $(n-1)$-families $(x_{1}-x_{i},...,x_{i-1}-x_{i},x_{i+1}-x_{i},...,x_{n}-x_{i})$ are linearly independent. We want to prove that for $\bar{x_{i}}=(1,\alpha_{1i},...,\alpha_{di})$, the $n$-family $(\bar{x}_{1},...,\bar{x}_{n})$ of vectors from $\mathbb{R}^{d+1}$ is linearly independent.</p>
| Alex Becker | 8,173 | <p>($\Rightarrow$): Suppose $(\bar{x_1},\ldots,\bar{x_n})$ is linearly dependent, so we have $$\left(\sum\limits_{i=1}^n c_i,\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = \sum\limits_{i=1}^nc_i\bar{x_i} = 0$$
for some set of coefficients $c_i\in\mathbb{R}$, thus
$$\sum\limits_{i=1}^nc_ix_i = \left(\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = 0\text{ and }\sum\limits_{i=1}^n c_i = 0$$
so $(x_1,\ldots,x_n)$ is affinely dependent. Hence if $(x_1,\ldots,x_n)$ is affinely independent, $(\bar{x_1},\ldots,\bar{x_n})$ must be linearly independent.</p>
<p>($\Leftarrow$): Suppose $(x_1,\ldots,x_n)$ is affinely dependent, so we have $$\sum\limits_{i=1}^nc_ix_i = 0\text{ and }\sum\limits_{i=1}^nc_i=0$$ for some set of coefficients $c_i\in\mathbb{R}$. Then $$\sum\limits_{i=1}^nc_i\bar{x_i} = \left(\sum\limits_{i=1}^n c_i,\sum\limits_{i=1}^n c_i\alpha_{1i},\ldots,\sum\limits_{i=1}^n c_i\alpha_{di}\right) = (0,0,\ldots,0) = 0$$ so $(\bar{x_1},\ldots,\bar{x_n})$ is linearly dependent. Hence if $(\bar{x_1},\ldots,\bar{x_n})$ is linearly independent, $(x_1,\ldots,x_n)$ must be affinely independent.</p>
|
596,005 | <p>Show that $f:\mathbb{R}^2\to\mathbb{R}$, $f \in C^{2}$ satisfies the equation
$$\frac{\partial^2 f}{\partial x^2} - \frac{\partial^2 f}{\partial y^2} = 0$$
for all points $(x,y) \in \mathbb{R}^2$ if and only if for all $(x,y)\in \mathbb{R}^2$ and $t \in \mathbb{R}$ we have:
$$f(x, y + 2t) + f(x, y) = f(x + t,y + t) + f(x - t, y +t).$$</p>
<p><strong>Note</strong>. In such case, $f$ is said to satisfy the <em>parallelogram's law</em>.</p>
| alexjo | 103,399 | <p>The PDE $f_{xx}-f_{yy}=0$ is the <a href="http://mathworld.wolfram.com/WaveEquation1-Dimensional.html" rel="nofollow">wave equation</a> with general solution
$$
f(x,y)=F(x-y)+G(x+y)
$$
where $F$ and $G$ are any functions.
For all $t\in \Bbb R$, the function $f(x,y)=F(x-y)+G(x+y)$ satisfies $$f(x, y + 2t) + f(x, y) = f(x + t,y + t) + f(x - t, y +t)$$
because
$$\small
\underbrace{F(x-y-2t)+G(x+y+2t)}_{f(x, y + 2t)}+\underbrace{F(x-y)+G(x+y)}_{f(x,y)}=\underbrace{F(x+t-y-t)+G(x+t+y+t)}_{f(x + t,y + t)}+\underbrace{F(x-t-y-t)+G(x-t+y+t)}_{f(x - t, y +t)}
$$
that is
$$\small
\underbrace{\color{red}{F(x-y-2t)}+\color{blue}{G(x+y+2t)}}_{f(x, y + 2t)}+\underbrace{\color{green}{F(x-y)}+G(x+y)}_{f(x,y)}=\underbrace{\color{green}{F(x-y)}+\color{blue}{G(x+y+2t)}}_{f(x + t,y + t)}+\underbrace{\color{red}{F(x-y-2t)}+G(x+y)}_{f(x - t, y +t)}
$$</p>
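<p>A quick symbolic confirmation of this cancellation (a sympy sketch, with $F$ and $G$ left as arbitrary functions):</p>
<pre><code>import sympy as sp

x, y, t = sp.symbols('x y t')
F, G = sp.Function('F'), sp.Function('G')
f = lambda u, v: F(u - v) + G(u + v)          # the general solution
expr = f(x, y + 2*t) + f(x, y) - f(x + t, y + t) - f(x - t, y + t)
print(sp.simplify(expr))   # 0, i.e. the parallelogram law holds
</code></pre>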
|
537,228 | <p>I know some things about measures/probabilities and I know some things about categories. Recently I realized that until now I have never encountered such a thing as a category of measure spaces. It seems quite likely to me that something like that can be constructed. I am an amateur, however, and my scope is small. I have two questions:</p>
<blockquote>
<p>1 Is there indeed material of this sort and can you tell me about it? The whereabouts for instance.</p>
<p>2 Is there a reason for the fact that until now I have not found anything of the sort? Is it indeed rare for some reason?</p>
</blockquote>
| user43208 | 43,208 | <p>I'm just going to record some references that you might want to take a look at. They might be a little too heavy on the category theory for your taste, but they also might provide further places to look. </p>
<ul>
<li><p>$n$-Category Café post: <a href="http://golem.ph.utexas.edu/category/2007/02/category_theoretic_probability.html" rel="noreferrer">Category Theoretic Probability Theory</a> </p></li>
<li><p>Related in content, the <a href="http://ncatlab.org/nlab/show/probability+theory" rel="noreferrer">nLab article</a> on probability theory; see especially the section "Probability theory from the nPOV" </p></li>
<li><p>The <a href="http://ncatlab.org/nlab/show/measurable+space#related_concepts_19" rel="noreferrer">nLab article</a> on measurable spaces. See especially the connection with <a href="http://ncatlab.org/nlab/show/von+Neumann+algebra#RelationToMeasureSpaces" rel="noreferrer">von Neumann algebras</a>, where it is localizable measurable spaces which are the pertinent concept for the appropriate Gelfand-Naimark duality, and see also references to posts by Dmitri Pavlov at MathOverflow. </p></li>
</ul>
|
3,988,808 | <p>I recently got into set theory and I was wondering: what is the cardinality of the set of all finite sequences of natural numbers?
I know that it is $N$ for the natural numbers and $2^N$ for the real numbers, but how can I prove it?</p>
| user247327 | 247,327 | <p>Every finite sequence of digits can be thought of as the digits in a terminating decimal, a subset of the rational numbers; for sequences of arbitrary natural numbers one also needs a separator between the terms (see the base-11 trick in the other answer), but the conclusion is the same: the set of all finite sequences of natural numbers is countably infinite.</p>
|
3,988,808 | <p>I recently got into set theory and I was wondering: what is the cardinality of the set of all finite sequences of natural numbers?
I know that it is $N$ for the natural numbers and $2^N$ for the real numbers, but how can I prove it?</p>
| Gribouillis | 398,505 | <p>Take a finite sequence of natural numbers such as <span class="math-container">$764\ 32\ 87\ 12\ 922$</span>. Write them in base <span class="math-container">$10$</span> and replace the space between the numbers with the letter 'a', giving <span class="math-container">$764a32a87a12a922$</span>. Interpret this as a number written in base <span class="math-container">$11$</span>. This correspondence is an injective map from the set of finite sequences of natural numbers into <span class="math-container">${\mathbb N}$</span>. Hence the set of such sequences is countable.</p>
<p>You can even make this correspondence a bijection if you replace the potential sequence <span class="math-container">$0a$</span> at the beginning of a number by <span class="math-container">$a$</span>.</p>
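<p>A minimal sketch of this encoding in code (the function name is hypothetical, purely for illustration):</p>
<pre><code>def encode(seq):
    # join the decimal digit strings with the base-11 digit 'a' (= ten)
    return int("a".join(str(n) for n in seq), 11)

print(encode((764, 32, 87, 12, 922)))          # one natural number
print(encode((764, 32)) != encode((7, 6432)))  # True: the separator keeps it injective
</code></pre>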
|
1,085,491 | <p>Prove that the following number is an integer:
$$\left( \dfrac{76}{\dfrac{1}{\sqrt[\large{3}]{77}-\sqrt[\large{3}]{75}}-\sqrt[\large{3}]{5775}}+\dfrac{1}{\dfrac{76}{\sqrt[\large{3}]{77}+\sqrt[\large{3}]{75}}+\sqrt[\large{3}]{5775}}\right)^{\large{3}}$$</p>
<p>How can I prove it?</p>
| Peđa | 15,660 | <p><strong>Hint :</strong></p>
<p>$\sqrt[3]{75}=a$</p>
<p>$\sqrt[3]{77}=b$</p>
<p>$\sqrt[3]{5775}=ab$</p>
<p>$(a^3+b^3)/2=76$</p>
<p>Next , try to simplify expression .</p>
|
1,085,491 | <p>Prove that the following number is an integer:
$$\left( \dfrac{76}{\dfrac{1}{\sqrt[\large{3}]{77}-\sqrt[\large{3}]{75}}-\sqrt[\large{3}]{5775}}+\dfrac{1}{\dfrac{76}{\sqrt[\large{3}]{77}+\sqrt[\large{3}]{75}}+\sqrt[\large{3}]{5775}}\right)^{\large{3}}$$</p>
<p>How can I prove it?</p>
| Nick Matteo | 59,435 | <p>Following the simplifications suggested by MathBot, we have:
$$\left( \dfrac{76}{\dfrac{1}{b-a}-ab}+\dfrac{1}{\dfrac{76}{b+a}+ab}\right)^{\large{3}}$$
Let's just take the part inside the parentheses, and put it over a common denominator.
$$\dfrac{\dfrac{76^2}{b+a} + 76 ab + \dfrac{1}{b-a} - ab}{\left(\dfrac{1}{b-a}-ab\right)\left(\dfrac{76}{b+a}+ab\right)}$$
Expand the denominator:
$$\dfrac{\dfrac{76^2}{b+a} + 75 ab + \dfrac{1}{b-a}}{\dfrac{76}{b^2-a^2}+\dfrac{ab}{b-a}- \dfrac{76ab}{b+a}-a^2b^2}$$
Put the numerator and denominator on a common denominator:
$$\dfrac{\dfrac{76^2b - 76^2 a + 75 a b^3 - 75 a^3 b + b + a}{b^2-a^2}}{\dfrac{76 + ab^2 + a^2 b - 76 ab^2 + 76a^2 b - a^2 b^4 + a^4 b^2}{b^2-a^2}}$$
Simplify:
$$\dfrac{76^2b - 76^2 a + 75 a b^3 - 75 a^3 b + b + a}{76 - 75 ab^2 + 77a^2 b - a^2 b^4 + a^4 b^2}$$
Remember that $a^3 = 75$ and $b^3 = 77$:
$$
\dfrac{(76^2 - 75^2 + 1)b + (75 \cdot 77 - 76^2 + 1) a}{76 - 75 ab^2 + 77a^2 b - 77 a^2 b + 75 a b^2}
\ = \ \dfrac{152b + 0 a}{76}
\ = \ 2b.$$
Remember we need to cube the whole thing, so the answer is
$(2b)^3 = 8 b^3 = 8\cdot 77$, or $616$.</p>
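<p>A numerical double check of the whole simplification (a sympy sketch):</p>
<pre><code>import sympy as sp

a, b = sp.cbrt(75), sp.cbrt(77)
expr = (76 / (1 / (b - a) - a*b) + 1 / (76 / (b + a) + a*b))**3
print(sp.N(expr, 30))   # 616.000... to 30 digits
</code></pre>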
|
2,203,988 | <p>I'm reading the book <i>Heat Transfer</i> by J.P. Holman. On the chapter of unsteady-state conduction, page 140, the author remarks:</p>
<blockquote>
<p>The final series solution is therefore:
$${\theta(x,t) \over \theta_i} =
{4\over \pi} \sum^{\infty}_{n=1} {1\over n} e^{-\left({n\pi/L}\right)^2\alpha \,t}\sin{n\pi x \over L}$$
We note, of course, that at $t=0$ the series on the right side of the Equation must converge to unity for all values of x.</p>
</blockquote>
<p>In this equation $0 < x < L$, and $\alpha$ is a finite constant. My question is, how can I prove that</p>
<p>$${4 \over \pi}\sum^{\infty}_{\substack{n=1}} {1\over n} \sin{n\pi x \over L} = 1$$</p>
<p>Additional information: The solution presented above solves the PDE:
$${\partial^2 \theta(x,t) \over \partial x^2} = {1\over \alpha}{\partial \theta(x,t) \over \partial t} $$ with initial and boundary conditions:
\begin{align}
\theta(x,0) &= \theta_i \qquad &0\leq x \leq L\\
\theta(0,t) &=0 \qquad & t > 0 \\
\theta(L,t) &=0 \qquad & t > 0
\end{align}</p>
| fleablood | 280,126 | <p>If you're like me the problem is finding a strategy. </p>
<p>And the strategy that figuring each ring has $3$ choices of fingers so there are $3^8$ ways to choose fingers for each ring, is a dead end as order matters and we can't straightforwardly multiply by any choice or permutation value for each finger as we don't know how many rings are on each finger.</p>
<p>We can choose to say each finger has $aPb = \frac {a!}{(a-b)!}$ where $a$ are the number of rings left, and $b$ are the number of rings on the finger:</p>
<p>Then the answer is $\sum_{k= 8,-1}^0 (8 P k)\sum_{j=0}^k(k P j)(k-j P k-j)$. Which seems intimidating but: $(k P j)(k-j P k-j) = \frac {k!}{(k-j)!}(k-j)! = k!$ and so $\sum_{j=0}^k (k P j)(k-j P k-j) = \sum_{j=0}^k k! = (k+1)k! = (k+1)!$ and $\sum_{k= 8,-1}^0 (8 P k)\sum_{j=0}^k(k P j)(k-j P k-j) = \sum_{k = 0}^8 \frac{8!}{k!}*(k+1)! = 8! \sum_0^8 (k+1) = 8!\sum_{i=1}^9 i = 8!\frac{9*10}2 = 8!*45$.</p>
<p>But simpler way of thinking of it is that if all the rings were identical and there were $N$ ways to place $8$ things on $3$ fingers. And there are $8!$ ways to arrange the 8 rings. There would be $8!N$ ways to arrange 8 different rings. </p>
<p>There are two ways to solve $N$. You may put $a$ = zero to $8$ rings on finger $1$ and you may put $b$ = 0 to $8-a$ on finger 2 and all the rest on finger 3. That is $\sum_{a=0}^8\sum_{b=0}^{8-a}1 = \sum_{a=0}^8 9-a = \sum_{a=8,-1}^0 a+1 = \sum_{i=1}^9 i = \frac {9*10}2 = 45$.</p>
<p>Or: you need to divide $8$ onto three fingers. Put them on one after the other. There are nine points in time between $0$ and $8$ that you can put in a "go to next finger marker". Among the $0$ to $8$ rings and the marker there are 10 places to put a second marker. There are ${10 \choose 2} =\frac {10!}{2!8!} = \frac {10\cdot 9}{2} = 45$.</p>
<p>Either way there are $45*8!$ ways to do this.</p>
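<p>A brute-force check of this count for small numbers of rings (a sketch; it tests the general pattern $n!\binom{n+2}{2}$):</p>
<pre><code>from itertools import product
from math import factorial, comb

def brute(n):
    total = 0
    for assign in product(range(3), repeat=n):    # which finger each ring goes on
        orders = 1
        for f in range(3):
            orders *= factorial(assign.count(f))  # vertical orders on finger f
        total += orders
    return total

for n in range(1, 7):
    print(n, brute(n), factorial(n) * comb(n + 2, 2))   # the two counts agree
</code></pre>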
<p>==== old answer which was stream of thought with mistakes along the way below === it could be illuminating for thought process... or it could be frustrating as it takes a while to get the right answer =====</p>
<p>If order doesn't matter, then each ring can go on any one of three fingers, so $3^8$ (not $8^3$) is correct.</p>
<p>But if order on the fingers matters (the emerald on the middle finger under the gold is different than the emerald on the middle finger over the gold) it's a different question.</p>
<p>One way: Arrange the rings in order that you will put them on. There are $8*7*6= \frac {8!}{(8-3)!}$ (is that what $8 P 3$ defined to be? I think so.) ways to arrange . Then each ring (in order) has a choice of three fingers to be placed on. So the answer is $8P3*3^8$.</p>
<p>==== Oh, F###; that's over counting ====</p>
<p>If diamond ring is 1 and we choose finger one and emerald is 2 and we choose finger two is the same as emerald is 1 and we choose finger two and diamond is 2 and we choose finger one.</p>
<p>Oh, well, I'm going to leave this up as food for thought. But... it is a wrong answer.</p>
<p>=====</p>
<p>My first thought was the hardest. You can choose $a$ rings for the first finger and $b$ rings for the second and $8 - a -b$ for the third and so</p>
<p>$\sum_{a= 0}^8 (8Pa)\sum_{b=0}^{8-b}([8-b]Pb)*([8-b-a]!)$</p>
<p>Logic tells me that sum must add up to $8P3*3^8$ (EDIT: It doesn't) but ... I'd hate to actually work it out (EDIT: but I may have to... it seems like the correct answer still.).</p>
<p>(Actually my <em>very</em> first thought was order didn't matter and rings were all the same, so it'd be $\sum_{a=0}^8 \sum_{b=0}^{8-a} 1 = \sum_{a=0}^8 9-a = 9^2 - \frac{8\cdot 9}{2}= 45$).</p>
<p>====</p>
<p>Here's a second way. We are going to put the rings on in order: all the rings on the first finger first, then the rings on the second, and then the third.</p>
<p>There are $8!$ ways to arrange the rings and $\sum_{a=0}^8 \sum_{b=0}^{8-a} 1 = 45$ ways to place the two "breaks" where we stop putting rings on one finger and start putting them on the next.</p>
<p>So $8!*45$. Is that right? </p>
<p>Anyone with time on his/her hands want to see if $\sum_{a= 0}^8 (8Pa)\sum_{b=0}^{8-b}([8-b]Pb)*([8-b-a]!) = 8!*45=8! {10 \choose 2}$?</p>
|
3,125,093 | <p>Let us recall the conditions to apply L'Hôpital's Rule:</p>
<p>Let us suppose:</p>
<p><span class="math-container">$f(x)$</span> and <span class="math-container">$g(x)$</span> are real and differentiable for all <span class="math-container">$x\in (a,b)$</span> </p>
<p>1-) <span class="math-container">$ \lim_{x\to c}{f(x)} = \lim_{x\to c}g(x) = 0$</span></p>
<p>2-) <span class="math-container">$g'(x)\neq 0$</span> on some deleted neighborhood of <span class="math-container">$c$</span>.</p>
<p>3-) <span class="math-container">$\lim_{x\to c}{\frac{f'(x)}{g'(x)}} = L$</span>, then</p>
<p>4-) <span class="math-container">$\lim_{x\to c}{\frac{f(x)}{g(x)}}=L$</span>, thus we can write:</p>
<p><span class="math-container">$$ \lim_{x\to c}{\frac{f(x)}{g(x)}}=\lim_{x\to c}{\frac{f'(x)}{g'(x)}} = L $$</span></p>
<p><strong>Note 1:</strong> I have written all the above information just as a reminder of L'Hôpital's Rule. It is not directly related to my question. If I wrote something incorrect, you can correct it inside the question. </p>
<p><span class="math-container">$$-----------$$</span>
Anyway, let us suppose that the following example satisfies the conditions to apply L'Hôpital's Rule stated above. If we start:</p>
<p><span class="math-container">$\lim_{u\to \infty}\int_0^u F(t,a)dt=0$</span> <span class="math-container">$\,\,\,$</span> and<span class="math-container">$\,\,\,\,\,$</span> <span class="math-container">$\lim_{u\to \infty}\int_0^u G(t,a)dt=0$</span> </p>
<p><span class="math-container">$$\lim_{u \to \infty} \frac{ \int_0^u F(t,a) \, dt}{\int_0^u G(t,a) \, dt} $$</span> </p>
<p>THE QUESTION. <strong>Before</strong> applying L'Hôpital's Rule with respect to the parameter <span class="math-container">$u$</span> to the above equation, do we also need <strong>to prove</strong> the following? </p>
<p>There exists a number <span class="math-container">$M > 0$</span> such that:</p>
<p>the denominator <span class="math-container">$\int_0^u G(t,a)dt$</span> does <strong>not</strong> equal zero for any <span class="math-container">$u > M$</span>.</p>
<p><strong>Note 2:</strong> Please be aware that I am <strong>not asking</strong> anything about <span class="math-container">$ \frac{d}{du} {\int_0^u G(t,a) \, dt}$</span> </p>
| Somos | 438,089 | <p>For this particular group presentation there is a simple way to use cancellation to identify it.
First define <span class="math-container">$\,u := ii = jj = kk = ijk\,$</span> which commutes with <span class="math-container">$\,i,j,k.\,$</span> Now <span class="math-container">$\,(ij)k = u = kk\,$</span> and using cancellation
<span class="math-container">$\,ij=k.\,$</span> Similarly, <span class="math-container">$\,i(jk) = u = ii\,$</span> and <span class="math-container">$\,jk=i.\,$</span>
Next, <span class="math-container">$\,(ki)j = k(ij) = kk = u = jj\,$</span> and using cancellation
<span class="math-container">$\,ki=j.\,$</span> Next, <span class="math-container">$\, uk = (jj)k = j(jk) = ji.\,$</span> Similarly, we get <span class="math-container">$\,ui = kj, \, uj = ik.\,$</span> Finally, <span class="math-container">$\,uuk = uji = iiji = iki = ij = k,\,$</span> and thus <span class="math-container">$uu = 1.$</span> The group has eight elements <span class="math-container">$\,\{1,i,j,k,u,ui,uj,uk\}\,$</span> and isomorphic to the quaternion group.</p>
|
293,026 | <p>The question is to show that $A\sin(x + B)$ can be written as $a\sin x + b\cos x$ for suitable a and b.</p>
<p>Also, could somebody please show me how $f(x)=A\sin(x+B)$ satisfies $f + f ''=0$?</p>
| Ross Millikan | 1,827 | <p>Given $f(x)=\sin(x+B)$ the chain rule gives $f'(x)=(x+B)'\cos(x+B)=\cos (x+B)$. Then another derivative gives $f''(x)=-\sin (x+B)$</p>
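<p>For the first part of the question, the angle-addition formula supplies the step left implicit above: $$A\sin(x+B)=\underbrace{A\cos B}_{a}\,\sin x+\underbrace{A\sin B}_{b}\,\cos x,$$ and combining the two derivatives, $f(x)+f''(x)=\sin(x+B)-\sin(x+B)=0$.</p>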
|
2,450,007 | <p>Show that if $x\in Q^p$, then there exists $-x\in Q^p$ where $$Q^p=\{a_{-l}p^{-l}+a_{-l+1}p^{-l+1}+...|l\in Z,a_i\in\{0,1,...,p-1\}\}$$ and p is a prime number.</p>
<p>Actually I don't quite understand p-adic numbers and how addition and multiplication work in this number system. For this question, I think I need to find a $y\in Q^p$ such that $x+y=0$, but I don't know how to start with this question.</p>
| Henno Brandsma | 4,280 | <p>In fact almost any uniform space is an example: a Tychonoff (so uniformisable) space $X$ is called "almost compact" if there is a unique uniformity that induces its topology. It turns out that these are the spaces $X$ where $\beta(X)\setminus X$ has at most one point. This includes the compact (Hausdorff) spaces, and also spaces like $\omega_1$. So e.g. $\mathbb{R}$ and $\mathbb{N}$ have plenty of different uniformities.</p>
|
656,560 | <p>I'm trying to get a solution for:</p>
<p>$4^{2x+1}-3^{3x+1}=4^{2x+3}-3^{3x+2}$</p>
<p>My main problem is that I don't know how to combine these powers!</p>
<p>I've also thought about another function that would bring me the same difficulties:</p>
<p>$6^x=36*9.75^{x-2}$</p>
<p>What am I supposed to do? </p>
| Adi Dani | 12,848 | <p>$$4^{2x+1}-3^{3x+1}=4^{2x+3}-3^{3x+2}$$
$$4^{2x+1}-4^{2x+3}=3^{3x+1}-3^{3x+2}$$
$$4\cdot4^{2x}-4^34^{2x}=3\cdot3^{3x}-3^23^{3x}$$
$$60\cdot4^{2x}=6\cdot3^{3x}$$
$$10\cdot4^{2x}=3^{3x}$$
$$10\cdot16^{x}=27^{x}$$
$$10=(27/16)^{x}$$
$$\log_{10} 10=\log_{10} (27/16)^{x}$$
$$1=x\log_{10}(27/16)$$
$$x=\frac{1}{\log_{10}27-\log_{10}16}$$</p>
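<p>A quick numerical sanity check of this value of $x$ (a sketch):</p>
<pre><code>from math import log10

x = 1 / (log10(27) - log10(16))
print(x)                            # ~4.3996
print(4**(2*x + 1) - 3**(3*x + 1))  # both sides of the original
print(4**(2*x + 3) - 3**(3*x + 2))  # equation agree (~ -5.16e6)
</code></pre>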
|
1,327,644 | <p>Using EM summation formula estimate
$$
\sum_{k=1}^n \sqrt k
$$</p>
<p>up to the term involving $\frac{1}{\sqrt n}$</p>
<p>My attempt is
$$
\sum_{k=1}^n \sqrt k = \frac{2 \sqrt{n^3}}{3} -\frac{2}{3} + \frac 1 2 (\sqrt n -1)+ \frac{1}{24} (\frac{1}{\sqrt n} -1) + \int_1^n P_{2k+1}(x)f^{(2k+1)}(x)dx
$$
I am not sure what can be said about the integral. Please tell me if I have made a mistake and how I can solve the integral? Have I stopped the summation at the right point?</p>
| robjohn | 13,854 | <p>Using the Euler-Maclaurin Sum Formula, we get
<span class="math-container">$$
\sum_{k=1}^n\sqrt{k}=\tfrac23n^{3/2}+\tfrac12n^{1/2}+\zeta(-\tfrac12)+\tfrac1{24}n^{-1/2}-\tfrac1{1920}n^{-5/2}+\tfrac1{9216}n^{-9/2}+O(n^{-13/2})
$$</span>
where the constant <span class="math-container">$\zeta(-\frac12)=-0.20788622497735456602$</span> is explained below.</p>
<hr>
<p>By the Euler-Maclaurin Sum Formula, we have for <span class="math-container">$\mathrm{Re}(s)\gt-1$</span>
<span class="math-container">$$
\sum_{k=1}^nk^{-s}=\frac1{1-s}n^{1-s}+\frac12n^{-s}+\zeta_\ast(s)+O(n^{-s-1})\tag{1}
$$</span>
Define the sequence of functions
<span class="math-container">$$
\zeta_n(s)=\sum_{k=1}^nk^{-s}-\frac1{1-s}n^{1-s}-\frac12n^{-s}\tag{2}
$$</span>
For all <span class="math-container">$n\ge1$</span>, <span class="math-container">$\zeta_n(s)$</span> is analytic. For <span class="math-container">$\mathrm{Re}(s)\gt1$</span>, <span class="math-container">$\lim\limits_{n\to\infty}\zeta_n(s)=\zeta(s)$</span>. Estimate <span class="math-container">$(1)$</span> says that on compact subsets of <span class="math-container">$\{s\in\mathbb{C}\setminus\{1\}:\mathrm{Re(s)\gt-1}\}$</span>, <span class="math-container">$\zeta_n(s)$</span> converges uniformly. Thus, for <span class="math-container">$s\in\mathbb{C}\setminus\{1\}$</span> and <span class="math-container">$\mathrm{Re}(s)\gt-1$</span>, <span class="math-container">$\lim\limits_{n\to\infty}\zeta_n(s)$</span> is analytic, so by analytic continuation, for <span class="math-container">$\mathrm{Re}(s)\gt-1$</span>,
<span class="math-container">$$
\zeta_\ast(s)=\lim\limits_{n\to\infty}\zeta_n(s)=\zeta(s)\tag{3}
$$</span></p>
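<p>A numerical check of the expansion (an mpmath sketch; the choice $n=50$ is arbitrary):</p>
<pre><code>from mpmath import mp, mpf, sqrt, zeta

mp.dps = 30
N = 50
n = mpf(N)
exact = sum(sqrt(k) for k in range(1, N + 1))
approx = (2*n**mpf('1.5')/3 + sqrt(n)/2 + zeta(mpf('-0.5'))
          + 1/(24*sqrt(n)) - 1/(1920*n**mpf('2.5')) + 1/(9216*n**mpf('4.5')))
print(exact - approx)   # tiny, consistent with the O(n^{-13/2}) tail
</code></pre>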
|
1,575,397 | <p>I need help calculating
$$\lim_{n\to\infty}\left(\frac{1}{n^{2}}+\frac{2}{n^{2}}+...+\frac{n}{n^{2}}\right) = ?$$</p>
| Jimmy R. | 128,037 | <p>$$1+2+\ldots+n=\frac{n(n+1)}{2}$$ which implies that $$\frac1{n^2}+\frac2{n^2}+\ldots+\frac{n}{n^2}=\frac{n(n+1)}{2n^2}=\frac{n^2+n}{2n^2}=\frac12+\frac{1}{2n},$$ and hence the limit is $\frac12$, since $\frac{1}{2n}\to 0$ as $n\to\infty$.</p>
|
1,526,474 | <p>Find the natural number $k <117$ such that $2^{117}\equiv k \pmod {117}$.</p>
<p>I know $117$ is the product of $3$ and $37$.</p>
<p>$2^{117}\equiv 2 \pmod 3$
$2^{117}\equiv 31 \pmod {37}$.
But $2^{117}\equiv 44 \pmod {117}$.</p>
<p>I can't seem to understand how to get $44$. Can anyone help me understand?</p>
| Piquito | 219,998 | <p>Remark that $8^4=4096=35\cdot117+1$, so $8^4\equiv 1\pmod{117}$. Hence $2^{117}=2^{3\cdot39}=8^{39}=8^{4\cdot9+3}\equiv 1^9\cdot 8^3\pmod{117}$.</p>
<p>Therefore $2^{117}\equiv 512=4\cdot117+44\equiv 44\pmod{117}$.
The answer is $k=44$.</p>
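<p>For readers who want a direct verification (a one-line check, independent of the argument above), Java's <code>BigInteger.modPow</code> computes the same residue:</p>
<pre><code>import java.math.BigInteger;

public class ModCheck {
    public static void main(String[] args) {
        BigInteger r = BigInteger.valueOf(2)
                .modPow(BigInteger.valueOf(117), BigInteger.valueOf(117));
        System.out.println(r);  // prints 44
    }
}
</code></pre>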
|
4,292,618 | <p>I have the following function <span class="math-container">$$\frac{1}{1+2x}-\frac{1-x}{1+x} $$</span>
How can I find an equivalent way to compute it when <span class="math-container">$x$</span> is much smaller than 1? I assume the problem here is with <span class="math-container">$1+x$</span>, since it would probably evaluate to 1. I don't know if multiplying by <span class="math-container">$(1-x)$</span> would be helpful, as it would give
<span class="math-container">$$ \frac{1-x}{1+x-2x^2}-\frac{(1-x)^2}{1-x^2} $$</span> so there's still term <span class="math-container">$1+x$</span>.</p>
| Bobby Laspy | 986,676 | <p>You can replace the fractions by geometric progressions, giving</p>
<p><span class="math-container">$$\sum_{k=0}^\infty(-2x)^k-\sum_{k=0}^\infty(-x)^k-\sum_{k=0}^\infty(-x)^{k+1}=
\sum_{k=2}^\infty(-1)^k(2^k-2)x^k\\
=2x^2-6x^3+14x^4-30x^5\cdots.$$</span> Now truncate these sums to some power.</p>
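<p>A short double-precision experiment (a hedged sketch illustrating why the rewrite helps) shows the original formula losing essentially all significant digits for small <span class="math-container">$x$</span>, while the truncated series returns the correct value <span class="math-container">$\approx 2x^2$</span>:</p>
<pre><code>public class Cancellation {
    public static void main(String[] args) {
        double x = 1e-9;
        double naive = 1.0 / (1.0 + 2.0 * x) - (1.0 - x) / (1.0 + x);
        double series = 2 * x * x - 6 * x * x * x + 14 * x * x * x * x;
        System.out.println(naive);   // rounding noise on the order of 1e-16
        System.out.println(series);  // 2.0e-18, accurate to double precision
    }
}
</code></pre>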
|
1,071,040 | <p>I found <a href="https://math.stackexchange.com/questions/549065/how-exactly-do-you-measure-circumference-or-diameter">How exactly do you measure circumference or diameter?</a> but it was more related to how people measured circumference and diameter in old days.</p>
<p><strong>BUT</strong> now that we have a formula, and the value of $\pi$ cannot be determined exactly, how can I accurately calculate the circumference of a circle?</p>
<p>Is there any other, perhaps physical, means by which I can calculate the exact circumference?</p>
<p>Thank you</p>
| Mietek | 420,053 | <p>Construct the circle of diameter $1/\pi$.
That circle has circumference exactly $1$; that is its exact value.
(You don't escape from $\pi$: it is a property of every circle.)</p>
|
11,994 | <p>Now that we get to see the SE-network wide list of "hot" questions, I am just shaking my head in disbelief. At the time I am writing this, the two hot questions from Math.SE are titled (get a barf-bag, quick)</p>
<ul>
<li><a href="https://math.stackexchange.com/q/599520/8348">https://math.stackexchange.com/q/599520/8348</a></li>
<li><a href="https://math.stackexchange.com/q/600373/8348">$1/i=i$. I must be wrong but why?</a></li>
</ul>
<p>Who gets to select these questions? How? Irrespective of how this is done, this is ridiculous, as neither question has any even remotely serious content (the latter one is more or less a common <em>fake-proof</em>). </p>
<p>My proposal:</p>
<blockquote>
<p>The representatives of Math.SE on this list should be based only on the votes of
people who are active on Math.SE. Not just all voting members (suspecting/pointing
finger at SOers, who get the right to vote from association bonus alone).</p>
</blockquote>
<p>The exact rep limit (if any) is open to debate; maybe 1000? The bar probably shouldn't be too high, for that would introduce a different kind of problem. But it should be something that ensures a valued history of contributions on this site, not elsewhere on the SE network.</p>
| Community | -1 | <p>I've criticized the hot questions algorithm quite heavily on MSO in the past; it favors certain types of questions without a good reason, and it has a strong self-reinforcing effect where questions that manage to get into the hot questions list get disproportionately more votes, which keeps them there longer.</p>
<p>But the algorithm is not the only thing responsible for the result; it has to work with the information it can get about the questions. It can't actually judge the quality of questions: the only measure it has available is the score of the question and its answers. Voting generally favors questions that are accessible to a wide audience, while very specialized questions tend to get rather low scores. The algorithm will prefer questions that appeal to a wider audience due to that, but that isn't necessarily a bad thing for the hot questions list. The votes from users that arrive at the question from the hot questions list will exaggerate this effect, but they are not the primary cause of it.</p>
<p>One aspect particular to MSE is that this site is one of the very few sites that still allows certain kinds of questions (the type often tagged as <a href="https://math.stackexchange.com/questions/tagged/soft-question" class="post-tag" title="show questions tagged 'soft-question'" rel="tag">soft-question</a> or <a href="https://math.stackexchange.com/questions/tagged/big-list" class="post-tag" title="show questions tagged 'big-list'" rel="tag">big-list</a>) as community wiki questions. If these kinds of questions are so ridiculous or embarrassing, why are they still allowed? If ridiculous questions are not getting closed, but instead upvoted, the problem is not the selection algorithm for the hot questions list, but that those questions are upvoted and left open.</p>
<p>I've proposed a similar change and a few others in the following feature request on MSO: <a href="https://meta.stackexchange.com/questions/139102/better-criteria-for-the-hot-questions-list">Better criteria for the hot questions list</a></p>
<blockquote>
<p>There are several aspect that could be improved in my opinion:</p>
<p><strong>Questions can stay in the list for too long</strong>, making the list a bit too static for regular SE users. <a href="https://meta.stackexchange.com/questions/99077/dont-let-questions-stick-to-the-top-of-the-multicollider-forever">I've previously posted a feature
request</a> about this, and I still think that no question should be
able to stay in the hot questions list for a whole week. </p>
<p><strong>The current criteria are prone to select problematic questions.</strong> They value a high number of answers, which you find for example in
list questions. So questions like <a href="https://math.stackexchange.com/questions/168019/big-list-of-fun-math-books">this example from Math.SE</a> have
it easy to get into the list, while this type of question is regarded
as problematic by most of the SE network and explicitly disallowed on
many sites.</p>
<p><strong>Being in the list of hot questions is self-perpetuating.</strong> Questions on the list get significantly more exposure, which means more votes,
which then improves the position in the list of hot questions. The
votes from outside users are also problematic as they represent mostly
the popular appeal of the posts, not necessarily a judgement of the
quality by an expert.</p>
<p>I've some ideas on what could be changed:</p>
<p><strong>Exclude community wiki questions from the list.</strong> Those are usually big list-style questions that aren't a good example for high-quality
content across the network.</p>
<p><strong>Put less value on a high number of answers.</strong> The current method values (as far as I understand it) a high number of answers and a high
total score of all answers. This preferentially puts more subjective
or list-type questions to the top. A good question that got one
high-quality answer that is highly upvoted shouldn't be at a
disadvantage compared to a popular question that gets lots of
opinions. Maybe counting only the first two or three answers and their
combined score would be enough.</p>
<p><strong>Weight external votes differently from internal votes.</strong> External votes from users that discover the question via the hot questions list
shouldn't be counted in full, they mostly represent the additional
exposure and the mass-appeal of the question, not necessarily the
quality. But they also shouldn't be disregarded completely, a question
that doesn't get any votes from the external users is probably too
specialized to be of interest to a wide audience. </p>
<p><strong>Normalize votes for each site.</strong> The voting behaviour varies a lot between different sites. Currently sites that have an above average
number of votes are favoured compared to other sites. Normalizing the
votes on a per-site base would put a stronger emphasis on the number
of votes a specific questions gets compared to other questions on the
same sites. It would prevent certain sites from being overrepresented
due to their general voting behaviour.</p>
</blockquote>
|
1,992,143 | <p>I'm trying to determine if $\sum \limits_{n=1}^{\infty} \sin(n\pi + \frac{1}{2n})$ absolutely converges or not.</p>
<p>Help me check it. I don't know how to do it. Thanks in advance. :)</p>
| DonAntonio | 31,254 | <p>Another approach, perhaps simpler and certainly shorter: since $|\sin(n\pi+x)|=\sin x$ for $x\in(0,\pi)$, absolute convergence amounts to the convergence of $\sum\sin\frac1{2n}$. Apply the limit comparison test with $\;b_n=\frac1{2n}\;$; then</p>
<p>$$\frac{a_n}{b_n}=\frac{\sin\frac1{2n}}{\frac1{2n}}\xrightarrow[n\to\infty]{}1$$</p>
<p>and thus the series $\;\sum\sin\frac1{2n}\;,\;\;\sum\frac1{2n}\;$ converge or diverge together... but the latter is just a constant multiple of the harmonic series, which diverges. Hence the original series does not converge absolutely.</p>
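<p>A quick numerical sketch (using the reduction above, so we sum $\sin\frac1{2n}$ directly) shows the partial sums of the absolute values tracking $\frac12\ln N$, consistent with divergence:</p>
<pre><code>public class AbsPartialSums {
    public static void main(String[] args) {
        int limit = 1_000_000;
        double s = 0;
        for (int n = 1; n <= limit; n++) s += Math.sin(0.5 / n);
        // s is roughly 7.2, while 0.5 * ln(1e6) is roughly 6.9:
        // harmonic-like growth, so the partial sums are unbounded.
        System.out.println(s + "  " + 0.5 * Math.log(limit));
    }
}
</code></pre>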
|
749,714 | <p>Does anyone know how to show this preferable <strong>without</strong> using modular</p>
<p>For any prime $p>3$ show that 3 divides $2p^2+1$ </p>
| Community | -1 | <p>For any prime $p>3$ we have $p\equiv 1\pmod 3$ or $p\equiv 2\pmod 3$; in both cases $p^2\equiv 1\pmod 3$, and therefore
$$2p^2+1\equiv 2\cdot1+1\equiv 0\pmod 3$$</p>
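<p>Since the question asks for an argument that avoids modular notation, the same idea can be phrased directly: a prime $p>3$ is not divisible by $3$, so $p=3k\pm1$ for some integer $k$, and then
$$2p^2+1=2(9k^2\pm6k+1)+1=3(6k^2\pm4k+1),$$
which is visibly a multiple of $3$.</p>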
|
1,980,606 | <p>Let $f:[0, 1] \to \mathbb{R}$ differentiable in $[0, 1]$ and $|f'(x)| \leq\frac{1}{2}$ for all $x \in [0, 1]$. If $a_n = f(\frac{1}{n})$, show that $\lim_{n \to \infty} a_n$ exist (Hint: Cauchy).</p>
<p>Can you help me? Thanks.</p>
| hamam_Abdallah | 369,188 | <p>By the Mean Value Theorem (using $|f'|\leq\frac12$), for all positive integers $n$ and $p$,</p>
<p>$|a_{n+p}-a_n|\leq\frac{1}{2}(\frac{1}{n}-\frac{1}{n+p})\leq\frac{1}{2n}$</p>
<p>so $(a_n)$ is a Cauchy sequence, and therefore converges.</p>
|
4,941 | <p>I was reviewing my class notes and found the following:</p>
<p>"The name 'torsion' comes from topology and refers to spaces that are twisted, ex. Möbius band"</p>
<p>In our notes we used the following definition for torsion element and torsion module:
An element $m$ of an $R$-module $M$ is called a torsion element if $rm=0$ for some nonzero $r\in R$.
A torsion module is a module which consists solely of torsion elements.</p>
<p>What is the relationship between torsion modules and twisted spaces? Was the definition of torsion module somehow motivated from topological considerations of twisted spaces?</p>
<p>I don't really see any obvious connection. I'm taking my first topology class this semester, so I apologize if this is something you learn about later in courses like algebraic topology, but I haven't been able to find any explanation of this.</p>
| Qiaochu Yuan | 232 | <p>The definition of torsion in modules is a generalization of the definition of torsion in $\mathbb{Z}$-modules, e.g. abelian groups. Torsion in abelian groups refers to elements of finite order, and this in turn relates to topology because to any topological space we can associate abelian groups called (integral) <a href="http://en.wikipedia.org/wiki/Homology_(mathematics)">homology groups</a>, and torsion in these groups is suggestive of a kind of "twistedness" in the space. The simplest example of this is in the first integral homology of closed surfaces; the group has torsion if and only if the surface is non-orientable, such as the <a href="http://en.wikipedia.org/wiki/Klein_bottle">Klein bottle</a>.</p>
|
2,258,697 | <p>I recently encountered this question and have been stuck for a while. Any help would be appreciated!</p>
<p>Q: Given that
$$\frac{1}{a} + \frac{1}{b} + \frac{1}{c} = \frac{1}{5} \tag{1} \label{eq:1}$$
$$abc = 5 \tag{2} \label{eq:2}$$
Find $a^3 + b^3 + c^3$. It wasn't specified in the question but I think it can be assumed that $a, b, c$ are real numbers.</p>
<p>My approach:
$$ ab + ac + bc = \frac{1}{5} abc = 1 $$
$$ a^3 + b^3 + c^3 = (a+b+c)^3 - 3[(a + b + c)(ab + ac + bc) - abc] $$
$$ a^3 + b^3 + c^3 = (a+b+c)^3 - 3(a+b+c) + 15 $$
From there, I'm not sure how to go about solving for $a + b + c$.
Something else I tried was letting $x = \frac{1}{a}, y = \frac{1}{b}, z = \frac{1}{c}$, so we get $$ xyz = x + y + z = \frac{1}{5} $$Similarly, I'm not sure how to continue from there. </p>
| spaceisdarkgreen | 397,125 | <p>This matrix has a simple block form $$ \begin{pmatrix}I&I\\I&I\end{pmatrix}$$ where $I$ is the $2\times 2$ identity, so you can eyeball the eigenvalues of the $2\times 2$ all-ones matrix (which are $2$ and $0$) and then realize that each will contribute twice, since each eigenvector of that matrix ($(1,1)$ and $(1,-1)$) corresponds to a two-dimensional invariant subspace of the full $4\times 4$ matrix (vectors of the form $(a,b,a,b)$ and $(a,b,-a,-b)$ respectively).</p>
|
3,172,008 | <p>Let <span class="math-container">$n \in \mathbb{N}$</span> be the index of sequence <span class="math-container">$\{\frac{1}{n^2}\}_{n=1}^\infty$</span>.</p>
<p>I'm assuming that:
<span class="math-container">$$\displaystyle{\lim _{n\to\infty}} \frac{1}{n^2} = \frac{\displaystyle{\lim _{n\to\infty}} 1}{\displaystyle{\lim _{n\to\infty}}n^2}, \displaystyle{\lim _{n\to\infty}}n^2 \neq 0$$</span></p>
<p>The problem is, after I prove that the limit of the numerator is 1, I cannot prove that the denominator has a limit because it's a fluttering/divergent sequence. Having no limit, and striving towards infinity, how can I possibly do limit arithmetic on them?</p>
| Thorgott | 422,019 | <p>The identity
<span class="math-container">$$\lim_{n\rightarrow\infty}\frac{a_n}{b_n}=\frac{\lim\limits_{n\rightarrow\infty}a_n}{\lim\limits_{n\rightarrow\infty}b_n}$$</span>
does not hold when the limits on the RHS don't exist (or when the limit in the denominator would be <span class="math-container">$0$</span>). The limit <span class="math-container">$\lim_{n\rightarrow\infty}n^2$</span> does not exist as the sequence is divergent, so you cannot use the above identity. A valid way to do limit arithmetic would be
<span class="math-container">$$\lim_{n\rightarrow\infty}\frac{1}{n^2}=\left(\lim_{n\rightarrow\infty}\frac{1}{n}\right)\cdot\left(\lim_{n\rightarrow\infty}\frac{1}{n}\right).$$</span>
In this case, interchanging limit and product is possible, because the limits on the RHS each exist. They, of course, evaluate to <span class="math-container">$0$</span> and hence so does the original limit.</p>
|
3,172,008 | <p>Let <span class="math-container">$n \in \mathbb{N}$</span> be the index of sequence <span class="math-container">$\{\frac{1}{n^2}\}_{n=1}^\infty$</span>.</p>
<p>I'm assuming that:
<span class="math-container">$$\displaystyle{\lim _{n\to\infty}} \frac{1}{n^2} = \frac{\displaystyle{\lim _{n\to\infty}} 1}{\displaystyle{\lim _{n\to\infty}}n^2}, \displaystyle{\lim _{n\to\infty}}n^2 \neq 0$$</span></p>
<p>The problem is, after I prove that the limit of the numerator is 1, I cannot prove that the denominator has a limit because it's a fluttering/divergent sequence. Having no limit, and striving towards infinity, how can I possibly do limit arithmetic on them?</p>
| fleablood | 280,126 | <p>Just do delta-epsilon. There's no trick. </p>
<p>For any <span class="math-container">$\epsilon> 0$</span> choose <span class="math-container">$N \ge \frac 1{\sqrt{\epsilon}}$</span>.</p>
<p>If <span class="math-container">$n > N$</span> then <span class="math-container">$|\frac 1{n^2} - 0| =\frac 1{n^2} < \frac 1{N^2} \le \epsilon$</span>.</p>
<p>So <span class="math-container">$\lim_{n\to \infty}\frac 1{n^2} = 0$</span>.</p>
<p>That's it.</p>
<p>=====</p>
<p>In general, you should have one time or another have been presented with a proof that </p>
<blockquote>
<p>if <span class="math-container">$\lim f(n) = +\infty$</span> then <span class="math-container">$\lim \frac 1{f(n)} = 0$</span>.</p>
</blockquote>
<p>The proof would be: As <span class="math-container">$\lim f(n)=+\infty$</span> (with <span class="math-container">$n\to a$</span> or <span class="math-container">$n\to\infty$</span>), for any <span class="math-container">$N$</span> there is a condition on <span class="math-container">$n$</span> ensuring <span class="math-container">$f(n) > N$</span>. [If <span class="math-container">$\lim_{n\to a} f(n) = \infty$</span> the condition is that there is a <span class="math-container">$\delta$</span> so that <span class="math-container">$|n-a| < \delta$</span> implies <span class="math-container">$f(n) >N$</span>. If <span class="math-container">$\lim_{n\to \infty} f(n) = \infty$</span> the condition is that there is an <span class="math-container">$M$</span> so that <span class="math-container">$n > M$</span> implies <span class="math-container">$f(n) > N$</span>.]</p>
<p>For any <span class="math-container">$\epsilon > 0$</span> let <span class="math-container">$N = \frac 1{\epsilon}$</span>; then, if <span class="math-container">$n$</span> satisfies the condition, we know <span class="math-container">$f(n) > N$</span>, so <span class="math-container">$\frac 1{f(n)} < \epsilon$</span>. Hence <span class="math-container">$\lim \frac 1{f(n)} = 0$</span>.</p>
<p>....</p>
<p>So if we assume <span class="math-container">$\lim_{n\to \infty} n^2 = \infty$</span> (which is either obvious or provable directly: for any <span class="math-container">$N>0$</span>, <span class="math-container">$n > \sqrt{N} \implies n^2 > N$</span>), we know that <span class="math-container">$\lim_{n\to \infty}\frac 1{n^2} = 0$</span>.</p>
<p>This is why we <em>informally</em> say <span class="math-container">$\frac 1\infty = 0$</span>. But that is informal and too many people say it without knowing what it <em>means</em>.</p>
|
2,076,908 | <blockquote>
<p><strong>Question:</strong> Prove that $e^x, xe^x,$ and $x^2e^x$ are linearly independent over $\mathbb{R}$.</p>
</blockquote>
<p>Generally we proceed by setting up the equation
$$a_1e^x + a_2xe^x+a_3x^2e^x=0_f,$$
which simplifies to $$e^x(a_1+a_2x+a_3x^2)=0_f,$$ and furthermore to
$$a_1+a_2x+a_3x^2=0_f.$$</p>
<p>From here I think it's obvious that the only choice to make the sum the zero function is to let each scalar equal 0, but this is very weak reasoning.</p>
<p>As an undergraduate we learned to test for independence by determining whether the Wronskian is not identically equal to 0. But I can only use this method if the functions are solutions to the same linear homogeneous differential equation of order 3. In other words, I cannot use this method for an arbitrary set of functions. I was not given a differential equation, so I determined it on my own and got that they satisfy $$y'''-3y''+3y'-y = 0.$$</p>
<p>I found the Wronskian, $2e^{3x}\neq0$ for any real number. Thus the set is linearly independent. But it took me some time to find the differential equation and even longer finding the Wronskian so I'm wondering if there is a stronger way to prove this without using the Wronskian Test for Independence.</p>
| symplectomorphic | 23,611 | <p>Though it is overkill and not really in the spirit of the problem (see kobe's answer for the most elementary approach), you can use the fundamental theorem of algebra (or the <a href="https://math.stackexchange.com/questions/25822/how-to-prove-that-a-polynomial-of-degree-n-has-at-most-n-roots">weaker result</a> that a real polynomial of degree $n>1$ has at most $n$ roots) on your reduced equation to conclude that $a_3$ and $a_2$ must be zero (otherwise you'd have a positive degree polynomial with infinitely many roots). Of course $a_1$ must be zero, too: this follows just from setting $x=0$.</p>
|
1,549,138 | <p>I have a problem with this exercise:</p>
<p>Prove that if $R$ is a reflexive and transitive relation then $R^n=R$ for each $n \ge 1$ (where $R^n \equiv \underbrace {R \times R \times R \times \cdots \times R} _{n \ \text{times}}$).</p>
<p>This exercise comes from my logic exercise book. The problem is that I've proven $R^n=R$ is false for $n=2$ and non-empty $R$.</p>
<p>Here is how I've done it:</p>
<p>Let's take $n=2$. $R$ is a relation so it's a set. $R^2$ is, by definition, a set of ordered pairs where both of their elements belong to $R$. But $R$ is a set of elements that belong to $R$ - I mean it's not the set of pairs of elements from $R$. So $R^2\neq R$.</p>
<p>Please tell me something about my proof and this exercise. How would you solve the problem?</p>
| Mankind | 207,432 | <p>I think you got the definition of $R^2$ wrong. Here is the correct definition:</p>
<p>Let $R$ be a relation on the set $A$. Then $R^2$ is defined by
$$R^2 = \{(x,z)\ |\ \exists y\in A\text{ such that } (x,y)\in R\text{ and }(y,z)\in R\}.$$</p>
<p>So $R^2$ is not a set of ordered pairs of elements from $R$. It is a set of ordered pairs of elements from $A$, the same type of object as $R$.</p>
<p>For instance, if $R=\{(1,1),(1,2),(2,3)\}$, then $R^2=\{(1,1),(1,2),(1,3)\}$.</p>
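<p>A minimal sketch in code (the array encoding of pairs is my own, purely for illustration) reproduces this example by brute force:</p>
<pre><code>public class RelationSquare {
    public static void main(String[] args) {
        int[][] r = {{1, 1}, {1, 2}, {2, 3}};
        // R^2 = { (x,z) : there is a y with (x,y) in R and (y,z) in R }
        for (int[] p : r)
            for (int[] q : r)
                if (p[1] == q[0])
                    System.out.println("(" + p[0] + ", " + q[1] + ")");
        // prints (1, 1), (1, 2), (1, 3)
    }
}
</code></pre>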
|
241,903 | <p>Suppose $f: \mathbb{D}\to \mathbb{C}$ is a univalent function with $$f(z)=z+a_2z^2+a_3z^3+\cdots.$$ The Bieberbach conjecture/de Branges' theorem asserts that $|a_n|\leq n$ with equality for the Koebe function, which has an unbounded image. Suppose we restrict to the class of univalent functions whose image is actually bounded. Is there a better bound than $|a_n|\leq n$ ?</p>
| Lasse Rempe | 3,651 | <p>This is a softer answer than Alex's, in line with Christian's comment to that answer.</p>
<p>If you are looking at the family of all bounded conformal maps, then you clearly do <strong>not</strong> get a better bound, as the Koebe function can be approximated by such.</p>
<p>On the other hand, if you look at all functions with the usual normalisations satisfying a <strong>fixed</strong> bound (say functions taking values in the unit disc) , this family is compact, and hence you <strong>do</strong> get a better bound for all coefficients. Of course this does not help you finding those bounds, which according to Alex's answer would seem to be currently hopeless.</p>
|
53,188 | <p>Recently I read the chapter "Doctrines in Categorical Logic" by Kock, and Reyes in the Handbook of Mathematical Logic. And I was quite impressed with the entire chapter. However it is very short, and considering that this copy was published in 1977, possibly a bit out of date. </p>
<p>My curiosity has been sparked (especially given the possibilities described in the aforementioned chapter) and would like to know of some more modern texts, and articles written in this same style (that is to say coming from a logic point of view with a strong emphasis on analogies with normal mathematical logic.)</p>
<p>However, that being said, <em>I am stubborn as hell, and game for anything.</em></p>
<p>So, Recommendations?</p>
<p>Side Note: More interested in the preservation of structure, and the production of models than with any sort of proposed foundational paradigm </p>
| Harry Gindi | 1,353 | <p>Just as technical texts, <em>Accessible Categories</em> by Makkai and Paré and <em>Locally Presentable and Accessible Categories</em> by Adamek and Rosicky are extremely useful even for non-logicians (they are a natural extension of the material in SGA4.1.i on colimits of functors indexed by "ensembles ordonné grand devant $\alpha$", which are now called $\alpha$-filtered colimits), but additionally, they cover a categorical approach to model theory that is also supposed to be pretty interesting (although I haven't read those parts of the books, I've heard good things from a number of people, including François Dorais (indirectly), who is one of our benevolent moderators).</p>
|
53,188 | <p>Recently I read the chapter "Doctrines in Categorical Logic" by Kock, and Reyes in the Handbook of Mathematical Logic. And I was quite impressed with the entire chapter. However it is very short, and considering that this copy was published in 1977, possibly a bit out of date. </p>
<p>My curiosity has been sparked (especially given the possibilities described in the aforementioned chapter) and would like to know of some more modern texts, and articles written in this same style (that is to say coming from a logic point of view with a strong emphasis on analogies with normal mathematical logic.)</p>
<p>However, that being said, <em>I am stubborn as hell, and game for anything.</em></p>
<p>So, Recommendations?</p>
<p>Side Note: More interested in the preservation of structure, and the production of models than with any sort of proposed foundational paradigm </p>
| Chris Heunen | 10,368 | <p>I'd like to add Carsten Butz' "<a href="http://www.brics.dk/LS/98/2/" rel="nofollow">Regular categories and regular logic</a>". It is available online in the BRICS lecture series, and is very accessible. Though perhaps a bit too basic at times for readers with some background knowledge, it is very suitable for a first introduction before jumping into texts on toposes.</p>
|
53,188 | <p>Recently I read the chapter "Doctrines in Categorical Logic" by Kock, and Reyes in the Handbook of Mathematical Logic. And I was quite impressed with the entire chapter. However it is very short, and considering that this copy was published in 1977, possibly a bit out of date. </p>
<p>My curiosity has been sparked (especially given the possibilities described in the aforementioned chapter) and would like to know of some more modern texts, and articles written in this same style (that is to say coming from a logic point of view with a strong emphasis on analogies with normal mathematical logic.)</p>
<p>However, that being said, <em>I am stubborn as hell, and game for anything.</em></p>
<p>So, Recommendations?</p>
<p>Side Note: More interested in the preservation of structure, and the production of models than with any sort of proposed foundational paradigm </p>
| none | 13,636 | <p><a href="http://en.wikipedia.org/wiki/Categorical_logic" rel="nofollow">http://en.wikipedia.org/wiki/Categorical_logic</a> has more refs. That's a sufficiently obvious place to look that maybe someone can move this "answer" to a comment.</p>
|
1,891,831 | <p>In linear algebra we have vectors:$$
\mathbf{A}=(x,y,z)=x\mathbf{\hat e}_x+y\mathbf{\hat e}_y+z\mathbf{\hat e}_z$$
We have vector algebra, i.e. <a href="https://en.wikipedia.org/wiki/Euclidean_vector#Basic_properties" rel="nofollow">vector addition, dot product, lines, planes, etc</a>. A vector has a magnitude and a direction.</p>
<p>However, in multivariable calculus we also have vectors:$$
\mathbf{A}(t)=(x,y,z)=x\mathbf{\hat e}_x+y\mathbf{\hat e}_y+z\mathbf{\hat e}_z
$$
Here we do derivatives and integrals.</p>
<p>What is the difference? Are there different types of vectors? </p>
<p>I have always thought of vectors as the representation in the above link.</p>
| Bib-lost | 294,785 | <p>Let's start with vectors in linear algebra. I don't know how abstract your linear algebra class was, but in essence linear algebra focusses on the algebraic aspect of vectors: they live in a structure called a <a href="https://en.wikipedia.org/wiki/Vector_space" rel="nofollow">vector space</a> and can be added, subtracted and scaled by numbers - that's all there is to it. This vector space can have any dimension: in your example the vector is in a three-dimensional space, but lower or higher (even infinite) dimensions are also possible. Also the scalars (in your example the $x$, $y$ and $z$) can be any sort of numbers; I won't go into the abstract details, but for example you can let them be <a href="https://en.wikipedia.org/wiki/Rational_number" rel="nofollow">rational</a>, <a href="https://en.wikipedia.org/wiki/Real_number" rel="nofollow">real</a> or <a href="https://en.wikipedia.org/wiki/Complex_number" rel="nofollow">complex</a> numbers, and this choice has minor impact on how the algebra works out.</p>
<p>What's important is that multivariate calculus relies on a <em>more specific</em> sort of vector, since two restrictions are made: the scalars must be real numbers and the dimension must be finite. These vectors are sometimes called Euclidean and are what your linked Wikipedia article is about. In this specific case, as you mentioned, vectors can have an analytic structure as well and you can describe and study things like continuity, derivatives and integrals with these vectors.</p>
<p>TLDR; multivariate calculus vectors are a specific instance of linear algebra vectors: they are scaled by real numbers and live in a vector space of finite dimension, often denoted as $\mathbb{R}^n$ where $n$ is the dimension.</p>
|
383,037 | <p>I was going through "Convergence of Probability Measures" by Patrick Billingsley. In Section 1: I encountered the following problem:</p>
<p><strong>Show that inequivalent metrics can give rise to the same class of Borel sets.</strong></p>
<p>My idea is that the two metrics generate different topologies but the $\sigma$-algebras generated by them are the same. However, I don't know how to go about proving this. I guess I need a convincing example.</p>
<p><strong>My Background</strong>:
I read topology from "Topology and modern analysis" by G. F. Simmons, the Rudin texts, and Billingsley's "Probability and Measure". But this still boggles me.</p>
<p><strong>My Searches</strong>: I searched for "non equivalent metrics" and "inequivalent metrics", getting 85 and 3 results respectively, but none were helpful or relevant.</p>
<p>I would appreciate any useful hints, tips and even complete answers (preferably the first two). </p>
| joriki | 6,622 | <p>An example is given by the real line with the usual metric, and the real line with a metric under which the origin is an isolated point. The topologies differ, so the metrics are inequivalent; but the Borel sets are the same, since the singleton $\{0\}$ is already a Borel set for the usual metric, so declaring it open adds nothing new to the generated $\sigma$-algebra.</p>
|
785,188 | <p>I found a very simple algorithm that draws values from a Poisson distribution from <a href="http://www.akira.ruc.dk/~keld/research/javasimulation/javasimulation-1.1/docs/report.pdf" rel="nofollow">this project.</a></p>
<p>The algorithm's code in Java is:</p>
<pre><code>public final int poisson(double a) {
double limit = Math.exp(-a), prod = nextDouble();
int n;
for (n = 0; prod >= limit; n++)
prod *= nextDouble();
return n;
}
</code></pre>
<p><a href="http://docs.oracle.com/javase/7/docs/api/java/util/Random.html#nextDouble%28%29" rel="nofollow"><code>nextDouble()</code></a> is a method of Java's <code>Random</code> class that returns a uniformly distributed random <code>double</code>, for example <code>0.885598042879084</code>.</p>
<p>I can't understand how this creates a Poisson distribution. </p>
<p>Can someone explain?</p>
| PA6OTA | 127,690 | <p>It is related to the Poisson process: fix $a$ and add independent Exponential (mean 1) random variables until the running sum exceeds $a$. If $N$ is the number of partial sums that stay $\le a$ (one less than the number of terms added), then $N \sim \mathrm{Poisson}(a)$, because $N$ counts the arrivals of a rate-1 Poisson process in $[0,a]$. </p>
<p>In the code above, everything is anti-logged: instead of adding Exponentials, it multiplies Uniforms, using the fact that $Y = - \ln(U)$ is an Exponential (mean 1) RV whenever $U$ is Uniform$[0,1]$. The loop condition $U_1\cdots U_k \ge e^{-a}$ is exactly $-\ln U_1 - \cdots - \ln U_k \le a$, so the returned $n$ counts how many of these partial sums stay below $a$.</p>
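<p>A hedged sketch of this equivalence (class and method names are mine, purely for illustration): the product-of-uniforms loop from the question and the sum-of-exponentials loop are the same algorithm, and both sample means come out near $a$, the Poisson mean:</p>
<pre><code>import java.util.Random;

public class PoissonTwoWays {
    static final Random RNG = new Random();

    // The question's method: multiply uniforms while the product stays >= e^{-a}.
    static int byProduct(double a) {
        double limit = Math.exp(-a), prod = RNG.nextDouble();
        int n;
        for (n = 0; prod >= limit; n++) prod *= RNG.nextDouble();
        return n;
    }

    // The anti-logged view: add Exponential(1) terms -ln(U) while the sum stays <= a.
    static int bySum(double a) {
        double sum = -Math.log(RNG.nextDouble());
        int n;
        for (n = 0; sum <= a; n++) sum += -Math.log(RNG.nextDouble());
        return n;
    }

    public static void main(String[] args) {
        double a = 3.0;
        int trials = 200_000;
        long s1 = 0, s2 = 0;
        for (int i = 0; i < trials; i++) { s1 += byProduct(a); s2 += bySum(a); }
        // Both averages should be close to a = 3.0.
        System.out.println((double) s1 / trials + "  " + (double) s2 / trials);
    }
}
</code></pre>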
|