221,026
<p>I need to find the value of <span class="math-container">$z$</span> for a particular value of <span class="math-container">$D_c$</span> (eg. <span class="math-container">$500$</span>), but <span class="math-container">$z$</span> is inside an integral, and I'm not able to use <code>Solve</code> since the integral is giving <code>Hypergeometric2F1</code> function as the output.</p> <pre><code>OmegaM = 0.3111; OmegaLambda = 0.6889; Dc = 500; eqn = Integrate[(OmegaM (1 + z1)^3 + OmegaLambda)^(-1/2), {z1, 0, z}, Assumptions -&gt; z &gt; 0] </code></pre> <blockquote> <pre><code>-1.1473+(1.20482+1.20482z)Hypergeometric2F1[0.333333,0.5,1.33333,-0.451589(1.+z)^3] </code></pre> </blockquote> <pre><code>zvalue = Solve[eqn == Dc, z] </code></pre> <blockquote> <pre><code>Solve was unable to solve the system with inexact coefficients or the system obtained by direct rationalization of inexact numbers present in the system. Since many of the methods used by Solve require exact input, providing Solve with an exact version of the system may help. </code></pre> </blockquote> <p>Is there any other way I can solve this equation? </p> <p>Also, Integrate is taking some time and I'd like it to be fast since I need to put it in a loop with lots of <span class="math-container">$z$</span> values to be computed for corresponding <span class="math-container">$D_c$</span> values. </p>
Artes
184
<p>We will demonstrate that the exact formula for <span class="math-container">$z$</span> reads: <span class="math-container">$$z=\wp\bigg(\frac{\sqrt{\Omega_M}}{2}D_c+\wp^{-1}\big(1;0,-\frac{4\Omega_\Lambda}{\Omega_M}\big);0,-\frac{4\Omega_\Lambda}{\Omega_M}\bigg)-1$$</span> where <span class="math-container">$\wp(x;g_2,g_3)$</span> is the Weierstrass elliptic function, which yields a value <span class="math-container">$w$</span> in the elliptic integral <span class="math-container">$$x=\int^{w}_{\infty}\frac{d t}{\sqrt{4t^3-g_2\;t-g_3}}$$</span> and so <strong>generalizing the answer</strong> to the original question for <strong>any integrand</strong> of the form <span class="math-container">$\frac{1}{\sqrt{R(t)}}$</span>, where <span class="math-container">$R(t)$</span> is a <strong>fourth or a third order polynomial</strong> in <span class="math-container">$t$</span>. This formula can be implemented in the following way:</p> <pre><code>z[ Dc_, OM_, OL_]:= WeierstrassP[ Sqrt[OM/4] Dc+ InverseWeierstrassP[ 1, { 0,-4OL/OM}], { 0, -4OL/OM}]-1 </code></pre> <p>We rationalize numeric constants to play with the system seamlessly (although this step is not neccesary):</p> <pre><code>{ OM, OL} = Rationalize[{ OmegaM = 0.3111, OmegaLambda = 0.6889}]; </code></pre> <p>Let's derive <span class="math-container">$z$</span>: <span class="math-container">$$D_c=\int^{z}_{0}\frac{d s}{\sqrt{\Omega_M (s+1)^3+\Omega_{\Lambda}}}=\frac{2}{\sqrt{\Omega_M}}\int^{z+1}_{1}\frac{d s}{\sqrt{4 s^3+\frac{4\Omega_{\Lambda}}{\Omega_M}}}=\\=\frac{2}{\sqrt{\Omega_M}}\Bigg(\int^{\infty}_{1}\frac{d s}{\sqrt{4 s^3+\frac{4\Omega_{\Lambda}}{\Omega_M}}}-\int^{\infty}_{z+1}\frac{d s}{\sqrt{4 s^3+\frac{4\Omega_{\Lambda}}{\Omega_M}}}\Bigg)=\\=\frac{2}{\sqrt{\Omega_M}}\Bigg(-\wp^{-1}\big(1;0,-\frac{4\Omega_\Lambda}{\Omega_M}\big)+\wp^{-1}\big(z+1;0,-\frac{4\Omega_\Lambda}{\Omega_M}\big)\Bigg)$$</span> and this implies our formula for <span class="math-container">$z$</span>.</p> <p>The formula for <span 
class="math-container">$z$</span> is valid in the range <span class="math-container">$0&lt;D_c&lt;D_{m}=3.25664$</span> and we can also derive an exact formula for <span class="math-container">$D_m$</span>: <span class="math-container">$$D_m=\frac{2}{\sqrt{\Omega_M}} \Re\Big( 2\;\omega_{1}(0,g_3)-\wp^{-1}\big(1;0,g_3\big)\Big)$$</span> where <span class="math-container">$\Re$</span> is the real part, <span class="math-container">$\omega_{1}(0,g_3)$</span> is the Weierstrass half period and <span class="math-container">$g_3$</span> is the Weierstrass invariant, in our case <span class="math-container">$g_3=-\frac{4\Omega_{\Lambda}}{\Omega_M}$</span>. Implementing it:</p> <pre><code>g3=-4OL/OM; Dm = 2/Sqrt[OM]( 2WeierstrassHalfPeriodW1[{0, g3}]-InverseWeierstrassP[1,{0, g3}])//Re//N </code></pre> <blockquote> <pre><code>3.25664 </code></pre> </blockquote> <p><code>Dm</code> has been calculated in version <code>12.1</code>; however, in earlier versions one should simply evaluate <code>Dm = -2/Sqrt[OM] InverseWeierstrassP[1,{0, g3}]</code>. This is because <code>InverseWeierstrassP[1,{0, g3}]</code> is computed in an adjacent parallelogram (see e.g. <a href="https://mathematica.stackexchange.com/questions/63121/integrate-yields-complex-value-while-after-variable-transformation-the-result-i/160055#160055">this discussion</a>). One should also note the better handling of symbolic input in <code>WeierstrassHalfPeriodW1</code> etc. 
To exhibit the structure of <span class="math-container">$z$</span> as an elliptic function (a shifted and rescaled <span class="math-container">$\wp$</span>) we define:</p> <pre><code>wHP = Through @ { WeierstrassHalfPeriodW1,WeierstrassHalfPeriodW2, WeierstrassHalfPeriodW3} @{ 0,-4OL/OM}//ReIm // FullSimplify; GraphicsRow @ Table[ ContourPlot[ Evaluate @ Table[p[z[x+I y,OM,OL]] ==k, {k, wHP[[#1,#2]]&amp; @@@ {{2,1},{2,2},{3,1},{3,2}}}], {x, -8, 8}, {y, -8, 8}, ContourStyle -&gt;Thread[ {Thick,{Red,Darker@Cyan,Darker@Green,Orange}}]], {p, {Re, Im}}] </code></pre> <p><a href="https://i.stack.imgur.com/Wh4iN.png" rel="noreferrer"><img src="https://i.stack.imgur.com/Wh4iN.png" alt="enter image description here"></a></p> <p>There was an assumption that <span class="math-container">$z&gt;0$</span>, however <span class="math-container">$D_c=500$</span> can be reached for negative <span class="math-container">$z$</span>, e.g.</p> <pre><code>z[ 500,OM, OL]//N//Chop </code></pre> <blockquote> <pre><code>-1.73134 </code></pre> </blockquote> <p>and for <span class="math-container">$0&lt; D_c&lt;D_m$</span>, e.g.</p> <pre><code>z[ 2, OM, OL]//N//Chop </code></pre> <blockquote> <pre><code>7.13731 </code></pre> </blockquote>
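As an independent cross-check of the closed form (not part of the original answer), one can invert the distance integral numerically. This sketch uses Python with `scipy` rather than Mathematica; it only covers the range $0 < D_c < D_m$, but it reproduces the value of `z[2]` shown above:

```python
from scipy.integrate import quad
from scipy.optimize import brentq

OmegaM, OmegaL = 0.3111, 0.6889

def Dc(z):
    # the comoving-distance-style integral from the question
    val, _ = quad(lambda s: (OmegaM * (1 + s)**3 + OmegaL)**-0.5, 0.0, z)
    return val

def z_of_Dc(d, z_hi=100.0):
    # invert Dc(z) = d by bracketed root finding; valid while d < Dc(z_hi)
    return brentq(lambda z: Dc(z) - d, 0.0, z_hi)

print(round(z_of_Dc(2.0), 5))  # compare with z[2, OM, OL] // N above
```

Since each evaluation of `Dc` is a full quadrature, this is slower than the Weierstrass formula, which is precisely the original poster's concern about looping over many values.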
2,086,770
<p>$[c^2, c^3, c^4] \text { parallel to/same direction as } [1,-2,4]$</p> <p>Find $c$ if it exists.</p> <p>How do I see if they are parallel, and how do I find such a $c$?</p> <p>Generally, we can say that $r[1,-2,4] = [c^2, c^3, c^4]$.</p> <p>But then I have two unknowns, and I don't know how to solve it like this.</p>
Mark Bennet
2,906
<p>Consider $$c=\frac {c^3}{c^2}=\frac {c^4}{c^3}=?$$</p>
4,292,091
<p>I'm reading the definition of <span class="math-container">$\inf\emptyset$</span> and <span class="math-container">$\sup\emptyset$</span>.</p> <p>a) I'm wondering why <span class="math-container">$\inf\emptyset = \infty$</span> and <span class="math-container">$\sup\emptyset = -\infty$</span>. I would have expected both to be undefined.</p> <p>b) In general, can something equal infinity if it's not in the extended real number system? Should I assume they are working with extended real numbers in these definitions?</p>
311411
688,046
<p>You should assume the presence of the &quot;extended real number system&quot; when thinking about the infimum (or the supremum) of a set <span class="math-container">$M \subset \mathbb{R}$</span>.</p> <p>You should do this because it is helpful. I visualize it dynamically and non-rigorously, as follows. The extended real number system is a train track or subway line with a western terminus at <span class="math-container">$-\infty$</span> and an eastern terminus at <span class="math-container">$+\infty$</span>.</p> <p>While traveling from the western to the eastern end, each real number is passed.</p> <p>Now the algorithm for <span class="math-container">$\inf$</span> is as follows. Let the train begin at <span class="math-container">$-\infty$</span>, and for a fixed set <span class="math-container">$M$</span> let a flag be placed at each real <span class="math-container">$a \in M$</span>.</p> <p>When the train hits the first flag, it halts and declares its output to be the real corresponding to that first flag. (<em>Why is this not rigorously correct? Consider <span class="math-container">$\{a \in \mathbb{R}: 0&lt;a&lt;1\}$</span>.</em>)</p> <p>But, in the case <span class="math-container">$M=\emptyset,$</span> the eastbound train never encounters a flag, and proceeds to the end of the line, which is <span class="math-container">$+\infty$</span>.</p> <p>The case for <span class="math-container">$\sup$</span> is similar but the train is westbound departing from <span class="math-container">$+\infty.$</span></p>
1,407,944
<p>Can it be proved that $\log(n)$ is irrational for every $n=2,3,4,\dots$ ?</p> <p>This question came to my mind while searching for whether $\log(x)$ is irrational for every rational number $x\gt0$. </p>
Caleb Stanford
68,107
<p>There is a theorem (mentioned <a href="https://math.stackexchange.com/a/809913/68107">here</a>) that for any nonzero $\alpha$, at least one of $\alpha$ and $e^{\alpha}$ is transcendental.</p> <p>In this case, take $\alpha = \log x$, where $x$ is rational and $x \neq 1$ (so that $\alpha \neq 0$). Then either $\log x$ is transcendental, or $x$ is. But $x$ is rational, hence algebraic. So $\log x$ is transcendental, and in particular it is irrational.</p>
1,999,643
<p>The title states my question: what aspect of closedness makes a feasible set attractive for optimization?</p>
Jonasson
378,136
<p>Having a bounded and <strong>closed</strong> (i.e. compact) feasible set, and a (real-valued) objective <em>f</em> that is furthermore continuous, you are able to apply the <strong>Extreme value theorem</strong> in order to <strong>prove that an optimum is attained</strong> in your constrained optimization problem.</p>
478,516
<p>$\lim_{d \to \infty} (1+\frac{w}{d})^{\frac{d}{w} } = e$. But what if the number of bits used to encode $d$ is polynomial in length? In this model, infinity can't be encoded. However, $d$ is polynomially much larger than $w$. Is there any tight lower bound, a closed-form function $f(d)$ such that $$ f(d) \le (1+\frac{w}{d})^{\frac{d}{w} }$$</p>
Yuval Filmus
1,277
<p>Let $x = w/d$. Since $w \ll d$, we have $x \ll 1$. Therefore the Taylor expansion of $(1+x)^{1/x}$ should offer a good approximation to the function: $$ (1+x)^{1/x} = e \left[1 - \frac{1}{2} x + \frac{11}{24} x^2 - \frac{7}{16} x^3 + \frac{2447}{5760} x^4 - \frac{959}{2304} x^5 + O(x^6) \right]. $$ In particular, we have the following lower bounds: $$ e \left[1 - \frac{1}{2} x\right], e \left[1 - \frac{1}{2} x + \frac{11}{24} x^2 - \frac{7}{16} x^3\right], \ldots$$ If you're interested in the entire Taylor expansion, have a look <a href="http://oeis.org/A055505" rel="nofollow">here</a>, <a href="http://oeis.org/A055535" rel="nofollow">here</a> and <a href="http://oeis.org/A106827" rel="nofollow">here</a>.</p>
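As a quick numeric spot-check (Python, illustrative only, not part of the answer): with $x = w/d$ small, the truncations of the series that end in a negative term are indeed lower bounds at the sampled points.

```python
import math

def f(x):
    # the quantity being bounded: (1 + x)**(1/x)
    return (1.0 + x) ** (1.0 / x)

def lower1(x):
    # two-term truncation, ends in a negative term
    return math.e * (1 - x / 2)

def lower2(x):
    # four-term truncation, also ends in a negative term
    return math.e * (1 - x/2 + 11 * x**2 / 24 - 7 * x**3 / 16)

for x in [0.001, 0.01, 0.1, 0.2, 0.3, 0.5]:
    assert lower1(x) <= f(x)
    assert lower2(x) <= f(x)
print("lower bounds hold at sampled x")
```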
6,990
<p>The Fourier transform of a periodic function $f$ yields an $l^2$-series of the function's coefficients when it is represented as a countable linear combination of $\sin$ and $\cos$ functions.</p> <ul> <li><p>To what extent can this be generalized to other countable sets of functions? For example, if we keep our inner product, can we obtain another Schauder basis by an appropriate transform? What can we say about the bases in general?</p></li> <li><p>Does this generalize to other function spaces, say, periodic functions with one singularity?</p></li> <li><p>What do these thoughts lead to when considering the continuous FT?</p></li> </ul>
Julián Aguirre
1,168
<p>There are certainly many other bases for spaces of functions on an interval, if we eliminate the periodicity condition. The most widely used are orthogonal polynomials. Given an interval $I\subset\mathbb{R}$ and a weight $w\colon I\to (0,\infty)$, there is a sequence of polynomials $\{P_n\}$ orthogonal with respect to the weight $w$: $$\int_I P_m(x)P_n(x)w(x)\,dx=0,\quad m\ne n.$$ They are a basis of the weighted space $L^2(I,w)$. A classical reference is Gábor Szegő (1939), <em>Orthogonal Polynomials</em>, American Mathematical Society Colloquium Publications.</p>
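To make this concrete (an illustrative sketch, not from the answer): for the Legendre case $I=[-1,1]$, $w\equiv 1$, the orthogonality relations can be checked in Python with `numpy`, integrating $P_mP_n$ exactly via the polynomial antiderivative.

```python
from numpy.polynomial import Legendre

# first few Legendre polynomials on [-1, 1] (weight w = 1):
# off-diagonal products integrate to 0, diagonal ones to 2/(2n+1)
for m in range(5):
    for n in range(5):
        F = (Legendre.basis(m) * Legendre.basis(n)).integ()  # antiderivative
        val = F(1) - F(-1)
        if m != n:
            assert abs(val) < 1e-12                    # orthogonality
        else:
            assert abs(val - 2 / (2*n + 1)) < 1e-12    # known normalization
print("Legendre orthogonality verified")
```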
62,981
<p><strong>Introduction.</strong> I recently revisited Shelah's model without P-points and I was wondering how "badly" Grigorieff forcing destroys ultrafilters, i.e., what kind of properties can survive the destruction of the "ultra"ness.</p> <p><strong>An example.</strong> Given a free (ultra)filter $F$ on $\omega$, <strong>Grigorieff forcing</strong> is defined as $$ G(F) := \{ f:X \rightarrow 2: \omega \setminus X \in F \},$$ partially ordered by reverse inclusion. A simple density argument shows that <strong>"$G(F)$ destroys $F$"</strong>, i.e., the filter generated by $F$ in a generic extension is <strong>not</strong> an ultrafilter (the generic real being the culprit).</p> <p>Of course, there are many forcing notions that specifically destroy ultrafilters (also, Bartoszynski, Judah and Shelah showed that whenever there's a new real in the extension, some ground model ultrafilter was destroyed).</p> <p>My question is: </p> <p><strong>If $F$ is destroyed, how far away is $F$ from being the ultrafilter it once was?</strong> </p> <p>Maybe a more positive version: <strong>Which properties of $F$ can we destroy while preserving others?</strong> </p> <p>This might seem awfully vague, so before you vote to close let me explain what kind of answers I'm hoping for.</p> <ul> <li><strong>Positive answers.</strong> <ul> <li>If the forcing is $\omega^\omega$-bounding and $F$ is rapid, then $F$ will still be rapid. That's a very clean and simple preservation. </li> <li>In Shelah's model without P-points, all ground model Ramsey ultrafilters stop being P-points but "remain" Q-points.</li> </ul></li> <li><strong>"Minimal" answers.</strong> Is it possible that $F$ together with the generic real generates an ultrafilter, i.e., there are only two ultrafilters extending $F$? For Grigorieff forcing, I'd expect this needs at least a Ramsey ultrafilter. 
But maybe other forcings have this property?</li> <li><strong>Negative answers.</strong> Say $F$ is a P-point; can $F$ still be extended to a P-point? Shelah tells us that forcing with the full product $G(F)^\omega$ denies this. Is it known whether $G(F)$ already denies this? Do other forcing notions allow this?</li> </ul> <p>I know there is a lot of literature on <strong>preserving ultrafilters</strong> (mostly P-points, I think) but I'm more interested in the case where the ultrafilter is actually destroyed. But I'd welcome anything that sheds light on this.</p> <p>PS: community wiki, of course.</p>
Peter Krautzberger
7,281
<p>I wanted to add two comments that I received in 'meatspace'. I hope this isn't too inappropriate.</p> <ul> <li>If <span class="math-container">$F$</span> is a P-filter, and the forcing is proper, then <span class="math-container">$F$</span> generates a P-filter in the extension.</li> <li>If <span class="math-container">$F$</span> is a Q-filter, i.e., every finite-to-one map becomes injective on a set in <span class="math-container">$F$</span>, and the forcing is <span class="math-container">$\omega^\omega$</span>-bounding, then <span class="math-container">$F$</span> generates a Q-filter in the extension.</li> </ul> <p>Proofs of these facts can be found, e.g., in <a href="https://projecteuclid.org/ebooks/perspectives-in-logic/Proper-and-Improper-Forcing/toc/pl/1235419814" rel="nofollow noreferrer">Shelah, Proper and Improper Forcing, Chapter VI, Sections 4 and 5, respectively</a>.</p> <p>One more example of my own.</p> <ul> <li>If <span class="math-container">$F$</span> is an <a href="https://refubium.fu-berlin.de/discover?filtertype_0=mycoreId&amp;filter_relational_operator_0=equals&amp;filter_0=FUDISS_derivate_000000006649" rel="nofollow noreferrer">idempotent filter</a> then <span class="math-container">$F$</span> will remain an idempotent filter in any forcing extension. In particular, if <span class="math-container">$F$</span> is an idempotent ultrafilter, it will still extend to an idempotent ultrafilter.</li> </ul>
8,814
<p>Here is a funny exercise $$\sin(x - y) \sin(x + y) = (\sin x - \sin y)(\sin x + \sin y).$$ (If you prove it don't publish it here please). Do you have similar examples?</p>
Neves
1,747
<p>$\displaystyle\big(a^2+b^2\big)\cdot\big(c^2+d^2\big)=\big(ac \mp bd\big)^2+\big(ad \pm bc\big)^2$</p>
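This is the Brahmagupta–Fibonacci identity. Since both sides are polynomials of degree at most 2 in each of $a,b,c,d$, agreement on a $7^4$ grid of integer points already forces the identity to hold everywhere; a quick exhaustive check (Python sketch):

```python
# exhaustive check of both sign choices on a small integer grid;
# polynomial identities of this degree that agree on enough points hold identically
for a in range(-3, 4):
    for b in range(-3, 4):
        for c in range(-3, 4):
            for d in range(-3, 4):
                lhs = (a*a + b*b) * (c*c + d*d)
                assert lhs == (a*c - b*d)**2 + (a*d + b*c)**2
                assert lhs == (a*c + b*d)**2 + (a*d - b*c)**2
print("both sign choices verified")
```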
8,814
Integral
33,688
<p>Facts about $\pi$ are always fun!</p> <p>\begin{equation} \frac{\pi}{2} = \frac{2}{1}\cdot\frac{2}{3}\cdot\frac{4}{3}\cdot\frac{4}{5}\cdot\frac{6}{5}\cdot\frac{6}{7}\cdot\frac{8}{7}\cdot\ldots\\ \end{equation} \begin{equation} \frac{\pi}{4} = 1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\ldots\\ \end{equation} \begin{equation} \frac{\pi^2}{6} = 1+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{4^2}+\frac{1}{5^2}+\ldots\\ \end{equation} \begin{equation} \frac{\pi^3}{32} = 1-\frac{1}{3^3}+\frac{1}{5^3}-\frac{1}{7^3}+\frac{1}{9^3}-\ldots\\ \end{equation} \begin{equation} \frac{\pi^4}{90} = 1+\frac{1}{2^4}+\frac{1}{3^4}+\frac{1}{4^4}+\frac{1}{5^4}+\ldots\\ \end{equation} \begin{equation} \frac{2}{\pi} = \frac{\sqrt{2}}{2}\cdot\frac{\sqrt{2+\sqrt{2}}}{2}\cdot\frac{\sqrt{2+\sqrt{2+\sqrt{2}}}}{2}\cdot\ldots\\ \end{equation} \begin{equation} \pi = \cfrac{4}{1+\cfrac{1^2}{3+\cfrac{2^2}{5+\cfrac{3^2}{7+\cfrac{4^2}{9+\ldots}}}}}\\ \end{equation}</p>
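These can all be checked numerically; here is a quick Python spot-check of three of them (illustrative only, with the slowly converging series and products truncated):

```python
import math

N = 100_000

# Wallis product: pi/2 = (2/1)(2/3)(4/3)(4/5)...
w = 1.0
for k in range(1, N + 1):
    w *= (2*k)**2 / ((2*k - 1) * (2*k + 1))
assert abs(w - math.pi / 2) < 1e-4   # converges like O(1/N)

# Basel problem: pi^2/6 = sum of 1/k^2
s = sum(1/k**2 for k in range(1, N + 1))
assert abs(s - math.pi**2 / 6) < 1e-4  # tail ~ 1/N

# Viete's product: 2/pi = (sqrt2/2)(sqrt(2+sqrt2)/2)...
p, a = 1.0, 0.0
for _ in range(30):
    a = math.sqrt(2 + a)
    p *= a / 2
assert abs(p - 2 / math.pi) < 1e-12  # converges very fast

print("Wallis, Basel, and Viete checks passed")
```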
8,814
Austin Mohr
11,245
<p>The product of any four consecutive integers is one less than a perfect square.</p> <p>To phrase it more like an identity:</p> <p>For every integer $n$, there exists an integer $k$ such that $$n(n+1)(n+2)(n+3) = k^2 - 1.$$</p>
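One concrete choice is $k = n^2 + 3n + 1$: writing $m = n(n+3)$, the product is $m(m+2) = (m+1)^2 - 1$. A quick exhaustive check in Python:

```python
# n(n+1)(n+2)(n+3) = k^2 - 1 with k = n^2 + 3n + 1, exact in integers
for n in range(-1000, 1001):
    k = n*n + 3*n + 1
    assert n * (n+1) * (n+2) * (n+3) == k*k - 1
print("verified for n in [-1000, 1000]")
```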
8,814
preferred_anon
27,150
<p>$$\frac{1}{998001}=0.000001002003004005006...997999000001...$$</p>
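A quick exact-integer check of this expansion (the denominator is $998001 = 999^2$; Python sketch using scaled integer division to avoid floating-point rounding):

```python
# scale by 10**45 and divide exactly; the digits reproduce the
# pattern 001 002 003 ... of 1/998001 = 1/999**2
digits = str(10**45 // 998001)
assert digits.startswith("1002003004005006007008")
print("0.00000" + digits[:25] + "...")
```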
8,814
Felix Marin
85,343
<p>$$ \int_{-\infty}^{\infty}{\sin\left(x\right) \over x}\,{\rm d}x = \pi\int_{-1}^{1}\delta\left(k\right)\,{\rm d}k $$</p>
8,814
triple_sec
87,778
<p>$\textbf{Claim:}\quad$$$\frac{\sin x}{n}=6$$ for all $n,x$ ($n\neq 0$).</p> <p>$\textit{Proof:}\quad$$$\frac{\sin x}{n}=\frac{\dfrac{1}{n}\cdot\sin x}{\dfrac{1}{n}\cdot n}=\frac{\operatorname{si}x}{1}=\text{six}.\quad\blacksquare$$</p>
8,814
Theemathas Chirananthavat
66,404
<p>For all $n\in\mathbb{N}$ and $n\neq1$ $$\prod_{k=1}^{n-1}2\sin\frac{k \pi}{n} = n$$</p> <p>For some reason, the proof involves complex numbers and polynomials.</p> <p>Link to proof: <a href="https://math.stackexchange.com/questions/8385/prove-that-prod-k-1n-1-sin-frack-pin-fracn2n-1">Prove that $\prod_{k=1}^{n-1}\sin\frac{k \pi}{n} = \frac{n}{2^{n-1}}$</a></p>
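A quick numeric confirmation (Python, not part of the linked proof):

```python
import math

# product of 2*sin(k*pi/n) over k = 1..n-1 equals n
for n in range(2, 50):
    prod = 1.0
    for k in range(1, n):
        prod *= 2 * math.sin(k * math.pi / n)
    assert abs(prod - n) < 1e-8 * n
print("verified for n = 2..49")
```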
1,310,530
<p>From: $2015$ Singapore Mathematical Olympiad Secondary 2 (Grade 8) Question 21 Round 1 on 3rd June.</p> <blockquote> <p>Find the value of $\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}$ (No use of calculators)</p> </blockquote> <p>My attempt: Special cases formula -> $x^2=(x+1)(x-1)+1$</p> <p>Therefore,</p> <p>\begin{align} \sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}&amp;=\sqrt {(99^2-1)(101^2-1)+(100\times 2)^2}\\ \end{align}</p> <p>Now take $y=100$</p> <p>\begin{align} \sqrt {(99^2-1)(101^2-1)+(100\times 2)^2}&amp;=\sqrt {[(y-1)^2-1][(y+1)^2-1]+(2y)^2}\\ &amp;= \sqrt {[y^2-2(y)(1)+1-1][y^2+2(y)(1)+1-1]+4y^2}\\ &amp;= \sqrt {(y^2-2y)(y^2+2y)+4y^2}\\ &amp;= \sqrt {y^4-4y^2+4y^2}\\ &amp;= \sqrt {y^4}\\ &amp;= y^2 \end{align}</p> <p>$100^2=10000$</p> <p>Therefore $\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}=10000$</p> <p>But using the calculator, the answer is 10002. Where did I go wrong, and is there a simpler way to do this other than using long multiplication?</p> <p>EDITED ANSWER (Error found by @Mathlove):</p> <p>\begin{align} \sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}&amp;=\sqrt {(99^2+1)(101^2+1)+(100\times 2)^2}\\ \end{align}</p> <p>Now take $y=100$</p> <p>\begin{align} \sqrt {(99^2+1)(101^2+1)+(100\times 2)^2}&amp;=\sqrt {[(y-1)^2+1][(y+1)^2+1]+(2y)^2}\\ &amp;= \sqrt {(y^2-2y+2)(y^2+2y+2)+4y^2}\\ &amp;= \sqrt {[(y^2+2)-2y][(y^2+2)+2y]+4y^2}\\ &amp;= \sqrt {(y^2+2)^2-4y^2+4y^2}\\ &amp;= \sqrt {(y^2+2)^2}\\ &amp;= y^2+2 \end{align}</p> <p>$100^2+2=10002$</p> <p>Therefore $\sqrt {(98 \times 100+2)(100\times102+2)+(100\times2)^2}=10002$</p>
chenbai
59,487
<p>$((100-2)100+2)((100+2)100+2)=(100^2-2\times100+2)(100^2+2\times 100+2)=(100^2+2)^2-(2\times 100)^2$</p>
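For readers with a computer handy, the value can also be confirmed in exact integer arithmetic (a Python sketch, obviously not in the spirit of the "no calculators" rule):

```python
import math

# the radicand from the problem, computed exactly in integers
inner = (98*100 + 2) * (100*102 + 2) + (100*2)**2
root = math.isqrt(inner)
assert root * root == inner  # the radicand is a perfect square
print(root)  # 10002
```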
39,828
<p>Dear MO-community, I am not sure how mature my view on this is and I might say some things that are controversial. I welcome contradicting views. In any case, I find it important to clarify this in my head and hope that this community can help me doing that.</p> <p>So after this longish introduction, here goes: Many of us routinely use algebraic techniques in our research. Some of us study questions in abstract algebra for their own sake. However, historically, most algebraic concepts were introduced with a specific goal, which more often than not lies outside abstract algebra. Here are a few examples:</p> <ul> <li>Galois developed some basic notions in group theory in order to study polynomial equations. Ultimately, the concept of a normal subgroup and, by extension, the concept of a simple group was kicked off by Galois. It would never have occurred to anyone to define the notion of a simple group and to start classifying those beasts, had it not been for their use in solving polynomial equations.</li> <li>The theory of ideals, UFDs and PIDs was developed by Kummer and Dedekind to solve Diophantine equations. Now, people study all these concepts for their own sake.</li> <li>Cohomology was first introduced by topologists to assign discrete invariants to topological spaces. Later, geometers and number theorists started using the concept with great effect. Now, cohomology is part of what people call "commutative algebra" and it has a life of its own.</li> </ul> <p>The list goes on and on. The axiom underlying my question is that you don't just invent an algebraic structure and study it for its own sake, if it hasn't appeared in front of you in some "real life situation" (whatever this means). Please feel free to dispute the axiom itself.</p> <p>Now, the actual question. Suppose that you have some algebraic concept which has proved useful somewhere. 
You can think of a natural generalisation, which you personally consider interesting.</p> <blockquote> <p>How do you decide whether a generalisation (that you find natural) of an established algebraic concept is worth studying? How often does it happen (e.g., how often has it happened to you or to your colleagues or to people you have heard of) that you undertake a study of an algebraic concept and when you try to publish your results, people wonder "so what on earth is this for?" and don't find your results interesting? How convincing does the heuristic "well, X naturally generalises Y and we all know how useful Y is" sound to you?</p> </blockquote> <p>Arguably, the most important motivation for studying a question in pure mathematics is curiosity. Now, you don't have to explain to your colleagues why you want to classify knots or to solve a Diophantine equation. But might you have to explain to someone, why you would want to study ideals if he doesn't know any of their applications (and if you are not interested in the applications yourself)? How do you motivate that you want to study some strange condition on some obscure groups?</p> <p>Just to clarify this, I have absolutely no difficulties motivating myself and I know what curiosity means subjectively. But I would like to understand, how a consensus on such things is established in the mathematical community, since our understanding of this consensus ultimately reflects our choice of problems to study.</p> <p>I could formulate this question much more widely about motivation in pure mathematics, but I would rather keep it focused on a particular area. 
But one broad question behind my specific one is</p> <blockquote> <p>How much would you subscribe to the statement that EDIT: "studying questions for the only reason that one finds them interesting is something established mathematicians do, while younger ones are better off studying questions that they know for sure the rest of the community also finds interesting"?</p> </blockquote> <p>Sorry about this long post! I hope I have been able to more or less express myself. I am sure that this question is of relevance to lots of people here and I hope that it is phrased appropriately for MO.</p> <hr> <p>Edit: just to clarify, this question addresses the status quo and the prevalent consensus of the mathematical community on the issues concerned (if such a thing exists), rather than what you would like to be true.</p> <hr> <p>Edit 2: I received some excellent answers that helped me clarify the situation, for which I am very grateful! I have chosen to accept Minhyong's answer, as that's the one that comes closest to giving examples of the sort I had in mind and also convincingly addresses the more general question at the end. But I am still very grateful to everyone who took the time to think about the question and I realise that for other people who find the question relevant, another answer might be "the correct one".</p>
Tim Porter
3,502
<p>This is really an add-on to David Corfield's answer.</p> <p>Since David mentions groups and groupoids, I will mention that Ronnie Brown (<a href="https://groupoids.org.uk/hdaweb2.html" rel="nofollow noreferrer">https://groupoids.org.uk/hdaweb2.html</a>) considers some of the possible criteria as follows. Wanting to evaluate some new concepts, he proposed tests for a theory which is successful in a mathematical and scientific, rather than sociological, sense: a successful theory would be expected to yield</p> <ul> <li><p>a range of new algebraic structures, with new applications and new results in traditional areas;</p> </li> <li><p>new viewpoints on classical material;</p> </li> <li><p>better understanding, from a higher dimensional viewpoint, of some phenomena in group theory;</p> </li> <li><p>new computations with these objects, and hence also in the areas in which they apply;</p> </li> <li><p>new algebraic understanding of the structure of certain geometric situations;</p> </li> <li><p>a stimulus to new ideas in related areas;</p> </li> <li><p>a range of unexplored ideas and potential applications;</p> </li> <li><p>the solution of some classical famous problems.</p> </li> </ul> <p>I would suggest that this list (albeit incomplete, as Ronnie suggests) applies to algebraic situations as well as to his higher dimensional group theory context and that, suitably interpreted for other contexts, it can provide some very partial answer to the question.</p> <p>The second question is perhaps best answered by saying that 'established' mathematicians are expected to have some sort of 'gut' feeling about the importance of a question or area. Sometimes, however, they just have blind prejudice. 
One task of a research supervisor 'should' be to train a PG student towards getting that intuition, but not to hand on the prejudices.</p> <p>At a pragmatic level a debutant mathematician needs to get work published and noticed and that is easier in established areas (or near established areas).</p>
39,828
sleepless in beantown
8,676
<p>Alex, don't feel as if the weight of the burden of proof (of concept generalization) has to rest completely on your shoulders. I realize you already agree that curiosity and your own interest can be enough reason to pursue a topic or generalization, but...</p> <p>Isn't it the same as asking a question on mathoverflow about a topic which is interesting to you on its own merits, and finding out about the existence of either a longer history of it based on a parallel set of definitions or other possible applications of it in other branches of mathematics or physics? I had been working on a particular topic, but having approached it from one direction I could only perceive the question from my point of view. </p> <p>Even my attempts to research it found nothing initially because I was using the wrong key-words to look for similar work on my topic. It turned out that there was a long history of work on the topic using different terminology which I had not been aware of.</p> <p>Perhaps giving a short summary on mathoverflow (as a different question) of the generalization which you are working on would provide you some different points of view from other mathematicians. </p> <p>As to the utility of a generalization or of a particular approach, it is not possible to predict or find all of, many of, or even more than a few of, the possible applications of a mathematical technique on your own because you cannot survey the entirety of it yourself. It's often the intersection of multiple disparate interests that creates the application of a technique onto a problem, and every individual (and every individual mathematician) has a different set of disparate interests. (As long as the number of categories of possible interests is greater than the logarithm in base two of the size of the population under consideration; otherwise the <em>pigeonhole principle</em> requires that there must be at least two individuals with exactly the same interests. :) )</p>
2,454,455
<p>I know this is a soft and opinion-based question and I risk that this question gets closed/downvoted, but I still wanted to know what other persons, who are interested in mathematics, think about my question.</p> <p>Whenever people are talking about the most beautiful equation/identity, Euler's identity is cited in this fashion:</p> <p>$$e^{i\pi}+1=0.$$</p> <p>While I would agree that this is a beautiful identity (see my avatar), I personally always wondered why not </p> <p>$$e^{2i\pi}-1 = 0$$</p> <p>is the most beautiful identity. It has $e$, $i$, $\pi$, $0$ and the number $2$ in it. I prefer it because the number $2$ is the first and at the same time the only even prime number. Having the prime numbers, which are in some way the atoms of mathematics, included makes this formula even more pleasant for me. The minus sign seems a little bit "negative" but the good part is that it is displaying the principle of inversion.</p> <blockquote> <p>So my question is, why is this not the form in which it is most often presented?</p> </blockquote>
Vidyanshu Mishra
363,566
<p>Well said by @copperhat: Beauty lies in the eye of the beholder.</p> <p>I like the form $e^{9i\pi}+1=0$ as $9$ is the first odd composite number. People have different tastes, and you cannot force someone to like apples just because you do. </p> <p>One problem I can think of with $e^{2i\pi}-1 = 0$ is that you can square both sides of its more elementary counterpart $e^{i\pi}=-1$ and get your result.</p>
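For what it's worth, both identities are easy to sanity-check numerically. A small Python sketch using the standard `cmath` module; the residuals are nonzero only because `pi` is a floating-point approximation:

```python
import cmath

# e^{i*pi} + 1 and e^{2i*pi} - 1, evaluated in floating point.
euler = cmath.exp(1j * cmath.pi) + 1
variant = cmath.exp(2j * cmath.pi) - 1

print(abs(euler), abs(variant))  # both on the order of 1e-16
```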
239,900
<p>Hatcher states the following theorem on page 114 of his Algebraic Topology:</p> <blockquote> <p>If $X$ is a space and $A$ is a nonempty closed subspace that is a deformation retract of some neighborhood in $X$, then there is an exact sequence $$...\longrightarrow\widetilde{H}_n(A)\overset{i_*}\longrightarrow \widetilde{H}_n(X)\overset{j_*}\longrightarrow\widetilde{H}_n(X/A)\overset{\partial}\longrightarrow \widetilde{H}_{n-1}(A)\overset{i_*}\longrightarrow... $$</p> <p>where $i: A\hookrightarrow X$ is the inclusion and $j:X \rightarrow X/A$ is the quotient map. </p> </blockquote> <p>Perhaps I am having a brain malfunction at the moment, but what are some interesting nonempty spaces which do not satisfy this criterion? By interesting, I mean something that appears "in nature." </p>
Miha Habič
9,440
<p>Remember that to be a deformation retract of a neighbourhood, you have to be locally path connected. If you think about this, you'll see that the comb space gives you an example. To be precise, if $X$ is the space $[0,1]\times\{0\}\cup(\{\frac{1}{n};n\in\mathbb{N}\}\cup\{0\})\times [0,1]$ and $A$ is $\{0\}\times[a,1]$ for some $a&gt;0$, then $A$ is closed, but no neighbourhood deformation retracts onto it. You can find plenty of examples based on this.</p> <p>If you're wondering why the $a&gt;0$ is there, it's because I'm not sure of Hatcher's terminology. What he calls deformation retracts I call strong deformation retracts. I think he calls my deformation retracts weak deformation retracts in an exercise somewhere. Note that $\{0\}\times[0,1]$ is a weak deformation retract of $X$, and I think the proof of the theorem you quote goes through with this weaker assumption.</p>
3,151,143
<p>I want to find a symmetric matrix <span class="math-container">$A$</span>, whose eigenvalues are <span class="math-container">$4$</span> and <span class="math-container">$-1$</span>. One of the eigenvectors corresponding to the eigenvalue <span class="math-container">$4$</span> is <span class="math-container">$(2,3)$</span>. I want to find an eigenvector corresponding to the eigenvalue <span class="math-container">$-1$</span> and then find the matrix <span class="math-container">$A$</span>.</p>
Widawensen
334,463
<p>Consider general form </p> <p><span class="math-container">$$\begin{bmatrix} x &amp; d \\ d &amp; y \end{bmatrix}= \begin{bmatrix} 2/\sqrt{13} &amp; a \\ 3/\sqrt{13} &amp; b \end{bmatrix} \begin{bmatrix} 4 &amp; 0 \\ 0 &amp; -1 \end{bmatrix} \begin{bmatrix} 2/\sqrt{13} &amp; a \\ 3/\sqrt{13} &amp; b \end{bmatrix}^T $$</span></p> <p>Additionally vector <span class="math-container">$v=\begin{bmatrix} a \\ b \end{bmatrix} $</span> can be normalized to unit length (as <span class="math-container">$[2 \ \ 3]^T$</span> was normalized) and it is orthogonal to <span class="math-container">$w=[2 \ \ 3]^T $</span> i.e. <span class="math-container">$w^Tv=0$</span>.</p>
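A quick numeric sketch of this construction in plain Python, taking the orthogonal unit vector to be <span class="math-container">$(a,b)=(3/\sqrt{13},-2/\sqrt{13})$</span>, so that <span class="math-container">$A = 4vv^T - ww^T$</span>:

```python
import math

s = math.sqrt(13)
v = (2 / s, 3 / s)    # unit eigenvector for eigenvalue 4
w = (3 / s, -2 / s)   # unit vector orthogonal to v, eigenvector for eigenvalue -1

# Spectral decomposition: A = 4 v v^T + (-1) w w^T
A = [[4 * v[i] * v[j] - w[i] * w[j] for j in range(2)] for i in range(2)]

def matvec(M, x):
    return tuple(sum(M[i][j] * x[j] for j in range(2)) for i in range(2))

print(A)  # [[7/13, 30/13], [30/13, 32/13]], i.e. approx [[0.538, 2.308], [2.308, 2.462]]
```

One can confirm that `matvec(A, v)` gives `4*v` and `matvec(A, w)` gives `-w`, and that `A` is symmetric.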
3,151,143
<p>I want to find a symmetric matrix <span class="math-container">$A$</span>, whose eigenvalues are <span class="math-container">$4$</span> and <span class="math-container">$-1$</span>. One of the eigenvectors corresponding to the eigenvalue <span class="math-container">$4$</span> is <span class="math-container">$(2,3)$</span>. I want to find an eigenvector corresponding to the eigenvalue <span class="math-container">$-1$</span> and then find the matrix <span class="math-container">$A$</span>.</p>
amd
265,466
<p>Recall that the eigenspaces of a symmetric matrix are mutually orthogonal. Thus, all eigenvectors with eigenvalue <span class="math-container">$-1$</span> are orthogonal to all eigenvectors with eigenvalue <span class="math-container">$4$</span>. Can you come up with a nonzero vector that’s orthogonal to <span class="math-container">$(2,3)$</span>? Once you’ve done that, you have a basis of <span class="math-container">$\mathbb R^2$</span> that consists of eigenvectors of <span class="math-container">$A$</span>, therefore <span class="math-container">$A$</span> is diagonalizable. Can you construct <span class="math-container">$A$</span> from its diagonalization?</p>
2,125,297
<p><a href="https://i.stack.imgur.com/cBCJe.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/cBCJe.png" alt="enter image description here"></a></p> <p>I would like to know how exactly this works. I watched a khan academy video: "Multiplying matrices" but in this case he would've done B*A and had 2 columns, why does this one have 3 columns and 3 rows??</p>
Learnmore
294,365
<p>Note that the distance of $x=(1,1,1,1)$ to $W$ is $d(x,W)=\inf\{d(x,w):w\in W\}$.</p> <p>Now $(1,-1,1,1)=(0,0,1,1)+(1,-1,0,0)\in W$ and $d((1,-1,1,1),(1,1,1,1))=\sqrt 4=2$, which only gives the upper bound $d(x,W)\le 2$.</p> <p><strong>To compute the infimum</strong>:</p> <p>Any element of $W$ is $a(0,0,1,1)+b(1,-1,0,0)=(b,-b,a,a)$.</p> <p>Now $d((b,-b,a,a),(1,1,1,1))=\sqrt {(1-b)^2+(1+b)^2+2(1-a)^2}=\sqrt{F(a,b)}$.</p> <p>For extrema of $F$: $F_a=0\implies a=1$; $F_b=0\implies b=0$; $F_{aa},F_{bb}&gt;0$.</p> <p>Hence $F$ has its minimum at $(b,-b,a,a)=(0,0,1,1)$.</p> <p>Also $F(1,0)=2\implies d(x,W)=\sqrt 2$.</p>
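As a sanity check, the minimizer can be computed directly as the orthogonal projection of $x$ onto $W$ (the two spanning vectors happen to be orthogonal to each other). A small Python sketch:

```python
import math

x = (1, 1, 1, 1)
u = (0, 0, 1, 1)   # first spanning vector of W
v = (1, -1, 0, 0)  # second spanning vector of W, orthogonal to u

def dot(p, q):
    return sum(pi * qi for pi, qi in zip(p, q))

# Orthogonal projection of x onto W = span{u, v}.
cu, cv = dot(x, u) / dot(u, u), dot(x, v) / dot(v, v)
proj = tuple(cu * ui + cv * vi for ui, vi in zip(u, v))
diff = [xi - pi for xi, pi in zip(x, proj)]

print(proj, math.sqrt(dot(diff, diff)))  # (0.0, 0.0, 1.0, 1.0) and sqrt(2) = 1.414...
```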
840,211
<p>I have line integral of a vector function: $\vec{F}=-e^{-x}\sin y\,\,\vec{i}+e^{-x}\cos y\,\,\vec{j}$ The path is a square on the $xy$ plane with vertices at $(0,0),(1,0),(1,1),(0,1)$</p> <p>Of course it is a closed line integral, and I know the result should be zero. </p> <p>I am baffled how can you calculate $\sin y$ or $\cos y$ where $y$ is an actual coordinate point?!</p>
David
119,775
<p>For the first one: if $t$ is odd, then the polynomial $x^t+1$ can be factorised into a product of two polynomials with integer coefficients: $$x^t+1=(x+1)(x^{t-1}-x^{t-2}+\cdots-x+1)\ .$$ Now suppose that $2^m+1$ is prime, and let $t$ be an odd factor of $m$. Writing $m=st$, we have from above that $$2^m+1=(2^s)^t+1=(2^s+1)\ \hbox{times an integer}\ .$$ Since $2^m+1$ is prime, $2^s+1$ can only be equal to $1$ or $2^m+1$. The first is clearly impossible, so $2^s+1=2^m+1$, so $s=m$, so $t=1$.</p> <p>What all this proves is that if $2^m+1$ is prime, then $m$ has no odd factor except $1$. The only values of $m$ for which this is true are powers of $2$, say $m=2^k$, and so $$2^m+1=2^{2^k}+1=F_k\ .$$</p> <p>The second problem is quite similar, starting with the factorisation $$x^t-1=(x-1)(x^{t-1}+x^{t-2}+\cdots+x+1)$$ for any integer $t$. Give it a try.</p>
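This is easy to check by brute force for small $m$; here is a Python sketch (trial-division primality is fine at this size). Note that the converse fails in general: $m$ being a power of $2$ does not guarantee that $2^m+1$ is prime, e.g. $2^{32}+1=641\cdot 6700417$.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# For which m <= 20 is 2^m + 1 prime?  By the argument above,
# m can have no odd factor > 1, i.e. m must be a power of 2.
prime_ms = [m for m in range(1, 21) if is_prime(2 ** m + 1)]
print(prime_ms)  # [1, 2, 4, 8, 16]
```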
698,474
<p>I am trying to find phi(18). Using an online calculator, it says it is 6, but I'm getting 4. <br/> The method I am using is to break 18 down into primes and then multiply the phi of the primes:</p> <p>$$=\varphi (18)$$ $$=\varphi (3) \cdot \varphi(3) \cdot \varphi(2)$$ $$= 2 \cdot 2 \cdot 1$$ $$= 4$$</p>
kmbrgandhi
132,855
<p>Your multiplicative property is not necessarily true when the two numbers you're multiplying share a common factor. Here's the general formula: given $N = p_1^{q_1}p_2^{q_2}\cdots p_n^{q_n}$ (where $p_1, \ldots, p_n$ are distinct primes), we can find that $$ \phi(N) = N\left(1-\frac{1}{p_1}\right)\left(1-\frac{1}{p_2}\right)\cdots\left(1-\frac{1}{p_n}\right) $$</p> <p>In the case of $18 = 2 \cdot 3^2$, this gives: $$ \phi(18) = 18\left(\frac{1}{2}\right)\left(\frac{2}{3}\right) = 6 $$ I leave the proof of this as an exercise (hint: consider (and enumerate) those numbers divisible by $p_1, p_2, \ldots, p_n$). </p>
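The caveat in the first sentence is the key point: $\varphi(mn)=\varphi(m)\varphi(n)$ only holds when $\gcd(m,n)=1$, so the valid splitting here is $\varphi(18)=\varphi(2)\varphi(9)=1\cdot 6$, not $\varphi(2)\varphi(3)\varphi(3)$. A short Python sketch of the product formula, using only integer arithmetic:

```python
def phi(n):
    # Euler's product formula: phi(n) = n * prod over distinct primes p|n of (1 - 1/p),
    # applied as repeated result = result // p * (p - 1).
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result = result // p * (p - 1)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:              # remaining prime factor
        result = result // m * (m - 1)
    return result

print(phi(18), phi(2) * phi(9), phi(3) * phi(3) * phi(2))  # 6 6 4
```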
147,338
<p>I'm helping a student through a course in mathematics. In the course text, we came across the following problem concerning the Lagrange multiplier technique. Given a differentiable function with continuous partial derivatives $f:\mathbb{R}^2\to\mathbb{R}:(x,y)\mapsto f(x,y)$ that has to be extremized and a constraint given by an implicit relation $g(x,y)=0$ with $g$ likewise having continuous partial derivatives. The Lagrange multiplier technique looks for points $(x^*,y^*,\lambda^*)$ such that</p> <p>$$\nabla f(x^*,y^*)=\lambda^* \nabla g(x^*,y^*)$$</p> <p>$\lambda$ is the so-called Lagrange multiplier.</p> <p>The technique rests upon the fact that the gradient of the constraint is different from the zero vector: $\nabla g(x^*,y^*) \neq 0$. Then, <strong>the text says that if that condition is not fulfilled, it is possible to not have solutions of the system of equations resulting from the technique while there can actually be a constrained extremum.</strong> </p> <p>I have tried to construct such an example, but until now unsuccessfully as every attempt I make with a $\nabla g(x^*,y^*) = 0$ turns out to have a solution. Does anyone know of a nice counterexample? </p> <p>And does one know of a counterexample when there are $n$ variables and $m$ constraints with $n&gt;m&gt;1$ ? For the latter, the condition becomes that the gradients of the constraints have to be linearly independent. So the question is: can we find an example where there is no solution for the Lagrange multiplier method, but there is an extremum and the gradients of the constraints are linearly dependent? </p>
Christian Blatter
1,303
<p>Here is an example, albeit with two constraints: We want to find the minimum of the function $$f(x,y,z):=y\ ,$$ given the constraints $$F(x,y,z):=x^6-z=0\ ,\qquad G(x,y,z):=y^3-z=0\ .$$ The constraints define the curve $$\gamma:\quad x\mapsto (x,x^2, x^6)\qquad(-\infty&lt;x&lt;\infty)\ ,$$ and it is easy to see that the minimum of $f\restriction \gamma$ is taken at the origin. On the other hand $$\nabla f(0,0,0)=(0,1,0)\ ;\quad\nabla F(0,0,0)=\nabla G(0,0,0)=(0,0,-1)\ ;$$ whence $\nabla f(0,0,0)$ is <em>not</em> a linear combination of $\nabla F(0,0,0)$ and $\nabla G(0,0,0)$. This means that using Lagrange's method the origin would not have shown up as a conditionally stationary point.</p>
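A small Python sketch of this example, checking the constraint curve and the gradients at the origin; everything here is computed from the explicit formulas above:

```python
# f(x,y,z) = y, F = x^6 - z, G = y^3 - z; gradients computed by hand.
def grad_f(x, y, z): return (0.0, 1.0, 0.0)
def grad_F(x, y, z): return (6 * x ** 5, 0.0, -1.0)
def grad_G(x, y, z): return (0.0, 3 * y ** 2, -1.0)

# On gamma(x) = (x, x^2, x^6) both constraints vanish and f = x^2,
# which is minimized at the origin.
for x in (-1.0, -0.5, 0.0, 0.5, 1.0):
    p = (x, x ** 2, x ** 6)
    assert abs(p[0] ** 6 - p[2]) < 1e-12 and abs(p[1] ** 3 - p[2]) < 1e-12

# At the origin grad F = grad G = (0, 0, -1), so every combination
# a*grad F + b*grad G has zero second component, while grad f = (0, 1, 0):
# Lagrange's system grad f = a*grad F + b*grad G has no solution there.
print(grad_f(0, 0, 0), grad_F(0, 0, 0), grad_G(0, 0, 0))
```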
1,888,732
<p>When we covered limits in calculus class, one of my classmates was confused about this limit:</p> <blockquote> <p><span class="math-container">$$\lim_{x\to \infty} \sqrt{x^2+x+1}-\sqrt{x^2+3x+1} = -1$$</span></p> </blockquote> <p>He thought that since <span class="math-container">$x^2 &gt; x$</span> and <span class="math-container">$x^2 &gt; 3x$</span> when <span class="math-container">$x \to \infty$</span>, the first square root must behave like <span class="math-container">$x$</span>, and the same for the second. Hence, the limit must be <span class="math-container">$0$</span>.</p> <p>It is obviously problematic.</p> <p>What I thought was to complete the square under the limit, though I know the right solution is to rationalize the numerator.</p> <p>After completing the square, <span class="math-container">$$\lim_{x\to \infty} \sqrt{\left(x+\frac{1}{2}\right)^2+\frac{3}{4}}-\sqrt{\left(x+\frac{3}{2}\right)^2-\frac{5}{4}}$$</span> I assert that since there is a perfect square plus a bounded constant under each square root, as <span class="math-container">$x \to \infty$</span>, the constant does not matter. 
So</p> <p><span class="math-container">$$\lim_{x\to \infty} \sqrt{\left(x+\frac{1}{2}\right)^2+\frac{3}{4}}-\sqrt{\left(x+\frac{3}{2}\right)^2-\frac{5}{4}}= \lim_{x\to \infty} \sqrt{\left(x+\frac{1}{2}\right)^2}-\sqrt{\left(x+\frac{3}{2}\right)^2}\\ = \frac{1}{2}-\frac{3}{2} = -1 $$</span></p> <p>But then I applied the same strategy to the limit in <a href="https://math.stackexchange.com/questions/1888153/better-way-to-find-lim-x-to-infty-frac-sqrtx2-2x3-sqrt4x25x-6x">this post</a>.</p> <p>Here is my argument:</p> <blockquote> <p><span class="math-container">$\sqrt{ax^2+bx+c} =O(\sqrt a(x+\frac{b}{2a}))$</span> when <span class="math-container">$x \to \infty$</span></p> <p><span class="math-container">$$\lim_{x\to\infty}\frac{\sqrt{x^2-2x+3}+\sqrt{4x^2+5x-6}}{x+\sqrt{x^2-1}} =\frac{x-1+2(x+5/4)}{x+x} = \frac{3}{2}$$</span></p> </blockquote> <p>This gets the answer very concisely, but it got downvoted.</p> <p>I do not know what is wrong with my strategy. And in the general case, for instance in this problem with cube roots, my strategy seems to work really efficiently:</p> <blockquote> <p><span class="math-container">$$\lim_{x\to \infty} \sqrt[3]{x^3+6x^2+9x+1}-\sqrt[3]{x^3+5x^2+x+1}$$</span></p> </blockquote> <p>My solution is:</p> <p><span class="math-container">$$\lim_{x\to \infty} \sqrt[3]{x^3+bx^2+cx+d} = \lim_{x\to \infty} \sqrt[3]{\left(x+\frac{b}{3}\right)^3}$$</span></p> <p>So the limit becomes: <span class="math-container">$$\lim_{x\to \infty} \sqrt[3]{(x+2)^3+O(x)}-\sqrt[3]{\left(x+\frac{5}{3}\right)^3 + O(x)} =\lim_{x\to \infty} (x+2) -\left(x+\frac{5}{3}\right) = \frac{1}{3} $$</span></p> <p><a href="http://www.wolframalpha.com/input/?i=lim%20%5B(x%5E3%2B6x%5E2%2B9x%2B1)%5E(1%2F3)-(x%5E3%2B5x%5E2%2Bx%2B1)%5E(1%2F3),%20x-%3Einfinity%5D" rel="nofollow noreferrer">This result is verified by WolframAlpha</a>.</p> <p>To put it in a nutshell: what is wrong with my solution to these three problems? Is there any counterexample to this substitution? Any help will be appreciated.</p>
imranfat
64,546
<p>@ZackNi As David indicated, you are assuming "infinity minus infinity = 0", which works out well here because both radicals are square roots and the leading coefficients of the polynomials are both 1. You can also get "infinity minus infinity" from a square root and a cube root, but your approach will then yield a wrong answer for the limit. Generally, $\infty-\infty$ and $\infty\cdot 0$ situations should be converted into $\frac{0}{0}$ or $\frac{\infty}{\infty}$ forms, to which other techniques can be applied to get the desired limit. In your problem the conjugate approach is also a good way to go. (My comment post got messed up and I can't delete it...)</p>
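A numeric sanity check of the two limits discussed in this thread; this is only a sketch, evaluating at large but finite $x$, so the values merely approximate the limits:

```python
import math

x = 1e6
sq = math.sqrt(x ** 2 + x + 1) - math.sqrt(x ** 2 + 3 * x + 1)
print(sq)   # close to -1, not 0

x = 1e5
cb = (x ** 3 + 6 * x ** 2 + 9 * x + 1) ** (1 / 3) \
     - (x ** 3 + 5 * x ** 2 + x + 1) ** (1 / 3)
print(cb)   # close to 1/3
```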
61,659
<p>How to find: $$\lim_{x \to \frac{\pi}{2}} \frac{\tan{2x}}{x - \frac{\pi}{2}}$$ I know that $\tan(2\theta)=\frac{2\tan\theta}{1-\tan^{2}\theta}$ but don't know how to apply it here.</p>
Community
-1
<p>Put $x = \frac{\pi}{2} + h$. As $x \to \frac{\pi}{2}$, you have $h \to 0$.</p> <p>Then you have \begin{align*} \lim_{x \to \frac{\pi}{2}} \frac{\tan{2x}}{x-\frac{\pi}{2}} &amp;= \lim_{h \to 0} \: \frac{\tan{2\bigl(\frac{\pi}{2}+h\bigr)}}{h} \\ &amp;=\lim_{h \to 0} \: \frac{\tan(\pi + 2h)}{h} \\ &amp;= \lim_{h \to 0} \: \frac{\tan(2h)}{h} \end{align*}</p> <p>Can you do it from here?</p>
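As a numeric check (a sketch): substituting values of $x$ approaching $\pi/2$ shows the ratio settling near $2$, consistent with $\tan(2h)/h \to 2$ as $h \to 0$:

```python
import math

# Approach x -> pi/2 from the right and watch tan(2x)/(x - pi/2) stabilize.
for h in (1e-2, 1e-4, 1e-6):
    x = math.pi / 2 + h
    print(h, math.tan(2 * x) / (x - math.pi / 2))
# the printed ratios approach 2
```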
1,843,724
<p>I know that the Maclaurin expansion of $e^x$ is $$1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+...$$ But I'm not sure how to find the Maclaurin series here. I tried this:</p> <p>$$ f'_{(0)}=-2xe^{-x^2}=0 $$ And that follows to every derivative that follows, so how can I get a power series out of it?</p>
MPW
113,214
<p>You're mistaken in saying "And that follows to every derivative that follows".</p> <p>The product rule will come into play as you keep differentiating, giving terms that don't vanish. Here are a few steps:</p> <p>$f(x)=e^{-x^2}$, so $f(0)=1$.</p> <p>$f'(x) = -2xe^{-x^2}$, so $f'(0) = 0$.</p> <p>$f''(x) = -2x[-2xe^{-x^2}] -2[e^{-x^2}] = (4x^2-2)e^{-x^2}$, so $f''(0)=-2$.</p> <p>And so on. I think you will find that the odd-numbered derivatives ($f'(0), f^{(3)}(0),f^{(5)}(0),\ldots$) vanish, but the even-numbered ones ($f(0), f''(0),f^{(4)}(0),\ldots$) do not.</p>
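Equivalently, and consistent with the derivative pattern above, one can substitute $-x^2$ into the known series for $e^u$, giving $e^{-x^2}=\sum_{n\ge 0}(-1)^n x^{2n}/n!$: only even powers appear, so all odd derivatives at $0$ vanish, and e.g. the $x^2$ coefficient $-1$ gives $f''(0)=2!\cdot(-1)=-2$. A quick Python check that the partial sums of this series really reproduce $e^{-x^2}$:

```python
import math

def maclaurin_exp_neg_x2(x, terms=20):
    # Partial sum of sum_n (-x^2)^n / n!
    return sum((-x * x) ** n / math.factorial(n) for n in range(terms))

for x in (0.0, 0.5, 1.0, 1.5):
    print(x, maclaurin_exp_neg_x2(x), math.exp(-x * x))  # the two columns agree
```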
1,241,639
<p>How do I prove that $\lim_{(x,y)\to (0,0)} \frac{x^2 y }{x^2 + y^2} = 0$?</p> <p>I can prove this by substituting $x=r\cos\theta$ and $y=r\sin\theta$, but I remember that it could also be proven by the squeeze theorem.</p> <p>How do I prove this using the squeeze theorem?</p>
Timbuc
118,527
<p>$$0\le\left|\frac{x^2y}{x^2+y^2}\right|\le \left|\frac{x^2y}{x^2}\right|=|y|\xrightarrow[(x,y)\to (0,0)]{}0$$</p>
1,241,639
<p>How do I prove that $\lim_{(x,y)\to (0,0)} \frac{x^2 y }{x^2 + y^2} = 0$?</p> <p>I can prove this by substituting $x=r\cos\theta$ and $y=r\sin\theta$, but I remember that it could also be proven by the squeeze theorem.</p> <p>How do I prove this using the squeeze theorem?</p>
Beni Bogosel
7,327
<p>Note that $\frac{x^2}{x^2+y^2} \leq 1$ so your function is bounded by $\pm y$.</p>
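The bound can be spot-checked numerically; a sketch sampling random points plus a point near the origin:

```python
import random

random.seed(0)

def f(x, y):
    return x * x * y / (x * x + y * y)

# |f(x, y)| <= |y| away from the origin, per the bound above.
for _ in range(1000):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if (x, y) != (0.0, 0.0):
        assert abs(f(x, y)) <= abs(y) + 1e-12

print(f(1e-8, 1e-8))  # about 5e-09: squeezed toward 0 near the origin
```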
1,090,974
<p>We have two functions of time $f(t)$ and $g(t)$, for which convolution and correlation are defined as follows:</p> <p>Convolution: $(f(t)\ast g(t))(\tau) = \int_{-\infty}^\infty{f(t)g(\tau-t)dt}$</p> <p>Correlation: $(f(t)\star g(t))(\tau) = \int_{-\infty}^\infty{f^\ast(t)g(\tau+t)dt}$</p> <p>In the English Wikipedia and in other sources I found that the following relationship should hold:</p> <p>$(f(t)\star g(t))(\tau) = (f^\ast(-t)\ast g(t))(\tau)$</p> <p>Is this correct? If so, how can I prove this? Usually, I would try substitution, but how to change the $g(\tau+t)$ to $g(\tau-t)$?</p>
Swapnil Agarwal
689,634
<p>We have -<br> Convolution - <span class="math-container">$ (f * g)(t) = \int_{-\infty}^{\infty}f(\tau)\, g(t-\tau)\, d\tau$</span><br> Correlation - <span class="math-container">$ R_{fg}(\tau) = \int_{-\infty}^{\infty}f(t)\, g(t-\tau)\, dt$</span><br></p> <p>Now rename the integration variable <span class="math-container">$\tau$</span> to <span class="math-container">$u$</span> and the outer variable <span class="math-container">$t$</span> to <span class="math-container">$k$</span>:<br> <span class="math-container">$$(f * g)(k) = \int_{-\infty}^{\infty}f(u)\, g(k-u)\, du = \int_{-\infty}^{\infty}f(t)\, g(-(t-k))\, dt,$$</span> which is the correlation between <span class="math-container">$f(t)$</span> and <span class="math-container">$g(-t)$</span> evaluated at <span class="math-container">$k$</span>, since <span class="math-container">$$(f(t) \star g(-t))(\tau) = \int_{-\infty}^{\infty}f(t)\, g(-(t-\tau))\, dt.$$</span> <br> So, <span class="math-container">$(f(t)⋆g(-t))(\tau)=(f(t)∗g(t))(\tau)$</span>. <br> Similarly, we can prove that <span class="math-container">$(f(t)⋆g(t))(\tau)=(f(t)∗g(-t))(\tau)$</span>.</p>
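The same relationship holds for discrete signals, which makes it easy to check: correlating $f$ with $g$ equals convolving $f$ with the conjugated time-reversal of $g$. A self-contained Python sketch, with index conventions chosen so the two output lists line up exactly:

```python
def convolve(a, b):
    # (a * b)[k] = sum_m a[m] b[k-m], for k = 0 .. len(a)+len(b)-2
    return [sum(a[m] * b[k - m]
                for m in range(len(a)) if 0 <= k - m < len(b))
            for k in range(len(a) + len(b) - 1)]

def correlate(a, b):
    # c[k] = sum_n a[n+k] conj(b[n]), for k = -(len(b)-1) .. len(a)-1
    return [sum(a[n + k] * b[n].conjugate()
                for n in range(len(b)) if 0 <= n + k < len(a))
            for k in range(-(len(b) - 1), len(a))]

f = [1 + 2j, 0.5, -1j]
g = [2, 1 - 1j, 3j]
g_rev_conj = [z.conjugate() for z in reversed(g)]

print(correlate(f, g) == convolve(f, g_rev_conj))  # True
```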
2,148,354
<p>I've been reading <em>'How to Prove It'</em> by Velleman, in the book it states that:</p> <blockquote> <p>$\{x\,\in\,U\,|\,P(x)\}$; this is read 'the set of all $x$ in $U$ such that $P(x)$'.</p> </blockquote> <p>It then goes on to give $\{x\,\in\,\mathbb R\,|\,x^2&lt;9\}$ as an example of such a set which would have the real numbers between $-3$ and $3$ as elements.</p> <p>However, in the next chapter the book gives an example of the set of all perfect squares $S$ by specifying the universe of discourse on the right:</p> <blockquote> <p>$S=\{n^2|\,n\,\in\,\mathbb N\}$</p> </blockquote> <p>So is it acceptable to switch what is on the left side of the vertical bar with what is on the right?</p> <p>The Wikipedia entry on <a href="https://en.wikipedia.org/wiki/Set-builder_notation" rel="noreferrer">set builder notation</a>, under <em>'Specifying the domain'</em> uses the former notation, yet under <em>'More complex expressions on the left side of the notation'</em> they use the latter. From the book and Wikipedia I'd assume it would be valid to use either. That being said, I've seen people claim that by switching them one would be creating a different set.</p>
Alp Uzman
169,085
<p><strong>TL;DR:</strong> Yes.</p> <hr> <p>Surely for this sweeping answer to make sense we should justify that there are (or could be) universal conventions on mathematical notation (since you don't specify by whom you mean the notation to be acceptable, I interpret your question as something along the lines of: </p> <p>"Is it acceptable by a sufficiently large group of people sufficiently competent in mathematics to switch what is on the left side of the vertical bar with what is on the right?").</p> <p>I don't think trying to produce such a justification is fruitful, even if it could be produced; so to bypass it let us suppose we are interested in practicing mathematics that are not that intermingled with foundations (say we only occasionally use the Axiom of Choice etc.). In this case as long as your notation for a specific concept is coherent with the rest of your outlook (or argument, proof etc.) and what you mean by the notation you use makes it communicable (with a sufficiently small error margin, which is quite possibly never zero) any notation you use is very acceptable. 
Here are a few examples:</p> <ul> <li>When I am writing on a board sometimes due to board-scarcity I even write a set with a horizontal separator, LHS (the place for the general form) on the top and RHS (the place for the characterizing property) on the bottom.</li> <li>Sometimes $\in$ and the characterizing property are joined together and the separator is not even used, first instance in the undergraduate curriculum is probably the correspondence of the normal subgroups of a group that contains a given normal subgroup and the normal subgroups of the factor group:</li> </ul> <p>$$\{N\leq K\unlhd G\} \longleftrightarrow \{L\unlhd G/N\},$$</p> <p>which could also be written as</p> <p>$$\{K\mid N\leq K\unlhd G\} \longleftrightarrow \{L\mid L\unlhd G/N\},$$</p> <p>but generally is not ($G$ and $N$ are fixed, $K$ and $L$ are dummy, so the interlocutor expects to be understood contextually).</p> <ul> <li>Sometimes it is better not to use the set-builder notation partially, e.g. when defining the topology of a set $X$, writing "let $\mathcal{T}:=\{U\in\mathcal{P}(X)\mid U\mbox{ is open}\}$" is usually not better than writing "let $\mathcal{T}$ be the set (collection) of all open sets of $X$".</li> </ul> <p>The main point of the so-called set builder notation is that the separator separates what form a generic element of the set has and what their characterizing property is.</p> <ul> <li>For your first example, observe the following:</li> </ul> <p>\begin{align} A:=\{x\in\Bbb{R}\mid x^2&lt;9\} &amp;=\{x\in\Bbb{R}\mid \vert x\vert&lt;3\}\\ &amp;=\{x\in\Bbb{R}\mid x\in]-3,3[\}\\ &amp;=\{x\mid x\in\Bbb{R}\mbox{ and }x\in]-3,3[\}\\ &amp;=\{x\mid x\in\Bbb{R}\cap]-3,3[\}\\ &amp;=\{x\mid x\in]-3,3[\}\\ &amp;=]-3,3[\\ &amp;=\{x\in]-3,3[\mid x\in\Bbb{R}\}\\ &amp;=\{x\in]-3,3[\mid x\in\Bbb{C}\}\\ &amp;\neq\{x\in\Bbb{C}\mid x^2&lt;9\}, \end{align}</p> <p>where "observing" stands not for "checking that everything is in compliance with the universal terms and conditions" but for 
"understanding what I had in mind when I was manipulating the notation" (or "checking that everything is in compliance with the terms and conditions applicable in this instance of mathematical reasoning").</p> <ul> <li>If you prefer to be more consistent with the set-builder recipe, $S=\{m\in\Bbb{N}\vert \exists n\in\Bbb{N}: n^2=m\}$.</li> </ul> <hr> <p>The examples and the comments above make way for a somewhat heuristic explanation of the set-builder notation: In general it is preferred to specify the ambient set together with the general form of the elements of the set. This is also in accordance with the ordering of verbal explanations generally used, e.g. "let $A$ be the set of all real numbers whose square is less than $9$" (observe that this is slightly more natural than saying "let $A$ be the set of all numbers whose square is less than $9$, which also happen to be real").</p> <hr> <p>Here is a "soft" exercise for you to think about:</p> <p>Not only is the use of the set-builder notation not set in stone in practice, the notation itself may be somewhat vague inherently:</p> <ol> <li>If $S$ is a set, let $P_S(x)$ be the open sentence "$x$ is an element of $S$.", where by an "open sentence" I mean a statement whose truth value $\in\{\mbox{true}, \mbox{false}\}$ is dependent on $x$:</li> </ol> <p>$$P_S(x):="x\in S".$$</p> <ol start="2"> <li>If $P(x)$ is an open sentence, let $S_P$ be the set of all $x$'s which render $P(x)$ true:</li> </ol> <p>$$S_P:=\{P\mbox{ is true}\}:=\{x\mid P(x)\mbox{ is true}\}=\{x\mid P(x)\}.$$</p> <p>Then we have an inherent interchangeability associated with LHS and RHS. 
This inherent interchangeability is closely related to "extensionality":</p> <blockquote> <p><strong>Extensionality:</strong> A set is completely determined just by specifying its elements.</p> </blockquote> <p>This is a foundational issue (see <a href="https://en.wikipedia.org/wiki/Russell%27s_paradox" rel="nofollow noreferrer">Russell's paradox</a>), but in different forms this interchange turns out to be very important, e.g. in measure theory we have the <strong>characteristic function</strong> of a given (measurable) set $A$:</p> <p>$$\chi_A(x):=\begin{cases} 1,\mbox{if $x\in A$}\\ 0,\mbox{if $x\not\in A$} \end{cases},$$</p> <p>which allows us to think of (measurable) sets as (measurable) functions and define a very robust integration.</p> <hr> <p>An excellent book that would supplement your current study would be Mac Lane's <em>Mathematics Form and Function</em>, which contains (in my opinion) just enough information on foundations (and philosophy of mathematics) for doing non-foundational mathematics (and it also gives a very good sense of the "big picture"). The framework of my answer can be traced back to this book as well: "Form is due to functionality" (if you were to ask for an induced motto).</p>
2,148,354
<p>I've been reading <em>'How to Prove It'</em> by Velleman, in the book it states that:</p> <blockquote> <p>$\{x\,\in\,U\,|\,P(x)\}$; this is read 'the set of all $x$ in $U$ such that $P(x)$'.</p> </blockquote> <p>It then goes on to give $\{x\,\in\,\mathbb R\,|\,x^2&lt;9\}$ as an example of such a set which would have the real numbers between $-3$ and $3$ as elements.</p> <p>However, in the next chapter the book gives an example of the set of all perfect squares $S$ by specifying the universe of discourse on the right:</p> <blockquote> <p>$S=\{n^2|\,n\,\in\,\mathbb N\}$</p> </blockquote> <p>So is it acceptable to switch what is on the left side of the vertical bar with what is on the right?</p> <p>The Wikipedia entry on <a href="https://en.wikipedia.org/wiki/Set-builder_notation" rel="noreferrer">set builder notation</a>, under <em>'Specifying the domain'</em> uses the former notation, yet under <em>'More complex expressions on the left side of the notation'</em> they use the latter. From the book and Wikipedia I'd assume it would be valid to use either. That being said, I've seen people claim that by switching them one would be creating a different set.</p>
Asaf Karagila
622
<p><strong>TL;DR:</strong> NO!</p> <hr> <p>The general structure of a set-builder notation is $\{\text{term}(x)\mid\text{condition on }x\}$ or $\{\text{term}(x)\in X\mid\text{condition on }x\}$ when we want to bound the terms we allow.</p> <p>The left side is some term defined from $x$. This can be $n^2$, or it can be $f(x)$ or whatever. But it is a term. It is a mathematical object in our mathematical universe, which might be bounded by a certain set. On the right side, we look at the condition on $x$. This is a formula which should have one free variable: $x$. And then we know that we take all the terms build from $x$'s satisfying the condition.</p> <p>What you are allowed to do, however, is retranslate the term and the conditions. You can always arrive at something of the form $\{x\mid\text{condition on }x\}$, because you can always put the bound $x\in X$ in the condition, and you can always say something like "There exist $y$ such that $\text{term}(y)=x$".</p> <p>So writing $\{n^2\mid n\in\Bbb N\}$ can be written as $\{n\mid\exists k\in\Bbb N:k^2=n\}$.</p> <p>But translating one thing to the other is not the same as switching sides. The set builder notation $\{n\in\Bbb N\mid n^2\}$ is meaningless, since $n^2$ is a term, not a formula. It does not evaluate to a true/false condition which tells us in which case we put $n$ into our set and in which case we do not.</p> <p>So really the only thing we can move around is the bounding set. Namely, $\{x^3\in\Bbb N\mid x+1\text{ is even}\}$ and $\{x^3\mid x\in\Bbb N\text{ is odd}\}$ form the same set, although they look differently. Why do we do that?</p> <p>The answer is readability. You want to write your set in the way that will be the most readable to your audience, and will maximize usability later, when you refer to properties of the elements in the set you defined.</p> <p>So when you want to take the set of squares in the natural numbers, it is easier to write $n^2$, and not "There is some $k$ such that $k^2=n$". 
So making the term more complex makes the condition way simpler. But now we are left without any condition, as we want to take <em>all</em> the squares. So we move the bounding set into the condition, then we get that $n\in\Bbb N$ is our condition.</p> <p>One extremely important caveat here is to note that sometimes the term moves you from one set to another, e.g. $f\colon X\to Y$ and $A\subseteq X$, then $\{f(x)\mid x\in A\}$ is in fact a subset of $Y$, so it would be written as $\{y\in Y\mid\exists x\in A: f(x)=y\}$. So when you switch the sides of the bounding conditions, it is important to make sure that the sets you care about are changed if necessary.</p>
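Incidentally, Python's set comprehensions mirror this distinction directly: the "term" goes on the left and the condition after `if`, and a bare term cannot serve as a condition. A small sketch showing the two styles describing the same set of perfect squares (bounded ranges, since the sets must be finite here):

```python
import math

# Term on the left: {n^2 | n in 0..99}
squares_by_term = {n * n for n in range(100)}

# Condition on the right: {n in 0..9999 | exists k with k^2 = n}
squares_by_condition = {n for n in range(100 * 100)
                        if math.isqrt(n) ** 2 == n}

print(squares_by_term == squares_by_condition)  # True
```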
3,244,361
<p>I need some help on two exercises from Kiselev's geometry, about straight lines.</p> <blockquote> <p>Ex 7: Use a straightedge to draw a line passing through two points given on a sheet of paper. Figure out how to check that the line is really straight. Hint: Flip the straightedge upside down.</p> </blockquote> <p>I would draw the first line, then flip the straightedge and draw the second line over the first. The two lines should coincide nicely iff the straightedge is straight. Because, this shows that there is no "unevenness" or "bumps" on the edge of the straightedge. There would be gaps between the two lines if there are "unevenness/bumps" on the edge of the straightedge.</p> <blockquote> <p>Ex 8: Fold a sheet of paper and, using ex 7, check that the edge is straight. Can you explain why the edge of a folded paper is straight?</p> </blockquote> <p>Ex 8 is marked as more difficult by the author. I'm completely clueless about this exercise.</p> <p>Please provide insights and help me with these two exercises. I'd appreciate if they are more of an "experimental approach" than theoretical because exercises 7 and 8 are arranged in between the introduction and first chapter of the book.</p> <p>Thank you. :)</p>
user21820
21,820
<p>While these questions are handwavy and cannot be precisely answered, I think that the author is looking for something like the following for exercise 8:</p> <blockquote> <p>When you fold the paper, you can do it in two ways, corresponding to which side you fold across the line. Notice that the results of these two ways are reflections of one another across the line. But the line is the same no matter which side you fold across it, so by exercise 7 the line is straight.</p> </blockquote> <p>Now, I want to emphasize again that this is nowhere near a mathematical proof, but it is indeed interesting to think about it this way, as it shows that there are some inherent properties that we assume about our ambient space whenever we perform operations (typically rigid motions) within it.</p>
866,921
<p>On the <a href="http://chat.stackexchange.com/rooms/36/mathematics">Mathematics chat</a> we were recently talking about the following problem <a href="https://math.stackexchange.com/users/32016/chriss-sis">@Chris'ssis</a> had to solve during an interview :</p> <p>$$3\times 4=8$$ $$4\times 5=50$$ $$5\times 6=30$$ $$6\times 7=49$$ $$7\times 8=?$$</p> <p>We have not managed to solve it so far, all we know is the solution (which was given <strong>after</strong> we had given up) :</p> <blockquote class="spoiler"> <p> $224$</p> </blockquote> <p>How do we find this solution ?</p>
CaptainCodeman
65,852
<p>Easy, just define </p> <p>$$\begin{array}{rcl}a \times b &amp;=&amp; \hspace{10.5pt}(a-4)(b-5)(a-5)(b-6)(a-6)(b-7)(a-7)(b-8)/72 + \\&amp;&amp; 25(a-3)(b-4)(a-5)(b-6)(a-6)(b-7)(a-7)(b-8)/18 + \\&amp;&amp; 15(a-3)(b-4)(a-4)(b-5)(a-6)(b-7)(a-7)(b-8)/8 \hspace{5.25pt}+ \\&amp;&amp; 49(a-3)(b-4)(a-4)(b-5)(a-5)(b-6)(a-7)(b-8)/36 + \\&amp;&amp;\hspace{5.5pt}7(a-3)(b-4)(a-4)(b-5)(a-5)(b-6)(a-6)(b-7)/18\end{array}$$</p>
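<p>For the skeptical reader, this tongue-in-cheek interpolation can be checked numerically. The sketch below is an editorial addition; the helper name <code>times</code> is invented for the demonstration:</p>

```python
def times(a, b):
    """The joke 'multiplication' from the answer: an interpolating
    polynomial hitting the four table values and giving 224 at (7, 8)."""
    return (
        (a-4)*(b-5)*(a-5)*(b-6)*(a-6)*(b-7)*(a-7)*(b-8) / 72
        + 25*(a-3)*(b-4)*(a-5)*(b-6)*(a-6)*(b-7)*(a-7)*(b-8) / 18
        + 15*(a-3)*(b-4)*(a-4)*(b-5)*(a-6)*(b-7)*(a-7)*(b-8) / 8
        + 49*(a-3)*(b-4)*(a-4)*(b-5)*(a-5)*(b-6)*(a-7)*(b-8) / 36
        + 7*(a-3)*(b-4)*(a-4)*(b-5)*(a-5)*(b-6)*(a-6)*(b-7) / 18
    )

# Each row of the puzzle, plus the asked-for value:
for pair, expected in [((3, 4), 8), ((4, 5), 50), ((5, 6), 30),
                       ((6, 7), 49), ((7, 8), 224)]:
    print(pair, times(*pair), expected)
```

At each tabulated point all but one product vanishes, which is exactly the Lagrange-interpolation trick the answer is parodying.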
2,125,136
<p>I understand the usual motivation behind the truth table for the logical connective <span class="math-container">$\to$</span>.</p> <p>However, I would like to know if there is a more fundamental reason for that truth table. Something that would have to do with arguments and validity.</p> <p>A.G.Hamilton writes in <em>Logic for Mathematicians</em> that &quot;the significance of the conditional statement <span class="math-container">$A\to B$</span> is that its truth enables the truth of <span class="math-container">$B$</span> to be inferred from the truth of <span class="math-container">$A$</span>, and nothing in particular to be inferred from the falsity of <span class="math-container">$A$</span>&quot;.</p> <blockquote> <p><strong>Question:</strong> What does Hamilton mean precisely? Something like <span class="math-container">$\to$</span> is the only binary truth function such that</p> <ul> <li><p>The argument form <span class="math-container">$(p\to q),p;\therefore q$</span> is valid</p> </li> <li><p>The argument form <span class="math-container">$(p\to q),\sim p;\mathcal{A}$</span> is invalid unless <span class="math-container">$\mathcal{A}$</span> is a statement form logically equivalent to <span class="math-container">$\sim p$</span> ?</p> </li> </ul> </blockquote>
Tamar
467,879
<p>The total number of passwords should be the number of 4-6 digit passwords using the characters 2, 3, 5, and 7 minus the number of passwords using just 3, 5, and 7. For this you compute</p> <p>$$(4^4 + 4^5 + 4^6) - (3^4 + 3^5 + 3^6) = 5376 - 1053 = 4323.$$</p> <p>The probability that your friend guesses on the first attempt is $\frac{1}{4323}$, as you said.</p> <p>The probability that your friend guesses your password on the second attempt is the probability that she guesses wrong on the first attempt <em>and</em> right on the second attempt. That's $\frac{4322}{4323}\cdot\frac{1}{4322} = \frac{1}{4323}$.</p> <p>Similarly, let's say you look at the third attempt: you want the probability that your friend guesses wrong on the first and second tries, and then guesses your password on the third try. That happens with probability $\frac{4322}{4323}\cdot\frac{4321}{4322}\cdot\frac{1}{4321} = \frac{1}{4323}$.</p> <p>The same pattern continues: assuming your friend guesses a different password every time, the probability that she guesses on the $n^\text{th}$ attempt is $\frac{1}{4323}$.</p> <p>Now, as you suggested, you can add the probabilities corresponding to the first try + second try + ... + tenth try, since you want to know the odds that she guesses right on the first try <em>or</em> the second try <em>or</em> the third try, and so on.</p> <p>This gives you a probability of</p> <p>$$10 \cdot \frac{1}{4323} = \frac{10}{4323}.$$</p>
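<p>The count (and therefore the probability) can be double-checked by brute force. This is an editorial sketch; it assumes, as the formula above implies, that the digit set is {2, 3, 5, 7} and that a valid password must contain the digit 2 at least once:</p>

```python
from itertools import product

# Enumerate all 4-6 digit passwords over {2,3,5,7} containing a 2.
count = sum(1
            for length in (4, 5, 6)
            for pw in product("2357", repeat=length)
            if "2" in pw)

formula = (4**4 + 4**5 + 4**6) - (3**4 + 3**5 + 3**6)
print(count, formula)  # both 4323, so each guess succeeds with prob. 1/4323
```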
1,401,760
<p>First I tried to use integration: $$y=\lim_{n\to\infty}\frac{a^n}{n!}=\lim_{n\to\infty}\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{n}$$ $$\log y=\lim_{n\to\infty}\sum_{r=1}^n\log\frac{a}{r}$$ But I could not express it as a <em>Riemann integral</em>. Now I am thinking about the sandwich theorem.</p> <p>$$\frac{a^n}{n!}=\frac{a}{1}\cdot\frac{a}{2}\cdot\frac{a}{3}\cdots\frac{a}{t} \cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}=\frac{a^t}{t!}\cdot\frac{a}{t+1}\cdot\frac{a}{t+2}\cdots\frac{a}{n}$$ Since $\frac{a}{t+1}&gt;\frac{a}{t+2}&gt;\cdots&gt;\frac{a}{n}$ (choosing $t$ so that $t+1&gt;a$), $$\frac{a^n}{n!}&lt;\frac{a^t}{t!}\cdot\big(\frac{a}{t+1}\big)^{n-t}$$ since $\frac{a}{t+1}&lt;1$, $$\lim_{n\to\infty}\big(\frac{a}{t+1}\big)^{n-t}=0$$ Hence, $$\lim_{n\to\infty}\frac{a^t}{t!}\big(\frac{a}{t+1}\big)^{n-t}=0$$ And by using the sandwich theorem, $y=0$. Is this correct?</p>
Christian Prince
249,512
<p>This is the shortest proof:</p> <p>$\displaystyle \sum _{n=1} ^{\infty} \frac{a^{n}}{n!}$ converges by the ratio test.</p> <p>Let $x_{n}=\frac{a^{n}}{n!} $.</p> <p>Then the convergence of $\displaystyle \sum _{n=1} ^{\infty }x_{n} $ implies that $\lbrace x_{n} \rbrace $ converges to zero.</p>
859,493
<p>Given any fixed integer $n&gt;0$, let $S=\{1,2,3,4,...,n\}$. Now a Red-Blue (RB-coloured) subset of $S$ is a subset $T$ in which every element of $T$ is given a colour (either red or blue). For instance $\{17 (\text{red})\}, \{1 (\text{red}), 5 (\text{red})\}$ and $\{1(\text{red}),5(\text{blue})\}$ are $3$ different RB-coloured subsets of $S$.</p> <p>Determine the number of different RB-coloured subsets of $S$.</p> <p>First thing I did was look at the problem without the colour restrictions to find out how many different subsets I can have. The number of different subsets I calculated was $2^n - 1$. The $-1$ is from subtracting the null set. </p> <p>I'm not sure how to count the number of possibilities when the colour restrictions are introduced.</p> <p>Also how would I go about determining the number of different subsets of size $i$, where $0&lt;i&lt;n$?</p>
Ethan Splaver
50,290
<p>$$(2x^2+(x-3))^8=a_0+a_1x+a_2x^2+a_3x^3\cdots$$</p> <p>$$\frac{d}{dx}(2x^2+(x-3))^8=0+a_1+a_22x+a_33x^2\cdots$$</p> <p>$$\lim_{x\to 0}\frac{d}{dx}(2x^2+(x-3))^8=a_1$$</p> <p>$$-17496=a_1$$</p>
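<p>A quick way to confirm $a_1=-17496$ is to expand the eighth power by repeated coefficient convolution, which also matches the direct evaluation $8\,p(0)^7 p'(0)=8\cdot(-3)^7\cdot 1$. This sketch is an editorial addition; <code>poly_mul</code> is an invented helper:</p>

```python
def poly_mul(p, q):
    """Multiply two polynomials given as ascending coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

# 2x^2 + x - 3, coefficients in ascending order of degree
base = [-3, 1, 2]
expanded = [1]
for _ in range(8):
    expanded = poly_mul(expanded, base)

print(expanded[1])  # a_1 = -17496, matching 8 * (-3)^7 * 1
```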
3,858,966
<blockquote> <p>Let <span class="math-container">$A= \{(x,y,z) \in \Bbb R^3 \vert x+y&lt;z &lt; x^2+y^2 \}$</span>. Show that <span class="math-container">$A$</span> is an open set in <span class="math-container">$\Bbb R^3$</span> equipped with the Euclidean metric.</p> </blockquote> <p>So <span class="math-container">$A$</span> can be written as <span class="math-container">$A = \{(x,y,z) \in \Bbb R^3 \vert x+y-z&lt;0, x^2+y^2-z&gt;0 \} = \{(x,y,z) \in \Bbb R^3 \vert x+y-z&lt;0\} \cap \{(x,y,z) \in \Bbb R^3 \vert x^2+y^2-z&gt;0\}$</span>.</p> <p>Now the book I'm reading had solved this by defining <span class="math-container">$f,g : \Bbb R^3 \to \Bbb R$</span>, <span class="math-container">$f(x,y,z) = x+y-z$</span> and <span class="math-container">$g(x,y,z) = x^2+y^2-z$</span>. Showing that these two functions are continuous seemed to imply that <span class="math-container">$A$</span> is open? I'm not yet on the chapter that introduces continuity in metric spaces, so I was wondering if there's any other way to show that <span class="math-container">$A$</span> would be open? I know the definition of continuity in metric spaces, but here they used some projections, etc., which I'm not familiar with yet.</p>
José Carlos Santos
446,262
<p>Take <span class="math-container">$(a,b,c)\in A$</span>. Then <span class="math-container">$a^2+b^2&gt;c$</span>. If <span class="math-container">$\delta&gt;0$</span> and if <span class="math-container">$\|(x,y,z)-(a,b,c)\|&lt;\delta$</span>, then<span class="math-container">\begin{align}x^2+y^2-z&amp;=\bigl((x-a)+a\bigr)^2+\bigl((y-b)+b\bigr)^2-(z-c)-c\\&amp;=(x-a)^2+(y-b)^2-(z-c)+2a(x-a)+2b(y-b)+a^2+b^2-c.\end{align}</span>And, since <span class="math-container">$|x-a|,|y-b|,|z-c|&lt;\delta$</span>,<span class="math-container">$$\bigl|(x-a)^2+(y-b)^2-(z-c)+2a(x-a)+2b(y-b)\bigr|&lt;2\delta^2+(2|a|+2|b|+1)\delta.$$</span>So, if <span class="math-container">$\delta$</span> is so small that <span class="math-container">$2\delta^2+(2|a|+2|b|+1)\delta&lt;a^2+b^2-c$</span>, the computations above show that<span class="math-container">$$(x,y,z)\in B_\delta\bigl((a,b,c)\bigr)\implies x^2+y^2-z&gt;0.$$</span>Suppose now that <span class="math-container">$\delta$</span> is also so small that <span class="math-container">$3\delta&lt;c-(a+b)$</span>. Then a similar (indeed, simpler) computation shows that you also have<span class="math-container">$$(x,y,z)\in B_\delta\bigl((a,b,c)\bigr)\implies z-(x+y)&gt;0$$</span>and therefore<span class="math-container">$$(x,y,z)\in B_\delta\bigl((a,b,c)\bigr)\implies(x,y,z)\in A.$$</span>This proves that <span class="math-container">$A$</span> is an open set.</p>
65,892
<p>Hello!</p> <p>Let $M$ be an almost complex manifold. Let $TM$ denote its tangent bundle. Then we have the decomposition $TM\otimes\mathbb{C}=T^{1,0}M\oplus T^{0,1}M$ corresponding to the eigenvalues of the almost complex structure. This decomposition yields the decomposition: $$ \Lambda^r(T^\star M\otimes\mathbb{C})=\Lambda^r(T^{1,0}M^\star\oplus T^{0,1}M^\star)=\bigoplus_{p+q=r}\Lambda^p(T^{1,0}M^\star)\otimes\Lambda^q(\overline{T^{0,1}M}^\star) $$ Now take a section $\omega$ of the complex vector bundle $$ \Lambda^{p,q}:=\Lambda^p(T^{1,0}M^\star)\otimes\Lambda^q(\overline{T^{0,1}M}^\star) $$ $\omega$ is called a complex differential form of type $(p,q)$. Consider a complex $(p,q)$-form $\omega$ and take its differential. Its differential $\mathrm{d}\omega$ is a section of: $$ \Lambda^{p+q+1}(T^\star M\otimes\mathbb{C})=\bigoplus_{m+n=p+q+1}\Lambda^{m,n} $$ Therefore $\mathrm{d}\omega$ can be decomposed into a sum of complex differential forms of type $(m,n)$ with $m+n=p+q+1$. However, I have read that there are only four terms. My second question is:</p> <p><em>How do we prove that in fact $\mathrm{d}\omega$ is a section of: $$\Lambda^{p+2,q-1}\oplus\Lambda^{p+1,q}\oplus\Lambda^{p,q+1}\oplus\Lambda^{p-1,q+2}$$ only?</em></p> <p>I am aware that in the case where the almost complex structure is integrable we get only two terms such that finally we have $\mathrm{d}=\partial+\bar{\partial}$. But in fact it seems that in the almost complex case already we do not have so many terms (namely we have only 4 as above). I think this has something to do with the grading of the algebra of differential forms and the nilpotence of the differential itself but I am not able to prove it.</p> <p>Lastly, since I am interested in the same kind of question concerning Lie and Courant algebroids, I was wondering if this fact could be recast in the language of homotopical algebras (by which I vaguely mean that usual identities on brackets hold up to something else)? 
This is because the algebra of differential forms is a supercommutative algebra and we can reformulate $\mathrm{d}^2=0$ as $[\mathrm{d},\mathrm{d}]=0$. Could somebody point me toward an article?</p> <p>Thank you very much!</p>
Ben McKay
13,268
<p>Try Chern, <a href="http://books.google.ie/books?hl=en&amp;lr=&amp;id=87O81_uJ10kC&amp;oi=fnd&amp;pg=PP8&amp;dq=complex+manifolds+without+potential+theory&amp;ots=R8p5dlhq6Q&amp;sig=SnJftV-DaK3hYnN3mwZfcw9mXlM&amp;redir_esc=y#v=onepage&amp;q=complex%2520manifolds%2520without%2520potential%2520theory&amp;f=false" rel="noreferrer">Complex Manifolds without Potential Theory</a>, p. 18.</p>
264,595
<p>I've been trying to find an asymptotic expansion of the following series</p> <p>$$C(x) = \sum\limits_{n=1}^{\infty} \frac{x^{2n+1}}{n!{\sqrt{n}} }$$</p> <p>and</p> <p>$$L(x) = \sum\limits_{n=1}^{\infty} \frac{x^{2n+1}}{n(n!{\sqrt{n}}) }$$</p> <p>around $+\infty$, in the from</p> <p>$$\exp(x^2)\Big(1+\frac{a_1}{x}+\frac{a_2}{x^2} + .. +\frac{a_k}{x^k}\Big) + O\Big(\frac{\exp(x^2)}{x^{k+1}}\Big)$$</p> <p>where $x$ is a positive real number. As far as I progressed, I obtained only</p> <p>$$C(x) = \exp(x^2) + \frac{\exp(x^2)}{x} + O\Big(\frac{\exp(x^2)}{x}\Big).$$</p> <p>I tried to use ideas from <a href="https://math.stackexchange.com/questions/484367/upper-bound-for-an-infinite-series-with-a-square-root?rq=1">https://math.stackexchange.com/questions/484367/upper-bound-for-an-infinite-series-with-a-square-root?rq=1</a>, <a href="https://math.stackexchange.com/questions/115410/whats-the-sum-of-sum-limits-k-1-infty-fractkkk">https://math.stackexchange.com/questions/115410/whats-the-sum-of-sum-limits-k-1-infty-fractkkk</a>, <a href="https://math.stackexchange.com/questions/378024/infinite-series-involving-sqrtn?noredirect=1&amp;lq=1">https://math.stackexchange.com/questions/378024/infinite-series-involving-sqrtn?noredirect=1&amp;lq=1</a>, but I was unable to make them work in my case. </p> <p>Any suggestions would be greatly appreciated!</p> <p>(If someone has a solid culture in this kind of things, is there are any specific names for $C(x) $ and $L(x) $ ?).</p> <p>PS:</p> <p>This question was asked on the math.SE but was closed as duplicate of <a href="https://math.stackexchange.com/questions/2117742/lim-x-rightarrow-infty-sqrtxe-x-left-sum-k%ef%bc%9d1-infty-fracxk/2123100#2123100">https://math.stackexchange.com/questions/2117742/lim-x-rightarrow-infty-sqrtxe-x-left-sum-k%ef%bc%9d1-infty-fracxk/2123100#2123100</a>. However, the latter question provides only the first term of the asymptotic expansion and does not address sufficiently the problem considered here.</p>
Johannes Trost
37,436
<p><strong>Warning</strong>: <strike>Nearly</strike> every number in this answer is wrong! Please read the answer posted by esg, which gives the right asymptotic expansion!</p> <p>Such problems can be solved by Laplace's method. The starting point is the observation that the values of the terms of the sums are unimodal as a function of $n$ for fixed (large) $x$, i.e., have one maximum. If the maximum (as in the given cases) is sharp enough, the terms of the sum can be approximated by a Gaussian. This Gaussian is then integrated on the whole $n$-axis, where $n$ is considered as a real number. The integral often converges. The contributions from negative $n$ are negligible.</p> <p>The maximum, $n_{0}$, for $\frac{x^{2 n +1}}{n!\sqrt{n}}$ for fixed $x$ can be evaluated to $$ n_{0} = x^2-1-\frac{5}{12}x^{-2}-\frac{1}{2} x^{-4}-\frac{123}{160} x^{-6}+\frac{359}{180} x^{-8} + O(x^{-10}). $$ The sum is approximated by an integral over a Gaussian (which we get when expanding around $n=n_{0}$ to second order in $\tau$) $$ C(x) \approx \int_{-\infty}^{\infty} d\tau\ \exp\left( \ln\left(\frac{x^{2 n +1}}{n!\sqrt{n}}\right)|_{n\rightarrow n_{0}+\tau}\right), $$ consistently expanded to $O(x^{-10})$ after integration. The result is $$ C(x)= e^{x^2}\left(1+\frac{5}{12} x^{-2}+\frac{157}{288} x^{-4}+\frac{49729}{51840} x^{-6}+\frac{417001}{497664} x^{-8}+O(x^{-10})\right). $$ I could not find any systematic pattern for the coefficients. Only the numerators of the higher order terms seem to contain rather large prime factors.</p> <p>The result is different from your findings, though. However, numerical evidence suggests that the above asymptotic expansion is correct. </p> <p>My result for $L(x)$ is $$ L(x)=e^{x^2}\left(x^{-2}+\frac{23}{12} x^{-4}+\frac{1525}{288} x^{-6}+\frac{949099}{51840} x^{-8}+O(x^{-10})\right), $$ with again rather large prime factors of the numerators of the higher order coefficients. 
The maximum of the terms as a function of $n$ is reached (up to $O(x^{-10})$) for $n=n_{0}$ with $$ n_{0}=x^2 -2-\frac{23}{12}x^{-2}-5 x^{-4}-\frac{2643}{160}x^{-6}-\frac{3007}{45} x^{-8}+O(x^{-10}). $$ <strike>A numerical test shows very good quality of the asymptotic expansions for $C(x)$ and $L(x)$, even for $x$ around $2$.</strike></p> <p>All calculations were done with Mathematica 11.</p> <p><strong>Edit</strong>: I corrected a typo in the coefficient of $x^{-8}$ in the expansion of $n_{0}$ for the asymptotics of $C(x)$.</p> <p><strong>Edit</strong>: More numerical calculations indicate that the asymptotic expansion for $C(x)$ given by me is by far not as accurate as I hoped. The <strong>numerical</strong> evidence shows that $$ C(x) = e^{x^{2}}\left(1+\frac{3}{8}x^{-2}+\frac{1}{2}x^{-4}...\right) $$ is much better.</p> <p><strong>Edit</strong>: The coefficients for $L(x)$ as given in my answer are wrong as well. The numbers given by esg in the comment below seem to be correct.</p>
3,169,258
<p>I need to evaluate on the circle <span class="math-container">$\left|z\right|=\pi$</span> the integral <span class="math-container">$$\int_{\left|z\right|=\pi}\frac{\left|z\right|e^{-\left|z\right|}}{z}dz.$$</span> The function is not holomorphic there. Anyway, I tried to integrate it using polar coordinates and simplifying the modulus, and I got <span class="math-container">$2\pi e^{-\pi}$</span> while the result should be <span class="math-container">$2\pi^2 ie^{-\pi}$</span>. I'm sure it is trivial and I overlooked a stupid error. Can anybody tell me where?</p>
José Carlos Santos
446,262
<p>If <span class="math-container">$\gamma(t)=\pi e^{it}$</span> (<span class="math-container">$t\in[0,2\pi]$</span>), then<span class="math-container">\begin{align}\int_{\lvert z\rvert=\pi}\frac{\lvert z\rvert e^{-\lvert z\rvert}}z\,\mathrm dz&amp;=\int_0^{2\pi}\frac{\bigl\lvert\gamma(t)\bigr\rvert e^{-\lvert\gamma(t)\rvert}}{\gamma(t)}\gamma'(t)\,\mathrm dt\\&amp;=\int_0^{2\pi}\pi e^{-\pi}i\,\mathrm dt\\&amp;=2\pi^2ie^{-\pi}.\end{align}</span></p>
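<p>An editorial aside: since the integrand is constant along this parametrization (<span class="math-container">$\lvert\gamma(t)\rvert=\pi$</span>), the value is easy to confirm with a crude Riemann sum. A standard-library sketch:</p>

```python
import cmath
import math

# Riemann sum for the contour integral with γ(t) = π e^{it}, t in [0, 2π].
N = 100_000
dt = 2 * math.pi / N
total = 0j
for k in range(N):
    t = k * dt
    z = math.pi * cmath.exp(1j * t)            # γ(t)
    dgamma = 1j * math.pi * cmath.exp(1j * t)  # γ'(t)
    total += abs(z) * math.exp(-abs(z)) / z * dgamma * dt

exact = 2 * math.pi**2 * 1j * math.exp(-math.pi)
print(total, exact)  # both ≈ 0.853j
```

Each summand is the constant $\pi e^{-\pi} i\,dt$, which is exactly why the sum lands on $2\pi^2 i e^{-\pi}$.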
2,375,714
<p>I know this is a really easy question but for some reason I'm having trouble with it.</p> <p>If $M$ is an object in an additive category $\mathcal C$, and $\text{Hom}_{\mathcal C}(M,M) = 0$, then $M = 0$.</p> <p>I know that this implies $\text{id}_M =0$ but I'm having trouble showing that $M$ satisfies the condition to be the zero object, or showing that the map from $0\rightarrow M$ is necessarily invertible.</p> <p>We have that this composite $M \rightarrow 0 \rightarrow M$ is the $0$ map and the identity simultaneously but I'm stumped.</p> <p>I think I'm thinking about it too much.</p>
k.stm
42,242
<p>More generally, let $\mathcal C$ be a category. Let $x, y ∈ \operatorname{Ob} \mathcal C$ be objects of $\mathcal C$ whose only endomorphisms are their respective identities $\mathrm{id}_x$, $\mathrm{id}_y$. Then $$x \cong y \Longleftrightarrow \text{there are arrows $x → y$ and $y → x$}.$$ This is because whenever one can form paths $x → y → x$ and $y → x → y$, they both have to compose to the only endomorphisms of $x$ and $y$ – the identities (as already pointed out by others).</p>
4,331,258
<p>I'm trying to find the median of <span class="math-container">$f(x) = 4xe^{-2x}$</span>.</p> <p>So far, I've tried solving for <span class="math-container">$q_{50}$</span> by plugging it into an integral and setting it equal to 0.5 like so: <span class="math-container">$\int_{0}^{q_{50}} 4xe^{-2x} dx = 0.5$</span>. I eventually get to <span class="math-container">$-2q_{50}e^{-2q_{50}} - e^{-2q_{50}} + 1 = 0.5$</span>. Unfortunately, at this point, I have been unable to solve for <span class="math-container">$q_{50}$</span>.</p> <p>Is there something I've done wrong up to this point or another method that I could be using instead to find the median? Thanks for the help!</p>
Claude Leibovici
82,404
<p>Letting <span class="math-container">$x=q_{50}$</span>, as @Vítězslav Štembera answered, you want to solve for <span class="math-container">$x$</span> the equation <span class="math-container">$$ (2 x+1)\,e^{-2 x}=k \quad\implies\quad(2x+1)\,e^{-(2 x+1)}=\frac k e$$</span> The only explicit solution of it is given by <span class="math-container">$$x=-\frac{1}{2} \left(1+W_{-1}\left(-\frac{k}{e}\right)\right)$$</span> where <span class="math-container">$W_{-1}(.)$</span> is the second branch of <a href="https://en.wikipedia.org/wiki/Lambert_W_function" rel="nofollow noreferrer">Lambert function</a>.</p> <p>If you cannot use Lambert function, only numerical methods would give the solution.</p>
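<p>An editorial note on the numerical route: if SciPy is available, <code>scipy.special.lambertw</code> with branch <code>k=-1</code> evaluates the closed form directly; without it, a simple bisection on the original CDF equation recovers the same number. A standard-library sketch (the bracket <code>[0, 10]</code> is a choice made here, wide enough to contain the median):</p>

```python
import math

def cdf(x):
    # CDF of f(x) = 4x e^{-2x} on x >= 0: integral 0..x = 1 - (2x+1) e^{-2x}
    return 1 - (2 * x + 1) * math.exp(-2 * x)

# Bisect cdf(x) = 0.5 on an interval where the CDF brackets 1/2.
lo, hi = 0.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if cdf(mid) < 0.5:
        lo = mid
    else:
        hi = mid

print(lo)  # median ≈ 0.839
```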
457,427
<p>Find the derivative of the function:<br> $$\int_{x^2}^{\sin(x)}\sqrt{1+t^4}dt$$<br></p> <p>In class we had the following solution:<br> By the fundamental theorem of calculus we know that <br> $$\left(\int_a^xf(t)dt\right)'=f(x)$$ So<br> $$\int_{x^2}^0\sqrt{1+t^4}dt+\int_0^{\sin(x)}\sqrt{1+t^4}dt=$$<br> $$\int_0^{\sin(x)}\sqrt{1+t^4}dt-\int_0^{x^2}\sqrt{1+t^4}dt=$$<br> Letting $g(t)=\sqrt{1+t^4} $<br> $$g(\sin(x))(\sin(x))'-g(x^2)(x^2)'=$$<br> $$\sqrt{1+\sin(x)^4}\cdot \cos(x)-\sqrt{1+x^8} \cdot 2x$$<br></p> <p>However, if we have that $\left(\int_a^xf(t)dt \right)'=f(x)$ wouldn't the answer just be <br> $$\sqrt{1+\sin(x)^4}-\sqrt{1+x^8}?$$</p>
DonAntonio
31,254
<p>The function $\,\sqrt{1+t^4}\;$ is defined and continuous everywhere, from where we can thus apply the Fundamental Theorem of Integral Calculus and get</p> <p>$$\int\limits_{x^2}^{\sin x}\sqrt{1+t^4}dt=F(\sin x)-F(x^2)$$</p> <p>with $\;F\;$ a function s.t. $\,F'(x)=\sqrt{1+x^4}\;$ (the primitive function of the integrand).</p> <p>Thus we get, applying the chain rule:</p> <p>$$\left(\int\limits_{x^2}^{\sin x}\sqrt{1+t^4}dt\right)'=\left(F(\sin x)-F(x^2)\right)'=\cos xF'(\sin x)-2xF'(x^2) =$$</p> <p>$$=\cos x\sqrt{1+\sin^4x}-2x\sqrt{1+x^8}$$</p>
1,283,037
<blockquote> <p>Consider the series: $$ \sum_{i=1}^\infty \frac{i}{(i+1)!} $$ Make a guess for the value of the $n$-th partial sum and use induction to prove that your guess is correct.</p> </blockquote> <p>I understand the basic principles of induction I think I would have to assume the n-1 sum to be true and then use that to prove that the nth sum is true. But I have no idea how to guess what the sum might be? Doing the partial sums indicates that the series converges at possibly 1.</p>
DeepSea
101,504
<p>We have:$$\dfrac{i}{(i+1)!} = \dfrac{(i+1)-1}{(i+1)!} = \dfrac{i+1}{(i+1)!} - \dfrac{1}{(i+1)!} = \dfrac{1}{i!}-\dfrac{1}{(i+1)!} \Rightarrow S_n = 1-\dfrac{1}{(n+1)!}\Rightarrow S=\displaystyle \lim_{n\to \infty} S_n = 1$$</p>
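<p>The telescoping identity and the guessed partial sum are easy to check numerically; this sketch is an editorial addition:</p>

```python
import math

# Check S_n = 1 - 1/(n+1)! against direct partial sums of sum_i i/(i+1)!.
for n in range(1, 12):
    partial = sum(i / math.factorial(i + 1) for i in range(1, n + 1))
    closed = 1 - 1 / math.factorial(n + 1)
    print(n, partial, closed)  # the two columns agree, and both tend to 1
```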
3,563,419
<p>What I did was to write <span class="math-container">$(1\, 2)(3\, 4)(1\, 3)(2\, 4)$</span> as <span class="math-container">$\bigl(\begin{smallmatrix} 1&amp; 2 &amp;3 &amp; 4\\ 2&amp; 1 &amp;4 &amp; 3 \end{smallmatrix}\bigr)\bigl(\begin{smallmatrix} 1&amp; 2 &amp;3 &amp;4 \\ 3&amp; 4&amp; 1 &amp; 2 \end{smallmatrix}\bigr)$</span> which is equal to <span class="math-container">$(1 4)(2 3)$</span>.</p> <p>Is what I'm doing right? And isn't there any other method for it?</p>
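<p>One mechanical way to double-check such a product (composing right to left, as in the two-line computation above) is to code the composition. This sketch is an editorial addition, with invented helper names:</p>

```python
def compose(p, q):
    """Return the permutation p∘q (apply q first), as a dict on {1,...,n}."""
    return {x: p[q[x]] for x in q}

def from_cycles(*cycles, n=4):
    """Build a permutation dict from disjoint cycles."""
    perm = {x: x for x in range(1, n + 1)}
    for cyc in cycles:
        for i, x in enumerate(cyc):
            perm[x] = cyc[(i + 1) % len(cyc)]
    return perm

first = from_cycles((1, 2), (3, 4))    # the left factor
second = from_cycles((1, 3), (2, 4))   # the right factor, applied first

print(compose(first, second))  # {1: 4, 2: 3, 3: 2, 4: 1}, i.e. (1 4)(2 3)
```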
Community
-1
<p>Well, it's as simple as using a theorem: this is an alternating series whose terms decrease monotonically to $0$, so by the Leibniz criterion it converges. Thanks to @lulu.</p>
4,552,098
<p>The rules: Two people roll a single die. If the die shows 1, 2, 3 or 4, person A gets a point. For the rolls 5 and 6, person B gets a point. One person needs a 2 point lead to win the game.</p> <p>This is a question taken from my math book. The answer says the probability is <span class="math-container">$\frac{4}{5}$</span> for person A to win the game, which I don't understand.</p> <p>My thought process: Let's look at all the four possible outcomes of the first two rolls. These would be AA, BB, AB or BA. AA means person A gets a point two times in a row. Below are the probabilities for all these scenarios:</p> <p><span class="math-container">$P(AA)=(\frac{2}{3})^2=\frac{4}{9}$</span></p> <p><span class="math-container">$P(BB)=(\frac{1}{3})^2=\frac{1}{9}$</span></p> <p><span class="math-container">$P(AB)=\frac{2}{3}\cdot\frac{1}{3}=\frac{2}{9}$</span></p> <p><span class="math-container">$P(BA)=\frac{1}{3}\cdot\frac{2}{3}=\frac{2}{9}$</span></p> <p>If AB or BA happens they have an equal number of points again, no matter how far they are into the game. The probability of this would then be <span class="math-container">$2\cdot\frac{2}{9}=\frac{4}{9}$</span>. Since they have an equal number of points, you can look at that as if the game has restarted.</p> <p>Meaning person A has to get two points in a row to win no matter what. Would that not mean the probability is <span class="math-container">$\frac{4}{9}$</span> for person A to win? Can someone tell me where my logic is flawed and what the correct logic would be?</p>
Lourrran
1,104,122
<p>In probability, you need to always cross-check your results. In this game, A will win with probability PA, and B will win with probability PB.</p> <p>And <strong>PA+PB should be equal to 1.</strong></p> <p>If you say PA=4/9, you say, with the same logic, PB=1/9.</p> <p>Do you have PA+PB=1? No. So there is something wrong.</p> <p>4/9 is the probability for A to win <strong>immediately</strong>, i.e. with the first two rolls.</p>
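<p>The full answer follows from the asker's own "restart" observation: condition on the first decisive pair of rolls, which gives a geometric series. The exact-fraction sketch below is an editorial addition:</p>

```python
from fractions import Fraction

p = Fraction(2, 3)   # A wins a single roll
q = 1 - p            # B wins a single roll
tie = 2 * p * q      # AB or BA: scores level again, the game "restarts"

# P(A wins) = p^2 * (1 + tie + tie^2 + ...) = p^2 / (1 - tie),
# i.e. conditioning on the first decisive pair of rolls.
pa = p**2 / (1 - tie)
pb = q**2 / (1 - tie)

print(pa, pb, pa + pb)  # 4/5, 1/5, and they sum to 1
```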
2,370,378
<p>How many distinct pairs of disjoint hyperplanes of size $q^{n-1}$ exist in $\mathbb{F}_q^n$?</p> <p>Initially I had just thought to pick n points to define a hyperplane, and divide by the number of ways to pick such points, that is:</p> <p>$\frac{q^n \choose n}{q^{n-1} \choose n}$</p> <p>From there each hyperplane would presumably have $q-1$ disjoint neighbors giving:</p> <p>$\frac{(q-1){q^n \choose n}}{2{q^{n-1} \choose n}}$</p> <p>However, I realize this does not deal with sets of points which are coplanar. This number then is a lower bound, as these coplanar sets are not counted enough times. Is there a way to get around this? Knowing the number of sets of n non-coplanar points would presumably do the trick.</p>
Alex Meiburg
127,777
<p>For selecting the first and second points, you have $q^n$ and $q^n-1$ options: anything except for the same thing. For selecting the third point, you can again pick any of the $q^n$, except for the $q$ points on the line defined -- so that you have $q^n-q$ options. For the fourth point, you have $q^n-q^2$, because you're excluding the $q^2$ points generated by the first three. And so on. But this does indeed produce the same plane many times, and reducing the multiple counting is tricky.</p> <p>An easier method is to recognize that a hyperplane is defined by its normal vector, and its position along that vector. This can be any nonzero length-$n$ vector of $F_q$ values, but of course scaling the normal vector leaves the plane unchanged. There are $q^n$ vectors, then $q^n-1$ nonzero ones, and $q-1$ ways we can rescale it, so that there are $\frac{q^n-1}{q-1}$ normal vectors. Then there are $q$ places "along" this vector to place the plane. So there are </p> <p>$$\frac{q(q^n-1)}{q-1}$$</p> <p>distinct hyperplanes. If $q$ is even, of course, these can be perfectly paired up, and so there are</p> <p>$$\frac{q(q^n-1)}{2(q-1)}$$</p> <p>pairs. If $q$ is odd, then all-but-one with a given normal vector can be paired up, so that there are $(q-1)/2$ pairs with a given normal vector, or </p> <p>$$\frac{(q-1)(q^n-1)}{2(q-1)} = \frac{q^n-1}{2}$$</p> <p>pairs that can be made simultaneously.</p>
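<p>For small parameters the closed form $\frac{q(q^n-1)}{q-1}$ can be verified by exhaustive enumeration. This sketch is an editorial addition; it assumes $q$ prime, so that $F_q$ is just $\Bbb Z/q\Bbb Z$:</p>

```python
from itertools import product

def num_hyperplanes(q, n):
    """Count distinct affine hyperplanes a.x = c in (Z/qZ)^n, q prime."""
    planes = set()
    for a in product(range(q), repeat=n):
        if all(ai == 0 for ai in a):
            continue  # the zero vector is not a valid normal
        for c in range(q):
            plane = frozenset(x for x in product(range(q), repeat=n)
                              if sum(ai * xi for ai, xi in zip(a, x)) % q == c)
            planes.add(plane)  # scalings of (a, c) give the same set
    return len(planes)

for q, n in [(2, 2), (3, 2), (2, 3), (5, 2)]:
    formula = q * (q**n - 1) // (q - 1)
    print(q, n, num_hyperplanes(q, n), formula)
```

Each enumerated plane has $q^{n-1}$ points, matching the size required in the question.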
870,367
<p>Suppose $A$ is a $2$-by-$2$ matrix, and $\mathcal{l}$ is an invariant line under $A$, so $(x,mx+c)$ is mapped to $(X,mX+c)$ for some variable $X$ linear in $x$. Then is there a point on the line $\mathcal{l}$ which is a fixed point of $A$, i.e. there is some $x' \in \mathcal{l}$ such that $Ax'=x'$?</p> <p>The reason I ask is that apparently if $y=mx+c$ is an invariant line under $A$, so is $y=mx$ - the above is equilivant to this by linearity, I believe. Certainly, the case where the invariant line goes through the origin is simple - but unfortunately this isn't the only case, e.g. the matrix $$ \left(\begin{matrix} 0 &amp; 1 \\ 5 &amp; -4 \end{matrix}\right)$$ has an invariant line $y=-5x+2$ as $$ \left(\begin{matrix} 0 &amp; 1 \\ 5 &amp; -4 \end{matrix}\right)\left( \begin{matrix} x\\ -5x + 2 \end{matrix} \right) = \left( \begin{matrix} -5x+2\\ -5(-5x+2) + 2 \end{matrix} \right)$$ However, I can't see this example being very illuminating, as it's quite contrived to force $(0,1)$ to be mapped to itself, and I can't imagine the $y$-intercept always being a fixed point in these circumstances. Does anyone have any ideas?</p>
Graham Kemp
135,106
<p>Let $S$ be the count of the 12 students selected who eat sushi.</p> <p>$\begin{align}\Pr(S\geq 11) &amp; = \Pr(S=11 \cup S=12) \\ &amp; = \Pr(S=11)+\Pr(S=12) \\ &amp; = \frac{{^{18}C_{11}}{^{10}C_1}+{^{18}C_{12}}}{^{28}C_{12}}\end{align}$ </p> <p>This is the probability of selecting either 11 of 18 sushi( and 1 of the 10 others), or 12 of 18 sushi, out of all the ways to select 12 of 28 students.</p> <p><strong>Remark:</strong> "at least" means "greater than or equal to". In problems where you can calculate the probabilities of "exactly", this means summing the individual probabilities. </p> <p>Sometimes it may be easier to find the complement (though not in this example). Remember that: $\Pr($at least something$) = 1 - P($less than something$)$.</p>
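<p>An editorial footnote: the arithmetic is quick to verify with <code>math.comb</code>, including the Vandermonde normalization of the hypergeometric counts:</p>

```python
from math import comb

# P(S >= 11): 11 or 12 sushi eaters among the 12 selected students,
# drawn from 18 sushi eaters and 10 others.
p = (comb(18, 11) * comb(10, 1) + comb(18, 12)) / comb(28, 12)

# Sanity check: the counts over all feasible k sum to C(28, 12).
total = sum(comb(18, k) * comb(10, 12 - k) for k in range(2, 13))

print(p, total == comb(28, 12))
```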
14,765
<p>I like to make the "dominoes" analogy when I teach my students induction.</p> <p>I recently came across the following video:</p> <p><a href="https://www.youtube.com/watch?v=-BTWiZ7CYoI" rel="noreferrer">https://www.youtube.com/watch?v=-BTWiZ7CYoI</a></p> <p>In this video, a sequence of concrete block wall caps are set up like dominoes on the top of a wall. The first wall cap is knocked down, setting off the domino effect. The blocks are spaced so that they are resting on each other when they fall, but just barely. So rather than resting flat each block is supported slightly by its successor. When the last block falls, however, it falls flat (having no subsequent block to rest on). This causes the block behind it to slip off and lie flat, which causes the brick behind it to slip off and lie flat, until all the blocks are lying flat perfectly end to end.</p> <p>Is there any instance of a similar phenomenon occurring in mathematics? I am thinking of a situation in which you want to prove both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span> (say). If you are able to prove: </p> <ol> <li><span class="math-container">$P(1)$</span></li> <li><span class="math-container">$\forall k \in \{1,2,3, \dots, 99\} P(k) \implies P(k+1)$</span></li> <li><span class="math-container">$P(100) \implies Q(100)$</span></li> <li><span class="math-container">$\forall k \in \{ 100, 99, 98, \dots, 3,2\}, Q(k) \implies Q(k-1)$</span></li> </ol> <p>Then it will follow that both <span class="math-container">$P(n)$</span> and <span class="math-container">$Q(n)$</span> are true for <span class="math-container">$n = 1, 2, 3, \dots, 100$</span>.</p> <p>If an example is found, it could be a great example for teaching because it would force students to think through the logic of why induction works rather than blindly following a certain form of "an induction proof".</p>
Community
-1
<p>Maybe the first question to ask is: WHY are they making that mistake?</p> <p>As a famous 17th-century philosopher says, falsehood is nothing in itself, but just incomplete truth. So to understand the mistake, it can be useful to look for the incomplete truth that lies in that mistake. </p> <p>I think the "incomplete truth" here is the "rule": </p> <pre><code> IF a²=b then sqrt(a²) = sqrt(b) and then a = sqrt(b). </code></pre> <p>Yes, this rule is true, but with this complement: IF AND ONLY IF it is first given that the number a is positive. </p> <p>The complete truth is that, when one does not know whether the number a is positive or not, one has to write: </p> <pre><code> sqrt(a²) = |a|. </code></pre> <p>Applying what precedes to the specific question, one has: </p> <p>(1) a² = b </p> <p>(2) sqrt(a²) = sqrt(b) </p> <p>(3) |a| = sqrt(b) </p> <p>(4) a = sqrt(b) OR -a = sqrt(b) </p> <p>(5) a = sqrt(b) OR a = -sqrt(b) </p>
2,060,633
<p>Let $R=(\Bbb Z/5\Bbb Z)[X]/\langle X^n \rangle$ with $n\in\Bbb N_+$. Show that $f\in R$ is a zero divisor if and only if $f_0=0$. And how many zero divisors exist?</p> <p>I know that being a zero divisor means there exists a non-zero $m$ such that $fm=0$ in $R$, and that $\Bbb Z/5\Bbb Z$ has no zero divisors, meaning $f_0$ or $m_0$ is equal to $0$ if $fm=0$.</p> <p>How can I show that if $m_0\neq0$ then $f_0=0$?</p>
Michael R
399,606
<p>Imagine that you are randomly placing students into 4 different groups. You "draw" (or randomly place) graduate student 1 first, and you put him or her in group x. You then have to place the remaining 15 students into the 4 groups (where group x has 3 remaining spots, and the other groups have 4 remaining spots). </p> <p>What is the probability that any single student (i.e. graduate student 2) will also be in group x? Since there are 3 spots left in group x and a total of 15 students to choose from, $ P = \frac{3}{15} $. However, you are interested in the probability that any single student (i.e. graduate student 2) will not be in group x, which amounts to $ P(A_1) = 1 - P = 1 - \frac{3}{15} = \frac{15-3}{15} = \frac{12}{15} $</p> <p>The order of the draws is not important (i.e. whether you choose student 4 or student 8 first doesn't change anything), but it often helps to think about these processes in such a way. </p> <p>You can use a similar approach/method to get $ P(A_2|A_1) $ and $ P(A_3|A_1\cap A_2) $...</p>
4,224,713
<p>Let <span class="math-container">$f : \Bbb{Z} \to \Bbb{Z}_2$</span> be any map such that <span class="math-container">$f(xy) = f(x) + f(y) + f(x)f(y)$</span> and <strong><span class="math-container">$f$</span> is non-constant.</strong></p> <p>Then can we always write <span class="math-container">$f(x) = (q \mid x)$</span> for some prime number <span class="math-container">$q$</span> <strong>(or <span class="math-container">$1$</span>)</strong>?</p> <p>The converse is easy since if <span class="math-container">$f(xy) = (q \mid xy) = 1$</span>, then either <span class="math-container">$q \mid x$</span> <em>xor</em> <span class="math-container">$q \mid y$</span> <em>xor</em> <span class="math-container">$(q \mid x)(q\mid y)$</span> is equal to &quot;<span class="math-container">$(q\mid x)$</span> or <span class="math-container">$(q \mid y)$</span>&quot;. In other words <span class="math-container">$q$</span> divides one but not both or it divides both.</p> <p>Addition or <span class="math-container">$+$</span> modulo <span class="math-container">$2$</span> acts the same way as logical <em>xor</em> when treating <span class="math-container">$0$</span> as false and <span class="math-container">$1$</span> as true. Similarly, multiplication modulo <span class="math-container">$2$</span> acts the same way as logical <em>and</em>.</p>
Alan
175,602
<p>The map <span class="math-container">$f(x)=1$</span> satisfies your equation, but there certainly is no prime that divides every <span class="math-container">$x$</span>! So no.</p>
4,224,713
<p>Let <span class="math-container">$f : \Bbb{Z} \to \Bbb{Z}_2$</span> be any map such that <span class="math-container">$f(xy) = f(x) + f(y) + f(x)f(y)$</span> and <strong><span class="math-container">$f$</span> is non-constant.</strong></p> <p>Then can we always write <span class="math-container">$f(x) = (q \mid x)$</span> for some prime number <span class="math-container">$q$</span> <strong>(or <span class="math-container">$1$</span>)</strong>?</p> <p>The converse is easy since if <span class="math-container">$f(xy) = (q \mid xy) = 1$</span>, then either <span class="math-container">$q \mid x$</span> <em>xor</em> <span class="math-container">$q \mid y$</span> <em>xor</em> <span class="math-container">$(q \mid x)(q\mid y)$</span> is equal to &quot;<span class="math-container">$(q\mid x)$</span> or <span class="math-container">$(q \mid y)$</span>&quot;. In other words <span class="math-container">$q$</span> divides one but not both or it divides both.</p> <p>Addition or <span class="math-container">$+$</span> modulo <span class="math-container">$2$</span> acts the same way as logical <em>xor</em> when treating <span class="math-container">$0$</span> as false and <span class="math-container">$1$</span> as true. Similarly, multiplication modulo <span class="math-container">$2$</span> acts the same way as logical <em>and</em>.</p>
Servaes
30,382
<p>A particular counterexample would be <span class="math-container">$$f(x):=\begin{cases}1&amp;\text{ if }2\mid x\text{ or }3\mid x\\ 0&amp;\text{ otherwise }\end{cases}.$$</span> Of course this generalizes to arbitrary primes, and arbitrarily many of them.</p> <p>Every such function <span class="math-container">$f$</span> is of this form, because the function <span class="math-container">$g(x):=f(x)+1$</span> satisfies <span class="math-container">$$g(xy)=g(x)g(y),$$</span> i.e. it is completely multiplicative. This means <span class="math-container">$g(x)$</span> is fully determined by where it sends the primes, and hence so is <span class="math-container">$f(x)$</span>. Choosing either <span class="math-container">$f(p)=1$</span> or <span class="math-container">$f(p)=0$</span> corresponds to including the condition <span class="math-container">$p\mid x$</span> or not.</p>
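A brute-force check of this counterexample, sketched in Python over a finite range of integers (the $f$-value arithmetic is mod 2):

```python
# f(x) = 1 if 2 | x or 3 | x, else 0  -- the counterexample from the answer.
def f(x):
    return 1 if (x % 2 == 0 or x % 3 == 0) else 0

# Verify f(xy) = f(x) + f(y) + f(x)f(y)  (mod 2) on a finite range.
for x in range(-50, 51):
    for y in range(-50, 51):
        assert f(x * y) == (f(x) + f(y) + f(x) * f(y)) % 2
print("ok")
```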
2,642,547
<p>If a subset A of $\mathbb{R}^n$ has no interior, must it be closed?</p> <p>Can I prove this using the example of a subset A that consists of a single point, so A has no interior yet it is closed?</p>
ajotatxe
132,456
<p>The set $\{\frac1n: n\in\Bbb Z_+\}$ is not closed, and its interior is empty.</p>
2,008,341
<p>Consider the standard presentation of $D_{2n}$:</p> <p>$\langle r, s : r^n = s^2 =1, rs = sr^{-1}\rangle$.</p> <p>I have seen the latter relation given as $sr = r^{-1}s$ a few times. Is this correct, as well?</p>
eepperly16
239,046
<p>It is correct</p> <p>$$rs = sr^{-1} \Leftrightarrow rsr = s \Leftrightarrow sr = r^{-1}s$$</p>
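The equivalence can also be checked concretely by realizing $D_{2n}$ as permutations of the vertices of an $n$-gon. The representation below ($r(i)=i+1$, $s(i)=-i$ modulo $n$) is one standard choice, used here purely as a sanity check:

```python
n = 7  # any n works; vertices are 0..n-1

r = lambda i: (i + 1) % n        # rotation
r_inv = lambda i: (i - 1) % n    # r^{-1}
s = lambda i: (-i) % n           # reflection

compose = lambda f, g: (lambda i: f(g(i)))  # (fg)(i) = f(g(i))

# rs = sr^{-1}  and, equivalently,  sr = r^{-1}s
assert all(compose(r, s)(i) == compose(s, r_inv)(i) for i in range(n))
assert all(compose(s, r)(i) == compose(r_inv, s)(i) for i in range(n))
print("both relations hold")
```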
3,624,953
<p>Let <span class="math-container">$(X_t)_{t \in [0, T]}$</span> be a stochastic process with paths which are a.s. continuous, the underlying space of which is irrelevant but is well defined. Let <span class="math-container">$a$</span> be a constant. Define two stopping times <span class="math-container">$$\tau_1 = \inf\{t \geq 0: X_t &gt; a\}$$</span> <span class="math-container">$$\tau_2 = \inf\{t \geq 0: X_t = a\}$$</span> Evidently, <span class="math-container">$X_{\tau_2} = a$</span>. However, can we claim <span class="math-container">$X_{\tau_1} = a$</span>? This "feels like" it has something to do with continuity/topology, but I cannot figure it out. Any help would be greatly appreciated.</p>
infinite_monkey
776,731
<p>In general they are not. Consider the constant process which is identically <span class="math-container">$a$</span> for all <span class="math-container">$t$</span>. Then <span class="math-container">$\tau_2 = 0$</span> a.s. but <span class="math-container">$\tau_1 = +\infty$</span>. (If you are working with the convention <span class="math-container">$\inf \emptyset = + \infty$</span>)</p>
2,699,483
<blockquote> <p>The probability that a fair-coin lands on either <strong>Heads</strong> or <strong>Tails</strong> on a certain day of the week is $1/14$. </p> <p><strong>Example: (H, Monday), (H, Tuesday) $...$ (T, Monday), (T, Tuesday) $...$</strong><br> Thus, $(1/2 \cdot 1/7) = 1/14$. There are $14$ such outcomes.</p> <p>In some arbitrary week, Tom flips <strong>two</strong> fair-coins. You don't know if they were flipped on the same day, or on different days. After this arbitrary week, Tom tells you that at least one of the flips was a <strong>Heads</strong> which he flipped on <strong>Saturday</strong>.</p> <p><strong>Determine the probability that Tom flipped two heads in that week.</strong></p> </blockquote> <p>I know that this is a conditional probability problem. </p> <p>The probability of getting two heads is $(1/2)^2 = 1/4$. Call this event <strong>$P$</strong>.</p> <p>I am trying to figure out the probability of Tom flipping at least one head on a Saturday. To get this probability, I know that we must compute the probability of there being no (H, Saturday) which is $1 - 1/14 = 13/14$.</p> <p>But then to get this "at least", we need to do $1 - 13/14$ which gives us $1/14$ again. Call this event $Q$. </p> <p>So is the probability of event $Q = 1/14$? It doesn't sound right to me.</p> <p>Afterwards we must do $Pr(P | Q) = \frac{P(P \cap Q)}{Pr(Q)}$. Now I'm not quite sure what $P \cap Q$ means in this context.</p>
jgon
90,543
<p>Remy has already given the correct answer, but is not confident because of a missing intuition, and I already more or less answered the question in comments on NewGuy's answer, so I'll just write it up and try to give an intuition for it. </p> <p>The sample space for a single coin flip is $\Omega=\newcommand{\set}[1]{\left\{#1\right\}}\set{H,T}\times \set{M,Tu,W,Th,F,Sa,S}$, and it has the uniform distribution, with each pair equally likely. We can think of this as flipping a fair coin and rolling a fair 7 sided die labeled with the days of the week together (a d7).</p> <p>The sample space then for two coin flips is $\Omega \times \Omega$, which again is the same as flipping 2 fair coins and rolling 2 d7s.</p> <p>If $M$ is the event that both coins are heads, and $N$ is the event that at least one of the coins was flipped on Saturday and was heads. Then $M\cap N$ is the event that both coins were heads and at least one was flipped on Saturday. Now we're interested in $$P(M|N) = \frac{P(M\cap N)}{P(N)}=\frac{|M\cap N|}{|N|},$$<br> so we just need to compute the sizes of $N$ and $M\cap N$. Let's start with $M\cap N$. Since we know both coins came up heads, we just need to work with the days of the week. The number of ways that at least one of the days of the week can be Saturday is $1+6+6=13$, corresponding to the possibilities $(Sa,Sa)$ or $(Sa,\text{not }Sa)$ or $(\text{not }Sa,Sa)$.</p> <p>Now we can do a similar thing for $N$. 
We get $|{N}|=1+13+13=27$ corresponding to the possibilities $(HSa,HSa)$ or $(HSa,\text{not }HSa)$ or $(\text{not }HSa,HSa)$.</p> <p><strong>Intuition:</strong> Why does knowing that one of the coins was a head flipped on a Saturday reduce the probability that the other coin was also a head (13/27) compared to say having a bronze and a silver coin and knowing that the bronze coin was a head flipped on a Saturday (probability that the other coin was also a head 1/2)?</p> <p>The issue is essentially, for each state in $M\cap N$, $HH(day_1)(day_2)$ if only one of those days is Saturday, say $day_1=Sa$ we get two states in $N$: $HHSa(day_2)$ and $HTSa(day_2)$, but if both days are $Sa$, we get <strong>three</strong> states in $N$: $HHSaSa$, $HTSaSa$ and $THSaSa$. I.e. in the case when both days are Saturday, we get an extra way to fail to be both heads. Or viewed the other way, the fact that the Saturday flips are interchangeable when they both come up heads means that while $HTSaSa$ and $THSaSa$ are different, they only have one success case associated to them namely: $HHSaSa$. </p>
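The $13/27$ count can be confirmed by enumerating the full $14\times 14$ sample space; a Python sketch:

```python
from itertools import product

days = range(7)
SAT = 6  # let 6 stand for Saturday
flips = [(c, d) for c in "HT" for d in days]   # 14 equally likely outcomes per coin

n = m_and_n = 0
for a, b in product(flips, repeat=2):          # 196 equally likely pairs
    at_least_one_HSat = (a == ("H", SAT)) or (b == ("H", SAT))
    both_heads = a[0] == "H" and b[0] == "H"
    if at_least_one_HSat:
        n += 1
        if both_heads:
            m_and_n += 1

print(m_and_n, n)  # 13 27
```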
1,123,247
<p>Let $a_n$ be a sequence of real numbers such that the series $\sum |a_n|^2$ is convergent. Find the range of $p$ such that the series $\sum |a_n|^p$ is convergent.</p> <p>My try:</p> <p>To show the series converges it is enough to show that the sequence of partial sums of $\sum |a_n|^p$ is bounded.</p> <p>Consider $S_n=|a_1|^p+|a_2|^p+...+|a_n|^p$. How to proceed next? Any inequality which can be used?</p>
Umberto P.
67,536
<p>If you denote $M = \max\{|a_n|\}$ you have $$\sum_{n=1}^\infty |a_n|^p = \sum_{n=1}^\infty |a_n|^{p-2} |a_n|^2 \le M^{p-2} \sum_{n=1}^\infty |a_n|^2 &lt; \infty$$whenever $p \ge 2$.</p>
3,029,519
<p>I have a variable, <span class="math-container">$X$</span>, which is normally distributed</p> <p><span class="math-container">$$X \sim \mathcal{N}(\mu, \sigma^2)$$</span></p> <p>I also have an event, <span class="math-container">$A$</span>, which measures the probability that <span class="math-container">$X$</span> is greater than or equal to 0. <span class="math-container">$$P[A] = P[X \geq 0] = \lim_{b \to +\infty} \int_{x=0}^{x=b} f(x)\,dx \quad \text{f(x) is PDF of X}$$</span></p> <p>I'd like to know what the derivative of <span class="math-container">$P[A]$</span> is w.r.t. <span class="math-container">$\mu$</span> and <span class="math-container">$\sigma^2$</span>. </p> <p><span class="math-container">$$\frac{\partial P[A]}{\partial \mu} $$</span> <span class="math-container">$$\frac{\partial P[A]}{\partial \sigma^2}$$</span></p> <p>I think this is really simple because I'm looking for the derivative of an integral, but would like to make sure I'm doing it right:</p> <p><span class="math-container">$$\frac{\partial P[A]}{\partial \mu} = \frac{\partial}{\partial \mu} \lim_{b \to +\infty} f(b) - f(0) = \frac{\partial}{\partial \mu} f(0) $$</span> <span class="math-container">$$\frac{\partial P[A]}{\partial \sigma^2} = \frac{\partial}{\partial \sigma^2} \lim_{b \to +\infty} f(b) - f(0) = \frac{\partial}{\partial \sigma^2} f(0)$$</span></p> <p>where</p> <p><span class="math-container">$$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$</span></p> <p>Are the above equations correct?</p>
Community
-1
<p>I think <span class="math-container">$$\frac{4}{n+4}-\frac{n}{(n+3)(n+4)}=\frac{4(n+3)-n}{(n+3)(n+4)}=\frac{3(n+4)}{(n+3)(n+4)}=\frac{3}{n+3}$$</span></p>
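The identity can be verified exactly with rational arithmetic; a quick Python check:

```python
from fractions import Fraction

# 4/(n+4) - n/((n+3)(n+4)) == 3/(n+3), checked exactly for many n.
for n in range(0, 200):
    lhs = Fraction(4, n + 4) - Fraction(n, (n + 3) * (n + 4))
    assert lhs == Fraction(3, n + 3)
print("identity holds")
```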
4,510,808
<p>Value of p such that <span class="math-container">$\mathop {\lim }\limits_{x \to \infty } \left( {{x^p}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right)$</span> is some finite | non-zero number.</p> <p>My approach is as follow</p> <p><span class="math-container">$\mathop {\lim }\limits_{x \to \infty } \left( {{x^p}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right) \Rightarrow \mathop {\lim }\limits_{x \to \infty } \left( {{x^{\frac{{3p}}{3}}}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right)$</span></p> <p><span class="math-container">$\mathop {\lim }\limits_{x \to \infty } \left( {\sqrt[3]{{{x^{3p}}}}\left( {\sqrt[3]{{x + 1}} + \sqrt[3]{{x - 1}} - 2\sqrt[3]{x}} \right)} \right) \Rightarrow \mathop {\lim }\limits_{x \to \infty } \left( {\sqrt[3]{{{x^{3p + 1}} + {x^{3p}}}} + \sqrt[3]{{{x^{3p + 1}} - {x^{3p}}}} - 2\sqrt[3]{{{x^{3p + 1}}}}} \right)$</span></p> <p>How do we proceed</p>
Prem
464,087
<p><span class="math-container">$\lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x+1} + \sqrt[3]{x-1} -2\sqrt[3]{x})} = \lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x}\sqrt[3]{1+1/x} + \sqrt[3]{x}\sqrt[3]{1-1/x} -2\sqrt[3]{x})}$</span></p> <p><span class="math-container">$\lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x+1} + \sqrt[3]{x-1} -2\sqrt[3]{x})} = \lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x}(1+1/(3x)-1/(9x^2) + \cdots) + \sqrt[3]{x}(1-1/(3x)-1/(9x^2) + \cdots) -2\sqrt[3]{x})}$</span></p> <p><span class="math-container">$\lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x+1} + \sqrt[3]{x-1} -2\sqrt[3]{x})} = \lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x}(-1/(9x^2) + \cdots) + \sqrt[3]{x}(-1/(9x^2) + \cdots) )}$</span></p> <p><span class="math-container">$\lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x+1} + \sqrt[3]{x-1} -2\sqrt[3]{x})} = \lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x}(-2/(9x^2) + \cdots)) }$</span></p> <p><span class="math-container">$\lim \limits_{x \to \infty } {x^{P}(\sqrt[3]{x+1} + \sqrt[3]{x-1} -2\sqrt[3]{x})} = \lim \limits_{x \to \infty } {x^{P+(1/3)-2}((-2/9) + \cdots) }$</span></p> <p>The higher negative powers will tend to <span class="math-container">$0$</span> in the limit.</p> <p>The only power term we have is <span class="math-container">$x^{P+(1/3)-2}$</span>, which must be <span class="math-container">$x^0$</span>; hence <span class="math-container">${P+(1/3)-2} = {0}$</span>, that is, <span class="math-container">$P = {5/3}$</span>.</p> <p>P less than that will give limit <span class="math-container">$0$</span><br /> P more than that will give limit <span class="math-container">$-\infty$</span> (the coefficient <span class="math-container">$-2/9$</span> is negative)</p> <p>When <span class="math-container">$P=5/3$</span>: the limit is <span class="math-container">$-2/9$</span></p>
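A numerical sanity check of the claim that with $P=5/3$ the limit is $-2/9$ (plain floating point; $x=10^4$ is already large enough here):

```python
# Check that x^(5/3) * (cbrt(x+1) + cbrt(x-1) - 2*cbrt(x)) -> -2/9.
x = 1.0e4
cbrt = lambda t: t ** (1.0 / 3.0)
val = x ** (5.0 / 3.0) * (cbrt(x + 1) + cbrt(x - 1) - 2 * cbrt(x))
print(val)  # close to -2/9 ≈ -0.2222
assert abs(val + 2.0 / 9.0) < 1e-3
```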
411,549
<p>Here is an modular equation</p> <p>$$5x \equiv 6 \bmod 4$$</p> <p>And I can solve it, $x = 2$.</p> <p>But what if each side of the above equation times <strong>8</strong>, which looks like this</p> <p>$$40x \equiv 48 \bmod 4$$</p> <p>Apparently now, $x = 0$. Why is that? Am I not solving the modular equation in a right way, or should I divide both side with their greatest-common-divisor before solving it?</p> <p>P.S.</p> <p>To clarify, I was solving a system of modular equations, using <strong>Gaussian Elimination</strong>, and after applying the elimination on the coefficient matrix, the last row of the echelon-form matrix is :</p> <p>$$0, \dots, 40 | 48$$</p> <p>but I think each row in the echelon-form should have been divided by its greatest common divisor, that turns it into :</p> <p>$$0, \dots, 5 | 6$$</p> <p>But apparently they result into different solution, one is $x = 0,1,2,3....$, the other $x = 2$. And why? Am I applying <strong>Gaussian-Elimination</strong> wrong?</p>
Brian M. Scott
12,042
<p>The congruence $40x\equiv48\pmod4$ means that $4\mid40x-48$. But $40x-48=4(10x-12)$, so this is always true: $40x\equiv48\pmod4$ for <strong>all</strong> integers $x$. Thus, $x\equiv0\pmod4$ is <strong>not</strong> the only solution.</p> <p><strong>Added:</strong> If you have the congruence $ax\equiv b\pmod m$, and $d$ is a common divisor of $a$ and $b$, you cannot simply divide through by $d$ and say that </p> <p>$$\frac{a}dx\equiv\frac{b}d\!\!\!\!\pmod m\;;$$</p> <p>it’s not generally true. However, if $d$ is a common divisor of $a,b$, <strong>and</strong> $m$, the original congruence is equivalent to the congruence</p> <p>$$\frac{a}dx\equiv\frac{b}d\left(\bmod \frac{m}d\right)\;.$$</p> <p>Here you can take $d=4$ to reduce the original problem to solving $10x\equiv12\pmod1$, and since all integers are congruent to one another mod $1$, you again arrive at the conclusion that $x$ can be any integer.</p>
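Both congruences can be compared by brute force over the residues mod 4; a small Python sketch:

```python
MOD = 4

sols_5x = [x for x in range(MOD) if (5 * x - 6) % MOD == 0]
sols_40x = [x for x in range(MOD) if (40 * x - 48) % MOD == 0]

print(sols_5x)   # [2]          : only x ≡ 2 (mod 4)
print(sols_40x)  # [0, 1, 2, 3] : every residue solves 40x ≡ 48 (mod 4)
```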
1,818,557
<p>I will be teaching some "topology" to high school students. I was wondering how to explain to such a school student that on a sphere the shortest path between 2 points is given by a great circle?</p> <p>Also, how to explain that if they lived on a sphere they would have no notion of "above" or "below"? I cannot find a nice way to convince them, since they see the sphere embedded in 3D.</p>
user7530
7,530
<p>If you connect the two points by a rubber band in the shape of a meandering path on the sphere, it is intuitive the rubber band will snap into a great circle shape.</p> <p>Alternatively you can explain the geodesic as the path a magnetic marble would take if the sphere were a steel ball, and you let the marble roll along the surface of the sphere. It will roll in a great circle path, without veering "left" or "right."</p> <p>I'm not sure what you mean about not seeing above or below, since the sphere is orientable?</p> <hr> <p>I'm still not sure what you mean by "looking on the sky or ground." You mean that we are used to thinking of the sphere as having codimension 1, which is meaningless if we think of the sphere as an abstract manifold rather than embedded in space?</p> <p>I suppose you could argue by analogy to the circle. If you draw a circle on paper, you can travel perpendicular from the circle in two directions, inside and outside. But this is an accident of the fact that the circle is drawn on the paper. If instead you form a hoop in 3D, there is no longer two ways of moving away from the circle -- there are many directions you can travel, and the hoop no longer separates space into any kind of inside or outside.</p> <p>I suppose to really blow their minds you could show the horned sphere...</p>
198,105
<p>I have runtimes for requests on a webserver. Sometimes events occur that cause the runtimes to skyrocket (we've all seen the occasionally slow web page before). Sometimes, they plummet, due to terminated connections and other events. I am trying to come up with a consistent method to throw away spurious events so that I can evaluate performance more consistently.</p> <p>I am trying Chauvenet's Criterion, and I am finding that, in some cases, it claims that all of my data points are outliers. How can this be? Take the following numbers for instance:</p> <pre><code>[30.0, 38.0, 40.0, 43.0, 45.0, 48.0, 48.0, 51.0, 60.0, 62.0, 69.0, 74.0, 78.0, 80.0, 83.0, 84.0, 86.0, 86.0, 86.0, 87.0, 92.0, 101.0, 103.0, 108.0, 108.0, 109.0, 113.0, 113.0, 114.0, 119.0, 123.0, 127.0, 128.0, 130.0, 131.0, 133.0, 138.0, 139.0, 140.0, 148.0, 149.0, 150.0, 150.0, 164.0, 171.0, 177.0, 180.0, 182.0, 191.0, 200.0, 204.0, 205.0, 208.0, 210.0, 227.0, 238.0, 244.0, 249.0, 279.0, 360.0, 378.0, 394.0, 403.0, 489.0, 532.0, 533.0, 545.0, 569.0, 589.0, 761.0, 794.0, 1014.0, 1393.0] </code></pre> <p><code>73</code> values. A mean of <code>222.29</code>, and a standard deviation of <code>236.87</code>. Chauvenet's criterion for the value <code>227</code> would have me calculate the probability according to a normal distribution (<code>0.001684</code> if my math is correct). That number times <code>73</code> is <code>.123</code>, less than <code>.5</code> and thus an outlier. What am I doing wrong here? Is there a better approach that I should be taking?</p>
Michael R. Chernick
30,995
<p>Methods such as Chauvenet's and Peirce's are 19th century attempts at formal testing for outliers from a normal distribution. They are not sound approaches because they do not consider the distribution of the largest and smallest values from n iid normally distributed variables. Tests such as Grubbs' and Dixon's do take proper account of the distribution of the extreme order statistics from a normal distribution and should be used over Chauvenet or Peirce. As the Wikipedia articles mention, outlier detection is different from outlier rejection. Rejecting a data point merely on the basis of an outlier test is controversial, and many statisticians, including me, don't agree with the idea of rejecting outliers based solely on these types of tests.</p>
702,506
<p>We have an exam in $3$ hours and I need help with solving such trigonometric equations on given intervals.</p> <p>How to solve</p> <p>$$\sin x - \cos x = -1$$</p> <p>for the interval $(0, 2\pi)$.</p>
Patrick Da Silva
10,704
<p>Square it. You get $$ 1 - 2 \sin x \cos x = \sin^2x + \cos^2 x - 2 \sin x \cos x = (\sin x - \cos x)^2 = 1 $$ which implies $\sin x = 0$ or $\cos x = 0$. Therefore the only possible solutions are multiples of $\pi/2$. See which ones are actual solutions of your problem (we squared, so $\sin x - \cos x = \pm 1$ when $x = k \pi / 2$). (Only $3 \pi / 2$ works in the interval $]0,2\pi[$.)</p> <p>Hope that helps,</p>
702,506
<p>We have an exam in $3$ hours and I need help with solving such trigonometric equations on given intervals.</p> <p>How to solve</p> <p>$$\sin x - \cos x = -1$$</p> <p>for the interval $(0, 2\pi)$.</p>
Barry Cipra
86,747
<p>Rewrite the equation as $\sin x=\cos x-1$, square both sides, and use the identity $\sin^2x=1-\cos^2x$ to get</p> <p>$$1-\cos^2x=\cos^2x-2\cos x+1$$</p> <p>which simplifies to</p> <p>$$\cos x(1-\cos x)=0$$</p> <p>so that either $\cos x=0$ or $\cos x=1$. The former corresponds to $x=\pi/2$ and $3\pi/2$ in the interval $0\lt x\lt2\pi$. The latter corresponds to $x=0$ or $2\pi$, neither of which is in the interval. So there are just two solutions to the <em>squared</em> equation in the interval $(0,2\pi)$. However, $x=\pi/2$ does not satisfy the <em>original</em> equation. So there is just one solution, namely $x=3\pi/2$.</p> <p><strong>Added later</strong>: Here's a second approach to solving the equation. First, rewrite it as</p> <p>$$\sin\theta=\cos\theta-1$$</p> <p>and then think of this as describing the intersection of the unit circle in the $xy$ plane with the line</p> <p>$$y=x-1$$</p> <p>If you sketch this, you see that the line passes through the unit circle at $(x,y)=(0,-1)$ and $(1,0)$. The first point corresponds to the angle $\theta=3\pi/2$, which is in the interval $(0,2\pi)$, while the other point corresponds to $\theta=0$, which isn't.</p>
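The candidate solutions produced by squaring can be checked numerically against the original equation; a quick Python sketch:

```python
import math

g = lambda x: math.sin(x) - math.cos(x)

# Candidates from the squared equation: where cos x = 0 in (0, 2*pi).
candidates = [math.pi / 2, 3 * math.pi / 2]
solutions = [x for x in candidates if abs(g(x) + 1) < 1e-12]
print(solutions)  # only 3*pi/2 survives
```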
702,506
<p>We have an exam in $3$ hours and I need help with solving such trigonometric equations on given intervals.</p> <p>How to solve</p> <p>$$\sin x - \cos x = -1$$</p> <p>for the interval $(0, 2\pi)$.</p>
robjohn
13,854
<p><strong>Hint:</strong> Use the formula for the sine of a difference to get $$ \begin{align} \sqrt2\sin(x-\pi/4) &amp;=\sqrt2\Big(\sin(x)\cos(\pi/4)-\cos(x)\sin(\pi/4)\Big)\\ &amp;=\sin(x)-\cos(x) \end{align} $$</p>
716,767
<p>I feel like an idiot for asking this, but I can't get my formula to work with negative numbers.</p> <p>Assume you want to know the percentage of an increase/decrease between numbers:</p> <pre><code>2.39 1.79 =100-(1.79/2.39*100)=&gt; which is 25.1% decrease </code></pre> <p>But how would I change this formula when there are some negative numbers?</p> <pre><code>6.11 -3.73 =100-(-3.73/6.11*100) which is 161% but should be -161% </code></pre> <p>The negative sign is lost... what am I missing here?</p> <p>Also:</p> <pre><code>-2.1 0.6 =100-(0.6/-2.1*100) which is 128.6% ??? is it? </code></pre>
Hans Knudsen
178,134
<p>I know this is a very old thread, but I am here for the first time so I hope it is OK to comment.</p> <p>Let's take an example:</p> <hr> <p>Original value $-10$, final value $10$:</p> <p>$\frac{Original\ value - Final\ value}{Original\ value} = 200\% \ increase $</p> <hr> <p>Original value $-1$, final value $10$:</p> <p>$\frac{Original\ value - Final\ value}{Original\ value} = 1100\% \ increase $</p> <hr> <p>How can an increase from a smaller number ($-10$) to $10$ be a lesser percentage than an increase from a larger number ($-1$) to $10$?</p> <p>I’m not a mathematician, but I don’t think percent change with values of opposite signs is defined.</p> <p>See also: <a href="http://online.wsj.com/public/resources/documents/doe-help.htm" rel="noreferrer">http://online.wsj.com/public/resources/documents/doe-help.htm</a></p> <p>(the section named Net Income) </p>
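The inconsistency pointed out above can be computed directly, using the answer's own $(\text{original} - \text{final})/\text{original}$ convention:

```python
def pct_change(original, final):
    # The convention used in the answer: (original - final) / original.
    return (original - final) / original * 100

print(pct_change(-10, 10))  # 200.0  -> "200% increase"
print(pct_change(-1, 10))   # 1100.0 -> "1100% increase"
# A jump from -10 to 10 reports a *smaller* percentage than a jump from -1
# to 10, which is why percent change across a sign change is not meaningful.
```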
18,530
<p>Sorry about the title, I have no idea how to describe these types of problems.</p> <p>Problem statement:</p> <p>$A(S)$ is the set of 1-1 mappings of $S$ onto itself. Let $S \supset T$ and consider the subset $U(T) = $ { $f \in A(S)$ | $f(t) \in T$ for every $t \in T$ }. $S$ has $n$ elements and $T$ has $m$ elements. Show that there is a mapping $F:U(T) \rightarrow S_m$ such that $F(fg) = F(f)F(g)$ for $f, g \in U(T)$ and $F$ is onto $S_m$.</p> <p>How do I write up this reasoning: When I look at the sets $S$ = { 1, 2, ..., $n$} and $T$ = { 1, 2, ..., $m$}, I can see that there are a bunch of permutations of the elements of $T$ within $S$. I can see there are $(n - m)!$ members of $S$ for each permutation of $T$'s elements. But there needs to be some way to get a handle on the positions of the elements in $S$ and $T$ in order to compare them to each other. But $S$ isn't any particular set, like a set of integers, so how can I relate the positions of the elements to one another? Or, is this the wrong way to go about it?</p> <p>Example:</p> <p>$U(T_3 \subset S_6) = \left( \begin{array}{cccccc} 1 &amp; 2 &amp; 3 &amp; 4 &amp; 5 &amp; 6 \\ 1 &amp; 2 &amp; 3 &amp; 4 &amp; 6 &amp; 5 \\ 1 &amp; 2 &amp; 3 &amp; 5 &amp; 4 &amp; 6 \\ 1 &amp; 2 &amp; 3 &amp; 5 &amp; 6 &amp; 4 \\ 1 &amp; 2 &amp; 3 &amp; 6 &amp; 4 &amp; 5 \\ 1 &amp; 2 &amp; 3 &amp; 6 &amp; 5 &amp; 4 \\ 1 &amp; 3 &amp; 2 &amp; 4 &amp; 5 &amp; 6 \\ 1 &amp; 3 &amp; 2 &amp; 4 &amp; 6 &amp; 5 \\ 1 &amp; 3 &amp; 2 &amp; 5 &amp; 4 &amp; 6 \\ 1 &amp; 3 &amp; 2 &amp; 5 &amp; 6 &amp; 4 \\ 1 &amp; 3 &amp; 2 &amp; 6 &amp; 4 &amp; 5 \\ 1 &amp; 3 &amp; 2 &amp; 6 &amp; 5 &amp; 4 \\ 2 &amp; 1 &amp; 3 &amp; 4 &amp; 5 &amp; 6 \\ 2 &amp; 1 &amp; 3 &amp; 4 &amp; 6 &amp; 5 \\ 2 &amp; 1 &amp; 3 &amp; 5 &amp; 4 &amp; 6 \\ 2 &amp; 1 &amp; 3 &amp; 5 &amp; 6 &amp; 4 \\ 2 &amp; 1 &amp; 3 &amp; 6 &amp; 4 &amp; 5 \\ 2 &amp; 1 &amp; 3 &amp; 6 &amp; 5 &amp; 4 \\ 2 &amp; 3 &amp; 1 &amp; 4 &amp; 5 &amp; 6 \\ 2 &amp; 3 &amp; 1 &amp; 4 &amp; 6 
&amp; 5 \\ 2 &amp; 3 &amp; 1 &amp; 5 &amp; 4 &amp; 6 \\ 2 &amp; 3 &amp; 1 &amp; 5 &amp; 6 &amp; 4 \\ 2 &amp; 3 &amp; 1 &amp; 6 &amp; 4 &amp; 5 \\ 2 &amp; 3 &amp; 1 &amp; 6 &amp; 5 &amp; 4 \\ 3 &amp; 1 &amp; 2 &amp; 4 &amp; 5 &amp; 6 \\ 3 &amp; 1 &amp; 2 &amp; 4 &amp; 6 &amp; 5 \\ 3 &amp; 1 &amp; 2 &amp; 5 &amp; 4 &amp; 6 \\ 3 &amp; 1 &amp; 2 &amp; 5 &amp; 6 &amp; 4 \\ 3 &amp; 1 &amp; 2 &amp; 6 &amp; 4 &amp; 5 \\ 3 &amp; 1 &amp; 2 &amp; 6 &amp; 5 &amp; 4 \\ 3 &amp; 2 &amp; 1 &amp; 4 &amp; 5 &amp; 6 \\ 3 &amp; 2 &amp; 1 &amp; 4 &amp; 6 &amp; 5 \\ 3 &amp; 2 &amp; 1 &amp; 5 &amp; 4 &amp; 6 \\ 3 &amp; 2 &amp; 1 &amp; 5 &amp; 6 &amp; 4 \\ 3 &amp; 2 &amp; 1 &amp; 6 &amp; 4 &amp; 5 \\ 3 &amp; 2 &amp; 1 &amp; 6 &amp; 5 &amp; 4 \end{array} \right),A(T_3) = \left( \begin{array}{ccc} 1 &amp; 2 &amp; 3 \\ 1 &amp; 3 &amp; 2 \\ 2 &amp; 1 &amp; 3 \\ 2 &amp; 3 &amp; 1 \\ 3 &amp; 1 &amp; 2 \\ 3 &amp; 2 &amp; 1 \end{array} \right)$</p>
Neigyl Noval
6,126
<p>$\log$ is often used in statistical results. Here are a few examples:</p> <ol> <li><p>If I have obtained data with values spanning orders of magnitude (like: $10, 100, 50000, 6000000000$), I can use log in my graph to shorten the length of the axis values (see: log-log graph).</p></li> <li><p>There are results which increase exponentially, like decaying systems, half-lives, and many systems in control systems engineering. As stated in example 1, it is hard to illustrate values that increase without bound.</p></li> <li><p>There are many instances where you cannot directly find how many times a number is multiplied by itself to produce a result. In equation form, you need to find $x$ in this problem: $5^x = 81.859$.</p></li> </ol> <p>$x\log x$ means the same thing: a shortened value multiplied by an amplitude to somewhat increase its value. It usually comes up when solving initial value problems in differential equations (often in mechatronics). </p>
24,361
<p>Let $X$ be a topological space and let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves over $X$.</p> <p>Of course, if one has a morphism $f : \mathcal{F} \to \mathcal{G}$ such that for all $x\in X$, $f_x : \mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, then it is known that $f$ itself is an isomorphism.</p> <p>My question is the following: if we don't have such a morphism $f$, but if we know that for all $x\in X$, $\mathcal{F}_x$ and $\mathcal{G}_x$ are isomorphic, is it true that $\mathcal{F}$ and $\mathcal{G}$ are isomorphic ?</p>
Andrea Ferretti
828
<p>No, indeed there exist sheaves which are locally isomorphic to locally constant sheaves, without being locally constant. These are usually called local coefficient systems.</p> <p>It is not hard to see that, for a nice space $X$, e.g. a manifold, to give a sheaf locally isomorphic to the locally constant sheaf $\underline{G}$ associated to the group $G$ is the same as to give a homomorphism $\pi_1(X) \to \mathop{Aut}(G)$. In particular whenever there is a nontrivial homomorphism, there exist nontrivial local coefficient systems.</p>
24,361
<p>Let $X$ be a topological space and let $\mathcal{F}$ and $\mathcal{G}$ be two sheaves over $X$.</p> <p>Of course, if one has a morphism $f : \mathcal{F} \to \mathcal{G}$ such that for all $x\in X$, $f_x : \mathcal{F}_x \to \mathcal{G}_x$ is an isomorphism, then it is known that $f$ itself is an isomorphism.</p> <p>My question is the following: if we don't have such a morphism $f$, but if we know that for all $x\in X$, $\mathcal{F}_x$ and $\mathcal{G}_x$ are isomorphic, is it true that $\mathcal{F}$ and $\mathcal{G}$ are isomorphic ?</p>
Qfwfq
4,721
<p>No: think of non-isomorphic vector bundles over $X$. They are stalkwise isomorphic even as modules (over $\mathcal{O}_{X,x}$ at various $x \in X$, where $\mathcal{O}_X$ is the sheaf of continuous functions on $X$), hence as abelian groups.</p>
138,866
<p>I have data in a csv file. The first row has labels, and the first column, too.</p> <pre><code>Datos = Import["C:\\Users\\jodom\\Desktop\\Data.csv"] </code></pre> <p>The data in the csv file is:</p> <pre><code>{{"No", "Vol", "Vel"}, {1, 500, 45}, {2, 700, 67}, {3, 350, 87}, {4, 123, 23}, {5, 587, 45}, {6, 435, 89}, {7, 896, 65}, {8, 125, 45}, {9, 476, 27}, {10, 987, 80}} </code></pre> <p>I put the csv data into a dataset:</p> <pre><code>B = Dataset[Datos] </code></pre> <p>You can see how it looks in Mathematica after the import in this image: <a href="https://drive.google.com/file/d/0B56r_V66BiodQUhUMWNHcHZFOWc/view?usp=sharing" rel="noreferrer">https://drive.google.com/file/d/0B56r_V66BiodQUhUMWNHcHZFOWc/view?usp=sharing</a></p> <p>Now I want to turn the first row, which has the labels, into the dataset's column headers, and the first column into row labels, so I can get data from this dataset like </p> <pre><code>Dataset[labelrow, labelcolumn] </code></pre>
kglr
125
<pre><code>datos = {{"No", "Vol", "Vel"}, {1, 500, 45}, {2, 700, 67}, {3, 350, 87}, {4, 123, 23}, {5, 587, 45}, {6, 435, 89}, {7, 896, 65}, {8, 125, 45}, {9, 476, 27}, {10, 987, 80}}; ds = Dataset[AssociationThread[datos[[2;;,1]]-&gt; (AssociationThread[Rest@First@datos-&gt;Rest[#]]&amp;/@Rest[datos])]] </code></pre> <p><img src="https://i.stack.imgur.com/nPuXK.png" alt="Mathematica graphics"></p> <pre><code>ds[2, "Vel"] </code></pre> <blockquote> <p>67</p> </blockquote>
4,231,580
<p>I have been working on th integral <span class="math-container">$$\int_0^\infty \frac{\sin x}{1+x^2} dx$$</span> for a short while now trying substitutions and even Laplace transforms and other stuff but gave up. I looked to Wolfram Alpha but how did it get the series expansion of the constant that it did (see <a href="https://www.wolframalpha.com/input/?i=series+integral+of+sin%28x%29%2F%28x%5E2%2B1%29+from+0+to+inf" rel="nofollow noreferrer">here</a>)? I tried doing a little algebra with the expression from Wolfram Alpha it looks like an integral and there are more things that could be done but I don't know. Thanks.</p>
Community
-1
<p>In the hope that it becomes a bit more visible, what happens behind the scenes, I tried to extract all steps from the linked notebook:</p> <p><span class="math-container">$\frac{\text{Ei}(1)-e^2 \text{Ei}(-1)}{2 e}=\sum _{k=1}^{\infty } -\frac{e^2 (-1)^k-1}{2 e k k!}-\frac{e \gamma }{2}+\frac{\gamma }{2 e}$</span></p> <p><span class="math-container">$\frac{\text{Ei}(1)-e^2 \text{Ei}(-1)}{2 e}=-\frac{e \sum _{k=1}^{\infty } \sum _{j=1}^k \frac{(-1)^k}{j k!}-e \sum _{k=1}^{\infty } \sum _{j=1}^k \frac{(-1)^{2 k}}{j k!}+e^2 \gamma -\gamma }{2 e}$</span></p> <p><span class="math-container">$\frac{\text{Ei}(1)-e^2 \text{Ei}(-1)}{2 e}=\sum _{k=1}^{\infty } \frac{(-1)^k \left(e^2 \left(-z_0-1\right){}^k-\left(1-z_0\right){}^k\right) \Gamma \left(k,-z_0\right)}{2 e k! z_0^k}+\frac{\text{Ei}\left(z_0\right)}{2 e}-\frac{e \text{Ei}\left(z_0\right)}{2}$</span></p> <p><span class="math-container">$\frac{\text{Ei}(1)-e^2 \text{Ei}(-1)}{2 e}=\frac{e^{x+2} \sum _{k=1}^{\infty } \sum _{j=0}^{k-1} \frac{(-1)^{k-j} (-x-1)^k x^{j-k}}{k j!}-e^x \sum _{k=1}^{\infty } \sum _{j=0}^{k-1} \frac{(-1)^{k-j} (1-x)^k x^{j-k}}{k j!}-(2 i) e^2 \pi \left\lfloor \frac{\arg (-x-1)}{2 \pi }\right\rfloor +(2 i) \pi \left\lfloor \frac{\arg (1-x)}{2 \pi }\right\rfloor +\text{Ei}(x)-e^2 \text{Ei}(x)+i \pi }{2 e}$</span></p> <p><span class="math-container">$\frac{\text{Ei}(1)-e^2 \text{Ei}(-1)}{2 e}=\sum _{k=1}^{\infty } -\frac{\left((1-x)^k-e^2 (-x-1)^k\right) \left((-1)^k k!-k x \, _2\tilde{F}_2(1,1;2,2-k;x)\right)}{2 e k k! 
x^k}-i e \pi \left\lfloor \frac{\arg (-x-1)}{2 \pi }\right\rfloor +\frac{i \pi \left\lfloor \frac{\arg (1-x)}{2 \pi }\right\rfloor }{e}+\frac{\text{Ei}(x)}{2 e}-\frac{e \text{Ei}(x)}{2}+\frac{i \pi }{2 e}$</span></p> <p>Note that <span class="math-container">$\Gamma(x)$</span> is the Gamma function, <span class="math-container">$\text{Ei}(x)$</span> denotes the exponential integral, <span class="math-container">$\gamma$</span> is the <a href="https://mathworld.wolfram.com/Euler-MascheroniConstant.html" rel="nofollow noreferrer">Euler-Mascheroni Constant</a>, <span class="math-container">$\arg(z)$</span> is the <a href="https://mathworld.wolfram.com/ComplexArgument.html" rel="nofollow noreferrer">complex argument</a> and <span class="math-container">$\tilde{F}$</span> denotes the <a href="https://mathworld.wolfram.com/GeneralizedHypergeometricFunction.html" rel="nofollow noreferrer">Generalized Hypergeometric Function</a>.</p>
3,072,142
<p>Isn't <span class="math-container">$\mathbb Q[X]/(X^2+1)\cong \mathbb Q[i]$</span> wrong? Shouldn't it be <span class="math-container">$\mathbb Q[X]/(X^2+1)\cong \mathbb Q(i)$</span>?</p> <p>Indeed, <span class="math-container">$\mathbb Q[X]/(X^2+1)$</span> is a field, whereas <span class="math-container">$\mathbb Q[i]$</span> is a ring (whose fraction field is <span class="math-container">$\mathbb Q(i)$</span>).</p>
José Carlos Santos
446,262
<p>For any constant function <span class="math-container">$k$</span>, <span class="math-container">$\int_0^1k\,\mathrm d\lambda=k$</span>. So, yes, your argument is correct.</p>
163,153
<p>How do I find the absolute maximum and absolute minimum values of $f$ on the given interval?</p> <p>$f(x) = 6x^3 − 9x^2 − 36x + 7, \ [−2, 3]$</p>
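Not an official solution, but the standard closed-interval method — evaluating $f$ at the endpoints and at the critical points, here $x=-1$ and $x=2$ from $f'(x)=18x^2-18x-36=18(x-2)(x+1)=0$ — can be carried out numerically:

```python
def f(x):
    return 6 * x**3 - 9 * x**2 - 36 * x + 7

# Candidates for the extrema on [-2, 3]: the endpoints plus the critical points
# where f'(x) = 18x^2 - 18x - 36 = 18(x - 2)(x + 1) = 0, i.e. x = -1 and x = 2.
candidates = [-2, -1, 2, 3]
values = {x: f(x) for x in candidates}
print(values)                                      # {-2: -5, -1: 28, 2: -53, 3: -20}
print(max(values.values()), min(values.values()))  # 28 -53
```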
Ilies Zidane
38,639
<p>For the first question: An application of the $\lambda$-lemma.</p> <p>Theorem: Let $c_0$ and $c_1$ be in the same component $U$ of $\mathbb C \backslash \partial M$ (where $M$ is the Mandelbrot set); then $J_{c_0}$ and $J_{c_1}$ (the Julia sets) are homeomorphic (even via a quasiconformal homeomorphism) and the dynamics of the two polynomials $z^2+c_0$ and $z^2+c_1$ are conjugate on the Julia sets.</p> <p>Proof: First notice that $M$ and $\mathbb C \backslash M$ are connected (Douady-Hubbard theorem). So $U$ is conformally equivalent to $\mathbb D$ (because it is simply connected).</p> <p>Let $Q_k \subset U \times \mathbb C$ be the set defined by the equation $P_c^k(z)=z$ where $P_c(z) = z^2+c$ and denote by $p_k : Q_k \longrightarrow U$ the projection onto the first factor. $Q_k$ is closed and $p_k$ is the restriction to $Q_k$ of the projection onto the first factor, so $p_k$ is a proper map. Moreover the two functions $(c,z)\mapsto P_c^k(z)-z$ and $(c,z)\mapsto (P_c^k(z))'-1$ vanish simultaneously at a discrete set $Z \subset U \times \mathbb C$, and the map</p> <p>$p_k : Q_k \backslash p_k^{-1}(p_k(Z)) \longrightarrow U \backslash p_k(Z)$</p> <p>is a finite-sheeted covering map: it's proper and a local homeomorphism.</p> <p>Let $c^\star$ be a point of $p_k(Z)$ and $U' \subset U$ a simply connected neighborhood of $c^\star$ containing no other point of $p_k(Z)$. Write $Q'_k = p_k^{-1}(U')$ and $Q_k^\star = Q'_k \backslash p_k^{-1}(c^\star)$. Denote by $Y_i$ the connected components of $Q^\star_k$; each of these is a finite cover of $U^\star = U' \backslash \{c^\star\}$. The closure of each $Y_i$ in $Q'_k$ is $Y_i \cup \{y_i\}$ for some $y_i \in \mathbb C$. If $(P^k_{c^\star})'(y_i)\neq 1$ then by the implicit function theorem $Q'_k$ is near $y_i$ the graph of an analytic function $\phi_i : U' \longrightarrow \mathbb C$.</p> <p>Now let $Y_i$ be a component such that $(P^k_{c^\star})'(y_i)=1$.
If $(c,z)\mapsto (P_c^k)'(z)$ is not constant on $Y_i$, then its image contains a neighborhood of $1$, in particular points of the unit circle, and the corresponding points of $Y_i$ are indifferent cycles that are not persistent. This cannot happen, so $(c,z)\mapsto (P_c^k)'(z)$ is constant on every such component $Y_i$.</p> <p>From the above it follows that if $R_k \subset Q_k$ is the subset of repelling cycles, then the projection $p_k : R_k \longrightarrow U$ is a covering map. Indeed, it is a local homeomorphism by the implicit function theorem, and proper since a sequence $(c_n,z_n)$ in $R_k$ converging in $Q_k$ cannot converge to a point $(c^\star,z^\star)$ where $(P^k_{c^\star})'(z^\star) = 1$. Hence the set of all repelling cycles of $P_c$ is a holomorphic motion. By the $\lambda$-lemma, this map extends to the closure of the set of repelling points, i.e. to the Julia set $J_c$, which also forms a holomorphic motion. $\square$</p> <p>See also: Mane-Sad-Sullivan theorem.</p> <p>I don't really understand your second question.</p>
2,812,122
<blockquote> <p>For any real numbers $x$ and $y$ satisfying $x^2y + 6y = xy^3 +5x^2 +2x$, it is known that $$(x^2 + 2xy + 3y^2) \, f(x,y) = (4x^2 + 5xy + 6y^2) \, g(x,y)$$<br> Given that $g(0,0) = 6$, find the value of $f(0,0)$.</p> </blockquote> <p>I have tried expressing $f(x,y)$ in terms of $g(x,y)$. But seems that some tricks have to been done to further on the question. Can anyone figure out the expression?</p>
ned grekerzberg
489,746
<p>First we shall see that:</p> <p>$\frac1n\sum_{\omega:\omega^n=1} \omega^{-\alpha}f(\omega) = \frac1n\sum_{\omega:\omega^n=1}\omega^{-\alpha }\sum_{j}a_j\omega^j = \frac1n\sum_{\omega:\omega^n=1}\sum_{j}\omega^{j-\alpha }a_j =$</p> <p>$ = \frac1n\sum_{j}\sum_{\omega:\omega^n=1}\omega^{j-\alpha }a_j$ </p> <p>I leave it to you to think about why we can perform the last step (changing the order of summation). Also note that $\omega^{j-\alpha} = 1 \iff j \equiv \alpha \ (mod \ n)$, and because there are $n$ roots of unity in $\mathbb{C}$ we get:</p> <p>$ \frac1n\sum_{j}\sum_{\omega:\omega^n=1}\omega^{j-\alpha }a_j = \frac1n\sum_{j \equiv \alpha (mod \ n)}\sum_{\omega:\omega^n=1}a_j + \frac1n\sum_{j \neq \alpha (mod \ n)}\sum_{\omega:\omega^n=1}\omega^{j-\alpha }a_j = \sum_{j \equiv \alpha (mod \ n)} \frac{n}{n} a_j + \frac1n\sum_{j \neq \alpha (mod \ n)}a_j\sum_{\omega:\omega^n=1}\omega^{j-\alpha }$</p> <p>But we know that $\forall \ 0&lt;m&lt;n: \sum_{\omega:\omega^n=1}\omega^{m} = 0$, and therefore we get $\frac1n\sum_{j \neq \alpha (mod \ n)}a_j\sum_{\omega:\omega^n=1}\omega^{j-\alpha } = 0$ (as $j-\alpha \not\equiv 0 \ (mod \ n)$), and we're done (-:</p>
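A numerical sanity check of the filter (my addition, using the answer's notation $f(x)=\sum_j a_j x^j$): take $f(x)=(1+x)^6$, $n=3$, $\alpha=0$; the filter should return $\binom60+\binom63+\binom66 = 22$.

```python
import cmath
from math import comb

def root_of_unity_filter(coeffs, n, alpha):
    """(1/n) * sum over n-th roots of unity w of w^(-alpha) * f(w),
    where coeffs[j] is the coefficient a_j of x^j in f."""
    total = 0
    for k in range(n):
        w = cmath.exp(2j * cmath.pi * k / n)
        f_w = sum(a * w**j for j, a in enumerate(coeffs))
        total += w**(-alpha) * f_w
    return total / n

coeffs = [comb(6, j) for j in range(7)]                    # f(x) = (1 + x)^6
filtered = root_of_unity_filter(coeffs, 3, 0)
direct = sum(comb(6, j) for j in range(7) if j % 3 == 0)   # 1 + 20 + 1 = 22
print(round(filtered.real), direct)                        # 22 22
```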
4,096,516
<p>Calculate <span class="math-container">$\operatorname{tg}( \alpha), $</span> if <span class="math-container">$\frac{\pi}{2} &lt; \alpha&lt;\pi$</span> and <span class="math-container">$\sin( \alpha)= \frac{2\sqrt{29}}{29}$</span>. Please provide a hint.<br /> I know that <span class="math-container">$\operatorname{tg}( \alpha)=\frac{\sin( \alpha)}{\cos( \alpha)}$</span> and <span class="math-container">$\sin^2( \alpha)+\cos^2( \alpha)=1$</span>, but still can't get the answer from there.</p>
PrincessEev
597,568
<p>José Carlos Santos provided an algebraic answer; I'll provide a more geometric one.</p> <p>Try drawing a picture. Since <span class="math-container">$\alpha$</span> is between <span class="math-container">$\pi/2$</span> and <span class="math-container">$\pi$</span>, it lies in Quadrant II. The triangle that becomes its reference triangle has an &quot;opposite&quot; (vertical) side length of <span class="math-container">$2 \sqrt{29}$</span> and the hypotenuse has length <span class="math-container">$29$</span>.</p> <p><a href="https://i.stack.imgur.com/GepQj.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GepQj.png" alt="enter image description here" /></a></p> <p>From here you can find <span class="math-container">$\tan(\beta)$</span> in the reference triangle (the acute angle next to <span class="math-container">$\alpha$</span>) - you will probably need to use the Pythagorean theorem. Then account for the sign of tangent in Quadrant II to get what <span class="math-container">$\tan(\alpha)$</span> would be.</p> <p>I'll leave the calculations and justifying these steps to you.</p>
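A quick numeric cross-check of this geometry (added for illustration, not part of the argument): with $\sin\alpha = 2\sqrt{29}/29$ and $\alpha$ in Quadrant II, the adjacent side of the reference triangle is $5\sqrt{29}$ and $\tan\alpha = -2/5$.

```python
import math

sin_a = 2 * math.sqrt(29) / 29          # = 2/sqrt(29)
alpha = math.pi - math.asin(sin_a)      # the Quadrant II angle with this sine
print(math.tan(alpha))                  # ≈ -0.4, i.e. tan(alpha) = -2/5

# Pythagorean check on the reference triangle: adjacent = sqrt(29^2 - (2*sqrt(29))^2)
adjacent = math.sqrt(29**2 - (2 * math.sqrt(29))**2)
print(adjacent, 5 * math.sqrt(29))      # both ≈ 26.926, so adjacent = 5*sqrt(29)
```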
1,990,638
<p>This is a purely affine variety. Define $F_1=y-x^2, F_2=z-xy, F_3=xz-y^2$. </p> <p>It is clear that $(x,x^2,x^3)$ is a solution to $F_1=F_2=F_3=0$. Since the projective twisted cubic is not the intersection of two quadrics (such an intersection contains an extra line), I would not expect this to hold in the affine case either. However, $F_1\cap F_2$ seems to be exactly the twisted cubic curve, and I could not find any trace of the extra line. The other two intersections (i.e. $F_2\cap F_3$ and $F_1\cap F_3$) do yield two extra lines.</p> <p>What have I done wrong here?</p>
Georges Elencwajg
3,217
<p>1) It is perfectly correct that the affine curve $$C=\{(u,u^2,u^3)\vert u\in k\}\subset \mathbb A^3_k=\mathbb A^3_{x,y,z}=\mathbb A^3\:$$ is the complete intersection of the affine smooth quadrics $Q_1=V(F_1), Q_2=V(F_2)\subset \mathbb A^3$. </p> <p>2) The curve $C$ has as closure in $\mathbb P^3=\mathbb P^3_{[t:x:y:z]}$ the smooth complete curve $\overline C=C\cup \{[0:0:0:1]\}$. </p> <p>3) The quadrics $Q_1, Q_2$ have as closures in $\mathbb P^3$ the quadrics $\overline {Q_1}=V(\overline {F_1}), \overline {Q_2}=V(\overline {F_2})\subset \mathbb P^3$ where $\overline {F_1}=ty-x^2, \overline {F_2}=tz-xy$. </p> <p>4) However it is <strong>not true</strong> that $\overline C=\overline {Q_1}\cap \overline {Q_2}$: actually $\overline {Q_1}\cap \overline {Q_2}=\overline C\cup L$ where $L\subset \mathbb P^3$ is the line given by the equations $ t=x=0$ (thus, a line included in the plane at infinity $t=0$). </p> <p>5) Worse still, the curve $\overline C$ is not the complete intersection of any two surfaces in $\mathbb P^3$.<br> We have to take the intersection of at least three hypersurfaces to get that curve: for example $\overline C=\overline {Q_1}\cap \overline {Q_2}\cap \overline {Q_3}$ where $\overline {Q_3}=V(F_3)=V(xz-y^2)$. </p> <p><strong>6) Explanation of the paradox</strong><br> To get rid of the parasitic component $L$ of $\overline {Q_1}\cap \overline {Q_2}$ and obtain just $\overline C$ we had to call $\overline Q_3$ to our rescue.<br> However restricting the situation to $\mathbb A^3$ gets rid of $L$ for free, since $L\subset \mathbb P^3\setminus \mathbb A^3$.<br> And that is why $C$ is already the complete intersection $C=Q_1\cap Q_2\subset \mathbb A^3$ of just two quadrics, whereas $\overline C$ cannot be obtained as the complete intersection of two surfaces.</p>
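One way to see point 1) concretely (my addition, not part of the answer): in the affine setting the third quadric is redundant because of the polynomial identity $F_3 = x\,F_2 - y\,F_1$, so $V(F_1,F_2)\subset V(F_3)$ automatically. The identity is easy to confirm:

```python
import random

def F1(x, y, z): return y - x**2
def F2(x, y, z): return z - x * y
def F3(x, y, z): return x * z - y**2

# Check the polynomial identity F3 = x*F2 - y*F1 at many random integer points;
# agreement at enough points certifies an identity of such low degree.
for _ in range(100):
    x, y, z = (random.randint(-50, 50) for _ in range(3))
    assert F3(x, y, z) == x * F2(x, y, z) - y * F1(x, y, z)
print("F3 = x*F2 - y*F1 holds")
```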
313,254
<p>I know this question has been asked before on MO and MSE (<a href="https://mathoverflow.net/questions/59605/reference-in-riemann-surfaces">here</a>, <a href="https://math.stackexchange.com/questions/407004/good-book-for-riemann-surfaces">here</a>, <a href="https://math.stackexchange.com/questions/1839673/books-on-riemann-surfaces">here</a>, <a href="https://math.stackexchange.com/questions/200537/complex-analysis-book-with-a-view-toward-riemann-surfaces">here</a>) but the answers that were given were only partially helpful to me, and I suspect that I am not the only one.</p> <p>I am about to teach a first course on Riemann surfaces, and I am trying to get a fairly comprehensive view of the main references, as a support for both myself and students.</p> <p>I compiled a list, here goes in alphabetical order. Of course, it is necessarily subjective. For more detailed entries, I made a bibliography using the bibtex entries from MathSciNet: <a href="https://www.brice.loustau.eu/teaching/RiemannSurfaces2018/References.pdf" rel="noreferrer">click here</a>.</p> <ol> <li>Bobenko. Introduction to compact Riemann surfaces. </li> <li>Bost. Introduction to compact Riemann surfaces, Jacobians, and abelian varieties.</li> <li>de Saint-Gervais. Uniformisation des surfaces de Riemann: retour sur un théorème centenaire.</li> <li>Donaldson. Riemann surfaces.</li> <li>Farkas and Kra. Riemann surfaces.</li> <li>Forster. Lectures on Riemann surfaces.</li> <li>Griffiths. Introduction to algebraic curves.</li> <li>Gunning. Lectures on Riemann surfaces.</li> <li>Jost. Compact Riemann surfaces.</li> <li>Kirwan. Complex algebraic curves.</li> <li>McMullen. Complex analysis on Riemann surfaces.</li> <li>McMullen. Riemann surfaces, dynamics and geometry.</li> <li>Miranda. Algebraic curves and Riemann surfaces.</li> <li>Narasimhan. Compact Riemann surfaces.</li> <li>Narasimhan and Nievergelt. Complex analysis in one variable.</li> <li>Reyssat. 
Quelques aspects des surfaces de Riemann.</li> <li>Springer. Introduction to Riemann surfaces.</li> <li>Varolin. Riemann surfaces by way of complex analytic geometry.</li> <li>Weyl. The concept of a Riemann surface.</li> </ol> <p>Having a good sense of what each of these books does, beyond a superficial first impression, is quite a colossal task (at least for me).</p> <p>What I'm hoping is that if you know one of these references very well, you can give a short description of it: where it stands in the existing literature, what approach/viewpoint is adopted, what are its benefits and pitfalls. Of course, I am also happy to update the list with new references, especially if I missed some major ones.</p> <p>As an example, for Forster's book (6.) I can just use the accepted answer <a href="https://math.stackexchange.com/questions/407004/good-book-for-riemann-surfaces">there</a>: According to <a href="https://math.stackexchange.com/users/71348/ted-shifrin">Ted Shifrin</a>:</p>
M.G.
1,849
<p>As it is evident from your bibliography list, there are two aspects of the theory: Riemann surfaces in the sense of 1-dimensional complex manifolds (which are not necessarily algebraic) and Complex Algebraic Curves (which are not necessarily smooth). It should be pointed out that some authors (old-school?) still use the term Riemann Surface to mean a Complex Algebraic Curve, regardless of whether it is smooth or not, thus also excluding the non-compact case.</p> <p><em>I will now make a list of additional sources on Riemann Surfaces and Complex Algebraic Curves not present in your list and that focus exclusively on one or both of these two topics and then will edit my answer to add some information on each of them.</em> There are many more references that include Riemann Surfaces and Complex Algebraic Curves as subsets of, for example, bigger text on Complex Geometry - for the moment I won't be mentioning them, but let me know if you are interested, they can be good sources too for some topic.</p> <p>Legend: <em>italicized</em> references are present in OP's original list</p> <ol> <li>Arbarello, Cornalba, Griffiths, Harris - Geometry of Algebraic Curves Vol. I &amp; II (1985,2011): As comprehensive as it is, this is <strong>not</strong> a first course on Complex Algebraic Curves, but rather reflects the state of the art at the time of writing. Notice the big difference between the years of the first and the second volume. The central topic of the first volume is Linear Series, while the second volume deals with all kinds of moduli spaces of curves. In the introduction of the first volume the authors write that the reader should have a working knowledge of algebraic geometry in the amount of the first chapter of Hartshorne's, but I don't think this actually suffices, perhaps they actually meant the second and third chapter of Hartshorne's. 
The second volume is above my paygrade to comment on :-)</li> <li>Bertola - Riemann Surfaces and Theta Functions (lecture notes): it has a completely analytic approach, focusing mostly on the compact case after introducing the initial generalities. It contains a nice discussion of the three kinds of abelian differentials by means of the theta divisor and introduces bidifferentials.</li> <li><em>Bobenko - Compact Riemann Surfaces</em>: (obviously) it deals only with smooth complex algebraic curves, but it takes an analytic approach. It does not use sheaves. It contains a proof of Riemann-Roch (not all of them do). While it introduces all three kinds of abelian differentials, it does not discuss any of the reciprocity laws. It finishes with introducing line bundles.</li> <li>Brieskorn, Knörrer - Plane Algebraic Curves (1986): </li> <li>Cavalieri, Miles - Riemann Surfaces and Algebraic Curves, A First Course in Hurwitz Theory (2016): as the title suggests, it is an approach to Complex Algebraic Curves with strong focus on Hurwitz Theory. The basics of Riemann surfaces are laid out and then the authors move on to the counting. IMO the book is suitable for an undergraduate course since the prerequisites are low. However, singular complex algebraic curves are barely touched upon.</li> <li>Clemens - A Scrapbook of Complex Curve Theory (2ed.,2003)</li> <li>Dubrovin - Integrable Systems and Riemann Surfaces (lecture notes,2009): see the next reference. For Dubrovin Riemann Surfaces are complex algebraic curves. The notes are based on his book in Russian. However, what is not included in the next reference is the connection with differential equations. The first (out of three) part of the notes is dedicated to the KdV equation, while the third part deals with Baker-Akhiezer Functions.</li> <li>Tamara Grava - Riemann Surfaces (lecture notes,2014): improved version based on Dubrovin's notes, but defines a Riemann Surface as a 1-dimensional complex-analytic manifold.
It does not use sheaves. It deals almost only with compact Riemann Surfaces via an analytic approach, but also gives a discussion of resolution of singularities for complex algebraic curves. It includes a proof of Riemann-Roch. It does not mention line bundles at all. The chapter on divisors should be read with extra care as there might be one or two hasty statements :-)</li> <li>Eynard - Lectures on Compact Riemann Surfaces (2018)</li> <li><em>Farkas, Kra - Riemann Surfaces (1980)</em>: it includes both the non-compact and the compact case and the treatment is analytic. It uses no sheaves (though IIRC the sheaf of holomorphic functions is given a definition somewhere). I am not fond of their proof of the reciprocity between abelian differentials of the third kind: IMO it is more elegant to introduce only one cut in the fundamental polygon, namely between the points <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> instead of 4 cuts from a fixed origin point on the boundary of the fundamental polygon <span class="math-container">$O$</span> to <span class="math-container">$P$</span> and <span class="math-container">$O$</span> to <span class="math-container">$Q$</span> and then backwards (I could include a proof sketch for the <span class="math-container">$PQ$</span> cut if anyone is interested). Moreover, there is a proper (sub)chapter on intersection theory on Riemann Surfaces.</li> <li>Fulton - Algebraic Curves, Introduction to Algebraic Geometry (2008): it is a standard algebraic geometry introduction to algebraic curves over an algebraically closed field in a classical way, i.e. without sheaves and schemes.</li> <li>Gibson - Elementary Geometry of Algebraic Curves (1998): it is similar in spirit to Fulton's book, but it is probably (even) more visual and example-oriented.
</li> <li><em>Gunning - Riemann Surfaces and 2nd Order Theta Functions</em></li> <li>Gunning - Some Topics in the Function Theory of Compact Riemann Surfaces (draft ver July 2015): definitely not recommended as a first read. It discusses standard topics of Riemann Surfaces like Holomorphic and Meromorphic Differentials etc. from a more advanced POV, definitely sheaf-theoretic. IMO, proofs can sometimes be a little terse to follow, but after all it is only a draft and not meant as an introductory course for undergraduates.</li> <li><em>Griffiths - Introduction to Algebraic Curves (revised,1985)</em>: analytic approach without sheaf theory and sheaf cohomology. However, it is the only book on Riemann Surfaces (in a broad sense) I know of that discusses normalization in detail!</li> <li>Harris - Geometry of Algebraic Curves (lecture notes from Harvard,2015)</li> <li><em>Kirwan - Complex Algebraic Curves (1992)</em></li> <li>Kunz - Introduction to Plane Algebraic Curves (2005)</li> <li>Lang - Introduction to Algebraic and Abelian Functions (2ed.,1982)</li> <li><em>Miranda - Algebraic Curves and Riemann Surfaces (1995)</em></li> <li>Mumford - Curves and Their Jacobians (1999)</li> <li><em>Narasimhan - Compact Riemann Surfaces (1992,reprint,1996)</em></li> <li>Perutz - Riemann Surfaces (lecture notes,2016)</li> <li><em>Springer - Introduction to Riemann Surfaces (1957)</em></li> <li>Teleman - Riemann Surfaces (lecture notes,2003): though short (69 pages only), personally I found it to be very illuminating on many points and contains several nice, albeit hand-drawn, pictures!</li> <li><em>Varolin - Riemann Surfaces by Way of Complex Analytic Geometry</em>: it completely avoids sheaves despite being very detailed.</li> </ol>
2,460,195
<p>I had the following question:</p> <p>Three actors are to be chosen out of five — Jack, Steve, Elad, Suzy, and Ali. What is the probability that Jack and Steve would be chosen, but Suzy would be left out?</p> <p>The answer given was: Total Number of actors = $5$; Since Jack and Steve need to be in the selection and Suzy is to be left out, only one selection matters. Number of actors apart from Jack, Steve, and Suzy = $2$; Probability of choosing 3 actors including Jack and Steve, but not Suzy = $$\frac{C(2,1)}{C(3,5)} = \frac{1}{5}$$</p> <p>I do not understand the answer. What do they mean by only one selection matters? It looks like they are choosing $1$ person from $2$ combinations? Why? Can anyone please explain this.</p> <p>Thanks</p>
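The quoted answer can be verified by direct enumeration (my own check): among all $\binom53 = 10$ triples, exactly the two containing Jack, Steve, and one of the remaining two actors qualify.

```python
from itertools import combinations
from fractions import Fraction

actors = ["Jack", "Steve", "Elad", "Suzy", "Ali"]
triples = list(combinations(actors, 3))
favorable = [t for t in triples
             if "Jack" in t and "Steve" in t and "Suzy" not in t]
print(len(favorable), len(triples))              # 2 10
print(Fraction(len(favorable), len(triples)))    # 1/5
```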
iamwhoiam
412,755
<p>Note that $y + z$ must be even. Hence, $y$ and $z$ are either both even or both odd. </p> <p>When they are both even (say $y = 2c_1$ and $z = 2c_2$), the equation reduces to $x + c_1 + c_2 = 8$. The number of non-negative integer solutions in this case is $\binom{10}{8}$. When they are both odd (say $y = 2c_1 + 1$ and $z = 2c_2 + 1$), the equation reduces to $x + c_1 + c_2 = 7$. The number of non-negative integer solutions in this case is $\binom{9}{7}$.</p> <p>Thus, we get a total of $\binom{10}{8} + \binom{9}{7} = 81$ solutions.</p>
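The equation being counted isn't restated in the answer, but the substitutions ($y = 2c_1$, $z = 2c_2$ giving $x + c_1 + c_2 = 8$) point to $2x + y + z = 16$. Brute force confirms the total of $81$:

```python
from math import comb

# Count non-negative integer solutions of 2x + y + z = 16
# (the equation inferred from the substitutions in the answer).
count = sum(1
            for x in range(9)          # 2x <= 16
            for y in range(17)
            for z in range(17)
            if 2 * x + y + z == 16)
print(count)                           # 81
print(comb(10, 8) + comb(9, 7))        # 81, matching the binomial count
```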
234,668
<p>I have an equation in $x$ and I would like to determine if it has any solutions modulo a large prime $p$. Suppose $p$ is large enough that I can factor numbers up to $p$, but I cannot test all values up to $p$. (Actually, so far, I have been doing just that -- but I'd like to avoid this as it takes a long time. If you can avoid factoring, all the better.)</p> <p>The particular equation I have is $$ x^4-x^2\equiv4\pmod p $$ but I would be interested in</p> <ol> <li>Solutions to this particular problem, or more generally</li> <li>Solutions to other quadratics$\pmod p$ in $x^2$, or more generally</li> <li>Solutions to quartics$\pmod p$.</li> </ol> <p>I'm familiar with quadratic reciprocity but not with cubic or biquadratic. (It's not clear to me if this can be transformed so they can be used; if so, demonstrating the transformation and giving a pointer to a good source on higher reciprocity would suffice as an answer.)</p>
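For case 1 (and more generally quadratics in $x^2$), solvability can be decided with Euler's criterion instead of trying every $x$: completing the square gives $x^2 \equiv (1\pm\sqrt{17})/2 \pmod p$, so a solution exists iff $17$ is a quadratic residue and one of those two values is. A sketch of this idea (my own, not a vetted implementation; the square root of $17$ is found by brute force here, so substitute Tonelli–Shanks for genuinely large $p$):

```python
def legendre(a, p):
    """Euler's criterion: 1 if a is a nonzero QR mod p, p-1 if a non-residue, 0 if a ≡ 0."""
    return pow(a % p, (p - 1) // 2, p)

def solvable(p):
    """True iff x^4 - x^2 - 4 ≡ 0 (mod p) has a solution, for an odd prime p."""
    if legendre(17, p) == p - 1:
        return False                       # t = (1 ± √17)/2 does not exist mod p
    # find √17 mod p by brute force (fine for small p; Tonelli-Shanks for large p)
    s = next(r for r in range(p) if r * r % p == 17 % p)
    inv2 = pow(2, -1, p)
    for t in ((1 + s) * inv2 % p, (1 - s) * inv2 % p):
        if t == 0 or legendre(t, p) != p - 1:
            return True                    # t is a square, so some x has x^2 = t
    return False

def brute(p):
    """Exhaustive check, for cross-validation on small primes."""
    return any((x**4 - x**2 - 4) % p == 0 for x in range(p))

for p in (5, 13, 19, 23):
    print(p, solvable(p), brute(p))        # e.g. 19 -> True, 5 -> False
```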
The Substitute
17,955
<p>$n=2m+1 \Rightarrow kn=2mk+k=k(2m+1)$. Note that the latter expression is the product of an integer and an odd integer, which is not always odd.</p> <p>For example, Even*Odd: $(2a)\cdot (2b+1)=4ab+2a=2a(2b+1)$, which is even.</p>
701,176
<p>A function $f$ is differentiable over its domain and has the following properties:</p> <ol> <li><p>$\displaystyle f(x+y)=\frac{f(x)+f(y)}{1-f(x)f(y)}$</p></li> <li><p>$\lim_{h \to 0} f(h) = 0$</p></li> <li><p>$\lim_{h \to 0} f(h)/h = 1$</p></li> </ol> <p>i) Show that $f(0)=0$</p> <p>ii) show that $f'(x)=1+[f(x)]^2$ by using the def of derivatives Show how the above properties are involved.</p> <p>iii) find $f(x)$ by finding the antiderivative. Use the boundary condition from part (i).</p> <hr> <p>So basically I think I found out how to do part 1 because if $x+y=0$ then the top part of the fraction will always have to be zero.</p> <p>part 2 and 3 are giving me trouble. The definition is the limit $(f(x+h)-f(x))/h$</p> <p>So I can set $x+y=h$ and make the numerator equal to $f(h)$?</p> <p>Thanks for all who help</p>
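Although the problem doesn't say so, the three properties are exactly those of $f(x)=\tan x$ (the tangent addition formula, $\tan 0=0$, and $\lim_{h\to0}\tan h/h=1$), which is what part (iii) ultimately yields. A quick numeric check, assuming that answer:

```python
import math

f = math.tan

# Property 1: the addition formula f(x+y) = (f(x)+f(y)) / (1 - f(x)f(y))
x, y = 0.3, 0.4
lhs = f(x + y)
rhs = (f(x) + f(y)) / (1 - f(x) * f(y))
print(abs(lhs - rhs))            # ≈ 0 (floating-point noise)

# Property 3 and part (ii): f(h)/h -> 1 and f'(x) = 1 + f(x)^2
h = 1e-6
print(f(h) / h)                                # ≈ 1.0
print((f(x + h) - f(x)) / h, 1 + f(x) ** 2)    # both ≈ 1.0957
```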
Harshal Gajjar
132,479
<p>$a^3−6a^2−2$ does hit the $x$-axis, at $a\approx 6.0546$. But $6.0546$ is only an approximation of the root, so placing this value of $a$ in the expression gives $a^3−6a^2−2 \approx 0.001536$, which is close to, but not exactly, $0$.</p>
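To make the point concrete (my illustration): substituting the rounded value leaves a small residual, and a few bisection steps push the approximation as close to the true (irrational) root as floating point allows:

```python
def f(a):
    return a**3 - 6 * a**2 - 2

print(f(6.0546))        # ≈ 0.00154 -- small but not exactly 0

# Bisection on [6.0, 6.1], where f changes sign (f(6.0) = -2, f(6.1) > 0)
lo, hi = 6.0, 6.1
for _ in range(60):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid
print(lo, f(lo))        # root near 6.0546, residual at machine-precision level
```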
183,039
<p>I'm pretty weak in the field of mathematics, but a strong programmer. I am looking for a mathematical method that, given two points, will give me a curve between them, including those two points within the curve itself.</p> <p>For instance, if I have the set of points { (0, 3) (1,10) }, I'd like a mathematical way to generate points between the two (I believe this is called interpolation) to create a curve that will contain { (0,3) (1,10) }.</p> <p>Will linear interpolation give me this? </p> <p>Thank you</p>
MJD
25,554
<p>You could do that, but if a straight line is acceptable, the best-of-breed algorithm for calculating which pixels to plot is <a href="http://en.wikipedia.org/wiki/Bresenham%27s_line_algorithm" rel="nofollow">Bresenham's algorithm</a>. It is easy to program, produces good-looking output, and it is extremely efficient.</p> <p>If you are interested in curved curves, you have a lot of choices. People often use <a href="http://en.wikipedia.org/wiki/Cubic_spline" rel="nofollow">cubic splines</a>, because they are graceful, fit together well, and the algorithm is easy to write and runs quickly. There is <a href="http://en.wikipedia.org/wiki/Midpoint_circle_algorithm" rel="nofollow">a variation of Bresenham's algorithm for circular arcs</a> instead of straight lines. If this isn't enough information, you should consider posting another question that elaborates on what you are looking for. </p>
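If you end up coding it yourself, Bresenham's algorithm fits in a few lines. Here is a minimal sketch (a generic all-octant integer version; the variable names are my own):

```python
def bresenham(x0, y0, x1, y1):
    """Integer pixel coordinates on the line from (x0, y0) to (x1, y1),
    endpoints included. Works in all octants."""
    points = []
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx - dy                      # accumulated error term
    while True:
        points.append((x0, y0))
        if (x0, y0) == (x1, y1):
            break
        e2 = 2 * err
        if e2 > -dy:                   # step in x
            err -= dy
            x0 += sx
        if e2 < dx:                    # step in y
            err += dx
            y0 += sy
    return points

print(bresenham(0, 0, 5, 2))   # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

Note that it uses only integer additions and comparisons in the loop, which is what makes it so fast.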
2,965,865
<blockquote> <p>Finding the minimum value of <span class="math-container">$\displaystyle \frac{x^2 +y^2}{y}$</span>, where <span class="math-container">$x,y$</span> are real numbers satisfying <span class="math-container">$7x^2 + 3xy + 3y^2 = 1$</span></p> </blockquote> <p>Try: The equation <span class="math-container">$7x^2+3xy+3y^2=1$</span> represents an ellipse</p> <p>centered at the origin.</p> <p>So substitute <span class="math-container">$x=r\cos \alpha $</span> and <span class="math-container">$y=r\sin \alpha$</span> </p> <p>in <span class="math-container">$7x^2+3xy+3y^2=1$</span>:</p> <p><span class="math-container">$$3r^2+4r^2\cos^2 \alpha+3r^2\sin \alpha \cos \alpha =1$$</span></p> <p><span class="math-container">$$3r^2+2r^2(1+\cos 2 \alpha)+\frac{3r^2}{2}\sin 2 \alpha =1$$</span></p> <p><span class="math-container">$$10r^2+r^2(4\cos 2 \alpha+3\sin 2\alpha)=2$$</span></p> <p>So <span class="math-container">$$r^2=\frac{2}{10+(4\cos 2 \alpha+3\sin 2\alpha)}$$</span></p> <p><span class="math-container">$$\frac{2}{10+5}=\frac{2}{15}\leq r^2\leq \frac{2}{10-5}=\frac{2}{5}$$</span></p> <p>We have to find the minimum of <span class="math-container">$$\frac{x^2+y^2}{y}=\frac{r}{\sin \alpha}$$</span></p> <p>How can I find it? Could someone help me.</p>
smcc
354,034
<p>The problem has no solution (assuming you are looking for a (global) minimum rather than just a local minimum). As we approach the point <span class="math-container">$(1/\sqrt{7},0)$</span> along the constraint curve from below the <span class="math-container">$x$</span>-axis, the value of the function goes to <span class="math-container">$-\infty$</span>: </p> <p>The points</p> <p><span class="math-container">$$\left(\frac{\sqrt{28-75{y}^{2}}-3y}{14},y\right)$$</span></p> <p>satisfy the constraint. We have</p> <p><span class="math-container">$$\begin{align*}f\left(\frac{\sqrt{28-75{y}^{2}}-3y}{14},y\right)&amp;=\frac{65y}{98}+\frac{1}{7y}-\frac{3}{98}\sqrt{28-75{y}^{2}}\to-\infty\end{align*}$$</span></p> <p>as <span class="math-container">$y\to0^-$</span>.</p>
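A numeric illustration of this blow-up (added for concreteness): following the stated branch of the constraint as $y\to0^-$, the objective is dominated by the $\frac{1}{7y}$ term.

```python
import math

def x_on_curve(y):
    """The x-branch used above: solve 7x^2 + 3xy + 3y^2 = 1 for x."""
    return (math.sqrt(28 - 75 * y**2) - 3 * y) / 14

fs = []
for y in (-0.1, -0.01, -0.001):
    x = x_on_curve(y)
    # confirm the point lies on the ellipse, then evaluate the objective
    assert abs(7 * x**2 + 3 * x * y + 3 * y**2 - 1) < 1e-12
    fs.append((x**2 + y**2) / y)
print(fs)   # roughly [-1.65, -14.45, -143.0]: dominated by the 1/(7y) term
```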
1,830,867
<p>I'm trying to self-study some algebraic topology, reading Hatcher. His questions seem much less straightforwardly worded than Munkres - with Munkres it was always clear that you weren't expected to know much coming in, whereas Hatcher seems to assume a lot of knowledge. Often simple seeming statements of his just seem vague and unclear to me, even though I also think they ought to be basic.</p> <p>Right now I'm stuck deciphering exercise 1.2:</p> <blockquote> <p>Show that the change-of-basepoint homomorphism $\beta_h$ depends only on the homotopy class of $h$.</p> </blockquote> <p>At a guess, this means that given any two homotopic paths $h_1$ and $h_2$, for all loops $f$, there exists a homotopy between $\beta_{h_1}(f)$ and $\beta_{h_2}(f)$ ? I'm uncertain, as though I'm guessing at what Hatcher means rather than actually understanding him. Is this the correct interpretation, or am I off? And in general, anything I could do to more easily understand Hatcher? Thanks!</p>
Alex Ermolin
292,258
<p>What I think is meant by this question is this. Given a topological space $X$ and a point $p \in X$ we define the fundamental group of $X$ with basepoint $p$ to be $\pi_1(X, p) = \{[\gamma] \mid \gamma \text{ is a loop based at }p\}$. Here $[\gamma]$ denotes the homotopy class of $\gamma$.</p> <p>Given $p, q \in X$ and a path $h$ in $X$ connecting $p$ to $q$ we can define a function $$\beta_h:\pi_1(X, p) \to \pi_1(X, q)$$ by $$\beta_h([\gamma]) = [\overline h * \gamma * h]$$ where $\overline h$ denotes the reverse of $h$.</p> <p>As an exercise you can show that this mapping is well defined, and furthermore that it is a group homomorphism. I believe this is what Hatcher refers to as the change-of-basepoint homomorphism.</p> <p>Now the question asks you to prove that if $h, h'$ are homotopic then $\beta_h = \beta_{h'}$.</p>
1,460,561
<p>Let $\lambda =1$ be the eigenvalue corresponding to the single Jordan block $J$. Prove $J^m \sim J$ for an arbitrary positive integer $m$.</p> <p>My try: Because $\lambda = 1$ is the eigenvalue, $(J-I)^m =0$. From that, $(J-I)^{m-1} J = (J-I)^{m-1}$. At this point, I do not know how to continue.</p>
Andreas Cap
202,204
<p>First observe that any power $J^m$ of $J$ is an upper triangular matrix with all entries on the main diagonal equal to $1$. To prove that $J^m$ is similar to $J$, you can show that $J$ is the Jordan normal form of $J^m$. To do this, it suffices to show that $(J^m-I)^k$ is non-zero provided that $(J-I)^k$ is non-zero (since this implies that in the Jordan form of $J^m$ there must be one block of the same size as $J$). </p> <p>In the following computations, all matrices involved are polynomials in $J$ and hence commute. In particular, since $J$ and $I$ commute, you can use the standard formula to see that $$J^m-I=J^m-I^m=(J-I)(\sum_{i+j=m-1}J^iI^j)=(J-I)(\sum_{i=0}^{m-1}J^i).$$ Now the second factor in this product is an upper triangular matrix with all entries on the main diagonal equal to $m$ and hence invertible. Forming powers (taking into account that all matrices we consider commute), you get $(J^m-I)^k=(J-I)^k(\sum_{i=0}^{m-1}J^i)^k$. Since the second factor still is an invertible matrix, the claim follows. </p>
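<p>The rank argument is easy to check by machine. Here is a pure-Python sketch of my own (the block size $n=5$ and power $m=3$ are arbitrary choices), using exact rational arithmetic: the ranks of $(J^m-I)^k$ and $(J-I)^k$ agree for every $k$, so $J^m$ and $J$ have the same Jordan structure.</p>

```python
from fractions import Fraction

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_pow(A, p):
    n = len(A)
    R = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for _ in range(p):
        R = mat_mul(R, A)
    return R

def mat_sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def rank(M):
    # Gaussian elimination over the rationals (exact, no round-off)
    M = [row[:] for row in M]
    n, r = len(M), 0
    for c in range(n):
        piv = next((i for i in range(r, n) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(n):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [x - factor * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

n, m = 5, 3
# single Jordan block with eigenvalue 1: ones on the diagonal and superdiagonal
J = [[Fraction(int(j == i or j == i + 1)) for j in range(n)] for i in range(n)]
I = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
Jm = mat_pow(J, m)

ranks_J = [rank(mat_pow(mat_sub(J, I), k)) for k in range(n + 1)]
ranks_Jm = [rank(mat_pow(mat_sub(Jm, I), k)) for k in range(n + 1)]
```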
4,126,238
<p>Prove <span class="math-container">$f$</span> is uniformly continuous <span class="math-container">$\implies$</span> there exist <span class="math-container">$C, D$</span> such that <span class="math-container">$|f(x)| &lt; C + D|x|$</span>.</p> <p>Proof below. Please verify or critique.</p> <p>By definition of uniform continuity, there exists <span class="math-container">$\delta &gt; 0$</span> such that <span class="math-container">$|x_a - x_b| \leq \delta \implies |f(x_a)- f(x_b)| &lt; 1$</span>. Choose <span class="math-container">$D &gt; 1/\delta$</span> and <span class="math-container">$C &gt; |f(0)| + D + 1$</span>.</p> <p>For any <span class="math-container">$x$</span>, <span class="math-container">$|f(x)| - |f(0)| \leq |f(x) - f(0)| \leq \sum_{0 &lt; j \leq |x/\delta|+1}|f(j\delta) - f((j-1)\delta)| \leq |x/\delta|+1$</span>, so <span class="math-container">$|f(x)| \leq |x/\delta| + 1 + |f(0)| &lt; C + D|x|$</span>.</p>
Siong Thye Goh
306,553
<p>It seems that you are assuming <span class="math-container">$x$</span> to be positive and trying to partition <span class="math-container">$[0, x]$</span> and use triangle inequalities. Be careful: <span class="math-container">$x$</span> can be negative, so make sure you partition the interval correctly.</p> <p>Let's define a partition of <span class="math-container">$[0,|x|]$</span> such that each segment has length at most <span class="math-container">$\delta$</span>.</p> <p>Let <span class="math-container">$m = \left\lfloor \frac{|x|}{\delta} \right\rfloor$</span>, so <span class="math-container">$$m \le \frac{|x|}{\delta} &lt; m+1$$</span></p> <p><span class="math-container">$$m \delta \le |x| &lt; (m+1)\delta.$$</span></p> <p>Let's divide the interval into <span class="math-container">$m+1$</span> segments with</p> <p><span class="math-container">$$x_i = \begin{cases} i \delta, &amp; i \le m \\ |x|, &amp; i = m+1\end{cases}$$</span></p> <p>Let <span class="math-container">$s= \operatorname{sign}(x)$</span>,</p> <p><span class="math-container">\begin{align}|f(x)|-|f(0)| &amp;\le |f(x)-f(0)| \\&amp;\le \sum_{i=0}^{m}|f(sx_{i+1})-f(sx_i)|\\&amp;\le (m+1)\\ &amp;\le \left(\frac{|x|}{\delta}+1 \right) \end{align}</span></p> <p>Also, I think you can just choose <span class="math-container">$C&gt;|f(0)|+1$</span>, though you can pick a larger number.</p>
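<p>The partition bookkeeping can be illustrated with a small Python sketch (my own illustration, not from the answer). The choices <span class="math-container">$f(x)=\sqrt{|x|}$</span> and <span class="math-container">$\delta=1$</span> are hypothetical: for them, <span class="math-container">$|u-v|\le\delta$</span> implies <span class="math-container">$|f(u)-f(v)|\le1$</span>.</p>

```python
import math

def telescope_count(f, x, delta):
    """Partition [0, |x|] into segments of length <= delta and telescope f."""
    s = 1.0 if x >= 0 else -1.0
    m = math.floor(abs(x) / delta)
    pts = [i * delta for i in range(m + 1)] + [abs(x)]
    segments = list(zip(pts, pts[1:]))                 # m + 1 segments
    assert all(b - a <= delta + 1e-12 for a, b in segments)
    total = sum(abs(f(s * b) - f(s * a)) for a, b in segments)
    return total, len(segments)

f = lambda t: math.sqrt(abs(t))    # uniformly continuous; |f(u)-f(v)| <= 1 when |u-v| <= 1
x, delta = -12.7, 1.0              # note: x is negative, as warned above
total, count = telescope_count(f, x, delta)
```

<p>By the triangle inequality <span class="math-container">$|f(x)-f(0)|\le$</span> <code>total</code> <span class="math-container">$\le m+1 \le |x|/\delta + 1$</span>, which is exactly the linear bound.</p>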
2,028,799
<p>I've been trying to get this limit for hours. Can someone help me, please?<br> The solution manual says it's 0 but I can't get there. I tried to use $\lim_{h\to0} {h-\cos(h)\over h} = 0$.</p> <p>$$\lim_{x\to 0}\frac{x\cos(1/x)}{x-\sqrt{x}}.$$<br> Thank you.</p>
Simply Beautiful Art
272,831
<p>Divide numerator and denominator by $x$.</p> <p>$$\frac{x\cos(1/x)}{x-\sqrt x}=\frac{\cos(1/x)}{1-x^{-0.5}}$$</p> <p>Now see that</p> <p>$$-1\le\cos(1/x)\le1$$</p> <p>And apply squeeze theorem.</p> <hr> <p>Remark: Notice that $\frac{x\cos(1/x)}{x-\sqrt{x}}$ is not defined for $x&lt;0$, so the above limit only makes sense as $x\to0^+$.</p>
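<p>Numerically (a Python sketch of my own), the squeeze bound <span class="math-container">$|g(x)|\le\frac{\sqrt x}{1-\sqrt x}$</span> for <span class="math-container">$0&lt;x&lt;1$</span> forces the one-sided limit to <span class="math-container">$0$</span>:</p>

```python
import math

g = lambda x: x * math.cos(1.0 / x) / (x - math.sqrt(x))

# for 0 < x < 1: |g(x)| = |cos(1/x)| / (x**-0.5 - 1) <= sqrt(x) / (1 - sqrt(x))
xs = [10.0 ** -k for k in (2, 4, 6, 8)]
vals = [abs(g(x)) for x in xs]
bounds = [math.sqrt(x) / (1.0 - math.sqrt(x)) for x in xs]
```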
2,485,109
<blockquote> <p>Proof by induction that $1^2 + 2^2 + 3^2 +.......+ n^2 = \frac{1}{6}\cdot n\cdot (1+n)\cdot (1+2n)$</p> </blockquote> <p>I tried showing that </p> <p>$1^2 + 2^2 + 3^2 +.......+ (k+1)^2 = \frac{1}{6}\cdot (k+1)\cdot (1+(k+1))\cdot (1+2(k+1))$</p> <p>By using the left side:</p> <p>$1^2 + 2^2 + 3^2 +.......+ (k+1)^2$ </p> <p>$= 1^2 + 2^2 + 3^2 +.......+ k^2 +(k+1)^2 $</p> <p>$= \frac{1}{6}\cdot k\cdot (1+k)\cdot (1+2k) + (k+1)^2$</p> <p>I tried expanding and making it equal the right side but I was not able to get it.</p>
Mr Pie
477,343
<p><strong>This is more of a comment as opposed to an answer</strong></p> <hr> <p>$$5x + 7y = 1234$$ What you want to do first is find integer solutions for $x$ and $y$ to satisfy $5x + 7y = 1$. A solution is found by letting $x = 3$ and $y = -2$.</p> <p>Since $1234 = 1234\times 1$, we just multiply $3$ and $-2$ by $1234$. For example, let $5x + 7y = 2$, then $x = 3\times 2$ and $y = (-2)\times 2$ since $2 = 2\times 1$.</p> <p>$$\therefore 5x + 7y = \left\{1234 : x = 3\times 1234, \ y = (-2)\times 1234\right\} \Rightarrow x = 3702, \ y = -2468$$</p> <p>To find other solutions, you can use $5x + 7y = 1$ as a <em>rule</em> to graph the equation and find $x$ and $y$ coordinates. How to do it is to try and get $y$ as the subject (alone on the $LHS$; <em>Left Hand Side</em> of the equation).</p> <p>$$5x + 7y = 1 \Rightarrow 7y = 1 - 5x \Rightarrow y = -\frac{5}{7}x + \frac{1}{7}$$</p> <p>Here we see that $\dfrac{1}{7}$ is the $y$-intercept at $x = 0$ and $\dfrac{1}{5}$ is the $x$-intercept at $y = 0$.</p> <p>We can put these intercepts into ordered pairs $\big(0, \frac{1}{7}\big), \big(\frac{1}{5}, 0\big)$ and then plot these coordinates on the Cartesian Plane to make points. Now we can draw a line through them to make a linear graph (since the line will be straight with a constant gradient) and find points that have integer coordinates. These will be our solutions!</p> <p>For example, another solution we find is $x = 17$ and $y = -12$. Another solution is $x = 24$ and $y = -17$. Another solution is $x = 31$ and $y = -22$. Do you now see a pattern? $x$ is increasing by $7$ and $y$ is decreasing by $5$. Now multiply these results by $1234$ and you have found other integer solutions!</p> <p>Also, $x$ and $y$ do not each have to be negative, like in all the examples I have given. In fact, $x = -4$ and $y = 3$ is another solution. Or $x = -11$ and $y = 8$. Or $x = -18$ and $y = 13$. 
The number of integer solutions is endless!</p> <p>To formulate: $x = 3 + 7n$ and $y = -2 - 5n$ for all integers $n$, which for the original equation take the form $x = 3702 + 7t$ and $y = -2468 - 5t$, as mentioned in Bruce Ikenaga’s wonderful answer.</p>
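<p>The recipe above (solve $5x + 7y = 1$, scale by $1234$, then step by $(+7, -5)$) can be sketched in Python; the extended-Euclid helper below is my own illustration, not part of the answer:</p>

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

g, x0, y0 = ext_gcd(5, 7)             # 5*3 + 7*(-2) = 1
x, y = x0 * 1234, y0 * 1234           # scale up to 5x + 7y = 1234
# the full solution family: step x by +7 and y by -5
family = [(x + 7 * t, y - 5 * t) for t in range(-3, 4)]
```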
3,709,331
<p>How can I integrate <span class="math-container">$$ \int \frac{x\,dx}{(a-bx^2)^2} $$</span> I've tried partial fraction decomposition, but I end up with six equations in four unknowns, and they don't yield a consistent answer.</p>
J.G.
56,861
<p>Substitute <span class="math-container">$u=a-bx^2$</span> so your integral is <span class="math-container">$-\frac{1}{2b}\int\frac{du}{u^2}=\frac{1}{2bu}+C$</span>.</p>
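<p>To double-check the antiderivative <span class="math-container">$\frac{1}{2b(a-bx^2)}$</span>, here is a small Python sketch of my own (the sample values <span class="math-container">$a=3$</span>, <span class="math-container">$b=2$</span> are arbitrary): its numerical derivative should match the integrand away from the pole at <span class="math-container">$x=\sqrt{a/b}$</span>.</p>

```python
import math

a, b = 3.0, 2.0
F = lambda x: 1.0 / (2.0 * b * (a - b * x * x))    # candidate antiderivative
f = lambda x: x / (a - b * x * x) ** 2             # integrand

h = 1e-6
xs = [0.1, 0.3, 0.7, 1.0]                          # away from the pole at sqrt(1.5)
errs = [abs((F(x + h) - F(x - h)) / (2.0 * h) - f(x)) for x in xs]
```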
806,779
<p>Prove, without expanding, that \begin{vmatrix} 1 &amp;a &amp;a^2-bc \\ 1 &amp;b &amp;b^2-ca \\ 1 &amp;c &amp;c^2-ab \end{vmatrix} vanishes.</p> <p>Any hints ?</p>
Lutz Lehmann
115,115
<p>Add $(ab+bc+ca)$ times the first column to the last and find the common factor of the last column.</p>
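<p>The hint can be verified directly (a Python sketch of my own): after adding $(ab+bc+ca)$ times the first column to the last, the last column becomes $(a+b+c)$ times the second, so the determinant vanishes for any $a,b,c$.</p>

```python
def det3(m):
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def vanishing_det(a, b, c):
    # column operation: a^2 - bc + (ab + bc + ca) = a(a + b + c), etc.
    return det3([[1, a, a * a - b * c],
                 [1, b, b * b - c * a],
                 [1, c, c * c - a * b]])

samples = [vanishing_det(a, b, c) for (a, b, c) in [(1, 2, 3), (2, -5, 7), (0, 4, -1)]]
```

<p>Integer inputs keep the computation exact, so the determinants come out as exactly zero.</p>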
2,294,991
<p>I am trying to find the limit as $x\to 8$ of the following function. What follows is the function and then the work I've done on it. </p> <p>$$ \lim_{x\to 8}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8}$$</p> <hr> <p>\begin{align}\frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} &amp;= \frac{\frac{1}{\sqrt{x +1}} - \frac{1}{3}} {x-8} \times \frac{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}} \\\\ &amp; = \frac{\frac{1}{x+1}-\frac{1}{9}}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\ &amp; = \frac{8-x}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}\\\\ &amp; = \frac {-1}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}}\end{align}</p> <p>At this point I try direct substitution and get: $$ = \frac{-1}{\frac{2}{3}}$$</p> <p>This is not the answer. Could someone please help me figure out where I've gone wrong?</p>
StackTD
159,845
<blockquote> <p>$$\frac{\frac{1}{x+1}-\frac{1}{9}}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)} = \frac{\color{red}{8-x}}{(x-8)\left(\frac{1}{\sqrt{x +1}} + \frac{1}{3}\right)}$$</p> </blockquote> <p>Careful in the numerator: $$\frac{1}{x+1}-\frac{1}{9} \ne 8-x$$ but rather: $$\frac{1}{x+1}-\frac{1}{9}= \frac{9}{9(x+1)}-\frac{x+1}{9(x+1)} = \frac{8-x}{\color{blue}{9(x+1)}}$$ So then after cancelling/simplifying: $$\frac {\frac{-1}{\color{blue}{9(x+1)}}}{\frac{1}{\sqrt{x +1}} + \frac{1}{3}} \xrightarrow{x \to 8} -\frac{1}{54}$$</p>
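<p>A quick numeric check (my own Python sketch, not part of the answer) confirms the limit <span class="math-container">$-\tfrac1{54}$</span> from both sides:</p>

```python
import math

q = lambda x: (1.0 / math.sqrt(x + 1.0) - 1.0 / 3.0) / (x - 8.0)

target = -1.0 / 54.0
samples = [q(8.0 + d) for d in (1e-3, -1e-3, 1e-5, -1e-5)]
errs = [abs(v - target) for v in samples]
```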
156,474
<p>Is this phrase safe to consider in general:</p> <blockquote> <blockquote> <p>Equal sides of a polygon have corresponding equal angles</p> </blockquote> </blockquote> <p>If not, how would you refine or correct it?</p> <p>An example of a corresponding angle would be</p> <p><img src="https://i.stack.imgur.com/ovBf4.png" alt="enter image description here"></p> <p>Edited: For example, suppose you ignore the fact that the above triangle is equilateral and treat it as a general polygon. Then, according to the above statement, angle A and angle B are equal, because side A (which corresponds to angle A) is equal to side B (which corresponds to angle B).</p>
Robert Mastragostino
28,869
<p>Construct a pentagon of all equal sides. Physically, out of pieces of straws or something. Note how you can bend it in any which way so that no angles are the same. So the question of "equal sides correspond to equal angles" is true for triangles, can <em>maybe</em> be rephrased in a way that's true for quadrilaterals (since there it forces two pairs of equal angles), but definitely fails to work in general.</p> <p>A better statement to prove might be the reverse: Do equal angles of a polygon imply equal sides?</p>
603,290
<p>this might be a stupid question, but is any $C^2$ function $f:\mathbb{R}\to\mathbb{C}$ of period $f(x+L)=f(x)$ automatically analytic (and in particular, infinitely often differentiable)?</p> <p>I learned that for a $L$-periodic $C^k$ function with Fourier coefficients $f_n$, they converge to zero like $|f_n| |n|^k \to 0$ as $|n|\to \infty$. So if $k\geq 2$, that would mean that the Fourier series converges absolutely and uniformly, right? So it would extend to an analytic funtion $f(z) = \sum_{n= -\infty}^{\infty} f_n e^{i n z}$ and therefore, it is also $C^\infty$. Am I wrong here?</p> <p>edit: ok, this is wrong. I thought the uniform convergence meant that $f(z) = \sum_{n= -\infty}^{\infty} f_n e^{i n z}$ should be holomorphic, but of course that only works when we have uniform convergence on an open subset of $\mathbb{C}$, here it's only on an interval in $\mathbb{R}$.</p>
Nate Eldredge
822
<p>Even a $C^\infty$ periodic function need not be real analytic. An example is $$f(x) = \exp\left(\frac{-1}{\sin(x)^2}\right)$$ with $f(n \pi) = 0$. Showing $f$ is smooth is a variation on a standard exercise. But at $x=n\pi$, all derivatives of $f$ vanish, yet $f$ is not zero in any neighborhood of $x$.</p>
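<p>Numerically (a Python sketch of my own, using the answer's example), <span class="math-container">$f$</span> is flatter than every power of <span class="math-container">$x$</span> at its zeros, which is why its Taylor series at <span class="math-container">$n\pi$</span> is identically zero even though <span class="math-container">$f$</span> is not:</p>

```python
import math

def f(x):
    s = math.sin(x)
    return math.exp(-1.0 / (s * s)) if s != 0.0 else 0.0

x = 0.05
ratios = [f(x) / x ** k for k in range(1, 8)]   # f(x)/x^k stays tiny for every k
flat_at_pi = f(math.pi + x) / x ** 7            # same flatness at the next zero
```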
239,097
<p>Recently, I want to buy some 30xx series GPUs which are used to do deep learning jobs, so does anyone know whether Mathematica's <code>NetTrain</code> supports 30xx series GPUs?</p>
HyperGroups
6,648
<p>2021-03-12</p> <p>Yes, I can run <code>NetTrain</code> with an RTX 3060 on Windows 10.</p> <p>But the first time I ran it, I had to wait a long time without any progress indication.</p> <p><a href="https://i.stack.imgur.com/Rhguw.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Rhguw.jpg" alt="enter image description here" /></a> <a href="https://i.stack.imgur.com/1gDGb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/1gDGb.png" alt="enter image description here" /></a></p>
2,970,053
<p>I've been trying to understand this problem for hours but not getting it. HELP!!!</p> <p>The correct answer is <span class="math-container">$\frac{2}{3}$</span>, but I don't know why this is the correct answer.</p> <p>Thank you in advance for your help!</p>
M. Wind
30,735
<p>Please note that --ignoring the spaces in between them-- there are only six ways to order these three books: (123), (132), (213), (231), (312) and (321). Since the books are randomly arranged, each of these six orderings must have the same probability, which is therefore 1/6. So that is your answer.</p>
4,605,263
<p>If someone asks me right now, what a number is, I would say 1,2,3,-1,1+i,2.13, etc. But what I have essentially stated are just symbols. I tried to explore a bit what a number exactly is. What are these symbols pointing to?</p> <p>As I got to know the answer is not that simple, we cannot have an all-encompassing definition of numbers. Accepting this, I can also not find peace with just treating them as symbols.</p> <p>So I just wanted to ask, is there a way I can somehow construct natural numbers, then fractions, then integers, real numbers, and so on, so that I at least have some notion attached to these numbers, rather than treat them just as symbols with which I'm playing.</p>
Quanto
686,284
<p>Let <span class="math-container">$J(a)=\int_{0}^{\infty}\frac{f(x)}x e^{-ax} dx$</span>, with the <span class="math-container">$2\pi$</span>-periodic function <span class="math-container">$$f(x)= f(x+2\pi k)=|\cos( x-\frac{\pi}{4} ) |- |\cos( x+\frac{\pi}{4} ) |$$</span> Then <span class="math-container">\begin{align} J'(a)&amp; =- \int_0^\infty f(x)e^{-ax}dx =- \sum_{k\ge 0}\int_0^{2\pi}f(x) e^{-a(x+2\pi k)}dx\\ &amp;=- \frac1{1-e^{-2\pi a}}\int_0^{2\pi}f(x)e^{-ax}dx= \frac{2(\text{sech}\frac{\pi a}2-\sqrt2)}{4a^2+1} \end{align}</span> Utilize <span class="math-container">$\int_0^\infty \frac{\cos ay}{\cosh y}dy = \frac\pi2\text{sech}\frac{\pi a}2$</span> and <span class="math-container">$\int_0^\infty \frac{\cos ay}{4a^2+1}da= \frac\pi4e^{-y/2}$</span> to integrate <span class="math-container">\begin{align} &amp; \int_{0}^{\infty}\frac{|\cos( x-\frac{\pi}{4} ) |- |\cos( x+\frac{\pi}{4} ) |}{x}dx\\ =&amp; \ J(0) =-\int_0^\infty J'(a)da = 2\int_0^\infty \frac{\sqrt2-\text{sech}\frac{\pi a}2}{4a^2+1} da \\ =&amp; \ 2\int_0^\infty \frac1{4a^2+1}\left(\frac2\pi\int_0^\infty \frac{\sqrt2-\cos ay}{\cosh y}dy\right)da\\ =&amp; \ \frac4\pi\int_0^\infty \frac1{\cosh y}\int_0^\infty \frac{\sqrt2-\cos ay}{4a^2+1}da\ dy\\ =&amp; \int_0^\infty \frac{\sqrt2-e^{-y/2}}{\cosh y}{ dy} \overset{t= e^{-y/2}}{=} 4\int_0^1 \frac{t(\sqrt2-t)}{1+t^4}dt=\sqrt2\coth^{-1}\sqrt2 \end{align}</span></p>
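<p>As a sanity check (my own Python sketch, not part of the answer), the <span class="math-container">$y$</span>-integral in the chain can be evaluated with Simpson's rule and compared against the closed form <span class="math-container">$\sqrt2\coth^{-1}\sqrt2=\sqrt2\,\operatorname{artanh}\frac1{\sqrt2}\approx1.24645$</span>:</p>

```python
import math

def simpson(g, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
    return s * h / 3.0

integrand = lambda y: (math.sqrt(2.0) - math.exp(-y / 2.0)) / math.cosh(y)
numeric = simpson(integrand, 0.0, 40.0, 4000)   # tail beyond y = 40 is ~1e-17
closed = math.sqrt(2.0) * math.atanh(1.0 / math.sqrt(2.0))
```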
2,205,776
<p>I am pretty sure there is an easy counterexample, but I cannot find one right now.</p>
Ethan Alwaise
221,420
<p>HINT: If the sequence $\{a_n\}$ is not eventually constant, then you can find arbitrarily large $n,m$ such that $$\vert a_n - a_m \vert \geq 1.$$</p>
381,093
<p>I would like to prove Chebyshev's sum inequality, which states that:</p> <p>If <span class="math-container">$a_1\geq a_2\geq \cdots \geq a_n$</span> and <span class="math-container">$b_1\geq b_2\geq \cdots \geq b_n$</span>, then<br /> <span class="math-container">$$ \frac{1}{n}\sum_{k=1}^n a_kb_k\geq \left(\frac{1}{n}\sum_{k=1}^n a_k\right)\left(\frac{1}{n}\sum_{k=1}^n b_k\right) $$</span><br /> I am familiar with the non-probabilistic proof, but I need a probabilistic one.</p>
Seva
9,924
<p>Let <span class="math-container">$A$</span> be the random variable attaining the values <span class="math-container">$a_1,\dotsc,a_n$</span> with equal probabilities, and define <span class="math-container">$B$</span> similarly, subject to <span class="math-container">$\mathbb P(B=b_i|A=a_i)=1$</span>. Then <span class="math-container">$\mathbb E(A)=\frac1n\,\sum_{1\le i\le n} a_i$</span>, <span class="math-container">$\mathbb E(B)=\frac1n\,\sum_{1\le i\le n} b_i$</span>, and <span class="math-container">$\mathbb E(AB)=\frac1n\,\sum_{1\le i\le n} a_ib_i$</span>. Since <span class="math-container">$a_i$</span> and <span class="math-container">$b_i$</span> are similarly ordered, we have <span class="math-container">$\mathrm{Cov}(A,B)\ge 0$</span>, whence <span class="math-container">$\mathbb E(AB)\ge\mathbb E(A)\mathbb E(B)$</span>.</p>
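<p>The sign of the covariance for similarly ordered sequences is easy to test numerically; here is a hypothetical Python sketch (random data of my own choosing):</p>

```python
import random

random.seed(1)
n = 200
# two similarly ordered (both descending) sequences
a = sorted((random.random() for _ in range(n)), reverse=True)
b = sorted((random.random() for _ in range(n)), reverse=True)

mean = lambda v: sum(v) / len(v)
lhs = mean([x * y for x, y in zip(a, b)])   # E(AB)
rhs = mean(a) * mean(b)                     # E(A) E(B)
cov = lhs - rhs                             # Cov(A, B) >= 0 for similarly ordered data
```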