947,290
<p>In a cyclic group of order 8, show that each element has a cube root. That is, for each $a\in G$ there is an element $x \in G$ with $x^3=a.$</p> <p>Also show in general that if $G=\langle a\rangle$ is a cyclic group of order $m$ and $(k,m)=1$, then each element of $G$ has a $k$th root. What element will $a^k$ generate? Use this to express any element as a $k$th power.</p> <p>Where do I begin? For the first one, is it just through closure essentially? I'm stuck on the second one as well. I know that the gcd of $k$ &amp; $m$ is 1, so $kx+my=1$ with $x,y\in \mathbb{Z}$. Thank you.</p>
Arpit Kansal
175,006
<p>Perhaps a simpler approach: consider $f:G\to G$ defined by $x\mapsto x^3$. Now, using the fact that $G$ is abelian and does not have any element of order $3$, show that $f$ is an automorphism, and hence we are done. Also, I think abelian alone is a sufficient condition!</p>
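Both parts can be sanity-checked numerically. Here is a small sketch in Python (function names are my own), modelling the cyclic group of order $m$ additively as $\mathbb{Z}_m$, so the "$k$th power" of $x$ becomes $kx \bmod m$ and the Bezout identity $ku+mv=1$ from the question hands you the root directly:

```python
from math import gcd

def kth_root(a, k, m):
    """In the cyclic group Z_m (written additively, so the 'k-th power'
    of x is k*x mod m), return x with k*x == a (mod m).
    Requires gcd(k, m) == 1; uses u from the Bezout identity k*u + m*v = 1."""
    assert gcd(k, m) == 1
    u = pow(k, -1, m)          # k^{-1} mod m (Python 3.8+)
    return (u * a) % m

# Cube roots exist in the cyclic group of order 8 since gcd(3, 8) = 1:
m, k = 8, 3
roots = {a: kth_root(a, k, m) for a in range(m)}
assert all((k * x) % m == a for a, x in roots.items())

# The map x -> k*x mod m is a bijection exactly because gcd(k, m) = 1:
assert sorted((k * x) % m for x in range(m)) == list(range(m))
```

Since the $k$th-power map is a bijection when $(k,m)=1$, every element has a unique $k$th root, which is the content of the second part.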
136,067
<p>Assume $f(x)&gt;0$ defined in $[a,b]$, and for a certain $L&gt;0$, $f(x)$ satisfies the Lipschitz condition $|f(x_1)-f(x_2)|\leq L|x_1-x_2|$.</p> <p>Assume that for $a\leq c\leq d\leq b$,$$\int_c^d \frac{1}{f(x)}dx=\alpha,\int_a^b\frac{1}{f(x)}dx=\beta$$Try to prove$$\int_a^b f(x)dx \leq \frac{e^{2L\beta}-1}{2L\alpha}\int_c^d f(x)dx$$</p>
Yimin
30,330
<p>If $f(x)$ is smooth, or $f\in C^1$, set $\displaystyle h(t)=\int_a^t f(s)\mathrm{d}s$ and $\displaystyle g(t)=\int_a^{t}\frac{1}{f(s)}\mathrm{d}s$; we can then focus on \begin{equation} \frac{h(t)}{\exp(2Lg(t))-1} \end{equation}</p> <p>because we know that there is a $\xi\in(c,d)$ s.t. \begin{equation} \frac{h(d)-h(c)}{g(d)-g(c)}=\frac{h'(\xi)}{g'(\xi)}=f^2(\xi) \end{equation}</p> <p>thus it suffices to prove $$\frac{h(t)}{\exp(2Lg(t))-1}\le\frac{\min_{[a,b]}f^2}{2L}$$</p> <p>To find the minimum, let's compute the derivative; we can also see that the LHS function has the limit $f^2(a)/2L$ at $t=a$.</p> <p>Call the LHS term $\gamma(t)$; then \begin{equation} \gamma'(t)=\frac{h'(\exp(2Lg)-1)-2Lg'\exp(2Lg)h}{(\exp(2Lg)-1)^2} \end{equation}</p> <p>It is easy to compute the limit at $t=a$: we find that $\gamma'(a)=\frac{f(a)f'(a)}{2L}-\frac{1}{2}f(a)\le0$. Now set \begin{eqnarray} p(t)&amp;=&amp;f\cdot(h'(\exp(2Lg)-1)-2Lg'\exp(2Lg)h)\\ &amp;=&amp;f^2(\exp(2Lg)-1)-2L\exp(2Lg)h \end{eqnarray}</p> <p>We can see that $p(a)=0$, since $h(a)=0$ and $g(a)=0$. Moreover, \begin{eqnarray} p'(t)&amp;=&amp;2ff'(\exp(2Lg)-1)-4L^2\exp(2Lg)h/f\\ &amp;\le&amp;\frac{2L}{f}(f^2(\exp(2Lg)-1)-2L\exp(2Lg)h)\\ &amp;=&amp;\frac{2L}{f}p(t) \end{eqnarray}</p> <p>Thus we know that $p(t)\le 0$, since $\{\exp(-2Lg(t))p\}'\le0$ and $p(a)=0$.</p> <p>So we get that $\gamma'(t)\le 0$.</p> <p>Thus $\gamma$ is decreasing in $t$, and $\gamma(t)\le\gamma(a)$.</p> <p>On the other hand, take $\displaystyle\int_{t}^b f(s)\mathrm{d}s=\phi(t)$ and $\displaystyle\int_{t}^b\frac{1}{f(s)}\mathrm{d}s=\psi(t)$.</p> <p>Then we can also consider \begin{equation} \beta(t)=\frac{\phi(t)}{\exp(2L\psi(t))-1} \end{equation} which has $\displaystyle\beta(b)=\frac{f^2(b)}{2L}$ (as a limit).</p> <p>By the same process (pay attention to the sign), $$\beta'(b)=\frac{f(b)f'(b)}{2L}+\frac{f(b)}{2}\ge 0.$$</p> <p>And we can also obtain that $$q(t)=2L \exp(2L \psi)\phi-f^2(\exp(2L\psi)-1)\ge 0$$</p> <p>which means $\beta(t)$ is increasing in $t$, that is, $\beta(t)\le\beta(b)$.</p> <p>Since $\beta(a)=\gamma(b)$, we conclude $\gamma(b)\le \min(f^2(a),f^2(b))/2L$.</p> <p>And since the choice of $b$ is arbitrary, we know that for any $t$ we have $$\gamma(t)\le f^2(t)/2L$$</p> <p>Suppose $\min_{[a,b]}f^2(t)$ is attained at $t=\upsilon$; then $\gamma(\upsilon)\le f^2(\upsilon)/2L$, and since $\gamma(t)$ is decreasing in $t$, $\gamma(b)\le f^2(\upsilon)/2L$. $\Box$</p>
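As a sanity check on the final inequality, here is a small numerical sketch; the function, interval, and Lipschitz constant are my own illustrative choices, and the integrals are approximated with a midpoint rule:

```python
import math

def integrate(g, a, b, n=20000):
    # simple midpoint rule
    h = (b - a) / n
    return h * sum(g(a + (i + 0.5) * h) for i in range(n))

# Example: f(x) = 2 + sin(x) is positive and 1-Lipschitz on [0, 2]
f = lambda x: 2 + math.sin(x)
L, a, b, c, d = 1.0, 0.0, 2.0, 0.5, 1.5

alpha = integrate(lambda x: 1 / f(x), c, d)   # int_c^d 1/f
beta  = integrate(lambda x: 1 / f(x), a, b)   # int_a^b 1/f
lhs   = integrate(f, a, b)
rhs   = (math.exp(2 * L * beta) - 1) / (2 * L * alpha) * integrate(f, c, d)
assert lhs <= rhs   # the claimed inequality holds on this example
```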
317,753
<p>I am taking real analysis at university. I find it difficult to prove certain questions. What I want to ask is:</p> <ul> <li>How do we come up with a proof? Do we use some intuitive idea first and then write it down formally?</li> <li>What books do you recommend for an undergraduate who is studying real analysis? Are there any books which explain the motivation of theorems? </li> </ul>
oks
60,529
<p>To come up with a proof I pretty much always started by 1. imagining a specific example, 2. drawing the example as a picture if possible, 3. persuading myself (by looking at the picture) that the thing we were being asked to prove was actually true, then 4. making up some notation to describe what I was looking at.</p>
22,101
<p>The general rule used in LaTeX doesn't work: for example, typing <code>M\"{o}bius</code> or <code>Cram\'{e}r</code> doesn't give the desired output.</p>
mweiss
124,095
<p>There are certain system-dependent ways to enter diacriticals and other special characters. For example, on a Macintosh computer running any operating system prior to 10.10 (Yosemite):</p> <ul> <li>The keystroke combination <code>option-U + vowel</code> produces the vowel with an umlaut over it.</li> <li>The keystroke combination <code>option-E + vowel</code> produces the vowel with an accent over it.</li> </ul> <p>Further keystroke combinations for the Mac are listed <a href="http://www.corpbuscards.com/mackeycodesfortyping.htm" rel="nofollow noreferrer">here</a>.</p> <p>Beginning with Macintosh OS 10.10 (Yosemite) there is an <a href="https://support.apple.com/kb/PH18436?locale=en_US" rel="nofollow noreferrer">even simpler way to produce such characters</a>: simply press and hold the desired vowel key on the keyboard, and a palette of accented versions of that vowel will appear on screen.</p> <p>Slightly more elaborate methods exist in Windows; see <a href="http://windows.microsoft.com/en-us/windows-vista/type-and-display-accents-and-diacritical-marks" rel="nofollow noreferrer">here</a> and <a href="https://superuser.com/questions/110605/how-do-i-type-accented-characters-in-windows">here</a>.</p>
1,650,204
<p>I was given this problem and I can't seem to think of a solution.</p> <p>Here is a possibly helpful graphic:</p> <p><a href="https://i.stack.imgur.com/VKZkv.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/VKZkv.png" alt="Here is a possibly helpful link:" /></a></p> <blockquote> <p>Given two parallel lines (representing the banks of a river) and two arbitrary points <span class="math-container">$A$</span> and <span class="math-container">$B$</span> outside of the river (one above the top parallel line and one below the bottom parallel line), a bridge is to be constructed connecting the two sides of the river at points <span class="math-container">$P$</span> and <span class="math-container">$Q$</span>, chosen so that the bridge is an equal distance from points <span class="math-container">$A$</span> and <span class="math-container">$B$</span>, i.e. <span class="math-container">$\overline{AP}$</span> = <span class="math-container">$\overline{BQ}$</span>.</p> <p>Where should the bridge be placed, assuming that it runs at right angles to the banks?</p> </blockquote>
Bobson Dugnutt
259,085
<p>Just to supplement @PVanchinathan's excellent answer and because the comment became too long, I'm writing this answer. </p> <p>The movement of the dots represents the linear transformation as a whole. Some vectors are also shown. For instance, the red ones are all vertical/horizontal in the original representation, but when transformed, they suddenly point in another direction. Same deal with the purple ones. But the blue ones <em>don't</em> change direction under the transformation; they only change their length. If we represent the linear transformation in question by a matrix $\mathbb{A}$, we see that applying it to a blue vector $v_b$ (the eigenvector) is the same as multiplying it by some number $\lambda$ (the eigenvalue), which can be written succinctly as an eigen-equation $$\mathbb{A}v_b=\lambda v_b$$</p> <p>The reason this kind of thing is so useful, for instance in QM, is first and foremost because it is easy to work with them (there are many nice theorems that allow you to do nice things when you're working in a basis of eigenvectors), but also because the (time-independent) Schrödinger-equation itself is an eigen-equation:</p> <p>$$H \psi = E \psi$$</p> <p>Oh, and eigenfunction is just another name for eigenvector. Same with eigenstate and eigenvalue. </p> <p>Hope that helps!</p> <p><a href="https://i.stack.imgur.com/XoPuJ.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/XoPuJ.gif" alt="gif"></a></p>
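The eigen-equation can be checked by hand on a tiny example; here is a sketch in plain Python with a matrix and vectors I picked for illustration:

```python
# For A = [[2, 1], [1, 2]] the vector (1, 1) is only stretched
# (eigenvalue 3), and (1, -1) is left unchanged (eigenvalue 1),
# while a generic vector such as (1, 0) also changes direction.
A = [[2, 1], [1, 2]]

def apply(A, v):
    # multiply the 2x2 matrix A by the vector v
    return [A[0][0] * v[0] + A[0][1] * v[1],
            A[1][0] * v[0] + A[1][1] * v[1]]

v_blue = [1, 1]                # an eigenvector ("blue" in the gif)
assert apply(A, v_blue) == [3 * v_blue[0], 3 * v_blue[1]]   # A v = 3 v

v_red = [1, 0]                 # not an eigenvector: direction changes
w = apply(A, v_red)            # not a scalar multiple of v_red
assert w[0] * v_red[1] != w[1] * v_red[0]   # 2D cross product nonzero
```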
718,266
<p>Is there a simple intuitive graphical explanation of Clifford Algebra for the layman? Since Clifford Algebra is a Geometric Algebra, surely the best way to present those concepts is with graphical figures.</p>
user48672
138,298
<p>To get some geometrical meaning, you can look at some special Clifford algebras:</p> <p>$\mathcal{Cl}_{0,1}$ is isomorphic to the complex plane.</p> <p>$\mathcal{Cl}_{2,0}$ is isomorphic to the Euclidean plane.</p> <p>$\mathcal{Cl}_{3,0}$ is isomorphic to the 3D Euclidean space.</p> <p>$\mathcal{Cl}_{3,1}$ is isomorphic to the 4D Minkowski space-time.</p>
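The first isomorphism can be made concrete in a few lines; a sketch (representation and names mine) multiplying elements of $\mathcal{Cl}_{0,1}$ and comparing with complex multiplication:

```python
# Sketch: Cl_{0,1} has one generator e with e^2 = -1.
# A general element is a + b*e; represent it as the tuple (a, b).
def cl01_mul(x, y):
    a1, b1 = x
    a2, b2 = y
    # (a1 + b1 e)(a2 + b2 e) = a1 a2 + b1 b2 e^2 + (a1 b2 + b1 a2) e
    return (a1 * a2 - b1 * b2, a1 * b2 + b1 * a2)

# This is exactly complex multiplication, witnessing Cl_{0,1} = C:
x, y = (1.0, 2.0), (3.0, -4.0)
prod = cl01_mul(x, y)
assert complex(*prod) == complex(*x) * complex(*y)
```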
2,437,983
<p>What is the chance that at least two people were born on the same day of the week if there are 3 people in the room?</p> <p>I know how to get the answer which is 19/49 when considering all 3 people <strong>not being born on the same day</strong>. However, when I try to calculate the answer directly I seem to get it wrong.</p> <p>Considering exactly 2 people being born on the same day I get 1*1/7*6/7. And then, exactly 3 people is 1*1/7*1/7. Thus, the total is 6/49 + 1/49 = 7/49. This must be something fairly simple, but I was just wondering where I'm going wrong. </p> <p>Thanks</p>
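A brute-force enumeration (a quick sketch, exact arithmetic via `fractions`) confirms the 19/49 figure and locates where the direct count loses a factor: one must also choose <em>which</em> pair shares a day, giving 3 ways:

```python
from fractions import Fraction
from itertools import product

# Enumerate all 7^3 equally likely weekday assignments for 3 people.
outcomes = list(product(range(7), repeat=3))
at_least_two = sum(1 for o in outcomes if len(set(o)) < 3)
assert Fraction(at_least_two, len(outcomes)) == Fraction(19, 49)

# Direct count: exactly two sharing needs a factor 3 for the choice
# of the pair, giving 3*(6/49) = 18/49 rather than 6/49:
exactly_two = sum(1 for o in outcomes if len(set(o)) == 2)
exactly_three = sum(1 for o in outcomes if len(set(o)) == 1)
assert Fraction(exactly_two, 343) == Fraction(18, 49)
assert Fraction(exactly_three, 343) == Fraction(1, 49)
```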
Community
-1
<p>Half-open intervals $\mathcal{A}=\{[a,b):a&lt;b\}$ are Borel sets, since $[a,b)=\bigcap_{n\ge 1}\left(a-\tfrac1n,\,b\right)$. Thus, $\sigma(\mathcal{A})\subset \mathcal{B}_{\mathbb{R}}$. On the other hand, any open set in $\mathbb{R}$ can be written as a countable union of intervals from $\mathcal{A}$, which implies that $\mathcal{B}_{\mathbb{R}}\subset\sigma(\mathcal{A})$.</p>
3,460,595
<p>I am given the following sequence:</p> <p><span class="math-container">$$a_n = 1^9 + 2^9 + ... + n^9 - an^{10}$$</span></p> <p>Where <span class="math-container">$a \in \mathbb{R}$</span>. I have to find the value of <span class="math-container">$a$</span> for which the sequence <span class="math-container">$a_n$</span> is convergent (Or conclude that there is no such value of <span class="math-container">$a$</span>).</p> <p>How can I find this value (or that there is no such value)? I don't know how to approach something like this at all.</p>
Community
-1
<p>Consider</p> <p><span class="math-container">$$a_{n+1}=a_n+(n+1)^9+an^{10}-a(n+1)^{10}.$$</span></p> <p>In this recurrence, the increment <span class="math-container">$a_{n+1}-a_n$</span> is a polynomial in <span class="math-container">$n$</span> whose term of degree <span class="math-container">$9$</span> has the coefficient <span class="math-container">$1-10a$</span>. If this coefficient is nonzero, the increment grows to infinity, so <span class="math-container">$a_n$</span> diverges. Otherwise (i.e. for <span class="math-container">$a=\frac{1}{10}$</span>), the coefficient of the term of degree <span class="math-container">$8$</span> is <span class="math-container">$9-45a=\frac92$</span>, which is nonzero, and the increment still grows to infinity. Hence there is no value of <span class="math-container">$a$</span> for which the sequence converges.</p>
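The divergence can be sanity-checked numerically; a small sketch with exact rational arithmetic (function name mine), evaluating $a_n$ at the critical value $a=\frac{1}{10}$, where the leading surviving term is $\frac{n^9}{2}$:

```python
from fractions import Fraction

def a_n(n, a):
    # exact evaluation of a_n = 1^9 + ... + n^9 - a*n^10
    return sum(Fraction(j) ** 9 for j in range(1, n + 1)) - a * Fraction(n) ** 10

crit = Fraction(1, 10)
# Even at a = 1/10 the sequence keeps growing:
assert a_n(100, crit) > 0
assert a_n(200, crit) > 2 * a_n(100, crit)
# and the ratio to n^9/2 approaches 1:
assert abs(a_n(200, crit) / (Fraction(200) ** 9 / 2) - 1) < Fraction(1, 50)
```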
267,706
<p>I'm making an animation of a <a href="https://en.wikipedia.org/wiki/Reuleaux_triangle" rel="nofollow noreferrer">Reuleaux triangle</a> rolling on a straight line like this <a href="https://i.stack.imgur.com/m0IMm.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/m0IMm.gif" alt="rolling Reuleaux triangle" /></a></p> <p>The animation generated by my code is not continuous. Is there a simple way to eliminate jumping?</p> <pre><code>Manipulate[ Module[{reuleaux, s}, reuleaux[t_] = {-Cos[Pi/3 (1 + 2 Floor[3 t])] + Sqrt[3] Cos[Pi/6 + Pi t + Pi/3 Floor[3 t]], -Sin[Pi/3 (1 + 2 Floor[3 t])] + Sqrt[3] Sin[Pi/6 + Pi t + Pi/3 Floor[3 t]]}; s[t_?NumericQ] := NIntegrate[Norm[reuleaux'[s]], {s, 0, t}]; ParametricPlot[{s[u], 0} + (reuleaux[t] - reuleaux[u]).RotationMatrix[ArcTan @@ (reuleaux'[u])] // Evaluate, {t, 0, 1}, PlotRange -&gt; {{-1, 7}, {-1, 2}}] ], {u, 0.001, 1 + 0.001}] </code></pre> <p><a href="https://i.stack.imgur.com/AqsNC.gif" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/AqsNC.gif" alt="animation with jump" /></a></p> <p>Reference link:<br /> <a href="https://community.wolfram.com/groups/-/m/t/1628699" rel="nofollow noreferrer">On rolling polygons and Reuleaux polygons</a><br /> <a href="https://math.stackexchange.com/questions/2279568/formula-to-create-a-reuleaux-polygon">formula-to-create-a-reuleaux-polygon</a><br /> <a href="https://mathematica.stackexchange.com/questions/242180/how-to-roll-a-graph-on-the-y-axis">How to roll a graph on the y-axis</a><br /> <a href="https://mathematica.stackexchange.com/questions/212962/how-to-plot-a-bicycle-with-square-wheels">How to plot a bicycle with square wheels</a></p>
Daniel Huber
46,318
<p>With code from the Wolfram Demo project for a Reuleaux triangle from <code>https://demonstrations.wolfram.com/ARotatingReuleauxTriangle/</code> and some small changes:</p> <pre><code>angle[vec_] := Arg[First[vec] + I*Last[vec]] + If[Last[vec] &gt;= 0, 0, 2*Pi] centerpath[t_] := Piecewise[{{{1 + Cos[Mod[t, 2 Pi/3] + 7 Pi/6] + Sqrt[3]/3*Sin[Mod[t, 2 Pi/3] + 7 Pi/6], 1 + Sin[Mod[t, 2 Pi/3] + 7 Pi/6] + Sqrt[3]/3*Cos[Mod[t, 2 Pi/3] + 7 Pi/6]}, 0 &lt;= Mod[t, 2 Pi/3] &lt; Pi/6}, {{-1 - Sin[Mod[t, 2 Pi/3] + Pi] - Sqrt[3]/3*Cos[Mod[t, 2 Pi/3] + Pi], 1 + Cos[Mod[t, 2 Pi/3] + Pi] + Sqrt[3]/3*Sin[Mod[t, 2 Pi/3] + Pi]}, Pi/6 &lt;= Mod[t, 2 Pi/3] &lt; Pi/3}, {{-1 - Cos[Mod[t, 2 Pi/3] + 5 Pi/6] - Sqrt[3]/3*Sin[Mod[t, 2 Pi/3] + 5 Pi/6], -1 - Sin[Mod[t, 2 Pi/3] + 5 Pi/6] - Sqrt[3]/3*Cos[Mod[t, 2 Pi/3] + 5 Pi/6]}, Pi/3 &lt;= Mod[t, 2 Pi/3] &lt; Pi/2}, {{ 1 + Sin[Mod[t, 2 Pi/3] + 2 Pi/3] + Sqrt[3]/3*Cos[Mod[t, 2 Pi/3] + 2 Pi/3], -1 - Cos[Mod[t, 2 Pi/3] + 2 Pi/3] - Sqrt[3]/3*Sin[Mod[t, 2 Pi/3] + 2 Pi/3]}, Pi/2 &lt;= Mod[t, 2 Pi/3] &lt; 2 Pi/3}}]; reuleaux[s_] := Module[{a, b, c}, a = centerpath[s] + Sqrt[3]/3*2*{Cos[-s], Sin[-s]}; b = centerpath[s] + Sqrt[3]/3*2*{Cos[-s + 2 Pi/3], Sin[-s + 2 Pi/3]}; c = centerpath[s] + Sqrt[3]/3*2*{Cos[-s + 4 Pi/3], Sin[-s + 4 Pi/3]}; Graphics[{LightGray, Disk[a, 2, {angle[b - a], angle[b - a] + Pi/3}], Disk[b, 2, {angle[c - b], angle[c - b] + Pi/3}], Disk[c, 2, {angle[a - c], angle[a - c] + Pi/3}], Black, Circle[a, 2, {angle[b - a], angle[b - a] + Pi/3}], Circle[b, 2, {angle[c - b], angle[c - b] + Pi/3}], Circle[c, 2, {angle[a - c], angle[a - c] + Pi/3}], PointSize[.02], Black, Point[a], Point[b], Point[c], Point[(a + b + c)/3], Line[{{1, -1}, {1, 1}, {-1, 1}, {-1, -1}, {1, -1}}]}, Axes -&gt; True] ] </code></pre> <p>And the addition of the x- movement we can create the following animation:</p> <pre><code>Animate[Graphics[{Translate[reuleaux[s][[1]], {Sqrt[3] s, 0}], Line[{{{-2, -1}, {12, -1}}, {{-2, 1}, {12, 1}}}]} , PlotRange -&gt; {{-2, 12}, 
{-1.2, 1.2}}, ImageSize -&gt; 500], {s, -0.1 , 2 Pi}] </code></pre> <p><a href="https://i.stack.imgur.com/Zll9n.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Zll9n.png" alt="enter image description here" /></a></p>
426,998
<p>Motivated by the <a href="https://cs.stackexchange.com/questions/12830/can-expected-depth-of-an-element-and-expected-height-differ-significantly">analysis of algorithms</a>, consider the following setup.</p> <p>Assume we have discrete random variables $X^{(n)}_1, \dots, X^{(n)}_n$ which we cannot assume to be identically distributed or independent. The distribution of the $X^{(n)}_i$ can depend on both $i$ and $n$. Let </p> <p>$\qquad\displaystyle X^{(n)} = \max_{i \in [1..n]} X^{(n)}_i$ </p> <p>be the maximum of these. Assume furthermore that we have shown that $\mathbb{E}[X^{(n)}_i] \in O(f(n))$ for all $i$ as $n \to \infty$, so in particular $\mathbb{E}[X^{(n)}_i]$ depends on $n$. Here, $f : \mathbb{N} \to \mathbb{R}$ is a "simple" increasing function, e.g. polynomial or polylogarithmic¹.</p> <p>Under which conditions can we conclude that</p> <p>$\qquad\displaystyle \mathbb{E}[X^{(n)}] \in O\bigl(\max_{i \in [1..n]} \mathbb{E}[X^{(n)}_i]\bigr) = O(f(n))$?</p> <hr> <ol> <li>Actually, we'd have $\mathbb{E}[X^{(n)}_i] = g(i,n)$ for some "nice" function $g$. Since our interest is in an asymptotic bound in $n$, we drop the dependence on $i$, ensuring that $f(n)$ is an upper bound on $g(i,n)$ for all $i \leq n$ (up to a constant factor).</li> </ol>
András Salamon
3,362
<p>(<em>Note:</em> this answer was to an earlier version of the question. My understanding was that the distribution of $X_j$ was fixed. The current version of the question indicates that the parameters of its distribution depend on $n$ also. The bounds still apply, but may be less directly useful for this scenario.)</p> <hr> <p>Since you are presumably interested in how the maximum behaves as $n$ grows, let $X_{(n)}$ denote the $n$-th order statistic, i.e. the maximum among the $n$ random variables. Let $\mu_j = E[X_j]$ and $\sigma_j^2 = \text{Var}[X_j]$ for each $j$, and let $\overline{\mu} = \frac{1}{n}\sum_{j=1}^n \mu_j$. Also let $S^2 = \frac{1}{n}\sum_{j=1}^n (X_j - \frac{1}{n}\sum_{i=1}^n X_i)^2$ denote the sample variance.</p> <p>It is known (see pp. 48–49 of Arnold and Balakrishnan) that $$ \overline{\mu} + E[S]/\sqrt{n-1} \le E[X_{(n)}] \le \overline{\mu} + E[S]\sqrt{n-1}. $$ Further, Arnold and Groeneveld showed that $$ \overline{\mu} \le E[X_{(n)}] \le \overline{\mu} + \sqrt{\frac{n-1}{n}\sum_{j=1}^n (Var[X_j] + (\mu_j - \overline{\mu})^2)}, $$ if this expression is more useful for your application.</p> <ul> <li>B. C. Arnold and N. Balakrishnan, <em>Relations, Bounds and Approximations for Order Statistics.</em> Lecture Notes in Statistics <strong>53</strong>. Springer-Verlag, 1989.</li> <li>B. C. Arnold and R. A. Groeneveld, <em>Bounds on expectations of linear systematic statistics based on dependent samples</em>, Mathematics of Operations Research <strong>4</strong> 441–447.</li> </ul> <p>If the variables are independent and have the same mean and variance as well, then <a href="http://dx.doi.org/10.1214/aoms/1177728847" rel="nofollow">Gumbel</a> and also <a href="http://dx.doi.org/10.1214/aoms/1177728848" rel="nofollow">Hartley and David</a> showed that $E[X_{(n)}] \le \mu + \sigma(n-1)/\sqrt{2n-1}$, although your last comment indicates this doesn't apply. Some further bounds were derived by Downey.</p> <ul> <li>Peter J. 
Downey, <em>Distribution-free bounds on the expectation of the maximum with scheduling applications</em>, Operations Research Letters <strong>9</strong>, 1990, 189–201. doi:<a href="http://dx.doi.org/10.1016/0167-6377%2890%2990018-Z" rel="nofollow">10.1016/0167-6377(90)90018-Z</a></li> </ul> <p>More information seems to be needed about the precise behaviour of $\text{Var}[X_j]$ or $\mu_j$, but this should get you part of the way there.</p>
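As an illustration of the Arnold–Groeneveld bound, here is a small simulation sketch; the Gaussian distributions, means, and seed are my own illustrative choices, and independence is assumed only to make sampling easy (the bound itself does not require it):

```python
import math, random

random.seed(1)
n, trials = 5, 20000
mus = [0.0, 0.5, 1.0, 1.5, 2.0]   # illustrative means mu_j
sigmas = [1.0] * n                 # illustrative std devs

# Empirical E[max] over independent Gaussian samples:
emp = sum(max(random.gauss(m, s) for m, s in zip(mus, sigmas))
          for _ in range(trials)) / trials

# Arnold-Groeneveld: E[X_(n)] <= mubar + sqrt((n-1)/n * sum(Var + (mu-mubar)^2))
mubar = sum(mus) / n
bound = mubar + math.sqrt((n - 1) / n *
                          sum(s * s + (m - mubar) ** 2 for m, s in zip(mus, sigmas)))
assert emp <= bound
```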
Raphael
3,330
<p>The following method can yield bounds stronger than the one András cites but requires even more knowledge about the distribution of the <span class="math-container">$X^{(n)}_i$</span>. The idea is to use bounds on the tail probabilities of the <span class="math-container">$X^{(n)}_i$</span> to bound the tail of their maximum <span class="math-container">$X^{(n)}$</span>.</p> <p>We start with a lemma from Cover/Thomas [1] (Lemma 11.9.1, <a href="http://books.google.de/books?id=EuhBluW31hsC&amp;lpg=PA347&amp;pg=PA392#v=onepage&amp;q&amp;f=false" rel="nofollow noreferrer">p392 in 2nd edition</a>):</p> <blockquote> <p><strong>Lemma</strong></p> <p>Let <span class="math-container">$Y$</span> be any random variable and let <span class="math-container">$M_Y(z)$</span> be the moment generating function of <span class="math-container">$Y$</span>, i. e. <span class="math-container">$M_Y(z) = \mathbb{E}[e^{zY}]$</span>.</p> <p>Then</p> <p><span class="math-container">$\qquad\displaystyle \Pr[Y \geq a] \leq \frac{M_Y(z)}{e^{za}}$</span></p> <p>for all <span class="math-container">$z \geq 0$</span>.</p> </blockquote> <p>So if we can find the moment generating function of <span class="math-container">$X^{(n)}_i$</span> (e.g. via its probability generating function), we get bounds whose quality we can adjust by choosing both <span class="math-container">$a = c \cdot f(n)$</span> and <span class="math-container">$z$</span> appropriately. If all goes well, we get a uniform bound of the form</p> <p><span class="math-container">$\qquad \displaystyle \Pr[X^{(n)}_i \geq c \cdot f(n)] \leq \alpha(n) \in o(n^{-1})$</span>.</p> <p>Note that, in particular, <span class="math-container">$\alpha$</span> does not depend on <span class="math-container">$i$</span>. 
Then, we can conclude that</p> <p><span class="math-container">$\qquad \displaystyle \Pr[X^{(n)} \geq c \cdot f(n)] \leq \sum_{i=1}^n \Pr[X^{(n)}_i \geq c \cdot f(n)] \leq n \cdot \alpha(n) \in o(1)$</span></p> <p>using <span class="math-container">$\sigma$</span>-subadditivity. From this, the desired bound <span class="math-container">$\mathbb{E}[X^{(n)}] \in O(f(n))$</span> follows immediately.</p> <hr /> <ol> <li>Elements of Information Theory by T.M. Cover and J.A. Thomas</li> </ol>
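The lemma is easy to see in action on a concrete case; here is a sketch with a Binomial variable, whose MGF is known in closed form (the parameters and threshold are my own illustrative choices):

```python
import math

m, p = 100, 0.5
a = 70  # tail threshold

# Exact tail probability Pr[Y >= a] for Y ~ Binomial(m, p):
exact = sum(math.comb(m, k) * p**k * (1 - p)**(m - k) for k in range(a, m + 1))

def mgf(z):                       # M_Y(z) = ((1-p) + p e^z)^m
    return ((1 - p) + p * math.exp(z)) ** m

# The lemma holds for every z >= 0; optimizing over z tightens it.
# For the Binomial, the optimum is z* = log(a(1-p) / ((m-a)p)):
z_star = math.log(a * (1 - p) / ((m - a) * p))
bound = mgf(z_star) * math.exp(-z_star * a)
assert exact <= bound
assert bound < 1e-3               # already quite small here
```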
3,660,652
<p>To which of the seventeen standard quadrics (<a href="https://mathworld.wolfram.com/QuadraticSurface.html" rel="nofollow noreferrer">https://mathworld.wolfram.com/QuadraticSurface.html</a>) do these two equations reduce? <span class="math-container">\begin{equation} Q_1^2+3 Q_2 Q_1+\left(3 Q_2+Q_3\right){}^2 = 3 Q_2+2 Q_1 Q_3. \end{equation}</span> <span class="math-container">\begin{equation} -9 Q_2-6 Q_3+3 \left(Q_1^2+\left(3 Q_2+4 Q_3-1\right) Q_1+9 Q_2^2+4 Q_3^2+6 Q_2 Q_3\right) = 0. \end{equation}</span> Further, what are the associated transformations needed to accomplish the reductions?</p> <p>This is a "distilled" form of a previous more expansive question <a href="https://mathoverflow.net/questions/359459/interpret-certain-expressions-in-terms-of-classical-quadratic-surfaces">https://mathoverflow.net/questions/359459/interpret-certain-expressions-in-terms-of-classical-quadratic-surfaces</a></p>
Bernard
202,857
<p>Here is how to obtain all solutions with congruences:</p> <p>This relation means that <span class="math-container">\begin{align} 781 + 256(3d-1)\equiv 0 \bmod 81&amp;\iff 52+13(3d-1)\equiv 0 \iff 39(1+d)\equiv 0 \bmod 81\\ \scriptstyle\text{(simplifying by }3)&amp;\iff 13(1+d)\equiv 0 \bmod 27\\ \scriptstyle (13\text{ is a unit }\bmod 27) &amp;\iff d\equiv -1 \bmod 27. \end{align}</span></p>
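A brute-force sketch in Python (illustrative only, not part of the derivation) confirms the congruence chain:

```python
# The solutions of 781 + 256*(3d - 1) == 0 (mod 81) should be exactly
# d == -1 == 26 (mod 27); check over two full periods of 81 values each.
solutions = [d for d in range(162) if (781 + 256 * (3 * d - 1)) % 81 == 0]
assert solutions == [d for d in range(162) if d % 27 == 26]
```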
3,833,767
<p>I am trying to brush up on calculus and picked up Peter Lax's Calculus with Applications and Computing Vol 1 (1976) and I am trying to solve exercise 5.2 a) in the first chapter (page 29):</p> <blockquote> <p>How large does <span class="math-container">$n$</span> have to be in order for</p> <p><span class="math-container">$$ S_n = \sum_{j = 1}^n \frac{1}{j^2}$$</span></p> <p>to be within <span class="math-container">$\frac{1}{10}$</span> of the infinite sum? within <span class="math-container">$\frac{1}{100}$</span>? within <span class="math-container">$\frac{1}{1000}$</span>? Calculate the first, second and third digit after the decimal point of <span class="math-container">$ \sum_{j = 1}^\infty \frac{1}{j^2}$</span></p> </blockquote> <p>OK, so the first part is easy and is derived from the chapter's text:</p> <p><span class="math-container">$ \forall j \geq 1 $</span> we have <span class="math-container">$\frac{1}{j^2} \leq \frac{2}{j(j+1)}$</span> and therefore:</p> <p><span class="math-container">\begin{equation} \begin{aligned} \forall n \geq 1,\quad \forall N \geq n +1 \quad S_N - S_n &amp;\leq 2\sum_{k = n+1}^N \frac{1}{k(k+1)}\\ &amp;= 2\sum_{k = n+1}^N \left\{ \frac{1}{k}- \frac{1}{k+1}\right\}\\ &amp;= 2 \left[ \frac{1}{n+1} - \frac{1}{N+1}\right] \end{aligned} \end{equation}</span></p> <p>Now because we know <span class="math-container">$S_N$</span> converges to a limit <span class="math-container">$S$</span> from below, by the rules of arithmetic for convergent sequences we have:</p> <p><span class="math-container">$$ 0 \leq S - S_n \leq \frac{2}{n+1}$$</span></p> <p>So if we want <span class="math-container">$S_n$</span> to be within <span class="math-container">$\frac{1}{10^k}$</span> of <span class="math-container">$S$</span> it suffices to have:</p> <p><span class="math-container">$$ n \geq N_{k} = 2\times10^k -1$$</span></p> <p>But the second part of the question puzzles me. 
I would like to say that computing <span class="math-container">$S_{N_{k}}$</span> is enough to have the first <span class="math-container">$k$</span> decimal points of <span class="math-container">$S$</span>. But earlier in the chapter (on page 9), there is a theorem that states:</p> <blockquote> <p>if <span class="math-container">$a$</span> and <span class="math-container">$b$</span> have the same integer parts and the same digits up to the <span class="math-container">$m$</span>-th, then they differ by less than <span class="math-container">$10^{-m}$</span>, <span class="math-container">$$ |a - b | &lt; 10^{-m}$$</span> and the converse is <em>not</em> true.</p> </blockquote> <p>And the example of <span class="math-container">$a = 0.29999...$</span> and <span class="math-container">$b = 0.30000...$</span> indeed shows that two numbers can differ by less than <span class="math-container">$2\times 10^{-5}$</span> and yet have all different first digits.</p> <p>So I think there is something missing in my &quot;demonstration&quot; above. How to show that I indeed &quot;catch&quot; the first <span class="math-container">$k$</span> digits of <span class="math-container">$S$</span> by computing <span class="math-container">$S_{N_k}$</span>?</p> <p>Thanks!</p>
saulspatz
235,128
<p>If you get an answer like <span class="math-container">$1.29999$</span>, you'll simply have to compute more terms of the series, but in all likelihood you'll be able to make a definite statement. Try to compute the first four digits after the decimal point: you may be doubtful about the fourth digit, but you should be able to pin down the first three. With <span class="math-container">$n=20000$</span> I got <span class="math-container">$$\sum_{k=1}^{20000}\frac1{k^2}=1.6448840680982086$$</span> We can't be sure whether the value is <span class="math-container">$1.6448\dots$</span> or <span class="math-container">$1.6449\dots$</span>, but we know it's <span class="math-container">$1.644\dots$</span>.</p> <p>(In point of fact, the value is <span class="math-container">$\frac{\pi^2}6\approx1.6449340668482264$</span>.)</p>
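This is easy to reproduce; a minimal sketch in Python, using the tail bound $0 \leq S - S_n \leq \frac{2}{n+1}$ derived in the question to certify the third decimal:

```python
import math

# Partial sum with n = 20000 terms; the tail bound 2/(n+1) is about 1e-4,
# which pins down the first three decimals here.
n = 20000
s = sum(1 / k**2 for k in range(1, n + 1))
assert 1.6448 < s < 1.6449            # so the value is 1.644...
assert abs(math.pi**2 / 6 - s) < 2 / (n + 1)
```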
152,295
<p>What is the definition of picture changing operation? What is a standard reference where it is defined - not just used?</p>
QGravity
64,606
<p>This is a physical description of the Picture-Changing Operation:</p> <p>In the RNS formulation of superstring theory, the worldsheet theory has superconformal gauge invariance. One thus needs to fix the gauge. This means that one can (locally) fix the form of the metric and the gravitino field. This will introduce ghost and anti-ghost fields into the theory: </p> <ul> <li><p>If one fixes diffeomorphism invariance (the form of the metric), one gets the $(b,c)$ system (they are fermions because they are ghost fields corresponding to a gauge fixing of a Bosonic field);</p></li> <li><p>If one fixes supersymmetry (the form of the gravitino), one gets the $(\beta,\gamma)$ system (they are bosons because they are ghost fields corresponding to a gauge fixing of a Fermionic field);</p></li> </ul> <p>The action for the $(\beta,\gamma)$ system is first order in derivatives, and due to a general property of such Bosonic systems, the Hamiltonian is <em>unbounded from below</em>, which is not a desirable property. To fix this, <a href="http://www.physics.rutgers.edu/~friedan/papers/Nucl_Phys_B271_93_1986.pdf" rel="nofollow">Friedan, Shenker and Martinec</a> (FSM) introduced an equivalent description of the $(\beta,\gamma)$ system in terms of a new set of fields $(\xi,\eta,\phi)$, a process called <strong>Bosonization</strong> (in this case it is really a Fermionization, but in the literature it is called Bosonization). The fields $(\beta,\gamma)$ can be written in terms of the derivative of $\xi$ and the $\eta$ and $e^{\pm\phi}$ fields. The equivalence of these two descriptions means that the OPEs of the operators in the two descriptions match. But there is a new feature in this equivalent description: <em>there is a new degree of freedom in $(\xi,\eta,\phi)$ which was somehow hidden in the $(\beta,\gamma)$ description, namely <strong>the zero mode of the $\xi$ field</strong></em>. 
</p> <p>This zero mode has crucial importance in string theory:<br/> <strong>If this zero mode didn't exist, all Fermionic processes in superstring theory would vanish.</strong></p> <p>Using the ${\bf SL}(2,\mathbb{C})$-invariant vacuum of string theory $\left|0\rangle\right._{{\bf SL}(2,\mathbb{C})}$ (namely the vacuum annihilated by the generators of the Virasoro algebra $\hat{L}_n$ for $n=0,\pm 1$) and modes of the $\beta$ and $\gamma$ fields, one can construct states with arbitrarily low energy (due to the conformal dimensions of these fields). This means that: </p> <p><em>$\left|0\rangle\right._{{\bf SL}(2,\mathbb{C})}$ is not the correct vacuum of the theory.</em></p> <p>The correct vacuum of the theory can be written as follows:</p> <p>$$\left|\boldsymbol{\Omega}\rangle\right.=\hat{c}(0)e^{-\hat{\phi}(0)}\left|0\rangle\right._{{\bf SL}(2,\mathbb{C})}$$</p> <p>These two states, namely $\left|0\rangle\right._{{\bf SL}(2,\mathbb{C})}$ and $\left|\boldsymbol{\Omega}\rangle\right.$, live in disjoint Hilbert spaces (due to the exponential operator), i.e. one cannot reach the other by applying a finite number of ghost field modes $\beta_k$ and $\gamma_k$. This means that the states of the original $(\beta,\gamma)$ system contain a condensate of modes of these fields, which is called a <strong>Bose Sea</strong> in the language of FSM. They interpreted the charge of the $\phi$ field as the filling level of this Bose sea, and it is called the <strong>Picture Number</strong>. On the other hand, the vertex operators associated to the Ramond sector are <strong>Spin Fields</strong>. The form of a spin field is different in different picture numbers. There are subtleties in how one associates picture numbers to the R and NS sectors, but all allowed picture numbers give equivalent descriptions of string theory. 
If the form of a spin field in picture number $\alpha$ is $\mathcal{V}_\alpha$, then one can define the <strong>Picture-Changing Operation</strong> as follows:</p> <p>$$\mathcal{V}_{\alpha+1}(z)\equiv\{{\hat{Q}}_{\bf BRST},\xi(z)\mathcal{V}_{\alpha}\}$$</p> <p>One can show (pages 139-140 of <a href="http://www.physics.rutgers.edu/~friedan/papers/Nucl_Phys_B271_93_1986.pdf" rel="nofollow">Friedan, Shenker and Martinec</a>) that <em>two vertex operators related by the picture-changing operation give equivalent results for the computation of on-shell scattering amplitudes in string theory</em> (section 7 of <a href="http://www.physics.rutgers.edu/~friedan/papers/Nucl_Phys_B271_93_1986.pdf" rel="nofollow">Friedan, Shenker and Martinec</a>).</p> <p>For a mathematical description, please check:</p> <ul> <li>Section 4.2 of <a href="https://arxiv.org/pdf/1209.2199.pdf" rel="nofollow">Notes On Supermanifolds And Integration</a></li> <li>Section 4.3 and 4.4 of <a href="https://arxiv.org/pdf/1209.2459v4.pdf" rel="nofollow">Notes On Super Riemann Surfaces And Their Moduli</a></li> <li>Section 3.6.2 and 3.6.3 of <a href="https://arxiv.org/pdf/1209.5461.pdf" rel="nofollow">Perturbative Superstring Theory Revisited</a></li> </ul>
3,583,475
<p>Write as a single fraction:</p> <p><span class="math-container">$(4x+2y)/(3x) - (5x+9y)/(6x) + 4$</span></p> <p>Simplify your answer as much as possible.</p> <p>The answer that I got from when I did the math was: (27x-5y)/(6x). But I have asked some of my friends who some got a different answer from mine. Please let me know if this seems right, and if it isn't please help me. Thanks in advance.</p>
fleablood
280,126
<p>Put them over a common denominator:</p> <p><span class="math-container">$\frac {4x+2y}{3x}\cdot \frac 22 - \frac {5x + 9y}{6x} + 4\cdot\frac {6x}{6x}=$</span></p> <p><span class="math-container">$\frac {2(4x+2y) - (5x+9y) +4\cdot 6x}{6x}=$</span></p> <p><span class="math-container">$\frac {(8x+4y) -(5x +9y) + 24x}{6x}=\frac {(8x-5x+24x)+(4y-9y)}{6x}=$</span></p> <p><span class="math-container">$\frac {27x -5y}{6x}$</span></p> <p>Now the sentence "Simplify your answer as much as possible" might be subjective.</p> <p><em>Many</em> mathematicians would say, that's it. We have a single fraction. That's as simple as it gets.</p> <p>Some might say the sum in the numerator can be separated.</p> <p><span class="math-container">$\frac {27x - 5y}{6x} = \frac {27x}{6x} - \frac {5y}{6x} = \frac 92 - \frac {5y}{6x}$</span>.</p> <p>Which is "simpler"?</p> <p>I'm not sure. I'd say a single rational expression rather than a sum of rational expressions is simpler, and I'd go with your original answer as the simplest.</p> <p>But <span class="math-container">$\frac {27x - 5y}{6x}$</span> and <span class="math-container">$\frac 92-\frac {5y}{6x}$</span> both equal the same thing.</p>
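For anyone who wants a quick sanity check, here is a small Python sketch (not part of the original answer) that compares the two expressions exactly at a few sample points using the standard library:

```python
from fractions import Fraction

def original(x, y):
    # (4x+2y)/(3x) - (5x+9y)/(6x) + 4, evaluated exactly
    return Fraction(4 * x + 2 * y, 3 * x) - Fraction(5 * x + 9 * y, 6 * x) + 4

def combined(x, y):
    # the proposed single fraction (27x-5y)/(6x)
    return Fraction(27 * x - 5 * y, 6 * x)

# Agreement at several points with x != 0 supports the algebra above.
samples = [(1, 1), (2, -3), (5, 7), (-4, 9)]
results = [original(x, y) == combined(x, y) for x, y in samples]
print(all(results))  # True
```

Exact `Fraction` arithmetic avoids floating-point noise, so any disagreement between the two forms would show up immediately.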
194
<p>In many parts of the world, the majority of the population is uncomfortable with math. In a few countries this is not the case. We would do well to change our education systems to promote a healthier relationship with math. But in the present situation, how can we help the students who come to our classes, which they are required to take, with fear and loathing?</p> <p><strong>How do we help students overcome their math anxieties?</strong></p>
adamblan
93
<p>There are a few strategies that are supported by experimental research which I will share here, but they all have to do with <strong>stereotype threat</strong>. I am sure there are other types of anxiety related to math which would not be helped by these strategies.</p> <p>First, the <a href="http://en.wikipedia.org/wiki/Stereotype_threat">wikipedia article</a> on stereotype threat is fairly comprehensive. It describes some studies that have been done, as well as some strategies to combat stereotype threat.</p> <p><strong>I am "not a math person".</strong> Students who believe that there are "math people" and "people who aren't good at math" feel that there is no way to grow. This is much more likely to affect a student who is undergoing stereotype threat. Students who are told that intelligence is "malleable" <a href="http://ac.els-cdn.com/S0193397303001126/1-s2.0-S0193397303001126-main.pdf?_tid=c1e71ef4-ac7a-11e3-b970-00000aab0f01&amp;acdnat=1394913079_ec13a21ef4c82db26d76d9110bedd537">perform much better</a>.</p> <p><strong>Awareness of stereotype threat.</strong> Making students aware of stereotype threat (that is, explicitly telling them "your race/gender/finances makes you more likely to do badly") can <strong>help</strong> them on easy tests while simultaneously <strong>hurting</strong> them on harder tests. 
There are many studies that support this, but <a href="http://psp.sagepub.com/content/29/6/782">here is one</a>.</p> <p><strong>Self-affirmation.</strong> Self-affirmation (that is, affirming a value important to the individual--not necessarily related to math) can <a href="http://ac.els-cdn.com/S0022103105000545/1-s2.0-S0022103105000545-main.pdf?_tid=3fb88986-ac7a-11e3-a6eb-00000aacb35d&amp;acdnat=1394912860_8de0d13d7ac53fc0b18aae56225a6a9d">significantly increase performance</a>.</p> <p><strong>Role Models.</strong> Role models who are in the minority that students respect can <a href="http://gpi.sagepub.com/content/14/4/447">significantly decrease stereotype threat</a>. This study also points out that students who did not believe the role model deserved (in this case) her success were not helped (but also not hurt).</p>
194
<p>In many parts of the world, the majority of the population is uncomfortable with math. In a few countries this is not the case. We would do well to change our education systems to promote a healthier relationship with math. But in the present situation, how can we help the students who come to our classes, which they are required to take, with fear and loathing?</p> <p><strong>How do we help students overcome their math anxieties?</strong></p>
Mandy Jansen
739
<p>I agree with the idea that different people might be anxious about mathematics for different reasons...</p> <p><a href="https://www.npr.org/blogs/health/2012/11/12/164793058/struggle-for-smarts-how-eastern-and-western-cultures-tackle-learning" rel="nofollow noreferrer">Culturally</a>, in the United States, we tend to look at capabilities in mathematics as determined by &quot;ability&quot; and from a &quot;fixed ability&quot; perspective – some people can do math and others cannot. Other nations attribute capabilities in mathematics to effort – everyone can improve at doing mathematics if they put in time and effort. It is likely that anyone with a &quot;fixed ability&quot; mindset will encounter anxiety when they struggle to understand something (oh no! I am struggling to make sense of this! I guess I am not as smart as I thought I was / I guess I'm not good at math!). [Carol Dweck's book – <a href="https://www.mindsetworks.com/" rel="nofollow noreferrer">Mindset</a> – gets at this issue!]</p> <p>So, one thing we can do as teachers is promote a growth mindset – that effort combined with opportunity to learn (with support) leads to capabilities. A small thing: When people say, &quot;I don't understand!&quot; Instead, it's: &quot;I don't understand YET.&quot; More things – look up promoting a growth mindset in math. For instance: <a href="https://blogs.edweek.org/teachers/classroom_qa_with_larry_ferlazzo/2012/10/response_classroom_strategies_to_foster_a_growth_mindset.html" rel="nofollow noreferrer">here</a>. And Dweck has research that indicates that intervening to promote a growth mindset makes a positive difference.</p>
194
<p>In many parts of the world, the majority of the population is uncomfortable with math. In a few countries this is not the case. We would do well to change our education systems to promote a healthier relationship with math. But in the present situation, how can we help the students who come to our classes, which they are required to take, with fear and loathing?</p> <p><strong>How do we help students overcome their math anxieties?</strong></p>
Allen Seay
5,247
<p>Sue VanHattum, in reference to your question "How can we help students who are very ANXIOUS about math?": I don't know that this will answer your question, but I want to refer you to www.mathmidway.org or midway@mathmuseum.net. Phone is (631) 444-0945. One of the questions in the booklet is "how do you get a 10-year-old excited about the world of numbers?" As the first exhibition created by the nation's only museum centered on mathematics, the Math Midway demonstrates the power of bringing hands-on math to the public. It launches the museum's mission to enhance the public's understanding and perception of mathematics as an evolving, creative, and aesthetic human endeavor. The Math Midway makes math approachable and enticing, so that children and adults alike can experience the exhilarating moment of mathematical discovery. I'm trying to get the portable Math Midway into Memphis so it might help our local public school system. </p>
1,281,967
<p>This is a dumb question I know.</p> <p>If I have matrix equation $Ax = b$ where $A$ is a square matrix and $x,b$ are vectors, and I know $A$ and $b$, I am solving for $x$.</p> <p>But multiplication is not commutative in matrix math. Would it be correct to state that I can solve for $A^{-1}Ax = A^{-1}b \implies x = A^{-1}b$?</p>
Bradley Morris
235,142
<p>If $A$ is invertible then your solution works. Use this little result to determine if $A$ is invertible:</p> <p>$A^{-1}$ exists $\Leftrightarrow$ det($A$) $\neq 0$. Where det($A$) is the determinant of $A$.</p> <p>A little reading on determinants: <a href="http://mathworld.wolfram.com/Determinant.html" rel="nofollow">http://mathworld.wolfram.com/Determinant.html</a> </p>
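As an illustration (a small hypothetical 2×2 example in pure Python, not from the original answer), here is the $x = A^{-1}b$ recipe carried out explicitly, with the determinant check first:

```python
from fractions import Fraction

# A 2x2 illustration (hypothetical numbers): solve A x = b via x = A^{-1} b,
# which is legitimate exactly when det(A) != 0.
A = [[Fraction(2), Fraction(1)],
     [Fraction(5), Fraction(3)]]
b = [Fraction(4), Fraction(7)]

det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
assert det != 0  # A is invertible

# Explicit 2x2 inverse: (1/det) * [[d, -b], [-c, a]]
A_inv = [[ A[1][1] / det, -A[0][1] / det],
         [-A[1][0] / det,  A[0][0] / det]]

x = [A_inv[0][0] * b[0] + A_inv[0][1] * b[1],
     A_inv[1][0] * b[0] + A_inv[1][1] * b[1]]

# Check: multiplying A by x reproduces b, as A(A^{-1}b) = b requires.
Ax = [A[0][0] * x[0] + A[0][1] * x[1],
      A[1][0] * x[0] + A[1][1] * x[1]]
print(Ax == b)  # True
```

Note that the multiplication by $A^{-1}$ is applied on the left on both sides, which is why non-commutativity causes no trouble here.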
2,130,911
<p>I'm unsure how to compute the following : 3^1000 (mod13)</p> <p>I tried working through an example below,</p> <p>ie) Compute $3^{100,000} \bmod 7$ $$ 3^{100,000}=3^{(16,666⋅6+4)}=(3^6)^{16,666}*3^4=1^{16,666}*9^2=2^2=4 \pmod 7\\ $$</p> <p>but I don't understand why they divide 100,000 by 6 to get 16,666. Where did 6 come from? </p>
Jack D'Aurizio
44,121
<p>There is a fast&amp;brutal solution that requires very little knowledge: $$ 3^{1000} \equiv 3\cdot(3^3)^{333} \equiv 3\cdot 1^{333} \equiv \color{red}{3}\pmod{13}.$$ A similar approach works in the other case, too: $$ 3^{100000}\equiv 3\cdot(3^3)^{33333} \equiv 3\cdot(-1)^{33333} \equiv -3\equiv \color{red}{4}\pmod{7}.$$</p>
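A quick check in Python (a sketch added for verification): the built-in three-argument `pow` does modular exponentiation directly, and the "$6$" in the question's worked example comes from Fermat's little theorem, $3^{7-1}\equiv 1 \pmod 7$, since $7$ is prime and $\gcd(3,7)=1$.

```python
# Three-argument pow performs fast modular exponentiation:
print(pow(3, 1000, 13))    # 3
print(pow(3, 100000, 7))   # 4

# The shortcuts used in the answer above:
assert pow(3, 3, 13) == 1   # 3^3 = 27 == 1 (mod 13)
assert pow(3, 3, 7) == 6    # 3^3 = 27 == -1 (mod 7)

# The "6" in the question's example is Fermat's little theorem:
assert pow(3, 6, 7) == 1    # 3^(7-1) == 1 (mod 7)
```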
1,447,852
<p>Compute this sum:</p> <p><span class="math-container">$$\sum_{k=0}^{n} k \binom{n}{k}.$$</span></p> <p>I tried but I got stuck.</p>
MadMonty
145,364
<p>A more intuitive way of thinking about this is to ask, "Given n people, how many possible 'teams' of people are there, given that each team has a leader?".</p> <p>So on one hand, if a team has $k$ people in it, then there are ${n}\choose{k}$ ways to pick those $k$ people, and any of those $k$ people can be leader, so there are $k {{n}\choose{k}}$ possibilities for a team with k people with a leader. Summing up over $k$, this means there are $$\sum_{k=0}^{n} k {{n}\choose{k}}$$ ways of picking a team with a leader from $n$ people.</p> <p>On the other hand, there are $n$ people. Pick one of them to be a leader ($n$ possibilities) and then of the other $n-1$ people, they're either in the team or they're not, so that gives us $2^{n-1}$ ways of picking them. Multiplying, this gives us $$n 2^{n-1}$$.</p> <p>As these expressions represent the same quantity, they are equal.</p>
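The identity is easy to test numerically; here is a short Python check (added as a sketch, not part of the original argument):

```python
from math import comb

# Check sum_{k=0}^{n} k*C(n,k) == n * 2^(n-1) for a range of n.
def lhs(n):
    return sum(k * comb(n, k) for k in range(n + 1))

def rhs(n):
    return n * 2 ** (n - 1)

checks = [lhs(n) == rhs(n) for n in range(1, 15)]
print(all(checks))  # True
```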
883,972
<p>Let:</p> <p>$$f(n) = n(n+1)(n+2)/(n+3)$$</p> <p>Therefore :</p> <p>$$f∈O(n^2)$$</p> <p>However, I don't understand how it could be $n^2$, shouldn't it be $n^3$? If I expand the top we get $$n^3 + 3n^2 + 2n$$ and the biggest is $n^3$ not $n^2$.</p>
Did
6,179
<p>$$n+2\leqslant n+3\implies f(n)\leqslant n(n+1)=n^2+n\leqslant2n^2$$ $$(n+1)(n+2)=n(n+3)+2\geqslant n(n+3)\implies f(n)\geqslant n^2$$</p>
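Both bounds can be confirmed numerically with a short Python check (a sketch using exact rational arithmetic):

```python
from fractions import Fraction

def f(n):
    return Fraction(n * (n + 1) * (n + 2), n + 3)

# The two bounds above: n^2 <= f(n) <= 2*n^2 for all n >= 1,
# which is exactly what f(n) = O(n^2) (indeed Theta(n^2)) requires.
ok = all(n ** 2 <= f(n) <= 2 * n ** 2 for n in range(1, 1000))
print(ok)  # True
```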
883,972
<p>Let:</p> <p>$$f(n) = n(n+1)(n+2)/(n+3)$$</p> <p>Therefore :</p> <p>$$f∈O(n^2)$$</p> <p>However, I don't understand how it could be $n^2$, shouldn't it be $n^3$? If I expand the top we get $$n^3 + 3n^2 + 2n$$ and the biggest is $n^3$ not $n^2$.</p>
IAmNoOne
117,818
<p>Because formally, with $g(n)=n^2$, $$\lim_{n \to \infty} \left | \frac{f(n)}{g(n)} \right |= \lim_{n \to \infty} \left | \frac{\frac{n(n+1)(n+2)}{n+3}}{n^2} \right |= \lim_{n \to \infty} \left | \frac{n(n+1)(n+2)}{n^2(n+3)} \right | = 1.$$</p> <p>So $f\in O(n^2)$ indeed.</p>
883,972
<p>Let:</p> <p>$$f(n) = n(n+1)(n+2)/(n+3)$$</p> <p>Therefore :</p> <p>$$f∈O(n^2)$$</p> <p>However, I don't understand how it could be $n^2$, shouldn't it be $n^3$? If I expand the top we get $$n^3 + 3n^2 + 2n$$ and the biggest is $n^3$ not $n^2$.</p>
evinda
75,843
<p>$$f(n)=\frac{n(n+1)(n+2)}{n+3}=\frac{(n^2+n)(n+2)}{n+3}=\frac{n^3+2n^2+n^2+2n}{n+3}=\frac{n^3+3n^2+2n}{n+3} \\ =n^2-\frac{6}{n+3}+2$$</p> <p>Let $f(n)=O(n^2)$. Then $\exists c&gt;0 \text{ and } n_0 \geq 1 \text{ such that } \forall n \geq n_0: \\ f(n) \leq cn^2 \Rightarrow n^2-\frac{6}{n+3}+2 \leq cn^2 \Rightarrow c \geq 1+\frac{2}{n^2}-\frac{6}{n^2(n+3)}$</p> <p>Since $\frac{2}{n^2}-\frac{6}{n^2(n+3)}=\frac{2}{n(n+3)}\leq \frac{1}{2}$ for all $n \geq 1$, we could pick for example $c=2$ and $n_0=1$.</p> <p>Therefore, we can find such $c,n_0$, and hence:</p> <p>$$f(n)=O(n^2)$$</p>
1,905,863
<p>I'm on the section of my book about separable equations, and it asks me to solve this:</p> <p>$$\frac{dy}{dx} = \frac{ay+b}{cy+d}$$</p> <p>So I must separate it into something like: $f(y)\frac{dy}{dx} + g(x) = constant$</p> <p>*note that there are no $g(x)$</p> <p>but I don't think it's possible. Is there something I'm missing?</p>
Leucippus
148,155
<p>Consider: $$\frac{dy}{dx} = \frac{ay+b}{cy+d}$$ which can be seen as the following: \begin{align} 1 &amp;= \frac{c y + d}{a y + b} \, \frac{dy}{dx} \\ &amp;= \frac{c}{a} \, \left[ 1 + \left(\frac{d}{c} - \frac{b}{a} \right) \, \frac{a}{ay + b} \right] \, \frac{dy}{dx} \end{align} which becomes $$\frac{a}{c} \, dx = \left[ 1 + \left(\frac{d}{c} - \frac{b}{a} \right) \, \frac{a}{ay + b} \right] \, dy $$ and leads to, after integration, $$y + \left(\frac{d}{c} - \frac{b}{a}\right) \, \ln(a y + b) = \frac{a \, x}{c} + \mu_{0},$$ where $\mu_{0}$ is the constant of integration.</p>
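One way to gain confidence in the implicit solution is a numerical spot-check (a sketch with hypothetical coefficient values $a=2$, $b=1$, $c=3$, $d=5$): integrate the ODE with a standard RK4 stepper and confirm that the implicit combination $y + \left(\frac dc - \frac ba\right)\ln(ay+b) - \frac{a}{c}x$ stays constant along the trajectory.

```python
import math

# Hypothetical coefficients for the spot-check.
a, b, c, d = 2.0, 1.0, 3.0, 5.0

def rhs(y):
    return (a * y + b) / (c * y + d)

def invariant(x, y):
    # y + (d/c - b/a) ln(a y + b) - (a/c) x should be constant on solutions.
    return y + (d / c - b / a) * math.log(a * y + b) - (a / c) * x

x, y, h = 0.0, 1.0, 1e-3
start = invariant(x, y)
for _ in range(2000):  # classic RK4 steps up to x = 2
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * h * k1)
    k3 = rhs(y + 0.5 * h * k2)
    k4 = rhs(y + h * k3)
    y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    x += h
drift = abs(invariant(x, y) - start)
print(drift < 1e-8)  # True
```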
1,074,534
<p>How can I get started on this proof? I was thinking originally:</p> <p>Let $ n $ be odd. (Proving by contradiction) then I dont know.</p>
drhab
75,923
<p>Let $n$ be the smallest positive number that has $k&gt;1$ divisors and let $n=p_1^{r_1}\times\cdots\times p_s^{r_s}$ be its factorization into primes. If $n$ is odd then $2&lt;p_i$ for $i=1,\dots,s$. Replacing one of the $p_i$ by $2$ results in a smaller number that has the same number of divisors ($k=(r_1+1)\times\cdots\times(r_s+1)$), so a contradiction is found.</p>
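The claim is also easy to probe computationally; here is a small Python sketch (not part of the original argument) that finds the smallest $n$ with exactly $k$ divisors for small $k$ and confirms each is even:

```python
def num_divisors(n):
    count, i = 0, 1
    while i * i <= n:
        if n % i == 0:
            count += 1 if i * i == n else 2
        i += 1
    return count

# Smallest n with exactly k divisors, for k = 2..12 (the search bound is
# chosen to cover k = 11, whose smallest witness is 2^10 = 1024).
smallest = {}
for n in range(1, 1100):
    k = num_divisors(n)
    if k not in smallest:
        smallest[k] = n

result = {k: smallest[k] for k in range(2, 13)}
print(result)
print(all(n % 2 == 0 for n in result.values()))  # True
```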
187,618
<p>I am trying to solve the following problem.</p> <p>The time $T$ required to repair a machine is an exponentially distributed random variable with mean 10 hours.</p> <p>a) What is the probability that a repair takes at least 15 hours given that its duration exceeds 12 hours? b) What is the probability that the combined time to repair two machines is at least 20 hours?</p> <p><strong>Solution Attempt</strong></p> <p>Since the mean is given to be 10 hours, we have $\lambda = \dfrac {1}{10}$ and the survival function of the repair time is $P(T&gt;t)=e^{-\lambda t} = e^{-\dfrac {1}{10} t} $ </p> <p>a) By memorylessness, $P(T&gt;15 |T&gt;12) = P(0 $ repairs in $ (12, 15]) = e^{-\dfrac {1}{10} 3}$</p> <p>b) Let $T_1$ be the r.v. representing the time to repair the first machine and $T_2$ be the r.v. representing the time to repair the second machine. So we seek to evaluate $P(T_1 + T_2 &gt; 20)$. We know both of these times should be independent, and the exponential distribution is memoryless, but I am not sure how to proceed from here. </p> <p>Any help would be much appreciated. </p>
Did
6,179
<p>$$\mathrm P(T_1+T_2\gt20)=\mathrm P(T_1\gt20)+\int_0^{20}\mathrm P(T_2\gt20-t)\cdot\lambda\mathrm e^{-\lambda t}\cdot\mathrm dt $$ $$ \mathrm P(T_1+T_2\gt20)=\mathrm e^{-20\lambda}+\int_0^{20}\mathrm e^{-\lambda (20-t)}\cdot\lambda\mathrm e^{-\lambda t}\cdot\mathrm dt=\ ...$$</p>
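To see where the hint leads numerically, here is a Python sketch (with $\lambda = 1/10$ from the stated mean of 10 hours): since the integrand $\mathrm e^{-\lambda(20-t)}\cdot\lambda\mathrm e^{-\lambda t}=\lambda\mathrm e^{-20\lambda}$ is constant in $t$, the integral evaluates to $20\lambda\mathrm e^{-20\lambda}$.

```python
import math

lam = 1 / 10  # rate parameter; the mean repair time is 10 hours

# P(T1+T2 > 20) = e^{-20 lam} + integral_0^20 e^{-lam(20-t)} * lam e^{-lam t} dt.
# The integrand equals the constant lam * e^{-20 lam}, so the integral is
# 20 * lam * e^{-20 lam}, giving the closed form below.
closed_form = math.exp(-20 * lam) * (1 + 20 * lam)
print(closed_form)

# Cross-check the integral with a plain Riemann sum.
n = 100000
dx = 20 / n
integral = sum(lam * math.exp(-lam * (20 - i * dx)) * math.exp(-lam * i * dx) * dx
               for i in range(n))
approx = math.exp(-20 * lam) + integral
print(abs(approx - closed_form) < 1e-6)  # True
```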
201,381
<p>I have basic training in Fourier and harmonic analysis, and I want to enter and work in an area of number theory (one of interest to current researchers) which is close to analysis. </p> <blockquote> <p>Can you suggest some fundamental papers (or books), so that after reading these I will hopefully have something to work on (I mean, a chance of discovering something new)?</p> </blockquote>
Desiderius Severus
43,737
<p>In another direction, you may be interested in how Fourier analysis (series decompositions, the Poisson formula) is fundamental in:</p> <ul> <li>Trace formulas (a kind of generalization of the Poisson formula to the non-real, non-commutative case)</li> <li>Computing functional equations for zeta functions and reaching Tamagawa numbers (those are volumes of fundamental quotient spaces in adelic settings)</li> <li>Modular and automorphic forms</li> </ul> <p>For trace formulas and automorphic forms, I would say that an efficient and pleasant first read is H. Iwaniec, <em>Spectral Methods of Automorphic Forms</em>, AMS. In order to see how Fourier analysis works well in those settings, you can read <em>Tate's thesis</em> – it is the GL(1) case – available in Cassels-Frohlich or in Lang, <em>Algebraic Number Theory</em>, Springer GTM.</p> <p>For Tamagawa numbers, the book of Vignéras, <em>Arithmétique des algèbres de quaternions</em>, Springer LNM, is a very nice reference. It is more or less translated in Reid-MacLachlan, <em>The Arithmetic of Hyperbolic 3-Manifolds</em>, Springer GTM.</p> <p>Hoping you will uncover those lovely topics ;)</p>
4,294,577
<p>If I have a polynomial with all positive integer coefficients, is there a way to get a lower bound for its roots? Zero isn't an option, because I've applied the rational root theorem and found all possible rational roots. In case it's needed, the polynomial and its list of candidate roots are below:</p> <p><span class="math-container">$70x^{4}+163x^{3}+109x^{2}+37x+6$</span></p> <p><span class="math-container">$±1, ±1/2, ±1/5, ±1/7, ±1/10, ±1/14, ±1/35, ±1/70,$</span> <span class="math-container">$±2, ±2/5, ±2/7, ±2/35,$</span> <span class="math-container">$±3, ±3/2, ±3/5, ±3/7, ±3/10, ±3/14, ±3/35, ±3/70,$</span> <span class="math-container">$±6, ±6/5, ±6/7, ±6/35$</span></p>
Alexey Do
532,569
<p>Since the Bourbaki style makes it terrible to track down all the details, I post this to help those who want to read a full proof of this problem; of course, I adopt modern notation.</p> <p>Fix a base <span class="math-container">$\Delta$</span> for the root system <span class="math-container">$\Sigma$</span>. Denote <span class="math-container">$$\Sigma^* = \left \{\alpha^* \mid \alpha \ \text{is a root} \ \right \}$$</span> <span class="math-container">$$\mathrm{Aut}(\Sigma^*,\Delta) = \left \{ \ \text{elements in} \ \mathrm{Aut}(\Sigma^*) \ \text{that stabilize} \ \Delta \right \}.$$</span> We are now in a position to prove that if <span class="math-container">$\phi \in N$</span> is such that <span class="math-container">$\psi(\phi)$</span> induces an element in <span class="math-container">$\mathrm{Aut}(\Sigma^*,\Delta)$</span>, then <span class="math-container">$\delta=\psi(\phi)$</span> is induced by an element in <span class="math-container">$Z = \left \{\phi \in \mathrm{Aut}_0(\mathfrak{g}) \mid \phi_{\mid \mathfrak{h}} = \mathrm{id}_{\mid \mathfrak{h}} \right \}$</span>. The subgroup generated by <span class="math-container">$\delta$</span> has a finite number of orbits on <span class="math-container">$\Sigma^*$</span>; let <span class="math-container">$U$</span> be such an orbit, of cardinality <span class="math-container">$r$</span>, and denote <span class="math-container">$$g_U = \bigoplus_{\alpha^* \in U} g_{\alpha}.$$</span> Let <span class="math-container">$\alpha_1^*\in U$</span> and define <span class="math-container">$\alpha_i^* = \delta^{i-1}(\alpha_1^*) = (\alpha_1 \circ \delta^{1-i})^* \ \forall \ i = \overline{1,r}$</span>. Thus <span class="math-container">$U = \left \{\alpha_1^*,...,\alpha_r^* \right \}$</span>. Let <span class="math-container">$X_1$</span> be a non-zero element in <span class="math-container">$g_{\alpha_1}$</span>. 
We are going to prove that there exists a non-zero scalar <span class="math-container">$c_U$</span> such that <span class="math-container">$\delta^r(X_1)=c_U X_1$</span>. To do this, recall that <span class="math-container">$U$</span> is an orbit, therefore <span class="math-container">$\delta^r(\alpha_1^*) = \alpha_1^*$</span>, equivalently, <span class="math-container">$(\alpha_1 \circ \delta^{-r})^* = \alpha_1^*$</span>. We claim that <span class="math-container">$\delta^r(X_1) \in g_{\alpha_1}$</span>. Indeed, for all <span class="math-container">$H \in \mathfrak{h}$</span>, <span class="math-container">$$\begin{align*} [H,\delta^r(X_1)] &amp; = \delta^r[\delta^{-r}(H),X_1] \\ &amp; = \delta^r\left((\alpha_1\circ \delta^{-r})(H)X_1 \right) \\&amp; = (\alpha_1\circ\delta^{-r})(H) \delta^r(X_1) \\ &amp; = B(H,(\alpha_1 \circ \delta^{-r})^*) \delta^r(X_1) \\ &amp; = B(H,\alpha_1^*)\delta^r(X_1) \\ &amp; = \alpha_1(H)\delta^r(X_1). \end{align*}$$</span> But by the semisimplicity of <span class="math-container">$\mathfrak{g}$</span>, <span class="math-container">$\mathrm{dim}(g_{\alpha})=1$</span>; hence <span class="math-container">$\delta^r(X_1)$</span> and <span class="math-container">$X_1$</span> are proportional, proving our claim. By the definition of the other <span class="math-container">$X_i$</span>, we deduce that <span class="math-container">$$\delta^r_{\mid g_U} = c_U.\mathrm{id}_{g_U}.$$</span> We shall twist <span class="math-container">$\delta$</span> by an element of <span class="math-container">$Z$</span> so that the resulting automorphism still comes from an element in <span class="math-container">$Z$</span>. 
For each homomorphism <span class="math-container">$$\Theta: \bigoplus_{\alpha \in \Sigma} \mathbb{Z}\alpha^* \to \mathbb{C}^*$$</span> we define an element of <span class="math-container">$Z$</span>, denoted by <span class="math-container">$f(\Theta)$</span>, by the following rule: <span class="math-container">$$\begin{cases} f(\Theta)_{\mid g_{\alpha}} = \Theta(\alpha^*)\mathrm{id}_{g_{\alpha}} \\ f(\Theta)_{\mid \mathfrak{h}} = \mathrm{id}_{\mathfrak{h}} \end{cases}$$</span> It is clear that <span class="math-container">$f(\Theta) \in Z$</span>. Moreover, <span class="math-container">$$(\delta \circ f(\Theta))(X_1) = \delta(\Theta(\alpha_1^*)X_1) = \Theta(\alpha_1^*)\delta(X_1) = \Theta(\alpha_1^*)X_2.$$</span> Iterating this process by successively applying <span class="math-container">$\delta \circ f(\Theta)$</span>, we deduce that <span class="math-container">$$(\delta \circ f(\Theta))^r(X_1) = c_U\prod_{i=1}^r \Theta(\alpha_i^*)X_1 = c_U \Theta \left(\sum_{i=1}^r \alpha_i^* \right)X_1.$$</span> Again, this implies that <span class="math-container">$$(\delta \circ f(\Theta))^r_{\mid g_U} = c_U \Theta \left(\sum_{i=1}^r \alpha_i^* \right) \mathrm{id}_{g_U}.$$</span> Let us write our fixed base <span class="math-container">$\Delta$</span> as <span class="math-container">$\Delta = \left \{\beta_1^*,...,\beta_n^* \right \}$</span>. 
Since this is a base, there exist <span class="math-container">$a_1^U,...,a_n^U \in \mathbb{Z}$</span>, all of the same sign and not all zero, such that <span class="math-container">$\alpha_1^*= \sum_{i=1}^n a_i^U \beta_i^*$</span>; successively applying <span class="math-container">$\delta$</span> and recalling that <span class="math-container">$\delta \in \mathrm{Aut}(\Sigma^*,\Delta)$</span>, we deduce the existence of <span class="math-container">$m_1^U,...,m_n^U \in \mathbb{Z}$</span>, of the same sign and not all zero, such that <span class="math-container">$$\sum_{i=1}^r \alpha_i^* = \sum_{i=1}^n m_i^U \beta_i^*.$$</span> Thus, defining <span class="math-container">$c'_U = c_U\Theta \left(\sum_{i=1}^r \alpha_i^* \right) = c_U \prod_{i=1}^n \Theta(\beta_i^*)^{m_i^U}$</span>, we can choose <span class="math-container">$\Theta$</span> such that for all orbits <span class="math-container">$U$</span>, <span class="math-container">$c_U' \neq 1$</span>. This is possible, because it amounts to choosing elements <span class="math-container">$\Theta(\beta_i^*) = t_i$</span> in <span class="math-container">$\mathbb{C}^*$</span> such that none of the (finitely many) polynomials <span class="math-container">$c_U\prod_{i=1}^n t_i^{m_i^U} - 1$</span> vanishes. 
So far, what we've done is to show that it is possible to choose <span class="math-container">$\Theta$</span> such that <span class="math-container">$\delta \circ f(\Theta)$</span> does not have <span class="math-container">$1$</span> as an eigenvalue on the subspace <span class="math-container">$\oplus_{\alpha \in \Sigma} g_{\alpha}$</span>.</p> <p>Now since <span class="math-container">$\mathrm{Aut}^0(\mathfrak{g})$</span> is a connected Lie group, every element can be written as a product of some <span class="math-container">$e^{y}$</span> with <span class="math-container">$y \in \mathrm{Lie}(\mathrm{Aut}^0(\mathfrak{g})) \subset \mathrm{Der}(\mathfrak{g}) \overset{\text{semisimplicity}}{=} \mathrm{ad}(\mathfrak{g})$</span>; thus, each element in <span class="math-container">$\mathrm{Aut}^0(\mathfrak{g})$</span> is a product of elements of the form <span class="math-container">$e^{\mathrm{ad}(x)}$</span> with <span class="math-container">$x \in \mathfrak{g}$</span>. We are now in a position to prove that the generalized <span class="math-container">$0$</span>-space of <span class="math-container">$\delta \circ f(\Theta) - \mathrm{id}$</span> has dimension at least the dimension of <span class="math-container">$\mathfrak{h}$</span>, which equals <span class="math-container">$\mathrm{rank}(\mathfrak{g})$</span>. To illustrate how to do this, we consider only the case <span class="math-container">$\delta \circ f(\Theta) = e^{\mathrm{ad}(x)}$</span>. Since we are working over <span class="math-container">$\mathbb{C}$</span>, <span class="math-container">$\mathrm{ad}(x)$</span> is similar to a Jordan matrix <span class="math-container">$J$</span>, read <span class="math-container">$\mathrm{ad}(x) = AJA^{-1}$</span> for some <span class="math-container">$A \in \mathrm{GL}(\mathfrak{g})$</span>. Thus, <span class="math-container">$\mathrm{dim} \mathfrak{g}^0(\mathrm{ad}(x))$</span> is precisely the number of zeros on the diagonal of <span class="math-container">$J$</span>. 
Moreover, <span class="math-container">$e^{\mathrm{ad}(x)} - \mathrm{id} = A(e^J - 1)A^{-1}$</span>, and since <span class="math-container">$e^{\lambda}=1$</span> whenever <span class="math-container">$\lambda=0$</span>, the number of zeros on the diagonal of <span class="math-container">$e^J - \mathrm{id}$</span> is at least that of <span class="math-container">$J$</span>. Finally <span class="math-container">$$\mathrm{dim} \bigcup_{k \geq 1} \ker(\delta \circ f(\Theta) - \mathrm{id})^k \geq \mathrm{dim} \mathfrak{g}^0(\mathrm{ad}(x)) \geq \mathrm{rank}(\mathfrak{g}) = \mathrm{dim}\mathfrak{h}.$$</span> By the assumption of no eigenvalue <span class="math-container">$1$</span> on the complement <span class="math-container">$\oplus_{\alpha \in \Sigma}g_{\alpha}$</span>, we see that <span class="math-container">$\delta \circ f(\Theta) - \mathrm{id}$</span> is nilpotent on <span class="math-container">$\mathfrak{h}$</span>. On the other hand, <span class="math-container">$\delta \circ f(\Theta)$</span> permutes a base, so it has finite order (it is an element of a finite symmetric group). Consequently, its minimal polynomial divides <span class="math-container">$x^k-1$</span> for some <span class="math-container">$k$</span>; the latter polynomial has only simple roots, so the Jordan blocks of <span class="math-container">$\delta \circ f(\Theta)$</span> are all of size <span class="math-container">$(1 \times 1)$</span>; i.e. <span class="math-container">$\delta \circ f(\Theta)$</span> is diagonalizable. Combining this with the previous assertion that <span class="math-container">$\delta \circ f(\Theta) - \mathrm{id}$</span> is nilpotent on <span class="math-container">$\mathfrak{h}$</span>, we deduce that <span class="math-container">$\delta \circ f(\Theta)_{\mid \mathfrak{h}} = \mathrm{id}_{\mathfrak{h}}$</span>, as desired.</p>
2,875,907
<p>There is a set of rods of lengths <span class="math-container">$1,2,3,4 \dots N$</span>. Two players take turns choosing 3 rods that compose a triangle with non-zero area. After that these particular 3 rods are removed. If it is not possible to compose such a triangle, then that player loses.</p> <p>Who has the winning strategy?</p> <hr> <p>[Edit] Some easy observations:</p> <ul> <li>We get a triangle of non-zero area, if and only if the lengths of the chosen rods, say <span class="math-container">$a&lt;b&lt;c$</span>, satisfy the strict triangle inequality <span class="math-container">$a+b&gt;c$</span>. It may be easier to use this in the form <span class="math-container">$a&gt;c-b$</span> that can be interpreted as stating that the shortest chosen rod must be longer than the length gap between the two longer ones.</li> <li>The rod of length one can never be used because <span class="math-container">$a=1$</span> makes it impossible to satisfy the inequalities in the previous bullet. We can simply pretend that the rod of length one is not part of the game.</li> <li>When <span class="math-container">$N=7$</span> removing the triple <span class="math-container">$\{3,5,7\}$</span> leaves the other player with rods of lengths <span class="math-container">$\{2,4,6\}$</span> and no legal moves. This position is a win for the first player.</li> <li>When <span class="math-container">$N=8$</span> removing the triple <span class="math-container">$\{4,6,7\}$</span> similarly leaves the second player with an impossible task. The collection of lengths <span class="math-container">$\{2,3,5,8\}$</span> has (just barely) too long gaps for the second player to use either <span class="math-container">$2$</span> or <span class="math-container">$3$</span> in the role of <span class="math-container">$a$</span>. This is also a win for the first player.</li> <li>On the other hand when <span class="math-container">$N=9$</span> the game plays out differently. 
After removing the triple of rods used by the first player, five rods of lengths <span class="math-container">$2\le x_1&lt;x_2&lt;x_3&lt;x_4&lt;x_5\le9$</span> remain. Here <span class="math-container">$x_2\ge3$</span>. Because <span class="math-container">$x_3\ge4$</span> and <span class="math-container">$x_5\le9$</span> we have <span class="math-container">$x_3+2x_2&gt;x_5$</span>. This means that either <span class="math-container">$x_4-x_3$</span> or <span class="math-container">$x_5-x_4$</span> must be less than <span class="math-container">$x_2$</span>. Therefore the second player can pick either the rods of lengths <span class="math-container">$\{x_2,x_3,x_4\}$</span> or the rods of lengths <span class="math-container">$\{x_2,x_4,x_5\}$</span>. After having removed those rods, only two remain, so the second player wins in this case.</li> </ul> <p>But what happens in the general case? [/Edit, JL]</p>
Jaap Scherphuis
362,967
<p>This is a nim-like <a href="https://en.wikipedia.org/wiki/Impartial_game" rel="noreferrer">impartial game</a>, so each possible position has a nim-value associated with it. I wrote a little program to calculate the nim-values, and the values of the starting positions are:</p> <pre><code>N 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 v 1 1 1 2 2 0 3 1 1 2 0 0 1 1 1 2 0 0 1 1 1 2 0 0 1 </code></pre> <p>The game is therefore a win for the second player when $N$ is $9, 14, 15, 20, 21, 26, 27$, and a win for the first player for the other values with $4 \le N \le 28$.</p> <p>Maybe the pattern of zeroes continues and it is a second player win for every $N\ge9$ with $N \equiv 2,3 \mod 6$.</p> <p>For completeness, here is the c# code I used:</p> <pre><code>using System; using System.Collections.Generic; namespace test4 { class TriangleGame { static void Main() { for (int n = 3; n &lt;= 24; n++) { CalcGame(n); } } static void CalcGame(int N) { int numpos = 1 &lt;&lt; N; int[] nimval = new int[numpos]; ISet&lt;int&gt; reachable = new HashSet&lt;int&gt;(); for (int p = 0; p &lt; numpos; p++) { reachable.Clear(); for (int s1 = 1, m1 = 1; m1 &lt; numpos; m1 += m1, s1++) { if ((p &amp; m1) != 0) { for (int s2=s1+1, m2 = m1 + m1; m2 &lt; numpos; m2 += m2, s2++) { if ((p &amp; m2) != 0) { for (int s3=s2+1, m3 = m2 + m2; m3 &lt; numpos &amp;&amp; s3&lt;s1+s2; m3 += m3, s3++) { if ((p &amp; m3) != 0) { int q = p - m1-m2-m3; reachable.Add(nimval[q]); } } } } } } // find first unused number int k = 0; while (reachable.Contains(k)) k++; nimval[p] = k; } Console.WriteLine("Game {0} has nimval {1}", N, nimval[numpos-1]); } } } </code></pre>
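As a cross-check, here is an independent Sprague–Grundy computation in Python (a sketch re-implementing the same rules, added for verification) covering the smaller cases:

```python
from functools import lru_cache

# Independent Sprague-Grundy cross-check of the reported nim-values,
# with rods encoded as a frozenset of lengths (the length-1 rod is never usable).
def nim_value(N):
    @lru_cache(maxsize=None)
    def grundy(state):
        s = sorted(state)
        moves = set()
        for i in range(len(s)):
            for j in range(i + 1, len(s)):
                for k in range(j + 1, len(s)):
                    if s[i] + s[j] > s[k]:  # strict triangle inequality
                        moves.add(grundy(state - {s[i], s[j], s[k]}))
        mex = 0  # minimum excludant of the reachable values
        while mex in moves:
            mex += 1
        return mex

    return grundy(frozenset(range(2, N + 1)))

values = {N: nim_value(N) for N in range(4, 13)}
print(values)  # e.g. values[9] == 0, in line with the table above
```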
3,385,420
<p>The question is from <em>Cambridge Admission Test 1983</em></p> <blockquote> <p>A room contains m men and w women. They leave one by one at random until only people of the same sex remain. Show by a carefully explained inductive argument, or otherwise, that the expected number of people remaining is <span class="math-container">$ \frac{\text{m}}{\text{w}+1}+\frac{\text{w}}{\text{m}+1} $</span></p> </blockquote> <p>I cannot think of a way to do what it says by an "inductive argument". Also, I cannot fully justify my own process. My thought is:</p> <p>Consider the w women already arranged, then interpolate the m men. Each man has probability <span class="math-container">$ \frac{1}{\text{w}+1} $</span> of leaving after all the women (which corresponds to being among those remaining). Then the expected number of men remaining is <span class="math-container">$ \frac{m}{\text{w}+1} $</span>.</p> <p>However, similarly, the expected number of women remaining is <span class="math-container">$ \frac{w}{\text{m}+1} $</span>. But why can we directly add them together? My way of thinking suggests two mutually exclusive expectations, but it seems that we do not know whether the remaining people are men or women.</p>
drhab
75,923
<p>Number the men with <span class="math-container">$1,\dots,m$</span> and the women with <span class="math-container">$1,\dots,w$</span>.</p> <p>For <span class="math-container">$i=1,\dots,m$</span> let <span class="math-container">$X_{i}$</span> take value <span class="math-container">$1$</span> if man <span class="math-container">$i$</span> will be one of the remaining persons, and value <span class="math-container">$0$</span> otherwise.</p> <p>For <span class="math-container">$i=1,\dots,w$</span> let <span class="math-container">$Y_{i}$</span> take value <span class="math-container">$1$</span> if woman <span class="math-container">$i$</span> will be one of the remaining persons, and value <span class="math-container">$0$</span> otherwise.</p> <p>Then <span class="math-container">$$Z=\sum_{i=1}^{m}X_{i}+\sum_{i=1}^{w}Y_{i}$$</span> denotes the number of remaining persons.</p> <p>Hopefully answering your question:</p> <blockquote> <p>"But why can we directly add them together?"</p> </blockquote> <p>With linearity of expectation and symmetry we find:<span class="math-container">$$\mathbb{E}Z=m\mathbb{E}X_{1}+w\mathbb{E}Y_{1}=mP\left(\text{man }1\text{ remains}\right)+wP\left(\text{woman }1\text{ remains}\right)$$</span></p> <p>Man <span class="math-container">$1$</span> will remain if and only if all women leave before him, so: <span class="math-container">$$P\left(\text{man }1\text{ remains}\right)=\frac{1}{w+1}$$</span></p> <p>Woman <span class="math-container">$1$</span> will remain if and only if all men leave before her, so: <span class="math-container">$$P\left(\text{woman }1\text{ remains}\right)=\frac{1}{m+1}$$</span> This proves that: <span class="math-container">$$\mathbb EZ=\frac{m}{w+1}+\frac{w}{m+1}$$</span></p>
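The formula can also be sanity-checked by simulation (my addition; Python, encoding a random leaving order as a shuffled list — the group that remains is exactly the maximal single-sex suffix of that order):

```python
import random

def remaining(m, w, rng):
    """One room: shuffle the leaving order, count the maximal single-sex suffix."""
    order = ['M'] * m + ['W'] * w
    rng.shuffle(order)
    count = 1
    while count < len(order) and order[-count - 1] == order[-1]:
        count += 1
    return count

def estimate(m, w, trials=200_000, seed=0):
    rng = random.Random(seed)
    return sum(remaining(m, w, rng) for _ in range(trials)) / trials
```

With `m = 3, w = 4` the estimate lands close to $\frac{3}{5}+\frac{4}{4}=1.6$.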
853,774
<blockquote> <p>If $(G,*)$ is a group and $(a * b)^2 = a^2 * b^2$ then $(G, *)$ is abelian for all $a,b \in G$.</p> </blockquote> <p>I know that I have to show $G$ is commutative, ie $a * b = b * a$</p> <p>I have done this by first using $a^{-1}$ on the left, then $b^{-1}$ on the right, and I end up with and expression $ab = b * a$. Am I mixing up the multiplication and $*$ somehow?</p> <p>Thanks</p>
user160992
160,992
<p>There is only one operation defined for the group, namely $*$, so if you want to be pedantic/exact, $a*b$ is a valid statement, while $ab$ is not defined.</p> <p>However, in practice we shorten the notation, so $a*b$ can be written as $ab$.</p> <p>So your final expression is equivalently $ab=ba$ or $a*b=b*a$. They are the same with slightly different notation.</p>
1,050,232
<p>Prove the following proposition:</p> <p>Let $x, y \in \mathbb{R}_{&gt;0}$. If $x &lt; y$ then $0 &lt; y^{-1} &lt; x^{-1}.$</p> <p>So far I've gotten that since $x, y &gt; 0$ then $x^{-1}, y^{-1} &gt; 0$. </p>
Julián Aguirre
4,791
<p>Since $a,b\in(\frac\pi8,\frac\pi4)$, we have $2c\in(\frac\pi4,\frac\pi2)$. Then $$ \cos\frac\pi2&lt;\cos 2c&lt;\cos\frac\pi4\implies\cos^22c\le\frac{1}{2} $$ and $$ \frac{1}{\cos^22c}\ge2. $$</p>
1,050,232
<p>Prove the following proposition:</p> <p>Let $x, y \in \mathbb{R}_{&gt;0}$. If $x &lt; y$ then $0 &lt; y^{-1} &lt; x^{-1}.$</p> <p>So far I've gotten that since $x, y &gt; 0$ then $x^{-1}, y^{-1} &gt; 0$. </p>
ir7
26,651
<p>If $c\in(\frac\pi8,\frac\pi4)$, then $2c\in(\frac\pi4,\frac\pi2)$ and:</p> <p>$$ 0 =\cos^2\left(\frac{\pi}{2}\right) &lt;\cos^2(2c) &lt; \cos^2\left(\frac{\pi}{4}\right) = \frac{1}{2}. $$</p> <p>Hence:</p> <p>$$ \frac{2}{\cos^2(2c)} &gt; 4.$$</p> <p><strong>Edit:</strong> </p> <p>If $ a= b$, then</p> <p>$$ |\tan (2a) - \tan (2b)| = 0 = 4|a-b|. $$</p> <p>If $a\not= b$, proceed as you did with the application of MVT to obtain: $$ |\tan (2a) - \tan (2b)| &gt; 4|a-b|. $$</p> <p>So, no matter what the relationship between $a$ and $b$ is, as long as they are both in $(\frac\pi8,\frac\pi4)$ we have $$ |\tan (2a) - \tan (2b)| \geq 4|a-b|. $$</p> <p><strong>Edit2:</strong></p> <p>If $a&gt; b$, we show $$ \frac{\tan (2a) - \tan (2b)}{a-b} = \frac{2}{\cos^2(2c)}, $$ for some $c\in (b,a)$.</p> <p>If $a&lt;b$, we show $$ \frac{\tan (2b) - \tan (2a)}{b-a} = \frac{2}{\cos^2(2c)}, $$ for some $c\in (a,b)$.</p> <p>Concisely, using modulus, we say that for $a\not=b$ we have:</p> <p>$$ \frac{|\tan (2b) - \tan (2a)|}{|b-a|} = \frac{2}{\cos^2(2c)}, $$ for some $c$ in $(a,b)$ or $(b,a)$.</p> <p>Function $\tan$ is strictly increasing on $(\frac\pi4,\frac\pi2)$. </p>
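A numerical spot check of the resulting inequality on a grid in $(\frac\pi8,\frac\pi4)$ (my addition, not part of the proof):

```python
import math

def check_tan_inequality(samples=60):
    """Verify |tan 2a - tan 2b| >= 4|a - b| for a, b in (pi/8, pi/4)."""
    lo, hi = math.pi / 8, math.pi / 4
    pts = [lo + (hi - lo) * (i + 1) / (samples + 1) for i in range(samples)]
    for a in pts:
        for b in pts:
            # small slack guards against floating-point rounding at a == b
            assert abs(math.tan(2 * a) - math.tan(2 * b)) >= 4 * abs(a - b) - 1e-12
    return True
```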
11,518
<p>How to prove that $\mathcal{O}_{\sqrt[3]{3}}$ is an euclidean domain? I heard that one should prove the following but why it is enough?</p> <p>For any $ a,b,c\in\mathbb{R}$, prove that there are $ x,y,z\in\mathbb{R}$ such that $ x-a,y-b,z-c\in\mathbb{Z}$ and that $$-1\leq x^3+3y^3+9z^3-9xyz\leq 1.$$</p>
Alex B.
3,212
<p>Note that the ring of integers of $\mathbb{Q}(\sqrt[3]{3})$ is $\mathbb{Z}[\sqrt[3]{3}]$ with basis $1,\sqrt[3]{3},\sqrt[3]{9}$ over $\mathbb{Z}$. You want to show that the norm, defined by \begin{eqnarray*} N(a+b\sqrt[3]{3} + c\sqrt[3]{9}) &amp; = &amp; (a+b\sqrt[3]{3} + c\sqrt[3]{9})(a+\zeta_3b\sqrt[3]{3} + \zeta_3^2c\sqrt[3]{9})(a+\zeta_3^2b\sqrt[3]{3} + \zeta_3c\sqrt[3]{9})\\ &amp; = &amp; a^3 + 3b^3 + 9c^3 - 9abc \end{eqnarray*} gives in fact a Euclidean norm upon taking absolute values, where $\zeta_3$ is a fixed primitive cube root of unity. Now, the first (and maybe the main) thing you might wonder about is where I pulled this norm out. For that you have to know a little bit of Galois theory. The basic idea is that from the perspective of $\mathbb{Q}$, the elements $\sqrt[3]{3}$ and $\zeta_3\sqrt[3]{3}$ are indistinguishable: they are both just <strong>some</strong> roots of the polynomial $x^3-3$ and can be thought of as mirror images of each other. Essentially, the factors that I am multiplying are like "mirror images" of my given element of $\mathbb{Z}[\sqrt[3]{3}]$.</p> <p>Now, the strategy is exactly the same as in the case of some quadratic rings like $\mathbb{Z}[\sqrt{2}]$: given $\alpha = u+v\sqrt[3]{3} + w\sqrt[3]{9}$ and $\beta = f+g\sqrt[3]{3} + h\sqrt[3]{9}\in \mathbb{Z}[\sqrt[3]{3}]$, you want to show that there exist $p$ and $r$ in that ring such that \begin{eqnarray*}\alpha = p\beta + r, \end{eqnarray*} where either $r=0$ or $N(r)&lt;N(\beta)$. Dividing by $\beta$, this becomes equivalent to showing that given any $\alpha/\beta=a+b\sqrt[3]{3} + c\sqrt[3]{9}$ in the field of fraction $\mathbb{Q}(\sqrt[3]{3})$ of $\mathbb{Z}[\sqrt[3]{3}]$, there exists $r/\beta\in \mathbb{Q}(\sqrt[3]{3})$ such that the difference is in the ring of integers itself (since the difference is $p$) and $|N(r/\beta)|&lt;1$. That is the statement you posted.</p>
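As a quick numerical check (my addition), the product of the three conjugate factors really does collapse to the integer expression $a^3+3b^3+9c^3-9abc$:

```python
import cmath

ZETA = cmath.exp(2j * cmath.pi / 3)   # primitive cube root of unity
ALPHA = 3 ** (1 / 3)                  # real cube root of 3

def norm_by_conjugates(a, b, c):
    """Multiply the three 'mirror image' factors from the answer."""
    prod = 1 + 0j
    for k in (0, 1, 2):
        prod *= a + b * ZETA**k * ALPHA + c * ZETA**(2 * k) * ALPHA**2
    return prod

def norm_by_formula(a, b, c):
    return a**3 + 3 * b**3 + 9 * c**3 - 9 * a * b * c
```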
3,407,368
<p>Please help me to think through this.</p> <p>Take Riemann, for example. Finding a non-trivial zero with a real part not equal to <span class="math-container">$\frac{1}{2}$</span> (i.e., a counterexample) would disprove the conjecture, and also so it to be decidable.</p> <p>How about demonstrating that Riemann is undecidable? Would that not imply that we can check zeros ad infinitum without resolving the hypothesis? But, checking zeros can only provide a counterexample, i.e., a disproof. </p> <p>How (if at all) do these statements differ?</p> <p>Any non-trivial zeros that we can find through brute force checking will have a real part of <span class="math-container">$\frac{1}{2}$</span>.</p> <p>All non-trival zeros have a real part of <span class="math-container">$\frac{1}{2}$</span>.</p> <p>Is my assumption that all non-trivial zeros is in the infinite set of zeros that can be checked by brute force correct, or even relevant? Or meaningful?</p> <p>Please be kind. I'm not sure if my question even makes sense.</p>
Robert Israel
8,508
<p>They do not differ. "Any non-trivial zeros that we can find through brute force checking " is exactly the same as "All non-trivial zeros". That is, there is a "brute-force" procedure that will enumerate all the zeros. </p> <p>If the RH is false, it is provably false. So if it happens to be undecidable, it must be true. But of course, for all we know, it could be provably true.</p>
1,437,287
<p>On <a href="https://en.wikipedia.org/wiki/Geometric_series#Geometric_power_series" rel="nofollow">Wikipedia</a> it is stated that, by differentiating the geometric series, the following formula holds:</p> <p>$$ \sum_{n=1}^\infty n q^{n-1} = {1\over (1-q)^2}, \qquad |q|&lt;1$$</p> <p>Does this not require a proof? It seems to me that, because the series is infinite, it is not clear that differentiation commutes with taking the limit. </p> <blockquote> <p>How to prove this?</p> </blockquote>
Michael Hardy
11,667
<p>It does require proof. That the derivative of a sum of finitely many terms is the sum of the derivatives is proved in first-semester calculus, but it doesn't always work for infinite series. For example, let $$ g_n(x) = \frac{\sin(nx)} {n^2}. $$ Then $\displaystyle\sum_{n=1}^\infty g_n(x) \vphantom{\dfrac \sum {\displaystyle \int}}$ converges for every value of $x$ (since the absolute value of each term is $\le 1/n^2$ and $\sum_n 1/n^2 &lt; \infty$). And $\displaystyle \sum_{n=1}^\infty g_n'(x) = \sum_{n=1}^\infty \frac{\cos(nx)}{n}$, and that diverges when $x=0$.</p> <p>However, every <b>power series</b> converges <b>uniformly</b> on sets bounded away from the boundaries of the interval of convergence. I.e. $$ f(x) = \sum_{n=0}^\infty a_n (x - c)^n $$ converges for $x$ in some interval $(c-r,c+r)$. (In some cases it also converges at one or both of the endpoints. The number $r$ is the radius of convergence. A set that is "bounded away from the endpoints" is any subset of an interval of the form $(c-r+\varepsilon,c+r-\varepsilon)$. The series will fail to converge uniformly on $(c-r,c+r)$ if there is a vertical asymptote at either endpoint, but it converges <b>pointwise</b> on that set and uniformly on every set bounded away from the endpoints (i.e. no matter how small $\varepsilon$ above is). At this point I'll cite a book, but maybe I'll be back later. Walter Rudin's <em>Principles of Mathematical Analysis</em>, third edition, page 173. You'll find a proof of term-by-term differentiability of power series in $(c-r,c+r)$, using uniform convergence.</p>
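For intuition (my addition), the partial sums of the correctly indexed series $\sum_{n\ge1} n q^{n-1}$ can be checked against $1/(1-q)^2$ inside the interval of convergence:

```python
def partial_sum(q, terms):
    """Partial sum of the differentiated geometric series, sum of n*q^(n-1)."""
    return sum(n * q ** (n - 1) for n in range(1, terms + 1))
```

For $|q|$ well inside $(-1,1)$ a couple hundred terms already agree with the closed form to high precision.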
1,686,568
<p>I am learning about tensor products of modules, but there is a question which makes me very confused about it! </p> <p>If $E$ is a right $R$-module and $F$ is a left $R$-module, then suppose we have a balanced map (or bilinear map) $E\times F\to E\otimes F$. If some element $x\otimes y \in E\otimes F$ is $0$, then can we say $x$ or $y$ must be equal to $0$? I know if $x = 0$ or $y = 0$, then $x\otimes y$ is $0$. Are there other cases where $x\otimes y$ is $0$? Can someone give me a specific example? </p> <p>Really thank you!</p>
rschwieb
29,335
<p>It is not possible to find a nice characterization of when simple tensors are zero.</p> <p>To give an example where $a, b$ are nonzero but $a\otimes b=0$, consider $\Bbb Z/6\Bbb Z\otimes_\Bbb Z \Bbb Z$ where $2\otimes 3=2\cdot3\otimes 1=0\otimes 1=0$.</p>
446,272
<p>let $$\left(\dfrac{x}{1-x^2}+\dfrac{3x^3}{1-x^6}+\dfrac{5x^5}{1-x^{10}}+\dfrac{7x^7}{1-x^{14}}+\cdots\right)^2=\sum_{i=0}^{\infty}a_{i}x^i$$</p> <p>How find the $a_{2^n}=?$</p> <p>my idea:let $$\dfrac{nx^n}{1-x^{2n}}=nx^n(1+x^{2n}+x^{4n}+\cdots+x^{2kn}+\cdots)=n\sum_{i=0}^{\infty}x^{(2k+1)n}$$ Thank you everyone</p>
Mariano Suárez-Álvarez
274
<p>The square of the sum is $$\sum_{u\geq0}\left[\sum_{\substack{n,m,k,l\geq0\\(2n+1)(2k+1)+(2m+1)(2l+1)=u}}(2n+1)(2m+1)\right]x^u.$$</p> <p>It is easy to use this formula to compute the first coefficients, and we get (starting from $a_1$) $$0, 1, 0, 8, 0, 28, 0, 64, 0, 126, 0, 224, 0, 344, 0, 512, 0, 757, 0, 1008, 0, 1332, 0, 1792, 0, 2198, 0, 2752, 0, 3528, \dots$$ We see that the odd indexed ones are zero. We look up the even ones in the OEIS and we see that $a_{2n}$ is the sum of the cubes of the divisors $d$ of $n$ such that $n/d$ is odd.</p> <p>In particular, $a_{2^n}=2^{3(n-1)}$.</p> <p>Notice that the sum $$\sum_{\substack{n,m,k,l\geq0\\(2n+1)(2k+1)+(2m+1)(2l+1)=u}}(2n+1)(2m+1)$$ can be written, if we group the terms according to what the products $x=(2n+1)(2k+1)$ and $y=(2m+1)(2l+1)$ are, in the form $$\sum_{\substack{x+y=2u\\\text{$x$ and $y$ odd}}}\left(\sum_{a\mid x}a\right)\left(\sum_{b\mid y}b\right)=\sum_{\substack{x+y=2u\\\text{$x$ and $y$ odd}}}\sigma(x)\sigma(y),$$ where as usual $\sigma(x)$ denotes the sum of the divisors of $x$. This last sum is in fact equal to $$\sum_{x+y=2u}\sigma(x)\sigma(y)-\sum_{x+y=u}\sigma(2x)\sigma(2y).$$ The first sum is $\tfrac{1}{12}(5\sigma_3(n)-(6n-1)\sigma(n))$, as observed by Ethan (references are given in the Wikipedia article).</p>
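The divisor-cube description can be confirmed by brute force (my addition), computing $a_u=\sum_{x+y=u,\ x,y\text{ odd}}\sigma(x)\sigma(y)$ directly:

```python
def sigma(n):
    """Sum of the divisors of n (trial division; fine for small n)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def coeff(u):
    """Coefficient a_u: sum over odd x + odd y = u of sigma(x)*sigma(y)."""
    return sum(sigma(x) * sigma(u - x)
               for x in range(1, u, 2) if (u - x) % 2 == 1)

def divisor_cubes(n):
    """Sum of d^3 over divisors d of n with n/d odd."""
    return sum(d ** 3 for d in range(1, n + 1)
               if n % d == 0 and (n // d) % 2 == 1)
```

This reproduces the coefficient list above and the claims $a_{2n}=\sum_{d\mid n,\ n/d\text{ odd}}d^3$ and $a_{2^n}=8^{n-1}$.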
299,471
<p>I know this is just $S^2$. To see it, I use the CW structure of $S^1$ x $S^1$ , consisting of one 0-cell, two 1-cells and a 2-cell. Then since the reduced suspension is the cartesian product identifying the wedge (or smash product) , what just remains is the 0.cell and the 2-cell...This is very theoretical and I don't visualize what is going on. I imagine a torus with two circular threads being pulled to a point and then? If anyone has thought of this and is convinced by some visualization, I'll thank his idea....</p>
S. carmeli
115,052
<p>Here is a way to see this. After you collapse one circle, you end up with a sphere with two marked points. Then the remaining circle (1-cell) is the image of the path connecting these two points. If you first glue the two points together and then collapse the whole segment between them, it is the same as collapsing the segment to a point from the very beginning. But collapsing an arc on a sphere does nothing, so you get a sphere. Alternatively, instead of gluing the two points, connect them with an external path. Then make the two points collide, and you get a bouquet (wedge) of a 2-sphere and a circle. Then you have to collapse the circle to a point, since this is the second 1-cell, and you end up with a 2-sphere. </p>
330,508
<p>Going from $2\cos(2\theta+\pi/3)$ to $\cos2\theta−\sqrt{3}\sin2\theta$ is simple enough, however I'm stuck on going from $2\cos(2\theta+\pi/3)$ to $−2\sin(2\theta−\pi /6)$. How do i do this?</p>
Brian M. Scott
12,042
<p>Use the identities for the sine and cosine of the sum or difference of two angles:</p> <p>$$\begin{align*} \sin\left(2\theta-\frac{\pi}6\right)&amp;=\sin2\theta\cos\frac{\pi}6-\cos2\theta\sin\frac{\pi}6\\ &amp;=\frac{\sqrt3}2\sin2\theta-\frac12\cos2\theta\;, \end{align*}$$</p> <p>and</p> <p>$$\begin{align*} \cos\left(2\theta+\frac{\pi}3\right)&amp;=\cos2\theta\cos\frac{\pi}3-\sin2\theta\sin\frac{\pi}3\\&amp;=\frac12\cos2\theta-\frac{\sqrt3}2\sin2\theta\;. \end{align*}$$</p>
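A numerical check (my addition) that the two expansions above agree, and hence that all three forms of the expression coincide for every $\theta$:

```python
import math

def three_forms(t):
    """Evaluate the three equivalent expressions at theta = t."""
    f1 = 2 * math.cos(2 * t + math.pi / 3)
    f2 = math.cos(2 * t) - math.sqrt(3) * math.sin(2 * t)
    f3 = -2 * math.sin(2 * t - math.pi / 6)
    return f1, f2, f3
```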
330,508
<p>Going from $2\cos(2\theta+\pi/3)$ to $\cos2\theta−\sqrt{3}\sin2\theta$ is simple enough, however I'm stuck on going from $2\cos(2\theta+\pi/3)$ to $−2\sin(2\theta−\pi /6)$. How do i do this?</p>
Kang Oedin
848,913
<p>First, let's recap:<br /> <span class="math-container">$\sin{\pi\over3}=\frac{\sqrt{3}}{2}$</span> <br /><span class="math-container">$\cos{\pi\over3}=\frac12$</span> <br /><span class="math-container">$\sin{\pi\over6}=\frac12$</span> <br /><span class="math-container">$\cos{\pi\over6}=\frac{\sqrt{3}}{2}$</span> <br /> <br /><strong>Let's rock 'n roll!</strong> <br /><strong>Solution 1:</strong><br />Going from <span class="math-container">$\cos2\theta−\sqrt{3}\sin2\theta$</span> to <span class="math-container">$2\cos\left(2\theta+\frac\pi3\right)$</span>. <span class="math-container">$$ \require{cancel} \begin{align} \cos2\theta−\sqrt{3}\sin2\theta&amp;=\cos^2\theta-\sin^2\theta-\sqrt{3}\cdot2\sin\theta\cos\theta\\ &amp;=2\cdot\frac12\left(\cos^2\theta-\sin^2\theta-\sqrt{3}\cdot2\sin\theta\cos\theta\right)\\ &amp;=2\left(\frac12\left(\cos^2\theta-\sin^2\theta\right)-\frac12\sqrt{3}\cdot2\sin\theta\cos\theta\right)\\ &amp;=2\left(\left(\cos^2\theta-\sin^2\theta\right)\cos\frac{\pi}{3}-2\sin\theta\cos\theta\sin\frac\pi3\right)\\ &amp;=2\left(\cos2\theta\cos\frac\pi3-\sin2\theta\sin\frac\pi3\right)\\ &amp;=2\cos\left(2\theta+\frac\pi3\right) \end{align} $$</span> <strong>Solution 2:</strong><br />Going from <span class="math-container">$\cos2\theta−\sqrt{3}\sin2\theta$</span> to <span class="math-container">$-2\sin\left(2\theta-\frac\pi6\right)$</span>. 
<span class="math-container">$$ \begin{align} \cos2\theta−\sqrt{3}\sin2\theta&amp;=\cos^2\theta-\sin^2\theta-\sqrt{3}\cdot2\sin\theta\cos\theta\\ &amp;=-2\cdot\left(-\frac12\right)\left(\cos^2\theta-\sin^2\theta-\sqrt{3}\cdot2\sin\theta\cos\theta\right)\\ &amp;=-2\left(\left(-\frac12\right)\left(\cos^2\theta-\sin^2\theta\right)-\left(-\frac12\right)\left(\sqrt{3}\cdot2\sin\theta\cos\theta\right)\right)\\ &amp;=-2\left(\left(-\frac12\right)\left(\cos^2\theta-\sin^2\theta\right)-\left(-\frac12\sqrt{3}\right)\left(2\sin\theta\cos\theta\right)\right)\\ &amp;=-2\left(-\sin\frac\pi6\left(\cos^2\theta-\sin^2\theta\right)-\left(-\cos\frac\pi6\right)\left(2\sin\theta\cos\theta\right)\right)\\ &amp;=-2\left(-\sin\frac\pi6\left(\cos^2\theta-\sin^2\theta\right)+\cos\frac\pi6\left(2\sin\theta\cos\theta\right)\right)\\ &amp;=-2\left(\cos\frac\pi6\left(2\sin\theta\cos\theta\right)-\sin\frac\pi6\left(\cos^2\theta-\sin^2\theta\right)\right)\\ &amp;=-2\left(\cos\frac\pi6\sin2\theta-\sin\frac\pi6\cos2\theta\right)\\ &amp;=-2\left(\sin2\theta\cos\frac\pi6-\cos2\theta\sin\frac\pi6\right)\\ &amp;=-2\sin\left(2\theta-\frac\pi6\right) \end{align} $$</span> I hope this helps.</p>
2,941,854
<p>I want to determine all the <span class="math-container">$x$</span> vectors that belong to <span class="math-container">$\mathbb R^3$</span> which have a projection on the <span class="math-container">$xy$</span> plane of <span class="math-container">$w=(1,1,0)$</span> and so that <span class="math-container">$||x||=3$</span>.</p> <p>I know the formula to find a projection of two vectors:</p> <p><span class="math-container">$$p_v(x)=\frac{\langle x, v\rangle}{\langle v, v\rangle}\cdot v$$</span></p> <p>So I have the projection so I should be able to fill that in:</p> <p><span class="math-container">$$(1, 1, 0)=\frac{\langle x, v\rangle}{\langle v, v\rangle}\cdot v$$</span></p> <p>Now I consider a generic vector <span class="math-container">$x = (x_1, x_2, x_3)$</span> and I calculate the dot products, though I don't exactly understand what <span class="math-container">$v$</span> is. I know it's a vector that it should be parallel to the projection of the vectors <span class="math-container">$x$</span>, but not necessarily of the same length.</p> <p>Any hints on how to proceed from here or if I'm doing the right thing? Should I use any formulas?</p>
trancelocation
467,003
<p>I assume you mean vectors <span class="math-container">$v$</span> which have the same projection <span class="math-container">$w = (1,1,0)$</span> onto the <span class="math-container">$xy$</span>-plane:</p> <p><span class="math-container">$$P_{xy}v = w = (1,1,0)$$</span></p> <p>So, you look for all <span class="math-container">$$w_t = (1,1,t) \mbox{ with } ||w_t|| = 3 \Leftrightarrow 1+1+t^2 = 9$$</span></p> <p>Can you take it from there?</p>
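To finish the computation numerically (my addition, in case you want to check your answer): $t^2 = 7$, so the two vectors are $(1,1,\pm\sqrt7)$:

```python
import math

def solutions():
    """The two vectors (1, 1, t) of norm 3: solve 1 + 1 + t^2 = 9."""
    t = math.sqrt(9 - 1 - 1)
    return [(1.0, 1.0, t), (1.0, 1.0, -t)]

def norm(v):
    return math.sqrt(sum(c * c for c in v))
```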
2,337,524
<p>We have $p(x)$ a degree $m$ polynomial and $q(x)$ a degree $k$ polynomial. We also know that $p(x) = q(x)$ has at least $n+1$ solutions. And, $n\geq m\land n\geq k$.</p> <p>Now, I tried graphing a little to see if I see a pattern </p> <p>I tried making $$y= x^{2} $$ and $$y = -x^{2}+5 $$ There were two points of intersection so $n$ was $1$ in some sense here while the degrees were $2$. </p> <p>Then the only way I thought that the condition can hold for $n+1$ solutions and $n \geq m $ and $ n \geq k$ is if $p(x)$ and $q(x)$ are the same polynomial. </p> <p>That was the answer to this problem. But can someone give an idea for this please.</p>
Robert Israel
8,508
<p>$P(x) - Q(x)$ is a polynomial of degree at most $\max(m,k)$, and therefore has at most $\max(m,k)$ roots unless it is the $0$ polynomial, i.e. unless $P = Q$. Since $P(x)=Q(x)$ has at least $n+1 &gt; \max(m,k)$ solutions, $P-Q$ has more roots than its degree allows, so it must be the $0$ polynomial.</p>
2,736,323
<blockquote> <p>Given that $Y \sim U(2, 5)$ and $Z = 3Y - 4$, what is the distribution for $Z$?</p> </blockquote> <p>I've worked out that for $Y \sim N(2, 5)$, $Z \sim N(2, 45)$ since </p> <p>$$\mu=3\cdot2 - 4 = 2$$</p> <p>and </p> <p>$$\sigma^2=3^2 \cdot 5 = 45$$</p> <p>I'm wondering how the working differs when we have a uniform distribution, rather than a normal distribution? </p> <p><em>Sorry if a similar question has been asked before - I could not find anything on my search!</em></p> <p>Thanks!</p>
Maffred
279,068
<p>Call $U$ the CDF of $U(2,5)$, it is $U(t) = \frac{1}{3}t - \frac{2}{3}$.</p> <p>$A(t) = \mathbb P(Z \leq t) = \mathbb P (3Y-4 \leq t) = \mathbb P (Y\leq \frac{t+4}{3}) = U(\frac{t+4}{3}) = \frac{1}{3}[\frac{t+4}{3}] -\frac{2}{3}$ for $2 \leq \frac{t+4}{3} \leq 5$, $0$ elsewhere, i.e. for $2 \leq t \leq 11$. </p> <p>Thus $a(t) = A'(t) = \frac{1}{9}$ in $2 \leq t \leq 11$, $0$ elsewhere is a uniform distribution.</p>
2,736,323
<blockquote> <p>Given that $Y \sim U(2, 5)$ and $Z = 3Y - 4$, what is the distribution for $Z$?</p> </blockquote> <p>I've worked out that for $Y \sim N(2, 5)$, $Z \sim N(2, 45)$ since </p> <p>$$\mu=3\cdot2 - 4 = 2$$</p> <p>and </p> <p>$$\sigma^2=3^2 \cdot 5 = 45$$</p> <p>I'm wondering how the working differs when we have a uniform distribution, rather than a normal distribution? </p> <p><em>Sorry if a similar question has been asked before - I could not find anything on my search!</em></p> <p>Thanks!</p>
Remy
325,426
<p>If $Y\sim \mathsf {Unif}(2,5)$ and $Z=3Y-4$ then $Z\sim \mathsf {Unif}(2,11)$.</p> <p>The transformation stretches the distribution of $Y$ by a factor of $3$ and then shifts it $4$ units to the left. Recalling that </p> <p>$$F_Y(y) = \mathsf{P}({Y\leq y})=\frac{y-2}{3}$$ for $2&lt; y &lt;5$ we get that for $2&lt; z &lt;11$, </p> <p>$$\begin{align*} F_Z(z) &amp;=\mathsf P({Z\leq z})\\\\ &amp;=\mathsf P({3Y-4\leq z})\\\\ &amp;=\mathsf P({3Y\leq z+4})\\\\ &amp;=\mathsf P\left(Y\leq \frac{z+4}{3}\right)\\\\ &amp;=F_Y\left(\frac{z+4}{3}\right)\\\\ &amp;=\frac{z-2}{9} \end{align*}$$</p> <p>Because the density function of a random variable is the derivative of its CDF, we see that for $2&lt; z &lt;11$, the density function of $Z$ is $$f_Z(z) = \frac{1}{9}$$</p>
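A simulation check (my addition) that the empirical CDF of $Z=3Y-4$ really matches $(z-2)/9$ on $(2,11)$:

```python
import bisect
import random

def empirical_cdf_gap(trials=200_000, seed=1):
    """Largest gap between the empirical CDF of Z = 3Y - 4 and (z - 2)/9."""
    rng = random.Random(seed)
    zs = sorted(3 * rng.uniform(2, 5) - 4 for _ in range(trials))
    return max(abs(bisect.bisect_right(zs, z) / trials - (z - 2) / 9)
               for z in (3, 5, 7, 9, 10))
```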
462,569
<blockquote> <p>Consider the polynomial ring <span class="math-container">$F\left[x\right]$</span> over a field <span class="math-container">$F$</span>. Let <span class="math-container">$d$</span> and <span class="math-container">$n$</span> be two nonnegative integers.</p> <p>Prove:<span class="math-container">$x^d-1 \mid x^n-1$</span> iff <span class="math-container">$d \mid n$</span>.</p> </blockquote> <p>my tries:</p> <hr /> <p>necessity, Let <span class="math-container">$n=d t+r$</span>, <span class="math-container">$0\le r&lt;d$</span></p> <p>since <span class="math-container">$x^d-1 \mid x^n-1$</span>, so,</p> <p><span class="math-container">$x^n-1=\left(x^d-1\right)\left(x^{\text{dt}+r-d}+\dots+1\right)$</span>...</p> <p>so,,, to prove <span class="math-container">$r=0$</span>?</p> <p>I don't know, and I can't go on. How to do it.</p>
lhf
589
<p>Here is another take.</p> <ul> <li><p><span class="math-container">$F=\mathbb Q$</span>: Write <span class="math-container">$x^n-1 = (x^d-1)q(x)$</span> with <span class="math-container">$q(x) \in \mathbb Z[x]$</span>, because <span class="math-container">$x^d-1$</span> is monic. Now take derivatives: <span class="math-container">$nx^{n-1} = dx^{d-1}q(x) + (x^d-1)q'(x)$</span>. Finally, take <span class="math-container">$x=1$</span> to get <span class="math-container">$n=dq(1)$</span>. Note that <span class="math-container">$q(1) \in \mathbb Z$</span>.</p></li> <li><p><span class="math-container">$F$</span> arbitrary: The <span class="math-container">$q(x) \in \mathbb Z[x]$</span> as above also works in <span class="math-container">$F[x]$</span>. This is a subtle point.</p></li> </ul>
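The equivalence is easy to confirm for small degrees by actual polynomial division (my addition; coefficients over $\mathbb Z$, little-endian, with a monic divisor so the arithmetic stays exact):

```python
def poly_rem(num, den):
    """Remainder of num divided by den (little-endian coefficients, den monic)."""
    num = list(num)
    d = len(den) - 1
    while len(num) - 1 >= d:
        lead = num[-1]
        shift = len(num) - 1 - d
        for i, c in enumerate(den):
            num[shift + i] -= lead * c
        while num and num[-1] == 0:   # trim the cancelled leading term(s)
            num.pop()
    return num                        # empty list means remainder 0

def x_pow_minus_1(k):
    """Coefficient list of x^k - 1."""
    return [-1] + [0] * (k - 1) + [1]

def divides(d, n):
    """Does x^d - 1 divide x^n - 1?"""
    return not poly_rem(x_pow_minus_1(n), x_pow_minus_1(d))
```

Brute force over small `d, n` confirms `divides(d, n)` holds exactly when `n % d == 0`.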
186,726
<p>Just a soft-question that has been bugging me for a long time:</p> <p>How does one deal with mental fatigue when studying math?</p> <p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p> <p>I would wish to continue studying, but these circumstances force me to take a break. It is truly a case of "the spirit is willing but the brain is weak"?</p> <p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p> <p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
Sasha
11,069
<p>This question partially belongs to the sister SE site: <a href="http://productivity.stackexchange.com">productivity.SE</a></p> <p>To fight the mental fatigue the following things will help:</p> <ul> <li>doing physical exercises, as they improve oxygen supply to the brain (e.g. walking, working out, etc)</li> <li>getting enough sleep</li> <li>keeping a healthy diet</li> </ul> <p>Essentially of all the above is to condition the brain to be in the best working order.</p>
186,726
<p>Just a soft-question that has been bugging me for a long time:</p> <p>How does one deal with mental fatigue when studying math?</p> <p>I am interested in Mathematics, but when studying say Galois Theory and Analysis intensely after around one and a half hours, my brain starts to get foggy and my mental condition drops to suboptimal levels.</p> <p>I would wish to continue studying, but these circumstances force me to take a break. It is truly a case of "the spirit is willing but the brain is weak"?</p> <p>How do people maintain concentration over longer periods of time? Is this ability trainable or genetic? (Other than taking illegal drugs like Erdős.)</p> <p>I know this is a really soft question, but I guess asking mathematicians is the best choice since the subject of Mathematics requires the most mental concentration compared to other subjects.</p>
akkkk
28,826
<p>I was thinking about a specific mathematician, and after Thomas mentioned him I thought I'd make a comment:</p> <p>Coffee.</p> <p>As the very productive mathematician Paul Erdős did not say (it was actually Alfréd Rényi, according to wikipedia):</p> <blockquote> <p>A mathematician is a machine for turning coffee into mathematical theories.</p> </blockquote> <p>Take a break. It does not really matter how long it is, but it takes at most 5 minutes for an experienced drinker to finish a cheap coffee, mere minutes should be sufficient. I take /very/ regular coffee breaks (every hour?) and can stay active throughout most of the day.</p>
1,928,259
<p>I have the following problem: </p> <blockquote> <p>The function $f(x)$ is odd, its period is $5$ and $f(-8) = 1$. What is $f(18)$?</p> </blockquote> <p>So, $f(-8) = f(-8 + 5) = 1$. I also know that you could replace $(-8)$ with $(-3)$ and still get the same result of $1$.</p> <p>I'm just learning about periods. My grasp on it still isn't very impressive. I understand that even functions are symmetric about the y axis and odd functions are symmetric about the origin, but my brain just isn't making the connection on this one.</p> <p>Please help!</p> <p>-Jon</p>
sobol
369,285
<p>This notation is pretty common in thermodynamics. It just means that the derivative of $f$ is taken with respect to $y$ keeping $x$ constant, i.e., it is just a normal partial derivative of $f$ with respect to $y$. The resulting entity is both a function of $x$ and $y$.</p> <p>$$f=f(x,y) \\ x=x(y,...)$$</p> <p>$(\frac{\partial f}{\partial y})_x$ is the derivative of $f$ with respect to $y$, where the subscript $x$ emphasises the fact that even though $x$ is a function of $y$ (or the other way around), it behaves like a constant while we take the derivative of $f$. An example is the following from <a href="https://amzn.com/0471862568" rel="nofollow">Callen's <em>Thermodynamics and an Introduction to Thermostatistics</em></a>:</p> <blockquote> <p>$S=S(U,V,N)$, where $S$ is entropy, $U$ is internal energy, $V$ is volume and $N$ is some other parameter. Here $U$ itself is dependent on $V$ and $N$, but the derivative of $S$ with respect to $U$ is taken considering all the $V$'s and $N$'s appearing in $S$ as constants. This gives the inverse of the temperature ($T$) as a function of $U$, $V$, and $N$. $$\frac{1}{T}=\left(\frac{\partial S}{\partial U}\right)_{V,N}$$</p> </blockquote> <p>In this convention $\frac{\partial}{\partial x}(\frac{\partial f}{\partial y})_x\neq 0$ in general.</p>
2,061,547
<p>I am solving for the zeroes of the function:</p> <blockquote> <p>$$\frac{\cos(x)(3\cos^2(x)-1)}{(1+\cos^2(x))^2}$$</p> </blockquote> <p>The zeroes of the function I found were done by setting $\cos(x)=0$, and $3\cos^2(x)-1=0$</p> <p>For the $3\cos^2(x)-1=0$ I solved it and got $x=\cos^{-1}(\frac{\sqrt3}{3})$ but my calculator only gives one solution $x=.955$ but when I graphed it I got another solution at $x=2.186$. How would I get the one solution I didn't get with the calculator?</p>
Ben Grossmann
81,360
<p><strong>Hint:</strong> Around any non-zero value $L$, there exists a neighborhood $(L-\epsilon, L + \epsilon)$ small enough so that it contains at most one element of $S$.</p>
176,691
<p>Let $A'$ denotes the complement of A with respect to $ \mathbb{R}$ and $A,B,T$ are subsets of $\mathbb{R}$. I am trying to prove $A' \cap (A' \cup B') \cap T= A' \cap T$, but I got some problems along the way.</p> <p>$A' \cap (A' \cup B') \cap T= (A' \cap A') \cup (A' \cap B') \cap T= A' \cup (A \cup B) \cap T =(A' \cup A)\cup B \cap T= \mathbb{R} \cup B \cap T = B \cap T$ Something wrong?</p>
Jānis Lazovskis
30,265
<p>Well, given arbitrary sets $X,Y$ we always have $X\cap(X\cup Y) = X$ since the intersection of a set with a union of that same set and anything else is just the first set itself (I hope that's not too wordy). Try drawing all possibilities with sets to see this. So $A'\cap (A' \cup B') = A'$ and that gives you your answer.</p> <p>Your error is that the second equality does not hold: for arbitrary sets $X,Y$ in general $(X \cap X) \cup (X\cap Y) \neq X \cup (X'\cup Y')$. Again, drawing all possibilities really helps to make this clear. Here I'm generalizing slightly from what you have, so for you $X$ is $A'$ and $Y$ is $B'$. Recall $A'' = A$.</p>
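The absorption identity $A'\cap(A'\cup B')\cap T=A'\cap T$ can be sanity-checked on random subsets of a finite universe (my addition):

```python
import random

def check_absorption(trials=200, seed=0):
    """Check A' & (A' | B') & T == A' & T on random subsets of {0,...,9}."""
    rng = random.Random(seed)
    U = set(range(10))
    for _ in range(trials):
        A = {x for x in U if rng.random() < 0.5}
        B = {x for x in U if rng.random() < 0.5}
        T = {x for x in U if rng.random() < 0.5}
        Ac, Bc = U - A, U - B   # complements within the universe
        assert Ac & (Ac | Bc) & T == Ac & T
    return True
```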
4,646,773
<p>I started with an integral <span class="math-container">$ \int_{0}^{2\pi} \sqrt{2[\sin^2(t) + 16\cos^2(t) - 4\sin(t)\cos(t)]} \,dt $</span></p> <p>And I simplified it to <span class="math-container">$ \int_{0}^{2\pi} \sqrt{17 + 15\cos(2t) - 4\sin(2t)} \, dt$</span></p> <p>My question: I know this can be simplified with some sort of substitution that cancels the <span class="math-container">$\sin$</span> and <span class="math-container">$\cos$</span> with a <span class="math-container">$u$</span>-sub, but I do not know how. I saw it online, with no explanation (see the first answer: <a href="https://math.stackexchange.com/questions/200010/find-length-of-curve-of-intersection">find length of curve of intersection</a>).</p> <p>I think this has an exact elementary solution, if you use <span class="math-container">$\tan\left(\frac x2\right)$</span> substitution and possibly Feynman's trick if necessary.</p>
Astyx
377,528
<p>The <span class="math-container">$m$</span>-th term of the <span class="math-container">$n$</span>-th line of that &quot;triangle&quot; is <span class="math-container">$u_{n,m} = \sum_{k=0}^n (m+k){n\choose k} = 2^{n-1}(2m+n)$</span> (you can check this by induction).</p> <p>Then all that is left is writing <span class="math-container">$m$</span> and <span class="math-container">$n$</span> in terms of the <span class="math-container">$k$</span>-th number of the series so that <span class="math-container">$v_k = u_{n(k), m(k)}$</span>.</p>
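The closed form $\sum_{k=0}^n (m+k)\binom{n}{k}=2^{n-1}(2m+n)$ is easy to verify exhaustively for small $n\ge1$ (my addition):

```python
from math import comb

def u(n, m):
    """Direct evaluation of the binomial sum."""
    return sum((m + k) * comb(n, k) for k in range(n + 1))

def closed_form(n, m):
    return 2 ** (n - 1) * (2 * m + n)
```

(Algebraically, $\sum_k (m+k)\binom nk = m\,2^n + n\,2^{n-1}$, which is the same thing.)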
3,106,696
<p>I am confused on the notation used when writing down the solution of x and y in quadratic equations. For example in <span class="math-container">$x^2+2x-15=0$</span>, do I write :</p> <p><span class="math-container">$x=-5$</span> AND <span class="math-container">$x=3$</span></p> <p>or is it</p> <p><span class="math-container">$x=-5$</span> OR <span class="math-container">$x=3$</span></p> <p>which is it and why? I thought that because x can only equal one of the values when you substitute it in so it would be OR, however there are sometimes 2 roots of a quadratic so is it more correct to use AND? What about for the value of <span class="math-container">$y$</span>, is it the same?</p> <p>Thanks in advance</p>
Bill Dubuque
242
<p><strong>Hint</strong> <span class="math-container">$\ (f_n,f_{n-1}) = (\overbrace{f_n-f_{n-1}}^{\Large 2n},\,f_n) = \overbrace{(2n,n(n\!+\!1)\!\color{#c00}{+\!1})}^{\Large 2,n\ \mid\ n(n+1)\ \ \ \ \ \ \ }=1\ $</span> by Euclid. </p>
3,106,696
<p>I am confused on the notation used when writing down the solution of x and y in quadratic equations. For example in <span class="math-container">$x^2+2x-15=0$</span>, do I write :</p> <p><span class="math-container">$x=-5$</span> AND <span class="math-container">$x=3$</span></p> <p>or is it</p> <p><span class="math-container">$x=-5$</span> OR <span class="math-container">$x=3$</span></p> <p>which is it and why? I thought that because x can only equal one of the values when you substitute it in so it would be OR, however there are sometimes 2 roots of a quadratic so is it more correct to use AND? What about for the value of <span class="math-container">$y$</span>, is it the same?</p> <p>Thanks in advance</p>
Stefan Lafon
582,769
<p>Suppose <span class="math-container">$d$</span> divides both <span class="math-container">$f(n)$</span> and <span class="math-container">$f(n+1)$</span>. Then it divides their difference <span class="math-container">$$f(n+1)-f(n)=2(n+1)$$</span> Because, as you said <span class="math-container">$f(n)$</span> is odd, we must have that <span class="math-container">$d$</span> divides <span class="math-container">$n+1$</span>. But note that <span class="math-container">$$f(n)=(n+1)n +1$$</span> So since <span class="math-container">$d$</span> divides both <span class="math-container">$f(n)$</span> and <span class="math-container">$n+1$</span>, it must divide <span class="math-container">$f(n)-(n+1)n=1$</span>. So <span class="math-container">$d=1$</span>.</p>
3,604,388
<p>Let <span class="math-container">$P_n$</span> be the statement that <span class="math-container">$\dfrac{d^{2n}}{dx^{2n}}(x^2-1)^n = (2n)!$</span> </p> <p>Base case: n = 0, <span class="math-container">$\dfrac{d^0}{dx^0}(x^2-1)^0 = 1 = 0!$</span></p> <p>Assume <span class="math-container">$P_m = \dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m = (2m)!$</span> is true. </p> <p>Prove <span class="math-container">$P_{m+1} = \dfrac{d^{2(m+1)}}{dx^{2(m+1)}}(x^2-1)^{m+1} = [2(m+1)]!$</span> </p> <p><span class="math-container">$\dfrac{d^{2(m+1)}}{dx^{2(m+1)}}(x^2-1)^{m+1}$</span></p> <p>= <span class="math-container">$\dfrac{d^{2m}}{dx^{2m}}\left(\dfrac{d^2}{dx^2}(x^2-1)^{m+1}\right)$</span> </p> <p>= <span class="math-container">$\dfrac{d^{2m}}{dx^{2m}}\left(2x(m)(m+1)(x^2-1)^{m-1}\right)$</span></p> <p>= <span class="math-container">$[\dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m][2x(m)(m+1)(x^2-1)^{-1}]$</span></p> <p>From the inductive hypothesis, </p> <p>= <span class="math-container">$(2m)! [2x(m)(m+1)(x^2-1)^{-1}]$</span> </p> <p>I got stuck here, and not sure if I have done correctly thus far? I did not know how to get to <span class="math-container">$[2(m+1)]!$</span>. Please advise. Thank you. </p>
farruhota
425,072
<p>Assume: <span class="math-container">$$P_m = \dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m = (2m)!$$</span></p> <p>Then: <span class="math-container">$$P_{m+1}=\dfrac{d^{2m}}{dx^{2m}}\left(\dfrac{d^2}{dx^2}(x^2-1)^{m+1}\right)= \dfrac{d^{2m}}{dx^{2m}}\left(\dfrac{d}{dx}\left[2x(m+1)(x^2-1)^m\right]\right)=\\ 2(m+1)\dfrac{d^{2m}}{dx^{2m}}\left((x^2-1)^m+2\overbrace{x^2}^{x^2-1+1}m(x^2-1)^{m-1}\right)=\\ \color{blue}{2(m+1)\dfrac{d^{2m}}{dx^{2m}}\left((x^2-1)^m+2m(x^2-1+1)(x^2-1)^{m-1}\right)}=\\ \color{blue}{2(m+1)\dfrac{d^{2m}}{dx^{2m}}\left((x^2-1)^m+2m(x^2-1)^{m}+2m(x^2-1)^{m-1}\right)}=\\ 2(m+1)\dfrac{d^{2m}}{dx^{2m}}\left((2m+1)(x^2-1)^m+2m(x^2-1)^{m-1}\right)=\\ 2(m+1)(2m+1)\dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m+\overbrace{4m(m+1)\dfrac{d^{2m}}{dx^{2m}}(x^2-1)^{m-1}}^{0}=\\ \color{blue}{2(m+1)(2m+1)\dfrac{d^{2m}}{dx^{2m}}(x^2-1)^m+\overbrace{4m(m+1)\dfrac{d^{2m}}{dx^{2m}}P_{2m-2}}^0}=\\ (2m+2)(2m+1)(2m)!=(2(m+1))!$$</span> <span class="math-container">$\color{blue}{\text{where $P_{2m-2}$ is a polynomial of degree $2m-2$, whose $2m$-th order derivative is zero.}}$</span></p>
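The identity being proved can also be checked mechanically (a Python sketch, not part of the original answer), by differentiating the coefficient list of <span class="math-container">$(x^2-1)^n$</span> a total of <span class="math-container">$2n$</span> times:

```python
from math import comb, factorial

def poly_pow(n):
    """Coefficient list (index = power of x) of (x^2 - 1)^n."""
    coeffs = [0] * (2 * n + 1)
    for k in range(n + 1):
        coeffs[2 * k] = comb(n, k) * (-1) ** (n - k)
    return coeffs

def diff(coeffs):
    # derivative of a polynomial given as a coefficient list
    return [i * c for i, c in enumerate(coeffs)][1:] or [0]

def d2n(n):
    c = poly_pow(n)
    for _ in range(2 * n):
        c = diff(c)
    return c  # should be the constant polynomial [(2n)!]

assert all(d2n(n) == [factorial(2 * n)] for n in range(1, 9))
```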
2,548,469
<p>Suppose there is a sequence of iid variates from $U(0,1)$, $X_1,X_2,\dots$ If we stop the process when $X_n&gt;X_{n+1}$, what is the expected number of generated variates?</p> <p>I am just thinking about treating it as a Bernoulli process, so that I can use geometric distribution. Is this the right approach?</p>
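Not part of the original post, but the setup is easy to explore with a quick Monte Carlo sketch; any proposed distribution can be tested against the estimate. (For reference, <span class="math-container">$P(N>n)=P(X_1<\dots<X_n)=1/n!$</span>, so <span class="math-container">$E[N]=\sum_{n\ge 0}1/n! = e$</span>.)

```python
import random

def run_length(rng):
    """Number of variates generated up to and including the first descent."""
    count = 1
    prev = rng.random()
    while True:
        count += 1
        cur = rng.random()
        if prev > cur:      # stop when X_n > X_{n+1}
            return count
        prev = cur

rng = random.Random(0)
trials = 200_000
est = sum(run_length(rng) for _ in range(trials)) / trials
# the exact expected value is e ~ 2.718
assert 2.68 < est < 2.76
```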
Dylan
135,643
<p>You can rewrite the integrand as</p> <p>$$ \frac{(x-1)(x+1)}{(x+1)^2\sqrt{x^2\left(x + \dfrac{1}{x} + 1\right)}} = \frac{x^2 - 1}{(x^3 + 2x^2 + x)\sqrt{x + \dfrac{1}{x} + 1}} \\ = \frac{1 - \dfrac{1}{x^2}}{\left( x + \dfrac{1}{x} + 2 \right)\sqrt{x + \dfrac{1}{x} + 1}} $$</p> <p>Then make the substitution $u^2 = x + \dfrac{1}{x} + 1$ to get</p> <p>$$ \int \frac{2}{u^2+1} du = 2\arctan u + C $$</p>
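A quick numerical sanity check of this antiderivative (a Python sketch, not part of the original answer; the Simpson rule and interval are illustrative choices):

```python
from math import sqrt, atan

def integrand(x):
    return (1 - 1 / x**2) / ((x + 1 / x + 2) * sqrt(x + 1 / x + 1))

def simpson(f, a, b, n=2000):  # composite Simpson rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

def antideriv(x):
    u = sqrt(x + 1 / x + 1)   # the substitution u^2 = x + 1/x + 1
    return 2 * atan(u)

a, b = 1.0, 3.0
assert abs(simpson(integrand, a, b) - (antideriv(b) - antideriv(a))) < 1e-9
```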
848,229
<p>Two teams take part at a KO-tournament with n rounds. Assuming, that the teams win all their games until they are paired together, what is the probability that they both meet in the final ?</p> <p>I figured out that the solution is </p> <p>$P_n=\prod_{j=2}^n (1-\frac{1}{2^j-1})=\frac{2^{n-1}}{2^n-1}$</p> <p>So, $\lim_{n-&gt;\infty} P_n = \frac{1}{2}$</p> <p>I tried to understand the result intuitively because I would have expected a much lower probability. </p>
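The identity <span class="math-container">$\prod_{j=2}^n (1-\frac{1}{2^j-1})=\frac{2^{n-1}}{2^n-1}$</span> and the limit <span class="math-container">$1/2$</span> can be confirmed exactly with rational arithmetic (a Python sketch, not part of the original post):

```python
from fractions import Fraction

def p_product(n):
    p = Fraction(1)
    for j in range(2, n + 1):
        p *= 1 - Fraction(1, 2**j - 1)
    return p

def p_closed(n):
    return Fraction(2 ** (n - 1), 2**n - 1)

# the product matches the closed form exactly
assert all(p_product(n) == p_closed(n) for n in range(2, 30))
# and it approaches 1/2
assert abs(float(p_closed(40)) - 0.5) < 1e-11
```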
Kushashwa Ravi Shrimali
42,058
<p>$\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \large \textbf{Method -1} $ $$ \begin{align} I &amp; = \int e^{ax} \sin (bx) \,dx \\ &amp; = \sin (bx) \left( \cfrac{1}{a} e^{ax} \right) - \int b\cos (bx) \left(\cfrac{e^{ax}}{a} \right) dx \\ &amp; = \sin (bx) \left( \cfrac{1}{a} e^{ax} \right) - \cfrac{b}{a} \left[ \cos (bx) \cfrac{e^{ax}}{a} + \cfrac{b}{a} \int e^{ax} \sin (bx) \,dx \right] \\ &amp;= \sin (bx) \left( \cfrac{1}{a} e^{ax} \right) - \cfrac{b}{a} \left[ \cfrac{1}{a} \cos (bx) \, e^{ax} + \cfrac{b}{a} \, I \right] \\ &amp; = \sin (bx) \left(\cfrac{1}{a} e^{ax} \right) - \cfrac{b}{a^2} \cos (bx) \, e^{ax} - \cfrac{b^2}{a^2} \, I \end{align}$$ Now, $$\begin{align} I + \cfrac{b^2}{a^2} \, I &amp; = \sin (bx) \cfrac{1}{a} e^{ax} - \cfrac{b}{a^2} \cos(bx) \, e^{ax} \\ I \left(\cfrac{a^2+b^2}{a^2}\right) &amp; = e^{ax} \left(\cfrac{\sin (bx)}{a} - \cfrac{b}{a^2} \cos(bx) \right) \\ I \left(\cfrac{a^2+b^2}{\require{cancel}\cancel{a^2}}\right)&amp; = \cfrac{e^{ax}}{\require{cancel}\cancel{a^2}} \left[a \sin (bx) - b\cos (bx) \right] \\ I (a^2 + b^2) &amp; = e^{ax} \left[ a\sin bx - b \cos bx \right] \end{align}$$</p> <p>Thus, we have our answer: $$\begin{align} I &amp;= \cfrac{e^{ax}}{a^2 + b^2} \left[a\sin bx - b \cos bx \right] + \color{grey}{\rm C} \end{align}$$</p>
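A finite-difference check of the final formula $I = \frac{e^{ax}}{a^2+b^2}(a\sin bx - b\cos bx) + C$ (a Python sketch, not part of the original answer; the values of $a$, $b$ are arbitrary):

```python
from math import exp, sin, cos

def F(x, a, b):
    # the claimed antiderivative
    return exp(a * x) / (a**2 + b**2) * (a * sin(b * x) - b * cos(b * x))

def f(x, a, b):
    # the integrand e^{ax} sin(bx)
    return exp(a * x) * sin(b * x)

a, b, h = 0.7, 1.3, 1e-6
for x in [0.0, 0.5, 1.0, 2.0]:
    num = (F(x + h, a, b) - F(x - h, a, b)) / (2 * h)  # central difference
    assert abs(num - f(x, a, b)) < 1e-6
```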
848,229
<p>Two teams take part at a KO-tournament with n rounds. Assuming, that the teams win all their games until they are paired together, what is the probability that they both meet in the final ?</p> <p>I figured out that the solution is </p> <p>$P_n=\prod_{j=2}^n (1-\frac{1}{2^j-1})=\frac{2^{n-1}}{2^n-1}$</p> <p>So, $\lim_{n-&gt;\infty} P_n = \frac{1}{2}$</p> <p>I tried to understand the result intuitively because I would have expected a much lower probability. </p>
Kushashwa Ravi Shrimali
42,058
<p>$\qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \large{\textbf{Method - 2}}$</p> <p>Let : $I = \int e^{ax} \sin (bx) \, dx$ And say : $$y_1 = e^{ax} \sin (bx) \\ y_2 = e^{ax} \cos (bx) $$</p> <p>And : $$y'_1 = ae^{ax}\sin (bx) + e^{ax}b\cos (bx) \\ y'_2 = -e^{ax} b\sin (bx) + ae^{ax} \cos (bx) $$ Or we can also write $y'_1$ and $y'_2$ in the form of $y_1$ and $y_2$ as: $$y'_1 = ay_1 + by_2 \\ y'_2 = -by_1 + ay_2 $$ So differentiation sends a combination $c_1 y_1 + c_2 y_2$, with coefficient vector $\bar c = (c_1, c_2)^T$, to the combination with coefficient vector $D \bar c$, where : $$D = \left[ \begin{matrix} a &amp; -b \\ b &amp; a \end{matrix} \right]$$ </p> <p>Antidifferentiation is therefore the inverse matrix, applied to the coefficient vector $(1,0)^T$ of $y_1 = e^{ax}\sin(bx)$ : $$\begin{align} D^{-1} &amp;= \cfrac{1}{a^2 + b^2} \left[ \begin{matrix} a &amp; b \\ -b &amp; a \end{matrix} \right] \\ D^{-1} \left( \begin{matrix} 1 \\ 0 \end{matrix} \right) &amp;= \cfrac{1}{a^2 + b^2} \left[ \begin{matrix} a \\ -b \end{matrix} \right] \\ I &amp;= \cfrac{e^{ax}}{a^2+b^2} (a \sin bx - b \cos bx ) + C \end{align}$$</p>
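The matrix step can be verified numerically (a Python sketch, not part of the original answer; here $D$ is read as the matrix acting on coefficient vectors of $c_1 y_1 + c_2 y_2$, and $a$, $b$ are arbitrary test values):

```python
from math import exp, sin, cos

a, b = 2.0, 3.0
det = a * a + b * b

D = [[a, -b], [b, a]]                          # derivative on coefficient vectors
Dinv = [[a / det, b / det], [-b / det, a / det]]

# check D * Dinv = identity
for i in range(2):
    for j in range(2):
        entry = sum(D[i][k] * Dinv[k][j] for k in range(2))
        assert abs(entry - (1.0 if i == j else 0.0)) < 1e-12

# antiderivative coefficients of y1 = e^{ax} sin(bx): Dinv applied to (1, 0)
c1, c2 = Dinv[0][0], Dinv[1][0]
assert abs(c1 - a / det) < 1e-15 and abs(c2 + b / det) < 1e-15

# numeric check: d/dx (c1*y1 + c2*y2) == y1
def y1(x): return exp(a * x) * sin(b * x)
def y2(x): return exp(a * x) * cos(b * x)
def I(x): return c1 * y1(x) + c2 * y2(x)

h, x = 1e-6, 0.8
assert abs((I(x + h) - I(x - h)) / (2 * h) - y1(x)) < 1e-5
```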
848,229
<p>Two teams take part at a KO-tournament with n rounds. Assuming, that the teams win all their games until they are paired together, what is the probability that they both meet in the final ?</p> <p>I figured out that the solution is </p> <p>$P_n=\prod_{j=2}^n (1-\frac{1}{2^j-1})=\frac{2^{n-1}}{2^n-1}$</p> <p>So, $\lim_{n-&gt;\infty} P_n = \frac{1}{2}$</p> <p>I tried to understand the result intuitively because I would have expected a much lower probability. </p>
user111187
111,187
<p><strong>Method 3</strong></p> <p>We have $$\int e^{ax} e^{ibx} dx = \frac{e^{ax} e^{ibx}}{a+ib}.$$ Taking the imaginary part yields $$ \int e^{ax} \sin{bx} dx = \frac{e^{ax}}{a^2+b^2}(a \sin bx - b \cos bx).$$ By taking the real part we also get for free $$ \int e^{ax} \cos{bx} dx = \frac{e^{ax}}{a^2+b^2}(a \cos bx + b \sin bx).$$</p>
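The complex-exponential shortcut is easy to test with `cmath` (a Python sketch, not part of the original answer; $a$, $b$ are arbitrary test values):

```python
import cmath
from math import exp, sin, cos

a, b = 1.2, 2.5
z = complex(a, b)

def G(x):
    # complex antiderivative e^{(a+ib)x} / (a+ib)
    return cmath.exp(z * x) / z

def F(x):
    # formula obtained by taking the imaginary part
    return exp(a * x) * (a * sin(b * x) - b * cos(b * x)) / (a * a + b * b)

for x in [0.0, 0.3, 1.0, 2.0]:
    assert abs(G(x).imag - F(x)) < 1e-12
```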
77,311
<p>I am a first-time user pf <em>Mathematica</em> (V10). I know it's easy to install palettes, but uninstalling them drives me crazy. I want to delete one. Who can help me to do that? </p>
Mike Honeychurch
77
<p>Here is a bare bones tool to remove a palette and place it in a new directory. You can modify it to delete the file entirely if you wish. You can modify the sources. There is an internal FE command to update the palette menu but I do not have that. You'll have to restart Mathematica.</p> <pre><code>DynamicModule[{new,
  source1 = FileNameJoin[{$BaseDirectory, "SystemFiles", "FrontEnd", "Palettes"}],
  source2 = FileNameJoin[{$UserBaseDirectory, "SystemFiles", "FrontEnd", "Palettes"}],
  source3 = FileNameJoin[{$UserBaseDirectory, "Applications", "*", "FrontEnd", "Palettes"}],
  menu, palette},
 new = FileNameJoin[{$UserDocumentsDirectory, "Uninstalled Palettes"}];
 If[! DirectoryQ[new],
  CreateDirectory[new]
  ];
 menu = # -&gt; Composition[Last, FileNameSplit][#] &amp; /@
   FileNames["*nb", {source1, source2, source3}];
 Dynamic[
  If[menu =!= {},
   Row[{
     PopupMenu[Dynamic[palette], menu],
     Spacer[10],
     Button["Remove Palette",
      CopyFile[palette,
       FileNameJoin[{$UserDocumentsDirectory, "Uninstalled Palettes",
         Composition[Last, FileNameSplit][palette]}]];
      DeleteFile[palette]
      ]
     }],
   "No Palettes Found"
   ],
  TrackedSymbols :&gt; {menu}]
 ]
</code></pre> <p><img src="https://i.stack.imgur.com/SQSUW.png" alt="enter image description here"></p> <p>...and you could of course make this into a palette for easy access.</p>
211,175
<p>In Gradshteyn and Ryzhik, (specifically starting with the section 3.13) there are several results involving integrals of polynomials inside square root. These are given in terms of combinations of elliptic integrals. See for instance: <a href="https://i.stack.imgur.com/6Cqyb.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/6Cqyb.jpg" alt="Gradshteyn and Ryzhik excerpt"></a></p> <p>where <span class="math-container">$F[\alpha, p]$</span> is the elliptic integral of first kind. I tried to reproduce the first result above in Mathematica (version 12) but failed. I would appreciate if anyone could point out what I am doing wrong. My first attempt is</p> <pre><code>Integrate[1/Sqrt[(a - x) (b - x) (c - x)], {x, -Infinity, u}, Assumptions -&gt; { a &gt; b &gt; c &gt;= u}] </code></pre> <p>which returned no result:</p> <p><a href="https://i.stack.imgur.com/Wdama.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Wdama.jpg" alt="unevaluated"></a></p> <p>Then I tried without integration limits, and take the limits after</p> <pre><code>Integrate[1/Sqrt[(a - x) (b - x) (c - x)], x, Assumptions -&gt; { a &gt; b &gt; c}] </code></pre> <p>giving:</p> <p><a href="https://i.stack.imgur.com/0Ci4P.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/0Ci4P.jpg" alt="complicated result"></a></p> <p>taking now the upper limit and simplifying </p> <pre><code>Simplify[Limit[(2 (a - x)^(3/2) Sqrt[(b - x)/(a - x)] Sqrt[(c - x)/(a - x)] EllipticF[ArcSin[Sqrt[a - b]/Sqrt[a - x]], (a - c)/(a - b)])/(Sqrt[a - b] Sqrt[(a - x) (-b + x) (-c + x)]), x -&gt; u], Assumptions -&gt; { a &gt; b &gt; c &gt;= u}] </code></pre> <p>giving:</p> <p><a href="https://i.stack.imgur.com/uqFwh.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uqFwh.jpg" alt="another result"></a></p> <p>whereas the integral vanishes when <span class="math-container">$x\rightarrow -\infty$</span>. 
Clearly, the above result given by Mathematica differs from Gradshteyn and Ryzhik's. The two results match if the substitution <span class="math-container">$b \rightarrow c$</span>, <span class="math-container">$c \rightarrow b$</span> is made, but this would then be at odds with the condition <span class="math-container">$a &gt; b &gt; c$</span>. </p>
J. M.'s persistent exhaustion
50
<p><strong>Update</strong></p> <p>The Carlson integrals (e.g. <a href="https://reference.wolfram.com/language/ref/CarlsonRF.html" rel="nofollow noreferrer"><code>CarlsonRF[]</code></a>) are now built-in, as of version 12.3; this supersedes the functionality of my old package.</p> <hr /> <p>Since TheDoctor mentioned <a href="https://github.com/tpfto/Carlson" rel="nofollow noreferrer">my package</a>, I'll present how to use <code>CarlsonRF[]</code> in conjunction with DLMF <a href="http://dlmf.nist.gov/19.29.E4" rel="nofollow noreferrer">formula 19.29.4</a> to evaluate the three cases in Gradshteyn and Ryzhik's formula 3.131. Because <code>CarlsonRF[]</code> is <code>Orderless</code>, it is especially suitable for exploiting the inherent permutation symmetry in the given integrals.</p> <p>Load the package first:</p> <pre><code>&lt;&lt;Carlson`
</code></pre> <p>For case 1, this combines 19.29.4 with <a href="http://dlmf.nist.gov/19.29.E6" rel="nofollow noreferrer">19.29.6</a>:</p> <pre><code>With[{cc = {{a, -1}, {b, -1}, {c, -1}, {1, 0}},
  pairs = {{1, 2}, {1, 3}, {2, 3}}},
 2 Apply[CarlsonRF,
    Table[With[{g1 = cc[[id]], g2 = cc[[Complement[Range[4], id]]]},
      Apply[Times, Sqrt[-g2[[All, -1]]] Sqrt[g1.{1, u}]] +
       Apply[Times, Sqrt[-g1[[All, -1]]] Sqrt[g2.{1, u}]]],
     {id, pairs}]^2]]

2 CarlsonRF[a - u, b - u, c - u]
</code></pre> <p>An example:</p> <pre><code>With[{c = 3, b = 5, a = 9, u = 1, prec = 25},
 {NIntegrate[1/Sqrt[(a - x) (b - x) (c - x)], {x, -∞, u},
   WorkingPrecision -&gt; prec],
  N[2 CarlsonRF[c - u, b - u, a - u], prec]}]

{0.9688576532724524632309018, 0.9688576532724524632309018}
</code></pre> <p>Case 2 and case 3 use <a href="http://dlmf.nist.gov/19.29.E5" rel="nofollow noreferrer">19.29.5</a> instead:</p> <pre><code>With[{cc = {{a, -1}, {b, -1}, {c, -1}, {1, 0}},
  pairs = {{1, 2}, {1, 3}, {2, 3}}, x = c, y = u},
 2 Apply[CarlsonRF,
    Table[With[{g1 = cc[[id]], g2 = cc[[Complement[Range[4], id]]]},
      (Apply[Times, Sqrt[g1.{1, x}] Sqrt[g2.{1, y}]] +
        Apply[Times, Sqrt[g2.{1, x}] Sqrt[g1.{1, y}]])/(x - y)],
     {id, pairs}]^2]]

2 CarlsonRF[(a - c) (b - c)/(c - u), (b - c) (a - u)/(c - u),
  (a - c) (b - u)/(c - u)]

With[{cc = {{a, -1}, {b, -1}, {-c, 1}, {1, 0}},
  pairs = {{1, 2}, {1, 3}, {2, 3}}, x = u, y = c},
 2 Apply[CarlsonRF,
    Table[With[{g1 = cc[[id]], g2 = cc[[Complement[Range[4], id]]]},
      (Apply[Times, Sqrt[g1.{1, x}] Sqrt[g2.{1, y}]] +
        Apply[Times, Sqrt[g2.{1, x}] Sqrt[g1.{1, y}]])/(x - y)],
     {id, pairs}]^2]]

2 CarlsonRF[(a - c) (b - c)/(-c + u), (b - c) (a - u)/(-c + u),
  (a - c) (b - u)/(-c + u)]
</code></pre> <p>Here's an example for case 3:</p> <pre><code>With[{c = 3, b = 5, a = 9, u = 4, prec = 25},
 {NIntegrate[1/Sqrt[(a - x) (b - x) (x - c)], {x, c, u},
   WorkingPrecision -&gt; prec],
  N[2 CarlsonRF[(a - c) (b - c)/(u - c), (b - c) (a - u)/(u - c),
     (a - c) (b - u)/(u - c)], prec]}]

{0.6623825396975077898366125, 0.6623825396975077898366125}
</code></pre> <hr /> <p>(Someday, when I am much less occupied with other pressing stuff, I'll work on adding functionality for facilitating the symbolic use of the Carlson integrals. But not today.)</p>
2,650,628
<p>The equation $\log_e(x) + \log_e(1+x) =0$ can be written as:</p> <p>a) $x^2+x-e=0$</p> <p>b) $x^2+x-1=0$</p> <p>c) $x^2+x+1=0$</p> <p>d) $x^2+xe-e=0$</p> <p>I tried differentiating both sides, then it becomes $\frac{1}{x}+\frac{1}{1+x}=0$, but I dont get any of the answers.</p>
Dr. Sonnhard Graubner
175,066
<p>Since $0=\ln(1)$, you will get <span class="math-container">$$\ln(x)+\ln(1+x)=\ln(1)$$</span> or <span class="math-container">$$x(x+1)=1.$$</span> Can you finish?</p>
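A quick numerical check (a Python sketch, not part of the original answer) that the resulting equation <span class="math-container">$x(x+1)=1$</span>, i.e. <span class="math-container">$x^2+x-1=0$</span>, is consistent with the original log equation:

```python
from math import log, sqrt

# the positive root of x^2 + x - 1 = 0
x = (-1 + sqrt(5)) / 2
assert abs(x * (x + 1) - 1) < 1e-12
# it satisfies ln(x) + ln(1+x) = 0
assert abs(log(x) + log(1 + x)) < 1e-12
```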
3,079,929
<p>Find all triples of positive integers a, b, c so that <span class="math-container">$\frac {a+1}{b}$</span> , <span class="math-container">$\frac {b+1}{c}$</span>, <span class="math-container">$\frac {c+1}{a}$</span> are also integers. </p> <p>WLOG, let <span class="math-container">$a\leqq b\leqq c$</span>. </p>
Hagen von Eitzen
39,174
<p>If any two of <span class="math-container">$a,b,c$</span> are equal, then wlog. <span class="math-container">$a=b$</span>. As <span class="math-container">$\frac{b+1}{a}=1+\frac1a$</span> is an integer, we conclude <span class="math-container">$a=b=1$</span>. The remaining conditions are that <span class="math-container">$\frac{c+1}{1}$</span> and <span class="math-container">$\frac 2c$</span> are integers, which lead us to the solutions <span class="math-container">$$(1,1,1),\qquad (1,1,2) $$</span> (and cyclic permutations of the latter).</p> <p>So assume <span class="math-container">$a,b,c $</span> are pairwise different. By cyclic permutation, we may assume wlog that <span class="math-container">$a&lt;b&lt;c$</span> or that <span class="math-container">$a&gt;b&gt;c$</span>. In the first case, <span class="math-container">$0&lt;\frac{a+1}{b}\le \frac bb=1$</span> and hence <span class="math-container">$a+1=b$</span>. Likewise, <span class="math-container">$b+1=c$</span>. Then the last integer is <span class="math-container">$\frac{c+1}a=\frac{a+3}a=1+\frac 3a$</span> and we must have <span class="math-container">$a=1$</span> or <span class="math-container">$a=3$</span>, which gives us the solutions <span class="math-container">$$(1,2,3),\qquad (3,4,5) $$</span> (and cyclic permutations).</p> <p>In the case <span class="math-container">$a&gt;b&gt;c$</span>, we instead have that <span class="math-container">$0&lt;\frac{c+1}{a}\le \frac{c+1}{c+2}&lt;1$</span>, not an integer. So this case does not produce additional solutions.</p>
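A brute-force search (a Python sketch, not part of the original answer; the bound 50 is an arbitrary illustration) confirms that only these solutions and their cyclic permutations occur:

```python
from itertools import product

def ok(a, b, c):
    return (a + 1) % b == 0 and (b + 1) % c == 0 and (c + 1) % a == 0

solutions = sorted((a, b, c)
                   for a, b, c in product(range(1, 51), repeat=3)
                   if ok(a, b, c))

base = {(1, 1, 1), (1, 1, 2), (1, 2, 3), (3, 4, 5)}
expected = set()
for a, b, c in base:
    expected |= {(a, b, c), (b, c, a), (c, a, b)}
assert set(solutions) == expected
```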
941,709
<p><strong>Question:</strong> Let $X$ be any set with at least two elements. Assume that the only open subsets of $X$ are the empty set $\emptyset$ and $X$ itself.</p> <ul> <li>Which subsets of $X$ are closed?</li> <li>Which subsets of $X$ are compact?</li> </ul> <p><strong>My thoughts:</strong> $\emptyset$ and $X$ also have to be closed subsets, as their complements are both open, and by definition a set is closed if its complement is open.</p> <p>The empty set is compact as it is a finite set, and $X$ is also compact as it has a finite number of closed subsets, thus is bounded and closed. Am I in any way correct with these thoughts?</p>
Mike Earnest
177,399
<p>For a set to be <em>not</em> compact, it needs to have an open cover without any finite subcover. Are there any sets which have open covers that can't be reduced to finite ones? Think about what open covers must look like in this topology.</p>
586,112
<p>Consider the following statement: $\{\{\varnothing\}\} \subset \{\{\varnothing\},\{\varnothing\}\}$</p> <p>I think the above statement is false, as $\{\{\varnothing\}\}$ is a subset of $\{\{\varnothing\},\{\varnothing\}\}$, but to be a proper subset there must be some element in $\{\{\varnothing\},\{\varnothing\}\}$ which is not in $\{\{\varnothing\}\}$. As this is not the case here, it is false.</p> <p>Is my explanation and answer right or not?</p>
BCLC
140,308
<p>I can't believe I haven't found a single answer that says this, but:</p> <p><strong>This is exactly what the fundamental theorem of calculus is about.</strong></p> <ol> <li><p>An integral is an area function. An integral of function <span class="math-container">$f:\mathbb R \to \mathbb R$</span> (usually assumed continuous I guess) is any <span class="math-container">$F:\mathbb R \to \mathbb R$</span> s.t. <span class="math-container">$F(x) - F(a) = \int_a^x f(t)dt$</span>. <span class="math-container">$F$</span> tells you the area under <span class="math-container">$f$</span> from <span class="math-container">$a$</span> to <span class="math-container">$x$</span>.</p> </li> <li><p>An antiderivative <span class="math-container">$G:\mathbb R \to \mathbb R$</span> is any differentiable function whose derivative is <span class="math-container">$f$</span>.</p> </li> </ol> <p>The whole point of the fundamental theorem of calculus is to tell you</p> <blockquote> <p>anti-derivatives and integrals are identical.</p> </blockquote> <ol> <li><p><strong>Integrals are anti-derivatives</strong>: Such an area function <span class="math-container">$F$</span> of <span class="math-container">$f$</span> is actually not only a continuous function but a differentiable function whose derivative is <span class="math-container">$f$</span> itself, i.e. 
<span class="math-container">$F'(x)=f(x)$</span>.</p> </li> <li><p><strong>Anti-derivatives are integrals</strong>: Such an antiderivative <span class="math-container">$G$</span> of <span class="math-container">$f$</span> can actually be used to compute the area under <span class="math-container">$f$</span> like <span class="math-container">$\int_a^x f(x) dx = G(x)-G(a)$</span>.</p> </li> </ol> <p>Notes:</p> <ol> <li><p><span class="math-container">$F(x)-F(a)=G(x)-G(a)$</span> for any area functions / integrals / anti-derivatives <span class="math-container">$F,G$</span> and for any <span class="math-container">$x,a \in \mathbb R$</span>.</p> </li> <li><p>This is because the rule is any anti-derivatives <span class="math-container">$F$</span> &amp; <span class="math-container">$G$</span> will differ by a constant. Like for all <span class="math-container">$F,G$</span> there exists a <span class="math-container">$C$</span> s.t. for all <span class="math-container">$z \in \mathbb R$</span>, <span class="math-container">$F(z)-G(z)=C$</span>. In particular, <span class="math-container">$F(x)-G(x)=F(a)-G(a)=C$</span>.</p> </li> <li><p>It's just occurring to me that the fundamental theorem of calculus' saying that integrals and anti-derivatives are identical is kinda like in complex analysis where they say that <a href="https://en.wikipedia.org/wiki/Analyticity_of_holomorphic_functions" rel="nofollow noreferrer">holomorphic and analytic are identical</a>. (This isn't the fundamental theorem of complex analysis though, which is apparently Cauchy's Integral Theorem. Kinda anti-climactic, but ok I guess.)</p> </li> </ol>
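A numerical illustration of both directions of the theorem (a rough Python sketch with a simple trapezoid rule; not part of the original post, and $f = \cos$ is an arbitrary test function):

```python
from math import sin, cos

def f(t):
    return cos(t)

def area(a, x, n=4000):
    # crude trapezoid approximation of the area function, the integral of f from a to x
    h = (x - a) / n
    return h * ((f(a) + f(x)) / 2 + sum(f(a + i * h) for i in range(1, n)))

a = 0.0
# "anti-derivatives are integrals": the area equals G(x) - G(a) with G = sin
for x in [0.5, 1.0, 2.0]:
    assert abs(area(a, x) - (sin(x) - sin(a))) < 1e-6
# "integrals are anti-derivatives": the area function's derivative is f
x, h = 1.0, 1e-4
num = (area(a, x + h) - area(a, x - h)) / (2 * h)
assert abs(num - f(x)) < 1e-3
```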
1,839,693
<p>I think it must be true. Yet I have no rigorous proof for that. So, what I need to prove is that "group being simple" is invariant under isomorphism.</p> <p>That if $G \cong H$, then either both are simple groups, or both are not simple.</p>
Sidharth Ghoshal
58,294
<p>Recall a group is simple if it doesn't have a normal subgroup other than itself and the identity group, or put another way "no non trivial normal subgroup".</p> <p>So a natural proof by contradiction arises.</p> <p>Since $G \cong H$ we can consider an isomorphism $\pi: H \rightarrow G$.</p> <p>Suppose now that $H$ has a non trivial normal subgroup $K$. we will show that $\pi(K)$ must be a non trivial normal subgroup of $G$. </p> <ol> <li><p>$\pi(K)$ has the same number of elements as $K$ since $\pi$ is an bijection (since isomorphisms are bijections), so if it is a subgroup of G, then it must be a non trivial subgroup.</p></li> <li><p>This question: <a href="https://math.stackexchange.com/questions/1447701/generalized-isomorphism-theorem-for-groups">Generalized Isomorphism Theorem for Groups</a>, shows that the images of normal groups of $H$ must be normal subgroups of $G$ under the isomorphism $\pi$</p></li> <li><p>We combine both, to conclude that $\pi(K)$ is a normal subgroup of G, AND is a non trivial subgroup (so it isn't the whole group or the identity), thus it MUST be a non trivial normal subgroup, a contradiction if we started by assuming $G$ is simple.</p></li> <li><p>Thus we conclude that if $G$ is simple and $G \cong H$ then it must be the case that $H$ is simple.</p></li> </ol>
17,455
<p>Assume $S_1$ and $S_2$ are two $n \times n$ (positive definite if that helps) matrices, $c_1$ and $c_2$ are two variables taking scalar form, and $u_1$ and $u_2$ are two $n \times 1$ vectors. In addition, $c_1+c_2=1$, but in the more general case of $m$ $S$'s, $u$'s, and $c$'s, the $c$'s also sum to 1.</p> <p>What is the derivative of $(c_1 S_1+c_2 S_2)^{-1}(c_1 u_1+c_2 u_2)$ with respect to both $c_1$ and $c_2$?</p>
Jonas Meyer
1,424
<p>The condition that $S_1$ and $S_2$ are positive definite is relevant to the existence of the inverse in the definition of the function. I assume that it is taken as given that the inverse exists at the relevant values of $c_1$ and $c_2$. This would be true in particular if $c_1$ and $c_2$ were positive.</p> <p>By symmetry, the same method will apply for $c_1$ and $c_2$, and we're basically differentiating the function $f(t)=(tA+B)^{-1}(tu+v)$, where $A$ and $B$ are matrices and $u$ and $v$ are column vectors. You can write this as $f(t)=g(t)h(t)$, where $g$ is a matrix valued function and $h$ is a vector valued function. By the product rule, $f'(t)=g'(t)h(t)+g(t)h'(t)$. So you just need to be able to determine $g'$ and $h'$. It is straightforward that $h'(t)=u$. You can also show that $g'(t)=-(tA+B)^{-1}A(tA+B)^{-1}$. </p>
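A finite-difference check of the product-rule formula, using hand-rolled $2\times 2$ linear algebra (a Python sketch, not part of the original answer; the matrices and vectors are arbitrary test data):

```python
def mat_add(X, Y):   return [[X[i][j] + Y[i][j] for j in range(2)] for i in range(2)]
def mat_scale(c, X): return [[c * X[i][j] for j in range(2)] for i in range(2)]
def mat_vec(X, w):   return [X[i][0] * w[0] + X[i][1] * w[1] for i in range(2)]
def mat_mul(X, Y):   return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
def inv2(X):
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

A = [[2.0, 1.0], [1.0, 3.0]]   # positive definite test matrices
B = [[3.0, 0.0], [0.0, 2.0]]
u, v = [1.0, -1.0], [0.5, 2.0]

def f(t):
    # f(t) = (tA + B)^{-1} (t u + v)
    M = inv2(mat_add(mat_scale(t, A), B))
    return mat_vec(M, [t * u[0] + v[0], t * u[1] + v[1]])

def fprime(t):
    # f'(t) = g'(t) h(t) + g(t) u, with g'(t) = -(tA+B)^{-1} A (tA+B)^{-1}
    M = inv2(mat_add(mat_scale(t, A), B))
    gp = mat_scale(-1.0, mat_mul(M, mat_mul(A, M)))
    gh = mat_vec(gp, [t * u[0] + v[0], t * u[1] + v[1]])
    Mu = mat_vec(M, u)
    return [gh[i] + Mu[i] for i in range(2)]

t, h = 1.0, 1e-6
num = [(f(t + h)[i] - f(t - h)[i]) / (2 * h) for i in range(2)]
exact = fprime(t)
assert all(abs(num[i] - exact[i]) < 1e-6 for i in range(2))
```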
4,385,209
<p>Let's start by generalizing the concept of a metric space. An <span class="math-container">$S$</span>-metric space is a set <span class="math-container">$X$</span> with a function <span class="math-container">$d : X \times X \to S$</span> such that</p> <ul> <li><span class="math-container">$d(x,y) = 0 \iff x = y$</span></li> <li><span class="math-container">$d(x,y) = d(y,x)$</span></li> <li><span class="math-container">$d(x,z) \leq d(x,y) + d(y,z)$</span></li> </ul> <p>This is just a metric space which need not necessarily map into <span class="math-container">$\mathbb{R}$</span>. So my question is:</p> <blockquote> <p>Are all <span class="math-container">$\mathbb{R}$</span>-metrizable spaces also <span class="math-container">$\mathbb{Q}$</span>-metrizable spaces?</p> </blockquote> <p>I suspect the answer is &quot;No&quot;, but I have yet to come up with a counter example. I have shown that a few metrizable spaces are <span class="math-container">$\mathbb{Q}$</span>-metrizable.</p> <p>For example discrete spaces are <span class="math-container">$\mathbb{Q}$</span>-metrizable since the usual metric has a range of <span class="math-container">$\{0,1\}$</span>. Additionally <span class="math-container">$\mathbb{R}^n$</span> is <span class="math-container">$\mathbb{Q}$</span>-metrizable. 
If we take <span class="math-container">$d$</span> to be the normal Euclidean metric then we can define <span class="math-container">$d'$</span> such that:</p> <p><span class="math-container">$ d'(x,y)=\left\lceil d(x,y)\right\rceil $</span></p> <p>The first two conditions follow trivially from the fact that <span class="math-container">$d$</span> is a metric and the third is true by virtue of the fact that <span class="math-container">$\lceil x+y\rceil \leq \lceil x\rceil + \lceil y\rceil$</span>.</p> <p>Since <span class="math-container">$\mathbb{R}^n$</span> is generated by unit balls, this metric generates the usual topology on <span class="math-container">$\mathbb{R}^n$</span>.</p> <p>This gives us a whole lot more spaces which are homeomorphic to a subspace of <span class="math-container">$\mathbb{R}^n$</span> as well, but I don't see a way to adjust this more generally.</p> <p>Is there an example of a space which is <span class="math-container">$\mathbb{R}$</span>-metrizable but not <span class="math-container">$\mathbb{Q}$</span>-metrizable by the above definition?</p>
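The ceiling subadditivity used here, and the resulting triangle inequality for $d'(x,y)=\lceil d(x,y)\rceil$, can be spot-checked numerically (a Python sketch, not part of the original question):

```python
from math import ceil
import random

rng = random.Random(1)

# ceil(x + y) <= ceil(x) + ceil(y) for nonnegative reals
for _ in range(10_000):
    x, y = rng.uniform(0, 100), rng.uniform(0, 100)
    assert ceil(x + y) <= ceil(x) + ceil(y)

# hence the triangle inequality holds for d'(x, y) = ceil(|x - y|)
for _ in range(10_000):
    p, q, r = rng.uniform(-50, 50), rng.uniform(-50, 50), rng.uniform(-50, 50)
    assert ceil(abs(p - r)) <= ceil(abs(p - q)) + ceil(abs(q - r))
```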
Paul Frost
349,785
<p>We understand that <span class="math-container">$\mathbb R$</span> and <span class="math-container">$\mathbb Q$</span> are both endowed with the standard metric <span class="math-container">$d(x,y) = \lvert x - y \rvert$</span>. These metrics induce the usual topologies on <span class="math-container">$\mathbb R$</span> and <span class="math-container">$\mathbb Q$</span> and make <span class="math-container">$\mathbb Q$</span> a subspace of <span class="math-container">$\mathbb R$</span>. It is well-known that <span class="math-container">$\mathbb Q$</span> is a <em>totally disconnected</em> space which means that all connected components are one-point sets.</p> <p>Note that each <span class="math-container">$\mathbb Q$</span>-metric on a set <span class="math-container">$X$</span> is a metric (= <span class="math-container">$\mathbb R$</span>-metric) on <span class="math-container">$X$</span>, thus it induces the usual metric topology on <span class="math-container">$X$</span>.</p> <p>Let <span class="math-container">$(X,d)$</span> be a <span class="math-container">$\mathbb Q$</span>-metric space. For each <span class="math-container">$x_0 \in X$</span> the function <span class="math-container">$f_{x_0} : X \to \mathbb Q, f_{x_0}(x) = d(x,x_0)$</span>, is continuous with respect to the metric topology on <span class="math-container">$X$</span> and the standard topology on <span class="math-container">$\mathbb Q$</span>. In fact, for <span class="math-container">$x_1, x_2 \in X$</span> the triangle inequality shows <span class="math-container">$d(x_i,x_0) - d(x_j,x_0) \le d(x_1,x_2)$</span>, i.e. <span class="math-container">$\lvert f_{x_0}(x_1) - f_{x_0}(x_2) \rvert = \lvert d(x_1,x_0) - d(x_2,x_0) \rvert \le d(x_1,x_2)$</span>.</p> <p>Now let <span class="math-container">$C$</span> be a connected component of <span class="math-container">$X$</span> (with respect to the metric topology) and <span class="math-container">$x_0, x_1 \in C$</span>. 
We know that <span class="math-container">$f_{x_0}(C)$</span> is a connected subset of <span class="math-container">$\mathbb Q$</span>. The only connected subsets of <span class="math-container">$\mathbb Q$</span> are the one-point subspaces, thus <span class="math-container">$f_{x_0}$</span> is constant on <span class="math-container">$C$</span>. Thus <span class="math-container">$d(x_1,x_0) = f_{x_0}(x_1) = f_{x_0}(x_0) = d(x_0,x_0) = 0$</span> which implies <span class="math-container">$x_0 = x_1$</span>. Hence <span class="math-container">$C$</span> must be a one-point space. We conclude that <span class="math-container">$X$</span> is totally disconnected.</p> <p>In other words, a necessary condition for a topological space to be metrizable by a <span class="math-container">$\mathbb Q$</span>-metric is that it is totally disconnected.</p> <p>Therefore only totally disconnected <span class="math-container">$\mathbb{R}$</span>-metrizable spaces have a chance to be <span class="math-container">$\mathbb{Q}$</span>-metrizable. In particular, <span class="math-container">$\mathbb R$</span> is not <span class="math-container">$\mathbb{Q}$</span>-metrizable.</p> <p>Examples for <span class="math-container">$\mathbb{Q}$</span>-metrizable spaces are</p> <ul> <li><p>All discrete spaces: Take <span class="math-container">$d(x,y) = \begin{cases} 1 &amp; x \ne y \\ 0 &amp; x = y \end{cases}$</span></p> </li> <li><p><span class="math-container">$\mathbb Q$</span> itself</p> </li> </ul>
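The first example, the discrete $\{0,1\}$-valued metric, can be checked against all three axioms by brute force (a Python sketch, not part of the original answer; its values lie in $\{0,1\}\subset\mathbb Q$, so it is a $\mathbb Q$-metric):

```python
from itertools import product

X = range(5)  # any finite set will do for the check

def d(x, y):
    # the discrete metric from the example list
    return 0 if x == y else 1

assert all((d(x, y) == 0) == (x == y) for x, y in product(X, X))
assert all(d(x, y) == d(y, x) for x, y in product(X, X))
assert all(d(x, z) <= d(x, y) + d(y, z) for x, y, z in product(X, X, X))
```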
259,808
<p>For example, suppose I wanted to determine which of the following has the fastest asymptotic growth:</p> <ol> <li><p>$n^2\log(n)+(\log(n))^2$</p></li> <li><p>$n^2+\log(2^n)+1$</p></li> <li><p>$(n+1)^3+(n-1)^3$</p></li> <li><p>$(n+\log(n))^22^{100}$</p></li> </ol> <p>Are there any straightforward methods to tell which is fastest without actually graphic the functions?</p>
Samir
1,057,918
<p>Algorithm 'A' does a certain task in a 'time' of $n^3+1100$, where $n$ is the number of elements processed. Algorithm 'B' does the same task in a 'time' of $60n^2+6000$. For which values of $n$, if any, would the less efficient algorithm execute more quickly than the more efficient algorithm?</p>
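The two cost functions mentioned above ($n^3+1100$ versus $60n^2+6000$) make a nice concrete illustration of the original question: asymptotic order says nothing about small inputs. A Python sketch (not part of the original post) finds the crossover point:

```python
def cost_A(n): return n**3 + 1100        # asymptotically less efficient
def cost_B(n): return 60 * n**2 + 6000   # asymptotically more efficient

# for small n the cubic algorithm is actually cheaper
crossover = next(n for n in range(1, 10_000) if cost_A(n) >= cost_B(n))
assert all(cost_A(n) < cost_B(n) for n in range(1, crossover))
assert cost_A(crossover) >= cost_B(crossover)
```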
4,076,033
<p>I know how to check whether a vector or a matrix is linearly dependent or independent, but how do I apply that to this problem?</p> <p>Let $V_1$, $V_2$, $V_3$ be vectors. How do I prove that the vector $V_3 = (2, 5, -5)$ is linearly dependent on $V_1 = (1,-2,3)$ and $V_2 = (4,1,1)$?</p> <p>Will it be enough or correct if I solve the equation $\alpha_1 V_1 + \alpha_2 V_2 - V_3 = 0$ and prove that it has a solution?</p>
pmun
468,438
<p>To solve this you can approach with basic methodology:</p> <p>Consider the equation <span class="math-container">$c_1(1,-2,3)+c_2(4,1,1)=(2,5,-5).$</span> Then you will have system of linear equations:</p> <p><span class="math-container">$c_1+4c_2=2, -2c_1+c_2=5, 3c_1+c_2=-5$</span>.</p> <p>Finding that whether <span class="math-container">$v_3$</span> is linearly dependent on <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span> is equivalent to finding the solution of the above system.</p> <p>A trivial computation shows that</p> <p><span class="math-container">$c_1=-2, c_2=1.$</span></p> <p>Thus, the vector <span class="math-container">$v_3$</span> is linearly dependent on <span class="math-container">$v_1$</span> and <span class="math-container">$v_2$</span>.</p>
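<p>The computation can be cross-checked numerically (a sketch assuming NumPy is available; the variable names are mine):</p>

```python
import numpy as np

v1 = np.array([1.0, -2.0, 3.0])
v2 = np.array([4.0, 1.0, 1.0])
v3 = np.array([2.0, 5.0, -5.0])

# Solve the overdetermined 3x2 system [v1 v2] c = v3 by least squares;
# a zero residual means v3 really is a combination of v1 and v2
A = np.column_stack([v1, v2])
c, residuals, rank, _ = np.linalg.lstsq(A, v3, rcond=None)

print(c)                       # close to [-2, 1]
print(np.allclose(A @ c, v3))  # True: v3 = -2*v1 + 1*v2
```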
2,172,836
<p>I'm writing a small Java program which calculates all possible knight's tours with the knight starting on a random field on a 5x5 board.</p> <p>It works well; however, the program doesn't calculate any closed knight's tours, which makes me wonder: is there an error in the code, or are there simply no closed knight's tours on a 5x5 board?</p> <p>If so, what is the minimum required board size for the existence of at least one closed knight's tour?</p>
TonyK
1,508
<p>No closed knight's tour is possible on a board with an odd number of squares, because each move changes the colour of the knight's square. So after an odd number of moves, you can't be back at the starting square, because it's the wrong colour.</p>
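<p>The colour argument can be checked mechanically: every one of the eight knight moves changes the parity of (row + column), i.e. the square's colour (a small Python sketch, though the question's program is in Java):</p>

```python
# The eight knight moves as (row offset, column offset)
MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1),
         (1, -2), (2, -1), (-1, -2), (-2, -1)]

# A square (r, c) is "white" iff (r + c) is even; each move flips that parity
assert all((dr + dc) % 2 == 1 for dr, dc in MOVES)

# So after an odd number of moves (25 on a 5x5 board) the knight is on the
# opposite colour and cannot be back on its starting square
print("every knight move changes the square colour")
```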
4,821
<p>A quick bit of motivation: recently a question I answered quite a while ago ( <a href="https://math.stackexchange.com/questions/22437/combining-two-3d-rotations/178957">Combining Two 3D Rotations</a> ) picked up another (IMHO rather poor) answer. While it was downvoted by someone else and I strongly concur with their opinion, I haven't downvoted it myself because I'm leery of any perception of 'competitive' downvoting on questions that I've already answered; in general I tend to be <em>very</em> stingy with downvotes (certainly more than I probably should), but this seems like a particularly thorny case.</p> <p>What I'm wondering is whether this is a reasonable concern (or reasonable approach) on my part; do people concur that this is something to be worried about from an ethical perspective, or should a bad answer be downvoted regardless of whether it might be abstractly 'beneficial' to myself to do so?</p>
André Nicolas
6,312
<p>I believe that if one has a "competing" answer, then the task of dealing with a conspicuously weak answer should in general be left to others.</p> <p>If the question is quite old, so that a new very weak answer is unlikely to get scrutiny, I think one should wait a while, and then perhaps leave a gentle comment.</p>
252,272
<p>I'm working with traces of matrices. The trace is defined for square matrices and there are some useful rules, e.g. <span class="math-container">$\text{tr}(AB) = \text{tr}(BA)$</span> with <span class="math-container">$A$</span> and <span class="math-container">$B$</span> square; more generally, the trace is invariant under cyclic permutations.</p> <p>I was wondering if the formula <span class="math-container">$\text{tr}(AB) = \text{tr}(BA)$</span> holds even if <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are rectangular, namely <span class="math-container">$A$</span> is <span class="math-container">$n$</span>-by-<span class="math-container">$m$</span> and <span class="math-container">$B$</span> is <span class="math-container">$m$</span>-by-<span class="math-container">$n$</span>.</p> <p>I figured out that if one completes the involved matrices to be square by adding zero entries in the right places, then the formula still works... but I want to be sure about this!</p>
joriki
6,622
<p>Yes, the cyclic invariance holds irrespective of the dimensions of the matrices. The trace of a product in either order is simply the sum of all products of corresponding entries.</p>
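<p>A quick random sanity check with rectangular matrices (a sketch assuming NumPy is available):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))  # 3x5
B = rng.standard_normal((5, 3))  # 5x3

# AB is 3x3 and BA is 5x5, yet the traces agree:
# tr(AB) = sum over i, j of A[i, j] * B[j, i] = tr(BA)
print(np.trace(A @ B), np.trace(B @ A))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```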
966,835
<p>I want to find the asymptotic complexity of the function:</p> <p>$$g(n)=n^6-9n^5 \log^2 n-16-5n^3$$</p> <p>That's what I have tried:</p> <p>$$n^6-9n^5 \log^2 n-16-5n^3 \geq n^6-9n^5 \sqrt{n}-16n^5 \sqrt{n}-5 n^5 \sqrt{n}=n^6-30n^5 \sqrt{n}=n^6-30n^{\frac{11}{2}} \geq c_1n^6 \Rightarrow (1-c_1)n^6 \geq 30n^{\frac{11}{2}} $$</p> <p>We pick $c_1=2$ and $n_1=3600$.</p> <p>$$n^6-9n^5 \log^2 n-16-5n^3 \leq n^6, \forall n \geq 1$$</p> <p>We pick $c_2=1, n_2=1$</p> <p>Therefore, for $n_0=\max \{ 3600, 1 \}=3600, c_1=2$ and $c_2=1$, we have that:</p> <p>$$g(n)=\Theta(n^6)$$</p> <p>Could you tell me if it is right? $$$$</p> <p>Also, can I begin, finding the inequalities or do I have to say firstly that we are looking for $c_1, c_2 \in \mathbb{R}^+$ and $n_0 \geq 0$, such that:</p> <p>$$c_1 f(n) \leq g(n) \leq c_2 f(n), \forall n \geq n_0$$</p> <p>and then, after having found $f(n)$, should I say that we are looking for $c_1, c_2 \in \mathbb{R}^+$ and $n_0 \geq 0$, such that:</p> <p>$$c_1 n^6 \leq g(n) \leq c_2 n^6, \forall n \geq n_0$$</p>
Exodd
161,426
<p>There's a faster way: if </p> <p>$$ \lim_{x\to \infty}\frac{f(x)}{g(x)}\in\mathbb{R}\setminus\{0\} $$ then </p> <p>$$ f(n)=\Theta(g(n)) $$</p> <p>And this is easy to prove.</p>
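<p>Applied to the question's function, the criterion is easy to check numerically: the ratio $g(n)/n^6$ tends to $1$, a nonzero real limit, confirming $g(n)=\Theta(n^6)$ (a sketch):</p>

```python
import math

def g(n):
    return n**6 - 9 * n**5 * math.log(n)**2 - 16 - 5 * n**3

# The ratio g(n)/n^6 approaches 1, so g(n) = Theta(n^6) by the criterion
for n in [10**4, 10**6, 10**8]:
    print(n, g(n) / n**6)
```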
331,859
<p>I need to find the antiderivative of $$\int\sin^6x\cos^2x \mathrm{d}x.$$ I tried symbolizing $u$ as squared $\sin$ or $\cos$ but that doesn't work. Also I tried using the identity of $1-\cos^2 x = \sin^2 x$ and again if I symbolize $t = \sin^2 x$ I'm stuck with its derivative in the $dt$.</p> <p>Can I be given a hint?</p>
Mikasa
8,581
<p>Just adding a good point; however, you got the answer completely $\ddot\smile$ :</p> <blockquote> <p>Consider $$\int\sin^m(x)\cos^n(x)dx$$ where $m,n\in\mathbb Q$. Whenever $m+n$ is an even integer, you can use $t=\tan(x)$ or $t=\cot(x)$ as a good substitution.</p> </blockquote> <p>And here $m+n=8$ is an even integer.</p>
3,108,847
<p>I am trying to prove that if <span class="math-container">$z\in \mathbb{C}-\mathbb{R}$</span> is such that <span class="math-container">$\frac{z^2+z+1}{z^2-z+1}\in \mathbb{R}$</span>, then <span class="math-container">$|z|=1$</span>.</p> <p>One method through which I approached this problem is to assume <span class="math-container">$z=a+ib$</span> and to see that <span class="math-container">$$\frac{z^2+z+1}{z^2-z+1}=1+\frac{2z}{z^2-z+1}$$</span>.</p> <p>So the problem reduces to showing that <span class="math-container">$|z|=1$</span> whenever <span class="math-container">$\frac{2z}{z^2-z+1}\in \mathbb{R}$</span>.</p> <p>I put <span class="math-container">$z=a+ib$</span> and then rationalised to get the imaginary part of <span class="math-container">$\frac{2z}{z^2-z+1}$</span> as <span class="math-container">$\frac{b-b^3-a^2b}{\text{something}}$</span>. I equated this to zero and got my answer.</p> <p>Is there any better method?</p>
David K
139,123
<p>We are given that the imaginary part of <span class="math-container">$\frac{z^2+z+1}{z^2-z+1}$</span> is zero. Therefore <span class="math-container">$\frac{z^2+z+1}{z^2-z+1}$</span> is equal to its own conjugate: <span class="math-container">$$\frac{z^2+z+1}{z^2-z+1} = \frac{\bar z^2+\bar z+1}{\bar z^2-\bar z+1}.$$</span></p> <p>Cross-multiply (multiply both sides by <span class="math-container">$(z^2-z+1)(\bar z^2-\bar z+1)$</span>) and cancel all terms on the right. The result is <span class="math-container">$$ 2z\bar z^2 - 2z^2\bar z - 2\bar z + 2z = 0,$$</span> which you can simplify to <span class="math-container">$$(\bar z - z)z\bar z - \bar z + z = 0.$$</span></p> <p>Since the imaginary part of <span class="math-container">$z$</span> is <em>not</em> zero, it follows that <span class="math-container">$\bar z - z \neq 0$</span> and you can divide both sides of the equation by <span class="math-container">$\bar z - z$</span> to obtain <span class="math-container">$$ z\bar z - 1 = 0.$$</span></p>
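<p>A numerical illustration (a sketch; on the unit circle one can also see the value directly, since for $z=e^{i\theta}$ the ratio equals $(2\cos\theta+1)/(2\cos\theta-1)$, which is real):</p>

```python
import cmath
import math

def expr(z):
    return (z * z + z + 1) / (z * z - z + 1)

# For non-real z on the unit circle the ratio is (numerically) real...
for k in range(1, 10):
    z = cmath.exp(1j * k * math.pi / 10)  # stays off the real axis
    assert abs(expr(z).imag) < 1e-12

# ...while off the unit circle it generally is not
w = 2 * cmath.exp(1j * math.pi / 3)
print(abs(w), expr(w).imag)  # |w| = 2, imaginary part is nonzero
```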
1,274,514
<p>I want to show that proposition<span class="math-container">$5.33$</span> in introduction to homological algebra Rotman :let <span class="math-container">$I$</span> be a directed set , and let <span class="math-container">$\{A_i,\alpha_j^i\}$</span>, <span class="math-container">$\{B_i,\beta_j^i\}$</span>, and <span class="math-container">$\{C_i,\gamma_j^i\}$</span> be directed systems of left <span class="math-container">$R$</span>-modules over <span class="math-container">$I$</span> if <span class="math-container">$r:\{A_i,\alpha_j^i\}\to\{B_i,\beta_j^i\}$</span> and <span class="math-container">$s:\{B_i,\beta_j^i\}\to\{C_i,\gamma_j^i\}$</span> are morphisms of direct systems, and if</p> <blockquote> <p><span class="math-container">$$0\to A_i\xrightarrow{r_i}B_i\xrightarrow{s_i}C_i\to0$$</span></p> </blockquote> <p>is exact for each <span class="math-container">$i\in I$</span>,then there is an exact sequence</p> <blockquote> <p><span class="math-container">$$0\to\varinjlim A_i\xrightarrow{r^\to}\varinjlim B_i\xrightarrow{s^\to }\varinjlim C_i\to0$$</span></p> </blockquote> <p>I have same problem to show that ker <span class="math-container">${s^\to}\subset$</span>Image<span class="math-container">$ \ r^\to$</span>. can you help me!thanks.</p>
Daniel Valenzuela
156,302
<p><em>Hint:</em> $sr: \{A_i,\alpha^i_j\} \to \{B_i,\beta^i_j\}$ is the zero morphism of direct systems.</p>
2,343,993
<blockquote> <p>Find the limit -$$\left(\frac{n}{n+5}\right)^n$$</p> </blockquote> <p>I set it up all the way to $\dfrac{\left(\dfrac{n+5}{n}\right)}{-\dfrac{1}{n^2}}$ but now I am stuck and do not know what to do.</p>
bloomers
432,669
<p>Hint: $\frac{n}{n+5} = 1 + \frac{-5}{n+5} $</p>
2,343,993
<blockquote> <p>Find the limit -$$\left(\frac{n}{n+5}\right)^n$$</p> </blockquote> <p>I set it up all the way to $\dfrac{\left(\dfrac{n+5}{n}\right)}{-\dfrac{1}{n^2}}$ but now I am stuck and do not know what to do.</p>
farruhota
425,072
<p>Alternatively: $$\lim_\limits{n\to\infty} \left(\frac{n}{n+5}\right)^n=\lim_\limits{n\to\infty} \frac{1}{\left(1+\frac{5}{n}\right)^n}=$$ $$\lim_\limits{n\to\infty} \frac{1}{\left(\underbrace{\left(1+\frac{1}{\frac{n}{5}}\right)^{\frac{n}{5}}}_{=e}\right)^5}=\frac{1}{e^5}.$$</p>
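<p>A quick numerical check of the value $1/e^5\approx 0.0067379$ (a sketch):</p>

```python
import math

def a(n):
    return (n / (n + 5)) ** n

# The sequence settles down to exp(-5)
for n in [10, 10**3, 10**6]:
    print(n, a(n))

assert abs(a(10**6) - math.exp(-5)) < 1e-4
```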
2,343,993
<blockquote> <p>Find the limit -$$\left(\frac{n}{n+5}\right)^n$$</p> </blockquote> <p>I set it up all the way to $\dfrac{\left(\dfrac{n+5}{n}\right)}{-\dfrac{1}{n^2}}$ but now I am stuck and do not know what to do.</p>
Paramanand Singh
72,031
<p>Let us assume the fundamental limit $$\lim_{n\to\infty} \left(\frac{n+1}{n}\right)^{n}=\lim_{n\to\infty} \left(1+\frac{1}{n}\right)^{n}=e\tag{1} $$ Taking reciprocals we get $$\lim_{n\to\infty} \left(\frac{n} {n+1}\right)^{n}=\frac{1}{e}\tag{2}$$ And note that the above limit holds if $n$ is replaced by $n+k$ where $k$ is some fixed integer so that $$\lim_{n\to\infty} \left(\frac{n+k} {n+k+1}\right)^{n+k}=\frac{1}{e}$$ and hence $$\lim_{n\to\infty} \left(\frac{n+k} {n+k+1}\right)^{n}=\lim_{n\to\infty}\left(\frac{n+k}{n+k+1}\right)^{n+k}\left(\frac{n+k}{n+k+1}\right)^{-k}=\frac{1}{e}\cdot 1^{-k}=\frac{1}{e}\tag{3}$$ We can now see easily that $$\left(\frac{n} {n+5}\right)^{n}= \left(\frac{n} {n+1}\right)^{n} \left(\frac{n+1} {n+2}\right)^{n} \left(\frac{n+2} {n+3}\right)^{n} \left(\frac{n+3} {n+4}\right)^{n} \left(\frac{n+4} {n+5}\right)^{n} $$ From $(3)$ we can see that each factor on the right of above equation tends to $1/e$ and hence the whole expression tends to $1/e^{5}$.</p> <p>Using this approach starting from $(1)$ we can easily show via simple algebraic manipulation that $$\lim_{n\to\infty} \left(1+\frac{x}{n} \right) ^{n} =\lim_{n\to\infty} \left(1-\frac{x}{n}\right)^{-n}=e^{x}$$ for rational $x$. </p>
3,283,724
<p>Let A, B ⊆ R, and let f : A → B be a bijective function. Show that if <span class="math-container">$f$</span> is strictly increasing on A, then <span class="math-container">$f^{-1}$</span> is strictly increasing on B.</p> <p>How would I write this proof? I think by contradiction but I don't know where to start.</p>
drhab
75,923
<p>Let <span class="math-container">$x,y\in B$</span> with <span class="math-container">$x&lt;y$</span>.</p> <p>Now find <span class="math-container">$u,v\in A$</span> with <span class="math-container">$x=f(u)$</span> and <span class="math-container">$y=f(v)$</span> (possible because <span class="math-container">$f$</span> is surjective).</p> <p>From <span class="math-container">$x\neq y$</span> it follows that <span class="math-container">$u\neq v$</span> (because <span class="math-container">$f$</span> is injective).</p> <p><span class="math-container">$v&lt;u$</span> would lead to the false statement <span class="math-container">$y=f(v)&lt;f(u)=x$</span> (since <span class="math-container">$f$</span> is strictly increasing).</p> <p>We conclude that <span class="math-container">$u&lt;v$</span>.</p> <p>Now realize that <span class="math-container">$u=f^{-1}(x)$</span> and <span class="math-container">$v=f^{-1}(y)$</span> so actually we proved that:<span class="math-container">$$x&lt;y\implies f^{-1}(x)&lt;f^{-1}(y)$$</span>q.e.d.</p>
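<p>A concrete instance of the statement (a sketch; the function and the sample points are my own choices):</p>

```python
# f(x) = x**3 is a strictly increasing bijection from R onto R;
# its inverse is the sign-preserving cube root
def f(x):
    return x ** 3

def f_inv(y):
    return abs(y) ** (1 / 3) * (1 if y >= 0 else -1)

xs = [-2.0, -0.5, 0.0, 1.0, 3.0]
ys = [f(x) for x in xs]

# f strictly increasing => the images come out in the same order...
assert all(a < b for a, b in zip(ys, ys[1:]))
# ...and f_inv is strictly increasing on those images
assert all(f_inv(a) < f_inv(b) for a, b in zip(ys, ys[1:]))
print("inverse preserves strict order on the sampled points")
```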
791,372
<p>Hi I am trying to solve this double integral $$ I:=\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt {xy}}\cos(x+y)\,dx\,dy=(\gamma+2\log 2)\pi^2. $$ Thank you.</p> <p>The constant in the result is given by $\gamma\approx .577$, and is known as the Euler-Mascheroni constant. I was thinking to write $$ I=\Re \bigg[\int_0^\infty \int_0^\infty \frac{\log x \log y}{\sqrt{xy}}\, e^{i(x+y)}\, dx\, dy\bigg] $$ and using Leibniz's rule for differentiation under the integral sign to write $$ I(\eta, \xi)=\Re\bigg[ \int_0^\infty \int_0^\infty \ \frac{\log (\eta x)\log(\xi y)}{\sqrt{xy}} e^{i(x+y)}dx\,dy. \bigg]\\ $$ After taking the derivatives it became obvious that I need to try another method since the x,y constants cancel out. How can we solve this integral I? Thanks. </p>
Ron Gordon
53,268
<p>Consider</p> <p>$$\int_0^{\infty} dx \, x^{\alpha} e^{i x}$$</p> <p>We know from Cauchy's theorem that this integral is equal to (when it converges)</p> <p>$$i \, e^{i \pi \alpha/2} \int_0^{\infty} du \, u^{\alpha} \, e^{-u} = i \, e^{i \pi \alpha/2} \, \Gamma(\alpha+1)$$</p> <p>Differentiating both sides with respect to $\alpha$, we get</p> <p>$$\int_0^{\infty} dx \, x^{\alpha} e^{i x}\, \log{x} = \Gamma(\alpha+1) e^{i \pi \alpha/2} \left [i \, \psi(\alpha+1)-\frac{\pi}{2} \right ] $$</p> <p>Square both sides:</p> <p>$$\begin{align}\int_0^{\infty} dx \, x^{\alpha} e^{i x}\, \log{x} \int_0^{\infty} dy \, y^{\alpha} e^{i y}\, \log{y} &amp;= \Gamma(\alpha+1)^2 e^{i \pi \alpha} \left [\frac{\pi^2 }{4}-\psi(\alpha+1)^2-i \pi \psi(\alpha+1) \right ] \end{align}$$</p> <p>Now plug in $\alpha=-1/2$ and consolidate; use the fact that $\Gamma(1/2)=\sqrt{\pi}$ and $\psi(1/2)=-\gamma-2 \log{2}$:</p> <p>$$\int_0^{\infty} dx \, \int_0^{\infty} dy \frac{\log{x} \log{y}}{\sqrt{x y}} e^{i (x+y)} = -i \pi \left [\frac{\pi^2}{16} - (\gamma+2 \log{2})^2 + i \pi (\gamma+2 \log{2}) \right ]$$</p> <p>Take the real part of both sides, and get</p> <p>$$\int_0^{\infty} dx \, \int_0^{\infty} dy \frac{\log{x} \log{y}}{\sqrt{x y}} \cos{(x+y)} = \pi^2 (\gamma+2 \log{2}) $$</p> <p>as was to be shown.</p>
1,823,187
<blockquote> <p>There are $n \gt 0$ different cells and $n+2$ different balls. Each cell cannot be empty. How many ways can we put those balls into those cells?</p> </blockquote> <p>My solution:</p> <p>Let's start with putting one different ball to each cell. for the first cell there are $n+2$ options to choose a ball. .. for the $n$th cell there are $3$ options to choose a ball. total : $\frac{(n+2)!}{2!}$</p> <p>Now we got $2$ balls left and <strong>my question is</strong>: Can I change the way I choose? I mean until now I choose for each cell a ball. Can I now choose for each of the balls that left a unique cell? Is this legal? If the answer is yes, then let's choose for each ball a cell ($n$ options for picking up a cell) so we get $n^2$ options to put $2$ balls in $n$ cells.</p> <p>we get: $\frac{(n+2)!}{ 2!} n^2$</p> <p>Is this correct?</p>
Sangchul Lee
9,340
<p>Define $f$ by</p> <p>$$ f(x) = \mathrm{e}^{-x} + \sum_{n=2}^{\infty} \max\{0, n - n^4|x - n|\}. $$</p> <p>It is easy to check that</p> <p>$$ \int_{1}^{\infty} f(x) \, \mathrm{d}x = \int_{1}^{\infty} \mathrm{e}^{-x} \, \mathrm{d}x + \sum_{n=2}^{\infty} \frac{1}{n^2} &lt; \infty. $$</p> <p>On the other hand,</p> <p>$$ \int_{1}^{\infty} f(x)^2 \, \mathrm{d}x \geq \sum_{n=2}^{\infty} \int \left(\max\{0, n - n^4|x - n|\}\right)^2 \, \mathrm{d}x = \sum_{n=2}^{\infty} \frac{2}{3n} = \infty. $$</p> <p>The trick is to generate a train of peaks. The following graph shows the summation part of $f$:</p> <p><a href="https://i.stack.imgur.com/qMDOS.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/qMDOS.png" alt="enter image description here"></a></p> <p>Peaks are constructed to satisfy the following heuristic: each peak at $x = n$ is of height $\sim n$ and width $\sim n^{-3}$, thus its area is of order $\sim n^{-2}$; when $f$ is squared, however, the height gets squared and the area becomes of order $\sim n^{-1}$.</p>
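<p>The two growth rates behind the construction can be seen in the partial sums (a sketch; the truncation point is my own choice): the peak areas under $f$ stay bounded, while the peak areas under $f^2$ grow like a harmonic series.</p>

```python
# Peak at x = n: height n, half-width 1/n**3, hence area 1/n**2 under f
# and 2/(3*n) under f**2; compare the partial sums of those areas
N = 10**6
area_f  = sum(1.0 / n**2 for n in range(2, N))     # converges
area_f2 = sum(2.0 / (3 * n) for n in range(2, N))  # diverges like (2/3) log N

print("peak area under f   up to n =", N, ":", area_f)
print("peak area under f^2 up to n =", N, ":", area_f2)
assert area_f < 0.65   # bounded (pi^2/6 - 1 = 0.6449...)
assert area_f2 > 8     # keeps growing without bound
```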
2,426,535
<p>In the book <em>Simmons, George F.</em>, Introduction to Topology and Modern Analysis, page 98, question 2, the problem is: <strong><em>Let $X$ be a topological space and $Y$ be a metric space and $f:A\subset X\rightarrow Y$ be a continuous map. Then $f$ can be extended in at most one way to a continuous mapping of $\bar{A}$ into $Y$.</em></strong></p> <p>I am trying to prove it this way. Let $x_0\in \bar{A}-A$ and suppose that there are two extensions $f$ and $g$ such that $f(x)=g(x)$ for $x\in A$. Now $f(x_0)\in \overline{f(A)}$ and $g(x_0)\in\overline {g(A)}$. So there exist sequences $\{f(x_n)\}$ and $\{g(y_n)\}$ that converge to $f(x_0)$ and $g(x_0)$ respectively, where $x_n$ and $y_n$ belong to $A$ for all $n$. Then I am stuck!! Please help me complete the proof.</p>
Francesco Polizzi
456,212
<p>The following is a well-known result in point-set topology.</p> <blockquote> <p><strong>Proposition.</strong> Two continuous functions $f, g \colon X \to Y$ from a topological space $X$ to a Hausdorff space $Y$, that coincide over a dense subset $D \subseteq X$, necessarily coincide everywhere.</p> </blockquote> <p><em>Proof.</em> Consider the set $$Z :=\{x \in X \, | \, f(x)=g(x)\} \subseteq X.$$ Then $Z$ is closed in $X$, since it is the preimage of the diagonal of $Y \times Y$ (that is closed because $Y$ is Hausdorff) via the continuous map $$h \colon X \to Y \times Y, \quad x \mapsto (f(x), \, g(x)).$$ On the other hand, by assumption $D \subseteq Z$ and so, since $D$ is dense in $X$, we obtain $$X = \bar{D} \subseteq \bar{Z} = Z,$$ that is $X = Z$ and the proof is complete.</p> <p>Now we can get what you want from the Proposition above, because $A$ is dense in $\bar{A}$ and $Y$ is metric, hence Hausdorff.</p>
2,426,535
<p>In the book <em>Simmons, George F.</em>, Introduction to Topology and Modern Analysis, page 98, question 2, the problem is: <strong><em>Let $X$ be a topological space and $Y$ be a metric space and $f:A\subset X\rightarrow Y$ be a continuous map. Then $f$ can be extended in at most one way to a continuous mapping of $\bar{A}$ into $Y$.</em></strong></p> <p>I am trying to prove it this way. Let $x_0\in \bar{A}-A$ and suppose that there are two extensions $f$ and $g$ such that $f(x)=g(x)$ for $x\in A$. Now $f(x_0)\in \overline{f(A)}$ and $g(x_0)\in\overline {g(A)}$. So there exist sequences $\{f(x_n)\}$ and $\{g(y_n)\}$ that converge to $f(x_0)$ and $g(x_0)$ respectively, where $x_n$ and $y_n$ belong to $A$ for all $n$. Then I am stuck!! Please help me complete the proof.</p>
MSIS
678,294
<p>Some caveats: if <span class="math-container">$A$</span> is also a metric space, then for a continuous extension to the closure to exist, <span class="math-container">$f$</span> must preserve Cauchy sequences; uniform continuity guarantees this, while standard continuity does not.</p> <p>An example: for <span class="math-container">$A =\mathbb Q \subset \mathbb R$</span>, choose <span class="math-container">$f(x):= \frac {1}{ x-\sqrt 3} $</span>. This is continuous on <span class="math-container">$\mathbb Q$</span> but will not extend continuously to <span class="math-container">$\mathbb R$</span>, as <span class="math-container">$f$</span> blows up near <span class="math-container">$\sqrt 3$</span>. So you can't extend, since <span class="math-container">$f$</span> is not uniformly continuous and does not preserve Cauchy sequences: while a sequence <span class="math-container">$\{x_n\}$</span> with <span class="math-container">$x_n \rightarrow \sqrt 3$</span> is Cauchy, the &quot;pushforward&quot; sequence <span class="math-container">$\{f(x_n)\}$</span> is not, and will therefore not necessarily converge.</p>
296,536
<p>Let $\mu$ be some positive measure on $\mathbb{R}$. For technical reasons, I would like to know if the limit $$\lim_{p\rightarrow\infty}\frac {\ln \|f\|_{L^p(\mu)}}{\ln p}$$ exists in $[0,\infty]$ for any $f$ (That is, I want the limit to exist, but perhaps not be finite.)</p> <p>Moreover generally I would like to know if in general, $$\lim_{p\rightarrow\infty}\frac {\frac{d^k}{dp^k} \ln \|f\|_{L^p(\mu)}}{\frac{d^k}{dp^k} \ln p}$$ exists in $[0,\infty]$ for any $f$ such that $f\in L^p(\mu)$ for all $1\leq p&lt;\infty.$ (Although I only really need it for $k=2.$) Note that these limits are related by L'Hospital's rule.</p>
Iosif Pinelis
36,721
<p>$\newcommand{\ep}{\epsilon} \newcommand{\ga}{\gamma} \newcommand{\Ga}{\Gamma} \newcommand{\la}{\lambda} \newcommand{\Si}{\Sigma} \newcommand{\R}{\mathbb{R}} \newcommand{\E}{\operatorname{\mathsf E}} \newcommand{\PP}{\operatorname{\mathsf P}}$ </p> <p>In the excellent answer, Willie Wong offered a construction of a measure with a lacunary support set, disproving the conjecture. Let me offer another construction, where the support of the measure is a continuum; however, the construction is still lacunary in the sense that it involves sequences diverging faster than any exponential sequence. </p> <p>The main idea is this. Let $p&gt;0$ and $s&gt;0$. Let $\|f\|_p:=\|f\|_{L^p(\mu)}$. Let $f(x)=x$ for all real $x$. Let us start with this trivial lemma: </p> <blockquote> <p><strong>Lemma 1.</strong> If $\mu(dx)=\frac1s\,e^{-x/s}I\{x&gt;0\}$, where $I$ denotes the indicator, then \begin{equation*} \|f\|_p^p=\int_0^\infty x^p \,e^{-x/s}\frac{dx}s=s^p\Ga(p+1). \end{equation*}</p> </blockquote> <p>Then, by Stirling's formula, $\ln\|f\|_p\sim \ln(ps)$ as $s,p\to\infty$. So, if we alternate $s$ between $p$ and $p^2$, $\ln\|f\|_p/\ln p$ will alternate between $2$ and $3$. Now we need to glue pieces of such two alternating sequences of measures, with $s$ alternating between $p$ and $p^2$, to get one measure. The guiding idea in doing this is that for large $p$ the mass $m_p(dx):=x^p \,e^{-x/s}I\{x&gt;0\}\frac{dx}s$ is mostly concentrated near the point $ps$. </p> <p>To proceed, we shall need two more simple lemmas, which will be proved at the end of this answer. </p> <blockquote> <p><strong>Lemma 2.</strong> For every $y\in[0,ps]$ there is some $c\in[1/2,1]$ such that \begin{equation*} \int_y^\infty x^p \,e^{-x/s}\frac{dx}s=cs^p\Ga(p+1). \end{equation*}</p> <p><strong>Lemma 3.</strong> For every $y\ge2ps$, \begin{equation*} \int_y^\infty x^p \,e^{-x/s}\frac{dx}s \le\exp\{p\ln(2ps)-y/(2s)\}. 
\end{equation*}</p> </blockquote> <p>Let \begin{equation*} \mu(dx)=\sum_{j=1}^\infty\frac{dx}{s_j}\,e^{-x/s_j}I\{x_j&lt;x&lt;x_{j+1}\}, \end{equation*} where \begin{equation*} s_j:= \begin{cases} p_j&amp;\text{ if $j$ is odd}, \\ p_j^2&amp;\text{ if $j$ is even}, \end{cases} \qquad p_j:=2^{2^j},\qquad x_j:=p_j s_j, \end{equation*} so that $p_{j+1}=p_j^2$. Then \begin{multline*} \|f\|_p^p=\int_\R x^p\mu(dx)=\sum_{j=1}^\infty I_j(p),\\ I_j(p):=\int_{x_j}^{x_{j+1}} \exp\{g_{j,p}(x)\}\frac{dx}{s_j},\\ g_{j,p}(x):=p\ln x-x/s_j. \end{multline*}</p> <p>In view of Lemma 1 and Stirling's formula, for large odd $k$ one has \begin{equation*} I_k(p_k) \le s_k^{p_k} p_k^{p_k}=p_k^{2p_k} \tag{1} \end{equation*} and, for $j&lt;k$, \begin{equation*} I_j(p_k)\le s_j^{p_k} p_k^{p_k} \le p_j^{2p_k} p_k^{p_k} \le p_k^{2p_k}, \end{equation*} whence \begin{equation*} \sum_{j&lt;k}I_j(p_k) \le k p_k^{2p_k}=p_k^{(2+o(1))p_k}. \tag{2} \end{equation*}</p> <p>Next, for large odd $k$ and $j&gt;k$, \begin{equation*} p_k\ln(2p_ks_j)-p_k\ln(p_ks_k) \le p_k\ln s_j\le p_k\ln(p_j^2)\le p_j^{1/2}\ln(p_j^2)\le p_j/4, \end{equation*} whence, by Lemma 3, \begin{multline*} I_j(p_k)\le\exp\{p_k\ln(2p_ks_j)-p_j/2\}\le\exp\{p_k\ln(p_ks_k)-p_j/4\} \\ \le2^{-(j-k)}\exp\{p_k\ln(p_ks_k)\}, \end{multline*} so that \begin{equation*} \sum_{j&gt;k}I_j(p_k) \le p_k^{2p_k}. \tag{3} \end{equation*}</p> <p>Collecting (1), (2), (3), for large odd $k$ we have \begin{equation*} \ln\|f\|_{p_k}\lesssim 2\ln p_k. \end{equation*}</p> <p>On the other hand, for large even $k$, by Lemma 2 and Stirling's formula,<br> \begin{equation*} \ln\|f\|_{p_k}\ge\frac1{p_k}\,\ln I_k(p_k)= \ln(s_k p_k^{1+o(1)}) = \ln(p_k^{2} p_k^{1+o(1)}) \sim3\ln p_k. \end{equation*}</p> <p><strong>So, $\ln\|f\|_p/\ln p$ does not converge as $p\to\infty$.</strong> </p> <p>In conclusion, let us prove the lemmas. </p> <p><em>Proof of Lemma 1</em>. This is obvious. </p> <p><em>Proof of Lemma 2</em>. 
Write \begin{equation} x^p \,e^{-x/s}=e^{g(x)},\quad g(x):=p\ln x-x/s. \tag{4} \end{equation} Then $g'(ps)=0$ and $g''(x)$ is increasing in $x&gt;0$. So, $g(ps-u)&lt;g(ps+u)$ for $u\in(0,ps)$. So, for every $y\in[0,ps]$, \begin{equation*} \int_{-\infty}^\infty x^p \,e^{-x/s}\frac{dx}s \ge\int_y^\infty x^p \,e^{-x/s}\frac{dx}s \ge\frac12\,\int_{-\infty}^\infty x^p \,e^{-x/s}\frac{dx}s. \end{equation*} To complete the proof of Lemma 2, it remains to refer to Lemma 1. </p> <p><em>Proof of Lemma 3</em>. For $g$ as in (4) and for $x&gt;2ps$, we have $g'(x)=\frac px-\frac1s\le-\frac1{2s}$ and hence \begin{equation} g(x)\le g(2ps)-\frac1{2s}(x-2ps)=p\ln(2ps)-\frac x{2s}. \end{equation} Now Lemma 3 easily follows. </p>
537,021
<p>Say I divide a number by 6: will the remainder (the number modulo 6) always be between 0 and 5? If so, will a number modulo any number $N$ always give a result between $0$ and $N - 1$?</p>
mdp
25,159
<p>I will assume that by "number", you mean an integer, although it doesn't matter for everything I say.</p> <p>The answer to your question depends a little on your point of view. I would say that $x\bmod n$ is usually best interpreted as the set of numbers $[x]_n=\{x+kn:k\in\mathbb{Z}\}$. However, there is a unique element of this set that lies between $0$ and $n-1$, so we could also use this.</p> <p>For example, if $x=15$ and $n=6$, then $[15]_6=\{\dotsc,-9,-3,3,9,15,21,\dotsc,\}$, and the unique element between $0$ and $5$ is $3$.</p> <p>One key point is that if you want to do a calculation involving only addition and multiplication, but you only care about the answer modulo $n$, you can replace any of the numbers $x$ in your calculation by any element of $[x]_n$, without changing the answer modulo $n$.</p> <p>For example, modulo $6$:</p> <p>$$15\times 3\equiv 3\times 3\equiv 9\equiv 3$$</p> <p>Note how every time I get to a "$\equiv$", I either do some arithmetic, or replace one number from the set $[15]_6$ with a different one.</p>
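<p>In code (a sketch; Python's <code>%</code> operator happens to return exactly the representative in $\{0,\dots,n-1\}$, even for negative operands, though other languages differ on negatives):</p>

```python
# Python's % returns the unique representative in {0, ..., n-1}
n = 6
for x in [15, 3, -9, -3, 0, 21]:
    r = x % n
    assert 0 <= r < n
    print(x, "mod", n, "=", r)

# Congruent numbers are interchangeable in products modulo n:
# 15 and 3 lie in the same class [15]_6, and indeed
assert (15 * 3) % 6 == (3 * 3) % 6 == 3
```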
1,742,982
<p>I was trying to solve the equation using factorials as shown below, but now I'm stuck at this step and need help.</p> <p>$$C(n,3) = 2\cdot C(n,2)$$</p> <p>$$\frac{n!}{3!(n-3)!} = 2\frac{n!}{2!(n-2)!}$$</p> <p>$$3! (n - 3)! = (n - 2)!$$</p>
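<p>The last line can be finished by brute force as a cross-check (a sketch; from $3!\,(n-3)! = (n-2)!$ one expects $n-2=6$, i.e. $n=8$):</p>

```python
from math import comb

# Search directly for n with C(n,3) == 2*C(n,2)
solutions = [n for n in range(3, 100) if comb(n, 3) == 2 * comb(n, 2)]
print(solutions)  # [8]
assert solutions == [8]
```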
Nick Matteo
59,435
<p>Well, the function is always negative if $x&lt;-5$ or $x&gt;3$, is equal to zero when $x$ is $-5$ or $3$, and grows in magnitude without bound as $x$ increases past $3$, so the range includes $(-∞,0]$.</p> <p>Between $-5$ and $3$ the function is positive (except at $0$), so it's a question of finding the maximum achieved in this range. Personally, if I wasn't using calculus, I would graph the function with a graphing calculator, observe that the maximum is about $119.75$, and conclude that the range is approximately $(-\infty, 119.75]$.</p>
2,249,841
<p>Let $a_n$ denote the number of those permutations $\sigma$ on $\{1,2,3,\ldots,n\}$ such that $\sigma$ is a product of exactly two disjoint cycles. Then </p> <ol> <li><p>$a_5 = 50$</p></li> <li><p>$a_4 = 14$</p></li> <li><p>$a_5 = 40$</p></li> <li><p>$a_4 = 11$</p></li> </ol> <p>I tried computing $a_5$ and $a_4$ specifically with a bit of calculation, but I want to know a formula for $a_n$ that requires less calculation.</p>
Michael Lugo
173
<p>There's a well-known formula for the number of permutations with $p_i$ cycles of length $i$ for each $i$, namely </p> <p>$$ {n! \over \prod_i i^{p_i} (p_i)!}$$</p> <p>(see for example <a href="http://blog.plover.com/math/fixpoints.html" rel="nofollow noreferrer">this post by Mark Jason Dominus</a>). In the case where you have one cycle of length $k$ and one of length $n-k$, and $k \not = n-k$, then you have $p_k = p_{n-k} = 1$ and this reduces to $$n! \times {1 \over k(n-k)}.$$ If $k = n-k$, that is, if $k = n/2$, then you have $p_k = 2$ and the number of such permutations is $$n! \times {2 \over n^2}.$$</p> <p>For example, if $n = 5, k = 2$, the first formula gives you that there are $5!/(2 \times 3) = 20$ permutations of $[5]$ consisting of a 2-cycle and a 3-cycle. If $n = 4, k = 2$, the second formula gives you that there are $4! \times 2/4^2 = 3$ permutations consisting of two 2-cycles.</p> <p>From this you can derive a general formula for the number of permutations of $n$ which are products of two disjoint cycles by summing over the possible values of $k$. If $n$ is even you'll need to handle the $k = n/2$ case separately from the rest of the sum.</p> <p>(It's not clear if your definition counts a cycle of length 1 as a cycle. For example, is $(3, 5)(1, 4)(2)(6)$ a product of two distinct cycles in $S_6$? If so your sum will be more complicated but the general idea still holds.)</p>
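<p>The caveat about $1$-cycles can be settled by brute force: if fixed points count as cycles, the counts match options 1 and 4 of the question (a sketch; the helper names are mine):</p>

```python
from itertools import permutations

def cycle_count(perm):
    """Number of cycles (fixed points counted as 1-cycles) of the
    permutation i -> perm[i] given as a tuple."""
    seen, cycles = set(), 0
    for start in range(len(perm)):
        if start not in seen:
            cycles += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return cycles

def a(n):
    return sum(1 for p in permutations(range(n)) if cycle_count(p) == 2)

print(a(4), a(5))  # 11 50, i.e. options 1 and 4
assert a(4) == 11 and a(5) == 50
```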
51,096
<p>Is it possible to have a countable infinite number of countable infinite sets such that no two sets share an element and their union is the positive integers?</p>
Shai Covo
2,810
<p>Let $$ A_0 = \lbrace 1,3,5,7,9,\ldots \rbrace $$ and $$ A_1 = \lbrace 2^n 1 : n \in \mathbb{N} \rbrace, $$ $$ A_2 = \lbrace 2^n 3 : n \in \mathbb{N} \rbrace, $$ $$ A_3 = \lbrace 2^n 5 : n \in \mathbb{N} \rbrace, $$ $$ A_4 = \lbrace 2^n 7 : n \in \mathbb{N} \rbrace, $$ $$ A_5 = \lbrace 2^n 9 : n \in \mathbb{N} \rbrace, $$ $$ \cdots. $$ Noting that for any two distinct elements $r_1$ and $r_2$ of $A_0$ it holds $2^{n_1}r_1 \neq 2^{n_2}r_2$ $\forall n_1,n_2 \in \mathbb{N}$, we have that the $A_i$ are disjoint. On the other hand, let $2k$, with $k \in \mathbb{N}$, be an arbitrary even natural number. Considering its prime factorization, it is necessarily of the form $2k = 2^n r$, where $n \in \mathbb{N}$ and $r \in A_0$. Hence $2k \in \cup _{i = 1}^\infty A_i$, from which it follows that $\cup _{i = 1}^\infty A_i = \lbrace 2,4,6,8,10,\ldots \rbrace$, and so $\mathbb{N} = \cup _{i = 0}^\infty A_i$, with all the $A_i$ disjoint and countably infinite.</p> <p>EDIT: Relation to user6312's answer.</p> <p>The sets $A_i$, $i = 1,2,3,\ldots$, correspond to the rows $$ 2,4,8,16,32, \ldots, $$ $$ 6,12,24,48,96, \ldots, $$ $$ 10,20,40,80,160, \ldots, $$ $$ 14,28,56,112,224,\ldots, $$ $$ 18,36,72,144,288,\ldots, $$ $$ \cdots, $$ while the sets $W_i$, $i=1,2,3,\ldots$, in user6312's answer correspond to the corresponding columns, that is to $$ 2,6,10,14,18,\ldots, $$ $$ 4,12,20,28,36,\ldots, $$ $$ 8,24,40,56,72,\ldots, $$ $$ 16,48,80,112,144,\ldots, $$ $$ 32,96,160,224,288,\ldots, $$ $$ \cdots. $$</p>
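<p>The disjoint-union claim is easy to verify computationally on an initial segment (a sketch; the truncation $N$ and the exponent bound are my own choices):</p>

```python
# Restrict A_0 (the odds) and A_i = {2**k * r : k >= 1} for odd r = 2i - 1
# to {1, ..., N}, then check they cover 1..N without overlap
N = 1024
A0 = {m for m in range(1, N + 1) if m % 2 == 1}
sets = [A0]
for r in range(1, N + 1, 2):
    sets.append({(2**k) * r for k in range(1, 11) if (2**k) * r <= N})

union = set().union(*sets)
assert union == set(range(1, N + 1))   # the sets cover 1..N ...
assert sum(len(s) for s in sets) == N  # ... and are pairwise disjoint
print("partition verified up to", N)
```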
184,361
<p>I'm doing some exercises on Apostol's calculus, on the floor function. Now, he doesn't give an explicit definition of $[x]$, so I'm going with this one:</p> <blockquote> <p><strong>DEFINITION</strong> Given $x\in \Bbb R$, the integer part of $x$ is the unique $z\in \Bbb Z$ such that $$z\leq x &lt; z+1$$ and we denote it by $[x]$.</p> </blockquote> <p>Now he asks to prove some basic things about it, such as: if $n\in \Bbb Z$, then $[x+n]=[x]+n$.</p> <p>So I proved it like this: Let $z=[x+n]$ and $z'=[x]$. Then we have that</p> <p>$$z\leq x+n&lt;z+1$$</p> <p>$$z'\leq x&lt;z'+1$$</p> <p>Then $$z'+n\leq x+n&lt;z'+n+1$$</p> <p>But since $z'$ is an integer, so is $z'+n$. Since $z$ is unique, it must be that $z'+n=z$.</p> <p>However, this doesn't seem to get me anywhere to prove that $$\left[ {2x} \right] = \left[ x \right] + \left[ {x + \frac{1}{2}} \right]$$</p> <p>and in general that </p> <p>$$\left[ {nx} \right] = \sum\limits_{k = 0}^{n - 1} {\left[ {x + \frac{k}{n}} \right]} $$</p> <p>Obviously one could do an informal proof thinking about "the carries", but that's not the idea, let alone how tedious it would be. Maybe there is some easier or clearer characterization of $[x]$ in terms of $x$ to work this out.</p> <p>Another property is $$[-x]=\begin{cases}-[x]\text{ ; if }x\in \Bbb Z \cr-[x]-1 \text{ ; otherwise}\end{cases}$$</p> <p>I argue: if $x\in\Bbb Z$, it is clear $[x]=x$. Then $-[x]=-x$, and $-[x]\in \Bbb Z$ so $[-[x]]=-[x]=[-x]$. For the other, I guess one could say:</p> <p>$$n \leqslant x &lt; n + 1 \Rightarrow - n - 1 &lt; -x \leqslant -n$$</p> <p>and since $x$ is not an integer, this should be the same as $$ - n - 1 \leqslant -x &lt; -n$$</p> <p>$$ - n - 1 \leqslant -x &lt; (-n-1)+1$$</p> <p>So $[-x]=-[x]-1$.</p>
Brian M. Scott
12,042
<p>Let $n=\lfloor x\rfloor$, and let $\alpha=x-n$; clearly either $0\le\alpha&lt;\frac12$, or $\frac12\le\alpha&lt;1$. Then </p> <p>$$\lfloor 2x\rfloor=\lfloor 2n+2\alpha\rfloor=2n+\lfloor 2\alpha\rfloor=\begin{cases} 2n,&amp;\text{if }0\le\alpha&lt;\frac12\\ 2n+1,&amp;\text{if }\frac12\le\alpha&lt;1\;, \end{cases}$$</p> <p>and</p> <p>$$\left\lfloor x+\frac12\right\rfloor=\left\lfloor n+\alpha+\frac12\right\rfloor=n+\left\lfloor\alpha+\frac12\right\rfloor=\begin{cases} n,&amp;\text{if }0\le\alpha&lt;\frac12\\ n+1&amp;\text{if }\frac12\le\alpha&lt;1\;; \end{cases}$$</p> <p>since $\lfloor x\rfloor=n$, the first result is immediate.</p> <p>The general case is handled similarly, except that there are $n$ cases; for $k=0,\dots,n-1$, case $k$ is $$\frac kn\le\alpha&lt;\frac{k+1}n\;.$$</p>
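The general identity $\lfloor nx\rfloor=\sum_{k=0}^{n-1}\lfloor x+k/n\rfloor$ (Hermite's identity) can be spot-checked with exact rational arithmetic, which avoids floating-point trouble when $x + k/n$ lands on an integer. A sketch (function names are mine, not from the answer):

```python
from fractions import Fraction
from math import floor
import random

def hermite_lhs(n, x):
    return floor(n * x)

def hermite_rhs(n, x):
    return sum(floor(x + Fraction(k, n)) for k in range(n))

random.seed(0)
for _ in range(2000):
    n = random.randint(1, 12)
    # exact rationals: no rounding error at the boundary cases
    x = Fraction(random.randint(-500, 500), random.randint(1, 60))
    assert hermite_lhs(n, x) == hermite_rhs(n, x)
print("Hermite's identity holds on 2000 random rational inputs")
```

The case split in the proof above corresponds to which subinterval $[k/n,(k+1)/n)$ contains the fractional part $\alpha$; the random rationals exercise all of these cases.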
2,698,553
<p>Is the natural ring morphism $\mathbb{C}\otimes_{\mathbb{Z}}\mathbb{C}\to\mathbb{C}\otimes_{\mathbb{Q}}\mathbb{C}$ an isomorphism?</p> <p>In other words, is there a $\mathbb Z$-linear map $f:\mathbb{C}\otimes_{\mathbb{Q}}\mathbb{C}\to\mathbb{C}\otimes_{\mathbb{Z}}\mathbb{C}$ such that $$ f(z\otimes w)=z\otimes w $$ for all $z,w\in\mathbb{C}$? (Note that the two occurrences of $z\otimes w$ in the above display have different meanings.)</p>
Bernard
202,857
<p><strong>Hint</strong>:</p> <p> $$\mathbf C\otimes_{\mathbf Z }\mathbf C\simeq(\mathbf C\otimes_{\mathbf Z}\mathbf Q)\otimes_{\mathbf Q}\mathbf C. $$ Now $\;\mathbf C\otimes_{\mathbf Z}\mathbf Q\simeq \mathbf C$ because of the universal property of rings of fractions.</p>
2,698,553
<p>Is the natural ring morphism $\mathbb{C}\otimes_{\mathbb{Z}}\mathbb{C}\to\mathbb{C}\otimes_{\mathbb{Q}}\mathbb{C}$ an isomorphism?</p> <p>In other words, is there a $\mathbb Z$-linear map $f:\mathbb{C}\otimes_{\mathbb{Q}}\mathbb{C}\to\mathbb{C}\otimes_{\mathbb{Z}}\mathbb{C}$ such that $$ f(z\otimes w)=z\otimes w $$ for all $z,w\in\mathbb{C}$? (Note that the two occurrences of $z\otimes w$ in the above display have different meanings.)</p>
Pierre-Yves Gaillard
660
<p>In this post $U,V,X$ and $Y$ are $\mathbb Q$-vector spaces, $a$ is an integer, $b$ is a nonzero integer, and $u,v$ and $x$ are vectors of $U,V$ and $X$ respectively.</p> <p>$(\star)$ A $\mathbb Z$-linear map $f:X\to Y$ is automatically $\mathbb Q$-linear.</p> <p>Proof of $(\star)$: We have $$ f\left(\frac abx\right)=af\left(\frac 1bx\right)=\frac ab\ b\ f\left(\frac 1bx\right)=\frac ab\ f(x).\ \square $$ We equip the $\mathbb Z$-module $U\otimes_{\mathbb Z}V$ with the $\mathbb Q$-vector space structure defined by $$ \frac ab\ (u\otimes v):=\left(\frac abu\right)\otimes v. $$ Consider the $\mathbb Z$-bilinear maps $$ g:U\times V\to U\otimes_{\mathbb Q}V,\quad(u,v)\mapsto u\otimes v, $$ $$ h:U\times V\to U\otimes_{\mathbb Z}V,\quad(u,v)\mapsto u\otimes v. $$ The map $g$ induces a $\mathbb Z$-linear map $$ g':U\otimes_{\mathbb Z}V\to U\otimes_{\mathbb Q}V, $$ which is $\mathbb Q$-linear by $(\star)$.</p> <p>We claim that $h$ is $\mathbb Q$-bilinear. </p> <p>The $\mathbb Q$-linearity in the first variable is clear. The $\mathbb Q$-linearity in the second variable follows from $(\star)$.</p> <p>As $h$ is $\mathbb Q$-bilinear, it induces a $\mathbb Q$-linear map $$ h':U\otimes_{\mathbb Q}V\to U\otimes_{\mathbb Z}V. $$ It is easy to see that $g'$ and $h'$ are mutual inverses.</p>
756,735
<blockquote> <p>Let $n&gt;0$ be a positive integer. For all $x\not=0$, prove that $f(x) = 1/x^n$ is differentiable at $x$ with $f^\prime(x) = -n/x^{n+1}$ by showing that the limit of the difference quotient exists.</p> </blockquote> <p>I am having trouble seeing how I can manipulate the difference quotient in order to get a limit that exists</p> <p>so far I have</p> <p>$$f^\prime(x)= \lim_{h\rightarrow 0} \frac{f(x+h) - f(x)}{h} =\lim_{h\rightarrow 0} \frac{1/(x+h)^n - 1/x^n}{h}$$</p> <p>All help is much appreciated.</p>
Angel
109,318
<p><span class="math-container">$$\lim_{h\to0}\frac{\frac{1}{(x+h)^n}-\frac{1}{x^n}}{h}=\lim_{h\to0}\frac{\frac{1}{x^n(1+\frac{h}{x})^n}-\frac{1}{x^n}}{h}=\lim_{h\to0}\frac{1}{x^n}\frac{\frac{1}{(1+\frac{h}{x})^n}-1}{h}=\lim_{h\to0}\frac{1}{x^{n+1}}\frac{\frac{1}{(1+\frac{h}{x})^n}-1}{\frac{h}{x}}.$$</span> Now let <span class="math-container">$\epsilon=\frac{h}{x}$</span>. As <span class="math-container">$h\rightarrow0,\epsilon\rightarrow0$</span>. Thus <span class="math-container">$$\lim_{h\to0}\frac{1}{x^{n+1}}\frac{\frac{1}{(1+\frac{h}{x})^n}-1}{\frac{h}{x}}=\frac{1}{x^{n+1}}\lim_{\epsilon\to0}\frac{\frac{1}{(1+\epsilon)^n}-1}{\epsilon}=-\frac{1}{x^{n+1}}\lim_{\epsilon\to0}\frac{1-\frac{1}{(1+\epsilon)^n}}{\epsilon}.$$</span> The reason I like this approach is because now, the limit expression is independent of <span class="math-container">$x$</span>. It only depends on <span class="math-container">$n$</span>, and all that remains is to prove that <span class="math-container">$$\lim_{\epsilon\to0}\frac{1-\frac{1}{(1+\epsilon)^n}}{\epsilon}=\lim_{\epsilon\to0}\frac{(1+\epsilon)^n-1}{\epsilon(1+\epsilon)^n}=n.$$</span> From the formula for the geometric progression, we know that <span class="math-container">$$(1+\epsilon)^n-1=[(1+\epsilon)-1]\sum_{m=0}^{n-1}(1+\epsilon)^m=\epsilon\sum_{m=0}^{n-1}(1+\epsilon)^m,$$</span> hence <span class="math-container">$$\lim_{\epsilon\to0}\frac{(1+\epsilon)^n-1}{\epsilon(1+\epsilon)^n}=\lim_{\epsilon\to0}\frac{\epsilon\sum_{m=0}^{n-1}(1+\epsilon)^m}{\epsilon(1+\epsilon)^n}=\lim_{\epsilon\to0}\frac{\sum_{m=0}^{n-1}(1+\epsilon)^m}{(1+\epsilon)^n}=\sum_{m=0}^{n-1}1=n$$</span></p>
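A quick numerical sanity check of this limit (a sketch, with helper names of my own choosing): for small $h$ the difference quotient should be close to $-n/x^{n+1}$ at every $x \neq 0$.

```python
def f(x, n):
    return 1.0 / x**n

def diff_quotient(x, n, h):
    return (f(x + h, n) - f(x, n)) / h

def exact(x, n):
    return -n / x**(n + 1)

# the difference quotient approaches -n/x**(n+1) as h -> 0
for n in (1, 2, 5):
    for x in (0.5, 1.0, 3.0, -2.0):
        approx = diff_quotient(x, n, 1e-6)
        target = exact(x, n)
        assert abs(approx - target) < 1e-3 * max(1.0, abs(target))
```

This is evidence, not a proof; the algebraic argument with the geometric-progression factorization above is what actually establishes the limit.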
2,666,425
<p>I'm working through Vakil's excellent The Rising Sea notes, and in an exercise, the following question is posed:</p> <p>If $X$ is a topological space, show that the fibered product always exists in the category of open sets of $X$, by describing what a fibered product is. </p> <p>Now I know intuitively the fibered product of 2 open sets is nothing but the intersection as the maps between them are that of inclusion. But I'm having trouble proving that explicitly using the universal property. Can anybody help me out? </p>
Angina Seng
436,618
<p>The category of open sets is a poset; that is, there's at most one morphism between any two objects. Here there's a morphism $U\to V$ iff $U\subseteq V$. In general, if one has arrows $U\to W$ and $V\to W$ in a poset, then they have a pullback (fibre product) iff $U$ and $V$ have a greatest lower bound $Y$ in the poset; then $Y\to U$ and $Y\to V$ finish off the pullback square.</p> <p>In the category of open sets, $U\cap V$ is the infimum of $U$ and $V$: it is an open subset of both $U$ and $V$, and every open set contained in both $U$ and $V$ is contained in $U\cap V$.</p>
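The meet description can be illustrated on a small finite topology (an illustrative sketch of my own, not part of the exercise): the largest open set contained in two given opens — the greatest lower bound in the poset — is always their intersection.

```python
# A small finite topology on X = {0, 1, 2, 3}, open sets as frozensets.
X = frozenset(range(4))
opens = {frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1}),
         frozenset({0, 1, 2}), X}

# sanity check: closed under finite unions and intersections
for U in opens:
    for V in opens:
        assert U | V in opens and U & V in opens

def pullback(U, V):
    """Greatest lower bound of U and V in the poset of open sets:
    the largest open set contained in both."""
    candidates = [O for O in opens if O <= U and O <= V]
    return max(candidates, key=len)

# the glb is exactly the intersection, which is itself open
for U in opens:
    for V in opens:
        assert pullback(U, V) == U & V
```

The same computation works over any base $W$ containing $U$ and $V$, since the maps into $W$ carry no extra data in a poset.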
2,688,829
<p>I was wondering how to find the growth rate of the function defined by the number of ways to partition $2^n$ as powers of 2. After a search through OEIS I came across <a href="https://oeis.org/A002577" rel="nofollow noreferrer">OEIS A002577</a> which is what I'm looking for. I can't seem to find any link to asymptotics for this function. Could someone help?</p>
Parcly Taxel
357,390
<p>Go a bit deeper. <a href="https://oeis.org/A000123" rel="nofollow noreferrer">OEIS A000123</a> gives the number of ways of partitioning $2n$ (multiplication, not exponentiation) into powers of two. In the formulas section there is this from Philippe Flajolet (typesetting and expansion of abbreviations is mine):</p> <blockquote> <p>The asymptotic rate of growth is known precisely – see <a href="http://www.dwc.knaw.nl/DL/publications/PU00018536.pdf" rel="nofollow noreferrer">de Bruijn's paper</a>. With $p(n)$ the number of partitions of $n$ into powers of two, the asymptotic formula of de Bruijn is: $$\log p(2n) =\frac1{2\log2}\left(\log\frac n{\log n}\right)^2+\left(\frac12+\frac1{\log2}+\frac{\log\log2}{\log2}\right)\log n\\ -\left(1+\frac{\log\log2}{\log2}\right)\log\log n+\Phi\left(\frac{\log\frac n{\log n}}{\log2}\right)$$ where […] $\Phi(x)$ is a certain periodic function with period 1 and a tiny amplitude.</p> </blockquote> <p>I will abbreviate the above RHS to $A\left(\log\frac n{\log n}\right)^2+B\log n +C\log\log n+O(1)$. To get the growth rate of the number of partitions of $2^n$, substitute $n\to2^{n-1}$: $$\log p(2^n)=A\left(\log\frac{2^{n-1}}{\log2^{n-1}}\right)^2+B\log2^{n-1}+C\log\log2^{n-1}+O(1)$$ $$=A((n-1)\log2-\log((n-1)\log2))^2+B(n-1)\log2+C\log((n-1)\log2)+O(1)$$ Define $m=(n-1)\log2=O(n)$: $$=A(m-\log m)^2+Bm+C\log m+O(1)$$ Therefore $$p(2^n)=e^{A(m-\log m)^2+Bm+C\log m+O(1)}=O(e^{n^2})$$</p>
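The exact values can be generated from the standard recurrence listed at OEIS A000123, $b(n)=b(n-1)+b(\lfloor n/2\rfloor)$ with $b(0)=1$ (a partition of $2n$ either contains a part $1$, or consists entirely of even parts), and then sampled at $n = 2^{k-1}$ to recover A002577. A sketch:

```python
def partitions_into_powers_of_two(limit):
    """b[n] = number of partitions of 2n into powers of two (OEIS A000123),
    via the recurrence b(n) = b(n-1) + b(n // 2), b(0) = 1."""
    b = [1] * (limit + 1)
    for n in range(1, limit + 1):
        b[n] = b[n - 1] + b[n // 2]
    return b

b = partitions_into_powers_of_two(2**12)
# OEIS A002577: number of partitions of 2**n into powers of two
a = [1] + [b[2**(n - 1)] for n in range(1, 14)]
print(a[:8])  # [1, 2, 4, 10, 36, 202, 1828, 27338]
```

This makes it easy to eyeball the super-exponential but sub-doubly-exponential growth that the de Bruijn formula quantifies.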
3,337,475
<p>This is definitely the most difficult integral that I've ever seen. Of course, I'm not able to solve this. Could you help me?</p> <p><span class="math-container">$$\int { \sin { x\cos { x } \cosh { \left( \ln { \sqrt { \frac { 1 }{ 1-\sin { x } } } +\tanh ^{ -1 }{ \left( \sin x \right) +\tanh ^{ -1 }{ \left( \cos { x } \right) } } } \right) } } dx } $$</span></p>
David G. Stork
210,401
<p><em>Mathematica</em> gives:</p> <p><span class="math-container">$$-\frac{\sqrt{\frac{1}{1-\sin (x)}} \sqrt{\sin ^2(2 x)} \csc^2(x) \\ \left(-90 \sin \left(\frac{x}{2}\right)+35 \sin \left(\frac{3 x}{2}\right)-3 \sin \left(\frac{5 x}{2}\right)+15 \cos \left(\frac{3 x}{2}\right)+3 \cos \left(\frac{5 x}{2}\right)+30 \cos \left(\frac{x}{2}\right) \left(4 \sqrt{\frac{1}{\cos (x)+1}} \log \left(\tan \left(\frac{x}{2}\right)-1\right)-4 \sqrt{\frac{1}{\cos (x)+1}} \log \left(2 \sqrt{\frac{1}{\cos (x)+1}}+\tan \left(\frac{x}{2}\right)+1\right)+1\right)\right)}{60 \left(\csc \left(\frac{x}{2}\right)+\sec \left(\frac{x}{2}\right)\right)}$$</span></p> <p>so I doubt you'll want to work through this by hand.</p>
543,938
<p>Can anyone share a link to a proof of this? $${{p-1}\choose{j}}\equiv(-1)^j \pmod{p}$$ for prime $p$ and $0\le j\le p-1$.</p>
sanjshakun
73,938
<p>Definition of $a \equiv b \pmod{c}$ requires $a,b,c$ to be integers. (See David Burton's Elementary Number Theory for a definition and a similar problem.) Here is a way to do it. $$(p-1)(p-2)\ldots(p-j) \equiv (-1)^j j! \pmod{p}.$$ Therefore, $$\binom{p-1}{j} j! \equiv (-1)^j j! \pmod{p}.$$ Now we can "cancel" $j!$ because $\gcd(j!, p)=1$ for $1\leq j \leq p-1$ to obtain the result.</p>
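The congruence is easy to sanity-check for small primes with Python's `math.comb` (a quick check of the statement, not a proof):

```python
from math import comb

# check C(p-1, j) ≡ (-1)^j (mod p) for all 0 <= j <= p-1
# over a handful of primes
for p in (2, 3, 5, 7, 11, 13, 101, 997):
    for j in range(p):
        assert comb(p - 1, j) % p == (-1) ** j % p
print("congruence verified for all listed primes")
```

Note that `(-1) ** j % p` is Python's nonnegative residue (so $-1 \bmod 5$ is $4$), which is why the comparison works directly.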
524,686
<p>Let $X$ be a uniform random variable on $[0,1]$, and let $Y=\tan\left(\pi\left(X-\frac{1}{2}\right)\right)$. Calculate $E(Y)$ if it exists.</p> <p>After doing some research into this problem, I have discovered that $Y$ has a Cauchy distribution (although I do not know how to prove this); therefore, $E(Y)$ does not exist.</p> <p>Also, I know that if I can show the integral defining $E(Y)$ does not absolutely converge - i.e., that $\int_{0}^{1}\left|\tan\left(\pi\left(x-\frac{1}{2}\right)\right)\right|dx$ diverges - I can show that $E(Y)$ does not exist.</p> <p>The problem is that I do not know how to evaluate this integral. Could someone please enlighten me on how to do so? Thanks in advance.</p>
drhab
75,923
<p>$-\pi^{-1}\ln\cos\pi\left(x-\frac{1}{2}\right)$ serves as primitive of $\tan\pi\left(x-\frac{1}{2}\right)$ on $\left(0,1\right)$</p>
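Using that primitive, the truncated integrals of $\left|\tan\pi\left(x-\frac{1}{2}\right)\right|$ over $(\delta,1-\delta)$ have the closed form $-\frac{2}{\pi}\ln\sin(\pi\delta)$ (this follows from the primitive together with the symmetry about $x=\frac12$, since $\cos\pi\left(x-\frac12\right)=\sin\pi x$), and their divergence as $\delta\to0$ is easy to see numerically. A sketch (the derivation of the closed form is mine):

```python
from math import log, pi, sin

def truncated_integral(delta):
    """Integral of |tan(pi*(x - 1/2))| over (delta, 1 - delta) in closed
    form: the primitive -ln(cos(pi*(x - 1/2)))/pi plus symmetry about
    x = 1/2 give the value -(2/pi) * ln(sin(pi*delta))."""
    return -2.0 / pi * log(sin(pi * delta))

vals = [truncated_integral(10.0 ** -k) for k in range(1, 8)]
print([round(v, 3) for v in vals])
# each decade of delta adds about (2/pi)*ln(10) ≈ 1.466: the truncated
# integrals grow without bound, so E|Y| = ∞ and E(Y) does not exist
assert all(b > a for a, b in zip(vals, vals[1:]))
```

The logarithmic blow-up at the endpoints is exactly the heavy-tail behavior of the Cauchy distribution seen through the inverse-CDF substitution.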