4,574,446
<p>This is question 13 on page 294 of Vector Calculus by Marsden and Tromba.</p> <blockquote> <p>Find the volume of the region determined by <span class="math-container">$x^2 + y^2 + z^2 \leq 10, z \geq 2.$</span></p> </blockquote> <p>I have attempted it as follows.</p> <p>The region can be described by using spherical polar coordinates <span class="math-container">$(r,\theta,\phi)$</span> and we have <span class="math-container">$0 \leq r \leq \sqrt{10}$</span>, <span class="math-container">$0 \leq \theta \leq 2 \pi$</span>.</p> <p>For the limits for <span class="math-container">$\phi$</span>, I think that we can find it by saying that <span class="math-container">$z=r \cos \phi = \sqrt{10} \cos \phi \geq 2$</span> using that <span class="math-container">$r = \sqrt{10}$</span> on the surface. This gives <span class="math-container">$0 \leq \phi \leq \cos^{-1}(2/\sqrt{10}).$</span> Hence I get</p> <p><span class="math-container">\begin{align} \iiint_V dV &amp;= \int_0^{2\pi} \int_0^{\cos^{-1}(2/\sqrt{10})}\int_0^{\sqrt{10}} r^2 \sin \phi \,dr\,d\phi\, d\theta \\ &amp;= 2\pi \int_0^{\cos^{-1}(2/\sqrt{10})} 10 \dfrac{\sqrt{10}}{3} \sin \phi \,d\phi \\&amp;= 20\pi \dfrac{\sqrt{10}}{3} - 40\dfrac{\pi}{3}. \end{align}</span></p> <p>However, the answer at the back of the book is</p> <blockquote> <p><span class="math-container">$20\pi \dfrac{\sqrt{10}}{3} - 52\dfrac{\pi}{3}$</span></p> </blockquote> <p>but I have been unable to identify the mistake. Please, could someone help me? Thank you very much.</p>
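[Editorial aside, not part of the original post.] The discrepancy comes from the lower limit of the $r$ integration: inside the region, $r$ runs from $2/\cos\phi$ (the plane $z=2$) up to $\sqrt{10}$, not from $0$. A quick numerical check in Python confirming the book's answer:

```python
import math

# Region: x^2 + y^2 + z^2 <= 10, z >= 2, in spherical coordinates.
# The lower r-limit depends on phi: z = r cos(phi) >= 2  =>  r >= 2/cos(phi).
R = math.sqrt(10)
phi_max = math.acos(2 / R)

# Midpoint rule for
# V = 2*pi * Int_0^{phi_max} ((R^3 - (2/cos phi)^3)/3) * sin(phi) dphi
N = 200_000
h = phi_max / N
V = 0.0
for k in range(N):
    phi = (k + 0.5) * h
    r_lo = 2 / math.cos(phi)
    V += (R**3 - r_lo**3) / 3 * math.sin(phi) * h
V *= 2 * math.pi

book = 20 * math.pi * math.sqrt(10) / 3 - 52 * math.pi / 3
print(V, book)
```

The two printed values agree, matching the answer at the back of the book.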
Mike
544,150
<p>We first note the following:</p> <p><strong>Claim 1:</strong> <em>Since <span class="math-container">$n$</span> is odd, <span class="math-container">$n \not = (a)(a+1)$</span> for every integer <span class="math-container">$a$</span>, because one of <span class="math-container">$a,a+1$</span> is always even.</em></p> <p>Write <span class="math-container">$n = ab$</span>, with <span class="math-container">$1&lt;a \le b &lt; n$</span>. If <span class="math-container">$a=b$</span>, then <span class="math-container">$n=a^2=b^2$</span> and we are done. So it remains to consider the case <span class="math-container">$a \ne b$</span>.</p> <p>If <span class="math-container">$a \ne b$</span>, then since <span class="math-container">$b &gt; a$</span> and Claim 1 rules out <span class="math-container">$b = a+1$</span>, we get <span class="math-container">$b \ge a+2$</span>. Thus <span class="math-container">$$b^2 \ge b(a+2)$$</span> <span class="math-container">$$= ba +2b = n + 2b &gt; n+2,$$</span> so <span class="math-container">$$b^2 &gt; n+2,$$</span> or equivalently, <span class="math-container">$$b &gt; \sqrt{n+2}.$$</span> This contradicts the hypothesis that <span class="math-container">$b \le \sqrt{n+2}$</span>, so <span class="math-container">$a$</span> and <span class="math-container">$b$</span> must be equal after all, and <span class="math-container">$n=a^2=b^2$</span> is a perfect square.</p>
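[Editorial addition, not part of the original answer.] The key inequality above is easy to verify exhaustively for small odd $n$: every factorization $n = ab$ with $1 < a < b < n$ forces $b^2 > n+2$.

```python
# For odd n, both factors of n = a*b are odd, so a < b implies b >= a + 2,
# and then b^2 >= b(a+2) = n + 2b > n + 2.  Brute-force check up to 2000:
checked = 0
for n in range(3, 2000, 2):
    for a in range(2, n):
        if n % a == 0:
            b = n // a
            if a < b < n:
                assert b * b > n + 2, (n, a, b)
                checked += 1
print("verified", checked, "factorizations")
```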
165,888
<p>I want to return the final value from the Table. For example, something like the Table[n-1] value.</p>
Henrik Schumacher
38,178
<p><em>This</em> is definitely faster. It took me some time to figure out the combinatorics and there might still be some potential for improvement within the compiled functions.</p> <pre><code>getColumnIndices = Compile[{{m, _Integer}, {n, _Integer}, {i, _Integer}}, Transpose[Partition[ Partition[ Join[ Table[k, {k, 1, i n}], Flatten[Table[Table[k, {j, 1, n}], {k, i n + 1, i n + n}]], Table[k, {k, i n + n + 1, m n}] ], {1}], {n}]], RuntimeAttributes -&gt; {Listable}, Parallelization -&gt; True, CompilationTarget -&gt; "C", RuntimeOptions -&gt; "Speed" ]; getValues = Compile[{{k1row, _Real, 1}, {k2, _Real, 2}, {i, _Integer}}, Block[{A}, A = Join[ Table[Compile`GetElement[k1row, k], {l, 1, Length[k2]}, {k, 1, i}], k2, Table[Compile`GetElement[k1row, k], {l, 1, Length[k2]}, {k, i + 2, Length[k1row]}], 2 ]; Do[A[[k, k + i]] += Compile`GetElement[k1row, i + 1], {k, 1, Length[k2]}]; A ], RuntimeAttributes -&gt; {Listable}, Parallelization -&gt; True, CompilationTarget -&gt; "C", RuntimeOptions -&gt; "Speed" ]; makeKMat3[k1_, k2_] := With[{m = Length[k1], n = Length[k2]}, SparseArray @@ {Automatic, {m n, m n}, 0, {1, { Range[0, (m n ) (m + n - 1), m + n - 1], Flatten[getColumnIndices[m, n, Range[0, m - 1]], 2] }, Flatten[getValues[k1, k2, Range[0, m - 1]]] }} ] </code></pre> <p>Just to give you an idea of the timings:</p> <pre><code>m = 100; n = 6; k1 = RandomReal[{-1, 1}, {m, m}]; k2 = RandomReal[{-1, 1}, {n, n}]; A = makeKMat2[k1, k2]; // RepeatedTiming B = makeKMat3[k1, k2]; // RepeatedTiming Max[Abs[A - B]] </code></pre> <blockquote> <p>{0.11, Null}</p> <p>{0.000481, Null}</p> <p>0.</p> </blockquote> <p><strong>Addendum</strong></p> <p>The idea for the "higher dimensional" case is similar to the idea above. I exploit that the "ColumnIndices" of the resulting matrices (when partitioned into column indices per row) stay rectangular so that column indices from the diagonal matrices can be joined to it from the left and right. 
Some extra index list (<code>diagidx</code>) is needed for book keeping of those column indices of the blocks that belong to the diagonal entries. All in all, only operations on the nonzero values and the column indices are performed. No <code>SparseArray</code>s are built intermediately: Even building a <code>SparseArray</code> from row pointers, column indices and nonzero values has still some considerable overhead, probably because there is a consistency checker involved in the backend. That's a pitty since I <em>know</em> that I produce consistent data. This issue is also closely related to <a href="https://mathematica.stackexchange.com/questions/158413/librarylink-create-sparsearray-directly-from-row-pointers-and-column-indices">this post</a> by Szabolcs.</p> <pre><code>getValues2 = Compile[{{k1row, _Real, 1}, {blockvals, _Real, 2}, {diagidx, _Integer, 1}, {i, _Integer}}, Block[{A}, A = Join[ Table[Compile`GetElement[k1row, k], {l, 1, Dimensions[blockvals][[1]]}, {k, 1, i}], blockvals, Table[Compile`GetElement[k1row, k], {l, 1, Dimensions[blockvals][[1]]}, {k, i + 2, Length[k1row]}], 2]; Do[A[[k, Compile`GetElement[diagidx, k] + i]] += Compile`GetElement[k1row, i + 1], {k, 1, Length[blockvals]}]; A ], RuntimeAttributes -&gt; {Listable}, Parallelization -&gt; True, CompilationTarget -&gt; "C", RuntimeOptions -&gt; "Speed" ]; getColumnIndices2 = Compile[{ {blockci, _Integer, 3}, {diagci, _Integer, 3}, {m, _Integer}, {n, _Integer}, {i, _Integer} }, If[i &gt; 0, If[i &lt; m - 1, Join[diagci[[All, 1 ;; i]], blockci + i n, diagci[[All, i + 1 ;; m - 1]] + (n), 2], Join[diagci[[All, 1 ;; i]], blockci + i n, 2] ], Join[blockci + i n, diagci[[All, i + 1 ;; m - 1]] + (n), 2] ], RuntimeAttributes -&gt; {Listable}, Parallelization -&gt; True, CompilationTarget -&gt; "C", RuntimeOptions -&gt; "Speed" ]; toSparseArrayData[b_?MatrixQ] := { Partition[SparseArray[b]["ColumnIndices"], Dimensions[b][[2]]], b, Dimensions[b][[2]], Range[Dimensions[b][[2]]] } toSparseArray[X_] := 
With[{d1 = Dimensions[X[[1]]][[1]], d2 = Dimensions[X[[1]]][[2]]}, SparseArray @@ {Automatic, {d1, d1}, 0, {1, {Range[0, d1 d2, d2], Flatten[X[[1]], 1]}, Flatten[X[[2]]]}} ] iteration[X_, a_] := With[{ m = Length[a], blockci = X[[1]], blockvals = X[[2]], n = X[[3]], diagidx = X[[4]] }, With[{ran = Range[0, m - 1]}, { Join @@ getColumnIndices2[blockci, Transpose[Partition[Partition[Range[(m - 1) n], 1], n]], m, n, ran], Join @@ getValues2[a, blockvals, diagidx, ran], m n, Join @@ Outer[Plus, ran, diagidx] } ] ] makeKMat3ND[ks : {__List}] := toSparseArray[Fold[iteration, toSparseArrayData[ks[[1]]], Rest[ks]]] </code></pre> <p>Usage example and timing test:</p> <pre><code>SeedRandom[123]; ks = Table[RandomReal[{-1, 1}, {RandomInteger[{3, 8}]}[[{1, 1}]]], 6]; A = makeKMat2ND[Reverse@ks]; // AbsoluteTiming B = makeKMat3ND[ks]; // AbsoluteTiming Max[Abs[A - B]] </code></pre> <blockquote> <p>{24.0011, Null}</p> <p>{0.038756, Null}</p> <p>8.88178*10^-16</p> </blockquote> <p>This is the sparsity pattern of the resulting matrix:</p> <p><a href="https://i.stack.imgur.com/MlJQp.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MlJQp.png" alt="enter image description here"></a></p>
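[Editorial addition, not from the answer.] For readers without Mathematica, here is a hedged plain-Python sketch of the structure that <code>makeKMat3</code> appears to build (my reading of the code, not stated explicitly by the author): the Kronecker sum <code>kron(k1, I_n) + kron(I_m, k2)</code>, an $(mn)\times(mn)$ matrix with $m+n-1$ nonzeros per row, which is consistent with the row-pointer stride <code>m + n - 1</code> in the <code>SparseArray</code> call.

```python
# Hypothetical dense reference implementation (pure Python, no NumPy).
def kron_sum(k1, k2):
    """Return kron(k1, I_n) + kron(I_m, k2) as nested lists."""
    m, n = len(k1), len(k2)
    A = [[0.0] * (m * n) for _ in range(m * n)]
    for i in range(m):
        for j in range(n):
            row = i * n + j
            for p in range(m):           # kron(k1, I_n): couples block index i -> p
                A[row][p * n + j] += k1[i][p]
            for q in range(n):           # kron(I_m, k2): couples inner index j -> q
                A[row][i * n + q] += k2[j][q]
    return A

A = kron_sum([[1, 2], [3, 4]], [[5, 6], [7, 8]])
for r in A:
    print(r)
```

Each row indeed carries $m+n-1 = 3$ nonzeros here; a dense loop like this is the natural baseline the compiled Mathematica version is being compared against.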
2,786,610
<p>I have to calculate $\int\operatorname{arccot}(\cot(x))\ dx.$ If I had to find the derivative it would be easy with the chain rule. How can I do this?</p>
user
505,767
<p>From here we can solve as follows: </p> <ul> <li><p>$Ab_1=b_1\implies (A-I)b_1=0$ and find $b_1$</p></li> <li><p>$Ab_2=b_2\implies (A-I)b_2=0$ and find $b_2$</p></li> <li><p>$Ab_3=b_2+b_3 \implies (A-I)b_3=b_2$ and find $b_3$</p></li> <li><p>$Ab_4=b_3+b_4 \implies (A-I)b_4=b_3$ and find $b_4$</p></li> </ul>
292,041
<p>The Helly theorem in the Euclidean plane asserts that if $S_1, \dots, S_n$ are $n \ge 3$ convex subsets such that $S_i \cap S_j \cap S_k \ne \emptyset$ for all distinct triples $i,j,k$, then the total intersection $\bigcap_{i = 1}^n S_i$ is also nonempty. </p> <p>I'm wondering if the same theorem is true in the hyperbolic plane (for concreteness, let's assume the Poincaré disk model). My understanding is that if the analogue of Radon's theorem is true in this setting, then Helly follows axiomatically.</p> <p>Radon's theorem in the Euclidean plane asserts that given any four points $x_1, \dots, x_4$, there is a partition into two nonempty subsets such that the convex hulls intersect. The proof I know uses the affine structure on the Euclidean plane and so doesn't seem to port directly into hyperbolic space. On the other hand, I can verify that the Radon property holds for all the collections of four points I've looked at... </p>
Andy Putman
317
<p>The original proof of Helly's theorem was topological and only uses basic homological properties of convex sets. It generalizes to all sorts of contexts, including the one you are interested in. Here is a general statement of what it can do. A <em>homology cell</em> is a topological space whose reduced singular homology is the same as that of a point (this implies in particular that it is nonempty).</p> <p><strong>Theorem</strong>: Let $X$ be a normal topological space such that for some $n \geq 1$, every open set $Y \subset X$ satisfies $H_q(Y)=0$ for $q \geq n$. Let $X_1,\ldots,X_k$ be a collection of closed homology cells in $X$. Assume that the intersection of any $r$ of the $X_i$ is nonempty for all $r \leq n+1$ and is a homology cell for $r \leq n$. Then the intersection of all the $X_i$ is a homology cell (and in particular is nonempty).</p> <p>A discussion of this with references is in Section 3 of</p> <p>B. Farb, Group actions and Helly's theorem, Adv. Math. 222 (2009), no. 5, 1574–1588. </p>
1,633,846
<p>I'm currently a high school Pre-Calculus student and my textbook presents the following theorem without proof:</p> <blockquote> <p>Let <span class="math-container">$f(x)$</span> be a polynomial with real coefficients and a positive leading coefficient.</p> <p>Let <span class="math-container">$a \geq 0$</span>. Then, if when <span class="math-container">$f(x)$</span> is divided by <span class="math-container">$x-a$</span>, all of the coefficients of the quotient and the remainder are non-negative, <span class="math-container">$a$</span> is an upper bound on the real zeroes of <span class="math-container">$f(x)$</span>.</p> <p>Now, let <span class="math-container">$a \leq 0$</span>. Then, if when <span class="math-container">$f(x)$</span> is divided by <span class="math-container">$x-a$</span>, all of the coefficients of the quotient and the remainder alternate between non-positive and non-negative, <span class="math-container">$a$</span> is a lower bound on the real zeroes of <span class="math-container">$f(x)$</span>.</p> </blockquote> <p>I am trying to find a proof for this theorem and I think I found one on <a href="http://mathweb.scranton.edu/monks/courses/ProblemSolving/POLYTHEOREMS.pdf" rel="nofollow noreferrer">pages 6 and 7 of this PDF file</a>. However, the proof does not seem correct:</p> <blockquote> <p>[...] since all of <span class="math-container">$q(x)$</span>'s remaining coefficients are positive [...]</p> </blockquote> <p>This is a quote from the proof of the first part of this theorem. Here, <span class="math-container">$q(x)$</span> is the polynomial resulting from dividing <span class="math-container">$f(x)$</span> by <span class="math-container">$x-b$</span> for some root <span class="math-container">$b$</span> of <span class="math-container">$f(x)$</span>. 
However, there is a clear counterexample to this: If <span class="math-container">$a=5$</span> and <span class="math-container">$f(x)=x^2-5x+6$</span>, then <span class="math-container">$f(x)$</span> and <span class="math-container">$a$</span> meet the hypothesis since <span class="math-container">$\frac{f(x)}{x-a}=x+0+\frac{6}{x-a}$</span>, so all of the coefficients of the quotient and the remainder are non-negative. Then, <span class="math-container">$b=3$</span> is a root of <span class="math-container">$f(x)$</span>, but <span class="math-container">$q(x)=\frac{f(x)}{x-b}=x-2$</span> does not have all positive coefficients, contradicting the above.</p> <p>The proof of the second part of the theorem is also wrong:</p> <blockquote> <p>Because <span class="math-container">$a &lt; 0$</span> and the leading term in <span class="math-container">$q(x)$</span> has a positive coefficient, the constant term in <span class="math-container">$q(x)$</span> has the same sign as <span class="math-container">$q(a)$</span>.</p> </blockquote> <p>However, if we let <span class="math-container">$a=-4$</span> and <span class="math-container">$f(x)=x^2+3x+2$</span>, then <span class="math-container">$f(x)$</span> and <span class="math-container">$a$</span> meet the hypothesis since <span class="math-container">$\frac{f(x)}{x-a}=x-1+\frac{6}{x-a}$</span>, which alternates between non-negative and non-positive coefficients in the quotient and remainder. Then, <span class="math-container">$b=2$</span> is a root of <span class="math-container">$f(x)$</span>, but <span class="math-container">$q(x)=\frac{f(x)}{x-b}=x+1$</span> and thus <span class="math-container">$q(a)=-3$</span> while the constant term of <span class="math-container">$q(x)$</span> is <span class="math-container">$1$</span>, which also clearly contradicts the above.</p> <p>Thus, while I have found proofs for these theorems, I do not think they are valid. 
Could someone show me valid proofs for these theorems in my Pre-Calc textbook? Thank you!</p>
user574848
574,848
<p>Here's what I think is a more intuitive proof. </p> <p>For the first part:</p> <p>Suppose that <span class="math-container">$f(x)/(x-a)$</span> leaves only non-negative coefficients. So <span class="math-container">$f(x)=(x-a)q(x)+r(x)$</span>. But by the definition of polynomial division, <span class="math-container">$\deg r &lt; \deg(x-a) = 1$</span>. Since the divisor <span class="math-container">$x-a$</span> is linear, <span class="math-container">$r(x)$</span> is a constant polynomial - let this constant be <span class="math-container">$r$</span>. Then <span class="math-container">$f(x)=(x-a)q(x)+r$</span>. Now suppose that <span class="math-container">$b&gt;a$</span>. Then <span class="math-container">$f(b)=(b-a)q(b)+r$</span>. Here, <span class="math-container">$b-a$</span> is positive, and <span class="math-container">$q(b)$</span> is positive because <span class="math-container">$q$</span> has non-negative coefficients, a positive leading coefficient, and <span class="math-container">$b &gt; a \geq 0$</span>; also <span class="math-container">$r$</span> is non-negative. Hence <span class="math-container">$f(b)&gt;0$</span>, showing that <span class="math-container">$b$</span> is not a root of <span class="math-container">$f$</span>. </p> <p>A similar argument that invokes the upper bound theorem completes the lower bound proof. </p>
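[Editorial addition, not part of the post; <code>synth_div</code> is my own helper.] The upper-bound test can be exercised numerically with the example from the question, $f(x)=x^2-5x+6$ divided by $x-5$:

```python
def synth_div(coeffs, a):
    """Synthetic division of a polynomial (coefficients high to low)
    by (x - a); returns (quotient_coeffs, remainder)."""
    work = [coeffs[0]]
    for c in coeffs[1:]:
        work.append(c + a * work[-1])
    return work[:-1], work[-1]

# f(x) = x^2 - 5x + 6; dividing by x - 5 leaves quotient x + 0 and
# remainder 6 -- all non-negative, so 5 bounds the real roots 2 and 3 above.
q, r = synth_div([1, -5, 6], 5)
print(q, r)
```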
3,699,640
<p>How to show <span class="math-container">$x^2=20$</span> has no solution in the <span class="math-container">$2$</span>-adic ring of integers <span class="math-container">$\mathbb{Z}_2$</span>?</p> <p>What is the general criterion for a solution of <span class="math-container">$x^2=a$</span> in <span class="math-container">$\mathbb{Z}_2$</span>?</p> <p>I know that for an odd prime, <span class="math-container">$x^2=a$</span> has a solution in <span class="math-container">$\mathbb{Z}_p$</span> (in other words, <span class="math-container">$a$</span> is a quadratic residue modulo <span class="math-container">$p$</span>) if <span class="math-container">$a_0$</span> is a quadratic residue modulo <span class="math-container">$p$</span>, where <span class="math-container">$a=a_0+a_1p+a_2p^2+\cdots \in \mathbb{Z}_p$</span>.</p> <p>But what about the even prime, i.e., <span class="math-container">$p=2$</span>?</p> <p>We also have the following two results.</p> <blockquote> <p>Result <span class="math-container">$1$</span>: For <span class="math-container">$p \neq 2$</span>, an <span class="math-container">$\epsilon \in \mathbb{Z}_p^{\times}$</span> is a square in <span class="math-container">$\mathbb{Z}_p$</span> iff it is a square in the residue field of <span class="math-container">$\mathbb{Z}_p$</span>.</p> <p>Result <span class="math-container">$2$</span>: A unit <span class="math-container">$\epsilon \in \mathbb{Z}_2^{\times}$</span> is a square iff <span class="math-container">$\epsilon \equiv 1 \pmod 8$</span>.</p> </blockquote> <p>But as <span class="math-container">$20$</span> is not a unit in <span class="math-container">$\mathbb{Z}_2$</span>, the above Result <span class="math-container">$2$</span> is not applicable here.</p> <p>So we have to use another way.</p> <p>I am trying as follows:</p> <p>For a general <span class="math-container">$2$</span>-adic integer <span class="math-container">$a$</span>, we have the following form <span 
class="math-container">$$ a=2^r(1+a_1 \cdot 2+a_2 \cdot 2^2+\cdots).$$</span> Thus one necessary criterion for <span class="math-container">$a \in \mathbb{Q}_2$</span> to be a square in <span class="math-container">$\mathbb{Q}_2$</span> is that <span class="math-container">$r$</span> must be an <strong>even integer</strong>.</p> <p>Are there any conditions on <span class="math-container">$a_1$</span> and <span class="math-container">$a_2$</span>?</p> <p>For, let <span class="math-container">$\sqrt{20}=a_0+a_1 2+a_22^2+a_32^3+\cdots$</span>.</p> <p>Then squaring and taking modulo <span class="math-container">$2$</span>, we get <span class="math-container">$$ a_0^2 \equiv 0 (\mod 2) \Rightarrow a_0 \equiv 0 (\mod 2).$$</span> Thus, <span class="math-container">$\sqrt {20}=a_12+a_22^2+a_32^3+\cdots$</span>.</p> <p>Again squaring and taking modulo <span class="math-container">$2^2$</span>, we get <span class="math-container">$$a_1 \equiv 1 (\mod 2^2).$$</span> Thus we have <span class="math-container">$\sqrt{20}=2+a_22^2+a_32^3+\cdots$</span>.</p> <p>Again squaring and taking modulo <span class="math-container">$2^3$</span>, we get no solution for <span class="math-container">$a_i,\ i \geq 2$</span>.</p> <p>What do I conclude from here?</p> <p>How do I conclude that <span class="math-container">$x^2=20$</span> has no solution in <span class="math-container">$\mathbb{Z}_2$</span>?</p> <p>Please help me.</p>
nguyen quang do
300,700
<p>Here is an elementary solution, but which has the advantage of showing "experimentally" why <span class="math-container">$2 $</span> is a "distinguished" prime. If <span class="math-container">$20$</span> is a square in <span class="math-container">$\mathbf Z_2$</span>, we can factor out <span class="math-container">$4$</span> to get <span class="math-container">$5=2^{2n}u^2$</span>, where <span class="math-container">$u$</span> is a <span class="math-container">$2$</span>-adic unit. But <span class="math-container">$5=1+4$</span> is a "principal unit", i.e. it belongs to the multiplicative subgroup <span class="math-container">$1+2\mathbf Z_2$</span>, so that <span class="math-container">$n=0$</span> and <span class="math-container">$u$</span> is also a principal unit, say <span class="math-container">$u=1+ 2^aw$</span> and <span class="math-container">$u^2=1+ 2^{a+1} w+ 2^{2a} w^2$</span> . Writing <span class="math-container">$v_2$</span> for the <span class="math-container">$2$</span>-adic valuation, we have <span class="math-container">$v_2 (5-1)=2$</span> and <span class="math-container">$v_2 (u^2-1)=a+1$</span> if <span class="math-container">$a\ge 2$</span>, a contradiction. If <span class="math-container">$a=1, u^2 -1 = 4w(1+w)$</span>. Taking classes modulo <span class="math-container">$2\mathbf Z_2$</span>, denoted by brackets [.], we have <span class="math-container">$[1+w]=[1]+[w]=[1]+[1]=[0]$</span> because the residue field is just <span class="math-container">$\mathbf Z/2\mathbf Z$</span>. This means that <span class="math-container">$v_2(u^2-1) \ge 3$</span>, a contradiction again. The last phenomenon could not have happened for <span class="math-container">$p$</span> odd.</p>
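[Editorial aside, not part of the answer.] The same conclusion can be checked by brute force: odd squares are $\equiv 1 \pmod 8$, so $y^2=5$ is impossible in $\mathbf Z_2$, and indeed $x^2 \equiv 20 \pmod{32}$ already has no solution.

```python
# Odd squares modulo 8 are all congruent to 1:
odd_squares_mod8 = {(y * y) % 8 for y in range(1, 200, 2)}
print(odd_squares_mod8)

# Hence x^2 = 20 = 4 * 5 would force y^2 = 5 (mod 8) for y = x/2;
# already modulo 32 there is no solution:
sols = [x for x in range(32) if (x * x - 20) % 32 == 0]
print(sols)
```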
51,345
<p>Do all infinite fields of char p contain a subfield isomorphic to $F_{p}(x)$?</p>
Qiaochu Yuan
232
<p>Perhaps this is an opportunity to walk through a mechanical process from the statement of the problem to its solution. Suppose $F$ is a field of characteristic $p$ which does not contain a subfield isomorphic to $\mathbb{F}_p(x)$. In other words, no element of $F$ is transcendental over $\mathbb{F}_p$. It follows that every element of $F$ must instead be algebraic over $\mathbb{F}_p$, hence that $F$ must be a subfield of $\overline{\mathbb{F}_p}$. As no finite field is algebraically closed, $\overline{\mathbb{F}_p}$ is an infinite field of characteristic $p$ which contains no transcendental elements, and we are done. </p>
51,345
<p>Do all infinite fields of char p contain a subfield isomorphic to $F_{p}(x)$?</p>
Bill Dubuque
242
<p><strong>HINT</strong> $\ $ Consider the algebraic closure of $\rm\:\mathbb F_p\:,\:$ recalling that algebraically closed fields are infinite, since for any finite field $\rm\:\mathbb F\:$ with elements $\rm\:a_1,\cdots,a_n,\:$ a la Euclid, we can construct the polynomial $\rm\:1 + (x-a_1)\cdots(x-a_n)\:$ coprime to all of the primes $\rm\: x-a_i\:,\:$ hence having no roots $\rm\:a_{\:i}\:$ in $\rm\:\mathbb F\:.$ </p>
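[Editorial addition; $p=5$ is an arbitrary small choice.] The Euclid-style construction in the hint is easy to check for a small prime:

```python
# g(x) = 1 + prod_{a in F_p} (x - a) evaluates to 1 at every a in F_p,
# so it has no root there -- any root lies in a proper extension.
p = 5

def g(x):
    prod = 1
    for a in range(p):
        prod = (prod * (x - a)) % p
    return (1 + prod) % p

values = [g(x) for x in range(p)]
print(values)
```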
3,707,132
<p>I believe my proof of this simple fact is fine, but after a few false starts, I was hoping that someone could look this over. In particular, I am interested in whether there is an alternate proof.</p> <blockquote> <p>For a real number <span class="math-container">$a$</span> and non-empty subset of reals <span class="math-container">$B$</span>, define <span class="math-container">$a + B = \{a + b : b \in B\}$</span>. Show that if <span class="math-container">$B$</span> is bounded above, then <span class="math-container">$\sup(a + B) = a + \sup B$</span>.</p> </blockquote> <p>My attempt: </p> <blockquote> <p>Fix <span class="math-container">$a \in \mathbb{R}$</span>, take <span class="math-container">$B \subset \mathbb{R}$</span> to be nonempty and bounded above, and define <span class="math-container">$$a + B = \{a + b : b \in B\}.$$</span> Since <span class="math-container">$B$</span> is nonempty and bounded above, the least-upper-bound axiom guarantees the existence of <span class="math-container">$\sup B$</span>. For any <span class="math-container">$b \in B$</span>, we have <span class="math-container">$$b \leq \sup B,$$</span> which implies <span class="math-container">$$a + b \leq a + \sup B.$$</span> As this is true for any <span class="math-container">$b \in B$</span>, it follows that <span class="math-container">$a + \sup B$</span> is an upper bound of <span class="math-container">$a + B$</span>, and hence <span class="math-container">$\sup(a + B)$</span> exists, by the completeness axiom, since <span class="math-container">$B \neq \emptyset$</span> implies immediately that <span class="math-container">$a + B \neq \emptyset$</span>. I claim that <span class="math-container">$a + \sup B$</span> is in fact the least upper bound of <span class="math-container">$a + B$</span>. As we have already shown it to be an upper bound, it suffices to demonstrate that <span class="math-container">$a + \sup B$</span> is the least of the upper bounds. 
Let <span class="math-container">$\gamma$</span> be an upper bound of <span class="math-container">$a + B$</span>. Hence, for any <span class="math-container">$b \in B$</span>, <span class="math-container">$$a + b \leq \gamma,$$</span> which implies that <span class="math-container">$$b \leq \gamma - a.$$</span> As this holds for all <span class="math-container">$b \in B$</span>, <span class="math-container">$\gamma - a$</span> is an upper bound of <span class="math-container">$B$</span>. Hence, by the definition of supremum, <span class="math-container">$$\gamma - a \geq \sup B,$$</span> which implies that <span class="math-container">$$\gamma \geq a + \sup B,$$</span> as desired. </p> </blockquote> <p>I tried to write the proof initially by showing that <span class="math-container">$\sup(a + B) \leq a + \sup B$</span> and <span class="math-container">$\sup(a + B) \geq a + \sup B$</span>, but didn't have any luck. If there is a trick to it, I would be interested in hearing it.</p>
fleablood
280,126
<p>Given <span class="math-container">$B$</span> is non-empty, <span class="math-container">$B$</span> is bounded above and <span class="math-container">$\sup B$</span> is the least upper bound of <span class="math-container">$B$</span> then</p> <p>Claim 1: <span class="math-container">$a + B$</span> is non-empty.</p> <p>Pf: This will be the layout of all the claims.</p> <p><span class="math-container">$B$</span> is non-empty. So there exists a <span class="math-container">$b \in B$</span>, so <span class="math-container">$a + b \in a + B$</span>. So <span class="math-container">$a+B$</span> is not empty.</p> <p>Claim 2: <span class="math-container">$a + B$</span> is bounded above.</p> <p>Pf: <span class="math-container">$B$</span> is bounded above. So there exists <span class="math-container">$g$</span> so that <span class="math-container">$g \ge b$</span> for all <span class="math-container">$b \in B$</span>.</p> <blockquote class="spoiler"> <p> Let <span class="math-container">$k \in a + B$</span>. Then <span class="math-container">$k = a + b$</span> for some <span class="math-container">$b \in B$</span>. So <span class="math-container">$g \ge b$</span> so <span class="math-container">$a+g \ge a+b=k$</span>. So <span class="math-container">$a+B$</span> is bounded above by <span class="math-container">$a+g$</span>.</p> </blockquote> <p>Claim 3: <span class="math-container">$a + \sup B$</span> is an upper bound for <span class="math-container">$a+B$</span>.</p> <blockquote class="spoiler"> <p> Pf: Apply the argument of Claim 2, but with <span class="math-container">$\sup B$</span> as the upper bound in use. 
If <span class="math-container">$k \in a+ B$</span> there is a <span class="math-container">$b$</span> so that <span class="math-container">$k =a+b$</span> and <span class="math-container">$\sup B \ge b$</span> so <span class="math-container">$a + \sup B \ge a + b = k$</span>.</p> </blockquote> <p>Claim 4: If <span class="math-container">$l &lt; a + \sup B$</span> then <span class="math-container">$l$</span> is not an upper bound.</p> <p>If <span class="math-container">$l &lt; a + \sup B$</span> then <span class="math-container">$l - a &lt; \sup B$</span> and so <span class="math-container">$l-a$</span> is not an upper bound of <span class="math-container">$B$</span>. So there exists a <span class="math-container">$b\in B$</span> so that <span class="math-container">$l-a &lt; b$</span>.</p> <blockquote class="spoiler"> <p> .... You can do this......</p> </blockquote>
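[Editorial aside.] For finite sets, where <code>max</code> plays the role of $\sup$, the identity $\sup(a+B)=a+\sup B$ can be spot-checked directly:

```python
import random

random.seed(0)
B = [random.uniform(-10.0, 10.0) for _ in range(100)]
a = random.uniform(-5.0, 5.0)

lhs = max(a + b for b in B)   # sup(a + B) for a finite set
rhs = a + max(B)              # a + sup B
print(lhs, rhs)
```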
1,797,243
<p>Find the relative extrema of the function by applying the first-derivative test:</p> <p>$$f(x)=x^5-5x^3-20x-2$$</p> <p>So I found $f'(x)$:</p> <p>$$f'(x) = 5x^4-15x^2-20$$</p> <p>Now, I'm trying to find the critical values, where $f'(x)=0$ or $f'(x)$ is undefined, so I can apply the first-derivative test. However, I can't simplify this. How can I find the relative extrema now? Thanks. </p>
Olivier Oloa
118,798
<p><strong>Hint</strong>. If one sets $X=x^2$ then one has to solve a classic quadratic equation $$ 5X^2-15X-20=0. $$</p>
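[Editorial addition.] Following the hint through: with $X=x^2$, $5X^2-15X-20=5(X-4)(X+1)$, so $X=4$ or $X=-1$; only $X=4$ yields real critical points, namely $x=\pm 2$.

```python
# Verify that x = ±2 are roots of f'(x) = 5x^4 - 15x^2 - 20;
# X = -1 would require x^2 = -1, which has no real solution.
def fprime(x):
    return 5 * x**4 - 15 * x**2 - 20

vals = [fprime(2.0), fprime(-2.0)]
print(vals)
```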
1,004,837
<p>The Warsaw circle is defined as a subset of $\mathbb{R}^2$: $$\left\{\left(x,\sin\frac{1}{x}\right): x\in\left(0,\frac{1}{2\pi}\right]\right\}\cup\left\{(0,y):-1\leq y\leq1\right\}\cup C\;,$$ where $C$ is the image of a curve connecting the other two pieces. </p> <p>A map from Warsaw circle to a single point space seems to be a well-known example showing weak homotopy equivalence is indeed weaker than homotopy equivalence. <strong>I am trying to see why the Warsaw circle is non-contractible.</strong> It seems intuitively reasonable since two 'ends' of it are connected in some sense, but I failed to give a proof. Any hint would be appreciated. Thank you very much. </p>
Cheerful Parsnip
2,941
<p>Its Cech cohomology is equal to $\mathbb Z$ in degree $1$. You can either check this directly, or you could apply Alexander Duality (which relates the reduced Cech cohomology with the reduced homology of the complement.) In this case, since the complement has two path components, the Cech cohomology of the Warsaw circle must be $\mathbb Z$ in degree 1.</p>
463,814
<p>I am struggling to find the values of these integrals; after trying many substitutions, nothing worked for me. </p> <p>1) $$ \int_{0}^{a}\frac{dx}{\sqrt{ax-x^2}} $$ 2) $$ \int_{}^{}\frac{3x^5dx}{1+x^{12}} $$</p>
Elias Costa
19,266
<p><strong>Hint (a):</strong> Use change of variables, then trigonometric substitutions. \begin{align} \int_0^a \frac{dx}{\sqrt{ax-x^2}} = &amp; \int_0^a \frac{dx}{\sqrt{\left(\frac{a}{2}\right)^2-\left(\frac{a}{2}\right)^2+2\cdot \left(\frac{a}{2}\right)x-x^2}} \\ = &amp; \int_0^a \frac{\left(x-\left(\frac{a}{2}\right) \right)^\prime dx}{\sqrt{\left(\frac{a}{2}\right)^2-\left(x-\left(\frac{a}{2}\right) \right)^2}} \end{align} If $u=\left(x-\left(\frac{a}{2}\right) \right)$ then $du=\left(x-\left(\frac{a}{2}\right) \right)^\prime dx$ and we have the new integral \begin{align} \int_0^a \frac{dx}{\sqrt{ax-x^2}}=&amp;\int_{-\frac{a}{2}}^{\frac{a}{2}} \frac{du}{\sqrt{\left(\frac{a}{2}\right)^2-u^2}} \end{align} Now you can proceed by trigonometric substitution $u=\left(\frac{a}{2}\right)\sin \theta$ then $ du=\left(\frac{a}{2}\right)\cos\theta \, d\theta $ and $$ \sqrt{\left(\frac{a}{2}\right)^2-u^2}= \sqrt{\left(\frac{a}{2}\right)^2-\left(\frac{a}{2}\right)^2\sin^2\theta}= \left|\frac{a}{2}\right|\cdot \left|\cos\theta\right| $$ </p> <p><strong>Hint (b):</strong> Use a change of variables in \begin{align} \int\frac{3x^5dx}{1+x^{12}} = &amp; \int\frac{3}{6}\frac{(x^6)^\prime dx}{1+(x^{6})^2} \\ \end{align}</p>
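[Editorial aside; $a=2$ is an arbitrary positive choice.] Both hints can be sanity-checked numerically: the first integral evaluates to $\pi$ for any $a>0$, and the second has antiderivative $\tfrac12\arctan(x^6)$.

```python
import math

# (a) Midpoint rule for Int_0^a dx / sqrt(a x - x^2); the exact value is pi.
a = 2.0
N = 400_000
s = 0.0
for k in range(N):
    x = (k + 0.5) / N * a
    s += 1.0 / math.sqrt(a * x - x * x)
s *= a / N
print(s)

# (b) d/dx [ (1/2) arctan(x^6) ] should equal 3 x^5 / (1 + x^12).
x0 = 0.7
h = 1e-6
num = (0.5 * math.atan((x0 + h) ** 6) - 0.5 * math.atan((x0 - h) ** 6)) / (2 * h)
exact = 3 * x0**5 / (1 + x0**12)
print(num, exact)
```

The midpoint rule converges slowly here because of the integrable square-root singularities at the endpoints, so the agreement with $\pi$ is only to a few decimal places.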
463,814
<p>I am struggling to find the values of these integrals; after trying many substitutions, nothing worked for me. </p> <p>1) $$ \int_{0}^{a}\frac{dx}{\sqrt{ax-x^2}} $$ 2) $$ \int_{}^{}\frac{3x^5dx}{1+x^{12}} $$</p>
lab bhattacharjee
33,337
<p>HINT:</p> <p>For $$\int_0^a\frac{dx}{\sqrt{ax-x^2}}=\int_0^a\frac{dx}{\sqrt{x(a-x)}}=\int_0^a\frac{dx}{|a|\sqrt{\frac xa(1-\frac xa)}}$$</p> <p>Putting $x=a\sin^2y,dx=2a\sin y\cos y dy$ and as $x=0\implies y=0;x=a\implies y=\frac\pi2$</p> <p>$$\int_0^a\frac{dx}{|a|\sqrt{\frac xa(1-\frac xa)}}=\int_0^{\frac\pi2}\frac{2a\sin y\cos y dy}{|a|\sin y\cos y}=\frac 2{\text{ sign }(a)}\int_0^{\frac\pi2}dy$$</p> <p>$$\text{Similarly for the integral }\frac1{\sqrt{x(x-a)}},\text{ put } x=a\sec^2y$$</p> <p>$$\text{Similarly for the integral }\frac1{\sqrt{x(x+a)}},\text{ put } x=a\tan^2y$$</p>
3,287,067
<p>I'm reading a solution to the following exercise:</p> <p>"Assume that <span class="math-container">$\lim_{x\to c}f\left(x\right)=L$</span>, where <span class="math-container">$L\ne0$</span>, and assume <span class="math-container">$\lim_{x\to c}g\left(x\right)=0.$</span> Show that <span class="math-container">$\lim_{x\to c}\left|\frac{f\left(x\right)}{g\left(x\right)}\right|=\infty.$</span>" </p> <p>And at some point in the proof the following step appears:</p> <p>"Choose <span class="math-container">$\delta_1$</span> so that <span class="math-container">$0&lt;\left|x-c\right|&lt;\delta _1$</span> implies <span class="math-container">$\left|f\left(x\right)-L\right|&lt;\frac{|L|}{2}$</span>. <strong>Then we have <span class="math-container">$\left|f\left(x\right)\right|\ge\frac{\left|L\right|}{2}$</span></strong>."</p> <p>It's precisely the implication in bold that I'm struggling to understand. How does the writer go from <span class="math-container">$\left|f\left(x\right)-L\right|&lt;\frac{\left|L\right|}{2}$</span> to <span class="math-container">$\left|f\left(x\right)\right|\ge\frac{\left|L\right|}{2}$</span>? </p> <p>I'm probably failing to see something that may be very clear, but I've been attempting unsuccessfully to reach the conclusion algebraically long enough, and can't quite see why it is true either! </p> <p>Here is the rest of the solution, if necessary. </p> <p>"Let <span class="math-container">$M&gt;0\ $</span> be arbitrary. [...]. Because <span class="math-container">$\lim_{x\to c}g\left(x\right)=0$</span>, we can choose <span class="math-container">$\delta_2$</span> such that <span class="math-container">$\left|g\left(x\right)\right|&lt;\frac{\left|L\right|}{2M}\ $</span>provided <span class="math-container">$0&lt;\left|x-c\right|&lt;\delta_2$</span>. 
</p> <p>Let <span class="math-container">$\delta=\min\left\{\delta_1,\delta_2\right\}.\ $</span> Then we have </p> <p><span class="math-container">$\left|\frac{f\left(x\right)}{g\left(x\right)}\right|\ge\left|\frac{\frac{\left|L\right|}{2}}{\frac{\left|L\right|}{2M}}\right|=M$</span> provided <span class="math-container">$0&lt;\left|x-c\right|&lt;\delta$</span>, as desired." </p>
grand_chat
215,011
<p>How about the <em>even</em> function <span class="math-container">$f(x):=x^2+c$</span> for suitable choice of <span class="math-container">$c$</span>? For simplicity you can take <span class="math-container">$a=1$</span>. Then setting the definite integral to zero yields an equation that <span class="math-container">$c$</span> must satisfy.</p>
3,245,838
<p>We have <span class="math-container">$n$</span> voltages <span class="math-container">$V_1, V_2,\dots , V_n$</span> that are received at a summing point, so that <span class="math-container">$V=\sum V_i$</span> is the sum of the received voltages at that point. Every voltage <span class="math-container">$V_i$</span> is a random variable uniformly distributed on the interval <span class="math-container">$[0, 10].$</span></p> <ol> <li>Calculate the expected value and standard deviation of the voltages <span class="math-container">$V_i.$</span></li> <li>Calculate the probability that the total input voltage exceeds <span class="math-container">$105$</span> volts, for <span class="math-container">$n = 20, 50, 100.$</span></li> </ol> <p>I don't need much help with part <span class="math-container">$2$</span>: I just need to use the central limit theorem, but that requires the expected value and standard deviation from part <span class="math-container">$1.$</span> I thought of using the law of large numbers, but I am missing something, because I need the constants for the expected value and standard deviation. Please help.</p>
trancelocation
467,003
<p>You can avoid partial fraction decomposition by noting that</p> <p><span class="math-container">$$\oint_{\partial D_3(0)}\frac{\cos(z+4)}{z^2+1}dz = \oint_{\partial \color{blue}{D_1(i)}}\frac{\frac{\cos(z+4)}{z+i}}{z-i}dz + \oint_{\partial \color{blue}{D_1(-i)}}\frac{\frac{\cos(z+4)}{z-i}}{z+i}dz$$</span> <span class="math-container">$$= 2\pi i \left( \frac{\cos(i+4)}{i+i} + \frac{\cos(-i+4)}{-i-i}\right) = \pi(\cos(4+i)- \cos(4-i)) $$</span></p>
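<p>This value can be verified numerically (an addition to the original answer): parametrize the circle $|z|=3$ and apply the trapezoidal rule, which converges geometrically for smooth periodic integrands. Note also that $\cos(4+i)-\cos(4-i)=-2\sin(4)\sin(i)$, so the exact value is $-2\pi i\sin(4)\sinh(1)$.</p>

```python
import cmath
import math

def f(z):
    # The integrand cos(z + 4) / (z^2 + 1).
    return cmath.cos(z + 4) / (z * z + 1)

# Trapezoidal rule on z = 3 e^{it}, t in [0, 2*pi); a few hundred nodes
# already give close to machine precision here.
n = 400
total = 0j
for k in range(n):
    z = 3 * cmath.exp(2j * math.pi * k / n)
    dz = 2j * math.pi / n * z          # z'(t) dt = i z dt
    total += f(z) * dz

exact = cmath.pi * (cmath.cos(4 + 1j) - cmath.cos(4 - 1j))
assert abs(total - exact) < 1e-8
# cos(4+i) - cos(4-i) = -2 sin(4) sin(i) = -2 i sin(4) sinh(1).
assert abs(exact - (-2j * math.pi * math.sin(4) * math.sinh(1))) < 1e-12
```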
1,620,186
<p>I have to find an example of a surface of revolution excluding a sphere and a cone. </p> <p>Is $\sigma(x,y)=(\cos x, 5, x^2+y^2)$ such an example? </p> <p>$$$$ </p> <p>I also have to find an example of a surface the image of which is not the graph of a smooth function $z=f(x,y)$. </p> <p>Is $\sigma(x,y)=(3\sqrt{3}, 10\sqrt{y}, 0)$ such an example? </p>
Community
-1
<p>For the first one, you can proceed directly: Notice that if the expression is equal to some rational $r$, then</p> <p>$$x + b = r(x + a) \implies x(1 - r) = a - b$$</p> <p>Now the right side is rational, but the left side is irrational unless.....</p> <hr> <p>For the second, I'd suggest proceeding similarly. Write</p> <p>$$x^2 + x + \sqrt 2 = r(y^2 + y + \sqrt 2)$$ and rearrange to get</p> <p>$$x^2 + x - ry^2 - y = \sqrt2(r - 1)$$ From this, get $r$; then do some algebra to figure out when the first equality can hold.</p>
1,426,264
<p>Let $H$, $K$ be groups, and suppose that $H \cong K \times H$. Does it necessarily follow that $K$ is trivial?</p>
Glen Whitney
288,300
<p>@joriki's answer demonstrates that there is no polyhedron of genus 0 satisfying the stated conditions, which leads one to wonder whether it's possible to satisfy the conditions with a higher-genus polyhedron. And in fact, it is. One can construct a toroidal polyhedron with 9 quadrilateral faces (four meeting at each vertex), 9 vertices, and 18 edges. Basically, you obtain this if you take three equilateral triangular prisms, miter both ends of each at 30 degrees (so that when they are joined, they will make a 60 degree angle) and glue them all together in an overall triangular fashion. Here's a picture:</p> <p><a href="https://i.stack.imgur.com/taTrs.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/taTrs.png" alt="toroidal polyhedron all faces quadrilaterals, all vertices degree 4" /></a></p> <p>(This same picture also appears in the answer to question #1005105.)</p>
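<p>A small consistency check (my addition, not the original author's): the stated counts satisfy Euler's formula for a torus, since $V-E+F=0$ corresponds to genus $1$ via $\chi = 2-2g$, and the incidence counts of a quadrilateral-faced, degree-4-vertex polyhedron balance as well.</p>

```python
V, E, F = 9, 18, 9
chi = V - E + F            # Euler characteristic
genus = (2 - chi) // 2     # chi = 2 - 2g for a closed orientable surface
assert chi == 0 and genus == 1

# Each of the 9 quadrilateral faces has 4 edge-slots and every edge is
# shared by two faces, so 4*F == 2*E; the same count holds at the
# degree-4 vertices.
assert 4 * F == 2 * E and 4 * V == 2 * E
```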
63,796
<p>I'm looking for a neater way to achieve this, which to me looks awkward and suggests that I am missing something...</p> <p>Given a Dataset...</p> <pre><code>ds = Dataset[{ &lt;|"a"-&gt;1,"b"-&gt;1,"c"-&gt;3|&gt;, &lt;|"a"-&gt;1,"b"-&gt;2,"c"-&gt;4|&gt;, &lt;|"a"-&gt;2,"b"-&gt;3,"c"-&gt;5|&gt;}]; </code></pre> <p>Return another Dataset given by the grouping by one column and the maximal by another...</p> <pre><code>ds[GroupBy[#a &amp;], MaximalBy[#b &amp;]] // Values // Flatten </code></pre> <p>To return...</p> <pre><code>Dataset[{&lt;|"a" -&gt; 1, "b" -&gt; 2, "c" -&gt; 4|&gt;, &lt;|"a" -&gt; 2, "b" -&gt; 3, "c" -&gt; 5|&gt;} </code></pre> <p>The GroupBy and MaximalBy return a list of Associations of row number to a List of Associations (of which there is only one) which then needs (something like) //Values//Flatten to retrieve the Dataset I want. </p> <p>Is there a more Dataset-eque way of doing this?</p>
WReach
142
<p>Since the question states that we need not worry about the case where there are multiple maxima in a group, the following expression eliminates the need for <code>Flatten</code>:</p> <pre><code>ds[GroupBy[#a &amp;] /* Values, MaximalBy[#b &amp;] /* First] </code></pre> <p><img src="https://i.stack.imgur.com/vVWIy.png" alt="dataset screenshot"></p> <p>This arbitrarily chooses the <code>First</code> from among potential multiple maxima. The expression also does its work within the confines of the query expression without having specify additional post-processing functions. In principle, the <code>Query</code> optimizer can do a better job when it can see the complete operation.</p> <p>If desired, we can (notionally) reduce the size of intermediate data structures by supplying a reduction function as the third argument to <code>MaximalBy</code>:</p> <pre><code>ds[GroupBy[#, #a &amp;, MaximalBy[#b &amp;] /* First] &amp; /* Values] </code></pre> <p><img src="https://i.stack.imgur.com/vVWIy.png" alt="dataset screenshot"></p> <p>We can see the difference between the plans of the two queries:</p> <pre><code>Dataset`ShowPlan[GroupBy[#a&amp;] /* Values, MaximalBy[#b&amp;] /* First] (* GroupBy[#a&amp;] /* Values /* Map[MaximalBy[#b&amp;] /* First] *) Dataset`ShowPlan[GroupBy[#, #a &amp;, MaximalBy[#b&amp;] /* First]&amp; /* Values] (* GroupBy[#1, #a&amp;, MaximalBy[#b&amp;] /* First]&amp; /* Values *) </code></pre> <p>Note how the second plan does not contain the <code>Map</code> operator that is present in the first plan. Instead, <code>GroupBy</code> determines the single maximum for each group directly. Whether this second query results in a material performance difference is something that would need to be determined by benchmarking with real data.</p>
199,374
<p>Consider the two table set below</p> <pre><code>t1= {0.44, 0.62, 0.77, 0.87, 0.93, 0.96, 0.98, 1} t2= {0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25, 0.25} </code></pre> <p>how can I create a plot which wil plot a rectangle with a heat map normalized to the values of the table?</p> <p>The result should look like</p> <p><a href="https://i.stack.imgur.com/GgXBG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/GgXBG.png" alt="h"></a></p>
kglr
125
<pre><code>Column[Quiet @ DensityPlot[Interpolation[#][x], {x, 0, 1}, {y, 0, 1}, AspectRatio -&gt; 1/5, Frame -&gt; False, ColorFunction -&gt; "Rainbow", ColorFunctionScaling -&gt; False, ImageSize -&gt; Medium] &amp; /@ {t1, t2}] </code></pre> <p><a href="https://i.stack.imgur.com/n3QBW.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/n3QBW.png" alt="enter image description here"></a></p>
51,732
<p>A <em>Perron number</em> is a real algebraic integer $\lambda$ that is larger than the absolute value of any of its Galois conjugates. The Perron-Frobenius theorem says that any non-negative integer matrix $M$ such that some power of $M$ is strictly positive has a unique positive eigenvector whose eigenvalue is a Perron number. Doug Lind proved the converse: given a Perron number $\lambda$, there exists such a matrix, perhaps in dimension much higher than the degree of $\lambda$. Perron numbers come up frequently in many places, especially in dynamical systems.</p> <p>My question:</p> <blockquote> <p>What is the limiting distribution of Galois conjugates of Perron numbers $\lambda$ in some bounded interval, as the degree goes to infinity?</p> </blockquote> <p>I'm particularly interested in looking at the limit as the length of the interval goes to 0. One way to normalize this is to look at the ratio $\lambda^g/\lambda$, as $\lambda^g$ ranges over the Galois conjugates. Let's call these numbers <em>Perron ratios</em>.</p> <p>Note that for any fixed $C &gt; 1$ and integer $d &gt; 0$, there are only finitely many Perron numbers $\lambda &lt; C$ of degree $&lt; d$, since there is obviously a bound on the discriminant of the minimal polynomial for $\lambda$, so the question is only interesting when a bound goes to infinity. </p> <p>In any particular field, the set of algebraic numbers that are Perron lie in a convex cone in the product of Archimedean places of the field. 
For any lattice, among lattice points with $x_1 &lt; C$ that are within this cone, the projection along lines through the origin to the plane $x_1 = 1$ tends toward the uniform distribution, so as $C \rightarrow \infty$, the distribution of Perron ratios converges to a uniform distribution in the unit disk (with a contribution for each complex place of the field) plus a uniform distribution in the interval $[-1,1]$ (with a contribution for each real place of the field).</p> <p>But what happens when $C$ is held bounded and the degree goes to infinity? This question seems related to the theory of random matrices, but I don't see any direct translation from things I've heard. Choosing a random Perron number seems very different from choosing a random nonnegative integer matrix.</p> <p>I tried some crude experiments, by looking at randomly-chosen polynomials of a fixed degree whose coefficients are integers in some fixed range except for the coefficient of $x^d$ which is $1$, selecting from those the irreducible polynomials whose largest real root is Perron. This is not the same as selecting a random Perron number of the given degree in an interval. I don't know any reasonable way to do the latter except for small enough $d$ and $C$ that one could presumably find them by exhaustive search. Anyway, here are some samples from what I actually tried. First, from among the 16,807 fifth degree polynomials with coefficients in the range -3 to 3, there are $3,361$ that define a Perron number. Here is the plot of the Perron ratios:</p> <p><a href="http://dl.dropbox.com/u/5390048/PerronPoints5%2C3.jpg" rel="noreferrer">alt text http://dl.dropbox.com/u/5390048/PerronPoints5%2C3.jpg</a></p> <p>Here are the results of a sample of 20,000 degree 21 polynomials with coefficients between -5 and 5. 
Of this sample, 5,932 defined Perron numbers:</p> <p><a href="http://dl.dropbox.com/u/5390048/PerronPoints21.jpg" rel="noreferrer">alt text http://dl.dropbox.com/u/5390048/PerronPoints21.jpg</a></p> <p>The distribution decidedly does not appear that it will converge toward a uniform distribution on the disk plus a uniform distribution on the interval. Maybe the artificial bounds on the coefficients cause the higher density ring.</p> <blockquote> <p>Are there good, natural distributions for selecting random integer polynomials? Is there a way to do it without unduly prejudicing the distribution of roots?</p> </blockquote> <p>To see if it would help separate what's happening, I tried plotting the Perron ratios restricted to $\lambda$ in subintervals. For the degree 21 sample, here is the plot of $\lambda$ by rank order:</p> <p><a href="http://dl.dropbox.com/u/5390048/CDF21.jpg" rel="noreferrer">alt text http://dl.dropbox.com/u/5390048/CDF21.jpg</a></p> <p>(If you rescale the $x$ axis to range from $0$ to $1$ and interchange $x$ and $y$ axes, this becomes the plot of the sample cumulative distribution function of $\lambda$.) Here are the plots of the Perron ratios restricted to the intervals $1.5 &lt; \lambda &lt; 2$ and $3 &lt; \lambda &lt; 4$:</p> <p><a href="http://dl.dropbox.com/u/5390048/PerronPoints21%281.5%2C2%29.jpg" rel="noreferrer">alt text http://dl.dropbox.com/u/5390048/PerronPoints21%281.5%2C2%29.jpg</a></p> <p><a href="http://dl.dropbox.com/u/5390048/PerronPoints21%283%2C4%29.jpg" rel="noreferrer">alt text http://dl.dropbox.com/u/5390048/PerronPoints21%283%2C4%29.jpg</a></p> <p>The restriction to an interval seems to concentrate the absolute values of Perron ratios even more. The angular distribution looks like it converges to the uniform distribution on a circle plus point masses at $0$ and $\pi$. </p> <p>Is there an explanation for the distribution of radii? Any guesses for what it is?</p>
Zili Huang
60,542
<p>There are many contexts in which one can establish that a family of polynomials, or even a single polynomial, has roots which are distributed according to some measure which approximates the uniform measure on the unit circle.</p> <p>Let $f(x) = a_N x^N + \ldots + a_1 x + a_0$ be a polynomial with real coefficients, and assume that $a_N \ne 0$. Consider the quantity:</p> <p>$$L_N(f) = \log \left( \sum |a_k| \right) - \frac{1}{2} \log |a_N| - \frac{1}{2} \log |a_0|.$$</p> <p>A theorem of Hughes and Nikeghbali (The zeros of random polynomials cluster uniformly near the unit circle, Compositio 144 (1998)) says (informally) that if $L_N(f)$ is small compared to $N$, then the roots are distributed uniformly along the unit circle. An ingredient of their result is a theorem of Erdos-Turan (On the distribution of roots of polynomials, Ann. of Math. (2) 51 (1950)) which (under the same hypothesis) shows that the arguments of the roots are distributed uniformly. Using this, we can answer:</p> <blockquote> <p>Is there an explanation for the distribution of radii? Any guesses for what it is?</p> </blockquote> <p>If one takes the coefficients to lie in some bounded range, say $[-5,5]$, then $L_N(f)$ has order $O(\log(N))$, which is certainly $o(N)$. Hence the random polynomials generated in this way have roots which are distributing in the unit circle. On the other hand, the polynomials being chosen are those which have a unique largest root of size at most five and usually around $5$, and hence the renormalized roots will appear to cluster around the circle of radius $1/5$. The clustering becomes more pronounced when one restricts to polynomials where the largest root lies in $[5,5-\lambda]$ for smaller $\lambda$ (Hence Thurston's graph with more restricted intervals are more "ring-like." (There is no point mass along the real axis by the theorem of Erdos-Turan.)</p> <blockquote> <p>Are there good, natural distributions for selecting random integer polynomials? 
Is there a way to do it without unduly prejudicing the distribution of roots?</p> </blockquote> <p>Given a large dimensional probability space, a natural way to generate random elements according to the distribution is to use a random walk Metropolis-Hastings algorithm. Concretely, one can start with a random Perron polynomial with largest root $&lt; 5$, and then perturb the coefficients according to some distribution (say a normal distribution with small variance). If the new polynomial is also Perron with largest root $&lt; 5$, choose this polynomial, or else keep the same polynomial. This Markov process should --- under suitable conditions --- generate random polynomials (in the aggregate) according to the required distribution. For example, let's run this algorithm and compare it to the roots of random monic polynomials with coefficients in $[-5,5]$ which have a unique largest root. The roots of polynomials with coefficients in $[-5,5]$ are illustrated in the first graph, and the result of the Metropolis-Hastings algorithm for all Perron polynomials with largest root less than $5$ is given in the second graph:</p> <p><img src="https://i.stack.imgur.com/X6gHJ.png" alt="Two graphs"></p> <p>Notice that the roots of the first graph cluster around the unit circle, as explained above. However, the roots of the second graph are clustering around the boundary. Indeed, this turns out to reflect reality (see the theorem below).</p> <blockquote> <p>What is the limiting distribution of Galois conjugates of Perron numbers $\lambda$ in some bounded interval, as the degree goes to infinity?</p> </blockquote> <p>Let's first consider the easier problem of describing "real Perron polynomials" --- that is, monic polynomials with real coefficients which have a unique largest root $\le r$ (necessarily real). 
The actual Perron numbers form integral lattice points in this space (although not all lattice points, just the ones corresponding to irreducible polynomials; however, the reducible lattice points form a thin subset). If one fixes the degree $N$ and increases the radius $r$, then (because the regions involved are suitable nice) the lattice points are distributed more or less uniformly inside the space (Davenport's Lemma). However, if one fixes $r$ and lets $N \rightarrow \infty$, it is no longer so clear whether the distribution of lattice points can be approximated by the corresponding real region. Studying lattice points in non-convex regions (or even convex ones) often leads to pretty thorny number theoretic issues, but one can at least hope (and compare with experiment) that the real geometry gives some indication of the truth of the matter. To this end, one has the following:</p> <p><b>Theorem:</b> Fix a real positive integer $r &gt; 0$. Let $\Omega_N$ denote the space of monic polynomials with real coefficients which have a real root $\alpha$ such that $r &gt; \alpha &gt; |\sigma \alpha|$ for all conjugates $\sigma \alpha \ne \alpha$. Then, as $N \rightarrow \infty$, the roots of a random polynomial in $\Omega_N$ are distributed uniformly along the circle $|z| = r$.</p> <p>The proof of this theorem uses the theorem of Hughes and Nikeghbali mentioned above. The point is that one has to obtain estimates of how the quantities $|a_i|$ vary over the space $\Omega_N$, which reduces, in the end, to evaluating and/or estimating various integrals over $\Omega_N$. For example, consider the example above of degree $21$ polynomials with a largest root $&lt; 5$. The model given by the OP choose $a_{21} \in [-5,5]$. 
It turns out, however, that the expected value of the constant term $a_{21}$ over $\Omega_{21}$ (with $r = 5$) is</p> <p>$$ \frac{3^2 \cdot 5^{21} \cdot 7 \cdot 13 \cdot 17 \cdot 19}{2^{17} \cdot 11} \sim 8.748 \times 10^{13}.$$</p> <p>This is pretty big compared to $[-5,5]$! </p> <p><b>Other statistics</b></p> <p>There are a few other interesting probabilities one can compute using integrals over spaces like $\Omega_N$ and related spaces. For example, one can consider all monic polynomials of degree $2N$ with the property that their roots all have absolute value at most one. This defines a compact region of (the coefficient space) $\mathbf{R}^N$, and so it has a natural uniform measure. Then the probability that a random such polynomial has no real roots in the interval $[-1,1]$ (equivalently, is positive on $[-1,1]$, equivalently, has no roots on all of $\mathbf{R}$) is equal to</p> <p>$$\sim \frac{2C}{\sqrt{2\pi} \cdot (2N)^{3/8}},$$</p> <p>where the constant $C$ is equal to</p> <p>$$C = 2^{-1/24} e^{- 3/2 \cdot \zeta'(-1)} = 1.24514 \ldots.$$</p> <p>Curiously enough, there is a theorem of Dembo, Poonen, Shao, and Zeitouni that a random polynomial (in the more usual sense) whose coefficients are chosen with (say) identical normal distributions with zero mean is positive in $[-\infty,\infty]$ with probability $N^{-b + o(1)}$ and positive in $[-1,1]$ with probability $N^{-b/2 + o(1)}$ for some universal constant~$b/2$, which they estimate be $0.38 \pm 0.015$. On the other hand, the exponent occurring above is $3/8 = 0.375$. Is there any direct relationship between these theorems? For example, does this suggest that $b = 3/4$? 
(The result of DPSZ actually holds for a very wide range of distributions with the same undetermined $b$, but it does not apply to our very rigid context.)</p> <p><b>Further Remarks</b></p> <p>There's some further analysis one can make of the space $\Omega_{N}$, or more generally the space of all monic polynomials with real coefficients whose roots have absolute value at most $r$. For example, one can show that a "random" polynomial of degree $N$, subject to the constraint that all its roots have absolute value at most $r$, will be a Perron polynomial (i.e. have a unique largest root) with probability exactly $1/N$ if $N$ is odd and $1/(N+1)$ if $N$ is even. Some of this analysis will appear in my thesis (I will add a link here when it is written!). As my advisor said, "Any question that Thurston asks is probably worth thinking about."</p>
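<p>The random-walk Metropolis–Hastings idea described above can be sketched in a few lines. To keep the sketch dependency-free it is restricted to degree $2$, where the Perron condition (a unique largest root, real and strictly dominant in absolute value) is checkable in closed form from the quadratic formula; the step size, the degree, and the bound $r=5$ are illustrative assumptions, not the setup used for the graphs in the answer. With a symmetric Gaussian proposal, accepting exactly those moves that stay inside the region samples the uniform measure on it.</p>

```python
import math
import random

def perron_root(b, c, r=5.0):
    """Dominant root of x^2 + b x + c, if the polynomial is 'Perron':
    a unique largest root, real, strictly exceeding the other root in
    absolute value, and below the bound r.  Returns None otherwise."""
    disc = b * b - 4 * c
    if disc <= 0:              # complex pair or double root: no unique largest root
        return None
    s = math.sqrt(disc)
    big, small = (-b + s) / 2, (-b - s) / 2
    return big if abs(small) < big < r else None

random.seed(0)
b, c = -1.0, -1.0              # x^2 - x - 1: dominant root is the golden ratio
assert perron_root(b, c) is not None

# Random-walk Metropolis for the uniform measure on the Perron region:
# propose a symmetric Gaussian perturbation of the coefficients, accept
# the move if the perturbed polynomial is still Perron, otherwise stay put.
samples = []
for _ in range(5000):
    nb, nc = b + random.gauss(0, 0.3), c + random.gauss(0, 0.3)
    if perron_root(nb, nc) is not None:
        b, c = nb, nc
    samples.append(perron_root(b, c))

assert all(0 < s < 5 for s in samples)
```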
4,102,443
<p>Let <span class="math-container">$a,b \in [0,\infty)$</span> with <span class="math-container">$a \leq b$</span>.</p> <p><span class="math-container">$D(x,y)=\mid \frac{x}{2+x}-\frac{y}{2+y} \mid$</span>.</p> <p>There exist constants <span class="math-container">$c_{1}, c_{2} \in [0,\infty)$</span> such that <span class="math-container">$$c_{1}\mid x - y \mid \leq D(x,y) \leq c_{2}\mid x - y \mid \forall x,y\in [a,b]$$</span></p> <p>Show that <span class="math-container">$([a,b],D)$</span> is a complete metric space.</p> <p>I am very confused about how to prove that a metric space is complete. There are multiple theorems involving Cauchy sequences and closed subsets. I have seen solutions involving the standard metric but I am not sure how to use it in this proof. Any help would be appreciated.</p>
Eric Towers
123,905
<p>A ring action on an abelian group is a module.</p> <p>A linear group action on a vector space, <span class="math-container">$V$</span>, automatically produces a ring action (by the <a href="https://en.wikipedia.org/wiki/Group_ring" rel="nofollow noreferrer">group ring</a> <span class="math-container">$\Bbb{Z}G$</span> if <span class="math-container">$G$</span> is the group) on the free abelian group generated by <span class="math-container">$V$</span>. (Although this sounds very different from the first line, it is less different than it sounds.)</p> <p>(Linear group actions appear in many places, under the label &quot;<a href="https://en.wikipedia.org/wiki/Group_representation" rel="nofollow noreferrer">representations</a>&quot;.)</p> <p>This can be generalized to <span class="math-container">$\mathrm{Ab}$</span>-enriched categories with one object. In fact, any category, <span class="math-container">$C$</span>, has a free <span class="math-container">$\mathrm{Ab}$</span>-enriched category <span class="math-container">$\Bbb{Z}[C]$</span> with a universal functor from <span class="math-container">$C$</span> to an <span class="math-container">$\mathrm{Ab}$</span>-enriched category. (If this sounds similar to the group ring action above, it is.)</p>
Aragogh
665,469
<p>I really like paul blart math cop's answer, and I'll give a slight modification with a more categorical flavor; note firstly that monoids and rings can be thought of as purely categorical gadgets, in the following sense; given a monoidal category <span class="math-container">$(\mathsf{C},\otimes, 1)$</span> (such as <span class="math-container">$\mathsf{Sets}$</span> with the cartesian product, or <span class="math-container">$\mathsf{Ab}$</span> with the tensor product) you can define a unital algebra to be an object <span class="math-container">$A \in \mathsf{C}$</span> with a multiplication map <span class="math-container">$m: A\otimes A \rightarrow A$</span> and a unit map <span class="math-container">$\eta:1\to A$</span> (where 1 is the unit of the monoidal product) such that the following associativity diagram commutes:</p> <p><span class="math-container">$$ \require{AMScd} \begin{CD} A \otimes A \otimes A @&gt;m \otimes id&gt;&gt; A \otimes A \\ @Vid\otimes mVV @VVmV\\ A\otimes A @&gt;m&gt;&gt; A \end{CD} $$</span></p> <p>And the following morphisms both equal the identity <span class="math-container">$$ \require{AMScd} \begin{CD} A \cong A\otimes1 @&gt;id \otimes \eta&gt;&gt; A \otimes A @&gt;m&gt;&gt; A \\ \\ A \cong 1 \otimes A @&gt;id \otimes \eta&gt;&gt; A \otimes A @&gt;m&gt;&gt; A \\ \end{CD} $$</span></p> <p>You can readily check that in the case of <span class="math-container">$(\mathsf{Sets}, \times, \{*\})$</span> or <span class="math-container">$(\mathsf{Ab}, \otimes_{\mathbb{Z}}, \mathbb{Z})$</span> that this categorifies the axioms for a monoid and a ring respectively. You can also define a module <span class="math-container">$M$</span> over an algebra via an action map <span class="math-container">$\alpha: A \otimes M \rightarrow M$</span> satisfying analogous associativity and unitality diagrams, which you can also readily check give you the usual notions of monoid action on a set and ring action on an abelian group (i.e. 
an <span class="math-container">$R$</span>-module). I will leave to you the categorification of the notion of a group.</p> <p>Now it remains to show that we have defined maps from the algebras into some sort of endomorphism object; this doesn't work in general, but it works in <span class="math-container">$\mathsf{Sets}$</span> and <span class="math-container">$\mathsf{Ab}$</span> because these categories have internal Homs, i.e. are enriched over themselves with the internal Hom being right adjoint to the tensor product (Currying and the classical tensor-hom adjunction).</p> <p>Working in <span class="math-container">$\mathsf{Sets}$</span>, let <span class="math-container">$A$</span> be a monoid and <span class="math-container">$\alpha: A \times M \to M$</span> a set with <span class="math-container">$A$</span> action. Notice that by currying (as described by paul blart math cop), this multiplication map corresponds to a map <span class="math-container">$\widetilde\alpha: A \to \operatorname{Hom}(M,M)$</span>. Note here that <span class="math-container">$\operatorname{Hom}(M,M)$</span> is itself a monoid via the composition map <span class="math-container">$$\circ: \operatorname{Hom}(M,M)\times\operatorname{Hom}(M,M) \to \operatorname{Hom}(M,M)$$</span> and using the associativity of the <span class="math-container">$A$</span> action on <span class="math-container">$M$</span> and iterated currying, one can check that <span class="math-container">$\widetilde \alpha$</span> is in fact a homomorphism of monoids (and rings in <span class="math-container">$\mathsf{Ab}$</span>) using essentially the same argument as the one paul blart math cop describes (though it is a good exercise to categorify and do this without referring to elements at all! Essentially this comes down to checking how the tensor-hom adjunction plays with composition). This phenomenon is really quite general and works in other categories with monoidal structures and internal homs, which are called &quot;closed&quot;.</p>
1,626,721
<blockquote> <p>In the <span class="math-container">$n$</span>-cell below, we are to tile it completely with cells of the form <span class="math-container">$\boxdot$</span> and <span class="math-container">$\boxtimes \hspace{-0.45 mm}\boxtimes$</span>. How many tilings are possible for a <span class="math-container">$12$</span>-cell?</p> <p><a href="https://i.stack.imgur.com/Xn9lh.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Xn9lh.png" alt="enter image description here" /></a></p> </blockquote> <p>Let <span class="math-container">$H_n$</span> denote the number of such tilings for an <span class="math-container">$n$</span>-cell. It is easy to see that <span class="math-container">$H_0 = 1, H_1 = 1, H_2 = 2, H_3 = 3,$</span> and <span class="math-container">$H_4 = 5$</span>. These seem to correspond to the Fibonacci recursion <span class="math-container">$S_{n+1} = S_{n}+S_{n-1}$</span>, but how do I prove that this recursion is indeed <span class="math-container">$H_{n+1} = H_{n}+H_{n-1}$</span>? This is equivalent to saying, &quot;the number of tilings for an <span class="math-container">$n+1$</span>-cell is equal to the number of tilings for an <span class="math-container">$n$</span>-cell plus the number of tilings for an <span class="math-container">$n-1$</span>-cell.&quot; Why is this true?</p>
Zubin Mukerjee
111,946
<p>It is possible to find a direct formula for $H_n$ in terms of $n$, then show that this is equivalent to the Fibonacci numbers, without using the recursion. </p> <hr> <p>If $n$ is even, the number of tiles of the second type ($\,\boxtimes \hspace{-0.45 mm}\boxtimes\,$) can be between $0$ and $n/2$, inclusive. If there are $k$ tiles of the second type, then there are $n-2k$ tiles of the first type, and $n-k$ total tiles in the tiling. </p> <p>This means that the number of tilings of an $n$-cell with $n$ even and $k$ tiles of the second type is $$\binom{n-k}{k}$$</p> <p>If we sum this over all the possible values of $k$, we find that the number of tilings of an $n$-cell with $n$ even is</p> <p>$$H_n = \displaystyle\sum\limits_{k=0}^{n/2} \binom{n-k}{k}$$</p> <p>For example, if we take $n=12$, this gives </p> <p>\begin{align} H_{12}&amp;=1 + \binom{11}{1} + \binom{10}{2} + \binom{9}{3} + \binom{8}{4} + \binom{7}{5} + 1\\\\ H_{12}&amp;=1 + 11 + 45 + 84 + 70 + 21 + 1 \\\\ H_{12}&amp;=233 \end{align}</p> <hr> <p>If $n$ is odd, the same reasoning applies, but $k$ runs between $0$ and $(n-1)/2$. Therefore, we may write down a full description of $H_n$:</p> <p>$$H_n = \begin{cases} \displaystyle\sum_{k=0}^{n/2} \binom{n-k}{k} \qquad \qquad\text{if} \,\, n \equiv 0 \pmod{2}\\\\ \displaystyle\sum_{k=0}^{(n-1)/2} \binom{n-k}{k} \qquad \text{if} \,\, n \equiv 1 \pmod{2} \end{cases}$$</p> <hr> <p>Finally, notice that these binomial coefficients form the "shallow diagonals" of Pascal's triangle. It is known that the sums of these shallow diagonals are equal to the Fibonacci numbers:</p> <p><a href="https://i.stack.imgur.com/RoqJt.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/RoqJt.jpg" alt="shallow diagonals in Pascal&#39;s triangle"></a></p> <p>Our calculation of $H_{12}$ above is the diagonal summing to $233$. 
</p> <hr> <p>While this does not answer your specific question about the recursion, I think it is an interesting alternative way to make the connection between your problem and the Fibonacci numbers. </p>
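The binomial-sum formula is easy to verify numerically. A short Python sketch (function names are mine, not part of the original answer) compares it against the recursive count $H_{n+1}=H_n+H_{n-1}$ from the question:

```python
from math import comb

def tilings_formula(n):
    # sum over k tiles of the 2-cell type; comb(n - k, k) handles both parities,
    # since the top index n - k stays >= k for every k in the range
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

def tilings_recursive(n):
    # the recursion from the question: a tiling ends in a 1-cell or a 2-cell tile
    if n <= 1:
        return 1
    return tilings_recursive(n - 1) + tilings_recursive(n - 2)

assert tilings_formula(12) == 233
assert all(tilings_formula(n) == tilings_recursive(n) for n in range(20))
```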
4,562,006
<p>Prove that <span class="math-container">$\text{Ker}(A)=\text{Ker}(A^2)$</span> if <span class="math-container">$A$</span> is a normal operator.</p> <p>I know that that <span class="math-container">$\text{Ker}(A)\subseteq \text{Ker}(A^2)$</span>, which I proved as follows: <span class="math-container">$x\in \text{Ker}(A) \implies Ax=0\implies(AA)x=A(Ax)=0\implies x\in \text{Ker}(A^2)$</span></p>
Anne Bauval
386,889
<p>Any normal operator <span class="math-container">$A$</span> has the same kernel as its adjoint (since <span class="math-container">$\|Ax\|^2=(A^*Ax,x)=(AA^*x,x)=\|A^*x\|^2$</span>), hence</p> <p><span class="math-container">$$A^2x=0\Rightarrow Ax\in\ker A=\ker A^*\Rightarrow0=(0,x)=(A^*Ax,x)=\|Ax\|^2.$$</span></p>
2,932,139
<blockquote> <p>How would I show that the line <span class="math-container">$A=[(x,y,z)=(0,t,t)\mid t\in\mathbb{R}]$</span> is parallel to the plane <span class="math-container">$5x-3y+3z=1$</span>?</p> </blockquote> <p>I know the normal vector would be <span class="math-container">$(5,-3,3)$</span>, but how would I get the direction vector of <span class="math-container">$A$</span>?</p>
user
505,767
<p><strong>HINT</strong></p> <p>Recall that a line is parallel to a plane if the plane's normal vector is orthogonal to the line's direction vector.</p>
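To make the hint concrete (my sketch, not part of the original hint): the direction vector of $A$ can be read off from the parametrization $(0,t,t)$ as $(0,1,1)$, and the orthogonality is a one-line dot-product check:

```python
normal = (5, -3, 3)      # normal of the plane 5x - 3y + 3z = 1
direction = (0, 1, 1)    # direction vector of the line (x, y, z) = (0, t, t)

dot = sum(a * b for a, b in zip(normal, direction))
assert dot == 0          # orthogonal to the normal, so the line is parallel to the plane

# the line's point (0, 0, 0) does not satisfy the plane equation,
# so the line is genuinely parallel rather than contained in the plane
assert 5 * 0 - 3 * 0 + 3 * 0 != 1
```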
4,390,127
<p>I am trying to prove that if <span class="math-container">$A$</span> is an <span class="math-container">$n\times n$</span> matrix, then <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R},\ f(\vec{x})=A\vec{x}\cdot\vec{x}=\vec{x}^T A\vec{x}$</span> is differentiable and <span class="math-container">$Df(\vec{a})\vec{h}=A\vec{a}\cdot\vec{h}+A\vec{h}\cdot\vec{a}.$</span></p> <p>(NOTE: computation edited according to Snaw's comment below)</p> <p>Now, since a function <span class="math-container">$f:\mathbb{R}^n\to\mathbb{R}^m$</span> is differentiable if <span class="math-container">$$ \lim\limits_{\vec{h}\to\vec{0}}\frac{\vec{f}(\vec{x}+\vec{h})-\vec{f}(\vec{x})-Df(\vec{a})\vec{h}}{||\vec{h}||}=\vec{0} $$</span> we have that <span class="math-container">$f$</span> is differentiable at <span class="math-container">$\vec{a}\in\mathbb{R}^n$</span> if <span class="math-container">$$\lim\limits_{\vec{h}\to\vec{0}}\frac{(\vec{a}+\vec{h})^TA(\vec{a}+\vec{h})-\vec{a}^TA\vec{a}-A\vec{a}\cdot\vec{h}-A\vec{h}\cdot\vec{a}}{||\vec{h}||}\\ =\lim\limits_{\vec{h}\to\vec{0}}\frac{\vec{a}^TA\vec{h}+\vec{h}^TA\vec{a}+\vec{h}^TA\vec{h}-\vec{h}^TA\vec{a}-\vec{a}^TA\vec{h}}{||\vec{h}||}=\lim\limits_{\vec{h}\to\vec{0}}\frac{\vec{h}^TA\vec{h}}{||\vec{h}||} $$</span></p> <p>but at this point I am not sure if I can claim that, as <span class="math-container">$\vec{h}\to\vec{0}$</span>, the limit is <span class="math-container">$\vec{0}$</span>, since it seems to me that I would encounter an indeterminate form of the type <span class="math-container">$\vec{0}/0$</span>.</p> <p>For example, if <span class="math-container">$A\in\mathbb{M}_{2\times 2}$</span> we would have <span class="math-container">$$\lim\limits_{(h_1,h_2)\to (0,0)}\frac{h_1^2 A_{11}+(A_{12}+A_{21})h_1h_2+h_2^2 A_{22}}{\sqrt{h_1^2+h_2^2}}.$$</span></p> <p>How could I justify this claim?</p> <p>Thanks</p>
Kavi Rama Murthy
142,385
<p>For any neighborhood <span class="math-container">$V$</span> of <span class="math-container">$a$</span> there is a convex open set <span class="math-container">$U$</span> containing <span class="math-container">$a$</span> and contained in <span class="math-container">$V$</span>. The net <span class="math-container">$(x_d)$</span> is eventually in <span class="math-container">$U$</span> and this implies that <span class="math-container">$X_d \subset U$</span> for some <span class="math-container">$d$</span>. It follows that <span class="math-container">$y_d' \in U \subset V$</span> for all <span class="math-container">$d' \geq d$</span> so <span class="math-container">$(y_d) \to a$</span>.</p>
3,465,171
<blockquote> <p>Let <span class="math-container">$G=[0,\infty) \times [0,\infty)$</span>, <span class="math-container">$\alpha \in (0,1)$</span> and <span class="math-container">$$\phi (x,y)=x^{\alpha} y^{1-\alpha}$$</span> Then <span class="math-container">$\phi$</span> is concave; that is, <span class="math-container">$-\phi$</span> is convex.</p> </blockquote> <hr> <p>This is left as an exercise in the book that I'm currently reading, and I think I have found a proof, which is shown below:</p> <p>Fix <span class="math-container">$x,y \in [0,\infty)$</span> and <span class="math-container">$n \in \mathbb N$</span>, where <span class="math-container">$\mathbb N$</span> always denotes the set of all positive integers; then we define <span class="math-container">$\alpha :=k/n$</span> for an arbitrary positive integer <span class="math-container">$k$</span> no greater than <span class="math-container">$n$</span>, so that <span class="math-container">$k/n \le 1$</span>.</p> <p>Let <span class="math-container">$\lambda \in (0,1)$</span>, and we define</p> <p><span class="math-container">$$z_k:=(\lambda x + (1-\lambda)y)^{k/n}$$</span></p> <p>and</p> <p><span class="math-container">$$w_k:=(\lambda x)^{k/n}+((1-\lambda)y)^{k/n}$$</span></p> <p>then it follows that</p> <p><span class="math-container">$$(z_k^n)^{1/k}=\lambda x+(1-\lambda)y$$</span></p> <p>and that</p> <p><span class="math-container">$(w_k^n)^{1/k} =[(\lambda x)^{k/n}+((1-\lambda)y)^{k/n}]^{n/k}$</span></p> <p><span class="math-container">$={\lambda x}\,[1+((1-\lambda)y/{\lambda x})^{k/n}]^{n/k}$</span></p> <p><span class="math-container">$\ge {\lambda x}\,[1+((1-\lambda)y/{\lambda x})^{(k/n) \cdot (n/k)}]$</span></p> <p><span class="math-container">$={\lambda x}\,[1+((1-\lambda)y/{\lambda x})]$</span></p> <p><span class="math-container">$=\lambda x+(1-\lambda)y$</span></p> <p><span class="math-container">$=(z_k^n)^{1/k},$</span></p> <p>since it can be shown that <span class="math-container">$(1+x)^{\alpha} \ge 1+x^{\alpha}$</span> whenever <span class="math-container">$x \in [0,\infty)$</span> and <span class="math-container">$\alpha \in [1,\infty)$</span>.</p> <p>Therefore, <span class="math-container">$w_k \ge z_k$</span> holds, and hence the continuity of <span class="math-container">$\phi (x,y)$</span> with respect to <span class="math-container">$\alpha$</span> shows that</p> <p><span class="math-container">$(\lambda x+(1-\lambda)y)^{\alpha} \le (\lambda x)^{\alpha}+((1-\lambda)y)^{\alpha}$</span> for all <span class="math-container">$\alpha \in (0,1)$</span>.</p> <p>Finally, let <span class="math-container">$\lambda \in (0,1)$</span>, and let <span class="math-container">$u:=(x_1,y_1),v:=(x_2,y_2) \in G$</span>; then we obtain</p> <p><span class="math-container">$\phi (\lambda u+(1-\lambda)v)$</span></p> <p><span class="math-container">$=(\lambda x_1+(1-\lambda)x_2)^{\alpha}(\lambda y_1+(1-\lambda)y_2)^{1-\alpha}$</span></p> <p><span class="math-container">$\ge ((\lambda x_1)^{\alpha}+((1-\lambda)x_2) ^{\alpha})((\lambda y_1)^{1-\alpha}+((1-\lambda)y_2)^{1-\alpha})$</span></p> <p><span class="math-container">$\ge (\lambda x_1)^{\alpha} (\lambda y_1)^{1-\alpha}+((1-\lambda)x_2)^{\alpha}((1-\lambda)y_2)^{1-\alpha}$</span></p> <p><span class="math-container">$=\phi (\lambda u)+\phi ((1-\lambda)v),$</span></p> <p>proving that <span class="math-container">$\phi(x,y)$</span> is concave.</p> <hr> <p>Is there anything wrong with my proof? I agree that my proof seems a little bit clumsy, so can anyone show me a succinct proof? Thank you!</p>
Kavi Rama Murthy
142,385
<p>If <span class="math-container">$f$</span> has a pole of order <span class="math-container">$2$</span> at <span class="math-container">$z_0$</span> then the residue at <span class="math-container">$z_0$</span> is <span class="math-container">$\lim_{z \to z_0} \frac d {dz} (z-z_0)^{2}f(z)$</span>. In this case we get <span class="math-container">$\lim_{z \to i} \frac d {dz} \frac 1 {(z+i)^{2}}=-\frac i 4$</span>. </p>
4,014,768
<blockquote> <p>Find the stationary points of the curve and their nature for the equation <span class="math-container">$y=e^x\cos x$</span> for <span class="math-container">$0\le x\le\pi/2$</span>.</p> </blockquote> <p>I differentiated and got <span class="math-container">$e^x(-\sin x+\cos x)=0$</span>.</p> <p><span class="math-container">$e^x=0$</span> has no solution, but I don't know how to find the <span class="math-container">$x$</span> such that <span class="math-container">$-\sin x+\cos x=0$</span></p>
Gabrielek
699,889
<p>Since you are asking for <span class="math-container">$0\le x \le \pi/2$</span> it's quite trivial that <span class="math-container">$x = \pi/4$</span> is the only solution of <span class="math-container">$\sin x = \cos x$</span></p> <p>Edit:</p> <p>Reasoning in <span class="math-container">$0\le x \le \pi/2$</span>: notice that <span class="math-container">$\cos x = 0$</span> only if <span class="math-container">$x =\pi/2$</span>, and for <span class="math-container">$x = \pi/2$</span> we have <span class="math-container">$\sin \pi/2 = 1 \ne 0 = \cos \pi /2$</span>.</p> <p>So we can say for sure that <span class="math-container">$x = \pi/2$</span> is not a solution of the equation <span class="math-container">$\sin x = \cos x$</span>.</p> <p>Divide both sides of <span class="math-container">$\sin x = \cos x$</span> by <span class="math-container">$\cos x$</span> and you get <span class="math-container">$\tan x = 1$</span>, which is satisfied in <span class="math-container">$0\le x \le \pi/2$</span> only for <span class="math-container">$x = \pi/4$</span></p>
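A quick numerical check (my sketch, not part of the original answer) that $x=\pi/4$ is indeed a stationary point of $y=e^x\cos x$, and that it is a maximum:

```python
from math import exp, cos, sin, pi

def fprime(x):
    # derivative of y = e^x cos x is e^x (cos x - sin x)
    return exp(x) * (cos(x) - sin(x))

assert abs(fprime(pi / 4)) < 1e-12                        # stationary at x = pi/4
assert fprime(pi / 4 - 0.01) > 0 > fprime(pi / 4 + 0.01)  # sign change: a maximum
```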
18,136
<p>I introduced the hypercube (to undergraduate students in the U.S.) in the context of generalizations of the Platonic solids, explained its structure, showed it rotating. I mentioned <a href="https://matheducators.stackexchange.com/a/1824/511">Alicia Stott</a>, who discovered the <span class="math-container">$6$</span> regular polytopes in <span class="math-container">$\mathbb{R}^4$</span> (discovered after Schläfli). I sense they largely did not grasp what is the hypercube, let alone the other regular polytopes.</p> <p>I'd appreciate hearing of techniques for getting students to "grok" the fourth dimension.</p>
Humberto José Bortolossi
8,828
<p>I strongly recommend the film <a href="http://www.dimensions-math.org/Dim_E.htm" rel="noreferrer">Dimensions</a> by Jos Leys, Étienne Ghys and Aurélien Alvarez. It's free! The main tools used by the authors to explain the various dimensions are cross sections and stereographic projections. The animation is very didactic, building the ideas in 2D and 3D as preparation for 4D. There are dubbed versions in Deutsch, American English, Français, Español, Italiano, 日本語 and Pусский.</p> <p>Here are the first four chapters of the film:</p> <p>Dimension 2: <a href="https://youtu.be/6cpTEPT5i0A" rel="noreferrer">https://youtu.be/6cpTEPT5i0A</a></p> <p><a href="https://i.stack.imgur.com/ud8v8.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ud8v8.png" alt="enter image description here"></a></p> <p>Dimension 3: <a href="https://youtu.be/AhM9JH5GNiI" rel="noreferrer">https://youtu.be/AhM9JH5GNiI</a></p> <p><a href="https://i.stack.imgur.com/flOqy.png" rel="noreferrer"><img src="https://i.stack.imgur.com/flOqy.png" alt="enter image description here"></a></p> <p>Dimension 4: <a href="https://youtu.be/nz0ku71x22A" rel="noreferrer">https://youtu.be/nz0ku71x22A</a></p> <p><a href="https://i.stack.imgur.com/ojr5J.png" rel="noreferrer"><img src="https://i.stack.imgur.com/ojr5J.png" alt="enter image description here"></a></p>
18,136
<p>I introduced the hypercube (to undergraduate students in the U.S.) in the context of generalizations of the Platonic solids, explained its structure, showed it rotating. I mentioned <a href="https://matheducators.stackexchange.com/a/1824/511">Alicia Stott</a>, who discovered the <span class="math-container">$6$</span> regular polytopes in <span class="math-container">$\mathbb{R}^4$</span> (discovered after Schläfli). I sense they largely did not grasp what is the hypercube, let alone the other regular polytopes.</p> <p>I'd appreciate hearing of techniques for getting students to "grok" the fourth dimension.</p>
Carlo Wood
13,770
<p>I invented higher dimensions at an early age and have scored "off the charts" in spatial insight on any test. Since then I've made numerous "infinite dimensional" puzzles (because 3 dimensional ones were too boring).</p> <p>This is how I grasp the concept of a hypercube:</p> <p>Each hypercube of N dimensions consists of 2^N points. Half of those points (aka 2^(N-1)) form an N-1 dimensional hypercube, as does the other half. There are N ways to pick such a pair *). One such half is a copy of the other, merely translated by the side length L in a dimension perpendicular to the N-1 dimensions that those hypercubes exist in.</p> <p>*) Each corner point has (for example) coordinates 0 or 1 for each dimension: each point is represented by a vector like [0,1,1,0,0,1,0,0,0,1] where every permutation of 0 and 1's occurs (leading to the 2^N points). Choose any coordinate and separate the points into two groups: one where that coordinate is 0 and one where that coordinate is 1. Hence, N choices. The remaining N-1 coordinates are again a vector of 0 and 1's that contains every permutation; so they are obviously also hypercubes, of one dimension less.</p> <p>Hence you can "build up" a hypercube from lower dimensions as follows: start with a point. Translate this point over a distance L. Note how it doesn't matter in WHICH direction, even though you have 3 dimensions to pick from (when restricting yourself still to 3D space). The point "draws" a line while being translated, giving you a line piece. The number of points has doubled: from 1 point to 2 points. Now you have a 1D hypercube.</p> <p>Next translate this line piece (1D hypercube) in any direction perpendicular to the previously used direction (even in 3D space this still allows choice, but which choice you make doesn't matter: all unused dimensions are equivalent), over a distance L. 
This doubles the points again, and each point draws a line again while being translated (in the end, ask the students to find the formula for the number of lines as a function of N). Next translate the resulting 2D hypercube (the square) over a distance L perpendicular to the square. This draws four more lines and doubles the number of points from 4 (one square) to 8 (original square plus copy).</p> <p>Next, translate the 3D hypercube over a distance L in a direction perpendicular to all 3 previously used dimensions. Note that there are infinite dimensions, but which direction you choose is not important, as long as it is perpendicular to the used dimensions. The result of that is that the new lines being drawn during the translation of the copy are all perpendicular to the original hypercube and thus all make an angle of 90 degrees with every previously drawn line.</p> <p>And so on: make a copy of the N-dimensional hypercube, translate it over a distance L perpendicular to all previously used dimensions, making all 2^N points draw 2^N extra lines.</p> <p>Note how every dimension is symmetrical: there are N axes, and on each axis there are two opposite N-1 dimensional hypercubes: the "outsides" that limit the hypercube in that dimension (aka there are 2N outsides).</p> <p>Some students will grasp it. Let them form groups where students who got it explain in their own words to other students how they see it and how they grasped it. It can help to have someone else explain it (in different words).</p> <p>Here is a puzzle that I made:</p> <p>Given a hypercube of N dimensions in an N dimensional space: if you paint the 2N outsides of the hypercube from a palette of k colors, how many colorings that are distinct under rotation can you make? For example, N=2, k=2 gives: AAAA, AAAB, AABB, ABAB, ABBB and BBBB, so 6 different colorings (rotations of the square are rotations of the strings here). N=2, k=3 gives 24 different colorings. What is the general formula? 
Don't look it up, because I have it published on the net somewhere :p</p> <p>Edit:</p> <p>More abstract, but certainly important, are the coordinate vectors with all permutations of 0's and 1's. You could explain that if you add more zeros but never change those zeros, then they don't matter. For example:</p> <pre><code>0,0,0,0,0,0 0,0,0,1,0,0 0,0,1,0,0,0 0,0,1,1,0,0 0,1,0,0,0,0 0,1,0,1,0,0 0,1,1,0,0,0 0,1,1,1,0,0 </code></pre> <p>spans a 3D cube (in 6D space, but that doesn't matter at all).</p> <p>Likewise you could keep a coordinate at 1 (or whatever): as long as it doesn't change, it isn't used.</p> <p>Making a copy then is easy: copy the table and change one of the unused 0's into a 1. Both are 3D cubes as explained before, but they are translated by a distance 0,0,0,0,0,1 (or whatever coordinate you changed), and together now form a 4D hypercube.</p> <p>Question for the class: what if you correlate the coordinates? I.e., you pick two columns and only use 0,1 or 1,0 and never 0,0 or 1,1. Then that one column counts as 1 bit. This way you can ALSO make 2^N vectors of every "permutation", but using more than N (changing) coordinates (answer: a hyperblock; unless you <em>only</em> use pairs, for example,</p> <pre><code>0,1,0,1,0,1 0,1,0,1,1,0 0,1,1,0,0,1 0,1,1,0,1,0 1,0,0,1,0,1 1,0,0,1,1,0 1,0,1,0,0,1 1,0,1,0,1,0 </code></pre> <p>is a perfect 3D cube, in 6D space).</p> <p>EDIT 2</p> <p>Unrelated maybe, but a neat invention of mine:</p> <pre><code>0,0,0,0,0,1 0,0,0,0,1,0 0,0,0,1,0,0 0,0,1,0,0,0 0,1,0,0,0,0 1,0,0,0,0,0 </code></pre> <p>is an N-dimensional hypertetrahedron in N+1 dimensions. Isn't it amazing how simple the coordinates become if you add one dimension?! Try to write the coordinates down using only N dimensions :p (if at all possible!).</p>
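The doubling construction described above predicts 2^N vertices, and one possible answer to the suggested line-count exercise is N·2^(N-1); a small Python sketch (my addition, not part of the original answer) confirms both counts:

```python
from itertools import product

def hypercube(n):
    # vertices are all 0/1 vectors; edges join vertices differing in exactly one coordinate
    vertices = list(product((0, 1), repeat=n))
    edges = [(u, v) for i, u in enumerate(vertices) for v in vertices[i + 1:]
             if sum(a != b for a, b in zip(u, v)) == 1]
    return vertices, edges

for n in range(1, 7):
    vertices, edges = hypercube(n)
    assert len(vertices) == 2 ** n          # points double with each translation
    assert len(edges) == n * 2 ** (n - 1)   # lines drawn by the repeated translations
```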
648,066
<p>Let $\gamma(z_0,R)$ denote the circular contour $z_0+Re^{it}$ for $0\leq t \leq 2\pi$. Evaluate $$\int_{\gamma(0,1)}\frac{\sin(z)}{z^4}dz.$$</p> <p>I know that the integrand has the Laurent expansion \begin{equation} \frac{\sin(z)}{z^4} = \frac{1}{z^4}\left(z-\frac{z^3}{3!}+\frac{z^5}{5!}-\cdots\right) = \frac{1}{z^3}-\frac{1}{6z}+\cdots \end{equation} but I'm not sure if I should calculate the residues and poles or use Cauchy's formula.</p> <p>Using Cauchy's formula would give $$ \frac{2\pi i}{1!} \frac{d}{dz}\sin(z),$$ evaluated at $0$ gives $2\pi i$? I'm not sure though, any help will be greatly appreciated.</p>
Daniel Fischer
83,702
<p>Cauchy's integral formula is</p> <p>$$f^{(n)}(z) = \frac{n!}{2\pi i} \int_\gamma \frac{f(\zeta)}{(\zeta-z)^{n+1}}\,d\zeta,$$</p> <p>where $\gamma$ is a closed path winding once around $z$, and enclosing no singularity of $f$.</p> <p>Thus in your example, $n = 3$, and you need the third derivative,</p> <p>$$\int_{\gamma(0,1)} \frac{\sin z}{z^4}\,dz = \frac{2\pi i}{3!} \sin^{(3)} 0 = \frac{2\pi i}{6} (-\cos 0) = - \frac{\pi i}{3}.$$</p>
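The value $-\pi i/3$ can also be confirmed by integrating numerically along the contour $\gamma(0,1)$; a rough Riemann-sum sketch in Python (mine, not part of the answer — for a smooth periodic integrand this equals the trapezoid rule and converges very fast):

```python
from cmath import exp, sin, pi

def contour_integral(f, z0=0.0, R=1.0, steps=4096):
    # Riemann sum of the parametrized integral: z = z0 + R e^{it}, dz = i R e^{it} dt
    dt = 2 * pi / steps
    total = 0j
    for k in range(steps):
        z = z0 + R * exp(1j * k * dt)
        total += f(z) * 1j * R * exp(1j * k * dt) * dt
    return total

value = contour_integral(lambda z: sin(z) / z ** 4)
assert abs(value - (-pi * 1j / 3)) < 1e-6    # agrees with -pi*i/3
```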
3,886,643
<p>I have doubts about how to solve this issue about bases in numbering systems:</p> <p>&quot;Determine the base b in <span class="math-container">$(104)_b = 8285$</span>&quot;</p> <p>can anybody help me? Thanks.</p>
MPW
113,214
<p><strong>Hint:</strong> The number in base <span class="math-container">$10$</span> is <span class="math-container">$8285$</span>. Remember that this means the number can be expressed as <span class="math-container">$$5\cdot 10^0 + 8\cdot 10^1 + 2\cdot 10^2 + 8\cdot 10^3 = 8285$$</span> You are told that, in base <span class="math-container">$b$</span>, the number can be expressed as <span class="math-container">$$4\cdot b^0 + 0\cdot b^1 + 1\cdot b^2 = 8285$$</span> The left side is just <span class="math-container">$b^2 + 4$</span>. So you want to solve <span class="math-container">$b^2 + 4 = 8285$</span> for <span class="math-container">$b$</span>.</p>
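A quick Python check of this arithmetic (my sketch, with a hypothetical helper for the positional evaluation):

```python
from math import isqrt

# the numeral 104 in base b has value 1*b^2 + 0*b + 4, so solve b^2 + 4 = 8285
b = isqrt(8285 - 4)
assert b * b + 4 == 8285     # the solution is an exact integer
assert b == 91

def to_decimal(digits, base):
    # Horner-style positional evaluation of a digit list in the given base
    value = 0
    for d in digits:
        assert 0 <= d < base  # each digit must be valid in the base
        value = value * base + d
    return value

assert to_decimal([1, 0, 4], 91) == 8285
```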
2,767,977
<p>I'm working through Williams' <em>Probability with Martingales</em> book and had a question. </p> <p>Suppose we have an iid increment random walk on the integers. $S_n = \sum_1^n Y_i$ where $P(Y_i = 1) = p, P(Y_i = -1) = 1-p$. On page 102, Williams proves that the random walk will almost surely hit $1$ in finite time for the symmetric case ($p= 0.5$) by constructing the Wald martingale from the random walk. My questions are as follows: </p> <p>1) Could this method not hold for any positive integer in the symmetric case, that is, could we not replicate this to show that the random walk hits any $x \in \mathbb{N}$ almost surely in finite time? Or is there some other method one must employ? As I see it, it should work. </p> <p>2) What if we had a biased random walk, where $p \neq 0.5$? Say, we have $p &gt; 0.5$? Intuitively, it makes sense that the random walk will now eventually go to $+ \infty$. Formally, how would we show that we could hit any $x \in \mathbb{N}$ in almost surely finite time? Is it simply a case of establishing submartingale convergence to $+\infty$ and concluding that we must pass through every positive integer at some finite step for that to happen or is there a more careful argument to be made? </p>
Dr. Sonnhard Graubner
175,066
<p>We have $$f(x)=\sin(\frac{x}{2})$$ then we get by the chain rule: $$f'(x)=\cos(\frac{x}{2})\cdot \frac{1}{2}$$</p>
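A central-difference check (my sketch, not part of the answer) confirms the chain-rule result numerically:

```python
from math import sin, cos

def f(x):
    return sin(x / 2)

def fprime(x):
    # chain rule: derivative of sin(x/2) is cos(x/2) * 1/2
    return cos(x / 2) * 0.5

h = 1e-6
for x in (-2.0, 0.0, 0.7, 3.1):
    numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference approximation
    assert abs(numeric - fprime(x)) < 1e-8
```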
2,767,977
<p>I'm working through Williams' <em>Probability with Martingales</em> book and had a question. </p> <p>Suppose we have an iid increment random walk on the integers. $S_n = \sum_1^n Y_i$ where $P(Y_i = 1) = p, P(Y_i = -1) = 1-p$. On page 102, Williams proves that the random walk will almost surely hit $1$ in finite time for the symmetric case ($p= 0.5$) by constructing the Wald martingale from the random walk. My questions are as follows: </p> <p>1) Could this method not hold for any positive integer in the symmetric case, that is, could we not replicate this to show that the random walk hits any $x \in \mathbb{N}$ almost surely in finite time? Or is there some other method one must employ? As I see it, it should work. </p> <p>2) What if we had a biased random walk, where $p \neq 0.5$? Say, we have $p &gt; 0.5$? Intuitively, it makes sense that the random walk will now eventually go to $+ \infty$. Formally, how would we show that we could hit any $x \in \mathbb{N}$ in almost surely finite time? Is it simply a case of establishing submartingale convergence to $+\infty$ and concluding that we must pass through every positive integer at some finite step for that to happen or is there a more careful argument to be made? </p>
Community
-1
<p>$$\sin'(x)=\cos(x)$$ With $u = \frac{x}{2}$: $$\left(\sin\frac{x}{2}\right)'=(\sin u)'=\cos(u) \cdot u'= \cos\left(\frac{x}{2}\right) \cdot \frac{1}{2}$$</p>
2,254,025
<p><strong>Please verify my answer to the following differential equation:</strong> $$y''-xy'+y=0$$ Let $y = {\sum_{n=0}^\infty}C_nx^n$, then $y' = {\sum_{n=1}^\infty}nC_nx^{n-1}$ and $y''={\sum_{n=2}^\infty}n(n-1)C_nx^{n-2}$</p> <p>Substituting this into the equation we get</p> <p>$${\sum_{n=2}^\infty}n(n-1)C_nx^{n-2}-x{\sum_{n=1}^\infty}nC_nx^{n-1}+{\sum_{n=0}^\infty}C_nx^n = 0$$ $${\sum_{n=2}^\infty}n(n-1)C_nx^{n-2}-{\sum_{n=1}^\infty}nC_nx^n+{\sum_{n=0}^\infty}C_nx^n = 0$$</p> <p>Getting the $x^n$ term on all the terms $${\sum_{n=0}^\infty}(n+2)(n+1)C_{n+2}x^{n}-{\sum_{n=1}^\infty}nC_nx^n+{\sum_{n=0}^\infty}C_nx^n = 0$$</p> <p>Getting the $0$th term from the first and the third summations we get $$2C_2+C_0 + {\sum_{n=1}^\infty}(n+2)(n+1)C_{n+2}x^{n}-{\sum_{n=1}^\infty}nC_nx^n+{\sum_{n=1}^\infty}C_nx^n = 0$$</p> <p>Factoring $x^n$ we get $$2C_2+C_0 + {\sum_{n=1}^\infty}[(n+2)(n+1)C_{n+2}-nC_n+C_n]x^n= 0$$</p> <p><strong>i.</strong>$$2C_2+C_0 = 0 \implies C_2 = \frac{-C_0}{2}$$</p> <p><strong>ii.</strong>$$(n+2)(n+1)C_{n+2}-nC_n+C_n = 0$$</p> <p>Therefore solving <strong>ii.</strong> for $C_{n+2}$ $$C_{n+2}=\frac{(n-1)C_n}{(n+2)(n+1)}, n=0,1,2,3,...$$</p> <p>If $n = 0$,</p> <p>$$C_2 = \frac{-C_0}{2!}$$</p> <p>If $n=1$,</p> <p>$$C_3 = 0$$</p> <p>If $n=2$,</p> <p>$$C_4 = \frac{C_2}{3\cdot4} = \frac{-C_0}{4!}$$</p> <p>If $n=3$,</p> <p>$$C_5 = \frac{2C_3}{4\cdot5}=0$$</p> <p>If $n=4$, $$C_6 = \frac{3C_4}{5\cdot6} = \frac{-C_0}{6!}$$</p> <p>Upon seeing the pattern we realize that if $n=2m$ then $$C_{2m} = \frac{-C_0}{(2m)!}$$</p> <p>And if $n=2m+1$ then $$C_{2m+1} = 0$$</p> <p>So the final answer would be $$y = {\sum_{n=0}^\infty}C_nx^n = {\sum_{m=0}^\infty}\frac{-C_0\,x^{2m}}{(2m)!}$$</p>
levap
32,262
<p>Your equation is a second order linear equation, so it should have a two-dimensional space of solutions (that is, solutions that depend on two free parameters), while your final answer depends only on one free parameter $C_0$; this means you did something wrong.</p> <p>Indeed, your equations don't determine what $C_1$ is, which actually means that $C_1$ can be arbitrary. In addition, you made a mistake in deducing the general pattern. For example,</p> <p>$$ C_6 = \frac{3}{6 \cdot 5} C_4 = -\frac{3}{6 \cdot 5} \frac{1}{4!} C_0 = -\frac{3C_0}{6!} \neq -\frac{C_0}{6!}.$$</p> <p>In fact, we have</p> <p>$$ C_{2m} = \frac{(2m-3)C_{2m-2}}{(2m)(2m-1)} = \frac{(2m-3)(2m-5)C_{2m-4}}{(2m)(2m-1)(2m-2)(2m-3)} = \dots = -\frac{(2m - 3)(2m - 5) \cdots 1}{(2m)!} C_0 = -\frac{C_0}{(2m-1)2^{m}m!}$$</p> <p>(the last step uses $(2m)! = 2^m m!\,(2m-1)(2m-3)\cdots 1$), and the general solution is given by</p> <p>$$ y(x) = C_1 \cdot x - C_0 \sum_{m=0}^{\infty} \frac{x^{2m}}{(2m-1)2^m m!}. $$</p>
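The closed form can be checked against the recurrence $C_{n+2}=\frac{(n-1)C_n}{(n+2)(n+1)}$ with exact rational arithmetic; a short Python sketch (mine, not part of the answer):

```python
from fractions import Fraction

C0 = Fraction(1)                      # free parameter; C1 plays no role in the even terms
C = {0: C0}
# the recurrence from the power-series substitution: C_{n+2} = (n-1) C_n / ((n+2)(n+1))
for n in range(0, 40, 2):
    C[n + 2] = Fraction(n - 1, (n + 2) * (n + 1)) * C[n]

def closed_form(m):
    # claimed closed form: C_{2m} = -C0 / ((2m - 1) * 2^m * m!)
    fact = 1
    for i in range(1, m + 1):
        fact *= i
    return -C0 / ((2 * m - 1) * 2 ** m * fact)

assert all(C[2 * m] == closed_form(m) for m in range(0, 20))
```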
3,742,707
<p>How many different fractions can be made from the numbers 3, 5, 7, 11, 13, 17 so that each fraction contains 2 different numbers? How many of them will be proper fractions?</p>
Triceratops
574,334
<p>Note that all the numbers are mutually <a href="https://en.wikipedia.org/wiki/Coprime_integers" rel="nofollow noreferrer">coprime integers</a> and therefore there cannot be cancellations in the fractions. Then you only need to count the number of possible pairs.</p>
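Following the hint, a brute-force count with Python's itertools (my sketch) gives 6·5 = 30 fractions, 15 of them proper:

```python
from itertools import permutations
from math import gcd

numbers = [3, 5, 7, 11, 13, 17]
pairs = list(permutations(numbers, 2))     # ordered (numerator, denominator) pairs

# the numbers are pairwise coprime, so no two pairs reduce to the same fraction
assert all(gcd(a, b) == 1 for a, b in pairs)
assert len(pairs) == 30                    # 6 * 5 fractions in total

proper = [(a, b) for a, b in pairs if a < b]
assert len(proper) == 15                   # exactly half are proper (numerator < denominator)
```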
2,585,828
<p>I saw in an article that for every real number, there exists a Cauchy sequence of rational numbers that converges to that real number. This was stated without proof, so I'm guessing it is a well-known theorem in analysis, but I have never seen this proof. So could someone give a proof (preferably with a source) for this fact? It can use other theorems from analysis, as long as they aren't too obscure.</p>
N. S.
9,176
<p>Let $\alpha$ be any real number. </p> <p>Define $$a_n = \frac{\lfloor n \alpha \rfloor}{n}$$ where $\lfloor , \rfloor $ denotes the floor function.</p> <p>Then $a_n \in \mathbb Q$. Moreover we have $$\frac{n \alpha -1}{n} \leq a_n \leq \alpha$$ and hence $a_n \to \alpha$.</p>
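A small Python sketch (mine, not part of the answer) illustrating the construction for $\alpha=\sqrt 2$ — each $a_n$ is rational by construction and is squeezed between $\alpha-\frac1n$ and $\alpha$:

```python
from fractions import Fraction
from math import floor, sqrt

alpha = sqrt(2)                             # stand-in for an arbitrary real number
for n in (1, 10, 100, 10_000, 1_000_000):
    a_n = Fraction(floor(n * alpha), n)     # rational by construction
    assert alpha - 1.0 / n <= a_n <= alpha  # the squeeze from the answer
```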
2,821,308
<p>Consider the complex function below: $$H(\omega) = \dfrac{1}{i\omega} \left(e^{i\omega} - e^{-i\omega}\right)$$</p> <p>If I replace $i$ by $-i$ in $H(\omega)$, I get back the same $H(\omega)$.<br> It is easy to see that $H(\omega) = \dfrac{2}{\omega}\sin(\omega)$. </p> <p>A long time ago I heard somebody claim this, but I couldn't pursue it further... Now in another topic (signals and systems), this exact property is being used again. </p> <p>I feel this has something to do with a flip transform. Like, f(-x) flips the graph of f(x) around the y axis. Since the function sees the opposite x values, its graph flips around the y axis. If f(-x) = f(x), then the function is symmetrical around the y axis and we call it an even function. Hmm... I couldn't connect this to the complex domain. Help appreciated.</p> <p>EDIT: Here $\omega$ is a real number (angular frequency)</p>
A.Γ.
253,273
<p>First of all, your KKT system has a wrong sign, it must be \begin{align*} x_1 (-1 + v) \color{red}{-} 2 x_2 &amp;= 0 \tag{1'} ,\\ x_2 (-1 + v) \color{red}{-} 2 x_1 &amp;= 0. \tag{2'} \end{align*} However, the conclusion $(x_1-x_2)(1+v)=0$ from (1') minus (2') is correct.</p> <p>You cannot get all four solutions "only asking $v\ne -1$" simply because if $v\ne-1$ then $x_1=x_2$, and it is only first two solutions that satisfy $x_1=x_2$. It means that when you study the case $v\ne-1$ you get first two solutions and $v=3$. </p> <p>For a complete solution, you have to study the remaining case $v=-1$ as well. Set $v=-1$ into (1') and (2') to get $x_1+x_2=0$, i.e. $x_1=-x_2$, which gives you another two solutions from (3).</p>
237,960
<p>How could I solve this problem?</p> <blockquote> <p>Find the first digit of $2^{4242}$ without using a calculator.</p> </blockquote> <p>I know how to find the last digit with modular arithmetic, but I can't use that here.</p>
Ross Millikan
1,827
<p>If you have memorized that $\log_{10}2\approx 0.30103$ you can multiply by $4242$ and take the fractional part as $0.9692$ which looks like it should be greater than $\log_{10}9$ (and it is, but it is closer than I would have thought, it is $\approx 0.9542$). I don't know how to do it without log tables.</p>
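Python's exact integers make this easy to verify, along with the log-table estimate (my sketch, not part of the answer):

```python
from math import log10

n = 2 ** 4242                       # Python integers are exact, so this is the full number
assert str(n)[0] == '9'             # direct check of the leading digit

frac = (4242 * log10(2)) % 1        # fractional part of log10(2^4242), about 0.9692
assert log10(9) < frac < 1          # so the leading digit is 9, as in the answer
```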
1,626,840
<blockquote> <p>If I've played the lottery a certain number $N$ of times (and I didn't look into the results of each game), what must $N$ be to give me a $50\%$ chance of having already won at least one of my games? (Imagine I'll look up all the results at once, after achieving this chance.) The chance of winning this particular lottery is $1$ in $1$ million.</p> </blockquote> <p>I was discussing this with my colleague and I particularly think that there's no exact number $N$ giving a $50\%$ chance of already having won. I think that when you've played zero games the chance is $0\%$ and if you play an infinite number of games the chance is $100\%$ that you have won, but there's no ascending curve.</p>
lulu
252,071
<p>If $p$ is the probability of winning the lottery in a single try (so here $p=\frac 1{10^6}$) then the probability of not winning (in a single try) is $1-p$, and the probability of not winning in $n$ independent trials is $(1-p)^n$. It follows that the probability of winning at least once in $n$ independent trials is $1-(1-p)^n$. Thus you want to solve $$1-(1-p)^n=.5\;\implies (1-p)^n=.5\implies n=\frac {\log(.5)}{\log(1-p)}$$</p> <p>Easy to check that with $p=\frac {1}{10^6}$ we have $n\sim 693,147$ . It's true, of course, that $n$ is not integral, but I'm not sure that's terribly significant. If we use that (integral) value we get a win probability of about $.50000008$ </p>
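A quick numerical check of these formulas (my sketch, not part of the answer):

```python
from math import log, ceil

p = 1e-6                                  # probability of winning a single game
n = log(0.5) / log(1 - p)                 # games needed for a 50% chance
assert 693_146 < n < 693_148              # n is about 693,147, and not an integer

win_prob = 1 - (1 - p) ** ceil(n)         # rounding up to a whole number of games
assert 0.5 < win_prob < 0.500001          # just over 50%, about 0.50000008
```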
444,865
<p>Prove that any natural number n can be written as $$n=a^2+b^2-c^2$$ where $a,b,c$ are also natural.</p>
Hagen von Eitzen
39,174
<p>If you consider the case $b=c+1$, you get $b^2-c^2=2c+1$, which can be any odd natural number $\ge3$. Thus with $a=1$ you reach all even $n\ge4$ and with $a=2$ all odd $n\ge 7$. This leaves only the cases $n\in\{1,2,3,5\}$ open. Can you find solutions for these $n$? (Hint: Try $b=c-1$).</p>
444,865
<p>Prove that any natural number n can be written as $$n=a^2+b^2-c^2$$ where $a,b,c$ are also natural.</p>
Oleg567
47,993
<p>Consider $n\ge 6$.</p> <p>If $n$ is odd, $n=2m+1$, then $$ n = 2m+1 = 2^2 + (m-1)^2 - (m-2)^2; $$ If $n$ is even, $n=2m$, then $$ n = 2m = 1^2 + m^2 - (m-1)^2; $$</p> <p>Small $n$:<br> $1=1^2+1^2-1^2$,<br> $2=3^2+3^2-4^2$,<br> $3=4^2+6^2-7^2$,<br> $4=2^2+1^2-1^2$,<br> $5=4^2+5^2-6^2$.</p>
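The constructions are easy to verify mechanically; a short Python sketch (mine, not part of the answer) checks them together with the listed small cases:

```python
def decomposition(n):
    # the constructions from the answer, plus the listed small cases
    small = {1: (1, 1, 1), 2: (3, 3, 4), 3: (4, 6, 7), 4: (2, 1, 1), 5: (4, 5, 6)}
    if n in small:
        return small[n]
    if n % 2:                  # odd n = 2m + 1: n = 2^2 + (m-1)^2 - (m-2)^2
        m = (n - 1) // 2
        return (2, m - 1, m - 2)
    m = n // 2                 # even n = 2m: n = 1^2 + m^2 - (m-1)^2
    return (1, m, m - 1)

for n in range(1, 2001):
    a, b, c = decomposition(n)
    assert min(a, b, c) >= 1               # all three are natural numbers
    assert a * a + b * b - c * c == n
```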
444,865
<p>Prove that any natural number n can be written as $$n=a^2+b^2-c^2$$ where $a,b,c$ are also natural.</p>
Seirios
36,434
<p>First show the following lemma:</p> <blockquote> <p><strong>Lemma:</strong> Any odd positive integer can be written as the difference of two squares.</p> </blockquote> <p>For a proof without words, consider the area of the gray part with $a=n$ and $b=n-1$:</p> <p><img src="https://i.stack.imgur.com/Iox9n.png" alt="enter image description here"></p> <p>So if $n$ is odd, set $b=0$ and apply the previous lemma; if $n$ is even, set $b=1$ and apply the previous lemma to $n-1$.</p>
2,709,878
<p>How do I know that an equation will have an extraneous solution? </p> <p>For example, in this question: <span class="math-container">$2\log_9(x) = \log_9(2) + \log_9(x + 24)$</span></p>
Siong Thye Goh
306,553
<p>After you obtain the solution, substitute it in and check whether it is a valid solution. </p> <p>For this particular question, a possible way to solve the problem is to bring the $2$ up and become the power and we might have included an extra solution. </p> <p>$$x^2 = 2(x+24)$$</p> <p>$$x^2-2x-48=0$$</p> <p>$$(x-8)(x+6)=0$$</p> <p>and we see that we need to exclude the negative solution. </p>
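A short Python check of both roots (my sketch, not part of the answer):

```python
from math import log

def log9(x):
    return log(x) / log(9)

for x in (8, -6):                    # candidate roots of x^2 - 2x - 48 = 0
    assert x * x - 2 * x - 48 == 0   # both satisfy the squared equation

# x = 8 satisfies the original logarithmic equation...
assert abs(2 * log9(8) - (log9(2) + log9(8 + 24))) < 1e-12

# ...but x = -6 is extraneous: log9 of a negative argument is undefined
try:
    log9(-6)
    extraneous_rejected = False
except ValueError:
    extraneous_rejected = True
assert extraneous_rejected
```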
324,557
<p>Isbell gave, in <a href="https://eudml.org/doc/213746" rel="noreferrer"><em>Two set-theoretic theorems in categories</em> (1964)</a>, a necessary criterion for categories to be concretisable (i.e. to admit some faithful functor into sets). Freyd, in <a href="https://www.sciencedirect.com/science/article/pii/0022404973900315" rel="noreferrer"><em>Concreteness</em> (1973)</a>, showed that Isbell’s criterion is also sufficient.</p> <p>My question is: <strong>Has anyone ever used Isbell’s criterion to check that a category is concretisable?</strong></p> <p>I’m interested not only in seeing the theorem is formally invoked in print, to show some category is concretisable — though of course that would be a perfect answer, if it’s happened. What I’m also interested in, and suspect is more likely to have occurred, is if anyone’s found the criterion useful as a heuristic for checking whether a category is concretisable, in a situation where one wants it to be concrete but finding a suitable functor is not totally trivial. (I’m imagining a situation similar to the adjoint functor theorems: they give very useful quick heuristics for guessing whether adjoints exist, but if they suggest an adjoint does exist, usually there’s an explicit construction as well, so they’re used as heuristics much more often than they’re formally invoked in print.)</p> <p>What I’m not so interested in is uses of the criterion to confirm that an expected non-concretisable category is indeed non-concretisable — I’m after cases where it’s used in expectation of a <em>positive</em> answer.</p>
Peter LeFanu Lumsdaine
2,273
<p>[Answer converted from a comment by Jiří Rosický on <a href="https://mathoverflow.net/a/324563/2273">another answer</a>.]</p> <p>Isbell’s criterion is used directly in Libor Barto’s paper <em>Accessible set functors are universal</em> (<a href="http://www.karlin.mff.cuni.cz/~barto/Articles/accfununiv.pdf" rel="nofollow noreferrer">pdf</a>), Section 4, to show that the category of “accessible set functors” (i.e. accessible endofunctors on <span class="math-container">$\mathrm{Set}$</span>) is concretisable. A slightly different argument, based on the simpler criterion “regular-well-powered” for the finitely complete case, is used for this same example in Remarks 5.5–6 of Adámek–Rosičký <em>How nice are free completions of categories?</em> (arXiv:<a href="https://arxiv.org/abs/1806.02524" rel="nofollow noreferrer">1806.02524</a>)</p>
2,537,311
<p>Question is, from finding a basis of eigenvectors, determine an invertible matrix $P$ such that $$F=P^{-1}LP$$ is diagonal (write down the matrix $F$), where</p> <p>$$L=\begin{pmatrix} 1 &amp; -2 &amp; -1\\ 2 &amp; 6 &amp; 2\\ -1 &amp; -2 &amp; 1 \end{pmatrix}$$</p> <p>For my eigenvalues, using $$\det(L-\lambda I)=0,$$</p> <p>I obtained $\lambda = 2$ and $\lambda = 4$.</p> <p>Now for my eigenvectors I'm slightly confused, since I don't know what to do after plugging in $\lambda = 2$. I got:</p> <p>$$x+2y+z=0$$</p> <p>What do I do from here?</p>
Rebellos
335,894
<p>$f$ is differentiable on $\mathbb R$, which means it is also continuous, so we can apply the <em>Mean Value Theorem</em> on the interval $(0,x)$ :</p> <p>$$f'(ξ) = \frac{f(x) - f(0)}{x-0}$$</p> <p>By taking absolute values, we get : </p> <p>$$|f'(ξ)| = \Bigg| \frac{f(x) - f(0)}{x-0} \Bigg| \Leftrightarrow |f'(ξ)||x|=|f(x) -1|$$</p> <p>using $f(0) = 1$. But $|f'(x)| \leq 1$, thus : </p> <p>$$|f(x)-1|\leq|x| \Rightarrow |f(x)| \leq |x| + 1$$</p>
4,474,095
<p>There is one thing I can't grasp about the proof given in the Linear Algebra Done Right book by Sheldon Axler (attached below).</p> <p>In the last part it says that <span class="math-container">$(T - \lambda_1I)...(T - \lambda_mI)v = 0$</span>, hence <span class="math-container">$T - \lambda_jI$</span> is not injective for some <span class="math-container">$j$</span>.</p> <p>What I don't understand is why the following reasoning is not correct:</p> <ul> <li>the factors in the equation can be reordered.</li> <li>suppose <span class="math-container">$\lambda_j$</span> is the only eigenvalue <span class="math-container">$T$</span> has. Let's put it at the end: <span class="math-container">$(T - \lambda_1I)...(T - \lambda_mI)(T - \lambda_jI)v = 0$</span></li> <li>the only way for the expression above to be equal to <span class="math-container">$0$</span> is if <span class="math-container">$(T - \lambda_jI)v = 0$</span> (because <span class="math-container">$\lambda_j$</span> is the only eigenvalue, so the other <span class="math-container">$T - \lambda_iI$</span> are injective).</li> <li>hence <span class="math-container">$v$</span> is an eigenvector of <span class="math-container">$T$</span> corresponding to the eigenvalue <span class="math-container">$\lambda_j$</span>. But <span class="math-container">$v$</span> was chosen arbitrarily, so it can't be true.</li> </ul> <p>I know that my logic is flawed but I can't see where. Would appreciate it if someone pointed out to me where I'm wrong.</p> <p><a href="https://i.stack.imgur.com/Iwl4U.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/Iwl4U.png" alt="proof" /></a></p>
xbh
514,490
<ol> <li>If <span class="math-container">$f, g$</span> are polynomials over <span class="math-container">$\Bbb C$</span>, then <span class="math-container">$f(T)g(T) = g(T)f(T)$</span> simply by definition.</li> <li>You do not know beforehand that <span class="math-container">$\lambda_j$</span> is an eigenvalue, so your items 2 to 4 cannot proceed.</li> <li>The correct reasoning is simple: if no such <span class="math-container">$j$</span> exists, then all <span class="math-container">$\mathcal T - \lambda _j \mathcal I$</span> are injective, so <span class="math-container">$v = 0$</span>, which is not what we have chosen, contradiction.</li> <li>Whenever you know an eigenvalue <span class="math-container">$c$</span>, all corresponding eigenvectors must come from <span class="math-container">$\operatorname{Ker}(\mathcal T - c\mathcal I)$</span>. In this proof Axler fixed a vector [which has the potential to be an eigenvector, as he proved] first, rather than finding some value before the proof.</li> </ol> <h3>UPDATE</h3> <p>As for the flaw, even if <span class="math-container">$\lambda_j$</span> is the only eigenvalue, in the factorization you cannot claim that <span class="math-container">$\lambda _k \neq \lambda _j$</span> iff <span class="math-container">$j \neq k$</span>. As the answer you accepted stated, the expression can be something like <span class="math-container">$(T - \lambda_j I)^n$</span>, then <span class="math-container">$(T-\lambda_j)^{n-4}v =0$</span> is also a possible case [say if <span class="math-container">$n \geqslant 4$</span>], yet you asserted that <span class="math-container">$(T-\lambda_j)v$</span> is <span class="math-container">$0$</span>, which is not necessarily true. A counterexample is given in the comment section of the other answer.</p>
1,063,154
<p>I ran into this question and I can't find the right way to approach it.</p> <p>We have $n$ different wine bottles numbered $i=1,\ldots,n$. The first is 1 year old, the second is 2 years old, ..., the $n$th bottle is $n$ years old.</p> <p>Each bottle is still good with probability $1/i$.</p> <p>We pick out a random bottle and it is good. What is the expected value of the age of the bottle?</p> <p>I'm really not sure what the random variable here is and how to approach the question. I'd be grateful for a lead.</p> <p>Thanks,</p> <p>Yaron.</p>
Ben S.
199,026
<p>You are asked to compute a conditional expectation (...'given that the bottle is good') for which you need a conditional probability distribution. Let $I$ be the age of the bottle you picked out, and recall that this variable take values over the range $i = 1...n$ By definition</p> <p>$E[I\,|\, I\mbox{ is good}] = \sum_{i=1}^n i\, P[I=i\,|\, I\mbox{ is good}]$</p> <p>Now you just need this conditional probability,which can be obtained from Bayes' Rule $P(A|B) = P(A, B)/P(B) $ (which you should read as 'the probability of A given B is the probability that both A and B occur, divided by the probability that B occurs):</p> <p>$P[I=i\,|\, I\mbox{ is good}] = \frac{P[I = i \,, I \mbox{ is good}]}{P[I\mbox{ is good}]} = \frac{(1/n)(1/i)}{\sum_{j=1}^n (1/n)(1/j)}$</p> <p>where the numerator is the product of the probability that a particular $i$ is chosen ($1/n$, since there are n bottles) with the probability $1/i$ that it is still good. The denominator is the probability that $I$ is good, but since $I$ can be <em>any</em> bottle, it is the sum over all possible probabilities appearing in the numerator.</p> <p>Combining these results gives the desired solution:</p> <p>$E[I\,|\, I\mbox{ is good}] = \frac{\sum_{i=1}^n i (1/n)(1/i)}{\sum_{j=1}^n (1/n)(1/j)} = \frac{1}{(1/n)\sum_{j=1}^n(1/j)} = \frac{n}{\sum_{j=1}^n(1/j)}$</p> <p>Note that the denominator is just the harmonic series, which scales like $\ln(n)$</p>
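As a numeric cross-check of the formula $E[I \mid I \text{ good}] = n/\sum_j (1/j)$ (an illustration, not part of the original answer), the sketch below computes the conditional expectation exactly via the Bayes computation above, and also runs a small Monte Carlo simulation of the experiment:

```python
import random
from fractions import Fraction

def expected_age_exact(n):
    # Bayes' rule computation, done in exact rational arithmetic
    num = sum(Fraction(i) * Fraction(1, n) * Fraction(1, i)
              for i in range(1, n + 1))
    den = sum(Fraction(1, n) * Fraction(1, j) for j in range(1, n + 1))
    return num / den

def expected_age_simulated(n, trials=200_000, seed=0):
    rng = random.Random(seed)
    ages = []
    for _ in range(trials):
        i = rng.randint(1, n)        # pick one of the n bottles uniformly
        if rng.random() < 1 / i:     # bottle i is still good w.p. 1/i
            ages.append(i)
    return sum(ages) / len(ages)     # average age among good bottles

n = 5
harmonic = sum(Fraction(1, j) for j in range(1, n + 1))  # H_n
```

For $n=5$ the exact answer is $5/H_5 = 300/137 \approx 2.19$, and the simulated average should land close to it.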
1,591,197
<p>Let $ A \trianglelefteq G $ and $ B \trianglelefteq A $ a Sylow normal subgroup of $ A $. My textbook says then that $ B \trianglelefteq G $.</p> <p>I don’t understand why that is.</p>
Elle Najt
54,092
<p>A subgroup $H$ of a group $G$ is said to be a characteristic subgroup if it is fixed under all automorphisms of $G$ (not just conjugation). Precisely, if $\sigma \in Aut(G)$, then $\sigma H = H$ as a set. </p> <p>Characteristic subgroups are of course normal, because conjugation is an automorphism.</p> <p>The point of this definition is that, although the condition of being a normal subgroup is not transitive, the condition of being characteristic is. Why? </p> <p>Suppose that $B$ is a normal subgroup of $A$ and $C$ is a normal subgroup of $B$. Let's try to prove that $C$ is normal in $A$ and see what fails: let $a \in A$, and consider that $aBa^{-1} = B$. But now the conjugation action of $a$ on $C$ is not necessarily induced by conjugating $C$ with an element of $B$. However, $b \to aba^{-1}$ is still an automorphism of $B$. So if $C$ were characteristic in $B$, then it would still be fixed by this automorphism, i.e. by conjugation by $a$. </p> <p>Thus it follows that $aCa^{-1} = C$, for all $a \in A$, under the condition that $C$ is not just normal in $B$, but characteristic. So we have proven:</p> <p>Lemma: Let $B$ be a normal subgroup of $A$, and let $C$ be a characteristic subgroup of $B$. Then $C$ is normal in $A$.</p> <p>The key philosophy now is that canonically defined subgroups are characteristic. (For example, the center of a group, the commutator subgroup, or the socle of a group.)</p> <p>In particular, when there is a normal $p$-Sylow subgroup, it is the unique $p$-Sylow subgroup. Thus it is characteristic, and you can apply the lemma.</p> <p>For more see here: <a href="https://en.wikipedia.org/wiki/Characteristic_subgroup" rel="nofollow">https://en.wikipedia.org/wiki/Characteristic_subgroup</a></p>
416,407
<blockquote> <p>What examples are there of habitual but unnecessary uses of the axiom of choice, in any area of mathematics except topology?</p> </blockquote> <p>I'm interested in standard proofs that use the axiom of choice, but where choice can be eliminated via some judicious and maybe not quite obvious rephrasing. I'm less interested in proofs that were originally proved using choice and where it took some significant new idea to remove the dependence on choice.</p> <p>I exclude topology because I already know lots of topological examples. For instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive mathematics</a> gives choicey and choice-free proofs of a standard result (Theorem 1.4): every open cover of a compact metric space has a Lebesgue number. Todd Trimble told me about some other topological examples, e.g. a compact subspace of a Hausdorff space is closed, or the product of two compact spaces is compact. There are more besides.</p> <p>One example per answer, please. And please sketch both the habitual proof using choice and the alternative proof that doesn't use choice.</p> <p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. 
It would qualify as an answer except that it comes from topology.</p> <p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space <span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p> <p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some <span class="math-container">$\varepsilon_x &gt; 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}), \ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i \varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p> <p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span> such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon &gt; 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n, \varepsilon_n)\}$</span>. 
Put <span class="math-container">$\varepsilon = \min_i \varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
user21820
50,073
<p>It is easy to prove the following in Z+CC (Zermelo plus countable choice):</p> <blockquote> <p>Every uncountable closed set of reals is in bijection with the reals.</p> </blockquote> <p>I was informed by Asaf that it can be proven in ZF (no choice at all), but that proof appears to use replacement. I hence <a href="https://math.stackexchange.com/q/2546436/21820">asked whether it could be proven in just Z</a>, but till today there has been no answer. And whether the answer is yes or no, it would be very interesting. If yes, then the proof is likely to be far from obvious, maybe even not previously known. If no, then we have a theorem that needs either choice or replacement over Z, despite those two principles seeming to be completely unrelated.</p>
416,407
<blockquote> <p>What examples are there of habitual but unnecessary uses of the axiom of choice, in any area of mathematics except topology?</p> </blockquote> <p>I'm interested in standard proofs that use the axiom of choice, but where choice can be eliminated via some judicious and maybe not quite obvious rephrasing. I'm less interested in proofs that were originally proved using choice and where it took some significant new idea to remove the dependence on choice.</p> <p>I exclude topology because I already know lots of topological examples. For instance, Andrej Bauer's <a href="https://www.ams.org/journals/bull/2017-54-03/S0273-0979-2016-01556-4/" rel="noreferrer">Five stages of accepting constructive mathematics</a> gives choicey and choice-free proofs of a standard result (Theorem 1.4): every open cover of a compact metric space has a Lebesgue number. Todd Trimble told me about some other topological examples, e.g. a compact subspace of a Hausdorff space is closed, or the product of two compact spaces is compact. There are more besides.</p> <p>One example per answer, please. And please sketch both the habitual proof using choice and the alternative proof that doesn't use choice.</p> <p>To show what I'm looking for, here's an example taken from that paper of Andrej Bauer. 
It would qualify as an answer except that it comes from topology.</p> <p><strong>Statement</strong> Every open cover <span class="math-container">$\mathcal{U}$</span> of a compact metric space <span class="math-container">$X$</span> has a Lebesgue number <span class="math-container">$\varepsilon$</span> (meaning that for all <span class="math-container">$x \in X$</span>, the ball <span class="math-container">$B(x, \varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>).</p> <p><strong>Habitual proof using choice</strong> For each <span class="math-container">$x \in X$</span>, choose some <span class="math-container">$\varepsilon_x &gt; 0$</span> such that <span class="math-container">$B(x, 2\varepsilon_x)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. Then <span class="math-container">$\{B(x, \varepsilon_x): x \in X\}$</span> is a cover of <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_{x_1}), \ldots, B(x_n, \varepsilon_{x_n})\}$</span>. Put <span class="math-container">$\varepsilon = \min_i \varepsilon_{x_i}$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p> <p><strong>Proof without choice</strong> Consider the set of balls <span class="math-container">$B(x, \varepsilon)$</span> such that <span class="math-container">$x \in X$</span>, <span class="math-container">$\varepsilon &gt; 0$</span> and <span class="math-container">$B(x, 2\varepsilon)$</span> is contained in some member of <span class="math-container">$\mathcal{U}$</span>. This set covers <span class="math-container">$X$</span>, so it has a finite subcover <span class="math-container">$\{B(x_1, \varepsilon_1), \ldots, B(x_n, \varepsilon_n)\}$</span>. 
Put <span class="math-container">$\varepsilon = \min_i \varepsilon_i$</span> and check that <span class="math-container">$\varepsilon$</span> is a Lebesgue number.</p>
Timothy Chow
3,106
<p>The <a href="https://mathoverflow.net/a/22938/">highly-upvoted, accepted answer</a> (by Theo Johnson-Freyd) to another MO question, <a href="https://mathoverflow.net/q/22927/">Why worry about the axiom of choice?</a>, points out that the usual proof of the Poincaré–Birkhoff–Witt theorem assumes that every vector space has a basis and therefore uses the axiom of choice. However, the axiom of choice is not needed.</p> <p>Johnson-Freyd uses this example to illustrate a wider point; namely, the analogue of the axiom of choice in other categories is &quot;every epimorphism splits,&quot; which is false in other categories. Hence, a choice-free proof has the advantage of being easier to generalize to other settings.</p>
836,753
<p>$\{X_1, X2, \ldots, X_{121}\}$ are independent and identically distributed random variables such that $E(X_i)= 3$ and $\mathrm{Var}(X_i)= 25$. What is the standard deviation of their average? In other words, what is the standard deviation of $\bar X= {X_1+ X_2+ \cdots + X_{121} \over 121}$?</p>
André Nicolas
6,312
<p>We first compute the variance of $W$, where $$W=\frac{X_1+X_2+\cdots+X_{121}}{121}.$$ Here are the ingredients.</p> <p>$1$) The variance of a sum of <strong>independent</strong> random variables is the sum of the variances. Each $X_i$ has variance $25$, so the variance of $X_1+X_2+\cdots+X_{121}$ is $(25)(121)$.</p> <p>$2$) If $k$ is a constant, then the variance of $kY$ is $k^2$ times the variance of $Y$. It follows, taking $k=\frac{1}{121}$, that the variance of $W$ is $\frac{(25)(121)}{121^2}$.</p> <p>The variance of $W$ is therefore $\frac{25}{121}$ (we did some cancellation.) To find the <em>standard deviation</em>, take the square root of the variance. Thus $W$ has standard deviation $\frac{5}{11}$.</p>
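The two ingredients translate into a two-line computation. This sketch (an illustration, not part of the original answer) reproduces the arithmetic:

```python
import math

n, var_x = 121, 25

# Ingredient 1: variance of a sum of independent variables is the sum
# of the variances, so Var(X_1 + ... + X_121) = (25)(121)
var_sum = n * var_x

# Ingredient 2: Var(kY) = k^2 Var(Y) with k = 1/121
var_mean = var_sum / n**2     # cancels to 25/121

sd_mean = math.sqrt(var_mean)  # 5/11
```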
2,306,122
<p>Show $X=\{n \in \mathbb{N}: n \text{ is odd and } n = k(k+1) \text{ for some } k \in \mathbb{N}\}=\emptyset$</p> <p>My proof is as follows; please point out if I have made any mistake. </p> <p><strong>proof:</strong></p> <p>we have $\emptyset \subseteq X$ suppose $X≠\emptyset$ pick $n \in X$</p> <p>Then there are 2 cases 1st case: n is odd then n=(k+1)k</p> <p>Then suppose k is odd $\implies$ k+1 is even $\implies$ n is even</p> <p>2nd case: consider k is even $\implies k+1$ is odd</p> <p>then n=(k+1)k for some $k \in \mathbb{N}=\emptyset \implies n$ is even</p> <p>Therefore, n is neither even nor odd, so $k \in \mathbb{N} \implies n \not\in X$ and $\implies X= \emptyset$ </p> <p>Q.E.D</p>
mrnovice
416,020
<p>Your proof is correct, I would just be a bit pedantic about your format and ordering. So I'll follow on from your proof:</p> <p>Suppose for contradiction that $X\neq \emptyset$ and pick $n\in X$</p> <p>Then we know $n$ is odd and we can write $n=k(k+1)$ for some $k\in\mathbb{N}$</p> <p>Then the two cases are:</p> <p>$(i)\quad k\equiv 0\bmod 2\implies n\equiv 0\bmod 2\implies n\not\in X$</p> <p>$(ii)\,\,\,\,k\equiv 1\bmod 2\implies k+1\equiv 0\bmod 2\implies n\equiv 0\bmod 2\implies n\not\in X$</p> <p>Therefore, by exhaustion of all the possible cases, we have arrived at a contradiction, and hence $X=\emptyset$. </p> <p>It's exactly the same as what you did, but just a bit neater (in my opinion).</p>
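A brute-force companion to the case analysis (an illustration, not part of the answer): in any finite range, every product $k(k+1)$ is even, so no odd number belongs to $X$:

```python
# k(k+1) is always even because one of k, k+1 is even.
N = 10_000
products = [k * (k + 1) for k in range(1, N)]
prod_set = set(products)

all_even = all(p % 2 == 0 for p in products)

# Odd numbers of the form k(k+1) in this range, i.e. elements of X:
odd_members = [n for n in range(1, N) if n % 2 == 1 and n in prod_set]
```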
1,394,141
<blockquote> <p>Compute the tangent space $T_pM$ of the unit matrix $p=I$ when $$(i)\,M=SO(n)\\ (ii)\,M=GL(n)\\ (iii)\,M=SL(n).$$</p> </blockquote> <p><strong>My attempt:</strong> I think I have computed the tangent space in the case that $M=SL(n)$. We can write $SL(n)=\det^{-1}(1)$ and I've proved in an earlier exercise that $1$ is a regular value and that $D\det(I)\cdot H= \text{trace } H$. So the tangent space consists of precisely those matrices $H$ which have vanishing trace.</p> <p>I don't know how to proceed in the other two cases. Can we write $SO(n)$ or $GL(n)$ as the pre-image of a regular value?</p>
Thomas Rot
5,882
<p>Basically you write down the defining equation of your group. I'll do it for $O(n)$. Something like this:</p> <p>$$M^TM=1$$</p> <p>The tangent space can be defined as the equivalence class of directions of curves in your manifold. Let $M_t\in O(n)$ with $M_0=1$ and $\frac{d}{dt}\bigr|_{t=0}M_t=X$. Then</p> <p>$$ \frac{d}{dt}\bigr|_{t=0}M_t^TM_t=0 $$</p> <p>hence $$ X^T+X=0 $$</p> <p>So you have shown that the tangent directions are contained in the skew symmetric matrices. Now you also have to show that any skew symmetric matrix can occur as a tangent direction. This can be done using the matrix exponential. Given $X$ skew symmetric, study the path $M_t=\exp (tX)$. Proceed similarly for the other groups.</p>
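The last step can be checked numerically. The pure-Python sketch below (not from the answer; the sample matrix is an arbitrary choice) evaluates a truncated power series for $\exp(X)$ with $X$ skew-symmetric and confirms $M^T M \approx I$, i.e. $M \in O(3)$:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(X, terms=30):
    # Truncated power series exp(X) = I + X + X^2/2! + ...
    n = len(X)
    E = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    P = [row[:] for row in E]                                  # X^0
    for k in range(1, terms):
        P = mat_mul(P, X)                                      # X^k
        E = [[E[i][j] + P[i][j] / math.factorial(k) for j in range(n)]
             for i in range(n)]
    return E

# An arbitrary skew-symmetric matrix: X^T = -X
X = [[0.0, -0.3, 0.5],
     [0.3, 0.0, -0.2],
     [-0.5, 0.2, 0.0]]

M = mat_exp(X)
Mt = [[M[j][i] for j in range(3)] for i in range(3)]
MtM = mat_mul(Mt, M)  # should be (numerically) the identity
```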
243,743
<p>I have been studying this sequence (<a href="https://oeis.org/A266882" rel="nofollow">A266882</a> in the OEIS) and found the following pattern:</p> <p>$13 + 17 + 19 + 23 + 29 = 101$ (101 is prime)</p> <p>For 37 the corresponding sum is not prime.</p> <p>$223 + 227 + 229 + 233 + 239 = 1151$ (1151 is prime)</p> <p>The same is true for 1087, 1423, 1483, and 2683.</p> <p>So, is 37 the only prime of this sequence (A266882) for which the sum is not also a prime?</p>
assaferan
74,819
<p>No. A simple check shows that it is already false for the next few primes in the sequence, namely for 4783, 6079, 7331. In fact, one could even ask whether these first numbers are the only ones for which this sum is prime, but it turns out that it later holds also for 11057 and 12269. </p>
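The check is easy to reproduce. The sketch below assumes, as the question's examples suggest, that each sum is a prime $p$ together with the next four primes; the membership of the starting values in A266882 is taken from the question and answer rather than recomputed:

```python
def is_prime(m):
    # simple trial division, fine for this size
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def next_prime(m):
    m += 1
    while not is_prime(m):
        m += 1
    return m

def run_sum(p, length=5):
    # p plus the next (length - 1) primes
    total, q = p, p
    for _ in range(length - 1):
        q = next_prime(q)
        total += q
    return total

results = {p: (run_sum(p), is_prime(run_sum(p)))
           for p in (13, 37, 223, 4783)}
```

This reproduces $13 \to 101$ (prime), $37 \to 221 = 13 \cdot 17$ (not prime), $223 \to 1151$ (prime), and the answer's counterexample $4783 \to 23951 = 43 \cdot 557$ (not prime); the answer's further counterexamples 6079 and 7331 can be checked the same way.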
729,441
<p>This question comes up after going over the Arzelà–Ascoli theorems.</p> <p>Let $\mathbb F$ be an equicontinuous set of continuous functions from $\mathbb R$ to $\mathbb R$. How do I show that if $\sup\{|f(0)|:f \in \mathbb F\} &lt; \infty$, then $\mathbb F$ is pointwise bounded?</p> <p>I know I need to show that if $\sup\{|f(0)|:f \in \mathbb F\} = M_0 &lt; \infty$, then for each $x \in \mathbb R$, $\sup\{|f(x)|:f \in \mathbb F\} = M_x &lt; \infty$.</p> <p>What I have gotten so far is that I should fix $\epsilon$ and use the fact that the domain is $\mathbb R$.</p> <p>I think I just need some help using equicontinuity and the supremum together.</p>
hmakholm left over Monica
14,366
<p>A proof sketch could be:</p> <p><em>1. Every (nonzero) vector is an eigenvector.</em> Let $v\ne 0$ and suppose $Tv$ is not a multiple of $v$. Then $v$ and $Tv$ are linearly independent; extend $\langle v,Tv\rangle$ to a basis $\langle v, Tv, v_3,v_4,\ldots,v_n\rangle$. By assumption $T$ has the same matrix representation $M$ in this basis and in the basis $\langle v,v+Tv,v_3,v_4,\ldots,v_n\rangle$. But that means that the first column of $M$ is simultaneously $(0,1,0,\ldots,0)^{\mathsf t}$ and $(-1,1,0,\ldots,0)^{\mathsf t}$, which is absurd.</p> <p><em>2. All eigenvalues are the same.</em> Since every vector is an eigenvector, there exists an eigenbasis. Therefore $M$ is diagonal. It can only be invariant under permutations of the basis vectors if all of the diagonal entries are equal.</p> <p>Therefore $T$ must be scalar multiplication by the common eigenvalue.</p>
3,466,707
<p>In this problem</p> <p><span class="math-container">$$\lim\limits_{x \to \infty} \left(1-\frac{1}{x}\right)^{e^x}$$</span></p> <p>I took <span class="math-container">$\ln$</span>, which gave me a <span class="math-container">$0\times\infty$</span> indeterminate form; then I used L'Hospital's rule but could not reach an answer.</p> <p>Sorry for grammatical mistakes, my native language is not English. </p>
Focus
254,076
<p>Use <span class="math-container">$$0 \leq e^{e^x \ln(1-1/x)} \leq e^{-e^x /x},$$</span> which holds because $\ln(1-1/x) \leq -1/x$ for $x &gt; 1$. The right-hand side tends to $0$ as $x \to \infty$, so by the squeeze theorem the limit is $0$.</p>
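A quick numeric illustration of the squeeze (an addition, not part of the answer). For $x &gt; 1$, $0 \le (1-1/x)^{e^x} = e^{e^x \ln(1-1/x)} \le e^{-e^x/x}$, and both sides collapse to $0$ rapidly; for moderately large $x$ the value underflows to `0.0` in double precision:

```python
import math

def f(x):
    # (1 - 1/x)^(e^x), written via exp/log to match the squeeze bound
    return math.exp(math.exp(x) * math.log(1 - 1 / x))

def upper(x):
    # e^{-e^x / x}
    return math.exp(-math.exp(x) / x)

samples = [2, 5, 10, 20]
vals = [f(x) for x in samples]
bounds_ok = all(0 <= f(x) <= upper(x) for x in samples)
```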
506,286
<p>This relates to a recent question which asked whether it was possible to approximate $\cos x$ with a series for an odd function (series with odd-powered terms). It occurred to me the answer maybe yes is we restricted to a finite positive interval and ask for approximation instead of absolute equality.</p> <p>So here's my question. Suppose we have a pretty nice function $f$ (say, continuous) defined on a positive interval $[a,b]$ ($a,b &gt; 0$). Suppose also we are told an infinite sequence of which powers $0 \leq n_1 &lt; n_2 &lt; n_3 &lt; \ldots$ we are allowed to use (all $n_j$ non-negative integers). For $\epsilon &gt; 0$, can we find coefficients $a_{n_j}$ defining a series</p> <p>$$\hat{f}(x) = \sum_j a_{n_j} x^{n_j}$$</p> <p>so that $||f-\hat{f}||_\infty &lt; \epsilon$, where the $L_\infty$ distance is computed over the interval $[a,b]$? If we can't, then are there restrictions on $a,b$ and the sequence of powers which makes it possible, such as $\sum_j 1/n_j$ diverges, and/or $a,b$ are both greater than $1$ or both less than $1$? Or maybe also/alternatively changing the norm for the approximation error?</p>
Ross Millikan
1,827
<p>The same sort of argument about positive and negative powers applies. The Legendre polynomials are orthonormal and complete on $[-1,1]$, which says that you can't approximate any one of them well in terms of the rest. Using linear algebra techniques, you can change the basis to the monomials in $x$. The (finite approximation to the) transformation matrix is even triangular, so easy to invert.</p>
506,286
<p>This relates to a recent question which asked whether it was possible to approximate $\cos x$ with a series for an odd function (series with odd-powered terms). It occurred to me the answer maybe yes is we restricted to a finite positive interval and ask for approximation instead of absolute equality.</p> <p>So here's my question. Suppose we have a pretty nice function $f$ (say, continuous) defined on a positive interval $[a,b]$ ($a,b &gt; 0$). Suppose also we are told an infinite sequence of which powers $0 \leq n_1 &lt; n_2 &lt; n_3 &lt; \ldots$ we are allowed to use (all $n_j$ non-negative integers). For $\epsilon &gt; 0$, can we find coefficients $a_{n_j}$ defining a series</p> <p>$$\hat{f}(x) = \sum_j a_{n_j} x^{n_j}$$</p> <p>so that $||f-\hat{f}||_\infty &lt; \epsilon$, where the $L_\infty$ distance is computed over the interval $[a,b]$? If we can't, then are there restrictions on $a,b$ and the sequence of powers which makes it possible, such as $\sum_j 1/n_j$ diverges, and/or $a,b$ are both greater than $1$ or both less than $1$? Or maybe also/alternatively changing the norm for the approximation error?</p>
Daniel Fischer
83,702
<p>If we have a sequence $0 \leqslant n_1 &lt; n_2 &lt; n_3 &lt; \dotsc$ of admissible exponents (not necessarily integers), then the linear subspace of $C([a,b])$, for $0 &lt; a &lt; b$, generated by the functions $t\mapsto t^{n_k}$ is dense if and only if</p> <p>$$\sum_{k=2}^\infty \frac{1}{n_k} = \infty.\tag{1}$$</p> <p>If the lower bound of the interval is $a = 0$, then $n_1 = 0$ is necessary.</p> <p>If $a &gt; 0$ and $n_1 &gt; 0$, the function $\psi \colon t \mapsto t^{-n_1}$ is continuous, bounded, and bounded away from zero, so $f$ can be uniformly approximated by a linear combination of the $t^{n_k}$ if and only if $f\cdot \psi$ can be uniformly approximated by a linear combination of the $t^{n_k-n_1}$, and $(1)$ isn't changed by substituting $n_k$ with $n_k - n_1$.</p> <p>So we can, without loss of generality assume that $n_1 = 0$.</p> <p>Then a small variation of the proof of the Müntz-Szasz theorem given in Rudin, Real And Complex Analysis, 15.26, pp. 313/314 in the third edition, namely replacing the function</p> <p>$$f(z) = \int_{[0,1]} t^z \,d\mu(t)$$</p> <p>by</p> <p>$$\tilde{f}(z) = b^{-z}\int_{[a,b]} t^z\,d\mu(t)$$</p> <p>for a complex Borel measure $\mu$ on $[a,b]$ that annihilates all $t^{n_k}$, yields the result that the closure of the subspace generated by the $t^{n_k}$ contains all $t^k$ and hence all polynomials, and by Weierstraß' theorem all of $C([a,b])$.</p>
2,161,830
<p>Hi i was reading a book called Symmetry and Pattern in Projective Geometry by Eric Lord, in his book the author give these axioms:</p> <ol> <li>Any two distinct points are contained in a unique line.</li> <li>In any plane, any two distinct lines contain a unique common point.</li> <li>Three points that do not lie on one line are contained in a unique plane.</li> <li>Three planes that do not contain a common line contain a unique common point.</li> </ol> <p>My question is if with these axioms can i prove the statement that any line contains at least three points?</p>
rschwieb
29,335
<p>As far as I can see, a line with two points satisfies this system of axioms.</p> <p>None of these axioms postulate the existence of noncollinear points, but that is normally a feature of axioms for the projective plane and projective $3$-space.</p> <p>Perhaps the author has given these axioms in addition to some others that occurred earlier?</p>
2,161,830
<p>Hi i was reading a book called Symmetry and Pattern in Projective Geometry by Eric Lord, in his book the author give these axioms:</p> <ol> <li>Any two distinct points are contained in a unique line.</li> <li>In any plane, any two distinct lines contain a unique common point.</li> <li>Three points that do not lie on one line are contained in a unique plane.</li> <li>Three planes that do not contain a common line contain a unique common point.</li> </ol> <p>My question is if with these axioms can i prove the statement that any line contains at least three points?</p>
mrnovice
416,020
<p>If we're allowed to use this definition of a line in $\mathbb{R}^{3}$:</p> <p>$L = \vec{a} + \lambda \vec{u}: \lambda \in \mathbb{R}$, $\vec{a}, \vec{u} \in \mathbb{R}^{3}$,</p> <p>where $\vec{a}$ is a point on $L$ and $\vec{u} \neq \vec{0}$ is a direction vector,</p> <p>then by changing the value of $\lambda$ we can show that $L$ contains at least $3$ points. However, this definition doesn't come straight from those axioms, so it seems you can't prove the statement using only those axioms.</p>
1,360,658
<p>Prove: $$\frac{1-\cos 2\theta}{1-\cos\theta}=2\cos\theta-2$$</p> <p>I'm thinking that there will be something to square in this? Because I notice that the $LHS$ looks like the half-angle identity....</p> <p>Edit: I am so sorry guys, my grave mistake, the expression should have been equal to $2$ instead, like</p> <p>$$\frac{1-\cos 2\theta}{1-\cos\theta}-2\cos\theta=2$$</p> <p>BUT THANKS A LOT!</p>
Zain Patel
161,779
<p>We have, using $\cos 2\theta = 2\cos^2 \theta - 1$, $$\frac{1 - (2\cos^2 \theta -1)}{1-\cos \theta} = \frac{2(1-\cos^2 \theta)}{1-\cos \theta} = \frac{2(1-\cos \theta)(1+\cos \theta)}{1-\cos \theta} = 2(1 + \cos \theta),$$ since $1 - \cos^2 \theta$ is a difference of two squares. The result you asked for in your question is wrong. This is the correct result: $$\bbox[10px, border: solid 2px red]{\frac{1-\cos 2\theta}{1-\cos \theta} = 2 \cos \theta + 2.}$$</p> <p>As you can see, the above is equivalent to $$\frac{1-\cos 2\theta}{1-\cos \theta} - 2\cos \theta =2.$$ So the same proof works! </p>
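As a quick numerical sanity check (not a substitute for the algebra above), the simplified form can be compared with the original quotient at a few angles:

```python
import math

def lhs(theta):
    # (1 - cos 2θ) / (1 - cos θ), defined whenever cos θ != 1
    return (1 - math.cos(2 * theta)) / (1 - math.cos(theta))

def rhs(theta):
    # the simplified form 2(1 + cos θ) = 2 cos θ + 2
    return 2 * (1 + math.cos(theta))

# spot-check the identity away from multiples of 2π, where the LHS is undefined
for theta in (0.3, 1.0, 2.0, 3.0, 5.5):
    assert abs(lhs(theta) - rhs(theta)) < 1e-9
```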
2,619,344
<p>I was just struck by a doubt today:</p> <blockquote> <p>Why do most of the standard inequalities require the variables to be positive?</p> </blockquote> <p>For example, if we want to find the minimum value of a certain expression, say <span class="math-container">$a+b+c$</span>, the very first thought that comes to mind is the AM–GM inequality, but the question must satisfy the condition <span class="math-container">$\mathbf {a, b, c\ge 0}$</span>.</p> <p>So I want to ask why that is so.</p> <blockquote> <p>Even in some very useful inequalities like the Muirhead inequality, Hölder's inequality, Minkowski's inequality, etc., we need the condition that the variables used must be non-negative or positive.</p> <p>Meanwhile, there are also some inequalities, like Chebyshev's inequality, the rearrangement inequality, and the Cauchy–Schwarz inequality, which do not require the variables or terms to be positive.</p> </blockquote> <p>I want to know why such a condition is needed to make the inequalities true. Is there a mathematical reason for it? Does this have anything to do with geometry? (I saw proofs of the AM–GM inequality using geometry, and since the variables used were the lengths of some segments, they were confined to be non-negative.)</p> <p>If someone has an idea, please share.</p>
Shashi
349,501
<p>Your series is absolutely convergent. There is no need for many calculations. Notice that the supremum norm of $f$ exists due to the continuity of $f$ on a compact set. You can see the absolute convergence by using the following: \begin{align} \sum_{n\geq 1}\frac{1}{n}\bigg | \int^1_0 f(x) e^{-nx} dx\bigg |&amp;\leq \sum_{n\geq 1} \frac{\Vert f\Vert_\infty}{n}\int^1_0e^{-nx} dx\\&amp;=\sum_{n\geq 1}\frac{\Vert f\Vert_\infty}{n^2}(1-e^{-n}) \end{align} As usual, $\Vert f\Vert_\infty :=\sup_{x\in [0,1]}|f(x)|$.</p>
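The bounding series at the end converges since it is dominated by $\sum 1/n^2 = \pi^2/6$. A small numerical illustration (my own sketch, taking $\Vert f\Vert_\infty = 1$):

```python
import math

def partial(N):
    # partial sum of Σ (1 - e^{-n}) / n², the bound above with ||f||∞ = 1
    return sum((1 - math.exp(-n)) / n**2 for n in range(1, N + 1))

sums = [partial(N) for N in (10, 100, 1000)]
# the partial sums increase and stay below Σ 1/n² = π²/6 ≈ 1.6449
assert sums[0] < sums[1] < sums[2] < math.pi**2 / 6
```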
1,330,362
<p>I need to define in ZFC the following things:</p> <ul> <li>the image and domain of a binary relation ($\{ x \mid \exists y\, (x,y)\in f \}$ would be a definition of the domain, but it is a class for which it is not quite clear to me why it is a set)</li> <li>$f[X]$ for a binary relation $f$ and a set $X$</li> </ul> <p>Please describe a more or less formal way to define these things in ZFC.</p>
Michael Hardy
11,667
<p>If $b&gt;1$ then $b^x\to\infty$ as $x\to\infty$ and $b^x\to 0$ as $x\to-\infty$.</p> <p>If $0&lt;b&lt;1$ then $b^x\to\infty$ as $x\to-\infty$ and $b^x\to 0$ as $x\to\infty$.</p> <p>Either way, $b^x$ runs through all of the interval $(0,\infty)$.</p> <p>Similarly for $c^x$.</p> <p>If $b&gt;1$ and $c&gt;1$ then $b^x+c^x\to\infty$ as $x\to\infty$ and $b^x+c^x\to0$ as $x\to-\infty$.</p> <p>If $b&lt;1$ and $c&lt;1$ then $b^x+c^x\to\infty$ as $x\to-\infty$ and $b^x+c^x\to0$ as $x\to\infty$.</p> <p>Either way $b^x+c^x$ runs through the whole of $(0,\infty)$, and by the intermediate value theorem, a solution exists if $a&gt;0$.</p> <p>In those cases, you can tell whether $x$ is too big or too small, and after narrowing it down a bit, use Newton's method.</p> <p>If $b&gt;1$ and $c&lt;1$ or vice-versa, then it's more complicated: there is a positive number that is an absolute minimum and a solution exists if, but only if, $a\ge\text{the minimum}$.</p> <p>The minimum occurs at a value of $x$ for which $\dfrac d{dx}(b^x+c^x)=0$, so $$ b^x\log_e b + c^x \log_e c =0. $$ $$ \left( \frac b c \right)^x = \frac{-\log_e c}{\log_e b} = -\log_b c. $$ $$ x = \log_{b/c} (-\log_b c) = \log_{c/b}(-\log_c b). \tag 1 $$ (Notice that if either $b&lt;1&lt;c$ or $c&lt;1&lt;b$ then $\log_b c&lt;0$ so $-\log_b c&gt;0$ and we can take its logarithm.) The phrase "$x=$" in $(1)$ <b>does not mean that that is a solution.</b> Rather it means that if you plug that value of $x$ into $b^x+c^x$ you get the smallest value of $a$ for which a solution exists.</p>
1,330,362
<p>I need to define in ZFC the following things:</p> <ul> <li>image and domain of a binary relation ($\{ x \mid (x,y)\in f \}$ would be a definition of domain, but it is a class for which is for me is not quite clear why it is a set)</li> <li>f[X] for a binary relation $f$ and a set $X$</li> </ul> <p>Please describe a more or less formal way to describe these things in ZFC.</p>
Claude Leibovici
82,404
<p>As Omnomnomnom answered, in general there will not be explicit solutions, and numerical methods, such as Newton's, would be used.</p> <p>Considering $$f(x)=b^x+c^x-a$$ we know that the function is bracketed by $2 b^x-a$ and $2c^x-a$, which allows us to define an approximate range for the solution.</p> <p>But the function $f(x)$ can be very stiff and so the convergence could be rather slow. If instead we consider $$g(x)=\log(b^x+c^x)-\log(a)$$ the function would probably look like a straight line and convergence could be very fast.</p> <p>Just in case you need it, starting with a "reasonable" guess of the solution $x_0$, Newton's method will update it according to $$x_{n+1}=x_n-\frac{F(x_n)}{F'(x_n)}$$</p> <p>For illustration purposes, let us choose $b=2$, $c=3$, $a=10^6$. Using the bounding functions, we know that the solution $x$ will be more or less between $12$ and $19$. So, let us start with $x_0=15$.</p> <p>With $f(x)$, the successive iterates will then be $14.1523$, $13.4017$, $12.8562$, $12.6105$, $12.5708$, $12.5699$, which is the solution to six significant figures.</p> <p>Doing the same with $g(x)$ from the same starting point, the successive iterates will be $12.5713$, $12.5699$. Quite a bit faster, isn't it?</p> <p><strong>Edit</strong></p> <p>For the initial guess of the solution, we can do much better:</p> <ul> <li>use the bounding functions to get the upper and lower bounds of the solution</li> <li>compute the value of the function $g(x)$ at these two points</li> <li>assume that these two points are along a straight line ($g(x)\approx A x+B$ in the interval)</li> <li>compute the parameters $A$ and $B$ and $x=-\frac BA$. 
This gives an estimate $x_0$ of the solution.</li> </ul> <p>The two points have coordinates $$x_b=\frac{\log \left(\frac{a}{2}\right)}{\log (b)}~~~~\,~~~~ y_b=\log(b^{x_b}+c^{x_b})-\log(a) $$ $$x_c=\frac{\log \left(\frac{a}{2}\right)}{\log (c)}~~~~\,~~~~ y_c=\log(b^{x_c}+c^{x_c})-\log(a)$$ $$A=\frac{{y_c}-{y_b}}{{x_c}-{x_b}}~~~~\,~~~~ B= \frac{{x_c} {y_b}-{x_b} {y_c}}{{x_c}-{x_b}}$$ $$x_0=\frac {x_c y_b-x_b y_c}{y_b-y_c}$$ For the worked example, this gives $x_0\approx 12.5689$ and a single Newton iteration should suffice.</p> <p><strong>Edit</strong></p> <p>By the way, you could use the same procedure for finding the root of $$f(x)=\sum_{i=1}^n a_i^x-b=0$$ rewriting it as $$g(x)=\log\Big(\sum_{i=1}^n a_i^x\Big)-\log(b)=0$$ Admitting $a_1&gt;a_2&gt;a_3&gt;\cdots&gt;a_n&gt;1$, the solution will be such that $$\frac{\log \left(\frac{b}{n}\right)}{\log (a_1)}&lt;x&lt;\frac{\log \left(\frac{b}{n}\right)}{\log (a_n)}$$</p> <p>If you are lazy and do not want to perform these preliminary calculations, develop $g(x)$ as a Taylor series at $x=0$ and use as estimate $$x_0=\frac{n \log \left(\frac{b}{n}\right)}{\log \left(\prod _{i=1}^n a_i\right)}$$ </p>
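The $g(x)$ iteration described above is easy to reproduce. Here is a Python sketch (the function name and structure are mine) applied to the worked example $2^x+3^x=10^6$:

```python
import math

def solve_pow_sum(b, c, a, x0, tol=1e-12, max_iter=50):
    # Newton's method on g(x) = log(b**x + c**x) - log(a); since g is
    # nearly linear, convergence from a rough starting point is fast
    g = lambda x: math.log(b**x + c**x) - math.log(a)
    gp = lambda x: (b**x * math.log(b) + c**x * math.log(c)) / (b**x + c**x)
    x = x0
    for _ in range(max_iter):
        step = g(x) / gp(x)
        x -= step
        if abs(step) < tol:
            break
    return x

x = solve_pow_sum(2, 3, 10**6, x0=15)
assert abs(2**x + 3**x - 10**6) < 1e-3   # residual at the computed root
assert abs(x - 12.5699) < 1e-3           # matches the answer's six figures
```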
1,260,722
<blockquote> <p>Prove that <span class="math-container">$f=x^4-4x^2+16\in\mathbb{Q}[x]$</span> is irreducible.</p> </blockquote> <p>I am trying to prove it with Eisenstein's criterion, but without success: for <strong>p=2</strong>, it divides <strong>-4</strong> and the constant coefficient 16 and doesn't divide the leading coefficient 1, but its square 4 divides the constant coefficient 16, so it doesn't work. Therefore I tried to find an <span class="math-container">$f(x\pm c)$</span> which is irreducible:</p> <blockquote> <p><span class="math-container">$f(x+1)=x^4+4x^3+2x^2-4x+13$</span>, but 13 has only the divisors <strong>1 and 13</strong>, so there is no prime number <strong>p</strong> satisfying the first condition <span class="math-container">$p|a_i, i\ne n$</span>; the same problem occurs for <span class="math-container">$f(x-1)=x^4+...+13$</span>.</p> <p>For <span class="math-container">$f(x+2)=x^4+8x^3+20x^2+16x+16$</span> it is the same problem we started with: if we set <strong>p=2</strong>, then <span class="math-container">$2|8, 2|20, 2|16$</span> and 2 doesn't divide the leading coefficient 1, but its square 4 divides the constant coefficient 16; again, it doesn't work. It is the same problem for <strong>x-2</strong>.</p> </blockquote> <p>Now I'll check <span class="math-container">$f(x\pm3)$</span>, but I think it will fail too... I suspect that no constant shift <span class="math-container">$f(x\pm c)$</span> will work with this method... so does anyone have an idea how we can prove that <span class="math-container">$f$</span> is irreducible?</p>
Rolf Hoyer
228,612
<p>I think you should try using a method other than Eisenstein's criterion. By Gauss' lemma it suffices to show that $f(x)$ does not factor over the integers, so you need only show the following:</p> <ul> <li>There is no integer root: ie $f(x) \ne 0 $ for $x$ an integer dividing $16$.</li> <li>There is no quadratic factorization: You cannot write $f(x) = (x^2+ax+b)(x^2+cx+d)$.</li> </ul> <p>The first of these is straightforward to verify, and deriving a contradiction from the second is not unreasonable with this particular $f(x)$.</p>
1,260,722
<blockquote> <p>Prove that <span class="math-container">$f=x^4-4x^2+16\in\mathbb{Q}[x]$</span> is irreducible.</p> </blockquote> <p>I am trying to prove it with Eisenstein's criterion, but without success: for <strong>p=2</strong>, it divides <strong>-4</strong> and the constant coefficient 16 and doesn't divide the leading coefficient 1, but its square 4 divides the constant coefficient 16, so it doesn't work. Therefore I tried to find an <span class="math-container">$f(x\pm c)$</span> which is irreducible:</p> <blockquote> <p><span class="math-container">$f(x+1)=x^4+4x^3+2x^2-4x+13$</span>, but 13 has only the divisors <strong>1 and 13</strong>, so there is no prime number <strong>p</strong> satisfying the first condition <span class="math-container">$p|a_i, i\ne n$</span>; the same problem occurs for <span class="math-container">$f(x-1)=x^4+...+13$</span>.</p> <p>For <span class="math-container">$f(x+2)=x^4+8x^3+20x^2+16x+16$</span> it is the same problem we started with: if we set <strong>p=2</strong>, then <span class="math-container">$2|8, 2|20, 2|16$</span> and 2 doesn't divide the leading coefficient 1, but its square 4 divides the constant coefficient 16; again, it doesn't work. It is the same problem for <strong>x-2</strong>.</p> </blockquote> <p>Now I'll check <span class="math-container">$f(x\pm3)$</span>, but I think it will fail too... I suspect that no constant shift <span class="math-container">$f(x\pm c)$</span> will work with this method... so does anyone have an idea how we can prove that <span class="math-container">$f$</span> is irreducible?</p>
Zarrax
3,035
<p>You've seen that $f(x)$ has no roots, so you want to exclude factorizations of the form $$f(x) = (x^2 + ax + b)(x^2 + cx + d)$$ Since $f(x) = f(-x)$, the above implies $$f(x) = (x^2 - ax + b)(x^2 - cx + d)$$ Here $a,b,c$, and $d$ are integers by Gauss's Lemma.</p> <p>So a given root $r$ of $x^2 - ax + b$ is a root of $x^2 + ax + b$ or $x^2 + cx + d$. </p> <p>If $r$ is a root of $x^2 + ax + b$, it is the root of the difference $x^2 + ax + b - (x^2 - ax + b) = 2ax$, which implies $a = 0$ since zero is not a root of $f(x)$.</p> <p>If $r$ is a root of $x^2 + cx + d$ it is similarly a root of the difference $(c + a)x + (d - b)$, and since $f(x)$ has no rational roots $c = -a$ and $d = b$.</p> <p>So either $a = 0$ or $c = -a$ and $d = b$. Since the argument is entirely symmetric in the two factors, we also either have $c = 0$ or $c = -a$ and $d = b$. Hence we have two possibilities: Either $a = c = 0$ or $c = -a$ and $d = b$.</p> <p>In the first case we have $$x^4 - 4x^2 + 16 = (x^2 + b)(x^2 + d)$$ But the roots of $y^2 - 4y + 16$ are irrational (they're not even real) so this can't happen.</p> <p>In the second case we have $$x^4 - 4x^2 + 16 = (x^2 + ax + b)(x^2 - ax + b) = x^4 + (2b - a^2) x^2 + b^2$$ Hence $b = \pm 4$ and therefore either $8 - a^2 = -4$ or $-8 - a^2 = -4$, neither of which has rational solutions. </p> <p>Hence $f(x)$ is irreducible.</p>
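The case analysis above can be double-checked by brute force. This sketch (mine, not the answerer's) verifies both the no-integer-root step and the no-quadratic-factor step for $x^4-4x^2+16$:

```python
# all integer divisors of 16: the candidate rational roots and constant terms
divisors = [d for d in range(-16, 17) if d != 0 and 16 % abs(d) == 0]

# step 1: no integer root of x^4 - 4x^2 + 16
assert all(r**4 - 4 * r**2 + 16 != 0 for r in divisors)

# step 2: no factorization (x^2 + ax + b)(x^2 + cx + d) over Z.
# Matching coefficients forces c = -a, b*d = 16, a*(d - b) = 0
# and b + d - a^2 = -4, so a small search suffices.
found = []
for b in divisors:
    d = 16 // b                  # exact, since b divides 16
    for a in range(-7, 8):       # a^2 = b + d + 4 <= 36 bounds |a| by 6
        if a * (d - b) == 0 and b + d - a * a == -4:
            found.append((a, b, -a, d))
assert found == []               # hence irreducible over Q by Gauss's lemma
```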
43,382
<p>Let <span class="math-container">$C$</span> be a coalgebra and <span class="math-container">$\Delta: C \to C\otimes C$</span> a comultiplication map. Then, due to coassociativity, we can consider <span class="math-container">$\Delta^m$</span>. But how is <span class="math-container">$\Delta^{m}: C \to C^{\otimes m}$</span> defined?</p> <p>Given <span class="math-container">$f,g \in C$</span> and <span class="math-container">$1\leq k \leq m$</span>, can we have</p> <p><span class="math-container">$$\begin{align*}\Delta^{m-1}(fg)&amp;=\Delta^{k-1}(fg) \otimes \operatorname{id}^{m-k} + \Delta^{k-1}(f)\otimes \Delta^{m-k-1}(g) \\ &amp;+\Delta^{k-1}(g)\otimes \Delta^{m-k-1}(f)+\operatorname{id}^{\otimes k} \otimes \Delta^{m-k-1}(fg)?\end{align*}$$</span></p> <p>Thanks.</p>
Mariano Suárez-Álvarez
1,409
<p>We <em>usually</em> define $$\Delta^3=(\mathrm{id}_C\otimes\Delta)\circ\Delta,$$ and, more generally, $$\Delta^{k+1}=(\mathrm{id}_C^{k-1}\otimes\Delta)\circ\Delta^k.$$</p> <p>(Your last formula does not make sense in a coalgebra: you seem to be multiplying elements of $C$; even in bialgebra, though, things like "$\Delta(fg)\otimes\mathrm{id}$" are pretty strange...)</p>
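The recursion can be made concrete on a toy example. The sketch below is my own; it assumes the divided-power coalgebra with $\Delta(e_n)=\sum_{i=0}^{n} e_i\otimes e_{n-i}$, implements the iterated comultiplication, and checks coassociativity on basis elements:

```python
from collections import defaultdict

def delta_basis(n):
    # Δ(e_n) = Σ_{i=0}^{n} e_i ⊗ e_{n-i} in the divided-power coalgebra
    return {(i, n - i): 1 for i in range(n + 1)}

def apply_delta(t, slot):
    # apply Δ to one tensor slot of t ∈ C^{⊗m}, stored as {index tuple: coeff}
    out = defaultdict(int)
    for idx, coeff in t.items():
        for (i, j), c in delta_basis(idx[slot]).items():
            out[idx[:slot] + (i, j) + idx[slot + 1:]] += coeff * c
    return dict(out)

def delta_power(n, m):
    # Δ^m(e_n) ∈ C^{⊗m} via the recursion Δ^{k+1} = (id^{k-1} ⊗ Δ) ∘ Δ^k
    t = {(n,): 1}
    for k in range(m - 1):
        t = apply_delta(t, k)   # Δ always acts on the last slot
    return t

# coassociativity: expanding the first or the last slot of Δ(e_n) agrees
for n in range(6):
    t = apply_delta({(n,): 1}, 0)
    assert apply_delta(t, 0) == apply_delta(t, 1)
```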
43,382
<p>Let <span class="math-container">$C$</span> be a coalgebra and <span class="math-container">$\Delta: C \to C\otimes C$</span> a comultiplication map. Then, due to coassociativity, we can consider <span class="math-container">$\Delta^m$</span>. But how is <span class="math-container">$\Delta^{m}: C \to C^{\otimes m}$</span> defined?</p> <p>Given <span class="math-container">$f,g \in C$</span> and <span class="math-container">$1\leq k \leq m$</span>, can we have</p> <p><span class="math-container">$$\begin{align*}\Delta^{m-1}(fg)&amp;=\Delta^{k-1}(fg) \otimes \operatorname{id}^{m-k} + \Delta^{k-1}(f)\otimes \Delta^{m-k-1}(g) \\ &amp;+\Delta^{k-1}(g)\otimes \Delta^{m-k-1}(f)+\operatorname{id}^{\otimes k} \otimes \Delta^{m-k-1}(fg)?\end{align*}$$</span></p> <p>Thanks.</p>
user91132
6,827
<p>To add to the answer above, I'd like to advertise the so-called <em>sumless Sweedler notation</em> here. </p> <p>This notation works as follows: let $C$ be a coalgebra; then if $c \in C$ then we write $\Delta(c) = c_1 \otimes c_2$ as an abbreviation of the more precise $\Delta(c) = \sum_{i=1}^k c_{i1} \otimes c_{i2}$ for some $c_{i1}, c_{i2} \in C, i = 1,\ldots, k$. Then if we wish to consider $(1 \otimes \Delta) \circ \Delta : C \to C\otimes C \otimes C$ we simply write </p> <p>$$[(1 \otimes \Delta) \circ \Delta] (c) = c_1 \otimes c_2 \otimes c_3.$$</p> <p>In this notation, the coassociativity axiom $(1 \otimes \Delta) \circ \Delta = (\Delta \otimes 1) \circ \Delta$ becomes the inevitable </p> <p>$$ c_1 \otimes (c_2 \otimes c_3) = (c_1 \otimes c_2) \otimes c_3.$$</p> <p>Sweedler's book "Hopf algebras", Susan Montgomery's book "Hopf algebras and their actions on rings" and the Wikipedia article <a href="http://en.wikipedia.org/wiki/Coalgebra">http://en.wikipedia.org/wiki/Coalgebra</a> are good references.</p>
6,090
<p>I have been reading about the Riemann zeta function and have been thinking about it for some time.</p> <p>Has anything been published regarding an upper bound for the real part of the zeta function's zeros as the imaginary part of the zeros tends to infinity?</p> <p>Thanks</p>
Aryabhata
1,102
<p>I would suggest you read the book "Riemann's Zeta Function" by H. M. Edwards.</p> <p>As to your question, this is one theorem (from the above book):</p> <hr> <p><strong>De La Vallée Poussin's Theorem (1899)</strong>: There exist constants $c &gt; 0, K &gt; 1$ such that</p> <p>$$ \beta &lt; 1 - \frac{c}{\log \gamma}$$</p> <p>for all roots $\rho = \beta + i \gamma$ in the range $\gamma &gt; K$.</p> <hr> <p>I am pretty sure there will be more such theorems (the book itself mentions that the above theorem has been improved upon). The book also mentions (as of 1974) that the bound $\beta &lt; 1$ (real part of the root) has not been improved!</p> <p>Hope that helps.</p>
4,506,196
<p>If <span class="math-container">$R$</span> is a commutative Hilbert ring then each of its prime ideals is an intersection of maximal ideals. Is there a similar class of commutative rings for which every ideal is an intersection of prime ideals (that is, in which all ideals are radical)? Does this class have a name? Have the rings in it been characterized?</p>
Qiaochu Yuan
232
<p>This is an extremely rare condition. It implies that every ideal <span class="math-container">$I$</span> satisfies <span class="math-container">$I^2 = I$</span>; that is, every ideal is idempotent. <a href="https://math.stackexchange.com/questions/42727/finitely-generated-idempotent-ideals-are-principal-proof-without-using-nakayama">It's known</a> that if <span class="math-container">$I$</span> is a finitely generated idempotent ideal then it must be the principal ideal generated by an idempotent element <span class="math-container">$(e)$</span>, so we conclude that every finitely generated ideal is generated by an idempotent.</p> <p>The simplest class of rings with this property are the ones in which every element is idempotent; these are the <a href="https://en.wikipedia.org/wiki/Boolean_ring" rel="nofollow noreferrer">Boolean rings</a>. By a version of <a href="https://en.wikipedia.org/wiki/Stone%27s_representation_theorem_for_Boolean_algebras" rel="nofollow noreferrer">Stone's representation theorem</a> the category of Boolean rings is known to be contravariantly equivalent to the category of profinite sets, with the functor to profinite sets given by taking <span class="math-container">$\text{Spec}$</span> and the functor in the other direction given by taking continuous functions valued in <span class="math-container">$\mathbb{F}_2$</span>. 
It follows that if <span class="math-container">$I$</span> is an ideal of a Boolean ring <span class="math-container">$B$</span> then the quotient map <span class="math-container">$B \to B/I$</span> corresponds via this duality to an injection <span class="math-container">$\text{Spec}(B/I) \to \text{Spec}(B)$</span>, and the question of whether <span class="math-container">$I$</span> is an intersection of prime ideals is dual to the question of whether <span class="math-container">$\text{Spec}(B/I)$</span> is a join, in the lattice of subobjects of <span class="math-container">$\text{Spec}(B)$</span>, of points, which it is. So Boolean rings satisfy the desired condition.</p> <hr /> <p>Some general comments although they don't lead to a full classification: the property that every finitely generated ideal is generated by an idempotent passes to quotients, so if <span class="math-container">$R$</span> is such a ring and <span class="math-container">$P$</span> is a prime ideal of it then <span class="math-container">$R/P$</span> also satisfies this property, and since the only idempotents are <span class="math-container">$0$</span> and <span class="math-container">$1$</span> it follows that every finitely generated ideal is either zero or the unit ideal, hence that <span class="math-container">$R/P$</span> is a field. So every prime ideal of <span class="math-container">$R$</span> is maximal, meaning <span class="math-container">$R$</span> has Krull dimension zero. It also follows that <span class="math-container">$R$</span> is a Jacobson ring and hence its nilradical and Jacobson radical coincide. Since every ideal satisfies <span class="math-container">$I^2 = I$</span> no nonzero ideal can be nilpotent, so no nonzero element can be nilpotent either. So <span class="math-container">$R$</span> has trivial nilradical, hence trivial Jacobson radical. 
If <span class="math-container">$R$</span> is Noetherian it now follows that <span class="math-container">$R$</span> is a finite product of fields, and it's not hard to check that any finite product of fields satisfies the desired condition. I suspect an infinite product of fields also satisfies this condition (Boolean rings are precisely the subrings of arbitrary products of copies of <span class="math-container">$\mathbb{F}_2$</span>) but I have not checked this.</p>
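A tiny computational illustration of the dichotomy (my sketch, not part of the answer): in $\mathbb{Z}/n$, ideals correspond to divisors of $n$, and every ideal is radical exactly when $n$ is squarefree, i.e. when $\mathbb{Z}/n$ is a finite product of fields:

```python
def ideals(n):
    # ideals of Z/n correspond to divisors d of n: I_d = {0, d, 2d, ...}
    return [set(range(0, n, d)) for d in range(1, n + 1) if n % d == 0]

def is_radical(I, n):
    # I is radical iff x^k ∈ I for some k >= 1 forces x ∈ I
    for x in range(n):
        if x not in I and any(pow(x, k, n) in I for k in range(1, n + 1)):
            return False
    return True

# Z/30 = F_2 × F_3 × F_5: every ideal is radical
assert all(is_radical(I, 30) for I in ideals(30))
# Z/12 is not reduced: the ideal (4) contains 2^2 but not 2
assert not is_radical(set(range(0, 12, 4)), 12)
```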
1,139,789
<p>Is $f(z)=z^n$ holomorphic?</p> <p>I have tested a number of other functions using the Cauchy–Riemann equations $u_x=v_y$, $v_x=-u_y$. However, in the case of $f(z)=z^n$ I cannot think of a way to find the functions $u(x,y)$ and $v(x,y)$ without using a binomial expansion of $(x+iy)^n$. </p> <p>Any help or pointers are appreciated.</p> <p>Edit: the problem requires the use of the Cauchy–Riemann equations and not the formal definition of complex differentiation.</p>
Adam Hughes
58,831
<p>The easiest way is just to use the definition:</p> <p>$$\lim_{z\to a}{z^n-a^n\over z-a}=\lim_{z\to a}(z^{n-1}+az^{n-2}+\ldots +a^{n-2}z+a^{n-1})=na^{n-1}.$$</p> <p>Since $a$ was arbitrary, the derivative exists at all points $a\in\Bbb C$.</p>
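A numerical spot-check of the limit (my own illustration): the complex difference quotient approaches $na^{n-1}$ as $z \to a$, regardless of the direction of approach.

```python
# difference quotient (z^n - a^n)/(z - a) at z = a + h for small complex h
n = 5
a = 1.3 + 0.7j
h = 1e-6 * (1 + 1j) / abs(1 + 1j)   # approach a along a diagonal direction
quotient = ((a + h)**n - a**n) / h
assert abs(quotient - n * a**(n - 1)) < 1e-4
```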
2,752,353
<p>In general, if I have $|\Psi\rangle = (|\Psi_{1_1}\rangle \otimes |\Psi_{1_2}\rangle + |\Psi_{2_1}\rangle \otimes |\Psi_{2_2}\rangle)$, can I find $|\Psi_{3_1}\rangle$ and $|\Psi_{3_2}\rangle$, such that $|\Psi\rangle = |\Psi_{3_1}\rangle \otimes |\Psi_{3_2}\rangle$? Here $\otimes$ means tensor product and $|\Psi\rangle$ means a vector. No assumption is made about any relationship between the $|\Psi_{i_j}\rangle$, except that they are all the same dimension and their components are complex numbers.</p> <p>The motivation is the quantum double slit experiment, where the wave state, $|\Psi\rangle$, between the slits and the detector, is the sum of two interfering waves, and $|\Psi\rangle$ is still in a "pure state", which means that $|\Psi\rangle$ can also be written as a tensor product.</p>
Arnaud Mortier
480,423
<p>In general the answer is no and you can clearly see why when you look at the dimensions: $\dim (V\otimes W)$ is much larger than $\dim (V\times W)$ whenever $V$ and $W$ are both of dimension greater than $1$. This tells you that in general a tensor is more than just a couple of vectors.</p> <p>Tensors of the form $a\otimes b$ are called <em>pure tensors</em>.</p> <hr> <p>Edit: As was correctly noted in the comments, the dimensional observation does not provide a proof just by itself, it is rather a convenient way to remember this fact. If $\{e_i\}_{i=1}^n$ is a basis of $V$ and $\{f_i\}_{i=1}^m$ is a basis of $W$ then a basis of $V\otimes W$ is given by $\{e_i\otimes f_j\}$ (dimension $nm$) and an element of $V\otimes W$ can be represented by an $n\times m$ matrix where the entries are coordinates in that basis. Now with this description, pure tensors are matrices of rank $1$. Except when $\dim V$ or $\dim W$ is equal to $1$, not all matrices have rank $1$. </p> <p>The subset of pure tensors is not a subspace, however, so in a way the dimensional observation might be misleading.</p>
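For $V = W = \mathbb{C}^2$ the rank criterion is just a determinant check. A small sketch (my own, using the answer's identification of tensors with $2\times 2$ coefficient matrices):

```python
def is_simple(M):
    # a tensor Σ c_ij e_i ⊗ f_j in C^2 ⊗ C^2 with coefficient matrix
    # M = (c_ij) is a pure (simple) tensor iff rank(M) <= 1, i.e. det(M) == 0
    return M[0][0] * M[1][1] - M[0][1] * M[1][0] == 0

# an outer product a ⊗ b always gives the rank-<=1 matrix (a_i * b_j)
a, b = (2, 3), (5, -1)
assert is_simple([[a[i] * b[j] for j in range(2)] for i in range(2)])

# e_1 ⊗ f_1 + e_2 ⊗ f_2 has coefficient matrix I, rank 2: not a pure tensor
assert not is_simple([[1, 0], [0, 1]])
```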
2,752,353
<p>In general, if I have $|\Psi\rangle = (|\Psi_{1_1}\rangle \otimes |\Psi_{1_2}\rangle + |\Psi_{2_1}\rangle \otimes |\Psi_{2_2}\rangle)$, can I find $|\Psi_{3_1}\rangle$ and $|\Psi_{3_2}\rangle$, such that $|\Psi\rangle = |\Psi_{3_1}\rangle \otimes |\Psi_{3_2}\rangle$? Here $\otimes$ means tensor product and $|\Psi\rangle$ means a vector. No assumption is made about any relationship between the $|\Psi_{i_j}\rangle$, except that they are all the same dimension and their components are complex numbers.</p> <p>The motivation is the quantum double slit experiment, where the wave state, $|\Psi\rangle$, between the slits and the detector, is the sum of two interfering waves, and $|\Psi\rangle$ is still in a "pure state", which means that $|\Psi\rangle$ can also be written as a tensor product.</p>
chhro
255,099
<p>The matrix $A=\begin{pmatrix} 1&amp; 0&amp; 0&amp; 0\\ 0&amp;0&amp; 0&amp;0\\ 0&amp;0&amp;0&amp;0\\0&amp;0&amp;0&amp;1\end{pmatrix}$ is a sum of two simple tensors which is not a simple tensor itself. In particular, $A=E_{11}\otimes E_{11}+E_{22}\otimes E_{22}$ where $E_{11}=\begin{pmatrix} 1&amp;0\\0&amp;0\end{pmatrix}$ and $E_{22}=\begin{pmatrix} 0&amp;0\\0&amp;1\end{pmatrix}$. In fact, this is one of the strengths of quantum science: not all quantum states are pure states. In linear algebra terms, not all tensors are simple tensors.</p>
4,202,451
<p>I have been reading about the Miller-Rabin primality test. So far I think I get it, except for the part where the accuracy is stated.<br /> E.g. from <a href="https://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test#Accuracy" rel="nofollow noreferrer">wiki</a>:</p> <blockquote> <p>The error made by the primality test is measured by the probability for a composite number to be declared probably prime. The more bases a are tried, the better the accuracy of the test. It can be shown that if n is composite, then at most 1⁄4 of the bases a are strong liars for n. As a consequence, if n is composite then running k iterations of the Miller–Rabin test will declare n probably prime with a probability at most <span class="math-container">$4^{−k}$</span>.</p> </blockquote> <p>So if I understand correctly: if we have a large number <span class="math-container">$N$</span> and <span class="math-container">$k$</span> random witnesses, and none of them observes the non-primality of <span class="math-container">$N$</span>, then the probability that <span class="math-container">$N$</span> is <strong>not</strong> a prime is at most <span class="math-container">$1$</span> in <span class="math-container">$4^k$</span>.</p> <p>What I am not clear about is where this <span class="math-container">$\frac{1}{4}$</span> comes from.<br /> I understand we have <span class="math-container">$4$</span> conditions to be met (in order), i.e.:</p> <ol> <li><span class="math-container">$a \not\equiv 0 \mod N$</span></li> <li><span class="math-container">$a^{N-1} \not\equiv 1 \mod N$</span></li> <li><span class="math-container">$x^2 \equiv 1 \mod N$</span></li> <li><span class="math-container">$x \equiv \pm 1 \mod N$</span></li> </ol> <p>The process is the following:<br /> In the above, <span class="math-container">$a$</span> is the witness. 
We first check condition (1).<br /> If that passes, we check condition (2).<br /> To do that we start multiplying <span class="math-container">$a, a \cdot a, a\cdot a\cdot a ....$</span> until we calculate <span class="math-container">$a^{N-1}$</span>.<br /> To do that efficiently we can use the squaring method. If in the process of squaring we encounter a number <span class="math-container">$x$</span> such that <span class="math-container">$x^2 \equiv 1$</span> but <span class="math-container">$x \not\equiv 1$</span> and <span class="math-container">$x \not\equiv -1$</span> (e.g. <span class="math-container">$19^2 \equiv 1 \pmod {40}$</span> but <span class="math-container">$19 \not \equiv 1 \pmod {40}$</span> and <span class="math-container">$19 \not \equiv -1 \pmod {40}$</span>), then conditions (3) and (4) fail; otherwise we proceed multiplying.<br /> We check the final product for condition (2).</p> <p>Does the <span class="math-container">$1/4$</span> mean that at most <span class="math-container">$1$</span> of these can indicate a prime? If so, how is that validated?</p>
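The procedure sketched in the question is the standard Miller–Rabin algorithm. Here is a textbook Python version (the variable names and the choice of $k$ are mine), in which each random base has at most a $1/4$ chance of being a strong liar:

```python
import random

def miller_rabin(n, k=20):
    # probabilistic primality test: False means certainly composite,
    # True means n passed k rounds (error probability at most 4**-k)
    if n < 2:
        return False
    for p in (2, 3, 5, 7):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x == 1 or x == n - 1:
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:     # hit -1: this base is not a witness
                break
        else:
            return False       # a witnesses compositeness

    return True

assert miller_rabin(2**61 - 1)       # a Mersenne prime
assert not miller_rabin(3215031751)  # composite, yet 2,3,5,7 are strong liars
```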
Ritam_Dasgupta
925,091
<p>The inductive hypothesis is: <span class="math-container">$$\prod_{i=1}^{k} \frac {k+i}{2i-3}=2^k(1-2k)$$</span> The LHS can be written as: <span class="math-container">$$\frac {k+1}{-1} \cdot \frac {k+2}{1} \cdot \frac {k+3}{3}...\cdot \frac {k+k}{2k-3}=\left(\frac {k+2}{-1} \cdot \frac {k+3}{1} \cdot \frac {k+4}{3}...\cdot \frac {k+k}{2k-5}\right)\cdot \left(\frac {k+1}{2k-3}\right)=\left(\left(\prod_{i=1}^{k+1} \frac {k+1+i}{2i-3}\right)\left(\frac {2k-3}{2k+1} \cdot \frac {2k-1}{2k+2}\right)\right) \cdot \left(\frac {k+1}{2k-3}\right)=2^k(1-2k)$$</span> Now simplify the terms other than the <span class="math-container">$\prod$</span>. You are left with: <span class="math-container">$$\prod_{i=1}^{k+1} \frac {k+1+i}{2i-3}=-2^{k+1}(1+2k)$$</span> This completes the inductive step.</p>
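The closed form being proved can be spot-checked with exact rational arithmetic (a sanity check, not a proof):

```python
from fractions import Fraction

def lhs_product(k):
    # ∏_{i=1}^{k} (k+i)/(2i-3), computed exactly
    p = Fraction(1)
    for i in range(1, k + 1):
        p *= Fraction(k + i, 2 * i - 3)
    return p

# agrees with 2^k (1 - 2k) for the first several k
for k in range(1, 12):
    assert lhs_product(k) == 2**k * (1 - 2 * k)
```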
4,492,250
<p>Say I have a function <span class="math-container">$f(x) = x^2 + 2$</span></p> <p>This function never touches the x-axis, but it could be easily transformed to touch it by cancelling the constant as in <span class="math-container">$g(x) = (x^2 + 2) - 2$</span></p> <p>Is there any way to generalize this, so that I can make any function &quot;magnet&quot; to the x-axis?</p>
trula
697,983
<p>If you mean touching in the sense that the x-axis is tangent to the graph of $f$, then $f$ must have at least one point $x_0$ with $f'(x_0)=0$. Then you just take $g(x)=f(x)-f(x_0)$, and $g$ touches the x-axis.</p>
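A minimal Python sketch of this recipe for the question's $f(x) = x^2 + 2$ (a crude grid search stands in for solving $f'(x_0)=0$ by hand):

```python
f = lambda x: x**2 + 2

xs = [i / 1000 for i in range(-5000, 5001)]
x0 = min(xs, key=f)                # ≈ 0, the point where f'(x0) = 0
g = lambda x: f(x) - f(x0)         # shifted copy of f

assert g(x0) == 0                  # g touches the x-axis at x0...
assert all(g(x) >= 0 for x in xs)  # ...and never crosses it
```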
253,208
<p>Here's a problem I'm working on: </p> <p>Find the matrix of T with respect to the standard bases of $P_{3}$ and $\mathbb{R}^{3}$: </p> <p>$T(p(x)) = \left( \begin{array}{cc} p'(0) \\ p(0) \\ p(0) - p'(0)\end{array} \right)$</p> <p>So I'll list the steps that I've been taking, and hopefully someone will be able to tell me what I'm doing wrong. The first thing I did was prove that the transformation is linear, which wasn't too bad. Now since I know that the transformation is linear, I can make use of the theorem that says that every linear transformation can be written in the form $T(x) = Ax$, where $A$ is the coefficient matrix and $x$ is a vector. </p> <p>Now I believe the standard basis for the polynomials in my example is $1, x, x^{2}, x^{3}$, so I assumed I could do the following: $A = ( T(1) \space \space T(x) \space \space T(x^{2}) \space \space T(x^{3}) )$. But here's where I start to get confused. The problem definition says that each function is evaluated at zero, so knowing what the basis elements are doesn't matter; you only need to know that there are four (correct me if I'm wrong about there being 4 standard basis elements). </p> <p>Next, if I take the derivative of $p(x)$ I'd get some polynomial, but if I input zero I'd just get some constant. Does this mean that each of the four columns of my coefficient matrix is just going to consist of constants? Thanks in advance for any help offered. </p>
André Nicolas
6,312
<p>The following is a probably unreasonable interpretation of the question. It is motivated by the number-theoretic setting: we are maybe implicitly asked to assume that $T$ is an <em>integer</em>. </p> <p>The man started watching, and watched <em>continuously</em> for several days, say $x$. Then the time spent was $24x+17$. If $11$ complete rounds were made, then $24x+17\equiv 0\pmod{11}$. To solve this quickly, rewrite as $2x+6\equiv 0\pmod{11}$, giving $x\equiv -3\pmod{11}$. The smallest positive solution is $x=8$, giving $T=19$. Long drive!</p> <p>Equivalently (and more simply!) we solve $11T\equiv 17\pmod{24}$. To solve quickly, note that $11^2\equiv 1\pmod{24}$, so multiply both sides of the congruence by $11$. </p>
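Both derivations can be verified in a couple of lines (Python 3.8+ supports modular inverses via the three-argument `pow`):

```python
# solve 11*T ≡ 17 (mod 24): since 11*11 = 121 ≡ 1 (mod 24),
# the number 11 is its own inverse modulo 24
inv = pow(11, -1, 24)   # modular inverse, Python 3.8+
assert inv == 11
T = (inv * 17) % 24
assert T == 19

# consistent with the first derivation: x ≡ -3 (mod 11), smallest x = 8
x = 8
assert (24 * x + 17) % 11 == 0
assert (24 * x + 17) // 11 == T
```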
540,029
<p>I know that for any matrix <span class="math-container">$A$</span>, <span class="math-container">$AA^{\ast}$</span> is positive semidefinite (where <span class="math-container">$A^\ast$</span> is <span class="math-container">$\overline{A}^T$</span>). Please help me show the following statement</p> <blockquote> <p>Any positive semidefinite can be written as <span class="math-container">$AA^{\ast}$</span>.</p> </blockquote>
Robert Lewis
67,071
<p>You cannot prove that any positive semidefinite matrix may be written $AA^*$ because it is not true. A complete explication for the real case is found in my answer to <a href="https://math.stackexchange.com/questions/482688/proof-of-a-matrix-is-positive-semidefinite-iff-it-can-be-written-in-the-form-x">this question</a>; the argument in the complex case involves only the usual modifications: transpose is replaced by Hermitian adjoint, orthogonal by unitary, and so forth.</p> <p>Hope this helps! Cheerio, </p> <p>and as always,</p> <p><strong><em>Fiat Lux!!!</em></strong></p>
401,967
<p>This question is about logical complexity of sentences in third order arithmetic. See <a href="https://en.wikipedia.org/wiki/Arithmetical_hierarchy" rel="nofollow noreferrer">Wikipedia</a> for the basic concepts.</p> <p>Recall that the Continuum Hypothesis is a <span class="math-container">$\Sigma^2_1$</span> sentence. Furthermore (loosely speaking) it can't be reduced to a <span class="math-container">$\Pi^2_1$</span> sentence, as stated in <a href="https://mathoverflow.net/a/218649/170446">Emil Jeřábek's answer to <em>Can we find CH in the analytical hierarchy?</em></a>.</p> <p>Is there an example of a <span class="math-container">$\Sigma^2_2$</span> sentence with no known reduction to a <span class="math-container">$\Pi^2_2$</span> sentence? (Equivalently, a <span class="math-container">$\Pi^2_2$</span> sentence with no known reduction to a <span class="math-container">$\Sigma^2_2$</span> sentence.) I mean that there should be no known reduction even under large cardinal assumptions.</p> <p>I'd prefer an example that's either famous or easy to state. But to begin, any example will do.</p> <p><em>Update:</em> Sentences such as &quot;<span class="math-container">$\mathfrak{c} \leqslant \aleph_2$</span>&quot; and &quot;<span class="math-container">$\mathfrak{c}$</span> is a successor cardinal&quot; are <span class="math-container">$\Delta^2_2$</span>, meaning that they're simultaneously <span class="math-container">$\Sigma^2_2$</span> and <span class="math-container">$\Pi^2_2$</span>. The reason is that each such sentence (and also its negation) can be expressed in the form &quot;<span class="math-container">$\mathbb{R}$</span> has a well-ordering <span class="math-container">$W$</span> such that <span class="math-container">$\phi(W)$</span>&quot; where <span class="math-container">$\phi$</span> is <span class="math-container">$\Sigma^2_2$</span>.</p>
Farmer S
160,347
<p>(As pointed out by @PaulBlainLevy, the following doesn't meet the requirement that it should have no known reduction even under large cardinal assumptions. But I think it's a natural <span class="math-container">$\Pi^2_2$</span> statement, so I'll leave it here.)</p> <p>Consider the statement &quot;For every set of reals <span class="math-container">$X$</span>, <span class="math-container">$X^\#$</span> exists&quot;. (Equivalently, &quot;for every set of reals <span class="math-container">$X$</span>, there is an elementary embedding <span class="math-container">$L(\mathbb{R},X)\to L(\mathbb{R},X)$</span>&quot;.) I claim it's <span class="math-container">$\Pi^2_2$</span> but not <span class="math-container">$\Sigma^2_2$</span>, at least assuming the consistency of ZFC + &quot;For every set of reals <span class="math-container">$X$</span>, <span class="math-container">$X^\#$</span> exists&quot;. (Here I mean that there is a fixed <span class="math-container">$\Pi^2_2$</span> formula <span class="math-container">$\psi$</span> such that ZFC proves &quot;<span class="math-container">$\psi$</span> holds iff <span class="math-container">$X^\#$</span> exists for all sets of reals <span class="math-container">$X$</span>&quot;, but this is not the case for <span class="math-container">$\Sigma^2_2$</span>).</p> <p>It's <span class="math-container">$\Pi^2_2$</span>: For given <span class="math-container">$X$</span>, it is <span class="math-container">$\Sigma^2_1(\{X\})$</span> to say that <span class="math-container">$X^\#$</span> exists, as <span class="math-container">$X^\#$</span> is coded by a real, and one just has to check that for each countable ordinal <span class="math-container">$\alpha$</span>, the model generated from <span class="math-container">$\mathbb{R}\cup\alpha$</span>-many indiscernibles is wellfounded, to know that it is correct, and this is all expressed as a projective statement about some set of reals coding everything.</p> <p>It's not <span 
class="math-container">$\Sigma^2_2$</span> (modulo the consistency mentioned above): For suppose it is, and fix a <span class="math-container">$\Pi^2_1$</span> formula <span class="math-container">$\varphi$</span> such that ZFC proves that <span class="math-container">$\exists A\subseteq\mathbb{R}\varphi(A)$</span> iff <span class="math-container">$X^\#$</span> exists for all sets of reals <span class="math-container">$X$</span>. Assume ZFC + <span class="math-container">$X^\#$</span> exists for all sets of reals <span class="math-container">$X$</span>. Let <span class="math-container">$A$</span> witness the <span class="math-container">$\Sigma^2_2$</span> statement, and let <span class="math-container">$A'=(A,W)$</span> where <span class="math-container">$W$</span> is a wellorder of <span class="math-container">$\mathbb{R}$</span>. Consider <span class="math-container">$M=L(\mathbb{R},A')$</span>. Then <span class="math-container">$M\models$</span>ZFC, and <span class="math-container">$A\in M$</span>, and <span class="math-container">$\mathbb{R}\subseteq M$</span>, so note the truth of <span class="math-container">$\varphi(A)$</span> goes down to <span class="math-container">$M$</span>. So <span class="math-container">$M\models\mathrm{ZFC}+V=L(\mathbb{R},A')$</span>+<span class="math-container">$\exists A\subseteq\mathbb{R}\,\varphi(A)$</span>, so it models &quot;<span class="math-container">$(A')^\#$</span> exists&quot;, but this is a contradiction.</p>
3,502,507
<p>This is very similar to the <a href="https://math.stackexchange.com/questions/3501693/given-that-you-started-with-one-chip-what-is-the-probability-that-you-will-win">question</a> I've just asked, except now the requirement is to gain <span class="math-container">$4$</span> chips to win (instead of <span class="math-container">$3$</span>) </p> <p>The game is:</p> <blockquote> <p>You start with one chip. You flip a fair coin. If it throws heads, you gain one chip. If it throws tails, you lose one chip. If you have zero chips, you lose the game. If you have <strong>four</strong> chips, you win. What is the probability that you will win this game?</p> </blockquote> <p>I've tried to use the identical reasoning used to solve the problem with three chips, but seems like in this case, it doesn't work.</p> <p>So the attempt is:</p> <p>We will denote <span class="math-container">$H$</span> as heads and <span class="math-container">$T$</span> as tails (i.e <span class="math-container">$HHH$</span> means three heads in a row, <span class="math-container">$HT$</span> means heads and tails etc)</p> <p>Let <span class="math-container">$p$</span> be the probability that you win the game. If you throw <span class="math-container">$HHH$</span> (<span class="math-container">$\frac{1}{8}$</span> probability), then you win. If you throw <span class="math-container">$HT$</span> (<span class="math-container">$\frac{1}{4}$</span> probability), then your probability of winning is <span class="math-container">$p$</span> at this stage. 
<strong>If you throw <span class="math-container">$HHT$</span> (<span class="math-container">$\frac{1}{8}$</span> probability), then your probability of winning is <span class="math-container">$\frac{1}{2}p$</span></strong></p> <p>Hence the recursion formula is </p> <p><span class="math-container">$$\begin{align}p &amp; = \frac{1}{8} + \frac{1}{4}p+ \frac{1}{8}\frac{1}{2}p \\ &amp;= \frac{1}{8} + \frac{1}{4}p +\frac{1}{16}p \\ &amp;= \frac{1}{8} + \frac{5}{16}p \end{align}$$</span></p> <p>Solving for <span class="math-container">$p$</span> gives</p> <p><span class="math-container">$$\frac{11}{16}p = \frac{1}{8} \implies p = \frac{16}{88}$$</span></p> <p>Now, to verify the accuracy of the solution above, I've tried to calculate the probability of losing using the same logic, namely:</p> <p>Let <span class="math-container">$p$</span> denote the probability of losing. If you throw <span class="math-container">$T$</span> (<span class="math-container">$\frac{1}{2}$</span> probability), you lose. If you throw <span class="math-container">$H$</span> (<span class="math-container">$\frac{1}{2}$</span> probability), the probability of losing at this stage is <span class="math-container">$\frac{1}{2}p$</span>. If you throw <span class="math-container">$HH$</span> (<span class="math-container">$\frac{1}{4}$</span> probability), the probability of losing is <span class="math-container">$\frac{1}{4}p$</span>. 
Setting up the recursion gives</p> <p><span class="math-container">$$\begin{align}p &amp; = \frac{1}{2} + \frac{1}{4}p+ \frac{1}{8}\frac{1}{2}p \\ &amp;= \frac{1}{2} + \frac{1}{4}p +\frac{1}{16}p \\ &amp;= \frac{1}{2} + \frac{5}{16}p \end{align}$$</span></p> <p>Which implies that </p> <p><span class="math-container">$$\frac{11}{16}p = \frac{1}{2} \implies p = \frac{16}{22} = \frac{64}{88}$$</span></p> <p>Which means that probabilities of winning and losing the game do not add up to <span class="math-container">$1$</span>.</p> <p>So the <em>main</em> question is: Where is the mistake? How to solve it using recursion? (Note that for now, I'm mainly interested in the recursive solution)</p> <p>And the bonus question: Is there a possibility to generalize? I.e to find the formula that will give us the probability of winning the game, given that we need to gain <span class="math-container">$n$</span> chips to win? </p>
Christian Blatter
1,303
<p>Let <span class="math-container">$p(n)$</span> be the probability that you win the game when you have <span class="math-container">$n$</span> chips in the pocket. Then <span class="math-container">$p(0)=0$</span>, <span class="math-container">$p(4)=1$</span>. Having <span class="math-container">$1\leq n\leq3$</span> chips one makes a further move, and one then has <span class="math-container">$$p(n)={1\over2}p(n-1)+{1\over2}p(n+1)\qquad(1\leq n\leq3)\ ,$$</span> so that <span class="math-container">$$p(n+1)-p(n)=p(n)-p(n-1)\qquad(1\leq n\leq3)\ .$$</span> The values <span class="math-container">$p(0),\ldots,p(4)$</span> therefore form an arithmetic progression from <span class="math-container">$0$</span> to <span class="math-container">$1$</span>, which immediately implies that <span class="math-container">$p(1)={1\over4}$</span>.</p>
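<p>The boundary-value recursion can also be checked numerically; the following sketch (my addition, not the answerer's) solves it as a small linear system:</p>

```python
# Solve p(n) = (p(n-1) + p(n+1))/2 with p(0) = 0, p(4) = 1
# as a linear system, to confirm the claim p(1) = 1/4.
import numpy as np

N = 4
A = np.zeros((N + 1, N + 1))
b = np.zeros(N + 1)
A[0, 0], b[0] = 1.0, 0.0            # boundary condition p(0) = 0
A[N, N], b[N] = 1.0, 1.0            # boundary condition p(4) = 1
for n in range(1, N):
    # interior equation: 0.5*p(n-1) - p(n) + 0.5*p(n+1) = 0
    A[n, n - 1], A[n, n], A[n, n + 1] = 0.5, -1.0, 0.5
p = np.linalg.solve(A, b)
print(p)  # ≈ [0, 0.25, 0.5, 0.75, 1]
```

<p>The solution is linear in $n$ (an arithmetic progression), so $p(n)=n/4$ and in particular $p(1)=1/4$.</p>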
1,740,535
<p>If <span class="math-container">$X\sim U(0,1)$</span> and <span class="math-container">$Y\sim U(0,X)$</span> what is the density (distribution) function <span class="math-container">$f_Y(y)$</span>?</p> <p>I know the answer and I also found it on this site (link below). However, I just can't get the intuition for why the boundaries of the last integral become <span class="math-container">$y$</span> and <span class="math-container">$1$</span>.</p> <p>Step by step solution attempt:</p> <p><span class="math-container">$f_Y(y)=\displaystyle\int_\mathbb{R} f_{Y,X}(y,x)dx=\int_\mathbb{R} f_{Y|X=x}(y)f_{X}(x)dx=\displaystyle\int_\mathbb{R}\frac{1}{x}dx=^{?}\displaystyle\int_y^1\frac{1}{x}dx=-\ln(y)$</span></p> <p><a href="https://math.stackexchange.com/questions/1738765/let-x-sim-mathcalu0-1-given-x-x-let-y-sim-mathcalu0-x-ho">Let. X∼U(0,1). Given X=x, let Y∼U(0,x). How can I calculate E(X|Y=y)?</a></p>
Graham Kemp
135,106
<p>The support of the joint pdf is $0&lt; Y&lt;X &lt;1$</p> <p>Clearly to "integrate out" $X$ to obtain the marginal pdf of $Y=y$ requires integrating w.r.t. $x$ over the interval $y&lt;x&lt;1$</p> <p>$$\begin{align}f_Y(y) ~=~&amp; \int_\Bbb R \frac 1 x ~\mathbf 1_{y\in(0;x),x\in(0;1)}~\operatorname d x \\[1ex] =~&amp; ~\mathbf 1_{y\in(0;1)}\int_y^1 \frac 1 x ~\operatorname d x \\[1ex] =~&amp; -\ln y~\mathbf 1_{y\in(0;1)}\end{align}$$</p>
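<p>A Monte Carlo sanity check (an illustrative sketch, my addition): integrating $f_Y(y)=-\ln y$ gives the CDF $F_Y(y)=y-y\ln y$ on $(0,1)$, which can be compared against simulated samples:</p>

```python
# Sample X ~ U(0,1), then Y ~ U(0,X), and compare the empirical CDF of Y
# with F_Y(y) = y - y*ln(y), obtained by integrating f_Y(y) = -ln(y).
import math
import random

random.seed(0)
n = 200_000
ys = [random.uniform(0, random.random()) for _ in range(n)]

for y in (0.1, 0.5, 0.9):
    emp = sum(1 for v in ys if v <= y) / n
    theo = y - y * math.log(y)
    assert abs(emp - theo) < 0.01, (y, emp, theo)
print("empirical CDF matches y - y*ln(y)")
```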
1,879,149
<p>Let $\mu$ be a Radon measure on $\mathbb{R}$. (<em>i.e.</em>, a locally finite Borel measure)</p> <p>Let $I$ be an interval of $\mathbb{R}$.</p> <p>Let $\gamma : I \mapsto \mathbb{R}$ be a <strong>monotone</strong> function that is Lebesgue-Stieltjes integrable with respect to $\mu$: $$\int_I \gamma(x) \mathrm{d}\mu (x) &lt; \infty.$$</p> <p>I would like a simple argument to prove that there exists a sequence $(\gamma_n)_{n\in\mathbb{N}}$ of <strong>continuous and monotone</strong> functions $\gamma_n : I \mapsto \mathbb{R}$ such that $$\int_I \gamma_n(x) \mathrm{d}\mu (x) &lt; \infty$$ and $$\lim_{n\to\infty} \int_I | \gamma(x) - \gamma_n (x)| \mathrm{d}\mu (x) =0.$$</p> <p>I have thought I could use the <a href="https://mathoverflow.net/a/31380">Lusin theorem</a>, but it would require to adapt its statement and proof to show the approximating functions are monotone. Besides, my hypotheses are much simpler ($\mathbb{R}$, no compacity of the support of approximating functions). So maybe a weaker theorem could give the answer? I am interested by any reference to such theorem.</p> <p>N.B.: a proof of this is very common if $\mu$ is the Lebesgue measure, but here there is no absolute continuity assumption on $\mu$.</p>
Luiz Cordeiro
58,818
<p>I don't think your version of Lusin's theorem can be used to give a short proof of the result, since we need some control over the continuous functions we get. Here's a solution, using a slightly weaker version of Lusin's theorem, and constructing the continuous functions more explicitly.</p> <p>Let's assume $\gamma$ is non-decreasing (otherwise, work with $-\gamma$). First let's consider $I$ a compact interval of the form $I=[a,b]$, and let's deal with the general case later.</p> <p>Let $\epsilon&gt;0$. By Lusin's theorem, there exists a subset $A\subseteq I$ of measure $\geq \mu(I)-\epsilon$ such that $\gamma|_A$ (the restriction of $\gamma$ to $A$) is continuous. (This is the slightly weaker form of Lusin's theorem than the one you linked to). Moreover, by regularity of $\mu$, we can assume $A$ is closed.</p> <p>We can also assume that $b\in A$: If this is not the case, then since $A$ is closed, $A\cup\{b\}$ is also closed and $\gamma|_{A\cup\{b\}}$ is continuous. Similarly, we can assume $a\in A$.</p> <p>Now we will extend $\gamma|_A$ to a continuous, non-decreasing function on $I$. The complement $O=I\setminus A$ of $A$ is an open set, so it can be written as a countable union of disjoint open intervals $O=\bigcup_n (a_n,b_n)$. Note that $a_n,b_n\in A$ for all $n$.</p> <p>Define $g=\gamma$ on $A$, and on each set $(a_n,b_n)$, define $g(x)=\gamma(a_n)+\frac{x-a_n}{b_n-a_n}(\gamma(b_n)-\gamma(a_n))$. Let's show that $g$ has the desired properties:</p> <blockquote> <p><strong>1. $g$ is monotone</strong></p> </blockquote> <p>Suppose $x&lt;y$ in $I$. We have some cases:</p> <p>1.1. $x,y\in A$.</p> <p>Then $g(x)=\gamma(x)&lt;\gamma(y)=g(y)$.</p> <p>1.2. $x\in A$, $y\not\in A$.</p> <p>In this case, $y$ is in some set of the form $(a_n,b_n)$, and we must have $x\leq a_n$, so $$g(x)=\gamma(x)\leq\gamma(a_n)\leq g(y)$$</p> <p>1.3. $x\not\in A$, $y\in A$.</p> <p>This is similar to 1.2.</p> <p>1.4. 
$x,y\not\in A$.</p> <p>Then $x\in (a_n,b_n)$ and $y\in(a_m,b_m)$ for certain $n,m$. If $n=m$ then it is clear that $g(x)\leq g(y)$. If $n\neq m$, then the interval $(a_n,b_n)$ stands to the left of the interval $(a_m,b_m)$ (since they are disjoint and contain $x$ and $y$, respectively, and $x&lt;y$), so $b_n\leq a_m$ and thus $$g(x)\leq g(b_n)=\gamma(b_n)\leq\gamma(a_m)=g(a_m)\leq g(y).$$</p> <blockquote> <p><strong>2. $g$ is continuous.</strong></p> </blockquote> <p>We leave the following result as an exercise:</p> <blockquote> <p><strong>Proposition:</strong> Suppose $g:[a,b]\to\mathbb{R}$ is monotone and $x\in [a,b)$. Suppose that there exists a sequence $x_n\in[a,b]$ with $x_n&gt;x$, $x_n\to x$ and $g(x_n)\to g(x)$. Then $g$ is right-continuous as $x$.</p> </blockquote> <p>Let's use this result: Let $x\in[a,b)$, and let's find a sequence as in the proposition. If $x\not\in A$, then $x\in(a_n,b_n)$ for some $n$ and it is easy enough to find such a sequence (e.g. $x_i=x+(b_n-x)/i$). So suppose $x\in A$. We have two cases:</p> <p>2.1. There is a sequence $x_n\in A$ with $x_n&gt;x$, $x_n\to x$.</p> <p>Since $\gamma|_A=g|_A$ is continuous, we immediately have $g(x_n)\to g(x)$.</p> <p>2.2. There is no sequence $x_n\in A$ with $x_n&gt;x$, $x_n\to x$.</p> <p>This means that there is some open interval of the form $(x,c)$ which does not intersect $A$, i.e., $(x,c)\subseteq I\setminus A$ and thus $(x,c)\subseteq(a_n,b_n)$ for some $n$. But $x\in A$, so $x=a_n$. Now it is again easy to construct the desired sequence (say $x_i=x+(b_i-a_i)/i$).</p> <p>Therefore $g$ is right-continuous. Left continuity is similar and therefore $g$ is continuous.</p> <blockquote> <p><strong>3. 
$\int_I|\gamma-g|d\mu$ is small.</strong></p> </blockquote> <p>Both $\gamma$ and $g$ are bounded on $I$ by $\max(|\gamma(a)|,\gamma(b)|)$, and they coincide on a set of measure $\geq\mu(I)-\epsilon$, thus $$\int_I|\gamma-g|d\mu\leq\max(|\gamma(a)|,\gamma(b)|)\epsilon$$ which is as small as necessary, by adjusting $\epsilon$.</p> <p>Note that from this it follows that $\int_I|g|d\mu&lt;\infty$.</p> <hr> <p>The above solves the case of a compact interval $I$. If $I$ is of the form $I=[a,b)$, with $b\in(a,\infty]$, then we can write $I=\bigcup_{n=1}^\infty[a_n,a_{n+1}]$, where $a=a_1&lt;a_2&lt;\cdots$ and $a_n\to b$. </p> <p>Given $\epsilon&gt;0$, for each $n$ find a function $g_n$ on $[a_n,a_{n+1}]$, as above, which is continuous, monotone, $g_n(a_n)=\gamma(a_n)$, $g_n(a_{n+1})=\gamma(a_{n+1})$, and $\int_{[a_n,b_n]}|g_n-\gamma|d\mu&lt;2^{-n}\epsilon$. Define $g$ on $I$ as $g|_{[a_n,a_{n+1}]}=g_n$, and we have the desired properties.</p> <p>The last case is of intervals of the form $I=(a,b)$, with $-\infty\leq a&lt;b\leq\infty$, and it follows from the previous case with arguments similar to the ones in the paragraph above.</p> <hr> <p><strong>Alternative solution:</strong></p> <p>We can assume that $I=[a,b]$ is compact and $\gamma$ is non-decreasing and non-constant. Moreover, up to normalization, we can assume $\gamma(a)=0$ and $\gamma(b)=1$. Let's start with the usual simple functions approximating $\gamma$: For each $n$ and $1\leq i&lt;2^n$, take $A_{n,i}=\gamma^{-1}[\frac{i-1}{2^n},\frac{i}{2^n})$, and $A_{n,2^n}=\gamma^{-1}[\frac{2^n-1}{2^n},1]$.</p> <p>Since $\gamma$ is monotone, each $A_{n,i}$ is an interval (possibly degenerate). Whenever $A_{n,i}$ has nonempty interior (i.e., it is not a point), take $a_{n,i}&lt;b_{n,i}$ with $\operatorname{int}(A_{n,i})=(a_{n,i},b_{n,i})$. 
Notice that $a_{n,i}=a$ for precisely one $i$, and $b_{n,i}=b$ for precisely one $i$, and $\left\{a_{n,i}\right\}\setminus\left\{a\right\}=\left\{b_{n,i}\right\}\setminus\left\{b\right\}$.</p> <p>Now take a decreasing sequence $\delta_n$ converging to $0$, in such a way that $a_{n,i}+\delta_n&lt;b_{n,i}-\delta_n$ for all $n,i$. Let $B_{n,i}=(a_{n,i}+\delta_n,b_{n,i}-\delta_n)$.</p> <p>Given $n$, define $g_n:I\to[0,1]$ as follows: $g_n(a_{n,i})=\gamma(a_{n,i})$, and $g_n(b_{n,i})=\gamma(b_{n,i})$ for all $i$. If $x\in B_{n,i}$ for some $i$, set $g_n(x)=i2^{-n}$. Then extend $g_n$ arbitrarily to some continuous, monotone function on $I$.</p> <p>Let's show that $g_n$ converges pointwise to $\gamma$. For convenience, for each $n$ consider the partition $\mathscr{P}_n=\left\{A_{n,i}:i\right\}$. Note that $\mathscr{P}_m$ refines $\mathscr{P}_n$ whenever $m\geq n$.</p> <p>Let $x\in I$ and $k\in\mathbb{N}$. We have two cases:</p> <p>If $x=a_{k,i}$ for some $i$, then for every $m\geq k$ there will be some $j$ with $x=a_{m,j}$. This happens because $\mathscr{P}_m$ refines $\mathscr{P}_k$. So in this case we have $g_m(x)=\gamma(x)$ for all $m\geq k$. The same happens if $x=b_{k,i}$ for some $i$.</p> <p>Now suppose $x$ is not any of the $a_{k,i}$ nor any of the $b_{k,i}$. Choose $i$ for which $x\in A_{k,i}$. Then $x$ lies in the interior of $A_{k,i}$, i.e., $x\in(a_{k,i},b_{k,i})$. Choose $N&gt;k$ large enough such that $a_{k,i}+\delta_N&lt;x&lt;b_{k,i}-\delta_N$.</p> <p>Now let's show that for any $m\geq N$, we have $|g_m(x)-\gamma(x)|\leq 2^{1-k}$. Again, we have some cases:</p> <p>If $x$ is of the form $a_{m,j}$ or $b_{m,j}$ then $g_m(x)=\gamma(x)$, and we are done. So suppose $x$ is not of the form $a_{m,j}$ nor $b_{m,j}$. Again, there will be some $j$ for which $x\in\operatorname{int}(A_{m,j})=(a_{m,j},b_{m,j})$. </p> <p>Take $p$ for which $b_{m,p}=b_{k,i}$. 
Again, since $\mathscr{P}_m$ refines $\mathscr{P}_k$, we have $A_{m,p}\subset A_{k,i}$, and the definition (and nonemptiness) of these sets guarantees that $p2^{-m}\leq (i+1)2^{-k}$. Indeed, given $y\in A_{m,p}$, we have $$(p-1)2^{-m}\leq\gamma(y)\leq p2^{-m}\qquad\text{and}\qquad (i-1)2^{-k}\leq\gamma(y)\leq i2^{-k}$$ so $$p2^{-m}\leq\gamma(y)+2^{-m}\leq\gamma(y)+2^{-k}\leq (i+1)2^{-k}.$$</p> <p>By our choice of $\delta$, we have $x&lt;b_{k,i}-\delta_N\leq b_{m,p}-\delta_m$, so we can find some $y\in B_{m,p}$ with $x\leq y$. Since $g_m$ is monotone, $$g_m(x)\leq g_m(y)=p2^{-m}\leq (i+1)2^{-k}$$</p> <p>With the same kind of reasoning (now working with the $a_{m,j}$), we can prove that $$(i-1)2^{-k}\leq g_m(x)$$ so we have $$(i-1)2^{-k}\leq g_m(x)\leq(i+1)2^{-k}$$ and also, now by definition of $A_{k,i}$, $$(i-1)2^{-k}\leq \gamma(x)\leq i2^{-k}\leq (i+1)2^{-k}$$ so $|g_m(x)-\gamma(x)|\leq 2^{1-k}$, as we wanted.</p> <p>Therefore $g_m\to \gamma$ pointwise (everywhere!), and dominated convergence implies $\int g_m\to\int\gamma$. We did not assume anything about the measure (just that it was finite on $I$).</p>
132,957
<p>Could you show me how to solve this as quickly as possible, please? <br /></p> <p>$$ ABCDEF \, \land \, DEFABC \; \large{ \in \, \mathbb{N^{+}} } $$ $$ ABCDEF \, = \, 6 \times (DEFABC) \\ $$ $$ (A+B+C+D+E+F)=\; ? $$</p> <p>Thank you very much... :)</p>
Dejan Govc
19,588
<p>Here's another solution: we immediately see that $D=1$, since otherwise $6\times DEFABC$ would have to have more than seven digits. Now write: $X=ABC$, $Y=DEF$. The equation becomes: $1000X+Y=6(1000Y+X)$ which can be rewritten as $994X=5999Y$. Now, since $994$ is divisible by the primes $2$ and $71$, and $5999$ isn't, $Y$ will have to be divisible by $2$ and $71$. But then $Y$ is divisible by $142$ and since the first digit of $Y$ is $1$, it follows that $Y=142$. Now $X=857$ follows easily and we arrive at the same (unique) solution.</p>
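<p>An exhaustive search (illustrative, not needed for the argument) confirms that the solution is unique and answers the original question:</p>

```python
# Find all three-digit blocks with ABCDEF = 6 * DEFABC, writing
# ABCDEF = 1000*ABC + DEF and DEFABC = 1000*DEF + ABC.
sols = [(abc, deff)
        for abc in range(100, 1000)
        for deff in range(100, 1000)
        if 1000 * abc + deff == 6 * (1000 * deff + abc)]
print(sols)  # [(857, 142)]

# The requested digit sum A+B+C+D+E+F:
digits = sum(int(d) for d in "857142")
print(digits)  # 27
```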
4,404,751
<p>I want to prove the following: Every symmetric matrix whose entries are calculated as <span class="math-container">$ 1/(n -1) $</span> with <span class="math-container">$n$</span> as the size of the matrix, except for the diagonal which is 0, has a characteristic polynomial with a root at <span class="math-container">$x=1$</span>. In other words, every such matrix has an eigenvalue of 1.</p> <p>For example Matrix 1:</p> <p><span class="math-container">\begin{array}{ccc} 0 &amp; \frac{1}{2} &amp; \frac{1}{2} \\ \frac{1}{2} &amp; 0 &amp; \frac{1}{2} \\ \frac{1}{2} &amp; \frac{1}{2} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$f(x)=-x^3+\frac{3 x}{4}+\frac{1}{4} $</span> ,which has a root at <span class="math-container">$x=1$</span></p> <p>Matrix 2:</p> <p><span class="math-container">\begin{array}{cccc} 0 &amp; \frac{1}{3} &amp; \frac{1}{3} &amp; \frac{1}{3} \\ \frac{1}{3} &amp; 0 &amp; \frac{1}{3} &amp; \frac{1}{3} \\ \frac{1}{3} &amp; \frac{1}{3} &amp; 0 &amp; \frac{1}{3} \\ \frac{1}{3} &amp; \frac{1}{3} &amp; \frac{1}{3} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$f(x)=-(1/27) - (8 x)/27 - (2 x^2)/3 + x^4 $</span> ,which also has a root at <span class="math-container">$x=1$</span></p> <p>Matrix 3:</p> <p><span class="math-container">\begin{array}{ccccc} 0 &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; 0 &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; 0 &amp; \frac{1}{4} &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; 0 &amp; \frac{1}{4} \\ \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; \frac{1}{4} &amp; 0 \\ \end{array}</span></p> <p>has a characteristic polynomial: <span class="math-container">$ f(x)=(4 + 60 x + 320 x^2 + 640 x^3 - 1024 x^5)/1024 $</span> ,which also has a root at <span class="math-container">$x=1$</span></p> 
<p>I want to show that this is true for any such n by n matrix, i.e. for all n.</p> <p>Looking for some tips and tricks on how to approach this.</p>
Diego Artacho
761,744
<p>The vector with all entries equal to <span class="math-container">$1$</span> is an eigenvector with eigenvalue <span class="math-container">$1$</span>: each row has <span class="math-container">$n-1$</span> off-diagonal entries equal to <span class="math-container">$\frac{1}{n-1}$</span>, so every row sums to <span class="math-container">$1$</span>.</p>
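<p>A quick numerical illustration of this observation (my sketch, not part of the answer):</p>

```python
# Each row of the matrix sums to (n-1) * 1/(n-1) = 1, so the all-ones
# vector is an eigenvector with eigenvalue 1.
import numpy as np

for n in (3, 4, 5, 8):
    M = (np.ones((n, n)) - np.eye(n)) / (n - 1)   # 1/(n-1) off-diagonal, 0 on diagonal
    v = np.ones(n)
    assert np.allclose(M @ v, v)                  # M v = 1 * v
    assert np.any(np.isclose(np.linalg.eigvals(M), 1.0))
print("eigenvalue 1 confirmed for n = 3, 4, 5, 8")
```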
171,449
<p><strong>Theorem</strong> Every sequence $\{s_n\}$ has a monotonic subsequence whose limit is equal to $\limsup s_n$. I think showing that there exists a monotonic subsequence is kind of straightforward, but I could not show that there exists such a subsequence whose limit is $\limsup s_n$.</p>
copper.hat
27,978
<p>Plot it with a log scale on the y axis. Then the line will be straight and it will be easy to see where it is going.</p>
2,243,900
<p>I've researched this topic a lot, but couldn't find a proper answer to this, and I can't wait a year to learn it at school, so my question is:</p> <blockquote> <p>What exactly is calculus? </p> </blockquote> <p>I know who invented it, the Leibniz controversy, etc., but I'm not exactly sure what it is. I think I heard it was used to calculate the area under a curve on a graph. If anyone can help me with this, I'd much appreciate it.</p>
AspiringMathematician
375,631
<p>In a nutshell, Calculus (as seen in most basic undergraduate courses) is the study of change and behaviour of functions and sequences. The three main points are:</p> <ul> <li>Limits: How sequences and functions behave when getting closer and closer to a desired point (geometrically, what happens when you "zoom in" near a point)</li> <li>Derivatives: How functions change over a parameter (geometrically, the "slope of a graph at a given point")</li> <li>Integrals: What's the cumulative effect of a function (geometrically, the "area under a graph")</li> </ul> <p>And obviously (and maybe especially), how these relate to one another; the crowning jewel of Calculus is probably the Fundamental Theorem of Calculus, which truly lives up to its name and was developed by none other than Leibniz and Newton.</p>
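<p>A tiny numeric illustration of the last two ideas (my addition; the function $x^2$ is just an example):</p>

```python
# Derivative as a limit of secant slopes, integral as accumulated area.
f = lambda x: x ** 2

# Slope of x^2 at x = 3: (f(3+h) - f(3))/h = 6 + h, approaching 6 as h -> 0.
for h in (0.1, 0.01, 0.001):
    print((f(3 + h) - f(3)) / h)

# Area under x^2 over [0, 1] via a Riemann sum, approaching 1/3.
n = 100_000
area = sum(f(i / n) for i in range(n)) / n
print(area)  # ≈ 1/3
```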
2,218,960
<blockquote> <p>Let $(W_t,\mathscr{F_t})$ be a Wiener process and let $$M_t=M_0e^{W_t-t/2}\qquad t\ge0$$ where $M_0$ is deterministic. Show that, for $\epsilon&gt;0$, $\tau=\inf\{t\ge0:M_t\le\epsilon\}$ is a stopping time.</p> </blockquote> <p>I have been able to show that $M_t$ is a martingale but I'm a bit stuck here. I tried to use the fact that $W_t$ is normally distributed to show that $P(M_t&lt;\epsilon)&gt;0$ but that got me nowhere.</p>
drhab
75,923
<p>Yes if the convergence is absolute: if $\sum_{n=1}^{\infty}|a_n|&lt;\infty$ then also $\sum_{n=1}^{\infty}|b_n|&lt;\infty$.</p> <p>No if the convergence is not absolute: see the counterexample given in the answer of Tsemo Aristide.</p>
3,033,202
<p>I have been banging my head against this proof for a few days now, as I can visualize why it is true in my head, but don't know how to prove it in words:</p> <p>Let <span class="math-container">$A$</span> and <span class="math-container">$B$</span> be nonempty subsets of <span class="math-container">$\mathbb{R}$</span>. Show that if there exist disjoint, open sets <span class="math-container">$U$</span> and <span class="math-container">$V$</span> with <span class="math-container">$A \subseteq U$</span> and <span class="math-container">$B \subseteq V$</span>, then <span class="math-container">$A$</span> and <span class="math-container">$B$</span> are separated.</p> <p>I've seen two answers to this proof on here, but I don't fully understand either one of them, and the question was asked so long ago that neither of those users are active on here anymore, so I can't even ask them specific questions to help my understanding. I have tried proving it directly, but immediately get bogged down in multiple cases of what <span class="math-container">$A$</span> looks like in <span class="math-container">$U$</span> while <span class="math-container">$B$</span> looks a certain way in <span class="math-container">$V$</span>, and vice versa. I have also tried assuming that <span class="math-container">$A$</span> and <span class="math-container">$B$</span> aren't separated, and I can't find a way to reach a contradiction (or contrapositive) from that. I would greatly appreciate any assistance as it is the only proof from my homework that I haven't been able to figure out on my own.</p>
KYHSGeekCode
553,404
<p>Problem: <span class="math-container">$$\int_0^s\frac 1{\sqrt{1-x^2}}dx$$</span></p> <p>Let <span class="math-container">$$x=\sin \theta.$$</span> Differentiating both sides with respect to <span class="math-container">$\theta$</span>, we get <span class="math-container">$$\frac{dx}{d\theta}=\cos \theta.$$</span> Since <span class="math-container">$\cos\theta\ge0$</span> on the range of <span class="math-container">$\arcsin$</span>, we also have <span class="math-container">$\sqrt{1-\sin^2\theta}=\cos\theta$</span>.</p> <p><span class="math-container">$$\therefore \int_0^s\frac 1{\sqrt{1-x^2}}dx$$</span> <span class="math-container">$$=\int_0^{\arcsin s}\frac {\cos\theta}{\sqrt{1-{\sin^2 \theta}}}d\theta$$</span> <span class="math-container">$$=\int_0^{\arcsin s} 1 \, d\theta$$</span> <span class="math-container">$$=\left[ \theta \right]^{\arcsin s}_{0}$$</span> <span class="math-container">$$=\arcsin s.$$</span></p>
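<p>A numerical cross-check of the result (an illustrative sketch, not part of the original answer):</p>

```python
# Midpoint-rule approximation of the integral of 1/sqrt(1 - x^2)
# over [0, s], compared against arcsin(s).
import math

def integral(s, n=200_000):
    h = s / n
    return sum(h / math.sqrt(1 - ((i + 0.5) * h) ** 2) for i in range(n))

for s in (0.3, 0.5, 0.9):
    assert abs(integral(s) - math.asin(s)) < 1e-6
print("matches arcsin(s) for s = 0.3, 0.5, 0.9")
```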
4,240,598
<blockquote> <p>Simplify <span class="math-container">$\dfrac{18 - \dfrac 7 {3x}} {\dfrac 7 {18x} - 3}$</span>?</p> </blockquote> <p>I'm having a hard time simplifying this particular expression and am seeking any type of assistance in solving it.</p> <p>In the expression <span class="math-container">$$\frac{18 - \dfrac 7 {3x}} {\dfrac 7 {18x} - 3}$$</span> I split the problems into two separate entities.</p> <p>For the numerator, I get <span class="math-container">$3x$</span> for the LCD and then rewrite the fraction as <span class="math-container">$$\frac{54x-7}{3x}$$</span> As for the denominator, I get <span class="math-container">$18x$</span> for the LCD and then rewrite the fraction as <span class="math-container">$$\frac{7-54x}{18x}$$</span></p> <p>When I begin to divide, I switch the sign from division to multiplication and swap the numerator with the denominator (<span class="math-container">$\frac{7-54x}{18x}$</span> becomes <span class="math-container">$\frac{18x}{7-54x}$</span>).</p> <p>The product I get is <span class="math-container">$$\frac{972x^2-126x}{21x - 162x^2}$$</span> When I simplify I get <span class="math-container">$6-6$</span> which is zero. Is this answer correct?</p>
Robert Israel
8,508
<p>The <span class="math-container">$1$</span> on the right is supposed to be the identity (operator or matrix, depending on what sort of thing <span class="math-container">$A$</span> is).</p>
3,986,785
<p>On page 153 of <em>Linear Algebra Done Right</em> the second edition, it says:</p> <blockquote> <p>Define a linear map <span class="math-container">$S_1: \text{range}(\sqrt{T^*T} ) \to \text{range}(T)$</span> by:</p> </blockquote> <blockquote> <p><strong>7.43:</strong> <span class="math-container">$S_1 (\sqrt{T^* T}v)=Tv$</span></p> </blockquote> <blockquote> <p>First we must check that <span class="math-container">$S_1$</span> is <strong>well defined</strong>. To do this, suppose <span class="math-container">$v_1, v_2 \in V$</span> are such that <span class="math-container">$\sqrt {T^*T}v_1 = \sqrt{T^*T}v_2$</span>. For the definition given by 7.43 to make sense, we must show that <span class="math-container">$Tv_1=T v_2$</span>.</p> </blockquote> <p>It is not entirely clear to me what the term 'well-defined' means here. Can someone clarify?</p> <p>Thanks</p>
freakish
340,986
<p>What exactly is a function <span class="math-container">$f:X\to Y$</span>? Typically we define it as a subset of <span class="math-container">$X\times Y$</span> such that for any <span class="math-container">$x\in X$</span> there is precisely one <span class="math-container">$y\in Y$</span> such that <span class="math-container">$(x,y)\in f$</span>.</p> <p>And so &quot;well defined function&quot; means: &quot;the subset of <span class="math-container">$X\times Y$</span> we just defined is actually a function&quot;, which boils down to showing that (a) for any <span class="math-container">$x\in X$</span> there is <span class="math-container">$y\in Y$</span> such that <span class="math-container">$(x,y)\in f$</span> and (b) if <span class="math-container">$(x,y)\in f$</span> and <span class="math-container">$(x,y')\in f$</span> for some <span class="math-container">$y,y'\in Y$</span> then <span class="math-container">$y=y'$</span>.</p> <p>Or equivalently, for any <span class="math-container">$x\in X$</span> the set <span class="math-container">$\{y\in Y\ |\ (x,y)\in f\}$</span> has exactly one element.</p> <hr /> <p>A common example is when we deal with equivalence relations. For example consider the integers <span class="math-container">$\mathbb{Z}$</span> with the following relation: <span class="math-container">$x\sim y$</span> if and only if <span class="math-container">$2$</span> divides <span class="math-container">$x-y$</span>. Now consider <a href="https://en.wikipedia.org/wiki/Equivalence_class" rel="nofollow noreferrer">the quotient set</a> <span class="math-container">$X=\mathbb{Z}/\sim$</span> and</p> <p><span class="math-container">$$f:X\to\mathbb{Z}$$</span> <span class="math-container">$$f([x]_\sim)=x$$</span> <span class="math-container">$$g:X\to\mathbb{Z}$$</span> <span class="math-container">$$g([x]_\sim)=x\text{ mod }2$$</span></p> <p>Our first <span class="math-container">$f$</span> is not well defined. 
Because <span class="math-container">$[0]_\sim=[2]_\sim$</span> but <span class="math-container">$f([0]_\sim)=0$</span> while <span class="math-container">$f([2]_\sim)=2$</span> are different values for the same argument.</p> <p>But our <span class="math-container">$g$</span> is well defined. That's because <span class="math-container">$[x]_\sim=[y]_\sim$</span> if and only if <span class="math-container">$2$</span> divides <span class="math-container">$x-y$</span>. Which is if and only if <span class="math-container">$(x-y)\text{ mod }2=0$</span> and this is if and only if <span class="math-container">$x\text{ mod }2=y\text{ mod }2$</span>.</p>
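The mod-2 example can also be checked mechanically. A small Python sketch (mine, with the class $[x]_\sim$ represented by any integer in it):

```python
# Represent a class [x] by any integer in it; x ~ y iff 2 divides x - y.
def same_class(x, y):
    return (x - y) % 2 == 0

def f(x):          # the candidate f([x]) = x
    return x

def g(x):          # the candidate g([x]) = x mod 2
    return x % 2

pairs = [(x, y) for x in range(-6, 7) for y in range(-6, 7)
         if same_class(x, y)]

# g gives equal outputs on equal classes, f does not:
print(all(g(x) == g(y) for x, y in pairs))  # True  -> g is well defined
print(all(f(x) == f(y) for x, y in pairs))  # False -> f is not (e.g. f(0) != f(2))
```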
1,036,964
<blockquote> <p>Let $\{e_1, e_2\}$ and $\{f_1, f_2, f_3\}$ be the canonical ordered bases of $\mathbb{R}^2$ and $\mathbb{R}^3$ respectively. Find the coordinates of $x \otimes y$ with respect to the basis $\{e_i\otimes f_j\}$ of $\mathbb{R}^2\otimes\mathbb{R}^3$ where $x = (1, 1)$ and $y = (1, -2, 1)$.</p> </blockquote> <p>Since I have that $(1, 1) = e_1 + e_2$ and $(1, -2, 1) = f_1 - 2f_2 + f_3$, then $x \otimes y = (1, 1) \otimes (1, -2, 1) = (e_1 + e_2)\otimes (f_1 - 2f_2 + f_3) = \dots ?$ How can I compute this last tensor product? Thanks. Also any further reading recommendation about examples of simple tensor product calculations and not only the abstract background would be appreciated ;)</p>
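For what it's worth, since $x\otimes y=\sum_{i,j}x_iy_j\,e_i\otimes f_j$, the coordinates in the basis $\{e_i\otimes f_j\}$ are just the products $x_iy_j$. A numeric sketch of that computation (illustrative, not from the original thread):

```python
# Coordinates of x ⊗ y in the basis {e_i ⊗ f_j}: the products x_i * y_j,
# listed in the order (i, j) = (1,1), (1,2), (1,3), (2,1), (2,2), (2,3).
x = [1, 1]
y = [1, -2, 1]
coords = [xi * yj for xi in x for yj in y]
print(coords)  # [1, -2, 1, 1, -2, 1]
```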
drhab
75,923
<p>Prescribing $g\left(x,y\right)=xy$ on $B=\left[0,4\right)\times\left[-2,2\right)$ gives $g\left[B\right]=f\left[A\right]$. </p> <p>This actually is replacing $x^{2}$ with domain $\left(-2,1\right)$ by the 'less complex' $x$ with domain $\left[0,4\right)$.</p> <p>For $t\in\left(-8,8\right)$ find some $r\in\left(0,4\right)$ s.t. $\left|t\right|&lt;2r$. </p> <p>Then $t=g\left(r,s\right)$ for $s=\frac{t}{r}\in(-2,2)$.</p>
311,617
<p>If $S$ is a ring and $R \subset S$ is a subring it's common to write that $S/R$ is an extension of rings. I frequently find myself writing this and read it quite often in textbooks and lecture notes. But whenever I actually think about the notation, I find it to be one of the most confusing conventions in algebra. In almost any other context $S/R$ would mean taking a quotient of $S$ by $R$. It seems much more clear to me write let $R \subset S$ be an extension of rings, but I don't see this notation used very frequently. </p> <p>So I'm wondering if there's some high level reason we use this notation that I'm not seeing. I'm also curious in what context this notation first appeared. </p>
Community
-1
<p><strong>Hint.</strong> Take a short exact sequence $0\to K\to F\to N\to 0$, where $F$ is projective (or free). Then use the long exact homology sequence for Tor and induction on $n$. </p>
3,932,883
<p>Can someone suggest how to find the infinite series sum for</p> <p><span class="math-container">$$\frac{k(k+1)3^k}{k!}$$</span> where <span class="math-container">$k$</span> goes from <span class="math-container">$1$</span> to infinity.</p> <p>I know that <span class="math-container">$\sum_{k=0}^\infty\frac{3^k}{k!}=e^3$</span> but I'm not sure if that helps here.</p>
Empy2
81,790
<p><span class="math-container">$$\frac{k(k+1)3^k}{k!}=\frac{k(k-1)3^k}{k!}+\frac{2k3^k}{k!}=\frac{3^k}{(k-2)!}+2\frac{3^k}{(k-1)!}\\ =9\frac{3^{k-2}}{(k-2)!}+6\frac{3^{k-1}}{(k-1)!}$$</span> Now change variables</p>
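After the change of variables, both remaining sums are copies of $\sum_j 3^j/j! = e^3$, so the total is $9e^3+6e^3=15e^3$. A numeric spot-check (added for illustration):

```python
import math

# Partial sum of k(k+1)3^k / k! for k = 1..59; the tail is negligible here.
partial = sum(k * (k + 1) * 3**k / math.factorial(k) for k in range(1, 60))
print(partial, 15 * math.exp(3))  # both are ~301.2836
```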
506,965
<p>How can I prove that $E_π [ (dQ_X/dπ) S(T)| F_t ]= E_{Q_X} [S(T) | F_t]E_π [ dQ_X/dπ | F_t ]$? Obviously $E_π [(dQ_X/dπ) S(T) ]= E_{Q_X} [S(T)]$; I know that much, but how do I prove it when it is conditioned on $F_t$?</p>
Gordon
169,372
<p>Alternatively, note that, for any $A \in \mathcal{F}_t$, \begin{align*} \int_A \frac{dQ_X}{d\pi}S(T) d\pi &amp;=\int_A S(T) dQ_X\\ &amp;=\int_A E_{Q_X}(S(T)\mid \mathcal{F}_t) dQ_X\\ &amp;=\int_A E_{Q_X}(S(T)\mid \mathcal{F}_t) \frac{dQ_X}{d\pi} d\pi\\ &amp;=\int_A E_{\pi}\left(E_{Q_X}(S(T)\mid \mathcal{F}_t) \frac{dQ_X}{d\pi} \mid \mathcal{F}_t \right)d\pi\\ &amp;=\int_A E_{Q_X}(S(T)\mid \mathcal{F}_t) E_{\pi}\left(\frac{dQ_X}{d\pi} \mid \mathcal{F}_t \right)d\pi. \end{align*} Therefore, \begin{align*} E_{\pi}\left(\frac{dQ_X}{d\pi}S(T) \mid \mathcal{F}_t\right) &amp;= E_{Q_X}(S(T)\mid \mathcal{F}_t) E_{\pi}\left(\frac{dQ_X}{d\pi} \mid \mathcal{F}_t \right). \end{align*}</p>
466,873
<p>I'm trying to prove that if $K$ is a finite field, then every subset of $\mathbb A^n(K)$ is algebraic. I know that if $K$ is finite, then every element of $K$ is algebraic, i.e., for every $a\in K$ there is a polynomial $f\in K[x]$ such that $f(a)=0$, but this didn't help me to solve the question. I am almost sure that we have to use this to solve this question.</p> <p>I need help.</p> <p>Thanks in advance.</p>
M Turgeon
19,379
<p>A finite union of algebraic sets is an algebraic set. </p>
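The hint can be made concrete in the one-variable case over $K=\mathbb F_p$: each point $\{a\}$ is the zero set of $x-a$, and taking a finite union corresponds to multiplying the defining polynomials. A brute-force check of this (my illustration, not part of the hint):

```python
from itertools import combinations

# Over K = F_p with n = 1, any subset S of F_p is the zero set of
# f(x) = prod_{a in S} (x - a), computed mod p.
p = 5

def zero_set(S):
    def f(x):
        out = 1
        for a in S:
            out = out * (x - a) % p
        return out
    return {x for x in range(p) if f(x) == 0}

# Every subset of F_5 (including the empty set) is recovered exactly.
ok = all(zero_set(set(S)) == set(S)
         for r in range(p + 1)
         for S in combinations(range(p), r))
print(ok)  # True
```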
1,361,517
<p>$K_1 K_2 \dotsb K_{11}$ is a regular $11$-gon inscribed in a circle, which has a radius of $2$. Let $L$ be a point, where the distance from $L$ to the circle's center is $3$. Find $LK_1^2 + LK_2^2 + \dots + LK_{11}^2$.</p> <p>Any suggestions as to how to solve this problem? I'm unsure what method to use. </p>
DeepSea
101,504
<p>Use the law of cosines for each triangle $\triangle LOK_i,\ i = 1,\cdots, 11$. We have $LK_i^2 = LO^2+OK_i^2 - 2LO\cdot OK_i\cos \theta_i$ with $\theta_i = \angle LOK_i$, so $LK_i^2 = 3^2+2^2-12\cos \theta_i=13-12\cos \theta_i$. Hence, since $|\vec{OK_i}||\vec{OL}| = 2\cdot 3 = 6$, $$S = \displaystyle \sum_{i=1}^{11}LK_i^2=13\cdot 11-12\displaystyle \sum_{i=1}^{11}\cos \theta_i=143-12\displaystyle \sum_{i=1}^{11}\dfrac{\vec{OK_i}\cdot \vec{OL}}{|\vec{OK_i}||\vec{OL}|}=143-2\,\vec{OL}\cdot\left(\displaystyle \sum_{i=1}^{11}\vec{OK_i}\right)=143-2\,\vec{OL}\cdot \vec{0}=143-0 = 143$$</p>
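A coordinate check of the result (added for illustration): put the centre $O$ at the origin, the vertices on a circle of radius $2$, and $L$ at distance $3$ from $O$:

```python
import math

# Regular 11-gon on a circle of radius 2 centred at O; L at distance 3.
R, d, n = 2.0, 3.0, 11
L = (d, 0.0)
total = sum((R * math.cos(2 * math.pi * i / n) - L[0]) ** 2 +
            (R * math.sin(2 * math.pi * i / n) - L[1]) ** 2
            for i in range(n))
print(total)  # ~143.0
```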
466,909
<p>How many ways can you split the numbers 1 to 5 into two groups of varying size? For example: '1 and 2,3,4,5' or '1,2 and 3,4,5' or '1,2,3 and 4,5'. How many combinations are there like this? What is the formula?</p>
Sarthak Goyal
528,258
<p>This is a basic application of the famous "Stirling Numbers of the second kind".</p> <p>For splitting an $n$-element set into $2$ non-empty unordered groups, the count is $S(n,2) = 2^{n-1} - 1$.</p> <p>So, here, we get the answer to be $2^{5-1} - 1$, i.e. $15$.</p>
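A brute-force count confirms the formula for $n=5$ (illustrative):

```python
from itertools import combinations

# Enumerate all splits of {1,...,5} into two non-empty unordered groups.
items = [1, 2, 3, 4, 5]
splits = set()
for r in range(1, len(items)):
    for group in combinations(items, r):
        rest = tuple(x for x in items if x not in group)
        splits.add(frozenset([group, rest]))   # unordered pair of groups
print(len(splits), 2 ** (len(items) - 1) - 1)  # 15 15
```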
3,912,722
<blockquote> <p>Circle of radius <span class="math-container">$r$</span> touches the parabola <span class="math-container">$y^2+12x=0$</span> at its vertex. Centre of circle lies left of the vertex and circle lies entirely within the parabola. What is the largest possible value of <span class="math-container">$r$</span>?</p> </blockquote> <p>So my book has given the solution as follows:</p> <blockquote> <p>The equation of the circle can be taken as: <span class="math-container">$(x+r)^2+y^2=r^2$</span><br /> and when we solve the equation of the circle and the parabola, we get <span class="math-container">$x=0$</span> or <span class="math-container">$x=12-2r$</span>.</p> </blockquote> <blockquote> <p>Then, <span class="math-container">$12-2r≥0$</span> and finally, the largest possible value of <span class="math-container">$r$</span> is <span class="math-container">$6$</span>.</p> </blockquote> <p>This is where I got stuck as I'm not able to understand why that condition must be true. I get that the circle must lie within the parabola...</p> <p>Can someone please explain this condition to me?</p>
Jake Mirra
278,017
<p>First of all, you have a typo, the circle is <span class="math-container">$ (x + r)^2 + y^2 = r^2 $</span>.</p> <p>Clearly since <span class="math-container">$ x \leq 0 $</span> on the parabola, any intersections of the circle with the parabola will have <span class="math-container">$ x \leq 0 $</span>. <span class="math-container">$ x = 0 $</span> will always correspond to the vertex. The question is asking you to set the radius <span class="math-container">$ r $</span> as big as possible without having any other intersection points. Through the calculations you/they have determined that any other intersection points <strong>can only</strong> occur when <span class="math-container">$ x = 12 - 2r $</span> or <span class="math-container">$ x = 0 $</span>. But since <span class="math-container">$ y^2 $</span> has to be non-negative (<span class="math-container">$ y $</span> is real after all) and we have <span class="math-container">$ y^2 = -12x $</span>, <span class="math-container">$ x = 12 - 2r $</span> corresponds to an intersection point if <em>and only if</em> <span class="math-container">$ x \leq 0 $</span>. So we get three intersection points if <em>and only if</em> <span class="math-container">$ 12 - 2r &lt; 0 $</span>, in which case the three intersection points are <span class="math-container">$ (0,0) $</span>, <span class="math-container">$ \left(12 - 2r, \sqrt{12(2r-12)} \right) $</span>, and <span class="math-container">$ \left(12 - 2r, -\sqrt{12(2r-12)} \right)$</span>. Put differently, the circle is in the parabola if and only if there is (at most) one intersection point, which in turn is true if and only if <span class="math-container">$ r \leq 6 $</span>.</p> <p>Clearly if you have three intersection points, the circle is not &quot;inside&quot; the parabola.</p>
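A numeric containment check (added for illustration): sample the circle $(x+r)^2+y^2=r^2$ and test $y^2\le -12x$ at each sample point. The largest radius that fits is $6$:

```python
import math

# A point (x, y) is inside the parabola y^2 + 12x <= 0 iff y^2 <= -12x.
# Sample the circle of radius r centred at (-r, 0) and test containment.
def fits(r, samples=10000):
    for i in range(samples):
        t = 2 * math.pi * i / samples
        x, y = -r + r * math.cos(t), r * math.sin(t)
        if y * y > -12 * x + 1e-9:   # small tolerance for rounding
            return False
    return True

print(fits(6.0), fits(6.5))  # True False
```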
2,611,450
<p>I'm having trouble understanding the description of the event in italics (with P = 0.8) in the question below. </p> <p>"The probability that an American industry will locate in Shanghai is P(S) = 0.7.<br> The probability that it will locate in Beijing is P(B) = 0.4.<br> And the <em>probability that it will locate in either Shanghai or Beijing or both is 0.8</em>.<br> What is the probability that the industry will locate: (a) in both cities? (b) in neither city?"</p> <p><strong>Doubt:</strong> Why the answer to (a) isn't P = 0.8? </p> <p>The question itself says that "<em>the probability that it will locate in either Shanghai or Beijing or both is 0.8</em>". Isn't it saying P(A∪B) = P(A∩B) = 0.8?</p> <p>I've found many solutions available and the answers are (a) 0.3 and (b) 0.2 but I still cannot understand why it is not 0.8.</p> <p>Thanks.</p>
Graham Kemp
135,106
<blockquote> <p>The problem is, I know how to check those properties for simple relations (i.e. checking that $xRy→yRx$), but I don't understand how to do it in this example, because I don't really understand the relation. </p> </blockquote> <p>$\def\R{\operatorname R}\R$ is symmetric if $\forall s\in S\;\forall t\in S\; [s\R t\to t\R s]$. &nbsp; Now $S=\Bbb Z\times(\Bbb Z\setminus \{0\})$, and $\forall (a,b)\in S\;\forall (c,d)\in S\;[(a,b)\R(c,d)\leftrightarrow (ad=cb)]$ so that would be $\forall (a,b)\in S\;\forall (c,d)\in S\;[(ad=cb)\to (cb=ad)]$. &nbsp; Because this is true (via the symmetry of equality), therefore $\R$ is symmetric.</p> <p>Use similar principle to check for Reflexivity and Transitivity.</p> <p>$\\[3ex]$</p> <hr> <p>Hint: You might find it helpful that $(ad=cb)\leftrightarrow (a/b=c/d)$ <em>if</em> $b\neq0, d\neq 0$.</p>
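The same principle can be checked mechanically on a finite sample of $S$ (a sketch only: testing finitely many pairs illustrates the properties, it does not prove them):

```python
from itertools import product

# Sample of S = Z x (Z \ {0}); (a,b) R (c,d) iff a*d == c*b.
S = [(a, b) for a, b in product(range(-3, 4), repeat=2) if b != 0]

def R(s, t):
    return s[0] * t[1] == t[0] * s[1]

reflexive = all(R(s, s) for s in S)
symmetric = all(R(t, s) for s in S for t in S if R(s, t))
transitive = all(R(s, u) for s in S for t in S for u in S
                 if R(s, t) and R(t, u))
print(reflexive, symmetric, transitive)  # True True True
```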
4,531,998
<p>As you increase the value of <span class="math-container">$n$</span>, you will generate Pythagorean triples whose first square is even. Is there any visual proof of the following explicit formula? Where does it come from, and how can it be derived?</p> <p><span class="math-container">$(2n)^2 + (n^2 - 1)^2 = (n^2 + 1)^2$</span></p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> <th><span class="math-container">$(2n)^2+(n^2-1)^2=(n^2+1)^2$</span></th> </tr> </thead> <tbody> <tr> <td><span class="math-container">$(2*0)^2+(0^2-1)^2=(0^2+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1^2-1)^2=(1^2+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(2^2-1)^2=(2^2+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$(2*0)^2+(0-1)^2=(0+1)^2$</span></td> <td><span class="math-container">$(2*1)^2+(1-1)^2=(1+1)^2$</span></td> <td><span class="math-container">$(2*2)^2+(4-1)^2=(4+1)^2$</span></td> </tr> <tr> <td><span class="math-container">$0^2+1^2=1^2$</span></td> <td><span class="math-container">$2^2+0^2=2^2$</span></td> <td><span class="math-container">$4^2+3^2=5^2$</span></td> </tr> <tr> <td><span class="math-container">$0+1=1$</span></td> <td><span class="math-container">$4+0=4$</span></td> <td><span class="math-container">$16+9=25$</span></td> </tr> <tr> <td><span class="math-container">$1=1$</span></td> <td><span class="math-container">$4=4$</span></td> <td><span class="math-container">$25=25$</span></td> </tr> </tbody> </table> </div>
Constantinos Pisimisis
1,087,555
<p>The formula you are referring to is a special case of Euclid's formula.<br /> According to Euclid's formula: <span class="math-container">$$\text{Given an arbitrary pair of integers m and n with m$\gt$n and m,n$\gt$0}$$</span> <span class="math-container">$$a=m^2-n^2 \text{ , } b=2mn \text{ , } c=m^2+n^2 \text{ form a Pythagorean triple}$$</span> <span class="math-container">$\text{In your case: } n=1 \text{ and } m\in\mathbb{N}$</span> (your <span class="math-container">$n$</span> plays the role of Euclid's <span class="math-container">$m$</span>).</p> <hr /> <p>You can find more about this formula at these links:<br /> <a href="https://math.stackexchange.com/questions/3284909/proof-of-euclids-formula-for-primitive-pythagorean-triples">Proof of Euclid's formula (Math StackExchange)</a><br /> <a href="https://en.wikipedia.org/wiki/Pythagorean_triple" rel="nofollow noreferrer">Pythagorean triple (Wikipedia)</a></p>
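A quick check of the sub-case (illustrative): with Euclid's $n=1$, the triple is $(m^2-1,\ 2m,\ m^2+1)$, which is exactly the question's formula with the question's $n$ in the role of $m$:

```python
# Euclid's formula: (m^2 - n^2, 2mn, m^2 + n^2) is a Pythagorean triple.
def euclid(m, n):
    return m * m - n * n, 2 * m * n, m * m + n * n

triples = [euclid(m, 1) for m in range(2, 8)]
for a, b, c in triples:
    assert a * a + b * b == c * c
print(triples)  # starts with (3, 4, 5), (8, 6, 10), (15, 8, 17), ...
```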