156,285
<p>I have been working on this exercise for a while now. It's in B.L. van der Waerden's <em>Algebra (Volume I)</em>, page $19$. The exercise is as follows:</p> <blockquote> <p>The order of the symmetric group $S_n$ is $n!=\prod_{1}^{n}\nu$. (Mathematical induction on $n$.)</p> </blockquote> <p>I don't comprehend how we can logically use induction here. It seems that the first step would be proving that $S_1$ has $1!=1$ elements. This is easily justified: there is only one permutation of a single symbol, the one sending $1$ to itself.</p> <p>The next step would be assuming that $S_n$ has order $n!$. Now here is where I get stuck. How do I use this to show that $S_{n+1}$ has order $(n+1)!$?</p> <p>Here is my attempt: I am thinking this is because all $n!$ permutations of $S_n$ now have a new element to permute. For example, if we take one single permutation $$ p(1,\dots,n) = \begin{pmatrix} 1 &amp; 2 &amp; 3 &amp; \dots &amp; n\\ 1 &amp; 2 &amp; 3 &amp; \dots &amp; n \end{pmatrix} $$ We now have $n$ modifications of this single permutation by adding the symbol $(n+1)$:</p> <p>\begin{align} p(1,2,\dots,n,(n+1))&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1) \end{pmatrix}\\ p(2,1,\dots,n,(n+1))&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ 2 &amp; 1 &amp; \dots &amp; n &amp; (n+1) \end{pmatrix}\\ \vdots\\ p(n,2,\dots,1,(n+1))&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ n &amp; 2 &amp; \dots &amp; 1 &amp; (n+1) \end{pmatrix}\\ p((n+1),2,\dots,n,1)&amp;= \begin{pmatrix} 1 &amp; 2 &amp; \dots &amp; n &amp; (n+1)\\ (n+1) &amp; 2 &amp; \dots &amp; n &amp; 1 \end{pmatrix} \end{align}</p> <p>There are actually $(n+1)$ permutations of that specific form, but we take $p(1,\dots,n)=p(1,\dots,n,(n+1))$ in order to illustrate and prove our original statement. 
We can make this general equality for all $n!$ permutations: $p(x_1,x_2,\dots,x_n)=p(x_1,x_2,\dots,x_n,x_{n+1})$ where $x_i$ is any symbol of our finite set of $n$ symbols and $x_{n+1}$ is strictly defined as the symbol $(n+1)$.</p> <p>We can repeat this process for all $n!$ permutations in $S_n$. This gives us $n!n$ permutations. Then, adding in the original $n!$ permutations, we have $n!n+n!=(n+1)n!=(n+1)!$. Consequently, $S_{n+1}$ has order $(n+1)!$.</p> <p>How is my reasoning here? Furthermore, is there a more elegant argument? I do not really see my argument here as <em>incorrect</em>; it just seems to lack elegance. My reasoning may well be very incorrect, however. If so, please point it out to me.</p>
000
22,144
<p>Here's my answer using induction (it's a similar proof, but seems more concise and understandable):</p> <p>Our base case is true.</p> <p>Assume $S_n$ has $n!$ permutations.</p> <p>Define the original $n!$ permutations as the permutations in which $(n+1)$ is sent to itself. Thus, by definition, all other permutations ("the new permutations") are the original permutations, except that $(n+1)$ is sent to a place other than itself. There are $n$ places to send $(n+1)$ if we exclude $(n+1) \to (n+1)$. Since there are $n!$ original permutations, there must be $n!n$ new permutations. The reason is that, for each of the $n!$ permutations, there are $n$ different modifications (e.g. $(n+1) \to 1$, $(n+1) \to 2$, etc.). Therefore, the total number of permutations in $S_{n+1}$ is $n!+n!n=(n+1)!$.</p> <p>P.S. I only used induction because I wanted to do precisely as the exercise states; I try not to deviate, in order to avoid erroneous proofs (in this case, I would prefer to deviate).</p>
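The counting in this answer is easy to check by brute force for small $n$. Here is a quick sketch in Python (my addition, not part of the original argument) that enumerates $S_{n+1}$ and splits it into the $n!$ permutations fixing $n+1$ and the $n\cdot n!$ "new" ones:

```python
from itertools import permutations
from math import factorial

def order(n):
    """Order of S_n, counted by brute-force enumeration."""
    return sum(1 for _ in permutations(range(1, n + 1)))

# |S_n| = n! for small n
for n in range(1, 7):
    assert order(n) == factorial(n)

# Split S_{n+1} into the n! permutations fixing n+1 and the n * n! "new" ones
n = 4
fixing = sum(1 for p in permutations(range(1, n + 2)) if p[n] == n + 1)
moving = order(n + 1) - fixing
assert fixing == factorial(n)
assert moving == n * factorial(n)
```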
7,761
<p>Our undergraduate university department is looking to spruce up our rooms and hallways a bit and has been thinking about finding mathematical posters to put in various spots, hoping to entice students to take more math classes. We've had decent success in finding "How is Math Used in the Real World"-type posters (mostly through the AMS), but we've been unable to find what I would call interesting/informative math posters. </p> <p>For example, I remember seeing a poster once (put out by Mathematica) that basically laid out how to solve general quadratics, cubics, and quartics. Then it had a good overview of proving that no formula exists for quintics. So not only was it pretty to look at, but if you stopped to read it, you actually learned something.</p> <p>Does anyone know of a company or distributor that carries a variety of posters like this? </p> <p>I've tried searching online, but all that comes up is a plethora of posters of math jokes. And even though the application/career-based posters are nice and serve a purpose, I don't feel like you actually gain mathematical knowledge by reading them.</p>
Joseph O'Rourke
511
<p>One potential source is the AMS blog <a href="http://blogs.ams.org/visualinsight/" rel="noreferrer">Visual Insights</a> run by John Baez. Each image comes with a clear mathematical story. Most would not fit on one page, but you could make a poster by printing out several pages and pasting them on a poster (or doing the equivalent electronically). <hr /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <img src="https://i.stack.imgur.com/pLigc.jpg" width="300" /> <br /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <sup> Schmidt Arrangement of the Eisenstein Integers – Katherine Stange. </sup> <hr /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <img src="https://i.stack.imgur.com/bxTjw.gif" width="300" /> <br /> &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; <sup> Icosahedron Illustrating Pentagon-Hexagon-Decagon Identity – Greg Egan. </sup></p> <hr />
2,629,744
<p>I solved the problem by first plotting the graph of the function on the left-hand side of the equation and then plotting the line $y=k$. For the equation to have $4$ solutions, the two curves must intersect at $4$ different points, and from the two graphs I could see that for this to occur, the value of $k$ must lie between $\cfrac{1}{4}$ and $6$, that is, $k$ belongs to $(0.25,6)$. Thus, the integral values of $k$ would be $1,2,3,4,5$, and so the number of integral values of $k$ is $5$. This was the answer given in the book. But as I was looking at the graph, I realized that $k=0$ also gives $4$ solutions, namely $x=-2,-3,2,3$. So, should the number of integral values of $k$ be $6$ $(0,1,2,3,4,5)$? </p>
Michael Rozenberg
190,319
<p>Now, let $a_n=b_n+1$ for some sequence $b$.</p> <p>Thus, $$b_{n+1}+1=(b_n+1)\frac{n-1}{n+1}+\frac{2}{n+1}$$ or $$b_{n+1}+1=\frac{n-1}{n+1}b_n+\frac{n-1}{n+1}+\frac{2}{n+1}$$ or $$b_{n+1}=\frac{n-1}{n+1}b_n$$ and use your work.</p>
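The recurrence being simplified here appears, from the first display, to be $a_{n+1}=\frac{n-1}{n+1}a_n+\frac{2}{n+1}$. A quick numeric sanity check of the substitution (my own sketch, under that assumption):

```python
from fractions import Fraction

def step(a, n):
    # assumed recurrence, read off the first display: a_{n+1} = a_n*(n-1)/(n+1) + 2/(n+1)
    return a * Fraction(n - 1, n + 1) + Fraction(2, n + 1)

a = Fraction(5)          # an arbitrary starting value a_1
b = [a - 1]              # b_n = a_n - 1
for n in range(1, 10):
    a = step(a, n)
    b.append(a - 1)

# b_{n+1} = (n-1)/(n+1) * b_n, so b_2 = 0 and hence b_n = 0 (a_n = 1) for n >= 2
for n in range(1, 10):
    assert b[n] == b[n - 1] * Fraction(n - 1, n + 1)
assert all(bn == 0 for bn in b[1:])
```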
4,375,994
<blockquote> <p>Question:</p> <p>Show that <span class="math-container">$$\pi =3\arccos(\frac{5}{\sqrt{28}}) + 3\arctan(\frac{\sqrt{3}}{2}) ~~~~~~ (*)$$</span></p> </blockquote> <p><em>My proof method for this question has received mixed responses. Some people say it's fine, others say that it is a verification instead of a proof.</em></p> <p>Proof: <span class="math-container">$$\pi =3\arccos(\frac{5}{\sqrt{28}}) + 3\arctan(\frac{\sqrt{3}}{2})\iff \frac{\pi}{3} = \arccos(\frac{5}{\sqrt{28}}) + \arctan(\frac{\sqrt{3}}{2}) $$</span><span class="math-container">$$\iff \frac{\pi}{3} = \arctan(\frac{\sqrt{3}}{5})+\arctan(\frac{\sqrt{3}}{2})$$</span></p> <p>since <span class="math-container">$\arccos(\frac{5}{\sqrt{28}})=\arctan(\frac{\sqrt{3}}{5})$</span>.</p> <p>The plan now is to apply the tangent function to both sides, and show that LHS=RHS using the tangent addition formula to expand it out.</p> <p>I.e. <span class="math-container">$$\tan(\frac{\pi}{3}) = \tan\bigg(\arctan(\frac{\sqrt{3}}{5})+\arctan(\frac{\sqrt{3}}{2})\bigg)$$</span></p> <p><span class="math-container">$$\iff \sqrt{3} = \frac{\frac{\sqrt{3}}{5}+\frac{\sqrt{3}}{2}}{1-\frac{\sqrt{3}}{5} \frac{\sqrt{3}}{2}}$$</span></p> <p>and the RHS will reduce down to <span class="math-container">$\sqrt{3}$</span>. Hence LHS=RHS.</p> <p>Some things that I've noticed about this method of proof:</p> <ul> <li>It could be used to (incorrectly) prove that <span class="math-container">$$\frac{\pi}{3}+\pi = \arccos(\frac{5}{\sqrt{28}}) + \arctan(\frac{\sqrt{3}}{2})$$</span></li> </ul> <p>So because this method of proof can be used to prove statements that are obviously false, does that mean it can't be used?</p> <ul> <li>Instead of proving (*), wouldn't this method of proof actually prove the following? <span class="math-container">$$\arccos(\frac{5}{\sqrt{28}})+\arctan(\frac{\sqrt{3}}{2})=\frac{\pi}{3} + \pi k$$</span></li> </ul> <p>for some <span class="math-container">$k\in \mathbb{Z}$</span> which we must find. 
In this case, <span class="math-container">$k=0$</span>.</p>
lhf
589
<p>Using complex numbers: <span class="math-container">$$ \arctan(\frac{\sqrt{3}}{5})+\arctan(\frac{\sqrt{3}}{2}) = \arg((5+\sqrt{3}i)(2+\sqrt{3}i)) = \arg(7+7\sqrt{3}i) = \arctan(\sqrt{3}) $$</span></p>
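The computation is easy to verify numerically. A short Python check (my addition, not part of the answer) of both the product and the resulting argument:

```python
import cmath
from math import atan, sqrt, isclose, pi

# left-hand side: the sum of the two arctangents
lhs = atan(sqrt(3) / 5) + atan(sqrt(3) / 2)

# the product used in the answer: (5 + sqrt(3) i)(2 + sqrt(3) i) = 7 + 7 sqrt(3) i
z = (5 + sqrt(3) * 1j) * (2 + sqrt(3) * 1j)
assert isclose(z.real, 7) and isclose(z.imag, 7 * sqrt(3))

# arguments add under multiplication, and arg(7 + 7 sqrt(3) i) = arctan(sqrt(3)) = pi/3
assert isclose(lhs, cmath.phase(z))
assert isclose(lhs, pi / 3)
```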
246,862
<p>I have stumbled upon this problem, which keeps me from finishing a proof:</p> <p>$(\sum_{n} {|X_n|})^a \leq \sum_{n} {|X_n|}^a$, where $n \in \mathbb{N}$ and $ 0 \leq a \leq 1 $</p> <p>I have no idea how to prove this. Is there something like the Cauchy–Schwarz inequality that applies in the case $0 \leq a \leq 1$?</p> <p>Any tip is welcome. Thanks!</p>
loved.by.Jesus
272,774
<p>This is more or less complementary to the <a href="https://math.stackexchange.com/questions/2735722/has-it-been-proven-that-the-sum-of-powers-is-greater-than-the-power-of-the-sum/2735741">question here</a>.</p> <p>So, I adapt the answer of <em>Saulspatz</em> to your case.</p> <p>I will adapt the notation: instead of writing <span class="math-container">$|X_n|$</span> for the summands, I will write <span class="math-container">$x_i$</span>, so that <span class="math-container">$\forall i\; x_i&gt;0$</span>. Then, your question states:</p> <p><span class="math-container">\begin{equation} \left(\sum{x_i}\right)^k \leq\sum{x_i^k}\quad \text{if } x_i&gt;0\;\forall i\ \text{and } 0 \leq k\leq1 \end{equation}</span></p> <h2>Proof:</h2> <p>For the cases <span class="math-container">$k=0,1$</span> the proof is trivial; let us now consider the cases <span class="math-container">$ 0 &lt; k &lt; 1$</span>.</p> <p>If you write <span class="math-container">$$f(x_1,x_2,\dots,x_n)=\left(\sum{x_i}\right)^k-\sum{x_i^k},$$</span></p> <p>then <span class="math-container">$f(0,0,...,0)=0$</span> and it's easy to show that all the first-order partial derivatives <span class="math-container">$\frac{\partial f}{\partial x_j}$</span> are strictly negative when <span class="math-container">$0&lt;k&lt;1.$</span> Indeed,<span class="math-container">$$ \frac{\partial f}{\partial x_j}=k\left(\left(\sum{x_i}\right)^{k-1}-x_j^{k-1}\right)&lt;0$$</span> when <span class="math-container">$x_i&gt;0 \;\forall i$</span>.</p> <p>This is because <span class="math-container">$g(x)=x^{k-1}$</span> for <span class="math-container">$0&lt;k&lt;1, x&gt;0$</span> is a decreasing function and <span class="math-container">$\sum x_i &gt; x_j\; \forall j$</span>, since <span class="math-container">$x_i&gt;0\;\forall i$</span>.</p>
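A quick randomized numerical check of the inequality (my own sketch, independent of the proof above):

```python
import random

random.seed(0)

def holds(xs, k, tol=1e-12):
    # (sum x_i)^k <= sum x_i^k for x_i > 0 and 0 <= k <= 1
    return sum(xs) ** k <= sum(x ** k for x in xs) + tol

for _ in range(1000):
    xs = [random.uniform(0.01, 10.0) for _ in range(random.randint(1, 8))]
    k = random.uniform(0.0, 1.0)
    assert holds(xs, k)

# boundary cases k = 0 and k = 1 hold as well
assert holds([1.0, 2.0, 3.0], 0.0) and holds([1.0, 2.0, 3.0], 1.0)
```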
3,137,599
<p>I actually have a doubt about the solution to this question given in my book. It uses the equations <span class="math-container">$\tan 2A = -\tan C$</span> (from <span class="math-container">$A=B$</span>, <span class="math-container">$A+B+C = 180^{\circ}$</span>) and <span class="math-container">$2\tan A + \tan C = 100$</span>, thereby formulating the cubic equation <span class="math-container">$x^3 - 50x^2 + 50=0$</span>. The derivative is <span class="math-container">$3x^2- 100x$</span>. Since <span class="math-container">$f(0) f(100/3) &lt;0 $</span>, it is found that there are three distinct real roots. After this my book simply states that one of these roots corresponds to an obtuse angle, so the other two values are taken. Why would one of the values of <span class="math-container">$A$</span> be obtuse when we have used <span class="math-container">$2A + C = 180^{\circ}$</span>, and an obtuse value of <span class="math-container">$A$</span> would not satisfy this equation? Would someone please help?</p>
Michael Rozenberg
190,319
<p>Because <span class="math-container">$$2\alpha+\gamma=180^{\circ}$$</span> or <span class="math-container">$$\alpha=90^{\circ}-\frac{\gamma}{2}&lt;90^{\circ},$$</span> which says that <span class="math-container">$\alpha$</span> is an acute angle, and we need to take <span class="math-container">$x&gt;0$</span> only.</p> <p>It is easy to see that the equation you obtained has two positive roots: one in <span class="math-container">$(1,2)$</span> and the second in <span class="math-container">$(49,50)$</span>.</p> <p>There is also a negative root, which is not valid. </p>
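The claimed root locations are easy to confirm from sign changes of the cubic (a quick check I added, using the cubic $x^3-50x^2+50$ from the question):

```python
def f(x):
    # the cubic from the question, in x = tan A
    return x**3 - 50 * x**2 + 50

# sign changes bracket the three real roots
assert f(-1) < 0 < f(0)     # negative root in (-1, 0), to be discarded
assert f(1) > 0 > f(2)      # first positive root in (1, 2)
assert f(49) < 0 < f(50)    # second positive root in (49, 50)
```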
521,589
<p>In a rectangle $ABCD$, the coordinates of $A$ and $B$ are $(1,2)$ and $(3,6)$ respectively and some diameter of the circumscribing circle of $ABCD$ has equation $2x-y+4=0$. Then the area of the rectangle is:</p> <p>My work: I found the equations of $AD$ and $BC$ of the rectangle. Taking the points $C$ and $D$ as $(x_1,y_1)$ and $(x_2,y_2)$ I wrote the equation for $AD=BC$. I think that the equation given for the diameter of the circle should pass through the point of intersection of the diagonals of the rectangle and wrote equation for the point of intersection. This gives me two equations and four unknowns. I know there is some problem with my method making the answer to this problem hard. So, please help me do the problem by the right method. </p>
dibyendu
95,293
<p>HINT: The slope of the line $AB$ (i.e. $2$) is equal to that of the diameter; that is, they are parallel.</p> <p>$\therefore$ The diameter must go through the mid-points of $BC$ and $AD$.</p> <p>$\therefore$ The perpendicular distance between the diameter and $AB$ is $\frac{1}{2}BC$.</p>
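Carrying the hint through numerically (my own sketch; the final value is computed here, not stated in the hint): the line through $A$ and $B$ is $2x-y=0$, and its distance to the given parallel diameter $2x-y+4=0$ is half of $BC$.

```python
from math import hypot, isclose, sqrt

A, B = (1, 2), (3, 6)

# line AB has slope 2 and passes through A: 2x - y + 0 = 0
# given diameter: 2x - y + 4 = 0 (parallel, as the hint observes)
half_BC = abs(4 - 0) / sqrt(2**2 + (-1)**2)   # distance between the parallel lines
AB = hypot(B[0] - A[0], B[1] - A[1])

area = AB * (2 * half_BC)
assert isclose(area, 16)
```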
46,631
<p>I'm writing a program to play a game of <a href="http://en.wikipedia.org/wiki/Pente" rel="noreferrer">Pente</a>, and I'm struggling with the following question:</p> <blockquote> <p>What's the best way to detect patterns on a two-dimensional board?</p> </blockquote> <p>For example, in Pente a pair of neighboring stones of the same color can be captured when they are flanked from both sides by an opponent; how can we find all the stones that can be captured with the next move for the following board?</p> <p><img src="https://i.stack.imgur.com/cV3Gb.png" alt="sample board"></p> <p>Below I show one possible straightforward solution, but with a defect: it's hard to extend it to other interesting patterns, e.g. three stones of the same color in a row surrounded by empty spaces, or four stones of the same color in a row which are flanked from one side but open from another, etc.</p> <blockquote> <p>I'm wondering whether there is a way to define a DSL for detecting 2-dimensional structures like that on a board - sort of a <em>2D pattern matching</em>.</p> </blockquote> <p>P.S. 
I would also appreciate any advice on how to simplify the code below and make it more idiomatic - for example, I don't really like the way <code>sortStones</code> is defined.</p> <h2>Straightforward solution</h2> <p>Here is one way to solve this problem (see below for graphics primitives to generate and display random boards):</p> <ul> <li>Enumerate all subsets of 3 stones from the board above</li> <li>Select those that form an <em>AABE</em> or <em>ABBE</em> pattern, where E denotes an unoccupied space</li> </ul> <p>Let's store the board as a list of black and white stones,</p> <pre><code>a = {black[2, 1], black[4, 3], black[2, 5], black[4, 2], black[5, 3], black[1, 2], black[1, 3], black[5, 4], black[1, 5], white[3, 1], white[4, 1], white[4, 4], white[3, 5], white[3, 4], white[5, 1], white[5, 2], white[3, 3], white[1, 1]} </code></pre> <p>First, we define <code>isTriple</code> which checks whether three stones sorted by their x and y coordinates are in the same row next to each other and follow an ABB or AAB pattern:</p> <pre><code>isTriple[{a_, b_, c_}] := And[ (* A A B or A B B *) Head[a] != Head[c] /. {black -&gt; 1, white -&gt; 0}, (* x and y coordinates are equally spaced *) a[[1]] - b[[1]] == b[[1]] - c[[1]], a[[2]] - b[[2]] == b[[2]] - c[[2]], (* and are next to each other *) Abs[a[[1]] - b[[1]]] &lt;= 1, Abs[a[[2]] - b[[2]]] &lt;= 1] </code></pre> <p>Next, we determine the coordinates and the color of the stone that will kill the pair:</p> <pre><code>killerStone[{a_, b_, c_}] := If[Head[a] == Head[b] /. {black -&gt; 1, white -&gt; 0}, Head[c][2 a[[1]] - b[[1]], 2 a[[2]] - b[[2]]], Head[a][2 c[[1]] - b[[1]], 2 c[[2]] - b[[2]]]] </code></pre> <p>Finally, we only select those triples where the killer stone's space is not already occupied:</p> <pre><code>sortStones[l_] := Sort[l, OrderedQ[{#1, #2} /. 
{black -&gt; List, white -&gt; List}] &amp;] triplesToKill[board_] := Module[ {triples = Select[sortStones /@ Subsets[board, {3}], isTriple]}, Select[triples, Block[ {ks = killerStone[#]}, FreeQ[board, _[ks[[1]], ks[[2]]]]] &amp;]] displayBoard[a, #] &amp; /@ triplesToKill[a] // Partition[#, 3, 3, {1, 1}, {}] &amp; // GraphicsGrid </code></pre> <p><img src="https://i.stack.imgur.com/QHj2c.png" alt="straightforward solution"></p> <h2>Graphics primitives</h2> <pre><code>randomPoints[n_] := RandomSample[Block[{nn = Ceiling[Sqrt[n]]}, Flatten[Table[{i, j}, {i, 1, nn}, {j, 1, nn}], 1]], n]; (* n is number of moves = 2 * number of points *) randomBoard[n_] := Module[ {points = randomPoints[2 n]}, Join[ Take[points, n] /. {x_, y_} -&gt; black[x, y], Take[points, -n] /. {x_, y_} -&gt; white[x, y] ]] grid[minX_, minY_, maxX_, maxY_] := Line[Join[ Table[{{minX - 1.5, y}, {maxX + 1.5, y}}, {y, minY - 1.5, maxY + 1.5, 1}], Table[{{x, minY - 1.5}, {x, maxY + 1.5}}, {x, minX - 1.5, maxX + 1.5, 1}]]]; displayBoard[board_] := Module[ {minX = Min[First /@ board], maxX = Max[First /@ board], minY = Min[#[[2]] &amp; /@ board], maxY = Max[#[[2]] &amp; /@ board], n}, Graphics[{ grid[minX, minY, maxX, maxY], board /. { black[n__] -&gt; {Black, Disk[{n}, .4]}, white[n__] -&gt; {Thick, Circle[{n}, .4], White, Disk[{n}, .4]} }}, ImageSize -&gt; Small, Frame -&gt; True]]; displayBoard[board_, points_] := Show[ displayBoard[board], Graphics[ Map[{Red, Disk[{#[[1]], #[[2]]}, .2]} &amp;, points]]] </code></pre>
Mr.Wizard
121
<p>One function comes to mind that already implements matching of multidimensional rules: <a href="http://reference.wolfram.com/mathematica/ref/CellularAutomaton.html" rel="nofollow noreferrer"><code>CellularAutomaton</code></a>. Allow me to represent your board data like this:</p> <pre><code>board = SparseArray[ a /. h_[x_, y_] :&gt; ({-y - 1, x + 1} -&gt; h) /. {black -&gt; ●, white -&gt; ○}, {7, 7}, " "]; </code></pre> <p>For my example I shall show a generic 3x3 rule operation, but this can easily be extended. I know of no built-in way to handle the reflections and translations of your rules, so I will assist with:</p> <pre><code>variants[x_, y_] := Union @@ Outer[ #@{y, x, y} ~Reverse~ #2 &amp;, {Identity, Transpose}, {{}, 1, 2, {1, 2}}, 1 ] expand[h_[x : {_, _, _}, v_]] := variants[x, {_, _, _}] :&gt; v // Thread </code></pre> <p>I now build the rules. The final rule merely keeps any element that is not at the center of a match unchanged.</p> <pre><code>rules = Join @@ expand /@ { {○, ○, ●} -&gt; "Q", {○, ●, ●} -&gt; "R", {_, z_, _} :&gt; z }; </code></pre> <p>Finally I apply them to my <code>board</code>. This shows the original, and after a single transformation:</p> <pre><code>MatrixForm /@ CellularAutomaton[rules, board, 1] </code></pre> <p><img src="https://i.stack.imgur.com/jaYwc.png" alt="enter image description here"></p> <p>You can see that any appearance of the patterns in any orthogonal orientation (but not a diagonal) is "marked" by a Q or R at the center accordingly.</p> <p>This is certainly not a complete implementation of what you requested but I hope that it gives you a reasonable place to start. Another would be <a href="http://reference.wolfram.com/mathematica/ref/ListCorrelate.html" rel="nofollow noreferrer"><code>ListCorrelate</code></a> and a kernel large enough to encompass your patterns, filled perhaps with unique powers of two, thereby yielding a unique value for each possible "filling" of the overlay.</p>
3,521,534
<p>I tried solving a calculus problem and I got the right result, but I don't understand the solution provided at the end of the exercise. Even though I got the same answer, I would like to understand what's happening in the given solution as well.</p> <blockquote> <p>Consider the function: <span class="math-container">$$\ f(x) = \begin{cases} x^2+ax+b &amp; x\leq 0 \\ x-1 &amp; x&gt;0 \\ \end{cases} \ $$</span> Find the antiderivatives of the function <span class="math-container">$f$</span> if they exist.</p> </blockquote> <p>The solution provided goes something like this:</p> <blockquote> <p>For <span class="math-container">$f$</span> to have antiderivatives, the function <span class="math-container">$f$</span> must have the Darboux property. (...Some calculations...), therefore <span class="math-container">$f$</span> has the Darboux property if and only if <span class="math-container">$b = -1$</span> (I understood that now the function is continuous, therefore it has an antiderivative). 
Using the consequences of Lagrange's theorem on the intervals <span class="math-container">$(-\infty, 0)$</span> and <span class="math-container">$(0, \infty)$</span> any antiderivative <span class="math-container">$F : \mathbb{R} \rightarrow \mathbb{R}$</span> of <span class="math-container">$f$</span> has the form:</p> <p><span class="math-container">$$ F(x) = \ \begin{cases} \dfrac{x^3}{3} + a \dfrac{x^2}{2} - x + c_1 &amp; x &lt; 0 \\ \ c_2 &amp; x=0 \\ \dfrac{x^2}{2} - x + c_3 &amp; x&gt;0 \end{cases} \ $$</span></p> <p><span class="math-container">$F$</span> being differentiable, it is also continuous, so <span class="math-container">$F(0) = c_2 = c_1 = c_3 $</span>.</p> <p>Therefore the antiderivatives of <span class="math-container">$f$</span> have the form:</p> <p><span class="math-container">$$ F(x) = c + \ \begin{cases} \dfrac{x^3}{3} + a \dfrac{x^2}{2} - x &amp; x\leq 0 \\ \dfrac{x^2}{2} - x &amp; x&gt;0 \end{cases} \ $$</span></p> </blockquote> <p>Again, I got the same result, but I don't understand a lot of the work done above. </p> <p>The first thing I didn't understand is the part where they say that <span class="math-container">$f$</span> has an antiderivative iff it has the Darboux property. I searched a bit online and I found that a function admits antiderivatives only if it has the Darboux property. So I guess I have to accept that as a fact.</p> <p>The second (and more important) thing that I didn't understand was the part where they said that they used the consequences of Lagrange's Theorem on the intervals <span class="math-container">$(-\infty, 0)$</span> and <span class="math-container">$(0, \infty)$</span> to find that first form of the antiderivative. What theorem are they referring to? How did they use it on those intervals? Why is there a separate case for <span class="math-container">$x = 0$</span> with an additional constant, <span class="math-container">$c_2$</span>? 
I only used <span class="math-container">$2$</span> constants; why were <span class="math-container">$3$</span> needed? Long story short, I just don't understand at all how they arrived at that first form of the antiderivative and how they used these "consequences of Lagrange's theorem". I understood the second form of the antiderivative, that's what I also got, but the first form left me in the dark.</p> <p>I know these are all just details, but I really want to understand what was used here, why it was used, and how it was used.</p>
DanielWainfleet
254,665
<p><span class="math-container">$f(x)$</span> is continuous on <span class="math-container">$(-\infty,0]$</span> and on <span class="math-container">$(0,\infty).$</span> So if <span class="math-container">$F'(x)=f(x)$</span> for all <span class="math-container">$x,$</span> then by the Fundamental Theorem of Calculus there necessarily exist <span class="math-container">$k_1$</span> and <span class="math-container">$k_2$</span> such that <span class="math-container">$$\forall x\le 0\,(F(x)=x^3/3+ax^2/2+bx+k_1);$$</span> <span class="math-container">$$ \forall x&gt;0\,(F(x)=x^2/2-x+k_2).$$</span> For <span class="math-container">$F'(0)$</span> to exist we must have continuity of <span class="math-container">$F(x)$</span> at <span class="math-container">$x=0,$</span> so <span class="math-container">$$k_1=F(0)=\lim_{x\to 0^+}F(x)=k_2.$$</span> And we must have <span class="math-container">$$b=\lim_{x\to 0^-}\frac {F(x)-F(0)}{x-0}=F'(0)=\lim_{x\to 0^+}\frac {F(x)-F(0)}{x-0}=-1.$$</span> So if <span class="math-container">$F'(0)$</span> exists it is necessary that <span class="math-container">$b=-1.$</span></p> <p>You may check that these necessary conditions are also sufficient: <span class="math-container">$f(x)$</span> has an antiderivative for all <span class="math-container">$x$</span> iff <span class="math-container">$b=-1.$</span> And if <span class="math-container">$b=-1$</span> then <span class="math-container">$F$</span> is an antiderivative of <span class="math-container">$f$</span> iff, for some <span class="math-container">$k_1,$</span> we have <span class="math-container">$$x\le 0\implies F(x)=x^3/3+ax^2/2+bx+k_1=x^3/3+ax^2/2-x+k_1;$$</span> <span class="math-container">$$ x&gt;0\implies F(x)=x^2/2-x+k_1.$$</span></p> <p>Appendix. 
On the Darboux property.</p> <p><span class="math-container">$(I).$</span> For any <span class="math-container">$u,v\in \Bbb R$</span> let <span class="math-container">$In[u,v]$</span> be the closed interval with end-points <span class="math-container">$u,v.$</span> That is, <span class="math-container">$In[u,v]=[\min(u,v), \max(u,v)].$</span> Observe that for any <span class="math-container">$u,v,w\in \Bbb R$</span> we have <span class="math-container">$$In[u,v]\cup In[w,v]=[\min(u,v,w),\max(u,v,w)]\supset In[u,w].$$</span> <span class="math-container">$(II).$</span> Theorem: If <span class="math-container">$a\ne b$</span> and if <span class="math-container">$f$</span> is differentiable on <span class="math-container">$In[a,b]$</span> then <span class="math-container">$$\{f'(y): y\in In[a,b]\}\supset In[f'(a),f'(b)].$$</span> Proof: <span class="math-container">$(i).$</span> Let <span class="math-container">$g(a)=f'(a)$</span> and <span class="math-container">$g(x)=\frac {f(x)-f(a)}{x-a}$</span> for <span class="math-container">$a\ne x\in In[a,b].$</span> Now <span class="math-container">$g:[a,b]\to \Bbb R$</span> is continuous, so <span class="math-container">$$\{g(x):x\in In[a,b]\}\supset In[g(a),g(b)]=In[f'(a),g(b)].$$</span></p> <p><span class="math-container">$(ii).$</span> By the Mean Value Theorem, if <span class="math-container">$ x\in In[a,b]$</span> then <span class="math-container">$g(x)\in \{f'(y): y\in In[a,b]\}.$</span> So <span class="math-container">$$\{g(x):x\in In[a,b]\}\subset \{f'(y):y\in In[a,b]\}.$$</span> <span class="math-container">$(iii).$</span> By <span class="math-container">$(i)$</span> and <span class="math-container">$(ii)$</span> we have <span class="math-container">$$In[f'(a),g(b)]\subset \{f'(y):y\in In[a,b]\}.$$</span> <span class="math-container">$(iv).$</span> In <span class="math-container">$(i),(ii),(iii),$</span> interchange <span class="math-container">$a$</span> with <span class="math-container">$b$</span> and replace 
"<span class="math-container">$g$</span>" with "<span class="math-container">$h$</span>". As an analog of <span class="math-container">$(iii)$</span> we obtain <span class="math-container">$$In[f'(b),h(a)]\subset \{f'(y):y\in In[b,a]\}.$$</span> <span class="math-container">$(v).$</span> Now <span class="math-container">$g(b)=h(a),$</span> so by <span class="math-container">$(I)$</span> with <span class="math-container">$u=f'(a),\,v=g(b)=h(a),\,w=f'(b),$</span> we have by <span class="math-container">$(iii)$</span> and <span class="math-container">$(iv)$</span> that <span class="math-container">$$\{f'(y): y\in In [a,b]\}\supset In[u,v]\cup In[w,v]\supset In[u,w]=In[f'(a),f'(b)].$$</span></p>
394,085
<p>How can one establish a proof of the following statement?</p> <p>$$n = \frac{1}{2}(5x+4),\;2&lt;x,\;\text{isPrime}(n)\;\Rightarrow\;n=10k+7$$</p> <p>where $n,x,k$ are integers.</p> <hr> <p>To be more verbose:</p> <p>I conjecture that:</p> <p>If $\frac{1}{2}(5x+4),\;2&lt;x$, is a prime number, then $\frac{1}{2}(5x+4)=10k+7$.</p> <p>How could one prove this?</p>
Ross Millikan
1,827
<p>$x$ must be even or $n$ is not a natural number. So let $x=2y$, and your conjecture is that if $5y+2$ is prime, it is $10k+7$. As $5$ divides $10$, $5y+2 \equiv 2,7 \pmod {10}$. Any number $\equiv 2\pmod {10}$ is even and (if $\gt 2$) not prime.</p>
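The claim is easy to test empirically; a brute-force sketch in Python (my addition):

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# x must be even, x = 2y with y >= 2; then n = (5x + 4)/2 = 5y + 2
for y in range(2, 2000):
    n = 5 * y + 2
    if is_prime(n):
        assert n % 10 == 7   # every prime of this form ends in 7
```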
227,562
<p>Let $K\subset \mathbb{R}^n$ be a compact convex set of full dimension. Assume that $0\in \partial K$. </p> <p><strong>Question 1.</strong> Is it true that there exists $\varepsilon_0&gt;0$ such that for any $0&lt;\varepsilon &lt;\varepsilon_0$ the intersection $K\cap \varepsilon S^{n-1}$ is contractible? Here $\varepsilon S^{n-1}$ is the sphere of radius $\varepsilon$ centered at $0$.</p> <p>If Question 1 has a positive answer I would like to generalize it a little bit. Under the above assumptions, assume in addition that a sequence $\{K_i\}$ of compact convex sets converges in the Hausdorff metric to $K$.</p> <p><strong>Question 2.</strong> Is it true that there exists $\varepsilon_0&gt;0$ such that for any $0&lt;\varepsilon &lt;\varepsilon_0$ the intersection $K_i\cap \varepsilon S^{n-1}$ is contractible for $i&gt;i(\varepsilon)$?</p> <p>A reference would be helpful.</p>
Mikhail Katz
28,128
<p>Given a convex set $K$ in Euclidean space and a point $O\in\partial K$, there exists an $\epsilon&gt;0$ such that for all $r&lt;\epsilon$ the intersection $S(O,r)\cap K$ is connected.</p> <p>To see this, let $O\in \partial K$. Call $P\in \partial K$ a <em>critical point</em> if $\langle O-P, X-P\rangle \geq 0$ for all $X\in K$. Note that if $P_1$ and $P_2$ are critical points with $\angle P_1 O P_2 \leq \frac{\pi}{3}$ then by the Pythagorean theorem $\frac{|OP_1|}{|OP_2|}\leq 2$. Therefore by a simple packing argument $$\epsilon=\inf_{P \; critical} |OP|&gt;0.$$ For every $C&gt;0$ smaller than this infimum, we show that the sphere $S_C$ of radius $C$ centered at $O$ has the property that $S_C\cap K$ is connected. Indeed, if points $X,Y$ were in different connected components of $S_C\cap K$, we would connect them by a path in $K$ and then "push out" the path away from $O$ using a flow in $K$, using the fact that the flow can only get stuck at a saddle point of $\partial K$ for the distance function, and a saddle point is necessarily a critical point. The construction of the flow in the absence of such Grove-Shiohama critical points was described in <a href="http://link.springer.com/article/10.1007/BF02187719" rel="nofollow">http://link.springer.com/article/10.1007/BF02187719</a>.</p> <p>For example, for an acute triangle in the plane, the optimal $\epsilon$ for a point on one of the sides will be the smaller of the two distances from $O$ to the remaining two sides. Every circle of radius smaller than $\epsilon$ will meet the triangle in a connected arc.</p>
606,356
<p>I would appreciate it if somebody could help me with the following problem.</p> <p>Q: The quartic equation $x^4+ax^3+bx^2+ax+1=0$ has four real roots $x=\frac{1}{\alpha^3},\frac{1}{\alpha},\alpha,\alpha^3\;(\alpha&gt;0)$, and $2a+b=14$.</p> <p>Find $a,b\in\mathbb{R}$.</p>
DonAntonio
31,254
<p>Hint:</p> <p>$$0=x^4+ax^3+bx^2+ax+1=(x-\alpha)(x-\alpha^3)\left(x-\frac1\alpha\right)\left(x-\frac1{\alpha^3}\right)$$</p> <p>Now compare coefficients on both sides (Vieta's formulas), for example:</p> <p>$$-a=\alpha+\alpha^3+\frac1\alpha+\frac1{\alpha^3}\;,\;\;b=\alpha^4+\frac1{\alpha^4}+\alpha^2+\frac1{\alpha^2}+2\;\ldots$$</p>
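One way to finish (my own sketch, not part of the hint): put $t=\alpha+\frac1\alpha$, so that $-a=t^3-2t$ and, with $u=t^2-2$, $b=u^2+u$; the condition $2a+b=14$ then becomes a polynomial equation in $t\ge 2$ that a simple bisection can solve:

```python
def g(t):
    a = -(t**3 - 2 * t)          # -a = alpha + alpha^3 + 1/alpha + 1/alpha^3 = t^3 - 2t
    u = t * t - 2                # u = alpha^2 + 1/alpha^2
    b = u * u + u                # b = alpha^4 + 1/alpha^4 + alpha^2 + 1/alpha^2 + 2
    return 2 * a + b - 14

# alpha > 0 real forces t = alpha + 1/alpha >= 2; bisect g on [2, 10]
lo, hi = 2.0, 10.0
for _ in range(200):
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid
t = (lo + hi) / 2

a = -(t**3 - 2 * t)
b = (t * t - 2) ** 2 + (t * t - 2)
assert abs(t - 3) < 1e-9 and abs(a + 21) < 1e-6 and abs(b - 56) < 1e-6
```

With $t=3$ the values check out against the constraint: $2(-21)+56=14$.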
881,159
<p>I'm very lost on the following problem and will appreciate your help very much.</p> <p>How large should $n$ be to guarantee that the Simpson's Rule approximation on the $\int_0^1 19e^{x^2} \, dx$ is accurate to within $0.0001$?</p>
RRL
148,510
<p>A bound on the error in using Simpson's rule with $n$ subintervals to approximate the integral of $f(x)$ over $[a,b]$ is</p> <p>$$E \leq \frac{(b-a)^5}{180n^4}\max_{a \leq x \leq b}|f^{iv}(x)|.$$</p> <p>Differentiating $f(x) = 19e^{x^2}$ four times, we have for $x \in [0,1]$</p> <p>$$f^{iv}(x) = 19e^{x^2}(12+48x^2+16x^4) \leq 1444e.$$</p> <p>We have $E &lt; 0.0001$ if</p> <p>$$ \frac{(b-a)^5}{180n^4}\max_{a \leq x \leq b}|f^{iv}(x)|= \frac{1444e}{180n^4}&lt; 0.0001.$$</p> <p>Thus $n \geq 22$ is required. </p>
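The arithmetic in this bound can be checked mechanically; a small Python sketch (my addition) that finds the smallest even $n$ for which the bound drops below the tolerance:

```python
from math import e

M = 1444 * e                       # max |f''''| on [0,1] for f(x) = 19*exp(x^2)

def bound(n):
    # Simpson's rule error bound on [0,1]: (b-a)^5 / (180 n^4) * max|f''''|
    return (1 - 0) ** 5 / (180 * n ** 4) * M

# smallest even n (Simpson's rule needs n even) with the bound below 1e-4
n = 2
while bound(n) >= 1e-4:
    n += 2
assert n == 22
assert bound(20) >= 1e-4 and bound(22) < 1e-4
```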
881,159
<p>I'm very lost on the following problem and will appreciate your help very much.</p> <p>How large should $n$ be to guarantee that the Simpson's Rule approximation on the $\int_0^1 19e^{x^2} \, dx$ is accurate to within $0.0001$?</p>
Ala Pawelek
481,449
<p>A bound on the error in using Simpson's rule with $n$ subintervals to approximate the integral of $f(x)$ over $[a,b]$ is</p> <p>$$E \leq \frac{(b-a)^5}{180n^4}\max_{a \leq x \leq b}|f^{iv}(x)|.$$</p> <p>Differentiating $f(x)=19e^{x^2}$ four times we have for $x\in [0,1]$</p> <p>$$f^{iv}(x)=38e^{x^2}(6+24x^2+8x^4)$$ </p> <p>plugging $1$ into $x$ we get $$38e(6+24+8) = 1444e$$</p> <p>and plugging this into the error formula we require </p> <p>$$\frac{1444e(1-0)^5}{180n^4} \leq 0.0001$$</p> <p>solving for $n$ we get $n \geq 21.61$ and since $n$ has to be even the answer is $n \geq 22$ </p>
1,181,631
<p>Let $f : \mathbb R \to \mathbb R$ be continuous. Prove that the graph $G = \{(x, f(x)) \mid x \in \mathbb R\}$ is closed.</p> <p>I'm a little confused on how to prove $G$ is closed. I get that the general strategy is to show that every convergent sequence in $G$ converges to a point in $G$.</p> <p>Here is what I tried so far:</p> <ol> <li>Let $x_k$ be a sequence which converges to $x$.</li> <li>Since $f$ is continuous, this implies that $f(x_k)$ converges to $f(x)$.</li> <li>At this point, can you say every $(x_k, f(x_k))$ converges to $(x, f(x))$, so $G$ is closed?</li> </ol>
Olórin
187,521
<p>Almost, but when learning topology, better try to understand what is purely topological and what is proper to metric spaces. Here, what you want to prove is not proper at all to metric spaces, nor to the fact that $\mathbf{R}$'s addition $(x,y)\mapsto x+y$ is continuous, but is <em>proper to continuous maps with Hausdorff target</em>. I will give a proof emphasizing this.</p> <p>So let's show that $G$ is closed in $\mathbf{R}\times \mathbf{R}$, and to do it, let's show that its complement is open. Let $(x,y)\in \mathbf{R}\times \mathbf{R} \backslash G$. Since $(x,y)\not\in G$, we have $y\not=f(x)$. As $\mathbf{R}$ is Hausdorff (being a metric space, yes) there are disjoint open sets $U$ and $V$ in $\mathbf{R}$ such that $y\in U$ and $f(x)\in V$. Finally, $f$ is continuous, so there is an open neighbourhood $W$ of $x$ such that $f(W)\subseteq V$. By definition of the product topology, $W\times U$ is an open neighbourhood of $(x,y)$ in $\mathbf{R}\times\mathbf{R}$, and this neighbourhood is disjoint from $G$. Indeed, let $(z,f(z))$ be any point of $G$. If $z\not\in W$, then clearly $(z,f(z))\not\in W\times U$. If $z\in W$, then $f(z)\in V$, so $f(z)\not\in U$, and therefore $(z,f(z))\not\in W\times U$. It follows that $(W\times U)\cap G=\varnothing$: our open neighbourhood $W\times U$ lies in the complement of $G$. We have just shown that every point of the complement of $G$ is in the interior of the complement of $G$, and this means that this complement is open.</p> <p><em>Remark 1.</em> Replacing the source $\mathbf{R}$ by any topological space $X$ and the target $\mathbf{R}$ by any Hausdorff topological space $Y$, the same proof as above shows that <em>any continuous map $f : X \to Y$ from a topological space to a Hausdorff topological space has a closed graph</em>.</p> <p><em>Remark 2.</em> There are counter-examples to the closedness of the graph if $Y$ is not Hausdorff. 
;-)</p> <p><em>Remark 3.</em> If you want to give a proof using the fact that $f$ continuous implies that the inverse image of a closed set in $Y$ is closed in $X$, proceed as follows: as $Y$ is Hausdorff, $\Delta = \{(y,y)\;|\;y\in Y\}$ is closed in $Y\times Y$ (same style of proof as the one I gave above), and now $G = (f\times \textrm{Id})^{-1}(\Delta)$ is closed, as $\Delta$ is and as $f\times \textrm{Id} : X\times Y\to Y\times Y$ defined by $(x,y)\mapsto (f(x),y)$ is continuous.</p>
76,753
<p>I am having a hard time trying to factor this binomial: the methods I have tried do not seem to work... ah well. $$4m^2-\frac{9}{25}.$$</p> <p>Thanks.</p>
Ross Millikan
1,827
<p>If you mean $4m^2-\frac{9}{25}$, note that the second term is $(\frac{3}{5})^2$ and you have a difference of squares.</p>
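Spelled out, the hint gives $4m^2-\frac{9}{25}=(2m)^2-\left(\frac35\right)^2=\left(2m-\frac35\right)\left(2m+\frac35\right)$. An exact-arithmetic spot check (not part of the original answer):

```python
from fractions import Fraction

c = Fraction(3, 5)
# difference of squares: 4m^2 - 9/25 = (2m - 3/5)(2m + 3/5)
for m in (Fraction(k, 7) for k in range(-10, 11)):
    assert 4*m**2 - Fraction(9, 25) == (2*m - c) * (2*m + c)
print("factorization verified on 21 sample points")
```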
1,132,003
<blockquote> <p><strong>Problem</strong> Find the value of $$\frac{1}{\sqrt 1 + \sqrt 3} + \frac 1 {\sqrt 3 + \sqrt 5} + \dots + \frac 1 {\sqrt {1087} + \sqrt{1089}}$$</p> </blockquote> <p>I can't figure out how to solve this problem. I can't use summation.</p>
Workaholic
201,168
<p><strong>Hint:</strong> $$\dfrac1{\sqrt{n}+\sqrt{n+2}}=\dfrac{\sqrt{n}-\sqrt{n+2}}{(\sqrt{n}+\sqrt{n+2})(\sqrt{n}-\sqrt{n+2})}=\dfrac{\sqrt{n}-\sqrt{n+2}}{n-n-2}=\ldots$$</p>
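Following the hint, each term equals $\frac{\sqrt{n+2}-\sqrt{n}}{2}$, so the sum telescopes to $\frac{\sqrt{1089}-\sqrt{1}}{2}=\frac{33-1}{2}=16$. A numerical cross-check (not part of the original answer):

```python
import math

# direct sum over n = 1, 3, ..., 1087 (544 terms)
s = sum(1 / (math.sqrt(n) + math.sqrt(n + 2)) for n in range(1, 1088, 2))
# telescoped closed form
t = (math.sqrt(1089) - math.sqrt(1)) / 2
print(s, t)  # both approximately 16.0
```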
1,356,545
<p>Given a fair 6-sided die, how can we simulate a biased coin with P(H)= 1/$\pi$ and P(T) = 1 - 1/$\pi$ ?</p>
Barry Cipra
86,747
<p>A fair die can obviously simulate a fair coin, so it suffices to show that a fair coin can simulate a biased one. There's a simple way to do so.</p> <p>Write $p$, the desired (biased) probability of getting Heads, in binary, with $.111\ldots$ if the probability is $1$. Now toss the fair coin until the result of the $n$th toss matches the $n$th binary digit of the desired probability, with Heads matching $1$ and Tails matching $0$. The probability that the concluding toss is Heads is $p$, so this procedure simulates the biased coin.</p> <p>Curiously, no matter what kind of bias you're trying to simulate, it takes, on average, just two tosses to carry out the simulation.</p> <p><strong>Added later</strong>: In case it's not clear that the simulation terminates at Heads with the target probability $p$, let me illustrate with an example.</p> <p>Suppose the target probability is $p=.001011\ldots$. Then the simulation will terminate with Heads as the first match if and only if the first match occurs either on the third toss or the fifth toss or the sixth toss or any subsequent occurrence of a $1$ in the binary expansion of $p$. These are mutually exclusive events, since the <em>first</em> match, by the meaning of "first," can only occur once. </p> <p>For the first match to occur on the third toss, the first two tosses must have been <em>mis</em>matches and the third toss a match. Since we're tossing a fair coin, this happens with probability </p> <p>$${1\over2}\cdot{1\over2}\cdot{1\over2}={1\over8}=.001$$ </p> <p>(in binary). Likewise, the probability that the first match occurs on the fifth toss is </p> <p>$${1\over2}\cdot{1\over2}\cdot{1\over2}\cdot{1\over2}\cdot{1\over2}={1\over32}=.00001$$ </p> <p>the probability it occurs on the sixth toss is </p> <p>$${1\over2}\cdot{1\over2}\cdot{1\over2}\cdot{1\over2}\cdot{1\over2}\cdot{1\over2}={1\over64}=.000001$$ </p> <p>and so forth. 
Thus the probability of getting Heads as the first match is the sum</p> <p>$$.001+.00001+.000001+\cdots=.001011\ldots=p$$</p>
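A sketch of the procedure in Python (my addition, not the answerer's; the fair coin itself could come from the die by, say, mapping odd/even faces to $0/1$). The digit function uses floating point, so it is only reliable for roughly the first $50$ digits of $1/\pi$, but the procedure reaches digit $n$ with probability $2^{-n}$, so this is harmless for a demonstration:

```python
import math
import random

def bit_of_inv_pi(n):
    # n-th binary digit of 1/pi after the point: floor(2^n / pi) mod 2
    return int(2**n / math.pi) % 2

def biased_flip(rng):
    # toss a fair coin until toss n matches digit n of p = 1/pi;
    # the matching toss (1 = Heads) is the simulated biased result
    n = 0
    while True:
        n += 1
        toss = 1 if rng.random() < 0.5 else 0
        if toss == bit_of_inv_pi(n):
            return toss

rng = random.Random(0)
trials = 200_000
rate = sum(biased_flip(rng) for _ in range(trials)) / trials
print(rate, 1 / math.pi)  # rate should be close to 0.3183
```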
1,887,536
<p>Howdy just a simple question,</p> <p>I know that when $A$ is diagonalizable, the eigenvalues of $F(A)$ are simply $F(\lambda_i)$ where $\lambda_i \in \sigma(A)$.</p> <p>I'm interested in the case when $A$ is not diagonalizable. I look at $A$ in Jordan form, but I cannot seem to show that when $A$ is not diagonalizable, the eigenvalues of $F(A)$ are $F(\lambda_i)$. I'm okay with no proof, I just want to know if the eigenvalues of $F(A)$ are $F(\lambda_i)$ when $A$ is not diagonalizable. </p>
egreg
62,967
<p>Assuming the coefficients are real (otherwise the problem would be underdetermined), you know that also $1-4i$ is a root and, from Viète's formulas, that $$ \begin{cases} -\dfrac{b}{a}=2+1+4i+1-4i=4 \\[6px] \dfrac{c}{a}=2(1+4i)+2(1-4i)+(1+4i)(1-4i)=21 \\[6px] -\dfrac{d}{a}=2(1+4i)(1-4i)=34 \end{cases} $$ Hence $b=-4a$, $c=21a$ and $d=-34a$. The last relation is $f(1)=-32$, so $$ a+b+c+d=-32 $$</p>
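A numeric cross-check (my addition; it takes $a=2$, the value forced by $f(1)=a+b+c+d=-16a=-32$ together with the relations above):

```python
a = 2
b, c, d = -4*a, 21*a, -34*a          # from the Viete relations above
assert a + b + c + d == -32          # f(1) = -32

# the roots 2, 1+4i, 1-4i all satisfy f(x) = a x^3 + b x^2 + c x + d
for x in (2, complex(1, 4), complex(1, -4)):
    assert abs(a*x**3 + b*x**2 + c*x + d) < 1e-9
print(a, b, c, d)  # 2 -8 42 -68
```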
365,631
<p>Suppose we want to prove that among some collection of things, at least one of them has some desirable property. Sometimes the easiest strategy is to equip the collection of all things with a measure, then show that the set of things with the desired property has positive measure. Examples of this strategy appear in many parts of mathematics.</p> <blockquote> <p><strong>What is your favourite example of a proof of this type?</strong></p> </blockquote> <p>Here are some examples:</p> <ul> <li><p><strong>The probabilistic method in combinatorics</strong> As I understand it, a typical pattern of argument is as follows. We have a set <span class="math-container">$X$</span> and want to show that at least one element of <span class="math-container">$X$</span> has property <span class="math-container">$P$</span>. We choose some function <span class="math-container">$f: X \to \{0, 1, \ldots\}$</span> such that <span class="math-container">$f(x) = 0$</span> iff <span class="math-container">$x$</span> satisfies <span class="math-container">$P$</span>, and we choose a probability measure on <span class="math-container">$X$</span>. Then we show that with respect to that measure, <span class="math-container">$\mathbb{E}(f) &lt; 1$</span>. It follows that <span class="math-container">$f^{-1}\{0\}$</span> has positive measure, and is therefore nonempty.</p> </li> <li><p><strong>Real analysis</strong> One example is <a href="http://www.artsci.kyushu-u.ac.jp/%7Essaito/eng/maths/Cauchy.pdf" rel="noreferrer">Banach's proof</a> that any measurable function <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> satisfying Cauchy's functional equation <span class="math-container">$f(x + y) = f(x) + f(y)$</span> is linear. 
Sketch: it's enough to show that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>, since then it follows from additivity that <span class="math-container">$f$</span> is continuous everywhere, which makes it easy. To show continuity at <span class="math-container">$0$</span>, let <span class="math-container">$\varepsilon &gt; 0$</span>. An argument using Lusin's theorem shows that for all sufficiently small <span class="math-container">$x$</span>, the set <span class="math-container">$\{y: |f(x + y) - f(y)| &lt; \varepsilon\}$</span> has positive Lebesgue measure. In particular, it's nonempty, and additivity then gives <span class="math-container">$|f(x)| &lt; \varepsilon$</span>.</p> <p>Another example is the existence of real numbers that are <a href="https://en.wikipedia.org/wiki/Normal_number" rel="noreferrer">normal</a> (i.e. normal to every base). It was shown that almost all real numbers have this property well before any specific number was shown to be normal.</p> </li> <li><p><strong>Set theory</strong> Here I take ultrafilters to be the notion of measure, an ultrafilter on a set <span class="math-container">$X$</span> being a finitely additive <span class="math-container">$\{0, 1\}$</span>-valued probability measure defined on the full <span class="math-container">$\sigma$</span>-algebra <span class="math-container">$P(X)$</span>. Some existence proofs work by proving that the subset of elements with the desired property has measure <span class="math-container">$1$</span> in the ultrafilter, and is therefore nonempty.</p> <p>One example is a proof that for every measurable cardinal <span class="math-container">$\kappa$</span>, there exists some inaccessible cardinal strictly smaller than it. Sketch: take a <span class="math-container">$\kappa$</span>-complete ultrafilter on <span class="math-container">$\kappa$</span>. 
Make an inspired choice of function <span class="math-container">$\kappa \to \{\text{cardinals } &lt; \kappa \}$</span>. Push the ultrafilter forwards along this function to give an ultrafilter on <span class="math-container">$\{\text{cardinals } &lt; \kappa\}$</span>. Then prove that the set of inaccessible cardinals <span class="math-container">$&lt; \kappa$</span> belongs to that ultrafilter (&quot;has measure <span class="math-container">$1$</span>&quot;) and conclude that, in particular, it's nonempty.</p> <p>(Although it has a similar flavour, I would <em>not</em> include in this list the cardinal arithmetic proof of the existence of transcendental real numbers, for two reasons. First, there's no measure in sight. Second -- contrary to popular belief -- this argument leads to an <em>explicit construction</em> of a transcendental number, whereas the other arguments on this list do not explicitly construct a thing with the desired properties.)</p> </li> </ul> <p>(Mathematicians being mathematicians, someone will probably observe that <em>any</em> existence proof can be presented as a proof in which the set of things with the required property has positive measure. Once you've got a thing with the property, just take the Dirac delta on it. But obviously I'm after less trivial examples.)</p> <p><strong>PS</strong> I'm aware of the earlier question <a href="https://mathoverflow.net/questions/34390">On proving that a certain set is not empty by proving that it is actually large</a>. That has some good answers, a couple of which could also be answers to my question. But my question is specifically focused on <em>positive measure</em>, and excludes things like the transcendental number argument or the Baire category theorem discussed there.</p>
Ian Agol
1,345
<p><a href="https://en.wikipedia.org/wiki/Sard%27s_theorem" rel="noreferrer">Sard's theorem</a> implies that the measure of the set of critical points of a smooth function <span class="math-container">$f:M_1\to M_2$</span> between smooth manifolds has measure zero. Hence the preimage <span class="math-container">$f^{-1}(x)$</span> of almost every point in <span class="math-container">$M_2$</span> is a smooth submanifold. This can be used, for example, to prove the existence of Morse functions. Following <a href="https://www.maths.ed.ac.uk/%7Ev1ranick/papers/milnmors.pdf" rel="noreferrer">Milnor's Morse Theory</a>, Section 6, one can embed <span class="math-container">$M$</span> into <span class="math-container">$\mathbb{R}^n$</span>. Then for almost all points in <span class="math-container">$\mathbb{R}^n$</span>, the distance map is a Morse function. This may be seen by applying Sard's theorem to the normal bundle. The set of focal points has measure zero, and corresponds to the points at which the distance function is degenerate.</p>
365,631
<p>Suppose we want to prove that among some collection of things, at least one of them has some desirable property. Sometimes the easiest strategy is to equip the collection of all things with a measure, then show that the set of things with the desired property has positive measure. Examples of this strategy appear in many parts of mathematics.</p> <blockquote> <p><strong>What is your favourite example of a proof of this type?</strong></p> </blockquote> <p>Here are some examples:</p> <ul> <li><p><strong>The probabilistic method in combinatorics</strong> As I understand it, a typical pattern of argument is as follows. We have a set <span class="math-container">$X$</span> and want to show that at least one element of <span class="math-container">$X$</span> has property <span class="math-container">$P$</span>. We choose some function <span class="math-container">$f: X \to \{0, 1, \ldots\}$</span> such that <span class="math-container">$f(x) = 0$</span> iff <span class="math-container">$x$</span> satisfies <span class="math-container">$P$</span>, and we choose a probability measure on <span class="math-container">$X$</span>. Then we show that with respect to that measure, <span class="math-container">$\mathbb{E}(f) &lt; 1$</span>. It follows that <span class="math-container">$f^{-1}\{0\}$</span> has positive measure, and is therefore nonempty.</p> </li> <li><p><strong>Real analysis</strong> One example is <a href="http://www.artsci.kyushu-u.ac.jp/%7Essaito/eng/maths/Cauchy.pdf" rel="noreferrer">Banach's proof</a> that any measurable function <span class="math-container">$f: \mathbb{R} \to \mathbb{R}$</span> satisfying Cauchy's functional equation <span class="math-container">$f(x + y) = f(x) + f(y)$</span> is linear. 
Sketch: it's enough to show that <span class="math-container">$f$</span> is continuous at <span class="math-container">$0$</span>, since then it follows from additivity that <span class="math-container">$f$</span> is continuous everywhere, which makes it easy. To show continuity at <span class="math-container">$0$</span>, let <span class="math-container">$\varepsilon &gt; 0$</span>. An argument using Lusin's theorem shows that for all sufficiently small <span class="math-container">$x$</span>, the set <span class="math-container">$\{y: |f(x + y) - f(y)| &lt; \varepsilon\}$</span> has positive Lebesgue measure. In particular, it's nonempty, and additivity then gives <span class="math-container">$|f(x)| &lt; \varepsilon$</span>.</p> <p>Another example is the existence of real numbers that are <a href="https://en.wikipedia.org/wiki/Normal_number" rel="noreferrer">normal</a> (i.e. normal to every base). It was shown that almost all real numbers have this property well before any specific number was shown to be normal.</p> </li> <li><p><strong>Set theory</strong> Here I take ultrafilters to be the notion of measure, an ultrafilter on a set <span class="math-container">$X$</span> being a finitely additive <span class="math-container">$\{0, 1\}$</span>-valued probability measure defined on the full <span class="math-container">$\sigma$</span>-algebra <span class="math-container">$P(X)$</span>. Some existence proofs work by proving that the subset of elements with the desired property has measure <span class="math-container">$1$</span> in the ultrafilter, and is therefore nonempty.</p> <p>One example is a proof that for every measurable cardinal <span class="math-container">$\kappa$</span>, there exists some inaccessible cardinal strictly smaller than it. Sketch: take a <span class="math-container">$\kappa$</span>-complete ultrafilter on <span class="math-container">$\kappa$</span>. 
Make an inspired choice of function <span class="math-container">$\kappa \to \{\text{cardinals } &lt; \kappa \}$</span>. Push the ultrafilter forwards along this function to give an ultrafilter on <span class="math-container">$\{\text{cardinals } &lt; \kappa\}$</span>. Then prove that the set of inaccessible cardinals <span class="math-container">$&lt; \kappa$</span> belongs to that ultrafilter (&quot;has measure <span class="math-container">$1$</span>&quot;) and conclude that, in particular, it's nonempty.</p> <p>(Although it has a similar flavour, I would <em>not</em> include in this list the cardinal arithmetic proof of the existence of transcendental real numbers, for two reasons. First, there's no measure in sight. Second -- contrary to popular belief -- this argument leads to an <em>explicit construction</em> of a transcendental number, whereas the other arguments on this list do not explicitly construct a thing with the desired properties.)</p> </li> </ul> <p>(Mathematicians being mathematicians, someone will probably observe that <em>any</em> existence proof can be presented as a proof in which the set of things with the required property has positive measure. Once you've got a thing with the property, just take the Dirac delta on it. But obviously I'm after less trivial examples.)</p> <p><strong>PS</strong> I'm aware of the earlier question <a href="https://mathoverflow.net/questions/34390">On proving that a certain set is not empty by proving that it is actually large</a>. That has some good answers, a couple of which could also be answers to my question. But my question is specifically focused on <em>positive measure</em>, and excludes things like the transcendental number argument or the Baire category theorem discussed there.</p>
Ian Agol
1,345
<p><a href="https://en.wikipedia.org/wiki/Jeremy_Kahn" rel="noreferrer">Kahn</a> and <a href="https://en.wikipedia.org/wiki/Vladimir_Markovic" rel="noreferrer">Markovic</a> showed the <a href="https://annals.math.princeton.edu/2012/175-3/p04" rel="noreferrer">existence of immersed essential surfaces in closed hyperbolic 3-manifolds</a>. The idea was to construct many immersed pants in the manifold using the frame flow. From the exponential mixing of the frame flow, they showed that the cuffs of the pants were equidistributed in a sufficiently uniform way so that they could use <a href="https://en.wikipedia.org/wiki/Hall%27s_marriage_theorem" rel="noreferrer">Hall's marriage theorem</a> to pair the cuffs in a way that created a closed nearly geodesic (and hence essential) surface. They used similar ideas to resolve the <a href="https://en.wikipedia.org/wiki/Ehrenpreis_conjecture" rel="noreferrer">Ehrenpreis conjecture</a>, although the proof was more subtle since they couldn't use Hall's marriage theorem.</p>
438,336
<p>This is a two part question:</p> <p>$1$: If three cards are selected at random without replacement from a deck of $52$ cards, what is the probability that all three are Kings?</p> <p>$2$: Can you please explain to me in layman's terms what the difference is between with and without replacement?</p> <p>Thanks guys!</p>
ZZ7474
210,085
<p>4/52*3/51*2/50 =(24/13260)or(1/5525)same as (0.0018)remember when there's an "or" put a cross over the letter "O" &amp; remember to add rather than mutliply.When there's a plus sign between remember to multiply.</p>
3,068,031
<blockquote> <p>Let <span class="math-container">$G$</span> be a group and <span class="math-container">$H$</span> be a subgroup of <span class="math-container">$G$</span>. Let also <span class="math-container">$a,~b\in G$</span> such that <span class="math-container">$ab\in H$</span>.</p> <p>True or false? <span class="math-container">$a^2b^2\in H.$</span></p> </blockquote> <p><em>Attempt.</em> I believe the answer is no (i have proved that the statement is true for normal subgroups, but it seems that there is no need to hold for arbitrary subgroups). I was looking for a counterexample in a non abelian group of small order, such as <span class="math-container">$S_3$</span>, or <span class="math-container">$S_4$</span>, but i couldn't find a suitable combination of <span class="math-container">$H\leq S_n$</span>, <span class="math-container">$\sigma$</span> and <span class="math-container">$\tau\in S_n$</span> such that <span class="math-container">$\sigma \tau \in H$</span> and <span class="math-container">$\sigma^2 \tau^2 \notin H.$</span></p> <p>Thanks in advance for the help.</p>
John Douma
69,810
<p>Consider <span class="math-container">$S_3$</span>.</p> <p>Let <span class="math-container">$a=(1 2 3)$</span> and <span class="math-container">$b=(2 3)$</span>. Then <span class="math-container">$ab=(1 2)$</span> and <span class="math-container">$a^2b^2=(1 3 2)$</span> </p> <p>Let <span class="math-container">$H=\{1, ab\}$</span>. Then <span class="math-container">$ab\in H$</span> but <span class="math-container">$a^2b^2\not\in H$</span></p>
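The counterexample is small enough to check mechanically; here permutations on $\{1,2,3\}$ are encoded as dicts (my addition, not part of the answer):

```python
# compose(f, g) = "f after g"
def compose(f, g):
    return {x: f[g[x]] for x in g}

e = {1: 1, 2: 2, 3: 3}
a = {1: 2, 2: 3, 3: 1}        # the 3-cycle (1 2 3)
b = {1: 1, 2: 3, 3: 2}        # the transposition (2 3)

ab = compose(a, b)            # = (1 2)
H = [e, ab]                   # the order-2 subgroup {1, ab}
a2b2 = compose(compose(a, a), compose(b, b))   # = (1 3 2), since b^2 = e

print(ab in H)     # True
print(a2b2 in H)   # False
```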
1,572,351
<p>Solve the differential equation;</p> <p>$(xdx+ydy)=x(xdy-ydx)$</p> <p>L.H.S. can be written as $\frac{d(x^2+y^2)}{2}$ but what should be done for R.H.S.?</p>
achille hui
59,379
<p>In polar coordinates $(r,\theta)$, $xdx + ydy = rdr$ and $xdy- ydx = r^2d\theta$. The equation at hand becomes $$rdr = r^3\cos\theta d\theta \iff \frac{1}{r^2} dr = \cos\theta d\theta \iff d\left(\frac{1}{r} + \sin\theta\right) = 0\\ \iff \frac{1+y}{r} = K \iff (1+y)^2 = K^2(x^2+y^2) $$ for some constant $K$.</p>
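A finite-difference sanity check of the result (my addition): parametrize the solution curve $\frac1r+\sin\theta=K$ by $\theta$ and verify the original relation $x\,dx+y\,dy=x(x\,dy-y\,dx)$ along it:

```python
import math

K = 2.0  # an arbitrary constant of integration

def point(t):
    # from 1/r + sin(theta) = K, with theta = t
    r = 1.0 / (K - math.sin(t))
    return r * math.cos(t), r * math.sin(t)

def residual(t, h=1e-6):
    # x dx + y dy - x (x dy - y dx) with finite-difference increments
    x, y = point(t)
    x2, y2 = point(t + h)
    dx, dy = x2 - x, y2 - y
    return (x*dx + y*dy) - x*(x*dy - y*dx)

for t in (0.3, 1.0, 2.5):
    assert abs(residual(t)) < 1e-9
print("the solution curve satisfies the ODE to first order")
```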
1,572,351
<p>Solve the differential equation;</p> <p>$(xdx+ydy)=x(xdy-ydx)$</p> <p>L.H.S. can be written as $\frac{d(x^2+y^2)}{2}$ but what should be done for R.H.S.?</p>
azc
166,613
<p>I think that there is actually a general solution to a general class of ODEs. So I give here its general form and its solution. After that, the solution to the ODE in the thread above is also given. Consider the following ODE \begin{gather*} A(x,y)(xd x+yd y)=B(x,y)(xd y-yd x)\tag{1} \end{gather*} where $A$ and $B$ are homogeneous functions of degree $\alpha$ and $\beta$ respectively, that is to say, for all $t&gt;0,$ we have \begin{gather*} A(tx,ty)=t^{\alpha}A(x,y),\\ B(tx,ty)=t^{\beta}B(x,y). \end{gather*} We claim that for $x&gt;0,$ ODE (1) can be transformed into a separable ODE, by a suitable change of variables. For the case $x&lt;0,$ a similar argument works. </p> <p>Since (1) can be manipulated as \begin{align*} &amp;A(x,y)(xd x+yd y)=B(x,y)(xd y-yd x) \\ \iff &amp; 2A(x,y)(xd x+yd y)=2B(x,y)(xd y-yd x) \\ \iff &amp; A(x,y)d (x^2+y^2)=2B(x,y)x^2d \left(\frac{y}{x}\right),\tag{2} \end{align*} introduce the change of variables \begin{align*} x^2+y^2=&amp;u, \\ \frac{y}{x}=&amp;v, \end{align*} and then \begin{gather*} 1+v^2=1+\frac{y^2}{x^2}=\frac{x^2+y^2}{x^2}=\frac{u}{x^2}, \end{gather*} which leads to, in the case of $x&gt;0,$ \begin{gather*} x=\sqrt{\frac{u}{1+v^2}},\tag{3}\\ y=xv=v\sqrt{\frac{u}{1+v^2}}. \tag{4} \end{gather*} By the homogeneity condition and (4) we have \begin{gather*} A(x,y)=A(x,xv)=x^{\alpha}A(1,v),\\ B(x,y)=B(x,xv)=x^{\beta}B(1,v). 
\end{gather*} Thus (2) turns out to be \begin{align*} &amp;\quad x^{\alpha} A(1,v)d u=2x^{\beta+2}B(1,v)d v \\ &amp;\iff x^{\alpha-\beta-2}A(1,v)d u=2B(1,v)d v\\ &amp;\iff\frac{u^{\frac{\alpha-\beta}{2}-1}}{(1+v^2)^{\frac{\alpha-\beta}{2}-1}} A(1,v)d u=2B(1,v)d v\qquad \text{(by (3))}\\ &amp;\iff u^{\frac{\alpha-\beta}{2}-1}d u=\frac{2B(1,v)(1+v^2)^{\frac{\alpha-\beta}{2}-1}}{A(1,v)}d v.\tag{5} \end{align*} It is apparent that (5) is separable.</p> <p>Now consider the particular case \begin{gather*} x d x+yd y=x(xd y-yd x).\tag{6} \end{gather*} This corresponds to $A(x,y)=1, B(x,y)=x,$ and so the degrees are $\alpha=0, \beta=1.$ Thus, under the change of variables (3) and (4) we arrive at \begin{gather*} u^{-3/2}d u=2(1+v^2)^{-3/2}d v, \end{gather*} whose solution is \begin{gather*} \frac{1}{\sqrt{u}}+\frac{v}{\sqrt{1+v^2}}=C. \end{gather*} Reverting to the original variables, we find that the solution to (6) is, in the case of $x&gt;0,$ \begin{gather*} \frac{1+y}{\sqrt{x^2+y^2}}=C. \end{gather*}</p>
3,110,508
<p>I read that an implication like a=>b can be proved using the following steps: 1) suppose a is true; 2) deduce b from a; 3) conclude that a=>b is true.</p> <p>My real problem is to understand why steps 1 and 2 are sufficient to prove that a=>b is true. I mean, how can you prove the truth table of a=>b just using 1 and 2? I know that the implication a=>b is actually defined as "not(a) or b". How can steps 1 and 2 prove that a and b are related like "not(a) or b"?</p>
Dave
334,366
<p>In this case of <span class="math-container">$n=3$</span> your result follows from this other theorem by considering <span class="math-container">$x^3+y^3=x^3-(-y)^3$</span>. The pattern with odd <span class="math-container">$n$</span> is that <span class="math-container">$-y^n=(-y)^n$</span>.</p>
3,110,508
<p>I read that an implication like a=>b can be proved using the following steps: 1) suppose a is true; 2) deduce b from a; 3) conclude that a=>b is true.</p> <p>My real problem is to understand why steps 1 and 2 are sufficient to prove that a=>b is true. I mean, how can you prove the truth table of a=>b just using 1 and 2? I know that the implication a=>b is actually defined as "not(a) or b". How can steps 1 and 2 prove that a and b are related like "not(a) or b"?</p>
nonuser
463,553
<p><span class="math-container">$$x^3+y^3 = x^3+\color{red}{x^2y-x^2y}+y^3 $$</span> <span class="math-container">$$ = x^2(x+y)-y(x^2-y^2)$$</span> <span class="math-container">$$ = x^2(x+y)-y(x-y)(x+y)$$</span> <span class="math-container">$$ = (x+y)(x^2-y(x-y))$$</span> <span class="math-container">$$ = (x+y)(x^2-xy+y^2)$$</span></p> <p>We can proceed similarly for arbitrary odd <span class="math-container">$n$</span>. </p>
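The "arbitrary odd $n$" case yields the identity $x^n+y^n=(x+y)\sum_{k=0}^{n-1}(-1)^k x^{\,n-1-k}y^k$; a brute-force check over small integers (my addition, not part of the answer):

```python
def cofactor(x, y, n):
    # sum_{k=0}^{n-1} (-1)^k x^(n-1-k) y^k
    return sum((-1)**k * x**(n - 1 - k) * y**k for k in range(n))

for n in (3, 5, 7):
    for x in range(-3, 4):
        for y in range(-3, 4):
            assert x**n + y**n == (x + y) * cofactor(x, y, n)
print("identity verified for n = 3, 5, 7")
```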
3,405,914
<blockquote> <p>It's known that <span class="math-container">$\lim_{n \to \infty} \left(1 + \frac{x}{n} \right)^n = e^x$</span>.</p> <p>Using the above statement, prove <span class="math-container">$\lim_{n \to \infty} \left(\frac{3n-2}{3n+1}\right)^{2n} = \frac{1}{e^2}$</span>.</p> </blockquote> <h2>My attempt</h2> <p>Obviously, we want to reach a statement such as <span class="math-container">$$\lim_{n \to \infty} \left(1 + \frac{-2}{n}\right)^n \quad \text{ or } \quad \lim_{n \to \infty} \left(1 + \frac{-1}{n}\right)^n \cdot \lim_{n \to \infty} \left(1 + \frac{-1}{n}\right)^n$$</span> in order to be able to apply the above condition. However, I was unable to achieve this. The furthest I've gotten is the following:</p> <p><span class="math-container">\begin{align} \left(\frac{3n-2}{3n+1}\right)^{2n} &amp;= \left( \frac{9n^2 - 12n + 4}{9n^2 + 6n + 1} \right)^n\\ &amp;= \left(1 + \frac{-18n+3}{9n^2+6n+1}\right)^n\\ f(n)&amp;= \left(1 + \frac{-2 + \frac{1}{3n}}{n+\frac{2}{3} + \frac{1}{9n}} \right)^n \end{align}</span></p> <p>It seems quite obvious that for large <span class="math-container">$n$</span>, <span class="math-container">$f(n)$</span> behaves like <span class="math-container">$\left(1 + \frac{-2}{n + \frac{2}{3}}\right)^n$</span>, however, this is not exactly equal to the statement given above. Are you able to ignore the constant and apply the condition regardless? If so, why? How would you go about solving this problem?</p>
Community
-1
<p>Let <span class="math-container">$m:=3n+1$</span>. We have</p> <p><span class="math-container">$$\left(\frac{3n-2}{3n+1}\right)^{2n}=\left(1-\frac3m\right)^{2(m-1)/3}=\left(\left(1-\frac3m\right)^m\right)^{2/3}\left(1-\frac3m\right)^{-2/3}.$$</span></p> <p>Hence the limit is <span class="math-container">$e^{-3\cdot2/3}\cdot1$</span>.</p>
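A quick numerical confirmation of the limit (not part of the original answer):

```python
import math

def f(n):
    return ((3*n - 2) / (3*n + 1)) ** (2 * n)

for n in (10, 1000, 100000):
    print(n, f(n))
print("e^-2 =", math.exp(-2))  # about 0.13534
```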
2,933,375
<p>I have a set of vectors <span class="math-container">$M_1$</span>, defined as follows: <span class="math-container">$$M_1:=[\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}, \begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}]$$</span> I have to show that <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>, even though it's linearly independent. My initial idea was that, because <span class="math-container">$$\begin{pmatrix}1 \\ 0 \\ 0 \end{pmatrix}≠a\cdot\begin{pmatrix}1 \\ 0 \\ 1 \end{pmatrix}+ b \cdot\begin{pmatrix}0 \\ 1 \\ 1 \end{pmatrix}$$</span> for every choice of <span class="math-container">$a,b\in\mathbb R$</span>, therefore <span class="math-container">$M_1$</span> isn't a generating set of <span class="math-container">$\mathbb R^3$</span>. However, I would like to know if there is any other way to show that <span class="math-container">$M_1$</span> is not a generating set of <span class="math-container">$\mathbb R^3$</span>.</p>
Scientifica
164,983
<p>In Enderton's <em>Elements of Set Theory</em>, the axioms of ZFC are presented in first-order logic as follows:<a href="https://i.stack.imgur.com/ZPefC.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/ZPefC.png" alt="enter image description here"></a></p> <p>There's neither ':' nor ','. Also, if you look at how first-order formulas are defined, you'll see no mention of ':' or ','.</p> <p>So I don't think there's a <strong>standard</strong> thing or rule to follow about these kinds of things when you're doing mathematics in general. See what others use, and use it similarly, in a way that will be clearly understood.</p> <p><strong>Edit:</strong> This kind of question may be relevant when you're doing things related to mathematical logic or computer science.</p>
1,196,424
<p>So I'm reviewing my notes and I just realized that I can't think of how to show that a particular integer mod group is abelian. I know how to do it with symmetric but not with integers themselves.</p> <p>For example, lets say I was asked to show $\mathbb{Z_5}$ is abelian.</p> <p>I know for symmetric groups, lets say $S_5$ i can pick two elements in $S_5$, for example (123),(23) and if (123)(23)=(23)(123) then I know it is abelian but how would I go about the integers?</p>
Qudit
210,368
<p>Note that symmetric groups are <em>not</em> Abelian unless $n &lt; 3$. See my answer <a href="https://math.stackexchange.com/questions/1186094/for-what-values-n-is-the-group-s-n-cyclic/1186099#1186099">here</a> for a proof.</p> <p>As for how to see that $\Bbb{Z}_n$ is Abelian, note that the group $\Bbb{Z}$ is Abelian. Therefore, any quotient group of $\Bbb{Z}$ by a subgroup is also Abelian. Since $\Bbb{Z}_n \cong \Bbb{Z} / n \Bbb{Z}$, we are done.</p>
1,196,424
<p>So I'm reviewing my notes and I just realized that I can't think of how to show that a particular integer mod group is abelian. I know how to do it with symmetric groups but not with the integers themselves.</p> <p>For example, let's say I was asked to show $\mathbb{Z_5}$ is abelian.</p> <p>I know for symmetric groups, let's say $S_5$, I can pick two elements in $S_5$, for example $(123),(23)$, and if $(123)(23)=(23)(123)$ then I know it is abelian, but how would I go about the integers?</p>
Asinomás
33,907
<p>The symmetric group $S_n$ is not abelian for $n\geq 3$ since $(12)(123)=(23)$ and $(123)(12)=(13)$.</p> <p>On the other hand any cyclic group is abelian. This is because in reality every element $m$ can be thought of as $\underbrace{1+1+1\dots+1}_{\text{m times}}$.</p> <p>And so</p> <p>$m+n=(\underbrace{1+1+1\dots+1}_{\text{m times}})+(\underbrace{1+1+1\dots+1}_{\text{n times}})=(\underbrace{1+1+1\dots+1}_{\text{n times}})+(\underbrace{1+1+1\dots+1}_{\text{m times}})=n+m$</p> <p>Where the equality in the middle is just the associativity rule.</p>
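A quick computational illustration of both facts (my own sketch, not part of the answer): check commutativity of $\mathbb{Z}_5$ exhaustively, and exhibit non-commuting elements of $S_3$.

```python
from itertools import permutations, product

# Z_5: addition mod 5 -- check that every pair commutes
assert all((a + b) % 5 == (b + a) % 5 for a, b in product(range(5), repeat=2))

# S_3: permutations of {0,1,2} as tuples, composed right-to-left
def compose(p, q):
    return tuple(p[q[i]] for i in range(3))

s3 = list(permutations(range(3)))
bad = [(p, q) for p, q in product(s3, repeat=2) if compose(p, q) != compose(q, p)]
assert bad  # S_3 is not abelian: some pairs fail to commute
```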
834,678
<p>An object $X$ is a <em>generator</em> of a category $\mathcal{C}$ if the functor $Hom_{\mathcal{C}}(X,\_) : \mathcal{C} \rightarrow Set$ is faithful. </p> <p>I encountered the notion in the context of Morita-equivalence of rings, but I don't understand what its use is. Why is $X$ called a "generator"? What does it generate? </p>
Martin Brandenburg
1,650
<p>Generators don't generate the category. This explains why "generator" is an unfortunate terminology. A better one would be "separator". I've seen this suggestion in many places. This makes sense, because every two non-equal morphisms $f,g : A \to B$ may be separated by a morphism $i : X \to A$, i.e. $fi \neq gi$.</p> <p>However, assume that our category has coproducts. Then, if $A$ is any object, then there is a canonical morphism $\bigoplus_{f \in \hom(X,A)} X \to A$ and this is an epimorphism - exactly because $X$ is a generator. This looks more like a generating set (remember that for example an $R$-module $M$ is generated by a subset $S$ if and only if $\bigoplus_{s \in S} R \to M$ is an epimorphism).</p>
2,050,760
<p>The question:</p> <blockquote> <p>Find a recurrence for the number of $n$-length ternary strings that contain "00", "11", or "22".</p> </blockquote> <p>My answer:</p> <p>$a_n = 3(a_{n-2}) + 3(a_{n-1} - 1)$</p> <p>Proof:</p> <p>Cases ($\ldots$ stands for an arbitrary prefix):</p> <ul> <li>$\ldots00$ &nbsp; ($a_{n-2}$)</li> <li>$\ldots11$ &nbsp; ($a_{n-2}$)</li> <li>$\ldots22$ &nbsp; ($a_{n-2}$)</li> <li>$\ldots0$ &nbsp; ($a_{n-1} - 1$)</li> <li>$\ldots1$ &nbsp; ($a_{n-1} - 1$)</li> <li>$\ldots2$ &nbsp; ($a_{n-1} - 1$) (subtract the case in which it ends with 00, 11, or 22)</li> </ul>
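A brute-force count (my own sketch, not part of the post) is useful for sanity-checking any candidate recurrence; it also matches the closed form $3^n - 3\cdot 2^{n-1}$, i.e. all strings minus the double-free ones:

```python
from itertools import product

def contains_double(s):
    # adjacent equal characters: "00", "11", or "22"
    return any(s[i] == s[i + 1] for i in range(len(s) - 1))

def a(n):
    # count over all 3^n ternary strings
    return sum(contains_double(s) for s in product("012", repeat=n))

counts = [a(n) for n in range(1, 7)]
print(counts)  # [0, 3, 15, 57, 195, 633]
assert all(a(n) == 3**n - 3 * 2**(n - 1) for n in range(1, 7))
```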
Jean Marie
305,862
<p>I would like to add something to the answer given by @Robert Israel.</p> <p>The set $U$ of matrices</p> <p>$$ U_c=\begin{pmatrix}1 &amp; c \\ 0 &amp; 1\end{pmatrix}$$</p> <p>is a subgroup of $GL(2,\mathbb{K})$ (for multiplication, of course).</p> <p>The mapping</p> <p>$$f: U \rightarrow \mathbb{K}, \ \ U_c \mapsto c$$</p> <p>is a group isomorphism onto the additive group $(\mathbb{K},+)$ because it is bijective and</p> <p>$$\tag{1}U_{c}U_{c'}=U_{c+c'}.$$</p> <p>You may have noticed a similarity with the correspondence between addition and multiplication that is usually found in exponentiation. This shouldn't come as a surprise: there is a linear algebra concept called the <strong>matrix exponential</strong>.</p> <p>If $V_c=\begin{pmatrix}0 &amp; c \\ 0 &amp; 0\end{pmatrix}$ then $\exp(V_c)=\begin{pmatrix}1 &amp; c \\ 0 &amp; 1\end{pmatrix}$. In this way, property $(1)$ can be "trivially" written:</p> <p>$$\tag{2}\exp(V_c)\exp(V_{c'})=\exp(V_{c+c'})$$</p>
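Since $V_c$ is nilpotent ($V_c^2=0$), the exponential series truncates to $I+V_c$, and property $(1)$ can be checked mechanically; a small plain-Python sketch (mine, not from the answer):

```python
def mat_mul(A, B):
    # product of two 2x2 matrices given as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def U(c):
    # exp(V_c) = I + V_c because V_c is nilpotent (V_c^2 = 0)
    return [[1, c], [0, 1]]

# U_c U_c' = U_{c+c'}: matrix multiplication realizes addition of parameters
assert mat_mul(U(2), U(5)) == U(7)
assert mat_mul(U(-3), U(3)) == U(0)
```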
3,207,767
<p>What is the general solution of the differential equation <span class="math-container">$y\frac{d^{2}y}{dx^2} - (\frac{dy}{dx})^2 = y^2 \log(y)$</span>?</p> <p>The answer to this DE is <span class="math-container">$\log(y) = c_1 e^x + c_2 e^{-x}$</span>.</p> <p>I don't know the method to solve nonlinear differential equations like this one. Please tell me how to solve these types of equations.</p>
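One standard route (an observation of mine, not from the post): the combination $\frac{yy''-(y')^2}{y^2}$ is exactly $(\log y)''$, so the substitution $u=\log y$ linearizes the equation:

```latex
u = \log y \quad\Longrightarrow\quad
u' = \frac{y'}{y}, \qquad
u'' = \frac{y''}{y} - \left(\frac{y'}{y}\right)^{2}
    = \frac{y\,y'' - (y')^{2}}{y^{2}}.
```

Dividing the equation by $y^2$ then gives $u'' = u$, a linear constant-coefficient equation with general solution $u = c_1 e^x + c_2 e^{-x}$, i.e. $\log(y) = c_1 e^x + c_2 e^{-x}$, matching the stated answer.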
user120123
120,123
<p>With thanks to Michael for pointing out this beautiful topic. For proving <span class="math-container">$$(x^2+3)(y^2+3)(z^2+3)(t^2+3)\geq16(x+y+z+t)^2,$$</span> I have used the following result discovered by me (writing <span class="math-container">$(x_1,x_2,x_3,x_4)=(x,y,z,t)$</span>): <span class="math-container">$$(x^2+3)(y^2+3)(z^2+3)(t^2+3)\geq16(x+y+z+t)^2+$$</span> <span class="math-container">$$+9\sum_{1\leq i&lt;j\leq4}{(x_ix_j-1)^2}+3\sum_{1\leq i&lt;j&lt;k\leq4}{(x_ix_jx_k-1)^2}+(x_1x_2x_3x_4-1)^2.$$</span></p>
791,535
<blockquote> <p>Let $f,g:[a,b] \to [a,b]$ be monotonically increasing functions such that $f\circ g=g\circ f$.</p> <p>Prove that $f$ and $g$ have a common fixed point.</p> </blockquote> <p>I found this problem in a problem set; it's quite similar to this one: <a href="https://math.stackexchange.com/questions/791411/real-analysis-homework-fixed-point-ordered-sets-hint/791470?noredirect=1#comment1638923_791470">Every increasing function from a certain set to itself has at least one fixed point</a>, but I can't solve it.</p> <p>I think it's one of those tricky problems where you need to consider a given set and use the LUB property... I think $\{x \in [a,b] \mid x &lt; f(x) \text{ and } x &lt; g(x) \}$ is a good one.</p> <p>Any hint?</p>
Gabriel Romon
66,096
<p>Let $A=\{x \in [a,b] \mid x \leq f(x) \; \text{and} \; x \leq g(x) \}$.</p> <ul> <li><p>$a\in A$, so $A$ is nonempty.</p></li> <li><p>Let $u=\sup A$.</p></li> <li><p>Let us prove that $f(u)$ and $g(u)$ are upper bounds for $A$.</p></li> </ul> <p>Indeed, let $x\in A$.</p> <p>Then $x\leq u$. Hence $f(x) \leq f(u)$, thus $x\leq f(x) \leq f(u)$ and finally $x\leq f(u)$.</p> <p>In the same way, $x\leq g(u)$.</p> <ul> <li><p>Therefore, by the LUB definition, $\color{red}{ u\leq f(u)}$ and $\color{red}{ u\leq g(u)}$</p></li> <li><p>Then, applying $f$ to $u\leq g(u)$ gives $f(u) \leq f(g(u))=g(f(u))$, so $f(u)\leq g(f(u))$.</p></li> <li><p>In the same way, applying $f$ to $u\leq f(u)$ gives $f(u)\leq f(f(u))$.</p></li> </ul> <p>Therefore, $f(u) \in A$ (by the inequalities in the last two bulleted points).</p> <p>Similarly, $g(u) \in A$.</p> <ul> <li>By the LUB definition, $\color{red}{ f(u) \leq u}$ and $\color{red}{ g(u) \leq u}$.</li> </ul> <p>$u$ is a common fixed point.</p>
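A tiny numeric illustration of the argument (my own, not part of the answer): take $f(x)=x^2$ and $g(x)=x^4$ on $[0,1]$; both are increasing and they commute, since each composite equals $x^8$.

```python
f = lambda x: x ** 2
g = lambda x: x ** 4

grid = [i / 1000 for i in range(1001)]            # discretized [0, 1]
A = [x for x in grid if x <= f(x) and x <= g(x)]  # the set A from the proof
u = max(A)                                        # stand-in for sup A

assert u == 1.0
assert f(u) == u == g(u)  # common fixed point
```

On $[0,1]$ one has $x\le x^2$ only at $0$ and $1$, so $\sup A = 1$, which is indeed fixed by both maps.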
393,712
<p>I studied elementary probability theory. For that, density functions were enough. What is the practical necessity of developing measure theory? What is a problem that cannot be solved using elementary density functions?</p>
not all wrong
37,268
<p>Simple answer: Tossing a coin.</p> <p>Longer answer: You know that you treat discrete events like the above with probability mass functions or similar, but continuous things with probability density functions. Imagine you had $X$ which is uniformly distributed on $[0,1]$ half the time and $5$ the rest of the time. Perfectly reasonable thing, could easily come up. Doesn't fit into either framework.</p> <p>Measure theory provides a consistent language and mathematical framework unifying these ideas, and indeed much more general objects in stochastic theory. It removes any necessity to distinguish between fundamentally similar objects, and crystallizes the relevant points out, allowing much deeper understanding of the theory.</p>
1,684,095
<p>How can I evaluate the following series.</p> <p>$$\sum_{k=1}^{\infty}\frac{1}{(k+1)(k-1)!}.$$</p>
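For what it's worth (my own note, not an answer from the thread): the series telescopes, since $\frac{1}{(k+1)(k-1)!}=\frac{k}{(k+1)!}=\frac{1}{k!}-\frac{1}{(k+1)!}$, so the $n$-th partial sum is $1-\frac{1}{(n+1)!}$ and the sum equals $1$. A quick numeric check:

```python
from math import factorial, isclose

n = 19
partial = sum(1 / ((k + 1) * factorial(k - 1)) for k in range(1, n + 1))

# telescoping predicts the partial sum exactly: 1 - 1/(n+1)!
assert isclose(partial, 1 - 1 / factorial(n + 1))
```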
Hanul Jeon
53,976
<p>Consider the class $C = \{(\alpha, n) : \alpha\in\mathrm{On}\text{ and } n=0, 1\}$ and define an ordering $\prec$ over $C$ as the lexicographical order.</p> <p>Our $C$ is well-ordered: let $B\subseteq C$ be a nonempty subclass. If the intersection of $B$ and the initial segment $S=\{x\in C : x\prec (0,1)\}$ is not empty, then $B$ has a minimal element, since the initial segment is isomorphic to $\mathrm{On}$. Otherwise $B$ is a subclass of $\{x\in C : (0,1)\preceq x\}$, which is also isomorphic to $\mathrm{On}$, so $B$ has a minimal element.</p> <p>You can see that the order-type of $C$ is $\mathrm{On+On}$ and it is not isomorphic to $\mathrm{On}$ with the usual order. A more direct proof of $C\not\cong \mathrm{On}$ comes from examining possible sizes of initial segments: $C$ has a proper-class-sized initial segment whereas $\mathrm{On}$ does not.</p> <p>However, unlike $\mathrm{On}$, no proper class with the membership relation is isomorphic to $C$. This is because if $(B,\in)$ is a well-ordered proper class then its initial segments must be sets.</p>
3,747,453
<p>Isn't it wrong to write the following with only the percent sign? Instead of <span class="math-container">$100 \%$</span>?</p> <blockquote> <p>The change in height as a percentage is <span class="math-container">$$ \frac{a - b}{a} \% \tag 1 $$</span> where <span class="math-container">$a$</span> is the initial height and <span class="math-container">$b$</span> is the final height.</p> </blockquote> <p>Because if <span class="math-container">$a=10$</span>, <span class="math-container">$b=5$</span> we have <span class="math-container">$$ \frac{10-5}{10}\%=\frac{1}{2} \% = 0.5 \frac{1}{100} = 0.005 \quad \text{what?!} \tag 2 $$</span></p> <p>If we convert a decimal number to percent we multiply it by <span class="math-container">$100$</span> and add the percent sign. We have <span class="math-container">$1\%=\frac{1}{100}$</span>, so with <span class="math-container">$100 \%$</span> we multiply the number by <span class="math-container">$1$</span>, i.e. <span class="math-container">\begin{align} \frac{10-5}{10} \cdot \color{blue}{1} &amp;= \frac{10-5}{10} \cdot \color{blue}{100 \%} = \frac{5}{10} \cdot \color{blue}{100 \frac{1}{100}} =\frac{1}{2} \cdot \color{blue}{100 \frac{1}{100}} \tag 3 \\ &amp;=0.5\cdot \color{blue}{100 \frac{1}{100}} =50 \color{blue}{\frac{1}{100}} = 50 \color{blue}{\%} \tag 4 \end{align}</span> So, shouldn't we instead write <span class="math-container">$(1)$</span> as <span class="math-container">$$ \frac{a - b}{a} 100 \% \quad ? \tag 5 $$</span></p>
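The same point in code (my own sketch): the dimensionless ratio must be scaled by $100$ before the percent sign is attached.

```python
a, b = 10, 5

ratio = (a - b) / a          # 0.5, a pure number
change_pct = ratio * 100     # 50, to be read as "50 %"

assert ratio == 0.5
assert change_pct == 50.0
```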
dan_fulea
550,003
<p>Consider the composition of ring morphisms <span class="math-container">$\Bbb Z\to \Bbb Z[X]\to \Bbb Z$</span>; the first morphism is the inclusion <span class="math-container">$f:\Bbb Z\to \Bbb Z[X]$</span>, mapping <span class="math-container">$n$</span> to the constant polynomial <span class="math-container">$n=n\cdot X^0$</span>, and the second morphism is <span class="math-container">$g:\Bbb Z[X]\to \Bbb Z$</span>, the evaluation at zero, <span class="math-container">$p\mapsto p(0)$</span>. The composition is the identity, thus a monomorphism. But <span class="math-container">$g$</span> is of course not injective.</p>
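A small SymPy sketch of this example (mine, not from the answer; assuming SymPy is available): the composite evaluation-after-inclusion is the identity on $\Bbb Z$, while evaluation at zero identifies distinct polynomials.

```python
from sympy import symbols, Poly

X = symbols('X')

f = lambda n: Poly(n, X)     # inclusion: n as a constant polynomial
g = lambda p: p.eval(0)      # evaluation at zero

# g ∘ f is the identity on Z, hence a monomorphism
assert all(g(f(n)) == n for n in range(-5, 6))

# but g itself is not injective: X and 2X both evaluate to 0
assert g(Poly(X, X)) == g(Poly(2 * X, X)) == 0
```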
3,011,758
<p>So I was going through a problem:</p> <p>Prove <span class="math-container">$ex \leq e^x$</span>, <span class="math-container">$\forall x \in \mathbb{R} $</span></p> <p>I took <span class="math-container">$g(x) = e^x - ex$</span>.</p> <p>Then <span class="math-container">$g'(x)= e^x - e$</span>.</p> <p>I understood the case when <span class="math-container">$x&gt;1$</span>: the function is strictly increasing, i.e. <span class="math-container">$g'(x) &gt;0$</span>, so <span class="math-container">$e^x&gt;ex$</span>. But what about when <span class="math-container">$x \leq 1$</span>?</p>
user
505,767
<p>We have that</p> <p><span class="math-container">$$g(x)=e^x-ex \implies g'(x)=e^x-e=0 \iff x=1$$</span></p> <p>moreover <span class="math-container">$g''(x)=e^x &gt; 0$</span>, therefore <span class="math-container">$x=1$</span> is a point of minimum for <span class="math-container">$g(x)$</span>. Since <span class="math-container">$g(1)=e-e=0$</span>, it follows that <span class="math-container">$g(x)\geq 0$</span> for all <span class="math-container">$x$</span>, that is <span class="math-container">$e^x \geq ex$</span>.</p>
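A quick numeric check (not part of the answer) that the minimum value is $g(1)=e-e=0$, so $e^x\ge ex$ everywhere:

```python
from math import e, exp

g = lambda x: exp(x) - e * x

xs = [i / 100 for i in range(-500, 501)]   # grid on [-5, 5]
assert all(g(x) >= -1e-12 for x in xs)     # g is nonnegative (up to rounding)
assert min(xs, key=g) == 1.0               # minimizer on the grid is x = 1
assert abs(g(1.0)) < 1e-12                 # minimum value is 0
```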
498,056
<p>If $A$ is a subset (not proper subset) of $B$, does that mean that $B$ is a subset (not proper subset) of $A$ and that $A=B$?</p>
J. W. Perry
93,144
<p>Your parentheses give me cause for concern as they introduce a degree of possible ambiguity.</p> <p>The following is true:</p> <p>$$(A \subseteq B) \wedge (A \not \subset B) \Rightarrow A=B.$$</p> <p>Read this as "If $A$ is a subset of $B$ and $A$ is not a proper subset of $B$, then $A=B$".</p> <p>However, the following is also true: $$(A \subseteq B) \not\Rightarrow (B \subseteq A).$$</p> <p>Read this as "If $A$ is a subset of $B$ then it does not necessarily imply that $B$ is a subset of $A$".</p> <p>If you were to use the intended notation ($\subset$ for "proper subset" or $\subseteq$ for "not proper subset" a.k.a. "improper subset" or just "subset"), or you use the statement "$A$ is an improper subset of $B$", or you were to introduce the statement "$A$ is a subset of $B$ and $A$ is not a proper subset of $B$", then there would be no ambiguity.</p> <p>A subset ($\subseteq$) of another set may or may not contain every element of the other set (and it contains no elements that are not in the other set). A proper subset ($\subset$) of another set contains some, but definitely does not contain all, elements of the other set (and it contains no elements that are not in the other set).</p>
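Python's set operators mirror this logic exactly (`<=` is $\subseteq$, `<` is $\subset$); a tiny illustration, not part of the answer:

```python
A = {1, 2, 3}
B = {1, 2, 3}
C = {1, 2, 3, 4}

# A ⊆ B and not (A ⊂ B) forces A = B, and then B ⊆ A as well
assert A <= B and not A < B
assert A == B and B <= A

# but A ⊆ C alone does not give C ⊆ A
assert A <= C and not C <= A
```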
498,056
<p>If $A$ is a subset (not proper subset) of $B$, does that mean that $B$ is a subset (not proper subset) of $A$ and that $A=B$?</p>
ILikeMath
86,744
<p>Yes. We have $A \subseteq B$ but $A \not\subset B$. From the second statement, $A \not\subseteq B$ or $A= B$ (this is the negation of $A \subset B$, which is equivalent to $A \subseteq B$ and $A \ne B$); therefore $A=B$. It follows that $B\subseteq A$ too.</p>
3,984,930
<p>I am studying maths purely out of interest and have come across this question in my text book:</p> <p>A rectangular piece of paper ABCD is folded about the line joining points P on AB and Q on AD so that the new position of A is on CD. If AB = a and AD = b, where <span class="math-container">$a \ge\frac{2b}{\sqrt3}$</span>, show that the least possible area of the triangle APQ is obtained when the angle AQP is equal to <span class="math-container">$\frac{\pi}{3}$</span>. What is the significance of the condition <span class="math-container">$a \ge\frac{2b}{\sqrt3}$</span>?</p> <p>I realise I need to get an equation for the area of the fold, which I can then differentiate. Even then, I am not sure how to relate this to angle AQP.I have looked at solutions on the internet but they tend to look at the length of the crease, not the area of the fold.</p> <p>I have taken A in my diagram to represent its position <em>before</em> the fold. I could be mistaken.</p> <p>This is how I visualise it:</p> <p><a href="https://i.stack.imgur.com/y1v2x.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/y1v2x.png" alt="enter image description here" /></a></p> <p>I have said:</p> <p><span class="math-container">$x^2 = m^2 + (b - x)^2 = m^2 + b^2 - 2bx +x^2 \implies m^2 = 2b(x - \frac{b}{2})$</span></p> <p><span class="math-container">$L^2 = (L - m)^2 + b^2 \implies L = \frac{m^2 + b^2}{2m}$</span></p> <p><span class="math-container">$y^2 = x^2 + L^2 = x^2 + \frac{(m^2 + b^2)^2}{4m^2} = x^2 + \frac{(2bx - b^2 + b^2)^2}{4m^2} = x^2 + \frac{4b^2x^2}{8b(x - \frac{b}{2})} = \frac{bx^2}{2x - b}$</span></p> <p>After this I am not sure how to proceed.</p>
Math Lover
801,574
<p>If <span class="math-container">$\angle AQP = \theta$</span>, <span class="math-container">$\angle APQ = 90^0 - \theta$</span>.</p> <p>Draw a perp from <span class="math-container">$A_1$</span> to line segment <span class="math-container">$AP$</span> and say it is point <span class="math-container">$A_2$</span> on line segement <span class="math-container">$AP$</span>.</p> <p>Then <span class="math-container">$A_1A_2 = b$</span> and <span class="math-container">$\angle A_2PA_1 = 2 \ \angle APQ = 180^0 - 2\theta \ $</span>.</p> <p>So <span class="math-container">$AP = L = \frac{A_1 A_2}{\sin(180^0 - 2 \theta)} = \frac{b}{\sin (180^0 - 2 \theta)} = \frac{b}{\sin 2 \theta} \ $</span> (in <span class="math-container">$\triangle A_1PA_2$</span>)</p> <p><span class="math-container">$AQ = x = L \cot \theta = \frac{b}{2 \sin^2 \theta}$</span></p> <p>Area of <span class="math-container">$\triangle APQ = \frac{1}{2} \ L \ x = \frac{b^2}{8 \sin^3 \theta \cos \theta}$</span></p> <p>So to minimize area of <span class="math-container">$\triangle APQ$</span>, we need to maximize <span class="math-container">$f(\theta) = \sin ^3 \theta \ \cos \theta$</span>.</p> <p><span class="math-container">$f'(\theta) = 3 \sin^2 \theta \cos^2 \theta - \sin^4 \theta = 0$</span></p> <p><span class="math-container">$\implies \sin \theta = 0, \tan \theta = \pm \sqrt3$</span>. This leads to <span class="math-container">$\theta = \frac{\pi}{3}$</span>.</p> <p>Which also tells that <span class="math-container">$L = \frac{b}{\sin 2 \theta} = \frac{2b}{\sqrt3}$</span></p> <p>Also note that for us to be able to fold such that we obtain minimum area,</p> <p><span class="math-container">$a \geq L = \frac{2b}{\sqrt3}$</span>.</p>
4,246,048
<p>As I understand it, Cantor defined two sets as having the same cardinality iff their members can be paired 1-to-1. He applied this to infinite sets, so ostensibly the integers (Z) and the even integers (E) have the same cardinality because we can pair each element of Z with exactly one element of E.</p> <p>For infinite sets, this definition seems problematic no matter which direction we come at it from: We don't know up front that two infinite sets have the same cardinality, so we cannot conclude that their elements can be <em>exactly</em> paired. And we do not know up front that two infinite sets' elements can be paired up exactly (because we don't know with certainty what happens beyond the finite cases we can verify). So we cannot conclude that their cardinalities are the same. The definition above therefore seems useless, since we cannot start from either side of the &quot;iff&quot;.</p> <p>It might be argued that if we state it as follows: &quot;For each element e of E, pair it with element e/2 of Z,&quot; then we have expressed the general case symbolically, and it works. But we can only verify that for finite values of E and Z. We can't know what happens beyond finite elements of those sets. So expressing it symbolically does not seem to help.</p> <p>Why is Cantor's definition not circular and therefore useless for deciding the question of infinite set cardinalities?</p>
md2perpe
168,433
<p>There are many definitions where you kind of need to know the answer before you use the definition. I write &quot;kind of&quot; because <strong>you don't need to have a proof for it, but you need to have a guess</strong>. Then you write a proof to show that the guess is actually correct.</p> <p>The definition of equal cardinality is such a definition. Another one that you might have seen is the definition of limit: <span class="math-container">$\lim_{x\to a} f(x) = L$</span> if for all <span class="math-container">$\epsilon&gt;0$</span> there exists <span class="math-container">$\delta&gt;0$</span> such that <span class="math-container">$|f(x)-L|&lt;\epsilon$</span> whenever <span class="math-container">$|x-a|&lt;\delta.$</span> Here you need to have a guess of the limit <span class="math-container">$L$</span> before you can show that <span class="math-container">$\lim_{x\to a} f(x) = L.$</span> But primarily the definition is used to show some laws/rules. These laws can then be used to determine a limit and no explicit use of the definition is needed. Similarly one can find laws for cardinalities and use them instead of the definition.</p>
42,787
<p>I am using <code>ListPlot</code> to display from 5 to 12 lines of busy data. The individual time series in my data are not easy to distinguish visually, as may be evident below, because the colors are not sufficiently different.</p> <p><img src="https://i.stack.imgur.com/PiMMh.png" alt="enter image description here"></p> <p>I have been trying to use <code>PlotStyle</code>, <code>ColorData</code>and related functions to get better colors. I would rather not have to specify a specific list of colors because the number of plot items varies from test to test. I created a toy plot to experiment with - the problem is illustrated by lines "F" and "G", which seem to be almost the same color. <code>PlotStyle</code> -> <code>ColorData</code> doesn't seem to work. Is there a simple way to do this?</p> <pre><code>ListPlot[Table[i*Range[0, 10], {i, 1, 5, 0.5}] , Frame -&gt; True, Joined -&gt; True, PlotRange -&gt; All , PlotLegends -&gt; SwatchLegend[Characters["ABCDEFGHIGKLMNOP"]] , PlotStyle -&gt; ColorData["TemperatureMap"] ] </code></pre> <p><img src="https://i.stack.imgur.com/Wdi0O.png" alt="enter image description here"></p> <p>It looks like </p> <pre><code>ListLinePlot[Table[data2*i, {i, k}], PlotStyle -&gt; Thick, ColorFunction -&gt; Function[{x1, x2}, ColorData[c1][x2]]] </code></pre> <p>from another <a href="https://mathematica.stackexchange.com/questions/27131/plotstyle-in-listplot-change-color-scheme-manually-choose-color-of-first-plot?rq=1">question</a> may be the answer. I didn't see that before. I'll try it out. I don't think I really understand <code>ColorData</code>. Meanwhile, if anyone has generally enlightening comments, I would appreciate them.</p>
mfvonh
5,059
<p>As mentioned by others, you should read up on <code>ColorDataFunctions</code>. For example, you could evenly space colors across a continuous color scheme, for an arbitrary number of lines, with:</p> <pre><code>d = Table[i*Range[0, 10], {i, 1, 5, 0.5}]; ListPlot[d, Frame -&gt; True, Joined -&gt; True, PlotRange -&gt; All, PlotLegends -&gt; SwatchLegend[Characters["ABCDEFGHIGKLMNOP"]], PlotStyle -&gt; ColorData["Rainbow"] /@ (Range[0, Length@d]/Length@d)] </code></pre> <p><img src="https://i.stack.imgur.com/eLD6f.png" alt="enter image description here"></p>
227,797
<p>I have this function and I want to see where it is zero. <span class="math-container">$$\frac{1}{16} \left(\sinh (\pi x) \left(64 \left(x^2-4\right) \cosh \left(\frac{2 \pi x}{3}\right) \cos (y)+\left(x^2+4\right)^2+256 x \sinh \left(\frac{2 \pi x}{3}\right) \sin (y)\right)+\left(x^2-12\right)^2 \sinh \left(\frac{7 \pi x}{3}\right)-2 \left(x^2+4\right)^2 \sinh \left(\frac{5 \pi x}{3}\right)\right)+2 \left(x^2-4\right) \sinh \left(\frac{\pi x}{3}\right)$$</span> I use <code>ContourPlot</code>:</p> <pre><code>f[x_, y_] := 2 (-4 + x^2) Sinh[(π x)/3] + 1/16 (((4 + x^2)^2 + 64 (-4 + x^2) Cos[y] Cosh[(2 π x)/3] + 256 x Sin[y] Sinh[(2 π x)/3]) Sinh[π x] - 2 (4 + x^2)^2 Sinh[(5 π x)/3] + (-12 + x^2)^2 Sinh[( 7 π x)/3]); ContourPlot[ f[x, y] == 0, {x, 3.465728, 3.465729}, {y, 1.046786, 1.046795}, PlotPoints -&gt; 500] </code></pre> <p>and I obtain this plot.</p> <p>Now, my question is: can I trust this plot and conclude that the curves do not cross?</p> <p>Or should I increase the precision of the plot? And if so, how can I ask Mathematica to give higher precision for the axes in ContourPlot?</p>
C. E.
731
<p>Not a complete answer but one direction to go in:</p> <pre><code>cp = ContourPlot[f[x, y] == 0, {x, 3.465728, 3.465729}, {y, 1.046786, 1.046795}, PlotPoints -&gt; 500] {l1, l2} = Cases[Normal[cp], _Line, Infinity]; {{x1, y1}} = MinimalBy[First@l1, RegionDistance[l2]]; {{x2, y2}} = MinimalBy[First@l2, RegionDistance[l1]]; Show[ cp, ListPlot[{{x1, y1}, {x2, y2}}, PlotStyle -&gt; {PointSize[Large], Red}] ] </code></pre> <p><a href="https://i.stack.imgur.com/F3kLL.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/F3kLL.png" alt="Output" /></a></p> <pre><code>sol1 = r /. FindRoot[f[r, y], {r, x1}]; sol2 = r /. FindRoot[f[r, y], {r, x2}]; sol1 - sol2 </code></pre> <blockquote> <p>3.11929*10^-8</p> </blockquote> <p>At least <code>FindRoot</code> also finds that there are two distinct solutions at that <code>y</code>. However, <code>FindRoot</code> also warns about its ability to find roots with the desired accuracy, so it is still not conclusive. Also, perhaps it should be looking for the root by also adjusting <code>y</code>.</p> <p>The idea here is that we can extract values from the plot and then try to verify our conclusions about the plot using other functions.</p>
394,580
<p>Let <span class="math-container">$S$</span> be a smooth compact closed surface embedded in <span class="math-container">$\mathbb{R}^3$</span> of genus <span class="math-container">$g$</span>. Starting from a point <span class="math-container">$p$</span>, define a random walk as taking discrete steps in a uniformly random direction, each step a geodesic segment of the same length <span class="math-container">$\delta$</span>. Assume <span class="math-container">$\delta$</span> is less than the injectivity radius and small with respect to the intrinsic diameter of <span class="math-container">$S$</span>.</p> <blockquote> <p><em><strong>Q</strong></em>. Is the set of footprints of the random walk evenly distributed on <span class="math-container">$S$</span>, in the limit? By evenly distributed I mean the density of points per unit area of surface is the same everywhere on <span class="math-container">$S$</span>.</p> </blockquote> <p>This is likely known, but I'm not finding it in the literature on random walks on manifolds. I'm especially interested in genus <span class="math-container">$0$</span>. Thanks!</p> <hr /> <p><em>Update</em> (6JUn2021). The answer to <em><strong>Q</strong></em> is <em>Yes</em>, going back 38 years to Toshikazu Sunada, as recounted in @RW's <a href="https://mathoverflow.net/a/394625/6094">answer</a>.</p>
Carlo Beenakker
11,260
<p>This random walk is known in the literature as the &quot;geodesic random walk&quot;. For a manifold with positive curvature, theorems 1 and 4 of <A HREF="https://arxiv.org/abs/1609.02901" rel="noreferrer">arXiv:1609.02901</A> prove that the uniform measure on the manifold is the unique stationary distribution of the geodesic walk.</p>
394,580
<p>Let <span class="math-container">$S$</span> be a smooth compact closed surface embedded in <span class="math-container">$\mathbb{R}^3$</span> of genus <span class="math-container">$g$</span>. Starting from a point <span class="math-container">$p$</span>, define a random walk as taking discrete steps in a uniformly random direction, each step a geodesic segment of the same length <span class="math-container">$\delta$</span>. Assume <span class="math-container">$\delta$</span> is less than the injectivity radius and small with respect to the intrinsic diameter of <span class="math-container">$S$</span>.</p> <blockquote> <p><em><strong>Q</strong></em>. Is the set of footprints of the random walk evenly distributed on <span class="math-container">$S$</span>, in the limit? By evenly distributed I mean the density of points per unit area of surface is the same everywhere on <span class="math-container">$S$</span>.</p> </blockquote> <p>This is likely known, but I'm not finding it in the literature on random walks on manifolds. I'm especially interested in genus <span class="math-container">$0$</span>. Thanks!</p> <hr /> <p><em>Update</em> (6JUn2021). The answer to <em><strong>Q</strong></em> is <em>Yes</em>, going back 38 years to Toshikazu Sunada, as recounted in @RW's <a href="https://mathoverflow.net/a/394625/6094">answer</a>.</p>
R W
8,588
<p>This problem was first considered and solved by Sunada, see his 1983 paper <a href="http://www.numdam.org/item/?id=CM_1983__48_1_129_0" rel="noreferrer">Mean-value theorems and ergodicity of certain geodesic random walks</a>. Alas, the authors of the quoted arxiv paper were not aware of this. Any assumptions on curvature and dimension are not necessary - it is just enough to assume that the manifold is compact. As it has been pointed out by Pierre PC, the fact that the Riemannian volume is a stationary measure is an immediate consequence of the Liouville theorem. Its ergodicity with respect to the geodesic random walk is equivalent to the absence of invariant subsets of the manifold, which would follow, for instance, if any two points can be joined by a chain of geodesic segments of length <span class="math-container">$\delta$</span>. Actually, geodesic random walks are always mixing for sufficiently small <span class="math-container">$\delta$</span> - this is a consequence of the fact that the cube of the transition operator has a density which is bounded away from 0 on the diagonal. The latter also implies the uniqueness of the stationary measure.</p> <p>EDIT I had misattributed the term &quot;geodesic random walk&quot; to Sunada. Actually, it seems to be first introduced in 1975 by Jørgensen <a href="https://link.springer.com/content/pdf/10.1007/BF00533088.pdf" rel="noreferrer">The central limit problem for geodesic random walks</a> whose work is quoted by Sunada.</p>
473,508
<p>Let $p &gt; 2$ be a prime. Can someone prove that for any $t \leq p-2$ the following identity holds:</p> <blockquote> <p>$\displaystyle \sum_{x \in \mathbb{F}_p} x^t = 0$</p> </blockquote>
azimut
61,691
<p>Since $1 \leq t &lt; p-1$, there exists an $a\in\mathbb F_p^\times$ with $a^t\neq 1$ (take $a$ a generator of the cyclic group $\mathbb F_p^\times$). Now $$ \sum_{x\in\mathbb F_p} x^t = \sum_{x\in\mathbb F_p} (ax)^t = a^t \sum_{x\in\mathbb F_p} x^t. $$ So $$\underbrace{(1 - a^t)}_{\neq 0}\sum_{x\in\mathbb F_p} x^t = 0$$ and therefore $$\sum_{x\in\mathbb F_p} x^t = 0.$$</p>
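A quick computational check (mine, not part of the answer) for $p=11$, including the boundary case $t=p-1$ where the sum is instead $-1$:

```python
p = 11

# the identity holds for 1 <= t <= p-2
for t in range(1, p - 1):
    assert sum(pow(x, t, p) for x in range(p)) % p == 0

# it fails at t = p-1: Fermat gives x^(p-1) = 1 for every x != 0
assert sum(pow(x, p - 1, p) for x in range(p)) % p == p - 1
```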
474,587
<p>Does $\|Tv\|\leq\|v\|$ (for all $v \in V$) imply that $T$ is normal?</p> <p>If not, and I add the additional information that every eigenvalue of $T$ has absolute value 1, can I prove $T$ is unitary?</p> <p>Thanks!</p>
user8268
8,268
<p>For the second question the answer is yes. I'll suppose that $T$ is diagonalizable (i.e. no Jordan blocks). If $T$ is not unitary then there are eigenvectors $u,v$ with eigenvalues $a,b$, $a\neq b$ ($|a|=|b|=1$) such that $(u,v)\neq 0$. If $c$ is a complex number then $\|u+cv\|^2=\|u\|^2+|c|^2\|v\|^2+2\,\mathrm{Re}\, c(v,u)$ and $\|T(u+cv)\|^2=\|u\|^2+|c|^2\|v\|^2+2\,\mathrm{Re}\, a^{-1}bc(v,u)$. If you choose $c$ so that $c(v,u)$ is real and negative (e.g. $c=-(u,v)$) then the inequality (for the vector $u+cv$) fails. I'll leave the non-diagonalizable case as an exercise.</p>
4,353,203
<p>While looking for an explanation to the first inequality, I spied another similar inequality. So, I will ask about both of them here.</p> <p><span class="math-container">$a$</span>, <span class="math-container">$b$</span>, and <span class="math-container">$c$</span> are positive numbers. <span class="math-container">$$\frac{a}{b + c} + \frac{b}{a + c} + \frac{c}{a + b} \geq \frac{(a + b + c)^{2}}{2(ab + ac + bc)}.$$</span> <span class="math-container">$$\frac{a^{2}}{b} + \frac{b^{2}}{c} + \frac{c^{2}}{a} \geq \frac{(a + b + c)^{2}}{a + b + c} = a + b + c.$$</span></p> <p>If the second inequality is an application of the Cauchy-Schwarz Inequality, state for me the two vectors in <span class="math-container">$\mathbb{R}^{3}$</span> that are used to justify it.</p>
md5
301,549
<p>Here it is not too hard to explicitly compute the distribution of <span class="math-container">$s_{2n}$</span>. By counting the number of <span class="math-container">$x_k$</span> that match <span class="math-container">$a_k$</span>, we see that for all <span class="math-container">$x\in\{0,\ldots,n\}$</span>, <span class="math-container">$$ P(\sqrt{2n}s_{2n}=2n-4x)=\frac{\binom{n}{x}\binom{n}{n-x}}{\binom{2n}{n}}. $$</span> Now it suffices to apply some local limit theorem. What we need to prove is a statement of the form: for all <span class="math-container">$$ x_n=\frac{n}{2}+\frac{\sqrt{n}t}{4}+o(\sqrt{n}), $$</span> it holds that <span class="math-container">$\sqrt{n}\,P(\sqrt{2n}s_{2n}=2n-4x_n)\to e^{-t^2/2\sigma^2}/\sqrt{2\pi\sigma^2}.$</span> This should not be too hard, using standard estimates such as <span class="math-container">$$ \binom{n}{\frac{n}{2}+\sqrt{n}t/4+o(\sqrt{n})}2^{-n}\sim\sqrt{\frac{2}{\pi n}}e^{-t^2/2}. $$</span></p>
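As a sanity check (my own, not from the answer), the stated weights do form a probability distribution: $\sum_{x=0}^{n}\binom{n}{x}\binom{n}{n-x}=\binom{2n}{n}$ by Vandermonde's identity.

```python
from math import comb

# Vandermonde: sum_x C(n,x) * C(n,n-x) = C(2n,n), so the weights sum to 1
for n in range(1, 13):
    assert sum(comb(n, x) * comb(n, n - x) for x in range(n + 1)) == comb(2 * n, n)
```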
318,351
<p>As we know, the Ky Fan norm is convex, and so is the Ky Fan $k$-norm. My question is: does this imply that the difference between them is a non-convex function, since it results from a "difference of two convex" functions?</p>
Muphrid
45,296
<p>Differential forms are a way to talk about fields that aren't vector or scalar fields--or at least, to talk about those and things beyond them. In traditional vector calculus, you only talk about scalar fields and vector fields. But using differential forms, you can talk about something like $\omega = 2y \, dx \wedge dy$ as a "bivector" field, a field of oriented planes throughout space.</p> <p>You don't <em>need</em> to do this in 3d because each plane has a unique normal vector, so you can cheat and just use the vectors. But outside of 3d, you must do something like this to talk about all kinds of fields meaningfully. You can also talk about oriented volume fields and so on. Again, this is something looked over in vector calculus because there is only one unit volume in 3d, so you can pretend it's really a scalar field when it's not.</p>
12,717
<p>In the familiar case of (smooth projective) curves over an algebraically closed fields, (closed) points correspond to DVR's.</p> <p>What if we have a non-singular projective curve over a non-algebraically closed field? The closed points will certainly induce DVR's, but would all DVR's come from closed points? Is there a characterization of the DVR's that aren't induced by closed points?</p> <p>And how about for a general projective variety that is regular in codimension 1 (both for algebraically closed and non-algebraically closed)? Points of codimension 1 induce DVR's. Do they induce all of them? What is the characterization of the ones they do induce?</p> <p>How about complete integral schemes that are regular in codimension 1?</p>
Pete L. Clark
1,149
<blockquote> <p>What if we have a non-singular projective curve over a non-algebraically closed field? The closed points will certainly induce DVR's, but would all DVR's come from closed points?</p> </blockquote> <p>Yes: let <span class="math-container">$L/k$</span> be a function field in one variable, so it can be given as a finite separable extension of <span class="math-container">$K = k(t)$</span>. Then the discrete valuations on <span class="math-container">$L$</span> which are trivial on <span class="math-container">$k$</span> are in canonical bijection with the closed points on the unique regular projective model <span class="math-container">$C_{/l}$</span>, where <span class="math-container">$l$</span> is the algebraic closure of <span class="math-container">$k$</span> in <span class="math-container">$L$</span> (since <span class="math-container">$l/k$</span> is finite, any valuation which is trivial on <span class="math-container">$k$</span> is also trivial on <span class="math-container">$l$</span>).</p> <p>By coincidence, this is almost exactly where I am in a course I am now teaching, although I won't insist on the geometric language: see Section 1.7, especially Exercise 1.22, of</p> <p><a href="http://alpha.math.uga.edu/%7Epete/8410Chapter1.pdf" rel="nofollow noreferrer">http://alpha.math.uga.edu/~pete/8410Chapter1.pdf</a></p> <blockquote> <p>And how about for a general projective variety that is regular in codimension 1 (both for algebraically closed and non-algebraically closed)? Point of codimension 1 induce DVR's. Do they induce all of them? What is the characterization of the ones they do induce?</p> </blockquote> <p>No, they do not induce all of them (even if the variety is smooth, which I will assume for simplicity). The problem here is that, unlike in dimension one, there is no unique nonsingular projective model, so e.g. 
there will be discrete valuations on <span class="math-container">$k(t_1,t_2)$</span> which correspond not to codimension one points on <span class="math-container">$\mathbb{P}^2$</span> but to points on (at least) some arbitrary blowup of <span class="math-container">$\mathbb{P}^2$</span>. Because of this, I am pretty sure that there is no simple valuation-theoretic characterization of those valuations which correspond to closed points on a particular projective model of the function field.</p> <p>By the way, I do not understand valuation theory on function fields in more than one variable very well, so I especially welcome further responses which elaborate on this issue.</p>
1,442,240
<p>I have a little question that ran through my thoughts when I saw this exercise: $$\lim _{x\to \infty }\left(\frac{\int _{\sin\left(x\right)}^xe^{\sqrt{t^2+1}}dt}{e^{\sqrt{x^2-1}}}\:\right)$$</p> <p>Of course I want to apply L'Hopital's rule here, but without showing a calculation, is there an intuitive and logical explanation of why the numerator (the integral) is $\infty$?</p> <p>I asked myself this question because I remember that, for example, the double integral of the constant 1 gives the area, and the triple integral of 1 gives the volume. So here I tried to think: what is an integral over the same domain but with the constant 1?</p> <p>$$\int _{\sin\left(x\right)}^x1dt\:$$</p> <p>That definitely diverges, because it equals $x-\sin(x)$, which goes to infinity as $x$ approaches infinity. And here, as I conclude, there isn't much geometric meaning to that kind of integral, where the bounds are functions (like $x$ and $\sin x$ here). I mean, I can't say that it is the area bounded by $x$ and $\sin(x)$; the integral of 1 is just the result of a subtraction, right?</p> <p>So I don't have any geometric intuition for that integral, except one thing I see clearly: as I said, the integral of 1 diverges, and the function $e^{\sqrt{t^2+1}}$ is definitely an increasing function, but it is enough to say that it is bigger than 1 for each $t$, and that is why (one of the reasons) the limit is $\infty$.</p>
Crostul
160,300
<p>You can see that $$\int_{\sin x}^x e^{\sqrt{t^2+1}} dt = \int_0^x e^{\sqrt{t^2+1}} dt + \int_{\sin x}^0 e^{\sqrt{t^2+1}} dt$$ and clearly the first summand diverges to $\infty$, while the second summand is bounded because $$\left| \int_{\sin x}^0 e^{\sqrt{t^2+1}} dt \right| \le \int_{-1}^1 e^{\sqrt{t^2+1}} dt = constant$$ So, in order to compute your limit, you can simply replace $\sin x$ with $0$, and then apply L'Hopital.</p>
316,055
<p>I have no idea of how to solve the following: </p> <p>$$\displaystyle \lim_{x\rightarrow 0}\frac{e^x-1}{3x}$$</p> <p>I know about the notable special limit $$\displaystyle \lim_{x\rightarrow 0}\frac{e^x-1}{x}=1$$, and I know that I have to do some algebraic manipulation and change what I have above to the notable one, but I can't quite see how.</p>
Jeel Shah
38,296
<p>Note that the limit you have provided is indeterminate: as $x \rightarrow 0$, both $e^x-1 \rightarrow 0$ and $3x \rightarrow 0$, giving the form $\frac{0}{0}$.</p> <p>Therefore, you can use L'Hopital's Rule, which states that for such indeterminate forms: </p> <p>$$\displaystyle \lim_{x \rightarrow 0}\frac{f(x)}{g(x)} = \displaystyle \lim_{x \rightarrow 0}\frac{f'(x)}{g'(x)}$$</p> <p>Therefore, let $f(x)$ be $e^x -1$ and let $g(x)$ be $3x$ and apply L'Hopital's Rule:</p> <p>$$\displaystyle \lim_{x \rightarrow 0}\frac{f'(x)}{g'(x)} = \displaystyle \lim_{x \rightarrow 0}\frac{e^x}{3} = \frac{e^0}{3} = \frac{1}{3}$$</p> <p>Therefore, the limit is $\frac{1}{3}$. Note that the derivative of $e^x$ is $e^x$.</p>
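<p>The value $\frac13$ can also be checked numerically (a small sketch of my own; <code>math.expm1</code> computes $e^x-1$ without cancellation for small $x$):</p>

```python
import math

# Numerical check of lim_{x->0} (e^x - 1)/(3x) = 1/3.
# math.expm1 avoids catastrophic cancellation when x is small.
def quotient(x):
    return math.expm1(x) / (3 * x)

for x in (1e-2, 1e-4, 1e-6):
    # the error of the difference quotient shrinks proportionally to x
    assert abs(quotient(x) - 1 / 3) < x
```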
674,259
<p>I am trying to understand why the derivative of $f(x)=x^\frac{1}{2}$ is $\frac{1}{2\sqrt{x}}$ using the limit definition of the derivative. I know $f'(x) = \frac{1}{2\sqrt{x}}$, but what I want to understand is how to manipulate the following limit so that it gives this result as $h$ tends to zero:</p> <p>$$f'(x)=\lim_{h\to 0} \frac{(x+h)^\frac{1}{2}-x^\frac{1}{2}}{h} = \frac{1}{2\sqrt{x}}$$</p> <p>I have tried writing this as:</p> <p>$$\lim_{h\to 0} \frac{\sqrt{x+h} - \sqrt{x}}{h}$$</p> <p>But I can't see how to get to the limit of $\frac{1}{2\sqrt{x}}$. Whilst this is not homework I have actually been set, I would like to understand how to evaluate the limit.</p>
Dan
79,007
<p>Hint: Simplify $$ \lim_{h\to0}\frac{\sqrt{x+h}-\sqrt{x}}{h} = \lim_{h\to0}\frac{\sqrt{x+h}-\sqrt{x}}{h}\cdot\frac{\sqrt{x+h} + \sqrt{x}}{\sqrt{x+h} + \sqrt{x}} $$</p>
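<p>After the multiplication in the hint, the quotient simplifies to $\frac{1}{\sqrt{x+h}+\sqrt{x}}$, which visibly tends to $\frac{1}{2\sqrt{x}}$. A quick numerical check of this (my own sketch, not part of the hint):</p>

```python
import math

# After multiplying by the conjugate, the difference quotient equals
# 1/(sqrt(x+h) + sqrt(x)), which tends to 1/(2*sqrt(x)) as h -> 0.
def rationalized(x, h):
    return 1 / (math.sqrt(x + h) + math.sqrt(x))

x = 4.0
for h in (1e-2, 1e-6, 1e-10):
    assert abs(rationalized(x, h) - 1 / (2 * math.sqrt(x))) < h
```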
347,385
<p>Assume $f(x) \in C^1([0,1])$,and $\int_0^{\frac{1}{2}}f(x)\text{d}x=0$,show that: $$\left(\int_0^1f(x)\text{d}x\right)^2 \leq \frac{1}{12}\int_0^1[f'(x)]^2\text{d}x$$</p> <p>and how to find the smallest constant $C$ which satisfies $$\left(\int_0^1f(x)\text{d}x\right)^2 \leq C\int_0^1[f'(x)]^2\text{d}x$$</p>
math110
58,742
<p>Since <span class="math-container">$$\displaystyle\int_{0}^{\frac{1}{2}}f(x)\,dx=0\Longrightarrow \int_{0}^{\frac{1}{2}}xf'(x)dx=\dfrac{1}{2}f(\dfrac{1}{2})$$</span> (by integration by parts),</p> <p>so <span class="math-container">\begin{align} &amp;(\int_{0}^{1}f(x)dx)^2=\left[\int_{\frac{1}{2}}^{1}(f(x)-f(\dfrac{1}{2}))dx+\dfrac{1}{2}f(\dfrac{1}{2})\right]^2=\left[\int_{\frac{1}{2}}^{1}\int_{\frac{1}{2}}^{x}f'(t)dtdx+\int_{0}^{\frac{1}{2}}xf'(x)dx\right]^2\\ &amp;=\left[\int_{\frac{1}{2}}^{1}(1-t)f'(t)dt+\int_{0}^{\frac{1}{2}}xf'(x)dx\right]^2\\ &amp;\le2\left[\int_{\frac{1}{2}}^{1}(1-t)f'(t)dt\right]^2+2\left[\int_{0}^{\frac{1}{2}}xf'(x)dx\right]^2\\ &amp;\le 2\left[\int_{\frac{1}{2}}^{1}(1-t)^2dt\int_{\frac{1}{2}}^{1}f'^2(t)dt+\int_{0}^{\frac{1}{2}}x^2dx\int_{0}^{\frac{1}{2}}f'^2(t)dt\right]\\ &amp;=\frac{1}{12}\int_{0}^{1}f'^2(x)dx \end{align}</span></p> <p>(The first inequality is <span class="math-container">$(a+b)^2\le 2a^2+2b^2$</span>; the second is Cauchy-Schwarz.)</p>
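<p>A spot check of the inequality with a concrete function (my example, not part of the proof): take <span class="math-container">$f(x)=x-\frac14$</span>, which satisfies the constraint <span class="math-container">$\int_0^{1/2}f=0$</span>; then the left side is <span class="math-container">$\frac{1}{16}$</span> and the right side is <span class="math-container">$\frac{1}{12}$</span>:</p>

```python
from fractions import Fraction

# f(x) = x - 1/4 satisfies int_0^{1/2} f dx = 1/8 - 1/8 = 0.
# Left side: (int_0^1 (x - 1/4) dx)^2 = (1/2 - 1/4)^2 = 1/16.
lhs = Fraction(1, 4) ** 2
# Right side: f'(x) = 1, so (1/12) * int_0^1 f'^2 dx = 1/12.
rhs = Fraction(1, 12) * 1
assert lhs <= rhs
```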
147,994
<p><strong>Preamble</strong></p> <p>Consider a <a href="http://reference.wolfram.com/language/ref/SetterBar.html" rel="nofollow noreferrer"><code>SetterBar</code></a>:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/b4eqb.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/b4eqb.png" alt="Blockquote"></a></p> </blockquote> <p>That is with vertical layout:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5, Appearance -&gt; "Vertical"] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/7CIbD.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/7CIbD.png" alt="Blockquote"></a></p> </blockquote> <p>Two things are different:</p> <p>First, the unpressed buttons look different in the vertical layout than in the horizontal one, because the vertical layout has a mouse-over behavior, which the horizontal layout doesn't. </p> <blockquote> <p><a href="https://i.stack.imgur.com/uQ1Q4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uQ1Q4.png" alt="Blockquote"></a></p> </blockquote> <p>Second, the buttons are nicely aligned by the width of the longest element, but when you click one more time on the already selected button, it shrinks to the length of the string.</p> <blockquote> <p><a href="https://i.stack.imgur.com/JeqYG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/JeqYG.png" alt="Blockquote"></a></p> </blockquote> <p>I tried to construct a setter bar myself, using <a href="http://reference.wolfram.com/language/ref/Setter.html" rel="nofollow noreferrer"><code>Setter</code></a>:</p> <pre><code>Column[Setter[Dynamic@x, #, StringRepeat["q", #]] &amp; /@ Range@5] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/tfvwE.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/tfvwE.png" alt="Blockquote"></a></p> </blockquote> <p>Looks normal now (no mouse-over behavior), but the same story 
with the button changing its appearance when double clicked. Apparently, we can use <a href="http://reference.wolfram.com/language/ref/ImageSize.html" rel="nofollow noreferrer"><code>ImageSize</code></a> to fix this:</p> <pre><code>Column[Setter[Dynamic@x, #, StringRepeat["q", #], ImageSize -&gt; 100] &amp; /@ Range@5] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/DRq3B.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/DRq3B.png" alt="Blockquote"></a></p> </blockquote> <p>I thought I could port this to <a href="http://reference.wolfram.com/language/ref/SetterBar.html" rel="nofollow noreferrer"><code>SetterBar</code></a>, and it works, but the FE complains that <a href="http://reference.wolfram.com/language/ref/ImageSize.html" rel="nofollow noreferrer"><code>ImageSize</code></a> is not a valid option for <code>SetterBar</code>:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5, Appearance -&gt; "Vertical", ImageSize -&gt; 100] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/MmCyT.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/MmCyT.png" alt="Blockquote"></a></p> </blockquote> <p>Also, the buttons seem to be next to each other, not like in the column of <code>Setter</code>s, so I added <a href="http://reference.wolfram.com/language/ref/ImageMargins.html" rel="nofollow noreferrer"><code>ImageMargins</code></a>, and it works again, but the FE doesn't like it either:</p> <pre><code>SetterBar[1, StringRepeat["q", #] &amp; /@ Range@5, Appearance -&gt; "Vertical", ImageSize -&gt; 100, ImageMargins -&gt; {{0, 0}, {5, 0}}] </code></pre> <blockquote> <p><a href="https://i.stack.imgur.com/9AoOq.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9AoOq.png" alt="Blockquote"></a></p> </blockquote> <p>Also, the appearance is still "mouse-over" like. 
</p> <p><strong>Questions:</strong></p> <p>1) What happens with <a href="http://reference.wolfram.com/language/ref/SetterBar.html" rel="nofollow noreferrer"><code>SetterBar</code></a> when I change the layout from horizontal to vertical? Why does the appearance of the buttons change to "mouse-over"? Can we control it?</p> <p>2) Why does the FE complain about options for <code>SetterBar</code> that work fine? Is this normal behavior? Is this a bug? Are there other examples of similar behavior?</p>
Michael E2
4,999
<p>I usually use <code>Pane</code> to solve such alignment problems. String padding in a variable-width font does not produce reliable results, and <code>Pane</code> can be used to get around that. (Fortunately or unfortunately, a vertical <code>SetterBar</code> automatically pads out the buttons to be the same sizes, and I cannot check this on Windows.)</p> <p>One of the issues is to determine the dimensions to use for <code>Pane</code>. In particular, the width depends on the style in which the <code>SetterBar</code> is ultimately displayed. When the style is unknown, it can be a challenge to get it right. Usually, I don't worry if the dimensions are a little too large, and estimate by trial and error. However, sometimes one wants to get it exactly right, or at least exactly the same as something else. For that one can apply <code>Rasterize</code> to the string in the display style.</p> <pre><code>strings = StringRepeat["q", #] &amp; /@ Range@5; width = First@Rasterize[#, "BoundingBox"] &amp; /@ strings // Max (* 39 *) SetterBar[1, Pane[#, width, Alignment -&gt; Center] &amp; /@ strings, Appearance -&gt; {"Vertical"}] </code></pre> <p><img src="https://i.stack.imgur.com/Lrhex.png" alt="Mathematica graphics"></p> <p>For another style, do this:</p> <pre><code>style = "Label"; width = First@Rasterize[Style[#, style], "BoundingBox"] &amp; /@ strings // Max (* 28 *) SetterBar[1, Pane[#, width, Alignment -&gt; Center] &amp; /@ strings, Appearance -&gt; {"Vertical"}, BaseStyle -&gt; style] </code></pre> <p><img src="https://i.stack.imgur.com/EWZcd.png" alt="Mathematica graphics"></p> <p>For a <code>Manipulate</code> setter bar, use the <code>"ManipulateLabel"</code> style:</p> <pre><code>style = "ManipulateLabel"; width = First@Rasterize[Style[#, style], "BoundingBox"] &amp; /@ strings // Max (* 27 *) Manipulate[ s, {{s, "q"}, # -&gt; Pane[#, width, Alignment -&gt; Center] &amp; /@ strings, SetterBar, Appearance -&gt; {"Vertical"}, ControlPlacement -&gt; Left} 
] </code></pre> <p><img src="https://i.stack.imgur.com/1zaoD.png" alt="Mathematica graphics"></p> <p>One can compare the width with the default <code>Manipulate</code>, or with other values for <code>width</code>, with</p> <pre><code>Grid[List /@ {man1, man2,...}, Spacings -&gt; 0, Dividers -&gt; {{1 -&gt; Thin, 2 -&gt; Thin}, None}] </code></pre> <p>Below are the widths in the <code>"ManipulateLabel"</code> style of the strings padded with spaces. They vary considerably.</p> <pre><code>First@Rasterize[Style[#, style], "BoundingBox"] &amp; /@ StringPadRight@(StringRepeat["q", #] &amp; /@ Range@5) (* {15, 19, 21, 25, 27} *) </code></pre> <p>For a fixed-width font, there is no such problem.</p> <pre><code>First@Rasterize[#, "BoundingBox"] &amp; /@ StringPadRight@(StringRepeat["q", #] &amp; /@ Range@5) (* {39, 39, 39, 39, 39} *) </code></pre>
1,335,483
<p>Given a relation $R \subseteq A \times A$ with $n$ tuples, I am trying to prove that its transitive closure $R^+$ has at most $n^2$ elements.</p> <p>My initial idea was to use the following definition of the transitive closure to identify an argument why the statement to be proven must be true:</p> <p>$$R^+ = R \cup R^2 \cup R^3 \cup \ldots$$</p> <p>where $R^k, k \in \mathbb{N}$ stands for the k-fold composition of $R$, but that didn't give me any useful hint to continue the proof. I appreciate any hint that may help me along.</p>
ajotatxe
132,456
<p>For each ordered pair of different tuples $\{(a,b),(b,c)\}$ we have to add at most one tuple, namely $(a,c)$.</p> <p>Since there are at most $n(n-1)=n^2-n$ such pairs, we get $|R^+|\le n+(n^2-n)=n^2$.</p>
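<p>Another way to see the bound (my own remark, not part of the answer): every pair in $R^+$ has a first coordinate that is a first coordinate of some tuple of $R$, and a second coordinate that is a second coordinate of some tuple of $R$; there are at most $n$ of each, so $|R^+|\le n\cdot n$. A brute-force check on small random relations:</p>

```python
import random

# Compute the transitive closure by repeatedly composing pairs
# until no new pair appears, then check |R+| <= |R|^2.
def transitive_closure(R):
    closure = set(R)
    while True:
        new = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if new <= closure:
            return closure
        closure |= new

random.seed(0)
for _ in range(200):
    R = {(random.randrange(5), random.randrange(5)) for _ in range(4)}
    assert len(transitive_closure(R)) <= len(R) ** 2
```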
706,980
<p>If I know that $f(z)$ is differentiable at $z_0$, $z_0 = x_0 + iy_0$. How do I prove that $g(z) = \overline{f(\overline{z})}$ is differentiable at $\overline z_0$?</p>
copper.hat
27,978
<p>First note that $z \to \bar{z_0}$ <strong>iff</strong> $\bar{z} \to z_0$. In particular, the map $z \mapsto \bar{z}$ is continuous. </p> <p>Then note that $\lim_{z \to \bar{z_0}} { f(\bar{z})-f(z_0) \over \bar{z} - z_0 } = f'(z_0)$, since $f$ is differentiable at $z_0$ and $\bar{z} \to z_0$.</p> <p>Finally, ${ g(z) -g(\bar{z_0}) \over z - \bar{z_0}} = \overline{\left( { f(\bar{z})-f(z_0) \over \bar{z} - z_0 } \right) }$, and so we have $g'(\bar{z_0}) = \lim_{z \to \bar{z_0}} { g(z) -g(\bar{z_0}) \over z - \bar{z_0}} = \overline{ \left( \lim_{z \to \bar{z_0}}{ f(\bar{z})-f(z_0) \over \bar{z} - z_0 } \right) } = \overline{f'(z_0)}$.</p>
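<p>A numerical illustration of the final identity, with the hypothetical example $f(z)=(1+2i)z^2$ (my choice, not from the question): then $g(z)=\overline{f(\bar z)}=(1-2i)z^2$, and a difference quotient for $g$ at $\bar z_0$ matches $\overline{f'(z_0)}$:</p>

```python
# Example function f(z) = (1+2i) z^2, so f'(z) = 2(1+2i) z.
def f(z):
    return (1 + 2j) * z * z

# g(z) = conjugate of f(conjugate(z))
def g(z):
    return f(z.conjugate()).conjugate()

z0 = 1 + 1j
h = 1e-6
fprime = 2 * (1 + 2j) * z0                                  # exact f'(z0)
w0 = z0.conjugate()
gprime = (g(w0 + h) - g(w0)) / h                            # difference quotient at conj(z0)
assert abs(gprime - fprime.conjugate()) < 1e-4
```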
3,291,549
<p>How does one prove that the exponential and logarithmic functions are inverse using the definitions:</p> <p><span class="math-container">$$e^x= \sum_{i=0}^{\infty} \frac{x^i}{i!}$$</span> and <span class="math-container">$$\log(x)=\int_{1}^{x}\frac{1}{t}dt$$</span></p> <p>My naive approach (sort of ignoring issues of convergence) is to just apply the definitions straightforwardly, so in one direction I get:</p> <p><span class="math-container">\begin{align}\log(e^x)&amp;=\int_{1}^{e^x}\frac{1}{t}dt\\ &amp;=\int_{0}^{e^x-1}\frac{1}{1+t}dt\\ &amp;=\int_0^{e^x-1}\sum_{j=0}^\infty (-1)^jt^jdt \\ &amp;=\sum_{j=0}^\infty (-1)^j \int_0^{e^x-1} t^jdt\\ &amp;=\sum_{j=0}^\infty \frac{(-1)^j}{j+1}(e^x-1)^{j+1}\\ &amp;=\sum_{j=0}^\infty \frac{(-1)^j}{j+1} \sum_{k=0}^{j+1} \frac{(j+1)!}{k!(j+1-k)!}e^{x(j+1-k)}(-1)^k\\ &amp;=\sum_{j=0}^\infty \sum_{k=0}^{j+1} \frac{(-1)^{j+k}(j+1)!}{(j+1)k!(j+1-k)!} e^{x(j+1-k)}\\ &amp;=\sum_{j=0}^\infty \sum_{k=0}^{j+1} \frac{(-1)^{j+k}(j+1)!}{(j+1)k!(j+1-k)!} \sum_{\ell=0}^\infty \frac{(j+1-k)^\ell x^\ell}{\ell !}\\ &amp;=\sum_{j=0}^\infty \sum_{k=0}^{j+1} \sum_{\ell=0}^\infty \frac{(-1)^{j+k}(j+1)!(j+1-k)^\ell x^\ell}{(j+1)k!\ell!(j+1-k)!} \end{align}</span></p> <p>and I can't see at all that this is equal to <span class="math-container">$x$</span>. My guess is I'm going about this all wrong. </p>
Julien D
688,390
<p><span class="math-container">$\log(e^x)=\int_1^{e^x}\frac{1}{t}dt=\int_0^x\frac{1}{e^u}e^u du=x$</span> (substituting <span class="math-container">$t=e^u$</span>), and since it is easy to prove that <span class="math-container">$e^x$</span> is bijective, <span class="math-container">$\log$</span> is its inverse.</p>
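<p>The two definitions can also be checked against each other numerically (my own sketch): approximate $\log(y)=\int_1^y \frac{dt}{t}$ by the trapezoid rule and verify that $\log(e^x)\approx x$:</p>

```python
import math

# Approximate log(y) = int_1^y dt/t by the trapezoid rule (for y > 1)
# and check that it inverts the exponential.
def log_by_quadrature(y, steps=200_000):
    h = (y - 1.0) / steps
    s = 0.5 * (1.0 + 1.0 / y) + sum(1.0 / (1.0 + k * h) for k in range(1, steps))
    return s * h

for x in (0.5, 1.0, 2.0):
    assert abs(log_by_quadrature(math.exp(x)) - x) < 1e-6
```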
1,182,953
<p>Does anyone know the provenance of or the answer to the following integral</p> <p>$$\int_0^\infty\ \frac{\ln|\cos(x)|}{x^2} dx $$</p> <p>Thanks.</p>
Jack D'Aurizio
44,121
<p>Lucian's answer is just fine (as always), but from $$ \sum_{n\in\mathbb{Z}}\frac{1}{(x+n\pi)^2}=\frac{1}{\sin^2 x}\tag{1}$$ for any $x\in(-\pi,\pi)$ it also follows that: $$ I = \frac{1}{2}\int_{0}^{+\infty}\frac{\log\cos^2 x}{x^2}\,dx = \frac{1}{2}\int_{-\pi/2}^{\pi/2}\frac{\log\cos x}{\sin^2 x}\,dx=-\frac{1}{2}\int_{0}^{+\infty}\frac{\log(1+t^2)}{t^2}\,dt$$ by replacing $x$ with $\arctan t$ in the last step. Integration by parts now leads to: $$ I = -\int_{0}^{+\infty}\frac{dt}{1+t^2} = \color{red}{-\frac{\pi}{2}}.\tag{2}$$ Someone may ask now: <em>How to prove $(1)$?</em> </p> <p>Well, for such a purpose, start from the Weierstrass product for the sine function: $$\frac{\sin z}{z}=\prod_{n\geq 1}\left(1-\frac{z^2}{\pi^2 n^2}\right)$$ then consider the logarithm of both sides and differentiate it twice with respect to $z$.</p>
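<p>Identity $(1)$ can be spot-checked numerically before proving it (a sanity check of my own, not part of the derivation); the tail of the series is $O(1/N)$, so a truncated sum already matches $1/\sin^2 x$ closely:</p>

```python
import math

# Numerical check of (1): sum over n in Z of 1/(x + n*pi)^2 = 1/sin(x)^2.
def partial_sum(x, N=200_000):
    return sum(1.0 / (x + n * math.pi) ** 2 for n in range(-N, N + 1))

for x in (0.3, 1.0, 1.4):
    # the neglected tail beyond |n| = N is about 2/(pi^2 N), hence the tolerance
    assert abs(partial_sum(x) - 1.0 / math.sin(x) ** 2) < 1e-4
```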
201,458
<p>Consider $W\subseteq V$, a subspace over a field $\mathbb{F}$ and $T:V\rightarrow V$ a linear transformation with the stipulation that $T(W)\subseteq W$. Then we have the induced linear transformation $\overline{T}:V/W \rightarrow V/W$ such that $\overline{T}=T(v)+W$.</p> <p>I'm supposed to show that this induced transformation is well-defined, and that given $V$ finite and $T$ an isomorphism that $\overline{T}$ is an isomorphism. I'm having a little trouble with this part. Namely, I want to show that the $ker(\overline{T})=W$, and I'm to the point where I realize that this means $T(v)\in W$. How do I know there's not some random $v\in V\setminus W $ such that $ T(v)\in W$.</p> <p>In particular, is all this true if $V$ is not finite dimensional? I can't immediately think of a counterexample...</p>
Michael Joyce
17,673
<p>Hint: Given a linear isomorphism $T : V \rightarrow V'$ of finite-dimensional vector spaces, can you prove that the restricted linear transformation $T|_W : W \rightarrow T(W)$ is an isomorphism?</p> <p>If $V' = V$ and $T(W) \subseteq W$ as in your problem, can you prove that $T(W) = W$? In that case, you'll have $T^{-1}(W) = W$, which may prove useful in showing that $\ker \overline{T} = \{0\}$.</p>
201,458
<p>Consider $W\subseteq V$, a subspace over a field $\mathbb{F}$ and $T:V\rightarrow V$ a linear transformation with the stipulation that $T(W)\subseteq W$. Then we have the induced linear transformation $\overline{T}:V/W \rightarrow V/W$ such that $\overline{T}=T(v)+W$.</p> <p>I'm supposed to show that this induced transformation is well-defined, and that given $V$ finite and $T$ an isomorphism that $\overline{T}$ is an isomorphism. I'm having a little trouble with this part. Namely, I want to show that the $ker(\overline{T})=W$, and I'm to the point where I realize that this means $T(v)\in W$. How do I know there's not some random $v\in V\setminus W $ such that $ T(v)\in W$.</p> <p>In particular, is all this true if $V$ is not finite dimensional? I can't immediately think of a counterexample...</p>
M Turgeon
19,379
<p>When you restrict to $W$, you still get a linear transformation -- this much is clear. Since $T$ is injective, its restriction to $W$ is still injective. Therefore, the dimension of the image $T(W)$ is equal to the dimension of $W$. Since $T(W)\subseteq W$, it readily follows that we have an equality. Since $T|_W$ is both injective and surjective, it is an isomorphism.</p> <p>A few things change when you go to the infinite-dimensional case. First of all, it may be the case that $T(W)\subseteq W$, without equality. For example, this could happen if $W$ is a closed subset of $V$, but $T(W)$ isn't. Also, if topology is involved, a linear isomorphism may not be the kind of isomorphism you are looking for -- you may want bicontinuity as well.</p>
544,464
<p>Show that any subset of $\{1, 2, 3, ..., 200\}$ having more than $100$ members must contain at least one pair of integers which add to $201$.</p> <p>I think it is doable using the Pigeonhole Principle.</p>
1233dfv
102,540
<p>Pair the integers as $\{1,200\},\{2,199\},\ldots,\{100,101\}$: each pair $\{k,201-k\}$ sums to $201$, and there are exactly $100$ such pairs. By the Pigeonhole Principle, any subset with more than $100$ members must contain both elements of at least one pair, and those two integers sum to $201$.</p> <p>The bound is sharp: the subset {$1,2,3,...,100$} has $100$ members and no two of its integers sum to $201$, but the very next integer you include completes one of the pairs above. Thus any subset of {$1,2,3,...,200$} that contains more than $100$ integers must contain at least one pair of integers whose sum is $201$. </p>
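<p>For intuition, the analogous statement for a smaller set can be verified exhaustively (my own sanity check): every $11$-element subset of $\{1,\ldots,20\}$ contains a pair summing to $21$, while $\{1,\ldots,10\}$ shows size $10$ is not enough:</p>

```python
from itertools import combinations

# Does the subset contain two distinct elements summing to `target`?
def has_pair(subset, target=21):
    s = set(subset)
    return any(target - a in s and target - a != a for a in s)

# Every 11-element subset of {1,...,20} contains a pair summing to 21...
assert all(has_pair(c) for c in combinations(range(1, 21), 11))
# ...but the 10-element subset {1,...,10} contains none.
assert not has_pair(range(1, 11))
```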
311,677
<p>The problem from the book: </p> <blockquote> <p>$\dfrac{\mathrm{d}y}{\mathrm{d}x} = 6 -y$ </p> </blockquote> <p>I understand the solution up to this part: </p> <p>$\ln \vert 6 - y \vert = x + C$ </p> <p>The solution in the book is $6 - Ce^{-x}$. </p> <p>My issue is that this solution from the book doesn't seem to resolve the issue of the absolute value in $\vert 6 - y\vert$. </p>
Iuli
33,954
<p>$$\dot{y}(x)=6-y(x)$$ $$\frac{\dot{y}(x)}{6-y(x)}=1$$ $$\int{\frac{\dot{y}(x)}{6-y(x)}}dx=\int{1}dx$$ $$-\ln{|6-y(x)|}=x+c$$ $$y(x)=6-e^{-x-c}.$$</p>
71,636
<p>For a self-map $\varphi:X\longrightarrow X$ of a space $X$, many important notions of entropy are defined through a limit of the form $$\lim_{n\rightarrow\infty}\frac{1}{n}\log a_n,$$ where in each case $a_n$ represents some appropriate quantity (see, for example, <a href="https://mathoverflow.net/questions/69218/if-you-were-to-axiomatize-the-notion-of-entropy/69231#69231">this answer</a> to one of my previous questions). Let $h(\varphi)$ denote a typical entropy that is defined by a limit as above and <strong>after</strong> Ian's example, assume that $h(\varphi)&gt;0$. Does anyone know if limits of the form \begin{equation} \lim_{n\rightarrow\infty}\ \ \frac{a_n}{\exp(n\cdot h(\varphi))} \end{equation} have been studied anywhere? I will appreciate any possible information about such limits. For example, is there a known case where the limit exists? If so, what is the limit called? etc.</p> <p><strong>EDIT</strong>: As pointed out later by Ian, even if we assume $h(\varphi)&gt;0$ this limit may not exist. I was curious to know if there were instances where the limit is known to exist. Or even better, can one characterize self-maps $\varphi$ for which the limit exists? </p>
Joe Silverman
11,926
<p>Let $\phi:\mathbb{P}^N\to\mathbb{P}^N$ be a rational map. The <em>algebraic entropy</em> of $\phi$ is the quantity $$h_{alg}(\phi) = \limsup_{n\to\infty} \frac{1}{n}\log \deg(\phi^n).$$</p> <p>Suppose now that $\phi$ is defined over $\overline{\mathbb{Q}}$. Since you're using $h$ for entropy, I will let $w:\mathbb{P}^N(\overline{\mathbb{Q}})\to[0,\infty)$ denote the (absolute logarithmic) Weil height. The <em>arithmetic entropy</em> of $(\phi,P)$ is the quantity $$h_{arith}(\phi,P)=\limsup_{n\to\infty} \frac{1}{n}\log w(\phi^n(P)).$$ In the arithmetic setting, the quantity you're asking about is more-or-less what's called the canonical height. In particular, if we assume that $\phi$ is a morphism, then $h_{alg}(\phi)=\log(d)$, and $$\lim_{n\to\infty} \frac{w(\phi^n(P))}{\exp(n\,h_{alg}(\phi))} = \lim_{n\to\infty} \frac{w(\phi^n(P))}{d^n}$$ exists and is called the <em>$\phi$-canonical height of $P$</em>. (It's usually denoted $\hat{h}_\phi(P)$.) For the case of morphisms, this is all well known, see for example [2]. Algebraic entropy for rational maps is a subject of current research, see for example [1], and arithmetic entropy is defined (and studied for monomial maps) in [3].</p> <ol> <li>Degree-growth of monomial maps. <em>Ergodic Theory Dynam. Systems</em>, 27(5):1375--1397, 2007.</li> <li>Canonical heights on varieties with morphisms. <em>Compositio Math.</em>, 89(2):163-205, 1993.</li> <li><a href="http://arxiv.org/abs/1111.5664" rel="nofollow">http://arxiv.org/abs/1111.5664</a></li> </ol>
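<p>An illustration of the convergence (my own example, not from the answer): for the morphism $\phi(z)=z^2+1$ on $\mathbb{P}^1$ over $\mathbb{Q}$ and the integer point $P=1$, the orbit stays integral, the height of an integer $m$ is $\log|m|$, and the estimates $\log H(\phi^n(P))/2^n$ settle down very quickly:</p>

```python
from math import log

# Orbit of P = 1 under phi(z) = z^2 + 1: 2, 5, 26, 677, 458330, ...
# Canonical-height estimates: log H(phi^n(P)) / 2^n.
def orbit_heights(n_iters):
    a, ests = 1, []
    for n in range(1, n_iters + 1):
        a = a * a + 1                 # phi^n(P), an integer here
        ests.append(log(a) / 2 ** n)  # H(m) = |m| for an integer m
    return ests

ests = orbit_heights(6)
assert abs(ests[-1] - ests[-2]) < 1e-6   # successive estimates already agree
```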
1,122,926
<p>Question: The product of monotone sequences is monotone, T or F?</p> <p>Uncompleted Solution: There are four cases, from taking each of the two monotone sequences to be increasing or decreasing.</p> <p>CASE I: Suppose we have two monotonically decreasing sequences, say ${\{a_n}\}$ and ${\{b_n}\}$. Then $a_{n+1}\leq a_n$ and $b_{n+1}\leq b_n$; if $b_n\geq 0$ and $b_{n+1}\geq 0$ then $a_{n+1}b_{n+1}\leq a_{n}b_{n+1}\leq a_{n}b_{n}$, but the right-hand inequality, i.e., $a_{n}b_{n+1}\leq a_{n}b_{n}$, requires $a_{n}\geq 0$: we have already supposed $b_{n+1}\leq b_n$, but $a_{n}\geq 0$ has not been supposed. So does it mean that two monotonically decreasing sequences with $a_{n}\leq 0$ would give a counterexample to "the product of monotone sequences is monotone"?</p> <p>Under which circumstances is the product of monotone sequences monotone, even if it may not be true in all cases? And is there a short (general) proof that avoids evaluating each of the sub-cases of the four cases?</p> <p>Thank you. </p>
Mnifldz
210,719
<p>In general the answer is no. Take $a_n = \left ( \frac{5}{4} \right )^n$ and $b_n = \frac{1}{n}$. We then have </p> <p>\begin{eqnarray*} a_1b_1 &amp; = &amp; \frac{5}{4} \;\; = \;\; 1.25 \\ a_2b_2 &amp; = &amp; \frac{25}{32} \;\; \approx\;\; 0.781 \\ a_3b_3 &amp; = &amp; \frac{125}{192} \;\; \approx \;\; 0.651 \\ a_{10}b_{10} &amp; \approx &amp; 0.93 \\ a_{15}b_{15} &amp; \approx &amp; 1.894. \end{eqnarray*}</p> <p>We therefore have that neither $a_nb_n \leq a_{n+1}b_{n+1}$ for all $n$, nor $a_n b_n \geq a_{n+1}b_{n+1}$ for all $n$.</p>
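<p>The behavior of this counterexample is easy to confirm by listing the products $a_nb_n=(5/4)^n/n$ (my own check of the values quoted above):</p>

```python
# a_n = (5/4)^n is increasing, b_n = 1/n is decreasing, yet the
# product first decreases and then increases.
prod = [(5 / 4) ** n / n for n in range(1, 16)]

assert prod[0] > prod[1] > prod[2]   # decreasing at the start (1.25, ~0.781, ~0.651)
assert prod[13] < prod[14]           # increasing again by n = 15
# Hence the product is neither nondecreasing nor nonincreasing:
assert not all(u <= v for u, v in zip(prod, prod[1:]))
assert not all(u >= v for u, v in zip(prod, prod[1:]))
```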
19,495
<p>I was told that one of the most efficient tools (e.g. in terms of computations relevant to physics, but also in terms of guessing heuristically mathematical facts) that physicists use is the so-called &quot;Feynman path integral&quot;, which, as far as I understand, means &quot;integrating&quot; a functional (action) on some infinite-dimensional space of configurations (fields) of a system.</p> <p>Unfortunately, it seems that, except for some few instances like Gaussian-type integrals, the quotation marks cannot be eliminated in the term &quot;integration&quot;, because a mathematically sound integration theory on infinite-dimensional spaces — I was told — has not been invented yet.</p> <p>I would like to know the state of the art of the attempts to make this &quot;path integral&quot; into a well-defined mathematical entity.</p> <p>Difficulties of analytical nature are certainly present, but I read somewhere that perhaps the true nature of the path integral would be hidden in some combinatorial or higher-categorical structures which are not yet understood...</p> <p>Edit: I should be more precise about the kind of answer that I expected to this question. I was not asking for references to books/articles in which the path integral is treated at length and in detail. I'd have just liked to have some &quot;fresh&quot;, (relatively) concise and not too specialized account of the situation; something like: &quot;Essentially the problems are due to this and this, and there have been approaches X, Y, Z that focus on A, B, C; some progress has been made in ... but problems remain in ...&quot;.</p>
Allen Knutson
391
<p>Here's a relatively recent paper by Jonathan Weitsman: <a href="http://arxiv.org/abs/math/0509104" rel="nofollow">http://arxiv.org/abs/math/0509104</a></p> <p>He has more recent papers, but I'm not entirely sure that they're following the program he meant to initiate with this paper.</p>
19,495
<p>I was told that one of the most efficient tools (e.g. in terms of computations relevant to physics, but also in terms of guessing heuristically mathematical facts) that physicists use is the so-called &quot;Feynman path integral&quot;, which, as far as I understand, means &quot;integrating&quot; a functional (action) on some infinite-dimensional space of configurations (fields) of a system.</p> <p>Unfortunately, it seems that, except for some few instances like Gaussian-type integrals, the quotation marks cannot be eliminated in the term &quot;integration&quot;, because a mathematically sound integration theory on infinite-dimensional spaces — I was told — has not been invented yet.</p> <p>I would like to know the state of the art of the attempts to make this &quot;path integral&quot; into a well-defined mathematical entity.</p> <p>Difficulties of analytical nature are certainly present, but I read somewhere that perhaps the true nature of the path integral would be hidden in some combinatorial or higher-categorical structures which are not yet understood...</p> <p>Edit: I should be more precise about the kind of answer that I expected to this question. I was not asking for references to books/articles in which the path integral is treated at length and in detail. I'd have just liked to have some &quot;fresh&quot;, (relatively) concise and not too specialized account of the situation; something like: &quot;Essentially the problems are due to this and this, and there have been approaches X, Y, Z that focus on A, B, C; some progress has been made in ... but problems remain in ...&quot;.</p>
Tim van Beek
1,478
<p>First, there are several rigorous definitions of integration in infinite dimensional spaces, like the Bochner integral in Banach spaces (see Wikipedia), or see the book by Parthasarathy: "Probability measures on metric spaces" (this includes the Gaussian probability measures used by constructive QFT already mentioned).</p> <p>These cannot be used to make the Feynman path integral into a rigorously defined mathematical entity with <em>finite values</em>, i.e. the problem is to get an integral that spits out finite numbers in physically interesting models.</p> <p>For starters, there cannot be a translationally invariant measure (on the Borel sigma algebra) other than the one that assigns infinite volume to every open set in an infinite dimensional metric space (hint: a ball of radius r contains infinitely many pairwise disjoint copies of the ball of radius r/2). So the path integral, as it is written by physicists, certainly has no interpretation via a translationally invariant measure, contrary to what the notation usually employed may suggest.</p> <p>While there currently is no mathematically rigorous definition of a Feynman path integral applicable to an interesting subset of physical models, here are some books that give some hints at the current state of affairs:</p> <p>Huang and Yan: "Introduction to Infinite Dimensional Stochastic Analysis" (this contains a description of the Feynman path integral from the viewpoint of "white noise analysis"),</p> <p>Sergio Albeverio, Raphael Hoegh-Krohn, Sonia Mazzucchi: "Mathematical theory of Feynman path integrals. An introduction",</p> <p>Pierre Cartier, Cecile DeWitt-Morette: "Functional integration: action and symmetries".</p> <p>BTW: This is in a certain sense a "one million dollar" question because one of the millennium problems of the Clay Mathematics Institute is a rigorous construction of Yang-Mills theories.</p>
1,874,634
<blockquote> <p>Corollary (of Schur's Lemma): Every irreducible complex representation of a finite abelian group G is one-dimensional.</p> </blockquote> <p>My question now is: why does the group have to be abelian? As far as I know, we want the representation $\rho(g)$ to be in $Hom_G(V,V)$, where $V$ is the representation space. Isn't this always the case (i.e. even if the group is not abelian), since $\rho$ is by definition a function $G \rightarrow GL(V)$?</p>
H. H. Rugh
355,946
<p>Any finite group has a faithful representation that is a direct sum of irreducible representations (for instance, the regular representation is faithful and decomposes this way). If all irreducible representations are one-dimensional, then in a suitable basis this faithful representation consists of diagonal matrices, which commute. Whence the group is abelian.</p>
1,741,765
<p>Problem description: Show that well-ordering is not a first-order notion. Suppose that $\Gamma$ axiomatizes the class of well-orderings. Add countably many constants $c_i$ and show that $\Gamma \cup \{c_{i+1} &lt; c_i \mid i \in \mathcal{N}\}$ has a model. </p> <p>So, I don’t quite get how to go about this. <a href="https://math.stackexchange.com/questions/937348/why-isnt-there-a-first-order-theory-of-well-order">This post</a> seems to address the same question, but I don’t understand the answer. I assume I’ll start something like this: </p> <p>Suppose that $\Gamma$ axiomatizes the class of well-orderings (do I need to define this class?). Let $\{c_i \mid i \in \mathcal{N}\}$ be a set of new constants. Consider the set $A = \Gamma \cup \{c_{i+1} &lt; c_i \mid i \in \mathcal{N}\}$. </p> <p>And here I can’t quite see how to proceed. How do I show that every finite subset $A_0 \subseteq A$ is satisfiable? </p> <p>Anyway, suppose I manage to do the above. Then I use compactness to show that $A$ is satisfiable, i.e. $A$ has a model. </p> <p>Then what? How do I finish it? Or if it is finished, how do I explain that it is? </p> <p>I should add: hints only, no direct answers — stop me if I’m asking questions that aren’t possible to answer without revealing the solution. Thanks! </p>
rschwieb
29,335
<p>Let's forget the hashes and just write with juxtaposition.</p> <p>From $aba=b$ we get $aa=abab=bb$. Call the value that everything squares to "$e$". It is an identity: $ae=aaa=ea=a$ for all $a$, using the hypothesis. So it is at least a monoid.</p> <p>Further, $aa=aea=e$, so every element is its own inverse.</p> <p>This also implies $abab=e$, then $aba=b$, then $ab=ba$ after multiplying on the right.</p> <p>So in fact you get a commutative group operation.</p> <p>Of course it need not be finite: just take an infinite product of groups of order 2 for a counterexample.</p>
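The counterexample family mentioned at the end can be illustrated concretely (my own addition, not part of the answer): in $(\mathbb Z/2)^n$ with bitwise XOR as the operation, the hypothesis $aba=b$ holds and every derived identity can be checked directly.

```python
import random

# In (Z/2)^8 with XOR as the group operation, a#b#a = b holds for all a, b,
# and every element squares to the identity 0 -- matching the answer above.
for _ in range(1000):
    a, b = random.getrandbits(8), random.getrandbits(8)
    assert a ^ b ^ a == b   # the hypothesis aba = b
    assert a ^ a == 0       # every element is its own inverse
    assert a ^ b == b ^ a   # the operation is commutative
print("XOR on 8-bit strings satisfies all the derived identities")
```

Taking longer and longer bit strings gives arbitrarily large (indeed infinite, in the limit) examples, as the answer's closing remark says.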
70,946
<p>I'm an REU student who has just recently been thrown into a dynamical system problem without basically any background in the subject. My project advisor has told me that I should represent regions of my dynamical system by letters and look at the sequence of letters formed by the trajectory of a point under the iteration of my map.</p> <p>He claims that it's a common result that if two points share the same sequence, then this sequence of letters is periodic. I've asked around among some of the other students, and they said that this is sometimes called symbolic dynamics, but none of them remembers this sort of result. I've also searched the internet, but it's possible that my google-fu is weak, since I didn't find any answers that way.</p> <p>To go one step further, there are obvious cases where it is false: take $S^1\times I$, and encode the regions as $A$ corresponds to $[0,\pi)\times I$ and $B$ corresponds to $[\pi,2\pi)\times I$ with map $f(x,y)=(x+1\mod{2\pi},y)$. Obviously any two points $(x,y)$ and $(x,z)$ with $y\neq z$ will have the same sequence, but since 1 is an irrational multiple of $2\pi$, the trajectory will never be periodic.</p> <p>I'm interested in the general theory and common techniques applied to the question:</p> <blockquote> <p>Represent a dynamical system by associating symbols with regions of the space. When is it true that if two distinct points' trajectories have the same sequence of symbols, then the sequence of symbols is periodic?</p> </blockquote> <p>Any answers, examples, or specific references would be greatly appreciated.</p>
Sam Nead
1,650
<p>I am not an expert in this topic. I would guess that "minimality" is required to make anything work. For example, take any system with any labelling (also called a partition, or Markov partition if it satisfies various properties). Take any point $x_0$. Let $x_i$ be the $i$-th iterate of $x_0$. Attach a blob $B_i$ to $x_i$, where all of the $B_i$ are identical. Make a new system where $B_i$ gets sent to $B_{i+1}$ and the points of $B_i$ get the label of $x_i$. </p> <p>Here is a reference for minimality. </p> <p><a href="http://www.scholarpedia.org/article/Minimal_dynamical_systems" rel="nofollow">http://www.scholarpedia.org/article/Minimal_dynamical_systems</a></p>
2,771,240
<p>Let $\mathbb F$ be a field and $\mathbb K $ be an extension field of $\mathbb F$ such that $\mathbb K$ is algebraically closed. </p> <p>Let $\mathbb L$ be the field of all elements of $\mathbb K$ which are algebraic over $\mathbb F$. Then $\mathbb L_{|\mathbb F}$ is an algebraic extension. </p> <p>My question is : Is $\mathbb L$ algebraically closed ?</p> <p>I am trying to prove the existence of algebraic closure, so please don't assume that every field has an algebraic closure. </p>
angryavian
43,949
<p>Let $m_n = \lfloor n/T \rfloor$.</p> <p>\begin{align} \frac{1}{n} \int_0^n f(x) \, dx &amp;= \frac{1}{n} \int_0^{m_n T} f(x) \, dx + \frac{1}{n} \int_{m_n T}^n f(x) \, dx \\ &amp;= \frac{m_n T}{n} \cdot \left(\frac{1}{T} \int_0^T f(x) \, dx\right) + \frac{1}{n} \int_{m_n T}^n f(x) \, dx. \end{align}</p> <p>The second term is bounded by $\frac{n-m_n T}{n} \sup_{x \in [0,T]} |f(x)| \le \frac{T}{n} \sup_{x \in [0,T]} |f(x)| \to 0$ as $n \to \infty$. The first term converges to the desired limit $\frac{1}{T} \int_0^T f(x) \, dx$, since $1 - \frac{T}{n} \le \frac{m_n T}{n} \le 1$.</p>
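A numerical sanity check of the argument (my own addition; the test function $f(x)=|\sin x|$ with period $T=\pi$ and mean $2/\pi$ is an arbitrary choice):

```python
import math

def avg(f, n, steps=200000):
    # left Riemann sum approximation of (1/n) * integral_0^n f(x) dx
    h = n / steps
    return sum(f(i * h) for i in range(steps)) * h / n

f = lambda x: abs(math.sin(x))   # periodic with T = pi
period_mean = 2 / math.pi        # (1/T) * integral_0^T |sin x| dx

approx = avg(f, 200.0)
print(approx, period_mean)       # the two values agree to a few decimals
```

Increasing `n` shrinks the gap at the rate $O(T/n)$ predicted by the bound on the second term.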
2,821,323
<blockquote> <p>How can one show that a polynomial is irreducible in $\mathbb{Q}[a,b,c]$? For example, I am trying to show that the polynomial $$p(a,b,c)=a(a+c)(a+b)+b(b+c)(b+a)+c(c+a)(c+b)-4(a+b)(a+c)(b+c)\tag{*}$$ is irreducible, where $a,b,c\in \mathbb{Q}$.</p> </blockquote> <p>The related problem is <a href="https://math.stackexchange.com/questions/2779545/ask-for-the-rational-roots-of-fracabc-fracbac-fraccab-4">Ask for the rational roots of $\frac{a}{b+c}+\frac{b}{a+c}+\frac{c}{a+b}=4.$</a>. Could I consider the three points where the curve $(*)$ intersects $L_{\infty}$? Here $L_{\infty}$ is the line at infinity in the projective plane $\mathbb{C}P^2$.</p>
Batominovski
72,152
<p>Suppose to the contrary that $p(a,b,c)$ is reducible over $\mathbb{Q}$. You can write $p(a,b,c)$ as $$a^3+b^3+c^3-3(b+c)a^2-3(c+a)b^2-3(a+b)c^2-5abc\,.$$ It suffices to regard $p(a,b,c)$ as a polynomial over $\mathbb{F}_3$ (why?). Over $\mathbb{F}_3$, $$p(a,b,c)=a^3+b^3+c^3+abc=a^3+(bc)a+(b+c)^3\,.$$ Since $p(a,b,c)$ is homogeneous of degree $3$ and reducible, it has a linear factor $a+ub+vc$ for some $u,v\in\mathbb{F}_3$. Clearly, we must have $ub+vc \mid (b+c)^3$, whence $u=v=1$ or $u=v=-1$. However, both choices are impossible via direct computation.</p>
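The final "direct computation" can be automated. Below is a brute-force check I'm adding (the dictionary representation of polynomials in $b,c$ over $\mathbb F_3$ is ad hoc): substituting $a=-(ub+vc)$ into $a^3+(bc)a+(b+c)^3$ must give a nonzero polynomial for every $u,v\in\mathbb F_3$, so no linear factor $a+ub+vc$ exists.

```python
# Check over F_3 that p = a^3 + (bc)a + (b+c)^3 has no linear factor a + u*b + v*c:
# substitute a = -(u*b + v*c) and verify the result is nonzero as a polynomial in b, c.
P = 3

def pmul(f, g):
    # multiply two polynomials in b, c stored as {(deg_b, deg_c): coeff mod 3}
    h = {}
    for (i, j), x in f.items():
        for (k, l), y in g.items():
            key = (i + k, j + l)
            h[key] = (h.get(key, 0) + x * y) % P
    return {k: v for k, v in h.items() if v}

def padd(f, g):
    h = dict(f)
    for k, v in g.items():
        h[k] = (h.get(k, 0) + v) % P
    return {k: v for k, v in h.items() if v}

b = {(1, 0): 1}
c = {(0, 1): 1}
bc = pmul(b, c)
b_plus_c = padd(b, c)
cube = lambda f: pmul(f, pmul(f, f))

for u in range(P):
    for v in range(P):
        a = {(1, 0): (-u) % P, (0, 1): (-v) % P}   # a = -(u*b + v*c)
        residue = padd(padd(cube(a), pmul(bc, a)), cube(b_plus_c))
        assert residue, f"linear factor found for u={u}, v={v}"
print("no linear factor: p is irreducible over F_3")
```

(A factor of the form $ub+vc$ with no $a$ term is ruled out because $p(a,0,0)=a^3\neq 0$, so checking the nine candidates $a+ub+vc$ suffices.)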
121,865
<p>Can someone please help me clarify the notations/definitions below:</p> <p>Does $\{0,1\}^n$ mean a $n$-length vector consisting of $0$s and/or $1$s?</p> <p>Does $[0,1]^n$ ($(0,1)^n$) mean a $n$-length vector consisting of any number between $0$ and $1$ inclusive (exclusive)?</p> <p>As a related question, is there a reference web page for all such definitions/notations? Or do we just need to take note of them individually as we learn. Thanks.</p>
Alex Becker
8,173
<p>The notation $\{0,1\}^n$ refers to the <em>space</em> of all $n$-length vectors consisting of $0$s and $1$s, while the notation $[0,1]^n$ ($(0,1)^n$) refers to the space of all $n$-length vectors consisting of real numbers between $0$ and $1$ inclusive (exclusive).</p> <p>Edit: I often find wikipedia's <a href="http://en.wikipedia.org/wiki/List_of_mathematical_symbols">list of mathematical symbols</a> useful for looking up the meaning of symbols, although I'm not sure it would help with this question.</p>
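The distinction is easy to see computationally (an illustration of my own): $\{0,1\}^n$ is a finite set you can enumerate, while $[0,1]^n$ is a continuum you can only sample from.

```python
from itertools import product
import random

# {0,1}^3: all length-3 vectors with entries 0 or 1 -- a finite set of 2^3 = 8 points
corners = list(product([0, 1], repeat=3))
print(len(corners))              # 8
print(corners[0], corners[-1])   # (0, 0, 0) (1, 1, 1)

# [0,1]^3 is uncountable, so we can only sample points from it
# (random.random() returns values in [0, 1)):
point = tuple(random.random() for _ in range(3))
assert all(0 <= t < 1 for t in point)
```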
3,208,412
<p>I have to prove the following:</p> <p><span class="math-container">$$ \sqrt{x_1} + \sqrt{x_2} +...+\sqrt{x_n} \ge \sqrt{x_1 + x_2 + ... + x_n}$$</span></p> <p>For every <span class="math-container">$n \ge 2$</span> and <span class="math-container">$x_1, x_2, ..., x_n \in \Bbb N$</span></p> <p>Here's my attempt:</p> <p>Consider <span class="math-container">$P(n): \sqrt{x_1} + \sqrt{x_2} +...+\sqrt{x_n} \ge \sqrt{x_1 + x_2 + ... + x_n}$</span></p> <p><span class="math-container">$$P(2): \sqrt{x_1} + \sqrt{x_2} \ge \sqrt{x_1 + x_2}$$</span> <span class="math-container">$$ x_1 + x_2 + 2\sqrt{x_1x_2} \ge x_1 + x_2$$</span> Which is true because <span class="math-container">$2\sqrt{x_1x_2} &gt; 0$</span>.</p> <p><span class="math-container">$$P(n + 1): \sqrt{x_1} + \sqrt{x_2} + ... + \sqrt{x_n} + \sqrt{x_{n+1}} \ge \sqrt{x_1 + x_2 +...+ x_n + x_{n+1}}$$</span> </p> <p>From the hypothesis we have:</p> <p><span class="math-container">$$\sqrt{x_1} + \sqrt{x_2} + ... + \sqrt{x_n} + \sqrt{x_{n+1}} \ge \sqrt{x_1 + x_2 + ... + x_n} + \sqrt{x_{n + 1}} \ge \sqrt{x_1 + x_2 +...+ x_n + x_{n+1}}$$</span> </p> <p>Squaring both sides of the right part:</p> <p><span class="math-container">$$ x_1 + x_2 + ... + x_n + x_{n + 1} + 2\sqrt{x_{n+1}(x_1 + x_2 +... + x_n)} \ge x_1 + x_2 +...+ x_n + x_{n+1} $$</span></p> <p>Which is true, hence <span class="math-container">$P(n + 1)$</span> is true as well.</p> <p>I'm not sure if I did it correctly?</p>
Martin Pekár
658,253
<p>I will just show it my way, since I find it simpler.</p> <p>First, we will start off with the basis step where <span class="math-container">$n = 1$</span>.</p> <h2>Basis step</h2> <p><span class="math-container">$\sqrt{x_1} \geq \sqrt{x_1}$</span>, which is of course true.</p> <h2>Inductive step</h2> <p>We assume the inequality is true for some <span class="math-container">$k$</span>, where <span class="math-container">$1 \leq k \leq n$</span>:</p> <p><span class="math-container">$\sqrt{x_1} + \sqrt{x_2} + ... + \sqrt{x_k} \geq \sqrt{x_1 + x_2 + ... + x_k}$</span></p> <p>We must then show that the inequality also holds for <span class="math-container">$k + 1$</span>:</p> <p><span class="math-container">$\sqrt{x_1} + \sqrt{x_2} + ... + \sqrt{x_k} + \sqrt{x_{k + 1}} \geq \sqrt{x_1 + x_2 + ... + x_k + x_{k + 1}}$</span></p> <p>We add <span class="math-container">$\sqrt{x_{k + 1}}$</span> to both sides of the induction hypothesis and then apply the two-term case <span class="math-container">$\sqrt{s} + \sqrt{t} \geq \sqrt{s + t}$</span> (which follows by squaring both sides) with <span class="math-container">$s = x_1 + ... + x_k$</span> and <span class="math-container">$t = x_{k + 1}$</span>:</p> <p><span class="math-container">$\sqrt{x_1} + \sqrt{x_2} + ... + \sqrt{x_k} + \sqrt{x_{k + 1}} \geq \sqrt{x_1 + x_2 + ... + x_k} + \sqrt{x_{k + 1}} \geq \sqrt{x_1 + x_2 + ... + x_k + x_{k + 1}}$</span></p> <p>The inequality is hereby proved.</p>
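Not a substitute for the induction, but a quick random sanity check of the inequality (my addition):

```python
import math, random

# spot-check sqrt(x_1) + ... + sqrt(x_n) >= sqrt(x_1 + ... + x_n)
# on random natural numbers for various lengths n
for _ in range(1000):
    xs = [random.randint(1, 100) for _ in range(random.randint(2, 10))]
    lhs = sum(math.sqrt(x) for x in xs)
    rhs = math.sqrt(sum(xs))
    assert lhs >= rhs
print("all 1000 random cases satisfy the inequality")
```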
1,392,661
<p>For a National Board Exam Review: </p> <blockquote> <p>Find the equation of the perpendicular bisector of the line joining (4,0) and (-6, -3)</p> </blockquote> <p>Answer is 20x + 6y + 29 = 0</p> <p>I dont know where I went wrong. This is supposed to be very easy:</p> <p>Find slope between two points:</p> <p>$${ m=\frac{y^2 - y^1}{ x^2 - x^1 } = \frac{-3-0}{-6-4} = \frac{3}{10}}$$</p> <p>Obtain Negative Reciprocal:</p> <p>$${ m'=\frac{-10}{3}}$$</p> <p>Get Midpoint fox X</p> <p>$${ \frac{-6-4}{2} = -5 }$$</p> <p>Get Midpoint for Y</p> <p>$${ \frac{-0--3}{2} = \frac{3}{2} }$$</p> <p>Make Point Slope Form: </p> <p>$${ y = m'x +b = \frac{-10}{3}x + b}$$</p> <p>Plugin Midpoints in Point Slope Form</p> <p>$${ \frac{3}{2} = \frac{-10}{3}(-5) + b}$$</p> <p>Evaluate b</p> <p>$${ b = \frac{109}{6}}$$</p> <p>Get Equation and Simplify</p> <p>$${ y = \frac{-10}{3}x + \frac{109}{6}}$$ $${ 6y + 20x - 109 = 0 }$$</p> <p>Is the problem set wrong? What am I doing wrong?</p>
Harish Chandra Rajpoot
210,295
<p>Notice, the midpoint of the line joining $(4, 0)$ &amp; $(-6, -3)$ is given as $$\left(\frac{4+(-6)}{2}, \frac{0+(-3)}{2}\right)\equiv \left(-1, -\frac{3}{2}\right)$$ The slope of the perpendicular bisector $$=\frac{-1}{\text{slope of line joining}\ (4, 0)\ \text{&amp;}\ (-6, -3)}$$ $$=\frac{-1}{\frac{-3-0}{-6-4}}=-\frac{10}{3}$$</p> <p>Hence, the equation of the perpendicular bisector: $$y-\left(-\frac{3}{2}\right)=-\frac{10}{3}(x-(-1))$$ $$6y+9=-20x-20$$</p> <p>$$\bbox[5px, border:2px solid #C0A000]{\color{red}{\text{equation of the perpendicular bisector:}\ 20x+6y+29=0}}$$</p>
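The arithmetic above can be verified with exact rational arithmetic (a check I'm adding, using Python's `fractions`):

```python
from fractions import Fraction as F

p, q = (F(4), F(0)), (F(-6), F(-3))
mid = ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)   # (-1, -3/2)
slope = (q[1] - p[1]) / (q[0] - p[0])          # 3/10
perp = -1 / slope                              # -10/3

# the bisector passes through mid with slope perp; check 20x + 6y + 29 = 0
assert mid == (F(-1), F(-3, 2))
assert 20 * mid[0] + 6 * mid[1] + 29 == 0      # midpoint lies on the line
assert F(-20, 6) == perp                       # same slope: -20/6 = -10/3
print("20x + 6y + 29 = 0 confirmed")
```

(The asker's mistake was the midpoint: the $x$-coordinates must be added, not subtracted, before halving.)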
2,961,686
<p>Consider a matrix <span class="math-container">$A$</span> which we subject to a small perturbation <span class="math-container">$\partial A$</span>. If <span class="math-container">$\partial A$</span> is small, then we have <span class="math-container">$(A + \partial A)^{-1} \approx A^{-1} - A^{-1} \partial A A^{-1}$</span></p> <p>I came across this approximation in some notes and I am trying to understand where it comes from. <a href="https://math.stackexchange.com/a/1063542/278734">This answer</a> seems related, but I am having trouble translating the results from the cited paper into the provided equation. </p>
user1551
1,551
<p>The usual argument is that, if you perturb <span class="math-container">$A$</span> by a small <span class="math-container">$X$</span> and get <span class="math-container">$(A+X)^{-1}=A^{-1}+Y+O(\|X\|^2)$</span>, where <span class="math-container">$Y$</span> is the first-order (i.e. linear) change in <span class="math-container">$A^{-1}$</span>, then by comparing the first-order terms on both sides of <span class="math-container">$\left(A^{-1}+Y+O\left(\|X\|^2\right)\right)(A+X)=I$</span>, you get <span class="math-container">$YA+A^{-1}X=0$</span>. Hence <span class="math-container">$Y=-A^{-1}XA^{-1}$</span> and <span class="math-container">$$(A+X)^{-1}=A^{-1}+Y+O\left(\|X\|^2\right)\approx A^{-1}+Y=A^{-1}-A^{-1}XA^{-1}. $$</span></p> <p><strong>Edit.</strong> The above is a rigorous argument provided that <span class="math-container">$A\mapsto A^{-1}$</span> is differentiable in the first place, but this is indeed the case because <span class="math-container">$A^{-1}=\frac1{\det(A)}\operatorname{adj}(A)$</span> is a rational function in the entries of <span class="math-container">$A$</span>.</p>
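A quick numerical illustration of the quadratic error (my own example; the particular matrices are arbitrary and `numpy` is assumed available):

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 5.0]])
X = 1e-4 * np.array([[1.0, -2.0, 0.5],
                     [0.3, 0.7, -1.1],
                     [-0.4, 0.2, 0.9]])

Ainv = np.linalg.inv(A)
exact = np.linalg.inv(A + X)
first_order = Ainv - Ainv @ X @ Ainv

# the first-order formula is accurate to O(||X||^2)
err = np.linalg.norm(exact - first_order)
print(err, np.linalg.norm(X) ** 2)   # err is on the order of ||X||^2
```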
1,356,900
<p>For section 1 on Fields, there is a question 2c:</p> <p>2.</p> <p>a) Is the set of all positive integers a field?</p> <p>b) What about the set of all integers?</p> <p>c) Can the answers to both these question be changed by re-defining addition or multiplication (or both)?</p> <p>My answer initially to 2c was that for a), as long as there no longer was the addition axiom dealing with (-a) + (a) = 0 and the multiplicative axiom dealing with reciprocals, it would be a field. For b) it would be a field if the reciprocal axiom did not exist. </p> <p>However, I looked at a suggested answer here (<a href="https://drexel28.wordpress.com/2010/09/21/halmos-chaper-one-section-1-fields/" rel="nofollow">https://drexel28.wordpress.com/2010/09/21/halmos-chaper-one-section-1-fields/</a>) and completely did not understand the lemma:</p> <p>I don't understand why the things are in parentheses, or what the things represent. I don't understand what card F = n means, and why in the sentence, plus and multiplication signs are in circles. I don't understand what is meant by F x F --> F. I don't understand cardinalities, or what legitimate binary operations are. I don't understand the use of inverses either. </p> <p>In the section on Vector Space Examples, Halmos writes that, "Let P be set of all polynomials with complex coefficients, in a variable t. To make P into a complex vector space, we interpret vector addition and scalar multiplication as the ordinary addition of two polynomials and the multiplication of a polynomial by a complex number; the origin in P is the polynomial identically zero.</p> <ol> <li>What does it mean for something to be identically zero? </li> </ol> <p>What else should I read along with Halmos to get a good understanding of Linear Algebra and Abstract Algebra? I'm also reading Spence Freidberg Insel, Artin, Fraleigh.</p>
jeo15
252,647
<p>I just want to answer one of your questions. If we have a map, say g, such that $g:\mathbb{F}\times \mathbb{F}\rightarrow \mathbb{F}$ this means that g is a function with domain in $\mathbb{F} \times \mathbb{F}$ with its codomain in $\mathbb{F}$. Here, $\mathbb{F}$ means the field.</p> <p>In other words, if $\mathbb{F} = \mathbb{R}$, a function that maps $\mathbb{R} \times \mathbb{R} \rightarrow \mathbb{R}$ is of the form:$$z=f(x,y)$$ where a point $(x,y) \in \mathbb{R} \times \mathbb{R}$ and the image point $z \in \mathbb{R}$.</p>
661,026
<p>Prove or disprove: $$\sum_{k=0}^{n}\binom{n}{k}^3\approx\dfrac{2}{\pi\sqrt{3}n}\cdot 8^n,\quad n\to\infty.$$</p> <p>This problem came up when finding the limit $$\lim_{n\to\infty}\dfrac{\displaystyle\sum_{k=0}^{n}\binom{n}{k}^3}{\displaystyle\sum_{k=0}^{n+1}\binom{n+1}{k}^3}=\dfrac{1}{8}.$$</p> <p>So far I have not been able to prove the asymptotic $$\sum_{k=0}^{n}\binom{n}{k}^3\approx\dfrac{2}{\pi\sqrt{3}n}\cdot 8^n,\quad n\to\infty.$$ Thank you for your help.</p>
André Nicolas
6,312
<p>We can do it by looking at the expression for different ranges of $x$. It is clearly positive if $x\le 0$. </p> <p>If $0 &lt; x \le 4$, then $x^3+x\lt 100$, so our expression is positive even without the aid of $\frac{1}{2}x^4$. If $x\gt 4$, then $\frac{1}{2}x^4\gt 2x^3$, so $\frac{1}{2}x^4-x^3-x\gt 0$.</p>
3,043,780
<p><a href="https://i.stack.imgur.com/h1M7D.png" rel="nofollow noreferrer">the image shows right-angled triangles in a semi-circle</a></p> <p>In Definite Integration, we know that an area can be found by adding up the areas of each of the small divided parts.</p> <p>So, based on Definite Integration, we may say the area of the circle is equal to <span class="math-container">$\sum^{n}_{k=1}(h_k)$</span></p> <p>And we also know that <span class="math-container">$A_k=2r(h_k)(1/2)=rh_k$</span>, where <span class="math-container">$A_k$</span> refers to the right-angled triangle's area.</p> <p>so that, <span class="math-container">$A_k(1/r)=h_k$</span></p> <p>As a result this equation comes out with a fixed position of the diameter:</p> <p><span class="math-container">$\sum^{n}_{k=1}(h_k)=\pi r^2$</span></p> <p><span class="math-container">$\sum^{n}_{k=1}(A_k)(1/r)=\pi r^2$</span></p> <p><span class="math-container">$\sum^{n}_{k=1}(A_k)=\pi r^3$</span></p> <p>I wish to know whether I am correct or not, thanks.</p>
Shubham Johri
551,962
<p>If the point is in the <span class="math-container">$3^{rd}$</span> quadrant, the angle it makes with the <span class="math-container">$x$</span> axis in the anti-clockwise direction is <span class="math-container">$180+\arctan(\Big|\frac yx\Big|)$</span></p>
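A quick check of the formula with `math.atan2` (my addition; the formula above is in degrees, and the point $(-3,-4)$ is an arbitrary third-quadrant example):

```python
import math

x, y = -3.0, -4.0                      # a third-quadrant point
formula = 180 + math.degrees(math.atan(abs(y / x)))

# atan2 returns the angle in (-180, 180]; shift into [0, 360) to compare
reference = math.degrees(math.atan2(y, x)) % 360
print(formula, reference)              # both are about 233.13 degrees
assert abs(formula - reference) < 1e-9
```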
787,358
<p>Consider the equation: $ay'' + by' + cy = 0$.</p> <p>If the roots of the corresponding characteristic equation are real, show that a solution to the differential equation either is everywhere zero or else can take on the value zero at most once.</p> <p>Hmm, I have no idea how to do this one. I think it might have something to do with repeated roots, but I'm not sure. Any tips/solutions for this one? Thanks! :D</p>
user314551
314,551
<ol> <li>Consider the linear 2nd order ODE $ay'' + by' + cy = 0$. (1)</li> </ol> <p>Since the equation is linear, with constant coefficients, and has no terms not involving $y$ (i.e. the right hand side is $0$), a logical solution to try is</p> <p>$y(t) = ce^{rt}$, (2)</p> <p>where $c$ and $r$ are unknown constants.</p> <p>(a) Why is the proposed solution (2) a logical choice for a linear equation? Hint: Think back to other ODEs we have solved. There is a class of first order ODEs that is analogous to (1).</p> <p>(b) Plug the proposed solution into (1) and solve for $r$.</p> <p>(c) What three cases arise?</p>
1,115,222
<blockquote> <p>Suppose <span class="math-container">$f$</span> is a continuous, strictly increasing function defined on a closed interval <span class="math-container">$[a,b]$</span> such that <span class="math-container">$f^{-1}$</span> is the inverse function of <span class="math-container">$f$</span>. Prove that, <span class="math-container">$$\int_{a}^bf(x)dx+\int_{f(a)}^{f(b)}f^{-1}(x)dx=bf(b)-af(a)$$</span></p> </blockquote> <p>A high school student or a Calculus first year student will simply, possibly, apply change of variable technique, then integration by parts and he/she will arrive at the answer without giving much thought into the process. A smarter student would probably compare the integrals with areas and conclude that the equality is immediate.</p> <p>However, I am an undergrad student of Analysis and I would want to solve the problem "carefully". That is, I wouldn't want to forget my definitions, and the conditions of each technique. For example, while applying change of variables technique, I cannot apply it blindly; I must be prudent enough to realize that the criterion to apply it includes continuous differentiability of a function. Simply with <span class="math-container">$f$</span> continuous, I cannot apply change of variables technique.</p> <p>Is there any method to solve this problem rigorously? One may apply the techniques of integration (by parts, change of variables, etc.) only after proper justification.</p> <p>The reason I am not adding any work of mine is simply that I could not proceed even one line since I am not given <span class="math-container">$f$</span> is differentiable. However, this seems to hold for non-differentiable functions also.</p> <p>I would really want some help. Pictorial proofs and/or area arguments are invalid.</p>
Julián Aguirre
4,791
<p>Let $\{x_0,x_1,\dots,x_N\}$ be a partition of $[a,b]$. Then $\{f(x_0),f(x_1),\dots,f(x_N)\}$ is a partition of $[f(a),f(b)]$. The following equality holds: $$ \sum_{i=0}^{N-1}f(x_i)(x_{i+1}-x_i)+\sum_{i=0}^{N-1}x_i(f(x_{i+1})-f(x_i))+\sum_{i=0}^{N-1}(x_{i+1}-x_i)(f(x_{i+1})-f(x_i))=b\,f(b)-a\,f(a). $$ The first two sums are Riemann sums for $\int_a^bf$ and $\int_{f(a)}^{f(b)}f^{-1}$ respectively. The third sum converges to $0$ as the size of the partition goes to $0$.</p>
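Here is a numerical illustration of the identity (my own addition) with $f(x)=x^2$ on $[1,2]$, so $f^{-1}(y)=\sqrt y$ and $b\,f(b)-a\,f(a)=2\cdot 4-1\cdot 1=7$:

```python
import math

def riemann(f, a, b, n=100000):
    # midpoint-rule approximation of integral_a^b f(x) dx
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

f = lambda x: x * x          # continuous and strictly increasing on [1, 2]
finv = math.sqrt             # its inverse on [f(1), f(2)] = [1, 4]

a, b = 1.0, 2.0
lhs = riemann(f, a, b) + riemann(finv, f(a), f(b))
rhs = b * f(b) - a * f(a)    # 2*4 - 1*1 = 7
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-6
```

(Indeed $\int_1^2 x^2\,dx = 7/3$ and $\int_1^4 \sqrt y\,dy = 14/3$, which sum to $7$.)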
2,324,850
<p>How to find the shortest distance from line to parabola?</p> <p>parabola: $$2x^2-4xy+2y^2-x-y=0$$and the line is: $$9x-7y+16=0$$ Already tried use this formula for distance: $$\frac{|ax_{0}+by_{0}+c|}{\sqrt{a^2+b^2}}$$</p>
Michael Rozenberg
190,319
<p>We need to find the minimum of $$\frac{|9x-7y+16|}{\sqrt{9^2+7^2}},$$ where $x+y=2(x-y)^2$ or $$\min\frac{|(9x-7y)(x+y)+32(x-y)^2|}{2\sqrt{130}(x-y)^2}$$ or $$\min\frac{|41x^2-62xy+25y^2|}{2\sqrt{130}(x-y)^2}$$ or $$\min\frac{41x^2-62xy+25y^2}{2\sqrt{130}(x-y)^2},$$</p> <p>which is $\frac{8}{\sqrt{130}}$ because $$\frac{41x^2-62xy+25y^2}{2\sqrt{130}(x-y)^2}\geq\frac{8}{\sqrt{130}}$$ it's just $$(5x-3y)^2\geq0$$ and the equality occurs.</p> <p>Done! </p>
2,324,850
<p>How to find the shortest distance from line to parabola?</p> <p>parabola: $$2x^2-4xy+2y^2-x-y=0$$and the line is: $$9x-7y+16=0$$ Already tried use this formula for distance: $$\frac{|ax_{0}+by_{0}+c|}{\sqrt{a^2+b^2}}$$</p>
hamam_Abdallah
369,188
<p><strong>hint</strong></p> <p>The parametric equations are</p> <p>for parabola $$2 (x-y)^2=x+y $$ $$x-y=t $$ $$x+y=2t^2$$ thus</p> <blockquote> <p>$$x=t^2+t/2 \;,\;y=t^2-t/2$$</p> </blockquote> <p>the distance from a point of parabola to the line is </p> <p>$$D=\frac {|9 (t^2+t/2)-7 (t^2-t/2)+16|}{\sqrt{81+49}} $$</p> <p>$$=\frac {2t^2+8t+16}{\sqrt {130}} $$</p> <p>the minimum is attained for $t=-2$ and it is</p> <blockquote> <p>$$D_{min}=\frac{8}{\sqrt {130} }$$</p> </blockquote>
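A quick scan confirms the minimum (a check I'm adding):

```python
import math

# distance from the point of the parabola with parameter t to the line
D = lambda t: (2 * t * t + 8 * t + 16) / math.sqrt(130)

# scan a range of t; the minimum should occur at t = -2 with value 8/sqrt(130)
ts = [i / 1000 for i in range(-10000, 10001)]
t_best = min(ts, key=D)
print(t_best, D(t_best), 8 / math.sqrt(130))
assert abs(t_best + 2) < 1e-9
assert abs(D(-2) - 8 / math.sqrt(130)) < 1e-12
```

This matches the algebraic minimization, since $2t^2+8t+16 = 2(t+2)^2 + 8 \geq 8$.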
217,291
<p>I am trying to recreate the following image in LaTeX (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions</p> <p><img src="https://i.stack.imgur.com/jYGNP.png" alt="wavepacket"></p> <p>So far I am sure that the gray line is $\sin x$, and that the red line is some version of $\sin x / x$, whereas the green line is some linear combination of sine and cosine functions.</p> <p>Anyone know a good way to find these functions? </p>
Yong Hao Ng
31,788
<p>Disclaimer: This is a long post and I am not originally a Mathematician. I just wanted to offer my viewpoint as someone who is not just in the process of learning it, but who also has prior experience in its application. </p> <p><strong>What you might be able to appreciate from an introductory course</strong><br> Personally, I think the <a href="http://en.wikipedia.org/wiki/Rubik%27s_Cube">Rubik's Cube</a> is a good object to make use of your group theory knowledge.<br> It can be fully described as <a href="http://en.wikipedia.org/wiki/Rubik%27s_Cube_group#Group_structure">products of permutation groups</a> and it is fun! </p> <p>In fact, it probably fits what you mentioned in the first part: </p> <blockquote> <p>of interest to someone who has not necessarily encountered abstract algebra yet </p> </blockquote> <p>So if you like puzzles, consider it as a learning tool!<br> Try to solve it using the theory you learned in class.<br> If the typical cube is too mundane for you, <a href="http://www.mefferts.com/products/index.php?category_new=13&amp;lang_new=en">there are also exotic choices</a>. :)<br> There are even Algebra courses conducted using it. (I cannot remember where) </p> <p>And while staying in just group theory, a lot of applications will come from combinatorics.<br> There is a good reason for this: <a href="http://en.wikipedia.org/wiki/Cayley%27s_theorem">every group is isomorphic to a subgroup of a permutation group</a>.<br> And permutations happen very frequently in combinatorics topics.<br> For example, it can be shown that enumerating all permutations of $\lbrace 1,\dots,n\rbrace$ <a href="http://www.wiki.canisiusmath.net/index.php?title=Well-Known_Generators_of_the_Symmetric_Group">can be done using only 2 generators</a>. </p> <p>Going 1 step further in this direction, some areas of Graph theory are closely related to group theory too. 
For example, the <a href="http://en.wikipedia.org/wiki/Graph_isomorphism_problem">Graph Isomorphism problem</a> is known to be a <a href="http://en.wikipedia.org/wiki/Hidden_subgroup_problem">non-abelian hidden subgroup problem</a>. Very difficult in general cases!<br> But if the graph involved is a <a href="http://en.wikipedia.org/wiki/Permutation_graph">permutation graph</a>, then it can be solved efficiently.</p> <p>If we consider an abelian hidden subgroup problem instead, then this includes 2 very important classes of problems: Integer factorization and Discrete Logarithm.<br> Although to be precise, study of these problems goes beyond group theory. </p> <p>While talking about number theory, there are some nice algorithms that you can understand with just group theory.<br> Many important results are derived from just Fermat's little theorem:<br> <a href="http://en.wikipedia.org/wiki/Miller%E2%80%93Rabin_primality_test">Miller-Rabin primality test</a>: A probabilistic test for primes (I think still the most efficient)<br> <a href="http://en.wikipedia.org/wiki/Pollard%27s_p_%E2%88%92_1_algorithm">Pollard's p-1 algorithm</a>: An algorithm to find small prime factors </p> <p>If your introductory course covers rings, more specifically polynomials in rings, then factorization in polynomial rings is a great area to look at.<br> Note: The question of whether polynomials have solutions is equivalent to their factorization into linear factors. There are some nice results stating when this is possible.<br> An example: If GCD$(f(x),x^p-x)\neq 1$, then it has solutions in $\mathbb{F}_p$. </p> <p>I don't think I have enough experience to explain it better... this is the best I can offer at the moment. Hope it will be useful! </p> <p>P.S. Algebra is also an important gateway to many other areas. Algebraic Geometry/Algebraic Topology/Algebraic Combinatorics etc<br> Perhaps if you also look a bit in that direction you may find more things that interest you.</p>
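The claim that two generators suffice to produce all permutations can be verified directly for small $n$ with a breadth-first closure (a sketch I'm adding; the transposition and $n$-cycle below are the standard generating pair):

```python
from itertools import permutations

def generated(gens, n):
    # closure of the generators under composition (BFS over the finite group)
    identity = tuple(range(n))
    seen = {identity}
    frontier = [identity]
    while frontier:
        nxt = []
        for p in frontier:
            for g in gens:
                q = tuple(p[g[i]] for i in range(n))   # q = p composed with g
                if q not in seen:
                    seen.add(q)
                    nxt.append(q)
        frontier = nxt
    return seen

n = 5
swap = (1, 0, 2, 3, 4)                 # the transposition (1 2)
cycle = tuple(range(1, n)) + (0,)      # the n-cycle (1 2 ... n)
G = generated([swap, cycle], n)
print(len(G))                          # 120 = 5!, so the two generators give all of S_5
assert G == set(permutations(range(n)))
```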
217,291
<p>I am trying to recreate the following image in LaTeX (pgfplots), but in order to do so I need to figure out the mathematical expressions for the functions</p> <p><img src="https://i.stack.imgur.com/jYGNP.png" alt="wavepacket"></p> <p>So far I am sure that the gray line is $\sin x$, and that the red line is some version of $\sin x / x$, whereas the green line is some linear combination of sine and cosine functions.</p> <p>Anyone know a good way to find these functions? </p>
kjetil b halvorsen
32,967
<p>I just found the following book: <a href="http://rads.stackoverflow.com/amzn/click/0521457181" rel="nofollow">http://www.amazon.com/Fourier-Analysis-Applications-Mathematical-Society/dp/0521457181/ref=sr_1_1?s=books&amp;ie=UTF8&amp;qid=1350785755&amp;sr=1-1&amp;keywords=audrey+terras</a></p> <p>which is written just for people like you who want to find out about various applications of algebra.</p> <p>For example, this book has some very interesting applications of groups to error-correcting codes, but also to number theory, and to graph theory ...</p>
453,295
<p>I want to show that the non-zero elements of $\mathbb Z_p$ ($p$ prime) form a group of order $p-1$ under multiplication, i.e., the elements of this group are $\{\overline1,\ldots,\overline{p-1}\}$. I'm trying to prove that every element is invertible in the following manner:</p> <blockquote> <p><strong>Proof (a)</strong></p> <p>By Bézout's lemma, given $\bar a\in\mathbb Z_p$, there are $x,y \in \mathbb Z$ such that </p> <p>$ax+py=1\implies\overline {ax+py}=\overline 1\implies \overline a \overline x+\overline p\overline y=\overline1\implies \overline a \overline x+\overline 0=\overline1\implies \overline a\overline x=\overline1$</p> </blockquote> <p>There are two problems with this proof: first, $\overline 0$ is not defined, because it isn't in $\{\overline 1,\ldots, \overline {p-1}\}$; secondly, the sum is not defined, because only multiplication is available in this case.</p> <blockquote> <p><strong>Proof (b)</strong></p> <p>By Fermat's little theorem, for each $\overline a \in \mathbb Z_p$, we have:</p> <p>$a^{p-1}\equiv 1$ (mod $p$), then $\overline {a^{p-2}}$ is an inverse to $\overline a$. </p> </blockquote> <p>My problem with this proof is why $\overline {a^{p-2}}\in \{\overline1,\ldots,\overline{p-1}\}$.</p> <p>I already knew these proofs, but with a little more experience I noticed these lapses of rigor.</p> <p>Thanks in advance.</p>
DonAntonio
31,254
<p>Don't talk about rings if you don't want to, but you still must talk about the <em>additive abelian group</em> $\,\Bbb Z_p\,$ and you can work with it all along, remarking</p> <p>$$\forall\,k\in\Bbb Z\;,\;\;kp=\overline 0=0\pmod p$$</p> <p>Then you can work comfortably with Bezout's lemma as you did, including $\,\overline 0\,$ and the sum, of course.</p> <p>Certainly the above is the same as talking about the ring of residues, which as far as I know is usually done even in the first undergraduate year as an example of a <strong>field</strong> which is not one of the ones beginning students are used to, so I see no reason why a beginner should avoid the additive operation and work only with the multiplicative one.</p> <p>About question (2):</p> <p>$$\forall\,\overline a\in \Bbb Z_p^*\;\wedge\;\forall\,k\in\Bbb Z\;,\;\;\overline a^k\in\Bbb Z_p^*$$</p> <p>and this much is true for <em>any</em> group (written multiplicatively), abelian or not, finite or not.</p>
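Both proofs, Bézout's lemma and Fermat's little theorem, translate directly into computations of inverses mod $p$; here is a small comparison (my own addition):

```python
# Two ways to invert a mod p (p prime), matching proofs (a) and (b)
def inv_bezout(a, p):
    # extended Euclid: find x with a*x + p*y = 1, so a*x = 1 (mod p)
    old_r, r = a, p
    old_x, x = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    assert old_r == 1          # gcd(a, p) = 1 since 0 < a < p and p is prime
    return old_x % p

def inv_fermat(a, p):
    return pow(a, p - 2, p)    # a^(p-2) is the inverse, by Fermat's little theorem

p = 13
for a in range(1, p):
    assert inv_bezout(a, p) == inv_fermat(a, p)
    assert (a * inv_fermat(a, p)) % p == 1
print("all inverses agree mod", p)
```

Note that both functions return a value in $\{1,\dots,p-1\}$, reflecting the point of the question.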
1,511,078
<p><strong>Show that the product of two upper (lower) triangular matrices is again upper (lower) triangular.</strong></p> <p>I have problems in formulating proofs - although I am not 100% sure if this text requires one, as it uses the verb "show" instead of "prove". However, I have found on the internet the proof below, but my problem is not just that I can't do one by myself, but also that it happens that I don't understand a proof which is already written - which makes me think that I should quit these studies.</p> <p><strong>My first question is: how do I choose the right strategy for proving something, in this case as in others?</strong> I guess it is also a matter of interest... interest in numbers and their properties. I've never had such an interest, frankly - although, perhaps, it is not nice to say this here. The reason for which I started studying math and some physics over a year and a half ago is that I dreamed about, and am still dreaming of, doing a kind of research in which I am interested and for which I need a scientific background.</p> <p>Sorry for the digression, but I am so upset and I need some general advice too. Coming back to the topic of the present proof, <strong>my second question is: could you explain to me in plain English the idea and logic behind the following proof, please? Is this proof general enough to cover also my case, and what changes do I need to add for it?</strong> - I ask this because the text to which I refer speaks about upper and lower triangular matrices, while the text of the proof I report here speaks about upper triangular matrices only. I also notice that the following proof considers only square matrices, but rectangular matrices can also be upper/lower triangular.</p> <p>The only thing I know about all this is what upper/lower triangular matrices are and how to perform multiplication between matrices. This is the proof found on the internet:</p> <p>"Suppose that U and V are two upper triangular <em>n × n</em> matrices. 
By the row-column rule for matrix multiplication we know that the <em>(i,j)-th</em> entry of the product UV is $u_{i1}v_{1j} + u_{i2}v_{2j} +\cdots+ u_{in}v_{nj}$. We need to show that if $i &gt; j$ then this expression evaluates to 0. In fact, we will show that every term $u_{ik}v_{kj}$ of this expression evaluates to 0. To prove this, we consider two cases: • If $i &gt; k$ then $u_{ik} = 0$ since U is upper triangular. Hence $u_{ik}v_{kj} = 0$. • If $k &gt; j$ then $v_{kj} = 0$ since V is upper triangular. Hence $u_{ik}v_{kj} = 0$. Since $i &gt; j$, for every $k$ we either have $i &gt; k$ or $k &gt; j$ (possibly both), so the two cases cover all possibilities for $k$."</p>
Hagen von Eitzen
39,174
<p>$n\times n$ matrices are (or at least readily can be viewed as) linear maps from $n$-dimensional space to itself.</p> <p>An upper triangular matrix is one where each vector from the standard basis is mapped to a linear combination of itself and the <em>preceding</em> base vectors alone. In other words: the subspace spanned by the first few basis vectors is mapped into itself (no matter how many "first few" I take).</p> <p>The product of matrices corresponds to the composition of the linear maps. And if the subspace spanned by the first few basis vectors is mapped into itself by each of the factors, this also holds for their composition / the matrix product. Therefore the product is upper triangular.</p>
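<p>The index argument quoted in the question is easy to check numerically; here is a minimal Python sketch of my own (plain nested lists, no matrix library):</p>

```python
# The (i, j) entry of U V is sum_k U[i][k] * V[k][j].  For i > j every term
# vanishes: either i > k (so U[i][k] == 0) or k > j (so V[k][j] == 0).

def matmul(U, V):
    n = len(U)
    return [[sum(U[i][k] * V[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def is_upper_triangular(M):
    # all entries strictly below the diagonal are zero
    return all(M[i][j] == 0 for i in range(len(M)) for j in range(i))

U = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
V = [[7, 8, 9],
     [0, 1, 2],
     [0, 0, 3]]

assert is_upper_triangular(U) and is_upper_triangular(V)
assert is_upper_triangular(matmul(U, V))
```

<p>The lower triangular case follows by the same check with the inequalities reversed (or by transposing).</p>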
292,831
<p>Usually the question whether the <a href="https://en.wikipedia.org/wiki/Diamond_principle" rel="noreferrer">diamond principle</a> $\diamondsuit(\kappa)$ holds for some large cardinal $\kappa$ only concerns large cardinal notions of very low consistency strength (among the weakly compacts), partly because it <em>does</em> hold for all <a href="http://cantorsattic.info/Ineffable#Subtle_cardinal" rel="noreferrer">subtle cardinals</a>, which are only barely stronger than the weakly compacts, and pretty much every large cardinal notion below a weakly compact has been shown to consistently <em>not</em> satisfy it (see <a href="https://mathoverflow.net/questions/137036/failure-of-diamond-at-large-cardinals">Failure of diamond at large cardinals</a> and <a href="https://arxiv.org/abs/1705.01611" rel="noreferrer">Ben Neria ('17)</a>).</p> <p>That subtle cardinals satisfy diamond of course means that almost all large cardinals <em>do</em> satisfy it as well, but there are some strange ones lying around, including <a href="https://en.wikipedia.org/wiki/Woodin_cardinal" rel="noreferrer">Woodin cardinals</a> and inaccessible <a href="http://cantorsattic.info/Jonsson" rel="noreferrer">Jónsson cardinals</a>. Is anything known about diamond holding for either of these two?</p>
Not Mike
8,843
<p>Assuming that by "inaccessible Jónsson" you meant a regular limit cardinal of uncountable cofinality which is Jónsson; then using the arguments of [1] (Theorem 15 p.115), we have</p> <blockquote> <p>if $\mathbb{P}$ is c.c.c. and $\kappa$ is Jónsson then for any $V$-generic $G\subset \mathbb{P}$, $V[G]\vDash$ "$\kappa$ is Jónsson".</p> </blockquote> <p>In particular, if $\kappa$ is Jónsson, and $G\subset \mathbb{P}=\mathsf{Fn}(\kappa^{+}, 2)$ is $V$-generic, then </p> <blockquote> <p>$V[G] \vDash $ "$\kappa &lt; 2^{\aleph_0}$ and $\kappa$ is Jónsson." </p> </blockquote> <p>hence $V[G] \vDash \neg \diamondsuit_\kappa$ and $\kappa$ is Jónsson. Moreover, If we started with $\kappa$ which was a regular limit cardinal then the same holds for $\kappa$ in $V[G]$.</p> <p>[1] <em>Devlin, Keith J.</em>, <a href="http://dx.doi.org/10.1016/0003-4843(73)90010-7" rel="nofollow noreferrer"><strong>Some weak versions of large cardinal axioms</strong></a>, Ann. Math. Logic 5, 291-325 (1973). <a href="https://zbmath.org/?q=an:0279.02051" rel="nofollow noreferrer">ZBL0279.02051</a>.</p>
4,259,561
<p>I am looking for the derivation of the closed form along any given diagonal <span class="math-container">$a$</span> of Pascal's triangle,<br /> <span class="math-container">$$\sum_{k=a}^n {k\choose a}\frac{1}{2^k}=?$$</span> Numbered observations follow. The limit proposed in the title is given by:</p> <p><strong>Observation 1</strong></p> <p><span class="math-container">$$\sum_{k=a}^\infty {k\choose a}\frac{1}{2^k}=2,$$</span> when I calculate the sums numerically using MS Excel for any <span class="math-container">$a$</span> within the domain (<span class="math-container">$0\le a \le100$</span>) the sum approaches 2.000000 in all cases within total steps <span class="math-container">$n\le285$</span>. The first series with <span class="math-container">$a=0$</span> is a familiar geometric series, and perhaps others look familiar to you as well:</p> <p><span class="math-container">$$\sum_{k=0}^\infty {k\choose 0}\frac{1}{2^k}=1+\frac{1}{2}+\frac{1}{4}+\frac{1}{8}+... =2,$$</span> <span class="math-container">$$\sum_{k=1}^\infty {k\choose 1}\frac{1}{2^k}=\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{1}{4}+\frac{5}{32}... =2,$$</span> <span class="math-container">$$\sum_{k=2}^\infty {k\choose 2}\frac{1}{2^k}=\frac{1}{4}+\frac{3}{8}+\frac{3}{8}+\frac{5}{16}+\frac{15}{64}... =2,$$</span> but it is both surprising and elegantly beautiful that these sums across all diagonals appear to approach the same value. 
Some additional observations from the numerically determined sums:</p> <p><strong>Observation 2</strong></p> <p>The maximum value of any term <span class="math-container">${k\choose a}\frac{1}{2^k}$</span> within a diagonal <span class="math-container">$a$</span> for the domain <span class="math-container">$(a&gt;0)$</span> is attained at <span class="math-container">$k=2a-1$</span> and repeated for the term immediately following (<span class="math-container">$k=2a$</span>).</p> <p><strong>Observation 3</strong> <span class="math-container">$$\sum_{k=a}^{2a} {k\choose a}\frac{1}{2^k}=1$$</span> <strong>Observation 4</strong> <span class="math-container">$$\sum_{k=a}^{n} {k\choose a}\frac{1}{2^k} + \sum_{k=n-a}^{n} {k\choose n-a}\frac{1}{2^k}=2$$</span> It's very likely that the general closed form has been derived before, but searching for the past several days has produced no results. It appears that setting up the appropriate generating function may play a role, but I am at a loss as to how to proceed. Looking forward to the responses.</p>
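<p>Observations 1 and 3 are easy to confirm in exact rational arithmetic (a Python sketch of my own; <code>partial_sum</code> is an illustrative name):</p>

```python
# Partial sums of sum_{k >= a} C(k, a) / 2^k: the sum over k = a..2a is
# exactly 1 (Observation 3), and the full series tends to 2 (Observation 1).
from math import comb
from fractions import Fraction

def partial_sum(a, n):
    return sum(Fraction(comb(k, a), 2**k) for k in range(a, n + 1))

for a in (0, 1, 2, 5, 10):
    assert partial_sum(a, 2 * a) == 1                   # Observation 3, exactly
    assert abs(float(partial_sum(a, 200)) - 2) < 1e-9   # Observation 1
```

<p>For what it is worth, one standard route to Observation 1 is the generating function identity <span class="math-container">$\sum_{k\ge a}\binom{k}{a}x^k=\frac{x^a}{(1-x)^{a+1}}$</span> for <span class="math-container">$|x|&lt;1$</span>, which at <span class="math-container">$x=\tfrac12$</span> evaluates to <span class="math-container">$2$</span> for every <span class="math-container">$a$</span>.</p>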
MXXZ
966,405
<p><strong>Edit 1:</strong> I misinterpreted the question to mean that (at least) <span class="math-container">$2$</span> sit next to each other, so the following solution is incorrect. I'll leave it if anyone is interested in that case.</p> <p><strong>Edit 2:</strong> Even in that situation it wouldn't work (see the comments for the reason).</p> <p>Let's label them from <span class="math-container">$0$</span> to <span class="math-container">$24$</span> and do arithmetic (with those labels) in <span class="math-container">$\mathbb{Z}_{25}$</span> as they sit at a round table.</p> <p>We need to count the number of <span class="math-container">$3$</span>-subsets of <span class="math-container">$\mathbb{Z}_{25}$</span> which are of the form <span class="math-container">$\{i, i+1, j\}$</span>, <span class="math-container">$j \not \in \{i, i+1\}$</span>.</p> <p>Let's call the set of such subsets <span class="math-container">$M$</span>.</p> <p>To not overcount, we can restrict ourselves to <span class="math-container">$j \neq i-1$</span> as <span class="math-container">$i' = j, j' = i+1$</span> would lead to the same set.</p> <p>With that restriction, the question becomes equivalent to counting pairs <span class="math-container">$(i,j) \in \mathbb{Z}_{25}^2$</span> with <span class="math-container">$j \not \in \{i-1, i, i+1\}$</span>: Every subset in <span class="math-container">$M$</span> can obviously be created using one such pair and each pair maps uniquely to one subset in <span class="math-container">$M$</span>.</p> <p>Now, for <span class="math-container">$i$</span> we have <span class="math-container">$25$</span> possibilities and for <span class="math-container">$j$</span> <span class="math-container">$22$</span>. So the numerator is <span class="math-container">$25 \cdot 22 = 550$</span>.</p> <p>Therefore, the probability is</p> <p><span class="math-container">$$\frac{550}{2300} = \frac{11}{46}.$$</span></p>
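<p>Whatever the intended interpretation, the count of <span class="math-container">$550$</span> three-element subsets of a <span class="math-container">$25$</span>-cycle containing at least one pair of neighbours checks out by brute force (a Python sketch of my own):</p>

```python
# Enumerate all 3-subsets of Z_25 and count those containing two labels that
# differ by 1 mod 25, i.e. two people sitting next to each other.
from itertools import combinations
from fractions import Fraction

def has_adjacent_pair(subset, n=25):
    return any((b - a) % n in (1, n - 1) for a, b in combinations(subset, 2))

hits = sum(1 for s in combinations(range(25), 3) if has_adjacent_pair(s))
total = 2300  # C(25, 3)
assert hits == 550
assert Fraction(hits, total) == Fraction(11, 46)
```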
4,076,006
<p>I would like to know the number of valuation rings of <span class="math-container">$\Bbb Q_p((T))$</span>. I know <span class="math-container">$\Bbb Q_p$</span> has <span class="math-container">$2$</span> valuation rings, that is, <span class="math-container">$\Bbb Q_p$</span> and <span class="math-container">$\Bbb Z_p$</span>. Every algebraic extension of <span class="math-container">$\Bbb Q_p$</span> has more than <span class="math-container">$2$</span> valuation rings because of the extension theorem on valuations. But <span class="math-container">$\Bbb Q_p((T))$</span> is not algebraic over <span class="math-container">$\Bbb Q_p$</span>, so I am at a loss.</p> <p>How many valuation rings of <span class="math-container">$\Bbb Q_p((T))$</span> are there?</p> <p>Thank you in advance.</p>
reuns
276,986
<p>There is only one (non-trivial) <strong>discrete</strong> valuation on <span class="math-container">$\Bbb{Q}_p((T))$</span>.</p> <p>For all <span class="math-container">$f\in 1+p\Bbb{Z}_p+T \Bbb{Q}_p[[T]]$</span> the binomial series gives that <span class="math-container">$f^{1/n}\in \Bbb{Q}_p((T))$</span> whenever <span class="math-container">$p\nmid n$</span>. Therefore, that <span class="math-container">$v$</span> is discrete implies <span class="math-container">$v(f)=0$</span>.</p> <p>Decompose <span class="math-container">$$\Bbb{Q}_p((T))^\times = T^\Bbb{Z} p^\Bbb{Z} \langle \zeta_{p-1}\rangle \ (1+p\Bbb{Z}_p+T \Bbb{Q}_p[[T]])$$</span> Thus, it remains to find <span class="math-container">$v(\zeta_{p-1}),v(p)$</span> and <span class="math-container">$v(T)$</span>.</p> <p>Since <span class="math-container">$(\zeta_{p-1})^{p-1}=1$</span> we must have <span class="math-container">$v(\zeta_{p-1})=0$</span>.</p> <ul> <li><p>If <span class="math-container">$v(p)\ne 0$</span>, then <span class="math-container">$p$</span> is a multiple of <span class="math-container">$1$</span> so we must have <span class="math-container">$v(p)&gt;0$</span>. For <span class="math-container">$r$</span> large enough we have <span class="math-container">$v(p^{-r} T)&lt;0$</span> so that <span class="math-container">$v(1+p^{-r} T)&lt;0$</span>, a contradiction.</p> </li> <li><p>Whence <span class="math-container">$v(p)=0$</span>. We must have <span class="math-container">$v(T)\ne 0$</span> for <span class="math-container">$v$</span> being non-trivial.</p> <p><span class="math-container">$v(T)&lt;0$</span> would contradict <span class="math-container">$v(1+T)=0$</span> so we must have <span class="math-container">$v(T)&gt;0$</span> and hence <span class="math-container">$$\{ x\in \Bbb{Q}_p((T))^\times,v(x)\ge 0\} = \Bbb{Q}_p[[T]]-0$$</span></p> </li> </ul>
763,199
<p>I am trying to understand more about the bidual space (or double dual space). The whole idea is that $V$ and $V^{**}$ are canonically isomorphic to one another, <s>which means that they are isomorphic without the choice of a basis</s>, which means there exists an isomorphism between them which <strong>does not</strong> depend on choosing some basis <em>(on either of the spaces)</em>. (suggestion by @DonAntonio)</p> <hr> <p>Let $V$ be a finite dimensional vector space with basis $\beta = \lbrace v_1, \dots , v_n \rbrace $ (the infinite dimensional case is discussed at <a href="https://math.stackexchange.com/questions/179367/canonical-isomorphism-between-mathbfv-and-mathbfv">Canonical Isomorphism Between $\mathbf{V}$ and $(\mathbf{V}^*)^*$</a>). I do understand that although we make a choice of basis here, it will later on somehow become obsolete (though I don't see why).</p> <p><strong>Define</strong>: \begin{align} \begin{matrix} V &amp; \overset{\Phi}{\longrightarrow}&amp; V^* &amp; \overset{\Phi^*}{\longrightarrow}&amp;V^{**} \\ \sum_{i=1}^n \lambda_i v_i &amp; \longmapsto &amp; \sum_{i=1}^n \lambda_i v_i^* &amp; \longmapsto &amp; \sum_{i=1}^n \lambda_i v_i^{**} \end{matrix} \end{align} <strong>Discussion</strong>: I know and have already shown that $V$ and $V^*$ are isomorphic (not canonically isomorphic!) to one another, meaning that $\lbrace v_1^*, \dots , v_n^* \rbrace$ defines a basis for $V^*$. Although I did not show it, I am at peace with the statement that the mapping $\Phi^*$ introduces another isomorphism between $V^*$ and $V^{**}$. 
It is the <strong>canonical</strong> isomorphism between $V$ and $V^{**}$ that bothers me.</p> <p><strong>My tutor's reasoning</strong>: In the following I will use $\checkmark$ to highlight whether or not I understand something.</p> <ul> <li>$v_i^{**} \in V^{**}=\hom(V^{*},k) \checkmark$, sure nothing to add here</li> <li>$v_i^{**} (\sum_{j=1}^n \lambda_j v_j^{*})=\lambda_i \checkmark$, should be completely analogous to $v_i^*(\sum_{j=1}^n \lambda_j v_j)=\lambda_i$</li> <li>$\Phi^* \circ \Phi (v) =: \iota_v$ $$ \iota: \begin{cases} V &amp; \longrightarrow V^{**} \\ v &amp; \longmapsto \iota_v \end{cases}$$ where $\iota_v( \varphi)= \varphi(v)$ and $\varphi \in V^*$ is a linear functional. My tutor said that $\iota$ is 'suddenly' a canonical isomorphism, independent of the choice of the basis $\beta$, because $\Phi$ and $\Phi^*$ are isomorphisms. This is the step that I don't understand at all.</li> </ul> <p><strong>What happened next (two more equations)</strong>: I told my tutor that I don't see why $\iota$ is independent of a basis choice, because at $\Phi^* \circ \Phi= \iota$ we still make a choice for a basis at $\Phi$, namely $\beta$. My tutor said that this seems to be the case at first sight and made two more calculations which I in fact "understand" (writing $\varphi = \sum_{j=1}^n \mu_j v_j^{*}$): $$ \Phi^* \circ \Phi \left( \sum_{i=1}^n \lambda_i v_i \right) ( \varphi) = \Phi^* \left( \sum_{i=1}^n \lambda_i v_i^* \right) ( \varphi) = \left( \sum_{i=1}^n \lambda_i v_i^{**} \right)(\varphi) \\ = \sum_{i=1}^n \sum_{j=1}^n \lambda_i \mu_j \underbrace{v_i^{**}(v_j^*)}_{= \delta_{ij}} = \sum_{i=1}^n \lambda_i \mu_i $$ I understand this calculation; it's mainly an application of the definitions introduced above and the Kronecker delta. Apparently I am supposed to have an "aha" moment here, which unfortunately hasn't occurred yet. 
The next calculation he did was $$ \varphi (v) = \left(\sum_{j=1}^n \mu_j v_j^{*} \right) \left( \sum_{i=1}^n \lambda_i v_i \right)= \sum_{j=1}^n \sum_{i=1}^n \mu_j \lambda_i \underbrace{v_j^*( v_i)}_{\delta_{ij}} = \sum_{i=1}^n \lambda_i \mu_i $$ which evidently gives the same result as above. I would appreciate some wording on how those two equations (whose calculations I understand) help me to see why there appears to be a canonical isomorphism between $V$ and $V^{**}$.</p>
M Turgeon
19,379
<p>The first thing you mention is that, if you fix a basis for $V$, you get a non-canonical isomorphism $V\cong V^*$. Similarly, fixing a basis for $V^*$, you get a non-canonical isomorphism $V^*\cong (V^*)^*=V^{**}$. The "magic" is that, when you compose these two isomorphisms, you get an isomorphism $V\cong V^{**}$ which of course depends on your choice of bases, but which is exactly equal to the canonical isomorphism $$\iota: V\to V^{**};\quad \iota(v)=\iota_v,$$ <strong>regardless of what choices you made to define $\Phi$ and $\Phi^*$.</strong> The two extra equations you give show that $$\Phi^*\circ\Phi = \iota.$$ What may confuse you is that, in order to show these two maps are equal, we choose bases. But this is normal: we need these bases to define $\Phi$ and $\Phi^*$ to start with. But note that the two extra equations hold regardless of what bases were chosen.</p>
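<p>The basis-independence can also be seen operationally: the pointwise formula $\iota_v(\varphi)=\varphi(v)$ never mentions coordinates. A small Python sketch of my own (functionals modelled as plain functions; all names illustrative):</p>

```python
# The canonical map iota sends a vector v to the evaluation functional
# iota(v): phi |-> phi(v).  Nothing in this definition refers to a basis.

def iota(v):
    return lambda phi: phi(v)

# V = R^3 as tuples; two covectors (linear functionals) on V:
phi1 = lambda v: 2 * v[0] - v[1]        # "row vector" (2, -1, 0)
phi2 = lambda v: v[0] + v[1] + v[2]     # "row vector" (1, 1, 1)

v = (3, 5, 7)
assert iota(v)(phi1) == phi1(v) == 1
assert iota(v)(phi2) == phi2(v) == 15
```

<p>Any basis enters only if one wants to write $\iota$ as a matrix, which is exactly the point of the answer: $\Phi^*\circ\Phi$ needs a basis to be written down, but the map it equals does not.</p>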
763,199
Community
-1
<p>The canonical map $\theta$ from a vector space to its double-dual has a particularly clean pointwise formula:</p> <p>$$\theta(v)(\omega) = \omega(v)$$</p> <p>where $v$ is a vector and $\omega$ is a covector (an element of $V^*$).</p> <p>Since $\omega$ ranges over all covectors, this gives a pointwise definition of $\theta(v)$. And since $v$ ranges over all vectors, this in turn gives a pointwise definition of $\theta$.</p> <p>This notation is a little strange at first, but it's quite useful once you're used to it. $\theta(v)$ is an element of $V^{**}$, and so it makes sense to evaluate $\theta(v)$ at $\omega$ to get a scalar.</p>
1,408,036
<p>Five points are drawn on the surface of an orange. Prove that it is possible to cut the orange in half in such a way that at least four of the points are on the same hemisphere. (Any points lying along the cut count as being on both hemispheres.)</p>
davidlowryduda
9,754
<p>Through any two points, there is a great circle that passes through those two points. Such a cut will split the other 3 pigeons &mdash; <em>oh, I mean points</em> &mdash; among 2 halves. </p> <p>[You can now handle additional points being on the great circle on your own, I believe.]</p>
146,813
<p>Is sigma-additivity (countable additivity) of Lebesgue measure (say on measurable subsets of the real line) deducible from the Zermelo-Fraenkel set theory (without the axiom of choice)?</p> <p>Note 1. Follow-up question: Jech's 1973 book on the axiom of choice seems to be cited as the source for the Feferman-Levy model. Can this be sourced in the work of Feferman and Levy themselves? Are these S. Feferman and A. Levy?</p>
Asaf Karagila
7,206
<p>No, you can't have that. It is consistent that the real numbers are a countable union of countable sets, in which case you immediately have that there is no nontrivial measure which is countably additive on the real numbers.</p> <p>There are other models, however, in which $\aleph_1$ is singular, the countable union of countable sets of real numbers is countable; but every set is Borel. In such models, I believe, you can't have a countably additive Lebesgue measure as well.</p> <hr> <p>To your question, yes. These are Solomon Feferman and Azriel Levy. The result appears as an abstract in Notices of the AMS from 1964 (give or take a year, this is from memory). </p>
21,262
<p><strong>Bug introduced in 9.0 and fixed in 11.1</strong></p> <hr> <p><code>NDSolve</code> in Mathematica 9.0.0 (MacOS) is behaving strangely with a piecewise right hand side. The following code (a simplified version of my real problem):</p> <pre><code>sol = NDSolve[{x'[t] == Piecewise[{{2, 0 &lt;= Mod[t, 1] &lt; 0.5}, {-1, 0.5 &lt;= Mod[t, 1] &lt; 1}} ], x[0] == 0}, x, {t, 0, 1}]; Print[x[1] /. sol[[1]]]; </code></pre> <p>gives the correct answer of 0.5 about 50% of the time, but often returns -0.5 and -1 instead. Rerunning it gives apparently random results. It always gives the correct result in Mathematica 8.</p> <p>Here's what I've figured out so far:</p> <ol> <li>It apparently has something to do with the <code>Mod[t,1]</code>, because it works fine with just "t" in the <code>Piecewise</code>. Unfortunately I'm looking at a piecewise periodic system (not just from t=0 to 1).</li> <li>It's only the first segment of the solution from t=0 to t=0.5 that varies from run to run.</li> <li>Using initial condition <code>x[10^-100]==0</code> fixes the problem, but this is an ugly hack.</li> </ol> <p>Can anyone replicate this strange behavior, know what's behind it, or have a better suggested fix?</p>
Albert Retey
169
<p>This is not really an answer (the answer is of course that this is a bug), but it is too long for a comment and probably gives a hint where the problem is and how one can avoid it in other cases. The workaround is to define the piecewise function as an external definition only for numeric arguments. It looks like otherwise some invalid optimization with the symbolic expression is done:</p> <pre><code>ClearAll@rhs rhs[t_?NumericQ] := Piecewise[{ {2, 0 &lt;= Mod[t, 1] &lt; 0.5}, {-1, 0.5 &lt;= Mod[t, 1] &lt; 1} }] Table[ sol = NDSolve[{x'[t] == rhs[t], x[0] == 0}, x, {t, 0, 1} ]; Plot[x[t] /. sol[[1]], {t, 0, 1}, PlotLabel -&gt; (x[1] /. sol[[1]]), Frame -&gt; True, PlotRange -&gt; All], {10} ] </code></pre> <p>I have tested this with Mathematica 9.0.1 on Windows 7 64bit. Unlike for <code>NIntegrate</code> there seems to not be a <code>Method</code> option with which one could switch off the symbolic optimization, at least I didn't find one. It might well be there, even if not documented and of course might be a better workaround...</p>
21,262
xzczd
1,871
<p>Another possible fix to the bug is to use <code>Simplify`PWToUnitStep</code> to expand the <code>Piecewise</code> into a combination of <code>UnitStep</code>:</p> <pre><code>Table[NDSolveValue[{x'[t] == Simplify`PWToUnitStep@ Piecewise[{{2, 0 &lt;= Mod[t, 1] &lt; 0.5}, {-1, 0.5 &lt;= Mod[t, 1] &lt; 1}}], x[0] == 0}, x, {t, 0, 1}][1], {100}] // Union </code></pre> <p><img src="https://i.stack.imgur.com/bAnq0.png" alt="Mathematica graphics"></p>
206,723
<p>Can anyone explain why the probability that an integer is divisible by a prime $p$ (or any integer) is $1/p$?</p>
Steven Stadnicki
785
<p>As I said in a comment, the notion of 'probability' over the set of all integers (or equivalently, the natural numbers) is fraught with some peril. A better statement of the question is that the <em>natural density</em> of the numbers divisible by $p$ is $\frac{1}{p}$. Natural density captures what people think of as probability; it simply represents the limit of the proportion of integers with the given property. More specifically, the natural density of a set $A$ is defined as the limit $\lim_{n\rightarrow\infty}\frac{1}{n}\#\left\{i:i\leq n \wedge i\in A\right\}$. For more details, see <a href="http://en.wikipedia.org/wiki/Natural_density">http://en.wikipedia.org/wiki/Natural_density</a>.</p> <p>In your particular case, the natural density result is easy to prove: the number of naturals $i\leq n$ that are divisible by $p$ (call this count $c$) satisfies $\frac{n}{p}-1\lt c\lt \frac{n}{p}+1$, so the density $d = \lim_{n\rightarrow\infty}\frac{c}{n}$ satisfies $\frac{1}{p}-\frac{1}{n}\lt d\lt \frac{1}{p}+\frac{1}{n}$ for all $n$; therefore we must have $d=\frac{1}{p}$.</p>
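<p>The squeeze in the last paragraph is easy to check numerically (a Python sketch of my own, with illustrative names):</p>

```python
# Proportion of integers 1..n divisible by p; by the answer's estimate it is
# squeezed between 1/p - 1/n and 1/p + 1/n, hence tends to 1/p.

def density_up_to(p, n):
    count = sum(1 for i in range(1, n + 1) if i % p == 0)
    return count / n

p = 7
for n in (10**3, 10**5):
    d = density_up_to(p, n)
    assert 1/p - 1/n < d < 1/p + 1/n
```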
948,329
<p>I have come across this trig identity and I want to understand how it was derived. I have never seen it before, nor have I seen it in any of the online resources, including the many trig identity cheat sheets that can be found on the internet.</p> <p>$A\cdot\sin(\theta) + B\cdot\cos(\theta) = C\cdot\sin(\theta + \Phi)$</p> <p>Where $C = \pm \sqrt{A^2+B^2}$</p> <p>$\Phi = \arctan(\frac{B}{A})$</p> <p>I can see that the Pythagorean theorem is somehow used here because of the $C$ equivalency, but I do not understand how the equation was derived.</p> <p>I tried applying the sum of two angles identity of sine, i.e. $\sin(a \pm b) = \sin(a)\cdot\cos(b) \pm \cos(a)\cdot\sin(b)$</p> <p>But I am unsure what the next step is, in order to properly understand this identity.</p> <p>Where does it come from? Is it a normal identity that mathematicians should have memorized?</p>
beep-boop
127,192
<p>We can write $$A\sin(\theta)+B\cos(\theta)$$ in the form $C \sin(\theta+\phi)$ for some $\phi$ and $C$.</p> <p>i.e. $$A\sin(\theta)+B\cos(\theta) \equiv C\sin(\theta+\phi).$$</p> <p>Let's expand the RHS using the addition identity for sine.</p> <p>$$A\sin(\theta)+B\cos(\theta) \equiv C\underbrace{[\sin(\theta)\cos(\phi)+\cos(\theta)\sin(\phi)]}_{\equiv \ \sin(\theta+\phi)}.$$</p> <p>Simplifying, we have $$\color{red}A\sin(\theta)+\color{green}B\cos(\theta) \equiv \color{red}{C\cos(\phi)} \sin(\theta)+\color{green}{C\sin(\phi)}\cos(\theta).$$</p> <p>This is an identity, so it's true for all (permitted) values of $\theta$.</p> <p>Comparing coefficients of $\sin(\theta)$ we have $$A=C\cos(\phi) \tag{1}.$$</p> <p>Comparing coefficients of $\cos(\theta)$ we have $$B=C\sin(\phi) \tag{2}.$$</p> <p>If we do $\frac{(2)}{(1)},$ we have $$\frac{C\sin(\phi)}{C\cos(\phi)}=\Large \boxed{\tan(\phi)=\frac{B}{A}}.$$</p> <p>If we do $(1)^2+(2)^2,$ we get $$[C\cos(\phi)]^2+[C \sin(\phi)]^2 =C^2[\cos^2(\phi)+\sin^2(\phi)]=C^2(1)=C^2 \quad.$$</p> <p>Hence we have $$\Large \boxed{A^2+B^2=C^2} \quad.$$</p>
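<p>The identity is easy to verify numerically. In this Python sketch (my own addition), <code>atan2</code> is used rather than $\arctan(B/A)$ so that the signs of $A$ and $B$ are handled correctly:</p>

```python
# With C = sqrt(A^2 + B^2) and phi such that cos(phi) = A/C, sin(phi) = B/C,
# the addition formula gives C sin(t + phi) = A sin(t) + B cos(t) for all t.
import math

def amplitude_phase(A, B):
    C = math.hypot(A, B)
    phi = math.atan2(B, A)
    return C, phi

for A, B in [(3, 4), (-2, 5), (1, -1)]:
    C, phi = amplitude_phase(A, B)
    for k in range(8):
        t = k * math.pi / 7
        assert math.isclose(A * math.sin(t) + B * math.cos(t),
                            C * math.sin(t + phi), abs_tol=1e-12)
```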