qid
int64
1
4.65M
question
large_stringlengths
27
36.3k
author
large_stringlengths
3
36
author_id
int64
-1
1.16M
answer
large_stringlengths
18
63k
3,056,616
<p><span class="math-container">$P(x) = 0$</span> is a polynomial equation having <strong>at least one</strong> integer root, where <span class="math-container">$P(x)$</span> is a polynomial of degree five and having integer coefficients. If <span class="math-container">$P(2) = 3$</span> and <span class="math-container">$P(10)= 11$</span>, then prove that the equation <span class="math-container">$P(x) = 0$</span> has <strong>exactly one</strong> integer root.</p> <p>I tried by assuming a fifth degree polynomial but got stuck after that.</p> <p>The question was asked by my friend.</p>
W-t-P
181,098
<p>If <span class="math-container">$u$</span> and <span class="math-container">$v$</span> are integer roots of <span class="math-container">$P$</span>, then <span class="math-container">$P(x)=(x-u)(x-v)Q(x)$</span>, where <span class="math-container">$Q$</span> is a polynomial with integer coefficients. From <span class="math-container">$P(2)=3$</span> we get <span class="math-container">$(u-2)(v-2)\mid 3$</span>, and then WLOG either <span class="math-container">$u-2=1$</span> or <span class="math-container">$u-2=-1$</span>, implying <span class="math-container">$u\in\{1,3\}$</span>. Now <span class="math-container">$P(10)=11$</span> gives <span class="math-container">$(u-10)(v-10)\mid 11$</span>, showing that <span class="math-container">$u-10$</span> is a divisor of <span class="math-container">$11$</span>. However, neither <span class="math-container">$1-10=-9$</span> nor <span class="math-container">$3-10=-7$</span> is a divisor of <span class="math-container">$11$</span>, a contradiction. </p>
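The divisibility argument can also be checked by brute force; a minimal Python sketch (the search bounds are arbitrary, but any pair of distinct integer roots would have to satisfy both divisibility conditions):

```python
# If u, v were two distinct integer roots, then (u-2)(v-2) would divide
# P(2) = 3 and (u-10)(v-10) would divide P(10) = 11. No pair survives.
def divides(d, n):
    return d != 0 and n % d == 0

pairs = [
    (u, v)
    for u in range(-50, 51)
    for v in range(-50, 51)
    if u != v
    and divides((u - 2) * (v - 2), 3)
    and divides((u - 10) * (v - 10), 11)
]
print(pairs)  # -> []
```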
1,246,250
<p>I recently learned of Cantor's diagonal argument, and was thinking about why there can't be a bijection between any infinite set of integers and any infinite set of real numbers. I understood the basic idea behind the proof, but I was thinking of a particular transformation, for which I don't see why it doesn't form a bijection. </p> <p>Let's say we want to map all of the integers to the real numbers between $[-1, 1]$. My transformation is created in such a way that for every integer, I transform it into a mirrored number that is placed after a decimal. </p> <p>For example:</p> <p>$$0 \rightarrow 0.0$$ $$1 \rightarrow 0.1$$ $$2 \rightarrow 0.2$$ $$\vdots$$ $$100 \rightarrow 0.001$$ $$\vdots$$</p> <p>It would seem to me that this is one to one, and every number between $[-1, 1]$ is hit. Even in the example of an irrational number, say $(\sqrt2 - 1)$, there is some integer of infinite size that maps exactly to $(\sqrt2 - 1)$, because $(\sqrt2 - 1)$ can be written as a decimal made up of an infinite number of digits sitting behind a decimal point. It could be mapped as:</p> <p>$$...73265312414 \rightarrow 0.41421356237...$$</p> <p>Now, I'm assuming I was probably not the first person to think of this, so why does this not work as a bijective mapping?</p>
MooS
211,913
<p>There are no integers with infinite length (in the decimal system). The image of your map is contained in the rational numbers, hence cannot be all of the real interval.</p> <p>Note that the image is not even all of the rational numbers between $0$ and $1$. For instance $\frac{1}{3}$ does not appear.</p>
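The point that every image of the map is a terminating decimal (hence rational) can be illustrated numerically; a small sketch, with `mirror` as a hypothetical name for the OP's transformation:

```python
from fractions import Fraction

def mirror(n):
    """The OP's map: reverse the decimal digits of n and put them after '0.'."""
    s = str(n)
    return Fraction(int(s[::-1]), 10 ** len(s))

# Every image is a fraction whose denominator is a power of 10 (before
# reduction), i.e. a *terminating* decimal -- so 1/3 = 0.333... is never hit.
images = [mirror(n) for n in range(10_000)]
assert Fraction(1, 3) not in images
# after reduction the denominator can only contain the primes 2 and 5
assert all(f.denominator % 2 == 0 or f.denominator % 5 == 0 or f.denominator == 1
           for f in images)
```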
2,368,179
<p>The answer should be in radians, like π/4 (45°) or π/2 (90°). I used the $\tan(A+B)$ formula and got $5/7$ as the answer, but that's obviously wrong.</p>
Vidyanshu Mishra
363,566
<p>Use the formula $\tan (x+y)=\frac{\tan x+ \tan y}{1-\tan x\tan y}$</p> <p>and then take the inverse tangent.</p>
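A quick numerical sanity check of the addition formula in the hint (the sample values are arbitrary):

```python
import math

x, y = 0.3, 0.7
lhs = math.tan(x + y)
rhs = (math.tan(x) + math.tan(y)) / (1 - math.tan(x) * math.tan(y))
assert abs(lhs - rhs) < 1e-12  # the two sides agree to machine precision
```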
2,917,896
<p>I think my proof is wrong but I don't know how to approach the statement differently. I hope you can help me identify where I'm mistaken/incomplete.</p> <p>Proof: $$\text{We need to prove: } \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2, 6] $$</p> <p>$$\text{Thus, } x \in \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \iff x \in [2, 6]$$</p> <p>$$\text{We first consider the converse of the biconditional.}$$</p> <p>$$\text{and proceed by contrapositive.} $$ $$x \notin \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \implies x \notin [2, 6]$$ $$\text{Given that when } n = 1, [3-\frac{1}{n}, 6]=[2,6] \text{ and } $$ $$ \forall z \in (\mathbb{N} - {1}) , [3-\frac{1}{z}, 6] &lt; [2, 6] \text{ thus } \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2,6]$$ $$\text{It follows that, } x \notin [2,6] \text{. Thus the converse is true.}$$</p> <p>$$\text{Now, for left to right } (\implies) \text{ we proceed by direct proof. }$$</p> <p>$$x \in \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] \implies x \in [2, 6]$$ $$\text{By the same logic as for the converse, we continue..}$$</p> <p>$$\text{Given that, when } n = 1, [3-\frac{1}{n}, 6] = [2, 6], \text{ It follows that: } $$ $$x \in [2,6]$$</p> <p>$$\therefore \bigcup_{n=1}^{\infty} A_{n} = [2, 6] \text{ } \blacksquare$$ </p> <p>Thank you for your time.</p> <hr> <p><strong>Updated proof:</strong></p> <p>Proof: </p> <p>We assume $x \in \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$ $$A_{1} = [2, 6] &gt; \bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6] *$$ $$\therefore A_{1} = \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] , \space x \in [2,6]$$</p> <p>[ I placed a (*) to show where I'm uncertain. My problem is in knowing how much I should explain to the reader. I have to establish somehow that $A_{1}$ is the biggest interval but I kind of leave open 'why' $\bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6]$ is true. For example, I thought I had to show why $3 - \frac{1}{i} &gt; 2$ for every i $\geq$ 2. 
So I have a tendency to break everything down too much]</p> <p>Now for the converse we proceed by contrapositive.</p> <p>We assume $x \notin \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$ $$A_{1} = [2, 6] &gt; \bigcup_{i=2}^{\infty} [3 - \frac{1}{i}, 6] = [ \frac{5}{2}, 6] *$$ $$\therefore A_{1} = \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] , \space x \notin [2,6]$$</p> <p>$$ \therefore \bigcup_{n=1}^{\infty}[3 - \frac{1}{n}, 6] = [2, 6] \blacksquare$$</p> <p><strong><em>Updated proof #2:</em></strong></p> <p>Proof:</p> <p>We assume, $x \in \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$.</p> <p>Since $2 \leq 3 - \frac{1}{n} &lt; 3$ for all $ n \geq 1$, $ \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] \subseteq [2, 6], x \in [2, 6]$ </p> <p>For the converse we assume $x \notin \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6]$. </p> <p>Following the same reasoning as above, since $2 \leq 3 - \frac{1}{n} &lt; 3$ for all $ n \geq 1$, $ \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] \subseteq [2, 6], x \notin [2, 6]$ </p> <p>$\therefore \bigcup_{n=1}^{\infty} [3 - \frac{1}{n}, 6] = [2, 6] \space \blacksquare$ </p>
drhab
75,923
<p>You should end with something like:$$\text{From }x\in\bigcup_{n=1}^{\infty} \left[3-\frac1n,6\right]\text{ it follows that }x\in\left[3-\frac1n,6\right]\subseteq[2,6]\text{ for some positive integer }n$$</p> <p>That integer does not have to be $1$.</p>
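The key numerical fact behind the answer is that the left endpoints $3-\frac1n$ increase from $2$ (attained at $n=1$) toward $3$, so every interval in the union sits inside the $n=1$ interval $[2,6]$. A sketch (truncating the union at an arbitrary $N$ for illustration):

```python
# Left endpoints 3 - 1/n increase from 2 (at n = 1) toward 3, so each
# interval [3 - 1/n, 6] is contained in the n = 1 interval [2, 6].
N = 1000  # arbitrary truncation
lefts = [3 - 1 / n for n in range(1, N + 1)]
assert min(lefts) == 2.0              # attained at n = 1
assert all(2 <= a < 3 for a in lefts)  # every interval lies inside [2, 6]
```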
2,546,161
<p>I came across this interesting question in an interview:</p> <p>Let $X$ and $Y$ be two independent standard normals. Then $P(X&gt;0 \mid X+Y&gt;0) = 0.75$. One can get this easily by drawing the 2d plane and finding the required area.</p> <p>Now, if $X$ and $Y$ are jointly normal with correlation $\rho$, and we are given $P(X&gt;0 \mid X+Y&gt;0) = 0.8$, what is the value of $\rho$? </p> <p>We can find this by writing out the pdf of the joint normal $(X, Y)$, computing the required probability, and solving for $\rho$. I want to know if there is a more intuitive way other than the cumbersome double integral. </p> <p>I am thinking about making some transformation of $X$ and $Y$, but I don't have much clue how. </p>
Shashi
349,501
<p>$\newcommand{\PM}{\mathbb{P}}$I don't know if the elaboration I provide is an intuitive way of getting the answer. I think it is clear from your question that we have $X\sim N(0,1)$ and $Y\sim N(0,1)$ in the second case as well. We have (by definition): \begin{align} \PM(X&gt;0 |X+Y&gt;0) = \frac{\PM(X&gt;0, X+Y&gt;0)}{\PM(X+Y&gt;0)} \end{align} The numerator can be rewritten using the <a href="https://en.wikipedia.org/wiki/Law_of_total_probability" rel="nofollow noreferrer">Law of Total Probability</a>: \begin{align} \PM(X&gt;0 , X+Y&gt;0) = \PM(X&gt;0,X+Y&gt;0,Y&gt;0) + \PM(X&gt;0,X+Y&gt;0,Y&lt;0) \end{align} And that can be simplified further: \begin{align} \PM(X&gt;0,X+Y&gt;0,Y&gt;0) = \PM(X&gt;0,Y&gt;0) \end{align} and on the other hand: \begin{align} \PM(X&gt;0,X+Y&gt;0,Y&lt;0) &amp;= \PM(X+Y&gt;0, Y&lt;0) \\ &amp;=\PM(Y&lt;0|X+Y&gt;0)\PM(X+Y&gt;0)\\ &amp;= (1-\PM(Y&gt;0|X+Y&gt;0))\PM(X+Y&gt;0)\\ &amp;=(1-\PM(X&gt;0|X+Y&gt;0))\PM(X+Y&gt;0)\\ \end{align} Note that we know without calculations that $\PM(X+Y&gt;0)=\frac{1}{2}$ (why?). So putting everything together: \begin{align} \PM(X&gt;0 |X+Y&gt;0) &amp;= \frac{\PM(X&gt;0,Y&gt;0)+(1-\PM(X&gt;0|X+Y&gt;0))\PM(X+Y&gt;0)}{\PM(X+Y&gt;0)}\\ &amp;= 2\PM(X&gt;0,Y&gt;0) + 1-\PM(X&gt;0|X+Y&gt;0) \end{align} Finally we get: \begin{align} \PM(X&gt;0 |X+Y&gt;0) = \frac{1}{2}+\PM(X&gt;0,Y&gt;0) \end{align} Now we are happy, because there is a known <a href="http://mathworld.wolfram.com/BivariateNormalDistribution.html" rel="nofollow noreferrer"> closed form</a> of $\PM(X&gt;0,Y&gt;0)$ in the case of $X,Y\sim N(0,1)$, namely: \begin{align} \PM(X&gt;0,Y&gt;0) = \frac{1}{4}+\frac{\arcsin(\rho)}{2\pi} \end{align} So we need to solve: \begin{align} \frac{3}{4}+\frac{\arcsin(\rho)}{2\pi} = \frac{4}{5} \end{align} We can solve this and finally get the result, namely: \begin{align} \rho = \frac{\sqrt[]{5}-1}{4} \end{align} Just playing with the probability rules got us very far. 
(And using the result for $\PM(X&gt;0,Y&gt;0)$ of course)</p>
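The final numbers can be verified directly from the arcsine law used in the answer: $\rho=\frac{\sqrt5-1}{4}=\sin\frac{\pi}{10}$, and plugging it back in recovers $0.8$. A quick check:

```python
import math

rho = (math.sqrt(5) - 1) / 4
# P(X>0 | X+Y>0) = 1/2 + P(X>0, Y>0) = 1/2 + 1/4 + arcsin(rho)/(2*pi)
p = 0.5 + 0.25 + math.asin(rho) / (2 * math.pi)
assert abs(p - 0.8) < 1e-12
# rho is exactly sin(pi/10) = sin(18 degrees)
assert abs(rho - math.sin(math.pi / 10)) < 1e-12
```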
1,691,306
<p>Find all pairs of values $a$ and $b$ that satisfy $(a+bi)^2 = 48 + 14i$</p> <p>Here's what I have so far:</p> <p>$$\begin{align} z^2 &amp;= 48 + 14i = 50 \operatorname{cis} 0.2837\\ z &amp;= \sqrt{50} \operatorname{cis} 0.1419 = 7 + i \\ z &amp;= \sqrt{50} \operatorname{cis} 3.2834 = -7 - i\\ a &amp;= ± 7 \\ b &amp;= ± 1 \end{align} $$</p> <p>What are the other solutions, and how do I find them?</p>
lab bhattacharjee
33,337
<p>$$48+14i=(a+ib)^2=a^2-b^2+i(2ab)$$</p> <p>Equating the imaginary parts, $2ab=14\iff ab=7$</p> <p>Equating the real parts, $a^2-b^2=48$</p> <p>$(a^2+b^2)^2=(a^2-b^2)^2+(2ab)^2=48^2+14^2=50^2$</p> <p>$\iff a^2+b^2=50$ as $a,b$ are real</p> <p>and we have $a^2-b^2=48$</p> <p>Solve for $a^2,b^2$</p> <p>As $ab=7&gt;0\implies a,b$ will have the same sign</p>
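Carrying the computation through: $a^2+b^2=50$ and $a^2-b^2=48$ give $a^2=49$ and $b^2=1$, and the sign condition $ab=7&gt;0$ leaves only $(a,b)=(7,1)$ and $(-7,-1)$. A one-line verification:

```python
# a^2 = 49, b^2 = 1, and ab > 0 forces equal signs: only two solutions
solutions = [(7, 1), (-7, -1)]
for a, b in solutions:
    assert complex(a, b) ** 2 == 48 + 14j  # exact in floats here
```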
638,348
<p><img src="https://i.stack.imgur.com/U1l29.jpg" alt="enter image description here"></p> <p>The radius of the big circle is $2$ and ABCD is a square. What is the difference between $T_{1}$ and $(M_{1}+M_{2})$?<br> I have solved it already, though I don't know if my answer is right or wrong. My solution is very long. Is there any short/quick/easy method to solve this problem? Please check it out.</p>
Community
-1
<p>For the first case, $r=(7,2,3)+t^3(3,-1,5)$. As $t$ varies through $\mathbb R$, $t^3$ varies through $\mathbb R$, so we have a line.</p> <p>For the second case, $r=(-1,3,1)+t^2(5,2,-1)$. As $t$ varies through $\mathbb R$, $t^2$ varies through the non-negative reals, so we have a ray.</p>
638,348
<p><img src="https://i.stack.imgur.com/U1l29.jpg" alt="enter image description here"></p> <p>The radius of the big circle is $2$ and ABCD is a square. What is the difference between $T_{1}$ and $(M_{1}+M_{2})$?<br> I have solved it already, though I don't know if my answer is right or wrong. My solution is very long. Is there any short/quick/easy method to solve this problem? Please check it out.</p>
Joel
85,072
<p>There are several ways to see this. Firstly you can solve each $t^3$ in terms of $x,y$ and $z$, and have a line in terms of its symmetric equation.</p> <p>You could also re-parameterize these by making the substitution $u = t^3$, and you can see it must be a line in that case.</p> <p>Incidentally, the second equation is a ray, not a line. That is because $t^2$ only takes positive values.</p>
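The substitution argument can be seen numerically: as $t$ ranges over $\mathbb R$, $t^3$ takes values of both signs (in fact every real value), while $t^2$ never goes negative. A sketch:

```python
# Sample parameters in [-5, 5] and compare the reparameterizations.
ts = [t / 10 for t in range(-50, 51)]
cubes = [t ** 3 for t in ts]
squares = [t ** 2 for t in ts]

assert min(cubes) < 0 < max(cubes)  # t^3 reaches both signs: a full line
assert min(squares) >= 0            # t^2 is never negative: only a ray
```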
2,762,237
<blockquote> <p>Does the sequence $(x_n)_{n=1}^\infty$ with $x_{n+1}=2\sqrt{x_n}$ converge?</p> </blockquote> <p>I'm almost positive this converges but I am not entirely sure how to go about this. The square root is really throwing me off as I haven't dealt with it at all up until now.</p>
Felix Marin
85,343
<p>$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} x_{n + 1} &amp; = 2\root{x_{n}} = 2\, x_{n}^{1/2} = 2^{1 + 1/2}\,x_{n - 1}^{1/2^{2}} = 2^{1 + 1/2 + 1/2^{2}}\, x_{n - 2}^{1/2^{3}} = \cdots = 2^{\large\sum_{k = 0}^{n}2^{-k}}\,x_{0}^{1/2^{\large n + 1}} \\[5mm] &amp; =2^{2 - 2^{-n}}\,x_{0}^{1/2^{\large n + 1}} \,\,\,\stackrel{\mrm{as}\ n\ \to\ \infty}{\Large\to}\,\,\, 2^{2} = \bbx{4} \end{align}</p>
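The closed form $x_{n+1}=2^{2-2^{-n}}x_0^{1/2^{n+1}}\to 4$ can be confirmed by iterating; a sketch with an arbitrary positive start value:

```python
import math

x = 0.37            # any x0 > 0 works; the limit does not depend on it
for _ in range(60):
    x = 2 * math.sqrt(x)
assert abs(x - 4) < 1e-9  # the iteration settles at the fixed point 4
```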
518,140
<p>What is the relation between the definition of homotopy of two functions</p> <blockquote> <p>"A homotopy between two continuous functions $f$ and $g$ from a topological space $X$ to a topological space $Y$ is defined to be a continuous function $H : X × [0,1] → Y$ from the product of the space $X$ with the unit interval $[0,1]$ to $Y$ such that, if $x \in X$ then $H(x,0) = f(x)$ and $H(x,1) = g(x)$".</p> </blockquote> <p>and the definition of the homotopy between two morphisms of chain complexes</p> <blockquote> <p>"Let $A$ be an additive category. The homotopy category $K(A)$ is based on the following definition: if we have complexes $A, B$ and maps $f, g$ from $A$ to $B$, a chain homotopy from $f$ to $g$ is a collection of maps $h^n \colon A^n \to B^{n - 1}$ (not a map of complexes) such that $f^n - g^n = d_B^{n - 1} h^n + h^{n + 1} d_A^n$, or simply $f - g = d_B h + h d_A$." </p> </blockquote> <p>Please help me. Thank you!</p>
Brian M. Scott
12,042
<p>You really ought to mention some of the mathematical achievements of the $14$th century philosopher <a href="https://en.wikipedia.org/wiki/Nicole_Oresme">Nicolas Oresme</a>: he worked with fractional exponents; he was the first to prove that the harmonic series diverges; he gave in essence a formula for the sum of a geometric series with arbitrary first term and ratio $\frac1n$ for integers $n\ge 2$; and he came close to inventing Cartesian coordinates, using his version to prove that the distance travelled in a given period by an object moving under constant acceleration is equal to the distance travelled in the same period by an object moving at a constant speed equal to that of the first object at the midpoint of the period. There’s a fair bit of information available on the web; the summary <a href="http://plato.stanford.edu/entries/nicole-oresme/#Mat">here</a> should be helpful.</p> <p>I’d replace the abacus, which in the form in which we think of it was little used in mediæval Europe, with its mediæval equivalent, the counting board or <a href="https://upload.wikimedia.org/wikipedia/commons/e/e0/Rechentisch.png">counting table</a>.</p>
73,912
<p>For my work, I am examining the values of a complex function as I vary the input according to a real parameter, and I want to both give the general plot and the plot of specific points, with labels (so one sees the direction of increase). </p> <p>I knew from the documentation that <code>Point</code> and <code>Epilog</code> together allow you to label points on graphs; e.g., </p> <pre><code>ourF[z_] := z^2; parts[z_] := {Re[z], Im[z]} ParametricPlot[parts[ourF[x + I/4]], {x, -3 , 3}, Epilog -&gt; {{PointSize[Medium], Point[Table[parts[ourF[ j + I/4]], {j, -3, 3}]]}}] </code></pre> <p>This produces:</p> <p><img src="https://i.stack.imgur.com/7bIh8.png" alt="A plot with labeled points, just as in the documentation"></p> <p>Looking at the answers to this site's <a href="https://mathematica.stackexchange.com/questions/1854/adding-labels-to-points-in-listplot">Question 1854</a> (especially Jacob Jurmain's), <code>Listplot</code> in newer versions of Mathematica has a <code>Labeled</code> option that labels the points in the <code>Listplot</code>. Indeed, I can get what I want by making a <code>Plot</code> and <code>Listplot</code> separately and then <code>Show</code>ing them together. e.g.</p> <pre><code>ourF[z_] := z^2; parts[z_] := {Re[z], Im[z]} plotOne = ParametricPlot[parts[ourF[x + I/4]], {x, -3 , 3}]; plotTwo = ListPlot[Table[Labeled[parts[ourF[ j + I/4]], Row[{"x = ", j}], Right ], {j, -3, 3}]]; Show[plotOne, plotTwo] </code></pre> <p>which yields</p> <p><img src="https://i.stack.imgur.com/bAYwK.png" alt="A plot with labeled points"></p> <p>as required. 
</p> <p>My first attempt, however, was to simply put the <code>ListPlot</code> in the <code>Epilog</code>, e.g.</p> <pre><code>ourF[z_] := z^2; parts[z_] := {Re[z], Im[z]} ParametricPlot[parts[ourF[x + I/4]], {x, -3 , 3}, Epilog -&gt; {ListPlot[Table[Labeled[parts[ourF[ j + I/4]], Row[{"x = ", 13/10 + j/10}], Right ], {j, -3, 3}]]}] </code></pre> <p>This yields the error:</p> <pre><code>Graphics is not a Graphics primitive or directive. </code></pre> <p>I guess <code>ParametricPlot</code> calls <code>Graphics</code>, and <code>Listplot</code> is now calling <code>Graphics</code> inside the other <code>Graphics</code>, hence the issue. </p> <p>I also tried putting <code>Labeled</code> in the <code>Point</code> variation, but <code>Point</code> doesn't know what to do with the label, and the error message becomes</p> <pre><code>Coordinate Labeled[{8.9375, -1.5}, Row[{"x = ", 1}], Right] should be a pair of numbers, or a Scaled or Offset form. </code></pre> <hr> <p><strong>Q:</strong> Is there any way of putting it all in one plotting command?</p> <hr> <p><em>P.S.:</em> An answer in the vein of, "You have acceptable output, stop worrying about it" would also be reasonable. I am new to Mathematica, but it seems to the newcomer as though Mathematica puts in one line what I would use 5-10 lines in MATLAB to set up, and [after having debugged] the fewest number of lines is the best.</p>
george2079
2,079
<p>Just to show you can readily do this directly with graphics primitives:</p> <pre><code>ourF[z_] := z^2; parts[z_] := {Re[z], Im[z]} ParametricPlot[parts[ourF[x + I/4]], {x, -3, 3}, Epilog -&gt; Table[{ {PointSize[.01], Red, Point@#}, Text[Row[{"x = ", 13/10 + j/10}], #, {-2, 0}]} &amp;@ parts[ourF[j + I/4]], {j, -3, 3}], PlotRangePadding -&gt; {{0, 1}, {1, 1}}] </code></pre> <p><img src="https://i.stack.imgur.com/CyOMa.png" alt="enter image description here"></p>
1,454,500
<p>I am self-studying mathematics for physics by reading the book <strong>Mathematical Methods in the Physical Sciences</strong>. I have been stuck on this problem for days:</p> <pre><code>Prove the following by appropriate manipulations using Facts 1 to 4; do not just evaluate the determinants. | 1 a bc | | 1 a a^2 | | 1 a a^2 | | 1 b ac | = | 1 b b^2 | = (c - a)(b - a)(c - b)| 0 1 b+a | | 1 c ab | | 1 c c^2 | | 0 0 1 | = (c - a)(b - a)(c - b) </code></pre> <p>I can evaluate the first determinant and obtain the result (c-a)(b-a)(c-b) as above. Also, I can manipulate the second determinant to the third determinant and obtain the result as above. In fact, this proves the above equations. </p> <p>However, I wonder whether there is any way to transform the first determinant to the second and the third. And is there any way to transform the third determinant back to the first and the second? I have spent a day trying to find a way, without success. </p> <p>Can you suggest a hint to overcome this struggle? </p> <p>Here are the four facts about determinant manipulation that the above problem mentioned:</p> <p><strong>Fact 1:</strong> If each element of one row (or one column) of a determinant is multiplied by a number k, the value of the determinant is multiplied by k.</p> <p><strong>Fact 2:</strong> The value of a determinant is zero if</p> <p>(a) all elements of one row (or column) are zero; or if </p> <p>(b) two rows (or two columns) are identical; or if</p> <p>(c) two rows (or two columns) are proportional.</p> <p><strong>Fact 3:</strong> If two rows (or two columns) of a determinant are interchanged, the value of the determinant changes sign.</p> <p><strong>Fact 4:</strong> The value of a determinant is unchanged if</p> <p>(a) rows are written as columns and columns as rows; or if</p> <p>(b) we add to each element of one row, k times the corresponding element of another row, where k is any number (and a similar statement for columns).</p> <p>Thank you very much.</p>
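Before manipulating, it can help to sanity-check the identity numerically for a few integer triples; a small sketch with a hand-rolled 3×3 determinant (the sample triples are arbitrary):

```python
def det3(m):
    """Cofactor expansion of a 3x3 determinant along the first row."""
    (a1, a2, a3), (b1, b2, b3), (c1, c2, c3) = m
    return (a1 * (b2 * c3 - b3 * c2)
            - a2 * (b1 * c3 - b3 * c1)
            + a3 * (b1 * c2 - b2 * c1))

for a, b, c in [(1, 2, 3), (2, 5, 7), (-1, 4, 6)]:
    lhs = det3([[1, a, b * c], [1, b, a * c], [1, c, a * b]])
    vandermonde = det3([[1, a, a * a], [1, b, b * b], [1, c, c * c]])
    rhs = (c - a) * (b - a) * (c - b)
    assert lhs == vandermonde == rhs
```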
user236182
236,182
<p>None of the two double-inequalities can hold for any $a,b,c,d\in\Bbb R^+$.</p> <p>$$\frac{a + b}{a + b + c + d} &lt; \frac{a}{a + c} \iff (a+b)(a+c)&lt;a(a+b+c+d)$$</p> <p>$$\iff \frac{a}{b}&gt;\frac{c}{d}$$</p> <p>The same way you get:</p> <p>$$ \frac{a}{a+c} &lt; \frac{b}{b + d} \iff \frac{a}{b}&lt;\frac{c}{d}$$</p> <p>The two inequalities contradict each other.</p> <p>Use the same method for the other double-inequality.</p>
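A randomized check that the two conditions are indeed mutually exclusive, as the algebra shows (seeded for reproducibility):

```python
import random

random.seed(0)
for _ in range(10_000):
    a, b, c, d = (random.uniform(0.01, 10) for _ in range(4))
    ineq1 = (a + b) / (a + b + c + d) < a / (a + c)  # equivalent to a/b > c/d
    ineq2 = a / (a + c) < b / (b + d)                # equivalent to a/b < c/d
    assert not (ineq1 and ineq2)  # never both at once
```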
3,931,672
<p>Is there any bounded continuous map $f:A\to\mathbb R$ ($A$ open) which cannot be extended to the whole of $\mathbb R$?</p> <p>This is a question posed by myself. My attempt: Let $A=(1,2)$; then we can extend it. If $A$ is finitely many intervals it can be extended. If $A$ is countably many intervals then it can also be extended.</p> <p>But the last claim is based on the fact that $A$ is a countable union of disjoint open intervals.</p> <p>Am I right or wrong?</p>
Hagen von Eitzen
39,174
<p>If <span class="math-container">$A$</span> is open and neither empty nor all of <span class="math-container">$\Bbb R$</span>, then we can pick <span class="math-container">$a\in A$</span>, <span class="math-container">$b\notin A$</span>. Wlog <span class="math-container">$a&lt;b$</span>. Let <span class="math-container">$s=\inf([a,b]\setminus A)$</span>. Then <span class="math-container">$a&lt;s\le b$</span> and <span class="math-container">$s\notin A$</span> and we can define <span class="math-container">$f\colon A\to \Bbb R$</span> as <span class="math-container">$f(x)=\sin\frac1{x-s}$</span>. There is no <em>continuous</em> extension of <span class="math-container">$f$</span> to <span class="math-container">$A\cup\{s\}$</span>.</p>
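The failure at $s$ can be seen concretely: along two sequences approaching $s$, the function takes different constant values. A sketch taking $A=(0,2)$ and $s=0$ (so $f(x)=\sin\frac1x$), which is one instance of the answer's construction:

```python
import math

def f(x):
    return math.sin(1 / x)  # the answer's f with s = 0

# Two sequences in A = (0, 2) converging to s = 0:
zeros = [1 / (2 * math.pi * n) for n in range(1, 50)]                # sin -> 0
ones = [1 / (math.pi / 2 + 2 * math.pi * n) for n in range(1, 50)]   # sin -> 1

assert all(abs(f(x)) < 1e-9 for x in zeros)
assert all(abs(f(x) - 1) < 1e-9 for x in ones)
# f oscillates between 0 and 1 arbitrarily close to s, so no value at s
# can make the extension continuous.
```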
194,096
<p>Is it possible to find an expression for: $$S(N)=\sum_{k=0}^{+\infty}\frac{1}{\sum_{n=0}^{N}k^n}?$$</p> <p>For $N=1$ we have</p> <p>$$S(1) = \displaystyle\sum_{k=0}^{+\infty}\frac{1}{1 + k} = \displaystyle\sum_{k=1}^{+\infty}\frac{1}{k}$$</p> <p>which is the (divergent) harmonic series. Thus, $S(1) = \infty$.</p> <p>For $N=2$ this sum is: $$S(2)=\sum_{k=0}^{+\infty}\frac{1}{1+k+k^2}$$ which can be expressed as: $$S(2)=-1+\frac{1}{3}\sqrt 3 \pi \tanh(\frac{1}{2}\pi\sqrt 3)\approx 0.798$$</p> <p>For $N=3$ we have: $$S(3)=\frac{1}{4}\Psi(I)+\frac{1}{4I}\Psi(I)-\frac{1}{4I}\pi\coth(\pi)+\frac{1}{4}\pi\coth(\pi)+\frac{1}{4}\Psi(1+I)-\frac{1}{4I}\Psi(1+I)-\frac{1}{2}+\frac{1}{2}\gamma \approx 0.374$$</p>
Sasha
11,069
<p>Perform a partial fraction decomposition: $$ \frac{1}{p(k)} = \frac{1}{1+k+\cdots+k^{n-1}} = \frac{1}{ \prod_{m=1}^{n-1}\left(k-\exp\left(i \frac{2 \pi}{n} m \right)\right)} = \sum_{m=1}^{n-1} \frac{1}{k-\exp\left(i \frac{2 \pi}{n} m \right)} \frac{1}{p^\prime\left(\exp\left(i \frac{2 \pi}{n} m \right)\right)} $$ Now: $$ p^\prime\left(z\right) = \sum_{m=1}^{n-1} m z^{m-1} = \frac{\mathrm{d}}{\mathrm{d}z} \sum_{m=0}^{n-1} z^{m} = \frac{\mathrm{d}}{\mathrm{d}z} \frac{1-z^n}{1-z} = \frac{z-z^n (n-z(n-1))}{z (1-z)^2} $$ Therefore, using $z^n=1$ for $z=\exp\left(i \frac{2 \pi}{n} m \right)$: $$ c_m := \frac{1}{p^\prime\left(\exp\left(i \frac{2 \pi}{n} m \right)\right)} = \frac{1}{n} \exp\left(i \frac{2 \pi}{n} m \right) \left( \exp\left(i \frac{2 \pi}{n} m \right) - 1 \right) $$ We thus have, and using $\sum_{m=1}^{n-1} c_m = 0$: $$ \begin{eqnarray} \sum_{k=0}^\infty \frac{1}{p(k)} &amp;=&amp; \sum_{k=0}^\infty \sum_{m=1}^{n-1} \frac{c_m}{k-\exp\left(i \frac{2 \pi}{n} m \right)} = \sum_{k=0}^\infty \sum_{m=1}^{n-1} c_m \left(\frac{1}{k-\exp\left(i \frac{2 \pi}{n} m \right)} - \frac{1}{k+1}\right) \\ &amp;=&amp; -\sum_{m=1}^{n-1} c_m \sum_{k=0}^\infty \left(\frac{1}{k+1} - \frac{1}{k-\exp\left(i \frac{2 \pi}{n} m \right)}\right) \\ &amp;=&amp; -\sum_{m=1}^{n-1} c_m \left( \gamma + \psi\left(-\exp\left(i \frac{2 \pi}{n} m \right)\right)\right) \end{eqnarray} $$ Again, making use of $\sum_{m=1}^{n-1} c_m = 0$ we arrive at: $$ \sum_{k=0}^\infty \frac{1}{1+k+\cdots+k^{n-1}} = \sum_{m=1}^{n-1} \frac{1}{n} \exp\left(i \frac{2 \pi}{n} m \right) \left(1- \exp\left(i \frac{2 \pi}{n} m \right) \right) \cdot \psi\left(-\exp\left(i \frac{2 \pi}{n} m \right)\right) $$ where $\psi(x)$ denotes the <a href="http://en.wikipedia.org/wiki/Digamma_function" rel="nofollow">digamma function</a>.</p>
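Two facts used silently above can be verified with `cmath`: the nonreal $n$th roots of unity are roots of $p(k)=1+k+\cdots+k^{n-1}$, and the residues $c_m=\frac1n z(z-1)$ sum to zero. A sketch for $n=5$ (the choice of $n$ is arbitrary):

```python
import cmath

n = 5
roots = [cmath.exp(2j * cmath.pi * m / n) for m in range(1, n)]

# each nonreal n-th root of unity kills p(k) = 1 + k + ... + k^(n-1)
for z in roots:
    p = sum(z ** j for j in range(n))
    assert abs(p) < 1e-12

# the coefficients c_m = (1/n) z (z - 1) sum to zero
c = [z * (z - 1) / n for z in roots]
assert abs(sum(c)) < 1e-12
```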
4,159,060
<p>Let <span class="math-container">$H_1, H_2$</span> be Hilbert spaces and <span class="math-container">$T:H_1\to H_2$</span>. We say that <span class="math-container">$T$</span> is unitary if it preserves the inner product and is onto.</p> <ol> <li>Show that the following claims are equivalent:</li> </ol> <p>A. <span class="math-container">$T$</span> is unitary.</p> <p>B. T maps every orthonormal basis of <span class="math-container">$H_1$</span> to an orthonormal basis of <span class="math-container">$H_2$</span>.</p> <p>C. T is injective and there exists an orthonormal basis of <span class="math-container">$H_1$</span> that <span class="math-container">$T$</span> maps to an orthonormal basis of <span class="math-container">$H_2$</span>.</p> <p>D. T is invertible and <span class="math-container">$T^{-1}=T^*$</span></p> <ol start="2"> <li><p>Show that <span class="math-container">$T$</span> is unitary iff <span class="math-container">$T^*$</span> is.</p> </li> <li><p>If <span class="math-container">$H_1=H_2$</span>, show that <span class="math-container">$T$</span> is unitary iff <span class="math-container">$T$</span> preserves the inner product and is normal.</p> </li> </ol> <p>For 1:</p> <p>A=&gt;B: Let T be a unitary operator, i.e. it preserves the inner product. Let <span class="math-container">$(u_a)$</span> be a Hilbert basis of <span class="math-container">$H_1$</span> (every Hilbert space has an orthonormal basis); then <span class="math-container">$&lt;Tu_a,Tu_b&gt;=&lt;u_a,u_b&gt;=0$</span> for all <span class="math-container">$a\neq b$</span> and <span class="math-container">$&lt;Tu_a,Tu_a&gt;=&lt;u_a,u_a&gt;=1$</span>. Thus, T maps the orthonormal basis <span class="math-container">$(u_a)$</span> to an orthonormal basis <span class="math-container">$(Tu_a)$</span>.</p> <p>D=&gt;A: Let T be invertible and <span class="math-container">$T^*=T^{-1}$</span>.
Then, <span class="math-container">$&lt;Tx,Ty&gt;=&lt;x,T^*Ty&gt;=&lt;x,y&gt;$</span> so T is unitary by definition.</p> <p>For 2: Using that A &lt;=&gt;D T is unitary iff it is invertible and <span class="math-container">$T^{-1}=T^*$</span>.</p> <p>If T is unitary then <span class="math-container">$&lt;T^*x,T^*y&gt;=&lt;x,TT^*y&gt;=&lt;x,Iy&gt;=&lt;x,y&gt;$</span>, we get that <span class="math-container">$T^*$</span> is unitry. In the second way, if <span class="math-container">$T^*$</span> is unitary, then <span class="math-container">$&lt;Tx,Ty&gt;=&lt;x,T^*Ty&gt;=&lt;x,y&gt;$</span>, so T is unitary.</p> <p>For 3: If T is unitary then T preservess the inner product (by def), and using A &lt;=&gt; D,</p> <p><span class="math-container">$T^*T=T^{-1}T=I$</span><br /> <span class="math-container">$TT^*=TT^{-1}=I$</span> Therefore T is normal.</p> <p>For the inverse, let T be norml and preseres the inner product, <span class="math-container">$&lt;Tx,Ty&gt;=&lt;x,T^*Ty&gt;=&lt;x,TT^*y&gt;=&lt;x,y&gt;$</span>, so <span class="math-container">$T^*T=TT^*=I$</span>, so T is invertible and <span class="math-container">$T^*=T^{-1}$</span>, thus T is unitary (by A &lt;=&gt;D).</p> <p>Is what i did fine?</p> <p>I did not get the idea in the rest =&gt; in 1, so will appreciate your help.</p>
Maximilian Janisch
631,742
<p>As demonstrated by the Borel–Kolmogorov paradox, it is <em>impossible</em> that the term &quot;<span class="math-container">$\mathsf P(Y=1\mid X=x)$</span>&quot; is defined using the event <span class="math-container">$\{X=x\}$</span> if <span class="math-container">$\mathsf P(\{X=x\})=0$</span>. Instead, the term &quot;<span class="math-container">$\mathsf P(Y=1\mid X=x)$</span>&quot; is very intimately related to the random variable <span class="math-container">$X$</span>. Here is the usual definition:</p> <p>For any Lebesgue-integrable (real) random variable <span class="math-container">$Y$</span> and any (real) random variable <span class="math-container">$X$</span> on the same probability space, one can define the <em>conditional expectation</em> <span class="math-container">$\mathsf E(Y\mid X)$</span> as the, informally speaking, &quot;best approximation to <span class="math-container">$Y$</span> if <span class="math-container">$X$</span> is known&quot;. See for instance [1; Definition 8.11] for a formal definition.</p> <p>Now, it follows from the definition that <span class="math-container">$\mathsf E(Y\mid X)$</span> is <span class="math-container">$\sigma(X)$</span>-measurable, so by [1; Korollar 1.97], there exists a measurable function <span class="math-container">$\eta: \mathbb R\to\mathbb R$</span> such that <span class="math-container">$$\mathsf E(Y\mid X) = \eta\circ X$$</span> <span class="math-container">$\mathsf P$</span>-almost everywhere. The function <span class="math-container">$\eta$</span> is uniquely determined <span class="math-container">$X_\#\mathsf P$</span>-almost everywhere. (<strong>TODO: Prove this.</strong>)</p> <p>(Here, <span class="math-container">$X_\#\mathsf P$</span> denotes the pushforward of <span class="math-container">$\mathsf P$</span> under <span class="math-container">$X$</span>, i.e. 
<span class="math-container">$X_\#\mathsf P(A)\overset{\text{Def.}}=\mathsf P(X^{-1}(A))$</span> for all measurable <span class="math-container">$A\subset\mathbb R$</span>.)</p> <p>For example, suppose that we have a random variable <span class="math-container">$Y$</span> and a random variable <span class="math-container">$X$</span> such that <span class="math-container">$\mathsf E(Y\mid X)=X^2$</span>. Then we have <span class="math-container">$\eta(x)=x^2$</span> <span class="math-container">$X_\#\mathsf P$</span>-almost everywhere.</p> <p>Therefore, one can now <em>define</em>, with an abuse of notation, (and using Iverson brackets, i.e. if <span class="math-container">$A$</span> is an event then <span class="math-container">$[A]$</span> shall denote the random variable that is the indicator function of <span class="math-container">$A$</span>, it is often also denoted by <span class="math-container">$\mathbf 1_A$</span>) <span class="math-container">$$\mathsf P(Y=1\mid X=x)=\eta(x)$$</span> where <span class="math-container">$\eta$</span> is a function satisfying <span class="math-container">$$\mathsf E([Y=1]\mid X) = \eta\circ X$$</span> <span class="math-container">$\mathsf P$</span>-almost everywhere.</p> <hr /> <p>So, the Wikipedia article (with very confusing notation in my opinion) just says this: <span class="math-container">$$R(h)=\mathsf P(h(X)\neq Y)=\mathsf E([h(X)\neq Y]).$$</span> Since <span class="math-container">$h(X), Y$</span> only take the values <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, <span class="math-container">$$\mathsf E([h(X)\neq Y]) = \mathsf E([h(X)=0][Y=1])+\mathsf E([h(X)=1][Y=0]).$$</span></p> <p>By the tower property for the conditional expectation [1; Satz 8.14 (iv)], we have <span class="math-container">$$\mathsf E([h(X)=0][Y=1]) = \mathsf E(\mathsf E([h(X)=0][Y=1]\mid X)).$$</span> By [1; Satz 8.14 (iii)], since <span class="math-container">$h(X)$</span> is <span 
class="math-container">$\sigma(X)$</span>-measurable (assuming <span class="math-container">$h$</span> is measurable), we have <span class="math-container">$$\mathsf E([h(X)=0][Y=1]\mid X) = [h(X)=0]\mathsf E([Y=1]\mid X).$$</span></p> <p>But we chose the notation <span class="math-container">$E([Y=1]\mid X)=\eta(X)$</span>, so we get <span class="math-container">$$\mathsf E([h(X)=0][Y=1]) = \mathsf E([h(X)=0] \eta(X)).$$</span> Analogously (exercise), we have <span class="math-container">$$\mathsf E([h(X)=1][Y=0]) = \mathsf E([h(X)=1] (1-\eta(X))),$$</span> and this is enough to conclude the proof for what you wanted to show.</p> <h1>Literature</h1> <p>[1] Achim Klenke: <em>Wahrscheinlichkeitstheorie.</em> 3. Auflage (2012/2013). Springer Spektrum.</p>
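<p>A Monte Carlo sketch of the risk decomposition above, for a toy model of my own choosing (not from the answer): take $X\sim\mathrm{Uniform}(0,1)$ and $\eta(x)=x$. The Bayes classifier $h(x)=[x&gt;1/2]$ then has risk $\mathsf E[\min(\eta(X),1-\eta(X))]=\int_0^{1/2}x\,dx+\int_{1/2}^1(1-x)\,dx=1/4$.</p>

```python
import random

random.seed(0)

# Toy model (my choice, not from the answer): X ~ Uniform(0, 1) and
# eta(x) = P(Y = 1 | X = x) = x; the Bayes classifier is h(x) = [x > 1/2].
def eta(x):
    return x

def h(x):
    return 1 if x > 0.5 else 0

# Exact risk: E([h(X)=0] eta(X) + [h(X)=1] (1 - eta(X)))
#           = int_0^{1/2} x dx + int_{1/2}^1 (1 - x) dx = 1/4.
N = 200_000
errors = 0
for _ in range(N):
    x = random.random()
    y = 1 if random.random() < eta(x) else 0   # sample Y given X = x
    errors += (h(x) != y)
risk = errors / N
print(risk)  # close to 0.25
```

<p>The empirical misclassification rate agrees with the exact Bayes risk $1/4$.</p>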
1,555,429
<p>Hi, I am trying to find the sum of this series:</p> <p>$$ 11 + 2 + \frac 4 {11} + \frac 8 {121} + \cdots $$</p> <p>I know it's a geometric series, but I cannot find the pattern behind it. </p>
Brevan Ellefsen
269,764
<p><strong>Hint</strong></p> <p>The geometric sequence can be rewritten as $\frac{1}{11^{-1}}, \frac{2}{11^0}, \frac{4}{11^1}, \frac{8}{11^2},\ldots$ Notice the powers of two on top and the powers of eleven on the bottom.</p>
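<p>A quick numerical check (my own addition, not part of the hint): with first term $a=11$ and common ratio $r=2/11$, the series sums to $a/(1-r)=121/9$.</p>

```python
# With first term a = 11 and common ratio r = 2/11, the series
# 11 + 2 + 4/11 + 8/121 + ... sums to a / (1 - r) = 121/9.
a, r = 11, 2 / 11
partial = sum(a * r**k for k in range(60))   # 60 terms is plenty: r^60 is tiny
closed_form = a / (1 - r)                    # = 121/9
print(partial, closed_form)
```

<p>Both values agree with $121/9\approx 13.44$.</p>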
14,515
<p>My problem is: What is the expression in $n$ that equals to $\sum_{i=1}^n \frac{1}{i^2}$?</p> <p>Thank you very much~</p>
Aryabhata
1,102
<p>I don't think there is a "closed" form. You can give a good approximation using the <a href="http://en.wikipedia.org/wiki/Euler%E2%80%93Maclaurin_formula" rel="nofollow noreferrer">Euler–Maclaurin summation</a> formula though:</p> <p>$$\sum_{j=1}^{n} \dfrac{1}{j^2} = \dfrac{\pi^2}{6} - \dfrac{1}{n} + \dfrac{1}{2n^2} + \mathcal{O}\left(\dfrac{1}{n^3}\right)$$</p> <p>(If you need more accuracy you can include more terms from the summation formula to give the coefficients of the lower-order terms.)</p> <p>Note: The Euler–Maclaurin summation formula only tells us that</p> <p>$$\sum_{j=1}^{n} \dfrac{1}{j^2} = C - \dfrac{1}{n} + \dfrac{1}{2n^2} + \mathcal{O}\left(\dfrac{1}{n^3}\right)$$</p> <p>for some constant $\displaystyle C$.</p> <p>We know by other means that $\displaystyle C = \dfrac{\pi^2}{6}$; for instance, see this for a multitude of ways: <a href="https://math.stackexchange.com/questions/8337/different-methods-to-compute-sum-n-1-infty-frac1n2">Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$</a></p>
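<p>A quick numerical sanity check of the expansion (my own addition; note that the second-order term enters with a $+$ sign, i.e. the expansion is $\frac{\pi^2}{6}-\frac1n+\frac{1}{2n^2}-\frac{1}{6n^3}+\cdots$):</p>

```python
import math

# Compare the partial sum with pi^2/6 - 1/n + 1/(2 n^2); the residual
# should shrink like -1/(6 n^3).
def partial_sum(n):
    return sum(1 / j**2 for j in range(1, n + 1))

def approx(n):
    return math.pi**2 / 6 - 1 / n + 1 / (2 * n**2)

for n in (10, 100, 1000):
    print(n, partial_sum(n) - approx(n), -1 / (6 * n**3))
```

<p>For each $n$ the two printed residuals match closely, confirming the $\mathcal O(1/n^3)$ error term.</p>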
14,515
<p>My problem is: What is the expression in $n$ that equals to $\sum_{i=1}^n \frac{1}{i^2}$?</p> <p>Thank you very much~</p>
Tyler Clark
3,791
<p>I am not sure if this will work or not, but maybe you could try writing the expression in terms of <a href="http://mathworld.wolfram.com/FallingFactorial.html" rel="nofollow">falling factorials</a>. Then maybe use <a href="http://mathworld.wolfram.com/SummationbyParts.html" rel="nofollow">summation by parts</a>. I am not sure how nicely this will work, but you could try it. Let me know what you find out!</p>
956,256
<p>If $a_n \ge 0$ for all $n$, prove that $\sum_{n=1}^\infty a_n$ converges if and only if $\sum_{n=1}^\infty {a_n\over 1+a_n}$ converges. </p> <p>Here is my attempt!</p> <p>$\Rightarrow$ Suppose that $\epsilon &gt; 0$ is given and $\sum a_n$ converges; then for all $N\le n \le m$, $$\sum_{k=n+1}^m a_k \lt \epsilon.$$ Let $b_n={a_n\over 1+a_n}$ and notice that $b_n \le a_n$.</p> <p>Then, applying the comparison test, $$|\sum_{k=n+1}^m b_k| \le \sum_{k=n+1}^m |b_k |\le\sum_{k=n+1}^m a_k\lt\epsilon.$$ Hence, $\sum {a_n\over 1+a_n}$ converges.</p> <p>Let me know if it is wrong; also, I don't know how to start the other direction. Any help is appreciated!</p>
symmetricuser
125,084
<p>You made a minor mistake: $v(t)$ should be $112 - 32t$. This should make everything correct. As for the second part, ground means $s(t)=0$. So find the corresponding time and plug into $v(t)$ to find the impact velocity.</p>
629,347
<p>I understand <strong>how</strong> to calculate the dot product of the vectors. But I don't actually understand <strong>what</strong> a dot product is, and <strong>why</strong> it's needed.</p> <p>Could you answer these questions?</p>
Cameron Williams
22,551
<p>Dot products are very geometric objects. They actually encode relative information about vectors, specifically they tell us &quot;how much&quot; one vector is in the direction of another. Particularly, the dot product can tell us if two vectors are (anti)parallel or if they are perpendicular.</p> <p>We have the formula <span class="math-container">$\vec{a}\cdot\vec{b} = \lVert \vec{a}\rVert\lVert \vec{b}\rVert\cos(\theta)$</span>, where <span class="math-container">$\theta$</span> is the angle between the two vectors in the plane that they make. If they are perpendicular, <span class="math-container">$\theta = 90^{\circ}, 270^{\circ}$</span> so that <span class="math-container">$\cos(\theta) = 0$</span>. This tells us that the dot product is zero. This reasoning works in the opposite direction: if the dot product is zero, the vectors are perpendicular.</p> <p>This gives us a quick way to tell if two vectors are perpendicular. It also gives easy ways to do projections and the like.</p>
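<p>A small Python sketch of the formula above (my own illustration): compute the dot product and recover the angle between two vectors from $\vec a\cdot\vec b = \lVert\vec a\rVert\lVert\vec b\rVert\cos\theta$.</p>

```python
import math

# Compute a.b and recover the angle via a.b = |a||b| cos(theta).
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def norm(a):
    return math.sqrt(dot(a, a))

def angle_degrees(a, b):
    return math.degrees(math.acos(dot(a, b) / (norm(a) * norm(b))))

print(dot((1, 0), (0, 1)))            # 0 -> the vectors are perpendicular
print(angle_degrees((1, 0), (1, 1)))  # ~45 degrees
```

<p>A zero dot product immediately flags perpendicular vectors, without computing the angle at all.</p>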
3,629,186
<p>Assume that <span class="math-container">$x=x(t)$</span> and <span class="math-container">$y=y(t)$</span>. Find <span class="math-container">$dx/dt$</span> given the other information.</p> <p><span class="math-container">$x^2−2xy−y^2=7$</span>; <span class="math-container">$\frac{dy}{dt} = -1$</span> when <span class="math-container">$x=2$</span> and <span class="math-container">$y=-1$</span></p> <p>I am trying to figure this problem out. My book does not give one example similar to it.</p> <p>I'm assuming that the first step is to find the derivative, to which I get</p> <p><span class="math-container">$2x\left(\frac{dy}{dt}\right)-2\left[x(\frac{dy}{dt})+y(\frac{dx}{dt})\right]-2y\left(\frac{dx}{dt}\right) = 0$</span></p> <p>I'm not sure if this is correct, and I'm not sure what to do after this. Do I just plug in <span class="math-container">$x$</span> and <span class="math-container">$y$</span>?</p>
Allawonder
145,126
<p>That's not correct. Letting primes denote differentiation with respect to <span class="math-container">$t,$</span> we obtain <span class="math-container">$(x^2)'-2(xy)'-(y^2)'=7',$</span> which gives <span class="math-container">$2xx'-2(xy'+yx')-2yy'=0,$</span> which simplifies to become <span class="math-container">$$(2x-2y)x'-(2x+2y)y'=0.$$</span></p> <p>Then given that <span class="math-container">$y',\,x$</span> and <span class="math-container">$y,$</span> you want to find <span class="math-container">$x'.$</span> Just plug away and solve the linear equation.</p>
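<p>As a check (my own sketch), plugging the given values into the simplified equation and solving the resulting linear equation for $x'$:</p>

```python
# Plug x = 2, y = -1, dy/dt = -1 into (2x - 2y) x' - (2x + 2y) y' = 0
# and solve the linear equation for x' = dx/dt.
x, y, dydt = 2, -1, -1
dxdt = (2 * x + 2 * y) * dydt / (2 * x - 2 * y)
print(dxdt)  # -0.333... = -1/3
```

<p>So under the given data, $dx/dt=-1/3$.</p>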
615,067
<p>I heard that Weil proved the Riemann hypothesis for finite fields. Where can I find the details of the proof? I found the following sketch, but I was unable to fill in the details: </p> <p>Motivation: I am trying to understand the elementary theory of finite fields, but I'm not an expert in algebraic geometry; it would be nice to get some hints on what I should study before I can understand schemes well enough to follow the proof.</p> <p>Let $C,E$ be two proper smooth curves over a field $k$, and $f:C\to E$ a finite morphism. Let us set $X=C\times_{\operatorname{Spec}k}E$. Let us consider the graph $\Gamma_f\subseteq X$ of $f$ endowed with the reduced closed subscheme structure.</p> <p>(a) Let $p_1:X\to C$ and $p_2:X\to E$ denote the projections. Then $p_1$ induces an isomorphism $\varphi:\Gamma_f\simeq C$. Show that $\omega_{X/k}\simeq p_1^*\omega_{C/k}\otimes p_2^*\omega_{E/k}$ and that $\omega_{X/k}|_{\Gamma_f}\simeq \varphi^*\omega_{C/k}\otimes \varphi^*f^*\omega_{E/k}$.</p> <p>(b) Show that</p> <p>$$\operatorname{deg}_k\omega_{X/k}\mid_{\Gamma_f}=2g(C)-2+(\operatorname{deg} f)(2g(E)-2).$$</p> <p>Deduce from this that $\Gamma_f^2=(\operatorname{deg} f)(2-2g(E))$.</p> <p>(c) Let us henceforth suppose that $C=E$. Let $\Delta\subset X$ denote the diagonal. Show that $\Delta^2=2-2g(C)$.</p> <p>(d) Let us suppose that $f\ne \operatorname{Id}_C$. Let $x\in X(k)\cap \Delta\cap \Gamma_f$, let $y=p_1(x)$, and let $t$ be a uniformizing parameter for $\mathcal{O}_{C,y}$. Show that</p> <p>$$i_x(\Gamma_f,\Delta)=\operatorname{length}\mathcal{O}_{C,y}/(\sigma(t)-t),$$</p> <p>where $\sigma$ is the automorphism of $\mathcal{O}_{C,y}$ induced by $f$.</p> <p>(e) Let us take a finite field $k=\mathbb{F}_{p^r}$ of characteristic $p&gt;0$, and let $f:C\to C$ be the Frobenius $F_C^r$. Show that the divisors $\Gamma_f,\Delta$ meet transversally and that $\Gamma_f\cap\Delta\subseteq X(k)$. Deduce from this that the cardinal $N$ of $C(k)$ is given by $N=\Gamma_f\cdot \Delta$.</p>
Igor Rivin
109,865
<p>A reasonably elementary introduction is given by Gabizon and Paskin-Cherniavsky <a href="http://www.scribd.com/doc/109789239/weilcourse-pdf" rel="nofollow">here.</a></p>
1,376,659
<p>Let $5=\frac ab$ for some $a,b \in \mathbb N$ with $(a,b)=1$. <br> Squaring both sides, <br> $25b^2=a^2$ <br> Thus, $25|a^2$; $25|a$ <br> So $a=25m$ <br> Substituting, $25b^2=25^2m^2$ <br> So $b^2=25m^2$ <br> So $25|b$ (by the same logic used before). <br> But our assumption is proved to be wrong, because $25$ comes out to be a common factor. So contradiction, proving that $5$ is not rational. So how is it possible?</p>
mvw
86,776
<p>The volume of the big box is $V_B = 7\cdot 9 \cdot 11 = 693$, the total volume of the small boxes is $V_b = 77 \cdot 3 \cdot 3 \cdot 1 = 693$.</p> <p>This means the volume of the small boxes is sufficient and we need to use all small boxes.</p> <p>Let us try to model this problem Tetris style: </p> <ul> <li>We have a base field, e.g. $7\times 9$, and need to drop all 77 small boxes over it.</li> <li>For each drop we have two decisions: <ul> <li>where to put the center of the box $c = (c_x, c_y, c_z)$ over the base field $(c_x, c_y) \in I_x \times I_y$ with $I_x = \{ 1, \ldots, 7 \}$ and $I_y = \{ 1, \ldots, 9 \}$</li> <li>how to orientate the $3\times 3\times 1$ box. There seem to be only three feasible orientations: <ul> <li>a large $3\times 3$ side as base, like a pizza box ("O") </li> <li>a small $3 \times 1$ side as base, orientated along the $x$-axis ("-")</li> <li>a small $3 \times 1$ side as base, orientated along the $y$-axis ("|")</li> </ul></li> </ul></li> <li>We lose if after the drop some part of the box sticks outside the big volume</li> <li>We win if we dropped all $77$ boxes without losing.</li> </ul> <p>This is a search space of $77\times 7 \times 9 \times 3 = 14553$ drop configurations. Not that much for a machine.</p> <p>We could avoid the drop simulation and instead have $c_z$ as another choice. This would enlarge the search space to $77\times 7 \times 9 \times 11 \times 3 = 160083$ configurations. In both cases we need to check that boxes do not intersect.</p> <p>This should be sufficient to code a solver which visits all configurations of the search space (brute force) and will answer the question by either listing feasible configurations or reporting that there is no solution.</p> <p>Note: I submitted this before MJD published a counter argument.</p>
2,936,269
<p>How do you simplify: <span class="math-container">$$\sqrt{9-6\sqrt{2}}$$</span></p> <p>A classmate of mine changed it to <span class="math-container">$$\sqrt{9-6\sqrt{2}}=\sqrt{a^2-2ab+b^2}$$</span> but I'm not sure how that helps or why it helps.</p> <p>This question is probably too easy for Math Stack Exchange, but I'm not sure where else to post it.</p>
Siong Thye Goh
306,553
<p><span class="math-container">\begin{align} 9 - 6\sqrt2 &amp;= 3 (3-2\sqrt2) \\ &amp;= 3((\sqrt2)^2 - 2(1)\sqrt{2} +1^2) \\ &amp;= 3(\sqrt2-1)^2 \end{align}</span> Hence, <span class="math-container">$$\sqrt{9-6\sqrt2} = \sqrt{3}(\sqrt2 - 1)$$</span></p>
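<p>A quick floating-point check of the identity (my own addition):</p>

```python
import math

# Floating-point confirmation that sqrt(9 - 6*sqrt(2)) = sqrt(3) * (sqrt(2) - 1).
lhs = math.sqrt(9 - 6 * math.sqrt(2))
rhs = math.sqrt(3) * (math.sqrt(2) - 1)
print(math.isclose(lhs, rhs))  # True
```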
3,628,159
<p>I have <span class="math-container">$1,2,\ldots, n$</span> numbers and I want pick <span class="math-container">$k$</span> of them with replacement and such that order matters. </p> <p>So for <span class="math-container">$n=10$</span> and <span class="math-container">$k=4$</span> I can get: <span class="math-container">$(1,2,2,4), (1,2,4,2), (1,2,3,10)$</span>,...etc</p> <p>I then have <span class="math-container">$n^k$</span> possible combinations. But now I only want to count <strong>the tuples which have a unique number</strong>. So <span class="math-container">$(1,2,2,4)$</span> and <span class="math-container">$(1,1,1,2)$</span> would be included but <span class="math-container">$(1,1,2,2)$</span> would not be included since both 1 and 2 are not unique? How do I count these?</p> <p>I figured that I can pick a number out of <span class="math-container">$n$</span> for the first element in my tuple and then the remaining <span class="math-container">$k-1$</span> elements out of the remaining <span class="math-container">$n-1$</span> numbers, so the number of combinations would be <span class="math-container">$n\,(n-1)^{k-1}$</span>. Since I have <span class="math-container">$k$</span> possible locations for the unique number I get <span class="math-container">$k\, n\, (n-1)^{k-1}$</span>. However, clearly I am counting some combinations multiple times and I am not sure how to discount them.</p>
DonAntonio
31,254
<p>You could argue as follows: suppose <span class="math-container">$\;3\mid(4^n+5)\;$</span> isn't true for all natural numbers, and let <span class="math-container">$\;K:=\left\{\,k\in\Bbb N\;|\; 3\nmid(4^k+5)\,\right\}\;$</span> . As <span class="math-container">$\;K\neq\emptyset\;$</span> by assumption, the WOP tells us it has a first element, say <span class="math-container">$\;k_0\;$</span> (note that <span class="math-container">$\;k_0&gt;1\;$</span>, since <span class="math-container">$\;3\mid 4^1+5=9\;$</span>). But then <span class="math-container">$\;3\mid(4^{k_0-1}+5)\;$</span> , and then</p> <p><span class="math-container">$$4^{k_0}+5=4\cdot4^{k_0-1}+5=(4^{k_0-1}+5)+3\cdot4^{k_0-1}$$</span></p> <p>But the first term in the rightmost expression is divisible by <span class="math-container">$\;3\;$</span> by the minimality of <span class="math-container">$\;k_0\;$</span> , whereas the second term there is trivially divisible by <span class="math-container">$\;3\;$</span> , and thus the whole expression is divisible by <span class="math-container">$\;3\;$</span> , which provides a contradiction to <span class="math-container">$\;K\neq\emptyset\;$</span> .</p> <p>As you can see, the above is almost exactly the same as a proof by mathematical induction...which isn't a surprise, as the two are equivalent.</p>
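<p>A quick empirical check of the claim (my own addition, not part of the argument):</p>

```python
# Empirical check: 3 divides 4^n + 5 for the first hundred n; the key identity
# in the argument is 4^n + 5 = (4^{n-1} + 5) + 3 * 4^{n-1}.
ok = all((4**n + 5) % 3 == 0 for n in range(1, 101))
print(ok)  # True
```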
3,531,971
<p>Let <span class="math-container">$T$</span> a linear operator. <span class="math-container">$T$</span> is bounded then ker(<span class="math-container">$T$</span>) is closed.</p> <p><b>My attempt:</b></p> <p>Let <span class="math-container">$\{x_n\}\subset \ker(T)$</span>.</p> <p>As <span class="math-container">$T$</span> is bounded then exists <span class="math-container">$M&gt;0$</span> such that <span class="math-container">$||T(x_n)||_y\leq M.||x_n||$</span></p> <p>Note that <span class="math-container">$T(x_n)=0$</span> for all <span class="math-container">$n\in \mathbb{N}$</span>.</p> <p>then <span class="math-container">$\lim_{n\rightarrow\infty}T(x_n)=0$</span></p> <p>Here i'm stuck. can someone help me?</p>
Leonardo
557,543
<p>Continuing with your proof: suppose <span class="math-container">$x_n \in \text{ker}T$</span> converges to <span class="math-container">$x$</span>. You have to show that <span class="math-container">$x \in \text{ker}T$</span>. Since <span class="math-container">$T$</span> is linear and bounded, it is also continuous, hence sequentially continuous; therefore the limit of <span class="math-container">$T(x_n)$</span> is <span class="math-container">$T(x)$</span>, and it must be zero, since <span class="math-container">$T(x_n)=0$</span> for all <span class="math-container">$n$</span>. So the claim follows.</p> <p>Alternatively, <span class="math-container">$\text{ker}T=T^{-1}(\{0\})$</span>. Since <span class="math-container">$\{0\}$</span> is closed and <span class="math-container">$T$</span> is continuous (because linear and bounded), <span class="math-container">$\text{ker}T=T^{-1}(\{0\})$</span> is closed as well.</p>
484,313
<p>I am taking linear algebra and none of this stuff is explained. I found this helpful link <a href="http://www.math.ucla.edu/~pskoufra/M115A-Notation.pdf" rel="nofollow">http://www.math.ucla.edu/~pskoufra/M115A-Notation.pdf</a></p> <p>but it is missing a lot of what I need to know. Just right now though, what do v and ^ mean in the context of linear algebra and set stuff? It is not defined anywhere in my book and it is exceptionally frustrating trying to read this...stuff. Also what does something like (upside down A) $x(x \in A \rightarrow x \in B)$ mean?</p> <p>More context </p> <p>$\forall x(x \in A \rightarrow x \in B) \wedge (\exists x \in B \wedge x \not\in A)$</p>
Emily
31,475
<p>Sometimes, $\wedge$ is used to mean "and", whereas $\vee$ is used to mean "or." The LaTeX code for these are <code>\wedge</code> and <code>\vee</code>, respectively.</p> <p>$\forall$ means "for all". So if you wrote $\forall x \in S$, this means literally "for every element $x$ in the set $S$."</p> <p>$\exists$ means "there exists". This is commonly used with $\forall$ in mathematics. For instance:</p> <blockquote> <p>Let $E$ denote all even positive integers. Then, $\forall x \in E$, $\exists n \ge 1$ such that $2n = x$.</p> </blockquote> <p>This means that for any element in the set $E$, we can find some number $n$ greater than or equal to 1 that, when multiplied by 2, becomes $x$. In other words, all even numbers are multiples of 2.</p> <p>This is a trivially easy example, but the compactness of this nomenclature is certainly useful when you start getting into more complicated definitions of ideas.</p>
1,397,776
<p>Suppose $X_1,\ldots,X_n$ are iid r.v.'s, each with pdf $f_{\theta}(x)=\frac{1}{\theta}I\{\theta&lt;x&lt;2\theta\}$. I find the minimal sufficient statistics $(X_{(1)},X_{(n)})$. I am trying to prove it is complete. Can someone give me hint? Also are there any complete sufficient statistics in this model?</p>
Saty
213,298
<p>Find $E(X_{(1)})$ and $E(X_{(n)})$. Play with them to make it $0$ i.e. find $a,b$ such that $E[aX_{(1)}+bX_{(n)}]=0$. Call $g(T)=aX_{(1)}+bX_{(n)}$. Then we have $E[g(T)]=0$ but does that mean $g(T)=0$ a.e.? No right?</p>
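<p>A simulation sketch of this hint (my own illustration, with the arbitrary choices $n=5$, $\theta=1$). If I compute correctly, for $X_i\sim\mathrm{Uniform}(\theta,2\theta)$ one gets $E[X_{(1)}]=\theta\frac{n+2}{n+1}$ and $E[X_{(n)}]=\theta\frac{2n+1}{n+1}$, so $g(T)=(2n+1)X_{(1)}-(n+2)X_{(n)}$ has mean zero for every $\theta$, yet is clearly not zero almost everywhere:</p>

```python
import random
import statistics

random.seed(1)

# Illustration with (arbitrary) n = 5, theta = 1:
# g(T) = (2n+1)*X_(1) - (n+2)*X_(n) has E[g(T)] = 0 for every theta,
# yet g(T) is not zero almost everywhere -- hence T is not complete.
n, theta, trials = 5, 1.0, 200_000
g_values = []
for _ in range(trials):
    xs = [random.uniform(theta, 2 * theta) for _ in range(n)]
    g_values.append((2 * n + 1) * min(xs) - (n + 2) * max(xs))
print(statistics.mean(g_values))    # close to 0
print(statistics.pstdev(g_values))  # clearly positive: g(T) is not 0 a.s.
```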
52,874
<p>Consider a coprime pair of integers $a, b.$ As we all know ("Bezout's theorem") there is a pair of integers $c, d$ such that $ac + bd=1.$ Consider the smallest (in the sense of Euclidean norm) such pair $c_0, d_0$, and consider the ratio $\frac{\|(c_0, d_0)\|}{\|(a, b)\|}.$ The question is: what is the statistics of this ratio as $(a, b)$ ranges over all <em>visible</em> pairs in, for example, the square $1\leq a \leq N, 1 \leq b \leq N?$</p> <p>Experiment shows the following amazing histogram:<img src="https://dl.dropbox.com/u/5188175/histogram.jpg" alt="alt text"></p> <p><strong>EDIT</strong> by popular demand: the histogram is for an experiment for $N=1000.$ The $x$ axis is the ratio, the $y$ axis is the number of points in the bin. The total number of points is $1000000/\zeta(2),$ so there are $100$ bins each with around $6000$ points.</p> <p>But no immediate proof leaps to mind.</p>
Bill Thurston
9,062
<p>Here's a more geometric formulation of your question:</p> <p>On the torus $\mathbb R^2/\mathbb Z^2$, consider a long simple closed geodesic $\overline {(0,0)(a,b)}$. It cuts the torus into a thin cylinder; the cylinder is joined to itself by a twist by some angle to form the torus. What is the distribution of the angle of the twist?</p> <p>From this perspective, perhaps your intuition tells you that the angles of twist should tend toward the uniform distribution, as the homotopy classes of geodesics are chosen uniformly with longer and longer lengths.</p> <p>To get a rigorous argument, we can think about the problem from the opposite direction. Begin by starting from a long thin annulus of area 1, and ask what are the ways to glue it together to form a torus? You can glue it by any angle; however, the torus you get is not usually isometric to the square torus; but it is isometric to $\mathbb E^2$ modulo some lattice of area 1.</p> <p>The space of lattices up to similarity in $\mathbb E^2$ together with a choice of positively oriented generators is the Teichmüller space for the torus, and can be identified with the hyperbolic plane, in the upper halfspace model: make the first vector go from $0$ to $1$ along the $x$-axis, and the second vector will be a point $z$ in upper half space. Change of generators acts by fractional linear transformations, and preserves the hyperbolic metric; the quotient is the space of isometry classes of Euclidean tori of area 1, the moduli space of the torus.</p> <p>Twisting an annulus is a well-studied operation, the horocycle flow, on the moduli space. The action preserves a probably familiar tiling of the hyperbolic plane by ideal triangles whose vertices are at rational points on the bounding line, completed to make $\mathbb {RP}^1$, and whose edges connect pairs of slopes corresponding to your equation. The points in Teichmüller space that give square toruses are the midpoints of the edges. 
(Actually, the edges are infinitely long, so they don't have an obvious definition of midpoint, but the triangles have altitudes whose feet we can call the midpoints: these are the square toruses).</p> <p>In any case: if you look at all lattice vectors with length between say $N$ and $2N$, and ask for the distribution of angles among these, this is equivalent to taking all points representing square lattices in a band in Teichmüller space between horocycles that appear in upper half space as a rectangle bounded by horizontal lines at height $1/N$ and $1/2N$ and vertical lines $x = \pm 1/2$; the question is the distribution of $x$-coordinates of points representing square toruses within that band. That this tends to the uniform distribution follows from ergodicity of the horocycle flow, a well-known fact whose history probably predates the sources I'm familiar with, so I won't try to give the attribution.</p>
52,874
<p>Consider a coprime pair of integers $a, b.$ As we all know ("Bezout's theorem") there is a pair of integers $c, d$ such that $ac + bd=1.$ Consider the smallest (in the sense of Euclidean norm) such pair $c_0, d_0$, and consider the ratio $\frac{\|(c_0, d_0)\|}{\|(a, b)\|}.$ The question is: what is the statistics of this ratio as $(a, b)$ ranges over all <em>visible</em> pairs in, for example, the square $1\leq a \leq N, 1 \leq b \leq N?$</p> <p>Experiment shows the following amazing histogram:<img src="https://dl.dropbox.com/u/5188175/histogram.jpg" alt="alt text"></p> <p><strong>EDIT</strong> by popular demand: the histogram is for an experiment for $N=1000.$ The $x$ axis is the ratio, the $y$ axis is the number of points in the bin. The total number of points is $1000000/\zeta(2),$ so there are $100$ bins each with around $6000$ points.</p> <p>But no immediate proof leaps to mind.</p>
David Feldman
10,909
<p>Roughly:</p> <p>Suppose you have a fraction $a/b$ and you expand it as a simple continued fraction $[0; c_1, c_2, \ldots, c_{n-1}, c_n]$. Now drop the last partial quotient and collapse $[0; c_1, c_2, \ldots, c_{n-1}]$ to get, say, $a'/b'$. Now consider the simple continued fraction expansion of $b'/b$. As I recall, this will (usually? perhaps I need a hypothesis to avoid degenerate cases?) equal the <em>reverse</em> of the continued fraction of $a/b$. </p> <p>For complicated fractions the beginning and end of a continued fraction should be almost uncorrelated. Also, for a random fraction the partial quotients have a known distribution (Gauss–Kuzmin), and reversing the continued fraction doesn't change that distribution, so you get the uniform distribution back. </p>
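<p>A small Python illustration (my own addition) of the standard identity behind this answer: if $a/b=[a_0;a_1,\ldots,a_n]$ with convergent denominators $q_k$, then $q_n/q_{n-1}=[a_n;a_{n-1},\ldots,a_1]$, the <em>reversed</em> expansion. (As the author's caveat suggests, a mild non-degeneracy hypothesis such as $a_1\ge 2$ is needed for the canonical expansions to match.)</p>

```python
def cf(a, b):
    """Continued fraction (partial quotients) of a/b via the Euclidean algorithm."""
    terms = []
    while b:
        terms.append(a // b)
        a, b = b, a % b
    return terms

def denominators(terms):
    """Denominators q_k of the convergents of [a_0; a_1, ..., a_n]."""
    q_prev, q = 1, 0   # seeds q_{-2} = 1, q_{-1} = 0
    qs = []
    for t in terms:
        q_prev, q = q, t * q + q_prev
        qs.append(q)
    return qs

terms = cf(44, 31)                # [1, 2, 2, 1, 1, 2]
qs = denominators(terms)          # ends with q_{n-1} = 12 and q_n = 31
print(cf(qs[-1], qs[-2]))         # [2, 1, 1, 2, 2]
print(list(reversed(terms[1:])))  # [2, 1, 1, 2, 2] -- the reversal
```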
1,567,152
<blockquote> <p>Theorem: $X$ is a finite Hausdorff space. Show that the topology is discrete.</p> </blockquote> <p>My attempt: $X$ is Hausdorff, and $T_2 \implies T_1$, so for any $x \in X$ the set $\{x\}$ is closed. Thus $X \setminus \{x\}$ is open. Now for any $y\in X \setminus \{x\}$, using the Hausdorff property on $x$ and $y$, we get that $\{x\}$ is open. Am I right till here? And how do I proceed further? </p>
Léreau
351,999
<p>If you showed that, for Hausdorff spaces , all one point sets $\{x\}$ are closed, it also follows that all finite subsets $F = \{x_1,\dots, x_n \}$ are closed, since $$ F = \bigcup_{j=1}^n \{x_j\} $$ will be closed as a finite union of closed sets.</p> <p>Let's apply this to the situation where $X$ is finite and Hausdorff. For any subset $U \subseteq X$, the complement $X \setminus U$ will be finite and thus closed by the argument above. Hence $U$ is open. </p> <p>So all subsets of $X$ are open, which means the topology is discrete.</p>
3,235,300
<p>I tried the following: whenever <span class="math-container">$x &gt; y$</span>, <span class="math-container">$p(x) - p(y) = (5/13)^x (1-(13/5)^{(x-y)}) + (12/13)^x (1- (13/12)^{(x-y)}) &gt; 0 $</span>. But I don't understand why the answer is no.</p>
Fred
380,717
<p>We have <span class="math-container">$p(x)=a^x+b^x -1$</span> with <span class="math-container">$0&lt; a,b&lt;1.$</span></p> <p>Then <span class="math-container">$p'(x)= a^x \ln a +b^x \ln b$</span>. Since <span class="math-container">$ \ln a , \ln b&lt;0$</span>, <span class="math-container">$p'(x)&lt;0.$</span></p>
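<p>A quick numerical check (my own sketch), using the question's $a=5/13$ and $b=12/13$:</p>

```python
# Check that p(x) = (5/13)^x + (12/13)^x - 1 is strictly decreasing on a grid.
a, b = 5 / 13, 12 / 13

def p(x):
    return a**x + b**x - 1

xs = [i / 10 for i in range(101)]                   # grid on [0, 10]
decreasing = all(p(u) > p(v) for u, v in zip(xs, xs[1:]))
print(decreasing)         # True
print(abs(p(2)) < 1e-12)  # True: x = 2 is a root; by monotonicity, the only one
```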
4,008,987
<p>I am reading a math book and in it, it says, &quot;Let <span class="math-container">$V$</span> be the set of all functions <span class="math-container">$f: \mathbb{Z^n_2} \rightarrow \mathbb{R}.$</span> I know that <span class="math-container">$\mathbb{Z^n_2}$</span> is just the cyclic group of order <span class="math-container">$2$</span> taken to the <span class="math-container">$n$</span>th power, but I don't get what the actual function means.</p> <p>How would such a function, <span class="math-container">$f$</span>, work? Could someone please give an example?</p>
Arturo Magidin
742
<p>The fact that <span class="math-container">$\mathbb{Z}_2^n$</span> is a group is completely irrelevant here. It is being used simply as a set. So let us discuss this construction without any reference to <span class="math-container">$\mathbb{Z}_2^n$</span>.</p> <p>Let <span class="math-container">$X$</span> be your favorite set. Define <span class="math-container">$$\mathbb{R}^X=\{f\colon X\to \mathbb{R}\mid f\text{ is a function}\}.$$</span> No other conditions on <span class="math-container">$f$</span>: just a function from the set <span class="math-container">$X$</span> to the real numbers.</p> <p>We define addition of elements of <span class="math-container">$\mathbb{R}^X$</span> as follows: given <span class="math-container">$f,g\in\mathbb{R}^X$</span>, I want to define a function from <span class="math-container">$X$</span> to <span class="math-container">$\mathbb{R}$</span> called “<span class="math-container">$f+g$</span>”. To do so, I need to tell you how to evaluate this function “<span class="math-container">$f+g$</span>” at each <span class="math-container">$x\in X$</span>. The rule is: <span class="math-container">$$(f+g)(x) = f(x) + g(x),$$</span> where the addition on the right hand side is the addition of real numbers. Note that since <span class="math-container">$f(x)$</span> and <span class="math-container">$g(x)$</span> are real numbers, this makes sense.</p> <p>Now, the set <span class="math-container">$\mathbb{R}^X$</span> together with this operation <span class="math-container">$+$</span> on functions is a commutative group: verify that <span class="math-container">$+$</span> is associative, for example, by showing that the functions <span class="math-container">$(f+g)+h$</span> and <span class="math-container">$f+(g+h)$</span> take the same value at each <span class="math-container">$x\in X$</span>. 
The identity of the operation <span class="math-container">$+$</span> on functions is the function <span class="math-container">$\mathbf{z}\colon X\to \mathbb{R}$</span> defined by <span class="math-container">$\mathbf{z}(x) = 0$</span> for all <span class="math-container">$x\in X$</span>. And the additive inverse of a function <span class="math-container">$f\colon X\to \mathbb{R}$</span> is the function <span class="math-container">$(-f)\colon X\to \mathbb{R}$</span> defined by <span class="math-container">$$(-f)(x) = -(f(x)),$$</span> where <span class="math-container">$-$</span> on the right hand side is the usual negative symbol of the real numbers.</p> <p>We define a scalar multiplication on <span class="math-container">$\mathbb{R}^X$</span> as follows: given <span class="math-container">$\alpha\in\mathbb{R}$</span> and <span class="math-container">$f\in\mathbb{R}^X$</span>, we define the function <span class="math-container">$(\alpha f)$</span> to be <span class="math-container">$$(\alpha f)(x) = \alpha\cdot f(x)\text{ for all }x\in X,$$</span> where <span class="math-container">$\alpha\cdot f(x)$</span> means the product in <span class="math-container">$\mathbb{R}$</span> of <span class="math-container">$\alpha$</span> and <span class="math-container">$f(x)$</span>, both real numbers.</p> <p>These two operations, <span class="math-container">$+$</span> and the scalar multiplication, turn <span class="math-container">$\mathbb{R}^X$</span> into a vector space over <span class="math-container">$\mathbb{R}$</span>.</p> <p>Now, we define a distinguished set of elements of <span class="math-container">$\mathbb{R}^X$</span>: for each <span class="math-container">$y\in X$</span>, define <span class="math-container">$f_y\colon X\to\mathbb{R}$</span> by <span class="math-container">$$f_y(x) = \delta_{yx} = \left\{\begin{array}{ll} 0 &amp; \text{if }y\neq x,\\ 1 &amp; \text{if }y=x. 
\end{array}\right.$$</span> So <span class="math-container">$f_y$</span> takes the value <span class="math-container">$1$</span> at <span class="math-container">$y$</span>, and the value zero everywhere else.</p> <p>If <span class="math-container">$X$</span> is finite, verify that the set <span class="math-container">$\{f_y\}_{y\in X}$</span> is a basis for the vector space <span class="math-container">$\mathbb{R}^X$</span>.</p> <p>This is the construction you are being presented with, except that the set <span class="math-container">$X$</span> is taken to be the underlying set of the group <span class="math-container">$\mathbb{Z}_2^n$</span>. But the fact that it is a group doesn’t matter at all; what is being used is the fact that it is a (finite) set.</p>
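<p>A computational sketch of this construction (my own addition, not from the text): model $\mathbb{R}^X$ for a small finite set $X$, storing each function as a dictionary of its values, and verify the decomposition $f=\sum_{y\in X} f(y)\, f_y$.</p>

```python
# Model R^X for the finite set X = {"a", "b", "c"}, storing each function
# f: X -> R as a dict of its values.
X = ["a", "b", "c"]

def add(f, g):
    return {x: f[x] + g[x] for x in X}

def scale(alpha, f):
    return {x: alpha * f[x] for x in X}

def delta(y):
    """The basis function f_y: value 1 at y and 0 elsewhere."""
    return {x: 1.0 if x == y else 0.0 for x in X}

# Any f decomposes as the sum over y of f(y) * f_y, so {f_y} spans R^X:
f = {"a": 2.0, "b": -1.0, "c": 0.5}
decomposed = {x: 0.0 for x in X}
for y in X:
    decomposed = add(decomposed, scale(f[y], delta(y)))
print(decomposed == f)  # True
```

<p>The same dictionaries also exhibit the zero function and additive inverses: <code>add(f, scale(-1.0, f))</code> is the function that is $0$ everywhere.</p>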
403,184
<p>A (non-mathematical) friend recently asked me the following question:</p> <blockquote> <p>Does the golden ratio play any role in contemporary mathematics?</p> </blockquote> <p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p> <p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p> <p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
Tom Copeland
12,178
<p>One of the most clever and amusing, yet deep, presentations I've seen on the interdisciplinary import of the golden ratio and Fibonacci sequences is the video sequence by V. Hart &quot;<a href="https://www.youtube.com/watch?v=ahXIMUkSXX0" rel="nofollow noreferrer">Doodling in Math: Spirals, Fibonacci, and Being a Plant</a>&quot; and, of course, the comments and refs in the OEIS entry <a href="https://oeis.org/A000045" rel="nofollow noreferrer">A000045</a> on the standard Fibonacci sequence and the golden ratio contain a lot of applications.</p>
403,184
<p>A (non-mathematical) friend recently asked me the following question:</p> <blockquote> <p>Does the golden ratio play any role in contemporary mathematics?</p> </blockquote> <p>I immediately replied that I never come across any mention of the golden ratio in my daily work, and would guess that this is the same for almost every other mathematician working today.</p> <p>I then began to wonder if I were totally correct in this statement . . . which has led me to ask this question.</p> <p>My apologies is this question is unsuitable for Mathoverflow. If it is, then please feel free to close it.</p>
Pietro Majer
6,101
<p>The golden ratio is also the order of convergence of the <a href="https://en.wikipedia.org/wiki/Secant_method" rel="nofollow noreferrer">secant method</a>.</p> <p><em><strong>edit Dec 2022.</strong></em> It seems no one has yet recalled the relevance of the golden ratio and Fibonacci numbers in the theory of <em><strong>continued fractions</strong></em>, due to <a href="https://en.wikipedia.org/wiki/Hurwitz%27s_theorem_(number_theory)" rel="nofollow noreferrer">Hurwitz's theorem</a>.</p> <p>It should be noticed that in both the examples above, and, it seems to me, in many examples quoted in these answers, the role of the golden ratio is not really due to a certain property in itself, but rather to the fact that it is the best (or worst) case in a given context, so that it actually gives optimal bounds. I think this explains (and in some sense resizes) the &quot;ubiquity&quot; of the golden ratio and Fibonacci numbers in mathematics and in nature. It is not that simple models like the golden section, or &quot;<span class="math-container">$x_n=x_{n-1}+x_{n-2}$</span>&quot;, or the most popular &quot;Fibonacci spiral&quot; really explain so many facts (although it seems that out there most people really think so); it seems more true that they give first approximations and sharp bounds to more complex situations.</p>
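This can be observed numerically (my own illustration, under the usual smoothness assumptions): for the secant method the quantities <span class="math-container">$L_k=-\ln e_k$</span> satisfy approximately <span class="math-container">$L_{k+1}\approx L_k+L_{k-1}$</span>, so the ratios <span class="math-container">$\ln e_{k+1}/\ln e_k$</span> approach the golden ratio.

```python
from decimal import Decimal, getcontext

getcontext().prec = 300          # high precision so roundoff doesn't mask the asymptotics
root = Decimal(2).sqrt()         # the positive root of f(x) = x^2 - 2

def f(x):
    return x * x - Decimal(2)

x_prev, x_cur = Decimal(1), Decimal(2)
log_errs = []
for _ in range(12):
    # secant step: x_{k+1} = x_k - f(x_k)(x_k - x_{k-1}) / (f(x_k) - f(x_{k-1}))
    x_next = x_cur - f(x_cur) * (x_cur - x_prev) / (f(x_cur) - f(x_prev))
    x_prev, x_cur = x_cur, x_next
    log_errs.append(abs(x_cur - root).ln())

# ratios ln e_{k+1} / ln e_k should approach phi = (1 + sqrt 5) / 2 ~ 1.618
ratios = [log_errs[k + 1] / log_errs[k] for k in range(len(log_errs) - 1)]
phi = (1 + 5 ** 0.5) / 2
assert abs(float(ratios[-1]) - phi) < 0.08
```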
KConrad
3,272
<p>In every real quadratic field <span class="math-container">$K$</span>, the unit group of its ring of integers <span class="math-container">$\mathcal O_K$</span> is known to have the form <span class="math-container">$\pm u^\mathbf Z$</span> for a unique number <span class="math-container">$u &gt; 1$</span>, which is called the fundamental unit of <span class="math-container">$K$</span> (really, of <span class="math-container">$\mathcal O_K$</span>). The field <span class="math-container">$K = \mathbf Q(\sqrt{5})$</span>, which is the real quadratic field with smallest discriminant (and the number field overall with the second smallest discriminant, after <span class="math-container">$\mathbf Q$</span>), has <span class="math-container">$\mathcal O_K = \mathbf Z[u]$</span> and <span class="math-container">$\mathcal O_K^\times = \pm u^\mathbf Z$</span> where <span class="math-container">$u = (1+\sqrt{5})/2$</span>.</p>
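A small sanity check (my own sketch): writing elements of <span class="math-container">$\mathbf Z[u]$</span> as pairs <span class="math-container">$(a,b)$</span> meaning <span class="math-container">$a+bu$</span> and using <span class="math-container">$u^2=u+1$</span>, one can verify with exact integer arithmetic that <span class="math-container">$u$</span> is a unit and that its powers encode Fibonacci numbers:

```python
# Elements of Z[u], u = (1 + sqrt 5)/2, as pairs (a, b) meaning a + b*u,
# multiplied via the relation u^2 = u + 1.
def mul(p, q):
    a, b = p
    c, d = q
    # (a + b u)(c + d u) = ac + (ad + bc) u + bd u^2 = (ac + bd) + (ad + bc + bd) u
    return (a * c + b * d, a * d + b * c + b * d)

u = (0, 1)
u_inv = (-1, 1)                  # u - 1 = (sqrt 5 - 1)/2
assert mul(u, u_inv) == (1, 0)   # u * (u - 1) = u^2 - u = 1, so u is a unit

# powers of u encode Fibonacci numbers: u^n = F(n-1) + F(n) u
p = (1, 0)
for _ in range(10):
    p = mul(p, u)
print(p)  # (34, 55) = (F(9), F(10))
```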
Brian Hopkins
14,807
<p>Clearly there are many possible answers; MathSciNet has 50 entries with &quot;golden mean&quot; in the title and 447 other entries with the phrase appearing &quot;anywhere.&quot; Let me mention three in particular.</p> <p>One of the commonly claimed applications of the Fibonacci numbers in nature is sunflower seeds. While some applications can be disputed, this one seems to be accurate. Michael Naylor wrote an engaging article &quot;Golden, √2, and Flowers: A Spiral Story&quot; for <em>Mathematics Magazine</em> (75(3) (2002) 163-172) that includes a reference to Mitchison's &quot;Phyllotaxis and the Fibonacci Series&quot; in <em>Science</em> (196 (April 1977) 270-275).</p> <p>A more surprising application may be in symbolic dynamics, where the &quot;golden mean shift&quot; consists of bi-infinite binary sequences that avoid adjacent ones. Similar to <span class="math-container">$a(n) = a(n-1) + a(n-2)$</span> being one of the most foundational recurrence sequences, this golden mean shift is a basic shift space in the younger field of dynamical systems.</p> <p>Finally, not so modern but unexpected: Fibonacci numbers arise in analyzing the Euclidean algorithm; they contribute the &quot;worst cases&quot;, e.g., running the algorithm on 8 and 5 takes four steps, and no pair <span class="math-container">$(a,b)$</span> with <span class="math-container">$a, b \le 12$</span> takes more. Knuth says of this result (due to Lamé and other 19th century French mathematicians), &quot;This theorem has the historical claim of being the first practical application of the Fibonacci sequence.&quot; (<em>TAOCP</em> 2, p. 360)</p>
Tony Huynh
2,233
<p>The golden mean also crops up in matroid theory. A matroid <span class="math-container">$M$</span> is a <em>golden mean</em> matroid if it can be represented by a real matrix such that every non-zero subdeterminant is <span class="math-container">$\pm \phi^i$</span>, for some <span class="math-container">$i \in \mathbb{Z}$</span>, where <span class="math-container">$\phi$</span> is the golden mean. Somewhat surprisingly, Vertigan proved that a matroid is a golden mean matroid if and only if it is representable over both <span class="math-container">$GF(4)$</span> and <span class="math-container">$GF(5)$</span>, where <span class="math-container">$GF(q)$</span> is the finite field with <span class="math-container">$q$</span> elements.</p> <p>See Stefan van Zwam's <a href="http://www.matroidunion.org/stefan/research.html" rel="noreferrer">PhD thesis</a> for more theorems along these lines.</p>
Timothy Chow
3,106
<p>Here is an example which on the surface has nothing at all to do with Fibonacci numbers or continued fractions. Theorem 1.1 of Itai Dinur's 2021 SODA paper <em><a href="https://doi.org/10.1137/1.9781611976465.151" rel="noreferrer">Improved algorithms for solving polynomial systems over GF(2) by multiple parity-counting</a></em> states:</p> <blockquote> <p>There is a randomized algorithm that given a system <span class="math-container">$E$</span> of polynomial equations over <span class="math-container">$\mathbb{F}_2$</span> with degree at most <span class="math-container">$d$</span> in <span class="math-container">$n$</span> variables, finds a solution to <span class="math-container">$E$</span> or correctly decides that a solution does not exist with high probability. The runtime of the algorithm is bounded by <span class="math-container">$O(2^{0.6943n})$</span> for <span class="math-container">$d=2$</span> and by <span class="math-container">$O(2^{(1-1/(2d))n})$</span> for <span class="math-container">$d&gt;2$</span>.</p> </blockquote> <p>The author adds, &quot;We note that for <span class="math-container">$d=2$</span>, the complexity of our algorithm can be made arbitrarily close to <span class="math-container">$O(\varphi^n)$</span>, where <span class="math-container">$\varphi = \frac{1}{2}(1+\sqrt{5})$</span> is the golden ratio.&quot;</p> <p>Dinur's algorithm, which builds on recent work by other authors, is highly ingenious and is not just a straightforward re-packaging of the results mentioned by some other respondents where <span class="math-container">$\varphi$</span> shows up in the asymptotic analysis of the runtime of an algorithm. One indication that the appearance of <span class="math-container">$\varphi$</span> here is not at all obvious is that the previous best algorithm for <span class="math-container">$d=2$</span> had an asymptotic runtime of <span class="math-container">$O(2^{0.804n})$</span>.</p>
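A quick numeric sanity check of the quoted constants (mine, not from the paper): <span class="math-container">$0.6943$</span> is essentially <span class="math-container">$\log_2\varphi$</span>, so <span class="math-container">$O(2^{0.6943n})$</span> is <span class="math-container">$O(\varphi^n)$</span> up to the rounding of the exponent.

```python
import math

# 2^0.6943 is the golden ratio to within the rounding of the printed exponent
phi = (1 + math.sqrt(5)) / 2
assert abs(2 ** 0.6943 - phi) < 2e-4
assert abs(math.log2(phi) - 0.6943) < 1e-4
```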
3,667,798
<p>Find the values of <span class="math-container">$a$</span> for which the integral <span class="math-container">$$\int^{\infty}_{0}e^{-at}\sin(7t)dt$$</span> converges.</p> <p>What I tried:</p> <p><span class="math-container">$$\int^{\infty}_{0}e^{-at}\sin(7t)dt$$</span></p> <p><span class="math-container">$$=\frac{1}{a^2+49}\bigg(-e^{-at}a\sin(7t)-7e^{-at}\cos(7t)\bigg)\bigg|^{\infty}_{0}=\frac{7}{a^2+49}$$</span></p> <p>So the integral converges for all real <span class="math-container">$a$</span>.</p> <p>Is what I have done above right? If not, please tell me how to solve it. Thanks.</p>
Allawonder
145,126
<p>That converges for all <span class="math-container">$a&gt;0$</span> since then we have that <span class="math-container">$$e^{-at}|\sin 7t|\le e^{-at}.$$</span></p> <p>You can directly see that it does not when <span class="math-container">$a\le 0.$</span></p>
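A numeric check (my addition): for <span class="math-container">$a&gt;0$</span> the antiderivative evaluation gives <span class="math-container">$7/(a^2+49)$</span>, which a simple midpoint rule reproduces.

```python
import math

# Midpoint rule on [0, 30]; the neglected tail is bounded by e^{-30a}/a.
def integral(a, T=30.0, n=100_000):
    h = T / n
    return h * sum(math.exp(-a * (k + 0.5) * h) * math.sin(7 * (k + 0.5) * h)
                   for k in range(n))

for a in (0.5, 1.0, 2.0):
    assert abs(integral(a) - 7 / (a * a + 49)) < 1e-4
```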
271,915
<p>Not clear from the <code>DayMatchQ</code> <a href="https://reference.wolfram.com/language/ref/MatchQ.html" rel="nofollow noreferrer">doc page</a>, but it doesn't seem to work for, say, alternatives the way <code>MatchQ</code> does, e.g.</p> <p><code>DayRange[Today,DayPlus[Today,30]] // Select[DayMatchQ[#,Monday | Wednesday | Friday]] </code></p> <p>Returns <code>{}</code> as opposed to, say, matching <code>Monday</code> only. Is there a different syntax or workaround?</p> <p>Don't tell me I need to OR a list of <code>DayMatchQ</code>, i.e. WL diminishing orthogonality</p>
lericr
84,894
<p>I assume that DayMatchQ is more complicated than just matching a form. One workaround would be to use DayName:</p> <pre><code>Select[ DayRange[Today, DayPlus[Today, 30]], MemberQ[{Monday, Wednesday, Friday}, DayName[#]] &amp;] </code></pre>
390,640
<p>Please help me to find a closed form for the following integral: $$\int_0^1\log\left(\log\left(\frac{1}{x}+\sqrt{\frac{1}{x^2}-1}\right)\right)\,{\mathrm d}x.$$</p> <p>I was told it could be calculated in a closed form.</p>
math110
58,742
<p>Find the integral $$I=\int_{0}^{1}\ln{\left(\ln{\left(\dfrac{1}{x}+\sqrt{\dfrac{1}{x^2}-1}\right)}\right)}dx$$ solution:since let $$x=e^{-y}$$ then $$I=\int_{0}^{\infty}e^{-y}\ln{\ln{\left(e^y+\sqrt{e^{2y}-1}\right)}}dy$$ so \begin{align*} &amp;\int_{0}^{\infty}e^{-x}\ln{(\ln{(e^x+\sqrt{e^{2x}-1})})}dx=_{y=\ln{(e^x+\sqrt{e^{2x}-1})}}=2\int_{0}^{\infty}\dfrac{e^y(e^{2y}-1)}{(1+e^{2y})^2}\ln{y}dy\\ &amp;=2\int_{0}^{\infty}\dfrac{e^{-y}(1-e^{-2y})}{(1+e^{-2y})^2}\ln{y}dy= 2\int_{0}^{\infty}e^{-y}(1-e^{-2y})\left(\sum_{n=1}^{\infty}(-1)^{n-1}ne^{-2y(n-1)}\right)\ln{y}dy\\ &amp;=2\sum_{n=1}^{\infty}(-1)^{n-1}n\int_{0}^{\infty}\left(e^{-y(2n-1)}-e^{-y(2n+1)}\right)\ln{y}dy =2\sum_{n=1}^{\infty}(-1)^{n-1}\cdot n\left(-\dfrac{\gamma+\ln{(2n-1)}}{2n-1}+\dfrac{\gamma+\ln{(2n+1)}}{2n+1}\right)\\ &amp;=2\gamma\sum_{n=1}^{\infty}(-1)^n\cdot n\left(\dfrac{1}{2n-1}-\dfrac{1}{2n+1}\right)+2\sum_{n=1}^{\infty}(-1)^n \cdot n\left(\dfrac{\ln{(2n-1)}}{2n-1}-\dfrac{\ln{(2n+1)}}{2n+1}\right)\\ &amp;=\gamma\sum_{n=1}^{\infty}(-1)^n\left(\dfrac{2n}{2n-1}-\dfrac{2n}{2n+1}\right)+\sum_{n=1}^{\infty}(-1)^n \left(\dfrac{2n\ln{(2n-1)}}{2n-1}-\dfrac{2n\cdot\ln{(2n+1)}}{2n+1}\right)\\ &amp;=\gamma\sum_{n=1}^{\infty}(-1)^n\left(\dfrac{1}{2n-1}+\dfrac{1}{2n+1}\right)+\sum_{n=1}^{\infty}(-1)^n\left( \dfrac{(2n-1+1)\ln{(2n-1)}}{2n-1}-\dfrac{(2n+1-1)\ln{(2n+1)}}{2n+1}\right)\\ &amp;=-\gamma+\sum_{n=1}^{\infty}(-1)^n\ln{\dfrac{2n-1}{2n+1}}+\sum_{n=1}^{\infty}(-1)^n\left(\dfrac{\ln{(2n-1)}}{2n-1} +\dfrac{\ln{(2n+1)}}{2n+1}\right)=-\gamma+\sum_{n=1}^{\infty}(-1)^n\ln{\dfrac{2n-1}{2n+1}}\\ &amp;=-\gamma-\sum_{n=1}^{\infty}\ln{\dfrac{4n-3}{4n-1}}+\sum_{n=1}^{\infty}\ln{\dfrac{4n-1}{4n+1}}=-\gamma+ \sum_{n=1}^{\infty}\ln{\dfrac{(n-1/4)^2}{(n-3/4)(n+1/4)}}\\ &amp;=-\gamma+\lim_{N\to\infty}\ln{\left( \dfrac{((-\frac{1}{4}+1)(-\dfrac{1}{4}+2)\cdots(-\dfrac{1}{4}+N))^2}{((-\dfrac{3}{4}+1)(-\dfrac{3}{4}+2)\cdots (-\dfrac{3}{4}+N))((\dfrac{1}{4}+1)(\dfrac{1}{4}+2)\cdots(\dfrac{1}{4}+N))}\right)}\\ 
&amp;=-\gamma+\ln{\dfrac{-3\Gamma{(-3/4)}\Gamma{(1/4)}}{\Gamma^2{(-1/4)}}} =-\gamma+\ln{\dfrac{4\Gamma^2{(1/4)}}{\Gamma^2(-1/4)}}=-\gamma+4\ln{\Gamma{\left(\dfrac{1}{4}\right)}} -3\ln{2}-2\ln{\pi} \end{align*}</p>
Nanayajitzuki
611,558
<p>Following the comments above, there is another path with digamma function. Recalling this identity of Euler-Mascheroni constant</p> <p><span class="math-container">$$-\int_{0}^{\infty} {e^{-u}\ln u \&gt;\mathrm{d}u} = \gamma$$</span></p> <p>and</p> <p><span class="math-container">$$\frac{\mathrm{d}(e^{-u}\tanh u)}{\mathrm{d}u} = e^{-u} - \frac{\sinh u}{\cosh^{2}\!u}$$</span></p> <p>We deduce</p> <p><span class="math-container">$$\begin{aligned} I + \gamma &amp; = \int_{0}^{\infty} {\left( \frac{\sinh u}{\cosh^{2}\!u}-e^{-u} \right)\ln u \&gt;\mathrm{d}u}\\ &amp; = -(e^{-u}\tanh u\ln u) \bigr|_{u=0}^{\infty} + \int_{0}^{\infty} {\frac{e^{-u}\tanh u}{u} \mathrm{d}u}\\ &amp; = \int_{0}^{\infty} {\frac{e^{-u}\tanh u}{u} \mathrm{d}u} \end{aligned}$$</span></p> <p>Introduce a parameterized integral</p> <p><span class="math-container">$$J(a) = \int_{0}^{\infty} {\frac{e^{-au}\tanh u}{u} \mathrm{d}u}$$</span></p> <p>Take derivative of <span class="math-container">$J(a)$</span></p> <p><span class="math-container">$$\begin{aligned} \frac{\mathrm{d}J(a)}{\mathrm{d}a} &amp; = -\int_{0}^{\infty} {e^{-au}\tanh u \&gt;\mathrm{d}u}\\ &amp; = \int_{0}^{\infty} {e^{-au} \&gt;\mathrm{d}u} - \int_{0}^{\infty} {e^{-au}(1+\tanh u) \&gt;\mathrm{d}u} \end{aligned}$$</span></p> <p>where we have</p> <p><span class="math-container">$$\begin{aligned} \int_{0}^{\infty} {e^{-au}(1+\tanh u) \&gt;\mathrm{d}u} &amp; = 2\int_{0}^{\infty} {\frac{e^{-au}}{1+e^{-2u}} \mathrm{d}u}\\ &amp; = 2\int_{0}^{\infty} {\frac{e^{-au}(1-e^{-2u})}{1-e^{-4u}} \mathrm{d}u}\\ &amp; = \frac1{2} \int_{0}^{\infty} {\frac{e^{-\tfrac{a}{4}u}}{1-e^{-u}} \mathrm{d}u} - \frac1{2} \int_{0}^{\infty} {\frac{e^{-\tfrac{a+2}{4}u}}{1-e^{-u}} \mathrm{d}u} \end{aligned}$$</span></p> <p>Using the integral representation of digamma function</p> <p><span class="math-container">$$\psi(z) = \int_{0}^{\infty} {\left( \frac{e^{-u}}{u} - \frac{e^{-zu}}{1-e^{-u}} \right) \mathrm{d}u}$$</span></p> <p>we have</p> <p><span 
class="math-container">$$\frac{\mathrm{d}J(a)}{\mathrm{d}a} = \frac1{a} + \frac1{2}\psi\bigg(\frac{a}{4}\bigg) - \frac1{2}\psi\left(\frac{a+2}{4}\right)$$</span></p> <p>with <span class="math-container">$\lim_{a\to\infty}J(a)=0$</span> and <span class="math-container">$I+\gamma=J(1)$</span></p> <p><span class="math-container">$$\begin{aligned} J(1) &amp; = -\int_{1}^{\infty} {J'(a) \&gt;\mathrm{d}a}\\ &amp; = -\left( \ln a + 2\ln\Gamma\bigg(\frac{a}{4}\bigg) - 2\ln\Gamma\left(\frac{a+2}{4}\right) \right) \biggr|_{a=1}^{\infty} \end{aligned}$$</span></p> <p>Notice the asymptotic series of <span class="math-container">$\ln\Gamma(z) = (z-\tfrac1{2})\ln z - z + \tfrac1{2}\ln2\pi + O(z^{-1})$</span> which indicates</p> <p><span class="math-container">$$\lim_{a\to\infty} {\left( \ln a + 2\ln\Gamma\bigg(\frac{a}{4}\bigg) - 2\ln\Gamma\left(\frac{a+2}{4}\right) \right)} = 2\ln2$$</span></p> <p>thus</p> <p><span class="math-container">$$I + \gamma = -2\ln2 + 2\ln\frac{\Gamma(1/4)}{\Gamma(3/4)}$$</span></p> <p>Recalling reflection formula, we can finally deduce</p> <p><span class="math-container">$$I = -\gamma - 3\ln2 - 2\ln\pi + 4\ln\Gamma\left(\frac1{4}\right)$$</span></p> <hr /> <p><em>(Edit for another path)</em></p> <p>Occasionally, I find a direct solution which is almost equivalent to the method I used above, where let <span class="math-container">$\frac1{t} = \frac1{x} + \sqrt{\frac1{x^{2}}-1}$</span>, which gives <span class="math-container">$$ x = \frac{2t}{1+t^{2}}, \quad \mathrm{d}x = -\frac{2(t^{2}-1)}{(t^{2}+1)^{2}} \mathrm{d}t $$</span> hence <span class="math-container">$$ \int_{0}^{1} {\ln \left(\ln \left(\frac1{x} + \sqrt{\frac1{x^{2}}-1} \right) \right) \mathrm{d}x} = -2\int_{0}^{1} {\frac{t^{2}-1}{(t^{2}+1)^{2}} \ln\left(\ln\left(\frac1{t}\right)\right) \mathrm{d}t} $$</span> on the other hand, with integration by parts <span class="math-container">$$ \begin{aligned} \int_{0}^{1} {\frac{t^{2}-1}{t^{2}+1} \frac{\mathrm{d}t}{\ln t}} &amp; = 
\frac{t(t^{2}-1)}{t^{2}+1}\ln\left(\ln\left(\frac1{t}\right)\right)\biggr|_{t=0}^{1} - \int_{0}^{1} {\frac{t^{4}+4t^{2}-1}{(t^{2}+1)^{2}} \ln\left(\ln\left(\frac1{t}\right)\right) \mathrm{d}t}\\ &amp; = -\int_{0}^{1} {\ln\left(\ln\left(\frac1{t}\right)\right) \mathrm{d}t} - 2\int_{0}^{1} {\frac{t^{2}-1}{(t^{2}+1)^{2}} \ln\left(\ln\left(\frac1{t}\right)\right) \mathrm{d}t} \end{aligned} $$</span> thus <span class="math-container">$$ \int_{0}^{1} {\ln \left(\ln \left(\frac1{x} + \sqrt{\frac1{x^{2}}-1} \right) \right) \mathrm{d}x} = \int_{0}^{1} {\ln\left(\ln\left(\frac1{t}\right)\right) \mathrm{d}t} + \int_{0}^{1} {\frac{t^{2}-1}{t^{2}+1} \frac{\mathrm{d}t}{\ln t}} $$</span> the first item is literally <span class="math-container">$-\gamma$</span>, the second can be found in this <a href="https://math.stackexchange.com/questions/285130/prove-int-01-fract2-1t21-log-tdt-2-log-left-frac2-gamma-left">post</a>, where, actually, the integral is cracked in a fashion similar to the one used in this post above.</p>
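As a numeric check of the closed form (my addition): since <span class="math-container">$\ln\left(\frac1{x}+\sqrt{\frac1{x^{2}}-1}\right)=\operatorname{arccosh}(1/x)$</span>, the integral can be approximated directly and compared with <span class="math-container">$-\gamma-3\ln2-2\ln\pi+4\ln\Gamma(1/4)$</span>.

```python
import math

# closed form: I = -gamma - 3 ln 2 - 2 ln pi + 4 ln Gamma(1/4)
gamma = 0.5772156649015329
closed = -gamma - 3 * math.log(2) - 2 * math.log(math.pi) + 4 * math.log(math.gamma(0.25))

# midpoint rule for I = integral_0^1 ln(arccosh(1/x)) dx;
# the log singularity at x = 1 is integrable, so this converges
n = 200_000
approx = sum(math.log(math.acosh(n / (k + 0.5))) for k in range(n)) / n

assert abs(approx - closed) < 5e-3
```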
4,563,707
<p>Sequence given: 6, 66, 666, 6666. Find <span class="math-container">$S_n$</span> in terms of <span class="math-container">$n$</span>.</p> <p>The common ratio of a geometric progression is <span class="math-container">$\frac{T_n}{T_{n-1}} = r$</span>, where <span class="math-container">$r$</span> is the common ratio and <span class="math-container">$n$</span> is the term number.</p> <p>When plugging in 66 as <span class="math-container">$T_n$</span> and 6 as <span class="math-container">$T_{n-1}$</span>, I got the following ratio: <span class="math-container">$ \frac {66}{6} = 11$</span>.</p> <p>However, when I plugged in 666 as <span class="math-container">$T_n$</span> and 66 as <span class="math-container">$T_{n-1}$</span>, I got: <span class="math-container">$\frac {666}{66} = 10.09$</span>.</p> <p>And when I plugged in 6666 and 666: <span class="math-container">$ \frac {6666}{666} = 10.009$</span>.</p> <p>It's clear to me that the ratio is slowly decreasing, and seems to be approaching 10. Alas, this is about as far as I have gotten.</p> <p>Looking at the answer scheme, the final answer is <span class="math-container">$ \frac {20}{27}{(10^n-1)} - \frac {2}{3}{(n)}.$</span></p> <p>The answer scheme does include a few steps, but frankly, I couldn't understand the reasoning behind them, so I guess I should post them anyway.</p> <p><span class="math-container">$$ \frac {2}{3}[9 + 99 + 999 + 9999] $$</span></p> <p><span class="math-container">$$ S_n = \frac {2}{3}\left[(10-1) + (10^2-1) + \cdots + (10^n-1)\right] $$</span></p> <p><span class="math-container">$$ = \frac {2}{3}[10^1+10^2+10^3+\cdots+10^n] + \frac {2}{3}[-1-1-1-\cdots-1] $$</span></p> <p><span class="math-container">$$ = \frac {2}{3}(10)\left(\frac {10^n-1}{10-1}\right) - \frac {2}{3}(n) $$</span> <span class="math-container">$$ = \frac {20}{27}{(10^n-1)} - \frac {2}{3}{(n)} $$</span></p> <p>I'm sorry if I did something stupid, but I have no idea where that <span class="math-container">$\frac {2}{3}$</span> came from, and even if I did, I don't understand the reasoning or the explanation behind it. I already asked my teacher, as well as my mother, both of whom yielded little help in understanding the logic behind the solutions given in the answer scheme.</p> <p>If anybody could offer an explanation, it would be greatly appreciated.</p>
Henry
6,460
<p>There are many ways, including saying <span class="math-container">$$T_n=\frac23(10^n-1)$$</span> and so <span class="math-container">$$S_n=\sum\limits_{k=1}^n T_k=\frac23\left(\sum\limits_{k=1}^n 10^k - \sum\limits_{k=1}^n 1 \right)$$</span> where the sums are simple geometric series.</p> <p>Personally I prefer</p> <ul> <li><span class="math-container">$S_n = 6+66+ 666+ \cdots + 66\cdots 6 + 66\cdots 66$</span></li> <li><span class="math-container">$10S_n+6n = 66+666+ 6666+ \cdots + 66\cdots 66 + 66\cdots 666$</span></li> <li><span class="math-container">$9S_n+6n = 10S_n+6n-S_n= 66\cdots 666 - 6$</span></li> <li>etc.</li> </ul>
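The closed form can be checked exactly with rational arithmetic (my addition):

```python
from fractions import Fraction

# direct sum of 6, 66, 666, ... compared with S_n = (20/27)(10^n - 1) - (2/3) n
def S_direct(n):
    total, term = 0, 0
    for _ in range(n):
        term = 10 * term + 6      # next term: 6, 66, 666, ...
        total += term
    return total

for n in range(1, 15):
    closed = Fraction(20, 27) * (10 ** n - 1) - Fraction(2, 3) * n
    assert closed == S_direct(n)
```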
1,903,263
<p>I am developing an algorithm that approximates a curve using a series of linear* segments. The plot below is an example, with the blue being the original curve, and the red and yellow being an example of a two segment approximation. The x-axis is time and the y-axis is attenuation in dB. The ultimate goal is to use the least amount of segments while keeping the maximum error below a certain value. Clearly, since some portions of the curve are more linear than others, some segments can be longer than others.</p> <p><a href="https://i.stack.imgur.com/yJoT6.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yJoT6.png" alt="fitting segments to curve"></a></p> <p>*The line segments are not actually linear, they are logarithmic. I am actually operating in log (dB) space, while the segments are linear in voltage.</p> <p>My question is, how do I approach such an optimization problem? The method I am currently considering is to start at one end and select the longest segment possible, then move to the next segment and do the same. This will be good enough, in that it will keep me under my max error, but it may not be the best method. If it matters, I am working in Matlab.</p>
Francesco Alem.
175,276
<p>An approach would be this one:</p> <p>1) Start with only one segment that connects the start and end points of the curve.</p> <p>2) Find the point on the curve where the approximation error is maximum.</p> <p>3) If the error is below the tolerance: go to step 5; else: continue.</p> <p>4) Add one point exactly where the error is maximum, thus breaking the corresponding segment into two new segments of lower error. Go to step 2.</p> <p>5) Find the segment with the maximum height in y.</p> <p>6) If the height is below the limit: STOP; else: break the segment in half and go to step 5. </p>
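A hypothetical Python sketch of steps 1-4 (the names, the absolute-error metric, and the simple linear interpolation here are my own choices; the step 5-6 height check is omitted):

```python
def refine(xs, ys, tol):
    """Greedy breakpoint insertion: start with the endpoints and repeatedly
    add a breakpoint where the piecewise-linear error is largest (steps 1-4)."""
    idx = [0, len(xs) - 1]            # breakpoint indices, endpoints first
    while True:
        worst_err, worst_i = tol, None
        for a, b in zip(idx, idx[1:]):
            for i in range(a + 1, b):
                # linear interpolation between breakpoints a and b
                t = (xs[i] - xs[a]) / (xs[b] - xs[a])
                err = abs(ys[a] + t * (ys[b] - ys[a]) - ys[i])
                if err > worst_err:
                    worst_err, worst_i = err, i
        if worst_i is None:           # every sample is within tolerance
            return idx
        idx = sorted(idx + [worst_i])
```

For a logarithmic error metric as in the question, the `err` line would compare in dB space instead of absolute units.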
7,080
<p>What is the right definition of the symmetric algebra over a graded vector space V over a field k?</p> <p>More generally: What is the right definition of the symmetric algebra over an object in a symmetric monoidal category (which is suitably (co-)complete)?</p> <p>Two possible definitions come to my mind:</p> <p>1) Take the tensor algebra over V and identify those tensors which differ only by an element of the symmetric group, i.e. take the coinvariants wrt. the symmetric group. The resulting algebra A is then the universal algebra together with a map V -> A such that the product of elements in V is commutative.</p> <p>2) Take the tensor algebra over V and divide out the ideal generated by antisymmetric two-tensors. In this case, the resulting algebra A is the universal algebra together with a map V -> A such that the product of A vanishes on all antisymmetric two-tensors (one could say that all commutators of A vanish).</p> <p>The definition 1) looks more natural and gives, for example, the polynomial ring in case V is of degree 0.</p> <p>The definition 2) applied a vector space shifted by degree 1 gives (up to degree shift) the exterior algebra over the unshifted vector space. However, in characteristic 2 for example, one doesn't get the polynomial ring if one starts with a vector space of degree 0.</p> <p>Finally, both definitions have a shortcoming in that they don't commute well with base change.</p>
S. Carnahan
121
<p>Symmetric algebras (aka free commutative associative unital algebras) are given by a functor, and they satisfy a universal property: If M is a module over a commutative ring k and R is a commutative k-algebra, then k-algebra homomorphisms from Sym<sub>k</sub>(M) to R are in bijection with k-module maps from M to R. This bijection should be functorial with respect to R (i.e., ring homomorphisms in the target). More succinctly, the symmetric algebra functor is left adjoint to the forgetful functor from commutative k-algebras to k-modules. This is the description in the "categorical properties" section of the <a href="http://en.wikipedia.org/wiki/Symmetric_algebra" rel="noreferrer">Wikipedia article</a>.</p> <p>The universal construction normally yields definition 1, but in the derived world, you might have to do something extra, like take homotopy coinvariants (this means taking some kind of resolution to get a free symmetric group action).</p>
3,506,982
<p>Let <span class="math-container">$\{X_n, n\in \mathbb{N}\}$</span> be an iid sequence of positive rrvs and let <span class="math-container">$K$</span> be an rrv independent of this sequence, taking its values in <span class="math-container">$\mathbb{N}$</span> with <span class="math-container">$P(K=k)=p_k$</span>. Consider the rrv <span class="math-container">$Z=\sum_{n=1}^{K} X_n$</span>. Suppose that <span class="math-container">$E(X_n) &lt;\infty$</span>. Then show <span class="math-container">$E(Z)=E(K)E(X_n)$</span>.</p> <p>My attempt: since <span class="math-container">$K$</span> is independent of the <span class="math-container">$X_n$</span>, how can I apply the independence formula for expectations?</p>
Sri-Amirthan Theivendran
302,692
<p>Note that <span class="math-container">$$ Z=\sum_{n=1}^\infty X_nI(K\geq n). $$</span> Since the <span class="math-container">$X_n$</span> are non-negative the monotone convergence theorem together with independence of <span class="math-container">$X_n$</span> and <span class="math-container">$K$</span> imply that <span class="math-container">$$ EZ=\sum_{n=1}^\infty (EX_n)P(K\geq n)=\sum_{n=1}^\infty EX_1 P(K\geq n)=EX_1\sum_{n=1}^\infty P(K\geq n)=EX_1 EK $$</span> where in the final step we use the tail formula for expectation.</p>
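A Monte Carlo illustration (my own, with made-up distributions for <span class="math-container">$K$</span> and <span class="math-container">$X_n$</span>):

```python
import random

# Wald's identity E[Z] = E[K] E[X_1], illustrated with
# K uniform on {1,2,3,4} (E[K] = 2.5) and X_n uniform on (0, 2) (E[X] = 1).
random.seed(12345)

def sample_Z():
    K = random.randint(1, 4)
    return sum(random.uniform(0.0, 2.0) for _ in range(K))

N = 200_000
estimate = sum(sample_Z() for _ in range(N)) / N
assert abs(estimate - 2.5) < 0.02      # E[Z] = E[K] E[X_1] = 2.5
```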
2,525,498
<p>I'm an undergrad, and I've been presented with the following problem:</p> <blockquote> <p>Fundamental Theorem of Arithmetic: Let $\mathbb{N}_{&gt;0}$ be the monoid of positive integers with binary operation given by ordinary multiplication, let $P$ be the set of primes in $\mathbb{N}$, let $M$ be a commutative monoid and let $g : P → M$ be a function. Prove that there is a unique monoid homomorphism $G : \mathbb{N}_{&gt;0} → M$ such that $G(p) = g(p)$ for every prime $p ∈ P$.</p> </blockquote> <p>So far, I've been able to come up with this:</p> <blockquote> <p>Let $G: \mathbb{N}_{&gt;0} \to M$ be such that $G(p) = g(p)$ for all $p \in P$. Since $G$ isn't explicitly defined for non-prime numbers, we can just say that $G(1) = e$, where $e \in M$ is the identity. Let $x, y$ be positive integers. We want to show that $G(xy) = G(x)G(y)$. </p> </blockquote> <p>Am I right in just declaring $G$ to be what I want it to be and then showing it's a monoid homomorphism? Does my logic for the identities make sense? How do I attack the last part (with $G(xy) = G(x)G(y)$)? Or am I completely wrong and I should erase what I have and start over? And what does the fundamental theorem of arithmetic have to do with any of this?</p>
Rob Arthan
23,171
<p>Let $f : (0, 1) \to [0, 1]$ be a continuous bijection and let's write $f[X]$ for the image of $X \subseteq (0, 1)$ under $f$. As $f$ is a bijection, there is a unique $x \in (0, 1)$ such that $f(x) = 0$. And then $f[(0, x]]$ and $f[[x, 1)]$ are connected subsets of $[0, 1]$ each having at least two elements and each containing $0$. But connected subsets of intervals are intervals. So, for some $\epsilon_0 &gt; 0$ and $\epsilon_1 &gt; 0$, $f[(0, x)] \supseteq (0, \epsilon_0)$ and $f[(x, 1)] \supseteq (0, \epsilon_1)$. So $f[(0, x)] \cap f[(x, 1)] \neq \emptyset$ contradicting the assumption that $f$ is bijective.</p>
DanielWainfleet
254,665
<p>Suppose $f:(0,1)\to [0,1]$ is a continuous surjection. For $n\in \Bbb N$ the subspace $S(n)=[2^{-n},1-2^{-n}]$ is connected, so its image $f(S(n))$ is connected, so $f(S(n))$ is an interval. </p> <p>For some $n_1$ there exist $x,y\in S(n_1)$ with $f(x)=0$ and $f(y)=1,$ implying that $0$ and $1$ belong to the interval $f(S(n_1))$, so $f(S(n_1))=[0,1].$ </p> <p>Then $\emptyset \ne f((0,1)\setminus S(n_1)) \subseteq [0,1]=f(S(n_1))$, so $f$ cannot be a bijection. </p> <p>By contrast the three totally-disconnected spaces $\Bbb Q\cap (0,1),\;$ $ \Bbb Q \cap [0,1),\;$ $\Bbb Q\cap [0,1]$ are homeomorphic to each other. (The homeomorphisms do not preserve the "$&lt;$" order.)</p>
2,877,916
<p>Can someone please tell me how to prove that the Hölder space is a normed linear space?</p> <p>The Hölder space $C^{k,\gamma}(\bar{U})$ consists of all $u \in C^k(\bar{U})$ for which the norm</p> <p>$$\|u\|_{C^{k,\gamma}(\bar{U})}:= \sum_{|\alpha|\le k} \|D^\alpha u \|_{C(\bar{U})}+\sum_{|\alpha|=k} [D^\alpha u]_{C^{0,\gamma}(\bar{U})}$$</p> <p>is finite.</p> <p><strong>Definition 1:</strong> </p> <p>If $u:U\to \mathbb{R}$ is bounded and continuous, we write</p> <p>$$\|u\|_{C(\bar{U})}:=\sup_{x\in U}|u(x)|.$$</p> <p><strong>Definition 2</strong></p> <p>The $\gamma^{th}$-Hölder seminorm of $u:U\to \mathbb{R}$ is </p> <p>$$[u]_{C^{0,\gamma}(\bar{U})}:=\sup_{\substack{x,y\in U \\ x \neq y}} \left\{\frac{|u(x)-u(y)|}{|x-y|^\gamma} \right\},$$</p> <p>and the $\gamma^{th}$-Hölder norm is</p> <p>$$\|u\|_{C^{0,\gamma}(\bar{U})}:=\|u\|_{C(\bar{U})}+[u]_{C^{0,\gamma}(\bar{U})}.$$</p> <p>Please also explain these norms; I was trying to understand them but I couldn't. Thank you very much.</p>
Andres Mejia
297,998
<p>Take the square with one point removed. You can certainly radially retract to the boundary to just get $S^1$.</p> <p>For $2$ points removed, put one at $(1/3,1/2)$ and one at $(2/3,1/2)$ (which I'll include just for concreteness).</p> <p>Using the vertical line at $x=1/2$, you can radially retract each "half of the square" onto the boundary of the "half square."</p> <p>You can do this for $n$ points removed, by spacing them evenly and considering "$1/3$ squares" and so forth.</p>
1,037,736
<p>$$\sum \limits_{v=1}^n v=\frac{n^2+n}{2}$$</p> <p>Please don't downvote if this proof is stupid; it is my first proof, and I am only in grade 5, so I don't have a teacher for any of these 'big sums'.</p> <p>Proof:</p> <p>We look at $\sum \limits_{v=1}^3 v=1+2+3,\sum \limits_{v=1}^4 v=1+2+3+4,\sum \limits_{v=1}^5 v=1+2+3+4+5$.</p> <p>I learnt rainbow numbers in class three years ago, so I use that knowledge here:</p> <p>$n=3,1+3=4$ and $2$.</p> <p>$n=4,1+4$ and $2+3$</p> <p>$n=5,1+5$ and $2+4$ and $3$</p> <p>and more that I have done on paper that I don't want to type.</p> <p>We can see from this, for the odd case, that we have $(n+1)$ added together moving in from the outside, so we get to add $(n+1)$ to the total $\frac{(n-1)}2$ times, plus the center number, which is $\frac{n+1}2$, giving $\frac{n-1}2(n+1)+\frac{n+1}2=\frac{(n+1)(n-1)}{2}+\frac{n+1}{2}$, and I can get $\frac{n^2-1}2+\frac{n+1}2=\frac{n^2+n}2$, which is what we want.</p> <p>So the odd case is proven.</p> <p>For even $n$ we have a simpler problem: we have $n+1$ on each pair of numbers going in. Since $n$ is even, we have $1+n=n+1$ and $2+(n-1)=n+1$, and we can see this is good for all pairs since we increase one side by one and lower the other by one. So we get $\frac{n}2$ times $n+1$, which gives $\frac{n^2+n}{2}$.</p> <p>Thus it is proven for all cases.</p>
Community
-1
<p>$\displaystyle\sum_{v=1}^nv=\dfrac{1}{2}\displaystyle\sum_{v=1}^n2v=\dfrac{1}{2}\displaystyle\sum_{v=1}^n\left((v+1)^2-v^2-1\right)=\dfrac{1}{2}\left((n+1)^2-(n+1)\right)=\dfrac{1}{2}n(n+1)$</p>
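<p>A quick numeric spot-check of this closed form (an illustration of my own, not a proof):</p>

```python
def gauss(n):
    # closed form for 1 + 2 + ... + n
    return n * (n + 1) // 2

# compare against a direct sum for many small n
for n in range(1, 201):
    assert sum(range(1, n + 1)) == gauss(n)

print(gauss(5))  # 15, i.e. 1+2+3+4+5
```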
3,369,188
<p>For which frequencies k &gt; f does</p> <p><span class="math-container">$\cos (\omega fx_1)= \cos (\omega kx_1)$</span></p> <p>hold only at the point <span class="math-container">$x_1$</span>?</p> <p>For example, k = 9:</p> <p><span class="math-container">$\cos (\omega*1*0.1)= \cos (\omega * 9 * 0.1) \approx 0.809$</span></p> <p>I need a general solution.</p>
sabeelmsk
578,078
<p>Hint: here <span class="math-container">$3x^2+5x=x(3x+5)$</span>; both factors are irreducible and relatively prime. Use the Chinese remainder theorem for rings to find an isomorphism and so get the order of the ring.</p>
3,369,188
<p>For which frequencies k &gt; f does</p> <p><span class="math-container">$\cos (\omega fx_1)= \cos (\omega kx_1)$</span></p> <p>hold only at the point <span class="math-container">$x_1$</span>?</p> <p>For example, k = 9:</p> <p><span class="math-container">$\cos (\omega*1*0.1)= \cos (\omega * 9 * 0.1) \approx 0.809$</span></p> <p>I need a general solution.</p>
lhf
589
<p>Let <span class="math-container">$f=3x^{2}+5x$</span> and <span class="math-container">$I=\langle f \rangle$</span>.</p> <ul> <li><p><span class="math-container">$I \ni 3f = 9x^2+15x = 9x^2$</span>.</p></li> <li><p><span class="math-container">$I \ni 5f = 15x^2+25x = 25x = 10x = -5x$</span>.</p></li> <li><p><span class="math-container">$I \ni 5xf - 3f = 10x^2 - 9x^2 = x^2$</span>.</p></li> </ul> <p>Thus, <span class="math-container">$I=\langle 5x, x^2 \rangle$</span>.</p> <p>Therefore, for every <span class="math-container">$g \in \mathbb{Z}_{15}[x]$</span> we have <span class="math-container">$g \equiv ax+b \bmod I$</span>, with <span class="math-container">$a \in \{0,1,2,3,4\}$</span>.</p> <p>Thus, <span class="math-container">$\mathbb{Z}_{15}[x] / I$</span> has <span class="math-container">$5 \cdot 15 = 75$</span> elements.</p>
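<p>A small computational sanity check of the count (the encoding and helper names are my own; it relies on the computation above that <span class="math-container">$I=\langle 5x, x^2 \rangle$</span>): since <span class="math-container">$x^2 \in I$</span>, one can first pass to <span class="math-container">$\mathbb{Z}_{15}[x]/\langle x^2\rangle$</span>, whose <span class="math-container">$225$</span> elements are the pairs <span class="math-container">$ax+b$</span>, and then quotient by the image of <span class="math-container">$I$</span>, the principal ideal generated by <span class="math-container">$5x$</span>:</p>

```python
n = 15
# elements of Z_15[x]/(x^2), encoded as pairs (a, b) meaning a*x + b
ring = [(a, b) for a in range(n) for b in range(n)]

def mul(p, q):
    # (a1*x + b1)(a2*x + b2) = (a1*b2 + a2*b1)*x + b1*b2  modulo x^2 and 15
    (a1, b1), (a2, b2) = p, q
    return ((a1 * b2 + a2 * b1) % n, (b1 * b2) % n)

g = (5, 0)                          # the image of 5x
ideal = {mul(r, g) for r in ring}   # the principal ideal generated by 5x

print(len(ideal))                # 3: namely 0, 5x, 10x
print(len(ring) // len(ideal))   # 75 cosets
```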
164,060
<p>When I plot the data I have using <code>ListStepPlot</code> and <code>ListLinePlot[data, InterpolationOrder -&gt; 0]</code> I am getting two different plots. I guess there is a bug in <code>ListStepPlot</code>. </p> <pre><code>data = {{{0, 1}, {0.0582215, 2}, {0.597255, 3}, {1.17158, 4}}, {{1.17158, 4}, {1.36478, 5}, {1.424, 6}, {1.4586, 7}}, {{1.4586, 7}, {1.73938, 8}, {1.88332, 9}, {2.03753, 10}}, {{2.03753, 10}, {2.17872, 11}, {2.46005, 12}, {2.71547, 13}}, {{2.71547, 13}, {3.16095, 14}, {3.30726, 15}, {3.5329, 16}}, {{3.5329, 16}, {3.63022, 17}, {4.34524, 18}, {5.20954, 19}}}; ListLinePlot[data, Frame -&gt; True, PlotTheme -&gt; "Detailed", ImageSize -&gt; 600, InterpolationOrder -&gt; 0] </code></pre> <p><a href="https://i.stack.imgur.com/jBaN4.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/jBaN4.png" alt="enter image description here"></a></p> <pre><code>ListStepPlot[data, Frame -&gt; True, PlotTheme -&gt; "Detailed", ImageSize -&gt; 600] </code></pre> <p><a href="https://i.stack.imgur.com/9NJWd.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9NJWd.png" alt="enter image description here"></a></p> <p>Here is different data that shows the same problem: </p> <pre><code>data1={{{0, 1}, {0.139219, 2}, {0.607566, 3}, {1.18343, 4}}, {{1.18343, 4}, {1.22964, 5}, {2.01722, 6}, {2.62576, 7}}, {{2.62576, 7}, {3.69976, 8}, {3.90317, 9}, {4.49939, 10}}, {{4.49939, 10}, {4.83385, 11}, {4.92839, 12}, {5.3667, 13}}, {{5.3667, 13}, {5.37191, 14}, {5.75267, 15}, {5.86257, 16}}, {{5.86257, 16}, {6.49011, 17}, {6.56514, 18}, {6.73022, 19}}}; </code></pre> <p>"11.1.0 for Microsoft Windows (64-bit) (March 13, 2017)"</p>
Alan
19,530
<p><code>ListStepPlot</code> always produces a step shape. You can decide where you want the resulting "extra bit" of the curve with a second argument.</p> <pre><code>data = {{{1, 2}, {2, 3}}, {{3, 4}, {4, 5}}}; left = ListStepPlot[data, Left, Mesh -&gt; Full, Frame -&gt; True, PlotTheme -&gt; "Detailed", ImageSize -&gt; 600]; center = ListStepPlot[data, Center, Mesh -&gt; Full, Frame -&gt; True, PlotTheme -&gt; "Detailed", ImageSize -&gt; 600]; right = ListStepPlot[data, Right, Mesh -&gt; Full, Frame -&gt; True, PlotTheme -&gt; "Detailed", ImageSize -&gt; 600]; Row[{left, center, right}] </code></pre> <p><a href="https://i.stack.imgur.com/9sddG.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/9sddG.png" alt="enter image description here"></a></p>
4,304,724
<p><strong>Here's the question</strong>: <em>Suppose there's a bag filled with balls numbered one through fifty. You reach in and grab three at random, put them to the side, and then replace the ones you took so that the bag is once again filled with fifty distinctly numbered balls. Do this five times, so you have 5 groups of 3 numbered balls such that within each group every number is distinct from the other, but across groups, the numbers may not necessarily be distinct.</em></p> <p><em>What is the probability that you have some three-of-a-kind in your five groups? That is to say, what is the probability that some number appears at least three times among the selected balls?</em></p> <p>Now I'm not very good at probability, but I'm pretty sure I know how I would brute-force calculate the probability of this situation, but it would take a ridiculously long time. Does anyone know a particularly elegant method for solving something like this? Also, more generally, are there problems of this sort that are fundamentally messy, which require long case-by-case calculations and there's no tidy and pleasing way to answer them?</p> <p>Hope my question makes sense, let me know if there is any clarification needed. Cheers friends!</p> <p><strong>Edit</strong>: It appears I need to share more context and more of my own work so far. Briefly, I came up with this question, it's not for a class, just my own curiosity. It's actually related to character selection in the video game Heroes of the Storm, where each of five players is given a selection of three characters at random. I was just trying to calculate some probabilities, like - What is the chance you get a particular character you want to play? What is the chance the character you want to play appears somewhere among the five players? What is the chance that some character appears twice or more among the five players? 
Etc.</p> <p>For the latter question - What is the chance that some character appears twice or more - I managed a fairly straightforward solution that I hope is correct; here is my process:</p> <p>Characters are represented as the numbers 1-50. Five groups are selected, represented as <span class="math-container">${(X_1, Y_1, Z_1), (X_2, Y_2, Z_2), ..., (X_5, Y_5, Z_5)}$</span> s.t. <span class="math-container">$X_n \neq Y_n \neq Z_n$</span><br /> Let's also call the character set <span class="math-container">$C_n = (X_n, Y_n, Z_n)$</span></p> <p>The probability that some character appears twice or more is the same as 1 minus the probability that all characters are distinct. So we want to find</p> <p><span class="math-container">$ P(C_1, C_2, ..., C_5 $</span> are distinct <span class="math-container">$) = P(C_1 $</span> is distinct<span class="math-container">$) * P(C_2 $</span> is distinct<span class="math-container">$ | C_1 $</span> is distinct<span class="math-container">$) * ... * P(C_5 $</span> is distinct<span class="math-container">$ | C_1, C_2, C_3, C_4 $</span> are distinct<span class="math-container">$) $</span></p> <p><span class="math-container">$X_1 \neq Y_1 \neq Z_1$</span> therefore <span class="math-container">$C_1$</span> is distinct always.</p> <p><span class="math-container">$P(C_2$</span> is distinct | <span class="math-container">$C_1$</span> is distinct) <span class="math-container">$= (\frac{47}{50})(\frac{46}{49})(\frac{45}{48}) $</span> since there are three choices that can no longer be taken if distinction is going to be preserved. 
Since <span class="math-container">$X_2 \neq Y_2 \neq Z_2$</span>, the denominator must decrease by one each time.</p> <p>Similarly, <span class="math-container">$P(C_3$</span> is distinct | <span class="math-container">$C_1, C_2$</span> are distinct) <span class="math-container">$= (\frac{44}{50})(\frac{43}{49})(\frac{42}{48})$</span></p> <p><span class="math-container">$P(C_4$</span> is distinct | <span class="math-container">$C_1, C_2, C_3$</span> are distinct) <span class="math-container">$= (\frac{41}{50})(\frac{40}{49})(\frac{39}{48})$</span></p> <p><span class="math-container">$P(C_5$</span> is distinct | <span class="math-container">$C_1, C_2, C_3, C_4$</span> are distinct) <span class="math-container">$= (\frac{38}{50})(\frac{37}{49})(\frac{36}{48})$</span></p> <p>The probability that every character is distinct is the product of all the above terms, so: <span class="math-container">$(\frac{50}{50})(\frac{49}{49})(\frac{48}{48})(\frac{47}{50})(\frac{46}{49})(\frac{45}{48})(\frac{44}{50})(\frac{43}{49})(\frac{42}{48})(\frac{41}{50})(\frac{40}{49})(\frac{39}{48})(\frac{38}{50})(\frac{37}{49})(\frac{36}{48})$</span></p> <p>Or more succinctly, <span class="math-container">$\frac{50!}{35!*50^5*49^5*48^5} \approx 13.1\%$</span></p> <p>It follows then that the probability of having one character appear at least twice would be approximately 86.9%. I feel fairly confident in this answer but I'm always prone to think I'm right and then be miles off, so if someone sees a mistake in my reasoning (if it's even readable) let me know!</p> <p>I am having a hard time figuring out a solution to the more specific problem of - what is the probability of having one character appear at least three times? I would approach it a similar way, but it seems to require ridiculous amounts of calculations that I don't really care to do, I'd just rather code a quick simulation to find the answer, haha. 
I am interested in the mathematics of it though, and if anyone has advice on a more elegant approach than brute-forcing every conditional case, I would love to hear it! Hope this clears things up a bit.</p>
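<p>For what it's worth, here is one possible sketch of the quick simulation mentioned above (all names and parameters are mine). With a fixed seed it reproduces the exact all-distinct probability of about 13.1% derived above, and it also estimates the three-of-a-kind probability:</p>

```python
import random
from collections import Counter

POOL, GROUPS, PER = 50, 5, 3

def all_distinct_prob_exact():
    # the product (50/50)(49/49)(48/48)(47/50)... from the derivation above
    p, used = 1.0, 0
    for _ in range(GROUPS):
        for i in range(PER):
            p *= (POOL - used) / (POOL - i)
            used += 1
    return p

def simulate(trials, seed=1):
    rng = random.Random(seed)
    distinct = triple = 0
    for _ in range(trials):
        drawn = []
        for _ in range(GROUPS):
            drawn.extend(rng.sample(range(1, POOL + 1), PER))
        top = max(Counter(drawn).values())
        distinct += (top == 1)
        triple += (top >= 3)
    return distinct / trials, triple / trials

print(round(all_distinct_prob_exact(), 4))  # 0.1309
print(simulate(20000))
```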
Rohit Pandey
155,881
<p>First, you consider the probability of the number <span class="math-container">$1$</span> appearing three or more times. You can then multiply this probability by <span class="math-container">$50$</span>. In any 3-ball draw, the probability of <span class="math-container">$1$</span> appearing is <span class="math-container">$p=\frac{3}{50}$</span>. Now you want three or more of the <span class="math-container">$5$</span> draws to have <span class="math-container">$1$</span>. This is the survival function of the Binomial distribution with <span class="math-container">$n=5$</span> and <span class="math-container">$p$</span>.</p> <hr /> <p>EDIT: As pointed out in the comments, this answer is wrong since it assumes we can multiply by <span class="math-container">$50$</span>, which is not the case. Leaving it in as a cautionary tale.</p>
2,455,428
<p>While going through some exercises in my analysis textbook, I came up with an equation which looks like an identity. I strongly believe that this is the case, but I couldn't prove this.</p> <blockquote> <p>$$\sum_{0\leq k\leq n}(-1)^k\frac{p}{k+p}\binom{n}{k} = \binom{n+p}{p}^{-1}$$</p> </blockquote> <p>Can someone provide a proof of this identity? Also, it would help a lot if you could explain the general strategy of proving such identities, if there is one, for me. Thank you.</p>
Ivan Neretin
269,518
<p>The general strategy is to introduce a variable $x$ and then to deal with some polynomials and/or other power series, of which your expression is a value at $x=1$.</p> <p>In your case it goes along these lines: $$\sum_{0\leq k\leq n}(-1)^k\frac{p}{k+p}\binom{n}{k}=F(1),\text{ where} \\ F(x)=\sum_{0\leq k\leq n}(-1)^kx^{k+p}\frac{p}{k+p}\binom{n}{k}$$</p> <p>The power was chosen out of the blue, so that the function would simplify a little when we take a <em>derivative</em>. $$F'(x)=p\cdot\sum_{0\leq k\leq n}(-1)^kx^{k+p-1}\binom{n}{k}=p\cdot x^{p-1}\sum_{0\leq k\leq n}(-x)^k\binom{n}{k}=p\,x^{p-1}(1-x)^n$$</p> <p>Together with the obvious $F(0)=0$ and the knowledge of the <a href="https://en.wikipedia.org/wiki/Beta_function" rel="nofollow noreferrer">beta function</a>, this gives us the way to reconstruct the desired $F(1)$: $$F(1)=\int\limits_0^1F'(x)dx=p\int\limits_0^1x^{p-1}(1-x)^ndx=p\cdot B(p,n+1)=\\ =p\cdot{(p-1)!\;n!\over(n+p)!}={p!\;n!\over(n+p)!}=\binom{n+p}{p}^{-1}$$</p>
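<p>For identities like this, an exact rational spot-check is also a cheap sanity test before (or after) hunting for a proof; a possible sketch of my own, not part of the argument:</p>

```python
from fractions import Fraction
from math import comb

def lhs(n, p):
    return sum(Fraction((-1) ** k * p, k + p) * comb(n, k) for k in range(n + 1))

def rhs(n, p):
    return Fraction(1, comb(n + p, p))

# exact equality over a grid of small cases, no floating point involved
assert all(lhs(n, p) == rhs(n, p) for n in range(8) for p in range(1, 6))
print(lhs(3, 2))  # 1/10, which is 1/C(5,2)
```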
2,332,419
<p>What's the angle between the two hands of the clock when the time is 15:15? The answer I heard was 7.5 degrees, and I really cannot understand it. Can someone help? Is it true, and why?</p>
B. Goddard
362,009
<p>At 15:15, the minute hand is 90 degrees from 12. The hour hand is $3+1/4= 13/4$ hours past 12, so the angle is $13/4$ out of $12$ hours. So it's</p> <p>$$\frac{13/4}{12} 360 = \frac{195}{2}.$$</p> <p>The difference between the two angles is </p> <p>$$\frac{195}{2} - 90 = \frac{15}{2}.$$</p>
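<p>The same bookkeeping can be wrapped in a small function (a sketch of my own; the hour hand moves $0.5$ degrees per minute, the minute hand $6$ degrees per minute):</p>

```python
def clock_angle(hour, minute):
    # each hand's angle clockwise from 12, in degrees
    minute_angle = 6.0 * minute                      # 360/60 degrees per minute
    hour_angle = 30.0 * (hour % 12) + 0.5 * minute   # 360/12 per hour, plus drift
    diff = abs(hour_angle - minute_angle) % 360
    return min(diff, 360 - diff)

print(clock_angle(15, 15))  # 7.5
```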
384,450
<p>I don't know what this double-arrow $\twoheadrightarrow$ means!</p>
amWhy
9,003
<p>From Wikipedia: <a href="http://en.wikipedia.org/wiki/Surjective_function">Surjective funtion</a></p> <blockquote> <p>A surjective function is a function whose image is equal to its codomain. Equivalently, a function f with domain $X$ and codomain $Y$ is surjective if for every $y$ in $Y$ there exists at least one $x$ in $X$ with $f(x)=y$. <strong>Surjections are sometimes denoted by a two-headed rightwards arrow,</strong> as in $f : X \twoheadrightarrow Y,\;$ [Boldface mine.]</p> </blockquote> <p>See also the section on the properties or characterizations of <a href="http://en.wikipedia.org/wiki/Surjective_function#Properties">"surjections"</a>.</p>
2,564,321
<p>I'm trying to prove </p> <p>$$e^x\leq e^a\frac{b-x}{b-a}+e^b\frac{x-a}{b-a}$$</p> <p>for any $x\in[a,b]$. Since this looks reminiscent of the mean value theorem or linear approximations I jotted down some equations relating to those, but didn't see any way of making progress with them. I know that $e^x$ is an increasing function so if I could perhaps show that the value on the right is equal to $e$ to some value and prove that value is greater than $x$, it would be sufficient. But I'm not seeing any way to make that work either.</p> <p>The right-hand side is also equal to this line</p> <p>$$\left(\frac{e^b-e^a}{b-a}\right)x+\frac{e^ab-e^ba}{b-a}$$</p> <p>But I can't think of how I would prove that two curves don't intersect in a region. </p>
user284331
284,331
<p>Since $\exp:x\mapsto e^{x}$ is convex, and since for $x\in[a,b]$ the weights $\dfrac{b-x}{b-a}$ and $\dfrac{x-a}{b-a}$ are nonnegative, sum to $1$, and satisfy $\dfrac{b-x}{b-a}\,a+\dfrac{x-a}{b-a}\,b=x$, we have \begin{align*} e^{x}=\exp\left(\dfrac{b-x}{b-a}a+\dfrac{x-a}{b-a}b\right)\leq\dfrac{b-x}{b-a}\exp(a)+\dfrac{x-a}{b-a}\exp(b). \end{align*}</p>
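<p>A quick numerical illustration of this convexity bound (my own aside, not part of the proof):</p>

```python
import math
import random

def chord(a, b, x):
    # the secant line through (a, e^a) and (b, e^b), evaluated at x
    return math.exp(a) * (b - x) / (b - a) + math.exp(b) * (x - a) / (b - a)

random.seed(0)
for _ in range(1000):
    a = random.uniform(-2.0, 1.0)
    b = a + random.uniform(0.5, 2.0)   # ensures b > a
    x = random.uniform(a, b)
    assert math.exp(x) <= chord(a, b, x) + 1e-12

print(chord(0.0, 1.0, 0.5), math.exp(0.5))
```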
4,141,477
<blockquote> <p>Find <span class="math-container">$$\lim_{x\rightarrow0} x\tan\frac1x$$</span></p> </blockquote> <p>Now I tried to find the form of the limit (<span class="math-container">$0/0$</span> or <span class="math-container">$0\cdot \infty$</span> or <span class="math-container">$\infty/\infty$</span>), but as <span class="math-container">$x\rightarrow 0$</span>, <span class="math-container">$\tan(1/x)$</span> tends to <span class="math-container">$\tan \infty$</span>, and since <span class="math-container">$\tan x$</span> is unbounded unlike <span class="math-container">$\sin x$</span> or <span class="math-container">$\cos x$</span>, no particular value or range can be assumed for <span class="math-container">$\tan(1/x)$</span>.</p> <p>Then I tried to find LHL and RHL.</p> <p>Let <span class="math-container">$\lim_{x\rightarrow0^+} x\tan{(1/x)}=L$</span>.</p> <p>Then <span class="math-container">$\lim_{x\rightarrow0^-} x\tan{(1/x)}=-L$</span>, since <span class="math-container">$x$</span> is approaching from the negative side, the input <span class="math-container">$1/x$</span> of <span class="math-container">$\tan$</span> is the negative of the input in RHL, and <span class="math-container">$\tan (-x)=-\tan x$</span></p> <p>Now if the limit exists, then <span class="math-container">$LHL=RHL$</span>, thus <span class="math-container">$L=0$</span>.</p> <p>Thus I got that if the limit exists, then it must be equal to <span class="math-container">$0$</span>. 
But this doesn't confirm that the limit exists (and it doesn't).</p> <p>Please help me prove that the limit doesn't exist, and also please point out the mistakes (if any) in the argument I presented above (sorry, I might be weak in limits and their basics).</p> <p><strong>EDIT:</strong> As pointed out by Shubham in the comments, I forgot to take the sign of <span class="math-container">$x$</span> too in the <span class="math-container">$LHL$</span>, thus rendering the argument which proved <span class="math-container">$L=0$</span> moot.</p> <p><em><strong>THANK YOU</strong></em></p>
Thomas Andrews
7,933
<p>Let <span class="math-container">$f(x)=x\tan(1/x)$</span> when <span class="math-container">$x\neq 0.$</span></p> <p>If <span class="math-container">$a_n=\frac{1}{2\pi n}$</span> then <span class="math-container">$f(a_n)=0$</span> for all <span class="math-container">$n.$</span> Thus <span class="math-container">$$\lim_{n\to\infty} f(a_n)=0.$$</span></p> <p>On the other hand, let <span class="math-container">$$b_n=\frac1{2\pi n+\arctan(n)}$$</span></p> <p>Then <span class="math-container">$$\frac{n}{(2n+1)\pi}&lt;f(b_n)&lt;\frac{n}{2n\pi}$$</span> so by the squeeze theorem, <span class="math-container">$$\lim_{n\to\infty} f(b_n)=\frac{1}{2\pi}$$</span></p> <p>Finally, let <span class="math-container">$$c_n=\frac1{2\pi n+\arctan(n^2)}$$</span> and show <span class="math-container">$f(c_n)\to +\infty.$</span></p> <p>But <span class="math-container">$a_n,b_n,c_n$</span> all converge to <span class="math-container">$0,$</span> so <span class="math-container">$$\lim_{x\to 0} f(x)$$</span> cannot exist.</p> <p>(We only needed any two of these three sequences to disprove convergence.)</p>
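<p>These three sequences can also be watched numerically for moderate <span class="math-container">$n$</span> (an illustration only, not part of the proof; for large <span class="math-container">$n$</span> the huge slope of <span class="math-container">$\tan$</span> near these points makes floating point unreliable):</p>

```python
import math

def f(x):
    return x * math.tan(1 / x)

def a(n): return 1 / (2 * math.pi * n)
def b(n): return 1 / (2 * math.pi * n + math.atan(n))
def c(n): return 1 / (2 * math.pi * n + math.atan(n * n))

print(f(a(10)))                      # essentially 0
print(f(b(100)), 1 / (2 * math.pi))  # both close to 0.159
print(f(c(30)))                      # already large, and growing with n
```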
1,252,167
<p>I'm trying to understand what a vector of functions is; the question came up while trying to understand how to solve linear homogeneous differential equations. </p> <p>It seems that functions can be manipulated as vectors as long as they are not interpreted as having real values.<br> Suppose the solution space of a linear homogeneous differential equation is spanned by $\cos(x)$ and $\sin(x)$; then the general solution is $a \sin(x) + b \cos(x)$, and it's a vector.</p> <p>But, if $y$ is a vector $y = a \sin(x) + b \cos(x)$, then how is it that for any value of $x$, $y$ is always a scalar value?</p> <p>If $x$ is a set of values rather than a symbol, then how can $y$ remain a vector if, for each element of $x$, $y$ is scalar?</p>
Ben Blum-Smith
13,120
<p>As often happens when I write questions here, I realized something while writing. Here I actually realized the answer.</p> <p>It is <strong>no</strong>.</p> <p>Here is an open cover of $\mathbb{R}$ with no irredundant subcover:</p> <p>$\Lambda=\mathbb{N}$.</p> <p>$U_n = (-n,n)$.</p> <p>Because $U_1\subset U_2\subset U_3\subset\dots$, given any pair of $U_n$'s, one is contained in the other. Therefore any subcover is redundant unless it consists of a single $U_n$. But no single $U_n$ forms a cover. Thus no subcover is irredundant.</p>
1,824,966
<p>Ok, I was asked this strange question whose concept I can't seem to grasp:</p> <blockquote> <p>Let $T$ be a linear transformation such that: $$T \langle1,-1\rangle = \langle 0,3\rangle \\ T \langle2, 3\rangle = \langle 5,1\rangle $$ Find $T$.</p> </blockquote> <p>Is there supposed to be a function that comes out of this? A matrix of some kind? Maybe both? If so, what is it?</p>
jugglingmike
341,620
<p>Assuming $T:\mathbb{K}^2 \rightarrow \mathbb{K}^2$ for some field $\mathbb{K}$, the required transformation can be expressed as a matrix. To do this recall that $T&lt;1,0&gt;$ gives the first column and $T&lt;0,1&gt;$ gives the second. <br/> Writing $&lt;1,0&gt;=\frac{1}{5}(3&lt;1,-1&gt;+&lt;2,3&gt;)$ and $&lt;0,1&gt;=\frac{1}{5}(-2&lt;1,-1&gt;+&lt;2,3&gt;)$, we get $T&lt;1,0&gt; = \frac{1}{5}(3&lt;0,3&gt;+&lt;5,1&gt;)=&lt;1,2&gt;$ and $T&lt;0,1&gt; = \frac{1}{5}(-2&lt;0,3&gt;+&lt;5,1&gt;)=&lt;1,-1&gt;$.</p>
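<p>One can re-check the computation with exact rational arithmetic (a sketch; the helper names are mine, and the coefficients come from Cramer's rule for the $2\times 2$ change of basis):</p>

```python
from fractions import Fraction as F

v1, w1 = (1, -1), (0, 3)   # T v1 = w1
v2, w2 = (2, 3), (5, 1)    # T v2 = w2

def rep(target, u, v):
    # coefficients (alpha, beta) with target = alpha*u + beta*v, by Cramer's rule
    det = u[0] * v[1] - v[0] * u[1]
    alpha = F(target[0] * v[1] - v[0] * target[1], det)
    beta = F(u[0] * target[1] - target[0] * u[1], det)
    return alpha, beta

def T(x):
    alpha, beta = rep(x, v1, v2)
    return (alpha * w1[0] + beta * w2[0], alpha * w1[1] + beta * w2[1])

print(T((1, 0)), T((0, 1)))
```

<p>This recovers the two matrix columns, and one can also confirm that $T$ reproduces the two given values.</p>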
37,052
<p>This is my first question on MathOverflow, so I hope my etiquette is up to par here.</p> <p>My question is regarding a <span class="math-container">$3\times3$</span> magic square constructed using the la Loubère method (see <a href="http://en.wikipedia.org/wiki/Magic_square#Method_for_constructing_a_magic_square_of_odd_order" rel="nofollow noreferrer">la Loubère method</a>).</p> <p>Using the method, I have constructed a magic square and several semimagic squares (where one or both of the diagonals do not add up to a magic sum) with a program written on my graphing calculator. After playing around with the program, I was shocked that the determinants of these <span class="math-container">$3\times3$</span> magic squares are all the same (specifically -360). Why is this so? (I am still an undergraduate so please go easy on the math :] )</p>
BigBill
5,210
<p>The answer is provided by the article</p> <ul> <li>Edward G. Effros and Mihai Popa, <em>Feynman diagrams and Wick products associated with q-Fock space</em>, PNAS 100 (15) (2003) 8629-8633, <a href="https://doi.org/10.1073/pnas.1531460100" rel="nofollow noreferrer">https://doi.org/10.1073/pnas.1531460100</a></li> </ul> <p>However, the authors work in the context of <span class="math-container">$q$</span>-Fock space. I do not know if there exists an older paper which provides the answer in the less general context of antisymmetric Fock space (i.e. q=1).</p>
2,445,023
<p>I am having trouble determining which function grows faster. </p> <p>$f(n) = 3\log_4 n + \sqrt{n} + 3 \\ g(n) = 4\log_3 n + \log n + 200$</p> <p>Can someone let me know how to solve this? </p>
videlity
70,729
<p>$\log n$ grows slower than any $n^d$, in particular $\sqrt{n}$. Then, look at the terms which grow the fastest in $f$ and $g$. It is clear that $f$ will grow faster because it has the $\sqrt{n}$ term.</p> <p>You can show this concretely by considering $$\lim_{n\to\infty} \frac{f(n)}{g(n)}$$ and showing it goes to $\infty$.</p> <p>edit:</p> <p>$$\begin{align} \lim_{n\to\infty} \frac{f(n)}{g(n)} &amp;= \lim_{n\to\infty}\frac{3\log_4 n + \sqrt{n} + 3}{4\log_3 n + \log n + 200} \\ &amp;= \lim_{n\to\infty} \frac{1/\sqrt{n}}{1/\sqrt{n}}\frac{3\log_4 n + \sqrt{n} + 3}{4\log_3 n + \log n + 200} \\&amp;=\lim_{n\to\infty} \frac{3\log_4 n/\sqrt{n} + 1 + 3/\sqrt{n}}{4\log_3 n /\sqrt{n}+ \log n/\sqrt{n} + 200/\sqrt{n}} \\&amp;\to \infty \end{align}$$ since $\log n/\sqrt{n}\to 0$ (for any log base).</p>
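<p>A numerical illustration of the diverging ratio (an aside of my own, not a proof):</p>

```python
import math

def f(n): return 3 * math.log(n, 4) + math.sqrt(n) + 3
def g(n): return 4 * math.log(n, 3) + math.log(n) + 200

for n in (10**3, 10**6, 10**9, 10**12):
    print(n, f(n) / g(n))   # the ratio keeps growing
```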
3,663,526
<p><span class="math-container">$F(x)=\begin{cases} x^3+5, &amp; x\ge 1\\ x^3+2, &amp; 0\leq x&lt;1\\ x^3, &amp; x&lt;0 \end{cases}$</span></p> <p>Let <span class="math-container">$\mu_{F}$</span> be the Lebesgue-Stieltjes measure associated with <span class="math-container">$F$</span>. Find the Lebesgue decomposition of <span class="math-container">$\mu_{F}$</span> with respect to the Lebesgue measure <span class="math-container">$m$</span>.</p> <p><span class="math-container">$\frac{dF(x)}{dx}=3x^2$</span>. <span class="math-container">$F$</span> has a jump of height 2 at <span class="math-container">$x=0$</span> and a jump of height 3 at <span class="math-container">$x=1$</span>. </p> <p>My answer is <span class="math-container">$\mu_{F}(E)=\mu_{a}(E)+\mu_{d}(E)=\int_{E} 3x^2\,dm(x)+2\delta_{0}(E)+3\delta_{1}(E)$</span>, where <span class="math-container">$\mu_{a}\ll m$</span> and <span class="math-container">$\mu_{a}\perp\mu_{d}$</span>.</p> <p>I am asking because I do not quite understand the concept of Lebesgue decomposition. Are there any flaws in my analysis? I asked similar questions here before, but no one answered.</p>
Saptak Bhattacharya
734,601
<p>This function is not absolutely continuous, to be precise. For one thing, absolutely continuous functions need to be continuous, which is clearly not the case here. Secondly, you can think of the Lebesgue decomposition theorem in terms of projections on cones. You call two measures mutually orthogonal if either of them is zero on the support of the other, so in a sense they don't 'overlap'. You can see for yourself that the space of all positive measures (take finite, if you wish) on a measurable space forms a convex cone over <span class="math-container">$\mathbb{R}_{\geq 0}$</span>. The collection of measures absolutely continuous with respect to a given measure <span class="math-container">$m$</span> forms a cone too. So you can think of the Lebesgue decomposition theorem as an orthogonal decomposition, a concept you might be familiar with from geometry and linear algebra. The absolutely continuous part can be thought of as the projection of the measure onto the cone of measures absolutely continuous with respect to <span class="math-container">$m$</span>.</p>
3,663,526
<p><span class="math-container">$F(x)=\begin{cases} x^3+5, &amp; x\ge 1\\ x^3+2, &amp; 0\leq x&lt;1\\ x^3, &amp; x&lt;0 \end{cases}$</span></p> <p>Let <span class="math-container">$\mu_{F}$</span> be the Lebesgue-Stieltjes measure associated with <span class="math-container">$F$</span>. Find the Lebesgue decomposition of <span class="math-container">$\mu_{F}$</span> with respect to the Lebesgue measure <span class="math-container">$m$</span>.</p> <p><span class="math-container">$\frac{dF(x)}{dx}=3x^2$</span>. <span class="math-container">$F$</span> has a jump of height 2 at <span class="math-container">$x=0$</span> and a jump of height 3 at <span class="math-container">$x=1$</span>. </p> <p>My answer is <span class="math-container">$\mu_{F}(E)=\mu_{a}(E)+\mu_{d}(E)=\int_{E} 3x^2\,dm(x)+2\delta_{0}(E)+3\delta_{1}(E)$</span>, where <span class="math-container">$\mu_{a}\ll m$</span> and <span class="math-container">$\mu_{a}\perp\mu_{d}$</span>.</p> <p>I am asking because I do not quite understand the concept of Lebesgue decomposition. Are there any flaws in my analysis? I asked similar questions here before, but no one answered.</p>
Ian
83,396
<p>You first try to subtract off <span class="math-container">$3x^2 dm(x)$</span> from <span class="math-container">$\mu_F$</span>, since that's definitely an absolutely continuous measure that's in there.</p> <p>What you're left with is now <span class="math-container">$\mu_G$</span> where <span class="math-container">$G(x)=\begin{cases} 0 &amp; x &lt; 0 \\ 2 &amp; 0 \leq x &lt; 1 \\ 5 &amp; x \geq 1 \end{cases}$</span>. At this point you see that <span class="math-container">$\mu_G$</span> and <span class="math-container">$m$</span> are mutually singular, by decomposing <span class="math-container">$\mathbb{R}$</span> into the disjoint union <span class="math-container">$\{ 0,1 \} \cup (\mathbb{R} \setminus \{ 0,1 \})$</span>. So <span class="math-container">$\mu_F=3x^2 dm(x) + \mu_G$</span> is a Lebesgue decomposition of <span class="math-container">$\mu_F$</span>.</p>
49,281
<p>I have two ingredients here:</p> <ul> <li>a big dataset contained in a list, with ~ 20M values. </li> <li>a function that takes each element of the list as input and yields True or False</li> </ul> <p>I want to save somewhere the elements of the list that yielded True. Usually I would do something like this:</p> <pre><code>list = Range[10]; fun = PrimeQ; Reap[ If[fun[#], Sow[#]] &amp; /@ list ] </code></pre> <p>This works perfectly, except for the fact that we have to wait for the end of the computation in order to be able to see all the results. When dealing with such huge lists, sometimes waiting for the computation to finish is not an option.</p> <p>This is what I am doing now to split the computation into more chunks: </p> <pre><code>SaveResult[list_, partitions_, fun_] := Table[ Print["doing iteration ", i]; If[ #[[2]] =!= {}, #[[2]] &gt;&gt;&gt; NotebookDirectory[] &lt;&gt; "/Data/" &lt;&gt; ToString[i] ] &amp;@ Reap[(If[fun[#], Sow[#]]) &amp; /@ Partition[list, partitions][[i]]], {i, 1, 9} ]; </code></pre> <p>My question is: is there a better way of saving partial results or dealing with huge datasets?</p>
ciao
11,467
<p>Rather than partition the whole list which will just gobble space (Edit: Turns out that's false in general for simple partitioning, but nonetheless the way you're doing it in your example can cause data to be missed unless the number of partitions is an exact divisor of the length of the list - you'd need <code>Partition[list, psize, psize, {1, 1}, {}]</code> otherwise), something like:</p> <pre><code>test = Range[20]; rasherFn[x_] := Pick[x, Positive[Mod[x, 3]]]; savedata[range_, data_] := {range, data}; doPartial = Module[{data = #1, func = #3, saver = #4, partial, partrange}, Map[(Print["Doing range: ", partrange = #[[1]] + 1 ;; Min[Length@data, #[[2]]]]; partial = func[data[[partrange]]]; saver[partrange, partial]) &amp;, Partition[FindDivisions[{0, Length@data, 1}, #2], 2, 1]]] &amp;; doPartial[test, 5, rasherFn, savedata] (* Doing range: 1;;5 Doing range: 6;;10 Doing range: 11;;15 Doing range: 16;;20 {{1 ;; 5, {1, 2, 4, 5}}, {6 ;; 10, {7, 8, 10}}, {11 ;; 15, {11, 13, 14}}, {16 ;; 20, {16, 17, 19, 20}}} *) </code></pre> <p>Where the arguments to <code>doPartial</code> are your list, the number of divisions to make (e.g. 10 will try to divide the work into 10 chunks), the function to apply to the segment of data, and the function to save the result, respectively.</p> <p>I've used a simple example function (picking the <code>Mod</code> 3 values of the data where the result is positive), and a saving function that in this case just returns a list with the range of data and results of the function (here you'd want to export the result to a file).</p> <p>Check the documentation for <code>FindDivisions</code> for more info on its operation, and note that it will try to get "close" to your desired # of divisions, but won't always match exactly. In any case, the end result will be division of the work with the union being the total work...</p>
261,410
<p>Let $z_1,z_2,\dots,z_n\in\Bbb{C}$ be distinct and $w_1,w_2,\dots,w_n\in\Bbb{C}$ be arbitrary. Suppose $f, g$ are two polynomials of degree less than $n$ such that $$f(z_j)=w_j,\qquad g(z_j)=\bar{w}_j \qquad\text{for $1\leq j\leq n$}.$$ Define $\Omega(z)=\prod_{j=1}^n(z-z_j)$. The following puzzles me.</p> <blockquote> <p><strong>Question 1.</strong> Is this true? $$\sum_{k=1}^n\frac{\vert f^{\prime}(z_k)\vert^2}{\vert \Omega^{\prime}(z_k)\vert^2}\leq \sum_{k=1}^n\frac{\vert g^{\prime}(z_k)\vert^2}{\vert\Omega^{\prime}(z_k)\vert^2}.$$</p> </blockquote> <p><strong>EDIT.</strong> I'm sorry, one of the conditions was missing. To give credit, I'll leave the question as it stands and ask the correct one below.</p> <blockquote> <p><strong>Question 2.</strong> What if we insist that $f$ and $g$ have equal degrees?</p> </blockquote>
Markus Sprecher
100,908
<p>This Matlab program shows that your conjecture is in general not true for $n\geq 3$</p> <pre><code>n = 3; z = rand(1, n) + 1i * rand(1, n) w = rand(1, n) + 1i * rand(1, n) f = polyfit(z, w, n-1); g = polyfit(z, conj(w), n - 1); % note that Omega(z_i) = 0 and Omega(z) = z^n+q(z) where deg(q)&lt;=n-1 and % hence q(z_i)=-z_i^n Omega = [1 -polyfit(z, z.^n, n - 1)]; df = polyder(f); dg = polyder(g); dOmega = polyder(Omega); adf = abs(polyval(df, z)); adg = abs(polyval(dg, z)); adO = abs(polyval(dOmega, z)); sum((adf./adO).^2)-sum((adg./adO).^2) </code></pre>
2,618,746
<p>The distance between two stations $X$ and $Y$ is 220 km.</p> <p>Trains $P$ and $Q$ leave station $X$ at 7 am and 8:15 am respectively at the speed of 25 km/hr and 20 km/hr respectively for journey towards $Y$.</p> <p>Train $R$ leaves station $Y$ at 11:30 am at a speed of 30 km/hr for journey towards $X$. </p> <p>When and where will $P$ be equidistant from $Q$ and $R$ ?</p>
Michael Hoppe
93,935
<p>Hint: start by analyzing the situation at 11:30</p>
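<p>One way to carry the hint through, as a sketch: measure positions in km from $X$ and time in hours after 11:30, when all three trains are moving.</p>

```python
# Snapshot at 11:30 (the hint's suggested starting point), in km from X:
p0 = 25 * 4.5    # P left at 7:00  -> 4.5 h of travel by 11:30 -> 112.5 km
q0 = 20 * 3.25   # Q left at 8:15  -> 3.25 h of travel         -> 65 km
r0 = 220.0       # R starts from Y (220 km) at 11:30

def pos(t):
    """Positions of P, Q, R at t hours after 11:30."""
    return p0 + 25 * t, q0 + 20 * t, r0 - 30 * t

# P lies between Q and R, so P is equidistant from both when
# (P - Q) == (R - P):  (p0 - q0) + 5 t  ==  (r0 - p0) - 55 t.
t = (r0 - 2 * p0 + q0) / 60
p, q, r = pos(t)  # t = 1 h, i.e. at 12:30, with P at 137.5 km from X
```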
2,604,178
<p>Let $(G=(a_1,...,a_n),*)$ be a finite group. For an element $a_i \in G$, define a permutation $\phi = \phi(a_i)$ by left multiplication:</p> <p>$$ \begin{bmatrix} a_1 &amp; a_2 &amp; ... &amp; a_n \\ a_i*a_1 &amp; a_i*a_2 &amp; ... &amp; a_i*a_n \\ \end{bmatrix} $$ I am struggling to understand why this is the same permutation as </p> <p>$$ \begin{bmatrix} a_k*a_1 &amp; a_k*a_2 &amp; ... &amp; a_k*a_n \\ a_i*a_k*a_1 &amp; a_i*a_k*a_2 &amp; ... &amp; a_i*a_k*a_n \\ \end{bmatrix} $$ where $a_k \in G$. Can somebody give me a reason why these permutations are the same? Thanks for any help.</p>
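<p>A quick sanity check of the claim, sketched in Python with the toy group $(\mathbb{Z}_6,+)$ (addition mod 6 playing the role of $*$): since left multiplication by $a_k$ is a bijection of $G$, the second array lists exactly the same input&ndash;output pairs as the first, just with its columns in a different order.</p>

```python
n = 6
G = list(range(n))

def op(a, b):
    # group operation: addition modulo 6
    return (a + b) % n

a_i, a_k = 2, 5
# First array: column a_j maps to a_i * a_j.
perm1 = {a_j: op(a_i, a_j) for a_j in G}
# Second array: column a_k*a_j maps to a_i*(a_k*a_j).  As a_j runs over G,
# so does a_k*a_j, so the same pairs appear in permuted column order.
perm2 = {op(a_k, a_j): op(a_i, op(a_k, a_j)) for a_j in G}
```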
user3794724
521,240
<p>In some cases, Newton's method does not work. For example, if you encounter a stationary point in the process, you get a division by zero. </p> <p>Your example is another one in which Newton's method does not work, because starting with a real number you only pass through real numbers in the process. But you could start with a complex number, and (with a bit of luck) you will converge to the correct complex root.</p>
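<p>A small Python sketch of this behaviour for $f(z)=z^2+1$, whose roots $\pm i$ are non-real: a real starting point can never leave the real axis (so it never converges, since $z^2+1\ge 1$ there), while a complex starting point converges to a root.</p>

```python
def newton_step(z):
    # Newton iteration for f(z) = z**2 + 1, with f'(z) = 2z
    return z - (z * z + 1) / (2 * z)

z_real = complex(1.3, 0.0)  # real start: iterates stay on the real axis
z_cplx = complex(1.0, 1.0)  # complex start: converges to the root i
for _ in range(40):
    z_real = newton_step(z_real)
    z_cplx = newton_step(z_cplx)
```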
3,149,110
<p>I am learning algebraic number theory, but the exercises are very hard for me. Could you please recommend a book with answers? Many thanks!</p>
vxnture
652,366
<p>Murty &amp; Esmonde's <em>Problems in Algebraic Number Theory</em> (available <a href="http://148.206.53.84/tesiuami/S_pdfs/Problems%20in%20Algebraic%20Number%20Theory.pdf" rel="nofollow noreferrer">here</a> as a pdf) is an excellent source of problems with solutions. However, as someone pointed out in the comments, looking up a solution to a problem is helpful only after you have worked on it yourself for a sufficient amount of time.</p>
361,862
<p>I would like you to present and briefly explain some examples of theorems having hypotheses that are (as far as we know) actually necessary in their proofs, but whose uses in the arguments are extremely subtle and difficult to notice at first sight. I am looking for hypotheses or conditions that appear to be almost absent from the proof but which are actually hidden behind some really abstract or technical argument. It would be even more interesting if this unnoticed hypothesis was not noted at first but later had to be added in another paper or publication, not because the proof of the theorem was wrong but because the author did not notice that this or that condition was actually playing a role behind the scenes and needed to be added. And, finally, an extra point if this hidden hypothesis led to some important development or advance in the area around the theorem, in the sense that it opened new questions or new paths of research. This question might be related to this <a href="https://mathoverflow.net/questions/352249/nontrivially-fillable-gaps-in-published-proofs-of-major-theorems">other one</a>, but notice that it is not the same, as I am speaking about subtleties in proofs that were not exactly incorrect but incomplete, in the sense of not mentioning that some object or result had to be used, maybe in a highly tangential way.</p> <p>In order to put some order into the possible answers and make this post useful for other people, I would like you to give references and to at least explain the subtleties that help the hypothesis hide at first sight, expose how they relate to the actual proof or method of proving, and tell the main steps that were made by the community until this hidden condition was found; i.e., you can in fact write a short history of the evolution of our understanding of the subtleties and nuances surrounding the result you want to mention.</p> <p>A very well known and classic example of this phenomenon is the full theory of classical 
Greek geometry, which, although correctly developed in the famous work of Euclid, was later found to be incompletely axiomatized, as there were some axioms that Euclid uses but <a href="https://en.wikipedia.org/wiki/Euclidean_geometry#cite_note-6" rel="noreferrer">did not mention</a> as such, mainly because these manipulations are so highly intuitive that it was not easy to recognize that they were being used in an argument. Happily, a better understanding of these axioms and their internal logical relations, through a long period of study and research lasting millennia, led to the realization that these axioms were necessary though not explicitly mentioned, and to the development of new kinds of geometry and different geometrical worlds.</p> <p>Maybe this one is (being the most classic, and expanded through so many centuries and pages of research) the most known, important and famous example of the phenomenon I am looking for. However, I am also interested in other smaller and more humble examples of this phenomenon appearing in more recent papers, theorems, lemmas and results in general.</p> <p>Note: I vote for making this community wiki, as it seems that this is the best way of dealing with this kind of question.</p>
Phil Tosteson
52,918
<p>There is <a href="https://en.wikipedia.org/wiki/Euler_characteristic" rel="noreferrer">Euler's formula</a> <span class="math-container">$$V - E + F = 2.$$</span> Today, we might not think of the hypotheses as being especially tricky. But Lakatos's classic <a href="https://www.jstor.org/stable/685347" rel="noreferrer">Proofs and Refutations</a> makes an entertaining case for its subtlety. </p> <hr> <p>If Lakatos does not convince you, consider <i>Euler's Theorem for Tilings</i>. Suppose we have a tiling of the plane; take a finite portion of it, apply the standard Euler formula, and divide by <span class="math-container">$F$</span>. Intuitively, as we take larger and larger portions, <span class="math-container">$V/F$</span> and <span class="math-container">$E/F$</span> approach limiting values <span class="math-container">$v$</span> and <span class="math-container">$e$</span> respectively, and we obtain Euler's Theorem for Tilings: <span class="math-container">$$v - e + 1 = 0.$$</span> However, even if the limits <span class="math-container">$v$</span> and <span class="math-container">$e$</span> exist, they do not necessarily satisfy Euler's Theorem for Tilings unless the tiling satisfies certain subtle hypotheses. For example, in the heptagonal tiling below (taken from Gr&uuml;nbaum and Shephard's book <i>Tilings and Patterns</i>), the heptagons get skinnier and skinnier as one moves out from the center, creating a "singularity at infinity." 
It is not hard to see that <span class="math-container">$v=7/3$</span> and <span class="math-container">$e=7/2$</span>, so <span class="math-container">$v-e+1 = -1/6$</span> and not zero.</p> <p><img src="https://i.stack.imgur.com/5TsTT.jpg"></p> <p>In the notes to Chapter 3, Gr&uuml;nbaum and Shephard write:</p> <blockquote> <p>Euler's Theorem for Tilings and its various corollaries are often quoted and used&mdash;usually without any indication of restrictions that must be imposed on a tiling to give meaning and validity to this procedure. In contrast to many other cases&mdash;in which a cavalier attitude towards mathematical rigor is an aesthetic shortcoming that does not affect the outcome&mdash;here many authors have claimed to have proved statements that are actually false. As recent examples we may mention Walsh (<i>Geometriae Dedicata</i> <b>1</b> (1971), 117&ndash;124) and Loeb (<i>Space Structures: Their Harmony and Counterpoint</i>, especially Chapter 9).</p> </blockquote>
361,862
Noah Schweber
8,133
<p>This is one which I've seen trip up a number of students when first learning the material: the hypothesis of <strong>admissibility</strong> <em>(or <strong>acceptability</strong> - I learned the latter, but the former seems more common)</em> in the context of numberings of unary partial computable functions (or equivalent objects like c.e. sets).</p> <p>Results like Rice's Theorem and the Recursion Theorem are generally presented for a specific numbering whose details are quickly forgotten; the motto "all reasonable numberings work the same" is introduced somewhere around this point, and is mostly true. However, the right notion of "reasonability" is not usually obvious, since presentations tend to focus on the following two features of the canonical numbering <span class="math-container">$\Phi:=(\varphi_e)_{e\in\mathbb{N}}$</span>:</p> <ul> <li><p>The numbering construed as a partial binary function <span class="math-container">$\langle e,x\rangle\mapsto\varphi_e(x)$</span> should itself be computable.</p></li> <li><p>For every unary partial computable <span class="math-container">$f$</span> there should be some <span class="math-container">$e$</span> with <span class="math-container">$f\simeq \varphi_e$</span>.</p></li> </ul> <p>By themselves these properties are not enough to get the standard results to apply: the usual extreme counterexample is a <a href="https://www.sciencedirect.com/science/article/pii/0304397590901414" rel="noreferrer"><strong>Friedberg numbering</strong></a>, which is a numbering satisfying the two properties above such that every partial computable <span class="math-container">$f$</span> has <em>exactly</em> one index (so Rice's Theorem and the Recursion Theorem each fail basically trivially).</p> <p>Instead, we need to strengthen the second bulletpoint above as follows:</p> <ul> <li><strong>(Admissibility/acceptability)</strong>: For every <strong>binary</strong> partial computable <span class="math-container">$f$</span> there is some 
total computable unary <span class="math-container">$g$</span> such that for each <span class="math-container">$e$</span> we have <span class="math-container">$$f(e,-)\simeq \varphi_{g(e)}.$$</span></li> </ul> <p>This amounts to a kind of "universality" of the numbering in question; roughly speaking, every other numbering needs to be translatable into it. This turns out to be exactly what we need to deduce all the basic results about the usual numbering, and indeed so far as I'm aware there really are no essential differences between <em>admissible</em> numberings. Moreover, once this sort of universality occurs to us as something important we're led to consider general comparisons between numberings of various systems, and this leads to several interesting topics (see especially <em>Rogers semilattices</em>).</p>
361,862
Asaf Karagila
7,206
<blockquote> <p><strong>Theorem.</strong> Assuming the axiom of choice, the countable union of countable sets is countable.</p> </blockquote> <p><em>Proof.</em> Let <span class="math-container">$\{A_n\mid n\in\Bbb N\}$</span> be a family of countable sets, and so we can write <span class="math-container">$A_n$</span> as <span class="math-container">$\{a_{n,m}\mid m\in\Bbb N\}$</span>.</p> <p>Let <span class="math-container">$A$</span> be the union, and define <span class="math-container">$f(a) = 2^n3^m$</span> such that <span class="math-container">$n$</span> is the least such that <span class="math-container">$a\in A_n$</span>, and <span class="math-container">$a=a_{n,m}$</span>. Easily, this is an injection so the union is countable.</p> <hr> <p>The trained eye, of course will notice the use of the axiom of choice immediately. We choose an enumeration of each <span class="math-container">$A_n$</span>. But this is very subtle and usually people will not notice that at first.</p> <p>And of course, this use of choice is necessary. Indeed, it is consistent that the real numbers are a countable union of countable sets! (Still uncountable, though.) </p>
361,862
Oscar Lanzi
86,625
<p>How about the classic equality <span class="math-container">$0.999...=1$</span>?</p> <p>Proofs of this fact either assume the series defined by this infinite decimal representation converges, or initially prove convergence using the fact that the common ratio of the geometric series has norm less than unity. But the convergence works only because we use a number system where that norm is in fact less than unity. To illustrate how this cannot be taken for granted, render the decimal expansion as the series</p> <p><span class="math-container">$(9/10)+(9/100)+(9/1000)+...$</span></p> <p>and render each term into <span class="math-container">$3$</span>-adics. Since the initial term <span class="math-container">$9/10$</span> and the common ratio <span class="math-container">$1/10$</span> can each be expressed as a <span class="math-container">$3$</span>-adic integer, so can every term in the series. But they all end with <span class="math-container">$...00$</span> and thus the sum can never converge to <span class="math-container">$...01$</span>. Nor can the sum converge to anything else since the <span class="math-container">$3^2$</span> place is <span class="math-container">$1$</span> in every term. The latter gives a hint of where the breakdown occurs: the terms of the geometric series do not tend to zero because in <span class="math-container">$3$</span>-adics the common ratio actually gives <span class="math-container">$|1/10|=1$</span>, not <span class="math-container">$|1/10|&lt;1$</span>.</p>
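<p>A short Python sketch of the valuation computation behind this, assuming the usual definition $|x|_p = p^{-v_p(x)}$: the common ratio $1/10$ has $3$-adic norm $1$, and every term $9/10^k$ has the constant norm $1/9$, so the terms cannot tend to $0$.</p>

```python
def v_p(n, p):
    """Exponent of the prime p in the nonzero integer n."""
    n = abs(n)
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def v_p_rat(num, den, p):
    """p-adic valuation of the rational num/den (den nonzero)."""
    return v_p(num, p) - v_p(den, p)

# v_3(1/10) = 0, so |1/10|_3 = 3**0 = 1 (not < 1), and
# v_3(9/10**k) = 2 for every k, so |9/10**k|_3 = 1/9 for every term.
```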
3,623,924
<p>Trying to solve the following problem:</p> <p>Let <span class="math-container">$f(x)$</span> be a continuous real-valued function on <span class="math-container">$[0,3]$</span>. Given any <span class="math-container">$\varepsilon&gt;0$</span> prove there exists a polynomial, <span class="math-container">$p(x)$</span>, such that <span class="math-container">$\int_0^3|f(x)-p(x)|\,dx&lt;\varepsilon$</span></p> <p>This almost seems trivially true, which leads me to believe that I'm thinking about it incorrectly. If by Weierstrass theorem we know there exists a sequence of polynomials <span class="math-container">$P_n(x)$</span> in <span class="math-container">$[0,3]$</span> such that <span class="math-container">$\lim_{n \to \infty} P_n(x)=f(x)$</span>, then if we set <span class="math-container">$p(x)=\lim_{n \to \infty} P_n(x)=f(x)$</span>, then <span class="math-container">$|f(x)-p(x)|=|f(x)-f(x)|=0$</span> and therefore it is obviously true that <span class="math-container">$\int_0^3|f(x)-p(x)|\,dx&lt;\varepsilon$</span>. I'm almost certain this is not correct, so what am I doing wrong?</p>
Sebathon
482,453
<p>Your problem is that <span class="math-container">$p=\lim P_{n}(x)$</span> may not be a polynomial. I understand your idea, and here is a fixed version: by Weierstrass's theorem, there exists a sequence of polynomials such that <span class="math-container">$P_{n} \to f$</span> uniformly. So, given <span class="math-container">$\varepsilon&gt;0$</span>, there exists <span class="math-container">$n_{0}$</span> such that <span class="math-container">$|f(x)-P_{n_{0}}(x)|&lt;\frac{\varepsilon}{6}$</span> for all <span class="math-container">$x \in [0,3]$</span> (notice that this is equivalent to saying that <span class="math-container">$\sup_{x \in [0,3]} |f(x)-P_{n_{0}}|&lt;\frac{\varepsilon}{6}$</span>). So, <span class="math-container">$$\int_{0}^{3}|f(x)-P_{n_{0}}(x)|dx \leq \int_{0}^{3} \frac{\varepsilon}{6}dx=\frac{\varepsilon}{2}&lt;\varepsilon.$$</span></p>
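<p>As a concrete illustration of the Weierstrass step, here is a Python sketch using Bernstein polynomials on $[0,1]$ (for $[0,3]$ one would rescale via $x \mapsto x/3$). For $f(t)=t^2$ the approximation is known in closed form, $B_n f(t) = t^2 + t(1-t)/n$, so the uniform error is at most $1/(4n)$ and is easy to check:</p>

```python
from math import comb

def bernstein(f, n, x):
    """Degree-n Bernstein polynomial of f on [0, 1], evaluated at x."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda t: t * t
n = 100
# Sample the error on a fine grid; for f(t) = t^2 it peaks at t = 1/2
# with value 1/(4n) = 0.0025.
err = max(abs(bernstein(f, n, i / 200) - f(i / 200)) for i in range(201))
```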
1,041,177
<p>Prove that if $p$ is an odd prime and $k$ is an integer such that $1≤k≤p-1$, then the binomial coefficient satisfies</p> <p>$$\displaystyle \binom{p-1}{k}\equiv (-1)^k \mod p$$</p> <p>This exercise was on a test and I could not do it!</p>
André Nicolas
6,312
<p>Let $a=\binom{p-1}{k}$. Then $$a k!=(p-1)(p-2)(p-3)\cdots (p-k).$$ The $i$-th term on the right-hand side is congruent to $-i$ modulo $p$. Thus $$ak!\equiv (-1)^k k!\pmod{p}.$$ Now since $k!$ is not divisible by $p$ we can cancel.</p>
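<p>A quick numerical confirmation of the congruence in Python, checked for a couple of odd primes:</p>

```python
from math import comb

# comb(p-1, k) ≡ (-1)^k (mod p) should hold for every 1 <= k <= p-1.
results = {
    p: all((comb(p - 1, k) - (-1) ** k) % p == 0 for k in range(1, p))
    for p in (13, 101)
}
```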
1,041,177
lab bhattacharjee
33,337
<p>$$\binom{p-1}k=\frac{(p-1)\cdots(p-k)}{k!}=\prod_{r=1}^k\frac{p-r}r$$</p> <p>Now $p-r\equiv-r\pmod p\implies\dfrac{p-r}r\equiv-1$</p>
923,000
<p>I am confused because I have seen implies and equivalent used interchangeably. For instance, I've seen </p> <p>$$x-y=0 \implies x=y$$</p> <p>And I've also seen</p> <p>$$x-y=0 \Longleftrightarrow x=y$$</p> <p>Are both of these statements correct? Which one am I supposed to use?</p> <p>I know that an implication is true if the first statement is false, or if both are true. And an equivalence is only true if both are true or both are false. So is using them interchangeably okay for statements like this?</p>
Tara Nanda
754,648
<p>I have found the three volumes of Garling's <em>A Course in Mathematical Analysis</em>, published by Cambridge University Press, excellent. The author has several years of teaching experience and has thought through the subject in great depth. Vol. I: ISBN 978-1-107-03202-6; Vol. II: ISBN 978-1-107-03203-3; Vol. III: ISBN 978-1-107-03204-0.</p>
1,001,839
<p>$$\frac{\pi x y^2}{4}$$</p> <p>Is this function continuous? I really haven't worked with continuity of multivariable functions before, so I am a little stumped. How would one answer such a question? </p> <p>I'm reading a bit ahead of my level, and I'm seeing all these epsilon-delta things... is that what I am supposed to use? It makes very little sense to me so far.... </p>
Vladimir Vargas
187,578
<p>Continuity in $\mathbb R^2$ of a function is defined as follows:</p> <p>Let $X$ be an open subset of $\mathbb R^2$ and $f$ a real valued function from $X \subseteq \mathbb R^2 \rightarrow \mathbb R$. We say that $f$ is continuous over $X$ if $\forall (x_0,y_0)\in X\wedge\forall\varepsilon&gt;0\;\exists\,\delta &gt;0$ such that:</p> <p>$$\sqrt{(x_0-x)^2+(y_0-y)^2}&lt;\delta \Rightarrow |f(x,y)-f(x_0,y_0)|&lt;\varepsilon.$$</p> <p>Is the function $f(x,y)=\dfrac{\pi}{4}$ continuous?</p> <p>Let $\delta \in \mathbb R^+$, then $\sqrt{(x_0-x)^2+(y_0-y)^2}&lt;\delta \Rightarrow |f(x,y)-f(x_0,y_0)| = 0&lt;\varepsilon$.</p> <p>Is the function $f(x,y)=x$ continuous?</p> <p>$|x-x_0|=|x_0-x|=\sqrt{(x_0-x)^2}\leq\sqrt{(x_0-x)^2+(y_0-y)^2}&lt;\delta=\varepsilon$. From which it can be seen that if $\sqrt{(x_0-x)^2+(y_0-y)^2}&lt;\delta=\epsilon$ then $|f(x,y)-f(x_0,y_0)|&lt;\varepsilon$.</p> <p>Do the same with $y$. Recall that the product of continuous functions is continuous as well.</p>
2,832,311
<p>Suppose I draw 10 tickets at random with replacement from a box of tickets, each of which is labeled with a number. The average of the numbers on the tickets is 1, and the SD of the numbers on the tickets is 1. Suppose I repeat this over and over, drawing 10 tickets at a time. Each time, I calculate the sum of the numbers on the 10 tickets I draw. Consider the list of values of the sample sum, one for each sample I draw. This list gets an additional entry every time I draw 10 tickets.</p> <p>i) As the number of repetitions grows, the average of the list of sums is increasingly likely to be between 9.9 and 10.1.</p> <p>ii) As the number of repetitions grows, the histogram of the list of sums is likely to be approximated better and better by a normal curve (after converting the list to standard units).</p> <p>The answer given is that (i) is correct and (ii) is not correct.</p> <p>I assumed the reason why (i) is correct is that as the number of repetitions increases, the average of the list of sums converges to the expected value of the sample sum.</p> <p>I can't figure out why (ii) is incorrect. Based on the definitions for the CLT: </p> <ul> <li>The central limit theorem applies to sums of draws.</li> <li>The number of draws should be reasonably large.</li> <li>The more lopsided the values are, the more draws are needed for a reasonable approximation (compare the approximations of rolling 5 in roulette to flipping a fair coin).</li> <li>It is another type of convergence: <strong>as the number of draws grows, the normal approximation gets better.</strong></li> </ul> <p>Aside from the fact that this case does seem to satisfy the above definitions, doesn't the last point confirm what (ii) is suggesting to be true?</p>
Mikhail Katz
72,694
<p>To understand the definition and be able to work with it properly it may be helpful to use an equivalent definition that has one variable less. Namely, a sequence $(a_n)$ converges to $a$ if for every $\epsilon&gt;0$ there is an $N\in \mathbb N$ such that all the terms $$ a_N, a_{N+1}, a_{N+2}, \ldots $$ are within $\epsilon$ of $a$. Thus $\frac{1}{n}$ coverges to $0$ because for each $\epsilon&gt;0$, all the terms starting at rank $\left\lceil\frac{1}{\epsilon}\right\rceil$ will be within $\epsilon$ of $0$.</p>
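<p>The worked example can be checked mechanically; a Python sketch, where the cutoff rank $N=\left\lceil\frac{1}{\epsilon}\right\rceil$ is the one from the text and "within $\epsilon$" is read as $\le\epsilon$:</p>

```python
from math import ceil

def tail_within(eps, check=5000):
    """Verify that a_n = 1/n is within eps of 0 from rank N = ceil(1/eps) on,
    for the next `check` terms (a finite spot check, not a proof)."""
    N = ceil(1 / eps)
    return all(abs(1 / n - 0) <= eps for n in range(N, N + check))
```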
149,049
<p>Suppose you have a list of intervals (or tuples), such as:</p> <pre><code>intervals = {{3,7}, {17,43}, {64,70}}; </code></pre> <p>And you wanted to know the intervals of all numbers not included above, e.g.:</p> <pre><code>myRange = 100; numbersNotUsed[myRange, intervals] (*out: {{1,2},{8,16},{44,63},{71,100}}*) </code></pre> <p>What would be the most efficient way to approach this?</p> <p><em>Mathematica</em> currently supports <code>IntervalIntersection</code> but not <code>IntervalComplement</code>.</p>
eldo
14,254
<pre><code>int = {{3, 7}, {17, 43}, {64, 70}}; com = Partition[#, 2]&amp; @ {1, Sequence @@ Riffle[int[[All, 1]] - 1, int[[All, 2]] + 1], 100} </code></pre> <blockquote> <p>{{1, 2}, {8, 16}, {44, 63}, {71, 100}}</p> </blockquote> <pre><code>NumberLinePlot[{Interval @@ int, Interval @@ com}, PlotTheme -&gt; "Detailed"] </code></pre> <p><a href="https://i.stack.imgur.com/yOKPq.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/yOKPq.jpg" alt="enter image description here"></a></p>
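<p>For comparison, an equivalent integer-interval complement is easy to write directly in a general-purpose language; a Python sketch (the function name is illustrative):</p>

```python
def interval_complement(upper, intervals):
    """Closed integer intervals within [1, upper] not covered by `intervals`."""
    out, prev = [], 0
    for lo, hi in sorted(intervals):
        if lo > prev + 1:
            out.append([prev + 1, lo - 1])  # gap before this interval
        prev = max(prev, hi)
    if prev < upper:
        out.append([prev + 1, upper])       # trailing gap up to the range end
    return out

gaps = interval_complement(100, [[3, 7], [17, 43], [64, 70]])
# gaps -> [[1, 2], [8, 16], [44, 63], [71, 100]]
```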
3,541,897
<p>While searching for non-isomorphic groups of order <span class="math-container">$2002$</span> I just encountered something which I want to understand. Obviously I looked for abelian groups first and found <span class="math-container">$2002=2^2*503$</span> so we have the groups <span class="math-container">$$ \mathbb{Z}/2^2\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z},\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z} $$</span></p> <p>Now I want to understand why those two are not isomorphic. I know that for two groups <span class="math-container">$\mathbb{Z}/n\mathbb{Z} \times \mathbb{Z}/m\mathbb{Z} \cong\mathbb{Z}/(nm)\mathbb{Z}$</span> it has to hold that <span class="math-container">$\gcd(n,m)=1$</span>. But I don't understand how we can compare groups written as two products with groups written as three products as above; how does that work? And I think this goes in the same direction: how is it then at the same time that <span class="math-container">$$ \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z} \cong \mathbb{Z}/2012\mathbb{Z} \ncong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/503\mathbb{Z} \cong \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/1006\mathbb{Z} $$</span> because <span class="math-container">$\gcd (4,2012)\neq 1, \gcd (2,2)\neq 1, \gcd (503,1006)\neq 1 $</span>. I don't understand the difference from the first comparison.</p>
user729424
729,424
<p>First let's note that <span class="math-container">$2^2\cdot503=2012\ne2002$</span>.</p> <p><strong>Abelian groups of order 2002:</strong></p> <p>There is only one Abelian group of order <span class="math-container">$2002$</span>, namely</p> <p><span class="math-container">$$\Bbb{Z}_{2002}\cong\Bbb{Z}_{2}\times\Bbb{Z}_{7}\times\Bbb{Z}_{11}\times\Bbb{Z}_{13}$$</span></p> <p><strong>Abelian groups of order 2012:</strong></p> <p>Since <span class="math-container">$2012=2^2\cdot503$</span> it might initially seem like there are four possibilities:</p> <p><span class="math-container">$$\Bbb{Z}_{2}\times\Bbb{Z}_{2}\times\Bbb{Z}_{503}$$</span> <span class="math-container">$$\Bbb{Z}_{4}\times\Bbb{Z}_{503}$$</span> <span class="math-container">$$\Bbb{Z}_{2}\times\Bbb{Z}_{1006}$$</span> <span class="math-container">$$\Bbb{Z}_{2012}$$</span></p> <p>But using the fact that</p> <p><span class="math-container">$$\Bbb{Z}_n\times\Bbb{Z}_m\cong\Bbb{Z}_{nm}\quad\text{iff}\quad GCD(n,m)=1.$$</span></p> <p>we get that </p> <p><span class="math-container">$$\Bbb{Z}_{2}\times\Bbb{Z}_{2}\times\Bbb{Z}_{503}\cong\Bbb{Z}_{2}\times\Bbb{Z}_{1006}$$</span></p> <p>and</p> <p><span class="math-container">$$\Bbb{Z}_{4}\times\Bbb{Z}_{503}\cong\Bbb{Z}_{2012}$$</span></p> <p>and </p> <p><span class="math-container">$$\Bbb{Z}_{2}\times\Bbb{Z}_{1006}\ncong\Bbb{Z}_{2012}$$</span></p> <p>Hence there are exactly two non-isomorphic Abelian groups of order <span class="math-container">$2012$</span>.</p>
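One concrete way to see the non-isomorphism is to compare element orders: <span class="math-container">$\Bbb{Z}_{4}\times\Bbb{Z}_{503}$</span> contains an element of order <span class="math-container">$2012$</span>, while every element of <span class="math-container">$\Bbb{Z}_{2}\times\Bbb{Z}_{2}\times\Bbb{Z}_{503}$</span> has order at most <span class="math-container">$1006$</span>. A quick illustrative check in Python (the function name is mine):

```python
from math import lcm

def max_element_order(factors):
    """Largest element order in a direct product of cyclic groups Z_n1 x ... x Z_nk.

    The order of (a1, ..., ak) is the lcm of the component orders, so the
    maximum over the whole group is lcm(n1, ..., nk).
    """
    return lcm(*factors)

print(max_element_order([4, 503]))      # 2012 -> this group is cyclic
print(max_element_order([2, 2, 503]))   # 1006 -> not cyclic, hence not isomorphic
```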
844,700
<p>I am looking for a calculator which can evaluate functions like $f(x) = x+2$ at $x=a$, etc., but I have been unable to find one. Can you recommend any online calculator?</p>
Ned
521,624
<p>You can use the Calcpad online calculator for free: Just go to <a href="http://calcpad.net/Calculator" rel="nofollow noreferrer">http://calcpad.net/Calculator</a> Try to type the following example into the 'Script' box:</p> <pre><code>f(x) = x + 2 a = 4 f(a) a = 6 f(a) $Plot{f(x) @ x = 0 : a} </code></pre> <p>Then press 'Enter' or refresh the output to see the results</p>
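If an online tool is not essential, the same evaluations take only a few lines in any programming language; for example, in Python (shown purely as an alternative to the calculator):

```python
def f(x):
    # the function from the question, f(x) = x + 2
    return x + 2

a = 4
print(f(a))  # 6
a = 6
print(f(a))  # 8
```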
510,151
<p>Prove by induction that $2k(k+1) + 1 &lt; 2^{k+1} - 1$ for $ k &gt; 4$. Can someone please help me with this?</p> <p>I reformulated like this:</p> <p>$ 2k(k+1) + 1 &lt; 2^{k+1} - 1 $</p> <p>$ 2k^2+2k+2&lt;2^{k+1}$</p> <p>For the inductive step, replacing $k$ by $k+1$, I need</p> <p>$ 2^{k+2} -1 &gt; 2(k+1)(k+2) + 1 $</p> <p>$2^{k+2} &gt; 2(k+1)(k+2) + 2$</p> <p>$ 2^{k+2} &gt; 2k^2+2k+2 +4k+4$</p> <p>I don't know how to proceed further.</p> <p>Please help me.</p>
Julián Aguirre
4,791
<p>Because in the discrete metric the only converging sequences are the constant sequences from some point on.</p> <p>In the case of $\{1/n\}\subset\mathbb{R}$ with the discrete metric $d$ we have $$ d(1/n,0)=1. $$</p>
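The point can be checked mechanically: under the discrete metric the distance from $1/n$ to $0$ never shrinks, so the sequence cannot converge to $0$. A small Python illustration (the function name is mine):

```python
from fractions import Fraction

def discrete_metric(x, y):
    """The discrete metric: 0 if the points coincide, 1 otherwise."""
    return 0 if x == y else 1

# d(1/n, 0) stays 1 for every n, so 1/n does not converge to 0 in this metric.
distances = [discrete_metric(Fraction(1, n), 0) for n in range(1, 100)]
print(set(distances))  # {1}
```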
4,369,232
<p>I have the following problem:</p> <blockquote> <p>Let <span class="math-container">$\{(M_i,\tau_i)\}_{i\in I}$</span> be nonempty topological spaces where <span class="math-container">$I$</span> is arbitrary but non empty. Let <span class="math-container">$M=\prod_{i\in I} M_i$</span>. Let <span class="math-container">$F$</span> be a filter on <span class="math-container">$M$</span> and denote by</p> <p><span class="math-container">$$F_i=(pr_i)_*F=\{B\subseteq M_i: \exists\,\,A\in F\,\,s.t.\,\,pr_i(A)\subseteq B\}$$</span> the corresponding image filter on each component <span class="math-container">$M_i$</span>. Show that <span class="math-container">$F$</span> converges to <span class="math-container">$p\in M$</span> iff <span class="math-container">$F_i$</span> converges to <span class="math-container">$p_i=pr_i(p)$</span> for all <span class="math-container">$i\in I$</span>.</p> </blockquote> <p>The <span class="math-container">$\Rightarrow$</span> implication worked perfectly by using a theorem from the lecture. But I somehow struggled a bit with the other implication. In the solution they did something with subbases, but I didn't understand it, so I wanted to do it differently. My idea was the following:</p> <p><span class="math-container">$\Leftarrow$</span> Let us assume that <span class="math-container">$F_i$</span> converges to <span class="math-container">$p_i$</span>. We need to show that <span class="math-container">$F$</span> converges to <span class="math-container">$p$</span>, i.e. we need to show that <span class="math-container">$\mathfrak{U}(p)\subset F$</span> where <span class="math-container">$\mathfrak{U}(p)$</span> is the neighbourhood filter of <span class="math-container">$p$</span>. Now let us take any basic open set <span class="math-container">$B$</span> in <span class="math-container">$M$</span>, 
<span class="math-container">$$B=\bigcap_{i\in I} pr_i^{-1}(O_i)$$</span> where <span class="math-container">$I$</span> is finite and <span class="math-container">$p_i\in O_i$</span> are open sets in <span class="math-container">$M_i$</span>. We see that it is enough to show that <span class="math-container">$pr_i^{-1}(O_i)\in F$</span>. To do so let us use that <span class="math-container">$F_i\rightarrow p_i\Leftrightarrow \mathfrak{U}(p_i)\subset F_i$</span>. But then <span class="math-container">$O_i\in (pr_i)_*F$</span>, which implies that there is some <span class="math-container">$A_i\in F$</span> such that <span class="math-container">$pr_i(A_i)\subset O_i$</span>. This implies that <span class="math-container">$A_i\subset pr_i^{-1}(O_i)$</span>, and hence <span class="math-container">$pr_i^{-1}(O_i)\in F$</span> since <span class="math-container">$F$</span> is a filter. Then, since the intersection is finite, also <span class="math-container">$B\in F$</span>.</p> <p>Now what I don't see is why then <span class="math-container">$\mathfrak{U}(p)\subset F$</span>.</p> <p>I'm really not sure if this works, so it would be nice if someone could take a look at it. Thanks a lot.</p>
Henno Brandsma
4,280
<p>Let <span class="math-container">$O$</span> be a subbasic element in the product i.e. <span class="math-container">$O=\pi_i^{-1}[U]$</span> for some <span class="math-container">$i \in I$</span> and some open <span class="math-container">$U \subseteq X_i$</span> (I use <span class="math-container">$\pi_i$</span> for the projections), so that <span class="math-container">$p \in O$</span> i.e. <span class="math-container">$p_i \in U$</span>.</p> <p>As by assumption <span class="math-container">$(\pi_i)_\ast(\mathcal{F}) \to p_i$</span> we know that <span class="math-container">$U \in (\pi_i)_\ast(\mathcal{F})$</span> so for some <span class="math-container">$F_i \in \mathcal{F}$</span>, <span class="math-container">$\pi_i[F_i] \subseteq U$</span>, which implies <span class="math-container">$F_i \subseteq O$</span> and hence <span class="math-container">$O \in \mathcal{F}$</span> as <span class="math-container">$\mathcal{F}$</span> is a filter.</p> <p>So all subbasic elements in the product that contain <span class="math-container">$p$</span> are in the filter, and so all <em>basic</em> elements (which are the <strong>finite</strong> intersections of such subbasic sets) also are, and hence <span class="math-container">$\mathcal{F}\to p$</span>, as required.</p>
104,335
<p>I am implementing code that works correctly but takes too much time, and I don't see how to optimize it to run faster. Here is my code:</p> <pre><code>data=RandomInteger[{1,400},{5000,2}]; c=10; r=60; pts=c + r {Cos[#], Sin[#]} &amp; /@ Range[0, 2 π, 2 π/16]; newCoord = Table[(# - pts[[i]]) &amp; /@ data, {i, 1, Length@pts}]; PolarCoords = Table[ToPolarCoordinates[#] &amp; /@ newCoord[[i]] /. {x_, y_} /; y &lt; 0 -&gt; {x, y + 2 π}, {i, 1, Length@pts}]; // AbsoluteTiming (* {8.59775, Null} *) </code></pre> <p>I am running my code on Mac OS X with a Core i7 processor and 8 GB of RAM.</p>
Diogo
36,260
<p>If you use floats instead of integers, you can reduce the computing time.</p> <pre><code> data = RandomInteger[{1, 400}, {5000, 2}]; c = 10.; r = 60.; pts = c + r {Cos[#], Sin[#]} &amp; /@ Range[0., 2. π, 2. π/16.]; newCoord = Table[(# - pts[[i]]) &amp; /@ data, {i, 1, Length@pts}]; PolarCoords = Table[ToPolarCoordinates[#] &amp; /@ newCoord[[i]] /. {x_, y_} /; y &lt; 0 -&gt; {x, y + 2 π}, {i, 1, Length@pts}]; // AbsoluteTiming </code></pre>
2,007,403
<p>Determine the convergence or divergence of </p> <p>$$\sqrt[n]{n!}$$</p> <p>I was trying to use the property $\lim_{n\to\infty}\sqrt[n]{n}=1$, so maybe I can write this:</p> <p>$\lim_{n\to\infty}\sqrt[n]{n!}=\lim_{n\to\infty}\sqrt[n]{n \ \times(n-1)\times(n-2)\times(n-3)\cdots2\ \times \ 1}=\lim_{n\to\infty}\sqrt[n]n \ \times\sqrt[n]{(n-1)}\times\sqrt[n]{(n-2)}\times\sqrt[n]{(n-3)}\cdots\sqrt[n]{2}\ \times \sqrt[n] 1= 1\times1\times1\times1\times1\times\cdots1\times1\times1=1$</p> <p>Am I right?</p>
zhw.
228,045
<p>Hint: If $n$ is even, then $n!&gt; (n/2)^{n/2}.$</p>
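Numerically the divergence suggested by the hint is easy to see; a quick sketch using the log-gamma function to avoid overflow (not part of the proof, just an illustration):

```python
from math import exp, lgamma

def nth_root_of_factorial(n):
    """Compute (n!)**(1/n) as exp(log(n!)/n), using lgamma(n+1) = log(n!)."""
    return exp(lgamma(n + 1) / n)

for n in (10, 100, 1000, 10000):
    print(n, nth_root_of_factorial(n))
# The values keep growing (roughly like n/e by Stirling), so the sequence diverges.
```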
34,487
<p>A few years ago Lance Fortnow listed his favorite theorems in complexity theory: <a href="http://blog.computationalcomplexity.org/2005/12/favorite-theorems-first-decade-recap.html" rel="nofollow">(1965-1974)</a> <a href="http://blog.computationalcomplexity.org/2006/12/favorite-theorems-second-decade-recap.html" rel="nofollow">(1975-1984)</a> <a href="http://eccc.hpi-web.de/eccc-reports/1994/TR94-021/index.html" rel="nofollow">(1985-1994)</a> <a href="http://blog.computationalcomplexity.org/2004/12/favorite-theorems-recap.html" rel="nofollow">(1995-2004)</a> But he restricted himself (check the third one) and his last post is now 6 years old. An updated and more comprehensive list can be helpful.</p> <blockquote> <p>What are the most important results (and papers) in complexity theory that every one should know? What are your favorites?</p> </blockquote>
Stefan Geschke
7,743
<p>My favourite results are (1) the existence of NP-complete problems (Cook), (2) the Baker-Gill-Solovay theorem that whether P=NP holds relative to an oracle depends on the oracle, and (3) Fagin's characterization of NP in terms of second order logic.</p> <p>I am not so much interested in the large number of proofs that show that a certain problem is NP-complete, but the fact that there is some problem that is NP-complete is remarkable and important. And Cook's SAT is actually natural. (2) shows that several approaches will not work when one wants to settle P versus NP. (3) gives a much more natural definition of the class NP. Fagin's formulation (NP is the class of graph properties (of finite graphs) that can be expressed with a formula that has an n-ary second order existential quantifier in front, followed by a first order formula) indicates that NP vs co-NP is a very fundamental question as well (can second order existential quantification be replaced by second order universal quantification?). </p>
1,116,009
<p>Suppose $\alpha, a, b$ are integers and $b\neq-1$. Show that if $\alpha$ satisfies the equation $x^2+ax+b+1=0$, then $a^2+b^2$ is composite.</p> <p>I am starting with this study course of polynomials and finding it very difficult to understand. Please help me with the question. Thanks in advance! </p>
mollyerin
29,809
<p>I wrote a lot, due to boredom and the general hope that some of it is helpful. Much will probably be familiar to you.</p> <p>First: In order to work with tensors or to do calculus on manifolds at all, it's very important to start making the distinction between <em>vectors</em> and <em>covectors</em> (or "dual vectors" or whatever). Briefly, if $V$ is a (finite dimensional, real) vector space, there's an associated (real) vector space $V^*$ which is the set of linear maps $V \to \mathbb{R}$. $V^*$ is called the <em>dual</em> to $V$, and its elements are called <em>covectors</em> or <em>dual vectors</em>. </p> <p>When $V = \mathbb{R}^n$ as in college linear algebra courses, we often make the identification of $V$ with column vectors, and $V^*$ with row vectors. We have an easy way of taking a column vector and making a row vector or vice versa (taking the transpose), so often we don't distinguish between the two and just call everything a "vector". But <em>on a manifold</em>, there are many different possible choices of coordinates making the tangent spaces look like $\mathbb{R}^n$ in different ways, which means that there's no "natural" way to take a "transpose" of a tangent vector. This means that it's important to think of tangent vectors and tangent covectors as very different things; which means back in linear algebra world, we need to differentiate between $V$ and $V^*$.</p> <p>Said another way, there's two different kinds of "one-tensors" on a vector space $V$: what's called a $(0,1)$-tensor, which is an ordinary vector, an element of $V$, and what's called a $(1,0)$-tensor, which is a <em>covector</em>, an element of $V^*$. 
It's the right intuition to think of $(0,1)$-tensors as column vectors and $(1,0)$-tensors as row vectors, but you should remember that "transpose" isn't meaningful in this generality (and won't be when you start talking about tangent spaces to smooth manifolds).</p> <p>So in general it's confusing to talk about "$n$-tensors"; instead the correct notion is that of a $(k,l)$-tensor, which is a multilinear map $$ T : V \times \cdots \times V \times V^* \times \cdots \times V^* \to \mathbb{R}, $$ where there are $k$ copies of $V$ and $l$ copies of $V^*$.</p> <p>A matrix $A$ is a $(1,1)$-tensor, because it eats a row vector (an element of $V^*$) and a column vector (an element of $V$) and produces a number. Differential geometers will write the element of a matrix $A$ in the $i$-th row and $j$-th column as $A^j_i$; the idea of writing the $i$ as a subscript, and the $j$ as a superscript, is to explicitly indicate that the matrix is a $(1,1)$-tensor.</p> <p>A good example of a $(2,0)$-tensor on a vector space $V$ is an inner product, since an inner product is suppose to eat two <em>vectors</em> and tell you something about the angle between them and their lengths and so forth. It's misleading to picture an inner product as a matrix -- you want to say that the inner product given by the matrix $A$ is the map taking the vectors $u, v$ to $u^T A v$, but without coordinates we can't make sense of $u^T$, which means that we won't be able to make sense of $u^T$ in a coherent way when we're on the tangent space to a manifold (where there are many choices of coordinates).</p> <p>So if you wanted to be able to take tangent vectors (at the same point) to your manifold $M$ and do inner-product things to them like measure the angle between them, you'd want for each tangent space $T_pM$ a $(2, 0)$-tensor on that space -- known as a $(2,0)$-tensor <em>field</em> on $M$ -- with some nice properties that make it an inner product and "vary smoothly". 
A $(1,1)$-tensor field just wouldn't do the job! (But I digress... I don't want to be discussing tensor fields.)</p> <p><strong>This is all a preamble to saying that I think the intuition "a $3$-tensor is like a cube of numbers" is not a particularly helpful way to think about things,</strong> because (1) you can't write a cube of numbers down on paper very easily and (2) <strong>it obscures that there's a fundamental difference between "row-like" things and "column-like" things,</strong> and that each of the "axes" of this "cube of numbers" should really be marked "row-like" or "column-like" to determine whether it acts on column vectors or row vectors. (There really are only these <em>two</em> kinds of vectors we're considering in differential geometry -- no weird "depth vectors" or whatever.)</p> <p>So <em>if</em> you want to think about tensors in coordinates, with numbers, like you do when you call a $(1,1)$-tensor a matrix, I think the best you can do is to think of a $(k,l)$ tensor $T$ as a list of numbers $$ T^{j_1, \dots, j_k}_{i_1, \dots, i_l} $$ where the $i$'s and $j$'s each run from $1$ to $n$ (where $n = \dim V$). (Just like with writing down the matrix of a linear transformation, actually writing a tensor as such a list of numbers requires first choosing a basis for $V$.) The top indicies run across generalized "rows" and the bottom down generalized "columns", but I'm not sure this really means anything or is a useful way to think. I don't have a coherent system for writing such huge lists/arrays/whatever down on paper. Relatedly, it's also the case that I almost never do write such things down on paper.</p> <p>How does this list of numbers become a linear transformation? It's just like matricies. Fixing a basis on $V$ (and hence also on $V^*$), we can write elements of $V$ as $(0,1)$-tensors, which are lists of numbers $v_i$, for $1 \leq i \leq n$, and elements of $V^*$ are $(1,0)$-tensors, which are lists of numbers $w^j$. 
So say we have a $(3,2)$-tensor $T^{j_1, j_2, j_3}_{i_1, i_2}$ and three vectors $u_i, v_i, w_i$ and two covectors $a^i, b^i$; then the tensor $T$ evaluated at $(u, v, w, a, b)$ is the number $$ \sum_{1 \leq j_1, j_2, j_3, i_1, i_2 \leq n} T^{j_1, j_2, j_3}_{i_1, i_2} u_{j_1} v_{j_2} w_{j_3} a^{i_1} b^{i_2} , $$ which is what it looks like -- a mess. Luckily, we don't have to think about tensors this way very often.</p>
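Incidentally, the quintuple sum at the end — a $(3,2)$-tensor contracted against three vectors and two covectors — is exactly the kind of contraction `numpy.einsum` computes. A small sketch with random data (purely illustrative, not part of the answer above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
T = rng.random((n, n, n, n, n))   # indices j1, j2, j3, i1, i2
u, v, w = rng.random((3, n))      # three "vectors"
a, b = rng.random((2, n))         # two "covectors"

# The quintuple sum from the text, written as one einsum contraction.
value = np.einsum('jklmn,j,k,l,m,n->', T, u, v, w, a, b)

# Sanity check against an explicit loop over all index combinations.
brute = sum(T[j, k, l, m, q] * u[j] * v[k] * w[l] * a[m] * b[q]
            for j in range(n) for k in range(n) for l in range(n)
            for m in range(n) for q in range(n))
print(np.isclose(value, brute))  # True
```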
1,783,200
<p>Prove or disprove the following statement:</p> <p><strong>Statement.</strong> <em>Continuous for each variables, when other variables are fixed, implies continuous?</em> More clearly, prove or disprove the following problem:</p> <p>Let $\displaystyle f:\left[ a,b \right]\times \left[ c,d \right]\to \mathbb{R}$ for which:</p> <ul> <li>For every $\displaystyle {{x}_{0}}\in \left[ a,b \right]$, $\displaystyle f\left( {{x}_{0}},y \right)$ is continuous on $\displaystyle \left[ c,d \right]$ respect to variable $ \displaystyle y$.</li> <li>For every $ \displaystyle {{y}_{0}}\in \left[ c,d \right]$, $ \displaystyle f\left( x,{{y}_{0}} \right)$ is continuous on $ \displaystyle \left[ a,b \right]$ respect to variable $\displaystyle x$.</li> </ul> <p>Then $\displaystyle f\left( x,y \right)$ is continuous on $ \displaystyle \left[ a,b \right]\times \left[ c,d \right]$. ?</p> <p><a href="https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/" rel="nofollow">https://hongnguyenquanba.wordpress.com/2016/05/12/problem-6/</a></p>
Robert Israel
8,508
<p>Standard example: $$ f(x,y) = \dfrac{xy}{x^2 + y^2}$$ with $f(0,0) = 0$.</p>
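The failure of joint continuity at the origin can be seen numerically: along either axis $f$ is identically $0$ (so each partial function is continuous), but along the diagonal $y=x$ it is constantly $1/2$. A quick illustrative check:

```python
def f(x, y):
    # the standard counterexample, with f(0,0) = 0
    return 0.0 if x == 0 and y == 0 else x * y / (x**2 + y**2)

# Along the axes f is 0, so x -> f(x, 0) and y -> f(0, y) are continuous at 0 ...
print([f(t, 0.0) for t in (0.1, 0.01, 0.001)])  # [0.0, 0.0, 0.0]
# ... but along the diagonal f is constantly 1/2, so f has no limit at (0, 0).
print([f(t, t) for t in (0.1, 0.01, 0.001)])    # [0.5, 0.5, 0.5]
```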
69,476
<p>Hello everybody!</p> <p>I was reading a book on geometry which taught me that one could compute the volume of a simplex through the determinant of a matrix, and I thought (I'm becoming a worse computer scientist each day) that if the result is exact this may not be the computationally fastest way possible to do it.</p> <p>Hence, the following problem: if you are given a polynomial in one (or many) variables $\alpha_1 x^1 + \dots + \alpha_n x^n$, what is the cheapest way (in terms of operations) to evaluate it?</p> <p>Indeed, if you know that your polynomial is $(x-1)^{1024}$, you can do much, much better than computing all the different powers of $x$ and multiplying them by their corresponding factor.</p> <p>However, this is not a problem of factorization, as knowing that the polynomial is equal to $(x-1)^{1024} + (x-2)^{1023}$ is also much better than the naive evaluation.</p> <p>Of course, multiplication and addition all have different costs on a computer, but I would be quite glad to understand how to minimize the "total number of operations" (additions + multiplications) for a start! I had no idea how to look for the corresponding literature, and so I am asking for your help on this one :-)</p> <p>Thank you!</p> <p>Nathann</p> <p>P.S.: <em>I am actually looking for a way, given a polynomial, to obtain a sequence of addition/multiplication that would be optimal to evaluate it. This sequence would of course only work for <strong>THIS</strong> polynomial and no other. It may involve working for hours to find out the optimal sequence corresponding to this polynomial, so that it may be evaluated many times cheaply later on.</em></p>
Gab
15,326
<p>If you want to evaluate the polynomial at a lot of equidistant points, you can do "forward differencing"; here are 3 slides explaining the method: <a href="http://zach.in.tu-clausthal.de/teaching/info2_11/folien/evaluating%20a%20polynomial%20at%20equidistant%20points.pdf" rel="nofollow">http://zach.in.tu-clausthal.de/teaching/info2_11/folien/evaluating%20a%20polynomial%20at%20equidistant%20points.pdf</a> (they are in German, but I believe you'll still get it).</p>
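A minimal sketch of the forward-differencing idea in Python (names and the exact table layout are mine): after an $O(d^2)$ setup, each further value of a degree-$d$ polynomial at equidistant points costs only $d$ additions, with no multiplications.

```python
def forward_difference_eval(coeffs, x0, h, count):
    """Evaluate the polynomial sum(coeffs[k] * x**k) at x0, x0+h, x0+2h, ...

    Setup builds the forward-difference table from d+1 direct evaluations;
    after that each new value is d additions.
    """
    d = len(coeffs) - 1
    poly = lambda x: sum(c * x**k for k, c in enumerate(coeffs))
    # In-place construction of [f, Δf, Δ²f, ..., Δᵈf] at x0.
    table = [poly(x0 + i * h) for i in range(d + 1)]
    for level in range(1, d + 1):
        for i in range(d, level - 1, -1):
            table[i] -= table[i - 1]
    out = []
    for _ in range(count):
        out.append(table[0])
        for i in range(d):          # d additions advance the whole table one step
            table[i] += table[i + 1]
    return out

# p(x) = 2x^3 - x + 5 at x = 0, 1, 2, 3, 4
print(forward_difference_eval([5, -1, 0, 2], 0.0, 1.0, 5))
# [5.0, 6.0, 19.0, 56.0, 129.0]
```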
43,886
<p>Has work been done on looking at what happens to the exponents of the prime factorization of a number $n$ as compared to $n+1$? I am looking for published material or otherwise. For example, let $n=9=2^0\cdot{}3^2$, then,</p> <p>$$ 9 \;\xrightarrow{+1}\; 10 $$</p> <p>$$ 2^0\cdot{}3^2 \;\xrightarrow{+1}\; 2^1\cdot{}3^0\cdot{}5^1 $$</p> <p>or looking just at the exponents,</p> <p>$$ [0,2,0,0,...] \;\xrightarrow{+1}\; [1,0,1,0,...] $$</p> <p>I realize the canonical way of reaching the latter is generating the prime factorization of $n$ and of $n+1$ separately, but has there been any research into manipulating the exponents directly instead (short-cutting around the factorization)?</p> <p>For anyone who can't answer but still wants to see something interesting, check out <a href="http://scienceblogs.com/goodmath/2006/10/prime_number_pathology_fractra.php" rel="nofollow">FRACTRAN</a>.</p>
JavaMan
6,491
<p>I found this on MO which is relevant to the question:</p> <p><a href="https://mathoverflow.net/questions/55010/prime-factorization-of-n1">https://mathoverflow.net/questions/55010/prime-factorization-of-n1</a></p>
2,225,606
<p>Solution: The eigenvalues for $\begin{bmatrix}1.25 &amp; -.75 \\ -.75 &amp; 1.25\end{bmatrix}$ are $2$ and $0.5$. </p> <p>I'm confused on how it's not $1$ and $-1$. If we set up the characteristic matrix: $\begin{bmatrix}5/4 - \lambda &amp; -3/4 \\ -3/4 &amp; 5/4 - \lambda \end{bmatrix}$ </p> <p>$ad-bc=0$</p> <p>$25/16 - \lambda ^2 - 9/16 = 0$</p> <p>$16/16- \lambda ^2=0$</p> <p>$\lambda = 1, -1$</p>
mrnovice
416,020
<p>$$(\frac{5}{4}-\lambda)^2-\frac{9}{16}=0$$</p> <p>Assuming you got the above equation correctly, your expansion of the terms was incorrect.</p> <p>$$\frac{25}{16}-\frac{5}{2}\lambda+\lambda^2-\frac{9}{16}=0$$</p> <p>$$\lambda^2-\frac{5}{2}\lambda+1=0$$</p> <p>$$(\lambda-\frac{5}{4})^2+\frac{-25+16}{16}=0$$</p> <p>$$\lambda = \frac{5\pm 3}{4}\implies \lambda =\frac{1}{2}\quad\text{or}\quad\lambda=2$$</p>
122,728
<p>Suppose that I have a <a href="http://en.wikipedia.org/wiki/Symmetric_matrix" rel="nofollow noreferrer">symmetric</a> <a href="http://en.wikipedia.org/wiki/Toeplitz_matrix" rel="nofollow noreferrer">Toeplitz</a> <span class="math-container">$n\times n$</span> matrix</p> <p><span class="math-container">$$\mathbf{A}=\left[\begin{array}{cccc}a_1&amp;a_2&amp;\cdots&amp; a_n\\a_2&amp;a_1&amp;\cdots&amp;a_{n-1}\\\vdots&amp;\vdots&amp;\ddots&amp;\vdots\\a_n&amp;a_{n-1}&amp;\cdots&amp;a_1\end{array}\right]$$</span></p> <p>where <span class="math-container">$a_i \geq 0$</span>, and a diagonal matrix</p> <p><span class="math-container">$$\mathbf{B}=\left[\begin{array}{cccc}b_1&amp;0&amp;\cdots&amp; 0\\0&amp;b_2&amp;\cdots&amp;0\\\vdots&amp;\vdots&amp;\ddots&amp;\vdots\\0&amp;0&amp;\cdots&amp;b_n\end{array}\right]$$</span></p> <p>where <span class="math-container">$b_i = \frac{c}{\beta_i}$</span> for some constant <span class="math-container">$c&gt;0$</span> such that <span class="math-container">$\beta_i&gt;0$</span>. Let</p> <p><span class="math-container">$$\mathbf{M}=\mathbf{A}(\mathbf{A}+\mathbf{B})^{-1}\mathbf{A}$$</span></p> <p>Can one express a partial derivative <span class="math-container">$\partial_{\beta_i} \operatorname{Tr}[\mathbf{M}]$</span> in closed form, where <span class="math-container">$\operatorname{Tr}[\mathbf{M}]$</span> is the <a href="http://en.wikipedia.org/wiki/Matrix_trace" rel="nofollow noreferrer">trace</a> operator?</p>
john316
262,158
<p>Define some variables for convenience $$\eqalign{ P &amp;= {\rm Diag}(\beta) \cr B &amp;= cP^{-1} \cr b &amp;= {\rm diag}(B) \cr S &amp;= A+B \cr M &amp;= AS^{-1}A \cr }$$ all of which are symmetric matrices, except for $b$ which is a vector.</p> <p>Then the function and its differential can be expressed in terms of the Frobenius (:) product as $$\eqalign{ f &amp;= {\rm tr}(M) \cr &amp;= A^2 : S^{-1} \cr\cr df &amp;= A^2 : dS^{-1} \cr &amp;= -A^2 : S^{-1}\,dS\,S^{-1} \cr &amp;= -S^{-1}A^2S^{-1} : dS \cr &amp;= -S^{-1}A^2S^{-1} : dB \cr &amp;= -S^{-1}A^2S^{-1} : c\,dP^{-1} \cr &amp;= c\,S^{-1}A^2S^{-1} : P^{-1}\,dP\,P^{-1} \cr &amp;= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : dP \cr &amp;= c\,P^{-1}S^{-1}A^2S^{-1}P^{-1} : {\rm Diag}(d\beta) \cr &amp;= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big)^T d\beta \cr }$$ So the derivative is $$\eqalign{ \frac{\partial f}{\partial\beta} &amp;= {\rm diag}\big(c\,P^{-1}S^{-1}A^2S^{-1}P^{-1}\big) \cr &amp;= \frac{1}{c}\,{\rm diag}\big(BS^{-1}A^2S^{-1}B\big) \cr &amp;= \Big(\frac{b\circ b}{c}\Big)\circ{\rm diag}\big(S^{-1}A^2S^{-1}\big) \cr\cr }$$ which uses Hadamard ($\circ$) products in the final expression. This is the same as joriki's result, but with more details.</p>
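The closed form is easy to validate with a finite-difference check. A sketch with random data (variable names follow the derivation; I take $A$ positive semidefinite only so that $S$ is safely invertible — the formula itself does not need this):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.random((n, n))
A = X @ X.T                      # symmetric (PSD, so S below is invertible)
c = 2.0
beta = rng.random(n) + 0.5       # keep beta_i > 0

def f(beta):
    B = np.diag(c / beta)        # b_i = c / beta_i
    S = A + B
    return np.trace(A @ np.linalg.solve(S, A))   # Tr[A S^{-1} A]

# Closed form: grad = diag(c * P^{-1} S^{-1} A^2 S^{-1} P^{-1}), P = Diag(beta).
Pinv = np.diag(1.0 / beta)
Sinv = np.linalg.inv(A + np.diag(c / beta))
grad = np.diag(c * Pinv @ Sinv @ A @ A @ Sinv @ Pinv)

# Independent check via central finite differences in each coordinate.
eps = 1e-6
numeric = np.array([(f(beta + eps * e) - f(beta - eps * e)) / (2 * eps)
                    for e in np.eye(n)])
print(np.max(np.abs(grad - numeric)))  # tiny
```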
1,639,568
<p>Prove that $\dfrac{x+y}{2} \ge \sqrt{xy}$; this is the inequality I need, for all $x, y \ge 0$.</p> <p>I've tried: $x + y \ge 0$</p> <p>$$x + y \ge x$$</p> <p>$$ (x + y)^2 \ge 2xy$$</p> <p>$$\frac{(x + y)^2}{2} \ge xy$$</p> <p>But the closest I get is $\dfrac{x+y}{\sqrt{2}} \ge \sqrt{xy}$.</p> <p>Any ideas?</p>
L.F. Cavenaghi
248,387
<p>$$(x-y)^2 \ge 0$$ $$x^2 - 2xy + y^2 \ge 0 $$ $$x^2 + y^2 \ge 2xy $$ $$x^2 + 2xy + y^2 \ge 4xy $$ $$(x+y)^2 \ge 4xy $$</p>
1,014,303
<blockquote> <p>Is $$\sum^\infty_{n=4}\frac{3^{2n}}{(-10)^n}$$ Convergent or Divergent? Explain why.</p> </blockquote> <p>I know I can do: $$\sum^\infty_{n=4}\frac{9^{n}}{(-10)^n} \Rightarrow \sum^\infty_{n=4}\bigg(\frac{9}{-10}\bigg)^n$$ But I'm not sure where to go from here. The negative denominator is really throwing me off.</p>
Pedro
23,350
<p>We're assuming $R$ is a ring in which every prime ideal is f.g., so $I$ being non f.g. <em>should be non prime</em> (this is the "Observe $I$ is not prime" line). This guarantees the existence of such $J_1,J_2$ as in the proof in the post. The point is that, as proved, any such ideal <em>is prime</em>. Inevitably we reach a contradiction, namely that $I$ is both f.g. and non f.g., which means that the set of ideals which are not finitely generated must be empty all along, so $R$ is noetherian. Alternatively, we're proving that in any non noetherian ring not only there are non f.g. ideals, but there are always non f.g. prime ideals. I hope the confusion I caused is cleared out.</p>
3,573,811
<p>This is a theorem given by my professor from Artin Algebra:</p> <p>Suppose that a finite abelian group <span class="math-container">$V$</span> is a direct sum of cyclic groups of prime power orders <span class="math-container">$d_j=p_j^{r_j}$</span>. The integers <span class="math-container">$d_j$</span> are uniquely determined by the group <span class="math-container">$V$</span>. </p> <p>Proof: Let <span class="math-container">$p$</span> be one of those primes that appear in the direct sum decomposition of <span class="math-container">$V$</span>, and let <span class="math-container">$c_i$</span> denote the number of cyclic groups of order <span class="math-container">$p^i$</span> in the decomposition. The set of elements whose orders divide <span class="math-container">$p^i$</span> is a subgroup of <span class="math-container">$V$</span> whose order is a power of <span class="math-container">$p$</span>, say <span class="math-container">$p^{l_i}$</span>. Let <span class="math-container">$k$</span> be the largest index such that <span class="math-container">$c_k&gt;0$</span>. Then </p> <p><span class="math-container">$l_1=c_1+c_2+c_3+...+c_k$</span></p> <p><span class="math-container">$l_2=c_1+2c_2+2c_3+...+2c_k$</span></p> <p><span class="math-container">$l_3=c_1+2c_2+3c_3+...+3c_k$</span></p> <p><span class="math-container">$\vdots$</span></p> <p><span class="math-container">$l_k=c_1+2c_2+3c_3+...+kc_k$</span></p> <p>The exponents <span class="math-container">$l_i$</span> determine the integers <span class="math-container">$c_i$</span>.</p> <p>DONE</p> <p>The only thing I understand is that abelian groups can be decomposed as cyclic groups of prime power order. Unlike my other questions on this site, I have zero idea about this proof. I might not even understand the statement of the proof. Please help me understand this proof.</p> <p>Thanks, I get it now.</p>
Captain Lama
318,467
<p>This is the first time I see this argument, I find it rather amusing. Let's look at it in detail.</p> <p>Suppose <span class="math-container">$G$</span> is a direct sum of groups of the type <span class="math-container">$\mathbb{Z}/p^i\mathbb{Z}$</span>. Write it as <span class="math-container">$$G = \mathbb{Z}/p\mathbb{Z} \oplus \dots \oplus \mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p^2\mathbb{Z}\oplus \dots \oplus \mathbb{Z}/p^2\mathbb{Z}\oplus \dots \oplus \mathbb{Z}/p^k\mathbb{Z}$$</span> where <span class="math-container">$\mathbb{Z}/p\mathbb{Z}$</span> appears <span class="math-container">$c_1$</span> times, <span class="math-container">$\mathbb{Z}/p^2\mathbb{Z}$</span> appears <span class="math-container">$c_2$</span> times, etc. until <span class="math-container">$\mathbb{Z}/p^k\mathbb{Z}$</span> which appears <span class="math-container">$c_k$</span> times.</p> <p>Clearly the <span class="math-container">$c_i$</span> determine <span class="math-container">$G$</span> up to isomorphism. The goal is to show the converse: given the isomorphism class of <span class="math-container">$G$</span>, the <span class="math-container">$c_i$</span> are completely determined.</p> <p>Now for each <span class="math-container">$i$</span> we can define <span class="math-container">$G_i\subset G$</span> as the subgroup of elements <span class="math-container">$x$</span> such that <span class="math-container">$p^ix=0$</span>. This is a subgroup because <span class="math-container">$G$</span> is abelian, and clearly the order of <span class="math-container">$G_i$</span> is a power of <span class="math-container">$p$</span> (since the order of <span class="math-container">$G$</span> is itself a power of <span class="math-container">$p$</span>). We have <span class="math-container">$$0=G_0\subset G_1\subset\dots \subset G_k=G.$$</span></p> <p>Now write <span class="math-container">$|G_i|=p^{l_i}$</span> for some <span class="math-container">$l_i\in \mathbb{N}$</span>. 
Clearly, the integers <span class="math-container">$l_i$</span> depend only on <span class="math-container">$G$</span>: we did not refer at any point to the decomposition of <span class="math-container">$G$</span> to define them. It is obvious that we should be able to compute the <span class="math-container">$l_i$</span> from the <span class="math-container">$c_i$</span> (since the <span class="math-container">$c_i$</span> control all the information about <span class="math-container">$G$</span> up to isomorphism). If we can show that conversely the <span class="math-container">$c_i$</span> can be found using only the <span class="math-container">$l_i$</span>, then we are done.</p> <p>Now the rest is just some counting: how many elements does <span class="math-container">$G_i$</span> have, given the decomposition? Let us start with <span class="math-container">$G_1$</span>: how many elements of order (at most) <span class="math-container">$p$</span> are there in <span class="math-container">$G$</span>? Well, in <span class="math-container">$\mathbb{Z}/p^j\mathbb{Z}$</span>, there is exactly one subgroup isomorphic to <span class="math-container">$\mathbb{Z}/p\mathbb{Z}$</span> (in general, in <span class="math-container">$\mathbb{Z}/n\mathbb{Z}$</span> there is exactly one subgroup of order <span class="math-container">$d$</span> if <span class="math-container">$d$</span> divides <span class="math-container">$n$</span>). So in <span class="math-container">$G$</span>, each factor in the decomposition gives one copy of <span class="math-container">$\mathbb{Z}/p\mathbb{Z}$</span>, which means that in total we have <span class="math-container">$c_1+\dots+c_k$</span> copies of <span class="math-container">$\mathbb{Z}/p\mathbb{Z}$</span>, and <span class="math-container">$$l_1 = c_1+\dots+c_k.$$</span></p> <p>Similarly, how many elements in <span class="math-container">$G_2$</span>? 
This time, we have to make a distinction among the factors in the decomposition:</p> <ul> <li><p>the <span class="math-container">$c_1$</span> factors of type <span class="math-container">$\mathbb{Z}/p\mathbb{Z}$</span> are indeed in <span class="math-container">$G_2$</span> (they are already in <span class="math-container">$G_1$</span>), but they only contribute <span class="math-container">$p$</span> elements each, so their contribution to <span class="math-container">$l_2$</span> is <span class="math-container">$c_1$</span> in total;</p></li> <li><p>the <span class="math-container">$c_j$</span> factors of type <span class="math-container">$\mathbb{Z}/p^j\mathbb{Z}$</span> where <span class="math-container">$j\geqslant 2$</span> contain exactly one subgroup isomorphic to <span class="math-container">$\mathbb{Z}/p^2\mathbb{Z}$</span>, so they contribute <span class="math-container">$p^2$</span> elements each, which means that their contribution to <span class="math-container">$l_2$</span> is <span class="math-container">$2c_j$</span> in total.</p></li> </ul> <p>In total: <span class="math-container">$$l_2 = c_1 + 2c_2 + 2c_3 +\dots +2c_k.$$</span></p> <p>It is easy to see that this goes on: a factor <span class="math-container">$\mathbb{Z}/p^j\mathbb{Z}$</span> contributes <span class="math-container">$\min(i,j)$</span> to <span class="math-container">$l_i$</span>, so <span class="math-container">$$l_i = c_1 + 2c_2 + \dots + (i-1)c_{i-1} + ic_i + ic_{i+1} + \dots + ic_k.$$</span></p> <p>Now we have computed the <span class="math-container">$l_i$</span> from the <span class="math-container">$c_j$</span> (and we found the same formula as in your question), and it just remains to notice that <span class="math-container">$$\begin{pmatrix} 1 &amp; 1 &amp; 1 &amp; \dots &amp; 1 \\ 1 &amp; 2 &amp; 2 &amp; \dots &amp; 2 \\ 1 &amp; 2 &amp; 3 &amp; \dots &amp; 3 \\ \vdots &amp; \vdots &amp; \vdots &amp; \dots &amp; \vdots \\ 1 &amp; 2 &amp; 3 &amp; \dots &amp; k \end{pmatrix}$$</span> is invertible (its determinant is <span class="math-container">$1$</span>: subtracting each row from the next leaves a triangular matrix with <span class="math-container">$1$</span>s on the diagonal).</p>
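As a concrete sanity check on the counting above (a small Python sketch of my own, not part of the original argument; the example group and the function names are made up for illustration), the matrix can in fact be inverted by hand: first differences of the $l_i$ count the factors of type $\geqslant i$, and differencing once more isolates each $c_i$.

```python
def exponents_from_multiplicities(c):
    """Given c[j-1] copies of Z/p^j Z, return l_1..l_k with |G_i| = p^(l_i)."""
    k = len(c)
    return [sum(min(i, j) * c[j - 1] for j in range(1, k + 1))
            for i in range(1, k + 1)]

def multiplicities_from_exponents(l):
    """Recover the c_j from the l_i by taking second differences."""
    # d[i-1] = l_i - l_{i-1} = number of factors Z/p^j Z with j >= i
    d = [l[0]] + [l[i] - l[i - 1] for i in range(1, len(l))]
    return [d[i] - d[i + 1] for i in range(len(d) - 1)] + [d[-1]]

# example: G = (Z/p)^2 + Z/p^2 + Z/p^3, i.e. c = (2, 1, 1, 0)
c = [2, 1, 1, 0]
l = exponents_from_multiplicities(c)
assert l == [4, 6, 7, 7]
assert multiplicities_from_exponents(l) == c
```

The second-difference formula $c_i = (l_i - l_{i-1}) - (l_{i+1} - l_i)$ (with the obvious conventions at the two ends) is another way to see directly that the matrix is invertible.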
2,280,243
<blockquote> <p>A tribonacci sequence is a sequence of numbers such that each term from the fourth onward is the sum of the previous three terms. The first three terms of a tribonacci sequence are called its <em>seeds</em>. For example, if the three seeds of a tribonacci sequence are $1, 2$, and $3$, its 4th term is $6$ ($1+2+3$), and its 5th term is $11$ ($2+3+6$).</p> </blockquote> <p>Find the smallest 5-digit term in a tribonacci sequence if the seeds are $6, 19, 22$.</p> <p>I'm having trouble with this. I don't know where to start. The formula for the tribonacci sequence in relation to its seeds is $$u_{n+3} = u_{n} + u_{n+1} + u_{n+2},$$ and it holds for every integer $n$. But that's all I know how to work out. In case it helps, the next few numbers in the sequence mentioned in the question are $47, 88, 157, 292$. Is there some shortcut? I need to show my working, and two pages full of addition doesn't sound very easy to mark, does it?</p>
Γιώργος Πλούσος
422,616
<p>I don't understand exactly what the question is asking, but the following result may be useful (I omit the proof):</p> <p><strong>Tribonacci</strong></p> <p>The formula for calculating the <span class="math-container">$n$</span>th term is equivalent to the following relations, in which only one cube root is used instead of three.</p> <p><span class="math-container">$$q=\left(19-3\sqrt{33}\right)^{\frac{1}{3}}$$</span> <span class="math-container">$$b=(1+q+4/q)/3$$</span></p> <p><span class="math-container">$$T_{n}=\operatorname{round}\left( \frac{b-1}{4b-6}\, b^n \right)$$</span></p> <p>(Note: <span class="math-container">$\operatorname{round}(r)=\operatorname{int}(r+1/2)$</span> for <span class="math-container">$r\ge 0$</span>.) Here <span class="math-container">$b\approx 1.8392868$</span> is the real root of <span class="math-container">$b^3=b^2+b+1$</span>, the tribonacci constant.</p>
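The constants fit together because $(19+3\sqrt{33})(19-3\sqrt{33})=64$, so $4/q$ is the other real cube root and $b$ is the tribonacci constant. A short Python check of the formula (my own sketch; in this numerical check the closed form reproduces the tribonacci sequence with seeds $0,0,1$, shifted by two indices):

```python
import math

q = (19 - 3 * math.sqrt(33)) ** (1 / 3)
b = (1 + q + 4 / q) / 3            # tribonacci constant, approx. 1.8392868

def closed_form(n):
    return round((b - 1) / (4 * b - 6) * b ** n)

# tribonacci by recurrence, seeds 0, 0, 1
t = [0, 0, 1]
for _ in range(30):
    t.append(t[-1] + t[-2] + t[-3])

# b is the dominant root of x^3 = x^2 + x + 1
assert abs(b**3 - (b**2 + b + 1)) < 1e-9
# closed form matches the recurrence, offset by two indices
assert all(closed_form(n) == t[n + 2] for n in range(25))
```

Floating-point precision is more than enough here: the relative error in `b` is on the order of machine epsilon, so the rounded value is exact for all the terms tested.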
2,280,243
<blockquote> <p>A tribonacci sequence is a sequence of numbers such that each term from the fourth onward is the sum of the previous three terms. The first three terms of a tribonacci sequence are called its <em>seeds</em>. For example, if the three seeds of a tribonacci sequence are $1, 2$, and $3$, its 4th term is $6$ ($1+2+3$), and its 5th term is $11$ ($2+3+6$).</p> </blockquote> <p>Find the smallest 5-digit term in a tribonacci sequence if the seeds are $6, 19, 22$.</p> <p>I'm having trouble with this. I don't know where to start. The formula for the tribonacci sequence in relation to its seeds is $$u_{n+3} = u_{n} + u_{n+1} + u_{n+2},$$ and it holds for every integer $n$. But that's all I know how to work out. In case it helps, the next few numbers in the sequence mentioned in the question are $47, 88, 157, 292$. Is there some shortcut? I need to show my working, and two pages full of addition doesn't sound very easy to mark, does it?</p>
Community
-1
<p>By the theory of linear recurrences, the sequence approximately follows a geometric progression</p> <p>$$u_n=ar^n,$$ where $r$ is the dominant root of $r^3=r^2+r+1$, which is about $1.8392867552142$.</p> <p>With $u_7=292$, we estimate $a=292/r^7\approx4.1005$.</p> <p>Then we can expect $u_n\ge10000$ for</p> <p>$$n\ge\frac{\log 10000-\log a}{\log r}=12.798\cdots$$</p> <hr> <p>Indeed,</p> <p>$$1\to 6\\ 2\to 19\\ 3\to 22\\ 4\to 47\\ 5\to 88\\ 6\to 157\\ 7\to 292\\ 8\to 537\\ 9\to 986\\ 10\to 1815\\ 11\to 3338\\ 12\to 6139\\ 13\to 11292\\ $$</p> <p>so the smallest five-digit term is $u_{13}=11292$.</p>
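The estimate and the table can be reproduced in a few lines of Python (a sketch following the same steps; the variable names are mine):

```python
import math

r = 1.8392867552142            # dominant root of r^3 = r^2 + r + 1

# run the recurrence from the seeds 6, 19, 22 until a 5-digit term appears
seq = [6, 19, 22]
while seq[-1] < 10000:
    seq.append(seq[-1] + seq[-2] + seq[-3])

# geometric estimate: fit a from u_7 = 292 (1-indexed, so seq[6])
a = seq[6] / r**7
n_est = (math.log(10000) - math.log(a)) / math.log(r)

assert seq[6] == 292
assert abs(a - 4.1005) < 1e-3
assert 12.79 < n_est < 12.81   # the estimate n >= 12.798...
assert len(seq) == 13 and seq[-1] == 11292
```

The estimate predicts that the 13th term is the first with five digits, and the recurrence confirms it: $u_{13}=11292$.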