989,118
<p>The sums are $$\sum_{n&gt;0} \mathrm{i}^n\frac{J_n(x)}{n}\sin\frac{2\pi n}{3}\stackrel{?}{=}0$$ and $$\sum_{n&gt;0} (\mathrm{-i})^n\frac{J_n(x)}{n}\sin\frac{2\pi n}{3}\stackrel{?}{=}0$$</p> <p>I suspect they are zero because I am working on a project in which, by symmetry considerations, they should be zero. </p> <p><strong>If anyone can point out that they cannot be zero, or that they cannot identically be zero, that will be great, too.</strong></p> <p>I've asked a question about Bessel functions before; see <a href="https://math.stackexchange.com/questions/985566/does-this-infinite-summation-of-bessel-function-has-a-closed-form">Does this Infinite summation of Bessel function has a closed form?</a> I hope the reference it contains may be of some help.</p>
Hagen von Eitzen
39,174
<p>Note that $i^2=-i^4=i^6=-i^8=\ldots$ so that the sum simplifies <em>significantly</em>.</p>
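<p>As a numeric aside (not part of the original answer): for real $x$ the two sums are termwise complex conjugates of each other, since $J_n(x)$ is real and $\overline{\mathrm{i}^n}=(-\mathrm{i})^n$; in particular one vanishes identically if and only if the other does. The sketch below checks this on truncated partial sums, using the standard integral representation of $J_n$; the sample point $x=1.5$ and the truncation level are arbitrary choices.</p>

```python
import numpy as np

def bessel_j(n, x, m=20000):
    # Integral representation J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin t) dt,
    # evaluated with a midpoint rule on m sample points.
    t = (np.arange(m) + 0.5) * np.pi / m
    return np.cos(n * t - x * np.sin(t)).mean()

def partial_sum(x, sign, terms=60):
    # Truncation of sum_{n>0} (sign*i)^n * J_n(x)/n * sin(2*pi*n/3).
    total = 0.0 + 0.0j
    for n in range(1, terms + 1):
        total += (sign * 1j) ** n * bessel_j(n, x) / n * np.sin(2 * np.pi * n / 3)
    return total

s_plus = partial_sum(1.5, +1)   # the i^n sum
s_minus = partial_sum(1.5, -1)  # the (-i)^n sum
```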
851,647
<p>I have $$ X_i, i=1,...,n_1, $$ $$ Y_j, j=1,...,n_2, $$ $$ Z_k, k=1,...,n_3, $$ where $X_i, Y_j$ and $Z_k$ are all independent. Does it then follow that $$ \bar{X}=\frac{1}{n_1}\sum_{i=1}^{n_1}X_i, \bar{Y}=\frac{1}{n_2}\sum_{j=1}^{n_2}Y_j, \bar{Z}=\frac{1}{n_3}\sum_{k=1}^{n_3}Z_k $$ are independent, too?</p>
vadim123
73,324
<p>$$V-FA^M=P\left(\frac{G}{B}\right)A^M + P\left(\frac{H}{B}\right)=P\left(\left(\frac{G}{B}\right)A^M + \left(\frac{H}{B}\right)\right)$$</p> <p>$$P=\frac{V-FA^M}{\left(\frac{G}{B}\right)A^M + \left(\frac{H}{B}\right)}$$</p>
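<p>A quick numeric sanity check of the rearrangement; the sample values below are arbitrary, chosen only to exercise the formula.</p>

```python
# Arbitrary illustrative values for the symbols in the identity.
V, F, A, M, G, H, B = 100.0, 2.0, 1.05, 12, 0.3, 0.7, 1.1

# Rearranged formula for P.
P = (V - F * A**M) / ((G / B) * A**M + (H / B))

# Both sides of the original identity V - F*A^M = P*((G/B)*A^M + H/B).
lhs = V - F * A**M
rhs = P * ((G / B) * A**M + (H / B))
```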
160,392
<p>Did I solve the following quadratic equation correctly?</p> <p>$$W(W+2)-7=2W(W-3)$$</p> <p>I got</p> <p>$$W^2-8W+7$$ Then for my solution I got</p> <p>$$(W-1)(W-7)$$</p>
Dinuki Seneviratne
250,288
<p>$$W (W+2) -7 = 2W (W-3)$$ Expanding the brackets: $$W^2+2W -7 = 2W^2-6W$$ Then shift the right-hand side of the equation to the left-hand side: $$W^2+2W-7-(2W^2-6W) = (2W^2-6W)-(2W^2-6W)$$ $$W^2+2W-7-(2W^2-6W)= 0$$ Expand the brackets: $$W^2+2W-7-2W^2+6W= 0$$ $$W^2-2W^2+2W+6W-7=0$$ $$-W^2+8W-7 =0$$ The above line is multiplied by $(-1)$: $$(-1)(-W^2+8W-7) =(-1)(0)$$ Expanding the brackets: $$W^2-8W+7 =0$$ This can be factorised as below: $$W^2-7W-1W+7 =0$$ $$W(W-7)-1(W-7)=0$$ $$(W-1)(W-7) =0$$ Using the Null Factor law, either $(W-1) =0$ or $(W-7) =0$.</p> <p><strong>Therefore $W =1$ or $W =7$</strong></p> <p>So you have solved it correctly!</p>
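<p>The conclusion can be double-checked numerically; this short sketch confirms the roots of $W^2-8W+7$ and verifies that both satisfy the original equation.</p>

```python
import numpy as np

# Roots of W^2 - 8W + 7 (coefficients listed highest-degree-first).
roots = sorted(np.roots([1.0, -8.0, 7.0]).real)

def lhs(W):
    # Left side of the original equation: W(W+2) - 7.
    return W * (W + 2) - 7

def rhs(W):
    # Right side: 2W(W-3).
    return 2 * W * (W - 3)
```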
3,219,187
<p>The cancellation law for the multiplication of natural numbers is:</p> <p><span class="math-container">$$\forall m, n\in\mathbb N, \forall p\in\mathbb N-\{0\}, m\cdot p=n\cdot p\Rightarrow m=n.$$</span></p> <p>Is it possible to show this using induction?</p> <p>I tried to define <span class="math-container">$$X=\{n\in\mathbb N: \forall m\in\mathbb N, \forall p\in\mathbb N-\{0\}, m\cdot p=n\cdot p\Rightarrow m=n\}.$$</span></p> <p>It is easy to verify that <span class="math-container">$0\in X$</span> but I'm not able to show that if <span class="math-container">$n\in X$</span> then <span class="math-container">$n+1\in X$</span>. </p> <p>My attempt: Suppose <span class="math-container">$$m\cdot p=n\cdot p\Rightarrow m=n$$</span> for all <span class="math-container">$m\in\mathbb N$</span> and <span class="math-container">$p\in\mathbb N-\{0\}$</span>. Supposing</p> <p><span class="math-container">$$m\cdot p=(n+1)\cdot p$$</span> we should show <span class="math-container">$m=n+1$</span>. But:</p> <p><span class="math-container">$$m\cdot p=(n+1)\cdot p=n\cdot p+p,$$</span> and I can't see how to use the induction hypothesis.</p> <p>Well, I could try to prove this using trichotomy. If it were the case that <span class="math-container">$m\neq n+1$</span> then <span class="math-container">$m&gt;n+1$</span> or <span class="math-container">$m&lt;n+1$</span>. In the first case we would have <span class="math-container">$m=n+1+r$</span> for some <span class="math-container">$r\in\mathbb N-\{0\}$</span>. Then:</p> <p><span class="math-container">$m\cdot p=(n+1+r)\cdot p=(n+1)\cdot p+r\cdot p.$</span> Since <span class="math-container">$r, p\in \mathbb N-\{0\}$</span> it would follow that <span class="math-container">$$m\cdot p&gt;(n+1)\cdot p,$$</span> which contradicts <span class="math-container">$$m\cdot p=(n+1)\cdot p.$$</span> Is there another way to do this proof?</p>
hmakholm left over Monica
14,366
<p>If you have proved trichotomy <em>and</em> enough laws for addition, then you could do something like:</p> <p>Assume <span class="math-container">$n\ne m$</span>, then without loss of generality <span class="math-container">$m&gt;n$</span> which is to say, <span class="math-container">$m=n+k$</span> for some <span class="math-container">$k\ge 1$</span>. We then have <span class="math-container">$ (n+k)p = np $</span> -- but by induction on <span class="math-container">$k$</span> this is impossible.</p>
1,660,361
<blockquote> <p>$$x^4+x^2-2$$</p> </blockquote> <p>How do I write this in a form that can be used for partial fractions? Taking out $x$ does not help; what is the process? </p> <p>Using Wolfram I got to $(x^2+2)(x-1)(x+1)$.</p>
Austin Mohr
11,245
<p>Writing a polynomial as the product of smaller multiplicative factors is called "factoring".</p> <p>You might try the substitution $z = x^2$ to get $z^2 + z - 2$. Can you factor this polynomial?</p>
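<p>Both steps, the $z=x^2$ substitution and the factorisation quoted from Wolfram, can be verified by multiplying the factors back out; the coefficient lists below are highest-degree-first.</p>

```python
import numpy as np

# z^2 + z - 2 = (z + 2)(z - 1): multiply via polynomial convolution.
z_product = np.convolve([1, 2], [1, -1])

# (x^2 + 2)(x - 1)(x + 1) should recover x^4 + x^2 - 2.
x_product = np.convolve(np.convolve([1, 0, 2], [1, -1]), [1, 1])
```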
481,673
<p>Find all functions $g:\mathbb{R}\to\mathbb{R}$ with $g(x+y)+g(x)g(y)=g(xy)+g(x)+g(y)$ for all $x,y$.</p> <p>I think the solutions are $0, 2, x$. If $g(x)$ is not identically $2$, then $g(0)=0$. I'm trying to show if $g$ is not constant, then $g(1)=1$. I have $g(x+1)=(2-g(1))g(x)+g(1)$. So if $g(1)=1$, we can show inductively that $g(n)=n$ for integer $n$. Maybe then extend to rationals and reals.</p>
Andrey Sokolov
30,727
<p>Substituting an infinitesimal $y=dx$ gives</p> <p>$g(x)+g(x)g(0)=g(0)+g(x)+g(0)$ or $g(x)g(0)=2 g(0)$</p> <p>and</p> <p>$g'(x)+g(x)g'(0)=g'(0)x+g'(0)$,</p> <p>for a smooth function $g(x)$. The first equation implies either $g(0)=0$ or $g(x)=2$. Solving the differential equation with the condition $g(0)=0$ gives</p> <p>$g(x)=\frac{1}{k}((1-k)e^{-kx}+k(x+1)-1),$</p> <p>where $k=g'(0)\ne0$. If $g'(0)=0$, then $g(x)=0$. Any smooth solution of your equality with $g(0)=0$ and $g'(0)\ne0$ must be of this form for some $k=g'(0)$. I get $g(x)=x$ for $k=1$. You can check if that is the only option by substituting $g(x)$ into your equality.</p>
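<p>A numeric sketch (illustrative only) confirming that the displayed family satisfies the linearised equation $g'(x)+k\,g(x)=kx+k$ with $g(0)=0$, and that $k=1$ reduces to $g(x)=x$; the sample values and the finite-difference step size are arbitrary.</p>

```python
import math

def g(x, k):
    # Candidate solution g(x) = (1/k) * ((1-k) e^{-kx} + k(x+1) - 1).
    return ((1 - k) * math.exp(-k * x) + k * (x + 1) - 1) / k

def ode_residual(x, k, h=1e-6):
    # Residual of g'(x) + k*g(x) - (k*x + k), with g' from a central difference.
    gprime = (g(x + h, k) - g(x - h, k)) / (2 * h)
    return gprime + k * g(x, k) - (k * x + k)
```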
481,673
<p>Find all functions $g:\mathbb{R}\to\mathbb{R}$ with $g(x+y)+g(x)g(y)=g(xy)+g(x)+g(y)$ for all $x,y$.</p> <p>I think the solutions are $0, 2, x$. If $g(x)$ is not identically $2$, then $g(0)=0$. I'm trying to show if $g$ is not constant, then $g(1)=1$. I have $g(x+1)=(2-g(1))g(x)+g(1)$. So if $g(1)=1$, we can show inductively that $g(n)=n$ for integer $n$. Maybe then extend to rationals and reals.</p>
Calvin Lin
54,563
<p>Let <span class="math-container">$P(x,y)$</span> denote <span class="math-container">$g(x+y) + g(x) g(y) =g(xy) + g(x) + g(y) $</span>. The important statements that we'd consider are</p> <p><span class="math-container">$$ \begin{array} { l l l } P(x,y) &amp; : g(x+y) + g(x) g(y) =g(xy) + g(x) + g(y) &amp; (1) \\ P(x,1) &amp; : g(x+1) + g(x) g(1) = 2 g(x) + g(1) &amp; (2) \\ P(x, y+1) &amp; : g(x+y+1) + g(x) g(y+1) = g(xy+x) + g(x) +g(y+1) &amp; (3)\\ \end{array} $$</span></p> <p>Let <span class="math-container">$g(1) = A$</span>. With <span class="math-container">$x=y=1$</span>, we get <span class="math-container">$g(2) + A^2 = 3A$</span>. </p> <p>With <span class="math-container">$x=y=2$</span>, we get <span class="math-container">$g(4) + g(2)^2 = g(4) + 2g(2) $</span>. Hence either <span class="math-container">$g(2) = 0$</span> or <span class="math-container">$g(2) = 2$</span>. </p> <p><strong>Case 1:</strong> If <span class="math-container">$g(2) = 0$</span>, then <span class="math-container">$A^2 - 3A = 0$</span> so <span class="math-container">$A=0$</span> or <span class="math-container">$A=3$</span>.</p> <p><strong>Case 1a:</strong> (This case is incomplete) If <span class="math-container">$A=0$</span>:<br> <span class="math-container">$(2)$</span> gives us <span class="math-container">$g(x+1) = 2g(x)$</span>. </p> <p><span class="math-container">$(3)$</span> gives us <span class="math-container">$g(x+y+1) +g(x)g(y+1)= g(xy+x) + g(x)+g(y+1)$</span>.<br> Applying the above gives <span class="math-container">$2g(x+y)+g(x)2g(y) = g(xy+x) + g(x) + 2g(y)$</span>, or that <span class="math-container">$2g(x+y) + 2g(x)g(y) = g(xy+x) + g(x) + 2g(y)$</span>. </p> <p><span class="math-container">$(1)$</span>, combined with the above, gives us <span class="math-container">$g(xy+x) = 2g(xy) + g(x) $</span>.<br> Substituting <span class="math-container">$y=1$</span> in the above identity, we get <span class="math-container">$g(2x) = 3g(x)$</span>. 
</p> <p>(Incomplete here)</p> <p>Comparing with the first equation, we thus have <span class="math-container">$g(x) = 0 $</span>.<br> It is easy to check that <span class="math-container">$g(x) = 0$</span> is a solution.</p> <p><strong>Case 1b:</strong> If <span class="math-container">$A=3$</span>:<br> <span class="math-container">$(2)$</span> gives us <span class="math-container">$g(x+1) = - g(x) + 3$</span>.</p> <p><span class="math-container">$P(x,2)$</span> gives us <span class="math-container">$g(x+2) = g(2x) + g(x)$</span>.</p> <p>From the previous identity, <span class="math-container">$g(x+2) = -g(x+1) + 3 = g(x)$</span>. Hence <span class="math-container">$g(2x) = 0$</span> for all <span class="math-container">$x$</span>. However, with <span class="math-container">$x = \frac{1}{2}$</span>, we have a contradiction. There is no solution in this case.</p> <p><strong>Case 2:</strong> If <span class="math-container">$g(2) = 2$</span>, then <span class="math-container">$A^2 - 3A +2 = 0 $</span> so <span class="math-container">$A= 1$</span> or <span class="math-container">$A=2$</span>.</p> <p><strong>Case 2a:</strong> If <span class="math-container">$A = 1$</span>:<br> <span class="math-container">$P(0,1) $</span> gives us <span class="math-container">$g(1) + g(0) = g(0) + g(0) + g(1)$</span> so <span class="math-container">$g(0) = 0 $</span>.</p> <p><span class="math-container">$(2)$</span> gives us <span class="math-container">$g(x+1) = g(x) + g(1)$</span>.</p> <p><span class="math-container">$(3)$</span> gives us <span class="math-container">$g(x+y+1) + g(x) g(y+1) = g(xy+x) + g(x) + g(y+1)$</span>. Using the above, this gives us <span class="math-container">$ g(x+y) + g(x) g(y) = g(xy+x) + g(x) + g(y) $</span>.</p> <p><span class="math-container">$(1)$</span> gives us <span class="math-container">$g(x+y) + g(x) g(y) = g(xy) + g(x) + g(y)$</span>. 
Hence <span class="math-container">$g(xy+x) = g(xy) + g(x) $</span>.</p> <p>Now, for non-zero <span class="math-container">$a, b$</span>, let <span class="math-container">$x = a$</span> and <span class="math-container">$ y = \frac{b}{a}$</span>. This gives us <span class="math-container">$g(a+b) = g(a) + g(b) $</span>. It is clear that this equation also holds if <span class="math-container">$a$</span> or <span class="math-container">$b$</span> equals 0, since <span class="math-container">$g(0) = 0 $</span>. Thus, we have <span class="math-container">$g(x+y) = g(x) + g(y) $</span>.</p> <p>This also yields <span class="math-container">$g(xy) = g(x) g(y)$</span>. With these 2 equations, the only solution is <span class="math-container">$g(x) = x$</span>. (slight work here, but this is standard in functional equations).</p> <p><strong>Case 2b:</strong> If <span class="math-container">$A=2$</span>: </p> <p><span class="math-container">$(2)$</span> gives us <span class="math-container">$g(x+1) +2g(x) = g(x) + g(x) + g(1)$</span>, so <span class="math-container">$g(x) = 2$</span> for all <span class="math-container">$x$</span>. This is clearly a solution.</p> <p>In conclusion, we only have the 3 solutions <span class="math-container">$g(x) = 0$</span> or <span class="math-container">$g(x) = 2$</span> or <span class="math-container">$g(x) = x$</span>.</p>
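<p>The conclusion is easy to spot-check numerically. The sketch below (arbitrary random samples, illustrative only) tests each claimed solution against the original functional equation, plus one non-solution for contrast.</p>

```python
import random

random.seed(0)

def satisfies(g, trials=200):
    # Check g(x+y) + g(x)g(y) == g(xy) + g(x) + g(y) at random real pairs.
    for _ in range(trials):
        x = random.uniform(-10, 10)
        y = random.uniform(-10, 10)
        lhs = g(x + y) + g(x) * g(y)
        rhs = g(x * y) + g(x) + g(y)
        if abs(lhs - rhs) > 1e-9 * max(1.0, abs(lhs), abs(rhs)):
            return False
    return True

ok_zero = satisfies(lambda x: 0.0)   # g = 0
ok_two = satisfies(lambda x: 2.0)    # g = 2
ok_id = satisfies(lambda x: x)       # g = identity
bad_sq = satisfies(lambda x: x * x)  # g = x^2, not a solution
```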
399,564
<p>Using Mathematical induction on $k$, prove that for any integer $k\geq 1$,</p> <p>$$(1-x)^{-k}=\sum_{n\geq 0}\binom{n+k-1}{k-1}x^n$$</p> <p>How should I proceed? The tutorial teacher attempted this question and forgot halfway through... <em>facepalm</em></p>
digital-Ink
22,728
<p>The idea is that you prove it for $k=1$ (which is exactly the sum of a geometric progression), then, assuming it is true for $k$, differentiate both sides of the equality to obtain the result for $k+1$. The computation is quite elementary.</p>
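<p>Before writing out the induction, the identity is easy to check numerically; this sketch compares a truncated series against $(1-x)^{-k}$ at arbitrary sample points inside the radius of convergence $|x|&lt;1$.</p>

```python
from math import comb

def series(k, x, terms=80):
    # Partial sum of sum_{n>=0} C(n+k-1, k-1) x^n.
    return sum(comb(n + k - 1, k - 1) * x**n for n in range(terms))
```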
2,517,601
<p>I tried to find the residue of the function $$f(z) = \frac{1}{z(1-\cos(z))}$$ at $z=0$. So I did:</p> <p>\begin{align} \hbox{Res}(0)&amp;=\lim_{z \to 0}(z)(f(z))\\ \hbox{Res}(0)&amp;= \lim_{z \to 0} \frac{1}{1-\cos(z)} \end{align}</p> <p>and I got that the residue is $\infty$, which seems to be wrong.</p>
robjohn
13,854
<p>Since $1-\cos(z)=2\sin^2(z/2)$, $$ \begin{align} \frac1{z(1-\cos(z))} &amp;=\frac2{z^3}\left(\frac{z/2}{\sin(z/2)}\right)^2\\ &amp;=\frac2{z^3}\left(\frac1{1-\frac1{24}z^2+O\!\left(z^4\right)}\right)^2\\ &amp;=\frac2{z^3}\left(1+\frac1{12}z^2+O\!\left(z^4\right)\right)\\[3pt] &amp;=\frac2{z^3}+\color{#C00}{\frac1{6z}}+O\!\left(z\right) \end{align} $$</p>
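<p>The coefficient $\frac16$ of $\frac1z$ can be cross-checked numerically: the residue equals the mean of $f(z)\,z$ over the unit circle (the trapezoid rule is spectrally accurate for periodic integrands, and $z=0$ is the only singularity inside $|z|=1$; the nearest others sit at $\pm2\pi$). A sketch:</p>

```python
import numpy as np

def f(z):
    return 1.0 / (z * (1.0 - np.cos(z)))

# Residue at 0: (1/(2*pi*i)) * contour integral of f over |z| = 1,
# which reduces to the mean of f(z)*z at equally spaced points.
m = 4096
theta = 2 * np.pi * np.arange(m) / m
z = np.exp(1j * theta)
residue = np.mean(f(z) * z)
```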
3,663,267
<p>In a linear algebra class I was given a theorem: If <span class="math-container">$\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{n}\right\}$</span> is a linearly independent subset of <span class="math-container">$\mathbb{R}^{n}$</span> then <span class="math-container">$ \mathbb{R}^{n}=\operatorname{span}\left\{\boldsymbol{u}_{1}, \boldsymbol{u}_{2}, \ldots, \boldsymbol{u}_{n}\right\} $</span></p> <p>I understand what this means, but it got me thinking: what do we really mean when we say a set is equal to <span class="math-container">$\mathbb{R}^{n}$</span>? If we have the two vectors (1,0,0) and (0,1,0), then they are linearly independent and their span gives us a plane. Now, would we call this spanning set equal to <span class="math-container">$\mathbb{R}^{2}$</span> or is it just isomorphic to <span class="math-container">$\mathbb{R}^{2}$</span>? If we only have the latter, then is any subset of <span class="math-container">$\mathbb{R}^{3}$</span> actually equal to <span class="math-container">$\mathbb{R}^{2}$</span>, or can we only talk about a set being equal to <span class="math-container">$\mathbb{R}^{2}$</span> when we have not explicitly defined vectors that can live in <span class="math-container">$\mathbb{R}^{3}$</span> (as in vectors that have 3 coordinates)?</p>
ancient mathematician
414,424
<p>It is clear that all your matrices act as the identity on all basis vectors except those labelled <span class="math-container">$i,j,k$</span>; so we need only compute what happens to these three vectors. It is also clear that all the matrices send these three vectors to linear combinations of these three. So we can restrict ourselves to the case <span class="math-container">$n=3$</span>, and without loss of generality we may assume <span class="math-container">$i=1$</span>, <span class="math-container">$j=3$</span> and <span class="math-container">$k=2$</span>. </p> <p>Then <span class="math-container">$$ E_{12}(a)E_{23}(b)= \begin{bmatrix} 1 &amp; a &amp; 0\\ 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp;1 \end{bmatrix} \begin{bmatrix} 1 &amp; 0 &amp; 0\\ 0 &amp; 1 &amp; b\\ 0 &amp; 0 &amp;1 \end{bmatrix} = \begin{bmatrix} 1 &amp; a &amp; ab\\ 0 &amp; 1 &amp; b\\ 0 &amp; 0 &amp;1 \end{bmatrix}. $$</span> Hence <span class="math-container">$$ E_{12}(a)^{-1}E_{23}^{-1}(b)= E_{12}(-a)E_{23}(-b)= \begin{bmatrix} 1 &amp; -a &amp; ab\\ 0 &amp; 1 &amp; -b\\ 0 &amp; 0 &amp;1 \end{bmatrix}. $$</span> Finally <span class="math-container">$$ [E_{12}(a),E_{23}(b)]= E_{12}(a)E_{23}(b)E_{12}(a)^{-1}E_{23}^{-1}(b)= \begin{bmatrix} 1 &amp; 0 &amp; ab\\ 0 &amp; 1 &amp; 0\\ 0 &amp; 0 &amp;1 \end{bmatrix}= E_{13}(ab). $$</span></p>
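<p>The commutator identity is easy to verify numerically; a small sketch with arbitrary values of $a$ and $b$, using that $E_{ij}(c)^{-1}=E_{ij}(-c)$:</p>

```python
import numpy as np

def E(i, j, c, n=3):
    # Elementary transvection: the identity matrix with c in position (i, j).
    m = np.eye(n)
    m[i - 1, j - 1] = c
    return m

a, b = 2.0, 5.0
# [E12(a), E23(b)] = E12(a) E23(b) E12(a)^{-1} E23(b)^{-1}.
commutator = E(1, 2, a) @ E(2, 3, b) @ E(1, 2, -a) @ E(2, 3, -b)
```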
1,136,458
<p>Suppose $G$ is an abelian group. Then the subset $H= \big\{ x\in G\ | \ x^3=1_G \big\}$ is a subgroup of $G$.</p> <p>I was able to show the statement is correct but the weird thing is that I didn't use the fact $G$ is abelian. <strong>Is it possible that the fact $G$ is abelian is redundant?</strong></p> <p>Here is how I proved the statement above without using the fact $G$ is abelian:</p> <p><strong>(Identity)</strong> Of course the identity is in H because $1_G^3=1_G$.</p> <p><strong>(Closure)</strong> Suppose $a,b\in H$ then $a^3=1_G$ and $b^3=1_G$. Therefore we have: $(ab)^3 = a^3b^3 = 1_G1_G = 1_G$.</p> <p><strong>(Inverse)</strong> Suppose $a\in H$ then $a^3=1_G \implies aa^2=1_G \implies a^{-1} = a^2$</p> <p>Am I correct?</p>
nbubis
28,743
<p>The problem is your calculator or computer, not the mathematical limit. Because of the way floating point numbers work, they store a limited number of digits along with an exponent. So even though $1\times 10^{-11}$ is a perfectly valid number on its own, a machine with too few digits of precision can't store $1+10^{-11}$ exactly. Instead, it just rounds the number down to $1$, and you get your result.</p>
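<p>The same absorption effect can be demonstrated in IEEE-754 double precision, where the threshold near $1$ sits at about $10^{-16}$; a device carrying only ten or so digits hits it much earlier, around $10^{-10}$. A short sketch (Python floats are IEEE-754 doubles):</p>

```python
import sys

# Machine epsilon: the gap between 1.0 and the next representable double.
eps = sys.float_info.epsilon  # about 2.22e-16

tiny_lost = 1.0 + 1e-16  # below half an ulp of 1.0, so it rounds back to 1.0
tiny_kept = 1.0 + 1e-11  # still representable in double precision
```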
1,147,773
<p>Do you have any explicit example of an infinite-dimensional vector space with an explicit basis?</p> <p>Not a Hilbert basis, but a family of linearly independent vectors which spans the space: any $x$ in the space is a <strong>finite</strong> linear combination of elements of the basis.</p> <p>In general the existence of such a basis follows from the Axiom of Choice, but I wonder if there is at least one non-trivial (not finite-dimensional) case where we have some explicit construction.</p>
Aaron Maroja
143,413
<p>What about $$\beta=\{1, x, x^2, \ldots, x^n,\ldots \},$$ the standard basis of the polynomial ring $K[x]$, where $K$ is any field? In this case we have $[K[x]: K] = \infty$.</p> <p>Another example is </p> <p>$$\gamma = \{1 , \cos x, \sin x, \ldots , \cos n x, \sin n x, \ldots \},$$</p> <p>the trigonometric system in the Euclidean space $\mathscr{PC}[-\pi,\pi]$, used in Fourier series (note, though, that this family is orthogonal in the Hilbert-space sense; it is not a Hamel basis of the whole space).</p>
1,154,690
<p>We have instituted random drug testing at our company. I was charged with writing the code to generate the weekly random list of employees. We've gotten some complaints because of people getting picked more frequently than they expected and our lawyer wants some evidence that these events fall within the bell curve of randomness. </p> <p>I'm very confident in our code. I have now written several Monte Carlo simulations that back up the results we've had. That said, all my Monte Carlo simulations (each written from scratch, completely independently) also show a phenomenon that I can't explain and I'm hoping others can.</p> <p>Here are the parameters I'm using: 4550 employees, of which 91 (2%) are picked each week at random.</p> <p>The phenomenon we're encountering is this: </p> <p>Over the first 20 weeks, we expect (according to the Monte Carlo simulations) roughly 32 people to be picked 3+ times (and around 2.7 people to be picked 4+ times, but let's just stick with the people picked 3+ times). And we've had the program going for about 20 weeks and the numbers seem to agree so far.</p> <p>Over the first 40 weeks, the number of people picked 3+ times shoots up to 207 (more than 6 times as many in twice the time). </p> <p>Over the first 52 weeks, the number shoots up again to 390 (30% more time than 40 weeks, but 90% more people picked 3 times).</p> <p>Maybe I've written all my Monte Carlo simulations wrong, but I'm pretty sure I haven't. I've looked at all this a bunch of different ways and I'm convinced this phenomenon is real, but I need to be able to explain it to the VP of HR and I'm not sure why the number of people picked 3+ times rises so fast from say 40 to 52 weeks (and this holds for all of the counts: the number of people picked 4+ times, the number of people picked 5+ times, etc.).</p> <p>I do understand that, say, in the first 4 weeks, you can't possibly have anyone picked 5 times, so the first week where that would be possible would be week 5. 
So after 10 weeks, you have 5 times as many opportunities for someone to be picked 5 times as you do in 5 weeks (a 500% increase in opportunities for a 100% increase in time).</p> <p>But I'm not sure that explains the 40 week to 52 week changes. Or does it?</p> <p>I've also ruled out any issues with the random number generator (I get roughly the same results using the basic one as I do using the random number generator from the cryptography library). </p> <p>Thanks to anyone who can explain this in a way that I can take back to HR and our legal guys.</p> <p><strong>Update</strong></p> <p>To expound a bit on the process, here's an example: I have a database table that I've created called DrugTest. It has 2 columns: TestRun and Employee. Both columns are integers.</p> <p>So for 52 weeks, I have TestRun values of 1 to 52 and then I have 91 random employee numbers (numbers between 0 and 4549) for each TestRun value. No employee can be picked twice in the same week (the primary key is (TestRun, Employee), ensuring unique employee numbers for each TestRun value).</p> <p>For a sample run, I loaded up 52 weeks of data. Then I execute the following query:</p> <pre><code>select employee, count(*) as cnt from DrugTest where TestRun &lt;= 52 group by employee having count(*) = 3 order by 2 </code></pre> <p>The above query returns 313 results.</p> <pre><code>select employee, count(*) as cnt from DrugTest where TestRun &lt;= 40 group by employee having count(*) = 3 order by 2 </code></pre> <p>The above query returns 178 results.</p> <pre><code>select employee, count(*) as cnt from DrugTest where TestRun &lt;= 20 group by employee having count(*) = 3 order by 2 </code></pre> <p>The above query returns 34 results.</p>
Litho
197,288
<blockquote> <p>So after 10 weeks, you have 5 times as many opportunities for someone to be picked 5 times as you do in 5 weeks (a 500% increase in opportunities for a 100% increase in time).</p> </blockquote> <p>Actually, the difference in odds is much higher. After 5 weeks, you have one possibility to be picked on 5 weeks: that is, on weeks 1, 2, 3, 4, 5. After 10 weeks, you can be picked on weeks 1, 2, 3, 4, 5; or, on weeks 1, 2, 3, 4, 6; or, on weeks 1, 2, 3, 4, 7; or, on weeks 1, 2, 3, 4, 8; or, on weeks 1, 2, 3, 4, 9; or, on weeks 1, 2, 3, 4, 10; or, on weeks 1, 3, 4, 5, 6; and so on, and so on, and so on. There are $\binom{10}{5}=\frac{10!}{5!\cdot 5!}=252$ ways to select 5 objects out of 10, and the odds increase accordingly. (They do not increase <em>precisely</em> 252 times, but it's a decent approximation.)</p> <p>Similarly, the number of ways to choose $k$ weeks out of $n$ is $\binom{n}{k} = \frac{n!}{(n-k)!k!} = \frac{n(n-1)\dots (n-k+1)}{k!}$. When $n$ is much greater than $k$, this can be approximated as $\frac{n^k}{k!}$. </p> <p>So, when the number of weeks passed increases $x$ times, we can estimate that the number of people picked $k$ times will increase $x^k$ times (so the increase is actually polynomial, not exponential). In particular, if $x=1.3$ and $k=3$, we get the estimate $1.3^3=2.197$ times. The real increase in your case is $390/207\approx 1.884$ times; well, as I said, my approximation is rather crude and does not take some additional factors into account. Still, it should explain why the growth is not linear.</p>
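<p>The quoted figures can also be reproduced exactly rather than by simulation: each employee is picked in a given week with probability $91/4550 = 0.02$, independently across weeks, so by linearity of expectation (which holds even though different employees' picks within a week are dependent) the expected number of employees picked at least $3$ times is $4550$ times a binomial tail probability. A sketch:</p>

```python
from math import comb

def expected_picked_at_least(times, weeks, staff=4550, p=91 / 4550):
    # E[# employees picked >= `times`] = staff * P(Binomial(weeks, p) >= times).
    tail = 1.0 - sum(comb(weeks, j) * p**j * (1 - p) ** (weeks - j)
                     for j in range(times))
    return staff * tail

e20 = expected_picked_at_least(3, 20)
e40 = expected_picked_at_least(3, 40)
e52 = expected_picked_at_least(3, 52)
```

<p>This gives about 32, 208 and 391 people at 20, 40 and 52 weeks, closely matching the Monte Carlo figures (32, 207, 390) quoted in the question.</p>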
289,405
<p>Consider an elementary class $\mathcal{K}$. It is quite common in model theory that a structure $K$ in $\mathcal K$ comes with a closure operator $$\text{cl}: \mathcal{P}(K) \to \mathcal{P}(K), $$ which establishes a <a href="https://en.wikipedia.org/wiki/Pregeometry_(model_theory)" rel="nofollow noreferrer">pregeometry</a> on $K$.</p> <p>Any pregeometry yields a notion of dimension, say:</p> <p>$$\text{dim} (K) = \min \{|A|: A \subset K \text{ and } \text{cl}(A) = K\}$$ I am interested in some natural properties shared by dimensions induced by pregeometries.</p> <p>What kind of properties am I looking for? An example is the following,</p> <blockquote> <p>Suppose $K = \bigcup K_i$ (non-redundant increasing chain) and that its dimension is infinite; is it true that $\text{dim}(K) = \sum_i \text{dim}(K_i)$?</p> </blockquote> <p>I already know that there are many results like "trivial geometries are modular" but this is not the kind of result I am looking for. I am looking for structural properties of dimension because I am interested in giving an axiomatic definition of dimension.</p>
Alex Kruckman
2,126
<p>Here's an example showing that in general, for pregeometries arising in model theory, you can't characterize the dimension of a union of a chain of models just in terms of the dimensions of the models. In other words, it matters how the models embed into each other.</p> <p>Consider the theory of a single equivalence relation $E$ with infinitely many infinite classes, and define $\dim(M) = |M/E|$. </p> <p>First union: Let $M_0$ be the unique countable model of this theory up to isomorphism. For every countable ordinal $\alpha$, let $M_{\alpha+1}$ be the elementary extension of $M_\alpha$ obtained by adding a single new equivalence class with countably many elements. For a limit ordinal $\lambda$, let $M_\lambda = \bigcup_{\alpha&lt;\lambda} M_\alpha$. Then $\dim(M_\alpha) = \aleph_0$ for all $\alpha&lt;\omega_1$, but $\dim(M_{\omega_1}) = \aleph_1$. </p> <p>Second union: Let $N_0 = M_0$, and pick an equivalence class $C$. This time, for every $\alpha$, let $N_{\alpha+1}$ be the elementary extension of $N_\alpha$ obtained by adding a single new element to $C$. As before, take unions at limit stages. Then $\dim(N_\alpha) = \aleph_0$ for all $\alpha&lt;\omega_1$, and also $\dim(N_{\omega_1}) = \aleph_0$. </p> <p>Now I need to convince you that this dimension function arises in a standard way from a pregeometry studied in model theory. In stability theory, there's the notion of a <em>regular type</em>: a stationary type which is orthogonal to all of its forking extensions. The key point is that if $p(x)$ is a regular type (let's say over $\emptyset$ for simplicity), then forking dependence gives rise to a pregeometry on the realizations of $p$ via the closure operator $\mathrm{cl}(A) = \{b\models p(x)\mid \text{tp}(b/A) \text{ forks over }\emptyset\}$. 
</p> <p>In my example, the theory is stable, the unique $1$-type is a regular type, and forking is governed by the equivalence relation $E$, so we get a pregeometry on the whole model with closure operator $\mathrm{cl}(A) = \bigcup_{a\in A} [a]_E$, where $[a]_E$ is the $E$-class of $a$. And the induced dimension function is $\dim(M) = |M/E|$. </p> <hr> <p>Well, maybe you don't like this kind of pregeometry, and you only want to consider the kind you meet more often in model theory, namely pregeometries induced by the $\text{acl}$ operator. That's fine, but then the dimensions are only interesting for models that are at most the size of the language (so only countable models if the language is countable). </p> <p>Indeed, suppose $T$ is a theory such that $\text{acl}$ induces a pregeometry on every model of $T$, and let $M\models T$ with $|M| &gt; |L|$. Since $|\text{acl}(A)| = \max(|A|,|L|)$ for all $A\subseteq M$, any basis for $M$ must have cardinality $|M|$, and $\dim(M) = |M|$. </p> <p>In this case, the answer to your question about unions is an easy yes.</p> <hr> <p><strong>Added in edit:</strong> You might also decide that you're only interested in closure operators with the property that when $A\subseteq M\prec N$, $\text{cl}(A)$ in $M$ equals $\text{cl}(A)$ in $N$, i.e. closures don't grow in elementary extensions. This is the case for $\text{cl} = \text{acl}$, and it would salvage the proof in your answer that $\dim$ takes unions of chains to sums, since if $N$ is a proper elementary extension of $M$, the closure of a basis for $M$ is contained in $M$, and we need at least one new element to form a basis for $N$. But we actually don't get anything beyond $\text{acl}$ under this assumption.</p> <p>Indeed, suppose $\text{cl}$ satisfies the condition above, and look at $A\subseteq M$. Embed $M$ in a large monster model $\mathbb{M}$. Then $\text{cl}_M(A) = \text{cl}_{\mathbb{M}}(A)$. 
In fact, for any $A\subseteq N\prec \mathbb{M}$, we have $\text{cl}_N(A) = \text{cl}_{\mathbb{M}}(A)$, so $\text{cl}_{\mathbb{M}}(A)\subseteq N$. But $\bigcap\{N\mid A\subseteq N\prec \mathbb{M}\} = \text{acl}(A)$, so $\text{cl}(A)\subseteq \text{acl}(A)$. </p> <hr> <p>If you're interested in axiomatizing dimension functions, you might want to look at <a href="https://arxiv.org/abs/1003.3919" rel="nofollow noreferrer">this paper</a>, which gives a number of equivalent axiom systems for infinite matroids. In particular, look at the axioms in terms of rank functions. Their rank functions take values in $\mathbb{N}\cup \{\infty\}$, but you might as well be in this situation if you're thinking about $\text{acl}$ pregeometries ($\dim(M) = \infty$ means $\dim(M) = |M|$). </p>
2,038,498
<p>How do you evaluate this limit? $$\lim_{x \to\infty}\frac{x^{x^2}}{2^{2^x}}$$</p> <p>Since both the top and the bottom approach infinity, I assume L'Hospital's rule is the way to solve it, but after the first step I'm stuck: $$\lim_{x \to\infty}\frac{x^2 \log x}{2^x\log 2}$$ So how can I solve this problem? It seems the answer is infinity, but I don't know how to approach that.</p>
Ronnie Brown
28,586
<p>You need to go back to the intuitive idea of the join. </p> <p>Suppose $X,Y$ are two subsets of Euclidean space of sufficiently high dimensions that the line segments joining points $x \in X$ to points $y \in Y$ do not meet. The union of these line segments is then the <strong>join</strong> $X * Y$. Its points are $rx + sy$ where $r,s\geqslant 0$, and $r+s=1$. Here are some pictures <img src="https://groupoids.org.uk/images/fig5_5cs.jpg" alt="joins"> </p> <p>$$\text{Fig.5.5}$$</p> <p>taken from <a href="https://groupoids.org.uk/topgrpds-e.pdf" rel="nofollow noreferrer">Topology and Groupoids: pdf</a>, Section 5.7. </p> <p>Of course we do not want to use generally an embedding in Euclidean space, so we define $X*Y$ to consist of "formal points" $rx + sy$ with $r,s$ as above except that if $r=0$ we ignore $rx$, getting just $y,$ and if $s=0$ we ignore $sy$, getting just $x$. A useful topology on $X*Y$ is then an <strong>initial topology</strong> as detailed in that Section, and which I won't labour on here. This topology makes the join associative. </p> <p>However a more commonly used topology is a <strong>final topology</strong> as an identification space of $X \times Y \times [0,1]$. </p> <p>The initial topology is good for deciding continuity of maps into the join, while the final topology is good for deciding continuity of maps out of the join. </p>
1,866,801
<p>Let $A$ be an infinite set, $B\subseteq A$ and $a\in B$. Let $X\subseteq \mathcal{P}(A)$ be an infinite family of subsets of $A$ such that $a\in \bigcap X$.</p> <p>Suppose $\bigcap X\subseteq B$. Is it possible that, for every non-empty finite subfamily $Y\subset X$, $\bigcap Y \not\subseteq B$ ? </p> <p>Thanks for your help</p>
gt6989b
16,192
<p><strong>HINT</strong></p> <p>Squaring both equations you get $$ \sin^2 x + \sin^2 y + 2\sin x \sin y = 1\\ \cos^2 x + \cos^2 y + 2\cos x \cos y = 0 $$ Now add them together to get $$ 2 + 2 \sin x \sin y + 2 \cos x \cos y = 1 $$ or in other words $$ \frac{-1}{2} = \cos x \cos y + \sin x \sin y = \cos (x-y) $$</p>
1,866,801
<p>Let $A$ be an infinite set, $B\subseteq A$ and $a\in B$. Let $X\subseteq \mathcal{P}(A)$ be an infinite family of subsets of $A$ such that $a\in \bigcap X$.</p> <p>Suppose $\bigcap X\subseteq B$. Is it possible that, for every non-empty finite subfamily $Y\subset X$, $\bigcap Y \not\subseteq B$ ? </p> <p>Thanks for your help</p>
Ken Duna
318,831
<p>There are identities for the sum of sin and cos:</p> <p>$$\sin(x)+\sin(y) = 2\sin\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right).$$ $$\cos(x) + \cos(y) = 2\cos\left(\frac{x+y}{2}\right)\cos\left(\frac{x-y}{2}\right).$$</p> <p>Using the first equation tells us that $\cos\left(\frac{x-y}{2}\right)\neq 0.$ Therefore by the second equation, $\cos\left(\frac{x+y}{2}\right) = 0.$</p> <p>This may or may not be useful in finishing the problem.</p>
1,866,801
<p>Let $A$ be an infinite set, $B\subseteq A$ and $a\in B$. Let $X\subseteq \mathcal{P}(A)$ be an infinite family of subsets of $A$ such that $a\in \bigcap X$.</p> <p>Suppose $\bigcap X\subseteq B$. Is it possible that, for every non-empty finite subfamily $Y\subset X$, $\bigcap Y \not\subseteq B$ ? </p> <p>Thanks for your help</p>
Ovi
64,460
<p>I will only look for solutions on $[0, 2\pi)$</p> <p>We need to make $2$ observations:</p> <p>$1)$ The maximum value of $\sin \theta$ is $1$. Therefore, if any of $\sin x$, $\sin y$ is negative, $\sin x + \sin y &lt; 1$. It follows that both $x$ and $y$ must be $\in [0, \pi]$. </p> <p>$2)$ $\cos x= -\cos y \iff \cos x = \cos (\pi - y)$ . (You can prove this using the <a href="http://mathworld.wolfram.com/TrigonometricAdditionFormulas.html" rel="nofollow">cosine subtraction formula</a>). Since we are working only on the interval $[0, \pi]$, we must have $x=\pi-y$. This makes sense, because the cosines of symmetric angles on opposite sides of the $y$ axis will cancel out to $0$.</p> <hr> <p>Substitute $x=\pi-y$ into the first equation:</p> <p>$$\sin (\pi-y) + \sin y = 1$$</p> <p>Using $\sin (a-b)= \sin a \cos b - \cos a \sin b$ we can deduce that $\sin (\pi-y)=\sin y$</p> <p>$$\sin y + \sin y = 1$$</p> <p>$$\sin y = \dfrac 12$$</p> <p>$$y = \dfrac {\pi}{6}, y= \dfrac {5\pi}{6}$$</p> <p>$$x=\pi-y$$</p> <p>Therefore the solutions are $ \left( \dfrac {\pi}{6}, \dfrac {5\pi}{6} \right)$ and $ \left(\dfrac {5\pi}{6}, \dfrac {\pi}{6} \right)$. The symmetry of the solutions is expected, since both original equations are symmetric in $x$ and $y$.</p>
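<p>A quick numerical sanity check (a Python sketch, my addition, not part of the original answer) confirms that both candidate pairs satisfy the system:</p>

```python
import math

# Candidate solutions found above, on [0, 2*pi)
solutions = [(math.pi / 6, 5 * math.pi / 6), (5 * math.pi / 6, math.pi / 6)]

for x, y in solutions:
    # sin(x) + sin(y) should be 1, cos(x) + cos(y) should be 0
    assert abs(math.sin(x) + math.sin(y) - 1) < 1e-12
    assert abs(math.cos(x) + math.cos(y)) < 1e-12
    # consistent with cos(x - y) = -1/2
    assert abs(math.cos(x - y) + 0.5) < 1e-12
```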
903,117
<p>I am trying to evaluate $$\int_{-\infty}^{\infty} \frac{\sin(x)^2}{x^2} dx $$ Would a contour work? I have tried using a contour but had no success. Thanks.</p> <p>Edit: About 5 minutes after posting this question I suddenly realised how to solve it. Therefore, sorry about that. But thanks for all the answers anyways.</p>
Jack D'Aurizio
44,121
<p>Why not try integration by parts? This just gives:</p> <blockquote> <p>$$\int_{\mathbb{R}}\frac{\sin^2 x}{x^2}\,dx=\int_{\mathbb{R}}\frac{\sin(2x)}{x}\,dx=\pi.$$</p> </blockquote> <p>With the same approach you can also find the values of $$I_m = \int_{\mathbb{R}}\frac{\sin^m(x)}{x^m}\,dx.$$</p>
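<p>The closed form $\pi$ can also be checked numerically; the following Python sketch (my addition, using a plain midpoint rule on a large symmetric interval) is one way. The truncated tails beyond $\pm 500$ contribute roughly $2/1000$, which stays inside the tolerance:</p>

```python
import math

def f(x):
    # integrand sin(x)^2 / x^2, with the removable singularity at 0
    return (math.sin(x) / x) ** 2 if x != 0 else 1.0

# Midpoint rule on [-500, 500]
a, b, n = -500.0, 500.0, 200_000
h = (b - a) / n
total = h * sum(f(a + (k + 0.5) * h) for k in range(n))

assert abs(total - math.pi) < 0.01
```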
988,581
<p>Our teacher challenged us with a question that goes like this: the derivative of a function is defined as </p> <p>$$\begin{cases} 1 &amp; \text{if } x&gt;0 \\ 2 &amp; \text{if } x=0 \\ -1 &amp;\text{if } x&lt;0\end{cases} $$</p> <p>He said that "$2$, if $x=0$" is not possible, and asked us to prove that the derivative at $x=0$ does not exist without using the concept of limits or graphical intuition such as the slope of a tangent, using only a verbal argument.</p>
JessicaK
102,435
<p>Let $a_{n} = \frac{\sqrt{n^{3}+2}}{n^{4}+3n^{2} + 1}$ and let $b_{n} = \frac{1}{n^{\alpha}}$ for some positive $\alpha$. Consider</p> <p>$$\lim_{n\rightarrow\infty} \frac{a_{n}}{b_{n}} = \lim_{n\rightarrow \infty} \frac{\frac{\sqrt{n^{3}+2}}{n^{4}+3n^{2} + 1}}{\frac{1}{n^{\alpha}}} = \lim_{n\rightarrow \infty} \frac{n^{\alpha}\sqrt{n^{3}+2}}{n^{4}+3n^{2} + 1} = \lim_{n\rightarrow \infty} \frac{n^{\alpha}n^{\frac{3}{2}}}{n^{4}} = \lim_{n\rightarrow \infty} n^{\alpha}n^{-\frac{5}{2}}$$</p> <p>If $\alpha = 5/2$, $\lim_{n\rightarrow \infty} a_{n}/b_{n} = 1$ and by the limit comparison test both $\sum a_{n}$ and $\sum b_{n}$ converge together or diverge together...</p>
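<p>A small numerical illustration (my addition, not part of the original answer): with $b_n = n^{-5/2}$ the ratio $a_n/b_n$ indeed approaches $1$ as $n$ grows.</p>

```python
import math

def a(n):  # terms of the given series
    return math.sqrt(n ** 3 + 2) / (n ** 4 + 3 * n ** 2 + 1)

def b(n):  # comparison terms 1 / n**alpha with alpha = 5/2
    return n ** -2.5

# the ratio a(n)/b(n) should approach 1 as n grows
assert abs(a(10_000) / b(10_000) - 1) < 1e-3
# and it is much closer to 1 at n = 10_000 than at n = 10
assert abs(a(10) / b(10) - 1) > abs(a(10_000) / b(10_000) - 1)
```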
381,309
<p>Let <span class="math-container">$T\in B(\mathcal{H} \otimes \mathcal{H})$</span> where <span class="math-container">$\mathcal{H}$</span> is a Hilbert space. We can define operators <span class="math-container">$$T_{[12]}= T \otimes 1;\quad T_{[23]}= 1 \otimes T$$</span> and if <span class="math-container">$\Sigma: \mathcal{H} \otimes \mathcal{H} \to \mathcal{H} \otimes \mathcal{H}$</span> is the &quot;flip&quot; map, then we can define <span class="math-container">$$T_{[13]}= \Sigma_{[23]}T_{[12]}\Sigma_{[23]}= \Sigma_{[12]}T_{[23]}\Sigma_{[12]}$$</span></p> <p><strong>Question</strong>: Given <span class="math-container">$S,T \in B(\mathcal{H} \otimes \mathcal{H})$</span>, is it true that <span class="math-container">$$(ST)_{[13]}= S_{[13]}T_{[13]}?$$</span></p> <p>I attempted this as follows:</p> <p>We know that the algebraic tensor product <span class="math-container">$B(\mathcal{H}) \odot B(\mathcal{H})$</span> is weak<span class="math-container">$^*$</span>-dense (= <span class="math-container">$\sigma$</span>-weakly dense) in <span class="math-container">$B(\mathcal{H} \otimes \mathcal{H})$</span>. It is easy to see that the identity holds for <span class="math-container">$S,T \in B(\mathcal{H}) \odot B(\mathcal{H})$</span>.</p> <p>Can I conclude from this that the equality holds for all <span class="math-container">$S,T \in B(\mathcal{H}) \overline{\otimes} B(\mathcal{H})= B(\mathcal{H}\otimes \mathcal{H})$</span> (here, the first tensorproduct is the von-Neumann algebraic tensor product).</p> <p>It is natural to try to use results involving weak<span class="math-container">$^*$</span>-continuity and Kaplansky-density-like results, but I'm having trouble finishing the proof. Any ideas?</p>
Jesse Peterson
6,460
<p>The proof for the reals can be generalized to any non-discrete locally compact group <span class="math-container">$G$</span>. We let <span class="math-container">$K \subset G$</span> be any compact set with positive Haar measure <span class="math-container">$\lambda(K) &gt; 0$</span> (e.g., <span class="math-container">$K = [0, 1]$</span> when <span class="math-container">$G = \mathbb R$</span>), and we let <span class="math-container">$\Lambda &lt; G$</span> be any subgroup such that <span class="math-container">$\Lambda \cap KK^{-1}$</span> is countably infinite (e.g., <span class="math-container">$\Lambda = \mathbb Q$</span> when <span class="math-container">$G = \mathbb R$</span>). We define an equivalence relation on <span class="math-container">$G$</span> by the <span class="math-container">$\Lambda$</span>-orbits coming from left multiplication and we let <span class="math-container">$V \subset K$</span> be a set containing exactly one representative of each equivalence class that intersects non-trivially with <span class="math-container">$\Lambda K$</span>.</p> <p>We then have <span class="math-container">$K \subset (\Lambda \cap K K^{-1}) V \subset K K^{-1} K$</span>. The first inclusion here follows from the fact that if <span class="math-container">$k \in K$</span>, then we may find <span class="math-container">$t \in \Lambda$</span> such that <span class="math-container">$tk \in V \subset K$</span>. 
It then follows that <span class="math-container">$t = (tk) k^{-1} \in K K^{-1}$</span> and hence <span class="math-container">$k \in (\Lambda \cap K K^{-1})V$</span>.</p> <p>If we were able to extend the Haar measure to a countably additive left-invariant measure defined on all subsets of <span class="math-container">$G$</span>, then we would have <span class="math-container">$\lambda(( \Lambda \cap K K^{-1} ) V) = \sum_{t \in \Lambda \cap K K^{-1}} \lambda(V) \in \{ 0, \infty \}$</span>, but this would then contradict the inequalities <span class="math-container">$$ 0 &lt; \lambda(K) \leq \lambda( ( \Lambda \cap K K^{-1} ) V ) \leq \lambda(K K^{-1} K) &lt; \infty. $$</span></p>
2,010,329
<p>The function under consideration is:</p> <p>$$y = \int_{x^2}^{\sin x}\cos(t^2)\mathrm d t$$</p> <p>The question asks for the derivative of this function. I let $u=\sin(x)$, so $\tfrac{\mathrm d u}{\mathrm d x}=\cos(x)$, and solved accordingly, only to get the answer $$ \cos(x)\cos(\sin^2(x))-\cos(x)\cos(x^4) $$ but the answer is given as: $$ \cos(x)\cos(\sin^2(x))-2x\cdot\cos(x^4) $$</p> <p>May I know where I went wrong? Is my substitution wrong in the first place?</p>
Claude Leibovici
82,404
<p><strong>Hint</strong></p> <p>The fundamental theorem of calculus, combined with the chain rule, gives $$\frac d {dx}\int_{a(x)}^{b(x)} f(t)\,dt=f(b(x)) b'(x)-f(a(x)) a'(x)$$ </p>
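<p>As a sanity check (my addition, not part of the hint), one can compare a central finite difference of $F(x)=\int_{x^2}^{\sin x}\cos(t^2)\,dt$ against this formula; the quadrature resolution, the step size and the point $x=0.7$ below are arbitrary choices:</p>

```python
import math

def F(x, n=4000):
    # F(x) = integral of cos(t^2) dt from x**2 to sin(x), via the midpoint rule
    a, b = x ** 2, math.sin(x)
    h = (b - a) / n
    return h * sum(math.cos((a + (k + 0.5) * h) ** 2) for k in range(n))

x, h = 0.7, 1e-4
numeric = (F(x + h) - F(x - h)) / (2 * h)           # central difference
formula = math.cos(x) * math.cos(math.sin(x) ** 2) - 2 * x * math.cos(x ** 4)
assert abs(numeric - formula) < 1e-5
```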
4,180,344
<p>I started learning from Algebra by Serge Lang. On page 5, he presented an equation<br><i></p> <blockquote> <p>Let <span class="math-container">$G$</span> be a commutative monoid, and <span class="math-container">$x_1,\ldots,x_n$</span> elements of <span class="math-container">$G$</span>. Let <span class="math-container">$\psi$</span> be a bijection of the set of integers <span class="math-container">$(1,\ldots,n)$</span> onto itself. Then</i> <span class="math-container">$$\prod_{\nu = 1}^n x_{\psi(\nu)} = \prod_{\nu=1}^n x_\nu$$</span></p> </blockquote> <p>In this equation, taking the mapping <span class="math-container">$\psi(\nu) = \nu $</span> results in the same value, so I don't understand why <span class="math-container">$x_{\psi(\nu)}$</span> bothers to index with a mapping rather than with a number.</p>
Suzu Hirose
190,784
<p>Here <span class="math-container">$\psi(v)$</span> is a different ordering of the actions of multiplication. The <span class="math-container">$\Pi x_{\psi(v)}$</span> refers to a different order of the terms in the product than <span class="math-container">$\Pi{x_v}$</span>. In words, what that means is &quot;Regardless of the order in which the operation is carried out, the result is the same&quot;. Contrast with a non-commutative form of multiplication such as matrix multiplication.</p>
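<p>A tiny illustration in Python (my addition): in a commutative monoid such as $(\mathbb{Z},\cdot)$, applying any bijection $\psi$ to the index set never changes the product.</p>

```python
import math
import random

x = [2, 3, 5, 7, 11, 13]           # elements of the commutative monoid (Z, *)
nu = list(range(len(x)))
random.shuffle(nu)                  # a random bijection psi of the index set

lhs = math.prod(x[i] for i in nu)   # product over x_{psi(nu)}
rhs = math.prod(x)                  # product over x_nu
assert lhs == rhs
```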
2,298,855
<p>Let $u,v\in H^1(\Omega )$ where $\Omega =B_2\backslash B_1$. How can I have continuity and coercivity of $$a(u,v)=\int_{\Omega }\nabla u\cdot \nabla v-\int_{B_2\backslash B_1}3^{-d}uv\ ?$$</p> <p>The best thing I get is $$|a(u,v)|\leq \|\nabla u\|_{L^2}\|\nabla v\|_{L^2}+3^{-d}\|u\|_{L^2}\|v\|_{L^2},$$ but I can't get $|a(u,v)|\leq C\|u\|_{H^1}\|v\|_{H^1}$. And for coercivity, I don't know.</p>
Surb
154,545
<p>By definition, $\|u\|_{H^1}=\|u\|_{L^2}+\|\nabla u\|_{L^2}$, therefore $\|u\|_{L^2},\|\nabla u\|_{L^2}\leq \|u\|_{H^1}$, and the same for $v$. You can then conclude for continuity. For coercivity, using the equivalent norm $\|u\|_{H^1}=\sqrt{\|u\|_{L^2}^2+\|\nabla u\|_{L^2}^2}$, $$a(u,u)=\int_\Omega |\nabla u|^2-3^{-d}\int_\Omega |u|^2=\|u\|_{H^1}^2-(1+3^{-d})\int_\Omega|u|^2,$$ so coercivity follows as soon as the $L^2$ term is controlled with a small enough constant, e.g. via a Poincaré-type inequality on $\Omega$; without such control the form need not be coercive (test it on constant functions).</p>
2,983,519
<p>I have to calculate the limit of this formula as <span class="math-container">$n\to \infty$</span>.</p> <p><span class="math-container">$$a_n = \frac{1}{\sqrt{n}}\bigl(\frac{1}{\sqrt{n+1}}+\cdots+\frac{1}{\sqrt{2n}}\bigl)$$</span></p> <p>I tried the Squeeze Theorem, but I get something like this:</p> <p><span class="math-container">$$\frac{1}{\sqrt{2}}\leftarrow\frac{n}{\sqrt{2n^2}}\le\frac{1}{\sqrt{n}}\bigl(\frac{1}{\sqrt{n+1}}+\cdots+\frac{1}{\sqrt{2n}}\bigl) \le \frac{n}{\sqrt{n^2+n}}\to1$$</span></p> <p>As you can see, the limits of two other sequences aren't the same. Can you give me some hints? Thank you in advance.</p>
Will Jagy
10,400
<p>for a decreasing function such as <span class="math-container">$1/\sqrt x$</span> with <span class="math-container">$x$</span> positive, a simple picture shows <span class="math-container">$$ \int_a^{b+1} \; f(x) \; dx &lt; \sum_{k=a}^b f(k) &lt; \int_{a-1}^{b} \; f(x) \; dx $$</span> <span class="math-container">$$ \int_{n+1}^{2n+1} \; \frac{1}{\sqrt x} \; dx &lt; \sum_{k=n+1}^{2n} \frac{1}{\sqrt k} &lt; \int_{n}^{2n} \; \frac{1}{\sqrt x} \; dx $$</span> getting there <span class="math-container">$$ 2 \sqrt {2n+1} - 2 \sqrt {n+1} &lt; \sum_{k=n+1}^{2n} \frac{1}{\sqrt k} &lt; 2 \sqrt {2n} - 2 \sqrt {n} $$</span> <span class="math-container">$$ 2 \sqrt {2+\frac{1}{n}} - 2 \sqrt {1+\frac{1}{n}} &lt; \frac{1}{\sqrt n} \sum_{k=n+1}^{2n} \frac{1}{\sqrt k} &lt; 2 \sqrt {2} - 2 \sqrt {1} $$</span></p>
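<p>These bounds are easy to test numerically; the following Python sketch (my addition, not part of the original answer) checks them for $n=10^5$ and shows the common limit $2\sqrt2-2$ emerging:</p>

```python
import math

n = 100_000
# the normalized sum (1/sqrt(n)) * sum_{k=n+1}^{2n} 1/sqrt(k)
s = sum(1 / math.sqrt(k) for k in range(n + 1, 2 * n + 1)) / math.sqrt(n)

lower = 2 * math.sqrt(2 + 1 / n) - 2 * math.sqrt(1 + 1 / n)
upper = 2 * math.sqrt(2) - 2
assert lower < s < upper
# both bounds squeeze to 2*sqrt(2) - 2
assert abs(s - (2 * math.sqrt(2) - 2)) < 1e-4
```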
1,372,779
<p>Given that $x$ is a positive integer, find $x$ in $(E)$.</p> <p>$$\tag{E} j-n=x-n\cdot\left\lceil\frac{x}{n}\right\rceil$$ All $n, j, x$ are positive integers.</p>
Khosrotash
104,171
<p><strong>Step 1:</strong> if $n\mid x$, the equation does not determine $x$: writing $x=nk$ gives $$\left\lceil \frac{x}{n} \right\rceil=k,\qquad j-n=x-nk=0 \rightarrow j=n,$$ so every multiple $x=nk$ works, and $x$ is only known to be an integer divisible by $n$.<br> <strong>Step 2:</strong> now take $$x=nk+r,\quad 0&lt;r&lt;n \\\frac{x}{n}=\frac{nk+r}{n}=k+\frac{r}{n}\\\left \lceil \frac{x}{n} \right \rceil=k+1\\j-n=x-n(k+1)\\j-n=(nk+r)-n(k+1)\\j-n=r-n\\j=r\\x=nk+j , j&lt;n$$</p>
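<p>A quick Python check of both cases (my addition, with $n=7$ chosen arbitrarily):</p>

```python
import math

n = 7
for x in range(1, 200):
    j = x - n * math.ceil(x / n) + n       # j defined by j - n = x - n*ceil(x/n)
    r = x % n
    if r == 0:
        assert j == n                      # case n | x: j = n, x not determined
    else:
        assert j == r                      # case x = nk + r: j = r < n
        assert x == n * (x // n) + j
```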
827,072
<p>How can I find the equation of a circle which passes through the points $(5,10), (-5,0), (9,-6)$, using the formula $(x-q)^2 + (y-p)^2 = r^2$?</p> <p>I know I need to use that formula but have no idea how to start; I have tried, but I don't think my answer is right.</p>
Erk Ekin
238,077
<p>This method that I wrote creates a circle object which has a radius and a centre point, or returns <code>nil</code> for collinear input. (Written in <em>Swift</em> but can be ported to many other languages.)</p> <pre><code>func circle(p1: (x: Float, y: Float),
            p2: (x: Float, y: Float),
            p3: (x: Float, y: Float)) -&gt; (r: Float, c: (x: Float, y: Float))? {
    let (x1, y1) = p1
    let (x2, y2) = p2
    let (x3, y3) = p3

    // Collinear points define no circle.
    let mr = (y2 - y1) / (x2 - x1)
    let mt = (y3 - y2) / (x3 - x2)
    if mr == mt { return nil }

    let x12 = x1 - x2, x13 = x1 - x3
    let y12 = y1 - y2, y13 = y1 - y3
    let y31 = y3 - y1, y21 = y2 - y1
    let x31 = x3 - x1, x21 = x2 - x1

    let sx13 = pow(x1, 2) - pow(x3, 2)   // x1^2 - x3^2
    let sy13 = pow(y1, 2) - pow(y3, 2)   // y1^2 - y3^2
    let sx21 = pow(x2, 2) - pow(x1, 2)
    let sy21 = pow(y2, 2) - pow(y1, 2)

    let f = (sx13 * x12 + sy13 * x12 + sx21 * x13 + sy21 * x13)
          / (2 * (y31 * x12 - y21 * x13))
    let g = (sx13 * y12 + sy13 * y12 + sx21 * y13 + sy21 * y13)
          / (2 * (x31 * y12 - x21 * y13))

    let c = -pow(x1, 2) - pow(y1, 2) - 2 * g * x1 - 2 * f * y1

    let h = -g                           // centre x
    let k = -f                           // centre y
    let r = sqrt(h * h + k * k - c)
    return (r: r, c: (x: h, y: k))
}
</code></pre>
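<p>As a cross-check (my addition, not part of the original answer), the same circle can be computed in Python by solving $x^2+y^2+Dx+Ey+F=0$ through the three points from the question; the centre is then $(-D/2,-E/2)$ and $r^2=D^2/4+E^2/4-F$:</p>

```python
import math

pts = [(5, 10), (-5, 0), (9, -6)]

def circle_through(p1, p2, p3):
    # Linear system in (D, E, F): D*x + E*y + F = -(x^2 + y^2) for each point
    rows = [(x, y, 1, -(x * x + y * y)) for x, y in (p1, p2, p3)]

    def det3(m):
        (a, b, c), (d, e, f), (g, h, i) = m
        return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

    dA = det3([r[:3] for r in rows])
    sol = []
    for col in range(3):                 # Cramer's rule, column by column
        M = [list(r[:3]) for r in rows]
        for r, row in zip(M, rows):
            r[col] = row[3]
        sol.append(det3(M) / dA)
    D, E, F = sol
    cx, cy = -D / 2, -E / 2
    return (cx, cy), math.sqrt(cx * cx + cy * cy - F)

(cx, cy), r = circle_through(*pts)
for x, y in pts:                         # all three points are equidistant
    assert abs(math.hypot(x - cx, y - cy) - r) < 1e-9
```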
2,365,839
<p>My textbook claims that, for small step size $h$, Euler's method has a global error which is at most proportional to $h$ such that error $= C_1h$. It is then claimed that $C_1$ depends on the initial value problem, but no explanation is given as to how one finds $C_1$.</p> <p>So if I know $h$, then how can I deduce $C_1$ from the IVP?</p>
KittyL
206,286
<p>Given an IVP: $$\frac{dy}{dt}=f(t,y), y(a)=y_0, t\in [a,b].$$ Here is a Theorem from Numerical Analysis by Sauer:</p> <blockquote> <p>Assume that $f(t,y)$ has a Lipschitz constant $L$ for the variable $y$ and that the solution $y_i$ of the initial value problem at $t_i$ is approximated by $w_i$, using Euler's method. Let $M$ be an upper bound for $|y''(t)|$ on $[a,b]$. Then $$|w_i-y_i|\le \frac{Mh}{2L}(e^{L(t_i-a)}-1).$$</p> </blockquote> <p>The proof is based on the following lemma:</p> <blockquote> <p>Assume that $f(t,y)$ is Lipschitz in the variable $y$ on the set $S=[a,b]\times [\alpha,\beta]$. If $Y(t)$ and $Z(t)$ are solutions in $S$ of the differential equation $y'=f(t,y)$ with initial conditions $Y(a)$ and $Z(a)$ respectively, then $$|Y(t)-Z(t)|\le e^{L(t-a)}|Y(a)-Z(a)|.$$</p> </blockquote> <p>Sketch of proof of the first theorem:</p> <p>Let $g_i$ be the global error, $e_i$ be the local truncation error, $z_i$ satisfy the local IVP: $$z_i'=f(t,z_i),z_i(t_i)=w_i, t\in [t_i,t_{i+1}].$$</p> <p>Then $$g_i=|w_i-y_i|=|w_i-z_i(t)+z_i(t)-y_i|\le |w_i-z_i(t)|+|z_i(t)-y_i|\\ \le e_i+e^{Lh}g_{i-1}\\ \le e_i+e^{Lh}(e_{i-1}+e^{Lh}g_{i-2})\le \cdots\\ \le e_i+e^{Lh}e_{i-1}+e^{2Lh}e_{i-2}+\cdots +e^{(i-1)Lh}e_1.$$ Since each $e_i\le \frac{h^2M}{2}$, we have $$g_i\le \frac{h^2M}{2}(1+e^{Lh}+\cdots+e^{(i-1)Lh})=\frac{h^2M(e^{iLh}-1)}{2(e^{Lh}-1)}\le \frac{Mh}{2L}(e^{L(t_i-a)}-1).$$</p> <p>Hope this helps.</p>
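<p>A concrete illustration (my addition): for $y'=y$, $y(0)=1$ on $[0,1]$ one can take $L=1$ and $M=\max|y''|=e$, and the theorem's global error bound can be verified numerically:</p>

```python
import math

# Test problem y' = y, y(0) = 1 on [0, 1]: L = 1 and M = max|y''| = e
f = lambda t, y: y
a, b, y0 = 0.0, 1.0, 1.0
L, M = 1.0, math.e

h = 0.01
n = round((b - a) / h)
t, w = a, y0
for _ in range(n):
    w += h * f(t, w)      # Euler step
    t += h

error = abs(w - math.exp(b))
bound = M * h / (2 * L) * (math.exp(L * (b - a)) - 1)
assert error <= bound     # the theorem's global error bound holds
```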
2,365,839
<p>My textbook claims that, for small step size $h$, Euler's method has a global error which is at most proportional to $h$ such that error $= C_1h$. It is then claimed that $C_1$ depends on the initial value problem, but no explanation is given as to how one finds $C_1$.</p> <p>So if I know $h$, then how can I deduce $C_1$ from the IVP?</p>
Lutz Lehmann
115,115
<p>Consider the IVP <span class="math-container">$y'=f(t,y)$</span>, <span class="math-container">$y(t_0)=y_0$</span>. Let <span class="math-container">$t_k=t_0+kh$</span>, <span class="math-container">$y_k$</span> computed by the Euler method. We know that the error order of the Euler method is one. Thus the iterates <span class="math-container">$y_k$</span> have an error relative to the exact solution <span class="math-container">$y(t)$</span> of the form <span class="math-container">$$y_k=y(t_k)+c_kh$$</span> with some coefficients <span class="math-container">$c_k$</span> that will be closer determined in the course of this answer.</p> <hr> <p>Now insert this representation of <span class="math-container">$y_k$</span> into the Euler step and apply Taylor expansion where appropriate <span class="math-container">\begin{align} [y(t_{k+1})+c_{k+1}h]&amp;=[y(t_k)+c_kh]+hf(t_k,[y(t_k)+c_kh])\\ &amp;=y(t_k)+c_kh+h\Bigl(f(t_k,y(t_k))+h\,∂_yf(t_k,y(t_k))\,c_k+O(h^2)\Bigr)\\ y(t_k)+hy'(t_k)+\tfrac12h^2y''(t_k)+O(h^3)&amp;=y(t_k)+hy'(t_k)+h\Bigl[c_k+h\,∂_yf(t_k,y(t_k))\,c_k\Bigr] \end{align}</span> where <span class="math-container">$∂_y=\frac{\partial}{\partial y}$</span> and later <span class="math-container">$∂_t=\frac{\partial}{\partial t}$</span>.</p> <hr> <p>In the Taylor series for <span class="math-container">$y(t_{k+1})=y(t_k+h)$</span> on the left side the first two terms cancel against the same terms on the right side. The second derivative can be written as <span class="math-container">$$ y''(t)=\frac{d}{dt}f(t,y(t)) =∂_tf(t,y(t))+∂_yf(t,y(t))\,f(t,y(t)) \overset{\rm Def}=Df(t,y(t)). $$</span> Divide the remaining equation by <span class="math-container">$h$</span> and re-arrange to get a difference equation for <span class="math-container">$c_k$</span> <span class="math-container">$$ c_{k+1}=c_k+h\Bigl[∂_yf(t_k,y(t_k))c_k-\tfrac12Df(t_k,y(t_k))\Bigr]+O(h^2). 
$$</span></p> <hr> <p>This looks like the Euler method for the linear ODE for a continuous differentiable function <span class="math-container">$c$</span>, <span class="math-container">$$ c'(t)=∂_yf(t,y(t))c(t)-\tfrac12Df(t,y(t)),~~\text{ with }~~c(t_0)=0. $$</span> Again by the first order of the Euler method, <span class="math-container">$c_k$</span> and <span class="math-container">$c(t_k)$</span> will have a difference <span class="math-container">$O(h)$</span>, so that the error we aim to estimate is <span class="math-container">$$y_k-y(t_k)=c(t_k)h+O(h^2).$$</span></p> <hr> <p>Now if <span class="math-container">$L$</span> is a bound for <span class="math-container">$∂_yf$</span>, the <span class="math-container">$y$</span>-Lipschitz constant, and <span class="math-container">$M$</span> is a bound for <span class="math-container">$Df=∂_tf+∂_yf\,f$</span>, or the second derivative, then by Grönwall's lemma <span class="math-container">$$ \|c'\|\le L\|c\|+\frac12M\implies \|c(t)\|\le \frac{M(e^{L|t-t_0|}-1)}{2L} $$</span> which reproduces the usual specific estimate of the coefficient of the error term. </p>
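<p>A numerical illustration of this error coefficient (my addition): for $y'=y$, $y(0)=1$ one has $∂_yf=1$ and $Df=y''=e^t$, so the coefficient ODE $c'=c-\tfrac12 e^t$, $c(0)=0$ has solution $c(t)=-\tfrac t2 e^t$, and $(y_N-y(1))/h$ indeed approaches $c(1)=-e/2$:</p>

```python
import math

def euler_value(h):
    # Euler iterates for y' = y, y(0) = 1, returning the value at t = 1
    n = round(1.0 / h)
    w = 1.0
    for _ in range(n):
        w += h * w
    return w

for h in (0.01, 0.001):
    coeff = (euler_value(h) - math.e) / h
    # coeff -> c(1) = -e/2 with an O(h) remainder
    assert abs(coeff - (-math.e / 2)) < 5 * h
```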
3,407,474
<p>I can understand that the set <span class="math-container">$M_2(2\mathbb{Z})$</span> of <span class="math-container">$2 \times 2$</span> matrices with even integer entries is an infinite non-commutative ring. But why doesn't it have a unity? I think the <span class="math-container">$2\times2$</span> identity matrix <span class="math-container">$I$</span> is a unity for this ring. Where is my misunderstanding about <span class="math-container">$I$</span> being a unity?</p>
rschwieb
29,335
<p>As everyone pointed out right away, your candidate is not an element of your ring.</p> <p>Here is a slightly different way to prove that <span class="math-container">$M_2(2\mathbb Z)$</span> can't have an identity. </p> <p>Notice that because your entries are always sums of products of things in <span class="math-container">$2\mathbb Z$</span>, you're always accumulating powers of two.</p> <p>In particular, </p> <ol> <li><p>if <span class="math-container">$I$</span> is a nonzero element such that <span class="math-container">$I^2=I$</span> (this must hold for an identity!), then <span class="math-container">$I\in M_2(4\mathbb Z)$</span>.</p></li> <li><p>Then <span class="math-container">$IA\in M_2(8\mathbb Z)$</span> for any matrix <span class="math-container">$A\in M_2(2\mathbb Z)$</span>. </p></li> <li><p>But you know that you can pick <span class="math-container">$A\in M_2(2\mathbb Z)\setminus M_2(8\mathbb Z)$</span>. </p></li> <li>This contradicts the observation that <span class="math-container">$A=AI\in M_2(8\mathbb Z)$</span>.</li> </ol> <hr> <p>Here is another argument reminiscent of Chris Leary's approach. Suppose <span class="math-container">$E$</span> is the identity of <span class="math-container">$M_2(2\mathbb Z)$</span>. </p> <p>Then the adjugate matrix <span class="math-container">$\mathrm{adj}(E)\in M_2(2\mathbb Z)$</span> also, and satisfies <span class="math-container">$E\mathrm{adj}(E)=det(E)I_2\in M_2(2\mathbb Z)$</span>. In particular, <span class="math-container">$\det(E)$</span> is an even number. 
Also observe that both <span class="math-container">$E$</span> and <span class="math-container">$\mathrm{adj}(E)$</span> are both nonzero.</p> <p>Since <span class="math-container">$E^2=E$</span>, we have <span class="math-container">$\det(E)=\det(E)^2$</span>, and the only two integers that satisfy <span class="math-container">$x^2=x$</span> are <span class="math-container">$0$</span> and <span class="math-container">$1$</span>, and <span class="math-container">$1$</span> is obviously not even.</p> <p>So <span class="math-container">$\det(E)=0$</span>: but then <span class="math-container">$\mathrm{adj}(E)=E\mathrm{adj}(E)=0$</span>, contradicting the earlier observation that <span class="math-container">$\mathrm{adj}(E)\neq 0$</span>.</p>
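<p>The power-of-two accumulation in the first argument is easy to spot-check (Python sketch, my addition): the product of any two matrices in $M_2(2\mathbb Z)$ lands in $M_2(4\mathbb Z)$, which is the step forcing an idempotent into ever deeper powers of two.</p>

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def divisible(A, d):
    return all(x % d == 0 for row in A for x in row)

random.seed(0)
for _ in range(100):
    A = [[2 * random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[2 * random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    # each entry of A @ B is a sum of products of even numbers
    assert divisible(matmul(A, B), 4)
```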
1,975,199
<blockquote> <p>In this problem, construct a bijection to show the identity. Use the definitions of these quantities, not formulas for them.</p> <p>For <span class="math-container">$0≤j≤n$</span> <span class="math-container">$$\binom{n}{j}=\binom{n}{n-j}$$</span></p> </blockquote> <p>I know how to prove this by induction, but I have no idea what &quot;construct a bijection&quot; stands for. Could anyone explain this or (better) show me an example? Thanks.</p>
Parcly Taxel
357,390
<p>Consider the number of ways to arrange $n$ bits, $j$ of which are ones; by definition there are $\binom nj$ ways of doing so. For every such way we can flip the bits to obtain an arrangement of $n$ bits where $n-j$ of them are ones; there are $\binom n{n-j}$ arrangements here. For example:</p> <pre><code>00100110100 &lt;- n=11, j=4 11011001011 &lt;- n=11, j=7 </code></pre> <p>This flipping of bits constitutes the bijection. Since every way of arranging the bits is accounted for on both sides of the bijection, this shows that $\binom nj=\binom n{n-j}$.</p>
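<p>The bit-flip bijection can even be checked mechanically (Python sketch, my addition), identifying each arrangement with the set of positions of its ones:</p>

```python
from itertools import combinations
from math import comb

n, j = 11, 4
universe = set(range(n))
j_subsets = list(combinations(range(n), j))          # positions of the ones
# flipping the bits = taking the complement of the set of one-positions
flipped = [tuple(sorted(universe - set(s))) for s in j_subsets]

assert len(set(flipped)) == len(j_subsets)           # the flip is injective
assert set(flipped) == set(combinations(range(n), n - j))  # and onto
assert comb(n, j) == comb(n, n - j) == len(j_subsets)
```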
675,857
<p>I am trying to figure out an easily understandable approach: given a small value of $\phi(n)$, list all possible $n$, with proof. I read this paper but it is really beyond my level to fathom.</p> <p>Attempt for $\phi(n)=110$:</p> <p>$$n=(2^x)\cdot(3^b)\cdot(11^c)\cdot(23^d)\quad\text{ since }\quad p-1 \mid \phi(n)=110 \text{ for every prime } p\mid n,$$ and $x \in\{0,1\}$, $b\in\{0,1\}$, $c\in\{0,1,2\}$, $d\in\{0,1\}$.</p> <p>So there are in total $2\cdot2\cdot3\cdot2 =24$ options on which to test whether $\phi(n)=110$.</p> <p>I am not sure if this is enough to show that there are no other numbers.</p> <p>look at this paper <a href="http://arxiv.org/pdf/math/0404116v3.pdf" rel="nofollow">http://arxiv.org/pdf/math/0404116v3.pdf</a></p>
vadim123
73,324
<p>Suppose that $\phi(n)=110=2\cdot 5\cdot 11$. We factor $n=p_1^{a_1}p_2^{a_2}\cdots p_k^{a_k}$. Since $\phi$ is multiplicative, $\phi(n)=\phi(p_1^{a_1})\phi(p_2^{a_2})\cdots \phi(p_k^{a_k})$. We also can evaluate the totient at any prime power, so $$\phi(n)=\frac{n}{p_1p_2\cdots p_k}(p_1-1)(p_2-1)\cdots(p_k-1)$$</p> <p>Note that each $p_i-1$ is even for any odd prime $p_i$. Since $110$ has only one power of 2, we conclude that $n$ can have at most one odd prime divisor, i.e. $n=2^ap^b$, for some $a,b$ nonnegative integers.</p>
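<p>A brute-force check (my addition, not part of the answer): since $\phi(n)\ge\sqrt{n/2}$, any $n$ with $\phi(n)=110$ satisfies $n\le 2\cdot110^2=24200$, so a finite search settles the question; it finds only $n=121$ and $n=242$, both of the predicted form $2^a p^b$.</p>

```python
def phi(n):
    # Euler's totient by trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# phi(n) >= sqrt(n/2), so phi(n) = 110 forces n <= 2 * 110**2 = 24200
hits = [n for n in range(1, 24201) if phi(n) == 110]
assert hits == [121, 242]
assert all(h in (11 ** 2, 2 * 11 ** 2) for h in hits)   # form 2^a * p^b
```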
595,865
<p>A real number $r$ belonging to $[0,1]$ is said to be computable if there is a simple TM such that for each binary encoding of $n$ ($n$ is a natural number), returns the $n$th bit in the binary expansion of $r$. Give a proof that there are non-computable real numbers.</p>
rewritten
43,219
<p>It's a diagonal argument. You have to assume (or read it somewhere) that there are strictly more real numbers than natural numbers. It is a theorem, and it can be proved in several ways, but it's not the point here. The point is that you have to know that any function from $\mathbb{N}$ to $\mathbb{R}$ (or any interval thereof) leaves most points out of the image.</p> <p>The easiest way to do this is to use the famous Cantor diagonal construction, where you supposedly have an enumeration $e:\mathbb{N}\to\mathbb{R}$ of all the real numbers, and you prove that there is at least one real number not in it, by "writing them down" in an infinite matrix, and taking a number that is different from the diagonal of the matrix at each decimal place.</p> <p>On the other side, you have Turing Machines. They are in a natural correspondence with $\mathbb{N}$, so different TMs map to different numbers (but not all natural numbers correspond to TMs). Some of these Turing machines will actually be machines that compute a computable real number (not all of them do, but it doesn't matter). So you have a set $Comp\subseteq\mathbb{N}$ of numbers that represent TMs which compute computable real numbers. If there were no noncomputable real numbers, then this correspondence would be an enumeration of the reals, and we saw before that there is no such enumeration. $\blacksquare$</p>
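<p>A finite illustration of the diagonal step (my addition): for any list of $N$ bit sequences of length $N$, flipping the diagonal produces a sequence that differs from the $k$-th listed one in position $k$, hence is missing from the list.</p>

```python
import random

random.seed(1)
N = 50
# N "expansions", each a bit sequence of length N
listed = [[random.randint(0, 1) for _ in range(N)] for _ in range(N)]
# flip the k-th bit of the k-th expansion
diagonal = [1 - listed[k][k] for k in range(N)]

assert all(diagonal != row for row in listed)
```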
819,805
<p>The polynomial remainder theorem states that when a polynomial $P(x)$ of degree $&gt; 0$ is divided by $x-r$ ($r$ being some constant) the remainder is equal to $P(r)$, that is:<br> $$\begin{array}l If &amp; \quad P(x) = (x-r)Q(x)+R \\ then &amp; \quad P(r) = R \end{array}$$ The algebra and the graphic representation make sense; the question is <em>why</em>. <em>Why</em> is the functional relationship between $r$ and $R$ the function $P(x)$? What are the <em>mechanics</em>, so to speak, that produce this result? Or is it just a fortunate algebraic "accident"?</p>
Bill Dubuque
242
<p>It is clearer in congruence language, $\ {\rm mod}\ \,x\!-\!r\!:\,\ x\equiv r\,\Rightarrow\ P(x)\equiv P(r).\ $ This is true because polynomials are composed of sums and products, and congruences are equivalence relations that are compatible with sums and products, i.e. $\, A\equiv a,\,B\equiv b\,\Rightarrow\, A+B\equiv a+b,\ AB \equiv ab.\,$ See the proofs of the Congruence Sum, Product and Polynomials Rules below (written for the ring of integers $\,\Bbb Z,\:$ but also valid in any polynomial ring (or any commutative ring).</p> <hr> <p><strong>Congruence Sum Rule</strong> $\rm\qquad\quad A\equiv a,\quad B\equiv b\ \Rightarrow\ \color{#c0f}{A+B\,\equiv\, a+b}\ \ \ (mod\ m)$</p> <p><strong>Proof</strong> $\rm\ \ m\: |\: A\!-\!a,\ B\!-\!b\ \Rightarrow\ m\ |\ (A\!-\!a) + (B\!-\!b)\ =\ \color{#c0f}{A+B - (a+b)} $</p> <p><strong>Congruence Product Rule</strong> $\rm\quad\ A\equiv a,\ \ and \ \ B\equiv b\ \Rightarrow\ \color{blue}{AB\equiv ab}\ \ \ (mod\ m)$</p> <p><strong>Proof</strong> $\rm\ \ m\: |\: A\!-\!a,\ B\!-\!b\ \Rightarrow\ m\ |\ (A\!-\!a)\ B + a\ (B\!-\!b)\ =\ \color{blue}{AB - ab} $</p> <p><strong>Congruence Power Rule</strong> $\rm\qquad \color{}{A\equiv a}\ \Rightarrow\ \color{#c00}{A^n\equiv a^n}\ \ (mod\ m)$</p> <p><strong>Proof</strong> $\ $ It is true for $\rm\,n=1\,$ and $\rm\,A\equiv a,\ A^n\equiv a^n \Rightarrow\, \color{#c00}{A^{n+1}\equiv a^{n+1}},\,$ by the Product Rule, so the result follows by induction on $\,n.$</p> <p><strong>Polynomial Congruence Rule</strong> $\ $ If $\,f(x)\,$ is <em>polynomial</em> with <em>integer</em> coefficients then $\ A\equiv a\ \Rightarrow\ f(A)\equiv f(a)\,\pmod m.$</p> <p><strong>Proof</strong> $\ $ By induction on $\, n = $ degree $f.\,$ Clear if $\, n = 0.\,$ Else $\,f(x) = f(0) + x\,g(x)\,$ for $\,g(x)\,$ a polynomial with integer coefficients of degree $&lt; n.\,$ By induction $\,g(A)\equiv g(a)\,$ so $\, \color{#0a0}{A g(A)\equiv a g(a)}\,$ by the Product Rule. 
Hence $\,f(A) = f(0)+\color{#0a0}{Ag(A)}\equiv f(0)+\color{#0a0}{ag(a)} = f(a)\,$ by the Sum Rule. </p> <p><strong>Beware</strong> $ $ that such rules need not hold true for other operations, e.g. the exponential analog of above $\rm A^B\equiv\, a^b$ is not generally true (unless $\rm B = b,\,$ so it follows by applying the Polynomial Rule with $\,f(x) = x^{\rm b}).$ </p>
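<p>The Polynomial Congruence Rule is easy to spot-check numerically (Python sketch, my addition):</p>

```python
import random

def f(x):
    # an arbitrary polynomial with integer coefficients
    return 3 * x ** 4 - 7 * x ** 2 + 5 * x - 11

random.seed(2)
for _ in range(200):
    m = random.randint(2, 50)
    a = random.randint(-100, 100)
    A = a + m * random.randint(-10, 10)     # A = a (mod m)
    assert (f(A) - f(a)) % m == 0           # so f(A) = f(a) (mod m)
```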
1,629,592
<p>I tried to prove this by first considering the possible last digit of $p$ when $p=n^2+5$, but that reasoning got me nowhere. Then I tried a proof by contrapositive; however, I just couldn't really find where to start.<br> Hence I'm here asking for some hints (only hints, no solution please).</p> <p>Many thanks,<br> D.</p>
Tsemo Aristide
280,301
<p>$n^2\equiv 0,1,4,9,6,5 \pmod{10}$, so correspondingly $n^2+5\equiv 5,6,9,4,1,0 \pmod{10}$. If $n^2+5$ is prime (and greater than $5$), it is odd and not divisible by $5$, so only the residues $9$ and $1$ survive and the result follows: you can only have $n^2+5\equiv 1, 9 \pmod{10}$, i.e. $n\equiv 2,4,6,8 \pmod{10}$.</p>
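<p>A brute-force sanity check (Python sketch, my addition): every prime of the form $n^2+5$ in a small range indeed ends in $1$ or $9$.</p>

```python
def is_prime(m):
    # simple trial division, sufficient for this range
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

last_digits = {(n * n + 5) % 10 for n in range(1, 500) if is_prime(n * n + 5)}
assert last_digits == {1, 9}
```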
3,921,335
<p>How can I formally prove that, for a &gt; 1: <span class="math-container">$$ a&gt; \sqrt a &gt; \sqrt[3]a &gt; \sqrt[4]a &gt; \cdots $$</span>? Can someone help? ;)</p>
Peter Szilas
408,605
<p>Set <span class="math-container">$a:=e^b,$</span> where <span class="math-container">$b&gt;0$</span> (possible since <span class="math-container">$a&gt;1$</span>);</p> <p>Then</p> <p><span class="math-container">$e^b&gt;e^{b/2}&gt;\cdots&gt;e^{b/n}&gt;e^{b/(n+1)}&gt;\cdots$</span> since <span class="math-container">$b&gt;b/2&gt;\cdots&gt;b/n&gt;b/(n+1)$</span> and <span class="math-container">$f(x)=e^x$</span> is an increasing function.</p>
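<p>A quick numerical illustration (my addition), with $a=5$ chosen arbitrarily:</p>

```python
a = 5.0  # any a > 1
roots = [a ** (1.0 / k) for k in range(1, 11)]   # a, sqrt(a), cbrt(a), ...
assert all(r > 1 for r in roots)                 # every root stays above 1
assert all(roots[i] > roots[i + 1] for i in range(len(roots) - 1))
```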
1,595,080
<p>I am doing a question out of H. E. Rose's A Course In Number Theory (Chapter 2, problem 2) which I have been struggling with for some time. However I found a solution which strays from the way advised by the author, and I would just like some validation as to whether it is correct. </p> <p>The question starts by asking to prove $\sum_{i \leq n} \mu(i) \big[ \frac{n}{i} \big] = 1$, which I have done in a method similar to the approach as seen here: <a href="https://math.stackexchange.com/questions/332588/prove-sum-d-leq-x-mud-left-lfloor-frac-xd-right-rfloor-1?lq=1">Prove $\sum_{d \leq x} \mu(d)\left\lfloor \frac xd \right\rfloor = 1 $</a></p> <p>Now from this, it is asked to show $|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq 1$. Asked like this I assume the author is asking you to use a method or result from the preceding part to prove this. However my alternate route was as follows:</p> <p>$-1 \leq \mu(i) \leq 1$</p> <p>$\sum_{i \leq n} (-1) \leq \sum_{i \leq n} \mu(i) \leq \sum_{i \leq n} (1)$ </p> <p>By summing all values of $i$ from 1, upto and including $n$. 
Now using the fact $\sum_{i \leq n} (1) = n$;</p> <p>$-n \leq \sum_{i \leq n} \mu(i) \leq n$</p> <p>Taking the absolute value;</p> <p>$|\sum_{i \leq n} \mu(i)| \leq n$</p> <p>Now multiplying through by $\sum_{i \leq n} \frac{1}{i}$ which is strictly positive (and hence it can be taken inside the absolute value), giving;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n \sum_{i \leq n} \frac{1}{i}$</p> <p>From here, you can use Euler's integral theorem;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n\big(\ln(n) + \gamma +O(\frac{1}{n}) \big)$</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n\ln(n) + n\gamma +O(1) $</p> <p>As $n \gamma$ is constant, this can be absorbed into the $O(1)$;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n\ln(n) + O(1) $</p> <p>This is where I make an assumption; $\ln(n) = o(\frac{1}{n})$ then substituting this in;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n(o(\frac{1}{n})) + O(1) = o(1)$</p> <p>Thus, finally giving;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq 1$</p> <p>Thank you.</p>
Dietrich Burde
83,966
<p>Your estimate somehow arrives at the bound $n\log(n)+O(1)$, which is certainly not $\le 1$ for large $n$. Where you say, "This is where I make an assumption", the assumed $\ln(n)=o(\frac 1n)$ is not true.</p>
1,595,080
<p>I am doing a question out of H. E. Rose's A Course In Number Theory (Chapter 2, problem 2) which I have been struggling with for some time. However I found a solution which strays from the way advised by the author, and I would just like some validation as to whether it is correct. </p> <p>The question starts by asking to prove $\sum_{i \leq n} \mu(i) \big[ \frac{n}{i} \big] = 1$, which I have done in a method similar to the approach as seen here: <a href="https://math.stackexchange.com/questions/332588/prove-sum-d-leq-x-mud-left-lfloor-frac-xd-right-rfloor-1?lq=1">Prove $\sum_{d \leq x} \mu(d)\left\lfloor \frac xd \right\rfloor = 1 $</a></p> <p>Now from this, it is asked to show $|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq 1$. Asked like this I assume the author is asking you to use a method or result from the preceding part to prove this. However my alternate route was as follows:</p> <p>$-1 \leq \mu(i) \leq 1$</p> <p>$\sum_{i \leq n} (-1) \leq \sum_{i \leq n} \mu(i) \leq \sum_{i \leq n} (1)$ </p> <p>By summing all values of $i$ from $1$ up to and including $n$. 
Now using the fact $\sum_{i \leq n} (1) = n$;</p> <p>$-n \leq \sum_{i \leq n} \mu(i) \leq n$</p> <p>Taking the absolute value;</p> <p>$|\sum_{i \leq n} \mu(i)| \leq n$</p> <p>Now multiplying through by $\sum_{i \leq n} \frac{1}{i}$ which is strictly positive (and hence it can be taken inside the absolute value), giving;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n \sum_{i \leq n} \frac{1}{i}$</p> <p>From here, you can use Euler's integral theorem;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n\big(\ln(n) + \gamma +O(\frac{1}{n}) \big)$</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n\ln(n) + n\gamma +O(1) $</p> <p>As $n \gamma$ is constant, this can be absorbed into the $O(1)$;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n\ln(n) + O(1) $</p> <p>This is where I make an assumption; $\ln(n) = o(\frac{1}{n})$ then substituting this in;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq n(o(\frac{1}{n})) + O(1) = o(1)$</p> <p>Thus, finally giving;</p> <p>$|\sum_{i \leq n} \frac{\mu(i)}{i}| \leq 1$</p> <p>Thank you.</p>
Wojowu
127,263
<p>There are quite a few mistakes happening here.</p> <ol> <li><p>You get $|\sum_{i\leq n}\mu(i)|\leq n$ and you multiply it by $\sum_{i \leq n} \frac{1}{i}$, getting on the left hand side $|\sum_{i \leq n} \frac{\mu(i)}{i}|$. You seem to be using a rule of the sort $(\sum_{i\leq n}a_i)(\sum_{i\leq n}b_i)=\sum_{i\leq n}a_ib_i$, which is not correct - the latter sum doesn't include terms like $a_1b_2$ (to be exact - it misses $a_ib_j$ for $i\neq j$).</p></li> <li><p>$n\gamma$ is <em>not</em> a constant. This is a strictly increasing function of $n$. Hence it's incorrect to absorb it into the $O(1)$ term.</p></li> <li><p>This is I believe the most important: your assumption that $\ln(n)=o(\frac{1}{n})$ is very, very wrong. $\ln(n)$ is an increasing function which tends to infinity, while every function which is $o(\frac{1}{n})$ decreases to zero moderately fast.</p></li> <li><p>In the second to last equation you write $n(o(\frac{1}{n}))+O(1)=o(1)$. The left hand side of this equality is clearly $o(1)+O(1)$, but this need not be $o(1)$. Indeed, $o(1)+O(1)$ is exactly the same thing as $O(1)$.</p></li> <li><p>After you get $|\sum_{i \leq n} \frac{\mu(i)}{i}|=o(1)$ you conclude $|\sum_{i \leq n} \frac{\mu(i)}{i}|\leq 1$. However, that the sum is $o(1)$ only tells you that the absolute value of the sum is <em>eventually</em> less than or equal to $1$, so this would only establish that the sum is less than or equal to $1$ <em>for $n$ large enough</em>.</p></li> </ol> <p>Lastly I wanted to ask, what exactly is the "Euler's integral theorem" you are referring to? The asymptotic you provide for $\sum_{i\leq n}\frac{1}{i}$ is correct, I'm just asking for what the theorem you refer to states.</p>
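Although the route in the question breaks down, the target inequality itself is true, and both parts of the exercise are easy to spot-check by machine. The following is an editorial illustration (not part of either answer), with the Möbius function computed by a hand-rolled sieve:

```python
def mobius_sieve(n):
    """mu(1..n): flip the sign once per prime factor, zero out non-squarefree m."""
    mu = [1] * (n + 1)
    marked = [False] * (n + 1)
    for p in range(2, n + 1):
        if not marked[p]:                     # p is prime
            for m in range(p, n + 1, p):
                marked[m] = True
                mu[m] = -mu[m]
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0                     # p^2 | m  =>  mu(m) = 0
    return mu

mu = mobius_sieve(2000)

# Part 1 of the question: sum_{i<=n} mu(i) * floor(n/i) == 1 for every n.
for n in (1, 2, 37, 500, 2000):
    assert sum(mu[i] * (n // i) for i in range(1, n + 1)) == 1

# Part 2: the partial sums of mu(i)/i never leave [-1, 1].
s, worst = 0.0, 0.0
for i in range(1, 2001):
    s += mu[i] / i
    worst = max(worst, abs(s))
print(worst)   # 1.0, attained only at n = 1; afterwards the sums stay strictly inside
```

The maximum $1$ really is attained (at $n=1$), so the bound in the exercise is sharp.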
3,379,899
<p>We have two coins. The first one is a fair coin and has <span class="math-container">$50\%$</span> chance of landing on head and <span class="math-container">$50\%$</span> of landing on tails. for the other one it is <span class="math-container">$60\%$</span> for head and <span class="math-container">$40\%$</span> for tails. We choose one of them randomly (with equal chance) and toss it three times.The result is Tails-Head-Head. If we toss it again for the 4th time, what is the probability of the coin landing on tails.</p> <p>At first, it seems like a bayes theorem question, but after some thought I think because the result of every toss is independent of others, the answer is just:</p> <p><span class="math-container">$$\frac{1}{2} . \frac{1}{2} + \frac{1}{2} . \frac{4}{10} = \frac{9}{20} $$</span></p> <p>The first <span class="math-container">$\frac{1}{2}$</span> is because we choose the coins randomly.</p> <p>I want to know whether my argument is correct for this probability question or not.</p>
Mark Fischler
150,362
<p>The answer, giving two terms corresponding to the actual coin being fair, then the actual coin being biased, will be <span class="math-container">$$P(T) = \frac{\left(\frac12\right)^3}{\left(\frac12\right)^3+\frac25\left(\frac35\right)^2} \left(\frac12\right) + \frac{\frac25\left(\frac35\right)^2}{\left(\frac12\right)^3+\frac25\left(\frac35\right)^2}\left(\frac25\right)\\ $$</span></p> <p>Doing the arithmetic, <span class="math-container">$$ P(T) = \frac{1201}{2690} \approx 44.65\% $$</span> The probability that your coin is biased is about 53.5%, which is why the tails on the next throw is less likely than you might think.</p>
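As a sanity check of the arithmetic above (my addition, not part of the original answer), the posterior computation can be reproduced in exact rational arithmetic:

```python
from fractions import Fraction

half, p_t = Fraction(1, 2), Fraction(2, 5)      # fair coin; biased coin's P(tails)

# Likelihood of the observed T, H, H under each coin (the coins are a priori
# equally likely, so the common 1/2 prior cancels in the posterior ratio).
like_fair = half * half * half
like_biased = p_t * (1 - p_t) * (1 - p_t)

post_fair = like_fair / (like_fair + like_biased)
post_biased = 1 - post_fair

p_tails_next = post_fair * half + post_biased * p_t
print(p_tails_next)            # 1201/2690
print(float(p_tails_next))     # ~0.4465
print(float(post_biased))      # ~0.5353, the "about 53.5%" quoted in the answer
```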
2,776,388
<p>I tried this problem and I first found the lim of $x^x$ as $x$ approaches zero from right to be 1 (I did this by re-writing $x^x$ as an exponential) and when I repeated the same process to find lim of $x^{(x^x)}$, I found 1 again but the final answer should be ZERO. Could I have an explanation on why it's a zero?</p>
user
505,767
<p>Note that</p> <p>$$x^{x^x}=e^{x^x\log x}\to0$$</p> <p>indeed</p> <p>$$x^x\log x\to -\infty$$</p>
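A quick numerical illustration of the two competing limits (my addition): the inner tower $x^x \to 1$, but the full tower $x^{x^x} = e^{x^x\log x} \to 0$, because the exponent behaves like $1\cdot\log x \to -\infty$:

```python
for x in (0.1, 0.01, 0.001, 1e-6):
    inner = x ** x        # tends to 1
    tower = x ** inner    # = exp(x^x * log x), tends to 0
    print(f"x = {x:<8g} x^x = {inner:.6f}   x^(x^x) = {tower:.3e}")
```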
4,371,541
<p>The series representation <span class="math-container">$\sum1/n^s$</span> of the Riemann zeta function <span class="math-container">$\zeta(s)$</span> is known to converge for <span class="math-container">$\sigma&gt;1$</span>, where <span class="math-container">$s=\sigma+it$</span>.</p> <p>In order to show an infinite series is holomorphic, we need to show the series <strong>converges uniformly</strong>.</p> <p>However, the series <span class="math-container">$\sum 1/n^s$</span> doesn't converge uniformly in the domain <span class="math-container">$\sigma&gt;1$</span>. Intuitively, the closer <span class="math-container">$s$</span> is closer to <span class="math-container">$s=1$</span> the more quickly it diverges.</p> <p>To address this, various sources make an argument based on <strong>locally uniform convergence</strong>.</p> <p>The argument is that the series is uniformly convergent on any <span class="math-container">$\sigma&gt;a$</span> where <span class="math-container">$a&gt;1$</span>. The argument proceeds by saying that the union of all these domains is <span class="math-container">$\sigma&gt;1$</span>, so the series is uniformly convergent here.</p> <p><strong>This is where I am confused.</strong> Surely <span class="math-container">$\sigma&gt;a$</span> with <span class="math-container">$a&gt;1$</span> means <span class="math-container">$\sigma&gt;1$</span> which we agreed at the top is a domain in which the series does not converge uniformly.</p> <p><strong>Question:</strong> What am I misunderstanding about the standard argument? Where is my error?</p>
davidlowryduda
9,754
<p>It is not true that the Dirichlet series for <span class="math-container">$\zeta(s)$</span> converges uniformly for <span class="math-container">$\mathrm{Re} s &gt; 1$</span>. This is also not how the &quot;easy&quot; arguments go to show that <span class="math-container">$\zeta(s)$</span> is holomorphic for <span class="math-container">$\mathrm{Re} s &gt; 1$</span>.</p> <p>The typical argument begins as you note: for any <span class="math-container">$a &gt; 1$</span>, the Dirichlet series for <span class="math-container">$\zeta(s)$</span> converges absolutely for <span class="math-container">$\mathrm{Re} s &gt; a$</span>. It follows that for any compact subset <span class="math-container">$K$</span> contained in the halfplane <span class="math-container">$\mathrm{Re} s &gt; 1$</span>, the Dirichlet series for <span class="math-container">$\zeta(s)$</span> converges uniformly on <span class="math-container">$K$</span>. Then Morera's theorem applies, showing that the Dirichlet series for <span class="math-container">$\zeta(s)$</span> is holomorphic in <span class="math-container">$\mathrm{Re} s &gt; 1$</span>.</p> <p>Now with slightly more detail. Morera's theorem says that if <span class="math-container">$f$</span> is continuous in a nice domain <span class="math-container">$D$</span>, and if for any triangle <span class="math-container">$T$</span> contained in <span class="math-container">$D$</span> we have that <span class="math-container">$$ \int_T f(z) dz = 0, $$</span> then <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$D$</span>.</p> <p>Suppose now that <span class="math-container">$f_n$</span> is a sequence of holomorphic functions that converges uniformly to a function <span class="math-container">$f$</span> in every compact subset <span class="math-container">$K$</span> of some region <span class="math-container">$\Omega$</span>. 
Then in fact <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$\Omega$</span>. (Sometimes stated concisely: <em>a locally uniform limit of holomorphic functions is holomorphic</em>.) To see this, let <span class="math-container">$D$</span> be a closed disk contained in <span class="math-container">$\Omega$</span>, and let <span class="math-container">$T$</span> be any triangle in that disk. As each <span class="math-container">$f_n$</span> is holomorphic, we know that <span class="math-container">$$ \int_T f_n(z) dz = 0 \quad (\forall n). $$</span> As <span class="math-container">$f_n \to f$</span> uniformly in the closure of <span class="math-container">$D$</span>, one can show that <span class="math-container">$$ \int_T f_n(z) dz \to \int_T f(z) dz $$</span> and also that <span class="math-container">$f$</span> is continuous. Morera's theorem now shows that <span class="math-container">$f$</span> is holomorphic in <span class="math-container">$D$</span>. As this is true for any <span class="math-container">$D$</span> in <span class="math-container">$\Omega$</span>, we find that <span class="math-container">$f$</span> is holomorphic on <span class="math-container">$\Omega$</span>.</p>
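For completeness, an editorial addition (not part of the original answer): the step from "absolutely convergent on half-planes $\mathrm{Re}\,s > a$" to "uniformly convergent on compact sets" is the Weierstrass M-test in disguise.

```latex
\text{Let } K \subset \{\operatorname{Re} s > 1\} \text{ be compact and set }
a := \min_{s \in K} \operatorname{Re} s,
\text{ so } a > 1 \text{ by compactness and continuity of } \operatorname{Re}.
\text{Then for all } s \in K \text{ and } n \ge 1:
\qquad \bigl| n^{-s} \bigr| = n^{-\operatorname{Re} s} \le n^{-a},
\qquad \sum_{n \ge 1} n^{-a} < \infty,
```

so the Dirichlet series converges uniformly on $K$ by the M-test, which is exactly the hypothesis needed above.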
1,383,613
<p>Give recursive definition of sequence $a_n = 2^n, n = 2, 3, 4... where $ $a_1 = 2$</p> <p>I'm just not sure how to approach these problems. </p> <p>Then it asks to give a def for:</p> <p>$a_n = n^2-3n, n = 0, 1, 2...$</p> <p>Thanks for all the help!</p>
Terra Hyde
253,768
<p>For the first one, write the term $a_{n+1}$ and compare it to $a_n$: $$a_{n+1}=2^{n+1}=2\cdot2^n=2a_n$$</p> <p>For the second one, repeat the process: $$\begin{align}a_{n+1}&amp;=(n+1)^2-3(n+1)\\ &amp;=n^2+2n+1-3n-3\\ &amp;=n^2-3n+2n-2\\ &amp;=a_n+2(n-1) \end{align}$$</p>
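Both recursive definitions can be checked mechanically against the closed forms (a throwaway verification, not part of the answer):

```python
a = lambda n: 2 ** n           # first sequence:  a_{n+1} = 2 * a_n      with a_1 = 2
b = lambda n: n * n - 3 * n    # second sequence: b_{n+1} = b_n + 2(n-1) with b_0 = 0

assert all(a(n + 1) == 2 * a(n) for n in range(1, 30))
assert all(b(n + 1) == b(n) + 2 * (n - 1) for n in range(0, 30))
print("recurrences agree with the closed forms on the tested range")
```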
1,686,012
<p>Let $T$: $\mathbb{R}^3 \to \mathbb{R}$ be a linear transformation.</p> <p>Show that either $T$ is surjective, or $T$ is the zero linear transformation.</p> <p>My approach:</p> <p>First we start off by supposing T is not surjective and we want to show that $\forall \overrightarrow v \in \mathbb{R}^3, T(\overrightarrow v)=\overrightarrow 0$.</p> <p>$T$ not surjective implies that $Image(T) \not = \mathbb{R}$ and that $\exists \overrightarrow y \in \mathbb{R}$ such that $$ \begin{bmatrix} a &amp; b &amp; c \end{bmatrix} . \overrightarrow v \not = \overrightarrow y$$ with a,b,c $\in \mathbb{R}$</p> <p>$\Rightarrow$ $ax_{1}+bx_{2}+cx_{3} \not = y_{1}$</p> <p>From here, I want to show that a,b,c are equal to zero. Should I say that an equation of a plane to a real number should have infinite solutions unless the coefficients are zero and conclude that a,b,c are zero or is there another way?</p>
Arthur
15,500
<p>Assume that $T$ is <em>not</em> the zero transformation (this means that there is a $v$ such that $T(v) = r \neq 0$), and deduce that it is surjective (by using $v$ and linearity of $T$).</p>
2,158,981
<p>How should I solve: $$\sum_{r=1}^n (2r-1)\cos(2r-1)\theta $$</p> <p>I can solve $\sum_{r=1}^n \cos(2r-1)\theta $ by considering $\Re \sum_{r=1}^n z^{2r-1} $ and using the summation of a geometric series, but I can't seem to find a common geometric ratio when $ 2r-1 $ is involved in the summation.</p> <p>Visually: $\sum_{r=1}^n z^{2r-1} = z +z^3+...+z^{2n-1}$, where the common ratio $z^2$ can easily be seen, but in the case of $\sum_{r=1}^n (2r-1)z^{2r-1} = z + 3z^3 + 5z^5 +...+ (2n-1)z^{2n-1}$, how should I solve this? A hint would be appreciated.</p>
S.C.B.
310,930
<p>Note that $$\sum_{r=1}^{n} (2r-1) \cos (2r-1) \theta=\sum_{r=1}^{n} \frac{\mathrm{d}}{\mathrm{d}\theta}\sin (2r-1) \theta= \frac{\mathrm{d}}{\mathrm{d}\theta}\sum_{r=1}^{n} \sin (2r-1) \theta$$ Now calculate $$\sum_{r=1}^{n} \sin (2r-1) \theta=\frac{ \sin^2(n\theta) }{ \sin(\theta) }$$ through <a href="https://math.stackexchange.com/questions/1967339/how-is-it-solved-sinx-sin3x-sin5x-dotsb-sin2n-1x?noredirect=1&amp;lq=1">the formula for the sum of sines</a>. </p> <p>Now just differentiate with regard to $\theta $. </p>
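The two identities in this answer are easy to spot-check numerically (my addition); the second comparison pits a central-difference derivative of the closed form against the target sum:

```python
import math

n, theta = 5, 0.7

# Closed form for the auxiliary sum of sines.
lhs = sum(math.sin((2 * r - 1) * theta) for r in range(1, n + 1))
rhs = math.sin(n * theta) ** 2 / math.sin(theta)
print(abs(lhs - rhs))            # essentially machine epsilon

# d/d(theta) of the closed form should reproduce the requested cosine sum.
f = lambda t: math.sin(n * t) ** 2 / math.sin(t)
h = 1e-6
numeric = (f(theta + h) - f(theta - h)) / (2 * h)
target = sum((2 * r - 1) * math.cos((2 * r - 1) * theta) for r in range(1, n + 1))
print(abs(numeric - target))     # small, limited only by the finite difference
```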
244,489
<p>Given:</p> <p>${AA}\times{BC}=BDDB$</p> <p>Find $BDDB$:</p> <ol> <li>$1221$</li> <li>$3663$</li> <li>$4884$</li> <li>$2112$</li> </ol> <p>The way I solved it:</p> <p>First step - expansion &amp; dividing by the constant $11$: $AA\times{BC}=11A\times{BC}$</p> <ol> <li>$1221$ => $1221\div11$ => $111$</li> <li>$3663$ => $3663\div11$ => $333$</li> <li>$4884$ => $4884\div11$ => $444$</li> <li>$2112$ => $2112\div11$ => $192$</li> </ol> <p>Second step - each result is now equal to $A\times{BC}$. We're choosing the multipliers $A$ and $BC$ manually, in accordance with the initial condition. It takes <strong>a lot</strong> of time to pick a number and check whether it can be a multiplier.</p> <p>That way I get two pairs:</p> <p>$22*96$=$2112$</p> <p>$99*37$=$3663$</p> <p>Of course $99*37$=$3663$ is the right one.</p> <p>Is there a more efficient way to do this? Am I missing something?</p>
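A more systematic route than manual trial (an editorial addition, not part of the post) is a brute force over all $9 \times 9 \times 10 = 810$ digit triples $(A, B, C)$. Interestingly, it also turns up $88 \times 24 = 2112$, a factorization of option 4 that the manual search above missed: the pair $22 \times 96$ fails only because the tens digit of $96$ is not $B$, but $88 \times 24$ satisfies every constraint.

```python
options = [1221, 3663, 4884, 2112]          # the four BDDB candidates offered
hits = {}
for A in range(1, 10):
    for B in range(1, 10):
        for C in range(10):
            prod = 11 * A * (10 * B + C)    # AA * BC
            # The pattern BDDB forces the leading digit of the product to be B.
            if prod in options and str(prod)[0] == str(B):
                hits.setdefault(prod, []).append((11 * A, 10 * B + C))
print(hits)   # {2112: [(88, 24)], 3663: [(99, 37)]} -- options 1 and 3 are impossible
```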
Christian Blatter
1,303
<p>Given two nonoverlapping squares in a large square $Q$ there is a line $g$ separating the two squares. Assume $g$ is not parallel to one of the sides of $Q$ (otherwise we are done). Let $A$ and $C$ be the two vertices of $Q$ farthest away from $g$ on the two sides of $g$. The two edges of $Q$ meeting at $A$ together with $g$ determine a rightangled triangle $T_A$, and similarly we get a rightangled triangle $T_C$; see the following figure. (One could draw a second figure where $g$ cuts off just one vertex $A$. The proof remains the same.)</p> <p><img src="https://i.stack.imgur.com/Es5AN.jpg" alt="enter image description here"></p> <p>To finish the proof we need the following </p> <p><strong>Lemma.</strong> Given a triangle $T_C$ with a right angle at $C$ the largest square inscribed in $T$ is the square with one vertex at $C$ and one vertex at the intersection of the angle bisector at $C$ with the hypotenuse of $T$.</p> <p>Sketch of proof of the Lemma:</p> <p>Put $C$ at the origin of the $(x,y)$-plane, and let $(a,0)$, $(0,b)$ be the other two vertices of $T$. We may assume two vertices of the inscribed square on the legs of $T$. If $(u,0)$ and $(0,v)$ are these two vertices the other two vertices are $(u+v,u)$ and $(v,u+v)$. It follows that $u$ and $v$ have to satisfy the conditions $$u\geq0, \quad v\geq 0,\quad {u+v\over a}+{u\over b}\leq 1,\quad {v\over a}+{u+v\over b}\leq 1\ .$$ These four conditions define a convex quadrilateral $P$ in the "abstract" $(u,v)$-plane. The vertices of $P$ are $$V_1=(0,0),\quad V_2=\Bigl({ab\over a+b},0\Bigr),\quad V_3=\Bigl({a^2b\over a^2+ab +b^2},{ab^2\over a^2+ab +b^2}\Bigr), \quad V_4=\Bigl(0,{ab\over a+b}\Bigr)\ .$$ Computation shows that $$|V_2|^2-|V_3|^2={a^4 b^4 \over(a+b)^2(a^2+ab+b^2)^2}&gt;0\ .$$</p> <p>Therefore $u^2+v^2$ ($=$ the square of the side length of our square) is maximal at the two vertices $V_2$ and $V_4$ of $P$, which correspond to the statement of the Lemma.</p>
3,470,703
<p><span class="math-container">$\int \left(\frac{1}{2}t+\frac{\sqrt{2}}{4} \right)e^{t^2-\sqrt{2}t}\ dt$</span></p>
19aksh
668,124
<p>Hint <span class="math-container">$$\text{Let} \ t^2-\sqrt2 t = x \implies (2t - \sqrt2)dt=dx \implies \left(\frac{t}{2}-\frac{\sqrt2}{4}\right)dt = \frac {dx}4$$</span></p>
16,247
<p>Given a map $f:B^n \to S^n$, where $B^n$ is the unit ball and $S^n$ is the unit sphere, is it true that the degree of $f|_{S^n}$ is always 0, where $f_{S^n}$ is the restriction of $f$ to $S^n$? If so, why? </p> <p>Thanks!</p>
Mariano Suárez-Álvarez
274
<p>The degree is zero because the restriction of $f$ to the boundary of $B^n$ (which most humans write $S^{n-1}$!&nbsp;:)&nbsp;) is homotopic to a constant map.</p>
3,554,555
<p>Let <span class="math-container">$S_n=1-2^{-n}$</span>. Prove that the sequence <span class="math-container">$S_n$</span> converges to <span class="math-container">$1$</span> as <span class="math-container">$n$</span> approaches infinity. </p> <p>I let <span class="math-container">$N = \lceil\log_2 \varepsilon\rceil + 1$</span>. Then there exists <span class="math-container">$k\geq 1$</span> such that <span class="math-container">$\varepsilon \geq 1/2^k$</span>. Then, set <span class="math-container">$N=k+1$</span>. Let <span class="math-container">$n\geq N$</span>. </p> <p>I am having trouble proving this though. any help would be greatly appreciated. thank you!</p>
Anguepa
196,351
<p>By definition of convergence you must show that, for every <span class="math-container">$\varepsilon&gt;0$</span>, there exists <span class="math-container">$N&gt;0$</span> such that, for every <span class="math-container">$n\geq N$</span>, it holds that <span class="math-container">$$ |1-(1-2^{-n})|=|2^{-n}|=2^{-n}&lt;\varepsilon. $$</span> Or, rearranging the inequality, <span class="math-container">$$ \frac{1}{\varepsilon}&lt;2^n. $$</span> So it is quite obvious that this holds for all <span class="math-container">$n$</span> large enough. If you want an expression for <span class="math-container">$N$</span> in terms of <span class="math-container">$\varepsilon$</span> just take <span class="math-container">$\log_2$</span> on both sides and you have <span class="math-container">$$ \log_2{1/\varepsilon}&lt; n. $$</span> So you can just take <span class="math-container">$N$</span> to be any natural number greater than <span class="math-container">$\log_2(1/\varepsilon)$</span> (close but not quite what you wrote).</p> <p>If you know how to work with logarithms you can derive that <span class="math-container">$\log_2(1/\varepsilon)=\log_2(\varepsilon^{-1})=-\log_2(\varepsilon)$</span>, so you are left with <span class="math-container">$$ -\log_2{\varepsilon} &lt; n. $$</span></p>
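The recipe in the last paragraph fits in a few lines of code (an illustration, not part of the answer): given $\varepsilon$, take $N = \lfloor \log_2(1/\varepsilon) \rfloor + 1$ and check that it is the least index that works:

```python
import math

def N_for(eps):
    """An N with 2**(-n) < eps for all n >= N (any larger N works too)."""
    return math.floor(math.log2(1 / eps)) + 1

for eps in (0.5, 0.1, 1e-3, 1e-9):
    N = N_for(eps)
    assert 2.0 ** (-N) < eps             # N itself works...
    assert not 2.0 ** (-(N - 1)) < eps   # ...and N - 1 does not, for these samples
    print(eps, N)
```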
3,554,555
<p>Let <span class="math-container">$S_n=1-2^{-n}$</span>. Prove that the sequence <span class="math-container">$S_n$</span> converges to <span class="math-container">$1$</span> as <span class="math-container">$n$</span> approaches infinity. </p> <p>I let <span class="math-container">$N = \lceil\log_2 \varepsilon\rceil + 1$</span>. Then there exists <span class="math-container">$k\geq 1$</span> such that <span class="math-container">$\varepsilon \geq 1/2^k$</span>. Then, set <span class="math-container">$N=k+1$</span>. Let <span class="math-container">$n\geq N$</span>. </p> <p>I am having trouble proving this though. any help would be greatly appreciated. thank you!</p>
The 2nd
751,538
<p>From the following limit properties:</p> <blockquote> <p><span class="math-container">$$\lim_{n \to \infty} \frac{1}{a^{n}} = 0, \forall a&gt;1$$</span></p> <p><span class="math-container">$$\lim_{n \to \infty} (f(n) \pm g(n)) = \lim_{n \to \infty} f(n) \pm \lim_{n \to \infty} g(n)$$</span></p> <p><span class="math-container">$$ \lim_{n \to \infty} c = c $$</span></p> </blockquote> <p>where $c$ is a constant (and the second property holds whenever both limits on the right exist).</p> <p>We can get the answer:</p> <p><span class="math-container">$$\lim_{n \to \infty} (1 - 2^{-n})$$</span> <span class="math-container">$$= \lim_{n \to \infty} \left(1 - \frac{1}{2^n}\right)$$</span> <span class="math-container">$$= \lim_{n \to \infty} 1 - \lim_{n \to \infty} \frac{1}{2^n}$$</span> <span class="math-container">$$=1 - 0 =1$$</span></p>
332,001
<p><a href="https://en.wikipedia.org/wiki/Cauchy%27s_theorem_(geometry)#Generalizations_and_related_results" rel="nofollow noreferrer">Wikipedia states</a> that A. D. Alexandrov generalized <em>Cauchy's rigidity theorem for polyhedra</em> to higher dimensions. </p> <p>The relevant statement in the article is not linked to any source. The sources at the end of the Wikipedia page seem to be only about <span class="math-container">$3$</span>-dimensional polyhedra as well, in particular Alexandrov's book "Convex polyhedra".</p> <blockquote> <p>Where can I find a reference for that statement?</p> </blockquote>
j.c.
353
<p>The following is Theorem 27.2 of Igor Pak's book <a href="http://www.math.ucla.edu/~pak/book.htm" rel="noreferrer">Lectures on Discrete and Polyhedral Geometry</a> (which in general is a very nice resource for these sorts of questions):</p> <blockquote> <p>Let <span class="math-container">$P,Q\subset\mathbb{R}^d$</span> (or <span class="math-container">$P,Q \subset S^d_+$</span>), <span class="math-container">$d\geq3$</span> be two combinatorially equivalent (spherical) convex polyhedra whose corresponding facets are isometric. Then <span class="math-container">$P$</span> and <span class="math-container">$Q$</span> are isometric.</p> </blockquote> <p>(Here <span class="math-container">$S^d_+$</span> is a d-dimensional hemisphere.)</p>
19,672
<p>The <a href="http://en.wikipedia.org/wiki/Stirling_numbers_of_the_second_kind" rel="nofollow noreferrer">Stirling number of the second kind</a> is the number of ways to partition a set of $n$ objects into $k$ non-empty subsets. In Mathematica, this is implemented as <code>StirlingS2</code>. How can I enumerate all the sets? Ideally I would like to get a list of lists, where each list contains $k$ lists.</p> <p>The question <a href="https://mathematica.stackexchange.com/questions/3044/partition-a-set-into-subsets-of-size-k">Partition a set into subsets of size k</a> seems relevant.</p>
Dr. belisarius
193
<pre><code>&lt;&lt; Combinatorica` KSetPartitions[{a, b, c}, 2] (* {{{a}, {b, c}}, {{a, b}, {c}}, {{a, c}, {b}}} *) StirlingS2[3, 2] (* 3 *) </code></pre>
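For readers without the (long-unbundled) Combinatorica package, the same enumeration is short to write by hand; a sketch in plain Python (my addition; the ordering of the output may differ from Mathematica's):

```python
def k_set_partitions(items, k):
    """All partitions of `items` into exactly k non-empty blocks (S(n, k) of them)."""
    def extend(partial, rest):
        if not rest:
            if len(partial) == k:
                yield [list(block) for block in partial]
            return
        x, tail = rest[0], rest[1:]
        for block in partial:           # put x into an existing block
            block.append(x)
            yield from extend(partial, tail)
            block.pop()
        if len(partial) < k:            # or open a new block; since blocks stay in
            partial.append([x])         # first-element order, no duplicates arise
            yield from extend(partial, tail)
            partial.pop()
    return list(extend([], list(items)))

parts = k_set_partitions(['a', 'b', 'c'], 2)
print(parts)
print(len(parts))   # 3 == StirlingS2[3, 2]
```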
1,115,389
<p>Given: </p> <p>$$\sum_{n = 0}^{\infty} a_nx^n = f(x)$$</p> <p>where:</p> <p>$$a_{n+2} = a_{n+1} - \frac{1}{4}a_n$$</p> <p>is the recurrence relationship for $a_2$ and above ($a_0$ and $a_1$ are also given).</p> <p>Is there a nice closed form to this pretty recurrence relationship?</p>
user203856
203,856
<p>Don't have enough points for a comment, but you seem to be contradicting yourself when you first say that for the first rook you have 9^2 , for the second you have 8^2 and then for the total options you cite 81, 80, etc.</p>
4,412,700
<p>I've been working on a problem but have been stuck for several hours finishing it.</p> <p>The problem is to show that <span class="math-container">$$ \frac{x}{(e+x) \ln(e+x)} \leq \ln(\ln(e+x)) \leq \frac{x}{e} $$</span> for all <span class="math-container">$x &gt; 0$</span>.</p> <p>I proceeded by first integrating each component as the problem suggested, which changed it to proving that <span class="math-container">$$ \int_0^x \frac{1}{(e+x) \ln(e+x)} \,\mathrm{d}t \leq \int_0^x \frac{1}{(e+t) \ln(e+t)} \,\mathrm{d}t \leq \int_0^x \frac{1}{e} \,\mathrm{d}t \,. $$</span></p> <p>From here I was able to show quite easily that when <span class="math-container">$x&gt;0$</span>, the second inequality holds. However, I'm unsure of how to now prove the first inequality: is there a result which could help with this or something that I'm missing?</p> <p>Thanks very much for any help, and sorry for the poor formatting, it's my first time using stack exchange.</p> <p>Mark</p>
Átila Correia
953,679
<p><strong>HINT</strong></p> <p>Let <span class="math-container">$u = \ln(e + x)$</span> (observe that <span class="math-container">$u\geq 1$</span>). Then one has that <span class="math-container">\begin{align*} \frac{x}{(e + x)\ln(e + x)} \leq \ln(\ln(e + x)) &amp; \Longleftrightarrow \frac{e^{u} - e}{e^{u}u} \leq \ln(u) \end{align*}</span></p> <p>If we let <span class="math-container">$u = 1$</span>, both the RHS and the LHS equal zero.</p> <p>Now we can take the difference and differentiate it: <span class="math-container">\begin{align*} f(u) := \frac{1}{u} - \frac{e^{1 - u}}{u} - \ln(u) \Rightarrow f'(u) &amp; = -\frac{1}{u^{2}} + \frac{e^{1-u}u + e^{1-u}}{u^{2}} - \frac{1}{u}\\\\ &amp; = \frac{e^{1 - u}(u + 1) - (u + 1)}{u^{2}}\\\\ &amp; = \frac{(e^{1 - u} - 1)(u + 1)}{u^{2}} \end{align*}</span></p> <p>Consequently, since <span class="math-container">$f(1) = 0$</span>, the proposed relation holds if <span class="math-container">$f'(u) \leq 0$</span> for <span class="math-container">$u\geq 1$</span>. Indeed, this is the case (why?).</p> <p>For the other inequality, consider the same substitution <span class="math-container">$u = \ln(e + x)$</span> (once again, <span class="math-container">$u\geq 1$</span>).</p> <p>Then we have the following equivalence: <span class="math-container">\begin{align*} \ln(\ln(e + x)) \leq \frac{x}{e} \Longleftrightarrow \ln(u) \leq \frac{e^{u} - e}{e} \end{align*}</span></p> <p>If we let <span class="math-container">$u = 1$</span>, both the RHS and LHS equal zero.</p> <p>Once again, we are going to take the difference and differentiate it: <span class="math-container">\begin{align*} g(u) := \ln(u) - e^{u - 1} + 1 \Rightarrow g'(u) &amp; = \frac{1}{u} - e^{u-1} \end{align*}</span></p> <p>Analogously, since <span class="math-container">$g(1) = 0$</span>, the proposed relation holds if <span class="math-container">$g'(u) \leq 0$</span> for every <span class="math-container">$u\geq 1$</span>. Indeed, this is the case (why?).</p> <p>Hopefully this helps!</p>
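A numeric spot-check of the original sandwich (my addition), useful before investing in the derivatives above, and reassuring near $x = 0$ where the three expressions are very close:

```python
import math

lhs = lambda x: x / ((math.e + x) * math.log(math.e + x))
mid = lambda x: math.log(math.log(math.e + x))
rhs = lambda x: x / math.e

for x in (0.01, 0.5, 1.0, 10.0, 1000.0):
    assert lhs(x) <= mid(x) <= rhs(x), x
print(lhs(0.01), mid(0.01), rhs(0.01))   # three nearly equal values near x = 0
```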
47,492
<p>Consider a continuous irreducible representation of a compact Lie group on a finite-dimensional complex Hilbert space. There are three mutually exclusive options:</p> <p>1) it's not isomorphic to its dual (in which case we call it 'complex')</p> <p>2) it has a nondegenerate symmetric bilinear form (in which case we call it 'real')</p> <p>3) it has a nondegenerate antisymmetric bilinear form (in which case we call it 'quaternionic')</p> <p>It's 'real' in this sense iff it's the complexification of a representation on a real vector space, and it's 'quaternionic' in this sense iff it's the underlying complex representation of a representation on a quaternionic vector space.</p> <p>Offhand, I know just <b>four compact Lie groups whose continuous irreducible representations on complex vector spaces are all either real or quaternionic in the above sense</b>:</p> <p>1) the group Z/2 </p> <p>2) the trivial group</p> <p>3) the group SU(2)</p> <p>4) the group SO(3)</p> <p>Note that I'm so desperate for examples that I'm including 0-dimensional compact Lie groups, i.e. finite groups! </p> <p>1) is the group of unit-norm real numbers, 2) is a group covered by that, 3) is the group of unit-norm quaternions, and 4) is a group covered by <i>that</i>. This probably explains why these are all the examples I know. For 1), 2) and 4), all the continuous irreducible representations are in fact real.</p> <p><b>What are all the examples?</b> </p>
Faisal
430
<p>This was a comment on Torsten's answer, but it got too long.</p> <p>Suppose $G$ is connected and semisimple. Fixing a choice $\Phi^+$ of positive roots for $G$, we can describe $w_0$ as the unique element of the Weyl group of $G$ that takes $\Phi^+$ to the negative roots $\Phi^- = -\Phi^+$. Now, $-w_0$ is an involution of the Dynkin diagram of $G$. This involution is trivial when the components of the Dynkin diagram lack two-fold symmetry, and this happens precisely for components of type $A_1$, $B_n$, $C_n$, $D_{2n}$, $E_7$, $E_8$, $F_4$ and $G_2$, in which case $-w_0=1$. For type $A_n$ ($n&gt;1$), the involution is given by $\alpha_i \leftrightarrow \alpha_{n-i+1}$, for $D_n$ with $n$ odd it's given by $\alpha_{n-1} \leftrightarrow \alpha_n$, and for $E_6$ it's given by $\alpha_1 \leftrightarrow \alpha_6$ and $\alpha_3 \leftrightarrow \alpha_5$ (in the Bourbaki numbering).</p> <p>Now if $V$ is an irrep of highest weight $\lambda$, then $V^\ast$ has highest weight $-w_0\lambda$. So $V \cong V^\ast$ whenever $-w_0=1$, and the above discussion tells us when this happens.</p> <p>Side note: There's a closely related <a href="https://mathoverflow.net/questions/38165/">MO question</a>, which was asked not too long ago, whose answers might be helpful.</p>
3,297,110
<p>Evaluate <span class="math-container">$$\int\frac{1}{x^{\frac{25}{25} }\cdot x^{\frac{16}{25}}+x^{\frac{9}{25}}}dx$$</span></p> <p>I start by factoring <span class="math-container">$$\int\frac{1}{x^{\frac{25}{25} }\cdot x^{\frac{16}{25}}+x^{\frac{9}{25}}}dx=\int\frac{1}{x^{\frac{9}{25}}\left(x^{\frac{32}{25}}+1\right)}dx$$</span></p> <p>Can I do partial fraction from here? </p>
Claude Leibovici
82,404
<p>Let <span class="math-container">$$x=y^{25}\implies dx=25\,y^{24}\,dy$$</span> <span class="math-container">$$\int\frac{dx}{x^{\frac{25}{25} }\cdot x^{\frac{16}{25}}+x^{\frac{9}{25}}}=25\int\frac{ y^{15}}{y^{32}+1}\,dy=\frac {25}{16}\int\frac{16\, y^{15}}{y^{32}+1}\,dy=\frac {25}{16}\int\frac{ \left(y^{16}\right)'}{\left(y^{16}\right)^2+1}\,dy$$</span></p>
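Finishing the hint gives $\frac{25}{16}\arctan\left(y^{16}\right)+C = \frac{25}{16}\arctan\left(x^{16/25}\right)+C$; a quick numerical verification of that candidate antiderivative by central differencing (my addition):

```python
import math

F = lambda x: 25 / 16 * math.atan(x ** (16 / 25))           # candidate antiderivative
integrand = lambda x: 1 / (x ** (41 / 25) + x ** (9 / 25))  # x^(41/25) = x * x^(16/25)

h = 1e-6
for x in (0.5, 1.0, 3.0, 10.0):
    slope = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(slope - integrand(x)) < 1e-6, x
print("F'(x) matches the integrand at the sample points")
```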
654,263
<p>My intention is neither to learn basic probability concepts, nor to learn applications of the theory. My background is at the graduate level of having completed all engineering courses in probability/statistics -- mostly oriented toward the applications without much emphasis on mathematical rigor.</p> <p>Now I am very interested in learning the core logic and mathematical framework of probability theory, as a math branch. More specifically, I would like to learn answers to the following questions:</p> <blockquote> <p>(1) What are the necessary axioms from which we can build probability theory?</p> <p>(2) What are the core theorems and results in the mathematical theory of probability?</p> <p>(3) What are the derived rules for reasoning/inference, based on the theorems/results in probability theory?</p> </blockquote> <p>So I am seeking a book that covers the &quot;heart&quot; of mathematical probability theory -- not needing much on applications, or discussion on extended topics.</p> <p>I would like to appreciate your patience for reading my post and any informative responses.</p> <p>Regards, user36125</p>
user901823
123,198
<p>Williams, Probability with Martingales</p>
654,263
<p>My intention is neither to learn basic probability concepts, nor to learn applications of the theory. My background is at the graduate level of having completed all engineering courses in probability/statistics -- mostly oriented toward the applications without much emphasis on mathematical rigor.</p> <p>Now I am very interested in learning the core logic and mathematical framework of probability theory, as a math branch. More specifically, I would like to learn answers to the following questions:</p> <blockquote> <p>(1) What are the necessary axioms from which we can build probability theory?</p> <p>(2) What are the core theorems and results in the mathematical theory of probability?</p> <p>(3) What are the derived rules for reasoning/inference, based on the theorems/results in probability theory?</p> </blockquote> <p>So I am seeking a book that covers the &quot;heart&quot; of mathematical probability theory -- not needing much on applications, or discussion on extended topics.</p> <p>I would like to appreciate your patience for reading my post and any informative responses.</p> <p>Regards, user36125</p>
Jyotirmoy Bhattacharya
1,195
<p>Billingsley's <em>Probability and Measure</em> covers many interesting topics and has excellent problems. It has an unusual approach: building up the mathematical structure of the subject not all at once but in a number of iterative steps of increasing complexity. That takes up time, but I have still found this repetition helpful in consolidating my understanding. </p> <p>Being very comfortable with real analysis at the level of Rudin's <em>Principles</em> is an essential prerequisite for this book.</p> <p>If you are buying this, avoid the "Anniversary Edition", which is littered with typos. Try to get a copy of the 3rd edition instead.</p>
629,887
<p>I was going through the topic "Introduction to Formal Proof". In one example, while explaining "hypothesis" and "conclusion", I got confused.</p> <p>The example is as follows:</p> <blockquote> <p>If $x\geq4$, then $2^x \geq x^2$.</p> </blockquote> <p>While deriving the conclusion, the article said: "As $x$ grows larger than $4$, the LHS $2^x$ doubles as $x$ increases by $1$, while the RHS grows by the ratio $\frac{x+1}{x}$."</p> <p>Thanks in advance.</p>
Pedro
23,350
<p>Consider $f(x,y)=(x+y)^n$. Expand, differentiate w.r.t. $x$, multiply by $x$, and plug in $x=t$, $y=1-t$. If you know about Bernstein polynomials, you shouldn't be surprised the result is just $t$. </p> <p><strong>Spoiler</strong> Differentiating w.r.t. $x$ gives $$n(x+y)^{n-1}=\sum_{k=1}^nk\binom nkx^{k-1}y^{n-k}$$ Multiply by $x$, divide by $n$: $$x(x+y)^{n-1}=\sum_{k=1}^n\frac kn \binom nk x^{k}y^{n-k}$$</p> <p>Set $x=t,y=1-t$: $$t=\sum_{k=1}^n\frac kn \binom nk t^{k}(1-t)^{n-k}$$</p>
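For a quick numerical sanity check of the final identity, here is a short Python sketch (an illustration added for verification, using `math.comb` for the binomial coefficients):

```python
from math import comb

def bernstein_sum(n, t):
    # Right-hand side: sum_{k=1}^n (k/n) * C(n,k) * t^k * (1-t)^(n-k)
    return sum(k / n * comb(n, k) * t**k * (1 - t)**(n - k)
               for k in range(1, n + 1))

# The identity says this equals t for every n >= 1 and every t.
for n in (1, 5, 20):
    for t in (0.0, 0.3, 0.5, 0.9):
        assert abs(bernstein_sum(n, t) - t) < 1e-12
```

This is the expected-value identity for the binomial distribution in disguise: the sum is $\frac1n\mathbb E[\mathrm{Bin}(n,t)]=t$.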
321,255
<p>I've been looking for the definition of a projective ideal but haven't found anything; all I've seen is the definition of a projective module (but I don't know how these are related, if they are at all). Does anyone know a book where I can look up basic facts about projective ideals?</p>
Jyrki Lahtonen
11,619
<p>I don't know of any other meaning of a <em>projective ideal</em> other than the one suggested by Boris Novikov, i.e. an ideal of a ring <span class="math-container">$R$</span> that is also projective as an <span class="math-container">$R$</span>-module. I want to emphasize that such an ideal <span class="math-container">$I$</span> need NOT be a direct summand of <span class="math-container">$R$</span> (Boris never implied that condition to be necessary - only sufficient!) as well as give more examples.</p> <p>The obvious examples of that are principal ideals of a commutative domain - they are projective by virtue of being free of rank one. </p> <p>Another set of projective ideals that comes to mind are the ideals of rings of integers of a number field. If <span class="math-container">$K$</span> is a finite extension of <span class="math-container">$\mathbb{Q}$</span>, and <span class="math-container">$R={\cal O}_K$</span> its ring of integers, then every ideal of <span class="math-container">$R$</span> is projective. This is because the ring is a Dedekind domain. When the ideal is not principal, it is not isomorphic to <span class="math-container">$R$</span> as a module. For example, if <span class="math-container">$R=\mathbb{Z}[\sqrt{-5}]$</span> is the ring of integers of the field <span class="math-container">$\mathbb{Q}[\sqrt{-5}]$</span>, then the prime ideal <span class="math-container">$P$</span> generated by the elements <span class="math-container">$2$</span> and <span class="math-container">$1+\sqrt{-5}$</span> is not isomorphic to <span class="math-container">$R$</span> itself or a direct summand of it (<span class="math-container">$R$</span> is indecomposable as an <span class="math-container">$R$</span>-module). However, <span class="math-container">$P$</span> can be viewed as a submodule of <span class="math-container">$R^2$</span> in such a way that it becomes a direct summand. 
</p> <p>Namely, it is easy to verify that for all <span class="math-container">$x\in P$</span> we have <span class="math-container">$x(1-\sqrt{-5})/2\in R$</span>. Therefore we have a well-defined monomorphism <span class="math-container">$s: P\to R^2, x\mapsto (-x,x(1-\sqrt{-5})/2)$</span>. It is easy to check that this splits the obvious epimorphism <span class="math-container">$p:R^2\to P, (r_1,r_2)\mapsto 2r_1+(1+\sqrt{-5}) r_2$</span> by another banal calculation.</p> <p>Yet another set of examples consists of the Wedderburn components of a complex group algebra <span class="math-container">$R=\mathbb{C}[G]$</span> of a finite group <span class="math-container">$G$</span>. These are the isotypic components of the (left) regular module, and serve as examples of Boris' more general scheme.</p>
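The two "banal calculations" can be automated. Below is a small Python sketch (the pair <code>(a, b)</code> stands for $a+b\sqrt{-5}$, a convention chosen just for this check) verifying that $p\circ s$ is the identity on generators of $P$, and hence on all of $P$ by $R$-linearity:

```python
# Elements of Z[sqrt(-5)] represented as pairs (a, b) meaning a + b*sqrt(-5).
def mul(x, y):
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def half(x):
    # Divide by 2, asserting the result stays in Z[sqrt(-5)].
    a, b = x
    assert a % 2 == 0 and b % 2 == 0
    return (a // 2, b // 2)

ONE_MINUS = (1, -1)   # 1 - sqrt(-5)
ONE_PLUS = (1, 1)     # 1 + sqrt(-5)

def s(x):
    # The splitting map s: P -> R^2,  x |-> (-x, x(1 - sqrt(-5))/2)
    return ((-x[0], -x[1]), half(mul(x, ONE_MINUS)))

def p(pair):
    # The epimorphism p: R^2 -> P,  (r1, r2) |-> 2*r1 + (1 + sqrt(-5))*r2
    r1, r2 = pair
    return add(mul((2, 0), r1), mul(ONE_PLUS, r2))

# p(s(x)) = x on the generators 2 and 1 + sqrt(-5) (and on 3 + sqrt(-5),
# another element of P), hence on all of P by linearity.
for gen in [(2, 0), (1, 1), (3, 1)]:
    assert p(s(gen)) == gen
```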
271,785
<p>I have to prove that $f(x)=\log(1+x^2)$ is uniformly continuous on $[0,\infty)$ (with $\epsilon ,\delta$ formulas...)</p> <p>I wrote the definition (what I have to prove):</p> <p>$\forall \epsilon&gt;0 \quad \exists \delta&gt;0 \quad s.t. \quad \forall x,y\in [0,\infty) : \quad |x-y|&lt;\delta \quad \Rightarrow \quad |f(x)-f(y)|&lt;\epsilon $</p> <p>So I tried developing $|f(x)-f(y)| = |\log(1+x^2)-\log(1+y^2)| = |\log(\frac{1+x^2}{1+y^2})| $... </p> <p>Now this is where I try to bound the expression from above and simplify it, so I can choose the right $\delta$ (which depends on $\epsilon$) and then say that if the simplified larger expression is still smaller than $\epsilon$, then of course the original $|f(x)-f(y)|$ is smaller than $\epsilon$.</p> <p>But what can I do with this expression? Any algebraic tricks? </p>
mrf
19,440
<p>By the mean value theorem, $$f(x)-f(y) = f'(c)(x-y) = \frac{2c}{1+c^2}(x-y),$$ for some $c$ between $x$ and $y$.</p> <p>Show that $$ \left|\frac{2c}{1+c^2}\right| \le 1,$$ for example using the AM-GM inequality.</p>
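The bound in the hint can be sanity-checked numerically; equality holds exactly at $c=\pm1$, where AM-GM ($1+c^2\ge 2|c|$) is tight. A small sketch:

```python
# Check |2c / (1 + c^2)| <= 1 on a grid, with equality at c = +-1.
def phi(c):
    return abs(2 * c / (1 + c * c))

grid = [k / 100 for k in range(-1000, 1001)]
assert max(phi(c) for c in grid) <= 1.0
assert phi(1.0) == 1.0 and phi(-1.0) == 1.0
```

Combined with the mean value theorem, this gives $|f(x)-f(y)|\le|x-y|$, so $\delta=\epsilon$ works.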
3,688,151
<blockquote> <p>A necklace is made up of <span class="math-container">$3$</span> beads of one sort and <span class="math-container">$6n$</span> of another, those of each sort being similar. Show that the number of arrangements of the beads is <span class="math-container">$3n^2+3n +1$</span>.</p> </blockquote> <p><strong>My attempt</strong>:</p> <p>There are <span class="math-container">$6n+3$</span> beads in total. Also, it is a necklace, so clockwise and anticlockwise arrangements are the same. Hence, the number of their cyclic arrangements is given by <span class="math-container">$$\frac{1}{2}\frac{(6n+2)!}{3!\times 6n!}$$</span></p> <p>This is obviously not what we want to prove. Where am I going wrong? </p> <p>I found a few questions on MSE related to this topic <a href="https://math.stackexchange.com/questions/706595/what-are-the-number-of-circular-arrangements-possible">Q.1</a>, <a href="https://math.stackexchange.com/questions/2690473/how-many-different-necklaces-can-be-formed-with-6-white-and-5-red-beads" rel="nofollow noreferrer">Q.2</a>, <a href="https://math.stackexchange.com/questions/3507865/circular-permutations-with-same-objects">Q.3</a> (I suspect these solutions are wrong). But they didn't address my concern. Also, I'm a high school student and Burnside's lemma is out of my scope. Please help me with an alternate method. </p>
Olius
791,850
<p>This is open to interpretation, but to obtain the stated formula we must count two arrangements as the same necklace when one can be obtained from the other by rotating the beads cyclically <em>or</em> by flipping the necklace over (counting rotations alone gives a strictly larger number). I assume this in what follows.</p> <p>Observe that in a given arrangement, the <span class="math-container">$3$</span> beads of the first sort (I will call them <em>red</em>) separate the <span class="math-container">$6n$</span> (<em>blue</em>) beads of the second sort into three (possibly empty) groups. We can represent the arrangement without loss of information as an ordered triplet of the three group sizes in clockwise order. Obviously the three numbers in this triplet must be nonnegative and must sum to <span class="math-container">$6n$</span>. However, some such triplets represent the same necklace: any cyclic permutation of a triplet is a representation of the same necklace, and so is the reversed triplet (that is what a flip does), e.g.:</p> <ul> <li><span class="math-container">$(3n,3n,0) \sim (3n,0,3n) \sim (0,3n,3n)$</span></li> <li><span class="math-container">$(3n-1,3n-1,2) \sim (3n-1,2,3n-1) \sim (2,3n-1,3n-1)$</span></li> <li>...</li> </ul> <p>Careful! Even if we had the number of nonnegative triplets summing to <span class="math-container">$6n$</span>, we could not simply divide it by <span class="math-container">$6$</span>, the number of rotations and flips. Triplets like <span class="math-container">$(2n,2n,2n)$</span> are counted only once, not six times.*</p> <p>Here we are saved from having to use Burnside's lemma or similar results because there are only three numbers: the three cyclic rotations of a triplet together with their three reversals realize <em>every</em> reordering of it. So two triplets represent the same necklace exactly when they consist of the same three numbers, and we may take the sorted triplet as the canonical representative. <strong>Thus we can represent each necklace as a triplet <span class="math-container">$(a,b,c)$</span> where <span class="math-container">$a+b+c=6n$</span> and</strong></p> <ul> <li><span class="math-container">$a \ge b \ge c \ge 0$</span>.</li> </ul> <p>For each necklace we have one and only one such triplet, and vice-versa; this correspondence is a <strong>bijection</strong>. So we just have to count the number of such triplets, i.e. the partitions of <span class="math-container">$6n$</span> into at most three parts. Try it!</p> <p>*This observation is what is behind Burnside's lemma. Later I might add a small explanation of the lemma as problems like this are much more cleanly solved using it.</p> <hr> <h2>A note on the lemma</h2> <p>Let's try to make the problem description a little more formal. First consider the simpler situation of <span class="math-container">$n$</span> beads on a straight piece of string, each of a different color (I will label them <span class="math-container">$a$</span>, <span class="math-container">$b$</span>, ..., <span class="math-container">$z$</span>, but there can be any number). Any necklace arrangement with these same beads can be obtained from one such string arrangement by tying the string ends together, but more than one string arrangement produces the given necklace arrangement in this way. 
The precise loss of identity we introduce among the string arrangements by tying ends is the following:</p> <blockquote> <p>The string arrangement <span class="math-container">$abc\dotso xyz$</span> becomes the same necklace arrangement as any of <span class="math-container">$bc\dotso xyza$</span>, <span class="math-container">$c\dotso xyzab$</span>, etc., as well as <span class="math-container">$zabc\dotso xy$</span>, <span class="math-container">$yzabc\dotso x$</span>, etc.</p> </blockquote> <p>These "rotations" of strings of beads are commonly called <em>cyclic permutations</em>. There are exactly <span class="math-container">$n$</span>. With this terminology, a string gives the same necklace as any cyclic permutation of itself. A cyclic permutation really is the operation of taking off a certain number of beads from one end of the string and putting them back on the other end in the same order; formally this operation is seen as a function from strings to strings. This view draws attention to three important properties of cyclic permutations:</p> <ul> <li>They can be chained together. We say <em>composed</em>.</li> <li>One of them does nothing, namely moving 0 beads from one end to the other. We call this one the <em>identity</em>, as it sends every string to itself.</li> <li>Each can be undone. More formally, each cyclic permutation can be composed with its <em>inverse</em> permutation to give the identity. (The inverse is unique for a given permutation).</li> </ul> <p>These three properties are <em>the group axioms</em>. The cyclic permutations form a <em>group</em>, and it is such a common one that it is given a name: <em>the cyclic group of order <span class="math-container">$n$</span></em>, written <em><span class="math-container">$C_n$</span></em>.</p> <p>With these permutations we can do more than just rotate the particular strings we have been considering. 
We can also use them to rotate strings with other, less varied beads; not coincidentally, the strings of the original problem, with 3 beads of one color and a multiple of 6 of another, are an example. This set of strings will be called <span class="math-container">$X$</span>. This action of sending elements of <span class="math-container">$X$</span> to other elements of <span class="math-container">$X$</span> through <span class="math-container">$C_n$</span> is called exactly that, an <em>action</em> of <span class="math-container">$C_n$</span> on <span class="math-container">$X$</span>.</p> <p>Burnside's lemma relates the number of strings in <span class="math-container">$X$</span> to the sizes of special substructures of <span class="math-container">$C_n$</span>. These substructures are identified by the way they act on strings in <span class="math-container">$X$</span>. Their sizes are relatively easy to determine, and because the number of these substructures is very closely related to the number of distinct necklaces obtained from strings in <span class="math-container">$X$</span>, the lemma allows us to calculate this number.</p> <p>I will complete this fly-over tomorrow. I'll probably add links to Wikipedia for the various objects mentioned.</p>
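As a sanity check by brute force, the Python sketch below identifies necklaces that are related by a rotation or a flip; this is the reading that reproduces $3n^2+3n+1$ (rotations alone give a strictly larger count):

```python
from itertools import combinations

def necklace_count(n):
    # Brute-force count of necklaces with 3 red and 6n blue beads,
    # identifying rotations AND flips (the dihedral group).
    m = 6 * n + 3
    seen = set()
    for reds in combinations(range(m), 3):
        # Canonical form: lexicographically least image of the red
        # positions under all rotations and reflections.
        images = []
        for r in range(m):
            images.append(tuple(sorted((p + r) % m for p in reds)))   # rotation
            images.append(tuple(sorted((r - p) % m for p in reds)))   # reflection
        seen.add(min(images))
    return len(seen)

for n in (1, 2, 3):
    assert necklace_count(n) == 3 * n * n + 3 * n + 1   # 7, 19, 37
```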
1,728,662
<p>Every set is in bijection with some ordinal (given the axiom of choice), and its cardinality is defined as the least such ordinal. I know that a set of ordinals is well-ordered by $\subseteq$ (inclusion) and thus has a $\subseteq$-least element. However, I wonder which axiom guarantees that the ordinals in bijection with a given set really form a set. I worry about this since I noticed that the class of all ordinals is not a set. Thanks, my friends. </p>
Asaf Karagila
622
<p>The power set axiom is what allows us to ensure that there are arbitrarily large ordinals; this is true even without assuming the axiom of choice, as witnessed by Hartogs' theorem. </p> <p>Well, to be accurate, power set ensures there are very large well-ordered sets, and replacement gives us the ordinals. </p> <p>It is consistent with the failure of the power set axiom that every ordinal, and in fact every set, is countable. To see this, simply note that the set $\rm HC$ of those sets with a countable transitive closure is a model of all the axioms of set theory except the power set axiom. </p>
1,170,708
<p>What functions satisfy $f(x)+f(x+1)=x$?</p> <p>I tried but I do not know if my answer is correct. $f(x)=y$</p> <p>$y+f(x+1)=x$</p> <p>$f(x+1)=x-y$</p> <p>$f(x)=x-1-y$</p> <p>$2y=x-1$</p> <p>$f(x)=(x-1)/2$</p>
Milo Brandt
174,927
<p>Your conclusion isn't quite right, and isn't found correctly either; your definition of $f(x)=y$ is a bit misleading because we <em>have</em> to interpret it as applying to a particular $x$ and not to <em>all</em> $x$. So when you step from $$f(x+1)=x-y$$ to $$f(x)=x-1-y$$ by "shifting" the argument by one (which would be allowable <em>if</em> $x$ were "free"), you're making a misstep, since, if we wanted to be sure, the proper way to make this shift would be to set $u=x+1$ and then $$f(u)=u-1-y$$ which is true, but we can't say $f(u)=y$.</p> <p>One way to approach this sort of problem is to try to find some function related to $f$ which satisfies a simpler relation. In particular, suppose we let $$g(x)=f(x)-\frac{x}2+\frac{1}4$$ which we choose to "undo" the $x$ on the left hand side. Then we can show $$g(x)+g(x+1)=f(x)+f(x+1)-\frac{x}2-\frac{x+1}2+\frac{1}4+\frac{1}4=x-\frac{x}2-\frac{x+1}2+\frac{1}2=0.$$ So, we can say that we can build solutions by choosing any $g$ satisfying $$g(x)=-g(x+1)$$ and adding the quantity $\frac{x}2-\frac{1}4$ to them. So, for instance, in a vein similar to your solution, we get $f(x)=\frac{x}2-\frac{1}4$ by setting $g$ to be zero everywhere. More generally, we can choose the values of $g$ arbitrarily in $[0,1)$ and then use the relation $g(x+n)=(-1)^n g(x)$ for integer $n$ to find its value everywhere else. A simple function we could use would be $g(x)=\sin(\pi x)$, yielding the solution $f(x)=\sin(\pi x)+\frac{x}2-\frac{1}4$, however there are infinitely many possible solutions.</p>
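A quick numerical check of a solution of this form; note the constant must be $-\frac14$, since plugging $\frac{x}2+c$ into $f(x)+f(x+1)=x$ forces $2c+\frac12=0$:

```python
from math import sin, pi

def f(x):
    # One solution of f(x) + f(x+1) = x: the anti-periodic part
    # g(x) = sin(pi*x), which satisfies g(x) = -g(x+1), plus the
    # particular part x/2 - 1/4.
    return sin(pi * x) + x / 2 - 1 / 4

for x in (-2.0, 0.0, 0.37, 1.5, 10.25):
    assert abs(f(x) + f(x + 1) - x) < 1e-9
```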
1,734,819
<p>I think I'm on the right track with constructing this proof. Please let me know.</p> <p>Claim: Prove that there exists a unique real number $x$ between $0$ and $1$ such that $x^{3}+x^{2} -1=0$</p> <p>Using the intermediate value theorem we get $$r^{3}+r^{2}-1=c^{3}+c^{2}-1$$ ...... $$r^{3}+r^{2}-c^{3}-c^{2}=0$$</p> <p>Factoring gives us</p> <p>$$(r-c)[(r^{2}+rc+c^{2})+(r+c)]=0$$ I'm lost now. How do I prove that there exists a number between $0$ and $1$.</p>
Community
-1
<p>Existence follows from the intermediate value theorem, and uniqueness from the fact that the derivative has constant sign on this interval. But let's prove that the zero is actually unique on the entire real line.</p> <hr> <p>Suppose the polynomial $p$ has real zeros at $\alpha$ and $\beta$. Take $\beta$ to be the largest zero, which happens to be near $0.75$. You can check this using the intermediate value theorem and looking at the derivative.</p> <p>By the mean value theorem, there exists a number $c$ between $\alpha$ and $\beta$ for which</p> <p>$$p'(c) = \frac{p(\alpha) - p(\beta)}{\alpha - \beta} = 0$$</p> <p>That is, $3c^2 + 2c = 0$. Thus either $c = 0$ or $c = -2/3$. </p> <p>Now $p(-2/3) &lt; 0$ and $p'(x) &gt; 0$ for $x &lt; -2/3$, so there are no zeros in this domain. Hence, $\alpha$ must lie between $-2/3$ and $0$.</p> <p>Now argue this can't happen directly: Perhaps start with the fact that $|x^3 + x^2| &lt; 1$ on $(-2/3, 0)$.</p> <p>Finally, conclude that $\beta$ is unique.</p>
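A numerical sketch of both halves of the argument on $(0,1)$: the sign change gives existence, the positive derivative gives uniqueness there, and bisection locates the root near $0.755$:

```python
def p(x):
    return x**3 + x**2 - 1

# Sign change on [0, 1] gives existence (intermediate value theorem) ...
assert p(0) < 0 < p(1)

# ... and p'(x) = 3x^2 + 2x > 0 on (0, 1] gives uniqueness there.
assert all(3 * x * x + 2 * x > 0 for x in (0.1, 0.5, 0.9))

# Bisection locates the root.
lo, hi = 0.0, 1.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p(mid) < 0:
        lo = mid
    else:
        hi = mid
assert abs(p(lo)) < 1e-12
assert 0.754 < lo < 0.755
```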
174,075
<p>What is the difference between saying that a line is normal to another line and saying that a line is perpendicular to another line?</p>
VISHAL YASH
196,457
<p>A normal is an object such as a line or vector that is perpendicular to a given object. For example, in the two-dimensional case, the normal line to a curve at a given point is the line perpendicular to the tangent line to the curve at the point.</p>
1,307,280
<p>I am at a point X. I am 2 blocks up from a point A and 3 blocks down from my home H. Every time I walk one block, I toss a coin.</p> <p>H . . . X . . A</p> <p>If the coin shows heads I go one block up, and if not I go one block down.</p> <p>What is the probability of arriving home before reaching the point A?</p> <hr> <p>What I really want is to solve this problem in a recursive way. Maybe it can be solved with a binomial distribution... But is that also recursive?</p>
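The recursive formulation asked about: let $p_k$ be the probability of reaching H first, starting $k$ blocks above A, with absorbing positions $A=0$ and $H=5$ and $X$ at $k=2$ (my reading of the layout). Then $p_k=\frac12(p_{k-1}+p_{k+1})$ with $p_0=0$, $p_5=1$. A Python sketch solving this recursion by relaxation:

```python
def prob_home_first(start, home, sweeps=20000):
    # p[k] = probability of reaching `home` before 0 for a fair +1/-1
    # walk started at k. Solve p[k] = (p[k-1] + p[k+1]) / 2 iteratively,
    # with the absorbing boundary values p[0] = 0 and p[home] = 1.
    p = [0.0] * (home + 1)
    p[home] = 1.0
    for _ in range(sweeps):
        for k in range(1, home):
            p[k] = 0.5 * (p[k - 1] + p[k + 1])
    return p[start]

# Classic gambler's ruin: for a fair walk the answer is start/home = 2/5.
assert abs(prob_home_first(2, 5) - 2 / 5) < 1e-9
```

The closed form $p_k = k/\text{home}$ for a fair walk follows because the recursion forces $p$ to be linear in $k$.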
Michael Hardy
11,667
<blockquote> <blockquote> <p>when $x$ gets closer to $a$, $f(x)$ gets closer to $L$</p> </blockquote> </blockquote> <p>That is wrong. Consider two examples:</p> <ul> <li><p>Let $f(x) = 6-(x-4)^2$. Clearly $f(x)$ never gets bigger than $6$, so the limit cannot be $7$, but $f(x)$ gets closer to $7$ (and to all numbers that are bigger than $6$) as $x$ gets closer to $4$, but $\lim\limits_{x\to4}f(x)$ is $6$, not $7$.</p></li> <li><p>Suppose that as $x$ approaches $4$, $g(x)$, depending continuously on $x$, goes up to $10+0.1$, then down to $10-0.1$, then up to $10+0.01$, then down to $10-0.01$, then up to $10+0.001$, then down to $10-0.001$, etc. Then $\lim\limits_{x\to4} g(x)=10$. But $g(x)$ does not keep getting closer to $10$, but gets alternately closer and farther away: As it's going down from $10+0.1$ to $10$ it's getting closer to $10$, and as it continues going downward from $10$ to $10-0.1$, it's getting farther from $10$.</p></li> </ul>
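The first example is easy to probe numerically: as $x$ approaches $4$, the values of $f(x)=6-(x-4)^2$ do "get closer" to $7$, yet they converge to $6$ and never come within distance $1$ of $7$. A small sketch:

```python
def f(x):
    return 6 - (x - 4) ** 2

xs = [4 + 10 ** (-k) for k in range(1, 8)]
gaps_to_6 = [abs(f(x) - 6) for x in xs]

# The distances to 6 shrink toward 0 as x -> 4 ...
assert gaps_to_6 == sorted(gaps_to_6, reverse=True)
# ... while f(x) always stays at distance at least 1 from 7.
assert all(abs(f(x) - 7) >= 1 for x in xs)
```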
165,487
<p>After I use Simplify on an expression I get $\dfrac{1}{2}\sqrt{-\dfrac{\sqrt{(-b^2+16|c|^2)(4|c|^2+b\Im(c))^2}}{4a(4|c|^2+b\Im(c))}}$. This expression can clearly be simplified further by noticing that the factor $(4|c|^2+b\Im(c))^2$ under the square root in the numerator cancels the matching factor in the denominator, giving $\dfrac{1}{2}\sqrt{-\dfrac{\sqrt{(-b^2+16|c|^2)}}{4a}}$. This is clearly a much simpler form since it has fewer terms, so my question is: why doesn't Mathematica do this?</p> <p>Edit: here is my code</p> <pre><code>real[x_, y_] := -2 a x^3 - 2 a y^2 x + 2 Re[c] x + 2 Im[c] y + b/2 y; imaginary[x_, y_] := -2 a x^2 y - 2 a y^3 + 2 Im[c] x - 2 Re[c] y - b/2 x; sol = Solve[{real[x, y] == 0, imaginary[x, y] == 0},{x,y}]; FullSimplify[Sqrt[(x /. sol[[2, 1]])^2 + (y /. sol[[2, 2]])^2], Assumptions -&gt; {(a | b) ∈ Reals &amp;&amp; c ∈ Complexes &amp;&amp; (a | b | c) &gt; 0}] </code></pre>
Alexei Boulbitch
788
<p>You have given contradictory assumptions. In Mma the condition that a variable, say, c, is positive (<code>c&gt;0</code>) automatically means that it belongs to Reals. Thus, when you fix <code>c ∈ Complexes &amp;&amp; (a | b | c) &gt; 0</code> you mislead Mma. </p> <p>According to your initial expressions the parameters a and b are Reals and positive, while c is Complex, am I right? If yes, try this: </p> <pre><code> expr = Simplify[Sqrt[(x /. sol[[2, 1]])^2 + (y /. sol[[2, 2]])^2], Assumptions -&gt; {a, b} &gt; 0]; MapAt[PowerExpand, expr, {2, 1}] (* 1/2 Sqrt[-((I Sqrt[b^2 - 16 Im[c]^2 - 16 Re[c]^2])/a)] *) </code></pre> <p>Have fun!</p> <p><strong>Edit</strong>: To address your question:</p> <p>{2,1} is a TreeCoordinate of the part of the whole expression that is under the outer square root. The tree can be visualized with the function </p> <pre><code>TreeForm[expr] </code></pre> <p>yielding the following structure </p> <p><a href="https://i.stack.imgur.com/sGdcv.jpg" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/sGdcv.jpg" alt="enter image description here"></a></p> <p>Here the arrow indicates the element {2,1} that we need. This can be made visible if you hover the cursor over this element. It is this element to which the PowerExpand function is conveniently applied.</p>
1,679,044
<blockquote> <p>Sketch the region enclosed by the given curves. Decide whether to integrate with respect to x or y. Draw a typical approximating rectangle. Then find the area.</p> <p>$$y = 2x^2, y = 8x^2, 3x + y = 5, x ≥ 0$$ </p> </blockquote> <p>Here is my drawing: <a href="http://www.webassign.net/waplots/d/a/986f170c68ee080b539049f410cbba.gif" rel="nofollow">http://www.webassign.net/waplots/d/a/986f170c68ee080b539049f410cbba.gif</a> </p> <p>If I'm integrating in terms of x, I know I'm going to need more than one integral, but I don't know how to set it up. Any help is appreciated.</p>
3SAT
203,577
<p>You can use double integral, </p> <p>the intersection points: </p> <p>between $y=8x^2$ and $y=5-3x$ are at $x=-1$ and $x=5/8$</p> <p>between $y=2x^2$ and $y=5-3x$ are at $x=-5/2$ and $x=1$</p> <p>let's assume that we want $\text{d$y$d$x$}$ </p> <p>for $I_1$: when $y$ goes from $y=2x^2$ to $y=8x^2$ then $x$ goes from $0$ to $5/8$</p> <p>for $I_2$: when $y$ goes from $y=2x^2$ to $y=5-3x$ then $x$ goes from $5/8$ to $1$</p> <p>$$\underbrace{\int_{x=0}^{x=5/8}\int_{y=2x^2}^{y=8x^2}1\text{d$y$d$x$}}_{I_1}+\underbrace{\int_{x=5/8}^{x=1}\int_{y=2x^2}^{y=5-3x}1\text{d$y$d$x$}}_{I_2}$$</p> <p>$$=\int_{0}^{5/8} \left(8x^2 - 2x^2\right)\text{d}x + \int_{5/8}^{1} \left(-3x + 5 - 2x^2\right)\text{d}x = \frac{125}{256}+\frac{117}{256} = \frac{121}{128}$$</p>
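The two integrals can be double-checked with exact rational arithmetic; a Python sketch using `fractions` and the antiderivatives evaluated at the limits:

```python
from fractions import Fraction as F

def I1():
    # Integral of (8x^2 - 2x^2) from 0 to 5/8, i.e. [2x^3] at x = 5/8.
    x = F(5, 8)
    return 2 * x**3

def I2():
    # Integral of (5 - 3x - 2x^2) from 5/8 to 1,
    # antiderivative G(x) = 5x - 3x^2/2 - 2x^3/3.
    G = lambda x: 5 * x - F(3, 2) * x**2 - F(2, 3) * x**3
    return G(F(1)) - G(F(5, 8))

assert I1() == F(125, 256)
assert I2() == F(117, 256)
assert I1() + I2() == F(121, 128)
```

So the total area is $\frac{125}{256}+\frac{117}{256}=\frac{121}{128}$.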
1,906,013
<blockquote> <p>Let $f$ be a smooth function such that $f'(0) = f''(0) = 1$. Let $g(x) = f(x^{10})$. Find $g^{(10)}(x)$ and $g^{(11)}(x)$ when $x=0$.</p> </blockquote> <p>I tried applying chain rule multiple times:</p> <p>$$g'(x) = f'(x^{10})(10x^9)$$</p> <p>$$g''(x) = \color{red}{f'(x^{10})(90x^8)}+\color{blue}{(10x^9)f''(x^{10})}$$</p> <p>$$g^{(3)}(x)=\color{red}{f'(x^{10})(720x^7) + (90x^8)f''(x^{10})(10x^9)}+\color{blue}{(10x^9)(10x^9)f^{(3)}(x^{10})+f''(x^{10})(90x^8)}$$</p> <p>The observation here is that, each time we take derivative, one "term" becomes two terms $A$ and $B$, where $A$ has power of $x$ decreases and $B$ has power of $x$ increases. $A$ parts will become zero when evaluated at zero, but what about $B$ parts?</p>
benguin
121,903
<p>Try applying the general Leibniz rule after you take the first derivative: <a href="https://en.m.wikipedia.org/wiki/General_Leibniz_rule" rel="nofollow">https://en.m.wikipedia.org/wiki/General_Leibniz_rule</a></p> <p>From there, you should be able to identify which terms in the sum go to zero when $x$ is evaluated at $0$.</p>
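Following the hint, the bookkeeping collapses to Taylor coefficients: $g(x)=f(x^{10})=\sum_k \frac{f^{(k)}(0)}{k!}x^{10k}$ contains only powers of $x$ divisible by $10$, so $g^{(10)}(0)=10!\cdot f'(0)=10!$ and $g^{(11)}(0)=0$. A quick sketch checking this on the concrete instance $f(u)=e^u$ (which satisfies $f'(0)=f''(0)=1$):

```python
from math import factorial

def g_derivative_at_zero(n):
    # With f(u) = e^u: g(x) = exp(x^10) = sum_{k>=0} x^(10k) / k!.
    # The n-th derivative at 0 is n! times the coefficient of x^n,
    # which is 1/(n/10)! when 10 divides n and 0 otherwise.
    if n % 10 != 0:
        return 0
    return factorial(n) // factorial(n // 10)

assert g_derivative_at_zero(10) == factorial(10)   # 10! * f'(0) = 3628800
assert g_derivative_at_zero(11) == 0
```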
2,861,443
<p>Let $C_c(\mathbb{R})$ be the following:</p> <p>$$C_c = \{ f \in C(\mathbb{R}) \mid \exists \text{ } T &gt; 0 \text{ s.t. } f(t) = 0 \text{ for } |t| \geq T\}$$</p> <p>Let $T_n \in L(C_c(\mathbb{R}))$ be a linear operator such that:</p> <p>$$T_n u = \delta_n *u, \forall u \in C_c(\mathbb{R}),$$</p> <p>where</p> <p>$$ \delta_n(t)= \begin{cases} n^2(t+1/n) &amp; -1/n \leq t \leq 0 \\ -n^2(t-1/n) &amp; 0 &lt; t \leq 1/n \\ 0 &amp; \text{elsewhere} \end{cases} $$</p> <p>I have to prove that with respect to the 2-norm, the operator $T_n$ has $\|T_n\| = 1$ and I have succeeded in proving that $\|T_nu\|_2 \leq \|u\|_2, \forall u \in C_c(\mathbb{R}).$</p> <p>Now it remains to show that the bound is sharp: since the supremum defining $\|T_n\|$ need not be attained, it is enough to find $u_k \in C_c(\mathbb{R})$ with $\|T_nu_k\|_2 / \|u_k\|_2 \to 1$, but I really don't see how such $u_k$ should be shaped.</p>
Paul Frost
349,785
<p>You know that each chart $\phi : U \to V \subset \mathbb{R}^n$ for $M$ induces a canonical bijection $\tilde{\phi} : TM \mid_U = p^{-1}(U) \to V \times \mathbb{R}^n$. We define $W \subset TM$ to be open if $\tilde{\phi}(W \cap p^{-1}(U))$ is open in $V \times \mathbb{R}^n$ for all $\phi$.</p> <p>It is an easy exercise to show that this is in fact a Hausdorff topology.</p>
3,605,368
<p>Imagine a <span class="math-container">$9 \times 9$</span> square array of pigeonholes, with one pigeon in each pigeonhole. Suppose that all at once, all the pigeons move up, down, left, or right by one hole. (The pigeons on the edges are not allowed to move out of the array.) Show that some pigeonhole winds up with two pigeons in it.</p> <p>Let each side of the square be n. There are <span class="math-container">$n^2$</span> pigeons and pigeonholes. If the pigeons are shifted in any direction, then there will be n empty pigeonholes on the side opposite to the direction. Furthermore, now <span class="math-container">$n^2$</span> pigeons are trying to fit into <span class="math-container">$n^2 - n$</span> pigeonholes. We can invoke the pigeon hole principle as follows: Let the entire set of pigeons be <span class="math-container">$X$</span> and the set of pigeonholes to be populated after the shift be <span class="math-container">$Y$</span>. For <span class="math-container">$X$</span> and <span class="math-container">$Y$</span> and for some integer <span class="math-container">$k$</span>, if <span class="math-container">$|X| &gt; k |Y|$</span>, and <span class="math-container">$f : X \to Y$</span>, then <span class="math-container">$f(x_1) = \cdots = f(x_{k+1})$</span> for some distinct <span class="math-container">$x_1, \dots, x_{k+1}$</span>.</p> <p>So, <span class="math-container">$81 &gt; 72 k$</span> which means <span class="math-container">$k &gt; 1.125$</span> which means <span class="math-container">$k = 2$</span>. This means that there are at least <span class="math-container">$3$</span> instances with <span class="math-container">$2$</span> pigeons in it.</p> <p>Now intuitively I know there ought to be <span class="math-container">$9$</span> instances. Where did I go wrong? Forgive me if I have butchered the whole thing. I am new to this type of math. </p>
fleablood
280,126
<p>Label each pigeon as <span class="math-container">$(a,b)$</span> where <span class="math-container">$a$</span> is the column number and <span class="math-container">$b$</span> is the row number. Each pigeon must move to a square that has the same column or row number but the row or column number is off by <span class="math-container">$1$</span>.</p> <p>So if <span class="math-container">$(a,b)$</span> is the square the pigeon starts at, and <span class="math-container">$(a',b')$</span> is the square it moves to, then <span class="math-container">$a+b = a' + b' \pm 1$</span>. So if <span class="math-container">$a+b$</span> is even, then <span class="math-container">$a'+b'$</span> is odd, and vice versa. </p> <p>So all the pigeons with an even sum must move to a square with odd sum and vice versa. As there are <span class="math-container">$41$</span> squares with even sums and only <span class="math-container">$40$</span> with odd sums, we will end up with <span class="math-container">$41$</span> pigeons in the <span class="math-container">$40$</span> squares with odd sums, and <span class="math-container">$40$</span> pigeons in the <span class="math-container">$41$</span> squares with even sums.</p> <p>So at least one of the squares with odd sums will have two pigeons, and at least one of the squares with even sums will be empty.</p>
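The parity count is easy to verify directly; a small Python sketch:

```python
# Count the squares (a, b) of a 9x9 grid by the parity of a + b.
cells = [(a, b) for a in range(1, 10) for b in range(1, 10)]
even = sum(1 for a, b in cells if (a + b) % 2 == 0)
odd = len(cells) - even
assert (even, odd) == (41, 40)

# Every move flips the parity of a + b, so all 41 "even" pigeons must
# land on the 40 "odd" squares: two of them share a hole by pigeonhole.
assert even > odd
```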
3,176,155
<blockquote> <p>Let <span class="math-container">$ABCD$</span> be a parallelogram. I proved that the angle bisectors of <span class="math-container">$A$</span>, <span class="math-container">$B$</span>, <span class="math-container">$C$</span>, <span class="math-container">$D$</span> form a rectangle. How can I prove that the diagonals of this rectangle are parallel to the sides of <span class="math-container">$ABCD$</span>? And is there a relation between the length of these diagonals and <span class="math-container">$AB$</span> or <span class="math-container">$BC$</span>?</p> </blockquote> <p>I'm looking for an elementary solution only using parallelograms, congruent triangles.</p> <p><a href="https://i.stack.imgur.com/uehHJ.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/uehHJ.png" alt="enter image description here"></a></p>
Vasili
469,083
<p><a href="https://i.stack.imgur.com/iRtKy.png" rel="nofollow noreferrer"><img src="https://i.stack.imgur.com/iRtKy.png" alt="enter image description here" /></a></p> <p>Without loss of generality, we can assume that <span class="math-container">$AB&gt;BC$</span>. Let's start by proving that <span class="math-container">$AW=ZB=DU=VC=CD-AD$</span>.<p>In a parallelogram, adjacent angles are supplementary, so <span class="math-container">$m\angle{A}+m\angle{D}=180^\circ$</span>. From <span class="math-container">$\triangle{ADV}$</span> we have: <span class="math-container">$0.5 \cdot m\angle{A}+m\angle{D}+m\angle{DVA}=180^\circ$</span>, which means <span class="math-container">$m\angle{DVA}=0.5 \cdot m\angle{A}$</span>. Thus, <span class="math-container">$\triangle{ADV}$</span> is isosceles and <span class="math-container">$DV=AD$</span>, <span class="math-container">$VC=CD-AD$</span>. <p>Similarly, <span class="math-container">$CU=BC$</span> and <span class="math-container">$DU=CD-BC=CD-AD=VC$</span>.<p> It's easy to show now that <span class="math-container">$\triangle{AZM} \cong \triangle{DMV}$</span>: <span class="math-container">$AZ=AD=DV$</span> and alternate interior angles are congruent. Thus, <span class="math-container">$MZ=MD$</span>. Similarly, <span class="math-container">$\triangle{AMD} \cong \triangle{BCS}$</span> so <span class="math-container">$BS=MD=MZ$</span>. Because <span class="math-container">$DZ||BU$</span>, <span class="math-container">$MSBZ$</span> is a parallelogram and <span class="math-container">$MS||AB||DC$</span>. <p>We can now see that <span class="math-container">$MS=CD-AD=TN$</span> (diagonals of rectangle are congruent).</p>
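The computations can be cross-checked numerically. The sketch below builds one concrete parallelogram (coordinates chosen arbitrarily for the check), intersects adjacent angle bisectors to get the rectangle's vertices, and confirms that both diagonals have length $AB-AD$, with $MS\parallel AB$ and $TN\parallel AD$:

```python
from math import hypot, isclose

# A concrete parallelogram ABCD (arbitrary choice for this check).
A, B = (0.0, 0.0), (5.0, 0.0)
D = (1.0, 2.0)
C = (B[0] + D[0], B[1] + D[1])

def unit(v):
    n = hypot(*v)
    return (v[0] / n, v[1] / n)

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def bisector(P, Q, R):
    # Interior bisector of angle QPR at vertex P: a point and a direction.
    u, v = unit(sub(Q, P)), unit(sub(R, P))
    return P, (u[0] + v[0], u[1] + v[1])

def intersect(l1, l2):
    # Solve p + t*d = q + s*e by Cramer's rule, return the point.
    (p, d), (q, e) = l1, l2
    det = d[0] * (-e[1]) - d[1] * (-e[0])
    t = ((q[0] - p[0]) * (-e[1]) - (q[1] - p[1]) * (-e[0])) / det
    return (p[0] + t * d[0], p[1] + t * d[1])

bA = bisector(A, B, D)
bB = bisector(B, A, C)
bC = bisector(C, B, D)
bD = bisector(D, A, C)

M = intersect(bA, bD)   # labels as in the figure
S = intersect(bB, bC)
T = intersect(bA, bB)
N = intersect(bC, bD)

a, b = hypot(*sub(B, A)), hypot(*sub(D, A))   # a = AB, b = AD

# Both diagonals have length AB - AD ...
assert isclose(hypot(*sub(S, M)), a - b)
assert isclose(hypot(*sub(N, T)), a - b)

# ... and MS is parallel to AB, TN is parallel to AD (zero cross product).
ms, tn = sub(S, M), sub(N, T)
ab, ad = sub(B, A), sub(D, A)
assert abs(ms[0] * ab[1] - ms[1] * ab[0]) < 1e-9
assert abs(tn[0] * ad[1] - tn[1] * ad[0]) < 1e-9
```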
3,857,698
<p>Let <span class="math-container">$D_1, ..., D_n$</span> be <span class="math-container">$n$</span> arbitrary sets where <span class="math-container">$D_i \cap D_j \neq \emptyset$</span>. In the simplified case where <span class="math-container">$n = 2$</span>, we have that <span class="math-container">$$ \begin{split} | X \cap D_1 | + | X \cap D_2 | = &amp;| X \cap (D_1 \setminus D_2) | + | X \cap (D_1 \cap D_2) | \\ &amp;+ | X \cap (D_2 \setminus D_1) | + | X \cap (D_1 \cap D_2) | \\ = &amp; |X| + | X \cap (D_1 \cap D_2) | \\ \leq &amp; |X| + | D_1 \cap D_2 |. \end{split} $$</span></p> <p>My question is: can we generalize the above upper bound to something like <span class="math-container">$$ \sum_{i = 1}^n | X \cap D_i | \leq |X| + c, $$</span> where <span class="math-container">$c$</span> depends on <span class="math-container">$(D_1, D_2, ..., D_n)$</span>? It is self-evident that, if <span class="math-container">$D_1, ..., D_n$</span> form a disjoint partition of a universe, then we have <span class="math-container">$\sum_{i = 1}^n | X \cap D_i | = |X|$</span>. However, it seems difficult for me to bound <span class="math-container">$c$</span> when <span class="math-container">$D_1, ..., D_n$</span> are not disjoint.</p> <p>It would be appreciated if you could give me any hint.</p>
Vercassivelaunos
803,179
<p>We can start way earlier to get a geometric interpretation, at the real numbers. Multiplication by a real number is a combination of scaling and mirroring. Multiplying by a a positive number is scaling the real line, multiplying by <span class="math-container">$-1$</span> is mirroring it at the origin. On an abstract level, a core feature of mirroring is that doing it twice returns the original picture. This gives rise to the interpretation that multiplication by <span class="math-container">$-1$</span> is a mirroring, since <span class="math-container">$(-1)^2=1$</span>, so multiplying by <span class="math-container">$-1$</span> twice is the identity.</p> <p>The complex numbers give rise to a similar interpretation. We can still view multiplication by <span class="math-container">$-1$</span> as mirroring the plane at the origin, but in a 2d context, we can also see it as a <span class="math-container">$180^\circ$</span> rotation. They are really the same. But we also get a new element, <span class="math-container">$\mathrm i$</span>. Its basic feature is that <span class="math-container">$\mathrm i^2=-1$</span>, that is, multiplying by <span class="math-container">$\mathrm i$</span> twice is rotation by <span class="math-container">$180^\circ$</span>. But that's also a core feature of rotation by <span class="math-container">$90^\circ$</span>: rotating by that amount twice is the same as rotating by <span class="math-container">$180^\circ$</span> once. So that's a good hint that complex multiplication <em>can</em> have something to do with rotations. We just need to find a fitting topology (a scalar product to describe angles, most importantly) which makes multiplication by <span class="math-container">$\mathrm i$</span> an actual <span class="math-container">$90^\circ$</span> rotation. 
And it turns out that the scalar product wrt which <span class="math-container">$1$</span> and <span class="math-container">$\mathrm i$</span> form an orthonormal basis does just that. So it's a good idea to choose those as a basis of <span class="math-container">$\mathbb C$</span> as a real vector space, making them span the coordinate axes. In this picture, multiplication by <span class="math-container">$\mathrm i$</span> will be guaranteed to be a <span class="math-container">$90^\circ$</span> rotation. And using some algebra, all other complex multiplications can then be shown to also be rotations and scalings.</p>
2,459,169
<p>Let $A$ be an $n \times n$ matrix. Show that if $A^2 = 0$, then $I − A$ is nonsingular and $(I − A)^{−1} = I + A$.</p> <p>(Matrix Algebra)</p>
Clive Newstead
19,542
<p>Nonsingularity follows from existence of an inverse. Note that $$(I-A)(I+A) = I^2-AI+IA-A^2$$ Now do some (basic) matrix algebra, using the fact that $A^2=0$.</p>
2,459,169
<p>Let $A$ be an $n \times n$ matrix. Show that if $A^2 = 0$, then $I − A$ is nonsingular and $(I − A)^{−1} = I + A$.</p> <p>(Matrix Algebra)</p>
Robert Lewis
67,071
<p>Note that, just as with numbers</p> <p>$(1 + x)(1 -x) = (1 - x) + x(1 - x) = 1 - x + x - x^2 = 1 - x^2, \tag 1$</p> <p>so with matrices</p> <p>$(I - A)(I + A) = I - A^2; \tag 2$</p> <p>the algebraic maneuvers are essentially the same in each case. Now with</p> <p>$A^2 = 0, \tag 3$</p> <p>(2) becomes</p> <p>$(I - A)(I + A) = I, \tag 4$</p> <p>which shows both that $I - A$ is nonsingular and that</p> <p>$(I - A)^{-1} = I + A; \tag 5$</p> <p>we can also see that $I - A$ is nonsingular by taking determinants in (4):</p> <p>$\det(I - A) \det(I + A) = \det (I) = 1, \tag 6$</p> <p>which shows that</p> <p>$\det(I - A) \ne 0 \ne \det(I + A). \tag 7$</p>
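<p>A quick numerical sanity check with a concrete nilpotent matrix (pure Python, no libraries; this is only an illustration, not part of the proof):</p>

```python
def matmul(P, Q):
    # multiply two square matrices given as lists of rows
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

I = [[1, 0], [0, 1]]
A = [[0, 5], [0, 0]]                      # strictly upper triangular, so A^2 = 0
assert matmul(A, A) == [[0, 0], [0, 0]]

I_minus_A = [[1, -5], [0, 1]]
I_plus_A  = [[1, 5], [0, 1]]
# (I - A)(I + A) = I - A^2 = I, so I + A is a two-sided inverse of I - A
assert matmul(I_minus_A, I_plus_A) == I
assert matmul(I_plus_A, I_minus_A) == I
print("verified: (I - A)^(-1) = I + A for this A")
```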
203,138
<p>I'm starting to take calculus in college (the first offered math class) and I was shown this problem. Find $f(x)$ when$$-\frac{df(x)}{dx}\cdot\frac{x}{f(x)}=1$$Rearranging this to make$$\frac{d\,f(x)}{dx}=-\frac{f(x)}{x}$$gives me an interesting relationship, and very strongly suggests to me a polynomial function, but I think this approach lacks rigor (since I can guess all I want what $f(x)$ is, but that's still heuristic reasoning since my brain can't exhaust all the possibilities).</p> <p>Can someone walk me through a solution?</p> <p><strong>edit</strong></p> <p>This is not a homework problem, as my grade will not depend on its completion. But it's an interesting one. I've seen sorts of problems similar to this before, and they're all very unusual compared to what I've learned before.</p>
EuYu
9,246
<p>Let us write $y=f(x)$. The trick is to notice that $$xy^{\prime} + y = (xy)^{\prime}$$ which is a reverse application of the product rule. Therefore $$\frac{d}{dx}(xy) = 0 \implies xf(x) = C$$ for some constant $C$. The general solution is then given by $$f(x) = \frac{C}{x}$$ There is a systematic method for attacking these problems, and this specific equation is known as a linear first order differential equation. The above trick in particular generalizes into something called the integrating factor. A quick google search should provide more if you are interested. These equations involving derivatives are generally called differential equations, and their study forms a huge part of mathematics with applications in all branches of math/science.</p>
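<p>If you want to convince yourself numerically, here is a quick finite-difference check that $f(x)=C/x$ satisfies the original equation (the constant $C=3$ is an arbitrary choice):</p>

```python
def f(x, C=3.0):
    return C / x

def fprime(x, h=1e-6):
    # central finite-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# the original equation: -f'(x) * x / f(x) = 1
for x in [0.5, 1.0, 2.0, 7.3]:
    lhs = -fprime(x) * x / f(x)
    assert abs(lhs - 1.0) < 1e-6
print("f(x) = C/x satisfies -f'(x) * x / f(x) = 1 at the sampled points")
```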
88,880
<p>In a short talk, I had to explain, to an audience with little knowledge in geometry or algebra, the three different ways one can define the tangent space $T_x M$ of a smooth manifold $M$ at a point $x \in M$ and more generally the tangent bundle $T M$:</p> <ul> <li>Using equivalent classes of smooth curves through $x$</li> <li>Using derivations near $x$</li> <li>Using cotangent vectors at $x$</li> </ul> <p>Just by looking at the definition, it is not at all clear why they should all define the same object. I went through the proof, but judging from their reaction, it was not very meaningful. I wonder if there is any way I can let them "see", with just intuition, that the three definitions are, in certain sense, the same.</p>
Nick Salter
960
<p>Klaus Jänich's undergraduate-level book "Vector Analysis" includes a section showing the equivalence between the three descriptions of the tangent space that you mention. He gives rigorous proofs of everything, and also provides a fair amount of motivation. </p>
1,653,806
<p>This is a bit of a more mathematically juvenile question but I'm trying to get all my intuition in order. When taking a limit we can cancel things that might be zero because in taking a limit, we allow ourselves to avoid trouble spots. If the limit exists, it is very much a number/function which could be zero. So, if I have a function $f$ differentiable at a point $a$ then I have </p> <p>$$\lim_{z \to a}\frac{f(z)-f(a)}{z-a}=f'(a)$$</p> <p>Now, this implies continuity since</p> <p>$$\lim_{z \to a}\left(f(z)-f(a)\right)=\lim_{z \to a}\frac{f(z)-f(a)}{z-a}\cdot \lim_{z \to a}(z-a)=f'(a)\cdot 0=0$$</p> <p>Now, this idea makes perfect sense to me but we know that the limit $z-a$ approaches is $0$. How is it we are comfortable cancelling them? Is it within the $\epsilon, \delta$ language? If my question is unclear let me know. Thanks for your help!</p>
Jack Lee
1,421
<p>The divergence of a vector field $V = V^a \partial _a$ on a (pseudo-)Riemannian manifold is given by $\operatorname{div} V = V^a{}_{;a}$. In words, this is obtained by taking the trace of the total covariant derivative. In the special case of $\mathbb R^n$ with a flat Euclidean or pseudo-Euclidean metric, this yields the usual calculus formula for the divergence. </p> <p>By extension, it is common to define the divergence of an arbitrary tensor field as the trace of its total covariant derivative on (usually) the last two indices. So if $G$ is the (contravariant) Einstein tensor, then its divergence would be the vector field $\operatorname{div} G = G^{ba}{}_{;a} \partial_b$. Because $G$ is symmetric, this is also equal to $G^{ab}{}_{;a}\partial _b$.</p> <p>You can also apply this to the covariant Einstein tensor with components $G_{ab}$; its divergence is the $1$-form $G_{ba;}{}^{a}dx^b$.</p>
3,105,482
<p>For the formula:</p> <p><span class="math-container">$$1 = \sqrt{x^2 + y^2 + z^2 + w^2}$$</span></p> <p>How to rewrite it to find <span class="math-container">$w$</span>?</p>
Andrei
331,661
<p>Square the equation, move <span class="math-container">$x^2+y^2+z^2$</span> to the other side, then take the square root. Notice that you can take either the square root or the negative of that.</p>
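<p>Concretely, this gives <span class="math-container">$w=\pm\sqrt{1-x^2-y^2-z^2}$</span>; a quick check with sample values:</p>

```python
import math

x, y, z = 0.3, 0.4, 0.5          # sample values with x^2 + y^2 + z^2 <= 1
for w in (math.sqrt(1 - x**2 - y**2 - z**2),
          -math.sqrt(1 - x**2 - y**2 - z**2)):
    # both roots satisfy the original equation 1 = sqrt(x^2 + y^2 + z^2 + w^2)
    assert abs(math.sqrt(x**2 + y**2 + z**2 + w**2) - 1) < 1e-12
print("both roots w = +/- sqrt(1 - x^2 - y^2 - z^2) satisfy the equation")
```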
1,173,643
<p>I saw the following statement written, but I can't understand why it is true.</p> <p>$$ \dfrac {P(A \text{ and } B)}{P(B)} = \dfrac{P(A)-P(A \text{ and }B^c)}{ 1-P(B^c)} $$</p> <p>Any help understanding why these are equivalent would be appreciated.</p>
Michael Burr
86,421
<p>First, the denominators $P(B)$ and $1-P(B^c)$ are equal because $P(B)+P(B^c)=1$ (either $B$ happens or it doesn't). </p> <p>For the numerator, $P(A\text{ and }B)$ is the probability that both $A$ and $B$ happen. For $P(A)-P(A\text{ and }B)$, consider $P(A)$ first. $P(A)$ is the probability that $A$ happens. When $A$ happens, either $B$ happens or $B$ doesn't happen. This means that $P(A)=P(A\text{ and }B)+P(A\text{ and }B^c)$. By rearranging, you can see the numerators are the same.</p>
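<p>A small sanity check with a fair die (the events here are hypothetical, chosen just for illustration):</p>

```python
from fractions import Fraction

omega = set(range(1, 7))                 # fair six-sided die
A = {2, 4, 6}                            # "roll is even"
B = {4, 5, 6}                            # "roll is greater than 3"
P = lambda E: Fraction(len(E), len(omega))
Bc = omega - B

lhs = P(A & B) / P(B)                    # P(A and B) / P(B)
rhs = (P(A) - P(A & Bc)) / (1 - P(Bc))   # the rewritten form
assert lhs == rhs == Fraction(2, 3)
print("P(A|B) =", lhs)
```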
2,761,658
<blockquote> <p>Two numbers $x$ and $y$ are chosen at random from the numbers $1,2,3,4,\ldots,2004$. The probability that $x^3+y^3$ is divisible by $3$ is?</p> </blockquote> <p>The correct answer is $\dfrac13$ while mine is $\dfrac{445}{2003}$</p> <p>My attempt:</p> <p>For $x^3+y^3$ to be divisible by $3$, EITHER both $x$ and $y$ should be a multiple of $3$ OR one of them should leave remainder $1$ when divided by $3$ and the other should leave remainder $2$.</p> <p>Therefore, $$\text{no. of ways} = \frac{668 \times 667 + 668\times 668}{2004\times 2003} = \frac{445}{2003}$$ </p>
Rebecca J. Stones
91,818
<h3>With replacement</h3> <p>As you basically note $x^3+y^3 \equiv x+y \pmod 3$, which follows from <a href="https://en.wikipedia.org/wiki/Fermat%27s_little_theorem" rel="nofollow noreferrer">Fermat's Little Theorem</a>.</p> <p>We note that $2004 \equiv 0 \pmod 3$, so there's exactly $2004/3=668$ numbers equivalent to $i \pmod 3$, for all $i \in \{0,1,2\}$.</p> <p>So, no matter what $x$ value is randomly chosen, there are $2004/3$ out of $2004$ (i.e., $1/3$ probability) of randomly choosing an $y$ value for which $y \equiv -x \pmod 3$.</p> <p>We can do this like the method in the question: $$ \frac{\overbrace{668 \times 668}^{x \equiv 0, y \equiv 0} + \overbrace{668 \times 668}^{x \equiv 1, y \equiv 2} + \overbrace{668 \times 668}^{x \equiv 2, y \equiv 1}}{2004^2}=\frac{1}{3}. $$</p> <h3>Without replacement</h3> <p>If $x$ and $y$ are drawn without replacement, i.e., we assume $x \neq y$, then there's two distinctions: (a) when $x \equiv 0 \pmod 3$, there are $2004/3-1$ distinct $y$ values for which $x+y \equiv 0 \pmod 3$, and (b) there are $2004 \times 2003$ ordered pairs $(x,y)$, which also gives the probability: $$ \frac{\overbrace{668 \times 667}^{x \equiv 0, y \equiv 0} + \overbrace{668 \times 668}^{x \equiv 1, y \equiv 2} + \overbrace{668 \times 668}^{x \equiv 2, y \equiv 1}}{2004 \times 2003}=\frac{1}{3}. $$</p> <p>To interpret this using the first method I mention, we have $1/3$ probability of any given value of $x \pmod 3$, and given a value of $x \pmod 3$, we have probability of either $667/2003$ (when $x \equiv 0 \pmod 3$) or $668/2003$ (when $x \not\equiv 0 \pmod 3$) of randomly choosing $y \equiv -x \pmod 3$. Since these are mutually exclusive events, we get the probability $$ \overbrace{\frac{1}{3} \times \frac{667}{2003}}^{x \equiv 0, y \equiv 0}+\overbrace{\frac{1}{3} \times \frac{668}{2003}}^{x \equiv 1, y \equiv 2}+\overbrace{\frac{1}{3} \times \frac{668}{2003}}^{x \equiv 2, y \equiv 1}=\frac{1}{3}. 
$$</p> <p>The given answer seems to assume that $x \neq y$, which is this second case. However, the main problem is that it doesn't account for both cases $(x,y) \equiv (-1,1) \pmod 3$ and $(x,y) \equiv (1,-1) \pmod 3$.</p>
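<p>The residue counting above is easy to confirm by machine (a throwaway check, not part of the argument):</p>

```python
from collections import Counter
from fractions import Fraction

N = 2004
residues = Counter(pow(x, 3, 3) for x in range(1, N + 1))   # x^3 mod 3 (= x mod 3)

# with replacement: ordered pairs (x, y) with x^3 + y^3 = 0 mod 3
fav = sum(residues[a] * residues[b]
          for a in range(3) for b in range(3) if (a + b) % 3 == 0)
assert Fraction(fav, N * N) == Fraction(1, 3)

# without replacement: drop the diagonal pairs x == y (these need 2x^3 = 0 mod 3)
fav_distinct = fav - sum(1 for x in range(1, N + 1) if (2 * pow(x, 3, 3)) % 3 == 0)
assert Fraction(fav_distinct, N * (N - 1)) == Fraction(1, 3)
print("both sampling models give probability 1/3")
```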
108,110
<p>How can I use <em>Mathematica</em> to expand such a product (only need a finite number of terms):</p> <p>$$\prod^{\infty}_{n=1}\frac{({1-yq^{n+1}})({1-y^{-1}q^n})}{(1-q^n)^2}$$</p>
JimB
19,758
<p>If you follow @J.M. 's hint and assume that <code>0 &lt; q &lt; 1</code>, you can get a value for the infinite product:</p> <pre><code>Product[(1 - y q^(n + 1)) (1 - q^n/y)/(1 - q^n)^2, {n, 1, ∞}, Assumptions -&gt; 0 &lt; q &lt; 1] </code></pre> <p>with result</p> <pre><code>-((y QPochhammer[1/y, q] QPochhammer[q y, q])/((-1 + y) (-1 + q y) QPochhammer[q, q]^2)) </code></pre>
963,125
<p>I'm not exactly sure where to start on this one. Any help would be greatly appreciated.</p> <p>Show that $4$ does not divide $12x+3$ for any $x$ in the integers.</p> <p>Here's what I have so far:</p> <p>There exist c in the integers such that 4 | 12x+3. Then 12x+3 = 4y for some y in the integers. This is a contradiction since y = 3x + 3/4 and y is in the integers.</p> <p>Is that correct?</p>
NovaDenizen
109,816
<p>Saying $4 | 12x + 3$ is equivalent to saying there exists an integer $k$ such that $4k = 12x + 3$.</p> <p>If this is the case, then also $4k - 12x = 3$, or $4(k - 3x) = 3$. $4 \cdot 0 = 0$ and $4 \cdot 1 = 4$, so this would imply $0 &lt; k - 3x &lt; 1$, which is impossible. So it must not be the case that $4 | 12x + 3$.</p>
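<p>An exhaustive check over a range of integers illustrates the same fact (illustration only, of course; the proof above covers all integers):</p>

```python
# 12x is always a multiple of 4, so 12x + 3 always leaves remainder 3
# when divided by 4 and is never a multiple of 4
for x in range(-1000, 1001):
    assert (12 * x + 3) % 4 == 3
print("12x + 3 is never divisible by 4 for the sampled x")
```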
1,160,076
<p>Consider $f:\left[a,b\right]\rightarrow \mathbb{R}$ continuous on $\left[a,b\right]$ and differentiable on $\left(a,b\right)$.<br> $\forall x\:\ne \frac{a+b}{2}$, $f'\left(x\right)\:&gt;\:0$, but at $x\:=\frac{a+b}{2}$, $f'\left(x\right)\:=\:0$.<br> Show that $f$ is <strong>strictly increasing</strong> on $\left[a,b\right]$. </p> <p>So far I don't see how to handle $x\:=\:\frac{a+b}{2}$.<br> I know that $x\:=\:\frac{a+b}{2}$ can't be a minimum or maximum. I thought about assuming there is a point $c\:\ne \frac{a+b}{2}$ such that $f\left(c\right)=f\left(\frac{a+b}{2}\right)$ and deriving a contradiction somehow, but I can't figure out how. Any ideas? Thanks in advance!</p>
rafalpw
217,690
<p>Write $c=\frac{a+b}{2}$.</p> <p>1) $f$ is strictly increasing in $[a,c]$ and in $[c,b]$.</p> <p>Let $x,y \in [a,c]$ with $x&lt;y$. Then, from the mean value theorem, we have: $f(y)-f(x)=f'(\xi)(y-x)$ where $\xi \in (x,y)$. Since $\xi \in (a,c)$ we have $f'(\xi)&gt;0$ and $(y-x)&gt;0$, so $f(y)-f(x)&gt;0$.</p> <p>The proof for $[c,b]$ is identical.</p> <p>2) $f$ is strictly increasing in $[a,b]$.</p> <p>Let $x,y \in [a,b]$ and $x&lt;y$.</p> <p>If $x,y \in [a,c]$ or $x,y \in [c,b]$ then $f(x)&lt;f(y)$ by part 1).</p> <p>If $x \in [a,c]$ and $y \in (c,b]$, then $f(x) \le f(c) &lt; f(y)$.</p> <p>If $x \in [a,c)$ and $y \in [c,b]$, then $f(x) &lt; f(c) \le f(y)$.</p>
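<p>A quick numerical illustration (not a proof): take $a=-1$, $b=1$, $f(x)=x^3$, so $f'$ vanishes only at the midpoint $\frac{a+b}{2}=0$ and is positive everywhere else, yet $f$ is strictly increasing:</p>

```python
# f(x) = x^3 on [-1, 1]: f'(x) = 3x^2 > 0 except f'(0) = 0,
# matching the setup with c = (a + b)/2 = 0
f = lambda x: x ** 3
xs = [-1 + k / 500 for k in range(1001)]        # grid on [-1, 1], including 0
assert all(f(xs[i]) < f(xs[i + 1]) for i in range(len(xs) - 1))
print("f is strictly increasing on the grid even though f'(0) = 0")
```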
1,682,961
<p>I was working through a few exercises on set proofs, but I ran into one where I don't know how to start:</p> <blockquote> <p>If $A \cap C = B \cap C $ and $ A-C=B-C $ then $A = B$</p> </blockquote> <p>Where should I start? Should I start from $ A \subseteq B $ or should I start from this $ ((A\cap C = B\cap C) \land (A-C = B-C)) \Rightarrow (A = B)$ ? </p>
Arthur
15,500
<p>You show $A\subseteq B$ and $B \subseteq A$, as one usually would when showing that two sets are equal. Since the conditions are symmetric in $A$ and $B$, the two proofs are completely analogous, so I will only do one of them.</p> <p>To show $A \subseteq B$, take an $a \in A$, and note that either $a \in C$ or $a \notin C$. If $a \in C$, then we have $a \in A\cap C$. If $a\notin C$, then we have $a \in A-C$. In both cases, you may use the given set equalities to conclude that $a \in B$. This shows $A \subseteq B$.</p>
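<p>If it helps to build confidence before writing the proof, the implication can be checked exhaustively over a small universe (a throwaway script, not a proof):</p>

```python
from itertools import combinations

U = (0, 1, 2)
subsets = [frozenset(c) for r in range(len(U) + 1) for c in combinations(U, r)]

checked = 0
for A in subsets:
    for B in subsets:
        for C in subsets:
            if A & C == B & C and A - C == B - C:   # the two hypotheses
                assert A == B                        # the claimed conclusion
                checked += 1
print("hypotheses held in", checked, "of", len(subsets) ** 3, "triples; implication verified")
```

<p>For each choice of $A$ and $C$, the hypotheses force $B=(B\cap C)\cup(B-C)=(A\cap C)\cup(A-C)=A$, which is exactly the case split in the proof above.</p>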
235,940
<p>Consider a sphere $S$ of radius $a$ centered at the origin. Find the average distance between a point in the sphere to the origin.</p> <p>We know that the distance $d = \sqrt{x^2+y^2+z^2}$. </p> <p>If we consider the problem in spherical coordinates, we have a 'formula' which states that the average distance $d_{avg} = \frac{1}{V(S)}\iiint \rho dV$</p> <p>I think that this is reminiscent of an average density function which I've seen in physics courses, and it is clear that the $\iiint \rho dV$ is equal to the volume of the sphere, but I'm not sure as to why we must integrate over the distance and then divide by the actual volume to calculate the average distance.</p> <p>I am looking for a way to explain this to my students without presenting the solution as a formula, any insights would be appreciated.</p>
Ross Millikan
1,827
<p>If $\rho$ is the radius, the volume is in fact $\iiint dV$ without the $\rho$. The average distance is then as you have said. This is an example of the general formula for the average value of a variable $X$ over a probability distribution $V$, which is $\bar X= \tfrac {\int X dV}{\int dV}$</p>
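<p>A quick Monte Carlo check of this formula for the unit ball ($a=1$), where the integral $\frac{1}{V}\iiint \rho\, dV$ evaluates to $3a/4$ (a throwaway script, not part of the derivation):</p>

```python
import random

random.seed(0)
a = 1.0                      # unit ball; the exact average distance is 3a/4
total, n = 0.0, 0
while n < 200_000:
    x = random.uniform(-a, a)
    y = random.uniform(-a, a)
    z = random.uniform(-a, a)
    r2 = x*x + y*y + z*z
    if r2 <= a*a:            # rejection sampling: keep only points inside the ball
        total += r2 ** 0.5   # accumulate the distance rho
        n += 1
avg = total / n
assert abs(avg - 0.75) < 0.01
print("Monte Carlo average distance:", round(avg, 4), "(exact: 0.75)")
```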
24,912
<p>I have the category-theoretic background of the occasional stroll through MacLane's text, so excuse my ignorance in this regard. I was trying to learn all that I could on the subject of tensor algebras, and higher exterior forms, and I ran into the notion of cohomological determinants. Along this line of inquiry, I ran into the general use of the notion of a Picard category, and kept running into frustration in trying to find some sort of exposition of what these structures are. So where can I find out more about these structures, and about (cohomological) determinants in K-theory, which seem to be a hot topic among AG, AT, and RT researchers alike at the moment.</p>
user234212323
429,204
<p><a href="https://agrothendieck.github.io/divers/ps.pdf" rel="nofollow noreferrer">Pursuing stacks</a> should fit here.</p> <p>There are plenty of uses of Picard categories (see the appendix), and in fact, the appendix list some examples of Picard 2, 3-categories.</p>
293,234
<p>I recently asked a question about <a href="https://math.stackexchange.com/questions/287116/proof-that-mutual-statistical-independence-implies-pairwise-independence">pairwise versus mutual independence</a> (also related to <a href="https://math.stackexchange.com/questions/281800/example-relations-pairwise-versus-mutual">this</a> and <a href="https://math.stackexchange.com/questions/283579/how-to-model-mutual-independence-in-bayesian-networks">this</a> q). </p> <p>However, </p> <p>(1) I inadvertently used incorrect terminology:</p> <blockquote> <p>three events, A, B, C are mutually independent when:</p> <p>P[A,B]=P[A]P[B], P[B,C]=P[B]P[C], P[A,C]=P[A]P[C], P[A,B,C]=P[A]P[B]P[C]</p> </blockquote> <p>Did and others pointed out that</p> <blockquote> <p>"Mutual independence means the four identities you copied, pairwise independence means the first three of these identities." -- Did</p> </blockquote> <p>Note that the term <em>mutual</em> has varying definitions across math. For example, <a href="http://en.wikipedia.org/wiki/Mutual_information" rel="nofollow noreferrer">mutual information</a> is a pairwise relation. </p> <p>(2) Going back to probability, GC Rota said the theory can be approached two ways: focusing on random variables (event algebra) or focusing on distributions. Here I am interested in distributions, where independence can be interpreted as factorization of the probability distribution function. The conditions are the same as above, where P is interpreted as the PDF function. 
</p> <p>The following graphic based on a standard example from <a href="http://rads.stackoverflow.com/amzn/click/0412989018" rel="nofollow noreferrer"><em>Counterexamples in Probability and Statistics</em></a> of a 3-dimensional binomial PDF that factorizes pairwise (ie, each of the 3 pairs of random variables are independent and the 2-dim joint distributions can all be written as the product of the respective marginals) but not 3-way independent (the joint distribution cannot be written as the product of the individual marginal distributions)</p> <p><img src="https://i.stack.imgur.com/IBCra.jpg" alt="enter image description here"></p> <p>My question is whether the opposite can happen, ie if the 3-dim (or perhaps higher) joint distribution factorizes into the 1-dim marginals, does that imply the pairwise factorization of <em>all</em> 2-dim joint distributions into the marginals? </p>
JSchlather
1,930
<p>Using Sage I was able to calculate $[L: \mathbb Q]=64$. Basically you can create number fields in Sage and then create relative extensions of these number fields. I was able to factor the minimum polynomial of $\alpha$ over each extension and then adjoin a root until I reached a point at which the minimum polynomial split. I can try to throw together a sage notebook or something if you want to see this process. </p>
2,067,553
<p>I'm currently working on another problem: let $x_1,x_2,x_3$ be the roots of the polynomial $x^3+3x^2-7x+1$; calculate $x_1^2+x_2^2+x_3^2$. Here is what I did: $x^3+3x^2-7x+1=0$ implies $x^2=(7x-x^3-1)/3$. And so $x_1^2+x_2^2+x_3^2= (7x_1-x_1^3-1+7x_2-x_2^3-1+7x_3-x_3^3-1)/3= 7(x_1+x_2+x_3)/3-(x_1^3+x_2^3+x_3^3)/3-1$. Then I don't know what to do anymore.</p>
Arnaldo
391,612
<p><strong>Hint</strong></p> <p>$$(x_1)^2+(x_2)^2+(x_3)^2=(x_1+x_2+x_3)^2-2(x_1x_2+x_1x_3+x_2x_3)$$</p> <p>You just have to find $x_1+x_2+x_3$ and $x_1x_2+x_1x_3+x_2x_3$ from the coefficients.</p> <p>Check here: <a href="https://en.wikipedia.org/wiki/Vieta%27s_formulas" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Vieta%27s_formulas</a></p>
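<p>Spoiler warning: if you want to cross-check whatever number you obtain, here is a quick numerical verification (bisection for the three real roots, then the Vieta computation; pure Python, not part of the hint):</p>

```python
def f(x):
    return x**3 + 3*x**2 - 7*x + 1

def bisect(lo, hi, steps=80):
    # simple bisection; assumes f(lo) and f(hi) have opposite signs
    for _ in range(steps):
        mid = (lo + hi) / 2
        if (f(lo) < 0) == (f(mid) < 0):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# sign changes on (-5,-4), (0,1), (1,2) bracket the three real roots
roots = [bisect(-5, -4), bisect(0, 1), bisect(1, 2)]
numeric = sum(r * r for r in roots)

e1, e2 = -3, -7          # Vieta: x1+x2+x3 = -3, x1x2+x1x3+x2x3 = -7
vieta = e1**2 - 2 * e2   # the hint's identity
assert abs(numeric - vieta) < 1e-9
print("x1^2 + x2^2 + x3^2 =", vieta)
```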
785,844
<p>I've got the following limit to solve:</p> <p>$$\lim_{s\to 1} \frac{\sqrt{s}-s^2}{1-\sqrt{s}}$$</p> <p>I was taught to multiply by the conjugate to get rid of roots, but that doesn't help, or at least I don't know what to do once I do it. I can't find a way to make the denominator not be zero when replacing $s$ for $1$. Help?</p>
user148623
148,623
<p>Factor the numerator and use $1-t^3=(1-t)(1+t+t^2)$ with $t=\sqrt s$:</p> <p>$$\frac{\sqrt{s} - s^2}{1 - \sqrt{s}} = \frac{\sqrt s \left(1 - s^{3/2}\right)}{1 - \sqrt{s}} = \frac{\sqrt s \left(1 - \sqrt s\right)\left(1 + \sqrt s + s\right)}{1 - \sqrt{s}} = \sqrt s \left(1 + \sqrt s + s\right) \xrightarrow[s\to 1]{} 3$$</p>
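<p>As a sanity check on any algebraic simplification here, one can evaluate the original quotient numerically at points near $s=1$ (a quick script, not part of the argument) and watch the values approach $3$:</p>

```python
def g(s):
    # the original quotient, evaluated directly
    return (s ** 0.5 - s ** 2) / (1 - s ** 0.5)

samples = [1.1, 1.01, 1.001, 0.999, 0.99, 0.9]
values = [g(s) for s in samples]
# the values cluster around 3, matching sqrt(s)*(1 + sqrt(s) + s) at s = 1
assert all(abs(v - 3) < 0.5 for v in values)
assert abs(g(1 + 1e-8) - 3) < 1e-6
print(values)
```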
146,075
<p>Within my limited experience, I have only known free groups to occur through two mechanisms: as fundamental groups of trees (graphs) and ping-pong. And sometimes only through one way: the fact that sufficiently high-powers of hyperbolic elements in a Gromov-hyperbolic group generate a free group arises via ping-pong. I know of no tree to represent the situation. </p> <p>To the experts, the following question is surely either an obvious yes or no: </p> <p>Can we explicitly describe all mechanisms by which finitely generated free subgroups of the Artin braid groups $B_n$ (for $n=2,3,\ldots$) arise ? </p> <p>More specifically, seeing $B_n$ as the isotopy space of all $n$-pointed braids in the closed 2-disk, can we characterize all configurations generating the rank $k$ free group?</p> <p>Much less specifically, do we know, say, that all finitely-generated free subgroups arise from ping-pong and can we describe all ping-pong games? </p>
Misha
21,684
<p>This is a very extended comment to your question. </p> <p>Your question does not have "obvious yes or no" answer; in part, this is because you asked 3 somewhat different questions: </p> <ol> <li><p>"Can one explicitly describe all finitely generated free subgroups..." In this form, the question has no answer since the word "explicit" is meaningless in this context. </p></li> <li><p>Your second question "can we explicitly enumerate/characterize all configurations which generate the rank k free group" is a bit better, since one can interpret it as a request for an algorithm that, given as its inputs tuples $(w_1,...,w_k)$ of words in the standard generators of $B_n$, would determine if they generate rank $k$ free subgroups of $B_n$. Existence of such an algorithm is unknown for $n\ge 6$. The reason is that $B_n$ contains $F_2\times F_2$ and, hence, also contains <em>Mikhailova subgroups</em>. In particular, it is undecidable if a given set of $k$ elements in $F_2\times F_2$ generates $F_2\times F_2$ or not. </p></li> <li><p>Lastly, you are asking something about ping-pong. What this "something" is, is very much unclear. Every free group $F_k$ has a ping-pong description (this is almost a tautology). On the other hand, the "collection" of such ping-pongs is not even a set! One can ask, however, a meaningful question restricting to "Schottky ping-pongs", by considering the action of $B_n$ on a certain topological sphere $S=S^{n-3}$, the space of projective classes of measured laminations on $n+1$ times punctured sphere. A <em>Schottky ping-pong</em> on $S$ is given by a collection of disjoint compact subsets $C_1, C_1',...,C_k, C_k'$ and elements $g_i\in B_n$, so that each $g_i$ sends the interior of $C_i$ homeomorphically to the exterior of $C_i'$. 
Then the subgroup $&lt;g_1,...,g_k&gt;\subset B_n$ is free of rank $k$ and is necessarily purely pseudo-Anosov; such subgroups are <em>Schottky subgroups</em> in $B_n$ by analogy with Schottky groups acting on hyperbolic spaces (and their ideal boundaries). It is also unknown if $B_n$ (or any mapping class group for this matter) contains a purely pseudo-Anosov free subgroup which is not Schottky. Note that already $PSL(2,C)$ contains free subgroups of rank 2 which are purely hyperbolic and not Schottky. </p></li> </ol>
1,270,584
<p>I tried googling for simple proofs that some number is transcendental, sadly I couldn't find any I could understand.</p> <p>Do any of you guys know a simple transcendentality (if that's a word) proof?</p> <p>E: What I meant is that I wanted a rather simple proof that some particular number is transcendental ($e$ or $\pi$ would work), not a method to prove that any number is transcendental, sorry for the confusion.</p> <p>Or even a proof about transcendental numbers being 'as common' as algebraic numbers?</p>
hmakholm left over Monica
14,366
<p>It's possible to argue fairly elementarily that <em>Liouville's number</em> $$ L = \sum_{k=1}^\infty 10^{-k!} $$ is transcendental, by showing directly that for every integer polynomial $$p(x) = a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0$$ there is a decimal position in $p(L)$ that must be nonzero.</p> <p>The powers of $L$ have the form $$ L^k = c_1 10^{-b_1} + c_2 10^{-b_2} + c_3 10^{-b_3} + \cdots $$ where each of the (all different) $b_i$ is a number that can be written as the sum of $k$ (not necessarily different) factorials, and $c_i$ is some integer $\le k!$ that depends on whether some of the factorials are different.</p> <p>Now let $n$ be the degree of $p$. Among the $b_i$s we find numbers of the form $B_{h,n}=(h+1)!+(h+2)!+\cdots (h+n)!$, and by choosing $h$ large enough, these numbers can be arbitrarily far from anything that can be written as a sum of <em>fewer than</em> $n$ factorials (which are the ones that appear in the expansion of <em>lower</em> powers of $L$).</p> <p>So if we choose $h$ large enough, we find a $B_{h,n}$ such that the only term in $p(L)$ that contributes to the digits around decimal position $B_{h,n}$ is the product of $a_n$ with $C_{h,n}$ (the coefficient $c_i$ attached to $b_i=B_{h,n}$), which is nonzero.</p> <p>(Everything to the <em>right</em> of this is the sum of products of some $10^{-b_j}$ with a factor that is at most $n!a_n+(n-1)!a_{n-1}+\cdots+a_1$. The bound of the factor depends only on $p$, so if only we choose $h$ such that the first $b_j$ after $B_{h,n}$ is separated by at least the length of this bound, their contribution cannot reach position $B_{h,n}$.)</p> <p>(Similarly, we can arrange for there to be enough space to the <em>left</em> of $B_{h,n}$ to make room for all of $a_nC_{h,n}$ before the <em>previous</em> $b_j$.)</p>
3,291,303
<p>This very same problem appeared in a different thread, but the question was slightly different. In my case, I'm looking precisely for the answer.</p> <p>This is how I solved it, I only need confirmation of whether this actually is correct:</p> <p>So I assumed that 1111... (100 ones) is going to be exactly divided by 1111111 (7 ones) 14 times (100/7 = 14). Hence, if 14 times 7 equals 98 '1's, then the remainder is 2 '1's.</p> <p>Thanks.</p>
J. W. Tanner
615,567
<p>Mod <span class="math-container">$10^7-1: 10^7\equiv1\implies10^{98}\equiv1\implies10^{100}\equiv100\implies10^{100}-1\equiv99.$</span></p> <p>I.e., <span class="math-container">$10^7-1|(10^{100}-1)-99\implies \dfrac{10^7-1}9|\dfrac{10^{100}-1}9-\color{blue}{11}$</span></p>
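<p>Python's arbitrary-precision integers make this easy to confirm directly (a one-off check, not part of the argument):</p>

```python
R100 = int("1" * 100)    # the repunit with 100 ones
R7 = int("1" * 7)        # the repunit with 7 ones
print(R100 % R7)         # remainder on division
assert R100 % R7 == 11
```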
3,657,106
<blockquote> <p>Sam was adding the integers from <span class="math-container">$1$</span> to <span class="math-container">$20$</span>. In his rush, he skipped one of the numbers and forgot to add it. His final sum was a multiple of <span class="math-container">$20$</span>. What number did he forget to add?</p> </blockquote> <p>My idea was to use Gauss's trick to find this relatively simply so I proceeded as follows.</p> <p>We have <span class="math-container">$S=1+2+3+ \dots+ 18+19+20$</span>. Using Gauss's trick we get <span class="math-container">$\frac{n(n+1)}{2} = \frac{20(21)}{2} = 210$</span>. Since we want this to equal some multiple of <span class="math-container">$20$</span> we have that <span class="math-container">$210 = 20n$</span>, but solving for <span class="math-container">$n$</span> results in <span class="math-container">$\frac{21}{2} = 10.5$</span>.</p> <p>The correct answer for this was <span class="math-container">$10$</span>, but it seems that I'm missing something?</p>
Bram28
256,001
<p>Here is where your approach goes wrong:</p> <p>The sum of all numbers <em>with the exception of the one skipped one</em> is a multiple of <span class="math-container">$20$</span>.</p> <p>So, if <span class="math-container">$k$</span> is the skipped number, what you have is: <span class="math-container">$210-k = 20n$</span></p> <p>Also, what you need to solve for is <span class="math-container">$k$</span>, not <span class="math-container">$n$</span>. The fact that in your case, <span class="math-container">$n$</span> happened to be fairly close to the <span class="math-container">$k$</span> that they were looking for is complete happenstance.</p>
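<p>The corrected equation $210-k=20n$ can be solved by a one-line search (overkill, but it confirms the answer):</p>

```python
total = sum(range(1, 21))            # 1 + 2 + ... + 20 = 210
# the forgotten number k must make the remaining sum a multiple of 20
skipped = [k for k in range(1, 21) if (total - k) % 20 == 0]
assert skipped == [10]
print("the forgotten number is", skipped[0])
```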
581,958
<p>I am working on a project about Mathematics and Origami. I am working on a section about how origami can be used to solve cubic equations. This is the source I am looking at:<br> <a href="http://origami.ousaan.com/library/conste.html">http://origami.ousaan.com/library/conste.html</a><br> I understand the proof they go on to explain about how the slope of the crease line is the solution to the general cubic equation. I am also interested in understanding how this is used to double the cube and trisect the angle which they go on to mention at the end. However, I am having difficulty filling in the missing gaps and understanding how to use this origami method to solve those equations given and how this tells us we can trisect the angle, and double the cube. Any help is much appreciated</p>
Guest
201,847
<p>There's a good video showing the trisection bit here:</p> <p><a href="https://www.youtube.com/watch?v=SL2lYcggGpc&amp;feature=em-subs_digest" rel="nofollow">https://www.youtube.com/watch?v=SL2lYcggGpc&amp;feature=em-subs_digest</a></p>
1,602,392
<p>I think I have a proof using Pythagoras for $\sqrt{a_1^2} + \sqrt{a_2^2} &gt; \sqrt{a_1^2 + a_2^2}$.</p> <p><em>I'm interested in whether there's a way to use that proof with Pythagoras to prove the general $a_n$ case (for this, hints are appreciated rather than complete proofs), and <strong>also</strong> in other ways (algebraic, geometric, number-theoretic, calculus-based...anything) that you might know or come up with to prove the general case (for those, either hints or complete proofs are great, up to you).</em></p> <p><strong><em>Lemma</em></strong>:</p> <p>Let positive (edited) real numbers $a_1, a_2$ be the legs of a right triangle. </p> <p>Then $\sqrt{a_1^2 + a_2^2}$ is the length of the hypotenuse of that triangle.</p> <p>And $\sqrt{a_1^2} + \sqrt{a_2^2}$ is the sum of the lengths of the two legs.</p> <p>By the triangle inequality, we know that the length of the hypotenuse has to be less than the sum of the lengths of the two legs.</p> <p>Therefore, for any positive real numbers $a_1, a_2$, $\sqrt{a_1^2} + \sqrt{a_2^2} &gt; \sqrt{a_1^2 + a_2^2}$.</p> <p><em>I'm stuck here...I was thinking of comparing pairs of elements from each side of the expression using my lemma, but it doesn't seem possible to "extract" pairs of elements from under $\sqrt{a_1^2 + a_2^2 +...+a_n^2}$. I also thought about summing all elements but $a_1$ into a single number and using my lemma on those simplified expressions, but I run into the same problem.</em></p>
user236182
236,182
<p>Both sides of the inequality are non-negative (for all $a_i\in\mathbb R$), therefore the following equivalences hold:</p> <p>$$\sqrt{a_1^2} + \sqrt{a_2^2} +\cdots + \sqrt{a_n^2} \ge \sqrt{a_1^2 + a_2^2 +\cdots+a_n^2}$$</p> <p>$$\iff \left(\sqrt{a_1^2} + \sqrt{a_2^2} +\cdots + \sqrt{a_n^2}\right)^2 \ge \left(\sqrt{a_1^2 + a_2^2 +\cdots+a_n^2}\right)^2$$</p> <p>$$\iff a_1^2+a_2^2+\cdots+a_n^2+2\sum_{i=1}^n \sum_{j&gt;i}^n|a_ia_j|\ge a_1^2+a_2^2+\cdots+a_n^2$$</p> <p>$$\iff 2\sum_{i=1}^n \sum_{j&gt;i}^n|a_ia_j|\ge 0,$$</p> <p>which is true, with equality if and only if at least $n-1$ of $a_1,a_2,\ldots,a_n$ are equal to $0$.</p>
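The inequality $\sum_i |a_i| \ge \sqrt{\sum_i a_i^2}$ can also be sanity-checked numerically; a small Python sketch (the helper name and sample values are my own, chosen for illustration):

```python
import math
import random

def lhs_rhs(a):
    """Return (sum of |a_i|, Euclidean norm of a)."""
    return (sum(math.sqrt(ai * ai) for ai in a),
            math.sqrt(sum(ai * ai for ai in a)))

random.seed(0)
for _ in range(1000):
    a = [random.uniform(-100, 100) for _ in range(random.randint(1, 10))]
    s, r = lhs_rhs(a)
    assert s >= r - 1e-9   # sum of |a_i| dominates the norm

# Equality exactly when at least n-1 entries are zero:
s, r = lhs_rhs([0.0, 0.0, 5.0])
assert s == r == 5.0
```

This is of course only a check on random samples, not a proof; the squaring argument above is what actually settles it.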
1,909,422
<blockquote> <p>Let $A=\left(\begin{array}{ccc} -5 &amp; -1 &amp; 6 \\ -2 &amp; -5 &amp; 8 \\ -1 &amp; -1 &amp; 1 \end{array}\right)$ and $B=\left(\begin{array}{ccc} -9 &amp; 3 &amp; -3 \\ -14 &amp; 4 &amp; -7 \\ -2 &amp; 1 &amp; -4 \end{array}\right)$ be matrices over $\mathbb{C}$. Are they similar?</p> </blockquote> <p>So I started by finding the characteristic and minimal polynomials.</p> <p>For $A$ I got $f_{\lambda}(x)=(x+3)^3$ and $m_{\lambda}(x)=(x+3)^3$</p> <p>For $B$ I got $f_{\lambda}(x)=(x+3)^3$ and $m_{\lambda}(x)=(x+3)^2$</p> <p>So the Jordan normal form of $A$ is $A=J_{1}(-3),J_{1}(-3),J_{1}(-3)$ or $A=\left(\begin{array}{ccc} -3 &amp; 0 &amp; 0 \\ 0 &amp; -3 &amp; 0 \\ 0 &amp; 0 &amp; -3 \end{array}\right)$</p> <p>And the Jordan normal form of $B$ is $B=J_{2}(-3),J_{1}(-3)$ or $B=\left(\begin{array}{ccc} -3 &amp; 1 &amp; 0 \\ 0 &amp; -3 &amp; 0 \\ 0 &amp; 0 &amp; -3 \end{array}\right)$</p> <p>So $A$ is not similar to $B$; is that correct?</p>
H. H. Rugh
355,946
<p>Indeed they are not similar. But $A$ is in fact not diagonalizable. It has a size-3 Jordan block:</p> <p>$(A+3E)^2 \neq 0$ (whereas $(B+3E)^2=0$). This is also what you got using the $m_\lambda$ formulation.</p>
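The nilpotency claims are easy to verify directly; a minimal check in plain Python (no external libraries; helper names are my own), using the matrices from the question:

```python
def matmul(X, Y):
    """Multiply two square matrices given as nested lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def plus_scalar_identity(X, c):
    """Return X + c*E for a square matrix X."""
    n = len(X)
    return [[X[i][j] + (c if i == j else 0) for j in range(n)] for i in range(n)]

A = [[-5, -1, 6], [-2, -5, 8], [-1, -1, 1]]
B = [[-9, 3, -3], [-14, 4, -7], [-2, 1, -4]]

M = plus_scalar_identity(A, 3)   # A + 3E
N = plus_scalar_identity(B, 3)   # B + 3E
zero = [[0] * 3 for _ in range(3)]

assert matmul(M, M) != zero              # (A+3E)^2 != 0: size-3 Jordan block
assert matmul(matmul(M, M), M) == zero   # (A+3E)^3 = 0, consistent with m = (x+3)^3
assert matmul(N, N) == zero              # (B+3E)^2 = 0: largest block of B has size 2
```

So the minimal polynomials, and hence the Jordan forms, really do differ.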
1,909,422
<blockquote> <p>Let $A=\left(\begin{array}{ccc} -5 &amp; -1 &amp; 6 \\ -2 &amp; -5 &amp; 8 \\ -1 &amp; -1 &amp; 1 \end{array}\right)$ and $B=\left(\begin{array}{ccc} -9 &amp; 3 &amp; -3 \\ -14 &amp; 4 &amp; -7 \\ -2 &amp; 1 &amp; -4 \end{array}\right)$ be matrices over $\mathbb{C}$. Are they similar?</p> </blockquote> <p>So I started by finding the characteristic and minimal polynomials.</p> <p>For $A$ I got $f_{\lambda}(x)=(x+3)^3$ and $m_{\lambda}(x)=(x+3)^3$</p> <p>For $B$ I got $f_{\lambda}(x)=(x+3)^3$ and $m_{\lambda}(x)=(x+3)^2$</p> <p>So the Jordan normal form of $A$ is $A=J_{1}(-3),J_{1}(-3),J_{1}(-3)$ or $A=\left(\begin{array}{ccc} -3 &amp; 0 &amp; 0 \\ 0 &amp; -3 &amp; 0 \\ 0 &amp; 0 &amp; -3 \end{array}\right)$</p> <p>And the Jordan normal form of $B$ is $B=J_{2}(-3),J_{1}(-3)$ or $B=\left(\begin{array}{ccc} -3 &amp; 1 &amp; 0 \\ 0 &amp; -3 &amp; 0 \\ 0 &amp; 0 &amp; -3 \end{array}\right)$</p> <p>So $A$ is not similar to $B$; is that correct?</p>
InsideOut
235,392
<p>A well-known theorem states that <em>two matrices are similar if and only if they have the same Jordan normal form</em>. In your case, the Jordan normal forms are different, so the matrices cannot be similar.</p>
3,714,418
<p>I need to calculate the angle between two 3D vectors. There are plenty of examples available of how to do that but the result is always in the range <span class="math-container">$0-\pi$</span>. I need a result in the range <span class="math-container">$\pi-2\pi$</span>.</p> <p>Let's say that <span class="math-container">$\vec x$</span> is a vector in the positive x-direction and <span class="math-container">$\vec y$</span> is a vector in the positive y-direction and <span class="math-container">$\vec z$</span> is a reference vector in the positive z-direction. <span class="math-container">$\vec z$</span> is perpendicular to both <span class="math-container">$\vec x$</span> and <span class="math-container">$\vec y$</span>. Would it then be possible to calculate the angle between <span class="math-container">$\vec x$</span> and <span class="math-container">$\vec y$</span> and get a result in the range <span class="math-container">$\pi-2\pi$</span>? </p> <p>The angle value should be measured counter clockwise. I have not been able to figure out how to do that. I am no math guru but I have basic understanding of vectors at least. Thank you very much for the help! </p>
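A standard way to get a counter-clockwise angle over the full circle is to combine the dot and cross products with <code>atan2</code>, using the reference vector to fix the sign of the rotation; a sketch (the function name and the choice of reducing into $[0, 2\pi)$ are my own, not from the question):

```python
import math

def ccw_angle(x, y, z_ref):
    """Counter-clockwise angle from x to y, measured about z_ref, in [0, 2*pi)."""
    # cross product x × y
    cross = (x[1] * y[2] - x[2] * y[1],
             x[2] * y[0] - x[0] * y[2],
             x[0] * y[1] - x[1] * y[0])
    dot = sum(a * b for a, b in zip(x, y))
    # projecting the cross product onto the reference normal gives a signed
    # sine term, so atan2 returns the angle with the right orientation
    sin_term = sum(c * r for c, r in zip(cross, z_ref))
    return math.atan2(sin_term, dot) % (2 * math.pi)
```

For example, with $\vec x = (1,0,0)$, $\vec y = (0,1,0)$ and $\vec z = (0,0,1)$ this gives $\pi/2$, while swapping $\vec x$ and $\vec y$ gives $3\pi/2$; flipping the reference vector exchanges an angle $\theta$ with $2\pi - \theta$.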
Siong Thye Goh
306,553
<p>If <span class="math-container">$C \subset D$</span>, then we have <span class="math-container">$A - D \subset A - C$</span>.</p> <p>We have <span class="math-container">$B - A \subset B$</span>, hence <span class="math-container">$A - B \subset A - (B-A)$</span>.</p>
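As a concrete illustration of the inclusion $A - B \subseteq A - (B - A)$, here is an exhaustive check with Python sets over all subsets of a small universe (a sanity check over finite examples, not a proof; the universe is arbitrary):

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of the universe, as tuples."""
    return chain.from_iterable(combinations(universe, r)
                               for r in range(len(universe) + 1))

U = {0, 1, 2, 3}
for A in map(set, subsets(U)):
    for B in map(set, subsets(U)):
        # "-" is set difference, "<=" is the subset test
        assert (A - B) <= (A - (B - A))
```

The loop runs over all $2^4 \times 2^4 = 256$ pairs without tripping the assertion, matching the argument above.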
29,323
<p>You're hanging out with a bunch of other mathematicians - you go out to dinner, you're on the train, you're at a department tea, et cetera. Someone says something like "A group of 100 people at a party each receive hats with different prime numbers and ..." For the next few minutes everyone has fun solving the problem together.</p> <p>I love puzzles like that. But there's a problem -- I keep running into the same puzzles over and over. Yet there must be lots of great problems I've never run into. So I'd like to hear problems that other people have enjoyed, and hopefully everyone will learn some new ones.</p> <p>So: What are your favorite dinner conversation math puzzles?</p> <p>I don't want to provide hard guidelines. But I'm generally interested in problems that are mathematical and not just logic puzzles. They shouldn't require written calculations or a convoluted answer. And they should be fun - with some sort of cute step, aha moment, or other satisfying twist. I'd prefer to keep things pretty elementary, but a cool problem requiring a little background is a-okay.</p> <p>One problem per answer.</p> <p>If you post the answer, please obfuscate it with something like <a href="http://www.rot13.com/">rot13</a>. Don't spoil the fun for everyone else.</p>
Douglas S. Stones
2,264
<p>You have a glass of red wine and a glass of white wine (of equal volume). You take a teaspoon of the red wine and put it in the glass of white wine and stir. You then take a teaspoon of the white wine (which now has a teaspoon of the red wine in it) and put it in the glass of red wine and stir.</p> <blockquote> <p>Which glass has a higher ratio of (original wine)/(introduced wine)?</p> </blockquote>